Wednesday, November 21, 2012

It's all just a little bit meta

So I am a terrible blogger.
Good thing no one is really interested in all the draft posts I never put up, I guess.

I'm working on meta concepts right now.
I have always loved compiler compilers, rules engines, tokenisers and lexical analysis.
But Ruby has given me the power to really indulge these fetishes, while also allowing me to generate complex applications from spreadsheets and, as an added benefit, granting migraines to anyone who looks at the code.

Ahh bliss...

I like to describe writing in Java as being a bit drunk.
You feel smarter and more charming than you really are, take chances you shouldn't and too much makes you want to throw up.
While everyone else thinks you're some obnoxious, rude idiot.

Well Ruby is like being stoned.
You feel like you are going through this mind-expanding experience, everything is beautiful, you speak in clichés, and too much makes you deeply paranoid and a little bit stupid.
While everyone else just thinks you're a stoner.

Ruby is like swimming, eating ribs or sex.
You'll never really get it from theory or from hearing about others' experiences.
You need to jump straight in.
And like those things, at first you'll be scared, short of breath, overwhelmed and generally make a mess of it. Yet strangely intrigued.

Then it'll just click, and you're hooked.





Friday, September 28, 2012

Like Lazarus I'm back from the dead.

So when I last posted I'd done a bit and was investigating P2P and compiler compilers.
Then I went quiet.

What a terrible few months.
My wife started work again (ok, not terrible, great actually), my project took off (again, pretty great), Diablo 3 came out (thought it would be great, actually was terrible) and I and my family have been very ill the entire time.

I'm so sick of sickness.

Everyone is getting better, my project cleared 'ST' yesterday (best feeling in the world for a lead) and I am turning 33 in 2 days.

So I figured I'd get back to blogging.

So I've been considering engines, transport layers and the event model in current logic.

And I've decided my first step is to build an asynchronous, ad hoc process call bus.

I say bus because it won't be so much a stack anymore. More like a bus, where the context gets on, rides for a while, and then gets off at the correct place, as declared by its operating function.

Kind of like functional coding but with side effects driven by declarations in objects, triggering procedures.

Don't worry if it sounds like chaos and madness. The trick will either work or it won't, but it'll be fun.
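If that sounds abstract, here's a toy Java sketch of how such a bus might look. To be clear, the CallBus name and the whole API are just my illustration of the idea, nothing here is a settled design: a context rides past a series of stops and is handled wherever its declared attributes match.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Toy sketch: contexts ride the bus and "get off" at stops whose
// declarations match them, triggering that stop's procedure.
public class CallBus {
    private final List<Map.Entry<Predicate<Map<String, Object>>,
                                 Consumer<Map<String, Object>>>> stops = new ArrayList<>();

    // Declare a stop: a predicate saying which contexts get off here,
    // and the procedure to run on them.
    public void addStop(Predicate<Map<String, Object>> when,
                        Consumer<Map<String, Object>> then) {
        stops.add(Map.entry(when, then));
    }

    // Ride the bus: the context visits each stop in turn and is handled
    // wherever its attributes match the stop's declaration.
    public void ride(Map<String, Object> context) {
        for (Map.Entry<Predicate<Map<String, Object>>,
                       Consumer<Map<String, Object>>> stop : stops) {
            if (stop.getKey().test(context)) {
                stop.getValue().accept(context);
            }
        }
    }
}
```

Unlike a call stack there is no caller waiting for a return; the context just carries its state from stop to stop.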

Monday, July 16, 2012

Illness, beer and steam

So yeah, I did a bit of that stuff.
But due to a mate's party to use up home brew, not as much as I'd hoped.
This weekend my son got sick and Steam is having a sale.
So I didn't get much coding done.
I also didn't get much time to work on a post over the weekend.

But I'm having wicked writer's block right now on a couple of parsers, formatters and conditional logic engines.
So to clear the tubes I thought I'd write a quick post on something other than the project I have done a lot of work on.

I am talking about compiler compilers, or if you would prefer, rules engines, or code that generates rules engines, or ETL, or machine learning... Look, a lot of things are around which are all essentially the same thing.

What is this? Well, this is a system, logic of a sort, that generates other logic based on arbitrary source data.
The source data could be anything, it could be rules, airline fares, server logs, usage logs, transaction records, data about something or some one, data structures to be processed, something completely different or a combination of these.

So we have our source structures and we want to drive behaviour based on them. The process of doing this is actually easy. We build a framework to construct a working logical structure made up of the various tokens in the source data. In early projects I found that simple lexical analysis worked well for creating these structures; since then I've found that contextual analysis often allows for greater flexibility in this construction.

Now we assign a behavioural value to the structure; this behavioural value will relate to what we actually do based on the model.
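To make those two steps concrete, here is a tiny illustrative sketch. The class, the "field:type" source format and the behaviours are all made up purely for this example: a lexical pass splits the source into tokens, and each token type then carries a behavioural value.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of the two steps: tokenize source declarations into a
// structure, then assign each token type a behavioural value.
public class RuleCompiler {

    // Behavioural values keyed by token type; here the "behaviour" is
    // just formatting a sample value.
    static final Map<String, Function<String, String>> BEHAVIOURS = Map.of(
        "int",   v -> "number(" + v + ")",
        "alpha", v -> "text(" + v + ")"
    );

    // Lexical pass: split "name:type" declarations into tokens.
    public static Map<String, String> tokenize(String source) {
        Map<String, String> tokens = new LinkedHashMap<>();
        for (String decl : source.split(",")) {
            String[] parts = decl.trim().split(":");
            tokens.put(parts[0], parts[1]);
        }
        return tokens;
    }

    // Drive behaviour from the structure: look up the field's token type
    // and apply that type's behavioural value.
    public static String apply(String source, String field, String value) {
        String type = tokenize(source).get(field);
        return BEHAVIOURS.get(type).apply(value);
    }
}
```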

Well, my writer's block is going... But so this is more than mindless babbling, here's an example.
A simple one.

Let's say you want to build a web application to, I don't know... catalogue your son's Skylanders collection.
But you're lazy and you just want to design the page with what it looks like and nothing else.
In this case we use two special elements, coreValue and coreValueRepeat. The existence of these elements indicates the data needs to exist.
We construct an in-memory structure, database, view page, edit page, create page, search page and summary page from how these are used in the one HTML file.

And our input? Simply:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
        "http://www.w3.org/TR/html4/loose.dtd">
<html xmlns="http://www.w3.org/1999/html">
<head>
    <title><coreValue src="name" type="alpha"/></title>
</head>
<body>
<div class="Title"> <coreValue src="name" type="alpha"/>
</div>
<div class="Description">
    <h1>Description of <coreValue src="name"/></h1>
    <p><coreValue src="ours.hasIt" type="boolean"/> <coreValue src="ours.maxLevel" type="boolean"/> <coreValue src="ours.fullUpgrades" type="boolean"/></p>
    <p><coreValue src="description" type="alpha"/></p>
</div>
<div class="Stats">
    <h1>Statistics for <coreValue src="name"/></h1>
    <p>Power: <coreValue src="statistic.power" type="int"/></p>
    <p>Defense: <coreValue src="statistic.defense" type="int"/></p>
    <p>Speed: <coreValue src="statistic.speed" type="int"/></p>
    <p>Luck: <coreValue src="statistic.luck" type="int"/></p>
</div>
<div class="Abilities">
    <h1>Abilities for <coreValue src="name"/></h1>
    <coreValueRepeat src="abilities">
        <p><b><coreValue src="name" type="alpha"/></b></p>
        <p><coreValue src="description" type="alpha"/></p>
        <coreValueRepeat src="upgrades">
            <p>Name: <coreValue src="name" type="alpha"/></p>
            <p>Description: <coreValue src="description" type="alpha"/></p>
            <p>Cost: <coreValue src="cost" type="int"/></p>
        </coreValueRepeat>
    </coreValueRepeat>
</div>


</body>
</html>

From this file the building blocks for the entire web application are present; as such our compiler compiler will build the application for us.
There are many other ways we can use these techniques...
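As a sketch of the very first step such a generator might take, we can scan the template for coreValue elements and recover a field-to-type schema, from which the tables and pages would be generated. The scanner below is illustrative only (a regex stands in for real contextual analysis, and the default type of "alpha" is my assumption):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: recover a field -> type schema from coreValue elements in a
// template, the raw material for generating storage and pages.
public class CoreValueScanner {
    private static final Pattern CORE_VALUE =
        Pattern.compile("<coreValue src=\"([^\"]+)\"(?: type=\"([^\"]+)\")?");

    public static Map<String, String> schema(String template) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = CORE_VALUE.matcher(template);
        while (m.find()) {
            // First declaration wins; elements with no type default to alpha.
            fields.putIfAbsent(m.group(1), m.group(2) == null ? "alpha" : m.group(2));
        }
        return fields;
    }
}
```

Run over the Skylanders template above, this would yield name, description, the ours.* booleans and the statistic.* ints, enough to derive a table definition and CRUD pages.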

Friday, July 6, 2012

Grand plans and simple beginnings

So all this sounds great but how are you actually going to do this I hear you ask..

Well, I'm a big fan of not attempting to drink the ocean, figuratively speaking, so I'm going to start small and build a simple proof of technology first.

Like with all programmes I plan on starting with hello world.

In aid of this I plan on building a simple test framework from which I can work on the larger application.

This framework will be using JFrames for the user interface. This is largely because, as a JEE architect, I'm sick of the web, and writing this as a fat client sounds like a bit of fun.


But what is the framework actually testing?

Well, in order to have a flexible cloud-based, crowd-sourced application I need to build it around a communication framework which will support this approach effectively.

As a result it will be testing the framework built around distributing functionality across multiple nodes using P2P.

Of course you probably guessed this when I mentioned JXTA in the previous post.

So my hello world.
I plan on three phases this weekend.

Phase One
The first phase is quite simplistic.

In the test framework I plan on using two nodes; both on local host.
Both have their own listening port that the other knows about.

When one communicates with the other it binds to the listening port.
The connection manager on that node will create a handler thread for that connection request and pass the connection on to the handler.
The requesting node will then pass its payload on to the handler thread and disconnect.

For the first phase the payload will simply contain the arguments for the method, in this case a string to display on the UI of the receiving node.
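A minimal sketch of what phase one might look like in Java sockets. The Node class, the port handling and the Consumer for the UI display are my placeholder choices, not the final framework:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.function.Consumer;

// Phase one sketch: each node listens on its own port, a connection
// manager thread hands every incoming connection to a handler thread,
// and the handler reads the string payload the requesting node sends.
public class Node {
    public static void listen(int port, Consumer<String> display) {
        try {
            ServerSocket server = new ServerSocket(port);
            Thread manager = new Thread(() -> {
                try (server) {
                    while (true) {
                        Socket conn = server.accept();
                        // Connection manager: one handler thread per request.
                        new Thread(() -> {
                            try (conn;
                                 BufferedReader in = new BufferedReader(
                                     new InputStreamReader(conn.getInputStream()))) {
                                display.accept(in.readLine()); // the payload
                            } catch (IOException ignored) { }
                        }).start();
                    }
                } catch (IOException ignored) { }
            });
            manager.setDaemon(true);
            manager.start();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The requesting node binds to the listening port, passes its
    // payload on, and disconnects.
    public static void send(int port, String payload) {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(payload);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

In the real framework the display callback would push the string onto the JFrame; here it is just a Consumer so the two-nodes-on-localhost round trip can be exercised directly.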

Phase two
Once I have set up my two nodes and proven they can message each other effectively, phase two will be to add brokering logic to the framework.

I will programme the nodes to look for annotations on the methods; if a method has the appropriate annotation on it, it and its arguments will be registered against the method broker.

The payload from the calling node will be augmented to include the method it is calling as well as the arguments as name value pairs.

My display method to display the string on the UI will be suitably annotated and the brokerage system can be tested out.
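Sketched out, the brokering might look something like this. The @Exposed annotation, the broker class and the EchoService are illustrative names only; the real framework would be fed the payload off the wire rather than a direct call:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Phase two sketch: annotated methods are registered against the broker
// by name, and an incoming payload of method name plus arguments is
// dispatched by reflection.
public class MethodBroker {
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Exposed { }

    private final Map<String, Method> registry = new HashMap<>();
    private final Object target;

    public MethodBroker(Object target) {
        this.target = target;
        // Look for the annotation and register each exposed method.
        for (Method m : target.getClass().getMethods()) {
            if (m.isAnnotationPresent(Exposed.class)) {
                registry.put(m.getName(), m);
            }
        }
    }

    // The payload names the method and carries its arguments.
    public Object dispatch(String method, Object... args) throws Exception {
        return registry.get(method).invoke(target, args);
    }
}

// A hypothetical service node exposing one suitably annotated method.
class Services {
    public static class EchoService {
        @MethodBroker.Exposed
        public String display(String text) {
            return "shown: " + text;
        }
    }
}
```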


Phase Three
P2P lends itself quite effectively to call backs and call back like logic.

Given its fabric-like qualities, forcing it into a request/response type structure would be pointless.

Phase three will be about adding callback support and including logic in the framework to make the application think it is using request, response logic when it isn't.

I'm not sure of the best way to talk through how I'm going to do this. It's a bit of a chicken-and-egg issue. So forgive me if this is a bit (more than usual) confusing.

Pure call backs:
When this is a pure call back situation the implementation approach is simplest. The call back method will be a generic method taking in the actual callback method name and arguments as final values to inject into the method; the signature will be varargs, which will allow the generic callback method to be as flexible as possible. This will connect to the other node in the pair, sending its payload in the same manner the node in question received its payload earlier.
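The generic varargs entry point might look something like this trivial sketch. In the real framework it would serialise the payload and send it over the connection; here it just builds the payload so the shape is visible, and the delimiter format is entirely made up:

```java
// Sketch of the generic varargs callback entry point: one method that
// names the real callback and forwards its arguments as the payload.
public class Callback {
    public static String invoke(String methodName, Object... args) {
        StringBuilder payload = new StringBuilder(methodName);
        for (Object a : args) {
            payload.append('|').append(a); // toy wire format
        }
        return payload.toString();
    }
}
```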

Call backs when the receiving node is using methods with a return:
Of course in many situations it's easier to write your methods just to return without worrying about call backs. This will ensure you don't need to worry so much about the stack; it will also make your life a little less complicated if you are using recursion.

In the case where the receiving method returns a value, that value will be used to populate the callback by the handler.
If the callback argument type does not match the return value, the handler will send the sending node an error when the payload is provided.

Call backs when the sending node wants a blocking method with a return value:
The reverse is also true. In many cases you want the code to block and wait for a return value; additionally, fragmenting the logic flow into multiple methods linked only by call backs would make the code unmaintainable.
In this case we will implement in the framework a wrapper.
This wrapper will take the expected method name and the collection of arguments (provided as varargs), and return a single Object.

It will then register a temporary method in the method broker. It will also create, against a special map for the temporary method, a look-up value containing the thread details and a place to store the return value.
The wrapper will then invoke the receiving node, sending the temporary call back method in its payload, and sleep.
Whenever it wakes up it will check whether the data against the temporary look-up value has been returned.
If it is never returned it will eventually time out; otherwise, once it is returned, the wrapper will take the value, remove the method reference from the stack, de-register the method from the broker and return the value to the calling method.
The temporary call back of course will be marked as such. The handler, on seeing that the method is temporary, will not attempt to invoke it as it would normally. Rather it will store the value against the temporary method in the stack and use the thread reference to wake the sleeping thread.
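The wrapper idea can be sketched with modern concurrency primitives standing in for the raw sleep/wake bookkeeping. All the names here are placeholders, and a CompletableFuture replaces the hand-rolled thread map purely for brevity:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Phase three sketch: the caller registers a temporary callback slot,
// invokes the remote method, and blocks until the handler delivers the
// return value or the wait times out.
public class BlockingCall {
    private static final Map<String, CompletableFuture<Object>> pending =
        new ConcurrentHashMap<>();

    // Caller side: register the temporary method, send, block for the result.
    public static Object call(String tempId, Runnable send, long timeoutMs)
            throws Exception {
        CompletableFuture<Object> slot = new CompletableFuture<>();
        pending.put(tempId, slot);      // register the temporary method
        try {
            send.run();                 // invoke the receiving node
            return slot.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(tempId);     // de-register from the broker
        }
    }

    // Handler side: a payload addressed to a temporary method is not
    // invoked; its value is stored and the sleeping caller is woken.
    public static void deliver(String tempId, Object value) {
        CompletableFuture<Object> slot = pending.get(tempId);
        if (slot != null) {
            slot.complete(value);
        }
    }
}
```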

I'm sure to most this may seem like I'm playing it safe and setting my goals low.
However my in-laws are in town, so I'm not actually going to have a lot of time to spend coding.
Thus the three easy phases.

Of course once I have this in place I'll be much better positioned to be more adventurous next time.

Saturday, June 30, 2012

Sometimes an idea had its day, yet it never came. Instead the idea died cold and alone


So my mad idea.

Ok, before the whys and wherefores let's get the explanation of the buzzword bingo out of the way.
"Virtualized, crowd sourced, social media cloud" just sounds like I was grabbing the hottest memes and clumping them together.

I wasn't, of course; if I was it would have been a synergistic, virtualized, crowd sourced, social media cloud (wait, I like that even more... err, not doing my credibility any help here so I might move on).

People who know me a little better may think I was grabbing concepts I really like and forcing them into a 'simple' paradigm.

I wasn't doing that either; if I was, it probably would have been a deterministic, self-learning, neural-net-clustered, grid, virtualized, crowd sourced, social media cloud (damn, I need to focus here).

Like with most things tech focused, I've thought this through, and I only have one problem with the concept; I'll discuss that briefly at the end.

So, as a distributed application creator I like my systems componentized, so let's walk through the buzz words in order of my love of them.

The Cloud
I love the idea of the cloud.
As a software developer who builds applications for an insane amount of people the idea appeals to me.
Like with most things, what I love about it probably isn't common.

When I started programming in the mid 80's it was simple.
You wrote a programme.
You compiled and linked it (spending hours being punished by make), then you ran it.
Simple right?
Painless even.
Anyone who was able to understand how to do it could do it.
And letting others use it was even easier: post the binary on a BBS, then later a public FTP server, post the details on a forum or Usenet group, and you were good to go.

Dash forward to distributed applications. You write some code, write the unit tests, compile the WAR (spending hours being punished by Maven) and deploy it to JBoss, Jetty or whatever.

But now no one can access it unless you set up an environment able to support providing functionality to a hostile internet.

Now knowing how isn't enough, you also need a huge infrastructure.

Of course we all have permanent connections now, but the client-server model requires static IP addresses. Additionally the computer would need to be up constantly, and opening up your NATted firewall exposes your computer to security risks.

Onward comes the cloud.
For a small fee you can now expose that programme to the world. Great huh?
No fuss and no risk.
Thus the beauty of the cloud.

Virtualization
I also love virtualisation.
Virtualisation is just a tech term for an emulator. Like playing a SNES game on your 'droid, only for production servers.
The idea is you have a container and in that container is a simulation of the server in question.

The idea can be used to break the hard link between the number of physical, real computers you have and the effective number of computers you have.

This means a super computer can act like 1000s of low range computers.
Or 1000s of low range computers (or maybe just fragments of them) act like a super computer.

Crowd Sourcing
And who doesn't love crowd sourcing. It brought us Linux, YouTube, Kickstarter and lots of other great things. Crowd sourcing is the idea that a thousand monkeys can write Hamlet, or at least fund the next Shakespeare.

We knew for a long time that a lot of people in one place could unleash the worst in people.

Read YouTube comments some time if you're not sure what I mean.
But crowd sourcing proves it can also unleash the best.



Social Networking
And how do you bring a crowd together, getting people to find others who want the same goal, even if they are on the other side of the world?
Be it via forum, feed, mailing list, or website, it's going to be a social network of one description or another.

Virtualised, crowd sourced, social media cloud

So how it comes together.
Well, we have a technology called virtualisation which allows us to build an entity that looks like a complete node of a certain type regardless of what it is underneath.
A concept called the cloud that in many implementations is a flexible network of interacting nodes.
A movement called crowdsourcing which brings a lot of people together for common goals.
A tool called the social network which allows people who share common goals to find each other.


These things fit together so well I'm shocked no one thought about combining them before.
You use a cloud like flexible platform, to power a virtualized node, made up of fragments provided by the crowd brought together for a common goal by social networks.

The problem
Except they did...

And failed..

It's called JXTA and it was a great idea.

Someone actually thought of this over 11 years ago.
Thus my comment that this is the greatest idea that never was.
It should have worked and it should have been wonderful.
But it wasn't..
At the start I promised I'd explain my one problem, and that is why didn't this work?

It's a great idea which someone has had before, so why isn't JXTA & co a huge open source monster along the lines of Google?

Luckily for me, the idea of this little project was to do something technically cool rather than enterprise focused, wasn't it?


Join me next time as I start to follow in the footsteps of the ill-fated JXTA team.
I'll also discuss P2P versus client-server, REST, and options around loose coupling.

Programming for the enterprise; kinda like failing down the rabbit hole


So I started this blog a while ago, posted two times and stopped.

I stopped for a couple of reasons, but distraction is probably the best one to talk about on the internet.

That said recent events have motivated me to start again.

Like the Dirk Gently character in the recent UK TV adaptation of the detective, I too think better when I show off.

So like the would-be actor who performs in front of the mirror, I will show off on a blog that it is unlikely anyone will read.

So why the need to show off you may ask? It helps me think.

The deeper question is what do I need to think about that a consulting firm isn't charging me out for? That's a longer story...
I've always been very good with computers. When I completed uni I had already been working in the industry for a while as a contractor. Somehow, and I'm not sure how, I fell into specializing in big, high-volume, commercial distributed applications. Or J2EE (that is JEE now).

I cannot tell you how it occurred; I mean, I was a C Unix programmer whose best idea of fun was to rebuild the TCP/IP stack from scratch or write printer drivers to convert mainframe machine code to PostScript. I shouldn't have even been touching Java, let alone been let near the web.

I guess I fell in with a bad crowd; before I knew it I was writing middleware for IBM, explaining how web services were not the same as SOA and debating the benefits of REST over SOAP.
From that point I was further dragged away from my purest ideals into full blown distributed development.
Oh the shame.

But the thing is I don't feel like just a J2EE developer (ok, Designer (ok, Chief Designer (ok, Architect (ok, Senior/Lead Architect (I was informed I was one of these on Thursday) but I like to believe I'm still a developer at heart)))).
I still feel like the same wild developer who wrote his own memory management routines, hacked his Windows 95 install to report itself as 'Luke Rocks' and was a general coding legend.

So I'm planning on doing stuff that I'm not paid for and that does not revolve around the web at all (well, maybe a little).
But back to the showing off. Doing this on my own would be lonely and feel empty without an outlet (even a fake one like this).

Sure I could show off to my family, but my wife doesn't appreciate hearing about my exploits in code... She tolerates it...
And my boys, well, Jacob will want to know why I'm doing my pet project when I could be building him mods in Minecraft.
So it's going to have to be you...


And my great pet project?
The greatest idea that never was, an idea that died a death before it had a chance.

Creating a virtualized crowdsourced social network cloud.
Seriously...
.
.
.
.
.
.
OK, it's been a couple of minutes, you can stop the screaming now...

I know what you are going to say; argh, you can't just throw together the latest buzz trends thinking it'll magically work.

Of course I'll do what I always do; I'll grin my biggest, shark-faced, smug grin and say


"But I already have and this is how it is going to work...."

Sunday, September 5, 2010

I.P. addresses and obfuscation

One issue that I have been thinking on is I.P. addresses.
Specifically, how to obfuscate them. Now, I happily use Google applications and Facebook; I believe it is up to the user to define just how much privacy they give up, and as a result I have no problem with the monitoring and analytics those sites perform.

However, a user's I.P. address can be used to trace their activity regardless of any explicit choice, and that concerns me. The fact there isn't a simple solution bugs me.

The reason you can't simply obfuscate your I.P. address is the same reason you shouldn't give someone you want to call you a fictitious number: none of the servers you are addressing will be able to send responses back to the correct address.

Of course that constraint is a hard one to beat.
The question is how we can ensure, when we bind to a socket on a site, that the responses in that connection return to our client, yet ensure that the server is unable to trace the source I.P. address of the client.

Now the common answer to that is using proxies, which, while simple, is not true obfuscation. In addition, arguably they move the traceability to a central choke point. By that I mean the proxy or VPN server can trace and monitor the user's behaviour to a greater degree than any other mechanism.

What I am discussing is a way to ensure the functional integrity of the I.P. address for the transport layer, but the obfuscation of the address for all layers above it.
To do this requires changing the TCP/IP stack on the sender/client. However, the question of exactly what to change is still open.
There are protocols which use encryption of the various header fields, including the address field, for communication between a client and a trusted server. But what actions are available when the server is not trusted?

Within the IP datagram, the 12th to 15th bytes (counting from zero, as a C programmer does) contain the identification of where the client is, that is, the source address. Without it, responses can not be sent back; with it, the client can be traced.
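As a sketch, pulling the source address out of a raw IPv4 header at those offsets looks like this (the class is illustrative; in practice you would be working at the raw-socket or driver level, not in application code):

```java
// Sketch: extract the source address from a raw IPv4 header.
// Bytes 12-15 (zero-based) hold the source address in an IPv4 header
// without options.
public class Ipv4Header {
    public static String sourceAddress(byte[] header) {
        // Mask with 0xFF because Java bytes are signed.
        return (header[12] & 0xFF) + "." + (header[13] & 0xFF) + "."
             + (header[14] & 0xFF) + "." + (header[15] & 0xFF);
    }
}
```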