Monday, 17 November 2014

Have an old router and a wireless blind spot?

Ever had one of those situations where you have your wireless network set up, but it falls just a little short of covering your house? Sure, you can go to your local computer shop and pick up a wifi repeater (or wifi extender, as they're also called) - in fact these devices are so slick, you can set everything up with pretty much the click of a button - but hey... where is the fun in that?

With the new wireless AC protocol in place, I thought it was time to upgrade from my wireless G router... I mean, it's supposed to be much faster and go further, right? Sadly, I was mistaken. Yes, it is much faster, but even though the product promised a wider range, it pretty much covered the same distance as my old G.

Luckily, not all was lost. It turns out I could still salvage my old wireless G router and convert it into a wireless repeater! This does require some time and effort though - but it will save you $30, and it's much better than throwing out an old piece of equipment that is otherwise fine.

Right, on to business. I'll say outright that if you decide to mod your router, there is a chance you might not be able to bring it back to life... but you were throwing it out anyway, yeah?

The name of the firmware is dd-wrt. Their website has a list of routers that they have firmware builds for.

The part I got stuck on was looking for a WAN IP. I figured that if I am bridging two routers, I should be creating a subnet with its own configuration, different from my original wireless network. Turns out that is not the case - in fact, there is no WAN to connect to at all. This is most crucial.

Note that in most cases, you want to take your existing wireless network, and simply put your extender in a place where it picks up the signal from the original router and extends it. This is called a *Repeater Bridge*. This is exactly the option you want in the configuration. Nothing else really worked for me.

I am attaching the steps from the dd-wrt wiki page. These must be followed verbatim.

The results are actually pretty good. I have been using the router for a while now. Except for the occasional drop in signal quality, I didn't have any trouble browsing the web and doing my work. I haven't, of course, put this to the test with, for example, large-scale file copying - I do expect a performance lag there.

Thursday, 13 November 2014

PostgreSQL - crosstab function - rows to columns, anyone?

Recently, I was tasked with enhancing a report for our system. To set the stage, here is a little background you need to know.

Customers are registered in the system and have exactly one profile associated with them. The profile contains a bunch of answers to certain questions. Up until now, these questions were static - so conceptually, you have one column for the answer to every question. Pretty simple, right?

profile1 - answer1 - answer2
profile2 - answer1 - answer2

However, now the questions are asked *dynamically*. The set of questions can grow or shrink. All of a sudden, that one neat little profile had to contain a collection of answers - enter a one-to-many relationship.

Now, about that report: the columns contain the answers, which in the past was just a matter of adding the relevant answer to the SELECT clause. That would produce a column in the result set, and that would be that.

However, I can't do that any more. Once I join the answers to the profile, because of the one-to-many relationship, I get many result rows for the same profile.
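
A sketch of that join, with hypothetical table and column names (`profile` and `answer` are stand-ins for the real schema):

```sql
-- Hypothetical schema: profile(id), answer(profile_id, question_id, value).
-- The one-to-many join yields one row per (profile, answer) pair:
SELECT p.id, a.value
FROM   profile p
JOIN   answer  a ON a.profile_id = p.id
ORDER  BY p.id, a.question_id;
```

which gives output shaped like this: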

profile1 - answer1
profile1 - answer2
profile2 - answer1
profile2 - answer2

Yikes! I can no longer just iterate over my result set and put answers in their respective columns.

So there are two solutions to this problem. The easy one would be not to join the profile to the answers - rather, loop through each profile, fetch the answers for that profile, and then use those for the remaining columns.

But wouldn't it be nice if I could just take those stupid rows and transpose them into columns? Enter crosstab. This confused me at first, so let me break it down the way I understood it.

I am going to borrow the example from Postgres's official documentation.
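
The embedded example seems to have gone missing, so here is a reconstruction along the lines of the docs' sales example (table and column names are assumptions), pivoting monthly quantities into one row per year:

```sql
-- Requires the tablefunc extension: CREATE EXTENSION tablefunc;
-- Source rows look like (year, month, qty); we want one row per year
-- with a column per month.
SELECT *
FROM crosstab(
  $$ SELECT year, month, qty
     FROM   sales
     ORDER  BY 1, 2 $$
) AS ct (year int, jan int, feb int, mar int);
```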

The key thing to remember is that 3 things are required:

1. The unique identifier - This is the guy that represents each unique record in our new transposed result set. Think of this as what you would put in GROUP BY if you were performing aggregations. You can also think of it as our seed value. In the above, we want everything *GROUPED BY* year.

2. The categories you want transposed - These are the buckets that are coming up in your pesky rows instead of columns! Note these are not the values of the buckets - they just uniquely classify what should be transposed. In my business case, I wanted the questions to be transposed. In the above example, we want the months to be transposed.

3. The value of that category for each unique identifier - This makes sense, yeah? We have a bucket for every record, and we want its value. In the above, it's the quantity for the corresponding month. In my case, it is the answer to the question (bucket) for each profile.

Notice how you need to have a 1-1 relationship between your bucket and answer.

The rest is all syntax really, which I will let you read from the postgres docs.
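
For completeness, applied to my business case the query would look roughly like this (the `answer` and `question` table names and columns are assumptions, not our real schema):

```sql
-- One row per profile, one column per question.
-- The two-argument crosstab(source_sql, category_sql) takes a second
-- query listing the categories, which keeps columns aligned even when
-- a profile skipped a question.
SELECT *
FROM crosstab(
  $$ SELECT profile_id, question_id, value
     FROM   answer
     ORDER  BY 1, 2 $$,
  $$ SELECT id FROM question ORDER BY 1 $$
) AS ct (profile_id int, answer1 text, answer2 text);
```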

Monday, 10 November 2014

Review - VMWare's ESXi 5.5.0

In my most recent project, when we first started out, our infrastructure demands were fairly modest. Really all we needed was a build server (we used Jenkins), a Maven Repository (Nexus), a DEV environment for daily deployments (database + application) and a QA environment for occasional releases (again database + application).

Since we did not anticipate much use for hardware, and we had plenty of good old AMD Athlon X2s available, we just started to use them. Life was good... I mean sure the servers were slow... but they weren't too shabby either.

And then our demands grew... well more like the *hardware* started dying out for some reason. After 3 such occurrences (you'd think one would be enough eh?), we knew it was time for a proper server. And with the *Cloud* and *PaaS* buzzwords floating out there, why not virtualize our own little platform in house?

Okay wait, that wasn't really our reason for virtualization. We wanted our new environment brought online ASAP, preferably without having to reconfigure all our servers.

What we really needed was a way to move our physical boxes into virtual servers (p2v). Our first attempt was to use Xen as our platform. Unfortunately, Xen decided to charge customers for the converter... grr... too bad, actually, because Xen is pretty lightweight and easy to work with (or at least that is what I felt at the time).

We could still work around the problem, by figuring out other ways to virtualize our physical boxes, but again, we wanted something quick.

In comes ESXi. Using VMware vCenter Converter, converting physical machines to virtual ones could not be easier. The only catch was that you needed to be *Administrator* to run the converter... a small price to pay. You also need to be administrator on the boxes you want to convert... which makes sense if you think about it.

So now all our boxes (Windows and Linux) are ported over. To manage them, however, we needed to install the vSphere Client (which is 385MB!). With Xen, on the other hand, you can perform all configuration through their web client. Again, is it really a big deal... probably not. ESXi is remarkably similar to VMware Player... so once you have everything up and running, it's pretty easy to work with.

Thursday, 13 February 2014

WebSockets - An Unexpected Journey

If you're like me, you're always looking at what is going to come next in the web world, and given an opportunity, will try to bring life to it.

When I was told to do a POC on WebSockets under Grails... I lit up like a bulb. Finally, I was going to work on the fringes of technology and do something really cool... I should humbly mention that I've almost always been successful in putting POCs together... unless something is really off.

However, what started out as awesome turned sideways pretty quickly. To make a long story short - it's just not ready. There is a huge lack of framework integration... I primarily work with Groovy/Grails... and they have 3 different plugins that hook into Atmosphere... and *none* of them work perfectly.

After 2 days of fighting, I was finally able to get WebSockets working asynchronously with NIO. Very nice. Finally, I thought, a working prototype.

Then came my good friend Apache httpd. WebSocket support is native, but only from version 2.4! At the time of writing, this is the latest version, so you won't get a binary... which means you build from source... which means resolving some very nasty hidden dependencies.

I was going to show code on how I got it working, but I realize that this technology will most definitely evolve and become mainstream. After falling back, I tried good old-fashioned long polling with simple jQuery... the results are not too shabby.

Bottom line: unless you have a highly time-sensitive application (like real-time tickers), a really high volume of traffic that can potentially clog your server, and complete autonomy over your production environment, I'd stick to old-school polling.

Saturday, 11 January 2014

Using Gist

Since I've been writing posts, one of the things I've struggled with is putting code straight into the blog. I did a little research and found others facing the same problem. A lot of people recommended Gist, so this is me just trying it out.

Since I'm primarily focusing on Grails and SQL these days, I'll test out some SQL, Groovy and GSP code. First off, the SQL:
Next the GSP:
Finally some Groovy:
The verdict: FULLY AWESOME!