Archive: Technical

Facebook's next big opportunity : Analytics

In my current term at ISB, I am taking a course on “Leveraging Social Media and Analytics”. It’s a very interesting course, and it includes a project where we take a deep dive into one company’s Google AdWords, Google Analytics and Facebook Ads data.

Now, Google Analytics is brilliant at letting users see who is visiting their websites, where visitors come from, what they do there, and so on. It’s a very powerful tool, and since it integrates tightly with Google AdWords, the two form a one-two punch that is a big selling point for Google.

Now enter Facebook, with their Ads. The biggest thing missing, from FB’s point of view, is data on how useful their ads are, how many conversions you get, and so on. It is still possible to figure this out by correlating FB’s ad data with Google Analytics, but that is a huge pain point for FB and for FB’s advertisers. So why doesn’t Facebook offer something like Google Analytics?

Well, you might say, Google Analytics is the biggest one out there, and people have to put a code snippet in their websites to track usage; they won’t install a second one, or won’t take the hassle.

But think about this: Facebook already has its code snippets in most websites, through Like buttons, Share to Facebook buttons and who knows what other buttons. All it would take is for them to include a tracking and analytics snippet as part of these buttons. Suddenly, you realize that their tracking code could already be present in a gazillion-odd websites, ready for analytics.

All Facebook needs to do is turn it on, and link to Facebook Analytics and voila : Facebook Analytics could have a huge installed base right off the bat!

Now, this is all speculation, just a thought I had. Crazy? Logical? What do people think?

APIs and what not to do

APIs seem to be like opinions. Everyone has one, and no two people have the same concept of what constitutes a good one. An API is supposed to be an interface that is exposed for other programs or programmers to use to interact with your code. Except, each API, like an individual, is unique with its own flaws and niceties. A great API is one which reduces the amount of code you have to write when you use it. I personally feel amazing if I can get something done with minimal code. That just screams “GOOD API” to me.

On the other hand, a bad API leaves you feeling dirty, unclean even, as if you are committing grave sins against nature even by just using it. Here are a few common mistakes which end up leaving that bad taste in your mouth (with examples, of course!) :

Bad APIs

These are the worst offenders: the APIs which are supposedly there to make your life easier, but end up making it more work to use them than to rewrite the functionality from scratch. I faced one of the bigger offenders of this kind recently when I was working with GWT, trying to create a tree structure to represent a navigation hierarchy, when it dawned on me how much work the API demanded.

A GWT Tree is created by creating a Tree object, and then a TreeItem for each node. To append children to a node, you create further TreeItems and add whatever text or elements you want to them. So to summarize: even if I already have a data structure representing my tree (which in most cases I do), I have to traverse it manually, create TreeItems, tell each one how to render itself, and then append it to the correct parent. Yuck.

Now consider how JFace creates a tree (which I consider a much more powerful and nicer API altogether). You create a TreeViewer and set its data source / input. Then you set a content provider, which knows how to traverse your data object and get children / parents. You can also set a LabelProvider, which tells it how to render its data elements. The end result? Nice clean code that I actually feel satisfied about.
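To make the contrast concrete, here is a minimal, self-contained sketch of the content-provider idea in plain Java. The names here (TreeViewerDemo, ContentProvider, LabelProvider, render) are illustrative stand-ins, not the real JFace API (JFace’s actual types are TreeViewer, ITreeContentProvider and LabelProvider):

```java
import java.util.List;
import java.util.Map;

public class TreeViewerDemo {
    // One-method interfaces: how to get children, and how to label a node.
    interface ContentProvider<T> { List<T> getChildren(T node); }
    interface LabelProvider<T> { String getLabel(T node); }

    // The "viewer" walks the caller's own data structure; the caller never
    // builds tree items by hand.
    static <T> String render(T node, int depth,
                             ContentProvider<T> content, LabelProvider<T> labels) {
        StringBuilder sb = new StringBuilder();
        sb.append("  ".repeat(depth)).append(labels.getLabel(node)).append('\n');
        for (T child : content.getChildren(node)) {
            sb.append(render(child, depth + 1, content, labels));
        }
        return sb.toString();
    }

    static String demo() {
        // The caller keeps its own data structure; providers adapt it.
        Map<String, List<String>> tree = Map.of(
                "root", List.of("a", "b"),
                "a", List.of(),
                "b", List.of());
        return render("root", 0, tree::get, String::toUpperCase);
    }
}
```

The caller only says where children come from and how a node is labeled; the traversal and item creation live in one place instead of being repeated at every call site.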

Most of these are the end result of rushed or not well thought out design. Having a concrete use case prior to designing the API should have been enough to scream out “It’s ugly!!!”. My suggestion to prevent this : write a test / use case for anything you start designing, so you can get a feel for how it behaves in action. That should help you avoid a lot of these problems.

Not fully thought out APIs

This one is similar to the previous one, but I think it deserves a section and example of its own. This happens when you almost nail the API, but fail to consider some common uses of it. The biggest offender here, I believe, is the Java List API.

The two most common use cases I have in Java when I work with lists are
1.) Iterating through them to perform some operation and
2.) Filtering the list to get a subset

The second operation is so common that it annoys me that I have to create an empty list, iterate through the original with a for-each, and conditionally add elements to the new list. Now, I realize that Java doesn’t make it easy to pass functions around as arguments (check my older article about this), but what I really want here is the ability to do myList.filter(predicate), where predicate is a function I supply, and which returns a new list containing only the elements matching the predicate.
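As a sketch of what that could look like, here is a hypothetical filterList helper built around a one-method Predicate interface. Neither name is part of java.util; this is the pre-Java-8 workaround (Java 8’s streams later added essentially this operation):

```java
import java.util.ArrayList;
import java.util.List;

public class Filtering {
    // A one-method interface standing in for a function argument.
    interface Predicate<T> {
        boolean apply(T item);
    }

    // Hypothetical helper: what a List.filter(predicate) could have been.
    static <T> List<T> filterList(List<T> items, Predicate<T> predicate) {
        List<T> result = new ArrayList<T>();
        for (T item : items) {
            if (predicate.apply(item)) {
                result.add(item);   // keep only elements matching the predicate
            }
        }
        return result;
    }
}
```

With this in place, a call site reads as a single expression, e.g. filterList(numbers, n -> n % 2 == 0), instead of a four-line loop repeated everywhere.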

There are many more such common operations missing from the List interface, but this is the most egregious one, I believe. Javascript also gets this wrong, but Underscore, a JS utility library, adds a lot of these, which makes working with lists and collections a dream.

Misnamed APIs and methods

How many times have you called a method, only to realize that it didn’t really do what you thought it did? Or look for a method XYZ, only to realize later that it had been named YXZ instead. Raise your hands if you have experienced this. For some reason, an apple for someone almost always turns out to be an orange for someone else.

I’ll switch to bashing JS for this one, Underscore in particular. For all the amazing methods that Underscore provides, they really have a problem with naming. I went looking for a collection.contains method, found only indexOf, and initially assumed they didn’t have one. I mean, if I look for contains, at best I will also look for has or hasKey. Browsing through the list of method names, I might even have accepted includes (though it would not have been my first choice). But never in all my life would I have expected it to be include (yes, that is include, as in singular!). People, what were you thinking????


APIs which lie

The final set of APIs which can annoy (but are easily worked around, just like the previous section) are APIs which lie. These include APIs which don’t do what the function name suggests (no obvious example from open source land comes to mind, thankfully). The other kind is an API whose objects are not ready for use even after construction; most times, it is the case of a lurking init / initialize method. And if you ever see an interface called Initializable, run in the opposite direction.

Is Strong Typing really needed?

This is something I have been struggling with for the last few months. I have had people argue ardently that all strong typing is good for is false comfort and lots of unneeded typing. But I was strong. I was undeterred. I dismissed this as the crazy rants of those JS developers, those dynamic language people who believe that obfuscation and compactness are everything, even at the cost of maintainability. I mean, how could a language where you don’t even know what is getting passed in be in any way better than one where the APIs are explicit and stop you from making mistakes? A dynamic language could work for a single developer, but definitely not for a team. That was my wholehearted conclusion.
Now, I’m not so sure anymore. It’s been 3 weeks since our team made the wholehearted switch. Has it been roses and sunshine? No. But it hasn’t been as bad as I expected. There are a few reasons for that, but before getting to them, I’ll lay down the pros and cons as I see them from my (assuredly very limited) experience :

Benefits of Strong Typing :
  1. Errors / Warnings in your editor
    Simply put, this might be the single greatest benefit of strong typing, and the single reason why most Java developers (a lot of them rightly so) will never even consider leaving its safety. While compilation support doesn’t necessarily go hand in hand with strong typing, most people tend to associate Java with it, so let’s run with that. With strong typing, your editor can (and should; I mean, if you are not going to get immediate feedback, what’s the point?) tell you immediately when you mess something up, whether that is using the wrong variable name, calling a method that does not exist or with the wrong parameters, or using the wrong type of object.

    To a Java developer, an IDE like Eclipse or IntelliJ is a godsend: it tells you what is wrong in your world, lets you jump to the problems, gives you suggestions and autofixes, and generally makes your life as painless as it can. And it is brilliant, I can tell you that.

    In Javascript (or any other dynamic language), everything is fine and dandy for the first 100 lines. After that, it becomes scarily unmanageable. The only way around this that I have found so far is to be super paranoid and write tests for every single line of code. If you can’t do that, stay far far away.

  2. Generics (but this is also a negative, in my opinion, which I’ll get to below)
    The idea behind generics is that they give developers some assurances about the types in a collection (or whatever it is you are genericizing). That way, all operations are type safe, without having to convert to and from different types, and you are assured that a different type of object will not suddenly pop up when you least expect it. But there are a lot of issues with them, which I’ll cover in the second section.
  3. Ability to follow a chain and figure out what type of object is required at each step
    Now this is something I definitely miss in languages like Javascript and Python. The fact that I can trace (in my IDE, note that part) the type of each variable / method call in an expression chain is simply amazing, especially when you are working with a new codebase. You never have to wonder what the parameter types of the method you are calling are. You don’t have to wonder what methods are available or visible. You just know (again, assuming you are using an IDE; if not, god help you).
  4. Refactoring

    The biggest advantage of strong typing, though, in my opinion, is that it enables IDEs which make refactoring a breeze. Renaming a method / variable? Trivial. Moving or extracting a method? A simple key combination. Tasks which can be extremely tedious and mind numbing are accomplished in a matter of minutes. (Want to know more about these shortcuts? Check out Eclipse shortcuts.) This is simply not possible with languages like Python and Javascript.

Disadvantages of Strong typing :
  1. More concise and precise, less typing
    Dynamic languages do tend to be more dense; it is much easier to accomplish in 10 lines what can easily take 50-100 in a language like Java, which is especially verbose. Consider trying to pass in a chunk of code to be executed at the end of a function in both Java and Javascript (this is pretty common in web apps and task runners) :
    Java :

    interface Function<T> {
      T execute();   // Optional parameters are not easy here :(
    }

    taskRunner.execute(taskArgument, new Function<String>() {
      public String execute() {
        return "Success";
      }
    });

    Javascript :

    taskRunner.execute(params, function() { return "Success"; });

  2. No badly implemented generics
    This is mostly Java’s fault for getting generics pretty badly wrong. The idea behind generics is sound; it’s the implementation that is horribly broken. Here are a few things which are wrong with it :
    Type erasure : At runtime, there is no way to differentiate between, say, a List<String> and a List<Integer>. If you never work with reflection or Guice, this might not be a problem. But it is also a pain with deeply nested generics and wildcards. I have seen code that compiles but blows up at runtime because it cannot differentiate between a Provider<? extends Repository> and a Provider<? extends Resource>, even though Resource and Repository have nothing in common. Crazy….
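The erasure point is easy to demonstrate in a couple of lines; this self-contained sketch just compares the runtime classes of two differently parameterized lists:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // At runtime, a List<String> and a List<Integer> are the same class,
    // so reflection cannot tell them apart.
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<String>();
        List<Integer> integers = new ArrayList<Integer>();
        // Both erase to plain ArrayList.
        return strings.getClass() == integers.getClass();
    }
}
```

This is exactly why reflection-heavy code and dependency injection frameworks have to resort to tricks to recover the type parameters.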

    Verbosity : Map<String, List<String>> myMap = new HashMap<String, List<String>>();. Enuff said.

    Guice & Reflection : Generics and java.lang.reflect just don’t mix. They just don’t. Type erasure blows away all type information, so you are bound to end up using things like new Entity<?>, which totally defeats the purpose. And don’t get me started on Guice. In Guice, normal bindings (for non generic classes) look as follows :

    bind(MyInterface.class).toInstance(instance);


    With Generics involved, they now look as follows :

    bind(new TypeLiteral<MyInterface<String>>(){}).toInstance(instance);

    What the heck just happened there???

  3. Closures / Functions :
    Closures are anonymous functions which carry an environment of their own, with variables bound from the enclosing scope: the inner function has access to the local variables of the outer scope and can change state. What they allow is creating functions, as callbacks or for performing some quick little task in a repeated fashion, easily, quickly and pretty darn cheaply. Java has had a few proposals to add them, but none has passed the review committee yet, and probably won’t for the next few years. So until then, in Java, you are stuck creating interfaces, creating implementations of them at runtime, passing in the variables you need access to through the constructor or some other mechanism, and generally being in a lot of pain. Thanks, but no thanks.

What I miss in Java

So I finally got some time to sit down and write, after being knee deep in work for the past month or two. And without a doubt, I wanted to write about what has been nagging at and annoying me over that time. I am an ardent defender of Java as a good language, especially defending it from Misko day in and day out, but even I will agree that it does suck at times. So today, Java, the gloves are off. I love you, but this is the way things are.

To give some context, I have been working with GWT a lot recently, and have done some crazy things with GWT generators (which I might cover in a few later posts). I love GWT, but for all of GWT’s aims to allow developing modern web apps without losing any of Java’s tooling support, there are a lot of things which are easier in Javascript. Let’s take a look at them one by one, shall we?

Closures (Ability to pass around methods)

So this was the straw that broke the camel’s back. I had a use case today where I wanted to set some fields through setters on a POJO. Simple enough, right? Well, no, because someone used defensive programming (don’t get me started on precondition checks, that’s for another post), and so it threw a NullPointerException. Ok, since I can’t change the POJO (it is in someone else’s code base), I needed to check for nulls on my side and not call the setter if the value was null. Simple enough: I do a check and call the method conditionally. Except when you have ten-odd properties, that’s a lot of conditional boilerplate.

Ok, so my other option is to write a function which does that check, right? Except in Java, you can’t pass around functions or closures. Ideally, I want a helper which takes a value and a function, and handles the null check and conditional calling. Something like :

callConditionally(myPojo.setValue, actualValue);

Except you can’t. Not in Java. I mean, I could create an interface to wrap it, but that just adds more boilerplate than necessary. I ended up creating a method which uses reflection to find the method by name and call it, but my point is that it shouldn’t be necessary. What should be two or three lines of code ended up being a 20 line monstrosity. And yes, before some smart aleck replies that if I want closures, I should go to Javascript, I will point out that there have been multiple proposals to include closures in Java, and that Scala, which runs on the JVM alongside Java, supports closures as well.
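For illustration, here is a sketch of that reflection workaround. Both callConditionally and ExamplePojo are hypothetical names, not from any library; the point is only that the null check lives in one place:

```java
import java.lang.reflect.Method;

public class ConditionalSetter {
    // Hypothetical helper: invoke the named setter only for non-null values.
    static void callConditionally(Object target, String setterName, Object value) {
        if (value == null) {
            return; // skip the call entirely, sidestepping the defensive check
        }
        try {
            Method setter = target.getClass()
                    .getMethod(setterName, value.getClass());
            setter.invoke(target, value);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Stand-in for the third-party POJO with a defensive setter.
    public static class ExamplePojo {
        private String value = "default";
        public void setValue(String value) {
            if (value == null) throw new NullPointerException("value");
            this.value = value;
        }
        public String getValue() { return value; }
    }
}
```

With closures, the string-based method lookup would be unnecessary; you would pass the setter itself, and the compiler would still check the call.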

There are multiple JSRs and open source libraries which try to implement this for Java, and one of these days I’m going to give them a try. Several of them look promising.

Type inference and General Wordiness

They say a picture is worth a thousand words. Well, with Java, and especially with generics, it seems that even a simple declaration is at least a thousand words. For example :

Map<String, List<String>> myMap = new HashMap<String, List<String>>();

The above line could be so much shorter and sweeter as :

Map<String, List<String>> myMap = new HashMap();

There are very few cases where I would want the right-hand side to be of types other than the ones I just declared. Other examples, like reading a file or working with regexes, abound, all requiring much more syntax than in other languages. And I definitely do miss being able to say

if (myValue)

instead of

if (myValue != null)

Sigh… And don’t even get me started on reflection. Reflection in Java is extremely powerful, but man, is it wordy. Not only can you not iterate over the properties of an object directly (like, say, in Javascript), you also have to worry about exceptions (which I’ll get to in the next section).

Checked exceptions

That brings me to my last and biggest complaint: checked exceptions in Java. They are just plain evil. I know people swear by them, and some of their arguments even make sense. Sometimes. But the fact remains that they make me write more boilerplate, more code that I don’t even care about, than anything else in Java. The idea behind checked exceptions is sound: it’s a great way to declare what the caller of a method needs to worry about. But I should have options other than rethrowing or logging.

I did a very unscientific data gathering experiment of just looking at random code in different code bases (Codesearch was especially useful for this). The majority of catch blocks I found either

  • Logged it using logger or System.err
  • Rethrew it as a wrapped exception

Personally, I have configured Eclipse to generate all catch clauses for me by wrapping and rethrowing as a RuntimeException, so I don’t have to add a throws clause to my method declaration for what is, for the most part, a non recoverable exception.
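That wrap-and-rethrow pattern looks something like the following sketch (utf8Bytes is an illustrative helper name), which also happens to demonstrate the unreachable-catch problem:

```java
public class Exceptions {
    // Wrap the checked exception in an unchecked one so callers are not
    // forced to declare or handle it.
    static byte[] utf8Bytes(String s) {
        try {
            return s.getBytes("UTF-8");
        } catch (java.io.UnsupportedEncodingException e) {
            // UTF-8 support is mandated by the Java platform spec, so this
            // branch is unreachable, yet the compiler still demands the catch.
            throw new RuntimeException(e);
        }
    }
}
```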

Furthermore, checked exceptions can sometimes even force catch clauses which will never be executed. Case in point :

try {
  byte[] bytes = myString.getBytes("UTF8");
} catch (UnsupportedEncodingException e) {
  // Can never be thrown, but I am forced to catch it.
  // Because it's a checked exception!!!
}
There are many more cases like this, but I think this is enough of a rant for now.

How to build a Deeplinking capable Flex / GWT App

I have been working extensively with GWT recently, and worked on a Flash based webpage before that. We tried many different approaches before finally settling on one that works across all frameworks which are GWT / Flash-like. What do I mean by GWT / Flash-like? Well, these are frameworks which rely on one page serving the entire content. The state of the page changes, but the web browser does not navigate between pages as on a traditional website.

And neither of these frameworks makes it easy to provide deeplinking. Oh sure, it’s easy to support the browser’s Back button in both GWT and Flex, but it’s not as trivial to let a user enter a URL and browse immediately to the corresponding page without a lot of effort on the part of the developer. And this is where the following structure makes life a little bit easier.

The central concept in either of these is something called a Workspace. The workspace in this architecture represents the truth of the UI. Whatever the workspace contains is displayed in the UI. It is the backing model of the View. Now, for a mail app, it might represent the current view, like Inbox or Sent mail, and maybe the mails contained in it. And any other information needed to build and display the View. The workspace is also responsible for two more things, firing an event to all the Views saying that it has been updated, and another to a controller to tell it to go fetch data from the backend server.

Now the Views themselves are stateless to an extent, other than holding a reference to the Workspace. These would be the Panel classes in GWT. Their only responsibility is in channelling information to and from the workspace. They also listen to events on the Workspace. So whenever a Workspace_Changed event fires, the views go and grab relevant data from the workspace and render it.

The workspace also fires an event whenever a View tells it it needs more information. In that case, the controller goes and fetches the data, stuffs it in the workspace, and the workspace then fires an event to tell the Views that they should now update themselves. So basically, there are two events propagating through the system :

  • UPDATE_VIEWS : The workspace controller fires these when it has stuffed the information from the server into the workspace. The views listen on this event and update themselves accordingly
  • UPDATE_WORKSPACE : The views fire these when it wants more data loaded from the server. The Workspace controller listens on this, and based on the state of the workspace, fetches relevant information. The catch is that it should always be possible to compute what data is needed based on the workspace. When the controller finishes, it fires the UPDATE_VIEWS event.

Ok, so what does this give us with regards to deeplinking? Well, your URL / token parser should be able to parse the URL or tokens (it is trivial to add a HistoryChangeListener in both GWT and Flex). Based on the parsed tokens, it should just update the workspace with the relevant fields, and then fire an UPDATE_WORKSPACE event. This triggers a server call to get the relevant information, which in turn fires an UPDATE_VIEWS event, which tells the Views to update themselves based on the state of the workspace. Voila, you have a working deeplinking implementation.
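The whole flow can be sketched in a few lines of plain Java. Every name here (Workspace, Listener, deeplink) is illustrative, not a GWT or Flex API; the point is only the event choreography:

```java
import java.util.ArrayList;
import java.util.List;

public class WorkspaceDemo {
    interface Listener { void onEvent(); }

    // The workspace is the single source of truth for the UI.
    static class Workspace {
        String currentFolder = "";
        final List<Listener> viewListeners = new ArrayList<>();       // UPDATE_VIEWS
        final List<Listener> controllerListeners = new ArrayList<>(); // UPDATE_WORKSPACE

        void fireUpdateWorkspace() { controllerListeners.forEach(Listener::onEvent); }
        void fireUpdateViews() { viewListeners.forEach(Listener::onEvent); }
    }

    // Deeplinking: parse the token, update the workspace, fire UPDATE_WORKSPACE.
    static String deeplink(Workspace ws, String token, List<String> renderLog) {
        // Controller: on UPDATE_WORKSPACE, "fetch" data, then fire UPDATE_VIEWS.
        ws.controllerListeners.add(() -> ws.fireUpdateViews());
        // View: on UPDATE_VIEWS, re-render from workspace state.
        ws.viewListeners.add(() -> renderLog.add("rendered:" + ws.currentFolder));

        ws.currentFolder = token;   // the token parser updates the workspace
        ws.fireUpdateWorkspace();   // triggers the fetch, then the view update
        return renderLog.get(0);
    }
}
```

Because the views render purely from workspace state, landing on a deep link and navigating there by clicking go through the exact same code path.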

Using Polymorphism instead of Conditionals

Interviewing with tech companies has become run of the mill nowadays. You have phone screens, then you are brought in for on-site interviews. And in each one of these, you are asked one mind-bending, off-the-wall algorithm question after another. I have personally been on both sides of the interview table, having asked and been asked my fair share of these. And after a while, I started questioning whether these questions provide any insight into how interviewees think, beyond their knowledge of algorithms.

It was then that I decided I wanted to try and find out if the candidates really understood polymorphism and other concepts, rather than their knowledge of algorithms, since every other interviewer would be covering that. And that was when I stumbled upon this gem of a question, which also underlies a fundamental concept of object oriented programming.

The question is simple: “Given a mathematical expression, like 2 + 3 * 5, which can be represented as a binary tree, how would you design the classes and code the methods so that I can call evaluate() and toString() on any node of the tree and get the correct value?” Of course, I would clarify that populating the tree is out of the scope of the problem, so they have a filled-in tree to work with. It also gives me a chance to see how the candidate thinks: whether he asks if filling in the tree is his problem, or just assumes things. You could preface this question with another about trees and traversal, to check the candidate’s knowledge and whether this one would be a waste of time or not.

Now, one of three things can happen at this point. One, the candidate has no clue about trees and traversals, in which case there is no point proceeding down this line. Second, which seems to happen more often than not, is the candidate gives a class and method like the following :
class Node {
  char operator;
  int lhsValue, rhsValue;
  Node left, right;

  public int evaluate() {
    int leftVal = left == null ? lhsValue : left.evaluate();
    int rightVal = right == null ? rhsValue : right.evaluate();
    if (operator == '+') {
      return leftVal + rightVal;
    } else if (operator == '-') {
      return leftVal - rightVal;
    }
    // So on and so forth. Same for toString().
    throw new IllegalArgumentException("Unknown operator: " + operator);
  }
}
Whenever I see code like the example above, it just screams that whoever wrote it has no clue how to work with polymorphism. I agree that some conditionals are needed, like checks for boundary conditions, but when you keep working with the same variables and apply different operations to them based on a condition, that is the perfect place for polymorphism to reduce code complexity.

In the above case, the biggest problem is that all the code and logic is enclosed in a single method. So when a candidate presents me with this solution, the first thing I ask is what happens when we need to add another operation, like division. When the immediate answer is that we add another if condition, I prompt them for a cleaner solution, one which keeps the code for each operation separate. Finally, you often depend on third-party libraries for functionality; in those cases, you won’t be able to edit the original source code, leaving you cursing the developer who wrote it for not allowing an extensible design.

The ideal answer would, for this question, be that Node is an interface with evaluate() and toString(). Then, we have different implementations of Node, like a ValueNode, an AdditionOperationNode, and so on and so forth. The implementations would look as follows :
interface Node {
  int evaluate();
  String toString();
}

public class ValueNode implements Node {
  private int value;
  public int evaluate() {
    return value;
  }
  public String toString() {
    return value + "";
  }
}

public class AdditionOperationNode implements Node {
  Node left, right;
  public int evaluate() {
    return left.evaluate() + right.evaluate();
  }
  public String toString() {
    return left.toString() + " + " + right.toString();
  }
}
You could go one step further and have an abstract base class for all operations with a Node left and right, but I would be well satisfied with just the above solution. Now, adding another operation is as simple as just adding another class with the particular implementation. Testing-wise, each class can be tested separately and independently, and each class has one and only one responsibility.
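A sketch of that “one step further” base class follows. Node is redeclared here (with just evaluate(), for brevity) so the example is self-contained; the class names are illustrative:

```java
public class PolymorphicNodes {
    interface Node {
        int evaluate();
    }

    static class ValueNode implements Node {
        private final int value;
        ValueNode(int value) { this.value = value; }
        public int evaluate() { return value; }
    }

    // The abstract base class owns the left/right children, so each
    // operation subclass only supplies its operator logic.
    static abstract class BinaryOperationNode implements Node {
        protected final Node left, right;
        BinaryOperationNode(Node left, Node right) {
            this.left = left;
            this.right = right;
        }
    }

    static class AdditionNode extends BinaryOperationNode {
        AdditionNode(Node left, Node right) { super(left, right); }
        public int evaluate() { return left.evaluate() + right.evaluate(); }
    }

    static class MultiplicationNode extends BinaryOperationNode {
        MultiplicationNode(Node left, Node right) { super(left, right); }
        public int evaluate() { return left.evaluate() * right.evaluate(); }
    }
}
```

Building 2 + 3 * 5 is then just composing nodes, and adding division means adding one small class, with no existing code touched.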

Now, there are usually two types of conditionals you can’t replace with polymorphism: comparatives (>, <) (or working with primitives, usually), and sometimes boundary cases. And even those are somewhat language specific, as in Java; languages that let you pass closures around can obviate the need for some of these conditionals.

Of course, one might say that this is overkill. The if conditions don’t really make it hard to read. Sure, with the example above, maybe. But when was the last time you had just one level of nesting? Most times, these conditionals are within conditionals which are within loops. And then, every bit of readability helps. Not to mention there is a combinatorial explosion in the number of code paths through a method. In that case, wouldn’t it be easier to test that the correct method is called on a class, and then test those classes individually to check they do the right thing?

So next time you are adding a conditional to your code, stop and think about it for a second, before you go ahead and add it in.

Testing function vs testing implementation

I often get complaints from developers I work with that their unit tests are prone to breakage, or that they don’t like writing unit tests because their code changes frequently, which forces them to change their tests as well. It’s just extra overhead at that point, and starts becoming a chore. At least, that’s their claim. Now of course, I don’t agree with this at all. Not. One. Bit.

You see, when I hear this, it always tells me that there is something wrong with the way the tests are written. A unit test that requires changes every time someone changes the code implies an extremely strong coupling between how the code is written and how it is tested. Useful indicators of this are getter methods or properties which are visible only to tests, or tests which check that a loop ran 6 times or that a mock was called 17 times. Sure, these assert that the function is working as intended, but say you optimize and reduce the recursion or the number of method calls; now you have to go and update your expectations.

Of course, some of this is unavoidable when you are working with classes that have mocks injected into them. But in such a case, unless it is plain delegation, there must be some logic happening. That logic should be the target of your tests, not the mock delegations. Usually, when I work with mocks, I have a few tests to make sure the right methods are getting called, and only if there is logic do I test further. Otherwise, one or two tests, and then I go and test the implementation of the mocked class to make sure it works under all conditions.

So let’s consider a run of the mill binary search method that would be tested with mocks (a little bit contrived, but bear with me on this) :

public int binarySearch(List<Integer> items, int itemToFind, int low, int high) {
    // Do the needful, in a recursive fashion
}

// A brittle test
public void testUsingMocks() {
  final List<Integer> list = mockery.mock(List.class);
  mockery.checking(new Expectations() {{
    oneOf(list).size(); will(returnValue(3));
    oneOf(list).get(1); will(returnValue(6));
  }});
  assertEquals(1, binarySearch(list, 6, 0, 2));
}

Now, while a bit contrived, this is a familiar sight when mocks are used to test. Or, to check the correctness of the algorithm, the indices at which the splits happen are stored in a list and verified in the test. These are the kind of whitebox tests that make unit tests brittle. And the more of them there are, the harder it is to maintain or refactor code. Rather than exercising some use cases and boundary conditions, this tests whether the algorithm’s internals behave exactly as written. Useful in some particular cases, but normally not required unless you are developing the algorithm itself.

I would argue that it’s rare to write these kinds of tests if you write your tests before you write the methods. With TDD, you just write your expectations: what you plan to give the method, and what you expect out of it. You then write your code to get it to pass, and it might use internal variables or logic which the test really doesn’t care about. These tests are durable, hold up to refactorings, and even give you a nice safety net. There are times when they end up becoming integration tests rather than unit tests, but I still believe they deliver more bang for the buck.
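For contrast with the mock-based test above, here is what a function-level test of binarySearch might look like, using a sketch implementation and asserting only on inputs and outputs:

```java
import java.util.Arrays;
import java.util.List;

public class BinarySearchTest {
    // A sketch implementation, recursive as in the example above.
    static int binarySearch(List<Integer> items, int itemToFind, int low, int high) {
        if (low > high) return -1;          // not found
        int mid = (low + high) / 2;
        int midVal = items.get(mid);
        if (midVal == itemToFind) return mid;
        if (midVal < itemToFind) return binarySearch(items, itemToFind, mid + 1, high);
        return binarySearch(items, itemToFind, low, mid - 1);
    }

    static boolean behavesCorrectly() {
        List<Integer> sorted = Arrays.asList(2, 4, 6, 8, 10);
        // Only inputs and outputs are asserted; how the method splits the
        // range is its own business, so refactoring it (say, to a loop)
        // cannot break this test.
        return binarySearch(sorted, 6, 0, 4) == 2
            && binarySearch(sorted, 2, 0, 4) == 0
            && binarySearch(sorted, 7, 0, 4) == -1;
    }
}
```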

Of course, when you start testing edge cases, you do end up with a mostly code-dependent whitebox test, and those are still fine, since edge cases shouldn’t change that often. Though the fact that there are conditionals usually signifies that there is a polymorphic object hiding in there. But that’s a blog post for another day.

Software Engineering vs Software Artistry

I never expected my last post, about whether inheritance is needed or could be done away with, to spark such a furore. But spark a furore it did, especially in the Java DZone lobby. Maybe it was the inflammatory nature of the title (which could have been a tad exaggerated :) ), but whatever it was, it sure didn’t stop the flames. From being called “incredibly naive” to “nonsense” to even losing DZone some subscribers, no stone was left unturned.

But what did surprise me in the end was that for every dismissal of the idea, there was a proponent who understood my reasoning behind it. And of course, there were the people in the middle who would only say, “Depends on the situation.” Surprisingly, I agreed with a lot of their points of view. But that in turn sent me down a line of thinking which led to this post.

Who is an engineer? In every field other than computers and software, an engineer is one who uses scientific methodologies and time-proven concepts to design and implement constructs and processes which reliably and safely perform specific tasks. Look at electrical engineering, or aerospace engineering. These guys consistently develop hardware (planes!!) which works. Every! Single! Time! No bugs, no defects. I mean, can you imagine a plane in the middle of a flight, when suddenly there's a bug in the landing gear? Shudder….

These guys follow tried and tested techniques. There's a body of lore that every engineer depends on to create his next system, passed down from generation to generation: what works, and what shouldn't be done. Same with civil engineering; there are no two ways to construct, say, a building. Sure, you might differ in how it looks and what materials you use, but the base work of creating a frame, etc. remains the same (then again, I have no clue what actually goes into buildings). The probability of a bug, or of building a system that the next person in finds impossible to maintain, is far lower, from what I have heard (and I will admit that this is based on hearsay).

Now an artist, on the other hand, is usually defined as someone who expresses themselves through a medium. Interestingly though, the Oxford dictionary has, as one of its definitions of an artist, “a follower of a pursuit in which skill comes by study or practice – the opposite of a theorist.” Now what does that remind you of? Exactly: engineering. To an extent, artistry is engineering, except that in artistry, while the basics might be the same, the end results are usually unique. There still is no defined methodology, no “steps you follow to perform ABC.” You work with what you have in the best possible way you know, and you churn out something that may or may not be what you desired.

Now where do we fit as software engineers? We have some lore, some history of tried and true practices. We have design patterns; we have team practices like Agile, XP, etc. And we almost have an algorithm for everything. It's almost like an Apple iPhone ad: “You need to search a graph? There's an algorithm for that.” But when it comes to implementation, and to combining all of these into a single product, there is so much divergence. Two people, given the exact same set of requirements, will come up with two almost completely differing solutions. And I'm not talking about just names: the architecture, the design patterns used, the way services are split up. Both may completely satisfy the requirements. Or they may end up being epic disasters.

What I'm saying is, there is no guaranteed recipe for success like there is in other fields of engineering. It is completely feasible to dig yourself into a hole even while applying commonly known, solid techniques. You might argue that this happens in other fields of engineering as well: take Boeing's new 787, which has been delayed so many times. But to that I say that they were trying to stretch the boundaries, innovate, and create something new. That applies to any engineering discipline when you try to go above and beyond what currently exists.

But when you are creating run-of-the-mill apps, like a configuration system or a database data viewer, those should, by now, be trivial. But they aren't. I know groups which spend more time and effort developing these than should be required. And once finally developed, these turn into nightmares when you want to update them or add new features. You might say, “Well, I never do that.” To that I say: sure, but remember the last time you moved onto a project with a legacy code base? Remember how that felt? Well, someone well-meaning, just like you, developed that disaster.

So are we Software Engineers or Artists? At the end of the day, it doesn't matter what we are as long as the job gets done, but you would think we would finally start narrowing down on some concepts that can be universally agreed upon. Most software solutions I see as end products are works of art. I have no clue how they were made, no clue how they work, but they are beautiful nonetheless (or ugly, if that is the way your artistic tendencies lie). Maybe we will be closer to being engineers in another 100 years? After all, civil engineering has been around for quite some time now.

Using Tracker for Agile projects

I first got exposed to the Agile methodology when I took a “Thinking in Agile” course. It was a two-day course, and they walked you through what it meant to be agile, the processes involved, etc. But thinking back, I fell into the same trap I did back in college: it was mostly talk, and not enough action. I learn by doing, and as a result, nothing really stuck. One and a half years later, still at Google, I became part of an internal project which needed a kick start. And lo and behold, we decided to subscribe to the agile philosophies, and not half-ass it as many do. And we decided to use Tracker to manage the project.

The basic ideologies of Agile (and Tracker) are as follows. You work in short iterations (preferably a week or two). Every iteration, the entire team gets together to talk about what got done, and what is up next. You interact directly with the customer or their proxy, who gives you things to do in the form of stories. For a company search app, a story might look something along the lines of, “As a product owner, I want to be able to search for a particular employee, so that I know what he works on.” Notice that this story is very well defined, especially for you as a developer. It tells you the target audience, what functionality is needed, and why. The why is important because sometimes (not always) you might be able to think of a better way to do something, in which case you can pipe up with said suggestion. (The example project below ignores this advice; I didn't spend the time to change the demo to follow my own suggestion.)

Tracker screen

Now Tracker allows you to add and move stories around. You can move a story from the Icebox (which is where stories end up by default) into the Backlog, which holds the things you need to work on. You can arrange them in priority order (and it is all immediate, so someone else with Tracker open sees those changes as they happen). Finally, during your iteration meeting, you sit as a team and estimate how many points each story is worth.

The most important point here is that customers are the only ones allowed to request stories (though you can probably figure out exceptions, if you really think some chores should get points). Customers get to decide what stories need to happen, and what order they need to happen in. Nothing more, nothing less. As developers, your role is to estimate how long the items will take, in order. When you are new, you can start off with a baseline like “a point is one day's work.” You can play planning poker to estimate, which is a fun little activity in itself that makes sure no one is influenced by anyone else's estimates. What Tracker will then do is measure how many points you end up actually delivering per iteration, and calculate what we call your Average Velocity (pointed out in the screenshot above). This denotes how much Tracker thinks you can do in the next iteration, assuming your estimation baseline remains constant.
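The velocity arithmetic itself is nothing magical. Here's a toy sketch (my own illustration, not Tracker's actual formula) of averaging the points delivered over the last few iterations:

```java
public class Velocity {
    // Toy rolling-average velocity: points delivered in the last `window`
    // iterations, averaged. Tracker's real formula may well differ.
    static int averageVelocity(int[] pointsPerIteration, int window) {
        int n = Math.min(window, pointsPerIteration.length);
        int sum = 0;
        for (int i = pointsPerIteration.length - n; i < pointsPerIteration.length; i++) {
            sum += pointsPerIteration[i];
        }
        return n == 0 ? 0 : sum / n;
    }

    public static void main(String[] args) {
        // Delivered 8, 10 and 12 points in the last three iterations.
        System.out.println(averageVelocity(new int[]{8, 10, 12}, 3)); // prints 10
    }
}
```

The useful part is the feedback loop: as long as your estimation baseline stays consistent, the average self-corrects toward what your team actually delivers, rather than what it hopes to deliver.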

Another important thing to note: only things that the customer cares about, or that are customer facing, should be stories. These are things the customer can see when delivered, and that bring value to him. So that refactoring you need to do to climb out of the hole you dug for yourself? Guess what, it's a chore and does not contribute to your velocity. Want to move database schemas around to make things easier for yourself? Fixing that bug you introduced while rushing through all the stories? The customer doesn't care (well, he probably cares about the bugs, but you are not delivering value, you are cleaning up after your own mistakes), so no cookies and no points for you.

So a basic flow for an iteration (after estimation and planning poker) is as follows:

  1. The customer / product owner prioritizes stories / chores / bugs in the backlog.
  2. Tracker looks at the average velocity and figures out how many it can squeeze into the next iteration.
  3. Developers click Start on a story / task when they start to work on it.
  4. They click Finish when they are done implementing it, but do not click Deliver.
  5. The pushmaster / release engineer clicks Deliver when those changes are pushed to a customer-visible place.
  6. The customer gets to try out each story, decides whether it meets specifications, and chooses to accept or reject it.

Rinse and repeat, and you have a great way of managing requirements, release plans, and so much more. You can figure out who's working on what, and you also get a great host of charts, including burndown charts and charts which let you figure out where you are spending the majority of your time (stories, bugs, chores, etc.). Example chart below:

An example burndown chart from Tracker

Did I mention Tracker is currently free to sign up for and start using? Regardless of whether you are a product manager who wants to keep his project in line, a developer interested in using agile practices, or just plain curious about what this thing is all about, Tracker has something for everyone. So, what are you waiting for, a personal invitation? You don't need to be an Agile team to try this out for yourself.

Is Inheritance overrated? Needed even?

To give some context to this topic, the idea was brought to me by Alex Eagle. I was happily coding away when Alex sprung his idea of Composition over Inheritance for Noop – a language we are developing with testability and dependency injection in mind. My gut reaction was that this was blasphemy, and that it couldn't be done. You can't just do away with inheritance; it's one of the building blocks of OO programming languages. But now, after I have let the idea digest for a few days, it doesn't seem so far-fetched any more. And here's why.

Let me first talk about the biggest problems with vanilla inheritance as we have it in Java. Joshua Bloch hits the nail on the head in his Effective Java item about “Favoring composition over inheritance.” But let's do a quick recap anyway.

The biggest problem is that inheritance often ends up breaking encapsulation. This is because the child class depends on the implementation of the parent class. But between releases, something in the parent class implementation can change and break all the child classes, without their code ever being touched. Another common gotcha is in how protected fields and members are used. Often, the parent class changes the values of fields depending on how methods are called. Not understanding this behavior often leads to buggy or simply wrong behavior in the subclasses.
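The classic illustration from Effective Java (reconstructed from memory here, so treat the details as a sketch) is a counting set whose count silently goes wrong, because `HashSet` happens to implement `addAll` in terms of `add`:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;

public class InstrumentedHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        // HashSet's addAll calls our overridden add() for each element,
        // so every element gets counted twice. The subclass breaks because
        // it depends on an implementation detail of the parent.
        return super.addAll(c);
    }

    public int getAddCount() { return addCount; }

    public static void main(String[] args) {
        InstrumentedHashSet<String> s = new InstrumentedHashSet<>();
        s.addAll(Arrays.asList("a", "b", "c"));
        System.out.println(s.getAddCount()); // prints 6, not the expected 3
    }
}
```

Nothing in `InstrumentedHashSet` is wrong in isolation; it breaks purely because of how the parent chooses to call its own methods, and a future release of the parent could change that behavior again.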

Another problem with a subclass – especially from the point of view of unit testing – is that there is no way to create an instance of the subclass in isolation. By this, I mean that every time I create an instance of the subclass, I am forced to have the parent class as well. In most cases, this shouldn't be a problem, but I have run into situations where the parent class is just a landmine waiting to explode, with the default constructor not being explicit about its dependencies. Instant kablaam!!! Or the parent class will load things you don't really care about and make things slow in a test. There was this insidious test I ran into once, which extended a base test case, which did the same thing. About 7 layers deep. The test itself didn't really care about 3 or 4 of those layers, but had to jump through all the hoops and set everything up, because of the parent classes.

There are a few more issues, which are well documented in Effective Java item 16, “Favor composition over inheritance.” I won't bore you further, assuming I have convinced the skeptics about the problems with inheritance. If not, go read that book, and you shall be convinced. But then, I wanted to postulate on whether it was at all possible to have a programming language which does away with inheritance (as Noop proposes).

So when do we use inheritance? To me, polymorphism is about the only time when inheritance and subclassing are deemed appropriate, be it for having different subtypes or for plain old code reuse. So unless you want to have a base abstract class with some methods defined (like a Shape with a draw() method, and Circle and Rectangle subclasses), inheritance is not really needed.

In Java, interfaces allow you to perform polymorphic operations with abandon, and to convert between types. And interfaces don't saddle you with the requirement of dragging the base class along with every instance.

Also, if you use composition, you can reuse code through delegation. For example, you could define a Shape interface with a DefaultShape implementation. Now, rather than subclassing a concrete Shape type, you could have a Rectangle which implements Shape. And if you wanted to reuse some code, let Rectangle take in a DefaultShape instance and just delegate to it when necessary. This offers multiple benefits. One, you are not tied down to getting things from the base class: in your test, you could pass in a mock, a null, whatever you want. The only problem is that this option is not viable if you don't have an interface. If that is the case (or the thing you are subclassing is in a package outside of your control), then you are stuck doing inheritance the old-fashioned way.
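To make that concrete (the names are just the Shape/DefaultShape ones from the paragraph above, fleshed out with a made-up `describe()` method), the delegation version looks like this:

```java
public class Delegation {
    interface Shape {
        String describe();
    }

    // Reusable default behavior lives in a plain implementation...
    static class DefaultShape implements Shape {
        public String describe() { return "a shape"; }
    }

    // ...and Rectangle reuses it by delegation instead of by extending it.
    static class Rectangle implements Shape {
        private final Shape delegate;

        Rectangle(Shape delegate) { this.delegate = delegate; }

        public String describe() {
            return delegate.describe() + ", specifically a rectangle";
        }
    }

    public static void main(String[] args) {
        // Production code passes in the real DefaultShape...
        Shape real = new Rectangle(new DefaultShape());
        System.out.println(real.describe());
        // ...while a test can pass in a stub (or a mock, or null),
        // with no parent class construction forced on it.
        Shape stubbed = new Rectangle(() -> "a stub");
        System.out.println(stubbed.describe());
    }
}
```

Note how the test gets to choose exactly what Rectangle is built from; with inheritance, DefaultShape would come along for the ride every single time.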

And this is (at least the last time I heard the proposal) what Noop aims to solve. When you want to subclass, you tell the class what you want to compose. Regardless of whether it is an interface or not, Noop will create that class with an instance of your composition type. By default, all methods of the composition type will be available on the subclass, and it will delegate automatically unless you override them. You get complete control over object creation, and this approach could potentially even support multiple inheritance.

What do other people think? Is this feasible? Am I missing something obvious, some case where inheritance is the only approach and composition just doesn't cut it (either right now or in the Noop proposal)? Are you interested in Noop? Drop me a line.
