Archive: java

APIs and what not to do

APIs seem to be like opinions. Everyone has one, and no two people have the same concept of what constitutes a good one. An API is supposed to be an interface that is exposed for other programs or programmers to use to interact with your code. Except, each API, like an individual, is unique with its own flaws and niceties. A great API is one which reduces the amount of code you have to write when you use it. I personally feel amazing if I can get something done with minimal code. That just screams “GOOD API” to me.

On the other hand, a bad API leaves you feeling dirty, unclean even, as if you are committing grave sins against nature even by just using it. Here are a few common mistakes which end up leaving that bad taste in your mouth (with examples, of course!) :

Bad APIs

These are the worst offenders: the APIs which are supposedly there to make your life easier, but which end up making it more work to use them than to rewrite the functionality from scratch. I ran into one of the bigger offenders recently when I was working with GWT, trying to create a tree structure to represent a navigation hierarchy, when it dawned on me just how much busywork the API demanded.

A GWT Tree is created by creating a Tree object, and then creating a tree item for each node. To append children to a node, you create further tree items and add whatever text or elements you want to them. So, to summarize: even if I have a data structure representing my tree (which in most cases, I do), I have to traverse it manually, create tree items, tell each one how to render itself, and then append it to the correct parent. Yuck.

Now consider how JFace creates a Tree (which I consider much more powerful and a nicer API altogether). You create a TreeViewer and set its data source / input. Then, you set a content provider which knows how to traverse your data object and get children / parents. You can also set a LabelProvider which tells it how to render its data elements. End result? Nice clean code that I actually feel satisfied about.

Most of these are the end result of rushed or poorly-thought-out design. Having a concrete use case prior to designing the API should have been enough to make someone scream "It's ugly!!!". My suggestion to prevent this: write a test / use case for anything you start designing, so you can get a feel for how it works in action. That should help you avoid a lot of these mistakes.

Not fully thought out APIs

This one is similar to the previous one, but I think it deserves a section and an example of its own. This happens when you almost nail the API, but fail to consider some common uses of it. The biggest offender here, I believe, is the Java List API.

The two most common use cases I have in Java when I work with lists are:
  1. Iterating through them to perform some operation, and
  2. Filtering the list to get a subset.

The second operation is so common that it annoys me every time I have to create an empty list, iterate through the original with a for-each, and conditionally add elements to the new list. Now, I realize that Java doesn't make it easy to pass functions around as arguments (check my older article about this), but what I really, really want here is the ability to do myList.filter(predicate), where predicate is a predicate function of my choosing, and which returns the filtered list of elements matching the predicate.
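To make this concrete, here is a minimal sketch of such a filter helper in pre-closures Java; the Predicate interface here is made up for illustration (java.util.List has nothing like it built in):

```java
import java.util.ArrayList;
import java.util.List;

public class FilterDemo {
  // Hypothetical predicate interface, since Java has no function types here.
  interface Predicate<T> { boolean matches(T item); }

  // The filter method I wish List had built in.
  static <T> List<T> filter(List<T> input, Predicate<T> predicate) {
    List<T> result = new ArrayList<T>();
    for (T item : input) {
      if (predicate.matches(item)) result.add(item);
    }
    return result;
  }

  public static void main(String[] args) {
    List<Integer> nums = new ArrayList<Integer>();
    for (int i = 1; i <= 6; i++) nums.add(i);
    List<Integer> evens = filter(nums, new Predicate<Integer>() {
      public boolean matches(Integer n) { return n % 2 == 0; }
    });
    System.out.println(evens);  // prints [2, 4, 6]
  }
}
```

Even with the anonymous-class noise, this beats repeating the loop at every call site.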

There are many more such common operations missing from the List interface, but this is the most egregious one, I believe. Javascript also gets this wrong, but underscore, a JS library, adds a lot of these operations, which makes working with lists and collections a dream.

Misnamed APIs and methods

How many times have you called a method, only to realize that it didn't really do what you thought it did? Or looked for a method XYZ, only to realize later that it had been named YXZ instead? Raise your hand if you have experienced this. For some reason, an apple for one person almost always turns out to be an orange for someone else.

I'll switch to bashing JS for this one, underscore in particular. For all the amazing methods that underscore provides, they really have a problem with naming. I went looking for a collection.contains method, found only indexOf, and initially assumed they didn't have one. I mean, if I look for contains, at best, I will also look for has or hasKey. Browsing through the list of method names, I might even have accepted includes (though it would not have been my first choice). But never in all my life would I have expected it to be include (yes, that is include, as in singular!). People, what were you thinking????


Lying APIs

The final set of APIs which can annoy (though they are easily worked around, just like the previous kind) are APIs which lie. These include APIs which don't do what the function name suggests (no obvious example from the open source world comes to mind, thankfully). The other kind is an API whose objects are not ready for use even after they have been created. Most times, it is the case of a lurking init / initialize method. And if you ever see an interface called Initializable, run in the opposite direction.

Is Strong Typing really needed?

This is something I have been struggling with for the last few months. I have had people argue ardently that all strong typing is good for is false comfort and lots of unneeded typing. But I was strong. I was undeterred. I dismissed this as the crazy rants of those JS developers, those dynamic language people who believe that obfuscation and compactness are everything, even at the cost of maintainability. I mean, how could a language where you didn't even know what was getting passed in be in any way better than one where the APIs are explicit and stop you from making mistakes? A dynamic language could work for a single developer, but definitely not for a team. That was my wholehearted conclusion.
Now, I'm not so sure anymore. It's been 3 weeks since our team made the wholehearted switch. Has it been roses and sunshine? No. But it hasn't been as bad as I expected. And there are a few reasons for that. But before I get to those, I'll lay down the pros and cons as I see them, from my (assuredly very limited) experience :

Benefits of Strong Typing :
  1. Errors / Warnings in your editor
    Simply put, this might just be the single greatest benefit of strong typing, and the single reason why most Java developers (many of them rightly so) will never even consider leaving its safety. While compilation support doesn't necessarily go hand in hand with strong typing, most people associate Java with it, so let's run with that. With strong typing, your editor can (and should; if you are not going to get immediate feedback, what's the point?) tell you immediately when you mess something up, whether that is using the wrong variable name, calling a method that does not exist, calling it with the wrong parameters, or using the wrong type of object.

    To a Java developer, an IDE like Eclipse or IntelliJ is a godsend: it tells you what is wrong in your world, lets you jump right to the problems, gives you suggestions and autofixes, and generally makes your life as painless as it can. And it is brilliant, I can tell you that.

    In Javascript (or any other dynamic language), everything is fine and dandy for the first 100 lines. After that, it becomes scarily unmanageable. The only way around this that I have found so far is to be super paranoid and write tests for every single line of code. If you can’t do that, stay far far away.

  2. Generics (but this is also a negative, in my opinion, which I’ll get to below)
    The idea behind generics is that they give developers some assurances about the types in a collection (or whatever it is you are genericizing). That way, all operations are type safe, without converting to and from different types, and you are assured that a different type of object will not suddenly pop up when you least expect it. But there are a lot of issues with generics that I'll cover in the second section.
  3. Ability to follow a chain and figure out what type of object is required at each step
    Now this is something I definitely miss in languages like Javascript and Python. The fact that I can trace (in my IDE, note that part) the type of each variable / method call in an expression chain is simply amazing, especially when you are working with a new codebase. You never have to wonder what the parameter types of the method you are calling are. You don't have to wonder what methods are available or visible. You just know this information (again, assuming you are using an IDE. If not, god help you).
  4. Refactoring

    The biggest advantage of strong typing, though, in my opinion, is that it enables IDEs which make refactoring a breeze. Renaming a method / variable? Trivial. Moving or extracting a method? A simple key combination. Tasks which can be extremely tedious and mind numbing are accomplished in a matter of minutes. (Want to know more about these shortcuts? Check out Eclipse shortcuts.) This is simply not possible to the same degree with languages like Python and Javascript.
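Before moving on, the type-safety assurance from benefit 2 is easiest to see by comparing a raw, pre-generics list against a parameterized one; a small sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class RawTypeDemo {
  public static void main(String[] args) {
    List raw = new ArrayList();       // pre-generics raw type
    raw.add("hello");
    raw.add(42);                      // nothing stops the wrong type going in
    try {
      String s = (String) raw.get(1); // blows up only at runtime
      System.out.println(s);
    } catch (ClassCastException e) {
      System.out.println("ClassCastException");
    }
    // With a List<String>, the add(42) above would not even compile,
    // and the cast would be unnecessary.
  }
}
```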

Disadvantages of Strong typing :
  1. More concise and precise, less typing
    Dynamic languages do tend to be denser: it is much easier to accomplish in 10 lines what can easily take 50-100 in a language like Java, which is especially verbose. Consider trying to pass a chunk of code to be executed at the end of a function in both Java and Javascript (this is pretty common in web apps and task runners) :
    Java :

    interface Function<T> {
      T execute();   // Optional parameters are not easy here :(
    }

    taskRunner.execute(taskArgument, new Function<String>() {
      public String execute() {
        return "Success";
      }
    });

    Javascript :

    taskRunner.execute(params, function() { return "Success"; });

  2. No badly implemented generics
    This is mostly about Java getting generics pretty badly wrong. The idea behind generics is sound; it's the implementation that is horribly broken. Here are a few of the things wrong with it :
    Type erasure : At runtime, there is no way to differentiate between, say, a List<String> and a List<Integer>. If you never work with reflection or Guice, this might not be a problem. But it is also a pain with deeply nested generics and wildcards. I have seen code which compiles but blows up at runtime because it cannot differentiate between a Provider<? extends Repository> and a Provider<? extends Resource>, even though Resource and Repository have nothing in common. Crazy…

    Verbosity : Map<String, List<String>> myMap = new HashMap<String, List<String>>();. Enuff said.

    Guice & Reflection : Generics and java.lang.reflect just don't mix. They just don't. Type erasure blows away all type information, so you are bound to be using stuff like new Entity<?>, which totally defeats the purpose. And don't get me started on Guice. In Guice, normal bindings (non-generic classes) look as follows :

    bind(MyInterface.class).toInstance(instance);

    With generics involved, they now look as follows :

    bind(new TypeLiteral<MyInterface<String>>(){}).toInstance(instance);

    What the heck just happened there???

  3. Closures / Functions :
    Closures are a form of anonymous inner function which carries an environment of its own, with variables bound to the scope of the enclosing function. The inner function has access to the local variables of the outer scope and can change state. What closures allow is creating functions, as callbacks or for performing some quick little task in a repeated fashion, easily, quickly and pretty darn cheaply. Java has had a few proposals to add them, but none has passed the review committee yet, and probably won't for the next few years. So till then, in Java, you are stuck creating interfaces, creating an implementation at runtime, passing in the variables you need access to through the constructor or some other mechanism, and generally being in a lot of pain. Thanks, but no thanks.
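To make the type erasure complaint above concrete, here is a two-line demonstration that a List&lt;String&gt; and a List&lt;Integer&gt; are literally the same class at runtime:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
  public static void main(String[] args) {
    List<String> strings = new ArrayList<String>();
    List<Integer> ints = new ArrayList<Integer>();
    // Type parameters exist only at compile time;
    // at runtime both objects are just ArrayList.
    System.out.println(strings.getClass() == ints.getClass());  // prints true
  }
}
```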

What I miss in Java

So I finally got some time to sit down and write, after being knee deep in work for the past month or two. And without a doubt, I wanted to write about what has been nagging at and annoying me over that month. I am an ardent defender of Java as a good language, defending it from Misko day in and day out, but even I will agree that it does suck at times. So today, Java, the gloves are off. I love you, but this is the way things are.

To give some context, I have been working with GWT a lot recently, and have done some crazy things with GWT generators (which I might cover in a few posts later). I love GWT, but for all of GWT's aims of allowing modern web app development without losing any of Java's tooling support, there are a lot of things which are much easier in Javascript. Let's take a look at them one by one, shall we?

Closures (Ability to pass around methods)

So this was the straw that broke the camel's back. I had a use case today where I wanted to set some fields through setters on a POJO. Simple enough, right? Well, NO, because someone used defensive programming (don't get me started about precondition checks, that's for another post), and so it threw a NullPointerException. OK, since I can't change the POJO (it is in someone else's code base), I needed to check for nulls on my side and not call the setter if the value was null. Simple enough: do a check and call the method conditionally. Except when you have ten-odd properties, that's a lot of crappy conditional code.

OK, so my other option is to write a function which does that check, right? Except in Java, you can't pass around functions or closures. Ideally, I want a closure which takes a value and a function, and let the closure handle the null check and the conditional call. Something like :

callConditionally(myPojo.setValue, actualValue);

Except you can't. Not in Java. I mean, I could create an interface to wrap it, but that just adds more boilerplate than necessary. I ended up creating a method which uses reflection to find the method by name and call it, but my point is that this shouldn't be necessary. What should be two or three lines of code ended up being a 20 line monstrosity. And yes, before some smart aleck replies that if I want closures, I should go to Javascript, I will point out that there have been multiple proposals to include closures in Java, and that Scala, which runs on the JVM, supports closures as well.
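For the curious, a rough sketch of the reflection workaround I described; the POJO and the callConditionally helper here are illustrative stand-ins, not the actual code:

```java
import java.lang.reflect.Method;

public class ConditionalSetterDemo {
  // Illustrative POJO with a defensive setter.
  public static class Pojo {
    private String name = "default";
    public void setName(String name) {
      if (name == null) throw new NullPointerException("name");
      this.name = name;
    }
    public String getName() { return name; }
  }

  // Finds the setter by name via reflection and calls it only for non-null values.
  static void callConditionally(Object target, String setterName, Object value)
      throws Exception {
    if (value == null) return;  // the whole point: skip the call on null
    for (Method m : target.getClass().getMethods()) {
      if (m.getName().equals(setterName) && m.getParameterTypes().length == 1) {
        m.invoke(target, value);
        return;
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Pojo pojo = new Pojo();
    callConditionally(pojo, "setName", null);      // silently skipped
    callConditionally(pojo, "setName", "actual");  // invoked
    System.out.println(pojo.getName());            // prints actual
  }
}
```

It works, but string-based method lookup is exactly the kind of fragility that real closures would avoid.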

There are multiple JSRs and open source libraries which try to implement this for Java, and one of these days, I'm gonna give them a try; a couple of them look promising.

Type inference and General Wordiness

They say a picture is worth a thousand words. Well, with Java, and especially with generics, it seems that even a simple declaration is at least a thousand words. For example :

Map<String, List<String>> myMap = new HashMap<String, List<String>>();

The above line could be so much shorter and sweeter as :

Map<String, List<String>> myMap = new HashMap();

There are very few cases where I would want a map of something else when I have just declared it to be of a particular type. Other examples abound, like reading a file or working with regexes, all of which require much more syntax than in other languages. And I definitely do miss being able to say

if (myValue)

instead of

if (myValue != null)

Sigh… And don't even get me started on reflection. Reflection in Java is extremely powerful, but man, is it wordy. Not only can you not iterate over the properties of an object directly (like, say, in Javascript), you also have to worry about exceptions (which I'll get to in the next section).

Checked exceptions

That brings me to my last and biggest complaint: checked exceptions in Java. They are just plain evil. I know people swear by them, and some of their arguments even make sense. Sometimes. But the fact remains that they make me write more boilerplate, more code that I don't even care about, than anything else in Java. The idea behind checked exceptions is sound: they are a great way to declare what the caller of a method needs to worry about. But I should have options other than rethrowing or logging.

I did a very unscientific data gathering experiment of just looking at random code in different code bases (Codesearch was especially useful for this). And the majority of catch blocks I found either :

  • Logged it using logger or System.err
  • Rethrew it as a wrapped exception

Personally, I have configured Eclipse to generate all catch clauses for me by wrapping and rethrowing the exception as a RuntimeException, so that I don't have to add a throws clause to my method declaration for what is, for the most part, a non-recoverable exception.
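The generated wrap-and-rethrow pattern looks roughly like this (sketched with a stand-in IOException; the method name is made up):

```java
import java.io.IOException;

public class WrapDemo {
  static String readConfig() {
    try {
      // Stand-in for real I/O that declares a checked exception.
      throw new IOException("disk on fire");
    } catch (IOException e) {
      // Wrap and rethrow, so callers are not forced into a throws clause.
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    try {
      readConfig();
    } catch (RuntimeException e) {
      System.out.println(e.getCause().getMessage());  // prints disk on fire
    }
  }
}
```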

Furthermore, checked exceptions can sometimes even lead to catch clauses which will never ever be executed. Case in point :

try {
  byte[] bytes = myString.getBytes("UTF8");
} catch (UnsupportedEncodingException e) {
  // Can never be thrown, but I am forced to catch it.
  // Because it's a checked exception!!!
}
There are many more cases like this, but I think this is enough of a rant for now.

How to build a Deeplinking capable Flex / GWT App

I have been working extensively with GWT recently, and worked on a Flash based webpage before that. We tried many different approaches before finally settling on one that works across all frameworks which are GWT / Flash-like. What do I mean by GWT / Flash-like? These are frameworks which rely on one page serving the entire content: the state of the page changes, but the web browser does not navigate between pages the way it does on a traditional website.

And neither framework makes it easy to provide deeplinking. Oh sure, it's easy to add support for the browser's Back button in both GWT and Flex, but it's not as trivial to take a URL and browse immediately to the corresponding page without a lot of effort on the part of the developer. This is where the following structure makes life a little bit easier.

The central concept in either of these is something called a Workspace. The workspace in this architecture represents the truth of the UI: whatever the workspace contains is displayed in the UI. It is the backing model of the View. For a mail app, it might represent the current view, like Inbox or Sent Mail, the mails contained in it, and any other information needed to build and display the View. The workspace is also responsible for two more things: firing an event to all the Views saying that it has been updated, and firing another to a controller telling it to go fetch data from the backend server.

Now the Views themselves are stateless to an extent, other than holding a reference to the Workspace. These would be the Panel classes in GWT. Their only responsibility is channelling information to and from the workspace. They also listen for events on the Workspace, so whenever a Workspace_Changed event fires, the views go and grab the relevant data from the workspace and render it.

The workspace also fires an event whenever a View tells it it needs more information. In that case, the controller goes and fetches the data, stuffs it in the workspace, and the workspace then fires an event to tell the Views that they should now update themselves. So basically, there are two events propagating through the system :

  • UPDATE_VIEWS : The workspace controller fires these when it has stuffed the information from the server into the workspace. The views listen on this event and update themselves accordingly
  • UPDATE_WORKSPACE : The views fire these when it wants more data loaded from the server. The Workspace controller listens on this, and based on the state of the workspace, fetches relevant information. The catch is that it should always be possible to compute what data is needed based on the workspace. When the controller finishes, it fires the UPDATE_VIEWS event.

OK, so what does this give us with regards to deeplinking? Well, now your URL / token parser (it is trivial to add a HistoryChangeListener in both GWT and Flex) just needs to parse the URL or tokens, update the workspace with the relevant fields, and fire an UPDATE_WORKSPACE event. This triggers a server call, which fetches the relevant information and fires an UPDATE_VIEWS event, which tells the Views to go update themselves based on the state of the workspace. Voila, you have a working deeplinking implementation.
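To make the flow concrete, here is a bare-bones, framework-free sketch of the two-event cycle; all class and listener names are illustrative, and the real GWT / Flex wiring is omitted:

```java
import java.util.ArrayList;
import java.util.List;

public class DeeplinkDemo {
  interface Listener { void onEvent(); }

  static class Workspace {
    String currentView;
    List<Listener> views = new ArrayList<Listener>();
    Listener controller;

    void fireUpdateWorkspace() { controller.onEvent(); }  // controller fetches
    void fireUpdateViews() {
      for (Listener view : views) view.onEvent();         // views re-render
    }
  }

  public static void main(String[] args) {
    final Workspace ws = new Workspace();

    // Controller: "fetches" data based on workspace state, then fires UPDATE_VIEWS.
    ws.controller = new Listener() {
      public void onEvent() {
        // (pretend this is a server call keyed off ws.currentView)
        ws.fireUpdateViews();
      }
    };

    // View: renders itself from workspace state on UPDATE_VIEWS.
    ws.views.add(new Listener() {
      public void onEvent() {
        System.out.println("rendering " + ws.currentView);
      }
    });

    // Deeplink: the token parser updates the workspace, then fires UPDATE_WORKSPACE.
    ws.currentView = "inbox";
    ws.fireUpdateWorkspace();  // prints rendering inbox
  }
}
```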

Using Polymorphism instead of Conditionals

Interviewing with tech companies has become run of the mill nowadays. You have phone screens, then you are brought in for on-site interviews. And in each of these, you are asked one mind-bending, oddball algorithm question after another. I personally have been on both sides of the interview table, having asked and been asked my fair share of these. And after a while, I started questioning whether these questions provide any insight into how interviewees think, beyond their knowledge of algorithms.

It was then that I decided I wanted to try and find out if the candidates really understood polymorphism and other concepts, rather than their knowledge of algorithms, since every other interviewer would be covering that. And that was when I stumbled upon this gem of a question, which also underlies a fundamental concept of object oriented programming.

The question is simple: "Given a mathematical expression, like 2 + 3 * 5, which can be represented as a binary tree, how would you design the classes and code the methods so that I can call evaluate() and toString() on any node of the tree and get the correct value?" Of course, I would clarify that populating the tree is out of the scope of the problem, so they have a filled-in tree to work with. It also gives me a chance to figure out how the candidate thinks: whether he asks if filling in the tree is his problem, or just assumes things. You could preface this question with another about trees and traversal to check the candidate's knowledge, and whether this one would be a waste of time.

Now, one of three things can happen at this point. One, the candidate has no clue about trees and traversals, in which case there is no point proceeding down this line. Two, and this seems to happen more often than not, the candidate gives a class and method like the following :
class Node {
  char operator;
  int lhsValue, rhsValue;
  Node left, right;

  public int evaluate() {
    int leftVal = left == null ? lhsValue : left.evaluate();
    int rightVal = right == null ? rhsValue : right.evaluate();
    if (operator == '+') {
      return leftVal + rightVal;
    } else if (operator == '-') {
      return leftVal - rightVal;
    }
    // So on and so forth.
    // Same for toString()
    throw new IllegalArgumentException("Unknown operator: " + operator);
  }
}
Whenever I see code like the example above, it just screams that whoever wrote it has no clue how to work with polymorphism. I agree that some conditionals are needed, like checks for boundary conditions, but when you keep working with the same variables and apply different operations to them based on a condition, that is the perfect place for polymorphism to reduce code complexity.

In the above case, the biggest problem is that all the code and logic is enclosed in a single method. So when a candidate presents me with this solution, the first thing I ask is what happens when we need to add another operation, like division. When the prompt answer is that we add another if condition, I ask whether there is not a cleaner solution, one which keeps the code for each operation separate. Finally, you often depend on third party libraries for functionality. In those cases, you cannot edit the original source code, leaving you cursing the developer who wrote it for not allowing an extensible design.

The ideal answer to this question would be that Node is an interface with evaluate() and toString(). Then, we have different implementations of Node, like a ValueNode, an AdditionOperationNode, and so on and so forth. The implementations would look as follows :
interface Node {
  int evaluate();
  String toString();
}

public class ValueNode implements Node {
  private int value;

  public int evaluate() {
    return value;
  }

  public String toString() {
    return value + "";
  }
}

public class AdditionOperationNode implements Node {
  Node left, right;

  public int evaluate() {
    return left.evaluate() + right.evaluate();
  }

  public String toString() {
    return left.toString() + " + " + right.toString();
  }
}
You could go one step further and have an abstract base class for all operations with a Node left and right, but I would be well satisfied with just the above solution. Now, adding another operation is as simple as just adding another class with the particular implementation. Testing-wise, each class can be tested separately and independently, and each class has one and only one responsibility.
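For completeness, here is a self-contained version of the whole thing, with a hypothetical MultiplicationOperationNode added to show that a new operation touches no existing class:

```java
public class ExpressionDemo {
  interface Node {
    int evaluate();
  }

  static class ValueNode implements Node {
    private final int value;
    ValueNode(int value) { this.value = value; }
    public int evaluate() { return value; }
    public String toString() { return String.valueOf(value); }
  }

  static class AdditionOperationNode implements Node {
    private final Node left, right;
    AdditionOperationNode(Node left, Node right) { this.left = left; this.right = right; }
    public int evaluate() { return left.evaluate() + right.evaluate(); }
    public String toString() { return left + " + " + right; }
  }

  // Hypothetical extra operation: adding it requires no change to existing classes.
  static class MultiplicationOperationNode implements Node {
    private final Node left, right;
    MultiplicationOperationNode(Node left, Node right) { this.left = left; this.right = right; }
    public int evaluate() { return left.evaluate() * right.evaluate(); }
    public String toString() { return left + " * " + right; }
  }

  public static void main(String[] args) {
    // 2 + 3 * 5, as a tree
    Node expr = new AdditionOperationNode(
        new ValueNode(2),
        new MultiplicationOperationNode(new ValueNode(3), new ValueNode(5)));
    System.out.println(expr);             // prints 2 + 3 * 5
    System.out.println(expr.evaluate());  // prints 17
  }
}
```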

Now there are usually two types of conditionals you can't replace with polymorphism: comparatives (>, <) (or working with primitives, usually), and sometimes boundary cases. And even those are somewhat language specific, as in Java. Some other languages allow you to pass closures around, which obviates the need for some of these conditionals.

Of course, one might say that this is overkill: the if conditions don't really make it hard to read. Sure, with the example above, maybe. But when was the last time you had just one level of nesting? Most times, these conditionals are within conditionals which are within loops, and then every bit of readability helps. Not to mention there is a combinatorial explosion in the number of code paths through such a method. In that case, wouldn't it be easier to test that the correct method is called on each class, and test those classes individually to do the right thing?

So next time you are adding a conditional to your code, stop and think about it for a second, before you go ahead and add it in.

Testing function vs testing implementation

Often I get complaints from developers I work with that their unit tests are prone to breakage, or that they don't like writing unit tests because their code changes frequently, which forces them to change their tests as well. It's just extra overhead at that point, and starts being a chore. At least, that's their claim. Now, of course, I don't agree with this at all. Not. One. Bit.

You see, when I hear this, it always tells me that there is something wrong with the way the tests are written. A unit test that requires changes every time someone changes the code implies extremely strong coupling between how the code is written and how it is tested. Some useful indicators of this are getter methods or properties which are visible only to tests, but not to external code. Or tests which check that a loop ran 6 times or a mock was called 17 times. Sure, these assert that the function is working as intended, but if you optimize the code and reduce the recursion or the number of method calls, you then need to go and update your expectations.

Of course, some of this is unavoidable when you are working with classes that have mocks injected into them. But in such a case, unless it is plain delegation, there must be some logic happening, and that logic should be the target of your tests, not the mock delegations. Usually, when I work with mocks, I have a few tests to make sure the right methods are getting called, and only if there is logic do I test further. Otherwise, one or two tests, and then I go and test the implementation of the mocked class to make sure it works under all conditions.

So let's consider a run of the mill binary search method that might be tested with mocks (a little bit contrived, but bear with me on this) :

public int binarySearch(List<Integer> items, int itemToFind, int low, int high) {
  // Do the needful, in a recursive fashion
}

// A brittle test
public void testUsingMocks() {
  final List<Integer> list = mockery.mock(List.class);
  mockery.checking(new Expectations() {{
    oneOf(list).size(); will(returnValue(3));
    oneOf(list).get(1); will(returnValue(6));
  }});
  assertEquals(1, binarySearch(list, 6, 0, 2));
}

Now, while a bit contrived, this is a familiar sight when mocks are used for testing. Or it might happen that, to check the correctness of the algorithm, the indices at which the splits happen are stored in a list and verified in the test. These are the kinds of whitebox tests that make unit tests brittle. And the more of them there are, the harder it is to maintain or refactor the code. Rather than testing some use cases and boundary conditions, this is testing whether the algorithm itself is correct. Useful in some particular cases, but normally not required unless you are developing the algorithm itself.

I would argue that it's rare to write these kinds of tests if you write your tests before you write the methods. With TDD, you just write your expectations: what you plan to give the method, and what you expect out of it. You then write your code to get the test to pass, and you might use internal variables or logic that the test really doesn't care about. These tests are durable, hold up to refactorings, and give you a nice safety net. There are times when they end up becoming integration tests rather than unit tests, but I still believe they deliver more bang for the buck.
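To contrast with the brittle mock test above, here is a black-box sketch of the same thing: a (sketched) recursive binarySearch tested purely on inputs and outputs, which survives any internal refactoring:

```java
import java.util.Arrays;
import java.util.List;

public class SearchTest {
  // A straightforward recursive implementation; the test never peeks inside it.
  static int binarySearch(List<Integer> items, int target, int low, int high) {
    if (low > high) return -1;
    int mid = (low + high) / 2;
    int val = items.get(mid);
    if (val == target) return mid;
    return val < target ? binarySearch(items, target, mid + 1, high)
                        : binarySearch(items, target, low, mid - 1);
  }

  public static void main(String[] args) {
    List<Integer> sorted = Arrays.asList(1, 3, 6, 9, 12);
    // Black-box: assert on inputs and outputs only, not on internal calls.
    System.out.println(binarySearch(sorted, 6, 0, 4));   // found: prints its index
    System.out.println(binarySearch(sorted, 7, 0, 4));   // not found: prints -1
  }
}
```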

Of course, when you start testing edge cases, you do end up with mostly code-dependent whitebox tests, and those are still fine, since edge cases shouldn't change that often. Though the fact that there are conditionals at all usually signifies that there is a polymorphic object hiding in there. But that's a blog post for another day.

Is Inheritance overrated ? Needed even?

To give some context to this topic, the idea was brought to me by Alex Eagle. I was happily coding away when Alex sprang his idea of composition over inheritance for Noop – a language we are developing with testability and dependency injection in mind. My gut reaction was that this was blasphemy, and that it couldn't be done. You can't just do away with inheritance; it's one of the building blocks of OO programming languages. But now, after letting the idea digest for a few days, it doesn't seem so far fetched any more. And here's why.

Let me first talk about the biggest problems with vanilla inheritance as we have it in Java. Joshua Bloch hits the nail on the head in his Effective Java item about favoring composition over inheritance, but let's do a quick recap anyway.

The biggest problem is that inheritance often ends up breaking encapsulation, because the child class depends on the implementation of the parent class. Between releases, something in the parent class implementation can change and break all child classes without their code ever being touched. Another common gotcha is in how protected fields and members are used: often, the parent class changes the values of fields depending on how methods are called, and not understanding this behavior leads to buggy or simply wrong behavior in the subclasses.

Another problem with a subclass, especially from the point of view of unit testing, is that there is no way to create an instance of the subclass in isolation: every time I create an instance of the subclass, I am forced to construct the parent class as well. In most cases, this shouldn't be a problem, but I have run into situations where the parent class is a landmine waiting to explode, with a default constructor that is not explicit about its dependencies. Instant kablaam!!! Or the parent class loads things you don't really care about and makes the test slow. There was one insidious test I ran into which extended a base test case, which did the same thing, about 7 layers deep. The test itself didn't care about 3 or 4 of those layers, but had to jump through all the hoops and set everything up because of its parent classes.

There are a few more issues, which are well documented in Effective Java Item 16, “Favor composition over inheritance.” I won’t bore you further, assuming I have convinced the skeptics about the problems with inheritance. If not, go read that book, and you shall be convinced. Then I wanted to explore whether it is at all possible to have a programming language which does away with inheritance entirely (as Noop proposes).

So when do we use inheritance? To me, polymorphism is about the only case where subclassing is appropriate; the rest is usually just plain old code reuse. So unless you want a base abstract class with some methods defined (like a Shape with a draw() method, and Circle and Rectangle subtypes), inheritance is not really needed.

In Java, interfaces let you perform polymorphic operations with abandon and convert between types. And interfaces don’t saddle you with the requirement of constructing a base class for every instance.

Also, if you use composition, you can reuse code through delegation. For example, you could define a Shape interface with a DefaultShape implementation. Rather than subclassing a concrete Shape type, Rectangle implements Shape. And if you want to reuse some code, let Rectangle take in a DefaultShape instance and delegate to it when necessary. This offers multiple benefits. For one, you are not tied down to getting things from the base class: in your test, you could pass in a mock, a null, whatever you want. The only problem is that this option is not viable if you don’t have an interface. If that is the case (or the thing you are subclassing is in a package outside of your control), then you are stuck doing inheritance the old-fashioned way.
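A minimal sketch of that delegation setup. The Shape / DefaultShape / Rectangle names come from the example above; the actual methods (area, describe) are invented for illustration:

```java
interface Shape {
    double area();
    String describe();
}

// A default implementation whose behavior we want to reuse.
class DefaultShape implements Shape {
    @Override
    public double area() {
        return 0;
    }

    @Override
    public String describe() {
        return "a shape";
    }
}

// Rectangle implements the interface directly and reuses code by
// delegating to a Shape it is handed, instead of extending a class.
// In a test, the delegate can be a mock, a stub, or even null if the
// delegated methods are never exercised.
class Rectangle implements Shape {
    private final Shape delegate;
    private final double width;
    private final double height;

    Rectangle(Shape delegate, double width, double height) {
        this.delegate = delegate;
        this.width = width;
        this.height = height;
    }

    @Override
    public double area() {
        return width * height; // our own logic, no base class involved
    }

    @Override
    public String describe() {
        return delegate.describe(); // reused via delegation
    }
}

public class DelegationDemo {
    public static void main(String[] args) {
        Rectangle r = new Rectangle(new DefaultShape(), 3, 4);
        System.out.println(r.area());     // 12.0
        System.out.println(r.describe()); // "a shape", from the delegate
    }
}
```

Note that Rectangle overrides area() with its own logic but delegates describe(): exactly the pick-and-choose reuse that subclassing makes awkward.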

And this is (at least the last time I heard the proposal) what Noop aims to solve. When you want to subclass, you tell the class what you want to compose. Regardless of whether it is an interface or not, Noop will create that class with an instance of your composition type. By default, all methods of the composition type are available on the composing class, and calls delegate automatically unless you override them. You get complete control over object creation, and this approach could even support a form of multiple inheritance.

What do other people think? Is this feasible? Am I missing some obvious case where inheritance is the only approach and composition just doesn’t cut it (either in Java today or in the Noop proposal)? Are you interested in Noop? Drop me a line.

Gathering Rapid Feedback with TDD, better known as Infinitest!

So I have had this question often, especially from TDDers: what is the best way to get rapid feedback as you type? One of the biggest things when you do Test Driven Development is that you can see a red / green bar or some other indicator of your tests, because that indicator tells you what you should be doing. Green bar? Time to write a failing test so you can add the next feature. Red bar? Well, you know what is broken, so time to go write the code that makes it pass. Initially, I used to just hit Alt + Shift + X, T to run the test I was currently editing. And then I learnt the joys of Ctrl + F11, which re-runs your last run configuration.

But still, I was left wanting more. I mean, I don’t want to have to hit something to tell Eclipse to go run my tests. Eclipse already knows when I save, as it can automatically kick off a build. Why couldn’t I just have another step afterwards which runs my tests, so I don’t have to do anything? It was along this line of thought that I stumbled upon Misko‘s setup of using build steps to run all unit tests at each save. Hallelujah! But the more I used this, the more I started noticing its pain points. You actually had to set up each project to do this? Uh, no. Not happening. Too much effort. I like being lazy.

So then I happened upon this Infinitest thing. The description sounds promising: “A continuous test runner for your JUnit tests.” And what’s more, it “integrates with Eclipse and IntelliJ” and is “intelligent and runs only tests that are needed.” I’m sold, where do I sign up? So I went ahead and installed it, and tried it out. And it actually seems to live up to its claims, so far. Its main configuration has a single checkbox, which basically says run Infinitest or don’t. Nice. And it integrates seamlessly with the Problems view. Like so:

Infinitest results in the problem view


As can be seen above, I made a change which broke two tests. Half a second after I saved those changes, I have problem markers popping up all over my project telling me that the last thing I just did blew up some tests. I can double click on the markers, go to the exact line where the failure is, and see if it was an issue of the test being wrong or me being stupid. And in some cases, that isn’t even needed, because you know the tests shouldn’t have broken.

Infinitest seems to be smart enough to recognize all JUnit tests without you having to point them out, and runs only the tests that matter, not all of them. But if you have other kinds of tests, then you might run into trouble. I am still playing around with it, and will probably update this post or add a new one later with more detailed info if I deem it necessary. But in the meantime, for those of you who want to run tests at every save, check out Infinitest. It is awesome!

Eclipse Productivity Shortcuts

Back in college, I used to be a Notepad nazi, so to speak. I coded all my giant programs solely in Notepad (the most I upgraded to was TextPad). And once I joined Google, I was apathetic to IDEs, so I just picked IntelliJ and went with it. Somewhere down the line, I attended a Testing and Refactoring workshop, and the only IDE available was Eclipse. And it was for a great reason.

While the focus of the workshop was testing and refactoring, what they did do, which I applaud them for, was show us some awesome shortcuts. And suddenly, I was doing these insane refactorings in the blink of an eye. It was then that I started searching for and learning every Eclipse shortcut that would help me be more productive. And before I knew it, I was typing 6 words a minute and coding up 100 words a minute :) . And I loved it.

And so, today, I just want to share the best shortcuts that make life easier. Without further ado:

Ctrl + Space : One of the two most important keyboard shortcuts that Eclipse offers. This one is commonly known for autocomplete, but not many people know that it is also context sensitive. For example, hitting Ctrl + Space in the middle of typing shows you all members and methods that begin with your text, while hitting it with nothing typed shows you all members and properties available. But the real Eclipse masters know that hitting Ctrl + Space after typing for or foreach shows autocomplete options for generating a for loop or for-each loop. And if you do it right after assigning a collection or a list, it fills in the loop variables for the for-each loop. Autocomplete after typing test lets you generate the skeleton of a JUnit test method. Autocomplete after typing new generates a skeleton for a constructor call, which you can tab through and fill in. There are many more uses to be found: you can generate / override method signatures in child classes, for instance. Just use and abuse it, and you will learn so much.
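To illustrate, the foreach template expands into an ordinary enhanced for loop. Roughly what Eclipse produces after you assign a collection (the variable names are the ones Eclipse would guess; the exact skeleton depends on your template settings):

```java
import java.util.Arrays;
import java.util.List;

public class ForEachDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        // Typing "foreach" + Ctrl + Space right after the assignment
        // yields a skeleton roughly like this, with the iterable and
        // loop variable already filled in:
        for (String name : names) {
            System.out.println(name);
        }
    }
}
```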

Ctrl + 1 : If there is just one more shortcut you remember from this post, let it be this one. The other super awesome, context-sensitive shortcut in Eclipse, which is basically Quick Fix. If there is an error on a line, Ctrl + 1 shows you potential options to fix it, like importing a class, adding an argument to a method, or fixing the method signature. If you just typed a method call which returns something, you can hit Ctrl + 1 and ask it to assign the result to a new local or field variable. You can hit Ctrl + 1 on a parameter to a method and assign it to a field. Ctrl + 1 on a variable can allow you to inline it, and on an assignment, can allow you to split up the declaration and assignment, or convert it to a field, parameter, etc. It is, by far, The Most Awesome Keyboard Shortcut that you can know and use. Especially on errors!

Ctrl + F11 : Reruns the last run configuration that was executed. If you do TDD, then Alt + Shift + X, T followed by Ctrl + F11 is the most standard approach.

Ctrl + Shift + R : Shows the Open Resource dialog. Type to filter, and jump directly between classes. I love this shortcut, and use and abuse it!

Ctrl + Shift + O : Organizes Imports, and gets rid of unused imports.

Ctrl + O : Shows the methods and properties of a class. You can start typing to filter and hit enter to jump to a particular signature / type. Hitting Ctrl + O again toggles showing inherited members. Very useful for jumping between sections in a class, or finding that one method you want to get to.

Ctrl + T : Opens the Type Hierarchy. Shows all super classes as well as sub classes / implementing types in your classpath. Very useful for jumping to an implementation class. Can be invoked from the class type, or even a method signature. Hitting Ctrl + T again toggles between the supertype and subtype hierarchy. Again, you can type to filter once you are in this menu.

Ctrl + / : Comment / Uncomment code. Single or multiple lines, depending on what you have selected. Enuff said.

Alt + Shift + R : One of my most used shortcuts, Rename. It renames anything from variables to methods to even classes, renaming the class files if necessary. Also fixes all references to refer to it by the new name. Can sometimes break if there are compile errors, so watch out when you use it. You can also ask it to fix all textual references as well.

Alt + Shift + M : Extract Method. Super useful for breaking up a larger method into smaller chunks. If the code block you have selected does not need to return more than one value, and looks reasonable as a separate method, it pulls up a prompt where you can edit the method signature, including return type, method name, and the order and type of parameters. Very useful.

Alt + Shift + C : Only useful when the cursor is on a method signature, but this one allows you to refactor and change the method signature. This includes changing the return type, method name, and the parameters to the method, including order, and default values if you are introducing a new one. Automagically fixes all references to said method.

Alt + Shift + L : Once you have an expression selected (a method call, or whatever), Alt + Shift + L extracts it to a local variable. It prompts you for the name of the variable, and automatically infers the type as best as it can. Extremely useful shortcut!

Alt + Shift + Up / Down : A useful one. Up selects the next bigger enclosing code block; Down shrinks the selection to the next smaller one. Great in conjunction with refactoring shortcuts like Extract Local Variable and Extract Method.

Alt + Shift + T : Brings up the Refactor menu. Depending on the context, this will show options like Rename, Move, Extract Interfaces and classes, Change Method Signature, etc. Nice to know, but not one I use very often. The ones I do use have already been listed above.

Alt + Shift + S : Shows the Source menu. This includes comment-related options, and the ever useful Override / Implement Methods, Generate Getters and Setters, and much more. Some of the menu options have direct shortcuts, but a lot of the generate commands don’t, so it is useful to know.

Alt + Shift + X : Pulls up the Run menu, and shows what key you have to press to run a particular type. Now I generally use this as Alt + Shift + X, followed by T, which basically executes a JUnit Test. Fastest way to run unit tests without leaving the comfort of your keyboard.

Alt + Up / Down : Moves a block of lines up or down. Rather than selecting, hitting Ctrl + X, and then going to the right place and pasting, just select the lines and use Alt + Up or Down to move them. Automatically handles indentation depending on the block. Very convenient.

Ctrl + D : Nice and simple: deletes the current line the cursor is on. If you have multiple lines selected, they are all blown away. Much faster than selecting a line and hitting delete.

UPDATE: Adding in some of the shortcuts that I forgot or were mentioned in the comments for easy finding

Ctrl + L : Jump to a Line number

Ctrl + Shift + T : Display available types. A better version of Ctrl + Shift + R if you are only looking for Java classes

Ctrl + Alt + Up / Down : Duplicate selected lines above or below. Easier than hitting Ctrl + C followed by Ctrl + V.

Ctrl + Alt + H : This one I didn’t know about, but it pulls up the Call Hierarchy, showing you all callers and users of the method under the cursor. Super useful, especially if you are refactoring.

Ctrl + Shift + L : Show the list of shortcuts. You can hit it again to go in and edit your shortcuts.

Separation anxiety?

We all have separation anxiety. It’s a human tendency. We love inertia, and we don’t like change. But why should your code have separation anxiety? It’s not human (even though it might try to grow a brain of its own at times)!

I bring this up because I got the chance to work with someone who had some questions on how to test UI code. A fairly innocuous question, and in most cases, my response would have been: keep the UI code simple and free of any logic, and don’t worry too much about it. Or write some nice end-to-end / integration / browser-based tests. So with that response set in mind, I set off into the unknown. Little did I know that was the least of my concerns. As I sat down to look at the code, I saw that there were already tests for it. I was optimistic now. I mean, how bad could things be if there are already tests?

Well, I should remember next time to actually look at the tests first. Anyway, there were tests, so I was curious what kinds of tests they wanted to write and what kind of help I could provide, if any. It turns out the class was some sort of GUI component, which had some fields and, depending on whether they were set or not, displayed them as widgets inside of it. So there were, say, 5 fields, and differing combinations of what was set would produce different output. The nice thing was that the rendered data was returned as a nice Java object, so you could easily assert on what was set, get a handle on the widgets inside of it, and so on. The method was something along the following lines:

public RenderedWidgetGroup render() {
    // (field names here are illustrative; the originals were lost)
    RenderedWidgetGroup group = new RenderedWidgetGroup();
    if (this.name == null) {
        return group;
    }
    group.addWidget(new TextWidget(this.name));
    group.addWidget(
        new DateWidget(
            this.updatedTimestamp == null
                ? this.createdTimestamp : this.updatedTimestamp));
    return group;
}

So far, so good, right? Nice, clean, should be easy to test if we can set up this component with the right fields. After that, it should just be a few tests based on the different code paths defined by the conditionals. Yeah, that’s what I thought too. So me, being the naive guy that I was, said: yeah, that looks nice, should be easy to test. I don’t see the problem.

Well, then I was taken to the tests. The first thing I see is a huge test. Or at least that’s what I thought it was. Then I read it more closely, and saw it was a private method. It seemed to take in a bunch of fields (fields with names that I distinctly remembered from just a while ago) and churn out a POJO (Plain Old Java Object). Except this POJO was not the GUI component object I expected. So obviously, I was curious (and my testing senses were starting to tingle). I followed through to where it was called (which wasn’t very hard, it being a private method) and lo and behold, my worst fears were confirmed.

The POJO generated by the private method was passed in to the constructor of the GUI component, which (on following it further down the rabbit hole) did something like the following in its constructor:

public MyGUIComponent(ComponentId id,
                      Component parent,
                      MyDataContainingPOJO pojo) {
    super(id, parent);
    readData(pojo);
}
And readData, as you would expect, is a method that:

  • Is private
  • Looks through the POJO
  • If it finds a field which is not null, sets it on the GUI component

And of course, without writing the exact opposite of this method in the unit test, it just wasn’t possible to write unit tests. And suddenly, it became clear why they were having trouble unit testing their GUI components. If they wanted to test render (which was really their aim), they would have to set up this POJO with the correct fields (which of course, to make things more interesting / miserable, had sub-POJOs with sub-POJOs of their own. Yay!) to exercise them in their test.

The problem with this approach is twofold:

  1. I need tests to ensure that the parsing and reading from the POJO logic is sound, and that it correctly sets up the GUI Component.
  2. Every time I want to test the render logic, I end up testing (unintentionally, and definitely unwantedly) the parsing logic.

This also adds, as I saw, obviously complicated pre-test setup logic which should not be required to test something completely different. This is a HUGE code smell. Unit testing a class should not be an arduous, painful task. It should be as easy as setting up a POJO and testing a method. The minute you have to perform complicated setup to unit test a class (note, the keyword is unit test; you can have integration tests which need some non-trivial setup), stop! There is something wrong.

The problem here is a mixing of concerns in the MyGuiComponent class. As it turns out, MyGuiComponent breaks a few fundamental rules of testability. One, it does work in the constructor. Two, it violates the Law of Demeter: it asks for something it doesn’t need, just to dig out the things it does need. Three, it mixes concerns, i.e., it does too much: it knows both how to create itself and how to do the rendering logic. Let me break this down further:

Work in the constructor

If you scroll back up and look at the constructor for MyGuiComponent, you will see it calling readData(pojo). Now, the basic step in testing any class is creating an instance of the class under test (unless it has static methods. Don’t get me started on that…). So every time you create an instance of MyGuiComponent, guess what? readData(pojo) is going to get called. Every. Single. Time! And it cannot be mocked out; it’s a private method. And god forbid you really didn’t care about the pojo and passed in a null. Guess what? It will most probably blow up with a NullPointerException. So suddenly, that innocuous method call in the constructor comes back to haunt you when you write your tests (you are writing tests, aren’t you?).

Law of Demeter

Furthermore, if you look at what readData(pojo) does, you would be even more concerned. At its base, MyGuiComponent only cares about the handful of fields it needs to render. Depending on the state of each of these fields, it adds widgets. So why does the constructor ask for something totally unrelated? This is a fundamental violation of the Law of Demeter, which can be summed up as: “Ask for what you need. If you have to go through one object to get what you need, you are breaking it.” A fancier definition can be found on the web if you care, but the minute you see something like a.b().c(), there is a potential violation.

In this case, MyGuiComponent doesn’t really care about the POJO. It only cares about some fields in the POJO, which it then assigns to its own fields. But the constructor still asks for the POJO instead of asking for the fields directly. What this means is that instead of just creating an instance of MyGuiComponent with the required fields in my test, I now have to create the POJO with the required fields and pass that in. Convoluted, anyone?
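To make the contrast concrete, here is a toy version of the smell (all class names invented): one class digs through an object graph for a value, the other asks for the value directly and is trivial to construct in a test:

```java
// Toy object graph standing in for the sub-POJOs-within-POJOs above.
class Profile {
    private final String name;

    Profile(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

class Pojo {
    private final Profile profile;

    Pojo(Profile profile) {
        this.profile = profile;
    }

    Profile getProfile() {
        return profile;
    }
}

// Demeter violation: asks for the whole Pojo just to dig out a name.
// A test must build the entire graph before it can call greet().
class DiggingGreeter {
    String greet(Pojo pojo) {
        return "Hello, " + pojo.getProfile().getName(); // a.b().c()
    }
}

// Asks directly for what it needs; one string and the test is ready.
class DirectGreeter {
    String greet(String name) {
        return "Hello, " + name;
    }
}

public class DemeterDemo {
    public static void main(String[] args) {
        System.out.println(new DiggingGreeter().greet(new Pojo(new Profile("world"))));
        System.out.println(new DirectGreeter().greet("world"));
    }
}
```

Both produce the same greeting, but only one of them drags an unrelated object graph into every test.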

Mixing Concerns

Finally, something that could be considered an extension of the previous point, but deserves a rant of its own: the fundamental problem with the above class is that it mixes concerns. It knows both how to create itself and how to render itself once created. And the way it has been coded forces the creation code path to execute every single time. To provide an analogy for how ridiculous this is, it is like a Car asking for an engine number and then using that number to build its own engine. Why the heck should a car have to know how to create its engine? Or, for that matter, itself? Similarly, why should MyGuiComponent know how to create itself? Which is exactly what is happening here.


Now that we had arrived at the root of the problem, we immediately went about trying to fix it. My immediate instinct was to pull out MyGuiComponent into the two classes that I was seeing. So we pulled out a MyGuiComponentFactory, which took up the responsibility of taking in the POJO and creating a GuiComponent out of it. Now this was independently testable. We also added a builder pattern to the MyGuiComponent, which the factory leveraged.

class MyGuiComponentFactory {
    MyGuiComponent createFromPojo(ComponentId id,
                                  Component parent,
                                  MyDataContainingPOJO pojo) {
        // Actual logic of converting from pojo to
        // MyGuiComponent using the builder pattern
    }
}

class MyGuiComponent {
    public MyGuiComponent(ComponentId id,
                          Component parent) {
        super(id, parent);
    }

    public MyGuiComponent setName(String name) {
        this.name = name;
        return this;
    }
}
And now, suddenly (and expectedly, I would like to add), the constructor for MyGuiComponent becomes simple. There is no work in the constructor. The fields are set up through setters and the builder pattern. So we started writing the unit tests for the render methods. It took a single line of setup to instantiate MyGuiComponent, and we could test the render method in isolation. Hallelujah!!
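As a self-contained miniature of the fixed design (the ComponentId / parent plumbing and the real widget types are omitted; the names are stand-ins), a render test now needs nothing but the chainable setters:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the real widget classes.
class TextWidget {
    final String text;

    TextWidget(String text) {
        this.text = text;
    }
}

// The slimmed-down component: no POJO in the constructor, fields are
// populated through chainable setters, and render() only consults
// the fields that were actually set.
class GuiComponent {
    private String name;

    GuiComponent setName(String name) {
        this.name = name;
        return this;
    }

    List<TextWidget> render() {
        List<TextWidget> widgets = new ArrayList<>();
        if (name != null) {
            widgets.add(new TextWidget(name));
        }
        return widgets;
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        // One line of setup; render() is now testable in isolation.
        GuiComponent component = new GuiComponent().setName("title");
        System.out.println(component.render().size());
        System.out.println(new GuiComponent().render().size());
    }
}
```

Each combination of set fields becomes one tiny test: chain the setters you care about, call render(), assert on the widgets.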

Disclaimer:
No code was harmed / abused in the course of the above blog post. There were no separation issues whatsoever in the end; it was a clean, mutual break!
