Hg versus Git – and why I chose Hg

After my unpleasant experience with setting up Git, I've had some time to play around with it and use it in a project. Git is really nice for a DVCS. What I liked was the git status view, especially with colors turned on. Grouping added and modified files is much nicer than the Subversion-style "A", "D" (...) list. I also like that Git is fast, and not very difficult to use for day-to-day jobs with tutorials like the Svn-Git Crashcourse. The killer features for Git are git rebase and especially branching. Git differs from other VCSs in that it treats revisions and branches as views onto a single directory. Other VCSs - like Hg and SVN - use different directories for different branches, which makes it much harder to jump between branches. Git seems to have the biggest community, mindshare and momentum to become the next SVN/CVS.

What made it unusable for me was the Windows port. It just does not work reliably. Cygwin is a problem in its own right, and msysgit isn't up to par either. I use a MacBook Pro, so Git works for me, but others who needed to access the repo from Windows were out of luck. Windows support isn't very high on the list of Git priorities - perhaps because it's linked to Linux kernel development.

Now to Mercurial. It seems to have the second biggest mindshare (more than bzr) after Git. Some big Java projects use it and there are some good blog posts about Mercurial, so I gave it a try. The setup of a central server over http was easy; it just worked after following the instructions. Its command style is more like SVN than Git, so the learning curve is even shallower than with Git. It works with Windows, which was the main reason for my switch from Git to Hg. The downsides: SVN-style hg status, no rebase and SVN-style branches. It's nice that .hgignore is just a file in the repo and accessible by all developers (for the Maven target directory and other generated files).
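As an example, a minimal .hgignore for a Maven project could look like the following; the exact patterns are only an illustration, not taken from my repository.

  syntax: glob
  target/
  *.class
  *.log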

Should Git become usable under Windows, I'll probably move again.

(My personal take: SVN will become a DVCS by adding local repos and moving to hash revisions, and will eat into the DVCS market.)

Thanks for listening.

Update: I forgot about this post:

For Subversion 2.0, a few of us are imagining a centralized system, but with certain decentralized features. We’d like to allow working copies to store “offline commits” and manage “local branches”, which can then be pushed to the central repository when you’re online again.

I want to meet Cameron Purdy ;-) Who do you want to meet?

Partially because of the good discussions on TSS about Coherence and the knowledge he has, but mostly because of his recent presentation "The Top 10 Ways to Botch Enterprise Java Application Scalability and Reliability". I enjoyed the video very much and laughed several times, so loudly that my colleague looked up. Cameron made a joke every 30 seconds - no one in the audience laughed, though I found them all funny.

Meeting Cameron - well, no chance, I know - and I wouldn't know what to say.

Others I'd like to meet are, above all, Crazy Bob for Dynaop (and Guice), Cedric for his stand on dynamic languages, and Rickard of course.

Whom do you want to meet and why?

Using Google Guice Providers to Solve Law of Demeter Problems

A post on the Google testing blog made me think. Their post presents an example of a class

class Mechanic {
  Engine engine;
  Mechanic(Context context) {
    this.engine = context.getEngine();
  }
}

which depends on a Context object in its constructor when it really only depends on Engine, a violation of the Law of Demeter. This often happens with Context objects, which play the role of a central object repository giving access to objects in different parts of an application. The resulting code is hard to test and hard to reuse, and the Google testing team suggests refactoring it.

Sometimes major refactorings are not possible. With IoC (Dependency Injection) this can be solved without (much) refactoring. For example, with Google Guice one can write a Provider that supplies an object of a given class, in this case Engine.

import com.google.inject.Inject;
import com.google.inject.Provider;

// Hides the Context behind Guice: clients only ever see Engine.
public class EngineProvider implements Provider<Engine> {
    private final Context context;

    @Inject
    public EngineProvider(Context context) {
        this.context = context;
    }

    public Engine get() {
        return context.getEngine();
    }
}

Binding the Provider to Engine,

  bind(Engine.class).toProvider(EngineProvider.class);

the application will use the provider (probably bound in request scope) to extract the engine from the context. The Mechanic can be rewritten to use Engine directly, but no other code in the potentially large application needs to change.

class Mechanic {
  Engine engine;

  @Inject
  Mechanic(Engine engine) {
    this.engine = engine;
  }
}
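For completeness, here is a minimal sketch of how the wiring could look in a Guice module. The class name CarModule and the way the Context instance is obtained are my assumptions for illustration; only the toProvider binding comes from the example above.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

// Hypothetical module: binds the legacy Context and routes Engine through EngineProvider.
public class CarModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(Context.class).toInstance(new Context()); // assumption: Context has a no-arg constructor
        bind(Engine.class).toProvider(EngineProvider.class);
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new CarModule());
        // Mechanic gets its Engine via EngineProvider, without ever seeing the Context.
        Mechanic mechanic = injector.getInstance(Mechanic.class);
    }
}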

Thanks for listening.

Update: A more clever Provider could support the Null Object pattern.

public class EngineProvider implements Provider<Engine> {
    private final Context context;

    @Inject
    public EngineProvider(Context context) {
        this.context = context;
    }

    public Engine get() {
        if (null == context || context.getEngine() == null) {
            return new NullEngine(); // better: Engine.NULLOBJECT
        }
        return context.getEngine();
    }
}
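The null object itself could be as simple as the sketch below. The Engine interface and its start() method are assumptions on my part, since the original example never shows what Engine looks like.

// Hypothetical Engine with a built-in null object, matching the comment above.
public interface Engine {
    void start();

    // Do-nothing implementation that callers can use instead of null.
    Engine NULLOBJECT = new Engine() {
        public void start() {
            // intentionally empty
        }
    };
}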