Archive for January, 2013


To use or not to use… AngularJS

by Haro Lee

Like many developers around the world, when I heard about AngularJS for the first time I was fascinated by the idea and its seemingly easy implementation.

However, unfortunately, like some of those same developers, my enthusiasm quickly faded when the team decided to give it a go on a client project.

The first problem we encountered was the lack of information on how to implement it for, yes, IE7 at that time. Some may argue that it is our own fault that we still work with this ancient beast, but if the client still wants it, what choice do we have?

So we scoured the internet for a solution, tried a few options, and finally made it work the way we wanted.
After that we were all set to build a wizard-style form with some chained lists that update themselves when their parent is updated.

Everything seemed to work fine, apart from some ups and downs while we got ourselves familiar with AngularJS, and finally our AngularJS-powered wizard form was good to go. It felt great… so we ran it on IE7.
It worked! But the start-up of the page was painfully slow. I could count off several seconds before the page was fully initialised.

I still quite like the AngularJS way of binding data directly in HTML. So I thought I'd give it another go to see whether the IE7 problem we had was just my own bad coding.
I knew that whatever I did, performance on IE7 would be lower than on the latest browsers, but I wanted to see how low it really was compared to other browsers, including its own descendants IE8 and IE9.

The tests…

I found some interesting framework performance tests, like this one from Paul Hammant’s blog and an Angular vs Knockout vs Ember comparison. The latter gave me the idea of how to measure the performance differences between browsers.

I downloaded a simple todo list sample using AngularJS and RequireJS (from a very useful site if you want to see how all those different frameworks achieve the same thing) and modified it for these tests. You can see my test page here and the modified todo list on my github repository.

I created three separate virtual machines with exactly the same system settings: one with IE7 on Windows XP, one with IE8 on Windows XP, and the last with IE9, Chrome and Firefox on Windows 7, as you cannot have IE9 on Windows XP. Safari was tested on Mac OSX.

And here is the result. Safari did amazingly well, but it wasn’t really a fair comparison as it was not running on the same virtual machine setup, so I will ignore Safari in this table.

The x-axis is the number of operations per second, so higher is better. From top to bottom it’s Chrome, Firefox, IE7, IE8, IE9 and finally Safari, in alphabetical order.

I expected Google Chrome to top the table, since AngularJS is from Google, but it was Firefox that did best.
Not surprisingly, none of the IEs performed very well, but it was a surprise that even IE9 managed only a third of Chrome’s performance. Another surprise was that, in the more intense tests, like 30 cycles of pushing data (oh yeah, so intense…) in one sample, IE7 actually outperformed IE8.

After the first test, I wanted a fair comparison of Chrome and Firefox against Safari, so I created another test just for these three browsers on Mac OSX.

Here is the second test, and the result is…


From top Chrome, Firefox, and Safari.

Chrome still couldn’t beat Safari in this test, but did slightly better than Firefox. Firefox showed little difference between Windows and Mac OSX, even though the virtual machine was much lower spec than the Mac used.

To use or not to use…

As you can clearly see from the results, AngularJS can be quite fun to use if your project doesn’t have to support Internet Explorer, especially the older versions, but I wouldn’t recommend it if you’re thinking of using it for an adamant client that refuses to bring their systems up to date.



Effortless Continuous Integration and Deployment with Node.js, Travis CI & Capistrano

by Joe Stanton

At Red Badger there has been a significant transition to open source technologies, which are better suited to rapid prototyping and highly scalable solutions. The largest area of growth for us as a company is Node.js development; there are a number of active Node projects, some of which are now in production.

Node.js has gained enormous traction since its inception in 2009, yet it is still an immature technology (although maturing rapidly), so ‘best practices’ in the context of Node do not really exist yet. Initially, we did not have the streamlined Continuous Integration and deployment process we were used to from the .NET development world, so we began to look for a solution.

The Tools

Historically, we made constant use of JetBrains TeamCity as a CI server on our .NET projects. TeamCity is an excellent solution for these types of projects, which we would wholeheartedly recommend. However, it was hosted and maintained by us, running on a cloud instance of Windows Server 2008. It was both a heavyweight solution for our now much simpler requirements (no lengthy compile step!) and not ideal for building and testing Node.js and other open source technologies, which run much better in Linux-based environments.

In searching for a new solution, we considered:

  • Jenkins – a well established, powerful and complex Java based CI server
  • Travis CI – Extremely popular in open source, particularly among the Ruby community. Travis CI is a lightweight hosted build server which typically only works on public GitHub repositories, although this is changing with its paid service, Travis Pro
  • Concrete – an extremely minimal open source CI server we found on GitHub, written in CoffeeScript by @ryankee
Driven by our desire for simplicity in our tools (and our new-found affection for CoffeeScript), we opted for Concrete. 

After making a few modifications to Concrete, we deployed it to a (micro!) EC2 instance, set up some GitHub service hooks and began reaping the rewards of Continuous Integration once again! We set up build-success and build-failure bash scripts to manage deployment and failure logging, and all was working well. After running Concrete for a couple of weeks on a real project, we started to miss some fundamental features of more well established CI solutions, such as a clean, isolated build environment, and even basics like email notifications. There were also a number of occasions where tests would time out, or builds would seemingly never start or would get lost in the process. It became apparent that such a simple CI solution wouldn’t cut it for a real project, and that we should look to a more reliable hosted solution.

Travis CI
Travis CI is a hosted CI server predominantly aimed at open source projects. It can very easily be integrated into a public GitHub repository with the addition of a simple .travis.yml config file which looks something like this:
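For a Node.js project, a minimal .travis.yml might look something like this (the Node version and test command shown are illustrative, not our actual file):

```yaml
# Illustrative Travis CI config for a Node.js project
language: node_js
node_js:
  - "0.8"      # assumed Node version for the period
script: npm test
```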
Travis have recently launched a paid service for private repositories called Travis Pro. We decided to give it a try after being impressed by their free version. It is currently in beta but our experience so far has been very positive. Configuration is a matter of adding the .travis.yml config file to source control, and flicking a switch in the Travis dashboard to set up post-commit hooks to start triggering builds.
Travis runs a build from within an isolated VM, eliminating the side effects of previous builds and creating a much stricter environment in which every dependency must be installed from scratch. This is perfect for catching bugs or deployment mistakes before they make their way to the staging server. Travis also provides a great user interface to view the current build status, with a live tail of console output, which we find very useful during testing.
Additionally, Travis provides some nice features such as pre-installed test databases and headless browser testing with PhantomJS. Both of these features could prove extremely useful when testing the entire stack of your web application.

Capistrano
On a number of our Node projects, we were performing deployments with a simple makefile which executed a git checkout over SSH to our staging server. Whilst this worked fine initially, it seemed rather low-level and error-prone, with no support for rollbacks or for the cleanups required to remove artifacts produced at runtime on the server. We also needed the opportunity to pre-compile and minify our CoffeeScript, and didn’t think that the staging server was the right place to be performing these tasks.

After a small amount of research, we found Capistrano. It quickly became apparent that Capistrano is a very refined and popular tool for deployment – particularly in the Ruby on Rails community. Capistrano is another gem (literally – in the Ruby sense) from the 37signals gang. Despite its popularity in the RoR community, the tool is very generic and flexible, and merely provides sensible defaults which suit a RoR project out of the box. It can be very easily adapted to deploy all kinds of applications, ranging from Node.js to Python (in our internal usage).
Installing Capistrano is very easy: simply run the command gem install capistrano. This will install two commands, ‘cap’ and ‘capify’. You can prepare your project for Capistrano deployment using the command ‘capify .’; this will place a Capfile in your project root which tells Capistrano where to find the deployment configuration file.
The heart of Capistrano is the DSL based deploy.rb config file. It specifies servers and provides a way to override deployment specific tasks such as starting and stopping processes. Our deploy.rb customized for Node.js looks something like this:
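A sketch of such a deploy.rb, in the Capistrano 2 style of the period (the application name, repository, server address and the forever-managed script are all placeholder assumptions, not our real configuration):

```ruby
# Illustrative Capistrano deploy.rb for a Node.js app
set :application, "myapp"
set :repository,  "git@github.com:example/myapp.git"
set :scm,         :git
set :user,        "deploy"
set :deploy_to,   "/var/www/myapp"
set :use_sudo,    false

server "staging.example.com", :app, :web, :primary => true

# Override the default Rails-centric tasks with Node equivalents,
# using forever to manage the process.
namespace :deploy do
  task :start, :roles => :app do
    run "cd #{current_path} && forever start server.js"
  end
  task :stop, :roles => :app do
    run "cd #{current_path} && forever stop server.js"
  end
  task :restart, :roles => :app do
    run "cd #{current_path} && forever restart server.js"
  end
end
```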
We use the Forever utility provided by Nodejitsu to ensure that Node processes are relaunched if they crash. Forever also deals with log file redirection and provides a nice command line interface for checking on your processes, so is also definitely worth a look if you haven’t already.
Once this is all configured, all it takes is a simple ‘cap deploy‘ to push new code onto a remote server. Rollbacks are just as simple, ‘cap deploy:rollback‘.
Continuous Deployment
Hooking Travis CI and Capistrano together to automatically deploy upon a successful build is trivial. Travis provides a number of “hooks” which allow you to run arbitrary commands at various stages in the build process. The after_success hook is the right choice for deployment tasks.
Capistrano requires an SSH key to your staging server to be present, so commit this to your source control. Then simply add the following to your .travis.yml configuration file:
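The addition would look something along these lines (the exact commands are illustrative; installing the capistrano gem and tightening the key’s permissions before deploying are assumptions about the setup):

```yaml
# Illustrative after_success hook for continuous deployment
after_success:
  - gem install capistrano
  - chmod 600 deployment/key.pem
  - cap deploy
```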
Where deployment/key.pem is the path to your SSH key.
End result
Fast and dependable Continuous Integration which allows an efficient flow of features through development, testing and staging. With speedy test suites, you can expect to see deployments complete in under a minute after a ‘git push‘.

Mind the step definition: 10 reasons why an experienced developer must actively take part in the coding of Cucumber step definitions.

by Arnaud Lamotte

Step definitions are an important part of the BDD process with Cucumber. They are the representation of the specifications in code, and instructions for Cucumber on what to do. Each is composed of a regular expression and a piece of code; as Cucumber reads through the Gherkin specifications, whenever it finds a matching regex it triggers the code associated with it. Writing step definitions is coding, and it creates overhead, especially at the beginning of a project or when automating specifications for the first time. It can therefore be tempting to delegate the task to testers, junior developers or even external consultants. Please find below 10 reasons not to do so.
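As a simplified sketch of that mechanism (this stubs out what Cucumber does internally and is not the real Cucumber API), a step definition pairs a regex with a block, and a matching Gherkin line triggers the block with the captured groups:

```ruby
# Simplified model of Cucumber's matching: not the real API.
STEPS = {}

def Given(regex, &block)
  STEPS[regex] = block   # a step definition is a regex plus a block
end

Given(/^I have (\d+) cucumbers$/) do |count|
  @basket = Array.new(count.to_i, :cucumber)
end

# When Cucumber reads the line "Given I have 3 cucumbers", it finds
# the matching regex and invokes the block with the captured groups:
line = 'I have 3 cucumbers'
STEPS.each do |regex, block|
  if (m = regex.match(line))
    block.call(*m.captures)
  end
end
```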

  • The quality of the code that drives development should be of production quality

The value of automated specifications comes from the confidence that both business people and developers have in them. A ‘test’ passing must mean to business people that the feature is functional and to the developer that it is time for refactoring. Any doubt would trigger manual checking making the test rather useless. Brittle or buggy automation code is a recipe for failing to implement Acceptance Testing. Test code has to be maintained just like production code; in particular it has to be refactored and therefore benefits from unit tests.

  • Design for testing

The easier it is to plug automated tests into the software under development, the better the quality of the tests, and therefore the better the final product. The person who writes the step definitions must have the ability to take design decisions.

  • End to end tests vs. in-process test

End-to-end tests are very useful because they cover the entire stack, but they can be brittle and slow. The person who writes the step definitions should be able to find the right balance between end-to-end and in-process tests, so that they remain manageable and confidence in them remains high.

  • It is better to write automation code in the language of the application

There are reasons why Cucumber has been ported to different platforms (Ruby, JavaScript, .NET, Perl…): skills, implementations, in-process acceptance testing. The person who writes the step definitions must be fluent in the language of the SUT (System Under Test) to be able to implement in-process tests.

  • While writing step definitions, developers start visualizing the domain model

Automated specifications drive the development and the design. While writing step definitions, developers think about the domain logic. The classes and methods in the production code will be the same as those in the test code.

  • Retrofitting Acceptance tests/regression tests in an existing system

When working from the outside in, the tests are written first and should therefore always have a clear format. When retrofitting tests into an existing system it is sometimes more difficult to write them: they can require a lot of preconditions and support code. This is usually the sign of a missing abstraction in the production code. The person who writes the step definitions should be able to identify this problem and address it.

  • In the BDD cycle, unit tests and acceptance tests are independent but nonetheless related

Write a failing acceptance test, watch it fail, write unit tests, make them pass, and refactor until the acceptance test passes. Typically the unit tests will be at the level of each step of the specification. While writing step definitions, developers can already think about the implementation of their unit tests.

  • Collaboration

BDD is all about collaboration. When developers write step definitions it is an opportunity for them to ask for further clarifications if necessary.

  • Knowledge of the SUT and access to its components is important

For instance it may be interesting in some cases to set the context in the database. The person who writes the step definition will need the appropriate access rights.

  • If testers wanted to code…

…they would be developers.

Does this mean testers should not write any automated tests? No. It means testers can do what they do best: exploring, looking for edge cases and implementing them, using the step definitions written by the developers for the happy path as a template. Furthermore, they must manage the tests and decide which tests should be part of the regression suite. A tester/developer pair is ideal for writing step definitions; it leads to collective test ownership.


Thread-based or Event-based?

by Stuart Harris

Q: What do our 3 favorite open source projects (node, redis and nginx) have in common?  Apart from being uber-cool?

A: They are all single threaded.  

But aren’t they all really fast and highly scalable?  Yep.  So how does that work?

Nginx, redis and node are all event-based.  They have an event loop that will listen for an event saying that an asynchronous operation (IO) has completed and then execute the callback that was registered when the async operation started.  Rinse, then repeat.  It never waits for anything, which means that the single thread can go hell-for-leather just running code.  Which makes it really fast.
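That turn-taking can be seen in a few lines of Node-style JavaScript (a minimal illustration, not taken from any of these projects):

```javascript
// A single-threaded event loop in miniature: the current turn of the
// loop always runs to completion before any queued callback fires.
var order = [];

setTimeout(function () {
  order.push('callback'); // queued; runs on a later turn of the loop
}, 0);

order.push('sync'); // the running code is never interrupted
// once the loop turns over, order is ['sync', 'callback']
```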

In days gone by, when we were Microsoft slaves, we used to wrestle with multithreading as a way of dividing up work.  In web apps every request started a new thread.  We’d also use the Task Parallel Library (TPL), which was not an easy abstraction.  Combine that with an event processing library like Reactive Extensions (Rx) and you’re asking for a lot of trouble.  The new await keyword in C# helps out a lot, but either way you have to think about thread safety all the time, and about all kinds of locking strategies to deal with concurrent access to the same data.  And even with all that, it isn’t as fast.

The difference between the two worlds lies in the way that pieces of work are orchestrated.

Event-based applications divide work up using callbacks, an event loop and a queue.  The unit of work, or task, is a callback.  Simple.  Only one callback is ever executing at a time.  There are no locking issues.  You can write code like you’re the only kid on the block.  You decide when you’re done and then effectively yield control to someone else.  Everyone is really polite so it just works.

Thread-based applications essentially divide work up in hardware.  Because each piece of work has its own thread, and will block if it needs to (like when it’s waiting for IO), the CPU will suspend that thread and start running another that is waiting.  Every time that happens there is quite a hefty context switch, including moving about 2MB of data around.  In effect the hardware decides when to yield control and you don’t get much of a say.

Who’d have thought that a single thread, dealing with everything, could be faster than multiple threads each dealing with just one thing?  Well, on a single core, that may be true.  On multiple cores it actually may also be true.  That’s because you’ve probably got nginx and node and redis all running on the same machine – simplistically, on a quad core, that’s one core each and still one left over 🙂

But isn’t writing synchronous code for a multithreaded environment a lot easier than writing asynchronous code for a single threaded environment?  Well, maybe, a little.  But some great patterns have emerged within the node community that really help.  

The simplest continuation-passing style (CPS) is the callback, which actually is not at all hard once you get used to it, and happens to be a great way to encapsulate and really easy to modularise.  The pattern for async functions is that the last argument is always the callback, and the pattern for callbacks is that an error is always the first argument (with results after that).  This standardisation makes composition really easy.
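A minimal sketch of those two conventions together (the divide function is purely illustrative):

```javascript
// Node convention: the callback is the last argument to an async-style
// function, and an error (or null) is the first argument to the callback.
function divide(a, b, callback) {
  if (b === 0) {
    return callback(new Error('cannot divide by zero'));
  }
  callback(null, a / b);
}

divide(10, 2, function (err, result) {
  // err is null here, result is 5
});

divide(1, 0, function (err, result) {
  // err is an Error; result is undefined
});
```

Because every function in a chain follows the same shape, composing them (or handing them to a control-flow library) becomes mechanical.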

There are a ton of npm modules that can often help reduce complexity.  The best, in my opinion, is still Caolan’s async.  It’s still the most popular and follows the node conventions.  And there are also a few CPS compilers that allow you to code in a more synchronous style.  I wouldn’t have recommended these in the past, but there are a few, such as tamejs and Iced CoffeeScript, that use an “await, defer” pattern that is quite nice.  We’re using CoffeeScript more and more these days, and this “icing” is very tempting (seeing as we’re compiling anyway), but we haven’t strayed that way yet.

We’ve been writing big apps in node since October 2011 and have learnt a lot about how to separate concerns and modularise our code.  It’s a lot different to the object-oriented class-based separation we were used to, but after your head is reprogrammed to use a functional style it becomes second nature and actually much easier to structure.  Caolan’s post on programming style for node sums it up nicely.  If you hear anyone say that node is no good for big projects, tell them that all you have to do is follow a few simple rules and then it becomes perfect.  And fast.


Tech Round Table 2012

by David Wynne

Last Tuesday we got together as a tech team (developers, testers, agile project managers) and held a Tech Round Table of 2012. The premise was simple – let’s make a list of all the tech we’d used in 2012 and then figure out what we liked best, what worked well and what we’d like to learn more of in 2013.

Each of us took a Post-it Notepad and a pen, and started a quick review of each project we’d worked on in 2012 – noting down the tech, tools, languages and platforms we’d encountered along the way. As we started to each stick up our list of tech, and roughly group them, it became apparent we were going to need a bigger space to complete the exercise than the wall I had initially chosen…

We cleared a table and started to re-group the notes into more clearly defined categories until we had everything in more or less the right place and could clearly see each note and patterns started to emerge.

As we ran through each tech, sharing experiences and use cases, it became clear that, bar a few examples, almost everybody had, in one form or another, some exposure to most of the tech on the table. A fair amount of tech that was new to the team in 2012 had been learnt, implemented, tested and deployed to production. Those of us in the world of tech often take the level of change we encounter for granted, but it’s actually quite eye-opening to take a step back and consider how much knowledge we are able to gain and put to great use in a relatively short space of time.

Most Common

Where piles had appeared, we could start to see the most common tech in use across the team. When taken together it makes for a pretty awesome stack and one with which you could build a ton of truly cool other tech. In no particular order, the most in-use tech is:

  • Node
  • Express
  • CoffeeScript
  • Mocha
  • Jade
  • Jam
  • Less
  • Bootstrap
  • jQuery
  • Redis
  • MongoDB
  • elasticsearch
  • Vagrant
  • Chef & Puppet

Most Popular

Next up we gave ourselves 8 votes each (not quite sure why it was 8, but that was the number we settled on) and put a dot, star, or in the case of Jon & Stu, a smiley face on the tech we either liked the most and/or had the most interest in. Unsurprisingly, there was a fair amount of crossover with the most common tech in use, but the placing wasn’t what you might expect:

1st Place

  • Vagrant w/Chef & Puppet
  • Jam

2nd Place

  • Node
  • CoffeeScript
  • Bootstrap
  • Redis

3rd Place

  • Sinon
  • Jade
  • Capistrano
  • Cucumber + Relish
  • JavaScript Code Coverage (Istanbul / Blanket)
  • Lawnchair

The most popular tech is heavily littered with tools and frameworks that make development life easier; Vagrant and Jam taking the top spot speaks volumes about the way our team works. If you simply take the 4 bits of tech in 2nd place on their own, you could build a lot of fast and scalable applications in next to no time. 3rd place is occupied by more efficiencies in Jade creating cleaner, more maintainable HTML and Capistrano for taking the guesswork out of multi-environment deployment. It’s also nice to see a few bits of tech in there that we’re keen to do more of this year in Cucumber, JavaScript Code Coverage and Lawnchair.

New Trends

DevOps is both an emerging trend in the industry and something that exploded into Red Badger with huge rewards in 2012. Vagrant together with Chef and Puppet have transformed how we think about provisioning platforms, both in the development environment and the cloud. Expect to hear a lot more about that from us on the blog soon.

The real-time web has also had a big impact on a number of projects we ran in 2012 with heavy use of both WebSockets and Server-sent Events (SSE) to create dynamic and constantly updated applications. Redis together with WebSockets/SSE are a match made in heaven.

I’m not sure I can get away with calling the cloud “new” any more, but it was interesting to see that over the course of the year we’d had experience using most of the major big hitters in AWS, Azure, Rackspace and Heroku. We’ve always been heavily reliant on the cloud and still have no physical infrastructure of our own to speak of. All 4 of the Infrastructure/Platform as a Service offerings mentioned bring something slightly different to the table that we’ve leveraged across our various projects, but AWS has probably had the most usage, with Azure and Rackspace usage tailing off.

What’s Missing

In a word: .NET. Whilst we’ve certainly delivered projects during 2012 using .NET, and have a long history of doing so, there’s no doubting that within our own little tech trend the move has been away from MS tech towards open source. The reasons why are probably the subject of another blog post entirely, but given our mantra of “choosing the best technology for the job” – it would appear that open source has fulfilled that edict more often than not.

What Next

We’ve been so busy during 2012 that we haven’t done the best job of sharing the fruits of our labour with you, dear reader. Our new years resolution has been to remedy that and we’re kicking off the year with a cornucopia of blogs that cover our favourite tech of 2012 – so stay tuned!

In the meantime, you can peruse the list of tech we used in 2012 in all its gory detail (and in no particular order) below.


The Full List


Template Engines

Data Serialization



Languages

  • C#
  • Java
  • Objective-C
  • PHP
  • Python
  • CoffeeScript
  • JavaScript
  • Ruby


Identity and Access Management

Package Managers/Dependency Managers

Server Application Stack


Client-side Frameworks


Realtime Web





Cloud Infrastructure


Back in the sett

by Stephen Fulljames

Hello, I’m Stephen. I’m a front-end web developer, focusing these days mainly on JavaScript and Node, but with a wide range of experience in most of the technologies that make stuff look good in a web browser.

I’ve been working with Red Badger as a freelance collaborator on and off for almost two years, in fact since it was just Stu, David and Cain at the Ravensbourne incubator programme, but we’ve finally decided to make it official and I’m going to be joining the team here permanently.

Switching back to a full-time role after a stint freelancing has been a tough decision, but the strength of the team and the technologies in use here make it an exciting place to be. With a background in interface development, I’ve had exposure at the integration stage to the various languages – PHP, C#, Java, and so on – that make the pages I build actually work, without really gaining a deep understanding of them. However, now that I can write server-side code in JavaScript as well, with Node, it feels like I can really build on my existing skills to do new and interesting things.

With other developers in the team adopting Node from the other direction, from their excellent C# and infrastructure experience, it feels like we can bring the client and server parts of web development closer together – whether in shared code, better build processes or improved performance. On the recent BBC Connected Studios pilot project, Joe, David and I were all able to contribute not only ideas but also implementation across the whole application. There are still some problems to solve and the best ways of working will settle down over time, but as a company we want to contribute to the community and share what we learn so there’ll be more blogging on these subjects in the near future.

Now if you’ll excuse me, I need to go and get used to being an employee again… 


Firestarter Events

by Jon Sharratt

Before last year closed out and we headed off for festivities and copious amounts of booze, I mentioned a new idea: to give Red Badger employees the opportunity to use their training budget (£2,000 each year) to invest in our own ideas.  I am pleased to announce that the first event will be taking place internally on the 31st January – 1st February.

The team will consist of myself, Sari, Haro and Joe, and the theme of the idea I have chosen is real-time social media coupled with music festivals – think Foursquare meets Waze, Facebook and beyond.

The new name for these rapid prototyping events, arrived at collaboratively over the web (as some of us are on client site) using a great collaboration application found by Sari, is Firestarter Events.  Along with the name we now also have a couple of logos to kickstart the branding and get things well under way.

As I previously mentioned, I am hoping for this process to be as open as possible and will post successes and failures throughout the two days.  In the meantime you can check out the agenda and invitation that was sent out to the team below:

Agenda | Invitation

Watch this space for photos of the event and more importantly the end MVP that we will be releasing live at the end of the two days.


Birdsong Retirement

by David Wynne

Today we’re announcing the imminent removal of Birdsong, our Windows Phone Twitter client, from the Windows Phone Marketplace.

We started developing Birdsong on pre-release Windows Phone hardware and, when it was released almost 2 years ago, it was one of the first fully featured Twitter clients to hit the marketplace. At its height it was the top-ranked paid-for Twitter client in 91% of territories, and the #1 paid-for social app in 51% of all territories.

To any casual observer it is fair to say that our zeal for maintaining Birdsong’s market position has waned and we now feel it fairer to remove it from the marketplace than leave it there until the next change in the Twitter API renders it unusable.

Our decision is fuelled by two primary factors. Firstly, Twitter has made it abundantly clear that they no longer wish to encourage the development of clients that emulate the core Twitter experience. Any developer who has built a client around the Twitter API will have always been aware that they were ultimately at the mercy of Twitter, but in recent months Twitter have had a distinct change of heart with regard to 3rd-party developers, which has been well documented and dissected elsewhere to the point that there is no need to cover it further here.

Secondly, the Windows Phone Marketplace has not proven itself, for us at least, to be a financially viable proposition at this moment in time. The revenue generated from Birdsong sales vs. the internal cost of development and push-service hosting simply doesn’t add up. We genuinely have high hopes for the Windows Phone platform and still very much hope that it manages to find its feet.

We’d like to extend our thanks to all the users of Birdsong over the last 2 years, especially the enthusiastic folk who helped us beta test each version. We apologise to those who are still actively using Birdsong and are inconvenienced by our decision; it was not an easy one to come to. Thankfully our friends over at Rowi are offering a great Twitter client (both free and premium) and we’d encourage any Birdsong users to take a look at their app.