Posts Tagged ‘Node.js’


Product Hunt Hackathon

by Jon Sharratt

Last weekend Product Hunt hosted a global hackathon, opening up their API to the community. Budding developers got together on-site at YCombinator for a two-day hackathon to come up with and deliver a new product idea. Remote entries were also allowed, and from Hong Kong to France to our home town of London, plenty of fresh ideas were ready to be developed.

I applied on my own (a bit last minute) and, without much thought, put down the first idea that came into my head: ‘crowdsourcing for products’ via the Product Hunt API. My main personal goal was to prove to the people over at YCombinator that I could come up with and deliver an idea over two days. After a couple of days I got an email with an invite to participate. That was it: I was ready to hack!

I got up Saturday morning and opened up the Badger HQ (a little hungover from the night before) to find Albert, a fellow badger who had stayed the night on the sofa after having a few beers with some of the other badgers. The tech I decided to use was a slight risk, as I had only dabbled with it previously. The tech chosen:

I began by basing the project on what the Badger Academy Cubs have been creating for our own internal project – they are doing a great job, as you might have seen already. Albert started to take an interest in the project idea and got set up. Another great addition to the team a couple of hours later was Viktor, a beast at LiveScript – just what we needed. He saw on our company’s Slack that I was in the office and got involved. With that, we had a great team to get this hack really moving.

We decided to get the core functionality we wanted to show off to the judges done on Saturday. Then on Sunday we would style it up and tweak the UI to make it a more usable and nicer experience. I had implemented the core layout using Twitter Bootstrap, with styling from a theme on Bootswatch. Later Viktor informed us of an awesome library, React-Bootstrap, and converted the project so we could change the layout quickly and more effectively.


Product Fund Day 1


By the end of Saturday the project was taking shape with the huge help of Viktor and Albert. Authentication done, Product Hunt API consumed and Stripe Checkout integrated to allow users to pledge money. I had previously created a quick and dirty Node.js Passport strategy to make the authentication process easier. With all of that said, it was time to call it a night, ready for a fresh start on Sunday.

Sunday came along and all that was left to do was add validation to forms and finish off some of the more advanced parts of the journey, such as giving product makers the ability to assign costs and a deadline for features to be funded. Viktor also added the awesome Firebase as a storage layer for pledges and feature requests, rather than keeping them in memory on the client.

Not only did it give us an easy way to implement a storage layer, it also allowed the UI to live-update whenever pledges or features were added to the site. It really helped make the site come alive and made it more engaging for users. As a side note, the blend of React, LiveScript, Node and Firebase is a match made in heaven for this kind of project (a blog post for another time).
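For the curious, the live-update behaviour boils down to a publish/subscribe store: every write notifies all listeners, so each connected view can re-render as pledges arrive. A minimal sketch of the idea in plain JavaScript (our own toy names, not Firebase’s API):

```javascript
// Minimal sketch of the live-update idea (not Firebase's API):
// a shared store that notifies every subscriber whenever a pledge is added.
function createStore() {
  var pledges = [];
  var listeners = [];
  return {
    onPledge: function (fn) { listeners.push(fn); },
    addPledge: function (pledge) {
      pledges.push(pledge);
      listeners.forEach(function (fn) { fn(pledge, pledges); });
    },
    all: function () { return pledges.slice(); }
  };
}

// Usage: two "clients" subscribe; both see the new pledge immediately.
var store = createStore();
var seenByA = [], seenByB = [];
store.onPledge(function (p) { seenByA.push(p); });
store.onPledge(function (p) { seenByB.push(p); });
store.addPledge({ feature: 'dark mode', amount: 10 });
```

Firebase does this across clients over the network, which is what made the hack feel alive with essentially zero extra effort.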


On Sunday we were also joined by @jazlalli1, who worked in another team on a cool hack for Product Hunt, using their data to produce analytics and trends.

As the deadline approached, our own lovely QA Roisi joined on Slack and did some testing remotely, which helped iron out a few creases. Once we were happy, we were ready to submit the hack. We had created a droplet on DigitalOcean, registered the domain and got it deployed and submitted on time.

Check out the final result at

The next day we found out that we had made the top 20 finalists, and we had some great feedback from the community.

We then waited to hear about the finalists and who had won. It turns out our small idea made the top 3 hacks of the first ever Product Hunt hackathon. All in all, a great job by everyone for two days’ work.

The prize:

“The top 3 winners will receive guaranteed interviews with 500 Startups, Techstars, and office hours with YC partner and founder of Reddit, Alexis Ohanian!”

Just to add, there were some great entries – check out the other finalists at



JLT World Risk Review – Rapid Innovation

by Cain Ullah

Afghanistan Country Dashboard

We have recently delivered a project for Jardine Lloyd Thompson (JLT) to re-design and build their World Risk Review website. We’re currently in the final hardening sprint, doing some bug fixing and UAT. We’ll be able to talk more about the benefits in a case study once the site has been live for a while and we can look at the analytics. In the meantime I want to discuss some of the great bits of innovation (both tech and process) we produced in delivering this project, which took just 8 weeks in total with only 6 weeks of development.

What is World Risk Review?

World Risk Review is a country risk ratings modelling tool that JLT founded in 2006, providing corporations, banks and other organisations involved with international trade and investments with an assessment of short to medium term country risk. This allows users to build well informed strategies to manage political, security and economic risks. 

JLT is the only Insurance Broker to have invested in this capability in-house so they required a really modern website that would allow users of World Risk Review to have an intuitive and highly informative experience when consuming JLT’s expert advice.

What did we do?

World Risk Review is made up of three key areas – peril ratings data for each country, key insights (articles, insights, reports and blogs) and news. Red Badger’s role was to make these three areas easily accessible, engaging and informative. With more and more devices being used in the financial services sector, it would also need to work on tablet and mobile. So as well as being visually rich, the site would also need to be lightweight with regard to page size so that it remains speedy on mobile devices.


We designed the site using a visually rich set of dashboards to allow users to consume the data in a really intuitive way, compare different types of data and perform country comparisons. This is underpinned with easy navigation throughout the site via the dashboards and a flexible and fast search function.

How did we do it?

The site is effectively made up of two applications. The main website and a custom built admin console which consists of an analytics section as well as a custom built content management system (CMS) that provides the ability to do inline editing of content, preview of changes and deployment straight into the live environment.

I am not a techie, so I’ll ask the developers of the site to produce some more technical blogs with more detail (They promise me these are to follow!). However, below is a brief outline of how we delivered the project.

The tech

For the visual dashboards we used D3.js. D3.js is a JavaScript library designed to bring data to life using HTML, SVG and CSS. It is based on web standards. It is efficient, flexible, lightweight (which means it is fast) and the animations and interactions look and feel beautiful. The front-end is then underpinned by a Node.js server application stack (Any JavaScript run outside of the browser is run on Node.js including the admin console, APIs and Livescript – see below) and a powerful search function built using Elasticsearch. We have built an incredibly fast search based on tags, allowing flexible filtering of data with some advanced features such as “did you mean” suggestions.
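Elasticsearch provides these suggestions out of the box; purely to illustrate the idea behind “did you mean”, here is a toy sketch that proposes the known tag closest to the user’s query by edit distance (our own illustration, not how Elasticsearch implements it):

```javascript
// Toy "did you mean" over a tag list: suggest the known tag with the
// smallest Levenshtein (edit) distance to the query. Illustrative only –
// the real site uses Elasticsearch's built-in suggestions.
function editDistance(a, b) {
  var rows = [];
  for (var i = 0; i <= a.length; i++) {
    rows[i] = [i];
    for (var j = 1; j <= b.length; j++) {
      rows[i][j] = i === 0 ? j : Math.min(
        rows[i - 1][j] + 1,                                     // deletion
        rows[i][j - 1] + 1,                                     // insertion
        rows[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)    // substitution
      );
    }
  }
  return rows[a.length][b.length];
}

function didYouMean(query, tags) {
  return tags.slice().sort(function (t1, t2) {
    return editDistance(query, t1) - editDistance(query, t2);
  })[0];
}

// e.g. a typo in a country tag:
var suggestion = didYouMean('afganistan', ['afghanistan', 'argentina', 'armenia']);
```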

My co-founder Stuart is a huge fan of Component (read his blog), so the website is built using components in almost every way you could use them: from packaging bits of JavaScript, through custom UI elements (such as the autocomplete tag field, which we vastly improved – public repo here; we also improved the datepicker component – public repo here), to whole pages being rendered by a hierarchy of components. All the client-side JavaScript we use in the website is packaged as components, including the visualisation code. The benefit of building a site this way is that it is ruthlessly efficient and every bit of code in the application has a use. You build lots of tiny modules that are great at doing one thing and then you hook them all together.

We also switched from CoffeeScript to LiveScript to compile our JavaScript, writing in a functional style. The developers on the project find it really nice to use. It has tons of little tools for the typical boring tasks you do all the time, and a lot of functional programming features, including the amazing prelude-ls, which make it ideal for data processing such as the static site generator (see below).

Last year we re-built as a static site. We loved the results, so decided to follow a similar technical architecture for World Risk Review. The static site generation architecture deploys the site at the time content is updated, so that when users access the site they are served very simple static pages rather than requesting content from a database on each action. The result is a website that is more secure and can serve pages much faster than traditional content management systems (such as Drupal). The site is deployed to an Amazon S3 bucket and distributed via CloudFront to 51 edge locations around the globe. Originally we were using Docpad as our static site generator (as we had for our own site), but we found it started to really slow us down, so we built our own static site generator, which brought the time it takes to generate the HTML from the source markdown documents and Jade layouts down from about 90 seconds to about 6. This allowed us to work much faster and also enabled us to build a CMS where you can preview your changes almost in real time. Having tested the application around the globe, it is incredibly fast wherever you are, with as little as 10 milliseconds and no more than 300 milliseconds to the first byte.

We have also set up continuous delivery using Travis CI and Ansible. This is incredibly important for how we develop software, but it also underpins how we have architected the CMS. Using continuous delivery allowed us to commit changes into a staging environment many times a day and make them available to test immediately. In the production environment, once the project is live, the content editor will be able to deploy their changes in the CMS straight into the live environment. The custom CMS is built on Git. An administrator can view the site as if they were a user, but can edit any element on the page, save it and then review comprehensive line-by-line changes to each document (or add new documents such as news items). Once they are happy with the changes, a publish button will commit to Git and deploy into live. It allows multiple users to edit the site at the same time without stepping on each other’s toes and merges their changes in a smart way, so content management is no longer a race over who saves first. To build the in-line editing we looked at a number of options, such as CreateJS, but again decided to build our own editing tool, using JavaScript components for YAML and front-matter.
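For a flavour of what the editing tool works with: each document pairs a YAML front-matter block with a markdown body, and the first step is splitting the two. A simplified sketch of our own (flat key/value pairs only; real YAML is much richer):

```javascript
// Simplified front-matter split: a document starts with a YAML block
// fenced by '---' lines, followed by the markdown body. This sketch
// handles flat "key: value" pairs only.
function splitFrontMatter(text) {
  var match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(text);
  if (!match) return { attributes: {}, body: text };
  var attributes = {};
  match[1].split('\n').forEach(function (line) {
    var idx = line.indexOf(':');
    if (idx > -1) attributes[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  });
  return { attributes: attributes, body: match[2] };
}

var doc = splitFrontMatter('---\ntitle: Country risk update\nlayout: news\n---\nBody text here.');
```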

The final piece of the puzzle, and by no means the least important, was to build in analytics. Using the power of Elasticsearch, we built a tag-based analytics tool that allows JLT to monitor user behaviour on the site. They can add custom tags to each user (such as “watch list”), filter, sort and search. This gives JLT a quantitative view of customer behaviour, allowing them to adapt their future strategy around what their customers want.

The process

Given that we had only 8 weeks to deliver the project, of which 6 weeks were for development, we decided to use Kanban as the methodology of choice, reducing as much friction in the process as possible and allowing the developers to do exactly that – develop. The backlog was tightly managed by Sinem (the project manager) and the product owner from JLT, who was deployed full-time to sit with us in our office every day. I cannot stress how important it was having the product owner integrated into the team full-time. We managed user stories on a Kanban board and, although physical boards are great, the developers managed all tasks in Github. This reduced duplication of effort, increasing productivity. Stand-ups each morning were held around the Kanban board, talking about what we had been doing at story level, and we were focussed on getting stories through to delivery as soon as possible, so we used WIP limits to streamline the process.

To ensure quality control, we used Github flow to manage the process of building new features, ensuring that no piece of code is deployed without first going through code review by a 2nd pair of eyes. There are some simple rules to Github Flow: 1) Anything in the master branch is deployable. 2) To create something new, you create a new branch off of master. 3) You continue to commit to that branch locally until your feature is complete. 4) When you think your feature is complete, you raise a pull request. 5) Another developer then reviews your code and upon sign-off it can be merged to master. 6) Continuous Deployment then deploys your changes immediately.

When delivering a project at this speed, it is paramount that your features are tested properly. To do this, we integrate a tester into the team and get them to test as soon as a feature is deployed. In the past we have used separate tools such as Youtrack as our bug management system. However, in this project, we switched to Github issues. Having one central place for the developers to see all features and bugs together in Github has most certainly helped productivity of the team.


In just 6 weeks of development we achieved an incredible amount. We had an integrated team of Project Management, UX, Design, Dev and Test, all dependent on constant communication to get the job done. We built an exceptionally well designed, usable site on a really innovative tech stack. The use of Kanban, Github Flow and Github Issues proved to be an incredibly productive way to deliver the project. It was a very intense environment of rapid delivery, but it was lots of fun too. JLT were a great client, not just in allowing us to be innovative with our tech and process, but also in the effort they put in to make this collaborative. We couldn’t have delivered so quickly without their constant involvement.

As always, there is room for improvement in our process and the tech team are looking forward to new technology emerging such as those contained in the Web Components spec. Our project retrospective has highlighted some areas for improvement and we will continue to iterate our process, always pushing to try and provide our clients with better value. We have loads of great ideas about how the World Risk Review site can be improved in future phases but after 8 weeks, it is currently in a great place to deliver a far improved experience for both JLT’s customers and their admin staff.



Full Frontal 2013

by Stephen Fulljames


When assessing conferences for myself, I tend to break them down into “doing conferences” and “thinking conferences” – the former skewed more towards picking up practical tips for day-to-day work, the latter more thought-provoking, bigger-picture, ‘I want to try that’ kind of inspiration.

Despite being pitched as a tech-heavy event for Javascript developers, Remy and Julie Sharp’s Full Frontal, held at the wonderful Duke of York’s cinema in Brighton, has always felt like more of the latter. That’s not to say the practical content isn’t very good. It is, very very good, and naturally the balance has ebbed and flowed over the event’s five-year history, but the general feeling I get when I walk out at the end of the day is always ‘Yeah, let’s do more of that!’ It’s been that way right from the start, in 2009, when Simon Willison ditched his prepared talk at a few days’ notice to speak about a new platform in its infancy – a little thing called Node. So I was hopeful that this year’s conference would provoke similar enthusiasm.

High expectations, then, and a promising start with Angus Croll taking us through some of the new features in EcmaScript 6 (ES6), aka “the next version of Javascript”. Presenting a series of common JS patterns as they currently are in ES5, and how they will be improved in ES6, Angus made the point that we should be trying this stuff out and experimenting with it even before the specification is finalised and browser support fully implemented. As David commented, if you’ve done Coffeescript you’re probably well prepared for ES6 – really, one of the aims of Coffeescript was to plug the gap and drive the evolution of the language – so it’s hopefully something I will be able to pick up fairly easily.

This was followed by Andrew Nesbitt, organiser of the recent Great British Node Conference, demonstrating the scope of hardware hacking that is now becoming possible using Javascript. As well as the now-obligatory attempt to crash a Node-controlled AR drone into the audience, Andrew also explained that “pretty much every bit of hardware you can plug into USB has a node module these days” and demonstrated a robotic rabbit food dispenser using the latest generation of Lego Mindstorms. Being able to use Javascript in hardware control really lowers the barrier to entry, and the talk only reinforced the feeling I got after the Node Conf that I need to try this (and ideally stop procrastinating and just get on with it).

Joe McCann of Mother New York gave a high-level view on how mobile is increasingly reshaping how we interact with the web, with the world and with each other. Use of phones as payment methods in Africa, where availability of bank accounts is challenging, has reached around 80% of the population with systems such as M-Pesa. And SMS, the bedrock of mobile network operators’ revenue since the early 90s, is being disrupted by what are known as “over the top” messaging services that use devices’ data connections. These are familiar to us as iMessage and Whatsapp, but also growing at a phenomenal scale in the far east with services such as Line which is offering payment, gaming and even embedded applications within its own platform. Joe’s insight from a statistical point of view was fascinating, but it didn’t really feel like many conclusions were drawn from the talk overall.

Andrew Grieve and Kenneth Auchenberg then got down to more development-focussed matters with their talks. The former, drawn from Andrew’s experience working on mobile versions of Google’s productivity apps, was a great explanation of the current state of mobile performance. It turns out that a lot of the things we often take for granted, such as trying to load Javascript as required, aren’t as important now as perhaps they were a couple of years ago. Mobile devices are now able to parse JS and selectively execute it, so putting more effort in to minimising DOM repaints, using event delegation, and taking advantage of incremental results from XHR calls and progress events are likely to be better bets for improving performance.

Kenneth spoke about the web development workflow, a subject he blogged about earlier in the year. His premise was that increasingly complex browser-based debug tools, while helpful in their purpose, only really fix the symptoms of wider problems by adding more tools. We should be able to debug any browser in the environment of our choice, and he demonstrated this by showing early work on RemoteDebug, which aims to make browsers and debuggers more interoperable – shown by debugging Firefox from Chrome’s dev tools. By working together as a community on projects like this we can continue to improve our workflows.

My brain, I have to admit, was fairly fried in the early afternoon after an epic burger for lunch from the barbeque guys at The World’s End, a spit-and-sawdust boozer round the corner from the conference venue. So the finer points of Ana Tudor’s talk, on some of the more advanced effects you can achieve purely with CSS animation, were lost to struggling grey matter. Suffice it to say, you can do some amazing stuff in only a few lines of CSS in a modern browser, and the adoption of Sass as a pre-processor, with its functional abilities, makes the process much easier. It’s also brilliant that Ana came on board as a speaker after impressing Remy in the JSBin birthday competition – a perfect demonstration that participating in the web community can have a great pay-off.

The last development-orientated session was from Angelina Fabbro, on Web Components and the Brick library. Web Components are a combination of new technologies which will allow us to define our own custom, reusable HTML elements to achieve specific purposes – for example a robust date-picker that is native to the page rather than relying on third party Javascript. This is naturally quite a large subject, and it felt like the talk only really skimmed the surface of it, but it was intriguing enough to make me want to dig further.

The finale of the day, and a great note to finish on, was Jeremy Keith speaking about “Time”. Not really a talk on development, or at least not the nuts and bolts of it, but more of a musing about the permanence of the web (if indeed it will be so) interspersed with clips from Charles and Ray Eames’ incredible short film, Powers of Ten – which if you haven’t seen it is a sure-fire way to get some perspective on the size of your influence in the universe.

Definitely a thought-provoking end to the day. As someone who has done their time in, effectively, the advertising industry, working on short-lived campaign sites that evaporate after a few months (coincidentally, Jeremy mentioned that the average lifetime of a web page is 100 days), it has bothered me that a sizeable chunk of the work I’ve done is no longer visible to anyone. On the other hand I have worked on projects that have been around for a long time, and are likely to remain so, and I suppose in the end it’s up to each of us to focus our efforts and invest our time in the things that we ourselves consider worthwhile.

(Photo: Jeremy Keith recreating the opening scene of Powers of Ten on a visit to Chicago)



by Stuart Harris


I should have written this post a while ago because something I love is not getting the traction it deserves and writing this earlier may have helped in some small way to change that.

Earlier this year we spent a lot of time trying to understand the best way to package and deliver client-side dependencies. It’s a problem that afflicts all modern web development regardless of the stack you use. Most of the solutions we tried don’t address the real problem, which is about delivering large monolithic libraries and frameworks to the client because some small part of them is needed. Like jQuery, for example. Even underscore, as much as I love it. You might use a few features from each. And then it all adds up. Even uglified and gzipped, it’s not uncommon for a page to be accompanied by up to a megabyte of JavaScript. That’s no good. Not even with pervasive broadband. Especially not on mobile devices over a flaky connection.

Some of these, like bootstrap, allow you to customise the download to include just the bits you want. This is great. But a bit of a faff. And it seems like the wrong solution. I don’t know many people that actually do it.

As an industry we’re moving away from all that. We’re learning from the age-old UNIX way that Eric Raymond so brilliantly described in The Art of UNIX Programming; small, sharp tools, each only doing one thing but doing it well. Modern polyglot architectures are assembled from concise and highly focussed modules of functionality. Software is all about abstracting complexity because our brains cannot be everywhere at once. We all know that if we focus on one job and do it well, we can be sure it works properly and we won’t have to build that same thing again. This is the most efficient way to exploit reuse in software engineering.

But small modules have to be composed. And their dependencies managed. We need something that allows us to pluck a module out of the ether and just use it. We want to depend on it without worrying about what it depends on.

npm is one of the best dependency managers I’ve used. I love how it allows your app to reference a directed acyclic graph of dependencies that is managed for you by the beautiful simplicity of ‘require’ (commonjs modules). In node.js, this works brilliantly well, allowing each module to reference specific versions of its dependencies so that overall there may be lots of different versions of a module in the graph. Even multiple copies of the same version. It allows each module to evolve independently on its own track. And it doesn’t matter how many different versions or copies of a library you’ve got in your app when it’s running on a server. Disk space and memory are cheap. And the stability and flexibility it promotes is well worth the price.

But on the client it’s a different story. You wouldn’t want to download several versions of a library in your page just because different modules were developed independently and some haven’t been updated to use the latest version of something. And the bigger the modules are the worse this would become. Fortunately, the smaller they are, the easier they are to update and the less they, themselves, depend on in the first place. It’s simple to keep a small module up to date. And by small, I’m talking maybe 10 lines of code. Maybe a few hundred, but definitely not more than that.

Enter Component by the prolific (and switched on) TJ Holowaychuk. Not perfect, but until we get Web Components, it’s the best client-side module manager out there. Why? Because it promotes tiny modules. They can be just a bit of functionality, or little bits of UI (widgets if you like). If you use Component, you’re encouraged to use, and/or write, small modules. Like a string trimmer, for example; only 13 loc. Or a tiny, express-like, client-side router in under 1200 bytes.  There are thousands of them. This is a Hacker News button, built with Component:

The Hacker News button, built with Component, showing dependencies

The registry

The great thing about Component is that it fetches the files specified in the component.json from Github, following the pattern “organisation/repository” (you can specify other locations). This is great. The namespacing stops the bun-fight for cool names, because the organisation is included in the unique identifier.

The other major benefit of this is that you can fork a component, modify it and point your app at your own repo if you’re not getting any of your pull requests integrated.

App structure

But it’s not really about 3rd party modules. In my head it’s more about how you structure the code that drives your page.

Component allows you to write completely self-contained features and plug them together. Your components will be DRY and follow the SRP. Each component can have scripts (e.g. JavaScript, or CoffeeScript), styles (e.g. CSS, or Less, or Sass), templates (e.g. compiled from HTML, or Jade), data (e.g. JSON), images, fonts and other files, as well as their own dependencies (other components). All this is specified in the component.json file, which points to everything the component needs, and informs the build step so that everything is packaged up correctly. It can be a little laborious to specify everything in the component.json, but it’s worth it. When you install a component, the component.json specifies exactly which files (in the Github repo) should be downloaded (unlike Bower, for example, where the whole repo has to be fetched) – check out how fast “component install” is.
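For illustration, a component.json for a small UI widget might look something like this (the field names follow the Component conventions described above; the values are invented):

```json
{
  "name": "tag-input",
  "repo": "example-org/tag-input",
  "version": "0.1.0",
  "description": "Autocompleting tag input field",
  "scripts": ["index.js"],
  "styles": ["tag-input.css"],
  "templates": ["tag-input.jade"],
  "dependencies": {
    "component/emitter": "*",
    "component/dom": "*"
  }
}
```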

The self-contained nature of components means that you don’t have a separate scripts folder with every script for the page in it, and a styles folder with all the CSS. Instead, everything is grouped by function, so everything the component needs is contained in the component’s folder. At build time, you can use Grunt to run component build which transpiles the CoffeeScript to JavaScript, the Less to CSS, the Jade to JavaScript functions, and packages the assets. The dependencies are analysed and all the JavaScript ends up in the right order in one file, all the CSS in another. These and the other assets are copied to the build directory, uglified/compressed ready for delivery to the client.

Getting started

The best docs are in the wiki on the Github repo. The FAQ is especially germane. And TJ’s original blog post is great reading, including the rather brilliant discussion about AMD vs CommonJS modules. AMD was invented for asynchronous loading. But when you think about it, you’re gonna package all your script up in one compressed HTTP response anyway; there’s still too much overhead associated with multiple requests, even with HTTP keepalive (it’s not so bad with SPDY). The perceived benefits of loading asynchronously, as required, are not yet fully realisable, so we may as well go for the simple require and module.exports pattern we know and love from node.js.

If you’re using CoffeeScript, Jade and JSON in your components, you can use a Gruntfile that looks like this (which contains a workaround for the fact that the coffee compilation step changes the filename extensions from .coffee to .js):

We’ve tried a bunch of different tools to solve the problem of easily and efficiently distributing your app to the browser. All of them have flaws. We used to love Jam.js and Bower. But we got into a jam with jam, because updates were getting jammed due to unresponsive maintainers (sorry, couldn’t resist that). Bower was great, but too heavy. Browserify is too tightly coupled to node.js and npm. None of them makes simple, self-contained, focused modules as straightforward and elegant as Component does. Nice one, TJ!


Robots, pedal bins and dTrace: The 2013 Great British Node Conference

by Stephen Fulljames


If there’s a common theme from the popular London Node User Group evening meet-ups, from which the Great British Node Conference has evolved as a full day event, it’s that the Node.js ecosystem appears to be approximately 50% useful production tooling and 50% wonderfully insane hacks – with both sides of the personality aided by Node’s asynchronous nature and ability to process data I/O very quickly.

This ratio felt like it was also borne out during the conference, the first big event to be held at the brand new Shoreditch Works Village Hall in Hoxton Square. The event space itself was great: fashionably minimal, with rock-solid wifi and an on-site coffee shop. The only slight niggle was that the low ceiling height meant the presentation screens became partially obscured by those seated in front, but with two projectors on the go you could usually get a clear view of one.

So, on to the talks. As mentioned there was a definite split between “useful” and “wtf?” and also between micro and macro ideas. Paul Serby of Clock kicked off with a review of his company’s experience of Node in production use for clients over the last 3 years, which was high level but a great introduction to the philosophy behind adopting Node and some of the successes and pain points along the way. It was interesting, and pleasing, to see that their journey has been similar to our own switch towards Node at Red Badger with many similar learnings and changes to our respective programming styles.

Performance was a big theme of the day, both in Paul’s overview talk and in examples much closer to the metal, such as Anton Whalley’s forensic examination of a memory leak bug in the node-levelup module (a wrapper for LevelDB). Usually hand-in-hand with mention of performance was the use of dTrace – not a Node tool in itself but a very useful analysis tool for discovering how applications are running and identifying the source of problems. The overall picture from this being that while Node can offer great performance advantages, it can also be prone to memory leaking and needs careful monitoring in production.

Other talks at the practical end of the spectrum included Hannah Wolfe on Ghost, a new blogging platform built on Node which is looking like an interesting alternative to WordPress and, after a very successful Kickstarter campaign to raise funding, should be generally available very soon. Tim Ruffles also took us through the various options (and pitfalls) for avoiding the callback hell which asynchronous programming can often fall into. There are a few useful flow-control modules available for Node already, but as the Javascript language develops, native features to help with async flows – known as generators, and acting in a similar way to C#’s yield – will start to become available both in Node and in browsers as they adopt ES6.

Over on the hack side, we were treated to the now obligatory sight of a Node-driven quad-copter drone crashing into the audience and then a brilliant demonstration by Darach Ennis of his Beams module, which attempts to give compute events the same kind of streaming behaviour that I/O enjoys in Node. The key difference being that compute streams are necessarily infinite, and the Beams module allows you to filter, merge and compose these compute streams into useful data. The demo was topped off by an interactive light-tennis game adjudicated by a hacked Robosapien robot which not only reacted to the gameplay but also ran the software which drove the game.

Probably the highlight for me, although its relation to practical application at work was close to zero, was Gordon Williams talking about Espruino, a JS interpreter for micro-controllers. Running at a lower level than the well-known Raspberry Pi or even Arduino boards, micro-controllers are the tiny computers that make all the stuff around us work and typically have RAM measured in kilobytes. For anyone who ever tried to write games on a ZX Spectrum this may bring back memories! Gordon showed real-time development via a terminal application, also hooked up to a webcam so we could watch him create a pedal bin which opened based on a proximity sensor. Maybe not useful in my work at Red Badger, but I could instantly see loads of applications in my personal interests, and thanks to the immediate familiarity of being able to use Javascript in a new context I’m definitely going to look into Espruino some more.

Overall this felt like a conference where delegates were looked after probably better than any I’ve been to for a long time, with plenty of tea and biscuits, great coffee and chilled water on hand and a catered lunch and evening meal nearby. Whether this was down to the smaller scale of the event (around 150 attended) or the care and attention to detail taken by the organisers I’m not sure, but either way I came out of it feeling enthusiastic for Node (both practically and hackerly) and eager to go back next time.


Faster, longer and more responsive

by Stephen Fulljames


You know how it is – you set out to make “a few content changes” and end up with a whole new web site. We wanted to make a bit of an adjustment to the version 2 Red Badger site, launched in mid-2012, to give a better feel for who we are and what we can offer. In the end, it made more sense to go the whole way and create version 3. So here we are.

The fundamentals

In the previous version of the site we used WordPress to manage most content, but rather than use it to render every page we instead pulled content through its JSON API to feed a more flexible front-end built on Node and Express. This was fine in theory, but in practice performance wasn’t really as good as we’d hoped – even with some aggressive caching built in – and any kind of content beyond blog posts turned into a tangle of custom fields and plugins. WordPress specialists would probably have an answer for all the problems we faced, but we felt it was time to strike out and try something different.

This also coincided with an interest in evaluating Docpad, a semi-static site generator built on Node. If you’re familiar with Jekyll on Ruby or Hammer for Mac, it’s kind of like that, but as well as building purely static sites it can also run with an Express server to allow some dynamic behaviour where needed. We hacked on it for a couple of days, I rebuilt my personal site with it, and then, liking what we saw, we decided to proceed.

The principle of Docpad and other site generators is pretty simple. You have a folder of documents written in any supported format (we’re using Jade and Markdown), reflecting the desired URL structure of the site, a folder of templates, and a folder of static assets. The generator runs through the documents, compiling where needed, applying the templates, and saves them into an output directory along with a copy of the assets. Logic in the templates is responsible for constructing navigations and outputting collections of data – just as a dynamically rendered site would – but it only does it once. The theory being, your site only changes when you act to change the content, so why do you need to serve something other than flat files between those changes?
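Concretely, a Docpad project’s source tree looks roughly like this (the folder names follow Docpad’s defaults; the file names are illustrative):

```
src/
  documents/            # source content, mirroring the site's URL structure
    index.html.jade     # compiled from Jade
    about.html.md       # compiled from Markdown
  layouts/              # templates applied to documents
    default.html.jade
  files/                # static assets, copied to output untouched
out/                    # the generated site: flat files, ready to serve
```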

Docpad is light in its core form, but there’s a good plugin ecosystem to extend it. We’re using plugins to compile SASS, Markdown, Jade and CoffeeScript, fetch RSS from Flickr using Feedr and serve pages with clean URLs rather than .html extensions (this is where the Express part comes in handy). We’ve also created a few of our own. The main one is to allow us to curate “featured” items in the meta data of any page – so if you look on the homepage, for example, all the content items below the masthead are manually set and can be changed and reordered simply by altering a list of relative URLs. We’re also using custom plugins to pull in Tweets and posts from the old WordPress blog, and Docpad’s event system makes it easy to hook this data into the appropriate render step in the site’s generation.

For the time being we’re still using WordPress for our blog; recent posts are imported by the Docpad site on server start using a custom plugin, so we can feature them on the home and other pages and list them on our team profiles. In the longer term we’re planning to move the blog itself onto Docpad as well, but that will need careful consideration to make sure all the functionality currently provided by WordPress plugins can be maintained (or improved!).

We’re responsive

The previous iteration of the site was built using the Bootstrap CSS framework. My own views on that notwithstanding, some of the design changes we were planning meant it made sense to look again at our approach to the CSS.

As the site design created by Sari does not inherit from Bootstrap’s design opinions, only a small subset of it – mainly the grid – had been relevant, and structurally with full-width background patterns on each section of a page it was easier to start over.

That’s not to say we’ve rejected it completely. In the new grid system we’re still using some of Bootstrap’s thinking, along with patterns adopted from Harry Roberts’ Inuit.css and Chris Coyier’s thoughts on minimal grids. So we haven’t really done anything earth-shakingly innovative, but we have found it to be a solid, responsive grid that – most importantly – we completely understand and can use again.


1 – ‘section’ element. Inherent 100% width. Top/bottom padding set, optional background texture applied by class, otherwise unstyled. The paper tear effect is added with the :after pseudo-element.
2 – ‘div.container’. Width set to site default (920px) with left/right margin auto for centering. Any full width headings are placed directly inside this element.
3 – ‘div.row’. Unstyled except for negative left margin of default gutter width (40px) to compensate for grid element margin, :before and :after pseudo-elements set table display and clear as with Bootstrap.
4 – ‘div.grid’. Floated left, all .grid elements have a left margin of default gutter width (40px). Default width 100% with over-ride classes for column width, in the case above we use ‘one-third’. Background texture classes can also be applied here.
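Pulling the four layers above together, a condensed CSS sketch of the grid might look like this. The 920px container and 40px gutter are the defaults quoted above; the ‘one-third’ width works out as (920 + 40) / 3 − 40 = 280px, and everything else here is illustrative rather than our exact stylesheet:

```css
/* 1 – full-width section with optional texture class */
section { padding: 40px 0; }

/* 2 – centred container at the site default width */
.container { width: 920px; margin: 0 auto; }

/* 3 – row: negative margin compensates for grid gutters, plus clearfix */
.row { margin-left: -40px; }
.row:before, .row:after { content: ""; display: table; }
.row:after { clear: both; }

/* 4 – grid unit: floated, guttered, 100% by default with over-rides */
.grid { float: left; margin-left: 40px; width: 100%; }
.grid.one-third { width: 280px; } /* (920 + 40) / 3 − 40 */
```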

We’ve switched from our regular LESS to SASS for this version of the site, again as a way to explore what it can do. With Docpad it’s as easy as swapping plugins, and all the compilation is handled for you during site generation. Needless to say the site is responsive, with the grid gradually collapsing down to a single column view and elements such as the services carousels and case study quotes shifting layout subtly to suit smaller viewports. And to make sure corporate decision makers get a good experience we’ve also tested and fixed back to IE8.

The other final touch is that we chose to use a custom icon font for the various calls to action and other decorations around the site. We used Icomoon to generate it from our own vector artwork, and this has saved an enormous amount of time because you can colour the icons with CSS rather than cutting and spriting images for all the variants you might need. The adoption of icon fonts has had a few challenges, chief among them accessibility (screen readers tended to attempt to announce them) but with the technique of encoding icons as symbols, in the Private Use Area of the Unicode range, this problem is now largely overcome.


There were still a few things we found in the process of creating our icon font. It’s not advisable to do anything more complicated than use them as plain icons, for example, as it’s very hard to align them perfectly with regular fonts without browser hacks; each browser has a slightly different way of calculating and rendering line height. Also, if it matters for your audience, they’re not supported by Internet Explorer on Windows Phone 7 (although the symbols it shows instead are quite cute).

What’s next

Of course a consultancy’s own web site is never really finished, as we’re always looking to show our capabilities and keep up with the latest thinking in a world of fast-moving technological change. And the cobbler’s boots syndrome also applies; it’s hard to find time to look after your own when you’re always busy helping clients deliver their own transformative projects.

But we feel like we’ve achieved a stable, maintainable new site and it’s time to put it out there. It feels much faster than the old one, and we’ve delivered the flexibility in content layout we set out to achieve. We wanted to make sure we were demonstrating our own best practices, and while a good deal of that is under the hood we’d be happy to talk through it in more detail if you’d like to get in touch.

There are tweaks to come, naturally, based on things we’ve already spotted and feedback we’ll no doubt get. One big task will be to make the site more editable for those of a non-technical nature. It’s actually not too difficult, as most content pages are written in Markdown, but a nicer UI exposing that rather than the intricacies of Git pushes and deployments feels like a good next step. No doubt we’ll be blogging again to talk about that when the time comes.


Retrospective, Firestarter Event – Day Two

by Jon Sharratt

So as day one closed we were ready to get our heads down into hacking some code and try to produce something we could release.  We broke up the responsibilities for UX, creative and technical tasks to the relevant roles.  As I mentioned in day one, we had chosen a technology stack that was mostly familiar.

A problem did arise in that we had decided to go with Backbone.js and Marionette: not a problem with the frameworks themselves, but we were not so familiar with their inner workings.  I have to say it wasn’t the best choice for the event, i.e. a single-page application route using technology we had little experience with at the time.

It was challenging but we did make some great progress in the morning.  Authentication was up and running with Facebook along with the ability to post content.  Haro did some sterling work getting MapBox integrated into the application to display hotel details in and around the Las Vegas area.  The hotel data itself was to be added manually (I had collated the hotel details the night before in a Google Spreadsheet), which obviously wasn’t the most ideal solution, but for now it would suffice.


As lunchtime came about we decided to get the team together and go to the local pub for some well deserved fish and chips.  After a few pints and a bit of banter with the team we got back to it and continued hammering out as much as we could.  Sari got her magic wand out and managed to come up with some branding in no time at all, and working with Samera the UX was also really taking shape.


All in all, as we came to the end of the afternoon it was apparent that, as much as we tried, we were not going to get a completed application out of the door this time.  This was mainly due to trying to create a one-page application with new technologies, as previously mentioned.  I would safely predict that if we had gone down the standard web application route with the technologies we have used many times before in projects, we might have got it out of the door.  Having said that, we now have good enough knowledge of Backbone.js and Marionette to judge when or when not to use them in future projects.

The event itself was a success for me, and I mostly got what I wanted: to really jump-start the application and get something delivered.  The biggest downside after the event was losing the vision of delivering a true Minimum Viable Product.  This is something I need to work on for future events and any ideas I want to deliver in the future; “it’s all about delivery” and at the moment the phrase “close but no cigar” comes to mind.

It really rams home why people say single-member teams rarely succeed: they often never end up shipping their products.  For example, integrating Yelp really steered me off track, and I ended up adding more features that I could have left out in order to deliver grazr earlier.


What I need to do right now is get the last bits of functionality delivered, and delivered fast.  I am in what I would call a “lingering phase”, and if left to linger too long grazr will rot away.  It needs to be out there for the world to see; it is the worst position to be in, as I can’t even say “I tried and failed” when no users have tried the product – at the moment it is just “I tried”.

I intend to follow this up next week with the launch of grazr, and hopefully some statistics on what happens with the users I send it out to.

If you are interested in one page applications the guys over at airbnb are doing some very interesting things with Backbone.js to allow client and server side rendering of templates for single page applications (see blog post here).

The full technology stack we ended up using was as follows:

  • Node.js
  • Backbone.js
  • Marionette
  • MongoDB
  • Heroku
  • Jade Templates
  • Grunt
  • LESS
  • Bootstrap

Watch this space (no, really, I mean it)…


Effortless Continuous Integration and Deployment with Node.js, Travis CI & Capistrano

by Joe Stanton

At Red Badger there has been a significant transition to open source technologies, which are better suited to rapid prototyping and highly scalable solutions. The largest area of growth for us as a company is in Node.js development; there are a number of active Node projects, some of which are now in production.

Node.js has gained enormous traction since its inception in 2009, yet it is still an immature technology (although maturing rapidly), so ‘best practices’ in the context of Node do not really exist yet. Initially, we did not have the streamlined Continuous Integration and deployment process we were used to from the .NET development world, so we began to look for a solution.

The Tools

Historically, we made constant use of JetBrains TeamCity as a CI server on our .NET projects. TeamCity is an excellent solution for these types of projects, which we would wholeheartedly recommend. However, it was hosted and maintained by us, running on a cloud instance of Windows Server 2008. It was a heavyweight solution for our now much simpler requirements (no lengthy compile step!) and also not ideal for building and testing Node.js and other open source technologies, which run much better in Linux based environments.

In searching for a new solution, we considered:

  • Jenkins – a well established, powerful and complex Java based CI server
  • Travis CI – Extremely popular in open source, particularly among the Ruby community. Travis CI is a lightweight hosted build server which typically only works on public GitHub repositories, although this is changing with its paid service, Travis Pro
  • Concrete – an extremely minimal open source CI server we found on GitHub, written in CoffeeScript by @ryankee

Driven by our desire for simplicity in our tools (and our new-found affection for CoffeeScript), we opted for Concrete.

After making a few modifications to Concrete, we deployed it to a (micro!) EC2 instance, set up some GitHub service hooks and began reaping the rewards of Continuous Integration once again! We set up build-success and build-failure bash scripts to manage deployment and failure logging, and all was working well. After running Concrete for a couple of weeks on a real project, we started to miss some fundamental features of more well established CI solutions, such as a clean, isolated build environment, and even basics like email notifications. There were also a number of occasions where tests would time out, or builds would seemingly never start or get lost in the process. It became apparent that such a simple CI solution wouldn’t cut it for a real project, and we should look to a more reliable hosted solution.

Travis CI
Travis CI is a hosted CI server predominantly aimed at open source projects. It can very easily be integrated into a public GitHub repository with the addition of a simple .travis.yml config file which looks something like this:
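A minimal sketch for a Node project (the listed Node versions are illustrative; yours will depend on what you target):

```yaml
language: node_js
node_js:
  - "0.8"
script: npm test
```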
Travis have recently launched a paid service for private repositories called Travis Pro. We decided to give it a try after being impressed by their free version. It is currently in beta but our experience so far has been very positive. Configuration is a matter of adding the .travis.yml config file to source control, and flicking a switch in the Travis dashboard to set up post-commit hooks to start triggering builds.
Travis runs a build from within an isolated VM, eliminating the side effects of previous builds and creating a much stricter environment in which every dependency must be installed from scratch. This is perfect for catching bugs or deployment mistakes before they make their way to the staging server. Travis also provides a great user interface to view the current build status, with a live tail of console output, which we find very useful during testing.
Additionally, Travis provides some nice features such as pre-installed test databases and headless browser testing with PhantomJS. Both of these features could prove extremely useful when testing the entire stack of your web application.
On a number of our node projects, we were performing deployments with a simple makefile which executed a git checkout over SSH to our staging server. Whilst this worked fine initially, it seemed rather low level and error prone, with no support for rollbacks and cleanups required to remove artifacts produced at runtime on the server. We also needed the opportunity to pre-compile and minify our CoffeeScript and didn’t think that the staging server was the right place to be performing these tasks.

After a small amount of research, we found Capistrano. It quickly became apparent that Capistrano is a very refined and popular tool for deployment, particularly in the Ruby on Rails community. Capistrano is another gem (literally, in the Ruby sense) from the 37signals gang. Despite its popularity in the RoR community, the tool is very generic and flexible and merely provides sensible defaults which suit a RoR project out of the box. It can be very easily adapted to deploy all kinds of applications, ranging from Node.js to Python (in our internal usage).
Installing Capistrano is very easy: simply run the command gem install capistrano. This will install two commands, ‘cap‘ and ‘capify‘. You can prepare your project for Capistrano deployment using the command ‘capify .‘, which will place a Capfile in your project root telling Capistrano where to find the deployment configuration file.
The heart of Capistrano is the DSL based deploy.rb config file. It specifies servers and provides a way to override deployment specific tasks such as starting and stopping processes. Our deploy.rb customized for Node.js looks something like this:
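A sketch along these lines – the application name, repository, server and paths below are illustrative placeholders, not our actual configuration:

```ruby
# deploy.rb -- Capistrano 2-style config adapted for a Node.js app
set :application, "myapp"
set :repository,  "git@github.com:example/myapp.git"
set :scm,         :git
set :deploy_to,   "/var/www/#{application}"
set :user,        "deploy"
set :use_sudo,    false

server "staging.example.com", :app, :web, :primary => true

# Override the Rails-centric defaults to manage the Node process instead
namespace :deploy do
  task :start do
    run "cd #{current_path} && NODE_ENV=staging forever start server.js"
  end
  task :stop do
    run "cd #{current_path} && forever stop server.js"
  end
  task :restart do
    deploy.stop
    deploy.start
  end
end
```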
We use the Forever utility provided by Nodejitsu to ensure that Node processes are relaunched if they crash. Forever also deals with log file redirection and provides a nice command line interface for checking on your processes, so is also definitely worth a look if you haven’t already.
Once this is all configured, all it takes is a simple ‘cap deploy‘ to push new code onto a remote server. Rollbacks are just as simple, ‘cap deploy:rollback‘.
Continuous Deployment
Hooking Travis CI and Capistrano together to automatically deploy upon a successful build is trivial. Travis provides a number of “hooks” which allow you to run arbitrary commands at various stages in the build process. The after_success hook is the right choice for deployment tasks.
Capistrano requires an SSH key to your staging server to be present, so commit this to your source control. Then simply add the following to your .travis.yml configuration file:
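For example (the exact commands are a sketch and may need adapting to your setup):

```yaml
after_success:
  - chmod 600 deployment/key.pem   # Capistrano needs the key file protected
  - cap deploy                     # only runs after a green build
```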
Where deployment/key.pem is the path to your SSH key.
End result
Fast and dependable Continuous Integration which allows an efficient flow of features through development, testing and staging. With speedy test suites, you can expect to see deployments complete in under a minute after a ‘git push‘.

Thread-based or Event-based?

by Stuart Harris

Q: What do our 3 favorite open source projects (node, redis and nginx) have in common?  Apart from being uber-cool?

A: They are all single-threaded.

But aren’t they all really fast and highly scalable?  Yep.  So how does that work?

Nginx, redis and node are all event-based.  They have an event loop that will listen for an event saying that an asynchronous operation (IO) has completed and then execute the callback that was registered when the async operation started.  Rinse, then repeat.  It never waits for anything, which means that the single thread can go hell-for-leather just running code.  Which makes it really fast.
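The loop itself is conceptually tiny. Here’s a toy version (purely illustrative, not how node implements it): a queue of callbacks, drained one at a time, so only one callback ever runs at once and nothing ever blocks waiting.

```javascript
// A toy event loop: callbacks are registered when async work "completes",
// and the single thread drains the queue, running each to completion.
const queue = [];

function enqueue(callback) {
  queue.push(callback); // register a callback for the loop to run
}

function runLoop() {
  while (queue.length > 0) {
    const callback = queue.shift(); // take the next completed callback
    callback();                     // run it to completion, then repeat
  }
}

const log = [];
enqueue(() => log.push('first'));
enqueue(() => {
  log.push('second');
  enqueue(() => log.push('queued from inside a callback'));
});
runLoop();
console.log(log); // ['first', 'second', 'queued from inside a callback']
```

Because each callback runs to completion before the next starts, there is no concurrent access to `queue` or `log`, and hence nothing to lock.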

In days gone by, when we were Microsoft slaves, we used to wrestle with multithreading as a way of dividing up work.  In web apps every request started a new thread.  We’d also use the Task Parallel Library (TPL), which was not an easy abstraction, and combine that with some event processing library like Reactive Extensions (Rx).  Now you’re asking for a lot of trouble.  The new await keyword in C# helps out a lot, but either way you have to think about thread safety all the time, and all kinds of locking strategies to deal with concurrent access to the same data.  And even with all that, it isn’t as fast.

The difference between the two worlds lies in the way that pieces of work are orchestrated.

Event-based applications divide work up using callbacks, an event loop and a queue.  The unit of work, or task, is a callback.  Simple.  Only one callback is ever executing at a time.  There are no locking issues.  You can write code like you’re the only kid on the block.  You decide when you’re done and then effectively yield control to someone else.  Everyone is really polite so it just works.

Thread-based applications essentially divide work up in hardware.  Because each piece of work has its own thread, and will block if it needs to (like when it’s waiting for IO), the CPU will suspend that thread and start running another that is waiting.  Every time that happens there is quite a hefty context switch, including moving about 2MB of data around.  In effect the hardware decides when to yield control and you don’t get much of a say.

Who’d have thought that a single thread, dealing with everything, could be faster than multiple threads each dealing with just one thing?  Well, on a single core, that may be true.  On multiple cores it actually may also be true.  That’s because you’ve probably got nginx and node and redis all running on the same machine – simplistically, on a quad core, that’s one core each and still one left over 🙂

But isn’t writing synchronous code for a multithreaded environment a lot easier than writing asynchronous code for a single threaded environment?  Well, maybe, a little.  But some great patterns have emerged within the node community that really help.  

The simplest continuation-passing style (CPS) is the callback.  Which actually is not at all hard when you get used to it.  And it happens to be a great way to encapsulate and really easy to modularise.  The pattern for async functions is that the last argument is always the callback, and the pattern for callbacks is that errors are always the first argument (with results after that).  This standardisation makes composition really easy.
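In code, the convention looks like this. The `parsePort` function below is a made-up example, not from any library; what matters is the shape: callback last, error first.

```javascript
// Node's callback convention: the async function takes the callback as its
// last argument, and the callback receives (err, result) -- error first.
function parsePort(input, callback) {
  const port = parseInt(input, 10);
  if (Number.isNaN(port)) {
    return callback(new Error('invalid port: ' + input)); // error first
  }
  callback(null, port); // null error, then the result
}

parsePort('3000', (err, port) => {
  if (err) throw err;
  console.log('listening on', port); // prints: listening on 3000
});
```

Because every async function in the ecosystem follows the same shape, callers always know where to look for the error, which is exactly what makes composition easy.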

There are a ton of npm modules that can often help reduce complexity.  The best, in my opinion, is still Caolan’s async.  It’s still the most popular and follows the node conventions.  And there are also a few CPS compilers that allow you to code in a more synchronous style.  I wouldn’t have recommended these in the past, but there are a few, such as tamejs and Iced CoffeeScript, that use an “await, defer” pattern that is quite nice.  We’re using CoffeeScript more and more these days, and this “icing” is very tempting (seeing as we’re compiling anyway), but we haven’t strayed that way yet.
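To see why the conventions matter, here is a toy ‘series’ helper in the spirit of Caolan’s async module (the real module does far more): it runs tasks one after another, collecting results, and bails out to the final callback at the first error. The standard error-first shape is what makes the helper this small.

```javascript
// Run an array of async tasks in order; each task is a function taking a
// single (err, result) callback, per the node convention.
function series(tasks, done) {
  const results = [];
  function next(i) {
    if (i === tasks.length) return done(null, results);
    tasks[i]((err, result) => {
      if (err) return done(err); // error-first makes bail-out one line
      results.push(result);
      next(i + 1);               // continue with the next task
    });
  }
  next(0);
}

series([
  (cb) => cb(null, 'connected'),
  (cb) => cb(null, 'migrated'),
], (err, results) => {
  if (err) throw err;
  console.log(results); // ['connected', 'migrated']
});
```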

We’ve been writing big apps in node since October 2011 and have learnt a lot about how to separate concerns and modularise our code.  It’s a lot different to the object-oriented class-based separation we were used to, but after your head is reprogrammed to use a functional style it becomes second nature and actually much easier to structure.  Caolan’s post on programming style for node sums it up nicely.  If you hear anyone say that node is no good for big projects, tell them that all you have to do is follow a few simple rules and then it becomes perfect.  And fast.


Back in the sett

by Stephen Fulljames

Hello, I’m Stephen. I’m a front-end web developer, focusing these days mainly on Javascript and Node but with a wide range of experience in most of the technologies that make stuff look good in a web browser.

I’ve been working with Red Badger as a freelance collaborator on and off for almost two years, in fact since it was just Stu, David and Cain at the Ravensbourne incubator programme, but we’ve finally decided to make it official and I’m going to be joining the team here permanently.

Switching back to a full time role after a stint freelancing has been a tough decision, but the strength of the team and the technologies in use here make it an exciting place to be. With a background in interface development I’ve had exposure at the integration stage to the various languages – PHP, C#, Java, and so on – that make the pages I build actually work, without really gaining a deep understanding of them. However now I can write server-side code in Javascript as well, with Node, it feels like I can really build on my existing skills to do new and interesting things.

With other developers in the team adopting Node from the other direction, from their excellent C# and infrastructure experience, it feels like we can bring the client and server parts of web development closer together – whether in shared code, better build processes or improved performance. On the recent BBC Connected Studios pilot project, Joe, David and I were all able to contribute not only ideas but also implementation across the whole application. There are still some problems to solve and the best ways of working will settle down over time, but as a company we want to contribute to the community and share what we learn so there’ll be more blogging on these subjects in the near future.

Now if you’ll excuse me, I need to go and get used to being an employee again…