Posts Tagged ‘Continuous Integration’


Docker and assets and Rails, OH MY!

by Jon Yardley

How to precompile Ruby on Rails assets with Docker using --build-arg for deployment to a CDN.


I love Docker. I really enjoy all the benefits it brings, not only to the developer experience (DX) but also to confidence in deployments. Docker, however, is not a silver bullet on its own. It has brought with it a new set of problems that we would not have come across with more old-school methods of application deployment.

Recently I came across a particularly annoying issue with Rails 4 and its asset pipeline when serving assets from AWS S3 via CloudFront. Referenced assets were not resolving to the correct location when running assets:precompile.

Finding the right place to precompile assets was also far from obvious. At build time? When deploying? At startup? After trawling the web for a long time I found no clear answer to this problem.

In detail: The Problem – TL;DR

In production, or any other remote environment, you want your assets served via a CDN, and to do this with Rails you need to precompile your assets. This compresses all your assets and runs them through any preprocessors you use, e.g. Sass. If you use any frameworks, it will also bundle all of their assets too.

The application I am currently developing uses Solidus Commerce (a fork of Spree Commerce), which has a bunch of its own assets for the admin panel. When these assets are precompiled, the paths to referenced assets (e.g. font files) are rewritten.

If you don’t have config.action_controller.asset_host set in production.rb at precompile time, then these references will be relative to your application domain and won’t resolve. Not ideal!

Another problem is that with Docker you want to build your container once and ship it across different environments without changing anything about the application in between, while environment variables tell your application where it currently lives, e.g. staging, production, etc.

If you tell Rails to run with config.assets.digest = true, then you need the precompiled-assets manifest file, which tells Rails about your precompiled assets. That means you want it generated at build time; however, at that point your container has no awareness of its environment.

This particular problem rules out compiling assets when you deploy. Even though your assets will live on your CDN, your container won’t know where to point, as the manifest won’t exist inside the container, and references to assets will therefore be incorrect.

Why not run the assets:precompile rake task in the script when the container starts up?

There are a few problems with this approach. The first is that we are deploying our application using the AWS EC2 Container Service, which has a timeout when you start the container. If the Dockerfile CMD command does not complete within a certain amount of time, it will kill your container and start it again. This can be very frustrating and difficult to work out what is going on.

Also, if your container ever dies in production, it will have to precompile all the assets before starting up again, which is not great. You really want your container to start up as quickly as it can in the event of a failure.

The Solution: --build-arg

Until I spent a day banging my head against a wall trying to fix this, I had no idea that Docker has a --build-arg option. Here is a snippet from the Docker docs:

You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.

A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag.

This option allows you to build your image with variables. This is perfect for compiling assets when building a Docker image. I know this somewhat goes against the whole idea of immutable infrastructure; however Rails, in my case, needs to know which environment it will be living in whilst it is built so that any asset references resolve correctly.

How to use --build-arg

Set your asset host

In your Rails application make sure you set the asset_host from an Environment Variable:
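In config/environments/production.rb, that might look like the following sketch (the variable name ASSET_HOST is our assumption, not mandated by Rails):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # e.g. https://d1234.cloudfront.net, injected at build time
  config.action_controller.asset_host = ENV['ASSET_HOST']
end
```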

Amend your Dockerfile

In your Dockerfile insert the following after you have added all your application files:
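A sketch of what that might look like (the argument name ASSET_HOST and the rake invocation are assumptions):

```dockerfile
# Build-time argument supplied via --build-arg
ARG ASSET_HOST
# Make it visible to the rake task below (and at runtime)
ENV ASSET_HOST=$ASSET_HOST

# Precompile assets into the image so the manifest ships with the container
RUN RAILS_ENV=production bundle exec rake assets:precompile
```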

Build your image

Then in your CI build script:
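For example (the image tag and CDN hostname are placeholders):

```shell
docker build \
  --build-arg ASSET_HOST=https://d1234.cloudfront.net \
  -t myapp:latest .
```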

The resulting image will now have your precompiled assets inside the container. Your Rails application then has access to the manifest file with all the correct URLs.

Deploy your precompiled assets

To then deploy your assets to S3, you can copy the compiled assets out of the container and push them up to AWS:
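One way to do this, as a sketch (the container path, image tag and bucket name are placeholders):

```shell
# Create a stopped container from the built image and copy the assets out
docker create --name asset-tmp myapp:latest
docker cp asset-tmp:/usr/src/app/public/assets ./assets
docker rm asset-tmp

# Push them to the S3 bucket behind CloudFront
aws s3 sync ./assets s3://my-app-assets/assets
```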

Hopefully this will help others who have been having the same problems. Comments and other solution suggestions are welcome!

Want to use Docker in production?

At Red Badger we are always on the lookout for “what’s next” and we embrace new technologies like AWS ECS and Docker. If you’re looking to work with a team that is delivering software using the latest technology, and in the right way, then get in touch. We are constantly on the lookout for talented developers.


Tech Round Table 2015

by Stuart Harris


Every year since 1988 I’ve been saying “there’s never been a better time to be a software developer” and I expect to continue saying it for a long while yet. But it seems that the last year has been especially significant. Several incredibly exciting technologies have emerged recently that are changing everything.

In our 5-year history, Red Badger has never seen a year like this one. The open source movement is truly blossoming and its benefits are rippling through the software industry at lightning speed. We recently got together to list out all the tech we love, but there are a few notable technologies that I want to highlight.

Facebook React


The first is Facebook’s React.js. I think this is the most important development in web tech in the last 10 years. Less than 18 months old, it’s managed to turn the web developer’s world upside down. The traditional MVC approach with data binding, which we’d been thinking was the best way to build web apps for a decade or more, turns out to be inferior to the more functional approach that React takes.

React is simple precisely because it allows the UI to be a pure function of application state. This makes applications much simpler and a lot easier to reason about.

UI = f(state)

There you go. Now you know React! That’s it. Wow. Who’d have thought it could be that simple. When the state changes we simply apply the function again and bingo, we have new UI. That’s at the heart of a new revolution in UI engineering.

It turns out that if we build slightly different functions we can create native UI for mobile devices (React Native), for TVs (e.g. Netflix), for HTML5 Canvas (e.g. Flipboard), for any rendering surface. And the same team can build all of these. As Facebook says: “learn once, write everywhere”.

How to manage the state is a separate problem. And Red Badger has recently open sourced a new application framework called Arch, which makes that bit easy too, whilst leveraging all the incredible power of React.

React’s popularity is soaring and it’s rapidly establishing itself as, hands-down, the best way to build user interfaces. As an example of its popularity, Red Badger started the London React Meetup in June 2014. After a few months we outgrew our office and now host the meetup at Facebook’s London office. Already, the meetup has nearly 1100 members and the 250 seats each month get snapped up in 30 minutes. On May 20th we're holding a special meetup at Cargo as part of the Digital Shoreditch festival. Come along. If you can get a ticket.

Big thanks to Facebook for bringing React to the world.



The second technology I want to mention is Linux Containers (LXC). Made popular by Docker.

When you develop an application these days, you really need a MacBook Pro with unfettered access to the Internet and all its open source goodness. But you often have to deliver the application into secure locked-down operational networks. Before containers, you had to work inside these restrictive environments, and it’s so difficult it’s enough to drive you insane. Now you can build your application in containers in the open environment of the Web and then ship those same containers to your test environments, then to your staging and production environments. The containers hold everything that your application needs, so they can run anywhere. And I mean anywhere: Circle CI and other test and Continuous Integration environments, public cloud infrastructure like AWS and Azure, and private cloud infrastructure like IBM Bluemix and Red Hat’s OpenShift.

Containers are the enabling technology for true Continuous Delivery pipelines. You can automatically push (and scale) your application into any environment you can think of, regardless of how locked down and secure it claims to be.

The developer’s handcuffs are removed and the business gets continuous improvement with very little maintenance overhead. And because the application is running in the exact environment in which it was created to run (and tested in), it’s more stable and secure. Everyone wins.

ES6 and Babel


At Red Badger we've used LiveScript a lot. That’s because it’s a great language with loads of functional goodness influenced by great functional languages like Haskell and F#. We still love LiveScript, but now, with Babel (thanks to Sebastian McKenzie), we can use ES6 everywhere. ES6 is the upcoming JavaScript standard and it’s so much better than ES5 (the current JavaScript). It doesn’t have everything that LiveScript has (like currying, piping, prelude-ls etc) but it goes a long way and it’s getting lots of traction because of Babel. Browser support is getting much better and it will very soon be as ubiquitous as ES5 is today.

We’re now using ES6 on many of our projects and tooling support is already very mature. For example, the amazing ESLint has support for ES6, JSX and React. As does Atom and Sublime Text. It’s always a good sign when the tools converge on a technology.

And other cool things...

There are a ton of other new and exciting technologies that we’ve been using at Red Badger over the last 12 months. We listed them all in our tech round table. Go and have a look and see where our love goes right now.



Badger Academy Week 3

by Tiago Azevedo

The third week of Badger Academy has passed, and with it ends the first cycle of seniors helping interns. This Thursday we were paired with Joe Stanton. We ran into a lot of problems during the week, which left us somewhat frustrated but also increased our eagerness to learn. Most of our environment setup for development is done at this point. We managed to decrease our Docker build times from ~20 minutes to 3-5 minutes, depending on how good a day the server was having, but overall the process was consistent and fast.

Our focus this week was on testing standards. We were aware of the best practices for testing our software, but implementing them within our projects was what took the bulk of our time.

Testing the API

Testing the Rails backend was fairly straightforward. When we scaffolded the controllers and models for our project, a set of pre-generated RSpec tests was provided for us. Most of them were fairly unoptimised and some were not suited for an API, but rather a project written completely in Rails.

We kept a few things in mind while writing these tests:

  • Keep tests of one model/controller isolated from other models and controllers.
  • Avoid hitting the database where we could.
  • Avoid testing things which are covered by higher-level tests.

Expanding on that third point, Joe helped explain what layers to test and what layers we could skip. At the core of our app we have model tests, which would be independent of the database and would test things like logic and validation. These should eventually make up the majority of our tests, but for the meantime we only have a few validation checks. The ‘medium-level’ tests were things like routing and request tests.

We ended up skipping the routing tests: once we got to the higher-level integration tests, we could infer that if those passed, all our routing was correct. We kept request tests to a minimum, only checking that the API returned the correct status codes, so we could have a sense of consistency across our app; those checks weren’t necessarily implied by the integration tests.

Following that, we removed the unnecessary tests and, through the use of FactoryGirl, converted our logic and validation tests to avoid hitting the database, as that would cause a significant slowdown once our project became larger. Some of our higher-level controller tests did hit the database; however, this is unavoidable in most cases, and attempting to bypass it would have been more trouble than it was worth.
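A sketch of what such a database-free test might look like, using FactoryGirl's build_stubbed (the model, factory and attribute names here are hypothetical):

```ruby
# spec/models/product_spec.rb
# build_stubbed instantiates the model without writing to the database
describe Product do
  it 'is invalid without a name' do
    product = FactoryGirl.build_stubbed(:product, name: nil)
    expect(product).not_to be_valid
  end
end
```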

Testing the Frontend

Our frontend testing was much more difficult to set up. We’re currently running a stack of PhantomJS, CucumberJS and Selenium. CucumberJS is a tool that allows us to write tests in a human-readable format, so that anyone, without an understanding of programming, could see what’s happening and even write their own tests if they wanted to. This is the basic premise of BDD (behaviour-driven development): we write tests for the software’s functionality beforehand, from the standpoint of the end user and in a language they can understand. This differs from the TDD (test-driven) principles used in the API, as those tests are written purely in Ruby and not necessarily from a user’s point of view.
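A scenario of this kind, written in Gherkin, might look like the following sketch (the feature and step names are hypothetical):

```gherkin
Feature: Signing in
  Scenario: Signing in with valid credentials
    Given I am on the login page
    When I fill in my email and password
    And I press "Sign in"
    Then I should see the dashboard
```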


That’s an example of a test written in Gherkin (the CucumberJS language – yes, we are aware of all the slightly strange vegetable references). You can probably guess what it tests for. Behind the scenes, the software captures and identifies each of those lines and performs tests based on the parameters that are specified (e.g. what page you’re on and what action you’re performing).

One issue we struggled with was how to isolate these tests from the API. Since the pages display content from the backend, we needed a way to test using fake data. We went through a variety of methods during the week. First, we thought of simply stubbing out the calls to the API using Sinon, a popular JavaScript mocking and stubbing library. While this would have been the most robust option, we had big difficulties using it with Browserify (a tool we use to bundle the entire application into one file), so we decided on creating a fake API server using Stubby, which runs only for the duration of the tests and can serve multiple datasets to the frontend, letting us still test a variety of cases.


Now that we have the testing frameworks down, we expect to make fast progress from here on out. We ended up learning and using CircleCI, which automatically runs tests on any pushes or pull requests made to the GitHub repos. This makes sure we only merge into master when everything is working as planned, and that all tests pass on a fresh system before deployment.

Despite all the new technology we have introduced, everything is going more or less smoothly, and we couldn’t ask for a better foundation to build this project on. Not only are we rethinking the way the tech badgers go about the development process, we are also streamlining the entire production process with lower build times, safe and consistent deployment, and a highly scalable and portable infrastructure.


Automated cross-browser testing with BrowserStack and CircleCI

by Viktor Charypar

Robot testing an application

By now, automated testing of code has hopefully become an industry standard. Ideally, you write your tests first and make them a runnable specification of what your code should do. When done right, test-driven development can improve code design, not to mention give you a regression test suite to stop you from accidentally breaking things in the future.

However, unit testing does just what it says on the tin: tests the code units (modules, classes, functions) in isolation. To know the whole application or system works, you need to test the integration of those modules.

That’s nothing new either. At least in the web application world, which this post is about, we’ve had tools like Cucumber (which lets you write user scenarios in an almost human language) for years. You can then run these tests on a continuous integration server (we use the amazing CircleCI) and get a green light for every commit you push.

But when it comes to testing how things work in different web browsers, the situation is not that ideal. Or rather, it wasn’t.

Automated testing in a real browser

The gold standard of automated testing against a real browser is Selenium, the browser automation tool that can drive many different browsers using a common API. In the Ruby world, there are tools on top of Selenium providing a nice DSL for driving the browsers, with domain-specific commands like 'Login' and expectations like page.has_content?('something').

Selenium will open a browser and run through your scripted scenario and check that everything you expected to happen did actually happen. This should still be an old story to you. You can improve on the default setup by using a faster headless browser (like PhantomJS), although watching your test complete a payment flow on PayPal is kinda cool. There is still a big limitation though.

When you need to test your application on multiple browsers, versions, operating systems and devices, you first need to have all that hardware and software and second, you need to run your test suite on all of them.

So far, we’ve mostly solved this by having human testers. But making humans test applications is a human rights violation, and a good tester’s time is much better spent creatively trying to break things in unexpected ways. For some projects, there isn’t even budget for a dedicated tester.

This is where cloud services, once again, come to the rescue. And the one we’ll use is called BrowserStack.


BrowserStack allows you to test your web applications in almost every combination of browser and OS/device you can think of, all from your web browser. It spins up the right VM for you and gives you a remote screen to play around with. That solves the first part of our problem: we no longer need to have all those devices and browsers. You can try it yourself.

Amazingly, BrowserStack solves even the second part of the problem by offering the automate feature: it can act as a Selenium server, to which you can connect your test suite by using Selenium remote driver and automate the testing. It even offers up to ten parallel testing sessions!

Testing an existing website

To begin with, let’s configure a Cucumber test suite to run against a staging deployment of your application. That has its limitations – you can only do things to the application that a real user could, so forget mocking and stubbing for now (but keep on reading).

We’ll demonstrate the setup with a Rails application using Cucumber and Capybara, and assume you already have some scenario to run.

First, you need to tell Capybara what hostname to use instead of localhost:
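A sketch, assuming the staging URL lives in an environment variable (the file location and variable name are our choices):

```ruby
# features/support/cross_browser.rb
Capybara.app_host = ENV.fetch('APP_HOST', 'https://staging.example.com')
Capybara.run_server = false
```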

Next, loosely following the BrowserStack documentation, we’ll configure the remote driver. Start by building the BrowserStack URL, using environment variables to set the username and API authorization key.
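Something like the following, with BS_USERNAME and BS_AUTHKEY as our chosen variable names:

```ruby
# hub.browserstack.com/wd/hub is BrowserStack's Selenium hub endpoint
url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"
```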

Then we need to set the desired capabilities of the remote browser. Let’s ask for Chrome 33 on OS X Mavericks.
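Using BrowserStack's capability keys, a plain hash might look like this (the exact versions are examples):

```ruby
capabilities = {
  'browser'            => 'Chrome',
  'browser_version'    => '33.0',
  'os'                 => 'OS X',
  'os_version'         => 'Mavericks',
  'browserstack.debug' => 'true' # optional: screenshots of failures
}
```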

The next step is to register a driver with these capabilities with Capybara
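A sketch of the registration, assuming the url and capabilities defined above:

```ruby
Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote,
    url: url,
    desired_capabilities: capabilities)
end
```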

and use it
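Selecting it might be as simple as:

```ruby
Capybara.default_driver = :browserstack
Capybara.javascript_driver = :browserstack
```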

If you run cucumber now, it should connect to BrowserStack and run your scenario. You can even watch it happen live in the Automate section!

Ok, that was a cool experiment, but we wanted multiple browsers and the ability to run on BrowserStack only when needed would be good as well.

Multiple different browsers

What we want then, is to be able to run a simple command to run cross-browser tests in one browser or a whole set of them. Something like

rake cross_browser


rake cross_browser:chrome

In fact, let’s do exactly that. First of all, list all the browsers you want in a browsers.json file in the root of your project:
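A hypothetical browsers.json with four entries might look like this (browser versions are examples only):

```json
{
  "chrome":  { "browser": "Chrome",  "browser_version": "33.0", "os": "OS X",    "os_version": "Mavericks" },
  "firefox": { "browser": "Firefox", "browser_version": "28.0", "os": "Windows", "os_version": "8.1" },
  "ie":      { "browser": "IE",      "browser_version": "11.0", "os": "Windows", "os_version": "8.1" },
  "safari":  { "browser": "Safari",  "browser_version": "7.0",  "os": "OS X",    "os_version": "Mavericks" }
}
```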

Each of those browser configurations is stored under a short key we’ll use throughout the configuration to make things simple.

The rake task will look something like the following
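Here is a sketch under those assumptions (the file path, tag name and task structure are our choices):

```ruby
# lib/tasks/cross_browser.rake
require 'json'
require 'cucumber/rake/task'

BROWSERS = JSON.parse(File.read('browsers.json'))

desc 'Run cross-browser scenarios in every browser'
task :cross_browser do
  BROWSERS.each_key do |browser|
    Rake::Task["cross_browser:#{browser}"].invoke
  end
end

namespace :cross_browser do
  BROWSERS.each_key do |browser|
    # Outer task: pass the browser key to Capybara via an environment
    # variable, then invoke the wrapped cucumber task
    task browser do
      ENV['BROWSER'] = browser
      Rake::Task["cross_browser:#{browser}_cucumber"].invoke
    end

    Cucumber::Rake::Task.new("#{browser}_cucumber") do |t|
      # Only scenarios tagged @cross-browser run remotely, keeping
      # BrowserStack session time down
      t.cucumber_opts = '--tags @cross-browser'
    end
  end
end
```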

First we load the JSON file and store it in a constant. Then we define a task that goes through the list and for each browser executes a browser specific task. The browser tasks are under a cross_browser namespace.

To pass the browser configuration to Capybara when Cucumber gets executed we’ll use an environment variable. Instead of passing the whole configuration we can just pass the browser key and load the rest in the configuration itself. To be able to pass the environment variable based on the task name, we need to wrap the actual cucumber task in another task.

The inner task then extends the Cucumber::Rake::Task and provides some configuration for cucumber. Notice especially the --tags option, which means you can specifically tag Cucumber scenarios for cross-browser execution, only running the necessary subset to keep the time down (your daily time running BrowserStack sessions is likely limited after all).

The cross_browser.rb changes to the following:
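A sketch of the updated file, reading the browser key from the BROWSER environment variable set by the rake task (file name and defaults are assumptions):

```ruby
# features/support/cross_browser.rb
require 'json'

key = ENV['BROWSER'] || 'chrome'
capabilities = JSON.parse(File.read('browsers.json'))[key]

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote, url: url, desired_capabilities: capabilities)
end
Capybara.default_driver = :browserstack
```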

That should now let you run

rake cross_browser

and watch the four browsers fly through your scenarios one after another.

We’ve used this setup, with a few modifications, for a while. It has a serious limitation however: because the remote browser is accessing a real site, it can only do as much as a real user can do. Initial state setup and repeatability are difficult, not to mention it isn’t the fastest solution. We really need to run the application locally.

Local testing

Running your application locally and letting Capybara start your server enables you to do everything you are used to in your automated tests – load fixtures, create data with factories, mock and stub pieces of your infrastructure, etc. But how can a browser running in a cloud access your local machine? You will need to dig a tunnel.

BrowserStack provides a set of binaries able to open a tunnel to the remote VM and connect to any hostname and port from the local one. The remote browser can then connect to that hostname as if it could itself access it. You can read all about it in the documentation.

After you’ve downloaded a BrowserStack tunnel binary for your platform, you’ll need to change the configuration again. The app_host is localhost once again, and we also need Capybara to start a local server for us:
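The relevant changes might look like this (the port is an assumption, chosen to match the tunnel command):

```ruby
Capybara.app_host = 'http://localhost:3001'
Capybara.run_server = true
Capybara.server_port = 3001
```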

We also need to tell BrowserStack we want to use the tunnel. Just add
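```ruby
# the browserstack.local capability tells BrowserStack to use the tunnel
capabilities['browserstack.local'] = 'true'
```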

to the list of capabilities. Start the tunnel and run the specs again

./BrowserStackLocal -skipCheck $BS_AUTHKEY,3001 &
rake cross_browser

This time everything should go a bit faster. You can also test more complex systems that need external APIs or direct access to your data store because you can now mock those.

This is great! I want that to run for every single build before it’s deployed like my unit tests. Testing everything as much as possible is what CI servers are for after all.

Running on CircleCI

We really like CircleCI for its reliability, great UI and especially its ease of configuration and its library and service support.

On top of that, their online chat support deserves praise in a separate paragraph. Someone is in the chat room all the time, responds almost immediately, and they are always very helpful. They even fix the occasional bug in near real time.

To run our cross browser tests on CircleCI we will need a circle.yml file and a few changes to the configuration. The circle.yml will contain the following
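A sketch of what it might contain (the script path and command order are our assumptions, using CircleCI 1.0 syntax):

```yaml
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh
    - ./browserstack/BrowserStackLocal -skipCheck $BS_AUTHKEY,3001 &
    - bundle exec rake cross_browser
    - ./script/browserstack.sh stop
```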

We run unit tests, then the Cucumber specs normally, then open the tunnel and run our rake task. When it’s done, we can close the tunnel again. To download (and eventually stop) the tunnel, we wrote a little shell script:
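A sketch of such a script (the download URL and paths are assumptions):

```shell
#!/bin/bash
# script/browserstack.sh: download the tunnel binary into a cached
# directory, or kill running tunnels when passed "stop"
set -e

if [ "$1" == "stop" ]; then
  pkill -f BrowserStackLocal || true
  exit 0
fi

mkdir -p browserstack
if [ ! -f browserstack/BrowserStackLocal ]; then
  curl -sL https://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip \
    -o browserstack/tunnel.zip
  unzip -o browserstack/tunnel.zip -d browserstack
fi
```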

It downloads the 64-bit Linux BrowserStack binary and unpacks it into a browserstack directory (which is cached by CircleCI). When passed a stop parameter, it will kill all the BrowserStack tunnels running. (We will eventually make the script start the tunnel as well, but we had problems with backgrounding the process, so that’s done as an explicit step for now.)

Finally, we can update the configuration to use the project name and build number supplied by Circle to name the builds for BrowserStack
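For example, using the CIRCLE_PROJECT_REPONAME and CIRCLE_BUILD_NUM variables that CircleCI provides (the fallback values are our own):

```ruby
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME'] || 'local'
capabilities['build']   = ENV['CIRCLE_BUILD_NUM'] || 'dev'
```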

That setup should work, but it will take a while going through all the browsers. That is a problem when you work in multiple branches in parallel, because the testing becomes a race for resources. We can use another brilliant feature of CircleCI to limit the impact of this issue: we can run the tests in parallel.

The holy grail

Marking any task in circle.yml with parallel: true will make it run in multiple containers at the same time. You can then scale your build up to as many containers as you want (and are willing to pay for). We are limited by the concurrency BrowserStack offers us, and on top of that we’re using just 4 browsers anyway, so let’s start with four but plan for more devices.

First, we need to spread the individual browser jobs across the containers. We can use the environment variables provided by CircleCI to see which container we’re running on. Our final rake task will look like this
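A sketch of the parallel version of the task (the per-browser tasks in the cross_browser namespace are defined as before; the env variable names follow the circle.yml shown later):

```ruby
require 'json'

BROWSERS = JSON.parse(File.read('browsers.json'))

desc 'Run cross-browser scenarios, spread across CI containers'
task :cross_browser do
  nodes = (ENV['nodes'] || 1).to_i             # concurrency limit
  index = (ENV['CIRCLE_NODE_INDEX'] || 0).to_i # which container we are on

  # Spread the browser keys across `nodes` buckets and only run the
  # bucket whose order matches this container's index
  BROWSERS.keys.each_slice((BROWSERS.size / nodes.to_f).ceil).with_index do |bucket, i|
    next unless i == index
    bucket.each { |browser| Rake::Task["cross_browser:#{browser}"].invoke }
  end
end
```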

Reading the nodes environment variable, we check the concurrency limit and spread the browsers across the same number of buckets. For each bucket, we only run the actual tests if CIRCLE_NODE_INDEX matches the order of the bucket.

Because we’re now opening multiple tunnels to BrowserStack, we need to name them. Add
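```ruby
# name each tunnel after the CI build so they do not collide
capabilities['browserstack.localIdentifier'] = ENV['CIRCLE_BUILD_NUM'] || 'local'
```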

to the capabilities configuration in cross_browser.rb. The final file looks like this

We need to supply the same identifier when opening the tunnel from circle.yml. We also need to run all the cross-browser related commands in parallel. The final circle.yml will look like the following (notice the added nodes=4 when running the tests):
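Under those assumptions, the parallel circle.yml might look like this (CircleCI 1.0 per-command modifier syntax):

```yaml
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh:
        parallel: true
    - ./browserstack/BrowserStackLocal -localIdentifier $CIRCLE_BUILD_NUM -skipCheck $BS_AUTHKEY,3001 &:
        parallel: true
    - nodes=4 bundle exec rake cross_browser:
        parallel: true
    - ./script/browserstack.sh stop:
        parallel: true
```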

And that’s it. You can now scale your build out to four containers and run the tests in parallel. For us this gets the build time down to about 12 minutes on a complex app and 5 minutes on a very simple one.


We are really happy with this setup. It’s stable, fast, individual test runs are completely isolated, and we don’t need to deploy anything anywhere. It has just one drawback compared to the previous setup, which first deployed the application to a staging environment and then ran cross-browser tests against it: it doesn’t test the app in its real runtime environment (Heroku in our case). Otherwise it’s a complete win on all fronts.

We plan to solve that remaining problem by writing a separate test suite that tests our whole system (consisting of multiple services consuming each other’s APIs) cleanly from the outside. It won’t go into as much detail as the normal tests, since it is only there to confirm that the different pieces fit together and users can complete the most important journeys. Coupled with Heroku’s slug promotion feature, we will actually test the exact thing that will end up in production, in the exact same environment. You can look forward to another blog post about that soon.


Effortless Continuous Integration and Deployment with Node.js, Travis CI & Capistrano

by Joe Stanton

At Red Badger there has been a significant transition to open source technologies which are better suited to rapid prototyping and highly scalable solutions. The largest area of growth for us as a company is in Node.js development, there are a number of active Node projects, some of which are now in production.

Node.js has gained enormous traction since its inception in 2009, yet it is still an immature technology (although maturing rapidly), so ‘best practices’ in the context of Node do not really exist yet. Initially, we did not have the streamlined Continuous Integration and deployment process we were used to from the .NET development world, so we began to look for a solution.

The Tools

Historically, we made constant use of JetBrains TeamCity as a CI server on our .NET projects. TeamCity is an excellent solution for these types of projects, which we would wholeheartedly recommend. However, it was hosted and maintained by us, running on a cloud instance of Windows Server 2008. It was both a heavyweight solution for our now much simpler requirements (no lengthy compile step!) and not ideal for building and testing Node.js and other open source technologies, which run much better in Linux-based environments.

In searching for a new solution, we considered:

  • Jenkins – a well established, powerful and complex Java based CI server
  • Travis CI – Extremely popular in open source, particularly among the Ruby community. Travis CI is a lightweight hosted build server which typically only works on public GitHub repositories, although this is changing with its paid service, Travis Pro
  • Concrete – an extremely minimal open source CI server we found on GitHub, written in CoffeeScript by @ryankee
Driven by our desire for simplicity in our tools (and our new-found affection for CoffeeScript), we opted for Concrete. 

After making a few modifications to Concrete, we deployed it to a (micro!) EC2 instance, set up some GitHub service hooks and began reaping the rewards of Continuous Integration once again! We set up build-success and build-failure bash scripts to manage deployment and failure logging, and all was working well. After running Concrete for a couple of weeks on a real project, we started to miss some fundamental features of more established CI solutions, such as a clean, isolated build environment and even basics like email notifications. There were also a number of occasions where tests would time out, or builds would seemingly never start or would get lost in the process. It became apparent that such a simple CI solution wouldn’t cut it for a real project, and that we should look to a more reliable hosted solution.

Travis CI
Travis CI is a hosted CI server predominantly aimed at open source projects. It can very easily be integrated into a public GitHub repository with the addition of a simple .travis.yml config file which looks something like this:
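For a Node.js project of that era, a minimal file might have looked like this (the Node version and test command are assumptions):

```yaml
language: node_js
node_js:
  - "0.10"
script: npm test
```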
Travis have recently launched a paid service for private repositories called Travis Pro. We decided to give it a try after being impressed by their free version. It is currently in beta but our experience so far has been very positive. Configuration is a matter of adding the .travis.yml config file to source control, and flicking a switch in the Travis dashboard to set up post-commit hooks to start triggering builds.
Travis runs a build from within an isolated VM, eliminating the side effects of previous builds and creating a much stricter environment in which every dependency must be installed from scratch. This is perfect for catching bugs or deployment mistakes before they make their way to the staging server. Travis also provides a great user interface to view the current build status, with a live tail of console output, which we find very useful during testing.
Additionally, Travis provides some nice features such as pre-installed test databases and headless browser testing with PhantomJS. Both of these features could prove extremely useful when testing the entire stack of your web application.
On a number of our Node projects, we were performing deployments with a simple makefile which executed a git checkout over SSH to our staging server. Whilst this worked fine initially, it seemed rather low-level and error-prone, with no support for rollbacks or the cleanups required to remove artifacts produced at runtime on the server. We also wanted the opportunity to pre-compile and minify our CoffeeScript, and didn’t think that the staging server was the right place to be performing these tasks.

After a small amount of research, we found Capistrano. It quickly became apparent that Capistrano is a very refined and popular tool for deployment – particularly in the Ruby on Rails community. Capistrano is another gem (literally, in the Ruby sense) from the 37signals gang. Despite its popularity in the RoR community, the tool is generic and flexible, merely providing sensible defaults which suit a RoR project out of the box. It can easily be adapted to deploy all kinds of applications, ranging from Node.js to Python (in our internal usage).
Installing Capistrano is very easy: simply run gem install capistrano. This installs two commands, ‘cap‘ and ‘capify‘. You can prepare your project for Capistrano deployment with ‘capify .‘, which places a Capfile in your project root telling Capistrano where to find the deployment configuration file.
The heart of Capistrano is the DSL based deploy.rb config file. It specifies servers and provides a way to override deployment specific tasks such as starting and stopping processes. Our deploy.rb customized for Node.js looks something like this:
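The exact file isn't reproduced here, but a sketch of a Capistrano 2-style deploy.rb adapted for Node.js might look like the following. All names, paths and addresses are placeholders, and the start/stop tasks assume the Forever utility discussed below:

```ruby
# deploy.rb — a sketch; repository, server and paths are placeholders
set :application, "myapp"
set :repository,  "git@github.com:example/myapp.git"
set :scm,         :git
set :user,        "deploy"
set :deploy_to,   "/var/www/#{application}"
set :use_sudo,    false

role :app, "staging.example.com"

# Override the default Rails-flavoured tasks with Node.js equivalents
namespace :deploy do
  task :start, :roles => :app do
    run "cd #{current_path} && forever start app.js"
  end
  task :stop, :roles => :app do
    run "cd #{current_path} && forever stop app.js"
  end
  task :restart, :roles => :app do
    stop
    start
  end
end
```

The key point is that Capistrano's defaults (checkout into a timestamped release directory, symlink `current`, keep old releases for rollback) are left intact; only the process-management tasks are overridden.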
We use the Forever utility provided by Nodejitsu to ensure that Node processes are relaunched if they crash. Forever also deals with log file redirection and provides a nice command line interface for checking on your processes, so is also definitely worth a look if you haven’t already.
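For reference, the basic Forever workflow from the command line looks roughly like this (assuming Forever has been installed globally via npm):

```shell
npm install -g forever   # install the Forever CLI
forever start app.js     # launch the app and relaunch it if it crashes
forever list             # check on running processes
forever stop app.js      # stop the process again
```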
Once this is all configured, all it takes is a simple ‘cap deploy‘ to push new code onto a remote server. Rollbacks are just as simple, ‘cap deploy:rollback‘.
Continuous Deployment
Hooking Travis CI and Capistrano together to automatically deploy upon a successful build is trivial. Travis provides a number of “hooks” which allow you to run arbitrary commands at various stages in the build process. The after_success hook is the right choice for deployment tasks.
Capistrano requires an SSH key for your staging server to be present on the build machine, so commit one to your source control. Then simply add the following to your .travis.yml configuration file:
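The exact snippet isn't preserved, but one possible shape for the hook, loading the committed key into an SSH agent before deploying, is:

```yaml
after_success:
  - eval "$(ssh-agent -s)"
  - chmod 600 deployment/key.pem
  - ssh-add deployment/key.pem
  - cap deploy
```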
Where deployment/key.pem is the path to your SSH key.
End result
Fast and dependable Continuous Integration which allows an efficient flow of features through development, testing and staging. With speedy test suites, you can expect to see deployments complete in under a minute after a ‘git push‘.

Meet Joe – The New Badger Intern

by Joe Stanton

After joining Red Badger a couple of weeks ago, I thought I should share who I am and some of the things I’ll be working on in the near future. I’m a student at King’s College London studying Computer Science, and I applied to join Red Badger a couple of months ago to gain some experience of developing real software in a company which does things the right way.

After a friendly email exchange, I was invited to take part in a programming challenge to whittle down the number of applicants and give the real developers (now my mentors) a feel for my experience. With a background in C#.NET and web development, combined with a passion for cutting edge technology, Red Badger are a natural fit for my current skillset and for how I’d like to develop in future.

My first couple of weeks have introduced me to how development at Red Badger works: Agile and highly creative, with a strong emphasis on User Experience and Design. Development starts with writing specs to ensure code quality, and attention to detail is very important. I have also spent some time getting acquainted with the tools Red Badger use during collaborative development, such as GitHub for source control and TeamCity for Continuous Integration with integrated Unit Testing.

My main project initially will be Birdsong, Red Badger’s fantastic WP7 twitter client. This should please many of the current users, as it will be receiving a lot of care and attention over the coming weeks after a period of neglect! There are a few features in the pipeline, including support for Trending Topics (both local and global), ReadItLater/Instapaper support for tweeted links and large-scale improvements to the push service.

If you are a current user of Birdsong and have a feature request, now would be a great time to submit it to our support site. If you aren’t a current user and you own a Windows Phone, what are you waiting for?

I look forward to learning lots and adding real value to this project and any others I may be involved in!


TeamCity, GitHub and SSH Keys

by David Wynne

If you’re a Windows-based user of GitHub using TortoiseGit, then it’s highly likely you’ve used PuTTYGen to generate the SSH key you use with GitHub – and why not? It works fine. That is, until you want to start using TeamCity with GitHub.

If you try to configure your VCS root in TeamCity using the bundled Git plug-in, with a private key generated with PuTTYGen, you’ll likely get the following error: Connection failed!  Repository ‘’: Unable to load identity file: C:\whatever\YourPrivateKey.ppk (passphrase protected)

TeamCity Connection Failed

We spent a while messing around with the different authentication methods available in TeamCity – trying to configure default .ssh keys for the logged-on user, adding SSH config files – but nothing worked.

Eventually we re-generated our SSH key using Git Bash, instead of PuTTYGen (as detailed here) and suddenly – Connection successful!

I’ve since discovered that you can get the same result using PuTTYGen, but you have to export your key as an OpenSSH key:

1. Load your existing private key – File/Load private key (enter your passphrase).
2. Export to OpenSSH – Conversions/Export OpenSSH key.
3. Use the resulting key as the private key you give to TeamCity.