Posts Tagged ‘Continuous Integration’

27 May 2014

Automated cross-browser testing with BrowserStack and CircleCI

by Viktor Charypar

Robot testing an application

By now, automated testing of code has hopefully become an industry standard. Ideally, you write your tests first and make them a runnable specification of what your code should do. When done right, test-driven development can improve code design, not to mention leave you with a regression test suite to stop you from accidentally breaking things in the future.

However, unit testing does just what it says on the tin: tests the code units (modules, classes, functions) in isolation. To know the whole application or system works, you need to test the integration of those modules.

That’s nothing new either. At least in the web application world, which this post is about, we’ve had tools like Cucumber (which lets you write user scenarios in an almost human language) for years. You can then run these tests on a continuous integration server (we use the amazing CircleCI) and get a green light for every commit you push.

But when it comes to testing how things work in different web browsers, the situation is not that ideal. Or rather it wasn’t. 

Automated testing in a real browser

The gold standard of automated testing against a real browser is Selenium, the browser automation tool that can drive many different browsers using a common API. In the Ruby world, there are tools on top of Selenium providing a nice DSL for driving the browsers using domain-specific commands like page.click 'Login' and expectations like page.has_content?('something').

Selenium will open a browser, run through your scripted scenario and check that everything you expected to happen did actually happen. So far, this should still be a familiar story. You can improve on the default setup by using a faster headless browser (like PhantomJS), although watching your test complete a payment flow on PayPal is kinda cool. There is still a big limitation though.

When you need to test your application on multiple browsers, versions, operating systems and devices, you first need to have all that hardware and software, and second, you need to run your test suite on all of them.

So far, we’ve mostly solved this by having human testers. But making humans test applications is a human rights violation, and a good tester’s time is much better spent creatively trying to break things in unexpected ways. Some projects don’t even have the budget for a dedicated tester.

This is where cloud services, once again, come to the rescue. And the one we’ll use is called BrowserStack.

BrowserStack

BrowserStack allows you to test your web applications in almost every combination of browser and OS/device you can think of, all from your web browser. It spins up the right VM for you and gives you a remote screen to play around with. That solves the first part of our problem: we no longer need to own all those devices and browsers. You can try it yourself at http://www.browserstack.com/.

Amazingly, BrowserStack solves the second part of the problem too, with its Automate feature: it can act as a Selenium server, to which you can connect your test suite using the Selenium remote driver, and automate the testing. It even offers up to ten parallel testing sessions!

Testing an existing website

To begin with, let’s configure a Cucumber test suite to run against a staging deployment of your application. That has its limitations – you can only do things to the application that a real user could, so forget mocking and stubbing for now (but keep on reading).

We’ll demonstrate the setup with a Rails application using Cucumber and Capybara, and assume you already have some scenario to run.

First, you need to tell Capybara what hostname to use instead of localhost – something along these lines, assuming your staging app lives at staging.example.com:
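
Capybara.app_host = 'http://staging.example.com'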

Next, loosely following the BrowserStack documentation, we’ll configure the remote driver. Start by building the BrowserStack hub URL, using environment variables to set the username and API authorization key (here assumed to be BS_USERNAME and BS_AUTHKEY):
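
url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"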

Then we need to set the desired capabilities of the remote browser. Let’s ask for Chrome 33 on OS X Mavericks:
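
# capability names per BrowserStack's Automate documentation
capabilities = Selenium::WebDriver::Remote::Capabilities.new
capabilities['browser'] = 'Chrome'
capabilities['browser_version'] = '33.0'
capabilities['os'] = 'OS X'
capabilities['os_version'] = 'Mavericks'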

The next step is to register a driver with these capabilities with Capybara:
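
Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app, browser: :remote, url: url, desired_capabilities: capabilities)
end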

and use it:
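
# make it the default driver (or the javascript_driver, depending on your setup)
Capybara.default_driver = :browserstack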

If you run cucumber now, it should connect to BrowserStack and run your scenario. You can even watch it happen live in the Automate section!

OK, that was a cool experiment, but we wanted multiple browsers, and the ability to run on BrowserStack only when needed would be good as well.

Multiple different browsers

What we want then, is to be able to run a simple command to run cross-browser tests in one browser or a whole set of them. Something like

rake cross_browser

and

rake cross_browser:chrome

In fact, let’s do exactly that. First of all, list all the browsers you want in a browsers.json file in the root of your project – something like this (the four browsers here are just an illustrative pick):
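
{
  "chrome": { "browser": "Chrome", "browser_version": "33.0", "os": "OS X", "os_version": "Mavericks" },
  "firefox": { "browser": "Firefox", "browser_version": "28.0", "os": "Windows", "os_version": "7" },
  "ie9": { "browser": "IE", "browser_version": "9.0", "os": "Windows", "os_version": "7" },
  "ie11": { "browser": "IE", "browser_version": "11.0", "os": "Windows", "os_version": "8.1" }
}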

Each of those browser configurations is stored under a short key we’ll use throughout the configuration to make things simple.

The rake task will look something like the following (a sketch, kept in lib/tasks/cross_browser.rake):
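
require 'json'
require 'cucumber/rake/task'

BROWSERS = JSON.parse(File.read('browsers.json'))

desc 'Run cross-browser scenarios in all configured browsers'
task :cross_browser do
  BROWSERS.each_key do |browser|
    Rake::Task["cross_browser:#{browser}"].invoke
  end
end

namespace :cross_browser do
  BROWSERS.each_key do |browser|
    desc "Run cross-browser scenarios in #{browser}"
    task browser do
      ENV['BROWSER'] = browser
      Rake::Task["cross_browser:#{browser}:run"].invoke
    end

    namespace browser do
      Cucumber::Rake::Task.new(:run) do |task|
        task.cucumber_opts = '--tags @cross-browser'
      end
    end
  end
end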

First we load the JSON file and store it in a constant. Then we define a task that goes through the list and, for each browser, executes a browser-specific task. The browser tasks live under a cross_browser namespace.

To pass the browser configuration to Capybara when Cucumber gets executed we’ll use an environment variable. Instead of passing the whole configuration we can just pass the browser key and load the rest in the configuration itself. To be able to pass the environment variable based on the task name, we need to wrap the actual cucumber task in another task.

The inner task then extends Cucumber::Rake::Task and provides some configuration for Cucumber. Notice especially the --tags option, which means you can specifically tag Cucumber scenarios for cross-browser execution, only running the necessary subset to keep the time down (your daily allowance of BrowserStack session time is likely limited, after all).

The cross_browser.rb (in features/support) then changes to something like the following, picking up the BROWSER key set by the rake task:
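
require 'json'

browser = ENV['BROWSER'] || 'chrome'
config = JSON.parse(File.read('browsers.json'))[browser]

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

# copy the selected browser's configuration into the capabilities
capabilities = Selenium::WebDriver::Remote::Capabilities.new
config.each { |name, value| capabilities[name] = value }

Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app, browser: :remote, url: url, desired_capabilities: capabilities)
end

Capybara.default_driver = :browserstack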

That should now let you run

rake cross_browser

and watch the four browsers fly through your scenarios one after another.

We’ve used this setup with a few modifications for a while. It has a serious limitation, however. Because the remote browser is accessing a real site, it can only do as much as a real user can do. Initial state setup and repeatability are difficult, not to mention it isn’t the fastest solution. We really need to run the application locally.

Local testing

Running your application locally and letting Capybara start your server enables you to do everything you are used to in your automated tests – load fixtures, create data with factories, mock and stub pieces of your infrastructure, etc. But how can a browser running in a cloud access your local machine? You will need to dig a tunnel.

BrowserStack provides a set of binaries able to open a tunnel to the remote VM and connect to any hostname and port from the local one. The remote browser can then connect to that hostname as if it could itself access it. You can read all about it in the documentation.

After you’ve downloaded a BrowserStack tunnel binary for your platform, you’ll need to change the configuration again. The app_host is localhost once again, and we also need Capybara to start a local server for us – something like this (the port, 3001 here, is an arbitrary choice that just needs to match the tunnel command below):
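
Capybara.app_host = 'http://127.0.0.1:3001'
Capybara.run_server = true
Capybara.server_port = 3001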

We also need to tell BrowserStack we want to use the tunnel. Just add
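
# BrowserStack's local-testing capability flag (check their docs for the current name)
capabilities['browserstack.local'] = true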

to the list of capabilities. Start the tunnel and run the specs again:

./BrowserStackLocal -skipCheck $BS_AUTHKEY 127.0.0.1,3001 &
rake cross_browser

This time everything should go a bit faster. You can also test more complex systems that need external APIs or direct access to your data store because you can now mock those.

This is great! I want that to run for every single build before it’s deployed, just like my unit tests. Testing everything as much as possible is what CI servers are for, after all.

Running on CircleCI

We really like CircleCI for its reliability, great UI and especially its ease of configuration and broad support for libraries and services.

On top of that, their online chat support deserves praise in a separate paragraph. Someone is in the chat room all the time, responds almost immediately and is always very helpful. They even fix the occasional bug in near real time.

To run our cross-browser tests on CircleCI we will need a circle.yml file and a few changes to the configuration. The circle.yml will contain something like the following (the script path is our own choice; the browserstack directory is cached between builds):
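
test:
  override:
    - bundle exec rspec          # unit tests
    - bundle exec cucumber       # normal (local driver) cucumber specs
    - ./script/browserstack.sh   # download the tunnel binary (script below)
    - ./browserstack/BrowserStackLocal -skipCheck $BS_AUTHKEY 127.0.0.1,3001:
        background: true
    - bundle exec rake cross_browser
    - ./script/browserstack.sh stop

dependencies:
  cache_directories:
    - browserstack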

We run unit tests, then Cucumber specs normally, then open the tunnel and run our rake task. When it’s done, we can close the tunnel again. To download (and eventually stop) the tunnel, we wrote a little shell script along these lines:
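
#!/bin/bash
# script/browserstack.sh – a sketch: fetch the tunnel binary, or kill it when
# passed "stop" (the download URL is from BrowserStack's documentation)
set -e

if [ "$1" = "stop" ]; then
  # kill all running BrowserStack tunnels
  pkill -f BrowserStackLocal || true
  exit 0
fi

mkdir -p browserstack
if [ ! -x browserstack/BrowserStackLocal ]; then
  curl -L -o browserstack/BrowserStackLocal-linux-x64.zip \
    https://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
  unzip -o browserstack/BrowserStackLocal-linux-x64.zip -d browserstack
  chmod +x browserstack/BrowserStackLocal
fi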

It downloads the 64-bit Linux BrowserStack binary and unpacks it into a browserstack directory (which is cached by CircleCI). When passed a stop parameter, it will kill all the BrowserStack tunnels running. (We will eventually make the script start the tunnel as well, but we had problems with backgrounding the process, so it’s done as an explicit step for now.)

Finally, we can update the configuration to use the project name and build number supplied by Circle to name the builds for BrowserStack – the project and build capabilities serve that purpose:
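
# CIRCLE_PROJECT_REPONAME and CIRCLE_BUILD_NUM are provided by CircleCI
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME']
capabilities['build'] = ENV['CIRCLE_BUILD_NUM']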

That setup should work, but it will take a while going through all the browsers. That is a problem when you work in multiple branches in parallel, because the testing becomes a race for resources. We can use another brilliant feature of CircleCI to limit the impact of this issue: we can run the tests in parallel.

The holy grail

Marking any task in circle.yml with parallel: true will make it run in multiple containers at the same time. You can then scale your build up to as many containers as you want (and are willing to pay for). We are limited by the concurrency BrowserStack offers us, and on top of that we’re using just 4 browsers anyway, so let’s start with four containers, but plan for more devices.

First, we need to spread the individual browser jobs across the containers. We can use the environment variables provided by CircleCI to see which container we’re running on. Our final rake task will look something like this:
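
# in lib/tasks/cross_browser.rake – the :cross_browser task, now spreading the
# browsers across containers (BROWSERS and the per-browser tasks stay as before)
task :cross_browser do
  nodes = (ENV['nodes'] || 1).to_i
  node_index = (ENV['CIRCLE_NODE_INDEX'] || 0).to_i
  bucket_size = (BROWSERS.size.to_f / nodes).ceil

  BROWSERS.keys.each_slice(bucket_size).with_index do |bucket, index|
    # only run the bucket that belongs to this container
    next unless index == node_index
    bucket.each { |browser| Rake::Task["cross_browser:#{browser}"].invoke }
  end
end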

Reading the nodes environment variable, we check the concurrency limit and spread the browsers across the same number of buckets. For each bucket, we’ll only run the actual test if the CIRCLE_NODE_INDEX is the same as the order of the bucket.

Because we’re now opening multiple tunnels to BrowserStack, we need to name them. Add
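
# name the tunnel after the container index; the capability key is from
# BrowserStack's local-testing docs
capabilities['browserstack.localIdentifier'] = ENV['CIRCLE_NODE_INDEX']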

to the capabilities configuration in cross_browser.rb. The final file looks something like this:
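
# features/support/cross_browser.rb – the complete sketch
require 'json'

browser = ENV['BROWSER'] || 'chrome'
config = JSON.parse(File.read('browsers.json'))[browser]

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

capabilities = Selenium::WebDriver::Remote::Capabilities.new
config.each { |name, value| capabilities[name] = value }
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME']
capabilities['build'] = ENV['CIRCLE_BUILD_NUM']
capabilities['browserstack.local'] = true
capabilities['browserstack.localIdentifier'] = ENV['CIRCLE_NODE_INDEX']

Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app, browser: :remote, url: url, desired_capabilities: capabilities)
end

Capybara.default_driver = :browserstack
Capybara.run_server = true
Capybara.server_port = 3001
Capybara.app_host = 'http://127.0.0.1:3001'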

We need to supply the same identifier when opening the tunnel from circle.yml (via the -localIdentifier flag, per BrowserStack’s docs). We also need to run all the cross-browser related commands in parallel. The final circle.yml will look like the following (notice the added nodes=4 when running the tests):
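
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh:
        parallel: true
    - ./browserstack/BrowserStackLocal -skipCheck -localIdentifier $CIRCLE_NODE_INDEX $BS_AUTHKEY 127.0.0.1,3001:
        background: true
        parallel: true
    - nodes=4 bundle exec rake cross_browser:
        parallel: true
    - ./script/browserstack.sh stop:
        parallel: true

dependencies:
  cache_directories:
    - browserstack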

And that’s it. You can now scale your build out to four containers and run the tests in parallel. For us this gets the build time down to about 12 minutes on a complex app and 5 minutes on a very simple one.

Conclusions

We are really happy with this setup. It’s really stable, fast, individual test runs are completely isolated and we don’t need to deploy anything anywhere. It has just one drawback compared to the previous setup, which first deployed the application to a staging environment and then ran cross-browser tests against it: it doesn’t test the app in its real runtime environment (Heroku in our case). Otherwise it’s a complete win on all fronts.

We plan to solve that remaining problem by writing a separate test suite testing our whole system (consisting of multiple services consuming each other’s APIs) cleanly from the outside. It won’t go into as much detail as the normal tests, since it is only there to confirm that the different pieces fit together and users can complete the most important journeys. Coupled with Heroku’s slug promotion feature, we will actually test the exact thing that will end up in production in the exact same environment. And you can look forward to another blog post about that soon.

31 Jan 2013

Effortless Continuous Integration and Deployment with Node.js, Travis CI & Capistrano

by Joe Stanton

At Red Badger there has been a significant transition to open source technologies, which are better suited to rapid prototyping and highly scalable solutions. The largest area of growth for us as a company is in Node.js development; there are a number of active Node projects, some of which are now in production.

Node.js has gained enormous traction since its inception in 2009, yet it is still an immature technology (although maturing rapidly), so ‘best practices’ in the context of Node do not really exist yet. Initially, we did not have the streamlined Continuous Integration and deployment process we were used to from the .NET development world, so we began to look for a solution.

The Tools

Historically, we made constant use of JetBrains TeamCity as a CI server on our .NET projects. TeamCity is an excellent solution for these types of projects, which we would wholeheartedly recommend. However, it was hosted and maintained by us, running on a cloud instance of Windows Server 2008. It was both a heavyweight solution for our now much simpler requirements (no lengthy compile step!) and not ideal for building and testing Node.js and other open source technologies, which run much better in Linux-based environments.

In searching for a new solution, we considered:

  • Jenkins – a well established, powerful and complex Java based CI server
     
  • Travis CI – Extremely popular in open source, particularly among the Ruby community. Travis CI is a lightweight hosted build server which typically only works on public GitHub repositories, although this is changing with its paid service, Travis Pro
     
  • Concrete – an extremely minimal open source CI server we found on GitHub, written in CoffeeScript by @ryankee

Driven by our desire for simplicity in our tools (and our new-found affection for CoffeeScript), we opted for Concrete.
 
Concrete screenshot

After making a few modifications to Concrete, we deployed it to a (micro!) EC2 instance, set up some GitHub service hooks and began reaping the rewards of Continuous Integration once again! We set up build-success and build-failure bash scripts to manage deployment and failure logging, and all was working well. After running Concrete for a couple of weeks on a real project, we started to miss some fundamental features of more established CI solutions, such as a clean, isolated build environment, and even basics like email notifications. There were also a number of occasions where tests would time out, or builds would seemingly never start or would get lost in the process. It became apparent that such a simple CI solution wouldn’t cut it for a real project, and we should look for a more reliable hosted solution.

Travis CI
 
Travis CI is a hosted CI server predominantly aimed at open source projects. It can very easily be integrated into a public GitHub repository with the addition of a simple .travis.yml config file, which looks something like this (a minimal sketch for a Node.js project):
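
language: node_js
node_js:
  - "0.8"
script: npm test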
 
Travis have recently launched a paid service for private repositories called Travis Pro. We decided to give it a try after being impressed by their free version. It is currently in beta but our experience so far has been very positive. Configuration is a matter of adding the .travis.yml config file to source control, and flicking a switch in the Travis dashboard to set up post-commit hooks to start triggering builds.
 
Travis runs a build from within an isolated VM, eliminating the side effects of previous builds and creating a much stricter environment in which every dependency must be installed from scratch. This is perfect for catching bugs or deployment mistakes before they make their way to the staging server. Travis also provides a great user interface to view the current build status, with a live tail of console output, which we find very useful during testing.
 
 
 
Additionally, Travis provides some nice features such as pre-installed test databases and headless browser testing with PhantomJS. Both of these features could prove extremely useful when testing the entire stack of your web application.
 
Capistrano
 
On a number of our Node projects, we were performing deployments with a simple makefile which executed a git checkout over SSH to our staging server. Whilst this worked fine initially, it seemed rather low-level and error-prone, with no support for rollbacks, and cleanups were required to remove artifacts produced at runtime on the server. We also wanted the opportunity to pre-compile and minify our CoffeeScript, and didn’t think the staging server was the right place to be performing these tasks.

After a small amount of research, we found Capistrano. It quickly became apparent that Capistrano is a very refined and popular tool for deployment – particularly in the Ruby on Rails community. Capistrano is another gem (literally, in the Ruby sense) from the 37signals gang. Despite its popularity in the RoR community, the tool is very generic and flexible, and merely provides sensible defaults which suit a RoR project out of the box. It can be very easily adapted to deploy all kinds of applications, ranging from Node.js to Python (in our internal usage).
 
Installing Capistrano is very easy: simply run the command gem install capistrano. This will install two commands, ‘cap‘ and ‘capify‘. You can prepare your project for Capistrano deployment using the command ‘capify .‘, which will place a Capfile in your project root telling Capistrano where to find the deployment configuration file.
 
The heart of Capistrano is the DSL-based deploy.rb config file. It specifies servers and provides a way to override deployment-specific tasks such as starting and stopping processes. Our deploy.rb customized for Node.js looks something like this (server names and paths are illustrative):
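
set :application, "myapp"
set :repository, "git@github.com:redbadger/myapp.git"
set :scm, :git
set :user, "deploy"
set :deploy_to, "/var/apps/#{application}"
set :use_sudo, false

server "staging.example.com", :app, :web, :primary => true

# override the default Rails-oriented tasks with Node.js equivalents,
# using Forever to keep the process alive
namespace :deploy do
  task :start, :roles => :app do
    run "cd #{current_path} && forever start server.js"
  end

  task :stop, :roles => :app do
    run "cd #{current_path} && forever stop server.js"
  end

  task :restart, :roles => :app do
    stop
    start
  end
end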
 
We use the Forever utility provided by Nodejitsu to ensure that Node processes are relaunched if they crash. Forever also deals with log file redirection and provides a nice command line interface for checking on your processes, so is also definitely worth a look if you haven’t already.
 
Once this is all configured, all it takes is a simple ‘cap deploy‘ to push new code onto a remote server. Rollbacks are just as simple, ‘cap deploy:rollback‘.
 
Continuous Deployment
 
Hooking Travis CI and Capistrano together to automatically deploy upon a successful build is trivial. Travis provides a number of “hooks” which allow you to run arbitrary commands at various stages in the build process. The after_success hook is the right choice for deployment tasks.
 
Capistrano requires the SSH key for your staging server to be present, so commit this to your source control. Then simply add something like the following to your .travis.yml configuration file (assuming your deploy.rb points ssh_options[:keys] at the same path):
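
# assumes deploy.rb reads this key via ssh_options[:keys]
after_success:
  - chmod 600 deployment/key.pem
  - gem install capistrano
  - cap deploy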
 
Where deployment/key.pem is the path to your SSH key.
 
End result
 
Fast and dependable Continuous Integration which allows an efficient flow of features through development, testing and staging. With speedy test suites, you can expect to see deployments complete in under a minute after a ‘git push‘.

9 Dec 2011

Meet Joe – The New Badger Intern

by Joe Stanton

After joining Red Badger a couple of weeks ago, I thought I should share who I am, and some of the things I’ll be working on in the near future. I’m a student at King’s College London studying Computer Science, and I applied to join Red Badger a couple of months ago to gain some experience of developing real software in a company which does things the right way.

After a friendly email exchange, I was invited to take part in a programming challenge to whittle down the number of applicants, and give the real developers (now my mentors) a feel for my experience. With my background in C#/.NET and web development, combined with a passion for cutting-edge technology, Red Badger are a natural fit for my current skillset and for how I’d like to develop my skills in future.

My first couple of weeks have so far introduced me to how development at Red Badger works: Agile and highly creative, with a strong emphasis on User Experience and Design. Development starts with writing specs to ensure code quality, and attention to detail is very important. I have also spent some time getting acquainted with the tools Red Badger use during collaborative development, such as GitHub for source control and TeamCity for Continuous Integration with integrated Unit Testing.

My main project initially will be Birdsong, Red Badger’s fantastic WP7 Twitter client. This should please many of its current users, as it will be receiving a lot of care and attention over the coming weeks after a period of neglect! There are a few features in the pipeline, including support for Trending Topics (both local and global), ReadItLater/Instapaper support for tweeted links and large-scale improvements to the push service.

If you are a current user of Birdsong and have a feature request, now would be a great time to submit it to our support site at http://support.red-badger.com. If you aren’t a current user and you own a Windows Phone, what are you waiting for?

I look forward to learning lots and adding real value to the project and any others I may be involved in in the future!

28 Aug 2010

TeamCity, GitHub and SSH Keys

by David Wynne

If you’re a Windows-based user of GitHub and using TortoiseGit, then it’s highly likely you’ve used PuTTYGen to generate the SSH key you’re using with GitHub – and why not, it works fine. That is, until you want to start using TeamCity with GitHub.

If you try to configure your VCS root in TeamCity using the bundled Git plug-in, with a private key generated with PuTTYGen, you’ll likely get the following error: Connection failed!  Repository ‘git@github.com:accountName/repoName.git’: Unable to load identity file: C:\whatever\YourPrivateKey.ppk (passphrase protected)

TeamCity Connection Failed

We spent a while messing around with the different authentication methods available in TeamCity – trying to configure default .ssh keys for the logged-on user, adding SSH config files – and nothing worked.

Eventually we re-generated our SSH key using Git Bash instead of PuTTYGen (as detailed here) and suddenly – Connection successful!

I’ve since discovered that you can get the same result using PuTTYGen, but you have to export your key as an OpenSSH key:

  • Load your existing private key – File/Load private key (enter your passphrase)
  • Export to OpenSSH – Conversions/Export OpenSSH key
  • Use the resulting key as the private key you give to TeamCity

@dwynne