Responsive Testing on Mobile with Ghostlab and Device Lab

by Roisi Proven

When starting responsive web development, making sure that a website works on the growing number of phones, tablets and phablets out there feels like a daunting task. Screen sizes are now so fragmented that even getting a solid core of devices means having a pile of gadgets cluttering up your desk pretty darn quick.

When I was first shown Device Lab, I must admit I was skeptical. It’s just a box covered in velcro, right? It might look pretty, but will it actually help my workflow?

I suppose in the most basic of terms, it is “just” that. However, Device Lab is nothing without Ghostlab. When you bring the two together, Device Lab becomes an awesome tool for streamlining workflows and getting different devices tested quickly. It also has the ability to freak out your workmates as you scroll synchronously across 9 devices at once, despite their having no instantly apparent connection to one another.

Device Lab

Running Fortnum & Mason.london via Ghostlab

Ghostlab is incredibly easy to set up given the complexity of the task it performs. Simply install, fire up, then drag and drop the URL or repo that you want to test. It’s that easy. I spent a little while searching the Knowledge Base for the point at which it was going to get complicated, but it hasn’t yet.

Once you’re set up with a site to test, the magic begins. There are two ways to get the site onto your mobile devices: either by entering the IP address Ghostlab associates with the site, or, even easier, by scanning the handy QR code that Ghostlab generates. There are no wires and no awkward configuration – just click and run. Once you’re hooked up, anything you do on one device is replicated on all the others, and it even works intelligently enough to show the corresponding interactions on both desktop and mobile.


Ghostlab setup.

I’m not sure if there’s a limit to the number of devices running at the same time, and of course when running over WiFi it all depends on your connection, but I’ve found that over our work internet I can happily run 7 or 8 devices with fairly minimal lag. This means I can do a run of tests with the devices in portrait mode, then reconfigure Device Lab and run them all again in landscape.

Testing responsive sites is never going to be easy, but with tools like Ghostlab out there we can at least make more headway towards giving customers on all devices a great experience.


Red Badger at the Financial Times Creative Summit

by Joe Dollar-Smirnov


The tone was set from the moment we arrived at the FT for the 2-day Creative Summit. A cheerful and friendly security guard welcomed us and issued our name badges.

“Yes young man, take a seat and someone will be down to collect you.” I haven’t been called “young man” since… I was a young man. A few glances around reception made it clear that we were not the only early arrivals keen to get stuck into some creative conundrums.

There were accents and faces from all over the world. Sure enough, the summit brought together some seriously impressive talent from home and away. The BBC, Google, MIT and FT China were just a few of the organisations represented.

The summit was the brainchild of the Product Management Director for FT.com, and was organised by the smart chap who brought us the BBC’s The Apprentice. It was designed to elicit the most creative and innovative ideas from the attendees through 2 days of intense creative thinking, discussion, design and development. Various ‘unconference’ activities and organised, bite-sized friend-making and networking sessions allowed everyone in the room to move around and get to know a few people. Backgrounds acknowledged, expectations exchanged and breakfast pastries devoured, we were ready to start understanding the big problems the newspaper industry is facing. And even better, to consider some solutions.

More refreshments than you could consume in your wildest dreams kept everyone firing on all cylinders for the entire session. Lunch was laid on, of course, as was an evening meal, with the option to work as late as we liked if we thought that was a good idea.

Camera crews were filming the creativity and taking photos along the way. This was obviously of massive importance to the Financial Times, and a clear sign that their commitment to developing new services will help keep their nose ahead of the competition and keep them looking forward to interesting ways of engaging with readers old and new.

We got to meet and work with some very interesting people, at all levels, from some very interesting organisations. Leaders within the FT took a very active role in the event, spending time walking the room, sitting down with teams and getting to understand the concepts that were coming out of the summit.

The Badgers split up to join two different teams. I joined a team that consisted of a serial startup Chief Exec with a history in financial risk management, an FT developer and an FT marketing exec; the latter two acted as our insight for the 2 days, providing not only valuable ideas but also key information about typical FT users, marketing insights and the company’s future aspirations. One of my biggest personal challenges of the 2 days was adapting to working with very different people very quickly. You cannot take part in a project like this without throwing yourself into it completely, which means you have to avoid dancing around conflicts and face them head on. A heated debate over UCD, and heavy umming and ahhing over our numerous and constant stream of ideas, kept me on my toes. It also proved a great testing ground for one of our key philosophies: collaboration. Externalising ideas and working as a team proved an essential contributor to our winning idea.

financial times creative summit

Through gratuitous use of post-its, plasticine, pipe cleaners and morning pastries, we worked on an initial brain dump of ideas around 6 core issues/problems the FT faces, ranging from introducing new readers to the publication through to new ways to monetise and increase subscriptions. They had varying levels of grandiosity, with the most ambitious not dissimilar to ‘how to be a better Google’. There was no shortage of inspiration and challenge.

At the end of day one, teams took turns standing up and explaining their loose concepts. Some teams worked into the night; fortunately the Badgers went home to get their beauty sleep. The final day was more about refining the ideas and, contrary to my initial expectations, was not as hectic as I had imagined – we all had a common goal that we were charging towards. The grand finale consisted of a pitch on stage with a 3-minute deadline. Ideas were judged by some heavy hitters from FT.com: namely the Editor, the CIO and the Director of Analytics.

financial times creative summit


The two teams that Red Badger were part of won 2 of the 4 commendations for their great work. The top 2 overall winning concepts went into production – well deserved too. I look forward to seeing how the new products develop and go to market.

ft creative summit

The 2 winning entries that we were part of were commended for:

Innovative Reader Experience

“Which re-imagined the way stories could be constructed (or deconstructed) for time poor younger readers who want the quick facts and analysis.” This team included our very own Imran Sulemanji and Maite Rodriguez.

Best Social

“A creative way to gain FT profile and reputation and engage with others through FT content.”

This was one of the most interesting creative summits I have been to, for the sheer mix of people, the breadth of problems to solve and the level of involvement from internal stakeholders. I am glad that we had the opportunity to take part and spread some of the Red Badger process, enthusiasm and creativity.



First London Facebook React User Group

by Stuart Harris

On Wednesday we held the inaugural London Facebook React User Group meeting at Red Badger's office in Shoreditch. In just a few weeks, the group has grown to 138 members and we're holding monthly meetups.

React meetup

Not a bad turnout for the first one!

We had 3 talks. The first was from Alex Savin on using LiveScript (instead of JSX) to build React components. I gave a talk on building isomorphic apps with React (we've put together a sample repo on GitHub, so please contribute ideas).

Finally Forbes Lindesay, who maintains the much-loved Jade, talked about the promising React Jade (his slides are here). There were some really interesting conversations during the course of the evening, both during the sessions and over the pizza and beer.

If you attended the meetup, we would love to hear your comments or suggestions so please take the quick survey.

Also, if you'd like to talk at an upcoming event, please let us know. We're already lining up some great talks, but will always need more :-)

You can watch the event on YouTube.

The next meetup is on Wednesday 23rd July so please register and come along. After that we'll take a summer break and start again in September.


Automated cross-browser testing with BrowserStack and CircleCI

by Viktor Charypar

Robot testing an application

By now, automated testing of code has hopefully become an industry standard. Ideally, you write your tests first and make them a runnable specification of what your code should do. When done right, test-driven development can improve code design, not to mention leave you with a regression test suite to stop you from accidentally breaking things in the future.

However, unit testing does just what it says on the tin: tests the code units (modules, classes, functions) in isolation. To know the whole application or system works, you need to test the integration of those modules.

That’s nothing new either. At least in the web application world, which this post is about, we’ve had tools like Cucumber (which lets you write user scenarios in an almost human language) for years. You can then run these tests on a continuous integration server (we use the amazing CircleCI) and get a green light for every commit you push.
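Such a scenario reads almost like plain English. For example (an illustrative feature, not taken from a real project):

```gherkin
Feature: Sign in
  Scenario: Registered user signs in
    Given I am a registered user
    When I log in with my email and password
    Then I should see my account dashboard
```

Each step maps to a small chunk of test code, which is what makes the scenarios both readable and runnable.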

But when it comes to testing how things work in different web browsers, the situation is not that ideal. Or rather it wasn’t. 

Automated testing in a real browser

The gold standard of automated testing against a real browser is Selenium, the browser automation tool that can drive many different browsers through a common API. In the Ruby world, there are tools on top of Selenium providing a nice DSL for driving the browsers with domain-specific commands like page.click 'Login' and expectations like page.has_content?('something').

Selenium will open a browser, run through your scripted scenario and check that everything you expected to happen did actually happen. This should still be an old story to you. You can improve on the default setup by using a faster headless browser (like PhantomJS), although watching your tests complete a payment flow on PayPal is kinda cool. There is still a big limitation though.

When you need to test your application on multiple browsers, versions, operating systems and devices, you first need to have all that hardware and software, and second, you need to run your test suite on all of it.

So far, we’ve mostly solved this by having human testers. But making humans run scripted tests is a human rights violation, and the time of a good tester is much better spent creatively trying to break things in unexpected ways. For some projects, there isn’t even enough budget for a dedicated tester.

This is where cloud services, once again, come to the rescue. And the one we’ll use is called BrowserStack.


BrowserStack allows you to test your web applications in almost every combination of browser and OS/device you can think of, all from your web browser. It spins up the right VM for you and gives you a remote screen to play around with. That solves the first part of our problem: we no longer need to own all those devices and browsers. You can try it yourself at http://www.browserstack.com/.

Amazingly, BrowserStack solves even the second part of the problem by offering the Automate feature: it can act as a Selenium server, to which you can connect your test suite using the Selenium remote driver, and automate the testing. It even offers up to ten parallel testing sessions!

Testing an existing website

To begin with, let’s configure a Cucumber test suite to run against a staging deployment of your application. That has its limitations – you can only do things to the application that a real user could, so forget mocking and stubbing for now (but keep reading).

We’ll demonstrate the setup with a Rails application, using Cucumber and Capybara, and assume you already have some scenarios to run.

First, you need to tell Capybara what hostname to use instead of localhost:
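Something like this, for instance (the staging URL and file path are placeholders of our own):

```ruby
# features/support/cross_browser.rb (hypothetical path)
Capybara.app_host = 'http://staging.example.com'
Capybara.run_server = false # test the deployed site, don't start a local server
```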

Next, loosely following the BrowserStack documentation, we’ll configure the remote driver. Start by building the BrowserStack URL, using environment variables to set the username and the API authorisation key:
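For instance (BS_USERNAME and BS_AUTHKEY are our own names for the variables; the hub address is the one BrowserStack documents):

```ruby
# Build the remote Selenium hub URL for BrowserStack from environment variables
username = ENV['BS_USERNAME']
authkey  = ENV['BS_AUTHKEY']
url = "http://#{username}:#{authkey}@hub.browserstack.com/wd/hub"
```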

Then we need to set the desired capabilities of the remote browser. Let’s ask for Chrome 33 on OS X Mavericks.

The next step is to register a driver with these capabilities in Capybara

and use it
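Putting the capabilities, the driver registration and the driver selection together, the support file might look something like this (a sketch following the Capybara and Selenium docs of the era; it uses the url we just built):

```ruby
# Desired capabilities: Chrome 33 on OS X Mavericks
capabilities = Selenium::WebDriver::Remote::Capabilities.new
capabilities['browser'] = 'Chrome'
capabilities['browser_version'] = '33.0'
capabilities['os'] = 'OS X'
capabilities['os_version'] = 'Mavericks'

# Register a driver that talks to BrowserStack's remote Selenium server...
Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote, url: url, desired_capabilities: capabilities)
end

# ...and make it the driver for the test run
Capybara.default_driver = :browserstack
```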

If you run cucumber now, it should connect to BrowserStack and run your scenario. You can even watch it happen live in the Automate section!

OK, that was a cool experiment, but we wanted multiple browsers, and the ability to run on BrowserStack only when needed would be good as well.

Multiple different browsers

What we want, then, is to be able to run a simple command to run cross-browser tests in one browser or a whole set of them. Something like

rake cross_browser


rake cross_browser:chrome

In fact, let’s do exactly that. First of all, list all the browsers you want in a browsers.json file in the root of your project:
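Ours had four browsers; something like this (the exact versions are illustrative):

```json
{
  "chrome":  { "browser": "Chrome",  "browser_version": "33.0", "os": "OS X",    "os_version": "Mavericks" },
  "firefox": { "browser": "Firefox", "browser_version": "28.0", "os": "Windows", "os_version": "7" },
  "ie":      { "browser": "IE",      "browser_version": "11.0", "os": "Windows", "os_version": "8.1" },
  "safari":  { "browser": "Safari",  "browser_version": "7.0",  "os": "OS X",    "os_version": "Mavericks" }
}
```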

Each of those browser configurations is stored under a short key we’ll use throughout the configuration to make things simple.

The rake task will look something like the following
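Here is a sketch of it (the file path and the @cross-browser tag name are our own choices):

```ruby
# lib/tasks/cross_browser.rake (hypothetical path)
require 'json'
require 'cucumber/rake/task'

BROWSERS = JSON.parse(File.read('browsers.json'))

desc 'Run tagged scenarios against every browser in browsers.json'
task :cross_browser do
  BROWSERS.each_key do |browser|
    Rake::Task["cross_browser:#{browser}"].invoke
  end
end

namespace :cross_browser do
  BROWSERS.each_key do |browser|
    # Wrapper task: pass the browser key to Capybara via an env variable
    task browser do
      ENV['BROWSER'] = browser
      Rake::Task["cross_browser:#{browser}_cucumber"].invoke
    end

    # Inner task extends Cucumber's rake task; only tagged scenarios run
    Cucumber::Rake::Task.new("#{browser}_cucumber") do |t|
      t.cucumber_opts = '--tags @cross-browser'
    end
  end
end
```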

First we load the JSON file and store it in a constant. Then we define a task that goes through the list and, for each browser, executes a browser-specific task. The browser tasks live under a cross_browser namespace.

To pass the browser configuration to Capybara when Cucumber gets executed, we’ll use an environment variable. Instead of passing the whole configuration, we can just pass the browser key and load the rest in the configuration itself. To be able to set the environment variable based on the task name, we need to wrap the actual cucumber task in another task.

The inner task then extends the Cucumber::Rake::Task and provides some configuration for cucumber. Notice especially the --tags option, which means you can specifically tag Cucumber scenarios for cross-browser execution, only running the necessary subset to keep the time down (your daily time running BrowserStack sessions is likely limited after all).

The cross_browser.rb changes to the following:
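A sketch of the updated file, now reading the browser key from the BROWSER environment variable (variable and path names are our own):

```ruby
# features/support/cross_browser.rb (hypothetical path)
require 'json'

browser_key = ENV['BROWSER']
caps_config = JSON.parse(File.read('browsers.json'))[browser_key]

capabilities = Selenium::WebDriver::Remote::Capabilities.new
caps_config.each { |key, value| capabilities[key] = value }

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

Capybara.app_host = 'http://staging.example.com' # placeholder staging URL
Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote, url: url, desired_capabilities: capabilities)
end
Capybara.default_driver = :browserstack
```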

That should now let you run

rake cross_browser

and watch the four browsers fly through your scenarios one after another.

We’ve used this setup, with a few modifications, for a while. It has a serious limitation however: because the remote browser is accessing a real site, it can only do as much as a real user can. Setting up the initial state is difficult, as is repeatability, not to mention it isn’t the fastest solution. We really need to run the application locally.

Local testing

Running your application locally and letting Capybara start your server enables you to do everything you are used to in your automated tests – load fixtures, create data with factories, mock and stub pieces of your infrastructure, etc. But how can a browser running in the cloud access your local machine? You will need to dig a tunnel.

BrowserStack provides a set of binaries that can open a tunnel to the remote VM and let it connect to any hostname and port on your local machine. The remote browser can then connect to that hostname as if it could access it itself. You can read all about it in the documentation.

After you’ve downloaded the BrowserStack tunnel binary for your platform, you’ll need to change the configuration again. The app_host is localhost once again, and we also need Capybara to start a local server for us:
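The changed settings might look like this (port 3001 matches the port used when starting the tunnel; the fixed port is our own choice):

```ruby
# Local testing: Capybara starts the app itself and serves it on a fixed port
Capybara.app_host = 'http://localhost:3001'
Capybara.run_server = true
Capybara.server_port = 3001
```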

We also need to tell BrowserStack we want to use the tunnel. Just add
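```ruby
# 'browserstack.local' is the capability name documented by BrowserStack
capabilities['browserstack.local'] = true
```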

to the list of capabilities. Start the tunnel and run the specs again

./BrowserStackLocal -skipCheck $BS_AUTHKEY,3001 &
rake cross_browser

This time everything should go a bit faster. You can also test more complex systems that need external APIs or direct access to your data store, because you can now mock those.

This is great! I want it to run for every single build before it’s deployed, just like my unit tests. Testing everything as much as possible is what CI servers are for, after all.

Running on CircleCI

We really like CircleCI for its reliability, great UI and especially its ease of configuration and its support for libraries and services.

On top of that, their online chat support deserves praise in a paragraph of its own. Someone is in the chat room all the time, responds almost immediately and is always very helpful. They even fix the occasional bug in near real time.

To run our cross-browser tests on CircleCI, we will need a circle.yml file and a few changes to the configuration. The circle.yml will contain the following:
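Something along these lines (CircleCI 1.0 syntax; the script path and test commands are our own assumptions):

```yaml
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh
    - ./browserstack/BrowserStackLocal -skipCheck $BS_AUTHKEY,3001:
        background: true
    - bundle exec rake cross_browser
    - ./script/browserstack.sh stop
```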

We run the unit tests, then the cucumber specs as normal, then open the tunnel and run our rake task. When it’s done, we can close the tunnel again. To download and eventually stop the tunnel, we wrote a little shell script:
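The script is roughly this (the download URL is the one BrowserStack documents for the 64-bit Linux binary; treat it, and the paths, as assumptions):

```shell
#!/bin/bash
# script/browserstack.sh - download the tunnel binary, or stop running tunnels
set -e

if [ "$1" = "stop" ]; then
  # kill all running browserstack tunnels
  pkill -f BrowserStackLocal || true
  exit 0
fi

# download and unpack into ./browserstack (a directory cached by CircleCI)
mkdir -p browserstack
if [ ! -x browserstack/BrowserStackLocal ]; then
  curl -sLo browserstack/BrowserStackLocal.zip \
    https://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
  unzip -o browserstack/BrowserStackLocal.zip -d browserstack
  chmod +x browserstack/BrowserStackLocal
fi
```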

It downloads the 64-bit Linux browserstack binary and unpacks it into a browserstack directory (which is cached by CircleCI). When passed a stop parameter, it kills all the browserstack tunnels that are running. (We will eventually make the script start the tunnel as well, but we had problems with backgrounding the process, so that’s done as an explicit step for now.)

Finally, we can update the configuration to use the project name and build number supplied by CircleCI to name the builds for BrowserStack:
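BrowserStack accepts project and build capabilities, and CircleCI exposes the repository name and build number as environment variables:

```ruby
# Name the BrowserStack build after the CircleCI project and build number
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME']
capabilities['build']   = ENV['CIRCLE_BUILD_NUM']
```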

That setup should work, but it will take a while to go through all the browsers. That is a problem when you work on multiple branches in parallel, because the testing becomes a race for resources. We can use another brilliant feature of CircleCI to limit the impact of this issue: we can run the tests in parallel.

The holy grail

Marking any task in circle.yml with parallel: true will make it run in multiple containers at the same time. You can then scale your build up to as many containers as you want (and are willing to pay for). We are limited by the concurrency BrowserStack offers us, and on top of that we’re using just 4 browsers anyway, so let’s start with four but plan for more devices.

First, we need to spread the individual browser jobs across the containers. We can use the environment variables provided by CircleCI to see which container we’re running on. Our final rake task will look like this:
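Something like this (a sketch; nodes is our own variable name for the concurrency, and the tag name is our choice):

```ruby
# lib/tasks/cross_browser.rake — final version, spreading browsers across containers
require 'json'
require 'cucumber/rake/task'

BROWSERS = JSON.parse(File.read('browsers.json'))

desc 'Run this container''s share of the cross-browser tests'
task :cross_browser do
  nodes      = (ENV['nodes'] || 1).to_i
  node_index = (ENV['CIRCLE_NODE_INDEX'] || 0).to_i

  # Spread the browser keys across `nodes` buckets, then only run the bucket
  # whose order matches this container's index.
  buckets = BROWSERS.keys.each_slice((BROWSERS.size.to_f / nodes).ceil).to_a
  (buckets[node_index] || []).each do |browser|
    Rake::Task["cross_browser:#{browser}"].invoke
  end
end

namespace :cross_browser do
  BROWSERS.each_key do |browser|
    task browser do
      ENV['BROWSER'] = browser
      Rake::Task["cross_browser:#{browser}_cucumber"].invoke
    end

    Cucumber::Rake::Task.new("#{browser}_cucumber") do |t|
      t.cucumber_opts = '--tags @cross-browser'
    end
  end
end
```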

Reading the nodes environment variable, we check the concurrency limit and spread the browsers across that number of buckets. For each bucket, we only run the actual tests if CIRCLE_NODE_INDEX is the same as the order of the bucket.

Because we’re now opening multiple tunnels to BrowserStack, we need to name them. Add
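```ruby
# one tunnel name per build and container (this naming scheme is our own)
capabilities['browserstack.localIdentifier'] =
  "circle-#{ENV['CIRCLE_BUILD_NUM']}-#{ENV['CIRCLE_NODE_INDEX']}"
```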

to the capabilities configuration in cross_browser.rb. The final file looks like this
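Pulling the whole configuration together, the file might now look roughly like this (still a sketch; the paths, ports and variable names are our own):

```ruby
# features/support/cross_browser.rb — final version
require 'json'

browser_key = ENV['BROWSER']
caps_config = JSON.parse(File.read('browsers.json'))[browser_key]

capabilities = Selenium::WebDriver::Remote::Capabilities.new
caps_config.each { |key, value| capabilities[key] = value }

# BrowserStack-specific capabilities: tunnel, tunnel name and build naming
capabilities['browserstack.local'] = true
capabilities['browserstack.localIdentifier'] =
  "circle-#{ENV['CIRCLE_BUILD_NUM']}-#{ENV['CIRCLE_NODE_INDEX']}"
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME']
capabilities['build']   = ENV['CIRCLE_BUILD_NUM']

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

Capybara.app_host = 'http://localhost:3001'
Capybara.run_server = true
Capybara.server_port = 3001

Capybara.register_driver :browserstack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote, url: url, desired_capabilities: capabilities)
end
Capybara.default_driver = :browserstack
```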

We need to supply the same identifier when opening the tunnel from circle.yml. We also need to run all the cross-browser-related commands in parallel. The final circle.yml will look like the following (notice the added nodes=4 when running the tests):
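A sketch (the -localIdentifier value must match the capability set in cross_browser.rb; script and command names remain our own):

```yaml
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh:
        parallel: true
    - ./browserstack/BrowserStackLocal -localIdentifier circle-$CIRCLE_BUILD_NUM-$CIRCLE_NODE_INDEX -skipCheck $BS_AUTHKEY,3001:
        parallel: true
        background: true
    - nodes=4 bundle exec rake cross_browser:
        parallel: true
    - ./script/browserstack.sh stop:
        parallel: true
```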

And that’s it. You can now scale your build out to four containers and run the tests in parallel. For us, this gets the build time down to about 12 minutes on a complex app and 5 minutes on a very simple one.


We are really happy with this setup. It’s really stable and fast, individual test runs are completely isolated, and we don’t need to deploy anything anywhere. It has just one drawback compared to the previous setup, which first deployed the application to a staging environment and then ran the cross-browser tests against it: it doesn’t test the app in its real runtime environment (Heroku in our case). Otherwise it’s a complete win on all fronts.

We plan to solve that remaining problem by writing a separate test suite that tests our whole system (consisting of multiple services consuming each other’s APIs) cleanly from the outside. It won’t go into as much detail as the normal tests, since it is only there to confirm that the different pieces fit together and users can complete the most important journeys. Coupled with Heroku’s slug promotion feature, we will then actually test the exact thing that will end up in production, in the exact same environment. You can look forward to another blog post about that soon.


Using LiveScript with React

by Stuart Harris

Let me introduce you to a marriage made in heaven. Two beautiful things - React and LiveScript - that work together so well they could have been built for each other.

React components are mostly declarative. But they're written in script rather than a templating language. JavaScript ends up being too messy for the job, so Facebook invented JSX (an XML syntax you can embed in JavaScript). Everyone's first reaction seems to be "yuk"!

This is what it looks like (the examples are taken from the must-read article Thinking in React):

/** @jsx React.DOM */
var ProductCategoryRow = React.createClass({
    render: function() {
        return (<tr><th colSpan="2">{this.props.category}</th></tr>);
    }
});

But see how it looks in LiveScript:

product-category-row = React.create-class do
  render: ->
    tr null,
      th col-span: '2', @props.category

Cool, hey? Clean, to the point and no clutter.

But it gets better...

First with JSX:

/** @jsx React.DOM */
var ProductRow = React.createClass({
    render: function() {
        var name = this.props.product.stocked ?
            this.props.product.name :
            <span style={{color: 'red'}}>{this.props.product.name}</span>;
        return (
            <tr>
                <td>{name}</td>
                <td>{this.props.product.price}</td>
            </tr>
        );
    }
});

Now with LiveScript:

product-row = React.create-class do
  render: ->
    name =
      if @props.product.stocked
        @props.product.name
      else
        span {style: {color: 'red'}}, @props.product.name
    tr null,
      td null, name
      td null, @props.product.price

This is much easier to understand and much more declarative. Because everything is an expression, you can put if statements, for loops, anything you want in place of either the props or the children arguments to the component constructor.

OK, so you could do something similar in CoffeeScript. But you'd miss out on all the amazing extras and functional goodness that LiveScript brings to the table (as well as its fixes for a whole bunch of CoffeeScript problems, such as scoping. Don't get me started).

But hang on, isn't it a bit weird? The tr and the td have a null after them and then a comma, but the span has a do and no comma. What exactly are the rules? And where do I pass in the props, and where do I add the children?

All React component constructor functions have the same two arguments: initialProps and children. So if we aren't sending in any initial props, we must specify null (or void). That's the first argument to tr. Fortunately for us we can pass in the children as an array or as separate arguments. So the two td components, in the example above, are passed in as the 2nd and 3rd arguments to the tr component. The do simply creates a block to pass as the first argument (in this case).

But passing in arrays works well in LiveScript too. We can use all the functional list manipulations from prelude-ls.

First in JSX:

/** @jsx React.DOM */
var ProductTable = React.createClass({
    render: function() {
        var rows = [];
        var lastCategory = null;
        this.props.products.forEach(function(product) {
            if (product.category !== lastCategory) {
                rows.push(<ProductCategoryRow category={product.category} key={product.category} />);
            }
            rows.push(<ProductRow product={product} key={product.name} />);
            lastCategory = product.category;
        });
        return (
            <table>
                <thead>
                    <tr><th>Name</th><th>Price</th></tr>
                </thead>
                <tbody>{rows}</tbody>
            </table>
        );
    }
});

Then in LiveScript:

product-table = React.create-class do
  render: ->
    last-category = null
    table null,
      thead null,
        tr null,
          th null, 'Name'
          th null, 'Price'
      tbody null,
        @props.products |> map ->
          rows = []
          if it.category isnt last-category
            last-category := it.category
            rows.push product-category-row do
              category: it.category
              key: it.category
          rows.push product-row do
            product: it
            key: it.name
          rows

I suppose the first thing to note is that you don't have to build up chunks of UI first and then add them later. It's all inline. And that's because we can pass arrays of children as the second argument. So we take the products from the passed-in props and pipe them to the curried map function from prelude-ls. This returns an array into the tbody's second argument.

If the product-row instance, in the example above, had children, you could add them after the props (you can see an example of this in the span inside the product-row component itself). LiveScript is clever enough to know that they don't look like more props and so will pass them as the next argument. Here's a better example:

product-row do
  product: it
  key: it.name
  span do
    class-name: 'child-class'
    "I'm a child of the span, which in turn is a child of the product-row"

It looks beautiful to me. Not unlike Jade. But you get to use all the power of a proper language :-)

By the way, Red Badger is hosting the London React User Group. We already have 71 members and the first meetup will be in mid-June. Please join and come along!


The first London Spree Commerce User Group

by Cain Ullah

We have just held our first London Spree Commerce User Group so I thought I’d write a brief blog to quickly summarise.

Red Badger have been working with Spree Commerce over the last 5 months on a commercial opportunity, which resulted in me going over to New York for SpreeConf in February. I was blown away by the enthusiasm of the community – there was a genuine excitement about Spree from everyone there. On my return to London, I looked to join a Spree meetup and realised there wasn’t one. So I decided to start one in London.

We’re genuinely excited about the platform and think it could help to change the retail landscape in the UK. So, hopefully through this User Group we can help build the community, build more open source extensions and all collaborate to make the platform better.


The presentations

We had 4 presentations for the evening:

First up was Josh Resnik, COO of Spree Commerce, who had flown in from Washington DC to help kick off the user group. He presented on 2 key themes. The first was Spree as a company: given the acquisitions of Magento by eBay and Hybris by SAP, a few retailers have been nervous about Spree going down the same corporate route, so Josh covered this topic in his presentation. The second was where Spree fits in the marketplace and why you should choose it as your platform.

The second presentation was by Joe Simms, CTO of Surfdome. Surfdome are currently re-platforming to Spree Storefront and the Spree Hub, and will be the UK’s largest implementation on the platform. Joe covered why they chose Spree, their experience to date, what they are currently working on (specifically some really complicated stuff around pricebooks) and what they would like to work on with the community to improve the platform.

Third, David and I, both founders of Red Badger, talked about a hackathon we ran recently to build a Spree-based store for a target client in just 2 days. I discussed the process we went through during the hackathon, and David covered what was needed to build the store before presenting a demo of the end product.

Finally, Peter Berkenbosch, a Spree software engineer based in the Netherlands, presented the Spree Hub and its new user interface, which has recently gone live, then demoed building webhooks, events and flows on the Hub, as well as how to debug it.


I think overall the first event was a great success. We had a turnout of about 40 people, which is not bad for a first user group. The people in attendance were really enthusiastic and interested in the presentations, and the buzz about the Badger office was great.

It was awesome to have commitment from Spree to help us kick this off with Josh coming in from Washington DC and Peter from the Netherlands. With support from them, we should be able to make this User Group a success.

Our objective is to build up the Spree Commerce community in London.

Next Steps

Given that we’ve just started this user group we’ll keep it quarterly for now so that we can ensure quality content. So, we’ll look to get the next one ready for early September.

If anyone has feedback or has general ideas about what we should do with the next user group, please contact ideas@red-badger.com.

I’d like to look at different possibilities, perhaps doing a hackathon to build new extensions to Spree etc…

We will also be looking for presenters so if you are working with Spree already and would be interested in doing a talk, then please do get in touch. We don’t have to stick to the 20 minute presentation format so can accommodate a number of 5 minute lightning talks as well.

For all future event news, check the meetup page here.

The stream of the event is now also live on our YouTube channel.



Digging Tunnels

by Alex Savin


When developing a web app of any sort, you usually have a server running on your local machine. The server runs your app, which you can access using your favourite browser. Later on, when the moment is right, the app is deployed to a public server, from where you can access it from any other machine or mobile device.

Sometimes however you’d want to make your local server publicly accessible. Here are just a few reasons why this might be useful:

  • Developing a mobile web app. While you can use various emulators locally, they never really replace a real device. Being able to access the local server from any mobile device while developing the app is a huge benefit.
  • Collaboration. You can send a public link to your colleague (or client) and have them try the real app. Again, they can very well try your app on their mobile devices, on the go.
  • Breaking out of the virtual box. Some companies prefer putting development environments into various kinds of virtual terminals. With local tunnelling you can stream the app out of the box to your local machine – or indeed to any machine.

If you are connected to a local network, machines on the same network can often access your local server just by entering your machine name and port into a browser. If you are on a wi-fi network, you can also access your local server from any mobile device on the same network in exactly the same way.

But what to do if that’s not enough?

Level 1: Simple SSH tunnel

Assuming that you are developing on your own machine, which has unrestricted access to the internet. There are a number of services that allow SSH tunnelling of your local server to the public internet.

  • Ngrok – originally a Ruby Gem called localtunnel, now evolved into a proper web service. Simple to use.
  • PageKite – a Python script with a number of interesting features.
  • OpenSSH, or the plain ssh command on your Mac/Linux console, also allows you to forward local traffic to the outer world. Here is one set of instructions.
  • Tools like BrowserStack and SauceLabs also offer forwarding traffic from your local server.

In order to have something to forward, we are going to quickly bootstrap a Hello World Meteor app. Here it is, running on localhost:3000.


Let’s install ngrok and run it with the following command:

./ngrok 3000

This command initialises the SSH tunnel, and our local server is now available via two public URLs, one HTTP and one HTTPS. If we open one of those in the browser, we see our familiar app view.


Ngrok also provides a traffic analyser, available at http://localhost:4040/. In addition to the console log, you can watch in real time as requests come in and responses go out, and dig into individual requests for the more curious details.

Ngrok Traffic Analyser

Level 2: SSL tunnel via proxy

Ngrok also supports tunnelling via a proxy. You can use the following configuration in the ngrok config file, which is loaded from ~/.ngrok by default:

http_proxy: "http://user:password@"
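For completeness, here is one way to put that setting in place from the shell. The proxy host, port and credentials below are placeholders you would replace with your own, and note that this overwrites any existing ~/.ngrok:

```shell
# Write an ngrok (v1) config enabling an outbound HTTP proxy.
# proxy.example.com:8080 and user:password are placeholders.
cat > ~/.ngrok <<'EOF'
http_proxy: "http://user:password@proxy.example.com:8080"
EOF
```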

This works well if your machine has unrestricted SSH access to the outside world via port 22. But what if you don't have SSH access?

TCP tunnelling to the rescue! Ngrok is smart enough to figure out which port is open, and if your normal SSH port is closed for some reason, it will tunnel via the SSL port, 443, instead.

Let's now see how this works with PageKite. Once you download the script, run it with this slightly under-documented argument to enable connection via a proxy:

pagekite.py --proxy=http:user:password@yourproxyip:proxyport

When you run the PageKite script for the very first time, it asks you to register and create a username/password. An interesting detail is that you need to specify the proxy only once, to let the registration pass through. After that, PageKite saves your user info into a local config file and uses it for authentication. There's no need to specify the proxy again – PageKite reads it from the local config and uses it for all future connections.

Level 3: Dynamic altering of the host header value

If your app runs on an IIS server, it is normally accessible via localhost:80. You would typically map your app name to localhost in the hosts file; then, when you open your app URL in the browser, the request header contains the proper host value, based on which the IIS server decides which app to serve back.

This is all great, but if we simply tunnel to localhost:80, we get the IIS welcome page, not our app. That's because the HTTP request header doesn't contain the proper value to identify our app.

If we try to modify the request header in the browser while requesting one of the URLs on the PageKite or Ngrok services, we get another error, this time from the tunnelling service itself. The thing is that both PageKite and Ngrok map requests to the proper tunnels based, again, on the HTTP host header value.
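You can watch the host header at work from the command line. In this sketch a stock Python static server stands in for IIS, and mywebapp.local is our hypothetical app name; overriding the header with curl is exactly what we need the tunnelling service to do on our behalf:

```shell
# Start a throwaway local server and send it a request with an
# overridden Host header; curl -v echoes the headers it sends.
python3 -m http.server 8080 &
SERVER_PID=$!
sleep 1
curl -sv -H 'Host: mywebapp.local' http://localhost:8080/ -o /dev/null 2>&1 \
  | grep '^> Host:'     # > Host: mywebapp.local
kill "$SERVER_PID"
```

A host-header-sensitive server such as IIS would use that value to pick the site to serve.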

We need some way to tell the tunnelling service to alter the host header for this particular tunnel before forwarding the request. To achieve this we are going to use PageKite's super useful +rewritehost feature.

pagekite.py mywebapp.local:80 mywebapp.pagekite.me +rewritehost

Now, when we request mywebapp.pagekite.me, the PageKite service rewrites the host header with the mywebapp.local value, and our IIS server can correctly serve our app.

Level 4: Tunnelling with reverse proxy

What if our web app consists of multiple server URLs? For example:

  • www.myapp.com:80 – root URL to request the app
  • resources.myapp.com:80 – URL used by the app client side to request static files
  • api.myapp.com:80 – API services URL used by the main app on the client

Now it's not enough to tunnel just one myapp.local:80 URL to the outside world. Let's imagine this for a second:

  • www.myapp.com:80 is tunnelled to the public via the myapp.pagekite.me URL.
  • You open this URL on the client. The app loads and starts requesting static assets from resources.myapp.com:80 and API services from api.myapp.com:80. Both of these requests will fail.
  • Even if you create two additional tunnels for the resources and API services, the client won't be able to map those public tunnel URLs to the URLs expected by the app.

We are going to set up our very own proxy server to handle all of these issues.

Let's set up a micro EC2 Amazon AWS instance. The best part is that micro instances are free, and their limited resources are usually more than enough for our proxying needs.

Use your key pair to SSH into your new and shiny EC2 instance:

ssh -l ubuntu -i ~/.ssh/mykey.pem ec2-something.eu-west-1.compute.amazonaws.com

Nginx is a natural choice for the proxy server, so install it on the instance.

Here is an example configuration for the Nginx proxy, assuming that www.myapp.com is our main site, resources.myapp.com is an additional service for our app, and both of those URLs are tunnelled via separate tunnels on PageKite:

server {
  server_name www.myapp.com;
  location / {
    proxy_pass http://myapp.pagekite.me;
    proxy_redirect off;
    proxy_buffering off;
  }
}

server {
  server_name resources.myapp.com;
  location / {
    proxy_pass http://myappresources.pagekite.me;
    proxy_redirect off;
    proxy_buffering off;
  }
}

Launch or restart Nginx with the sudo nginx -s reload command (sudo nginx -t first will validate the configuration).

The interesting part is routing all requests from your browser via our new proxy. This is easily done in a desktop browser via its network settings. On iOS you can set up a manual proxy in the Wi-Fi settings.
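On the desktop you can also try the proxy route without touching browser settings by handing curl an explicit proxy with -x. In the sketch below a stock Python server merely stands in for the Nginx box (resources.myapp.com is the hypothetical hostname from above); the point is that with -x the request for an arbitrary hostname really does land on the proxy host, which is then free to route it wherever it likes:

```shell
# A local stand-in for the proxy box: any listener shows that
# curl -x sends requests for arbitrary hostnames to the proxy host,
# the same effect as the device-wide Wi-Fi proxy setting.
python3 -m http.server 3128 &
PROXY_PID=$!
sleep 1
# No DNS lookup of resources.myapp.com is needed, because the
# request goes straight to the "proxy". Print the status code:
curl -s -o /dev/null -w '%{http_code}\n' \
  -x http://localhost:3128 http://resources.myapp.com/
kill "$PROXY_PID"
```

With the real EC2 instance you would substitute its public hostname and port for localhost:3128.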


After doing so, all network requests from your device go through our proxy – and we finally have full power to do what we want: route certain requests to another location. In particular, all requests to resources.myapp.com will be routed to http://myappresources.pagekite.me, which is the URL of the public tunnel for our resources served from the local server. Again, use the +rewritehost option on PageKite if you are running the app on a host-header-sensitive server.

A couple of words on security

The obvious concern is: how do you know your super secret Facebook-killer web app is not going to be compromised while you are streaming it to the public web? Here are a few thoughts on this:

  • Your tunnel from localhost to the service is encrypted, often with your personal SSH keys
  • You usually have the option of accessing your tunnel via HTTP or HTTPS URLs provided by the service
  • Both Ngrok and PageKite can generate random URLs every time you restart a tunnel. This may not be very convenient, since you have to copy-paste the new URL every time and probably reconfigure some of your test suites, but it adds an additional layer of security
  • Both Ngrok and PageKite offer simple HTTP password protection. Here is an example command for PageKite: pagekite.py 80 foo.pagekite.me +password/bjarni=testing
  • PageKite also offers whitelisting of allowed IPs: pagekite.py 80 foo.pagekite.me +ip/ +ip/4.5.6=ok


Local tunnelling is a perfect way of freeing your web app to the outside world while it is still being developed. In a modern responsive/adaptive world, apps are still mostly developed on desktops, with desktop browsers. With local tunnelling you can easily develop your app on any mobile device using any mobile browser, not just an emulation. You can also share the app with a client, UX designer, fellow developer or test engineer for any purpose. With fast-paced development it's sometimes really nice to try a feature on a target device even before it is deployed to staging or production servers.


UX Testing in Kenya – Summary

by Joe Dollar-Smirnov

creative workshop

It is now some time since we arrived back in Blighty after an amazing trip away demoing and testing the new Haller app, designed and built in collaboration with Pearlfisher.

After consolidating our data, we have some key findings we would like to share. The top line is that the app was received very well indeed. As suspected, most people do not have smartphones (but 99.5% of people DO have a feature phone). Community leaders, on the other hand, do tend to have smartphones.

In the end we spoke to around 60 people over the two-week period. Qualitative and quantitative methods were used to measure the perceived usability of the app, which has allowed the team to understand where the focus needs to be during the coming weeks and beyond.

Current tech landscape

Most of the community farmers in the Haller outreach program in and around Mombasa do not regularly access the internet. In fact, most of them had never accessed the internet. The tea and coffee farmers based north of Nairobi, however, had more exposure. A few websites came up in our discussions with the minority who were internet users: Yahoo, Google, Skype and Facebook all scored high on mentions, followed by a small selection of localised websites relating directly to their area of business, such as farming, tea and coffee regulatory bodies and government websites.

  • Approx 20% of the tea and coffee farmers have an email address, and less than 5% of the community farmers
  • Network: Approx 95% of mobile phone owning people we met were on Safaricom
  • This Vodafone-owned network has a near monopoly in Kenya thanks to the massive adoption of MPESA
  • Everyone we met with a phone used MPESA
  • Approx 10% of the people we met had 2 phones, normally including at least 1 smartphone with Opera Mini. It is not unusual in Kenyan towns and cities to have 2 mobile phones to cover 2 networks, but in the rural areas we visited it was not common
  • Approx 30% of tea and coffee farmers had a smart phone (internet phone)
  • Less than 10% of the community farmers had a smartphone in the Haller community
  • Weather prediction is still primarily done by looking at the sky, using the seasons as a secondary indicator

Kenya's answer to Silicon Valley and Tech City London is Konza City, a development outside Nairobi. There is no doubt that there is a thriving technology scene in Kenya, and the enthusiasm for new technology has spread to all areas of rural Kenya. The biggest hurdle preventing early adoption of technology, it seems, is infrastructure. For example, some of the farmers we spoke to did not have electricity to charge their own phones; instead they had to take them down to the local store and leave them there to charge.

ux testing C3 Nokia

  • Opera Mini was the browser on most of the smart phones we saw
  • Radio is prevalent as a form of communication, followed by SMS
  • A few of the groups have used cyber cafes. One of them in particular said he used a cyber cafe on his way to work and on his way home to check Facebook
  • The cost of mobile data on Safaricom varies depending on the amount you buy, but we were told that 100KSH would buy 80MB of data. At the time of writing, 100KSH is about £0.70
  • The speed of the internet varied but it was enough to run the app without any significant slow down
  • The farmers we spoke to had little experience of the App Store or Google Play for downloading apps
  • All the farmers we met were using pay as you go phones. This is the most common route to phone ownership among the farmers and locals we spoke to

Haller currently trains local village leaders, who tend to take the knowledge back to their communities and share the information among the people of their villages. This works well in small, tightly knit communities, and it is the model that must be adopted initially to make sure the information in the app is shared across communities. Adoption of smartphones is accelerating, and no doubt within a year or two the number of people with smartphones will outweigh those with older feature phones.

While growth in the number of people using a mobile will moderate over the next 5 years, we still expect 130 million new mobile services subscribers every year to 2017. This means an increasing total addressable mobile for development market, uniquely positioned to use the mobile as an alternative to traditional modes of service delivery
mAgri www.gsma.com/magri

There is some additional work going on in the background to bring the app up to a releasable standard so we can put it out to market and watch how people use it. At that point we will be shouting about it from the rooftops, so keep an eye out for its release – we'll welcome feedback from all corners of the globe, not just the localised part of Kenya we are targeting initially.

Example of one of the farmers playing with the app and making notes about the content!



Ecommerce best of the best UX practices

by Joe Dollar-Smirnov

There are many reports that contain ecommerce best practice guidelines.

Some reports contain nearly 1,000 individual guidelines, and in some cases the best practices conflict. For example, one recommendation says to keep things as simple as possible and break the shopping cart down into steps in order not to confuse the customer. A conflicting recommendation says you should give the customer all the information they need on one page.

Of course, each of these has dependencies and will affect the decisions you ultimately make when designing the experience.

Navigating all the recommendations can be tricky and time-consuming, so I have collated some of the repeated best practice guidelines that do not conflict with each other. For each item I have included a link to the most relevant article on the subject, so you can read more if you have time.

These best practices augment the already established UX best practices for general UI design, which you can read about in Donald Norman's The Design of Everyday Things and on the Nielsen Norman Group website. Those are just two of many useful resources for designers.

The only certainty in best practice guidelines is that they will evolve. And rapidly. This is why testing should always be at the top of the list. The rest are in no particular order.


Test different versions of your pages. A/B testing, MVT testing – any testing is better than no testing.

This one is a universally accepted best practice and is essential to ensure we are always looking at improving the experience for users, building customer loyalty and building revenue for our clients.

Deliver exactly what works best to the user. Increased sales and customer loyalty.

Read more

Button labels

Not ‘Submit’ but ‘Go to payment options’. Imagine road signs that direct you off the road in the correct direction (left, right or straight on) but do not tell you where you are going. It doesn't work. Make sure your button labels relate to the action the user is about to take.


Gives the user a sense of system control and of where they are, and prevents errors. Studies have shown a marked increase in clickthroughs when using this technique.

Read more

Toby Biddle

Guest checkout

Customers who do not already shop online with you can be put off by the perception of long and dull registration forms. Allowing people to check out as a ‘Guest’ reduces shopping cart abandonment.


Reduces shopping cart abandonment and therefore increases conversion.

Read more

Derek Nelson

Reduce clutter

Many studies have revealed an increase in conversion where shopping carts have removed distractions from the process, on the assumption that a user who reaches the cart is already committed to buying, so you should not give them an easy way to get distracted away from the checkout experience.


Reduces shopping cart abandonment and therefore increases conversion. Beautiful design.

Read more

Graham Charlton


Security reassurance is not simply displaying a padlock symbol. It can be any number of things that give the user confidence in the store they are using, from phone numbers to online assistance. So think brand awareness and clear contact information as well.


Brand trust and loyalty. Increased conversion and revenue.

Read more

Spyre Studios

Product images

Rich images are not only great from a design point of view but also tell a story. A picture says 1000 words. Large images that show detail have been shown to increase conversion.


Users glean information from large product images that may not be apparent from the product description.

Read more

Amy Schade

Suggested products

If someone is browsing for cheese, perhaps they would like chutney to go with it. Or if a customer has bought a series of fishing related products, are they likely to be interested in the latest fishing reel? Amazon are famous for their product suggestion algorithm.


This enhances the experience by suggesting sensible additions to the cart so is useful as well as good for revenue.

Read More

JP Mangalindan

Product ‘findability’

This may sound like an obvious one: if a user cannot find a product, how can they buy it? This incorporates clear and clickable categories, searchable products, duplicate categories if required, and recently viewed items.


Customer experience enhanced and therefore revenue.

Read More

Christian Holst


This list represents the majority of the guidelines currently available. They all count towards the ultimate goal – improving the experience for customers and increasing revenue for clients. Having taken all these into account, we must continually test, improve and build on the collective knowledge our industry is accumulating.


Africa road trip: The challenges for app design and development

by Joe Dollar-Smirnov


After speaking to over 50 locals about the Haller app, making reams of notes, taking over 600 photos and shooting 25 videos of usage and interviews, we are now a few days away from boarding our flight back to Blighty.

Coming to Africa to test a prototype was always going to be as much about the technology available to the people as about the usability of the app itself. Dr Rene Haller has been in Africa for nearly 60 years, demonstrating how to turn wasteland into rich land that can yield provisions for a family of 5 without ever having to rely on external aid. During the past week we have met the community farmers his work has had a direct impact on. The app, created in a collaboration between Pearlfisher and Red Badger, is showing great potential to bring information directly into the hands of grassroots farmers. These are some of the hurdles we are seeing:

how to use computers poster


Some of the farmers we spoke to had, before our demos, never accessed the internet. For them, learning what it means to have internet access has to come before using websites; the whole concept is new. The Haller project includes a technology learning centre that is being used to train locals in the basics of internet use. To improve the experience for these people, it will be important to offer regular training sessions to build confidence in what is, for some, a brand new technology.

nokia phone

Mobile Technology

The people we are speaking to generally do not have smartphones. The majority of our target users have feature phones that are incapable of accessing an HTML website. This is a potential blocker for the adoption of a mobile app! Questions we need to consider are how long we wait before smartphone saturation reaches critical mass, and whether we can get this information into people's hands within the current technological constraints. We have been discussing some of the solutions to make this happen, from the use of community group leaders with smartphones and word of mouth, through to SMS-based messaging and IVR (interactive voice response) systems.



Although English is the first taught language in Kenya, there are 42 different local dialects. We have spoken to groups covering just 2 – Kikuyu and Mijikenda. The Kikuyu people are the largest tribe in Kenya and also happen to be among the most advanced in terms of agricultural development, with seasoned tea and coffee farmers, as we saw during our first workshop sessions in Nyeri, north of Nairobi. Less developed rural communities speaking local dialects, such as the Mijikenda, will prefer to read their local dialect; so far we have found that our target users all know Swahili but are not comfortable with English.


Although nearly everybody already has a mobile phone, not everybody has electricity at their disposal. As we all know, a charge on an older feature phone can last for days at a time, but smartphones tend to use a lot of energy. And this does not stop at smartphones: laptops and computers for rural schools suffer from the same issues. Many schools do not have mains electricity – which did not stop the government promising to supply every child in Kenya with a laptop, a program currently on hold due to inconsistencies in the finances surrounding it.


Our research and usability testing concludes with a series of meetings at the dwellings of our target users. The body of feedback and research is reaching ‘saturation point’: we are finding similar comments and issues being raised by different groups of people, and we are able to form a set of validated recommendations for the ongoing design and development of the application. We'll summarise our findings on this blog in due course.