The term “Agency” is overused

by Cain Ullah

What is an Agency?

-noun, plural a.gen.cies – an organization, company, or bureau that provides some service for another.

“Agency” is a pretty broad term. If I say I own an “agency”, in the literal sense I could be in recruitment, window cleaning, taxi driving or a million other occupations. Red Badger are in digital. We build enterprise-scale web applications for the likes of Tesco, Sky, Lloyds and Fortnum & Mason. In my industry, most would classify Red Badger as an “agency”. We are a member company of the Society of Digital Agencies (SoDA), after all. But “agency”, to my mind, is an outdated term that is used to describe too many things.

When my peers, colleagues or competitors talk about an “agency”, we are specifically talking about professional services companies in the marketing/advertising and/or digital space. Have a look at the list of companies in the Econsultancy Top 100 Digital Agencies (formerly the NMA Top 100) to get an idea of what the industry defines as an “agency” these days. There are a number of categories in this list: Full Service/Marketing, Design & Build, Technology, Creative and Media. The categorisation of companies in this list seems dubious at best, and the service offerings of many of them are very different despite sharing a categorisation. The lines between marketing, advertising, brand, web, agency, consultancy, software house and product startup have become far too blurred, and all of them have been thrown into the “agency” bucket.



The term “agency”, for me, has its origins in marketing/advertising, like the AKQA of old. But as we have moved into the digital age, companies like AKQA have had to adapt their service offerings, adding a strong technical capability to their armoury. AKQA were once an advertising “agency”; they now call themselves an “ideas and innovation company”. AKQA still have an “agency” arm, as they still do a lot of the brand/campaign work associated with a typical advertising “agency”. Digital or not, a campaign is not built to last. However, they now also do full-service delivery of strategic applications that have a lasting effect on their clients’ business operations; look at AudiUSA.com. I would argue that this type of work is not that of an “agency”.

With the transition of some traditional marketing/advertising agencies to digital agencies, technical companies such as Red Badger have been thrown into the “agency” bucket too.

This is something Red Badger has struggled with. We don’t see ourselves as an “agency”. As I said previously, for us the term “agency” has its origins in the marketing space, with work largely focussed on campaigns or brand, be it digital or not. We also don’t see ourselves as a “consultancy”, because the connotations of that are associated with big, cumbersome Tier 1 management consultancies such as Accenture and McKinsey.

What’s the alternative?

Red Badger deliver enterprise-scale web applications for large corporations. They are highly complex, technically advanced solutions that can take upwards of 12 months to build. However, we also take User Centred Design as seriously as we do the tech. Everything we build is user driven, beautifully designed and simple to use, and we have an expert team of creatives to ensure this is the case. Finally, we wrap both the tech and creative teams in incredibly efficient Lean processes, running multi-disciplined, cross-functional teams and shipping into live multiple times a day. This is not the work of an “agency”. So for now, as a slogan to describe Red Badger, we have settled on “Experience Led Software Development Studio”.

Why does it even matter?

The overuse of the term “agency” can cause issues. With the ambiguity of what a modern “agency” is comes confusion about what different “agencies” do. For big corporations, the sourcing strategy for a supplier has become equally confusing, because they don’t know what they are buying.

When does an “agency” become a consultancy? Or are they the same thing? How do you differentiate between a digital advertising “agency” and a software house that builds digital products? I’ll leave you to ponder that yourselves.

Some examples of companies that might be in the “agency” bucket but have started to move away from describing themselves as such include the following:

  • Red Badger – “Experience Led Software Development Studio”

  • AKQA – “Ideas and Innovation Company”

  • UsTwo – “Global digital product studio”

  • Adaptive Lab – “We’re a digital innovation company”

Companies are starting to cotton on to the fact that the term “Agency” is confusing and those that provide full service application development are starting to distance themselves from the term and the brand/marketing/advertising stigma attached to it. Surprisingly, companies such as SapientNitro and LBI still describe themselves as an “agency”.

So the question, I suppose, is: do you class your company as an “agency”, or is it something else altogether? I think it might be time for a new term that is neither “agency” nor “consultancy” and is more interesting than “company”. Suggestions on a stamped addressed envelope to Red Badger, please!


What I Learned about the Future of Internet From Fluent Conference

by Alex Savin


By the year 2015 I imagined the internet would be something close to how it was envisioned in Gabriele Salvatores’ cyberpunk film Nirvana. You’d plug yourself in at the cerebral cortex, put VR goggles on, and off you’d go, flying into cyberspace.

Funnily enough, there was a glimpse of that during the recent O’Reilly Fluent conference in San Francisco. But let’s take things in order.

Fluent isn’t a conference to attend if you want deep knowledge of a particular subject. The 30-minute sessions are great for a drive through a topic, to give you an understanding of what it is. What Fluent is perfect at is giving you a broad overview of stuff you had no idea about: 5 simultaneous tracks of talks, an extra track of meetups, another for workshops and, as a bonus, an ongoing exhibition hall with companies and live demos of hardware and software.

There is an optional extra day of just workshops – which was actually good for getting deep into a subject. We went full contact with new ES6/7 features and the intricate details of type coercion in JavaScript, and did a good session on mobile web performance with Max Firtman.

O’Reilly also brought some heavyweight keynote speakers. Paul Irish (creator of HTML5 Boilerplate and a Chrome dev tools developer), Brendan Eich (creator of JavaScript), Eric Meyer (author of pretty much every book on CSS) and Andreas Gal (CTO at Mozilla), among others, delivered a comprehensive overview of where we are now and what to expect in the next few years. I’m going to cover a few interesting trends spotted during the conference.

Rise of the transpilers

This was mentioned directly and indirectly during many talks, including by the creator of JavaScript himself. The future of JavaScript is not meant for humans; it is instead something you compile into. Many new features in ES6 and ES7 should make this easier for compilers. If you have had any doubts so far about things like LiveScript, ClojureScript or even Babel, it’s time to stop worrying and add transpilation to your build pipeline. Especially since Babel might as well become a language of its own.
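As a toy illustration of what a transpiler does (simplified for this post – real Babel output differs), here is an ES6 arrow function with a default parameter next to the kind of ES5 a compiler might emit for it:

```javascript
// ES6 source a human might write:
const greet = (name = "world") => `Hello, ${name}!`;

// Roughly the shape of what a transpiler could emit for ES5 targets:
var greetES5 = function (name) {
  if (name === undefined) { name = "world"; }
  return "Hello, " + name + "!";
};

console.log(greet());     // both print "Hello, world!"
console.log(greetES5());
```

Once a step like this sits in your build pipeline, you write the top version and ship the bottom one without thinking about it.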

React is hot

In all honesty, I missed all the React-related talks during the conference, of which there were quite a few. Catching up with other attendees later, I got confirmation that the talks were indeed mostly introductions to React, Flux and ways of introducing React into your app. What was interesting is that each time I mentioned React, and that Red Badger had been working with it for the past year and a half, I’d become the centre of attention, with people asking questions about how we do things. After describing our (now pretty much standard) setup – isomorphic apps, hot reloading of components in place, immutables for state handling and the ability to deliver the app to no-JS clients – people would often say that Red Badger lives on the cutting edge. I couldn’t agree more.

Many people would also ask: how come it’s been a year and a half already – hasn’t React only just been released? The important part here is that lots of people are hearing about React for the first time now, and are really excited.


Eric Meyer made a great point during his keynote, dropping a (deliberately) controversial slide stating that the Web is not a platform. Right now lots of web developers assume JavaScript support is a given on any client. This is something Eric tried to address. The Web is not a platform; there are, and will be, lots of clients with support for (or lack of) any given tech stack. When you take CSS off a webpage, it is supposed to stay functional, and the content should still be human readable. The same goes for JavaScript, and any other tech we might introduce in the future. Assuming you are targeting a broad audience, it’s always a great idea to implement your app isomorphically, with graceful degradation of the user experience in case the client doesn’t support some of the tech you’re relying upon.

Be conservative in what you send, be liberal in what you accept.

All of this fits very well with React and what we’re currently doing with it. If you have a modern device, browser and JavaScript support, you’ll get the single-page app experience, with AJAX, client-side routing and visual effects. If not, we gracefully degrade to server-side rendering – which will actually improve performance on older mobile browsers instead of forcing JavaScript on them.
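A toy sketch of the isomorphic idea in plain JavaScript (not our actual React setup): a single pure view function produces markup on the server, so no-JS clients get real content, and the same function drives re-renders on clients where JavaScript does run:

```javascript
// One pure view function, usable on both server and client.
function view(state) {
  return "<ul>" + state.items.map(function (item) {
    return "<li>" + item + "</li>";
  }).join("") + "</ul>";
}

// Server side: send fully formed HTML, so even no-JS clients see content.
var html = view({ items: ["tea", "scones"] });
console.log(html); // "<ul><li>tea</li><li>scones</li></ul>"

// Client side (if JS is available): re-render the same view into the
// live DOM, e.g. document.getElementById("app").innerHTML = view(next);
```

React's server-side rendering does essentially this, plus attaching event handlers on the client.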

The importance of progressive enhancement was echoed by @obiwankimberly during her talk on the second World Wide Web browser and CERN’s efforts to revive it. The Line Mode Browser was the first browser ported to all operating systems, and was used by many people around the world. Most of us may not consume the web in text-only mode anymore, but the idea of progressive enhancement extends all the way to accessibility, screen readers and the people who rely on a text-only representation of the Web.

HTTP/2 is here

Or simply H/2. Not only is the spec finished, but H/2 (in its SPDY incarnation) is supported in 80% of all modern browsers.

For the transition period you can set up your web server to fall back to HTTP/1 in case the client is not ready for the new hotness. IIS doesn’t support H/2 at all, but there’s nothing stopping you from putting an Nginx proxy in front of it with full H/2 support.
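As a sketch of that proxy setup (the hostname, certificate paths and upstream address are placeholders; on Nginx builds from before 1.9.5 the `listen` flag was `spdy` rather than `http2`):

```nginx
# Terminate TLS and speak H/2 to capable clients; the upstream
# (e.g. an IIS box) only ever sees plain HTTP/1.1.
server {
    listen 443 ssl http2;
    server_name shop.example.com;

    ssl_certificate     /etc/nginx/certs/shop.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/shop.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the HTTP/1.1-only backend
    }
}
```

Clients that can’t do H/2 simply negotiate HTTP/1.1 over the same TLS connection, so the fallback is automatic.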

Why is H/2 important? A few things:

  • Multiplexing. Each HTTP request takes quite a bit of time to initialise, thanks to something called the “TCP three-way handshake”. Even on a fibre connection this consumes precious time and the user’s patience. In addition, HTTP’s known bottleneck is that slow requests block all other requests. With H/2 you can retrieve multiple resources over a single connection, practically streaming data from the server. There is no more need for concatenating CSS/JS files, image sprites or inlined CSS/JS. Even domain sharding is obsolete now.
  • Security is not optional. It is not part of the official spec, but Firefox and Chrome require you to encrypt all communication when using H/2, which means you have to use HTTPS. This has one interesting consequence: if one of the web proxies along the request’s path doesn’t support H/2, the request will still get through, since it is just a stream of encrypted data.
  • HTTP header compression. All headers are now neatly packed together using something called HPACK encoding.


It seems that the people at Mozilla share my passion for an Internet users can immerse themselves in. If you have an Oculus Rift and the latest Firefox nightly, you’re all set to check out something they call WebVR. The idea is simple: WebGL graphics rendered by the browser into your virtual reality goggles, with you able to look around and interact with the experience.

To boldly go where no man has gone before.

On the second day of the exhibition they enhanced the VR experience by taping a LeapMotion sensor to the front of the Oculus, which allowed people to see their hands and fingers inside the simulation. That’s something that makes the experience truly immersive – we like to move our hands around and see them interacting with stuff.

Another possible use for this technology is spherical panorama viewing, be it still pictures or video. YouTube already supports such experiments, and one can only imagine this expanding in the future. I believe Mozilla’s VR department is spot on about VR experiences being delivered through the browser. There are still lots of questions – most of the VR territory is completely unknown (UX in WebVR, anyone?) – but it feels truly exciting to live in times when such things are coming to life.

Internet of Things

This whole IoT concept was pretty abstract to me until IBM did a live on-stage demo of a cloud-based AI overlord, speaking in a nice voice and doing neat tricks.

I can monitor and command your devices. Typically I monitor vehicle fleets, cool things like power boats, medical devices, and perform real time analytics on information as well as historical data storage and visualisation. I can see you’re wearing heart monitor today. – Sarah, the AI overlord.

During a very short keynote, Stewart Nickolas let a Bluemix cloud-based app scan the room for controllable devices and sensors, and then asked Sarah the App to take control of one of the robotic spheres she had discovered on stage. The interesting part starts when you imagine Sarah taking over fleets of vehicles and smart sensors and making decisions for us humans. IBM is building a platform to unleash apps out of the cloud into the real world.

On conference

Fluent was by far the biggest and most well-organised conference I have ever attended. Here are a few observations on how they did things.

  • Wifi. As usual at internet conferences with 1k+ attendees, it’s hard to provide a reliable wifi signal in a single room. The biggest problem was the keynotes, since everybody would be in the same place, actively posting things. In addition to bringing more routers into the room, they also quickly realised they should ask the audience to pause Dropbox syncing and whatever other cloud backup apps people had on their laptops. That actually helped a lot.
  • Speed networking sessions in the mornings were actually more fun than expected. Not only do you get to meet a bunch of mostly interesting people, it also puts you in the right mood for the conference and makes you a bit less introverted.
  • Game of code. During the first two days you could collect stickers by performing various quests and attending events. Each sticker contained a few lines of JavaScript code. Once you’d collected all 12 stickers, you were supposed to figure out the order of those lines, enter them into the browser console on the Fluent Conference website and, if successful, get a confirmation ID for a prize.
  • Info board. In the main hallway they placed a huge paper board where anyone could leave messages, post job ads or distribute stickers. Often people would pin business cards next to the job ads.
  • The after-lunch dessert location was used to great effect to direct flocks of attendees. In the last two days it was moved to the exhibition hall, so you could munch on a dessert while checking out the Mozilla VR experience.


All keynote videos are now available on the O’Reilly YouTube channel. A few of my personal favourites would be:

I also compiled a video of the whole San Francisco trip – it was my first time in California, after all.


Join me as a new voice in the tech community

by Winna Bridgewater


A glimpse at the weekend workshop I led. Photo by Alessia D’Urso, Girls In Tech UK Board Member.

London has a staggering number of events for coders. It’s awesome.

The only thing is, I’ve gone to a lot of events, and I can count on one hand the number of times a technical discussion – about a tool, an industry standard, or the craft of code – was led by a woman. At the majority of the events I attend, I am one of only two or three women.

I decided that this year, if I want to see more women presenters, I need to step up. I need to talk at an event.

Getting Inspiration

As if the universe was listening, two events came up this spring that got me more excited and less nervous about going for it.

First, Ladies Who Code London ran a workshop on public speaking, and it was hosted here at Red Badger. Trisha Gee and Mazz Mosley were wonderful. Their message was simple: it gets easier each time you do it, and there are ways to work through nervousness. They also emphasized that the audience is rooting for you–everyone in the room wants to see you succeed. Pretty nice, right?

Then I had the chance to attend the Women Techmakers Summit, an evening celebrating International Women’s Day that was organized by Google Women Techmakers and Women Who Code London. There was a series of speakers and panelists, and every presenter had a powerful message. The speaker whose message stayed with me most was the keynote, Margaret Hollendoner.

Margaret said she sometimes makes her career sound like it was all “right place at the right time” luck. But she told us it wasn’t that simple. Every opportunity was lucky in one sense, but the opportunity wouldn’t be there without her hard work. She also emphasized that deciding to say “yes” required confidence and bravery.

Margaret’s presentation gave me another nudge: get past my fear and present at an event.

Saying Yes


Only two days after the Summit, I got an email from Lora Schellenberg about Spring Into Code, a weekend workshop on Web Development offered by and Girls In Tech. Lora asked if I was available to teach–the original instructor had to back out, and they were looking for a replacement.

It sounded like my lucky chance, so I agreed.

Then I got all the details. I was going to be the only teacher for 12 hours of instruction over two days. I’d be teaching at Twitter headquarters to an audience of 100 people.

I felt pretty panicked, so I knew it was time to make some lists.

Why I should not do the workshop
  1. I’m not an expert in web development.
  2. I’ve only been doing this stuff professionally for a year and a half.
  3. I won’t be able to answer all their questions.
  4. 12 hours is a long time.
  5. 100 people is a lot of people.
Why I should do the workshop
  1. I’m not an expert in web development. I still spend most days learning new things. I know what it’s like to feel confused and lost. And I know how to recognize and celebrate small triumphs.
  2. I did set that personal goal.
  3. Those nice ladies did tell me the audience will root for me.
  4. That other nice lady did say you need to take advantage of luck that comes your way.
  5. If I’m going to teach 100 people for 12 hours, this is the ideal audience. Eager learners who, by choice, agree to a weekend in front of a computer taking in as much as possible.

I decided to go for it—butterflies, sweaty palms and all.

There are so many things I could focus on under the umbrella of Introduction to Web Development. I decided my main goals would be:

  • Make techy code stuff seem less scary.
  • Make people feel ok about asking questions.

Saturday morning arrived, and I had a rough start. I spent the first session working out how to use the mic and the two screens floating on either side of me. My notes weren’t loading like I hoped. The Internet was down. My demo wasn’t working even though it worked mere hours before. I was shaking.

After the first demo fell flat on its face, I knew I needed to stop. I took a few minutes to get everything running properly. I got water. I took some deep breaths. Those minutes felt like ages, but it was worth it. When I started up again, stuff finally started working.

The first day flew by. A few folks came by during breaks to say they were enjoying themselves. At the end of the day, lots of people came by to say thanks. Were my silly jokes working? Did my missteps when typing—forgetting a closing bracket, leaving off a semicolon, incorrectly specifying a source path—help people understand that breaking and fixing things is what this job is all about? During the second day, people from all over the room were asking questions. Tables were helping each other debug and understand what was going on. Breaks came and went with people staying at their seats to try things out. I couldn’t have hoped for more.


I have so many ideas about what I’d change if I could do it again. I missed some concepts. I glossed through others. But I did it, and I had an amazing time.

If you’re tempted to give a talk or run a workshop, please go for it. It’s scary but great, and you have a wonderful community rooting for you.


Staggeringly Smooth – the Fortnum & Mason E-com Release

by Roisi Proven


A release as smooth as a great cup of Fortnum's tea.

It would be difficult for me to overstate the importance of launching a brand new version of an international e-commerce website. This was no mere reskin: we have been rebuilding the Fortnum & Mason website from the ground up, starting with the decision to use Spree Commerce for the storefront and continuing all the way through to building them a bespoke CMS using our very own Colonel.

Because the Fortnum brand values customers above all, they felt it was of the utmost importance that customers help drive the direction of the site. As a result, we decided it was important not to rush the release. We have spoken already about our approach to deployment, which allows us to deliver rapidly and regularly, but to get to that stage we needed to be confident that the core of what we were delivering was sound.

The First Step

The first thing we had to do before we could make a plan was to figure out where our weaknesses were. If we knew which elements were most likely to fail, we could work pre-emptively to fix those areas before going live to the world. What has always been apparent to us is that, with so many 3rd parties to rely on, no amount of automated or manual testing was going to truly expose our pain points.

So, as soon as the site was fully transactional, we made the decision to do a highly controlled soft launch before the Christmas peak. A selection of Fortnum’s trusted customers were contacted and given a password to our still very much unfinished site. By communicating with these customers and making them a part of our development, we hoped not just to get a more robust test of the site, but also to gain feedback from these key users to inform ongoing development.

The Slow Burn

And so the feedback came. However, it came slowly – too slowly for our liking. So, further down the line, towards the middle of January, we had a meeting with the team over at Fortnum. We came up with a plan to run both the old site and the new site in parallel, directing traffic in ever-increasing quantities to our site. And so new.fortnumandmason.com was born.

The need to log in to access the site was removed, and people started using our site. For the first time we weren’t just inviting people to use the site; we were allowing them to get there on their own, initially via a marketing email, and later by directing a percentage of traffic from the old site to the new. In doing this, we were making sure that when the time came to release to 100% of the public, we wouldn’t see the dip in sales and conversion that so often accompanies site relaunches.

Running two sites at the same time came with its own problems. There were moments when we wondered if we should have held off on introducing users to the new site, despite knowing that releasing to customers early, with the intention of learning from them, was hugely important. But how well it worked was undeniable. Major issues were uncovered, but instead of affecting hundreds of customers, each would affect only one or two. If a couple of people encounter an issue, you can contact them personally and make sure they still feel valued. If hundreds, or even dozens, of people are affected, that becomes a great deal harder. And if there was an issue we needed more time to fix, it was very easy to direct all users back to the old site.

“Release” Day

The 17th of February rolled around: the day we had all been waiting for. For the previous fortnight, around 40% of Fortnum & Mason users had been directed to new.fortnumandmason.com using the Qubit system. We had encountered issues, some minor and some not so minor, and had managed to maintain our development momentum while fixing them. Using Kanban, we had a clear view of work in progress and of bugs that needed fixing urgently, all the while maintaining a focus on throughput – getting features shipped into live.

At 8am we made the call to change the DNS records. The old site drifted away, and new.fortnumandmason.com became www.fortnumandmason.com. We pulled up our analytics and collectively held our breath.

It just worked.

A part of everyone was waiting for the other shoe to drop – for complaints to start flooding in or for orders to start failing. A bigger part of us knew that our gentle filtering of customers over the previous weeks and months had prepared us well. Of course there were still small issues – a customer struggling to pay here, a missing product there – but overall there was nothing alarming.

Ongoing Positivity

The new Fortnum & Mason, complete with Feedback banner


As the website bedded in over the next couple of weeks, we continued to see minimal issues and an increasingly positive impact. We have already seen overall growth of 89% year on year, with conversion up an impressive 20%. We have also seen an 18% reduction in customer service calls, with a particular drop in calls about payment issues.

As we finish up our first couple of months being fully live, the mood across Red Badger, and across the business at Fortnum, has been hugely positive. We are immensely proud of what we have achieved, and we don’t feel that we could have had such a successful release if we hadn’t ramped up slowly in the way that we did.


Announcing Arch – a functional style application framework for React

by Viktor Charypar



A little over a year ago, Red Badger teamed up with the Haller foundation to help them build an app for farmers in Africa. As we always do at Red Badger, we looked at the new and interesting technology available at the time and decided we’d try to build it using Facebook’s new UI library, React.

Little did we know that within a year React would become a major part of pretty much all of our client projects and some internal ones. We fell in love with its simplicity and its power to deliver complex user interfaces quickly, and with far fewer bugs than with any other tool before.

Naturally, we started building entire applications with React on the front end, and we quickly realised we needed a bit more of a pattern for managing state, and data in general, in an application. At about that time Flux came out and we tried our own implementation of it successfully, but something didn’t feel quite right.

We loved the functional aspects of React, like the fact that React components can be pure functions of their props (which works beautifully with LiveScript). That simplifies testing, and even just a basic understanding of what’s happening in the application at any given time. But passing around references to stores, and figuring out what state all the stores are in, still felt complex compared to the beautiful simplicity of React itself.

We really liked the ideas introduced in Om – the React binding for ClojureScript – but moving to an entirely different stack felt like a pretty big step. It is also not the easiest thing to eventually hand over to a client, whereas everyone is familiar with JavaScript. We are keeping an eye on Clojure and ClojureScript for another time; in the meantime we wanted the benefits of the architecture in a familiar stack.

At that point our React stack had become quite complicated – we used LiveScript and browserify, and tried a couple of different routing libraries to get the isomorphism that React makes so easy – and gradually our answer to the question “What should I do to get the most out of React?” was getting more and more complicated. So we decided to distil our experience and opinions into a framework that gives you all of the features we loved, and a pattern for building React applications the best way we know how.

We spent the past couple of months slowly building and refining it, and towards the end of April we finally got it to a state where we felt it was ready to open source. We call it Arch.


Arch is a front-end, functional-style application framework using React as a UI layer. Arch applications are isomorphic out of the box, including form processing. This means you write your application as if it were client-side only, and Arch manages the server-side portion.

This also means you don’t get any control over the code running server-side, which is a design decision. The theory behind it is that any server-side code you need should sit in a separate server application that you talk to over an API. This is very similar, at a high level, to the architecture of Facebook’s recently announced Relay framework, and we agree that the server-side portion of your application should be separate. (As with anything in Arch, with a bit of effort you can opt out of the choices we made and run the Arch server from your own node.js/io.js application.) For development, Firebase is a good tool for API-based server-side persistence.

Arch applications are written in LiveScript by default. We picked it because it’s a functional language that is a perfect fit for React and makes it a joy to build React applications. It doesn’t take long to learn (at the end of the day, it’s still just JavaScript) and gives a huge productivity boost while letting you write very, very readable code. Although you can just as easily build Arch applications with ES6 and JSX, it’s definitely worth taking a look at LiveScript.

Central state

The biggest feature of Arch is the application architecture it proposes, inspired by Om and other ideas from functional (and functional reactive) programming. The architecture rests on the idea of a central, immutable state describing your entire UI in a simple data structure. That data structure – a single composite value – serves as a complete model of your user interface at any point in time. Your user interface is a pure functional projection of this data structure.

Obviously, rendering a single UI state is not enough to build an application: you need to update it over time in response to the user’s actions, and you need to do so in a way that doesn’t couple your entire application to the structure of the central state. In other words, you need a way to distribute the central state to your UI components and collect the updates they make in response to user events. Om’s and Arch’s solution to this is the cursor.

A cursor lets you dig down into a composite data structure and pass around a reference to a particular path in it, which can later be updated. The receiver (a UI component, for example) doesn’t need any understanding of where in the data structure the cursor points. So if you pass a cursor holding a piece of state to a React component, the component can safely use it as if it were the entire state. For example (written in LiveScript):

render: ->
  profile-picture picture: @props.user.get 'profilePicture'

When rendering a profile-picture component we pass in a picture prop, which is a cursor we obtained by getting the profilePicture key from the user cursor passed to us in the current component’s props. Inside, we can then use the value:

render: ->
  dom.img do
    src: @props.picture.get 'url' .deref!

The deref! call dereferences the cursor and gives you a plain piece of JSON data back.
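To make the mechanics concrete, here is a minimal cursor sketch in plain JavaScript. This is an illustration of the concept only, not Arch’s actual implementation or API: a cursor pairs a root state container with a path; get narrows the path, deref reads the value, and update writes a new value back through the root immutably, notifying any observers.

```javascript
// Minimal cursor sketch (illustrative only -- not Arch's real API).
// A cursor is a (root, path) pair; all updates go through the root,
// so components never need to know where their data lives.
function makeRoot(initialState) {
  const root = { state: initialState, observers: [] };
  root.onChange = (fn) => root.observers.push(fn);
  return root;
}

function cursor(root, path = []) {
  const read = () => path.reduce((node, key) => node[key], root.state);
  return {
    // Narrow the cursor to a sub-path without reading the value yet.
    get: (key) => cursor(root, path.concat(key)),
    // Dereference: read the plain value at this cursor's path.
    deref: read,
    // Replace the value at this path, rebuilding the state from the
    // root down (the old state value is never mutated), then notify.
    update: (fn) => {
      const set = (node, keys, value) =>
        keys.length === 0
          ? value
          : { ...node, [keys[0]]: set(node[keys[0]], keys.slice(1), value) };
      root.state = set(root.state, path, fn(read()));
      root.observers.forEach((observer) => observer(root.state));
    },
  };
}
```

Usage mirrors the LiveScript example above:

```javascript
const root = makeRoot({ user: { profilePicture: { url: 'default.png' } } });
const picture = cursor(root).get('user').get('profilePicture');
picture.get('url').deref(); // → 'default.png'
```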

Application state and user interface are not the only things you will ever need. There’s always some business logic to do: computations, data validation, syncing with an API, collecting statistics, etc. For that, Arch uses state observers.

Each cursor is observable, and you can register a change handler on it that does work every time the value changes – for example, talking to a backend API:

query.on-change (q) ->
  # make API request
  .then (results) ->
    search-results.update -> results

Whenever the query changes, this observer makes an API request and updates the search-results cursor with the results from the response. Extracting the business logic into state observers results in React components that are easier to reuse and a more modular application. Changing a search provider, for example, can be as easy as swapping the relevant state observer without touching the UI components.

Arch focuses on simplicity (as opposed to complexity) and minimal solutions at every step, which sometimes means things seem harder at first. To help people get started, we provide a CLI that generates a new application when you simply type arch-cli init in your terminal, and a server that handles the isomorphic aspect of your app (arch-cli serve). Arch tries to provide a good choice for every decision you make when building an application from scratch, but at the same time not constrain you. We believe the default set of choices is sensible, but you should be able to opt out of each one. The aim is also to split Arch up into submodules, so you can, for example, use just our cursor implementation. All these goals together should hopefully make a good, flexible tool for building React applications.

Can I use it?

Hopefully the previous paragraphs gave you enough of an overview of what Arch is, the philosophy behind it and how you would use it. Although Arch itself or parts and ideas from it are used on multiple projects at Red Badger, it’s still very early in its life and not necessarily ready for production without a bit of effort. We’ll continue working on it and post regular updates.

You can find out more about Arch at http://archjs.org (currently redirects you to GitHub, but we should have a website up soon). We’d appreciate all feedback and examples of the awesome stuff you build with it, and hope you’ll like it as much as we do!


Don’t go chasing waterfalls

by Phil Brooks



I used to really like being a Project Manager, I mean really like it. The order, the structure, the process. I would talk to people about ‘how things got done in an organised manner’ until their eyes glazed over. You see, I cut my project management teeth in local and central government. Yes! I was an advocate and practitioner of PRINCE2 and the waterfall methodology. Now don’t get me wrong, I am not saying it’s not the right approach. It’s just not the right approach for me. I thought it was, but then something happened.

I discovered (or rather, was introduced to) a new way and soon realised that the light at the end of the tunnel was not the headlight of a fast-approaching, document-heavy, process-driven train. There was an alternative. So I mothballed the Gantt charts and Microsoft Project, archived the 30-page requirements and specification docs and confined the ‘sign-off by committee’ trackers to the basement. I packed my things in a hanky on a stick and went on my merry way.

My leap into the Scrum unknown opened my eyes to a more flexible way of working, but with some limitations. Oh yes, you still have to do certain ceremonies: sprint planning, estimating (be it story pointing or time-boxing), end-of-sprint retrospectives, end-of-sprint demos and daily scrums. All of which are valuable in their own right, but a little overkill, don’t you think?

That aside, I still really, really liked being a Project Manager. It opened up my creative side; it was not all about spreadsheets and trackers, highlight reports and phone-book-sized documents. It was writing and drawing on walls, post-it notes and sharpies, collaboration and teamwork, trusting and enabling. I noticed the teams were more open and honest. They worked together to find solutions and listened to their peers. My role changed from telling to coaching, from reporting to showing and from excessive planning to agreed prioritisation. I was in my element and really, really enjoyed my job.

Then I joined Red Badger and pretty much everything turned on its head. Firstly, I was introduced to Kanban (similar to Scrum, but without the sprints). Secondly, there is no right or wrong way and nothing is set in stone. Thirdly, and probably most importantly, I was (and am) given thinking time, a luxury that us Project Managers seldom get. I had to unlearn what I had learnt and un-think what I had previously thunk!

I had never used Kanban before and, to be honest, was a little worried that I had no experience with this particular methodology. But I needn’t have concerned myself. At Red Badger we are encouraged to think differently, encouraged to do what we think is right, encouraged to continuously improve and generally encouraged.

Kanban is our preferred approach; however, how we tweak and change it is totally the team’s shout. Oh yes, it’s a team thing, a collaboration not a dictatorship; at times even the COO hasn’t got all the answers! (Although he mostly does. Ask his opinion or advice and he will give it; it’s your call if you use it.)

Here at the badger sett we use a mix of Kanban, common sense and flexibility to enable us to meet clients’ expectations and deliver in a sensible way. Each team is empowered to make relevant changes to its processes, which makes obvious, but often overlooked, sense, as each client is different and has specific drivers and goals.


In my team we have made some radical changes, not to be non-conformist, but simply because it’s what works better for us. Flexible, agile, collaborative and forward-thinking:

We don’t do estimating - we see no value in finger-in-the-air guesstimates. We do the work and collect the metrics. Real data on real events giving real lead times.

We use Little’s Law - (what we have to do + what we are already doing) / what we ship in a given timeframe (daily in our case). We ask the team to put a date on the card when it leaves the todo column and a date when it is shipped. IN and OUT. This gives us our estimated lead time and highlights tickets that have been in play for excessive amounts of time. We use this data to learn and understand what happened and how to avoid it happening, or to ensure it continues happening. We also track blockers and impediments, as these have an impact on our lead times.

Now, before you ask, yes I do have a spreadsheet, and it’s a huge one too (I am, after all, a Project Manager), but I don’t bore the team with it. I take out what is useful for them, like average WIP, how long it should take to do the todo, overall metrics for the entire backlog and the occasional cumulative flow diagram. I capture data daily: how many tickets are in each lane – todo, WIP, shipped. Now I have estimated lead times, useful for when you want to advise your clients on how long something might take and why.

No more finger-in-the-air guesstimating. Remember: real data, from real events, giving real lead times.
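The arithmetic behind that is just Little’s Law rearranged: estimated lead time = (todo + in progress) / throughput. A tiny sketch of the calculation, with invented numbers (the real figures come from the dates on the cards):

```javascript
// Little's Law: lead time = work in the system / throughput.
// The numbers below are invented, purely for illustration.
function estimatedLeadTimeDays(todo, inProgress, shippedPerDay) {
  return (todo + inProgress) / shippedPerDay;
}

// 20 tickets in todo, 4 in flight, shipping 3 a day on average:
estimatedLeadTimeDays(20, 4, 3); // → 8 days to drain the board
```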


We don’t do retrospectives at a set time or on a set day - we do retrospectives when it makes sense to. We did do a retro every Tuesday at 16:30, but when that reminder popped up on the screens it was met with an audible groan from the team. So we ditched it. Now we collect thoughts and topics, and when we have three or more, we do a retro. The team control the topics, unless there is a crisis; then we focus on that and do a retro around that specific item. We also keep it quick, sharp and to the point, so nobody has a chance to get bored and switch off. But in all honesty, we talk to each other constantly, so we tweak little and often.

We don’t do massive planning sessions - we plan retrospectively. When we have enough todo, we focus on that. When the todo starts emptying out, we plan the next things that will be pulled from the backlog. We focus on the job in hand, we don’t waste time planning for something that might not make the cut.

We have continuous, one-piece flow - the team focuses on the board, from todo, discovery, UXD, dev and test to shipped. Nothing goes backwards; we have exit criteria for each swim-lane. If a bug is found, the ticket stays where it is and a fix is worked on. Continuous flow: everything moves from left to right, nothing jumps back and nothing falls off the board. Once a ticket passes todo, it is tracked to the end, every time.

We include bugs and issues in the flow - it’s still work, right? It still needs to get done, so why separate it out? Why not prioritise it along with the other things to do? If the site goes bang, we all jump on it; a slightly irritating, non-customer-facing issue we could live with, we prioritise and get on with other things (unless the client or the team think it should be sorted as soon as possible). But it’s all work; some is big, some small, but work nonetheless.

We include UX and design in the flow - again, work is work, right? We are all the same team, right? So why segment it? Well, we don’t. If a ticket has UX or design elements, they get done in the flow and we measure the throughput along with everything else.

We pair program - the designers work closely with the developers to do the designs within the app, saving time, effort and iterations. The developers pair with developers; they share knowledge and skills. They collaborate and review, and they produce quality-assured code with little to no technical debt.

We collaborate, communicate, celebrate and encourage, through all stages of the process. 

I am fortunate to be involved in some of the most innovative, creative, technical, ground breaking and award winning projects with Red Badger. I love using Kanban. I love being able to work with a team of awesome individuals from different specialist areas. I love the ‘lets try it and see’ approach. I love the challenge and the change. 

Since joining Red Badger, I really, really love being a Project Manager, which proves that working for the right company and the right group of people can have a profound impact on how you feel about your job.

Thanks for reading to the end, I take my hat off to you. If you want to see how we do it here, come and have a chat. We love a cuppa and a biscuit. 

visit us at http://red-badger.com/


Phil Brooks 



Death by meeting (strippers need not apply)

by Harriet Adams

Over the years, I’ve worked with a number of FTSE 100 companies. To this day, I still have no idea how anyone gets anything done, considering most of their 9 – 5 is either in meetings, preparing for a meeting, or talking about the actions from those meetings in another meeting.



Utterly bewildered, I’d go along, often spending hours going over the same topics for the output to be:

  • We need another meeting to discuss further because we’ve run out of time
  • We can’t make a decision yet because [insert name] isn’t here
  • We don’t have enough information to make a decision

URGH. If there’s one thing that frustrates me, it’s inefficiency. What’s the point of inviting all these people unless we reach a decision or a series of next steps? And why on earth am I here?!

It’s almost like the more people involved, the better and more important the meeting is perceived to be.


Funeral strippers

A couple of weeks ago, I read an article about the Chinese authorities clamping down on “funeral strippers”. Supposedly, the more mourners you have at your funeral, the more well-off your family appears. Therefore, in order to achieve higher attendance and seem wealthier, some families have resorted to hiring strippers to attract the crowds.

On paper, it may seem a bit of a stretch as a comparison, but all too often we are big-headed and assume that what we need to discuss is particularly important. The entire team (and often innocent bystanders) are invited to pointless, boring discussions with no tangible output. Everyone joins, drawn in by the promise of something exciting, feels awkward throughout, and ultimately leaves feeling dead inside.


Upfront contracts

I learnt a technique on a training course a while ago about Upfront Contracts – something I try to enforce before every conversation and every meeting. A UFC should consist of the following.

  1. Purpose. What’s the point of the meeting / conversation?
  2. Agenda. What are we going to be talking about? Are we all in agreement that this is the right agenda?
  3. Timing. How much time do we have in total to reach a decision?
  4. Output. What do we all need out of the meeting, and at what point do we decide that it is over?

What this doesn’t consider, however, is the quality or number of people in attendance. This goes back to the Lean Start-up concept. Who are the influencers that will enable you to progress? What’s the minimum amount of input you need to reach a way forward?

This is a very important point, as inviting the wrong people may lead to the same problem, despite having a clear set of objectives.


Another one bites the dust

In fact, sometimes meetings aren’t necessary at all. Again, referring back to Lean Start-up, sometimes it’s necessary to pivot to be more efficient. A good example of this at Red Badger was during a recent project. Retrospective discussions were being held at 4.30pm every Tuesday afternoon but it became clear that the team were mature enough to raise issues in a reactive manner, and add them straight to the Kanban board.

By removing the fixed weekly slot and only running the retrospective when there were three items for discussion on the board, the team were able to be more productive while continuing to make important changes to the process when necessary.


Making peace 

I’m not saying that ALL meetings are unnecessary. Quite the opposite.

But next time you have an important decision to make or need to arrange a meeting, think about the following.

  1. Is this meeting absolutely, definitely necessary?
  2. What is the purpose of the meeting and what do you need the end result to be?
  3. Who are the main influencers in reaching this point?
  4. What do we need to talk about to get to this result?

If you ask yourself these questions and make the answers clear to everyone who joins, you’ll make decisions faster and avoid disrupting the team.

In layman’s terms, you’ll be slicker and quicker (with no stripper).


Fortnum & Mason – Slack Deployments, Confident Delivery

by Jon Sharratt

As you may or may not be aware, the team at Red Badger has been hard at work crafting the new Fortnum & Mason e-commerce website. It has also recently been nominated for the best customer experience award at the BT tech & ecom awards (https://techecommawards.retail-week.com/shortlist-2015). We delivered the site from concept to live in just 8 months using agile and lean methods such as Kanban. One of the core concepts that complements our Kanban approach when delivering features on the project is the ability to deploy without friction, confidently, multiple times a day.

Let’s break this down and take a high-level look at what the process looks like.


Reduced Friction

You might recall I blogged about how we used GitFlow (http://red-badger.com/blog/2013/08/15/sprint-efficiently-with-github/) within our development team. On Fortnum & Mason and other recent projects we have moved to GitHub Flow (https://guides.github.com/introduction/flow/), mainly due to the supporting features recently implemented by GitHub.

The core principle is that the team pair-programs on a feature branch. A pull request is then created with the relevant specs; it then gets reviewed and collaborated on by the team. Once the feature branch passes all of the tests via CircleCI, it can be merged into master. Our master branch always reflects production code. We use CircleCI to execute our Ansible scripts for provisioning and deployment.

On Fortnum & Mason we have unit tests along with golden-path journeys written in Capybara that run using ChromeDriver. These golden-path specs cover the core journeys that verify the site is transactional. Once all of the specs have passed, the master branch is deployed to our staging environment, immediately ready for our QA to test.

If our QA is happy with the build, they take ownership and tag a release via GitHub releases (https://github.com/blog/1547-release-your-software), stating what issues have been fixed as well as any new features added. The last step is to give the release tag a semantic version (http://semver.org/) number. This gives us fantastic rolling documentation: at a glance, everyone can see what changes have taken place. We are as transparent with our clients as we can be, and our product owner has access too, so they can take a look and really get a feel for what work has been completed day to day, or even minute by minute.

Another tool widely used across Red Badger is Slack, for company-wide collaboration and communication. For this project we decided to set up hubot (https://hubot.github.com/), an automated bot that (mostly) obeys your commands. We added a couple of custom scripts that allow the QA, or any of the team, to deploy a release as and when necessary. It is as simple as the message @badgerbot fm list tags, which lists the 5 latest tags in our repository. Once you have the tag you want, you can deploy it with @badgerbot fm deploy v1.0.0. This triggers a parameterised build (https://circleci.com/docs/parameterized-builds) within CircleCI that runs the relevant Ansible scripts for the specified tag, which then deploys into the production environment.



Increased Confidence

Our deployments already come with a high degree of confidence due to the development practices of pair programming, code review, specs and QA-tested features and issues. But if something does go wrong in production, we are safe in the knowledge that we will know about it immediately. How do we know? Well, in comes New Relic and a quick Red Badger service we hacked together in a few hours. The Fortnum & Mason site is rigged up all over with New Relic alerts and events throughout its codebase. Every instance, moving part and third-party call has its instrumentation and performance tracked. CircleCI even tracks each deployment, so we can quickly see any performance degradation for every deployment that goes out.

Another element we have on Fortnum & Mason is the ability to flip features on and off, using a concept called feature flipping. This allows us to incrementally release larger features to select users; we can then be confident a feature works, as the new code runs side by side with deployed production code. A good example is adding another payment provider such as PayPal: we can test-run it in production with a few users to make sure everything integrates before switching it on for everyone. We have fine-grained control and can release to the product owner, groups of users or even a random percentage of users.
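As a hedged sketch of how such a flipper can work (the flag names, groups and rollout rules below are invented, not Fortnum & Mason’s actual setup): a flag can be enabled for named groups or for a stable percentage of users, bucketed deterministically so an individual’s experience doesn’t flicker between requests.

```javascript
// Illustrative feature-flipping sketch (invented flags and rules).
const flags = {
  paypal: { groups: ['product-owner', 'beta'], percentage: 5 },
};

// Stable hash so the same user always lands in the same bucket (0-99).
function bucket(userId) {
  let h = 0;
  for (const c of String(userId)) h = (h * 31 + c.charCodeAt(0)) % 100;
  return h;
}

function isEnabled(flagName, user) {
  const flag = flags[flagName];
  if (!flag) return false; // unknown flags are off
  if (flag.groups && flag.groups.some((g) => user.groups.includes(g))) return true;
  return bucket(user.id) < (flag.percentage || 0);
}
```

A product owner in the right group sees the feature immediately; everyone else is admitted by the percentage rollout.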

Feature flipping really supports the team’s principle of always moving forward.

Here is a breakdown of the monitoring and alerting services we use and what for:

New Relic Application Performance Monitoring (APM)

Instrumentation, performance and error logging. Every server and third-party call is logged, and alerting is set up to inform us of any bottlenecks and errors.

New Relic Synthetics
Continuous golden-path testing to ensure the site’s transactional flows are always being exercised. Selenium scripts using Chrome run every 15 minutes to ensure the core journeys on the site are operational.

New Relic Insights
Customer behaviour analytics as well as KPIs. We log everything from delivery methods and revenue to average basket sizes and much more, allowing us to analyse and test new assumptions to improve the customer experience.

Red Badger Phone Alerting
Although not part of New Relic, we hacked together a service that accepts hooks from Zendesk and New Relic. If any critical part of our monitoring raises a critical alert or email, the service uses the Twilio API to phone the badger who is on 24/7 support.

Early and often

With all this in place, fixing issues and deploying new features multiple times a day is second nature to the whole team. Deployments are not blocked by hefty deadlines, and big ‘release planning’ becomes a thing of the past. A quick review of the tagged release in GitHub by the team is all that is required. The other huge benefit of deploying early and often is that it minimises the risk of every deployment, through ultimately smaller incremental changes.

So next time you are ‘release planning’, ask yourself how confident, efficient and easy it is for you and your team to deploy multiple times a day.


I heart pens and paper

by Steve Cottle

Pile of paper with wireframe sketches


I love pens and paper and putting them together to make marks and lines and scribbles and that. Lots of people tend to avoid them nowadays, favouring the tippy-tap of a keyboard and the clicky-drag of a mouse. A few years ago I moved jobs and was surprised to see one of my new colleagues taking notes by typing stuff into Word. Maybe I’m showing my age, but that really can’t be a good way of taking notes. The office had a healthy supply of pens and all manner of notebooks to choose from. Still she punched away at her laptop. She certainly wasn’t the quickest typist in the world; how did she get everything noted down? How would she go back and make new notes around her original notes? How would she doodle the cup of tea she was drinking, or the sandwich she was looking forward to at lunch? What was wrong with her?


It’s all about the pens and the paper, people. The pens and the paper…


Easy peasy

The vast majority of us can sketch a line, with little to no training. Those who can sketch a line can join lines together too. Once the lines have been joined together, loads of options become available. We can now sketch anything in the whole world, simply by joining lots of lines together in different ways. Woah there, anything? Sure – remember, sketching isn’t drawing. The marks on the page don’t have to be a photographic representation of what’s in your head or what you see in front of you. The purpose is to get the ideas out of your head and to communicate your thoughts to other people. Sketches are ideas. The sketch needs to be legible, but in no way does it need to be super polished. And it just so happens that if, like me, you’re responsible for communicating ideas for stuff like processes, websites, apps, flows and journeys, the basis of our sketches is boxes. We just need to sketch the boxes. Who can go wrong with a box? And if you’re feeling a little sketchy at first – don’t worry, the more you do, the more comfortable you’ll become.


Feel the draw sketch

Once you start putting pen to paper, a wonderful thing happens. You start to show people; you pick the paper up and turn it over, slide it across a desk, touch it and pin it to a wall. Now your sketches are visible and out in the wild for all to see and get involved with. Now other people are interacting with them and collaboration begins. The idea isn’t hidden away behind a computer screen – its ownership has been removed and it belongs to the team. Even jargon becomes watered down and a common language develops. You’re no longer trying to remember the names of modules or functionality; you’re pointing things out and bringing them into the group.


As easy as they are to create, sketches are also easy (and fun!) to get rid of. Screw them up and throw them (literally) away, make paper aeroplanes from them, or even an origami swan. The interaction continues even when they’re on their last legs.


Pros and pros

Sketching is quick. You really can make a fair few marks on some paper while your colleague’s computer is firing up and their favourite software is loading. It requires minimal setup and very little investment in time, training and materials. Also, you don’t need to match software across teams or make sure you have the right number of licences. The tools are available from loads of easily accessible places and sometimes for free (shops, lying around, Argos*).


*I don’t advise you use Argos as a supplier of free sketching material.


Something old and something new

Wikipedia says paper was invented by our Chinese friends during the Han dynasty (206 BC – 220 AD). It was the equivalent of modern-day wrapping paper and bubble wrap – used to protect anything from mirrors to medicine. More recently, architects and design engineers developed this ancient packaging material by making it more translucent, so drawings could be precisely copied. This process of tracing is one of the fundamental processes in product design. Iteration is key to getting closer to solving a problem, refining and developing an idea. With sketching you can grab a new piece of paper, trace the old version and iterate a new one, again and again and again.


Hmmm, but …

Hold on there, smokey Joe, we don’t want any of that “well …”, “erm …”, “I dunno …”. It’s quick, fun and produces lots and lots of ideas. And it all started thousands of years ago in ancient China.


Here at Red Badger we open sketching up to everybody from UXers, Designers, Engineers and Developers to the users of the products we’re producing. And from a tinchy 20 minute session we can get a whole heap of ideas and potential solutions to the problem we’re working on. All with no training and no budget. And we all get a break from our computer screens t’boot.

So, go on. Tool up, get your pens and paper out and pull up a pew with your team.


Tech Round Table 2015

by Stuart Harris


Every year since 1988 I’ve been saying “there’s never been a better time to be a software developer” and I expect to continue saying it for a long while yet. But it seems that the last year has been especially significant. Several incredibly exciting technologies have emerged recently that are changing everything.

In our 5-year history, Red Badger has never seen a year like this one. The open source movement is truly blossoming and its benefits are rippling through the software industry at lightning speed. We recently got together to list out all the tech we love, but there are a few notable technologies that I want to highlight.

Facebook React


The first is Facebook’s React.js. I think this is the most important development in web tech in the last 10 years. Less than 18 months old, it’s managed to turn the web developer’s world upside down. The traditional MVC approach with data binding, which we’d been thinking was the best way to build web apps for a decade or more, turns out to be inferior to the more functional approach that React takes.

React is so simple because it allows the UI to be a pure function of application state. This makes applications much simpler and a lot easier to reason about.

UI = f(state)

There you go. Now you know React! That’s it. Wow. Who’d have thought it could be that simple. When the state changes we simply apply the function again and bingo, we have new UI. That’s at the heart of a new revolution in UI engineering.
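To make that equation concrete, here is a toy sketch in plain JavaScript. The “UI” is just an HTML string so the example stays self-contained, but the shape is the same as a React render function:

```javascript
// UI = f(state): the whole view is a pure function of one state value.
// Toy version that produces an HTML string instead of React elements.
const ui = (state) =>
  `<h1>Hello, ${state.user}</h1><p>${state.unread} unread messages</p>`;

let state = { user: 'Ada', unread: 2 };
const before = ui(state);

// When the state changes, we don't patch the old UI by hand --
// we simply apply the same function to the new state.
state = { ...state, unread: 3 };
const after = ui(state);
```

React’s contribution is making this model fast, by diffing the old and new output and applying only the minimal changes to the real DOM.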

It turns out that if we build slightly different functions we can create native UI for mobile devices (React Native), for TVs (e.g. Netflix), for HTML5 Canvas (e.g. Flipboard) – for any rendering surface. And the same team can build all of these. As Facebook says: “learn once, write everywhere”.

How to manage the state is a separate problem. And Red Badger has recently open sourced a new application framework called Arch, which makes that bit easy too, whilst leveraging all the incredible power of React.

React’s popularity is soaring and it’s rapidly establishing itself as, hands down, the best way to build user interfaces. As an example of its popularity, Red Badger started the London React Meetup in June 2014. After a few months we outgrew our office and now host the meetup at Facebook’s London office. Already, the meetup has nearly 1100 members and the 250 seats each month get snapped up in 30 minutes. On May 20th we're holding a special meetup at Cargo as part of the Digital Shoreditch festival. Come along. If you can get a ticket.

Big thanks to Facebook for bringing React to the world.



Linux Containers

The second technology I want to mention is Linux Containers (LXC), made popular by Docker.

When you develop an application these days, you really need a Macbook Pro with unfettered access to the Internet and all its open source goodness. But you often have to deliver the application into secure locked-down operational networks. Before containers, you had to work inside these restrictive environments and it’s so difficult it’s enough to drive you insane. Now you can build your application in containers in the open environment of the Web and then ship those same containers to your test environments, then to your staging and production environments. The containers hold everything that your application needs, so they can run anywhere. And I mean anywhere: Circle CI and other test and Continuous Integration environments, public cloud infrastructure like AWS and Azure, and private cloud infrastructure like IBM Bluemix and Red Hat’s OpenShift.

Containers are the enabling technology for true Continuous Delivery pipelines. You can automatically push (and scale) your application into any environment you can think of, regardless of how locked down and secure it claims to be.

The developer’s handcuffs are removed and the business gets continuous improvement with very little maintenance overhead. And because the application is running in the exact environment in which it was created to run (and tested in), it’s more stable and secure. Everyone wins.

ES6 and Babel


At Red Badger we've used LiveScript a lot. That’s because it’s a great language with loads of functional goodness influenced by great functional languages like Haskell and F#. We still love LiveScript, but now, with Babel (thanks to Sebastian McKenzie), we can use ES6 everywhere. ES6 is the upcoming JavaScript standard and it’s so much better than ES5 (the current JavaScript). It doesn’t have everything that LiveScript has (like currying, piping, prelude-ls, etc.) but it goes a long way and it’s getting lots of traction because of Babel. Browser support is getting much better and it will very soon be as ubiquitous as ES5 is today.

We’re now using ES6 on many of our projects and tooling support is already very mature. For example, the amazing ESLint has support for ES6, JSX and React, as do Atom and Sublime Text. It’s always a good sign when the tools converge on a technology.

And other cool things...

There are a ton of other new and exciting technologies that we’ve been using at Red Badger over the last 12 months. We listed them all in our tech round table. Go and have a look and see where our love goes right now.