Docker and assets and Rails, OH MY!

by Jon Yardley

How to precompile Ruby on Rails assets with Docker using --build-arg for deployment to a CDN.


I love Docker. I really enjoy the benefits it brings, not only to the developer experience (DX) but also to confidence in deployments. Docker, however, is not a silver bullet on its own. It has brought with it a new set of problems that we would not have come across with more old-school methods of application deployment.

Recently I came across a particularly annoying issue with Rails 4 and its asset pipeline when serving assets from AWS S3 via CloudFront. Referenced assets were not resolving to the correct location when running assets:precompile.

Finding the right place to precompile assets was also anything but obvious. At build? When deploying? At startup? After trawling the web for a long time I found no clear answer to this problem.

The Problem – TL;DR

In production, or any other remote environment, you want your assets served via a CDN, and to do this with Rails you need to precompile your assets. This compresses all your assets and runs them through any precompilers you use, e.g. Sass. If you use any frameworks, it bundles all of their assets up too.

The application I am currently developing uses Solidus Commerce (a fork of Spree Commerce), which has a bunch of its own assets for the admin panel. Precompiling also fixes the paths to your referenced assets, e.g. font files.

If you don’t have config.action_controller.asset_host set in production.rb at precompile time, then these references will be relative to your application domain and won’t resolve. Not ideal!

Another problem: with Docker you want to build your container once and ship it across different environments without changing anything about the application in between, and environment variables tell your application where it currently lives, e.g. staging, production etc.

If you tell Rails to run with config.assets.digest = true, then you need the precompiled assets manifest file, which tells Rails about your precompiled assets. That means you want it generated at build time, yet at that point your container has no awareness of its environment.

This particular problem rules out compiling assets when you deploy. Even though your assets will live on your CDN, your container won’t know where to point, as the manifest won’t exist inside the container, and therefore references to assets will be incorrect.

Why not run the assets:precompile rake task in the entrypoint.sh script when the container starts up?

There are a few problems with this approach. The first is that we deploy our application using the AWS EC2 Container Service, which has a timeout when starting a container. If the Dockerfile CMD does not complete within a certain amount of time, it will kill your container and start it again. This can be very frustrating and make it difficult to work out what is going on.

Also, if your container ever dies in production, it will have to precompile all the assets again before it can start up, which is not great. You really want your container to start up as quickly as it can in the event of a failure.

The Solution: --build-arg

Until I spent a day banging my head against a wall trying to fix this, I had no idea that Docker has a --build-arg option. Here is a snippet from the Docker docs:

You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.

A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag.

This option allows you to build your image with variables, which is perfect for compiling assets when building a Docker image. I know this sort of goes against the whole idea of immutable infrastructure; however Rails, in my case, needs to know which environment it will be living in while it is built, so that any asset references resolve correctly.

How to use --build-arg

Set your asset host

In your Rails application make sure you set the asset_host from an Environment Variable:
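A minimal sketch of that configuration might look like the following (the variable name ASSET_HOST is an assumption, not taken from the original snippet):

```ruby
# config/environments/production.rb (sketch; ASSET_HOST is an assumed name)
Rails.application.configure do
  # Point compiled asset URLs at the CDN, e.g. a CloudFront distribution
  config.action_controller.asset_host = ENV['ASSET_HOST']

  # Fingerprint assets so the manifest maps logical names to digested files
  config.assets.digest = true
end
```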

Amend your Dockerfile

In your Dockerfile insert the following after you have added all your application files:
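A hedged sketch of what that addition could look like (the argument name and RAILS_ENV value are assumptions):

```dockerfile
# Accept the environment-specific asset host as a build-time argument
ARG ASSET_HOST

# Expose it as an environment variable so the rake task can read it
ENV ASSET_HOST=$ASSET_HOST
ENV RAILS_ENV=production

# Bake the compiled assets and the manifest into the image
RUN bundle exec rake assets:precompile
```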

Build your image

Then in your CI build script:
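The build step might look something like this (the image tag and CloudFront domain are placeholders):

```shell
# Pass the environment-specific CDN host in at build time
docker build \
  --build-arg ASSET_HOST=https://d1234example.cloudfront.net \
  -t my-app:latest .
```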

The resulting image will now have your precompiled assets baked in. Your Rails application then has access to the manifest file with all the correct URLs.

Deploy your precompiled assets

To then deploy your assets to S3, you can copy the compiled assets out of the container and push them up to AWS:
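One way to do that, sketched here with assumed container paths and bucket names, is to create a throwaway container from the image, copy the assets out, and sync them to S3:

```shell
# Create (but don't start) a container from the built image
docker create --name asset-export my-app:latest

# Copy the precompiled assets out of the container
docker cp asset-export:/app/public/assets ./assets

# Clean up, then push the assets to the bucket CloudFront serves from
docker rm asset-export
aws s3 sync ./assets s3://my-asset-bucket/assets
```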

Hopefully this will help others who have been having the same problems. Comments and other solution suggestions are welcome!

Want to use Docker in production?

At Red Badger we are always on the lookout for “what’s next” and we embrace new technologies like AWS ECS and Docker. If you’re looking to work with a team that delivers software using the latest technology, and in the right way, then get in touch. We are constantly on the lookout for talented developers.


Online Learning Resources

by Anna Doubkova

Life as a software engineer is one of continuous learning. Some people enjoy exploring new technologies and others complain about #jsfatigue. At the end of the day though, we’re all in this together.


I’ve gathered quite a few useful resources over the past couple of years. I used them to learn more about various topics that I thought would either help me become a better programmer or that I just wanted to dip into for a bit. They could be of interest to all coders out there, ranging from Computer Science to Physics, and that’s why I wrote up a few tips for you.

All of these e-learning sources are free and available to you at any time – enjoy! (And let me know in comments what your favourite ones are.)

Web Development

Functional programming textbook


This one is a good classic for anyone who’d like to learn a bit more about functional programming without having to delve deep into mathematical definitions and weird terminology. Professor Frisby’s Mostly Adequate Guide to Functional Programming (that’s the official name) is a wonderful book for beginners and others who’d like to strengthen their knowledge of the topic. It’s fun, easily readable and has plenty of examples to support the theory.



Let’s get onto the practical side of things! A lot has been said, done and written about Redux. However, the more widespread it gets, the more misconceptions and misuses of this library you can see. That’s why it’s really worth watching these videos from Dan Abramov (creator of Redux), because he talks not only about how to do things properly in Redux but also why you should stick to these rules. Not to mention he points out a lot of good practices in Getting Started with Redux that can save you a lot of headaches.

Learn ES6 by fixing unit tests


This one might be slightly passé, as probably every single JS developer uses ES6 by now. But ES6 has a lot of features, and some – such as generators – aren’t that easy to comprehend at first. On ES6 Katas, you can learn all the ES6 features by fixing unit tests. Not only can you improve your knowledge of the new version of JS, but you can also enjoy making tests go green! Perfect combination.

JSON Schema


JSON Schema is super useful when working on web data structures that should be universal and generic. The docs are unfortunately quite cryptic, which is why Understanding JSON Schema is such a blessing: it’s readable and shows a lot of examples alongside the theory. Obviously it’s not something you would read before sleep, but it’s a good reference in case you need to find out more about schemas or some of their parts and capabilities.

Computer Science

Machine Learning


I think I should really start this section with my favourite online textbook out there. Having said that, I must also admit I didn’t finish it, because the learning curve got pretty brutal half-way through. However, thanks to Neural Networks and Deep Learning, I learnt more about the principles of this technology than I could have hoped to achieve on my own. There are practical Python examples and mathematical exercises included, so you can always check that you understand what it’s on about. The tricky part is that you need a pretty good knowledge of higher mathematics, mainly matrices and derivatives (and gradients and all these things in n-dimensional spaces). Still, I highly recommend at least reading through the first two chapters, which are quite accessible even for those of us who never finished uni studies 😉


Rich Hickey



Rich Hickey is the person behind Clojure and Datomic, so as you can expect, he’s a pretty smart guy. He’s also a great presenter who makes some very complex ideas easily understandable for us common folks. I’ve included two of my favourite presentations he did at QCon. Simple Made Easy is a talk that considers programming from a very general point of view. It works with the idea that we want to create very simple (even if complex) applications, but that’s usually a difficult task. In Database as a Value, Rich shares some underlying principles of Datomic. Even if you don’t know much about Clojure or Datomic, it’s simple to understand the key ideas and start thinking about data in a more universal way.

AI course on edx


This was the first course I took on artificial intelligence, and although it wasn’t guided by a tutor, the videos and exercises are more than sufficient to learn the principles. Apart from the fact that you can listen to talks on something incredibly cool from Berkeley professors for free, it’s fun and interactive, and it feels really rewarding at the end of the course if you stick with it. I’ve seen similar courses available online on the MIT website and the like, but they never made it to such high standards in terms of online accessibility.

It might be worth noting – because I didn’t know when I started this course – that AI isn’t really about making sci-fi-like fantastic creatures, as the authors of books and scripts would have us believe. It’s about finding clever algorithms that somehow help computers understand our world and make intelligent decisions. As much fun as reading Asimov, if you ask me.


Algorithms and data structures


Four semesters of CS in 6 hours might be a slight exaggeration – but you can certainly learn a lot in a relatively short amount of time, because you won’t see much theory here. Instead, the concepts are briefly explained and then you do exercises, based on fixing unit tests, to check that your algorithms or data structures work as expected. Solutions are also available, so if you get lost you can always get back on track easily. A great intro to computer science if you never managed to study it at uni or have forgotten everything since :)



I must admit I haven’t read this book but it is usually referred to as the “Bible” of learning algorithms. Plus it’s free and online so it’d be a sin not to include it on this list. It’s definitely very high on my list of books to read when I have some time… But you know how that goes!


Khan academy


Most of you have probably heard of Khan Academy or have even used it already, but I can’t imagine not including it on this list nevertheless. You can learn pretty much anything there – online, for free, with exercises, videos and gamification, from biology to computer science. I used it to refresh my knowledge of derivatives and integrals, and I love their explanations of mathematical proofs. The videos (together with the comments under them) are amazing, especially when you’re trying to understand the underlying principles.

Special Relativity


No Nonsense Guide to Special Relativity is another one of those great online textbooks that don’t rely on the dry theory as much as on examples and explanations made for normal (?) humans. If you always found this part of physics fascinating but couldn’t quite understand “what the hell is going on there”, try to check out this link. It’s not a version “for dummies” but it’s still very accessible.


Bonus: How we learn to code


This final entry isn’t really an e-learning resource, but I found it so impressive I wanted to share it with you. Kathy Sierra gave a talk called Making Badass Developers about how we actually learn to code. She mentions that we shouldn’t be too hard on ourselves – we all have only a certain amount of cognitive energy each day and it’s okay to take it easy. She shares a few useful tips on how to learn faster and more effectively, and it sped up my learning process by a lot. Thinking about it, maybe this is actually a much more important resource than any textbook or tutorial out there that would teach you a specific piece of know-how. What you gain from this talk is applicable to everything else I’ve posted in this blog.


At Red Badger, we have a generous training budget to learn even more, with experts in the industry, anywhere around the world. We are also actively supporting the OSS community by contributing to repos, running React meet-ups in London, and organising a conference in March 2017. Oohh and have I mentioned our amazing altruistic project Haller for African farmers? If you want to know more and maybe even work for us, drop us a line!


ReactEurope 2016 Retrospective

by Melissa Marshall

What a whirlwind of a trip to Paris! The badgers are back in London after three days of meeting ReactJS developers, hastily starring cool GitHub repos, and having a few too many at the Frog. As a relative newcomer to React it was fascinating to hear about how much the ecosystem and community have grown in just a year.

Red Badger team exploring Montmartre

Exploring Montmartre with the Red Badger team.

One mark of a good conference is just how badly it makes you want to grab your laptop and start coding, and React Europe certainly gave me itchy fingers. My list of new libraries and technologies to investigate will certainly keep me busy for a while but before diving in I’d like to look back at the conference: what was good, what could be better, and my hopes for React Europe 2017.

The Good

By far the best thing about React is the community. The sheer number of people building libraries and tools for the framework and its ecosystem is impressive. In part that’s due to the culture surrounding React development – to hear “celebrities” like Dan Abramov and Vjeux talk about the importance of humility and encouraging more people into OSS contribution was unusual and refreshing. I also liked that Facebook sent key members of its React, GraphQL, and Flow teams out to Paris — there’s no better way to hear about the future of a technical landscape than from its creators.

ReactEurope showcased several high quality, engaging talks. A couple of my favorites were:

  • Jeff Morrison’s deep dive into Flow — having used Flow on my last project I thought this was fascinating. I’ll also be the first to admit I got rather lost midway through. Impressive and interesting nonetheless, it’s one I’ll watch again with the pause button ready.
  • Jonas Gebhardt’s talk on using React to build better visual programming environments. Tons of programmers get their start in visual languages despite most of them being clunky and removed from more standard coding environments. Although just a prototype, the tool this talk introduced had great potential for improving CS education.
  • Bonnie Eisenman’s retrospective on React Native. Although not technical, I thought this talk was really important, especially to those new to the React community (like me). Out of every talk at the conference this was the one that made me want to get coding ASAP.
  • Andrew Clark’s talk on his Recompose library — immediately useful and interesting, I will absolutely be checking this out for my next project.
  • Laney Kuenzel and Lee Byron’s talk on the future of GraphQL. I haven’t had the opportunity to use GraphQL yet but this talk was well delivered and nailed the balance between accessible to beginners and interesting to experts. I can’t wait to try it out.

And a major thanks to all the presenters for being so humble and open to questions about your work.

Dan Abramov's Redux talk at React Europe

Excellent use of emojis by Dan Abramov.

What Could Be Improved

Although I really enjoyed React Europe there were definitely some things I found frustrating throughout the week. Starting with the least important, coffee and wifi were in short supply! This was annoying but I was still impressed with how smoothly the whole thing ran overall — catering for 600+ attendees is very challenging.

Next year, I would like the conference to run multiple tracks in smaller spaces. It seemed difficult for the speakers to engage the whole (gigantic) room. Especially if you happened to be seated behind a pillar watching a screen, it felt like you may as well be at home in bed watching the livestream. There was also no alternative event to attend if you weren’t interested in a particular talk.

Over the course of the conference, I started to wonder if React is actually a large enough domain for a multi-day event like this one. Very few talks introduced new React paradigms or techniques. Most of the talks could be categorised as either something about GraphQL, something about React Native, or a demo of someone’s new React library. A lot of these were interesting, but after two days, repetitive. Personally I think something like a functional web programming conference might have more value than a React-specific one. However, a major part of the conference was getting to meet React’s developers and creators which for me was the highlight of the conference, and not something I’d want to miss out on.

And of course my final wish for the next React Europe is a higher percentage of women attending and speaking. I’m used to being a minority in tech, but the gender ratio at React Europe was probably one of the worst I’ve ever experienced. In any case, I very much appreciate that the conference had a code of conduct, which is step one in making events more accessible to women. Hats off also to some really fantastic women who spoke – Lin Clark, Bonnie Eisenman and Laney Kuenzel all did a stellar job.

Overall I had a lovely time and met some incredible people. Hope to see you all next year!

Red Badger are hosting the 1st React conference in London next March. If you’d like to be kept up to date with news you can sign up here.


What Can We Do With All This History?

by Roisi Proven


In Kanban, behaviour-changing data is key. We visualise absolutely everything we do, and track it diligently. We do this so that we can use real-world data to give accurate, tangible forecasts for our projects, and to identify bottlenecks and inefficiencies so we can continuously improve.

Here at Red Badger, we have, for the past few years, recommended Kanban to our clients. Some take to it more readily than others. While we had previously transitioned businesses from the more traditional Agile model of Scrum into Kanban, Fortnum & Mason was the first project where we used Kanban from day one. Their confidence in our expertise allowed us to build a strong foundation for a project that is still going strong, over 2 years after it first began.

With those two years comes a hell of a lot of data. We have released code into production 317 times since the project began, and in the last year alone we have shipped over 300 user stories. So your first thought would be that our forecasts must now be alarmingly accurate, right?

Wrong. Because maths is hard.

As it turns out, too much data can be just as worthless as too little, so how do you figure out where to draw the line?

Kanban: The Basics

For the uninitiated, Kanban is an Agile framework focused on the “flow” of work. Rather than prescribing the sprints and ceremonies used in the more traditional Scrum methodology, Kanban is all about helping the team reach a cadence that allows them to deliver continuously and consistently.

There are many ways to forecast within the Kanban framework, but here at Red Badger we utilise Little’s Law: on average, Lead Time = Work in Progress ÷ Throughput.


This formula can also be switched around to allow you to calculate one of the three variables using your historical data, thus providing a forecast that often proves much more accurate than the estimation process of Scrum.
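As a rough sketch of that rearrangement (function and variable names are assumed for illustration), given average WIP and Throughput from your historical data you can forecast average Lead Time, or solve for either of the other two variables:

```python
def lead_time(wip, throughput):
    """Little's Law: average lead time = WIP / throughput (e.g. weeks)."""
    return wip / throughput

def throughput(wip, lead_time):
    """Rearranged: average throughput = WIP / lead time (items per week)."""
    return wip / lead_time

# e.g. 12 stories in progress, 4 finished per week -> 3 weeks average lead time
print(lead_time(12, 4))     # 3.0
print(throughput(12, 3.0))  # 4.0
```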

How Much is Too Much?

It’s never going to be clear when you first start, but your data will always let you know when it is becoming less useful. The most common way this manifests is when a notable variable change does not result in a shift in your averages. For instance, a change in team size should, after a couple of weeks, start showing an effect on your average Throughput and Lead Time. However, after reducing the team from 6 devs to 4, we noticed that even after 6 weeks our Throughput remained steady.

It quickly became clear that the sheer volume of data meant that we had hit an average that was no longer affected by outliers. This is covered within the Central Limit Theorem, which states:

given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed, regardless of the underlying distribution.

As a consequence of this, we noticed difficulty in forecasting using our data in its current form. It’s always a bad sign when you run a forecast past your team and they laugh at it because it’s so ridiculous. Always heed the laughter of a developer.

Making the Most of History

You have all that data, but it isn’t helping you. So what can you do?

  • Create a moving average – The reason your averages aren’t changing is that there is simply too much data for several outlier weeks to affect them. So instead, make the window over which you calculate your averages narrower. Take a ten-week period (or 8, or 4 – it’s definitely worth mucking around with different lengths of time) and base your averages on that. Keep the period the same, always working back the same number of weeks from your current data point. This allows those big variable changes to show up in your data far more quickly, giving you a better overall view of the world.
  • Compartmentalise – split your project into milestones and create an average from each section. Work backwards from the single task level back up to the “epic” level. This creates a less granular, but still well defined, datapoint average of each piece of functionality you have delivered. This is good for projects which have clearly defined goals or milestones and a team size/skillset that remains constant, but perhaps less so where the flow of work is more to do with business as usual.
  • Start from scratch – This should only be done in the most dire of circumstances. 9 times out of 10 all your data needs is a little love and attention. Occasionally, however, the data you have may be representing your project so badly that you should archive it for posterity, and start from scratch. You’ll have those same early project wobbles that affect your data, but sometimes a full refresh is exactly what you need to bring the project back to a meaningful place.
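The moving-average idea can be sketched in a few lines (the throughput numbers below are made up, loosely mirroring a team shrinking from 6 devs to 4):

```python
def moving_average(weekly_throughput, window=10):
    """Average throughput over only the most recent `window` weeks,
    so old outliers stop dominating the forecast."""
    recent = weekly_throughput[-window:]
    return sum(recent) / len(recent)

# Hypothetical history: a 6-dev team, then a drop to 4 devs
history = [9, 8, 10, 9, 8, 9, 10, 8, 9, 8, 5, 6, 5, 6]

print(sum(history) / len(history))        # all-time average barely moves
print(moving_average(history, window=4))  # recent window reflects the change
```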

The list above is by no means exhaustive, and by and large the main thing to remember is that as a Project Manager, what you track and how you track it will constantly evolve and change. There is no such thing as a “perfect” process, only one that is well-tended to and respected by the team using it.

Also, maths is hard.


You can only get good process when you’ve got good people. Come be a good person with us by checking out our vacancies!


What’s the point of going to conferences?

by Hanna Cevik

We have a £2,000 annual training budget at Red Badger that can be used however we like. Most people use it to travel to attend a conference in the US, Asia-Pacific or somewhere equally exciting. Training is really specific to your job role and expanding / honing your skills though, so sometimes the most relevant conference is… at the London ExCel.

On 12th May, I took myself out to deepest, darkest docklands (admittedly in my MX5 with the roof down as it was a super sunny day) and wandered around hall S6 for 7 hours. Amongst the stuff I wanted to hear about was why buyer journeys are all wrong and how to speak to big prospects whilst still sounding like a human being.

At Red Badger, it’s really important to us that we talk sense, both in terms of what we do and how we tell you about it. I was keen to hear how other people did it, and what the audience thought about it. One of the things I love about how we build new business here is that we don’t have a sales team. It means that we win new business based on our reputation, the needs of the client and our suitability to do the job, not because someone wants to meet their target and get their bonus. Many agencies do use that model, and it leads to division internally; project teams hate the sales team because they just throw projects over the fence and don’t care about how they’ve been sold. The clients are almost always disappointed too; they end up having their projects de-scoped to make them possible in the time or for the price they’ve been promised.


What are you doing right now?

We don’t work like that at Red Badger. Ever. We are one team from pre-sale conversations to support; you’re always talking to people who know and respect each other’s working practices and understand how and why something has been designed or built that way. As a marketer, it is a joy to work with.

The speaker in the “Maximising your Business Relationships” session talked about how he felt the same disillusionment with that model, and set out to prove that large projects could be sold and managed without resorting to sales speak. This actually makes life a lot easier for both the seller and the buyer. The pressure to talk in acronyms and business language can make it really hard to know what the other party means or wants. It’s a lot easier to say “I’m going to provide you with some recommendations to help get everyone on board” than “we realise this is going to be a c-suite decision, and I will provide you with a formal RfP response via procurement”. You have the same obligations to meet due diligence, but everyone feels like they are dealing with another human being. There were murmurs of uncertainty in the room; “but how will we sound important and knowledgeable without using all those buzzwords?” – and frankly that is exactly the problem. If you can’t sell your product by being plain and transparent, it’s probably not the sales process that is flawed.

It’s a lot like the Agile/Lean process itself – cut the waste, cooperate constantly, deliver fast. Endless documentation (e.g. large proposal documents) doesn’t get anything done faster, and may well contribute to losing sight of the end goal. Just like when you propose Agile, lots of people in the room looked worried. It’s hard to let go of the models you’ve been using for years. But that’s exactly why you should – they are obsolete. Just like the monolithic agency giants – they no longer provide the best solution.

It tied in with the buyer journeys talk I’d heard earlier in the day. If you are using the ‘traditional’ sales funnel, you’re going to be disappointed with your conversions.

sales funnel

This is just not how it works anymore. Most of your prospects simply aren’t interested in hearing about how your solution is going to do something X times better and Y times cheaper than your competitors over 40 pages of sales documentation. They want to know what it’s going to be like to work with you and how that is going to get the result they need delivered. They want to know why they should work with your teams, specifically, to achieve their aims. The old sales funnel model focuses too much on saying the right thing to the prospect to get them ‘down the funnel’, when you should be focusing on how to solve their issues.

Going to conferences isn’t always about learning new skills, sometimes it’s about being given the confidence to let go of old habits. Knowing that sales-speak isn’t necessary, that doing the right thing is more important than saying the buzzwords and being bold in your decisions will mean that you don’t make the same mistakes as before, and get a different, better result.

So, thanks B2B Marketing Expo! You reminded me that doing my job well is often about simply treating people as human beings.


Lean UX Workshop with Jeff Gothelf – My top takeaways

by Sasha Ward

Last month I enjoyed a trip down to Brighton for a day long workshop with the author of Lean UX, Jeff Gothelf. UX enthusiasts from across the UK and some from as far as Sweden came down to the south of England to learn more about Lean, UX and all things in-between.

I’m going to run through my top takeaways from the workshop and explain how they can be applied within your product teams.

1. Don’t solve problems that don’t exist

It’s a waste of time, money and resources making solutions for problems that don’t exist. How do you know if a problem actually is a problem? Well, get out of the building (GOOB), talk to your users, ask them about their problems and observe their current behaviours or current workarounds. They might be happy with their current solution but you won’t know until you talk to them.



Within your team, get together and write down your assumptions of what you think the problem is and create a problem statement that you all agree on (you can use a problem statement template – see Jeff Gothelf Lean UX).

2. Focus on the ‘M’ in Minimum Viable Product (MVP)

Every organisation and every team will have a different opinion of what an MVP is: “a stripped back version of the final product”, “our must-have features”, “version 1”… the list goes on.

What Jeff helped cement in my mind about MVPs was two things:

– that MVPs are for learning, not basic versions of the “final” product
– and they’re not restricted to digital means

Think of MVPs as tests or experiments to gauge the interest in, or viability of, a certain feature or product. The first question to ask yourself is “what are we trying to learn?”, then “how can we learn that by doing the smallest amount possible?”. At this point, don’t be restricted by thinking you can only learn by shipping a basic version of the final product. What you can do instead is leverage existing technologies/services/products to create an experiment that tests your hypothesis, using methods like landing page tests, feature fakes or Wizard of Oz/concierge tests.

It’s important to set a threshold for success when creating your MVP, e.g. “when we see more than 100 clicks per day on this feature, we know it is worthwhile to continue”, or “if 1,000 people subscribe on our landing page test in the first week, we will know we are successful”. If you’re not clear upfront about what success looks like, then how will you know when to stop the experiment?

3. Write tactical hypothesis statements

A hypothesis is a hunch – our best guess at what we believe to be true. Hypotheses are a great place to start when building a new product or feature, but remember they are still assumptions, so they need thorough validation.

They are quite tricky to get right as a good hypothesis statement is made up of a lot of variables with a lot of assumptions baked into them. They are usually comprised of a KPI or success metric, a persona, a user goal and a feature.

Jeff Gothelf has a great template in his book Lean UX that combines them all in a succinct way:

– We believe we will achieve [KPI] if [persona] will attain [user goal] with [feature].

The most valuable lesson I learnt from the workshop was to be tactical about how you write these: think about what is in your control, and think about what you are trying to learn. If something isn’t in your control then don’t go ahead with it; have a rethink and move forward swiftly.

4. Start learning more

What I mean by this is: add stories to your product backlog purely aimed at learning something or running an experiment. Treat learnings the same as you treat other user stories and get your team invested in learning more.

These can be treated in the same way as regular user stories, scope them, define the problem, write a hypothesis and go out and test it. In doing so you are likely to learn something you didn’t know before and it’s good practice at getting everyone in your team involved in the collaborative exercise of writing hypothesis statements and creating MVPs.

5. Getting buy-in to Lean UX in the enterprise

This was a topic that had a lot of interest from the attendees of the workshop, and the best piece of advice was to speak the language of the people you are trying to convince to buy in to Lean UX and, in doing so, bring them closer to the process.

There is a lot of technical jargon that gets thrown around day-to-day and understandably it can be quite off-putting and intimidating. What’s important when trying to get buy-in to Lean UX in the enterprise is to speak the same language as the stakeholders and understand what they value. If their interests are in acquisition and conversion, then speak to them in this terminology, don’t start talking about progressive enhancement or typeahead as this will only cause more confusion and more unproductive conversations.

In summary…

We all aim to create great products that people need, and Lean UX provides us with guidelines to help move teams from doubt (assumptions) to certainty through evidence-based decision making. As the UXer, start by getting your team on the same page by facilitating hypothesis workshops and design studios to help identify the problem and create actionable hypotheses that you want to validate through testing. You and your team will learn every time you speak to users or test prototypes on them, so do this as frequently and as early in the process as possible. These are the people you are designing for; create a dialogue with them as often as possible to ensure that you are on the right track.

1. Lean UX – Jeff Gothelf – http://www.jeffgothelf.com/blog/lean-ux-book/#sthash.Y6ULPslO.qI01GYOY.dpbs
2. Popcorn hoodie – http://www.ohgizmo.com/wp-content/uploads/2013/02/Pop-corn-hoodie.jpg
3. Feature fakes – https://www.industriallogic.com/blog/fast-frugal-learning-with-a-feature-fake/
4. Wizard of oz/concierge tests – http://www.usabilityfirst.com/glossary/wizard-of-oz-prototype/

This is only a brief summary of some of the things we covered throughout the day, so if you’ve got any questions then please get in touch on Twitter or comment below.


Badger Slack Digest – May 2016 issue

by Alex Savin


Selected bits of shared information from our private Slack channels at Red Badger. Grouped by channel names.




    • Gitsome – A Supercharged Git/Shell Autocompleter with GitHub Integration.
    • JavaPoly.js – a library that polyfills native JVM support in the browser. It allows you to import your existing Java code, and invoke the code directly from Javascript.
    • Mocha Trumpet Reporter – displays a trumpet and makes an authentic sound when your Mocha tests fail, and gives a microwave ding when everything is green.









  • Reasons.to Design, Code, Create & Come Together – Brighton, UK; Sept 5-7 2016
  • Strange Loop St. Louis, Sept 15-17th, 2016
  • PolyConf – Poznan, PL; Jun 30 – July 02 2016
  • From The Front 15th and 16th September — Bologna, Italy
  • EuroClojure – Bratislava, Slovakia; October 25-26

Bonus track


React Amsterdam – a few quick takeaways

by Alex Savin


This was my first React-dedicated conference and, to be honest, I had my reservations – there is only so much you can fit into a single topic. But this one-day, single-track conf turned out to be diverse enough to hold my attention all the way through. As usual, I kept random notes from the event. All notes and ideas originate from the talks, but are not necessarily direct quotes.


React Native

If you always wanted to try RN but were afraid of the tools and setup required before writing a single line of code, fear no more. There is a service that allows you to write and run native iOS/Android apps in the browser as you type – basically, JSBin for React Native. Meet rnplay.org. It also allows you to switch between apps and try them on your actual native device.

Behind the scenes there is Appetize.io service that streams iOS or Android simulator directly into your browser. As far as I know there are no real devices involved, but native simulation is a great start. In fact, React Native official documentation now uses the very same embedded simulation for illustrating the examples. Go ahead, run some native iOS apps in your browser.


Tinker. Release. Repeat.

Another interesting initiative is React Native Package Manager – RNPM. Managing modules with native dependencies is not easy at the moment, and RNPM is here to help. There are rumours that it might even be included as part of the RN offering (and it is already mentioned in the official RN documentation as the preferred way of linking).


JSS

JSS builds on top of @Vjeux‘s idea that you can specify the CSS properties of HTML elements as part of a JSX declaration (which compiles into JS). The original approach lacks quite a few things, like media query and pseudo-class support. Christopher himself said that you should use common sense and combine inline CSS in JSX with conventional stylesheets.

Not anymore. JSS supports all CSS properties, can easily be used with React components, or simply compiled into CSS. It also offers dead code elimination, name scoping, rule isolation and fast selectors. It is even claimed to be faster than conventional CSS. And if you like the idea of plugins, it has them too. The selection is far from the PostCSS plugin catalogue, but give it time and some crayons.
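To make the styles-as-JavaScript-objects idea concrete, here is a toy sketch (not JSS’s actual API – the function and rule names are mine) of compiling a plain object of rules into a CSS string:

```javascript
// Toy illustration of the styles-as-objects idea, not JSS's real API.
// A plain object of rules is compiled into a CSS string, with camelCase
// property names converted to their hyphenated CSS equivalents.
function toCss(rules) {
  return Object.keys(rules).map(function (selector) {
    var props = rules[selector];
    var body = Object.keys(props).map(function (prop) {
      var cssName = prop.replace(/([A-Z])/g, '-$1').toLowerCase();
      return '  ' + cssName + ': ' + props[prop] + ';';
    }).join('\n');
    return selector + ' {\n' + body + '\n}';
  }).join('\n\n');
}

var sheet = toCss({
  '.button': { backgroundColor: 'tomato', fontSize: '14px' }
});
console.log(sheet);
// .button {
//   background-color: tomato;
//   font-size: 14px;
// }
```

Because the rules are just data, things like name scoping and dead code elimination become ordinary object transformations – which is part of JSS’s appeal.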

Apollo by Meteor

GraphQL is a great way of communicating data from the backend to clients. It also requires a bit of setup. It is just a language; as a developer you will have to implement server-side and client-side support. On the server side it is likely that you’d use the GraphQL layer as an opportunity to unify multiple RESTful (or otherwise) endpoints into a single GraphQL API. On the client you would in turn create and consume GraphQL requests either directly, or with something like Facebook Relay.
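The consolidation idea can be sketched in a few lines. This is a hand-rolled illustration, not real GraphQL server code – the endpoint stand-ins and function names are hypothetical, and a real setup would use a GraphQL server library to do the resolving:

```javascript
// Stand-ins for two separate REST endpoints, e.g. GET /users/:id
// and GET /users/:id/orders (both hypothetical).
function fetchUser(id)   { return { id: id, name: 'Ada' }; }
function fetchOrders(id) { return [{ id: 1, total: 42 }]; }

// A GraphQL-style resolver fans out to both sources and returns one
// shaped response, as if answering:
//   { user(id: 7) { name, orders { total } } }
function resolveUserWithOrders(id) {
  var user = fetchUser(id);
  return { name: user.name, orders: fetchOrders(id) };
}

console.log(resolveUserWithOrders(7));
```

The client then talks to a single endpoint and never needs to know how many backend services sit behind it.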

This is where the Apollo project steps in. The Apollo server helps you consolidate multiple data sources into a single GraphQL endpoint. The Apollo client helps you consume that endpoint, providing pagination, reactivity and optimistic UI updates.

Both are still very much in development and not really ready for prime time. There aren’t even many details on exactly how it will work.


Recent tweet by Dan Abramov

MobX (formerly Mobservable) allows you to observe changes in things and act when those changes happen. It uses ES6 decorators to help you wrap existing things (like app state and React components) into observables and computables. Observables are the things being watched for changes. Computables are the things that must be updated as a result of a change.

If you so choose, MobX can be used as a Redux alternative. Like Redux, it promotes pure functions and is built on Functional Reactive Programming (FRP) principles. You define observables and the things that can change as a result, then sit and watch how everything happens automatically.
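The observable/computable relationship can be shown with a minimal hand-rolled sketch. This is not MobX’s real API (MobX uses decorators and its own `observable`/`computed` machinery); it only illustrates the concept that observables notify and computables re-derive:

```javascript
// An observable holds a value and notifies subscribers when it changes.
function observable(value) {
  var listeners = [];
  return {
    get: function () { return value; },
    set: function (v) {
      value = v;
      listeners.forEach(function (fn) { fn(); });
    },
    subscribe: function (fn) { listeners.push(fn); }
  };
}

// A computable re-runs its derivation whenever the source observable changes.
function computed(source, derive) {
  var result = { value: derive(source.get()) };
  source.subscribe(function () { result.value = derive(source.get()); });
  return result;
}

var price = observable(10);
var withVat = computed(price, function (p) { return p * 1.2; });
price.set(20); // withVat.value is re-derived automatically
```

In MobX itself, a React component wrapped as an observer plays the role of the computable: it re-renders when the observables it reads change.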

There is a pretty good 10-minute interactive MobX tutorial available to play with.

Tweet Cube


A cube that projects tweets. Bonus for the cat userpics.

It would be a perfect companion to the throwable cube mic.


The place was called Pllek. From the official description:

This post-industrial spot with a beach-like atmosphere offers one of the best panoramic views of the IJ River… Pllek also is home to Amsterdam’s largest disco ball at night.

To get there you cross the IJ river on a ferry from Amsterdam Centraal station. The place is pretty bizarre, with a rusting Soviet submarine in the bay, cranes, sea containers and a chilly wind. I even made a short film on getting there and back.

I get the feeling the organisers didn’t expect the weather to be so cold in mid-April – it was pretty chilly inside. There were lots of developers from the Netherlands, but also from all over Europe. It was a chilly, crowded, but very friendly event with nice snacks and coffee. It was also the first of its kind, and they do intend to continue next year.



Chilly weather didn’t affect popularity of the (free) ice cream stand



This trip was possible thanks to Red Badger’s learning budget perk. Join us, and you’ll get it too!


London React Meetup – April 2016

by Leon Hewitt

An evening of testing workflows and dynamically built forms awaited visitors to Code Node last Wednesday when the London React Meetup was once again in town.

Tom Duncalf kicked things off by describing what he has found to be effective unit and integration testing strategies for React applications.

Tom explained his rationale for writing tests and in particular unit tests. In addition to verifying the application behaves as expected and providing a useful set of automated regression tests (allowing you to refactor with confidence), he pointed out how well written tests can act as documentation for the code and enable faster debugging with less dependency on end-to-end tests (be they automated or manual) to expose errors.

Taking this testing philosophy, Tom went on to discuss how it applies to testing applications built with React and listed the qualities he looks to test in his components (e.g. do they render correctly, can you interact with them as expected, and how do they integrate with the rest of the application).

The talk was full of code examples of how Tom went about implementing his tests using his toolchain of choice: Mocha, Chai and Enzyme.

Next up was Anna Doubkova, discussing her experiences with Redux Form and how useful it was in developing a CMS application she worked on with her team here at Red Badger. One aim of the project was to deliver a CMS that could be extended with less dependency on developer input. Anna noted how great it would be for the customer to alter their CMS just by changing the data structure – i.e. have fields added by the CMS administrator automatically render on the page, without the need to bring the development team in.

A combination of JSON Schema, Redux Form and React enabled the team to do just that. Anna took us through the journey of developing the solution and the reasons for the technical choices made.
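The schema-driven idea can be sketched simply. This is a hedged illustration, not the team’s actual code: a function (mine, hypothetical) derives form field descriptors from a JSON Schema document, so adding a property to the schema adds a field to the rendered form:

```javascript
// Derive form field descriptors from a JSON Schema document.
// Each descriptor could then be rendered, e.g. as a Redux Form field.
function fieldsFromSchema(schema) {
  var required = schema.required || [];
  return Object.keys(schema.properties).map(function (name) {
    var prop = schema.properties[name];
    return {
      name: name,
      label: prop.title || name,                        // fall back to the key
      type: prop.type === 'number' ? 'number' : 'text', // crude input mapping
      required: required.indexOf(name) !== -1
    };
  });
}

var schema = {
  type: 'object',
  required: ['title'],
  properties: {
    title:  { type: 'string', title: 'Page title' },
    weight: { type: 'number' }
  }
};

var fields = fieldsFromSchema(schema);
// fields[0] → { name: 'title', label: 'Page title', type: 'text', required: true }
```

Because the form is derived from data, a CMS administrator can add a field by editing the schema alone – no redeploy of the front end needed.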

Anna ended by listing the pros and cons of working with Redux Form, expressing overall how easy the team found it to use.

Rounding the evening off was Arnaud Rinquin, who shared his journey of reducing the friction he was feeling around testing in the JavaScript ecosystem. It’s no secret that the toolchain takes a bit of setting up, leading to developers experiencing what has been politely termed JavaScript Fatigue.

Inspired by Dan Abramov’s Redux tutorial, Arnaud aimed to recreate the feel of Dan’s environment in his own workspace. That being: run the tests in the browser, have tests alongside the production code (in the same file), have the tests run automatically on a code change, and have the tests execute in the shell (to facilitate continuous integration).

He successfully achieved this through a combination of a Babel plugin (to remove the test code and any dependencies from the code files) and a specially written Chrome plugin (to control the test runs). This achievement has enabled Arnaud to enjoy what is for him “a proper TDD workflow”. He can now keep coding and stop worrying.

The success of these meetups (all 300 tickets for this event were snapped up within an hour) demonstrates the popularity of React in the London software community, and the quality of the talks highlights how open the React community is to exploring and embracing new techniques. Everyone’s already looking forward to what fresh insights May’s meetup will provide.

Hear about when new events are announced first by joining the meetup group here.



There are two kinds of websites, which one are you designing?

by Clementine Brown


Somehow, in the fast-paced world of our many, many Red Badger Slack channels, I managed to accidentally accept an invitation to talk on a panel.

So, at an event hosted by InVision, in a very large building in Bishopsgate, I found myself flanked (stage left to right) by the Head of Design for Disney Labs, the Lead Designer for BBC Worldwide, (little old me), the Head of Design for GDS, and the Creative Director of Lagom Magazine – talking to 400 people about Design + Ethics.

We were asked a lot of questions. Everyone said some interesting things. So over the course of a few blog posts I’m going to outline some of the issues we covered, and some of the debates that could come out of it. And maybe you’ll have some of those conversations around the water cooler.

Hey, I drew that!

I’m going to start with one of the questions that interests me the most, which broadly addressed the issue of websites looking the same – and whether there is a line a designer can cross between design inspiration and design theft.

What happens if you see something you’ve designed re-used, and repurposed, with someone else’s name on it? A difficult question, but luckily one I don’t really have to think about. Because I’m a consultant, I spend a lot of my time either starting design work, or putting my sticky little mitts in the middle of work that’s already on the go. Rarely am I a finisher.

But when you think of this in terms of the web – who really owns the idea of the burger nav? Who owns the 3 column pattern? Who owns the concept of a hero image? If these are used over and over again, is it imitation, or is it in fact some kind of hive-mind-design-pattern? And should we be reframing the question, to ask instead what our ethical responsibility is to *make* things the same? If you think of “the typical website”, what comes to mind? That’s right, a full-width image with an H1 overlaid, then 3 columns of info, then perhaps a portfolio element. It’s not the most inspiring, but one thing you can be sure of – most users will understand it, know where to look for the content they want, and it will work on their phone. And for the most part, that’s all users care about. Simplicity and patterns win.

During the conversation, Ed Fairman (Lead Designer at BBC Worldwide) said that, as a community, designers are proud of sharing ideas and sharing work – and, I think, accept with that the probability of someone else using the theory or application in their own work. Now these systems are established, the challenge is for designers to do their own thing within them.

There are only a certain number of ways to design a product and have it remain effective. At one point Louise Downe asked the audience “who actually enjoys novel things on the internet?” and the result was, well, underwhelming. It turns out that not many people like novel things on the internet – and I expect those who do only do so when they’re browsing around; if they were looking for something specific like, say, the etymology of the word “kangaroo”, I’m reasonably sure they’d be annoyed if a silhouette of Rolf Harris singing ‘Tie Me Kangaroo Down, Sport’ scrolled across the screen (what? I’d say that’s pretty novel). I think that this similarity of design is in fact a reflection of what users expect from a website, and of designers and UXers listening. So, from whence else does this similarity spring forth?

Fork it

Think for a moment about a fork. Forks have been around for quite a while (according to Wikipedia, the earliest one found dates to around 2400 BC). They may have ‘evolved’ by being made of different materials but, fundamentally, the way we use forks has remained the same, and so the design of a fork has remained the same. Not so with the web. With the proliferation of tablet and mobile devices, and their increased internet connectivity (you could even read this blog while 190 feet underground on the tube, for crying out loud), the demands on a website have changed. We’re using it in different circumstances, in different ways, and to do different things.

At the moment the design trend is big, bold, flat, simple. Many may think this comes at the whim of some group of self-proclaimed superstar-ninja-design-prophets (my mother, to this day, refuses to upgrade above iOS 6 because “why would I want something that looks like it’s been designed by a child?”). But really, this trend is (in part) down to the limitations of our fat little fingers. As web traffic from mobile devices increases every day, so does the knowledge that soon a touch-unfriendly site will no longer be an option. Big bold buttons, card-based design (more clickable area!), boxy layouts and easy column hierarchy are all results of touch-based interaction, and so naturally that has shaped the aesthetic.

The skeuomorphic approach of the likes of my mother’s favoured iOS 6 has disappeared from new web designs, not just because of the ‘modern look’ or touch screen constraints – it also had an impact on page load time. Heavy graphics and large CSS files meant that users on a slower mobile network had a significantly lower-grade experience than those on a newer device who paid for 4G – and that is an unnecessary separation. As designers and developers in this new era of on-the-go browsing, we have a responsibility to make sure the content and information a website provides is available in the same way to anyone who chooses to look at it.

Don’t make me screen

Designers are excited about this, because responsive design allows us to make the most of our screens. In the early (and not that long ago) days of mobile design, the very term ‘mobile design’ meant literally designing for a mobile phone. So you would design two versions of the website – one for computers and one for phones. There was no in-between. As technology has evolved, so has design – it’s simply unsustainable to design a version of the same site for each device that can view it, especially as those now range from the 1.4-inch Apple Watch to the 88-inch 4K TV.

With HTML5, JavaScript, CSS transitions, standards compliances for browsers, and users increasingly becoming creators, the internet is no longer a place for the few to build and the many to browse. People are more interested than ever in having their own space, creating, uploading and sharing their own content – and with this interest has come the evolution of frameworks. With no design or code knowledge, people can not only create their own website in a matter of hours, they can create a site that other people understand how to use. This is because all the big frameworks (think Bootstrap, Foundation, Squarespace etc) have all used a standard pattern that has become recognisable on the web. As this shared language of interaction becomes more widespread, so the internet will become more accessible and intuitive.

At one point Elliot Jay Stocks asked and answered “What value does bespoke work bring?” The answer was a deeper understanding of the medium. So we now have to address how we navigate this frameworked, patterned landscape, where we have these design systems that are common, but are also expected to create something novel. As a designer creating a web space for a client’s content to be showcased, I think we have a responsibility to do that in a recognisable, effective and frictionless way. And at the moment that means gradually defining patterns and encouraging behaviours that are relatable and effortless.

Fancy being part of our team? Head over here to check out our Digital Designer job spec!