Screen Printing workshop

by Tania Pasia

The benefits of creative activities have been the focus of several studies over the years, which show that engaging in them has multiple benefits for our mental health and physical wellbeing. When these activities take place within a social setting, the gains grow substantially, not only on an individual level but also on a collective one. Hobbies, side projects and workshops are a few great ways to get involved in the creative process and relish the associated benefits.

Here at Red Badger we have a number of internal team-building practices, one of them being our ‘Social Budget’ – a Red Badger initiative to help badgers socialise outside work. This initiative gives us the opportunity to hang out and participate in creative activities. One of the activities it ‘sponsored’ was a Screen Printing workshop we did a while ago. The experience was brilliant; one I couldn’t help but write about for those of you who’d like to give it a go.

Here we are, the RB ladies proudly holding our artwork


A few words about Screen Printing – Background

According to Google:


Historically, screen printing was a form of stencilling that first appeared in China. It was largely introduced to Western Europe around the late 18th century but did not gain large acceptance or use until silk mesh was more available for trade from the east and a profitable outlet for the medium discovered.

Andy Warhol is among the pop artists who popularised screen printing as an artistic technique. One of his most famous works is the Marilyn Diptych, shown below. It is based on a publicity photograph from the 1953 film Niagara; Warhol made more than 20 silkscreen paintings of Marilyn Monroe from the same photo.


What to expect from the workshop at Print Club London

The day kicks off with a warm welcome, a cup of tea and cookies. Your friendly instructor will briefly talk about the screen printing technique and what you will be doing on the day. You will hand over your artwork (images saved as .jpeg, .pdf or .psd files) to a technician to edit in Photoshop and get the design ready for your screen.

Aprons are provided, and I would highly recommend wearing them (how many times do you get the chance to look like a crazy scientist?). Once you’ve put one on, you are ready for action.

This is where the magic happens. Ready? Go!

You will all gather in the Dark room to coat your screen with a light sensitive emulsion.

1_Applying emulsion

Once this is done you will leave the screen to dry. After a while you will join your instructor in another dark room with a massive exposure unit to expose your artwork to the screen.

2_Exposure unit

You will place your image facing up and put the screen on top of it, turn on the vacuum until the blanket is taut, and start the exposure timer. A couple of minutes later the image will be exposed onto your screen. Worry not: your screen will look the same, i.e. blank; you will only be able to see the image on your screen after you’ve rinsed the excess emulsion away!

3_Rinsing the excess emulsion

You will then let it dry outside (weather permitting) and head over for lunch.

4_Leaving them out to dry

–Lunch Break: Time for some yummy food around the area–

By the time you are back from lunch your screen will be dry. At this point you will apply tape to its edges to cover any pinholes you might spot on the screen.

5_Applying tape to the edges and pinholes

Now you are ready to go. Off to the print bench!

6_Print bench

An important step here is to attach the screen to the print bench and secure it to a fixed spot. You will then want to register the paper on the bench to ensure the image is printed in the centre of your canvas. To do this, you line up a piece of scrap paper with the print and mark with masking tape the spots where the canvas paper will go (these are called registration marks).

You will choose a colour (or 2!) and apply a generous amount of paint across the width of the screen.

7_Applying a generous amount of ink

What you want to do next is cover the whole image with paint; you will do this by gently moving the ink upwards using the squeegee (a move called “Flooding”). Once the image is covered, you will push the squeegee downwards at roughly a 60° angle. Make sure you apply enough pressure to ensure the ink is pushed through the mesh.

Ta da! Your print is ready! Remove it from the table and leave it to dry on the rack provided. You can carry on with the same process of Flooding and Printing as many times as you like (we had up to 6 official copies), with as many inks as you want!

When all printing is done, you will wash off the screen with high-pressure water (yep, it’s fun!)

8_Washing off the screens

Once you are done with the cleaning, your instructor will hand over your artwork.

Mission accomplished. Off you go to show off and gift a few copies to your amigos.

Fun Facts

  • You get to dress like a crazy scientist
  • You play with water and paints
  • You will go home with 6 ‘ownmade’ designs you will be proud of 🙂

And because as Badgers we are fans of communicating ideas through visuals, here is a beautiful illustration of the process described above.


Fancy being part of our team and joining us for some fun stuff? Check out our job openings here and get in touch.


Functional Programming for the Web – July 2016

by Marcel Cutts

Another month, another fantastic meetup! This time we were treated to a lesson in combining music with Elm by the exceptional John Watson, and given a whistle-stop tour of functional web language design by the phenomenal Bodil Stokke.

Audio in Elm – John Watson

John led us on an audio journey using one of the hottest languages around – Elm! He demonstrated what’s available with today’s browser Web Audio APIs, how we can use soundfonts to produce different flavours of noise, and taught us the ABC musical notation structure – a way of describing music that fits well with traditional genres.

With the clever Elm Combine in hand, we were shown how swiftly and elegantly we can write our own parser for the ABC musical notation and get something playing in our browser pretty quickly!

John also talked about the practicalities of writing an Audio API library for Elm, the restrictions libraries have, why they exist, and what you can do to overcome these challenges.

We were treated to a demo of all this in action with some live music – check out his talk in the video below and have a listen for yourself!

There were plenty of juicy Elm tips and approaches shown – visit his Elm project on GitHub for some beautiful and pragmatic code.

A Realist’s Guide to Language Design – Bodil Stokke

Bodil grabbed our hands and took us through the fields of functional programming languages to teach us that there is no single best language, no silver bullet for every situation. Luckily, she provided the tools to let us make our own decision about which languages suit us and our projects.

After a fun introduction reminiscent of Gary Bernhardt’s famously charming ’Wat’ talk, Bodil asked us to think about some key aspects of language design, and if we were to make our own, what are the kind of things we should be thinking about?

  • What are you trying to achieve?
  • Who are you trying to achieve it for?
  • What level of abstraction is right?

She emphasised that a good programming language should ideally feel like you’re having a conversation with your compiler.

Bodil then whizzed through examples in Elm, PureScript, and JS extensions to highlight their similarities, differences and what those could mean to you.

Elm is extremely browser-focused, letting you get on your feet very quickly if your goal is a web application. It enforces a very specific type system that allows it to serve you immensely helpful error messages, automatically ensures packages are where they’re meant to be, and delivers solid confidence in development.

However, Bodil demonstrated that even uglified, Elm took up 108KB of space, and Elm on the server is a bit of a wish at the moment! Using foreign JS outside of the Elm space can also be a pain, and relies on the use of ports.

PureScript has a more generic type system, allowing you to reach a higher level of abstraction than Elm. However, this more complex, powerful type system means there’s a loss in simplicity: the error messages aren’t as helpful, and you may need to reference a site like Pursuit for occasional guidance.

Interoperating with JavaScript is much easier than with Elm’s approach, thanks to PureScript’s Foreign Function Interface. Even better, you can manage these foreign functions and packages easily through trusty old Bower.

JS Extensions
Through the use of libraries like Flow, it’s possible to gain some of the positives of type systems without committing to a whole new language. This is fantastic news for legacy projects that want to gradually improve rather than commit to the risky gambit of a full rewrite.

Which one is right for you? Bodil stressed that each has a different purpose, and we should take what we’ve learnt to assess what we need, what effort we have available, and the future.

Watch her talk and give us your thoughts!


That’s all! Thanks to everyone who came along, our gracious speakers, and to Skills Matter for hosting.

See you all next month!

If you aren’t already a member of the Functional Programming for the Web meetup you can join here. See you at our next event!



Redux-mori

by Anna Doubkova

When starting our last project, we had long discussions about the immutability of our state. Using the spread operator or Object.assign in reducers didn’t quite cover our requirements. As crazy as we sometimes are, we decided for various reasons not to use ImmutableJS like most people do – instead, we went with mori. Combining reducers with redux’s combineReducers wouldn’t work in our case, as it only works with plain JS objects. We looked around for a solution we could use instead, something similar to redux-immutable. After a while, it became obvious that if we really wanted to use mori in our redux app, we’d need to write our own solution – and that’s how redux-mori was born.

Tl;dr: Redux-mori is a very slim library that allows using mori’s immutable data structures for state (and action) objects in a redux app.




I think it’s fair to stop for a while before getting on with our project to explain why and how we use mori – and what it actually is.

We can learn from the mori docs that it is “A library for using ClojureScript’s persistent data structures and supporting API from the comfort of vanilla JavaScript.” I couldn’t describe it better myself.

A few examples will probably make it clearer than a whole article on the topic. Let’s see how it actually looks when used in our apps.

1. When we get data from our server, we convert it immediately to a hashMap.


2. We then pass this data into our reducer using an appropriate action.

3. Finally, to use our state in components, we change clojure data into plain JS data at the last possible moment.
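The three steps above might look roughly like the sketch below. The action type, state shape and sample data are made up for illustration; the mori functions used (toClj, hashMap, assoc, get, toJs) are from mori’s documented API, and the library is assumed to be installed.

```javascript
const mori = require('mori');

// 1. Convert server data to clojure data as soon as it arrives
//    (toClj deep-converts plain JS objects and arrays)
const response = { name: 'Badger', legs: 4 };
const data = mori.toClj(response);

// 2. Dispatch it into the reducer wrapped in an action
const action = { type: 'RECEIVE_ANIMAL', payload: data };

const initialState = mori.hashMap();
function animalReducer(state = initialState, action = {}) {
  switch (action.type) {
    case 'RECEIVE_ANIMAL':
      // assoc returns a new map; the previous state is untouched
      return mori.assoc(state, 'animal', action.payload);
    default:
      return state;
  }
}

// 3. Convert back to plain JS at the last possible moment,
//    e.g. in a component's render or in mapStateToProps
const state = animalReducer(initialState, action);
const plain = mori.toJs(mori.get(state, 'animal'));
// plain is a plain JS object again, ready for the component
```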


In the examples above, we’ve pretty much exhausted all mori functions and structures we use in our app. Mori has much more to offer though, and I’d really encourage you to have a good look at their docs to learn more cool stuff.


Why mori?

We decided to go for mori for a lot of more or less “emotional” reasons. First of all, we’re quite into functional programming and Clojure here at Red Badger. We like immutability and pure functions, and aren’t that keen on objects.

Another reason we wanted to dodge ImmutableJS is the API. After using it on a few previous projects, some coders on our team were quite tired of the object-oriented, slightly over-complicated way of accessing and creating data structures.

Last but not least – with every project, we strive to learn more, to explore various options, and have fun. When we spotted something few people have done before, we wanted to get out there and try it out.

Using mori

You may have noticed that so far, we haven’t really used redux-mori in our examples. We don’t need it for that much at the end of the day; we just need to re-write a few redux functions to work with a different data structure.

1. In your root reducer:

2. To create your store:
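A minimal sketch of both steps, assuming redux-mori exports drop-in replacements for redux’s combineReducers and createStore (check the redux-mori README for the exact import paths; the reducer names here are made up):

```javascript
import { combineReducers, createStore } from 'redux-mori';

import animalReducer from './animalReducer';
import userReducer from './userReducer';

// 1. Root reducer: combines sub-reducers into a mori hashMap state
//    instead of a plain JS object
const rootReducer = combineReducers({
  animal: animalReducer,
  user: userReducer,
});

// 2. Store: state (and, by convention, action payloads) are mori
//    data structures rather than plain objects
const store = createStore(rootReducer);
```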

This way, both your state and actions will be clojure data. When logged to the console using redux-logger, they will be automatically translated back into plain JS objects so they’re easily readable when debugging.

Why we like it

Doing things this way seems to be working out pretty well for us. Here’s a list of pros:

  • It’s immutable
  • It’s functional
  • It’s readable
  • It’s reliable
  • It makes sense as a part of our workflow


Drawbacks

Using it with other redux “extensions” such as redux-saga and redux-form can be tricky (the same was true of redux-logger until we wrote our wrapper). These libraries typically expect your state to be a plain JS object, and often don’t allow you to modify the data structure of your app at all.

In order to make it work with redux-saga, we decided not to use createStore from redux-mori in our current project. It works fine with redux-thunk, however, and we’re planning either to extend our project or to submit a PR to redux-saga so that we can “go mori” all the way.

When we were trying to find a workaround for redux-form, Erik Rasmussen pointed out that redux-form already supports ImmutableJS, but not other data formats. There’s a “hack” of sorts that you can use though, so even this problem has a solution.


Even if you’re not as adventurous (or mad) as we are and you want to stick to solutions well proven by others, I’d encourage you to give this a try. If you have any problems making it work, feel free to get in touch on Twitter.

We welcome any comments, issues, discussions, and pull requests.

And to conclude with – have fun, writing in a functional way is sexy 😉


Enough about Diversity

by Amy Crimmens

This morning Becca and I went to a breakfast briefing on diversity. I’m not usually up for breakfast events as I’m grumpy in the morning, but I was interested to see if anything different was being said after hearing Dave, our COO, speak at a diversity meetup last week.

There were three speakers on the line-up; each covered a lot of the points that are often discussed at these events, like “your team’s makeup should reflect the makeup of society” and “put yourself in a situation where you are in a minority and you’ll understand how it feels”. All very valid points and great things to be reminded of, but not necessarily anything new. They also slipped in a few stereotypes and assumptions – for example, that a girl will feel excluded amongst a group of guys if they talk about football (Becca and I both raised our eyebrows at this), or that we need to hire more female designers (6 out of 8 of our digital designers are female – we almost have the opposite issue). The main focus of the event was gender diversity, and in my mind when we talk about gender diversity we should talk about exactly that – and not simply focus on whether women are adequately represented.

Anyhoo... a nice point came from Mike Islip, CEO at DigitasLBi. He talked about helping women come back into work after having children, and told us that Digitas has just signed its first term-time-only contract. Now that I’ve heard Mike talk about this type of contract, it seems like common sense, and a bit daft that it isn’t more of a common occurrence (maybe it is and I just don’t know about it). I could definitely see Red Badger considering offering this in future if someone needed it – hats off to Digitas for leading the way.

I listened to a lot of the talks and kept thinking it was all a bit unnecessary; then, during the discussion panel, my feelings were summed up nicely by one of the audience members. Maybe the point is that it’s not about focusing on gender diversity after all; instead we should focus on being a good and responsible company. Simple as that.


At Red Badger we don’t need to make rules about things like only scheduling meetings from 10–4 or banning emails from 8pm to 8am, because we have a culture where we talk openly about important things and we respect each other. We don’t expect colleagues to respond to emails in the evenings, and if someone needs to work certain hours due to childcare (or any other legit reason, actually) we talk about it and make a plan that works. We like each other and recognise the contribution each of our colleagues brings to the company, so why would we want to exclude someone or disregard their childcare responsibilities?

I am aware that we are a company of 70-ish people (& 1 dog) at the moment, and already we are finding we need to formalise our stance on more things as we grow. However, we are also very focused on maintaining the culture we have worked hard to foster, which I would say has led to us organically building a team that is very diverse on many fronts.

There is space for one or two more friendly faces on our team. If you would like to join us check out our vacancies here and get in touch so we can talk about the important stuff.



Celebrate National Badger Week

by Roisi Proven

It’s National Badger Week! I thought I’d take this time to talk a little bit about our namesake, and the qualities that we share (and some that we don’t).


  • Badgers have been present on the British Isles for up to 400,000 years. Red Badger has been present on the British Isles for 6 years.
  • Badgers live to about 24 years in captivity, and around 14 years in the wild. We have proof that Red Badgers live well into their 50s, and we hope much longer than that.
  • There are eight different species of badger. There are two species at Red Badger: human, and Dachshund (which is German for “badger dog”).
  • The Welsh name for badger is “moch daear”, which translates to “earth pig”. People at Red Badger have been called many things, but never earth pigs.
  • Badgers are very clean and will not poo in their sett. They have special chambers designated as latrines. Red Badger has two toilets which are fiercely fought over. Milo has not been given the badger rule book on pooping.
  • Badgers can eat several hundred earthworms every night. I asked in the Red Badger Slack channel if anyone has ever eaten an earthworm, but I received no response. However someone did respond with the snake emoji which I am taking as an admission of guilt.


  • Badgers have unusually thick skin. Here at Red Badger, we’re a far more sensitive lot.
  • European Badgers are famously social and vocal creatures. Red Badger parties are the stuff of legend, and we do a mean karaoke.

by flickr user https://www.flickr.com/photos/hellie55/


Badgers are protected in the UK. It is an offence to wilfully kill, injure or take a badger (or attempt to do so), or to cruelly ill-treat a badger. We always make sure everyone at Red Badger is treated kindly, and we have space in our sett for more.


Docker and assets and Rails, OH MY!

by Jon Yardley

How to precompile Ruby on Rails assets with Docker using --build-arg for deployment to a CDN.


I love Docker. I really enjoy all the benefits it brings, not only to the developer experience (DX) but also to confidence in deployments. Docker, however, is not a silver bullet on its own. It has brought with it a new set of problems which we would not have come across with more old-school methods of application deployment.

Recently I came across a particularly annoying issue with Rails 4 and its asset pipeline when serving assets from AWS S3 via CloudFront. Referenced assets were not resolving to the correct location when running assets:precompile.

Finding the right place to precompile assets was also far from obvious. At build time? When deploying? At startup? After trawling the web for a long time I found no clear answer to this problem.

In detail: The Problem – TL;DR

In production, or any other remote environment, you want your assets served via a CDN, and to do this with Rails you need to precompile your assets. This compresses all your assets and runs them through any precompilers you use, e.g. Sass. If you use any frameworks, it will also bundle up all their assets too.

The application I am currently developing uses Solidus (a fork of Spree Commerce), which has a bunch of its own assets for the admin panel. Precompiling also fixes the paths to your referenced assets, e.g. font files.

If you don’t have config.action_controller.asset_host set in production.rb at precompile time, then these references will be relative to your application domain and won’t resolve. Not ideal!

Another consideration is that with Docker you want to build your container once and ship it across different environments without changing anything about the application in between; environment variables then tell your application where it currently lives, e.g. staging, production, etc.

If you tell Rails to run with config.assets.digest = true, then you need the precompiled assets manifest file, which tells Rails about your precompiled assets. That means you want to generate it at build time – but at that point your container has no awareness of its environment.

This particular problem rules out compiling assets when you deploy. Even though your assets will live on your CDN, your container won’t know where to point, as the manifest won’t exist inside the container, and therefore references to assets will be incorrect.

Why not run the assets:precompile rake task in the entrypoint.sh script when the container starts up?

There are a few problems with this approach. The first is that we are deploying our application using the AWS EC2 Container Service, which has a timeout when you start the container. If the Dockerfile CMD command does not complete within a certain amount of time, ECS will kill your container and start it again. This can make it very frustrating and difficult to work out what is going on.

Also, if your container ever dies in production before starting up it will have to precompile all the assets which is not great. You really want your container to start up as quickly as it can in the event of a failure.

The Solution: --build-arg

Until I spent a day banging my head against a wall trying to fix this, I had no idea that Docker has a --build-arg option. Here is a snippet from the Docker docs:

You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.

A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag.

This option allows you to build your container with variables. This is perfect for compiling assets when building a Docker image. I know this sort of goes against the whole idea of immutable infrastructure; however Rails, in my case, needs to know which environment it will be living in while it is built, so that any asset references resolve correctly.

How to use --build-arg

Set your asset host

In your Rails application make sure you set the asset_host from an Environment Variable:
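A sketch of what this can look like in production.rb, assuming the environment variable is called ASSET_HOST (the name is an assumption carried through the rest of these examples):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Point the asset pipeline at the CDN host passed in via ASSET_HOST;
  # if it isn't set, Rails falls back to serving relative asset URLs.
  config.action_controller.asset_host = ENV['ASSET_HOST'] if ENV['ASSET_HOST']
end
```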

Amend your Dockerfile

In your Dockerfile insert the following after you have added all your application files:
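One way to wire this up is sketched below. Treat it as an outline: the exact rake invocation and any extra variables your app needs at precompile time (e.g. a dummy SECRET_KEY_BASE) will vary per application.

```dockerfile
# Declare a build-time argument and promote it to an env var so the
# precompile step (and the Rails config above) can see it
ARG ASSET_HOST
ENV ASSET_HOST=$ASSET_HOST

# Precompile with the CDN host baked into the asset manifest
RUN RAILS_ENV=production bundle exec rake assets:precompile
```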

Build your image

Then in your CI build script:
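For example (the CloudFront domain and image tag here are placeholders):

```shell
# Pass the CDN URL for this environment into the image build
docker build \
  --build-arg ASSET_HOST=https://dxxxxxxxxxxxx.cloudfront.net \
  -t my-rails-app .
```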

The resulting image will now have your precompiled assets inside the container. Your Rails application then has access to the manifest file with all the correct URLs.

Deploy your precompiled assets

To then deploy your assets to S3 you can copy the images out of the container and then push them up to AWS:
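One way to do this is to create a container from the image without running it, copy the assets out, and sync them to S3. Container names, paths and the bucket name below are illustrative:

```shell
# Create (but don't start) a container so we can reach its filesystem
docker create --name asset-dump my-rails-app

# Copy the precompiled assets out of the container
docker cp asset-dump:/app/public/assets ./assets
docker rm asset-dump

# Push them up to the bucket that CloudFront serves from
aws s3 sync ./assets s3://my-assets-bucket/assets
```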

Hopefully this will help others who have been having the same problems. Comments and other solution suggestions are welcome!

Want to use Docker in production?

At Red Badger we are always on the lookout for “what’s next”, and we embrace new technologies like AWS ECS and Docker. If you’re looking to work with a team who are delivering software using the latest technology, and in the right way, then get in touch. We are constantly on the lookout for talented developers.


Online Learning Resources

by Anna Doubkova

Life as a software engineer is one of continuous learning. Some people enjoy exploring new technologies and others complain about #jsfatigue. At the end of the day though, we’re all in this together.


I’ve gathered quite a few useful resources over the past couple of years. I used them to learn more about various topics that I thought would either help me become a better programmer or that I just wanted to dip into for a bit. The topics range from computer science to physics, and the resources could be of interest to all coders out there – that’s why I’ve written up a few tips for you.

All of these e-learning sources are free and available to you at any time – enjoy! (And let me know in comments what your favourite ones are.)

Web Development

Functional programming textbook


This one is a good classic for anyone who’d like to learn a bit more about functional programming without having to delve deep into mathematical definitions and weird terminology. Professor Frisby’s Mostly Adequate Guide to Functional Programming (that’s the official name) is a wonderful book for beginners and others who’d like to strengthen their knowledge of the topic. It’s fun, easily readable and has plenty of examples to support the theory.



Let’s get onto the practical side of things! A lot has been said and done and written about Redux. However, the more widespread it gets, the more misconceptions and misuses of this library you see. That’s why it’s really worth watching these videos from Dan Abramov (the creator of Redux): he talks not only about how to do things properly in Redux, but also about why you should stick to these rules. Not to mention that he points out a lot of good practices in Getting Started with Redux that can save you a lot of headaches.

Learn ES6 by fixing unit tests


This one might be slightly passé, as probably every single JS developer uses ES6 by now. Still, ES6 has a lot of features, and some – such as generators – aren’t that easy to comprehend at first. On ES6 Katas, you can learn all the ES6 features by fixing unit tests. Not only can you improve your knowledge of the new version of JS, but you also get to enjoy making tests go green! Perfect combination.

JSON Schema


JSON Schema is super useful when working on web data structures that should be universal and generic. The docs are unfortunately quite cryptic, which is why Understanding JSON Schema is such a blessing: it’s readable and shows a lot of examples alongside the theory. Obviously it’s not something you would read before sleep, but it’s a good reference in case you need to find out more about schemas or some of their parts and capabilities.

Computer Science

Machine Learning


I think I should really start this section with my favourite online textbook out there. Having said that, I must also admit I didn’t finish it, because the learning curve got pretty brutal half-way through. However, thanks to Neural Networks and Deep Learning, I learnt more about the principles of this technology than I could have hoped to achieve on my own. There are practical Python examples and mathematical exercises included, so you can always check that you understand what it’s on about. The tricky part is that you need a pretty good knowledge of higher mathematics, mainly matrices and derivatives (and gradients and all those things in n-dimensional spaces). Still, I highly recommend at least reading through the first two chapters, which are quite accessible even for those of us who never finished uni 😉


Rich Hickey



Rich Hickey is the person behind Clojure and Datomic, so as you can expect, he’s a pretty smart guy. He’s also a great presenter who makes some very complex ideas easily understandable for us common folks. I’ve included two of my favourite presentations he did at QCon. Simple Made Easy is a talk that considers programming from a very general point of view. It works with the idea that we want to create very simple (even if complex) applications, but that’s usually a difficult task. In Database as a Value, Rich shares some underlying principles of Datomic. Even if you don’t know much about Clojure or Datomic, it’s easy to understand the key ideas and start thinking about data in a more universal way.

AI course on edX


This was the first course I took on artificial intelligence, and although it wasn’t guided by a tutor, the videos and exercises are quite sufficient to learn the principles. Apart from the fact that you can listen to talks on something incredibly cool from Berkeley professors for free, it’s fun and interactive, and it feels really rewarding at the end of the course if you stick with it. I’ve seen similar courses available online on the MIT website and the like, but they never reached such high standards in terms of online accessibility.

It might be worth noting – because I didn’t know when I started this course – that AI isn’t really about making sci-fi-style fantastical creatures, as the authors of books and scripts would have us believe. It’s about finding clever algorithms that somehow help computers understand our world and make intelligent decisions. Just as much fun as reading Asimov, if you ask me.


Algorithms and data structures


Four Semesters of CS in 6 Hours might be a slight exaggeration – but you can certainly learn a lot in a relatively short amount of time, because you won’t see much theory here. Instead, the concepts are briefly explained and then you do exercises, based on fixing unit tests, to check that your algorithms or data structures work as expected. Solutions are also available, so if you get lost you can always get back on track easily. A great intro to computer science if you never managed to study it at uni, or have forgotten everything since 🙂



I must admit I haven’t read this book, but it is usually referred to as the “Bible” of learning algorithms. Plus it’s free and online, so it’d be a sin not to include it on this list. It’s definitely very high on my list of books to read when I have some time… but you know how that goes!


Khan academy


Most of you have probably heard of Khan Academy, or have even used it already, but I can’t imagine not including it on this list. You can learn pretty much anything there – online, for free, with exercises, videos and gamification, from biology to computer science. I used it to refresh my knowledge of derivatives and integrals, and I love their explanations of mathematical proofs. The videos (together with the comments under them) are amazing, especially when you’re trying to understand the underlying principles.

Special Relativity


The No Nonsense Guide to Special Relativity is another one of those great online textbooks that rely less on dry theory than on examples and explanations made for normal (?) humans. If you always found this part of physics fascinating but couldn’t quite understand “what the hell is going on there”, check out this link. It’s not a “for dummies” version, but it’s still very accessible.


Bonus: How we learn to code


This final entry isn’t really an e-learning resource, but I found it so impressive I wanted to share it with you. Kathy Sierra did a talk called Making Badass Developers about how we actually learn to code. She mentions that we shouldn’t be too hard on ourselves – we all have only a certain amount of cognitive energy each day, and it’s okay to take it easy. She shares a few useful tips on how to learn faster and more effectively, and it sped up my learning process by a lot. Thinking about it, maybe this is actually a much more important resource than any textbook or tutorial out there that would teach you a specific piece of know-how: what you gain from this talk is applicable to everything else I’ve posted in this blog.


At Red Badger, we have a generous training budget to learn even more, with experts in the industry, anywhere around the world. We also actively support the OSS community by contributing to repos, running React meet-ups in London, and organising a conference in March 2017. Oh, and have I mentioned our amazing altruistic project Haller for African farmers? If you want to know more, and maybe even work for us, drop us a line!


ReactEurope 2016 Retrospective

by Melissa Marshall

What a whirlwind of a trip to Paris! The badgers are back in London after three days of meeting ReactJS developers, hastily starring cool GitHub repos, and having a few too many at the Frog. As a relative newcomer to React it was fascinating to hear about how much the ecosystem and community have grown in just a year.

Red Badger team exploring Montmartre

Exploring Montmartre with the Red Badger team.

One mark of a good conference is just how badly it makes you want to grab your laptop and start coding, and React Europe certainly gave me itchy fingers. My list of new libraries and technologies to investigate will certainly keep me busy for a while but before diving in I’d like to look back at the conference: what was good, what could be better, and my hopes for React Europe 2017.

The Good

By far the best thing about React is the community. The sheer number of people building libraries and tools for the framework and its ecosystem is impressive. In part that’s due to the culture surrounding React development – to hear “celebrities” like Dan Abramov and Vjeux talk about the importance of humility and encouraging more people into OSS contribution was unusual and refreshing. I also liked that Facebook sent key members of its React, GraphQL, and Flow teams out to Paris — there’s no better way to hear about the future of a technical landscape than from its creators.

ReactEurope showcased several high-quality, engaging talks. A few of my favourites were:

  • Jeff Morrison’s deep dive into Flow — having used Flow on my last project I thought this was fascinating. I’ll also be the first to admit I got rather lost midway through. Impressive and interesting nonetheless, it’s one I’ll watch again with the pause button ready.
  • Jonas Gebhardt’s talk on using React to build better visual programming environments. Tons of programmers get their start in visual languages despite most of them being clunky and removed from more standard coding environments. Although just a prototype, the tool this talk introduced had great potential for improving CS education.
  • Bonnie Eisenman’s retrospective on React Native. Although not technical, I thought this talk was really important, especially to those new to the React community (like me). Out of every talk at the conference this was the one that made me want to get coding ASAP.
  • Andrew Clark’s talk on his Recompose library — immediately useful and interesting, I will absolutely be checking this out for my next project.
  • Laney Kuenzel and Lee Byron’s talk on the future of GraphQL. I haven’t had the opportunity to use GraphQL yet but this talk was well delivered and nailed the balance between accessible to beginners and interesting to experts. I can’t wait to try it out.

And a major thanks to all the presenters for being so humble and open to questions about your work.

Dan Abramov's Redux talk at React Europe

Excellent use of emojis by Dan Abramov.

What Could Be Improved

Although I really enjoyed React Europe there were definitely some things I found frustrating throughout the week. Starting with the least important, coffee and wifi were in short supply! This was annoying but I was still impressed with how smoothly the whole thing ran overall — catering for 600+ attendees is very challenging.

Next year, I would like the conference to run multiple tracks in smaller spaces. It seemed difficult for the speakers to engage the whole (gigantic) room. Especially if you happened to be seated behind a pillar watching a screen, it felt like you may as well be at home in bed watching the livestream. There was also no alternative event to attend if you weren’t interested in a particular talk.

Over the course of the conference, I started to wonder if React is actually a large enough domain for a multi-day event like this one. Very few talks introduced new React paradigms or techniques. Most of the talks could be categorised as either something about GraphQL, something about React Native, or a demo of someone’s new React library. A lot of these were interesting, but after two days, repetitive. Personally I think something like a functional web programming conference might have more value than a React-specific one. However, a major part of the conference was getting to meet React’s developers and creators which for me was the highlight of the conference, and not something I’d want to miss out on.

And of course my final wish for the next React Europe is a higher percentage of women attending and speaking. I’m used to being a minority in tech but the gender ratio at React Europe was probably one of the worst I’ve ever experienced. In any case, I very much appreciate that the conference had a code of conduct, which is step one in making events more accessible to women. Hats off also to some really fantastic women who spoke — Lin Clark, Bonnie Eisenman and Laney Kuenzel all did a stellar job.

Overall I had a lovely time and met some incredible people. Hope to see you all next year!

Red Badger are hosting the 1st React conference in London next March. If you’d like to be kept up to date with news you can sign up here.


What Can We Do With All This History?

by Roisi Proven


In Kanban, behaviour-changing data is key. We visualise absolutely everything we do, and track it diligently. We do this so that we can use real-world data to give accurate, tangible forecasts for our projects, and to identify bottlenecks and inefficiencies so we can continuously improve.

Here at Red Badger, we have, for the past few years, recommended to our clients that they use Kanban. Some take to it more readily than others. While we had previously transitioned businesses from the more traditional Agile model of Scrum into Kanban, Fortnum & Mason was the first project where we used Kanban from day one. Their confidence in our expertise allowed us to build a strong foundation for a project that is still going strong, over 2 years after it first began.

With those two years comes a hell of a lot of data. We have released code into production 317 times since the project began, and in the last year alone we have shipped over 300 user stories. So your first thought would be that our forecasts must now be alarmingly accurate, right?

Wrong. Because maths is hard.

As it turns out, too much data can be just as worthless as too little, so how do you figure out where to draw the line?

Kanban: The Basics

For the uninitiated, Kanban is an Agile framework focused on the “flow” of work. Rather than prescribing the sprints and ceremonies used in the more traditional Scrum methodology, Kanban is all about helping the team reach a cadence that allows them to deliver continuously and consistently.

There are many ways to forecast within the Kanban framework, but here at Red Badger we use Little’s Law:

Lead Time = Work in Progress ÷ Throughput

This formula can also be rearranged to calculate any one of the three variables from your historical data, providing a forecast that often proves much more accurate than the estimation process of Scrum.
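To make the rearrangement concrete, here is a minimal sketch of Little’s Law and its two rearranged forms. The function names and the story counts are made up for illustration; they are not figures from the Fortnum & Mason project.

```python
# Little's Law relates three flow metrics: lead time, work in
# progress (WIP) and throughput. Given any two, you can forecast
# the third. All numbers below are illustrative.

def lead_time(wip, throughput):
    """Average lead time = WIP / throughput (items per week)."""
    return wip / throughput

def forecast_throughput(wip, lead_time):
    """Rearranged: throughput = WIP / lead time."""
    return wip / lead_time

def forecast_wip(throughput, lead_time):
    """Rearranged: WIP = throughput * lead time."""
    return throughput * lead_time

# With 12 stories in progress and 4 finishing per week, a new
# story should take roughly 3 weeks to flow across the board.
print(lead_time(wip=12, throughput=4))  # → 3.0
```

The point is that every forecast comes straight from observed historical averages, rather than from up-front estimation.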

How Much is Too Much?

It’s never going to be clear when you first start, but your data will always let you know when it is becoming less useful. The most common way this manifests is when a notable variable change does not result in a shift in your averages. For instance, a change in team size should, after a couple of weeks, start showing an effect on your average Throughput and Lead Time. However, after reducing the team from 6 devs to 4, we noticed that even after 6 weeks our Throughput remained steady.

It quickly became clear that the sheer volume of data meant that we had hit an average that was no longer affected by outliers. This is covered within the Central Limit Theorem, which states:

given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed, regardless of the underlying distribution.

As a consequence of this, we noticed difficulty in forecasting using our data in its current form. It’s always a bad sign when you run a forecast past your team and they laugh at it because it’s so ridiculous. Always heed the laughter of a developer.

Making the Most of History

You have all that data, but it isn’t helping you. So what can you do?

  • Create a moving average – The reason your averages aren’t changing is that there is simply too much data for a few outlier weeks to affect them. So instead, narrow the window over which you calculate your averages. Take a ten-week period (or 8, or 4 – it’s definitely worth experimenting with different lengths of time), and base your averages on that. Keep the period the same, always working back the same number of weeks from your current data point. This allows big variable changes to show up in your data far more quickly, giving you a better overall view of the world.
  • Compartmentalise – split your project into milestones and create an average from each section. Work backwards from the single task level back up to the “epic” level. This creates a less granular, but still well defined, datapoint average of each piece of functionality you have delivered. This is good for projects which have clearly defined goals or milestones and a team size/skillset that remains constant, but perhaps less so where the flow of work is more to do with business as usual.
  • Start from scratch – This should only be done in the most dire of circumstances. 9 times out of 10 all your data needs is a little love and attention. Occasionally, however, the data you have may be representing your project so badly that you should archive it for posterity, and start from scratch. You’ll have those same early project wobbles that affect your data, but sometimes a full refresh is exactly what you need to bring the project back to a meaningful place.
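The first option above can be sketched in a few lines. This is a toy example with invented weekly throughput figures (not our project’s data), assuming the team shrank around week 15: the all-time average barely moves, while a ten-week moving average picks up the change.

```python
from collections import deque

def moving_average(values, window=10):
    """Average over only the most recent `window` data points."""
    recent = deque(values, maxlen=window)  # older weeks fall out
    return sum(recent) / len(recent)

# 20 weeks of (made-up) weekly throughput; the team size drops
# after week 15, so throughput falls from 8 stories to 5.
weekly_throughput = [8] * 15 + [5] * 5

overall = sum(weekly_throughput) / len(weekly_throughput)
rolling = moving_average(weekly_throughput, window=10)

print(round(overall, 2))  # 7.25 – barely reflects the change
print(round(rolling, 2))  # 6.5  – already tracking the smaller team
```

Feeding the rolling figure, rather than the all-time average, into Little’s Law gives a forecast that keeps up with how the team actually looks today.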

The list above is by no means exhaustive, and by and large the main thing to remember is that as a Project Manager, what you track and how you track it will constantly evolve and change. There is no such thing as a “perfect” process, only one that is well-tended to and respected by the team using it.

Also, maths is hard.


You can only get good process when you’ve got good people. Come be a good person with us by checking out our vacancies!


What’s the point of going to conferences?

by Hanna Cevik

We have a £2,000 annual training budget at Red Badger that can be used however we like. Most people use it to travel to a conference in the US, Asia-Pacific or somewhere equally exciting. Training is really about expanding and honing the skills specific to your job role, though, so sometimes the most relevant conference is… at the London ExCeL.

On 12th May, I took myself out to deepest, darkest Docklands (admittedly in my MX5 with the roof down, as it was a super sunny day) and wandered around hall S6 for 7 hours. Among the things I wanted to hear about were why buyer journeys are all wrong and how to speak to big prospects whilst still sounding like a human being.

At Red Badger, it’s really important to us that we talk sense, both in terms of what we do and how we tell you about it. I was keen to hear how other people did it, and what the audience thought about it. One of the things I love about how we build new business here is that we don’t have a sales team. It means that we win new business based on our reputation, the client’s need and our suitability to do the job, not because someone wants to meet their target and get their bonus. Many agencies do use that model, and it leads to internal division: project teams hate the sales team because they just throw projects over the fence and don’t care about how they’ve been sold. The clients are almost always disappointed too; they end up having their projects de-scoped to make them possible in the time or for the price they’ve been promised.


What are you doing right now?

We don’t work like that at Red Badger. Ever. We are one team from pre-sale conversations to support; you’re always talking to people who know and respect each other’s working practices and understand how and why something has been designed or built that way. As a marketer, it is a joy to work with.

The speaker in the “Maximising your Business Relationships” session talked about how he felt the same disillusionment with that model, and set out to prove that large projects could be sold and managed without resorting to sales speak. This actually makes life a lot easier for both the seller and the buyer. The pressure to talk in acronyms and business language can make it really hard to know what the other party means or wants. It’s a lot easier to say “I’m going to provide you with some recommendations to help get everyone on board” than “we realise this is going to be a c-suite decision, and I will provide you with a formal RfP response via procurement”. You have the same due diligence obligations to meet, but everyone feels like they are dealing with another human being. There were murmurs of uncertainty in the room; “but how will we sound important and knowledgeable without using all those buzzwords?” – and frankly that is exactly the problem. If you can’t sell your product by being plain and transparent, it’s probably not the sales process that is flawed.

It’s a lot like the Agile/Lean process itself – cut the waste, cooperate constantly, deliver fast. Endless documentation (e.g. large proposal documents) doesn’t get anything done faster, and may well contribute to losing sight of the end goal. Just like when you propose Agile, lots of people in the room looked worried. It’s hard to let go of the models you’ve been using for years. But that’s exactly why you should – they are obsolete. Just like the monolithic agency giants, they no longer provide the best solution.

It tied in with the buyer journeys talk I’d heard earlier in the day. If you are using the ‘traditional’ sales funnel, you’re going to be disappointed with your conversions.

sales funnel

This is just not how it works anymore. Most of your prospects simply aren’t interested in hearing about how your solution is going to do something X times better and Y times cheaper than your competitors over 40 pages of sales documentation. They want to know what it’s going to be like to work with you and how that is going to get the result they need delivered. They want to know why they should work with your teams, specifically, to achieve their aims. The old sales funnel model focuses too much on saying the right thing to the prospect to get them ‘down the funnel’, when you should be focusing on how to solve their issues.

Going to conferences isn’t always about learning new skills, sometimes it’s about being given the confidence to let go of old habits. Knowing that sales-speak isn’t necessary, that doing the right thing is more important than saying the buzzwords and being bold in your decisions will mean that you don’t make the same mistakes as before, and get a different, better result.

So, thanks B2B Marketing Expo! You reminded me that doing my job well is often about simply treating people as human beings.