London React Meetup – April 2016

by Leon Hewitt

An evening of testing workflows and dynamically built forms awaited visitors to Code Node last Wednesday when the London React Meetup was once again in town.

Tom Duncalf kicked things off by describing what he has found to be effective unit and integration testing strategies for React applications.

Tom explained his rationale for writing tests and in particular unit tests. In addition to verifying the application behaves as expected and providing a useful set of automated regression tests (allowing you to refactor with confidence), he pointed out how well written tests can act as documentation for the code and enable faster debugging with less dependency on end-to-end tests (be they automated or manual) to expose errors.

Taking this testing philosophy, Tom went on to discuss how it applies to testing applications built with React, and listed the qualities he looks to test in his components (e.g. do they render correctly, can you interact with them as expected, and how do they integrate with the rest of the application).

The talk was full of code examples showing how Tom goes about implementing his tests using his toolchain of choice: Mocha, Chai and Enzyme.

Next up was Anna Doubkova, discussing her experiences with Redux Form and how useful it was in developing a CMS application she worked on with her team here at Red Badger. One aim of the project was to deliver a CMS that could be extended with less dependency on developer input. Anna noted how great it would be for the customer to alter their CMS just by changing the data structure, i.e. to have fields added by the CMS administrator automatically render on the page without the need to bring the development team in.

A combination of JSON Schema, Redux Form and React enabled the team to do just that. Anna took us through the journey of developing the solution and the reasons for the technical choices made.
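Anna didn't publish the team's code in the talk, but the core trick – deriving form fields from a JSON Schema so a new field needs no developer involvement – can be sketched roughly as follows. The schema and the field descriptor shape are illustrative assumptions, not the actual implementation:

```javascript
// A content type described as (simplified) JSON Schema. An administrator
// could add properties here and the form would pick them up automatically.
const articleSchema = {
  title: 'Article',
  type: 'object',
  required: ['headline'],
  properties: {
    headline: { type: 'string', title: 'Headline' },
    body: { type: 'string', title: 'Body' },
    published: { type: 'boolean', title: 'Published?' }
  }
};

// Turn the schema into field descriptors that a Redux Form-style
// component could map over to render inputs.
function schemaToFields(schema) {
  return Object.keys(schema.properties).map(name => ({
    name,
    label: schema.properties[name].title || name,
    component: schema.properties[name].type === 'boolean' ? 'checkbox' : 'input',
    required: (schema.required || []).includes(name)
  }));
}

const fields = schemaToFields(articleSchema);
console.log(fields.map(f => f.name)); // → [ 'headline', 'body', 'published' ]
```

The rendering side then becomes a dumb loop over the descriptors, which is what lets new fields appear without a deploy.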

Anna ended by listing the pros and cons of working with Redux Form, expressing overall how easy the team found it to use.

Rounding the evening off was Arnaud Rinquin, who shared his journey of reducing the friction he was feeling around testing in the JavaScript ecosystem. It’s no secret that the toolchain takes a bit of setting up, leading to developers experiencing what has been politely termed JavaScript Fatigue.

Inspired by Dan Abramov’s Redux tutorial, Arnaud aimed to recreate the feel of Dan’s environment in his own workspace, that is: run the tests in the browser, keep the tests alongside the production code (in the same file), have the tests run automatically on a code change, and have the tests execute in the shell (to facilitate continuous integration).

He successfully achieved this through a combination of a babel plugin (to remove the test code and any dependencies from the code files) and a specially written chrome plugin (to control the test runs). This achievement has enabled Arnaud to enjoy what is for him “a proper TDD workflow”. He can now keep coding and stop worrying.

The success of these meetups (all 300 tickets for this event were snapped up within an hour) demonstrates the popularity of React in the London software community, and the quality of the talks highlights how open the React community is to exploring and embracing new techniques. Everyone’s already looking forward to the fresh insights May’s meetup will provide.

Hear about when new events are announced first by joining the meetup group here.



There are two kinds of websites, which one are you designing?

by Clementine Brown


Somehow, in the fast-paced world of our many, many Red Badger Slack channels, I managed to accidentally accept an invitation to talk on a panel.

So, at an event hosted by InVision, in a very large building in Bishopsgate, I found myself flanked (stage left to right) by the Head of Design for Disney Labs, the Lead Designer for BBC Worldwide, (little old me), the Head of Design for GDS, and the Creative Director of Lagom Magazine – talking to 400 people about Design+Ethics.

We were asked a lot of questions. Everyone said some interesting things. So over the course of a few blog posts I’m going to outline some of the issues we covered, and some of the debates that could come out of it. And maybe you’ll have some of those conversations around the water cooler.

Hey, I drew that!

I’m going to start with one of the questions that interests me the most, which broadly addressed the issue of websites looking the same – and whether there is a line a designer can cross between design inspiration and design theft.

What happens if you see something you’ve designed re-used, and repurposed, with someone else’s name on it? A difficult question, but luckily one I don’t really have to think about. Because I’m a consultant, I spend a lot of my time either starting design work, or putting my sticky little mitts in the middle of work that’s already on the go. Rarely am I a finisher.

But when you think of this in terms of the web – who really owns the idea of the burger nav? Who owns the 3 column pattern? Who owns the concept of a hero image? If these are used over and over again, is it imitation, or is it in fact some kind of hive-mind-design-pattern? And should we be reframing the question, to ask instead what our ethical responsibility is to *make* things the same? If you think of “the typical website”, what comes to mind? That’s right, a full-width image with an H1 overlaid, then 3 columns of info, then perhaps a portfolio element. It’s not the most inspiring, but one thing you can be sure of – most users will understand it, know where to look for the content they want, and it will work on their phone. And for the most part, that’s all users care about. Simplicity and patterns win.

During the conversation, Ed Fairman (Lead Designer at BBC Worldwide) said that as a community, designers are proud of sharing ideas and sharing work – and, I think, accept with that the probability of someone else using the theory or application in their own work. Now that these systems are established, the challenge is for designers to do their own thing within them.

There are only a certain number of ways to design a product and have it remain effective. At one point Louise Downe asked the audience “who actually enjoys novel things on the internet?” and the result was, well, underwhelming. It turns out that not many people like novel things on the internet – and I expect those who do only do when they’re browsing around, and if they were looking for something specific like, say, the etymology of the word “kangaroo”, I’m reasonably sure they’d be annoyed if a silhouette of Rolf Harris singing ‘Tie Me Kangaroo Down, Sport’ scrolled across the screen (what? I’d say that’s pretty novel). I think that this similarity of design is in fact a reflection of what users expect from a website, and of designers and UXers listening. So, from whence else does this similarity spring forth?

Fork it

Think for a moment about a fork. They’ve been around for quite a while (according to Wikipedia, the earliest one found dates to around 2400 BC). They may have ‘evolved’ by being made of different materials, but, fundamentally, the way we use forks has remained the same, and so the design of a fork has remained the same. Not so with the web. With the proliferation of tablet and mobile devices, and their increased internet connectivity (you could even read this blog while 190 feet underground on the tube, for crying out loud), the demands on a website have changed. We’re using it in different circumstances, in different ways, and to do different things.

At the moment the design trend is big, bold, flat, simple. Many may think this comes at the whim of some group of self-proclaimed superstar-ninja-design-prophets (my mother, to this day, refuses to upgrade above iOS 6 because “why would I want something that looks like it’s been designed by a child?”). But really, this trend is (in part) down to the limitations of our fat little fingers. As web traffic from mobile devices increases every day, so does the knowledge that soon, having a touch-unfriendly site will no longer be an option. Big bold buttons, card-based design (more clickable area!), boxy layouts and easy column hierarchy are all results of touch-based interactions, and so naturally that has had an impact on the aesthetic.

The skeuomorphic approach of the likes of my mother’s favoured iOS 6 has disappeared from new web designs, not just because of the ‘modern look’ or touch screen constraints – it also had an impact on page load time. Heavy graphics and large CSS files meant that users on a slower mobile network had a significantly lower-grade experience compared to those on a newer device who paid for 4G – and that is an unnecessary separation. As designers and developers in this new era of on-the-go browsing, we have a responsibility to make sure the content and information a website provides is available in the same way to anyone who chooses to look at it.

Don’t make me screen

Designers are excited about this, because responsive design allows us to make the most of our screens. In the early (and not that long ago) days of mobile design, the very term ‘mobile design’ meant literally designing for a mobile phone. So you would design two versions of the website – one for computers and one for phones. There was no in-between. As technology has evolved, so has design – it’s simply unsustainable to design a version of the same site for each device we can view it on, especially now that those range from the 1.4 inch Apple Watch to the 88 inch 4K TV.

With HTML5, JavaScript, CSS transitions, standards compliance for browsers, and users increasingly becoming creators, the internet is no longer a place for the few to build and the many to browse. People are more interested than ever in having their own space, creating, uploading and sharing their own content – and with this interest has come the evolution of frameworks. With no design or code knowledge, people can not only create their own website in a matter of hours, they can create a site that other people understand how to use. This is because all the big frameworks (think Bootstrap, Foundation, Squarespace etc) use a standard pattern that has become recognisable on the web. As this shared language of interaction becomes more widespread, so the internet will become more accessible and intuitive.

At one point Elliot Jay Stocks asked and answered “What value does bespoke work bring?” The answer was a deeper understanding of the medium. So we now have to address how we navigate this frameworked, patterned landscape, where we have these design systems that are common, but are also expected to create something novel. As a designer creating a web space for a client’s content to be showcased, I think we have a responsibility to do that in a recognisable, effective and frictionless way. And at the moment that means gradually defining patterns and encouraging behaviours that are relatable and effortless.

Fancy being part of our team? Head over here to check out our Digital Designer job spec!


Characteristics of a Good Product Owner

by Toqir Khalid

On most Agile projects, the success (or failure) of the project can heavily depend on how good (or bad) the product owner is and how committed they are. Good Agile teams that consistently deliver the right thing at the right time, and with quality in mind, will always have a product owner who is focused on the overall bigger picture and able to articulate it into the smaller pieces needed to deliver their vision/goal – i.e. the individual user stories.

Anyone can take on the role of the product owner, but to be a great one is hard! Let’s take a look at some characteristics that great product owners have:

Relating the vision/goal to backlog items

A lot of product owners are good at making the business case for a project to go ahead, but very quickly end up with a backlog that is disjointed and seems disconnected from the original vision/goal they started with. After a couple of sprints, it’s likely that most of the team (perhaps the product owner too) have forgotten what the original goal was and why they’re doing the project. The phrase ‘can’t see the forest for the trees’ comes to mind.

A great product owner will

  • ensure the business case is encompassed in a short vision statement,
  • make this visible to everybody,
  • and, most importantly, always refer back to it when explaining every user story on the backlog.

One of the best ways to convey your vision/goal is with an elevator pitch. The product owner should

  • come up with their elevator pitch,
  • make it clearly visible to the team,
  • and constantly refer back to this to explain why the team is working on the prioritised backlog and the individual user stories.

Write User Stories with the team and allow them to contribute to the business value, i.e. the ‘So that…’

Often product backlogs are created by the product owner in a silo and handed to development teams. A lot of the time, the user stories are a set of task lists (or wish lists) the product owner wants the team to do. It can be difficult for the team to understand why they’re doing what they’re doing, especially if the vision hasn’t been stated.

A great product owner will

  • work with the team to come up with a product backlog that is
    • relevant,
    • linked back to the vision,
    • and inclusive of the business value (i.e. the ‘so that…’).

This means the team fully understands the reasons and value of each user story and how it is intrinsically linked back to the vision.

The product owner must ensure that each user story has

  • a clear ‘why’ that explains the value/goal for the user,
  • a focus on what the user is trying to accomplish, rather than the feature in the system.

A great product owner will try to work with the team to solve the problem/issue for the user. This is likely to let the team solve the problem in ways no individual would have thought of alone.

Respond to changing circumstances

For the duration of a project, business goals, technology changes, new regulations, or a competitor’s latest release will likely mean that the business/project priorities need to be changed in order to adapt. Often product owners and senior management will try to stick to the original goal in the hope that they will be seen as correct. Often that is not the case, and the project ends up delivering something that is out of date before it’s even launched.

Product owners should

  • always be able to adapt to changing circumstances
  • and modify the current product backlog so that it’s in line with what’s needed now.

The modern world moves at a rapid pace, and great product owners are able to ride the wave and change the product backlog accordingly without having to throw away lots of work.

In order to do this, the product owner and the team should only be looking a couple of sprints ahead. The top of the product backlog should contain user stories that are small enough, and detailed enough, for the team to feel comfortable with. It’s very hard to see past a couple of weeks, as things are likely to change, so why plan and detail product backlog items scheduled for several months into the future? Simply add Epics/Themes to the backlog, which can be elaborated upon when the time is right.


Say No without being a jerk

Senior stakeholders trust product owners to make the right decisions for the product that is being built. Therefore, a product owner must be conscious of what they say yes or no to. Great product owners will say no to many requests without being a jerk. The following are some examples a product owner could use:

  • ‘That’s a good idea, but we’d need to pull something out from the current backlog to accommodate this. What would you suggest?’
  • ‘Can you help me understand why this is important for you? Maybe there’s another way to solve your problem, and we might have something similar already on the backlog’
  • ‘Maybe we could include that as part of this story, which is similar?’
  • ‘I’m not sure that aligns with the current vision for the product/project’
  • ‘No, because [reason why you’re not putting the item onto the product backlog]’

It’s very hard to say no to requests, but if something doesn’t align with the vision for the product, the product owner should say no – and provide the reasons for it in a friendly and courteous manner.

Size matters! Small is good (for user stories)

A product owner must be able to

  • take the high-level vision and start to deliver value early on
  • in collaboration with the team, turn the vision into a prioritised list of user stories that are small enough to deliver value to the users immediately.

User stories that are large, vague, and take a long time to deliver are of no value to anybody.

Splitting out user stories into small manageable chunks of work also allows you to prioritise and focus on the high-value items that should be worked on first.

Becoming good at breaking down user stories into smaller chunks has the following benefits

  • Helps prioritise the high and low-value user stories, allowing the team to always work on delivering high-value user stories, and not waste time on low-value items
  • Focuses the team on building what’s right for the product and the overarching vision
  • Reduces risk by allowing you to concentrate on completing the hard to do things first. Generally, items that are of high value tend to be hard to do. By focusing on this first, you reduce risk on the project
  • Early feedback – by completing small chunks of work, users will be able to feedback to the team and allow them to adapt and change the backlog accordingly

Available, Engaged & Committed

In an ideal world, the product owner should be available to the team at all times. However, most product owners will have other work to do too, so it may not be possible to be 100% available. They should try to spend as much time as possible with the team, preferably face to face, to work in a collaborative manner. If they aren’t available to respond to questions, or participate in team discussions, it’s likely that the project will struggle, due to the lack of leadership. If they aren’t available to provide feedback and sign off for completed user stories, the team is likely to lose momentum on the project. If they can’t be made available for considerable amounts of time to support the team, consideration should be given to a proxy product owner who has the authority to make decisions.

The product owner should also be actively engaged in, and committed to, the project’s success. The more time they spend working with the team in a collaborative manner, the better the chances of success on the project. An engaged product owner is a natural leader: they lead the team through their decisions and make it apparent that they are committed to the final product. Great product owners build a very good rapport with the team, forming relationships that allow the team to focus on delivering the right things at the right time.

Great product owners are able to inspire and motivate teams just by being available to the team and working side by side with them to keep the project aligned to the original vision/goal. One of the best things they can do is to sit with the team as much as possible, so that they are available to the team and allow relationships within the team to be built.

Empathy and humility

Product owners have been given permission by senior management to make the necessary decisions to build the right product. With this comes the power to lead teams and drive their vision forward. Great product owners lead through empathy and humility, working side by side with the team to make the correct decisions. They earn the team’s confidence through past successes, continually making the right decisions and correcting poor ones to lead the team forward.

Be Prepared

As any good boy scout will tell you, you should always be prepared. It comes as no surprise that great product owners are always prepared. They always come prepared to the agile ceremonies, ready to make the necessary decisions.

A team will have greater confidence in their product owner if they’re prepared for everything. If a product owner isn’t prepared, it will quickly become apparent, and the team’s trust will be lost very quickly. It’s essential that the product owner is always prepared to lead the team, turns up to meetings with the right material, and takes pride in their work.

In God we Trust

No, I’m not trying to convert anybody to religion, or even to pledge allegiance to the USA (‘In God we trust’ appears on the $20 bill). What is fascinating about religious people is their trust in their faith and in God (or gods, for that matter). Some may even say it’s blind trust.

So what has trust got to do with being a good product owner? Well, quite a lot, actually. Having a product owner who trusts the team to make the right decisions – and, vice versa, a team that trusts them – is great for team harmony and, more importantly, for productivity. A product owner who trusts the team can convey their vision knowing that the team will do their best to achieve that goal. From the team’s perspective, trusting the product owner means knowing they’re doing the right thing and taking the necessary actions to deliver their vision for the product.

Having a team and a product owner that trust each other doesn’t just happen overnight. As the saying goes, trust has to be earned. Good product owners will establish a culture of openness and honesty within the team and encourage everybody to always be open and honest with each other, knowing that decisions and actions are being made for the betterment of the vision everybody is working hard to achieve. Trust also leads to respect. Having both trust and respect within the team and with the product owner helps immensely to deliver the right thing, at the right time, and with quality in mind.



Building Desktop Apps With Electron, Webpack and Redux

by Roman Schejbal

In March 2016, as part of my annual training budget – a perk that every badger gets – I had the opportunity to go all the way to Fluent Conf in sunny San Francisco. You can read the best bits from the conference in Alex's blog.

One of the workshops I attended was about building a basic desktop application with Electron that’d be compiled for every major OS, and I’d like to share the knowledge and takeaways I grasped during the 3-hour session.

We’ll take a high-level look at how Electron works, but we’ll also use ES2015/16, Git, Webpack, Babel and a bit of Redux, so it’s good to have a clue about what those are; that way we can focus on our topic without it getting too overwhelming. We’ll see how we can implement live reloading and get the fast-paced development cycle that most of today’s developers are used to.

What we’ll be building

To highlight some of the things a desktop application excels at compared to a normal web app, we’ll need to build something using the native features. Electron provides many APIs on top of the native functionality, and I’ve decided to build a simple HackerNews app – a watcher that’ll let me know when there is a popular post (>= XXX votes), because I don’t want to miss those, and it’s quite a reasonable size of project for this purpose. Well, at least I thought so when I started writing this blog. ¯\_(ツ)_/¯ You can download the app (for Mac) here.


If we go to the Electron homepage, we’ll find quick-start instructions at the bottom of the page; so start up your terminal and let’s get on with it!

Note: make sure you have the latest node and npm installed to avoid any potential errors

# Clone the Quick Start repository
$ git clone https://github.com/atom/electron-quick-start

# Go into the repository
$ cd electron-quick-start

# Install the dependencies and run
$ npm install && npm start

After running those commands you should have a Hello World app running on your desktop.

Main Process

What you see is genuinely a browser window; in Electron we call these renderers, and they are created by the main process. By main process we mean the main script defined inside package.json. You can think of it as a parent of all its children (the renderers), responsible for creating instances of the BrowserWindow class. This is also the place where you’d work with file system operations, for example.

Renderer Process

The browser window you see is one renderer process. Electron uses Chromium for displaying pages, but it’s topped with some Node.js APIs allowing interactions on a lower level.

Now that we know the entry point, let’s have a look into it. It’s pretty well commented out of the box, so it should give you a good idea of what’s going on in there.


'use strict';

const electron = require('electron');
// Module to control application life.
const app = electron.app;
// Module to create native browser window.
const BrowserWindow = electron.BrowserWindow;

// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected.
let mainWindow;

function createWindow () {
  // Create the browser window.
  mainWindow = new BrowserWindow({width: 800, height: 600});

  // and load the index.html of the app.
  mainWindow.loadURL('file://' + __dirname + '/index.html');

  // Open the DevTools.
  mainWindow.webContents.openDevTools();

  // Emitted when the window is closed.
  mainWindow.on('closed', function() {
    // Dereference the window object, usually you would store windows
    // in an array if your app supports multi windows, this is the time
    // when you should delete the corresponding element.
    mainWindow = null;
  });
}

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
app.on('ready', createWindow);

// Quit when all windows are closed.
app.on('window-all-closed', function () {
  // On OS X it is common for applications and their menu bar
  // to stay active until the user quits explicitly with Cmd + Q
  if (process.platform !== 'darwin') {
    app.quit();
  }
});

app.on('activate', function () {
  // On OS X it's common to re-create a window in the app when the
  // dock icon is clicked and there are no other windows open.
  if (mainWindow === null) {
    createWindow();
  }
});
On the application ready event we call the createWindow function, which instantiates a new BrowserWindow (a renderer process) and loads the URL 'file://' + __dirname + '/index.html', which is our main HTML file; from there on we are in familiar single-page application land. Also, we programmatically open the Developer Tools by calling mainWindow.webContents.openDevTools(); since Cmd+Alt+J does not do anything inside Electron.

Looking into index.html, we can see a usage of the global process variable which, as you know, is not available in a normal browser window. It carries all the environment values, which can come in handy, as we’ll see in our app.


<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Hello World!</h1>
    We are using node <script>document.write(process.versions.node)</script>,
    Chromium <script>document.write(process.versions.chrome)</script>,
    and Electron <script>document.write(process.versions.electron)</script>.
  </body>
</html>

I mentioned that the renderer is topped with some Node.js APIs; the process variable is one of them. The other one worth mentioning is that you can actually use require on the client and load modules as you do in a Node environment, but we’ll go a slightly different direction today.

The setup

We’ll use Webpack with its hot module replacement (HMR for short) for live reloading, so we need to build a little server that’ll host and reload our code while we develop.

In order to do that, we need to install a few node modules:

npm i --save-dev express webpack webpack-dev-middleware webpack-hot-middleware webpack-target-electron-renderer

Then we create a basic webpack configuration:


var webpack = require('webpack');
var webpackTargetElectronRenderer = require('webpack-target-electron-renderer');

var config = {
  entry: [
    // full path so the HMR client knows where to connect (the page itself
    // is served from the file system); reload=true falls back to a full
    // page refresh when a change can't be hot-swapped
    'webpack-hot-middleware/client?path=http://localhost:9000/__webpack_hmr&reload=true',
    './src/index'
  ],
  module: {
    loaders: [{
      test: /\.jsx?$/,
      loaders: ['babel-loader'],
      exclude: /node_modules/
    }, {
      test: /\.css$/,
      loader: 'style!css-loader?modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!postcss-loader'
    }, {
      test: /\.png|\.svg$/,
      loaders: ['file-loader']
    }]
  },
  output: {
    path: __dirname + '/dist',
    publicPath: 'http://localhost:9000/dist/',
    filename: 'bundle.js'
  },
  resolve: {
    extensions: ['', '.js', '.jsx']
  },
  plugins: [
    new webpack.HotModuleReplacementPlugin()
  ]
};

config.target = webpackTargetElectronRenderer(config);

module.exports = config;

Since the Electron index.html page is running from the file system, we need to provide a full path for the webpack-hot-middleware client so it knows where to connect. The same goes for output.publicPath for the webpack-dev-middleware, so that reloading of scripts works properly. The webpack-target-electron-renderer package is needed to set all of Electron’s built-in modules as externals, plus some other bits here and there. You can find out exactly what it’s doing in the npm package itself.

Also, as you can see, we’ll use Babel and CSS Modules, so we actually need to install a few more modules, which you can do with this command:

npm i --save-dev babel-cli babel-loader babel-polyfill babel-preset-es2015 babel-preset-stage-0 babel-preset-react css-loader style-loader postcss-loader

Now that we have our config, let’s write up the server and connect it to Webpack.


import express from 'express';
import webpack from 'webpack';
import webpackDevMiddleware from 'webpack-dev-middleware';
import webpackHotMiddleware from 'webpack-hot-middleware';

import config from './webpack.config.development';

const compiler = webpack(config);
const app = express();

app.use(webpackDevMiddleware(compiler, {
  publicPath: config.output.publicPath,
  stats: {
    colors: true
  }
}));

app.use(webpackHotMiddleware(compiler));

app.listen(9000);



Update the index.html to use the built JavaScript bundle.


<div id="root">
  We are using node <script>document.write(process.versions.node)</script>,
  Chromium <script>document.write(process.versions.chrome)</script>,
  and Electron <script>document.write(process.versions.electron)</script>.
</div>
<script>
  (function() {
    const script = document.createElement('script');
    script.src = process.env.ENV === 'development'
      ? 'http://localhost:9000/dist/bundle.js'
      : './dist/bundle.js';
    document.body.appendChild(script);
  })();
</script>

Then tweak the package.json for a babel configuration and a startup script:


  "main": "index.js",
  "scripts": {
    "start": "ENV=development electron .",
    "server": "babel-node server.js"
  "babel": {
    "presets": [

But now we have to run two scripts to startup the app.

npm start
npm run server

Let’s get rid of that by installing the concurrently module (npm i --save-dev concurrently) and updating the package.json once more, and we are back to one command:
npm run dev


  "scripts": {
    "start": "ENV=development electron .",
    "dev": "concurrently -k 'babel-node server.js' 'npm start'"

Engage dev

Until this point, we were setting up the development environment to get this convenient developer experience. From here we’ll actually start building our app, but I want to apologise for omitting (on purpose) quite a lot of app-specific stuff, just because we want to focus primarily on the Electron APIs and their usage. In any case, you can find the full source code on my GitHub.

Inside the webpack config we’ve set the entry point to ./src/index, so here is the content of it.


import 'babel-polyfill'; // generators
import React from 'react';
import { render as renderReact } from 'react-dom';
import debounce from 'debounce';
import configureStore from './store/configureStore';

const state = JSON.parse(localStorage.getItem('state'));
const store = configureStore(state || {});

let App = require('./components/app').default;
const render = (Component) => {
  renderReact(<Component {...store} />, document.getElementById('root'));
};
render(App);

if (module.hot) {
  module.hot.accept('./components/app', function() {
    let newApp = require('./components/app').default;
    render(newApp);
  });
}

const saveState = debounce(() => {
  localStorage.setItem('state', JSON.stringify(store.getState()));
}, 1000);
store.subscribe(() => {
  if (process.env.ENV === 'development') {
    console.log('state', store.getState());
  }
  saveState();
});
store.dispatch({ type: 'APP_INIT', store });

Since we are using Redux and keeping the app’s global state in one place, we can use this minimal HMR mechanism, inspired by Dan Abramov’s blog.
Basically, we re-render the app every time the App component, or anything imported under it, changes. If anything else changes, webpack refreshes the whole page, as configured by the reload=true query parameter in our webpack config. Additionally, we could write a reducer replacement mechanism so that webpack doesn’t have to refresh the page when we update actions, reducers or sagas. On every state change we save the state into localStorage, so we don’t really care about losing it on a refresh.

Moving on


function fetchTopStoriesApi() {
  return fetch(`https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty`)
          .then(response => response.json())
          .then(stories => stories.map(storyId => ({ id: storyId, loaded: false, loading: false })));
}

I’m using redux-saga, but feel free to use anything else, like redux-thunk, or no Redux at all! The key thing to notice here is the native fetch function used to collect Hacker News stories. In a normal web application I wouldn’t be able to do this (at least on the client side) because of CORS, but since we are in a native-like application, there are no CORS restrictions in Electron.

Once we have the stories inside our state, we can print them out and attach some onClick handlers. In a normal web application we’d just create an anchor tag and give it a href, but if we do this inside an Electron application and then click the link, Electron loads the page inside the app window, giving us no option to go back! What we want instead is to open the story in the user’s default web browser. That’s where Electron’s shell module comes into play.


import electron from 'electron';

handleClick(story) {
  return (e) => {
    e.preventDefault();
    electron.shell.openExternal(story.url);
  };
}

Now let’s skip all the other components and reducers, and have a look at one particular action.


export function notifyAboutStory(story, onClick) {
  const notification = new Notification(`Hacker News ${story.score} 👍💥 votes`, {
    body: story.title
  });
  notification.onclick = onClick;
}

This is where we trigger the native notification to pop up. It follows the Web Notification API, so if you, for example, want to make it silent, you’d just add that as an option inside the options parameter of the constructor.

Communication between processes

Sometimes, depending on what we’re building, we might need to communicate from the renderer process to the main process. It could be anything from showing a native open-file dialog (available only on the main process) to simply quitting the application with app.quit(), as we do here.

Processes may communicate by messaging each other via ipcRenderer or ipcMain (depending on which side you’re implementing), each of which is basically an instance of EventEmitter. We use it like this:

Event emitting:

import electron, { ipcRenderer } from 'electron';
<button className={styles.quitBtn} onClick={() => ipcRenderer.send('quit')}>Quit App</button>

Listening on event and taking an action:

var ipcMain = electron.ipcMain;
ipcMain.on('quit', () => {
  app.quit();
});

Driving this home

From the screenshot above you can see we have the app inside the OSX menubar. We’ll use the menubar package for its ease of use; all we have to do is update our main.js to implement it.


var menubar = require('menubar');

const mb = menubar({
  'width': 500,
  'height': 700,
  'preload-window': true,
  'resizable': false
});

mb.on('ready', function ready () {
  console.log('app is ready');
  // your app code here
});

Building the app

For building the app we have a dedicated webpack production config that overrides parts of the development configuration and saves the build into a dist folder.

Then we use electron-packager to get the actual executable build.

Our final scripts section inside package.json looks like this:


  "scripts": {
    "start": "ENV=development electron .",
    "dev": "concurrently -k 'babel-node server.js' 'npm start'",
    "build": "webpack --config webpack.config.production.js && electron-packager . HackerNews --platform=darwin --arch=all --overwrite"
  }

And that’s it! If you have any questions, use the comments below or get in touch on Twitter.


London React Meetup – March 2016

by Joe Paice

This month we were back at the London Facebook office for another brilliant London React Meetup. We had another full house with plenty of pizza, beer and React geekery.



For this meetup we had two presentations. Red Badger’s very own Stuart Harris gave a talk on Redux Sagas and Facebook’s Martin Konicek gave us a look under the hood of React Native.

Managing and testing side-effects in Redux with Redux Saga

Stuart Harris – Red Badger



Stu gave some great insight into how to incorporate Redux Saga into Redux applications. Redux Saga is a Redux middleware that handles asynchronous actions in your applications. Instead of dispatching thunks which get handled by the redux-thunk middleware, you create Sagas to gather all your side-effect logic in one central place.

Stu took us through what Sagas are, how to use them and how easy they are to test. No more crazy spies or mocks. No more manipulating time or rewiring dependencies. Just pass in actions and test the output effects.
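To make that concrete, here is a minimal, dependency-free sketch of the testing style Stu described (not his actual code). Real redux-saga sagas yield effect objects produced by helpers like call and put; the generator below yields hand-rolled plain objects of the same flavour, and the names fetchUserSaga, CALL and PUT are illustrative. A test can simply step the generator and assert on each yielded effect.

```javascript
// A saga is just a generator that *describes* its side effects.
// CALL/PUT here are hand-rolled descriptions, not redux-saga's API.
function* fetchUserSaga(api, userId) {
  // Describe the API call instead of performing it.
  const user = yield { type: 'CALL', fn: api.fetchUser, args: [userId] };
  // Describe the action to dispatch with the result.
  yield { type: 'PUT', action: { type: 'USER_FETCHED', user } };
}

// Testing is driving the generator by hand, feeding in fake results:
const api = { fetchUser: () => {} };
const gen = fetchUserSaga(api, 42);

const callEffect = gen.next().value;               // the CALL description
const putEffect = gen.next({ name: 'Ada' }).value; // resume with a fake user
// putEffect.action is { type: 'USER_FETCHED', user: { name: 'Ada' } }
```

Because every step yields a plain object, assertions are simple deep-equality checks on data, which is exactly why no spies, mocks or rewired dependencies are needed.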

If you are having trouble with asynchronous actions in your project perhaps Redux Saga can help.

Under the hood of React Native

Martin Konicek – Facebook


Martin works on the React Native team at Facebook. He gave a great tour of React Native with a focus on Android development.

Martin delved into the porting of Facebook’s Ads Manager app from iOS to Android. He demonstrated some of the ways they were able to share code across both platforms, boasting over 85% code reuse in the Ads Manager.

Martin elaborated on the architecture and extensibility of React Native and gave some insight into the React Native open-source process. It is pretty amazing how Facebook handles syncing their internal React Native with the public React Native on GitHub. I look forward to more tools like the Facebook GitHub Bot to help developers manage their open-source projects.

The recordings of the talks will be available shortly; we’ll post a link here soon. Thanks to Stu and Martin for their great talks, to Facebook for hosting, and to the members of the community who joined us! Join the meetup group to receive updates on future events!


Badger Digest – April 2016

by Alex Savin

Image by Chris Beckett, used with Creative Commons license

A fresh issue of the Badger Digest, bringing you the most discussed topics and links from our 100+ private Slack channels over the past month.




One line news

  • Amazon EC2 Container Registry (ECR) is now available in the EU (Ireland) region







Deadline Driven Development: Just stop.

by Viktor Charypar

More often than not, software projects end up with pretty arbitrary deadlines. We’ve all had a version of this conversation:

  • Product manager: So that’s the scope and we want this done by the end of June. Can you do that?
  • Software engineer: Well it’s tight, but sounds doable, I suppose…
  • Product manager: Ok, but I need you to commit to that deadline.
  • Software engineer: Well, I’m not really sure…

Sometimes you don’t even get to have the conversation; you just get a date from the manager. It’s very frustrating, but try to put yourself in their shoes.


Image by Dooder

It must be really baffling to managers why it is so hard to get delivery teams to commit to dates, estimate how long it will take to deliver features and generally predict timelines before starting the work. Surely when you know what you’re building, you can tell how long it will take!

I think it’s because “building” software is fundamentally the wrong word. It suggests the majority of effort is spent on production, when in reality most of it is spent exploring and defining the problem at hand.

This is a blog post for product owners, product managers and, really, the entire organisations they represent, to help them understand what they are asking teams to do and why it doesn’t work.

The difference between building and finding a solution

When building a house, you draw up a plan: decide where to put the walls, what floors and wall decorations to use, what materials to build from, and put all of that on paper. It takes some time, but generally only a few days; then you have a meeting, you agree, and you have a plan. Having the plan, and having built houses before, you can estimate the time it will take pretty confidently. It’s a known problem; it’s just a matter of executing the plan.

Software is not “built” in this sense. You don’t hear mathematicians saying they are building a proof; they are trying to “find” a proof. Writing software is similar. A much better metaphor is searching for certain objects in a house that somebody has lived in.

Imagine us standing in front of an average house. You’ve seen similar houses before. Now I give you a list of things – a toy car, a frying pan, a shoe, a set of keys, etc. – to find in the house, and ask you how long it will take to find them. And I tell you I expect you to find all of them and stay within about 10% of your original estimate.

If you have searched houses before, you can probably give an educated guess – it’s unlikely to take you weeks to find everything, and you could be done as soon as an hour from now. But you never know. And you probably feel quite uncomfortable about me holding you to your wild guess.

Finding things in houses depends on many things. What is the internal structure of the house? Where are different rooms? What’s in them? Is it all neat and organised, or is it a huge mess? Are some of the rooms locked? What if there is a vicious dog in this house that you need to get sausages for first? All of that can have a huge effect.

Now, if I give you my list and walk away, then come back at the end of the allotted time, I will most likely find that even if you did find everything I asked for, some items are not the ones I wanted: I wanted a red toy car, not a blue one, and the keys you found are for the shed when I actually wanted the car keys.

When the deadline gets near and you’re still missing half of the things and tell me, my typical response as a manager will be: do you need more people to help with the search? And to some extent, that may help. But it will take you quite a while to explain to the newcomers what you’re looking for, where you’ve already searched, and where they can be useful. And at some point adding people won’t help at all; you will just trip over each other frantically searching the house and spend more time organising yourselves than looking for things.

So at the end of this, I’ll end up with random things from my list missing and a whole lot of things that I didn’t really want. This sounds completely insane when it comes to searching houses, but these exact things happen in software projects regularly.

How do we fix this?

The list of things, of course, is our backlog – the list of features we want delivered. I am the Product Owner and you are the team responsible for delivering them. So how do we do this better?

First of all, I should be able to tell you which things I need the most – give you a rough order in which to look for them. This should really be a conversation: I know my priorities, but it probably makes sense to look for bathroom-related items together, and you may have other tips, since you’ve searched houses before.

It would also be much better if I stayed around to check the things you found, so that both of us know right away if they are not the ones I wanted. I can also ask for more things if I realise I want them, and change the priority of the ones I’ve already asked for, rather than trying to nail everything up front.

Even better, I can get involved and help you look in all the ways I can, because I likely know some things about this particular house. And if you find a problem – a locked room for instance – I may have the key for it.

And secondly, while up front you had no clue how long it would take to find the things, as you search the house and find them, two things happen:

  1. You get to know the house – its size, the various rooms and what’s in them – and therefore have a better idea of where to look
  2. I get data about how long it takes, from the moment I ask for a thing, for you to find it to my satisfaction (call it cycle time), and how many things you and the people searching with you bring me in a given period of time (call it throughput rate).

The latter gives us a good ability to forecast how long the full list will take to find, using a formula called Little’s Law. This is what we use to forecast dates for projects at Red Badger.
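As a rough illustration (with made-up numbers and an illustrative function name, not data from a real project), the throughput-based forecast boils down to very simple arithmetic in the spirit of Little’s Law:

```javascript
// Hedged sketch: forecast remaining delivery time from observed throughput.
// Little's Law relates items in progress, throughput and cycle time; for a
// forecast, remaining time ≈ remaining items / observed throughput.
function forecastWeeks(remainingItems, completedItems, weeksElapsed) {
  const throughput = completedItems / weeksElapsed; // items per week
  return remainingItems / throughput;               // weeks remaining
}

// e.g. 20 items delivered over the last 8 weeks (2.5 items/week),
// 30 items still on the list:
console.log(forecastWeeks(30, 20, 8)); // 12 weeks remaining
```

The point is that the forecast comes from measured data rather than up-front guesses, and it keeps improving as more items are delivered.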

Product managers really need to change their thinking

I admit that singling out product managers is a bit personal; I really mean the entire organisations they represent to the team. So if you happen to be one, and you’re under pressure from your bosses and can’t do anything about it, I understand. But this can hopefully help start a conversation.

If you think about writing software as searching a house instead of building one, you realise how silly the typical approach to the problem is. This isn’t executing a plan; it’s precisely defining a problem and finding its solution.

So stop trying to plan everything ahead, stop asking teams for estimates they can’t reasonably give, and instead get involved, measure and forecast. And most of all, talk to the delivery team and plan together. You’ll deliver value earlier, more in line with customer needs, and on time.

If you’d like to join a team that does things properly, check out Red Badger’s current vacancies here.


Stepping into the Offline World

by Anna Doubkova

Here’s a question asked by so many that it’s become annoying: what’s the next big thing in web development? React? Angular 2? CycleJS? WebSockets? HTTP2?

Instead of looking into the future, let’s take a few steps back for a bit. The web has developed into an incredible thing since it began – from static, funky-looking pages…


…into a net of apps connecting millions of people in realtime…


Into the Future

…but what are we to expect now?


Well, it’s not really a mystery. As Dion Almaer pointed out in his talk at the Progressive Web Apps conference (slides are here), at some point desktop applications were hugely popular and the web tried to look exactly like them. With smartphones and the popularity of native apps, we can expect a similar shift as web applications catch up with these.

There are a few glaring differences between web and native apps that heavily benefit the latter:

  • You can use them offline or while on a bad internet connection
  • They can send you notifications to remind you to use them (which doesn’t have to be as annoying as it sounds)
  • It’s easy to access them from your home screen without having to open a browser and type in a URL
  • They are much more performant

All these points are pretty important, given how lazy we all are as users of any application, and how much we’ve learnt to expect from the mobile experience. While checking things on our phones, we’re quite often in places with horrible mobile internet, abroad on roaming, or juggling a morning coffee, the office door and a WhatsApp chat all at the same time. So at the end of the day, it’s not just a matter of comfort for smartphone users – it’s a necessity, given how flexibly phones are used.

Service Workers

Although making web apps feel like native apps seems a little like sci-fi, a lot of great features are coming to browsers in the form of Service Workers. A service worker creates a layer between the browser and the server that lives on the client and can be persistent. That gives us huge power – and also great responsibility. We can push notifications, store whole websites and databases locally for the user to access at any point in time, and much, much more; doesn’t that sound at least a little bit exciting?

Total Recall – or the New Caching

Let me focus on the point that will make the biggest difference – using web applications offline. As much as every point in the list above is important, using our apps whenever and wherever means a huge difference in the way we perceive the web.

As I mentioned above, Service Workers create a layer between the rendered page and the server that can handle a lot of things for us, and that we can manipulate using JavaScript. When a user visits our website for the first time, the browser creates a service worker, checks whether we have the page cached, and then goes to the server to fetch it. On the way back, the data gets saved in the cache, and from then on our website can be rendered from the cache. If we lose the internet connection, the user can still access the page.

Or we can always render the page from the service worker’s cache to decrease the load on our servers.
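The flow described above can be sketched as a small cache-first strategy. This is an illustrative sketch rather than anyone’s production code: cacheFirst and its parameters are made-up names, and the cache and fetchFn arguments stand in for the browser’s Cache Storage API and fetch(). In a real service worker you would wire this into self.addEventListener('fetch', e => e.respondWith(...)).

```javascript
// Minimal cache-first sketch of the flow described above. Keeping the
// strategy as a plain function makes it visible (and testable) in isolation.
function cacheFirst(request, cache, fetchFn) {
  return cache.match(request).then((cached) => {
    // Serve from cache when we have it -- this is what keeps the page
    // working offline.
    if (cached) return cached;
    // Otherwise go to the network and store a copy for next time.
    return fetchFn(request).then((response) => {
      cache.put(request, response.clone());
      return response;
    });
  });
}
```

On the first visit the network is hit and the response cached; on later visits (or offline) the cached copy is served directly.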



Dark Side

There is the glaring issue of pages going out of date, but that’s something we have to deal with whenever we use caching, and there are quite a few well-known cache-busting techniques we can implement if we need to.

What I see as the biggest potential danger, however, isn’t really related to synchronisation of data. Imagine that every page you visit decides to save an incredible amount of data onto your phone. Next time you’re doing an iOS update or trying to download some music, you might find there’s no space left, because every image from imgur is forever saved in your Service Worker cache.

Right now, the system relies on programmers being reasonable about the demands they make of their users; this can be messed up unintentionally, and exploited easily. I’m quite curious to see how browsers will address these drawbacks as the tech becomes widespread.

Progressive Enhancement

One of the most famous recent uses of Service Workers was by The Guardian. It’s a simple but really clever idea – imagine being on the train to work like every morning, reading the news… when suddenly you’re on a very bad or non-existent internet connection. Infuriating, right? Well, now you can play an offline crossword instead of hitting reload every three seconds hoping it’ll come back… at some point… maybe.

This doesn’t solve the issue of actually accessing the content I wanted to see in the first place, but it’s a great first step. As this feature is an “extra”, it’s okay that some browsers don’t support it. One day we’ll be able to use it everywhere – and we can walk towards that bright future by using it now, without introducing any regression to our apps.

What’s Coming

As if this wasn’t enough, what’s even more exciting is the news of Background Sync coming to the party later this year. This feature is currently being implemented in Chrome and should allow you to – well, sync data in the background.


Background sync should finally give developers access to all the important features for creating truly offline-first web apps.

2016 is definitely bringing a lot of exciting new features to web development, and our toolkit will get buffed up quite a lot – by streams, Houdini, HTTP2 and many more technologies. It’s bad news for those who already find JavaScript development too complex, but really exciting for those of us who love to learn and explore new possibilities.

Useful Resources

This article – and my excitement about Service Workers – have been hugely boosted by Enhance Conf and the Progressive Web Apps event.

Here are a few useful links I gathered at these events, if you want to learn more about progressive enhancement and what’s ahead of us this year:

We love thinking about, talking about and using new tech at Red Badger; if you’d like to join the team here please get in touch!


O’Reilly Fluent 2016 – impressions and trends

by Alex Savin


Roman, Kadi and I are back from San Francisco and the Fluent conference. A week-long escape into the rains and floods of California, with an extra flavour of JavaScript. This was my second Fluent conf, and it’s time to share some takeaways.

For now, this is the last Fluent event in San Francisco. It was announced that next year’s will be in San Jose, and there is also one more Fluent coming up in Amsterdam this fall.

Worthy talks available to watch

All keynotes from days 1 and 2 are available online. To save you time, here are my favourites:

  1. How NPM split a monolith and lived to tell the tale. Laurie Voss on making large breaking changes without anyone noticing.
  2. Quality, equality, and accessibility – Laura Palmaro on the current state of web accessibility (which has become cutely named ‘a11y’ because it takes too long to type).
  3. Complex responsive SVG animations by Sarah Drasner.

Douglas Crockford from PayPal also did an intro to the Seif project – PayPal’s own reinvention of the internet. It’s at a pretty early stage, and their vision is debatable to say the least, but they do have good intentions.

General trends and feelings

Last year, the React / Node combo was something of a novelty, a curious thing to learn; Netflix was doing introductory-level talks on React and why they chose it. This year, React / Redux / Babel / Node is the default, and Dan Abramov’s name was mentioned in every other talk. Interestingly, Facebook was pretty much absent from this conf – understandable, since they are busy with their own React-dedicated events. Notable names on the ground included Google, Netflix, New Relic and Uber, plus Léonie Watson doing an epic talk on a11y.

Notable conference swag included Heroku socks

A lot of talks were about interesting ideas or wishful thinking rather than production-ready reality. I suppose this is what webdev is about – there is no real platform, more a herd of clients and engines held together somewhat loosely by rules and standards.

Meanwhile, Brendan Eich was pushing for service workers, WebAssembly and class decorators. He was surprisingly quiet on the sensitive topic of his new browser, Brave, which blocks bunches of JavaScript by default.

Two days of talks

The keynotes were followed by five parallel tracks of talks to choose from – sometimes a hard choice. These talks are not freely available online; they will be published and sold by O’Reilly later. I’m going to handpick some quick takeaways from the sessions I attended.

Design process in a nutshell



Accessibility is still very important. There are lots of different disabilities, and addressing such users is more important than addressing old browsers. Different OS platforms have different a11y APIs, but on the web, semantic HTML combined with basic ARIA markup gives a11y tools a huge head start in reading your page correctly.

A few personal takeaways:

  • In addition to the DOM tree, the browser creates a separate a11y element tree based on your markup
  • aria-live="polite" creates a live region on the page, for when an action on one element affects the content of another. Every time the content of a live region changes, the screen reader announces it as soon as it happens. Generally a screen reader is unable to jump around the page, and cannot be in two places at once.
  • role="complementary" marks content that is incidental but related to the main content of the page (similar to the aside HTML tag)
  • Use tabindex="0" to allow keyboard focusing of otherwise unfocusable elements
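A small markup sketch of those takeaways (the element contents, ids and labels are made up for illustration):

```html
<!-- A polite live region: screen readers announce changes to its content
     as soon as they happen, without moving focus. -->
<div aria-live="polite" id="search-status">3 results found</div>

<!-- Incidental-but-related content, similar to the aside element -->
<div role="complementary" aria-label="Related articles">Related article links go here</div>

<!-- tabindex="0" makes an otherwise unfocusable element keyboard-focusable -->
<div tabindex="0" role="button">Load more</div>
```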

Léonie Watson did a highly practical talk on semantic HTML and ARIA roles. Her screen reader for some reason failed to function when connected to the projector, so she did the whole talk from memory, without being able to see any slides.

There was also an announcement of the upcoming Web A11y free course on Udacity.

Progressive webapps

Google did a presentation on their vision of web apps and software delivery to the end user. From the user’s perspective there is no difference between web apps and native apps, as long as they behave in exactly the same way. There’s also a negative trend in users buying new apps from app stores, and in general people use only three native apps 80% of the time. Google wants to address the areas where web apps lag behind native apps: offline availability, home screen shortcuts, launching straight to full screen, and push notifications. Some of that is already solved. The biggest missing piece is probably offline access, which they claim can be resolved with service workers – an extra layer that intercepts all requests and behaves according to network availability.

All these ideas have a common name and a site – Google’s Progressive Web Apps initiative.

NPM lifehacks

As an avid npm user I spend a good chunk of time every day typing npm into the console. There was a talk with various hints and hidden features that might be pretty useful.

  • npm install when you are offline – npm keeps a (huge) cache of modules on your machine and can install the relevant module from the local cache even when you’re offline. To do so, run npm install --cache-min 999999.
  • npm version [major, minor, patch] will bump the version of your npm package and save it in the package.json file. You can even auto-commit this with npm version major -m "bump to version %s"
  • npm prune – get rid of all packages in your local node_modules folder that are not explicitly specified in the package.json
  • npm outdated – quickly check for old and outdated packages in your app
  • https://tonicdev.com/ – Node sandbox that lets you require npm packages in the browser
  • NSP is great for maintaining 3rd party security in your Node app (although it is known to be down from time to time).


There were two somewhat orthogonal talks on aspects of monitoring your production apps.

New Relic did a presentation on NR Browser, a handy way of getting data about how your users actually perceive the app. It gathers everything a normal browser would, including JS errors and page load times for clients based in various locations. Server-side pages usually load 10-20x faster than client-side ones. There are also a number of very strange JS-related errors that users and their (derelict) browsers can experience. You can also detect bad deployments by monitoring spikes in client errors.

Shape Security did a talk on dark traffic. According to them, 92% of global web traffic comes from non-humans, which is a pity, because you still get all the monitoring alerts while none of your human users are actually affected. Herein lies an interesting problem of traffic-origin detection: you likely only care about human users, but they are a minority nowadays. Most bots will also try to pretend to be real humans as best they can. Bots will try to authenticate with real username/password pairs, which are quite openly available on sites like Reddit /r/pwned, or less openly traded for money on the dark net. Since most users use the same password everywhere, bots often succeed in signing in; they then crawl everything they can and move on to the next site.

Falcor by Netflix

Falcor deserves a dedicated blog post. It is Netflix’s answer to Facebook’s GraphQL and Relay – a central store and dynamic data fetching combined with request optimisation. Its implementation, however, is distinctly different from both GraphQL and Relay.

@jhusain did an impressive job explaining Falcor to a full house while live-coding an app and getting things to work. We are using GraphQL in our production apps, so my angle was naturally how Falcor compares to what we already have. Here are a few takeaways:

  • GraphQL is a query language that allows you to request any amount of resources. With Falcor you request either a single resource or a known range. There is no way in Falcor to ask something like “give me all you have”.
  • JSONGraph for data
  • There is no schema in Falcor – as opposed to GraphQL where you must specify types. This works for Netflix since their production app has something like 6 types of resources.
  • Falcor might be more lightweight and easier to get started with
  • Current implementations of Falcor are in Node and Java. There is an internal implementation for iOS which is not released yet.

I shall come back to this topic and write a more comprehensive blog post.
Other relevant swag

Workshops day

We also attended a full day of workshops. The first half of the day was about implementing desktop apps with Electron; then we did a session on writing your first language compiler, and finally real-time drawing on HTML canvas. Electron was probably the most notable of the bunch – in about three hours we ended up implementing two functional desktop apps from scratch. Roman is going to write more on this topic.

Electron provides you with the tools to make native OSX / Windows / Linux desktop apps using the familiar stack of Node and React. But unlike conventional web apps, you also get full access to the filesystem, no CORS restrictions, and the ability to integrate with the system menu. If you’ve always wanted to get your app behind a system tray icon, with Electron you finally can.

Extra activities

Red Badger is hiring and we have an #officedog

The general format of the conf was 30-minute talks followed by 15-minute breaks; after two or three talks there would be a 30-minute break, and in the middle of the day an hour for lunch. Everything started at 9am and finished around 6pm.

Breaks were filled with extra activities you could choose to attend. The main attraction and largest source of swag was the exhibition hall, filled with companies presenting their products and giving out freebies. During the last two days the organisers also moved coffee and snacks to the middle of the exhibition hall.

I should probably mention – we were fed pretty well, considering the scale of the operation.

In the main foyer they also tried to get introverted software devs to talk to each other by having topical meetups, speed networking and lightning talks.

Being primarily a book publisher, O’Reilly brought in a bunch of authors to sign and give away free books. I got a couple too.


This trip was possible thanks to Red Badger’s training budget programme. Roman, Kadi and I had an amazing time, despite the daily dose of rain. This time I also recorded daily video log episodes covering our journey outside the conference. Yes, we had some fun.


Team Badgers

I enjoy Fluent mostly because of the variety of topics covered. Writing compilers, programming GPUs, WebVR, fending off evil bots, deploying clusters of containers, debugging performance – there’s something for everyone. So thanks, O’Reilly, for making this a reality once again!

If you like the idea of an annual training budget, trips to conferences like this and a big focus on learning, Red Badger could be the place for you. Check out our current vacancies here.


Knowing the Elephant: Mobbing Way

by Leila Firouz

Once upon a time, six blind men went to find out what an elephant is.

The first man touched the legs of the elephant and thought, an elephant is like a big pillar or a tree with strong skin. The second man touched the tail and came to the conclusion that an elephant is like a rope with a brush at the end that can move right and left very easily in the air. Well, I won't bore you with what the rest thought, as I'm sure you can sort of imagine.


What brought this story back from my old childhood memories was a one-day course I did on 'Collaborative Exploratory and Unit Testing', an introduction to 'Mob Programming' with a focus on collaboration between developers and testers. In this article I'll try to explain why, after this eight-hour course, as an experienced QA I felt I had been one of those blind men, and the projects I've worked on some sort of elephants or dinosaurs! I could also sense the delicious scent of a more modern Agile.

What is Mob Programming?

'Mob Programming' is an agile approach to software development where the whole team works in one room with one keyboard. Just to be clear, the whole team means all the stakeholders: devs, testers, designers, product owners and so on. There are three roles in Mob Programming:

Navigators: Everyone in the team who guides what should go into the keyboard. The brains of the team.

Designator: The decision maker among the Navigators. The final voice deciding which of all the ideas goes into the keyboard.

Driver: The person behind the keyboard. The muscles of the rest of the team. The Driver doesn't give feedback while behind the keyboard and only does as told by the Designator.

There is a rota, and every few minutes the roles switch (common rotation intervals are 5, 10 or 15 minutes). In the training we sat in a circle, and each time the timer beeped we shifted one seat to the right to switch roles.
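The rotation is simple enough to sketch in code. The toy JavaScript snippet below is my own illustration, not from the course: the names and the convention that seat 0 drives and seat 1 designates are assumptions made for the example. It computes who holds which role after each beep of the timer.

```javascript
// Toy sketch of the rotation described above: the team sits in a circle
// and shifts one seat to the right at every timer beep. By convention
// here, seat 0 is the Driver, seat 1 the Designator, the rest Navigators.
function rolesAtBeep(team, beep) {
  const n = team.length;
  // Rotate the seating by `beep` positions.
  const rotated = team.map((_, i) => team[(i + beep) % n]);
  return {
    driver: rotated[0],
    designator: rotated[1],
    navigators: rotated.slice(2),
  };
}

// With a 10-minute interval, beep k happens at minute 10 * k.
const team = ['Ann', 'Ben', 'Cat', 'Dan'];
console.log(rolesAtBeep(team, 0)); // Ann drives, Ben designates
console.log(rolesAtBeep(team, 1)); // Ben drives, Cat designates
```

After `team.length` beeps everyone has held every role once, which is the point of the rota: no one stays the Driver (or the Designator) for long.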

The mobbing technique applies to all aspects of the software development process, including requirements and testing. A product owner, for example, won't write code, but the team might decide to work on refining stories first. It can be thought of as an outgrowth of 'Pair Programming'.

We practised this method in the class for a few hours in a group of 13 developers and testers. We were given an application, which we explored, documenting our findings. For the second half of the course we started working on one of the bugs we had found: we investigated the cause, made the code testable and added a few unit tests for it.

In all honesty, the first couple of hours were quite confusing. I thought it was interesting, but in the same way a mad hatter's tea party is! Everyone was throwing around different ideas, we didn't know the application or the code, we had different skills, and there were quite a few misunderstandings and disagreements. "How can this approach improve performance?" I asked myself. But as time passed we started understanding each other's languages, became able to prioritise ideas and grew into more of a united force. The clouds of doubt started to fade and some sunshine began to show.

Why Did Mob Programming Feel Like Some Sunshine?

The whole team working together improves the average team performance. Every team member is good at some things and not so good at others, and everyone has bad days and excellent days. Mobbing has the potential to pick the best of the team.

There is no hand-over stage. It means "I can work with you on this" as opposed to "I'm handing this over to you". Teams can complete more work faster, and fewer issues are generated after coding.

More thinking is put into the product before an idea forms a piece of code.

It builds shared knowledge and leads the team to find and form a ubiquitous language. It also means less dependency on key skills and knowledge.

Tester and Developer Collaboration

As a tester, what impressed me most was how this approach promotes transparency, creates empathy between devs and QAs and has the potential to improve the quality of the product in less time. There are differences between the mindsets and languages of devs and QAs. The most obvious example is indeed the word 'Testing'.

In a developer language, ‘Test’ is usually done to:

  • Check a feature works exactly as the spec says
  • Create feedback from code
  • Prevent regressions by writing unit tests

To a QA, ‘Test’ means:

  • Explore a feature with some guidance
  • See how the feature works with the rest of the product
  • Look for regressions

Mobbing gives these two worlds the opportunity to meet in a new way.

A tester's feedback while the code is being written can prevent a defect from appearing in the first place. It can also help the developer write (unit-)testable code at an early stage, before going too far into development.

On the other hand, knowing which unit tests exist saves the tester a lot of time when scripting and running tests. They can focus more on exploratory and integration testing, as well as on finding out which parts of the code have been touched and need regression testing.

It also gives everyone a better understanding of the application and the state of it.

Yes to Mobbing?

Mobbing is probably not a solution for every project and every team, and maybe not the best approach from start to delivery, but I think it's a method worth trying if a team is struggling with delivery or aims for higher performance. It helps the blind men learn sooner, and better, what an elephant is!

For more information about Mob Programming have a look here:

A Success Story of Mob Programming

Mob Programming Guide Book


Red Badger is currently looking for a manual tester to join our team. If you are interested, take a look at the job role here.