26 Sep 2014

Badger Academy Week 5

by Eric Juta

Over the last week, we worked on deployment and on developing against the Flux/Om-like architecture that Viktor scaffolded before our eyes. This week we were joined by the digital poet Joe Stanton to help us work through our woes.

It seems that there is an endless stream of problems on any project; we truly are being moulded into “True Grit” characters throughout this academy. To keep development moving, we decided to tackle the troubling issue of front-end testing that has been lingering for the past few weeks, ever since we hit that milestone.

These past two weeks, Alex joined us to scaffold out the frontend modular structure with proper SCSS styling! This week, the majority of Joe’s work went into temporarily stubbing out the real API calls for testing sessions on the smart TV.

Styling separation

The solution that has proven itself over the last few weeks involves compiling the SCSS files in a Gulp build task.
Sadly, with no Ruby installed in the Node docker container, we were unable to use the Ruby Sass compiler!
#INTERNALSASSVSLESSWARS

Adhering to the Assets, Module, Vendor and Partial folder structure, we learned a straightforward way to scaffold styling.
Assets – Personal static files that aren’t of stylesheet file type. (Image files and fonts!)
Modules – Separate files for particular views/components within your application
Vendor – Generic files from open source frameworks or non-personal stylesheets
Partial – Potentially default mixins if you would like to label them in this way!

SVGs

SVGs are scalable vector graphics. Since we’re using a responsive CSS grid framework (we extracted the grid system out of Foundation and imported it; lovely, if you ask us), we definitely require scalable images, otherwise our application will look ugly!
#PixelPerfect?
Thanks to Sari, Badger-Time has its gorgeous logo exported to SVG file format! Our proud work now renders perfectly on any device, at any resolution our audience pleases.

Continuous integration/deployment

We can frankly say that API development, including the RSpec TDD approach, is going smoothly.

This week we finally scaffolded the last part of the continuous integration + delivery process. That last part involves Amazon S3 + CloudFront deployment for the frontend and a Digital Ocean droplet for the backend.

Both of these were relatively straightforward compared to other obstacles we had come across! Thank you, open source contributions! (Namely the Gulp S3 plugin and Docker within Digital Ocean droplets)

Tiago created his own Node.js web hook module: on a POST request (with a required token, of course) sent after the CircleCI tests had passed, it would pull down the required Docker image with all the binary dependencies pre-installed and swap in the freshly cloned, production-ready version of the application.
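
A minimal sketch of how such a hook might look, with the handler factored out so it can be exercised without binding a port. The header name, token handling and deploy callback are assumptions for illustration, not Tiago’s actual implementation:

```javascript
// Toy sketch of a deploy webhook: a handler factory. On a POST with the
// correct token it triggers a deploy callback (which in the real module
// would pull the Docker image and swap in the freshly cloned application).
function createHookHandler(secretToken, deploy) {
  return function (req, res) {
    if (req.method !== 'POST') {
      res.statusCode = 405;
      return res.end('method not allowed');
    }
    if (req.headers['x-deploy-token'] !== secretToken) {
      res.statusCode = 403;
      return res.end('bad token');
    }
    deploy(); // e.g. shell out to `docker pull` and restart the container
    res.statusCode = 200;
    res.end('deploying');
  };
}

// Wiring it into a plain Node server would look like:
// require('http')
//   .createServer(createHookHandler(process.env.DEPLOY_TOKEN, runDeploy))
//   .listen(4000);
```

Keeping the handler separate from the server makes the token check trivially testable with fake request/response objects.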

For the frontend, deployment is done through running a basic Gulp ‘deploy’ task.
It’s also good to note that environment variables can be set in CircleCI and read from that Gulp deploy task!
12Factorness!
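
Reading the deploy credentials from environment variables keeps secrets out of the repo. A sketch of how the deploy task might pick up its S3 config (the variable names and defaults are illustrative, not our actual CircleCI settings):

```javascript
// Build the S3 publisher config from environment variables set in CircleCI,
// failing loudly if a credential is missing rather than deploying half-configured.
function s3ConfigFromEnv(env) {
  var required = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'S3_BUCKET'];
  required.forEach(function (name) {
    if (!env[name]) throw new Error('Missing environment variable: ' + name);
  });
  return {
    key: env.AWS_ACCESS_KEY_ID,
    secret: env.AWS_SECRET_ACCESS_KEY,
    bucket: env.S3_BUCKET,
    region: env.AWS_REGION || 'us-east-1' // sensible default when unset
  };
}

// In the gulpfile, the 'deploy' task would hand this straight to the S3 plugin:
// gulp.task('deploy', function () {
//   return gulp.src('./dist/**').pipe(s3(s3ConfigFromEnv(process.env)));
// });
```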

Tiago’s open source contribution: https://github.com/tabazevedo/catch

As he likes to call it, CircleCI in less than ~50 lines!

Fighting the frontend tests

Our attempt at scaffolding out Stubby programmatically had failed: LXC container network address mapping issues.
Joe’s fierce battle ended with redirecting the API endpoint to mocked-out API routes responding with JSON datasets, only for the duration of the Gulp ‘test’ task.
This required restarting the Node.js server in between each Cucumber.js test; absolutely brilliant!
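
In spirit, those mocked-out routes are just a lookup from request path to a canned JSON dataset. A toy version of the idea (the routes and fixture data are invented for illustration):

```javascript
// Map API routes to static JSON fixtures; the test server responds from this
// table instead of hitting the real Rails backend.
var fixtures = {
  '/api/projects': [{ id: 1, name: 'Badger Time' }],
  '/api/people':   [{ id: 1, name: 'Eric' }, { id: 2, name: 'Tiago' }]
};

function mockResponse(path) {
  if (fixtures.hasOwnProperty(path)) {
    return { status: 200, body: JSON.stringify(fixtures[path]) };
  }
  return { status: 404, body: JSON.stringify({ error: 'not found' }) };
}

// A plain Node server built on this table can be started before the Gulp
// 'test' task and torn down afterwards:
// require('http').createServer(function (req, res) {
//   var r = mockResponse(req.url);
//   res.writeHead(r.status, { 'Content-Type': 'application/json' });
//   res.end(r.body);
// }).listen(3001);
```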

At one point during the debugging process, Joe was unable to tell whether the correct ‘Test API’ was being requested. He lazily evaluated the real API to force this confirmation. …I know, right?

In the end Joe fathomed the situation, but given the restrictions and our obvious refusal to recreate the backend API logic in Node.js just for the purpose of frontend testing, the result was static datasets. The situation remains unstirred.

A potential final option is to reroute the testing API endpoint to a deployed “Staging backend API” then deploy the backend API to production in succession.
This keeps the logic intact but at the same time separates such data pools.
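
That rerouting boils down to picking the API base URL per environment; a minimal sketch (the hostnames are made up):

```javascript
// Choose the API endpoint by environment: tests talk to a deployed staging
// backend, everything else talks to production.
function apiBase(env) {
  switch (env) {
    case 'test':
    case 'staging':
      return 'https://staging-api.example.com';
    default:
      return 'https://api.example.com';
  }
}

// Every frontend call is then built as, e.g.:
// apiBase(process.env.NODE_ENV) + '/projects'
```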

For next week, Badger-Time faces OAuth, serialising and sync diffing algorithms!
Lesser woes, we’d agree; honestly.

22 Sep 2014

Badger Academy Week 4

by Sarah Knight

It’s week 4 of Badger Academy, but for me personally as the 3rd intern to join, it’s the first week. Not only am I a few weeks behind on the Badger Time project, but fresh from the 3 month Web Development Immersive course at General Assembly, I’m also several years behind Tiago and Eric in terms of general programming experience. So my first few days were spent in a state of confusion and growing panic as I tried to read up on an ever-growing list of techniques and technologies that were completely new to me.

Vagrant, Docker, React.js, Gulp, Gherkin, Phantom.js, Browserify, Nginx, Selenium, and CircleCI were a few of the terms I was busy googling. I now have a rough grasp of what most of these are and how they fit together, but it might be a while before I can blog about them with any confidence! Watch this space …

By Wednesday, though, I was able to get stuck in and start writing some proper code, which felt good. I made a start on some tests for the API. We were thinking about using Cucumber for these, but in the end it was agreed that plain RSpec made more sense for the technical back end, with the more readable, English-like Cucumber tests reserved for the front end and its potentially less techie readers.

Viktor was our senior developer this week, and spent time helping me write some tests for the JSON responses. He also helped refactor some of the React.js code on the front end while also giving me an overview of how it all fits together. This was really helpful, as I think I’m now beginning to understand React on a conceptual level … we’ll see how it goes when it comes to actually working with it though!

Github Flow

With 3 full-time team members plus 3 part-time senior devs on this project, having a standardised system for version control is important. Most of the projects I’ve worked on previously have been solo efforts, so it was crucial for me to understand the system in place and make sure I didn’t mess up. Luckily we’re using the Github Flow workflow, which is simple to pick up, and facilitates continuous deployment.

The workflow:

1) Create a new branch locally

Create a new descriptive branch locally from master and commit regularly. Naming things descriptively is always tricky, but done right, it allows everyone to see who’s working on what.

2) Add commits

Committing regularly allows you and others to keep track of your progress on the branch. Each commit is like a snapshot of the branch at a particular time, so you don’t want to leave it too long between commits or too much will have changed. With regular commits of small chunks of code, if you introduce bugs or change your mind about something, you can rollback changes easily. (It’s a bit like time travel!).

3) Open a Pull Request

Once you are ready to merge to master, or want some feedback, open a pull request. Pull requests allow others to review your code, and everyone can add comments. Because Pull Requests accept Markdown syntax, you can even create tickboxes of things to be ticked off (top tip courtesy of Alex!).

4) Discuss and review code

Once a pull request has been opened, other people can see what you’ve been working on, and enter into discussion on Github about it.

5) Merge and deploy

Once you’re happy with the code, and it passes all the tests, you can merge to master. We have CircleCI set up to automatically test code once a Pull Request has been opened, so you can easily see whether the code is passing before you merge.

The golden rule of Github Flow is: Anything on the master branch is deployable.

Any code on the master branch has been tested and is totally stable. You can create new branches from it with confidence, and deploy from it. We don’t yet have any kind of production server set up, so there is currently no deployment. However, the whole point of Github Flow is continuous deployment, so once that’s up and running, this step will be implemented regularly.

Next Week

To ensure that we learn as much as possible about all aspects of development, we’re taking it in turns to work on the different parts of the project. So just as I was starting to get to grips with the API this week, next week I’ll be doing something completely different and taking a turn on the front end. However, I’m looking forward to exploring React.js and seeing how the testing differs.

17 Sep 2014

Badger Academy Week 3

by Tiago Azevedo

The third week of Badger Academy has passed, and with it ends the first cycle of seniors helping interns. This Thursday we were paired with Joe Stanton. We ran into a lot of problems during the week, which left us somewhat frustrated but also increased our eagerness to learn. Most of our environment setup for development has been done by this point. We managed to decrease our docker build times from ~20 minutes to 3–5 minutes depending on how good a day the server was having, but overall it was consistent and fast.

Our focus this week was on testing standards. We were aware of the best practices for testing our software, but implementing them within our projects was what took the bulk of our time.

Testing the API

Testing the Rails backend was fairly straightforward. When we scaffolded the controllers and models for our project, a set of pre-generated RSpec tests was provided for us. Most of them were fairly unoptimised and some were not suited for an API, but rather a project written completely in Rails.

We kept a few things in mind while writing these tests:

  • Keep tests of one model/controller isolated from other models and controllers
  • Avoid hitting the database where we could
  • Avoid testing things which are covered by higher level tests

Expanding on that third point, Joe helped explain what layers to test and what layers we could skip. At the core of our app we have model tests, which would be independent of the database and would test things like logic and validation. These should eventually make up the majority of our tests, but for the meantime we only have a few validation checks. The ‘medium-level’ tests were things like routing and request tests.

We ended up skipping the routing tests since once we got to the higher-level integration tests, we could infer that if those passed then all our routing was correct. We kept request tests at a minimum, only checking that the API returned the correct status codes, so we could have a sense of consistency across our app, and those weren’t necessarily implied by the integration tests.

Following that, we removed the unnecessary stuff and, through the use of FactoryGirl, we converted our logic and validation tests to avoid hitting the database, as it would cause a significant slowdown once our project became larger. Some of our higher level controller tests did hit the database, however this is unavoidable in most cases and attempting to bypass this would have been more trouble than it was worth.

Testing the Frontend

Our frontend testing was much more difficult to set up. We’re currently running a stack of PhantomJS, CucumberJS and Selenium. CucumberJS is a tool that allows us to write tests in a human-readable format, so that anyone without an understanding of programming can see what’s happening and even write their own tests if they want to. This is the basic premise of BDD (behaviour-driven development) – we write tests for the functionality of the software beforehand, from the standpoint of the end user and in a language that they can understand. This differs from the TDD (test-driven) approach used in the API, where tests are written purely in Ruby and not necessarily from a user’s point of view.

 

Tests are written in Gherkin (the CucumberJS language – yes, we are aware of all the slightly strange vegetable references), in scenarios so plainly worded you can guess what each one tests for. Behind the scenes, the software captures and identifies each line and performs tests based on the parameters specified (e.g. what page you’re on and what action you’re performing)
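
As an illustration of that capturing (a toy sketch, not CucumberJS’s actual internals – the step patterns and handlers are invented), each scenario line is matched against registered patterns and its parameters are passed to a handler:

```javascript
// Toy version of CucumberJS step matching: register step patterns as regexes,
// then dispatch each Gherkin line to its handler with the captured parameters.
var steps = [];

function defineStep(pattern, fn) {
  steps.push({ pattern: pattern, fn: fn });
}

function runStep(line) {
  for (var i = 0; i < steps.length; i++) {
    var m = line.match(steps[i].pattern);
    if (m) return steps[i].fn.apply(null, m.slice(1));
  }
  throw new Error('Undefined step: ' + line);
}

// Step definitions like the ones we write for the frontend tests:
defineStep(/^I am on the "([^"]*)" page$/, function (page) {
  return 'visiting ' + page;
});
defineStep(/^I should see (\d+) projects$/, function (count) {
  return 'expecting ' + Number(count) + ' projects';
});
```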

One issue we struggled with was how to isolate these tests from the API. Since the pages display content from the backend, we needed a way to test using fake data. We went through a variety of methods during the week. Firstly, we thought of simply stubbing out the calls to the API using Sinon, a popular JavaScript mocking and stubbing library. While this would have been the most robust option, we had big difficulties using it with Browserify – a tool we use to bundle the entire application into one file – so we decided on creating a fake API server using Stubby, which runs only for the duration of the tests and can serve multiple datasets to the frontend, letting us still test a variety of cases.

CircleCI

Now that we have the testing frameworks down, we expect to make fast progress from here on out. We ended up learning and using CircleCI, which automatically runs tests on any pushes or pull requests made to the github repos. This makes sure we only merge stuff into master when everything is working as planned, and that all tests pass on a fresh system before deployment.

Despite all the new technology we have introduced, everything is going more or less smoothly, and we couldn’t ask for a better foundation to build this project on. Not only are we rethinking the way the tech badgers go about the development process, we are also streamlining the entire production process with lower build times, safe and consistent deployment, and a highly scalable and portable infrastructure.

29 Aug 2014

Badger Academy week 2!

by Eric Juta

This week in Badger Academy, we were joined by Alexander Savin, a senior engineer of many talents. Under his guidance, we assessed the current state of our DevOps, including the decision to use docker.
Finalising last week’s architecture choices, we promptly laid down the foundations for the road ahead.
#9containerDevOpsRefactoringWoes
There really was a lot of googling, not much stackoverflow!
Having decided on a one-command workflow for any compatible unix system, we proceeded to create the mammoth script.

Bash Shell Script

Iteratively tweaking it (Agile!) allowed us, in the end, to do the following:

  • Git clone Badger-Time
  • Use Vagrant to up the initial CoreOS VM
  • Run the shell script from within the ssh instance to build the docker containers

(Current container stack, each with their respective data containers: Rails API, Redis, Postgres, Node, Nginx)

The script itself would then:

  • Pull preinstalled images down
  • Add our config files into them; specifically our Nginx and SSL certificates
  • Mount our Badger-Time code into their respective destinations
  • Install node and rails dependencies, then create the databases and migrate them
  • Run all the linked containers with persisted daemons and their services in a hierarchical order

Voila!

Badger-Time code up and running on any potential unix system in less than 15 minutes, without any further interaction.
It sounds like a lot, but it’s made possible by the high internet speed within the office!

Advantages

The advantages we discovered in this approach, compared to the previous Badger-Time Vagrant + Ansible setup, were great in so, so many ways!

First of all, an all-in-one up command: we have one extra intern joining us in a week’s time, and getting her laptop up to the current version requires little to no effort.
(Yes, we’ve already tested it on her preview day at the office)

  • No makefile building? Yes please!
  • Faster tests
  • Reduced memory footprints
  • Same environment from development to our build server to our deployment server
  • Isolate local dev dotfiles and configs from the application
  • 12factor application coherence!

Disadvantages

There are disadvantages, as you would imagine with any new technology:

  • Initial volume mount mapping configuration
  • Networking association is difficult to comprehend.
    (Dynamic host files generated by linked containers, exposed ports, vagrant)
  • Developer productivity affected by added configuration complexity
  • Double layer virtualisation! Linux native support only
  • The lack of a structured DevOps docker approach documented online leaves a lot of decisions to the creator.

Admittedly, as we’re still continuously learning, we will grow into the software architect’s hat over time.
Luckily we have constant surveillance and access to the senior engineers over Slack! #badgerbants

Scaffolding the frontend

With the majority of the DevOps for the developer environment out of the way, we discussed with Alex potential ways to scaffold the frontend tests.
This took a lot of learning Gulp with him, to further customise our frontend workflow.

Gulpfile.ls

Our gulpfile was chosen to do the following tasks:

  • Pull down npm and bower dependencies
  • Build LiveScript React.js components, Index.Jade, Less files, Semantic Grid system
  • Browserify, Concatenate, Uglify
  • Build the LiveScript tests for compatibility with CucumberJS
  • Start the Phantomjs service from within the docker container before running the CucumberJS tests
  • Watch for source code file changes and compile

Letting Gulp do such things allows us to commit and push less code to Github, plus gives us the added developer workflow productivity!
Less context switching; the above are just abstractions!

Food for thought

One problem that had to be overcome was choosing whether to run the frontend tests from within the container or outside it.
We have to keep in mind that the tests will inevitably be run within a build server environment before deployment.
Because Nginx serves the static files from a container, this poses the question:
should we reroute the webdriver to examine from the outside in for tests?

We were a bit stumped at first, so can someone please document a best practices guide for Docker networking + Docker frontend testing!
It may be the case that someone at Red Badger will have to!

Next week tasks!

Next week, Tiago and I will ponder what kinds of tests should be written.

BDD is a major cornerstone of the quality of our projects; we’ll have to assess its implementation with a split frontend and backend!
Let alone learn API design!

27 Aug 2014

Red Badger does Podcasting!

by Roisi Proven


Always keen to branch out into all aspects of tech, Alex Savin, Robbie McCorkell and myself have been badgering away (ho ho!) at creating our very own podcast.

Tentatively titled “Red Badger Don’t Care…”, the podcast will cover a wide range of tech and science topics, and will regularly feature guests from both inside and outside of Red Badger.

We are currently preparing to record our third episode, and although we have not yet released on iTunes, our first two episodes are available for download now and we welcome feedback! 

Episode 1: Red Badger Don’t Care…About Mr Sir Benedict Cumberwumble

Listen here: http://radiobadger.com/episodes/Badgercast-episode-01.mp3

Also known as “Three people figuring out how to Podcast”, we navigate the strange new world of talking about interesting things while microphones are pointing at you. We discuss Dark Matter, Facebook React, GitHub, and Roisi’s confusing feelings about Patrick Stewart. 

Episode 2: Red Badger Don’t Care…About Skydiving Sheep

Listen here: http://radiobadger.com/episodes/Badgercast-episode-02.mp3

The difficult second album of our podcasting lives! We discuss hyperlapse, renewable energy, Babylon 5, outdated laws and Piracy (the internet kind, not the boat kind). We also introduce a new game, 3 degrees of Law and Order!

 

You can learn more about each episode, and get updates about future episodes, by following us over at Radio Badger.

 

We have some great interviews planned with Joe Dollar-Smirnov and Cain Ullah, as well as some exciting plans to talk to people outside of Red Badger. We’re still learning and growing and we look forward to getting our work out in the world!

22 Aug 2014

The first week of Badger Academy

by Tiago Azevedo

Last Wednesday marked the beginning of Red Badger’s intern training program – Badger Academy. As interns, Eric and I will be the ‘prototypes’ with a dynamic syllabus covering the fundamentals.

Our guidance consists of a full day a week with a senior developer, and in this case we’ll be re-engineering an internal app that was started two years prior but left unfinished.

Badger Time Reborn

Badger Time, wittily named by Cain, was to be a resource management platform. At a basic level, it would enable a business owner to plan potential projects and analyse ongoing projects, calculating financial figures based on how many hours people were assigned to projects and how many hours they’d fulfilled (using data from the FreeAgent platform).


We collectively decided that the best course of action was to build it up again from scratch, using the old codebase as a reference point, as that would take less time and effort than fixing what was already there.

As far as any intern is concerned, writing any software from scratch is a daunting task. We took the briefing with a positive attitude, and revelled in the prospect of being able to learn every aspect of building a working, maintainable piece of software.

Planning

The first stage of any structured task is planning – what do we want, how do we want it and what problems will we face? Thankfully, we had the ability to recycle the designs from the old project which simplified a lot of the what and the how.

User Stories

The bulk of last Thursday was spent on these. Viktor, the senior assigned to us for the day, took us through building a backlog of user stories for each feature that would be in the minimum viable product. Building these user stories helped us to understand the features from a user’s point of view, and simplified the process of figuring out potential problems. We used Trello to organise them, as it allowed us to sort the backlog by priority and establish a pipeline of things to get done.

Building a data model

As we’d be handling large amounts of data coming from different sources, it was imperative that we had a well-built data model. There were two main factors to keep in mind:

  • Avoid repeating the same data in various places
  • Avoid storing anything in the database that can be calculated on demand
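
The second point in practice: rather than persisting a figure like a project’s planned cost, derive it from the assignments whenever it’s needed, so it can never drift out of sync with the underlying data. A simplified sketch (the field names are illustrative; the real model lives in Rails):

```javascript
// Derive financials on demand instead of storing them: the planned cost is
// just a fold over the assignments, recalculated from source data every time.
function plannedCost(assignments) {
  return assignments.reduce(function (total, a) {
    return total + a.hours * a.hourlyRate;
  }, 0);
}

var assignments = [
  { person: 'Eric',  hours: 40, hourlyRate: 50 },
  { person: 'Tiago', hours: 20, hourlyRate: 50 }
];
// plannedCost(assignments) === 3000
```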

Technology

Docker – the foundation of our project

We made a few ambitious decisions regarding tech choices in Badger Time. We’d be using Docker to handle our development and production environments. Both the seniors and us were super interested in this new technology, as it would solve a lot of current problems. Right now, most projects are handled using Vagrant for a virtual machine and Ansible for provisioning. This incurs a performance hit, as everything is run on a virtual machine, and it can also take upwards of 30 minutes to get up and running on a new machine. Docker eliminates this by running everything in containers, which are like ‘layers’ on top of the current host machine; containers can be built (similar to provisioning) once, pushed to a remote server, and then downloaded and run on any machine capable of using docker.

Because docker containers are purely layers on top of the existing system, they are much smaller and more portable than a full-blown virtual machine. It also means we eliminate any discrepancies between development and production, allowing for a much smoother deployment process.

Rails – the trustworthy powerhouse

We’ll be using Ruby on Rails to write a RESTful API which will handle any requests and serve data from our database, as well as making frequent syncs of data from FreeAgent. Ruby on Rails is solid, easy to read and write, and provides a large repository of ‘gems’ which allow us to extend the functionality of our app easily. It was an easy, safe choice, backed up by the fact that the old Badger Time was written completely in Rails and we could recycle some of the code, as most of the business intelligence was still up-to-date.

React.js and LiveScript – lightning fast rendering, with clean and structured code

Rather than making an isomorphic app, we took the same design principles as Haller and divided the backend and frontend of the app. This enables our app to scale much more easily – we can serve the frontend as a static site on a CDN like Amazon S3 (fast!) and then scale the backend separately. Using React and LiveScript, we can build a purely functional frontend – ditching the traditional MVC application model in favour of having our UI broken up into simple components which contain logic within themselves (and are ridiculously fast because of how React works).
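
The core idea – UI as a plain function of data, composed from small components – can be sketched in a few lines of toy JavaScript. This is not React’s actual API; the component shapes are purely illustrative:

```javascript
// Toy illustration of component thinking: each 'component' is a pure function
// from props to a description of the UI, so rendering is just calling functions.
function projectRow(props) {
  return { tag: 'li', text: props.name + ': ' + props.hours + 'h' };
}

function projectList(props) {
  return { tag: 'ul', children: props.projects.map(projectRow) };
}

var ui = projectList({ projects: [
  { name: 'Badger Time', hours: 120 },
  { name: 'Haller', hours: 80 }
] });
// ui is a plain data structure describing the DOM; a library like React
// diffs successive versions of it and patches the real DOM efficiently.
```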

Comparing (functionally) identical pieces of code written in plain JavaScript and in LiveScript, you don’t need to understand what’s happening to notice how much simpler and cleaner the LiveScript looks! You can read Stuart’s post on this very same stack for a deeper understanding of why it’s awesome. We love it and we’re sticking to it!

So as you can see, it’s a pretty ambitious proposal full of new, exciting stuff, and this project is the perfect opportunity to test it all out. We’re keen to get the show on the road and get into the meaty part of the development work, but we’re also eager to build something slick that will be considered a solid, well-written codebase. I have high hopes that Docker will become a thing around here, and that it might catch on as the go-to tool for handling DevOps, just like React and LiveScript have for frontend!

21 Aug 2014

Computer Science students should work for a start up while at uni

by Albert Still

Don’t completely rely on your grades to get you a job after uni. A CS degree will prove you’re intelligent and understand the theory, but that will only take you so far in an interview. We have to be able to talk about apps we’ve built with potential employers, just as you would want a carpenter to show you pictures of his previous work before you hired him.

While studying CS at university I worked 2 days a week for Red Badger, even in my final year when I had a dissertation. Some of my classmates questioned whether it was a good idea, suggesting the time it took up could damage my grades. But it did the opposite: I got better grades because I learnt so much on the job. When you see solutions to real-life problems, it gives you a better understanding of the theory behind them. And it’s that theory you get tested on at university.

What I’ve been exposed to that I wasn’t in lectures

  • Using open source libraries and frameworks. There’s rarely a need to reinvent the wheel; millions of man-hours have been put into open source projects which you can harness to make yourself a more productive developer.
  • GitHub is our bible for third party libraries. Git branches and pull requests are the flow of production.
  • Ruby – the most concise, productive and syntactically readable language I’ve ever used. Unlike Java, which the majority of CS degrees teach, Ruby was designed to make the programmer’s work enjoyable and productive. Its inventor Yukihiro Matsumoto said in a Google tech talk: “I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language”. Ruby is loved by start ups and consultancies because they need to roll out software quickly.
  • Ruby on Rails – built in Ruby, it’s the most starred web application framework on GitHub. It’s awesome and the community behind it is massive. If you want to have a play with it, I recommend its “Getting Started” guide (don’t worry if you’ve never used Ruby before, just dive in – it’s so readable you’ll be surprised how much Ruby you’ll understand!).
  • Good habits – such as the DRY, YAGNI and KISS principles. The earlier you learn them the better!
  • Heroku – makes deploying a web app as easy as running one terminal command. Deploy your code to the Heroku servers via Git and it will return you a URL to view your web app. Also, it’s free!
  • Responsive web applications are the future. Make one code base that looks great on mobile, tablets and desktops. Twitter Bootstrap is the leading front end framework, it’s also the most starred repo on GitHub.
  • JavaScript – the world’s JS-mad, and you should learn it even if you hate it, because it’s the only language the web browser knows! You’ll see that the majority of the most popular repositories on GitHub are JS. Even our desktop software is using web tech, such as the up-and-coming text editor Atom. If you want to learn JS I recommend this free online book.
  • Facebook React – Once hard JS problems become effortless. It’s open source but developed and used by Facebook, therefore some seriously good engineers have developed this awesome piece of kit.
  • Polyglot – Don’t just invest in one language, be open to learning about them all.
  • Testing – this was not covered in the core modules at my university, but Red Badger are big on it. Simply put, we write software to test the software we make. For example, we recently made an interactive registration form in React for a client. To test this we made Capybara integration tests, which load the form in a real browser and imitate a user clicking and typing into it. They rapidly fill in the form with valid and invalid data and make sure the correct notifications are shown to the user. This is very satisfying to watch, because you realise the time you’re saving compared to manually testing it.

Reasons for applying to a start up

  • They are small and have flat hierarchies which means you will rub shoulders with some talented and experienced individuals. For example a visionary business leader or an awesome developer. Learn from them!
  • More responsibility.
  • They’re more likely to be using modern front line tech. Some corporates are stuck using legacy software!
  • If they become successful you will jump forward a few years in the career ladder compared to working for a corporate.
  • Share options – own a slice of your company!
  • There are lots of them and they’re hiring!

Where to find start ups

A good list of hiring startups in London is maintained by the increasingly successful start up job fair Silicon Milk Roundabout. Also, Red Badger are currently launching Badger Academy – they’re paying students to learn web tech! This is extremely awesome when you consider UK universities charge £9,000 a year. If you’re interested in applying, email jobs@red-badger.com

Best,

Albert

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

C.A.R. Hoare, 1980 ACM Turing Award Lecture

20 Aug 2014

I Spent 3 Days With Sandi Metz – Here’s What I Learned

by Jack Hoy


A Learning Experience

Choosing a professional training course has always seemed like a bit of a minefield to me. Most courses have hefty price tags and it’s hard to judge beforehand whether they actually represent good value. Although I find that you can learn pretty much anything online now with a combination of videos, blog posts, ebooks and open source documentation – I really wanted an in-person learning experience, to be in the same room as a master and hear directly from them what makes great software.

Enter Sandi Metz, 20+ years of software development experience and author of the excellent Practical Object Oriented Design in Ruby. Lucky for me she had decided to bring her accompanying Practical Object Oriented Design course to London with assistance from the insightful Matt Wynne, one of the authors of Cucumber. For 20 of us it would be 3 full days of pair programming, code reviews and spirited group discussions.

I jumped at the chance to take part and after attending the course this past June, I wanted to share some of the core concepts with you. Hopefully this post will give you a few new ideas to consider and try out the next time you are in front of your editor.

Let’s begin!

The Brief

One of the tasks we were given during the course was to programmatically generate the lyrics to the song 99 Bottles of Beer. We were given a set of tests, with only the first one being executed; the rest were skipped for the time being. We were asked to make the first one pass before doing anything else. Once that test passed, we could unskip the next test and try to make that one pass, repeating the process until all tests passed.

Duplication Is Better Than The Wrong Abstraction

The next thing we were told to do goes against all intuition.

Write shameless code, full of duplication just to make the tests green.

Hang on, isn’t duplication the first thing we learn not to do?

Well yes, it’s true that ultimately you want DRY code but Sandi advised that you are setting yourself up for failure when you try to make your code DRY and full of abstractions before you really understand the problem you are solving.

So this is the first test:

Here we expect that two lines from the song (lines #13 & #14) should be returned when we create an instance of the Bottles class and call the verse method, passing in the number 99 (see line #16).
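The embedded gist hasn’t survived publishing, so here is a hedged reconstruction of roughly what that first test looked like (Minitest is assumed; the exact snippet, and the line numbers referenced above, come from the original course materials):

```ruby
require "minitest/autorun"

class BottlesTest < Minitest::Test
  def test_the_first_verse
    expected = "99 bottles of beer on the wall, " \
               "99 bottles of beer.\n" \
               "Take one down and pass it around, " \
               "98 bottles of beer on the wall.\n"
    # The test drives the API: a Bottles instance with a verse method.
    assert_equal expected, Bottles.new.verse(99)
  end
end
```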

How would you normally approach this test? Would you get distracted at the prospect of having to generate the whole song and start thinking about writing some clever method to do that? I know I would have in the past. It’s very easy to fall into that trap. I think it’s because as problem solvers, we are always so eager to reach that moment where we ‘get’ the pattern, that we rush ahead and remove duplication too soon or skip it altogether.

Although writing the minimum code to pass the test is a well known technique from TDD, it’s very hard to write ‘shameless’ code, even when explicitly told to do so. Only a couple of pairs in our group managed to meet this goal.

To get the first test to pass, we can start with something simple like this:

We defined the verse method in Bottles with a parameter of number and simply returned the exact same string from the test. We didn’t even use the number. Pretty shameless.
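The original snippet is no longer embedded; a sketch of that shameless first pass might look like this:

```ruby
class Bottles
  def verse(number)
    # Ignore the number entirely and return the exact string the test expects.
    "99 bottles of beer on the wall, " \
    "99 bottles of beer.\n" \
    "Take one down and pass it around, " \
    "98 bottles of beer on the wall.\n"
  end
end
```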

Then removing the skip from the next test case (see below), we have a similar scenario but this time the number passed in to the verse method is 89 (line #8):

So now we are forced to do something with the number but we can start the process of duplication by adding a case statement which just returns the full string based on the number passed in:
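Reconstructed as a sketch (the embedded snippet is missing), the case statement simply maps each known input to its full verse string:

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    when 89
      "89 bottles of beer on the wall, 89 bottles of beer.\n" \
      "Take one down and pass it around, 88 bottles of beer on the wall.\n"
    end
  end
end
```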

You are probably itching to clean that up already, but we are not ready to start abstracting yet. Sandi advised that code with duplication is easier to handle than the wrong abstraction, so we are better off gathering more information and adding it to the solution until, at some point, an abstraction naturally occurs. The cost of waiting for more information is low.

If we do skip ahead to writing a super smart abstraction too soon, we drastically increase the risk of having to untangle a mess later on.

Why is it easier and cheaper to handle? Although duplication looks ugly, it has far less mental overhead because the input cases are right there in front of you and there is less logic to keep track of. Adding a new input to the solution becomes a matter of adding to the duplication and you will see shortly that we have a neat technique for eventually DRY-ing out this code.

So we can continue in this vein to get the next 3 tests to green; they also just pass different numbers (2, 1 and 0) to the verse method and each returns a different verse string. To make these tests pass we add them to our case statement and return the strings directly:
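A sketch of the grown case statement, using the song’s standard lyrics (the exact strings in the original tests may differ slightly):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    when 89
      "89 bottles of beer on the wall, 89 bottles of beer.\n" \
      "Take one down and pass it around, 88 bottles of beer on the wall.\n"
    when 2
      "2 bottles of beer on the wall, 2 bottles of beer.\n" \
      "Take one down and pass it around, 1 bottle of beer on the wall.\n"
    when 1
      "1 bottle of beer on the wall, 1 bottle of beer.\n" \
      "Take it down and pass it around, no more bottles of beer on the wall.\n"
    when 0
      "No more bottles of beer on the wall, no more bottles of beer.\n" \
      "Go to the store and buy some more, 99 bottles of beer on the wall.\n"
    end
  end
end
```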

Yuck. But our tests are green and it means we can keep moving forward with the challenge. The next test requires us to implement a verses method. This takes two numbers which define the range of verses in the song to be generated (line #11):

In this case it’s just 99 down to 98. We don’t yet have a case to handle 98 bottles, so we can add that to our verse method the same as we did for 99. Then we can define a new verses method that takes an upper_bound and lower_bound to determine the verses that must be generated. Within the verses method we can call our existing verse method for 99 and 98:
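A sketch of that step (how the original test joined the two verses is an assumption here; a blank line between verses is the usual convention in the 99 Bottles exercise):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    when 98
      "98 bottles of beer on the wall, 98 bottles of beer.\n" \
      "Take one down and pass it around, 97 bottles of beer on the wall.\n"
    # ... other known verses elided
    end
  end

  def verses(upper_bound, lower_bound)
    # Still shameless: hard-wired to the only case the test exercises.
    verse(upper_bound) + "\n" + verse(lower_bound)
  end
end
```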

The tests pass and we can move to the next one which requires us to return 3 verses:

So now we need to be a bit smarter about how we generate the verses. We can do this by iterating over the number range with Ruby’s .downto, then using the collect method to get each verse, and finally joining them all with new lines:
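Sketched out, the verses method becomes (a reconstruction; the case statement is abbreviated to the branches used here):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    when 98
      "98 bottles of beer on the wall, 98 bottles of beer.\n" \
      "Take one down and pass it around, 97 bottles of beer on the wall.\n"
    when 97
      "97 bottles of beer on the wall, 97 bottles of beer.\n" \
      "Take one down and pass it around, 96 bottles of beer on the wall.\n"
    end
  end

  def verses(upper_bound, lower_bound)
    # Walk the range downwards, collect each verse, join with blank lines.
    upper_bound.downto(lower_bound).collect { |number| verse(number) }.join("\n")
  end
end
```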

The final test requires us to implement a song method, that should return the full song from 99 down to 0.

This is actually fairly easy for us to pass; we can just call our ready-made verses method, passing in 99 and 0 as the range.
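In code, roughly (a sketch; most case branches are elided here):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    # ... 98, 89, 2, 1 and 0 as before; every other number falls through to nil
    end
  end

  def verses(upper_bound, lower_bound)
    upper_bound.downto(lower_bound).collect { |number| verse(number) }.join("\n")
  end

  def song
    # The whole song is just the verses from 99 down to 0.
    verses(99, 0)
  end
end
```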

Great, now all our tests are shamelessly passing! You can view the solution here. You may have noticed one snag though: our song method doesn’t actually generate the full song, because our verse method only returns a verse when the number is 0, 1, 2, 89, 98 or 99. Don’t worry, we’ll soon put that right when we start refactoring.

I think some programmers may argue that this example is trivial enough that you could potentially start abstracting sooner, however, this problem was used to introduce the shameless technique and Sandi made it clear that this approach will serve you well even when faced with harder problems, where you have no idea what the end solution looks like.

To summarise the advice so far, resist the urge to leap ahead to an abstraction. Start breaking the problem down with a simple, shameless solution and don’t be afraid of duplication when starting out.

Refactoring Is Not An Afterthought

One of the most interesting ideas I took away from the course is that refactoring is not really the icing on the cake, it is the process of making the cake.

Instead of spending a long time in the red while we write our complicated method, then eventually getting to green, then maybe if we have enough energy left doing a bit of refactoring – we quickly obtained green tests from our shameless solution and that provides us with a platform to immediately begin the process of refactoring.

How To Refactor

Refactoring is rearranging code without changing behaviour and the approach Sandi recommended was to make tiny, tiny changes in a technique she perfected with Katrina Owen. The technique is to always stay one CTRL-Z (or ⌘-Z) away from green tests using a 4 step process:

  • Compile: Get the new code you want to implement to compile within the same file – it shouldn’t be called yet, this is in order to catch syntax errors
  • Execute: Run your new code but don’t use the result
  • Use: Replace the old code with your new implementation
  • Clean: Clean up and remove any old code you have now replaced
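As a contrived illustration (the specific change and the method names are hypothetical, not from the course), here is the end state of the four steps applied to one tiny change, with the earlier stages recorded as comments:

```ruby
class Bottles
  # Stage 1 – Compile: new_line was added below; nothing called it yet (tests run, still green).
  # Stage 2 – Execute: we called new_line(99) once and discarded the result (tests still green).
  # Stage 3 – Use: the old hard-coded string in verse was replaced by the call below.
  # Stage 4 – Clean: the old hard-coded string was deleted, leaving this end state.
  def verse(number)
    new_line(number)
  end

  private

  def new_line(number)
    "#{number} bottles of beer on the wall, #{number} bottles of beer.\n" \
    "Take one down and pass it around, #{number - 1} bottles of beer on the wall.\n"
  end
end
```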

After each step you should run your tests to make sure you are still green. I’d never seen an approach like this before but I did experience a certain sense of ‘flow’ when following it during the course and it really forces you to stay on the baby steps path.

It still feels a bit unnatural for me to work in increments this small and I often tend to combine some of them but I have been making an effort to try it out. The idea is that by doing less and being able to CTRL-Z when red, it’s always cheap to go back to a safe place and it prevents you from spending long periods of time stuck with failing tests, hoping it will come right in the end.

What To Refactor

Now we know the process of refactoring (making frequent small changes without changing behaviour), the question remains: what should we refactor? If we think our code now has enough duplication and we have enough information, then we can start abstracting.

The process for abstracting is to find the two lines of code that are most similar, then make them more alike.

The important thing to note is that we don’t want to take the things that are in common and extract them – e.g. “bottles of beer on the wall” is duplicated throughout but it adds no value to extract that into a method call or variable. Instead we find the 2 lines of code with the smallest differences and make them more alike or the same. By doing this we gradually chip away at the duplication, ending up with a number of small methods that can later be refactored into classes.
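For instance (a hypothetical illustration in the spirit of the exercise): the 99 and 89 branches of our case statement differ only in their numbers. Interpolating the numbers makes the two branches identical, at which point they collapse into one:

```ruby
class Bottles
  def verse(number)
    case number
    when 89, 99
      # The two branches were made alike by interpolating the number,
      # then merged into a single branch.
      "#{number} bottles of beer on the wall, #{number} bottles of beer.\n" \
      "Take one down and pass it around, #{number - 1} bottles of beer on the wall.\n"
    end
  end
end
```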

The best way I can explain this technique really is to demonstrate it. Watch the video below and I will take you through the process of refactoring the code we have written so far:

Hopefully that gave you a flavour of how easy it is to create abstractions once you have followed the path of duplication. The next stage in this code base is to start extracting some of these methods into a separate class but I will leave that until next time.

In the meantime I would definitely recommend reading Sandi’s book and checking out her next course taking place in October.

Happy hacking :-)

15
Aug
2014

Founders Week: The Importance of Taking Time Out

by Cain Ullah

As I mentioned briefly in my blog post discussing the launch of the Badger Academy, I went to a retreat back in January to take some thinking time away from work. I was cut off from the outside world. There was no internet. Mobile phones were not allowed. Writing and talking was even banned. It was pretty extreme. But it proved to be an enlightening experience not least for coming up with a plethora of new ideas, many of which were strategic ideas on how Red Badger could be improved.

Out of the back of the retreat, I had lots of ideas, a Red Badger Charity Division being one of them. As discussed in greater detail in the Badger Academy blog post, the Charity Division was all about improving our ability to develop from within, developing young talent to become senior leaders in their field. After 6 months of developing the idea in my spare time and with my colleagues, the charity division has now been superseded by Badger Academy, but the objectives have passed verbatim from one to the other. The mechanism through which we achieve the objective has changed.

This isn’t the first time that cutting myself off from the outside world has resulted in new ideas. At Burning Man, an art festival in the middle of the Nevada Desert which is totally cut off from any wifi or phone signal, I thought about bringing in Non-Exec Directors to help advise Red Badger. The move to bring in Mike Altendorf as a Non-Exec is one of the best things we have ever done at Red Badger. He has helped us to become a much more mature business, faster, stopped us from making mistakes (that he had made in the past) and helped us to re-shape how we do sales.

Building product as part of a pitch (via a Hackathon) was also thought up at the same retreat as the Charity Division this January. This new lean approach to sales “The Proof is in the Pudding” helped us to win the biggest project in our history in May.

I think you get the point. Cutting off wifi and phone signal is important in fostering creativity. It’s become such a distraction in everyone’s lives. If you sit on a bus on the way to work and look around you, everyone’s head is buried in a digital screen. On the bus, people contemplate less, do less book reading and less talking to each other in general. However, more important than just cutting yourself off from wifi or the telephone, taking time out is about giving your mind the space to think creatively and you can’t do this with the distraction of everyday life; internet or no internet.

I’m not saying we wouldn’t have gotten to these decisions or ideas anyway. I expect Mike Altendorf would have joined our ranks eventually, and we might have started a Badger Academy at some point. I just don’t know. What I am sure of is that it would have taken much longer had I not taken time out to just think.

Red Badger Founders Week

Reflecting on the value of the time I have had to myself, I have been doing some reading about it. It seems that taking time out is not uncommon. I watched a great 90’s documentary called “Triumph of the Nerds” in which Bill Gates talks about setting aside a week every year to read all of the books that he had in his “to read” list.

So I suggested to Dave and Stu that the three of us take a Founders Week, to do some more strategic thinking away from the day-to-day of running the business. After I suggested the idea, it became apparent that Dave had also already been considering taking a week, but by himself, not the three of us together. When I suggested we do it together, both Stu and Dave were sold immediately.

Dave sent me this link: Take a Bill Gates-Style “Think Week” to Recharge Your Thinking on Lifehacker. The article by Michael Karnjanaprakorn talks about Steve Jobs, Mark Zuckerberg and Bill Gates all taking regular think weeks in the past. It links to a great article, “Creative Thinking Matters”, which focuses specifically on Bill Gates’ “think weeks”: what he did during the week and how much innovation evolved out of Microsoft as a result.

There is also a health aspect to taking time out. Michael Karnjanaprakorn is starting “Feast Retreats”. He says, Feast Retreats are for 20 people (max) where he will ban cell phone/WiFi usage throughout the weekend. “My goal is to share what I learned during my time off with The Feast community. There will be lots of yoga, healthy eating, and personal development to show the value and power of taking time off.”

All of the articles I have read about the power of time off can’t speak highly enough about the value it brings in promoting creative thinking, innovation and an increase in company productivity.

So, Stu, Dave and I are taking our first “Founders Week” at the end of November. We are going to book a cottage somewhere just outside of London, switch our phones off and take some time to ourselves. We’re not sure exactly what we’re going to do yet, but we all have books we want to read that we just haven’t had chance to yet, we’ll eat healthily and probably do some workshops. Apart from that, it’s just an opportunity to take some time to think, reflect and generally relax our minds.

The benefit I am sure will result in a rapid generation of new ideas that will impact Red Badger for years to come.

14
Aug
2014

The Launch of Badger Academy

by Cain Ullah

Back in January this year I went away for a 10 day retreat. The initial intention was to get away from work completely. No phone. No internet. No work. However, unexpectedly it ended up being incredibly conducive to coming up with a whole plethora of creative ideas. Some were non-work related but lots of new ideas were very much work related. (See this blog post I have written on Founders Week: The Importance of Taking Time Out). One of these ideas, in its rawest form was how we can source and develop young talent and turn them into very highly skilled developers, designers, project managers or whatever else. This has resulted in the quiet launch of Badger Academy this week.

A little bit of context

At Red Badger, a huge amount of investment goes into recruitment. Finding the best talent out there is difficult. As a company we hang our hat on quality, quality being the #1 Red Badger defining principle. As a result, we’re very fussy when it comes to hiring people. This I am in no doubt, will hold us in great stead for the future, so we are determined to maintain our standards in staff acquisition. But it poses a problem – how do we scale the business to service our ever increasing demands from a rapidly growing sales pipeline, without reducing quality?

I think the answer is to improve our ability to develop from within. So, we are hatching plans to invest heavily in developing young talent to become senior leaders in their field. We realise this will take time but Badger Academy is the first experiment that we hope will fulfill the overall objectives.

A Blueprint for Success

In the summer of 2011, when we were a much, much smaller business, we put out a job ad for a summer intern. Out of the 60 or so applicants, one Joe Stanton stood out head and shoulders above the rest. By the time he joined us he had just started his 2nd year of Uni, so he worked with us for 8 hours a week. He had bags of talent but obviously lacked experience; as a Computer Science degree student he was being taught the vital foundational knowledge you’d expect from a Computer Science degree, but he had no knowledge of modern web application engineering practices such as Behaviour Driven Development.

At the time, we had much more time to spend with Joe to ensure that he was doing things properly and with our guidance and his astute intellect, he developed his knowledge rapidly. He then had a gap year with us during which he was deployed and billed on real projects before going back to part-time for his final year of University. He graduated this summer and after a bit of travelling around Europe, he joined us permanently. On his first day, he was deployed onto a project as a billable resource having had almost 3 years of industry experience. He has hit the ground running in a way that most graduates would not be able to.

Joe has been a resounding success. The problem is how you scale this to develop multiple interns especially now that as a company, our utilisation is much higher. We can no longer spare the senior resources to spend the sort of time we could with Joe at the very beginning.


Joe Stanton – The Badger Academy Blueprint !!!

The Evolving Plan

When I was at the aforementioned retreat, my ideas were based around a project that we were just kicking off for an incredible charity – The Haller Foundation. We were embarking on a journey to build a responsive mobile web application to help farmers in Kenya realise the potential of the soil beneath their feet (For more info, search our previous blogs and look out for more info once the Haller website is officially launched later this year). What was key in my thinking was that we had planned for a mixture of experience in the project team which included two intern software engineers (one being Joe Stanton) that were working 2 days a week whilst completing their final year at Uni. We were delivering the project for free (so Haller were getting a huge amount of benefit) and we were training and developing interns at the same time. Win-win.

So, this formed the basis of my initial idea – The Red Badger Charity Division. We would use interns to deliver projects on a pro-bono basis for registered charities only. The charity would need to understand that this is also a vehicle for education and thus would need to be lax on their timelines and we would develop interns through real world project experience in the meantime. Although a great idea, this wasn’t necessarily practical. In the end, the Haller project required some dedicated time from some senior resources and cost us over £20K internally to deliver. A great cause but not a sustainable loss to build a platform for nurturing talent upon.

So, over several months after my retreat (7 to be exact) in-between many other strategic plans that were being put in place at Red Badger, with the help of my colleagues, I developed the idea further and widened its horizons.

Rather than being focussed on just charity projects (charity projects will remain part of the remit of the Badger Academy), we opened the idea out to other internal product development ideas as well. We also put a bit of thinking into how we could ensure the juniors get enough coaching from senior resources to ensure they are being trained properly.

Objective

Badger Academy’s primary objective is to train interns that are still at University who will be working part-time with a view to them having a certain level of experience upon graduation and hopefully joining Red Badger’s ranks. However, it may also extend to juniors who have already graduated (as a means to fast tracking them to a full-time job), graduates from General Assembly or juniors who have decided not to go to University.

It will require some level of experience, i.e. we will not train people from scratch. But once Badger Academy has evolved, the level of experience of participants will vary greatly. In the long term we envisage having a supply chain of interns that are 1st years, 2nd years, gap year students and 3rd years, all working at once.

Above is a diagram I drew back in April 2014 when initially developing the future strategy for Badger Academy. This has now been superseded and developed into a much more practical approach but the basic concept of where we want to get to still remains the same.

So what about the likes of General Assembly?

Badger Academy does not compete with the likes of General Assembly. We are working very closely with General Assembly, providing coaches for their courses and have hired several of their graduates. In fact, General Assembly fits in very nicely with Badger Academy. It is the perfect vehicle for us to hire a General Assembly graduate to fast track them over a period of 3 months until they are billable on projects. A graduate from General Assembly would generally not have been a viable candidate for Badger Academy prior to doing the General Assembly course. Like I say, all candidates need a certain level of experience beforehand. Badger Academy is not a grassroots training course.

Implementation

It is imperative that interns and juniors are trained by more senior resources. As a result we’ll be taking one senior resource for one day a week off of a billable project to dedicate their time to training the Badger Academy participants. To reduce impact on specific projects, we will rotate the senior coaches across multiple projects. We will also rotate by the three University terms. So for autumn term at Uni, we will have 3-4 senior coaches (all from separate projects) on weekly rotation until the end of the term. The spring term we will refresh the 3-4 coaches and again for the summer term. This way, everyone gets to teach, there is some consistency in tutors for the interns during term time and project impact is mitigated.

Summary

There will be a set syllabus of training topics for each discipline. As this is the first week, we have decided to build the syllabus as we go. Our current interns are both software engineers, so we can imagine getting pretty quickly into engineering practices such as testing strategy (e.g. BDD), but also other disciplines that are vital to delivering quality products, such as Lean/Agile methodologies, devops and all of the other goodness that Red Badger practices daily.

This is an initial blog about our current activity and is light on detail. As this develops, we’ll formalise the approach and publish more insightful information about what this actually entails.

What we need to not lose sight of, is that this is an innovation experiment. We need to learn from it, measure our success (as well as our failures) and adapt. This is part of a long term strategy and we are just at the beginning.

Disclaimer: Red Badger reserves the right to change the name from Badger Academy. This has not been well thought through!