Posts Tagged ‘Internship’


Badger Academy – Week 10

by Sarah Knight

Badger Time Retro

This week Joe introduced us to the concept of Agile Retrospectives. A retrospective is an opportunity to look back at the project so far and review how everything is going. What do we want to start doing? Continue doing more of? Stop doing? It’s a good way of opening up communication about what we think we need to improve on, and of coming up with specific ways to achieve that.

We decided that the easiest way of tracking this was with a Trello board. We’ve got 4 columns:

- To improve
- Improving/Monitor
- Improved
- N/A – (for cards we later decide are no longer relevant, but that don’t fit in the Improved column)

We created a card for each thing we want to improve on, and labelled it STOP, START or CONTINUE. These were all placed in the ‘To improve’ column. We then went through them all and discussed how they could be implemented, or if any of them needed to be reworded to make them more SMART.

A few examples:

START: Descriptive pull requests for every card currently in development, with a task list.
CONTINUE: Tracking possible enhancements as GitHub issues.
START: Using blockers (as a marker) for stories that can’t be worked on for some external reason (or because you’re stuck).
START: When pairing, make sure to swap around who’s writing and who’s reviewing.

I’d not come across the practice of a retrospective in this form before, but think it’s a really great method to open up dialogues about things you want to do differently. We’ve been using it for a few weeks now and are really seeing the benefits. Communication has improved, and coaches and cubs alike are able to see who’s been working on what, and how things are going. At regular intervals we revisit the retro board and discuss our progress. It’s a good way to track improvements, as we move cards from one column to another, and ensure that we continue to think and talk about other improvements we’d like to make.


Editing resources

We added the functionality to edit a resource on the front end and back end. However, because a resource can be linked to a role, which has days associated with it, changing a resource’s cost could potentially affect the costs on a number of days, and therefore on projects as a whole.

So on the front end we needed to provide some options as to what editing a resource would actually affect.

Affected projects

- No bookings – no previous bookings are affected; the new cost applies only to future bookings
- Bookings from – the user chooses a date, and any booked days from that date onwards have the new cost applied
- All bookings – all previous bookings are updated with the new cost

So that the user can see exactly what they’re doing, we needed to flag up the projects and phases that would be affected by any changes they made. However, a resource isn’t directly associated with a project: it’s linked via a role, which belongs to a phase, which belongs to a project. That made things a bit tricky.

project associations

As you can see from the diagram above, editing Resource 1 would affect Project 1 and both of its phases (via Roles 1 and 3). Editing Resource 2 would only impact Phase 1 (via Role 2), but would still affect Project 1 as a whole.

We needed to go through all the roles linked to the current resource’s id, then through all the phases associated with those roles, and the projects associated with those phases, to build up a list of affected phases and projects. Several phases from the same project could be associated with the same resource, so we needed to make sure that projects didn’t get repeated.

To filter by a from date, we needed an extra step to turn the string from the date input field into a date object. We then went through all the roles linked to the current resource’s id and kept those that have a day with a date greater than or equal to the from date, before passing that list of roles through the same filtering process as above.

At one point we got a bit stuck trying to do a nested filter to get the roles that belonged to the resource and find the days that belonged to those same roles. Viktor pointed out that using a second ‘filter’ returns an array to the first filter, which is expecting a boolean. So we changed it to ‘any’, which does return true or false, and everything worked!
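Sketched in plain JavaScript, with assumed shapes for the role, day, phase and project objects (this isn’t the actual Badger Time code, which is LiveScript), the traversal and the fix look roughly like this:

```javascript
// Assumed shapes: each role has a resourceId, an array of days
// (each day carrying a date string), and a phase belonging to a project.
function affectedRoles(roles, resourceId, fromDateString) {
  var fromDate = new Date(fromDateString); // date-input string -> Date object

  return roles
    .filter(function (role) { return role.resourceId === resourceId; })
    .filter(function (role) {
      // A nested 'filter' here would return an array, which is always
      // truthy; 'some' (JavaScript's 'any') returns the boolean the
      // outer filter expects.
      return role.days.some(function (day) {
        return new Date(day.date) >= fromDate;
      });
    });
}

function affectedPhasesAndProjects(roles) {
  var phases = [];
  var projects = [];
  roles.forEach(function (role) {
    if (phases.indexOf(role.phase) === -1) phases.push(role.phase);
    // Several phases of one project can share a resource,
    // so each project must only appear once.
    if (projects.indexOf(role.phase.project) === -1) {
      projects.push(role.phase.project);
    }
  });
  return { phases: phases, projects: projects };
}
```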


Badger Academy Week 9

by Eric Juta

Badger Academy endeavours have slowed down slightly; managing my last year of university while working part-time is becoming a sizeable challenge. Much more so to come once I arrive in the real world!

In reference to Albert’s post, I’ve personally found the same: working at a startup while at university is a significant contribution towards my own academic performance! The trials and tribulations faced at Badger Academy have prepared me for kata-like performance at university. There’s so much I’ve learned at Badger Academy that isn’t taught at university or put into practice there! (Sadly they don’t teach git workflows at university.)

I highly recommend working for a startup and gaining experience in preparation for post-grad life. My dissertation already has its foundations laid thanks to the concepts taught at Red Badger!


Authentication

The architecture of a single-page application talking via AJAX requests to a backend Rails API emphasises data flow. Without the chance to just install a Ruby gem and have most of the work done for us, we are forced to implement the same methodology and best network practices ourselves (as demonstrated before with Nginx).

The process of authentication leading to API data fetching is similar to a TCP three-way handshake.

In Badger-Time, the process works as follows:

  1. On any route, the client-side router checks whether a generated authentication token is stored in HTML5 LocalStorage (a persisted datastore in the browser with its own API).
  2. If there is no token, the router redirects the user to the /login route and renders the React.js component.
  3. The user logs in with their pre-registered Badger-Time details.
  4. The user’s credentials are verified by the backend API, and a generated authentication token is sent over (made to expire after a day unless a refresh call is made by the user).
  5. The generated authentication token is received and stored in HTML5 LocalStorage.
  6. Every request from then on includes the authentication token in its request headers.
  7. The API checks that the request header has a valid authentication token and replies after executing the body of the request.

(I take that back, that was more like 7 steps rather than 3)
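Stripped down to the client side, the token handling looks something like this (a sketch: the real app goes through the superstore-sync module and an API helper, and the key name and router call here are assumptions):

```javascript
var TOKEN_KEY = 'auth-token'; // assumed key name

function currentToken() {
  // Step 1: look for a previously issued token.
  return window.localStorage.getItem(TOKEN_KEY);
}

function storeToken(token) {
  // Step 5: persist the token the API sent back after login.
  window.localStorage.setItem(TOKEN_KEY, token);
}

function authHeaders() {
  // Step 6: every subsequent request carries the token in a header.
  var token = currentToken();
  return token ? { Authorization: token } : {};
}

function requireAuth(router) {
  // Step 2: redirect to /login when no token is stored.
  if (!currentToken()) {
    router.transitionTo('/login'); // hypothetical router API
  }
}
```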

Code-wise, the decisions we made in implementing the above process are:

  • NPM’s superstore-sync module gives us an API for setting and getting the auth token from HTML5 LocalStorage.
  • The API helper on the frontend was modified to send the token in all request headers, if one is present.
  • A before filter/action in the ApplicationController verifies that the request header’s token matches a session-table entry, which also carries an expiry value.
  • A login action verifies the BCrypt-encrypted password details and generates a token value from a hash.


The OAuth protocol is used in a similar fashion for the backend Rails API to talk to FreeAgent.

Those tokens, however, are stored in process environment variables and read directly instead, so for now the FreeAgent account is hardcoded.

FreeAgent OAuth tokens are refreshed, and data pulled down, by a recurring clockwork task to keep the Rails models updated. Asynchronously too, thanks to the Sidekiq and Redis combination, so deployment and usage carry on with no interruptions at all!

We also decided to diff our remote Timeslips (the model FreeAgent populates) against our Days model on every sync.

This was actually quite easy, algorithmically: we assume that all Timeslips are up to date, so the Days model and its burnt-hours attributes get overwritten, unless a day’s burnt hours are already up to date, which we check by comparing updated-at or burnt-hours values.
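The real implementation lives in the Rails API, but the comparison itself is simple enough to sketch in JavaScript (the model shapes here are assumptions):

```javascript
// For each remote timeslip, overwrite the matching day's burnt hours
// unless that day is already up to date.
function syncBurntHours(timeslips, days) {
  timeslips.forEach(function (slip) {
    var day = days.filter(function (d) { return d.id === slip.dayId; })[0];
    if (!day) return;

    var upToDate = day.burntHours === slip.hours &&
                   day.updatedAt >= slip.updatedAt;
    if (!upToDate) {
      day.burntHours = slip.hours; // overwrite, then persist the change
      day.updatedAt = slip.updatedAt;
    }
  });
}
```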

Leaving comments

I’d like to mention it again: our BDD process is finally done!

Another trick we set up, DevOps-wise, was to start the PhantomJS debug server in a Docker container and then run the Cucumber tests. We now have console session logs stored, and we can view them through the PhantomJS web UI!

No more writing errors to the document just to catch JavaScript failures!



Badger Academy – Week 7

by Sarah Knight

It’s week 7 at Badger Academy, and it feels like things are really starting to come together. As the codebase begins to take shape, and more of the blanks are being filled in, I’m finding it easier to contribute, as there are now more examples for me to refer to and less code to write completely from scratch. I spent a couple of days building the Roles section (Projects > Phases > Roles) on the frontend, and feel like I’m really starting to grasp how things are linked together, where code is being called from, and what properties are getting passed from one section to another.

Money, money, money

Tiago and I started the week pair-programming to get the money values working properly. We implemented the money-rails gem, and created migrations to add the suffix ‘_pence’ to the money columns. E.g. the fixed_price column in Phases was renamed to fixed_price_pence. However, using the monetize method, the suffix is ignored everywhere else, so you can still just refer to the attribute as fixed_price.

We were able to access the monetize method in the models, which creates a money object from the attribute. Money is converted from pounds to pence, and saved in the database as pence. We then make sure to convert back from pence to pounds in the views, for users. This means that calculations are done using integers, and none of the weird rounding errors that occur with floats. Also, should we ever go international with Badger Time, or start getting paid in exotic currencies, having everything set up with money-rails should make currency conversions a doddle.
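The reason for integers in a nutshell: binary floating point can’t represent most decimal fractions exactly, so sums of pounds drift while sums of pence don’t. A quick JavaScript illustration (the same applies to floats in Ruby):

```javascript
// Floats: three 10p amounts expressed in pounds don't sum exactly.
console.log(0.1 + 0.1 + 0.1);          // 0.30000000000000004
console.log(0.1 + 0.1 + 0.1 === 0.3);  // false

// Integers: store pence, convert to pounds only for display.
var totalPence = 10 + 10 + 10;              // 30
console.log((totalPence / 100).toFixed(2)); // "0.30"
```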

Promises, Promises

Clicking around in the browser, I discovered a bug where the page was rendering before data had finished loading. A phase belongs to a project, and I found that trying to access some of the phases resulted in an error. It turned out that after fetching each project, and then each of the phases for the first project, the data was being registered as fetched before the other phases had been retrieved.

Viktor decided that the best way to solve this issue was through the use of promises, an entirely new concept to me. The basic idea is that a promise stores a task and ‘promises’ to do something in the future, once other criteria have been fulfilled. So you can hold off an action until other tasks have been completed.

The really clever thing about promises is that you can chain them together, so that once a stage in the code is reached, you can start another promise, and then another one, and so on. Each promise will wait for the required actions to be completed before launching its own action, until you get back to the first promise. Everything runs in the sequence you’ve set, and you know that the final task won’t run until everything else has finished. Another really useful feature is Promise.all, which allows you to run several tasks in parallel and wait for them all to finish before running another task. This would be much more difficult using classic Node callbacks.

By passing in a silent option, we could hold off on notifying the listeners that data had been fetched until it truly had all been fetched. It also cut down on the number of times the page was being re-rendered: previously it re-rendered after every single item was fetched, which would get ridiculous once Badger Time was filled with content (and was already slightly ridiculous with the small amount of example content that’s in there currently!).

We installed the Bluebird promise library, and then required it in each file we added promises to.

The code added to the Projects store fetches all of the projects and then calls into the Phases store for each one, notifying listeners only once everything is in.
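A sketch of the shape of that code (api, PhasesStore and notifyListeners are stand-in names rather than the actual Badger Time stores):

```javascript
var Promise = require('bluebird');

// Projects store (sketch): fetch the projects, then fetch every
// project's phases in parallel, and only notify listeners once
// everything has truly arrived.
function fetchAllProjects() {
  return api.get('/projects').then(function (projects) {
    var phaseFetches = projects.map(function (project) {
      // silent: true stops the Phases store notifying on its own.
      return PhasesStore.fetchForProject(project.id, { silent: true });
    });
    // Promise.all waits for all the parallel fetches to finish
    // before the chain continues.
    return Promise.all(phaseFetches).then(function () {
      notifyListeners(); // a single re-render, with complete data
      return projects;
    });
  });
}
```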


Badger Academy Week 4

by Sarah Knight

It’s week 4 of Badger Academy, but for me personally as the 3rd intern to join, it’s the first week. Not only am I a few weeks behind on the Badger Time project, but fresh from the 3 month Web Development Immersive course at General Assembly, I’m also several years behind Tiago and Eric in terms of general programming experience. So my first few days were spent in a state of confusion and growing panic as I tried to read up on an ever-growing list of techniques and technologies that were completely new to me.

Vagrant, Docker, React.js, Gulp, Gherkin, Phantom.js, Browserify, Nginx, Selenium, and Circle CI were a few of the terms I was busy googling. I now have a rough grasp on what most of these are, and how they fit together but it might be a while before I can blog about them with any confidence! Watch this space …

By Wednesday I was able to get stuck in and start writing some proper code, which felt good. I made a start on some tests for the API. We were thinking about using Cucumber for these, but in the end it was agreed that plain RSpec made more sense for the technical back end, keeping the more readable, English-like Cucumber tests for the front end and its potentially less techie readers.

Viktor was our senior developer this week, and spent time helping me write some tests for the JSON responses. He also helped refactor some of the React.js code on the front end while also giving me an overview of how it all fits together. This was really helpful, as I think I’m now beginning to understand React on a conceptual level … we’ll see how it goes when it comes to actually working with it though!


Github Flow

With 3 full-time team members plus 3 part-time senior devs on this project, having a standardised system for version control is important. Most of the projects I’ve worked on previously have been solo efforts, so it was crucial for me to understand the system in place and make sure I didn’t mess up. Luckily we’re using the GitHub Flow workflow, which is simple to pick up and facilitates continuous deployment.

The workflow:

1) Create a new branch locally

Create a new descriptive branch locally from master and commit regularly. Naming things descriptively is always tricky, but done right, it allows everyone to see who’s working on what.

2) Add commits

Committing regularly allows you and others to keep track of your progress on the branch. Each commit is like a snapshot of the branch at a particular time, so you don’t want to leave it too long between commits or too much will have changed. With regular commits of small chunks of code, if you introduce bugs or change your mind about something, you can rollback changes easily. (It’s a bit like time travel!).

3) Open a Pull Request

Once you’re ready to merge to master, or want some feedback, open a pull request. Pull requests allow others to review your code, and everyone can add comments. Because pull requests accept Markdown syntax, you can even create tickboxes of things to be ticked off (top tip courtesy of Alex!).

4) Discuss and review code

Once a pull request has been opened, other people can see what you’ve been working on and enter into discussion about it on GitHub.

5) Merge and deploy

Once you’re happy with the code, and it passes all the tests, you can merge to master. We have CircleCI set up to automatically test code once a pull request has been opened, so you can easily see whether the code is passing the tests before you merge.

The golden rule of Github Flow is: Anything on the master branch is deployable.

Any code on the master branch has been tested and is totally stable. You can create new branches from it with confidence, and deploy from it. We don’t yet have any kind of production server set up, so there is currently no deployment. However, the whole point of GitHub Flow is continuous deployment, so once that’s up and running, this step will happen regularly.

Next Week

To ensure that we learn as much as possible about all aspects of development, we’re taking it in turns to work on the different parts of the project. So just as I was starting to get to grips with the API this week, next week I’ll be doing something completely different and taking a turn on the front end. I’m looking forward to exploring React.js and seeing how the testing differs.


Badger Academy Week 3

by Tiago Azevedo

The third week of Badger Academy has passed, and with it ends the first cycle of seniors helping interns. This Thursday we were paired with Joe Stanton. We ran into a lot of problems during the week, which left us somewhat frustrated but also increased our eagerness to learn. Most of our environment setup for development has been done by this point. We managed to decrease our Docker build times from ~20 minutes to 3-5 minutes, depending on how good a day the server was having, but overall it was consistent and fast.

Our focus this week was on testing standards. We were aware of the best practices for testing our software, but implementing them within our projects was what took the bulk of our time.

Testing the API

Testing the Rails backend was fairly straightforward. When we scaffolded the controllers and models for our project, a set of pre-generated RSpec tests was provided for us. Most of them were fairly unoptimised, and some were suited not to an API but to a project written completely in Rails.

We kept a few things in mind while writing these tests:

  • Keep tests of one model/controller isolated from other models and controllers.
  • Avoid hitting the database where we could.
  • Avoid testing things which are covered by higher-level tests.

Expanding on that third point, Joe helped explain what layers to test and what layers we could skip. At the core of our app we have model tests, which would be independent of the database and would test things like logic and validation. These should eventually make up the majority of our tests, but for the meantime we only have a few validation checks. The ‘medium-level’ tests were things like routing and request tests.

We ended up skipping the routing tests, since once we got to the higher-level integration tests, we could infer that if those passed then all our routing was correct. We kept request tests to a minimum, only checking that the API returned the correct status codes, so we could have a sense of consistency across our app; those weren’t necessarily implied by the integration tests.

Following that, we removed the unnecessary tests and, through the use of FactoryGirl, converted our logic and validation tests to avoid hitting the database, as that would cause a significant slowdown once our project became larger. Some of our higher-level controller tests did hit the database, but this is unavoidable in most cases, and attempting to bypass it would have been more trouble than it was worth.

Testing the Frontend

Our frontend testing was much more difficult to set up. We’re currently running a stack of PhantomJS, CucumberJS and Selenium. CucumberJS is a tool that allows us to write tests in a human-readable format, so that anyone, even without an understanding of programming, can see what’s happening and even write their own tests if they want to. This is the basic premise of BDD (behaviour-driven development): we write tests for the functionality of the software beforehand, from the standpoint of the end user and in a language that they can understand. This differs from the TDD (test-driven) principles used in the API, which is written purely in Ruby, and not necessarily from a user’s point of view.
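A feature file reads something like this (an illustrative example rather than one of our actual tests):

```gherkin
Feature: Logging in
  Scenario: Logging in with valid credentials
    Given I am on the login page
    When I fill in "Email" with "sarah@red-badger.com"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see the dashboard
```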


That’s an example of a test written in Gherkin (the CucumberJS language – yes, we are aware of all the slightly strange vegetable references). You can probably guess what it tests for. Behind the scenes, the software captures and identifies each of those lines and performs tests based on the parameters specified (e.g. what page you’re on and what action you’re performing).

One issue we struggled with was how to isolate these tests from the API. Since the pages display content from the backend, we needed a way to test using fake data. We went through a variety of methods during the week. At first we thought of simply stubbing out the calls to the API using Sinon, a popular JavaScript mocking and stubbing library. While this would have been the most robust option, we had big difficulties using it with Browserify – a tool we use to bundle the entire application into one file – so we decided on creating a fake API server using Stubby, which runs only for the duration of the tests and can serve multiple datasets to the frontend, so we can still test a variety of cases.


Now that we’ve got the testing frameworks down, we expect to make fast progress from here on out. We ended up learning and using CircleCI, which automatically runs the tests on any pushes or pull requests made to the GitHub repos. This makes sure we only merge into master when everything is working as planned, and that all tests pass on a fresh system before deployment.

Despite all the new technology we have introduced, everything is going more or less smoothly, and we couldn’t ask for a better foundation to build this project on. Not only are we rethinking the way the tech badgers go about the development process, we’re also streamlining the entire production process with lower build times, safe and consistent deployment, and a highly scalable and portable infrastructure.


Badger Academy week 2!

by Eric Juta

This week in Badger Academy we were joined by Alexander Savin, a senior engineer of many talents. Under his guidance we assessed the current state of our DevOps, including the decision to use Docker.
Finalising last week’s architecture choices, we promptly laid down the foundations for the road ahead.
There really was a lot of googling, not much Stack Overflow!
Having decided on a one-command workflow for any compatible Unix system, we proceeded to create the mammoth script.


Bash Shell Script

Iteratively tweaking it (Agile!) in the end allowed us to do the following:

    • Git clone Badger-Time
    • Use Vagrant to bring up the initial CoreOS VM
    • Run the shell script from within the SSH session to build the Docker containers

(The current container stack, each with its respective data container, being: Rails API, Redis, Postgres, Node, Nginx.)

  • Pull preinstalled images down
  • Add our config files into them, specifically our Nginx config and SSL certificates
  • Mount our Badger-Time code into its respective destinations
  • Install Node and Rails dependencies, then create the databases and migrate them
  • Run all the linked containers, with persisted daemons and their services, in hierarchical order


Badger-Time code up and running on any potential Unix system in less than 15 minutes, without any further interaction.
It sounds like a lot, but in fact it’s only this quick thanks to the high internet speed within the office!


The advantages we discovered in this approach, compared to the previous Badger-Time Vagrant + Ansible setup, were vast in so, so many ways!

First of all, an all-in-one up command: we have an extra intern joining us in a week’s time, and getting her laptop up to the current versioning will require little to no effort.
(Yes, we’ve already tested it on her preview day at the office.)

  • No makefile building? Yes please!
  • Faster tests
  • Reduced memory footprints
  • Same environment from development to our build server to our deployment server
  • Isolate local dev dotfiles and configs from the application
  • 12factor application coherence!


There are many disadvantages too, as you’d imagine with any new technology:

  • Initial volume mount mapping configuration
  • Networking association is difficult to comprehend
    (dynamic host files generated by linked containers, exposed ports, Vagrant)
  • Developer productivity affected by added configuration complexity
  • Double-layer virtualisation! Native support is Linux-only
  • The lack of a structured DevOps approach to Docker documented online leaves a lot of decisions to the creator

Admittedly, as we’re still continuously learning, we will grow into the software architect’s hat over time.
Luckily we have constant surveillance and access to the senior engineers over Slack! #badgerbants

Scaffolding the frontend

With the majority of the DevOps for the developer environment out of the way, we discussed with Alex potential ways to scaffold the frontend tests.
This took a lot of learning Gulp with him, to customise our frontend workflow further.

We set up our gulpfile to do the following tasks:

  • Pull down npm and Bower dependencies
  • Build LiveScript React.js components, index.jade, Less files and the Semantic Grid system
  • Browserify, concatenate, uglify
  • Build the LiveScript tests for compatibility with CucumberJS
  • Start the PhantomJS service from within the Docker container before running the CucumberJS tests
  • Watch for source code file changes and compile

Letting Gulp do all of this allows us to commit and push less code to GitHub, plus we get the added developer-workflow productivity!
Less context switching: the steps above are just abstractions!
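A trimmed-down gulpfile along those lines (the plugin choices – gulp-less, liveify, vinyl-source-stream – are assumptions rather than our exact setup):

```javascript
var gulp = require('gulp');
var less = require('gulp-less');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

// Compile the Less files.
gulp.task('styles', function () {
  return gulp.src('src/styles/*.less')
    .pipe(less())
    .pipe(gulp.dest('dist/css'));
});

// Compile the LiveScript components and bundle them into one file.
gulp.task('scripts', function () {
  return browserify({ entries: './src/app.ls', extensions: ['.ls'] })
    .transform('liveify') // LiveScript transform for Browserify
    .bundle()
    .pipe(source('bundle.js'))
    .pipe(gulp.dest('dist/js'));
});

// Recompile on source changes.
gulp.task('watch', function () {
  gulp.watch('src/**/*.{ls,less,jade}', ['styles', 'scripts']);
});

gulp.task('default', ['styles', 'scripts', 'watch']);
```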

Food for thought

One problem that had to be overcome was the choice between running the frontend tests from within the container or outside it. The issue is that the tests will inevitably be run from within a build-server environment before deployment. And because Nginx serves the static files from a container, it poses the question: should we reroute the webdriver to examine the app from the outside in for tests?

We were a bit stumped at first, so can someone please document a best-practices guide for Docker networking + Docker frontend testing?
It may be the case that someone at Red Badger will have to!

Next week’s tasks!

Next week, Tiago and I will ponder what kinds of tests should be written.

BDD is a major cornerstone of the quality of our projects; we’ll have to assess its implementation across a split frontend and backend!
Let alone learn API design!


The first week of Badger Academy

by Tiago Azevedo

Last Wednesday marked the beginning of Red Badger’s intern training program – Badger Academy. As interns, Eric and I will be the ‘prototypes’ with a dynamic syllabus covering the fundamentals.

Our guidance consists of a full day a week with a senior developer, and in this case we’ll be re-engineering an internal app that was started two years ago but left unfinished.

Badger Time Reborn

Badger Time, wittily named by Cain, was to be a resource management platform. At a basic level, it would enable a business owner to plan potential projects and analyse ongoing projects, calculating financial figures based on how many hours people were assigned to projects and how many hours they’d fulfilled (using data from the FreeAgent platform).


We collectively decided that the best course of action was to build it up again from scratch, using the old codebase as a reference point, as that would take less time and effort than fixing what was currently wrong.

As far as any intern is concerned, writing any software from scratch is a daunting task. We took the briefing with a positive attitude, and revelled in the prospect of being able to learn every aspect of building a working, maintainable piece of software.


The first stage of any structured task is planning – what do we want, how do we want it and what problems will we face? Thankfully, we had the ability to recycle the designs from the old project which simplified a lot of the what and the how.

User Stories

The bulk of last Thursday was spent on these. Viktor, the senior assigned to us for the day, took us through building a backlog of user stories for each feature that would be in the minimum viable product. Building these user stories helped us to understand the features from a user’s point of view, and simplified the process of figuring out potential problems. We used Trello to organise them, as it allowed us to sort the backlog by priority and establish a pipeline of things to get done.

Building a data model

As we’d be handling large amounts of data coming from different sources, it was imperative that we had a well-built data model. There were two main factors to keep in mind:

  • Avoid repeating the same data in various places
  • Avoid storing anything in the database that can be calculated on demand (see the sketch below)
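The second point in practice: rather than persisting a figure like a phase’s cost, derive it from the data that is stored. A tiny sketch with assumed shapes:

```javascript
// Illustrative, not the real schema: a phase's cost is never stored;
// it's calculated from its roles' booked days and rates on demand.
function phaseCostPence(phase) {
  return phase.roles.reduce(function (total, role) {
    return total + role.days.length * role.dailyRatePence;
  }, 0);
}
```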


Docker – the foundation of our project

We made a few ambitious decisions regarding tech choices in Badger Time. We’d be using Docker to handle our development and production environments. Both the seniors and us were super interested in this new technology, as it would solve a lot of current problems. Right now, most projects are handled using Vagrant for a virtual machine and Ansible for provisioning. This poses a performance hit, as everything is run on a virtual machine, and it can also take upwards of 30 minutes to get it up and running on a new machine. Docker eliminates this by running everything in containers, which are like ‘layers’ on top of the current host machine; containers can be built (similar to provisioning) once, pushed to a remote server, and then downloaded and run on any machine capable of using Docker.

Because Docker containers are purely layers on top of the existing system, they are much smaller and more portable than a full-blown virtual machine. It also means we eliminate any discrepancies between development and production, allowing for a much smoother deployment process.

Rails – the trustworthy powerhouse

We’ll be using Ruby on Rails to write a RESTful API which will handle requests and serve data from our database, as well as make frequent syncs of data from FreeAgent. Ruby on Rails is solid, easy to read and write, and provides a large repository of ‘gems’ which let us extend the functionality of our app easily. It was an easy, safe choice, backed up by the fact that the old Badger Time was written completely in Rails, and we could recycle some of the code as most of the business intelligence was up to date.

React.js and LiveScript – lightning fast rendering, with clean and structured code

Rather than making an isomorphic app, we took the same design principles as Haller and divided the backend and frontend of the app. This enables our app to scale much more easily: we can serve the frontend as a static site on a CDN like Amazon S3 (fast!) and then scale the backend separately. Using React and LiveScript, we can build a purely functional frontend, ditching the traditional MVC application model in favour of having our UI broken up into simple components which contain their own logic (and are ridiculously fast because of how React works).

Comparing functionally identical pieces of code, you don’t need to understand what’s happening to notice how much simpler and cleaner it looks in LiveScript! You can read Stuart’s post on this very same stack for a deeper understanding of why it’s awesome. We love it and we’re sticking to it!

So as you can see, it’s a pretty ambitious proposal full of new, exciting stuff, and this project is the perfect opportunity to test it all out. We’re keen to get the show on the road and get into the meaty part of the development work, but we’re also eager to build something that’s slick and will be considered a solid, well-written codebase. I have high hopes that Docker will become a thing around here, and that it might catch on as the go-to tool for handling DevOps, just as React and LiveScript have for frontend!


Computer Science students should work for a start up while at uni

by Albert Still

Don’t rely entirely on your grades to get you a job after uni. A CS degree will prove you’re intelligent and understand the theory, but that will only take you so far in an interview. We have to talk to potential employers about apps we’ve built, just as you’d want a carpenter to show you pictures of his previous work before you hired him.

While studying CS at university I worked 2 days a week for Red Badger, even in my final year when I had a dissertation. Some of my classmates questioned whether it was a good idea, suggesting the time it took up could damage my grades. But it did the opposite: I got better grades, because I learnt so much on the job. When you see solutions to real-life problems, you gain a better understanding of the theory behind them. And it’s that theory you get tested on at university.

What I’ve been exposed to that I wasn’t in lectures

  • Using open source libraries and frameworks. There’s rarely a need to reinvent the wheel: millions of man-hours have been put into open source projects, which you can harness to make yourself a more productive developer.
  • GitHub is our bible for third-party libraries. Git branches and pull requests are the flow of production.
  • Ruby – the most concise, productive and syntactically readable language I’ve ever used. Unlike Java, which the majority of CS degrees teach, Ruby was designed to make the programmer’s work enjoyable and productive. Its inventor Yukihiro Matsumoto said in a Google tech talk: “I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language”. Ruby is loved by startups and consultancies because they need to roll out software quickly.
  • Ruby on Rails – built in Ruby, it’s the most starred web application framework on GitHub. It’s awesome, and the community behind it is massive. If you want to have a play with it, I recommend its “Getting started” guide. (Don’t worry if you’ve never used Ruby before, just dive in; it’s so readable you’ll be surprised how much Ruby you’ll understand!)
  • Good habits – such as the DRY, YAGNI and KISS principles. The earlier you learn them the better!
  • Heroku – makes deploying a web app as easy as running one terminal command. Deploy your code to the Heroku servers via Git and it will return a URL where you can view your web app. Also, it’s free!
  • Responsive web applications are the future. Make one code base that looks great on mobiles, tablets and desktops. Twitter Bootstrap is the leading front-end framework; it’s also the most starred repo on GitHub.
  • JavaScript – the world’s JS-mad, and you should learn it even if you hate it, because it’s the only language the web browser knows! You’ll see that the majority of the most popular repositories on GitHub are JS. Even our desktop software is using web tech, such as the up-and-coming text editor Atom. If you want to learn JS, I recommend this free online book.
  • Facebook React – once-hard JS problems become effortless. It’s open source but developed and used by Facebook, so some seriously good engineers have built this awesome piece of kit.
  • Polyglot – Don’t just invest in one language, be open to learning about them all.
  • Testing – this was not covered in the core modules at my university, but Red Badger are big on it. Simply put, we write software to test the software we make. For example, we recently made an interactive registration form in React for a client. To test it we wrote Capybara integration tests, which load the form in a real browser and imitate a user clicking and typing into it. They rapidly fill in the form with valid and invalid data and make sure the correct notifications are shown to the user. This is very satisfying to watch, because you realise how much time you’re saving compared to testing it manually.

Reasons for applying to a start up

  • They are small and have flat hierarchies, which means you will rub shoulders with talented and experienced individuals – for example a visionary business leader or an awesome developer. Learn from them!
  • More responsibility.
  • They’re more likely to be using modern, front-line tech. Some corporates are stuck using legacy software!
  • If they become successful, you will jump forward a few years on the career ladder compared to working for a corporate.
  • Share options – own a slice of your company!
  • There are lots of them and they’re hiring!

Where to find start ups

A good list of hiring startups in London is maintained by the increasingly successful startup job fair Silicon Milk Roundabout. Also, Red Badger are currently launching Badger Academy: they’re paying students to learn web tech! This is extremely awesome when you consider UK universities charge £9,000 a year. If you’re interested in applying, get in touch.



There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

C.A.R. Hoare, 1980 ACM Turing Award Lecture


The Launch of Badger Academy

by Cain Ullah

Back in January this year I went away for a 10-day retreat. The initial intention was to get away from work completely. No phone. No internet. No work. However, unexpectedly, it ended up being incredibly conducive to coming up with a whole plethora of creative ideas. Some were non-work related, but lots of the new ideas were very much work related. (See this blog post I have written on Founders Week: The Importance of Taking Time Out.) One of these ideas, in its rawest form, was how we could source and develop young talent and turn them into very highly skilled developers, designers, project managers or whatever else. This has resulted in the quiet launch of Badger Academy this week.

A little bit of context

At Red Badger, a huge amount of investment goes into recruitment. Finding the best talent out there is difficult. As a company we hang our hat on quality, quality being the #1 Red Badger defining principle. As a result, we’re very fussy when it comes to hiring people. This, I have no doubt, will stand us in great stead for the future, so we are determined to maintain our standards in staff acquisition. But it poses a problem: how do we scale the business to service the ever-increasing demands of a rapidly growing sales pipeline, without reducing quality?

I think the answer is to improve our ability to develop from within. So, we are hatching plans to invest heavily in developing young talent to become senior leaders in their field. We realise this will take time but Badger Academy is the first experiment that we hope will fulfill the overall objectives.

A Blueprint for Success

In the summer of 2011, when we were a much, much smaller business, we put out a job ad for a summer intern. Out of the 60 or so applicants, one Joe Stanton stood out head and shoulders above the rest. By the time he joined us he had just started the 2nd year of his degree, so he worked with us for 8 hours a week. He had bags of talent but obviously lacked experience: as a Computer Science student he was being taught the vital foundational knowledge you’d expect from a Computer Science degree, but had no knowledge of modern web application engineering practices such as Behaviour Driven Development.

At the time, we had much more time to spend with Joe to ensure that he was doing things properly, and with our guidance and his astute intellect he developed his knowledge rapidly. He then spent a gap year with us, during which he was deployed and billed on real projects, before going back to part-time for his final year of university. He graduated this summer and, after a bit of travelling around Europe, joined us permanently. On his first day he was deployed onto a project as a billable resource, having had almost 3 years of industry experience. He has hit the ground running in a way that most graduates would not be able to.

Joe has been a resounding success. The problem is how to scale this to develop multiple interns, especially now that, as a company, our utilisation is much higher. We can no longer spare the senior resources to spend the sort of time we could with Joe at the very beginning.


Joe Stanton – The Badger Academy Blueprint!!!

The Evolving Plan

When I was at the aforementioned retreat, my ideas were based around a project we were just kicking off for an incredible charity – The Haller Foundation. We were embarking on a journey to build a responsive mobile web application to help farmers in Kenya realise the potential of the soil beneath their feet (for more info, search our previous blogs, and look out for more once the Haller website officially launches later this year). What was key in my thinking was that we had planned for a mixture of experience in the project team, which included two intern software engineers (one being Joe Stanton) who were working 2 days a week whilst completing their final year at uni. We were delivering the project for free (so Haller were getting a huge amount of benefit) while training and developing interns at the same time. Win-win.

So, this formed the basis of my initial idea – The Red Badger Charity Division. We would use interns to deliver projects on a pro-bono basis, for registered charities only. The charity would need to understand that this was also a vehicle for education, and thus would need to be lax on their timelines, while we developed interns through real-world project experience. Although a great idea, this wasn’t necessarily practical. In the end, the Haller project required dedicated time from senior resources and cost us over £20K internally to deliver. A great cause, but not a sustainable loss on which to build a platform for nurturing talent.

So, over the several months after my retreat (7 to be exact), in between many other strategic plans being put in place at Red Badger, and with the help of my colleagues, I developed the idea further and widened its horizons.

Rather than focusing on just charity projects (charity projects will remain part of the remit of Badger Academy), we opened the idea out to other internal product development ideas as well. We also put some thought into how we could ensure the juniors get enough coaching from senior resources to be trained properly.


Badger Academy’s primary objective is to train interns that are still at University who will be working part-time with a view to them having a certain level of experience upon graduation and hopefully joining Red Badger’s ranks. However, it may also extend to juniors who have already graduated (as a means to fast tracking them to a full-time job), graduates from General Assembly or juniors who have decided not to go to University.

It will require some level of experience, i.e. we will not train people from scratch. But once Badger Academy has evolved, the level of experience of participants will vary greatly. In the long term we envisage having a supply chain of interns that are 1st years, 2nd years, gap-year students and 3rd years, all working at once.

Youth Development

Above is a diagram I drew back in April 2014 when initially developing the future strategy for Badger Academy. This has now been superseded and developed into a much more practical approach but the basic concept of where we want to get to still remains the same.

So what about the likes of General Assembly?

Badger Academy does not compete with the likes of General Assembly. We are working very closely with General Assembly, providing coaches for their courses, and have hired several of their graduates. In fact, General Assembly fits in very nicely with Badger Academy: it is the perfect vehicle for us to hire a General Assembly graduate and fast-track them over a period of 3 months until they are billable on projects. A graduate would generally not have been a viable candidate for Badger Academy before doing the General Assembly course. Like I say, all candidates need a certain level of experience beforehand; Badger Academy is not a grassroots training course.


It is imperative that interns and juniors are trained by more senior resources. As a result, we’ll be taking one senior resource off a billable project for one day a week to dedicate their time to training the Badger Academy participants. To reduce the impact on specific projects, we will rotate the senior coaches across multiple projects. We will also rotate by the three university terms. So for the autumn term we will have 3-4 senior coaches (all from separate projects) on weekly rotation until the end of term. For the spring term we will refresh the 3-4 coaches, and again for the summer term. This way everyone gets to teach, there is some consistency of tutors for the interns during term time, and project impact is mitigated.


There will be a set syllabus of training topics for each discipline. As this is the first week, we have decided to build the syllabus as we go. Our current interns are both software engineers, so we can imagine getting pretty quickly into engineering practices such as testing strategy (e.g. BDD), but also other disciplines that are vital to delivering quality products, such as Lean/Agile methodologies, DevOps and all of the other goodness that Red Badger practises daily.

This is an initial blog about our current activity and is light on detail. As this develops, we’ll formalise the approach and publish more insightful information about what it actually entails.

What we need to not lose sight of, is that this is an innovation experiment. We need to learn from it, measure our success (as well as our failures) and adapt. This is part of a long term strategy and we are just at the beginning.

Disclaimer: Red Badger reserves the right to change the name from Badger Academy. This has not been well thought through!


Designers, learn how to code (at Steer)

by Michel Erler

First things first: my name is Michel, I recently moved to London from the maritime city of Bremen, Germany, and I joined Red Badger as a design intern a couple of weeks ago. I study Interaction & Moving Image at the London College of Communication and came here to get my teeth stuck into the wonderful world of technology.

As a young designer it feels kind of weird being so familiar with the internet and at the same time not knowing much about the way sites actually work. Every member of the web generation should know a bit about the (virtual) places where they spend their time. I knew I wanted to get a handle on coding, but needed a foundation to build on. Recently I came across a company called Steer, which claims to teach anyone how to code and build websites. After a few emails I decided to attend their intensive front-end web development course: five days of key bashing leading to a kick-start in coding.


Rik explaining some JavaScript


So I arrived on Monday at their studio in Clerkenwell and was welcomed by a very friendly team with a tasty breakfast. Our course teacher Rik has worked in web development for over 8 years, and he was able to translate these hieroglyphics into something that actually makes sense to us beginners. We started with the very basics of HTML, and it wasn’t as complicated as I thought. The very first site we built reminded me of the web as it used to be when I was six years old (e.g. Space Jam). We soon went on to more CSS, and at the end of the day we had a simple, yet good-looking site.

On the second day we learned about grids and forms, so we were able to build a decent site that could go straight onto the net. Adding further CSS knowledge on day three, I was quite surprised how easily you could create impressive 3D animations and cutting-edge layouts.


On the last two days we went into JavaScript, mainly using jQuery. I think that’s quite a big step for a coding virgin. It’s challenging at first, but the results were pretty amazing. We took website building to another level: responsive sites, integrating Twitter feeds and Google Maps, and using parallax scrolling (every designer’s favourite right now).


The course really demystified the world of coding for me. I never thought you could gain such a comprehensive set of skills within five days. Now it is up to me to keep practising and learning. There are plenty of sites that make it a lot easier to stay motivated. But in case you get stuck (which is highly likely to happen at some point) and you can’t figure out the bug yourself, it is good to know that Steer offers bi-weekly coding emergency sessions for all alumni.

Coding really feels like an essential skill for the modern working world, especially if you have a couple of years ahead in your career. I am now in a position to create prototypes on my own. I probably even know some things a lot of front-end developers out there don’t, as Steer teaches you the newest stuff available. All in all, I was more than satisfied with the course, and I’m thinking of attending the new iOS course in the summer.

There’s nothing to be afraid of. Don’t wait, learn how to code! 

To find out more about the different courses that Steer offers, just visit their website.

Photo: Steer