23
Oct
2014

Badger Academy Week 8 – Frontend testing using WebdriverIO, Stubby and CucumberJS

by Tiago Azevedo

Over the past few weeks on Badger Time, we’ve had a steady workflow for the API where we followed TDD principles of writing feature tests first and code later. It wasn’t an issue to set that up in Rails as the various gems already out there for Ruby (RSpec/FactoryGirl specifically) made it a breeze.

The frontend was a different beast altogether and required quite a lot more thought which we finally decided to give over the past week.

The problems and their eventual solutions

There were several problems which we struggled to solve initially. Firstly, we had to run a GhostDriver instance which would allow our testing suite to communicate with PhantomJS. We’d also have to run a Node server simultaneously which would serve the app in a test environment to the PhantomJS browser.

Doing this was a bit tricky; Gulp’s asynchronous nature meant that running those background processes from within Gulp was a no-go. Depending on how quickly the server launched, tests could pass or fail, as the server might not be up before the tests ran.

It was probably more effort than it was worth to find a workaround for it so we simply added the processes as a part of the container’s boot sequence. As our containers were based on Phusion BaseImage it was a case of adding simple init scripts to BaseImage’s custom init process.

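# GhostDriver: run PhantomJS in WebDriver mode on port 8080 for the test suite to talk to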
start-stop-daemon --start --background --quiet --exec /bin/bash -- -c "/usr/bin/phantomjs --webdriver=8080 --remote-debugger-port=8081 --ignore-ssl-errors=true > /tmp/phantom.log"
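# serve the app in the test environment to the PhantomJS browser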
start-stop-daemon --start --background --quiet --exec /bin/bash -- -c "node /data/server.js"

That was one catch out of the way. The next issue we faced was actually running the tests. Previously we took advantage of gulp-run to pipe our compiled spec files (we wrote the tests in LiveScript!) to the CucumberJS executable.

This was a bit overkill and we ended up just using Node’s script system to run the compile task then run the CucumberJS task on the appropriate files. As a side-effect, we got really nice formatting on the tests so we could see exactly what went wrong (if it failed).

[Screenshot: nicely formatted CucumberJS test output]

Nice!

We had these tests running with the API endpoint set as a local Stubby mock API. Stubby’s Node implementation gave us a programmatic API which meant we could start, stop and modify the API as our tests were running.

This allowed us to feed data from Gherkin (the Cucumber language) data tables into a function which simply modifies an endpoint with the supplied data. It removed our dependency on the real API for the frontend tests, which cut our CircleCI build times from a staggering 15–20 minutes down to 2–3.
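
To give a flavour of how that hangs together, here’s a minimal sketch (the endpoint, port and step function are illustrative, not our actual code):

var Stubby = require('stubby').Stubby;
var mockApi = new Stubby();

// boot the mock API before the test run
mockApi.start({
  stubs: 8882,
  data: [{
    request: { url: '/api/projects', method: 'GET' },
    response: { status: 200, body: JSON.stringify([]) }
  }]
}, function (err) {
  // mock API is up; safe to kick off the Cucumber suite
});

// inside a step definition: push a Gherkin data table into an endpoint
function givenTheseProjects(table, callback) {
  mockApi.post({
    request: { url: '/api/projects', method: 'GET' },
    response: { status: 200, body: JSON.stringify(table.hashes()) }
  }, callback);
}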

A look at WebdriverIO

Selenium WebDriver is somewhat of an elephant in the office at Red Badger. We all dislike it – even you, Pete, you just don’t know it yet – but we put up with it. The API is just a bit rubbish and documentation is quite difficult to find. As somebody working out its usage from scratch, I can say my options were quite limited: spend hours sifting through Java documentation and hope it works the same in the JavaScript implementation, or go through endless amounts of user issues trying to find a solution matching my own problem.

That’s where WebdriverIO helped tremendously. It’s a JavaScript wrapper around Selenium’s confusing API, and it offers quite a few helpful additions of its own. Just having documentation – however incomplete it might be – was a godsend. At least the functions which aren’t documented link to their source, so we can see what’s going on and extrapolate from that.

How LiveScript facilitates the callback-based nature of CucumberJS

If you’re familiar with the term ‘callback hell’ then you know how asynchronous code can be a real pain to deal with: you end up with nested logic inside nested logic inside a browser action, all ending with a callback to the top level to pass (or fail) the test. Take this simple example of a browser action which types a phrase into an input on the screen. In JavaScript, we can immediately see why it quickly grows into something that isn’t nice to deal with.
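
Something along these lines (a sketch – the URL, selector and assertion are illustrative; client is the WebdriverIO instance and callback is Cucumber’s step callback):

var assert = require('assert');

// type a phrase into an input, read it back, then tell Cucumber we're done
client.url('http://localhost:8000/projects', function (err) {
  client.setValue('#search', 'badger time', function (err) {
    client.getValue('#search', function (err, value) {
      assert.equal(value, 'badger time');
      callback(); // bubble back up to the top level to pass the step
    });
  });
});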

We take advantage of LiveScript’s unnested callbacks to write code which is functionally the same as the example above, but reads and writes like synchronous code (much easier to handle).
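
Roughly, with LiveScript backcalls (the same illustrative example as above):

# client is the WebdriverIO instance; callback is Cucumber's step callback
err <- client.url 'http://localhost:8000/projects'
err <- client.set-value '#search', 'badger time'
err, value <- client.get-value '#search'
assert.equal value, 'badger time'
callback!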

Writing our tests is straightforward due to the way Cucumber works, and in most cases we don’t even need to write any code for new features, as we recycle logic from the more generic step definitions.

We’re excited to finally be able to adhere to BDD principles on our frontend. After all, the whole premise of Badger Academy isn’t to ship a finished product, but to bring our code quality and knowledge to a higher level.

20
Oct
2014

London React October Meetup

by Chris Shepherd

Last week saw the fourth London React User Group and yet again the turnout was fantastic for such a young user group.

There were three talks this time and the first, “Learning to Think With Ember and React”, was by Jamie White who runs the Ember London User Group. Red Badger's own Stuart Harris had previously done a talk on React at the Ember User Group that had been well received, so it was time for them to come and talk to us. Jamie talked about how it was possible to combine Ember with React and it was interesting to see how flexible React is and how it's so easy to fit into a variety of existing workflows.

Next up was Rob Knight with his talk, “Diff-ing Algorithms and React”. Rob covered something that I think a lot of us now take for granted with React: how it updates the DOM through the use of a virtual DOM and efficient algorithms.

Lastly, Markus Kobler's talk, “React and the Importance of Isomorphic SPAs”, showed how to use React to avoid poor SEO and slow loading times. These are usually considered the pitfalls of building Single Page Applications.

If you missed the talks, you can watch them on YouTube below. To be informed about the next meetup, sign up here.

Hopefully we'll see you there next time!

14
Oct
2014

Badger Academy – Week 7

by Sarah Knight

It’s week 7 at Badger Academy, and it feels like things are really starting to come together. As the codebase begins to take shape, and more of the blanks are being filled in, I’m finding it easier to contribute as there are now more examples for me to refer to and less code to write completely from scratch. I spent a couple of days building the Roles section (Projects > Phases > Roles), on the frontend and feel like I’m really starting to grasp how things are linked together, where code is being called from, and what properties are getting passed from one section to another.

Money, money, money

Tiago and I started the week pair-programming to get the money values working properly. We installed the money-rails gem and created migrations to add the suffix ‘_pence’ to the money columns – e.g. the fixed_price column in Phases was renamed to fixed_price_pence. Using the monetize method, though, the suffix is ignored everywhere else, so you can still refer to the attribute as fixed_price.

The monetize method in the models creates a Money object from the attribute. Money is converted from pounds to pence and saved in the database as pence, and we make sure to convert back from pence to pounds in the views. This means calculations are done using integers, so no weird rounding errors can creep in from floats. Also, should we ever go international with Badger Time, or start getting paid in exotic currencies, having everything set up with money-rails should make currency conversion a doddle.
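
In the model it looks roughly like this (a sketch – the as: option is one way to map the _pence column back to fixed_price, and we’re assuming GBP is configured as the default currency):

# app/models/phase.rb
class Phase < ActiveRecord::Base
  # integer pence in the database, a Money object in the code
  monetize :fixed_price_pence, as: 'fixed_price'
end

# pounds in, integer pence stored, a Money object out:
phase = Phase.new(fixed_price: '150.00')
phase.fixed_price_pence  # => 15000
phase.fixed_price.format # => "£150.00"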

Promises, Promises

Clicking around in the browser, I discovered a bug where the page was rendering before data had finished loading. A phase belongs to a project, and I found that trying to access some of the phases resulted in an error. It turned out that after fetching each project, and then each of the phases for the first project, the data was being registered as fetched before the other phases had been retrieved.

Viktor decided that the best way to solve this issue was through the use of promises, an entirely new concept to me. The basic idea is that they can store tasks in memory, and ‘promise’ to do something in future, once other criteria have been fulfilled. So you can hold off an action until other tasks have been completed.

The really clever thing about promises is that you can chain them together, so that once a stage in the code is reached, you can start another promise, and then another one, and so on. Then each promise will wait for the required actions to be completed before launching its own action, until you get back to the first promise. Everything will run in the sequence you’ve set, and you know that the final task won’t be run until everything else has finished. Another really useful feature is the .all function, which allows you to run several tasks in parallel, and wait for them all to finish before running another task. This would be much more difficult just using classic node callbacks.

By passing in a silent option, we could hold off on notifying the listeners that data had been fetched until it truly had all been fetched. It also cut down on the number of times the page was being re-rendered, as previously it was rendering after every single item was fetched, which would get ridiculous once Badger Time was filled with content (and was already slightly ridiculous with the small amount of example content that’s in there currently!).

We installed the Bluebird promise library, and then required it in each file we added promises to.
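
The shape of it, with hypothetical store and helper names rather than our actual code:

var Promise = require('bluebird');

// fetchProjects, phasesStore and notifyFetched are illustrative stand-ins
function fetchAllProjects() {
  return fetchProjects().then(function (projects) {
    // fetch every project's phases in parallel, silently (no notifications yet)
    var phaseFetches = projects.map(function (project) {
      return phasesStore.fetch(project.id, { silent: true });
    });
    return Promise.all(phaseFetches);
  }).then(function () {
    // everything has truly arrived – notify listeners and render once
    notifyFetched();
  });
}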

In Badger Time itself, the code added to the Projects store starts the chain, and calls into the Phases store to fetch each project’s phases before the data is declared fetched.

26
Sep
2014

Badger Academy Week 5

by Eric Juta

Over the last week, we worked on deployment and on developing on top of the Flux/Om-like architecture that Viktor scaffolded before our eyes. This week we were joined by the digital poet Joe Stanton to help spell out solutions to our woes.

It seems that there is an endless stream of problems on any project; we truly are being moulded into “True Grit” characters throughout this academy. Keeping up to date with development, we decided to tackle the troubling issue of front-end testing, which had been lingering for the past few weeks ever since we hit that milestone.

These past two weeks, Alex joined us to scaffold out the frontend modular structure with proper SCSS styling! This week, the majority of Joe’s work went into temporarily stubbing out the real API calls for testing sessions on the smart TV.

Styling separation

The final solution, which has proven itself over the last few weeks, was to compile the SCSS files in a Gulp build task.
Sadly, without Ruby installed in the Node Docker container, we were unable to use the Ruby Sass compiler!
#INTERNALSASSVSLESSWARS
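
The task itself is tiny – something like this, assuming the libsass-based gulp-sass plugin and illustrative paths (written in plain JavaScript here for clarity):

var gulp = require('gulp');
var sass = require('gulp-sass'); // libsass under the hood, so no Ruby required

gulp.task('styles', function () {
  return gulp.src('src/styles/**/*.scss')
    .pipe(sass())                  // compile SCSS to CSS
    .pipe(gulp.dest('public/css'));
});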

Adhering to the Assets, Modules, Vendor and Partials folder structure, we learned a straightforward way to scaffold styling:
Assets – personal static files that aren’t of a stylesheet type (image files and fonts!)
Modules – separate files for particular views/component pages within your application
Vendor – generic files from open source frameworks or non-personal stylesheets
Partials – potentially default mixins, if you would like to label them that way!

SVGs

SVGs are scalable vector graphics. Since we’re using a responsive CSS grid framework (we extracted and imported the grid system out of Foundation; lovely, if you ask us), we definitely need scalable images, otherwise our application will look ugly!
#PixelPerfect?
Thanks to Sari, Badger-Time has its gorgeous logo exported to SVG format! Our proud work now renders perfectly on any device, at any resolution our audience pleases.

Continuous integration/deployment

We can frankly say that API development, including the RSpec TDD approach, is going smoothly.

We finally scaffolded the last part of the continuous integration + delivery process this week: Amazon S3/CloudFront deployment for the frontend and a DigitalOcean droplet for the backend.

Both of these were relatively straightforward compared to other obstacles we’d come across! Thank you, open source contributions! (Namely the Gulp S3 plugin and Docker within DigitalOcean droplets.)

Tiago created his own Node.js webhook module: on receiving a POST request (with a required token, of course) sent after the CircleCI tests have passed, it pulls down the required Docker image with all the binary dependencies pre-installed and swaps in the freshly cloned, production-ready version of the application.

For the frontend, deployment is done through running a basic Gulp ‘deploy’ task.
It’s also good to note that environment variables can be set in CircleCI and read from that Gulp deploy task!
12Factorness!

Tiago’s open source contribution: https://github.com/tabazevedo/catch

As he likes to call it: CircleCI in under 50 lines!

Fighting the frontend tests

Our attempt to scaffold Stubby programmatically failed: LXC container network address mapping issues.
Joe’s fierce battle ended with the API endpoint redirected to mocked-out API routes, responding with JSON datasets only for the duration of the Gulp ‘test’ task.
This required restarting the Node.js server between each CucumberJS test; absolutely brilliant!

At one point during the debugging process, Joe was unable to tell whether the correct ‘test API’ was being requested, so he lazily evaluated the real API to force confirmation. …I know, right?

In the end Joe fathomed the situation, but given the constraints and our obvious refusal to recreate the backend API logic in Node.js just for frontend testing, the result was static datasets. The situation remains unresolved.

A potential final option is to reroute the testing API endpoint to a deployed “staging” backend API, then deploy the backend API to production in succession.
This keeps the logic intact while separating the data pools.

For next week, Badger-Time faces OAuth, serialising and sync-diffing algorithms!
Lesser woes, we’d agree; honestly.

22
Sep
2014

Badger Academy Week 4

by Sarah Knight

It’s week 4 of Badger Academy, but for me personally as the 3rd intern to join, it’s the first week. Not only am I a few weeks behind on the Badger Time project, but fresh from the 3 month Web Development Immersive course at General Assembly, I’m also several years behind Tiago and Eric in terms of general programming experience. So my first few days were spent in a state of confusion and growing panic as I tried to read up on an ever-growing list of techniques and technologies that were completely new to me.

Vagrant, Docker, React.js, Gulp, Gherkin, Phantom.js, Browserify, Nginx, Selenium, and CircleCI were a few of the terms I was busy googling. I now have a rough grasp of what most of these are and how they fit together, but it might be a while before I can blog about them with any confidence! Watch this space…

By Wednesday, though, I was able to get stuck in and start writing some proper code, which felt good. I made a start on some tests for the API. We were thinking about using Cucumber for these, but in the end it was agreed that plain RSpec made more sense for the technical back end, with the more readable, English-like Cucumber tests reserved for the front end and its potentially less techie readers.

Viktor was our senior developer this week, and spent time helping me write some tests for the JSON responses. He also helped refactor some of the React.js code on the front end while also giving me an overview of how it all fits together. This was really helpful, as I think I’m now beginning to understand React on a conceptual level … we’ll see how it goes when it comes to actually working with it though!


Github Flow

With 3 full-time team members plus 3 part-time senior devs on this project, having a standardised system for version control is important. Most of the projects I’ve worked on previously have been solo efforts, so it was crucial for me to understand the system in place and make sure I didn’t mess up. Luckily we’re using the Github Flow workflow, which is simple to pick up, and facilitates continuous deployment.

The workflow:

1) Create a new branch locally

Create a new descriptive branch locally from master and commit regularly. Naming things descriptively is always tricky, but done right, it allows everyone to see who’s working on what.

2) Add commits

Committing regularly allows you and others to keep track of your progress on the branch. Each commit is like a snapshot of the branch at a particular time, so you don’t want to leave it too long between commits or too much will have changed. With regular commits of small chunks of code, if you introduce bugs or change your mind about something, you can rollback changes easily. (It’s a bit like time travel!).

3) Open a Pull Request

Once you are ready to merge to master, or want some feedback, open a pull request. Pull requests allow others to review your code, and everyone can add comments. Because Pull Requests accept Markdown syntax, you can even create tickboxes of things to be ticked off (top tip courtesy of Alex!).

4) Discuss and review code

Once a pull request has been opened, other people can see what you’ve been working on, and enter into discussion on Github about it.

5) Merge and deploy

Once you’re happy with the code, and it passes all the tests, you can merge to master. We have CircleCI set up to automatically test code once a pull request has been opened, so you can easily see whether the code is passing tests before you merge.

The golden rule of Github Flow is: Anything on the master branch is deployable.

Any code on the master branch has been tested and is totally stable. You can create new branches from it with confidence, and deploy from it. We don’t yet have any kind of production server set up, so there is currently no deployment. However, the whole point of Github Flow is continuous deployment, so once that’s up and running, this step will be implemented regularly.
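
Day to day, the whole flow boils down to a handful of commands (the branch name here is illustrative):

git checkout master
git pull origin master            # always start from a deployable master
git checkout -b roles-section     # 1) create a descriptive branch
git add . && git commit -m "Add roles section"   # 2) commit regularly
git push origin roles-section     # 3) push and open a pull request on GitHub
# 4) discuss, review, and wait for CircleCI to go green
# 5) merge to master – which is always deployable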

Next Week

To ensure that we learn as much as possible about all aspects of development, we’re taking it in turns to work on the different parts of the project. So just as I was starting to get to grips with the API this week, next week I’ll be doing something completely different and taking a turn on the front end. However, I’m looking forward to exploring React.js and seeing how the testing differs.

17
Sep
2014

Badger Academy Week 3

by Tiago Azevedo

The third week of Badger Academy has passed, and with it ends the first cycle of seniors helping interns. This Thursday we were paired with Joe Stanton. We ran into a lot of problems during the week, which left us somewhat frustrated but also increased our eagerness to learn. Most of our environment setup for development had been done by this point. We managed to decrease our Docker build times from ~20 minutes to 3–5 minutes, depending on how good a day the server was having, but overall builds were consistent and fast.

Our focus this week was on testing standards. We were aware of the best practices for testing our software, but implementing them within our projects was what took the bulk of our time.

Testing the API

Testing the Rails backend was fairly straightforward. When we scaffolded the controllers and models for our project, a set of pre-generated RSpec tests was provided for us. Most of them were fairly unoptimised, and some were suited not to an API but rather to a project written completely in Rails.

We kept a few things in mind while writing these tests:

  • Keep tests of one model/controller isolated from other models and controllers
  • Avoid hitting the database where we could
  • Avoid testing things which are covered by higher-level tests

Expanding on that third point, Joe helped explain what layers to test and what layers we could skip. At the core of our app we have model tests, which would be independent of the database and would test things like logic and validation. These should eventually make up the majority of our tests, but for the meantime we only have a few validation checks. The ‘medium-level’ tests were things like routing and request tests.

We ended up skipping the routing tests, since once we got to the higher-level integration tests we could infer that if those passed, all our routing was correct. We kept request tests to a minimum, only checking that the API returned the correct status codes, so we’d have a sense of consistency across the app; those weren’t necessarily implied by the integration tests.

Following that, we removed the unnecessary stuff and, through the use of FactoryGirl, converted our logic and validation tests to avoid hitting the database, as it would cause a significant slowdown once our project became larger. Some of our higher-level controller tests did hit the database, but this is unavoidable in most cases, and attempting to bypass it would have been more trouble than it was worth.
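
To illustrate the database-avoiding style (the :phase factory and the name validation here are illustrative, not our actual specs):

# spec/models/phase_spec.rb
describe Phase do
  it 'is invalid without a name' do
    # build_stubbed gives us an instance without ever touching the database
    phase = FactoryGirl.build_stubbed(:phase, name: nil)
    expect(phase).not_to be_valid
  end
end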

Testing the Frontend

Our frontend testing was much more difficult to set up. We’re currently running a stack of PhantomJS, CucumberJS and Selenium. CucumberJS is a tool that allows us to write tests in a human-readable format, so that anyone without an understanding of programming can see what’s happening, and even write their own tests if they want to. This is the basic premise of BDD (behaviour-driven development): we write tests for the software’s functionality beforehand, from the standpoint of the end user, in a language they can understand. This differs from the TDD (test-driven) principles used in the API, as that is written purely in Ruby and not necessarily from a user’s point of view.

 
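A feature in that style reads something like this (an illustrative sketch rather than one of our real features):

Feature: Viewing projects
  Scenario: A signed-in user sees their projects
    Given I am signed in
    When I visit the projects page
    Then I should see a list of projects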

That’s an example of a test written in Gherkin (the CucumberJS language – yes, we are aware of all the slightly strange vegetable references). You can probably guess what it tests for. Behind the scenes, the software captures and identifies each of those lines and performs tests based on the parameters specified (e.g. what page you’re on and what action you’re performing).

One issue we struggled with was how to isolate these tests from the API. Since the pages display content from the backend, we needed a way to test using fake data. We went through a variety of methods during the week. At first we thought of simply stubbing out the calls to the API using Sinon, a popular JavaScript mocking and stubbing library. While this would have been the most robust option, we had big difficulties using it with Browserify – a tool we use to bundle the entire application into one file – so we settled on creating a fake API server using Stubby, which runs only for the duration of the tests and can serve multiple datasets to the frontend, so we can still test a variety of cases.

CircleCI

With the testing frameworks down, we expect to make fast progress from here on out. We ended up learning and using CircleCI, which automatically runs the tests on any pushes or pull requests made to the GitHub repos. This makes sure we only merge into master when everything is working as planned, and that all tests pass on a fresh system before deployment.

Despite all the new technology we’ve introduced, everything is going more or less smoothly, and we couldn’t ask for a better foundation to build this project on. Not only are we rethinking the way the tech badgers go about development, we’re also streamlining the entire production process with lower build times, safe and consistent deployment, and a highly scalable, portable infrastructure.

29
Aug
2014

Badger Academy week 2!

by Eric Juta

This week in Badger Academy, we were joined by Alexander Savin, a senior engineer of many talents. He assessed the current state of our DevOps, including the decision to use Docker.
Finalising last week’s architecture choices, we promptly laid down the foundations for the road ahead.
#9containerDevOpsRefactoringWoes
There really was a lot of googling, not much stackoverflow!
Deciding on a one-command workflow for any compatible unix system, we proceeded to create the mammoth script.


Bash Shell Script

Iteratively tweaking it (Agile!) in the end allowed us to do the following (sketched in shell after the list):

  • Git clone Badger-Time
  • Use Vagrant to bring up the initial CoreOS VM
  • Run the shell script from within the SSH session to build the Docker containers

(Current container stack, each with its respective data container: Rails API, Redis, Postgres, Node, Nginx)

  • Pull preinstalled images down
  • Add our config files into them; specifically our Nginx and SSL certificates
  • Mount our Badger-Time code into its respective destinations
  • Install Node and Rails dependencies, then create the databases and migrate them
  • Run all the linked containers, with persisted daemons and their services, in hierarchical order
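
Condensed, the container-running half of that script looks something like this (image and container names are illustrative, not our actual setup):

# data container first, then each service linked to it, in hierarchical order
docker run -d --name db-data -v /var/lib/postgresql busybox true
docker run -d --name db --volumes-from db-data postgres
docker run -d --name api --link db:postgres -v /data/badger-time/api:/app redbadger/rails-base
docker run -d --name web --link api:api -p 443:443 -v /data/badger-time/nginx:/etc/nginx/conf.d redbadger/nginx-base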

Voila!

Badger-Time code up and running on any potential unix system in less than 15 minutes, without any further interaction.
That might sound like a lot, but it’s only possible thanks to the high internet speed within the office!

Advantages

The advantages we discovered in this approach, compared to the previous Badger-Time Vagrant + Ansible setup, were vast, in so, so, so many ways!

First of all, an all-in-one up command: we have one extra intern joining us in a week’s time, and getting her laptop up to the current version will require little to no effort.
(Yes, we’ve already tested it, on her office preview day.)

  • No makefile building? Yes please!
  • Faster tests
  • Reduced memory footprints
  • Same environment from development to our build server to our deployment server
  • Isolate local dev dotfiles and configs from the application
  • 12factor application coherence!

Disadvantages

There are disadvantages too, as you would imagine with any new technology:

  • Initial volume mount mapping configuration
  • Networking association is difficult to comprehend.
    (Dynamic host files generated by linked containers, exposed ports, vagrant)
  • Developer productivity affected by added configuration complexity
  • Double layer virtualisation! Linux native support only
  • The lack of a structured DevOps docker approach documented online leaves a lot of decisions to the creator.

Admittedly, as we’re still continuously learning, we will grow into the software architect’s hat over time.
Luckily we have constant surveillance of, and access to, the senior engineers over Slack! #badgerbants

Scaffolding the frontend

With the majority of the DevOps for the developer environment out of the way, we discussed with Alex potential ways to scaffold the frontend tests.
This took a lot of learning Gulp with him, to further customise our frontend workflow.

Gulpfile.ls

Our gulpfile was set up to do the following tasks:

  • Pull down npm and bower dependencies
  • Build LiveScript React.js components, Index.Jade, Less files, Semantic Grid system
  • Browserify, Concatenate, Uglify
  • Build the LiveScript tests for compatibility with CucumberJS
  • Start the Phantomjs service from within the docker container before running the CucumberJS tests
  • Watch for source code file changes and compile

Letting Gulp do such things allows us to commit and push less code to GitHub, plus we get the added developer workflow productivity!
Less context switching – the above are just abstractions!
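
For a taste, the Browserify part of the gulpfile looks roughly like this (written in plain JavaScript here, though ours is a Gulpfile.ls; the liveify transform and paths are assumptions):

var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

gulp.task('build', function () {
  return browserify('./src/app.ls')
    .transform('liveify')          // compile the LiveScript components
    .bundle()
    .pipe(source('bundle.js'))     // wrap the bundle as a vinyl stream for Gulp
    .pipe(gulp.dest('public/js'));
});

gulp.task('watch', ['build'], function () {
  gulp.watch('src/**/*', ['build']);  // recompile on source changes
});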

Food for thought

One problem that had to be overcome was the choice of running the frontend tests from within the container or outside it.
We have to keep in mind that the tests will inevitably be run within a build server environment before deployment.
And because Nginx serves the static files from a container, it poses the question:
should we reroute the webdriver to test from the outside in?

We were a bit stumped at first so can someone document a best practices guide for Docker networking + Docker Frontend testing please!
It may be the case that someone at Red Badger will have to!

Next week’s tasks!

Next week, Tiago and I will ponder what kind of tests should be written.

BDD is a major cornerstone of the quality of our projects; we’ll have to work out how to implement it with a split frontend and backend!
Let alone learn API design!

27
Aug
2014

Red Badger does Podcasting!

by Roisi Proven


 

Always keen to branch out into all aspects of tech, Alex Savin, Robbie McCorkell and I have been badgering away (ho ho!) at creating our very own podcast.

Tentatively titled “Red Badger Don’t Care…”, the podcast will cover a wide range of tech and science topics, and will regularly feature guests from both inside and outside of Red Badger.

We are currently preparing to record our third episode, and although we have not yet released on iTunes, our first two episodes are available for download now and we welcome feedback! 

Episode 1: Red Badger Don’t Care…About Mr Sir Benedict Cumberwumble

Listen here: http://radiobadger.com/episodes/Badgercast-episode-01.mp3

Also known as “Three people figuring out how to Podcast”, we navigate the strange new world of talking about interesting things while microphones are pointing at you. We discuss Dark Matter, Facebook React, GitHub, and Roisi’s confusing feelings about Patrick Stewart. 

Episode 2: Red Badger Don’t Care…About Skydiving Sheep

Listen here: http://radiobadger.com/episodes/Badgercast-episode-02.mp3

The difficult second album of our podcasting lives! We discuss hyperlapse, renewable energy, Babylon 5, outdated laws and Piracy (the internet kind, not the boat kind). We also introduce a new game, 3 degrees of Law and Order!

 

You can learn more about each episode, and get updates about future episodes, by following us over at Radio Badger.

 

We have some great interviews planned with Joe Dollar-Smirnov and Cain Ullah, as well as some exciting plans to talk to people outside of Red Badger. We’re still learning and growing and we look forward to getting our work out in the world!

22
Aug
2014

The first week of Badger Academy

by Tiago Azevedo

Last Wednesday marked the beginning of Red Badger’s intern training program – Badger Academy. As interns, Eric and I will be the ‘prototypes’ with a dynamic syllabus covering the fundamentals.

Our guidance consists of a full day a week with a senior developer, and in this case we’ll be re-engineering an internal app that was started two years prior but left unfinished.

Badger Time Reborn

Badger Time, wittily named by Cain, was to be a resource management platform. At a basic level, it would enable a business owner to plan potential projects and analyse ongoing projects, calculating financial figures based on how many hours people were assigned to projects and how many hours they’d fulfilled (using data from the FreeAgent platform).


We collectively decided that the best course of action was to build it up again from scratch, using the old codebase as a reference point, as that would take less time and effort than fixing what was currently wrong.

As far as any intern is concerned, writing any software from scratch is a daunting task. We took the briefing with a positive attitude, though, and reveled in the prospect of learning every aspect of building a working, maintainable piece of software.

Planning

The first stage of any structured task is planning – what do we want, how do we want it, and what problems will we face? Thankfully, we were able to recycle the designs from the old project, which simplified a lot of the what and the how.

User Stories

The bulk of last Thursday was spent on these. Viktor, the senior assigned to us for the day, took us through building a backlog of user stories for each feature that would be in the minimum viable product. Building these user stories helped us understand the features from a user’s point of view, and simplified the process of figuring out potential problems. We used Trello to organise them, as it allowed us to sort the backlog by priority and establish a pipeline of things to get done.

Building a data model

As we’d be handling large amounts of data coming from different sources, it was imperative that we had a well-built data model. There were two main factors to keep in mind:

  • Avoid repeating the same data in various places
  • Avoid storing anything in the database that can be calculated on demand

Technology

Docker – the foundation of our project

We made a few ambitious decisions regarding tech choices in Badger Time. We’d be using Docker to handle our development and production environments. Both the seniors and us were super interested in this new technology, as it would solve a lot of current problems. Right now, most projects are handled using Vagrant for a virtual machine and Ansible for provisioning. This carries a performance hit, as everything is run in a virtual machine, and it can take upwards of 30 minutes to get up and running on a new machine. Docker eliminates this by running everything in containers, which are like ‘layers’ on top of the host machine; containers can be built (similar to provisioning) once, pushed to a remote server, and then downloaded and run on any machine capable of using Docker.

Because docker containers are purely layers on top of the existing system, they are much smaller and more portable than a full-blown virtual machine. It also means we eliminate any discrepancies between development and production, allowing for a much smoother deployment process.
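
In workflow terms, that’s roughly (the image name is illustrative):

docker build -t redbadger/badger-time .   # build the image once
docker push redbadger/badger-time         # push it to a remote registry
# ...then, on any machine capable of using Docker:
docker pull redbadger/badger-time
docker run -d redbadger/badger-time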

Rails – the trustworthy powerhouse

We’ll be using Ruby on Rails to write a RESTful API which will handle requests and serve data from our database, as well as making frequent syncs of data from FreeAgent. Ruby on Rails is solid, easy to read and write, and provides a large repository of ‘gems’ which let us extend the functionality of our app easily. It was an easy, safe choice, backed up by the fact that the old Badger Time was written completely in Rails, and we could recycle some of the code, as most of the business intelligence was still up to date.

React.js and LiveScript – lightning fast rendering, with clean and structured code

Rather than making an isomorphic app, we took the same design principles as Haller and split the backend and frontend of the app. This lets our app scale much more easily – we can serve the frontend as a static site from a CDN like Amazon S3 (fast!) and then scale the backend separately. Using React and LiveScript, we can build a purely functional frontend, ditching the traditional MVC application model in favour of breaking our UI up into simple components which contain their own logic (and are ridiculously fast because of how React works).

Compare the following (functionally) identical pieces of code:
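
(A sketch in that spirit – a tiny React component written both ways; illustrative, not the original snippets.)

// JavaScript
var Hello = React.createClass({
  render: function () {
    return React.DOM.div({ className: 'hello' }, 'Hello, ' + this.props.name);
  }
});

# LiveScript
Hello = React.create-class do
  render: ->
    React.DOM.div class-name: 'hello', "Hello, #{@props.name}"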

You don’t need to understand what’s happening to notice how much simpler and cleaner the LiveScript looks! You can read Stuart’s post on this very stack for a deeper understanding of why it’s awesome. We love it and we’re sticking with it!

So as you can see, it’s a pretty ambitious proposal, full of new and exciting stuff, and this project is the perfect opportunity to test it all out. We’re keen to get the show on the road and into the meaty part of the development work, but we’re also eager to build something slick that will be considered a solid, well-written codebase. I have high hopes that Docker will become a thing around here, catching on as the go-to tool for handling DevOps, just like React and LiveScript have for the frontend!

21
Aug
2014

Computer Science students should work for a start up while at uni

by Albert Still

Don’t completely rely on your grades to get you a job after uni. A CS degree will prove you’re intelligent and understand the theory, but that will only take you so far in an interview. We have to talk about apps we’ve built with potential employers, just as you’d want a carpenter to show you pictures of his previous work before you hired him.

While studying CS at university I worked two days a week for Red Badger, even in my final year when I had a dissertation. Some of my classmates questioned whether it was a good idea, suggesting the time it took up could damage my grades. But it did the opposite: I got better grades, because I learnt so much on the job. When you see solutions to real-life problems, it gives you a better understanding of the theory behind them – and it’s that theory you get tested on at university.

What I’ve been exposed to that I wasn’t in lectures

  • Using open source libraries and frameworks. There’s rarely a need to reinvent the wheel: millions of man-hours have been put into open source projects, which you can harness to make yourself a more productive developer.
  • GitHub is our bible for third-party libraries. Git branches and pull requests are the flow of production.
  • Ruby – the most concise, productive and syntactically readable language I’ve ever used. Unlike Java, which the majority of CS degrees teach, Ruby was designed to make the programmer’s work enjoyable and productive. Inventor Yukihiro Matsumoto said in a Google tech talk: “I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language”. Ruby is loved by start-ups and consultants because they need to roll out software quickly.
  • Ruby on Rails – built in Ruby, it’s the most starred web application framework on GitHub. It’s awesome and the community behind it is massive. If you want to have a play with it I recommend its “Getting started” guide (don’t worry if you’ve never used Ruby before, just dive in – it’s so readable you’ll be surprised how much Ruby you’ll understand!).
  • Good habits – such as the DRY, YAGNI and KISS principles. The earlier you learn them the better!
  • Heroku – makes deploying a web app as easy as running one terminal command. Deploy your code to the Heroku servers via Git and it will return a URL to view your web app. Also, it’s free!
  • Responsive web applications are the future. Make one code base that looks great on mobiles, tablets and desktops. Twitter Bootstrap is the leading front-end framework; it’s also the most starred repo on GitHub.
  • JavaScript – the world’s JS mad, and you should learn it even if you hate it, because it’s the only language the web browser knows! You’ll see that the majority of the most popular repositories on GitHub are JS. Even our desktop software is using web tech, such as the up-and-coming text editor Atom. If you want to learn JS I recommend this free online book.
  • Facebook React – once-hard JS problems become effortless. It’s open source, but developed and used by Facebook, so some seriously good engineers have built this awesome piece of kit.
  • Polyglot – Don’t just invest in one language, be open to learning about them all.
  • Testing – this wasn’t covered in the core modules at my university, but Red Badger are big on it. Simply put, we write software to test the software we make. For example, we recently made an interactive registration form in React for a client. To test it we wrote Capybara integration tests, which load the form in a real browser and imitate a user clicking and typing into it. They rapidly fill in the form with valid and invalid data and make sure the correct notifications are shown to the user. This is very satisfying to watch, because you realise the time you’re saving compared to manually testing it.

Reasons for applying to a start up

  • They are small and have flat hierarchies which means you will rub shoulders with some talented and experienced individuals. For example a visionary business leader or an awesome developer. Learn from them!
  • More responsibility.
  • They’re more likely to be using modern front line tech. Some corporates are stuck using legacy software!
  • If they become successful you will jump forward a few years in the career ladder compared to working for a corporate.
  • Share options – own a slice of your company!
  • There are lots of them and they’re hiring!

Where to find start ups

A good list of hiring start-ups in London is maintained by the increasingly successful start-up job fair Silicon Milk Roundabout. Also, Red Badger are currently launching Badger Academy – they’re paying students to learn web tech! This is extremely awesome when you consider UK universities charge £9,000 a year. If you’re interested in applying, email jobs@red-badger.com

Best,

Albert

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

C.A.R. Hoare, 1980 ACM Turing Award Lecture