30 Oct 2014

Product Hunt Hackathon

by Jon Sharratt

Last weekend Product Hunt hosted a global hackathon, opening up their API to the community for consumption.  Budding developers got together on-site at Y Combinator for a two-day hackathon to come up with and deliver a new product idea.  Remote entries were also allowed, and from Hong Kong to France to our home town of London, plenty of fresh ideas were ready to be developed.

I applied on my own (a bit last minute) and, without much thought, put down the first idea that came into my head: ‘crowdsourcing for products’ via the Product Hunt API.  My main personal goal was to prove to the people over at Y Combinator that I could come up with and deliver an idea over two days.  After a couple of days I got an email with an invite to participate.  That was it: I was ready to hack!

I got up Saturday morning and opened up the Badger HQ (a little hungover from the night before) to find Albert, a fellow badger, who had stayed the night on the sofa after having a few beers with some of the other badgers.  The tech I decided to use was a slight risk, as I had only dabbled with it previously: React, LiveScript and Node.

I began by basing the project on what the Badger Academy cubs have been creating for our own internal project (they are doing a great job, as you might have seen already).  Albert started to gain interest in the whole project idea and got set up.  Another great addition to the team a couple of hours later was Viktor, a beast at LiveScript, just what we needed.  He saw on our company's Slack that I was in the office and got involved.  That was it: we had a great team to get this hack really moving.

We decided to get the core functionality we wanted to show off to the judges done on Saturday.  Then on Sunday we would style it up and tweak the UI to make it more usable and a nicer experience.  I had implemented the core layout using Twitter Bootstrap (http://getbootstrap.com/) with styling from a theme on Bootswatch (http://bootswatch.com).  Later, Viktor informed us of an awesome library, React Bootstrap (http://react-bootstrap.github.io/), and converted the project so we could change the layout quickly and more effectively.

 

[Image: Product Fund, day 1]
By the end of Saturday the project was taking shape with the huge help of Viktor and Albert.  Authentication done, Product Hunt API consumed and Stripe Checkout integrated to allow users to pledge money.  I had previously created a quick-and-dirty Node.js Passport (http://passportjs.org/) strategy to make the authentication process easier (https://github.com/allotropyio/passport-product-hunt).  So with all of that said, it was time to call it a night, ready for a fresh start on Sunday.
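For the curious, wiring a Passport strategy like that into an Express app looks roughly like this.  This is a minimal sketch: the strategy name and option keys are illustrative, so check the passport-product-hunt README for the real ones.

var express = require('express');
var passport = require('passport');
var ProductHuntStrategy = require('passport-product-hunt').Strategy; // name assumed

var app = express();

passport.use(new ProductHuntStrategy({
    clientID: process.env.PH_CLIENT_ID,
    clientSecret: process.env.PH_CLIENT_SECRET,
    callbackURL: 'http://localhost:3000/auth/producthunt/callback'
  },
  function (accessToken, refreshToken, profile, done) {
    // Find or create the user, then hand them back to Passport.
    done(null, { token: accessToken, name: profile.name });
  }
));

app.get('/auth/producthunt', passport.authenticate('product-hunt'));
app.get('/auth/producthunt/callback',
  passport.authenticate('product-hunt', { failureRedirect: '/' }),
  function (req, res) { res.redirect('/'); });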

Sunday came along, and all that was left to do was add validation to forms and finish off some of the more advanced parts of the journey, such as letting product makers assign costs and a deadline for features to be funded.  Viktor also added the awesome Firebase (https://www.firebase.com/) as a storage layer for pledges and feature requests, rather than storing them in memory on the client.

Not only did Firebase give us an easy way to implement a storage layer, it also allowed the UI to live-update whenever pledges or features were added.  It really helped make the site come alive and made it more engaging for users.  As a side note, I would say the blend of React, LiveScript, Node and Firebase is a match made in heaven for this kind of project (a blog post for another time).
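As a rough sketch of how that wiring works with Firebase's JavaScript client of the time (the URL and data shape here are made up):

var Firebase = require('firebase');
var pledges = new Firebase('https://product-fund.firebaseio.com/pledges'); // URL illustrative

// Any client can write a pledge...
pledges.push({ feature: 'CSV export', amount: 500, backer: 'jon' });

// ...and every connected client hears about it straight away,
// so the UI can re-render without polling.
pledges.on('child_added', function (snapshot) {
  renderPledge(snapshot.val()); // e.g. push into a React component's state
});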


On Sunday we were also joined by @jazlalli1, who worked in another team on a cool hack for Product Hunt, taking their data to produce analytics and trends.

As the deadline approached, our own lovely QA Roisi joined on Slack and did some testing remotely, which helped iron out a few creases.  Once we were happy, we were ready to submit the hack on challengepost.com.  We had created a droplet on DigitalOcean, registered the domain productfund.io, and got it deployed and submitted on time.

Check out the final result on producthunt.com at http://www.producthunt.com/posts/product-fund

The next day we found out that we had made the top 20 finalists!  We also had some great feedback from the community.

We then waited to hear about the finalists and who had won.  It turns out our small idea made the top 3 hacks of the first ever Product Hunt hackathon.  All in all, a great job on everyone's behalf for two days' work.

The prize:

“The top 3 winners will receive guaranteed interviews with 500 Startups, Techstars, and office hours with YC partner and founder of Reddit, Alexis Ohanian!”

Just to add, there were some great entries; check out the other finalists at http://www.producthunt.com/e/product-hunt-hackathon-2014

 

28 Oct 2014

Haller App Launch

by Joe Dollar-Smirnov

Red Badger, in collaboration with Haller and Pearlfisher, designed and built a web-based app for the charity Haller. The primary users for this app are rural farmers in Kenya, who live a life far removed from the abundance of our comfortable western homes.

 

 

Haller bring life-changing but basic civilised facilities to communities. The construction of reservoirs, wells, sanitation, medical centres and learning facilities is just part of the work carried out by dedicated Haller recruits, both on the ground in Kenya and in the UK. Led by renowned environmental expert Rene Haller, the charity makes education and the dissemination of agricultural knowledge a big part of its work. Through education, Haller help local communities build sustainable futures.

The Haller app is a constant, on-demand source of this information and an alternative way to reach communities further afield. Red Badger spent time in Africa working directly with the farmers to ensure the final product was focussed on their goals, accessible and understandable. Some of the users we were targeting had little or no experience of using applications or websites, so intuitive interactions were essential. We could not rely on any existing knowledge or experience of conventions.

The app has now launched, and to mark the occasion Pearlfisher have created this fantastic video that tells the story. To get the full background on Red Badger's involvement in the app, and how we approached the research, workshops and testing, there is a series of blog posts below.

Farmer Training Research

Africa Road Trip: Day Zero

Africa Road Trip: Day one and two

Africa Road Trip: The workshops begin

Africa Road Trip: The challenges for app design and development

UX Testing in Africa – Summary

 

27 Oct 2014

Improving Performance with New Relic APM and Insights

by Roisi Proven

In any kind of tech development, knowledge is power. When working on an ecommerce site, knowledge is essential.

The more information you have about your application, both during development and when it goes live, the more value you can provide to your client and, in turn, to the client's customers. In a development environment it's easy to leave yourself a breadcrumb trail back to an issue, but once your code moves into a staging environment, the information available can end up being a lot less useful. At one point, this was as useful as it got for us:

With no way to know what this “something” was, and after a few awkward problems where we very quickly reached a dead end, we made the decision to introduce New Relic APM into our workflow.

New Relic APM helps you monitor your application or website all the way down to the code level. We have been using this in conjunction with New Relic Insights, their Analytics platform.

With New Relic we have been able to track VPN downtime, monitor response times and get stack traces, even when working in a production environment. So the above vague message becomes this:

This monitoring enables you to increase confidence in your product in a way that isn’t possible with simple manual or even automated testing.

In addition to the APM, we've also been working with New Relic Insights. It behaves similarly to Google Analytics. However, its close ties to APM's tracking and monitoring mean that the data is limited only by the hooks you create and the queries you can write in NRQL (New Relic's flavour of SQL). It feels far meatier than GA, and you can also more easily track back-end issues like timeouts, translating them into graphical form with ease (if you're into that sort of thing).
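For a flavour of NRQL, a hypothetical query charting average front-end load time over the past week looks like this:

SELECT average(duration) FROM PageView SINCE 1 week ago TIMESERIES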

Being a new product, Insights is not without its pitfalls. In particular, NRQL can feel quite limited in its reach. A good example is the much-publicised addition of maths to NRQL; that a query language didn't include maths in the first place felt a bit like an oversight. However, this has been remedied, and New Relic have also introduced funnels and cohorts, which should add a lot to what you can do with Insights.

As a company, Red Badger has always valued fast, continuous development. While traditional BDD test processes have increasingly slowed us down, we hope that by improving our instrumentation we can improve our speed and quality overall.

23 Oct 2014

Badger Academy Week 8 – Frontend testing using WebdriverIO, Stubby and CucumberJS

by Tiago Azevedo

Over the past few weeks on Badger Time, we've had a steady workflow for the API, where we followed the TDD principle of writing feature tests first and code later. Setting that up in Rails wasn't an issue, as the various gems already out there for Ruby (RSpec and FactoryGirl specifically) made it a breeze.

The frontend was a different beast altogether and required quite a lot more thought, which we finally gave it over the past week.

The problems and their eventual solutions

There were several problems which we struggled to solve initially. Firstly, we had to run a GhostDriver instance which would allow our testing suite to communicate with PhantomJS. We’d also have to run a Node server simultaneously which would serve the app in a test environment to the PhantomJS browser.

Doing this was a bit tricky: Gulp's asynchronous nature meant that running those background processes from within Gulp was a no-go. Depending on how quickly or slowly the server launched, tests would pass or fail, as it might not be up before they ran.

It was probably more effort than it was worth to find a workaround, so we simply added the processes to the container's boot sequence. As our containers are based on Phusion BaseImage, it was a case of adding simple init scripts to BaseImage's custom init process.

# Boot PhantomJS with GhostDriver listening on port 8080 for the test suite
start-stop-daemon --start --background --quiet --exec /bin/bash -- -c "/usr/bin/phantomjs --webdriver=8080 --remote-debugger-port=8081 --ignore-ssl-errors=true > /tmp/phantom.log"
# Serve the app in the test environment for the PhantomJS browser to hit
start-stop-daemon --start --background --quiet --exec /bin/bash -- -c "node /data/server.js"

That was one catch out of the way. The next issue we faced was actually running the tests. Previously we had taken advantage of gulp-run to pipe our compiled spec files (we wrote the tests in LiveScript!) to the CucumberJS executable.

This was a bit overkill, and we ended up just using Node's script system to run the compile task and then run CucumberJS on the appropriate files. As a side effect, we got really nice formatting on the tests, so we could see exactly what went wrong (if anything failed).
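In practice that just means a couple of npm scripts along these lines (paths and script names are illustrative; lsc is the LiveScript compiler):

"scripts": {
  "compile-specs": "lsc --compile --output features/compiled features/src",
  "test": "npm run compile-specs && cucumber-js features/ --format pretty"
}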

[Screenshot: CucumberJS's nicely formatted test output. Nice!]

We had these tests running with the API endpoint set to a local Stubby mock API. Stubby's Node implementation gave us a programmatic API, which meant we could start, stop and modify the mock API as our tests were running.

This allowed us to feed data from Gherkin (Cucumber language) data tables to a function which would simply populate an endpoint with the supplied data. It removed our dependency on the real API for the frontend tests, which cut our CircleCI build times from a staggering 15-20 minutes down to 2-3.
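A sketch of the idea, assuming stub definitions in stubby4node's request/response format (the endpoint, port and step wording here are illustrative):

var Stubby = require('stubby').Stubby;
var mockApi = new Stubby();

module.exports = function () {
  // Boot the mock API once, before any scenarios run.
  this.registerHandler('BeforeFeatures', function (event, done) {
    mockApi.start({ stubs: 8882, data: [] }, done);
  });

  // Feed a Gherkin data table straight into a stubbed endpoint.
  this.Given(/^the following posts exist:$/, function (table, done) {
    mockApi.post({
      request:  { url: '/v1/posts', method: 'GET' },
      response: { status: 200, body: JSON.stringify(table.hashes()) }
    }, done);
  });
};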

A look at WebdriverIO

Selenium WebDriver is somewhat of an elephant in the office at Red Badger. We all dislike it – even you, Pete, you just don't know it yet – but we put up with it. The API is just a bit rubbish and documentation is quite difficult to find. As somebody working out its usage from scratch, my options were quite limited: spend hours sifting through Java documentation and hope it works the same in the JavaScript implementation, or go through endless user issues trying to find a solution which matched my own problem.

That's where WebdriverIO helped tremendously. It's a JavaScript wrapper around Selenium's confusing API and offers quite a few helpful additions of its own. Just having documentation – however incomplete it might be – was a godsend. At least the functions which aren't documented link to their source, so we can see what's going on and extrapolate from that.
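To give a flavour, here is roughly what driving the PhantomJS instance from earlier looks like through WebdriverIO's chainable API (the URL and selectors are illustrative):

var webdriverio = require('webdriverio');

var client = webdriverio.remote({
  host: 'localhost',
  port: 8080, // the GhostDriver port from the init script above
  desiredCapabilities: { browserName: 'phantomjs' }
});

client
  .init()
  .url('http://localhost:3000/')
  .setValue('#search', 'badger time')
  .click('button[type=submit]')
  .getTitle(function (err, title) {
    console.log('Page title:', title);
  })
  .end();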

How LiveScript facilitates the callback-based nature of CucumberJS

If you're familiar with the term 'callback hell' then you know how asynchronous code can be a real pain to deal with: you end up with nested logic inside nested logic inside a browser action, all ending with a callback to the top level to pass (or fail) the test. Take this simple example of a browser action which types a phrase into an input on the screen. In JavaScript, we can immediately see why it quickly grows into something that isn't nice to deal with.
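(The original example was a screenshot; this is a representative reconstruction of that kind of step.)

// Each browser action waits on the one before it, so the nesting
// (and the error handling) deepens with every step.
client.url('http://localhost:3000/', function (err) {
  if (err) return callback(err);
  client.setValue('#search', 'badgers', function (err) {
    if (err) return callback(err);
    client.click('button[type=submit]', function (err) {
      if (err) return callback(err);
      callback(); // finally tell Cucumber the step has finished
    });
  });
});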

We take advantage of LiveScript's unnested callbacks to write code which is functionally the same as the example above, but reads and writes like synchronous code (much easier to handle).
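Again the original snippet was an image, but the LiveScript equivalent is essentially this, using back-calls (<-), each of which compiles to a callback wrapping the rest of the block:

# Reads top-to-bottom like synchronous code, but compiles to the
# nested callbacks shown above.
err <- client.url 'http://localhost:3000/'
err <- client.setValue '#search', 'badgers'
err <- client.click 'button[type=submit]'
callback!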

Writing our tests is inherently easy due to the way Cucumber works, and in most cases we don't even need to write any code for new features, as we recycle logic from the more generic step definitions.

We’re excited to finally be able to adhere to BDD principles on our frontend. After all, the whole premise of Badger Academy isn’t to ship a finished product, but to bring our code quality and knowledge to a higher level.

20 Oct 2014

London React October Meetup

by Chris Shepherd

Last week saw the fourth London React User Group and yet again the turnout was fantastic for such a young user group.

There were three talks this time. The first, “Learning to Think With Ember and React”, was by Jamie White, who runs the Ember London User Group. Red Badger's own Stuart Harris had previously given a well-received talk on React at the Ember User Group, so it was time for them to come and talk to us. Jamie talked about how it is possible to combine Ember with React, and it was interesting to see how flexible React is and how easily it fits into a variety of existing workflows.

Next up was Rob Knight with his talk, “Diff-ing Algorithms and React”. Rob covered something that I think a lot of us now take for granted with React: how it updates the DOM through the use of a virtual DOM and efficient algorithms.

Lastly, Markus Kobler's talk, “React and the Importance of Isomorphic SPAs”, showed how to use React to avoid poor SEO and slow loading times. These are usually considered the pitfalls of building Single Page Applications.

If you missed the talks, you can watch them on YouTube below. To be informed about the next meetup, sign up here.

Hopefully we'll see you there next time!

14 Oct 2014

Badger Academy – Week 7

by Sarah Knight

It's week 7 at Badger Academy, and it feels like things are really starting to come together. As the codebase begins to take shape and more of the blanks are filled in, I'm finding it easier to contribute: there are now more examples for me to refer to and less code to write completely from scratch. I spent a couple of days building the Roles section (Projects > Phases > Roles) on the frontend, and I feel like I'm really starting to grasp how things are linked together, where code is being called from, and what properties are getting passed from one section to another.

Money, money, money

Tiago and I started the week pair-programming to get monetary values working properly. We implemented the money-rails gem and created migrations to rename the money columns with the suffix ‘_pence’; e.g. the fixed_price column in Phases was renamed to fixed_price_pence. However, using the monetize method, the suffix is ignored everywhere else, so you can still refer to the attribute as fixed_price.

We used the monetize method in the models, which creates a Money object from the attribute. Money is converted from pounds to pence and saved in the database as pence; we then convert back from pence to pounds in the views. This means that calculations are done using integers, so no weird rounding errors will occur as they might with floats. Also, should we ever go international with Badger Time, or start getting paid in exotic currencies, having everything set up with money-rails should make currency conversions a doddle.
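In Ruby terms the setup amounts to a rename migration plus a one-liner in the model. This is a sketch; the as: mapping is how money-rails handles our non-default _pence suffix:

class RenamePhaseMoneyColumns < ActiveRecord::Migration
  def change
    rename_column :phases, :fixed_price, :fixed_price_pence
  end
end

class Phase < ActiveRecord::Base
  # Exposes fixed_price as a Money object backed by the integer
  # fixed_price_pence column, so all arithmetic happens in pence.
  monetize :fixed_price_pence, as: 'fixed_price'
end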

Promises, Promises

Clicking around in the browser, I discovered a bug where the page was rendering before the data had finished loading. A phase belongs to a project, and I found that trying to access some of the phases resulted in an error. It turned out that after fetching each project, and then each of the phases for the first project, the data was being flagged as fetched before the other phases had been retrieved.

Viktor decided that the best way to solve this issue was through the use of promises, an entirely new concept to me. The basic idea is that a promise stores a task and ‘promises’ to do something in future, once other criteria have been fulfilled. So you can hold off an action until other tasks have been completed.

The really clever thing about promises is that you can chain them together, so that once a stage in the code is reached, you can start another promise, and then another one, and so on. Each promise will wait for the required actions to be completed before launching its own action, until you get back to the first promise. Everything runs in the sequence you've set, and you know that the final task won't run until everything else has finished. Another really useful feature is the .all function, which allows you to run several tasks in parallel and wait for them all to finish before running another task. This would be much more difficult with classic Node callbacks.

By passing in a silent option, we could hold off on notifying the listeners that data had been fetched until it truly had all been fetched. It also cut down on the number of times the page was re-rendered: previously it rendered after every single item was fetched, which would get ridiculous once Badger Time was filled with content (and was already slightly ridiculous with the small amount of example content that's in there currently!).

We installed the Bluebird promise library, and then required it in each file we added promises to.

Here’s the code that was added to the Projects store file:

Here's the code from the Phases store that gets called from the Projects store:
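(Both snippets were screenshots in the original post. As a rough, illustrative sketch of the pattern they implemented — the store and function names here are hypothetical:)

var Promise = require('bluebird');

// Projects store: fetch the projects, then all of their phases,
// and only then tell the listeners anything happened.
function fetchProjects() {
  return api.get('/projects').then(function (projects) {
    return Promise.all(projects.map(function (project) {
      return phasesStore.fetchForProject(project.id, { silent: true });
    })).then(function () {
      notifyListeners('fetched'); // one notification, one re-render
      return projects;
    });
  });
}

// Phases store: the silent option suppresses the per-fetch notification.
function fetchForProject(projectId, opts) {
  return api.get('/projects/' + projectId + '/phases')
    .then(function (phases) {
      if (!(opts && opts.silent)) notifyListeners('fetched');
      return phases;
    });
}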

26 Sep 2014

Badger Academy Week 5

by Eric Juta

Over the last week we worked on deployment and on developing on top of the Flux/Om-like architecture that Viktor scaffolded before our eyes. This week we were joined by the digital poet Joe Stanton, who helped us hammer out solutions to our woes.

It seems that there is an endless stream of problems on any project; we truly are being moulded into “True Grit” characters throughout this academy. Keeping development moving, we decided to finally tackle the troubling issue of front-end testing, which has been lingering for the past few weeks, ever since we first came across that milestone.

Over the past two weeks, Alex joined us to scaffold out the frontend's modular structure with proper SCSS styling! This week the majority of the work was Joe's: attempting to stub out the real API calls temporarily, for testing sessions on the smart TV.

Styling separation

The final solution, which has proven itself well over the last few weeks, was to compile the SCSS files in a Gulp build task.
Sadly, without Ruby installed in the Node Docker container, we were unable to use the original Ruby Sass!
#INTERNALSASSVSLESSWARS
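The task itself is small; something along these lines, using the libsass-based gulp-sass plugin so no Ruby is needed in the container (paths illustrative):

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('styles', function () {
  return gulp.src('src/styles/main.scss')
    .pipe(sass({ errLogToConsole: true })) // compile SCSS via libsass
    .pipe(gulp.dest('public/css'));
});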

Adhering to the Assets, Modules, Vendor and Partials folder structure, we learned a straightforward way to scaffold styling:
  • Assets – personal static files that aren't of a stylesheet file type (image files and fonts!)
  • Modules – different files for particular views/component pages within your application
  • Vendor – generic files from open source frameworks or non-personal stylesheets
  • Partials – potentially default mixins, if you would like to label them that way!

SVGs

SVGs are scalable vector graphics. Since we're using a responsive CSS grid framework (we extracted and imported the grid system out of Foundation; lovely, if you ask us), we definitely require scalable images, otherwise our application will look ugly!
#PixelPerfect?
Thanks to Sari, Badger-Time has its gorgeous logo exported to SVG format! Our proud work renders perfectly on any device, at any resolution our audience pleases.

Continuous integration/deployment

We can frankly say that API development, including the RSpec TDD approach, is going smoothly.

We finally scaffolded the last part of the continuous integration and delivery process this week: Amazon S3 + CloudFront deployment for the frontend, and a DigitalOcean droplet for the backend.

Both of these were relatively straightforward compared to other obstacles we had come across! Thank you, open source contributions! (Namely the Gulp S3 plugin and Docker within DigitalOcean droplets.)

Tiago created his own Node.js webhook module which, on receiving a POST request (with a required token, of course) sent after the CircleCI tests have passed, pulls down the required Docker image with all the binary dependencies pre-installed and swaps in a freshly cloned, production-ready version of the application.

For the frontend, deployment is done by running a basic Gulp ‘deploy’ task.
It's also good to note that environment variables can be set in CircleCI and read from that Gulp deploy task!
12Factorness!

Tiago's open source contribution: https://github.com/tabazevedo/catch

As he likes to call it: CircleCI in under 50 lines!
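The essence of such a hook, as a minimal sketch (the image name, token handling and redeploy script are all illustrative; see the repo above for the real thing):

var express = require('express');
var exec = require('child_process').exec;
var app = express();

app.post('/deploy', function (req, res) {
  // Reject anything that doesn't carry the shared token.
  if (req.query.token !== process.env.DEPLOY_TOKEN) {
    return res.status(403).end();
  }
  // Pull the prebuilt image, then swap in the fresh clone of the app.
  exec('docker pull redbadger/badger-time && ./redeploy.sh', function (err) {
    res.status(err ? 500 : 200).end();
  });
});

app.listen(9000);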

Fighting the frontend tests

Our attempt at scaffolding Stubby programmatically had failed: LXC container network address mapping issues.
Joe's fierce battle meant redirecting the API endpoint to mocked-out API routes responding with JSON datasets, only for the duration of the Gulp ‘test’ task.
This required restarting the Node.js server in between each CucumberJS test; absolutely brilliant!

At one point during the debugging process, Joe was unable to tell whether the correct ‘test API’ was being requested. He lazily evaluated the real API to force this confirmation. …I know, right.

In the end Joe fathomed the situation, but given the restrictions, and the obvious refusal to recreate the backend API logic in Node.js purely for frontend testing, the result was static datasets. The situation remains unresolved.

A potential final option is to reroute the testing API endpoint to a deployed staging backend API, then deploy the backend API to production in succession.
This keeps the logic intact while keeping the data pools separate.

Next week, Badger-Time faces OAuth, serialising and sync diffing algorithms!
Lesser woes, we'd agree; honestly.

22 Sep 2014

Badger Academy Week 4

by Sarah Knight

It's week 4 of Badger Academy, but for me personally, as the third intern to join, it's the first week. Not only am I a few weeks behind on the Badger Time project; fresh from the 3-month Web Development Immersive course at General Assembly, I'm also several years behind Tiago and Eric in terms of general programming experience. So my first few days were spent in a state of confusion and growing panic as I tried to read up on an ever-growing list of techniques and technologies that were completely new to me.

Vagrant, Docker, React.js, Gulp, Gherkin, Phantom.js, Browserify, Nginx, Selenium and CircleCI were a few of the terms I was busy googling. I now have a rough grasp of what most of these are and how they fit together, but it might be a while before I can blog about them with any confidence! Watch this space…

By Wednesday, though, I was able to get stuck in and start writing some proper code, which felt good. I made a start on some tests for the API. We had been thinking about using Cucumber for these, but in the end it was agreed that plain RSpec made more sense for the technical back end, with the more readable, English-like Cucumber tests kept for the front end and its potentially less techie readers.

Viktor was our senior developer this week, and spent time helping me write some tests for the JSON responses. He also helped refactor some of the React.js code on the front end, while giving me an overview of how it all fits together. This was really helpful, as I think I'm now beginning to understand React on a conceptual level… we'll see how it goes when it comes to actually working with it, though!


GitHub Flow

With 3 full-time team members plus 3 part-time senior devs on this project, having a standardised system for version control is important. Most of the projects I've worked on previously have been solo efforts, so it was crucial for me to understand the system in place and make sure I didn't mess up. Luckily we're using the GitHub Flow workflow, which is simple to pick up and facilitates continuous deployment.

The workflow:

1) Create a new branch locally

Create a new descriptive branch locally from master and commit regularly. Naming things descriptively is always tricky, but done right, it allows everyone to see who’s working on what.

2) Add commits

Committing regularly allows you and others to keep track of your progress on the branch. Each commit is like a snapshot of the branch at a particular time, so you don’t want to leave it too long between commits or too much will have changed. With regular commits of small chunks of code, if you introduce bugs or change your mind about something, you can rollback changes easily. (It’s a bit like time travel!).

3) Open a Pull Request

Once you are ready to merge to master, or want some feedback, open a pull request. Pull requests allow others to review your code, and everyone can add comments. Because Pull Requests accept Markdown syntax, you can even create tickboxes of things to be ticked off (top tip courtesy of Alex!).
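For example, a couple of tickboxes in a pull request description (GitHub renders these as live checkboxes you can tick off; the items here are made up):

- [x] Add JSON response tests
- [ ] Refactor the React components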

4) Discuss and review code

Once a pull request has been opened, other people can see what you’ve been working on, and enter into discussion on Github about it.

5) Merge and deploy

Once you're happy with the code and it passes all the tests, you can merge to master. We have CircleCI set up to automatically test code once a pull request has been opened, so you can easily see whether the code is passing tests before you merge.
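For illustration, the whole loop from the command line looks something like this (the branch name is made up):

git checkout -b json-response-tests    # 1) a descriptive branch off master
git commit -am "Test JSON responses"   # 2) small, regular commits
git push -u origin json-response-tests # 3) push, then open a pull request
# 4) discuss and review on GitHub while CircleCI runs the tests
git checkout master
git merge json-response-tests          # 5) merge once everything is green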

The golden rule of Github Flow is: Anything on the master branch is deployable.

Any code on the master branch has been tested and is totally stable. You can create new branches from it with confidence, and deploy from it. We don't yet have any kind of production server set up, so there is currently no deployment. However, the whole point of GitHub Flow is continuous deployment, so once that's up and running, this step will happen regularly.

Next Week

To ensure that we learn as much as possible about all aspects of development, we’re taking it in turns to work on the different parts of the project. So just as I was starting to get to grips with the API this week, next week I’ll be doing something completely different and taking a turn on the front end. However, I’m looking forward to exploring React.js and seeing how the testing differs.

17 Sep 2014

Badger Academy Week 3

by Tiago Azevedo

The third week of Badger Academy has passed, and with it ends the first cycle of seniors helping interns. This Thursday we were paired with Joe Stanton. We ran into a lot of problems during the week, which left us somewhat frustrated but also increased our eagerness to learn. Most of our environment setup for development is done by this point. We managed to decrease our Docker build times from ~20 minutes to 3-5 minutes, depending on how good a day the server was having, but overall the process is consistent and fast.

Our focus this week was on testing standards. We were aware of the best practices for testing our software, but implementing them within our projects was what took the bulk of our time.

Testing the API

Testing the Rails backend was fairly straightforward. When we scaffolded the controllers and models for our project, a set of pre-generated RSpec tests was provided for us. Most of them were fairly unoptimised, and some were suited not to an API but to a project written completely in Rails.

We kept a few things in mind while writing these tests:

  • Keep tests of one model/controller isolated from other models and controllers.
  • Avoid hitting the database where we could.
  • Avoid testing things which are covered by higher-level tests.

Expanding on that third point, Joe helped explain which layers to test and which layers we could skip. At the core of our app we have model tests, which are independent of the database and test things like logic and validation. These should eventually make up the majority of our tests, but for the meantime we only have a few validation checks. The ‘medium-level’ tests are things like routing and request tests.

We ended up skipping the routing tests, since once we got to the higher-level integration tests we could infer that if those passed, all our routing was correct. We kept request tests to a minimum, only checking that the API returned the correct status codes; this gave us a sense of consistency across the app, and the status codes weren't necessarily implied by the integration tests.

Following that, we removed the unnecessary stuff and, through the use of FactoryGirl, converted our logic and validation tests to avoid hitting the database, as that would cause a significant slowdown once our project became larger. Some of our higher-level controller tests do hit the database; however, this is unavoidable in most cases, and attempting to bypass it would have been more trouble than it was worth.
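As a sketch of the FactoryGirl approach (the factory and attribute names are made up): build gives you an unsaved object, so a validation check never touches the database, while create would persist it.

FactoryGirl.define do
  factory :project do
    name "Badger Time"
  end
end

describe Project do
  it "requires a name" do
    project = FactoryGirl.build(:project, name: nil) # built, never saved
    expect(project).not_to be_valid
  end
end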

Testing the Frontend

Our frontend testing was much more difficult to set up. We're currently running a stack of PhantomJS, CucumberJS and Selenium. CucumberJS is a tool that allows us to write tests in a human-readable format, so that anyone, without an understanding of programming, can see what's happening and even write their own tests if they want to. This is the basic premise of BDD (behaviour-driven development): we write tests for the functionality of the software beforehand, from the standpoint of the end user and in a language they can understand. This differs from the TDD (test-driven) principles used in the API, as that is written purely in Ruby, and not necessarily from a user's point of view.
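(The example in the original post was a screenshot; this is a representative reconstruction in the same vein, with made-up wording.)

Feature: Signing in
  Scenario: A user signs in with valid credentials
    Given I am on the sign-in page
    When I fill in "Email" with "sarah@red-badger.com"
    And I fill in "Password" with "secret"
    And I press "Sign in"
    Then I should see "Welcome back"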

 

That's an example of a test written in Gherkin (the CucumberJS language; yes, we are aware of all the slightly strange vegetable references). You can probably guess what it tests for. Behind the scenes, the test runner captures and identifies each of those lines and performs checks based on the parameters specified (e.g. what page you're on and what action you're performing).

One issue we struggled past was how to isolate these tests from the API. Since the pages display content from the backend, we needed a way to test using fake data. We went through a variety of methods during the week. First we thought of simply stubbing out the calls to the API using Sinon, a popular JavaScript mocking and stubbing library. While this would have been the most robust option, we had big difficulties using it with Browserify (a tool we use which bundles your entire application into one file), so we decided on creating a fake API server using Stubby, which runs only for the duration of the tests and can serve multiple datasets to the frontend, so we can still test a variety of cases.

CircleCI

Now that we have the testing frameworks down, we expect to make fast progress from here on out. We also ended up learning and using CircleCI, which automatically runs the tests on any pushes or pull requests made to the GitHub repos. This makes sure we only merge into master when everything is working as planned, and that all tests pass on a fresh system before deployment.

Despite all the new technology we have introduced, everything is going more or less smoothly, and we couldn't ask for a better foundation to build this project on. Not only are we rethinking the way the tech badgers go about the development process, we're also streamlining the entire production process, with lower build times, safe and consistent deployment, and a highly scalable and portable infrastructure.

29 Aug 2014

Badger Academy week 2!

by Eric Juta

This week in Badger Academy we were joined by Alexander Savin, a senior engineer of many talents. Under his guidance we assessed the current state of our DevOps, including the decision to use Docker.
Finalising last week's architecture choices, we promptly laid down the foundations for the road ahead.
#9containerDevOpsRefactoringWoes
There really was a lot of googling, and not much Stack Overflow!
Having decided on a one-command workflow for any compatible Unix system, we proceeded to create the mammoth script.


Bash Shell Script

Iteratively tweaking it (Agile!) allowed us, in the end, to do the following:

    • Git clone Badger-Time
    • Use Vagrant to up the initial CoreOS VM
    • Run the shell script from within the SSH session to build the Docker containers

(The current container stack, each container with its respective data container, being: Rails API, Redis, Postgres, Node, Nginx)

  • Pull preinstalled images down
  • Add our config files into them, specifically our Nginx config and SSL certificates
  • Mount our Badger-Time code into its respective destinations
  • Install Node and Rails dependencies, then create the databases and migrate them
  • Run all the linked containers, with persisted daemons and their services, in hierarchical order

Voila!

Badger-Time code up and running on any potential Unix system in less than 15 minutes, without any further interaction.
That sounds like a lot, but it's made possible by the high internet speed within the office!
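Condensed, the one-command workflow boils down to something like this (the repo URL and script names are illustrative):

git clone git@github.com:redbadger/badger-time.git && cd badger-time
vagrant up                         # boot the CoreOS VM
vagrant ssh -c /data/build.sh      # build and start the containers, roughly:
#   docker pull redbadger/bt-api                                (and friends)
#   docker run -d --name db --volumes-from db-data postgres
#   docker run -d --name api --link db:db -v /data/api:/app redbadger/bt-api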

Advantages

The advantages we discovered in this approach, compared to the previous Badger-Time Vagrant + Ansible setup, were vast in so, so, so many ways!

First of all, an all-in-one up command: we have one extra intern joining us in a week's time, and getting her laptop up to the current versioning will require little to no effort.
(Yes, we've already tested it, on her preview day at the office.)

  • No makefile building? Yes please!
  • Faster tests
  • Reduced memory footprints
  • Same environment from development to our build server to our deployment server
  • Isolate local dev dotfiles and configs from the application
  • 12factor application coherence!

Disadvantages

There are many disadvantages, as you would imagine with any new technology:

  • Initial volume mount mapping configuration
  • Networking association is difficult to comprehend
    (dynamic host files generated by linked containers, exposed ports, Vagrant)
  • Developer productivity affected by the added configuration complexity
  • Double-layer virtualisation! Native support on Linux only
  • The lack of a structured DevOps Docker approach documented online leaves a lot of decisions to the creator

Admittedly, as we're still continuously learning, we will grow into the software architect's hat over time.
Luckily we have constant access to, and oversight from, the senior engineers over Slack! #badgerbants

Scaffolding the frontend

With the majority of the DevOps for the developer environment out of the way, we discussed with Alex potential ways to scaffold the frontend tests.
This took a lot of learning Gulp with him, to further customise our frontend workflow.

Gulpfile.ls

We chose to have our gulpfile do the following tasks:

  • Pull down npm and bower dependencies
  • Build the LiveScript React.js components, index.jade, Less files and the semantic grid system
  • Browserify, concatenate, uglify
  • Build the LiveScript tests for compatibility with CucumberJS
  • Start the PhantomJS service from within the Docker container before running the CucumberJS tests
  • Watch for source file changes and compile

Letting Gulp do all this allows us to commit and push less code to GitHub, plus we get the added developer workflow productivity!
Less context switching; the above are just abstractions!
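A slice of that gulpfile, shown in plain JavaScript for illustration (ours actually lives in Gulpfile.ls and covers the full list above; paths are made up):

var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

gulp.task('build', function () {
  // Bundle the compiled LiveScript components into one file for Nginx.
  return browserify('./build/app.js')
    .bundle()
    .pipe(source('bundle.js'))
    .pipe(gulp.dest('public/js'));
});

gulp.task('watch', function () {
  gulp.watch('src/**/*.ls', ['build']); // recompile on source changes
});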

Food for thought

One problem that had to be overcome was the choice of running the frontend tests from within the container or outside it.
We have to keep in mind that the tests will inevitably be run within a build server environment before deployment.
Because Nginx serves the static files from a container, this poses the question: should we reroute the webdriver to examine from the outside in for tests?

We were a bit stumped at first, so can someone please document a best-practices guide for Docker networking + Docker frontend testing!
It may be the case that someone at Red Badger will have to!

Next week's tasks!

Next week, Tiago and I will ponder what kinds of tests should be written.

BDD is a major cornerstone of the quality of our projects; we'll have to assess how to implement it with a split frontend and backend!
Let alone learn API design!