Posts Tagged ‘training’


Learning to program with the world’s best hypnotists.   

by Toby Merlet


Everybody was very focused during this 2-day seminar.  Veerryy Fooccussedd…  And I’m sorry to disappoint, but they were not teaching us to program JavaScript, Clojure or Elixir! This was about a different sort of programming altogether.


Hypnosis pendulum


Most of you will have heard of Paul McKenna.  A household name in the UK, he started out as a stage hypnotist getting audience members to dance around like chickens.  But he soon realised he could make more money helping people change their lives for the better.  You can hardly walk into a branch of Waterstones without being assaulted by his myriad of books and audio recordings.  Apparently he can “Make you sleep”, “Make you rich”, “Make you thin” but also “Make you smarter” and “Play great golf”.

All for a mere 12 pounds each.  Bargain.

His change from entertainer to self-help guru was facilitated by a man called Richard Bandler, the co-creator of a discipline called NLP (Neuro Linguistic Programming) and a hypnotist himself.  What is NLP?

“(n) a model of interpersonal communication chiefly concerned with the relationship between successful patterns of behavior and the subjective experiences (esp. patterns of thought) underlying them; a system of alternative therapy based on this which seeks to educate people in self-awareness and effective communication, and to change their patterns of mental and emotional behavior.”




The seminar I attended was called “Get the life you want”, hosted by both Paul McKenna and Richard Bandler.  I was interested in attending to learn more about NLP, and hopefully feel more confident.

And yes, we did get hypnotised.

If you’ve never been hypnotised then it might sound a bit scary, mainly because stage hypnotists have given it a bad name.  But the induced trance feels more like a state of increased focus.  The best way I can describe it is the feeling when you read a book and the world around you disappears because you’re so engrossed in what you’re reading.  Coders might call it “being in the zone”.  Anyway, I first learned techniques of self-hypnosis from Dr. Val Walters, who was at the time Professor of Hypnosis at University College London, so it wasn’t unfamiliar to me.

40 years ago, if you wanted to get rid of a phobia, it required 6 months of desensitising therapy, and even then, the results were uncertain.  McKenna claims that for 99% of patients, if it takes you more than 60 minutes, you’re doing it wrong.

Traditional forms of therapy look into the past to try and find the root of your psychosis.  Because, apparently, having that knowledge will fix the problem.  Bandler on the other hand, told us with a degree of playfulness that “People start telling me how fucked up they are, but frankly, I don’t give a shit”.  Charming.  But he wasn’t being mean, he just believes that it isn’t very helpful knowing about the past, and cares more about how you think rather than why.  What is it you see in your mind’s eye when you remember a disturbing thought?  What goes through your head when you’re beating yourself up?  How does it make you feel physically?  With this knowledge he can, with the help of hypnosis, do something about it.


Hand with eye


He was a charismatic fellow.  His stories were entertaining, and while he was talking to us he was apparently using his NLP techniques to connect with our subconscious and feed us positive subliminal messages.  I felt very focused listening to him, but I’ve still to be convinced by NLP; that may be because I wasn’t sure what to look for.  I knew when he was using hypnosis, but NLP was not apparent to me.

On the other hand, I was sold on the hypnosis aspect of the seminar.  We learned practical techniques to help control that little voice we all have in our heads.  The one that tells you you’re not good enough, that you can’t do something, or that you want to eat the rest of that giant cake all by yourself.   You know who you are.

When I left the seminar, I was a bit disappointed.  I was tired and didn’t feel much different to when I first walked in the room 2 days earlier.  But a week on, I can honestly say that I feel like a much happier more content human being.

That’s a big claim! So let me clarify.  I’m sleeping better, I worry less, and I’m able to concentrate better and feel more productive.  I’ve even written my first blog post.  Training budget well spent?

“If you like the idea of an annual training budget, trips to conferences like this and a big focus on learning, Red Badger could be the place for you. Check out our current vacancies here.”


What’s the point of going to conferences?

by Hanna Cevik

We have a £2,000 annual training budget at Red Badger that can be used however we like. Most people use it to travel to a conference in the US, Asia-Pacific or somewhere equally exciting. Training is really about your specific job role and expanding or honing your skills, though, so sometimes the most relevant conference is… at the London ExCeL.

On 12th May, I took myself out to deepest, darkest Docklands (admittedly in my MX5 with the roof down, as it was a super sunny day) and wandered around hall S6 for 7 hours. Amongst the things I wanted to hear about were why buyer journeys are all wrong and how to speak to big prospects whilst still sounding like a human being.

At Red Badger, it’s really important to us that we talk sense, both in terms of what we do and how we tell you about it. I was keen to hear how other people did it, and what the audience thought about it. One of the things I love about how we build new business here is that we don’t have a sales team. It means that we find new business based on our reputation, the need of the client and our suitability to do the job, not because someone wants to meet their target and get their bonus. Many agencies do use that model and it leads to division internally; project teams hate the sales team because they just throw projects over the fence and don’t care about how it’s been sold. The clients are almost always disappointed too; they end up having their projects de-scoped to make them possible in the time or for the price they’ve been promised.


What are you doing right now?

We don’t work like that at Red Badger. Ever. We are one team from pre-sale conversations to support; you’re always talking to people who know and respect each other’s working practices and understand how and why something has been designed or built that way. As a marketer, it is a joy to work with.

The speaker in the “Maximising your Business Relationships” session talked about how he felt the same disillusionment with that model, and set out to prove that large projects could be sold and managed without resorting to sales speak. This actually makes life a lot easier for both the seller and buyer. The pressure to talk in acronyms and business language can make it really hard to know what the other party means or wants. It’s a lot easier to say “I’m going to provide you with some recommendations to help get everyone on board” than saying “we realise this is going to be a c-suite decision, and I will provide you with a formal RfP response via procurement”. You have the same obligations to meet due diligence, but everyone feels like they are dealing with another human person. There were murmurs of uncertainty in the room; “but how will we sound important and knowledgeable without using all those buzzwords?” – and frankly that is exactly the problem. If you can’t sell your product while being plain and transparent, it’s probably not the sales process that is flawed.

It’s a lot like the Agile/Lean process itself – cut the waste, cooperate constantly, deliver fast. Endless documentation (e.g. large proposal documents) doesn’t get anything done faster, and may well contribute to losing sight of the end goal. Just like when you propose Agile, lots of people in the room looked worried. It’s hard to let go of the models you’ve been using for years. But that’s exactly why you should – they are obsolete. Just like the monolithic agency giants – they no longer provide the best solution.

It tied in with the buyer journeys talk I’d heard earlier in the day. If you are using the ‘traditional’ sales funnel, you’re going to be disappointed with your conversions.

sales funnel

This is just not how it works anymore. Most of your prospects simply aren’t interested in hearing about how your solution is going to do something X times better and Y times cheaper than your competitors over 40 pages of sales documentation. They want to know what it’s going to be like to work with you and how that is going to get the result they need delivered. They want to know why they should work with your teams, specifically, to achieve their aims. The old sales funnel model focuses too much on saying the right thing to the prospect to get them ‘down the funnel’, when you should be focusing on how to solve their issues.

Going to conferences isn’t always about learning new skills; sometimes it’s about being given the confidence to let go of old habits. Knowing that sales-speak isn’t necessary, that doing the right thing is more important than saying the buzzwords, and being bold in your decisions will mean that you don’t make the same mistakes as before, and get a different, better result.

So, thanks B2B Marketing Expo! You reminded me that doing my job well is often about simply treating people as human beings.


Knowing the Elephant: Mobbing Way

by Leila Firouz

Once upon a time, six blind men went to find out what an elephant is.

The first man touched the legs of the elephant and thought: an elephant is like a big pillar, or a tree with strong skin. The second man touched the tail and concluded that an elephant is like a rope with a brush at the end, which can move right and left very easily in the air. Well, I won’t bore you with what the rest thought, as I’m sure you can sort of imagine.


What brought this story back to me from old childhood memories was a one-day course I did on ‘Collaborative Exploratory and Unit Testing’, an introduction to ‘Mob Programming’ with a focus on collaboration between developers and testers. In this article I’ll try to explain why, after this 8-hour course, I felt as an experienced QA that I had been one of the blind men, and that the projects I’ve worked on were some sort of elephants, or dinosaurs! I could also sense a delicious scent of a more modern Agile.

What is Mob Programming?

‘Mob Programming’ is an agile approach to software development where the whole team is in one room, working together with one keyboard. Just to be clear, all the team means all the stakeholders: devs, testers, designers, project owners and so on. There are three roles in Mob Programming:

Navigators: Everyone in the team who ‘guides’ what should go into the keyboard. The brains of the team.

Designator: The decision-maker among the Navigators. The final voice who decides which of all the ideas goes into the keyboard.

Driver: The person behind the keyboard. The muscles of the team. The Driver doesn’t give feedback while behind the keyboard and only does as told by the Designator.

There is a rota, and every few minutes the roles are switched (common rotation intervals are 5, 10 or 15 minutes). In the training, we sat in a circle and each time the timer beeped we would shift one to the right to switch roles.
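Purely as a toy illustration of those mechanics (nothing from the course; the names and the seat-based choice of Designator are invented):

```js
// Every interval, the Driver role shifts one seat to the right around the circle.
const team = ['Ana', 'Ben', 'Caz', 'Dev', 'Eli'];
const INTERVAL_MINUTES = 10; // common choices are 5, 10 or 15 minutes

let driverIndex = 0;
setInterval(() => {
  driverIndex = (driverIndex + 1) % team.length;
  const driver = team[driverIndex];
  // Here the Designator is simply the next seat along; in practice the team decides.
  const designator = team[(driverIndex + 1) % team.length];
  console.log(`Driver: ${driver}, Designator: ${designator}, everyone else navigates.`);
}, INTERVAL_MINUTES * 60 * 1000);
```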

The mobbing technique applies to all aspects of the software development process, including requirements and testing. A project owner, for example, will not write code, but the team might decide to work on refining stories first. Mobbing can be thought of as an outgrowth of ‘Pair Programming’.

We started practising this method in the class for a few hours, in a group of 13 developers and testers. We were given an application, which we explored, documenting our findings. For the second half of the course, we started working on one of the bugs we had found. We investigated the cause, made the code testable and added a few unit tests for it.

In all honesty, the first couple of hours were quite confusing. I thought it was interesting, but in the way a Mad Hatter’s tea party is! Everyone was throwing around different ideas, we didn’t know the application or the code, we had different skills, and there were quite a few misunderstandings and disagreements. “How can this approach improve performance?” I asked myself. But as time passed, we started understanding each other’s languages, we were able to prioritise ideas, and we became more of a united force. The clouds of doubt started to fade and some sunshine started to show through.

Why Mob Programming Felt Like Sunshine

The whole team working together improves the average team performance. Every team member is good at something and not so good at something else; everyone also has bad days and excellent days. Mobbing has the potential to pick the best of the team.

There is no handover stage. This means “I can work with you on this”, as opposed to “I’m handing this over to you”. Teams can complete more work faster, and fewer issues are generated after coding.

More thinking is put into the product before an idea forms a piece of code.

It builds up a shared knowledge and leads the team to find and form a ubiquitous language. It also means less dependency on key skills and knowledge.

Tester and Developer Collaboration

As a tester what impressed me the most was how this approach promotes transparency and creates empathy between devs and QAs and has the potential to improve the quality of the product in less time. There are differences between the mindset and the language of devs and QAs. The most obvious example is indeed the word ‘Testing’.

In developer language, ‘Test’ is usually done to:

  • Check a feature works exactly as the spec says
  • Create feedback from code
  • Prevent regressions by writing unit tests

To a QA, ‘Test’ means:

  • Explore a feature with some guidance
  • See how the feature works with the rest of the product
  • Look for regressions

Mobbing gives these two worlds the opportunity to meet in a new way.

A tester’s feedback while the code is being written can prevent a defect from appearing. It can also help the developer write (unit-)testable code at an early stage, before going too far into development.

On the other hand, knowing which unit tests exist saves the tester a lot of time when scripting and running tests. They can focus more on exploratory and integration testing, as well as finding out which parts of the code have been touched and need regression testing.

It also gives everyone a better understanding of the application and the state of it.

Yes to Mobbing?

Mobbing probably is not the solution for every project and every team, and maybe not the best approach from start to delivery, but I think it is a method worth trying if a team is struggling with delivery or aims for higher performance. It helps the blind men learn sooner, and better, what an elephant is!

For more information about Mob Programming have a look here:

A Success Story of Mob Programming

Mob Programming Guide Book

Red Badger is currently looking for a manual tester to join our team, if you are interested take a look at the job role here.


Advanced React Course

by Anna Doubkova

Last week I had the great opportunity to join a group of React enthusiasts at a course run by Michael Jackson and Ryan Florence.



 Michael Jackson at React London meetup.


I had previous knowledge of React basics, but I thought it would be good to get a more in-depth understanding of how this framework works and in what ways it can be used. As we all probably know by now, learning a new framework is always tricky. No matter how many amazing online tutorials we read and cool videos we watch, we can always use a bit of personal advice. Otherwise we can end up like this kid, thinking we’ve got it, and by the time our app seems safe to release, realising we didn’t get it at all:


The compulsory gif

Who Ran It

Unfortunately, Ryan didn’t make it to the training, due to a misunderstanding with a customs officer and differing definitions of “work” in visa terms. That left Michael doing the job of two people at once, and I must say he managed with flying colours.

Michael Jackson (who started the training by saying “no, I’m not dead”) has a great background for giving this sort of course. Having contributed to a number of great JS libraries and created React Router, he not only knows how to make things work; he could also tell us why things should be done in certain ways and where the main strengths of React lie – and that’s something one normally doesn’t get from online tutorials or self-study hackathons.

What It Looked Like

I expected two days of listening to how React works, a lot of theory and maybe a little bit of coding. I was pleasantly surprised to find that assumption was wrong. Michael’s course was prepared in amazing detail, including 14 chapters of lectures (that we could code along with), exercises, solutions, and hints to lead us through the journey of getting a solid understanding of React. The interactive lectures gave us a great opportunity for hands-on practice, and the exercises helped us gain confidence that we can use these principles in our day-to-day work.

What I learnt

As a developer who’s obsessed with performance, I really loved learning about rendering optimisation, especially when displaying huge amounts of data. Mobile developers have this covered by UITableView, but we need to come up with our own solutions on the web. Rendering just what you see at any given moment, in less than 100 lines of code, seemed pretty magical!
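To make that concrete, here’s a minimal sketch of the windowing idea in React, assuming a fixed row height (my own illustration, not the course’s code):

```js
import React from 'react';

// Render only the rows currently visible in the scroll viewport.
// Assumes every row has the same, known height.
class WindowedList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { scrollTop: 0 };
    this.onScroll = (e) => this.setState({ scrollTop: e.target.scrollTop });
  }

  render() {
    const { rows, rowHeight, height } = this.props;
    const start = Math.floor(this.state.scrollTop / rowHeight);
    const visibleCount = Math.ceil(height / rowHeight) + 1;
    const visible = rows.slice(start, start + visibleCount);

    return (
      <div style={{ height, overflowY: 'auto' }} onScroll={this.onScroll}>
        {/* A full-height spacer keeps the scrollbar proportional to the whole list. */}
        <div style={{ height: rows.length * rowHeight, position: 'relative' }}>
          {visible.map((row, i) => (
            <div
              key={start + i}
              style={{ position: 'absolute', top: (start + i) * rowHeight, height: rowHeight }}
            >
              {row}
            </div>
          ))}
        </div>
      </div>
    );
  }
}

export default WindowedList;
```

However many thousands of rows you pass in, only a screenful of DOM nodes exists at any one time.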

Another bit that I found really interesting was the coverage of data flow. I’d used Redux and Flux previously but never really understood the underlying principles. Thanks to this diagram, a great lecture and a step-by-step exercise, I finally understood why it’s such an amazing idea.


Courtesy of Facebook


It’s simple (even if it takes a little time to get intuition for the whole process) and effective, and that is one of those things all programmers love.
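To show what finally clicked for me, here’s that one-way data flow in miniature; a hand-rolled illustration, not Redux’s actual source:

```js
// Actions describe what happened; a pure reducer computes the next state;
// the view re-renders from state. Names here are illustrative.
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
}

// A tiny store, just enough to show the dispatch -> reduce -> notify loop.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
    },
    subscribe: (listener) => listeners.push(listener),
  };
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }
store.dispatch({ type: 'INCREMENT' }); // logs { count: 2 }
```

Data only ever flows in one direction, which is exactly what the diagram above describes.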

Last but not least, we tried animating things with the Motion library. I was sad not to be able to try it out on an older Android phone to get a feeling how useful it could be in production where you have to support all sorts of crazy devices and browsers, but playing with animation is always nice, and somehow this joy increases when I code just to see how things work and not to solve a “real world” problem.

The End

I suppose all that is left to be said is: it was awesome! I wish the difficulty level had been a little higher so I could have learned more, but I do appreciate it’s difficult to balance this in such a varied group of developers. It was fun to speak with the other trainees and see how very different our experiences can be while using the same piece of technology. (Not to mention the procrastination recommendations we exchanged, such as 2048.)

Besides, attending this sort of training isn’t just about learning new things; it’s also a way to support the creators of open source code and their amazing products. I’d definitely recommend joining one of these.

Red Badger runs a React User Group in London, sign up here for updates on our next meetup.


Badger Academy – Week 10

by Sarah Knight

Badger Time Retro

This week Joe introduced us to the concept of Agile Retrospectives. A retro is an opportunity to look back at the project so far and review how everything is going. What do we want to start doing? Continue doing more of? Stop doing? It’s a good way of opening up communication about what we think we need to improve on, and of coming up with specific ways to achieve that.

We decided that the easiest way of tracking this was by Trello board. We’ve got 4 columns:

– To improve
– Improving/Monitor
– Improved
– N/A – (this is for if we later decide that a card is no longer relevant, but it doesn’t fit in the Improved column)

We created a card for each thing we want to improve on, and labelled it STOP, START or CONTINUE. These were all placed in the ‘To improve’ column. We then went through them all and discussed how they could be implemented, or if any of them needed to be reworded to make them more SMART.

A few examples:

START: Descriptive pull requests for every card currently in development, with a task list.
CONTINUE: Tracking possible enhancements as GitHub issues.
START: Using blockers (as a marker) for stories that can’t be worked on for some external reason (or because you’re stuck).
START: When pairing, make sure to swap around who’s writing and who’s reviewing.

I’d not come across the practice of a retrospective in this form before, but think it’s a really great method to open up dialogues about things you want to do differently. We’ve been using it for a few weeks now and are really seeing the benefits. Communication has improved, and coaches and cubs alike are able to see who’s been working on what, and how things are going. At regular intervals we revisit the retro board and discuss our progress. It’s a good way to track improvements, as we move cards from one column to another, and ensure that we continue to think and talk about other improvements we’d like to make.


We added the functionality to edit a resource, on both the front end and the back end. However, because a resource can be linked to a role, which has days associated with it, changing a resource’s cost could potentially affect the costs of a number of days, and therefore of projects as a whole.

So on the front end we needed to provide some options as to what editing a resource would actually affect.

Affected projects

– No bookings – no previous bookings will be affected by the new cost; it will only be applied to future bookings
– Bookings from – allows the user to choose a date; any booked days from that date onwards will have the new cost applied
– All bookings – all previous bookings will be updated with the new cost

So that the user can see exactly what they’re doing, we needed to flag up the projects and phases that would be affected by any changes they made. However, because a resource isn’t directly associated with a project, but via a role that belongs to a phase that belongs to a project, this was a bit tricky.

project associations

As you can see from the diagram above, editing Resource 1 would affect Project 1 and both its phases (via Roles 1 and 3). Editing Resource 2 will only impact Phase 1 (via Role 2), but still affects Project 1 as a whole.

We needed to go through all the roles linked to the current resource id, and in turn go through all the phases associated with those roles, and the projects associated with those phases to build up a list of affected phases and projects. Several phases from the same project could be associated with the same resource, so we needed to make sure that the projects didn’t get repeated.

To filter by a from date, we needed an extra step to turn the string from the date input field into a date object. We then went through all the roles linked to the current resource id, filtered for those that have a day with a date greater than or equal to the from date, and passed that list of roles into the same filtering process as above.

At one point we got a bit stuck trying to do a nested filter to get the roles that belonged to the resource, and then find the days that belonged to those same roles. Viktor pointed out that using a second ‘filter’ returns an array to the first filter, when it’s expecting a boolean. So we changed it to ‘any’, which does return true or false, and everything worked!
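The shape of that bug, sketched in plain JavaScript (the project itself used LiveScript, whose ‘any’ corresponds to JavaScript’s ‘some’; the data shapes here are invented for illustration):

```js
// Illustrative data shapes, not Badger Time's real schema.
const resource = { id: 1 };
const roles = [{ id: 10, resourceId: 1 }, { id: 11, resourceId: 2 }];
const days = [{ roleId: 10, date: new Date('2014-11-01') }];
const fromDate = new Date('2014-10-01');

const rolesForResource = roles.filter((role) => role.resourceId === resource.id);

// Buggy: the inner filter returns an array, and an array is always truthy
// (even when empty), so the outer filter keeps every role.
const buggy = rolesForResource.filter((role) =>
  days.filter((day) => day.roleId === role.id && day.date >= fromDate)
);

// Fixed: `some` (LiveScript's `any`) returns true or false, which is what
// the outer filter expects.
const fixed = rolesForResource.filter((role) =>
  days.some((day) => day.roleId === role.id && day.date >= fromDate)
);
```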


Badger Academy Week 9

by Eric Juta

Badger Academy endeavours have slowed down slightly; managing my last year of university while working part-time is becoming a sizeable challenge. Much more so to come once I arrive in the real world!

In reference to Albert’s post, I have personally found the same: working at a startup while at university contributes significantly to my academic performance! The trials and tribulations faced at Badger Academy have prepared me for kata-like performance at university. There’s so much I’ve learned at Badger Academy that isn’t taught at university or put into practice there! (Sadly they don’t teach git workflows at university)

I highly recommend working for a startup and gaining experience in preparation for post-grad life. The foundations of my dissertation have been laid thanks to the concepts taught at Red Badger!


The architecture of a single-page application talking via AJAX requests to the backend Rails API emphasises data flow. Without the chance to just install a Ruby gem and have most of the work done for you, we are forced to implement the same methodology and network best practices ourselves (as demonstrated before with Nginx).

The process of authentication leading to API data fetching is similar to a TCP three-way handshake.

In Badger-Time, the process occurs as follows:

  1. On any route, the clientside router checks whether a generated authentication token is stored in HTML5 LocalStorage (a persisted datastore in the browser with its own API)
  2. If there is no token, the router redirects the user to the /login route and renders the React.js component
  3. The user logs in with their pre-registered Badger-Time details
  4. The user’s credentials are verified by the backend API, and a generated authentication token is sent over (made to expire after a day unless a refresh call is made by the user)
  5. The generated authentication token is received and stored in HTML5 LocalStorage
  6. Every subsequent request includes the authentication token in its request headers
  7. The API checks that the request header has a valid authentication token and replies after executing the body of the request
(I take that back, that was more like 7 steps rather than 3)

Technically and code-wise, we implemented the above process with the following decisions:

  • NPM’s superstore-sync module gives us an API for setting and getting the auth token from HTML5 LocalStorage.
  • The API helper on the frontend was modified to send the token in all request headers when present.
  • A before filter/action in the Application controller verifies whether the request header has a token matching the session table; there is also an expiry value.
  • An action verifies the appropriate BCrypt-encrypted password details and generates a token value from a hash.
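A minimal sketch of the frontend half of this flow, using plain localStorage and fetch for illustration (the project itself used the superstore-sync module and its own API helper; the header name and all function names here are assumptions):

```js
// Store and retrieve the auth token; superstore-sync wraps the same idea.
const TOKEN_KEY = 'badger-time-auth-token'; // illustrative key name

function saveToken(token) {
  window.localStorage.setItem(TOKEN_KEY, token);
}

function getToken() {
  return window.localStorage.getItem(TOKEN_KEY);
}

// API helper: attach the token to every request's headers when present.
function apiFetch(path, options = {}) {
  const headers = Object.assign({}, options.headers);
  const token = getToken();
  if (token) {
    headers['Authorization'] = token; // actual header name is an assumption
  }
  // The '/api' prefix is also an assumption for this sketch.
  return fetch('/api' + path, Object.assign({}, options, { headers }));
}
```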


A similar pattern, via the OAuth protocol, is used for the backend Rails API to talk to FreeAgent.

The tokens are stored in the process environment variables and are read directly instead!

So for now, the FreeAgent account is hardcoded.

FreeAgent OAuth tokens are refreshed, and data pulled down, on a recurring clockwork task to keep the Rails models updated! Asynchronously too, thanks to the Sidekiq and Redis combination! No interruptions at all! Deployment and usage continue without a break!

We also decided to diff our remote Timeslips (FreeAgent populates this model) against our Days model on every sync.

This was actually quite easy (algorithm-wise!): we assume that all Timeslips are up to date, so the Days model and its burnt-hours attributes get overwritten. But we don’t overwrite if the burnt hours are already up to date, judged by comparing the updated-at or burnt-hours values.
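Sketched in JavaScript for illustration (the real logic lives in the Rails models; all names here are invented):

```js
// The sync rule in miniature: overwrite a Day's burnt hours from its
// Timeslip, unless the Day is already up to date.
function syncDay(day, timeslip) {
  const upToDate =
    day.updatedAt >= timeslip.updatedAt ||
    day.burntHours === timeslip.hours;
  if (!upToDate) {
    day.burntHours = timeslip.hours;
    day.updatedAt = timeslip.updatedAt;
  }
  return day;
}

// Example: this Day is stale, so its burnt hours get overwritten.
const day = { burntHours: 4, updatedAt: new Date('2014-11-01') };
const timeslip = { hours: 6, updatedAt: new Date('2014-11-03') };
console.log(syncDay(day, timeslip)); // burntHours becomes 6
```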

Leaving comments

Our BDD process is finally done; I’d like to mention that again!

Another trick we set up, DevOps-wise, was to start the PhantomJS debug server in a Docker container and then run the Cucumber tests; we now have console session logs stored! We can view those logs through the PhantomJS web UI!

No more writing to the document just to trace JavaScript errors!



Badger Academy – Week 7

by Sarah Knight

It’s week 7 at Badger Academy, and it feels like things are really starting to come together. As the codebase begins to take shape and more of the blanks are filled in, I’m finding it easier to contribute, as there are now more examples for me to refer to and less code to write completely from scratch. I spent a couple of days building the Roles section (Projects > Phases > Roles) on the frontend, and feel like I’m really starting to grasp how things are linked together, where code is being called from, and what properties are getting passed from one section to another.

Money, money, money

Tiago and I started the week pair-programming to get the money values working properly. We implemented the money-rails gem, and created migrations to add the suffix ‘_pence’ to the money columns. E.g. the fixed_price column in Phases was renamed to fixed_price_pence. However, using the monetize method, the suffix is ignored everywhere else, so you can still just refer to the attribute as fixed_price.

We were able to access the monetize method in the models, which creates a money object from the attribute. Money is converted from pounds to pence, and saved in the database as pence. Then we make sure to convert back from pence to pounds in the views, for users. This means that calculations are done using integers, and no weird rounding errors will occur with floats. Also, should we ever go international with Badger Time, or start getting paid in exotic currencies, having everything set up with money-rails should make currency conversions a doddle.

Promises, Promises

Clicking around in the browser, I discovered a bug where the page was rendering before data had finished loading. A phase belongs to a project, and I found that trying to access some of the phases resulted in an error. It turned out that after fetching each project, and then each of the phases for the first project, the data was being registered as fetched before the other phases had been retrieved.

Viktor decided that the best way to solve this issue was through the use of promises, an entirely new concept to me. The basic idea is that a promise stores a task and ‘promises’ to do something in future, once other criteria have been fulfilled. So you can hold off an action until other tasks have been completed.

The really clever thing about promises is that you can chain them together: once a stage in the code is reached, you can start another promise, and then another one, and so on. Each promise waits for the required actions to be completed before launching its own action, until you get back to the first promise. Everything runs in the sequence you’ve set, and you know that the final task won’t run until everything else has finished. Another really useful feature is the .all function, which allows you to run several tasks in parallel and wait for them all to finish before running another task. This would be much more difficult using classic Node callbacks.

By passing in a silent option, we could hold off on notifying the listeners that data had been fetched until it truly had all been fetched. This also cut down the number of times the page was re-rendered: previously it re-rendered after every single item fetched, which would get ridiculous once Badger Time was filled with content (and was already slightly ridiculous with the small amount of example content that’s in there currently!).

We installed the Bluebird promise library, and then required it in each file we added promises to.

Here’s the code that was added to the Projects store file:

Here’s the code from the Phases store that gets called from the Projects store:
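Neither snippet is reproduced in this post, but a minimal sketch of the pattern, assuming Bluebird promises and Flux-style stores (all names here are illustrative), might look like this:

```js
var Promise = require('bluebird');

// Stand-ins for the real modules, just to make the sketch self-contained.
var api = { get: function (path) { return Promise.resolve([]); } };
var listeners = [];

var PhasesStore = {
  // { silent: true } suppresses the change notification for this fetch.
  fetchForProject: function (projectId, options) {
    return api.get('/projects/' + projectId + '/phases');
  }
};

var ProjectsStore = {
  emitChange: function () { listeners.forEach(function (l) { l(); }); },
  fetchAll: function () {
    return api.get('/projects').then(function (projects) {
      // Fetch every project's phases in parallel and wait for all of them...
      return Promise.all(projects.map(function (project) {
        return PhasesStore.fetchForProject(project.id, { silent: true });
      }));
    }).then(function () {
      // ...then notify listeners exactly once, so the page only renders
      // after the data truly has all been fetched.
      ProjectsStore.emitChange();
    });
  }
};
```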


Badger Academy Week 4

by Sarah Knight

It’s week 4 of Badger Academy, but for me personally, as the 3rd intern to join, it’s the first week. Not only am I a few weeks behind on the Badger Time project, but fresh from the 3-month Web Development Immersive course at General Assembly, I’m also several years behind Tiago and Eric in terms of general programming experience. So my first few days were spent in a state of confusion and growing panic as I tried to read up on an ever-growing list of techniques and technologies that were completely new to me.

Vagrant, Docker, React.js, Gulp, Gherkin, Phantom.js, Browserify, Nginx, Selenium, and Circle CI were a few of the terms I was busy googling. I now have a rough grasp on what most of these are, and how they fit together but it might be a while before I can blog about them with any confidence! Watch this space …

By Wednesday, though, I was able to get stuck in and start to write some proper code, which felt good. I made a start on some tests for the API. We were thinking about using Cucumber for these, but in the end it was agreed that plain RSpec made more sense for the technical back end, and that the more English-readable Cucumber tests would be used for the front end and potentially less techie readers.

Viktor was our senior developer this week, and spent time helping me write some tests for the JSON responses. He also helped refactor some of the React.js code on the front end, giving me an overview of how it all fits together. This was really helpful, as I think I’m now beginning to understand React on a conceptual level … we’ll see how it goes when it comes to actually working with it though!


GitHub Flow

With 3 full-time team members plus 3 part-time senior devs on this project, having a standardised system for version control is important. Most of the projects I’ve worked on previously have been solo efforts, so it was crucial for me to understand the system in place and make sure I didn’t mess up. Luckily we’re using the GitHub Flow workflow, which is simple to pick up and facilitates continuous deployment.

The workflow:

1) Create a new branch locally

Create a new descriptive branch locally from master and commit regularly. Naming things descriptively is always tricky, but done right, it allows everyone to see who’s working on what.

2) Add commits

Committing regularly allows you and others to keep track of your progress on the branch. Each commit is like a snapshot of the branch at a particular time, so you don’t want to leave it too long between commits or too much will have changed. With regular commits of small chunks of code, if you introduce bugs or change your mind about something, you can rollback changes easily. (It’s a bit like time travel!).

3) Open a Pull Request

Once you are ready to merge to master, or want some feedback, open a pull request. Pull requests allow others to review your code, and everyone can add comments. Because Pull Requests accept Markdown syntax, you can even create tickboxes of things to be ticked off (top tip courtesy of Alex!).

4) Discuss and review code

Once a pull request has been opened, other people can see what you’ve been working on, and enter into discussion on Github about it.

5) Merge and deploy

Once you’re happy with the code, and it passes all the tests, you can merge to Master. We have Circle CI set up to automatically test code once a Pull Request has been opened, so you can easily see whether the code is passing tests before you merge.

The golden rule of GitHub Flow is: anything on the master branch is deployable.

Any code on the master branch has been tested and is totally stable. You can create new branches from it with confidence, and deploy from it. We don’t yet have any kind of production server set up, so there is currently no deployment. However, the whole point of Github Flow is continuous deployment, so once that’s up and running, this step will be implemented regularly.

Next Week

To ensure that we learn as much as possible about all aspects of development, we’re taking it in turns to work on the different parts of the project. So just as I was starting to get to grips with the API this week, next week I’ll be doing something completely different and taking a turn on the front end. However, I’m looking forward to exploring React.js and seeing how the testing differs.


Badger Academy Week 3

by Tiago Azevedo

The third week of Badger Academy has passed, and with it ends the first cycle of seniors helping interns. This Thursday we were paired with Joe Stanton. We ran into a lot of problems during the week, which left us somewhat frustrated but also increased our eagerness to learn. Most of our environment setup for development has been done by this point. We managed to decrease our Docker build times from ~20 minutes to 3-5 minutes, depending on how good a day the server was having, but overall it was consistent and fast.

Our focus this week was on testing standards. We were aware of the best practices for testing our software, but implementing them within our projects is what took the bulk of our time.

Testing the API

Testing the Rails backend was fairly straightforward. When we scaffolded the controllers and models for our project, a set of pre-generated RSpec tests was provided for us. Most of them were fairly unoptimised, and some were suited not to an API but rather to a project written completely in Rails.

We kept a few things in mind while writing these tests:

  • Keep tests of one model/controller isolated from other models and controllers
  • Avoid hitting the database where we could.
  • Avoid testing things which are covered by higher level tests.

Expanding on that third point, Joe helped explain which layers to test and which layers we could skip. At the core of our app we have model tests, which are independent of the database and test things like logic and validation. These should eventually make up the majority of our tests, but for the meantime we only have a few validation checks. The ‘medium-level’ tests were things like routing and request tests.

We ended up skipping the routing tests, since once we got to the higher-level integration tests we could infer that if those passed, all our routing was correct. We kept request tests to a minimum, only checking that the API returned the correct status codes, so we could have a sense of consistency across our app; those weren’t necessarily implied by the integration tests.

Following that, we removed the unnecessary stuff and, through the use of FactoryGirl, converted our logic and validation tests to avoid hitting the database, as hitting it would cause a significant slowdown once our project became larger. Some of our higher-level controller tests did hit the database; however, this is unavoidable in most cases and attempting to bypass it would have been more trouble than it was worth.

Testing the Frontend

Our frontend testing was much more difficult to set up. We’re currently running a stack of PhantomJS, CucumberJS and Selenium. CucumberJS is a tool that allows us to write tests in a human-readable format, so that anyone, even without an understanding of programming, can see what’s happening and write their own tests if they want to. This is the basic premise of BDD (behaviour-driven development): we write tests for the functionality of the software beforehand, from the standpoint of the end user and in a language they can understand. This differs from the TDD (test-driven) principles used in the API, as that is written purely in Ruby, and not necessarily from a user’s point of view.
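The original example was a screenshot that isn’t reproduced here, but a representative Gherkin scenario might look like this (the steps are invented, not Badger Time’s actual feature file):

```gherkin
Feature: Logging in

  Scenario: A registered user signs in
    Given I am on the login page
    When I fill in my email and password
    And I press "Log in"
    Then I should see the projects dashboard
```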


That’s an example of a test written in Gherkin (the CucumberJS language – yes, we are aware of all the slightly strange vegetable references). You can probably guess what it tests for. Behind the scenes, the software captures and identifies each of those lines and performs tests based on the parameters specified (e.g. what page you’re on and what action you’re performing).

One issue we struggled with was how to isolate these tests from the API. Since the pages display content from the backend, we needed a way to test using fake data. We went through a variety of methods during the week. First, we thought of simply stubbing out the calls to the API using Sinon, a popular JavaScript mocking and stubbing library. While this would have been the most robust option, we had big difficulties using it with Browserify – a tool we are using which bundles your entire application into one file – so we decided on creating a fake API server using Stubby, which runs only for the duration of the tests and can serve multiple datasets to the frontend, letting us still test a variety of cases.


Now that we have the testing frameworks down, we expect to make fast progress from here on out. We ended up learning and using CircleCI, which automatically runs tests on any pushes or pull requests made to the GitHub repos. This makes sure we only merge stuff into master when everything is working as planned, and that all tests pass on a fresh system before deployment.

Despite all the new technology we have introduced, everything is going more or less smoothly, and we couldn’t ask for a better foundation to build this project from. Not only are we rethinking the way the tech badgers go about the development process, we’re also streamlining the entire production process with lower build times, safe and consistent deployment, and a highly scalable and portable infrastructure.


Badger Academy week 2!

by Eric Juta

This week in Badger Academy, we were joined by Alexander Savin, a senior engineer of many talents. Under his guidance, we assessed the current state of our DevOps, including the decision to use Docker.
Finalising last week’s architecture choices, we promptly laid down the foundations for the road ahead.
There really was a lot of googling, not much Stack Overflow!
Having decided on a one-command workflow for any compatible Unix system, we proceeded to create the mammoth script.


Bash Shell Script

Iteratively tweaking it (Agile!) in the end allowed us to do the following:

  • Git clone Badger-Time
  • Use Vagrant to up the initial CoreOS VM
  • Run the shell script from within the ssh instance to build the Docker containers

(The current container stack, each with its respective data container, being: Rails API, Redis, Postgres, Node, Nginx)

The script itself would then:

  • Pull preinstalled images down
  • Add our config files into them, specifically our Nginx and SSL certificates
  • Mount our Badger-Time code into its respective destinations
  • Install Node and Rails dependencies, then create the databases and migrate them
  • Run all the linked containers, with persisted daemons and their services, in hierarchical order


Badger-Time code up and running on any potential Unix system in less than 15 minutes, without any further interaction.
It sounds like a lot, but it is in fact possible thanks to the high internet speed within the office!


The advantages we discovered in this approach, compared to the previous Badger-Time Vagrant + Ansible setup, were vast in so, so, so many ways!

First of all, an all-in-one up command; we have one extra intern joining us in a week’s time, and getting her laptop up to the current version will require little to no effort.
(Yes, we’ve already tested it, on her preview day at the office)

  • No makefile building? Yes please!
  • Faster tests
  • Reduced memory footprints
  • Same environment from development to our build server to our deployment server
  • Isolate local dev dotfiles and configs from the application
  • 12factor application coherence!


There are many disadvantages too, as you would imagine with any new technology:

  • Initial volume mount mapping configuration
  • Networking association is difficult to comprehend.
    (Dynamic host files generated by linked containers, exposed ports, vagrant)
  • Developer productivity affected by added configuration complexity
  • Double-layer virtualisation! Native support on Linux only
  • The lack of a structured DevOps Docker approach documented online leaves a lot of decisions to the creator.

Admittedly, as we’re still continuously learning, we will grow into the software architect’s hat over time.
Luckily we have constant surveillance and access to the senior engineers over Slack! #badgerbants

Scaffolding the frontend

With the majority of the DevOps out of the way for the developer environment, we discussed with Alex potential ways to scaffold the frontend tests.
This took a lot of learning Gulp with him, to further customise our frontend workflow.

Our gulpfile was set up to do the following tasks:

  • Pull down npm and bower dependencies
  • Build LiveScript React.js components, Index.Jade, Less files, Semantic Grid system
  • Browserify, Concatenate, Uglify
  • Build the LiveScript tests for compatibility with CucumberJS
  • Start the Phantomjs service from within the docker container before running the CucumberJS tests
  • Watch for source code file changes and compile

Letting Gulp do such things allows us to commit and push less code to GitHub, plus it adds developer workflow productivity!
Less context switching; the above are just abstractions!
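For flavour, here’s a trimmed sketch of what such a gulpfile might look like (gulp 3-era API; the task contents are illustrative, not the project’s actual file):

```js
// Illustrative gulpfile sketch, not the project's actual file.
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');
var less = require('gulp-less');

// Compile the LiveScript React.js components and bundle with Browserify.
gulp.task('scripts', function () {
  return browserify('./src/index.ls')
    .transform('liveify') // LiveScript -> JavaScript transform
    .bundle()
    .pipe(source('bundle.js'))
    .pipe(gulp.dest('./dist'));
});

// Compile the Less files to CSS.
gulp.task('styles', function () {
  return gulp.src('./src/**/*.less')
    .pipe(less())
    .pipe(gulp.dest('./dist'));
});

// Watch for source code changes and recompile.
gulp.task('watch', function () {
  gulp.watch('./src/**/*', ['scripts', 'styles']);
});

gulp.task('default', ['scripts', 'styles', 'watch']);
```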

Food for thought

One problem that had to be overcome was the choice of running frontend tests from within the container or outside it.
The issue is that the tests will inevitably be run from within a build server environment before being deployed.
This poses a question: with Nginx serving the static files from a container, should we point the webdriver at it from outside in for tests?

We were a bit stumped at first, so can someone please document a best-practices guide for Docker networking + Docker frontend testing! It may be that someone at Red Badger will have to!

Next week’s tasks!

Next week, Tiago and I will ponder what kinds of tests should be written.

BDD is a major cornerstone of the quality of our projects; we’ll have to assess how to implement it with a split frontend and backend! Let alone learn API design!