20 Aug 2014

I Spent 3 Days With Sandi Metz – Here’s What I Learned

by Jack Hoy


A Learning Experience

Choosing a professional training course has always seemed like a bit of a minefield to me. Most courses have hefty price tags and it’s hard to judge beforehand whether they actually represent good value. Although I find that you can learn pretty much anything online now with a combination of videos, blog posts, ebooks and open source documentation – I really wanted an in-person learning experience, to be in the same room as a master and hear directly from them what makes great software.

Enter Sandi Metz, 20+ years of software development experience and author of the excellent Practical Object Oriented Design in Ruby. Lucky for me she had decided to bring her accompanying Practical Object Oriented Design course to London with assistance from the insightful Matt Wynne, one of the authors of Cucumber. For 20 of us it would be 3 full days of pair programming, code reviews and spirited group discussions.

I jumped at the chance to take part and after attending the course this past June, I wanted to share some of the core concepts with you. Hopefully this post will give you a few new ideas to consider and try out the next time you are in front of your editor.

Let’s begin!

The Brief

One of the tasks we were given during the course was to programmatically generate the lyrics to the song 99 Bottles of Beer. We were given a set of tests, with only the first one being executed; the rest were skipped for the time being. We were asked to make the first test pass before doing anything else. Once it passed, we could unskip the next test and try to make that one pass. We were to repeat this process until all tests passed.

Duplication Is Better Than The Wrong Abstraction

The next thing we were told to do goes against all intuition.

Write shameless code, full of duplication just to make the tests green.

Hang on, isn’t duplication the first thing we learn not to do?

Well yes, it’s true that ultimately you want DRY code but Sandi advised that you are setting yourself up for failure when you try to make your code DRY and full of abstractions before you really understand the problem you are solving.

So this is the first test:
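The original listing isn't reproduced here, but the first test asserted something like the following (a sketch; the exact wording and test framework used on the course are assumptions):

```ruby
# A sketch of what the first test expects. "expected" holds the two
# lines of the opening verse that the suite asserts against.
expected = "99 bottles of beer on the wall, " \
           "99 bottles of beer.\n" \
           "Take one down and pass it around, " \
           "98 bottles of beer on the wall.\n"

# The assertion, roughly: Bottles.new.verse(99) == expected
```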

Here we expect that two lines from the song should be returned when we create an instance of the Bottles class and call the verse method, passing in the number 99.

How would you normally approach this test? Would you get distracted at the prospect of having to generate the whole song and start thinking about writing some clever method to do that? I know I would have in the past. It’s very easy to fall into that trap. I think it’s because as problem solvers, we are always so eager to reach that moment where we ‘get’ the pattern, that we rush ahead and remove duplication too soon or skip it altogether.

Although writing the minimum code to pass the test is a well known technique from TDD, it’s very hard to write ‘shameless’ code, even when explicitly told to do so. Only a couple of pairs in our group managed to meet this goal.

To get the first test to pass, we can start with something simple like this:
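A minimal sketch of that shameless first pass might look like this (the verse text is the standard song lyric; method and parameter names follow the post):

```ruby
class Bottles
  def verse(number)
    # Shamelessly return the exact string the first test expects.
    # The number parameter is ignored entirely.
    "99 bottles of beer on the wall, " \
    "99 bottles of beer.\n" \
    "Take one down and pass it around, " \
    "98 bottles of beer on the wall.\n"
  end
end
```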

We defined the verse method in Bottles with a number parameter and returned the exact same string from the test. We didn’t even use the number. Pretty shameless.

Then, removing the skip from the next test case, we have a similar scenario, but this time the number passed into the verse method is 89:

So now we are forced to do something with the number but we can start the process of duplication by adding a case statement which just returns the full string based on the number passed in:
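That could look something like this (again a sketch, showing only the two branches the tests exercise so far):

```ruby
class Bottles
  def verse(number)
    # Pure duplication: one branch per input, each returning its
    # full verse string directly.
    case number
    when 99
      "99 bottles of beer on the wall, " \
      "99 bottles of beer.\n" \
      "Take one down and pass it around, " \
      "98 bottles of beer on the wall.\n"
    when 89
      "89 bottles of beer on the wall, " \
      "89 bottles of beer.\n" \
      "Take one down and pass it around, " \
      "88 bottles of beer on the wall.\n"
    end
  end
end
```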

You are probably itching to clean that up already, but we are not ready to start abstracting yet. Sandi advised that code with duplication will be easier to handle than the wrong abstraction, so we are better off gathering more information and adding it to the solution until, at some point, an abstraction naturally occurs. The cost of waiting for more information is low.

If we do skip ahead to writing a super smart abstraction too soon, we drastically increase the risk of having to untangle a mess later on.

Why is it easier and cheaper to handle? Although duplication looks ugly, it has far less mental overhead because the input cases are right there in front of you and there is less logic to keep track of. Adding a new input to the solution becomes a matter of adding to the duplication and you will see shortly that we have a neat technique for eventually DRY-ing out this code.

So we can continue in this vein to get the next 3 tests to green; they also just pass different numbers (2, 1 and 0) to the verse method, and each expects a different verse string. To make these tests pass we add the numbers to our case statement and return the strings directly:
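With the 2, 1 and 0 branches added, the case statement grows (the wording below is the standard song and is exactly the kind of detail we are gathering: these three verses differ slightly from the others):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    when 89
      "89 bottles of beer on the wall, 89 bottles of beer.\n" \
      "Take one down and pass it around, 88 bottles of beer on the wall.\n"
    when 2
      # "1 bottle", singular, in the second line.
      "2 bottles of beer on the wall, 2 bottles of beer.\n" \
      "Take one down and pass it around, 1 bottle of beer on the wall.\n"
    when 1
      # "Take it down" rather than "Take one down".
      "1 bottle of beer on the wall, 1 bottle of beer.\n" \
      "Take it down and pass it around, no more bottles of beer on the wall.\n"
    when 0
      # The closing verse loops back to 99.
      "No more bottles of beer on the wall, no more bottles of beer.\n" \
      "Go to the store and buy some more, 99 bottles of beer on the wall.\n"
    end
  end
end
```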

Yuck. But our tests are green, and that means we can keep moving forward with the challenge. The next test requires us to implement a verses method, which takes two numbers defining the range of verses in the song to be generated:

In this case it’s just 99 down to 98. We don’t yet have a case to handle 98 bottles, so we can add that to our verse method the same as we did for 99. Then we can define a new verses method that takes an upper_bound and lower_bound to determine the verses that must be generated. Within the verses method we can call our existing verse method for 99 and 98:
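Sketched out, with the other case branches elided so the snippet stands alone (the blank-line separator between verses is an assumption about the expected output):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    when 98
      "98 bottles of beer on the wall, 98 bottles of beer.\n" \
      "Take one down and pass it around, 97 bottles of beer on the wall.\n"
    # ... the 89, 2, 1 and 0 branches from before stay as they are ...
    end
  end

  # Shameless again: ignore the bounds and return exactly the two
  # verses this test asks for, separated by a blank line.
  def verses(upper_bound, lower_bound)
    verse(99) + "\n" + verse(98)
  end
end
```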

The tests pass and we can move to the next one which requires us to return 3 verses:

So now we need to be a bit smarter about how we generate the verses. We can do this by iterating over the number range with Ruby’s downto, then using the collect method to get each verse, and finally joining them all with newlines:
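A sketch of that generalised verses method, shown here with only the 2, 1 and 0 branches so the snippet stands alone (those are inputs our case statement already handles):

```ruby
class Bottles
  def verse(number)
    case number
    when 2
      "2 bottles of beer on the wall, 2 bottles of beer.\n" \
      "Take one down and pass it around, 1 bottle of beer on the wall.\n"
    when 1
      "1 bottle of beer on the wall, 1 bottle of beer.\n" \
      "Take it down and pass it around, no more bottles of beer on the wall.\n"
    when 0
      "No more bottles of beer on the wall, no more bottles of beer.\n" \
      "Go to the store and buy some more, 99 bottles of beer on the wall.\n"
    end
  end

  def verses(upper_bound, lower_bound)
    # Iterate from the upper bound down to the lower bound, collect
    # each verse, and join them with blank lines in between.
    upper_bound.downto(lower_bound).collect { |n| verse(n) }.join("\n")
  end
end
```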

The final test requires us to implement a song method, that should return the full song from 99 down to 0.

This is actually fairly easy for us to pass, we can just call our ready made verses method, passing in 99 and 0 as the range.
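The song method is then a one-liner (a sketch, with most case branches elided so the snippet stands alone):

```ruby
class Bottles
  def verse(number)
    case number
    when 99
      "99 bottles of beer on the wall, 99 bottles of beer.\n" \
      "Take one down and pass it around, 98 bottles of beer on the wall.\n"
    # ... the 98, 89, 2 and 1 branches elided ...
    when 0
      "No more bottles of beer on the wall, no more bottles of beer.\n" \
      "Go to the store and buy some more, 99 bottles of beer on the wall.\n"
    end
  end

  def verses(upper_bound, lower_bound)
    upper_bound.downto(lower_bound).collect { |n| verse(n) }.join("\n")
  end

  # The full song is just every verse from 99 down to 0.
  # (Unhandled numbers fall through the case and return nil,
  # which join quietly turns into empty strings.)
  def song
    verses(99, 0)
  end
end
```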

Great, now all our tests are shamelessly passing! You can view the solution here. You may have noticed one snag, though: our song method doesn’t actually generate the full song, because our verse method only returns a string when the verse number is 0, 1, 2, 89, 98 or 99. Don’t worry, we’ll soon put that right when we start refactoring.

I think some programmers may argue that this example is trivial enough that you could potentially start abstracting sooner, however, this problem was used to introduce the shameless technique and Sandi made it clear that this approach will serve you well even when faced with harder problems, where you have no idea what the end solution looks like.

To summarise the advice so far, resist the urge to leap ahead to an abstraction. Start breaking the problem down with a simple, shameless solution and don’t be afraid of duplication when starting out.

Refactoring Is Not An Afterthought

One of the most interesting ideas I took away from the course is that refactoring is not really the icing on the cake, it is the process of making the cake.

Instead of spending a long time in the red while we write our complicated method, then eventually getting to green, then maybe if we have enough energy left doing a bit of refactoring – we quickly obtained green tests from our shameless solution and that provides us with a platform to immediately begin the process of refactoring.

How To Refactor

Refactoring is rearranging code without changing behaviour and the approach Sandi recommended was to make tiny, tiny changes in a technique she perfected with Katrina Owen. The technique is to always stay one CTRL-Z (or ⌘-Z) away from green tests using a 4 step process:

  • Compile: Get the new code you want to implement to compile within the same file – it shouldn’t be called yet, this is in order to catch syntax errors
  • Execute: Run your new code but don’t use the result
  • Use: Replace the old code with your new implementation
  • Clean: Clean up and remove any old code you have now replaced

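A hypothetical walk through those four steps (this particular refactoring, introducing a small quantity helper, is our own illustration, not taken from the course):

```ruby
class Bottles
  # Step 1 - Compile: define the new method; nothing calls it yet.
  # Run the tests: still green, we have only added code.
  def quantity(number)
    number.to_s
  end

  def verse(number)
    # Step 2 - Execute: call the new code, but throw away the result.
    quantity(99)

    # Step 3 - Use: swap the new call into the returned string.
    "#{quantity(99)} bottles of beer on the wall, " \
    "#{quantity(99)} bottles of beer.\n" \
    "Take one down and pass it around, " \
    "98 bottles of beer on the wall.\n"
    # Step 4 - Clean: the old hard-coded "99 ..." literal is deleted.
  end
end
```

Each comment marks the point at which you would re-run the tests before moving on.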
After each step you should run your tests to make sure you are still green. I’d never seen an approach like this before but I did experience a certain sense of ‘flow’ when following it during the course and it really forces you to stay on the baby steps path.

It still feels a bit unnatural for me to work in increments this small and I often tend to combine some of them but I have been making an effort to try it out. The idea is that by doing less and being able to CTRL-Z when red, it’s always cheap to go back to a safe place and it prevents you from spending long periods of time stuck with failing tests, hoping it will come right in the end.

What To Refactor

Now we know the process of refactoring (making frequent small changes without changing behaviour), the question remains what should we refactor? If we think we are now in a place where our code has enough duplication and we have enough information, then we can start abstracting.

The process for abstracting is to find the two lines of code that are most similar, then make them more alike.

The important thing to note is that we don’t want to take the things the lines have in common and extract them – e.g. “bottles of beer on the wall” is duplicated throughout, but it adds no value to extract that into a method call or variable. Instead, we find the 2 lines of code with the smallest differences and make them more alike, or the same. By doing this we gradually chip away at the duplication and end up with a number of small methods that can later be refactored into classes.
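For instance (our own illustration), two when branches that differed only in their numbers can be made identical by interpolating the verse number, at which point the branches collapse into one:

```ruby
class Bottles
  def verse(number)
    # The "99 bottles ..." and "89 bottles ..." branches differed only
    # in their numbers; interpolating the parameter makes them the same
    # code, so one branch now covers the general case. (The 0, 1 and 2
    # verses still need their own branches - omitted here.)
    "#{number} bottles of beer on the wall, " \
    "#{number} bottles of beer.\n" \
    "Take one down and pass it around, " \
    "#{number - 1} bottles of beer on the wall.\n"
  end
end
```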

The best way I can explain this technique is to demonstrate it. Watch the video below and I will take you through the process of refactoring the code we have written so far:

Hopefully that gave you a flavour of how easy it is to create abstractions once you have followed the path of duplication. The next stage in this code base is to start extracting some of these methods into a separate class but I will leave that until next time.

In the meantime I would definitely recommend reading Sandi’s book and checking out her next course taking place in October.

Happy hacking :-)

15 Aug 2014

Founders Week: The Importance of Taking Time Out

by Cain Ullah

As I mentioned briefly in my blog post discussing the launch of the Badger Academy, I went to a retreat back in January to take some thinking time away from work. I was cut off from the outside world. There was no internet. Mobile phones were not allowed. Writing and talking was even banned. It was pretty extreme. But it proved to be an enlightening experience not least for coming up with a plethora of new ideas, many of which were strategic ideas on how Red Badger could be improved.

Out of the back of the retreat, I had lots of ideas, a Red Badger Charity Division being one of them. As discussed in greater detail in the Badger Academy blog post, the Charity Division was all about improving our ability to develop from within, developing young talent to become senior leaders in their field. After 6 months of developing the idea in my spare time and with my colleagues, the charity division has now been superseded by Badger Academy, but the objectives have passed verbatim from one to the other. The mechanism through which we achieve the objective has changed.

This isn’t the first time that cutting myself off from the outside world has resulted in new ideas. At Burning Man, an art festival in the middle of the Nevada Desert which is totally cut off from any wifi or phone signal, I thought about bringing in Non-Exec Directors to help advise Red Badger. The move to bring in Mike Altendorf as a Non-Exec is one of the best things we have ever done at Red Badger. He has helped us to become a much more mature business, faster, stopped us from making mistakes (that he had made in the past) and helped us to re-shape how we do sales.

Building product as part of a pitch (via a Hackathon) was also thought up at the same retreat as the Charity Division this January. This new lean approach to sales “The Proof is in the Pudding” helped us to win the biggest project in our history in May.

I think you get the point. Cutting off wifi and phone signal is important in fostering creativity. It’s become such a distraction in everyone’s lives. If you sit on a bus on the way to work and look around you, everyone’s head is buried in a digital screen. On the bus, people contemplate less, do less book reading and less talking to each other in general. However, more important than just cutting yourself off from wifi or the telephone, taking time out is about giving your mind the space to think creatively and you can’t do this with the distraction of everyday life; internet or no internet.

I’m not saying we wouldn’t have gotten to these decisions or ideas anyway. Mike Altendorf might well have joined our ranks eventually, or we might have started a Badger Academy at some point. I just don’t know. What I am sure of is that it would have taken much longer had I not taken time out to just think.

Red Badger Founders Week

Reflecting on the value of the time I have had to myself, I have been doing some reading around it. It seems that taking time out is not uncommon. I watched a great ’90s documentary called “Triumph of the Nerds” in which Bill Gates talks about setting aside a week every year to read all of the books in his “to read” list.

So I suggested to Dave and Stu that the three of us take a Founders Week, to do some more strategic thinking away from the day-to-day of running the business. After I suggested the idea, it became apparent that Dave had already been considering taking a week too, but by himself, not the three of us together. When I suggested we do it together, both Stu and Dave were sold immediately.

Dave sent me this link: Take a Bill Gates-Style “Think Week” to Recharge Your Thinking on Lifehacker. The article by Michael Karnjanaprakorn talks about Steve Jobs, Mark Zuckerberg, and Bill Gates all having taken regular think weeks. It links to a great article, “Creative Thinking Matters”, which focuses specifically on Bill Gates’ “think weeks”: what he used to do during the week and how much innovation evolved out of Microsoft as a result.

There is also a health aspect to taking time out. Michael Karnjanaprakorn is starting “Feast Retreats”, capped at 20 people, where he will ban cell phone/WiFi usage throughout the weekend. “My goal is to share what I learned during my time off with The Feast community. There will be lots of yoga, healthy eating, and personal development to show the value and power of taking time off.”

All of the articles I have read about the power of time off can’t speak highly enough about the value it brings in promoting creative thinking, innovation and an increase in company productivity.

So, Stu, Dave and I are taking our first “Founders Week” at the end of November. We are going to book a cottage somewhere just outside of London, switch our phones off and take some time to ourselves. We’re not sure exactly what we’re going to do yet, but we all have books we want to read that we just haven’t had a chance to yet, we’ll eat healthily and probably do some workshops. Apart from that, it’s just an opportunity to take some time to think, reflect and generally relax our minds.

I am sure the benefit will be a rapid generation of new ideas that will impact Red Badger for years to come.

14 Aug 2014

The Launch of Badger Academy

by Cain Ullah

Back in January this year I went away for a 10 day retreat. The initial intention was to get away from work completely. No phone. No internet. No work. However, unexpectedly it ended up being incredibly conducive to coming up with a whole plethora of creative ideas. Some were non-work related but lots of new ideas were very much work related. (See this blog post I have written on Founders Week: The Importance of Taking Time Out). One of these ideas, in its rawest form was how we can source and develop young talent and turn them into very highly skilled developers, designers, project managers or whatever else. This has resulted in the quiet launch of Badger Academy this week.

A little bit of context

At Red Badger, a huge amount of investment goes into recruitment. Finding the best talent out there is difficult. As a company we hang our hat on quality, quality being the #1 Red Badger defining principle. As a result, we’re very fussy when it comes to hiring people. This, I have no doubt, will stand us in good stead for the future, so we are determined to maintain our standards in staff acquisition. But it poses a problem – how do we scale the business to service the ever increasing demands of a rapidly growing sales pipeline, without reducing quality?

I think the answer is to improve our ability to develop from within. So, we are hatching plans to invest heavily in developing young talent to become senior leaders in their field. We realise this will take time but Badger Academy is the first experiment that we hope will fulfill the overall objectives.

A Blueprint for Success

In the summer of 2011, when we were a much, much smaller business, we put out a job ad for a summer intern. Out of the 60 or so applicants, one Joe Stanton stood out head and shoulders above the rest. By the time he joined us he had just started the 2nd year of his degree, so he worked with us for 8 hours a week. He had bags of talent and, as a Computer Science student, was being taught the vital foundational knowledge you’d expect from a Computer Science degree. However, he obviously lacked industry experience and had no knowledge of modern web application engineering practices such as Behaviour Driven Development.

At the time, we had much more time to spend with Joe to ensure that he was doing things properly and with our guidance and his astute intellect, he developed his knowledge rapidly. He then had a gap year with us during which he was deployed and billed on real projects before going back to part-time for his final year of University. He graduated this summer and after a bit of travelling around Europe, he joined us permanently. On his first day, he was deployed onto a project as a billable resource having had almost 3 years of industry experience. He has hit the ground running in a way that most graduates would not be able to.

Joe has been a resounding success. The problem is how you scale this to develop multiple interns especially now that as a company, our utilisation is much higher. We can no longer spare the senior resources to spend the sort of time we could with Joe at the very beginning.


Joe Stanton – The Badger Academy Blueprint !!!

The Evolving Plan

When I was at the aforementioned retreat, my ideas were based around a project that we were just kicking off for an incredible charity – The Haller Foundation. We were embarking on a journey to build a responsive mobile web application to help farmers in Kenya realise the potential of the soil beneath their feet (for more detail, see our previous blogs, and look out for the official launch of the Haller website later this year). What was key in my thinking was that we had planned for a mixture of experience in the project team, which included two intern software engineers (one being Joe Stanton) who were working 2 days a week whilst completing their final year at Uni. We were delivering the project for free (so Haller was getting a huge amount of benefit) and we were training and developing interns at the same time. Win-win.

So, this formed the basis of my initial idea – The Red Badger Charity Division. We would use interns to deliver projects on a pro-bono basis for registered charities only. The charity would need to understand that this is also a vehicle for education and thus would need to be lax on their timelines and we would develop interns through real world project experience in the meantime. Although a great idea, this wasn’t necessarily practical. In the end, the Haller project required some dedicated time from some senior resources and cost us over £20K internally to deliver. A great cause but not a sustainable loss to build a platform for nurturing talent upon.

So, over several months after my retreat (7 to be exact) in-between many other strategic plans that were being put in place at Red Badger, with the help of my colleagues, I developed the idea further and widened its horizons.

Rather than being focussed on just charity projects (charity projects will remain part of the remit of the Badger Academy), we opened the idea out to other internal product development ideas as well. We also put a bit of thinking into how we could ensure the juniors get enough coaching from senior resources to ensure they are being trained properly.

Objective

Badger Academy’s primary objective is to train interns that are still at University who will be working part-time with a view to them having a certain level of experience upon graduation and hopefully joining Red Badger’s ranks. However, it may also extend to juniors who have already graduated (as a means to fast tracking them to a full-time job), graduates from General Assembly or juniors who have decided not to go to University.

It will require some level of experience, i.e. we will not train people from scratch. But once Badger Academy has evolved, the level of experience of participants will vary greatly. In the long term we envisage having a supply chain of interns that are 1st years, 2nd years, gap year students and 3rd years, all working at once.

Above is a diagram I drew back in April 2014 when initially developing the future strategy for Badger Academy. This has now been superseded and developed into a much more practical approach but the basic concept of where we want to get to still remains the same.

So what about the likes of General Assembly?

Badger Academy does not compete with the likes of General Assembly. We are working very closely with General Assembly, providing coaches for their courses and have hired several of their graduates. In fact, General Assembly fits in very nicely with Badger Academy. It is the perfect vehicle for us to hire a General Assembly graduate to fast track them over a period of 3 months until they are billable on projects. A graduate from General Assembly would generally not have been a viable candidate for Badger Academy prior to doing the General Assembly course. Like I say, all candidates need a certain level of experience beforehand. Badger Academy is not a grassroots training course.

Implementation

It is imperative that interns and juniors are trained by more senior resources. As a result we’ll be taking one senior resource off a billable project for one day a week to dedicate their time to training the Badger Academy participants. To reduce the impact on specific projects, we will rotate the senior coaches across multiple projects. We will also rotate by the three University terms. So for the autumn term we will have 3-4 senior coaches (all from separate projects) on weekly rotation until the end of the term. For the spring term we will refresh the 3-4 coaches, and again for the summer term. This way everyone gets to teach, there is some consistency of tutors for the interns during term time, and project impact is mitigated.

Summary

There will be a set syllabus of training topics for each discipline. As this is the first week, we have decided to build the syllabus as we go. Our current interns are both software engineers, so we imagine getting pretty quickly into engineering practices such as testing strategy (e.g. BDD), but also other disciplines that are vital to delivering quality products, such as Lean/Agile methodologies, devops and all of the other goodness that Red Badger practises daily.

This is an initial blog post about our current activity and is light on detail. As this develops, we’ll formalise the approach and publish more insightful information about what it actually entails.

What we need to not lose sight of, is that this is an innovation experiment. We need to learn from it, measure our success (as well as our failures) and adapt. This is part of a long term strategy and we are just at the beginning.

Disclaimer: Red Badger reserves the right to change the name from Badger Academy. This has not been well thought through!

8 Aug 2014

The Design Sprint at Red Badger

by Sinem Erdemli

At Red Badger, we typically start projects by gathering insights and working out initial concepts. This allows us to understand the users, identify a scope for the project with clients and prepare design assets for development. Also known as ‘sprint zero’, it is an intensive week of absorbing as much as possible and coming up with a plan together with the client.


Project Brief

We were approached to design and develop a multichannel touch screen that would go in a retail store. Unlike most e-commerce projects, this one wasn’t about more sales conversions or higher profits. The goal was to improve the in-store experience and increase customer engagement. Handed a presentation as the project brief, we could almost see a sea of open-ended solutions in front of us. Just as we were daydreaming about virtual shopping assistants and personalised product recommendations, we were hit by the resource plan: only 4 weeks of development and an expected launch in 6 weeks. This was going to be a very short pilot project, with possibly many more to come. We needed a tangible starting point to validate the concept and iterate on as quickly as possible.


Our Approach

We decided to run a design sprint to kickstart the project. We wanted to come up with a concept we could test and validate before our developer joined the team. 

The design sprint is an intensive week-long process of problem solving. Google Ventures runs Design Sprints for their portfolio companies to be able to make fast and predictable product design decisions. It doesn’t matter if the product/service is brand new, or is looking for a make-over; the whole point of a design sprint is to explore the problem in hand and come up with testable solutions quickly.  

The design sprint combines design thinking principles with the lean methodology. Built on iteration, fast-paced decision making and prototyping, it aims for the sweet spot between three forces: desirability (user), feasibility (technology) and viability (marketplace/project scope).

The one hour planning meeting quickly turned into rapid sketching sessions and before we knew it we were already busy sketching out our ideas.


Day 1 


We started with the project brief and went through any material we had available. We looked into the best examples of multichannel implementations in-store, discussed the results from previous user research on customers and mapped out a high level user journey. By the end of the day we had defining keywords for the experience, dozens of bookmarked examples and the rough sketches of our Crazy Eights sketching session.


Day 2


We spent the morning working on the sketches and refining the user journey. The official kickoff with our clients was in the afternoon, which would let us get an initial feedback on everything we had and the direction we were taking.


Day 3

By Day 3 we had quite a few sketches, so we started to put them in a digital format. To get initial feedback on some of our assumptions, we used the projector in the office to see how the other Badgers reacted. We grabbed a few developers by the kettle and asked if any of the elements might cause headaches when it came to implementation. UX and Design worked closely at this stage; the transition from sketch to wireframe and then to design happened at almost lightning speed.


Day 4


Day 4 was testing day. We had been itching for some feedback to see if our assumptions were somewhat valid. We gave the wireframes a final pass until they were ready for testing, then mocked up a prototype by sticking an iPad and the homepage design on a piece of cardboard and went out for some user testing.


Day 5


By Day 5, the meeting room looked more like a ‘war room’ than anything. We iterated on the wireframes for the 4th time based on the user tests and flagged our technical requirements.

We had a clickable prototype that linked all screens for our first progress meeting. The meeting went well with positive feedback. Having a clickable prototype helped us discuss features that would have potential development risks. 

By the end of the week we had enough information from users, the client and developers to put together a backlog of epics and list our technical requirements. The overall impact of the design sprint is still to be seen but for now it’s fair to say that it will stick with the Badgers for a while.


30 Jul 2014

London React Meet Up – V2

by Robbie McCorkell

On Wednesday last week we held the second meet up of the London Facebook React User Group. We had a fantastic turnout this month, with roughly 80 people packing into Red Badger’s office in Shoreditch.

React London User Group Second Meetup

There was a broad range of talks throughout the evening. Stuart Harris kicked off proceedings with a 10 minute intro to React in response to feedback we had from last time. This led on to an in-depth look at what’s new in React 0.11 and at how to improve performance in your app using React’s new diagnostic tools (slides here).

Next up was Alex Savin with a convenient segue into improving React performance with persistent data structures using Morearty (slides here). Forbes Lindesay followed to tell us about his latest project, Moped, a framework for building real-time isomorphic applications in React.

Our final talk was from Viktor Charypar, who proudly showed off Red Badger’s work with Sky, an isomorphic React application built with Ruby on Rails. For all these talks and more, hit the YouTube link below.

Hopefully this meet up will continue to grow in popularity, and if so we might need a bigger venue! You can keep up with the latest announcements here, and we are always looking for more speakers, so if you want to give a talk, no matter the length, please do let us know.

See you all next time.

24 Jul 2014

Responsive Testing on Mobile with Ghostlab and Device Lab

by Roisi Proven

When starting responsive web development, making sure that a website works on the growing number of phones, tablets and phablets out there feels like a daunting task. The granularity of screen sizes is such that covering even just a solid core of devices means having a pile of gadgets cluttering up your desk pretty darn quick.

When I was first shown Device Lab, I must admit I was skeptical. It’s just a box covered in velcro, right? It might look pretty but will it actually help my workflow?

I suppose in the most basic of terms, it is “just” that. However, Device Lab is nothing without Ghostlab. When you bring the two together, Device Lab becomes an awesome tool for streamlining workflows and getting different devices tested quickly. It also has the ability to freak out your workmates as you scroll synchronously with 9 devices at once despite them having no instantly apparent connection to one another.

Device Lab

Running Fortnum & Mason.london via Ghostlab

Ghostlab is incredibly easy to set up given the complexity of the task it performs. Simply install, fire up, then drag and drop the URL or repo that you want to test. It’s that easy. I spent a little while searching the Knowledge Base for the point at which it was going to get complicated, but it hasn’t yet.

Once you’re set up with a site to test, the magic begins. There are two ways to deploy the site to your mobiles: either using the IP address associated with the site or, even easier, by scanning the handy QR code that is generated within Ghostlab. There are no wires and no awkward configurations, just click and run. Once you are hooked up, anything that you do on one device is replicated on all the others, and it even works intelligently to show the corresponding interactions on both desktop and mobile.

Screen Shot 2014-07-21 at 13.41.56

Ghostlab setup.

I’m not sure whether there’s a limit to the number of devices running at the same time, and of course when running over WiFi it all depends on your connection, but I’ve found that over our work internet I can happily run 7 or 8 devices with fairly minimal lag. This means that I can do a run of tests on devices in portrait mode, then reconfigure Device Lab and run all the tests again in landscape.

Testing Responsive sites is never going to be easy, but with tools like Ghostlab out there we can at least make more headway in order to give customers on all devices a great experience.

14
Jul
2014

Red Badger at the Financial Times Creative Summit

by Joe Dollar-Smirnov

financial_times

The tone was set from the moment we arrived at the FT for the 2 day creative summit. A cheerful and friendly security guard welcomed us and issued our name badges.

“Yes young man, take a seat and someone will be down to collect you.” I haven’t been called young man since… well, since I was a young man. A few glances around reception and it is clear that we are not the only early arrivals keen to get stuck in to some creative conundrums.

Accents and faces from all over the world. Sure enough, the summit brought together some seriously impressive talent from home and away. BBC, Google, MIT and FT China were just a few of the represented organisations.

The brainchild of the Product Management Director for FT.com, and organised by the smart chap who brought us the BBC’s The Apprentice, the Creative Summit was designed to elicit the most creative and innovative ideas from the attendees through 2 days of intense creative thinking, discussion, design and development. Various ‘unconference’ activities and organised, bite-sized friend-making and networking sessions allowed everyone in the room to move around and get to know a few people. Backgrounds acknowledged, expectations exchanged and breakfast pastries devoured, we were ready to start understanding the big problems the newspaper industry is facing. And even better, to consider some solutions.

As many refreshments as you could consume in your wildest dreams kept everyone firing on all cylinders for the entire session. Of course lunch was laid on, as well as an evening meal, with the option to work as late as we liked if we thought that was a good idea.

Camera crews were filming the creativity and taking photos along the way. It was obvious that this was of massive importance to the Financial Times, and a clear sign of their commitment to developing new services that will keep them ahead of the competition and exploring interesting ways to engage with readers old and new.

We got to meet and work with some very interesting people from all levels of some very interesting organisations. Leaders within the FT took a very active role in the event and spent time walking the room sitting down and understanding the concepts that were coming out of the summit.

The Badgers split to join two different teams. I joined a team that consisted of a serial startup Chief Exec with a history in financial risk management, an FT Developer and an FT Marketing exec, the latter two acting as our insight for the 2 days, providing not only valuable ideas but also key information about typical FT users, marketing insights and future aspirations for the company. One of my biggest personal challenges of the 2 days was adapting to working with very different people very quickly. You cannot take part in a project like this without throwing yourself into it completely, and that means you have to avoid dancing around any conflicts and face them head on. A heated debate over UCD and heavy umming and ahhing over our numerous and constant stream of ideas kept me on my toes. It also proved a great testing ground for one of our key philosophies: collaboration. Externalising ideas and working as a team proved to be an essential contributor towards our winning idea.

financial times creative summit

Through gratuitous use of post-its, plasticine, pipe cleaners and morning pastries, we worked on an initial brain dump of ideas around 6 core issues the FT faces, ranging from introducing new readers to the publication through to new ways to monetise and increase subscriptions. They had varying levels of grandiosity, with the most ambitious not dissimilar to “how to be a better Google”. There was no shortage of inspiration and challenge.

At the end of day one, teams took turns standing up and explaining their loose concepts. Some teams worked into the night; fortunately the Badgers went home to get their beauty sleep. The final day was more about refining the ideas and, contrary to my initial thoughts, was not as hectic as I imagined. We all had a common goal that we were charging towards. The grand finale consisted of a pitch on stage with a 3 min deadline. Ideas were judged by some heavy hitters from FT.com: namely the Editor, the CIO and the Director of Analytics.

financial times creative summit

Winners

Both of the teams Red Badger were part of won commendations for their great work, taking 2 of the 4 awarded. The top 2 overall winning concepts went into production, which was well deserved. I look forward to seeing how the new products develop and go to market.

ft creative summit

The 2 winning entries that we were part of were commended for:

Innovative Reader Experience

“Which re-imagined the way stories could be constructed (or deconstructed) for time poor younger readers who want the quick facts and analysis.” This team included our very own Imran Sulemanji and Maite Rodriguez.

Best Social

“A creative way to gain FT profile and reputation and engage with others through FT content.”

This was one of the most interesting creative summits I have been to, for the sheer mix of people, the breadth of problems to solve and the level of involvement from internal stakeholders. I am glad that we had the opportunity to take part in it and spread some of the Red Badger process, enthusiasm and creativity.

endofday

29
Jun
2014

First London Facebook React User Group

by Stuart Harris

On Wednesday we held the inaugural London Facebook React User Group meeting at Red Badger's office in Shoreditch. In just a few weeks, the group has grown to 138 members and we're holding monthly meetups.

React meetup

Not a bad turnout for the first one!

We had 3 talks. The first was from Alex Savin on using LiveScript (instead of JSX) to build React components. I did a talk on building isomorphic apps with React (we've put together a sample repo on GitHub so please contribute ideas).

Finally Forbes Lindesay, who maintains the much loved Jade, talked about the promising React Jade (his slides are here). There were some really interesting conversations during the course of the evening, both during the session and over the pizza and beer.

If you attended the meetup, we would love to hear your comments or suggestions so please take the quick survey.

Also, if you'd like to talk at an upcoming event, please let us know. We're already lining up some great talks, but will always need more :-)

You can watch the event on YouTube.

The next meetup is on Wednesday 23rd July so please register and come along. After that we'll take a summer break and start again in September.

27
May
2014

Automated cross-browser testing with BrowserStack and CircleCI

by Viktor Charypar

Robot testing an application

By now, automated testing of code has hopefully become an industry standard. Ideally, you write your tests first and make them a runnable specification of what your code should do. When done right, test-driven development can improve code design, not to mention that you end up with a regression test suite to stop you from accidentally breaking things in the future.

However, unit testing does just what it says on the tin: tests the code units (modules, classes, functions) in isolation. To know the whole application or system works, you need to test the integration of those modules.

That’s nothing new either. At least in the web application world, which this post is about, we’ve had tools like Cucumber (which lets you write user scenarios in an almost human language) for years. You can then run these tests on a continuous integration server (we use the amazing CircleCI) and get a green light for every commit you push.

But when it comes to testing how things work in different web browsers, the situation is not that ideal. Or rather it wasn’t. 

Automated testing in a real browser

The gold standard of automated testing against a real browser is Selenium, the browser automation tool that can drive many different browsers using a common API. In the Ruby world, there are tools on top of Selenium providing a nice DSL for driving the browsers using domain-specific commands like page.click 'Login' and expectations like page.has_content?('something').

Selenium will open a browser and run through your scripted scenario and check that everything you expected to happen did actually happen. This should still be an old story to you. You can improve on the default setup by using a faster headless browser (like PhantomJS), although watching your test complete a payment flow on PayPal is kinda cool. There is still a big limitation though.

When you need to test your application on multiple browsers, versions, operating systems and devices, you first need to have all that hardware and software and second, you need to run your test suite on all of them.

So far, we’ve mostly solved this by having human testers. But making humans test applications is a human rights violation, and a good tester’s time is much better spent creatively trying to break things in unexpected ways. For some projects, there isn’t even enough budget for a dedicated tester.

This is where cloud services, once again, come to the rescue. And the one we’ll use is called BrowserStack.

BrowserStack

BrowserStack allows you to test your web applications in almost every combination of browser and OS/device you can think of, all from your web browser. It spins up the right VM for you and gives you a remote screen to play around with. That solves the first part of our problem: we no longer need to own all those devices and browsers. You can try it yourself at http://www.browserstack.com/.

Amazingly, BrowserStack solves even the second part of the problem by offering the Automate feature: it can act as a Selenium server, to which you can connect your test suite using the Selenium remote driver, and automate the testing. It even offers up to ten parallel testing sessions!

Testing an existing website

To begin with, let’s configure a Cucumber test suite to run against a staging deployment of your application. That has its limitations – you can only do things to the application that a real user could, so forget mocking and stubbing for now (but keep on reading).

We’ll demonstrate the setup with a Rails application using Cucumber and Capybara, and assume you already have some scenario to run.

First, you need to tell Capybara what hostname to use instead of localhost.
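The original snippet hasn’t survived in this archive; a minimal sketch might look like this (the STAGING_URL environment variable is an assumption – use whatever points at your staging deployment):

```ruby
# features/support/cross_browser.rb
# Run the suite against the deployed staging app rather than a local server.
# STAGING_URL is an assumed variable name, e.g. "https://staging.example.com"
Capybara.app_host = ENV['STAGING_URL']
Capybara.run_server = false
```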

Next, loosely following the BrowserStack documentation, we’ll configure the remote driver. Start by building the BrowserStack URL, using environment variables to set the username and API authorization key.

Then we need to set the desired capabilities of the remote browser. Let’s ask for Chrome 33 on OS X Mavericks.
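The code for these two steps was also lost; a hedged reconstruction, assuming BS_USERNAME and BS_AUTHKEY environment variables hold your BrowserStack credentials, might be:

```ruby
require 'selenium/webdriver'

# Hub URL with credentials taken from the environment
url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

# Desired capabilities: Chrome 33 on OS X Mavericks
capabilities = Selenium::WebDriver::Remote::Capabilities.new
capabilities['browser']         = 'chrome'
capabilities['browser_version'] = '33.0'
capabilities['os']              = 'OS X'
capabilities['os_version']      = 'Mavericks'
```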

The next step is to register a driver with these capabilities with Capybara, and use it.
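A sketch of the registration, assuming the url and capabilities variables built in the previous steps:

```ruby
# Register a remote Selenium driver pointed at BrowserStack's hub
Capybara.register_driver :browser_stack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote,
    url: url,
    desired_capabilities: capabilities)
end

# Make it the driver Capybara uses by default
Capybara.default_driver = :browser_stack
```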

If you run cucumber now, it should connect to BrowserStack and run your scenario. You can even watch it happen live in the Automate section!

Ok, that was a cool experiment, but we wanted multiple browsers and the ability to run on BrowserStack only when needed would be good as well.

Multiple different browsers

What we want then, is to be able to run a simple command to run cross-browser tests in one browser or a whole set of them. Something like

rake cross_browser

and

rake cross_browser:chrome

In fact, let’s do exactly that. First of all, list all the browsers you want in a browsers.json in the root of your project
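The original file isn’t preserved here; the browser keys and configurations below are illustrative (the post later mentions running four browsers), not the exact set used:

```json
{
  "chrome":  { "browser": "chrome",  "browser_version": "33.0", "os": "OS X",    "os_version": "Mavericks" },
  "firefox": { "browser": "firefox", "browser_version": "28.0", "os": "Windows", "os_version": "7" },
  "ie":      { "browser": "ie",      "browser_version": "11.0", "os": "Windows", "os_version": "8.1" },
  "safari":  { "browser": "safari",  "browser_version": "7.0",  "os": "OS X",    "os_version": "Mavericks" }
}
```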

Each of those browser configurations is stored under a short key we’ll use throughout the configuration to make things simple.

The rake task will look something like the following
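The original task definition is missing from this archive; a sketch of what the description below implies:

```ruby
# Rakefile (sketch)
require 'json'

# Load the browser configurations once and keep them in a constant
BROWSERS = JSON.parse(File.read('browsers.json'))

desc 'Run the cross-browser scenarios in every configured browser'
task :cross_browser do
  BROWSERS.each_key do |key|
    Rake::Task["cross_browser:#{key}"].invoke
  end
end
```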

First we load the JSON file and store it in a constant. Then we define a task that goes through the list and for each browser executes a browser specific task. The browser tasks are under a cross_browser namespace.

To pass the browser configuration to Capybara when Cucumber gets executed we’ll use an environment variable. Instead of passing the whole configuration we can just pass the browser key and load the rest in the configuration itself. To be able to pass the environment variable based on the task name, we need to wrap the actual cucumber task in another task.
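The wrapper and inner tasks aren’t shown in the archived post; a sketch of that pairing (the @cross-browser tag name and the _cucumber suffix are assumptions):

```ruby
require 'cucumber/rake/task'

namespace :cross_browser do
  BROWSERS.each_key do |key|
    # Outer task: expose the browser key via an environment variable,
    # then invoke the real cucumber task
    task key do
      ENV['BROWSER'] = key
      Rake::Task["cross_browser:#{key}_cucumber"].invoke
    end

    # Inner task: only run scenarios tagged for cross-browser execution
    Cucumber::Rake::Task.new("#{key}_cucumber") do |t|
      t.cucumber_opts = '--tags @cross-browser'
    end
  end
end
```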

The inner task then extends the Cucumber::Rake::Task and provides some configuration for cucumber. Notice especially the --tags option, which means you can specifically tag Cucumber scenarios for cross-browser execution, only running the necessary subset to keep the time down (your daily time running BrowserStack sessions is likely limited after all).

The cross_browser.rb changes to the following:
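The updated file was lost; a sketch that loads the configuration by the key passed in the BROWSER environment variable (credential variable names as before are assumptions):

```ruby
# features/support/cross_browser.rb (sketch)
require 'json'
require 'selenium/webdriver'

# Look up this run's browser configuration by its short key
browser = JSON.parse(File.read('browsers.json'))[ENV['BROWSER']]

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

capabilities = Selenium::WebDriver::Remote::Capabilities.new
browser.each { |option, value| capabilities[option] = value }

Capybara.register_driver :browser_stack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote, url: url, desired_capabilities: capabilities)
end

Capybara.default_driver = :browser_stack
```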

That should now let you run

rake cross_browser

and watch the four browsers fly through your scenarios one after another.

We’ve used this setup with a few modifications for a while. It has a serious limitation however. Because the remote browser is accessing a real site, it can only do as much as a real user can. Initial state setup and repeatability are difficult, not to mention it isn’t the fastest solution. We really need to run the application locally.

Local testing

Running your application locally and letting Capybara start your server enables you to do everything you are used to in your automated tests – load fixtures, create data with factories, mock and stub pieces of your infrastructure, etc. But how can a browser running in a cloud access your local machine? You will need to dig a tunnel.

BrowserStack provides a set of binaries able to open a tunnel to the remote VM and connect to any hostname and port from the local one. The remote browser can then connect to that hostname as if it could itself access it. You can read all about it in the documentation.

After you’ve downloaded a BrowserStack tunnel binary for your platform, you’ll need to change the configuration again. The app_host is localhost once again, and we also need Capybara to start a local server for us.
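A sketch of the changed settings (the port matches the 127.0.0.1,3001 used in the tunnel command below):

```ruby
# Serve the app locally; the remote browser reaches it through the tunnel
Capybara.app_host = 'http://127.0.0.1:3001'
Capybara.run_server = true
Capybara.server_port = 3001
```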

We also need to tell BrowserStack we want to use the tunnel. Just add
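The capability in question is BrowserStack’s browserstack.local flag (assuming the capabilities object from the earlier configuration):

```ruby
capabilities['browserstack.local'] = 'true'
```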

to the list of capabilities. Start the tunnel and run the specs again

./BrowserStackLocal -skipCheck $BS_AUTHKEY 127.0.0.1,3001 &
rake cross_browser

This time everything should go a bit faster. You can also test more complex systems that need external APIs or direct access to your data store because you can now mock those.

This is great! I want that to run for every single build before it’s deployed like my unit tests. Testing everything as much as possible is what CI servers are for after all.

Running on CircleCI

We really like CircleCI for its reliability, great UI and especially its ease of configuration and support for libraries and services.

On top of that, their online chat support deserves praise in a separate paragraph. Someone is in the chat room all the time, responds almost immediately and they are always very helpful. They even fix the occasional bug in near real time.

To run our cross browser tests on CircleCI we will need a circle.yml file and a few changes to the configuration. The circle.yml will contain the following
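The original file isn’t preserved; a sketch using the circle.yml syntax of the time – the script path is an assumption:

```yaml
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh
    - ./browserstack/BrowserStackLocal -skipCheck $BS_AUTHKEY 127.0.0.1,3001:
        background: true
    - bundle exec rake cross_browser
    - ./script/browserstack.sh stop
```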

We run unit tests, then cucumber specs normally, then open the tunnel and run our rake task. When it’s done, we can close the tunnel again. To download and eventually stop the tunnel we wrote a little shell script
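The script itself was lost from the archive; a sketch matching the description below (the path, filename and download URL are assumptions):

```shell
#!/bin/bash
# script/browserstack.sh (sketch)
set -e

if [ "$1" = "stop" ]; then
  # Kill any BrowserStack tunnels still running
  pkill -f BrowserStackLocal || true
  exit 0
fi

# Download and unpack the 64-bit Linux binary into a directory
# that CircleCI caches between builds
if [ ! -x browserstack/BrowserStackLocal ]; then
  mkdir -p browserstack
  curl -sSL -o browserstack/BrowserStackLocal.zip \
    https://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
  unzip -o browserstack/BrowserStackLocal.zip -d browserstack
  chmod +x browserstack/BrowserStackLocal
fi
```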

It downloads the 64-bit Linux BrowserStack binary and unpacks it into a browserstack directory (which is cached by CircleCI). When passed a stop parameter, it will kill all the running BrowserStack tunnels. (We will eventually make the script start the tunnel as well, but we had problems with backgrounding the process, so it’s done as an explicit step for now.)

Finally, we can update the configuration to use the project name and build number supplied by Circle to name the builds for BrowserStack
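A sketch of that change, using the project and build environment variables CircleCI provides and BrowserStack’s project/build capabilities:

```ruby
# CIRCLE_PROJECT_REPONAME and CIRCLE_BUILD_NUM are set by CircleCI
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME']
capabilities['build']   = ENV['CIRCLE_BUILD_NUM']
```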

That setup should work, but it will take a while going through all the browsers. That is a problem when you work in multiple branches in parallel, because the testing becomes a race for resources. We can use another brilliant feature of CircleCI to limit the impact of this issue: we can run the tests in parallel.

The holy grail

Marking any task in circle.yml with parallel: true will make it run in multiple containers at the same time. You can then scale your build up to as many containers as you want (and are willing to pay for). We are limited by the concurrency BrowserStack offers us, and on top of that we’re using just 4 browsers anyway, so let’s start with four, but plan for more devices.

First, we need to spread the individual browser jobs across the containers. We can use the environment variables provided by CircleCI to see which container we’re running on. Our final rake task will look like this
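The final task itself isn’t preserved, but the bucketing logic the next paragraph describes can be sketched as a pure helper (the helper name is mine), which the per-browser tasks from earlier would then consume:

```ruby
# Split browser keys across `nodes` containers; return the keys that the
# container with index `node_index` should run.
def browsers_for_node(keys, nodes, node_index)
  per_bucket = (keys.length.to_f / nodes).ceil  # bucket size, rounded up
  buckets = keys.each_slice(per_bucket).to_a    # one bucket per container
  buckets[node_index] || []                     # containers past the end get nothing
end

# Inside the rake task, CircleCI's environment tells us where we are:
#   nodes             - total container count (passed as `nodes=4`)
#   CIRCLE_NODE_INDEX - this container's 0-based index
# browsers_for_node(BROWSERS.keys, ENV['nodes'].to_i, ENV['CIRCLE_NODE_INDEX'].to_i)
#   .each { |key| Rake::Task["cross_browser:#{key}"].invoke }
```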

Reading the nodes environment variable we check the concurrency limit and spread the browsers across the same number of buckets. For each bucket, we’ll only run the actual test if the CIRCLE_NODE_INDEX is the same as the order of the bucket.

Because we’re now opening multiple tunnels to BrowserStack, we need to name them. Add
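The capability is BrowserStack’s browserstack.localIdentifier; the circle-prefixed value here is an assumed naming scheme, built from the container index:

```ruby
capabilities['browserstack.localIdentifier'] = "circle-#{ENV['CIRCLE_NODE_INDEX']}"
```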

to the capabilities configuration in cross_browser.rb. The final file looks like this
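The final file wasn’t preserved either; a sketch pulling the earlier pieces together (credential variable names and the identifier scheme are assumptions):

```ruby
# features/support/cross_browser.rb (final sketch)
require 'json'
require 'selenium/webdriver'

browser = JSON.parse(File.read('browsers.json'))[ENV['BROWSER']]

url = "http://#{ENV['BS_USERNAME']}:#{ENV['BS_AUTHKEY']}@hub.browserstack.com/wd/hub"

capabilities = Selenium::WebDriver::Remote::Capabilities.new
browser.each { |option, value| capabilities[option] = value }

# Tunnel to the local server, with one named tunnel per container
capabilities['browserstack.local'] = 'true'
capabilities['browserstack.localIdentifier'] = "circle-#{ENV['CIRCLE_NODE_INDEX']}"

# Name the build after the CircleCI project and build number
capabilities['project'] = ENV['CIRCLE_PROJECT_REPONAME']
capabilities['build']   = ENV['CIRCLE_BUILD_NUM']

Capybara.register_driver :browser_stack do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :remote, url: url, desired_capabilities: capabilities)
end

Capybara.default_driver = :browser_stack
Capybara.app_host = 'http://127.0.0.1:3001'
Capybara.run_server = true
Capybara.server_port = 3001
```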

We need to supply the same identifier when opening the tunnel from circle.yml. We also need to run all the cross-browser related commands in parallel. The final circle.yml will look like the following (notice the added nodes=4 when running the tests)
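A sketch of that final file, under the same assumptions as the earlier circle.yml:

```yaml
test:
  override:
    - bundle exec rspec
    - bundle exec cucumber
    - ./script/browserstack.sh:
        parallel: true
    - ./browserstack/BrowserStackLocal -localIdentifier circle-$CIRCLE_NODE_INDEX -skipCheck $BS_AUTHKEY 127.0.0.1,3001:
        background: true
        parallel: true
    - nodes=4 bundle exec rake cross_browser:
        parallel: true
    - ./script/browserstack.sh stop:
        parallel: true
```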

And that’s it. You can now scale your build out to four containers and run the tests in parallel. For us this gets the build time down to about 12 minutes on a complex app and 5 minutes on a very simple one.

Conclusions

We are really happy with this setup. It’s really stable, fast, individual test runs are completely isolated and we don’t need to deploy anything anywhere. It has just one drawback compared to the previous setup, which first deployed the application to a staging environment and then ran cross-browser tests against it: it doesn’t test the app in its real runtime environment (Heroku in our case). Otherwise it’s a complete win on all fronts.

We plan to solve that remaining problem by writing a separate test suite testing our whole system (consisting of multiple services consuming each other’s APIs) cleanly from the outside. It won’t go into as much detail as the normal tests, since it is only there to confirm that the different pieces fit together and users can complete the most important journeys. Coupled with Heroku’s slug promotion feature, we will actually test the exact thing that will end up in production, in the exact same environment. And you can look forward to another blog post about that soon.

27
May
2014

Using LiveScript with React

by Stuart Harris

Let me introduce you to a marriage made in heaven. Two beautiful things - React and LiveScript - that work together so well they could have been built for each other.

React components are mostly declarative. But they're written in script rather than a templating language. JavaScript ends up being too messy for this job and so Facebook invented JSX (an XML syntax you can embed into JavaScript). Everyone's first reaction seems to be "yuk"!

This is what it looks like (the examples are taken from the must-read article Thinking in React):

/** @jsx React.DOM */
var ProductCategoryRow = React.createClass({
    render: function() {
        return (<tr><th colSpan="2">{this.props.category}</th></tr>);
    }
});

But see how it looks in LiveScript:

product-category-row = React.create-class do
  render: ->
    tr null,
      th col-span: '2', @props.category

Cool, hey? Clean, to the point and no clutter.

But it gets better...

First with JSX:

/** @jsx React.DOM */
var ProductRow = React.createClass({
    render: function() {
        var name = this.props.product.stocked ?
            this.props.product.name :
            <span style={{color: 'red'}}>
                {this.props.product.name}
            </span>;
        return (
            <tr>
                <td>{name}</td>
                <td>{this.props.product.price}</td>
            </tr>
        );
    }
});

Now with LiveScript:

product-row = React.create-class do
  render: ->
    tr null,
      td null,
        span do
          if @props.product.stocked
            style:
              color: 'red'
          @props.product.name
      td null, @props.product.price

This is much easier to understand and much more declarative. Because everything is an expression, you can put if statements, for loops, anything you want in place of either the props or the children arguments to the component constructor.

OK, so you could do something similar in CoffeeScript. But you'd miss out on all the amazing extras and functional goodness that LiveScript brings to the table (as well as fixing a whole bunch of CoffeeScript problems such as its scoping. Don't get me started).

But hang on, isn't this a bit weird? The tr and the td have a null after them and then a comma, but the span has a do and no comma. What exactly are the rules? And where do I pass in the props, and where do I add the children?

All React component constructor functions have the same two arguments: initialProps and children. So if we aren't sending in any initial props, we must specify null (or void). That's the first argument to tr. Fortunately for us we can pass in the children as an array or as separate arguments. So the two td components, in the example above, are passed in as the 2nd and 3rd arguments to the tr component. The do simply creates a block to pass as the first argument (in this case).

But passing in arrays works well in LiveScript too. We can use all the functional list manipulations from prelude-ls.

First in JSX:

/** @jsx React.DOM */
var ProductTable = React.createClass({
    render: function() {
        var rows = [];
        var lastCategory = null;
        this.props.products.forEach(function(product) {
            if (product.category !== lastCategory) {
                rows.push(<ProductCategoryRow category={product.category} key={product.category} />);
            }
            rows.push(<ProductRow product={product} key={product.name} />);
            lastCategory = product.category;
        });
        return (
            <table>
                <thead>
                    <tr>
                        <th>Name</th>
                        <th>Price</th>
                    </tr>
                </thead>
                <tbody>{rows}</tbody>
            </table>
        );
    }
});

Then in LiveScript:

product-table = React.create-class do
  render: ->
    last-category = null
    table null,
      thead null,
        tr null,
          th null, 'Name'
          th null, 'Price'
      tbody null,
        @props.products |> map ->
          if it.category isnt last-category
            product-category-row do
              category: it.category
              key: it.category
          else
            last-category = it.category
            product-row do
              product: it
              key: it.name

I suppose the first thing to note is that you don't have to build up chunks of UI first and then add them later. It's all inline. And that's because we can pass arrays of children as the second argument. So we take the products from the passed-in props and pipe them to the curried map function from prelude-ls. This returns an array into the tbody's second argument.

If the product-row instance, in the example above, had children, you could add them after the props (you can see an example of this in the span inside the product-row component itself). LiveScript is clever enough to know that they don't look like more props and so will pass them as the next argument. Here's a better example:

product-row do
  product: it
  key: it.name
  span do
    class-name: 'child-class'
    "I'm a child of the span, which in turn is a child of the product-row"

It looks beautiful to me. Not unlike Jade. But you get to use all the power of a proper language :-)

By the way, Red Badger is hosting the London React User Group. We already have 71 members and the first meetup will be in mid-June. Please join and come along!