29 May 2015

Nerdy Babelfish – The Importance of Communication in Tech

by Roisi Proven


Here on the Red Badger blog, we certainly don’t pull any punches where technical discussion is concerned. In a lot of posts, we assume a high level of knowledge when it comes to our readers, and avoid going overboard with explanation in order to get to the core of our discussion without being boring.

I for one think that this is a great thing. We want to attract the most intelligent, highly skilled people to interact with us, and we can’t do that if we over-simplify things. I feel hugely proud to be a part of a company that has the courage to push forward with new tech, no matter how complex it may seem at first glance.

However, for the less technical people in the team (I include myself in that) it can present a bit of a dilemma. For testers and project managers especially, who bridge the gap between technical and operational, knowing how to explain things is crucial. When you work with a team of such highly skilled developers, you have to be able to not only keep up with them, but make a very complicated thing highly digestible from a business perspective. 

From Code to Conversation

To me, good communication skills are vital if you want to work in tech of any kind. Just because one person understands everything there is to know about a system doesn’t mean the person fixing the bug you write does. Similarly, your client will likely have a very different, non-technical focus on the project. As a result, you need to be able to distill a sometimes mind-bogglingly complicated technical problem into a form that even the least technical Product Owner can understand.

Let’s take something that, for anyone in web development, will be very familiar: deploying code from a local environment into a live environment. This will obviously vary depending on how your business functions, but the core steps are the same. You write your code, spinning up a local environment to check your work. You write some automated tests to cover your work, and then submit a Pull Request on your repository for review by another coder. This code will then be tested further and deployed onto your live environment, usually running the tests through a Continuous Integration tool such as Circle or Travis. Simple, right?

Or maybe not. Here is a top level breakdown of concepts and terms in the paragraph above that may leave non-technical people scratching their heads:

  • Deployment

  • Local environment

  • Automated tests

  • Pull request

  • Repository

  • Continuous Integration tool

That list is just the tip of the iceberg; I’m sure if we were to go into more detail about the build process we would lose people even further. So when I’m explaining to a customer how long a build will take, there is absolutely no point in me explaining it like this.

From Conversation to Code

This process can most definitely work the other way around as well. If a client adds a bug of their own to the database, or requests a new feature, it can be easy to dismiss it because it seems frivolous or insignificant. This is because, as technical people, our priorities are different. We want to have something that runs smoothly and looks beautiful, and woe betide anyone who asks for something that we feel may compromise that.

A notable example of this for me is some of the tracking and third-party integrations that I’ve seen requested on various projects. If you look at them in a purely technical way, they are by and large quite undesirable. They may risk slowing down your site, or cause conflicts with already implemented features that will take more work to fix.

This won’t matter to your client. What matters, invariably, is the continuing success of the business. If the implementation of a 3rd party could hugely improve customer engagement on their site, arguing with them about milliseconds is unproductive and may become needlessly combative.

So What Can You Do?

On both sides, the point at which communication breaks down is the point at which people stop asking “Why?” From the tech side, why am I making this change? Why is doing it this way better than doing it in a way that may seem faster? There’s no need to get technical with those answers.

Going the other way, we should be asking why we are implementing something, while still staying empathetic to the overall needs of the client. Yes, to a developer certain 3rd party integrations may seem painful, but if they sit down prior to starting the work and learn a bit more about the situation surrounding the task, the work will feel more meaningful, and they may even be able to suggest a better alternative.

Through all of this, you should always try to put yourself in the other person’s shoes. I will never sit down and have a conversation with someone without first spending some time thinking about how they might take the news I’m going to give. Often it’s not about the words you say, it’s about the delivery. So I can boil a build deployment right down to “the feature you want will take us 3 hours to take through our internal process” and as long as I’m delivering that with meaningful information such as timescales, the complicated stuff isn’t going to feel so important.

Once you put tech jargon or business jargon aside, it really is just two people having a conversation about something that they both really care about. As long as you remember that the person sitting across the table from you wants the same thing, the rest is easy.

25 May 2015

Red Badger is 5!!!

by Cain Ullah

Time goes so quickly when you are having fun. It seems like yesterday that Stu, Dave and I put £10K each of our personal savings into Red Badger and started working from our bedrooms. That was on 24th May 2010, 5 years ago, and Red Badger has come a long way since then.

In the last two years, Red Badger have been growing more than 100% year on year in terms of revenue, profit and employees. We have some great current clients in Sky, Tesco, Fortnum & Mason, Lloyds Commercial Bank and Financial Times and are doing some really interesting, cutting edge work.

Sometimes you get so busy in the now that you forget to look back at where you have come from and what you have achieved, and I have to say that I am immensely proud of where Red Badger are today. It has been an interesting journey with plenty of mistakes, but we have learnt and adapted as we have grown and have achieved good success to date.

Two big reasons for Red Badger’s success are the amazing team and culture that we have built up and the fact that nearly 50 employees and 5 years later, Red Badger is still built on the same core values as it was on day one.

First I want to discuss our core principles and then we will get to our employees and culture.

Core principles

When Stu, Dave and I started Red Badger, we were not seasoned entrepreneurs. This was our first crack at running a proper business. However, we were seasoned consultants and were battle hardened enough to know what we didn’t like about how other businesses were run and how we thought we could do it better.

To illustrate to some of our more recent employees that they were living and breathing a vision from 5 years ago, I recently pointed out this blog post that I wrote just 1 month into Red Badger’s existence, based on a term I coined ethical consulting. It was an incredibly simple blog based on one idea – we didn’t ever want to have an incentivised sales team because of the problems we felt it caused when it came to delivery. We wanted to do sales differently to the traditional way in which other companies operated. Our sales process would follow some guiding principles upon which we wanted to base the rest of the company: Quality, Value, Transparency, Honesty, Collaboration. When this blog was released, it was met by some with disdain and dismissed as being naive but still to this day, we do not have an incentivised sales team and have done perfectly well without one.

Strong opinions weakly held

A favourite mantra of Stu’s and one that ripples through Red Badger is “strong opinions weakly held”. We believed strongly in our core principles but were willing to listen and adapt if someone showed us a better way. If having no incentivised sales team hadn’t worked, we would have admitted it didn’t work and changed it. However, all three founders had a vision for how Red Badger should be run and to date it has worked and we have a great company built upon a strong foundation. A key learning is to never be afraid of trying something different if you believe in it. If it doesn’t work, or someone shows you a better way then try something else.

Doing the right thing

5 years on, our core values remain the same. We want to do the right thing. We want to provide quality and we want to provide value to our clients. If doing the right thing is at your core, increased revenue becomes a consequence.

At our last company day in the summer of 2014, Dave, our COO, presented the following slide to all of our staff, reiterating that doing the right thing is paramount and the rest follows.


Doing the right thing – Company Day presentation slide.

We hang our hat on quality and don’t take on new business unless we know we can deliver it to the best of our ability. This has stood us in good stead.

Looking at our current 5 concurrent clients, 2 are new but 3 we have been working with for over 12 months; we have been working with Sky since September 2013. All of these client engagements started with projects of no more than 4 months, but through doing great work, the clients have continued to want to work with us for as long as is feasible.

We ask for no commitment. We just do great work by doing the right thing and as a result, end up winning lots of repeat business to supplement the new business efforts.

Core values are incredibly important but of equal importance is building a strong culture.

Culture

Of course, none of the successes of the last 5 years would have been possible had it not been for our employees. We have built up an incredibly dynamic, talented team who are all simply a pleasure to work with. We put a hell of a lot of effort into creating a great culture at Red Badger. We want Red Badger to be the best place that anyone could ever want to work. A lofty goal, but one we constantly strive for.

Recruitment

Great culture starts with recruitment. No-one gets to work at Red Badger unless we think that the existing team would love to work with them. This is a monumental effort. In the last month alone we have had 227 candidates pass the first screening stage and are currently hiring at a rate of approximately 4 per month. However, the effort is most definitely worth it in the long term. You don’t get it right 100% of the time but it is important to us to not hire the wrong person in haste because we are in a hurry to resource a new project. We’d prefer to turn the work down, be patient, hire the right people and focus on building a great culture.

Building a culture

Once you have the right people, a lot of work goes into maintaining culture and creating an environment which is great to work in. A lot of this is to do with trust. As a Director you have to let go and trust your employees to get on with it.

In Dan Pink’s “Drive”, a fantastic book about what motivates us, he talks about three key elements: Autonomy, Mastery and Purpose.

In a nutshell, these mean the following things:

  • Autonomy – the desire to direct our own lives

  • Mastery – the urge to get better and better at something that matters

  • Purpose – the yearning to do what we do in the service of something larger than ourselves

These three elements are really important in creating a great culture.

Autonomy

Red Badger trust our staff to do the right thing. We don’t micromanage them, we have flexible working hours, we trust them to run projects how they see fit and we even give them a £2K training budget per year to spend on what they like. The key thing is for them to be in control of the decisions that they make day-to-day. The more autonomy you provide your staff, the more productive and happy they tend to be.

Mastery

We also try to provide our employees with the best possible environment to collaborate and share knowledge. We explore various ways of doing this, including a monthly company meeting back at the office where everyone takes it in turns to present on key bits of knowledge, be it a demo of a client project or some thought leadership. We encourage them to innovate. We don’t take predetermined solutions to our clients but tailor solutions to their specific requirements, and if this means using technology that we haven’t used before, that’s fine. Our staff are always driving the evolution of how Red Badger do things because they are passionate, smart people who love what they do and we don’t get in their way. To see a good example of this, keep an eye on our tech page and see it evolve over time.

Purpose

Every year we also have a company day during which we get our employees to do a workshop on our vision and purpose. The outcome of the workshop is a whole bunch of post-its written by our employees on why Red Badger exist, how we realise the why and what the tangible outcomes are. We then use the outputs of the workshop to drive our value propositions and service offerings. By doing this, all of our staff feel part of a common purpose because they have been instrumental in building it.


Why?, How?, What?

Some favourite statements written by our employees from the workshop include:

  • Why – “To make the internet a better place” / “Fix the nonsense”

  • How – “Best people, best tools, best methods and processes and always innovative consultancy”

  • What – “The place you go to find great software and deliver value to clients”

We hired an island!!

More important than anything is that we have lots of fun. We do lots of social events inside and outside of work. Many of our employees would consider themselves best of friends. Part of our culture is also to share the success of Red Badger with our employees when we do well, and to be fair about how that is distributed. This summer, to commemorate our 5th birthday and to thank all of our staff for their continued contribution, we have hired an island and will be taking them all away for a full weekend of relaxation, fun and plenty of drinking. A just reward for all of their efforts and something to be really excited about!


The location of our 5th birthday party

The Badger Way

Core values, culture and great employees are key to the success of the business. Getting those things right has allowed us to become incredibly efficient at delivering value to our clients. We have built up what we call the “Badger Way”. It is an ever-evolving process through which we help big corporations to transform their business, with a core focus on enterprise-scale web applications. Our focus is on three key things:

  1. Help clients to focus their efforts on being customer driven

  2. Build a solution that delivers the best possible technology to meet the client’s requirements

  3. Help clients to be much leaner in their approach

You can read more about the “Badger Way” elsewhere on our website in existing and upcoming ideas and blog posts.

The Future

Who knows what the future will hold but our intention is to continue in the direction in which the last 5 years have gone.

Red Badger has always set out to work with large corporations for a number of reasons, but most importantly because they have the most complex problems to solve, where our ways of working can provide the most value. We thrive on the complicated, and we want to help large corporations feel like startups: implementing lean ways of working, using cutting-edge tech and delivering great experiences to their customers.

We will look to continue to grow sustainably. As Red Badger has grown there have naturally been some growing pains. The key is not to ignore them and to make sure you are always listening to your employees. We are putting all of the right things in place to fix them.

The three founders, Stu, Dave and I are committed to our core values. We are determined to continue to hire amazing people, maintain a great culture and we want to continue to do the right thing.

Scaling excellence will not be easy, but I think that Red Badger can continue to do things a little bit differently and, as we grow, succeed where others have failed by providing the best quality and value to our clients, and have fun in doing it.

21 May 2015

The term “Agency” is overused

by Cain Ullah

What is an Agency?

– noun, plural a·gen·cies: an organization, company, or bureau that provides some service for another.

“Agency” is a pretty broad term. If I say I own an “agency”, in the literal sense I could be in recruitment, window cleaning, taxi driving or a million other occupations. Red Badger are in digital. We build enterprise-scale web applications for the likes of Tesco, Sky, Lloyds and Fortnum & Mason. In my industry, most would classify Red Badger as an “agency”. We are a member company of the Society of Digital Agencies (SoDA), after all. But “agency” in my mind is an outdated term and is used to describe too many things.

When most of my peers, colleagues or competitors talk about an “agency”, we are specifically talking about professional services companies in marketing/advertising and/or the digital space. Have a look at this list of companies in the Econsultancy Top 100 Digital Agencies (formerly the NMA Top 100) to get an idea of what the industry would define as an “agency” these days. There are a number of categories in this list: Full Service/Marketing, Design & Build, Technology, Creative and Media. The categorisation of companies in this list seems dubious at best, and the service offerings of many of them are very different despite being placed in the same category. The lines between marketing, advertising, brand, web, agency, consultancy, software house and product startup seem to have become far too blurred, and all of them have been thrown into the “agency” bucket.

Origins


The term “agency” for me has its origins in marketing/advertising, with firms like AKQA of old, but as we have moved into the digital age, companies like AKQA have had to adapt their service offerings, adding a strong technical capability to their armoury. AKQA were once an advertising “agency”; they now call themselves an “ideas and innovation company”. AKQA still have an “agency” arm, as they still do a lot of the brand/campaign work associated with a typical advertising “agency”. Digital or not, a campaign is not built to last. However, they now also do full-service delivery of longer-lasting strategic applications that have a lasting effect on their clients’ business operations; look at AudiUSA.com. I would argue that this type of work is not that of an “agency”.

With the transition of some traditional marketing/advertising agencies to digital agency, technical companies such as Red Badger have been thrown into the “agency” bucket.

This has been something Red Badger has struggled with. We don’t see ourselves as an “agency”. As I said previously, for us the term “agency” has its origins in the marketing space, with work largely focussed on campaigns or brand, be it digital or not. We also don’t see ourselves as a “consultancy”, because the connotations of that are associated with big, cumbersome Tier 1 management consultancies such as Accenture and McKinsey.

What’s the alternative?

Red Badger deliver enterprise-scale web applications for large corporations. They are highly complex, technically advanced solutions that can take upwards of 12 months to build. However, we also take User Centred Design as seriously as we do the tech. Everything we build is user driven, beautifully designed and simple to use, and we have an expert team of creatives to ensure this is the case. Finally, we wrap both the tech and creative teams into incredibly efficient Lean processes, running multi-disciplined, cross-functional teams and shipping into live multiple times a day. This is not the work of an “agency”. So for now, as a slogan to describe Red Badger, we have settled on “Experience Led Software Development Studio”.

Why does it even matter?

The overuse of the term “agency” can cause issues. With the ambiguity of what a modern “agency” is comes confusion about what different “agencies” do. For big corporations, the sourcing strategy for a supplier has become equally confusing, because they don’t know what they are buying.

When does an “agency” become a consultancy? Or are they the same thing? How do you differentiate between a digital advertising “agency” and a software house that builds digital products? I’ll leave you to ponder that yourselves.

Some examples of companies that might be in the “Agency” bucket but have started to move away from describing themselves as such include some of the following:

  • Red Badger – “Experience Led Software Development Studio”

  • AKQA – “Ideas and Innovation Company”

  • UsTwo – “Global digital product studio”

  • Adaptive Lab – “We’re a digital innovation company”

Companies are starting to cotton on to the fact that the term “agency” is confusing, and those that provide full-service application development are starting to distance themselves from the term and the brand/marketing/advertising stigma attached to it. Surprisingly, companies such as SapientNitro and LBI still describe themselves as agencies.

So the question I suppose, is do you class your company as an “agency” or is it altogether something else? I think it might be time for a new term that is not “Agency” or “Consultancy” that is more interesting than “Company”. Suggestions on a stamped addressed envelope to Red Badger please!!

20 May 2015

What I Learned About the Future of the Internet From Fluent Conference

by Alex Savin


By the year 2015 I imagined the internet would be something close to how it was envisioned in Gabriele Salvatores’ cyberpunk film Nirvana. You’d plug yourself in via the cerebral cortex, put VR goggles on and off you would go, flying into cyberspace.

Funnily enough, there was a glimpse of that during the recent O’Reilly Fluent conference in San Francisco. But let’s take things in order.

Fluent isn’t a conference to attend if you want to get deep knowledge of a particular subject. The 30-minute sessions are great for a drive through a topic, to give you an understanding of what it is. What Fluent is perfect at is giving you a broad overview of stuff you had no idea about: five simultaneous tracks of talks, an extra track of meetups, one more for workshops and, as a bonus, an ongoing exhibition hall with companies and live demos of hardware and software.

There is an optional extra day just for workshops, which actually was good for getting deep into a subject. We went full contact with the new ES6/7 features and the intricate details of type coercion in JavaScript, and did a good session on mobile web performance with Max Firtman.

O’Reilly also brought some heavyweight keynote speakers. Paul Irish (creator of HTML5 Boilerplate and Chrome DevTools developer), Brendan Eich (creator of JavaScript), Eric Meyer (author of pretty much every book on CSS) and Andreas Gal (CTO at Mozilla), among others, delivered a comprehensive overview of where we are now and what to expect in the next few years. I’m going to cover a few interesting trends spotted during the conference.

Rise of the transpilers

This was mentioned directly and indirectly during many talks, including by the creator of JavaScript himself. The future of JavaScript is not meant for humans, but is instead something you compile into. Many new features in ES6 and 7 should make this easier for compilers. If you had any doubts so far about things like LiveScript, ClojureScript or even Babel, it’s time to stop worrying and start implementing transpilation in your build pipeline. Especially since Babel might well become a language of its own.
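
To make that concrete, here is a tiny sketch of what “compiling into JavaScript” looks like, using LiveScript (the compile-to-JS language mentioned above). The snippet is illustrative only, and the ES5 shown in the comments is approximate; exact output varies by compiler version:

# A few lines of LiveScript; the comments show roughly the ES5 a transpiler emits.
double = (xs) -> xs.map (x) -> x * 2
# var double = function(xs){ return xs.map(function(x){ return x * 2; }); };

squares = [x * x for x in [1 to 5]]
# a plain ES5 for loop building [1, 4, 9, 16, 25]

result = double [1 2 3]
console.log result    # => [ 2, 4, 6 ]
console.log squares   # => [ 1, 4, 9, 16, 25 ]

The source stays short and readable, while the generated JavaScript is written for the engine rather than for people, which is exactly the direction ES6 and 7 are pushing in.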

React is hot

In all honesty I missed all the React-related talks during the conference, of which there were quite a few. Later, catching up with other attendees, I got confirmation that the talks were indeed mostly introductions to React, Flux and ways of injecting React into your app. What was interesting is that each time I mentioned React, and Red Badger working with React for the past year and a half, I’d become the centre of attention, with people asking questions about how we do things. After describing our (now pretty much standard) setup, with apps being isomorphic, hot reloading of components in place, immutables for state handling and the ability to deliver the app to no-JS clients, people would often say that Red Badger lives on the cutting edge. I couldn’t agree more.

Many people would also ask: how come it’s been a year and a half already; hasn’t React only just been released? The important part here is that lots of people are hearing about React now for the first time, and are really excited.

Isomorphism

Eric Meyer made a great point during his keynote, dropping a (deliberately) controversial slide saying that the web is not a platform. Right now lots of web developers assume JavaScript support is a given on any client. This is something Eric tried to address. The web is not a platform: there are, and will be, lots of clients with support (or lack of it) for any given tech stack. When you take CSS off a webpage, it is supposed to stay functional and the content should still be human readable. The same goes for JavaScript, and any other tech we might introduce in the future. Assuming that you are targeting a broad audience, it’s always a great idea to implement your app isomorphically, with graceful degradation of the user experience in case a client doesn’t support some of the tech we’re relying upon.

Be conservative in what you send, be liberal in what you accept.

All of this fits very well with React and what we’re currently doing with it. If you have a modern device, browser and JavaScript support, you’ll get the single-page app experience, with blackjack and Ajax, client-side routing and visual effects. If not, we’ll gracefully degrade to server-side rendering, which will actually improve performance on older mobile browsers instead of forcing JavaScript on them.
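
As a rough sketch of what that looks like in practice (the module names and the App component here are illustrative, not our production setup), you render the same component tree on the server and let the client bundle enhance it where JavaScript is available:

# A minimal server-rendering sketch in LiveScript, assuming Express and a
# React 0.13-style renderToString; App is a hypothetical root component.
require! <[ express react ]>
App = require './app'

server = express!
server.get '*', (req, res) ->
  # Full HTML is produced on the server, so no-JS clients still get readable content
  element = react.create-element App, path: req.path
  html = react.render-to-string element
  res.send """
    <!doctype html>
    <div id="app">#{html}</div>
    <script src="/bundle.js"></script>
  """

server.listen 3000

Clients without JavaScript simply get the server-rendered markup; everyone else boots the bundle and carries on as a single-page app.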

The importance of progressive enhancement was echoed by @obiwankimberly during her talk on the second World Wide Web browser and CERN’s efforts to revive it. The Line Mode Browser was the first browser ported to all operating systems, and was used by many people around the world. Most of us might not consume the web in text-only mode anymore, but the idea of progressive enhancement goes all the way to accessibility, screen readers, and those people who rely on a text-only representation of the Web.

HTTP/2 is here

Or simply H/2. Not only is the spec finished, H/2 (in its SPDY incarnation) is supported in 80% of all modern browsers.

For the transition period you can set up your web server to fall back to HTTP/1 in case the client is not ready for the new hotness. IIS doesn’t support H/2 at all, but there’s nothing stopping you from putting an Nginx proxy with full H/2 support in front of it.
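
As an aside, in node-land the switch can be a small one. Here is a rough sketch (not a recommendation; the spdy module usage and the file paths are assumptions) of serving SPDY-capable clients over TLS while older clients fall back to ordinary HTTPS and HTTP/1.1:

# Serve SPDY-capable clients over TLS; clients that can't negotiate it get plain HTTPS.
require! <[ fs spdy express ]>

app = express!
app.get '/', (req, res) -> res.send 'hello'

options =
  key: fs.read-file-sync './server.key'    # browsers only speak the new protocol over TLS
  cert: fs.read-file-sync './server.crt'

server = spdy.create-server options, app
server.listen 3443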

Why is H/2 important? A few things:

  • Multiplexing. Each HTTP request takes quite a bit of time to initialise, thanks to the TCP three-way handshake. Even with a fibre connection this consumes precious time and the patience of the user. On top of that, a well-known HTTP bottleneck is slow requests blocking all other requests. With H/2 you can retrieve multiple resources over a single connection, practically streaming data from the server. No more need for concatenating CSS/JS files, image sprites or inlining CSS/JS; even domain sharding is obsolete now.
  • Security is not optional. It is not part of the official spec, but Firefox and Chrome require you to encrypt all communication when using H/2, which means you have to use HTTPS. This has one interesting consequence: if one of the web proxies along the way doesn’t support H/2, the request will still get through, since it is just a stream of encrypted data.
  • HTTP header compression. All headers are now nicely packed together using HPACK encoding.

WebVR

It seems that people at Mozilla share my passion for an internet that users can be immersed in. If you have an Oculus Rift and the latest Firefox Nightly, you’re all set to check out something they call WebVR. The idea is simple: WebGL graphics rendered by the browser into your virtual reality goggles, with you being able to look around and interact with the experience.

To boldly go where no man has gone before.

On the second day of the exhibition they enhanced the VR experience by taping a Leap Motion sensor to the front of the Oculus, which allowed people to see their hands and fingers inside the simulation. That’s something that makes the experience truly immersive – we like to move our hands around and see them interacting with stuff.

Another possible use for such technology is spherical panorama viewing, be it still pictures or video. YouTube already supports such experiments, and one can only imagine this expanding in the future. I believe Mozilla’s VR department is spot on about VR experiences delivered through the browser. There are still lots of questions, and most of the VR territory is completely unknown (UX in WebVR, anyone?), but it feels truly exciting to live in times when such things are coming to life.

Internet of Things

This whole IoT concept was pretty abstract to me, until IBM did a live on-stage demo of a cloud-based AI overlord, speaking in a nice voice and doing neat tricks.

I can monitor and command your devices. Typically I monitor vehicle fleets, cool things like power boats, medical devices, and perform real time analytics on information as well as historical data storage and visualisation. I can see you’re wearing heart monitor today. – Sarah, the AI overlord.

During a very short keynote, Stewart Nickolas let a Bluemix cloud-based app scan the room for controllable devices and sensors, and then asked Sarah the app to take control of one of the robotic spheres on stage that she had discovered. The interesting part starts when you imagine Sarah taking over fleets of vehicles and smart sensors and making decisions for us humans. IBM is building a platform to unleash apps out of the cloud into the real world.

On the conference itself

Fluent was by far the biggest, and an extremely well organised, conference I have ever attended. Here are a few observations on how they did things.

  • Wifi. As usual at internet conferences with 1k+ attendees, it’s hard to provide a reliable wifi signal in a single room. The biggest problem was the keynotes, since everybody would be in the same place, actively posting things. In addition to bringing more routers into the room, they quickly took to asking the audience to pause Dropbox syncing and whatever other cloud backup apps people had on their laptops. That actually helped a lot.
  • Speed networking sessions in the mornings were actually more fun than expected. Not only do you get to meet a bunch of mostly interesting people, it also puts you in the right mood for the conference and makes you a bit less of an introvert.
  • Game of code. During the first two days you could do your part in collecting stickers by performing various quests and attending events. Each sticker contained a few lines of JavaScript code. Once you’d collected all 12 stickers, you were supposed to figure out the order of those lines, enter them into the browser console on the Fluent Conference website and, if successful, get a confirmation ID for a prize.
  • Info board. In the main hallway they placed a huge paper board where anyone could leave messages, post job ads or distribute stickers. Often people would pin business cards next to the job ads.
  • The after-lunch dessert location was used to great effect to direct flocks of attendees. In the last two days it was moved to the exhibition hall, so you could munch on a dessert while checking out the Mozilla VR experience.

References

All keynote videos are now available on the O’Reilly YouTube channel. A few of my personal favourites would be:

I also compiled a video of the whole San Francisco trip – it was my first time in California, after all.

15 May 2015

Join me as a new voice in the tech community

by Winna Bridgewater


A glimpse at the weekend workshop I led. Photo by Alessia D’Urso, Girls In Tech UK Board Member.


London has a staggering number of events for coders. It’s awesome.

The only thing is I’ve gone to a lot of events, and I can count on one hand the number of times a technical discussion—about a tool, an industry standard, or the craft of code—was led by a female. For the majority of the events I attend, I am one of only two or three females in attendance.

I decided that this year, if I want to see more women presenters, I need to step up. I need to talk at an event.

Getting Inspiration

As if the universe was listening, two events came up this spring that got me more excited and less nervous about going for it.

First, Ladies Who Code London ran a workshop on public speaking, and it was hosted here at Red Badger. Trisha Gee and Mazz Mosley were wonderful. Their message was simple: it gets easier each time you do it, and there are ways to work through nervousness. They also emphasized that the audience is rooting for you–everyone in the room wants to see you succeed. Pretty nice, right?

Then I had the chance to attend the Women Techmakers Summit, an evening celebrating International Women’s Day that was organized by Google Women Techmakers and Women Who Code London. There was a series of speakers and panelists, and every presenter had a powerful message. The speaker whose message stayed with me most was the keynote, Margaret Hollendoner

Margaret said she sometimes makes her career sound like it was all “right place at the right time” luck. But she told us it wasn’t that simple. Every opportunity was lucky in one sense, but the opportunity wouldn’t be there without her hard work. She also emphasized that deciding to say “yes”  required confidence and bravery.

Margaret’s presentation gave me another nudge: get past my fear and present at an event.

Saying Yes


Only two days after the Summit, I got an email from Lora Schellenberg about Spring Into Code, a weekend workshop on Web Development offered by GeekGirlMeetup and Girls In Tech. Lora asked if I was available to teach–the original instructor had to back out, and they were looking for a replacement.

It sounded like my lucky chance, so I agreed.

Then I got all the details. I was going to be the only teacher for 12 hours of instruction over two days. I’d be teaching at Twitter headquarters to an audience of 100 people.

I felt pretty panicked, so I knew it was time to make some lists.

Why I should not do the workshop
  1. I’m not an expert in web development.
  2. I’ve only been doing this stuff professionally for a year and a half.
  3. I won’t be able to answer all their questions.
  4. 12 hours is a long time.
  5. 100 people is a lot of people.
Why I should do the workshop
  1. I’m not an expert in web development. I still spend most days learning new things. I know what it’s like to feel confused and lost. And I know how to recognize and celebrate small triumphs.
  2. I did set that personal goal.
  3. Those nice ladies did tell me the audience will root for me.
  4. That other nice lady did say you need to take advantage of luck that comes your way.
  5. If I’m going to teach 100 people for 12 hours, this is the ideal audience. Eager learners who, by choice, agree to a weekend in front of a computer taking in as much as possible.

I decided to go for it—butterflies, sweaty palms and all.

There are so many things I could focus on under the umbrella of Introduction to Web Development. I decided my main goals would be:

  • Make techy code stuff seem less scary.
  • Make people feel ok about asking questions.

Saturday morning arrived, and I had a rough start. I spent the first session working out how to use the mic and the two screens floating on either side of me. My notes weren’t loading like I hoped. The Internet was down. My demo wasn’t working even though it worked mere hours before. I was shaking.

After the first demo fell flat on its face, I knew I needed to stop. I took a few minutes to get everything running properly. I got water. I took some deep breaths. Those minutes felt like ages, but it was worth it. When I started up again, stuff finally started working.

The first day flew by. A few folks came by during breaks to say they were enjoying themselves. At the end of the day, lots of people came by to say thanks. Were my silly jokes working? Did my missteps when typing—forgetting a closing bracket, leaving off a semicolon, incorrectly specifying a source path—help people understand that breaking and fixing things is what this job is all about? During the second day, people from all over the room were asking questions. Tables were helping each other debug and understand what was going on. Breaks came and went with people staying at their seats to try things out. I couldn’t have hoped for more.

Reflections

I have so many ideas about what I’d change if I could do it again. I missed some concepts. I glossed through others. But I did it, and I had an amazing time.

If you’re tempted to give a talk or run a workshop, please go for it. It’s scary but great, and you have a wonderful community rooting for you.

13 May 2015

Staggeringly Smooth – the Fortnum & Mason E-com Release

by Roisi Proven

 

It would be difficult for me to overstate the importance of launching a brand new version of an international e-commerce website. This was no mere reskin; we have been rebuilding the Fortnum & Mason website from the ground up. We started with the decision to use Spree Commerce for the storefront, and have continued all the way through to building them a bespoke CMS using our very own Colonel.

Because the Fortnum brand values customers above all, they felt it was of the utmost importance that the customers help drive the direction of the site. As a result, we decided it was important not to rush a release. We have spoken already about our approach to deployment, allowing us to deliver rapidly and regularly, but in order to get to that stage, we needed to be confident that the core of what we were delivering was sound. 

The First Step

The first thing that we had to do before we could make a plan was to figure out where our weaknesses were. If we knew the elements that were most likely to fail, we could work pre-emptively to fix these areas before going live to the world. What has always been apparent to us is that with so many 3rd parties to rely on, no amount of automated or manual testing was going to truly expose our pain points.

So, as soon as the site was fully transactional, we made the decision to do a highly controlled soft launch prior to the Christmas peak. A selection of Fortnum’s trusted customers were contacted and given a password to our still very much unfinished site. By communicating with these customers, and making them a part of our development, we hoped not just to get a more robust test of our site, but also to gain feedback from these key users which would inform the ongoing development.

The Slow Burn

And so the feedback came. However, it came slowly. Too slowly for our liking. So, further down the line towards the middle of January, we had a meeting with the team over at Fortnum. We came up with a plan to run both the old site and the new site in parallel, directing traffic in ever-increasing quantities to our site. So new.fortnumandmason.com was born.

The need to log in to get access to the site was removed, and people started using our site. For the first time, we weren’t just inviting people to use the site, we were allowing them to get there on their own, initially via a marketing email, and later by directing a percentage of traffic from the old site to the new. In doing this, we were making sure that when the time came to release to 100% of the public we wouldn’t see the dip in sales and conversion that so often accompanies site re-launches.

Running two sites at the same time came with its own problems. There were moments when we wondered if we should have held off on introducing users to the new site, despite the fact that we knew releasing to customers early with the intention of learning from them was hugely important. However, how well it worked was undeniable. Major issues were uncovered, but instead of affecting hundreds of customers, they would affect one or two. If a couple of people encounter an issue, you can contact them personally and make sure they still feel valued. If hundreds, or even dozens, of people are affected, it becomes a great deal harder. Also, if there were any issues that we needed more time to fix, it was very easy to direct all users back to the old site.

“Release” Day

The 17th of February rolled around. This was the day we had all been waiting for. For the previous fortnight, around 40% of Fortnum & Mason users had been directed to new.fortnumandmason.com using the Qubit system. We had encountered issues, some minor and some not so minor, and had managed to maintain our development momentum while fixing them. Using Kanban, we had a clear view of work in progress and bugs that needed fixing urgently, all the while maintaining a focus on throughput – getting features shipped into live.

At 8am we made the call to change the DNS records. The old site drifted away, and new.fortnumandmason.com became www.fortnumandmason.com. We pulled up our analytics and collectively held our breath.

It just worked.

A part of everyone was waiting for the other shoe to drop, for complaints to start flooding in or for orders to start failing. A bigger part of us knew that our gentle filter of customers over the previous weeks and months had prepared us well for this. Of course there were still small issues, a customer struggling to pay here, a missing product there, but overall there were no alarming issues.

Ongoing Positivity

The new Fortnum & Mason, complete with Feedback banner

 

As the website bedded in over the next couple of weeks, we continued to see minimal issues and an increasingly positive impact. We have already seen overall growth of 89% year on year, with conversion coming in at an impressive 20% up. We have also seen an 18% reduction in customer service calls, with a particular drop in calls related to issues with payment.

As we finish up our first couple of months being fully live, the mood across Red Badger, and across the business at Fortnum, has been hugely positive. We are immensely proud of what we have achieved, and we don’t feel that we could have had such a successful release if we hadn’t ramped up slowly in the way that we did.

8 May 2015

Announcing Arch – a functional style application framework for React

by Viktor Charypar



A little over a year ago, Red Badger teamed up with the Haller foundation to help them build an app for farmers in Africa. Like we always do at Red Badger, we looked at the new and interesting technology available at that point and decided that we’d try and build it using Facebook’s new UI library called React.

Little did we know that within a year, React would become a major part of pretty much all of our client projects and some internal ones. We fell in love with its simplicity and the power of delivering complex user interfaces quickly and with far fewer bugs than with any other tool before.

Naturally, we started building entire applications with React at the front end, and we quickly realised we needed a bit more of a pattern for managing state, and data in general, in the application. At about that time Flux came out and we tried our own implementation of it successfully, but something didn’t feel quite right.

We loved the functional aspects of React, like the fact React components can be pure functions of their props (which works beautifully with LiveScript). It simplifies testing and even just basic understanding of what’s happening in the application at any given time. Passing around references to stores and figuring out what state all the stores are in still felt complex, compared to the beautiful simplicity of React itself.

We really liked the ideas introduced in Om - the React binding for ClojureScript – but making the transition to an entirely different stack felt like a pretty big step. It is also not the easiest thing to eventually hand over to a client, whereas everyone is familiar with JavaScript. We are still keeping an eye on Clojure and ClojureScript for another time. In the meantime we wanted to get the benefits of the architecture in a familiar stack.

At that point, our React stack got quite complicated – we used LiveScript, browserify, tried a couple of different routing libraries to get the isomorphism that React makes so easy, and gradually, our answer to the question “What should I do to get the most out of React?” was getting more and more complicated. So we decided to distill our experiences and opinions into a framework that gives you all of the features we loved and a pattern to build React applications the best way we know how.

We spent the past couple of months slowly building and refining it and towards the end of April, we finally got it to a state where we felt it’s ready to open source. We call it Arch.

Arch

Arch is a front-end functional style application framework using React as a UI layer. Arch applications are isomorphic out of the box, including form processing. This means you write your application as if it was client-side only and Arch will manage the server-side portion.

This also means you don’t get any control over the code running server-side, which is a design decision. The theory behind it is that any server-side code you need to run should sit in a separate server application which you talk to over an API. This is very similar, at a high level, to Facebook’s recently announced Relay framework architecture and we agree that the server-side portion of your application should just be separate. (As with anything in Arch, with a bit of effort you can opt-out of the choices we made and run the Arch server from your own node.js/io.js application.) For development, Firebase is a good tool to give you an API based server-side persistence.

Arch applications are written in LiveScript by default. We picked it because it’s a functional language that’s a perfect fit for React and makes it a joy to build React applications. It doesn’t take long to learn (at the end of the day, it’s still just JavaScript) and gives a huge productivity boost while letting you write very, very readable code. Although you can just as easily build Arch applications with ES6 and JSX, it’s definitely worth taking a look at LiveScript.

Central state

The biggest feature of Arch is the application architecture it proposes, inspired by Om and other ideas from functional (and functional reactive) programming. The architecture stands on an idea of central immutable state, describing your entire UI in a simple data structure. That data structure – a single composite value – serves as a complete model of your user interface at any point in time. Your user interface is a pure functional projection of this data structure.

Obviously, rendering a single UI state is not enough to build an application, you need to update it over time in response to the user’s actions, and you need to do it in a way that doesn’t couple your entire application to the structure of the central state. In other words, you need a way to distribute the central state to your UI components and collect the updates they make in response to user events. Om’s and Arch’s solution for this is a Cursor.

A cursor lets you dig down into a composite data structure and pass around a reference to a particular path in it that can later be updated. The receiver (a UI component, for example) doesn’t need any understanding of where in the data structure the cursor is pointing. So if you pass a cursor with a piece of state to a React component, it can safely use it as if it were the entire thing. For example (written in LiveScript):

render: ->
  …
  profile-picture picture: @props.user.get 'profilePicture'
  …

When rendering a profile-picture component we pass in a picture prop, which is a cursor we obtained by getting a profilePicture key from a user cursor, which was passed to us in the current component’s props. Inside, we can then use the value:

render: ->
  dom.img do
    src: @props.picture.get 'url' .deref!

The deref! call dereferences the cursor and gives you a plain piece of JSON data back.

Application state and user interface are not the only things you will ever need. There’s always some business logic to do: computations, data validation, sync with an API, collection of some statistics, etc. For that Arch uses state observers.

Each cursor is observable and you can register a change handler on it that can do work every time the value changes. For example talk to a backend API:

query.on-change (q) ->
  # make API request
  .then (results) ->
    search-results.update -> results

Whenever the query changes, this observer makes an API request and updates the search-results cursor with the results from the response. Extracting the business logic into state observers results in React components that are easier to reuse and a more modular application. Changing a search provider, for example, can be as easy as swapping the relevant state observer without touching the UI components.

Arch focuses on simplicity (as opposed to complexity) and minimal solutions at every step, which sometimes means things seem harder at first. To help people get started, we provide a CLI that generates a new application for you when you simply type arch-cli init in your terminal, and a server that handles the isomorphic aspect of your app (arch-cli serve). Arch tries to provide a good choice for all the decisions you make when building an application from scratch, but at the same time not constrain you. We believe the default set of choices we made is sensible, but you should be able to opt out of each decision we’ve made. The aim is also to split Arch up into submodules, so you can, for example, use just our cursor implementation. All those goals together should hopefully make for a good, flexible tool for building React applications.

Can I use it?

Hopefully the previous paragraphs gave you enough of an overview of what Arch is, the philosophy behind it and how you would use it. Although Arch itself or parts and ideas from it are used on multiple projects at Red Badger, it’s still very early in its life and not necessarily ready for production without a bit of effort. We’ll continue working on it and post regular updates.

You can find out more about Arch at http://archjs.org (currently redirects you to Github, but we should have a website up soon). We’ll appreciate all feedback and examples of the awesome stuff you build with it and hope you’ll like it as much as we do!

7 May 2015

Don’t go chasing waterfalls

by Phil Brooks

 


I used to really like being a Project Manager, I mean really like it. The order, the structure, the process. I would talk to people about ‘how things got done in an organised manner’ until their eyes glazed over. You see, I cut my project management teeth with local and central government. Yes! I was an advocate and practitioner of PRINCE2 and the waterfall methodology. Now don’t get me wrong, I am not saying it’s not the right approach. It’s just not the right approach for me. I thought it was, but then something happened.

I discovered, or rather was introduced to, a new way, and soon realised that the light at the end of the tunnel was not the headlight of a fast-approaching, document-heavy, process-driven train. There was an alternative. So I mothballed the Gantt charts and Microsoft Project, archived the 30-page requirements and specification docs and confined the ‘sign off by committee’ trackers to the basement. I packed my things in a hanky on a stick and went on my merry way.

My leap into the Scrum unknown opened my eyes to a more flexible way of working, but with some limitations. Oh yes, you still have to do certain ceremonies: sprint planning, estimating (be it story pointing or time-boxing), end-of-sprint retrospectives, end-of-sprint demos and daily scrums. All of which are valuable in their own right, but a little overkill, don’t you think?

That aside, I still really, really liked being a Project Manager. It opened up my creative side; it was no longer all about spreadsheets and trackers, highlight reports and phone-book-sized documents. It was writing and drawing on walls, post-it notes and sharpies, collaboration and teamwork, trusting and enabling. I noticed the teams were more open and honest. They worked together to find solutions and listened to their peers. My role changed from telling to coaching, from reporting to showing and from excessive planning to agreed prioritisation. I was in my element and really, really enjoyed my job.

Then I joined Red Badger Consulting and pretty much everything turned on its head. Firstly, I was introduced to Kanban (similar to Scrum, but without the sprints). Secondly, there is no right or wrong way and nothing is set in stone. Thirdly, and probably most importantly, I was (and am) given thinking time, a luxury that we Project Managers seldom get. I had to un-learn what I had learnt and un-think what I had previously thunk!

I had never used Kanban before, and to be honest was a little worried that I had no experience with this particular methodology. But I needn’t have concerned myself. At Red Badger Consulting we are encouraged to think differently, encouraged to do what we think is right, encouraged to continuously improve and generally encouraged.

Kanban is our preferred approach; however, how we tweak and change it is totally the team’s shout. Oh yes, it’s a team thing, a collaboration not a dictatorship, and at times even the COO hasn’t got all the answers! (Although he mostly does; ask his opinion or advice and he will give it, and it’s your call whether you use it.)

Here at the badger sett we use a mix of Kanban, common sense and flexibility to enable us to meet clients’ expectations and deliver in a sensible way. Each team is empowered to make relevant changes to their processes, which makes such obvious, but often overlooked, sense, as each client is different and has specific drivers and goals.


In my team we have made some radical changes, not to be non-conformist, but simply because it’s what works better for us. Flexible, agile, collaborative and forward thinking:

We don’t do estimating – we see no value in finger-in-the-air guesstimates; we do the work and collect the metrics. Real data on real events, giving real lead times.

We use Little’s Law – (what we have to do + what we are already doing) / what we ship in a given timeframe (daily in our case). We ask the team to put a date on the card when it leaves the todo column and a date when it is shipped: in and out. This gives us our estimated lead time and highlights tickets that have been in play for excessive amounts of time. We use this data to learn and understand what happened, and how to avoid it happening again or to ensure it keeps happening. We also track blockers and impediments, as these have an impact on our lead times.
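
The arithmetic behind those estimates is tiny. A back-of-the-envelope sketch, written in LiveScript as used elsewhere on this blog (the numbers here are invented; ours come from the dates written on the cards):

# Little's Law as we use it: (todo + in progress) / daily throughput = days to clear.
todo = 14               # cards waiting in the todo column
wip = 5                 # cards currently in play
shipped-per-day = 3     # average cards shipped per day, taken from the board

lead-time-days = (todo + wip) / shipped-per-day
estimate = lead-time-days.to-fixed 1
console.log "Estimated lead time: #{estimate} days"   # => Estimated lead time: 6.3 days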

Now, before you ask, yes I do have a spreadsheet, and it’s a huge one too (I am, after all, a Project Manager), but I don’t bore the team with it. I take out what is useful for them, like average WIP, how long it should take to clear the todo column, overall metrics for the entire backlog and the occasional cumulative flow diagram. I capture data daily: how many tickets are in each line – todo, WIP, shipped. Now I have estimated lead times, useful for when you want to advise your clients on how long something might take and why.

No more finger in the air guesstimating. Remember, real data, from real events, giving real lead times. 


We don’t do retrospectives at a set time or on a set day – we do a retrospective when it makes sense to. We did do a retro every Tuesday at 16:30, but when that reminder popped up on the screens, it was met with an audible groan from the team. So we ditched it. Now we collect thoughts and topics, and when we have three or more, we do a retro. The team control the topics, unless there is a crisis, in which case we focus on that and do a retro around that specific item. We also keep it quick, sharp and to the point, so nobody has a chance to get bored and switch off. But in all honesty, we talk to each other constantly, so we tweak little and often.

We don’t do massive planning sessions - we plan retrospectively. When we have enough todo, we focus on that. When the todo starts emptying out, we plan the next things that will be pulled from the backlog. We focus on the job in hand; we don’t waste time planning for something that might not make the cut.

We have continuous, one-piece flow - the team focuses on the board, from todo, discovery, UXD, dev and test through to shipped. Nothing goes backwards; we have exit criteria for each swim lane. If a bug is found, the ticket stays where it is and a fix is worked on. Continuous flow: everything moves from left to right, nothing jumps back and nothing falls off the board. Once a ticket passes todo, it is tracked to the end, every time.  

We include bugs and issues in the flow - it’s still work, right? It still needs to get done, so why separate it out? Why not prioritise it along with the other things to do? If the site goes bang, we all jump on it. If it’s a slightly irritating, non-customer-facing issue we could live with, we prioritise it and get on with other things (unless the client or the team think it should be sorted as soon as possible). It’s all work; some is big, some small, but work nonetheless. 

We include UX and design in the flow - again, work is work, right? We are all the same team, right? Why segment it? Well, we don’t. If a ticket has UX or design elements, they get done in the flow and we measure the throughput along with everything else. 

We pair program - the designers work closely with the developers to do the designs within the app, saving time, effort and iterations. Developers pair with developers, sharing knowledge and skills. They collaborate and review, producing quality-assured code with little to no technical debt.

We collaborate, communicate, celebrate and encourage, through all stages of the process. 

I am fortunate to be involved in some of the most innovative, creative, technical, ground-breaking and award-winning projects with Red Badger. I love using Kanban. I love being able to work with a team of awesome individuals from different specialist areas. I love the ‘let’s try it and see’ approach. I love the challenge and the change. 

Since joining Red Badger, I really, really love being a Project Manager, which proves that working for the right company and the right group of people can have a profound impact on how you feel about your job.

Thanks for reading to the end, I take my hat off to you. If you want to see how we do it here, come and have a chat. We love a cuppa and a biscuit. 

visit us at http://red-badger.com/

Cheers

Phil Brooks 

 

6
May
2015

Death by meeting (strippers need not apply)

by Harriet Adams

Over the years, I’ve worked with a number of FTSE 100 companies. To this day, I still have no idea how anyone gets anything done, considering most of their 9 to 5 is spent either in meetings, preparing for a meeting, or talking about the actions from those meetings in another meeting.

 

 

Utterly bewildered, I’d go along, often spending hours going over the same topics, only for the output to be:

  • We need another meeting to discuss further because we’ve run out of time
  • We can’t make a decision yet because [insert name] isn’t here
  • We don’t have enough information to make a decision

URGH. If there’s one thing that frustrates me, it’s inefficiency. What’s the point of inviting all these people unless we reach a decision or a series of next steps? And why on earth am I here?!

It’s almost like the more people involved, the better and more important the meeting is perceived to be.

 

Funeral strippers

A couple of weeks ago, I read an article about the Chinese authorities clamping down on "funeral strippers". Supposedly, the more mourners you have at your funeral, the more well-off your family appears. Therefore, in order to achieve higher levels of attendance and appear wealthier, some families have resorted to hiring strippers to attract the crowds. 

On paper, it may seem a bit of a stretch as a comparison, but all too often we are big-headed and assume that what we need to discuss is particularly important. The entire team (and often innocent bystanders) are invited to pointless, boring discussions with no tangible output. Everyone joins, drawn in by the promise of something exciting, feels awkward throughout, and ultimately leaves feeling dead inside.

 

Upfront contracts

I learnt a technique on a training course a while ago about Upfront Contracts – something I try to enforce before every conversation and every meeting. A UFC should consist of the following.

  1. Purpose. What’s the point of the meeting / conversation?
  2. Agenda. What are we going to be talking about? Are we all in agreement that this is the right agenda?
  3. Timing. How much time do we have in total to reach a decision?
  4. Output. What do we all need out of the meeting, and at what point do we decide that it is over?

What this doesn’t consider, however, is the quality or number of people in attendance. This goes back to the Lean Start-up concept. Who are the influencers that will enable you to progress? What’s the minimum amount of input you need to reach a way forward?

This is a very important point, as inviting the wrong people may lead to the same problem, despite having a clear set of objectives.

 

Another one bites the dust

In fact, sometimes meetings aren’t necessary at all. Again, referring back to Lean Start-up, sometimes it’s necessary to pivot to be more efficient. A good example of this at Red Badger was during a recent project. Retrospective discussions were being held at 4.30pm every Tuesday afternoon, but it became clear that the team were mature enough to raise issues reactively and add them straight to the Kanban board.

By removing the fixed meeting slot and only running a retrospective when there were three items for discussion on the board, the team were able to be more productive while continuing to make important changes to the process when necessary.

 

Making peace 

I’m not saying that ALL meetings are unnecessary. Quite the opposite.

But next time you have an important decision to make or need to arrange a meeting, think about the following.

  1. Is this meeting absolutely, definitely necessary?
  2. What is the purpose of the meeting and what do you need the end result to be?
  3. Who are the main influencers in reaching this point?
  4. What do we need to talk about to get to this result?

If you ask yourself these questions and make the answers clear to everyone who joins, you’ll make decisions faster and avoid disrupting the team.

In layman’s terms, you’ll be slicker and quicker (with no stripper).

5
May
2015

Fortnum & Mason – Slack Deployments, Confident Delivery

by Jon Sharratt

As you may or may not be aware, the team at Red Badger has been hard at work crafting away at the new Fortnum & Mason e-commerce website. It has also recently been nominated for the best customer experience award at the BT tech & ecom awards (https://techecommawards.retail-week.com/shortlist-2015). We have delivered the site from concept to live in just 8 months using agile and lean methods such as Kanban. One of the core concepts that complements our Kanban approach when delivering features on the project is the ability to deploy without friction, confidently, and multiple times a day.

Let’s break this down and take a look at what the process looks like from a high level.

[Image: high-level deployment process]

Reduced Friction

You might recall I blogged about how we used GitFlow (http://red-badger.com/blog/2013/08/15/sprint-efficiently-with-github/) within our development team.  In Fortnum & Mason and other projects we have recently moved to GitHub Flow (https://guides.github.com/introduction/flow/) mainly due to the recent supporting features implemented by GitHub. 

The core principle is that the team pair programs on a feature branch. A pull request is then created with the relevant specs, and it gets reviewed and collaborated on by the team. Once the feature branch passes all of the tests via CircleCI, it can be merged into master. Our master branch always reflects production code. We use CircleCI, which executes our Ansible scripts for provisioning and deployment.

In Fortnum & Mason we have unit tests along with golden path journeys, written with Capybara and run using ChromeDriver. These golden path specs cover the core journeys that tell us whether the site is transactional. Once all of the specs have passed, the master branch is deployed to our staging environment, immediately ready for our QA to test.
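The real specs are Capybara (Ruby) and aren’t published here, but as a rough illustration of what a “is the site still transactional?” check does, here is a minimal golden-path sketch in TypeScript using selenium-webdriver as a stand-in. The URL and CSS selectors are placeholders, not the real site’s.

```typescript
// Illustrative only: a golden-path check that a product can reach the basket.
// The real specs use Capybara (Ruby); the URL and selectors here are made up.
import { Builder, By, until } from "selenium-webdriver";

async function basketGoldenPath(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://staging.example.com/products/example-hamper");
    await driver.findElement(By.css(".add-to-basket")).click();
    // The journey passes if the basket shows at least one item.
    await driver.wait(until.elementLocated(By.css(".basket-count")), 10000);
    const count = await driver.findElement(By.css(".basket-count")).getText();
    if (parseInt(count, 10) < 1) {
      throw new Error("Golden path failed: basket is empty after adding a product");
    }
  } finally {
    await driver.quit();
  }
}

basketGoldenPath().catch((err) => {
  console.error(err);
  process.exit(1); // a non-zero exit fails the CI build
});
```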

If our QA is happy with the build, they take ownership and tag a release via GitHub releases (https://github.com/blog/1547-release-your-software), stating which issues have been fixed as well as any new features added. The last step is to give the release tag a semantic version (http://semver.org/) number. This gives us fantastic rolling documentation: at a glance, everyone can see what changes have taken place. We are as transparent with our clients as we can be, and our product owner has access, so they can also take a look and really get a feel for what work has been completed day to day, or even minute by minute.

Another tool widely used across Red Badger is Slack, for company-wide collaboration and communication. For this project we decided to set up hubot (https://hubot.github.com/), an automated bot that (mostly) obeys your commands. We added a couple of custom scripts that allow the QA, or any of the team, to deploy a release as and when necessary. It is as simple as a message: @badgerbot fm list tags, which lists the 5 latest tags in our repository. Once you have the tag you want, you can deploy it using @badgerbot fm deploy v1.0.0. This causes a parameterised build (https://circleci.com/docs/parameterized-builds) within CircleCI to run the relevant Ansible scripts using the tag specified, which then deploys into the production environment.
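Our actual hubot scripts aren’t published, but a stripped-down version of the deploy command could look something like the sketch below. The repository path, the CIRCLE_TOKEN variable and the DEPLOY_TAG build parameter are placeholders; the endpoint is CircleCI’s standard parameterised-build API.

```typescript
// Illustrative hubot script: "@badgerbot fm deploy v1.0.0" triggers a
// parameterised CircleCI build that runs the Ansible deploy for that tag.
// Repo path, token variable and DEPLOY_TAG parameter name are placeholders.
module.exports = (robot: any) => {
  robot.respond(/fm deploy (v\d+\.\d+\.\d+)/i, (msg: any) => {
    const tag = msg.match[1];
    const url =
      "https://circleci.com/api/v1/project/example-org/fm/tree/master" +
      `?circle-token=${process.env.CIRCLE_TOKEN}`;
    const body = JSON.stringify({ build_parameters: { DEPLOY_TAG: tag } });

    robot
      .http(url)
      .header("Content-Type", "application/json")
      .post(body)((err: any, res: any, responseBody: string) => {
        if (err || res.statusCode >= 400) {
          msg.reply(`Couldn't trigger a deploy of ${tag}: ${err || responseBody}`);
        } else {
          msg.reply(`Production deploy of ${tag} triggered on CircleCI.`);
        }
      });
  });
};
```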

[Image: deploying a tagged release via badgerbot in Slack]

 

Increased Confidence

Our deployments already come with a high degree of confidence due to the development practices of pair programming, code review, specs and QA-tested features and issues. But if something does go wrong in production, we are safe in the knowledge that we will know about it immediately. How do we know? Well, in come New Relic and a quick Red Badger service we hacked together in a few hours. The Fortnum & Mason site is rigged up all over with New Relic alerts and events throughout its codebase. Every instance, moving part and third-party call has its instrumentation and performance tracked. CircleCI even tracks each deployment, so we can quickly see any performance degradation for every deployment that goes out.

Another element we have in Fortnum & Mason is the ability to flip features on and off using a concept called feature flipping. This allows us to incrementally release larger features to select users, so we can be confident a feature works as its code runs side by side with deployed production code. A good example is adding another payment provider such as PayPal: we can test run it in production with a few users to make sure everything integrates before switching it on for everyone. We have fine-grained control and can release a feature to the product owner, groups of users or even a random percentage of users.
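We won’t go into the specific library here, but conceptually a feature flip is just a gate evaluated per user. A simplified sketch of the kind of rules described above (product owner, named groups, a stable random percentage) might look like this; the flag names and the User shape are invented for the example.

```typescript
// Simplified feature-flipping sketch: release a feature to the product owner,
// to named groups, or to a random-but-stable percentage of users.
// Flag names and the User shape are invented for illustration.
interface User {
  id: string;
  email: string;
  groups: string[]; // e.g. ["staff", "beta-testers"]
}

interface FlagRule {
  enabledFor: string[]; // user emails or group names that are always switched on
  percentage: number;   // 0-100 rollout for everyone else
}

const flags: Record<string, FlagRule> = {
  "paypal-checkout": { enabledFor: ["product-owner@example.com", "staff"], percentage: 5 },
};

// Hash the user id so the same user always lands in the same bucket (0-99).
function bucket(userId: string): number {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) % 100;
  }
  return hash;
}

function isEnabled(flagName: string, user: User): boolean {
  const rule = flags[flagName];
  if (!rule) return false; // unknown flags stay off
  if (rule.enabledFor.includes(user.email)) return true;
  if (user.groups.some((group) => rule.enabledFor.includes(group))) return true;
  return bucket(user.id) < rule.percentage;
}

// Example: only show the new payment provider to users the flag allows.
// if (isEnabled("paypal-checkout", currentUser)) { renderPayPalButton(); }
```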

This really helps the team’s principle of always moving forward. 

Here is a breakdown of the monitoring and alerting services we use and what for:

New Relic Application Performance Monitoring (APM)
Instrumentation, performance and error logging. Every server and third-party call is logged, and alerting is set up to inform us of any bottlenecks and errors.

New Relic Synthetics
Continuous golden path testing to ensure the site’s transactional flows are always working. Selenium scripts using Chrome run every 15 minutes to check that the core journeys on the site are operational.

New Relic Insights
Customer behaviour analytics as well as KPIs. We log everything from delivery methods and revenue to average basket sizes and much more, allowing us to analyse and test new assumptions to improve the customer experience. 

Red Badger Phone Alerting
Although not part of New Relic, this is a service we hacked together that accepts webhooks from ZenDesk and New Relic. If any critical part of our monitoring raises a critical alert or email, the service uses the Twilio API to phone a badger who is on 24/7 support.
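The alerting service itself is a small internal hack and isn’t open-sourced, but the shape of it is roughly: accept a webhook, decide whether it’s critical, and place a call through Twilio. A minimal sketch of that idea, with a made-up route, payload field, phone numbers and environment variables, could look like this:

```typescript
// Rough sketch of the phone-alerting idea: a webhook endpoint that rings the
// on-call badger via Twilio when a critical alert comes in.
// Route, payload shape, numbers and environment variable names are made up.
import express from "express";
import twilio from "twilio";

const app = express();
const client = twilio(process.env.TWILIO_SID, process.env.TWILIO_TOKEN);

app.post("/hooks/newrelic", express.json(), async (req, res) => {
  const severity = req.body?.severity ?? "unknown"; // hypothetical payload field
  if (severity === "critical") {
    // TwiML at this URL tells Twilio what to say when the call connects.
    await client.calls.create({
      to: process.env.ON_CALL_NUMBER!,
      from: process.env.ALERT_NUMBER!,
      url: "https://alerts.example.com/twiml/critical-alert",
    });
  }
  res.sendStatus(204);
});

app.listen(3000, () => console.log("Alert hook listening on :3000"));
```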

Early and often

With all this in place, we can be confident that fixing issues and deploying new features multiple times a day is second nature to the whole team. Deployments are not blocked by hefty deadlines, and big ‘release planning’ becomes a thing of the past. Just a quick review of the tagged release in GitHub by the team is all that is required. The other huge benefit of deploying early and often is that every deployment ultimately carries less risk, because the changes are smaller and incremental.

So next time you are ‘release planning’, ask yourself how confident, efficient and easy it is for you and your team to deploy multiple times a day.