React Native – When the Mobile Web Isn’t Enough

by Joe Stanton

Back in January, a few Red Badgers were lucky enough to be sent to React.js Conf at Facebook HQ. We’re huge fans of React and have been using it to great effect on every project since we discovered it in early 2014.

There were many excellent talks at the conference, and it was staggering to see just how much adoption React has gained in such a short space of time, especially by some huge players in web technology (anyone heard of Netflix, AirBnB or Yahoo?).

There was one very significant announcement that overshadowed the other talks, though. Tom Occhino took to the stage in the first keynote and delivered a Steve Jobs-esque announcement of React Native. As a bonus, all conference attendees were given early access to the project, now open source.

Tom Occhino

After experimenting with, and contributing to, React Native during its private beta phase, I’ve formed an opinion on what it might mean for web developers like us, and for our clients. And it’s exciting.

The Present

Red Badger are rather good at building responsive websites. Our clients love them because they deliver a fantastic experience for their customers on all kinds of devices. These sites feel better, convert better, and look great. They also don’t cost significantly more to build than traditional websites, when developed with responsiveness in mind from the beginning.

They are the 80% solution, though. They don’t solve everything. The last 20% has proven to be a problem. This elusive 20% sometimes means we have to say no to projects where the mobile web is not yet a good fit, and we hate it.

So what is the missing 20%? Or rather, what’s wrong with the current state of the mobile web?

What’s wrong with the Mobile Web?

  • Real performance problems – The DOM is just slow. The weight of the DOM feels much worse on mobile, where resources are constrained. Attaining the 60fps necessary for a smooth experience can be very difficult when images are being decoded whilst scrolling, for example.
  • Poor gesture support – This is a huge part of why native apps feel more intuitive and higher quality. I frankly had no idea just how much control was given to native developers in dealing with gestures correctly until researching it, and it shows. Simple things like not accidentally clicking a link whilst scrolling, responding to sliding events with velocity etc. are much harder to deal with on the web.
  • Many native APIs are still not exposed to the web – so we simply can’t take advantage of them (push notifications, iBeacons, vibration, TouchID, to name a few). This is a major cause of us saying “no” to a requirement. It can be somewhat addressed by a framework like PhoneGap, which comes with its own set of negatives.
  • Half-baked features that don’t really work as intended, e.g. offline support with AppCache.
  • Years of legacy browser support – A huge productivity killer.

Perhaps the biggest problem of all, though, is the glacial rate at which these problems are being addressed. We’re die-hard advocates of the web, but we can’t wait forever. Exciting new web standards such as ServiceWorker (bringing push notifications, much better offline support etc.) are still a way off, and Apple have announced no plans at all to adopt the standard in Safari.

Web apps still don’t feel as good as native apps, and it’s a shame. We’ve been saying they’ll catch up for a while now (Facebook famously went native in 2012) and they haven’t.

So how do you avoid the unreasonable cost of native development, of running separate teams with separate tech stacks, with no code reuse, limited shared knowledge, and their minds on technology instead of the product?

React Native

After using React Native for a few weeks, I’m convinced. I’ve distilled my feelings into a list of benefits:

  • It’s Real, Native UI – With great performance, animations and gesture control, that blend effortlessly into the platform. This means you’ll still have different Web, iOS and Android implementations, but they’ll provide a great experience that’s tailored to each platform. It’s nothing like a WebView.
  • It’s React, a technology we already love – it’s fast, it’s familiar, and we write far fewer bugs in it thanks to its functional properties.
  • It’s the same tech stack – modern JavaScript (ES6), the good parts of CSS (Flexbox) and the same tools for building, packaging (NPM), linting (eslint), testing (jasmine/jest) etc.

    This means the same team can develop the web, iOS and Android versions of an app, using technologies they’re already comfortable with. The developers best placed to build a product are the ones who understand it best, not those segregated into different teams based entirely on technology.

  • Speed of Iteration – Changes in the UI are reflected immediately via Live Reload. There is no lengthy recompilation step, and most changes appear within a few seconds. This is what we’re used to on the web, but something very new to Native development. It leads to happier, more productive development sessions, and a willingness to experiment without frustration.
  • A simple JS-to-Native bridge – so you can use libraries written in Obj-C/Java when you need to. It’s possible to build both native libraries (e.g. bridging the TouchID service on iPhone) and native UI components, such as a switch or slider. You can create nice, declarative APIs for these libraries and distribute them on npm.
  • It works on iOS and Android (coming soon) – There’s no reason it can’t work on other platforms too.
  • It’s even possible to hot-load new JavaScript code, thanks to a recent (extremely significant) relaxation of App Store rules.


These benefits boil down to enormous workflow and productivity gains over traditional native development, making that last 20% much more bearable and potentially a lot less expensive, too.

The Future

The most promising part of React Native is not the code itself, but the ideas. Facebook core contributors have regularly stated that they see it as a testbed for new ideas. These ideas can lead us to better abstractions and solve real problems before being implemented within browsers. More eloquently put:


We need better primitives for building performant and native-feeling UI within mobile browsers, and this shouldn’t start as a draft from a standards body. Perhaps React Native can help lead us there.


Responsive Design: How device usage is changing the way we build websites

by Andrew Cumine

How many web-enabled devices do you own? A very simple question. A phone? A tablet? A laptop? A desktop? These are just the basics. The most common. There are also web-enabled TVs, games consoles, and even fridges. Slowly, more and more devices are connecting to the web.

As a developer or stakeholder, you will want your website to be visible and fully functional on as many devices as possible. The problem is that devices have different sized screens with different screen resolutions.

If you write your website to work on desktop devices you don’t want to have to write another whole website to do exactly the same thing for mobile devices. Doing so would be a massive waste of time and resources that could be better used somewhere else.

That being said, you don’t want mobile device users to see a broken site that just doesn’t work. That wouldn’t reflect well on you or your business.

The answer to this problem is responsive design.

What is responsive design?

Responsive web design was first discussed by Ethan Marcotte in his 25th May 2010 A List Apart article titled “Responsive Web Design”. He discusses how interior designers and architects had been experimenting with “responsive architecture” – creating art installations and wall structures that change shape when crowds approach them.

He puts forward the idea that we should design websites for optimal viewing experience on an increased number of devices. Design that adapts to the device that is used to view it. This is responsive web design.

Responsive Web Design

How can we make a website responsive?

In his article, Ethan Marcotte also laid out a solution. Media queries.

Included in CSS3, the media query is a wonderful tool for responsive web design. Arguably its two most useful features are max-width and min-width.

Using a max-width or min-width media query allows you to specify breakpoint ranges for different CSS styles. This means that the same DOM (Document Object Model) element can have different CSS styling at different browser widths.

div {
  background-color: green;
}

@media (min-width: 300px) {
  div {
    background-color: red;
  }
}
Above, the div will have a green background colour unless the browser width is 300 pixels or more, in which case it will have a red background colour.

div {
  background-color: green;
}

@media (max-width: 300px) {
  div {
    background-color: blue;
  }
}

Above, the div will have a green background colour unless the browser width is 300 pixels or less, in which case it will have a blue background colour.

A very basic working example of media query usage is shown here. To see the effect that the media queries have, change the width of your browser. There are two breakpoints, one at 300 pixels and one at 1200 pixels.

These media queries are used to override existing CSS rules at the specified breakpoint. This allows you to make a website responsive while only adding a few lines of CSS to your codebase.

Mobile First

Where do you start your breakpoints? On the project I am currently working on, Google Analytics reports that almost 50% of traffic is from mobile users, and that 37% of this is iOS. Gartner reported that 83% of device sales in 2014 were tablets or smartphones. It makes sense to ensure that your site is well designed at smaller screen resolutions. This is where the mobile-first methodology comes in.

Mobile first is the process of creating the layout and styling of your site for mobile devices before any work is done on the desktop views. Usually this results in a less cluttered visual design across all devices, due to the limited screen space available on a mobile device. Importantly, it ensures a strong mobile user experience.

After the initial mobile design styling is complete, breakpoints are added for other, larger screen resolution devices using media queries. This is where you would use min-width media queries.
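A minimal sketch of what this looks like in practice (the class name and values here are invented for illustration): the base rules target mobile, and min-width queries layer on styles as the screen grows.

```css
/* Base (mobile) styles – no media query needed */
.site-nav {
  display: block;   /* links stacked vertically on small screens */
  font-size: 16px;
}

/* Layered on for tablet-sized screens and up */
@media (min-width: 768px) {
  .site-nav {
    display: flex;  /* horizontal navigation once there is room */
    font-size: 18px;
  }
}
```

Because the mobile rules come first and are unconditional, a device that never matches any media query still gets a complete, usable layout.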

Common Breakpoints

Breakpoints are often added to a website in order to meet the design and styling required regardless of the device being used to view the content. This kind of breakpoint can vary from website to website, whereas device based breakpoints are fairly standard.

Mobile portrait – 320px

Mobile landscape – 480px

Tablet portrait – 768px

Tablet landscape – 1024px

Desktop – 1224px

These are common guidelines for CSS breakpoints. Tablet landscape and desktop are often changed to suit the style of the site.

CSS Preprocessors

CSS preprocessors such as Sass, Less, and Stylus have extremely useful features to use in responsive design. In this post I will focus on Sass although both Less and Stylus have their own equivalents.

Predominantly used as a Ruby on Rails plugin, Sass is a CSS preprocessor that really improves the speed of development of a responsive site with media queries. Most useful are nested rules and variables.

Using SCSS rules in your CSS makes it much clearer when defining styling for each breakpoint. Instead of having the styling for each breakpoint separated by media queries, all of the media queries for a specific DOM element can be nested inside the element’s initial styles.

Setting your commonly used breakpoints as SCSS variables allows you to reuse the same breakpoints easily while referring to descriptive, easy to remember, names.
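For example, the common breakpoints listed earlier could be captured once as variables; the $mobile-landscape and $tablet-portrait used in the snippets below would come from a list like this:

```scss
// Breakpoint variables named after the common device widths
$mobile-portrait: 320px;
$mobile-landscape: 480px;
$tablet-portrait: 768px;
$tablet-landscape: 1024px;
$desktop: 1224px;
```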

// No nesting

div {
  // All your default styling
}

@media (min-width: $mobile-landscape) {
  div {
    // All your mobile landscape styling
  }
}

@media (min-width: $tablet-portrait) {
  div {
    // All your tablet portrait styling
  }
}

// etc. for more breakpoints

Without nesting, each DOM element to be styled appears under every breakpoint, creating significant clutter in your CSS files. This can be confusing and difficult to manage on large projects, as refactoring the CSS codebase may become a daunting task.

// With nesting

div {
  // All your default styling

  @media (min-width: $mobile-landscape) {
    // All your mobile landscape styling
  }

  @media (min-width: $tablet-portrait) {
    // All your tablet portrait styling
  }

  // etc. for more breakpoints
}

With nesting, the DOM element to be styled has all of its breakpoint styles in one place, making it much easier to maintain and to find the CSS styling you are working on. Note the use of the SCSS variables $mobile-landscape and $tablet-portrait in both examples.


This is a very basic overview of responsive design. Much of the nature of responsive design and how it is carried out is down to personal preference, site specification, or dictated by your tech stack.

With an ever increasing number of devices network enabled, it is up to developers to ensure that the user experience is maintained and consistent across as many devices as possible.


Badger Digest, pilot issue

by Alex Savin

Original image by Peter Trimming

At Red Badger we’re using Slack. As of this moment, we have about 50 active channels (not counting archived ones), containing lots of shared items and interesting discussions. Most of it is (obviously) hidden from the public eye. This post lifts the curtain by revealing some of the most discussed and shared links on our channels from the past week. With some luck, this might become a regular thing. Enjoy!


  • Accessibility APIs: A Key To Web Accessibility. This was a follow-up to our efforts to get a better understanding of implementing proper accessibility in web apps. James Hopkins also wrote a post on his personal blog titled There Should Be No Disabled Users On The Web.
  • Nodyn – a Node.js compatible framework, running on the JVM
  • Reapp – a React-based framework for building mobile web apps that look and feel native. With Webpack out of the box.
  • Developer infrastructure at Facebook’s scale – an impressive talk by Katie Coons on the software delivery process and test automation
  • JavaScript coding style by Gov.UK
  • Community roundup blog on React (Native). They mention something important about the React Native schedule for Android:

When is React Native Android coming? -> Give us 6 months.


  • Detailed explanation of the DDoS attack on GitHub’s servers


  • Your website should be so simple, a drunk person could use it


  • AtomPair – an Atom package that allows for epic pair programming
  • React Parts – A catalog of React Native components



React London meetup: March

by Alex Savin


Our March meetup was once again kindly hosted by Facebook at their cosy Euston office. The 250 available spots were gone just a few hours after registration opened, so this post provides you with all the necessary materials to catch up on some important topics in developing React apps.

@pocketjoso delivered a talk on isomorphic React apps. Isomorphism in web development means running the same application code on both the server and the client, so pages can render with or without JavaScript. Here are just a few reasons to make your app isomorphic:

  • Your app will be fully functional even if the browser doesn’t support JavaScript, or a user deliberately disables it (like, say, if you’re using Tor browser with JS disabled for security reasons).
  • It is good for SEO. Googlebot and other crawlers are still most effective with static HTML pages. Google did announce last year that Googlebot will gradually start supporting JavaScript, but it is still very limited in doing so.
  • If for any reason your app’s JavaScript breaks, it will still remain fully functional.

Implementing an isomorphic app with React is not that hard – React provides an extremely useful method for server-side rendering of static HTML. There are, however, a few more issues you’ll need to solve to get a fully functional isomorphic app, like passing props from server to client. In his talk, Jonas covers all the important aspects, from rendering the initial state of the app on the server to taking snapshots of stores and implementing an isomorphic Flux pattern.
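Under the hood, the key primitive is React’s renderToString, which turns a component tree into an HTML string on the server. As a toy sketch of the idea – plain objects standing in for components, not React’s actual implementation:

```javascript
// Toy sketch of server-side rendering: walk a "component" tree and
// produce the HTML string the server would send to the browser.
// React.renderToString does this (plus escaping, attributes and
// reconciliation checksums) for real React components.
function renderToString(node) {
  if (typeof node === 'string') return node; // text node
  var children = (node.children || []).map(renderToString).join('');
  return '<' + node.tag + '>' + children + '</' + node.tag + '>';
}

var page = {
  tag: 'div',
  children: ['Hello, ', { tag: 'strong', children: ['world'] }]
};

console.log(renderToString(page));
// → <div>Hello, <strong>world</strong></div>
```

The real thing also has to ship the app’s initial state alongside the markup so the client can pick up where the server left off – the “passing props from server to client” problem Jonas describes.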

Slides available.

@robknight_ explained the important topic of testing React apps. As Facebook mentioned during the recent F8 event, unit testing your app is very important.

There are a few ways of doing this when implementing a React app, and most of them revolve around the idea of testing React components in isolation. React itself provides some batteries out of the box, and Facebook gently suggests more tools to achieve painless testing.

Robert tells the story of exploring different testing paths and covers a great many issues related to component-based testing, the caveats of using Jest, and testing the Flux flow, and even mentions the new testing features in React 0.13.

Slides available.

@StuartHarris presented the Tesco International Grocery mobile app, made with React. It is fully isomorphic, works on most mobile browsers with or without JavaScript support, and is currently live in 7 countries around the world. Stuart offers a unique insight into how this massive app was implemented, as well as an in-depth look at the evolution of its tech stack.

Implementing an international app brings some interesting challenges, like branding variations depending on the country, localisation, support for multiple languages, different currencies and legal requirements. Stuart also performed a live demo of the app, showcasing how it is rendered on both server and client, the use of data attributes to transfer localisation strings to the browser, and how parts of the app were implemented as mini single-page apps.

You too can try the brand new Tesco mobile web app live – here is the one for Thailand.

Big thanks to everyone who made it to the meetup, and to everyone who followed us online. Stay tuned for the React London April meetup announcement!


There should be no disabled users on the web

by James Hopkins

A few months ago, I came across a thought-provoking tweet, suggesting that businesses tend to prioritise IE8 support over accessibility, even though the latter comprises a larger demographic. It’s a statement that I’ve been pondering for a while now, and it’s made me completely re-evaluate my own stance towards accessibility; hopefully, after reading this blog post, it will change yours too.

To begin, we need to define the term ‘web accessibility’. It’s a term that’s thrown around in the web community (along with the infamous #a11y tag) a fair bit. Wikipedia’s description is:

…the inclusive practice of removing barriers that prevent interaction with, or access, by people with disabilities.

However, I think this is incorrect, since it implies a level of retrospection; rather, there fundamentally shouldn’t be any barriers in the first place that need removing. With this in mind, it can be shortened to:

The inclusive practice of ensuring everyone can use websites

And herein, I believe, lies the problem.

Why do we need to make the web accessible?

A better way of phrasing the question is: why would you intentionally restrict certain users from being able to use your site? If, for some reason, you need further persuasion, read on.

The traditional approach

Put simply, the same priority given to ensuring accessibility across browsers (using cursor/touch navigation) isn’t afforded to assistive technologies. These can include:

  • screen readers, which read what the browser has rendered

  • braille terminals

  • screen magnification/page zooming

  • speech recognition

  • keyboard overlays

I strongly believe this phenomenon is endemic to the web community as a whole, although it’s refreshing to see the rhetoric changing recently. Ask yourself this – how many teams have you worked in where testing the tabbing interface and/or using a screenreader is done during the development/QA phase?

Why are visual browsers given priority?

One theory I have as to why most of you will have answered ‘zero’ to the above question relates to the makeup of the stakeholders and developers with whom you worked on those projects. It’s safe to assume that, in the majority of cases, they were unlikely to suffer from the conditions that the aforementioned devices are designed to help with. A conclusion could therefore be drawn that they relate more to users of visual browsers.

The inefficient workflow

Traditionally, accessibility conformance has been treated as a separate task that’s carried out either when the client gets panicky about legal implications and decides to carry out an ‘audit’, or when the development team flags that there are issues with the site post-launch. Trying to work retrospectively like this simply doesn’t work, since:

  • an audit is a static document; assuming the project is under active development, more often than not the audit becomes out of date as soon as it hits your inbox

  • trying to mitigate the risk of code conflicts during component ‘patching’, especially when working in an Agile environment, is incredibly tricky

  • it can be a touchy subject getting buy-in from your client to remedy the issues identified, when you’re implying that you’ve delivered a product that not everyone can use

A better idea

For the purposes of this article, accessible UX and design is assumed.

Accept that a proportion of your users have a disability

This fact is easily demonstrated through available census statistics. If you’re attempting to detect assistive devices (for whatever reason), in my opinion you’re doing it wrong – although there is a current proposal to enable this feature. Disability is an umbrella term describing a wide variety of impairments; assistive devices are mechanisms designed to empower only a select disability demographic, so using these metrics to represent the demographic as a whole isn’t appropriate. How can you accurately detect, for example, a user with a motor impairment who relies on an intuitive keyboard interface?

Changes to development workflow

  1. Test assistive workflows and devices. Obtain buy-in from your project manager to ensure this is done during the development phase. Incorporate new exit criteria into user stories to ensure that no user story can progress past development without the relevant conformance.
  2. Implement WAI-ARIA patterns. The WAI-ARIA guidelines provide rich semantics for users who can’t discern context through visual language. Technically speaking, it’s a declarative microformat that describes component patterns through attribute labels on DOM elements, denoting application composition and mutable state. It should be considered a mandatory requirement for screenreader users.
  3. Ensure intuitive keyboard routing. This guideline is particularly pertinent for web applications that feature rich visual interactivity (dialogs/modals, submenus that can obscure content when visible, etc), where a natural navigation and reading order needs to be preserved.
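As a small illustration of point 2 (hypothetical markup, not taken from a real project), a disclosure button for a submenu might expose its state to assistive technologies like this:

```html
<!-- The button announces itself as collapsed or expanded to screenreaders;
     a script would toggle aria-expanded and the hidden attribute together. -->
<button aria-expanded="false" aria-controls="main-submenu">
  Products
</button>
<ul id="main-submenu" hidden>
  <li><a href="/groceries">Groceries</a></li>
  <li><a href="/clothing">Clothing</a></li>
</ul>
```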

Think differently from now on

As part of the delivery team for a project, you have a responsibility to deliver a quality product. Next time you’re working on a user story, spare a thought for people who can’t use a visual browser, and exercise your due diligence a little more to increase equality.

Here at Red Badger, we’re currently transitioning to this new workflow, which will increase our coverage for assistive devices even further.


Writing better CSS with meaningful class selectors

by Dominik Piatek

With the rise of frameworks like React, Angular, Ember and the Web Components spec, the web seems to be moving towards a more component-based approach to building apps. The abstractions for these are getting better, except for one part: CSS. In many ways, we write CSS just as we wrote it years ago. Even if we think about our UI as being more modular, the CSS is still spaghetti. Last year I was tasked with designing a CSS system for a component library, and after much trial and error I found quite a satisfying pattern that allowed the codebase to grow in a quick and maintainable way.

Let’s start by looking at some standard spaghetti. Say we “think” about our CSS classes as “components”.

.menu {}
.overlay {}
.button {}

See the Pen QwZYwP by dpiatek (@dpiatek) on CodePen.

And we want a “modular” CSS system. So we start writing rules like this:

.menu .overlay > .button {}
#home .sidebar.managers-special {}

See the Pen PwyVqB by dpiatek (@dpiatek) on CodePen.


Last time I checked, “modular” did not mean cobbling everything together with a jackhammer. All our elements are now tied together, and the selector makes it ambiguous where each belongs. As a side effect, it will also force our HTML into a specific structure:

See the Pen dPgaYz by dpiatek (@dpiatek) on CodePen.

It’s not uncommon to mix “components” as well:


See the Pen PwyVPB by dpiatek (@dpiatek) on CodePen.

The “flexibility” of the language is what makes it so hard to reason about; could we avoid some of its features to make it better?

From my experience, the answer is yes.

Classes are the most powerful selectors because we have the most control over what they actually mean. But if that meaning is fuzzy and not respected, we lose the benefit. We need to make sure that the way we write our selectors preserves their meaning rather than muddying it.

When working on a pattern library for econsultancy.com I experimented a lot with approaches to writing such classes, and ended up with three rules that help the most:

1. A CSS class always represents a component. Any other abstraction that a CSS class may represent needs a prefix, suffix or similar – it needs to be easily distinguishable, and the convention needs to be well established.


// In this project, components are:
.menu {}
.menu-title {}

// Classes that modify a component start with a `-`
.-left-aligned {}
.-in-sidebar {}

// Helpers are started with `h-`
.h-margin-bottom {}

// Theming classes are started with `t-`
.t-brand-color {}

See the Pen NPOoGo by dpiatek (@dpiatek) on CodePen.

2. All CSS selectors that contain a CSS component class must start with it, and they must not modify any other CSS component class or its children, directly (by referencing it – so a selector may only ever include a single CSS component class) or indirectly (by element selectors, for example).


// Ok selectors. Element selectors here would be defaults for given tags.
img {}
a {}
header {} 
.main-logo {}
.menu {}
.menu-item a {}
.menu-item img ~ a {}
.menu-item.-last {}

// Not ok
header a {}  
header .menu {}  
.menu .menu-item {}
img ~ a {}  
.menu li:nth-child(2) a {}

See the Pen qEJgbm by dpiatek (@dpiatek) on CodePen.

3. An HTML tag can only have a single CSS component class.
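A quick illustration of rule 3, reusing the hypothetical naming conventions from rule 1:

```html
<!-- Ok: one component class, plus a modifier and a helper -->
<nav class="menu -in-sidebar h-margin-bottom">...</nav>

<!-- Not ok: two component classes on the same tag -->
<div class="menu overlay">...</div>
```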


As with any subset, in some cases this will make writing some of your CSS harder and more verbose. It’s probably overkill for smallish projects, where what you need is a stylesheet rather than a component library. It’s probably not well suited to drawing iPhones with box-shadows either.

Things can be done with preprocessors to simplify this approach. For example, on the aforementioned project we used SCSS, and all classes were output with mixins. This made for some crazy-looking CSS, but in the end all developers on the project found it much easier to extend and reason about.
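A rough sketch of that mixin idea (the names here are invented for illustration, not the actual econsultancy.com code):

```scss
// Define a component's styles once, as a mixin
@mixin menu-component {
  .menu {
    display: flex;

    // Modifier class, following the `-` convention from earlier
    &.-in-sidebar {
      flex-direction: column;
    }
  }
}

// Output the component's classes where the stylesheet needs them
@include menu-component;
```

Keeping each component behind a mixin means its output location is controlled in one place, which makes the single-component-class rules easier to enforce.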


This blog post first appeared on dominikpiatek.com

Red Badger are hiring. Come join us


Ghostlab – This won’t hurt a bit!

by Monika Ferencz

I always thought cross-browser testing on multiple devices was a lot like going to the dentist: it is unpleasant even if you are not deathly afraid of the process (which I am), but an absolute necessity to do from time to time, otherwise the undetected problems could escalate and eventually get painful… But maybe now I will be proven wrong about mobile testing, as Ghostlab seems to be the solution that eases the dread and makes the process smoother.

Ghostlab devices

If you are interested in the topic of mobile testing, you probably know quite a few tools and ways of testing, like Sauce Labs and Browserstack, which are useful and easy to learn. At a basic level, Ghostlab is also a simple cross-browser testing app: you can fire up your own server, connect devices to it, then use the app itself to debug your site and make the necessary changes as you go. But Ghostlab manages to be even more than the sum of all similar apps, by offering built-in solutions to the most prevalent problems that surface when doing mobile testing.

No setup whatsoever

Most tools require quite a bit of setup, save for those that work in your browser (which in return lack most of the functionality offered by locally run apps). Ghostlab only needs to be installed on your machine once; you just have to make sure it’s on the same wifi network as the devices you will use it with. Once the server is started for your (local or external) project or website, Ghostlab will provide you with the IP address through which your browsers and devices can connect to it. And that is the extent of it – you don’t have to struggle with different apps, whether you test on iOS, Android, Windows or Firefox OS devices. As any JS-enabled client can connect instantly, compatibility ceases to be an issue: instead of installing something on all your devices, you can just type in Ghostlab’s address and have all the synchronised magic happen at once!

Synchronised clicks, scrolls and more

Easily the most unique and brilliant feature of Ghostlab is how every interaction is synchronised over all the different browsers. Once you connect a device, you can open pages, click, scroll and even fill out forms in one of the connected browsers (even on your desktop), and see the exact same thing happening across all the other browsers which connect to your Ghostlab server. This makes testing on multiple devices far easier than individually going through the motions on them all. Not only does this awesome synchronisation save a huge amount of time, it also looks surprisingly cool when a bunch of different sized devices start clicking and scrolling through websites in unison!


Remote debugging for every browser

Remote code inspection is traditionally a pain when it comes to mobile testing – especially when you consider older devices, or most versions of Internet Explorer. Even for browsers with built-in debug capabilities, using them on mobile can be uncomfortable. The solution Ghostlab offers for this is a thing of beauty: you can simply click the “inspect” button on the tab of the connected browser/device, which brings up a devtools window where you can inspect, select and edit DOM content, CSS and scripts. This is made possible by weinre, a remote web inspector bundled with Ghostlab, which lets you play around with and fix your code instantly.

So, do you need it?

Apart from the features described above, all of which I find infinitely useful, Ghostlab has a few more nice touches, described in detail on their website. The ability to tweak exactly how content is served through Ghostlab, the idea of resizable, preset workspaces, and the straightforward interface design are all bonuses on top of an array of must-have functionality.

But the best things in life are usually either very expensive, or they make you fat. While Ghostlab is low on calories, it probably costs more than a freelancer or small company would pay without thinking twice. You can try it free for the first week to see if it does what you need it for, and if you buy multiple licenses it gets a bit cheaper, but when it comes to testing, quality is admittedly expensive…

At the end of the day, Ghostlab might not be the ultimate tool for every need (I don’t believe such a thing exists, anyway), but it made my mobile testing experience considerably less scary and painful. Now if only I could find a similar solution for going to the dentist…



Public Speaking for the Terrified

by Sarah Knight



Red Badger were pleased to welcome the Ladies Who Code group to Badger HQ for their March meetup – a public speaking clinic aimed at novices (and the generally petrified). It was presented by Trisha Gee who was aided by her self-proclaimed ‘controversial’ friend Mazz Mosley. Trisha and Mazz both have very different styles of presenting and preparation (hence the ‘controversy’), so it was great to get both their perspectives on things and really drive home the main message of the evening: Be Yourself.

The evening started as every good public speaking workshop should – by subjecting the participants to abject terror, and forcing them to stand up in front of everyone and say a few words. Most people were pretty terrified at the prospect (my heart rate went through the roof), but all gave it a good go and made it through their allocated 20 seconds. After that the audience was able to relax, and take in some tips to help combat those feelings in future.

Be yourself

The biggest thing I took away from the workshop is to just be yourself. The best speakers are the ones who are comfortable on stage. The audience doesn’t want to be worried for you – they need to feel they’re in good hands. If you’re relaxed, they will be too.

So stick to your strengths, and what you’re comfortable with. Wear clothes that you’re happy in, and stick to a style of delivery that suits you. If you’re naturally funny, keep things light-hearted and get people laughing. If you find it difficult to raise a laugh, don’t even try. Stick to a more matter-of-fact delivery. Both approaches are fine – don’t try to be something you’re not.

Don’t worry about the ‘rules’, or advice you’ve been given. You don’t have to eliminate all your natural mannerisms and filler words. If you try to change too many things it will feel and sound unnatural. You’ll be stressing about remembering what you should or shouldn’t be doing, and are more likely to mess up.


Trisha went through a number of questions (if you get the chance to attend one of her workshops – do!), but I’ve just picked out a few:

What should I talk about?

People are often under the false impression that they need to talk about something super-technical. Actually, people will often want an introduction to something rather than a really in-depth analysis.

Some potential topics are something …
… you struggle(d) with
… you’re new to
… only you know
… people are always asking you
… that people find hard
… that interests you

Talking about your own personal experiences is often a good way to get started in public speaking – nobody can ask you difficult questions or contradict you!

What do I do if my mind goes blank?

Don’t be afraid of silence. You don’t have to talk constantly and fill every single second. A pause will often actually generate interest, as people wait to hear what you’re going to say next.

Don’t panic – remember that the audience have no idea what’s happening in your head. Take a drink of water to give yourself a bit more thinking time, maybe check your notes. Find another way to say the same thing, or just move on. People won’t even realise if you skip some of what you’d planned to talk about.

How do I deal with difficult questions?

Remember that the audience is on your side. They’re (usually) not out to get you or trip you up. They either have a genuine question, or want validation.

Set expectations at the beginning – are you happy to take questions during the talk as they occur to people? Or would you rather they save them until the end? If you don’t want to take questions, that’s ok too – maybe provide an alternative way of contacting you, either in person later, or online.

Always repeat the question back. This ensures that everyone hears the question, and allows you to check you’ve understood it.

Validate the person who asked the question – “That’s a great question”, “That’s a really good point”, ”Does that answer your question?”

Answering questions is nowhere near as scary as you think it’s going to be. If you don’t know the answer, that’s fine. Just say that – and maybe refer them to someone else who will know.

Don’t wait to be perfect

Don’t overthink things – you just need to get out there and give it a go.

The standard for public speaking isn’t as high as you think it is. The audience is rooting for you – they want to hear you talk about what you’ve come to say, and go away having learnt a few things that they didn’t know before. That’s pretty much all they’re hoping for.

There are a lot of benefits to developing this skill. Not only do you get to share things you think are important (and potentially get paid to travel and attend conferences); you’ll also develop your communication skills, and the ability to articulate your thoughts and decisions concisely and efficiently. That’s an essential skill for developers, who need to justify technical choices to colleagues and clients.

There’s a huge amount of advice about public speaking. As with so much in life – just do what works for you, and don’t get caught up worrying about the ‘correct’ way of doing things.

If it helps – do it.
If it doesn’t – don’t.

At the end of the day, it’s just standing up and saying some things.
What’s the worst that could happen?

Red Badger are hiring. Come join us


React Native – The Killer Feature that Nobody Talks About

by Robbie McCorkell

React Native logo
At the end of January I was lucky enough to go to React conf at Facebook HQ in Menlo Park. This was my first tech conference, and it was a great and inspiring experience for me. The talks were excellent and I recommend everybody check out the videos, but the talks that really stole the show were the ones on React Native.
React Native allows developers to build real native applications using JavaScript and React – not the web-wrapper applications we see all too commonly. React simply takes charge of the view controllers and programmatically generates native views using JavaScript. This means you can have all the speed and power of a native application, with the ease of development that comes with React.

Playing with React Native

This is a really exciting development for the native app world, and gives a new perspective on what native app development could be. I’d previously tried to learn iOS development a couple of times, first with Objective-C, and later with the introduction of Swift. Whilst I think Swift is a massive improvement in iOS development, I still ended up getting bored and life got in the way of learning this new system. So initially, the idea of using my current skill set in React web development to build truly native apps was extremely enticing (and still is).
I generally believe that as a developer you should pull your finger out and learn the language that suits the job, but in this instance React Native seemed to offer more than just an easy way into iOS development. It offered a simple and fast way to build interfaces and to manage application logic, and the live reloading of an application in the simulator without recompiling blew my mind.

Luckily for me, conference attendees were given access to a private repo containing the React Native source code, so I began playing as soon as I got back to my hotel room. Within 5 minutes I was modifying one of the provided examples with ease, without any extra iOS development knowledge, and I was hooked.

An addendum

Since then I’ve been leveraging my early access to talk publicly about React Native, to a great reception. It’s been fascinating discussing this with people in the community, because I hear very little scepticism and a lot of excitement from web developers and native developers alike (at least those that come to a React meetup).

However, the more I talked about it the more I realised the message I was conveying in my presentations was not quite right. One of the major themes I focused on was that JavaScript developers like myself can now easily get into the native world, and that companies only need to hire for one skill set to build and maintain their entire suite of applications.

This is still a hugely important advantage, but it isn’t the major benefit and doesn’t highlight what React Native offers over competing frameworks. It also doesn’t perfectly align with my view that looking for a tool or framework just to reduce the need for learning another language is lazy thinking. There’s more to React Native than this.

React Native’s advantage

React Native’s biggest feature is React.

This may seem a bit obvious (the clue is in the title) but let me explain. When I first looked at React, like most people I thought it was insane. It takes such a different approach to web development that many people’s immediate reaction is repulsion. But of course the more I used it, the more I realised I could never go back to building web applications (or any front-end app, for that matter) any other way. The patterns React provides are an extremely powerful way of building applications.

If you haven’t used React much, it might help to know that React lets you declaratively define what your view should look like given some input data. A React component is passed the properties it requires to render a view, and as a programmer you simply define the structure of the view and where that data should sit. In doing this you’ve already done half of the work in building your application, because if a component or any of its parents changes its data (in the form of state), React will simply re-render the affected components given the new data.

No specific data binding. No event management. No micro managing the view. Just change the data and watch React recalculate what your view should look like. React will then use its diffing algorithm to calculate the minimum possible DOM manipulations it can do to achieve the desired result.

The second half of your application structure is of course user interaction. Patterns and tools like Facebook’s Flux and Relay help with this, but essentially these are just ways in which you can modify the data in your application in a neat and scalable manner. The application still simply recalculates the structure of the view once the data has changed.

React really shines when you start to scale an application, because the complexity of your application doesn't have to increase too much. This is of course quite hard to demonstrate in a blog post, but I can give you an example of a simple little app written in Angular and React that I adapted from Pete Hunt, one of React's creators.


See the Pen RNBVZz by Robbie (@robbiemccorkell) on CodePen.


See the Pen VYBbyN by Robbie (@robbiemccorkell) on CodePen.

You can see in the code above that the React implementation is pretty short, even with the markup defined in JavaScript. This is mostly because of the lack of linking functions connecting the code to the markup – React will just figure this out by itself. With all of this re-rendering of the markup every time something changes, you would think React would be quite slow, but it's not. For a demonstration of how React performs in comparison to other frameworks, check out Ryan Florence's presentation from React conf.

This is an extremely simple and powerful way to build front-end applications. It combines the best of both worlds in a simple and easy interface for the programmer, and a performant experience for the user. It’s for this reason more than any other that React Native is an exciting new tool in native app development. It changes the way a programmer thinks in building the front-end of their app.

A sample React Native app

Tappy Button screenshot

In the talk I mentioned above I demonstrated some of my ground-breaking research in video game technology with a game I created that I call Tappy Button. The objective of this game is to tap the button in the middle to increase your score.

The sample code below defines the full application seen in the screenshot including view structure, styling and application logic. What you should notice here is that the code is extremely unremarkable. It’s the same kind of React we know and love simply applied to a native app context, and that is what’s so remarkable about it. The only differences to traditional React web development are the element names1 and the inline styling2.

Building the same application in Xcode using Swift requires a similarly small amount of code, but it does require the developer to perform a process of clicking and dragging to create elements, constraints and method stubs in the code which quickly becomes tedious. As the application becomes more complex, the developer must manage collections of view controllers that manually update the view when required. But in React the view is declared once and all the developer must do is make sure the data inside each component changes when necessary. 

Other frameworks have crossed the JS-native divide in a similar way to React. NativeScript neatly gives developers direct access to native APIs in javascript3, and the creators of Titanium have been very vocal about the fact that they have provided similar javascript bridging and CSS styling for native apps for years. But all of the articles I’ve read that compare these frameworks are missing the biggest differentiator between React Native and the others, and that is React itself.

When discussing React Native, the ability to write native apps in javascript only really deserves a cursory mention in the discussion. What's really important here is that we can build native applications using the power of React.

React as the view engine

The important things in the future of React won't be more features, add-ons or utility libraries. Yes some improvements, optimisations and structural changes will be built in down the road, but I don't think Facebook want to overcomplicate something that works so brilliantly and simply already. 

We already have React being applied to different areas with React Native, and at React conf Netflix announced they were taking a similar approach, even applying React to embedded devices like TVs using their own custom built rendering engine. We've also heard that Flipboard have swapped out the DOM in favour of canvas elements in their latest web app to impressive results.

In the future we are going to see React applied to many different areas of front-end development by simply swapping the DOM and browser out for another view structure and rendering engine, whether that be mobile and desktop native elements, canvas, embedded devices, or who knows what. This will allow developers to use the power of React and its development patterns in any environment.  

Writing native applications for our phones is just the beginning, and it’s a direction I can stand behind. The biggest benefit of react native isn’t javascript. It’s React.


  1. The two base elements we can work with are 'View' and 'Text', which act as block and inline elements respectively. Coming from the world of the DOM, we can simply translate 'div' to 'View' and 'span' to 'Text' and we’re basically good to go. Any other elements, like 'TouchableHighlight', are utility components provided by React Native’s component library.

  2. Facebook have provided their own CSS-based styling interpretation, including their own clever implementation of Flexbox. We now write our styles as JavaScript objects and apply them inline to our view elements. Currently styles don’t seem to cascade, which is both an advantage and a disadvantage depending on your use case. But the interesting thing about applying styles this way is that you have the full power of JavaScript at your fingertips. With a bit of forethought you could conceivably come up with clever systems to share global styles, and apply them to elements automatically in your application.

  3. I think NativeScript could be an interesting way to build react native bridges to native APIs, and we'll be experimenting with this in the future. However I’m still sceptical as to whether the overhead is worth it, and maybe if we want to build native bridges we should just learn the bloody language!

Red Badger are hiring. Come join us


Understanding the Enigma machine with 30 lines of Ruby. Star of the 2014 film “The Imitation Game”

by Albert Still

Scene from The Imitation Game 2014 film

I recently watched the film “The Imitation Game” and it’s brilliant, despite Keira Knightley’s failed attempt at a posh British accent. Everyone should watch it, especially if you’re interested in technology. It’s about the life of a genius called Alan Turing, also known as the father of modern computer science. That means the very tech you’re using to read this blog works on the foundations of his discoveries. His major breakthrough took place at Bletchley Park during WW2, where he made a machine that decrypted the secret messages the Germans sent using the Enigma machine. Historians believe his work shortened the war by two years and saved thousands of lives. To me, stories don’t get much more interesting than this: how we Brits used maths and computer science to help the Allies win WW2 and save Europe from fascism.

After watching the film I became very interested in the Enigma machine and I’m going to use Ruby to explain to you how it works. If you don’t know the Ruby programming language don’t worry, you’ll be surprised how much syntax you understand. 

Journey of a single letter

Firstly, it’s important to realise the Enigma machine is just one big circuit. Each time a key is pressed, at least one rotor rotates, which changes the circuitry of the machine and thus lights a different letter-faced light bulb. The rotors are the only moving parts within the circuit, and each has 26 steps, one for each letter of the alphabet. When a key is pressed the right rotor rotates by a step; when this completes a full revolution the middle rotor rotates by a step, and when the middle rotor completes a full revolution the left rotor rotates a step. Here is a video that best explains the circuitry:

I found the best way to understand the machine was to visualise the path of an electron travelling through the circuit after a key is pressed. First of all we get to the plugboard, where we literally swap 10 pairs of letters (20 of the 26). For example D becomes E, and therefore E becomes D. In Ruby we can model this as Plugboard = Hash[*('A'..'Z').to_a.shuffle.first(20)] and then Plugboard.merge!(Plugboard.invert) to make it reflective. As for the 6 letters that are untouched and simply map to themselves, we can make the value default to the entered key with Plugboard.default_proc = proc { |hash, key| key }.
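Put together, the plugboard looks something like this (a sketch rather than the exact code from my gist):

```ruby
# A plugboard: 10 random pairs are swapped, the remaining 6 letters pass through.
Plugboard = Hash[*('A'..'Z').to_a.shuffle.first(20)] # 10 pairs, e.g. "D" => "E"
Plugboard.merge!(Plugboard.invert)                   # make it reflective: "E" => "D" too
Plugboard.default_proc = proc { |hash, key| key }    # unplugged letters map to themselves

# Swapping twice always gets you back where you started:
('A'..'Z').all? { |letter| Plugboard[Plugboard[letter]] == letter } # => true
```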

Next we get to the rotors. Think of each rotor as a substitution cypher: each of its two sides has 26 metal contacts, and a mess of 26 wires inside randomly pairs the contacts on one side with those on the other. We can model a rotor as Hash[('A'..'Z').zip(('A'..'Z').to_a.shuffle)]; for the return journey we can call invert on the hash before passing it a key.
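For example, a single (non-rotating) rotor and its return journey look like this:

```ruby
# A rotor: a random wiring between the 26 contacts on each side.
rotor = Hash[('A'..'Z').zip(('A'..'Z').to_a.shuffle)]

signal_out  = rotor['A']               # forward pass, towards the reflector
signal_back = rotor.invert[signal_out] # return pass, using the inverted wiring
signal_back # => "A"
```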

The electron travels through all the rotors and then reaches the reflector. The reflector’s job is simply to turn the electron around and pass it back through the rotors. We can model it with Reflector = Hash[*('A'..'Z').to_a.shuffle] and Reflector.merge!(Reflector.invert).

It will now go back through the rotors, through the plugboard again, and finally hit the letter-faced lightbulb, which indicates to the operator what the letter has been encrypted or decrypted to, depending on whether the operator is receiving or sending messages.
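Putting the plugboard, rotors and reflector together, the whole journey of a letter becomes the following sketch (rotor stepping is omitted for brevity, and the names here are illustrative rather than my gist verbatim):

```ruby
# The plugboard: 10 swapped pairs, 6 letters untouched.
Plugboard = Hash[*('A'..'Z').to_a.shuffle.first(20)]
Plugboard.merge!(Plugboard.invert)
Plugboard.default_proc = proc { |hash, key| key }

# Three rotors (stepping omitted) and a reflector of 13 pairs.
Rotors    = Array.new(3) { Hash[('A'..'Z').zip(('A'..'Z').to_a.shuffle)] }
Reflector = Hash[*('A'..'Z').to_a.shuffle]
Reflector.merge!(Reflector.invert)

def encrypt(letter)
  c = Plugboard[letter]
  Rotors.each { |rotor| c = rotor[c] }                # right to left through the rotors
  c = Reflector[c]                                    # turn the signal around
  Rotors.reverse_each { |rotor| c = rotor.invert[c] } # and back again, left to right
  Plugboard[c]
end

# The circuit is symmetric, so the same setting both encrypts and decrypts:
encrypt(encrypt('A')) # => "A"
# And thanks to the reflector, no letter ever encrypts to itself:
encrypt('A') == 'A'   # => false
```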

I have written a gist of an Enigma machine using the above code, please feel free to comment below if any of it needs explanation.

Why was it secure?

The Enigma’s security comes from the large number of configurations the machine can be in. Every month, German operators were issued a code book that told them what settings to put their plugboard and rotors in for each day of that month.

  • The Enigma machine came with 5 rotors to choose from, and the machine used 3. You have 5 to choose from, then 4, then 3. Order matters here, so this yields 5 x 4 x 3 = 60 combinations.
  • Then each rotor has 26 starting positions. This yields 26 x 26 x 26 = 17,576 combinations.
  • The plugboard maths is more complicated; Andrew Hodges explains it well. The number of ways of choosing m pairs out of n objects is n! / ((n-2m)! m! 2^m). Therefore the Enigma machine has 26! / ((26 - 2 x 10)! 10! 2^10) = 150,738,274,937,250 combinations.

Shot of the Enigma machine in The Imitation Game 2014

Multiply 60 x 17,576 x 150,738,274,937,250 and you have 158,962,555,217,826,360,000 combinations! And that is what the codebreakers at Bletchley Park were faced with. They had to figure out which one of the combinations the Germans’ Enigma machines were using for that day so we could understand what the Germans were saying to each other. Furthermore, the next day it would change, making any promising work from the day before useless – a moving target!
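The whole calculation fits in a few lines of Ruby (the helper names here are mine):

```ruby
def factorial(n)
  (1..n).reduce(1, :*)
end

# Ways of choosing m plug pairs out of n letters: n! / ((n-2m)! m! 2^m)
def plug_combinations(pairs, letters = 26)
  factorial(letters) / (factorial(letters - 2 * pairs) * factorial(pairs) * 2**pairs)
end

rotor_orders    = 5 * 4 * 3             # => 60
rotor_positions = 26**3                 # => 17576
plugboard       = plug_combinations(10) # => 150738274937250

rotor_orders * rotor_positions * plugboard # => 158962555217826360000
```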

This is where Turing’s mastermind came in. He and others built the bombe, a machine that sped up the process of working out what settings the Enigma machines were in for that day. First they guessed a small section of plaintext within the ciphertext, then the bombe was used to find, through brute force, the settings that would yield this plaintext. If I get the time I would love to study the bombe and try to write it in Ruby.

Did UX lose Germans the war?

The reflector was practical because it meant the Enigma machine operators used the same setting to send and receive messages. This was great but it also meant that a letter could never map to itself. This was a major weakness in the system, because it allowed the code breakers to eliminate possible solutions when the ciphertext and the putative piece of plaintext had the same letter in the same place. And it was this weakness Turing exploited with his bombe machine.

Why did they only use 10 pairs for the plugboard?

This puzzled me for a while: if you play around with the n! / ((n-2m)! m! 2^m) formula you will see that 11 plug pairs yields the most combinations, with the number decreasing after 11. Here is the best explanation of why they used 10:

“British cryptanalysts believed that the Germans chose 10 plugs because this resulted in the maximum number of plug-board permutations. While the actual maximum occurs for 11 plugs, the discrepancy could well be a mistake in slide-rule computations on the part of the Germans.” — Deavours and Kruh (1985), “Machine Cryptography and Modern Cryptanalysis”

So, because they didn’t have computers back then, maths with large numbers was difficult and had to be done by hand, and historians simply believe it was a slide-rule mistake.
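We can check the claim ourselves by evaluating the formula for every possible number of pairs:

```ruby
def factorial(n)
  (1..n).reduce(1, :*)
end

# n! / ((n-2m)! m! 2^m) for n = 26 letters and m plug pairs
def plug_combinations(pairs)
  factorial(26) / (factorial(26 - 2 * pairs) * factorial(pairs) * 2**pairs)
end

# 13 pairs is the most that 26 letters allow.
best = (0..13).max_by { |pairs| plug_combinations(pairs) }
best # => 11, not the 10 pairs the Germans actually used
```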

Red Badger are hiring. Come join us