25
Mar
2015

There should be no disabled users on the web

by James Hopkins

A few months ago, I came across a thought-provoking tweet suggesting that businesses tend to prioritise IE8 support over accessibility, even though the latter comprises a larger demographic. It's a statement I've been pondering for a while now, and it's made me completely re-evaluate my own stance towards accessibility; hopefully, after reading this blog post, it will change yours too.

To begin, we need to define the term 'web accessibility'. It's a term that's thrown around in the web community (along with the infamous #a11y tag) a fair bit. Wikipedia's description is:

…the inclusive practice of removing barriers that prevent interaction with, or access, by people with disabilities.

However, I think this is incorrect, since it implies a level of retrospect; rather, there fundamentally shouldn’t be any barriers in the first place that need removing. With this in mind, it can be shortened to:

The inclusive practice of ensuring everyone can use websites

And herein, I believe, lies the problem.

Why do we need to make the web accessible?

A better way of phrasing the question is: why would you intentionally restrict certain users from being able to use your site? If, for some reason, you need further persuasion, then:

The traditional approach

Put simply, the same priority given to ensuring accessibility across browsers (using cursor/touch navigation) isn't afforded to assistive technologies. These can include:

  • screen readers, which read what the browser has rendered

  • braille terminals

  • screen magnification/page zooming

  • speech recognition

  • keyboard overlays

I strongly believe this phenomenon is endemic to the web community as a whole, although it's refreshing to see the rhetoric changing recently. Ask yourself this: how many teams have you worked in where testing the tabbing interface and/or using a screen reader is done during the development/QA phase?

Why are visual browsers given priority?

One theory I have as to why most of you will have answered zero to the above question relates to the makeup of the stakeholders and developers with whom you worked on those projects. It's safe to assume that, in the majority of cases, they were unlikely to suffer from the conditions that the aforementioned devices are designed to help with. A conclusion could therefore be drawn that they're able to relate more to users of visual browsers.

The inefficient workflow

Traditionally, accessibility conformance has been treated as a separate task that's carried out either when the client gets panicky about legal implications and decides to carry out an 'audit', or when the development team flags that there are issues with the site post-launch. Trying to work retrospectively like this simply doesn't work, since:

  • an audit is a static document; assuming the project’s under active development, more often than not, the audit becomes out-of-date as soon as it hits your inbox

  • trying to mitigate the risk of code conflicts during component ‘patching’, especially when working in an Agile environment, is incredibly tricky

  • it can be a touchy subject getting buy-in from your client to remedy the issues identified, when you’re implying that you’ve delivered a product that not everyone can use

A better idea

For the purposes of this article, accessible UX and design is assumed.

Accept that a proportion of your users have a disability

This fact is easily demonstrated through available census statistics. If you're attempting to detect assistive devices (for whatever reason), in my opinion you're doing it wrong, although there is a current proposal to enable this feature. Disability is an umbrella term describing a wide variety of impairments; assistive devices are mechanisms designed to empower only a select disability demographic, so using these metrics to represent the demographic as a whole isn't appropriate. How can you accurately detect, for example, a user with a motor impairment who relies on an intuitive keyboard interface?

Changes to development workflow

  1. Test assistive workflows and devices. Obtain buy-in from your project manager so this is done during the development phase, and incorporate new exit criteria into user stories so that no story can progress past development without the relevant conformance.
  2. Implement WAI-ARIA patterns. The WAI-ARIA guidelines provide rich semantics for users who can't discern context through visual language. Technically speaking, it's a declarative microformat that describes component patterns through attribute labels on DOM elements, denoting application composition and mutable state. It should be considered a mandatory requirement for screen reader users.
  3. Ensure intuitive keyboard routing. This guideline is particularly pertinent for web applications that feature rich visual interactivity (dialogs/modals, submenus that can obscure content when visible, etc), where a natural navigation and reading order needs to be preserved.
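To make points 2 and 3 concrete, here is a minimal sketch of a disclosure submenu (an illustration only, not a complete WAI-ARIA pattern; the element names and ids are invented for the example):

```html
<!-- The button declares which component it controls and its mutable
     state, so a screen reader can announce e.g. "Products, button,
     collapsed" without any visual context. -->
<nav>
  <button aria-expanded="false" aria-controls="products-menu">Products</button>
  <ul id="products-menu" hidden>
    <li><a href="/products">All products</a></li>
  </ul>
</nav>
<script>
  // Keyboard routing comes for free with a real <button>: it is
  // focusable and activates on Enter/Space. The handler keeps the
  // declared ARIA state in sync with the visual state.
  var button = document.querySelector('[aria-controls="products-menu"]');
  var menu = document.getElementById('products-menu');
  button.addEventListener('click', function () {
    var expanded = button.getAttribute('aria-expanded') === 'true';
    button.setAttribute('aria-expanded', String(!expanded));
    menu.hidden = expanded;
  });
</script>
```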

Think differently from now on

As part of the delivery team for a project, you have a responsibility to deliver a quality product. Next time you’re working on a user story, spare a thought for people who can’t use a visual browser, and exercise your due diligence a little more to increase equality.

Here at Red Badger, we’re currently transitioning to this new workflow, which will increase our coverage for assistive devices even further.

13
Mar
2015

Writing better CSS with meaningful class selectors

by Dominik Piatek

With the rise of frameworks like React, Angular, Ember and the Web Components spec, the web seems to be moving towards a more component-based approach to building apps. The abstractions for these are getting better, except for one part: CSS. In many ways, we write CSS just as we wrote it years ago. Even if we think about our UI as being more modular, the CSS is still spaghetti. Last year I was tasked with designing a CSS system for a components library, and after much trial and error I found a quite satisfying pattern that allowed the codebase to grow in a quick and maintainable way.

Let’s start by looking at some standard spaghetti. Say we “think” about our CSS classes as “components”.

.menu {}
.overlay {}
.button {}

See the Pen QwZYwP by dpiatek (@dpiatek) on CodePen.

And we want a “modular” CSS system. So we start writing rules like this:

.menu .overlay > .button {}
#home .sidebar.managers-special {}

See the Pen PwyVqB by dpiatek (@dpiatek) on CodePen.

Stop.

Last time I checked, "modular" did not mean cobbling everything together with a jackhammer. All our elements are now tied together, and the selector makes it ambiguous where each belongs. As a side effect, it will also force our HTML into a specific structure:

See the Pen dPgaYz by dpiatek (@dpiatek) on CodePen.

It’s not uncommon to mix “components” as well:

 

See the Pen PwyVPB by dpiatek (@dpiatek) on CodePen.

The "flexibility" of the language is what makes it so hard to reason about; could we avoid some of its features to make it better?

From my experience, the answer is yes.

Classes are the most powerful selectors because we have the most control over what they actually mean. But if that meaning is fuzzy and not respected, we lose its benefit. We need to make sure that the way we write our selectors preserves their meaning rather than muddying it.

When working on a pattern library for econsultancy.com I experimented a lot with approaches to writing such classes, and ended up with 3 rules that help the most:

1. A CSS class always represents a component. Any other abstractions that a CSS class may represent needs a prefix, suffix or what have you – they need to be easily distinguishable and the convention needs to be well established.

Example:

// In this project, components are:
.menu {}
.menu-title {}

// Classes that modify a component start with a `-`
.-left-aligned {}
.-in-sidebar {}

// Helpers are started with `h-`
.h-margin-bottom {}

// Theming classes are started with `t-`
.t-brand-color {}

See the Pen NPOoGo by dpiatek (@dpiatek) on CodePen.

2. All CSS selectors that contain a CSS component class must start with it, and they must not modify any other CSS component class or its children, directly (by referencing it – so a selector can only ever include a single CSS component class) or indirectly (by element selectors, for example).

Example:

// Ok selectors. Element selectors here would be defaults for given tags.
img {}
a {}
header {} 
.main-logo {}
.menu {}
.menu-item a {}
.menu-item img ~ a {}
.menu-item.-last {}

// Not ok
header a {}  
header .menu {}  
.menu .menu-item {}
img ~ a {}  
.menu-item img ~ a {}
.menu li:nth-child(2) a {}

See the Pen qEJgbm by dpiatek (@dpiatek) on CodePen.

3. An HTML tag can only have a single CSS component class.

Example:
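
A minimal sketch (reusing the naming conventions from rule 1; the class names are invented for illustration):

```html
<!-- Ok: one component class per tag; modifier and helper classes are extra -->
<ul class="menu -in-sidebar">
  <li class="menu-item h-margin-bottom">…</li>
</ul>

<!-- Not ok: `menu-item` and `button` are both component classes -->
<li class="menu-item button">…</li>
```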

As with any subset, in some cases this will make writing some of your CSS harder and more verbose. It's probably overkill for smallish projects, where what you need is a stylesheet and not a component library. It's probably not well suited to drawing iPhones with box-shadows either.

Things can be done with preprocessors to simplify this approach. For example, on the aforementioned project we used SCSS, and all classes were output with mixins. This made for some crazy-looking CSS, but in the end all developers on the project found it much easier to extend and reason about our CSS.

 

This blog post first appeared on dominikpiatek.com

Red Badger are hiring. Come join us

10
Mar
2015

Ghostlab – This won’t hurt a bit!

by Monika Ferencz

I always thought cross-browser testing on multiple devices was a lot like going to the dentist: it is unpleasant even if you are not deathly afraid of the process (which I am), but an absolute necessity to do from time to time, otherwise the undetected problems could escalate and eventually get painful… But maybe now I will be proven wrong about mobile testing, as Ghostlab seems to be the solution that eases the dread and makes the process smoother.

Ghostlab devices

If you are interested in the topic of mobile testing, you probably know quite a few tools and ways of testing, like Sauce Labs and Browserstack, which are useful and easy to learn. At a basic level, Ghostlab is also a simple cross-browser testing app: you can fire up your own server, connect devices to it, then use the app itself to debug your site and make the necessary changes as you go. But Ghostlab manages to be even more than the sum of all similar apps, by offering built-in solutions to the most prevalent problems that surface when doing mobile testing.

No setup whatsoever

Most tools require quite a bit of setup, save for those that work in your browser (though those lack most of the functionality offered by locally run apps in return). Ghostlab only needs to be installed on your machine once and to be on the same wifi network as the devices you will use it with. Once the server is started for your own (local or external) project or website, Ghostlab will provide you with the IP address through which your browsers and devices can connect to it. And that is the extent of it: you don't have to struggle with different apps whether you test on iOS, Android, Windows or Firefox OS devices. As any JS-enabled client can connect instantly, compatibility ceases to be an issue; instead of installing something onto all your devices, you can just type in Ghostlab's address and have all the synchronised magic happen at once!

Synchronised clicks, scrolls and more

Easily the most unique and brilliant feature of Ghostlab is how every interaction is synchronised over all the different browsers. Once you connect a device, you can open pages, click, scroll and even fill out forms in one of the connected browsers (even on your desktop), and see the exact same thing happening across all the other browsers which connect to your Ghostlab server. This makes testing on multiple devices far easier than individually going through the motions on them all. Not only does this awesome synchronisation save a huge amount of time, it also looks surprisingly cool when a bunch of different sized devices start clicking and scrolling through websites in unison!

 

Remote debugging for every browser

Remote code inspection is traditionally a pain when it comes to mobile testing – especially when you consider older devices or most versions of Internet Explorer. Even for browsers with built-in debug capabilities, using them on mobile can be uncomfortable. The solution Ghostlab offers for this is a thing of beauty: you can simply click the "inspect" button on the tab of the browser/device that is connected, which brings up a devtools window where you can inspect, select and edit DOM content, CSS and scripts. This is made possible by weinre, a remote web inspector bundled with Ghostlab, which lets you play around with and fix your code instantly.

So, do you need it?

Apart from the features described above, all of which I find infinitely useful, Ghostlab has a few more nice touches, described in detail on their website. The ability to tweak exactly how content is served through Ghostlab, the idea of resizable, preset workspaces, and the straightforward interface design are all bonuses to an array of must-have functionality.

But the best things in life are usually either very expensive, or they make you fat. While Ghostlab is low on calories, it probably costs more than a freelancer or small company would pay without thinking twice. You can try it free for the first week to see if it does what you need it for, and if you buy multiple licenses it gets a bit cheaper, but when it comes to testing, quality is admittedly expensive…

At the end of the day, Ghostlab might not be the ultimate tool for every need (I don’t believe such a thing exists, anyway) but it made my mobile testing experience considerably less scary and painful. Now if only I found a similar solution for going to the dentist…

 

6
Mar
2015

Public Speaking for the Terrified

by Sarah Knight


 

Red Badger were pleased to welcome the Ladies Who Code group to Badger HQ for their March meetup – a public speaking clinic aimed at novices (and the generally petrified). It was presented by Trisha Gee who was aided by her self-proclaimed ‘controversial’ friend Mazz Mosley. Trisha and Mazz both have very different styles of presenting and preparation (hence the ‘controversy’), so it was great to get both their perspectives on things and really drive home the main message of the evening: Be Yourself.

The evening started as every good public speaking workshop should – by subjecting the participants to abject terror, and forcing them to stand up in front of everyone and say a few words. Most people were pretty terrified at the prospect, (my heart rate went through the roof), but all gave it a good go and made it through their allocated 20 seconds. After that the audience was able to relax, and take in some tips that will help combat those feelings in future.

Be yourself

The biggest thing I took away from the workshop is to just be yourself. The best speakers are the ones who are comfortable on stage. The audience doesn’t want to be worried for you – they need to feel they’re in good hands. If you’re relaxed, they will be too.

So stick to your strengths, and what you’re comfortable with. Wear clothes that you’re happy in, and stick to a style of delivery that suits you. If you’re naturally funny, keep things light-hearted and get people laughing. If you find it difficult to raise a laugh, don’t even try. Stick to a more matter-of-fact delivery. Both approaches are fine – don’t try to be something you’re not.

Don’t worry about the ‘rules’, or advice you’ve been given. You don’t have to eliminate all your natural mannerisms and filler words. If you try to change too many things it will feel and sound unnatural. You’ll be stressing about remembering what you should or shouldn’t be doing, and are more likely to mess up.

FAQs

Trisha went through a number of questions (if you get the chance to attend one of her workshops – do!), but I've just picked out a few:

What should I talk about?

People are often under the false impression that they need to talk about something super-technical. Actually, people will often want more of an introduction to something than a really in-depth analysis.

Some potential topics are something …
… you struggle(d) with
… you’re new to
… only you know
… people are always asking you
… that people find hard
… that interests you

Talking about your own personal experiences is often a good way to get started in public speaking – nobody can ask you difficult questions or contradict you!

What do I do if my mind goes blank?

Don’t be afraid of silence. You don’t have to talk constantly and fill every single second. A pause will often actually generate interest, as people wait to hear what you’re going to say next.

Don’t panic – remember that the audience have no idea what’s happening in your head. Take a drink of water to give yourself a bit more thinking time, maybe check your notes. Find another way to say the same thing, or just move on. People won’t even realise if you skip some of what you’d planned to talk about.

How do I deal with difficult questions?

Remember that the audience is on your side. They're (usually) not out to get you or trip you up. They either have a genuine question, or want validation.

Set expectations at the beginning – are you happy to take questions during the talk as they occur to people? Or would you rather they save them until the end? If you don’t want to take questions, that’s ok too – maybe provide an alternative way of contacting you, either in person later, or online.

Always repeat the question back to them. This means that everyone hears what the question is and allows you to check you’ve understood it.

Validate the person who asked the question – “That’s a great question”, “That’s a really good point”, ”Does that answer your question?”

Answering questions is nowhere near as scary as you think it’s going to be. If you don’t know the answer, that’s fine. Just say that – and maybe refer them to someone else who will know.

Don’t wait to be perfect

Don’t overthink things – you just need to get out there and give it a go.

The standard for public speaking isn’t as high as you think it is. The audience is rooting for you – they want to hear you talk about what you’ve come to say, and go away having learnt a few things that they didn’t know before. That’s pretty much all they’re hoping for.

There are a lot of benefits to developing this skill. Not only getting to share things that you think are important, and potentially getting paid to travel and attend conferences. You’ll also be developing your communication skills, and the ability to concisely and efficiently articulate your thoughts and decisions. This is an essential skill for developers who need to justify technical choices to colleagues and clients.

There’s a huge amount of advice about public speaking. As with so much in life – just do what works for you, and don’t get caught up worrying about the ‘correct’ way of doing things.

If it helps – do it.
If it doesn’t – don’t.

At the end of the day, it’s just standing up and saying some things.
What’s the worst that could happen?

Red Badger are hiring. Come join us

4
Mar
2015

React Native – The Killer Feature that Nobody Talks About

by Robbie McCorkell

React Native logo
 
At the end of January I was lucky enough to go to React conf at Facebook HQ in Menlo Park. This was my first tech conference, and it was a great and inspiring experience for me. The talks were excellent and I recommend everybody check out the videos, but the talks that really stole the show were the ones on React Native.
 
React Native allows developers to build real native applications using JavaScript and React – not the web-wrapper applications we too commonly see. React simply takes charge of the view controllers and programmatically generates native views using JavaScript. This means you can have all the speed and power of a native application, with the ease of development that comes with React.

Playing with React Native

This is a really exciting development for the native app world, and gives a new perspective on what native app development could be. I’d previously tried to learn iOS development a couple of times, first with Objective-C, and later with the introduction of Swift. Whilst I think Swift is a massive improvement in iOS development, I still ended up getting bored and life got in the way of learning this new system. So initially, the idea of using my current skill set in React web development to build truly native apps was extremely enticing (and still is).
 
I generally believe that as a developer you should pull your finger out and learn the language that suits the job, but in this instance React Native seemed to offer more than just an easy way into iOS development. It offered a simple and fast way to build interfaces and to manage application logic, and the live reloading of an application in the simulator without recompiling blew my mind.

Luckily for me, conference attendees were given access to a private repo with the React Native source code inside, so I began playing as soon as I got back to my hotel room. Within 5 minutes I was modifying one of the provided examples with ease, without any extra iOS development knowledge, and I was hooked.

An addendum

Since then I’ve been leveraging my early access to talk publicly about React Native to some great reception. It’s been fascinating discussing this with people in the community because I hear very little scepticism and a lot of excitement from web developers and native developers alike (at least those that come to a React meetup).

However the more I talked about it the more I realised the message I was conveying in my presentations was not quite right. One of the major themes I focused on was the fact javascript developers like myself can now easily get into the native world, and that companies only need to hire for one skill set to build and maintain their entire suite of applications. 

This is still a hugely important advantage, but it isn’t the major benefit and doesn’t highlight what React Native offers over competing frameworks. It also doesn’t perfectly align with my view that looking for a tool or framework just to reduce the need for learning another language is lazy thinking. There’s more to React Native than this.

React Native’s advantage

React Native’s biggest feature is React.

This may seem a bit obvious (the clue is in the title), but let me explain. When I first looked at React, like most people I thought it was insane. It takes such a different approach to web development that it provokes an immediate repulsive reaction in many people. But of course the more I used it, the more I realised I could never go back to building web applications (or any front-end app, for that matter) any other way. The patterns React provides are an extremely powerful way of building applications.

If you haven’t used React much it might help to know that React lets you declaratively define what your view should look like given some input data. A react component is passed some properties that it requires to render a view, and as a programmer you simply define the structure of the view and where that data should sit. In doing this you’ve already done half of the work in building your application, because if a component or any of its parents changes their data (in the form of state), React will simply re-render the affected components given the new data.

No specific data binding. No event management. No micro managing the view. Just change the data and watch React recalculate what your view should look like. React will then use its diffing algorithm to calculate the minimum possible DOM manipulations it can do to achieve the desired result.

The second half of your application structure is of course user interaction. Patterns and tools like Facebook’s Flux and Relay help with this, but essentially these are just ways in which you can modify the data in your application in a neat and scalable manner. The application still simply recalculates the structure of the view once the data has changed.
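To make the idea concrete, here is a framework-free toy (not React's actual code, and the names are invented for the example) showing the view as a pure function of the data, recomputed whenever the data changes:

```javascript
// A toy illustration, not React itself: the view is a pure function of
// the data, so "updating the UI" is just calling it again with new data.
function renderCounter(state) {
  return `<div><span>Count: ${state.count}</span></div>`;
}

let state = { count: 0 };
let view = renderCounter(state);

// User interaction only changes the data; the view is then recomputed.
// A real library would diff the old and new output at this point and
// apply the minimum possible DOM manipulations.
function increment() {
  state = { ...state, count: state.count + 1 };
  view = renderCounter(state);
}

increment();
console.log(view); // <div><span>Count: 1</span></div>
```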

React really shines when you start to scale an application, because the complexity of your application doesn't have to increase too much. This is of course quite hard to demonstrate in a blog post, but I can give you an example of a simple little app written in Angular and React that I adapted from Pete Hunt, one of React's creators.

Angular:

See the Pen RNBVZz by Robbie (@robbiemccorkell) on CodePen.

React:

See the Pen VYBbyN by Robbie (@robbiemccorkell) on CodePen.

You can see in the code above that the React implementation is pretty short, even with the markup defined in JavaScript. This is mostly because of the lack of linking functions connecting the code to the markup; React just figures this out by itself. With all of this re-rendering of the markup every time something changes, you would think React would be quite slow, but it's not. For a demonstration of how React performs in comparison to other frameworks, check out Ryan Florence's presentation from React conf.

This is an extremely simple and powerful way to build front-end applications. It combines the best of both worlds in a simple and easy interface for the programmer, and a performant experience for the user. It’s for this reason more than any other that React Native is an exciting new tool in native app development. It changes the way a programmer thinks in building the front-end of their app.

A sample React Native app

Tappy Button screenshot

In the talk I mentioned above I demonstrated some of my ground-breaking research in video game technology with a game I created that I call Tappy Button. The objective of this game is to tap the button in the middle to increase your score.

The sample code below defines the full application seen in the screenshot, including view structure, styling and application logic. What you should notice here is that the code is extremely unremarkable. It's the same kind of React we know and love, simply applied to a native app context, and that is what's so remarkable about it. The only differences from traditional React web development are the element names [1] and the inline styling [2].
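A sketch of what such a component looks like — a reconstruction assuming the early-2015 React Native API ('View', 'Text', 'TouchableHighlight' and StyleSheet, per the notes below), not the original sample:

```javascript
// A sketch, not the original sample: the score lives in component state
// and the button's onPress updates it, so React re-renders the Text view.
var React = require('react-native');
var { AppRegistry, StyleSheet, Text, TouchableHighlight, View } = React;

var TappyButton = React.createClass({
  getInitialState: function () {
    return { score: 0 };
  },
  render: function () {
    return (
      <View style={styles.container}>
        <Text style={styles.score}>Score: {this.state.score}</Text>
        <TouchableHighlight
          style={styles.button}
          onPress={() => this.setState({ score: this.state.score + 1 })}>
          <Text>Tap</Text>
        </TouchableHighlight>
      </View>
    );
  }
});

// Styles are plain JavaScript objects, applied inline (see note 2).
var styles = StyleSheet.create({
  container: { flex: 1, justifyContent: 'center', alignItems: 'center' },
  score: { fontSize: 20, marginBottom: 10 },
  button: { padding: 12, backgroundColor: '#eeeeee' }
});

AppRegistry.registerComponent('TappyButton', function () { return TappyButton; });
```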

Building the same application in Xcode using Swift requires a similarly small amount of code, but it does require the developer to perform a process of clicking and dragging to create elements, constraints and method stubs in the code which quickly becomes tedious. As the application becomes more complex, the developer must manage collections of view controllers that manually update the view when required. But in React the view is declared once and all the developer must do is make sure the data inside each component changes when necessary. 

Other frameworks have crossed the JS-native divide in a similar way to React. NativeScript neatly gives developers direct access to native APIs in JavaScript [3], and the creators of Titanium have been very vocal about the fact that they have provided similar JavaScript bridging and CSS styling for native apps for years. But all of the articles I've read that compare these frameworks miss the biggest differentiator between React Native and the others, and that is React itself.

When discussing React Native, the ability to write native apps in javascript only really deserves a cursory mention in the discussion. What's really important here is that we can build native applications using the power of React.

React as the view engine

The important things in the future of React won't be more features, add-ons or utility libraries. Yes some improvements, optimisations and structural changes will be built in down the road, but I don't think Facebook want to overcomplicate something that works so brilliantly and simply already. 

We already have React being applied to different areas with React Native, and at React conf Netflix announced they were taking a similar approach, even applying React to embedded devices like TVs using their own custom built rendering engine. We've also heard that Flipboard have swapped out the DOM in favour of canvas elements in their latest web app to impressive results.

In the future we are going to see React applied to many different areas of front-end development by simply swapping the DOM and browser out for another view structure and rendering engine, whether that be mobile and desktop native elements, canvas, embedded devices, or who knows what. This will allow developers to use the power of React and its development patterns in any environment.  

Writing native applications for our phones is just the beginning, and it’s a direction I can stand behind. The biggest benefit of react native isn’t javascript. It’s React.

 

  1. The two base elements we can work with are 'View' and 'Text', which act as block and inline elements respectively. Coming from the world of the DOM, we can simply translate 'div' to 'View' and 'span' to 'Text' and we're basically good to go. Any other elements, like 'TouchableHighlight', are utility components provided by React Native's component library.

  2. Facebook have provided their own CSS-based styling interpretation, including their own clever implementation of Flexbox. We now write our styles as JavaScript objects and apply them inline to our view elements. Currently styles don't seem to cascade, which is both an advantage and a disadvantage depending on your use case. But the interesting thing about applying styles this way is that you have the full power of JavaScript at your fingertips. With a bit of forethought you could conceivably come up with clever systems to share global styles and apply them to elements automatically in your application.

  3. I think NativeScript could be an interesting way to build react native bridges to native APIs, and we'll be experimenting with this in the future. However I’m still sceptical as to whether the overhead is worth it, and maybe if we want to build native bridges we should just learn the bloody language!

Red Badger are hiring. Come join us

23
Feb
2015

Understanding the Enigma machine with 30 lines of Ruby. Star of the 2014 film “The Imitation Game”

by Albert Still

Scene from The Imitation Game 2014 film

I recently watched the film "The Imitation Game" and it's brilliant, despite Keira Knightley's failed attempt at a posh British accent. Everyone should watch it, especially if you're interested in technology. It's about the life of a genius called Alan Turing, also known as the father of modern computer science; the very tech you're using to read this blog works on the foundations of his discoveries. His major breakthrough took place at Bletchley Park during WW2, where he made a machine that decrypted the secret German messages sent using the Enigma machine. Historians believe his work shortened the war by two years and saved thousands of lives. To me, stories don't get much more interesting than this: how we Brits used maths and computer science to help the Allies win WW2 and save Europe from fascism.

After watching the film I became very interested in the Enigma machine and I’m going to use Ruby to explain to you how it works. If you don’t know the Ruby programming language don’t worry, you’ll be surprised how much syntax you understand. 

Journey of a single letter

Firstly, it's important to realise the Enigma machine is just one big circuit. Each time a key is pressed, at least one rotor rotates, which changes the circuitry of the machine and thus lights a different letter's bulb. The rotors are the only moving parts within the circuit, and each has 26 steps, one for each letter of the alphabet. When a key is pressed the right rotor rotates by a step; when that rotor completes a full revolution the middle rotor rotates by a step, and when the middle rotor completes a full revolution the left rotor rotates a step. Here is a video that best explains the circuitry:

I found the best way to understand the machine was to visualise the path of an electron travelling through the circuit after a key is pressed. First of all it reaches the plugboard, where we literally swap 10 of the 26 letters: for example, D becomes E and therefore E becomes D. In Ruby we can model this as Plugboard = Hash[*('A'..'Z').to_a.shuffle.first(20)], and then Plugboard.merge!(Plugboard.invert) to make it reflective. As for the 6 letters that are untouched and simply map to themselves, we can make the value default to the entered key with Plugboard.default_proc = proc { |hash, key| key }.
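Put together, the plugboard sketch looks like this (the pairs are random, not a historical daily setting):

```ruby
# Plugboard: swap 10 random pairs; the other 6 letters map to themselves.
Plugboard = Hash[*('A'..'Z').to_a.shuffle.first(20)]
Plugboard.merge!(Plugboard.invert)                  # make it reflective
Plugboard.default_proc = proc { |_hash, key| key }  # unplugged letters

# A swap is its own inverse, so going through the board twice is a no-op:
puts ('A'..'Z').all? { |l| Plugboard[Plugboard[l]] == l }  # => true
```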

Next we get to the rotors. Think of each rotor as a substitution cypher: each of its two sides has 26 metal contacts, and inside is a mess of 26 wires that randomly pair the contacts on one side with those on the other. We can model a rotor as Hash[('A'..'Z').zip(('A'..'Z').to_a.shuffle)]; for the return journey, we can call invert on the hash before passing it a key.
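As a runnable fragment (a random wiring, rather than one of the historical rotor wirings):

```ruby
# One rotor: a random substitution; invert gives the return journey.
rotor = Hash[('A'..'Z').zip(('A'..'Z').to_a.shuffle)]

# Passing a letter through the rotor and back again recovers it:
puts ('A'..'Z').all? { |l| rotor.invert[rotor[l]] == l }  # => true
```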

The electron travels through all the rotors and then reaches the reflector. The reflector’s job is simply to turn the electron around and pass it back through the rotors. We can model it with Reflector = Hash[*('A'..'Z').to_a.shuffle] and Reflector.merge!(Reflector.invert).

It then goes back through the rotors, through the plugboard again, and finally lights the letter-faced bulb that tells the operator what the letter has been encrypted or decrypted to, depending on whether the operator is receiving or sending messages.

I have written a gist of an Enigma machine using the above code, please feel free to comment below if any of it needs explanation.
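For reference, here is how the pieces above assemble into a minimal sketch, with random wirings and static rotors (the stepping is left out for brevity; the gist has the full version):

```ruby
ALPHABET = ('A'..'Z').to_a

# Plugboard: 10 random pairs; unplugged letters map to themselves.
plugboard = Hash[*ALPHABET.shuffle.first(20)]
plugboard.merge!(plugboard.invert)
plugboard.default_proc = proc { |_hash, key| key }

# Three rotors (random substitutions) and a reflector (13 random pairs).
rotors    = Array.new(3) { Hash[ALPHABET.zip(ALPHABET.shuffle)] }
reflector = Hash[*ALPHABET.shuffle]
reflector.merge!(reflector.invert)

# Trace the electron: plugboard, rotors, reflector, rotors in reverse,
# plugboard again.
encrypt = lambda do |letter|
  c = plugboard[letter]
  rotors.each { |r| c = r[c] }
  c = reflector[c]
  rotors.reverse_each { |r| c = r.invert[c] }
  plugboard[c]
end

# The same setting both encrypts and decrypts:
puts encrypt.call(encrypt.call('A'))  # => "A"
```

Because the path out mirrors the path in, encrypting an already-encrypted letter gives you back the original, which is exactly why operators could use one setting for both sending and receiving.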

Why was it secure?

The Enigma’s security comes from the large number of configurations the machine can be in. Every month, German operators were issued a code book that told them which settings to put their plugboard and rotors in for each day of that month.

  • The Enigma machine came with 5 rotors to choose from, and the machine used 3. You have 5 to choose from, then 4, then 3; order matters, so this yields 5 × 4 × 3 = 60 combinations.
  • Then each rotor has 26 starting positions. This yields 26 × 26 × 26 = 17,576 combinations.
  • The plugboard maths is more complicated; Andrew Hodges explains it well. The number of ways of choosing m pairs out of n objects is n! / ((n - 2m)! m! 2^m). Therefore the Enigma machine has 26! / ((26 - 2×10)! 10! 2^10) = 150,738,274,937,250 plugboard combinations.

Shot of the Enigma machine in The Imitation Game (2014)

Multiply 60 × 17,576 × 150,738,274,937,250 and you have 158,962,555,217,826,360,000 combinations! That is what the code breakers were faced with. They had to figure out which one of those combinations the Germans’ Enigma machines were using that day, so the Allies could understand what the Germans were saying to each other. And the next day it would change, making any promising work from the day before useless: a moving target!
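The arithmetic is easy to check in Ruby, whose integers are arbitrary precision:

```ruby
def factorial(n)
  (1..n).reduce(1, :*)
end

# Ways of wiring m plug pairs on an n-letter plugboard:
# n! / ((n - 2m)! * m! * 2^m)
def plug_combinations(n, m)
  factorial(n) / (factorial(n - 2 * m) * factorial(m) * 2**m)
end

rotor_orders    = 5 * 4 * 3                  # 60
rotor_positions = 26**3                      # 17,576
plug_settings   = plug_combinations(26, 10)  # 150,738,274,937,250

puts rotor_orders * rotor_positions * plug_settings
# => 158962555217826360000
```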

This is where Turing’s genius came in. He and others built the bombe, a machine that sped up the process of working out which setting the Enigma machines were in that day. First they guessed a small section of plaintext within the ciphertext; then the bombe was used to find, by brute force, the setting that would yield this plaintext. If I get the time I would love to study the bombe and try to write it in Ruby.

Did UX lose the Germans the war?

The reflector was practical because it meant Enigma operators used the same setting to send and receive messages. This was great, but it also meant that a letter could never map to itself. This was a major weakness in the system, because it allowed the code breakers to eliminate possible solutions whenever the ciphertext and a putative piece of plaintext had the same letter in the same place. It was this weakness Turing exploited with his bombe.
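The property is easy to see in the Ruby model: a reflector built from 13 disjoint pairs can never map a letter to itself, and the symmetric path through the machine preserves that.

```ruby
# A reflector is 13 disjoint pairs, so Reflector[l] never equals l.
Reflector = Hash[*('A'..'Z').to_a.shuffle]
Reflector.merge!(Reflector.invert)

puts ('A'..'Z').none? { |l| Reflector[l] == l }  # => true
```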

Why did they only use 10 pairs for the plugboard?

This puzzled me for a while: if you play around with the n! / ((n - 2m)! m! 2^m) formula, you will see that 11 plug pairs yields the most combinations, with the number decreasing after 11. Here is the best explanation of why they used 10:

“British cryptanalysts believed that the Germans chose 10 plugs because this resulted in the maximum number of plug-board permutations. While the actual maximum occurs for 11 plugs, the discrepancy could well be a mistake in slide-rule computations on the part of the Germans.” — Deavours and Kruh (1985), “Machine Cryptography and Modern Cryptanalysis”

So, because they didn’t have computers back then, doing maths with large numbers was difficult and had to be done by hand, and it is simply believed to have been a slide-rule mistake.
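A quick check of the claim in Ruby, using the same plug-pair formula:

```ruby
def factorial(n)
  (1..n).reduce(1, :*)
end

# n! / ((n - 2m)! * m! * 2^m): ways of choosing m plug pairs from n letters.
def plug_combinations(n, m)
  factorial(n) / (factorial(n - 2 * m) * factorial(m) * 2**m)
end

# Try every possible number of pairs on a 26-letter plugboard:
best = (0..13).max_by { |m| plug_combinations(26, m) }
puts best  # => 11
```

Sure enough, the count peaks at 11 pairs and falls away on either side, so 10 was not quite the hardest choice available.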


21
Feb
2015

London React Meetup: February

by Imran Sulemanji

In previous months we’ve witnessed growing interest in both React and the London React meetup. As the number of attendees swelled beyond the reasonable capacity of our office, Facebook stepped in and graciously offered to host the event at their London offices. The timing couldn’t have been better, as this week attendance exceeded 250.

It is clear that with React, Facebook have captured the interest of developers, and these growing numbers highlight just how aligned the community is with its philosophy.

In addition, this January saw the 2015 React.js Conf in San Francisco, with React Native, Relay and GraphQL among the notable announcements. It is exciting to see Facebook responding to community feedback and evolving React to address its concerns.

What did we learn this month?

An Introduction to Flux

Guy Nesher, a lead developer at Conversocial, introduced us to the Flux architecture in a talk rich with granular detail and applied examples. He demonstrated how Flux is a powerful pattern that can be used to simplify data flow, and that these benefits are not exclusive to React applications but can be used to great effect in other JS frameworks. Given the announcement of Relay at React.js Conf, it will be interesting to see how the Flux pattern changes and how we approach the problem of managing data flow using these new technologies.

A Preview of React-Native

Robbie McCorkell, a user interface developer at Red Badger, attended React.js Conf last month, where he saw React Native first hand. He walked us through an early release of the iOS implementation with examples in Xcode. For the uninitiated, Robbie demonstrated just how productive and enjoyable React Native development can be in the seemingly torturous world of native app development.

Show and Tell

Two lightning talks on projects currently being worked on by meetup members.

Rui Ramos demoed his project, Great.dj: a simple yet slick playlist manager for parties, powered by React and WebSockets. The killer feature is the ability to join a party’s playlist simply by visiting the site from the same network.

Steve Heron showed us react-app-layout, a framework for handling complex, user-customisable layouts. By harnessing the power of flexbox, Steve showed how we might offer power users levels of customisation previously only seen in native/desktop applications.

13
Feb
2015

Learning to Code in Two Days

by Amy Crimmens


When I arrived at Red Badger last August to start my lovely new job as Community Manager, I can honestly say I had very little idea how websites were produced. Considering I had six years’ experience in the tech industry under my belt, this was pretty ridiculous.

So, once I’d got into the swing of things a bit, I decided I should spend some of my generous training budget on a coding course; we are each given £2K a year to spend on training, so the world was my oyster.

I decided on the General Assembly “Programming for Non-Programmers” course: a weekend bootcamp designed to give total beginners a crash course in web development basics, including HTML, CSS, JavaScript and a little bit of Ruby.

We know the General Assembly guys pretty well at Red Badger; we are one of their hiring partners and several of my colleagues coach on their courses so it seemed like a good bet.

We started off the weekend with the usual introductions: “Hello, I’m Amy from Red Badger and I’m here because…”. Pretty much everyone fell into one of two camps: either they had an idea for a startup and didn’t want to have to pay a developer, or, like me, they worked with developers and wanted to understand a bit more about what was going on.

Our teacher Antonio warned us that it was going to be a full-on weekend, and explained that by the end of it we would each have produced a simple “business card” website using basic programming.

This seemed like a lot to ask, but as we got to grips with HTML and then CSS I could see my site coming together. It was when we added JavaScript that I was really impressed with myself: menu items now changed colour when you hovered over them, like a real website!

I can’t say I’ll ever use the (rather ugly) website I built; that wasn’t the point. But the knowledge I picked up over the weekend will help me tremendously in my job at Red Badger. I now understand several of the mysterious terms that fly around the office (div means something quite different in programming), and I feel confident I could make simple content changes to our CMS-less website, which was previously a bit daunting.

The weekend has also given me a real appreciation of the work that goes into creating even quite basic sites, never mind the stunners my colleagues build, like the new Fortnum and Mason site. I already knew they were a talented bunch; now I think so even more.

Just as an aside, if you are a super-talented developer who can magic up websites as fancy as these, we’d love to hear from you. Get in touch via jobs@red-badger.com and I’ll try not to bore you with tales of my new-found coding skills.

 

28
Jan
2015

Why I moved my static blog to HTTPS

by Alex Savin

Fialka ciphering machine
Photo credit Brett Neilson

By 2015, general awareness of the difference between plain HTTP and protected HTTPS seems to have become widespread. When making online purchases, sending private messages or entering passwords online, we now get a warm fuzzy feeling if there is a green lock icon shining next to the page’s URL, and panic attacks if there isn’t one. Being HTTPS-only has become standard for online services dealing with sensitive customer data.

On the other hand, there are pages like personal blogs, or, well, Wikipedia. Or the BBC. Just good old unencrypted HTTP. You can actually access Wikipedia via HTTPS, but if you try the same trick with a BBC page, it will just redirect you back to the land of plain HTTP. And why would you want an encrypted connection anyway? It’s not like they are selling stuff and you’re submitting your credit card number. You’re just wandering around the Web, reading stuff, minding your own business.

Recently I moved my personal blog to be HTTPS only. Here are a few reasons why.

  1. With plain HTTP there is no certain way of telling whether you are actually reading my blog. You type a URL into the browser (or click a link), and the browser politely requests content for that address. At any point during the journey of this request, it can be read, interpreted, and responded to with arbitrary content. And the browser will display whatever is returned.

  2. A classic example is the man-in-the-middle attack. Open, unencrypted traffic allows anyone on your (free) wifi network to intercept it, track your requests, and potentially alter the content of the page. In the case of the BBC’s pages, you could inject news articles, or alter the content in any way you like.

  3. If you think only evil hackers would do such atrocities, think again. There are reports of in-flight and hotel wifi networks injecting banners into webpages. It goes without saying that this would not be possible with HTTPS pages.

  4. Google started using HTTPS support as a positive ranking signal last year, meaning HTTPS pages will rank higher than plain-HTTP-only pages.

  5. HTTPS traffic is much harder to block and filter with corporate firewalls. You would have to force-install an additional certificate onto the machines in that network (which is a common practice in certain places).

  6. In general, living in the post-Snowden era, when The Verge reveals new details of digital surveillance almost weekly, strong, reliable encryption of everything seems like a very good idea.

  7. One interesting topic is state-controlled firewalls, like the Great Firewall of China. The Wikipedia article on the matter contains the following curious trivia:

The Tor anonymity network was and is subject to blocking by China’s Great Firewall. The Tor website is blocked when accessed over HTTP but it is reachable over HTTPS so it is possible for users to download the Tor Browser Bundle.

The very same Google recently announced that not all SSL certificates will be treated equally: SHA-1 certificates are going to be retired and flagged as unsafe as early as Chrome 41. In a way, it is dangerous in itself to blindly trust any SSL certificate. There are quite a few online tools to test various security aspects of your SSL setup, the best and most helpful being SSL Test. It tests pretty much everything, and provides you with a grade and a bunch of hints on how to improve it, and why it is capped (if it is).
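Ruby’s OpenSSL bindings make it easy to peek at the property in question, namely which hash algorithm a certificate is signed with. A sketch, generating a throwaway self-signed certificate rather than fetching a real one (the hostname is made up):

```ruby
require 'openssl'

# Build a throwaway self-signed certificate, signed with SHA-256.
key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = cert.issuer = OpenSSL::X509::Name.parse('/CN=example.test')
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

# This is the field SSL Test (and Chrome) inspect; a SHA-1 certificate
# would report "sha1WithRSAEncryption" here instead.
puts cert.signature_algorithm
```

Run the same inspection against a certificate fetched from a live server and you can see at a glance whether it is due for retirement.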

One notable thing about SSL Test is that, once you decide to buy a certificate, you can run the test against the issuer’s own pages, to make sure it is indeed legit and there are no issues like a broken certificate chain.

This post was inspired by the HTTPS Everywhere talk at Google I/O 2014, and this ATP podcast episode.

23
Jan
2015

2015: Native App Development is Dead in the Enterprise

by Cain Ullah

With Gartner reporting that in 2014 mobile and tablet sales reached approximately 2.4 billion units compared to 318 million PCs, and with an ever-increasing proliferation of mobile devices, developing an effective mobile strategy continues to be one of the biggest challenges for many enterprise-level organisations.

So what questions are organisations currently asking themselves when devising a strategy for a successful mobile presence?

Native vs Responsive

There has been a long-standing argument about the relative value of building native applications specific to a device, such as an iPhone or Android, vs. the mobile web, i.e. responsive websites that work across all devices.

The nirvana would be that developers didn’t have to worry about devices at all: a way to write an application once, cover all use cases, and work on all devices, be it in the guise of a native app or a website. Unfortunately this doesn’t yet exist, and Facebook’s attempt at it via HTML5 was followed by a well-publicised rapid backtrack.

There are obvious pros and cons to both (you can read David’s article here for some great insight into why responsive web design is a great strategy). Native applications give you more flexibility when accessing a smartphone’s features (such as the accelerometer), and there is the obvious advantage of offline use. However, a native application is built for a specific platform, so you end up building multiple applications, and development costs and ongoing maintenance escalate as a result. Responsive websites, on the other hand, can be built once and work on most devices (if tested properly), providing far greater reach for less development cost. But offline browsing is not easy, and user experience can often be hindered by the limitations of HTML.

At the moment there are use cases for both, solving separate concerns. If you have a complex set of requirements, you may not be able to avoid building both a website and a native application. It is commonplace for enterprise companies to have many desktop applications, each with 3-4 mobile applications to support them.

Enterprise strategies differ from company to company. A fine example is The Times newspaper. It has focussed on the optimum interactive experience of its native applications for both Apple and Android tablets, with separate editorial teams dedicated to each device. The website, on the other hand, is not responsive; they’ve not even bothered to redirect to an m.site, instead displaying the desktop site on mobile with a link to download the app from the app store.

In contrast to The Times, the Guardian has opted for a great adaptive/responsive website detailed in David’s aforementioned article from October 2013.

So is there a happy medium?

Cross Platform Tools

To tackle this problem, there are tools on the market that facilitate cross-platform mobile development, such as PhoneGap and Titanium. These allow you to build apps using JavaScript and web technologies, and deploy them to multiple marketplaces, in the guise of native apps, across multiple platforms. The advantages on the surface are obvious: you get access to native features that are not available to web browsers, and you only have to write and maintain one code base. If only it were that simple.

There is a great blog post by Kevin Whinnery here comparing PhoneGap and Titanium and highlighting the weaknesses of both.

Other cross-platform tools also exist. Another blog post, by Kevin Whinnery’s former colleague Matt Schmulen, details the options and how they fit into the current mobile ecosystem, where enterprise demand is ever increasing.

These frameworks include:

  • PhoneGap/Cordova (HTML/JavaScript)

  • Titanium/Appcelerator (JavaScript)

  • Mono/Xamarin (C#)

  • Rhodes (Ruby)

  • Kony (Lua)

Our experience of these cross-platform mobile development tools is that you simply cannot replicate the native app experience. We have experimented with them before, one example being the build of the mobile applications for our BMW project using PhoneGap. We found that the iPhone version of the app was great. However, getting an optimum experience on Android took a huge amount of optimisation and testing effort, and still got it only close (but not close enough) to a native experience.

So what does the future hold?

Native App Development Is Dead in the Enterprise

Mobile

This is a bold statement that will take some time to become completely true. However, 2015 is going to be the dawn of a new era of technology that will replace existing cross-platform tools such as PhoneGap and Titanium with a much better offering, one that will finally succeed where they have largely failed. This will be the beginning of the end of enterprises building responsive sites with multiple native applications to complement them.

Responsive websites, with the help of new technology will form the basis of the code for native applications without hindering user experience. The applications will be fast and responsive, and there will be a single code base with very little device specific code.

What does this mean? Software companies like Red Badger that currently focus on the enterprise web (i.e. responsive websites rather than native applications) will also be able to deliver great native applications without a great deal of additional effort or the need to hire native app developers. Native application agencies are going to have to adapt and re-skill their employees in order to keep up, or face withering into eventual obscurity. The winners will be the enterprise companies: they will have options available to build great experiences across web and native applications on multiple devices with close to a single code base. This means fewer applications, less maintenance, lower development costs and happier customers.

This will ultimately kill the question: “should I build native or responsive?”. Why choose when you can have it all?!

P.S. Where’s my proof you may ask? Watch this space.

UPDATE 2015/02/08: On 28th January, Facebook announced React Native at React Conf. React Native is a game changer and already answers the predictions made in this blog. You can view all of the videos from the conference here.