28 Nov 2014

UX Tip of the Week: Guerilla Testing

by Maite Rodriguez

What we learned this week:

Guerilla Testing

How to approach people and lower your chances of being rejected.


Do

  • Choose a location where people in your target audience are likely to convene. In our case it was a co-working space.
  • Go right at the end of lunch time when people have finished their food and are more likely to crave sweets.
  • Bring candy, brownies, cookies… get creative and place it next to your working space. Try to avoid being creepy. A fan to blow the smell their way may help catch their attention. Kidding!
  • Smile and ask if they have time to spare (the amount of time the test will take).
  • If they say no, let them walk away. No begging.
  • If they say yes, game on.
  • When you ask them the pre-screening questions and find out they are not your target user, it's okay to cut the testing short.
  • Take notes. Voice recorder, video recorder or just a plain old notepad.
  • ASK PERMISSION before you do a voice or video recording!
  • During the test, watch, listen, and ask follow-up questions like:
    1. Why?
    2. What did you expect to see?
    3. How did that make you feel?
  • After 3 users it starts getting tiring.
  • Go home.
  • Next day, review notes and find the trends.


Don’t

  • Just show up with nothing prepared and wing it.
  • Lie and tell them it will take them 2 minutes when it will actually be 10.
  • User test anyone who is willing to talk to you.
  • Invade their personal space once they are close enough.
  • Creepily stare them down as they walk towards you.


The end


25 Nov 2014

Nodeconf Budapest 2014

by Alex Savin


Suddenly, there is this moment in your life when you find yourself at the Node.js conference in Budapest. A one-day, single-track event named One-Shot, with a bunch of speakers and a nice variety of topics, complemented with great Kenyan coffee and Hungarian craft beer. Organized by the local JavaScript shop RisingStack.

Some of the talks and slides are already available online, so I’ll just go through some of those that I feel are worthy of your attention.

Hacking with Tessel

Matteo Collina did a rather brave demo of live hacking on a Tessel – a Node-compatible microcontroller with built-in wifi and a bunch of extension modules.

What’s the first thing you do, when you get a shiny new microcontroller? Make it blink the light.

There are a bunch of npm modules compatible with Tessel, which makes it great fun to hack on the hardware and make it do various things for you – like blinking lights in various fashions. As was to be expected, things didn't go smoothly, but Matteo managed to stay away from cursing and get things blinking (again).
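For reference, the canonical Tessel "blink" looks roughly like the sketch below, based on the board's built-in tessel module (a minimal sketch, not Matteo's actual demo code):

// Minimal sketch of the classic Tessel blink (assumes Tessel's built-in module)
var tessel = require('tessel');

// Grab two of the board's LEDs with opposite initial states
var led1 = tessel.led[0].output(1);
var led2 = tessel.led[1].output(0);

// Toggle both LEDs a few times a second, forever
setInterval(function () {
  led1.toggle();
  led2.toggle();
}, 200);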

Torrent with Node

Mathias Buus ventured onto the p2p side of the internet. After a short introduction to the technology behind torrents, Mathias demonstrated how easy it is to implement your own torrent client with Node, and even stream video. But that was not enough – the next stop was the infamous Docker with its (rather large) images. During the course of a spotless demo, Mathias managed to share a Docker image via torrent, stream it on another machine, boot it and launch things. If you then decided to launch Node inside the image, it would torrent-stream just the files needed for a proper launch, and nothing else. And yes, you can do this too – the thing is named Torrent Mount and allows you to mount a filesystem via a torrent link.
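His torrent-stream module gives a feel for how little code a basic client needs – a minimal sketch, with a placeholder magnet link (not the demo code itself):

// Minimal torrent client sketch using the torrent-stream npm module.
// The magnet link is a placeholder.
var torrentStream = require('torrent-stream');
var fs = require('fs');

var engine = torrentStream('magnet:?xt=urn:btih:...');

engine.on('ready', function () {
  engine.files.forEach(function (file) {
    console.log('found file:', file.name);
    // Pieces are fetched on demand as the stream is read –
    // this is what makes streaming video straight out of a torrent possible
    file.createReadStream().pipe(fs.createWriteStream(file.name));
  });
});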

Docker with Bittorrent

Check his talk (Docker demo starts at around 15 min), and slides.

C J Silverio told the story of something we all take for granted most of the time – the npmjs registry: how it all started as a small side CouchDB project and then miserably failed when the world suddenly started running npm install way too often.

Watch how things evolved for npm, becoming faster and more reliable (or check out slides).

A full list of talks is available on the Rising Stack YouTube channel. Ok, here is another worthy talk – Design for Retry by Aria Stewart. She talks about something we all try to avoid – errors and how to design proper error handling in web apps.

Nitro coffee

Breaks were spiced up with a special kind of cold-brewed coffee, mixed with nitrogen. Yes, nitro-powered coffee is a thing, and I'd love to taste it again next year.

18 Nov 2014

Silicon Milkroundabout – From both sides of the fence

by Roisi Proven


On Saturday May 10th, I nervously walked through the doors of the Old Truman Brewery for the first time. I’d heard good things about Silicon Milkroundabout, but had always been too nervous to give it a go myself. However, job dissatisfaction paired with the desire for a change drove me to finally sign up, and I was assigned a spot on the Saturday “Product” day.

I have to say, as a first-timer, the experience was a little overwhelming! The hall was already starting to fill up when I got there at 12pm, and there was a maze of stalls and stands to wade through. The SMR staff were very helpful, and the map of the hall I was given on entrance made navigating the maze slightly less daunting.

I'd done my research before the day itself, and I had a little notebook of companies that I knew I needed to make time to speak to. There were 5 I felt I HAD to speak to, and a few that I was drawn to on the day, based on their stand or the company's presence overall. At the top of my shortlist was Red Badger.

In May, RB had a small stand near the door – not in the middle of things, but easy to find. I had to do a bit of awkward hovering to get some time with one of the founders, but when I did we had a short but interesting conversation. I took my card, filled in the form, and kicked off a process which led to me getting the job that I am very happy to have today.

Fast forward to November, and my Saturday at Silicon Milkroundabout looks a whole lot different. This time, I'm not a candidate; I'm the person that people are selling themselves to. A different sort of weird! The Red Badger stall looks different this time around too: where before we had a small table, this time we had an award-winning shed.

[Photo: the Red Badger shed]

That awkward hovering I said I did? There was a lot of that going on. Having remembered how daunting it was to approach a complete stranger and ask for a job, I did my best to hoover up the hoverers. I had a few really interesting, productive conversations during the day, but just as many were with people who just wanted to compliment us on our stand or our selfie video. It was great to get some positive feedback for all of the team's hard work in the run-up to the weekend.

The biggest difference was that, given I was standing still, I was able to fully appreciate the sheer number of people that came through the doors, and the variety of roles that they represented. The team at SMR have done a great job of keeping the calibre of candidates high, and it does seem like there is a candidate for almost everything the companies are looking for.

Here at Red Badger we'll be combing through the CVs and contacts that we made over the weekend, and will hopefully be making contact with several potential new Badgers soon. For anyone that met us this time around, thanks for taking the time out to hang in our shed. For anyone that missed us, we'll see you at SMR 9 next year!

12 Nov 2014

Badger Academy – Week 10

by Sarah Knight

Badger Time Retro

This week Joe introduced us to the concept of Agile Retrospectives. This is an opportunity to look back at the project so far and review how everything is going. What do we want to start doing? Continue doing more of? Stop doing? It's a good way of opening up communication about what we think we need to improve on, and coming up with specific ways this can be achieved.

We decided that the easiest way of tracking this was with a Trello board. We've got 4 columns:

- To improve
- Improving/Monitor
- Improved
- N/A (for cards we later decide are no longer relevant but that don't fit in the Improved column)

We created a card for each thing we want to improve on, and labelled it STOP, START or CONTINUE. These were all placed in the ‘To improve’ column. We then went through them all and discussed how they could be implemented, or if any of them needed to be reworded to make them more SMART.

A few examples:

START: Descriptive pull requests for every card currently in development, with a task list.
CONTINUE: Tracking possible enhancements as GitHub issues.
START: Using blockers (as a marker) for stories that can’t be worked on for some external reason (or because you’re stuck).
START: When pairing, make sure to swap around who’s writing and who’s reviewing.

I'd not come across the practice of a retrospective in this form before, but I think it's a really great method for opening up dialogue about things you want to do differently. We've been using it for a few weeks now and are really seeing the benefits. Communication has improved, and coaches and cubs alike are able to see who's been working on what, and how things are going. At regular intervals we revisit the retro board and discuss our progress. It's a good way to track improvements, as we move cards from one column to another, and to ensure that we continue to think and talk about other improvements we'd like to make.

Filtering

We added the functionality to edit a resource on the front end and back end. However, because a resource can be linked to a role, which has days associated with it, changing a resource's cost could potentially affect the costs on a number of days, and therefore on projects as a whole.

So on the front end we needed to provide some options as to what editing a resource would actually affect.

Affected projects

- No bookings – no previous bookings will be affected by the new cost; it will only be applied to future bookings
- Bookings from – allows the user to choose a date; any booked days from that date onwards will have the new cost applied
- All bookings – all previous bookings will be updated with the new cost

So that the user can see exactly what they're doing, we needed to flag up the projects and phases that would be affected by any changes they made. However, because a resource isn't directly associated with a project, but via a role that belongs to a phase that belongs to a project, this was a bit tricky.

[Diagram: resource, role, phase and project associations]

As you can see from the diagram above, editing Resource 1 would affect Project 1 and both its phases (via Roles 1 and 3). Editing Resource 2 will only impact Phase 1 (via Role 2), but still affects Project 1 as a whole.

We needed to go through all the roles linked to the current resource id, then in turn go through all the phases associated with those roles, and the projects associated with those phases, to build up a list of affected phases and projects. Several phases from the same project could be associated with the same resource, so we needed to make sure that projects didn't get repeated.
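In JavaScript terms, the traversal looks something like this sketch (the model shapes and names are illustrative, not the real Badger Time code):

// Illustrative sketch: walk resource -> roles -> phases -> projects,
// de-duplicating projects along the way. Model shapes are hypothetical.
function affectedByResource(resourceId, roles) {
  var phases = [];
  var projects = [];

  roles
    .filter(function (role) { return role.resourceId === resourceId; })
    .forEach(function (role) {
      if (phases.indexOf(role.phase) === -1) {
        phases.push(role.phase);
      }
      // Several phases can point at the same project, so add each project once
      if (projects.indexOf(role.phase.project) === -1) {
        projects.push(role.phase.project);
      }
    });

  return { phases: phases, projects: projects };
}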

To filter by a from-date, we needed to add an extra step to turn the string from the date input field into a date object. We then went through all the roles linked to the current resource id and filtered for those that have a day with a date greater than or equal to the from-date. The resulting list of roles then goes through the same filtering process as above.

At one point we got a bit stuck trying to do a nested filter to get the roles that belonged to the resource and find the days that belonged to those same roles. Viktor pointed out that using a second 'filter' returns an array to the first filter when it's expecting a boolean – and since even an empty array is truthy, nothing ever gets filtered out. So we changed it to 'any', which does return true or false, and everything worked!
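In plain JavaScript the bug and the fix look like this (some is the equivalent of LiveScript's any; the variable names are illustrative):

// Buggy: the inner filter returns an array, and even an empty array
// is truthy, so the outer predicate accepts every role.
var lateRoles = roles.filter(function (role) {
  return role.days.filter(function (day) {
    return day.date >= fromDate;
  });
});

// Fixed: some() (LiveScript's `any`) returns an actual boolean.
var lateRolesFixed = roles.filter(function (role) {
  return role.days.some(function (day) {
    return day.date >= fromDate;
  });
});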

7 Nov 2014

Will cars become the new mobile?

by Mike Altendorf


It seems slightly crazy to be talking about 'the new mobile'. It seems like mobile only recently became the new mobile and I can't even remember what it was before that. Such is the speed of change in this digital world of ours that simply proclaiming something 'new' makes it old. What I find particularly interesting at the moment though is that it seems it is more reinvention than invention. The 'internet of things' is in the process of breathing new life into a host of everyday objects. I heard today about a company that is developing a lock that will text you if it is being broken into; I have heard about belt buckles that will alert family members if elderly relatives go on an unexpected walkabout; shoes that analyse the way you run, even toilet paper holders that text you when they are about to run out (ok, maybe I made the last one up).

All of a sudden, every physical thing around us has an opportunity to become something else. The thought is actually overwhelming. Can we deal with that much information? Do we need it? At what point does it become too much? In a dot.com-esque way there will undoubtedly be many missteps along the way, but there is no doubt that the 'internet of things' is set to revolutionise our lives, and there is one piece of everyday equipment which I think has the potential to become 'the new wearable mobile', and that is the car.

The idea of the car that is more than the car has been with us as long as the car has. In 1938 at the New York World's Fair, General Motors predicted that motorists would be able to simply type in a destination, sit back and relax (http://www.economist.com/news/business-and-finance/21618531-making-autonomous-vehicles-reality-coming-street-near-you). Those of a certain generation will have dreamt of having their own KITT that could not only talk to them but could save the day against any baddy out there (and there is always the Batmobile, of course). Cars have always had huge potential, and right now we are on the cusp of that potential being realised. As far as platforms go, cars have far more to offer than the mobile phone. They don't just tell you where to go; they can take you there. The computing power in the average sedan is already overtaking that of a smartphone. Sensors help us park, keep us from getting too close to other cars on the road, alert us to obstacles, sense and adapt to different weather conditions, open garage doors. We can call, email, surf the web, watch movies. Almost without us noticing, cars have become a whole lot more than four wheels and an engine.

The reality is, however, that we have barely scratched the surface of what is possible. With the exception of mobiles and computers, cars are the machines that the majority of us spend the most time engaging with, and now we have the possibility of them communicating with their surroundings. This not only opens up the idea of the self-driving car but of a whole transport network that communicates internally: cars talking to each other, to traffic lights, to buildings; cars that monitor you as you drive, evaluate your behaviour and adapt accordingly, communicating with people and places. Imagine a car that can not just call for help after an accident but automatically request an ambulance. Cars that can prepare your house for your arrival, or come and pick you up when you have had more than half a pint?

The thing is, once we no longer need to concentrate on the road, the time we spend travelling is freed up for other things. Retailers need to consider whether the car will become the next device from which people shop, and how they can exploit tech to connect with it. Unlike the mobile, it doesn't need to be small enough for us to carry around – in fact, it will be carrying us around. What would you do with a screen the size of a windscreen if you didn't need to be able to see out of it all the time?

All of this might seem a bit fantastical, but when you consider how mobile has exploded in the last five years, perhaps there is a chance that I will be taking delivery of my very own KITT sometime before my 60th birthday.

3 Nov 2014

Badger Academy Week 9

by Eric Juta

Badger Academy endeavours have slowed down slightly; managing my last year of university while working part-time is becoming a sizeable challenge. Much more of that to come once I arrive in the real world!

In reference to Albert's post, I have personally found the same: working at a startup while at university is a significant contribution towards my own academic performance! The trials and tribulations faced at Badger Academy have prepared me for kata-like performance at university. There's so much I've learned at Badger Academy that isn't taught at university or put into practice! (Sadly they don't teach git workflows at university.)

I highly recommend working for a startup and gaining experience in preparation for post-grad life. The foundations of my dissertation have been laid thanks to the concepts taught at Red Badger!

Authentication

The architecture of a single-page application talking via AJAX requests to a backend Rails API emphasizes data flow. Without the chance to just install a Ruby gem and have most of the work done for us, we are forced to implement the same methodology and network best practices ourselves (as demonstrated before with Nginx).

The process of authentication leading to API data fetching is similar to a TCP three-way handshake.

In Badger-Time, the process occurs as follows:

  1. On any route, the clientside router checks whether a generated authentication token is stored in HTML5 LocalStorage (a persisted datastore in the browser with its own API)
  2. If none is found, the router redirects the user to the /login route and renders the React.js component
  3. The user logs in with their pre-registered Badger-Time details.
  4. The user's credentials are verified by the backend API, and a generated authentication token is sent over (made to expire after a day unless a refresh call is made by the user)
  5. The generated authentication token is received and stored in HTML5 LocalStorage.
  6. Every subsequent request includes the authentication token in its request headers.
  7. The API checks that the request header has a valid authentication token and replies after executing the body of the request.

(I take that back, that was more like 7 steps rather than 3)

Technically and code-wise, the above process is implemented with the following decisions (a sketch follows the list):

  • NPM's superstore-sync module to provide an API for setting and getting the auth token from HTML5 LocalStorage.
  • A modification of the API helper on the frontend to send the token in all request headers if present.
  • A before-action filter in the application controller to verify whether the request header has a token matching the session table; there is also an expiry value.
  • An action that verifies the appropriate BCrypt-encrypted password details and generates a token value from a hash.
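A minimal sketch of the frontend half of this flow, using the browser's localStorage directly (in Badger Time it is wrapped by superstore-sync; the header name and endpoint handling are illustrative):

// Sketch of the frontend token handling. localStorage is used directly
// here; Badger Time wraps it with superstore-sync. Header name is illustrative.
function saveToken(token) {
  window.localStorage.setItem('authToken', token);
}

function apiRequest(method, url, body, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open(method, url);
  xhr.setRequestHeader('Content-Type', 'application/json');

  // Step 6: include the token in the request headers if present
  var token = window.localStorage.getItem('authToken');
  if (token) {
    xhr.setRequestHeader('Authorization', token);
  }

  xhr.onload = function () {
    // Steps 1-2: if the API rejects the token, go back to /login
    if (xhr.status === 401) {
      window.location.hash = '/login';
      return;
    }
    callback(null, JSON.parse(xhr.responseText));
  };

  xhr.send(body ? JSON.stringify(body) : null);
}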

OAuth

A similar fashion is used with the OAuth protocol for the backend Rails API to talk to FreeAgent.

The tokens are stored in the process environment variables and are read directly instead!

So for now, the FreeAgent account is hardcoded.

FreeAgent OAuth tokens are refreshed, and data pulled down, on a recurring clockwork module task to keep the Rails models updated! Asynchronously too, thanks to the Sidekiq and Redis combination! No interruptions at all – deployment and usage carry on uninterrupted!

There was also the decision to diff our remote Timeslips (FreeAgent populates this model) against our Days model on every sync.

This was actually quite easy (algorithm-wise!): we assume that all Timeslips are up to date, so the Days model and its burnt-hours attributes get overwritten – but we don't overwrite if a day's burnt hours are already up to date, which is a comparison of updated-at or burnt-hours values.
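The per-day rule is roughly the check below, sketched in JavaScript even though the real code lives in the Rails API (field names are illustrative):

// Sketch of the per-day sync rule. Timeslips are assumed authoritative;
// the field names are hypothetical.
function syncDay(day, timeslip) {
  var upToDate =
    day.burntHours === timeslip.hours ||
    day.updatedAt >= timeslip.updatedAt;

  if (!upToDate) {
    // Overwrite the day's burnt hours from the remote Timeslip
    day.burntHours = timeslip.hours;
    day.updatedAt = timeslip.updatedAt;
  }
  return day;
}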

Leaving comments

Our BDD process is finally done – I'd like to mention that again!

Another trick we set up, DevOps-wise, was to start the PhantomJS debug server in a Docker container and then run the Cucumber tests; we now have console session logs stored! We can view those logs through the PhantomJS web UI!

No more writing to the document just to surface JavaScript errors!

Automate!

30 Oct 2014

Product Hunt Hackathon

by Jon Sharratt

Last weekend Product Hunt decided to host a global hackathon, opening up their API to the community for consumption. Budding developers got together on-site at YCombinator for a two-day hackathon to come up with and deliver a new product idea. Remote entries were also allowed, and from across the globe – Hong Kong, France and our home town of London – plenty of fresh ideas were ready to be developed.

I applied on my own (a bit last minute) and, with no real thought, put down the first idea that came into my head: 'crowdsourcing for products' via the Product Hunt API. My main personal goal was to prove to the people over at YCombinator that I could come up with and deliver an idea over two days. After a couple of days I got an email with an invite to participate – that was it, I was ready to hack!

I got up Saturday morning and opened up the Badger HQ (a little hungover from the night before) to find Albert, a fellow Badger, who had stayed the night on the sofa after having a few beers with some of the other Badgers. The tech I decided to use was a slight risk, as I had only dabbled with it previously: React, LiveScript and Node.

I began by using as a basis the project that the Badger Academy cubs have been creating for our own internal product – they are doing a great job, as you might have seen already. Albert started to gain interest in the whole project idea and got set up. Another great addition to the team a couple of hours later was Viktor, a beast at LiveScript – just what we needed. He saw on our company Slack that I was in the office and got involved. That was it: we had a great team to really get this hack moving.

We decided to get the core functionality we wanted to show off to the judges done on Saturday. Then on Sunday we would style it up and tweak the UI to make it a more usable and nicer experience. I had implemented the core layout using Twitter Bootstrap (http://getbootstrap.com/) with styling from a theme on Bootswatch (http://bootswatch.com). Later Viktor informed us of an awesome library, React Bootstrap (http://react-bootstrap.github.io/), and converted the project so we could change the layout quickly and more effectively.

[Screenshot: Product Fund at the end of day one]

By the end of Saturday the project was taking shape with the huge help of Viktor and Albert: authentication done, Product Hunt API consumed and Stripe Checkout integrated to allow users to pledge money. I had previously created a quick and dirty Node.js Passport (http://passportjs.org/) strategy to make the authentication process easier (https://github.com/allotropyio/passport-product-hunt). So with all of that said, it was time to call it a night, ready for a fresh start on Sunday.
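Wiring up such a strategy follows the usual Passport OAuth2 pattern – a sketch assuming the package exports a conventional Strategy (check the repo above for the real export and option names):

// Sketch of wiring up the strategy, assuming the conventional Passport
// OAuth2 shape. The export name, option names and callback URL are assumptions.
var passport = require('passport');
var ProductHuntStrategy = require('passport-product-hunt').Strategy;

passport.use(new ProductHuntStrategy({
    clientID: process.env.PRODUCT_HUNT_CLIENT_ID,
    clientSecret: process.env.PRODUCT_HUNT_CLIENT_SECRET,
    callbackURL: 'http://localhost:3000/auth/callback' // illustrative
  },
  function (accessToken, refreshToken, profile, done) {
    // Look the user up (or create them) and hand back to Passport
    done(null, { id: profile.id, token: accessToken });
  }
));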

Sunday came along, and all that was left to do was add validation to forms and finish off some of the more advanced parts of the journey, such as supporting product makers in assigning costs and a deadline for features to be funded. Viktor also added the awesome Firebase (https://www.firebase.com/) as a storage layer for pledges and feature requests, rather than storing them in memory on the client.

Not only did it give us an easy way to implement a storage layer, it also allowed the UI to live-update whenever pledges or features were added to the site. It really helped make the site come alive and made it more engaging for users. I would say as a side note that the blend of React, LiveScript, Node and Firebase is just a match made in heaven for this kind of project (a blog post for another time).
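With the Firebase client API of the time, that live-updating storage layer boils down to this pattern (the app URL and data shape are illustrative):

// Sketch using the 2014-era Firebase API. The app URL and data shape
// are illustrative.
var Firebase = require('firebase');
var pledges = new Firebase('https://product-fund.firebaseio.com/pledges');

// Storing a pledge
pledges.push({ feature: 'Export to CSV', amount: 10 });

// Live-updating the UI: child_added fires once per existing pledge,
// then again whenever anyone, anywhere, adds a new one
pledges.on('child_added', function (snapshot) {
  var pledge = snapshot.val();
  console.log(pledge.feature, pledge.amount);
});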


On Sunday we were also joined by @jazlalli1 who worked in another team on a cool hack for Product Hunt taking their data to produce analytics and trends.

As the deadline approached, our own lovely QA Roisi joined on Slack and did some testing remotely, which helped iron out a few creases. Once we were pleased with it, we were ready to submit the hack on challengepost.com. We had created a droplet on DigitalOcean, registered the domain productfund.io, and got it deployed and submitted on time.

Check out the final result on producthunt.com at http://www.producthunt.com/posts/product-fund

The next day we found out that we had made the top 20 finalists, and we had some great feedback from the community!

We then waited to hear about the finalists and who had won. It turns out our small idea made the top 3 hacks of the first ever Product Hunt hackathon. All in all, a great job on everyone's behalf for two days' work.

The prize:

“The top 3 winners will receive guaranteed interviews with 500 Startups, Techstars, and office hours with YC partner and founder of Reddit, Alexis Ohanian!”

Just to add, there were some great entries – check out the other finalists at http://www.producthunt.com/e/product-hunt-hackathon-2014


28 Oct 2014

Haller App Launch

by Joe Dollar-Smirnov

Red Badger, in collaboration with Pearlfisher, designed and built a web-based app for the charity Haller. The primary users for this app are rural farmers based in Kenya, who live a life far removed from the abundance of our comfortable Western home comforts.


Haller bring life-changing but basic civilised facilities to communities. The construction of reservoirs, wells, sanitation, medical centres and learning facilities is all just a part of the work carried out by dedicated Haller recruits, both on the ground in Kenya and in the UK. Led by renowned environmental expert Rene Haller, the organisation makes education and the dissemination of agricultural knowledge a big part of its work. Through education, Haller help local communities build sustainable futures.

The Haller app is a constant, on demand source of this information and an alternative way to reach further afield. Red Badger spent time in Africa working directly with the farmers to ensure the final product was focussed on their goals, accessible and understandable. Some of the users we were targeting had very little or no experience of using applications or websites so intuitive interactions were essential. We could not rely on any existing knowledge or experience of conventions.

The app has now launched, and to mark the occasion Pearlfisher have created a fantastic video that tells the story. To get the full background on Red Badger's involvement in the app – how we approached the research, workshops and testing – there is a series of blog posts below.

Farmer Training Research

Africa Road Trip: Day Zero

Africa Road Trip: Day one and two

Africa Road Trip: The workshops begin

Africa Road Trip: The challenges for app design and development

UX Testing in Africa – Summary


27 Oct 2014

Improving Performance with New Relic APM and Insights

by Roisi Proven

In any kind of tech development, knowledge is power. When working on an ecommerce site, knowledge is essential.

The more information you have about your application, both during development and when it goes live, the more value you will be able to provide to your client and, in turn, to the client's customers. In a development environment, it's easy to provide yourself with a breadcrumb trail back to an issue, but when your code moves into a staging environment, the information you get can end up being a lot less useful. At one point, a vague "something went wrong" was as useful as it got for us.

With no way to know what this “something” was, and after a few awkward problems where we very quickly reached a dead end, we made the decision to introduce New Relic APM into our workflow.

New Relic APM helps you monitor your application or website all the way down to the code level. We have been using this in conjunction with New Relic Insights, their Analytics platform.

With New Relic we have been able to track VPN downtime, monitor response times and get stack traces even when working in a production environment. So the vague message above becomes a proper stack trace.

This monitoring enables you to increase confidence in your product in a way that isn’t possible with simple manual or even automated testing.

In addition to the APM, we've also been working with New Relic Insights, which behaves similarly to Google Analytics. However, its close ties to APM's tracking and monitoring mean that the data is only limited by the hooks you create and the queries you can write in NRQL (New Relic's flavour of SQL). It feels far meatier than GA, and you can also more easily track back-end issues like timeouts, translating them into graphical form with ease (if you're into that sort of thing).

Being a new product, it is not without its pitfalls. In particular, NRQL can feel quite limited in its reach. A good example of this is the much-publicised addition of maths to NRQL; that a query language didn't include maths in the first place felt a bit like an oversight. However, this has been remedied, and they have also introduced funnels and cohorts, which should add a lot to what you can do with Insights.

As a company, Red Badger has always valued fast, continuous development. Where traditional BDD test processes have increasingly slowed us down, we hope that improving our instrumentation integration will improve our speed and quality overall.

23 Oct 2014

Badger Academy Week 8 – Frontend testing using WebdriverIO, Stubby and CucumberJS

by Tiago Azevedo

Over the past few weeks on Badger Time, we've had a steady workflow for the API where we followed the TDD principle of writing feature tests first and code later. It wasn't an issue to set that up in Rails, as the various gems already out there for Ruby (RSpec/FactoryGirl specifically) made it a breeze.

The frontend was a different beast altogether and required quite a lot more thought, which we finally gave it over the past week.

The problems and their eventual solutions

There were several problems which we struggled to solve initially. Firstly, we had to run a GhostDriver instance which would allow our testing suite to communicate with PhantomJS. We’d also have to run a Node server simultaneously which would serve the app in a test environment to the PhantomJS browser.

Doing this was a bit tricky: Gulp's asynchronous nature meant that running those background processes from within Gulp was a no-go. Depending on how quickly or slowly the server launched, some tests would pass or fail, as the server might not be up before the tests ran.

It was probably more effort than it was worth to find a workaround, so we simply added the processes as part of the container's boot sequence. As our containers were based on Phusion BaseImage, it was a case of adding simple init scripts to BaseImage's custom init process.

# Start GhostDriver (PhantomJS in WebDriver mode) in the background, logging to /tmp/phantom.log
start-stop-daemon --start --background --quiet --exec /bin/bash -- -c "/usr/bin/phantomjs --webdriver=8080 --remote-debugger-port=8081 --ignore-ssl-errors=true > /tmp/phantom.log"
# Serve the app to the PhantomJS browser in the test environment
start-stop-daemon --start --background --quiet --exec /bin/bash -- -c "node /data/server.js"

That was one catch out of the way. The next issue we faced was actually running the tests. Previously we took advantage of gulp-run to pipe our compiled spec files (we wrote the tests in LiveScript!) to the CucumberJS executable.

This was a bit overkill, and we ended up just using npm's script system to run the compile task and then run CucumberJS on the appropriate files. As a side-effect, we got really nice formatting on the tests, so we could see exactly what went wrong if something failed.

[Screenshot: nicely formatted CucumberJS test output]

Nice!

We had these tests running with the API endpoint set to a local Stubby mock API. Stubby's Node implementation gave us a programmatic API, which meant we could start, stop and modify the mock as our tests were running.

This allowed us to feed data from Gherkin (the Cucumber language) data tables into a function which would simply modify an endpoint with the supplied data. It removed our dependency on the real API for the frontend tests, which reduced our CircleCI build times from a staggering 15-20 minutes down to 2-3.
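Driving Stubby programmatically looks roughly like this minimal sketch (the endpoint and stub data are illustrative):

// Minimal sketch of stubbing an endpoint with Stubby's Node API.
// The endpoint and response body are illustrative.
var Stubby = require('stubby').Stubby;
var stubby = new Stubby();

stubby.start({
  stubs: 8882, // port the stubbed API listens on
  data: [{
    request: { url: '/api/resources', method: 'GET' },
    response: {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify([{ id: 1, name: 'Designer', cost: 350 }])
    }
  }]
}, function () {
  console.log('Mock API up - point the frontend at it and run the features');
});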

A look at WebdriverIO

Selenium WebDriver is somewhat of an elephant in the office at Red Badger. We all dislike it – even you, Pete, you just don't know it yet – but we put up with it. The API is just a bit rubbish and documentation is quite difficult to find. As somebody working out its usage from scratch, I can say my options were quite limited: spend hours sifting through Java documentation and hope it works the same in the JavaScript implementation, or go through endless amounts of user issues trying to find a solution which matched my own problem.

That's where WebdriverIO helped tremendously. It's a JavaScript wrapper around Selenium's confusing API and offers quite a few helpful additions of its own. Just having documentation – however incomplete it might be – was a godsend. At least the functions which aren't documented have a link to their source, so we can see what's going on and extrapolate from that.

How LiveScript facilitates the callback-based nature of CucumberJS

If you're familiar with the term 'callback hell' then you know how asynchronous code can be a real pain to deal with, as you end up with nested logic inside nested logic inside a browser action, all ending with a callback to the top level to pass (or fail) the test. Take this simple example of a browser action which types a phrase into an input on the screen – in JavaScript, we can immediately see why it quickly grows into something that isn't nice to deal with.
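A sketch of that kind of step definition, using the callback-style WebdriverIO and CucumberJS APIs of the time (the selectors and step text are illustrative):

// Sketch of a CucumberJS step definition with callback-style WebdriverIO.
// Selectors and step text are illustrative; `client` is the WebdriverIO
// instance created elsewhere in the suite.
module.exports = function () {
  this.When(/^I search for "([^"]*)"$/, function (phrase, callback) {
    client.setValue('#search-input', phrase, function (err) {
      if (err) { return callback(err); }
      client.click('#search-button', function (err) {
        if (err) { return callback(err); }
        client.getText('#results', function (err, text) {
          if (err) { return callback(err); }
          // ...assert on `text`, then hand control back to Cucumber
          callback();
        });
      });
    });
  });
};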

We take advantage of LiveScript's unnested callbacks (backcalls) to write code which is functionally the same as the example above, but reads and writes like synchronous code (much easier to handle).

Writing our tests is inherently easy due to the way Cucumber works, and in most cases we don't even need to write any code for new features, as we recycle logic from the more generic step definitions.

We’re excited to finally be able to adhere to BDD principles on our frontend. After all, the whole premise of Badger Academy isn’t to ship a finished product, but to bring our code quality and knowledge to a higher level.