Posts Tagged ‘Red Badger’

23
Jan
2015

January London React Meetup

by Roisi Proven


January 21st saw the first React meetup of the year, and we packed as many people as we could squeeze into the Red Badger office for another round of talks from teams around London who are using React within their organisations. This month we had Jof Arnold, VP of Product at timecounts.org, Gary Chambers, a Software Engineer at Football Radar, and Christian Moller Takle from Billetto.

Jof Arnold – from Backbone to React

Jof has been a long-term attendee of our React meetups, and we were excited to have him step forward and share his experiences with us.

After a painful experience using Backbone alone, Jof and the team at timecounts.org discovered React. There were elements of Backbone (isomorphism and Backbone models) that he and his team felt they needed to keep, so they worked to merge Backbone with React and get the best of both worlds. Although it’s not necessarily the “correct” way of using React, it is a way to adopt React alongside Backbone without rebuilding everything from scratch.

Gary Chambers – Go Reactive: Building UI with Rx and React

Gary came over from Football Radar to join our roster of external speakers for this month.

Football Radar have a very data-heavy system, and pushing hundreds of events a second straight into the DOM didn’t work out. After investigating alternatives to their existing solution, they too came across React, this time using it in conjunction with Rx. Rx is a library for async dataflow programming: in other words, a methodology for “data first” thinking, which was exactly what Football Radar needed. Gary then went on to discuss how you can translate Rx into Flux, to really get the most out of React.
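The heart of that “data first” approach can be sketched in a few lines of JavaScript. This is purely an illustration: a hand-rolled stream stands in for real Rx, and the match-event names are invented rather than anything from Football Radar’s actual code.

```javascript
// A minimal, dependency-free sketch of "data first" thinking: events flow
// through a stream, and the view is just a function of the latest value.
// Real Rx (e.g. RxJS) provides far richer operators than this toy stream.
function createStream() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    push(value) { subscribers.forEach(fn => fn(value)); },
  };
}

// In a React app this would trigger a re-render; here we just record frames.
const renderedFrames = [];
function render(state) {
  renderedFrames.push(`score: ${state.home}-${state.away}`);
}

const matchEvents = createStream();
matchEvents.subscribe(render);

matchEvents.push({ home: 1, away: 0 });
matchEvents.push({ home: 1, away: 1 });
```

The point is that the DOM never gets touched per event; the view simply reacts to whatever the latest data is.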

Christian Moller Takle – Whirlwind Tour of Functional Reactive Programming Using Elm-lang as a Base

Last but not least, we had another of our React meetup regulars. Christian built upon the other talks, taking the stage to give an engaging, and very technical, talk.

Elm is a functional programming language that is “not scary, just different.” Christian spoke about the Elm language and put a functional programming spin on the core tenets of React. Using Elm, Christian showed how, as in React, you can improve your UI development by modelling your UI as data, and discussed the shared concept of the virtual DOM.
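To give a rough flavour of what “modelling your UI as data” means, here is a sketch in JavaScript rather than Elm itself; the { tag, children } node shape is made up for illustration and is not Elm’s (or React’s) actual API.

```javascript
// "UI as data": the view is a pure function from application state to a
// plain tree of objects (a virtual DOM), which a library would later
// reconcile against the real DOM.
function node(tag, ...children) {
  return { tag, children };
}

function view(state) {
  return node('ul', ...state.items.map(item => node('li', item)));
}

const tree = view({ items: ['badger', 'react'] });
```

Because the view is just data, it can be inspected, tested and diffed without a browser in sight.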

 

Here at Red Badger, we love hearing about new ways of using React in conjunction with existing technologies, so this set of talks really gave us a lot of food for thought. I think several of our developers are particularly interested in delving further into Elm.

As ever, this Meetup was massively popular, and our humble Red Badger office now fills to bursting point at every meetup. As a result, we have been speaking with Facebook, who have kindly agreed to let us use their offices near Euston for the next React meetup. So expect bigger, better things, and lots more beer and pizza!

Sad you missed it? You can watch the full meetup over on YouTube.

8
Jan
2015

Consumers in the driver’s seat

by Olga Loyev

Empty car

I love math. So when it comes to analytics and stats, my brain smiles and my eyes sparkle, which is one of the things that attracted me to work at Red Badger. We are agile by nature and methodical in practice. We take an analytical approach to problem solving, and it is an integral part of the solutions we provide to our customers.

I’m a project manager for the Red Badger team at BSkyB, where we recently delivered their new Help Site along with the supporting custom CMS to improve users’ ability to find the right content and enhance their overall experience on the Sky Help site. As part of the new solution we also added custom event tracking (things like clicks on links and buttons, page views, etc.) that feeds stats into New Relic Insights, so we can closely monitor user behaviour to understand consumers’ requirements. The data we get back serves many purposes: from helping to prioritise work, to highlighting outages, to understanding customer demand and responding accordingly.
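As a rough illustration of the kind of event tracking involved, here is a sketch; the event and attribute names are invented, not the real Sky Help code, and the queue merely stands in for the real agent call.

```javascript
// Illustrative sketch of custom event tracking. In the browser this would
// forward to the New Relic agent, roughly:
//   newrelic.addPageAction('stillNeedHelp', { articleId: id });
// and could later be queried in Insights with NRQL along the lines of:
//   SELECT count(*) FROM PageAction WHERE actionName = 'stillNeedHelp' FACET articleId
const queuedEvents = [];

function trackEvent(actionName, attributes) {
  const event = Object.assign({ actionName }, attributes);
  queuedEvents.push(event); // stand-in for the real agent call
  return event;
}

trackEvent('stillNeedHelp', { articleId: 'article-42' });
```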

One example that stands out in my mind is what we saw recently on just another normal rainy day in London:

  • 80% of all unique visitors to a specific article were clicking on “still need help”, indicating they hadn’t found the information they were looking for

  • 10% of all unique clicks on “still need help” across ALL articles were coming from that specific article. In comparison, the article with the next-highest share of “still need help” clicks accounted for only 5%

  • The daily top 10 most common search terms, taken from the next action after clicking on “still need help”, all referred to a specific issue the users were looking for help with

When we notified the stakeholders, we quickly found out that the information regarding the issue needed to be approved by the legal department before it could be published on the help site. Understandable, as it’s important to make sure product information is correct and consistent across different channels.

Fortunately, as soon as demand for this information spiked, we were able to respond immediately to the changing customer requirement and adapt the content to meet it. We shared the stats above with the legal department as soon as the spike occurred. The data highlighted the urgency of the request, which was approved for publication within 15 minutes. The editors were then able to instantly publish the latest draft of the article, which they already had prepared (thanks, brand new CMS!), with the info customers were seeking. The published copy appeared on the live site 60 seconds later.

Within 12 hours, the percentage of all unique visitors to that article who still needed help after viewing the content had halved, dropping from 80% to 40%, with numbers levelling out to the expected average by the end of the next day. Shazam! The power of numbers used for good!

Stats and analytics are powerful tools and should be at the forefront of decision making, driving the delivery of features (and in this case, content) based on real-time user requirements. Put your consumers in the driver’s seat to best meet their online needs, and bon voyage!

 

18
Nov
2014

Silicon Milkroundabout – From both sides of the fence

by Roisi Proven

seedpacket

On Saturday May 10th, I nervously walked through the doors of the Old Truman Brewery for the first time. I’d heard good things about Silicon Milkroundabout, but had always been too nervous to give it a go myself. However, job dissatisfaction paired with the desire for a change drove me to finally sign up, and I was assigned a spot on the Saturday “Product” day.

I have to say as a first timer, the experience was a little overwhelming! The hall was already starting to fill up when I got there at 12pm, and there was a maze of stalls and stands to wade through. The SMR staff were very helpful, and the map of the hall I got on entrance made navigating the maze slightly less daunting.

I’d done my research before I got there on the day, and I had a little notebook of companies that I knew I needed to make time to speak to. There were 5 I felt I HAD to speak to, and a few that I was drawn to on the day, based on their stand or the company presence overall. At the top of my shortlist was Red Badger.

In May, RB had a small stand near the door, not in the middle of things but easy to find. I had to do a bit of awkward hovering to get some time with one of the founders, but when I did we had a short but interesting conversation. I took their card, filled in the form, and kicked off a process which led to me getting the job that I am very happy to have today.

Fast forward to November, and my Saturday at Silicon Milkroundabout looks a whole lot different. This time, I’m not a candidate; I’m the person that people are selling themselves to. A different sort of weird! The Red Badger stall looks different this time around too. Where before we had a small table, this time we had an award-winning shed.

badgershed

That awkward hovering I said I did? There was a lot of that going on. Remembering how daunting it was to approach a complete stranger and ask for a job, I did my best to hoover up the hoverers. I had a few really interesting, productive conversations during the day, but just as many were with people who simply wanted to compliment us on our stand or our selfie video. It was great to get some positive feedback for all of the team’s hard work in the run-up to the weekend.

The biggest difference was that, standing still, I was able to fully appreciate the sheer number of people who came through the doors, and the variety of roles they represented. The team at SMR have done a great job of keeping the calibre of candidates high, and it does seem like there is a candidate for almost everything the companies are looking for.

Here at Red Badger we’ll be combing through the CVs and contacts that we made over the weekend, and will hopefully be making contact with several potential new Badgers soon. For anyone who met us this time around, thanks for taking the time out to hang in our shed. For anyone who missed us, we’ll see you at SMR 9 next year!

27
Oct
2014

Improving Performance with New Relic APM and Insights

by Roisi Proven

In any kind of tech development, knowledge is power. When working on an ecommerce site, knowledge is essential.

The more information you have about your application, both during development and when it goes live, the more value you will be able to provide to your client and, in turn, to the client’s customers. In a development environment, it’s easy to provide yourself with a breadcrumb trail back to an issue, but when your code moves into a staging environment, the information you get can end up being a lot less useful. At one point, this was as useful as it got for us:

With no way to know what this “something” was, and after a few awkward problems where we very quickly reached a dead end, we made the decision to introduce New Relic APM into our workflow.

New Relic APM helps you monitor your application or website all the way down to the code level. We have been using this in conjunction with New Relic Insights, their Analytics platform.

With New Relic we have been able to track VPN downtime, monitor response times and get stack traces even when working in a production environment. So the above vague message becomes this:

This monitoring enables you to increase confidence in your product in a way that isn’t possible with simple manual or even automated testing.

In addition to the APM, we’ve also been working with New Relic Insights. It behaves similarly to Google Analytics, but its close ties to APM’s tracking and monitoring mean that the data is limited only by the hooks you create and the queries you can write in NRQL (New Relic’s flavour of SQL). It feels far meatier than GA, and you can also more easily track back-end issues like timeouts, translating them into graphical form with ease (if you’re into that sort of thing).

Being a new product, it is not without its pitfalls. In particular, NRQL can feel quite limited in its reach. A good example of this is the much-publicised addition of maths to NRQL; that a query language didn’t include maths in the first place felt a bit like an oversight. However, this has been remedied, and they have also introduced funnels and cohorts, which should add a lot to what you can do with Insights.

As a company, Red Badger has always valued fast, continuous development. Where traditional BDD test processes have increasingly slowed us down, we hope that improving our instrumentation integration will improve our speed and quality overall.

20
Oct
2014

London React October Meetup

by Chris Shepherd

Last week saw the fourth London React User Group and yet again the turnout was fantastic for such a young user group.

There were three talks this time. The first, “Learning to Think With Ember and React”, was by Jamie White, who runs the Ember London User Group. Red Badger's own Stuart Harris had previously given a well-received talk on React at the Ember User Group, so it was time for them to come and talk to us. Jamie talked about how it's possible to combine Ember with React, and it was interesting to see how flexible React is and how easily it fits into a variety of existing workflows.

Next up was Rob Knight with his talk, “Diff-ing Algorithms and React”. Rob covered something that I think a lot of us now take for granted with React: how it updates the DOM through the use of a virtual DOM and efficient algorithms.
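The principle Rob covered can be sketched as a toy diff in JavaScript: compare two virtual trees and emit a patch list instead of rebuilding the DOM. React’s real reconciler is far more sophisticated (keys, component types, O(n) heuristics), so treat this purely as an illustration; the node shape and patch types are invented.

```javascript
// Toy virtual-DOM diff: nodes are { tag, children } or plain strings.
// Returns a list of patches describing how to turn oldNode into newNode.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ type: 'ADD', path, node: newNode }];
  if (newNode === undefined) return [{ type: 'REMOVE', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ type: 'TEXT', path, text: newNode }];
  }
  if (oldNode.tag !== newNode.tag) {
    return [{ type: 'REPLACE', path, node: newNode }];
  }
  const length = Math.max(oldNode.children.length, newNode.children.length);
  const patches = [];
  for (let i = 0; i < length; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`));
  }
  return patches;
}

const before = { tag: 'ul', children: [{ tag: 'li', children: ['one'] }] };
const after = {
  tag: 'ul',
  children: [{ tag: 'li', children: ['one'] }, { tag: 'li', children: ['two'] }],
};
const patches = diff(before, after);
```

Only the second list item produces a patch; the unchanged subtree generates no DOM work at all, which is the whole trick.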

Lastly, Markus Kobler's talk, “React and the Importance of Isomorphic SPAs”, showed how to use React to avoid the poor SEO and slow loading times that are usually considered the pitfalls of building Single Page Applications.

If you missed the talks, you can watch them on YouTube below. To be informed about the next meetup, sign up here.

Hopefully we'll see you there next time!

21
Aug
2014

Computer Science students should work for a startup while at uni

by Albert Still

Don’t rely entirely on your grades to get you a job after uni. A CS degree will prove you’re intelligent and understand the theory, but that will only take you so far in an interview. We have to talk to potential employers about apps we’ve built, just as you would want a carpenter to show you pictures of his previous work before you hired him.

While studying CS at university I worked two days a week for Red Badger, even in my final year when I had a dissertation. Some of my classmates questioned whether it was a good idea, suggesting the time it took up could damage my grades. But it did the opposite: I got better grades because I learnt so much on the job. When you see solutions to real-life problems, it gives you a better understanding of the theory behind them. And it’s that theory you get tested on at university.

What I’ve been exposed to that I wasn’t in lectures

  • Using open source libraries and frameworks. There’s rarely a need to reinvent the wheel; millions of man-hours have been put into open source projects, which you can harness to make yourself a more productive developer.
  • GitHub is our bible for third-party libraries. Git branches and pull requests are the flow of production.
  • Ruby – the most concise, productive and syntactically readable language I’ve ever used. Unlike Java, which the majority of CS degrees teach, Ruby was designed to make the programmer’s work enjoyable and productive. Its inventor, Yukihiro Matsumoto, said in a Google tech talk: “I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language”. Ruby is loved by startups and consultancies because they need to roll out software quickly.
  • Ruby on Rails – built in Ruby, it’s the most starred web application framework on GitHub. It’s awesome and the community behind it is massive. If you want to have a play with it, I recommend its “Getting started” guide (don’t worry if you’ve never used Ruby before, just dive in; it’s so readable you’ll be surprised how much Ruby you’ll understand!).
  • Good habits – such as the DRY, YAGNI and KISS principles. The earlier you learn them the better!
  • Heroku – makes deploying a web app as easy as running one terminal command. Deploy your code to the Heroku servers via Git and it will return a URL where you can view your web app. Also, it’s free!
  • Responsive web applications are the future. Make one code base that looks great on mobiles, tablets and desktops. Twitter Bootstrap is the leading front-end framework; it’s also the most starred repo on GitHub.
  • JavaScript – the world’s JS-mad, and you should learn it even if you hate it, because it’s the only language the web browser knows! You’ll see that the majority of the most popular repositories on GitHub are JS. Even our desktop software is using web tech, such as the up-and-coming text editor Atom. If you want to learn JS, I recommend this free online book.
  • Facebook React – once-hard JS problems become effortless. It’s open source but developed and used by Facebook, so some seriously good engineers have built this awesome piece of kit.
  • Polyglot – Don’t just invest in one language, be open to learning about them all.
  • Testing – this was not covered in the core modules at my university, but Red Badger are big on it. Simply put, we write software to test the software we make. For example, we recently made an interactive registration form in React for a client. To test it we wrote Capybara integration tests, which load the form in a real browser and imitate a user clicking and typing into it. They rapidly fill in the form with valid and invalid data and make sure the correct notifications are shown to the user. This is very satisfying to watch, because you realise the time you’re saving compared to testing it manually.

Reasons for applying to a startup

  • They are small and have flat hierarchies, which means you will rub shoulders with talented and experienced individuals: for example, a visionary business leader or an awesome developer. Learn from them!
  • More responsibility.
  • They’re more likely to be using modern, front-line tech. Some corporates are stuck using legacy software!
  • If they become successful, you will jump a few rungs up the career ladder compared to working for a corporate.
  • Share options – own a slice of your company!
  • There are lots of them and they’re hiring!

Where to find startups

A good list of hiring startups in London is maintained by the increasingly successful startup job fair Silicon Milk Roundabout. Red Badger are also currently launching Badger Academy: they’re paying students to learn web tech! This is extremely awesome when you consider UK universities charge £9,000 a year. If you’re interested in applying, email jobs@red-badger.com.

Best,

Albert

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

C.A.R. Hoare, 1980 ACM Turing Award Lecture

8
Aug
2013

Faster, longer and more responsive

by Stephen Fulljames

blog-head  

You know how it is – you set out to make “a few content changes” and end up with a whole new web site. We wanted to make a bit of an adjustment to the version 2 Red Badger site, launched in mid-2012, to give a better feel for who we are and what we can offer. In the end, it made more sense to go the whole way and create version 3. So here we are.

The fundamentals

In the previous version of the site we used WordPress to manage most content, but rather than use it to render every page we instead pulled content through its JSON API to feed a more flexible front-end built on Node and Express. This was fine in theory, but in practice performance wasn’t really as good as we’d hoped – even with some aggressive caching built in – and any kind of content beyond blog posts turned into a tangle of custom fields and plugins. WordPress specialists would probably have an answer for all the problems we faced, but we felt it was time to strike out and try something different.

This also coincided with an interest in evaluating Docpad, a semi-static site generator built on Node. If you’re familiar with Jekyll on Ruby, or Hammer for Mac, it’s kind of like that, but as well as building purely static sites it can also run with an Express server to allow some dynamic behaviour where needed. We hacked on it for a couple of days, I rebuilt my personal site with it, and then, liking what we saw, we decided to proceed.

The principle of Docpad and other site generators is pretty simple. You have a folder of documents written in any supported format (we’re using Jade and Markdown), reflecting the desired URL structure of the site, a folder of templates, and a folder of static assets. The generator runs through the documents, compiling where needed, applying the templates, and saves the results into an output directory along with a copy of the assets. Logic in the templates is responsible for constructing navigation and outputting collections of data – just as a dynamically rendered site would – but it only does it once. The theory being: your site only changes when you act to change the content, so why serve anything other than flat files between those changes?
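That generation loop, stripped to its essence, might look something like the sketch below. Docpad’s real pipeline (plugins, events, asset copying) is of course much richer; the file names and template here are made up for illustration.

```javascript
// Minimal static-site generation: walk the documents, compile each through
// a template, and collect flat output keyed by its final path.
function generate(documents, template) {
  const output = {};
  for (const [path, doc] of Object.entries(documents)) {
    // e.g. about.md -> about.html, with the template applied once, up front
    output[path.replace(/\.md$/, '.html')] = template(doc);
  }
  return output;
}

const site = generate(
  { 'about.md': { title: 'About', body: 'We are Red Badger.' } },
  doc => `<h1>${doc.title}</h1><p>${doc.body}</p>`
);
```

Everything in `site` is then just flat files: nothing needs to run per request.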

Docpad is light in its core form, but there’s a good plugin ecosystem to extend it. We’re using plugins to compile SASS, Markdown, Jade and CoffeeScript, fetch RSS from Flickr using Feedr, and serve pages with clean URLs rather than .html extensions (this is where the Express part comes in handy). We’ve also created a few of our own. The main one allows us to curate “featured” items in the metadata of any page: if you look on the homepage, for example, all the content items below the masthead are manually set and can be changed and reordered simply by altering a list of relative URLs. We’re also using custom plugins to pull in Tweets and posts from the old WordPress blog, and Docpad’s event system makes it easy to hook this data into the appropriate render step in the site’s generation.
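The featured-items idea can be illustrated with a small sketch; the `meta.featured` field name and page shapes here are ours for illustration, not Docpad’s actual schema.

```javascript
// Resolve a page's curated list of relative URLs to full content items,
// so reordering the list in metadata reorders the homepage.
function resolveFeatured(page, allPages) {
  return (page.meta.featured || [])
    .map(url => allPages[url])
    .filter(Boolean); // silently skip any URL that no longer resolves
}

const pages = {
  '/blog/react-meetup': { title: 'React Meetup', meta: {} },
  '/work/sky-help': { title: 'Sky Help', meta: {} },
};
const home = { meta: { featured: ['/work/sky-help', '/blog/react-meetup'] } };
const featured = resolveFeatured(home, pages);
```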

For the time being we’re still using WP for our blog; recent posts are imported by the Docpad site on server start using a custom plugin, so we can feature them on the home and other pages and list them on our team profiles. In the longer term we’re planning to move the blog itself onto Docpad as well, but that will need careful consideration to make sure all the functionality currently provided by WordPress plugins can be maintained (or improved!).

We’re responsive

The previous iteration of the site was built using the Bootstrap CSS framework. My own views on that notwithstanding, some of the design changes we were planning meant it made sense to look again at our approach to the CSS.

As the site design created by Sari does not inherit from Bootstrap’s design opinions, only a small subset of it – mainly the grid – was relevant, and structurally, with full-width background patterns on each section of a page, it was easier to start over.

That’s not to say we’ve rejected it completely. In the new grid system we’re still using some of Bootstrap’s thinking, along with patterns adopted from Harry Roberts’ Inuit.css and Chris Coyier’s thoughts on minimal grids. So we haven’t really done anything earth-shakingly innovative, but we have found it to be a solid, responsive grid that – most importantly – we completely understand and can use again.

site-grid

1 – ‘section’ element. Inherent 100% width. Top/bottom padding set, optional background texture applied by class, otherwise unstyled. The paper-tear effect is added with the :after pseudo-element.
2 – ‘div.container’. Width set to the site default (920px) with left/right margin auto for centering. Any full-width headings are placed directly inside this element.
3 – ‘div.row’. Unstyled except for a negative left margin of the default gutter width (40px) to compensate for grid element margins; :before and :after pseudo-elements set table display and clear, as with Bootstrap.
4 – ‘div.grid’. Floated left; all .grid elements have a left margin of the default gutter width (40px). Default width 100%, with override classes for column width; in the case above we use ‘one-third’. Background texture classes can also be applied here.

We’ve switched from our regular LESS to SASS for this version of the site, again as a way to explore what it can do. With Docpad it’s as easy as swapping plugins, and all the compilation is handled for you during site generation. Needless to say, the site is responsive, with the grid gradually collapsing down to a single-column view and elements such as the services carousels and case study quotes shifting layout subtly to suit smaller viewports. And to make sure corporate decision makers get a good experience, we’ve also tested and fixed back to IE8.

The final touch is that we chose to use a custom icon font for the various calls to action and other decorations around the site. We used IcoMoon to generate it from our own vector artwork, and this has saved an enormous amount of time because you can colour the icons with CSS rather than cutting and spriting images for all the variants you might need. The adoption of icon fonts has had a few challenges, chief among them accessibility (screen readers tended to attempt to announce them), but with the technique of encoding icons as symbols in the Private Use Area of the Unicode range, this problem is now largely overcome.

icon-font

There were still a few things we found in the process of creating our icon font. It’s not advisable to do anything more complicated than simply use them as icons, for example, as it’s very hard to align them perfectly with regular fonts without browser hacks; each browser has a slightly different way of calculating and rendering line height. Also, they’re not supported by Internet Explorer on Windows Phone 7 (although the symbols it shows instead are quite cute).

What’s next

Of course, a consultancy’s own web site is never really finished, as we’re always looking to show our capabilities and keep up with the latest thinking in a world of fast-moving technological change. And the cobbler’s boots syndrome also applies; it’s hard to find time to look after your own when you’re always busy helping clients deliver their transformative projects.

But we feel we’ve achieved a stable, maintainable new site, and it’s time to put it out there. It feels much faster than the old one, and we’ve delivered the flexibility in content layout we set out to achieve. We wanted to make sure we were demonstrating our own best practices, and while a good deal of that is under the hood, we’d be happy to talk it through in more detail if you’d like to get in touch.

There are tweaks to come, naturally, based on things we’ve already spotted and feedback we’ll no doubt get. One big task will be to make the site more editable for those of a non-technical nature. It’s actually not too difficult, as most content pages are written in Markdown, but a nicer UI exposing that, rather than the intricacies of Git pushes and deployments, feels like a good next step. No doubt we’ll be blogging again to talk about that when the time comes.

15
Feb
2013

Automating myself – part 1

by Stephen Fulljames

photo-1

Having gone through the onerous task of selecting what type of laptop I wanted at Red Badger (13″ MacBook Pro with 16GB RAM and a 512GB SSD) and waiting impatiently for it to be delivered, it was time to get it ready to use.

My previous personal Mac, a late 2010 11″ Air, has done sterling service – even just about managing when asked to run a couple of virtual machines concurrently alongside a Node.js environment. But it was starting to creak a bit and the poor thing’s SSD was chock full of all the random crap that had been installed to support various projects. So as well as the stuff I regularly use there were also leftover bits and pieces of Java, Python and who knows what else.

At Red Badger we’ve largely adopted Vagrant and Chef as a great way of provisioning development environments without imposing on the personal space of your own laptop. Clone a repo from GitHub, run ‘vagrant up’ in the project directory, and the Vagrant script installs and configures a VM for you; ‘vagrant ssh’ and there you are with a fresh, clean dev environment that can be reconfigured, reloaded and chucked away as often as you like. Joe is going to be writing more about this process soon.

So I decided I would challenge myself to install the bare minimum of tooling to get up and running, with all the stuff specific to our current project residing in a virtual machine. This should also, in theory, make a nice solid base for general front-end development to current practices. And hopefully leave 400GB-odd of disc space to fill up with bad music and animated cat gifs.

The clock was ticking…

  1. Google Chrome – goes without saying really (3 mins)
  2. Node.js – nice and easy these days with a pre-packaged installer available for Mac, Windows and Linux (5 mins)
  3. Useful Node global modules – First trip into the terminal to ‘sudo npm install -g …’ for coffee-script, grunt, express, less, nodemon and jamjs. This is by no means an exhaustive list but gives me all the Node-based essentials to build and run projects. (5 mins)
  4. Xcode Command Line Tools – Rather than the enormous heft of all of Xcode, the command line tools are a relatively slimline 122MB and give you the bits you need (as a web rather than OS developer) to compile the source code of stuff like wget. An Apple developer account is needed for this, so I can’t give a direct link, unfortunately. (15 mins)
  5. Git – Last time I did this I’m sure there was a fair amount of messing around but there is now an installer package for OS X (5 mins)
  6. Virtual Box – This is where we start getting in to the real time saving stuff. Virtual Box is a free VM platform which can be scripted, essential for the next part (10 mins)
  7. Vagrant – This is the key part, as explained in the introduction. Once Vagrant is up and running, a pre-configured dev environment can be there in just a few minutes (3 mins)
  8. GitHub ssh key config – Back to the command line. To be able to clone our project from its GitHub repo I had to remember how to do this, and the documentation on the GitHub site doesn’t have the best signposting. (5 mins)
  9. Clone repo and post-install tasks – Get the code down, then run ‘npm install’ and ‘jam install’ in the project directory to fetch server and client dependencies. Run ‘grunt && grunt watch’ to build CoffeeScript, Less and Jade source into application code and then watch for file changes. (2 mins)
  10. Vagrant up – With the repo cloned and ready, the ‘vagrant up’ command downloads the specified VM image, then boots it and installs the relevant software; in this case, Node and MongoDB. At a touch over 300MB to download (albeit a one-off if you use the same image on subsequent projects), this is the longest part of the whole process (20 mins).
  11. Vagrant ssh – Log into the new virtual machine, change to the /vagrant directory which mounts all your locally held app code, run ‘node app/app.js’. Hit Vagrant’s IP address in Chrome (see step 1) and you’re there (1 min).

So there we go. By my reckoning, 74 minutes from logging in to the laptop to being able to get back to productivity. And because the VM environment is identical to the one on my old Mac, I could literally carry on from my last Git commit. I did those steps above sequentially to time them, but at the same time I was installing the Dropbox client for project assets and Sublime Text for something to code in.

Obviously on a new project you would have the added overhead of configuring and testing your Vagrant scripts, but this only has to be done once and then when your team scales everyone can get to parity in a few minutes rather than the hours it might once have taken.

Vagrant really is one of those tools you wish you’d known about (or that had existed) a long time ago, and it feels like a step change for quick, painless and agile development. This is also the first project I’ve really used Grunt on, and its role in automation and making your coding life easier has already been widely celebrated. Hence the title of this post: this feels like the first step in a general refresh of the way I do things, and hopefully I’ll be writing more about this journey soon.

25
Jan
2013

Back in the sett

by Stephen Fulljames

Hello, I’m Stephen. I’m a front-end web developer, focusing these days mainly on JavaScript and Node, but with a wide range of experience in most of the technologies that make stuff look good in a web browser.

I’ve been working with Red Badger as a freelance collaborator on and off for almost two years, in fact since it was just Stu, David and Cain at the Ravensbourne incubator programme, but we’ve finally decided to make it official and I’m going to be joining the team here permanently.

Switching back to a full time role after a stint freelancing has been a tough decision, but the strength of the team and the technologies in use here make it an exciting place to be. With a background in interface development I’ve had exposure at the integration stage to the various languages – PHP, C#, Java, and so on – that make the pages I build actually work, without really gaining a deep understanding of them. However, now that I can write server-side code in Javascript as well, with Node, it feels like I can really build on my existing skills to do new and interesting things.

With other developers in the team adopting Node from the other direction, from their excellent C# and infrastructure experience, it feels like we can bring the client and server parts of web development closer together – whether in shared code, better build processes or improved performance. On the recent BBC Connected Studios pilot project, Joe, David and I were all able to contribute not only ideas but also implementation across the whole application. There are still some problems to solve and the best ways of working will settle down over time, but as a company we want to contribute to the community and share what we learn so there’ll be more blogging on these subjects in the near future.

Now if you’ll excuse me, I need to go and get used to being an employee again… 

12
Sep
2012

How to build and launch a rocket in one night

by Sari Griffiths

We recently launched a competition, the ‘Windows Azure Media Challenge’, with Fluxx. The prize is an event called RapidStart. At the 11th hour* before the competition was introduced, I was asked to make a flyer to support it.

It ended up as a stop motion animated Lego rocket. And this is a blog post about it. If you’re interested in the competition, take a look at Fluxx’s Paul Dawson’s blog.

Rocket

*It was the 11th hour because I had forgotten that I had Paralympic tickets on the day I’d planned to work on this. My fault.

What’s RapidStart?

It’s a few extremely intense days of creative workshop and rapid prototyping. We all gather, come up with loads of ideas, and produce something tangible by the end of the event. We have provided UX/design and dev support for a number of these events organised by Fluxx.

It’s a bit of a whirlwind experience even for those of us who have taken part in a number of these events. A new set of people, a new set of problems to solve and a new set of challenges every time. But it’s always a very creative and productive few days, and it’s fun.

Rocket & launch pad

My task was to illustrate the nature of these events: quick prototyping in a way everyone understands. I was trying to explain this to my husband (a graphic designer and not-very-techie), and I found myself talking about sketches, origami and Lego.

Wouldn’t it be great to build something bigger with these bits? Something that starts things off. Something that launches stuff. Did you say launch? What about a rocket launch pad?

As it was a flyer accompanying the presentation, I had a vague idea of creating an animation as well as having a few static visuals ready for the flyer. A Lego rocket launch pad was perfect for it. I could take photos to make a stop motion animation!

Luckily both Paul (Fluxx) and David (Red Badger), who were creating the presentation, loved the idea, and it got the go-ahead immediately.

Stop motion fun

I started the whole thing after my little boy (2.5 years old, and way too eager to help) went to bed.

As I only had a night (and was quite keen on the idea of sleep), I did quite a bit of planning before I started. I wrote down a list of shots for single pages to be used in the presentation. I downloaded a video of a Saturn V rocket launch and created a mock-up version so that I knew exactly how many frames were required and the composition of each frame. Then I worked out the most efficient order to shoot them in.

I hung a sheet of paper over the kitchen table to create a backdrop. I set up a tripod with my not-so-brilliant SLR and a bunch of masking tape to keep everything steady. The lighting wasn’t brilliant, but there was not much I could do about it. And luckily, it looked quite stylish when I took a test shot. I was ready to go.

First, I took a series of shots for the launch. Looking at the edited Saturn V video, I added some smoke using white and yellow pieces. Then I took a series of shots of the rocket and launch pad being built – backwards, removing one piece per shot.

To finish the photoshoot, I took a few more shots for other uses, like a bunch of Lego pieces to represent the little bits of ideas at the start of an event.

Rapid idea2

And lots of different ideas.

Rapid idea1

All the images for the animations were put together in Flash for the presentation. I’m not sure it was the best solution, but it did the job. Phew!

And here is the final animation, which I stitched together later in iMovie. (In the presentation, it was split into a building part and a launch part.)

Rapid Start Rocket Build & Launch from Sari Griffiths on Vimeo.

It was good fun making this. And yes, I did get some sleep too!