Posts Tagged ‘javascript’

15
May
2015

Join me as a new voice in the tech community

by Winna Bridgewater


A glimpse at the weekend workshop I led. Photo by Alessia D’Urso, Girls In Tech UK Board Member.


London has a staggering number of events for coders. It’s awesome.

The only thing is, I’ve gone to a lot of events, and I can count on one hand the number of times a technical discussion—about a tool, an industry standard, or the craft of code—was led by a woman. At the majority of the events I attend, I am one of only two or three women.

I decided that this year, if I want to see more women presenters, I need to step up. I need to talk at an event.

Getting Inspiration

As if the universe was listening, two events came up this spring that got me more excited and less nervous about going for it.

First, Ladies Who Code London ran a workshop on public speaking, and it was hosted here at Red Badger. Trisha Gee and Mazz Mosley were wonderful. Their message was simple: it gets easier each time you do it, and there are ways to work through nervousness. They also emphasized that the audience is rooting for you–everyone in the room wants to see you succeed. Pretty nice, right?

Then I had the chance to attend the Women Techmakers Summit, an evening celebrating International Women’s Day that was organized by Google Women Techmakers and Women Who Code London. There was a series of speakers and panelists, and every presenter had a powerful message. The speaker whose message stayed with me most was the keynote, Margaret Hollendoner.

Margaret said she sometimes makes her career sound like it was all “right place at the right time” luck. But she told us it wasn’t that simple. Every opportunity was lucky in one sense, but the opportunity wouldn’t be there without her hard work. She also emphasized that deciding to say “yes” required confidence and bravery.

Margaret’s presentation gave me another nudge: get past my fear and present at an event.

Saying Yes


Only two days after the Summit, I got an email from Lora Schellenberg about Spring Into Code, a weekend workshop on Web Development offered by GeekGirlMeetup and Girls In Tech. Lora asked if I was available to teach–the original instructor had to back out, and they were looking for a replacement.

It sounded like my lucky chance, so I agreed.

Then I got all the details. I was going to be the only teacher for 12 hours of instruction over two days. I’d be teaching at Twitter headquarters to an audience of 100 people.

I felt pretty panicked, so I knew it was time to make some lists.

Why I should not do the workshop
  1. I’m not an expert in web development.
  2. I’ve only been doing this stuff professionally for a year and a half.
  3. I won’t be able to answer all their questions.
  4. 12 hours is a long time.
  5. 100 people is a lot of people.
Why I should do the workshop
  1. I’m not an expert in web development. I still spend most days learning new things. I know what it’s like to feel confused and lost. And I know how to recognize and celebrate small triumphs.
  2. I did set that personal goal.
  3. Those nice ladies did tell me the audience will root for me.
  4. That other nice lady did say you need to take advantage of luck that comes your way.
  5. If I’m going to teach 100 people for 12 hours, this is the ideal audience. Eager learners who, by choice, agree to a weekend in front of a computer taking in as much as possible.

I decided to go for it—butterflies, sweaty palms and all.

There are so many things I could focus on under the umbrella of Introduction to Web Development. I decided my main goals would be:

  • Make techy code stuff seem less scary.
  • Make people feel ok about asking questions.

Saturday morning arrived, and I had a rough start. I spent the first session working out how to use the mic and the two screens floating on either side of me. My notes weren’t loading like I hoped. The Internet was down. My demo wasn’t working even though it worked mere hours before. I was shaking.

After the first demo fell flat on its face, I knew I needed to stop. I took a few minutes to get everything running properly. I got water. I took some deep breaths. Those minutes felt like ages, but it was worth it. When I started up again, stuff finally started working.

The first day flew by. A few folks came by during breaks to say they were enjoying themselves. At the end of the day, lots of people came by to say thanks. Were my silly jokes working? Did my missteps when typing—forgetting a closing bracket, leaving off a semicolon, incorrectly specifying a source path—help people understand that breaking and fixing things is what this job is all about? During the second day, people from all over the room were asking questions. Tables were helping each other debug and understand what was going on. Breaks came and went with people staying at their seats to try things out. I couldn’t have hoped for more.

Reflections

I have so many ideas about what I’d change if I could do it again. I missed some concepts. I glossed through others. But I did it, and I had an amazing time.

If you’re tempted to give a talk or run a workshop, please go for it. It’s scary but great, and you have a wonderful community rooting for you.

4
Mar
2015

React Native – The Killer Feature that Nobody Talks About

by Robbie McCorkell

React Native logo
 
At the end of January I was lucky enough to go to React conf at Facebook HQ in Menlo Park. This was my first tech conference, and it was a great and inspiring experience for me. The talks were excellent and I recommend everybody check out the videos, but the talks that really stole the show were the ones on React Native.
 
React Native allows developers to build real native applications using JavaScript and React, not the web-wrapper applications we see all too commonly. React simply takes charge of the view controllers and programmatically generates native views using JavaScript. This means you can have all the speed and power of a native application, with the ease of development that comes with React.

Playing with React Native

This is a really exciting development for the native app world, and gives a new perspective on what native app development could be. I’d previously tried to learn iOS development a couple of times, first with Objective-C, and later with the introduction of Swift. Whilst I think Swift is a massive improvement in iOS development, I still ended up getting bored and life got in the way of learning this new system. So initially, the idea of using my current skill set in React web development to build truly native apps was extremely enticing (and still is).
 
I generally believe that as a developer you should pull your finger out and learn the language that suits the job, but in this instance React Native seemed to offer more than just an easy way into iOS development. It offered a simple and fast way to build interfaces and to manage application logic, and the live reloading of an application in the simulator without recompiling blew my mind.

Luckily for me, conference attendees were given access to a private repo with the React Native source code inside, so I began playing as soon as I got back to my hotel room. Within 5 minutes I was modifying one of the provided examples with ease, without any extra iOS development knowledge, and I was hooked.

An addendum

Since then I’ve been leveraging my early access to talk publicly about React Native, to a great reception. It’s been fascinating discussing this with people in the community, because I hear very little scepticism and a lot of excitement from web developers and native developers alike (at least those that come to a React meetup).

However the more I talked about it the more I realised the message I was conveying in my presentations was not quite right. One of the major themes I focused on was the fact javascript developers like myself can now easily get into the native world, and that companies only need to hire for one skill set to build and maintain their entire suite of applications. 

This is still a hugely important advantage, but it isn’t the major benefit and doesn’t highlight what React Native offers over competing frameworks. It also doesn’t perfectly align with my view that looking for a tool or framework just to reduce the need for learning another language is lazy thinking. There’s more to React Native than this.

React Native’s advantage

React Native’s biggest feature is React.

This may seem a bit obvious (the clue is in the title) but let me explain. When I first looked at React I, like most people, thought it was insane. It takes such a different approach to web development that it gives many people an immediate repulsive reaction. But of course the more I used it, the more I realised I could never go back to building web applications (or any front-end app, for that matter) any other way. The patterns React provides are an extremely powerful way of building applications.

If you haven’t used React much, it might help to know that React lets you declaratively define what your view should look like given some input data. A React component is passed the properties it requires to render a view, and as a programmer you simply define the structure of the view and where that data should sit. In doing this you’ve already done half of the work in building your application, because if a component or any of its parents changes its data (in the form of state), React will simply re-render the affected components given the new data.

No specific data binding. No event management. No micro managing the view. Just change the data and watch React recalculate what your view should look like. React will then use its diffing algorithm to calculate the minimum possible DOM manipulations it can do to achieve the desired result.

The second half of your application structure is of course user interaction. Patterns and tools like Facebook’s Flux and Relay help with this, but essentially these are just ways in which you can modify the data in your application in a neat and scalable manner. The application still simply recalculates the structure of the view once the data has changed.
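To make the pattern concrete, here is a minimal sketch of a counter component in the React.createClass style of the time (the component and its names are mine, for illustration, not from the talks):

var Counter = React.createClass({
  getInitialState: function () {
    return { count: 0 };
  },
  handleClick: function () {
    // Don't touch the DOM – just change the data.
    this.setState({ count: this.state.count + 1 });
  },
  render: function () {
    // Declare what the view looks like for the current state;
    // React re-renders and diffs whenever that state changes.
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.count} times
      </button>
    );
  }
});

React.render(<Counter />, document.body);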

React really shines when you start to scale an application, because the complexity of your application doesn't have to increase too much. This is of course quite hard to demonstrate in a blog post, but I can give you an example of a simple little app written in Angular and React that I adapted from Pete Hunt, one of React's creators.

Angular:

See the Pen RNBVZz by Robbie (@robbiemccorkell) on CodePen.

React:

See the Pen VYBbyN by Robbie (@robbiemccorkell) on CodePen.

You can see in the code above that the React implementation is pretty short, even with the markup defined in JavaScript. This is mostly because of the lack of linking functions connecting the code to the markup. React will just figure this out by itself. With all of this re-rendering of the markup every time something changes, you would think React would be quite slow, but it's not. For a demonstration of how React performs in comparison to other frameworks, check out Ryan Florence's presentation from React conf.

This is an extremely simple and powerful way to build front-end applications. It combines the best of both worlds in a simple and easy interface for the programmer, and a performant experience for the user. It’s for this reason more than any other that React Native is an exciting new tool in native app development. It changes the way a programmer thinks in building the front-end of their app.

A sample React Native app

Tappy Button screenshot

In the talk I mentioned above I demonstrated some of my ground-breaking research in video game technology with a game I created that I call Tappy Button. The objective of this game is to tap the button in the middle to increase your score.

The sample code below defines the full application seen in the screenshot, including view structure, styling and application logic. What you should notice here is that the code is extremely unremarkable. It’s the same kind of React we know and love, simply applied to a native app context, and that is what’s so remarkable about it. The only differences to traditional React web development are the element names [1] and the inline styling [2].
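The embedded sample hasn’t survived here, but a sketch of what such an app might look like in the React Native of the time gives the flavour (component, style and registration names are illustrative, not the original code):

var React = require('react-native');
var { AppRegistry, StyleSheet, Text, TouchableHighlight, View } = React;

var TappyButton = React.createClass({
  getInitialState: function () {
    return { score: 0 };
  },
  render: function () {
    return (
      <View style={styles.container}>
        <Text>Score: {this.state.score}</Text>
        <TouchableHighlight
          style={styles.button}
          onPress={() => this.setState({ score: this.state.score + 1 })}>
          <Text>Tap</Text>
        </TouchableHighlight>
      </View>
    );
  }
});

var styles = StyleSheet.create({
  // Flexbox-style layout, written as JavaScript objects.
  container: { flex: 1, justifyContent: 'center', alignItems: 'center' },
  button: { padding: 20, borderRadius: 10, backgroundColor: '#eeeeee' }
});

AppRegistry.registerComponent('TappyButton', function () { return TappyButton; });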

Building the same application in Xcode using Swift requires a similarly small amount of code, but it does require the developer to perform a process of clicking and dragging to create elements, constraints and method stubs in the code which quickly becomes tedious. As the application becomes more complex, the developer must manage collections of view controllers that manually update the view when required. But in React the view is declared once and all the developer must do is make sure the data inside each component changes when necessary. 

Other frameworks have crossed the JS-native divide in a similar way to React. NativeScript neatly gives developers direct access to native APIs in JavaScript [3], and the creators of Titanium have been very vocal about the fact that they have provided similar JavaScript bridging and CSS styling for native apps for years. But all of the articles I’ve read that compare these frameworks are missing the biggest differentiator between React Native and the others, and that is React itself.

When discussing React Native, the ability to write native apps in javascript only really deserves a cursory mention in the discussion. What's really important here is that we can build native applications using the power of React.

React as the view engine

The important things in the future of React won't be more features, add-ons or utility libraries. Yes some improvements, optimisations and structural changes will be built in down the road, but I don't think Facebook want to overcomplicate something that works so brilliantly and simply already. 

We already have React being applied to different areas with React Native, and at React conf Netflix announced they were taking a similar approach, even applying React to embedded devices like TVs using their own custom-built rendering engine. We've also heard that Flipboard have swapped out the DOM in favour of canvas elements in their latest web app, with impressive results.

In the future we are going to see React applied to many different areas of front-end development by simply swapping the DOM and browser out for another view structure and rendering engine, whether that be mobile and desktop native elements, canvas, embedded devices, or who knows what. This will allow developers to use the power of React and its development patterns in any environment.  

Writing native applications for our phones is just the beginning, and it’s a direction I can stand behind. The biggest benefit of react native isn’t javascript. It’s React.

 

  1. The two base elements we can work with are 'View' and 'Text', which act as block and inline elements respectively. Coming from the world of the DOM, we can simply translate 'div' to 'View' and 'span' to 'Text' and we’re basically good to go. Any other elements, like 'TouchableHighlight', are utility components provided by React Native’s component library.

  2. Facebook have provided their own CSS-based styling interpretation, including their own clever implementation of Flexbox. We now write our styles as JavaScript objects and apply them inline to our view elements. Currently styles don’t seem to cascade, which is both an advantage and a disadvantage depending on your use case. But the interesting thing about applying styles this way is that you have the full power of JavaScript at your fingertips. With a bit of forethought you could conceivably come up with clever systems to share global styles, and apply them to elements automatically in your application.

  3. I think NativeScript could be an interesting way to build React Native bridges to native APIs, and we'll be experimenting with this in the future. However I’m still sceptical as to whether the overhead is worth it, and maybe if we want to build native bridges we should just learn the bloody language!

Red Badger are hiring. Come join us

11
Mar
2014

Componentify it

by Alex Savin


Original image by Jonathan Kos-Read, licensed with Creative Commons BY-ND 2.0

One of the most amazing things about web development is the scale of wheel reinvention. Pretty much every developer at some point decides to conquer a common problem by rewriting it from scratch. Reusing existing solutions is often difficult – they are likely to be tied to internal styles, libraries, and build processes. This creates a nurturing ground where the same features are implemented many, many times, not only because developers think they can do better, but also because it takes too much effort to reuse an existing solution without bringing in all of its dependencies.

This blog post is about the Component approach. Components are designed to have as few dependencies as possible, to rely on common web technologies, and to be easily reusable and extendable. With the component approach it’s trivial to unplug parts of the app and replace them with other parts, or with an improved version of the same component. There are certain principles to consider when releasing new components, since Component itself doesn’t really limit you in any way. It is possible that later on we’ll have some monster components with a huge number of dependencies lurking around – but then again, using such components is completely up to you.

When an aspect of a component may be useful to others, consider writing that as a component as well. If it requires reasonable effort to write the code in the first place, chances are someone else could use it too.

Building better components

Make a new component

Component comes with a handy console command. Assuming that you have the component npm package installed, try running component help.

To create a new component you can run component create mynewcomponent. This command will ask you (quite) a few questions, and then will create a new folder with some basic files.

If you try compiling the newly generated component, you might get the following error:

$ component build
error : ENOENT, open '/Users/alex/red-badger/mytestcomponent/template.js'

This happens because a template.js file is specified in component.json but is not generated by component create. You can either create this file manually, or remove it from component.json. After that, component build should generate your first component under the /build folder.

Each component can contain any number of:

  • JS files, with a mandatory index.js or main.js file assigned to the “main” directive in component.json
  • CSS files
  • HTML files
  • Local and external dependencies on other components

All assets must be explicitly listed in component.json, otherwise they are ignored. This is another clever feature of Component, since the folder might contain temp files, npm packages, or generated files. When Component is instructed to pick only certain files, it will not only limit the build to those files but also fetch only them from GitHub (ignoring everything else you could have pushed there on purpose or by accident). This way, building components with dependencies becomes much faster.

A component doesn’t have to contain any JavaScript logic at all – it can simply export an HTML template or a CSS style:
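For example, a style-only component’s component.json might look something like this (a sketch; the names and files are illustrative):

{
  "name": "badge-style",
  "version": "0.0.1",
  "styles": ["badge.css"],
  "templates": ["badge.html"]
}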

You can componentify any bit of style, markup or functionality into a reusable package. As long as it makes sense.

Entry point

A fairly common pattern for a UI component is to ask for a DOM selector where the component would insert itself.

You can also use the component/dom component for basic DOM manipulations. The dom component is obviously not a jQuery replacement, and lacks a lot of jQuery functionality. You can also live well without the dom component and manipulate the DOM directly with document methods like document.getElementById and document.querySelectorAll. Adding and removing classes on elements can be slightly more challenging without any libraries, but there is a special component for that too – component/classes.

Here is an example of a component’s main.js using the dom component and appending itself to the target element:
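Something along these lines (a sketch assuming the dom component’s jQuery-like API; the markup and selector are illustrative):

var dom = require('dom');

module.exports = function (selector) {
  // Build the component's markup and append it to the target element.
  var el = dom('<div class="my-component">Hello!</div>');
  dom(selector).append(el);
  return el;
};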

In this case the dom component would be specified in the dependencies section of component.json:
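A sketch of the corresponding component.json:

{
  "name": "mycomponent",
  "version": "0.0.1",
  "main": "main.js",
  "scripts": ["main.js"],
  "dependencies": {
    "component/dom": "*"
  }
}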

Using CoffeeScript, LiveScript, LESS and JADE with components

If you are going to release a public component, it makes sense to package it with conventional JS/CSS/HTML files to reduce the number of dependencies and increase reusability. For internal components we’ve mostly used LiveScript, CoffeeScript, LESS styles and JADE templates. And to complete the LiveScript picture, we’ve repackaged the Prelude.ls library into a component and used it as a dependency.

Notable npm packages for component builds:

In Gruntfile you can configure builder to use extra components:

And in grunt.initConfig section:

Global and local components

When using components extensively in your app you might end up with lots of public and private components. All public components and their dependencies will be automatically installed into a /components folder. It’s a good idea to put your private components into a /local folder next to the /components folder. You can also have dependencies between your local components:
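For example, a local component’s component.json might declare its local dependencies like this (a sketch; the component names are illustrative):

{
  "name": "search-page",
  "scripts": ["index.js"],
  "local": ["search-results", "search-filters"]
}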

The syntax for local dependencies differs slightly from global dependencies – there is no version, and you just list all local components as an array.

In the root of both the /local and /components folders you will need a main component.json file, which tells the build process which components need to be included in the final main.js file:
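A minimal sketch of what such a main component.json might contain (names illustrative):

{
  "name": "main",
  "paths": ["local"],
  "local": ["app"]
}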

paths tells the builder to look in the /local folder in addition to the /components folder.

Later, in the HTML file, you simply include this generated main.js.

The content of this file is evaluated as the page loads, and then you can require and start using any of the components on your pages:
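A sketch of the usage (the component name is illustrative):

<script src="build/main.js"></script>
<script>
  // Any built component can now be required by name.
  var Pager = require('pager');
</script>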

Most of Red Badger’s components include an example HTML file with everything you need to start using that component on your pages.

Check for existing implementation

Before implementing your own component, it might be worth checking for an existing implementation. Most components are listed here:

https://github.com/component/component/wiki/Components

The component.io site is another slick way of browsing and searching for existing components. In many cases you’d want to start with an existing component and either use it as a dependency or fork it and add your own functionality. The standard component/* components are an especially easy starting point for extending.

Conclusion

The component approach requires some extra effort. You have to abstract and package bits of UI, make them fairly independent, and think about possible reuse cases. You can also ignore the component-creation stage completely and just use existing components. We chose to componentify most of our recent World Risk Review project, and it proved to be a brilliant decision. Instead of repetitive blocks of styles, markup and logic, we managed to put most of the front-end elements into components and reuse them when needed. Some of the components were also released as open source, and we hope to see even more useful components released in the future!

4
Mar
2014

Functional refactoring with LiveScript

by Alex Savin

Original image by Dennis Jarvis. Used under Creative Commons BY-SA license.

LiveScript is a functional language that compiles to JavaScript. You could say it’s sort of like CoffeeScript, but in fact it’s so much better. This post features a hands-on example of refactoring JavaScript-like code using the power of LiveScript syntax, combined with the Prelude.ls extras.

Here is a function for processing an array of tags into an object for a D3.js visualisation component. On input you have an array like ['tag1', 'tag2', 'tag2', 'tag2', 'tag3', ... ]. The function selects the 10 most popular tags and constructs a D3.js-compatible object.

However when I showed this code to Viktor, he was quick to point out that LiveScript can do better. After another couple of minutes he produced this:
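Roughly, reconstructed from the walkthrough that follows:

tags
|> group-by (-> it)
|> obj-to-pairs
|> map (-> [it[0], it[1].length])
|> sort-by (.1)
|> reverse
|> take 10
|> map (-> {name: it[0], size: it[1]})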

8 LOC vs. 21 LOC. The beauty of LiveScript is that it is very readable: you can figure out what’s going on just by reading the code. The refactored version also compiles into neat(er)-looking JS.

What’s going on here?

|> is the LiveScript pipe operator. You take the result of the previous operation and pass it on to the next one. We are effectively processing a single input, so it is piping all the way.

group-by (-> it) — using a Prelude.ls function to create an index of the tags array. This will create an object which will look like this: {'tag1': ['tag1'], 'tag2': ['tag2', 'tag2', 'tag2'], ...}. We can see a nice example of LiveScript syntax here, where -> it effectively compiles into:
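function (it) {
  return it;
}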

Note that tags are piped into this function.

obj-to-pairs — another Prelude.ls function, which takes an object and returns a list of pairs. This way our previous object will turn into something like this:

[['tag1', ['tag1']], ['tag2', ['tag2', 'tag2', 'tag2']], ... ]

map (-> [it[0], it[1].length]) — maps every entry in the array using a supplied function. This will produce a new array:

[['tag1', 1], ['tag2', 3], ...]

Again, using the default argument it here for every array entry from the previous array.

sort-by (.1) is a clever use of LiveScript syntax to access the second entry in a ['tag2', 3] pair and sort the master array based on its value. The sort-by function is again provided by the awesome Prelude.ls. An interesting detail here is that (.1) actually compiles into a function:
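function (it) {
  return it[1];
}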

This means that you can do things like sort-by last, array, which will sort an array of arrays by the last item of each inner array (last being a prelude function again).

reverse — simply reverses the array in order to get top 10 of the most used tags with the next step, which is…

take 10 — takes the first 10 entries from the array. It is smart enough to take fewer entries if the array is not big enough.

And all this leads to the last step:

map (-> {name: it[0], size: it[1]}) — creates a final array of objects with name and size values. Final array will look like this:

[{name: 'tag2', size: 3}, {name: 'tag8', size: 2}, {name: 'tag1', size: 1}, ...]

In LiveScript the last result is automatically returned, so there is no need to explicitly return a value.

LiveScript is a very powerful language with (mostly) human-readable syntax. Combined with the Prelude.ls library, you can write less code that looks elegant and does so much more.

26
Feb
2014

Red Badger Components

by Alex Savin

Components are a beautiful way of splitting your front end UI and logic into self contained packages, which can be reused and even released as open source. During our recent World Risk Review project we’ve used components extensively for rapid UI implementation.

Search page components

For example, the Search page was implemented as a parent component containing child components for search results, search filters and suggestions. Lots of internal components were implemented, which are very specific to this project. We’ve also released new public components, as well as forked and improved many of the existing ones.

Quick guide to Component.io

There is a great introduction post on the Component approach by Stuart Harris. In a nutshell, each component is designed to be a lightweight, independent front-end package which you can plug into any web application, or even a static page. Most of our components also include a /test/index.html file with an example of using that particular component. Here is how you can quickly start using components (and creating new ones!).

This assumes that you already have Homebrew installed on your Mac.

Requirements:

  • Install Node.js and npm: brew install node
  • Install Component: npm install -g component

Steps:

  • Clone a component git repository: git clone git@github.com:redbadger/pager.git
  • Navigate into the component’s directory. Run component install to fetch dependencies
  • Run component build to combine all dependencies and assets into a single build
  • Open the test/index.html file in your browser. This file will try to include the build/build.js (and sometimes build/build.css) files, which should now be generated

You can integrate the component build step into your Gruntfile, or just copy the generated build files into your project. You can also set up your web app to fetch the latest version of a particular component, build it, and include it in your app.

Word of caution: the world of custom components is a bit wild right now, and it’s a good idea to freeze the component version, or even the whole component build, in your app.

Here are some of our public components that we built and improved over the course of the WRR project.

Pager

https://github.com/redbadger/pager

Fork of the original pagination component. Originally a very simple UI component displaying all pages at once, with links. Our current fork adds:

  • Go to first and Go to last page buttons
  • A window of pages – limiting the number of page links to a specified amount on both sides of the current page

Demo

Animated demo of pager component

Usage 

  • Total number – the total number of, say, search results
  • Per page – how many entries you want to display on a single page. Pager will calculate the number of pages and render page links.
  • Max pages – when there are too many entries and pages, you can specify how many page links will be displayed to the left and right of the current page link.

Datepicker

https://github.com/redbadger/datepicker

Fork of the original Datepicker component, now heavily rewritten and improved. New features include:

  • Value is now being set and retrieved as Date
  • CSS was improved to display background correctly
  • It’s possible to click outside the datepicker to close it
  • Keyboard controls were added – escape to close, enter to set the value by hand
  • Datepicker emits events when the date changes
  • Support for date format was added

Demo

Datepicker demo

Usage

You can specify the date format with a string like “MM.DD.YYYY”. The separator symbol and the order of the elements will be parsed from this string. You can also use “YY” for a two-digit year value, or “YYYY” for the full year.

Cookie disclaimer

https://github.com/redbadger/cookie-disclaimer

A simple UI component for displaying a cookie disclaimer policy at the top of the website. It supports local storage: once closed, it will write a cookie-consent: true local storage entry and will not reappear again.

Demo


Usage

You can specify any HTML content for the cookie disclaimer. The “Close” button will be added by the component.

Collapsible

https://github.com/redbadger/collapsible

Collapsible screenshot

A simple Bootstrap-like collapsible component for your DOM elements. Handy when you have a section with a title and you’d like to toggle collapsing the section by clicking on the title. Collapsible is especially useful when combined with CSS media queries to build mobile-friendly navigations.

Usage

  • Tag the collapsible toggle switch with .collapse-toggle and a data-collapse attribute equal to the collapsible target selector
  • Call collapsible with two arguments – the root selector and the class name to be applied to the toggle when the target is collapsed

Collapsible will parse all elements under the root selector for data-collapse attributes and make them collapsible.
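A sketch of what that call might look like (the exact signature is documented in the component’s README):

var collapsible = require('collapsible');

// Make everything under <body> with a data-collapse attribute collapsible,
// applying the given class to the toggle while its target is collapsed.
collapsible('body', 'collapsed');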

Here the body is used as the root selector, and it will be used to find elements with data-collapse attributes. The .collapsed and .expanded classes will be applied to the toggle element.

Pillbox

https://github.com/redbadger/pillbox

Fork of the original component. Now extended with:

  • Support for autocomplete
  • Whitelisted tags
  • Support for tags with spaces

Demo

Pillbox component demo

Conclusion

Components offer a nice way of abstracting parts of front-end logic and UI. They are very lightweight, generally do not depend on libraries like jQuery, and are easy to reuse in different parts of an application, or between applications. We hope you find some of our components useful. There are so many things you can do with components, and the best part is – you can use them today.

27
Nov
2013

Full Frontal 2013

by Stephen Fulljames


When assessing conferences for myself, I tend to break them down into “doing conferences” and “thinking conferences”. The former are skewed more towards picking up practical tips for day-to-day work, the latter towards more thought-provoking, bigger-picture, ‘I want to try that’ kind of inspiration.

Despite being pitched as a tech-heavy event for JavaScript developers, Remy and Julie Sharp’s Full Frontal, held at the wonderful Duke of York’s cinema in Brighton, has always felt like more of the latter. That’s not to say the practical content isn’t very good. It is, very very good, and naturally the balance has ebbed and flowed over the event’s five-year history, but the general feeling I get when I walk out at the end of the day is always ‘Yeah, let’s do more of that!’ It’s been that way right from the start, in 2009, when Simon Willison ditched his prepared talk at a few days’ notice to speak about a new language in its infancy – a little thing called Node. So I was hopeful that this year’s conference would provoke similar enthusiasm.

High expectations, then, and a promising start with Angus Croll taking us through some of the new features in EcmaScript 6 (ES6), aka “the next version of JavaScript”. Presenting a series of common JS patterns as they currently are in ES5, and how they will be improved in ES6, Angus made the point that we should be trying this stuff out and experimenting with it, even before the specification is eventually finalised and browser support fully implemented. As David commented, if you’ve done CoffeeScript you’re probably well prepared for ES6 – really, one of the aims of CoffeeScript was to plug the gap and drive the evolution of the language – so it’s hopefully something I will be able to pick up fairly easily.

This was followed by Andrew Nesbitt, organiser of the recent Great British Node Conference, demonstrating the scope of hardware hacking that is now becoming possible using Javascript. As well as the now-obligatory attempt to crash a Node-controlled AR drone into the audience, Andrew also explained that “pretty much every bit of hardware you can plug into USB has a node module these days” and demonstrated a robotic rabbit food dispenser using the latest generation of Lego Mindstorms. Being able to use Javascript in hardware control really lowers the barrier to entry, and the talk only reinforced the feeling I got after the Node Conf that I need to try this (and ideally stop procrastinating and just get on with it).

Joe McCann of Mother New York gave a high-level view on how mobile is increasingly reshaping how we interact with the web, with the world and with each other. Use of phones as payment methods in Africa, where availability of bank accounts is challenging, has reached around 80% of the population with systems such as M-Pesa. And SMS, the bedrock of mobile network operators’ revenue since the early 90s, is being disrupted by what are known as “over the top” messaging services that use devices’ data connections. These are familiar to us as iMessage and Whatsapp, but also growing at a phenomenal scale in the Far East with services such as Line, which is offering payment, gaming and even embedded applications within its own platform. Joe’s insight from a statistical point of view was fascinating, but it didn’t really feel like many conclusions were drawn from the talk overall.

Andrew Grieve and Kenneth Auchenberg then got down to more development-focussed matters with their talks. The former, drawn from Andrew’s experience working on mobile versions of Google’s productivity apps, was a great explanation of the current state of mobile performance. It turns out that a lot of the things we often take for granted, such as trying to load JavaScript only as required, aren’t as important now as perhaps they were a couple of years ago. Mobile devices are now able to parse JS and selectively execute it, so putting more effort into minimising DOM repaints, using event delegation, and taking advantage of incremental results from XHR calls and progress events are likely to be better bets for improving performance.

Kenneth spoke about the web development workflow, a subject he blogged about earlier in the year. His premise was that the increasing complexity of browser-based debug tools, while helpful in their purpose, is only really fixing the symptoms of wider problems by adding more tools. We should be able to debug any browser in the environment of our choice, and he demonstrated this by showing early work on RemoteDebug, which aims to make browsers and debuggers more interoperable – shown by debugging Firefox from Chrome’s dev tools. By working together as a community on projects like this we can continue to improve our workflows.

My brain, I have to admit, was fairly fried in the early afternoon after an epic burger for lunch from the barbeque guys at The World’s End, a spit-and-sawdust boozer round the corner from the conference venue. So the finer points of Ana Tudor’s talk, on some of the more advanced effects you can achieve purely with CSS animation, were lost on my struggling grey matter. Suffice it to say, you can do some amazing stuff in only a few lines of CSS in a modern browser, and the adoption of SASS as a pre-processor, with its functional abilities, makes the process much easier. It’s also brilliant that Ana came on board as a speaker after impressing Remy in the JSBin birthday competition – a perfect demonstration that participating in the web community can have a great payoff.

The last development-orientated session was from Angelina Fabbro, on Web Components and the Brick library. Web Components are a combination of new technologies which will allow us to define our own custom, reusable HTML elements to achieve specific purposes – for example a robust date-picker that is native to the page rather than relying on third party Javascript. This is naturally quite a large subject, and it felt like the talk only really skimmed the surface of it, but it was intriguing enough to make me want to dig further.

The finale of the day, and a great note to finish on, was Jeremy Keith speaking about “Time”. Not really a talk on development, or at least not the nuts and bolts of it, but more of a musing about the permanence of the web (if indeed it will be so) interspersed with clips from Charles and Ray Eames’ incredible short film, Powers of Ten – which if you haven’t seen it is a sure-fire way to get some perspective on the size of your influence in the universe.

Definitely a thought-provoking end to the day. As someone who has done their time in, effectively, the advertising industry working on short-lived campaign sites that evaporate after a few months (coincidentally Jeremy mentioned that the average lifetime of a web page is 100 days) it has bothered me that a sizeable chunk of the work I’ve done is no longer visible to anyone. On the other hand I have worked on projects that have been around for a long time, and are likely to remain so, and I suppose in the end it’s up to each of us to focus our efforts and invest our time in the things that we ourselves consider worthwhile.

(Photo: Jeremy Keith recreating the opening scene of Powers of Ten on a visit to Chicago)

17
Nov
2013

Component

by Stuart Harris


I should have written this post a while ago because something I love is not getting the traction it deserves and writing this earlier may have helped in some small way to change that.

Earlier this year we spent a lot of time trying to understand the best way to package and deliver client-side dependencies. It’s a problem that afflicts all modern web development regardless of the stack you use. Most of the solutions we tried don’t address the real problem, which is about delivering large monolithic libraries and frameworks to the client because some small part of them is needed. Like jQuery, for example. Even underscore, as much as I love it. You might use a few features from each. And then it all adds up. Even uglified and gzipped, it’s not uncommon for a page to be accompanied by up to a megabyte of JavaScript. That’s no good. Not even with pervasive broadband. Especially not on mobile devices over a flaky connection.

Some of these, like bootstrap, allow you to customise the download to include just the bits you want. This is great. But a bit of a faff. And it seems like the wrong solution. I don’t know many people that actually do it.

As an industry we’re moving away from all that. We’re learning from the age-old UNIX way that Eric Raymond so brilliantly described in The Art of UNIX Programming; small, sharp tools, each only doing one thing but doing it well. Modern polyglot architectures are assembled from concise and highly focussed modules of functionality. Software is all about abstracting complexity because our brains cannot be everywhere at once. We all know that if we focus on one job and do it well, we can be sure it works properly and we won’t have to build that same thing again. This is the most efficient way to exploit reuse in software engineering.

But small modules have to be composed. And their dependencies managed. We need something that allows us to pluck a module out of the ether and just use it. We want to depend on it without worrying about what it depends on.

npm is one of the best dependency managers I’ve used. I love how it allows your app to reference a directed acyclic graph of dependencies that is managed for you by the beautiful simplicity of ‘require’ (commonjs modules). In node.js, this works brilliantly well, allowing each module to reference specific versions of its dependencies so that overall there may be lots of different versions of a module in the graph. Even multiple copies of the same version. It allows each module to evolve independently on its own track. And it doesn’t matter how many different versions or copies of a library you’ve got in your app when it’s running on a server. Disk space and memory are cheap. And the stability and flexibility it promotes is well worth the price.

But on the client it’s a different story. You wouldn’t want to download several versions of a library in your page just because different modules were developed independently and some haven’t been updated to use the latest version of something. And the bigger the modules are the worse this would become. Fortunately, the smaller they are, the easier they are to update and the less they, themselves, depend on in the first place. It’s simple to keep a small module up to date. And by small, I’m talking maybe 10 lines of code. Maybe a few hundred, but definitely not more than that.

Enter Component by the prolific (and switched on) TJ Holowaychuk. Not perfect, but until we get Web Components, it’s the best client-side module manager out there. Why? Because it promotes tiny modules. They can be just a bit of functionality, or little bits of UI (widgets if you like). If you use Component, you’re encouraged to use, and/or write, small modules. Like a string trimmer, for example; only 13 loc. Or a tiny, express-like, client-side router in under 1200 bytes.  There are thousands of them. This is a Hacker News button, built with Component:

The Hacker News button, built with Component, showing dependencies

The registry

The great thing about Component is that it fetches the files specified in the component.json from GitHub, following the “organisation/repository” pattern (you can specify other locations). This is great. The namespacing stops the bun-fight for cool names, because the organisation is included in the unique identifier.

The other major benefit of this is that you can fork a component, modify it and point your app at your own repo if you’re not getting any of your pull requests integrated.

App structure

But it’s not really about 3rd party modules. In my head it’s more about how you structure the code that drives your page.

Component allows you to write completely self-contained features and plug them together. Your components will be DRY and follow the SRP. Each component can have scripts (e.g. JavaScript, or CoffeeScript), styles (e.g. CSS, or Less, or Sass), templates (e.g. compiled from HTML, or Jade), data (e.g. JSON), images, fonts and other files, as well as their own dependencies (other components). All this is specified in the component.json file, which points to everything the component needs, and informs the build step so that everything is packaged up correctly. It can be a little laborious to specify everything in the component.json, but it’s worth it. When you install a component, the component.json specifies exactly which files (in the Github repo) should be downloaded (unlike Bower, for example, where the whole repo has to be fetched) – check out how fast “component install” is.
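As an illustration, a component.json for a self-contained UI component might look something like this sketch (the fields are from the spec; the component itself is invented):

{
  "name": "user-card",
  "version": "0.1.0",
  "main": "index.js",
  "scripts": ["index.js", "template.js"],
  "styles": ["user-card.css"],
  "images": ["avatar-placeholder.png"],
  "dependencies": {
    "component/dom": "*",
    "component/emitter": "*"
  }
}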

The self-contained nature of components means that you don’t have a separate scripts folder with every script for the page in it, and a styles folder with all the CSS. Instead, everything is grouped by function, so everything the component needs is contained in the component’s folder. At build time, you can use Grunt to run component build, which transpiles the CoffeeScript to JavaScript, the Less to CSS, and the Jade to JavaScript functions, and packages the assets. The dependencies are analysed and all the JavaScript ends up in the right order in one file, all the CSS in another. These and the other assets are copied to the build directory, uglified/compressed, ready for delivery to the client.

Getting started

The best docs are in the wiki on the Github repo. The FAQ is especially germane. And TJ’s original blog post is great reading, including the rather brilliant discussion about AMD vs Common JS modules. AMD was invented for asynchronous loading. But when you think about it, you’re gonna package all your script up in one compressed HTTP response anyway; there’s still too much overhead associated with multiple requests, even with HTTP keepalive (it’s not so bad with Spdy). The perceived benefits of loading asynchronously, as required, are not yet fully realisable, so we may as well go for the simple require and module.exports pattern we know and love from node.js.

If you’re using CoffeeScript, Jade and JSON in your components, you can use a Gruntfile that looks like this (which contains a workaround for the fact that the coffee compilation step changes the filename extensions from .coffee to .js):

We’ve tried a bunch of different tools to solve the problem of easily and efficiently distributing your app to the browser. All of them have flaws. We used to love Jam.js and Bower. But we got into a jam with jam, because updates were getting jammed due to unresponsive maintainers (sorry, couldn’t resist that). Bower was great, but too heavy. Browserify is too tightly coupled with node.js and npm. None of them make simple, self contained, focused modules as straightforward and elegant as Component. Nice one, TJ!

14
Oct
2013

Robots, pedal bins and dTrace: The 2013 Great British Node Conference

by Stephen Fulljames


If there’s a common theme from the popular London Node User Group evening meet-ups, from which the Great British Node Conference has evolved as a full day event, it’s that the Node.js ecosystem appears to be approximately 50% useful production tooling and 50% wonderfully insane hacks – with both sides of the personality aided by Node’s asynchronous nature and ability to process data I/O very quickly.

This ratio felt like it was also borne out during the conference, the first big event to be held at the brand new Shoreditch Works Village Hall in Hoxton Square. The event space itself was great: fashionably minimal, with rock-solid wifi and an on-site coffee shop. The only slight niggle was that the low ceiling height meant the presentation screens became partially obscured by those seated in front, but with two projectors on the go you could usually get a clear view of one.

So, on to the talks. As mentioned there was a definite split between “useful” and “wtf?” and also between micro and macro ideas. Paul Serby of Clock kicked off with a review of his company’s experience of Node in production use for clients over the last 3 years, which was high level but a great introduction to the philosophy behind adopting Node and some of the successes and pain points along the way. It was interesting, and pleasing, to see that their journey has been similar to our own switch towards Node at Red Badger with many similar learnings and changes to our respective programming styles.

Performance was a big theme of the day, both in Paul’s overview talk and in examples much closer to the metal, such as Anton Whalley’s forensic examination of a memory leak bug in the node-levelup module (a wrapper for LevelDB). Usually hand-in-hand with mention of performance was the use of dTrace – not a Node tool in itself but a very useful analysis tool for discovering how applications are running and identifying the source of problems. The overall picture from this being that while Node can offer great performance advantages, it can also be prone to memory leaking and needs careful monitoring in production.

Other talks at the practical end of the spectrum included Hannah Wolfe on Ghost, a new blogging platform built on Node which is looking like an interesting alternative to WordPress and, after a very successful Kickstarter campaign to raise funding, should be available generally very soon. Tim Ruffles also took us through the various options (and pitfalls) for avoiding the callback hell which asynchronous programming can often fall into. There are a few useful flow-control modules available for Node already, but as the JavaScript language develops, native features to help with async flows – known as generators, and acting in a similar way to C#’s yield – will start to become available both in Node and in browsers as they adopt ES6.

Over on the hack side, we were treated to the now obligatory sight of a Node-driven quad-copter drone crashing into the audience and then a brilliant demonstration by Darach Ennis of his Beams module, which attempts to give compute events the same kind of streaming behaviour that I/O enjoys in Node. The key difference being that compute streams are necessarily infinite, and the Beams module allows you to filter, merge and compose these compute streams into useful data. The demo was topped off by an interactive light-tennis game adjudicated by a hacked Robosapiens robot which not only reacted to the gameplay but also ran the software which drove the game.

Probably the highlight for me, although its relation to practical application at work was close to zero, was Gordon Williams talking about Espruino, a JS interpreter for micro-controllers. Running at a lower level than the well-known Raspberry Pi or even Arduino boards, micro-controllers are the tiny computers that make all the stuff around us work, and they typically have RAM measured in the kilobytes. For anyone who ever tried to write games on a ZX Spectrum this may bring back memories! Gordon showed real-time development via a terminal application, also hooked up to a webcam so we could watch him create a pedal bin which opened based on a proximity sensor. Maybe not useful in my work at Red Badger, but I could instantly see loads of applications in my personal interests, and thanks to the immediate familiarity of being able to use JavaScript in a new context I’m definitely going to look into Espruino some more.

Overall this felt like a conference where delegates were looked after probably better than any I’ve been to for a long time, with plenty of tea and biscuits, great coffee and chilled water on hand and a catered lunch and evening meal nearby. Whether this was down to the smaller scale of the event (around 150 attended) or the care and attention to detail taken by the organisers I’m not sure, but either way I came out of it feeling enthusiastic for Node (both practically and hackerly) and eager to go back next time.

5
Jun
2013

Using PubSub in a Require.js app

by Stephen Fulljames

Recently a friend asked “Can you use Require.js with PubSub?”, and since I’ve just done exactly that on a project I thought it was worth writing up a simple example. The Publish-Subscribe (PubSub) pattern is a useful way to pass event triggers and data around an application without having to explicitly wire the origin and the target together. A PubSub mechanism, sometimes called an Event Bus, is used to publish data to named topics, and then any other part of the code can subscribe to those topics and receive any data that is published.

In this way, for example, an AJAX API can fetch data from a server and publish the results out to your Event Bus without needing to know where the data has to go – anything that is interested in it will pick it up.

The complication in this case is that the AMD module coding style that Require.js uses prefers splitting your code up into small chunks in individual files, and it’s maybe not clear how a PubSub mechanism would work across all these disparate parts. The good news is, it’s not too difficult!

I’ve created a simple example of a shopping list application which fetches the ingredients you need to buy from an API, and posts back to the API when you’ve picked up each thing. We’re making eggy bread, because that’s what my daughters asked for for breakfast yesterday.

The following code is written in Coffeescript, which suits Require’s modular style, but if you look at the sample app on Github there’s also a version compiled to plain Javascript if you prefer to read that way. There’s also a Grunt file in the project if you want to modify and build the code yourself.

The demo consists of an HTML file, which displays the shopping list and has appropriate buttons to trigger GET and POST behaviour on the API, and four Coffeescript files.

index.html
js/
  lib/
    knockout.js
    pubsub.js
    require.js
  api.coffee
  app.coffee
  config.coffee
  viewmodel.coffee

I’m using Knockout as a viewmodel in my HTML, simply because that’s what was in the original project code I’ve stripped this out of, but you should be able to use any MV* library you prefer or indeed any other method you like to present the data.

In the HTML we place a script element for the Require library, and set its data-main attribute to load the application config first.

<script data-main="js/config" src="js/lib/require.js"></script>

The Require config is extremely simple, we’re just creating paths to both the libraries (Knockout and PubSubJS) the application uses and then requiring the main application file and calling its init method.

require.config
  paths:
    "knockout": "lib/knockout"
    "PubSubJS": "lib/pubsub"

require ['app'], (App) ->
  App.init()

I’m not going to go into the minutiae of how Require works – there are plenty of resources out there if you want to look at it in more detail – and for now I’ll assume you have at least a basic understanding.

The application file is not complicated either: we require Knockout and our Viewmodel and API classes, instantiate the latter two, and then use Knockout’s applyBindings method to wire the viewmodel to the DOM.

define ['knockout','viewmodel','api'], (ko,Viewmodel,API) ->
  init: ->

    api = new API();
    vm = new Viewmodel();

    ko.applyBindings vm;

You’ll notice at this point there’s no mention of the pubsub library, this is entirely contained within the Viewmodel and API.

Let’s take a look at the API. It exposes two methods – getContent and postContent. This is entirely faked for demo purposes, so all these methods do is wait 250ms and then return some data. The API class also stores an array of ingredients which is the data sample we’ll send to the Viewmodel.

define ['PubSubJS'], (pubsub) ->

  delay = (ms, func) -> setTimeout func, ms

  list = ['Eggs','Milk','Bread','Vanilla Essence']

  class API
    constructor: ->

      pubsub.subscribe '/load', (channel, msg) =>
        return unless msg

        @getContent()

      pubsub.subscribe '/remove', (channel, msg) =>
        return unless msg

        @postContent msg

    getContent: ->
      # equivalent of an AJAX GET
      delay 250, -> pubsub.publish '/loaded', list.slice 0

    postContent: (data) ->
      list.splice list.indexOf(data), 1
      # equivalent of an AJAX POST (or DELETE)
      delay 250, -> pubsub.publish '/removed', data

You’ll see that the class requires the PubSubJS library, and then in its constructor sets up a couple of subscribers for events that the Viewmodel will publish. The getContent and postContent methods then publish on their own topics when they complete. The postContent method also deletes the posted item from the array so if the user tries to get the list again we can pretend the API actually did something.

(The delay method is just a convenience wrapper for setTimeout to make it more Coffeescript-y)

And now the Viewmodel.

define ['knockout','PubSubJS'], (ko, pubsub) ->
  class Viewmodel
    constructor: ->

      @firstLoad = ko.observable false
      @status = ko.observable()

      @list = ko.observableArray()

      @instructions = ko.computed =>
        if @firstLoad() then 'All done!' else 'Time to go shopping'

      @load = ->
        pubsub.publish '/load', true

      @remove = (data) ->
        pubsub.publish '/remove', data

      pubsub.subscribe '/loaded', (channel, msg) =>
        return unless msg

        @list msg
        @firstLoad true unless @firstLoad()
        @status 'API content loaded'

      pubsub.subscribe '/removed', (channel, msg) =>
        return unless msg

        @list.splice @list.indexOf(msg), 1
        @status 'Item removed via API'

Again we require the PubSubJS library, as well as Knockout. Its constructor does the usual Knockout setup to create its observables and other viewmodel functions, and then binds the equivalent PubSub subscribers to listen for activity from the API. The @load and @remove methods are bound to buttons in the UI, and they publish the events which the API listens for to trigger its (fake) AJAX calls.

Both parts of the application are completely decoupled from each other, communicating only through PubSub, and this separation of concerns means you could replace the API or the UI without the other side ever, in theory, needing to be changed.

And that’s about all of it. To make the demo meaningful I’ve set the Viewmodel and HTML bindings up so that a shopping list is displayed after the user clicks the ‘Get list from API’ button, and removes items from the list when the API reports the remove behaviour was successfully posted.

So how does it work? The PubSubJS library, used in this example, operates as a global object, so when the Viewmodel and API classes require it they’re referring to the same object. Therefore when topics are subscribed to, and published, PubSubJS’s event bus is available to all parts of the application, and data can be received and sent from anywhere. That doesn’t mean you should necessarily use that library. Backbone, for example, has PubSub built in through its event system. And others are available. But most work on the same principle, so if you have a favourite, give it a quick test and make sure it works.

All the code for the demo is on Github so do check it out for yourself and play around with it.

26
Mar
2013

Jam.js and Bower

by Jon Sharratt

Over the last few months of last year we have been crafting away and exploring many exciting new frameworks and technologies. There are too many to mention, but our core stack of late has mostly consisted of node.js. As node.js people know, the bundled npm package manager is a fantastic tool for resolving the dependencies of the project in question.

With this in mind, Stu and I have been hard at work with a large client in the finance sector and looked to ‘resolve’ our client-side dependencies in an equivalent manner. We had a look around at some of the options, and for our use case the same two kept cropping up.

Bower

The folks over at the Twitter engineering team have been hard at work again and built Bower as a client-side package manager. It allows you to resolve your dependencies via a component.json file and install the relevant git paths as required. This approach is at a lower level than the alternatives: it doesn’t limit you to any transport – ultimately you decide how you wish to deliver your packages to the client.
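At the time of writing, Bower reads its manifest from a component.json at the project root – something like this sketch (package versions illustrative):

{
  "name": "myapp",
  "dependencies": {
    "jquery": "~1.9.0",
    "backbone": "~0.9.10"
  }
}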

The great thing is that you can just point to git paths and you are away, without depending on other people to own and manage the updates of packages. From our experience with Jam.js, I suppose it might also be fair to say that it would have taken less “tinkering time” to get the require.js transport working client side, alongside some out-of-date package versions we couldn’t update.

Jam.js

Jam.js is a package manager with a different spin: it installs packages from a core Jam.js package repository, resolves dependencies, and uses AMD / require.js to transport your packages to the client. The configuration for the packages is defined and stored within your standard package.json file.
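A sketch of how that configuration might look in package.json (the jam section values and versions are illustrative):

{
  "name": "myapp",
  "version": "0.0.1",
  "jam": {
    "packageDir": "public/vendor",
    "baseUrl": "public",
    "dependencies": {
      "jquery": "1.7.x"
    }
  }
}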

There is a downside that is quite commonly raised: you upload a package to a repository separate from GitHub. Once a package is uploaded, the owner is the only person who can update it (unless the owner adds other users). There is an option to set up your own repository and have Jam search the core repository in addition to the one you create, but to me this is a big limitation in regards to package management.

Package management should, in my opinion, be sourced from one place and be concise, self-managing, effective to install, and configured to deliver to the client in the best possible manner. Jam.js, for me, meets three out of four of these criteria.

I have started a pull request to try and help with updating packages. The idea is taken from GitHub / git, allowing users to submit pull requests to package repositories. The initial implementation of this feature, with a description, can be found at: https://github.com/caolan/jam/issues/128

In Summary

We made the choice of Jam.js for our project as it suited our needs to have require.js configured and set up as the transport to the client (once we cracked some of the require.js path configuration). All you need to do is quite naturally add your jam dependencies to your package.json and type jam install on the command line, just like npm install, and away you go.

It is not to say we will never use Bower, as it does its job well, and the fact it is lower level may suit other projects where transport via AMD is not a preference. We may also switch if we find the package management and the central package repository not being kept up to date by the community; this is where Jam.js can and may fall down.