Posts Tagged ‘javascript’

11 Mar 2014

Componentify it

by Alex Savin


Original image by Jonathan Kos-Read, licensed with Creative Commons BY-ND 2.0

One of the most amazing things about Web development is the scale of wheel reinvention. Pretty much every developer at some point decides to conquer a common problem by rewriting it from scratch. Reusing existing solutions is often difficult – they are likely to be tied to internal styles, libraries and build processes. This creates a nurturing ground where the same features are implemented many, many times – not only because developers think they can do it better, but also because it takes too much effort to reuse an existing solution without bringing in all of its dependencies.

This blogpost is about the Component approach. Components are designed to have as few dependencies as possible, to rely on common web technologies, and to be easily reusable and extendable. With the component approach it’s trivial to unplug parts of the app and replace them with other parts, or with an improved version of the same component. There are certain principles to consider when releasing new components, since Component itself doesn’t really limit you in any way. It is possible that later on we’ll have some monster components with a huge number of dependencies lurking around – but then again, using such components is completely up to you.

When an aspect of a component may be useful to others, consider writing that as a component as well. If it requires reasonable effort to write the code in the first place, chances are someone else could use it too.

Building better components

Make a new component

Component comes with a handy console command. Assuming you have the component npm package installed, try running component help.

To create a new component you can run component create mynewcomponent. This command will ask you (quite) a few questions, and then create a new folder with some basic files.

If you try compiling the newly generated component, you might get the following error:

$ component build
error : ENOENT, open '/Users/alex/red-badger/mytestcomponent/template.js'

This happens because a template.js file is specified in component.json but is not generated by component create. You can either create this file manually, or remove the reference from component.json. After that, component build should generate your first component under the /build folder.

Each component can contain any number of:

  • JS files, with a mandatory index.js or main.js file assigned to the “main” directive in component.json
  • CSS files
  • HTML files
  • Local and external dependencies to other components

All assets must be explicitly listed in component.json, otherwise they are ignored. This is another clever feature of Component, since the folder might contain temp files, npm packages or generated files. When Component is instructed to pick only certain files, it will not only limit the build to those files, but will also fetch only them from Github (ignoring everything else you could’ve pushed there on purpose or by accident). This way, building components with dependencies becomes much faster.

A component doesn’t have to contain any JavaScript logic at all – it can simply export an HTML template or a CSS style:
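For instance, a component that just packages a stylesheet needs little more than a component.json along these lines (a minimal sketch, with illustrative names):

{
  "name": "button-styles",
  "version": "0.0.1",
  "styles": ["button.css"]
}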

You can componentify any bit of style, markup or functionality into a reusable package. As long as it makes sense.

Entry point

A fairly common pattern for a UI component is to ask for a DOM selector where the component would insert itself.

You can also use the component/dom component for basic DOM manipulation. The dom component is obviously not a jQuery replacement and lacks a lot of jQuery’s functionality. You can live perfectly well without the dom component too, and manipulate the DOM directly with document methods like document.getElementById and document.querySelectorAll. Adding and removing classes on elements can be slightly more challenging without any libraries, but there is a special component for that too – component/classes.

Here is an example of a component’s main.js using the dom component and appending itself to the target element:
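A minimal sketch (the markup is illustrative, and we assume the dom component's jQuery-like append):

var dom = require('dom');

module.exports = function (selector) {
  // build the component's markup and append it to the target element
  var el = dom('<div class="mycomponent">Hello from mycomponent</div>');
  dom(selector).append(el);
  return el;
};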

In this case the dom component would be specified in the dependencies section of component.json:
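Something like this (the version pin is illustrative):

{
  "name": "mycomponent",
  "version": "0.0.1",
  "main": "main.js",
  "scripts": ["main.js"],
  "dependencies": {
    "component/dom": "*"
  }
}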

Using CoffeeScript, LiveScript, LESS and JADE with components

If you are going to release a public component, it makes sense to package it with conventional JS/CSS/HTML files to reduce the number of dependencies and increase reusability. For internal components we’ve mostly used LiveScript, CoffeeScript, LESS styles and JADE templates. And to complete the LiveScript picture, we’ve repackaged the Prelude.ls library into a component and used it as a dependency.

A notable npm package for component builds is grunt-component-build, which wraps the component builder in a Grunt task.

In the Gruntfile you can configure the builder to use extra plugins:
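With grunt-component-build that starts with loading the task (a sketch):

grunt.loadNpmTasks('grunt-component-build');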

And in the grunt.initConfig section:
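Something along these lines – the option names follow grunt-component-build, and the plugin required inside configure is a placeholder for whichever compiler you use:

component_build: {
  app: {
    output: './build/',
    scripts: true,
    styles: true,
    configure: function (builder) {
      // register extra build plugins here, e.g. a LiveScript or LESS compiler
      builder.use(require('component-livescript')); // hypothetical plugin name
    }
  }
}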

Global and local components

When using components extensively in your app you might end up with lots of public and private components. All public components and their dependencies will be automatically installed into a /components folder. It’s a good idea to put your private components into a /local folder next to the /components folder. You can also have dependencies between your local components:
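For example, a local component's component.json might declare (a sketch with illustrative names):

{
  "name": "search-results",
  "main": "main.js",
  "scripts": ["main.js"],
  "local": ["pager", "spinner"]
}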

The syntax for local dependencies differs slightly from global dependencies – there is no version, and you just list all local components as an array.

In the root, next to the /local and /components folders, you will need a main component.json file, which tells the build process which components need to be included in the final main.js file:
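A minimal sketch of such a root component.json:

{
  "name": "main",
  "paths": ["local"],
  "local": ["search-results", "header"]
}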

The paths directive tells the builder to look in the /local folder in addition to the /components folder.

Later in the html file you simply include this generated main.js.

The content of this file is evaluated, and you can then require and start using any of the components on your pages:
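For example, assuming a pager component is in the build:

<script src="build/main.js"></script>
<script>
  var Pager = require('pager');
  var pager = new Pager();
</script>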

Most of Red Badger’s components include an example html file with everything you need to start using that component on your pages.

Check for existing implementation

Before implementing your own component, it might be worth checking for an existing implementation. Most of the components are listed here:

https://github.com/component/component/wiki/Components

The component.io site is another slick way of browsing and searching for existing components. In many cases you’d want to start with an existing component and either use it as a dependency, or fork it and add your own functionality. The standard components under the component organisation are an especially easy starting point for extending.

Conclusion

The component approach requires some extra effort. You have to abstract and package bits of UI, make them fairly independent and think about possible reuse cases. You can also ignore the component creation stage completely and just use existing components. We chose to componentify most of our recent World Risk Review project, and this proved to be a brilliant decision. Instead of repetitive blocks of styles, markup and logic, we managed to put most of the front end elements into components and reuse them when needed. Some of the components were also released as open source, and we hope to see even more useful components released in the future!

4 Mar 2014

Functional refactoring with LiveScript

by Alex Savin

Original image by Dennis Jarvis. Used under Creative Commons BY-SA license.

LiveScript is a functional language that compiles to JavaScript. You could say it’s sort of like CoffeeScript, but in fact it’s so much better. This post features one hands-on example of refactoring JavaScript-like code using the power of LiveScript syntax, combined with Prelude.ls extras.

Here is a function for processing an array of tags into an object for a D3.js visualisation component. On input you have an array like ['tag1', 'tag2', 'tag2', 'tag2', 'tag3', ... ]. The function selects the top 10 most popular tags and constructs a D3.js compatible object.
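It was written imperatively – roughly along these lines (a sketch):

process-tags = (tags) ->
  counts = {}
  for tag in tags
    if counts[tag]?
      counts[tag] += 1
    else
      counts[tag] = 1

  pairs = []
  for tag, count of counts
    pairs.push [tag, count]

  pairs.sort (a, b) -> b[1] - a[1]
  top = pairs.slice 0, 10

  result = []
  for entry in top
    result.push {name: entry[0], size: entry[1]}
  result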

However when I showed this code to Viktor, he was quick to point out that LiveScript can do better. After another couple of minutes he produced this:
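It went something like this (reconstructed from the walkthrough below):

top-tags = (tags) ->
  tags |> group-by (-> it)
  |> obj-to-pairs
  |> map (-> [it[0], it[1].length])
  |> sort-by (.1)
  |> reverse
  |> take 10
  |> map (-> {name: it[0], size: it[1]})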

8 LOC vs. 21 LOC. The beauty of LiveScript is that it is very readable – you can figure out what’s going on just by reading the code. The refactored version also compiles into neat(er) looking JS.

What’s going on here?

|> is the LiveScript pipe operator. You take the result of the previous operation and pass it on to the next one. We are effectively processing a single input, so it is piping all the way.

group-by (-> it) — using a Prelude.ls function to create an index of the tags array. This will create an object which will look like this: {'tag1': ['tag1'], 'tag2': ['tag2', 'tag2', 'tag2'], ...}. We can see a nice example of LiveScript syntax here, where -> it effectively compiles into:
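function (it) {
  return it;
}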

Note that tags are piped into this function.

obj-to-pairs — another Prelude.ls function, which takes an object and returns a list of pairs. This way our previous object will turn into something like this:

[['tag1', ['tag1']], ['tag2', ['tag2', 'tag2', 'tag2']], ... ]

map (-> [it[0], it[1].length]) — maps every entry in the array using a supplied function. This will produce a new array:

[['tag1', 1], ['tag2', 3], ...]

Again, we are using the default argument it here for every entry from the previous array.

sort-by (.1) is a clever use of LiveScript syntax to access the second entry of a ['tag2', 3] pair and sort the master array by its value. The sort-by function is again provided by the awesome Prelude.ls. An interesting detail here is that (.1) actually compiles into a function:
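function (it) {
  return it[1];
}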

This means that you can do things like sort-by last, array, which will sort an array of arrays by the last item of each inner array (last being a Prelude.ls function again).

reverse — simply reverses the array, in order to get the top 10 most used tags with the next step, which is…

take 10 — takes the first 10 entries from the array. It is smart enough to take fewer entries if the array is not big enough.

And all this leads to the last step:

map (-> {name: it[0], size: it[1]}) — creates the final array of objects with name and size values. The final array will look like this:

[{name: 'tag2', size: 3}, {name: 'tag8', size: 2}, {name: 'tag1', size: 1}, ...]

In LiveScript the last result is automatically returned, so there is no need for an explicit return.

LiveScript is a very powerful language with (mostly) human readable syntax. Combined with Prelude.ls library you can write less code which looks elegant and does so much more.

26 Feb 2014

Red Badger Components

by Alex Savin

Components are a beautiful way of splitting your front end UI and logic into self contained packages, which can be reused and even released as open source. During our recent World Risk Review project we’ve used components extensively for rapid UI implementation.

Search page components

For example, the Search page was implemented as a parent component containing child components for search results, search filters and suggestions. Lots of internal components were implemented which are very specific to this project. We’ve also released new public components, as well as forked and improved many of the existing ones.

Quick guide to Component.io

There is a great introduction post on the Component approach by Stuart Harris. In a nutshell, each component is designed to be a lightweight, independent front end package which you can plug into any web application, or even a static page. Most of our components also include a /test/index.html file with an example of using that particular component. Here is how you can quickly start using components (and creating new ones!).

Assuming that you already have Homebrew installed on your Mac.

Requirements:

  • Install Node.js and npm: brew install node
  • Install Component: npm install -g component

Steps:

  • Clone a component git repository: git clone git@github.com:redbadger/pager.git
  • Navigate into the component’s directory. Run component install to fetch dependencies
  • Run component build to combine all dependencies and assets into a single build
  • Open the test/index.html file in your browser. This file will try to include the build/build.js (and sometimes build/build.css) files, which should now be generated

You can integrate the component build step into your Gruntfile, or just copy the generated build files into your project. You can also set up your web app to fetch the latest version of a particular component, build it and include it in your app.

Word of caution: the world of custom components is a bit wild right now, and it’s a good idea to freeze the component version, or even the whole component build, in your app.

Here are some of our public components that we built and improved over the course of the WRR project.

Pager

https://github.com/redbadger/pager

Fork of the original pagination component. Originally a very simple UI component displaying all pages at once, with links. Our current fork adds:

  • Go to First and Go to Last page buttons
  • A window of pages – limiting the number of page links to a specified amount on both sides of the current page

Demo

Animated demo of pager component

Usage 

  • Total number – the total number of, say, search results
  • Per page – how many entries you want to display on a single page. Pager will calculate the number of pages and render the page links.
  • Max pages – when there are too many entries and pages, you can specify how many page links will be displayed to the left and right of the current page link.
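Wiring it up looks roughly like this (a sketch – total, perpage and render follow the upstream pager API; the window-of-pages option in our fork is configured similarly):

var Pager = require('pager');

var pager = new Pager();
pager
  .total(103)   // e.g. 103 search results
  .perpage(10)  // ten per page, so eleven page links
  .render();

// pager.el is the component's root element – place it wherever it should render

pager.on('show', function (n) {
  // fetch and display page n
});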

Datepicker

https://github.com/redbadger/datepicker

Fork of the original Datepicker component, now heavily rewritten and improved. New features include:

  • Value is now set and retrieved as a Date
  • CSS was improved to display the background correctly
  • It’s possible to click outside the datepicker to close it
  • Keyboard controls were added – specifically escape to close, and enter to set the value by hand
  • Datepicker emits events when the date changes
  • Support for date formats was added

Demo

Datepicker demo

Usage

You can specify the date format with a string like “MM.DD.YYYY”. The separator symbol and the order of the elements will be parsed from this string. You can also use “YY” for a two digit year value, or “YYYY” for the full year.
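In use that might look like this (a hypothetical sketch – the constructor and method names are illustrative):

var Datepicker = require('datepicker');

var picker = new Datepicker(document.querySelector('input.date'));
picker.format('DD.MM.YYYY'); // separator and element order are parsed from the string

picker.on('change', function (date) {
  // the value arrives as a Date object
  console.log(date.getFullYear());
});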

Cookie disclaimer

https://github.com/redbadger/cookie-disclaimer

A simple UI component for displaying a cookie disclaimer policy at the top of the website. Supports local storage: once closed, it will write a cookie-consent: true local storage entry and will not reappear again.

Demo

Cookie disclaimer demo

Usage

You can specify any html content for the cookie disclaimer. The “Close” button will be added by the component.
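Usage might look something like this (a hypothetical sketch – the call signature is illustrative):

var disclaimer = require('cookie-disclaimer');

// any html content; the component adds the Close button itself
disclaimer('We use cookies on this site. <a href="/cookie-policy">Learn more</a>');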

Collapsible

https://github.com/redbadger/collapsible

Collapsible screenshot

A simple Bootstrap-like collapsible component for your DOM elements. Handy when you have a section with a title and you’d like to toggle collapse on the section by clicking the title. Collapsible is especially useful when combined with CSS media queries to build mobile friendly navigation.

Usage

  • Tag the collapsible toggle switch with .collapse-toggle and a data-collapse attribute equal to the collapsible target’s selector
  • Call collapsible with two arguments – a root selector, and the class name to be applied to the toggle when the target is collapsed

Collapsible will parse all elements under root selector for data-collapse attributes and make them collapsible.
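A minimal sketch of the markup and the call:

<h3 class="collapse-toggle" data-collapse="#site-menu">Menu</h3>
<ul id="site-menu">…</ul>

<script>
  var collapsible = require('collapsible');
  collapsible('body', 'collapsed'); // root selector, class applied to the toggle
</script>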

Here body is used as the root selector, and it will be used to find elements with data-collapse attributes. The .collapsed and .expanded classes will be applied to the toggle element.

Pillbox

https://github.com/redbadger/pillbox

Fork of the original component. Now extended with:

  • Support for autocomplete
  • Whitelisted tags
  • Support for tags with spaces

Demo

Pillbox component demo

Conclusion

Components offer a nice way of abstracting parts of front end logic and UI. They are very lightweight, generally do not depend on libraries like jQuery, and are easy to reuse in different parts of an application, or between applications. We hope you might find some of our components useful. There are so many things you can do with components, and the best part is – you can use them today.

27 Nov 2013

Full Frontal 2013

by Stephen Fulljames


When assessing conferences for myself, I tend to break them down into “doing conferences” and “thinking conferences” – the former being skewed more towards picking up practical tips for day-to-day work, the latter being more thought-provoking, bigger picture, ‘I want to try that’ kind of inspiration.

Despite being pitched as a tech-heavy event for Javascript developers, Remy and Julie Sharp’s Full Frontal, held at the wonderful Duke of York’s cinema in Brighton, has always felt like more of the latter. That’s not to say the practical content isn’t very good. It is, very very good, and naturally the balance has ebbed and flowed over the event’s five year history, but the general feeling I get when I walk out at the end of the day is always ‘Yeah, let’s do more of that!’ It’s been that way right from the start, in 2009, when Simon Willison ditched his prepared talk at a few days’ notice to speak about a new language in its infancy – a little thing called Node. So I was hopeful that this year’s conference would provoke similar enthusiasm.

High expectations, then, and a promising start with Angus Croll taking us through some of the new features in EcmaScript 6 (ES6), aka “the next version of Javascript”. Presenting a series of common JS patterns as they currently are in ES5, and how they will be improved in ES6, Angus made the point that we should be trying this stuff out and experimenting with it, even before the specification is eventually finalised and browser support fully implemented. David commented that if you’ve done Coffeescript you’re probably well prepared for ES6 – really, one of the aims of Coffeescript was to plug the gap and drive the evolution of the language – so it’s hopefully something I will be able to pick up fairly easily.

This was followed by Andrew Nesbitt, organiser of the recent Great British Node Conference, demonstrating the scope of hardware hacking that is now becoming possible using Javascript. As well as the now-obligatory attempt to crash a Node-controlled AR drone into the audience, Andrew also explained that “pretty much every bit of hardware you can plug into USB has a node module these days” and demonstrated a robotic rabbit food dispenser using the latest generation of Lego Mindstorms. Being able to use Javascript in hardware control really lowers the barrier to entry, and the talk only reinforced the feeling I got after the Node Conf that I need to try this (and ideally stop procrastinating and just get on with it).

Joe McCann of Mother New York gave a high-level view on how mobile is increasingly reshaping how we interact with the web, with the world and with each other. Use of phones as payment methods in Africa, where access to bank accounts is challenging, has reached around 80% of the population with systems such as M-Pesa. And SMS, the bedrock of mobile network operators’ revenue since the early 90s, is being disrupted by what are known as “over the top” messaging services that use devices’ data connections. These are familiar to us as iMessage and Whatsapp, but are also growing at a phenomenal scale in the Far East with services such as Line, which offers payment, gaming and even embedded applications within its own platform. Joe’s insight from a statistical point of view was fascinating, but it didn’t really feel like many conclusions were drawn from the talk overall.

Andrew Grieve and Kenneth Auchenberg then got down to more development-focussed matters with their talks. The former, drawn from Andrew’s experience working on mobile versions of Google’s productivity apps, was a great explanation of the current state of mobile performance. It turns out that a lot of the things we often take for granted, such as trying to load Javascript as required, aren’t as important now as perhaps they were a couple of years ago. Mobile devices are now able to parse JS and selectively execute it, so putting more effort in to minimising DOM repaints, using event delegation, and taking advantage of incremental results from XHR calls and progress events are likely to be better bets for improving performance.

Kenneth spoke about the web development workflow, a subject he blogged about earlier in the year. His premise was that the increasingly complex browser-based debug tools, while helpful in their purpose, are only really fixing the symptoms of wider problems by adding more tools. We should be able to debug any browser in the environment of our choice, and he demonstrated this by showing early work on RemoteDebug, which aims to make browsers and debuggers more interoperable – shown by debugging Firefox from Chrome’s dev tools. By working together as a community on projects like this we can continue to improve our workflows.

My brain, I have to admit, was fairly fried in the early afternoon after an epic burger for lunch from the barbeque guys at The World’s End, a spit-and-sawdust boozer round the corner from the conference venue. So the finer points of Ana Tudor’s talk on some of the more advanced effects you can achieve purely with CSS animation were lost to struggling grey matter. Suffice it to say, you can do some amazing stuff in only a few lines of CSS in modern browsers, and the adoption of SASS as a pre-processor, with its functional abilities, makes the process much easier. It’s also brilliant that Ana came on board as a speaker after impressing Remy in the JSBin birthday competition, and a perfect demonstration that participating in the web community can have a great pay-off.

The last development-orientated session was from Angelina Fabbro, on Web Components and the Brick library. Web Components are a combination of new technologies which will allow us to define our own custom, reusable HTML elements to achieve specific purposes – for example a robust date-picker that is native to the page rather than relying on third party Javascript. This is naturally quite a large subject, and it felt like the talk only really skimmed the surface of it, but it was intriguing enough to make me want to dig further.

The finale of the day, and a great note to finish on, was Jeremy Keith speaking about “Time”. Not really a talk on development, or at least not the nuts and bolts of it, but more of a musing about the permanence of the web (if indeed it will be so) interspersed with clips from Charles and Ray Eames’ incredible short film, Powers of Ten – which if you haven’t seen it is a sure-fire way to get some perspective on the size of your influence in the universe.

Definitely a thought-provoking end to the day. As someone who has done their time in, effectively, the advertising industry, working on short-lived campaign sites that evaporate after a few months (coincidentally Jeremy mentioned that the average lifetime of a web page is 100 days), it has bothered me that a sizeable chunk of the work I’ve done is no longer visible to anyone. On the other hand I have worked on projects that have been around for a long time, and are likely to remain so, and I suppose in the end it’s up to each of us to focus our efforts and invest our time in the things that we ourselves consider worthwhile.

(Photo: Jeremy Keith recreating the opening scene of Powers of Ten on a visit to Chicago)

17 Nov 2013

Component

by Stuart Harris


I should have written this post a while ago because something I love is not getting the traction it deserves and writing this earlier may have helped in some small way to change that.

Earlier this year we spent a lot of time trying to understand the best way to package and deliver client-side dependencies. It’s a problem that afflicts all modern web development regardless of the stack you use. Most of the solutions we tried don’t address the real problem, which is about delivering large monolithic libraries and frameworks to the client because some small part of them is needed. Like jQuery, for example. Even underscore, as much as I love it. You might use a few features from each. And then it all adds up. Even uglified and gzipped, it’s not uncommon for a page to be accompanied by up to a megabyte of JavaScript. That’s no good. Not even with pervasive broadband. Especially not on mobile devices over a flaky connection.

Some of these, like bootstrap, allow you to customise the download to include just the bits you want. This is great. But a bit of a faff. And it seems like the wrong solution. I don’t know many people that actually do it.

As an industry we’re moving away from all that. We’re learning from the age-old UNIX way that Eric Raymond so brilliantly described in The Art of UNIX Programming; small, sharp tools, each only doing one thing but doing it well. Modern polyglot architectures are assembled from concise and highly focussed modules of functionality. Software is all about abstracting complexity because our brains cannot be everywhere at once. We all know that if we focus on one job and do it well, we can be sure it works properly and we won’t have to build that same thing again. This is the most efficient way to exploit reuse in software engineering.

But small modules have to be composed. And their dependencies managed. We need something that allows us to pluck a module out of the ether and just use it. We want to depend on it without worrying about what it depends on.

npm is one of the best dependency managers I’ve used. I love how it allows your app to reference a directed acyclic graph of dependencies that is managed for you by the beautiful simplicity of ‘require’ (commonjs modules). In node.js, this works brilliantly well, allowing each module to reference specific versions of its dependencies so that overall there may be lots of different versions of a module in the graph. Even multiple copies of the same version. It allows each module to evolve independently on its own track. And it doesn’t matter how many different versions or copies of a library you’ve got in your app when it’s running on a server. Disk space and memory are cheap. And the stability and flexibility it promotes is well worth the price.

But on the client it’s a different story. You wouldn’t want to download several versions of a library in your page just because different modules were developed independently and some haven’t been updated to use the latest version of something. And the bigger the modules are the worse this would become. Fortunately, the smaller they are, the easier they are to update and the less they, themselves, depend on in the first place. It’s simple to keep a small module up to date. And by small, I’m talking maybe 10 lines of code. Maybe a few hundred, but definitely not more than that.

Enter Component by the prolific (and switched on) TJ Holowaychuk. Not perfect, but until we get Web Components, it’s the best client-side module manager out there. Why? Because it promotes tiny modules. They can be just a bit of functionality, or little bits of UI (widgets if you like). If you use Component, you’re encouraged to use, and/or write, small modules. Like a string trimmer, for example; only 13 loc. Or a tiny, express-like, client-side router in under 1200 bytes.  There are thousands of them. This is a Hacker News button, built with Component:

The Hacker News button, built with Component, showing dependencies

The registry

The great thing about Component is that it fetches the files specified in the component.json from Github, following the pattern “organisation/repository” (you can specify other locations). This is great. The namespacing stops the bun-fight for cool names, because the organisation is included in the unique identifier.

The other major benefit of this is that you can fork a component, modify it and point your app at your own repo if you’re not getting any of your pull requests integrated.

App structure

But it’s not really about 3rd party modules. In my head it’s more about how you structure the code that drives your page.

Component allows you to write completely self-contained features and plug them together. Your components will be DRY and follow the SRP. Each component can have scripts (e.g. JavaScript, or CoffeeScript), styles (e.g. CSS, or Less, or Sass), templates (e.g. compiled from HTML, or Jade), data (e.g. JSON), images, fonts and other files, as well as their own dependencies (other components). All this is specified in the component.json file, which points to everything the component needs, and informs the build step so that everything is packaged up correctly. It can be a little laborious to specify everything in the component.json, but it’s worth it. When you install a component, the component.json specifies exactly which files (in the Github repo) should be downloaded (unlike Bower, for example, where the whole repo has to be fetched) – check out how fast “component install” is.

The self-contained nature of components means that you don’t have a separate scripts folder with every script for the page in it, and a styles folder with all the CSS. Instead, everything is grouped by function, so everything the component needs is contained in the component’s folder. At build time, you can use Grunt to run component build which transpiles the CoffeeScript to JavaScript, the Less to CSS, the Jade to JavaScript functions, and packages the assets. The dependencies are analysed and all the JavaScript ends up in the right order in one file, all the CSS in another. These and the other assets are copied to the build directory, uglified/compressed ready for delivery to the client.

Getting started

The best docs are in the wiki on the Github repo. The FAQ is especially germane. And TJ’s original blog post is great reading, including the rather brilliant discussion about AMD vs Common JS modules. AMD was invented for asynchronous loading. But when you think about it, you’re gonna package all your script up in one compressed HTTP response anyway; there’s still too much overhead associated with multiple requests, even with HTTP keepalive (it’s not so bad with Spdy). The perceived benefits of loading asynchronously, as required, are not yet fully realisable, so we may as well go for the simple require and module.exports pattern we know and love from node.js.

If you’re using CoffeeScript, Jade and JSON in your components, you can use a Gruntfile that looks like this (which contains a workaround for the fact that the coffee compilation step changes the filename extensions from .coffee to .js):
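A sketch along those lines, assuming grunt-contrib-coffee and grunt-component-build (the Jade and JSON handling is left out for brevity, and the exact workaround will depend on where your compiled files need to end up):

module.exports = function (grunt) {
  grunt.initConfig({
    coffee: {
      components: {
        files: [{
          expand: true,
          src: ['local/**/*.coffee'],
          ext: '.js' // compiled output sits next to the source, with a .js extension
        }]
      }
    },
    component_build: {
      app: {
        output: './build/',
        scripts: true,
        styles: true
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-coffee');
  grunt.loadNpmTasks('grunt-component-build');

  // compile .coffee to .js first, so the filenames the build expects actually exist
  grunt.registerTask('default', ['coffee', 'component_build']);
};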

We’ve tried a bunch of different tools to solve the problem of easily and efficiently distributing your app to the browser. All of them have flaws. We used to love Jam.js and Bower. But we got into a jam with jam, because updates were getting jammed due to unresponsive maintainers (sorry, couldn’t resist that). Bower was great, but too heavy. Browserify is too tightly coupled with node.js and npm. None of them makes simple, self-contained, focused modules as straightforward and elegant as Component. Nice one, TJ!

14 Oct 2013

Robots, pedal bins and dTrace: The 2013 Great British Node Conference

by Stephen Fulljames


If there’s a common theme from the popular London Node User Group evening meet-ups, from which the Great British Node Conference has evolved as a full day event, it’s that the Node.js ecosystem appears to be approximately 50% useful production tooling and 50% wonderfully insane hacks – with both sides of the personality aided by Node’s asynchronous nature and ability to process data I/O very quickly.

This ratio felt like it was also borne out during the conference, the first big event to be held at the brand new Shoreditch Works Village Hall in Hoxton Square. The event space itself was great; fashionably minimal with rock-solid wifi and an on-site coffee shop. The only slight niggle was that the low ceiling height meant the presentation screens became partially obscured by those seated in front, but with two projectors on the go you could usually get a clear view of one.

So, on to the talks. As mentioned there was a definite split between “useful” and “wtf?” and also between micro and macro ideas. Paul Serby of Clock kicked off with a review of his company’s experience of Node in production use for clients over the last 3 years, which was high level but a great introduction to the philosophy behind adopting Node and some of the successes and pain points along the way. It was interesting, and pleasing, to see that their journey has been similar to our own switch towards Node at Red Badger with many similar learnings and changes to our respective programming styles.

Performance was a big theme of the day, both in Paul’s overview talk and in examples much closer to the metal, such as Anton Whalley’s forensic examination of a memory leak bug in the node-levelup module (a wrapper for LevelDB). Usually hand-in-hand with mention of performance was the use of dTrace – not a Node tool in itself but a very useful analysis tool for discovering how applications are running and identifying the source of problems. The overall picture from this being that while Node can offer great performance advantages, it can also be prone to memory leaking and needs careful monitoring in production.

Other talks at the practical end of the spectrum included Hannah Wolfe on Ghost, a new blogging platform built on Node which is looking like an interesting alternative to WordPress and, after a very successful Kickstarter campaign to raise funding, should be generally available very soon. Tim Ruffles also took us through the various options (and pitfalls) for avoiding the callback hell which asynchronous programming can often fall into. There are a few useful flow control modules available for Node already, but as the Javascript language develops, native features to help with async flows – known as generators, but acting in a similar way to C#’s yield – will start to become available both in Node and in browsers as they adopt ES6.

Over on the hack side, we were treated to the now obligatory sight of a Node-driven quad-copter drone crashing into the audience and then a brilliant demonstration by Darach Ennis of his Beams module, which attempts to give compute events the same kind of streaming behaviour that I/O enjoys in Node. The key difference being that compute streams are necessarily infinite, and the Beams module allows you to filter, merge and compose these compute streams into useful data. The demo was topped off by an interactive light-tennis game adjudicated by a hacked Robosapiens robot which not only reacted to the gameplay but also ran the software which drove the game.

Probably the highlight for me, although its relation to practical application at work was close to zero, was Gordon Williams talking about Espruino, a JS interpreter for micro-controllers. Running at a lower level than the well-known Raspberry Pi or even Arduino boards, micro-controllers are the tiny computers that make all the stuff around us work, and typically have RAM measured in kilobytes. For anyone who ever tried to write games on a ZX Spectrum this may bring back memories! Gordon showed real-time development via a terminal application, also hooked up to a webcam so we could watch him create a pedal bin which opened based on a proximity sensor. Maybe not useful in my work at Red Badger, but I could instantly see loads of applications in my personal interests, and thanks to the immediate familiarity of being able to use Javascript in a new context I’m definitely going to look into Espruino some more.

Overall this felt like a conference where delegates were looked after probably better than any I’ve been to for a long time, with plenty of tea and biscuits, great coffee and chilled water on hand and a catered lunch and evening meal nearby. Whether this was down to the smaller scale of the event (around 150 attended) or the care and attention to detail taken by the organisers I’m not sure, but either way I came out of it feeling enthusiastic for Node (both practically and hackerly) and eager to go back next time.

5 Jun 2013

Using PubSub in a Require.js app

by Stephen Fulljames

Recently a friend asked “Can you use Require.js with PubSub?”, and since I’ve just done exactly that on a project I thought it was worth writing up a simple example. The Publish-Subscribe (PubSub) pattern is a useful way to pass event triggers and data around an application without having to explicitly wire the origin and the target together. A PubSub mechanism, sometimes called an Event Bus, is used to publish data to named topics, and then any other part of the code can subscribe to those topics and receive any data that is published.

In this way, for example, an AJAX API can fetch data from a server and publish the results out to your Event Bus without needing to know where the data has to go – anything that is interested in it will pick it up.

The complication in this case is that the AMD module coding style that Require.js uses prefers splitting your code up into small chunks in individual files, and it’s maybe not clear how a PubSub mechanism would work across all these disparate parts. The good news is, it’s not too difficult!

I’ve created a simple example of a shopping list application which fetches the ingredients you need to buy from an API, and posts back to the API when you’ve picked up each thing. We’re making eggy bread, because that’s what my daughters asked for for breakfast yesterday.

The following code is written in Coffeescript, which suits Require’s modular style, but if you look at the sample app on Github there’s also a version compiled to plain Javascript if you prefer to read that way. There’s also a Grunt file in the project if you want to modify and build the code yourself.

The demo consists of an HTML file, which displays the shopping list and has appropriate buttons to trigger GET and POST behaviour on the API, and four Coffeescript files.

index.html
js/
  lib/
    knockout.js
    pubsub.js
    require.js
  api.coffee
  app.coffee
  config.coffee
  viewmodel.coffee

I’m using Knockout as a viewmodel in my HTML, simply because that’s what was in the original project code I’ve stripped this out of, but you should be able to use any MV* library you prefer or indeed any other method you like to present the data.

In the HTML we place a script element for the Require library, and set its data-main attribute to load the application config first.

<script data-main="js/config" src="js/lib/require.js"></script>

The Require config is extremely simple, we’re just creating paths to both the libraries (Knockout and PubSubJS) the application uses and then requiring the main application file and calling its init method.

require.config
  paths:
    "knockout": "lib/knockout"
    "PubSubJS": "lib/pubsub"

require ['app'], (App) ->
  App.init()

I’m not going to go into the minutiae of how Require works; there are plenty of resources out there if you want to look at it in more detail, and for now I’ll assume you have at least a basic understanding.

The application file is likewise not complicated: we’re requiring Knockout and our Viewmodel and API classes, instantiating the latter two, and then using Knockout’s applyBindings method to wire the viewmodel to the DOM.

define ['knockout','viewmodel','api'], (ko,Viewmodel,API) ->
  init: ->

    api = new API()
    vm = new Viewmodel()

    ko.applyBindings vm

You’ll notice at this point there’s no mention of the pubsub library; this is entirely contained within the Viewmodel and API.

Let’s take a look at the API. It exposes two methods – getContent and postContent. This is entirely faked for demo purposes, so all these methods do is wait 250ms and then return some data. The API class also stores an array of ingredients, which is the data sample we’ll send to the Viewmodel.

define ['PubSubJS'], (pubsub) ->

  delay = (ms, func) -> setTimeout func, ms

  list = ['Eggs','Milk','Bread','Vanilla Essence']

  class API
    constructor: ->

      pubsub.subscribe '/load', (channel, msg) =>
        return unless msg

        @getContent()

      pubsub.subscribe '/remove', (channel, msg) =>
        return unless msg

        @postContent msg

    getContent: ->
      # equivalent of an AJAX GET
      delay 250, -> pubsub.publish '/loaded', list.slice 0

    postContent: (data) ->
      list.splice list.indexOf(data), 1
      # equivalent of an AJAX POST (or DELETE)
      delay 250, -> pubsub.publish '/removed', data

You’ll see that the class requires the PubSubJS library, and then in its constructor sets up a couple of subscribers for events that the Viewmodel will publish. The getContent and postContent methods then publish on their own topics when they complete. The postContent method also deletes the posted item from the array so if the user tries to get the list again we can pretend the API actually did something.

(The delay method is just a convenience wrapper for setTimeout to make it more Coffeescript-y)

And now the Viewmodel.

define ['knockout','PubSubJS'], (ko, pubsub) ->
  class Viewmodel
    constructor: ->

      @firstLoad = ko.observable false
      @status = ko.observable()

      @list = ko.observableArray()

      @instructions = ko.computed =>
        if @firstLoad() then 'All done!' else 'Time to go shopping'

      @load = ->
        pubsub.publish '/load', true

      @remove = (data) ->
        pubsub.publish '/remove', data

      pubsub.subscribe '/loaded', (channel, msg) =>
        return unless msg

        @list msg
        @firstLoad true unless @firstLoad()
        @status 'API content loaded'

      pubsub.subscribe '/removed', (channel, msg) =>
        return unless msg

        @list.splice @list.indexOf(msg), 1
        @status 'Item removed via API'

Again we require the PubSubJS library, as well as Knockout. Its constructor does the usual Knockout setup to create its observables and other viewmodel functions, and then binds the equivalent PubSub subscribers to listen for activity from the API. The @load and @remove methods are bound to buttons in the UI, and they publish the events which the API listens for to trigger its (fake) AJAX calls.

Both parts of the application are completely decoupled from each other, communicating only through PubSub, and this separation of concerns means you could replace the API or the UI without the other side ever, in theory, needing to be changed.

And that’s about all of it. To make the demo meaningful I’ve set the Viewmodel and HTML bindings up so that a shopping list is displayed after the user clicks the ‘Get list from API’ button, and removes items from the list when the API reports the remove behaviour was successfully posted.

So how does it work? The PubSubJS library, used in this example, operates as a global object, so when the Viewmodel and API classes require it they’re referring to the same object. Therefore, when topics are subscribed to and published, PubSubJS’s event bus is available to all parts of the application and data can be received and sent from anywhere. That doesn’t mean you should necessarily use that library. Backbone, for example, has PubSub built in through its event system. And others are available. But most work on the same principle, so if you have a favourite, give it a quick test and make sure it works.

All the code for the demo is on Github so do check it out for yourself and play around with it.

26 Mar 2013

Jam.js and Bower

by Jon Sharratt

Over the last few months of last year we have been crafting away and exploring many new and exciting frameworks and technologies. Too many to mention… but our core stack of late has mostly consisted of node.js. As node.js people know, the npm package manager that comes bundled with it is a fantastic tool for resolving a project’s dependencies.

With this in mind, Stu and I have been hard at work with a large client in the finance sector and looked to ‘resolve’ our client side dependencies in an equivalent manner. We had a look around at some of the options, and for our use case the same two kept cropping up.

Bower

The folks over at the Twitter Engineering team have been hard at work again and built Bower as a client-side package manager. It allows you to resolve your dependencies via a component.json file and install the relevant git paths as required. This approach is lower level than the alternatives – it doesn’t limit you to any transport; ultimately you decide how you wish to deliver your packages to the client.

The great thing is that you can just point to git paths and you are away, without depending on other people to own and manage package updates. From the experience we had with Jam.js, it might be fair to say it would also have taken less “tinkering time” to get the require.js transport working client side alongside some out-of-date package versions we couldn’t update.

Jam.js

Jam.js is a package manager with a different spin: it installs packages from a core jam.js package repository, resolves dependencies, and uses AMD / require.js to transport your packages to the client. The configuration for the packages is defined and stored within your standard package.json file.

There is a down side that is quite commonly raised, in that you upload a package to a repository separate from github. Once a package is uploaded, the owner is the only person who can update it (unless the owner adds other users). There is an option to set up your own repository and have jam search the core repository in addition to the one you create, but to me this is a big limitation in regards to package management.

Package management should, in my opinion, be sourced from one place and be concise, self managing, effective to install and configured to deliver to the client in the best possible manner. Jam.js for me meets three out of four of these criteria.

I have started a pull request to try and help with the updating of packages. The idea is taken from github / git, allowing users to submit pull requests to package repositories. The initial implementation of this feature, with a description, can be found at: https://github.com/caolan/jam/issues/128

In Summary

We made the choice of Jam.js for our project as it suited our needs to have require.js configured and set up as the transport to the client (once we cracked some of the require.js path configuration). All you need to do is quite naturally add your jam dependencies within your package.json and type jam install on the command line, just like npm install, and away you go.
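For example, in package.json (a minimal sketch – names and versions are illustrative):

{
  "name": "myapp",
  "version": "0.0.1",
  "jam": {
    "dependencies": {
      "jquery": "1.8.x",
      "underscore": "1.4.x"
    }
  }
}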

It is not to say we will never use Bower, as it does its job well, and the fact that it is lower level may suit other projects where transport via AMD is not a preference. We may also switch if we find the package management and the central package repository not being kept up to date by the community – this is where Jam.js can and may fall down.

 

15 Feb 2013

Automating myself – part 1

by Stephen Fulljames


Having gone through the onerous task of selecting what type of laptop I wanted at Red Badger (13″ MacBook Pro with 16gb RAM and 512gb SSD) and waiting impatiently for it to be delivered, it was time to get it ready to use.

My previous personal Mac, a late 2010 11″ Air, has done sterling service – even just about managing when asked to run a couple of virtual machines concurrently alongside a Node.js environment. But it was starting to creak a bit and the poor thing’s SSD was chock full of all the random crap that had been installed to support various projects. So as well as the stuff I regularly use there were also leftover bits and pieces of Java, Python and who knows what else.

At Red Badger we’ve largely adopted Vagrant and Chef as a great way of provisioning development environments without imposing on the personal space of your own laptop. Clone a repo from Github, run ‘vagrant up’ on the project directory, the Vagrant script installs and configures a VM for you, ‘vagrant ssh’ and there you are with a fresh, clean dev environment that can be reconfigured, reloaded and chucked away as often as you like. Joe is going to be writing more about this process soon.

So I decided I would challenge myself to install the bare minimum of tooling to get up and running, with all the stuff specific to our current project residing in a virtual machine. This should also, in theory, make a nice solid base for general front-end development to current practices. And hopefully leave 400gb-odd of disc space to fill up with bad music and animated cat gifs.

The clock was ticking…

  1. Google Chrome – goes without saying really (3 mins)
  2. Node.js – nice and easy these days with a pre-packaged installer available for Mac, Windows and Linux (5 mins)
  3. Useful Node global modules – First trip into the terminal to ‘sudo npm install -g …’ for coffee-script, grunt, express, less, nodemon and jamjs. This is by no means an exhaustive list but gives me all the Node-based essentials to build and run projects. (5 mins)
  4. Xcode Command Line Tools – Rather than the enormous heft of all of Xcode, the command line tools are a relatively slimline 122mb and give you the bits you need (as a web rather than OS developer) to compile the source code of stuff like wget. An Apple developer account is needed for this so I can’t give a direct link, unfortunately. (15 mins)
  5. Git – Last time I did this I’m sure there was a fair amount of messing around but there is now an installer package for OS X (5 mins)
  6. Virtual Box – This is where we start getting in to the real time saving stuff. Virtual Box is a free VM platform which can be scripted, essential for the next part (10 mins)
  7. Vagrant – This is the key part, as explained in the introduction. Once Vagrant is up and running, a pre-configured dev environment can be there in just a few minutes (3 mins)
  8. Github ssh key config – Back to the command line. To be able to clone our project from its Github repo I had to remember how to do this, and the documentation on the Github site doesn’t have the best signposting. (5 mins)
  9. Clone repo and post-install tasks – Get the code down, then run ‘npm install’ and ‘jam install’ in the project directory to fetch server and client dependencies. Run ‘grunt && grunt watch’ to build CoffeeScript, Less and Jade source into application code and then watch for file changes. (2 mins)
  10. Vagrant up – With the repo cloned and ready, the ‘vagrant up’ command downloads the specified VM image and then boots it and installs the relevant software. In this case, Node and MongoDB. At a touch over 300mb to download (albeit a one-off if you use the same image on subsequent projects) this is the longest part of the whole process (20 mins).
  11. Vagrant ssh – Log into the new virtual machine, change to the /vagrant directory which mounts all your locally held app code, run ‘node app/app.js’. Hit Vagrant’s IP address in Chrome (see step 1) and you’re there (1 min).

So there we go. By my reckoning 74 mins from logging in to the laptop to being able to get back to productivity. And because the VM environment is identical to where I was on my old Mac I could literally carry on from my last Git commit. I did those steps above sequentially to time them, but at the same time was installing the Dropbox client for project assets and Sublime Text for something to code in.

Obviously on a new project you would have the added overhead of configuring and testing your Vagrant scripts, but this only has to be done once and then when your team scales everyone can get to parity in a few minutes rather than the hours it might once have taken.

Vagrant really is one of those tools you wish you’d known about (or that had existed) a long time ago and really feels like a step change for quick, painless and agile development. This is also the first project I’ve really used Grunt on, and its role in automation and making your coding life easier has already been widely celebrated. Hence the title of this post, because this feels like the first step in a general refresh of the way I do things and hopefully I’ll be writing more about this journey soon.

13 Feb 2013

Simple 3D without Canvas or WebGL

by Haro Lee

I’ve always been fascinated by 3D and 3D presentation on the web, ever since the tragic VRML (http://en.wikipedia.org/wiki/VRML). During my time at university I poked around Direct3D and OpenGL a little, and since then anything with 3D in its name has attracted my attention, but I never achieved anything serious, thanks to my laziness.

And one day there came a chance to do some 3D action in an actual project – admittedly nothing fancy like what you can see on http://www.chromeexperiments.com/.

My project was a simple coin toss in 3D and didn’t require complicated physics or lighting or anything else that may come with many 3D Javascript libraries.

During the search, I came across Sprite3D.js (https://github.com/boblemarin/Sprite3D.js/tree/v2) that gave me an idea to use CSS3 3D transform instead of canvas or WebGL.

Sprite3D.js is a very simple, light-weight Javascript library that bridges Javascript and CSS3 3D transforms so that the code is more readable. It doesn’t have a physics engine or any prebuilt 3D primitive objects out of the box, except a box or cube – and that was all I needed.

Surprisingly, I struggled to find any articles explaining how to draw a simple 3D object with code from scratch (that I could just copy and paste easily…), such as a closed cylinder for my coin in this case.

So I planned to use a shallow box as the base of the coin, texture the top and bottom faces with the coin’s face and tail, and then place a reasonably smoothly segmented cylinder between the top and the bottom face to make the coin.

 

What is underneath…

The image below shows the basics of how the CSS3 3D space looks.

axis

If the centre of your screen is the origin of the 3D space, positive x is toward your right, positive y is toward the top of the screen, and positive z is toward you. The camera’s position is represented by perspective.

A lower perspective usually produces a dramatically exaggerated sense of distance – so-called perspective distortion, like using a wide angle lens – and a higher perspective vice versa.

It can be quite a fiddly job to find the right perspective for different situations, but normally between 700px and 1000px is acceptable. Sprite3D.js uses 800px as the default.
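In CSS, the camera is just the perspective property on the scene’s container (a minimal sketch, with the vendor prefix browsers needed at the time):

.stage {
  -webkit-perspective: 800px;
  perspective: 800px;
}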

 

A Box that is…

After the stage is set, I made a box to use as the base of the coin.

basic_box

http://jsfiddle.net/musshush/GdFJ6/

Easy!

 

Lots of triangles…

The next step is to make a cylinder, and this gets a bit trickier.

Let’s start with an 8-facet cylinder, so it’s easier to illustrate.

octagon

In short, to draw a polygon like the one above, we need “w”, the width of each facet, and the angle “d” of the first facet, which is used to find “p(x, y)”; we then duplicate the facet and rotate it by “d” until we fill the full circle.

First, we need “d”, the angle of each facet, converted to radians for use with Math.sin() and Math.cos().
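In code, for an n-facet cylinder (a sketch):

var n = 8;
var d = 360 / n;             // angle of each facet in degrees
var rad = d * Math.PI / 180; // the same angle in radians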

And get “w”, the width of the facet:
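Given the coin’s radius r (assumed here), “w” is the chord of that angle:

var r = 100;                       // coin radius in px, for illustration
var w = 2 * r * Math.sin(rad / 2); // width of each facet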

And get the distance between “o” and “p”, which gives the position of the first facet:
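That distance is the apothem of the polygon:

var op = r * Math.cos(rad / 2); // distance from the centre "o" to the facet midpoint "p"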

And then rotate it for each facet:
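A plain-DOM sketch of the loop (Sprite3D.js wraps the same transforms behind a friendlier API):

var cylinder = document.querySelector('.cylinder');
for (var i = 0; i < n; i++) {
  var facet = document.createElement('div');
  facet.className = 'facet';
  facet.style.width = w + 'px';
  facet.style.transform = facet.style.webkitTransform =
    'rotateY(' + (d * i) + 'deg) translateZ(' + op + 'px)';
  cylinder.appendChild(facet);
}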

box_cylinder

http://jsfiddle.net/musshush/cRxML/

With an 80-facet cylinder it looks a lot smoother.

box_smooth_cylinder

http://jsfiddle.net/musshush/xXHrg/

And now we texture the cylinder and make it a bit thinner so it looks more like a coin.

coin_textured

http://jsfiddle.net/musshush/VnU63/

 

Finally…

Now it’s time to toss it and add a shadow layer for the finishing touch.

coin_shadow

http://jsfiddle.net/musshush/4ELHK/

With a bit of randomness from Math.random() and an easing function, it’s possible to create a convincing animation without a physics engine.