Posts Tagged ‘html5’

17 Nov 2013

Component

by Stuart Harris

I should have written this post a while ago because something I love is not getting the traction it deserves and writing this earlier may have helped in some small way to change that.

Earlier this year we spent a lot of time trying to understand the best way to package and deliver client-side dependencies. It’s a problem that afflicts all modern web development regardless of the stack you use. Most of the solutions we tried don’t address the real problem, which is about delivering large monolithic libraries and frameworks to the client because some small part of them is needed. Like jQuery, for example. Even underscore, as much as I love it. You might use a few features from each. And then it all adds up. Even uglified and gzipped, it’s not uncommon for a page to be accompanied by up to a megabyte of JavaScript. That’s no good. Not even with pervasive broadband. Especially not on mobile devices over a flaky connection.

Some of these, like bootstrap, allow you to customise the download to include just the bits you want. This is great. But a bit of a faff. And it seems like the wrong solution. I don’t know many people that actually do it.

As an industry we’re moving away from all that. We’re learning from the age-old UNIX way that Eric Raymond so brilliantly described in The Art of UNIX Programming: small, sharp tools, each only doing one thing but doing it well. Modern polyglot architectures are assembled from concise and highly focussed modules of functionality. Software is all about abstracting complexity because our brains cannot be everywhere at once. We all know that if we focus on one job and do it well, we can be sure it works properly and we won’t have to build that same thing again. This is the most efficient way to exploit reuse in software engineering.

But small modules have to be composed. And their dependencies managed. We need something that allows us to pluck a module out of the ether and just use it. We want to depend on it without worrying about what it depends on.

npm is one of the best dependency managers I’ve used. I love how it allows your app to reference a directed acyclic graph of dependencies that is managed for you by the beautiful simplicity of ‘require’ (CommonJS modules). In node.js, this works brilliantly well, allowing each module to reference specific versions of its dependencies so that overall there may be lots of different versions of a module in the graph. Even multiple copies of the same version. It allows each module to evolve independently on its own track. And it doesn’t matter how many different versions or copies of a library you’ve got in your app when it’s running on a server. Disk space and memory are cheap. And the stability and flexibility it promotes is well worth the price.

But on the client it’s a different story. You wouldn’t want to download several versions of a library in your page just because different modules were developed independently and some haven’t been updated to use the latest version of something. And the bigger the modules are the worse this would become. Fortunately, the smaller they are, the easier they are to update and the less they, themselves, depend on in the first place. It’s simple to keep a small module up to date. And by small, I’m talking maybe 10 lines of code. Maybe a few hundred, but definitely not more than that.

Enter Component by the prolific (and switched on) TJ Holowaychuk. Not perfect, but until we get Web Components, it’s the best client-side module manager out there. Why? Because it promotes tiny modules. They can be just a bit of functionality, or little bits of UI (widgets if you like). If you use Component, you’re encouraged to use, and/or write, small modules. Like a string trimmer, for example; only 13 lines of code. Or a tiny, express-like, client-side router in under 1200 bytes. There are thousands of them. This is a Hacker News button, built with Component:

The Hacker News button, built with Component, showing dependencies

The registry

The great thing about Component is that it fetches the files specified in the component.json from Github, following the pattern “organisation/repository” (you can specify other locations). This is great. The namespacing stops the bun-fight for cool names because the organisation is included in the unique identifier.

The other major benefit of this is that you can fork a component, modify it and point your app at your own repo if you’re not getting any of your pull requests integrated.

App structure

But it’s not really about 3rd party modules. In my head it’s more about how you structure the code that drives your page.

Component allows you to write completely self-contained features and plug them together. Your components will be DRY and follow the SRP. Each component can have scripts (e.g. JavaScript, or CoffeeScript), styles (e.g. CSS, or Less, or Sass), templates (e.g. compiled from HTML, or Jade), data (e.g. JSON), images, fonts and other files, as well as their own dependencies (other components). All this is specified in the component.json file, which points to everything the component needs, and informs the build step so that everything is packaged up correctly. It can be a little laborious to specify everything in the component.json, but it’s worth it. When you install a component, the component.json specifies exactly which files (in the Github repo) should be downloaded (unlike Bower, for example, where the whole repo has to be fetched) – check out how fast “component install” is.
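To give a feel for it, the component.json for a small, hypothetical widget might look something like this (the names, files and dependencies are illustrative, not a real published component):

```json
{
  "name": "tooltip",
  "repo": "your-org/tooltip",
  "description": "A tiny tooltip widget",
  "version": "0.1.0",
  "main": "index.js",
  "scripts": ["index.js", "template.js"],
  "styles": ["tooltip.css"],
  "images": ["arrow.png"],
  "dependencies": {
    "component/emitter": "*",
    "component/events": "*"
  }
}
```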

The self-contained nature of components means that you don’t have a separate scripts folder with every script for the page in it, and a styles folder with all the CSS. Instead, everything is grouped by function, so everything the component needs is contained in the component’s folder. At build time, you can use Grunt to run component build which transpiles the CoffeeScript to JavaScript, the Less to CSS, the Jade to JavaScript functions, and packages the assets. The dependencies are analysed and all the JavaScript ends up in the right order in one file, all the CSS in another. These and the other assets are copied to the build directory, uglified/compressed ready for delivery to the client.

Getting started

The best docs are in the wiki on the Github repo. The FAQ is especially germane. And TJ’s original blog post is great reading, including the rather brilliant discussion about AMD vs CommonJS modules. AMD was invented for asynchronous loading. But when you think about it, you’re gonna package all your script up in one compressed HTTP response anyway; there’s still too much overhead associated with multiple requests, even with HTTP keepalive (it’s not so bad with SPDY). The perceived benefits of loading asynchronously, as required, are not yet fully realisable, so we may as well go for the simple require and module.exports pattern we know and love from node.js.
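For reference, that pattern inside a component’s own script is just this (a trivial, made-up example that assumes a dependency on component/trim):

```js
// index.js – the component's entry point
var trim = require('trim');   // resolved from the dependencies listed in component.json

module.exports = function greet(name) {
  return 'Hello, ' + trim(name) + '!';
};
```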

If you’re using CoffeeScript, Jade and JSON in your components, your Gruntfile might look something like the rough sketch below, which includes a workaround for the fact that the coffee compilation step changes the filename extensions from .coffee to .js (the grunt-contrib-coffee and grunt-shell plugins, and the paths, are illustrative rather than the exact setup):
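```js
// Gruntfile.js – a rough sketch, not the exact file we used.
module.exports = function (grunt) {
  grunt.initConfig({
    coffee: {
      compile: {
        // Compile .coffee in place so the .js filenames referenced in
        // component.json exist before the component build runs – one way
        // of handling the .coffee -> .js rename.
        expand: true,
        cwd: 'local',
        src: ['**/*.coffee'],
        dest: 'local',
        ext: '.js'
      }
    },
    shell: {
      component: {
        command: 'component install && component build'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-coffee');
  grunt.loadNpmTasks('grunt-shell');

  grunt.registerTask('default', ['coffee', 'shell:component']);
};
```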

We’ve tried a bunch of different tools to solve the problem of easily and efficiently distributing your app to the browser. All of them have flaws. We used to love Jam.js and Bower. But we got into a jam with jam, because updates were getting jammed due to unresponsive maintainers (sorry, couldn’t resist that). Bower was great, but too heavy. Browserify is too tightly coupled with node.js and npm. None of them make simple, self contained, focused modules as straightforward and elegant as Component. Nice one, TJ!

8 Aug 2013

Faster, longer and more responsive

by Stephen Fulljames


You know how it is – you set out to make “a few content changes” and end up with a whole new web site. We wanted to make a bit of an adjustment to the version 2 Red Badger site, launched in mid-2012, to give a better feel for who we are and what we can offer. In the end, it made more sense to go the whole way and create version 3. So here we are.

The fundamentals

In the previous version of the site we used WordPress to manage most content, but rather than use it to render every page we instead pulled content through its JSON API to feed a more flexible front-end built on Node and Express. This was fine in theory, but in practice performance wasn’t really as good as we’d hoped – even with some aggressive caching built in – and any kind of content beyond blog posts turned into a tangle of custom fields and plugins. WordPress specialists would probably have an answer for all the problems we faced, but we felt it was time to strike out and try something different.

This also coincided with an interest in evaluating Docpad, a semi-static site generator built on Node. If you’re familiar with Jekyll on Ruby or Hammer for Mac, it’s kind of like that, but as well as building purely static sites it can also run with an Express server to allow some dynamic behaviour where needed. We hacked on it for a couple of days, I rebuilt my personal site with it, and then, liking what we saw, we decided to proceed.

The principle of Docpad and other site generators is pretty simple. You have a folder of documents written in any supported format (we’re using Jade and Markdown), reflecting the desired URL structure of the site, a folder of templates, and a folder of static assets. The generator runs through the documents, compiling where needed, applying the templates, and saves them into an output directory along with a copy of the assets. Logic in the templates is responsible for constructing navigations and outputting collections of data – just as a dynamically rendered site would – but it only does it once. The theory being, your site only changes when you act to change the content, so why do you need to serve something other than flat files between those changes?
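As a hedged illustration, a document such as src/documents/what-we-do.html.md (the path and content here are made up) carries its meta data at the top and its content below, and is rendered from Markdown to HTML through the templates:

```
---
title: "What we do"
layout: "default"
---

# What we do

This Markdown becomes the page body; Docpad compiles it to HTML and wraps it
in the "default" template when the site is generated.
```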

Docpad is light in its core form, but there’s a good plugin ecosystem to extend it. We’re using plugins to compile SASS, Markdown, Jade and Coffeescript, fetch RSS from Flickr using Feedr and serve pages with clean URLs rather than .html extensions (this is where the Express part comes in handy). We’ve also created a few of our own. The main one is to allow us to curate “featured” items in the meta data of any page – so if you look on the homepage, for example, all the content items below the masthead are manually set and can be changed and reordered simply by altering a list of relative URLs. We’re also using custom plugins to pull in Tweets and posts from the old WordPress blog, and Docpad’s event system makes it easy to hook this data into the appropriate render step in the site’s generation.
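For example, the home page’s meta data might carry a list along these lines for the plugin to resolve at render time (the field name and URLs are illustrative):

```
---
title: "Home"
layout: "home"
featured: ["/what-we-do", "/about-us/stephen-fulljames", "/blog/responsive-redesign"]
---
```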

For the time being we’re still using WordPress for our blog; recent posts are imported by the Docpad site on server start, using a custom plugin, so they can be featured on the home and other pages and listed on our team profiles. In the longer term we’re planning to move the blog itself on to Docpad as well, but that will need careful consideration to make sure all the functionality currently given by WordPress plugins can be maintained (or improved!).

We’re responsive

The previous iteration of the site was built using the Bootstrap CSS framework. My own views on that notwithstanding, some of the design changes we were planning meant it made sense to look again at our approach to the CSS.

As the site design created by Sari doesn’t inherit from Bootstrap’s design opinions, only a small subset of it – mainly the grid – was relevant, and with full-width background patterns on each section of a page it was structurally easier to start over.

That’s not to say we’ve rejected it completely. In the new grid system we’re still using some of Bootstrap’s thinking, along with patterns adopted from Harry Roberts’ Inuit.css and Chris Coyier’s thoughts on minimal grids. So we haven’t really done anything earth-shakingly innovative, but we have found it to be a solid, responsive grid that – most importantly – we completely understand and can use again.

Site grid structure (a rough SCSS sketch follows the numbered list):

1 – ‘section’ element. Inherent 100% width. Top/bottom padding set, optional background texture applied by class, otherwise unstyled. The paper tear effect is added with the :after pseudo-element.
2 – ‘div.container’. Width set to site default (920px) with left/right margin auto for centering. Any full width headings are placed directly inside this element.
3 – ‘div.row’. Unstyled except for negative left margin of default gutter width (40px) to compensate for grid element margin; :before and :after pseudo-elements set table display and clear as with Bootstrap.
4 – ‘div.grid’. Floated left, all .grid elements have a left margin of default gutter width (40px). Default width 100% with override classes for column width; in the case above we use ‘one-third’. Background texture classes can also be applied here.
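A rough SCSS sketch of that structure; the variable names are ours, and the percentages are back-calculated from the 920px and 40px figures rather than lifted from the production stylesheet:

```scss
$site-width: 920px;
$gutter: 40px;

section {
  padding: 40px 0;            // top/bottom padding (value illustrative); textures added by class
}

.container {
  width: $site-width;
  margin: 0 auto;             // centre the content column
}

.row {
  margin-left: -$gutter;      // compensate for the grid elements' left margin

  &:before,
  &:after {                   // clearfix, as in Bootstrap
    content: " ";
    display: table;
  }

  &:after {
    clear: both;
  }
}

.grid {
  float: left;
  margin-left: $gutter;
  width: 95.833%;             // 920px of the 960px row: full width once its 40px gutter is added
}

.one-third {
  width: 29.167%;             // 280px of the 960px row: a third once its 40px gutter is added
}
```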

We’ve switched from our regular LESS to SASS for this version of the site, again as a way to explore what it can do. With Docpad it’s as easy as swapping plugins, and all the compilation is handled for you during site generation. Needless to say the site is responsive, with the grid gradually collapsing down to a single column view and elements such as the services carousels and case study quotes shifting layout subtly to suit smaller viewports. And to make sure corporate decision makers get a good experience we’ve also tested and fixed back to IE8.

Another finishing touch is that we chose to use a custom icon font for the various calls to action and other decorations around the site. We used Icomoon to generate it from our own vector artwork, and this has saved an enormous amount of time because you can colour the icons with CSS rather than cutting and spriting images for all the variants you might need. The adoption of icon fonts has had a few challenges, chief among them accessibility (screen readers tended to attempt to announce them), but with the technique of encoding icons as symbols in the Private Use Area of the Unicode range this problem is now largely overcome.
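The technique boils down to something like this (hypothetical font and class names; IcoMoon generates the real ones):

```css
@font-face {
  font-family: 'rb-icons';
  src: url('fonts/rb-icons.woff') format('woff'),
       url('fonts/rb-icons.ttf') format('truetype');
  font-weight: normal;
  font-style: normal;
}

[class^="icon-"]:before,
[class*=" icon-"]:before {
  font-family: 'rb-icons';
  font-style: normal;
  font-weight: normal;
  speak: none;                /* hint that screen readers shouldn't announce it */
}

.icon-arrow:before {
  content: "\e001";           /* a Private Use Area code point, so nothing is read aloud */
  color: inherit;             /* recoloured with plain CSS, no sprites to cut */
}
```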


There were still a few things we found in the process of creating our icon font. It’s not advisable to do anything more complicated than just use them as icons, for example, as it’s very hard to align them perfectly to regular fonts without browser hacks. Each browser has a slightly different way to calculate and render line height. Also, depending on how much weight you give this, they’re not supported by Internet Explorer on Windows Phone 7 (although the symbols it uses instead are quite cute).

What’s next

Of course a consultancy’s own web site is never really finished, as we’re always looking to show our capabilities and keep up with the latest thinking in a world of fast-moving technological change. And the cobbler’s boots syndrome also applies; it’s hard to find time to look after your own when you’re always busy helping clients deliver their own transformative projects.

But we feel like we’ve achieved a stable, maintainable new site and it’s time to put it out there. It feels much faster than the old one, and we’ve delivered the flexibility in content layout we set out to achieve. We wanted to make sure we were demonstrating our own best practices, and while a good deal of that is under the hood we’d be happy to talk through it in more detail if you’d like to get in touch.

There are tweaks to come, naturally, based on things we’ve already spotted and feedback we’ll no doubt get. One big task will be to make the site more editable for those of a non-technical nature. It’s actually not too difficult, as most content pages are written in Markdown, but a nicer UI exposing that rather than the intricacies of Git pushes and deployments feels like a good next step. No doubt we’ll be blogging again to talk about that when the time comes.

20 May 2013

Something about the Quirky World of Mobile Web Apps

by Haro Lee

On a recent mobile-focused project we ran into a few challenges over getting what turned out to be a fairly complex layout working smoothly and responsively over a range of target devices. It also highlighted a number of real edge-case quirks in how different phones and browsers deal with modern web technologies such as the audio element, and even how ‘shake to undo’ can affect a form that the UI hid several transitions ago.

 

(There was a time mobile phones were simpler… or not…)

 

Libraries

To help with the total payload of the project we chose to use the Zepto library rather than jQuery. This has an API very similar to jQuery’s and the advantage of size but is focussed on modern, mobile browsers so drops support for IE and – significantly – doesn’t cover Windows Phone. That wasn’t a problem on this particular project but worth bearing in mind for the future.

We found a couple of niggles in the Zepto API, mainly related to our own coding styles and the requirements of the project. We missed jQuery’s ability to set a hash of functions against AJAX HTTP response codes, for example.
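In jQuery that’s the statusCode option on $.ajax; with Zepto you end up branching on xhr.status yourself. A rough sketch (the handler functions are made up):

```js
// jQuery: a hash of handlers keyed by HTTP status code
$.ajax({
  url: '/api/game',
  statusCode: {
    404: function () { showMissingMessage(); },  // hypothetical handlers
    500: function () { showErrorMessage(); }
  }
});

// Zepto: no statusCode option, so branch on xhr.status in the error callback
$.ajax({
  url: '/api/game',
  error: function (xhr) {
    if (xhr.status === 404) { showMissingMessage(); }
    else if (xhr.status === 500) { showErrorMessage(); }
  }
});
```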

In the end, given the eventual size of the rest of the application’s payload, the filesize saving of Zepto over jQuery probably didn’t make a whole lot of difference.

 

How to be responsive

The biggest problem encountered while implementing a responsive layout with the given design was how the DOM flow needed to change depending on whether the screen is portrait or landscape. It would have been ideal if we’d had the option to change the design and eliminate the worst of the problems, but once we realised how complicated it would be it was already too late. So a lesson learned…

One of the big problems in trying to fit a fairly fixed size application design to a phone screen is orientation. Fine in a native app; you can just specify if the app fits to portrait or landscape, or both, but in a browser that level of control is not available.

Fitting the level of interaction we needed, responsively, into a landscape phone viewport meant quite a lot of juggling of DOM elements and events when switching from and to portrait layout. This was particularly tricky on iPhones as Safari in landscape orientation has three different possible sizes (address bar visible, address bar hidden, fullscreen) and generally smaller screen sizes than our target Android devices. The eventual experience in landscape wasn’t as satisfying as in portrait but we did manage to make it work.

Because of all the complexities that are not possible to handle only with CSS and media queries, we depended heavily on a Javascript layout manager to handle resize events and orientation changes, calculating the available screen real estate and rearranging elements.

A simpler way to get around this, although perhaps a cheeky one, would be to use the ‘orientationchange’ or window’s ‘resize’ event to effectively hide the main app with a “please turn your phone round” message. Unfortunately the ‘orientationchange’ event is not available on older Android devices.
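A hedged sketch of that fallback, detecting orientation from the viewport dimensions so it also works where ‘orientationchange’ is missing (the overlay element is hypothetical):

```js
// Detect orientation from the viewport dimensions rather than relying on
// the 'orientationchange' event, which older Android browsers lack.
function checkOrientation() {
  var landscape = window.innerWidth > window.innerHeight;
  var overlay = document.getElementById('rotate-message'); // hypothetical "please turn your phone" element
  overlay.style.display = landscape ? 'block' : 'none';
}

window.addEventListener('resize', checkOrientation, false);
window.addEventListener('orientationchange', checkOrientation, false); // where available
checkOrientation();
```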

 

Preloading

The procedure we used to preload assets for our single page web app was to have separate CSS files for the preloading/splash screen and for the main app.

The splash page was part of the app, so we didn’t want users to proceed from there and start using the main app until all the assets were loaded. To achieve this, we made one stylesheet for the splash screen, containing all the generic styles that didn’t require any images, and another for the main app which referenced all the images and assets (icons, background images and so on), and only linked the splash stylesheet to the HTML page. We then used a Javascript preloader which requested all the image assets and the main stylesheet, and attached this CSS file to the HTML page once everything was loaded and ready.

This method could also be used to load different stylesheets and assets depending on the screen size or device, if required.
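A minimal sketch of that kind of preloader (the asset list and element IDs are hypothetical):

```js
// Preload the main app's images, then attach its stylesheet and reveal the app.
var assets = ['img/sprite.png', 'img/background.jpg']; // hypothetical asset list
var remaining = assets.length;

function assetLoaded() {
  remaining -= 1;
  if (remaining === 0) {
    // Everything is cached: attach the main stylesheet and swap screens.
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = 'css/app.css';
    document.getElementsByTagName('head')[0].appendChild(link);

    document.getElementById('splash').style.display = 'none'; // hypothetical element IDs
    document.getElementById('app').style.display = 'block';
  }
}

assets.forEach(function (src) {
  var img = new Image();
  img.onload = img.onerror = assetLoaded;
  img.src = src;
});
```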

 

Sounds

A new technology for us on this project was the Web Audio API which is a programmatic way of cuing and playing sound files rather than relying on a DOM element such as <audio>. Support is limited for now with only modern desktop browsers, and Safari in iOS 6, able to use it, but it looks to be a great way of adding spot effects to browser games and other interactions.

We used the Howler library to add sound to the application, with the expectation that we would use its fallback methods to increase support across devices and browsers. In the end, due to other technical constraints (see ‘Device Quirks’ at the end of this post), we chose to only play sound through the Web Audio API, so if we were to refactor we could remove Howler (not that there’s anything wrong with it) and go direct to the API.
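Going direct to the API looks roughly like this (a hedged sketch; the file path is made up, and webkitAudioContext covers Safari on iOS 6):

```js
// Minimal Web Audio usage: fetch, decode and play a short effect.
var AudioContextCtor = window.AudioContext || window.webkitAudioContext;
var context = new AudioContextCtor();

function loadSound(url, done) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    context.decodeAudioData(request.response, done);
  };
  request.send();
}

loadSound('sounds/ping.mp3', function (buffer) {      // hypothetical file
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start ? source.start(0) : source.noteOn(0);  // older implementations use noteOn
});
```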

 

Jam off the boil

We’ve used the Jam package manager on a few projects now, and while it does have some advantages in dependency management and compilation it also uses its own module install system. So it relies on package owners keeping the versions that are published in Jam (rather than their main Github, etc, repo) up to date. We’ve had a few cases where the version of a library we needed isn’t available yet in Jam.

Jam’s interface with the browser is a modified version of Require, which we like for its clean module loading mechanism. So for future projects we’d look at dropping back to regular Require and finding something else, perhaps a Grunt task, to handle the file concatenation and minification which Jam also offers.

 

App logic

The application logic required a fairly complex state to be maintained, and for this we used our old friend Knockout. We did consider lighter MV* libraries but ultimately felt that with Knockout’s very comprehensive view binding built in we would save ourselves a lot of trouble. The other main aspect of the Javascript architecture was the use of a PubSub (publish-subscribe) library to communicate between the various parts of the app. So, for example, the AJAX API methods were fairly isolated – to enable their easy reuse – with their success states publishing the returned data for the viewmodels to pick up.

This decoupling did present a few edge case bugs around timing issues later on in development, but overall the ability to have communication, view state, preloading, sound and so on implemented independently and talking to each other via PubSub suited the Require modular loading we used as well as resulting in cleaner code.
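Stripped right down, the pattern looks something like this (a hypothetical sketch rather than the library we actually used; the topic name and Knockout observable are made up):

```js
// A tiny publish/subscribe hub, roughly how the app's modules talked to each other.
var pubsub = {
  topics: {},
  subscribe: function (topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  },
  publish: function (topic, data) {
    (this.topics[topic] || []).forEach(function (handler) { handler(data); });
  }
};

// A viewmodel listens for data the AJAX layer publishes when a call succeeds...
pubsub.subscribe('score:updated', function (score) {
  viewModel.score(score);               // hypothetical Knockout observable
});

// ...and the AJAX layer stays isolated: it just publishes what it got back.
pubsub.publish('score:updated', 42);
```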

The complexity of the application was borne out in the eventual size of the DOM, as we had several ‘pages’ within one document which were hidden and shown as required. For smoothness of experience this was useful, but retrospectively we’ve wondered if it might have been better to have a minimal HTML document with more extensive use of Javascript templating through Backbone or similar. Ultimately it’s hard to tell without implementing it, but at least we have a steer for future iterations in this programme of projects.

 

Phonegap and other gotchas

It is well known that the performance of a web app wrapped in the Phonegap library is not as good as a native application.

Our previous Phonegap projects had a lot more DOM elements involved than this one so we hoped for better performance, but there were still a few performance issues on older iOS devices. Aside from that the Phonegap wrapping was pretty straightforward with minimal changes needed.

In the end perhaps the weirdest problem we discovered was that mobile service providers such as T-Mobile and Virgin appear to strip comments from HTML when sending data to mobile devices. This caused failures in our Knockout implementation, which used the library’s containerless control flow (i.e. specially formatted comments) to reduce the number of DOM elements – a very unexpected discovery. We ended up replacing all comment bindings with regular DOM bindings.
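In other words, markup like the first form below was arriving with its comments stripped, so we moved to the second (the binding names here are hypothetical):

```html
<!-- Containerless control flow: the binding lives inside HTML comments,
     which some operators' proxies remove in transit. -->
<!-- ko foreach: players -->
  <span data-bind="text: name"></span>
<!-- /ko -->

<!-- Element-based equivalent, which survives the proxies. -->
<ul data-bind="foreach: players">
  <li data-bind="text: name"></li>
</ul>
```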

 

Device quirks – iOS [I] vs Android [A] vs BlackBerry Z10 [B]

- on-screen keyboard

[I] no viewport size change when hiding/showing the keyboard

[A, B] viewport size changes when keyboard is showing

[older A] even when the form page is hidden, the on-screen keyboard will not be dismissed unless the focus on the text field is forcibly removed.

- in-page scrolling

[A] even with overflow settings it’s still very buggy on the stock browser, but Chrome doesn’t have problems.

- overflow: hidden

[older A] cannot interact with elements inside of “overflow: hidden” even if they are visible

- shake to undo (only on iOS)

[I] we found that a filled-in form which was subsequently hidden could still trigger the iOS ‘shake to undo’ dialog. Unfortunately there is no easy way for users to disable this feature, but having an iframe somewhere in the DOM and changing its src (to anything) disables the “feature”. This hack doesn’t work if the app is wrapped in Phonegap, but in that case you can explicitly turn shake to undo off in the native wrapper.

- <audio> elements

[A] when a sound clip plays in an audio element it stops other background music from other apps

[B] the sound clip files are added to system playlist and so become accessible through its music player

- ‘devicemotion’ event

[A] on Android Chrome browser the ‘devicemotion’ event is not supported, although it is available on the stock browser. Future Chrome updates should add support.

- ‘orientationchange’ event

[older A] not supported

 

co-written by Stephen Fulljames

 

19 Apr 2013

Sorry Bootstrap, it’s over between us

by Stephen Fulljames


I’m sorry Bootstrap but things just aren’t working out between us. I think we should have a break from each other.

To be honest, it’s not me, it’s you.

When we first met I was kind of infatuated. You were so opinionated and successful, persuasive even, so many of my friends couldn’t stop talking about you. It was easy to fall head over heels, and looking back I think I let my heart rule my head.

With you everything felt so straightforward. Prototyping was quick and easy. We put admin screens together in a snap, smiling and laughing the whole time. I began to think there was nothing you couldn’t do, I started to let you lead me.

But then I looked around and realised you’d been seeing other people. It was a real shock.

Everyone was starting to think the same way and it was all because of you. You made such a compelling case for the way that things should be that everyone, myself included, just kind of went along with it. Looking back I realise now that you were too controlling. If a great design didn’t quite fit to the way you wanted to do things, I would just change it to make you happy. It was a lot easier.

Then I came to realise that wasn’t what I wanted after all. I mean, the designers I work with really know what they’re doing. They’re the ones I should listen to first, and if you can’t help me in the way I need you to then I guess it’s time to move on.

I got on just fine before you came along, and I know that I can get back to how things used to be without you. I’m sure I’ll be happier out there on my own, CSS is a wide world and there’s new stuff to see all the time. I just think I’d rather make those discoveries for myself rather than have you colour them.

Don’t be sad, Bootstrap, I’m sure you’ll find someone else. Just be good to them, okay?

Photo: prorallypix, Creative Commons licensed.

 

12 Mar 2012

QCon Notes Part 2: The Future Application Platform

by Can Gencer

Unsurprisingly, there were several talks about the web and JavaScript at QCon. There is no doubt about the meteoric rise of JavaScript in recent years, and it’s hard to imagine that this will not continue. Web browsers have been powered by JavaScript for years and more and more desktop applications are moving to the web. Node.js proved that JavaScript is as good as any other language for building a framework for web applications. The mobile web seems to be the next frontier, and the area where progress is fastest.

JavaScript Today and Tomorrow: Evolving the Ambient Language of the Ambient Computing Era

Allen Wirfs-Brock (Mozilla)

Download Slides

Allen is a Mozilla Research Fellow and was the project editor for the ECMAScript 5/5.1 standard.

Allen started off his talk by illustrating the two major eras in computing, the corporate computing era and the personal computing era. A major shift happened in the late 70s and early 80s, when the move to personal computers radically changed the nature of computing. Currently we are undergoing another significant shift, to what could be called the “ambient computing” era. Ambient computing is characterised by being device-based rather than computer-based, and by being ubiquitous.

Every computing era had a dominant application platform. The dominant platform emerged as the winner through a combination of market demand, a good-enough technical foundation and superior business execution. The dominant platform for the corporate computing era was IBM mainframes. In the personal computing era, the dominant platform was the combination of Microsoft Windows and the Intel PC (lovingly called Wintel). In the emerging ambient computing era, it is becoming clear that the new application platform will be the web.

Each computing era also had a canonical programming language – COBOL/Fortran for mainframes and C/C++ for personal computing. The canonical language for the web, and thus the ambient computing era, appears to be JavaScript. Allen brought up the interesting question of what could replace JavaScript and how that could happen. JavaScript, even with its quirks, is “good enough” and there doesn’t seem to be any apparent way that it would be replaced by anything else. As such, his claim that “JavaScript will be the next canonical language for the next 20 years” seems spot on.

After the ECMAScript 4 fiasco, TC-39, the committee responsible for deciding the future of JavaScript, is moving a lot faster and is more driven and organized to improve the language. There are a lot of improvements to the JavaScript language coming with ECMAScript Harmony, which represents ECMAScript post version 5. Some, such as the inclusion of classes, might be considered controversial and are still under discussion. Considering the slow browser adoption rate, even ES5 is not yet mainstream and will not be for a couple more years. This unfortunately seems to be one of the biggest bottlenecks in moving the new ambient computing platform forward.

The Future of the Mobile Web Platform

Tobie Langel (Facebook)

Tobie is currently the chair of the Core Mobile Web Platform Community Group, which is dedicated to accelerating the adoption of the mobile web as a platform for developing mobile applications. Tobie and his team at Facebook put a lot of effort into analysing the most popular native applications and finding out what capabilities were missing in web applications to put them on a par with native applications in terms of user experience.

Facebook recently launched ringmark, a test suite aimed at accelerating the adoption of HTML5 across mobile devices and providing a common bar for implementations of the mobile web standards. Ringmark provides a series of concentric rings, where each ring is a suite of tests for mobile web app capabilities. There are currently three rings, but the intention is to continue the project by adding more rings as the capabilities of mobile devices increase.

Ring 0 is designed as the intersection of the current state of iOS and Android, and 30% of the top 100 native mobile applications can be implemented using ring 0 capabilities.

Ring 1 includes features such as image capture, IndexedDB and AppCache. Browsers implementing ring 1 should be able to cater to 90% of the most popular native applications, most of which don’t actually need to utilise advanced device capabilities such as 3D. Tobie highlighted that getting ubiquitous ring 1 support should be the short-term goal for mobile browser vendors and developers to drive mobile web adoption.

Ring 2 will fill the gap with the final 10% of applications, with things like WebGL, Web Intents and permissions. Ring 2 is aimed to be a longer term goal.

The mobile web should also be able to go beyond what native apps offer, with capabilities such as hyperlocal applications (e.g. an application tailored to a certain local event) and deep linking.

The lack of standards for mobile web applications when it comes to discoverability or manifest files was also mentioned as one of the hurdles the mobile web needs to overcome. It will be exciting to see how quickly we can get there.

The Future Is Integrated: A Coherent Vision For Web API Evolution

Alex Russell (Google)

Slides (Built with HTML5!)

Alex is a TC-39 representative for Google and is also a member of the Chrome team. One of Alex’s missions has been to drive the web platform forward. He is as frustrated as the rest of us developers with the current state of fragmented support and slow progress.

WebIDL and JavaScript have a cognitive dissonance problem. The DOM was specified as an API for browser implementers rather than for its actual consumers, the JavaScript/web developers. It was also devised at a time when it was expected that languages other than JavaScript would be consuming it, and artifacts of that ideal still persist in the API. Moreover, the DOM does not conform to normal JavaScript rules: DOM types cannot be extended or constructed. It is not possible to do a new HTMLElement(), even though it would be very useful in many scenarios.
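To illustrate the point, in browsers of that era:

```js
// DOM interfaces are not ordinary JavaScript constructors:
try {
  var el = new HTMLElement();       // throws "TypeError: Illegal constructor"
} catch (e) {
  console.log(e.message);
}

// ...whereas plain JavaScript types construct and extend as you'd expect:
function Widget() {}
Widget.prototype.render = function () { return '<div></div>'; };
console.log(new Widget().render()); // "<div></div>"
```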

As web applications have increased in complexity, the disconnect between application data and the browser model has grown, making web development painful. Developers have been trying to solve this using frameworks such as Backbone.js, but they are not perfect. Alex outlined two proposals to the W3C that seek to make web development easier.

Shadow DOM is a way to create web components via a browser-provided API. Modern browsers include native controls, such as the standard HTML form components. These built-in controls are isolated from the rest of the page and are only accessible through whatever API they expose. There is currently no API to create third-party components with the same strong encapsulation enjoyed by the native components.

The other proposal is Model-driven Views which reminded me a lot of how Knockout.js works. MDV provides a way to build data driven, dynamic web pages through data binding and templating via native browser support.

 

Also interesting, but didn’t get the chance to attend:

Mobile, HTML5 and the cross-platform promise

Maximiliano Firtman

Download Slides

Wrap Up

The various efforts around HTML5, JavaScript and the mobile web all point to an improved developer experience. The question is how soon this future will arrive. With browser vendors pushing updates aggressively and consumers changing mobile phones every 1-2 years, it might not be as far away as it seems. Listening to the talks also confirmed my opinion that native mobile apps are only a stopgap solution, and that the future lies in HTML5 and JavaScript as the platform that will power applications.