The title is a bit lofty, no? You’d expect a treatise to follow, but alas it will most likely be a rambling mess about the various tools and libraries I am currently fascinated with and why – the reason being a recent desire to have a proper build system in place to comfortably develop JavaScript.

If you have been following along on this blog, you may be aware of the massroute_js project I have up on GitHub. I am testing out various JavaScript toolkits, libraries and frameworks and have pushed a few up on the repo. As I was finishing up playing around with the last framework, it dawned on me that I didn’t have an example with NO framework; NO not being the latest JS framework. NO being no… but with more face-palm. Perhaps it is telling of the state of JavaScript development these days that I am missing a plain vanilla JS example of my MassRoute application, and it is all true: I still don’t have one committed to the repo. However, the reasoning for that is a lengthy one – I got obsessed with a proper development environment and custom build system :) By obsessed, I mean I researched a fair number of tools and libraries that I thought fit into my workflow, in the following areas:

  • AMD
  • Code Quality
  • Unit Testing
  • Minification / Concatenation
  • Build / Deploy

I intend to address each of these topics in this article, along with the tools/libraries I found and those I chose for my workflow. It should be noted that I was not concerned with the development and deployment of the other two pieces of the web stack – HTML and CSS. It is vital to have a good workflow when working with those technologies as well, especially if all three are part of your job. There are some great tools and libraries out there for developing and shipping HTML and CSS, and perhaps I will dive into that at a later date.

For now, I have created a common library in the massroute_js repo to hold what were turning out to be common pieces of the application I was building against various JavaScript libraries out there. In it are the tools and libraries I will discuss in this post, along with a simple build script: https://github.com/bustardcelly/massroute-js/tree/master/massroute-examples/common

I will do my best to explain the history and usage behind each tool and library, but will try not to go too in depth, as I may lose the actual point of this post – which is to highlight the tools and the workflow in which they can be used together.

AMD

The Asynchronous Module Definition – commonly referred to as AMD – is a specification that defines how modules and their dependencies can be defined and loaded asynchronously. The asynchronous part of AMD fits nicely in a browser environment so as not to block rendering and script execution while loading modules, but the bigger takeaway for my development purposes is really the modularization of code and dependency management.

There is a larger history behind AMD and its evolution from CommonJS which I will not go into – not only because I do not have enough personal involvement to speak intelligently about such matters, but because James Burke and Addy Osmani already have some extremely insightful articles out there on the webs:

James Burke: Simplicity and JavaScript modules
Addy Osmani: Writing Modular JavaScript with AMD, CommonJS and ES Harmony

I started dabbling and really getting interested in the idea of modular JavaScript when I began using the Dojo toolkit. That was back at the 1.6.1 release and the use of dojo.provide, dojo.require and dojo.declare. Now at 1.7, Dojo is fully compliant with AMD. Here is an article discussing Dojo’s move to AMD compatibility: http://dojotoolkit.org/features/1.6/async-modules. Just as an aside – you really should check out Dojo, particularly if you come from a Flex background and have happened upon this post.

So, with some familiarity and a strong interest in incorporating modular development, I set out to find a library that would fit nicely in my workflow. There are a handful of AMD-compatible libraries, most notably curl from John Hann, Backdraft, and RequireJS from James Burke. I settled on RequireJS not only because of its ease of use and its clean, concise documentation, but because of its author and his history and active role in the community; not to mention that the related optimization tool – r.js – was also a good selling point.

RequireJS

The RequireJS API defines how to define a module as well as how to load modules with dependencies. Along with other niceties, it also provides the ability to define dependency load order and loading of text files (including CSS).
Just taking some stripped down and generalized examples from the common lib of my massroute_js repo, a module is defined as such:

/script/com/custardbelly/js/RequestToken.js

define( function() {

    var RequestToken = (function() {

        this.then = function( handler ) {
            ...
        };

    });

    return RequestToken;

});

and a module with dependencies is defined as so:

/script/com/custardbelly/js/Request.js

define( ["com/custardbelly/js/RequestToken"], function( RequestToken ) {

    var Request = (function( url ) {

        this.send = function( variables ) {
            var token = new RequestToken();
            ...
            return token;
        };

    });

    return Request;

});

So with these two examples we see how to define modules with and without dependencies, hopefully demonstrating not only the benefit of modularization but also the beauty of composition through dependency that AMD libraries like RequireJS provide.

Another great benefit in using RequireJS is that it frees you from having to manage your code in namespaces. In other words, this is no longer necessary:

(function( window ) {

    var massroute = getNamespace( 'com.custardbelly.massroute' );

    function getNamespace( value ) {
        var parts = value.split( '.' ),
            i = 0,
            length = parts.length,
            part, parent = window;

        for( i; i < length; i++ ) {
            part = parts[i];
            parent[part] = parent[part] || {};
            parent = parent[part];
        }
        return parent;
    }

    var Something = function() {
        ...
    };

    massroute.Something = Something;

})( this );

What is happening under the hood, loading and reference-wise, is that RequireJS (now require on the window object) holds a list of file references in its own contexts.urlMap property – the key being the normalized string of the module based on configuration, and the value being the actual URL for that module’s file. As dependency requests are made, a lookup is done on require’s contexts.loaded property, which maps the normalized module string to a flag indicating it is already loaded and available. A script tag is actually written and attached to the DOM to begin loading the script, just as most script loaders do. What sets it apart is the use of the async attribute and data-* attributes (datasets). If we take a look at what is appended for the Request module from a previous example:

<script type="text/javascript" charset="utf-8" async="" data-requirecontext="_" data-requiremodule="com/custardbelly/js/Request" src="./../script/com/custardbelly/js/Request.js"></script>

We can see that the values of the data-* attributes directly correspond to the key/value pairs in contexts.urlMap for your application. So as these are loaded, the flag in contexts.loaded is flipped. Pretty elegant; and if you are interested in more about RequireJS’s design and the requirements it adheres to, this is a great article: http://requirejs.org/docs/requirements.html. Now… back to looking at code.
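As a rough, runnable illustration of that bookkeeping – and to be clear, this is not RequireJS source; createContext, nameToUrl and completeLoad are invented names, with only the urlMap/loaded properties mirroring the idea described above:

```javascript
// A *very* simplified sketch of the id → url map and loaded flags.
// Illustrative only – not how RequireJS actually implements it.
function createContext( config ) {
    var context = { urlMap: {}, loaded: {} };

    // Resolve a module id to a script url using the configured paths.
    context.nameToUrl = function( id ) {
        if ( !context.urlMap[id] ) {
            var prefix = id.split( '/' )[0],
                mapped = ( config.paths && config.paths[prefix] )
                    ? config.paths[prefix] + id.slice( prefix.length )
                    : config.baseUrl + '/' + id;
            context.urlMap[id] = mapped + '.js';
            context.loaded[id] = false; // script tag appended, not yet run
        }
        return context.urlMap[id];
    };

    // Flip the flag once the module's script has executed.
    context.completeLoad = function( id ) {
        context.loaded[id] = true;
    };

    return context;
}

var ctx = createContext({ baseUrl: '.', paths: { 'com': './script/com' } });
console.log( ctx.nameToUrl( 'com/custardbelly/js/Request' ) );
// → ./script/com/custardbelly/js/Request.js
ctx.completeLoad( 'com/custardbelly/js/Request' );
console.log( ctx.loaded['com/custardbelly/js/Request'] ); // → true
```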

When it came time to employ these modules, I would call require() wherever my application is responsible for making a request. Let’s just take start-up in a main file as an example:

/app/main.js

(function( require ) {

    require.config({
        baseUrl: ".",
        paths: {
            "com": "./script/com"
        }
    });

    require( ['com/custardbelly/js/Request'], function( Request ) {

        var request = new Request( 'http://somewhere.fun/go' ),
            token = request.send({ person: 'Todd' });

        token.then( relax );

        function relax() {
            console.log( 'ahhh' );
        }

    });

})( requirejs );

It should be noted that anything (i.e. objects, functions, native objects) can be returned from a module definition. Typically, though – as seen in these examples – I tend to return constructors, probably due to my class-based language background; in other words, x requires y because x is going to create at least one new instance of it. But you could very well return an object or a function, and the real power comes in when you consider composition and module dependencies… and for all you DI fans out there like me, the cogs might start spinning up in that noggin. I have yet to do a real test-drive of IoC containers for JavaScript, but there must be – or could be – something that is a perfect match for RequireJS. I am aware of wire but, as I said, have yet to give it a go – Santa failed on delivering more hours in the day :) If you know of any, please leave a comment.
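To make the “return anything” point concrete, here is a sketch that runs in plain node – the define() below is a one-line stand-in that just invokes the factory immediately, so this is not AMD-compliant and exists only so the example is self-contained:

```javascript
// One-line stand-in for define() so this runs outside a browser/AMD loader;
// it simply invokes the factory. Real AMD loaders do far more.
function define( factory ) { return factory(); }

// A module returning a constructor (my usual habit):
var Request = define( function() {
    var Request = function( url ) {
        this.url = url;
    };
    Request.prototype.send = function() {
        return 'sent to ' + this.url;
    };
    return Request;
});

// A module returning a plain object – a singleton-style module:
var logger = define( function() {
    return {
        log: function( msg ) { return '[log] ' + msg; }
    };
});

console.log( new Request( 'http://somewhere.fun/go' ).send() ); // → sent to http://somewhere.fun/go
console.log( logger.log( 'ready' ) ); // → [log] ready
```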

So that is where my AMD loader choice stands. I have been using it for some time and have been pretty happy. Keep in mind that all these modules are separated into their own files. Depending on the size of the project, that can tally up to a lot of requests. And if we compound that with a slow network and limited caching capabilities, we’re talking about trade-offs in even using AMD… unless we can optimize our development workspace down to a reasonable production environment that will go live. We’ll address those concerns a little later in the article…
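As a taste of that optimization step, r.js can trace a module’s dependencies and concatenate them into a single file from a build profile. The file names and paths below are hypothetical, but the profile shape follows the r.js documentation:

```javascript
// build.js – a hypothetical r.js build profile; run with: node r.js -o build.js
({
    baseUrl: ".",
    paths: {
        "com": "./script/com"
    },
    name: "app/main",        // the entry module to trace dependencies from
    out: "main-built.js"     // single concatenated (and minified) output file
})
```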

AMD specification on GitHub
(UMD) Universal Module Definition
John Hann: AMD vs CommonJS
DailyJS: The How and Why of AMD
Addy Osmani: Writing Modular JavaScript with AMD, CommonJS and ES Harmony
James Burke: Simplicity and JavaScript modules
Tom Dale: AMD is Not the Answer
ES Harmony modules proposal
RequireJS

Code Quality

What makes JavaScript so fun is you can get away with a lot of shi… pped code that is littered with syntactical errors, misspellings, declared-but-unused variables, not to mention code that is improperly formatted. Such things can cause your application to fail silently and unsuspectingly for a user – no flag is explicitly raised that the code is failing unless an end-user cares to open the debugger tools of a browser and submit tickets for you. And you can’t mitigate and handle such things properly with more code, because it is the code itself… well, I guess you could wrap everything in little try…catch blocks, but that would be silly.

If you are coming to JavaScript from a language that gets compiled, finding such mistakes during development might be a bit of a challenge as far as the IDE department goes. There are a handful of excellent IDEs out there targeting web development (I enjoy Sublime Text 2), but the nature of an interpreted language makes it a challenge to analyze and catch possible syntax and runtime errors before deploying code – unlike SDKs and IDEs that employ compiler tools to determine errors in live edit or through a pre-compilation build.

That’s not to say, by any means, that a program has fewer bugs in a compiled environment than in an interpreted one. It just means you have to be a little more diligent in analyzing and testing your code – more on testing later. This type of analysis, done without actually executing or running a program’s code, is commonly referred to as linting. When it comes to JavaScript, you can’t go far into searching for a linter without finding JSLint and JSHint, and their connection.

I won’t go over the benefits of one or the other, nor their history, as there are plenty of articles out there on such. For the benefit of my own development environment and workflow, I have chosen JSHint for linting code – mainly because of the ease of set-up, especially when it comes to disregarding the formatting of my code. I am much more concerned with syntactical errors and do not consider such things as indentation to be detrimental to the performance of the program, especially as it will later be run through minification. That is in no way stating that proper formatting is unimportant, especially, especially, especially when working with a team. I do strive to keep a consistent format to my code, and certain IDEs help in some respect with coding standards and conventions, but at this time JSLint is a little too strict and feels a little more like prodding the code into a certain style – like a master’s apprentice. Not necessarily a bad thing, and I may change my mind at some point, but for now JSHint it is. Plus, there is a sweet JSHint package for Sublime Text 2 that I can hook into a hotkey to check for errors on the current file I have open.
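That kind of setup boils down to a handful of JSHint options. The particular choices below are just an example, not my exact configuration; each flag is a documented JSHint option:

```json
{
    "curly": true,
    "eqeqeq": true,
    "undef": true,
    "browser": true,
    "devel": true,
    "white": false
}
```

curly and eqeqeq enforce braces and strict equality, undef flags undeclared variables, browser and devel predefine globals like window and console, and white: false turns off the whitespace/indentation checks I mentioned not caring about.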

JSLint
JSHint
Anton Kovalyov: Why I Forked JSLint to JSHint
JSHint Sublime Plugin: uipoet/sublime-jshint

Unit Testing

This topic is too large to really go into – discussing methodologies, extolling the virtues of, and defining best practices for unit testing in this article. As such, I will simply state that I am guilty of not doing enough testing during the design of applications. For a time, when I was getting more familiar with languages and programming in general, creating unit tests for my code seemed more like a distraction from getting down to brass tacks and delivering a product. I’ll admit that, and the hugely naive stance it represents :) That is in no way to say that I now practice good Test-Driven Development (TDD). Far from it; really far from it. But I am trying to get better. More importantly, my thinking has changed in that I now believe unit testing is actually a proper way of testing the design of my application rather than finding errors in the code. I think that was a big leap for me. And, for whatever it may mean, coming to an interpreted language like JavaScript, designing and developing upon unit tests becomes even more important to me. Test-Driven JavaScript Development by Christian Johansen has definitely gone a long way in swaying my development practices for JavaScript, and I highly recommend picking up this book.

Like I said, it is too much to get into a discussion of TDD and unit testing here; I wanted more to highlight the choices I have made for my development and deployment workflow in unit testing my code. I looked at and tried out a few different unit testing frameworks for JavaScript – with Jasmine, Hiro and QUnit catching my eye.

QUnit

I ended up settling with QUnit for the following reasons:

  1. Familiarity and ease of use.
  2. Easy integration with RequireJS.
  3. Easy and already available source/docs for integration with headless test runners (JSTestDriver, PhantomJS – more on this later).

All of those are important. That list is really more representative of the order in which things clicked into place – like the contact states of a 3-way bulb, where it eventually all came together. It was imperative that I not choose a unit testing framework that dictated my choice of AMD library, and (as I will go into in a bit) it was necessary for my deployment needs to run the tests in a headless environment; during development, I want to write my tests (hopefully using “TDD like you mean it”) and run them quickly and visually, but I was also looking forward to taking that unit testing work, unmodified, and handing it to a headless test runner when it came time to run a full deployment. So QUnit fit in nicely… for now at least. The familiarity and ease of use was a nice initial draw, but the other two are really the weighing factors, and I would not mind giving Jasmine more of a go at a later date if I can ensure those two points.

In any event, using RequireJS and QUnit together is rather straightforward, especially after finding this nice gist from drewwells: https://gist.github.com/920405. The main rule to remember is to set autostart to false on QUnit, and only invoke QUnit.start() once you have required all tests:

/test/index.html

<script src="../lib/require-1.0.4.js"></script>
<script src="lib/qunit.js"></script>
<script>

    QUnit.config.autostart = false;

    require.config({
        baseUrl: ".",
        paths: {
            "com": "../script/com"
        }
    });

    requirejs(['script/RequestTest.js', 'script/RequestTokenTest.js', 'script/RouteStopTest.js'], function() {
        QUnit.start(); // Tests loaded, run tests
    });

</script>

In basic terms, you have replaced adding
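To round out the picture, one of the test modules required in that list – say RequestTokenTest.js – might look something like the following hypothetical sketch. The RequestToken API (then/resolve) is an assumption based on the earlier examples, not actual project source, and the tiny define/test/ok stand-ins exist only so the sketch runs outside a browser, where RequireJS and QUnit would normally supply them:

```javascript
// Stand-ins so this sketch runs in plain node; in the browser these come
// from RequireJS (define) and QUnit (test, ok). Not real implementations.
function define( deps, factory ) { factory( StubRequestToken ); }
function test( name, body ) { body(); console.log( 'PASS: ' + name ); }
function ok( state, msg ) { if ( !state ) { throw new Error( msg ); } }

// A stub standing in for the com/custardbelly/js/RequestToken module; its
// then()/resolve() API is assumed from the earlier examples in this post.
function StubRequestToken() {
    var handlers = [];
    this.then = function( handler ) { handlers.push( handler ); };
    this.resolve = function() {
        handlers.forEach( function( handler ) { handler(); } );
    };
}

// The hypothetical test module itself:
define( ['com/custardbelly/js/RequestToken'], function( RequestToken ) {
    test( 'then() handler is invoked on resolve', function() {
        var token = new RequestToken(),
            handled = false;
        token.then( function() { handled = true; } );
        token.resolve();
        ok( handled, 'handler was not invoked' );
    });
});
```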