Node: JavaScript on the server


By Shelley Powers
December 10, 2015

The dividing line between “old” and “new” JavaScript was drawn when Node.js (referred to primarily as just Node) was released to the world. Yes, the ability to dynamically modify page elements was an essential milestone, as was the emphasis on establishing a path forward to new versions of ECMAScript, but it was Node that really made us look at JavaScript in a whole new way. And it’s a way I like—I’m a big fan of Node and server-side JavaScript development.

Note

I won’t even attempt to cover all there is to know about Node on an introductory level in one chapter, so I’m focusing primarily on the interesting bits for the relative newbie. For more in-depth coverage, I’m going to toot my own horn and recommend my book, Learning Node (O’Reilly).


At a minimum, this chapter expects that you have Node installed in whatever environment you wish, and that you’re ready to jump into the solution examples.

Responding to a Simple Browser Request

Problem

You want to create a Node application that can respond to a very basic browser request.

Solution

Use the built-in Node HTTP server to respond to requests:

// load http module
var http = require('http');

// create http server
http.createServer(function (req, res) {

  // content header
  res.writeHead(200, {'content-type': 'text/plain'});

  // write message and signal communication is complete
  res.end("Hello, World!\n");
}).listen(8124);

console.log('Server running on 8124/');

Discussion

Responding to a browser request with a simple text message is the “Hello World” application for Node. It demonstrates not only how a Node application functions, but how you can communicate with it using a fairly traditional method: requesting a web resource.

Starting from the top, the first line of the solution loads the http module using Node’s require() function. This instructs Node’s modular system to load a specific library resource for use in the application—a process covered in detail later in the book. The http module is one of the many that come, by default, with a Node installation.

Next, an HTTP server is created using http.createServer(), passing in an anonymous function, known as the requestListener, with two parameters. Node attaches this function as an event handler for every server request. The two parameters are request and response. The request is an instance of the http.IncomingMessage object and the response is an instance of the http.ServerResponse object.

The http.ServerResponse is used to respond to the web request. The http.IncomingMessage object contains information about the request, such as the request URL. If you need to get specific pieces of information from the URL (e.g., query string parameters), you can use the Node url utility module to parse the string. Example 1-1 demonstrates how the query string can be used to return a more custom message to the browser.

Example 1-1. Parsing out query string data
// load http module
var http = require('http');

// create http server
http.createServer(function (req, res) {

  // get query string and parameters
  var query = require('url').parse(req.url,true).query;

  // content header
  res.writeHead(200, {'content-type': 'text/plain'});

  // write message and signal communication is complete
  var name = query.first ? query.first : "World";

  res.end("Hello, " + name + "!\n");
}).listen(8124);

console.log('Server running on 8124/');

A URL like the following:

http://shelleystoybox.com:8124/?first=Reader

results in a web page that reads “Hello, Reader!”

Notice in the application that I used require() in the code, and chained methods directly on the returned module object. If you’re using an object multiple times, it makes sense to assign it to a variable at the top of the application. However, if you’re only using the module object once, it can be more efficient to just load the object in place and call the methods directly on it. In the code, the url module object has a parse() method that parses out the URL, returning its various components (href, protocol, host, etc.). If you pass true as the second argument, the string is also parsed by another module, querystring, which returns the query string as an object with each parameter as an object property, rather than just returning a string.
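
For instance, here’s a quick sketch (using a made-up URL) of what parse() returns when you pass true as the second argument:

var url = require('url');

// parse a URL string; passing true also parses the query string into an object
var parts = url.parse('http://somedomain.com:8124/?first=Reader', true);

console.log(parts.protocol);    // 'http:'
console.log(parts.host);        // 'somedomain.com:8124'
console.log(parts.query.first); // 'Reader'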

In both the solution and in Example 1-1, a text message is returned as page output, using the http.ServerResponse end() method. I could also have written the message out using write(), and then called end():

 res.write("Hello, " + name + "!\n");
 res.end();

The important takeaway from either approach is that you must call the response’s end() method after all the headers and the response body have been sent.

Chained to the end of the createServer() function call is another function call, this time to listen(), passing in the port number for the server to listen on. This port number is an especially important component of the application.

Traditionally, port 80 is the default port for most web servers (that aren’t using HTTPS, which has a default port of 443). By using port 80, requests for the web resource don’t need to specify a port when requesting the service’s URL. However, port 80 is also the default port used by our more traditional web server, Apache. If you try to run the Node service on the same port that Apache is using, your application will fail. The Node application either must be standalone on the server, or run off a different port.

Note

I cover how to run both Node and Apache seemingly on port 80 at the same time in Running Node and Apache on the Same Port.

You can also specify an IP address (host) in addition to the port. Doing this ensures that requests are made to a specific host, as well as a specific port. Not providing the host means the application will listen for requests made to any IP address associated with the server. You can also specify a domain name, and Node resolves the host.
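
As a minimal sketch (the IP address is a placeholder), restricting the server to a specific host looks like this:

var http = require('http');

// listen on port 8124, but only for requests made to the given IP address
http.createServer(function (req, res) {
  res.writeHead(200, {'content-type': 'text/plain'});
  res.end("Hello, World!\n");
}).listen(8124, '192.168.1.100'); // placeholder IP address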

There are other arguments for the methods demonstrated, and a host of other methods, but this will get you started. Refer to the Node documentation for more information.

See Also

Node documentation can be found at http://nodejs.org/api/.

Serving Up Formatted Data

Problem

Instead of serving up a web page or sending plain text, you want to return formatted data, such as XML, to the browser.

Solution

Use Node module(s) to help format the data. For example, if you want to return XML, you can use a module to create the formatted data:

var XMLWriter = require('xml-writer');

var xw = new XMLWriter();

// start doc and root element
xw.startDocument().startElement("resources");

// resource
xw.startElement("resource");
xw.writeElement("title","Ecma-262 Edition 6");
xw.writeElement("url","http://wiki.ecmascript.org/doku.php?id=harmony:specific
ation_drafts");

// end resource
xw.endElement();

// end resources
xw.endElement();

Then create the appropriate header to go with the data, and return the data to the browser:

  // end resources
  xw.endElement();

  res.writeHead(200, {"Content-Type": "application/xml", "Access-Control-Allow-Origin": "*"});
  res.end(xw.toString(),"utf8");

Discussion

Web servers frequently serve up static or server-side generated resources, but just as frequently, what’s returned to the browser is formatted data that’s then processed in the web page before display.

Elsewhere in the book, we examined one use of data formatted as XML that’s generated by a Node application on the server and then processed using the DOM API in the browser. Parts of that server application have been excerpted for the solution.

There are two key elements to generating and returning formatted data. The first is to make use of whatever Node library simplifies generating the data, and the second is to make sure that the header sent with the data is appropriate for it.

In the solution, the xml-writer module is used to assist us in creating proper XML. This isn’t one of the modules installed with Node by default, so we have to install it using npm, the Node Package Manager:

npm install xml-writer

This installs the xml-writer module in the local project directory, in the node_modules subdirectory. To install the module globally, which makes it available for all projects, use:

npm install xml-writer -g

Then it’s just a simple matter of creating a new XML document, a root element, and then each resource element, as demonstrated in the solution. It’s true, we could just build the XML string ourselves, but that’s a pain. And it’s too easy to make mistakes that are then hard to discover. One of the best things about Node is the enormous number of modules available to do most anything we can think of. Not only do we not have to write the code ourselves, but most of the modules have been thoroughly tested and actively maintained.

Caution

It’s important to understand that not all Node modules are actively maintained. When you look at a module in GitHub, check when it was last updated, and whether there are any old, unresolved issues. You may not want to use a module that’s no longer being actively updated. However, if you do like a module that’s not being actively maintained, you can consider forking it and maintaining the fork, yourself.

Once the formatted data is ready to return, create the header that goes with it. In the solution, because the document is XML, the header content type is set to application/xml before the data is returned as a string.

See Also

Using npm to install and manage Node modules is covered elsewhere in the book.

Reading and Writing File Data

Problem

You want to read from or write to a locally stored file.

Solution

Node’s filesystem management functionality is included as part of the Node core, via the fs module:

var fs = require('fs');

To read a file’s contents, use the readFile() function:

var fs = require('fs');

fs.readFile('main.txt', {encoding: 'utf8'},function(err,data) {
  if (err) {
    console.log("Error: Could not open file for reading\n");
  } else {
    console.log(data);
  }
});

To write to a file, use writeFile():

var fs = require('fs');

var buf = "I'm going to write this text to a file\n";
fs.writeFile('main2.txt', buf, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("wrote text to file");
  }
});

The writeFile() function overwrites the existing file. To append text to the file, use appendFile():

var fs = require('fs');

var buf = "I'm going to add this text to a file";
fs.appendFile('main2.txt', buf, function(err) {
    if (err) {
      console.log(err);
    } else {
      console.log("appended text to file");
    }
 });

Discussion

Node’s filesystem support is both comprehensive and simple to use. To read from a file, use the readFile() function, which supports the following parameters:

  • The filename, including the operating system path to the file if it isn’t local to the application

  • An options object, with options for encoding, as demonstrated in the solution, and flag, which is set to r by default (for reading)

  • A callback function with parameters for an error and the read data

In the solution, if I hadn’t specified the encoding in my application, Node would have returned the file contents as a raw buffer. Since I did specify the encoding, the file content is returned as a string.
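
To see the difference, consider this short sketch; without the encoding option, the callback receives a raw Buffer rather than a string:

var fs = require('fs');

// no encoding option: data arrives as a raw Buffer
fs.readFile('main.txt', function(err, data) {
  if (err) {
    console.log(err.message);
  } else {
    console.log(Buffer.isBuffer(data)); // true
    console.log(data.toString('utf8')); // convert explicitly when needed
  }
});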

The writeFile() and appendFile() functions for writing and appending, respectively, take parameters similar to readFile():

  • The filename and path

  • The string or buffer for the data to write to the file

  • The options object, with options for encoding, flag (w by default for writeFile() and a by default for appendFile()), and mode, with a default value of 438 (0666 in octal)

  • The callback function, with only one parameter: the error

The options value of mode is used to set the file’s sticky and permission bits, if the file was created because of the write or append. By default, the file is created as readable and writable by the owner, and readable by the group and the world.
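
As an illustrative sketch, you could pass a more restrictive mode when writing, so a newly created file is readable and writable by the owner only (384 is 0600 in octal):

var fs = require('fs');

// mode only applies if the file is created by this call
fs.writeFile('private.txt', "owner eyes only\n", {mode: 384}, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("wrote file with restricted permissions");
  }
});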

I mentioned that the data to write can be either a buffer or a string. A string cannot handle binary data, so Node provides the Buffer, which is capable of dealing with either strings or binary data. Both can be used in all of the filesystem functions discussed in this section, but you’ll need to explicitly convert between the two types if you want to use them both.

For example, instead of providing the utf8 encoding option when you use writeFile(), you convert the string to a buffer, providing the desired encoding when you do:

var fs = require('fs');

var str = "I'm going to write this text to a file";
var buf = new Buffer(str, 'utf8');
fs.writeFile('mainbuf.txt', buf, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("wrote text to file");
  }
});

The reverse—that is, to convert the buffer to a string—is just as simple:

var fs = require('fs');

fs.readFile('main.txt', function(err,data) {
   if (err) {
      console.log(err.message);
   } else {
      var str = data.toString();
      console.log(str);
   }
});

The Buffer toString() function has three optional parameters: encoding, where to begin the conversion, and where to end it. By default, the entire buffer is converted using the utf8 encoding.
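
A short example of those parameters in action:

var buf = new Buffer("Hello, World!", 'utf8');

// convert only bytes 0 up to (but not including) 5 back to a string
console.log(buf.toString('utf8', 0, 5)); // 'Hello'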

The readFile(), writeFile(), and appendFile() functions are asynchronous, meaning they won’t wait for the operation to finish before proceeding in the code. This is essential when it comes to notoriously slow operations such as file access. There are synchronous versions of each: readFileSync(), writeFileSync(), and appendFileSync(). I can’t stress enough that you should not use these variations; I only include a reference to them to be comprehensive.

Advanced

Another way to read or write from a file is to use the open() function in combination with read() for reading the file contents, or write() for writing to the file. The advantage of this approach is more fine-grained control over what happens during the process. The disadvantage is the added complexity associated with all of the functions, including only being able to use a buffer for reading from and writing to the file.

The parameters for open() are:

  • Filename and path

  • Flag

  • Optional mode

  • Callback function

The same open() is used with all operations, with the flag controlling what happens. There are quite a few flag options, but the ones that interest us the most at this time are:

  • r: Opens the file for reading; the file must exist

  • r+: Opens the file for reading and writing; an exception occurs if the file doesn’t exist

  • w: Opens the file for writing, truncates the file, or creates it if it doesn’t exist

  • wx: Opens the file for writing, but fails if the file does exist

  • w+: Opens the file for reading and writing; creates the file if it doesn’t exist; truncates the file if it exists

  • wx+: Similar to w+, but fails if the file exists

  • a: Opens the file for appending, creates it if it doesn’t exist

  • ax: Opens the file for appending, fails if the file exists

  • a+: Opens the file for reading and appending; creates the file if it doesn’t exist

  • ax+: Similar to a+, but fails if the file exists

The mode is the same one mentioned earlier, a value that sets the sticky and permission bits on the file if created, and defaults to 0666. The callback function has two parameters: an error object, if an error occurs, and a file descriptor, used by subsequent file operations.

The read() and write() functions share the same basic types of parameters:

  • The file descriptor returned in the open() method’s callback

  • The buffer used either to hold data to be written or appended, or to receive the data read

  • The offset where the input/output (I/O) operation begins

  • The buffer length (set by the read operation; controls the write operation)

  • The position in the file where the operation is to take place; null if the position is the current position

The callback functions for both methods have three arguments: an error, bytes read (or written), and the buffer.

That’s a lot of parameters and options. The best way to demonstrate how it all works is to create a complete Node application that opens a brand new file for writing, writes some text to it, writes some more text to it, and then reads all the text back and prints it to the console. Since open() is asynchronous, the read and write operations have to occur within the callback function. Be ready for it in Example 1-2, because you’re going to get your first taste of a concept known as callback hell.

Example 1-2. Demonstrating open, read, and write
var fs = require('fs');

fs.open('newfile.txt', 'a+',function(err,fd){
   if (err) {
      console.log(err.message);
   } else {
      var buf = new Buffer("The first string\n");
      fs.write(fd, buf, 0, buf.length, 0, function(err, written, buffer) {
         if (err) {
            console.log(err.message);
         } else {
            var buf2 = new Buffer("The second string\n");
            fs.write(fd, buf2, 0, buf2.length, 0,
                               function(err, written2, buffer) {
               if (err) {
                  console.log(err.message);
               } else {
                  var length = written + written2;
                  var buf3 = new Buffer(length);
                  fs.read(fd, buf3, 0, length, 0,
                            function( err, bytes, buffer) {
                     if(err) {
                        console.log(err.message);
                     } else {
                        console.log(buf3.toString());
                     }
                  });
               }
            });
         }
      });
   }
});

To find the length of the buffers, I used length, which returns the number of bytes for the buffer. This value doesn’t necessarily match the length of a string in the buffer, but it does work in this usage.
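
The distinction between byte length and string length matters once multibyte characters are involved, as this small sketch shows:

// byte count versus character count
var buf = new Buffer("déjà vu", 'utf8');

console.log(buf.length);        // 9: é and à each take two bytes in UTF-8
console.log("déjà vu".length);  // 7 characters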

That many levels of indentation can make your skin crawl, but the example demonstrates how open(), read(), and write() work. These combinations of functions are what’s used within the readFile(), writeFile(), and appendFile() functions to manage file access. The higher-level functions just simplify the most common file operations.

Note

See Managing Callback Hell for a solution to all that nasty indentation.

Using let and Other ES 6 Additions in Node

Problem

You want to use some of the new ECMAScript 6 functionality, such as let, in your Node application, but it doesn’t seem to work.

Solution

You’ll need to use two command-line options when you run the Node application: --harmony, to add support for whatever ECMAScript Harmony features are currently implemented, and --use-strict, to enforce strict JavaScript processing:

node --harmony --use-strict open.js

Or you can trigger strict mode by adding the following line as the first in the application:

'use strict';

Discussion

Internally, Node runs on V8, Google’s open source JavaScript engine. You might assume that the engine implements most if not all of the newest cutting-edge JavaScript functionality, including support for let. And it is true that Google has implemented much of the newest JavaScript functionality.

However, some of the newer functionality isn’t available to a Node application unless you specify the --harmony command-line option, similar to having to turn the option on in your browser. You can find this and other options by typing the following at the command line:

man node

Once the --harmony option has been given, you can use let instead of var. However, you must also use strict mode, either by providing the command-line flag or by including 'use strict' in the application:

'use strict';

let fs = require('fs');

fs.readFile('main.txt', {encoding: 'utf8', flag: 'r+'},function(err,data) {
   if (err) {
      console.log(err.message);
   } else {
      console.log(data);
   }
});

Note

Using let in the browser is discussed elsewhere in the book.

Node’s parent company, Joyent, maintains a GitHub page listing all of the new ECMAScript 6 (Harmony) features currently implemented in V8. It also lists out the flags you can use to utilize all, or a subset, of the features.

Interactively Trying Out Node Code Snippets with REPL

Problem

You can test JavaScript code snippets in jsFiddle or jsBin, but what about Node’s server-based code snippets?

Solution

Use Node’s REPL (read-eval-print loop), an interactive command-line version of Node that can run any code snippet.

To use REPL, type node at the command line without specifying an application to run. If you wish, you can also specify a flag, like --harmony, to use the ECMAScript 6 functionality:

$ node --harmony

You can then specify JavaScript in a simplified emacs (sorry, no vi) line-editing style. You can import libraries, create functions—whatever you can do within a static application. The main difference is that each line of code is interpreted instantly:

> var f = function(name) {
... console.log('hello ' + name);
... }
undefined
> f('world');
hello world
undefined

When you’re finished, just exit the program:

> .exit

Discussion

REPL can be started standalone, or within another application if you want to set certain features. You type in JavaScript as if you’re typing a script into a text file. The main behavioral difference is that you might see a result after typing in each line, such as the undefined that shows up in the runtime REPL.

But you can import modules:

> var fs = require('fs');

And you can access the global objects, which we just did when we used require().

The undefined that shows after typing in some code is the return value for the execution of the previous line of code. Setting a new variable and creating a function are examples of JavaScript that returns undefined, which can quickly get annoying. To eliminate this behavior, as well as make some other modifications, you can use the repl.start() function within a small Node application that triggers REPL (but with the options you specify).

The options you can use are:

  • prompt: Changes the prompt that shows (default is >)

  • input: Changes the input readable stream (default is process.stdin, which is the standard input)

  • output: Changes the output writable stream (default is process.stdout, the standard output)

  • terminal: Set to true if the stream should be treated like a TTY, and have ANSI/VT100 escape codes written

  • eval: Function used to replace the asynchronous eval() function used to evaluate the JavaScript

  • useColors: Set to true to set output colors for the writer function (default is based on the terminal’s default values)

  • useGlobal: Set to true to use the global object, rather than running scripts in a separate context

  • ignoreUndefined: Set to true to eliminate the undefined return values

  • writer: The function that returns the formatted result from the evaluated code to the display (default is the util.inspect function)

An example application that starts REPL with a new prompt, ignoring the undefined values, and using colors is:

var net = require("net"),
    repl = require("repl");

var options = {
   prompt: '-- ',
   useColors: true,
   ignoreUndefined: true,
};

repl.start(options);

Only the repl module is needed. The options we want are defined in the options object and then passed as a parameter to repl.start(). When we run the application, REPL is started, but we no longer have to deal with undefined values:

# node recipe11-5.js
-- var f = function (name) {
... console.log('hello ' + name);
... }
-- f('world');
hello world

As you can see, this is much cleaner output, without all those messy undefined printouts.

Extra: Wait a Second, What global Object?

Caught that, did you?

One difference between JavaScript in Node and JavaScript in the browser is the global scoping. In a browser, when you create a variable outside a function, using var, it belongs to the top-level global object, which we know as window:

var test = 'this is a test';
console.log(window.test); // 'this is a test'

This has been a bit of a pain, too, as we get namespace collisions among all our older libraries.

In Node, each module operates within its own separate context, so modules can declare the same variables, and they won’t conflict if they’re all used in the same application.

However, there are objects accessible from Node’s global object. We’ve used a few in previous examples, including console, the Buffer object, and require(). Others include some very familiar old friends: setTimeout(), clearTimeout(), setInterval(), and clearInterval().
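
Here’s a quick sketch of a few of these globals at work; note that no require() calls are needed for any of them:

// Buffer, console, and the timer functions are all globally available
var buf = new Buffer("global objects", 'utf8');
console.log(buf.toString());

var id = setTimeout(function() {
  console.log("this never prints");
}, 1000);
clearTimeout(id);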

Getting Input from the Terminal

Problem

You want to get input from the application user via the terminal.

Solution

Use Node’s Readline module.

To get data from the standard input, use code such as the following:

var readline = require('readline');

var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question(">>What's your name?  ", function(answer) {
   console.log("Hello " + answer);
   rl.close();
});

Discussion

The Readline module provides the ability to get lines of text from a readable stream. You start by creating an instance of the Readline interface with createInterface() passing in, at minimum, the readable and writable streams. You need both, because you’re writing prompts, as well as reading in text. In the solution, the input stream is process.stdin, the standard input stream, and the output stream is process.stdout. In other words, input and output are from, and to, the command line.

The solution used the question() function to post a question, and provided a callback function to process the response. Within the function, close() was called, which closes the interface, releasing control of the input and output streams.

You can also create an application that continues to listen to the input, taking some action on the incoming data, until something signals the application to end. Typically that something is a letter sequence signaling the person is done, such as the word exit. This type of application makes use of other Readline functions, such as setPrompt() to change the prompt shown for each line of text; prompt(), which prepares the input area, including displaying the prompt set by setPrompt(); and write(), to write out text. In addition, you’ll need event handlers to process events, such as line, which fires for each new line of text.

Example 1-3 contains a complete Node application that continues to process input from the user until the person types in exit. Note that the application makes use of process.exit(). This function cleanly terminates the Node application.

Example 1-3. Access numbers from stdin until the user types in exit
var readline = require('readline');
var sum = 0;

var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

console.log("Enter numbers, one to a line. Enter 'exit' to quit.");

rl.setPrompt('>> ');
rl.prompt();

rl.on('line', function(input) {
   input = input.trim();
   if (input == 'exit') {
      rl.close();
      return;
   } else {
    sum+= Number(input);
   }
   rl.prompt();
});

// user typed in 'exit'
rl.on('close', function() {
   console.log("Total is " + sum);
   process.exit(0);
});

Running the application with several numbers results in the following output:

Enter numbers, one to a line. Enter 'exit' to quit.
>> 55
>> 209
>> 23.44
>> 0
>> 1
>> 6
>> exit
Total is 294.44

I used console.log() rather than the Readline interface write() to write the prompt followed by a new line, and to differentiate the output from the input.

Working with Node Timers and Understanding the Node Event Loop

Problem

You need to use a timer in a Node application, but you’re not sure which of Node’s three timers to use, or how accurate they are.

Solution

If your timer doesn’t have to be precise, you can use setTimeout() to create a single timer event, or setInterval() if you want a recurring timer:

setTimeout(function() {}, 3000);

setInterval(function() {}, 3000);

Both function timers can be canceled:

var timer1 = setTimeout(function() {}, 3000);
clearTimeout(timer1);

var timer2 = setInterval(function() {}, 3000);
clearInterval(timer2);

However, if you need more fine-grained control of your timer, and immediate results, you might want to use setImmediate(). You don’t specify a delay for it, as you want the callback to be invoked immediately after all I/O callbacks are processed but before any setTimeout() or setInterval() callbacks:

setImmediate(function() {});

It, too, can be cleared, with clearImmediate().

Discussion

Node, being JavaScript based, runs on a single thread. It is synchronous. However, input/output (I/O) and other native API access either runs asynchronously or on a separate thread. Node’s approach to managing this timing disconnect is the event loop.

In your code, when you perform an I/O operation, such as writing a chunk of text to a file, you specify a callback function to do any post-write activity. Once you’ve done so, the rest of your application code is processed. It doesn’t wait for the file write to finish. When the file write has finished, an event signaling the fact is returned to Node and pushed onto a queue, waiting to be processed. Node processes this event queue, and when it gets to the event signaled by the completed file write, it matches the event to the callback, and the callback is processed.

As a comparison, think of going into a deli and ordering lunch. You wait in line to place your order, and are given an order number. You sit down and read the paper, or check your Twitter account while you wait. In the meantime, the lunch orders go into another queue for deli workers to process the orders. But each lunch request isn’t always finished in the order received. Some lunch orders may take longer. They may need to bake or grill for a longer time. So the deli worker processes your order by preparing your lunch item and then placing it in an oven, setting a timer for when it’s finished, and goes on to other tasks.

When the timer pings, the deli worker quickly finishes his current task, and pulls your lunch order from the oven. You’re then notified that your lunch is ready for pickup by your order number being called out. If several time-consuming lunch items are being processed at the same time, the deli worker processes them as the timer for each item pings, in order.

All Node processes fit the pattern of the deli order queue: first in, first to be sent to the deli (thread) workers. However, certain operations, such as I/O, are like those lunch orders that need extra time to bake in an oven or grill, but don’t require the deli worker to stop any other effort and wait for the baking and grilling. The oven or grill timers are equivalent to the messages that appear in the Node event loop, triggering a final action based on the requested operation.

You now have a working blend of synchronous and asynchronous processes. But what happens with a timer?

Both setTimeout() and setInterval() fire after the given delay, but what happens is that a message to this effect is added to the event loop, to be processed in turn. So if the event loop is particularly cluttered, there is a delay before the timer functions’ callbacks are called:

It is important to note that your callback will probably not be called in exactly (delay) milliseconds. Node.js makes no guarantees about the exact timing of when the callback will fire, nor of the ordering things will fire in. The callback will be called as close as possible to the time specified.

Node Timers documentation

For the most part, whatever delay happens is beyond the ken of our human senses, but it can result in animations that don’t seem to run smoothly. It can also add an odd effect to other applications.

Elsewhere in the book, I created a scrolling timeline in SVG, with data fed to the client via WebSockets. To emulate real-world data, I used a three-second timer and a randomly generated number to act as a data value. In the server code, I used setInterval(), because the timer is recurring:

var app = require('http').createServer(handler)
  , fs = require('fs');
var ws = require("nodejs-websocket");

app.listen(8124);

// serve static page
function handler (req, res) {
  fs.readFile(__dirname + '/drawline.html',
  function (err, data) {
    if (err) {
      res.writeHead(500);
      return res.end('Error loading drawline.html');
    }
    res.writeHead(200);
    res.end(data);
  });
}

// data timer
function startTimer() {
   setInterval(function() {
      var newval = Math.floor(Math.random() * 100) + 1;
      if (server.connections.length > 0) {
         console.log('sending ' + newval);
         var counter = {counter: newval};
         server.connections.forEach(function(conn, idx) {
            conn.sendText(JSON.stringify(counter), function() {
               console.log('conn sent')
            });
         });
       }
   },3000);
}


// websocket connection
var server = ws.createServer(function (conn) {
    console.log('connected');
    conn.on("close", function (code, reason) {
        console.log("Connection closed")
    });
}).listen(8001, function() {
     startTimer(); }
);

I included console.log() calls in the code so you can see the timing of the timer event in comparison to the communication responses. When the setInterval() timer fires, its callback is pushed onto the queue; when the callback is processed, the WebSocket communications are, in turn, also pushed onto the queue.

The solution uses setInterval(), one of Node’s three different types of timers. The setInterval() function has the same format as the one we use in the browser: you specify a callback as the first argument, provide a delay time (in milliseconds), and pass any potential arguments. The timer is going to fire in three seconds, but we already know that the callback for the timer may not be immediately processed.

The same applies to the callbacks passed in the WebSocket sendText() calls. These are based on Node’s Net (or TLS, if secure) sockets, and as the socket.write() (what’s used for sendText()) documentation notes:

The optional callback parameter will be executed when the data is finally written out—this may not be immediately.

Node Net documentation

If you set the timer to invoke immediately (giving zero as the delay value), you’ll see that the data sent messages are interspersed with the communication sent messages (at least until the browser client freezes up, overwhelmed by the socket communications—you don’t want to use a zero value in this application).

However, the timelines for all the clients remain the same because the communications are sent within the timer’s callback function, synchronously, so the data is the same for all of the communications—it’s just the callbacks that are handled, seemingly out of order.

Earlier I mentioned using setInterval() with a delay of zero. In actuality, it isn’t exactly zero—Node follows the HTML5 specification that browsers adhere to, and “clamps” the timer interval to a minimum value of four milliseconds. While this may seem too small an amount to cause a problem, when it comes to animations and time-critical processes the delay can impact the overall appearance and/or function.

To bypass this constraint, Node developers utilize process.nextTick() instead. The callback associated with process.nextTick() is processed on the next event loop go-around, usually before any I/O callbacks (though there are constraints, which I’ll get to in a minute). No more pesky four-millisecond throttling. But then, what happens if there’s an enormous number of recursively called process.nextTick() calls?

To return to our deli analogy, during a busy lunch hour, workers can be overrun with orders and so caught up in trying to process new orders that they don’t respond in a timely manner to the oven and grill pings. Things burn when this happens. If you’ve ever been to a well-run deli, you’ll notice the counter person taking the orders will assess the kitchen before taking the order, tossing in some slight delay, or even taking on some of the kitchen duties, letting the people wait just a tiny bit longer in the order queue.

The same happens with Node. If process.nextTick() were allowed to be the spoiled child, always getting its way, I/O operations would get starved out. Node uses another value, process.maxTickDepth, with a default value of 1000, to constrain the number of process.nextTick() callbacks that are processed before the I/O callbacks are allowed to play. It’s the counter person in the deli.

In more recent releases of Node, the setImmediate() function was added. This function attempts to resolve all of the issues associated with the timing operations and create a happy medium that should work for most folks. When setImmediate() is called, its callback is added after the I/O callbacks, but before the setTimeout() and setInterval() callbacks. We don’t have the four millisecond tax for the traditional timers, but we also don’t have the brat that is process.nextTick().
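
A small sketch of the relative ordering; within an I/O callback, process.nextTick() runs first, then the setImmediate() callback, and the zero-delay timer last:

var fs = require('fs');

fs.readFile(__filename, function() {
  setTimeout(function() { console.log('timeout'); }, 0);
  setImmediate(function() { console.log('immediate'); });
  process.nextTick(function() { console.log('nextTick'); });
});
// prints: nextTick, immediate, timeout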

To return one last time to the deli analogy, setImmediate() is a customer in the order queue who sees that the deli workers are overwhelmed with pinging ovens, and politely states he’ll wait to give his order.

Caution

However, you do not want to use setImmediate() in the scrolling timeline example, as it will freeze your browser up faster than you can blink.

Managing Callback Hell

Problem

You want to do something such as check to see if a file is present, and if so, open it and read the contents. Node provides this functionality, but used asynchronously, it results in layers of nested callbacks (noted by indentation) that make the application unreadable and difficult to maintain.

Solution

Use a module such as Async. For instance, Example 1-2 is a definitive example of nested callbacks, and it’s a fairly simple piece of code: open a file, write two lines to it, and then read them back and output them to the console:

var fs = require('fs');

fs.open('newfile.txt', 'a+',function(err,fd){
   if (err) {
      console.log(err.message);
   } else {
      var buf = new Buffer("The first string\n");
      fs.write(fd, buf, 0, buf.length, 0, function(err, written, buffer) {
         if (err) {
            console.log(err.message);
         } else {
            var buf2 = new Buffer("The second string\n");
            fs.write(fd, buf2, 0, buf2.length, 0,
                               function(err, written2, buffer) {
               if (err) {
                  console.log(err.message);
               } else {
                  var length = written + written2;
                  var buf3 = new Buffer(length);
                  fs.read(fd, buf3, 0, length, 0,
                            function( err, bytes, buffer) {
                     if(err) {
                        console.log(err.message);
                     } else {
                        console.log(buf3.toString());
                     }
                  });
               }
            });
         }
      });
   }
});

Notice the messy indentation for all the nested callbacks. We can clean it up using Async:

var fs = require('fs');
var async = require('async');

async.waterfall([
   function openFile(callback) {
      fs.open('newfile.txt', 'a+',function (err, fd){
        callback(err,fd);
      });
   },
   function writeBuffer(fd, callback) {
      var buf = new Buffer("The first string\n");
      fs.write(fd, buf, 0, buf.length, 0, function(err, written, buffer) {
         callback(err, fd, written);
      });
   },
   function writeBuffer2(fd, written, callback) {
      var buf = new Buffer("The second string\n");
      fs.write(fd, buf, 0, buf.length, 0, function(err, written2, buffer){
         callback(err, fd, written, written2);
      });
   },
   function readFile(fd, written, written2, callback) {
      var length = written + written2;
      var buf3 = new Buffer(length);
      fs.read(fd, buf3, 0, length, 0, function(err, bytes, buffer) {
          callback (err, buf3.toString());
      });
   }
], function (err, result) {
   if (err) {
     console.log(err);
   } else {
     console.log(result);
   }
});

Discussion

Async is a utility module that detangles the callback spaghetti that especially afflicts Node developers. It can now be used in the browser, as well as Node, but it’s particularly useful with Node.

Node developers can install Async using npm:

npm install async

Note

To access the source for the browser, go to the module’s GitHub page.

Async provides functionality that we’re now finding in native JavaScript, such as map, filter, and reduce. However, the functionality I want to focus on is its asynchronous control management.

The solution used Async’s waterfall(), which implements a series of tasks, passing the results of prior tasks to those next in the queue. If an error occurs in any task, when the error is passed in the callback to the next task, Async stops the sequence and the error is processed.

Comparing the older code and the new Async-assisted solution, the first task is opening a file for writing. In the older code, if an error occurs, it’s printed out; otherwise, a new Buffer is created and used to write a string to the newly opened file. In the Async version, though, the functionality to open the file is embedded in a new function, openFile(), included as the first element in an array passed to the waterfall() function. The openFile() function takes one parameter, a callback function, which is called within the fs.open() callback once the file is opened, passing along the error object and the file descriptor.

The next task is to write a string to the newly created file. In the old code, this happens directly in the callback function attached to the fs.open() function call. In the Async version, though, writing a string to the file happens in a new function, added as the second task in the waterfall() array. Rather than just taking a callback as an argument, this function, writeBuffer(), takes the file descriptor fd returned from fs.open(), as well as a callback function. In the function, after the string is written out to the file using fs.write(), the number of bytes written is captured and passed to the next callback, along with the error and file descriptor.

The following task is to write out a second string. Again, in the old code, this happens within a callback function, this time the first fs.write()’s callback. At this point, we’re looking at the third nested callback in the old code, but in the Async version, the second write operation is just another task and another function in the waterfall() task array. The function, writeBuffer2(), accepts the file descriptor, the number of bytes written out in the first write task, and, again, a callback function. It writes the new string out and passes the error, the file descriptor, the bytes written in the first write, and the bytes written in the second to the callback function.

In the old code, within the fourth nested callback function (this one for the second fs.write()), the two byte counts are added together and used in a call to fs.read() to read in the contents of the newly created file. The file contents are then output to the console.

In the Async modified version, the last task function, readFile(), is added to the task array and it takes a file descriptor, the two writing buffer counts, and a final callback as parameters. In the function, again the two byte counts are added and used in fs.read() to read in the file contents. These contents are passed, with the error object, in the last callback function call.

The results, or an error, are processed in waterfall()’s own callback function.

Rather than a callback nesting four indentations deep, we’re looking at a sequence of function calls in an array, with an absolute minimum of callback nesting. And we could go on and on, way past the point of what would be insane if we had to use the typical nested callback.

I used waterfall() because this control structure implies a series of tasks, each implemented in turn, and each passing data to the next task. It takes two arguments: the task array and a callback with an error and an optional result. Async also supports other control structures, such as parallel(), for completing tasks in parallel; compose(), which creates a function that is a composition of the passed functions; and series(), which accomplishes the tasks in a series, but where each task doesn’t pass data to the next (as happens with waterfall()).
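
As a quick contrast with waterfall(), here’s a minimal parallel() sketch; the tasks run concurrently, and the results are collected in task-array order:

var async = require('async');

async.parallel([
   function(callback) {
      callback(null, 'first result');
   },
   function(callback) {
      callback(null, 'second result');
   }
], function(err, results) {
   if (err) {
     console.log(err);
   } else {
     console.log(results); // [ 'first result', 'second result' ]
   }
});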

Accessing Command-Line Functionality Within a Node Application

Problem

You want to access command-line functionality, such as ImageMagick, from within a Node application.

Solution

Use Node’s child_process module. For example, if you want to use ImageMagick’s identify, and then print out the data to the console, use the following:

var spawn = require('child_process').spawn,
    imcmp = spawn('identify',['-verbose', 'osprey.jpg']);

imcmp.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});

imcmp.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});

imcmp.on('exit', function (code) {
  console.log('child process exited with code ' + code);
});

Discussion

The child_process module provides four methods to run command-line operations and process returned data:

  • spawn(command, [args], [options]): This launches a given process, with optional command-line arguments, and an options object specifying additional information such as cwd to change the working directory and uid to set the user ID of the process.

  • exec(command, [options], callback): This runs a command in a shell and buffers the result (see the sketch following this list).

  • execFile(file, [args], [options], [callback]): This is like exec(), but executes the file directly.

  • fork(modulePath, [args], [options]): This is a special case of spawn(), and spawns Node processes, returning an object that has a communication channel built in. It also requires a separate instance of V8 with each use, so use sparingly.
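
As a quick contrast with spawn(), here’s a minimal exec() sketch (the directory listing command is just an example); the entire buffered output arrives at once in the callback:

var exec = require('child_process').exec;

// run the command in a shell and buffer the complete result
exec('ls -l', function(err, stdout, stderr) {
  if (err) {
    console.log(err.message);
  } else {
    console.log(stdout);
  }
});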

The child_process methods have three streams associated with them: stdin, stdout, and stderr. The spawn() method is the most widely used of the child_process methods, and the one used in the solution. From the top of the solution, the command given is the ImageMagick identify command-line application, which can return a wealth of information about an image. In the args array, the code passes in the -verbose flag and the name of the image file. When a data event happens on the child process’s stdout stream, the application prints the data to the console. The data is a Buffer, which applies toString() implicitly when concatenated with another string. If an error happens, it’s also printed to the console. A third event handler just communicates that the child process is exiting.

If you want to process the result as an array, modify the input event handler:

imcmp.stdout.on('data', function (data) {
    console.log(data.toString().split("\n"));
});

Now the data is processed into an array of strings, split on the new line within the identify output.

If you want to pipe the result of one process to another, you can with multiple child processes. If, in the solution, I want to pipe the result of the identify command to grep, in order to return only a subset of the information, I can do this with two different spawn() commands, as shown in Example 1-4.

In the code, the resulting data from the identify command is written to the stdin input stream for the grep command, and the grep‘s data is then written out to the console.

Example 1-4. Spawning two child processes to pipe the results of one command to another
var spawn = require('child_process').spawn,
    imcmp = spawn('identify',['-verbose', 'fishies.jpg']),
    grep = spawn('grep', ['Resolution']);

imcmp.stdout.on('data', function (data) {
   grep.stdin.write(data);
});

imcmp.stderr.on('data', function (data) {
  console.log('stderr: ' + typeof data);
});

grep.stdout.on('data', function (data) {
  console.log('grep data: ' + data);
});

grep.stderr.on('data', function (data) {
  console.log('grep error: ' + data);
});

imcmp.on('close', function (code) {
  console.log('child process close with code ' + code);
  grep.stdin.end();
});

grep.on('close', function(code) {
  console.log('grep closes with code ' + code);
});

In addition, the application also captures the close event when the streams terminate (not necessarily when the child processes exit). In the close event handler for the identify child process, the stdin.end() method is called for grep to ensure it terminates.

The result of running the application on the test image is:

child process close with code 0
grep data:   Resolution: 240x240
    exif:ResolutionUnit: 2
    exif:XResolution: 2400000/10000
    exif:YResolution: 2400000/10000

grep closes with code 0

Note the order: the original identify child process stream terminates once its data is passed to the grep command, which then does its thing and prints out the target data (the photo resolution). Then the grep command’s close event is processed.

Instead of using a child process, if you have either GraphicsMagick or ImageMagick installed, you can use the gm Node module for accessing the imaging capability. Just install it as:

npm install gm

Of course, you can still use the child process, but using the GraphicsMagick module can be simpler.

Extra: Using Child Processes with Windows

The solution demonstrates how to use child processes in a Linux environment. There are similarities and differences between using child processes in Linux/Unix, and using them in Windows.

In Windows, you can’t explicitly give a command with a child process; you have to invoke the Windows cmd.exe executable and have it perform the process. In addition, the first flag to the command is /c, which tells cmd.exe to process the command and then terminate.

Borrowing an example from Learning Node (O’Reilly), in the following code, the cmd.exe command is used to get a directory listing, using the Windows dir command:

var cmd = require('child_process').spawn('cmd', ['/c', 'dir\n']);

cmd.stdout.on('data', function (data) {
    console.log('stdout: ' + data);
});

cmd.stderr.on('data', function (data) {
    console.log('stderr: ' + data);
});

cmd.on('exit', function (code) {
    console.log('child process exited with code ' + code);
});

Running Node and Apache on the Same Port

Problem

You want your users to be able to access your Node application without having to specify a port number. You can run it at port 80, but then your Node application is in conflict with your Apache web server.

Solution

There are a couple of options you can use to run Node and Apache seemingly on port 80 at the same time. One is to use nginx as a reverse proxy for both Apache and Node. A reverse proxy intercepts a web request and routes it to the correct service. Using a reverse proxy, you can start Node on a different port address, and when the reverse proxy gets a request for the Node application, it routes the request to the appropriate port.

Another option is to use either Node as the reverse proxy to Apache, or Apache as a reverse proxy to Node. In the discussion, I cover the steps to using Apache as a reverse proxy for a Node application.

Discussion

We take our server infrastructures for granted when we’re developing traditional web server applications. Node, though, changes all the rules, and we’re having to become more familiar with how it all holds together.

For instance, traditional web servers listen on a specific port, even though we don’t use a port number in our URLs: they’re listening on port 80, the default port when you’re using Hypertext Transfer Protocol (HTTP).

If you’re using Apache and attempt to start a Node web service on port 80, it will fail. If Apache isn’t running, you can still run into problems starting your Node application on port 80, because you’re doing so without administrative (root) privileges. You’ll get an EACCES error (“permission denied”) because starting an application on a port less than 1024 requires root privileges.

So you might try to then run the application using sudo, which allows you to run an application as root:

sudo node app.js

Chances are if you do have root privileges your application will start. But it also increases the vulnerability of your server. Very few applications are hardened enough to run with root privileges and that includes Apache, which actually spawns a worker thread running as a nonprivileged user to respond to all web requests.

There are options for running Apache and a Node application on the same server and seemingly both on port 80. One popular option is to use Nginx (pronounced as “Engine X”) as a reverse proxy for both Apache and the Node application. Another is to use a separate server for the Node application, which isn’t an impossible solution considering how affordable Virtual Private Servers (VPS) have become.

Note

Another option for Node application deployment is to use a cloud server or other third-party service that enables Node Hosting. Among some of the Node deployment services are Joyent, host company for Node, Nodejitsu, and Codeship.

However, if you’re interested in as simple a solution as possible, and the performance requirements for your Node application are such that Apache’s single-threaded processing won’t be detrimental, you can use Apache as a reverse proxy for your Node application.

With a reverse proxy, the server that receives a request for a specific URL forwards it to the correct application. To use Apache as a reverse proxy for a Node application, you need to ensure that two Apache modules are enabled:

sudo a2enmod proxy
sudo a2enmod proxy_http

Next, you’ll need to configure a virtual host for your Node application. I’m currently running a Ghost weblog (Node-based) on the same server as my main Apache server. The virtual host file I created for this weblog is contained in the following code snippet:

<VirtualHost ipaddress:80>
    ServerAdmin myemail
    ServerName shelleystoybox.com

    ErrorLog path-to-logs/error.log
    CustomLog path-to-logs/access.log combined

    ProxyRequests off

    <Location />
            ProxyPass http://ipaddress:2368/
            ProxyPassReverse http://ipaddress:2368/
    </Location>
</VirtualHost>

You’ll need to replace the IP address with your own. Note that the request is proxied to a specific port the Node application is listening to (in this case, port 2368). It’s essential that you set ProxyRequests to off, to ensure forward proxying is turned off. Keeping forward proxying open can allow your server to be used to access other sites, while hiding the actual origins of the request.

Then it’s a matter of just enabling the virtual host and reloading Apache:

sudo a2ensite shelleystoybox.com
sudo service apache2 reload

People can also access the Ghost weblog by directly specifying the port address. The only way to prevent this is to disable direct access to the port from outside the server. In my Ubuntu system, I configured this with an iptables rule:

iptables -A INPUT -i eth0 -p tcp --dport 2368 -j DROP

But unless you really need this, use caution when messing around with iptables.

Now I can set my Node application to listen on port 2368, and start the application without root privileges.
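
On the Node side, nothing special is required; the application simply listens on the proxied port. A minimal sketch:

var http = require('http');

// listen on the port Apache proxies to; no root privileges required
http.createServer(function (req, res) {
  res.writeHead(200, {'content-type': 'text/plain'});
  res.end("Proxied through Apache\n");
}).listen(2368);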

The main drawback to using Apache as a reverse proxy for a Node application is that Apache is single-threaded, which can cramp Node’s style. If performance is a problem for you, then you should consider the other approaches I outlined earlier.

Note

Read more about Apache mod_proxy at http://httpd.apache.org/docs/2.2/mod/mod_proxy.html.

Keeping a Node Instance Up and Running

Problem

You’re in Linux, and you want to start up a Node application, but you also don’t want to keep a terminal window open while the application is running.

Solution

Use Forever to ensure the application is restarted if it’s ever shut down:

forever start  -l forever.log -o out.log -e err.log index.js

Discussion

Forever is a CLI (command-line interface) tool that can be used not only to start a Node application, but also to ensure that the application is restarted if, for some reason, it’s shut down.

Install Forever using npm:

sudo npm install forever -g

Then start your Node application, making use of one or more of Forever’s flags. For my Ghost installation, I used:

forever start  -l forever.log -o out.log -e err.log index.js

The start action is one of the many available with Forever. This action starts the Node application as a Unix daemon or background process. It makes use of node.daemon, another Node module that can be used to create Unix daemons.

The command line also makes use of three options:

  • -l to create a log file

  • -o to log stdout from the script to the specified output file

  • -e to log stderr from the script to the specified error file

Some other Forever actions are:

  • stop to stop the daemon script

  • restart to restart the daemon script

  • stopall to stop all scripts

  • restartall to restart all scripts

  • list to list all running scripts

  • logs to list log files for running scripts
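
For example, after starting a script you can check on it, restart it, or stop it by name (hypothetical usage, assuming index.js was started as shown above):

forever list
forever restart index.js
forever stop index.js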

Forever restarts the application if it shuts down for whatever reason. However, if the entire system is rebooted, you’ll need an additional step to ensure that Forever itself is started. For my Node Ghost weblog, I used Ubuntu’s Upstart program. To do this, I created a configuration file in /etc/init named ghost.conf with the following text (generalized for the book):

# /etc/init/ghost.conf
description "Ghost"

start on (local-filesystems)
stop on shutdown

setuid your-userid
setgid your-grpid

script
    export HOME="path-to-ghost"
    cd path-to-ghost
    exec /usr/local/bin/forever -a -l path-to-logfiles/forever.log \
        --sourceDir path-to-ghost index.js
end script

When my server reboots, Forever restarts my Ghost weblog’s daemon, using the given nonroot user and group IDs.
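
With this job file in place, the weblog can also be controlled manually through Upstart (assuming the job is named ghost, after the ghost.conf file):

sudo start ghost
sudo status ghost
sudo stop ghost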

Monitoring Application Changes and Restarting

Problem

Development can get rather active, and it’s easy to forget to restart an application after its code has changed.

Solution

Use the nodemon utility to watch your source code and restart your application when the code changes.

To use, first install nodemon:

npm install -g nodemon

Instead of starting the application with node, use nodemon instead:

nodemon serverapp.js

Discussion

The nodemon utility monitors the files within the directory where it was started. If any of the files change, the Node application is automatically restarted. This is a handy way of making sure your running Node application reflects the most recent code changes.

Needless to say, nodemon is not a tool you want to use in a production system. You don’t want an application restarting automatically whenever a bit of code changes, because the change may not be production ready. Production systems do better when rollouts are triggered by human intention, not accidental software intervention.

If the application accepts values when started, you can provide these on the command line, just as with Node, but precede them with the double-dash (--) flag, which signals nodemon to ignore anything that follows and pass it along to the application:

nodemon serverapp.js -- -param1 -param2

When started, you should get feedback similar to the following:

14 Jul 15:11:40 - [nodemon] v1.2.1
14 Jul 15:11:40 - [nodemon] to restart at any time, enter `rs`
14 Jul 15:11:40 - [nodemon] watching: *.*
14 Jul 15:11:40 - [nodemon] starting `node helloworld.js`
Server running on 8124/

If the code changes, you’ll see something similar to the following:

14 Jul 15:13:42 - [nodemon] restarting due to changes...
14 Jul 15:13:42 - [nodemon] starting `node helloworld.js`
Server running on 8124/

If you want to manually restart the application, just type rs into the terminal where nodemon is running. You can also use a configuration file with the utility, monitor only select files or subdirectories, and even use it to run non-Node applications.
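
As a sketch of that configuration option, a nodemon.json file placed in the project directory might restrict monitoring to particular directories and extensions (the directory names here are hypothetical):

{
  "watch": ["server", "lib"],
  "ignore": ["*.test.js"],
  "ext": "js json"
}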

The nodemon utility can also be used with Forever, discussed in Keeping a Node Instance Up and Running. If the Node application crashes, Forever restarts it, and if the source code for the application changes, nodemon restarts the application. To use the two together, you do need to use the --exitcrash flag, to signal nodemon to exit if the application crashes:

forever nodemon --exitcrash serverapp.js

You can use this combination in production, though I remain wary of restarting applications automatically when code changes. Still, the option is there if you want it.

Screen Scraping with Request

Problem

You want to access a web resource from within your Node application.

Solution

Use Request, one of the most popular and widely used Node modules. It’s installed with npm:

npm install request

and can be used as simply as:

var request = require('request');
request('http://oreilly.com', function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body);
  }
});

Discussion

Request provides support for the HTTP methods GET, POST, DELETE, and PUT. In the case of GET, if the status code indicates success (200), you can then process the returned data (formatted as HTML in this instance) however you would like.
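
For example, a POST with form data looks much the same; here’s a sketch (the URL and form fields are hypothetical):

var request = require('request');

// POST hypothetical form fields to a hypothetical endpoint
request.post('http://example.com/login',
  { form: { user: 'me', password: 'secret' } },
  function (error, response, body) {
    if (!error && response.statusCode == 200) {
      console.log(body);
    }
  });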

You can stream the result to a file using the filesystem module:

var request = require('request');
var fs = require('fs');

request('http://burningbird.net/flame.png')
  .pipe(fs.createWriteStream('flame.png'));
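
Because the request is a stream, you can also listen for errors before piping; a minimal sketch:

var request = require('request');
var fs = require('fs');

request('http://burningbird.net/flame.png')
  .on('error', function (err) {
    console.error(err); // e.g., DNS failure or connection refused
  })
  .pipe(fs.createWriteStream('flame.png'));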

You can also stream a system file to a remote server with PUT, as noted in the module’s documentation:

fs.createReadStream('flame.json')
  .pipe(request.put('http://mysite.com/flame.json'))

You can also handle multipart form uploading and authentication.
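
For instance, HTTP Basic authentication is handled through an auth option; a sketch, with placeholder credentials and URL:

var request = require('request');

// fetch a hypothetical protected resource with Basic authentication
request.get('http://example.com/protected', {
  auth: { user: 'username', pass: 'password' }
}, function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body);
  }
});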

An interesting use of Request is to scrape a website or resource and then use other functionality to query for specific information within the returned material. A popular module for querying is Cheerio, a very small implementation of jQuery core intended for use on the server. In Example 1-5, a simple application pulls in all links (a) contained in h2 elements (typical for individual article titles on a main page) and then writes the text of each link to the console.

Example 1-5. Screen scraping made easy with Request and Cheerio
var request = require('request');
var cheerio = require('cheerio');

request('http://burningbird.net', function (error, response, html) {
  if (!error && response.statusCode == 200) {
    var $ = cheerio.load(html);
    $('h2 a').each(function(i,element) {
        console.log(element.children[0].data);
    });
  }
});

After the successful request, the returned HTML is passed to Cheerio via its load() method, and the result is assigned to a dollar sign variable ($), so we can work with the result using familiar jQuery syntax.

The element pattern h2 a is then used to query for all matches, and the result is processed using the each() method, accessing the text of each heading. The output to the console should be the titles of all the articles on the main page of the weblog.
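
Since the loaded result behaves like jQuery, the callback could equivalently wrap each element and use text():

$('h2 a').each(function (i, element) {
    console.log($(element).text());
});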

Creating a Command-Line Utility with Help From Commander

Problem

You want to turn your Node module into a Linux command-line utility, including support for command-line options/arguments.

Solution

To convert your Node module to a Linux command-line utility, add the following line as the first line of the module:

#!/usr/bin/env node

To provide for command-line arguments/options, including the ever-important --help, make use of the Commander module:

var program = require('commander');

program
   .version ('0.0.1')
   .option ('-s, --source [website]', 'Source website')
   .option ('-f, --file [filename]', 'Filename')
   .parse(process.argv);

Discussion

Converting a Node module to a command-line utility is quite simple. First, add the following line to the module:

#!/usr/bin/env node

Change the module file’s mode to executable using chmod:

chmod a+x snapshot

Notice that I dropped the .js from the filename once I converted the module to a utility. To run it, I use the following:

./snapshot -s http://oreilly.com -f test.png

The command-line utility I created makes use of Phantom to create an image capture of a website. not available covers the use of Phantom, but for now, Example 1-6 contains the complete code, making use of Commander.

Example 1-6. A screenshot utility built with Phantom and Commander
#!/usr/bin/env node

var phantom = require('phantom');
var program = require('commander');

program
   .version ('0.0.1')
   .option ('-s, --source [website]', 'Source website')
   .option ('-f, --file [filename]', 'Filename')
   .parse(process.argv);

phantom.create(function (ph) {
  ph.createPage(function (page) {
    page.open(program.source, function (status) {
      console.log("opened " + program.source, status);
      page.render(program.file, function() {
        ph.exit();
      });
    });
  });
});

Commander is another favorite Node module of mine, because it provides exactly what we need to create a command-line utility: not only a way to process command-line arguments, but also a way to handle requests for help via --help. To use it, you just need to specify a version for the utility and then list all of the command-line arguments/options. Note that you need to specify which of the options take an argument, and provide a plain-English description of each option’s purpose. Lastly, call Commander’s parse() method, passing it the process.argv structure, which contains all of the arguments given on the utility’s command line.
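
One detail worth noting: in Commander, square brackets mark an option whose argument is optional, while angle brackets mark one whose argument is required. A sketch of the distinction:

var program = require('commander');

program
   .version ('0.0.1')
   .option ('-s, --source <website>', 'Source website (argument required)')
   .option ('-f, --file [filename]', 'Filename (argument optional)')
   .parse(process.argv);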

Now, you can run the utility with the short option, consisting of a dash (-) and a single lowercase alphabetic character:

./snapshot -s http://oreilly.com -f test.png

Or you can use the long option, consisting of a double-dash (--) followed by a complete word:

./snapshot --source http://oreilly.com --file test.png

And when you run the utility with either -h or --help, you get:

  Usage: snapshot [options]

  Options:

    -h, --help              output usage information
    -V, --version           output the version number
    -s, --source [website]  Source website
    -f, --file [filename]   Filename

Running the following returns the version:

./snapshot -V

Commander generates all of this automatically, so we can focus on our utility’s primary functionality.

Note

Commander can be installed using npm:

npm install commander
