Tag Archives: coffeescript

Backbone.Validation with Chaplin and CoffeeScript

Any sizable web application needs validation. Doing it yourself is for the birds, so I wanted to incorporate a Backbone plugin to help solve the problem. For this example I chose Backbone.Validation.

Start with a basic framework. Brunch, an application assembler, is a great way to bootstrap these projects. I used Paul Miller's brunch-with-chaplin skeleton.

brunch new gh:paulmillr/brunch-with-chaplin

To start up the server, type brunch watch --server and go to http://localhost:3333/ in a new browser window. If everything is good, you'll see this:

[Screenshot: the default brunch-with-chaplin landing page]

We'll need a basic application to test out the concept, so we'll modify the routes and the controller, and add a new view and template to our project.

# app/routes.coffee
module.exports = (match) ->
  match '', 'home#index'
  match 'form', 'home#form'

# app/controllers/home-controller.coffee
Controller = require 'controllers/base/controller'
HeaderView = require 'views/home/header-view'
FormView = require 'views/home/form'

module.exports = class HomeController extends Controller
  form: ->
    @view = new FormView region: 'main'

# app/views/home/form.coffee
View = require 'views/base/view'
Form = require 'models/form'

module.exports = class FormView extends View
  autoRender: true
  className: 'form-view'
  template: require './templates/form'
  events:
    'click a.validateButton': 'validate'

  initialize: ->
    super
    @model = new Form()

  validate: (e) ->
    @model.validate()
    e.preventDefault()

<!-- app/views/home/templates/form.hbs -->
<form>
  <div>
    <label for="name">Name</label><input type="text" name="name" class="name" />
  </div>
  <div>
    <label for="phone">Phone</label><input type="text" name="phone" class="phone" />
  </div>
  <div>
    <label for="email">Email</label><input type="text" name="email" class="email" />
  </div>
  <a href="#" class="validateButton">Validate</a>
</form>

With that code in place, let's do a quick checkpoint at http://localhost:3333/form. We should get an ugly view like this:

[Screenshot: the unstyled form with Name, Phone, and Email fields]

So, we know we want a basic form that can save name, phone, and email. Following the guidelines in the validation docs (https://github.com/thedersen/backbone.validation), let's add the rules to our model.

# app/models/form.coffee
BaseModel = require 'models/base/model'

module.exports = class Form extends BaseModel
  validation:
    name:
      required: true
    email:
      required: true
      pattern: "email"

We’ll also need to add the Backbone.Validation script to our vendor/scripts folder.

In a perfect world, the @model.validate() call would execute our validation rules. However, in this world, we get a JavaScript error:

Uncaught TypeError: Object # has no method 'validate'

There is one final step. We need to bind our model to the validation, so add the call in the attach method of our view:

  attach: ->
    super
    Backbone.Validation.bind @
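Backbone.Validation also ships an unbind counterpart. Since Chaplin disposes views aggressively, a minimal sketch (my own addition, assuming the same FormView) is to unbind in dispose so the torn-down view doesn't leak validation callbacks:

```coffeescript
  dispose: ->
    # Detach the validation callbacks before Chaplin tears the view down.
    Backbone.Validation.unbind @
    super
```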

That's it! Full source code for the example is available on GitHub.


Fluent Interfaces in Coffeescript

We’ve all seen them – builder patterns that make object construction clean and readable.

person().named('Bob').withSpouse('Alice').bornOn('01-26-1982').build()

I used to do these all the time in Java (we called them fluent interfaces), and I just realized today that I had no idea how to do this style in CoffeeScript. Well, let's remedy that.

To get started, I'm going to follow the basic pattern I used in Java. Since CoffeeScript provides native class functionality, it's a pretty simple clone.

class Person

  named: (name) ->
    @name = name
    @

  withSpouse: (spouse) ->
    @spouse = spouse
    @

  bornOn: (dob) ->
    @dob = dob
    @

  build: ->
    return {
      name: @name
      spouse: @spouse
      dob: @dob
    }

console.log new Person().named('Adam').withSpouse('Rachel').build()      

But hey, this is CoffeeScript. We can do better. Let's use an attribute shortcut to reduce the code length.

class Person
  
  named: (@name) ->
    @

  withSpouse: (@spouse) ->
    @

  bornOn: (@dob) ->
    @

  build: ->
    return {
      name: @name
      spouse: @spouse
      dob: @dob
    }

console.log new Person().named('Adam').withSpouse('Rachel').build()      

I suspect there may be an even cleaner way to do this, but this seems concise enough for now.
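One possibility, sketched here as an experiment (the `fluent` helper and the method-to-property map are my own invention, not part of the original example), is to generate the chainable setters from a map instead of writing each one by hand:

```coffeescript
# Attach a chainable setter to klass's prototype for each method -> property pair.
fluent = (klass, methods) ->
  for own method, property of methods
    do (property) ->
      klass::[method] = (value) ->
        @[property] = value
        @

class Person
  build: ->
    name: @name
    spouse: @spouse
    dob: @dob

fluent Person, named: 'name', withSpouse: 'spouse', bornOn: 'dob'

console.log new Person().named('Adam').withSpouse('Rachel').build()
```

The `do (property) ->` closure is needed so each generated setter captures its own property name rather than the loop's final value.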

The full source code is available here: https://github.com/adamnengland/coffee-fluent-interface


Heroku Scheduler with Coffeescript Jobs

Heroku provides a free add-on for running scheduled jobs. This provides a convenient way to run scheduled tasks in an on-demand dyno, freeing your web dynos to focus on user requests. I'm currently writing a Node.js application in CoffeeScript that has some modest job-scheduling needs.

Heroku's documentation is a little thin on this particular use case; however, a good starting point is the One-Off Dyno documentation. The important concept to remember is that if you can run your command using "heroku run xxx", you'll be able to run it in the scheduler. These one-off dyno scripts should be placed in a bin/ directory in your project root.

My first attempt is below. Note the shebang pointing at Heroku's node install (Heroku installs node at /app/bin/node).

#! /app/bin/node
require('../server/jobs/revenueCalculator').runOnce()

Deploy to Heroku and run the following command using the toolbelt. We get an error immediately:

heroku run runJob
Error: Cannot find module '../server/jobs/revenueCalculator'

Next I wanted to try running interactively, using the CoffeeScript REPL:

heroku run coffee
Running `coffee` attached to terminal... up, run.6960
coffee> x = require('./server/jobs/revenueCalculator')
{ start: [Function], runOnce: [Function] }
coffee> x.runOnce()
Job Started
Job Complete

Now we seem to have zeroed in on the problem. Perhaps the script will work if we run it with the CoffeeScript interpreter rather than the node executable. Edit bin/nightlyJob as follows:

#! /app/node_modules/.bin/coffee
job = require '../server/jobs/revenueCalculator'
job.runOnce()
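One gotcha worth calling out (my own note, not from Heroku's docs): the file in bin/ must carry the executable bit when committed, or heroku run can't launch it by name. A quick shell sketch, recreating the script locally for illustration:

```shell
# Recreate the one-off script and mark it executable; git preserves the
# executable bit, and `heroku run nightlyJob` needs it set.
mkdir -p bin
cat > bin/nightlyJob <<'EOF'
#! /app/node_modules/.bin/coffee
job = require '../server/jobs/revenueCalculator'
job.runOnce()
EOF
chmod +x bin/nightlyJob
```

Git tracks the executable bit, so committing after the chmod is enough.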

Deploy to Heroku and run:

heroku run nightlyJob
Running `nightlyJob` attached to terminal... up, run.9139
Job Started
Job Complete

Using #! /app/node_modules/.bin/coffee in a standalone script to call the application code seems to do the trick. Now, let's add the Heroku scheduler to our app and configure the job to run nightly.

heroku addons:add scheduler:standard
heroku addons:open scheduler

A browser should pop open, and we can schedule our nightly job.

[Screenshot: the scheduler dashboard with the nightly job configured]

That's all, folks.


Clustering Web Sockets with Socket.IO and Express 3

Node.js gets a lot of well-deserved press for its impressive performance. The event loop can handle substantial load with a single process. However, most servers have multiple processors, and I, for one, would like to take advantage of them. Node's cluster API can help.

While cluster is a core API in Node.js, I'd like to incorporate it with Express 3 and Socket.io.

Final source code is available on GitHub.

The node cluster docs give us the following example.

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  http.createServer((req, res) ->
    res.writeHead 200
    res.end "hello world\n"
  ).listen 8000

The code compiles and runs, but I have no confirmation that things are actually working. I'd like to add a little logging to confirm that we actually have multiple workers going. Let's add these lines right before the 'exit' listener.

  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.id

On my machine, we get this output:

coffee server
forked worker 1
forked worker 2
forked worker 3
forked worker 4
forked worker 5
forked worker 6
forked worker 7
forked worker 8

So far, so good. Let's add Express to the mix.

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.process.pid
  cluster.on "listening", (worker, address) ->
    console.log "worker " + worker.process.pid + " is now connected to " + address.address + ":" + address.port
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  app = require("express")()
  server = require("http").createServer(app)
  server.listen 8000
  app.get "/", (req, res) ->
    console.log 'request handled by worker with pid ' + process.pid
    res.writeHead 200
    res.end "hello world\n"

At this point, I'd like to throw a few HTTP requests against the setup to ensure that I'm really utilizing all my processors. Running (curl -XGET "http://localhost:8000") 6 times makes the node process go:

request handled by worker with pid 85229
request handled by worker with pid 85231
request handled by worker with pid 85231
request handled by worker with pid 85231
request handled by worker with pid 85227
request handled by worker with pid 85229

Alright, the last step is getting socket.io involved. It's just a couple of extra lines for the socket; however, we'll need to add a basic index.html file to actually make the socket calls.

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.process.pid
  cluster.on "listening", (worker, address) ->
    console.log "worker " + worker.process.pid + " is now connected to " + address.address + ":" + address.port
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  app = require("express")()
  server = require("http").createServer(app)
  io = require("socket.io").listen(server)
  server.listen 8000
  app.get "/", (req, res) ->
    res.sendfile(__dirname + '/index.html');
  io.sockets.on "connection", (socket) ->
    console.log 'socket call handled by worker with pid ' + process.pid
    socket.emit "news",
      hello: "world"

 

<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io.connect('http://localhost');
  socket.on('news', function (data) {
    console.log(data);
    socket.emit('my other event', { my: 'data' });
  });
</script>

When I run this code, problems start to appear. Specifically, the following message shows up in my output:

warn - client not handshaken client should reconnect

Not surprisingly, we have issues with sockets appearing disconnected. Socket.io defaults to storing its open sockets in an in-memory store. As a result, a worker has no access to sockets opened in the other processes. We can easily fix the problem by using the Redis store for socket.io. The docs we need are here.

With the redis store in place, it looks like this:

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
RedisStore = require("socket.io/lib/stores/redis")
redis = require("socket.io/node_modules/redis")
pub = redis.createClient()
sub = redis.createClient()
client = redis.createClient()
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.process.pid
  cluster.on "listening", (worker, address) ->
    console.log "worker " + worker.process.pid + " is now connected to " + address.address + ":" + address.port
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  app = require("express")()
  server = require("http").createServer(app)
  io = require("socket.io").listen(server)
  io.set "store", new RedisStore(
    redisPub: pub
    redisSub: sub
    redisClient: client
  )
  server.listen 8000
  app.get "/", (req, res) ->
    res.sendfile(__dirname + '/index.html');
  io.sockets.on "connection", (socket) ->
    console.log 'socket call handled by worker with pid ' + process.pid
    socket.emit "news",
      hello: "world"

Redis Performance – Does key length matter?

I'm currently building a project using Redis as a high-performance cache in a Node.js application (using the excellent node_redis). My keys will be fairly large (between 512B and 1KB). The Redis documentation doesn't specifically warn against keys of this size, but it still seems appropriate to run a benchmark and see how Redis reacts to large keys (and whether or not 1KB is really a large key, or just par for the course).

Test Script (source)

Basically, we insert 1000 records into Redis, each with a 10,000-character value. After the writes are all complete, we read each key back from Redis.

redis = require "redis"

randomString = (length) ->
  chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
  result = ""
  i = length

  while i > 0
    result += chars[Math.round(Math.random() * (chars.length - 1))]
    --i
  result

writeTest = (keyLength) ->
  console.log "1000 set statements for #{keyLength} character keys"
  keys = []
  for x in [1..1000]
    keys.push randomString(keyLength)
  startTime = new Date().getTime()
  for x in keys
    client.set x, randomString(10000)
  client.quit ->
    console.log "1000 keys inserted in #{new Date().getTime() - startTime} ms"
    readTest(keys)

readTest = (keys) ->
  client = redis.createClient()
  startTime = new Date().getTime()
  for x in keys
    client.get x
  client.quit ->
    console.log "1000 keys retrieved in #{new Date().getTime() - startTime} ms"

client = redis.createClient()

client.flushdb ->
  writeTest(20000)

This test was performed for key lengths of 10, 100, 500, 1000, 2500, 5000, 7500, 10,000, and 20,000 characters. Three runs of each were performed to avoid any fluke results. Without further ado, the results.

Write Performance (in ms)

Key Length   Run 1   Run 2   Run 3
     10       1235    1216    1259
    100       1231    1242    1223
    500       1283    1240    1270
   1000       1277    1317    1345
   2500       1318    1279    1294
   5000       1376    1391    1386
   7500       1223    1204    1265
  10000       1220    1252    1235
  20000       2065    2014    2016

Read Performance (in ms)

Key Length   Run 1   Run 2   Run 3
     10         43      41      51
    100         45      45      43
    500         60      54      58
   1000         69      73      79
   2500         97     101     102
   5000        113     114     110
   7500        134     133     136
  10000        147     156     151
  20000        244     234     241

Not surprisingly, as the key length increases, times do increase. However, write times are relatively unaffected by key length until the 20,000-character jump, while read times climb steadily. To put it in perspective:

  • Key length 10: an average write takes 1.24ms, an average read takes 0.045ms
  • Key length 10,000: an average write takes 1.24ms, an average read takes 0.15ms

Whether or not this is significant is really up to you, however, for my purposes, it seems like an insignificant difference.  At the end of the day, redis is a fast and flexible tool for caching data.
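As a sanity check on those averages, dividing each run total by the 1,000 operations per run reproduces the per-operation numbers (run totals copied from the tables above):

```coffeescript
# Average a set of run totals, then divide by the 1000 operations per run.
avgPerOp = (runTotals) ->
  sum = runTotals.reduce (a, b) -> a + b
  sum / runTotals.length / 1000

console.log avgPerOp [1235, 1216, 1259]  # writes, key length 10: ~1.24 ms each
console.log avgPerOp [43, 41, 51]        # reads, key length 10: ~0.045 ms each
```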
