How we doubled our Vagrant performance with Rsync

The Engineering team at Red Nova Labs has been working to simplify our development process. Recently, we’ve been experimenting with Vagrant as a tool to do just that.

However, we’ve had the same problem that many developers have with Vagrant – performance. The app in question is very large and runs in development mode, with no asset precompilation and no minification. The development machine is a MacBook Pro with 8 GB of RAM and a 2.6 GHz Core i5 processor.

The only thing that changes across these test runs is the shared folder configuration in the Vagrantfile.

Let’s start with Synced Folders in VirtualBox. The out-of-the-box performance was pretty dismal.

» wrk -d30s http://localhost:3000/health_check
Running 30s test @ http://localhost:3000/health_check
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us     nan%
    Req/Sec     0.00      0.00     0.00    100.00%
  10 requests in 30.09s, 4.09KB read
  Socket errors: connect 0, read 0, write 0, timeout 10
Requests/sec:      0.33
Transfer/sec:     139.24B

This isn’t surprising; it has been well established for years that NFS performance is superior to shared folders. In fact, most of the documentation out there (including Stefan Wrobel’s wonderful How to make Vagrant performance not suck) recommends NFS as the best practice. Just add this line to your Vagrantfile:

config.vm.synced_folder '.', '/vagrant', id: "vagrant_root", nfs: true

And it is faster. A lot faster.

» wrk -d30s http://localhost:3000/health_check
Running 30s test @ http://localhost:3000/health_check
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.07s   196.68ms   1.59s    87.73%
    Req/Sec     4.69      2.36    13.00     75.68%
  277 requests in 30.02s, 113.34KB read
Requests/sec:      9.23
Transfer/sec:      3.78KB

The good news is that we don’t have to stop there anymore. As of Vagrant 1.5, we’ve got an even better option: rsync. Modify your Vagrantfile to include this:

config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/", rsync__auto: true

And when we run our benchmark:

» wrk -d30s http://localhost:3000/health_check
Running 30s test @ http://localhost:3000/health_check
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   484.11ms  122.34ms   1.05s    66.56%
    Req/Sec    10.71      5.56    40.00     77.84%
  612 requests in 30.06s, 250.42KB read
Requests/sec:     20.36
Transfer/sec:      8.33KB

In this specific situation, we see a huge improvement moving from shared folders to NFS. On top of that, we are able to double it by switching to Rsync. The switch is only one line of code, so you really don’t have anything to lose by trying it out on your project.
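For reference, the three synced-folder setups benchmarked above can live side by side in a single Vagrantfile. This is only a sketch (the box name is a placeholder); note that NFS needs a private network on VirtualBox, and with rsync you run vagrant rsync-auto to keep changes flowing:

```ruby
# Sketch: the three synced-folder options from this post, in one Vagrantfile.
# Uncomment exactly one config.vm.synced_folder line.
Vagrant.configure("2") do |config|
  config.vm.box = "your-box-here"  # placeholder for whatever box you use

  # 1. VirtualBox shared folders (Vagrant's default; slowest in our tests)
  # config.vm.synced_folder ".", "/vagrant"

  # 2. NFS (VirtualBox requires a private network for this)
  # config.vm.network "private_network", type: "dhcp"
  # config.vm.synced_folder ".", "/vagrant", id: "vagrant_root", nfs: true

  # 3. rsync (Vagrant 1.5+; run `vagrant rsync-auto` to sync on file change)
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: ".git/", rsync__auto: true
end
```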

If you’ve moved your Vagrant setup from NFS to Rsync, please leave a comment, and let other readers know whether it made a difference for you.


Rapid Lo-Dash is now available!

I’m pleased to announce that my newest video series, Rapid Lo-Dash, is available for purchase through Packt Publishing.

You can get all the details and view the free sample here: Rapid Lo-Dash

If you’re a JavaScript developer, and you aren’t using Lo-Dash yet, you really need to check it out. It’ll help you write some cleaner, faster, more maintainable code. Don’t believe me?  Take a look at this example, and decide which code you would rather maintain.

// Plain JavaScript: collect the odd numbers
var numbers = [1, 2, 3, 4, 5, 6];
var odd_numbers = [];
for (var i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 == 1) {
    odd_numbers.push(numbers[i]);
  }
}

// The same thing with Lo-Dash
var numbers = [1, 2, 3, 4, 5, 6];
var odd_numbers = _.filter(numbers, function(num) {
  return num % 2 == 1;
});

Rapid Lo-Dash walks you through the basics of setting up your development environment and using Lo-Dash to work with arrays. We’ll see examples of how to use Lo-Dash to work with objects, collections, and chaining, and get into some basic functional programming concepts.

Free sample video: 5. Clean Up Your Code with Lo-Dash Chains

Thanks to the team at Packt Publishing for working with me to create this video series – hope you enjoy it.


Avoiding Caching with Rails ActiveRecord

Today, I came across a puzzling issue with Rails 4.1, ActiveRecord, and Postgres when trying to select random records from a database table.

To demonstrate, let’s use a simple social network API example. Clients will POST a list of users, and I’ll “match” them to someone in my database.

Easy enough: for testing purposes, we can just select a random user from our “users” table in postgres to simulate a match.

contacts = [{:id => 1}, {:id => 2}, {:id => 3}, {:id => 4}]
contacts.each do |c|
  user = User.order('random()').first
  c[:detail] = {:username => user.username}
end

Run that in the Rails console, and good things happen. It seems like we have some random users from my (small) database.

[{:id=>1, :detail=>{:username=>"adam5"}},
 {:id=>2, :detail=>{:username=>"adam1"}},
 {:id=>3, :detail=>{:username=>"adam11"}},
 {:id=>4, :detail=>{:username=>"adam5"}}]

Let’s move that code into a controller action.

  def create
    contacts = [{:id => 1}, {:id => 2}, {:id => 3}, {:id => 4}]
    contacts.each do |c|
      user = User.order('random()').first
      c[:detail] = {:username => user.username}
    end
    render :json => contacts, :root => false
  end

Uh-oh. This time, we get a decidedly un-random response:

[
    {
        "id": 1,
        "detail": {
            "username": "adam6"
        }
    },
    {
        "id": 2,
        "detail": {
            "username": "adam6"
        }
    },
    {
        "id": 3,
        "detail": {
            "username": "adam6"
        }
    },
    {
        "id": 4,
        "detail": {
            "username": "adam6"
        }
    }
]

Looking at the logs, we’ll see this:

  User Load (0.7ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  User Load (0.7ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  CACHE (0.0ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  CACHE (0.0ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  CACHE (0.0ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  CACHE (0.0ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  CACHE (0.0ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1
  CACHE (0.0ms)  SELECT  "users".* FROM "users"   ORDER BY random() LIMIT 1

Rails query caching, normally so useful, causes a problem in this situation: within a single request, identical SQL statements are answered from the cache, and ORDER BY random() produces the exact same SQL string every time. (The console runs outside a request cycle, which is why the cache didn’t bite there.) No sweat. You just need to use ActiveRecord’s uncached method.

Modifying the action to use uncached looks like this:

  def create
    contacts = [{:id => 1}, {:id => 2}, {:id => 3}, {:id => 4}]
    contacts.each do |c|
      User.uncached do
        user = User.order('random()').first
        c[:detail] = {:username => user.username}
      end
    end
    render :json => contacts, :root => false
  end

Inside the “User.uncached” block, the query cache is bypassed, so a new query is executed for each iteration. The controller action now gives us some nice, randomized output.

[
    {
        "id": 1,
        "detail": {
            "username": "adam3"
        }
    },
    {
        "id": 2,
        "detail": {
            "username": "adam3"
        }
    },
    {
        "id": 3,
        "detail": {
            "username": "adam4"
        }
    },
    {
        "id": 4,
        "detail": {
            "username": "adam1"
        }
    }
]
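Under the hood, the query cache is keyed on the SQL text, and ORDER BY random() reads as the exact same string every time; the randomness happens inside the database, where the cache can’t see it. Here’s a toy sketch of that mechanism in plain Ruby (an illustration only, not Rails internals; all names are made up):

```ruby
# Toy illustration (NOT Rails internals): a query cache keyed on SQL text.
class ToyQueryCache
  def initialize
    @cache = {}
    @enabled = true
  end

  # Run the "query" via the block, caching by the SQL string when enabled.
  def execute(sql, &block)
    return block.call unless @enabled
    @cache[sql] ||= block.call
  end

  # Bypass the cache for the duration of the block, like User.uncached.
  def uncached
    @enabled = false
    yield
  ensure
    @enabled = true
  end
end

cache = ToyQueryCache.new
sql = "SELECT * FROM users ORDER BY random() LIMIT 1"

cached = Array.new(4) { cache.execute(sql) { rand(1000) } }
fresh  = cache.uncached { Array.new(4) { cache.execute(sql) { rand(1000) } } }

puts cached.uniq.length  # always 1: the first result is replayed
puts fresh.uniq.length   # almost certainly > 1
```

Rails’s real query cache works per request on the result set, but the failure mode is the same: identical SQL in, cached rows out.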

First Impressions of Atom – By Adam

For the past 3 years, I’ve worked primarily in Sublime Text, and it’s a fantastic application. I use it for Ruby, HTML, CSS, JavaScript, CoffeeScript, and more. Today, GitHub open sourced their in-house developed text editor, Atom. A quick day of working with Atom reveals that it may be a worthy replacement.

The first thing I notice is how familiar it feels to a Sublime user. When I open the folder where I keep my Knoda projects, I get a simple directory tree to navigate.

Screen Shot 2014-05-06 at 10.22.57 AM

Next, I can use all of my familiar keyboard shortcuts. Press cmd-p in Atom and start typing.

Screen Shot 2014-05-06 at 10.24.29 AM

Awesome (though, according to the Atom docs, I should be using cmd-t). Syntax coloring for Ruby files is excellent out of the box. Same for LESS, CoffeeScript, and HTML files. On my MacBook Air, however, the font size is huge. Almost comically huge.

How easy is it to fix that? Super easy, apparently. cmd-, opens the config. Font size is set to 16. Change it to 12, and things are a little more manageable.
Screen Shot 2014-05-06 at 10.29.28 AM

Configuration, key bindings, themes, and packages are all easily accessible in the cmd-, interface. This stands out for me as a superior out-of-the-box experience to Sublime Text. Incredibly intuitive.

Overall, it’s a great piece of software. Give it a download, and comment with your own review.

Benchmarks: acts-as-taggable-on vs PostgreSQL Arrays

While looking at performance optimizations for a Rails project, I noticed these lines in my debug console:

  ActsAsTaggableOn::Tag Load (0.5ms)  SELECT "tags".* FROM "tags" INNER JOIN "taggings" ON "tags"."id" = "taggings"."tag_id" WHERE "taggings"."taggable_id" = $1 AND "taggings"."taggable_type" = $2 AND (taggings.context = ('tags'))  [["taggable_id", 103], ["taggable_type", "Prediction"]]

This makes sense: my project is using acts-as-taggable-on to tag models. However, our tagging needs are quite simple, and since we are using postgres, I wondered whether postgres array types might be more efficient. To get a feel for the basic concept, see 41 studio’s writeup.

However, before going through all the trouble, I’d like to see whether the performance gains are appreciable. Using Ruby’s Benchmark module, we can check this pretty easily.

Full source code is available at https://github.com/adamnengland/rails-tag-bench, or follow the step-by-step below for the full experience.

Getting Started
You’ll need

  • Rails 4.0.2
  • Ruby 2.0.0
  • Postgres.app on OS X – though you can certainly modify this to work with any postgres install

Create a new Rails project

rails new rails-tag-bench
cd rails-tag-bench

Open the Gemfile and add:
gem 'pg', '0.17.1'

(I had to do this first: gem install pg -- --with-pg-config=/Applications/Postgres93.app/Contents/MacOS/bin/pg_config)

Then bundle install to get your dependencies.

Replace config/database.yml with

development:
  adapter: postgresql
  encoding: unicode
  database: rails_tag_bench
  pool: 5
  username: rails_tag_bench
  password:
  
test:
  adapter: sqlite3
  database: db/test.sqlite3
  pool: 5
  timeout: 5000

production:
  adapter: sqlite3
  database: db/production.sqlite3
  pool: 5
  timeout: 5000

We’ll need a database user, so open up postgres and issue:

create user rails_tag_bench with SUPERUSER;

Okay, let’s create the database:

rake db:create

In postgres type
\c rails_tag_bench
to confirm that the database is set up.

We’ll also need acts-as-taggable-on for the comparison, so update the Gemfile:

gem 'acts-as-taggable-on', '2.4.1'

and bundle install

rails g acts_as_taggable_on:migration
rake db:migrate

Let’s start with the taggable version:
rails g model ArticleTaggable title:string body:text
rake db:migrate

Open the generated article_taggable.rb and edit it:

class ArticleTaggable < ActiveRecord::Base
  acts_as_taggable
end

Let’s set up the benchmark:

rails g task bench

Fill out the body like so

require 'benchmark'
namespace :bench do
  task writes: :environment do
    Benchmark.bmbm do |x|
      x.report("Benchmark 1") do 
        1_000.times do
          ArticleTaggable.create(:title => ('a'..'z').to_a.shuffle[0,8].join, :body => ('a'..'z').to_a.shuffle[0,100].join, :tag_list => ['TAG1'])
        end
      end
    end    
  end

  task reads: :environment do
    Benchmark.bmbm do |x|
      x.report("Benchmark 1") do 
        1_000.times do
          ArticleTaggable.includes(:tags).find_by_id(Random.new.rand(1000..2000));
        end
      end
    end     
  end
end

You can run the benchmarks like so:

rake db:reset
rake bench:writes
rake bench:reads

Which should give you output like this:

➜  rails-tag-bench  rake bench:writes
Rehearsal -----------------------------------------------
Benchmark 1   8.620000   0.340000   8.960000 ( 10.716852)
-------------------------------------- total: 8.960000sec

                  user     system      total        real
Benchmark 1   8.540000   0.320000   8.860000 ( 10.543746)
➜  rails-tag-bench  rake bench:reads
Rehearsal -----------------------------------------------
Benchmark 1   2.930000   0.160000   3.090000 (  3.906484)
-------------------------------------- total: 3.090000sec

                  user     system      total        real
Benchmark 1   2.880000   0.150000   3.030000 (  3.825437)

So, on my MacBook Air, we wrote 1,000 records in 10.5437 seconds and read 1,000 records in 3.8254 seconds with acts-as-taggable-on.
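A quick aside on reading that output, in case Benchmark.bmbm is unfamiliar: it runs every report twice, a “Rehearsal” pass to absorb one-off startup costs (allocation warm-up, GC state), then a real pass whose numbers are the ones to quote. A minimal standalone example:

```ruby
require 'benchmark'

# bmbm runs each report twice: a "Rehearsal" pass to warm things up,
# then the real pass. It returns a Benchmark::Tms per report for the
# second (reported) pass.
results = Benchmark.bmbm do |x|
  x.report("string concat") do
    s = ""
    100_000.times { s << "x" }
  end
end

puts results.first.real  # wall-clock seconds for the reported pass
```

That’s why each rake task prints two tables; the second one is the one we compare.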

Now, let’s implement the example using postgres arrays and see where we land.

rails g model ArticlePa title:string body:text tags:string

Edit the new migration as follows

class CreateArticlePas < ActiveRecord::Migration
  def change
    create_table :article_pas do |t|
      t.string :title
      t.text :body
      t.string :tags, array: true, default: []

      t.timestamps
    end
  end
end

rake db:migrate

update our benchmarking code:

require 'benchmark'
namespace :bench do
  task writes: :environment do
    Benchmark.bmbm do |x|
      x.report("Using Taggable") do 
        1_000.times do
          ArticleTaggable.create(:title => ('a'..'z').to_a.shuffle[0,8].join, :body => ('a'..'z').to_a.shuffle[0,100].join, :tag_list => ['TAG1'])
        end
      end
      x.report("Using Postgres Arrays") do
        1_000.times do
          ArticlePa.create(:title => ('a'..'z').to_a.shuffle[0,8].join, :body => ('a'..'z').to_a.shuffle[0,100].join, :tags => ['TAG1'])
        end
      end
    end    
  end

  task reads: :environment do
    Benchmark.bmbm do |x|
      x.report("Using Taggable") do 
        1_000.times do
          ArticleTaggable.includes(:tags).find_by_id(Random.new.rand(1000..2000));
        end
      end
      x.report("Using Postgres Arrays") do 
        1_000.times do
          ArticlePa.find_by_id(Random.new.rand(1000..2000));
        end
      end      
    end     
  end
end

rake db:reset
rake bench:writes
rake bench:reads

The Results

➜  rails-tag-bench  rake bench:writes
Rehearsal ---------------------------------------------------------
Using Taggable          8.520000   0.330000   8.850000 ( 10.532700)
Using Postgres Arrays   1.460000   0.110000   1.570000 (  2.082705)
----------------------------------------------- total: 10.420000sec

                            user     system      total        real
Using Taggable          8.340000   0.310000   8.650000 ( 10.221277)
Using Postgres Arrays   1.410000   0.110000   1.520000 (  2.012559)

➜  rails-tag-bench  rake bench:reads
Rehearsal ---------------------------------------------------------
Using Taggable          2.920000   0.160000   3.080000 (  3.898911)
Using Postgres Arrays   0.420000   0.060000   0.480000 (  0.700684)
------------------------------------------------ total: 3.560000sec

                            user     system      total        real
Using Taggable          2.870000   0.140000   3.010000 (  3.805598)
Using Postgres Arrays   0.400000   0.060000   0.460000 (  0.677917)

For my money, the postgres arrays appear to be much faster, which comes as little surprise. By cutting out all the additional joins, we greatly reduce the query time.
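The shape of that win is easy to see in miniature. Here’s a toy in-memory model of the two read paths, with hypothetical data: the taggable read hops across three “tables” (articles, taggings, tags), while the array read is a single lookup:

```ruby
# Toy in-memory model (hypothetical data) of the two read paths.

# acts-as-taggable-on style: three "tables" joined by ids
ARTICLES = { 1 => { title: "hello" } }
TAGGINGS = [{ taggable_id: 1, tag_id: 10 }, { taggable_id: 1, tag_id: 11 }]
TAGS     = { 10 => "TAG1", 11 => "TAG2" }

def tags_via_join(article_id)
  TAGGINGS.select { |t| t[:taggable_id] == article_id }
          .map    { |t| TAGS[t[:tag_id]] }
end

# postgres-array style: tags denormalized onto the row itself
ARTICLES_PA = { 1 => { title: "hello", tags: ["TAG1", "TAG2"] } }

def tags_via_array(article_id)
  ARTICLES_PA[article_id][:tags]
end

p tags_via_join(1)   # ["TAG1", "TAG2"] -- walks taggings, then tags
p tags_via_array(1)  # ["TAG1", "TAG2"] -- one lookup, no join
```

Against real postgres, you’d search the array column with the containment operator, something like ArticlePa.where("tags @> ARRAY[?]::varchar[]", "TAG1").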

However, it is important to note that this isn’t an apples-to-apples comparison. Acts-as-taggable-on provides a lot of functionality that simple arrays don’t, and the array approach locks you into postgres, which may or may not be a problem for you. Still, if your tagging needs really are simple, the performance improvements might be worth it.


Introducing Knoda – Predict. Compete. Conquer

As some readers might know, I’ve recently joined the team at Knoda. Based in Kansas City, Knoda provides people with a way to make their predictions, let their friends (and enemies) vote, and hold everyone accountable to the results. Sound like fun? Well, hurry over to the App Store and get it. Android users – the development team is feverishly working on your application, but in the meantime, you can reserve your username.

How it works

When I log into Knoda, I see a stream of users’ predictions.

2014-01-13 16.12.54

Looks like we’ve got a lot of interest in the KU basketball game tonight, and some predictions about Bitcoin. I don’t know who Fran Fraschilla is, but I think this prediction is wrong. Swipe from the right to disagree.

2014-01-13 16.13.13

Making a prediction of your own is easy as well. I’m going to predict that this blog post will get more than 50 views on Wednesday. I’ll keep voting open until 3pm CST, and I’ll declare the result Wednesday Morning.

2014-01-13 16.27.36

There are lots of other features to explore – searching, commenting, statistics – but I’ll let you download the app to learn about those. You can also follow our releases on the Knoda blog or on twitter @Knodafuture.

You can also expect this blog to feature a lot more entries on the technologies that we are using at Knoda – Rails 4, Postgres, Objective-C, and Java. If you love these technologies as much as the Knoda team does, check out our Coder Wall. As our user base grows, Knoda will be hiring additional software developers to help build out the world’s best social prediction platform.

Finally, if my description hasn’t painted a clear enough picture for you, learn all about Knoda from our co-founders, Kyle Rogers and James Flexman, presenting last week at 1 Million Cups Kansas City.

Reactive Manifesto – First Reactions

While visiting the Play Framework website, I noticed a new banner in the top right corner. Perhaps “new” isn’t correct – I’ve been almost exclusively in Node.js and Rails land for the past 6 months, so I might be behind the times on this one.

Screen Shot 2013-11-10 at 10.18.14 PM

Following the links takes us to The Reactive Manifesto. As of December 6, 2013, the manifesto has 3,098 signatures, and a quick Google search shows that the term is taking off pretty quickly. So, I decided to dive in for a quick read and see what changes, if any, it might suggest for my development style.

This architecture allows developers to build systems that are event-driven, scalable, resilient and responsive…
-The Reactive Manifesto

In General…

  • The manifesto seems to encourage some behaviors that I really like.  For example, event-driven systems are a big part of reactive applications.  As a concrete take-away, I certainly like the idea of using lightweight message queues as a way to decouple event publishers and subscribers.  And while we are at it, let’s stay away from clumsy, closed systems like JMS, and stick to open, Polyglot platforms like RabbitMQ and ZeroMQ.
  • Resiliency.  This is the part of the manifesto that I should probably spend the most time absorbing.  As much as I hate it, I’ve written Node.js apps in the past that would crash the whole server on uncaught errors.  The classic problem of Cascading Failure shouldn’t be nearly so prevalent.  Treating errors as “events” rather than failures makes your app way more resilient.
  • Big servers are for suckers.  I know a few Sys Admins still get their jollies from putting together big iron setups, but all the cool kids are scaling horizontally.  Multi-threaded applications, and the complex, proprietary technology behind them, are less attractive than single-threaded, event-driven, non-blocking processes.   The Reactive Manifesto, at some level, is a guide for writing distributed systems that are resilient in their software design, but also targeted at the physical resiliency of cloud computing.

With My Java Developer Hat…

  • If you like developing in the Play! Framework, you’ll already like a lot of what the manifesto has to say.  No surprise, due to the fact that Typesafe seems to be the driver behind the manifesto.  In any event, a few obvious features that Play! provides for Reactive Applications:
    • Play! (unless you opt out) runs on Netty.io, an asynchronous, event-driven network framework that supports non-blocking I/O.  The Reactive Manifesto (quite accurately) points out that event-driven applications remain loosely coupled and perform better than synchronous, blocking, multi-threaded apps.

With My Node.js Hat…

  • Node.js developers should sign the manifesto, and be proud of the common ground they can find with Java/Scala developers using Play!.  The event-driven, non-blocking approach fits the Node.js philosophy perfectly.  A couple of specific things that sync up well:
    • Node Streams implement a lot of functionality for fulfilling the “responsive” part of the reactive manifesto.  The current version of streams provides an intuitive interface, good backpressure controls, and resistance against overloads on traffic bursts.  If you are doing IO, and not using streams, you may be missing out on a big opportunity.
    • Callbacks, Queues, Streams, & EventEmitter all provide nice ways to keep your app asynchronous and message driven.

With My Rails Hat On…

  • This one is a little bit harder.  Don’t get me wrong, I’m 100% sure you can write reactive apps using Rails, I just don’t know that the framework encourages it as readily.
  • Unlike Play! or Node, the web server is a little more of a wild card when using Rails. Webrick, Unicorn, Thin, Puma, etc.  To my knowledge, the only one of those that supports non-blocking IO is Thin, but even that is subject to the blocking aspect of Rails.
  • EventMachine has some nice features for writing reactive, event-based ruby apps.
  • I think I’m going to have to stop there, and work on a whole separate post re: Reactive Applications in Rails.
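To make the “errors as events” idea from above concrete in Ruby, here’s a minimal event-emitter sketch (hypothetical names): subscribers register for an :error event, and a failure becomes a message to react to rather than a crash:

```ruby
# Sketch: a tiny event emitter decoupling publishers from subscribers,
# in the spirit of the manifesto's message-driven approach.
class Emitter
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }
  end

  # Register a handler for an event name.
  def on(event, &handler)
    @handlers[event] << handler
  end

  # Publish an event; the publisher doesn't know or care who listens.
  def emit(event, payload)
    @handlers[event].each { |h| h.call(payload) }
  end
end

app = Emitter.new
seen = []

# Treating errors as events keeps failures local instead of cascading:
app.on(:error) { |e| seen << "logged: #{e}" }
app.on(:error) { |e| seen << "alerted: #{e}" }

app.emit(:error, "upstream timeout")
p seen  # ["logged: upstream timeout", "alerted: upstream timeout"]
```

Swap the in-process emitter for a broker like RabbitMQ and the subscribers for separate processes, and you have the decoupled, resilient shape the manifesto is after.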

Yes, I signed.  While the manifesto smells too much like a marketing tool for Play!/Typesafe, I can’t hate on a good idea.  Read the manifesto, sign it if you like what you read, and leave me some comments if you think I’m way off base on this one.

Screen Shot 2013-11-10 at 11.35.08 PM

Benchmarks – Underscore.js vs Lodash.js vs Lazy.js

Update 10/10/2013 – A good point was made that doing the array creation isn’t really going to be different between the libraries. I’ve modified the find/map/lazy samples to reflect this, and updated the numbers appropriately.

Fast code is fun. And nothing is more fun than making your application faster by dropping in a new library, without spending time re-writing code or spending money on new hardware.

Luckily, there are two projects for your next node.js/web app that promise to do just this. lodash.js and lazy.js are both replacements for underscore.js, offering faster performance as well as some new features.

Lodash is fairly well known for its excellent compatibility with underscore.js. Lazy, on the other hand, should potentially offer even better performance, at the cost of implementing a slightly different API.

Underscore = require('underscore')
Lodash = require('lodash')
Lazy = require('lazy.js')
exports.compare = {
  "underscore" : function () {
    var array = Underscore.range(1000)
  },
  "lodash" : function () {
    var array = Lodash.range(1000)
  },
  "lazy" : function () {
    var array = Lazy.range(1000).toArray()
  }
};
require("bench").runMain()

Running this comparison shows lodash as the winner, underscore close behind, and lazy way behind. That said, this test is too trivial to really be interesting, and it doesn’t give lazy.js a fair chance to do any lazy evaluation, so let’s keep going.

  • lodash – 110.98 operations / ms
  • underscore – 103.60 operations / ms
  • lazy – 28.85 operations / ms

Next, let’s give each library a shot at find:

Underscore = require('underscore')
Lodash = require('lodash')
Lazy = require('lazy.js')
var array = Underscore.range(1000)
exports.compare = {
  "underscore" : function () {
    Underscore.find(array, function(item) {
      return item == 500;
    })    
  },
  "lodash" : function () {
    Lodash.find(array, function(item) {
      return item == 500;
    })
  },
  "lazy" : function () {
    Lazy(array).find(function(item) {
      return item == 500;
    })
  }
};
require("bench").runMain()

And the results

  • WINNER – lazy – 175.65 operations / ms
  • lodash – 168.47 operations / ms
  • underscore – 36.98 operations / ms

Lazy.js is the clear winner here. Let’s try another example to see if the outcome changes with even more processing.

Underscore = require('underscore')
Lodash = require('lodash')
Lazy = require('lazy.js')

square = function(x) { return x * x; }
inc = function(x) { return x + 1; }
isEven = function(x) { return x % 2 === 0; }
var array = Underscore.range(1000)

exports.compare = {
  "underscore" : function () {
    Underscore.chain(array).map(square).map(inc).filter(isEven).take(5).value()
  },
  "lodash" : function () {
    Lodash.chain(array).map(square).map(inc).filter(isEven).take(5).value()
  },
  "lazy" : function () {
    Lazy(array).map(square).map(inc).filter(isEven).take(5)
  }
};
require("bench").runMain()

And the results:

  • WINNER – lazy – 14375.12 operations / ms
  • lodash – 19.10 operations / ms
  • underscore – 7.17 operations / ms
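One caveat on that last number: the lazy.js snippet never calls toArray() or each(), and lazy sequences do no work until a terminal call forces them, so it largely measures building the pipeline rather than running it. The genuine win of laziness is short-circuiting: with take(5), only a handful of elements ever flow through the chain. The same effect, sketched with Ruby’s lazy enumerators:

```ruby
# The benchmark's pipeline, rebuilt with Ruby's stdlib lazy enumerators.
square  = ->(x) { x * x }
inc     = ->(x) { x + 1 }
is_even = ->(x) { x.even? }

# Eager: all 1000 elements pass through every stage before first(5).
eager = (0...1000).map(&square).map(&inc).select(&is_even).first(5)

# Lazy: elements flow one at a time and stop once 5 survivors are found.
lazy = (0...1000).lazy.map(&square).map(&inc).select(&is_even).first(5)

p eager  # [2, 10, 26, 50, 82]
p lazy   # same result, far fewer stage invocations
```

Only enough elements to produce five survivors are squared, incremented, and tested; the eager version processes all 1,000 at every stage.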

Full source code is available on github

Backbone.Validation with Chaplin and CoffeeScript

Any sizable web application needs validation. Doing it yourself is for the birds, so I wanted to incorporate a Backbone plugin to help solve the problem. For this example I chose Backbone.Validation.

Start with a basic skeleton. Brunch, an application assembler, is a great way to bootstrap these projects. I used Paul Miller’s brunch-with-chaplin skeleton.

brunch new gh:paulmillr/brunch-with-chaplin

To start up the server, type brunch watch --server and go to http://localhost:3333/ in a new browser window. If everything is good, you’ll see this:

Screen Shot 2013-09-06 at 3.25.22 PM

You’ll need a basic application to test out our concept, so we’ll modify the routes and the controller, and add a new view and template to our project.

The routes:

module.exports = (match) ->
  match '', 'home#index'
  match 'form', 'home#form'

The controller:

Controller = require 'controllers/base/controller'
HeaderView = require 'views/home/header-view'
FormView = require 'views/home/form'

module.exports = class HomeController extends Controller
  form: ->
    @view = new FormView region: 'main'

The view:

View = require 'views/base/view'
Form = require 'models/form'

module.exports = class FormView extends View
  autoRender: true
  className: 'form-view'
  template: require './templates/form'
  events:
    'click a.validateButton' : "validate"

  initialize: ->
    super
    @model = new Form()

  validate: (e) ->
    @model.validate()
    e.preventDefault()

And the template:

<form>
  <div>
    <label for="name">Name</label><input type="text" name="name" class="name" />
  </div>
  <div>
    <label for="phone">Phone</label><input type="text" name="phone" class="phone" />
  </div>
  <div>
    <label for="email">Email</label><input type="text" name="email" class="email" />
  </div>
  <a href="#" class="validateButton">Validate</a>
</form>

With that code in place, let’s do a quick checkpoint at http://localhost:3333/form. We should get an ugly view like this:

Screen Shot 2013-09-06 at 3.50.32 PM

So, we know we want a basic form that can save name, phone, and email. Following the guidelines in the validation docs (https://github.com/thedersen/backbone.validation), let’s add the rules to our model.

BaseModel = require 'models/base/model'

module.exports = class Form extends BaseModel
  validation :
    name:
      required: true
    email:
      required: true
      pattern: "email"

We’ll also need to add the Backbone.Validation script to our vendor/scripts folder.

In a perfect world, the @model.validate() call would execute our validation rules. In this world, however, we get a JavaScript error:

Uncaught TypeError: Object # has no method ‘validate’

There is one final step. We need to bind our model to the validation, so add the call in the attach method of our view:

  attach: ->
    super
    Backbone.Validation.bind(@);

That’s it! Full source code for the example is available on GitHub.
