
First Impressions of Atom – By Adam

For the past 3 years, I’ve worked primarily in Sublime Text, and it’s a fantastic application. I use it for Ruby, HTML, CSS, JavaScript, CoffeeScript, and more. Today, GitHub open sourced their in-house developed text editor, Atom. A quick day working with Atom reveals that it may be a worthy replacement.

The first thing I notice is how familiar it feels to a Sublime user. Opening the folder where I keep my Knoda projects gives me a simple directory tree to navigate.

[Screenshot: Atom’s directory tree view]

Next, I can use all of my familiar keyboard shortcuts. Press cmd-p in Atom and start typing.

[Screenshot: the cmd-p fuzzy file finder]

Awesome (though, according to the Atom docs, I should be using cmd-t). Syntax coloring for Ruby files is excellent out of the box, and the same goes for LESS, CoffeeScript, and HTML files. On my MacBook Air, however, the font size is huge. Almost comically huge.

How easy is it to fix that? Super easy, apparently. cmd-, opens the config. The font size is set to 16; change it to 12, and things are a little more manageable.
[Screenshot: Atom settings view with the font size changed]

Configuration, key bindings, themes, and packages are all easily accessible in the cmd-, interface. This stands out for me as a superior out-of-the-box experience compared to Sublime Text. Incredibly intuitive.

Overall, it’s a great piece of software. Give it a download, and comment with your own review.


Introducing Knoda – Predict. Compete. Conquer

As some readers might know, I’ve recently joined the team at Knoda. Based in Kansas City, Knoda provides people with a way to make their predictions, let their friends (and enemies) vote, and hold everyone accountable to the results. Sound like fun? Well, hurry over to the App Store and get it. Android users – the development team is feverishly working on your application, but in the meantime, you can reserve your username.

How it works

When I log into Knoda, I see a stream of users’ predictions.

[Screenshot: the Knoda prediction stream]

Looks like we’ve got a lot of interest in the KU basketball game tonight, and some predictions about Bitcoin. I don’t know who Fran Fraschilla is, but I think this prediction is wrong. Swipe from the right to disagree.

[Screenshot: swiping from the right to disagree with a prediction]

Making a prediction of your own is easy as well. I’m going to predict that this blog post will get more than 50 views on Wednesday. I’ll keep voting open until 3pm CST, and I’ll declare the result Wednesday morning.

[Screenshot: creating a new prediction]

There are lots of other features to explore – searching, commenting, statistics – but I’ll let you download the app to learn about those. You can also follow our releases on the Knoda blog or on Twitter at @Knodafuture.

You can also expect this blog to feature a lot more entries on the technologies that we are using at Knoda – Rails 4, Postgres, Objective-C, and Java. If you love these technologies as much as the Knoda team does, check out our Coder Wall. As our user base grows, Knoda will be hiring additional software developers to help build out the world’s best social prediction platform.

Finally, if my description hasn’t painted a clear enough picture for you, learn all about Knoda from our co-founders, Kyle Rogers and James Flexman, presenting last week at 1 Million Cups Kansas City.

Benchmarks – Underscore.js vs Lodash.js vs Lazy.js

Update 10/10/2013 – A good point was made that doing the array creation isn’t really going to be different between the libraries. I’ve modified the find/map/lazy samples to reflect this, and updated the numbers appropriately.

Fast code is fun. And nothing is more fun than making your application faster by dropping in a new library, without spending time re-writing code or spending money on new hardware.

Luckily, there are two projects for your next node.js/web app that promise to do just this. lodash.js and lazy.js are both replacements for underscore.js, offering faster performance as well as some new features.

Lodash is fairly well known for its excellent compatibility with underscore.js. Lazy, on the other hand, can potentially offer even better performance, at the cost of a slightly different API.
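To make the API difference concrete, here is a minimal sketch of my own (not taken from the benchmarks below): underscore and lodash take the collection as the first argument and evaluate eagerly, while lazy.js wraps the collection and only produces a concrete array when you ask for one.

var Underscore = require('underscore');
var Lodash = require('lodash');
var Lazy = require('lazy.js');

var numbers = [1, 2, 3, 4, 5];
var double = function (x) { return x * 2; };

// underscore and lodash: collection-first, eager evaluation
var a = Underscore.map(numbers, double);     // [2, 4, 6, 8, 10]
var b = Lodash.map(numbers, double);         // [2, 4, 6, 8, 10]

// lazy.js: wrap the collection, then realize the result explicitly
var c = Lazy(numbers).map(double).toArray(); // [2, 4, 6, 8, 10]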

var Underscore = require('underscore')
var Lodash = require('lodash')
var Lazy = require('lazy.js')

// each library builds a 1000-element range; lazy must call toArray() to realize it
exports.compare = {
  "underscore" : function () {
    var array = Underscore.range(1000)
  },
  "lodash" : function () {
    var array = Lodash.range(1000)
  },
  "lazy" : function () {
    var array = Lazy.range(1000).toArray()
  }
};
require("bench").runMain()

Running this comparison shows lodash as the winner, with underscore close behind and lazy far behind. That said, this test is too trivial to be really interesting, and it doesn’t give lazy.js a fair chance to do any lazy evaluation, so let’s keep going.

  • lodash – 110.98 operations / ms
  • underscore – 103.60 operations / ms
  • lazy – 28.85 operations / ms

var Underscore = require('underscore')
var Lodash = require('lodash')
var Lazy = require('lazy.js')

// build the test data once, outside of the benchmarked functions
var array = Underscore.range(1000)

exports.compare = {
  "underscore" : function () {
    Underscore.find(array, function(item) {
      return item == 500;
    })
  },
  "lodash" : function () {
    Lodash.find(array, function(item) {
      return item == 500;
    })
  },
  "lazy" : function () {
    Lazy(array).find(function(item) {
      return item == 500;
    })
  }
};
require("bench").runMain()

And the results:

  • WINNER – lazy – 175.65 operations / ms
  • lodash – 168.47 operations / ms
  • underscore – 36.98 operations / ms

Lazy.js is the clear winner here. Let’s try another example to see if the standings change with even more processing.

var Underscore = require('underscore')
var Lodash = require('lodash')
var Lazy = require('lazy.js')

// helper functions for the chained operations
var square = function(x) { return x * x; }
var inc = function(x) { return x + 1; }
var isEven = function(x) { return x % 2 === 0; }

var array = Underscore.range(1000)

exports.compare = {
  "underscore" : function () {
    Underscore.chain(array).map(square).map(inc).filter(isEven).take(5).value()
  },
  "lodash" : function () {
    Lodash.chain(array).map(square).map(inc).filter(isEven).take(5).value()
  },
  "lazy" : function () {
    Lazy(array).map(square).map(inc).filter(isEven).take(5)
  }
};
require("bench").runMain()

  • WINNER – lazy – 14375.12 operations / ms
  • lodash – 19.10 operations / ms
  • underscore – 7.17 operations / ms
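One caveat worth noting (my own aside, not part of the original benchmark): a Lazy sequence does no work until it is consumed, so in real code you would realize the chained result with something like toArray() or each(). A rough sketch:

var Lazy = require('lazy.js');

var square = function (x) { return x * x; };
var inc = function (x) { return x + 1; };
var isEven = function (x) { return x % 2 === 0; };

// building the sequence does no work yet – it is only a description of the pipeline
var sequence = Lazy.range(1000).map(square).map(inc).filter(isEven).take(5);

// evaluation happens here, and only for the elements actually needed
console.log(sequence.toArray()); // [2, 10, 26, 50, 82]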

Full source code is available on GitHub.

Heroku vs NodeJitsu vs Appfog

For the next few months, I’ll be working with the team at LocalRuckus, building a new Node.js API and application. As a small shop with no dedicated sys admin or dev ops, it’s essential that we find Node.js hosting that is flexible, fast, and cost-effective. I’ve been considering three major players in the Node.js hosting scene: Heroku, Nodejitsu, and Appfog. There are some good comparisons out there (I especially like Daniel Saewitz’s article), but I wanted to share my two cents.

Value for Development

Heroku provides a great feature for development/sandbox apps – your first dyno is FREE. Combine this with the starter Postgres package, and you can have a development version of your app up and running for $0/month.

Nodejitsu does not offer a free tier, so you are on the hook for paying for pet projects, etc.  That said, their pricing starts at $9/mo for a micro package, and scales up pretty gently from there.

Appfog provides a pretty great package for trying out an app.  You can provision your database, caching server, queue server, and application servers in a few clicks, all managed from one central dashboard.

Winner:  Appfog

Value for Production

Heroku pricing scales linearly with your traffic. Using a simple slider, you can add new dynos to your application. Each new dyno runs $35/mo; however, there is no commitment – you can scale up for brief spikes, and scale down if traffic subsides.
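For example (my own illustration, assuming the Heroku Toolbelt is installed and the app uses a standard web process), scaling up and back down is a one-line command each way:

# add a dyno ahead of a traffic spike
heroku ps:scale web=2

# scale back down once traffic subsides
heroku ps:scale web=1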

Nodejitsu and Appfog, on the other hand, have fixed monthly prices.

Nodejitsu prices based on drones, which seem to offer 256MB of RAM and processing power roughly equivalent to half a Heroku dyno.

Appfog prices based on RAM, which creates a bit of a problem.  While 2GB of memory can be had for $20/mo, moving up to 4GB is a rather steep $100/mo.

Winner: Heroku

Deployment

Heroku – Deploy to a git repository

Appfog – Use the downloadable af tool to push updates

NodeJitsu – Use the jitsu tool or git integration.
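As a rough sketch of what a deploy looks like on each platform (my own examples, with myapp as a placeholder app name):

# Heroku: push the repository to your heroku git remote
git push heroku master

# Appfog: push changes with the af tool
af update myapp

# Nodejitsu: deploy with the jitsu tool
jitsu deploy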

Winner: Nodejitsu

Database

Heroku leverages its expertise in Postgres, providing plans that scale with your application, including free database tiers for getting started. Production databases support 1TB of data, starting at $50/month. If you prefer another database platform, Add-Ons are available for Redis, Mongo, CouchDB, ClearDB, JustOneDB, and Amazon RDS.

Nodejitsu continues to take a fairly minimalist approach, with no built-in database. They provide Add-Ons for Mongo, Redis, and CouchDB.

Appfog allows you to use your service instances to host Redis, Mongo, Postgres, or MySQL databases. They also provide add-ons for Mongo and ClearDB. The main knock here is that your database shares processing and memory quotas with your other services, and I’m skeptical that such an approach could support a high-traffic app.

Winner: Heroku. Production quality Postgres, with great Add-Ons for a variety of other databases.

Other Languages

  • Heroku – Ruby, Java, Python, Clojure, Scala, Play, PHP
  • Nodejitsu – None
  • Appfog – Ruby, Java, Python, PHP

Tie: Appfog & Heroku. Appfog’s PHP support opens a lot of opportunities (such as hosting WordPress), however, Appfog seems to have trouble keeping their runtimes up to date. As of July 2013, Node.js was only up to version 8. Heroku provides good language options, and a serious commitment to keeping the runtimes up to date.

Other Considerations and Final Thoughts

An important consideration for many node apps is web socket support.  Nodejitsu has it, the others don’t.  If you need this feature, your choice is clear.  At this point, Heroku’s flexibility, large community, and great add-ons make it my go-to for applications, however, I think Appfog has put together a great offering, and I’m looking forward to using it more in the future.

Introducing Borderizer – Helping Travelers Move Faster

On a recent trip to San Diego, my wife and I crossed the U.S./Mexico border at San Ysidro to visit the lovely city of Tijuana. A 5 minute walk across the border was all it took to enter Mexico.

After a few hours of touring the Mexican shops (and the accompanying drug offers), we were ready to return to the United States. However, returning to the U.S. is not nearly as easy as leaving it. The customs wait at the border checkpoint was over 2 hours long. A miserable way to spend 2 hours of my vacation, and a problem that I won’t tolerate any longer.

Enter Borderizer…

Borderizer provides you with up-to-the-minute stats on how long the wait is at any border crossing in Mexico or Canada. Equipped with this information, you can plan your trip better. If the wait at San Ysidro is 2 hours, just relax, have a bite to eat at a local taqueria, and plan your customs wait for late afternoon, when the crowds have died down.

Available for iPhone in the App Store

[Screenshots: Borderizer wait times on iPhone]

Available for Android on Google Play

[Screenshots: Borderizer wait times on Android]

Best of all, Borderizer is FREE. Yep, $0.00. So go ahead and download it today. We’ll be rolling out new features in the upcoming months, including Spanish language support, suggestions on the best time to cross, and maps of the border crossings. Let us know if you have any suggestions for features you’d like to have in Borderizer – we’d love to make it more useful for everyone.

Clustering Web Sockets with Socket.IO and Express 3

Node.js gets a lot of well-deserved press for its impressive performance. The event loop can handle substantial loads with a single process. However, most servers have multiple processors, and I, for one, would like to take advantage of them. Node’s cluster API can help.

While cluster is a core API in node.js, I’d like to incorporate it with Express 3 and Socket.io.

Final source code is available on GitHub.

The node cluster docs give us the following example.

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  http.createServer((req, res) ->
    res.writeHead 200
    res.end "hello world\n"
  ).listen 8000

The code compiles and runs, but I have no confirmation that things are actually working. I’d like to add a little logging to confirm that we actually have multiple workers going. Let’s add these lines right before the ‘exit’ listener.

  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.id

On my machine, we get this output:
coffee server
forked worker 1
forked worker 2
forked worker 3
forked worker 4
forked worker 5
forked worker 6
forked worker 7
forked worker 8

So far, so good. Let’s add Express to the mix.

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.process.pid
  cluster.on "listening", (worker, address) ->
    console.log "worker " + worker.process.pid + " is now connected to " + address.address + ":" + address.port
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  app = require("express")()
  server = require("http").createServer(app)
  server.listen 8000
  app.get "/", (req, res) ->
    console.log 'request handled by worker with pid ' + process.pid
    res.writeHead 200
    res.end "hello world\n"

At this point, I’d like to throw a few http requests against the setup to ensure that I’m really utilizing all my processors.
Running (curl -XGET "http://localhost:8000") 6 times makes the node process go:

request handled by worker with pid 85229
request handled by worker with pid 85231
request handled by worker with pid 85231
request handled by worker with pid 85231
request handled by worker with pid 85227
request handled by worker with pid 85229

Alright, the last step is getting socket.io involved. Just a couple of extra lines for the socket; however, we’ll need to add a basic index.html file to actually make the socket calls.

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.process.pid
  cluster.on "listening", (worker, address) ->
    console.log "worker " + worker.process.pid + " is now connected to " + address.address + ":" + address.port
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  app = require("express")()
  server = require("http").createServer(app)
  io = require("socket.io").listen(server)
  server.listen 8000
  app.get "/", (req, res) ->
    res.sendfile(__dirname + '/index.html')
  io.sockets.on "connection", (socket) ->
    console.log 'socket call handled by worker with pid ' + process.pid
    socket.emit "news",
      hello: "world"

<script class="hiddenSpellError" type="text/javascript">// <![CDATA[
src</span>="/socket.io/socket.io.js">
// ]]></script><script type="text/javascript">// <![CDATA[

// ]]></script>
 var socket = io.connect('http://localhost');
 socket.on('news', function (data) {
 console.log(data);
 socket.emit('my other event', { my: 'data' });
 });
// ]]></script>

When I run this code, problems start to appear. Specifically, the following message shows up in my output:

warn - client not handshaken client should reconnect

Not surprisingly, we have issues with sockets appearing disconnected. Socket.io defaults to storing its open sockets in an in-memory store. As a result, sockets handled by other processes have no access to that information. We can easily fix the problem by using the Redis store for socket.io. The docs we need are here.

With the redis store in place, it looks like this:

cluster = require("cluster")
http = require("http")
numCPUs = require("os").cpus().length
RedisStore = require("socket.io/lib/stores/redis")
redis = require("socket.io/node_modules/redis")
pub = redis.createClient()
sub = redis.createClient()
client = redis.createClient()
if cluster.isMaster
  i = 0
  while i < numCPUs
    cluster.fork()
    i++
  cluster.on 'fork', (worker) ->
    console.log 'forked worker ' + worker.process.pid
  cluster.on "listening", (worker, address) ->
    console.log "worker " + worker.process.pid + " is now connected to " + address.address + ":" + address.port
  cluster.on "exit", (worker, code, signal) ->
    console.log "worker " + worker.process.pid + " died"
else
  app = require("express")()
  server = require("http").createServer(app)
  io = require("socket.io").listen(server)
  io.set "store", new RedisStore(
    redisPub: pub
    redisSub: sub
    redisClient: client
  )
  server.listen 8000
  app.get "/", (req, res) ->
    res.sendfile(__dirname + '/index.html')
  io.sockets.on "connection", (socket) ->
    console.log 'socket call handled by worker with pid ' + process.pid
    socket.emit "news",
      hello: "world"

Code School’s “Try R”

I feel like 2013 holds a lot of data analysis for me, so I’d like to start the year off by learning a language that excels at statistical analysis and visualization. Enter R, a language that has gotten quite popular over the past few years. In the interest of expanding my horizons, I decided to learn it using Code School’s Try R course. Code School courses can be a little simplistic if you have programming experience, but since I’ve never looked at R, it seems appropriate.

Lesson One:  Using R

The first lesson covers basic variable assignment, functions, and expressions in the REPL environment.  Pretty simple for anyone with a programming background, but it does introduce the somewhat unusual assignment operator in R:

x <- 42

This is going to prove a little confusing for me, as I’ve recently been using CoffeeScript a lot, with its -> operator for defining functions. I keep swapping the two operators during the lesson.

Lesson Two:  Vectors

Now we are getting somewhere. In order for me to do any statistical analysis, I’m going to need some data structures. Vectors are the fundamental one-dimensional list in R. Code School does an excellent job in this lesson of moving into data visualization early and seamlessly.

> vesselsSunk <- c(4, 5, 1)
> barplot(vesselsSunk)

To be honest, I’ve never used a language before with a barplot function in the core language. At this point in the lesson, I’m pretty excited to keep going. Lesson 2 covers vector math and plotting.

Lesson Three: Matrices

Moving on to two-dimensional data sets. I can almost feel the correlation coefficients and multiple regressions in my near future.

It turns out that this is kind of an odd chapter.  We look at basic matrix construction and manipulation.

# Construct a matrix
> elevation <- matrix(0, 3, 4)
> elevation
     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0

# Edit a value
> elevation[2,2] <- 1

I suppose the lesson is successful in showing how to use matrices, but I don’t feel that it imparts much insight into the language. Similarly, we are introduced to the contour, persp, and image functions; however, they remain fairly magical at the end of the lesson.

Lesson Four: Summary Statistics

Mean, median, standard deviation. This one took about 2 minutes to complete, but it’s obviously very important if you’ve never taken statistics.

Lessons Five &amp; Six: Factors &amp; Data Frames

R’s Factors and Data Frames provide nice ways to group and categorize data. Once you understand factors, you can group a set of users by age or other distinguishing characteristics. I really enjoyed these lessons – very practical.

Lesson Seven: Real-World Data

A great finish to the lessons – a bit of analysis on real-world software piracy data. We finally got an example of data correlation using R, which I’m pretty excited to use on some data sets I’m looking at.

Conclusion

I’d recommend the Try R course to any developer who is interested in data analysis and visualization.    I’d REALLY recommend the course for anyone with an interest in statistics and data analysis who doesn’t know anything about programming.  It really is that easy.  Good work Code School.
