Monday, August 26, 2013

Tanzania, Africa - In 1.75 Million Particles

Source: github.com/Zolmeister/tanzania (page: tanzania.zolmeister.com/full)
Note: Requires WebGL

Alright, let me explain what you're looking at. If you fullscreen the app and have a 1080p monitor (1920x1080), then 1.75 million particles (1,753,920) will be generated and animated in real-time per image. Handling 1.75 million particles is no walk in the park, and without a decently fast GPU it may be quite slow. While it may not seem that complicated (as I thought at first), there is in fact a ridiculous amount of intense code involved. Let's begin!

So, I just came back from a vacation in Tanzania, Africa where I summited Mt. Kilimanjaro (the highest free-standing mountain in the world - 5,895 meters), went on safari, and spent a week at the beach in Zanzibar. During my trip, I took ~2k photos, of which I selected ~150 (plus a few from friends I went with). I take pride in my photos, and I wanted to showcase them in a special way, so I decided to make a particle-based image viewer.

My first attempt was to use the HTML5 canvas object. This failed quickly and spectacularly, taking about a minute to render a single frame. This is because I was generating 1.75mil objects, each with an x, y, and color attribute, and then trying to update all of those objects' x,y coordinates every frame.

For those that don't know, you get 16ms to do math every frame. This is the magic number which equates to a 60fps video stream, and it also happens to align with most monitors' refresh rates (60Hz), so even if you went over 60fps it wouldn't render any faster.

Let me put it this way: you can't even increment a number to 1.75mil in 16ms, and that's without doing any other operations (on my CPU it took 1162ms). So the question is, how do you animate (change the x,y coordinates of) 1.75mil particles? The GPU. GPUs, unlike CPUs, are really good at doing lots of small calculations in parallel. For example, your CPU probably has 4 cores, but your GPU may have 32, 64, or even 128 cores.
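
Roughly, that timing test looks something like this sketch (exact numbers will vary wildly by machine and javascript engine):

console.time('count')
var count = 0
for (var i = 0; i < 1750000; i++) {
  count++
}
console.timeEnd('count')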

In order to take advantage of the GPU's ability to do massively parallel computing, we are going to need to use WebGL, and specifically GLSL (the OpenGL Shading Language). I opted to use the great Three.js WebGL framework. Three.js has quite poor documentation, but it has a great community and lots of examples to learn from. These examples were especially helpful (yes, even if it meant manually inspecting the uncommented, unintuitive source).


I also learned a lot from these amazing tutorials: aerotwist.com/tutorials

Quick digression: here's how I took my ~2k photos and picked a few.
Python script to save selected images while viewing them as a slide show
Image resize + crop bash script:
for file in *.jpg; do
  convert -size 1624x1080 "$file" -resize '1624x1080^' -crop '1624x1080+0+0' -quality 90 +profile '*' "$file"
done

Thumbnail bash script:
for file in *.jpg; do
  convert "$file" -resize 120x -quality 90 +profile '*' "thumb/$file"
done

File rename bash script:
for file in *.jpg; do
  mv "$file" "$file.tmp.jpg"
done
x=0
for file in *.jpg; do
  ((x++))
  mv "$file" "$x.jpg"
done

Alright, back to javascript. Let me explain a bit about how the GPU and WebGL work. You give WebGL a list of vertices (objects with x, y, z coordinates) that make up a 3D object mesh, and you give it a texture to be mapped onto those vertices, which gives you a colorful 3D object. With a ParticleSystem, you create a mesh (vertices) and give it a texture that is applied per vertex. This means you cannot add or remove particles easily (though you can hide them from view), so you need to create a mesh with as many vertices as you will need particles.

So these two things, the vertices and the texture, get passed into the Vertex Shader and the Fragment Shader. First, everything goes through the Vertex Shader. The Vertex Shader may alter the positions of vertices and move things around (this is important, as it lets us do animation on the GPU). Then the Fragment Shader takes over and applies color to all the parts of the object, using the vertices as guides for how to color things (for shadows, for example). Here is a great tutorial.

Shaders are coded in a language called GLSL. Yeah, that already sounds scary. But it's not too bad once you spend a few hours banging your head against a wall. Here is what my (shortened) vertex shader looks like:
<script id='vertexshader' type='x-shader/x-vertex'>
  varying vec3 vColor;
  uniform float amplitude;
  uniform int transition;
  uniform int fullscreen;
  
  void main() {
    
    vColor = color;
    vec3 newPosition = position;
    newPosition = newPosition * (amplitude + 1.0) + amplitude * newPosition;
    vec4 mvPosition = modelViewMatrix * vec4( newPosition, 1.0 );
    gl_PointSize = 1.0;
    gl_Position = projectionMatrix * mvPosition;
  }
</script>

And here is what my fragment shader looks like:

<script id='fragmentshader' type='x-shader/x-fragment'>
  varying vec3 vColor;
  
  void main() {
    gl_FragColor = vec4(vColor, 1.0);
  }
</script>


Easy. GLSL has 3 main variable qualifiers for getting data between javascript and the GPU.
  • varying - passed from the vertex shader to the fragment shader (used to pass attributes)
  • attribute - passed from JS to vertex shader only (per-vertex values)
  • uniform - global constant set by JS
Sometimes you can get away with doing a particle system by updating an 'attribute' variable for each particle; however, that isn't feasible for us.

The way I will do animation (there are many ways), is to update a uniform (global constant) with a number from 0 to 1 depending on the frame I'm at. I will use the 'sine' function to do this, which will give me a smooth sequence from 0 to 1 and then back to 0 again.

// update the frame counter
frame += Math.PI * 0.02;

// update the amplitude based on the frame
uniforms.amplitude.value = Math.abs(Math.sin(frame))
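
For context, this counter update sits inside the render loop. A minimal sketch (assuming the usual Three.js renderer/scene/camera setup, which isn't shown here):

var frame = 0
function animate() {
  requestAnimationFrame(animate)

  // advance the animation and push the new amplitude to the GPU
  frame += Math.PI * 0.02
  uniforms.amplitude.value = Math.abs(Math.sin(frame))

  renderer.render(scene, camera)
}
animate()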

Now, all I need to do to get cool animations is have each particle move with respect to the amplitude from its original location. Here is the second effect in the app (in the vertex shader):
newPosition = newPosition * (amplitude + 1.0) + amplitude * newPosition / 1.0;
newPosition = newPosition * (amplitude * abs(sin(distance(newPosition, vec3(0.0,0.0,0.0)) / 100.0))+ 1.0);

Now, as far as the fragment shader goes, you can see I am setting the color from an attribute provided by Three.js (`color`), which carries the per-pixel color of the image to each vertex. This is (I think) quite inefficient (but fast enough for me), and the optimal way (I think) is to pass the image directly as a texture2D variable and let the fragment shader sample it to determine its color (nucleal.com/ does that, I think). However, I couldn't figure out how to do this.
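
For what it's worth, the texture-based approach would roughly mean uploading the photo as a sampler uniform and having the fragment shader read it with texture2D instead of using vColor. The JS side might look like this sketch (using the 2013-era THREE.ImageUtils.loadTexture API; this is not what the app actually does):

// Sketch only: hand the photo to the shaders as a texture uniform
uniforms.uTexture = {
  type: 't', // texture
  value: THREE.ImageUtils.loadTexture('imgs/1.jpg')
}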

Here is the Three.js code for using a custom vertex and fragment shader:

    uniforms = {
      // This dictates the point of the animation sequence
      amplitude: {
        type: 'f', // float
        value: 0
      },
      // Dictates the transition animation. Ranges from 0-9
      transition: {
        type: 'i', // int
        value: 0
      }
    }

    // Load custom shaders
    var vShader = $('#vertexshader')
    var fShader = $('#fragmentshader')

    material = new THREE.ShaderMaterial({
      uniforms: uniforms,
      vertexShader: vShader.text(),
      fragmentShader: fShader.text(),
      vertexColors: true,
      depthTest: false,
      transparent: true
    })

    particles = new THREE.ParticleSystem(geometry, material)
    scene.add(particles)

Now, we're missing the `geometry` part of the particle system, as well as a way to display the particles with a 1:1 pixel ratio to the screen. The latter is solved by some field-of-view voodoo code:

camera = new THREE.PerspectiveCamera(90, width / height, 1, 100000)
// position camera for a 1:1 pixel unit ratio
var vFOV = camera.fov * (Math.PI / 180) // convert VERTICAL fov to radians
var targetZ = height / (2 * Math.tan(vFOV / 2))
    
// add 2 to fix rendering artifacts
camera.position.z = targetZ + 2
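
To make the voodoo concrete, here is the math worked through for the 90° FOV and a 1080px-tall viewport:

// height = 1080, fov = 90 degrees
// vFOV    = 90 * (PI / 180)          = PI / 2
// targetZ = 1080 / (2 * tan(PI / 4)) = 1080 / 2 = 540
// camera.position.z = 540 + 2 = 542  -> one world unit spans one screen pixel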

And the former is solved with a giant for-loop (don't actually do this):
geometry = new THREE.Geometry()
var vertices = []
for (var i = 0; i < 1750000; i++) {
  // one new object allocated per particle - this is the problem
  vertices.push(new THREE.Vector3(i % 1624, Math.floor(i / 1624), 0))
}
geometry.vertices = vertices

There are many things wrong with the code above, but the most important part is to notice that it's pushing a NEW object every iteration, and this happens over 1.7 million times. This loop is extremely slow, takes up hundreds of megabytes of memory, and breaks the garbage collector (more on that later). Not to mention that you can't have a loading bar, because it blocks the UI thread. Oh yeah, don't forget about colors! (This code is also ridiculously slow)
geometry = new THREE.Geometry()
var colors = []
// imgData holds RGBA values (4 bytes per pixel), scaled from 0-255 down to 0-1
for (var i = 0; i < 1750000 * 4; i += 4) {
  var col = new THREE.Color()
  col.setRGB(imgData[i] / 255, imgData[i + 1] / 255, imgData[i + 2] / 255)
  colors.push(col)
}
geometry.colors = colors

The first thing that came to mind that would fix both problems at once (sort of) was to use Web Workers. However, when passing data to and from web workers it normally gets copied (structured cloning), which is horribly slow for this much data and takes forever (but there's another way - more on that later).

We can speed things up by using plain objects instead of Three.js objects (they seemed to work the same):
var col = {r: 1, g: 2, b:3}
This is considerably faster, but still not fast enough. Well, it was fast enough for me for a bit, but then the app started randomly crashing. Turns out, I was using over 1GB of memory. No, I wasn't leaking memory; the problem was that the Garbage Collector couldn't cope with that many objects and ended up crashing the app.

Here is a great video on memory management in the browser from Google I/O, and I'll explain a little bit about GC. Javascript is a GC'd (Garbage Collected) language, which means you don't have to de-allocate memory yourself. Instead you get 2 memory chunks: the first is short-term memory, and the second is long-term memory. Short-term memory is collected often, and objects that survive long enough (aren't collected) move into long-term memory. In order to determine what memory can be collected safely, the garbage collector goes through every object and checks which other objects can be reached from it (an object graph if you will, like a tree with nodes). So when the GC runs on our app with 3 million+ objects, it crashes.

Finally, after much headache, user `bai` on #three.js (freenode IRC) saved the day with the suggestion to use BufferGeometry in Three.js instead of plain Geometry. BufferGeometry is exactly what I needed, as it exposes the raw typed javascript arrays used by WebGL, which means a huge performance increase and drastically reduced memory usage. I'm talking about going from 1GB of memory to 16MB.
// BufferGeometry is MUCH more memory efficient
geometry = new THREE.BufferGeometry()
geometry.dynamic = true
var imgWidth = imgSize.width
var imgHeight = imgSize.height
var num = imgWidth * imgHeight * 3
geometry.attributes.position = {
  itemSize: 3,
  array: new Float32Array(<this comes from web workers>),
  numItems: num
}
geometry.attributes.color = {
  itemSize: 3,
  array: new Float32Array(num),
  numItems: num
}

//update colors
geometry.attributes.color.needsUpdate = true
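
For completeness, here is roughly how the position array can be filled so that each particle lines up with one image pixel (a sketch; the app actually gets this array from a web worker, and the exact centering/orientation here is a guess):

// one particle per pixel, centered on the origin so it lines up
// with the 1:1 pixel camera setup from earlier
var positions = new Float32Array(imgWidth * imgHeight * 3)
for (var y = 0; y < imgHeight; y++) {
  for (var x = 0; x < imgWidth; x++) {
    var i = (y * imgWidth + x) * 3
    positions[i]     = x - imgWidth / 2  // x
    positions[i + 1] = imgHeight / 2 - y // y (flipped so the image isn't upside down)
    positions[i + 2] = 0                 // z
  }
}
geometry.attributes.position.array = positions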

Ok. At this point, I had a reasonably fast application, but it still had a few rough edges. Specifically, the UI thread still blocked when loading a new image (reading the image pixel by pixel and updating the geometry.attributes.color array - 1.75mil pixels). See, Javascript is single threaded, so when you have a long-running loop, the whole webpage freezes up. This is where Web Workers come in (I told you they would be back).

Web workers spawn a new OS thread, which means no UI blocking. However, we still have the issue of having to send data via object cloning. Turns out that there are special objects which are transferable (specifically ArrayBuffer objects), which are passed by reference (vs. copied); the catch is that they become unusable in the original thread. E.g. I pass an array to a worker, and I can no longer read it from my main thread, but I can in the worker thread. In order to understand how to take advantage of this, we need to understand javascript typed arrays.
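
First though, a tiny illustration of that hand-off (a sketch, not code from the app; assume `worker` is an existing Web Worker):

var buf = new Float32Array(1000).buffer
console.log(buf.byteLength) // 4000
worker.postMessage({ data: buf }, [buf]) // second argument lists the transferables
console.log(buf.byteLength) // 0 - the main thread can no longer use the buffer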

Typed arrays in javascript consist of an ArrayBuffer object (raw bytes you can't touch directly) and a view, which lets you read and write that data behind the scenes. For example, if I have an ArrayBuffer, I can cheaply create a view on top of it with new Uint8ClampedArray(buffer), which lets me read and write data to the buffer. Now, let's look at how I pass the ArrayBuffers back and forth between the web worker thread and the main thread to offload heavy work.
(web worker code)
<script id='imageworker' type='text/js-worker'>
onmessage = function (e) {
  var pixels = new Uint8ClampedArray(e.data.pixels)
  var arr = new Float32Array(e.data.outputBuffer)
  var i = e.data.len / 4 // len is the RGBA byte count, so divide by 4 to get the pixel count
  
  while (--i >= 0) {
    arr[i*3] = pixels[i*4] / 255
    arr[i*3 + 1] = pixels[i*4+1] / 255
    arr[i*3 + 2] = pixels[i*4+2] / 255
  }
  
  postMessage(arr.buffer, [arr.buffer])
  return close()

};
</script>

(main thread code)

function loadImage(num, buffer, cb) {
  var img = new Image()
  img.onload = function () {
    var c = document.createElement('canvas')
    imgWidth = c.width = img.width
    imgHeight = c.height = img.height

    var ctx = c.getContext('2d')
    ctx.drawImage(img, 0, 0, imgWidth, imgHeight)
    var pixels = ctx.getImageData(0, 0, c.width, c.height).data
    var blob = new Blob([$('#imageworker').text()], {
      type: "text/javascript"
    });
    var worker = new Worker(window.URL.createObjectURL(blob))
    console.time('imageLoad')

    worker.onmessage = function (e) {
      console.timeEnd('imageLoad')

      workerBuffer = geometry.attributes.color.array
      geometry.attributes.color.array = new Float32Array(e.data)
      cb && cb()
    };

    worker.postMessage({
      pixels: pixels.buffer,
      outputBuffer: workerBuffer.buffer,
      len: pixels.length
    }, [pixels.buffer, workerBuffer.buffer])

  }
  img.src = 'imgs/' + num + '.jpg'
}

(Bonus: console.time(), console.timeEnd() - useful timing shortcuts)

And that's it! It wasn't easy, but I definitely learned a lot.

Tuesday, July 23, 2013

Promiz.js


Promiz.js is a promises/A+ compliant library (mostly), which aims to have both a small footprint and great performance (< 1Kb (625 bytes minified + gzip)). I won't go over why javascript promises are amazing. Instead, I'm going to focus on what goes on behind the scenes and what it takes to create a promise library. But first, some benchmarks (see bench.js for source - server side):
Benchmarks are obviously just that, benchmarks, and do not necessarily reflect real-world application usage. However, I feel that they are still quite important for a control flow library, which is why Promiz.js has been optimized for performance. There is, however, one thing I should mention: Promiz.js will attempt to execute synchronously if possible. This technically breaks spec, but it allows us to get Async.js levels of performance (note: Async.js is not a promise library and doesn't look as clean).

Alright, let's look at the API that our library has to provide. Here is a basic common use case:

function testPromise(val) {
    // An example asynchronous promise function
    var deferred = Promiz.defer()
    setTimeout(function(){
        deferred.resolve(val)
    }, 0)
    return deferred
}
testPromise(22).then(function(twentyTwo){
    // This gets called when the async call finishes
    return 33
}).then(function success(thirtyThree){
    // Values get passed down the chain.
    // values can also be promises
    return testPromise(99)

}, function error(err) {
    // If an error happens, it gets passed here
})

Now, while the usage is simple, the internals can get a little complicated and require a good bit of javascript knowledge. Let's start with the most minimal possible setup.

First we're going to need a factory that creates the `deferred` (promise) objects:

var Promiz = {
    // promise factory
    defer: function(){
      return new defer()
    }
}

Now, let's define our promise object. Remember, to be spec-compliant, it must have a .then() method and a state. In order to be able to chain these calls, we're also going to need to keep track of what we need to call later. This will constitute our `stack` (functions that need to be resolved eventually).

function defer(){

    // State transitions from pending to either resolved or rejected
    this.state = 'pending'

    // The current stack of deferred calls that need to be made
    this.stack = []

    // The heart of the promise
    // adding a deferred call to our call stack
    this.then = function(fn, er){
      this.stack.push([fn, er])
      if (this.state !== 'pending') {

        // Consume the stack, running the next function
        this.fire()
      }
      return this
    }
}

The .then() method simply adds the functions it was called with (a success callback and an optional error callback) to the stack, and then checks to see if it should start consuming the stack. Note that we return `this`, which is a reference to our deferred object. This lets us call .then() again and add to the same deferred stack. Notice that our promise needs to come out of its pending state before we can start consuming the stack. Let's add two methods to our deferred object:

    // Resolves the promise to a value
    // Only affects the first time it is called
    this.resolve = function(val){
      if (this.state === 'pending'){
        this.state = 'resolved'
        this.fire(val)
      }
      return this
    }

    // Rejects the promise with a value
    // Only affects the first time it is called
    this.reject = function(val){
      if (this.state === 'pending'){
        this.state = 'rejected'
        this.fire(val)
      }
      return this
    }

Alright, so resolve actually does two things. It checks to see if we've already been resolved (by checking our pending state), which is important for spec compliance, and it fires off our resolved value to start consuming the stack. At this point, we're almost done (!). We just need a function that actually consumes our current promise stack (this.fire() - the most complicated function).

    // This is our main execution thread
    // Here is where we consume the stack of promises
    this.fire = function (val) {
      var self = this
      this.val = typeof val !== 'undefined' ? val : this.val

      // Iterate through the stack
      while(this.stack.length && this.state !== 'pending') {
        
        // Get the next stack item
        var entry = this.stack.shift()
        
        // if the entry has a function for the state we're in, call it
        var fn = this.state === 'rejected' ? entry[1] : entry[0]
        
        if(fn) {
          
          // wrap in a try/catch to get errors that might be thrown
          try {
            
            // call the deferred function
            this.val = fn.call(null, this.val)

            // If the value returned is a promise, resolve it
            if(this.val && typeof this.val.then === 'function') {
              
              // save our state
              var prevState = this.state

              // Halt stack execution until the promise resolves
              this.state = 'pending'

              // resolving
              this.val.then(function(v){

                // success callback
                self.resolve(v)
              }, function(err){

                // error callback

                // re-run the stack item if it has an error callback
                // but only if we weren't already in a rejected state
                if(prevState !== 'rejected' && entry[1]) {
                  self.stack.unshift(entry)
                }

                self.reject(err)
              })

            } else {
              this.state = 'resolved'
            }
          } catch (e) {

            // the function call failed, lets reject ourselves
            // and re-run the stack item in case it handles errors
            // but only if we didn't just do that
            // (e.g. the error function on the stack threw)
            this.val = e
            if(this.state !== 'rejected' && entry[1]) {
              this.stack.unshift(entry)
            }

            this.state = 'rejected'
          }
        }
      }
    }
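
As a quick sanity check, the minimal implementation above already handles a simple chain (a sketch, not part of the library's test suite):

var d = Promiz.defer()

d.then(function (val) {
  console.log(val) // 1
  return val + 1
}).then(function (val) {
  console.log(val) // 2
})

// nothing on the stack runs until the promise leaves its pending state
d.resolve(1)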

And that's it!

Sunday, July 7, 2013

CharityVid - User Auth, Jasmine Testing, and Dust.js

This is the last (official) post in my CharityVid series. I'm going to try and cram 3 big topics into one post, so let's see how it goes.

User Authentication

We're going to be using passport.js and MongoDB to create and store users. Here is what the passport code will look like:
 var passport = require('passport'),  
   FacebookStrategy = require('passport-facebook').Strategy,  
   db = require('./db'),  
   settings = require('./settings'),  
   log = require('./log');  
 passport.use(new FacebookStrategy({  
   clientID: FACEBOOK_APP_ID,  
   clientSecret: FACEBOOK_APP_SECRET,  
   callbackURL: "//" + settings.DOMAIN + "/auth/facebook/callback"  
 }, function(accessToken, refreshToken, profile, done) {  
   db.getUser(profile.id, function(err, result){  
     if (err || !result) { //user does not exist, create  
       //default user object  
       var user = {  
         fbid: profile.id,  
         username: profile.username,  
         displayName: profile.displayName,  
         ...  
       }  
       log.info("creating new user: "+user.fbid, user)  
       db.addUser(user, function(err, result) {  
         if(err || !result){  
           log.warn("error adding user", err)  
           return done(err)  
         }  
         return done(null, user)  
       })  
     } else {  
       return done(null, result)  
     }  
   })  
 }))  
 passport.serializeUser(function(user, done) {  
   done(null, user)  
 })  
 passport.deserializeUser(function(obj, done) {  
   done(null, obj)  
 })  

and then we need to add it in as express middleware.
 app.configure(function() {  
   app.use(express.cookieParser(settings.SESSION_SECRET))  
   app.use(express.session({  
     secret: settings.SESSION_SECRET,  
     store: new MongoStore({  
       url: settings.MONGO_URL  
     })  
   })) //auth  
   app.use(passport.initialize())  
   app.use(passport.session()) //defaults  
 })  
 app.get('/auth/facebook/callback', auth.passport.authenticate('facebook', {  
   failureRedirect: '/'  
 }), function(req, res) {  
   res.redirect('/')  
 })  
 app.get('/logout', function(req, res) {  
   req.logout()  
   res.redirect('/')  
 })  
 app.get('/auth/facebook', auth.passport.authenticate('facebook'), function(req, res) { /* function will not be called.(redirected to Facebook for authentication)*/  
 })  

Well that was a piece of cake, onto testing!

Testing

There are many kinds of testing (http://en.wikipedia.org/wiki/Software_testing#Testing_levels), and it's up to you to decide how much or how little of it you want to do. CharityVid uses Jasmine-node for its tests. We have a folder named 'tests', and inside are javascript files named '<part of code>.spec.js'. The .spec.js extension tells jasmine that these are tests to run. Here is what a test might look like with jasmine:
 describe("Util check", function() {  
   var belt = require('../util')    
    it("retrieves charity data", function(done) {
        belt.onDataReady(function() {
            belt.getCharity("americanredcross", function(err, charity) {
                expect(charity.name).toBeDefined()
                expect(charity.website).toBeDefined()
                ...
                done()
            })
        })
    })
 })  

And then to test it:
 jasmine-node tests  

And now finally, onto Dust.js

Dust.js

CharityVid uses Dust.js, which is a template engine, similar to Jade, the default template engine used by express.js. Dust has some nice features, including pre-compiled client-side templates that can also be used server side (pre-compiling reduces initial load times). Using dust.js is as simple as setting the view engine:
 var cons = require('consolidate')  
 app.engine('dust', cons.dust) //dustjs template engine  
 app.configure(function() {  
   app.set('view engine', 'dust') //dust.js default  
 })  

The dust engine comes from the Consolidate.js library, which supports a ton of different engines.
Here is an example of what dust.js looks like:
 {>"base.dust"/}  
 {<css_extra}<link href="/css/profile.css" rel="stylesheet">{/css_extra}  
 {<title}CharityVid - {name}{/title}  
 {<meta_extra}  
 <meta property="og:title" content="{name} - CharityVid"/>  
 {/meta_extra}  
 {<js}<script src='/js/profile.js' async></script>{/js}  
 {<profile_nav}class="active"{/profile_nav}  
 {<container}  
 <h1>{name}</h1>  
 <div class="row-fluid">  
   <img alt='{name}' class='profile-picture' src='https://graph.facebook.com/{fbid}/picture?type=large' align="left">  
   <span id='userQuote'>{quote}</span>  
   {?isUser}  
       <a class='edit' id='editQuote' href='#'>edit</a>  
   {/isUser}  
   <input type='hidden' name='_csrf' id='csrfToken' value='{token}'>  
 </div>
 {/container}  
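
Rendering that template from an express route is then just a res.render call. A sketch (the route path, sample values, and the csrf lookup are placeholders, not CharityVid's actual code):

app.get('/user/:username', function(req, res) {
  // 'profile' resolves to views/profile.dust via the view engine set above
  res.render('profile', {
    name: 'Jane Doe',
    fbid: '12345',
    quote: 'Hello world',
    isUser: req.user && req.user.username === req.params.username,
    token: 'csrf-token-here'
  })
})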

Sunday, June 30, 2013

Retin.us - A new way to consume RSS

http://Retin.us  (chrome extension) (source)

Retin.us is not a Google Reader clone. Retin.us doesn't star or share articles. Retin.us doesn't `like` things, nor does it show you pretty pictures in a collage.

Retin.us does one thing, and it does it well: RSS (Rich Site Summary). Here is what it looks like:

That's it. That's all there is. In fact, you can even minimize the sidebar:
Retin.us is based on my Google Reader usage pattern:
  • J (key) - next 
  • K (key) - previous
  • Ctrl + Enter - open the selected article in a new tab without losing focus
When I open up my reader, I go through every unread item and open interesting articles in a new tab without losing focus. This is a bit different from most people's expectation of reading the article within their reader. There are many problems I found with that paradigm:
  • Long articles are unwieldy to read inline.
  • Collage-based layouts are silly (Flipboard)
  • Some sites do not provide full articles in their RSS
  • Hacker News / Reddit subscriptions don't include any article data
As with my Google Reader app ZFeed, instead of relying on unreliable RSS feed data, I fetch a summary of each article using the embed.ly api. This way I can read the title and summary of an article before I make the decision to commit time to reading the whole thing.
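
One way to fetch such a summary from the embed.ly oEmbed endpoint looks roughly like this (a sketch using the request module; the key is a placeholder and the response fields should be checked against the embed.ly docs, this is not Retin.us's actual code):

var request = require('request')

function summarize(articleUrl, cb) {
  request({
    url: 'http://api.embed.ly/1/oembed',
    qs: { key: EMBEDLY_API_KEY, url: articleUrl },
    json: true
  }, function(err, res, body) {
    if (err) return cb(err)
    // a title + short description is enough to decide whether an article is worth opening
    cb(null, { title: body.title, summary: body.description })
  })
}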

There is still a lot more to come for Retin.us, but I have (as of last week) officially made it my RSS reader replacement. Expect another article soon about how it was built using Sails.js and Backbone. In the meantime, feel free to contribute on GitHub (GPL license).

Thursday, June 27, 2013

Node.js Tips

Here are some useful notes regarding Node.js development.

npm --save
When I first learned how to use npm, the process was like this:
npm install <package>
vi package.json # edit the dependencies manually, and have the package version be '*' 

which was a huge pain. Turns out, there is a great command-line flag which will add the module to package.json automatically.
npm install <package> --save # save to package.json with version
npm install <package> --save-dev # save to dev dependencies

npm local install
Installing dependencies globally (-g) can be quite scary, because by default you have to sudo the command. In order to bypass this, we can compile node to install locally into our home folder (the .local directory), and then add that folder's bin to our path. (source)
wget http://nodejs.org/dist/v0.10.12/node-v0.10.12.tar.gz
tar zxvf node-v0.10.12.tar.gz
cd node-v0.10.12

./configure --prefix=~/.local
make
make install

export PATH=$HOME/.local/bin:$PATH

npm publish
Publishing a module on npm couldn't be easier (taken from this gist):
npm set init.author.name "Your Name"
npm set init.author.email "you@example.com"
npm set init.author.url "http://yourblog.com"

npm adduser

cd /path/to/your-project
npm init

npm publish .

--expose-gc
The V8 javascript garbage collector in node.js is usually pretty good, however there may be some times when you need fine control over the collection yourself. In those cases, this command is quite useful:
node --expose-gc app.js

global.gc() // within app.js
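
In practice it's worth guarding the call so the same code runs with or without the flag (a small sketch):

// only defined when node was started with --expose-gc
if (typeof global.gc === 'function') {
  // e.g. force a collection right after releasing a very large buffer
  global.gc()
}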

npm link
Sometimes I find myself needing to modify an npm module, either to fix a bug or add a feature. In order to test my local modifications, and use my version across apps easily, I can use 'npm link':
git clone git@github.com:Zolmeister/Polish.js.git
cd Polish.js
npm link

cd ~/path/to/app
npm link polish # instead of npm install polish

Bonus - Great modules:
socket.io - realtime websockets magic
request - making http requests easier (like the python library)
passport - user authentication
Q - great promise library
async - if you're not cool enough for promises
lodash - better than underscore
fs-extra - lets you actually copy/paste/rm -rf files/folders properly
mongojs - great library for working with mongodb
nodejitsu recommendations

Friday, June 21, 2013

Scrolly - A beautiful custom scrollbar for every page


Scrolly is a chrome extension: web store (source) (live preview -->)

It turns this into this using css.

This is necessary because chrome does not use the scrollbars from your Linux theme:
Because widget rendering is done in a separate, sandboxed process that doesn't have access to the X server or the filesystem, there's no current way to do GTK+ widget rendering. We instead pass WebKit a few colors and let it draw a default scrollbar. (source)
Here is the source (yes, it's colorful now - hilite.me):

::-webkit-scrollbar {
    width: 12px;
    height: 12px;
}
::-webkit-scrollbar-track-piece {
    background: #aaa;
}
::-webkit-scrollbar-thumb {
    background: #7a7a7a;
    border-radius: 2px;
}
::-webkit-scrollbar-corner       {
    background: #999;
}
::-webkit-scrollbar-thumb:window-inactive {
    background: #888;
}

Feel free to fork it and change the css to be whatever you want. (pull requests welcome).

For more info on css scrollbars: http://css-tricks.com/custom-scrollbars-in-webkit/

Friday, June 14, 2013

CharityVid - Front-end Optimization

I've written a lot about the backend behind CharityVid, but there is quite a bit of front-end work that gets overlooked when developing a web application. Specifically, front-end optimization (e.g. page load times, browser compatibility, server latency, etc.). Let's begin with page load.

There are many good tools for measuring page load times, but my favorites are Google PageSpeed and YSlow.
(note: http://gtmetrix.com/ will test your site with PageSpeed and YSlow at the same time)
With these tools we can analyse what resources are consuming the most bandwidth and compensate accordingly, as well as making sure that we are using all available methods for minimizing server load/latency. (CharityVid gets a 97% on PageSpeed, and 83% on YSlow).

Hopefully those tools are self-explanatory (don't feel like you need to get to 100% on PageSpeed/YSlow); usually just taking advantage of easy wins (like caching) is enough to make your site fast (aim for ~90%+ on PageSpeed and you should be good).

Here are some helpful snippets for express:
 app.configure('production', function() {  
   app.use(express.logger())  
   app.use(express.compress()) //gzip all the things  
 })  

 //force non-www  
 app.get('/*', function(req, res, next) {  
   if (req.headers.host.match(/^www/) !== null ) res.redirect(301,'http://' + req.headers.host.replace(/^www\./, '') + req.url);  
   else next();  
 });  

Next up is browser compatibility. Hopefully you don't have to support IE6, but even then browsers like IE7 (mostly gone), IE8, IE9, and IE10 are still a pain to work with. This is especially true because in order to test them on a real computer (running Linux), you have to install a Windows VM. Tools like http://browsershots.org/ let you see your site running in other browsers pretty well, but that's just a quick check; if you really want to support IE (which you shouldn't), test it in a VM.

Finally, we get to <meta> tags (and such). Let me make it easy, and I'll just post what I use:
 <meta charset="utf-8">  
 <meta name="description" content="Be the difference, support charity just by watching a video.">  
 <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">  
 <meta name="twitter:card" content="summary">  
 <meta name="twitter:url" content="http://charityvid.org">  
 <meta name="twitter:title" content="CharityVid">  
 <meta name="twitter:description" content="CharityVid is dedicated to enabling people to donate to charity, even if all they can afford is their time. By donating just a few minutes day you can make a difference.">  
 <meta name="twitter:image" content="http://charityvid.org/ico/apple-touch-icon-144-precomposed.png">  
 <link rel="shortcut icon" href="/ico/favicon.ico">  
 <link rel="apple-touch-icon-precomposed" sizes="144x144" href="/ico/apple-touch-icon-144-precomposed.png">  
 <link rel="apple-touch-icon-precomposed" sizes="114x114" href="/ico/apple-touch-icon-114-precomposed.png">  
 <link rel="apple-touch-icon-precomposed" sizes="72x72" href="/ico/apple-touch-icon-72-precomposed.png">  
 <link rel="apple-touch-icon-precomposed" href="/ico/apple-touch-icon-57-precomposed.png">  

You should notice two things: I don't have a 'keywords' meta tag, and I have apple-touch-icon's.
As far as the keywords tag goes, I have read in many places that it isn't even looked at for SEO, and Google doesn't use it on its home page, so I decided to omit it. Apple-touch icons are used for when mobile users (both Android and iPhone) want to save your website as an application (it's just a web link, but shows up next to other native applications).

There is actually a lot more I could write about; however, it's easier to provide relevant links to what others have written on the subject.
Web Dev Checklist # Extremely useful for all websites, definitely check this one out
Fantastic Front End Performance - Mozilla (part 2, part 3) # this focuses on node.js performance
Blitz.io # Load testing, for testing both the server availability as well as latency
SEO Site Checkup # Checks websites for basic SEO best practices
Yahoo Smush It # Lossless Image file compression

Lastly, I highly recommend grunt (charityvid will be using this soon) to automate any compression/minification of files (all js should be concatenated and minified, same with css, and images should be compressed with SmushIt or similar).

Grunt seemed a bit daunting at a glance, but it's actually quite simple. Here is an example Gruntfile.js:
 module.exports = function(grunt) {  
  grunt.initConfig({  
   concat: {  
    dist: {  
     src: ['public/js/**/*.js'],  
     dest: 'public/prod/js/production.js'  
    }  
   },  
   uglify: {  
    dist: {  
     files: {  
      'public/prod/js/production.min.js': ['public/prod/js/production.js']  
     }  
    }  
   }  
  });  
  grunt.loadNpmTasks('grunt-contrib-uglify');  
  grunt.loadNpmTasks('grunt-contrib-concat');  
  grunt.registerTask('compress', ['concat', 'uglify']);  
 };  

Just run 'grunt compress', and you should be good to go (don't forget to npm install -g grunt-cli).