Tuesday, July 23, 2013

Promiz.js


Promiz.js is a (mostly) Promises/A+ compliant library which aims to have both a small footprint and great performance (< 1 KB: 625 bytes minified + gzipped). I won't go over why javascript promises are amazing. Instead, I'm going to focus on what goes on behind the scenes and what it takes to create a promise library. But first, some benchmarks (see bench.js for source - server side):
Benchmarks are obviously just that, benchmarks, and do not necessarily reflect real-world application usage. However, I feel that they are still quite important for a control-flow library, which is why Promiz.js has been optimized for performance. There is, however, one thing I should mention: Promiz.js will attempt to execute synchronously if possible. This technically breaks spec, but it allows us to get Async.js levels of performance (note: Async.js is not a promise library and doesn't look as clean).
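
To make the synchronous fast path concrete, here is a rough sketch using the Promiz.defer() API built later in this post (the printed ordering assumes that fast path):

var d = Promiz.defer()
d.resolve(42)

d.then(function(val){
    // with the synchronous fast path, this runs before the next line
    console.log('then: ' + val)
})
console.log('after .then()')

// Promiz prints "then: 42" first, then "after .then()".
// A strictly A+-compliant promise must call handlers asynchronously,
// so it would print "after .then()" first.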

Alright, let's look at the API that our library has to provide. Here is a basic common use case:

function testPromise(val) {
    // An example asynchronous promise function
    var deferred = Promiz.defer()
    setTimeout(function(){
        deferred.resolve(val)
    }, 0)
    return deferred
}
testPromise(22).then(function(twentyTwo){
    // This gets called when the async call finishes
    return 33
}).then(function success(thirtyThree){
    // Values get passed down the chain.
    // values can also be promises
    return testPromise(99)

}, function error(err) {
    // If an error happens, it gets passed here
})

Now, while the usage is simple, the backend can get a little bit complicated and requires a good bit of javascript knowledge. Let's start with the most minimal possible setup.

First we're going to need a generator, that creates the `deferred` (promise) objects:

var Promiz = {
    // promise factory
    defer: function(){
      return new defer()
    }
}

Now, let's define our promise object. Remember, to be spec compliant, it must have a .then() method and a state. In order to be able to chain these calls, we're also going to need to keep track of what we need to call later. This will constitute our `stack` (functions that need to be resolved eventually).

function defer(){

    // State transitions from pending to either resolved or rejected
    this.state = 'pending'

    // The current stack of deferred calls that need to be made
    this.stack = []

    // The heart of the promise
    // adding a deferred call to our call stack
    this.then = function(fn, er){
      this.stack.push([fn, er])
      if (this.state !== 'pending') {

        // Consume the stack, running the next function
        this.fire()
      }
      return this
    }
}

The .then() simply adds the functions it was called with (a success callback and an optional error callback) to the stack, and then checks to see if it should consume the stack. Note that we return `this`, which is a reference to our deferred object. This lets us call .then() again and add to the same deferred stack. Notice that our promise needs to come out of its pending state before we can start consuming the stack. Let's add two methods to our deferred object:

    // Resolves the promise to a value
    // Only affects the first time it is called
    this.resolve = function(val){
      if (this.state === 'pending'){
        this.state = 'resolved'
        this.fire(val)
      }
      return this
    }

    // Rejects the promise with a value
    // Only affects the first time it is called
    this.reject = function(val){
      if (this.state === 'pending'){
        this.state = 'rejected'
        this.fire(val)
      }
      return this
    }

Alright, so resolve actually does two things. It checks to see if we've already been resolved (by checking our pending state), which is important for spec compliance, and it fires off our resolved value to start consuming the stack. At this point, we're almost done (!). We just need a function that actually consumes our current promise stack (`this.fire()` - the most complicated function).

    // This is our main execution thread
    // Here is where we consume the stack of promises
    this.fire = function (val) {
      var self = this
      this.val = typeof val !== 'undefined' ? val : this.val

      // Iterate through the stack
      while(this.stack.length && this.state !== 'pending') {
        
        // Get the next stack item
        var entry = this.stack.shift()
        
        // if the entry has a function for the state we're in, call it
        var fn = this.state === 'rejected' ? entry[1] : entry[0]
        
        if(fn) {
          
          // wrap in a try/catch to get errors that might be thrown
          try {
            
            // call the deferred function
            this.val = fn.call(null, this.val)

            // If the value returned is a promise, resolve it
            if(this.val && typeof this.val.then === 'function') {
              
              // save our state
              var prevState = this.state

              // Halt stack execution until the promise resolves
              this.state = 'pending'

              // resolving
              this.val.then(function(v){

                // success callback
                self.resolve(v)
              }, function(err){

                // error callback

                // re-run the stack item if it has an error callback
                // but only if we weren't already in a rejected state
                if(prevState !== 'rejected' && entry[1]) {
                  self.stack.unshift(entry)
                }

                self.reject(err)
              })

            } else {
              this.state = 'resolved'
            }
          } catch (e) {

            // the function call failed, lets reject ourselves
            // and re-run the stack item in case it handles errors
            // but only if we didn't just do that
            // (e.g. the error function on the stack threw)
            this.val = e
            if(this.state !== 'rejected' && entry[1]) {
              this.stack.unshift(entry)
            }

            this.state = 'rejected'
          }
        }
      }
    }

And that's it!
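
Putting it all together, the minimal library above already handles the chained example from the top of the post. A quick sanity check (the values are arbitrary):

function delay(val) {
    // resolves asynchronously, like the testPromise example above
    var deferred = Promiz.defer()
    setTimeout(function(){
        deferred.resolve(val)
    }, 0)
    return deferred
}

delay(1).then(function(one){
    return one + 1
}).then(function(two){
    // returning a promise pauses the stack until it resolves
    return delay(two * 10)
}).then(function(twenty){
    console.log(twenty) // 20
}, function(err){
    console.error('something failed:', err)
})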

Sunday, July 7, 2013

CharityVid - User Auth, Jasmine Testing, and Dust.js

This is the last (official) post in my CharityVid series. I'm going to try to cram three big topics into one post, so let's see how it goes.

User Authentication

We're going to be using passport.js and MongoDB to create and store users. Here is what the passport code will look like:
 var passport = require('passport'),  
   FacebookStrategy = require('passport-facebook').Strategy,  
   db = require('./db'),  
   settings = require('./settings'),  
   log = require('./log');  
 passport.use(new FacebookStrategy({  
   clientID: FACEBOOK_APP_ID,  
   clientSecret: FACEBOOK_APP_SECRET,  
   callbackURL: "//" + settings.DOMAIN + "/auth/facebook/callback"  
 }, function(accessToken, refreshToken, profile, done) {  
   db.getUser(profile.id, function(err, result){  
     if (err || !result) { //user does not exist, create  
       //default user object  
       var user = {  
         fbid: profile.id,  
         username: profile.username,  
         displayName: profile.displayName,  
         ...  
       }  
       log.info("creating new user: "+user.fbid, user)  
       db.addUser(user, function(err, result) {  
         if(err || !result){  
           log.warn("error adding user", err)  
           return done(err)  
         }  
         return done(null, user)  
       })  
     } else {  
       return done(null, result)  
     }  
   })  
 }))  
 passport.serializeUser(function(user, done) {  
   done(null, user)  
 })  
 passport.deserializeUser(function(obj, done) {  
   done(null, obj)  
 })  

and then we need to add it in as express middleware.
 app.configure(function() {  
   app.use(express.cookieParser(settings.SESSION_SECRET))  
   app.use(express.session({  
     secret: settings.SESSION_SECRET,  
     store: new MongoStore({  
       url: settings.MONGO_URL  
     })  
   })) //auth  
   app.use(passport.initialize())  
   app.use(passport.session()) //defaults  
 })  
 app.get('/auth/facebook/callback', auth.passport.authenticate('facebook', {  
   failureRedirect: '/'  
 }), function(req, res) {  
   res.redirect('/')  
 })  
 app.get('/logout', function(req, res) {  
   req.logout()  
   res.redirect('/')  
 })  
 app.get('/auth/facebook', auth.passport.authenticate('facebook'), function(req, res) { /* this function will not be called (the user is redirected to Facebook for authentication) */  
 })  
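
With auth wired up, gating a page on login is just a check against req.isAuthenticated(). A minimal sketch (the /profile route and the redirect target are made up for illustration, not taken from the CharityVid source):

 // illustrative guard middleware
 function ensureAuthenticated(req, res, next) {
   if (req.isAuthenticated()) return next()
   res.redirect('/auth/facebook')
 }
 app.get('/profile', ensureAuthenticated, function(req, res) {
   // passport puts the deserialized user on req.user
   res.send('Hello ' + req.user.displayName)
 })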

Well that was a piece of cake, onto testing!

Testing

There are many kinds of testing (http://en.wikipedia.org/wiki/Software_testing#Testing_levels), and it's up to you to decide how much or how little of it you want to do. CharityVid uses Jasmine-node for its tests. We have a folder named 'tests', and inside are javascript files named '<part of code>.spec.js'. The .spec.js extension tells jasmine that these are tests to run. Here is what a test might look like with jasmine:
 describe("Util check", function() {  
   var belt = require('../util')    
    it("retrieves charity data", function(done) {
        belt.onDataReady(function() {
            belt.getCharity("americanredcross", function(err, charity) {
                expect(charity.name).toBeDefined()
                expect(charity.website).toBeDefined()
                ...
                done()
            })
        })
    })
 })  

And then to test it:
 jasmine-node tests  

And now finally, onto Dust.js

Dust.js

CharityVid uses Dust.js, which is a template engine, similar to Jade, the default template engine used by express.js. Dust has some nice features, including pre-compiled client-side templates that can also be used server side (pre-compiling reduces initial load times). Using dust.js is as simple as setting the view engine:
 var cons = require('consolidate')  
 app.engine('dust', cons.dust) //dustjs template engine  
 app.configure(function() {  
   app.set('view engine', 'dust') //dust.js default  
 })  

The dust engine comes from the Consolidate.js library, which supports a ton of different engines.
Here is an example of what dust.js looks like:
 {>"base.dust"/}  
 {<css_extra}<link href="/css/profile.css" rel="stylesheet">{/css_extra}  
 {<title}CharityVid - {name}{/title}  
 {<meta_extra}  
 <meta property="og:title" content="{name} - CharityVid"/>  
 {/meta_extra}  
 {<js}<script src='/js/profile.js' async></script>{/js}  
 {<profile_nav}class="active"{/profile_nav}  
 {<container}  
 <h1>{name}</h1>  
 <div class="row-fluid">  
   <img alt='{name}' class='profile-picture' src='https://graph.facebook.com/{fbid}/picture?type=large' align="left">  
   <span id='userQuote'>{quote}</span>  
   {?isUser}  
       <a class='edit' id='editQuote' href='#'>edit</a>  
   {/isUser}  
   <input type='hidden' name='_csrf' id='csrfToken' value='{token}'>  
 </div>
 {/container}  
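
Rendering that template from an express route is then a single res.render() call whose keys match the {name}, {fbid}, {quote}, etc. references above. A sketch (the route, lookup function, and data shape are assumptions, not the actual CharityVid code):

 app.get('/user/:username', function(req, res) {
   // hypothetical lookup by username
   db.getUserByName(req.params.username, function(err, user) {
     if (err || !user) return res.send(404)
     res.render('profile', {
       name: user.displayName,
       fbid: user.fbid,
       quote: user.quote,
       isUser: req.user && req.user.fbid === user.fbid,
       token: req.session._csrf // or however your CSRF middleware exposes the token
     })
   })
 })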

Sunday, June 30, 2013

Retin.us - A new way to consume RSS

http://Retin.us  (chrome extension) (source)

Retin.us is not a Google Reader clone. Retin.us doesn't star or share articles. Retin.us doesn't `like` things, nor does it show you pretty pictures in a collage.

Retin.us does one thing, and it does it well: RSS (Rich Site Summary). Here is what it looks like:

That's it. That's all there is. In fact, you can even minimize the sidebar.

Retin.us is based on my Google Reader usage pattern:
  • J (key) - next 
  • K (key) - previous
  • Ctrl + Enter - open selected article in new tab without losing focus
When I open up my reader, I go through every unread item and open interesting articles in a new tab without losing focus. This is a bit different from most people, who expect to read the article within their reader. There are many problems I found with that paradigm:
  • Long articles are unwieldy to read inline.
  • Collage-based layouts are silly (Flipboard)
  • Some sites do not provide full articles in their RSS
  • Hacker News / Reddit subscriptions don't include any article data
As with my Google Reader app ZFeed, instead of relying on unreliable RSS feed data, I fetch a summary of each article using the embed.ly api. This way I can read the title and summary of an article before I make the decision to commit time to reading the whole thing.
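
Roughly, that lookup looks something like the sketch below, using the request module. The embed.ly endpoint and response fields are from memory of their oEmbed API and are assumptions, not necessarily what Retin.us actually calls:

var request = require('request')

// EMBEDLY_API_KEY is a placeholder for your embed.ly key
function getSummary(articleUrl, cb) {
  request({
    url: 'http://api.embed.ly/1/oembed',
    qs: { key: EMBEDLY_API_KEY, url: articleUrl },
    json: true
  }, function (err, res, body) {
    if (err) return cb(err)
    // title + description are enough to decide whether an article is worth opening
    cb(null, { title: body.title, summary: body.description })
  })
}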

There is still a lot more to come for Retin.us, but I have (as of last week) officially made it my RSS reader replacement. Expect another article soon about how it was built using Sails.js and Backbone. In the meantime, feel free to contribute on GitHub (GPL license).

Thursday, June 27, 2013

Node.js Tips

Here are some useful notes regarding Node.js development.

npm --save
When I first learned how to use npm, the process was like this:
npm install <package>
vi package.json # edit the dependencies manually, and have the package version be '*' 

which was a huge pain. Turns out, there is a great command-line flag which will add the module to package.json automatically.
npm install <package> --save # save to package.json with version
npm install <package> --save-dev # save to dev dependencies

npm local install
Installing dependencies globally (-g) can be quite scary, because by default you have to sudo the command. In order to bypass this, we can compile node to install locally into our home folder (a .local directory), and then add its bin folder to our path. (source)
wget http://nodejs.org/dist/v0.10.12/node-v0.10.12.tar.gz
tar zxvf node-v0.10.12.tar.gz
cd node-v0.10.12

./configure --prefix=~/.local
make
make install

export PATH=$HOME/.local/bin:$PATH

npm publish
Publishing a module on npm couldn't be easier (taken from this gist):
npm set init.author.name "Your Name"
npm set init.author.email "you@example.com"
npm set init.author.url "http://yourblog.com"

npm adduser

cd /path/to/your-project
npm init

npm publish .

--expose-gc
The V8 javascript garbage collector in node.js is usually pretty good; however, there may be times when you need finer control over collection yourself. In those cases, this flag is quite useful:
node --expose-gc app.js

global.gc() // within app.js
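
Note that global.gc only exists when that flag is passed, so it's worth guarding the call (a small sketch):

// only force a collection if node was started with --expose-gc
if (global.gc) {
  global.gc()
} else {
  console.warn('run node with --expose-gc to enable manual GC')
}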

npm link
Sometimes I find myself needing to modify an npm module, either to fix a bug or add a feature. In order to test my local modifications, and use my version across apps easily, I can use 'npm link':
git clone git@github.com:Zolmeister/Polish.js.git
cd Polish.js
npm link

cd ~/path/to/app
npm link polish # instead of npm install polish

Bonus - Great modules:
socket.io - realtime websockets magic
request - making http requests easier (like the python library)
passport - user authentication
Q - great promise library
async - if you're not cool enough for promises
lodash - better than underscore
fs-extra - lets you actually copy/paste/rm -rf files/folders properly
mongojs - great library for working with mongodb
nodejitsu recommendations

Friday, June 21, 2013

Scrolly - A beautiful custom scrollbar for every page


Scrolly is a chrome extension: web store (source) (live preview -->)

It turns the default scrollbar into a cleaner, themed one using CSS.

This is necessary because Chrome does not use the scrollbars from your Linux theme:
Because widget rendering is done in a separate, sandboxed process that doesn't have access to the X server or the filesystem, there's no current way to do GTK+ widget rendering. We instead pass WebKit a few colors and let it draw a default scrollbar. (source)
Here is the source (yes, it's colorful now - hilite.me):

::-webkit-scrollbar {
    width: 12px;
    height: 12px;
}
::-webkit-scrollbar-track-piece {
    background: #aaa;
}
::-webkit-scrollbar-thumb {
    background: #7a7a7a;
    border-radius: 2px;
}
::-webkit-scrollbar-corner       {
    background: #999;
}
::-webkit-scrollbar-thumb:window-inactive {
    background: #888;
}
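
For reference, a Chrome extension applies a stylesheet like this to every page through a content_scripts entry in its manifest.json. A minimal sketch (the file names are assumptions; see the actual source for Scrolly's real manifest):

{
  "name": "Scrolly",
  "version": "1.0",
  "manifest_version": 2,
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "css": ["scrollbar.css"],
    "run_at": "document_start"
  }]
}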

Feel free to fork it and change the CSS to whatever you want (pull requests welcome).

For more info on css scrollbars: http://css-tricks.com/custom-scrollbars-in-webkit/

Friday, June 14, 2013

CharityVid - Front-end Optimization

I've written a lot about the backend behind CharityVid, but there are quite a few front-end things that get overlooked when developing a web application. Specifically, front-end optimization (e.g. page load times, browser compatibility, server latency, etc.). Let's begin with page load.

There are many good tools for measuring page load times; my favorites are Google PageSpeed and Yahoo! YSlow.
(note: http://gtmetrix.com/ will test your site with PageSpeed and YSlow at the same time)
With these tools we can analyze which resources consume the most bandwidth and compensate accordingly, as well as make sure that we are using all available methods for minimizing server load/latency. (CharityVid gets a 97% on PageSpeed, and 83% on YSlow).

Hopefully those tools are self-explanatory (don't feel like you need to get to 100% on PageSpeed/YSlow). Usually just taking advantage of easy wins, like caching (see the static-asset snippet below), is enough to make your site fast (aim for ~90%+ on PageSpeed and you should be good).

Here are some helpful snippets for express:
 app.configure('production', function() {  
   app.use(express.logger())  
   app.use(express.compress()) //gzip all the things  
 })  

 //force non-www  
 app.get('/*', function(req, res, next) {  
   if (req.headers.host.match(/^www/) !== null ) res.redirect(301,'http://' + req.headers.host.replace(/^www\./, '') + req.url);  
   else next();  
 });  
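
One of those easy caching wins is serving static assets with a long max-age so browsers don't re-download them on every visit. A sketch (the one-day value is arbitrary; this would sit alongside the production config above):

 app.configure('production', function() {  
   // serve /public with a one-day browser cache (maxAge is in milliseconds)  
   app.use(express.static(__dirname + '/public', { maxAge: 86400000 }))  
 })  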

Next up is browser compatibility. Hopefully you don't have to support IE6, but even then, browsers like IE7 (mostly gone), IE8, IE9, and IE10 are still a pain to work with. This is especially true because in order to test them on a real computer (running Linux), you have to install a Windows VM. Tools like http://browsershots.org/ let you see your site running in other browsers pretty well, but that is just a quick check; if you really want to support IE (which you shouldn't), then test it in a VM.

Finally, we get to <meta> tags (and such). Let me make it easy, and I'll just post what I use:
 <meta charset="utf-8">  
 <meta name="description" content="Be the difference, support charity just by watching a video.">  
 <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">  
 <meta name="twitter:card" content="summary">  
 <meta name="twitter:url" content="http://charityvid.org">  
 <meta name="twitter:title" content="CharityVid">  
 <meta name="twitter:description" content="CharityVid is dedicated to enabling people to donate to charity, even if all they can afford is their time. By donating just a few minutes day you can make a difference.">  
 <meta name="twitter:image" content="http://charityvid.org/ico/apple-touch-icon-144-precomposed.png">  
 <link rel="shortcut icon" href="/ico/favicon.ico">  
 <link rel="apple-touch-icon-precomposed" sizes="144x144" href="/ico/apple-touch-icon-144-precomposed.png">  
 <link rel="apple-touch-icon-precomposed" sizes="114x114" href="/ico/apple-touch-icon-114-precomposed.png">  
 <link rel="apple-touch-icon-precomposed" sizes="72x72" href="/ico/apple-touch-icon-72-precomposed.png">  
 <link rel="apple-touch-icon-precomposed" href="/ico/apple-touch-icon-57-precomposed.png">  

You should notice two things: I don't have a 'keywords' meta tag, and I have apple-touch-icons.
As far as the keywords tag goes, I have read in many places that it isn't even looked at for SEO, and Google doesn't use it on its own home page, so I decided to omit it. Apple-touch icons are used when mobile users (both Android and iPhone) save your website as an application (it's just a web link, but it shows up next to other native applications).

There is actually a lot more I could write about; however, it's easier to provide relevant links to what others have written on the subject.
Web Dev Checklist # Extremely useful for all websites, definitely check this one out
Fantastic Front End Performance - Mozilla (part 2, part 3) # this focuses on node.js performance
Blitz.io # Load testing, for testing both the server availability as well as latency
SEO Site Checkup # Checks websites for basic SEO best practices
Yahoo Smush It # Lossless Image file compression

Lastly, I highly recommend grunt (charityvid will be using this soon) to automate any compression/minification of files (all js should be concatenated and minified, same with css, and images should be compressed with SmushIt or similar).

Grunt seemed a bit daunting at a glance, but it's actually quite simple. Here is an example Gruntfile.js:
 module.exports = function(grunt) {  
  grunt.initConfig({  
   concat: {  
    dist: {  
     src: ['public/js/**/*.js'],  
     dest: 'public/prod/js/production.js'  
    }  
   },  
   uglify: {  
    dist: {  
     files: {  
      'public/prod/js/production.min.js': ['public/prod/js/production.js']  
     }  
    }  
   }  
  });  
  grunt.loadNpmTasks('grunt-contrib-uglify');  
  grunt.loadNpmTasks('grunt-contrib-concat');  
  grunt.registerTask('compress', ['concat', 'uglify']);  
 };  

Just run 'grunt compress', and you should be good to go (don't forget to npm install -g grunt-cli).

Saturday, June 8, 2013

Avabranch Mobile


Last year I made a game called Avabranch for the GitHub Game Off (http://www.zolmeister.com/2012/11/avabranch.html). I decided to take a few hours and port it to mobile, so without further ado, here it is: http://avabranch.zolmeister.com/m/
It listens for touch events, so if you want to use it on the desktop, you're going to need to enable touch events in the Chrome dev tools settings.




However, the Android app has a few known bugs (and some mobile browsers, like Google Chrome, may not play the game properly). My advice is to use the Firefox mobile browser, as it worked flawlessly for me on the first try. There are currently no plans to fix the issues, mostly because it's not my fault (it works flawlessly in Firefox mobile and on the desktop) and I don't feel like debugging the app, especially since the Android app works perfectly on my phone (if there is enough interest I may fix the issues, but for now this was just an excuse to play with PhoneGap). Anyways, hope it works for you.