http://zoggle.zolmeister.com/
http://games.zolmeister.com/
http://avabranch.zolmeister.com/
http://gallery.zolmeister.com/
http://charityvid.org/
Let me preface this by describing how I used to deploy my sites:
- ssh myserver@amazonec2
- screen -ls (to list all the screen instances I had open)
- screen -r XXXX (guess which screen was the one running the site I wanted to update)
- node app.js (launch the server process in the screen, then detach with Ctrl+a d)
Clearly that doesn't scale. Now each app runs as an Upstart service; here is the conf file for CharityVid (it goes in /etc/init/):
description "CharityVid node server"
author "Zolmeister"
stop on shutdown
respawn
respawn limit 20 5
# Max open files are @ 1024 by default. Bit few.
limit nofile 32768 32768
script
echo $$ > /var/run/charityvid.pid
export NODE_ENV=production
cd /home/pi/websites/charityvid
exec /usr/local/bin/up -p 3100 -w app.js 2>&1 >> /var/log/charityvid.log
end script
What this conf file says: on service start, write the process's pid to a file (in case we need to kill it), set node to production mode, cd into the site directory (necessary because of issues with express.js template loading), and finally start the app. Notice that I start the app with the "up" command, and that both stdout and stderr are appended to the log file (>> ... 2>&1). The "up" command comes from the up node module, which handles automatic reloading and load balancing across workers. The alternative to "up" is node-supervisor, which doesn't do load balancing but runs any application out of the box (without the modification required by "up").
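Assuming the file is saved as /etc/init/charityvid.conf, the app can then be managed like any other Upstart service, and its output watched through the log:

sudo start charityvid
sudo stop charityvid
sudo restart charityvid
sudo status charityvid
tail -f /var/log/charityvid.log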
Ok, so now we have a way to start all of our applications, but how do we route incoming connections to the right place (remember, all the apps live on one server with one IP address)? Normally you would use NGINX (pronounced "engine x"), but the stable branch doesn't support websockets (used by http://zoggle.zolmeister.com), so I decided to go with the simple node-http-proxy module. The documentation is a bit lacking, but I was able to get all the sites routed, including websocket support and SSL support (for https://charityvid.org/). Here is what the app.js file for my "gateway" application looks like:
var http = require('http'),
    httpProxy = require('http-proxy'),
    fs = require('fs'),
    https = require('https'),
    crypto = require('crypto'),
    path = require('path');
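// Hostname router: forwards each domain to the local port its app listens on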
var main1 = httpProxy.createServer({
  router: {
    'zoggle.zolmeister.com': 'localhost:3001',
    'avabranch.zolmeister.com': 'localhost:3005',
    'charityvid.org': 'localhost:3100'
  }
});
main1.listen(8000);
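// Front-facing HTTP server on port 80: forwards plain requests and websocket upgrades to the router proxy above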
var proxy = new httpProxy.HttpProxy({
  target: {
    host: 'localhost',
    port: 8000
  }
});
var server = http.createServer(function (req, res) {
  // Proxy normal HTTP requests
  proxy.proxyRequest(req, res);
});
server.on('upgrade', function (req, socket, head) {
  // Proxy websocket requests too
  proxy.proxyWebSocketRequest(req, socket, head);
});
server.listen(80);
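// Load a key/cert pair from the certs directory and return a credentials context (used for SNI below)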
function getCredentialsContext(cer) {
  return crypto.createCredentials({
    key: fs.readFileSync(path.join(__dirname, 'certs', cer + '.key')),
    cert: fs.readFileSync(path.join(__dirname, 'certs', cer + '.crt'))
  }).context;
}
var certs = {
  "charityvid.org": getCredentialsContext("charityvid")
};
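// HTTPS proxy on port 443: SNI picks the certificate by hostname, the router picks the backend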
var options = {
  https: {
    SNICallback: function (hostname) {
      return certs[hostname];
    }
  },
  hostnameOnly: true,
  router: {
    'charityvid.org': 'localhost:3100'
  }
};
// https
httpProxy.createServer(options).listen(443);
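A quick way to sanity-check the routing from the server itself is to curl the gateway with an explicit Host header (the hostnames are the same ones configured in the router above):

curl -H "Host: charityvid.org" http://localhost/
curl -H "Host: zoggle.zolmeister.com" http://localhost/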
After starting the routing proxy, every site should be up and reachable, but we still have to ssh into the server to start/stop apps and to update them (with supervisor and up you should rarely have to restart the node server). This is where Fabric comes in. Fabric lets you automate the ssh process and run commands on the server from your work machine. For example, a simple task to push an app's initial upstart conf file would be:
from fabric.api import put, sudo, run, cd

def installInit():
    put('avabranch.conf', '/etc/init/avabranch.conf', True)  # True = use_sudo, since /etc/init requires root
    sudo('touch /var/log/avabranch.log')
    sudo('chown admin /var/log/avabranch.log')
And if you're using git to manage your source (I keep mine in a private Bitbucket repo), updating your production code is as easy as "fab update:avabranch", given the following fabfile declaration:
def update(app):
    with cd('/home/admin/websites'):
        if app == "avabranch":
            run('cd avabranch && git pull')
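Both tasks assume Fabric knows which server to connect to; the host can be passed on the command line with -H, or set once at the top of the fabfile (the hostname below is just a placeholder):

from fabric.api import env
env.hosts = ['admin@my-ec2-host']  # placeholder host string, user@hostname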
Finally, I keep my sites behind the CloudFlare service, which basically means they are protected from DDoS attacks and get free, efficient localized caching (CloudFlare has data centers all over the world, which improves cache latency for visitors).