So I’ve been working on a way to serve dynamic SSL certificates from Nginx. The use case: you have a set of websites that you want to serve over HTTPS from the same servers, but you don’t want to update your Nginx config every time you add a new one. After a few days of hitting walls, I’ve come up with a pretty good solution.
The main players in this are Postgres, Nginx, Lua, and a bunch of modules. Big credit goes to Yichun Zhang for his truly incredible work on OpenResty and the various modules that have come out of that project. For this, I’m using the lua-nginx-module – specifically the ssl-cert-by-lua branch. Zhang has done a pretty comprehensive writeup on how to use this branch – read his post carefully! I certainly missed some pieces the first few times. This branch adds a bunch of directives, the most important of which is ssl_certificate_by_lua. This is where the magic happens.
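To give a sense of where this hooks in, here’s a minimal sketch of a server block – the paths and the handler filename are my own illustration, not from the repo:

```nginx
server {
    listen 443 ssl;
    server_name _;

    # Nginx still requires a static cert/key pair to start,
    # even though the Lua handler replaces them per-handshake.
    ssl_certificate     /etc/nginx/ssl/fallback.crt;
    ssl_certificate_key /etc/nginx/ssl/fallback.key;

    # Runs during the TLS handshake, before a certificate is sent;
    # the referenced file looks up and sets the right cert.
    ssl_certificate_by_lua_file /etc/nginx/lua/dynamic_cert.lua;
}
```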
I’ve put up the code in a repository on GitHub; check that out to see what’s happening and how to run the thing.
The basic idea is to store the SSL certificates in the database and retrieve them when a request comes in. You can do this using the ssl_certificate_by_lua directive, but there are a few snags. First, everything has to be in DER (binary) format, so PEM certs have to be converted. For certificates, the functionality is built in – just use cert_pem_to_der. For private keys, though, there’s no such function. To get around this, I take the private key, write it to a temp file, run that file through openssl to convert it to DER, and use the result. The converted key is pseudo-cached on the host and reused whenever it’s available. Retrieval from the database is done using pgmoon.
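Put together, the handler looks roughly like this. This is a hand-wavy sketch rather than the repo’s code – the table and column names, connection settings, and temp-file paths are placeholders – but it shows the ngx.ssl and pgmoon calls involved (it only runs inside an ssl_certificate_by_lua context, so you can’t run it standalone):

```lua
-- Sketch of an ssl_certificate_by_lua handler; details are illustrative.
local ssl    = require "ngx.ssl"
local pgmoon = require "pgmoon"

local name = ssl.server_name()          -- SNI hostname from the handshake
if not name then return end             -- no SNI: fall back to the default cert

local pg = pgmoon.new({ host = "10.0.2.2", database = "nginx", user = "postgres" })
assert(pg:connect())
local rows = pg:query("SELECT crt, key FROM domains WHERE name = "
  .. pg:escape_literal(name))
pg:keepalive()

local row = rows and rows[1]
if not row then return end              -- unknown domain: default cert again

ssl.clear_certs()

-- Certificates convert in-process:
ssl.set_der_cert(assert(ssl.cert_pem_to_der(row.crt)))

-- No such helper for private keys, hence the temp-file-plus-openssl
-- detour; the .der file on disk doubles as the pseudo-cache.
local tmp = "/tmp/" .. name .. ".key"
local der = tmp .. ".der"
local f = io.open(der, "rb")
if not f then
  local out = assert(io.open(tmp, "w"))
  out:write(row.key)
  out:close()
  os.execute("openssl rsa -in " .. tmp .. " -outform DER -out " .. der)
  f = assert(io.open(der, "rb"))
end
ssl.set_der_priv_key(f:read("*a"))
f:close()
```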
As I said earlier, this uses Postgres, and so requires a Postgres DB. I didn’t include one in the install because I’m going to host the DB elsewhere (in this case, RDS); if you’re going to autoscale this, giving each box its own Postgres DB doesn’t make sense. For local testing, I’m running Postgres on the host machine, so I ran netstat -rn inside the Vagrant guest to get the host IP; in this case it’s 10.0.2.2, so that’s what I use for the Postgres address.
Along with the Postgres server goes the actual database: I created one called nginx and added a table called domains. If you use this in production, you should probably encrypt the private keys; that’s not covered here.
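For reference, a schema along these lines would do it – this is my guess at a minimal layout, not the repo’s actual schema:

```sql
CREATE DATABASE nginx;

-- Run against the nginx database:
CREATE TABLE domains (
    id   serial PRIMARY KEY,
    name text   UNIQUE NOT NULL,  -- hostname, e.g. hello.test
    crt  text   NOT NULL,         -- PEM-encoded certificate
    key  text   NOT NULL          -- PEM-encoded private key (encrypt in production!)
);
```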
When testing locally, I created a self-signed root CA and then used it to sign a device certificate. Here are the commands I ran:
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem
openssl genrsa -out device.key 2048
openssl req -new -key device.key -out device.csr
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device.crt -days 500
openssl rsa -in device.key -outform DER -out device.der
I kept the DER file around for testing, just in case; otherwise, just take the CRT and KEY outputs and put them into the database.
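One thing worth doing before inserting a pair: confirm the cert actually belongs to the key by comparing RSA moduli. Throwaway files stand in for device.crt and device.key here:

```shell
# Generate a stand-in key and self-signed cert (illustrative filenames).
openssl genrsa -out /tmp/demo.key 2048
openssl req -x509 -new -key /tmp/demo.key -days 30 \
    -subj "/CN=hello.test" -out /tmp/demo.crt

# The moduli match only if the cert was issued for this key.
openssl x509 -noout -modulus -in /tmp/demo.crt
openssl rsa  -noout -modulus -in /tmp/demo.key
```

If the two Modulus= lines differ, the pair won’t work and Nginx will refuse the handshake.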
Then, edit your /etc/hosts file and point test domains to the Vagrant instance; something like “10.5.6.7 hello.test” should work.
I’ll admit that this is a bit sloppy and all over the place, but it works. There are optimizations to be made, such as caching both the cert and the private key on the host rather than hitting the database on every handshake; that’s going to be the next phase of this.