Zones on a Single Server, revisited


TL;DR: nginx in its own zone, replacing Apache as the reverse proxy, and one web app per zone, PHP or Python.

Broadly, I am happy with Solaris Zones, used as I laid out in my first post on the topic. I haven’t made dramatic changes to the idea, but I have been fine-tuning things.

In the previous post I mentioned multiple web zones, but I didn’t go into them in much depth. What I had been doing was one web zone for me (which also handled the proxying) and one web zone behind the proxy for other users. Recently, I’ve installed nginx in its own zone to act as the proxy, and I’ve been splitting the applications in my web zone out into one zone per application.

While splitting the applications apart, I’ve been re-evaluating how they are run. For instance, anything running under mod_python should be updated to run against a WSGI server, and perhaps Apache isn’t the right server for that. I wonder the same thing about the PHP programs, but I’m not as ready to touch those yet.

In the case of the feed reader, I moved from mod_php to CherryPy. This did involve a rewrite of the backend code, but that rewrite had been creeping along anyway. Since the web layer was being rewritten regardless, CherryPy was an easy choice and a chance to learn more about it. What I learned is that I feel limited by its default routing choices and I don’t like its logging. For applications already in Python, I want to go straight to WSGI, with either my own routing or a third-party routing library, and use Python’s logging module directly. I haven’t moved the other services yet, but I’m putting serious thought into choosing a server for them.
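Something like this minimal sketch is what I have in mind: hand-rolled routing, the standard logging module, and a high port behind the proxy. The routes and handlers here are invented for illustration, not the feed reader’s actual code.

```python
import logging
from wsgiref.simple_server import make_server

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feedreader")

def index(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"index\n"]

def feeds(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"feed list\n"]

# Hand-rolled routing table: PATH_INFO -> handler function.
ROUTES = {"/": index, "/feeds": feeds}

def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    log.info("%s %s", environ.get("REQUEST_METHOD"), path)
    handler = ROUTES.get(path)
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found\n"]
    return handler(environ, start_response)

if __name__ == "__main__":
    # High, unprivileged port; nginx in its own zone proxies to this.
    make_server("127.0.0.1", 8080, application).serve_forever()
```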

I’ve also started using virtualenv, a Python-specific virtual environment tool. Part of my reason for using a lot of zones is not just security, but also having a minimal install for each service (which helps keep track of what the dependencies are). For Python projects, virtualenv makes this dead simple. You create the environment, activate it, then pip install your dependencies inside it, and they end up there instead of in the system site-packages.
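A quick way to sanity-check this (an illustrative snippet, not anything canonical; the exact paths and prefixes vary by system) is to ask Python itself whether you are inside an environment and where pip will put packages:

```python
import sys
import sysconfig

# Classic virtualenv sets sys.real_prefix; the newer venv-style layout
# makes sys.base_prefix differ from sys.prefix.
in_env = (getattr(sys, "real_prefix", None) is not None
          or getattr(sys, "base_prefix", sys.prefix) != sys.prefix)
print("inside a virtual environment:", in_env)

# The directory that "pip install" populates: the environment's own
# site-packages rather than the system one.
print("packages install into:", sysconfig.get_paths()["purelib"])
```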

With virtualenv containing each project’s dependencies, and with the service listening on a higher port so that root isn’t required at all to start it, I’ve started to wonder whether zones might be overkill. How much separation do I really need between a non-root process and its environment? Without root, I’m not sure I need network separation, and do I need as much process separation either?
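The high-port point is just the usual Unix rule that only root may bind ports below 1024. A throwaway check (purely illustrative) shows why the app process itself never needs privileges as long as nginx does the listening on port 80:

```python
import socket

def try_bind(port):
    # Attempt to bind a TCP socket on localhost at the given port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return "ok"
    except OSError as exc:
        return "failed: %s" % exc
    finally:
        s.close()

# Run as an unprivileged user: the low port fails, the high port works,
# so the service never needs root; nginx forwards port 80 to 8080.
print(80, try_bind(80))
print(8080, try_bind(8080))
```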

I thought that perhaps sticking the virtualenv in a chroot might provide all the separation and security I need. However, when I looked into trying it, I ran into trouble: a chroot wreaks havoc on Python’s module loading, and I don’t see anything in virtualenv that would fix this. It occurs to me that if I put a complete Python install (and pip) into each chroot, then I could use the chroot instead of virtualenv. This would take more disk space, but overall I suspect it would hog fewer resources than a full-blown zone does.
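To show what I mean about module loading (a hypothetical sketch, and note the irony that os.chroot itself needs root): once the process is jailed, sys.path still points at directories that don’t exist inside the jail, so any import that hasn’t already happened fails unless the jail carries its own copy of the Python library tree.

```python
import os

# Hypothetical jail directory; in practice it would need to hold the app
# plus a complete Python install (bin/python, lib/pythonX.Y, and so on).
JAIL = "/srv/jail"

os.chroot(JAIL)   # ironically, this call itself requires root
os.chdir("/")

# Modules imported before the chroot keep working, but any new import
# must now resolve inside the jail. Without a Python library tree copied
# into JAIL, the next line raises ImportError.
import json
```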

I have further thoughts on that, but I need to actually test it. I doubt I will go that way. This isn’t what I want to spend all my time doing. I just want to have useful tools and make this easy on myself so I can work on more interesting things.

Currently, I’m mostly focusing on the Python side of things, but I want to reconsider how the PHP apps are served as well. I only run third-party PHP programs like WordPress and RoundCube.

