Proxmox on DigitalOcean

I thought I’d try to install Proxmox 4 on DigitalOcean. How I struck on that idea was very convoluted, and I don’t have a specific purpose for doing so, but if you want LXC containers instead of Docker or rkt, and you want a full set of web admin tools, this might not be a bad way to go.

By the way, if you like this write up and don’t already use DigitalOcean, feel free to thank me by signing up with my referral link. That will give you a $10 getting started credit to experiment with and give me some referral credit to use for my future bill with them.

Thinking about it, I don’t expect DigitalOcean to support running KVM on their VMs (which are already KVM). I do expect the LXC containers to work. Also, there is likely to be some networking difficulty due to having only one external and one internal IP. However, that shouldn’t stop you from seeing it as a good idea. I’ve previously used LXC directly with only a single IP, and before that, Solaris Containers with only a single IP. Further discussion on that point will come later.

DigitalOcean doesn’t let you install any distribution that you might want, so installing from the Proxmox ISO was out of the question. However, Proxmox publishes directions for installing it on top of Debian Jessie, which is a supported distribution on DigitalOcean.

Their directions are painless to follow and just work.

Here is a shortened version of them:

  1. Open /etc/hosts and edit your hostname’s line so that it uses your external IP.
  2. Run the following shell commands:

     echo "deb jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
     wget -O- "" | apt-key add -
     apt-get update && apt-get dist-upgrade
     apt-get install proxmox-ve ntp ssh postfix ksm-control-daemon open-iscsi openvswitch-switch

     Note: If you are following the guide on the Proxmox site, I’ve added one additional package to be installed, openvswitch-switch. I will explain more about that later.

  3. You will be asked about configuring postfix. If you don’t know what to choose, just choose for it to be a local-only postfix server.
  4. Reboot.

At that point, you can log into the web admin page. Use https://your-ip:8006/.

Once logged in, you can look around, and after configuring a local storage location, downloading templates works correctly. Unfortunately, creating a CT (LXC container) doesn’t work initially: the dialog requires choosing a bridge, and there are no bridges.

At this point I followed the guide for using OpenVSwitch here, installed OpenVSwitch, and then configured a switch that isn’t connected to a “physical” interface. From the command line, I then created a VLAN on the switch so that the host could use the new switch.

Now, when creating the container, choose the vswitch bridge. Also, set the VLAN tag of the container’s interface to the number used on the host. Now you are good to go.
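For reference, the end result in /etc/network/interfaces looks something like the following sketch (vmbr1, vlan1, the tag number, and the address here are placeholders, not what Proxmox will pick for you; adapt them to your setup):

```
# Open vSwitch bridge with no physical interface attached
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports vlan1

# Internal port so the host itself is on the private network,
# tagged with the same VLAN number the containers will use
allow-vmbr1 vlan1
iface vlan1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=1
```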

Using a vswitch not attached to a physical interface gets you private networking between your containers, but it still doesn’t solve what to do about only having a single IP. Previously my tactic was to have nginx in a container be the designated front end answering on ports 80 and 443. This means setting up iptables on the host to forward those ports to the front-end container. Then, in the nginx config for that container, I would proxy_pass by domain/hostname through to the various web service containers I was running (for instance, personal web site, project websites, and other web-based software I might be using for monitoring, etc.). This strategy can also be extended to other services like email, DNS, and so on.
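As a sketch of the forwarding half of that (eth0 and the container address 10.10.10.2 are hypothetical; substitute your external interface and the front-end container’s private IP):

```shell
# Forward inbound web traffic on the host to the nginx front-end container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.10.10.2:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
    -j DNAT --to-destination 10.10.10.2:443
# Let the forwarded traffic through the FORWARD chain
iptables -A FORWARD -d 10.10.10.2 -p tcp -m multiport --dports 80,443 -j ACCEPT
```

Inside the front-end container, each site then gets an nginx server block whose proxy_pass points at the private IP of the container actually serving it.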


I liked irccat as an easy way to post events (task completion and errors) to an IRC channel. I was less crazy about the recommendation to use ant to execute it. When I moved it to an OpenVZ VPS that used venet, it stopped working reliably. I’ve actually had trouble with quite a few Java services on that sort of VPS. So, I wrote my own in Python, and it can be found on github and PyPI.

An added benefit is that much less memory is now used as well.

Postgres Upgrade

I just upgraded from PostgreSQL 8.3 to 9.2.  In doing so, I also went from a Sun optimized build (since I installed their supplied version before) to a self built version where I didn’t fiddle with optimization at all (the documentation suggests -O5, but I let it go with the default of -O3).  Despite the lazy build, it seems a lot faster in terms of the queries some of my python code executes.

Upgrading to Ubuntu 11.10

When I upgraded to 11.04, I stuck with the Ubuntu Classic desktop. With 11.10, that didn’t look feasible anymore, so I bit the bullet to try and learn to cope with Unity.

After the install, my first issues were:

  1. How to add a program to the Dash, specifically rxvt. I use the terminal a lot, and I want something light, fast, and that doesn’t mess with my hotkeys (GNOME Terminal uses the Alt key for itself).
  2. How to launch more than one rxvt with the Dash. Many programs have a way to launch new windows or documents, but rxvt doesn’t, and I’m sure I’ll come across others as well.
  3. How to switch between multiple windows open in Chrome or other software. Without the mouse, that is.
  4. How to make it faster. I’m using a 2.6GHz quad-core system (Opteron 2218s) with 4GB of RAM and an NVidia 9400GT, and while this isn’t the latest and greatest, it should not crawl without a darn good reason.

And the answers turned out to be:

  1. Make a .desktop file. In /usr/share/applications, create rxvt.desktop with these contents:

     [Desktop Entry]
     Type=Application
     Name=rxvt
     Comment=terminal emulator for the X window system
     Exec=rxvt -rv -sl 2000

     Then chown the file to root:root with permissions -rw-r--r--.

  2. Middle click on the icon in the Dash.
  3. Alt + ` (backtick). Just like a Mac. Obviously, I didn’t try to guess hard enough before resorting to Google.
  4. Switch to Unity 2D. Unity 2D is based on Qt and is meant for older machines with inadequate video cards. It is shocking to think that I have either, but it made a large difference. Besides, every time I’ve tried a compositing desktop before, I’ve had problems with other OpenGL programs, so I probably would have made the switch anyway.


At some point, my Ubuntu desktop started using NFSv4 to connect to my Solaris file server.
The visible symptom caused by this switch was all files showing up as owned by 4294967294:4294967294.

The fix turned out to be to edit the file /etc/default/nfs-common to change:

Also, change the Domain= line in /etc/idmapd.conf to match your network’s domain.

Then restart idmapd:

sudo restart idmapd
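For illustration, with example.com standing in for your network’s actual domain, the relevant part of /etc/idmapd.conf is just:

```
[General]
Domain = example.com
```

Both the client and the NFSv4 server must agree on this value, or the IDs won’t map and you get the nobody-style 4294967294 owners.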

Zones on a Single Server, revisited

TL;DR: nginx in a zone, replacing apache as the reverse proxy. Then, one web app per zone, PHP or Python.

Broadly, I am happy with Solaris Zones, used as I previously laid out in my first post on the topic. I’ve not made dramatic changes to the idea. However, I have been fine tuning things.

In the previous post I mentioned multiple web zones, but I didn’t really go into that in much depth. What I had been doing is one web zone for me (which also was in charge of proxies) and one web zone behind the proxy for other users. Recently, I’ve installed nginx in its own zone to use as the proxy and I’ve been splitting the applications on my one web-zone into one web zone per application.

While splitting applications apart, I’ve been re-evaluating how they are run. For instance, anything running in mod_python should be updated to run against a wsgi server, and perhaps Apache isn’t the right server for that. I wonder the same thing about the PHP programs, but I’m not as ready to touch them yet.

In the case of the feed reader, I moved from mod_php to CherryPy. This did involve a rewrite of the backend code, but that rewrite had been creeping along anyway. Since the web stuff was a rewrite, CherryPy was a fairly easy choice and a chance to learn more about it. What I learned is that I feel a bit limited by the default routing choices and I don’t like the logging. For applications already in Python, I want to go straight to wsgi with either my own routing or a third-party routing plugin, and to use Python logging directly. I haven’t moved the other services yet, but I’m working on choosing a server for that.

I’ve also started using virtualenv, a Python-specific virtual environment tool. Part of my reason for using a lot of zones is not just security, but also having a minimal install for each service (which helps keep track of what the dependencies are). For Python projects, virtualenv makes this dead simple. You create the virtualenv environment, activate it, then pip install your dependencies inside the environment, and they get installed there instead of in the system site-packages.
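The whole workflow is only a few commands (the path and package here are just examples):

```shell
# Create an isolated Python environment for one service
virtualenv /srv/feedreader/env
# Activate it in the current shell
. /srv/feedreader/env/bin/activate
# Installs now land in the env, not the system site-packages
pip install cherrypy
```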

With using virtualenv to contain the project’s dependencies, and using a higher port so that root isn’t required at all to start the service, I’ve been starting to wonder if zones might perhaps be overkill. What separation do I need between a non-root process and its environment? Without root, I’m not sure I need network separation, and also without root, do I need as much process separation?

I thought that perhaps sticking the virtualenv in a chroot might provide all the separation and security I need. However, when I looked into trying it, I found trouble: a chroot wreaks havoc on module loading, and I don’t see anything in virtualenv that would fix this. It occurs to me that if I put a new Python install (and pip) into each chroot, then I could use the chroot instead of virtualenv. This would take more disk space, but overall I suspect it would hog resources less than a full-blown zone does.

I have further thoughts on that, but I need to actually test it. I doubt I will go that way. This isn’t what I want to spend all my time doing. I just want to have useful tools and make this easy on myself so I can work on more interesting things.

Currently, I’m mostly focusing on the Python side of things, but I want to reconsider how PHP apps are served as well. I only run third-party PHP programs like WordPress and RoundCube.

Services to Disable on New Zones

Update: This list is being moved to a script on github. See the script here.

A new zone copies the base system in a lot of areas, including services. Thus, new zones can often be found running CDE. To save RAM and CPU power, those might as well be turned off. Use svcadm disable on the following services:

Also, wbem is the Solaris web management system. I have never used it, nor wanted to use it, and I can’t imagine it being very useful. It also starts on every single zone and appears to be a fairly heavy Java app. So that goes as well:
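For the record, that command is along these lines (the FMRI may differ between Solaris releases; check yours with svcs -a | grep wbem first):

```shell
svcadm disable svc:/application/management/wbem
```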

I have a feeling that I’m missing other services that I like to turn off, but that is why I’m starting a list here. Does anyone have additional suggestions?

Virtual Networking between Zones

When I first started using zones on Solaris, I ran into networking difficulty. I didn’t want private zone traffic polluting the network at the ISP, and I didn’t need any of the zones to be directly exposed (all traffic could either be proxied or go through ipnat). There was no nice solution for doing this, so I had to do something that involved turning on routing and doing weird things with the arp cache. I forget the specifics, and really, the only reason to remember them would be to do something similar on a different platform if forced to.

About 2 years ago, Solaris added the Crossbow network virtualization system to the feature set. This is very nice, and extremely simple. For a big machine, it would be possible to create virtual networks that only some zones can use and not others. For instance, a customer with 10 zones could have their zones talk among themselves but not to another customer’s zones. It also makes it possible to control the network profile of zones, for instance rate limiting them and applying quotas.

For a small installation like mine, it just makes it easy to do the right thing. In the root zone:

dladm create-etherstub etherstub0

Now, etherstub0 is your new private network. To attach devices to that network, do:

dladm create-vnic -l etherstub0 vnic0

I use vnic0 in the root zone, then configure vnic1-N for the other containers. If you do this before creating the zone, then in the zone config you just do this:

set ip-type=exclusive
add net
set physical=vnic6
end

For a zone that is already set up, you have to alter the config while it is stopped. But you will also need to plumb that zone’s vnic interface and create /etc/hostname.vnicN. Likewise, in the root zone, if you have a vnic into the virtual network there, you will also need to plumb and create the hostname file for that vnic.
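A sketch of that for a root-zone vnic (vnic0 and the address are examples from a private network; use your own):

```shell
# Plumb the interface and bring it up now
ifconfig vnic0 plumb
ifconfig vnic0 inet 192.168.100.1 netmask 255.255.255.0 up
# And make it come up at boot
echo "192.168.100.1 netmask 255.255.255.0 up" > /etc/hostname.vnic0
```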

For my server, I have a single IP address. So, what I do is have ipnat on the root zone forward ports to specific zones on the virtual network. If you have multiple IPs, you could use routing from the root zone to other zones. However, it would probably be more efficient to use a virtual private network for interzone traffic, and also give each zone a vnic attached directly to the main interface with a command like:

dladm create-vnic -l bge0 vnic22

I’m sure there are many other possibilities that I’m not aware of.