HTPC

Back in September, I posted about evaluating an Intel D510MO board for HTPC applications, and determined that it wasn’t good enough.

Now, I am trying again with an Atom 330 / ION motherboard. This is a slower CPU, but a faster Nvidia graphics chipset. Last September it still wouldn’t have done the job, but as of a few weeks ago it comes closer. The key change is Adobe’s Flash 10.2, which finally supports hardware-assisted video decoding.
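
If memory serves, hardware decoding in the Linux Flash 10.2 player also has to be switched on explicitly in Adobe’s config file. A sketch of the settings I believe are involved (verify the flag names against Adobe’s release notes):

# /etc/adobe/mms.cfg -- turn on VDPAU hardware decoding in Flash 10.2
EnableLinuxHWVideoDecode=1
OverrideGPUValidation=true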

With this new box, I first tried XBMC Live. My plan was to use the launcher feature to launch Hulu Desktop as well. After the install, things somewhat worked. It took some fiddling to make HDMI audio work, but it turned out that the answer was in the XBMC FAQ. However, it used a lot of CPU power all the time, and I could never get Hulu Desktop working acceptably.

Yesterday, I reloaded with Ubuntu 10.10, then installed the current Nvidia drivers, Hulu Desktop, and XBMC. Things are much closer to just working. It took me a little while to figure out how to enable HDMI audio, but I found the setting in Sound Preferences. On the Hardware tab, there is a drop-down labeled “Profile:”. From there, I just had to select “Digital Stereo (HDMI) Output”.
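
The same switch can also be flipped from a terminal with PulseAudio’s pacmd, which is handy over SSH. The card index and the exact profile name vary by machine, so treat this as a sketch:

# List cards to find the card index and available profile names.
pacmd list-cards
# Then switch the card to its HDMI profile, e.g.:
pacmd set-card-profile 0 output:hdmi-stereo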

Hulu Desktop is usable, but it still underperforms. I have to keep the video quality set to low to prevent dropped frames. When I flip through the menus, performance suffers and frames are dropped, but when I just leave it be, it behaves. That is with the display resolution set to 720p, though. When I had the display set to 1080p, it dropped frames even in low quality mode.

I think that Flash is still inefficient and that the box is simply running out of CPU. I would probably recommend that anyone trying to duplicate this use a good Core 2 Duo or better, along with 9400-or-better Nvidia graphics, to get more satisfactory results.

To control the system I am using a semi-generic Adesso USB remote. It uses IR to a receiver dongle. It acts like a keyboard and mouse, so no messing with LIRC is required. I am having trouble with Wake On USB: when I enable it, the computer won’t stay off. I have no way of knowing whether Linux, the computer, or the remote is to blame. With that caveat, I would recommend this remote. I think it cost $25. Sometimes you want a real keyboard though, so I am going to have to look into getting a small wireless keyboard to use with this system.

Things I still want to explore include setting XBMC to launch Hulu Desktop. Also, can I find another remote-friendly launcher program to have Ubuntu auto-run on startup (a sketch of the autostart mechanism is below)? I would like to try adding Boxee to the list. And I would like to find something better for image viewing, perhaps a separate application.
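
For the auto-run piece, GNOME picks up .desktop files from ~/.config/autostart, so something like the following should work. The Exec path is a guess; check where your Hulu Desktop package actually puts the binary:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/huludesktop.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Hulu Desktop
Exec=/usr/bin/huludesktop
EOF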

How I Set Up Ubuntu 10.10

I’ve found myself doing this 3 times in the last month. I thought I would write up what I do so that I don’t have to try to remember the next time I need to do this (which probably won’t be all that soon).

First, I do a basic install from the live CD.

Then I open Gnome-Terminal and do:
sudo apt-get install rxvt-unicode thunderbird git python-virtualenv build-essential emacs23

Then I install Chrome from the web.

That takes care of basic software installs. What remains is what I consider to be the essential customization.

First, move the Workspace Switcher and Window List to the top panel. Second, delete the bottom panel.

Add Gnome-Terminal, Thunderbird, and Chrome to the panel (select them in the Applications menu, right-click, and select “Add this launcher to panel”). Use the preferences for gnome-terminal (right-click on the terminal icon on the panel) to change the launch command to rxvt -rv -sl 2000. Remove Firefox from the panel.

Finally, use the Keyboard system preference (Layouts tab, Options button, Ctrl Key position submenu) to make CapsLock a second control key. Then use the Windows system preference to select windows when the mouse moves over them.
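
Both of those settings can also be made from a shell, which would make this whole setup scriptable. A sketch for GNOME 2 / Metacity (double-check the key names on your system):

# Make CapsLock an additional Control key.
setxkbmap -option ctrl:nocaps
# Focus follows mouse in Metacity.
gconftool-2 --type string --set /apps/metacity/general/focus_mode sloppy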

πp-v2, the new Plan 9 file protocol

The symbol is Pi. I point that out since it doesn’t render very nicely in my web browser. The URL for the paper is:
http://proness.kix.in/misc/%CF%80p-v2.pdf
Yes, they stuck the symbol into the file name as well. Plan 9 people seem to like to do such things, just because they invented UTF-8 and were the first to use Unicode heavily.

πp intends to replace NFS, SMB/CIFS, HTTP, FTP, and 9p2000/Styx (the former Plan 9 file protocol), as well as more obscure options like Coda. To that end, it tries to combine into one new protocol all of the features of those different choices that the authors deemed still useful: ideas like extended metadata, non-file files (meaning the target on the other end is a file representing a printer, raw disk, or some other sort of device), offline operation, caching, and tolerance for proxies (possibly also caching) in the middle.

So, I think there are some features and decisions to like here. Here are a few.

File versioning can serve two purposes: it can be used to preserve history, and it can be used for cache invalidation. πp intends to support both. Versioning for history is potentially the handiest feature, but it will only work if it is matched with an appropriate on-disk file system that supports versioning. ZFS, one file system that can support versioning (by snapshots, though it doesn’t keep every revision automatically), doesn’t need the file sharing protocol’s help to support looking at the versions from remote systems. On ZFS, the snapshot history can be accessed in a directory’s .zfs sub-directory.
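
To illustrate that last point, a snapshot taken on the server is immediately browsable under .zfs, including from NFS clients. A minimal sketch, with made-up dataset and snapshot names:

zfs snapshot tank/home@before-cleanup
ls /tank/home/.zfs/snapshot/before-cleanup/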

Without a file system that supports versioning, πp’s versioning feature can still work for cache invalidation by setting an extended attribute on file systems that have some mechanism for doing so (xattrs on some, resource forks on others). Personally, I think that supporting caching is mostly only of use when πp is used as a replacement for HTTP, which is discussed further below.

Pipelining would help resolve a major NFS complaint of mine, but it also seems like a large pit of danger. Out of concern about this feature, I’d be happy to consider using this protocol, but only for applications that don’t depend on heavy synchronous behavior, until it is better proven.

Extended support for special files is great for remotely sharing devices. This was always one of the nice features of Plan 9 (at least so it sounded, since I never got around to using Plan 9).

OTOH, here are some “features” that I’d need some convincing on.

To replace HTTP, the authors get rid of the plain text parts. In web development, having those parts be human readable is tremendously useful, and I doubt they waste much time in any given session. They also make web connections stateful under this protocol, while being stateless has always been one of HTTP’s strong points (albeit one with some pain). Also, extended file attributes seem like a poor alternative to HTTP request and response headers.

Supporting lossy file transmission is rather a head scratcher. Their idea is that it could make for efficient streaming video transport, but I’m still not convinced. Does the client or the server decide that the response is to be lossy? It looks like this is a client decision (the client chooses by opening the πp connection over TCP versus UDP), but how is the client to know in advance which type of file this is? Is it supposed to query first? That seems ludicrous.

My conclusion, though, is that if you drop the usage ideas I don’t like, I don’t see any changes being required in the protocol itself. This idea is obviously in the early stages, but it looks like it could be worth exploring when there is more code to run.

On a side note, one thing that did interest me in the paper is that they wrote the initial implementation in Go, and then translated it to C. I didn’t know that Go existed on Plan 9, and in looking into it, it seems that it still doesn’t quite: the runtime has yet to be ported. Oh well, one can hope for the future.

It also made me wonder how many Plan 9 from User Space users still make use of 9p.

Oops

All the pictures in old entries are missing, and it seems that it is because a pictures folder was removed. Now I’m looking for a backup. And this time perhaps I should let WordPress keep track of the images instead of maintaining my own image folder.

And now it should be fixed, so please let me know if you still see problems.

Using rxvt with [Open]Solaris.

After years of hard coding TERM to xterm or vt100 in .bashrc to get around rxvt incompatibility, I finally found this:

sudo cp /usr/gnu/lib/terminfo/r/rxvt /usr/share/lib/terminfo/r/rxvt

I suppose I really should use pfexec instead of sudo, but mastering that is a job for another month.

OpenSolaris on the Intel D510MO Atom

I wanted to try OpenSolaris on the new Atom board, on a separate disk, before the machine got settled into normal use (normal use covered in this post).

First impressions: the LiveCD booted and worked correctly, including graphics in VESA mode and networking. The install was simple and painless, and afterwards the system came right up, with graphics in VESA mode and networking working on the on-board Realtek chip.

Bonnie++ on the 3-4 year old local disk said that for block operations it would be able to do about 50 MB/s on reads and 35 MB/s on writes.

My main area of testing was to create a new file system and share it via NFS to an Ubuntu 9.10 workstation. Initial results were 42 MB/s reads and 3-8 MB/s writes (measured by timing copies). Not so good.
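
For reference, creating and sharing the file system is only a couple of commands on OpenSolaris. A sketch with made-up pool and host names:

# On the OpenSolaris box:
zfs create tank/media
zfs set sharenfs=rw tank/media
# On the Ubuntu workstation:
sudo mount -t nfs atom:/tank/media /mnt/media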

I tried tuning wsize and rsize. It turns out that those are more or less set to something reasonable already. I tried noatime. It seems that most of the stuff that turns up for “NFS tuning” in Google doesn’t do much for reasonably modern systems on a generic network. It may be worth revisiting for people trying to squeeze out a bit more performance, but I want an order of magnitude more.
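
For completeness, this is the sort of thing those guides suggest, expressed as client-side mount options (typical example values, not a recommendation):

sudo mount -t nfs -o rsize=32768,wsize=32768,noatime atom:/tank/media /mnt/media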

I disabled the ZIL (obviously I wouldn’t do that for production, but I figured it was fair to do it now and assume that a sensible flash drive would give reasonably similar performance with the ZIL on in the future) and tried again, and things got better. Then I tried running bonnie++ remotely over NFS, and OpenSolaris lost its networking. No amount of ifconfig up/down or un-plumbing and re-plumbing the interface would bring it back, so I resorted to rebooting the system.
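
For the record, on builds of that era disabling the ZIL meant a kernel tunable and a reboot; newer builds grew a per-dataset sync property, so check what your build supports:

# /etc/system (requires a reboot to take effect)
set zfs:zil_disable = 1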

At that point I did some research. It looks like many people have problems with the rge driver. I found the alternative gani driver, but I also saw many people try that and then end up adding a separate network interface anyway, so I didn’t bother with the gani driver.

I didn’t think I would easily be able to add a good Ethernet card, since most cheap PCI ones seem to be Realtek, most good Intel or Broadcom GigE cards seem to be 64-bit, and I didn’t think such a card would fit. Still, I grabbed my unused Broadcom PCI-X card, and found that they left enough room on either side of the PCI slot to fit a 64-bit card. Nice.

With the Broadcom card, the box delivered NFS writes of 32 MB/s and reads of 45 MB/s. I feel that this is reasonable evidence that the SuperMicro D510 server board will do nicely as a ZFS storage server. That SuperMicro board comes with dual Intel GigE ports, not Realtek, and it also offers 6x SATA ports.

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hp-xw            8G 45167  68 44605  11 14707   7 32389  57 32344   7  65.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   111   0  3489  10   108   0   111   0  3999   9   110   0

For low-demand home applications (or small business ones, I suppose) I think it would be interesting to try the Intel D510MO board with mirrored SATA disk drives, a good Intel or Broadcom GigE NIC, and a pro-quality USB flash drive such as this: http://www.logicsupply.com/products/af4gssgh
While USB isn’t all that fast, the linked drive claims to do writes of 25 MB/s; even if the ZFS server is limited to 25 MB/s, that probably isn’t too bad for storing videos and photos for a lot of home users. What would be really exciting would be if someone would make a MiniPCIe flash or SATA card (or both). A person can dream, I suppose.

Zones on a single server

A few months ago my co-located Solaris server was hit by a rootkit that set up an IRC bot. It appeared to have gotten in by attacking one of the off-the-shelf web apps I use.

To avoid having to do a complete rebuild if this happens again, I decided to put each major externally visible service in a Solaris Container (also known as a zone). So I have a mail zone and a web zone, and then several more web zones proxied behind the first web zone. The global zone uses ipnat to port-forward to the mail zone and web zone.
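
The forwarding amounts to a couple of rdr rules in ipnat.conf. A sketch with made-up interface and zone addresses:

# /etc/ipf/ipnat.conf
rdr bge0 0.0.0.0/0 port 25 -> 192.168.0.5 port 25 tcp
rdr bge0 0.0.0.0/0 port 80 -> 192.168.0.6 port 80 tcp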

Then, when it turned out that the server was losing a hard drive and I bought a new server, I was able to copy the zones to the new machine without having to re-install everything.
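
The move itself follows the standard detach/attach dance, roughly like this (zone name and paths made up):

# On the old host:
zoneadm -z web halt
zoneadm -z web detach
# Copy the zonepath over (tar, zfs send/recv, ...), then on the new host:
zonecfg -z web 'create -a /zones/web'
zoneadm -z web attach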

If I ever move away from Solaris/SPARC, I would probably do a similar setup with VirtualBox or VMware, but Solaris is particularly nice in that patch management is unified across zones, and I believe the copy-on-write nature of ZFS makes for more efficient disk utilization. On the other hand, SATA drives in a modern PC mean that you probably don’t care about those features as much as you do when using a 73 GB SCSI disk.

Setting up a jumpstart server for Solaris Express.

I guess this post will have a somewhat limited life span, since Solaris Express is being retired in favor of OpenSolaris. However, some of the pages I always referred to every time I needed to do this have disappeared, so I’m writing it up again anyway for future reference. Maybe I’ll update it again when I finally try out OpenSolaris.

This will require a Solaris 10, Solaris Express, or OpenSolaris system to be the jumpstart server, and then of course a client that you want to install Solaris Express on.

Step 1, download the Solaris Express DVD image. Currently, a link to the image can be found here: http://hub.opensolaris.org/bin/view/Main/downloads

Step 2, loop back mount that image somewhere on the jumpstart server.

[jdboyd@u80 ~]$ sudo lofiadm -a sol-nv-b127-sparc-dvd.iso /dev/lofi/1
Password:
[jdboyd@u80 ~]$ sudo mount -F hsfs -o ro /dev/lofi/1 /mnt
[jdboyd@u80 ~]$

Step 3, run the install server script.

[jdboyd@u80 ~]$ cd /mnt/Solaris_11/Tools/
[jdboyd@u80 Tools]$ ./setup_install_server /path/to/where/you_want_it

For /path/to/where/you_want_it, I use /export/jumpstart/Solaris_11.  At this point, be prepared to wait a while.  It doesn’t ask any questions while it works, so perhaps you can head on to the next step while waiting.  When this completes, the install server is installed, so:

[jdboyd@u80 Tools]$ sudo umount /mnt
[jdboyd@u80 Tools]$ sudo lofiadm -d /dev/lofi/1

Step 4, gather the information you need about the machine to be installed.  You will need the MAC address, the IP address to use, the hostname to use, and the hardware type, which will probably be sun4u or sun4v.  The IP and hostname will already need to be in DNS.

Step 5, add the client to the install server.  This will use the information from step 4.

[jdboyd@u80 ~]$ cd /export/jumpstart/Solaris_11/Solaris_11/Tools/
[jdboyd@u80 Tools]$ sudo ./add_install_client -i $IP -e $MAC $HOSTNAME sun4u

Obviously, you need to substitute the $ items in the above command with the proper values.
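
For example, with entirely made-up values:

[jdboyd@u80 Tools]$ sudo ./add_install_client -i 192.168.1.60 -e 0:14:4f:12:34:56 sparky sun4u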

Step 6, finally, you are ready to install on the client.  So, on the client, get to the open boot prompt, and do this:

ok boot net - install

At that point, your install proceeds normally. If you get a small stack of “Timeout waiting for ARP/RARP packet” messages right at the beginning, don’t worry. If it goes on seemingly forever (say 15+ minutes), then maybe you do need to worry.

Some of this was taken from http://www.sunmanagers.org/pipermail/summaries/2005-March/006223.html

PostgreSQL connection pooling for mod_php

In a quest for better performance with Postgres, I’ve been looking for connection pooling tools. There are a few requirements I tend to have. First, it must run on Solaris. This isn’t so much a quirk, since the server runs Solaris on SPARC hardware, and I’m not going to install a second server in colo just to accommodate software that doesn’t work on Solaris/SPARC. Additionally, I refuse to install GCC, so it must build with Sun Studio, which is much more GCC-compatible than it used to be, but still isn’t GCC. Also, I want it to be reasonably simple to install and set up. I am willing to consider prebuilt packages from sunfreeware, and if I get desperate enough, maybe even Blastwave. Unfortunately, none of the top choices appear to be on sunfreeware.

The top choices appear to be:

  • pgpool: the classic choice; building and installing it is easy, but setup is very arcane.

  • pgbouncer: looks like it should be simple to install and set up, but the configure script refuses to find my libevent install.

  • SQLRelay: unlike the others, works with many databases, including SQLite. However, it requires the rudiments library from the same author, and that library won’t build because the autoconf stuff doesn’t understand anything but GCC.

So, I haven’t broken down and checked out Blastwave yet, but so far none of the normal choices are working out for PostgreSQL connection pooling.

Then I made a small breakthrough when I found that PHP has pg_pconnect. pg_pconnect does some background bookkeeping to keep connections open after you call pg_close, and returns the same connection if the arguments are the same. Practically, this means that if you use a PHP setup that keeps persistent PHP interpreters around (say, mod_php in Apache, which is what I use), then you have effectively gotten connection pooling, for PHP only.

This is a big help already, but I still need a solution that helps out with Python.

Yes, I am working on a little web development on vacation.