A set of pictures of David and Izzy together on the love seat.
I wanted to try OpenSolaris on the new Atom board on a separate disk before the machine got settled into normal use (normal use covered in this post).
First impressions: the LiveCD booted and worked correctly, including graphics in VESA mode and networking. The install was simple and painless, and afterwards the system came right up with graphics in VESA mode and networking working on the onboard Realtek chip.
Bonnie++ on the 3-4 year old local disk reported that for block operations it could do about 50 MB/s on reads and 35 MB/s on writes.
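For anyone wanting to reproduce this, a bonnie++ run along these lines produces that sort of report (the target directory, data set size, and user below are assumptions to adjust for your machine, not what I necessarily used):

```shell
# Sketch of a bonnie++ run: -d is the directory to test in, -s the data
# set size (use at least 2x RAM so the page cache doesn't skew results),
# and -u the user to run as when started from root.
bonnie++ -d /tank/bench -s 8G -u nobody
```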
My main area of testing was to create a new file system and share it via NFS to an Ubuntu 9.10 workstation. Initial results were 42 MB/s reads and 3-8 MB/s writes (measured by timing copies). Not so good.
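Creating and sharing the file system is only a couple of commands; the pool and dataset names here are made up for illustration:

```shell
# 'tank/share' is a hypothetical dataset on an existing pool.
zfs create tank/share
zfs set sharenfs=on tank/share

# On the Ubuntu 9.10 client:
sudo mount -t nfs server:/tank/share /mnt/share
```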
I tried tuning wsize and rsize. It turns out that those are more or less set to something reasonable by default. I tried noatime. It seems that the stuff that turns up for “NFS tuning” in Google doesn’t do much for reasonably modern systems on a generic network. It may be worth revisiting for people trying to squeeze out a bit more performance, but I want an order of magnitude more.
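For reference, those client-side knobs look like this in /etc/fstab on the Ubuntu box (the server name, path, and byte sizes are illustrative, not recommendations):

```shell
# /etc/fstab entry on the NFS client; rsize/wsize are in bytes.
server:/tank/share  /mnt/share  nfs  rw,noatime,rsize=32768,wsize=32768  0  0
```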
I disabled the ZIL (obviously I wouldn’t do that in production, but I figured it was fair to do it now and assume that a sensible flash drive would give reasonably similar performance with the ZIL on in the future) and tried again, and things got better. I tried running bonnie++ remotely over NFS, and OpenSolaris lost its networking. No amount of ifconfig up/down or unplumbing and plumbing the interface would bring it back, so I resorted to rebooting the system.
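For the curious, disabling the ZIL on OpenSolaris builds of that era was a kernel tunable rather than a per-dataset property (again, not something to leave on in production):

```shell
# Persistent, via /etc/system (takes effect on reboot):
#   set zfs:zil_disable = 1
# Or live, via the kernel debugger:
echo "zil_disable/W0t1" | mdb -kw
```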
At that point I did some research. It looks like many people have problems with the rge driver. I found the alternative gani driver, but I also saw many people try that and then end up adding a separate network interface anyway, so I didn’t bother with the gani driver.
I didn’t think I would easily be able to add a good Ethernet card, since most cheap PCI ones seem to be Realtek, most good Intel or Broadcom GigE cards seem to be 64-bit, and I didn’t think that such a card would fit. Still, I grabbed my unused Broadcom PCI-X card and found that they left enough room on either side of the PCI slot to fit a 64-bit card. Nice.
With the Broadcom card, it delivered NFS writes of 32 MB/s and reads of 45 MB/s. I feel that this is reasonable evidence to suggest that the SuperMicro D510 server board will do nicely as a ZFS storage server. That SuperMicro board comes with dual Intel GigE ports, not Realtek. And it also offers 6x SATA ports.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hp-xw            8G 45167  68 44605  11 14707   7 32389  57 32344   7  65.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   111   0  3489  10   108   0   111   0  3999   9   110   0
For low-demand home applications (or small business ones, I suppose) I think it would be interesting to try the Intel D510MO board with mirrored SATA disk drives, a good Intel or Broadcom GigE NIC, and a pro-quality USB drive, such as this: http://www.logicsupply.com/products/af4gssgh
While USB isn’t all that fast, the linked drive claims writes of 25 MB/s. Even if that limits the ZFS server to 25 MB/s, that probably isn’t too bad for storing videos and photos for a lot of home users. What would be really exciting would be if someone would make a MiniPCIe flash or SATA card (or both). A person can dream, I suppose.
After my initial toStr C++ exploration, I recently found myself reading about Boost’s lexical_cast, which does something similar, albeit more general and more verbose. lexical_cast will not only convert nearly anything to a string, it will also do its best to convert anything to anything, via a string in the middle.
Upon finding that, I was considering rewriting toStr to use lexical_cast (the only reason to hang onto toStr at all would have been for brevity in my code), but then I somehow stumbled upon Herb Sutter’s article The String Formatters of Manor Farm, which talks about the performance of various int to string methods. As it is, int to string is what I use my toStr function for most of the time (followed by doubles probably), so this is the performance I’m most interested in.
From Herb’s article, I learned that lexical_cast was extremely slow. stringstream, which is what the current implementation of toStr uses, is also extremely slow (but not as bad as lexical_cast). On the other hand, snprintf is very fast. I verified this with some of my own tests on Solaris/Sun Studio and Linux/GCC. Rather than write up my own performance tests, allow me to refer you to this write-up, as well as back to Herb Sutter’s article that I reference above.
snprintf requires explicit format strings, but this isn’t an issue because I can specialize the template for certain types, like ints. And if I know the type, then I also know the maximum length of string that can be generated. For instance, a 32-bit int needs at most 12 characters (the ‘-‘ if negative, 10 digits for the number itself, and one for the trailing \0), while a 64-bit int needs at most 22. I could also figure out the maximum size for floats and doubles, but I haven’t yet done so.
So I will now modify toStr, keeping the existing generic version while adding a specialization that is about 17 times faster for ints.
#include <string>
#include <sstream>
#include <cstdio>

template <typename T>
static inline std::string toStr(T v)
{
    std::stringstream s;
    s << v;
    return s.str();
}

// Specialization for int, using snprintf instead of stringstream.
template <>
inline std::string toStr(int v)
{
    // max unsigned 64-bit value: 18446744073709551615 (20 digits)
    char tmp[22]; // 20 digits + 1 for a possible '-' + 1 for '\0'
    snprintf(tmp, sizeof(tmp), "%i", v);
    return std::string(tmp);
}
A few months ago my co-located Solaris server was hit by a rootkit that set up an IRC bot. It appeared that it got in by attacking one of the off-the-shelf web apps I use.
To avoid having to do a complete rebuild if this happens again, I decided to put each major externally visible service in a Solaris Container (also known as a zone). So I have a mail zone and a web zone, and then several more web zones proxied behind the first web zone. The global zone uses ipnat to port-forward to the mail zone and web zone.
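A rough sketch of that layout; the zone name, zonepath, NIC, and addresses below are all made up for illustration:

```shell
# Configure, install, and boot a hypothetical 'web' zone from the global zone.
zonecfg -z web "create; set zonepath=/zones/web; \
  add net; set physical=bge0; set address=192.168.0.10/24; end"
zoneadm -z web install
zoneadm -z web boot

# /etc/ipf/ipnat.conf in the global zone, forwarding port 80 to the zone:
#   rdr bge0 0.0.0.0/0 port 80 -> 192.168.0.10 port 80 tcp
```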
Then, when it turned out that the server was losing a hard drive and I bought a new server, I was able to copy the zones to the new machine without having to reinstall everything.
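The move itself followed the standard detach/attach flow, roughly like this (the zone name and paths are illustrative):

```shell
# On the old machine: halt and detach the zone.
zoneadm -z web halt
zoneadm -z web detach
# Copy the zonepath (e.g. /zones/web) to the new machine with
# zfs send/recv or tar, then on the new machine:
zoneadm -z web attach -u   # -u updates the zone's packages to match the host
zoneadm -z web boot
```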
If I ever move away from Solaris/SPARC, I would probably do a similar setup with VirtualBox or VMware, but Solaris is particularly nice in that patch management is unified across zones, and I believe the copy-on-write nature of ZFS makes for more efficient disk utilization. On the other hand, SATA drives in a modern PC mean that you probably don’t care about those features as much as you do when using a 73 GB SCSI disk.
This is an affordable BluRay player that also supports some streaming media. It works with Blockbuster Direct, Netflix, Youtube, and Pandora radio. As far as I can see, Netflix and YouTube are increasingly common features, but Pandora is unique to this device.
To cut to the chase, I rather like this device. I especially like the Pandora streaming feature. However, it seems like it could have easily been even better.
I am using this with a basic Vizio 1080p LCD panel connected via HDMI. The audio out on the LCD panel goes to an external stereo amplifier and external 2.1 speakers.
Installing the unit was trivial. Remove old upscaling DVD player, attach HDMI cable, fish the power out the back, and plug it in. For networking, I just plugged it in, and it asked me to do a software upgrade, which was painless. I’m happy with that.
Yep, it is Hi Def. It seems reasonably fast and responsive and easy to use.
I love the YouTube and Pandora features. Especially the Pandora (and I believe this is the only device that supports Pandora).
However, in the extra media players is where some rough edges start to show. The first complaint I have is the requirement that I only use this with no disc in the optical drive. This seems ridiculous to me. I watch a fair amount of episodic shows on DVD, so it isn’t uncommon for me to want to leave the same disc in the machine for a month at a time, and having to remove it to use Pandora or YouTube is irritating. What really rubs me the wrong way about this is that it seems like such a pointless restriction.
The next complaint, which is less significant, is entering text into Pandora. You have to use the arrow buttons to select letters, and there is no auto-suggestion system. This isn’t something you will need to do much though.
Then, entering text in YouTube is done by having multiple letters per numeric key, like texting on a cell phone (ABC on 2, etc.). Plus, the YouTube player provides suggestions for you to pick and edit. This is much superior to the Pandora text editor. However, that leads me to another complaint, which is that they aren’t the same. Why would they include two or more different text editors? Will I find a third style if I ever try to use Netflix?
I complain about the above things because it seems that they should have easily been able to double how nice the device is by fixing the text editors and required disc removal. They really are a bit of a stain on an otherwise great experience.
Beyond that, since this device can stream media over the network, and it can play many types of arbitrary files burned to a DVD, why in the world won’t this device stream music and video from my Mac or home server? Surely it has the power, and the extra program space would have been trivial.
Finally, I rather wish they would have included a web browser. I realize this is less trivial to do. OTOH, this box does run Linux, and WebKit (the heart of Chrome, Safari, and the web browser on numerous phones) is supposed to be lightweight. Considering the potential support headache, I can understand if this were a feature saved for a higher-end unit, but they don’t offer it on any of their models as far as I can see. This is still more wishful thinking than a legitimate criticism, though.
Despite the complaints, I really do love this device. I wouldn’t want to replace it with any other single machine. I may be convinced to replace it with a PS3 and a 2009 Mac Mini together, but I don’t anticipate getting those anytime soon.
I do hope that a future firmware update will unify the text editors and remove the requirement to remove the discs. Samsung, I hope you are reading this.
Sorry, no picture this time since I was taking it to a church function. I think that the seasoning could use some tweaking, maybe a bit less cayenne. Also, the cooking time for me should have been closer to the smaller number listed, rather than the larger number. The one time I did make it, I cooked it for 4 hours on high. Next time I’d prefer to try 6-7 hours on low.
DIRECTIONS
I guess this post will have a somewhat limited life span since Solaris Express is being retired in favor of OpenSolaris. However, some of the pages I always referred to every time I needed to do this have disappeared, so I’m writing it up again anyway for future reference. Maybe I’ll update it again when I finally try out OpenSolaris.
This will require a Solaris 10, Solaris Express, or OpenSolaris system to be the jumpstart server, and then of course a client that you want to install Solaris Express on.
Step 1, download the Solaris Express DVD image. Currently, a link to this image can be found here: http://hub.opensolaris.org/bin/view/Main/downloads
Step 2, loop back mount that image somewhere on the jumpstart server.
[jdboyd@u80 ~]$ sudo lofiadm -a sol-nv-b127-sparc-dvd.iso /dev/lofi/1
Password:
[jdboyd@u80 ~]$ sudo mount -F hsfs -o ro /dev/lofi/1 /mnt
[jdboyd@u80 ~]$
Step 3, run the install server script.
[jdboyd@u80 ~]$ cd /mnt/Solaris_11/Tools/
[jdboyd@u80 Tools]$ ./setup_install_server /path/to/where/you_want_it
For /path/to/where/you_want_it, I use /export/jumpstart/Solaris_11. At this point, be prepared to wait a while. It doesn’t ask any questions while it works, so perhaps you can head on to the next step while waiting. When this completes, the install server is installed, so:
[jdboyd@u80 Tools]$ sudo umount /mnt
[jdboyd@u80 Tools]$ sudo lofiadm -d /dev/lofi/1
Step 4, gather the information you need from the machine to install. You will need the MAC address, the IP address to use, the hostname to use, and the hardware type, which will probably be sun4u or sun4v. The IP and hostname will already need to be in DNS.
Step 5, add the client to the install server. This will use the information from step 4.
[jdboyd@u80 ~]$ cd /export/jumpstart/Solaris_11/Solaris_11/Tools/
[jdboyd@u80 Tools]$ sudo ./add_install_client -i $IP -e $MAC $HOSTNAME sun4u
Obviously, you need to substitute the $ items in the above command with the proper values.
Step 6, finally, you are ready to install on the client. So, on the client, get to the open boot prompt, and do this:
ok boot net - install
At this point, your install proceeds normally. If you get a small stack of “Timeout waiting for ARP/RARP packet” messages right at the beginning, don’t worry. If it keeps doing it seemingly forever (say 15+ minutes), then maybe you do need to worry.
Some of this was taken from http://www.sunmanagers.org/pipermail/summaries/2005-March/006223.html
In a quest for better performance with Postgres, I’ve been looking for connection pooling tools. There are a few quirks that I tend to require be met. First, it must run on Solaris. This isn’t so much a quirk, since the server runs Solaris on SPARC hardware, and I’m not going to install a second server in colo just to accommodate software that doesn’t work on Solaris/SPARC. Additionally, I refuse to install GCC, so it must build with Sun Studio, which is much more GCC compatible than it used to be, but still isn’t GCC. Also, I want it to be reasonably simple to install and set up. I am willing to consider prebuilt packages from sunfreeware. If I get desperate enough, maybe even blastwave. Unfortunately, none of the top choices appear to be on sunfreeware.
The top choices appear to be:
This is the classic choice; building and installing it is easy, but setup is very arcane.
This looks like it should be simple to install and setup, but the configure script refuses to find my libevent install.
Works for many databases, unlike the others, including sqlite. However, it requires the rudiments library from the same author, and this library won’t build because the autoconf stuff doesn’t understand anything but GCC.
So, I haven’t broken down to checking out blastwave yet, but so far none of the normal choices are working out for PostgreSQL connection pooling.
Then I made a small breakthrough when I found that PHP has pg_pconnect. pg_pconnect does some background bookkeeping to keep connections open after you call pg_close, and returns the same connection if the arguments are the same. Practically, this means that if you use a PHP setup that keeps persistent PHP interpreters (say, mod_php in Apache, which is what I use for PHP), then you have effectively gotten connection pooling, for PHP only.
This is a big help already, but I still need a solution that helps out with python.
Yes, I am working on a little web development on vacation.