Tag Archives: solaris

OpenSolaris on the Intel D510MO Atom

I wanted to try OpenSolaris on the new Atom board on a separate disk before the machine got settled into normal use (normal use covered in this post).

First impressions: the LiveCD booted and worked correctly, including graphics in VESA mode and networking. The install was simple and painless, and afterwards the system came right up with graphics in VESA mode and networking working on the onboard Realtek chip.

Bonnie++ on the 3-4 year old local disk said that for block operations it would be able to do about 50 MB/s on reads and 35 MB/s on writes.

My main area of testing was to create a new file system and share it via NFS to an Ubuntu 9.10 workstation. Initial results were 42 MB/s reads and 3-8 MB/s writes (measured by timing copies). Not so good.
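For reference, the server side of that test is just a couple of ZFS commands. A minimal sketch, assuming a pool and filesystem named tank and tank/media and a server called atom (placeholders, not the names I actually used):

# on the OpenSolaris box: create a filesystem and export it over NFS
zfs create tank/media
zfs set sharenfs=on tank/media

# on the Ubuntu 9.10 workstation: mount it
sudo mkdir -p /mnt/media
sudo mount -t nfs atom:/tank/media /mnt/media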

I tried tuning wsize and rsize. It turns out those are already set to something reasonable. I tried noatime. It seems that most of the advice that turns up for "NFS tuning" in Google doesn't do much for reasonably modern systems on a generic network. It may be worth revisiting for people trying to squeeze out a bit more performance, but I wanted an order of magnitude more.
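For completeness, the sort of thing I was fiddling with on the client looked roughly like this (the sizes are only illustrative, and atom:/tank/media is the placeholder export from above):

# explicit rsize/wsize and noatime on the Ubuntu side; modern clients already
# negotiate sensible sizes, so this made little difference
sudo mount -t nfs -o rsize=32768,wsize=32768,noatime atom:/tank/media /mnt/media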

I disabled the ZIL (obviously I wouldn't do that for production, but I figured it was fair to do it now and assume that a sensible flash drive would give reasonably similar performance with the ZIL on in the future) and tried again, and things got better. Then I tried running bonnie++ remotely over NFS, and OpenSolaris lost its networking. No amount of ifconfig up/down or unplumbing and plumbing the interface would bring it back, so I resorted to rebooting the system.
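As an aside, on OpenSolaris builds of that era disabling the ZIL was (as far as I recall) a kernel tunable rather than a per-filesystem property, something like this in /etc/system followed by a reboot:

* /etc/system: disable the ZIL globally (test setups only, never production)
set zfs:zil_disable = 1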

After the networking died I did some research. It looks like many people have problems with the rge driver that handles the onboard Realtek chip. I found the alternative gani driver, but I also saw many people try that and then end up adding a separate network interface anyway, so I didn't bother with the gani driver.

I didn't think I would easily be able to add a good Ethernet card, since most cheap PCI ones seem to be Realtek and most good Intel or Broadcom GigE cards seem to be 64-bit, and I didn't think such a card would fit. Still, I grabbed my unused Broadcom PCI-X card and found that they left enough room on either side of the PCI slot to fit a 64-bit card. Nice.

With the Broadcom card, it delivered NFS writes of 32 MB/s and reads of 45 MB/s. I feel that this is reasonable evidence to suggest that the SuperMicro D510 server board will do nicely as a ZFS storage server. That SuperMicro board comes with dual Intel GigE ports rather than Realtek, and it also offers 6x SATA ports.

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hp-xw            8G 45167  68 44605  11 14707   7 32389  57 32344   7  65.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   111   0  3489  10   108   0   111   0  3999   9   110   0

For low-demand home applications (or small business ones, I suppose) I think it would be interesting to try the Intel D510MO board with mirrored SATA disk drives, a good Intel or Broadcom GigE NIC, and a pro-quality USB drive such as this: http://www.logicsupply.com/products/af4gssgh
While USB isn't all that fast, the linked drive claims to do writes of 25 MB/s. Even if that means the ZFS server is limited to 25 MB/s, that probably isn't too bad for storing videos and photos for a lot of home users. What would be really exciting would be if someone made a Mini PCIe flash or SATA card (or both). A person can dream, I suppose.

Zones on a single server

A few months ago my co-located Solaris server was hit by a rootkit that set up an IRC bot. It appeared that it got in by attacking one of the off-the-shelf web apps I use.

To avoid having to do a complete rebuild if this happens again, I decided to put each major externally visible service in a Solaris Container (also known as a zone). So I have a mail zone and a web zone, and then several more web zones proxied behind the first web zone. The global zone uses ipnat to port forward to the mail zone and web zone.
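The forwarding rules themselves are ordinary IPFilter redirects in the global zone's ipnat.conf. A sketch with made-up values (hme0, 192.0.2.10, and the 10.0.0.x zone addresses are placeholders):

# /etc/ipf/ipnat.conf in the global zone
rdr hme0 192.0.2.10/32 port 25 -> 10.0.0.2 port 25 tcp
rdr hme0 192.0.2.10/32 port 80 -> 10.0.0.3 port 80 tcp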

Then, when the server turned out to be losing a hard drive and I bought a new server, I was able to copy the zones to the new machine without having to re-install everything.
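The move itself is mostly a detach/copy/attach cycle. Roughly, with a hypothetical zone named web living under /zones:

# on the old machine: detach the zone and copy its zonepath over
zoneadm -z web detach
cd /zones && tar cpf - web | ssh newhost "cd /zones && tar xpf -"

# on the new machine: recreate the configuration from the detached zone, then attach
zonecfg -z web create -a /zones/web
zoneadm -z web attach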

If I ever move away from Solaris/SPARC, I would probably do a similar setup with VirtualBox or VMware, but Solaris is particularly nice in that patch management is unified across zones, and I believe the copy-on-write nature of ZFS makes for more efficient disk utilization. On the other hand, SATA drives in a modern PC mean that you probably don't care about those features as much as you do when using a 73 GB SCSI disk.

Setting up a jumpstart server for Solaris Express

I guess this post will have a somewhat limited lifespan since Solaris Express is being retired in favor of OpenSolaris. However, some of the pages I always referred to every time I needed to do this have disappeared, so I'm writing it up again anyway for future reference. Maybe I'll update it again when I finally try out OpenSolaris.

This will require a Solaris 10, Solaris Express, or OpenSolaris system to be the jumpstart server, and then of course a client that you want to install Solaris Express on.

Step 1, download the Solaris Express DVD image. Currently, a link to this image can be found here: http://hub.opensolaris.org/bin/view/Main/downloads

Step 2, loop back mount that image somewhere on the jumpstart server.

[jdboyd@u80 ~]$ sudo lofiadm -a sol-nv-b127-sparc-dvd.iso /dev/lofi/1
Password:
[jdboyd@u80 ~]$ sudo mount -F hsfs -o ro /dev/lofi/1 /mnt
[jdboyd@u80 ~]$

Step 3, run the install server script.

[jdboyd@u80 ~]$ cd /mnt/Solaris_11/Tools/
[jdboyd@u80 Tools]$ ./setup_install_server /path/to/where/you_want_it

For /path/to/where/you_want_it, I use /export/jumpstart/Solaris_11. At this point, be prepared to wait a while. It doesn't ask any questions while it works, so perhaps you can head on to the next step while waiting. When this completes, the install server is installed, so:

[jdboyd@u80 Tools]$ sudo umount /mnt
[jdboyd@u80 Tools]$ sudo lofiadm -d /dev/lofi/1

Step 4, gather the information you need from the machine to install.  You will need the MAC address, the IP address to use, the hostname to use, and the hardware type, which will probably be sun4u or sun4v.  The IP and hostname will already need to be in DNS.
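If the client is already sitting at the ok prompt, the MAC address is easy to read right there; banner prints the model, memory, and Ethernet address, and .enet-addr prints just the Ethernet address (output varies by machine, so treat this as a sketch):

ok banner
ok .enet-addr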

Step 5, add the client to the install server.  This will use the information from step 4.

[jdboyd@u80 ~]$ cd /export/jumpstart/Solaris_11/Solaris_11/Tools/
[jdboyd@u80 Tools]$ sudo ./add_install_client -i $IP -e $MAC $HOSTNAME sun4u

Obviously, you need to substitute the $ items in the above command with the proper values.

Step 6, finally, you are ready to install on the client.  So, on the client, get to the open boot prompt, and do this:

ok boot net - install

At that point, your install proceeds normally. If you get a small stack of "Timeout waiting for ARP/RARP packet" messages right at the beginning, don't worry. If it keeps doing that seemingly forever (say 15+ minutes), then maybe you do need to worry.

Some of this was taken from http://www.sunmanagers.org/pipermail/summaries/2005-March/006223.html

A few Solaris 10 notes

Actually, these are primarily Solaris 11 notes, but they will probably all apply to Solaris 10 when the next release comes out, which I understand to be scheduled for sometime later this month.

First, recently a lot of the SCSI hard drives I've gotten have been a little mysterious about being usable by the Solaris installer and have looked a little odd in format. It turns out they've been EFI-labeled drives. Since Solaris understands EFI labeling, it doesn't just suggest you relabel the drive and be done with it. However, despite understanding EFI, Solaris refuses to boot or install from an EFI label on SPARC hardware. The trick has been to get a shell prompt, then use "format -e". When you choose the label command, it will ask you whether you want an SMI or an EFI label. Choose the SMI option. If you are going to do a ZFS root, then the partitioning doesn't matter.
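The interaction looks roughly like this once you have selected the disk (the menu text may differ a little between releases):

# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y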

After fixing the disk, you are ready to install. The ZFS boot option is only offered on very new copies of Solaris (2008/05 maybe; Solaris Express build 98, or maybe slightly older, definitely). However, you only get the choice from the text installer. If you are installing over the serial console, then no problem, you get this by default. From a graphical console, however, you will need to use a boot parameter, so your boot command will look something like "boot cdrom - text" or "boot net - text". Using "- nowin" instead may be faster.

When you get to the ZFS option, just choose it and away you go. You can choose to name the pool something other than rpool, but there is no need to.

If you want a mirrored root, it is easy to add the second disk later. First, note that when you install to a ZFS root, the installer repartitions the root drive and uses a slice (partition) instead of the whole disk (even though the slice fills the entire disk). You will need to partition the second disk identically: just look at the partition map of the first disk in format, then copy it over to the second disk. Then, from a root prompt, type something like "zpool attach rpool c0t0d0s0 c0t1d0s0", assuming that c0t0d0 and c0t1d0 are the two disks in question (which is a good guess on a lot of two-disk Sun systems). The mirror is now made, but it may take a while to sync up in the background, and the machine may run slowly until it is done. Check the progress with "zpool status".
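If you would rather not retype the slices by hand, one common shortcut (assuming the same c0t0d0/c0t1d0 names as above) is to copy the label with prtvtoc and fmthard before attaching:

# copy the SMI partition table from the first disk to the second
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

# attach the second disk's slice 0 to the root pool and watch the resilver
zpool attach rpool c0t0d0s0 c0t1d0s0
zpool status rpool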

To be able to do a fallback boot from the second disk, you will need to reboot and go back out to the OpenBoot ok prompt. But before that, you will need to make the second disk bootable with this command: "installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0"
Finally, before you head to the ok prompt, you will want to find the OpenBoot device paths for each disk. Do "ls -l /dev/dsk/c0t0d0s0 /dev/dsk/c0t1d0s0". This will show you something like:

lrwxrwxrwx 1 root root 41 Oct 1 21:02 /dev/dsk/c0t0d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@0,0:a
lrwxrwxrwx 1 root root 41 Oct 1 22:57 /dev/dsk/c0t1d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:a

Write down the targets of the symlinks (the part after ../../devices), changing the "sd"s to "disk"s and getting rid of the ":a"s.

Now reboot and Stop-A to an ok prompt. If your second disk isn't where the second disk normally lives, you will need to create a devalias for it. Assuming that you used c0t0d0 and c0t1d0, you can just do this:
setenv boot-device disk disk2

If you need to change the disk and disk2 aliases (or want to create new names), use the nvalias command from the ok prompt; see its man page for more detailed operation.
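Putting it together with the example device path from the ls output above (your pci@... path will almost certainly differ), the whole thing at the ok prompt looks something like:

ok nvalias disk2 /pci@1f,4000/scsi@3/disk@1,0
ok setenv boot-device disk disk2
ok boot disk2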