Categories
Emacs

Emacs Mode for Protobuf editing

I like using Google’s Protocol Buffers (aka protobuf). It is faster and more bandwidth/disk efficient than JSON, but perhaps not quite as simple or flexible.

In protobuf you have messages. Within a message, everything is a key/value pair: each key carries a wire type describing the value (int, float, string, sub-message) and is followed by the value itself. Keys can be used repeatedly to create something like arrays. The big difference between a protobuf message and a JSON associative array is that JSON keys are strings, while protobuf keys are numbers. Protobuf uses .proto files to describe the messages: primarily to map a text name for each key to its number, but also to declare the field's type, specify a default value, and flag it as required/optional/repeated. The messages in those files look something like this:


message Person {
        required int32 id = 1;
        required string name = 2;
        optional string email = 3;
}
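To make the numeric-key point concrete, here is a toy sketch in Python of how a field like id = 1 above goes onto the wire. This is hand-rolled for illustration only (the real protobuf library does all of this for you): the key is encoded as a base-128 varint of (field_number << 3) | wire_type, followed by the value.

```python
def encode_varint(n):
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_int_field(field_number, value):
    """Encode an integer field: key = (field_number << 3) | wire type 0."""
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# Person.id (field 1) set to 150 encodes as the three bytes 08 96 01.
print(encode_int_field(1, 150).hex())  # prints "089601"
```

Note how the field name "id" never appears in the output; only the number 1 does, which is why the .proto file is needed to interpret the bytes.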

I also like Emacs a lot, so obviously I will use Emacs to edit these .proto files. Here is the start of a major mode for editing proto files. Just paste it into a file named protobuf.el, then (require 'protobuf) in your .emacs file.


(define-derived-mode protobuf-mode c-mode "Protocol Buffer"
  "Major mode for editing Google Protocol Buffer files."
  (setq fill-column 80
        tab-width 4))

(add-to-list 'auto-mode-alist '("\\.proto$" . protobuf-mode))
(provide 'protobuf)

This is my first major mode. It is derived from C mode, and I still haven’t figured out how to add syntax rules for the =1 type stuff required by every line. I hope to get back and flesh this out further eventually.

Categories
Programming

RESTful message queuing in Python

Alright. Pass 1 is done. Here is a link to it. The server is in Python, and clients in PHP and Python are provided. It follows this design document. On a quad Opteron, it handles about 600 short messages a second. It isn't threaded. The next step on this project is to redo it in Erlang, and then maybe C++ for the heck of it.

This uses WSGI, specifically the wsgiref reference server, so in theory it shouldn't be hard to adapt to other WSGI servers, like mod_wsgi. However, beware of thread-safety problems. Also beware that WSGI servers using multiple processes will require some sort of external data store instead of process-local memory.
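As a rough illustration of the shape such a service takes (this is a minimal sketch, not the actual server linked above): the whole thing is a single WSGI callable, which is what makes it portable across wsgiref, mod_wsgi, and friends. Only queue creation is sketched here, and the module-level dict is exactly the process-local state that breaks under multi-process servers.

```python
# Process-local store: fine for wsgiref, broken under multi-process WSGI.
queues = {}

def app(environ, start_response):
    """Minimal WSGI callable handling only PUT /queueName (queue creation)."""
    method = environ["REQUEST_METHOD"]
    name = environ.get("PATH_INFO", "/").strip("/")
    if method == "PUT" and name:
        if name in queues:
            start_response("409 Conflict", [("Content-Type", "text/plain")])
            return [b"already exists\n"]
        queues[name] = []
        start_response("201 Created", [("Content-Type", "text/plain")])
        return [b"created\n"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found\n"]
```

To serve it with the reference server: from wsgiref.simple_server import make_server; make_server("", 8000, app).serve_forever().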

(edit: Download link was moved to github, design doc link was also changed)

Categories
Programming

A new message queuing system

First, why a new one?  Because I haven’t found any that do what I need and look simple and well supported.  Besides, it seems like a reasonable learning experience.

The initial summary of what I need: a lightweight way for PHP (in the form of scripts running in mod_php) to send messages to a back-end program written in Python. However, other languages may be used in the future, so general cross-language compatibility is important. That means either bindings must exist for every conceivable language, or bindings must be trivial to write. Also, the solution must run on Solaris.

Comparison:

At work I use Apache QPID, which is an AMQP implementation. I can't find any PHP client for AMQP, though. I was able to find discussion suggesting that the AMQP protocol is too heavyweight for PHP running under mod_php.

Looking at other solutions, I don't want to run anything that requires Java for the server. That rules out Apache ActiveMQ and other JMS systems. I also believe that XMPP is too heavyweight to parse. I found some systems written in Perl, Ruby, and PHP, but they looked rather slapdash, and I don't particularly want to use those languages. The initial requirement for supporting PHP is only because I'm working on a PHP web app that I forked; I do not want to add any new PHP code bases that I need to maintain. Besides, once I start looking at fringe choices, it gets a lot easier to justify writing my own, particularly if I am going to use it as a learning project to get more familiar with, say, Erlang.

Summary:

It is to be a RESTful design. I will be using JSON for the payloads, but I haven't decided yet whether it makes sense to force this or to allow arbitrary ASCII data. Queues will be single-reader, meaning that if multiple endpoints need to get the same message, there will need to be a separate queue for each endpoint. Initially there will be no security model or persistence. Commands will be standard HTTP verbs. Where possible, I will try to make response codes valid HTTP response codes.

Goals:

Run on Solaris

Support Python, PHP, and Javascript as clients.

Limitations:

Initially this will not support persistence.

Also, this will not support any security.

Commands:

PUT queueName

Create a new queue. What the responses will be still needs to be decided.

Responses:

201 created, entity required, probably just a confirmation message

409 already existed.

POST queueName

The body of the post will be the contents of the message.

403 queueName wasn’t found.

201 created, and perhaps the entity will be an id for the message.

GET

Gets that do not match the following patterns will be answered with a 404.

GET msg/queueName

Get the next message from the queue queueName.  Here I need some way to return a message ID in addition to the message body.  It may make sense for the response to be JSON: {"id": integer, "content": <valid JSON here>}

If the response is as proposed, then the contents of the POST must be valid JSON as well.

403 queueName wasn’t found.

200, the message

GET queues

Get a list of the created queues.

200

DELETE queueName/integer

Delete a message identified with integer from queue queueName.

204 deleted, no entity required

403, queueName or integer not found.

DELETE queueName

Delete a queue and all the messages in it.

204 Deleted, no entity in response

403 queueName wasn’t found
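To sanity-check the semantics above, here is a sketch of the server-side state as a plain Python class, with the status codes from the command list but no HTTP layer. The names are mine for illustration, not from the actual implementation, and empty-queue handling is my own guess since the list above only covers a missing queueName.

```python
class QueueStore:
    """In-memory model of the queue semantics in the command list above."""

    def __init__(self):
        self.queues = {}   # queue name -> {message id: content}
        self.next_id = 0   # message ids are increasing integers

    def put_queue(self, name):               # PUT queueName
        if name in self.queues:
            return 409
        self.queues[name] = {}
        return 201

    def post_message(self, name, body):      # POST queueName
        if name not in self.queues:
            return 403, None
        self.next_id += 1
        self.queues[name][self.next_id] = body
        return 201, self.next_id             # entity: id for the message

    def get_message(self, name):             # GET msg/queueName
        # An empty queue is treated like a missing one here; that detail
        # is an assumption, not part of the design above.
        if name not in self.queues or not self.queues[name]:
            return 403, None
        msg_id = min(self.queues[name])      # oldest id first
        return 200, {"id": msg_id, "content": self.queues[name][msg_id]}

    def get_queues(self):                    # GET queues
        return 200, sorted(self.queues)

    def delete_message(self, name, msg_id):  # DELETE queueName/integer
        if name not in self.queues or msg_id not in self.queues[name]:
            return 403
        del self.queues[name][msg_id]
        return 204

    def delete_queue(self, name):            # DELETE queueName
        if name not in self.queues:
            return 403
        del self.queues[name]
        return 204
```

Note that GET does not remove the message; the reader is expected to DELETE it by id once processed, which is what the separate DELETE queueName/integer command is for.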

Categories
Programming

Thread Worker Pooling in Python

The worker pool pattern is a fairly common tool for writing multi-threaded programs.  You divide your work up into chunks of some size and submit them to a work queue.  A pool of threads watches that queue for tasks to execute and, when a task completes, adds the result to a finished queue.

Here is the file.

Thanks to the Global Interpreter Lock, threads are of somewhat limited usefulness in Python.  I foresee myself mostly using this for network-limited tasks, like downloading a large number of RSS feeds.  My idea is that tasks put into the system shouldn't modify global state, so if I actually needed this for computational tasks, it may be feasible to build it on forks instead, or perhaps the 2.6 multiprocessing system.  However, I still use a lot of systems with only Python 2.3 installed, so I'm not likely to want to write 2.6-specific code anytime soon.

Many of the thread pool systems I've seen have you specify a single function for the pool; then you just enqueue the inputs.  Mine is different in that each item in the queue can be a different function.  I haven't actually used it this way though, so it is possible that the extra flexibility is generally wasted.

Python's lambda seems rather limited: it allows only a single expression.  I suppose that this is what Lisp and Scheme do as well, but their expressions offer things like progn.  My first idea was that the task to execute would be a function with no arguments; I was picturing using a lambda to wrap up whatever I wanted to do.

Now, I still offer that via addTask (and assume it internally), but I also offer addTaskArgs, which takes a function reference and either positional arguments (as a list) or named arguments (as a dict), and wraps them in a lambda to enqueue.
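Here is a minimal sketch of that interface in modern Python (the method names mirror the ones above, but the file linked earlier is the real version; this toy has no error handling, and a task that raises will kill its worker thread):

```python
import threading
import queue

class WorkerPool:
    """Pool of threads pulling zero-argument callables from a task queue."""

    def __init__(self, num_workers=4):
        self.tasks = queue.Queue()
        self.finished = queue.Queue()   # results land here
        for _ in range(num_workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            task = self.tasks.get()
            try:
                self.finished.put(task())
            finally:
                self.tasks.task_done()

    def addTask(self, func):
        """Enqueue a zero-argument callable (e.g. a lambda)."""
        self.tasks.put(func)

    def addTaskArgs(self, func, args=(), kwargs=None):
        """Wrap func and its arguments in a lambda, then enqueue it."""
        kwargs = kwargs or {}
        self.addTask(lambda: func(*args, **kwargs))

    def join(self):
        """Block until every enqueued task has been executed."""
        self.tasks.join()
```

Usage looks like pool = WorkerPool(2); pool.addTaskArgs(pow, args=(3, 2)); pool.join(), after which the results sit in pool.finished.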

I now find that my knowledge about how to unit test threaded code is rather limited, and the included unit tests are extremely thin.

Categories
Cooking

Braided Cinnamon Bread

Preheat oven to 400 degrees.

Follow a basic milk bread recipe (flour, sugar, salt, yeast, water, milk, shortening) through the first kneading and rising stage.

Split the dough into three equal portions.

Roll out each portion into a rectangle with roughly a 4:3 aspect ratio. Cover it with a mixture of sugar, butter, and cinnamon, then roll it into a tube.

Fan out the three tubes, pinch the ends together, and braid.

Allow to rise again until double in size.

Bake until lightly golden.

Top with a milk/powdered sugar glaze (roughly 1 tbsp milk to 1/2 cup powdered sugar; you will need 2-4 multiples of that formula).

Serve. It took 4 hands and a lot of care to move that monster to the platter seen in the first picture.

Categories
System Administration

Making Miro work with USB sound devices on Ubuntu

On Ubuntu (and possibly other Linux distributions) Miro refuses to work with a secondary sound card: it will only use the primary one, no matter what the ALSA default is set to, unlike most programs, which offer some way to override the default.

Potentially, the second sound card in question could be a PCI card or something else, but based on other people's experience (like my own) it is usually a USB sound card that causes trouble. See here (note, the suggested fix didn't work for me, just as it didn't for the original poster there) and here (they mention fixing it in trunk, but that doesn't help me until a new release comes out).

Some people actively want both the onboard sound and the USB or PCI device working, but if you are willing to sacrifice the on-board sound, I found a workaround. In my case, the on-board sound is worthless: it has some terrible humming/buzzing in the background, so I never want to use it again.

The solution is to find which module supplies your on-board sound. In my case, the on-board sound is a VT8233, so when I looked at the output of lsmod, it was obvious that the module for this sound device was snd_via82xx.

Then, open the /etc/modprobe.d/blacklist file to edit it:
sudo pico /etc/modprobe.d/blacklist
and add the line:
blacklist snd_via82xx
Then reboot.

Now, the USB audio device will be the first audio device.

Categories
Programming

Seeking

I am now looking for a new job and am no longer with Sigma Electronics.

My first preference would be a position writing software for post production or visual effects at either a software company or a post production or visual effects company.

Other than that, I am also interested in positions or contract work developing embedded systems, graphics applications, or web applications.

I just thought I would throw this out in case anyone can point me towards any leads.

Thank you.

Categories
System Administration

A few Solaris 10 notes

Actually, these are primarily Solaris 11 notes, but they will probably all apply to Solaris 10 when the next release comes out, which I understand to be scheduled for sometime later this month.

First, a lot of the SCSI hard drives I've gotten recently have been a little mysterious about being used by the Solaris installer, and have looked a little odd in format. It turns out that they've been EFI-labeled drives. Since Solaris understands EFI labeling, it doesn't just suggest you relabel the drive and be done with it. However, despite understanding EFI, Solaris refuses to boot or install from an EFI-labeled disk on SPARC hardware. The trick has been to get a prompt, then use "format -e". When you choose the label command, it will ask you for an SMI or an EFI label. Choose the SMI option. If you are going to do a ZFS root, the partitioning doesn't matter.

After fixing the disk, you are ready to install. The ZFS boot option is only offered on very new copies of Solaris (2008/05 maybe; Solaris Express build 98, or maybe slightly older, definitely). However, you only get the choice from the text installer. If you are installing over the serial console, no problem: you get this by default. From a graphical console, though, you will need a boot parameter, so your boot command will look something like "boot cdrom - text" or "boot net - text". Using "- nowin" instead may be faster.

When you get to the ZFS option, just choose it and away you go. You can choose to name the pool something other than rpool, but there is no need to.

If you want a mirrored root, it is easy to add the second disk later. First, note that when you install to a ZFS root, the installer repartitions the root drive and uses a slice (partition) instead of the whole disk (even though the slice fills the entire disk). You will need to partition the second disk identically: just look at the partition map of the first disk in format, then copy it over to the second disk. Then, from a root prompt, type something like "zpool attach rpool c0t0d0s0 c0t1d0s0", assuming that c0t0d0 and c0t1d0 are the two disks in question (which is a good guess on a lot of two-disk Sun systems). The mirror is now made, but it may take a while to sync up in the background, and the machine may run slowly until it is done. Check the progress with "zpool status".

To be able to do a fallback boot from the second disk, you will need to reboot and go back out to the OpenBoot ok prompt. But before that, make the second disk bootable with this command: "installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0"
Finally, before you head to the ok prompt, you will want to find the OpenBoot device paths for each disk. Do "ls -l /dev/dsk/c0t0d0s0 /dev/dsk/c0t1d0s0". This will show you something like:

lrwxrwxrwx 1 root root 41 Oct 1 21:02 /dev/dsk/c0t0d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@0,0:a
lrwxrwxrwx 1 root root 41 Oct 1 22:57 /dev/dsk/c0t1d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:a

Write down the targets of the symlinks (the part after ../../devices), changing the sd's to disk's and getting rid of the :a's.

Now reboot and use Stop-A to get to an ok prompt. If your second disk isn't where a second disk normally would be, you will need to create a devalias for it. Assuming you used c0t0d0 and c0t1d0, you can just do this:
setenv boot-device disk disk2

If you need to change the disk and disk2 aliases (or want to create new names), use the nvalias command from the ok prompt. See the man page for more detailed operation though.

Categories
System Administration

Flash on Ubuntu 8.04 AMD64

I run Ubuntu 8.04 AMD64 on a laptop at work.  I've been doing this since Ubuntu 6.10.  It has not been a smooth ride. Ubuntu 6.10 i386 on my old laptop (I only "upgraded" because the old one was stolen from the plane on a business trip) worked flawlessly for me.  Things have gotten a bit better as upgrades came out, but I still can't use the wireless (a BCM43 device of some sort; no native driver, and ndiswrapper won't play nice), for instance.

My first and biggest tip is to stay away from 64-bit Linux on the desktop or laptop unless you know why you need it.  That is very unlikely to be the case on laptops.

Moving on: for the longest time, Flash would not work.  When I tried to configure the nspluginwrapper system, it would start (sometimes), then crash the plugin.  Maybe I could view one Flash web site before needing to restart; maybe none.  I finally got Flash worked out, and that is the main point of this post.

The trick to making Flash work was to first install the 32-bit version of FF3 from the Mozilla web site.  Put it in a new location (I went with /usr/local/firefox), and put that location in your path before /usr/bin.  For this to run, you will need ia32-libs installed.

Step 2 is to go to the Adobe web site and download the Flash 9 .tar.gz.  Don't try to use the autodiscovery/autoinstall thing that Firefox will offer to do.  Extract the Flash 9 installer to a temporary directory, then copy the file libflashplayer.so from the temporary directory to the plugins directory (/usr/local/firefox/plugins for me, since that is where I extracted the 32-bit Firefox from the Mozilla web site).  Now, when you restart Firefox, you will be using the 32-bit-only Flash with a 32-bit version of Firefox, and everything will work happily.

I think that in general, Linux hasn't handled the 64-bit transition as well as Solaris or Irix did.  As far as I can tell, Flash is 32-bit only on all platforms.  However, on Solaris and Irix, 32-bit versions of Firefox or Mozilla are supplied even though they run on 64-bit hardware.  There also seems to be a lot more defaulting to 32-bit unless specified otherwise, which is often reasonable.  And their culture seems to do a better job of supplying both 32-bit and 64-bit versions of libraries.

Categories
Cooking

Vanilla Extract

I made vanilla extract.  Since I just followed the directions, here is the link.  I can’t try it for 4 weeks, and it won’t really be done for six months.  I hope it doesn’t come out weak.  I dropped a bit of the seeds on the floor and decided to trash what was on the floor rather than risk tainting the rest.

I got the beans from my brother-in-law.  I used Smirnoff as pictured.