Playing MP3 CDs in a Honda Element EX


Rachel really likes her new-to-us Honda Element. I, on the other hand, merely tolerate its many quirks. I was having a unique problem with it the past couple days, so I thought I’d share the solution here in case anyone else is banging their head against the wall and stumbles across this post.

The situation is that we recently purchased a bunch of music from Amazon MP3. I copied it onto my iPhone, and I’m perfectly happy playing it through my truck’s aftermarket head unit with Bluetooth streaming or USB/dock connector. Even though the Element has an auxiliary input and we keep a 1/8″ cable in there for that purpose, Rach doesn’t like to mess with having to plug her phone in, find the music, switch the stereo to aux mode, etc.

The EX model Elements (which is what ours is) come with an MP3/WMA CD player standard. I have about 250 blank CD-Rs that have been collecting dust for the past 7-8 years, so I decided to give it a try. The first disc I burned showed only one track, and when it played, no sound came out. Hmmm… a dud. I realized I had specified the disc’s filesystem as UDF instead of ISO9660, so I burned a new disc with the correct filesystem and tried again. Slightly more success! The stereo recognized the MP3 files this time, but it refused to play them, showing a “FORMAT” message. Strike two.

According to the owner’s manual, the “FORMAT” message means the audio file is protected by DRM. That’s great, except that these are DRM-free MP3s. Obviously, something else was going on. I noticed that Amazon seems to use a fairly high quality VBR setting to encode their files, but that shouldn’t be a problem, because the owner’s manual says that variable bit rate files are supported. Or are they? Going out on a limb, I decided to transcode the files down to 192 kbps CBR and burn another disc (hey, I’ve still got about 247 left, right?). I put the new disc into the CD player and… kaboom! The disc melted, and the turn signals started flashing. Just kidding—it all worked fine.

In summary: the Element EX will play MP3 CDs, but it’s pretty picky about the format of everything (much more so than the owner’s manual indicates). Make sure your MP3s are encoded with a constant bit rate (CBR), and make sure your disc’s filesystem is set to ISO9660. I organized the files with a nested folder structure two levels deep, and that didn’t seem to present any problems. According to the manual, you can nest folders eight levels deep, so go nuts:

Each disc can hold up to 400 playable files within 8 folder layers. A disc can support a maximum number of 100 folders, and each folder can hold 255 playable files.
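If you want to do the whole dance from the command line instead of a GUI burning app, here's a sketch of the transcode-and-image step. It assumes you have lame and mkisofs installed; the cbr/ directory name and the 192 kbps bitrate are just my choices, not requirements.

```shell
# Re-encode every MP3 in the current directory to 192 kbps CBR.
# --mp3input tells lame the source is already an MP3 (transcode, not encode).
mkdir -p cbr
for f in *.mp3; do
  lame --mp3input -b 192 "$f" "cbr/${f%.mp3}.mp3"
done

# Build an ISO9660 image of the results. -J (Joliet) and -R (Rock Ridge)
# let long filenames survive while keeping the base filesystem ISO9660.
mkisofs -J -R -o mp3cd.iso cbr/
```

Burn mp3cd.iso with whatever tool you like; the important part is that the disc ends up ISO9660, not UDF.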

And this was easier than plugging in the stupid iPhone… how?

Posted in Uncategorized | Comments Off

Time to do some spring cleaning around here

I think I’m going to pare down some of the older, less relevant posts and start blogging again. I noticed that there are lots of broken images and corrupted special characters (probably from multiple server/database moves), so it looks like I’ve got some behind-the-scenes work to do, too. Time to break out the backhoe and the hard hat.

Update: done!

 

Lego Construction Set (photo by tuxstorm)


Gmail scare, Pobox, WebFaction and procmail

In light of the recent Gmail bug that wiped out everything for a couple hundred thousand accounts (which were later restored, by the way), I thought it might be a good idea to set up a hot spare, poop-has-hit-the-fan email account that I could hop over to in case Gmail ever disappears for me. Since I’m already using Pobox to host addresses at my domains, forward mail to my Gmail account, filter spam, etc., it was pretty easy to do.

WebFaction (the company I use for web hosting) provides “unlimited” mailboxes and addresses, so I set up an address at an alternate domain and told Pobox to start forwarding mail to it. It occurred to me that it might be nice to set up filters mirroring my Gmail filters so that my backup inbox doesn’t become unruly, and luckily, WebFaction provides support for procmail. Unfortunately, their documentation is outdated, so I had to stumble through the configuration until I got it working.

I’m posting some sample procmail recipes below for my future reference and for anyone else trying to do server-side filtering at WebFaction. I won’t explain the procmail syntax here, but it does make for some good bedtime reading. Note: folders are case-sensitive, and the leading period and trailing slash are important. It seems like folders will be created if they don’t exist, but it’s probably better to create them yourself and be sure.

:0:
* ^TO_wp-hackers@lists\.automattic\.com
.list-wp-hackers/

:0:
* ^From:.*googlealerts-noreply@google\.com
.googlealerts/
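And since the whole point is keeping the backup inbox tidy, here's one more recipe in the same vein for shunting tagged spam aside. This assumes a SpamAssassin-style X-Spam-Status header; whatever filter WebFaction runs in front of your mailbox may stamp something different, so check a raw message first.

```
:0:
* ^X-Spam-Status: Yes
.spam/
```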

If this helped you, or if I just misled a bunch of people, let me know in the comments!


The service Google doesn’t know it needs

I like using an email address at my own domain. Why?

  • It’s portable. My email address can’t be held hostage by any single hosting provider or ISP. I own the domain, so I can move to a different email provider in a matter of seconds by making a few DNS changes.
  • It’s easy to remember. It’s my first name at my full name with a dot com at the end. What could be easier than that? No numbers, no punctuation, no spelling bees. (Unless you can’t spell two biblical first names, but then you’ve probably got bigger problems to worry about.)
  • It has branding built in. Some people visit this site because it’s hidden in plain sight right there in my email address. It’s free publicity.

I also like using Gmail. I’ve had an account since May of 2004. I like the search. I like the archiving and the labeling. I like the amount of storage space. I like the constant improvements and beta features (Gmail Labs). I like the integration with my Android-based phone. Well, let’s just say that I like a lot of things about Gmail.

You’d think, then, that Google Apps would be the natural way to bridge those two things. For a variety of reasons I’ve already written about, though, it’s not. Most importantly, a lot of people have Gmail/Google Accounts with a long history (i.e., almost seven years) and tons of services attached to them. Some services are movable between Google Accounts, but most are not (not cleanly or easily, at least). And that’s exactly why Google needs to introduce the service I’m about to describe below.

What I’m proposing is a halfway point between Gmail and Google Apps. Let’s call it Google MX for the purposes of this post. If you wanted to get really creative, you could even call it something like Google Hosted Mail Routing. But since that name is incredibly lame, and since this is my idea, I’m going to keep calling it Google MX. Here’s what I’m envisioning: you modify the MX records for your domain to specify that Google should handle your incoming mail (just like Google Apps). From a simple control panel, you would set up aliases for your domain, but instead of creating any new mailboxes, everything would be delivered into a pre-existing Gmail account.

I’m positive that the infrastructure to support something like this is already in place. For example, Google Apps allows you to add equivalent domains and aliases, so it would just be a matter of pulling that functionality out and repackaging it into a separate product. It could even be a sub-product of Google Apps for all I care. You would set up your MX records exactly the same way, so it might make the most sense as a delivery option for a Google Apps address. Instead of delivering mail to a Google Apps mailbox, you’d be able to choose a Gmail mailbox instead.

I have a feeling that there are a lot of people out there using the free version of Google Apps with just a single email address, and I'd be willing to bet that most of them had an existing Gmail account prior to signing up for Google Apps. With my imagined service, setting up a full-blown Google Apps domain would no longer be necessary, nor would you have to migrate all your stuff over from one account to another (disregarding for a moment the fact that some Google services can't be migrated, period).

Right now, I’m using Pobox to forward my domain mail to my Gmail account, and it’s a workable solution. They have a really solid reputation, great customer service, and excellent spam filtering. It just seems like an unnecessary extra step given what we know about the Google services that currently exist. Why not cut out the middleman and have messages delivered straight to my Gmail account at the SMTP/MX level instead of having to proxy them through another service?


Good image hosting is hard to find

Recently, we’ve been trying to unload stuff we no longer need/want onto eBay and Craigslist. Being the meticulous/neurotic/obsessive-compulsive person that I am, I like to provide no fewer than 10-15 high-quality photos of each item we list (depending on the item, of course). There is absolutely no good, easy, or efficient way to do this. Let me say that again: in 2010, there is no easy way to take images off a digital camera and post them to a site where people buy stuff.

Here’s the thing that kills me, though. There are currently at least three services in existence today that could implement the feature I’m talking about in a day or less. In different ways, Flickr, Picasa Web Albums, and Dropbox are all great at what they do. But for whatever reason, none of them give you a one-click solution to embed a grid of thumbnails that link back to the high-resolution versions. Flickr and Picasa give you the HTML to embed images in a lame way, but it’s less useful than you would think.

What I want is something that works like this:

  1. Take pictures with my digital camera.
  2. Transfer pictures to my computer with USB card reader.
  3. Make quick adjustments to pictures: rotation, color, etc.
  4. Some app uploads pictures to a photo hosting service.
  5. I get a <table> of thumbnails that are all linked up.

The HTML code that gets generated by the process would be intentionally simple. Both eBay and Craigslist impose lots of restrictions on the HTML they allow, so the solution would have to keep that in mind. Having said that, I think Picasa would be the ideal place to implement something like this. They already offer a different feature that's similar to what I want, but it requires too many extra steps for me to get really enthusiastic about it.
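To be concrete, the generated markup I have in mind is nothing fancier than this (the URLs are made up, obviously):

```html
<table>
  <tr>
    <td><a href="http://photos.example.com/full/img01.jpg"><img src="http://photos.example.com/thumb/img01.jpg" alt="photo 1"></a></td>
    <td><a href="http://photos.example.com/full/img02.jpg"><img src="http://photos.example.com/thumb/img02.jpg" alt="photo 2"></a></td>
    <td><a href="http://photos.example.com/full/img03.jpg"><img src="http://photos.example.com/thumb/img03.jpg" alt="photo 3"></a></td>
  </tr>
</table>
```

Plain table, anchor, and image tags with absolute URLs: about the only HTML that should squeak past the restrictions both sites impose.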

The latest version of Picasa for Mac (3.6.7 for me) has an option to export an entire folder as an HTML page. Cool. The only problem is that I then have to upload all the generated images to my hosting space and search-and-replace on the HTML to point the links and images to the URL where all the stuff has been uploaded. It’s a bit tedious, but it works. That’s where I’m at right now, and that’s where I expect to be a year from now. I couldn’t help but think that there’s a better way, though.

One of the coolest features of Picasa is single-click syncing with Picasa Web Albums. All the image resizing and uploading is taken care of by toggling one switch. You can edit photos from inside Picasa and make all the simple adjustments you want, and those changes get synced up automatically. It's way cool. Google gives you 1 GB of storage for free, and an extra 20 GB is only $5/year. The only thing they need is another option that says "grab a simplistic, dumbed-down HTML table of this gallery" for embedding into eBay or Craigslist.

Until that happens, I’m off to spend entirely too much time generating pages of thumbnails for our virtual garage sale.


How I (might) roll: off-site backups

Knowing is half the battle

According to Backblaze, June is Backup Awareness Month, so my timing on this post is superb. Everyone who uses a computer knows (or should know) they’re supposed to be making regular backups of their precious data, yet hardly anyone actually does make regular backups. I don’t mean to brag, but I’m way past that. I’ve been kicking around a few ideas on backups for quite some time now, and I think I’ve finally got a decent solution worked out.

I’m posting my thoughts here in an attempt to get some feedback, and also for anyone else who’s in the same boat and might end up finding this later on. The technical implementation will focus mostly on Macs, since that’s what my wife and I use. You could extrapolate from this article and apply the same techniques to nearly any OS, though. I’ll have a Linux box (or two) in the mix as well, but it doesn’t really change anything.

Back that thang up (cash money, something, something)

The Macs in the house are already doing regular Time Machine backups to a QNAP NAS (which is pretty awesome in its own right, by the way). This handles the “oops, I deleted a file” scenario, and it works great over 802.11n and GbE. It’s automatic, so we don’t have to think about it. As long as the computers are powered on, Time Machine happily runs every hour in the background. The weakest link in this setup is the NAS.

Even though the NAS is running RAID 5, RAID is not backup—and it never will be. It can sustain a single drive failure and (hopefully) keep going long enough to add a new drive and rebuild the array. It can't, however, sustain any number of fires, thefts, or zombie attacks and keep going. For that kind of protection, we specifically need off-site backups.

The underground bunker I don’t have

There are many options when it comes to off-site backups, and that’s probably why it’s taken me so long to come up with something that will work for us. Initially, Jungle Disk seemed like the best option, and I’ve used it since 2008. It’s backed by Amazon S3, which is highly redundant, and you only pay for what you use ($0.15/GB/month, plus bandwidth in and out). Jungle Disk is quite nice, and it’s also cross-platform (Windows/Mac/Linux). Unfortunately, it has some pretty strong cons stacked against it:

  • Initial upload takes forever over residential broadband connections
  • Limitless scalability, limitless cost (500 GB would cost $75/month)
  • Recovery could take almost as long as the initial backup
  • Cost is ongoing for as long as your data is backed up

Our DSL connection at home is roughly 7 Mbps down and 1 Mbps up, and even with cable, the upload side wouldn't look much better (12/1 or 16/2 or something close to that). Dropbox (which uses Amazon S3 itself) is great, but it suffers from the same slow pipe problem. In fact, any "cloud-based" backup solution needs to be ruled out for that reason. Mozy, Backblaze, CrashPlan, SugarSync, SpiderOak, etc. are all out.

Low-tech doesn’t necessarily mean no-tech

What we need, then, is a low-tech solution. And the answer lies in cheap hard drives. Seriously. At the time of this post, 1 TB WD Caviar Green drives are $59.99 on Amazon and Newegg. That comes out to $0.058/GB, but it’s not a recurring monthly cost. You pay that once and you’re done with it. Add in a USB/eSATA external enclosure for $20, and then multiply the whole thing by two for the total cost.

“Multiply by two?” you ask. Yes, by two. We’re skipping the internet and going straight to the sneakernet, baby. Here’s how it works. Once a week (probably on Sunday night), I’m going to take images of each computer using SuperDuper. I’ll have to do this twice the very first time—once for each hard drive. Then, I’m going to bring one drive to work and leave it there, along with its power supply for the external enclosure. Every Monday, I’ll rotate the drives. This way, there’s always a backup off-site, and it’s never more than a week old.

The important media files on the NAS will get backed up in the same way with a third+ drive. I say third+ because the capacity of the NAS is 4 TB of protected storage. That will probably happen monthly, and there won’t be any rotation on those drives. I’ll probably bring the drives home on a Friday, run the backups, and then keep them in the safe until they go back to work with the others on Monday. That leaves a brief single point of failure (i.e., the house burns down and melts the safe), but I’m only talking about movies, music, and TV shows here.

Getting some closure on enclosures

One last thought before I wrap it up. I had considered using bare drives (sans external enclosures) with my drive toaster, but I don’t like that idea as much. I’d still need to buy cases to transport the drives, and the risk of electrostatic discharge or some other kind of damage is much higher. In my opinion, it makes more sense to put that money towards enclosures. Plus, I can keep both the enclosure and its power supply off-site. If the house did burn down, I’d have to go out and buy another drive toaster. It’d be the least of my worries.

What sayest thou?

Thoughts? Suggestions? Any glaring omissions on my part? Anyone already doing something like this? Use the comments to describe any backup victories (or failures) you’ve experienced.


Google Apps: impressions after 14 months

More than a year ago, I decided to switch my personal email account from Gmail to Google Apps Premier Edition. The logic behind that move is laid out in a previous post, so I won’t rehash what’s already been said. In this post, I mainly want to reflect on what’s already been done and also point out some potential pitfalls for anyone else who’s considering making the jump.

Enjoying the status quo

My Google Apps mailbox is puttering along nicely, happily sucking down messages from two other accounts via POP. I originally had set my old Gmail account to forward to my new address, but after noticing that some new messages weren’t forwarded properly, I switched to POP, which seems to be more reliable. I changed all my important stuff directly at the source to point to my new address instead, so it’s really not that big of a deal—just something to be aware of. The other mailbox I’m checking is my Tuffmail account. I have about 30 or 40 addresses delivering to a single mailbox on the Tuffmail side, and then I just poll that single box.

Frustrations start to mount

Everything works pretty well in and of itself, but the problems start to pile up, in spades, when you throw additional Google services into the mix. It seems like there’s a strange dichotomy between Google Apps accounts and Google Accounts (notice the capital A). For those who don’t know, the distinction is this: a Google Apps account is simply one user of your Google Apps domain (you@yourdomain.com). A Google Account, on the other hand, is not related at all. That’s right—it’s completely and totally unrelated. Most commonly, a Google Account is also a Gmail account, but it doesn’t have to be. If you have a Google Apps email address (or any other non-Gmail email address), you can create a Google Account for it. Confused yet?

Google’s a/Accounts don’t play well together

The most notable problem—and my biggest gripe—is that email/Gmail-related functions of Google Apps don’t share data with other services that are accessed via a Google Account. I’ll give you the most prominent example: my Google Voice contacts are separate from my email contacts, even though my Google Voice account is linked to a Google Account that’s really just a Google Apps email address. Again, confused yet? The tasks for my Google Account are different from the tasks for my Google Apps account, and the contacts for my Google Account are different from the contacts for my Google Apps account. I know it seems minor, but the end result is an enormous pain in the butt. Gmail users don’t have this problem!

To further frustrate and complicate things, it looks like once a Google service is associated to a Google Account, it’s destined to be forever linked to that same account. The only exception seems to be Google Analytics, and it was fairly easy to move that over from my old Gmail Google Account to my new Google Apps Google Account. In my mind, you should be able to migrate a Google service to any Google Account you want. Just prove that you own both accounts, and off you go.

No longer a good value

Taking only storage space into consideration, Google Apps Premier Edition is no longer a good value for people using a single account for personal use. For organizations, it still makes a lot of sense, but Google’s recent price cuts on additional Gmail storage give you far more for your money. When I signed up for Google Apps, additional storage for Gmail/Picasa/Docs was $20 for 10 GB and $75 for 40 GB. Google Apps came in right between those with 25 GB for $50. In November of last year, Google started offering 20 GB for $5, 80 GB for $20, and 200 GB for $50.

Now, I’m only using 6% of my 25 GB at the moment, so talking about how I could get 200 GB for the same price seems a little silly. Even so, if Google is going to increase storage and decrease costs, they should do it across the board.

How to fix the mess they’ve made

There are two things that Google could do right now to resolve the sorry state of their Google Accounts. The first would be to link Google Apps accounts and Google Accounts that share the same email address. The behavior would be exactly the same as what already happens with Gmail accounts, except that the domain would be yourdomain.com instead of gmail.com. There would obviously need to be some kind of verification process, but there is absolutely no reason why this shouldn’t be happening. In fact, this is already happening—kind of.

When I’m logged into my Google Account that is also my Google Apps email address, and I try to visit Google Calendar, I’m asked if I want to use my Google Apps calendar or sign in under a different Google Account. Google Calendar knows that a Google Apps account exists with the same email address as my Google Account, or else it wouldn’t be asking!

The second thing Google could do would be to make services portable—and “mergeable”—between Google Accounts. If I want to move my Picasa or Google Voice account to a different Google Account, I should be able to. If Google detects that I already have a Picasa account in both places, it should offer to merge the photos and usernames and so on. Many other service providers have already built this functionality, and Google should be able to as well.

My advice to you

If you are like me and are only looking to host a single account (or a household of accounts) under Google Apps (Premier or Standard editions), think twice. After 14 months of using Google Apps and the corresponding new Google Account I had to create, I wish I had stuck with Gmail. The integration with other Google services is better, the additional storage space is cheaper, and the hassle of changing email addresses is nonexistent. The one benefit of Google Apps is that I get to use my own domain without having to resort to a half-hearted forwarding system that would never set From: headers properly, but it’s not worth it. Take my advice and stick with Gmail.

Update: Gina Trapani, of Lifehacker fame, has written a similar article with valuable input from a real Googler. Maybe that kind of write-up is what we need for this issue to get noticed and gain some traction at the Googleplex.


How I roll: outsourcing critical services

Warning: this post is moderately technical in nature. I think it’s the first technical post I’ve done in almost five years here, so I apologize in advance if it’s too geeky for you.

I’m trying a new feature around here called How I Roll. It seems like I get lots of questions from other people about how/why I do what I do as it relates to tech stuff. For instance, I put a lot of thought into how I handle backups, DNS, web hosting, email, etc., and I figured it might be worthwhile to document and share some of that knowledge. Maybe I can save you some trouble or give you additional insight into something you’re researching for yourself.

For this first installment of How I Roll, I’ll be talking about how I handle web, email, and DNS hosting–the infrastructure-level components of nearly any website. I think it’s really important for those three things to be done right, because these days, online media is the first point of contact for a lot of people.

Let’s talk about DNS first. The most common analogy I’ve heard for DNS is that it’s like the phonebook of the internet. I guess that’s mostly true, but if you don’t already have a strong grasp of what it is and how it works, you shouldn’t be reading this article. A lot of hosting companies and domain registrars provide free DNS hosting, but I tend to stay away from them and do my own thing. 

Nearly every major registrar has had DNS troubles of some kind over the past few years, and that’s understandable. It’s not their core business, it’s not important to them, and it’s not what they do well. Things also get complicated/broken when you start moving to different registrars. Just recently, I moved the majority of my domains from GoDaddy to NameCheap (I couldn’t stand GoDaddy’s low-class advertising or their constant upselling attempts anymore). If I had hosted my DNS with GoDaddy, I would have made a lot of extra work for myself by switching to someone else.

For the same reasons, I don’t let hosting companies handle my DNS, either. I’ve changed hosting companies too many times, and I just don’t feel comfortable giving a single company the “keys to the kingdom”. That’s why I use DNS Made Easy. Their user interface is really awful until you get used to it, and even then, it’s still pretty bad. But they do one thing well, and that’s all that matters: they offer rock-solid DNS hosting at a reasonable price.
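To make the "keys to the kingdom" point concrete: with third-party DNS, the only thing your registrar holds is the delegation, i.e., a couple of nameserver entries along these lines (these are DNS Made Easy's nameservers as I remember them; double-check before copying anything):

```
ns0.dnsmadeeasy.com
ns1.dnsmadeeasy.com
ns2.dnsmadeeasy.com
```

Switch registrars and those lines come along unchanged; all the actual A/MX/CNAME records live at the DNS host and never have to move.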

Other companies specialize in DNS hosting, so don’t think that DNS Made Easy is the only choice. They just happen to have a very good track record and plans that fit my budget. EveryDNS.net is a free service that does an admirable job, and it was started by David Ulevitch of OpenDNS fame. Unfortunately, EveryDNS has had their own share of issues over the years. I’m mentioning it here because I think it’s the best option for people who aren’t willing to pay for this essential function.

Next on the list is web hosting. Again, I’m not going to explain the differences between shared/dedicated/VPS plans because I assume you already know that. In my earlier years on the web, I was perfectly content with shared hosting. My Linux CLI skills were modest, and I didn’t demand complete control over my server environment (root, baby). In the last 4-5 years, my sysadmin skills have grown and the price of hardware has dropped to the point where I can afford my own box.

The problem is that I don't need my own box. My blog isn't exactly on par with Amazon or eBay or Google in terms of traffic. I can get by just fine with a VPS, which is essentially a virtual chunk of a much bigger machine. For what I do, 256 or 512 MB of memory is more than enough, and I only require a couple gigs of HD space. After searching around for a long time and reading tons of reviews, I decided on Slicehost about a year ago (14 or 15 months ago, actually). They've been fantastic.

After 300 days of uptime, I restarted my box because I wanted to, not because I had to. I think that fact alone speaks to Slicehost’s reliability. Sure, I could build out my own server and drop it in a colo facility somewhere, but that’s cost-prohibitive. I’d be looking at about a grand for the initial build-out and then a monthly colo fee on top of that. And if (when) the hardware fails, it’s all on me. Instead, I choose to pay Slicehost $20/month for use of their high-end hardware and connectivity. When a drive dies, they replace it, and it doesn’t cost me a cent.

Recently, I’ve been exploring some other VPS providers that offer more bang for the buck. Linode and Prgmr.com both provide more capacity for the same price (or less). A server move is a big deal and involves a lot of work, so being the lazy sysadmin that I am, I’m not too keen to pack up and move quite yet.

If you’re still with me, I saved the best for last: email. In 2009, email is absolutely critical, so we can’t take any chances here. Everything from job offers to utility bill notifications are sent via email, and even though the underlying protocols are designed to be fault-tolerant to a certain extent, I’d rather not risk it. Imagine what would happen if your home mailbox suddenly disappeared before the mailman could deliver your mail. Get it?

Well, I’ve had a Gmail account since before they were cool, so I was using that for the longest time. Seriously, I had one of the early invites back in May of 2004–now that’s some geek cred right there. At some point, it occurred to me that I was stupid for not taking advantage of my own domain name, which I’ve had since 2002. I knew I didn’t want to run my own mail server because it would take up too much of my time. (You think it’s easy to stay on top of spam filtering, virus filtering, and block lists?) Besides, it would be cheaper and more reliable to outsource it to a company that specializes in mail hosting.

After researching Google Apps and Tuffmail, I decided to go with Google Apps–sort of. I think Gmail has the best web interface around, and it would have been hard to settle for something less. Google Apps comes in both a free version (Standard) and a paid version (Premier). They have similar feature sets, but you get a 99.9% uptime SLA, 25 GB of storage space, phone support, and the ability to disable ads with Google Apps Premier. The cost is $50/account/year, and with only one account, that translates to $50/year (I was always this good at math, by the way).

Since I had been using and enjoying my free Gmail account for about five years, I figured it was time to pay it forward by signing up for Premier instead of Standard. I was also getting a little nervous after reading horror stories of people being locked out of their Gmail accounts for days at a time, and I reasoned that a paid user with an SLA would have more protection against that kind of thing. For $0.13/day, the peace of mind, extra space, and lack of ads has been well worth it.

Since Tuffmail is equally awesome in a different way, I decided to use them for all of my auxiliary domains. I’ve got their most basic plan that costs $6/quarter, and they too have been well worth the money. Considering that I would have needed a separate email server at $20/month, paying Tuffmail $24/year is a bargain, and that’s not even counting the cost of my time to manage another box (and an email box, at that). You can set up as many domains as you want at Tuffmail and then have unlimited aliases all deliver to a single physical box. Cool.

Because of the way I’ve set up my essential services, I can continue to receive email even if my website is down. And because of Google’s SLA, I should be able to receive email 99.9% of the time. Changing domain name registrars or hosting companies is a breeze thanks to the decoupled nature of everything. Oh, and did I mention that I’m still able to receive email even when that happens?

I hope this article has been helpful, and I intend to write more like it in the future. Feel free to ask questions or spark up a discussion in the comments.

That’s how I roll.


My world-famous hamburger recipe

Now that summer has officially arrived (in Tucson, anyway), it can mean only one thing: time to clean off the grill and pile on the red meat. I made some pretty decent burgers last night using my time-tested recipe, and I thought I’d share (the recipe, not the burgers) for all of the yet-to-be-enlightened burger chefs out there. The recipe is simple on purpose—quick and easy to make, and easier to scale up for large groups of people.

You will need:

  • 1 lb. ground beef (80-90% lean)
  • 1/4 cup Worcestershire sauce
  • 2 tsp. Old Bay seasoning
  • 3 cloves minced garlic (use less or omit if desired)

Here’s how to do it:

  1. In a bowl, mix ingredients together. Be careful not to handle the meat too much.
  2. Shape into 3 patties. You might be able to get 4 patties per pound of beef, but they’ll end up on the small side.
  3. Grill on high heat for about 5 minutes per side. I say about because all grills are different. Tip: oil up your grill with some cooking spray before lighting to prevent the burgers from sticking.
  4. Drop a piece of cheese on each burger for the last 30 seconds of cooking. Either colby-jack or cheddar works well.
  5. Serve on a toasted bun with condiments (bacon and BBQ sauce, anyone?).

So there you have it: my world-famous burger recipe (depending on your definition of world). Final note: I’ve seen some recipes that call for eggs and breadcrumbs, but in my opinion, that will give you grilled meatloaf rather than a hamburger. Stick with my recipe if you want to please a crowd.


Perils of being a tech worker

Tucson got pounded with what was arguably the best storm of the entire summer last night. We’re talking horizontal rain, winds that almost knocked my bike over, and flooding all over the place. Oh, and power outages. Lots and lots of power outages.

Power was out at my apartment last night from about 8:00 p.m. until sometime in the middle of the night. I stood on my balcony, which looks out southbound over the city, and literally watched rows of lights go out until all of Tucson was black. The only lights I could see were air traffic control towers and beacons at Davis-Monthan and the Tucson “International” Airport.

So, I woke up this morning, and the power was back on at my apartment. Cool. Fire up the UPSes, reboot the router, reboot the servers, reboot the Vonage adapter, and we’re back in business. I get in my car and start driving to work, only to get about one mile before hitting a wall of cars. We’re bumper-to-bumper and barely moving down a road that you can normally go 50-55 mph on.

Since we’re mostly stopped, I start posting tweets in an attempt to warn others who might be thinking about heading down the same road. I speculated about what the problem might be, and as it turns out, the traffic signals at River/Swan were completely. Only, instead of a cop directing traffic, the county had put up four stop signs around the intersection. Fantastic.

After watching several near-collisions, I came to the conclusion that people don’t know how to react to a 4-way stop at a major intersection. When I got past the blockage and a little closer to work, I realized traffic lights were out everywhere. “Hmm… I wonder if power is out at the office.” Sure enough, power was out at the office.

A few people had gotten to work ahead of me, and they were waiting around to see what would happen with the power situation. As you can imagine, it’s difficult for programmers to get work done when all the desktops, servers, and network connections are completely wiped out. So we did what any good employees would do given the circumstances: we played foosball for an hour.

That brings me to the present. I’m currently sitting in a Starbucks that happens to have power, recounting my morning in blog form for you, my loyal readers. It’s been about two hours since I left the office, though I’m supposed to get a call when we’re back online. I’m not particularly looking forward to that call, as it means a lot of babysitting servers and fscking* work for me.

* Just in case you were wondering, that’s not actually vulgar, though it might appear that way on the surface. Follow the link to find out what the fsck I’m talking about.
