Friday, October 31, 2008

Thinking in Git

I recently started using Git on a small team project. Why Git, you ask? Well, previously I had used CVS and Subversion (and by such admission I am knowingly submitting myself to being mocked by Linus), and prior to that had used Visual SourceSafe. So I figured it was time to learn something new and dive into distributed version control.

I have to say, it's a bit of a leap. In the same way that I stumbled through learning to trust CVS to merge my changes, as opposed to locking a file the way SourceSafe did, I'm finding the distributed model takes some getting used to.

It's great that I don't have to have network access to do commits. That's the easy part to appreciate. With Git, you commit to your local repository copy. If you want to bring your changes together with someone else's, either you push to their repository, or they pull from yours, but that is in no way tied to the act of committing your changes. There's no built-in notion of a master repository. Really.
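In command terms, the split looks roughly like this (assuming the other repository is configured as a remote named "origin", which is just the conventional default):

git commit -a -m "describe the change"   # recorded only in my local repository
git push origin master                   # explicitly publish my commits to the origin copy
git pull origin master                   # fetch and merge commits others have pushed there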

For some reason I'm finding it difficult to grasp that concept. For years I've relied on the notion that all my changes are safely locked away in the master repository. The master repository is the source of all truth. (Ha ha, get it? "Source" of all truth? You don't get it. Fine.)

In our particular team setup, we do have a master copy hosted on GitHub. I'm finding that every single time I do a "git commit", I have this unstoppable compulsion to execute a "git push", and I'm not sure that's the right idea. I'm looking forward to that breakthrough moment when I finally get it. I'll let you know when that happens.

Friday, October 24, 2008

Singularity Summit 08 Day 1

Today was the Emerging Tech Workshop component of the Singularity Summit, and I have to say I'm fairly disappointed. The venue certainly didn't help. While the Tech Museum of Innovation is an inspiring place, it's also full of exuberant (loud) kids, and that combined with the less than adequate sound system made concentrating on the speakers a challenge.

Still, it was certainly possible to pull some useful stuff out of it. It was great to hear Thomas Dietterich on the semantic web panel validate what I've been thinking about AI for a while, namely that we can't expect to have useful AI until it has an awareness of what's going on in the real world. That might seem obvious, but it appears many folks have been working simply on text manipulation and inference engines, which will only get us so far. He also mentioned OAuth, which I hadn't heard of before; it lets users give applications access to their data in a controlled manner.

The highlight of the robotics panel was clearly Bruce Hall's use of the phrase "spinning dreidel of death" to describe Velodyne's laser vision system (LIDAR), used by many of the vehicles in the DARPA Grand Challenge. One fellow asked if anyone on the panel saw potential commercial application of some of the research work being done using rat brains to control robots, to which the panel answered a flat "no". That surprised me a bit. I was thinking folks on a robotics panel at a summit on the singularity would be a little more forward-thinking.

Anyway, there were also guys presenting on their supposedly singularity-related companies, including Climos, m2mi and Piryx. These mostly felt like VC pitches. Interesting stuff for sure, but hardly what I was expecting. Where were the guys working on heads-up displays and neural interfaces, for instance? How about the guys from Mind Balance working on mind-controlled video games? I think these were the things the audience was clamoring for but didn't get.

Friday, September 26, 2008

Towards the Singularity...

One of the defining characteristics of this age is the notion of a technological singularity. Whether or not our civilization actually achieves such an event, there is surely little doubt that we are currently experiencing a period marked by incredibly rapid discovery and innovation. For some time I've been a fan of Ray Kurzweil, first for the musical instruments he makes, and second for his writing on futuristic themes. So I'm very excited to be attending The Singularity Summit at the end of October, an event at which I hope to meet some like-minded folks and sharpen my vision of what may come...


Tuesday, September 23, 2008

$0.25 well spent...

I finally got around to playing with Amazon's EC2 yesterday, just going through the basic tutorial. Pretty much as I expected, it was trivial to create and start a new instance, although I could see getting tired of keeping track of the access identifiers for command-line use and wanting a GUI such as RightScale, which will even automatically launch new instances to deal with increases in demand on your site.
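For the curious, the command-line part boils down to roughly the following calls to the EC2 API tools (the AMI and instance IDs below are placeholders, and the keypair name is arbitrary):

ec2-add-keypair my-keypair                      # generate a keypair for SSH access
ec2-run-instances ami-12345678 -k my-keypair    # launch an instance from a (placeholder) AMI
ec2-describe-instances                          # look up the new instance's public DNS name
ec2-terminate-instances i-12345678              # shut it down so the meter stops running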

The idea of being able to quickly add extra compute power, and pay for it only when you need it, is quite appealing. Many web sites experience significant peaks in their traffic and could potentially save on infrastructure costs with such a model. For example, at my last gig we designed systems that had to handle peak traffic of more than twenty times the average for just a couple of hours at a time, which required a significant investment in hardware that sat largely unused most of the time. It's also nice for companies just starting out that don't yet have a firm grasp on their hardware requirements.

I do wonder how well Microsoft can play in this new space given their server-centric licensing model. If I want to have a hundred servers on standby, how would I license Windows for that? Some quick Google searching for "windows cloud computing" turns up some information on a future cloud-centric operating system called "Midori", a beta offering of Microsoft Online Services, and even references to Windows Live. But none of this would help me deploy a .NET application to the cloud today. This fairly recent post on ZDNet echoes my confusion, and adds even more concepts. Red Dog? Microsoft appears to be playing catch-up again with respect to understanding the Internet.

Regardless, I want to start taking advantage of cloud computing. Even if it's not yet ready for mission-critical applications, one could start by moving certain workloads into the cloud, such as load testing or compute-intensive batch processing. I don't know of any other way I could start up a server or two and play with them for an hour for just $0.25.

Monday, September 01, 2008

What are *you* afraid of?

I'll tell you what I'm afraid of. Patents. Software patents specifically.

This week I'll be attending the Business of Software conference in Boston, where folks are getting together to discuss all the gnarly difficulties associated with building software for money. Judging from the presentation topics, people are worried about how to build software, how to market it, and how to charge for it. Clearly these are all very worthwhile topics. For me, however, they're the least of my worries. I classify them as rational problems: you can try things, see what works and what doesn't, experiment, and move on. Easy.

But patents. Patents lurk around late at night, watching and waiting, ready to strike at the worst possible time. Patents are not rational. They're a problem I clearly don't understand. Richard Stallman himself will be speaking at BoS 2008, mostly on the evils of patents as applied to software. I'm already convinced of the evils; what I need now is a strategy for managing patent risk!

Wednesday, August 20, 2008

From the "things I wish Java had" file...

Often you'll find yourself writing code similar to the following:
String childName = root.getChild().getName();
Now, if root.getChild() returns null, you'll get a dreaded NullPointerException. There are a couple of ways to guard against this, of course, but they're fairly verbose. You could check each traversal point for null...
String childName = null;
Node child = root.getChild();
if (child != null) {
    childName = child.getName();
}
That works, but it gets messy quickly if you have another traversal or two (and yes, I know, excessive depth is a bad code smell).

Alternatively, you could do something like this:
String childName = null;
try {
    childName = root.getChild().getName();
} catch (NullPointerException ignored) {
}
But you could really end up littering your code with ignored catch blocks.

Ideally, I would love to have a way of saying "traverse as far as you can, but return null if you encounter null along the way". Perhaps a special operator?
String childName = root,getChild(),getName();
Anyway, I'm sure there are languages out there that support this notion. It's high on my list of desired features for an alternative to Java!
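Until then, one way to keep the mess out of the calling code is to bury the null checks in a small helper method. Here's a minimal sketch using the hypothetical Node type from the example above:

// Returns the child's name, or null if the node or its child is missing.
static String childNameOf(Node root) {
    if (root == null) {
        return null;
    }
    Node child = root.getChild();
    return (child == null) ? null : child.getName();
}

It doesn't generalize to arbitrary traversals, but at least the ugliness lives in one place.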

Wednesday, June 25, 2008

RFCs and memory failure...

I often suggest to developers that they read the RFCs for the technologies they're working with. It's all very well to read the vendor's documentation for whichever product you're using, but you can never be sure whether its developers have gotten things right.

One of my favourite RFCs is RFC 2616, which specifies HTTP 1.1. HTTP is a fantastically useful and flexible protocol, and I had considered myself quite familiar with it, until a conversation I had today with a fellow developer.

We were discussing options for implementing asynchronous processing of web requests, with various considerations from the web tier to the back end. The conversation went something like this:

D: You could of course use a meta http-equiv tag to reload ze page.

Me: Or better yet, the HTTP "Refresh" header field.

D: "Refresh", she is an HTTP header?

Me: Yup, it's in the RFC.

D: Huh.

Sadly I am unable to reproduce his exquisite French accent in writing. But I digress. Later that day he informed me that in fact it was not in the RFC, which I took to be impossible. I've used it frequently, I just knew it had to be there. I've been known for my sketchy memory when it comes to social matters, but usually my memory for technical details is pretty good. So I was extremely dismayed when I pulled up the RFC and discovered that he was indeed correct.

Not satisfied with leaving it at that, I started doing a little digging. For those of you who are curious (all three of you, maybe), here's what I discovered.

A very early draft of the HTTP 1.1 specification did include a reference to a "Refresh" response header. However, this was removed from later drafts, as per this message from Roy Fielding dated June of 1996. It appears that Netscape introduced this as a proprietary feature as documented in An Exploration of Dynamic Documents. Evidently the W3C didn't think this was a good idea (search for "refresh", almost at the bottom of the page).

Regardless, the technique is widely supported, and allows you to exert server-side control of browser page reloading. This can even be used to reload non-HTML content such as images and text. Fun!
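For instance, here's a minimal servlet sketch (the class name and message are made up for illustration) that tells the browser to re-request the page every five seconds via the Refresh header:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Asks the browser to reload this URL every 5 seconds, say while some background work finishes.
public class StatusServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setHeader("Refresh", "5");   // non-standard, but honored by the major browsers
        resp.setContentType("text/plain");
        resp.getWriter().println("Still working...");
    }
}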