Posted by: shoji | September 8, 2007

Era of image analysis

Today, I watched my first match of this year’s U.S. Open.

One change from years past is the “Hawk-Eye” instant-replay line-calling system. Hawk-Eye uses multiple high-speed video cameras positioned around the court to track the tennis ball. A computer reconstructs the ball’s trajectory to determine whether it landed in or out. The tennis organizers make it more interesting for the fans with computer animation, rules for how often a player may challenge a call, and, of course, a suspenseful pause.
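To give a flavor of the idea (this is a toy sketch, not Hawk-Eye’s actual algorithm, and every number below is made up for illustration): fit the tracked ball positions to a simple ballistic curve, solve for the moment the ball reaches the ground, and compare the landing position to the line.

```python
# Toy in/out call from tracked ball samples.
# Assumes height follows a parabola in time and horizontal motion is
# roughly constant-speed -- a gross simplification of real tracking.
import numpy as np

def bounce_point(t, x, z):
    """Fit z(t) (height) as a parabola, solve for the landing time,
    then evaluate a linear fit of x(t) at that time."""
    a, b, c = np.polyfit(t, z, 2)        # z ~ a*t^2 + b*t + c
    roots = np.roots([a, b, c])          # times where height hits zero
    t_land = max(r.real for r in roots if abs(r.imag) < 1e-9)
    m, k = np.polyfit(t, x, 1)           # x ~ m*t + k
    return m * t_land + k

# Hypothetical camera samples: times (s), horizontal positions (m),
# and heights (m); say the baseline sits at x = 11.89 m.
t = np.array([0.00, 0.05, 0.10, 0.15])
x = np.array([10.0, 10.6, 11.2, 11.8])
z = np.array([1.00, 0.78, 0.54, 0.28])

x_land = bounce_point(t, x, z)
print("IN" if x_land <= 11.89 else "OUT")
```

The real system does this in three dimensions from several camera angles at once, which is what makes the virtual replay possible.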

How Hawk-Eye has affected the game for better or worse is not what I’m blogging about, though. (It’s certainly not without controversy.)

I grew up playing tennis in the days of “Cyclops”, a laser-based line-calling system. Unlike Hawk-Eye, though, Cyclops could only call the service line. Even with that singular focus, Cyclops was not very good. I watched a match in which Andre Agassi and his opponent (What’s-His-Name?) asked the Chair Umpire to turn off Cyclops after several egregious mistakes.

Also around that time, I remember seeing an episode of Beyond 2000 that described a prototype tennis line-calling system. Unlike Cyclops and Hawk-Eye, this system featured a wire mesh embedded in the court. Players had to use a special tennis ball with its own embedded wire mesh. The idea was that the ball would complete an electrical circuit in the wired court, signaling in or out. Suffice it to say, the Beyond 2000 tennis court never made it beyond 2000.

We now live in an era of image analysis. It is remarkable to me how computers now “see” compared to older technology. The Beyond 2000 wired court and Cyclops are two examples of how human beings formerly approached machine vision; another is the bar code (designed explicitly to be machine-readable).

The difference is that machines are now designed to capture a digital image of the environment and then interpret what is there: Hawk-Eye reconstructs the trajectory of a ball; the cameras around London read automobile license plates for congestion pricing. Sophisticated “vision” devices guide the self-driving vehicles built for the DARPA Grand Challenge; GPS only gets you so far.

In case you were wondering, the Roomba does not [yet] see. It’s only a matter of time, I think, before cameras and the necessary software will be installed to keep the Roomba from getting stuck in the kitchen corner.

What else will computers see?

