The recent controversy over the MIT students hacking the Boston subway payment card got me thinking a bit about the difference between honest errors (which can be forgiven) and negligence (which can't). Consider the following:
Waiting for the cutover in Philly, 1943
1943: AT&T establishes a new long-distance switching and signaling system in Philadelphia. The system uses pairs of multifrequency tones, like Touch-Tones, sent over the same pair of wires that the customer speaks over. The system works well and AT&T deploys a form of this system throughout its network.
1960s-1970s: Phone phreaks spot the gaping security hole in AT&T's system: control signals are sent over the same wires as customer speech. This means that phreaks can mimic these tones and cause all sorts of havoc -- making free calls, preventing any record of their calls from being made, and reaching internal telephone company operators and equipment that customers are not supposed to be able to dial. AT&T realizes that fixing this fundamental design flaw could cost billions of dollars, and there is simply no easy way to retrofit security into the system. It just needs to be replaced by something better.
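Just to drive home how low the technical bar was, here's a minimal sketch in Python that synthesizes a few MF digit pairs as audio and writes them to a WAV file. The frequency pairs come from the well-documented Bell MF table; the timing and levels are illustrative guesses, not the real switch parameters. The point is that anything able to play tones like these down the line could, in effect, speak the network's own control language.

```python
# Minimal sketch: why in-band signaling is so easy to spoof.
# Anything that can put audio on the voice path can "speak" the
# control protocol. Frequency pairs below are from the well-known
# Bell MF table; durations and levels are illustrative only.
import math
import struct
import wave

SAMPLE_RATE = 8000  # telephone-grade audio

# MF digit -> (tone1 Hz, tone2 Hz); a subset of the Bell MF table
MF_PAIRS = {
    "1": (700, 900),
    "2": (700, 1100),
    "3": (900, 1100),
    "4": (700, 1300),
    "5": (900, 1300),
}

def tone_pair(f1, f2, duration_s=0.075):
    """Return 16-bit samples of two summed sine waves."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        v = 0.45 * math.sin(2 * math.pi * f1 * t) + 0.45 * math.sin(2 * math.pi * f2 * t)
        samples.append(int(v * 32767))
    return samples

def silence(duration_s=0.068):
    return [0] * int(SAMPLE_RATE * duration_s)

# Build an audio clip "dialing" the digits 1 through 5 in MF.
clip = []
for digit in "12345":
    clip += tone_pair(*MF_PAIRS[digit]) + silence()

with wave.open("mf_digits.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(struct.pack("<%dh" % len(clip), *clip))
```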
But let's give AT&T a break, huh? This was the first big automated network. It was uncharted territory. AT&T can be forgiven for not thinking about security at the outset.
Now let's fast forward 20 years.
1982: SMTP -- the Simple Mail Transfer Protocol -- becomes the Internet standard protocol for transfer of electronic mail. Even today this protocol remains the way electronic mail is sent over the Internet. Unfortunately, it has absolutely no provisions for security. As a result it is trivially easy to spoof -- for example, to make email look like it came from your bank instead of the canonical 14-year-old hacker.
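To see just how little SMTP asks of a sender, here's a minimal sketch using Python's standard smtplib. The server name and addresses are made up, and a modern receiving server may refuse or flag the message thanks to checks layered on long after the fact -- but nothing in the protocol itself stops the forgery.

```python
# Minimal sketch of SMTP's trust model: the protocol simply takes
# your word for who the mail is from. Hostnames and addresses are
# made up; a receiving server today may reject or flag this, but
# SMTP itself never asks us to prove we own the From address.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "security@yourbank.example"   # claimed sender: anything we like
msg["To"] = "victim@example.com"
msg["Subject"] = "Please verify your account"
msg.set_content("This message only *says* it is from your bank.")

# Hand the message to some mail server willing to accept it.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)
```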
But hey, give the Internet community a break: the Internet is new and grew up in a collaborative, collegial environment. Security wasn't at the top of anyone's list of concerns.
Remember old brick phones like this?
1983: The first analog cellular telephone system is deployed. Called AMPS (Advanced Mobile Phone System), it does not use encryption and has little security. The result is that cell phone calls are vulnerable to eavesdropping by people with radio scanners. A few years later cellular telephone fraud via "cloning" takes off. The cellular industry claims it loses $150 million a year due to this fraud.
1986: On a more personal note, Brian Kantor and I develop NNTP -- Network News Transfer Protocol -- an Internet protocol for transferring USENET news articles.
Like SMTP, it has no provisions for security.
Fast forward another 20 years.
2006: The Massachusetts Bay Transportation Authority (MBTA), which runs the Boston subway, deploys its CharlieCard electronic ticketing system.
Hey, look, a $653 CharlieCard! Neat!
2008: Oops! Turns out the CharlieCard has no security to speak of. MIT students hack the card using simple, easily available off-the-shelf hardware.
The Boston subway system's response? File a lawsuit to prevent the students from disclosing the system's vulnerabilities to the public.
To err is human, to forgive divine. There will always be security problems due to honest errors. But there is a difference between an honest error and negligence. Designing any system today without factoring in security from the ground up is negligence, plain and simple. The solution is not to sue people, nor to rely on laws that make it illegal to hack systems, but to use good engineering practice to design systems that are appropriately secure for their intended application. Please note that I'm not arguing for "absolute security" (whatever that is) but an appropriate level of security that isn't a total joke.
For AT&T in 1943, I can forgive. For those of us developing technologies in the 1980s, like SMTP or cellular phones, I can mostly forgive -- though we probably should have known better.
But for the MBTA in 2007? Sorry, Charlie. I'm fresh out of forgiveness.