It Was All Over After Punch Cards

Written By: Lawrence Waugh, Founder of Calavista Software

I don’t feel old, but when I look around at other people in the IT industry, I realize I’m a dinosaur.

How old a dinosaur? My first programming job (the summer after my sophomore year at MIT) was writing COBOL code to cull trends from Polaroid’s customer registration “database” – really just one big flat file. I have to shake my head at the sheer number of anachronisms in that one sentence.

The first computer I programmed, as a sophomore in high school, had 4K of RAM, and you loaded programs via punched tape. Half of that 4K was consumed by the BASIC interpreter, leaving you 2K for your program. There was no swap space – there was no hard drive to swap with. So the 2K was what you had. You could be typing in a program (on a console that printed directly to a paper roll, not a CRT), and you’d actually get an OUT OF MEMORY response when your program got too big. Programs weren’t stored in some representational form – they were stored as literal strings of characters and interpreted later – so code brevity really mattered. I remember shortening the output strings of a program to save a hundred bytes or so, just so the entire program could fit in memory. That sounds like a Dilbert cartoon, but it’s true. Concise code became an art form for me.

But then I graduated to some serious computing power. The town’s computer was housed in my school, so in Comp Sci 2, we got to use it and its awesome 8K of RAM. Of course, 2K was reserved for the FORTRAN interpreter, but that still left 6K – essentially 3x the heap and stack space to write code in. It also had a Winchester fixed drive, but that was some magical thing that we never got to play with. We had to submit our FORTRAN programs on punch cards, laboriously typed out on a punch card machine.

I learned a lot from that, believe it or not.

We would have one assignment per week. The “jobs” we submitted – a deck of cards wrapped in a rubber band, placed in a cardboard box in the server room – would be executed at night, after the town’s work was done. The jobs included cards for the data, so that the computer would read the program, then read the data, and then do its thing. Mr. Hafner, a tall, bespectacled man, would come back after dinner, load a deck, and press the “execute” button. The computer would suck in the cards, then print out the results. Whatever they were. He’d then take that deck, wrap it in the output (that venerable fan-fold, wide, striped paper), and put it back in the box. We’d come in the next morning and rummage through the box for our output.

So that’s at most five attempts – one run per school night – to get your program exactly correct, and that’s assuming you started trying to run it the same day it was assigned. More likely, you’d spend a few days writing the program before trying to execute it. So two, maybe three, tries at best.

Now imagine coming in the day before a project is due, picking up your output, and seeing:

Syntax Error on line 18

D’oh! A day’s effort lost, with no indication of whether – once the stray comma on line 18 was corrected – your program would even compile, let alone run correctly. It all depended on the next night’s run. Last chance.

It didn’t take many F’s on assignments before you got very, very careful about your coding. Syntax errors were one thing – but algorithmic errors were another. The teacher, Mrs. Sheldon, had a set of data she’d feed your program. Running the program once or twice on trivial data wouldn’t catch most of the errors. So you sat down and flowcharted things out. You thought up edge cases. You compared algorithms with your friends’. You shot holes in their ideas, and defended your own. You read, and re-read, your punched cards. You’d swap decks and read each other’s work, in case your eyes might catch something your friends had missed.

In short, because we only had a few tries to get it perfect, the cost of a mistake – whether design, implementation, or syntax – was grave. And because the cost was so grave, we killed ourselves to make sure we didn’t make mistakes. As a result, we did the kind of design, peer review, and QA work that most development shops today would be proud of. We were barely teenagers, writing complex code, working on antiquated equipment, writing everything from scratch. But our code almost always compiled, and ran correctly, the first time.

When I was a senior, the high school got a CMS system, with some CRT terminals. We also got an Apple IIe. Now I could type my code in and run it, on the spot. I never touched a punch card again.

And my coding immediately got sloppier. I started typing before I’d finished thinking. I started trying to compile before I’d finished coding. But worst of all, I started coding before I’d really designed. Sometimes things just wouldn’t work, and I’d have to start over. But the more insidious errors crept in when my code would almost work correctly. It would work in the obvious way, but I wouldn’t have spent the time to think through the edge cases, or complex numbers as input, or just the unexpected.

Over the years, I’ve tried to discipline myself to not work that way. I’ve used PSP, enforced estimates, allocated specific time to design and peer review… but at the end of the day, my keyboard beckons. And it’s hard to not want to start typing when you’re excited about a project. “Why not just try this, and see if it works?”

And when I do that, I invariably write inferior code.

At Calavista, we have a process where the developer doesn’t commit code directly. Instead, they create a regression test that demonstrates that their code works, and then tell our system that they’re ready to “close” the issue. The system might check to see if another developer has diff’ed their changes in the past 24 hours. If so, it will take their new code, merge it with any recent changes to the code base, check that code into a temporary sandbox, build the product from source, create an installable, install it, and smoke-check the result.

Then, it will run the developer’s new regression tests to ensure that what they set out to do actually works. Finally, it will run every regression test ever written as part of previous closes, to make sure all of those still work. Depending on the project, it may also run performance tests, or code coverage analysis, or any one of several other tests.

Only then, when all tests have passed – when everything has worked flawlessly – does the code get promoted into the main line (or root, or master branch, or whatever) and made available for manual testing. Which is a whole different story.
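To make that flow concrete, here’s a minimal sketch of a gate like the one described above. To be clear, this isn’t Calavista’s actual tooling – the stage commands (build.sh, run_tests.sh, and so on) and the final git push are placeholders – it’s just an illustration, in Python, of the shape of the pipeline: every stage must pass before anything is promoted.

    import subprocess
    import sys

    # Hypothetical stage commands – stand-ins for whatever build and test
    # tooling a real project would use. A real gate might also first confirm
    # that another developer has diff'ed the changes in the past 24 hours.
    STAGES = [
        ("merge recent changes into a temporary sandbox", "git merge origin/main"),
        ("build the product from source",                 "./build.sh"),
        ("create and install the installable",            "./package.sh && ./install.sh"),
        ("smoke-check the result",                        "./smoke_test.sh"),
        ("run the developer's new regression tests",      "./run_tests.sh --new-only"),
        ("run every regression test ever written",        "./run_tests.sh --all"),
    ]

    def run_stage(description, command):
        """Run one stage; any nonzero exit code fails the close."""
        print(f"==> {description}")
        return subprocess.run(command, shell=True).returncode == 0

    def close_issue():
        for description, command in STAGES:
            if not run_stage(description, command):
                print(f"FAILED: {description}. The issue stays open; nothing is promoted.")
                return 1
        # Only with every stage green does the code reach the main line.
        print("All stages passed – promoting to the main line.")
        return subprocess.run("git push origin HEAD:main", shell=True).returncode

    if __name__ == "__main__":
        sys.exit(close_issue())

The design point is the same one the nightly card run enforced, just automated: the expensive step – promotion to the main line – only happens after every cheaper check has passed, in order, with no exceptions.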

This process has worked incredibly well at keeping the code base stable and functional. Basically, the code is examined so carefully before it’s checked in that every commit is a release candidate.

But when I think about it, that’s pretty much what we were doing back when we had to work with punch cards. The cost of mistakes was grave – so we didn’t make them. And maybe that’s what we’ve lost in the intervening years, as we’ve sped up and “improved” the development cycle.

So here’s to punch cards. You taught me a lot. Rest in Peace.
