Sharkysoft

The 80s called. They want their line length limits back.

by Sharky

Abstract

This article explores the persistence of outdated line length limits in modern coding style guides, tracing their origins to legacy hardware constraints. Using the classic anecdote of “Grandma’s ham” as a metaphor, it argues that many coding conventions persist without scrutiny, long after their original justifications have vanished. The piece highlights how contemporary engineers, with high-resolution screens and sophisticated editors, should prioritize readability and logical structure over arbitrary numerical constraints. Ultimately, it advocates for abandoning rigid line length limits in favor of human-centric code formatting that enhances clarity, maintainability, and collaboration.


[Illustration: Obsolete computer equipment sits on a desk. In the foreground, a printing terminal spews wide, continuous-feed paper whose dense, columnar output engulfs the desk. In the background, a green-screen monochrome monitor displays similar output.]

Perhaps you've heard the story of the homemaker who always chopped off the ends of the ham before putting it in the oven. If you haven't, please take a moment to read one of the following accounts.

I've included two versions, because the first is simply better storytelling, while the second frames the story in the context of cultural training.

In software engineering, we see cultural training manifest in a variety of ways. Cultural training can be a time-saver, but it can also lead us down the wrong path when we fail to continually question the validity of what our predecessors regarded as "best practices."

One such example that comes to mind is the line length limit of 132 characters that keeps popping up in coding style guides. In the past, every time I encountered this number, I found myself asking, "Why not 131 or 133? What's so special about 132?" Recently, I found the answer.

Are you old enough to remember line printers? Do you remember those mile-long continuous fan-fold pages, with detachable sprocket-feed edges, that had to be manually loaded into those printers? As it turns out, 132 is the maximum number of characters per line that certain line printers could print. These were the expensive, business-oriented printers that fed 14-inch-wide paper. In the days before excellent interactive debuggers, fast processors, 4K screens, and reliable storage, engineers frequently printed source code — long vertical banners of geekiness — both for debugging and archival purposes. So, it was essential that everyone comply with the 132-character limit.

I fondly remember my 1980s dot matrix printer. It had a DIP switch that let me toggle the font width to "skinny characters." This allowed me to easily "upgrade" my mere 80-character printer to fit 132 characters on a line. On my 8½-inch paper, the skinny characters were dense and challenging to read, but I sure saved a lot of trees and money!

Yes, you guessed it. That 132-character line limit — the one we now see in many home-brewed coding style guides — is a vestige of ancient industrial line printers. That completely arbitrary 132-character hardware limitation induced a FORTRAN coding standard that was subsequently emulated in the coding standards for other emerging languages, and the rest of the story is — well, just the ends of the ham.

To all the engineering teams that continue to back up their source code with dot matrix line printers, you should probably hang on to your 132-character limit. For the rest of you, it's time to update your thinking.

By the way, if you're also wondering where the magic number 80 came from, here's a bit of trivia: FORTRAN punch cards were 80 columns wide. Even though most command-line consoles still default to 80 columns, that's just another leftover from the past. More ham. Today, no sane person would ever try to restrict his code to an 80-character line limit.

If there really is a valid use case for a 132-character limit, then there is probably also one for 133. At this range, the choice seems entirely arbitrary. If we're going to limit it, we should probably stick with a rounder number that's a bit more brain-friendly, like 200. (Not 256! That's just unnecessary geek speak.) But even 200 might be unnecessarily restrictive. My tiny MacBook easily displays 230 characters per line, and it is quite readable even for my aging eyes. (Admittedly, I'm doing this while enjoying the benefit of a variable-width font, but that's a whole separate piece of ham to discuss.)

The truth is, we don't really have a compelling reason anymore to enforce line length limitations at all. Memory is much cheaper now than it was in the '80s, disks are much faster, and compiler optimizers work great. The perverse motivations of the past, to pack as much code as possible into a single line regardless of the pain it caused developers, are simply gone. These days, most engineers voluntarily break their code into manageable lines before they become unwieldy. And those who don't naturally do this quickly learn from the rest of the team that reviewable code demands it. Now it's the PR, not an ancient line printer limitation, that governs the readability of the code we write. Let the reviewers dogpile on the offender when his lines are too complex. Forget about the number.

And when that dogpile doesn't happen, there's probably a good reason for it. Perhaps the line isn't meant to be reviewable at all, or at least reviewability isn't a worthwhile goal. For example, if a string literal in your source includes a base-64-encoded file, then readability isn't really a concern. After all, that string is a single, opaque unit — so why not just let it be a single line? Let that sucker shoot 400 characters off the right edge of the screen — and ignore it! If someone really wants to see the whole thing, he can just enable soft line wrapping in his own editor.
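To make this concrete, here's a minimal Python sketch. The payload and names are invented for illustration, and the `# noqa: E501` marker assumes a flake8-style linter; the point is simply to exempt the opaque literal from the line-length rule rather than mangle it with continuation breaks.

```python
import base64

# A base-64-encoded payload is a single, opaque unit; splitting it across
# physical lines adds noise without aiding review. The trailing
# "# noqa: E501" tells flake8-style linters to skip the line-length check
# for this one line only. (The literal here is just b"ham", repeated.)
PAYLOAD_B64 = "aGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFtaGFt"  # noqa: E501

def payload_bytes() -> bytes:
    """Decode the embedded payload back into raw bytes."""
    return base64.b64decode(PAYLOAD_B64)
```

The reviewer can ignore the literal entirely; the decode call is the only part that needs human attention.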

It's already the case (or it should be) that when someone produces unreadable code, his reviewers will smite him (or nudge him in a friendly way). It's also already the case that engineers who are in full compliance with line length limitations can still produce unreadable code. The PR is the last backstop for producing maintainable code. This doesn't need to change, and honestly, it probably shouldn't change. It's pretty easy to argue that when a style guide is poorly conceived and unconditionally or automatically enforced, it can actually interfere with readability. Good judgment should always take precedence over style guides and automatic formatters.

Further complicating the matter of line length is the fact that many programming languages have entered a brave new world where it is actually pretty difficult to precisely define what a line of code is. The ubiquity of functional programming and its associated grammatical structures — lambda functions, nested classes, and the like — has really screwed with the concept of "a line of code." Therefore, until we can all agree once again on the definition of a line of code, let's provisionally agree to adopt the following, much more enlightened pearl of cultural training:

Lines of code, however they are defined, should never be split based on length alone. Instead, they should only be split based on complexity and structure.
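To illustrate why "a line" is slippery, consider this small Python sketch (the data and names are invented): one logical statement spans several physical lines, and every break falls at a structural seam rather than at a character count.

```python
words = ["Grandma", "ham", "FORTRAN", "printer", "PR"]

# One logical statement, several physical lines. Is this one "line of
# code" or four? Each break lands at a structural seam: the
# filter-and-transform expression, then the sort key.
short_upper = sorted(
    (w.upper() for w in words if len(w) <= 5),  # keep short words, uppercased
    key=len,                                    # order them by length
)
```

No character limit dictated this layout; the structure of the expression did.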

Most lines that are "too long" are probably also too complex and should therefore be split semantically, at structural seams. Thoughtful presentation of complex source code cannot easily be achieved through an automatic formatter. However, when humans take the time to do this, it not only benefits the reader, but when it is done properly, it also leads to more concise diffs during code reviews, fewer merge conflicts, and more durable change attribution. This in turn leads to better code reviews, higher-quality code, and happier teams. 🦈
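As a closing sketch (the names and data are hypothetical), here's the same idea applied to a dense expression: instead of wrapping one long line wherever a limit happens to fall, each step of the computation gets its own named line.

```python
def billable_total(orders):
    # Hypothetical data shape: each order is a dict with "status" and
    # "amount" keys. Splitting by meaning, not by column count, turns
    # one dense expression into three named steps.
    shipped = [o for o in orders if o["status"] == "shipped"]
    amounts = [o["amount"] for o in shipped]
    return sum(amounts)

# The equivalent dense one-liner this replaces:
#   return sum(o["amount"] for o in orders if o["status"] == "shipped")

orders = [
    {"status": "shipped", "amount": 40},
    {"status": "pending", "amount": 99},
    {"status": "shipped", "amount": 2},
]
```

Because each step sits on its own line, a future change to the filter or the summation touches exactly one line, which is where the more concise diffs and more durable change attribution come from.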