
The Throw Keyword was a Mistake

by Chris Fox, December 20th, 2019

Too Long; Didn't Read

The throw keyword extended Structured Exception Handling so that developers, not just the runtime and the operating system, could raise exceptions. It became a sledgehammer applied to thumbtack problems: thrown for ordinary error conditions where return codes would do, then for conditions that were not errors at all, and Java made routine throwing idiomatic. Throw carries an enormous performance penalty, and worse, it adds a second, non-local system of control flow that makes code impossible to analyze reliably.


Exception Handling

Decades ago, when a program crashed, you would see a dire error message.

This meant that something horrible had happened. Usually the problem was that the software had tried to read or write memory outside its address space. It could mean some other problem, a full disk or a device error, but it was an unrecoverable condition, something that software developers call an exception.

There was only one option: tap the spacebar or use a mouse, and invoke “OK.”

Why? It’s not OK. You may have just lost two hours of work (that’s your fault; hit Ctrl-S every few seconds or you deserve to lose it). But this still sucked.

Exceptions aren’t new. Dealing with them more elegantly than simply crashing goes all the way back to 1962 and a near-forgotten language called LISP.

Modern programming languages pulled the better ideas from the diverse implementations of exception handling, distilled them into a simple syntax with a few new keywords, and Structured Exception Handling (SEH) was born.

The idea was that software developers could trap exceptions and offer a more elegant shutdown than simply crashing and losing everything:

try
{
    do_something_unsafe();
}
catch(Exception exception)
{
    save_work();
    apologize(exception);
    exit();
}

So far so good. Unless the application data was corrupted by the Horrible Event, you had a cleaner shutdown and could restart the program and continue.

And it got better. Not all exceptions meant shutting down; some were recoverable. If the program tried to access forbidden memory it could return to the previous state and continue. Happy days.

Can’t Leave Well Enough Alone

Then some little smarty decided that the operating system shouldn’t have all the fun, and that developers should be able to create exception conditions and use this new architecture themselves. In what is likely one of the worst things to ever happen to software development until pair programming came along, the architecture was extended with

throw;

Calling this new keyword makes the instruction pointer jump to the nearest catch block, wherever that may be; exception types were defined so the jump could go to a selected handler. This may have been a well-meant idea; more likely it was just to be orthogonal and provide developers with both ends of the transaction.
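
To make the mechanics concrete, here is a minimal Java sketch (the class and function names are hypothetical): throw transfers control to the nearest enclosing catch block whose declared exception type matches, however far away that block is.

public class ThrowDemo
{
    static void parse(String s)
    {
        if (s == null)
        {
            throw new IllegalArgumentException("input was null");
        }
        Integer.parseInt(s); // may itself throw NumberFormatException
    }

    public static void main(String[] args)
    {
        try
        {
            parse(null);
        }
        catch (NumberFormatException e)
        {
            // not selected: the thrown type does not match
            System.out.println("bad number: " + e.getMessage());
        }
        catch (IllegalArgumentException e)
        {
            // selected: control jumps here from inside parse()
            System.out.println("bad argument: " + e.getMessage());
        }
    }
}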

Wise developers, experienced developers, thoughtful developers: that is presumably who its designers had in mind. Probably nobody ever thought about what would happen when this catastrophic operation was put in the hands of less experienced developers.

And we can’t put the genie back in the bottle.

What’s So Special About Exceptions?

The point of SEH was EH. Exception handling. Gravely serious conditions. But just as the return keyword isn’t restricted to the end of a function, throw wasn’t restricted to exceptions. It wasn’t long before developers were using it in error conditions where return codes were more appropriate, like parameter validation.
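
A hedged sketch of the two styles, with hypothetical names: the first method turns ordinary parameter validation into non-local control flow with throw; the second reports the same condition with a plain return code that the caller handles where it occurs.

public class Validation
{
    // throw-based: a bad argument becomes non-local control flow
    static double divide(double numerator, double denominator)
    {
        if (denominator == 0.0)
        {
            throw new IllegalArgumentException("denominator must be non-zero");
        }
        return numerator / denominator;
    }

    // return-code style: the caller deals with the error where it occurs
    static boolean tryDivide(double numerator, double denominator, double[] result)
    {
        if (denominator == 0.0)
        {
            return false;
        }
        result[0] = numerator / denominator;
        return true;
    }

    public static void main(String[] args)
    {
        double[] result = new double[1];
        if (tryDivide(10.0, 0.0, result))
        {
            System.out.println(result[0]);
        }
        else
        {
            System.out.println("cannot divide by zero"); // handled locally, no jump
        }
    }
}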

And then for conditions that weren’t errors at all. And, inevitably, as a way to save a few minutes of typing. And a lot of these were the same people who compulsively hyper-optimized even the most performance-uncritical code, having no clue that throw exacts an enormous penalty.

Example. Our intrepid short-attention-span test-driven developer is three levels deep in void functions and suddenly discovers a potential error condition. He doesn’t want to “refactor” three void functions to return error codes, and he has a foosball game with the team in five minutes, so he issues a call to throw, wraps the call to the first of the void functions in a try-catch block, and runs off to his game. Facepalm.
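
What that shortcut looks like in code, as a sketch with hypothetical names: the innermost void function throws, the error escapes three stack frames in one jump, and the outermost call is wrapped in a catch-all.

public class Shortcut
{
    static boolean diskIsFull() { return true; }                  // stand-in for the real check
    static void log(Exception e) { System.err.println(e.getMessage()); }

    static void levelThree()
    {
        if (diskIsFull())
        {
            throw new RuntimeException("disk full"); // escapes three frames at once
        }
    }

    static void levelTwo() { levelThree(); }
    static void levelOne() { levelTwo(); }

    public static void main(String[] args)
    {
        try
        {
            levelOne();
        }
        catch (RuntimeException e)
        {
            // the error surfaces here, far from where it was detected
            log(e);
        }
    }
}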

Throwing exceptions becomes just another tool in the box, a glorified longjmp; why should it be restricted to real exceptions?

Why indeed.

And Then Came Java

And in Java throw is idiomatic. Java developers are expected to throw routinely. It’s even better than return! Get a bad argument? Throw an exception. Didn’t find a value in an array? Throw an exception.

Whee.

Before this, using this sledgehammer of a keyword on a thumbtack of an issue was just bad judgment; then Java came along and validated that bad judgment.

The Year Everything Crashed

Then SEH was added to C++. If you were using computers in the years that followed, you remember the time well even if you didn’t know the reason: a lot of software that had been stable and solid before started crashing and misbehaving. In one year iTunes went from unbreakable to almost dangerous to use, and you had to save off a copy of your music database or lose it every few days. There was a massive loss of software reliability, and if you watched memory in your Task Manager you could see it leaking in torrents.

Everyone was using throw everywhere, and it wasn’t working right. Garbage Collection was far from perfect, far from reliable, and developers using throw just expected things to be magically cleaned up. They weren’t.

The Performance Penalty

Throw is hugely expensive. Some mobile groups at Microsoft started looking at their profiling data and their faces went white; word went out to people coding for weaker processors to call a halt to the use of the throw keyword.
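
The cost is easy to see for yourself. This is a rough, illustrative micro-benchmark sketch, not a rigorous measurement (a serious comparison would use a harness such as JMH): the same “not found” condition is reported once as a return value and once as a thrown exception.

public class ThrowCost
{
    static int findByReturn(int[] data, int key)
    {
        for (int i = 0; i < data.length; i++)
        {
            if (data[i] == key) return i;
        }
        return -1; // miss reported as a plain value
    }

    static int findByThrow(int[] data, int key)
    {
        for (int i = 0; i < data.length; i++)
        {
            if (data[i] == key) return i;
        }
        throw new RuntimeException("not found"); // miss reported as an exception
    }

    public static void main(String[] args)
    {
        int[] data = new int[16]; // never contains the key, so every lookup misses

        long t0 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) findByReturn(data, 42);

        long t1 = System.nanoTime();
        for (int i = 0; i < 100_000; i++)
        {
            try { findByThrow(data, 42); } catch (RuntimeException ignored) { }
        }

        long t2 = System.nanoTime();
        System.out.println("return: " + (t1 - t0) + " ns, throw: " + (t2 - t1) + " ns");
    }
}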

Processors and RAM have had to stay well ahead of the software running on them because too many developers write inefficient code. New languages had to collect allocated memory, cleaning up after sloppy work, because hardly anyone had the discipline to manage memory properly (I remember a Microsoft division meeting where a David Stockman clone sagaciously told us that memory leaks were a fact of life). Rather than cultivate discipline, the languages had to add toilet paper functionality.

With throw this problem was squared and cubed.

The Real Problem With Throw

Control Flow

Going all the way back to the earliest high-level languages we have had approximately the same constructs: if/else, while, for; functions with entrances and exits. Code could be analyzed; its behavior was clearly deterministic; every path could be enumerated and every condition examined. This is control flow. Good developers could find bugs without a debugger, just by reading the code.

Not Anymore

Yes, throw is expensive in CPU cycles, but that’s not the real problem. Throw added a second system of control flow, disconnected from the usual one, unpredictable, hybridized and intertwined with it. Code using both could not be reliably analyzed, not by human eyes. Not unless the scope of throw and catch was very tightly and carefully constructed, and if you think most developers are going to do that, then you haven’t worked with many average ones.

One early large Java project was subjected to analysis with advanced tools and found to have an astounding 13,000 ambiguous code paths based on thrown exceptions; the behavior of the code was massively unpredictable.

What we can predict, and with certainty, is that developers are not going to write with the rigor required to use this keyword safely. Software development is disintegrating before our eyes; focus and concentration are deprecated, and just as inefficient code is mitigated by ever-faster processors, increasingly sloppy work is mitigated by an obsessive shift of focus from development to testing.

People can’t enter a flow state anymore, not when their day starts with a slog of a commute to attend a daily scrum and they are interrupted all day by meetings and open office conversations. It’s not possible to rise above mediocrity when you can’t concentrate and software companies are more concerned with team cohesion than with quality work.

And it’s in this world that we have this horrible keyword with its wholly unpredictable behavior.

What We Can Do

The Impossible Dream

If I had my way:

  • the keyword would be removed from all languages that use it
  • only the language runtime and operating system would be allowed to raise exceptions

That isn’t going to happen; most projects would break. They would have to be rewritten.

Within Our Grasp

Languages with SEH should have a compiler option that disables throw so it cannot be used. Barring that, development leads should forbid its use; the mobile groups did so. Some languages already designate APIs that throw. Calls to these APIs should be rigorously wrapped in local try-catch blocks, at the same scope as the exception-throwing API, with a global catch around everything, or perhaps nested ones, solely to find unhandled throw conditions and patch them. Software is ready when the outer handlers are never hit. Outside the small local blocks, control flow returns to deterministic predictability.
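
A sketch of that containment pattern, with a hypothetical config file name: the one call into a throwing API (Files.readAllLines) is caught locally, at the same scope, and converted into an ordinary value; the outer catch-all exists only to reveal any throw that escaped its local block.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class Contained
{
    // the throwing API is handled right where it is called
    static List<String> readConfig(Path path)
    {
        try
        {
            return Files.readAllLines(path);
        }
        catch (IOException e)
        {
            return List.of(); // error converted to an ordinary, local value
        }
    }

    public static void main(String[] args)
    {
        try
        {
            List<String> config = readConfig(Path.of("app.conf"));
            System.out.println(config.size() + " config lines");
        }
        catch (Exception e)
        {
            // should never be hit; if it is, a throw escaped its local block
            System.err.println("unhandled exception: " + e);
        }
    }
}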

This preserves the original intent; programs don’t crash so much and recovery is possible. Software development goes back to being a deterministic endeavor and we regain some stability, probably a lot of stability.

Don’t use throw. Just stop. Stop now.
