I hate exceptions. I especially hate exceptions in C#. I hate how they are created and how they're handled. I hate the syntactic structure used to process them and the semantic meaning behind it all.

Let's start with some definitions. For some teachers, an exception covers all forms of errors, from runtime failures to invalid inputs to missing keys in a dictionary. For other teachers, an exception is specifically for an exceptional circumstance — an unhelpful definition in that it requires a definition of its own. What is an exceptional circumstance? Most tutorials prefer to explain by example, and they (and Microsoft's own docs) use the 'dividing by 0' example:

static double SafeDivision(double x, double y)
{
  if (y == 0)
    throw new System.DivideByZeroException();
  return x / y;
}

This is the example the actual Microsoft uses in their actual docs on actual exceptions. What about the input value 0 is exceptional? It's the default value for a double. I'd be more surprised to discover that y was not in fact 0. Yet this is so exceptional it has its own exception type packaged in the standard libraries.

If this isn't confusing enough, their example function starts with the word 'safe' — which to my mind means the code won't ever throw an exception deliberately.


Maybe it would be better to think about exceptions in terms of purposes instead of definitions. I suppose an exception in C# should be capable of two things: human-readable feedback to a developer/consumer and diagnostic information that may help towards recovery. I can't fault the exception here: DivideByZeroException lets me know exactly what happened just from the type name, and any code catching that exception type can choose what to do next, either by defaulting the value or allowing the exception to bubble (maybe wrapped in some other contextualising exception).

What does the code look like if we attempt recovery?

double result;
try
{
    result = SafeDivision(x, y);
}
catch (DivideByZeroException ex)
{
    // log if you like
    result = -1;
}

DoThingWithResult(result);

It's not pretty. I've had to declare a value and then populate it in one of two places. I can't tell from this code if there are other exceptions the function might throw that I'm not dealing with. My default value doesn't make any semantic sense at all.

What does the code look like if we want to contextualise the exception and bubble it?

try
{
    var result = SafeDivision(x, y);
    DoThingWithResult(result);
}
catch (DivideByZeroException ex)
{
    // log
    throw new ResultCalculationException(ex);
}

Inconsistencies everywhere! Whenever the catch block isn't attempting recovery, the DoThingWithResult step moves into the try block so that we don't need to declare result in the outer scope.

Now, what if DoThingWithResult has its own possible exceptions that we know about? I suppose we should extend our try block.

try
{
    var result = SafeDivision(x, y);
    DoThingWithResult(result);
}
catch (DivideByZeroException ex)
{
    // log
    throw new ResultCalculationException(ex);
}
catch (FormatException ex)
{
    // log
    throw new ResultHandlingException(ex);
}

Most developers seem to be at ease with this, but I can't make heads or tails of it: which step in the try block causes which exception? Do both calls risk throwing the same exception? The sceptical reader can respond by saying that, of course, each dangerous step can exist in its own try block, coupled with only the exceptions that function may throw, but most developers see no reason to do this.
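For completeness, here's a sketch of the per-step version the sceptical reader proposes. Note that result is back to being declared in the outer scope, because the second try block needs it:

double result;
try
{
    result = SafeDivision(x, y);
}
catch (DivideByZeroException ex)
{
    // log
    throw new ResultCalculationException(ex);
}

try
{
    DoThingWithResult(result);
}
catch (FormatException ex)
{
    // log
    throw new ResultHandlingException(ex);
}

At least now each catch is unambiguously paired with the call that can throw it, at the cost of yet more ceremony.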

It's also very hard to audit this code: have I failed to catch an exception type from which I could recover successfully? Have I failed to catch an exception type I should contextualise if I see it? Is no one else very concerned that this code, for all its exception handling, could be failing to give me the information I need when it inevitably breaks?


I suppose what I'm saying is this: C#'s exception handling system isn't powerful enough to allow me to understand what any particular function call might do. So how can we use the strength of C#'s type system to fix it?

Before I jump into something more complicated, I think we should start with the classic division example. The SafeDivision function could be handled simply by strengthening its types.

There are a few glorious times in development when we know that a failure has one explanation. For example, if calling 'pop' or 'head' or 'first' or '[0]' on a non-null list happens to fail, we can be pretty sure that it failed because the list was empty. In the case of double division, we have the almost-guarantee that, if it failed, it failed because the denominator was 0. If we really wanted to, the function could simply return a double? and the caller could decide whether to default the value, or carry on with a possibly-null double in the next steps.
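A sketch of that nullable-returning version:

// Sketch: signal the single known failure mode with null instead of a throw.
static double? SafeDivision(double x, double y) =>
    y == 0 ? (double?)null : x / y;

// The caller chooses: default the value immediately...
var result = SafeDivision(x, y) ?? -1;
// ...or carry the double? forward and decide later.
double? maybe = SafeDivision(x, y);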

This still sucks, and we can go one better. We know that we plan to use input value y as a denominator, so we know that, regardless of where it comes from, it should be non-zero. That means we can encode the constraint in its type.

struct NonZeroDouble {
    public double Value { get; }
    private NonZeroDouble(double value) => Value = value;
    
    public static NonZeroDouble? From(double value) => value == 0
      ? null
      : new NonZeroDouble(value);
}

And SafeDivision changes too.

static double SafeDivision(double x, NonZeroDouble y) =>
    x / y.Value;

Here's the catch: one can only call SafeDivision with a non-null NonZeroDouble, and one can only construct a NonZeroDouble using the NonZeroDouble.From function, which returns a nullable. The caller must handle the case where they've put 0 in before code that calls SafeDivision will even compile.
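A sketch of what that forced handling looks like at a hypothetical call site:

// The compiler won't accept a NonZeroDouble? where a NonZeroDouble is
// expected, so the zero case must be dealt with here, up front.
var denominator = NonZeroDouble.From(y);
if (denominator is null)
{
    // decide here what a zero denominator means for this caller
    return;
}
var result = SafeDivision(x, denominator.Value);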

Here's the catch's catch: C#'s null-forgiving 'bang' operator (!) might get in the way, but abuse of the bang operator can't be fixed by C#'s type system, so there's nothing we can do there.
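A sketch of the abuse in question (since NonZeroDouble? is a nullable value type, the ! is effectively a no-op here and it's the .Value access that reintroduces the runtime throw; were NonZeroDouble a reference type, the ! alone would be the culprit):

// Defeats the whole point: if y is 0, this throws
// InvalidOperationException at runtime instead of failing to compile.
var result = SafeDivision(x, NonZeroDouble.From(y)!.Value);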

It seems this exception's existence is dependent on weakly typing the function. The same is true of 90% of the user-defined exceptions I come across: they occur not because of exceptional circumstances, but because of weak typing. To throw an exception in a function body for a given input permitted by the function's type signature is, to my mind, barmy.


Now we come full circle. The given definitions of exceptions are too inconsistent to be useful, and when we think instead about their purposes, we can demonstrate that other constructs can do a better job, by moving error-handling code closer to where we should handle it. Having done that work, I think I'm in a better place to provide a definition: an exception should be thrown only for a problem that arises independently of the choice of inputs.

The sceptical reader joins me in doubting whether this re-typing of everything could possibly deal with 90% of exceptions, especially when that retyping is likely to cause all manner of problems for incompatible codebases. The poor sod tasked with defining NonZeroDouble may soon need to define PositiveDouble too with all its own boilerplate, and then they'd need to make sure the latter inherits from the former, and so on.
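For what it's worth, that hypothetical PositiveDouble would look something like this, and the inheritance the poor sod wants isn't even on the table, since C# structs can't inherit from one another:

// Hypothetical sibling type: same boilerplate, different predicate.
struct PositiveDouble {
    public double Value { get; }
    private PositiveDouble(double value) => Value = value;

    public static PositiveDouble? From(double value) => value <= 0
      ? null
      : new PositiveDouble(value);
}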