Complex language features take a much larger toll on reading code than they do on writing it. One example that I like to use is that while Go's lack of operator overloading is occasionally a nuisance, it means that you'll never have to dig deep to understand what "+" is doing on a particular line, since it'll always behave according to the core language rules.
I think the total cost of debugging hours often ends up being several orders of magnitude higher than the cost of development hours. At least for me, debugging involves scanning the code to load my brain with as much potentially relevant context as I can, and then trying to analyze what parts of that context might explain the observed behavior. A carefully designed language should help minimize both the size of that context and the number of linguistic gotchas within it.
If I were to use an AI assistant, it would let me ask more pointed questions. Wouldn't that improve the assistance the AI can provide?
That is not necessarily true. The error handling in particular makes Go code really awkward to read. A lot of languages with an expressive type system look awful when you look at their methods, but the same thing can be done with Go methods. Writing hard-to-understand Go code is just as easy as in, say, Scala or Rust.
Oh, absolutely. I don't mean to claim Go as an outstanding example of a well-designed programming language. I appreciate Go's language design, but I also similarly appreciate Scala, Rust, Java, Erlang and various other programming languages I've interacted with.
My point is more against the notion that AI assistants would make careful language design any less important than in favor of any particular language. The popular programming languages all have their own virtues and drawbacks (and the unpopular ones almost certainly have at least their own drawbacks).
> Especially the error handling makes it really awkward to read go code.
Go does not have error handling at all, just general value handling. What do you find awkward about its value handlers? They are essentially the same as in every other C-style language out there.
It's certainly not difficult to understand Go error handling. But it does take up quite a bit of real estate. More importantly for me, it's too easy to forget an error check. Having primitives that make error checking painless and validated by the compiler would be nice.
If I do the latter, I cannot ignore `err`. If I do, the compiler yells at me because there is now an unused value in my code.
If I do the former, then I made the conscious choice that the error doesn't matter to me. Yes, this goes for `foo()` as well, because Go makes it exceedingly clear that errors are just return values, and that call ignores all returns.
I never understood this. I do understand that it is easy to forget to handle an error, just as it is easy to forget to handle a distance or a temperature. Humans make mistakes.
But nobody laments about how they forget to handle distances or temperatures, or cry for special constructs to ensure that they don't screw up their distance and temperature handling. What is it about the word error that sends developers into a tizzy?
Why would we want a solution that only works for errors and not for values of all kinds?
Because handling errors is important for complex programs. Whether errors-as-values or exceptions or aborts are the best option, being able to write a program that handles errors well makes it that much more robust. Don't want some weird behavior happening when the program should've noticed an error and dealt with it somehow.
Handling values is important for all programs. You don't want some weird behaviour happening in any case. If you are writing a system that turns on a heater when the temperature falls below freezing, you're going to have a really bad time if you forget to handle the temperature.
So, yes, it makes sense to have constructs that can help with that problem, but it would be weird to have such constructs only for what a human considers an error condition. You would want that for every case that needs to be handled.
Otherwise you have to resort to testing, and if you accept that testing is good enough to ensure that you don't forget to check the temperature, then it is also good enough to ensure that you have checked an error. There is nothing special about the error case. It's just another state like any other.
An error signals that a non-error value you were relying on will not be available. Control flow should nearly always change in response to that, often drastically. That’s why a language should provide a caller-controlled non-local exit from a futile chain of calls. Many languages only do that for errors, which is not a major limitation when users can define errors.
> An error signals that a non-error value you were relying on will not be available.
Except the non-error value should always be available, even if that means relying on the zero value. Zero values should be useful. An error state is an independent state. It should also be useful without any other values.
While the caller may choose to use the error state to make decisions about other state, it is faulty API design to force that upon the caller.
Many domains have no meaningful zero value. If "getUserByID" returns a fake user, anything you do with it will be incorrect, if you don't panic and die (as zeroes of builtin types often do).
Logically, getUserByID returns nil when there is no user, not a fake user. The caller can then check for its nil-ness without needing to consider the error value. nil is a useful value. For better or worse, nil is how Go convention represents the absence of a value. In fact, convention sees error also rely on nil to be useful in the very same way.
It's funny how people completely forget how to write software as soon as they see the word error. Why is that?
Well, I think that's why a lot of people are disappointed that Go doesn't quite have sum types, because sum types do increase expressivity around values in general. That they improve error handling is just one application.
There seems to be no consensus as to what sum types actually means, but the most popular definition I find in my travels is: Tagged unions. Which is funny as a sum is not a union, but anyway...
Assuming you share that definition: while Go might benefit from tagged unions in general, I'm not sure they actually help in any way with remembering to handle errors. A union is just as easy to forget to handle as a single discrete type.
But perhaps you are of another sum type school? Perhaps one of the other sum type definitions is useful here?
Right. See, that doesn't help, at least not in the context of Go without any other modifications. Logically, you can reduce that Result state to a simple boolean (Ok = true, Err = false). But the bool type in Go is prone to the very same problem.
It doesn't matter how many type layers you add. The problem is that there is nothing in the language that enforces handling of values, no matter what type that value might be.
Yes, manual tagging doesn't provide type safety. That's why I said some people are disappointed that Go doesn't have sum types. There isn't even a convoluted workaround either, at least not a full one.
It is not so much a matter of lack of type safety, it is forgetting to implement an application requirement.
Even an advanced type system capable of formal proofs isn't going to help you if you've forgotten the requirement when defining the types. And if you are using a language with a lesser type system (Go, Rust, any language you are likely to use in the real world), and haven't forgotten the requirement, then you would have encoded that knowledge in tests instead, making the forgetfulness in implementation ultimately immaterial. After all, there is no practical difference between learning that you forgot to implement logic from the compiler or from the test suite.
So, we can infer that the OP is talking about a case where one straight up forgot about the requirement from start to finish. I'm skeptical that there is any technical solution to that problem, but sum types most certainly aren't it.
There is a point at which even dependent typing won't save you, but sum types are a useful tool to reduce negligence. Don't let perfect be the enemy of good.
Sum types are a useful tool, but they are not helpful in this particular case. There is nothing about sum types that would help avoid this problem in the slightest. Perfect need not be the enemy of good, but good is not equivalent to not even trying.
This is not a general "wouldn't it be nice of Go had..." thread. We are talking about a specific problem.
I'm not really sure what your point is then. We should have comprehensive tests and therefore error checking isn't special compared to other programming mistakes? I think we should both have tests (don't skimp on them if given time, of course) and use tools like sum types to write more robust programs from the get-go. Errors can even be treated like other values using sum types; that raises the bar universally instead of just improving error checking.
If you're going to invoke some abstract "the programmer doesn't know what they're doing" situation, I'm not sure how to continue this discussion. Even testing may not help in that case. Things are going to slip through the cracks. Having features like sum types is a way to generally reduce the likelihood of such semantic errors.
Besides, the original topic was that Go doesn't make its error handling very foolproof. Sum types help with that and don't make error handling special, so I really don't see your point. Why not write everything in Brainfuck then? Everything is uniform and simple. Well, it's simple to the point that it's way too easy to make mistakes.
The original topic was, stated quite explicitly, that it is easy to forget to handle errors in Go. There is nothing about sum types, as defined, that addresses that concern.
Like we established in my very first comment, well before you even showed up, the handling forgetfulness problem exists for all types. That would include a hypothetical (tagged) union type. I did question if you were using some other definition of sum types, as there isn't a great consensus on what sum types actually mean, but you did confirm you were thinking of tagged unions...
But it must be that you didn't actually mean tagged unions. What is the definition of sum types that you are using here that would help with the stated problem?
There may be a misunderstanding. I say "sum type" to mean something along the lines of Rust's enum: a value stores both a tag and its data, and pattern matching distinguishes the variants (like a switch statement). It's a tagged union where the interpretation of the tag is managed by the compiler. The underlying value can't be used without extracting the variant with pattern matching. Even if the success and failure variants both contain an integer, you can't add an integer with this value.
So you're saying the programmer forgets to use the return value at all. There are linters, documentation, and basic common sense. This is not at all a relevant issue, or else there are serious issues with the programs you see. Why are you discussing whether Go's error handling needs improvement if your imagined programmer can't even program? Well, I've spent enough time on this thread.
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Brian W. Kernighan
I don't know if it applies to language features, though.
Because parent is talking about operators like "+" and how it can be confusing not knowing if a function call is behind it. This is different from obvious functions like "Add()".
Because if the language doesn't allow overloading (which is the context of this thread) then "+" will always be implemented by the language. That's a big difference.
> They're just two different names for the same thing
No, they are not. One is a function, the other is an operator.
An operator has implicit meaning. I know with absolute certainty that `+` is supposed to mean that two things are added to one another.
`Add()` is a function. Its name may or may not indicate what it is actually doing. Even if it does, it may not be obvious how it does its thing, or how reliable the indication is. Example: If Add takes a list as its first argument and an integer as its second, does it append the integer to the list? Does it add it to every element of the list? Does it sum the list and add the integer to the result? All of these could be meant by "Add"-ing an integer to a list. Does it have side effects?
I don't know, and I don't expect to know, until I look at `Add()` (or at least read its documentation).
That is the big difference between operators and functions: with operators, I expect to know what they do without looking them up. And that's also the big problem with operator overloading, because as soon as a language supports it, that assumption flies out the window.
> Go's lack of operator overloading is occasionally a nuisance, it means that you'll never have to dig deep to understand what "+"
It's funny, in 10 years of writing Python professionally I can only think of one occasion when I had to dig into what an __add__ operation did (a Money library that didn't special-case 0, meaning sum() didn't work as you might hope).
I think it's probably because python has a pretty comprehensive standard library which gives reasonably consistent examples of where to use them.