Simply put, safety slows code down. It's a matter of whether you know what's happening underneath or not. The more you try to make C completely safe, the more you slow it down and therefore remove the need to have written it in C in the first place. Whether that's a good thing or not is an exercise for the implementer.
True, but the cost of a comparison to NULL isn't massive. It's one of the fastest operations out there.
It was my experience when tuning a linear algebra library (just for internal use) that such comparisons were almost unmeasurable in overall performance tests.
If you're writing a linear algebra library, then the bulk of executing code should be floating point vector operations in tight loops with known bounds. A NULL pointer check in the prolog where you set up the loop is going to be negligible.
This is very different from kernel code, which by its nature isn't very computational but does resource management all day long; pretty much all it does is stuff that looks like walking linked lists.
That was a very specific linear algebra library, mind you; we toyed around with huge but sparse matrices. Nothing graphics-related, more number-theory-related. But there were a lot of integer comparisons in there.
This was around 2001-2002 anyway. Things might have changed.
Depends on how often you are doing the comparison. Is it 1 time/second, or 10 million times a second?
There is a difference.
If the code above is the one in charge of adding/removing network packets in a queue, or ordering threads by priority for the scheduler, then you should consider the cost of checking for NULL.
If you are going to implement this function in a library for Jimmy The Programmer[1], then check for NULL.
It's written in C because C is popular, because it has platform support, and because the code was originally written in C, not because the kernel should be a security nightmare.
That's irrelevant to what Linus is saying. He's not saying "this is good code, but only with the caveat that it's written in C and run in the Linux Kernel". He's saying that changing it to make it far more difficult to read and modify, but cleverer, has made it better code in general.
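For readers missing the context, the kind of transformation being debated is Linus's well-known linked-list example, sketched here with hypothetical names (node freeing omitted for brevity): the straightforward version special-cases the head, while the "clever" version walks a pointer-to-pointer so that removing the head and removing an interior node become the same operation.

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Straightforward version: the head is a special case. */
static void remove_value(struct node **head, int value)
{
    struct node *cur = *head, *prev = NULL;
    while (cur != NULL && cur->value != value) {
        prev = cur;
        cur = cur->next;
    }
    if (cur == NULL)
        return;
    if (prev == NULL)
        *head = cur->next;        /* removing the head node */
    else
        prev->next = cur->next;   /* removing an interior node */
}

/* "Clever" version: pp points at whichever pointer currently links to
 * the candidate node (the head pointer, or some node's next field), so
 * unlinking is one assignment with no special case. */
static void remove_value_indirect(struct node **head, int value)
{
    struct node **pp = head;
    while (*pp != NULL && (*pp)->value != value)
        pp = &(*pp)->next;
    if (*pp != NULL)
        *pp = (*pp)->next;
}
```

Whether folding the special case away makes the code better or merely harder to read is exactly the disagreement in this thread.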
See, and this is where I (and I'm guessing you) might beg to differ. I'm a web developer, so the vast majority of my time is spent working in JavaScript. Our team has been working in ES6 for the past ~2 years. Our lead developer just LOVES him some ES6 - destructuring, aliasing, lambdas, you name it, and he's all-in.
Me, though? Well, let's just put it like this: when we find some framework-level bug that's written in "clever" ES6 syntax, our first step in debugging is almost ALWAYS to rewrite the given function traditionally, without any of the ES6 shorthand. And the reason we do that is because reading and debugging a whole stack of anonymous lambda calls is a PAIN IN THE ASS. Or figuring out where a certain variable is coming from when someone uses overly-complex destructuring syntax to magically pull a value from deep within a nested object.
I mean, don't get me wrong, I do like and use almost all of the modern ES6 niceties, but I also feel like the resulting code is much more difficult to parse and understand than what we were all writing a few years back. People will, I'm sure, be arguing about what constitutes "good code" for decades to come, but to me, when working in an evolving codebase, especially with other people, plain ol' human readability is paramount. If people can't figure out what your code is doing without throwing in a breakpoint and stepping through line-by-line, you've failed at writing good code. And this will be my opinion right up until the day humans stop writing code by hand.