That's bizarre and perverse! Surely someone has brought it up as a bug before that stdin is read-write! Is there some POSIX standard preventing the standard shells from opening stdin as read-only???
stdin is read-only. `<a.txt ls -l /proc/self/fd/0` generally shows 'lr-x------' for the permissions. The problem is that having an open file descriptor to a file lets a program obtain (or act as if it had) a path to the file; /proc/self/fd/ is just the easiest way to do that.
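A quick Linux-only sketch of the point (the temp file is just for illustration): even though the shell opens stdin read-only, the /proc symlink hands the program a path it can re-open writable.

```shell
# Linux-only sketch: stdin is opened read-only by the shell, but
# /proc/self/fd/0 is a symlink back to the real file, which the
# program can re-open for writing.
tmp=$(mktemp)
echo "original" > "$tmp"

# The < redirection gives the subshell a read-only fd 0 on $tmp...
(
  # ...yet resolving the /proc symlink yields a path we can open writable.
  echo "clobbered" > "$(readlink /proc/self/fd/0)"
) < "$tmp"

cat "$tmp"    # the file was overwritten despite the read-only fd
rm -f "$tmp"
```

The read-only mode protects the descriptor, not the file behind it.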
So the thing is that this is a nothingburger. Giving a pathname to an untrusted app means nothing: you've already trusted the app with your user account. It could already overwrite or vandalize that file no matter how you invoked it, and signaling that the file is special to you changes nothing in the threat model. For all you know, the app could traverse the entire directory tree and trash every file it can write to, or just confine the damage to your $HOME.
There's no reason IMHO to avoid using a file as an argument, or directly as stdin. If you don't trust an app, don't run it in your user account; you run it in a sandbox, right? This is 2023.
Now a case could be made for defending against misbehavior by an app that might write to an fd by mistake, but as a1369209993 demonstrates, writing to stdin is a very deliberate choice, as you'll need to look up a pathname and deliberately open that file as writable. That's not misbehavior, that's malice, and that doesn't belong anywhere near your user account in the first place.
>But most of all the cat way just aligns with my mental model more. Data flows left to right, if you catch my drift.
Using `<` doesn't change that model. You can write `< somefile.txt whatever`. I always write my command lines as `<in_file cmd1 | cmd2 | cmd3 >out_file`.
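For instance (with throwaway file names), the whole command still reads left to right:

```shell
# Left-to-right pipeline: input redirection first, then the commands,
# then the output redirection. File names here are throwaway examples.
printf 'b\na\nc\n' > in_file
<in_file sort | head -n 2 >out_file
cat out_file    # prints "a" then "b"
rm -f in_file out_file
```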
(Though annoyingly this doesn't work for feeding input to while loops. `< <(echo a; echo b; echo c;) while read -r foo; do echo "$foo"; done` is invalid syntax; it needs to be `while read -r foo; do echo "$foo"; done < <(echo a; echo b; echo c;)`)
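The trailing form does run fine, e.g. (wrapped in `bash -c` here since process substitution is a bashism, not POSIX sh):

```shell
# Bash-only: the redirection has to trail the while loop's `done`.
bash -c 'while read -r foo; do echo "got $foo"; done < <(echo a; echo b)'
```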
I thought it was an informative response. I certainly learned some stuff about shell I didn't know before. I'm still gonna use cat because it's simpler to me.
I share the same opinion. I was made fun of with the "useless use of cat award", but I find it so convenient to `cat | grep`, then `cat | grep | awk | wc`, then whatever, with data flowing left to right as I modify the command sequence while exploring the file's content.
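That exploratory workflow looks something like this (file name and contents invented for the example):

```shell
# Exploring a file by growing the pipeline one stage at a time,
# data always flowing left to right. data.txt is a made-up example.
printf 'alice 3\nbob 5\nalice 7\n' > data.txt
cat data.txt | grep alice                                 # first, peek at matching lines
cat data.txt | grep alice | awk '{s+=$2} END {print s}'   # then sum column 2
rm -f data.txt
```

Each refinement is just appending a stage; nothing earlier in the line has to move.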
Quite. I had occasion to share this[0] in another thread, and it's relevant here, too:
> When I offer a pipeline as a solution I expect it to be reusable. It is quite likely that a pipeline would be added at the end of, or spliced into, another pipeline. In that case having a file argument to grep screws up reusability, and quite possibly does so silently, without an error message, if the file argument exists. I.e. `grep foo xyz | grep bar xyz | wc` will give you how many lines in xyz contain bar, while you are expecting the number of lines that contain both foo and bar. Having to change arguments to a command in a pipeline before using it is prone to errors. Add to it the possibility of silent failures and it becomes a particularly insidious practice.
But most of all the cat way just aligns with my mental model more. Data flows left to right, if you catch my drift.
It also makes it easier to add arguments to the end if re-running it.
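The silent failure in the quoted grep example is easy to reproduce (xyz is a throwaway file here):

```shell
# A leftover file argument makes the second grep re-read the file
# instead of filtering the pipe. xyz is a throwaway example file.
printf 'foo bar\nfoo only\nbar only\n' > xyz
grep foo xyz | grep bar | wc -l       # intended: lines with both words -> 1
grep foo xyz | grep bar xyz | wc -l   # buggy: second grep ignores stdin -> 2
rm -f xyz
```

With `cat xyz |` at the front there is no file argument to forget, so splicing the pipeline elsewhere can't silently change its meaning.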