I am not sure what the other side of this argument looks like: unlimited liability (i.e. liability no matter how poorly the tech is implemented and used)?
That would be quite a novel burden, one that (afaik) no other tech has had to carry so far. We have always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasibly guardrail itself internally, and, perhaps more so with increasing capability, one that no human can be expected to guardrail in its stead. But surely some limits must apply, and the more interesting question is what they are, as with any other tool?
Every other field in history considers it de rigueur that you're liable for failures of quality in the products you produce. You make drugs that hurt people? You're liable. You build a building that falls down? You're liable. You serve coffee that literally burns the people drinking it? You're liable. It's also not new: the Code of Hammurabi (nearly 4,000 years ago) prescribed the death penalty for builders whose houses collapsed and killed the inhabitants.
It's only computer scientists who think it's some unreasonable burden to be held liable for the consequences of their work.
It is an unreasonable burden to ask the impossible. The technology to create an AI incapable of hurting people if misused or blindly trusted literally doesn't exist right now.
It'd be like holding a builder liable for their bridge being unable to withstand being hit by a meteor.
If I tell someone to kill someone else and they do, then I should be held responsible.
If I write instructions in a book that I give to someone telling them to kill someone else and they do, then I should be held responsible.
If I give someone a tool I made that I bill as having more-than-PhD-level intelligence, and it tells someone to kill someone else and they do, then I should be held responsible.
All of the above situations seem equivalent to me; I'm not the only person responsible in each case, but I gave them instructions and they followed them.
It is a tool, but it's a tool that is sold by OpenAI as providing a high degree of intelligence. That's an endorsement of what the tool outputs as advice, which is what makes them responsible.
> That's an endorsement of what the tool outputs as advice
That's not even close to true!
Even if you've been living under a rock for the last 5 years and didn't already know these models are not reliable, pretty much every provider has a disclaimer next to the chat box informing you of that fact.
A small disclaimer tucked under the main flow, doubling as a cookie banner, doesn't outweigh the many, many other statements claiming capabilities. It's a minor undercutting, sure, but it's perfectly possible to have all sorts of disclaimers [0] while still keeping the point clear.