
This isn't even comparable to a nuke. This kind of opinion is going to leave our entire species behind.

Imagine having a patent on 'fire' and then suing everybody who tries to cook a meal.



> leave our entire species behind

Leave us behind whom or what?

I agree with gp. It may not be LLMs, but we will certainly create a technology at some point that can't be openly shared due to existential danger, aka The Great Filter.

We can't just naively keep frolicking through the fields forever, can we?

We have to be able to at least agree on that, theoretically, right?


If we agreed with your premise that AI is a great filter and that this filter can somehow be contained by a small group, then I guess what it boils down to is two choices:

1. Either lock everything down and accept the control of a small, unaccountable group to dictate the future of humanity according to their morals and views (and I believe that AI will fundamentally shape how humanity works and thinks), or

2. Continue to uphold the ideals of individual freedom and democratic governance, and accept a relative increase in the chance of a great filter event occurring.

I, like many here, am firmly against ggp's position. The harm that our species sustains from having this technology controlled by the few far outweighs the marginal risk increase of some great filter occurring.

I will continue to help ensure that this technology remains open for everyone, regardless of their views, morals, and convictions, until the day I die.


Let's forget today, and LLMs. Do you see no theoretical future case where a technology should not be shared freely, ever? Even 100 years from now?

The only benefit I can imagine of fewer players having control of a technology is that there are fewer chances for them to make a bad call. But when you democratize something, you hit the law of large numbers.

https://en.wikipedia.org/wiki/Law_of_large_numbers
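The law-of-large-numbers point can be sketched with a toy simulation (all numbers here are illustrative assumptions, not real estimates): if each independent actor makes a "good call" with some fixed probability, the observed fraction of good calls is volatile with a handful of actors but converges to the underlying probability as the number of actors grows.

```python
import random

def fraction_of_good_calls(n_actors, p=0.9, seed=42):
    """Simulate n_actors independent decisions, each 'good'
    with probability p, and return the observed good fraction."""
    rng = random.Random(seed)
    good = sum(1 for _ in range(n_actors) if rng.random() < p)
    return good / n_actors

# With few actors the fraction can swing far from p;
# with many actors it settles close to p.
for n in (5, 50, 5000):
    print(n, fraction_of_good_calls(n))
```

The hypothetical p=0.9 stands in for "most actors make reasonable calls"; the argument is only about convergence, not the specific value.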

disclaimer: this goes against so much of what I believe, but I can't escape the logic.


> Leave us behind whom or what?

Whom: The corporations with enough money to burn.

What: Technological progress.

Here's a nice video that showcases the same patterns in history and how having free and open tech + breaking monopolies helped move society forward - https://youtu.be/jXf04bhcjbg


It's not comparable to a nuke because a nuke is dumb, and won't be dangerous unless you do something dangerous with it.

AI, on the other hand, will be dangerous by default, once it's powerful enough.


Given the non-zero risk of an accidental nuclear launch, I'm not so sure.

It’s like balancing a piano on a ledge above a busy street and saying “well if no one pushes it then it’s not dangerous!”

Nuclear war and climate change rank far higher as threats than rogue AI to me right now.


Fire is dangerous by default too.


Language models don't kill people, people kill people. You know what stops a bad ̶g̶u̶y̶ mega-corporation with a language model? A good guy with a language model.

Here is what mine had to tell you:

  It’s not like we don’t already have nuclear weapons, biological agents, chemical agents etc...

  AI is simply another tool which can be used for good or ill. It doesn’t matter how much regulation/control you put on it - if someone really wanted to use it maliciously then they will find ways around your safeguards. The best thing to do is educate yourself as much as possible.
(sampling parameters: temp = 100.000000, top_k = 40, top_p = 0.000000, repeat_last_n = 256, repeat_penalty = 1.176471)
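For readers unfamiliar with those knobs, here is a minimal sketch of how llama.cpp-style temperature and top-k sampling are typically applied to a model's logits. The function name and logits are made up for illustration; note that a temperature of 100 (as above) flattens the distribution toward near-uniform randomness.

```python
import math
import random

def sample_token(logits, temp=1.0, top_k=40, seed=0):
    """Pick a token index from raw logits using top-k
    filtering followed by temperature-scaled softmax."""
    rng = random.Random(seed)
    # Keep only the top_k highest-scoring candidates.
    ranked = sorted(enumerate(logits), key=lambda kv: kv[1],
                    reverse=True)[:top_k]
    # Divide logits by the temperature: low temp sharpens the
    # distribution toward the argmax, high temp flattens it.
    scaled = [(i, v / temp) for i, v in ranked]
    # Numerically stable softmax weights.
    m = max(v for _, v in scaled)
    weights = [math.exp(v - m) for _, v in scaled]
    # Draw proportionally to the weights.
    r = rng.random() * sum(weights)
    acc = 0.0
    for (i, _), w in zip(scaled, weights):
        acc += w
        if r <= acc:
            return i
    return scaled[-1][0]

token = sample_token([2.0, 1.0, 0.5, -1.0], temp=100.0, top_k=3)
```

This omits top-p and repeat-penalty handling for brevity; those further filter or rescale the candidate set before the draw.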



