
Referring to this type of optimization program just as “AI” in an age where nearly everyone will misinterpret that to mean “transformer-based language model” seems really sloppy


Referring to this type of optimization as AI in the age where nearly everybody is looking to fund transformer-based language models and nobody is looking to fund this kind of optimization is just common sense though.


You are both right. Because the term "AI" is so vague and can mean so many things, it will be used and abused in various ways.

For me, when someone says, "I'm working on AI", it's almost meaningless. What are you doing, actually?


I think it's actually this repo:

https://github.com/artificial-scientist-lab/GWDetectorZoo/

Nothing remotely LLM-ish, but I'm glad they used the term AI here.


How can one article be expected to fix the problem of people sloppily using “AI” when they mean LLM or something like that?


I use "ML" when talking about more traditional/domain specific approaches, since for whatever reason LLMs haven't hijacked that term in the same way. Seems to work well enough to avoid ambiguity.

But I'm not paid by the click, so different incentives.


I like that.

AI for attempts at general intelligence. (Not just LLMs, which already have a name … “LLM”.)

ML for any iterative, inductive design of heuristic or approximate relationships from data.

AI would fall under ML, as the most ambitious/general problems. And it's likely best treated as time (year) relative, i.e. a moving target, as the quality of general models continues to improve in breadth and depth.


Generative AI vs artificial neural network is my go-to (though ML is definitely shorter than ANN, lol).


Huge amounts of ML have nothing to do with ANNs, and transformers are ANNs.


I stand corrected! What are your go-tos?


Not the person you're replying to, but there are tons of models that aren't neural networks. Triplebyte used to use random forests [1] to make a decision to pass or fail a candidate given a set of interview scores. There are a bunch of others, though, like naive Bayes [2] or k-nearest-neighbors [3]. These approaches tend to need a lot less of a training set and a lot less compute than neural networks, at the cost of being substantially less complex in their reasoning (but you don't always need complexity).

[1] https://en.wikipedia.org/wiki/Random_forest

[2] https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Trainin...

[3] https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
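To make the contrast concrete, here's a toy k-nearest-neighbors classifier in plain Python (a minimal sketch of the general idea, not any particular library's implementation): there's no training phase and no gradient descent, just distance comparisons against the stored examples, which is why it needs so little data and compute.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of ((x, y), label) pairs; distance is Euclidean.
    """
    # "Training" is just keeping the data; prediction sorts by distance.
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]

print(knn_classify(train, (0.5, 0.5)))  # "a"
print(knn_classify(train, (5.5, 5.5)))  # "b"
```

Six labeled points and a few lines of arithmetic are enough for a working classifier here, which is roughly the tradeoff described above: far less data and compute, far less representational power.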


Just do not use AI for anything except LLMs anymore, the same way the crypto scams have taken the word "crypto."

crypto must now be named cryptography and AI must now be named ML to avoid giving the scammers and hypers good press.


> AI must now be named ML

You just made a lot of 20th century AI researchers cry.


Ocaml users too. And Haskell.


Yep. I dislike it just as much as ceding crypto, but at the end of the day language changes, and clarity matters.

I think image and video generation that aren't based on LLMs can also use the term AI without causing confusion.


Just don't use the term AI. It has no well-defined meaning and is mostly intended as a marketing term.


So, "don't do marketing" is your advice?


Correct, "an editorially independent online publication launched by the Simons Foundation in 2012 to enhance public understanding of science" shouldn't be doing marketing and contributing to the problem.


By doing its part and using the term correctly.

The real problem is not people using the term incorrectly, it's papers and marketing material using the term incorrectly.


Let's be real here, the people with the money bags don't care either.


Thinking "nearly everyone" has that precise definition of AI seems way more sloppy. Most people haven't even heard of OpenAI and ChatGPT still, but among people who have, they've probably heard stories about AI in science fiction. My definition of AI is any advanced computer processing, generative or otherwise, that's happened since we got enough computing power and RAM to do something about it, aka lately.


Then that definition is at odds with how the field has used it for many decades.

You can have your own definition of words but it makes it harder to communicate.


You're absolutely right! We're not at a conference with other practitioners in the field, we're on the Internet where anybody with an Internet connection can contribute, and the article we're commenting on didn't take the time to define the term before using it either, so here we are.


>Most people haven't even heard of OpenAI and ChatGPT still

What? I literally don't know a single person anymore who doesn't know what chatGPT is. In this I include several elderly people, a number of older children and a whole bunch of adults with exactly zero tech-related background at all. Far from it being only known to some, unless you're living in a place with essentially no internet access to begin with, chances are most people around you know about chatGPT at least.

For OpenAI, different story, but it's hardly little-known. Let's not grossly understate the basic ability of most people to adapt to technology. This site seems to take that to nearly pathological levels.


Some 37% of humans alive today have never used the Internet. Most people I talk to have heard about ChatGPT, but far fewer have heard of Nvidia.

I don't question people's ability to adapt; people are adaptable. But if you've never even heard of it, what is there to adapt to?


You originally mentioned chatGPT, not Nvidia, which is a different story. Also, for the 37%, sure, if we want to go to the extremes of deeply isolated or subsistence-poor communities, or countries run by deeply totalitarian regimes, you'll find plenty of people who know little or nothing about chatGPT, Google, etc. I was referring to any normal or even semi-developed context that at least has widespread internet use.

Example: I live in a country that still has a great deal of deep poverty; it's what's called a "developing economy" (sort of an odd phrase, since aren't all economies always still developing? but I digress). Even in all but the most deeply poor rural places here, most people frequently use the internet, and I know nobody who doesn't at least know of chatGPT, or about how AI can now talk to you like a person and answer all kinds of questions, let alone anyone unaware of things like Google.


This exact kind of sloppy equivocation does seem to be one of the major PR strategies used to justify the massive investment in, and sloppy rollout of, transformer-based language models, now that large swaths of the public have turned against them (probably even more than is actually warranted).


I'll bet that almost everyone who reads Quanta Magazine knows what they mean by AI.


I know, but can we blame the masses for misunderstanding AI when they are deliberately misinformed that transformers are the universe of AI? I think not!


Yea, I can tolerate it when random business people do it. But scientists/tech people should know better.


While nowadays misleading as a title, I found the term being used in the traditional sense refreshing.


That's how I feel about Web 3.0...


Web 3(.0) always makes me think of the time around 14 years ago when Mark Zuckerberg publicly, lightly roasted my roommate for asking for his predictions on Web 4.0 and 5.0.


Absolutely agree.



