
Am I misunderstanding something?

Instead of teaching the "AI" intelligent rules, or rules for creating rules, to maximise its goals, they teach it nothing, which means it has zero usable high-level knowledge. The "AI" then purely brute-forces its way to empirically good solutions for this ridiculously simple universe.

How is that advancing research? This is just a showcase of what modern hardware can do, and also of how far we are from teaching intelligence. My brain understands the semantics of this universe and would have been able to find most strategies without simulating the game more than once in my head. So this is really a showcase of how far we (or at least OpenAI) are from making AGI; brute force is like step 0.



Some AI researchers believe that using learning methods with no built-in prior knowledge and throwing a bunch of compute at them is the path to building effective AI. I'm thinking of Richard Sutton in particular:

- Bitter Lesson essay: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

- A lecture of his on temporal difference learning, which is a "model-free" method of reinforcement learning: https://www.youtube.com/watch?v=LyCpuLikLyQ

I personally don't agree with his emphasis on model-free learning, but it's not the case that people are building model-free RL agents without understanding the trade-off they're making.
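To make "model-free" concrete: a TD(0) learner estimates state values purely from observed transitions, without ever building a model of the environment's dynamics. Here's a minimal sketch on a hypothetical 5-state random-walk chain (my own toy example, not anything from the linked lecture); the step size, episode count, and chain layout are all arbitrary choices for illustration.

```python
import random

# Hypothetical 5-state chain: start in the middle (state 2), move left or
# right uniformly at random; the episode ends at state 0 (reward 0) or
# state 4 (reward 1). TD(0) learns state values from raw experience alone,
# never estimating transition probabilities -- that is what "model-free" means.
N_STATES = 5
ALPHA, GAMMA = 0.1, 1.0  # step size and discount (arbitrary for this demo)

def run_td0(episodes=5000, seed=0):
    rng = random.Random(seed)
    V = [0.0] * N_STATES  # value estimates; terminal states stay at 0
    for _ in range(episodes):
        s = 2
        while s not in (0, N_STATES - 1):
            s2 = s + rng.choice((-1, 1))
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # TD(0) update: nudge V[s] toward the bootstrapped target r + gamma*V[s2]
            V[s] += ALPHA * (r + GAMMA * V[s2] - V[s])
            s = s2
    return V

values = run_td0()
# The true values of the interior states 1, 2, 3 are 0.25, 0.5, 0.75;
# the estimates should hover near those after enough episodes.
```

The point of the example: the agent never knows it's on a chain, or that moves are 50/50. Prior knowledge of the dynamics would let you solve this in closed form; the model-free agent buys generality by paying for it in samples, which is exactly the trade-off being discussed.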


How do you know your own brain isn't running thousands of parallel simulations in your head, even though you perceive it only once? How did your brain learn to reason about physics in the first place if not by repeatedly finding objects in your environment and randomly manipulating them?



