
Maybe if you used 100+ of these (~14,400 'computers') you could build a machine with emergent, brain-like behavior?


If 15,000 computers were all you needed for AI, we'd have it by now. Major datacenters have far more machines than that, and each one is far more powerful than a milliwatt microcontroller to boot.


I knew it. The name "Moore" sounded suspicious to me immediately. Now I know his real name is Chuck Testa!!!

On a more serious note, what we need much more of is not processing speed. What we need is what I call "memory processability", which roughly means "how many times per second can you process the whole memory" - basically, how much CPU there is per unit of RAM. Indexing is a great hack, but a hack it still is. Memory and processing cells must be merged into a single, massively parallel chip.
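
For what it's worth, here's a rough back-of-envelope sketch of that ratio in plain Python. The RAM size and bandwidth figures are made-up assumptions just to show the arithmetic, not measurements of any particular machine:

    # Back-of-envelope estimate of "how many times per second can you
    # process the whole memory". All numbers are illustrative assumptions.

    RAM_BYTES = 16 * 2**30        # assume a 16 GiB machine
    MEM_BANDWIDTH = 50 * 10**9    # assume ~50 GB/s sustained DRAM bandwidth

    # Even if compute were free, a single pass over RAM is limited by how
    # fast bytes can move between DRAM and the CPU:
    passes_per_second = MEM_BANDWIDTH / RAM_BYTES
    print(f"Whole-memory passes per second: {passes_per_second:.2f}")
    # ~3 passes/s with these numbers -- which is why merging memory and
    # processing elements looks attractive: the wall is bandwidth, not FLOPS.

The point of the sketch is that the limit scales with the bandwidth-to-capacity ratio, so adding more RAM without adding more (or closer) compute only makes the number worse.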


That was done in the '80s: http://en.wikipedia.org/wiki/Connection_Machine. Richard Feynman had to help the designers with the hard bits.

Users recall them fondly. In the end, clusters became too cheap for custom hardware like this to compete.


>What we need much more of is what I call "memory processability", which roughly means "how many times per second can you process the whole memory"

I might be misinterpreting, but I think the phrase you're looking for is "memory bandwidth".



