I agree that this stuff will be orders of magnitude more efficient once we figure it out. But I think brute forcing it first and optimizing later is a fine way to do it. Maybe even the only way to make progress on a problem like this. And even in a hypothetical future where you can run human level intelligence on your smartphone, supercomputers will be that much more intelligent still, so it's not like building them will be pointless.
I'll tip my hand further. I think the current approach to artificial neural networks is inherently flawed because it simplifies away pertinent features like firing rates, propagation delay, and standing waves (aka brain waves). I also think trying to get AI to "understand" the real world when it doesn't live in it and we haven't codified our own natural language is too far a leap. We might have more success making an AI that is conversational in a form of newspeak dedicated to a specific problem domain. E.g. an intelligence with the body of a brokerage account that only knows how to speak about money (thus rendering most of humanity redundant (zing!)). Here's something I wrote elsewhere about the accidental path to machine intelligence.
"You are whatever you get in return when you ask yourself who you are".
Literally, if you trace the referent objects, "you" refers to the same entity as the voice which responds to your query. But also literally, "you" are the standing waves of feedback loops in the brain which represent thoughts.
Rather subtly, I've given an understanding of consciousness which is entirely defined and constrained within a communicative system of sending and receiving messages. The underlying language within which the consciousness exists may have a very minimal grounding in some physical reality.
(I'm truncating a ton of justification and background to avoid a deep rabbit hole and get to the point).
Here is a highly speculative example of how consciousness-like intelligence could emerge accidentally in the form of trading bots. It starts with two ingredients: the shared reality of the order book and independent trading bots with the objective of "make more money".
At first the algorithms are dumb curve optimizers. They see whatever patterns they can find in the order books and correlate them with outcomes, reacting appropriately by placing their own orders.
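To make the "dumb curve optimizer" stage concrete, here's a minimal sketch of a bot that correlates a single order-book feature (bid/ask volume imbalance) with the subsequent price move and reacts by picking a side. Everything here (the class name, the feature, the data) is a hypothetical toy, not a real trading system.

```python
class CurveFitBot:
    """Toy 'curve optimizer': fits a running estimate of price move per
    unit of order-book imbalance, then reacts to new imbalances."""

    def __init__(self):
        self.n = 0
        self.slope = 0.0  # running estimate of d(price move)/d(imbalance)

    def observe(self, imbalance, next_move):
        # Update the running average of move/imbalance (a crude 1-D fit).
        if imbalance != 0:
            self.n += 1
            sample = next_move / imbalance
            self.slope += (sample - self.slope) / self.n

    def decide(self, imbalance):
        # Extrapolate from the fitted curve and place an order accordingly.
        predicted = self.slope * imbalance
        if predicted > 0:
            return "buy"
        if predicted < 0:
            return "sell"
        return "hold"


bot = CurveFitBot()
# Synthetic history: bid-heavy imbalance preceded upward moves.
for imb, move in [(2.0, 0.5), (3.0, 0.8), (-1.5, -0.4), (-2.0, -0.6)]:
    bot.observe(imb, move)

print(bot.decide(2.5))   # imbalance toward bids -> "buy"
print(bot.decide(-1.0))  # imbalance toward asks -> "sell"
```

The point is only that the first-generation bot is a pure pattern-to-outcome mapping with no model of other agents at all.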
After a while, as the data sets and internal models grow, the algorithms are implicitly calculating the reactions of other trading bots in their extrapolation.
Soon after that, the orders being placed in the order books are effectively messages intended for other trading bots, hoping to elicit a certain reaction. The order book thus becomes a language grounded in an economic reality, filled with offers that are communicative but not necessarily intended to execute.
The emergent language of the order book grows in sophistication to the point where the bots are talking about past instances and hypothetical instances of things said in orderbook speak. This all happens right under our noses in perhaps non-obvious ways, such as in the exact number of cents on a bid/ask price. Other times it looks like our bots have reinvented "painting the tape" and other forms of financial communication we've deemed "market manipulation". We're proud of our silicon brainchild for figuring out how to do that on its own. They grow up so fast.
Eventually, in the process of optimization, the trading bot's internal model of its "order book reality" gets to a point of sophistication where it has to model other bots modelling itself modelling other bots modelling itself... to whatever depth of recursion it can marshal. Thus the feedback loop effectively closes and something akin to circular brain waves emerges. By this point no one really understands why the trading bots make the trades they do. I can no longer tell you what a trading bot is in simple terms like "it's a dumb curve optimizer trying to maximize money". Rather, it can only be understood as a form of consciousness which has emerged:
"The trading bot is whatever the trading bot gets in return when it simulates placing an order asking what it is."
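The recursive "bots modelling bots modelling bots" step above can be sketched as level-k reasoning: a level-0 bot reacts to the raw signal, and a level-k bot simulates a level-(k-1) opponent and responds to that prediction. The game and the response rule here are hypothetical toys, chosen so that the depth of recursion visibly changes the answer.

```python
def act(signal, depth):
    """Return 'buy' or 'sell' for a bot reasoning to the given depth."""
    if depth == 0:
        # Level 0: follow the signal naively, no model of anyone else.
        return "buy" if signal > 0 else "sell"
    # Level k: predict what a level-(k-1) opponent would do...
    opponent = act(signal, depth - 1)
    # ...then take the other side of the predicted order
    # (sell into a predicted buy, buy into a predicted sell).
    return "sell" if opponent == "buy" else "buy"


for k in range(4):
    print(k, act(+1.0, k))  # alternates: 0 buy, 1 sell, 2 buy, 3 sell
```

Each extra level of depth flips the decision, which is the sense in which "no one really understands why the bots make the trades they do": the behavior depends on how many layers of mutual simulation are in play, not on the raw signal.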
Whenever we try to define consciousness, we get into useless quasi-philosophical discussions like this. Philosophers don't know what consciousness is and neither does anyone else.
Working with your definition, you've basically just described backpropagation in sufficiently deep neural nets, a feature of artificial neural networks which, as you say, probably oversimplifies brain function.
"The underlying language within which the consciousness exists may have a very minimal grounding in some physical reality."
Your usage of "physical reality" is interesting. Is this the "free will is real" a la "true randomness exists" argument again? I hope we aren't moving backwards into the arms of religion here.
Technically any loop can be unrolled into a sufficiently long linear sequence. But good luck conceiving of solutions to routine problems without while loops, for loops, or recursion. It can be done, but you have to think harder, write much longer and more repetitive code, and you end up with the sort of code that doesn't contain insights about the problem.
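As a trivial illustration of the point, here is the same computation written as a loop and as its manual unrolling. Both are correct; the unrolled version just fixes the iteration count and buries the pattern.

```python
def sum_squares_loop(n):
    # The looped form: the structure of the problem is visible.
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total


def sum_squares_unrolled_4():
    # The same loop, unrolled for n = 4: correct, longer, and rigid.
    total = 0
    total += 1 * 1
    total += 2 * 2
    total += 3 * 3
    total += 4 * 4
    return total


print(sum_squares_loop(4), sum_squares_unrolled_4())  # 30 30
```

The unrolled version computes the right answer for exactly one input; the insight ("sum over a range") lives only in the looped form.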
So, back-propagation with sufficiently deep neural networks. Technically you could use it. You could throw a huge amount of silicon and brain power at any problem and eventually hammer that screw right into the wall. Or you could try slightly more realistic models of neurons and hope to find disproportionate increases in ability over the previous model. I think it's already clear which one I'm in favor of.
Just to make this extremely explicit, the thing I think is missing from artificial neural networks is harmonic waves. There's a body of evidence that representation of thought is done with brain waves, not the states of individual neurons[1]. When you move to a wave view of neural networks, a handful of very sophisticated operations emerge naturally, such as autocorrelation (effectively a time-windowed Fourier transform). Sure, you could program the Fourier transform, or even worse, get an optimizer to implicitly learn it after some outrageous number of man-hours, but in this analog wave view of brain activity we get it with structures so simple they could have happened by accident. I'm being extremely literal when I say "you are what you get in return when you ask yourself". The voice in your head is literally the echoes of the question bouncing around in your skull (albeit electro-chemically rather than acoustically).
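For readers unfamiliar with the operation being named above, here is a small stdlib-only demonstration of autocorrelation: comparing a sampled wave against lag-shifted copies of itself, which peaks at lags equal to the wave's period. The code is just an illustration of the operation; the claim about its role in the brain is the author's.

```python
import math


def autocorrelation(signal, lag):
    """Mean product of the signal with a lag-shifted copy of itself."""
    n = len(signal) - lag
    return sum(signal[i] * signal[i + lag] for i in range(n)) / n


# A sine wave sampled 8 times per cycle, i.e. period = 8 samples.
wave = [math.sin(2 * math.pi * t / 8) for t in range(64)]

# Score every candidate lag; the wave's period wins.
scores = {lag: autocorrelation(wave, lag) for lag in range(1, 16)}
best = max(scores, key=scores.get)
print(best)  # 8 -- the lag of strongest self-similarity
```

Finding the dominant period this way, across a bank of lags, is the kind of operation that falls out for free once the substrate is oscillatory.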
I am arguing that consciousness, that is to say the train of thought in your head, is definitionally what happens when the conversational abilities of understanding utterances and forming responses get fed into each other. Consciousness is nothing more than talking to someone who happens to be yourself. Maybe my definition doesn't have universal acceptance, but it at least gives a meaningful concrete answer to what is meant by consciousness.
You've far and away missed the point on "grounding in reality". It has nothing to do with randomness or free will or religion. I didn't hint at anything of the sort.
Someone once said something profound to me. "In a programming language, no matter how much complexity or abstraction there is in a command, everything eventually resolves down to instructions to physically move some electric charges at a physical location in memory." Something similar applies to natural languages. Every sentence and thought eventually resolves down to representing physical and tangible things in our reality. When you try to trace through the dependency tree of the dictionary, you eventually reach words which can't be broken into simpler parts. Those are the words that "ground" the language in our physical reality, representing objects in the outside world one-to-one. Every language, be it natural or programming or something else, has some form of grounding. Language has to be about something. But the underlying thing it's about can be very simple. It can be as simple as an order book.
To any conscious entities that emerged in such an accidental medium, the order book is their reality. They wouldn't know of or be equipped to reason about any other form of existence. Their form of existence is neither better nor worse than any other consciousness grounded in the reality of any other language. It doesn't matter much what the underlying objects of the problem domain are. I picked trading bots as my example, but I could have picked any other domain where agents 1) share the same playing field, 2) have some competing interests to optimize, and 3) could use objects of the domain for signaling purposes (ideally at low cost).