
There aren't any concepts here that are "palatable to the human mind." It's ok to have a bias, but don't be so foolish as to pretend it is universal.


There are some very abstract common threads in how the mind conceptualizes the world. See Gestalt Psychology: https://en.m.wikipedia.org/wiki/Gestalt_psychology

There's also this: a fascinating study that investigated the ways people intuitively try to articulate algorithms when they haven't been biased by experience with existing programming paradigms: https://dl.acm.org/citation.cfm?id=373422


>There are some very abstract common threads in how the mind conceptualizes the world.

Of course, this means nothing pertaining to a preferred language, formal or natural, for encoding such notions.

>a fascinating study that investigated the ways people intuitively try to articulate algorithms

Try to articulate algorithms. And generally fail, because they don't have any experience expressing complex systems in their shoddy, careless notation.

Naturally, when the physicist sits down to express her algorithm using Lie theory and tensor calculus, she's really a moron, because one should obviously grab an inexperienced layperson and have them dictate your notation for you. The natural consequence of this is that you'll see things you couldn't have seen before and discover many novel and interesting structures in the language, through a process of lay magic.


There's no need to be rude.

Just as using a more computer-compatible language (assembly, at the extreme) more optimally utilizes the computer's available resources, using languages with cognition-compatible paradigms more optimally utilizes the brain's available resources, reducing cognitive load and enabling anyone - intelligent engineer or layperson - to express and mentally parse greater and more complex concepts than they otherwise could.

The power of the Lisp syntax comes from the fact that everything looks the same, but this also makes it harder for a human to parse. For example, using different symbols to wrap function arguments than you do to wrap procedural blocks gives the brain something to latch on to, reducing the cognition necessary to differentiate between the two. This is also why code editors make the same types of tokens the same color.
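To make the contrast concrete (a sketch, with the Lisp form shown only in comments since this thread names no implementation): in Lisp, argument lists and body forms are all wrapped in the same parentheses, while a language like Python gives the eye distinct visual cues for each.

```python
# In Lisp, the definition, the parameter list, and the body all use
# identical delimiters:
#   (defun add-one (x) (+ x 1))
#
# In Python, parentheses wrap the argument list while the body is set
# off by a colon and indentation - different shapes for different roles:
def add_one(x):
    return x + 1

print(add_one(1))  # prints 2
```

The uniformity is exactly what makes Lisp macros powerful, and exactly what gives the reader fewer visual anchors.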

Two big takeaways from the linked study were that event-based programming and set-based operations are intuitive. These aren't layperson schlock, these are real paradigms. Every notation, every language, is made-up. There's nothing fundamentally more true about one or the other; each offers different metaphors for expressing complex concepts. The question is what those metaphors should be, and in my experience many programmers don't think about general human cognition when deciding which to prefer.
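As a minimal illustration of the set-based style versus the imperative style (Python, chosen here only for brevity):

```python
# Set-based: describe *what* you want over the whole collection at once.
evens = {n for n in range(10) if n % 2 == 0}

# Imperative: spell out *how* to build it, one step at a time.
evens_loop = set()
for n in range(10):
    if n % 2 == 0:
        evens_loop.add(n)

print(evens == evens_loop)  # prints True
```

Both compute the same set; the first reads closer to how the study's participants described such operations.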


When my brother was a little kid he had trouble learning to tie his shoes. He wanted to use velcro shoes, so he got them. Does he wear velcro shoes today? No! He learned to tie his shoes, even though it wasn't an intuitive process. Partly out of necessity, but also because tying knots is a generally useful skill (aside from attaching shoes to feet) and because learning new things is an important part of life.

If you stick to intuitive operations, you'll suffer a lack of perspective. You never expand your intuition. Better metaphors are those that more closely encapsulate the facts of the problem at hand, not those that more nicely fit human mental models. The human mental model is wrong by default. Aristotle's intuition about gravity was wrong, and a significant amount of damage would be done by forcing discussion of gravity to match what he found intuitive.

People can learn new notations. They do it every day. People can adapt to complex metaphors. But complex problems will never simplify themselves to fit human preconceptions and intuition.

If you're dealing with a set-like problem, then a set-based language is good. If you're dealing with an event-like problem, then an event-based language is good. Human intuition is not an important factor, save for our bias toward seeing a given problem as set-like or event-like. An arbitrary problem may be better modeled using another frame of mind. But you won't see that if you're requiring that problems be solved using 'intuitive' language.
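For instance, the event-based frame mentioned above can be sketched in a few lines (a hypothetical dispatcher, not anything from the linked study):

```python
# Handlers subscribe to named events; emitting an event calls
# everything registered under that name.
handlers = {}

def on(event, fn):
    handlers.setdefault(event, []).append(fn)

def emit(event, *args):
    for fn in handlers.get(event, []):
        fn(*args)

log = []
on("click", lambda pos: log.append(("click", pos)))
emit("click", (3, 4))
print(log)  # prints [('click', (3, 4))]
```

Whether this frame fits depends on the problem, not on which frame a given programmer happens to find intuitive.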


And yet wildly different paradigms are used to solve the same problems every day. A video game (for example) can be programmed imperatively, or reactively, or with OOP, or following a compositional pattern, and so forth. All of these metaphors are broad enough to express computation in general. What I'm saying is, when designing general languages, it's worth consciously factoring in the general ways in which the human mind works. It's not the only factor, but it's one that's often ignored.

To be doubly clear: I'm not saying that unintuitive, highly-formalized syntaxes are useless, just like assembly isn't useless. Each is crucial for certain uses. What I am saying is, they shouldn't be necessary for (and often aren't even well-suited to) solving the average programming problem. We don't write software in assembly any more, but we haven't evolved as far beyond its paradigms as we like to think.

There is a huge number of problems that software engineers work on every day, whose domain (what's being modeled) is fully understood by many laypersons (or nontechnical subject-matter experts). And yet those laypeople lack the ability to express their ideas to a computer, and those engineers waste huge amounts of effort translating those simple ideas into needlessly esoteric code. This is a fundamental failure in language design, and it needs to be addressed.


> fully understood by many laypersons

Lol, like what? Go have a layperson check if a file exists. Or pass a substring of UTF-8 text to a C function. Or check equality of two floating point calculations. Or multiply two signed integers together. Or fix anything that doesn't work due to performance problems.
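Take just the floating point one, which bites even in a "friendly" language like Python:

```python
import math

# The "obvious" equality check on two floating point calculations:
a = 0.1 + 0.2
b = 0.3
print(a == b)  # prints False: a is actually 0.30000000000000004

# The tolerance-based comparison an experienced programmer reaches for:
print(math.isclose(a, b))  # prints True
```

Nothing about the lay description of "add a tenth and two tenths, compare to three tenths" prepares you for that result.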

And when they do all these wrong, ask them to show you how to debug and correct a useful but 'simple' program in production.

Lay people can't express their ideas to a computer because their ideas don't work in a computer. The code is esoteric because logic is esoteric, and the human brain is just not good at reasoning about edge cases without years of practice and experience.

>I'm not saying that unintuitive, highly-formalized syntaxes are useless, just like assembly isn't useless.

Ok, good. So why did you object to the existence of LISP? Why would you say it is a mistake?

In any case, my overall point in all of this is that there's a difference between simplifying a language and simplifying the learning experience. We should always be looking to make things easier to learn, but that doesn't actually require changes to the language. You can do that by finding better ways of teaching, better documentation, and better explanations for how and why things work.



