
What people wanted Prolog to do, it couldn't do. People wanted to step away from the control-flow specification of imperative languages, the message-passing of OOP, and let the compiler solve your problem for you by feeding it a specification of the constraints the completed system would satisfy. Prolog did not, and cannot do this, because it is not strong AGI. When people realized this, they gave up on it, because it is easier to tell the computer how it should branch than to forcefully guide stupid heuristics around.

When there exists a large, universal library of "common sense", of the kind people build up from years of childhood sensory experience, declarative programming will come back. But not until then.



Very much true. During my education I was massively disappointed in Prolog, because the programming process was something like this:

1) write it declaratively

2) figure out that your program will take days to complete

3) add ugly procedural hacks (cuts) to make it more efficient

To an extent, you have this process in any language (write -> measure -> optimize), but typically not in a way that forces you at gunpoint to violate the paradigm you're working in.


An excellent point - Prolog fans (usually those who never actually programmed anything) would describe it as a declarative language.

However, any "real" Prolog program that I ever saw was really using it as a slightly odd procedural language.


It doesn't fail on all occasions. Our research group built a robust natural-language parsing system in Prolog (with some C extensions). The (aptly named) unification grammar, lexicon, and most of the productive lexicon are written in a declarative manner.

This really paid off in a new research project, whose goal is to write a sentence realizer for the same system. With nearly no modifications we could reuse the grammar, lexicon, and productive lexicon. Both the parser and the sentence realizer use the same grammar and lexicon now.

I understand that this may seem somewhat trivial, since the lexicon and grammar look like plain data. However, this is not true:

- The grammar is written in a declarative manner, where goals are mostly operators that manipulate attribute-value structures. These rules are later compiled to plain Prolog terms via term expansion (DCG-like) for efficiency.

- Some unifications should not be performed immediately, even when two terms are unified. Most Prolog implementations offer blocked goals, where a goal is suspended until a variable becomes instantiated.
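The blocked-goal idea (e.g. `freeze/2` in SICStus or `when/2` in SWI-Prolog) can be sketched in Python as callbacks that fire when a logic variable is finally bound — a toy model of the suspension mechanism, not real Prolog semantics:

```python
class Var:
    """Toy logic variable: goals registered while unbound are suspended
    and only run once the variable is instantiated."""
    def __init__(self):
        self.value = None
        self.bound = False
        self._blocked = []   # goals waiting for instantiation

    def freeze(self, goal):
        # Run immediately if already bound, otherwise suspend the goal.
        if self.bound:
            goal(self.value)
        else:
            self._blocked.append(goal)

    def bind(self, value):
        self.bound = True
        self.value = value
        for goal in self._blocked:   # binding wakes every blocked goal
            goal(value)
        self._blocked.clear()

log = []
x = Var()
x.freeze(lambda v: log.append(("checked", v)))  # suspended: x is unbound
x.bind(42)                                      # now the goal runs
```

Here `log` ends up as `[("checked", 42)]`: the check was delayed until binding, which is exactly what you want when eager unification would be wasted work.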

However, I am the first to admit that Prolog makes some classes of problems trivial (unification grammars, parsers). There are also many things that you do not want to do in Prolog, because it is a waste of time, or very inefficient. For instance, in our system the following components are implemented in C or C++:

* Finite state automata for quick lookup of subcategorization frames.

* Part-of-speech tagger for restricting the number of frames for each word before parsing.

* N-gram models that are used as a feature in fluency estimation.

* Tokenization transducer.

* Bit arrays (comparable to Bloom filters) for excluding useless paths in parsing.
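The bit-array trick works like a Bloom filter: hash each item into a fixed-size bit array, and a lookup can say "definitely absent" (prune the path) or "maybe present" (keep going). A minimal Python sketch — the hashing scheme here is invented for illustration, not the system's actual C implementation:

```python
class BitFilter:
    """Bloom-filter-like bit array: no false negatives, rare false positives."""
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = 0            # one big integer as the bit array

    def _positions(self, item):
        # Derive several bit positions per item from salted hashes.
        for salt in range(self.nhashes):
            yield hash((salt, item)) % self.nbits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False means certainly absent; True means probably present.
        return all(self.bits >> p & 1 for p in self._positions(item))

f = BitFilter()
f.add(("np", "vp"))
f.might_contain(("np", "vp"))   # True: an added item is never missed
```

The "no false negatives" guarantee is what makes it safe for pruning: a path ruled out by the filter really is useless.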

Conclusion: use the right tool for the job. Unification, structure sharing, and (some) pattern matching are cheap and easy to use in (WAM) Prolog. Most other things are prohibitively expensive and clumsy in Prolog.
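Unification itself is small enough to sketch. A textbook Robinson-style version in Python (without the occurs-check, and nothing like the WAM's optimized compiled form — variables here are just strings starting with `?`):

```python
def walk(term, subst):
    # Follow variable bindings to their current value.
    while isinstance(term, str) and term.startswith('?') and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure.
    Compound terms are tuples; variables are '?'-prefixed strings."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

unify(('np', '?Num'), ('np', 'plural'))   # {'?Num': 'plural'}
unify(('np', 'sg'), ('vp', 'sg'))         # None: functors differ
```

In Prolog this operation is built into every call and backed by structure sharing, which is why grammar rules over attribute-value structures come out so cheap there.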


Most programmers have a hard time learning to think in the Prolog way. That's why there is so much bad Prolog code around. But the same can be said of Lisp code.


Several Prologs have extensions for constraint programming, which give it much smarter heuristics. Instead of trying every possible combination of choices and failing/backtracking as early as possible (generate-and-test), it can use constraints to narrow the intervals / sets of possible choices - which usually sets off a chain reaction and further constrains the search space.

Once constraint propagation can't narrow things any further, it can copy the search space, use various search heuristics (such as splitting each copy of the narrowest interval in half), and see if that triggers further constraint propagation (or hits a dead end). It usually greatly reduces the amount of depth-first search. There are other ways of implementing constraints besides propagator networks and space copying, but that's the way I understand best.
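The propagate-until-fixed-point step can be sketched in Python with integer intervals for a single constraint x + y == total — a toy, nothing like a real CLP(FD) engine, which would interleave this with the interval-splitting search described above:

```python
def propagate(lo_x, hi_x, lo_y, hi_y, total):
    """Narrow the intervals of x and y under x + y == total
    until no further narrowing is possible (a fixed point)."""
    while True:
        # x must lie in [total - hi_y, total - lo_y], and vice versa.
        nlo_x = max(lo_x, total - hi_y)
        nhi_x = min(hi_x, total - lo_y)
        nlo_y = max(lo_y, total - nhi_x)
        nhi_y = min(hi_y, total - nlo_x)
        if (nlo_x, nhi_x, nlo_y, nhi_y) == (lo_x, hi_x, lo_y, hi_y):
            return lo_x, hi_x, lo_y, hi_y   # fixed point reached
        lo_x, hi_x, lo_y, hi_y = nlo_x, nhi_x, nlo_y, nhi_y

# x in 0..9, y in 0..3, x + y == 10:
propagate(0, 9, 0, 3, 10)   # (7, 9, 1, 3): both intervals narrowed
```

Generate-and-test would enumerate all 40 pairs; one round of propagation shrinks the space to 3 x 3 before any search happens, and with more constraints each narrowing feeds the next.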

Generate-and-test is great for prototyping, but doesn't scale up to larger problems. Constraint programming fares much better.


You mean something like Cyc or Open Cyc? http://www.cyc.com/opencyc/

I am not sure that I fully agree. Even without that, Prolog is very useful for many problems.


Cyc failed because it succeeded.

Doug Lenat was able to run a very good business (hiring about 50 people for 10+ years) on about 50% government contracts and about 50% sales to big companies, which could use better KB-management tools than his competitors had.

Cyc didn't need to change the world in order to succeed, so it didn't change the world.

That said, the fundamental trouble in NLP is the lack of common-sense knowledge. The proper word sense of "pen" in "the pig is in the pen" vs. "the ink is in the pen" is a matter of semantics, not syntax. You can do Noam Chomsky stuff until you're blue in the face and it will get you nowhere... But then some stupid Markov chain comes along that conflates syntax and semantics and beats it.
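The Markov-chain point is easy to make concrete: a bigram model trained on a toy corpus (invented data, purely illustrative) prefers word orders it has seen, with no notion of grammar at all:

```python
from collections import Counter

# Toy training corpus: invented sentences for illustration only.
corpus = ("the pig is in the pen . the farmer built a pen . "
          "the ink is in the pen . he signed with a pen .").split()
bigrams = Counter(zip(corpus, corpus[1:]))

def score(sentence):
    """Crude fluency score: product of (bigram count + 1) over the
    sentence's word pairs. Plus-one smoothing avoids zeroing out."""
    words = sentence.split()
    s = 1
    for pair in zip(words, words[1:]):
        s *= bigrams[pair] + 1
    return s

score("the ink is in the pen")   # high: all its bigrams were observed
score("ink the in pen is the")   # low: no observed bigrams at all
```

The model conflates syntax and semantics exactly as described: it never learned a grammar, yet it ranks fluent strings above scrambled ones purely from co-occurrence counts.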



