One of my college professors has a language called Dyna that's heavily influenced by Prolog and designed to solve NLP problems (I think because many NLP parsing problems can be solved by dynamic programming approaches that logic programs can easily express and optimize for).
https://github.com/nwf/dyna
I actually think the real reason we don't see more Prolog or Prolog-like languages is not that they're a bad approach, but that it's relatively easy to make your own half-baked backtracking constraint solver in a language like Python and then write rules for it to evaluate, so smart people just write it that way rather than busting out Prolog.
http://norvig.com/sudoku.html
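To make that concrete, here's a sketch of exactly the kind of half-baked solver people roll themselves: chronological backtracking over variable assignments, with constraints as plain predicates. All names and the API are invented for illustration; this is the generic technique, not any particular library.

```python
# A deliberately tiny backtracking constraint solver, the sort of thing
# that's easy to knock together in Python instead of reaching for Prolog.

def solve(variables, domains, constraints, assignment=None):
    """Depth-first search with chronological backtracking.

    variables:   list of variable names
    domains:     dict mapping each variable to its candidate values
    constraints: predicates over the partial assignment; returning
                 False prunes the current branch
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)              # every variable bound: success
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                  # undo the binding and backtrack
    return None                              # domain exhausted: fail upward

# Toy use: 2-color the path graph a-b-c so neighbors differ.
edges = [("a", "b"), ("b", "c")]
constraints = [
    # only check an edge once both endpoints are assigned
    lambda asg, e=e: not (e[0] in asg and e[1] in asg and asg[e[0]] == asg[e[1]])
    for e in edges
]
solution = solve(["a", "b", "c"],
                 {v: ["red", "green"] for v in "abc"},
                 constraints)
```

The `del assignment[var]` line is the whole trick: undoing a binding on failure is what Prolog's unification and choice points give you for free, and what you end up reimplementing by hand.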
And backtracking is so useful when doing parsing in general. For an NLP class I took we made a shift-reduce parser with dotted items for an assignment, and my liberally commented Prolog source weighs in at 33 lines of code. My friend's Lisp code, which only found one parse (though it did build parse trees, to be fair), was several hundred lines if memory serves.
The corollary to Greenspun's tenth rule in action, I guess. =)
The most obvious use case is when you have some sort of list of rules or constraint satisfaction problem.
Gerrit uses it for that use case: https://gerrit-review.googlesource.com/Documentation/prolog-...