Logic programming is overrated, at least for logic puzzles (2013)

68 points by alex_stoddard almost 2 years ago

11 comments

jerf almost 2 years ago
For me, I think the problem is that normal, boring, stupid, unsophisticated, plebeian imperative programming lives in a world of O(1) operations. That is to say, a "normal" line of code you type will be O(1), and then you generally start gluing those together with various things that start stacking on O(n) complexities. As things like the Accidentally Quadratic blog [1] show, in the normal programming world it is a bit humorous to even so much as accidentally stack two of those things unnecessarily on top of each other.

Certainly, people manage to make poorly performing things even so, but at least at the base level, your primitives may be stupid, but they are generally fast.

The logic programming world works in a default space of O(n) operations that stack together more freely than in the imperative world, and that gives easy access to O(2^n). Since this is essentially intractable, a great deal of work is done to try to get that down, but you're always intrinsically starting behind the eight ball. It is easier for a normal programmer to work up from O(1) operations than to write an exponential or super-exponential algorithm and then try to trim it back down to a decent complexity.

I think this is the root cause of why logic programming is generally not something we see a lot of. It's like a wild stallion; it may be powerful, but the amount of effort you pour into just keeping it under control may exceed any benefit you could get.

It isn't useless, of course. Amazing work has been done in the field of SAT solvers, and there's certainly a niche for it. The problems that are intrinsically higher-order polynomial or (technically) exponential, well, they are what they are, and if you're going to be stuck in that world, logic programming may offer you a much better toolset than conventional programming on its own. But there was a hope a long time ago, in the Prolog era, that it could become part of the normal toolkit of general-purpose programming, and I don't think that will ever happen, because of this line of logic.

This is a bit tangential to the article; it's just what I happened to read that finally crystallized this in my mind.

[1]: https://accidentallyquadratic.tumblr.com/
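
As a concrete illustration of that default search space (a minimal, hypothetical sketch, not from the thread): the most natural Prolog-style statement of a small puzzle is often generate-and-test, where the generator alone enumerates an N!-sized space before any constraint gets a chance to prune it.

    % Hypothetical generate-and-test sketch. permutation/2 enumerates all N!
    % orderings of Names; the constraint check only runs afterwards, so the
    % naive formulation is factorial by default.
    seating(Names, Seating) :-
        permutation(Names, Seating),
        no_conflicts(Seating).

    no_conflicts([]).
    no_conflicts([_]).
    no_conflicts([A,B|Rest]) :-
        \+ dislikes(A, B),
        \+ dislikes(B, A),
        no_conflicts([B|Rest]).

    % Made-up constraint facts.
    dislikes(alice, bob).
    dislikes(carol, dave).

    % ?- seating([alice,bob,carol,dave], S).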
YeGoblynQueenne almost 2 years ago
>> I think a lot of people mistakenly believe that core.logic is working some magic behind the scenes to solve puzzles in some amazingly efficient manner. On the contrary, it's just brute force search, using complex machinery which just slows it down versus a for comprehension.

If that's what core.logic is, then core.logic is not logic programming.

Logic programming refers to one of two things: either Prolog-like languages, where programs are sets of definite clauses executed by SLD-refutation of a goal, and evaluation returns all bindings of the variables in the goal that make the goal true; or Answer Set Programming (ASP), where programs are similar to definite programs, but the semantics are stable model semantics and evaluation finds all the stable models of a program.

Both of these approaches incorporate search, but neither is "just brute force search", in any way, shape or form.
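
For a concrete sense of what "evaluation returns all bindings of the variables in the goal" means, here is a minimal, hypothetical definite-clause program and query (an illustration, not part of the comment):

    % Facts are definite clauses with empty bodies.
    parent(tom, bob).
    parent(tom, liz).
    parent(bob, ann).

    % A rule: X is a grandparent of Z if X is a parent of some Y
    % and Y is a parent of Z.
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % Querying a goal enumerates every binding that makes it true:
    % ?- parent(tom, C).
    % C = bob ;
    % C = liz.
    %
    % ?- grandparent(G, ann).
    % G = tom.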
grose almost 2 years ago
As pointed out in the comments in the article, these kinds of logic puzzles are easier to solve using constraint programming than "regular" logic programming.

For example, see the solution to the Zebra Puzzle here: https://www.metalevel.at/prolog/puzzles which uses CLPZ[^1].

[^1]: https://github.com/triska/clpz
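
To show the flavor of the constraint approach, here is a minimal sketch using SWI-Prolog's library(clpfd); the three clues are made up for illustration and are not the actual Zebra Puzzle encoding from the linked page.

    :- use_module(library(clpfd)).

    % Three houses, numbered 1..3. Each variable holds the house number of
    % that attribute; all_distinct/1 keeps the attributes in different houses.
    puzzle([Red,Green,Blue], [Tea,Coffee,Milk]) :-
        Colors = [Red,Green,Blue],  Colors ins 1..3, all_distinct(Colors),
        Drinks = [Tea,Coffee,Milk], Drinks ins 1..3, all_distinct(Drinks),
        Tea #= Red,            % made-up clue: tea is drunk in the red house
        Green #= Blue + 1,     % made-up clue: green is just right of blue
        Milk #= 2,             % made-up clue: milk is drunk in the middle house
        append(Colors, Drinks, Vars),
        label(Vars).

    % ?- puzzle(Colors, Drinks).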
PaulHoule almost 2 years ago
What you really want for logic puzzles is a SAT or SMT solver: it gets the same results as exhaustive search but can usually eliminate large parts of the search space. A language like Prolog superficially looks like it can solve logic puzzles natively, but the search strategy is usually too limited.
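
As an illustration of handing a puzzle constraint to a SAT solver (a minimal sketch using SWI-Prolog's library(clpb) Boolean solver; the clue is invented):

    :- use_module(library(clpb)).

    % Boolean variable Ai means "the zebra is in house i".
    % card([1], Vars) says exactly one of Vars is true; the second sat/1
    % constraint encodes an invented clue "the zebra is not in house 2".
    zebra_house([A1,A2,A3]) :-
        sat(card([1], [A1,A2,A3])),
        sat(~A2),
        labeling([A1,A2,A3]).

    % ?- zebra_house(Hs).
    % enumerates the two remaining models: zebra in house 1 or in house 3.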
prologito almost 2 years ago
Given a list of numbers L, use operations +, -, *, / to obtain the target value Result. Use each number of L exactly one time.

Code:

    j24([X],R,H) :- !, X =:= R, H = X.
    j24(L,R,H) :- select(X1,L,L1), x2(X1,Op,X2,R), j24(L1,X2,H1),
                  H =.. [Op,X1,H1], tj24(Op,X1,H1).

    tj24(Op,X1,H1) :- member(Op,['+','*']), H1 =.. [Op,Y1,_],
                      integer(X1), integer(Y1), !, X1 =< Y1.
    tj24(_,_,_) :- true.

    x2(X1,'+',X2,R) :- X2 is R - X1, X1 =< X2.
    x2(X1,'-',X2,R) :- X2 is X1 - R.
    x2(X1,'*',X2,R) :- X1 =\= 0, X2 is R / X1, X1 =< X2.
    x2(X1,'/',X2,R) :- R =\= 0, X2 is X1 / R.

Example:

    ?- j24([2,5,7,11,20],280,L).
    L = 5+11*(7+(20-2)) ;
    L = 5+11*(7-(2-20)) ;
    L = 5+11*(20-(2-7)) ;
    L = 5*(20+2*(7+11)) ;
    L = 7-(2-11*(5+20))

and so on.

A little more complex:

    ?- j24([2,3,4,7,11,13,17,23,27,31],7239,Solution).
    Solution = 2+(3+(11+31*(7-(17-27/(4/(13+23))))))

When L = [2,3,4,7,11,13,17,23,27,31,37,43,47] and Result is 1117239, one solution is

    2+(3+(4+(27+(47+23*(43+13*(37+7*(11*(17+31))))))))

Edited: I made a small change to prune solutions:

    x2(X1,'+',X2,R) :- X2 is R - X1, X1 =< X2.
    x2(X1,'-',X2,R) :- X2 is X1 - R, X2 > 0.
    x2(X1,'*',X2,R) :- X1 =\= 0, X2 is R / X1, X1 =< X2.
    x2(X1,'/',X2,R) :- integer(R), R =\= 0, 0 is X1 mod R, X2 is X1 / R.

The program does not compute all the solutions. When the parameter Result is relatively small, it is easier to get a problem with many solutions, so the algorithm is fast.

The Prolog program computes a solution in one or two seconds; what would a similar Python program that solves this problem in fewer than 10 lines look like?
alexdowad almost 2 years ago
In the comments, swannodette promised a response blog post to this one. Here it is: https://swannodette.github.io/2013/03/09/logic-programming-is-underrated/
lurquer almost 2 years ago
I’ve written programs to solve and create those grid-style logic puzzles (such as “Five men went to dinner. Mr Brown ordered fish. The man with the red hat ordered lamb. The person who ordered beef was not Mr. White…” etc.).

Once you have a way of encoding the clues into a machine-readable syntax, it’s trivial to solve. But the tricky part is creating a good puzzle that can be solved by a human without guessing. That’s an art form. A brute-force solver won’t suffice to analyze your randomly generated sets of clues; instead, you need a solver that is only capable of making the types of deductions a human could make. Very subjective.
slaymaker1907 almost 2 years ago
> But the disadvantages of core.logic for solving logic puzzles don't end there. core.logic is very sensitive to the way that goals are ordered, in a bad way. Certain orderings, for example, will cause the program to go into an infinite loop, and the DSL is complex enough that this is not always readily apparent.

This is just not correct (or at least phrased VERY poorly). My understanding is that core.logic uses miniKanren under the hood, which is guaranteed to find an answer if an answer exists regardless of goal ordering. Unlike Prolog, miniKanren (and presumably core.logic) does not search depth first (which is why Prolog can run into infinite loops very easily). Instead, it roughly does interleaving BFS. It can search inefficiently, but that's true of any combinatorial method.

As for practical use cases, I really like how easy it is to write analysis code for parse trees and type systems. The C# T-SQL parser (used in various tools) is very tedious to actually use for quick jobs because of how many different node types it has. Instead of handling that monstrosity, I wrote a much smaller app that just went over the whole tree and converted it to JSON (including the types and type hierarchy) via reflection. It was then way easier to load it into Prolog and query the tree.
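
To illustrate the Prolog side of that claim (a minimal, hypothetical sketch; it has nothing to do with core.logic itself): the two predicates below describe the same relation over the same edge/2 facts, but under Prolog's depth-first strategy the first ordering loops forever while the second terminates.

    edge(a, b).
    edge(b, c).

    % Left-recursive ordering: the DFS expands path/2 again before ever
    % consulting an edge/2 fact, so ?- path(a, X). never terminates.
    path(X, Y) :- path(X, Z), edge(Z, Y).
    path(X, Y) :- edge(X, Y).

    % Reordered: try an edge first, then recurse. Same declarative reading,
    % but ?- path2(a, X). now yields X = b ; X = c and stops.
    path2(X, Y) :- edge(X, Y).
    path2(X, Y) :- edge(X, Z), path2(Z, Y).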
TimSweeneyEpic almost 2 years ago
This example shows that Clojure list comprehensions (the for... syntax) have much of the expressive power of logic programming (the first... syntax).

List comprehensions support a bundle of features, including producing multiple values and filtering them by testing and potentially failing.

Logic programming is that plus some more flexibility (more compositional forms of multiple-value production) and some new features (such as using unification to solve for unknowns).

We should expect mainstream languages to evolve to support more logic features, since they provide a more general and more compositional way to do many things. Pattern matching expressions in most languages are a limited special case of logic programming that would benefit from using the more general case. Same with casting constructs, null propagation operators, boolean and/or/not, and the internal compiler logic supporting features like Haskell typeclasses and Rust traits. If we can replace lots of special-case language features with a few general constructs, programming will be simpler, as Niklaus Wirth advocates for.
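
A small standard-Prolog illustration of the "unification to solve for unknowns" point (a sketch, not the commenter's): one relation runs forward like a comprehension, backward, or with unknowns on both sides.

    % app/3 mirrors the standard append/3: a relation between three lists,
    % not a function (renamed only to avoid clashing with the built-in).
    app([], Ys, Ys).
    app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).

    % Forward, like a list comprehension would compute:
    % ?- app([1,2], [3], Zs).      Zs = [1,2,3].
    %
    % Backward, solving for an unknown prefix:
    % ?- app(Xs, [3], [1,2,3]).    Xs = [1,2].
    %
    % Both unknown: enumerates every way to split the list:
    % ?- app(Xs, Ys, [1,2]).
    % Xs = [],    Ys = [1,2] ;
    % Xs = [1],   Ys = [2] ;
    % Xs = [1,2], Ys = [].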
bawolff almost 2 years ago
I've only done a bit of Prolog programming, but from what I've seen so far it feels like "logic" is the wrong lens.

It feels more like it's based around describing a graph, where some vertices have side effects, and then releasing DFS on the graph. I'd call it DFS programming, but also I'm a noob at it, so maybe more experienced people would disagree.
tangus almost 2 years ago
Sure, to solve a 5x5x5 puzzle we can test all 5!*5!*5! possibilities; it's faster than thinking it out. To solve a 7x7x7 one you'll need logic programming though.
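
For scale, a rough back-of-the-envelope check (added arithmetic, not the commenter's):

    5! * 5! * 5! =  120^3 =  1,728,000        (trivial to enumerate)
    7! * 7! * 7! = 5040^3 ≈  1.28 * 10^11     (where unguided enumeration stops being practical)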