> it allows to directly convert the problem statement into an efficiently solvable declarative problem specification without inventing an imperative algorithm. -- Sergii Dymchenko

This quote from the front page reminds me of the motivation for Autograd (and other AD frameworks):

> just write down the loss function using a standard numerical library like Numpy, and Autograd will give you its gradient.

or even of probabilistic programming languages like Stan, where you write down a Bayesian model and get posterior samples back.

Working backwards (as I know Stan but not Picat), I'd guess that to really put the language to work you need to be aware of the limits of the implementation, and how to dance around them.
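To make the "write the function, get the gradient" idea concrete, here is a toy sketch of automatic differentiation using dual numbers (forward mode). This is a hypothetical illustration of the general technique, not how Autograd itself works internally — the real library does reverse-mode differentiation over NumPy arrays:

```python
# Toy forward-mode automatic differentiation via dual numbers.
# Hypothetical sketch of the AD idea; real Autograd is reverse-mode over NumPy.
class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps  # value and derivative parts

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule carried in the dual part
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)
    __rmul__ = __mul__


def grad(f):
    """Return df/dx, computed by seeding the dual part with 1."""
    return lambda x: f(Dual(x, 1.0)).eps


# "Just write down the loss function" -- here f(w) = 3w^2 + 2w, so f'(w) = 6w + 2.
loss = lambda w: 3 * w * w + 2 * w
print(grad(loss)(5.0))  # → 32.0
```

The appeal, as with Stan or a constraint solver, is that you state *what* you want (the loss) and the machinery derives *how* (the gradient).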