I find that the often ML-inspired syntax sets too high a bar for most programmers to clear when they are being introduced to a language.

I think a syntax more like the one introduced here: https://shuangrimu.com/posts/language-agnostic-intro-to-dependent-types.html would be more accessible for a lot of programmers.

Separately, I think proofs are actually a bit overblown when it comes to dependent types: dependent types are most useful for specification, and specifications can oftentimes benefit from "fake" implementations.
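To make that last point concrete, here is a minimal sketch of the specification-first style in Lean 4 syntax (a hypothetical example of mine, not Pompom code): the dependent type states the contract precisely, while the proof obligation is stubbed out.

    -- Hypothetical Lean 4 sketch: the dependent type is the specification.
    def Sorted : List Nat → Prop
      | x :: y :: rest => x ≤ y ∧ Sorted (y :: rest)
      | _ => True

    -- A "fake" implementation: the contract is stated precisely,
    -- but the proof is deferred with `sorry`.
    def sortSpec (xs : List Nat) : { ys : List Nat // Sorted ys } :=
      ⟨xs, sorry⟩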
Sorry, but you cannot implement this in a day. I've written my own language very similar to this, and from my experience it takes way longer. Implementing type checking for just lambdas, pi types, universes, unit, and absurd would take a day or two on its own, not to mention sigma types, coproduct types, W-types, identity types, natural numbers, and lists. There are also type inference and evaluation to handle, plus the time spent coming to grips with what all of this means. It's a really cool project, but I expect it would take a week minimum to implement this and have a solid understanding of everything you've done.
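For a sense of scale, here is a rough Haskell sketch (hypothetical, not Pompom's actual representation) of just the core term syntax involved; every constructor brings its own typing, conversion, and evaluation rules, which is where the time goes:

    -- Hypothetical core-term syntax for a minimal dependently typed
    -- language; each constructor needs its own typing and conversion rules.
    data Term
      = Var Int           -- de Bruijn index
      | Universe Int      -- Type 0, Type 1, ...
      | Pi Term Term      -- dependent function type (x : A) -> B x
      | Lam Term          -- lambda abstraction
      | App Term Term     -- application
      | Sigma Term Term   -- dependent pair type
      | Unit              -- unit type
      | Absurd            -- empty type
      deriving (Eq, Show)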
In practice, what kinds of proofs are people building when writing real-world programs?

Are people writing only top-level assertions (such as "this main function always terminates") while the checker points at all the gaps?

Or does one have to write proofs for every single intermediate layer and function?

In which case, do people then have access to prebuilt "proven functions" in a stdlib, such as "NeverEmptyList" or "AlwaysGreaterThanXVariable"?
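As an illustration of what I mean by a "proven function", something like this Lean 4 sketch (a hypothetical example, not from any particular stdlib), where the non-emptiness requirement is a proof argument, so the head function is total:

    -- Hypothetical Lean 4 sketch: `head` made total by requiring
    -- a proof that the list is non-empty.
    def safeHead {α : Type} : (xs : List α) → xs ≠ [] → α
      | [],     h => absurd rfl h
      | x :: _, _ => x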
What exactly are the foundations of this? The rules on the page suggest that it is basically the calculus of constructions, but the example involving the list suggests that there are inductive types too. Are the inductive types part of the foundations and merely omitted from the list of rules, or are they something else that is not part of the foundation?
> Pompom is an attractive implementation of an extensional (!) dependently typed language

> Pompom provides [...] a strong normalization system

How is this possible? Extensional dependent type theory has undecidable term/type equality, so there cannot be a strong normalization algorithm.
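For reference, the culprit is the equality reflection rule, which folds propositional equality back into judgmental equality and thereby makes conversion (and hence type checking) undecidable:

    \frac{\Gamma \vdash p : \mathrm{Id}_A(a, b)}{\Gamma \vdash a \equiv b : A}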
The Readme defines:

    // Data NonEmpty = | New a (NonEmpty a)
    NonEmpty
    |A :: ~ * ~> * => {(list A) :: |new}. // A list non-empty is list only with new constructor
The Haskell datatype reads like a co-inductive definition: a stream of list elements. But the NonEmpty in your language should be

    data NonEmpty a = New a (List a)

right?
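For comparison, Haskell's own Data.List.NonEmpty takes exactly that inductive shape: one guaranteed head plus an ordinary (possibly empty) tail, rather than a stream.

    -- From Haskell's Data.List.NonEmpty: one mandatory head,
    -- then a plain list tail -- inductive, not co-inductive.
    data NonEmpty a = a :| [a]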
So, as someone who is curious but has no idea of the theoretical background here: what should I learn in order to understand the code and use this language in a basic way? I think the idea of being able to construct proofs is pretty cool, but I'm pretty lost here.
The syntax is really bad, and I am familiar with Haskell, Lean, Idris, Agda, etc. If the author reads this, I recommend instead copying the syntax used by Lean, Idris, or Agda. They are very close to each other and all good.
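For instance, the README's List datatype would read roughly like this in Lean 4 (the Idris and Agda versions are very close):

    -- The same datatype in Lean 4 syntax (illustrative sketch).
    inductive MyList (A : Type) where
      | empty : MyList A
      | new   : A → MyList A → MyList A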
The syntax is quite... complex, I dare say. It looks like you borrowed from Haskell, which already makes it hard to gain an intuitive understanding of the syntax, and then added bars?
> Pompom language is so simple that you can implement it yourself just by looking in the source code (you can think that our language is the BASIC language of proof assistants).

    List
    | A :: ~ * ~> * => {(list A) :: |new |empty }. // A list is either a new or a empty constructor
Okay... BASIC is a high-level language aimed at people who are not involved in the sciences. If your goal is a high-level syntax aimed at people outside of those with a formal-proof background, I think this is a big miss. The syntax is anything but BASIC, compounded by the choice of Lisp.
I’m constantly baffled by what functional programmers call “simple”. I’m sure this language does a lot with little, but calling it simple is audacious to say the least. Why do so many things happen in a single line? The lack of imperative constructs in these languages forces one to nest a lot to be expressive, and we get a symbol salad. It’s almost as if the code got passed through a minifier/function inliner.

Unless FP embraces more modular and structured ways of programming, it’ll always remain a niche. OCaml’s ‘let’ and Haskell’s ‘where’ are steps in the right direction, but we need to go a lot farther. To think that any human, with any amount of training, can parse such code in one pass is a fantasy. Since parsing is part of the coding loop, developer productivity is compromised massively.

Mathematics has a similar problem, where single-letter variable names with tiny letters on all four corners are ubiquitous. There’s definitely a tendency in the community to play “symbol golf”. It’s high time we improved the ergonomics of both math and FP, because the rewards can be tremendous. Rust manages to do some of this to great success, and programmers have embraced it well.