Liquid Types enabled a 6x speedup in high-performance parsing of UDP packets, according to Gabriella Gonzalez's "Scrap your Bounds Checks with Liquid Haskell" [1].<p>With Liquid Haskell, bounds checks are moved from runtime to compile time, handled semi-automatically by SMT solvers. Static types help programmers write correct programs faster, and the programs also run faster.<p>As an aside, speeding up programs with static analysis (constrained dynamism) also shows up in Mojo (a variant of Python) and Swift [2].<p>[1]: <a href="https://github.com/Gabriella439/slides/blob/main/liquidhaskell/slides.md">https://github.com/Gabriella439/slides/blob/main/liquidhaske...</a> "Scrap your Bounds Checks with Liquid Haskell"<p>[2]: <a href="https://github.com/modularml/mojo/discussions/466">https://github.com/modularml/mojo/discussions/466</a> "Mojo and Dynamism"
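<p>A minimal sketch of the mechanism, in Liquid Haskell-style syntax (not taken from the slides; `at` is a made-up name, and it assumes the `vlen` measure from Liquid Haskell's bundled vector specs):<p><pre><code>  import qualified Data.Vector as V

  -- The refinement on i says the index is in bounds, so the SMT solver
  -- discharges the bounds check at compile time and the body can use
  -- the unchecked access.
  {-@ at :: xs:V.Vector a -> {i:Int | 0 <= i && i < vlen xs} -> a @-}
  at :: V.Vector a -> Int -> a
  at xs i = V.unsafeIndex xs i
</code></pre>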
An important thing to remember about fancy type systems is that they can only make guarantees in a closed world. If your data comes from the network or a file, you will likely need a reader or parser that does runtime checks somewhere, but a good type system can provide assurance that you did each check exactly once.<p>On a complicated single-process system (a monolith), that can be a big improvement. For distributed systems (microservices), there will still be plenty of runtime checks. That's what allows each service to have its own lifecycle.<p>I'm wondering what Elixir will do, given that Erlang servers divide up work into lots of independent, distributed components?
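<p>To illustrate the "exactly once" point in plain Haskell (just a sketch; Port and mkPort are made-up names, nothing Erlang-specific): the raw value from the network is checked once at the boundary, and everything downstream takes the already-validated type.<p><pre><code>  module Port (Port, mkPort, portNumber) where

  -- The constructor is not exported, so the only way to obtain a Port
  -- is mkPort, which performs the runtime check exactly once.
  newtype Port = Port Int

  mkPort :: Int -> Maybe Port
  mkPort n
    | n >= 1 && n <= 65535 = Just (Port n)
    | otherwise            = Nothing

  portNumber :: Port -> Int
  portNumber (Port n) = n
</code></pre>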
I honestly thought that this was going to be about actual liquid types -- from the regular ones that (macroscopically) differ largely in terms of viscosity, to non-Newtonian liquids which have non-linear viscosity characteristics, to long-chain (PEG-type) polymers that have slightly unusual flow characteristics, to superfluids that have extremely weird characteristics and zero viscosity.<p>...I'd read that article. Now I'm starting to think that I should write it.
This sort of idea is why I sketched out pdv [0] in Zig. Even extremely simple type systems can handle arbitrary invariants when you are allowed to compute with those invariants at compile time.<p>[0] <a href="https://github.com/hmusgrave/pdv">https://github.com/hmusgrave/pdv</a>
I like this sort of idea for adding types to dynamic languages (like JS).<p>What if, instead of TS, you had plain JS as-is (no compiler needed!) and just added a comment like:<p><pre><code>  /* x is an integer, x > 0 ... */
  function add(x, y) {
    return x + y;
  }
</code></pre>
Sorry the example ain't great, but the point is that you can be precise with refinement types. I think refinement types (minus all the mathy explanations) are more intuitive: they would read like assertions or C# code contracts, and the content of those types would look "business-ey" rather than "category theory-ish".
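<p>For comparison, here is roughly the same contract in Liquid Haskell's existing syntax (a sketch; only x is refined, matching the comment above):<p><pre><code>  -- The prose comment becomes a machine-checked refinement: x must be
  -- a positive Int.
  {-@ add :: {v:Int | v > 0} -> Int -> Int @-}
  add :: Int -> Int -> Int
  add x y = x + y
</code></pre>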
The DML link in the article is broken; might I instead point you to the relevant ATS-lang (successor^2 of DML) documentation on `Dependent Types in ATS2`: <a href="https://ats-lang.sourceforge.net/DOCUMENT/INTPROGINATS/HTML/c2100.html" rel="nofollow noreferrer">https://ats-lang.sourceforge.net/DOCUMENT/INTPROGINATS/HTML/...</a><p>Among other things, it discusses the use of dependent types for array bounds checking, for creating "sorts" of numbers satisfying various predicates, and, more interestingly, for combining dependent types with PURE linear types (as opposed to affine types a la Rust, lol)...<p>AFAIK, DML was originally intended as a gateway to constructing typed assembly (DARPA).<p>Having been a researcher in this field, I originally found this quote from the original 'Liquid Types' paper suspect:
"a system that combines Hindley-Milner type inference with Predicate Abstraction to automatically infer dependent types precise enough to prove a variety of safety properties" .... type inference with dependent types is undecidable!? Then again, this is one of the primary focuses of ATS3. I digress...<p>This area of research is rich in pipe dreams and cerebral investigation. In my biased opinion, ATS3 (the current version of successor^3 of DML, actively developed by its PI, Professor Hongwei Xi) is the only language in this space aimed at practical programming in industry... of course, there is always Haskell....<p>What I mean is: templates + linear types + dependent types + Hindley-Milner type inference....<p>At the end of the day, who doesn't love hyper-efficient metaprogramming magic?
How do you define operations on these liquid types? You can't even define subtraction or addition on this "non-zero integer" type. These types are not closed under such operations, so how are they useful?
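<p>To make the point concrete (a sketch in Liquid Haskell-style syntax; the NonZero alias and minus are my own illustrative names):<p><pre><code>  {-@ type NonZero = {v:Int | v /= 0} @-}

  -- Subtraction is not closed over NonZero (e.g. 3 - 3 = 0), so the
  -- most precise type the checker can give the result is plain Int.
  {-@ minus :: NonZero -> NonZero -> Int @-}
  minus :: Int -> Int -> Int
  minus x y = x - y
</code></pre>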
I wonder how pragmatic those can become. Usually I don't really care about "exact" exactness, a common theme in Haskell. A sort of "under common assumptions" (no underflow/overflow/OOM, etc.) proof of laws and properties would be enough. This eliminates a whole class of bugs and strengthens your ability to reason, but doesn't make everything super, super complicated by forcing you to think about all those edge cases.<p>One usually isn't writing airplane control software. Some bugs are okay, and speed of development versus correctness is a trade-off.
I wonder what is possible in more common languages like Java or Python.<p>Certainly, the language itself won't have liquid types in the compiler any time soon. But taking Ruby's Sorbet as inspiration, I wonder if you could annotate Java classes with semantic rules and do some simple verifications as a post-compilation step?<p>There's too much bad code out there that can't just be replaced. Tools to help make it better would be a huge boon.
Programming language design took a wrong turn with these extremely complicated type systems. They end up hurting developer productivity, and you spend more time fiddling with types than solving the problem at hand. We should be focusing on more effective forms of abstraction instead of types.