테크에코 (TechEcho): a technology news platform built with Next.js that serves global tech news and discussion.
Having your compile-time cake and eating it too

32 points | by signa11 | 4 days ago

4 comments

codebje | 3 days ago
On running code at compile time: what this reads as to me is evaluating constant expressions at compile time. We're thoroughly used to compilers optimising constant expressions away for us: "if 4 < 2 then a else b" will usually result in identical compiler output to just "b". "4" is constant, "2" is constant, "<" is constant, and the application of "<" to "4" and "2" is constant.

What about "factorial 8"? Assuming the compiler knows that "factorial" is a pure function, it also knows that "factorial 8" is a constant, and in theory it could evaluate that function and substitute in the result. In practice this won't generally happen automatically, because compile times would blow out something fierce.

But in, say, C++, you can do this:

    constexpr unsigned factorial(unsigned n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }
    constexpr unsigned fac8 = factorial(8);

And it will compile down to the static integer value. This looks very much like the author's proposal; reading the various rules around constant initializers will probably help expose pitfalls.

More interesting for me is partial evaluation. Constant folding evaluates the fully static parts of a program to produce a static result, shifting that execution cost from run time to compile time. Partial evaluation also allows some of the program's dynamic inputs to be fixed to a static value, slaps a virtual "constexpr" on everything that now has only static inputs (and is a pure function), and reduces everything it can.

Partial evaluation gets fun when you start applying it to certain kinds of program: partial evaluation of an interpreter with a given program's source as input acts just like a compiler. Partial evaluation of a partial evaluator with an interpreter as fixed input produces a compiler. Partial evaluation of a partial evaluator with a partial evaluator as fixed input produces a compiler generator: feed it an interpreter for any language, get a compiler for that language.
aatd86 | 3 days ago
Is it perhaps an ELI5 way of speaking about higher-kinded types and type-checking decidability?

Is it about types not being available in the front end of the language as first-class values?

Or is it about types not being defined as sets of values but as something else?

It got me a little confused, so I am asking.
burakemir | 3 days ago
One angle (no static types) is Racket, which has multiple stages for its macros: maybe the most developed, actually working "new tradition".

Researchers have also looked into multistage programming, which is enabled by representing code at runtime, including how to represent that code in type systems and logic.

For Scala, there was a realization that both macros and multistage programming need a representation of programs. I am falling asleep so can't dig out references now, but it is exciting stuff, and I think the last word has not been written on all this.
codebje | 3 days ago
On types, I think there's a philosophical argument about whether types can be values, and a related one about whether types should be values. But I think I disagree with the author about what a value is, because as far as I understand it, a value is a piece of data manipulated by a program during execution.

Languages with type erasure obviously never have types as values. Languages without type erasure have _something_ as a value, but is that something a type, or a representation of a type, and is that a valid distinction? I don't feel well qualified to make a judgement call, largely because I favour type erasure. Types are specification, not implementation.

What the author calls values, I tend to call terms: bits of program that can be judged to be of a particular type or not. "1" is a term: you could judge "1" to be of type "int", or of type "unsigned". "factorial 8" is a term. A value is just a term that's represented at runtime. But is there a type of a type? Commonly, no: types live in a different universe, are specified with a different syntax, and are understood under different semantics. The type of "factorial", e.g., is a function from int to int. In a language with generics, you might have "List T", which takes some type T and produces the type of lists of Ts.

There's no particular reason why you can't say that the type of List is a function from Type to Type, why you can't use the same syntax to specify that function, or why the semantics of that function should be any different from the semantics of "factorial". Consider:

    def factorial : Nat → Nat :=
      fun n => if n = 0 then 1 else n * factorial (n - 1)

    def natty : Type := Nat → Nat

This doesn't imply any fancy-pants type system features. The above two terms are expressible in the simply typed lambda calculus. If you add only outermost universal quantification, i.e. parametric polymorphism, you get Hindley-Milner. If you add arbitrary quantification, you get System F; with type-level functions, System Fω.

Universal quantification, HM style, gives you functions from types to values (terms that can exist at runtime). If you have, say, "length : List a -> Int", you have a function that's universally quantified over "a": you tell "length" what type "a" is, and you get back a function from one concrete type to another, something you can have at runtime. (All the functions producible here are identical, thanks to type erasure, so while this type-to-value step actually happens in order to perform type checking, the compiler will only ever produce one implementation.)

The last option, railed against in the article, is a function from a value to a type. This is, to agree with the article, generally pretty weird and hard to get a handle on. Where the article deliberately picks pointless examples, though, I'd like to point out two much more commonly encountered kinds. The first is "Vector a n", the type of vectors of element type a and length n: the type argument is a type, but the length argument is a value. The other example comes from C: tagged unions. You have a union type that might be one of a handful of types, and a value that distinguishes which one it is. In C, there's no type system support, and you just have to get it right every single time you use it. If you have functions from values to types, though, you can have the type checker get involved and tell you when you've made a mistake.

The whole point of static types is to specify your program's behaviour in a way that lets the compiler reject more incorrect implementations. We know from way back in Gödel's day that no type system can be both complete (accepting all correct programs) and consistent (rejecting all incorrect programs). Most type systems we can practically work with are inconsistent (you can get any program accepted by the compiler with the right type casts), because the incompleteness of a consistent type system hinders expressivity. I believe it's possible to have a consistent type system with sufficient expressivity to allow all _useful_ programs to be written, and those type systems will roughly correspond to the Calculus of Constructions, i.e. they will be dependently typed. I am not yet sure I believe that the cognitive load of working in such a language will make it worth doing, which, as far as I can tell, is the point the author of the article is making about type systems.