Trap – Transformers in APL

95 points by tlack 7 months ago

6 comments

photonthug 7 months ago

> Though APL may strike some as a strange language of choice for deep learning, it offers benefits that are especially suitable for this field: First, the only first-class data type in APL is the multi-dimensional array, which is one of the central objects of deep learning in the form of tensors. This also signifies that APL is by nature data parallel and therefore particularly amenable to parallelization. Notably, the Co-dfns project compiles APL code for CPUs and GPUs, exploiting the data parallel essence of APL to achieve high performance. Second, APL also almost entirely dispenses with the software-specific "noise" that bloats code in other languages, so APL code can be directly mapped to algorithms or mathematical expressions on a blackboard and vice versa, which cannot be said of the majority of programming languages. Finally, APL is extremely terse; its density might be considered a defect by some that renders APL a cryptic write-once, read-never language, but it allows for incredibly concise implementations of most algorithms. Assuming a decent grasp of APL syntax, shorter programs mean less code to maintain, debug, and understand.

This is really cool. At about 150 lines, terse indeed. And it makes sense that of course APL could work well with GPUs, but I'm kind of surprised there's enough of it still out in the wild so that there's already a reliable tool chain for doing this.
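As a rough illustration of that blackboard-to-code correspondence (a sketch of my own in Dyalog APL, not code taken from trap itself), softmax over a vector is essentially the formula written out:

    softmax ← {(*⍵)÷+/*⍵}   ⍝ e^x divided by the sum of e^x over the vector
    softmax 1 2 3           ⍝ → roughly 0.09 0.245 0.665

One line, no loops and no library call, which is the kind of terseness the quoted passage is pointing at (a production version would subtract the max first for numerical stability).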
sakras 7 months ago

> Though APL may strike some as a strange language of choice for deep learning

I've actually spent the better part of last year wondering why we _haven't_ been using APL for deep learning. And actually I've been wondering why we don't just use APL for everything that operates over arrays, like data lakes and such.

Honestly, APL is probably a good fit for compilers. I seem to remember a guy who had some tree-wrangling APL scheme, and could execute his compiler on a GPU. But I can't find it now.
bornaahz 7 months ago

Hello everyone,

I am the author of this project. If anyone has any questions concerning trap, I'd be more than happy to address them.
anonzzzies 7 months ago

There is this on https://shakti.com (the 'new k' from Arthur):

    k-torch llm(61) 14M 2 14 6 288 288 x+l7{l8x*{x%1+E-x}l6x}rl5x+:l4@,/(hvi,:l3w)Ss@S''h(ki,:ql2w)*mql1w:rl0x (18M 2 32000 288)

which apparently can run on the GPU, someone told me on Discord (but I'm not sure if it's true or not).
gcanyon 7 months ago

> Though APL may strike some as a strange language of choice for deep learning

It sure did to me, even as someone who has written (a trivial amount of) J. But the argument that follows is more than convincing.
smartmic 7 months ago

It would be good if the APL dialect in which this is implemented were mentioned on the front page. I have implemented some things in GNU APL, which is an (almost) complete implementation of ISO standard 13751, based primarily on APL2. More common and modern is the proprietary Dyalog APL, which I assume is used here (and which is also free for personal use).