> Non-goals: Drop-in replacement for CPython: Codon is not a drop-in replacement for CPython. There are some aspects of Python that are not suitable for static compilation — we don't support these in Codon.<p>This is targeting a Python subset, not Python itself.<p>For example, something as simple as this will not compile, because lists cannot mix types in Codon (<a href="https://docs.exaloop.io/codon/language/collections#strong-typing" rel="nofollow">https://docs.exaloop.io/codon/language/collections#strong-ty...</a>):<p><pre><code> l = [1, 's']
</code></pre>
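For contrast, the same value is perfectly legal in plain CPython, and restructuring mixed data as a tuple is one way that (per the linked strong-typing docs) should remain compilable, since a tuple's element types are part of its static type. A CPython-verified sketch only; I haven't run it through Codon:

```python
# Legal in CPython: list elements are typed at runtime, not compile time.
l = [1, 's']
assert [type(x).__name__ for x in l] == ['int', 'str']

# One Codon-friendly restructuring (an assumption based on the linked
# strong-typing docs): a tuple's element types are part of its static
# type, so mixed element types don't force a single list element type.
t = (1, 's')
assert t == (1, 's')
```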
It's confusing to call this a "Python compiler" when the constraints it imposes pretty fundamentally change the nature of the language.
> Is Codon free? Codon is and always will be free for non-production use. That means you can use Codon freely for personal, academic, or other non-commercial applications.<p>I hope it is released under a truly open-source license in the future; this seems like a promising technology. I'm also wondering how it can match C++ performance while still being garbage collected.
I immediately wonder how it compares to Shedskin¹<p>I can say one thing - Shedskin compiles to C++, which was very compelling to me for integrating into existing C++ products. Actually, one more thing: Shedskin is open source under GPLv3 (like GCC).<p>1: <a href="https://github.com/shedskin/shedskin/">https://github.com/shedskin/shedskin/</a>
What's up with their benchmarks[1]? The page just shows benchmark names; I don't see any numbers or graphs. Tried Safari and Chrome.<p>[1]: <a href="https://exaloop.io/benchmarks/" rel="nofollow">https://exaloop.io/benchmarks/</a>
Unclear whether this has been in the works longer than the GraalVM LLVM build of Python discussed yesterday [1]. The first HN discussion is from 2022 [3].<p>Any relation? Any comparisons?<p>Funny that I can't find the license for GraalVM Python in their docs [2]. That could be a differentiator.<p>- [1] GraalVM Python on HN <a href="https://news.ycombinator.com/item?id=41570708">https://news.ycombinator.com/item?id=41570708</a><p>- [2] GraalVM Python site <a href="https://www.graalvm.org/python/" rel="nofollow">https://www.graalvm.org/python/</a><p>- [3] HN Dec 2022 <a href="https://news.ycombinator.com/item?id=33908576">https://news.ycombinator.com/item?id=33908576</a>
Reminds me of these two projects which were presented at EuroPython 2024 this summer:<p><a href="https://ep2024.europython.eu/session/spy-static-python-lang-fast-as-c-pythonic-as-python" rel="nofollow">https://ep2024.europython.eu/session/spy-static-python-lang-...</a><p><a href="https://ep2024.europython.eu/session/how-to-build-a-python-to-c-compiler-out-of-spare-parts-and-why" rel="nofollow">https://ep2024.europython.eu/session/how-to-build-a-python-t...</a><p>(The talks were fantastic but they have yet to upload the recordings to YouTube.)
It's a really expensive piece of software; that's presumably why they don't publish their prices. I don't think it's reasonable to market such a product to your average dev. Anyhow, Cython and a bunch of others provide free and open-source alternatives.
There is also RPython (used by PyPy) (<a href="https://rpython.readthedocs.io/" rel="nofollow">https://rpython.readthedocs.io/</a>), a strict subset of Python that allows static analysis, designed specifically for the translation logic needed by PyPy. I was told that RPython is not really intended as a general-purpose language/compiler but specifically to implement something like PyPy.<p>Still, it's maybe an interesting comparison to Codon.
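A loose illustration of the kind of code that subset allows: every variable keeps a single, inferable type throughout, so a whole-program translator can type-check it up front. (RPython's actual restrictions are more involved; this sketch is only checked under CPython.)

```python
# RPython-style code: each variable holds one static type for its whole
# lifetime, which is what makes whole-program type inference feasible.
def count_vowels(s):
    n = 0          # n is always an int
    for ch in s:   # ch is always a one-character str
        if ch in 'aeiou':
            n += 1
    return n

assert count_vowels('codon') == 2
```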
Instead of building their GPU support atop CUDA/NVIDIA [0], I’m wondering why they didn’t instead go with WebGPU [1] via something like wgpu [2]. Using wgpu, they could offer cross-platform compatibility across several graphics APIs, covering a wide range of hardware including NVIDIA GeForce and Quadro, AMD Radeon, Intel Iris and Arc, ARM Mali, and Apple’s integrated GPUs.<p>They note the following [0]:<p>> The GPU module is under active development. APIs and semantics might change between Codon releases.<p>The thing is, based on the current syntax and semantics I see, it’ll almost certainly need to change to support non-NVIDIA devices, so I think it might be a better idea to just go with WebGPU compute pipelines sooner rather than later.<p>Just my two pennies…<p>[0]: <a href="https://docs.exaloop.io/codon/advanced/gpu" rel="nofollow">https://docs.exaloop.io/codon/advanced/gpu</a><p>[1]: <a href="https://www.w3.org/TR/webgpu" rel="nofollow">https://www.w3.org/TR/webgpu</a><p>[2]: <a href="https://wgpu.rs" rel="nofollow">https://wgpu.rs</a>
People that landed here may be interested in Mojo [0] as well.<p>[0] <a href="https://www.modular.com/mojo" rel="nofollow">https://www.modular.com/mojo</a>
So, assuming I don't need integers bigger than int64 and don't rely on the insertion order of built-in dicts, can I just take arbitrary Python code and use it with Codon? Can I use external libraries? NumPy, PyTorch? I also noticed that it isn't supported on Windows.
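For reference, these are the two CPython guarantees being given up there (a sketch run under CPython only; Codon's exact behavior is whatever its docs specify):

```python
# CPython ints are arbitrary precision, so exceeding int64 is fine here;
# a 64-bit machine int would wrap or overflow on this value.
big = 2**63 + 1
assert big > 2**63 and big % 2 == 1

# Since 3.7, CPython dicts preserve insertion order as a language
# guarantee; code relying on that ordering is what the caveat is about.
d = {'b': 1, 'a': 2}
assert list(d) == ['b', 'a']
```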
From the documentation of the differences with Python:<p>> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.<p>That seems really odd to me. Who would use a framework nowadays that doesn't support Unicode?
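Concretely, here is what an ASCII-only string type gives up (CPython behavior shown; I haven't tested how Codon actually handles non-ASCII input):

```python
# CPython strings are sequences of Unicode code points.
s = 'naïve café'
assert len(s) == 10            # code points, not bytes

# Non-ASCII text cannot round-trip through an ASCII-only representation.
try:
    s.encode('ascii')
    raised = False
except UnicodeEncodeError:
    raised = True
assert raised

# The usual fix is an explicit byte encoding such as UTF-8.
assert len(s.encode('utf-8')) == 12   # ï and é take two bytes each
```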
Biggest problem at the moment is the lack of async support, I guess<p><a href="https://github.com/exaloop/codon/issues/71">https://github.com/exaloop/codon/issues/71</a>
I hope one day the compiler itself will be optimized even more: <a href="https://github.com/exaloop/codon/issues/137">https://github.com/exaloop/codon/issues/137</a>