TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

Why Python is not the programming language of the future

14 points by D_Guidi, about 5 years ago

3 comments

orf, about 5 years ago
This article has a lot of words but manages to say very little.

> Another reason is that Python can only execute one task at a time. This is a consequence of flexible datatypes — Python needs to make sure each variable has only one datatype, and parallel processes could mess that up.

And the very little it does say makes you question whether the author is a genius at dumbing things down to the point where they lose all meaning, or whether they just don't know what they are talking about.
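The claim quoted above conflates Python's global interpreter lock (GIL) with its dynamic typing. A minimal sketch (my illustration, not from the article) of what the GIL does and does not serialize: I/O-bound threads overlap just fine, because blocking calls such as `time.sleep` release the lock.

```python
# Four threads each blocking for 0.2 s. Because sleep releases the GIL,
# the waits overlap and the whole batch finishes in roughly 0.2 s, not 0.8 s.
import time
from concurrent.futures import ThreadPoolExecutor

def wait(seconds):
    time.sleep(seconds)
    return seconds

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(wait, [0.2] * 4))
elapsed = time.perf_counter() - t0

print(f"elapsed: {elapsed:.2f}s, results: {results}")
```

What the GIL actually serializes is CPU-bound bytecode execution, which has nothing to do with "flexible datatypes".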
okaleniuk, about 5 years ago
And yet again the speed argument. It's not about the language, really. If you want, you can make a Python program run fast. Anecdotal evidence: https://wordsandbuttons.online/outperforming_everything_with_anything.html

But you don't want that. You want the underlying libraries to be fast and the Python code to be cute. And that's the right way to go.

Part of my department's job is adopting the researchers' prototypes in Python and making them fast in C++. First of all, it's not about making them radically fast, it's about making them some 10%–100% faster. I guess the best I've heard about was 3x. Second, there were cases when the C++ remakes were even slower than the prototypes in Python.

This phenomenon has an explanation. If 99% of what a prototype does is done by NumPy, then we are really competing with NumPy.

For instance, NumPy might be built for a more recent architecture and use better superscalar instructions. Researchers can afford the very recent builds, but we have to support a lot of users with a lot of PCs, so we're basically stuck in 2010 with our target architecture.

And even this small detail matters more for performance than the glue it was all brought together with.
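The "fast library, cute glue" point can be illustrated with nothing but the standard library. In this sketch, `py_sum` is a hypothetical name for a hand-written loop running Python bytecode, while the built-in `sum()` is implemented in C; the C-backed call wins comfortably even though both are called "from Python".

```python
# Compare a pure-Python accumulation loop against the C-implemented sum().
import timeit

data = list(range(100_000))

def py_sum(xs):
    # Same result as sum(xs), but every iteration runs Python bytecode.
    total = 0
    for x in xs:
        total += x
    return total

t_loop = timeit.timeit(lambda: py_sum(data), number=50)
t_builtin = timeit.timeit(lambda: sum(data), number=50)

print(f"pure-Python loop: {t_loop:.3f}s, built-in sum(): {t_builtin:.3f}s")
```

The same shape of result, at a much larger scale, is why a C++ rewrite of a NumPy-heavy prototype often buys so little: the hot path was never Python to begin with.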
zwaps, about 5 years ago
I am surprised by the notion that "speed doesn't matter". I think it does, even if you have access to powerful workstations to develop on and servers to run on.

I think this is one of the biggest downsides to Python for medium-scale projects: you necessarily need to think a lot about performance and infrastructure just to get your stuff up and running. Do it wrong, and the speed is infeasible from the start; it doesn't scale and gets you into trouble later on.

For data analysis, for example, you can't really just start coding arbitrary Python. You need to know how you will eventually speed things up, using C or libraries based on C. And I maintain that parallelizing code in Python is neither straightforward nor performant. That is, performance optimization is coupled to development and deployment. I can't just "use" the base language to develop a prototype and worry about performance later. If I don't know what I will eventually do, if I just code pure Python, then the program usually turns out unworkably slow when faced with data. And even if you have "huge servers", you need your code to actually scale. In my experience, efficient small-scale Python code and efficient large-scale Python code are not the same thing!

E.g. I had to deploy something on Windows, and without fork and with the GIL, what ran well during testing became inefficiently slow. Just one choice of where to use multiprocessing, for example, ended up making it slower than running pure Python with no parallel code at all! And that speed meant that, contrary to what the article said, even on a large server, tasks simply would not finish. Meanwhile, the ingestion pipelines would clog up when data sizes became significant.

Furthermore, one package I used to represent data structures (networkx) simply could not scale at all and crashed machines with hundreds upon hundreds of GBs of RAM during certain operations. And that happened without warning at certain sizes, unforeseen. I had to rewrite huge parts of the program to make it work, including the whole database back end.

Of course, all of that is down to me not being a Python expert, just a normal scientist. Of course, next time I will be smarter, but only because I will have to plan and test performance during conceptual coding and know which tools will eventually scale.

And that is not the "promise" Python seemingly makes to us applied programmers.

I am itching to move to Julia as soon as I can. Not only is the Matlab-style syntax arguably superior for numerical / data science stuff, you can also get things up and running at reasonable speed and then use the same tooling and structure to make it scale.
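The GIL behaviour behind the commenter's Windows experience can be sketched with a toy CPU-bound task (assuming a standard CPython build, where the GIL serializes bytecode execution). Threads buy no speedup for compute-heavy work, and the process-based alternative carries spawn and pickling overhead on Windows, where fork is unavailable, so naive parallelism can easily come out slower than sequential code.

```python
# CPU-bound work: threads contend for the GIL, so four threads take
# roughly as long as running the four calls one after another.
import time
from concurrent.futures import ThreadPoolExecutor

def busy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

N, TASKS = 200_000, 4

t0 = time.perf_counter()
seq = [busy(N) for _ in range(TASKS)]
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=TASKS) as ex:
    thr = list(ex.map(busy, [N] * TASKS))
t_thr = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s, threads: {t_thr:.3f}s (no CPU speedup under the GIL)")
```

A `ProcessPoolExecutor` sidesteps the GIL, but on Windows each worker is a fresh spawned interpreter and arguments travel by pickling, which is exactly where small tasks can end up slower than no parallelism at all.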