Ask HN: C/C++ developer wanting to learn efficient Python

35 points | by jfbenson | about 1 year ago
Hey,

I have been writing Python in my everyday work for the last 5 years or so. However, I never took a course on Python. I want to prepare for some upcoming job interviews where I will be programming in Python. How can I improve my ability to write data structure/algo code in Python? I want to gain a similar understanding of what is "fast" in C++ (std::move, copy elision, etc.), but with Python. Any suggested resources?

25 comments

timmaxw | about 1 year ago
In Python job interviews, I think the interviewer will only judge your code on asymptotic complexity, not absolute speed. I think Python engineers generally aren't expected to know how to micro-optimize their Python code.

Some general tips for algorithmic complexity in Python:

- Python's list is equivalent to C++ std::vector. If you need to push/pop at the head of the list, use Python's "collections.deque" to avoid the O(N) cost. Python's standard library doesn't have a linked list implementation.

- Python's dict is a fast unordered hashmap. However, if you need order-aware operations like C++'s std::map::lower_bound(), you're out of luck; Python's standard library doesn't have a tree implementation.

- Python has a "bisect" module for binary searching in a Python list, and a "heapq" module for using a Python list as a heap. However, neither one is nicely encapsulated in a data type.

If your Python program is seriously CPU-bound, the normal solution is to use a C/C++/Rust extension. For example, if you're doing large numeric calculations, use NumPy; it can store a vector of numbers as a single contiguous array, which is much faster than a list-of-Python-floats.

If you want to parallelize across CPU cores, it's important to understand the Python GIL (global interpreter lock). Often you need to use multiple Python processes. See e.g. https://superfastpython.com/multiprocessing-pool-gil/

Maybe also worth reading about __slots__ (https://wiki.python.org/moin/UsingSlots); it's a micro-optimization that helps when allocating a lot of tiny Python objects.

Hope some of that is helpful! Good luck with your job interviews.
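A quick sketch of the standard-library tools mentioned above — deque for O(1) pops at the head, bisect for binary search over a sorted list, heapq for using a plain list as a min-heap:

```python
from collections import deque
from bisect import bisect_left
import heapq

# deque: O(1) appends/pops at both ends, unlike list.pop(0) which is O(N)
q = deque([1, 2, 3])
q.appendleft(0)
first = q.popleft()  # 0

# bisect: binary search over an already-sorted list (like std::lower_bound)
sorted_nums = [10, 20, 30, 40]
idx = bisect_left(sorted_nums, 30)  # index of the first element >= 30

# heapq: treat an ordinary list as a min-heap in place
heap = [5, 1, 4]
heapq.heapify(heap)
smallest = heapq.heappop(heap)  # 1
```

As the comment notes, none of these are encapsulated data types: bisect and heapq are just functions operating on a list you manage yourself.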
gwking | about 1 year ago
I have been writing Python for 15 years now and have only occasionally needed or wanted to optimize algorithms. When I have, the general takeaways have been:

* Flattening data from hierarchical, custom types into flatter, builtin collection types can make a speed difference. The builtin type methods spend more time in native code and are optimized.

* Lots of things that I thought would make a difference barely moved the needle. There is so much pointer indirection internal to CPython.

* The set and frozenset types can offer easy and immediate algorithmic improvements. This is obvious from a formal/academic perspective but continues to be my favorite thing about writing quick-and-dirty Python. I have encountered cases where an algorithm called for a fancy data structure, but doing simple set intersection and difference was good enough.

* Over time, Python taught me to worry less about optimizing things that don't matter. Everyone pays lip service to the perils of premature optimization, but I think there are layers to the problem. Fast languages make many affordances for optimization, and to some extent compel you to make detailed choices that have performance implications (e.g. choice of integer width, choice of collection type). There is something very liberating about reminding myself "it's going to be slow either way, and it doesn't matter." When I work in Swift, for example, I find myself getting distracted by finer details that relate to efficiency.
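The set/frozenset point in minimal form — average-case O(1) membership tests and built-in set algebra replace hand-rolled nested loops (the IDs here are illustrative, not from the comment):

```python
seen_ids = {101, 202, 303, 404}
incoming_ids = [202, 505, 303]

# O(1) average-case membership test instead of scanning a list per item,
# turning an O(N*M) nested loop into O(M)
duplicates = [i for i in incoming_ids if i in seen_ids]

# set intersection and difference replace hand-written loops entirely
common = seen_ids & set(incoming_ids)
only_new = set(incoming_ids) - seen_ids
```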
TheAlchemist | about 1 year ago
I would say there is no such thing as fast Python code. It's not the purpose of the language.

You may want to take a look at Mojo: https://www.modular.com/max/mojo

It's not really there yet, but the promise is to "combine the usability of Python with the performance of C".
vismit2000 | about 1 year ago
Advanced Python Mastery: https://news.ycombinator.com/item?id=36785005

Book: High Performance Python
v8engine | about 1 year ago
Check out old PyCon schedules and search for 'efficient(cy)/improve/speed/performance'.

Every PyCon has at least one talk on the topic, and they are usually filled with a variety of tips and insights.

Prioritise keynote talks, then recent ones, then country-level PyCons, then city-level ones.

https://pycon.org/
BerislavLopac | about 1 year ago
Python is a very fast language, but not in the sense you would expect as a C++ developer: its execution is (comparatively) very slow, but it shines at speed of development.

Many things that one might take as given in other languages are *optional* in Python: static type analysis, multithreading, immutability and the like. When it comes to writing algorithms in Python, it's best to think of it as executable pseudocode, there to help you get to correct execution first and optimise for performance later.

In Python, development happens in distinct steps. The first step is to get your code to do what you need, without thinking about readability and efficiency. In many cases, this will result in code that is ready to be used, even in production (especially if you include unit tests and type checking, for good measure); if not, the next step is to comb the code and optimise it to be faster. What that means exactly will depend on your needs, but with a combination of external libraries like NumPy, multiprocessing and async it is quite possible to reach sufficient performance.

Finally, if none of that results in the desired speed, the performance-critical components can be extracted and rewritten in a low-level language (where you have a huge advantage as a C++ dev) that Python will happily wrap around. This is the exact strategy used by most performant libraries out there, like NumPy itself.
stefanos82 | about 1 year ago
Just read the best Python book that's out there: Fluent Python; enough said!
b20000 | about 1 year ago
Write your code in C++, then make a python wrapper for it. Done.
giancarlostoro | about 1 year ago
Definitely recommend reading PEP 8 to anyone, period. It forces you to write smaller helper methods and make your code more concise, similar in vein to Clean Code principles.

There are all sorts of write-ups on writing performant Python; it just depends on what you are working on. Chances are high you are writing algos that have already been optimized and prewritten for you.
sgillen | about 1 year ago
I like to joke that to make Python fast you need to avoid using Python. A ton of Python stuff is C/C++ or Fortran wrapped in Python; numpy, pandas, and pytorch are what I have experience with, but this applies in all domains. A huge part of learning to write fast Python is to first learn these libraries, and second learn how data is shepherded back and forth. Your experience with C/C++ gives you a good background to understand this!

That being said, it's totally possible to write pure Python that is much faster than other ways of doing pure Python. Learn how it's all implemented if you are interested, and profile like crazy! Good luck!
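"Profile like crazy" can start with nothing more than the stdlib timeit module — for instance, timing a pure-Python accumulation loop against the builtin sum, which does its iteration in C:

```python
import timeit

def manual_sum(n):
    # pure-Python loop: every iteration goes through the interpreter
    total = 0
    for i in range(n):
        total += i
    return total

def builtin_sum(n):
    # sum() iterates in C, so per-element interpreter overhead disappears
    return sum(range(n))

# timeit runs each callable repeatedly and returns total elapsed seconds
t_manual = timeit.timeit(lambda: manual_sum(10_000), number=200)
t_builtin = timeit.timeit(lambda: builtin_sum(10_000), number=200)
```

On a typical CPython build the builtin version is severalfold faster for the same result; the exact ratio varies by interpreter version, which is exactly why measuring beats guessing.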
mettamage | about 1 year ago
Take my reaction with a grain of salt. I'm very much from a "right tool for the right job" type of school, and normally I'm not this emotional, but somehow this hit a nerve. I find it funny it hit a nerve. I'm happy you posted! I've learned something about myself.

> I want to gain a similar understanding of what is "fast" in C++ (std::move, copy elision, etc.), but with Python.

But why why why why?!

The reason C++ is fast is because it gives you control! The reason Python is usable is because it is very opinionated about how it gives you control.

People say "do not fight the framework." This also applies to programming languages.

Next thing you know, people will want best practices for modular OOP design in Brainfuck. Now I wonder if Brainfuck has something like JSDoc.
anonymoushn | about 1 year ago
There isn't really anything like this. Writing fast Python mostly means avoiding obvious blunders, like copying huge things unnecessarily, and calling out to C a lot. The latter isn't useful for interviews, though.
ReflectedImage | about 1 year ago
Going through the official Python tutorial is a must:

https://docs.python.org/3/tutorial/

Use multiprocessing, not multithreading, due to the GIL.

Python optimization is generally at a higher level: you use a better algorithm to get a result faster, rather than saving an unnecessary copy.

Know when to use Python async.

Python is slow, but you can always move work over into the database or a dedicated library like NumPy.

Use the faster Python interpreter PyPy, which does JIT.

Do a multiprocessing pipeline with your CPU-bound work broken down into stages.
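A minimal sketch of the "multiprocessing, not multithreading" advice for CPU-bound work — each task runs in its own process with its own interpreter and GIL (the worker function here is a stand-in for real work):

```python
from multiprocessing import Pool

def cpu_bound(n):
    # stand-in for real CPU-bound work; because it runs in a separate
    # process, it is not serialized by the parent interpreter's GIL
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # the __main__ guard is required on platforms that spawn rather
    # than fork worker processes
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [10, 100, 1000])
```

Threads would only help here for I/O-bound work, where the GIL is released while waiting.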
adr1an | about 1 year ago
This depends a lot on the use case. For example, data scientists prefer libraries like Dask and Polars for faster analyses. What you are asking is, in a way, looking under the hood of such libraries. Oftentimes you will find that under the hood there's C or Rust code. Anyway, check these articles: https://pythonspeed.com/
dragonwriter | about 1 year ago
There's probably not a close analogy for Python. Aside from basic algorithmic things that transfer directly, there are some optimization gotchas in Python, but if you have a real concern for optimization, it's about knowing the ecosystem and when to use native extensions (either existing ones or writing your own), not mostly about writing faster code in Python.
CoastalCoder | about 1 year ago
I've done a lot of interviewing lately, and I've encountered two kinds of efficiency-related questions:

(1) By *far* the more common: giving a big-O analysis of some algorithm's running-time complexity.

(2) For performance-optimization roles: questions about using cache effectively, or take-home programming assignments in C++ or CUDA.
colund | about 1 year ago
There&#x27;s also a fun option to use PyO3 to easily generate a native Python module from Rust.
jdeaton | about 1 year ago
In Python, fast means calling libraries that are implemented in C/C++, for example NumPy.
cozzyd | about 1 year ago
The key to efficient Python is to have your code use as little Python as possible in areas where performance matters. Write Python like you write bash: as a way of connecting pieces written in more efficient languages.
sk11001 | about 1 year ago
> How can I improve my ability to write data structure/algo code in python?

You don't need a book on Python; you need to practice solving data structures and algorithms problems.

I completely disagree with the recommendation to read Fluent Python; that book has nothing to do with your goals.

> understanding of what is "fast"

Do you mean in terms of performance? Depending on the problem, the fastest approach is usually to use libraries instead of native Python code. But that's not really relevant to DS&A interviews.
rglullis | about 1 year ago
Learn how to call a C/C++/Rust library from Python code. That's the way to make "fast" Python. Anything other than that is lipstick on a pig.
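The lowest-friction way to try this is the stdlib ctypes module, which can call into an existing C shared library with no build step. A hedged sketch (library-name resolution is platform-dependent; this assumes the C math library, or a C runtime exporting sqrt, can be found):

```python
import ctypes
import ctypes.util

# locate and load the C math library; the name varies by platform,
# so fall back to the C runtime, which also exports sqrt on many systems
libm_path = ctypes.util.find_library("m") or ctypes.util.find_library("c")
libm = ctypes.CDLL(libm_path)

# declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

root = libm.sqrt(9.0)
```

For wrapping your own code, cffi, pybind11 (C++), and PyO3 (Rust) are the heavier-duty versions of the same idea.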
Y_Y | about 1 year ago
They say that when Mahatma Gandhi was asked what he thought of efficient Python, he replied that he thought it might be a good idea.
gloryjulio | about 1 year ago
Just use Python classes like C++ OOP classes, without thinking about the speed. Done.
zbentley | about 1 year ago
Much of the other advice here is spot on, and is definitely the first place you should look.

That being said, if you really are constrained by pure-Python speed for ordinary tasks (and your first resorts of native code/multiple processes/parallelized IO aren't available), there is a large array of (often horrifying) dirty tricks you can use to eke out a few tens of percent of speed improvement.

Here are some random examples that come to mind, roughly sorted from "somewhat advanced but useful things to know or do" to "disgusting; why are you even using Python?":

- Be familiar with BytesIO, memoryview, and the buffer protocol. Using these can dramatically improve memory efficiency (and even bring back a *little* bit of cache-locality benefit in Python's internal pointer hell) and reduce copies. If you're coming from C++, abandon all hope of ever getting to *zero* copies, but careful use of BytesIO can bring the number way down, and unlike other hacks on this list it doesn't damage the intelligibility of your code that much.

- Be deeply suspicious of others' broad statements about the GIL. These are often wrong in both directions: many things that you'd assume are not GIL-bottlenecked (independent calls into some native libraries) end up running in sequence due to the GIL; on the other hand, many things *can* be truly parallelized using native Python threads--even some non-I/O tasks (some numpy operations, some cryptography/compression libraries). Benchmark early and often.

- Use tuples wherever possible instead of lists (but if you find yourself casting back and forth, just use lists). This only occasionally brings performance benefits (e.g. via small-tuple reuse), but it's a good practice anyway: don't add unnecessary mutability.

- Not all functions are created equal. Functions with a small number of positionals and no kwargs are marginally faster to call than functions with kwargs/variadics/more complicated signatures.

- When using functional list-processing functions (e.g. sort, map), the functions in the operator module are much faster than lambdas; use them if you can.

- Keep the cost of function calls in mind when writing or using decorator-heavy code. Each decorator is usually an added function call, and often the more expensive (varargs/complex signature) kind to boot.

- functools.partial can be slightly faster than wrapper functions if your arguments are uniform.

- Relatedly, if you are using decorators for non-intercepting purposes (like registering functions/classes by decorating them), make sure your decorators return the passed-in function directly rather than a wrapper. That reduces their runtime cost to zero.

- This isn't really algorithmic, but: if you're suffering from the startup-time or CPU hit of lots of invocations of small/fast standalone scripts, turn off bytecode caching. While the act of compiling bytecode is nearly free speed-wise, the I/O hit of writing the bytecode back to the filesystem can be surprisingly high. Bytecode caching was such a mistake.

- When using multiprocessing, share data via fork(2) wherever possible. This makes it zero-cost to access largely read-only data in your parallel processes. I talked at length about this here and in adjacent comments: https://news.ycombinator.com/item?id=36941892

- Don't be afraid to drop back to bytes for hot-loop string manipulation (unless, of course, you need non-ASCII characters). Some operations can be very slightly faster on bytes, but don't assume strings are always slow. Also, just like tuples/lists, lots of code implicitly converts supplied bytes to strings internally anyway, so if you're passing them to a library make sure you know what it's doing.

- Cache dot lookups for things (even stdlib module accesses/methods) in variables next to your hot loops. This makes code pretty ugly and is at the top of my list of things that I hope interpreter optimizations/JIT can more reliably help with over the long term. There's already a bit of optimization done in this area, so it may not turn out to help as much as you think it will.

- You can live-patch classes to amortize the overhead of __getattr[ibute]__ and property descriptors by binding new methods/fields at runtime, saving a bunch of dictionary hits. This isn't a panacea, since it does require you to trade away slotted speedups in some cases, and MRO cache invalidation can cause it to hurt more than it helps. As always, benchmark.

- Relatedly, the presence of __getattr__/__setattr__ *anywhere* in the MRO of a class is a bit of an optimization fence for speeding up method calls. The situations where this hurts performance have changed a lot between interpreter versions, but if you're using OO code in hot loops, removing those dunder methods from your class hierarchy is a good next step to try after caching away self-dot lookups.

- Don't access global variables in your hot loop; function-local variable lookups are a tiny bit faster (though this is an area where optimizations may moot this advice in the future). Remember that instance variables ("self.foo") are slower than both, because of the dictionary lookup in the dot.

- If using multiple Python threads (even if most of them are backgrounded/waiting on IO, e.g. Sentry or database drivers), you can override the interpreter switch/check intervals in your hot loops. I've seen this work more than once, but very rarely.

- If for some strange reason you have lots of small, fast IOs in your hot loop, you can locally change interpreter buffering behavior (or drop to lower-level os.[read|write] calls and manage your own buffering) for a marginal speedup.

- In some very *very* rare cases, typing.Generic can actually add runtime overhead; benchmark with and without it.

- An easy win for small-script startup times is to remove locations from the module search path. If you strace(2) your program's compile pass (replace main with a sleep and strace until that), you'll often see it statting handfuls of (missing) locations per import before it finds the module. This only saves a little bit of time, since filesystems tend to be good at metadata caching.

- Seriously, function calls are *expensive*. If you can't inline them, the awful generator hack can save you a few % of function-call overhead: turn your function body into the inner loop of an infinite generator, create the generator outside of your hot loop, and cache gen.send/gen.next in variables to "call" the function by sending values into the generator (fun fact: gen.next is faster than next(gen)). But seriously, if you find yourself in a situation where this makes a difference, go for a smoke and rethink your life choices.
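Two of the cheaper tricks from the list above (operator-module functions instead of lambdas, and caching a dot lookup next to a hot loop) look like this in practice:

```python
from operator import itemgetter

pairs = [("b", 2), ("a", 1), ("c", 3)]

# operator.itemgetter(1) is implemented in C, so sorted() avoids a
# Python-level lambda call per element when extracting the sort key
by_value = sorted(pairs, key=itemgetter(1))

# caching an attribute lookup outside the hot loop: `out.append` would
# otherwise be re-resolved through the instance dict on every iteration
out = []
append = out.append
for _, v in by_value:
    append(v * 10)
```

Both are micro-optimizations in the comment's sense: measurable only in genuinely hot loops, and worth benchmarking before keeping.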
PythonDeveloper | about 1 year ago
A couple of useful links:

https://intermediate-and-advanced-software-carpentry.readthedocs.io/en/latest/idiomatic-python.html

https://peps.python.org/pep-0020/