
A beginner's guide to Big O notation (2009)

312 points by g4k about 9 years ago

13 comments

krat0sprakhar about 9 years ago
On the topic of efficient algorithms, I recently read a nice note in an Algorithms textbook:

> It would appear that Moore's law provides a disincentive for developing polynomial algorithms. After all, if an algorithm is exponential, why not wait it out until Moore's law makes it feasible? But in reality the exact opposite happens: Moore's law is a huge incentive for developing efficient algorithms, because such algorithms are needed in order to take advantage of the exponential increase in computer speed.

> Here is why. If, for example, an O(2^n) algorithm for Boolean satisfiability (SAT) were given an hour to run, it would have solved instances with 25 variables back in 1975, 31 variables on the faster computers available in 1985, 38 variables in 1995, and about 45 variables with today's machines. Quite a bit of progress—except that each extra variable requires a year and a half's wait, while the appetite of applications (many of which are, ironically, related to computer design) grows much faster. In contrast, the size of the instances solved by an O(n) or O(n log n) algorithm would be multiplied by a factor of about 100 each decade. In the case of an O(n^2) algorithm, the instance size solvable in a fixed time would be multiplied by about 10 each decade. Even an O(n^6) algorithm, polynomial yet unappetizing, would more than double the size of the instances solved each decade. When it comes to the growth of the size of problems we can attack with an algorithm, we have a reversal: exponential algorithms make polynomially slow progress, while polynomial algorithms advance exponentially fast! For Moore's law to be reflected in the world we need efficient algorithms.
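A back-of-the-envelope check of the quoted arithmetic (a sketch, assuming compute doubles every 18 months, i.e. roughly 100x per decade):

```python
import math

# Compute budget growth over one decade, doubling every 1.5 years.
growth = 2 ** (10 / 1.5)  # ~101x

# If cost(n) fits in an hour today, how much larger an instance
# fits in the same hour after a decade of Moore's law?
print(f"O(2^n):  n -> n + {math.log2(growth):.1f}   (additive: ~7 more variables)")
print(f"O(n):    n -> n * {growth:.0f}    (~100x larger instances)")
print(f"O(n^2):  n -> n * {growth ** (1/2):.1f}   (~10x)")
print(f"O(n^6):  n -> n * {growth ** (1/6):.1f}    (more than doubles)")
```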
mattlutze about 9 years ago
Looks like someone finally got it to the front page:

https://news.ycombinator.com/from?site=rob-bell.net
Lxr about 9 years ago
> O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set

Nitpick, but this is technically wrong. There needs to be a "no faster than" somewhere in there. Merge sort is, for example, included in O(2^n) by the actual definition of big O (vs big theta, which is defined as a tight bound). So there exist algorithms that are O(N) that don't grow linearly.

f(x) is O(g(x)) just means there is some constant K such that f(x) ≤ Kg(x) for all x beyond some value. This has nothing to do with 'worst case performance'.
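For reference, the definition the comment paraphrases, written out formally (a standard textbook statement, not text from the thread):

```latex
f(x) \in O(g(x)) \iff \exists\, K > 0 \ \exists\, x_0 \ \forall x \ge x_0 : \; |f(x)| \le K \, g(x)
```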
njharman about 9 years ago
I like this page: http://bigocheatsheet.com/

Especially the graph, which hammers home how quickly things go wrong.
nayuki about 9 years ago
The article is mostly good, but I have one nitpick.

> An example of an O(2^N) function is the recursive calculation of Fibonacci numbers

No, the naive recursive evaluation of the Fibonacci sequence has complexity O(1.618^N) (or just O(Fib(N))). It is *unequal* to O(2^N) because the base of the power is different.
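A quick empirical check of that growth rate (a minimal sketch; the call-counting helper is my illustration, not from the comment):

```python
# Count the calls made by naive recursive Fibonacci and compare the
# growth to phi^n (phi ~= 1.618) and to 2^n.

def fib_calls(n):
    """Return (fib(n), total number of calls made)."""
    if n < 2:
        return n, 1
    a, ca = fib_calls(n - 1)
    b, cb = fib_calls(n - 2)
    return a + b, ca + cb + 1

PHI = (1 + 5 ** 0.5) / 2

for n in range(10, 26, 5):
    _, calls = fib_calls(n)
    print(f"n={n:2d}  calls={calls:7d}  "
          f"calls/phi^n={calls / PHI ** n:.3f}  calls/2^n={calls / 2 ** n:.6f}")

# calls/phi^n settles near a constant, while calls/2^n shrinks toward
# zero: the recursion tree grows like 1.618^n, not 2^n.
```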
btilly about 9 years ago
Common mistake. When people say O(2^n) they USUALLY mean something closer to 2^O(n).

The reason is that f(n) = O(g(n)) means that for some constant k and integer N, if N < n then |f(n)| < kg(n). In other words "grows no faster than a constant times". However when you've got exponential growth the slightest of things can cause the exponent to be slightly different, or there to be a small polynomial factor.

That was the case in his example. (The exponent was different.)
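A standard example of the gap between the two classes (my illustration, not from the comment):

```latex
4^n = 2^{2n} \in 2^{O(n)}, \quad\text{yet}\quad 4^n \notin O(2^n),
\qquad\text{since}\quad \frac{4^n}{2^n} = 2^n \to \infty .
```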
minikites about 9 years ago
The best layman's example of O(log N) I've heard is finding a name in the phone book. You open it around the middle, select which half gets you closer, and repeat. It's then easy to "get" why it's not a big deal if it's a huge phone book versus a tiny phone book.
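The same halving strategy as code (a minimal binary-search sketch; the step counter is added just to make the O(log N) behavior visible):

```python
def find_name(book, name):
    """Binary search a sorted list of names, counting halving steps."""
    lo, hi = 0, len(book)
    steps = 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if book[mid] < name:
            lo = mid + 1   # name must be in the right half
        else:
            hi = mid       # name is in the left half (or at mid)
    found = lo < len(book) and book[lo] == name
    return found, steps

# A "phone book" 1000x bigger needs only ~10 more steps.
small = [f"name{i:07d}" for i in range(1_000)]
huge = [f"name{i:07d}" for i in range(1_000_000)]
print(find_name(small, "name0000042"))  # (True, ~10 steps)
print(find_name(huge, "name0000042"))   # (True, ~20 steps)
```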
nathan_long about 9 years ago
Nice! Very succinct and clear. I wrote an intro myself, as someone who'd recently learned the concept. Wordier, but might be helpful if you think like I do: http://nathanmlong.com/2015/03/understanding-big-o-notation/
gameguy43 about 9 years ago
Super interesting comparing this to the one we have on Interview Cake: https://www.interviewcake.com/article/big-o-notation-time-and-space-complexity

I like how Rob Bell's piece has headings with the most common time costs--sorta makes it easier to see at a glance what you're going to get, and I imagine gives folks a quick sense of, "Right, I've seen that! Been /wondering/ what that means."
curiousDog about 9 years ago
I think the nicest way to learn this if you don't have a formal CS education is to still pick up a Discrete Math textbook (like Chapter 3 in Rosen) and then read chapters 3 and 4 in CLRS.
emodendroket about 9 years ago
It would be sensible to put the terms like "constant" and "logarithmic" in there, IMO.
kevindeasis about 9 years ago
Is there a beginner's guide to proving different time complexities?
vvanders about 9 years ago
> O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.

Argh, I hate this every time I see Big O notation covered.

Big O != performance. If you have an O(N) algorithm that walks the data set in the order that it's laid out in memory (assuming contiguous data) it *will* beat your O(NlogN) and sometimes even O(logN).

[edit] meant to omit nlogn, that's what I get for an early morning rant pre-coffee.

Radix Sort is the classic example I always bring up. On machine word size keys, with a separate pointer look-up table (to get final value) it will beat QSort, MergeSort and all the other NlogN sorts by 10-50x. This includes having to walk the data 3-4 times depending on how you want to split the radix to line up with cache sizes.

Friends don't let Friends use Big O to describe absolute performance.
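For reference, a minimal LSD radix sort sketch (my illustration in Python, which won't show the cache-locality speedups the comment describes, since those need a systems language and contiguous memory, but it does show the linear, comparison-free structure):

```python
def radix_sort(keys, key_bits=32, radix_bits=8):
    """LSD radix sort for non-negative integers: key_bits/radix_bits
    linear passes over the data, no comparisons."""
    radix = 1 << radix_bits
    mask = radix - 1
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(radix)]
        for k in keys:                          # one linear walk per pass
            buckets[(k >> shift) & mask].append(k)
        keys = [k for b in buckets for k in b]  # stable: bucket order kept
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

With 8-bit digits on 32-bit keys this is exactly four passes over the data, matching the "walk the data 3-4 times" figure in the comment.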