Can an O(n!) algorithm be faster than an O(1) algorithm? Yes, of course it can.

3 points by ivoras almost 11 years ago

2 comments
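
(The linked article isn't reproduced in this thread, but the title's claim rests on the standard observation that big-O notation hides constant factors: for small n, an asymptotically worse algorithm with cheap steps can beat an asymptotically better one with a large fixed cost. A toy sketch with made-up costs, purely illustrative:)

    import math

    # Hypothetical per-operation costs, chosen only to illustrate the
    # constant-factor argument; they are not from the linked article.
    SLOW_CONSTANT = 1_000_000   # an "O(1)" algorithm with a huge fixed cost
    FAST_STEP = 1               # an O(n!) algorithm with cheap steps

    def cost_o1(n):
        return SLOW_CONSTANT

    def cost_factorial(n):
        return FAST_STEP * math.factorial(n)

    for n in range(1, 11):
        winner = "O(n!)" if cost_factorial(n) < cost_o1(n) else "O(1)"
        print(n, winner)

(Under these made-up costs the O(n!) side wins for every n up to 9, since 9! = 362880 < 1,000,000; at n = 10 the O(1) side takes over.)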

TheLoneWolfling almost 11 years ago
There's a better method.

Create a multihashmap of <sorted word -> list of words> for each word in your input dictionary. This is O(n), but only has to be done once.

Then just sort your input word and look it up in the multihashmap. O(1) lookup time (well, to be pedantic, O(|word|*log(|word|)), or just O(|word|) if you use a weird sort).

So, roughly:

    from collections import defaultdict

    def makeWordMap():
        mm = defaultdict(set)
        for line in open('words.txt').readlines():
            word = line.strip().lower()
            mm[tuple(sorted(word))].add(word)
        return mm

    def anagrams(w, wordMap=makeWordMap()):
        return list(wordMap[tuple(sorted(w.lower()))])

(Please ignore both my (ab)use of default arguments and the dance with lists being unhashable types.)

This method ends up being ~17x faster than the ana5 given, on my machine, for the input "spare", even when using the SIL wordlist (109582 words) and changing ana5 to precompute the word list.

(On a side note, the largest collection of anagramming words in the SIL wordlist is ['spear', 'reaps', 'apres', 'asper', 'parse', 'apers', 'rapes', 'pares', 'pears', 'spare'], with length 10.)
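
(The thread doesn't reproduce ana5; the reply below suggests it is Counter-based. For contrast with the timing claim above, here is a hypothetical per-query scan in that style; anagrams_scan and its shape are invented for illustration, not taken from the article:)

    from collections import Counter

    # Hypothetical reconstruction of a Counter-style per-query scan
    # (the actual ana5 is in the linked article and is not shown here).
    def anagrams_scan(w, words=None):
        if words is None:
            words = [line.strip().lower() for line in open('words.txt')]
        target = Counter(w.lower())
        # Rebuilding a Counter for every dictionary word on each query
        # costs O(n * |word|), versus a single hash probe after the
        # one-time O(n) preprocessing in makeWordMap above.
        return [word for word in words if Counter(word) == target]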
cauterized almost 11 years ago
It's naive to consider your second algorithm O(1) - you're just ignoring the complexity abstracted by Counter.
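
(A concrete way to see this objection, as a sketch rather than anything from the thread: both building a Counter and hashing the sorted-tuple key walk the whole word, so the per-query cost is at least linear in |word|, never a true O(1).)

    from collections import Counter

    word = "spare"

    # Counter(word) iterates every character: O(|word|), not O(1).
    counts = Counter(word)      # Counter({'s': 1, 'p': 1, 'a': 1, 'r': 1, 'e': 1})

    # The hashmap route hides the same linear work: sorting the word is
    # O(|word| log |word|) and hashing the key touches every element.
    key = tuple(sorted(word))   # ('a', 'e', 'p', 'r', 's')
    bucket = hash(key)          # hashing walks the whole tuple: O(|word|)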