TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Exponential to Linear Complexity using Memoization

4 points · by amjith · over 13 years ago

1 comment

dasht · over 13 years ago

There is a terminology problem with that article. People do get casual (or sloppy) with these terms, but it's worth keeping them distinct.

The note describes memoization as using a "cache". That's a problem because memoization and caching are both useful, but they are different things.

A memo (which it appears is implemented by the Python library you mention) stores the results of earlier computations for re-use -- just as you've described.

A cache does the same thing *except* that if a cache becomes over-full, some earlier cached results may be recomputed (because they were dropped from the cache).

There are two critical differences:

Minimum space requirement for a memo: O(N x S), where N is the number of unique computations to be performed and S is the average size of parameters and results.

Minimum space requirement for a cache: arbitrary, although performance varies with size.

Number of repeated computations with a memo: 0.

Number of repeated computations with a cache: at least N, where N is the total number of *unique* computations, and as many as T, where T is the total number of computations including duplicates.

A cache is useful, in contrast to a memo, when N x S is uncomfortably large but "most" of the repeated computations belong to a set of size M where M x S' << N x S, with S' the average size of parameters and results in the M-set.

A memo is useful where N x S is conveniently small, or where it is unacceptable to repeat any computation for some reason.
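The memo-vs-cache distinction above can be sketched in a few lines of Python using `functools.lru_cache`, which behaves as a memo when `maxsize=None` (nothing is ever evicted) and as a bounded cache otherwise. This is an illustrative sketch, not the library the article discusses; the `square` functions and call counts are made up for the demonstration.

```python
from functools import lru_cache

calls = []  # records every real (non-cached) computation

@lru_cache(maxsize=None)   # a memo: unbounded, never evicts
def memo_square(n):
    calls.append(("memo", n))
    return n * n

@lru_cache(maxsize=3)      # a cache: bounded, may evict and recompute
def cache_square(n):
    calls.append(("cache", n))
    return n * n

# Two passes over 6 unique arguments. The working set (6) exceeds
# the cache capacity (3), so LRU evicts every entry between passes.
for _ in range(2):
    for n in range(6):
        memo_square(n)
        cache_square(n)

memo_calls = sum(1 for tag, _ in calls if tag == "memo")
cache_calls = sum(1 for tag, _ in calls if tag == "cache")
print(memo_calls)   # 6  -- each unique argument computed exactly once
print(cache_calls)  # 12 -- every second-pass lookup missed and recomputed
```

With a working set that fits inside `maxsize`, the cache's call count would match the memo's; the gap only appears once eviction kicks in, which is exactly the trade-off described above.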