
Anatomy of a Reducer

75 points by fogus about 13 years ago

2 comments

drcode · about 13 years ago
Someone should really write a blog post on what this all means for a novice. My impression is (and I might be wrong, since I haven't had the time to study the new reducer library yet) that the point of this library is the following:

Many algorithms are tree-like in nature. Suppose, for instance, you have a path-finding algorithm that looks at a map to find the best path for something. In that case, you're going to need to search different path options iteratively to find the best path, leading to a nested algorithm.

Now, this type of algorithm is tricky to parallelize: sometimes the root of your "search tree" might be small (say, 3 items); other times it might be large (1000 items). If you know your computer has 8 cores, it's very hard to mete out an equal amount of work to each core: sometimes several cores need to split the work of a single branch, while other times a single core might need to handle multiple branches. Also, it may be hard to know ahead of time how much work each branch will require, making the parcelling out of work even more difficult.

However, the work for each branch can be expressed as a collection of smaller "jobs", with a function applied to each. And each item in this collection would then also be a collection with functions applied to it (representing each sub-branch). If you express a nested algorithm in this way, it can be viewed as a nested collection with differing arbitrary functions mapped, filtered, etc., across different parts of it.

If you use this new reducer library to implement this type of nested operation, Clojure can automagically use the Java Fork/Join library to parcel out equal amounts of work to each core. In effect, you can think of the reducer library as letting you "flatten" these nested collections (without the memory & CPU cost this would entail) to figure out how to split the work out evenly and in the most efficient manner possible, hiding all the ugliness required for this from the programmer.

Please, if you know this library better than I, let me know where I'm going wrong.
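The divide-and-combine pattern the comment describes can be sketched directly against the Java Fork/Join library it mentions. This is a hypothetical illustration, not the reducer library's actual internals: a `RecursiveTask` splits a range until it is small enough to process sequentially, and the framework's work-stealing scheduler balances uneven branches across cores.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative sketch: a tree-shaped reduce (sum of squares) over a range.
// Internal nodes fork; leaves do sequential work; work-stealing evens out
// the load even when branches are unevenly sized.
public class ReduceDemo {
    static class SumSquares extends RecursiveTask<Long> {
        static final long THRESHOLD = 1_000;   // leaf size, chosen arbitrarily
        final long lo, hi;                     // half-open range [lo, hi)

        SumSquares(long lo, long hi) { this.lo = lo; this.hi = hi; }

        @Override
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {        // leaf: reduce sequentially
                long acc = 0;
                for (long i = lo; i < hi; i++) acc += i * i;
                return acc;
            }
            long mid = (lo + hi) / 2;          // internal node: split in two
            SumSquares left = new SumSquares(lo, mid);
            SumSquares right = new SumSquares(mid, hi);
            left.fork();                       // schedule left half asynchronously
            return right.compute() + left.join();  // combine the two halves
        }
    }

    public static void main(String[] args) {
        long n = 10_000;
        long parallel = ForkJoinPool.commonPool().invoke(new SumSquares(0, n));
        long serial = 0;
        for (long i = 0; i < n; i++) serial += i * i;
        System.out.println(parallel == serial);  // true
    }
}
```

The combining function here (`+`) is associative, which is what lets the tree be split at arbitrary points and recombined in any grouping.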
pron · about 13 years ago
If I understand correctly, does this mean that a map operation (one that is not reduced further, i.e. we want a collection as the output) could be carried out in parallel efficiently using reducers only if there is an efficient (O(1)) way to concatenate the collections produced at each combining step (in the "internal nodes" of the fork/join computation tree)?

This would constrain how collections are implemented to make good use of this technique.
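The concern above can be made concrete with a hypothetical fork/join "map" that materializes its output. In this sketch the combine step uses `ArrayList.addAll`, which copies and is therefore O(n) per internal node; a collection supporting O(1) concatenation (such as a tree or rope structure) would remove that overhead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative sketch: a parallel map (i -> 2 * i) whose internal nodes must
// concatenate sub-results. With ArrayList, addAll is O(n), so combining can
// dominate the cost of the map itself on large inputs.
public class MapDemo {
    static class MapTask extends RecursiveTask<List<Integer>> {
        final int lo, hi;                          // half-open range [lo, hi)
        MapTask(int lo, int hi) { this.lo = lo; this.hi = hi; }

        @Override
        protected List<Integer> compute() {
            if (hi - lo <= 4) {                    // leaf: apply the function
                List<Integer> out = new ArrayList<>();
                for (int i = lo; i < hi; i++) out.add(2 * i);
                return out;
            }
            int mid = (lo + hi) / 2;
            MapTask left = new MapTask(lo, mid);
            MapTask right = new MapTask(mid, hi);
            left.fork();
            List<Integer> r = right.compute();
            List<Integer> l = left.join();
            l.addAll(r);                           // O(n) concatenation: the bottleneck
            return l;
        }
    }

    public static void main(String[] args) {
        List<Integer> result = ForkJoinPool.commonPool().invoke(new MapTask(0, 16));
        System.out.println(result.size());  // 16
    }
}
```

Order is preserved because each internal node appends the right half after the left, so the output matches a sequential map.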