
Tech Echo

A tech news platform built with Next.js, serving global tech news and discussions.


© 2025 Tech Echo. All rights reserved.

Embarrassingly Parallel Time Series Analysis for Large Scale Weak Memory Systems

25 points, by zeit_geist, over 9 years ago

2 comments

pmontra, over 9 years ago
I skimmed through the paper. It's packed with formulas and technical details. I can't judge whether it's a relevant contribution to the subject, but... the title. Do they need that to get noticed? Just imagine "Embarrassingly Fast Electromagnetic Waves but not any Faster" instead of "On the Electrodynamics of Moving Bodies" (Zur Elektrodynamik bewegter Körper): http://.ca/wiki/Zur_Elektrodynamik_bewegter_Körper
SFjulie1, over 9 years ago
Isn't this lengthy thesis just saying that a map-reduce over consecutive data that is hot and localized in memory works all the better when we use commutative/distributive operations? Well, that is trivial.

I would have preferred a first-year student demonstrating the trivia: operations like sorting a multidimensional vector inherently leave less room for optimization on CPUs and GPUs, and non-linear operations require reading the whole sample to achieve a non-random, quantifiable error.

Hence, map-reduce performance is good for combinations of distributive linear operations (like ARMA, and I still wonder about Hilbertian geometry) and horrible for non-linear mapped functions (median filter, nth percentile, sort, top X).
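The distinction this comment draws can be sketched in a few lines of Python (an illustration of the general point, not code from the paper): partial sums computed on independent chunks combine into the exact global answer because addition is associative and commutative, whereas per-chunk medians do not combine into the true median, so the whole sample must be seen.

```python
from functools import reduce
import statistics

data = [1, 2, 3, 4, 100, 5, 6, 7, 8, 9]
chunks = [data[:5], data[5:]]  # two "workers", each seeing half the data

# Sum is associative and commutative: per-chunk partial results
# combine into the exact global answer, so it parallelizes trivially.
partial_sums = [sum(c) for c in chunks]
total = reduce(lambda a, b: a + b, partial_sums)
assert total == sum(data)

# Median is a non-linear, order-sensitive statistic: combining
# per-chunk medians gives the wrong answer in general.
chunk_medians = [statistics.median(c) for c in chunks]  # medians are 3 and 7
print(statistics.median(chunk_medians))  # 5.0 -- median of chunk medians
print(statistics.median(data))           # 5.5 -- the true median
```

The same asymmetry shows up for nth percentile, sort, and top-X: their combine step needs far more than a constant-size summary per chunk.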