
Embarrassingly Parallel Time Series Analysis for Large Scale Weak Memory Systems

25 points by zeit_geist over 9 years ago

2 comments

pmontra over 9 years ago
I skimmed through the paper. It's packed with formulas and technical details. I can't judge if it's a relevant contribution to the subject but... the title. Do they need that to be noticed? Just imagine "Embarrassingly Fast Electromagnetic Waves but not any Faster" instead of "On the Electrodynamics of Moving Bodies" (Zur Elektrodynamik bewegter Körper) http://.ca/wiki/Zur_Elektrodynamik_bewegter_Körper
SFjulie1 over 9 years ago
Isn't this lengthy thesis just saying that doing a map-reduce on consecutive data that is hot and localized in memory works all the better when we use commutative/distributive operations? Well, that's trivial.

Well, I would have preferred a first-year student demonstrating the trivia that operations like sorting a multidimensional vector by their nature leave less room for optimization on CPUs and GPUs, and that non-linear operations require reading the whole sample to get a non-random, quantifiable error.

Hence, map-reduce performance is good for combinations of distributive linear operations (like ARMA, and I still wonder about Hilbertian geometry) and horrible for non-linear mapped functions (median filter, nth percentile, sort, top X).
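A minimal sketch in plain Python (my illustration, not from the paper or the thread; the toy chunks are made up) of the distinction this comment draws: a commutative/associative reduction like the mean merges exactly from per-chunk partial results, so it is embarrassingly parallel, while an order statistic like the median cannot be recovered from per-chunk medians.

    import statistics

    # Two chunks of a larger sample, as if each were reduced on its own worker.
    chunk_a = [1, 1, 1, 1]
    chunk_b = [1, 9, 9, 9]

    # Mean: (sum, count) pairs merge exactly, so the reduction parallelizes.
    partials = [(sum(c), len(c)) for c in (chunk_a, chunk_b)]
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    assert total / count == statistics.mean(chunk_a + chunk_b)  # 4.0 either way

    # Median: merging per-chunk medians is wrong in general; the exact answer
    # needs a global view of the whole sample.
    print(statistics.median([statistics.median(chunk_a),
                             statistics.median(chunk_b)]))  # 5.0
    print(statistics.median(chunk_a + chunk_b))             # 1.0

An exact distributed median needs a selection pass or a quantile sketch over all the data, which is the whole-sample cost the comment points at for non-linear operations.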