
Ask HN: What'd be possible with 1000x faster CPUs?

54 points by xept, over 2 years ago
Imagine an unlikely scientific breakthrough makes general-purpose CPUs many orders of magnitude faster, and they become widely available, probably alongside petabyte-scale RAM modules and an appropriately fast memory bus. Besides making bloatware possible on a previously unimaginable scale, what other interesting, maybe revolutionary applications, impossible or at least impractical today, would crop up?

39 comments

yourcousinbilly over 2 years ago

Video engineer here. Many seemingly network-restricted tasks could be unlocked by faster CPUs doing advanced compression and decompression.

1. Video calls

In video calls, encoding and decoding is actually a significant cost, not just networking. Right now the peak is Zoom's 30 video streams onscreen, but with 1000x CPUs you could have hundreds of high-quality streams with advanced face detection and superscaling [1]. Advanced computer vision models could analyze each face, build a face mesh of vectors, and then send those vector changes across the wire instead of a video frame. The receiving computer would reconstruct the face for each frame. This could turn video calling into a completely CPU-restricted task.

2. Incredibly realistic and vast virtual worlds

Imagine the most advanced movie-realistic CGI being generated for each frame: something like the new Lion King, or Avatar-like worlds being created before you through your VR headset. With extremely advanced eye tracking and graphics, VR would hit that next level of realism. AR and VR use cases could explode with incredibly light headsets.

To be imaginative, you could have everything from huge concerts to regular meetings take place in the real world but be scanned and sent to VR participants in real time. The entire space, including the room and whiteboard or live audience, could be rendered in real time for all VR participants.

[1] https://developer.nvidia.com/maxine-getting-started
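A minimal sketch of that vectors-over-the-wire idea in Python, with fake landmark data standing in for a real face tracker (something like a ~468-point face mesh); the point is how small the per-frame payload becomes:

```python
import json
import zlib

def encode_mesh_delta(prev_mesh, curr_mesh, threshold=1e-3):
    """Send only the landmarks that moved since the last frame."""
    delta = {
        i: curr
        for i, (prev, curr) in enumerate(zip(prev_mesh, curr_mesh))
        if max(abs(p - c) for p, c in zip(prev, curr)) > threshold
    }
    # A few hundred floats compress to a tiny fraction of a video frame.
    return zlib.compress(json.dumps(delta).encode())

def apply_mesh_delta(prev_mesh, payload):
    """Receiver side: rebuild the current mesh, then re-render the face."""
    delta = json.loads(zlib.decompress(payload))
    mesh = list(prev_mesh)
    for i, point in delta.items():
        mesh[int(i)] = tuple(point)
    return mesh

# Two fake 3-landmark meshes; a real tracker would supply ~468 3D points.
a = [(0.1, 0.2, 0.0), (0.4, 0.5, 0.0), (0.7, 0.8, 0.0)]
b = [(0.1, 0.2, 0.0), (0.4, 0.6, 0.0), (0.7, 0.8, 0.0)]
payload = encode_mesh_delta(a, b)
assert apply_mesh_delta(a, payload) == b
print(f"{len(payload)} bytes on the wire")
```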
throwaway81523 over 2 years ago

Realistically, AI network training at the level currently done by corporations with big server farms becomes accessible to solo devs and hobbyists (let's count GPUs as general-purpose). So if you want your own network for Stable Diffusion or Leela Chess, you can train it on your own PC. I think that is the most interesting obvious consequence.

Also, large-scale data hoarding becomes far more affordable (I assume the petabyte RAM modules also mean exabyte disk drives). So you can be your own Internet Archive, which is great. Alternatively, you can be your own NSA or Google/Facebook in terms of tracking everyone, which is less great.
rozap over 2 years ago
Atlassian products would be twice as fast.
exq over 2 years ago

Instead of Electron, we'd be bundling an entire OS with our chat apps.
nirinor over 2 years ago

Some applications depend on approximately solving optimization problems that are hard even at small sizes. The poster child here is combinatorial optimization (more or less equivalently, NP-complete problems); concrete examples are SMT solvers and their applications to software verification [1]. Non-convex problems are sometimes similarly bad.

Non-smooth and badly conditioned optimization problems scale much better with size, but getting high-precision solutions is hard. These are important for the simulations mentioned elsewhere, not just for architecture and games but also for automating design, inspections, etc. [2].

[1] https://ocamlpro.github.io/verification_for_dummies/

[2] https://www.youtube.com/watch?v=1ALvgx-smFI&t=14s
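For a concrete taste of that SMT workload, here is a toy satisfiability query using the `z3-solver` Python package (my choice of tool, not the commenter's); real verification queries are this, scaled up by many orders of magnitude:

```python
# pip install z3-solver
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
# Are there integers with x^2 + y^2 == 25 and 0 < x < y?
s.add(x * x + y * y == 25, x > 0, y > x)
if s.check() == sat:
    print(s.model())  # e.g. [x = 3, y = 4]
```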
h2odragon over 2 years ago
1 million Science per Minute Factorio bases.
sahinyanlik over 2 years ago

Microsoft Teams might work without locking up my PC. Hopefully.
ilaksh over 2 years ago

The thing is, computing has been getting steadily faster, just not at quite the pace it was before, and in a different way.

With GPUs we have proven that parallelism can be just as good as, or even better than, speed increases in enhancing computation. And there again have been speed increases trickling in.

I don't think it's realistic to say that more speed advances are unlikely. We have already been through many different paradigm shifts in computing, from mechanical to nanoscale. There are new paradigms coming up, such as memristors and optical computing.

It seems like 1000x will make Stable Diffusion-style video generation feasible.

We will be able to use larger, currently slow AI models in realtime for things like streaming compression or games.

Real global illumination in graphics could become standard.

Much more realistic virtual reality. For example, imagine a realistic forest stream that your avatar is wading through, with realtime, accurate simulation of the water and complex models for animal cognition of the birds and squirrels around you.

I think with this type of speed increase we will see fairly general-purpose AI, since it will allow average programmers to easily and inexpensively experiment with combining many, many different AI models together to handle broader sets of tasks, and eventually find better paradigms.

It also could allow for an emphasis on iteration in AI, and that could move the focus away from parallel-specific types of computation back to more programmer-friendly imperative styles, for example if combined with many smaller neural networks to enable program synthesis, testing, and refinement in real time.

Here's a weird one: imagine something like emojis in VR, but in 3D, animated, and customized on the fly for the context of what you are discussing, automatically, based on an AI you have given permission to.

Or, hook the AI directly into your neocortex. Hook it into several people's neocortices and then train an animated AI 3D scene generation system to respond to their collective thoughts and visualizations. You could make serialized communication almost obsolete.
ussrlongbow over 2 years ago

I wish CPUs would get 10x slower for a while, to leave some room for optimising software products.
Jaydenaus over 2 years ago

The first thing that comes to mind: using your mobile device as your main workstation would become a lot more realistic.
ttoinou over 2 years ago

Infinite arbitrary-precision real-time Mandelbrot zoom generation :-)
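A hint of why this is compute-bound: at deep zooms every pixel needs thousands of iterations at hundreds of digits of precision. A sketch of the inner loop with `mpmath` (an assumed choice of arbitrary-precision library):

```python
from mpmath import mp, mpc

mp.dps = 100  # 100 significant digits; deep zooms need far more

def mandelbrot_escape(c, max_iter=1000):
    """Iteration count at which |z| escapes 2, or max_iter if it never does."""
    z = mpc(0)
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# One pixel near the boundary; a full frame repeats this ~10^6 times.
print(mandelbrot_escape(mpc("-0.75", "0.1")))
```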
captaincrunch over 2 years ago

Likely we would see 8192-bit keys for SSH.
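Generating such a key is already possible, just slow, which is part of why it isn't the default. A sketch with the Python `cryptography` package (the tool choice is my assumption):

```python
import time
from cryptography.hazmat.primitives.asymmetric import rsa

start = time.perf_counter()
# RSA keygen cost climbs steeply with key size; compare 2048 vs 8192.
key = rsa.generate_private_key(public_exponent=65537, key_size=8192)
print(f"{key.key_size}-bit key in {time.perf_counter() - start:.1f}s")
```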
yoyopa over 2 years ago

It would be nice for the architecture field. We deal with lots of crappy, unoptimized software that's 20-30 years old. So if you like nice buildings and better energy performance (which requires simulations), give us faster CPUs.

Imagine you're working on an airport: thousands of sheets, all of them PDFs, and hundreds or thousands of people flipping through them, waiting 2-3+ seconds for the screen to refresh. CPUs, baby, we need CPUs.
mixmastamyk over 2 years ago
Real-time ray tracing was the goal in the old days. Are we there yet at adequate quality?
MisterSandman over 2 years ago
Much more complicated redstone CPUs in Minecraft.
Tepix over 2 years ago

One thing I'd like to see would be smart traffic lights. For example, as soon as a person has finished crossing the road, if no one else is waiting, the light switches back to green immediately.
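The control logic itself is trivial; it's reliable person-detection that eats the compute. A toy state machine for the behaviour described (sensor inputs are assumed; a real signal has safety interlocks and minimum phase times):

```python
from dataclasses import dataclass

@dataclass
class Crossing:
    in_crosswalk: int = 0     # pedestrians currently crossing (from vision)
    waiting: int = 0          # pedestrians waiting at the curb
    car_light: str = "green"

    def tick(self):
        if self.car_light == "green" and self.waiting:
            self.car_light = "red"      # stop cars, let pedestrians cross
        elif self.car_light == "red" and not (self.in_crosswalk or self.waiting):
            self.car_light = "green"    # crosswalk clear: green right away

c = Crossing(waiting=1)
c.tick()                                # -> red
c.waiting, c.in_crosswalk = 0, 1
c.tick()                                # still red: someone is crossing
c.in_crosswalk = 0
c.tick()                                # -> green immediately
print(c.car_light)
```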
Tepix over 2 years ago

Assuming that a CPU at today's speeds would then require vastly less power, we would have very powerful, very efficient mobile devices such as smartwatches.

Probably using AI a lot more, on-device, for every single camera.
alkonaut over 2 years ago
I’d just not discover my accidentally quadratic code and ship it. It would save me a lot of debugging time.
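For the record, a minimal example of the kind of accidentally quadratic code that 1000x hardware would happily hide (a generic illustration, not the commenter's):

```python
def build_report_slow(lines):
    out = ""
    for line in lines:
        out += line + "\n"   # each += may copy everything built so far: O(n^2)
    return out

def build_report_fast(lines):
    return "".join(line + "\n" for line in lines)  # one pass: O(n)

# A 1000x CPU hides the slow version until the input grows ~sqrt(1000) = ~30x,
# at which point it is quadratically worse all over again.
print(build_report_slow(["a", "b"]) == build_report_fast(["a", "b"]))  # True
```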
jensenbox over 2 years ago

Your question is missing the factor of power: is it 1000x at current power usage, or 1000x at 1000x the power?

Also, 1000x parallelism or 1000x single-core?
robertlagrant over 2 years ago
Be able to run Emacs as fast as I can run Vim?
VoodooJuJu over 2 years ago

Cheaper employees. With faster CPUs, they won't need to understand leetcode-level optimization, i.e. they won't need expensive or sophisticated training. Just find someone with a pulse and stick them in front of the computer. Less-than-ideal big-O's won't be an issue with this kind of speed.
domenicrosati over 2 years ago

Simulation? Like fluid dynamics. I heard that was CPU-intensive.
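It is: even this toy 1D heat-diffusion kernel does grid-size × steps updates, and production CFD runs 3D grids with billions of cells per timestep. An illustrative sketch, not a real CFD code:

```python
import numpy as np

def diffuse_1d(u, alpha, dx, dt, steps):
    """Explicit finite-difference update of du/dt = alpha * d2u/dx2."""
    u = u.copy()
    r = alpha * dt / dx**2      # must keep r <= 0.5 for stability
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

u0 = np.zeros(100)
u0[45:55] = 1.0                 # a hot spot in the middle
print(diffuse_1d(u0, alpha=1.0, dx=1.0, dt=0.4, steps=1000).max())
```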
frontierkodiak over 2 years ago

Incredible biodiversity monitoring: everywhere, all the time.
nyfresh over 2 years ago
More bloat
alexvoda over 2 years ago

I guess it depends on what you mean by faster.

Higher IPC, higher clock, more cores, more cache, more cache levels, more memory bandwidth, faster memory access, faster decode, etc.

One idea I imagine would be possible at 1000x speed is real-time software-defined radio capture, analysis, and injection.
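Real-time SDR analysis spends most of its cycles on FFTs over raw IQ samples. A sketch with synthetic samples (a real capture would come from SDR hardware via a driver library):

```python
import numpy as np

fs = 2_000_000                    # 2 MS/s sample rate
t = np.arange(65536) / fs
# Fake IQ capture: a carrier at 250 kHz plus complex noise.
iq = np.exp(2j * np.pi * 250_000 * t) + 0.1 * (
    np.random.randn(t.size) + 1j * np.random.randn(t.size)
)

spectrum = np.fft.fftshift(np.fft.fft(iq))
freqs = np.fft.fftshift(np.fft.fftfreq(iq.size, d=1 / fs))
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"strongest signal near {peak / 1e3:.0f} kHz")   # ~250 kHz
```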
rubicon33 over 2 years ago

React Native could now handle 500,000,000 lines of third-party jankfest rather than just 100,000,000.
valbaca over 2 years ago
If I dare to be optimistic for once, cure cancer via simulated protein folding.
bchelli over 2 years ago

Current encryption standards would become obsolete overnight; internet and network connectivity would become insecure.

This would lead to complete chaos until we updated our security standards.
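Worth a sanity check, though: against brute-force key search, a 1000x speedup removes only about 10 bits of effective key strength (2^10 = 1024), so a cipher like AES-128 stays far out of reach; the real casualties would be parameters already marginal today. The quick arithmetic:

```python
import math

speedup_bits = math.log2(1000)   # ~9.97 bits shaved off by a 1000x machine
for k in (64, 80, 128):
    print(f"a {k}-bit key now resists like a {k - speedup_bits:.0f}-bit key")
```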
legulere over 2 years ago

Less time spent on optimization in software development. That might sound horrible at first, but it also means fewer resources need to go into programming something.
bob1029 over 2 years ago
Single-shard MMO with no instancing requirements.
tarunmuvvala over 2 years ago

As per Ray Kurzweil: https://www.kurzweilai.net/images/chart03.jpg

With 1000x CPU computing, each computer would have computing power equivalent to a human brain.

So brain-computer interfaces or Jarvis-like AI may become possible.
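The back-of-envelope behind that chart, for what it's worth (brain-throughput estimates vary by orders of magnitude, so treat both numbers as loose assumptions):

```python
brain_ops_per_s = 1e16   # one common, highly uncertain estimate
chip_ops_per_s = 1e13    # rough order of magnitude for a high-end chip today
print(f"needed speedup: ~{brain_ops_per_s / chip_ops_per_s:.0f}x")  # ~1000x
```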
plantain over 2 years ago
Weather forecasts would be as good as they are now, perhaps 1-2 days further ahead.
cutler over 2 years ago
A Ruby on Rails renaissance.
kramerger over 2 years ago

Windows updates in the background would take 3 hours instead of 4.

The average Node.js manifest file would contain 12,000x more dependencies.

Also, we would see a ton more AI being done on the local CPU: anything from genuine OS improvements to super-realistic cat filters on Teams/Zoom.

And finally, I think people would need to figure out storage and network bottlenecks, because there is only so much you can do with compute before you end up stalling, waiting for more data.
8jef over 2 years ago

How many decimals of pi could be generated in a given time on such a machine?
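You can get a baseline today and scale it by 1000: `mpmath` computes pi at arbitrary precision in roughly quasi-linear time in the digit count. A small benchmark sketch (the library choice is my assumption):

```python
import time
from mpmath import mp

mp.dps = 100_000                 # 100,000 decimal digits
start = time.perf_counter()
pi_val = +mp.pi                  # force evaluation at the current precision
digits = mp.nstr(pi_val, 100_000)
print(f"{len(digits)} chars in {time.perf_counter() - start:.2f}s")
```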
Iwan-Zotow over 2 years ago
1000x better porno
quadcore over 2 years ago
Good code.
anigbrowl over 2 years ago
Whole brain simulation, AGI.
tiernano over 2 years ago

Java might run at a decent speed... might, but probably won't (jk, sorry, I couldn't help myself...) [edit: Grammarly decided to remove some text when fixing spelling...]