Hmm, the HN comments say it's kind of interesting.

However, I read the title as: "We couldn't think of anything good to say about the product, so we added a buzzword that's in fashion."

The same comments mostly say this has nothing to do with AI.
It doesn't look very fancy or innovative.

In the end, they just put a dedicated coprocessor directly on the memory chip. They named it AI because of buzzwords and marketing bullshit...
Now that's what I call edge computing.

But seriously, it seems to answer an ancient techie question of mine: since we're strobing memories millions or billions of times per second, couldn't they be doing more than storage with all those clock cycles?
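To make that concrete: a reduction such as a dot product normally drags every operand across the memory bus to the CPU, while a compute unit sitting beside the DRAM banks could return just the result. Here's a minimal C sketch of that idea; pim_mac, NUM_BANKS, and WORDS_PER_ROW are made-up names standing in for a bank-local multiply-accumulate command, not Samsung's actual interface.

    #include <stddef.h>

    #define NUM_BANKS 16
    #define WORDS_PER_ROW 1024

    /* Conventional path: two reads cross the bus per element. */
    float dot_on_cpu(const float *a, const float *b, size_t n) {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }

    /* Reference model of the hypothetical in-DRAM multiply-accumulate:
     * on real hardware this loop would run beside the bank, and only
     * the returned scalar would cross the bus. */
    static float pim_mac(const float *row_a, const float *row_b, size_t n) {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++)
            acc += row_a[i] * row_b[i];
        return acc;
    }

    /* PIM path: one command out and one partial sum back per bank. */
    float dot_in_memory(const float *a, const float *b) {
        float acc = 0.0f;
        for (int bank = 0; bank < NUM_BANKS; bank++) {
            size_t off = (size_t)bank * WORDS_PER_ROW;
            acc += pim_mac(a + off, b + off, WORDS_PER_ROW);
        }
        return acc;
    }

The arithmetic is identical either way; what changes is where it runs and how much data has to move, which is the whole pitch of processing-in-memory.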
I speculate that the ideal to strive toward is RAM module-to-module processing: take all of one module's data, feed it through one layer, and dump the results into the next module, with the individual layers accessible for both read and write as ordinary RAM.
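Purely as a sketch of what that speculation could look like in C, assuming each module could run a multiply-accumulate over weights resident in its own banks (pim_module and module_forward are illustrative names, not any real API):

    #include <stddef.h>

    #define LAYER_DIM 256

    /* One "module": the layer's weights live here permanently, and the
     * activation buffer stays readable/writable by the host as RAM. */
    typedef struct {
        float weights[LAYER_DIM][LAYER_DIM];
        float activations[LAYER_DIM];
    } pim_module;

    /* Hypothetically executed by the module's own logic: consume the
     * previous module's activations, produce this module's. */
    void module_forward(pim_module *self, const pim_module *prev) {
        for (size_t i = 0; i < LAYER_DIM; i++) {
            float acc = 0.0f;
            for (size_t j = 0; j < LAYER_DIM; j++)
                acc += self->weights[i][j] * prev->activations[j];
            self->activations[i] = acc > 0.0f ? acc : 0.0f; /* ReLU */
        }
    }

    /* The host only sequences the modules; the tensors themselves
     * never leave the memory subsystem. */
    void forward_pass(pim_module *modules, size_t n_layers) {
        for (size_t k = 1; k < n_layers; k++)
            module_forward(&modules[k], &modules[k - 1]);
    }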
Paper with more details: https://ieeexplore.ieee.org/document/9240974
Can you use this processing-in-memory (PIM) to perform garbage collection in memory? (Like in this article from RISC-V board member Krste Asanovic: https://people.eecs.berkeley.edu/~krste/papers/maas-isca18-hwgc.pdf)
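For intuition on why GC is even a candidate: the mark phase of a tracing collector is nearly pure pointer chasing, i.e. dependent loads with trivial compute, which is exactly the profile near-memory hardware targets (the linked paper proposes dedicated hardware for this traversal). A rough C sketch of the loop in question; heap_object and its layout are illustrative, not taken from the paper:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct heap_object {
        bool marked;
        size_t num_refs;
        struct heap_object *refs[]; /* outgoing pointers */
    } heap_object;

    /* Classic mark phase, recursive for brevity: one write per object,
     * then a chain of dependent loads. A near-memory walker could run
     * this beside the banks and hand back only the mark bits. */
    void mark(heap_object *obj) {
        if (obj == NULL || obj->marked)
            return;
        obj->marked = true;
        for (size_t i = 0; i < obj->num_refs; i++)
            mark(obj->refs[i]);
    }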