Leaving sarcastic comments aside, I think a valid point is made - the rise of commodity, garbage-quality hardware overmanufactured in China, and the subsequent rise of "commodity-proud" software enabled by penny-wise / dollar-stupid business processes, are overdue for a correction.<p>Instead of lashing 20 rusty bicycles together and claiming to have a revolutionary fuel- and cost-efficient rocket ship, why don't we build a rocket ship from the start that actually flies quite well?<p>Hardware and chips optimized for DB engines, queries, and huge amounts of streaming data will be welcomed.
Interestingly, it seems like GPU acceleration will be available in postgresql in one of the next few releases [0].<p>From that page, it seems that once it's enabled there aren't any special requirements to get a GPU accelerating a query. As a result, I'd be surprised if "GPU optimized" databases overtake regular-db-with-gpu-acceleration-addins.<p>[0] <a href="https://wiki.postgresql.org/wiki/PGStrom" rel="nofollow">https://wiki.postgresql.org/wiki/PGStrom</a>
It doesn't threaten anybody because Oracle/Microsoft/Splunk/SAP/Hadoop/Spark/etc can just add in GPU-optimized code themselves.
Funny how it mentions USPS using GPUdb to "process complex queries and display 2D visualizations in the time it takes to load a Web page", yet every time I visit the post office, it takes at least 8 seconds after scanning a prepaid package for the package details to appear on the screen. They need to port that tech over to where it matters.
GPU databases have been around for a while, but not much has changed.<p>I think the bigger threat is cheap memory and the rise of in-memory computing. Today you can have a workstation with half a TB of RAM for a fairly reasonable price. Hadoop is already being crushed by Spark.
The point overlooked in this article is how much costlier VRAM is compared to RAM: storing the same amount of data in VRAM would cost you an order of magnitude more. Not to mention that DBs like MapD are not distributed, so you are limited to the number of GPUs you can cram into a single box.
I don't think they really threaten Oracle, even for analytics, where this makes sense. At a given price point, the performance increase over in-memory on a SPARC M7 won't be that insane. So, just like with in-memory DBs, the main question is how long before Oracle accelerates its own DB with this kind of tech. I think they have only about 3 years before Oracle will be there.
AFAIK GPUs only excel at data-parallel tasks (i.e. doing the exact same operation to thousands of data points in parallel, e.g. in a matrix multiplication). So I wonder how they utilize this for ad hoc SQL queries? Anybody have any pointers to some papers maybe?
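(Partially answering my own question: on a columnar store, the relational operators themselves decompose into exactly that kind of data-parallel work. A rough CPU-side sketch of the idea, using NumPy vectorization as a stand-in for GPU kernels - the table, column names, and predicate here are made up for illustration:)

```python
import numpy as np

# Columnar layout: each column is one contiguous array - the shape GPUs want.
# Hypothetical "sales" table with 1M rows.
n = 1_000_000
rng = np.random.default_rng(0)
price = rng.uniform(0.0, 100.0, n)       # sales.price
region = rng.integers(0, 4, n)           # sales.region (dictionary-encoded id)

# SELECT SUM(price) FROM sales WHERE region = 2 AND price > 50
# Each WHERE predicate is a single elementwise op applied to every row of a
# column - the same "one kernel, thousands of elements" pattern as a matmul.
mask = (region == 2) & (price > 50.0)    # data-parallel filter
total = price[mask].sum()                # data-parallel reduction
```

The point being that scans, filters, and aggregations are embarrassingly parallel over columns; it's the joins and ad hoc query plans where the mapping to GPU kernels gets interesting.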
Oracle already sells A LOT of Exadata: premium-priced machines to run databases on overdrive. I think they would be fine competing against a GPU-optimized database.
Maybe a good time to point out that we've been specializing more in the visual analytics side (the companies mentioned are more like DBs or a traditional Tableau) by connecting GPUs in the browser to GPUs in the datacenter: graphistry.com . And, we're hiring ;-)
"Any headline that ends in a question mark can be answered by the word no."<p><a href="http://enwp.org/Betteridge's_law_of_headlines" rel="nofollow">http://enwp.org/Betteridge's_law_of_headlines</a>
Because we've all had enough of Buzzfeed: the article doesn't even come close to actually answering the question. Decide for yourself whether that makes it link bait or not.