By Razi Shoshani
While the big data explosion has grown exponentially over the last ten years, CPUs have not kept pace. Performance is physically limited by the speed at which data can be transferred on-chip, and the CPU's big developments over the past decade were an increase in the number of cores and an increase in clock frequency, neither of which made a major difference in processing times.
While this was going on, GPUs and AI accelerator chips from companies like NVIDIA and Habana were becoming the go-to favorites in the world of Artificial Intelligence. But we need to remember that AI is part of a larger world, and even AI ultimately needs to run a function centrally to get a prediction. The real challenge, and the main pain point, is and will continue to be the ingestion and pre-processing of data. To do this, you need a powerful computer. And after the ingestion and pre-processing are completed, there is the query itself, for which you also need a powerful computer.
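To make the pre-processing side concrete, here is a minimal sketch of a GPU-accelerated ingestion-and-preparation step. It assumes a CUDA-capable GPU and the RAPIDS cuDF library (a pandas-like, GPU-backed DataFrame); the file name and column names are hypothetical and not taken from any particular product.

```python
# Minimal sketch: ingestion and pre-processing on the GPU with cuDF.
# Assumes a CUDA-capable GPU and RAPIDS cuDF; the file and column
# names here are hypothetical.
import cudf

# Ingest: CSV parsing runs on the GPU rather than the CPU.
df = cudf.read_csv("events.csv")

# Pre-process: filter and aggregate without leaving GPU memory.
recent = df[df["year"] >= 2020]
summary = recent.groupby("device_id").agg({"latency_ms": "mean"})

print(summary.head())
```

The point is not the specific library but that the heavy lifting, the parsing, filtering, and aggregating, happens across thousands of GPU cores instead of a handful of CPU cores.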
With IoT and edge computing, data centers are growing rapidly, adding more and more CPUs to keep up with exploding data needs. On top of the CPUs, they've been adding more cores, and more chassis to house those cores. Power consumption (one of the data center's major cost drivers) is rising, and temperatures are going up. Even cooling these systems requires more rack space. It is all very uneconomical and un-green when you think about it. When it comes to data centers, leading analyst firms like Gartner have for years been pushing the importance of accounting for climate risk and sustainability, and of taking a green approach to IT planning.
Now let's go back to the GPU, which faced some resistance when it was first introduced as an answer to power-computing needs. People saw it as their kid's gaming graphics chip, but once they realized the performance capabilities the GPU provided, it wasn't long before it gained acceptance. If you examine articles and studies reviewing the last decade, you see graphs showing that Intel CPU performance per watt has improved roughly linearly, whereas NVIDIA GPU performance has increased exponentially (see the image below, taken from "A Decade of Accelerated Computing Augurs Well for GPUs" by Dr. Vincent Natoli).
Energy savings aside, when it comes to predictable calculations, GPU computing enables significant speed increases on massive data sets, providing a viable way to close the widening CPU gap and process data faster.
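Here is a minimal sketch of that kind of predictable, data-parallel calculation run on the CPU and then on the GPU. It assumes a CUDA-capable GPU and the CuPy library; the array size is arbitrary, and the timings are illustrative rather than a rigorous benchmark.

```python
# Minimal sketch: the same data-parallel calculation on CPU (NumPy)
# and GPU (CuPy). Assumes a CUDA-capable GPU; not a rigorous benchmark.
import time

import numpy as np
import cupy as cp

n = 50_000_000  # 50M float64 values (~400 MB), an arbitrary size

x_cpu = np.random.random(n)
t0 = time.perf_counter()
cpu_result = np.sqrt(x_cpu).sum()
cpu_time = time.perf_counter() - t0

x_gpu = cp.asarray(x_cpu)          # copy the data into GPU memory
cp.cuda.Stream.null.synchronize()  # wait until the copy completes
t0 = time.perf_counter()
gpu_result = cp.sqrt(x_gpu).sum()
cp.cuda.Stream.null.synchronize()  # GPU calls are async; wait for the result
gpu_time = time.perf_counter() - t0

print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
```

Note that moving the data into GPU memory is itself a cost, which is exactly why keeping ingestion, pre-processing, and querying on the GPU end to end matters.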
Huge hardware companies have been trying to close the CPU gap by creating hardware acceleration solutions: NVIDIA and AMD (GPUs), NEC (the Aurora Vector Engine), and Intel (which tried to beat the equation by acquiring Habana) are examples. Now the question is, do the leaders of database companies such as Oracle (with Exadata) and Teradata still believe in the CPU, or will they try to find another hardware acceleration solution to resolve this gap between explosively growing data and computation?
What do you plan to do? I am interested in hearing your feedback and opinions on how to navigate the challenge of the big data explosion.