The Added Value of SQream Acceleration for Machine Learning

By Inon Maman


To improve a machine learning (ML) model’s precision and build the pipeline feeding it in an optimized, efficient way, you must prepare your data sufficiently. The data needs to be prepared in a unified way that will be “understandable” to the training phase.

SQream’s analytical GPU-based engine has the unique capability of processing massive amounts of data, even on a petabyte scale, in a cost-efficient manner. In this blog, we will focus on SQream’s value proposition within the MLOps domain, and more specifically, data preparation for ML.

Processing Big Data

There’s nothing new about ML and big data going hand in hand; no ML model could provide quality predictions without being based on big data. One of the main advantages of SQream’s engine is its massive processing capability: unlike traditional data warehouses, which struggle to process terabytes or petabytes of data, SQream’s unique architecture and parallel processing make it possible to handle massive amounts of data in a relatively short time.

Amongst the various big data processing tasks, data preparation is the most tedious. It takes a lot of time and effort, is typically error-prone, and sometimes feels Sisyphean. Moreover, data preparation usually involves constant communication between data scientists (defining requirements) and data engineers (providing access), which often complicates it further. Your data platform should be able to shorten the preparation time and consolidate all data in one centralized location.


The Data Scientists’ Python Ecosystem

The Python ecosystem is the native environment for ML projects, providing users with a variety of libraries that support MLOps tasks. Two common open-source libraries used by data scientists are Pandas and SQLAlchemy. Both help manipulate and transform data: SQLAlchemy is used to interact with databases (connection management), while Pandas provides easy access to the data and makes it easier to transform:

  • SQLAlchemy is a popular SQL toolkit and Object-Relational Mapping (ORM) library for Python. It allows data scientists to interact with relational databases through a Pythonic interface (behind the scenes, it is translated into SQL syntax).
  • Pandas provides high-performance data processing, along with data structures called DataFrames for efficiently handling structured data. It also provides a wide range of data manipulation functions such as filtering, grouping, merging, and reshaping.
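As a minimal, self-contained sketch of how the two libraries cooperate, the example below stands an in-memory SQLite engine in for a SQream connection; the table and column names are invented for illustration:

```python
import pandas as pd
from sqlalchemy import create_engine

# SQLAlchemy manages the database connection; here an in-memory
# SQLite engine stands in for a real SQream connection.
engine = create_engine("sqlite://")

# Pandas handles the structured data as a DataFrame.
raw = pd.DataFrame({
    "category": ["a", "b", "a"],
    "amount": [10.0, None, 5.0],
})
raw.to_sql("bronze_table", engine, index=False)

# Read the table back and manipulate it with Pandas functions:
df = pd.read_sql("SELECT * FROM bronze_table", engine)
clean = df.dropna()                       # filtering out null rows
totals = clean.groupby("category").sum()  # grouping and aggregating
```

The same pattern carries over unchanged to any SQLAlchemy-compatible engine: only the connection string differs.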

Boosting ML Data Preparation Anywhere Needed 

SQream is accessible from the Python ecosystem through its native Python connector (PySQream), which allows connection and authentication to both SQreamDB (our on-premises deployment) and SQream Blue (our Google Cloud deployment) and encapsulates a repository of libraries, including those mentioned above.

Complex data transformations, aggregations, feature engineering, and other data preparation tasks can now be done quickly and efficiently. Turning raw data (Bronze stage) into “cleaned” and augmented data (Silver stage) is a prolonged process that users can execute faster with SQream, resulting in a faster time to model. Behind the cost-efficient performance lies the combination of SQream’s processing engine with the immediate accessibility of Python libraries, which gives the data scientist a complete management cockpit for MLOps.



This is how users can prepare their data and immediately start using SQream’s blazing-fast SQL engine without leaving their familiar environment. In the example below, you can see the command for connecting your notebook to SQream Blue using the SQLAlchemy connection skeleton:

%env DATABASE_URL sqream_blue://sqream:[email protected]/"<access_token>"

Once the connection is established, the user can perform data preparation tasks with SQream:

import pandas as pd


# Load the data into a dataframe

df = %sql select * from bronze_table


# Remove null values from the dataframe

df = df.dropna()


# Reformat the data in the dataframe

df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')


# Aggregate the data in the dataframe

grouped_df = df.groupby(['category', 'date']).sum()


# Write the transformed data back to the database

grouped_df.to_sql('transformed_table_name', engine)

In the code above, we first connect to SQream and load the data into a Pandas dataframe using:

%sql select * from bronze_table

We can then perform, with SQream, any manipulation that counts as preparation, just as we would on the dataframe:

  • Remove null values using the dropna() function
  • Reformat the date column using the pd.to_datetime() function
  • Aggregate the data using the groupby() function

Finally, we write the transformed data back to the table using the to_sql() function.
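To see the whole round trip behave end to end outside a SQream environment, here is a sketch using invented sample data and an in-memory SQLite engine in place of the SQream connection. One detail worth noting: after groupby(), the grouping keys live in the DataFrame index, and to_sql() writes the index back as columns by default.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite://")  # stands in for the SQream connection

# Invented sample data for illustration only.
df = pd.DataFrame({
    "category": ["a", "a", "b"],
    "date": ["2023-01-01", "2023-01-01", "2023-01-02"],
    "amount": [1.0, 2.0, 4.0],
})
df["date"] = pd.to_datetime(df["date"], format="%Y-%m-%d")

# After groupby, 'category' and 'date' form the index; to_sql writes
# the index as columns by default, so they survive the round trip.
grouped = df.groupby(["category", "date"]).sum()
grouped.to_sql("transformed_table_name", engine, if_exists="replace")

back = pd.read_sql("SELECT * FROM transformed_table_name", engine)
```

Passing if_exists="replace" makes the write idempotent when the preparation notebook is rerun.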

The precision of an ML model improves dramatically as it is fed larger, more diverse, higher-quality data. With SQream, you don’t have to compromise on this, since SQream can process massive amounts of data in a cost-efficient manner.

Future Vision for SQream in Machine Learning

Our future vision for machine learning is focused on enhancing the performance of the training and inference phases of the MLOps workflow, thus allowing an end-to-end pipeline without having to export data outside of SQream. SQream plans to enable users to run training jobs and parallelize inference with its own processing engine, eliminating the need to decouple the data preparation from the training and inference process. This will result in much faster training times, improved performance, and unified accessibility.

Download and Read the machine learning use cases ebook

SQream’s big data analytical engine provides unique capabilities that make it a valuable tool for data scientists and engineers. The synergy between its ability to process large amounts of data quickly and efficiently and its accessibility to popular Python ML libraries results in a significantly faster time to model.


Read more about The Added Value of GPU-Accelerated Analytics in my article on Hackernoon