Yield and Bid Optimizations for Ad-Tech
with SQream DB

United States · 500 employees

Adding a Speed Layer for Hadoop Systems in Ad-Tech

A leading omnichannel ad publishing platform deployed SQream DB to unlock more insights from their collected data. By implementing a GPU SQL solution to enable their employees to query data from the past 48 hours, they increased revenues by improving bids on high-value publishers.

What is yield optimization?

Yield optimization is the use of data and statistical optimization techniques to maximize performance in order to increase revenue. Obtaining maximum efficiency is crucial for good return-on-investment for all parties involved in the publishing cycle.

Yield optimization often involves analyzing huge amounts of data and tweaking parameters until efficiency is tangibly improved.

For this omnichannel ad publisher, yield optimization involves identifying both well-performing and under-performing ads. The key is understanding what works and exploiting it further.
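As a minimal illustration of this idea (the data, publisher names, and cutoff rule are hypothetical, not taken from the vendor's actual pipeline), under-performers can be flagged by ranking publishers on effective CPM (revenue per 1,000 impressions) over a recent window:

```python
# Illustrative sketch: rank publishers by eCPM over the last 48 hours
# and flag those falling below the median as under-performers.
from statistics import median

# Hypothetical per-publisher aggregates from the analysis window
stats = {
    "pub_a": {"impressions": 2_000_000, "revenue": 5_400.0},
    "pub_b": {"impressions": 1_500_000, "revenue": 1_200.0},
    "pub_c": {"impressions": 3_200_000, "revenue": 9_900.0},
}

def ecpm(s):
    # Effective CPM: revenue earned per 1,000 impressions served
    return s["revenue"] / s["impressions"] * 1000

ecpms = {pub: ecpm(s) for pub, s in stats.items()}
cutoff = median(ecpms.values())

high_value = sorted(p for p, e in ecpms.items() if e >= cutoff)
under_perf = sorted(p for p, e in ecpms.items() if e < cutoff)
```

In practice the cutoff would come from the optimization model rather than a simple median, but the shape of the analysis (aggregate, score, split, act) is the same.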

The Hadoop-based system was not designed for analytics

The growing number of impressions (trillions per month) was becoming too difficult to analyze on the distributed Hadoop system. No best-of-breed solution ticked all the boxes; most suffered from slow performance or operational complexity. The result was unreliable query performance, with some queries taking roughly 5 hours. The vendor wanted to avoid sinking more time and resources into unreliable, unpredictable solutions.

Before deploying SQream DB, the infrastructure consisted of dozens of Hadoop nodes, with HBase as the storage layer and Apache Phoenix as its SQL front-end. Lengthy scan times on HBase limited the effectiveness of the yield team, forcing them to shrink the analysis window to short time-frames and reducing the relevance of their findings.

85 TB ingested per day with SQream DB

Simplified data pipeline with SQream

The SQream solution extracts data directly from HDFS, bypassing HBase and Phoenix to simplify the pipeline.

By combining 2U servers with NVIDIA K80 GPUs, the system handles around 40 TB of data per node. In this setup, SQream DB ingests nearly 85 TB per day and can query fresh and historical data at the same time.
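The speed-layer pattern above can be sketched as follows (timestamps, schema, and window sizes are hypothetical, for illustration only): fresh and historical records live in the same store, so one scan can cover the last 48 hours or the full history without moving data between systems.

```python
# Sketch of unified fresh + historical querying over one dataset.
from datetime import datetime, timedelta

now = datetime(2019, 6, 1, 12, 0)

# (timestamp, publisher, revenue) -- hypothetical impression records
rows = [
    (now - timedelta(hours=3),  "pub_a", 0.004),   # fresh
    (now - timedelta(hours=30), "pub_b", 0.002),   # fresh
    (now - timedelta(days=10),  "pub_a", 0.003),   # historical
]

def window(rows, hours):
    # Keep only records newer than the cutoff
    cutoff = now - timedelta(hours=hours)
    return [r for r in rows if r[0] >= cutoff]

fresh = window(rows, 48)               # last 48 hours only
full_history = window(rows, 24 * 365)  # fresh and historical in one pass
```

In the actual deployment this filtering happens inside SQream DB as SQL over GPU-scanned columns; the sketch only shows why a single system serving both windows removes the need for a separate batch path.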

Time-to-market for queries reduced by 60x

Using SQream DB as a speed layer, the vendor has offloaded heavy analytical workloads from the existing infrastructure and reduced time-to-market for queries by 60x (from 5 hours to a few minutes), increasing analytics capabilities and enabling more users to run more queries.

"We are monetizing our data and algorithms much better with the massive scalability and performance of SQream DB."
CTO & Co-founder of vendor

Download the full case study PDF to learn more.