Many companies start their journey by choosing an in-memory solution to tackle severe high-performance requirements. In-memory databases are indeed excellent for high-performance querying of datasets that are limited in size. But what do those companies do once their data starts to scale rapidly along with the company’s natural growth, and real big data issues start popping up? For many companies, in-memory solutions become a “burden” once they realize that a large amount of money was spent on a short-term solution that now requires expensive add-ons or replacements. The IT Manager may realize, with great discomfort, that they could have chosen a long-term big data solution from the start – one that delivers the same high performance while also allowing exploration and research of much larger datasets.

Once the in-memory database starts struggling to deliver the required performance and results, the IT Manager faces a first big data challenge: how to match an in-memory database with big data demands – i.e. scaling smoothly from 10TB to 100TB of data.

In these cases, the IT Manager has several options to choose from:

  1. Explain to management that the IT department is not prepared for such a change
  2. Replace the existing in-memory technology with another costly database, such as Oracle Exadata, IBM Netezza or Teradata
  3. Migrate to a clustered solution, like Hadoop

The implications of each choice are:

Option #1 may cost the IT Manager his job.

Options #2 and #3 can lead to spending a large, unplanned chunk of the budget on additional products, integration and skills that need to be purchased or acquired.

So, now you are probably wondering: “Didn’t the title say Happy IT Managers?” Well, yes, it did.

Instead of throwing away the in-memory database and spending a lot of time, resources and money, let’s discuss how the IT Manager can make the most of what is already in place – by using the existing in-memory database to its full potential!

Others have already done this, with great success. How?

  1. Leaving the in-memory database in place – no need to replace it
  2. Installing SQream DB – an ultra-fast, high-performance SQL database for large datasets, crunching 100TB of raw data on a commodity Dell 2U server
  3. Integrating SQream DB with the existing in-memory database, using standard connectors (JDBC, ODBC, .NET, OLEDB)
  4. Preserving existing business queries on the in-memory solution, while new queries that investigate big data are streamed through the in-memory solution to SQream DB
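The routing idea in steps 3 and 4 can be sketched in a few lines. The snippet below is a minimal illustration only: the table names, the size threshold, and the routing rule are assumptions for the example, not SQream DB’s actual API – in practice the queries would travel over the standard connectors listed above (JDBC, ODBC, etc.).

```python
# Illustrative sketch: keep existing business queries on the in-memory
# database, and forward queries that touch large historical tables to
# SQream DB. Table names and the routing rule are hypothetical.

# Tables small enough to remain in the in-memory database (assumed).
IN_MEMORY_TABLES = {"customers", "orders_today"}

def route_query(tables):
    """Return which engine should execute a query.

    If every table the query touches fits in the in-memory database,
    keep it there; otherwise stream it through to SQream DB.
    """
    if set(tables) <= IN_MEMORY_TABLES:
        return "in-memory"
    return "sqream"

# An existing dashboard query stays on the in-memory database:
print(route_query(["orders_today"]))                    # in-memory
# A new query scanning 100TB of history goes to SQream DB:
print(route_query(["orders_today", "orders_history"]))  # sqream
```

Because the decision is made per query, the existing business workload is untouched while big data exploration gains a dedicated engine.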

This way, the IT Manager stays happy: he keeps using his in-memory database and empowers his IT investments with SQream DB as an accelerator to the existing infrastructure, without needing to replace his team or spend millions of dollars on full-rack solutions from other leading DBMS vendors.

Happy New Technologies!