We are looking for a Big Data Engineer to lead the analysis and implementation of the SQream big data technology stack, applying best practices from architectural design through administration and creating highly scalable, extensible cloud-based and on-premises DWH solutions that integrate with customers' existing ecosystems of very large data sets.
You will be part of our Customer Delivery Team, which executes POCs and implements SQream in close interaction with SQream's customers around the world. This is a customer-facing role.
You’ll be working closely with our Product and R&D teams, as well as our clients, on a wide variety of business solutions. Together, we aspire to provide complete customer satisfaction for our current and pending projects.
This job requires travel.
Responsibilities
- Contribute to overall data warehouse architecture, framework, and design patterns to store and process high data volumes
- Execute POCs for potential opportunities as part of the product's presale process, at customer sites or remotely
- Design, implement, and fine-tune SQream's big data solutions according to customers' needs as part of SQream DB implementations, at customer sites or remotely
- Conduct technical training for customers, partners and integrators of SQream products
- Integrate the different Big Data tools and frameworks required to deliver the desired solution
- Maintain ongoing communication with prospects, customers, and partners throughout their engagement with SQream
- Maintain ongoing communication with Product and R&D to ensure continuous improvement of SQream products
Skills and Qualifications
- 5+ years of experience in data engineering, data ingestion methodologies, advanced SQL programming, and data modeling on RDBMS
- Solid experience with Linux and database performance testing
- Excellent knowledge of SQL
- Performance tuning for Big Data platforms and data modeling (optimizing/rewriting SQL queries)
- Familiarity with ETL tools (such as Talend, Informatica, and others) and experience implementing complex ETL processes
- Experience with integration of data from multiple data sources
- Experience with big data tools such as Kafka, Spark, Hadoop, Snowflake, etc.
- Experience with BI tools (e.g., Tableau, Qlik)
- Excellent written and oral communication skills in English
Preferred Skills and Qualifications
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Experience with different storage systems
- Java/Python programming experience
- Experience with cloud-based data warehousing applications