In this role, you will create reliable architectures for highly scalable data pipelines that collect large volumes of data from diverse sources and transform it into a usable format for analysis. You will design, implement, and maintain a full suite of real-time and batch jobs that fuel our cutting-edge data analytics platform, delivering real-time intelligence to our businesses. You will also draw on your experience designing and implementing data tools to help analysts and data scientists build, optimize, and tune their use cases.
Requirements and skills for this role
- Bachelor’s degree with a minimum of 4 years of data engineering experience.
- Minimum of 2 years of hands-on big data engineering experience with technologies such as Hadoop/Hive, Hyperscale PostgreSQL, Java/Scala, Spark, Kafka, Python, and SQL and NoSQL databases.
- Experience building Azure cloud-based data engineering solutions (e.g., Azure Data Factory, Azure Data Lake Store, Azure Databricks, Azure HDInsight).
- Experience with Elasticsearch and the cloud-based data warehousing platform Snowflake is an added advantage.
Apply HERE