Create and maintain an optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources.
Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Minimum 2 years' experience in data warehousing and big data projects.
Working knowledge of the ELK stack (Elasticsearch, Logstash, Kibana) and visualization tools.
Experience building and optimizing big data pipelines.
Strong analytical skills for working with unstructured datasets.
Experience with big data tools such as Hadoop, Spark, Kafka, and Airflow.
Experience with relational SQL and NoSQL databases.
Experience with object-oriented and functional scripting languages such as Python, Java, and Scala.
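The extract-transform-load work described above can be sketched minimally in Python; the data source, schema, and metric here are hypothetical, and a real pipeline would read from and write to external systems rather than in-memory strings:

```python
import csv
import json
from io import StringIO

# Hypothetical raw source: CSV of customer signup events.
RAW_CSV = """user_id,signup_date,channel
1,2024-01-05,organic
2,2024-01-06,paid
3,2024-01-06,organic
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(StringIO(text)))

def transform(rows):
    """Transform: count signups per acquisition channel,
    a simple customer-acquisition metric."""
    counts = {}
    for row in rows:
        counts[row["channel"]] = counts.get(row["channel"], 0) + 1
    return counts

def load(counts):
    """Load: serialize to JSON here; a production pipeline
    would write to a warehouse table instead."""
    return json.dumps(counts, sort_keys=True)

result = load(transform(extract(RAW_CSV)))
print(result)  # {"organic": 2, "paid": 1}
```

In practice, each of these stages would typically run as a task in an orchestrator such as Airflow, with the heavy lifting delegated to Spark or warehouse SQL.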
To apply for this job email your details to email@example.com