CASY-MSCCN Jobs


Eaton Corporation Data Engineer in Hadapsar, India

What you’ll do:

If you want to be part of something special, part of a winning team, and part of a fun team, this role is for you: winning is fun. We are looking for a Data Engineer based in Pune, India. At Eaton, making our work exciting, engaging, and meaningful; ensuring safety, health, and wellness; and being a model of inclusion and diversity are already embedded in who we are. It is in our values, part of our vision, and reflected in our clearly defined aspirational goals.

Eaton Corporation’s Center for Intelligent Power has an opening for a Senior Data Engineer. In this role, you will be responsible for designing, developing, and maintaining our data infrastructure and systems. You will collaborate with cross-functional teams to understand data requirements, implement data pipelines, and ensure the availability, reliability, and scalability of our data solutions. The ideal candidate can program in several languages and understands the end-to-end software development cycle, including CI/CD and software release.

Job responsibilities

  • Design, develop, and maintain scalable data pipelines and data integration processes to extract, transform, and load (ETL) data from various sources into our data warehouse or data lake.

  • Collaborate with stakeholders to understand data requirements and translate them into efficient and scalable data engineering solutions.

  • Optimize data models, database schemas, and data processing algorithms to ensure efficient and high-performance data storage and retrieval.

  • Implement and maintain data quality and data governance processes, including data cleansing, validation, and metadata management.

  • Work closely with data scientists, analysts, and business intelligence teams to support their data needs and enable data-driven decision-making.

  • Develop and implement data security and privacy measures to ensure compliance with regulations and industry best practices.

  • Monitor and troubleshoot data pipelines, identifying and resolving performance or data quality issues in a timely manner.

  • Stay up to date with emerging technologies and trends in the data engineering field, evaluating and recommending new tools and frameworks to enhance data processing and analytics capabilities.

  • Collaborate with infrastructure and operations teams to ensure the availability, reliability, and scalability of data systems and infrastructure.
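At their simplest, the ETL and data-quality responsibilities above amount to an extract/transform/load chain with a validation step. The following is a minimal, illustrative sketch only, not Eaton's actual pipeline: it uses Python's standard-library sqlite3 module as a stand-in warehouse, and all table, column, and device names are hypothetical.

```python
import sqlite3

# Hypothetical raw records standing in for an upstream source system.
RAW_ROWS = [
    {"device_id": "d-001", "reading": "12.5"},
    {"device_id": "d-002", "reading": "bad"},   # will fail validation
    {"device_id": "d-003", "reading": "7.25"},
]

def extract():
    """Pull raw records from the (simulated) source system."""
    return RAW_ROWS

def transform(rows):
    """Validate and convert readings; drop rows failing data-quality checks."""
    clean = []
    for row in rows:
        try:
            clean.append((row["device_id"], float(row["reading"])))
        except ValueError:
            continue  # cleansing step: skip unparseable readings
    return clean

def load(rows, conn):
    """Load cleansed rows into the target warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings (device_id TEXT, reading REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 2 of the 3 rows survive validation
```

In practice the same extract/transform/load structure is expressed with Spark jobs, Airflow DAGs, or Azure Data Factory pipelines, as listed in the skills below, rather than hand-rolled functions.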

Qualifications:

  • Bachelor's degree in Computer Science, Software Engineering, or Information Technology

Skills:

  • Apache Spark, Python

  • Azure experience (Databricks, Docker, Function App)

  • Git

  • Working knowledge of Airflow

  • Knowledge of Kubernetes and Docker

  • Experience in Design Thinking or human-centered methods to identify and creatively address customer needs, through a holistic understanding of the customer’s problem area

  • Knowledgeable in leveraging multiple data transit protocols and technologies (MQTT, REST APIs, JDBC, etc.)

  • Knowledge of Hadoop and MapReduce/Spark or related frameworks

  • Experience with cloud development platforms (Azure and AWS) and their associated data storage options

  • Experience with CI/CD (continuous integration/continuous delivery) tools, e.g. Jenkins, Git, Travis CI

  • Virtual build environments (containers, VMs, and microservices) and container orchestration: Docker Swarm, Kubernetes/Red Hat OpenShift

  • Relational and non-relational database systems: SQL, PostgreSQL, NoSQL, MongoDB, Cosmos DB, DocumentDB

  • Data warehousing and ETL: ability to write complex queries that are accessible, secure, and optimized, with output to different consumers and systems

  • ETL on big data technologies: Hive, Impala

  • Programming knowledge: Python and associated IDEs (Eclipse, IntelliJ, PyCharm, etc.)

  • Data pipelining, scripting, reporting

  • Experience with Azure tools: Blob Storage, SQL, Data Lake, Hive, Hadoop, Data Factory, Databricks, Azure Functions

  • Software development lifecycle processes and tools

  • Agile development methodologies and concepts, including hands-on experience with Jira, Bitbucket, and Confluence

  • Ability to specify and write code that is accessible, secure, and performs in an optimized manner with an ability to output to different types of consumers and systems

  • Proven experience working as a Data Engineer, with 6+ years of experience in data engineering, data warehousing, or related fields.

  • Strong proficiency in SQL and experience with relational databases like MySQL, PostgreSQL, or similar.

  • Hands-on experience with big data technologies such as Apache Hadoop, Spark, Airflow or similar frameworks.

  • Expertise in data modeling, data integration, and ETL processes.

  • Proficiency in programming languages like Python, Java, or Scala, with experience in building data pipelines and automation scripts.

  • Familiarity with cloud-based data platforms and services such as AWS, Azure, or Google Cloud Platform.

  • Experience with data visualization tools like Tableau, Power BI, or similar.

  • Knowledge of data security and privacy principles

  • Strong problem-solving and analytical skills, with the ability to troubleshoot and resolve complex data engineering challenges.

  • Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.

  • Experience with agile development methodologies and version control systems is preferred.
