We are seeking an experienced AWS Data Engineer with a strong background in building scalable data solutions and hands-on expertise with utilities-sector datasets.
In Short
Design and build scalable, reliable data pipelines using AWS services.
Use AWS Step Functions to orchestrate workflows across data pipelines.
Implement ETL/ELT processes using PySpark, Python, and Pandas.
Apply experience with complex distributed systems to pipeline and architecture design.
Build serverless solutions using AWS Lambda.
Design data models tailored for utilities use cases.
Continuously monitor data pipelines and tune their performance and reliability.
Implement robust security measures for sensitive utility data.
Requirements
At least 5 years of experience in data engineering.
Deep understanding of distributed systems.
Proficiency with AWS services and tools, including Lambda and Step Functions.
Experience with data integration and transformation using tools such as PySpark, Python, and Pandas.