#90859-en-us
Optimize data engineering pipelines
Review architectural designs to ensure consistency and alignment with the defined target architecture and adherence to established architecture standards
Support data and cloud transformation initiatives
Contribute to our cloud strategy based on prior experience
Stay current with the latest technologies in a rapidly evolving marketplace
Work independently with stakeholders across the organization to deliver both point and strategic solutions
Provide sufficient overlap with the onshore team and business stakeholders to gather requirements and own feature delivery
Act as Product Owner, taking end-to-end ownership of data pipeline delivery
Mentor junior team members
Skills, Experience and Requirements
Engineering degree with 5+ years of experience as a data engineer, including at least 2 years in the wireless and/or telecom network space
Experience with Python, Scala, and SQL
Experience in both functional programming and Spark programming, processing terabytes of data; specifically, experience writing data engineering jobs for large-scale data integration in AWS
Experience in logical and physical table design in big data environments to suit processing frameworks
Experience writing Spark Streaming jobs (producers/consumers) using Apache Kafka or Amazon Kinesis is required
Knowledge of a variety of data platforms such as Redshift, S3, MySQL/PostgreSQL, and DynamoDB
Experience with Airflow and AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, and CloudWatch
Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting
Gather and understand data requirements, work within the team to achieve high-quality data ingestion, and build systems that can process and transform the data
Benefits