#260644
le, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company
Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset
Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders
Increase awareness about available data and democratize access to it across the company
Job Description
As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, driving a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will also help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users in a hybrid environment spanning in-house, on-premises data sources as well as cloud and remote systems.
Responsibilities
Actively contribute to code development across projects and services.
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
Adopt best practices around systems integration, security, performance, and data management as defined within the organization.
Empower the business by creating value through increased adoption of the data, data science, and business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutions and POC discussions.
Develop and optimize procedures to "productionalize" data engineering pipelines.
Define and manage SLAs for data products and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learning and knowledge transfer.
Create and audit reusable packages or libraries.
Qualifications
5+ years of overall technology experience, including 2+ years of hands-on software development and data engineering.
2+ years of experience in SQL optimization and performance tuning, and development experience in programming languages such as Python, PySpark, and Scala.
1+ years of cloud data engineering experience in Azure; Azure certification is a plus.
Experience with version control systems like GitHub and with deployment and CI tools.
Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience with data profiling and data quality tools is a plus.
Experience working with large data sets and scaling applications with tools like Kubernetes is a plus.
Experience with Statistical/ML techniques is a plus.
Experience building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
BA/BS in Computer Science, Math, Physics, or other technical fields.
The candidate must have thorough knowledge of Spark, SQL, Python, Databricks, and Azure.
Skills, Abilities, Knowledge