#318988
n of data from various data sources, transform the data into information, and move it to data stores such as data lakes and data warehouses.
DATA SYSTEMS: Reviews existing data systems and architectures to identify areas for improvement and optimization.
STAKEHOLDER MANAGEMENT: Collaborates with cross-functional data and advanced analytics teams to gather requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
DATA FRAMEWORKS: Builds complex prototypes to test new concepts and implements data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
AUTOMATED DEPLOYMENT PIPELINES: Develops automated deployment pipelines that improve the efficiency of code deployments with fit-for-purpose governance.
DATA MODELING: Performs complex data modeling in accordance with the datastore technology to ensure sustainable performance and accessibility.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related technical field with a minimum of 8 years of work experience.
CLOUD ENVIRONMENTS: Experience developing data systems on major cloud platforms (AWS, GCP, Azure).
DATA ARCHITECTURE: Hands-on experience building modern data architectures, including data lakes, data lakehouses, and data hubs, along with related capabilities such as ingestion, governance, modeling, and observability.
DATA INGESTION: Demonstrated proficiency in data collection, ingestion tools (Kafka, AWS Glue), and storage formats (Iceberg, Parquet).
DATA STREAMING: Experience developing data pipelines with streaming architectures and tools (Kafka, Flink).
DATA MODELING: Expertise in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow). Deep experience with modeling concepts such as slowly changing dimensions (SCD) and schema evolution.
DATA TRANSFORMATION: Strong background using Spark for data transformation, including streaming workloads, performance tuning, and debugging with the Spark UI.
PROGRAMMING: Advanced programming skills in Python, Java, Scala, or similar languages. Expert-level proficiency in SQL for data manipulation and optimization.
DEVOPS: Demonstrated experience in DevOps practices, including code management, CI/CD, and deployment strategies.
DATA GOVERNANCE: Strong background in data governance principles, including data quality, privacy, and security considerations for data product development and consumption.
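To illustrate the SCD modeling concept named in the qualifications above, here is a minimal sketch of SCD Type 2 logic (expire the current dimension row and append a new versioned row on change). The `DimRow` fields and `apply_scd2` helper are hypothetical names for illustration only, not part of any specific framework used in this role.

```python
from dataclasses import dataclass, replace
from datetime import date
from typing import Optional, List

@dataclass
class DimRow:
    key: str                  # business key
    attr: str                 # tracked attribute
    valid_from: date
    valid_to: Optional[date]  # None means the row is still open
    is_current: bool

def apply_scd2(rows: List[DimRow], key: str, attr: str, as_of: date) -> List[DimRow]:
    """Apply an incoming record as an SCD Type 2 change (hypothetical sketch).

    If the current row for `key` differs, expire it and append a new
    current row; if no row exists for `key`, insert a first version.
    """
    out: List[DimRow] = []
    seen = False
    for r in rows:
        if r.key == key and r.is_current:
            seen = True
            if r.attr == attr:
                out.append(r)  # unchanged: keep the current row as-is
            else:
                # expire the old version, then open a new current version
                out.append(replace(r, valid_to=as_of, is_current=False))
                out.append(DimRow(key, attr, as_of, None, True))
        else:
            out.append(r)
    if not seen:
        out.append(DimRow(key, attr, as_of, None, True))
    return out
```

In production this pattern is typically expressed declaratively (e.g. dbt snapshots or a `MERGE` statement) rather than row-by-row in application code.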
#Standard