#323481
seamless ingestion of data from various sources, transformation of that data into information, and movement into data stores such as data lakes, data warehouses, and others.
DATA SYSTEMS: Reviews existing data systems and architectures to identify areas for improvement and optimization.
STAKEHOLDER MANAGEMENT: Collaborates with cross-functional data and advanced analytics teams to gather requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
DATA FRAMEWORKS: Builds complex prototypes to test new concepts and implements data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
AUTOMATED DEPLOYMENT PIPELINES: Develops automated deployment pipelines that improve the efficiency of code deployments with fit-for-purpose governance.
DATA MODELING: Performs complex data modeling in accordance with the data store technology to ensure sustainable performance and accessibility.
Qualifications
Minimum of 6 years of relevant work experience.
TECHNICAL SKILLS REQUIRED:
Data Platform Design - Designing scalable ELT data platforms on Snowflake supporting batch and real-time workloads
Advanced Python Engineering - Building production-grade Python pipelines and reusable data frameworks, with working knowledge of .NET services and integrations
Snowflake & Relational Database Expertise - Deep knowledge of Snowflake architecture, advanced SQL, and experience working with Oracle, SQL Server, and PostgreSQL
Batch & Real-Time Processing - Designing and operating reliable batch and streaming/real-time data pipelines using Apache Kafka and Apache Pulsar
Performance & Cost Optimization - Optimizing Snowflake queries, warehouse usage, and Python workloads for efficiency and scale
Security & Governance - Implementing access controls, data protection, and secure data-sharing patterns across data platforms
Reliability & Data Quality - Ensuring pipeline resilience, monitoring, and data quality across critical datasets
GenAI Enablement - Enabling GenAI use cases through high-quality data pipelines, including preparation of structured and unstructured data, embeddings, and integration with OpenAI (e.g., RAG-style workflows)
PREFERRED COMPETENCIES