#732617BR
and architectures using Apache Spark and databases.
High level job responsibilities:
Develop elegant, flexible, maintainable, and scalable solutions to complex problems by delivering data and analytics.
Demonstrate experience leading development efforts and coordinating work for junior developers.
Define and consolidate development and modeling practices within a team.
Leverage strong collaborative skills to define requirements, develop plans, and deliver iteratively to support our organization's mission.
Leverage strong analytical skills to understand business processes and propose effective data solutions.
Communicate and manage relationships with business and development teams - provide guidance, mentorship, and direction, as required.
Translate business needs into technical requirements.
Learn new tools, technologies, and processes for continuous improvement.
As part of a self-directed team, take ownership of activities and deliver them on time and with quality.
Required Technical and Professional Expertise
Above all, we value curiosity, teamwork, and a desire to learn. We are confident that if you possess the right attitude, work ethic, and skill set, you can succeed in the role, even if you do not meet every one of the requirements below.
5+ years of experience in Data Engineering with Big Data.
Comfortable multi-tasking and working as part of a global team, as well as providing technical leadership and taking ownership.
Adaptive to ambiguity and willing to change in a fast-paced environment.
Advanced proficiency in Python, Scala, and PySpark.
Advanced experience with SQL for complex queries and data manipulation.
Expertise in developing and maintaining data pipelines using Apache Spark and Scala.
Experience making continuous documentation improvements and maintaining clear and concise technical documentation.
Strong skills in data modeling, including designing complex, dimensional data models.
Strong understanding of Cloud platforms, particularly IBM Cloud and Cloud Object Storage.
Strong understanding and application of clean code principles and best development practices.
Proficiency in data validation, testing, and ensuring data quality.
Proficiency in creating and managing workflows with Apache Airflow and Argo Workflows.
Experience in test-driven development (TDD) and continuous improvement of code quality.
Familiarity with CI/CD tools such as Tekton and Jenkins, plus experience setting up and maintaining CI/CD pipelines.
Experience in data engineering or business intelligence roles contributing to a shared codebase.
Experience with data structures, algorithms, software design and writing software in Python, Scala, Java, or similar.
Experience with relational databases, including writing and optimizing SQL queries and designing schemas.
Experience using continuous integration and deployment systems (e.g., Cloud Build, GitLab, Jenkins).
Experience with OpenShift / Kubernetes.
Preferred Technical and Professional Expertise