#R-214805
You will build and maintain data infrastructure to support advanced analytics and predictive modeling. You will collaborate with cross-functional teams to design and implement scalable data pipelines and manage our AWS-based data lake and warehouse environments.
Your responsibilities will include:
Designing, building, and optimizing data pipelines, data lakes, and data warehouses using AWS and Databricks.
Managing and maintaining AWS and Databricks environments for optimal performance and uptime.
Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
Collaborating with cross-functional teams to translate business needs into technical solutions.
Exploring and implementing new tools and technologies to enhance ETL platform performance.
WIN
WHAT WE EXPECT OF YOU
Our ideal candidate:
Degree-educated in computer science or a related subject.
Proficient in SQL for complex data extraction and performance optimization.
Skilled in Python, PySpark, and Airflow for building scalable ETL processes.
Experience with SQL/NoSQL and vector databases for large language models.
Familiarity with data modeling and performance tuning for OLAP and OLTP systems.
Knowledge of Apache Spark, Apache Airflow, and DevOps practices.
Experience with cloud platforms such as AWS, GCP, or Azure.
THRIVE
WHAT YOU CAN EXPECT OF US
APPLY NOW FOR A CAREER THAT DEFIES IMAGINATION
In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us.
careers.amgen.com
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and considers all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.
We are committed to providing reasonable accommodations to individuals with disabilities throughout the application and interview process, and during employment. Please contact us to request accommodation.