#549786172448
You'll pair your design and development expertise with a never-ending quest to create innovative technology through solid engineering practices, working with a highly inspired and inquisitive team of technologists who develop and deliver top-quality technology products to our clients and stakeholders.

Responsibilities:
- Design, develop, and support data ingestion pipelines for a Knowledge Graph using Python and Snowflake.
- Collaborate with business and other technology teams to translate business requirements into innovative solutions, implementing performant, scalable, resilient distributed applications.

Qualifications:
- Bachelor's/Master's degree in Computer Science, Computer Engineering, Data Analytics, or a related field.
- 6+ years of strong development experience building robust data ingestion pipelines using tools like Python, PySpark, and Snowflake, as well as Autosys/Airflow.
- Strong experience writing multi-processing scripts in Python and distributed processing scripts in PySpark.
- Strong problem-solving skills, business acumen, and demonstrated excellent oral and written communication skills with both technical and non-technical audiences.
- Experience working closely with application teams and managing all database-related activities, such as database modeling, design, development, maintenance, and support.
- 2+ years of strong experience in Python for automating data cleansing, data reformatting, and data transformations.
- Strong skills in relational databases, including database design, writing complex queries and stored procedures, and performance tuning.
- Strong data modeling skills, covering Enterprise Data Models (OLTP) and dimensional modeling; well versed in the principles, techniques, and best practices of data modeling.
- Extensive experience designing and developing complex mappings, applying transformations such as lookup, source qualifier, update strategy, router, sequence generator, aggregator, rank, stored procedure, filter, joiner, and sorter.
- Experience integrating data sources like DB2, SQL Server, and flat files into the staging area.
- Expertise designing parallel jobs using stages such as Join, Merge, Lookup, Remove Duplicates, Filter, Dataset, Lookup File Set, Modify, Aggregator, and XML parsing.
- Extensive experience working on UNIX/Linux platforms and with Python scripting.
- Experience with Big Data processing and technologies like Hadoop, Spark, and Kafka.
- Experience with graph databases and Redis data stores.
- Exposure to LLM technologies.
- Experience and expertise with data wrangling and visualization tools like Dataiku, Tableau, and Power BI.
- Financial services experience is a strong plus.

Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren't just beliefs; they guide the decisions we make every day to do what's best for our clients, communities, and more than 80,000 employees in 1,200 offices across 42 countries. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There's also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential.
Our skilled and creative workforce comprises individuals drawn from a broad cross-section of the global communities in which we operate, reflecting a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.