#R2520916_Charlotte
…(Retrieval-Augmented Generation) architectures, and integrate with our data infrastructure. Familiarity with Snowflake integration and insurance industry use cases is a plus.
This role has a hybrid work schedule, with the expectation of working in one of our office locations (Hartford, CT; Chicago, IL; Columbus, OH; or Charlotte, NC) three days a week (Tuesday through Thursday).
Primary Job Responsibilities
Design, develop, and implement complex data pipelines for AI/ML, including those supporting RAG architectures, using technologies such as Python, Snowflake, AWS, GCP, and Vertex AI.
Build and maintain data pipelines that ingest, transform, and load data from various sources (structured, unstructured, and semi-structured) into data warehouses, data lakes, vector databases (e.g., Pinecone, Weaviate, Faiss), and graph databases (e.g., Neo4j, Amazon Neptune).
Develop and implement data quality checks, validation processes, and monitoring solutions to ensure data accuracy, consistency, and reliability.
Implement end-to-end generative AI data pipelines, from data ingestion to pipeline deployment and monitoring.
Develop complex AI systems, adhering to best practices in software engineering and AI development.
Work with cross-functional teams to integrate AI solutions into existing products and services.
Keep up-to-date with AI advancements and apply new technologies and methodologies to our systems.
Assist in mentoring junior AI/data engineers in AI development best practices.
Implement and optimize RAG architectures and pipelines.
Develop solutions for handling unstructured data in AI pipelines.
Implement agentic workflows for autonomous AI systems.
Develop graph database solutions for complex data relationships in AI systems.
Integrate AI pipelines with Snowflake data warehouse for efficient data processing and storage.
Apply GenAI solutions to insurance-specific use cases and challenges.
Required Qualifications:
Candidates must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
2+ years of experience in data engineering.
Solid grounding in data engineering, with at least some hands-on experience with generative AI technologies.
Ability to demonstrate implementation of production-ready, enterprise-grade GenAI pipelines.
Experience with prompt engineering techniques for large language models.
Experience implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models.
Knowledge of vector databases and graph databases, including implementation and optimization.
Experience processing and leveraging unstructured data for GenAI applications.
Proficiency in implementing agentic workflows for AI systems.
Compensation
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$100,960 - $151,440
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age