#R2520638_Charlotte
We use the latest data technologies, software engineering practices, MLOps, and Agile delivery frameworks, and are passionate about building well-architected, innovative solutions that drive business value. This cutting-edge, forward-focused organization offers opportunities for collaboration, self-organization within the team, and visibility as we focus on continuous business data delivery to create efficiency and effectiveness at scale.
Responsibilities:
Lead development of high-quality data assets and scalable software modules for business intelligence, diagnostic analytics, and business-facing machine learning solutions.
Provide end-to-end data support and solution design for a full-stack analytics team of data scientists, performance analysts, and business intelligence consultants focused on underwriting analytics.
Formulate logical statements of business problems and devise, test, and implement efficient, cost-effective application solutions.
Identify and validate internal and external data sources for availability and quality. Work with SMEs to describe and understand data lineage and suitability for a use case.
Create data assets and build data pipelines that align to modern software development principles for further analytical consumption. Perform data analysis to ensure quality of data assets.
Perform preliminary exploratory analysis to evaluate nulls, duplicates and other issues with data sources.
Assist in developing code that enables real-time modeling solutions to be ingested into front-end systems.
Produce code artifacts and documentation using GitHub for reproducible results and hand-off to other data science teams.
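The data-quality responsibilities above (checking sources for nulls, duplicates, and other issues before analytical consumption) might look like the following minimal sketch in Python, the posting's required language. The function name, field names, and insurance-flavored sample records are illustrative assumptions, not part of the role description:

```python
from collections import Counter

def profile_rows(rows, key_fields):
    """Summarize per-column null counts and duplicate keys in a batch of records."""
    null_counts = Counter()
    for row in rows:
        for col, val in row.items():
            if val is None or val == "":
                null_counts[col] += 1
    # A key that appears more than once indicates a duplicate record.
    keys = [tuple(row.get(f) for f in key_fields) for row in rows]
    dup_keys = [k for k, n in Counter(keys).items() if n > 1]
    return {"rows": len(rows), "nulls": dict(null_counts), "duplicate_keys": dup_keys}

# Hypothetical example: two records share a policy_id, and one premium is missing.
rows = [
    {"policy_id": "P1", "premium": 100},
    {"policy_id": "P1", "premium": 200},
    {"policy_id": "P2", "premium": None},
]
report = profile_rows(rows, key_fields=["policy_id"])
print(report)  # {'rows': 3, 'nulls': {'premium': 1}, 'duplicate_keys': [('P1',)]}
```

In practice a team like this would likely run equivalent checks with pandas or SQL aggregates inside a pipeline; the point is that data-quality profiling is codified and reproducible, not ad hoc.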
Qualifications:
3+ years of relevant experience recommended
Bachelor's degree in Computer Science, Engineering, IT, Management Information Systems, or a related discipline
GenAI Experience (Prompt engineering, RAG, LLM) a plus
High proficiency in SQL required
Intermediate experience in Python required
Experience ingesting data from a variety of sources, including relational databases, Hadoop/Spark, cloud data stores, XML, and JSON
Experience with cloud data warehouses (e.g., Snowflake, Redshift), automation, and data pipelines
Experience in Unix and Git
Exposure to automation and orchestration tools (e.g., AWS Step Functions, Airflow; Autosys a plus)
Exposure to AWS services (e.g., S3, EMR, SageMaker, AWS Batch) a plus
Exposure to building batch, real-time, and training machine learning data pipelines on AWS a plus
Familiarity with MLOps and DevOps concepts.
Experience analyzing data to influence business outcomes required
Able to communicate effectively with both technical and non-technical teams
Able to translate complex technical topics into business solutions and strategies
Compensation
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$100,960 - $151,440
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age