#R2419120
ring practices in the Personal Lines space. This Data Engineer will work directly with the Data Science team to transform the current class plans, which in turn will improve customer experience and retention.
Candidates must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
This role will have a Hybrid work arrangement, with the expectation of working in an office location 3 days a week (Tuesday through Thursday).
Responsibilities:
Collaborates closely with stakeholders, the product owner, and the development team to apply advanced analytics techniques to analyze data, validate assumptions, and deliver data assets.
Extracts and manipulates data from various Oracle, Snowflake, and Hadoop databases, leveraging Python to turn it into information fit for further analysis.
Implements the transformation roadmap using a new data reference architecture leveraging AWS, Snowflake, and Big Data technologies (HDFS, Hive, Spark, PySpark).
Performs analytical data discovery to reveal value-added insights through a better understanding of the data, working on ad hoc and recurring analyses.
Designs and develops high-quality, scalable data transformation modules.
Analyzes existing architectures and programming logic to provide more efficient, automated solutions.
Prototypes high-impact innovations that cater to changing business needs by leveraging new technologies (AWS cloud and Big Data).
Collaborates within the agile team to support data needs and team members.
Natural curiosity with a strong desire to learn, maintain, and apply knowledge of emerging data technologies, tools, methodologies, and general best practices.
Strong analytical, critical-thinking, and problem-solving skills.
Excellent written and verbal communication and presentation skills, with the ability to effectively tell a story through data.
Qualifications:
1+ years of experience with ETL / Data Integration / Big Data / Cloud (AWS) technologies.
Bachelor's degree in Finance, Statistics, Computer Science, or other related field
AWS / Big Data Technologies certification preferred.
Experience required in at least three of the following platforms: Oracle, PL/SQL, Hadoop, Snowflake, or AWS.
Experience with one or more SQL-on-Hadoop technologies (Spark SQL, Presto, Hive).
Experience required in Python or PySpark.
Knowledge of data warehousing concepts.
Experience required in basic GitHub operations.
Self-motivated and results-oriented, with a strong sense of ownership/accountability and the ability to manage time and schedules effectively.
Compensation:
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$72,400 - $108,600
Equal Opportunity Employer/Females/Minorities/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age