esses into Hadoop/AWS Platform
- Collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Identify, analyze, and interpret trends or patterns in complex data sets
- Innovate new ways of managing, transforming, and validating data
- Establish and enforce guidelines to ensure consistency, quality, and completeness of data assets
- Apply quality assurance best practices to all work products
**Required qualifications, capabilities, and skills:**
- Formal training or certification on software engineering concepts and 2+ years of applied experience
- Experience with Big Data technologies (Spark, Glue, Hive, Redshift, Kafka, etc.)
- Experience programming in Python/Java
- Experience performing data analysis (not data science) on AWS platforms
- Experience with data management processes on AWS is a strong plus
- Experience implementing complex ETL transformations on big data platforms, including NoSQL databases (MongoDB, DynamoDB, Cassandra)
- Familiarity with relational database environments (Oracle, Teradata, etc.), including databases, tables/views, stored procedures, agent jobs, etc.
- Strong development discipline and adherence to best practices and standards
- Demonstrated independent problem-solving skills and the ability to develop solutions to complex analytical/data-driven problems
- Experience working in development teams using agile techniques