We are looking for a savvy Data Engineer to join our growing Data Science team. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that the data delivery architecture remains consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Work with stakeholders including the Executive, Product and Data teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
We are looking for a candidate with 2+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software and tools:
- Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
- Experience with AWS cloud services: EC2, SQS.
- Experience with containerization and container orchestration: Docker, Kubernetes.
- Experience with object-oriented/functional scripting languages: Python.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.