About the job
Key Responsibilities:
- Design and build scalable, reliable data pipelines with high availability and performance.
- Create complex datasets that conform to both functional and non-functional business specifications.
- Identify, design, and implement internal process improvements, including automating manual tasks, optimizing data delivery, and re-architecting infrastructure for greater scalability.
- Adopt best practices for data storage, processing, and retrieval.
- Collaborate with stakeholders, including executives, data scientists, and product managers, to understand data requirements and deliver effective data solutions.
- Enhance and fine-tune data workflows to achieve peak performance and efficiency.
- Ensure data security and compliance with pertinent data privacy regulations.
- Keep abreast of emerging technologies and industry advancements in data engineering and analytics.
- Mentor junior data engineers, providing guidance and support.
Qualifications:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- A minimum of 7 years of experience in data engineering or a comparable role.
- Proficiency in programming languages such as Python, Scala, or Java.
- Experience in designing and implementing data pipelines with tools like Apache Kafka, Apache Spark, or AWS Glue.
- Strong SQL skills and familiarity with database technologies like PostgreSQL, MySQL, or MongoDB.
- Understanding of cloud platforms, particularly Azure.
- Experience with data modeling, ETL processes, and data warehousing principles.
- Exceptional problem-solving and troubleshooting capabilities.
- Excellent communication and teamwork skills.
- Detail-oriented with a proactive approach to work.
What We Offer:
- Paid Time Off
- Work From Home
- Health Insurance
- OPD (Outpatient) Coverage
- Training and Development
- Life Insurance
