About the job
We are seeking a highly skilled and innovative Data Engineer to join our team. As a technical leader, you will:
Be the go-to expert in your team, guiding projects with your technical acumen.
Take on complex challenges that others find daunting.
Deliver complex features at a rapid pace.
Produce exceptionally clean and maintainable code.
Enhance the quality of our entire codebase.
If you're an exceptional developer with a proven track record, we want to hear from you! This role calls for a unique blend of skills and experience.
Responsibilities:
Develop, optimize, and scale data pipelines and infrastructure using technologies such as Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.
Design, operationalize, and oversee ingestion and transformation workflows, including DAGs, alerting, retries, SLAs, lineage, and cost controls.
Partner with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows, contributing to a feature store.
Integrate pipeline health and metrics into engineering dashboards to improve observability.
Model data and execute efficient, scalable transformations using Snowflake and PostgreSQL.
Create reusable frameworks and connectors to standardize internal data publishing and consumption.