About Trust Wallet

Trust Wallet is the premier non-custodial cryptocurrency wallet, trusted by over 200 million users around the globe for the secure management and growth of their digital assets. Our vision is to empower individuals to fully own their assets, actively engage in the future economy, and access transformative opportunities that enhance their lives. We are committed to being a trusted companion in the journey through Web3, the on-chain economy, and the rapidly advancing AI landscape. With support for over 10 million assets across more than 100 blockchains, Trust Wallet delivers a seamless, multi-chain experience, backed by cutting-edge self-custody technology, a dynamic community, and a thriving network of partners.

Your Role

As a Senior Data Engineer, you will be instrumental in architecting and enhancing the data infrastructure that drives our AWS-based data lake. Your responsibilities will include designing, deploying, and maintaining robust data pipelines, ensuring a seamless, scalable, and reliable flow of information that underpins our analytics and strategic decision-making. Drawing on your expertise in AWS, Databricks, and Airbyte (or similar ELT/ETL tools), along with custom solutions in Python or Go, you will collaborate with cross-functional teams to transform raw data into invaluable assets that propel our business forward.

Key Responsibilities

- Architect Data Infrastructure: Design and maintain a robust, scalable, and secure data infrastructure on AWS, leveraging Databricks.
- Develop Data Pipelines: Create and maintain efficient data pipelines using Airbyte and custom-built solutions in Go to automate data ingestion and ETL processes.
- Manage Data Lake: Oversee the establishment and upkeep of the data lake, ensuring optimal storage, high data quality, effective partitioning, and robust monitoring.
- Integrate and Customize: Integrate tools such as Airbyte with diverse data sources and tailor data flows to meet specific business requirements, including the development of custom connectors in Go.
- Optimize Performance: Enhance data pipelines and data lake storage for peak performance and scalability, ensuring low latency and high availability.
- Implement Data Governance: Establish best practices for data governance, security, and compliance within AWS and Databricks, emphasizing access control and encryption.
Jan 21, 2026