About the job
100ms is looking for a Platform Engineer specializing in Core Infrastructure to join the team in Bengaluru. This position centers on designing and building infrastructure that supports both the reliability and performance of the 100ms platform. Collaboration with other engineering teams is a key part of the role, with a focus on improving systems, increasing scalability, and maintaining strong uptime.
What you will do
Design and implement infrastructure solutions for the 100ms platform
Collaborate with engineering groups to enhance systems and processes
Optimize infrastructure for scalability and high availability
Troubleshoot and resolve complex technical challenges
The team
Work alongside engineers dedicated to building reliable systems and driving improvements across the platform. The team values hands-on problem-solving and a collaborative approach to infrastructure work.
Join zaimler as a Data Infrastructure Engineer

At zaimler, we understand that AI agents cannot effectively reason over fragmented data. In today's enterprise landscape, data is often scattered across multiple systems without a coherent context, leading to failures in AI applications. As we transition from copilots to fully autonomous agents, we are pioneering a new layer of infrastructure to address this challenge.

Our platform at zaimler serves as the backbone for the agentic era, facilitating the automatic discovery of domain knowledge, mapping of relationships, and providing AI agents with the semantic context they need to operate accurately and efficiently at scale. Imagine knowledge graphs that enable real-time inference, designed for systems that require reasoning, not merely data retrieval.

Founded by industry veterans Biswajit Das and Sofus Macskassy, who have extensive experience in data infrastructure and knowledge graph development, zaimler is a small but dynamic team at the seed stage, collaborating with major enterprises in sectors like insurance, travel, and technology. If you aspire to contribute to the foundational infrastructure that will support the next decade of AI advancements, we would love to connect with you.

Role Overview

We are seeking a skilled Data Infrastructure Engineer to play a pivotal role in developing our foundational distributed data layer that drives our semantic platform. You will be responsible for designing, constructing, and scaling systems that facilitate high-throughput data ingestion, transformation, and real-time processing, ultimately shaping the core of our knowledge layer. As an early member of our Bengaluru office, your expertise will significantly influence the technical direction, culture, and standards of our growing team.
About Adyen

Adyen is a cutting-edge financial technology platform that provides an all-in-one solution for payments, data, and financial products for esteemed clients like Meta, Uber, H&M, and Microsoft. We are built for ambition, engineering everything we do to foster success.

We pride ourselves on creating a supportive environment where our team members can thrive and take ownership of their careers. Our motivated professionals tackle unique technical challenges at scale, collaborating as a team to deliver innovative and ethical solutions that empower businesses to achieve their goals faster.

Position Overview: Senior Platform Engineer - Infrastructure Services

We are seeking a Senior Platform Engineer with over 6 years of experience to join our ACS Infrastructure Services Team within the Platform Engineering organization in Bengaluru. This team is responsible for operating and maintaining Adyen's core container orchestration and traffic management infrastructure, which is essential for the functioning of our global payment platform. In this role, you will design, build, and manage critical infrastructure systems, enhance automation and reliability, and support the scalability of Adyen's platform as we expand globally.
Role overview

Quince seeks a Staff Data Engineer in Bengaluru, Karnataka, India. This position centers on building and maintaining the core infrastructure behind the company’s data platform. The work directly supports major data initiatives and helps drive informed decisions throughout the business.

What you will do
Design and develop infrastructure that powers the data platform
Maintain and improve systems supporting data needs across the organization
Collaborate with other teams to strengthen the broader data ecosystem

Requirements
Solid background in data engineering
Experience architecting and developing data infrastructure
Comfort working collaboratively to address challenges
Motivation to use data for meaningful solutions
Appreciation for innovation and ongoing improvement
Join AION as a Senior Software Engineer

AION is at the forefront of revolutionizing high-performance computing (HPC) through our innovative AI cloud platform. Our mission is to democratize access to compute resources, enabling organizations to seamlessly navigate the entire AI lifecycle, from data management to model deployment. With a focus on bare-metal performance and a forward-deployed engineering approach, we are committed to transforming the way businesses harness the power of AI.

The demand for robust compute solutions is surging globally, and AION is poised to be the gateway for dynamic compute workloads. We are building integration bridges with diverse data centers worldwide and re-engineering the compute stack using cutting-edge serverless technology. At AION, we prioritize enterprise security and compliance, meticulously rethinking infrastructure from hardware to API interfaces.

Founded by seasoned entrepreneurs with successful track records, AION is well-funded by prominent venture capitalists and has established strategic global partnerships. With our headquarters in the US and a growing presence in India and London, we are assembling our core team to drive our mission forward.
Teamwork makes the stream work. Join Roku and transform the future of TV streaming!

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is at the forefront of revolutionizing how audiences engage with television. Our goal is to power every TV worldwide, connecting viewers to their favorite content while empowering publishers and advertisers with innovative solutions.

From day one, your contributions at Roku will be recognized and valued. We are a dynamic, growing public company where every team member plays a crucial role in delighting millions of viewers around the globe while acquiring invaluable experience across diverse fields.

About Our Big Data Team

Roku operates one of the largest data lakes globally, managing over 70 PB of data and executing more than 10 million queries each month. Our Big Data team is responsible for developing and maintaining the platform that makes this possible. We offer tools to acquire, generate, process, monitor, validate, and access data for both streaming and batch processing. Our technologies include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and more. The team actively contributes to the Open Source community and aims to expand its involvement.

Your Role

We are modernizing our Big Data Platform and need your expertise to redefine our architecture to enhance user experience, reduce costs, and boost efficiency. If you are passionate about Big Data technologies and eager to explore Open Source, this position is tailored for you!

Key Responsibilities
Optimize and fine-tune existing Big Data systems and pipelines, while also developing new ones to ensure they operate efficiently and cost-effectively.
(P-1490) At Databricks, we process vast amounts of data, managing petabytes and billions of transaction events each day. Our infrastructure is critical; every cluster launch, query executed, and dollar billed must function flawlessly. With stringent accuracy requirements of 99.999% for billing transactions and the ability to ingest terabytes of data per second across over 100 regions, the stakes are high. A mere five-minute outage can lead to significant revenue loss and erode customer trust. Therefore, our infrastructure is not just important: it is essential for our survival.

As we scale, the next phase of our growth demands that we establish disaster recovery systems that ensure reliability, not just hope for it. We need testing frameworks that can identify production-scale issues before they affect our users, correctness guarantees that eliminate billing errors, and automation that scales operations efficiently alongside growth.

In this pivotal leadership role, you will spearhead the development of the data infrastructure organization that underpins Databricks' ongoing expansion. You will lay the groundwork for teams in Bengaluru responsible for the foundational systems that assure billing accuracy, operational resilience, and zero-downtime recovery across our monetization stack. This encompasses multi-region data ingestion, developer platforms, and deployment automation that streamline operations at petabyte scale. Your mission transcends mere maintenance; it involves architecting the infrastructure that allows Databricks to grow while alleviating operational burdens. You will define the standards for world-class infrastructure that will serve data platforms for the next decade. In your role as a founding technical leader in our rapidly growing engineering hub, you will collaborate closely with global infrastructure leaders.
Beyond building exceptional teams, you will influence architectural decisions that resonate throughout the organization and advocate for an infrastructure-as-product mindset that transforms infrastructure into a global force multiplier. You will thrive in an engineering culture rooted in Apache Spark and open source, where technical expertise is highly valued and infrastructure engineers are regarded as skilled artisans. The ideal candidate has previously built infrastructure organizations in environments where achieving five nines was not merely a goal but a reality, where petabyte-scale operations were a daily expectation, and where the technical strength of the infrastructure team determined business scalability. You possess the technical expertise to engage in discussions about data architecture, the strategic insight to shape multi-year platform roadmaps, and the leadership ability to create teams that attract top-tier engineers. Most importantly, you believe that effective data infrastructure not only supports the business but also defines its potential.
P-1346 At Databricks, we are dedicated to empowering data teams to tackle some of the world's most challenging issues, from transforming transportation to spurring medical advancements. We achieve this by developing and managing an unparalleled data and AI infrastructure platform, enabling our clients to leverage profound data insights for business enhancement. Founded by engineers with a strong customer focus, we eagerly embrace every chance to address technical challenges, whether it's designing next-gen UI/UX for data interaction or scaling our services across millions of virtual machines.

Databricks Mosaic AI presents a distinctive data-driven approach to constructing enterprise-grade Machine Learning and Generative AI solutions, allowing organizations to securely and cost-effectively own and manage ML and Generative AI models, enriched with their enterprise data. We're expanding rapidly in Bengaluru, India, with plans to establish 14 new teams from the ground up!

As a Staff Software Engineer on our Infrastructure team at Databricks India, you will work on the Backend (Infrastructure) area.

Your Impact: Our Infrastructure Backend teams encompass diverse areas within our core service platforms. You may face challenges such as:
Addressing issues that range from product to infrastructure, including distributed systems, large-scale service architecture and monitoring, workflow orchestration, and enhancing developer experience.
Delivering reliable, high-performance services and client libraries for managing vast amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.
Building dependable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Apache Spark™, Databricks) to support the pricing infrastructure that processes millions of cluster-hours daily, while developing product features that allow customers to easily monitor and manage platform usage.
Collaboration is key to seamless streaming. Join Roku in revolutionizing television viewing.

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to enhance every television experience globally. By pioneering streaming technologies, we aim to connect audiences with their favorite content, empower content creators to grow their reach, and offer advertisers innovative ways to engage.

From day one, your contributions at Roku will be recognized and valued. We are a rapidly expanding public company where every team member plays a vital role. Get ready to engage millions of TV streamers globally while gaining invaluable experience across diverse areas.

About Our Team

The Search & Recommendations (S&R) Platform Engineering team is at the heart of our mission to provide exceptional streaming experiences for millions worldwide. We design and maintain the core infrastructure that enables search, personalization, and content discovery across all Roku platforms.

Our diverse and collaborative team emphasizes ownership, transparency, and continuous improvement. We partner with various infrastructure teams to develop high-performance distributed systems and observability tools that facilitate real-time search, ranking, and recommendations.

Our projects involve designing and optimizing online inference infrastructure, feature stores, and data pipelines, all seamlessly integrated within the broader platform ecosystem (Kubernetes, Istio, Envoy). We thrive on tackling complex technical challenges that impact user experience.
Join Nexthink, a leader in digital employee experience management software, as a Platform Software Engineer in our Engineering Productivity team within the Technical Platform group. We empower organizations to enhance employee experiences through our cutting-edge products that provide unmatched visibility across digital workspaces. Our mission is to equip IT teams with the tools they need to proactively identify and resolve workplace issues before they impact employees.

In this role, you will collaborate with a diverse team of talented engineers who take full ownership of their projects from conception to deployment. You will engage closely with Product Engineering, Security, and Architecture teams to understand developer needs, design and implement solutions, and facilitate their adoption. Together, we will pave the way for a premier internal developer platform that integrates modern technologies and best practices for continuous integration and continuous deployment (CI/CD).

As a Platform Software Engineer, your responsibilities will include:
Providing essential tools for daily product development, integrating with cloud platforms, and assisting developers in managing their build systems and CI/CD pipelines.
Setting up and maintaining development tools such as Jenkins, Artifactory, and GitHub.
Developing internal self-service tools and platforms for Nexthink developers.
Owning technical work for various projects from conception through production, including proposals and execution.
Building strong relationships with Nexthink developers to identify improvement areas and drive platform adoption.
Documenting solutions and conducting workshops to disseminate knowledge across development teams.
Diagnosing and resolving deployment incidents in both development and production environments to maintain service levels.
(P-1384) At Databricks, we manage vast amounts of data, processing petabytes and billions of transaction events daily. Every cluster launch, every query executed, and every dollar billed flows through a robust infrastructure that is crucial for our operations. With requirements for 99.999% billing accuracy and the ability to ingest terabytes per second across over 100 regions, our infrastructure is not just important; it's vital. A five-minute outage can translate into millions lost in revenue and customer trust, making it essential that we build disaster recovery systems that demonstrate reliability, testing frameworks that identify production-scale issues before deployment, and automation that allows us to scale operations efficiently.

In this pivotal leadership role, you will spearhead the development of the data infrastructure organization that will support Databricks' ongoing growth. You will establish foundational teams in Bengaluru responsible for the critical systems that ensure billing accuracy, operational resilience, and zero-downtime recovery across our entire monetization stack. This includes managing multi-region data ingestion, developer platforms, and deployment automation that streamlines processes at petabyte scale. This role is about architecting the future infrastructure that enables Databricks to expand while alleviating operational burdens.

As a foundational technical leader in our rapidly growing engineering hub, you will collaborate strategically with global infrastructure leaders. Besides building exceptional teams, you will influence architectural decisions across the organization and promote an infrastructure-as-product mindset that transforms infrastructure into a global force multiplier. You will thrive within an engineering culture rooted in Apache Spark and open source principles, where technical excellence is celebrated and infrastructure engineers are regarded as artisans.
The ideal candidate has successfully built infrastructure teams in environments where five nines were not merely aspirational, where handling petabyte-scale data was a regular occurrence, and where the technical capabilities of the infrastructure team played a critical role in the organization's scalability. You possess the technical acumen to engage in data architecture discussions, the strategic foresight to outline multi-year platform roadmaps, the ability to cultivate teams that attract top engineering talent, and, above all, the conviction that well-executed data infrastructure does not just support a business; it defines its potential.
About Wisdom AI

At Wisdom AI, we believe that simplicity is the key to clarity. Our mission is to transform the complexities of data interaction for businesses, enabling them to democratize insights and make swift, informed decisions.

Our culture is anchored by core values that drive our operations:
Default to Action: We prioritize speed and value progress.
Customer Obsessed: Our primary focus is delivering exceptional value to our customers.
One Team: We foster open communication and respectfully challenge ideas.

Role Overview

We are seeking a dedicated backend/infrastructure engineer to join our pioneering team. You will be instrumental in developing machine learning and data pipelines that are efficient, secure, and designed for scalability. As a platforms engineer, you will oversee deployments, observability, and other critical elements necessary for scaling our production environment to meet customer needs.

Your Responsibilities
Take ownership of the observability and deployments within the Wisdom tech stack.
Collaborate with fellow software and sales engineers to establish effective processes and tools for managing production changes.
Utilize modern cloud infrastructure to develop cost-efficient and scalable systems.
Engage closely with customers and founders to shape the product and engineering roadmap.
Contribute to the development of our engineering culture and technology stack.

Qualifications
Proven experience in building highly available and scalable cloud infrastructure.
Proficiency in Kubernetes, Terraform, and Python.
Bachelor's or Master's degree in Computer Science or a related field.
Experience in startup environments is a plus.
Teamwork makes the stream work. Roku is transforming the television landscape.

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to power every television globally. We were the pioneers of streaming to the TV, and our goal is to connect consumers with their favorite content, empower publishers to grow their audiences, and provide advertisers with innovative tools to engage viewers effectively.

From day one at Roku, you will be a key contributor in a fast-paced environment where your work impacts millions of TV streamers. Join us to gain invaluable experience across various disciplines while delighting users around the world.

About the Team

Roku leads the industry in TV streaming innovation. Our ongoing success depends on enhancing the Roku Content Platform to deliver an exceptional streaming experience worldwide. As part of our elite Content Platform team, you will collaborate with talented engineers responsible for developing and maintaining our extensive backend systems, data processing services, and storage solutions, providing insights for all content on Roku devices.

About the Role

We are seeking a Senior Software Engineer with extensive experience in backend development, Data Engineering, and Data Analytics. This role focuses on advancing our content platform and data intelligence capabilities, supporting critical systems such as Search and Recommendations across the Roku platform. Ideal candidates will embrace high visibility, possess strong business acumen, and thrive on making impactful decisions while working on core data platform components essential for streaming services at Roku.

What You’ll Be Doing
Collaborate closely with product management, content data platform services, and internal teams to enhance our data platform's architecture.
Develop low-latency, optimized streaming and batch data pipelines to support downstream services.
Build and maintain our Microservices-based Event-Driven architecture.
About Us

Acceldata stands at the forefront of Enterprise Data Observability, having established itself as a leader since its inception in 2018. Based in Silicon Valley, we have pioneered the first Enterprise Data Observability Platform designed to facilitate the development and management of exceptional data products.

Our approach to Enterprise Data Observability integrates cutting-edge technologies such as AI, LLMs, Analytics, and DataOps. Acceldata empowers organizations with vital capabilities that ensure the delivery of reliable and trustworthy data to fuel enterprise data products. As a SaaS solution, Acceldata's platform is trusted by a diverse range of global clients, including industry giants like HPE, HSBC, Visa, Freddie Mac, Manulife, Workday, Oracle, and many more. We are proud to be a Series-C funded company with backing from top-tier investors including Insight Partners, March Capital, Lightspeed, and others.

About the Role

We are on the lookout for a highly skilled Senior Software Development Engineer in Test (SDET) to join our Open Data Platform (ODP) team, focusing on the quality assurance and performance enhancement of large-scale data systems.

In this position, you will collaborate closely with both development and operations teams to design and implement comprehensive testing strategies for the Open Source Data Platform (ODP), which encompasses technologies such as Hadoop, Spark, Hive, and Kafka. Your expertise will be vital in automating tests, fine-tuning performance, and pinpointing bottlenecks within distributed data systems.

Key responsibilities will include drafting test plans, developing automated test scripts, and executing functional, regression, and performance testing. You will play a critical role in identifying and rectifying defects, safeguarding data integrity, and optimizing testing methodologies.
Strong teamwork and collaboration skills are essential, as you will engage with cross-functional teams and spearhead quality improvement initiatives. Your contributions will significantly impact the reliability and quality standards of big data solutions.

https://www.acceldata.io/open-data-platform
Join Our Team at Safe Security

At Safe Security, we're pioneering the future of Cyber Super Intelligence (CSI) with an innovative platform that autonomously forecasts, identifies, and mitigates cyber threats. We champion a culture of transparency and collaboration, ensuring that every team member feels valued and empowered in their role. We prioritize a culture-first mindset, offering unlimited vacation, a supportive work environment built on trust, and an unwavering commitment to continuous professional development. Our ethos is that Culture is Our Strategy. Dive deeper into the unique qualities that make SAFE exceptional by checking out our Culture Memo.
At WEKA, we are redefining the enterprise data stack for the reasoning age. Our innovative solution, NeuralMesh by WEKA, stands at the forefront of agentic AI data infrastructure, providing a cloud- and AI-native software solution deployable anywhere. We convert traditional data silos into dynamic data pipelines that significantly enhance GPU utilization, accelerating AI model training, inference, machine learning, and other resource-intensive tasks while being energy efficient.

As a pre-IPO, growth-stage company on a rapid growth path, WEKA has successfully raised $375 million in funding from world-class venture capitalists and strategic investors. We partner with the world’s most innovative enterprises and research organizations, including 12 of the Fortune 50, to facilitate faster and more sustainable discoveries, insights, and business outcomes. Our commitment is to tackle our customers’ most intricate data challenges to foster intelligent innovation and drive business value. If you share our enthusiasm, we welcome you to embark on this exciting journey with us.
Role Overview

Weekday-1 seeks a Chief Platform Officer in Bengaluru to lead the design, development, and scaling of enterprise technology platforms for a key client. This executive role focuses on cloud infrastructure, data engineering, and MLOps, balancing innovation with reliability and performance.

Key Responsibilities
Architect and oversee large-scale platform deployments on AWS, using services such as EC2, S3, Lambda, EMR, and Redshift.
Design resilient, data-driven platforms that support advanced analytics, machine learning, and critical business applications.
Guide cross-functional teams to ensure seamless integration across systems and alignment with organizational strategy.
Implement best practices for cloud governance, security, and performance optimization.
Design and manage high-throughput ETL pipelines for both real-time and batch data processing.
Maintain high standards for data quality, reliability, and accessibility to support data-driven decisions.
Lead the MLOps strategy, including deployment, monitoring, and lifecycle management of machine learning models.
Establish CI/CD pipelines for ML workflows and integrate models into production systems at scale.
Mentor and develop high-performing teams, fostering a culture of innovation, ownership, and continuous improvement.
Collaborate closely with product, engineering, and data science teams to deliver scalable platform solutions.

What We’re Looking For
Extensive experience in cloud infrastructure, especially AWS and its core services.
Strong background in data engineering, including ETL pipeline design and management.
Proven track record in MLOps, with hands-on experience deploying and managing machine learning models at scale.
Demonstrated leadership in guiding cross-functional teams and aligning technology strategy with organizational goals.
Commitment to best practices in cloud security, governance, and performance.
Ability to foster team growth and promote a culture of continuous learning and innovation.

Location
Bengaluru, Karnataka, India
Join our dynamic team at Harvey as a Staff Software Engineer, specializing in Core Infrastructure. In this vital role, you will be responsible for designing, building, and optimizing robust infrastructure solutions that support our innovative software products. Your expertise will contribute to enhancing the reliability and scalability of our systems as we continue to grow.

We are looking for a professional who is passionate about technology and eager to tackle complex challenges. If you thrive in a collaborative environment and are driven to make a significant impact, we want to hear from you!
Join the Okta Family!

At Okta, we are revolutionizing identity management by empowering individuals and organizations to securely access any technology, anywhere, on any device. Our innovative platforms, including the Okta and Auth0 Platforms, are designed to enhance security, streamline authentication, and facilitate automation, placing identity at the forefront of business growth and security. We value diverse perspectives and backgrounds, seeking lifelong learners who can contribute their unique experiences to our mission. Be part of a team that is building a future where identity is truly yours.

About Okta

Okta is committed to making the world a more secure and interconnected place by enabling organizations to embrace any technology. As the leading independent identity provider for enterprises, we collaborate with a diverse range of clients, from major corporations to innovative startups, to ensure secure connections between people and technology. Our robust AI and data capabilities are pivotal to our growth and success, powering secure and scalable products for both our customer base and employees.

The Opportunity

We are on the lookout for a Senior Data Engineer to join our Enterprise Data Platform team. Reporting to the Sr. Manager of Enterprise Data Platform, you will play a crucial role in developing the data infrastructure that enables our internal AI initiatives through AI-ready data. Your responsibilities will include transforming our data product vision into a secure, scalable, and sophisticated technical framework. The ideal candidate possesses a strong passion for creating high-quality data solutions.
You will contribute to establishing engineering excellence within the team and ensuring our core data platform is reliable and ready to support AI-driven decision-making at Okta.

What You’ll Do
Design and maintain efficient data pipelines and models on our AI data platform, leveraging tools such as Snowflake, AWS, and dbt.
Implement security and governance components to uphold compliance and security standards.
Develop automated solutions for data classification and access control, and support vulnerability management processes.
Utilize infrastructure as code (Terraform) and CI/CD (GitHub/GitLab) to construct, test, and deploy data infrastructure with security and reliability.
Promote and adhere to best practices in code quality, system design, and operational readiness.
Join 6sense, a cutting-edge tech company, as a Staff Software Engineer specializing in Infrastructure and DevOps. In this role, you will play a vital part in developing robust infrastructure solutions and implementing best practices in DevOps to enhance our software development lifecycle.

Your expertise in cloud services, automation, and continuous integration/continuous deployment (CI/CD) will be essential in driving our projects forward. Collaborate with cross-functional teams to ensure our systems are scalable, reliable, and secure.
About Adyen
Adyen is a cutting-edge financial technology platform that provides an all-in-one solution for payments, data, and financial products for esteemed clients like Meta, Uber, H&M, and Microsoft. We are built for ambition, engineering everything we do to foster success. We pride ourselves on creating a supportive environment where our team members can thrive and take ownership of their careers. Our motivated professionals tackle unique technical challenges at scale, collaborating as a team to deliver innovative and ethical solutions that empower businesses to achieve their goals faster.

Position Overview: Senior Platform Engineer - Infrastructure Services
We are seeking a Senior Platform Engineer with over 6 years of experience to join our ACS Infrastructure Services Team within the Platform Engineering organization in Bengaluru. This team is responsible for operating and maintaining Adyen's core container orchestration and traffic management infrastructure, which is essential for the functioning of our global payment platform. In this role, you will design, build, and manage critical infrastructure systems, enhance automation and reliability, and support the scalability of Adyen's platform as we expand globally.
Role overview
Quince seeks a Staff Data Engineer in Bengaluru, Karnataka, India. This position centers on building and maintaining the core infrastructure behind the company's data platform. The work directly supports major data initiatives and helps drive informed decisions throughout the business.

What you will do
- Design and develop infrastructure that powers the data platform
- Maintain and improve systems supporting data needs across the organization
- Collaborate with other teams to strengthen the broader data ecosystem

Requirements
- Solid background in data engineering
- Experience architecting and developing data infrastructure
- Comfort working collaboratively to address challenges
- Motivation to use data for meaningful solutions
- Appreciation for innovation and ongoing improvement
Join AION as a Senior Software Engineer

AION is at the forefront of revolutionizing high-performance computing (HPC) through our innovative AI cloud platform. Our mission is to democratize access to compute resources, enabling organizations to seamlessly navigate the entire AI lifecycle, from data management to model deployment. With a focus on bare-metal performance and a forward-deployed engineering approach, we are committed to transforming the way businesses harness the power of AI.

The demand for robust compute solutions is surging globally, and AION is poised to be the gateway for dynamic compute workloads. We are building integration bridges with diverse data centers worldwide and re-engineering the compute stack using cutting-edge serverless technology. At AION, we prioritize enterprise security and compliance, meticulously rethinking infrastructure from hardware to API interfaces.

Founded by seasoned entrepreneurs with successful track records, AION is well-funded by prominent venture capitalists and has established strategic global partnerships. With our headquarters in the US and a growing presence in India and London, we are assembling our core team to drive our mission forward.
Teamwork makes the stream work. Join Roku and transform the future of TV streaming!

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is at the forefront of revolutionizing how audiences engage with television. Our goal is to power every TV worldwide, connecting viewers to their favorite content while empowering publishers and advertisers with innovative solutions. From day one, your contributions at Roku will be recognized and valued. We are a dynamic, growing public company where every team member plays a crucial role in delighting millions of viewers around the globe while acquiring invaluable experience across diverse fields.

About Our Big Data Team
Roku operates one of the largest data lakes globally, managing over 70 PB of data and executing more than 10 million queries each month. Our Big Data team is responsible for developing and maintaining the platform that makes this possible. We offer tools to acquire, generate, process, monitor, validate, and access data for both streaming and batch processing. Our technologies include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and more. The team actively contributes to the Open Source community and aims to expand its involvement.

Your Role
We are modernizing our Big Data Platform and need your expertise to redefine our architecture to enhance user experience, reduce costs, and boost efficiency. If you are passionate about Big Data technologies and eager to explore Open Source, this position is tailored for you!

Key Responsibilities
- Optimize and fine-tune existing Big Data systems and pipelines, while also developing new ones to ensure they operate efficiently and cost-effectively.
(P-1490) At Databricks, we process vast amounts of data, managing petabytes and billions of transaction events each day. Our infrastructure is critical; every cluster launch, query executed, and dollar billed must function flawlessly. With stringent accuracy requirements of 99.999% for billing transactions and the ability to ingest terabytes of data per second across over 100 regions, the stakes are high. A mere five-minute outage can lead to significant revenue loss and erode customer trust. Therefore, our infrastructure is not just important; it is essential for our survival.

As we scale, the next phase of our growth demands that we establish disaster recovery systems that ensure reliability, not just hope for it. We need testing frameworks that can identify production-scale issues before they affect our users, correctness guarantees that eliminate billing errors, and automation that scales operations efficiently alongside growth.

In this pivotal leadership role, you will spearhead the development of the data infrastructure organization that underpins Databricks' ongoing expansion. You will lay the groundwork for teams in Bengaluru, responsible for the foundational systems that assure billing accuracy, operational resilience, and zero-downtime recovery across our monetization stack. This encompasses multi-region data ingestion, developer platforms, and deployment automation that streamline operations at petabyte scale. Your mission transcends mere maintenance; it involves architecting the infrastructure that allows Databricks to grow while alleviating operational burdens. You will define the standards for world-class infrastructure that will serve data platforms for the next decade.

In your role as a founding technical leader in our rapidly growing engineering hub, you will collaborate closely with global infrastructure leaders. Beyond building exceptional teams, you will influence architectural decisions that resonate throughout the organization and advocate for an infrastructure-as-product mindset that transforms infrastructure into a global force multiplier. You will thrive in an engineering culture rooted in Apache Spark and open source, where technical expertise is highly valued and infrastructure engineers are regarded as skilled artisans.

The ideal candidate has previously built infrastructure organizations in environments where achieving five nines was not merely a goal but a reality, where petabyte-scale operations were a daily expectation, and where the technical strength of the infrastructure team determined business scalability. You possess the technical expertise to engage in discussions about data architecture, the strategic insight to shape multi-year platform roadmaps, and the leadership ability to create teams that attract top-tier engineers. Most importantly, you believe that effective data infrastructure not only supports the business but also defines its potential.
(P-1346) At Databricks, we are dedicated to empowering data teams to tackle some of the world's most challenging issues, from transforming transportation to spurring medical advancements. We achieve this by developing and managing an unparalleled data and AI infrastructure platform, enabling our clients to leverage profound data insights for business enhancement. Founded by engineers with a strong customer focus, we eagerly embrace every chance to address technical challenges, whether it's designing next-gen UI/UX for data interaction or scaling our services across millions of virtual machines.

Databricks Mosaic AI presents a distinctive data-driven approach to constructing enterprise-grade Machine Learning and Generative AI solutions, allowing organizations to securely and cost-effectively own and manage ML and Generative AI models, enriched with their enterprise data. We're expanding rapidly in Bengaluru, India, with plans to establish 14 new teams from the ground up!

As a Staff Software Engineer in our Infrastructure team at Databricks India, you will have the opportunity to engage in Backend (Infrastructure) work.

Your Impact
Our Infrastructure Backend teams encompass diverse areas within our core service platforms. You may face challenges such as:
- Addressing issues that range from product to infrastructure, including distributed systems, large-scale service architecture and monitoring, workflow orchestration, and enhancing developer experience.
- Delivering reliable, high-performance services and client libraries for managing vast amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.
- Building dependable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Apache Spark™, Databricks) to support the pricing infrastructure that processes millions of cluster-hours daily, while developing product features that allow customers to easily monitor and manage platform usage.
Collaboration is key to seamless streaming. Join Roku in revolutionizing television viewing.

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to enhance every television experience globally. By pioneering streaming technologies, we aim to connect audiences with their favorite content, empower content creators to grow their reach, and offer advertisers innovative ways to engage. From day one, your contributions at Roku will be recognized and valued. We are a rapidly expanding public company where every team member plays a vital role. Get ready to engage millions of TV streamers globally while gaining invaluable experience across diverse areas.

About Our Team
The Search & Recommendations (S&R) Platform Engineering team is at the heart of our mission to provide exceptional streaming experiences for millions worldwide. We design and maintain the core infrastructure that enables search, personalization, and content discovery across all Roku platforms. Our diverse and collaborative team emphasizes ownership, transparency, and continuous improvement. We partner with various infrastructure teams to develop high-performance distributed systems and observability tools that facilitate real-time search, ranking, and recommendations. Our projects involve designing and optimizing online inference infrastructure, feature stores, and data pipelines, all seamlessly integrated within the broader platform ecosystem (Kubernetes, Istio, Envoy). We thrive on tackling complex technical challenges that impact user experience.
Join Nexthink, a leader in digital employee experience management software, as a Platform Software Engineer in our Engineering Productivity team within the Technical Platform group. We empower organizations to enhance employee experiences through our cutting-edge products that provide unmatched visibility across digital workspaces. Our mission is to equip IT teams with the tools they need to proactively identify and resolve workplace issues before they impact employees.

In this role, you will collaborate with a diverse team of talented engineers who take full ownership of their projects from conception to deployment. You will closely engage with Product Engineering, Security, and Architecture teams to understand developer needs, design and implement solutions, and facilitate their adoption. Together, we will pave the way for a premier internal developer platform that integrates modern technologies and best practices for continuous integration and continuous deployment (CI/CD).

As a Platform Software Engineer, your responsibilities will include:
- Providing essential tools for daily product development, integrating with cloud platforms, and assisting developers in managing their build systems and CI/CD pipelines.
- Setting up and maintaining development tools such as Jenkins, Artifactory, and GitHub.
- Developing internal self-service tools and platforms for Nexthink developers.
- Owning technical work for various projects from conception through production, including proposals and execution.
- Building strong relationships with Nexthink developers to identify improvement areas and drive platform adoption.
- Documenting solutions and conducting workshops to disseminate knowledge across development teams.
- Diagnosing and resolving deployment incidents in both development and production environments to maintain service levels.
(P-1384) At Databricks, we manage vast amounts of data, processing petabytes and billions of transaction events daily. Every cluster launch, every query executed, and every dollar billed flows through a robust infrastructure that is crucial for our operations. With requirements for 99.999% billing accuracy and the ability to ingest terabytes per second across over 100 regions, our infrastructure is not just important; it's vital. A five-minute outage can translate into millions lost in revenue and customer trust, making it essential that we build disaster recovery systems that demonstrate reliability, testing frameworks that identify production-scale issues before deployment, and automation that allows us to scale operations efficiently.

In this pivotal leadership role, you will spearhead the development of the data infrastructure organization that will support Databricks' ongoing growth. You will establish foundational teams in Bengaluru responsible for the critical systems that ensure billing accuracy, operational resilience, and zero-downtime recovery across our entire monetization stack. This includes managing multi-region data ingestion, developer platforms, and deployment automation that streamlines processes at petabyte scale. This role is about architecting the future infrastructure that enables Databricks to expand while alleviating operational burdens.

As a foundational technical leader in our rapidly growing engineering hub, you will collaborate strategically with global infrastructure leaders. Besides building exceptional teams, you will influence architectural decisions across the organization and promote an infrastructure-as-product mindset that transforms infrastructure into a global force multiplier. You will thrive within an engineering culture rooted in Apache Spark and open source principles, where technical excellence is celebrated and infrastructure engineers are regarded as artisans.

The ideal candidate has successfully built infrastructure teams in environments where five nines were not merely aspirational, where handling petabyte-scale data was a regular occurrence, and where the technical capabilities of the infrastructure team played a critical role in the organization's scalability. You possess the technical acumen to engage in data architecture discussions, the strategic foresight to outline multi-year platform roadmaps, the ability to cultivate teams that attract top engineering talent, and, above all, the conviction that well-executed data infrastructure does not just support a business; it defines its potential.
About Wisdom AI
At Wisdom AI, we believe that simplicity is the key to clarity. Our mission is to transform the complexities of data interaction for businesses, enabling them to democratize insights and make swift, informed decisions.

Our culture is anchored by core values that drive our operations:
- Default to Action: We prioritize speed and value progress.
- Customer Obsessed: Our primary focus is delivering exceptional value to our customers.
- One Team: We foster open communication and respectfully challenge ideas.

Role Overview
We are seeking a dedicated backend/infrastructure engineer to join our pioneering team. You will be instrumental in developing machine learning and data pipelines that are efficient, secure, and designed for scalability. As a platforms engineer, you will oversee deployments, observability, and other critical elements necessary for scaling our production environment to meet customer needs.

Your Responsibilities
- Take ownership of the observability and deployments within the Wisdom tech stack.
- Collaborate with fellow software and sales engineers to establish effective processes and tools for managing production changes.
- Utilize modern cloud infrastructure to develop cost-efficient and scalable systems.
- Engage closely with customers and founders to shape the product and engineering roadmap.
- Contribute to the development of our engineering culture and technology stack.

Qualifications
- Proven experience in building highly available and scalable cloud infrastructure.
- Proficient in Kubernetes, Terraform, and Python.
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience in startup environments is a plus.
Teamwork makes the stream work. Roku is transforming the television landscape.

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to power every television globally. We were the pioneers of streaming to the TV, and our goal is to connect consumers with their favorite content, empower publishers to grow their audiences, and provide advertisers with innovative tools to engage viewers effectively. From day one at Roku, you will be a key contributor in a fast-paced environment where your work impacts millions of TV streamers. Join us to gain invaluable experience across various disciplines while delighting users around the world.

About the Team
Roku leads the industry in TV streaming innovation. Our ongoing success depends on enhancing the Roku Content Platform to deliver an exceptional streaming experience worldwide. As part of our elite Content Platform team, you will collaborate with talented engineers responsible for developing and maintaining our extensive backend systems, data processing services, and storage solutions, providing insights for all content on Roku devices.

About the Role
We are seeking a Senior Software Engineer with extensive experience in backend development, Data Engineering, and Data Analytics. This role focuses on advancing our content platform and data intelligence capabilities, supporting critical systems such as Search and Recommendations across the Roku platform. Ideal candidates will embrace high visibility, possess strong business acumen, and thrive on making impactful decisions while working on core data platform components essential for streaming services at Roku.

What You'll Be Doing
- Collaborate closely with product management, content data platform services, and internal teams to enhance our data platform's architecture.
- Develop low-latency, optimized streaming and batch data pipelines to support downstream services.
- Build and maintain our Microservices-based Event-Driven architecture.
About Us
Acceldata stands at the forefront of Enterprise Data Observability, having established itself as a leader since its inception in 2018. Based in Silicon Valley, we have pioneered the first Enterprise Data Observability Platform designed to facilitate the development and management of exceptional data products. Our approach to Enterprise Data Observability integrates cutting-edge technologies such as AI, LLMs, Analytics, and DataOps. Acceldata empowers organizations with vital capabilities that ensure the delivery of reliable and trustworthy data to fuel enterprise data products. As a SaaS solution, Acceldata's platform is trusted by a diverse range of global clients, including industry giants like HPE, HSBC, Visa, Freddie Mac, Manulife, Workday, Oracle, and many more. We are proud to be a Series-C funded company with backing from top-tier investors including Insight Partners, March Capital, Lightspeed, and others.

About the Role
We are on the lookout for a highly skilled Senior Software Development Engineer in Test (SDET) to join our Open Data Platform (ODP) team, focusing on the quality assurance and performance enhancement of large-scale data systems. In this position, you will collaborate closely with both development and operations teams to design and implement comprehensive testing strategies for the Open Source Data Platform (ODP), which encompasses technologies such as Hadoop, Spark, Hive, and Kafka. Your expertise will be vital in automating tests, fine-tuning performance, and pinpointing bottlenecks within distributed data systems.

Key responsibilities will include drafting test plans, developing automated test scripts, and executing functional, regression, and performance testing. You will play a critical role in identifying and rectifying defects, safeguarding data integrity, and optimizing testing methodologies. Strong teamwork and collaboration skills are essential, as you will engage with cross-functional teams and spearhead quality improvement initiatives. Your contributions will significantly impact the reliability and quality standards of big data solutions.

https://www.acceldata.io/open-data-platform
Join Our Team at Safe Security

At Safe Security, we're pioneering the future of Cyber Super Intelligence (CSI) with an innovative platform that autonomously forecasts, identifies, and mitigates cyber threats. We champion a culture of transparency and collaboration, ensuring that every team member feels valued and empowered in their role. We prioritize a culture-first mindset, offering unlimited vacation, a supportive work environment built on trust, and an unwavering commitment to continuous professional development. Our ethos is that Culture is Our Strategy. Dive deeper into the unique qualities that make SAFE exceptional by checking out our Culture Memo.
At WEKA, we are redefining the enterprise data stack for the reasoning age. Our innovative solution, NeuralMesh by WEKA, stands at the forefront of agentic AI data infrastructure, providing a cloud and AI-native software solution deployable anywhere. We convert traditional data silos into dynamic data pipelines that significantly enhance GPU utilization, accelerating AI model training, inference, machine learning, and other resource-intensive tasks while being energy efficient.

As a pre-IPO, growth-stage company on a rapid growth path, WEKA has successfully raised $375 million in funding from world-class venture capitalists and strategic investors. We partner with the world's most innovative enterprises and research organizations, including 12 of the Fortune 50, to facilitate faster and more sustainable discoveries, insights, and business outcomes. Our commitment is to tackle our customers' most intricate data challenges to foster intelligent innovation and drive business value. If you share our enthusiasm, we welcome you to embark on this exciting journey with us.
Role Overview
Weekday-1 seeks a Chief Platform Officer in Bengaluru to lead the design, development, and scaling of enterprise technology platforms for a key client. This executive role focuses on cloud infrastructure, data engineering, and MLOps, balancing innovation with reliability and performance.

Key Responsibilities
- Architect and oversee large-scale platform deployments on AWS, using services such as EC2, S3, Lambda, EMR, and Redshift.
- Design resilient, data-driven platforms that support advanced analytics, machine learning, and critical business applications.
- Guide cross-functional teams to ensure seamless integration across systems and alignment with organizational strategy.
- Implement best practices for cloud governance, security, and performance optimization.
- Design and manage high-throughput ETL pipelines for both real-time and batch data processing.
- Maintain high standards for data quality, reliability, and accessibility to support data-driven decisions.
- Lead the MLOps strategy, including deployment, monitoring, and lifecycle management of machine learning models.
- Establish CI/CD pipelines for ML workflows and integrate models into production systems at scale.
- Mentor and develop high-performing teams, fostering a culture of innovation, ownership, and continuous improvement.
- Collaborate closely with product, engineering, and data science teams to deliver scalable platform solutions.

What We're Looking For
- Extensive experience in cloud infrastructure, especially AWS and its core services.
- Strong background in data engineering, including ETL pipeline design and management.
- Proven track record in MLOps, with hands-on experience deploying and managing machine learning models at scale.
- Demonstrated leadership in guiding cross-functional teams and aligning technology strategy with organizational goals.
- Commitment to best practices in cloud security, governance, and performance.
- Ability to foster team growth and promote a culture of continuous learning and innovation.

Location
Bengaluru, Karnataka, India
Join our dynamic team at Harvey as a Staff Software Engineer, specializing in Core Infrastructure. In this vital role, you will be responsible for designing, building, and optimizing robust infrastructure solutions that support our innovative software products. Your expertise will contribute to enhancing the reliability and scalability of our systems as we continue to grow.

We are looking for a professional who is passionate about technology and eager to tackle complex challenges. If you thrive in a collaborative environment and are driven to make a significant impact, we want to hear from you!