Qualifications
- Proven experience in backend development, preferably with a focus on AI and data pipelines.
- Strong proficiency in programming languages such as Python, Java, or similar.
- Familiarity with cloud services and data storage solutions.
- Solid understanding of machine learning algorithms and data processing techniques.
- Excellent problem-solving skills and the ability to work collaboratively in a team environment.
About the job
Join our innovative team at Seekatechnology as a Backend AI & Data Pipeline Engineer. In this role, you will be responsible for designing and implementing robust data pipelines that facilitate the integration of AI technologies into our systems. Your expertise will play a crucial role in optimizing data flow and ensuring seamless data processing.
About Seekatechnology
Seekatechnology is at the forefront of technological innovation, dedicated to developing cutting-edge solutions that drive efficiency and transform industries. We foster a dynamic work environment where creativity and collaboration thrive, and we are committed to empowering our employees with the tools and opportunities they need to succeed.
Role: Data Engineer
Location: Egypt, Uzbekistan, and Pakistan (Remote)
Work Week: Sunday – Thursday
Work Timings: 9:00 AM – 6:00 PM (Saudi Arabian Time Zone)
Overview: Join our dynamic team as a Data Engineer, where you will play a crucial role in designing, building, and maintaining the robust data infrastructure that fuels our analytics, machine learning models…
pavago is looking for a Data Engineer based in Pakistan to join its remote team. This position centers on building and maintaining the systems that move and organize data across the company.
Role overview
This role focuses on designing, developing, and supporting data pipelines. The work ensures that data flows smoothly and remains accessible for teams that depend on accurate, timely information.
What you will do
- Create and improve data pipelines to handle large volumes of information
- Maintain data infrastructure for reliability and efficiency
- Work with modern technologies to support data accessibility across the organization
Location
This is a remote position open to candidates based in Pakistan.
Join the Devsinc Team! We are seeking a passionate and skilled Lead Data Engineer to become a vital part of our innovative data team. In this crucial position, you will spearhead the design and optimization of data pipelines, turning raw data into insightful analytics that drive informed decision-making. Your collaborative spirit and deep expertise in data engineering will significantly influence our data architecture, ensuring data integrity and accessibility for our organization.
Key Responsibilities:
- Architect, develop, and refine scalable data pipelines and models to enhance the extraction, transformation, and loading (ETL) processes.
- Engage with data scientists, analysts, and business stakeholders to identify data needs and deliver effective data solutions that align with strategic objectives.
- Maintain high standards of data quality, consistency, and accuracy by implementing robust validation and testing processes.
- Leverage cloud services and data warehousing technologies to manage substantial datasets efficiently.
- Oversee and resolve issues in production data pipelines, ensuring data availability and reliability.
- Participate in data governance efforts by promoting best practices in data security, privacy, and compliance.
- Stay abreast of industry trends and advancements in data engineering technologies.
Who We Are:
Motive is at the forefront of empowering organizations that manage physical operations. We provide cutting-edge tools designed to enhance safety, productivity, and profitability. For the first time, teams from safety, operations, and finance can manage their drivers, vehicles, equipment, and fleet-related expenses through a unified system. With our industry-leading AI capabilities, the Motive platform offers comprehensive visibility and control, significantly streamlining manual tasks through automation.
We proudly serve nearly 100,000 clients, ranging from Fortune 500 companies to small businesses, across diverse sectors including transportation and logistics, construction, energy, field service, manufacturing, agriculture, food and beverage, retail, and the public sector. To learn more, visit gomotive.com.
About the Role:
As the Sales Data Engineering Manager, you will play a pivotal role in our core data team, developing world-class sales data products that will propel Motive's sales initiatives. You will work at the intersection of various data streams within the organization, supporting our sales representatives around the globe. Our ideal candidate is a collaborative team player who thrives in an environment that fosters innovation and continuous improvement. We take pride in our vibrant culture and our ability to work effectively within a highly diverse team.
Responsibilities:
- Develop a strategic data roadmap for the sales data engineering team, focusing on both internal and external data products.
- Innovate our sales data offerings by integrating AI into our processes and deliverables.
- Establish and lead teams to scale our evolving set of analytics products.
- Maintain a hands-on technical role, coding and setting technical standards within the team.
- Collaborate with data and product teams to architect and design effective data models.
- Communicate and drive strategic initiatives across multiple teams and projects.
Join 9D Technologies, a leading innovator in mobile application development dedicated to crafting exceptional digital experiences that engage users worldwide. Our goal is to push the boundaries of creativity and technology, creating captivating applications that entertain and inspire.
Key Responsibilities:
- Develop, oversee, and optimize ETL/ELT pipelines to efficiently process extensive datasets from diverse sources.
- Assist in the design and enhancement of databases to provide reliable data storage and retrieval solutions.
- Collaborate with cross-functional teams to convert raw data into structured formats suitable for analytics and reporting.
- Perform data quality checks, troubleshoot issues, and ensure the integrity and consistency of data.
- Engage with data analysts and engineers to align on data architecture and infrastructure requirements.
- Maintain comprehensive documentation for data processes, workflows, and best practices.
Job Overview:
We are on the lookout for a talented Senior Data Engineer to become a vital part of our innovative team at creativechaos. The successful candidate will possess extensive expertise in crafting and deploying data pipelines, optimizing data processes, and managing substantial datasets. You will be tasked with establishing and sustaining robust data infrastructure while working alongside diverse teams to tackle data-oriented technical challenges and meet data infrastructure requirements.
Key Responsibilities:
- Design and construct scalable, reliable data pipelines ensuring exceptional availability and performance.
- Create complex datasets that conform to both functional and non-functional business specifications.
- Identify, strategize, and execute enhancements to internal processes, including automating manual tasks, optimizing data delivery, and reengineering infrastructure for improved scalability.
- Adopt best practices for data storage, processing, and retrieval.
- Collaborate with various stakeholders, including executives, data scientists, and product managers, to comprehend data requirements and deploy effective data solutions.
- Enhance and fine-tune data workflows to achieve peak performance and efficiency.
- Ensure data security and compliance with pertinent data privacy regulations.
- Keep abreast of emerging technologies and industry advancements in data engineering and analytics.
- Mentor junior data engineers, providing guidance and support.
Qualifications:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- A minimum of 7 years of experience in data engineering or a comparable role.
- Proficient in programming languages such as Python, Scala, or Java.
- Experience in designing and implementing data pipelines with tools like Apache Kafka, Apache Spark, or AWS Glue.
- Strong SQL skills and familiarity with database technologies like PostgreSQL, MySQL, or MongoDB.
- Understanding of cloud platforms, particularly Azure.
- Experience with data modeling, ETL processes, and data warehousing principles.
- Exceptional problem-solving and troubleshooting capabilities.
- Excellent communication and teamwork skills.
- Detail-oriented with a proactive approach to work.
What We Offer:
- Paid Time Off
- Work From Home
- Health Insurance (OPD)
- Training and Development
- Life Insurance
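To make the pipeline side of this role concrete, here is a minimal sketch of a streaming ingestion job using two of the tools the qualifications name, Kafka and Spark. The broker, topic, and paths are invented placeholders, and the job shape is illustrative rather than creativechaos's actual stack; running it also requires the spark-sql-kafka connector on the classpath.

```python
# Minimal PySpark Structured Streaming sketch: Kafka -> Parquet.
# Broker, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("events-ingest")  # hypothetical job name
    .getOrCreate()
)

# Read raw events from a Kafka topic as an unbounded stream.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Kafka values arrive as bytes; cast to string for downstream parsing.
parsed = events.select(col("value").cast("string").alias("payload"))

# Continuously append the stream to Parquet, with checkpointing for recovery.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/events")              # placeholder output path
    .option("checkpointLocation", "/chk/events")
    .start()
)
query.awaitTermination()
```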
Devsinc is on the lookout for a skilled Data Engineer with at least 2 years of professional experience to become a vital part of our expanding data team. In this exciting role, you will architect and create scalable data pipelines, engage with advanced cloud platforms, and establish the groundwork for analytics that inform key business decisions. From your first day, you’ll receive mentorship from senior engineers, work with a cutting-edge cloud stack, and witness the significant impact of your contributions.
Key Responsibilities:
- Design, develop, and sustain automated ETL/ELT data pipelines for both structured and unstructured datasets.
- Create and refine scalable, secure, and cost-effective cloud data solutions using AWS, Azure, or GCP.
- Model, clean, and transform data to facilitate analytics, dashboards, and reporting use cases.
- Implement automated testing, monitoring, and alerting to guarantee high data quality and reliability.
- Develop high-performance Python-based services and utilities for data ingestion and processing.
- Engage with APIs, event-driven systems, and streaming platforms to support real-time data workflows.
- Collaborate with cross-functional teams (Data Science, Backend, DevOps, Product) to gather requirements and deliver custom data solutions.
- Adhere to strong software engineering best practices, including clean code, modularity, version control, and CI/CD.
- Document architecture, data flows, schemas, and development standards.
- Keep abreast of the latest data engineering tools, frameworks, and cloud-native technologies.
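As a rough illustration of the ETL work described above, the following sketch implements a tiny extract-transform-load flow in Python with pandas and SQLite. The file, column, and table names are hypothetical placeholders, not anything from the listing.

```python
# Minimal batch ETL sketch: extract a CSV, transform it, load into SQLite.
# File names, columns, and table names are hypothetical placeholders.
import sqlite3

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Read raw records from a CSV file."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete rows and normalize a text column."""
    df = df.dropna(subset=["customer_id"])  # placeholder key column
    df["country"] = df["country"].str.strip().str.upper()
    return df

def load(df: pd.DataFrame, db_path: str) -> None:
    """Append the cleaned frame into a warehouse-style table."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("customers_clean", conn, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("raw_customers.csv")), "warehouse.db")
```

In a production pipeline each stage would typically be scheduled, monitored, and tested independently, but the extract/transform/load split stays the same.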
Join the innovative team at Devsinc as a Software Engineer II – AI & Data Engineering. We are seeking a talented individual with over 2.5 years of professional experience in developing and deploying robust AI/ML systems, applications powered by LLMs, and scalable data engineering solutions.
This position demands a strong foundation in AI/ML Engineering, MLOps, Backend Engineering, and Data Engineering. You will take ownership of the project lifecycle, from the design of LLM applications, RAG pipelines, embeddings, and inference systems to the construction of ETL/ELT pipelines, cloud-native infrastructures, and architectures for real-time data processing.
Key Responsibilities:
- Craft, develop, enhance, and deploy AI/ML models, including LLM-powered applications, RAG pipelines, embeddings, vector search architectures, and inference systems tailored for real-world applications.
- Develop and refine high-performance Python APIs, microservices, and backend services for AI workloads, collaborating with Engineering teams, Project Managers, and business stakeholders to deliver scalable, production-ready AI solutions.
- Establish and manage MLOps workflows and cloud-native infrastructures across AWS, Azure, and GCP, covering experiment tracking, model versioning, deployment automation, monitoring, and model optimization techniques like hyperparameter tuning and quantization.
- Design, develop, and sustain scalable ETL/ELT pipelines for both structured and unstructured datasets.
- Create and enhance data transformation, cleansing, validation, and quality frameworks, utilizing distributed and streaming technologies such as Kafka, Spark, Kinesis, and Pub/Sub for real-time data processing.
- Guarantee reliability, scalability, security, and cost-efficiency across AI and data infrastructures, while documenting architectural decisions, technical workflows, and engineering standards.
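To ground the RAG and vector search terminology above, here is a minimal retrieval sketch: documents and a query are embedded, and the closest passages are ranked by cosine similarity. The embed() function is a stand-in for whatever embedding model a team would actually use; the random vectors exist only so the sketch runs end to end.

```python
# Minimal retrieval sketch for a RAG pipeline: embed documents, then return
# the passages most similar to a query by cosine similarity.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: swap in a real embedding model (API call or local model)."""
    rng = np.random.default_rng(0)  # deterministic fake vectors for the sketch
    return rng.normal(size=(len(texts), 384))

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    doc_vecs = embed(docs)
    q = embed([query])[0]
    # Normalize so the dot product equals cosine similarity.
    doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    q /= np.linalg.norm(q)
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passages would then be stitched into the LLM prompt.
print(top_k("reset my password", ["billing FAQ", "password reset guide", "API docs"]))
```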
Role Overview:
Join Adal Fintech as a Senior Data Engineer and play a pivotal role in building and optimizing our data infrastructure. You will leverage your expertise in SQL, PL/SQL, Stored Procedures, Database Query Optimization, SSIS, Apache Spark, and Python to design and enhance data pipelines and ETL workflows that are both efficient and scalable, supporting our business intelligence and analytics goals.
About AdalFi:
AdalFi is at the forefront of revolutionizing digital lending in Pakistan. We are developing the country’s fastest-growing AI-driven digital lending platform, empowering banks to launch innovative credit solutions swiftly. By harnessing cutting-edge AI and data analytics, we enable financial institutions to make informed and rapid lending decisions, significantly enhancing access to credit for millions.
Responsibilities:
• Develop and maintain robust ETL processes and data pipelines utilizing SQL, PL/SQL, SSIS, Apache Spark, and Python.
• Create and fine-tune complex queries and stored procedures to ensure optimal performance.
• Manage large-scale structured and unstructured datasets, ensuring their quality, security, and compliance.
• Collaborate with Business Intelligence teams and stakeholders to deliver reliable, scalable data solutions.
• Document technical designs and workflows to facilitate knowledge sharing.
Ideal Candidate:
• Minimum 4 years of experience in the Data Engineering domain.
• Proficient in SQL, PL/SQL, Stored Procedures, query optimization, and performance tuning.
• Proven experience with ETL tools (SSIS), big data technologies (Apache Spark), and Python programming.
• Strong grasp of data modeling, warehousing, and relational database management.
• Familiarity with version control systems like Git, CI/CD practices, and collaborative problem-solving skills.
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• Experience with cloud computing platforms such as AWS, Azure, or GCP.
• Understanding of data governance principles, security protocols, and compliance regulations.
• Exposure to NoSQL databases and real-time data processing methodologies.
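As a small, generic illustration of the query tuning this role emphasizes, the sketch below uses SQLite's EXPLAIN QUERY PLAN to show how adding an index changes a plan from a full table scan to an index lookup. SQLite merely stands in for whatever engine AdalFi actually runs, and the table and data are invented.

```python
# Tiny query-tuning sketch: inspect a plan before and after adding an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE loans (id INTEGER PRIMARY KEY, bank_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO loans (bank_id, amount) VALUES (?, ?)",
    [(i % 50, i * 10.0) for i in range(10_000)],  # synthetic rows
)

query = "SELECT SUM(amount) FROM loans WHERE bank_id = ?"

# Without an index, the plan reports a full scan of the table.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())

# An index on the filter column lets the engine seek instead of scan.
conn.execute("CREATE INDEX idx_loans_bank ON loans (bank_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())
```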
Join Octopus Digital, a subsidiary of Avanceon Limited, as a Data Engineer and play a pivotal role in designing and implementing sophisticated data warehouse solutions across our organization. We are seeking skilled professionals who are adept at managing vast volumes of structured and unstructured data using cutting-edge technologies.
Job Responsibilities:
- Apply hands-on expertise in Azure or AWS platforms.
- Utilize Spark and Python for data processing tasks.
- Design and create efficient code, scripts, and data pipelines to handle both structured and unstructured datasets.
- Oversee the management of data ingestion pipelines and stream processing.
- Develop and optimize complex SQL queries and shell scripts.
- Conduct Big Data querying to derive insights.
- Work with NoSQL databases to support data needs.
- Experience with Hadoop distributions, particularly Azure Data Lake HDFS.
- Certification or training in Data Lake HDFS is an added advantage.
- Demonstrate the ability to meet deadlines, troubleshoot issues, and provide resolutions with minimal supervision.
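For a concrete flavor of the Spark and Python work listed above, here is a minimal batch-job sketch: read semi-structured JSON, aggregate it, and write partitioned Parquet. The paths and field names are hypothetical placeholders, not Octopus Digital's schema.

```python
# Minimal PySpark batch sketch: JSON in, daily rollup out as Parquet.
# Paths and fields are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-rollup").getOrCreate()

# Ingest raw, semi-structured JSON records from a landing zone.
raw = spark.read.json("/landing/sensors/*.json")  # placeholder input path

# Roll up readings per device per day.
daily = (
    raw.withColumn("day", F.to_date("timestamp"))
       .groupBy("device_id", "day")
       .agg(
           F.avg("value").alias("avg_value"),
           F.count("*").alias("readings"),
       )
)

# Partitioned Parquet output suits the downstream big-data querying mentioned.
daily.write.mode("overwrite").partitionBy("day").parquet("/curated/sensor_daily")
```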
Join the dynamic team at Speechify as a Software Engineer focused on Data Infrastructure and Acquisition. In this role, you will play an essential part in developing robust data solutions and systems that enhance our product offerings. You will collaborate with cross-functional teams to design, implement, and optimize data pipelines, ensuring high availability and performance. Your contribution will directly impact our ability to deliver exceptional user experiences.
Role overview
Smart Working Solutions seeks a Senior Data Engineer in Pakistan with a focus on Google Cloud Platform (GCP), BigQuery, and Looker. This role centers on developing and maintaining data pipelines that adapt as business needs grow. Building dashboards to help teams make informed decisions is also a central responsibility.
What you will do
- Design and implement data pipelines using GCP and BigQuery
- Develop and maintain scalable solutions for integrating and processing data
- Create and manage dashboards in Looker to support analytics and reporting
- Assist teams throughout the company in making data-driven decisions
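As a brief illustration of the BigQuery side of this stack, the sketch below runs a query through the official google-cloud-bigquery client. The project, dataset, and table are invented, and credentials are assumed to come from the environment (for example, a service account).

```python
# Minimal sketch of querying BigQuery from Python with the official client.
# Project, dataset, and table names are invented for illustration.
from google.cloud import bigquery

client = bigquery.Client()  # project/credentials come from the environment

sql = """
    SELECT country, COUNT(*) AS orders
    FROM `my-project.sales.orders`   -- placeholder table
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 10
"""

# Results like these would typically feed a Looker dashboard.
for row in client.query(sql).result():
    print(row["country"], row["orders"])
```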
Join our innovative team at Creative Chaos as a Data Engineer, specializing in Azure Data Lake. We are looking for an experienced professional to design, develop, and enhance data pipelines, enabling seamless processing of substantial datasets.
Key Responsibilities:
- Create and manage efficient data pipeline architectures within Azure Data Lake.
- Transform and integrate data from diverse sources to support advanced analytics and reporting.
- Guarantee data quality and integrity through effective governance practices.
- Work collaboratively with cross-functional teams to ascertain data needs and devise scalable solutions.
- Optimize data processing workflows to enhance performance and reliability.
- Maintain thorough documentation of processes and architectures to ensure scalability and maintainability.
- Keep updated on the latest Azure technologies and data engineering best practices.
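To illustrate one routine task in such a role, here is a minimal sketch of landing a file in Azure Data Lake Storage Gen2 with the azure-storage-file-datalake SDK. The account, container, paths, and credential are all placeholders.

```python
# Minimal sketch: upload a local extract into Azure Data Lake Storage Gen2.
# Account, container, paths, and credential are hypothetical placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://myaccount.dfs.core.windows.net",  # placeholder account
    credential="<account-key-or-token>",                   # placeholder credential
)

# File systems map to containers; directory paths organize the lake's zones.
fs = service.get_file_system_client("raw")                 # placeholder zone
file_client = fs.get_file_client("sales/2024/01/orders.csv")

# Upload a local extract, replacing any previous version of the file.
with open("orders.csv", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```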
Full-time | On-site | Islamabad, Islamabad Capital Territory, Pakistan
Devsinc is actively seeking talented Data Scientists with 6 months to 1.5 years of experience, particularly in the realm of machine learning (ML). This position is perfect for those who have started to hone their skills in ML methodologies and are passionate about utilizing this expertise to tackle real-world problems. The ideal candidate will possess a solid grounding in ML techniques, a knack for analytical thinking to decode complex data challenges, and a commitment to driving data-informed decisions and innovations.
Key Responsibilities:
- Design, develop, and implement machine learning models to solve specific business challenges, including data preprocessing, feature engineering, model selection, training, and validation.
- Conduct exploratory data analysis to discover hidden patterns, correlations, and insights within structured and unstructured datasets. Use these insights to optimize ML models and methodologies.
- Collaborate with a diverse team of data scientists, engineers, and business stakeholders to clarify data requirements and deliver ML-driven solutions.
- Create engaging visualizations to summarize the results of ML models and analyses. Prepare detailed reports and presentations that translate intricate ML concepts and findings into actionable business insights.
- Continuously seek educational opportunities in advanced machine learning techniques and algorithms, integrating innovative research and tools into projects to enhance model performance and efficiency.
- Contribute to the development of prototypes for predictive models and other ML applications, evaluating their effectiveness in practical scenarios.
- Explore opportunities to leverage insights, datasets, code, and models across various organizational functions, such as HR and marketing.
- Exhibit curiosity and enthusiasm for using algorithms to address challenges and inspire others to appreciate the value of your work.
- Maintain effective communication, both verbal and written, to understand data needs and report on results.
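As a compact example of the model-building loop this listing describes (preprocessing, training, validation), here is a scikit-learn sketch on a bundled toy dataset; a real project would substitute its own data, feature engineering, and model selection.

```python
# Minimal model-building sketch: preprocessing, training, and validation
# with scikit-learn on a bundled toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Bundling scaling with the model keeps preprocessing inside each CV fold,
# so no information leaks from the validation split into training.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validation as a simple validation strategy.
scores = cross_val_score(model, X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```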
FlatGigs is on the lookout for a talented Data Engineer specializing in AI and Python to enhance our vibrant team. In this pivotal role, you will design and maintain scalable data pipelines and infrastructure that support AI-driven applications and analytics. Collaborating closely with data scientists and AI engineers, you will facilitate efficient machine learning workflows and foster data innovation.
Key Responsibilities
- Craft, develop, and sustain scalable ETL pipelines utilizing Python to manage extensive structured and unstructured data.
- Partner with AI teams to architect data solutions that streamline model training, evaluation, and deployment.
- Execute data ingestion, cleansing, and transformation processes to ensure exceptional dataset quality for AI workflows.
- Enhance data workflows and storage solutions for optimal performance and reliability.
- Leverage cloud platforms (AWS, Azure, or GCP) to deploy and oversee data pipelines and supporting infrastructure.
- Document processes and uphold data governance and compliance policies.
- Engage in code reviews, testing, and continuous integration to produce robust, production-ready data solutions.
Requirements
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
- A minimum of 3 years of professional experience in data engineering, focusing on AI and ML workflows.
- Expertise in Python programming with experience in libraries such as Pandas, NumPy, and Airflow.
- Practical experience in building and managing ETL/ELT pipelines.
- Solid understanding of cloud platforms (AWS, Azure, or GCP) and familiarity with deploying data infrastructure.
- Proficiency in SQL and NoSQL databases.
- Experience with container orchestration tools like Docker and Kubernetes is an advantage.
Preferred Skills
- Knowledge of AI/ML concepts and ability to collaborate effectively with data science teams.
- Experience with distributed data processing frameworks such as Apache Spark.
- Familiarity with data security, compliance, and best practices in data engineering.
Benefits
Competitive salary, leave entitlements, health insurance, and a hybrid work model.
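Since the qualifications call out Airflow alongside Pandas and NumPy, here is a minimal sketch of a daily ETL DAG. The DAG id, task names, and step bodies are placeholders; it targets the Airflow 2.x API.

```python
# Minimal Airflow 2.x DAG sketch for a daily ETL run.
# DAG id, task names, and step bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")     # placeholder step

def transform():
    print("clean and reshape with pandas/NumPy")   # placeholder step

def load():
    print("write curated tables to the warehouse")  # placeholder step

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # 'schedule_interval' on Airflow < 2.4
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run the steps in order
```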
Speechify helps over 50 million people turn text, like PDFs, books, Google Docs, news articles, and websites, into audio. Our tools support faster reading, better comprehension, and stronger retention. Products run across iOS, Android, Mac, Chrome Extension, and Web App. Recent recognition includes Chrome Extension of the Year from Google and the Apple Design Award for inclusivity in 2025. The team is fully distributed, with nearly 200 colleagues from backgrounds at Amazon, Microsoft, Google, and top PhD programs. There is no central office. Collaboration and new ideas drive the work.
Role Overview
Speechify is hiring a Software Engineer focused on data infrastructure and acquisition for the AI team. This position is central to collecting and managing large-scale datasets that support model training. The work enables efficient creation and handling of petabyte-scale data.
What You Will Do
- Find and connect new audio data sources to the ingestion pipeline.
- Maintain and expand cloud infrastructure on Google Cloud Platform (GCP) using Terraform.
- Work with scientists to improve data cost, throughput, and quality for future models.
- Partner with the AI team and leadership to plan the dataset roadmap for upcoming products.
Location
This role is based in Karachi, Pakistan, as part of Speechify’s distributed team.
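To make the ingestion work concrete, here is a minimal sketch of one possible step: uploading a fetched audio file to a Google Cloud Storage bucket with the official client. The bucket and object names are invented, and the listing implies that the surrounding infrastructure would be provisioned with Terraform rather than by hand.

```python
# Minimal sketch: push a fetched audio file into a GCS bucket.
# Bucket and object names are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()  # credentials/project come from the environment

bucket = client.bucket("speech-training-data")        # placeholder bucket
blob = bucket.blob("raw/podcasts/episode-0001.flac")  # placeholder object key

# Stream the local file up; a petabyte-scale corpus would batch this step
# across many workers rather than running it one file at a time.
blob.upload_from_filename("episode-0001.flac")
print(f"uploaded to gs://{bucket.name}/{blob.name}")
```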
Job Summary:
Join Creative Chaos as a Data Architect, where you will play a pivotal role in shaping the data architecture of our organization. You will collaborate with stakeholders, data scientists, and engineers to create scalable and efficient data solutions. Your proficiency in data modeling, database design, and integration techniques will be essential for the effective storage, retrieval, and analysis of data across various systems. Additionally, you will be instrumental in establishing data governance policies and best practices to uphold data quality, security, and compliance.
Responsibilities:
- Design and develop comprehensive data models and database schemas.
- Define strategies for data integration and migration.
- Engage with stakeholders to comprehend data requirements and translate them into technical solutions.
- Enhance database performance and ensure scalability.
- Implement data governance policies and maintain data quality standards.
- Guarantee data security and compliance with applicable regulations.
- Stay informed about industry trends and emerging technologies in data management.
- Provide technical guidance and mentorship to junior team members.
Join Our Team at CodeNinja
CodeNinja is at the forefront of AI-driven solutions, dedicated to empowering enterprises, government entities, and software buyers in crafting and managing intelligence-led systems tailored for mission-critical applications. Our expertise lies in seamlessly integrating AI into operational frameworks, leveraging robust engineering principles alongside AI-native methodologies to deliver tangible value, resilience, and sustainable ownership for our clients. With a global presence and diverse delivery strategies supported by AI Labs, AI Pods, and Global Capability Centers, we enable our teams to co-create scalable platforms across various regions and time zones.
Role Overview
As a pivotal player in our ERP program, you will spearhead the data strategy, ensuring comprehensive data readiness, effective conceptual data modeling, meticulous reporting data sourcing, and structured migration planning. Your efforts will cultivate a dependable, unified data foundation across all operational modules.
Key Responsibilities
- Conduct thorough evaluations of legacy data sources: Inventory, Maintenance (MRO), Procurement, HR, and Technical Documentation.
- Identify critical data quality issues: duplicates, missing entries, and inconsistencies; data ownership and governance deficiencies; redundant or siloed datasets.
- Establish a robust data cleansing and standardization framework.
- Create a comprehensive Data Readiness Assessment Report.
- Formulate an end-to-end Data Migration Strategy, encompassing migration methodologies (phased, module-wise, or big-bang) and data prioritization (master, transactional, historical).
- Develop Data Mapping Specifications: mapping from legacy systems to the ERP field level; transformation rules and standardization logic; strategies for handling missing or invalid data.
- Define a Data Cleansing & Preparation Plan: deduplication protocols; data enrichment processes as needed; validation checkpoints pre-migration.
- Craft a Migration Execution Plan: migration cycles (mock runs, dry runs, final cutover); rollback strategies in case of failure; downtime and cutover planning.
- Establish a Data Validation & Reconciliation Framework: pre-migration vs post-migration validation rules; reconciliation reports (counts, totals, relationships); sign-off criteria for each module.
- Generate key deliverables: Data Migration Plan, Data Mapping Documents, and Data Validation & Reconciliation Reports.
- Identify and enable innovative AI/ML use cases, such as predictive maintenance modeling, inventory demand forecasting, and supplier performance analytics.
- Utilize AI tools for data profiling and anomaly detection, data quality assessment and cleansing recommendations, and advanced analytics and insights generation.
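As a small illustration of the pre- versus post-migration reconciliation framework described above, the sketch below compares row counts, key sets, and a numeric total between a legacy extract and the migrated table. All file and column names are hypothetical.

```python
# Minimal reconciliation sketch: legacy extract vs migrated ERP table.
# File and column names are hypothetical placeholders.
import pandas as pd

legacy = pd.read_csv("legacy_inventory.csv")  # placeholder extract
migrated = pd.read_csv("erp_inventory.csv")   # placeholder extract

checks = {
    "row_count_match": len(legacy) == len(migrated),
    "quantity_total_match": legacy["qty"].sum() == migrated["qty"].sum(),
}

# Keys present on only one side point at dropped or duplicated records.
missing = set(legacy["item_id"]) - set(migrated["item_id"])
extra = set(migrated["item_id"]) - set(legacy["item_id"])
checks["no_missing_keys"] = not missing
checks["no_extra_keys"] = not extra

# A report like this would feed the per-module sign-off criteria.
for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```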
Full-time | On-site | Islamabad, Islamabad Capital Territory, Pakistan
As a Data Reporting and Management Specialist at prime-system, you will play a pivotal role in our Development team, focused on crafting and implementing scalable, data-centric solutions for internal teams and clients alike.
This position combines advanced reporting and analytics, data architecture, integration, and governance with a consultative, solution-focused methodology. You will serve as a trusted advisor, guiding stakeholders from problem identification to solution implementation, while ensuring the data ecosystem's reliability, security, and sustainability.
Key Responsibilities:
Consultation & Solution Architecture
- Collaborate with stakeholders to identify business challenges and design effective data solutions.
- Develop data models and reporting solutions that optimize accuracy, scalability, performance, and cost.
- Recommend tools, integrations, and architectures that align with business objectives and technical requirements.
- Influence data standards and strategies through careful design, documentation, and training.
Reporting & Analytics
- Create and manage high-quality reports, dashboards, and metrics to support operational and strategic decision-making.
- Translate complex data sets into clear and actionable insights suitable for both technical and non-technical audiences.
- Ensure that reporting solutions maintain accuracy, efficiency, and alignment with business goals.
- Establish and advocate for reporting best practices and standards.
Data Management & Governance
- Oversee the integrity, security, and reliability of internal data systems.
- Implement and uphold data quality, governance, and access standards.
- Act as a subject matter expert for data platforms and reporting solutions.
Data Integration & Warehousing
- Architect and support data integration pipelines across diverse systems.
- Design and manage data warehouses and centralized data models that facilitate analytics and reporting.