Experience Level
Not Applicable
About the job
T-Systems Iberia is hiring an HXM Data Integration Consultant for the Granada office. This full-time role centers on building and maintaining data connections within SAP and SuccessFactors Human Capital Management systems. The focus is on ensuring that human experience management (HXM) data moves smoothly and reliably between platforms.
Key responsibilities
Integrate HXM data across SAP and SuccessFactors HCM systems
Streamline data flows to maintain consistent and dependable system performance
Work with colleagues and clients to support ongoing data integration needs
Requirements
Practical experience with SAP and SuccessFactors HCM platforms
Strong understanding of data integration tools and techniques
Interest in advancing human experience management systems
This position contributes to reliable HCM operations for clients by focusing on effective data integration and optimization.
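The core of the role above is moving HXM records reliably between SuccessFactors and SAP. As a rough illustration of the field-level mapping such integrations involve, here is a minimal Python sketch; the field names (`userId`, `PERNR`, and so on) are hypothetical examples for this sketch, not the actual schemas or APIs of either system, and a real integration would go through the platforms' own middleware.

```python
# Hypothetical sketch: translate a SuccessFactors-style employee record
# into a SAP-style target schema, flagging unmapped fields for review.
# All field names below are illustrative assumptions.

FIELD_MAP = {
    "userId": "PERNR",       # assumed mapping: SF user id -> SAP personnel number
    "firstName": "VORNA",
    "lastName": "NACHN",
    "department": "ORGEH",
}

def map_employee(sf_record: dict) -> dict:
    """Translate known fields; collect anything unmapped instead of dropping it."""
    target, unmapped = {}, []
    for key, value in sf_record.items():
        if key in FIELD_MAP:
            target[FIELD_MAP[key]] = value
        else:
            unmapped.append(key)
    return {"record": target, "unmapped": unmapped}

result = map_employee({"userId": "1001", "firstName": "Ana", "hobby": "chess"})
```

Keeping an explicit `unmapped` list, rather than silently discarding unknown fields, is one simple way to make a data flow auditable.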
We are seeking to enhance our Precision Medicine team by hiring a Senior Bioinformatician with extensive experience in omics data analysis, particularly in genomics. This role involves the design, development, optimization, validation, and operation of bioinformatics pipelines (WGS/WES, panels, RNA-seq, etc.), ensuring quality, traceability, reproducibility, security, and scalability of processes.
Key responsibilities
Design and implement comprehensive NGS analysis pipelines (QC, alignment, calling, annotation, prioritization, reporting) for data types such as FASTQ, BAM/CRAM, VCF/gVCF, and derived results
Develop and maintain workflows in Nextflow (DSL2), adhering to best practices (modularity, profiles, parameters, testing, reusability) and aligning with standards such as nf-core where applicable
Integrate bioinformatics tools and frameworks (e.g., BWA/DRAGEN, GATK, samtools/bcftools, VEP/ANNOVAR, FastQC/MultiQC), ensuring correct parameterization and versioning
Containerize pipelines (Docker/Singularity/Apptainer) and ensure reproducibility through dependency management and versioning
Deploy and execute pipelines in HPC and/or cloud environments, optimizing performance (parallelization, resource usage, caching, intermediate data, costs)
Define and execute validation/verification strategies for pipelines (controls, gold standards, sensitivity/specificity metrics as applicable) and prepare evidence for audits
Implement traceability, monitoring, and alerting mechanisms (logs, metrics, execution reporting, automated quality control)
Collaborate with data and infrastructure teams on operations in Kubernetes/OpenShift/EKS and automation through CI/CD
Document analysis workflows, assumptions, inputs/outputs, QC criteria, limitations, and versioning, facilitating knowledge transfer to team members and clinical/laboratory users
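The validation work described above includes sensitivity/specificity metrics against gold standards. As a minimal sketch of that idea, the snippet below compares a set of called variants to a truth set; the tuple representation of variants is a simplification for illustration, and real pipeline validation would typically use dedicated tools (e.g., hap.py) on VCF files.

```python
# Minimal sketch: compare called variants to a gold-standard truth set
# and compute sensitivity and precision. Variants are simplified to
# (chrom, pos, ref, alt) tuples for illustration only.

def validate_calls(called: set, truth: set) -> dict:
    tp = len(called & truth)   # true positives: called and in truth
    fp = len(called - truth)   # false positives: called but not in truth
    fn = len(truth - called)   # false negatives: truth variants missed
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
    }

truth = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T"), ("chr2", 50, "G", "A")}
called = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T"), ("chr3", 10, "T", "C")}
metrics = validate_calls(called, truth)  # 2 of 3 truth variants recovered
```

Recording these metrics per pipeline version is one way to produce the audit evidence the role mentions.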
Project Overview
Join Deutsche Telekom Technik's Digital Business (DBIZ) team, where we pioneer the development of scalable GenAI and agentic AI solutions across Telekom's technical domains. Our projects span fiber rollout, construction supervision, regulatory processes, and multi-agent systems with intricate network data access. As part of our internal GenAI platform, T-GAIA, we provide essential tools such as LLM endpoints, RAG chatbots, and agentic frameworks (LangGraph, LangChain, MCP), along with comprehensive applications such as TextmAIner, AI@Behördenkommunikation, FiberChatbot, and the multi-agent system “00Site”. Our core mission is to facilitate AI-driven transformations that enhance technology and to securely scale productive AI solutions in collaboration with domain experts.
Role Summary
We are looking for a proactive DevOps Engineer to become a vital part of our team, focusing on the operation and ongoing enhancement of our T-GAIA Chatbot Framework and the associated Chatbots. The position combines administrative and technical responsibilities: ensuring seamless operations, monitoring system performance, and implementing improvements that raise application reliability and accuracy.
Key responsibilities
Configure, troubleshoot, and deploy T-GAIA Chatbots as a cloud service alongside other GenAI products and enablers
Maintain and oversee the Chatbots and their underlying frameworks and pipelines
Ensure SLA compliance for the Chatbots, particularly regarding capacity and performance
Oversee SLA adherence for the T-GAIA platform
Document current operational status comprehensively
Support Incident, Problem, and Change Management processes to ensure smooth operations
Enhance the monitoring and alerting capabilities of the T-GAIA Framework and deployed Chatbots
Assist internal customers by resolving Service Desk tickets
Deploy and manage use cases using GitLab pipelines
Conduct administrative tasks such as user management, and technical tasks such as network configuration, IaC deployment, and service monitoring
Establish and maintain the Jira Service Desk and internal Wiki/documentation systems
Manage internal customer communications, including updates and roadmap discussions
Assist with MS Azure Cloud tooling and pipelines (deployment, monitoring, logging, security, IAM)
Develop foundational operational standards for hosting and support (e.g., SLAs, alerts, coverage)
Perform quality assurance checks for billing accuracy, security, and compliance
Gather internal requirements for Platform and Chatbot Operations and coordinate discussions with CCOE
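Several of the responsibilities above revolve around SLA compliance for the Chatbots. As a hedged sketch of what an availability check against an SLA target can look like, consider the snippet below; the 99.5% target and the incident durations are invented for illustration, since the posting does not state the actual SLA levels.

```python
# Illustrative SLA check: compute availability over a measurement window
# and compare it to a target. Target and downtime figures are assumptions.

def availability(total_minutes: int, downtime_minutes: int) -> float:
    """Return availability as a percentage of the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def sla_met(total_minutes: int, incident_minutes: list, target_pct: float) -> bool:
    """True if availability after all incidents still meets the target."""
    return availability(total_minutes, sum(incident_minutes)) >= target_pct

# A 30-day month has 43,200 minutes; two incidents totalling 90 minutes
# of downtime leave roughly 99.79% availability, above a 99.5% target.
month = 30 * 24 * 60
ok = sla_met(month, [60, 30], 99.5)
```

The same arithmetic, fed from monitoring data instead of hand-typed incident lists, is the basis of the SLA reporting the role calls for.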
At T-Systems, we lead the charge in technological innovation, providing cutting-edge solutions across sectors such as automotive, healthcare, and public services. Our AI Foundation Services team builds the platform infrastructure that supports AI inference at scale, including API gateways, authentication, billing, and multi-tenant services. We design and develop high-performance backend systems and APIs that drive intelligent applications across diverse industries. Our collaborative engineering culture fosters technical depth, creativity, and ownership, enabling our team to make a tangible impact in the real world.
Role Overview
We are seeking a highly skilled Senior Backend Engineer with expertise in Python development and strong system design capabilities. The role requires deep experience in designing, building, and scaling distributed systems and APIs. You will architect and maintain the backend platform that underpins our AI inference endpoints: a multi-tenant system managing authentication, API key management, usage metering, and billing services. Your contributions will keep our data-intensive, AI-powered solutions highly available. The position carries high ownership: you will influence architectural decisions, elevate engineering quality, and build systems that are performant, secure, and observable at scale, collaborating with experts in AI infrastructure and data engineering to create robust, efficient systems capable of handling millions of requests.
Responsibilities and Duties
Design and build core platform services such as API gateway, authentication, authorization, key rotation, and multi-tenant isolation
Implement and optimize APIs and backend systems using Python frameworks, primarily FastAPI (or Flask or Django)
Architect and implement usage metering, billing integration, and rate limiting for inference endpoints
Maintain scalable, fault-tolerant microservices for data processing and AI integration
Build and operate a high-throughput proxy/routing layer for AI model serving traffic
Work with cross-functional teams to design system architecture and ensure interoperability
Integrate telemetry and observability into the platform from the ground up, including structured logging, distributed tracing, metrics, and alerting
Implement robust CI/CD pipelines, monitoring, and observability for high-performance production systems
Drive technical decisions on architecture, data modeling, and technology choices
Identify performance bottlenecks and drive improvements in reliability, scalability, and latency
Establish engineering standards for the backend codebase, including testing, code review, CI/CD, and deployment practices
Ensure adherence to best practices for security and code quality
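One of the duties above is rate limiting for inference endpoints. A common technique for this is a token bucket; the sketch below is a minimal, single-process illustration with timestamps passed in explicitly, not the platform's actual implementation. A real multi-tenant gateway would typically keep per-key buckets in a shared store such as Redis.

```python
# Illustrative token-bucket rate limiter. Capacity and refill rate are
# example values; timestamps are injected so behavior is deterministic.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)   # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Refill proportionally to elapsed time, then try to spend one token."""
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Burst of three requests, then one after the bucket has refilled:
bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
results = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)]
```

The bucket absorbs a burst of two requests, rejects the third, and admits the fourth once enough time has passed for tokens to refill.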
Are you a visionary, technically adept leader who excels at steering dynamic Product Management teams through the intricacies of real-time data streaming? If you can convert complex technical challenges into a cohesive, motivating roadmap that aligns product initiatives with vital corporate goals, we want to hear from you. This role is ideal for a strategic thinker who embraces the concept of 'playing the infinite game' and is willing to take bold risks while scaling a team of technical experts.
In this pivotal role, you will work at the intersection of innovation and market-driven strategy as we venture into the future of Physical AI and autonomous systems. As our Senior Group Product Manager, you will not only lead a team but also provide the guidance and clarity needed to build lasting value and shape the future of autonomy.
About the Role
We are seeking a dynamic, skilled DevOps Engineer to join our innovative team, focusing on the operation and enhancement of our advanced text-recognition solution, which primarily facilitates communication with local authorities. The position combines administrative and technical duties: ensuring efficient operations, monitoring system performance, and implementing enhancements that improve application reliability and precision.
Your responsibilities
Manage, maintain, deploy, and monitor our AI-driven text-recognition application hosted on Google Cloud Platform (GCP)
Enhance and optimize the application's cloud architecture and resource allocation for scalability, cost-efficiency, and reliability
Automate monitoring, alerting, and incident resolution wherever possible
Integrate and oversee partner system connections using REST APIs and API Gateways
Support the operation and maintenance of Large Language Model (LLM) endpoints hosted on Azure
Track the technical performance of the AI model, including its statistical accuracy indicators and user feedback
Create and refine Document Intelligence models within Microsoft Azure
Support Incident, Problem, and Change Management processes
Continuously enhance the application's monitoring and alerting capabilities
Assist in resolving Service Desk tickets from our internal customers
Develop and optimize simple rules in Python to improve application output quality
Deploy and manage use cases via GitLab pipelines
Integrate services to extend application functionality and improve API management
Conduct administrative tasks such as user management, and technical activities including network configuration, Infrastructure as Code (IaC) deployment, and service monitoring
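The posting above mentions developing "simple rules in Python" to improve the text-recognition output. As a hedged sketch of what such a rule can look like, the snippet below normalizes common letter/digit confusions inside mostly-numeric tokens; the substitution table is a hypothetical example, since real rules would be derived from the model's observed error patterns.

```python
# Illustrative post-processing rule for OCR/text-recognition output:
# fix letter-for-digit confusions, but only inside tokens that contain
# at least two digits, so ordinary words are left alone.
# The confusion table (O->0, l->1, S->5) is an assumed example.

import re

DIGIT_FIXES = str.maketrans({"O": "0", "l": "1", "S": "5"})

def clean_numeric_fields(text: str) -> str:
    """Apply DIGIT_FIXES to tokens that are plausibly numbers or codes."""
    def fix(token: re.Match) -> str:
        return token.group(0).translate(DIGIT_FIXES)
    # Lookahead requires at least two digits in the token before rewriting.
    return re.sub(r"\b(?=\w*\d\w*\d)\w+\b", fix, text)

cleaned = clean_numeric_fields("Case number 4O7l2, office Olaf")
```

Because the rule is gated on digit density, "4O7l2" is corrected while the name "Olaf" passes through unchanged, which is the kind of precision/recall trade-off these output-quality rules have to make.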
Join our vibrant team at usasurveyjob as a Part-Time Customer Service Representative. In this exciting role, you will be part of our Earn at Home Panelist Program, where your insights on products, services, and market trends will help shape the future of consumer goods. As a valued team member, your responsibilities will include online data entry, email correspondence, product reviews, and participation in surveys and various online projects. This is a fantastic opportunity to work from the comfort of your home while having the chance to influence industry trends and preview products before they hit the market.