Qualifications
Requirements
Minimum of 3 years of experience in platform/application support within data or analytics environments.
Proficient understanding of GCP analytics services such as BigQuery, Cloud Functions, and IAM.
Hands-on experience with the Hadoop ecosystem, including Kafka, Spark, Hive, Sqoop, Oozie, and HDFS.
Strong SQL skills with a focus on optimizing data pipeline performance.
Experience with ETL pipelines, monitoring tools, and logging systems.
Familiarity with APIs, microservices, and system integrations (CRM/BSS adapters).
Experience with ticketing systems such as Jira, ServiceNow, or similar platforms.
Adept at analyzing logs, troubleshooting performance issues, and collaborating with engineering teams.
Knowledge of data governance, metadata management, data lineage, and quality frameworks.
Preferred Qualifications
Experience with GCP (BigQuery, Dataflow, Pub/Sub) or AWS (Glue, Redshift, Kinesis).
Familiarity with data security and masking tools (Apache Ranger, Voltage, IAM policies).
Knowledge of NDMO standards and enterprise data frameworks (DAMA, DCAM).
Experience with BI/visualization tools (Power BI, Tableau, MicroStrategy, Looker Studio).
Familiarity with CI/CD practices, GitOps, Docker, Kubernetes, and Terraform.
Exposure to the telecom domain (BSS, OSS, CDR, usage analytics, CLDM).
Understanding of data privacy, consent management, and k-anonymity.
About the job
Join our dynamic team at Masterworks Co. as a Cloud Operations Support Engineer, where you will play a vital role in supporting and monitoring our cloud-based data and analytics platforms. This position focuses on maintaining system stability, optimizing performance, resolving incidents, and achieving operational excellence within GCP analytics environments and Hadoop ecosystems.
Key Responsibilities
Monitor the health, uptime, and performance of the Slisor portal and its backend services, including dashboards, APIs, and workflows.
Lead the triage of incidents, conduct root cause analysis, and coordinate escalations with engineering teams and vendors.
Oversee the daily operations of GCP analytics services and manage data pipelines efficiently.
Ensure scalability and security across ingestion, storage, and consumption layers.
Track the secure transfer of anonymized datasets from on-premise data science and AI platforms to the cloud.
Manage user access and provisioning, and handle support requests through a ticketing system.
Create and maintain operational dashboards that display uptime, error trends, campaign metrics, and usage analytics.
Engage in knowledge transfer sessions during new software releases.
Provide on-call support for critical incidents outside of regular working hours.