About the job
About Lucidya
Lucidya is a cutting-edge AI-driven platform dedicated to enhancing customer experience (CX) intelligence. Our platform autonomously oversees complete customer lifecycles, from the first engagement to retention and growth. Unlike traditional platforms that merely present insights, Lucidya actively closes the feedback loop through proprietary NLU capabilities, developed in-house and refined through millions of multilingual conversations. This empowers marketing, support, CX, and research teams to deliver personalized experiences that significantly elevate customer satisfaction, retention, and lifetime value.
Why This Role Is Essential
We are expanding our Customer Experience Management (CXM) platform to incorporate video intelligence and multimodal AI. Currently, we analyze vast amounts of social and enterprise data. In the future, we aim to interpret video content, including sentiment, context, and intent, across visual, audio, and textual signals. This is not merely a side project; it is a pivotal direction for our core product.
This role is crucial because:
- You will lead the development of video AI from initial concept to full production.
- You will influence the deployment and optimization of self-hosted LLMs.
- Your input will shape our architectural and engineering standards.
- Your work will have a direct impact on our enterprise clients' experiences.
If you prefer structured tasks and detailed instructions, this may not be the right fit for you. However, if you enjoy taking ownership, working autonomously, and building live products, you’ll excel.
Your Responsibilities
Own Outcomes, Not Just Tasks
You will collaborate with Product teams to define the roadmap and execute against it, rather than merely working on isolated tickets. You’ll transform concepts like “video listening” into reliable, deployed features that customers actually use.
Key tasks include:
- Designing and constructing comprehensive video analysis pipelines (visual, audio, and text).
- Extracting sentiment, intent, and semantic meaning from multimodal datasets.
- Deploying models in secure production environments.
- Ensuring that what you deliver performs effectively in production, not just in a notebook.
Manage Self-Hosted LLM Systems
Due to Saudi data regulations, we require private hosting for our models. You will:
- Oversee and enhance the self-hosted LLM infrastructure.
- Manage inference pipelines and assess performance trade-offs.
- Guarantee scalability, reliability, and security.
- Consider factors beyond model accuracy, including cost, latency, and uptime.
You are not just a researcher; you are an end-to-end AI builder.
Elevate Team Standards
While you will not have direct reports, you will:
- Mentor mid-level and junior engineers.
- Review code and contribute to architectural discussions.
- Bring clarity to ambiguous technical discussions.
- Advocate for high standards when necessary.
We appreciate team members who can express differing opinions respectfully and articulate their reasoning clearly.
