About the job
Welcome, Future Homie!
At Homebase, we are a dynamic team dedicated to empowering small businesses. Our focus is on building solutions with empathy, urgency, and a bold spirit that drives substantial real-world change. Here, each Homie is committed to elevating standards, supporting colleagues, and celebrating collective achievements.
We’re not merely developing an application; we're fostering resilient teams. So, are you ready to join us?
Your Impact Begins Here
We are on the lookout for a Senior Corporate AI Security Engineer who is eager to facilitate secure AI innovation at scale. In this pioneering role, you will establish AI security protocols at Homebase, creating frameworks and controls that enable teams to harness cutting-edge AI technologies—from generative AI tools to Model Context Protocol deployments—while safeguarding sensitive data and ensuring compliance. This is a unique opportunity to shape and own Homebase's AI security architecture, bridging emerging AI technologies with corporate security and empowering internal teams to innovate with confidence.
Note: This position focuses on securing internal AI tools and operations, rather than product-facing AI features.
Here are the key contributions you will make in this role:
Design and execute security standards for internal AI tools, APIs, model integrations, and AI lifecycle management, facilitating safe and scalable AI adoption throughout Homebase.
Develop governance frameworks for deploying internal AI agents, managing training data, and overseeing inference operations, balancing security rigor with business enablement.
Oversee and protect MCP (Model Context Protocol) server deployments with continuous verification and audit trails, establishing a zero-trust architecture for internal AI interactions.
Design identity and access management for AI agents, automation tools, and machine-to-machine services, implementing least privilege principles for non-human identities.
Enforce data loss prevention policies for internal and third-party AI tools (e.g., ChatGPT, Claude, Gemini) to protect sensitive information during prompt exchanges and model training.
Create frameworks for privacy-preserving data handling to prevent data leaks and intellectual property exposure through AI workflows.
Collaborate with internal security domains (Application Security, Detection, Governance, Risk and Compliance, and Infrastructure Security) to identify repetitive patterns and manual tasks and streamline or automate them.