Aledade - Staff Security Engineer (AI)
Requirements
• BS/BTech (or higher) in Computer Science, Information Technology, Cybersecurity, or a related field; or a minimum of 10 years of security domain experience in lieu of a degree.
• 8+ years of experience in software or security engineering within cloud-native environments.
• Experience with cloud technologies such as AWS, Azure, or GCP is preferred.
• Strong familiarity with server-side web technologies (e.g., Java, Python, Scala, C#, C++, Go).
• 4+ years of experience acting as a trusted technical decision-maker in a team setting.
• Experience building continuous integration and continuous deployment (CI/CD) pipelines is preferred.
• Domain-specific experience with AppSec and AI/ML security is required, including leading an AI/ML security program for SaaS systems using models such as Gemini, Claude, LightLLM, and AWS Bedrock. Must be able to design robust security controls covering model training, inference, and data pipelines; identify and mitigate threats such as model inversion, data poisoning, adversarial attacks, and prompt injection; collaborate with cross-functional teams to embed security throughout the AI/ML lifecycle; and conduct thorough threat modeling and risk assessments for AI systems.
• Experience securing APIs and endpoints critical for model access and inference is preferred.
Responsibilities
• Lead the development, implementation, and ongoing maintenance of comprehensive security strategies and solutions.
• Design and deploy advanced security controls to safeguard networks, systems, and applications.
• Work across disciplines to shape our security services strategy and execution.
• Mentor and galvanize new engineers to do their best work.
• Set and uphold the standard for security processes to support high-quality engineering.
• Lead an AI/ML security program, including building SaaS systems using models such as Gemini, Claude, LightLLM, and AWS Bedrock; designing robust security controls for these systems covering model training, inference, and data pipelines; and proactively identifying and mitigating diverse threats such as model inversion, data poisoning, adversarial attacks, and prompt injection.
• Collaborate with teams including data scientists, ML engineers, and DevOps to embed security throughout the AI/ML lifecycle.
• Conduct threat modeling and risk assessments for AI systems and algorithms; develop methods to monitor AI systems for anomalous behavior and potential misuse.
• Secure APIs and endpoints critical for model access and inference.