Full Description
Overview

We're seeking a technically focused AI Security Engineer to design, implement, and manage the security of AI and ML systems across the data, model, and deployment layers.
The role combines deep expertise in cybersecurity, DevSecOps, and applied machine learning, with hands-on experience building resilient, privacy-preserving, and production-safe AI solutions.
Key Responsibilities

Model Security & Hardening: Implement adversarial training, gradient masking, watermarking, and integrity verification to protect ML models.
Privacy & Data Protection: Apply differential privacy, secure aggregation, and federated learning techniques to safeguard sensitive data.
MLOps & Infrastructure: Secure containerised and cloud-native ML environments (Kubernetes, Docker, Terraform, MLflow, Vault, CI/CD).
Secure Deployment: Harden inference APIs with encryption, rate-limiting, authentication, and runtime monitoring.
Threat Modelling & Adversarial Testing: Conduct ATT&CK-for-ML-aligned threat modelling and red-team style testing for adversarial, poisoning, and prompt-injection attacks.
Monitoring & Observability: Implement drift detection, performance telemetry, and anomaly detection using Prometheus, Grafana, or ELK.
Cross-Functional Collaboration: Work closely with data scientists, ML engineers, and security teams to embed secure design principles across the AI lifecycle.
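The differential-privacy technique named in the responsibilities above can be illustrated with the classic Laplace mechanism, which perturbs a numeric query result with noise scaled to sensitivity/epsilon. This is a minimal stdlib-only sketch for illustration, not tied to any particular DP library; the function name and signature are our own.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    """Return `true_value` plus Laplace noise calibrated for epsilon-DP.

    Noise scale b = sensitivity / epsilon yields epsilon-differential
    privacy for a query with the given L1 sensitivity.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sample from Laplace(0, scale): u is uniform on [-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

In practice, smaller epsilon values mean more noise and stronger privacy; the noisy outputs remain unbiased, so averages over many releases converge to the true value.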
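The rate-limiting part of the secure-deployment item above is commonly implemented with a token bucket in front of the inference API. The sketch below is a generic, framework-agnostic illustration; the class name and parameters are our own, not from any specific gateway.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter for an inference endpoint.

    Allows bursts of up to `capacity` requests, refilled at `rate`
    tokens per second. Illustrative sketch, not production code.
    """

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A deployment would typically keep one bucket per client identity (API key or token subject), combining this with the encryption and authentication controls listed above.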
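For the drift-detection item in the monitoring responsibility, a common lightweight check is the Population Stability Index (PSI) between a reference feature distribution and a live one. The sketch below is stdlib-only and assumes pre-binned fractions; the conventional reading of PSI thresholds (0.1 / 0.2) is a rule of thumb, not a standard.

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index between two binned distributions.

    Inputs are per-bin fractions, each summing to ~1. PSI < 0.1 is usually
    read as stable, 0.1-0.2 as moderate drift, > 0.2 as significant drift.
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total
```

In a monitoring stack such as the Prometheus/Grafana setup listed above, a PSI value computed per feature on a schedule can be exported as a gauge and alerted on.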
Technical Stack & Tools

Languages & Frameworks: Python, Bash, Go, PyTorch, TensorFlow, Hugging Face Transformers, ONNX
Cloud & DevSecOps: AWS (SageMaker, ECR, IAM), Azure ML, GCP Vertex AI, GitHub Actions, Terraform, Vault
Automation & Integration: Zapier, n8n, Power Automate, LangChain, Dialogflow, Rasa
Monitoring & Security Ops: Prometheus, Grafana, ELK Stack, Vault, Kubernetes security controls

Preferred Experience

4–7 years' experience in security engineering, DevSecOps, or data security
2–4 years' hands-on experience securing ML or LLM workloads in production environments
Exposure to adversarial ML, LLM security (prompt injection, data leakage testing), and privacy-preserving techniques
Familiarity with cloud-native ML tooling (MLflow, Kubeflow, Vertex AI, SageMaker)
Strong understanding of AI governance, compliance, and secure model deployment frameworks

Soft Skills

Analytical and structured problem-solving
Excellent stakeholder communication across security and data teams
Ability to translate complex technical risk into business impact
Curiosity and a continuous learning mindset in fast-evolving AI security domains

Notes on Experience Expectations

AI security as a discipline has evolved rapidly since roughly 2018. Candidates with a strong foundation in cybersecurity and cloud engineering, and 2–5 years of hands-on AI/ML security work, will be well suited for this role, even if they do not meet the longer "AI experience" requirements literally.
