Posted on 2026/04/15
AI / Emerging Tech Security Analyst
Alignerr
Vancouver, BC
Job description
AI / Emerging Tech Security Analyst (AI Training)
About The Role
What if your security expertise could directly shape how the world's most advanced AI systems defend against attacks, misuse, and manipulation? We're looking for AI Security Analysts to stress-test frontier AI models — identifying vulnerabilities, evaluating adversarial scenarios, and helping ensure that cutting-edge AI stays safe, reliable, and aligned with real-world security standards.
This is a fully remote, flexible contract role built for security professionals who are curious about how modern AI systems behave when pushed to their limits.
If you understand how things break — and why that matters — this role was made for you.
• Organization: Alignerr
• Type: Hourly Contract
• Location: Remote
• Commitment: 10–40 hours/week
What You'll Do
• Analyze AI and LLM security scenarios to understand how models behave under adversarial, edge-case, or unexpected conditions
• Review and evaluate prompt injection attacks, data leakage risks, model abuse patterns, and system misuse cases
• Classify security issues by real-world impact and likelihood, and recommend appropriate mitigations
• Help evaluate and improve AI system behavior so it remains safe, reliable, and aligned with security best practices
• Work across realistic scenarios drawn from the cutting edge of AI deployment and research
• Complete task-based assignments independently on your own schedule
Who You Are
• Background in cybersecurity, application security, or a related field — with a strong interest in how AI systems are built and deployed
• Familiar with modern security threat modeling and how those concepts apply to emerging AI technologies
• Naturally analytical and precise — you think carefully about complex systems and potential failure modes
• Curious about AI: how large language models work, where they can go wrong, and what it takes to keep them safe
• Comfortable working independently and communicating findings clearly in writing
Nice to Have
• Hands-on experience with penetration testing, red teaming, or adversarial security research
• Familiarity with AI/ML systems, LLMs, or prompt engineering
• Background in application security, cloud security, or software engineering
• Experience with threat modeling frameworks or security risk classification
• Prior involvement in AI safety, alignment research, or responsible AI initiatives
Why Join Us
• Work directly on frontier AI systems alongside leading research labs
• Fully remote and flexible — work when and where it suits you
• Freelance autonomy with the structure of meaningful, task-based work
• Make a tangible impact on the safety and security of AI systems that affect millions of people
• Potential for ongoing work and contract extension as new projects launch