< More Jobs

Posted on 2025/02/21

Gen AI Red-teaming Engineer.

EITAcies, Inc.

Austin, TX, United States

Full-time and Part-time
$60 an hour

Qualifications

  • Prompt Engineering and LLM security evaluation expertise

  • Hands-on experience with AI red-teaming tools like PyRIT, Garak, Giskard

  • Experience working with LLM-powered applications and RAG (Retrieval-Augmented Generation) architectures

  • Strong Python programming skills and experience with AI security tools

  • Knowledge of adversarial machine learning, AI threat models, and security risks

  • Background in AI safety, bias detection, and jailbreak detection

  • Ability to work in a fast-paced environment and problem-solve complex AI security issues

Responsibilities

  • This role focuses on evaluating the security of LLM-based applications by identifying vulnerabilities, adversarial attacks, and bias within AI systems

  • Conduct red-teaming exercises on LLM-powered applications to identify security gaps

  • Utilize automated AI security tools (e.g., PyRIT, Garak, Giskard) to test AI models

  • Implement adversarial machine learning techniques to assess AI robustness

  • Work on Prompt Engineering to evaluate AI model behavior and vulnerabilities

  • Develop AI safety measures, including bias detection, jailbreak prevention, and AI threat modeling

  • Collaborate with teams to enhance AI security and ensure responsible AI development

Full Description

We are looking for a Gen AI Red-teaming Engineer with 5+ years of experience to work onsite in Austin, Texas.

This role focuses on evaluating the security of LLM-based applications by identifying vulnerabilities, adversarial attacks, and bias within AI systems.

Key Responsibilities:

• Conduct red-teaming exercises on LLM-powered applications to identify security gaps.

• Utilize automated AI security tools (e.g., PyRIT, Garak, Giskard) to test AI models.

• Implement adversarial machine learning techniques to assess AI robustness.

• Work on Prompt Engineering to evaluate AI model behavior and vulnerabilities.

• Develop AI safety measures, including bias detection, jailbreak prevention, and AI threat modeling.

• Collaborate with teams to enhance AI security and ensure responsible AI development.

Must-Have Skills:

• Prompt Engineering and LLM security evaluation expertise.

• Hands-on experience with AI red-teaming tools like PyRIT, Garak, Giskard.

• Experience working with LLM-powered applications and RAG (Retrieval-Augmented Generation) architectures.

• Strong Python programming skills and experience with AI security tools.

• Knowledge of adversarial machine learning, AI threat models, and security risks.

• Background in AI safety, bias detection, and jailbreak detection.

• Ability to work in a fast-paced environment and problem-solve complex AI security issues.

Zero to AI Engineer Program

Zero to AI Engineer

Skip the degree. Learn real-world AI skills used by AI researchers and engineers. Get certified in 8 weeks or less. No experience required.