Posted on 2025/12/06

AI Governance Consulting

Crowe

Boston, MA, United States

Full-time


Full Description

Job Title: AI Governance Consulting - Technical Manager

Crowe's AI Governance Consulting team helps organizations build, assess, run, and audit responsible AI programs. We align AI practices with business goals, risk appetite, and evolving regulations and standards (e.g., NIST AI RMF 1.0, ISO/IEC 42001, EU AI Act), enabling clients to adopt AI confidently and safely.

You will be the hands-on lead for independent testing and operational monitoring of AI systems (including GenAI).

You'll design and run evaluations, stand up monitoring pipelines, quantify risks (bias, robustness, safety, privacy), and provide transparent reporting to business, risk, and technology stakeholders.

Your Key Responsibilities:

• Independent testing: Design and execute independent test plans for classical ML and LLMs/GenAI (functional accuracy, robustness, safety, toxicity, jailbreak/prompt-injection resistance, hallucination/error rates); define acceptance criteria and make go/no-go recommendations.

• Sales enablement: Partner with teams to qualify opportunities; shape solutions, statements of work (SOWs), and engagement letters (ELs); develop proposals and pricing; and contribute to pipeline reviews. Build client-ready collateral.

• Offering development: Evolve Crowe's AI Governance methodologies, accelerators, control libraries, templates, and training. Incorporate updates from standards bodies and regulators into our playbooks (e.g., NIST's Generative AI Profile, NIST AI 600-1).

• Thought leadership: Publish insights, speak on webinars/events, and support marketing campaigns to grow brand presence.

• People leadership: Supervise, coach, and develop consultants; manage engagement economics (scope, timeline, budget, quality) and support recruiting.

• Bias/fairness: Plan and run bias/fairness assessments using appropriate population slices and fairness metrics; document mitigations per NIST guidance on identifying and managing bias.

• Explainability: Produce model explainability/transparency artifacts (e.g., model cards, method documentation) and apply techniques (SHAP, LIME, feature attributions) aligned to NIST's Four Principles of Explainable AI.
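To illustrate the independent-testing responsibility above, here is a minimal sketch of an acceptance gate that turns eval-run metrics into a go/no-go recommendation. The metric names and thresholds are hypothetical assumptions for illustration, not Crowe's actual acceptance criteria.

```python
# Hedged sketch: a minimal go/no-go gate over LLM eval metrics.
# All metric names and thresholds below are illustrative assumptions.

def go_no_go(results, criteria):
    """results: metric -> observed value; criteria: metric -> (op, threshold).
    Returns ("GO"|"NO-GO", dict of failing metrics)."""
    failures = {}
    for metric, (op, threshold) in criteria.items():
        value = results[metric]
        ok = value >= threshold if op == ">=" else value <= threshold
        if not ok:
            failures[metric] = (value, op, threshold)
    return ("GO" if not failures else "NO-GO", failures)

# Hypothetical acceptance criteria for one eval run
criteria = {
    "functional_accuracy": (">=", 0.90),  # at least 90% correct on the eval set
    "hallucination_rate":  ("<=", 0.05),  # at most 5% unsupported claims
    "jailbreak_success":   ("<=", 0.01),  # at most 1% successful jailbreaks
}
results = {"functional_accuracy": 0.93, "hallucination_rate": 0.08, "jailbreak_success": 0.0}
decision, failures = go_no_go(results, criteria)
# decision == "NO-GO": hallucination_rate 0.08 exceeds the 0.05 threshold
```

In practice a gate like this would sit at the end of an eval pipeline, with each metric computed by its own harness; the point is that acceptance criteria are written down up front and checked mechanically.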
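The bias/fairness responsibility above can likewise be sketched in code. This is a minimal, self-contained example of two common group-fairness metrics (demographic parity difference and disparate impact ratio) computed over population slices; the data, slice labels, and the 0.8 screening threshold mentioned in the comment are illustrative assumptions, not a prescribed methodology.

```python
# Hedged sketch: group-fairness metrics for a binary classifier's
# predictions, computed per population slice. Illustrative only.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate per group (e.g., per demographic slice)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += int(pred == 1)
        counts[grp][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def fairness_metrics(predictions, groups):
    rates = selection_rates(predictions, groups)
    hi, lo = max(rates.values()), min(rates.values())
    return {
        "selection_rates": rates,
        "demographic_parity_diff": hi - lo,  # 0.0 means equal selection rates
        # 0.8 is a commonly used screening threshold for this ratio
        "disparate_impact_ratio": lo / hi if hi else 1.0,
    }

# Hypothetical predictions for two slices "A" and "B"
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
m = fairness_metrics(preds, groups)
# Slice A selects at 0.75, slice B at 0.25: parity diff 0.5, impact ratio ~0.33
```

A real assessment would compute these per protected attribute and per model decision point, then document observed gaps and mitigations alongside the metric definitions used.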