
Posted on 2026/01/23

GenAI Data Automation Engineer (Public Trust Required)

Robert Half

Atlanta, GA, United States


Full Description

We are looking for a skilled GenAI Data Automation Engineer to design and implement innovative, AI-driven automation solutions across AWS and Azure hybrid environments.

You will be responsible for building intelligent, scalable data pipelines and automations that integrate cloud services, enterprise tools, and Generative AI to support mission-critical analytics, reporting, and customer engagement platforms.

The ideal candidate is mission-focused and delivery-oriented, and applies critical thinking to create innovative functionality and solve technical issues.

Location: REMOTE - EST or CST.

This position involves designing, developing, testing, and troubleshooting software programs to enhance existing systems and build new software products.

The ideal candidate will apply software engineering principles and collaborate effectively with colleagues to tackle moderately complex technical challenges and deliver impactful solutions.

Responsibilities:

Design and maintain data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions.

Develop ETL/ELT processes to move data from multiple data systems including DynamoDB → SQL Server (AWS) and between AWS ↔ Azure SQL systems.
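
As a rough illustration of the DynamoDB → SQL Server path, here is a minimal sketch of the flattening step (function and column names are hypothetical; an actual pipeline would read items via boto3 and load rows via pyodbc or Glue):

```python
from decimal import Decimal

def flatten_dynamodb_item(item: dict) -> dict:
    """Convert a DynamoDB-JSON item (e.g. {"id": {"S": "a1"}}) into a
    plain dict suitable for a parameterized SQL Server INSERT."""
    converters = {
        "S": str,                  # string
        "N": Decimal,              # DynamoDB numbers are arbitrary-precision
        "BOOL": bool,              # boolean
        "NULL": lambda _: None,    # explicit null
    }
    row = {}
    for column, typed_value in item.items():
        (dtype, raw), = typed_value.items()   # each value carries one type tag
        row[column] = converters[dtype](raw)  # maps, lists, and sets omitted here
    return row

# A loader would then bind rows into a parameterized statement, e.g.:
# cursor.executemany("INSERT INTO dbo.Orders (id, score) VALUES (?, ?)", rows)
```

Keeping the type conversion explicit makes schema mismatches fail loudly at the transform step rather than silently corrupting the SQL Server target.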

Integrate Amazon Connect and NICE inContact CRM data into the enterprise data pipeline for analytics and operational reporting.

Engineer and enhance ingestion pipelines with Apache Spark, Flume, and Kafka for real-time and batch processing into Apache Solr and Amazon OpenSearch.

Leverage Generative AI services and frameworks (Amazon Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to create automated processes for vector generation and embedding from unstructured data to support Generative AI models.
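
The vector-generation step typically begins with chunking the unstructured input. A minimal, model-agnostic sketch (the sizes are illustrative; the actual embedding call, e.g. to a Bedrock embedding model, is left as a comment):

```python
def chunk_text(text: str, max_chars: int = 800, overlap: int = 100) -> list[str]:
    """Split unstructured text into overlapping chunks sized for an
    embedding model's input window; overlap preserves context at boundaries."""
    if not 0 <= overlap < max_chars:
        raise ValueError("overlap must be non-negative and smaller than max_chars")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

# Each chunk would then be embedded and stored with its source metadata:
# vector = embed(chunk)  # e.g. a Bedrock Titan or Azure OpenAI embeddings call
```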

Automate data quality checks, metadata tagging, and lineage tracking.
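
Automated quality checks can start as simple, explicit rules before any LLM assistance is layered on top. A hedged sketch (field names are hypothetical):

```python
def run_quality_checks(rows: list[dict], required: list[str], unique_key: str) -> list[str]:
    """Return human-readable violations: missing required fields and
    duplicate values of the unique key column."""
    violations, seen = [], set()
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) in (None, ""):
                violations.append(f"row {i}: missing required field '{field}'")
        key = row.get(unique_key)
        if key in seen:
            violations.append(f"row {i}: duplicate {unique_key} {key!r}")
        seen.add(key)
    return violations
```

Emitting violations as data (rather than raising) lets the pipeline route them to a quarantine table or alerting step without halting the whole load.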

Enhance ingestion/ETL with LLM-assisted transformation and anomaly detection.

Build conversational BI interfaces that allow natural language access to Solr and SQL data.
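
One design concern with natural-language access to SQL data is that LLM-generated queries must be constrained to read-only operations before execution. A minimal guard sketch (the deny list is illustrative, not exhaustive):

```python
import re

# Keywords that should never appear in a generated analytics query.
_FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|merge|grant|exec)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Accept only a single read-only SELECT statement before it is
    executed against the warehouse on the user's behalf."""
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    stmt = statements[0].strip()
    return stmt.lower().startswith("select") and not _FORBIDDEN.search(stmt)
```

In practice this guard would sit alongside a read-only database role, since keyword filtering alone is not a complete defense.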

Develop AI-powered copilots for pipeline monitoring and automated troubleshooting.

Implement SQL Server stored procedures, indexing, query optimization, profiling, and execution plan tuning to maximize performance.

Apply CI/CD best practices using GitHub, Jenkins, or Azure DevOps for both data pipelines and GenAI model integration.

Ensure security and compliance through IAM, KMS encryption, VPC isolation, RBAC, and firewalls.

Support Agile DevOps processes with sprint-based delivery of pipeline and AI-enabled features.
