GenAI Data Automation Engineer (Public Trust Required) - Careers at Robert Half

Robert Half

Contract · Posted: Feb 4, 2026

Job Description

GenAI Data Automation Engineer (Public Trust Required) - Remote Opportunity

Join Robert Half's elite technology team as a GenAI Data Automation Engineer and drive transformative AI-powered data solutions in hybrid AWS-Azure environments. This mission-critical role supports public sector initiatives requiring Public Trust clearance, focusing on intelligent automation, scalable data pipelines, and Generative AI innovation. Perfect for delivery-oriented engineers passionate about solving complex technical challenges with cutting-edge cloud and AI technologies.

About the Role

Robert Half is seeking a talented GenAI Data Automation Engineer to design, develop, and optimize AI-driven automation solutions across AWS and Azure platforms. You'll build intelligent data pipelines that integrate enterprise tools, cloud services, and Generative AI to power analytics, reporting, and customer engagement systems. Working remotely (EST/CST), you'll collaborate with cross-functional teams in an Agile DevOps environment to deliver high-impact solutions for mission-critical applications.

This position demands critical thinking, sound software engineering practice, and expertise in hybrid cloud architectures. You'll enhance existing systems, create new AI-enabled products, and troubleshoot complex issues while ensuring security, compliance, and performance. Ideal candidates thrive in fast-paced settings, applying innovative approaches to Generative AI integration and data engineering challenges.

Key Responsibilities

Data Pipeline Architecture: Design and maintain scalable pipelines using AWS services like S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions. Develop ETL/ELT processes for seamless data movement across DynamoDB, SQL Server, and Azure SQL systems.
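
For illustration, a minimal sketch of one such ETL step appears below: copying records from a DynamoDB table into SQL Server with an upsert. The `orders` table, column names, and connection string are hypothetical, and a production job would more likely run as a Glue job or a Lambda fed by DynamoDB Streams rather than a full table scan.

```python
# Minimal DynamoDB -> SQL Server ETL sketch (all names are illustrative).
# Assumes boto3 credentials are configured and an ODBC driver is installed.
import boto3
import pyodbc

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical source table

# Page through the table; real pipelines would prefer Streams or Glue over a scan.
items, resp = [], table.scan()
items.extend(resp["Items"])
while "LastEvaluatedKey" in resp:
    resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
    items.extend(resp["Items"])

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=...;DATABASE=...;UID=...;PWD=..."
)
cur = conn.cursor()
for it in items:
    # Upsert each record so reruns stay idempotent.
    cur.execute(
        "MERGE dbo.Orders AS t USING (SELECT ? AS OrderId) AS s "
        "ON t.OrderId = s.OrderId "
        "WHEN MATCHED THEN UPDATE SET Amount = ? "
        "WHEN NOT MATCHED THEN INSERT (OrderId, Amount) VALUES (?, ?);",
        it["order_id"], it["amount"], it["order_id"], it["amount"],
    )
conn.commit()
```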

Real-Time & Batch Processing: Engineer ingestion pipelines with Apache Spark, Flume, and Kafka, targeting Apache Solr and AWS OpenSearch for analytics-ready data. Integrate Amazon Connect and NICE inContact CRM sources into enterprise pipelines.
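
A minimal sketch of such an ingestion path, assuming a hypothetical Kafka topic and OpenSearch domain; a real deployment would more likely use the opensearch-hadoop connector or a Glue streaming job than per-batch bulk calls from the driver.

```python
# Kafka -> Spark Structured Streaming -> OpenSearch sketch (placeholder names).
# Run with the Kafka connector on the classpath, e.g.
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 ...
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from opensearchpy import OpenSearch, helpers

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "crm-events")                 # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

def index_batch(df, epoch_id):
    # Bulk-index each micro-batch into a hypothetical OpenSearch index.
    client = OpenSearch(hosts=[{"host": "search-domain.example", "port": 443}],
                        use_ssl=True)
    actions = (
        {"_index": "crm-events", "_source": {"key": r["key"], "value": r["value"]}}
        for r in df.toLocalIterator()
    )
    helpers.bulk(client, actions)

events.writeStream.foreachBatch(index_batch).start().awaitTermination()
```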

Generative AI Innovation: Leverage AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, and LangChain to automate vector embeddings, data quality validation, metadata tagging, and lineage tracking. Build LLM-enhanced ETL transformations, anomaly detection, conversational BI interfaces, and AI copilots for pipeline monitoring.
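
On the embedding side, a minimal sketch of generating vectors through Bedrock with boto3 is shown below; the model id, region, and the metadata-tagging use case are assumptions for illustration, not details from the posting.

```python
# Sketch: vector embeddings for metadata tagging via Amazon Bedrock.
# Model id and region are assumptions; confirm availability in your account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    # Titan text embeddings take {"inputText": ...} and return {"embedding": [...]}.
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

# Embed column descriptions so similar fields across DynamoDB, SQL Server,
# and Azure SQL can be matched by cosine similarity for lineage tracking.
vec = embed("customer_id: unique identifier for the calling customer")
print(len(vec))
```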

Database & Performance Optimization: Implement SQL Server stored procedures, indexing, query profiling, and execution plan tuning for optimal performance.
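
Sketched with pyodbc, one version of this workflow fetches an estimated execution plan for a slow aggregate and then adds a covering index; every object name here is hypothetical.

```python
# Sketch: inspect an estimated plan, then add a covering index (pyodbc).
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};SERVER=...;DATABASE=...")
cur = conn.cursor()

# With SHOWPLAN_XML ON, SQL Server returns the XML plan instead of executing.
cur.execute("SET SHOWPLAN_XML ON")
cur.execute("SELECT CustomerId, SUM(Amount) FROM dbo.Orders GROUP BY CustomerId")
plan_xml = cur.fetchone()[0]
cur.execute("SET SHOWPLAN_XML OFF")

# If the plan shows a clustered index scan, a covering index may help.
cur.execute(
    "CREATE NONCLUSTERED INDEX IX_Orders_CustomerId "
    "ON dbo.Orders (CustomerId) INCLUDE (Amount)"
)
conn.commit()
```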

DevOps & Security: Champion CI/CD with GitHub, Jenkins, or Azure DevOps. Enforce security through IAM, KMS encryption, VPCs, RBAC, and firewalls while supporting Agile sprint deliveries.
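
As one example of the controls this implies, the boto3 sketch below enforces default KMS encryption and blocks public access on a hypothetical pipeline staging bucket; the bucket name and key alias are illustrative.

```python
# Sketch: baseline S3 encryption and access controls (illustrative names).
import boto3

s3 = boto3.client("s3")

# Default-encrypt everything written to the bucket with a customer KMS key.
s3.put_bucket_encryption(
    Bucket="genai-pipeline-staging",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/pipeline-data",
            },
            "BucketKeyEnabled": True,
        }]
    },
)

# Block all public access as a baseline control.
s3.put_public_access_block(
    Bucket="genai-pipeline-staging",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```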

Required Qualifications

Candidates must demonstrate 5+ years in data engineering with AWS/Azure expertise, Generative AI proficiency, and Public Trust clearance eligibility. Key skills include ETL mastery, Spark/Kafka processing, SQL optimization, and DevSecOps practices. A degree in Computer Science or equivalent experience is required, along with a proven track record in mission-focused delivery.

Why Join Us at Robert Half?

Robert Half offers unparalleled opportunities in a collaborative, innovative culture. Enjoy competitive compensation, remote flexibility, comprehensive benefits, and access to premier cloud/AI technologies. Contribute to meaningful public sector projects while advancing your career with continuous learning and growth paths. Apply now to shape the future of AI-driven data automation!

Locations

  • Atlanta, Georgia, United States

Salary

Estimated Salary Range (high confidence)

USD 165,000 - 195,000 per year

* This is an estimated range based on market data and may vary with experience and qualifications.

Skills Required

  • AWS (S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, Step Functions) - intermediate
  • Azure SQL - intermediate
  • ETL/ELT Processes - intermediate
  • Apache Spark, Flume, Kafka - intermediate
  • Generative AI (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) - intermediate
  • SQL Server Optimization (Stored Procedures, Indexing, Query Tuning) - intermediate
  • CI/CD (GitHub, Jenkins, Azure DevOps) - intermediate
  • Security & Compliance (IAM, KMS, VPC, RBAC) - intermediate

Required Qualifications

  • 5+ years of experience in data engineering with AWS and Azure hybrid environments
  • Proven expertise in building scalable data pipelines using S3, Glue, Lambda, EMR, and Step Functions
  • Hands-on experience with Generative AI frameworks including AWS Bedrock, Azure OpenAI, LangChain, and Hugging Face
  • Strong proficiency in ETL/ELT processes, Apache Spark, Kafka, and real-time data processing
  • Advanced SQL Server skills including stored procedures, indexing, query optimization, and performance tuning
  • Experience implementing CI/CD pipelines with GitHub, Jenkins, or Azure DevOps
  • Knowledge of security best practices: IAM, KMS encryption, VPC isolation, RBAC, and firewalls
  • Public Trust clearance eligibility required
  • Mission-focused mindset with Agile/DevOps experience and strong problem-solving skills
  • Bachelor's degree in Computer Science, Data Engineering, or related field (Master's preferred)

Responsibilities

  • Design and maintain robust data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions
  • Develop comprehensive ETL/ELT processes for data movement from DynamoDB to SQL Server within AWS, and bidirectionally between AWS and Azure SQL systems
  • Integrate Amazon Connect and NICE inContact CRM data into enterprise pipelines for analytics and operational reporting
  • Engineer advanced ingestion pipelines leveraging Apache Spark, Flume, and Kafka for real-time and batch processing into Apache Solr and AWS OpenSearch
  • Harness Generative AI services (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to automate vector generation, embeddings, data quality checks, metadata tagging, and lineage tracking
  • Enhance ETL processes with LLM-assisted transformations and anomaly detection, and build conversational BI interfaces for natural language SQL/Solr queries (a minimal sketch follows this list)
  • Develop AI-powered copilots for pipeline monitoring, automated troubleshooting, and operational efficiency
  • Implement SQL Server stored procedures, indexing strategies, query optimization, profiling, and execution plan tuning for peak performance
  • Apply CI/CD best practices using GitHub, Jenkins, or Azure DevOps for seamless deployment of data pipelines and GenAI integrations
  • Ensure enterprise-grade security and compliance through IAM policies, KMS encryption, VPC isolation, RBAC controls, and firewall configurations
  • Collaborate in an Agile DevOps environment, delivering sprint-based features for mission-critical analytics and customer engagement platforms
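
As referenced in the list above, here is a minimal sketch of a natural-language-to-SQL step for a conversational BI interface, built on Bedrock's Converse API. The schema string, prompt, and model id are illustrative assumptions, and a production system would validate and sandbox generated SQL before running it.

```python
# Sketch: natural language -> T-SQL via Bedrock Converse (illustrative setup).
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

SCHEMA = "dbo.Orders(OrderId int, CustomerId int, Amount decimal, CreatedAt datetime)"

def question_to_sql(question: str) -> str:
    resp = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model
        messages=[{
            "role": "user",
            "content": [{"text": f"Schema: {SCHEMA}\n"
                                 f"Write one T-SQL SELECT answering: {question}\n"
                                 "Return only the SQL."}],
        }],
    )
    return resp["output"]["message"]["content"][0]["text"]

print(question_to_sql("Total order amount per customer last month"))
```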

Benefits

  • Competitive salary range with performance-based bonuses
  • Comprehensive health, dental, and vision insurance plans
  • 401(k) retirement savings with generous company matching
  • Remote work flexibility (EST/CST time zones)
  • Professional development stipend for certifications and training
  • Generous PTO policy with additional floating holidays
  • Cutting-edge technology stack and cloud environment access
  • Mission-driven culture supporting public sector impact
  • Career growth opportunities within Robert Half Technology
  • Flexible work-life balance with wellness programs

Tags & Categories

Robert Half Careers, GenAI Jobs, Data Engineering Atlanta, AWS Azure Hybrid, Public Trust Clearance, Remote Tech Jobs Georgia, Generative AI Careers, Finance, Accounting, Admin
