
Posted on 2025/12/05

Model Accuracy Development and Test Engineer, Data Centre AI Engineering KSA

Qualcomm Technologies, Inc.

Riyadh, Saudi Arabia

Full-time

Full Description

Position: Model Accuracy Development and Test Engineer (Data Centre AI Engineering KSA)

Company: Qualcomm Middle East Information Technology Company LLC

Job Area: Engineering Group, Engineering Group > Software Engineering

About Us

Qualcomm is enabling a world where everyone and everything can be intelligently connected.

You interact with products and technologies made possible by Qualcomm every day, including 5G‑enabled smartphones that double as pro‑level cameras and gaming devices, smarter vehicles and cities, and the technology behind the smart, connected factories that manufactured your latest purchase.

Qualcomm 5G and AI innovations are the power behind the connected intelligent edge.

You’ll find our technologies behind and inside the innovations that deliver significant value across multiple industries and to billions of people every day.

About the Role

We are seeking an Inference Accuracy Engineer to design, develop, and validate the accuracy of deployed deep learning models. The role focuses on deep accuracy analysis, debugging, accuracy evaluation, and recovery during inference on large data‑centre hardware platforms.

You will have strong problem‑solving ability, excellent Python programming skills, and hands‑on expertise with inference pipelines.

Key Responsibilities

• Define and implement accuracy KPIs across precision modes.

• Develop scalable Python‑based accuracy evaluation tools and automated pipelines.

• Implement accuracy‑preserving optimizations for inference frameworks (TensorRT, ONNX Runtime, AITemplate, Triton).

• Build and maintain automated pipelines for accuracy evaluation across multiple frameworks (ONNX, TensorFlow, PyTorch).

• Develop reusable plugins for pre‑processing, post‑processing, and metric evaluation.

• Execute comprehensive accuracy tests for large‑scale models (LLMs, vision, diffusion).

• Validate accuracy under various quantization and precision settings (FP32, FP16, INT8).

• Perform accuracy analysis with deep understanding of model architecture, including layers, attention mechanisms, and parameter configurations.

• Identify architecture‑driven accuracy degradation trends and propose optimization strategies.

• Identify issues related to pre‑processing drift, tokenization mismatches, operator fallback, and quantization effects.

• Analyse accuracy differences across hardware targets, firmware versions, and runtime backends.

• Perform slice‑based accuracy analysis (batch size, concurrency, sequence length, domain shifts).

• Design and run experiments to recover accuracy, including fine‑tuning, calibration, and hyperparameter adjustments.

• Debug accuracy failures by tracing root causes across data pre‑processing, model layers, quantization steps, and deployment pipelines.

• Compare results across different hardware/software stacks and generate actionable insights.

• Document workflows, maintain dashboards, and publish accuracy results for stakeholders.

Required Skills & Experience

• Strong background in AI/ML model evaluation and accuracy metrics.

• Solid understanding of model architectures (transformers, CNNs, RNNs, MoE) and their impact on accuracy.

• Experience with large language models (LLMs) and generative AI accuracy validation.

• Expertise with inference runtimes (TensorRT, ONNX Runtime, Triton).

• Understanding of quantization (INT8/FP8/INT4), calibration, QAT, and accuracy trade-offs.

• Experience with model graph conversion (PyTorch → ONNX → backend engines).

• Hands‑on experience with accuracy pipeline development and automation frameworks.

• Understanding of video generation model accuracy and multi‑modal evaluation benchmarking.

• Proficiency in Python and familiarity with ML toolkits (ONNX Runtime, TensorFlow, PyTorch).

• Expertise in accuracy analysis, including statistical methods and visualization tools.

• Ability to design experiments for accuracy recovery and debug accuracy failures effectively.

• Knowledge of quantization techniques and mixed‑precision workflows.

• Experience with data‑centre accelerators (NVIDIA A100/H100/B200, Qualcomm Cloud AI 100 Ultra, Gaudi, TPU).

• Knowledge of LLM accuracy evaluation tools (lm‑eval, HELM, synthetic benchmarks) is an advantage.

• Strong problem‑solving and analytical skills with the ability to isolate complex accuracy issues.

• Familiarity with distributed deployment systems (Kubernetes,…