Posted on 2026/02/04

Senior Full Stack Engineer (Backend Focus) - AI Platform, Seattle, Hybrid

Planetary Talent

Seattle, WA, United States

Full-time

Full Description

PLEASE APPLY HERE: https://app.planetarytalent.com/apply?role=72686982-567d-40a0-878d-8c99a9bcaf63

AI-native system for turning regulatory evidence into market-ready packets.

Tracks proofs across product × site × market, orchestrates supplier campaigns, validates document lineage/expirations, encodes rules (RoHS, REACH, TSCA, PFAS, CE/UL/CSA, EPD/EN 15804, DoP/CPR), and ships one-click outputs to customers, auditors, and borders.

Automatic. Scalable. Real-time.

The Role

Senior Backend Engineer building and owning core services: evidence graph, supplier pipelines, rules/validations, and packet factory. Heavy on APIs, data modeling, event-driven workflows, and reliability. Work closely with product and AI to ship fast, measure impact, and keep SLAs sharp. Modern stack, real ownership, high leverage.

What You’ll Do

• Build product end-to-end. Design, implement, and ship backend services (Node.js/TypeScript, Python) that ingest documents, validate evidence, and generate market-ready packets.

• Own APIs and data models. Design clean REST/GraphQL APIs; model data across Postgres/Aurora (relational), DynamoDB (document), and S3; enforce provenance and audit trails.
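As a sketch of the provenance/audit-trail pattern this bullet describes (the type and field names below are illustrative, not the product's actual schema — in production the log would live in Postgres or DynamoDB rather than memory):

```typescript
// Hypothetical record shapes -- names are illustrative, not the real schema.
type AuditEntry = {
  recordId: string;
  action: "created" | "updated" | "expired";
  actor: string;
  at: string; // ISO-8601 timestamp
};

// Append-only audit log: entries are only ever pushed, never mutated,
// so the full provenance chain of an evidence record can be replayed.
class AuditLog {
  private entries: AuditEntry[] = [];

  append(entry: AuditEntry): void {
    this.entries.push(entry);
  }

  // Replay the history of one evidence record, in insertion order.
  historyFor(recordId: string): AuditEntry[] {
    return this.entries.filter((e) => e.recordId === recordId);
  }
}
```

The key design choice is that the log is append-only: corrections are new entries, so auditors can always reconstruct who changed what and when.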

• AI + guardrails. Integrate LLM/embedding services with deterministic rules; wire RAG pipelines, citations, confidence thresholds, and human-in-the-loop review paths.
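The confidence-threshold/human-in-the-loop split can be sketched as a small router; the threshold value and field names here are assumptions for illustration, not the product's actual parameters:

```typescript
// An extracted field with the model's confidence score.
type Extraction = { field: string; value: string; confidence: number };

// Discriminated union: either auto-accepted or queued for human review.
type Routed =
  | { kind: "accepted"; extraction: Extraction }
  | { kind: "needs_review"; extraction: Extraction };

// Route an extraction based on confidence; low-confidence results go to
// a human review path instead of flowing straight into a packet.
function routeExtraction(e: Extraction, threshold = 0.9): Routed {
  return e.confidence >= threshold
    ? { kind: "accepted", extraction: e }
    : { kind: "needs_review", extraction: e };
}
```

In practice the review decision, reviewer identity, and final value would themselves be recorded in the audit trail so accepted packets stay explainable.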

• Rules and validations. Encode checks for RoHS/REACH/TSCA/PFAS, UL/CE/IEC/CSA, NSF 61/372, DoP/CPR, EPD/EN 15804; implement versioning and diffs.
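A minimal sketch of a versioned rule check, using the commonly cited RoHS homogeneous-material limits (lead 1000 ppm, cadmium 100 ppm) as example data — the types and version labels are illustrative, not the actual rules engine:

```typescript
// A substance limit tagged with the rule version it came from, so every
// validation result can record exactly which rule revision it ran against.
type Rule = { substance: string; maxPpm: number; ruleVersion: string };

const rohsRules: Rule[] = [
  { substance: "lead", maxPpm: 1000, ruleVersion: "rohs-2011-65-eu" },
  { substance: "cadmium", maxPpm: 100, ruleVersion: "rohs-2011-65-eu" },
];

// Check measured concentrations (ppm) against a rule set; a missing
// measurement is treated as 0 ppm here for simplicity.
function check(measuredPpm: Record<string, number>, rules: Rule[]) {
  return rules.map((r) => ({
    substance: r.substance,
    ruleVersion: r.ruleVersion,
    pass: (measuredPpm[r.substance] ?? 0) <= r.maxPpm,
  }));
}
```

Carrying `ruleVersion` through to results is what makes diffs possible: re-running a packet against a new rule version yields two result sets that can be compared field by field.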

• Reliability at scale. Use eventing/queues (SNS/SQS/Kinesis), serverless (Lambda) and containerized services to process large doc volumes with strong SLAs.
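SQS-style queues deliver at-least-once, so consumers must tolerate redelivery. A sketch of the idempotency half of that pattern, simulated in memory (in production the dedupe set would live in a durable store such as DynamoDB conditional writes, not a `Set`):

```typescript
// A queue message; with at-least-once delivery the same id may arrive twice.
type Message = { id: string; body: string };

// Wrap a handler so each message id is processed at most once, even if the
// queue redelivers it. Returns true if the handler ran, false on a dupe.
function makeConsumer(handle: (body: string) => void) {
  const seen = new Set<string>();
  return (msg: Message): boolean => {
    if (seen.has(msg.id)) return false; // duplicate delivery: skip side effects
    handle(msg.body);
    seen.add(msg.id); // mark processed only after the handler succeeds
    return true;
  };
}
```

Marking the id only after the handler succeeds means a crash mid-handler leads to a retry rather than a silently dropped document.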

• Security and compliance. Build with least privilege, secrets hygiene, and logging to support SOC 2/GDPR; contribute to threat modeling and privacy reviews.

• DevEx and quality. Add tests (unit/integration/e2e), CI/CD (GitHub Actions), feature flags, and deep observability (DataDog/OpenTelemetry) to keep the system fast and debuggable.

• Integrations. Ship connectors to ERP/PLM (SAP, Oracle, Teamcenter, Windchill), identity (SSO/SAML/OIDC), and content stores; later, push packets to routing tools.

• Own outcomes. Partner with PM/design to scope, instrument, and iterate, measuring minutes-to-packet, extraction precision, and time-to-first-value.

Our Stack (today)

• Frontend: React, TypeScript, Next.js, Tailwind, Vite

• Backend: Node.js (TypeScript), Python (for AI/ETL), REST/GraphQL, gRPC (select services)

• AI/ML: embeddings + LLM orchestration (LangChain/LangGraph-style patterns), vector store, OCR/layout parsing

• Data & Infra: Postgres/Aurora, DynamoDB, S3, Step Functions/Lambda, SNS/SQS, Terraform, DataDog, OpenTelemetry, CloudFront

• DevOps: GitHub Actions, IaC, feature flags, preview envs

What Success Looks Like (first 90–180 days)

90 days:

• Ship a customer-visible workflow end-to-end (UI + API + data) with tests and dashboards.

• Reduce a packet flow from hours to <10 minutes wall-clock in production.

• Land one integration (e.g., supplier intake or ERP/PLM export) with robust retries/observability.

180 days:

• Stand up a reusable evidence graph module (provenance, versioning, expiry watch) used by multiple features.

• Improve extraction quality with guardrails (measured precision/recall on key fields); cut rework by 25–40 percent.

• Author or own a service with 99.9 percent plus monthly availability and SLO dashboards.
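For context on what a 99.9 percent monthly target allows, the SLO converts to a downtime budget with simple arithmetic:

```typescript
// Convert an availability SLO (percent) into a monthly downtime budget
// in minutes. Over a 30-day month, 99.9% leaves roughly 43.2 minutes.
function errorBudgetMinutes(sloPercent: number, daysInMonth = 30): number {
  const totalMinutes = daysInMonth * 24 * 60; // 43,200 for a 30-day month
  return totalMinutes * (1 - sloPercent / 100);
}
```

That budget is what the SLO dashboards mentioned above track: how much of the month's allowance has been burned, and how fast.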

What You’ll Bring

Must-Have

• 6 plus years building production SaaS with modern JS/TS plus a typed backend (Node.js, Go, or similar) and practical Python for data/AI tasks.

• API and data design chops (REST/GraphQL, SQL/NoSQL), event-driven patterns, and cloud experience (AWS preferred).

• Experience with at least one of: document processing/OCR, LLM/RAG, or complex workflow engines with reliability/latency concerns.

• Track record shipping measurable improvements (perf, reliability, product adoption) in an agile environment.

Nice-to-Have

• LangChain/LangGraph patterns, vector databases, prompt/guardrail tooling.

• PDF/layout parsing, table extraction, entity resolution.

• ERP/PLM or compliance domain exposure (RoHS, REACH, TSCA, PFAS, CE/UL/CSA, EPD/DoP).

• Terraform/IaC, DataDog/Otel, Temporal/Step Functions, Auth (SAML/OIDC), secure file handling.

PLEASE APPLY HERE: https://app.planetarytalent.com/apply?role=72686982-567d-40a0-878d-8c99a9bcaf63

Pay: $150,000.00 - $200,000.00 per year

Experience:

• building SaaS: 6 years (Required)

Work Location: Hybrid remote in Seattle, WA 98104