Member of Technical Staff, Inference & Serving
Inception Labs
San Francisco, CA, USA
Posted on Mar 13, 2026
Bay Area
AI Systems
In office
Full-time
Inception creates the world’s fastest, most efficient AI models. Our Mercury model is the world’s fastest reasoning LLM and first commercially available diffusion LLM, delivering 5x greater speed and efficiency than today’s LLMs, with best-in-class quality.
We are the AI researchers and engineers behind breakthrough AI technologies such as diffusion models, FlashAttention, and DPO.
The Role
We're looking for engineers and scientists to design, optimize, and scale the systems that power our diffusion LLMs in production. Your work will make inference faster, more cost-effective, and more reliable.
Key Responsibilities
- Build and optimize high-performance model serving systems for low-latency inference of diffusion LLMs.
- Extend orchestration frameworks (Kubernetes, Ray, SLURM) for distributed inference, evaluation, and large-batch serving.
- Implement and manage load balancing, autoscaling, and traffic routing for model endpoints.
- Build systems for model versioning, canary deployments, and zero-downtime rollouts.
- Develop monitoring, alerting, and observability tooling to ensure SLA compliance and rapid incident response.
- Collaborate with ML researchers to translate model advances (new architectures, quantization techniques, batching strategies) into production-ready serving improvements.
Qualifications
- BS/MS/PhD in Computer Science, Engineering, or a related field (or equivalent experience).
- Knowledge of ML serving frameworks (SGLang, vLLM, Triton Inference Server, TensorRT-LLM).
- Understanding of ML frameworks (PyTorch, TensorFlow) from a systems perspective.
- Familiarity with high-performance computing and GPU programming (CUDA).
- Experience with containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines.
- Background in performance optimization and profiling of ML systems.
Preferred Skills
- Experience building and maintaining large-scale language models with tens of billions of parameters or more.
- Experience with distributed systems and cloud computing platforms (AWS/GCP/Azure).
- Experience with ML workflow orchestration tools (Kubeflow, Airflow).
- Experience with model optimization techniques (quantization, distillation, speculative decoding, continuous batching).
- Knowledge of ML-specific infrastructure challenges (checkpointing, resource scheduling, etc.).
Why Join Inception
- Work with World-Class Talent: Collaborate with the inventors of diffusion models and leading AI researchers
- Shape Foundational Technology: Your decisions will influence how the next generation of AI products is built and used
- Immediate Impact: Join at the ground floor where your contributions directly shape product direction and company trajectory
Perks & Benefits
- Competitive salary and equity in a rapidly growing startup
- Flexible vacation and paid time off (PTO)
- Health, dental, and vision insurance
- Catered meals (breakfast, lunch, & dinner)
- Commuter subsidies
- A collaborative and inclusive culture
About Us
Inception creates the world’s fastest, most efficient AI models. Today’s autoregressive LLMs generate tokens sequentially, which makes them painfully slow and expensive. Inception’s diffusion-based LLMs (dLLMs) generate answers in parallel. They are 5x faster and more efficient, while delivering best-in-class quality.
Inception was co-founded by Stanford professor Stefano Ermon, who co-invented such breakthrough AI technologies as diffusion models, FlashAttention, and DPO; UCLA professor Aditya Grover, who co-invented node2vec, decision transformers, and d1 reasoning; and Cornell professor and Afresh co-founder Volodymyr Kuleshov, who co-invented MDLM and Block Diffusion.
We pioneered the application of diffusion to language with the world’s first (and only) commercially available dLLM, Mercury. We are currently deploying our large-scale diffusion LLMs at Fortune 500 companies. Diffusion is the technology behind today’s image and video AI, and we’re making it the standard for LLMs as well.
Our team includes engineers from Google DeepMind, Meta AI, Microsoft AI, and OpenAI. Based in Palo Alto, CA, we are backed by A-list venture capitalists, including Menlo Ventures, Mayfield, M12 (Microsoft’s venture fund), Snowflake Ventures, Databricks, and Innovation Endeavors, and by tech luminaries such as Andrew Ng, Andrej Karpathy, and Eric Schmidt.
If you are talented, innovative, and ambitious, come help us invent the future of AI.
We are an equal opportunity employer and encourage candidates of all backgrounds to apply.
Req ID: R32