Member of Technical Staff, Kernels
Inception Labs
San Francisco, CA, USA
Posted on Mar 13, 2026
Bay Area
AI Systems
In office
Full-time
Inception creates the world’s fastest, most efficient AI models. Our Mercury model is the world’s fastest reasoning LLM and first commercially available diffusion LLM, delivering 5x greater speed and efficiency than today’s LLMs, with best-in-class quality.
We are the AI researchers and engineers behind such breakthrough AI technologies as diffusion models, flash attention, and DPO.
The Role
We're looking for engineers and scientists to design, optimize, and maintain the compute foundations that power large-scale language model training and inference. You will develop high-performance ML kernels, enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training and serving large models possible.
Key Responsibilities
- Design and implement custom ML kernels (CUDA, CuTe, Triton) for core dLLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU architectures.
- Design compute primitives to reduce memory bandwidth bottlenecks and improve kernel efficiency.
- Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.
Qualifications
- BS/MS/PhD in Computer Science, Engineering, or a related field (or equivalent experience).
- Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.
- Understanding of ML frameworks (PyTorch, TensorFlow) from a systems perspective.
- Background in performance optimization and profiling of ML systems.
- Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (XLA, TVM).
- Familiarity with distributed training techniques (data parallel, model parallel, pipeline parallel).
- Proficiency in Python and at least one systems programming language (C++/Rust/Go).
- Experience with containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines.
Preferred Skills
- Experience building and maintaining large-scale language models with tens of billions of parameters or more.
- Experience with distributed systems and cloud computing platforms (AWS/GCP/Azure).
- Familiarity with distributed frameworks such as PyTorch/XLA, DeepSpeed, Megatron-LM.
- Prior contributions to open-source deep learning infrastructure such as PyTorch, DeepSpeed, or XLA.
Why Join Inception
- Work with World-Class Talent: Collaborate with the inventors of diffusion models and leading AI researchers
- Shape Foundational Technology: Your decisions will influence how the next generation of AI products is built and used
- Immediate Impact: Join at the ground floor where your contributions directly shape product direction and company trajectory
Perks & Benefits
- Competitive salary and equity in a rapidly growing startup
- Flexible vacation and paid time off (PTO)
- Health, dental, and vision insurance
- Catered meals (breakfast, lunch, & dinner)
- Commuter subsidies
- A collaborative and inclusive culture
About Us
Inception creates the world’s fastest, most efficient AI models. Today’s autoregressive LLMs generate tokens sequentially, which makes them painfully slow and expensive. Inception’s diffusion-based LLMs (dLLMs) generate answers in parallel. They are 5x faster and more efficient, while delivering best-in-class quality.
Inception was co-founded by Stanford professor Stefano Ermon, who co-invented such breakthrough AI technologies as diffusion models, flash attention, and DPO; UCLA professor Aditya Grover, who co-invented node2vec, decision transformers, and d1 reasoning; and Cornell professor and Afresh co-founder Volodymyr Kuleshov, who co-invented MDLM and Block Diffusion.
We pioneered the application of diffusion to language with the world’s first (and only) commercially available dLLM, Mercury. We are currently deploying our large-scale diffusion LLMs at Fortune 500 companies. Diffusion is the technology behind today’s image and video AI, and we’re making it the standard for LLMs as well.
Our team includes engineers from Google DeepMind, Meta AI, Microsoft AI, and OpenAI. Based in Palo Alto, CA, we are backed by A-list venture capitalists, including Menlo Ventures, Mayfield, M12 (Microsoft’s venture fund), Snowflake Ventures, Databricks, and Innovation Endeavors, and by tech luminaries such as Andrew Ng, Andrej Karpathy, and Eric Schmidt.
If you are talented, innovative, and ambitious, come help us invent the future of AI.
We are an equal opportunity employer and encourage candidates of all backgrounds to apply.
Req ID: R33