Senior Research Engineer: LLM Inference Systems


Description: Develop cutting-edge inference technology at every level of the inference stack.

Location: London, UK (Hybrid)

Compensation: Competitive with equity.

How to Apply: Send your CV to fergus.finn@doubleword.ai

About the Role

We’re seeking a Senior Research Engineer to join our mission of solving the hardest inference challenges in generative AI. You’ll be responsible for developing cutting-edge inference technology at every level of the inference stack. This could involve writing custom inference kernels, designing compute clusters for unique inference needs, or contributing to state-of-the-art open-source inference engines.

What You’ll Do

Examples of projects you might work on:

  1. Building and optimizing infrastructure for batch inference workloads, focusing on high-throughput, cost-efficient processing.
  2. Serving fine-tuned models at scale, using tools like multi-LoRA and multi-PEFT inference engines.
  3. Optimizing open-source inference engines for offloading-based inference, implementing inference optimizations for severely memory-constrained environments.
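To give a flavor of the first project area, here is a minimal sketch of why batch inference pipelines chase large batch sizes. All numbers are hypothetical illustrations, not Doubleword benchmarks: each decode step is modeled as a fixed per-step overhead plus a small per-sequence cost, so batching amortizes the overhead and raises aggregate throughput.

```python
# Toy cost model for batched LLM decoding (illustrative numbers only).
# Each forward pass pays a fixed overhead (kernel launches, scheduling)
# plus a marginal cost per sequence in the batch; larger batches
# amortize the overhead, which is the core economics of batch inference.

FIXED_OVERHEAD_MS = 5.0   # hypothetical per-step overhead
PER_SEQ_MS = 0.02         # hypothetical marginal cost per sequence

def step_time_ms(batch_size: int) -> float:
    """Wall-clock time for one decode step over `batch_size` sequences."""
    return FIXED_OVERHEAD_MS + PER_SEQ_MS * batch_size

def throughput_tok_per_s(batch_size: int) -> float:
    """Aggregate tokens generated per second (one token per sequence per step)."""
    return batch_size / step_time_ms(batch_size) * 1000.0

if __name__ == "__main__":
    for bs in (1, 8, 64, 256):
        print(f"batch={bs:4d}  throughput={throughput_tok_per_s(bs):9.1f} tok/s")
```

Under this toy model, throughput grows roughly linearly until the per-sequence term dominates; real workloads hit memory-bandwidth and KV-cache limits long before that, which is where the optimization work in this role comes in.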

What We’re Looking For

Note: A strong candidate will have about 80% of the following qualities. Please apply even if the list doesn’t describe you perfectly.

Core Technical Skills:

  • Strong programming fundamentals
  • Understanding of GPU architectures and their performance characteristics
  • Deep understanding of LLM inference workloads, performance characteristics, and optimization techniques
  • Familiarity with inference tooling and deep learning libraries (PyTorch, TensorRT, vLLM, SGLang, TensorRT-LLM)

Research Mindset:

  • Curiosity about emerging hardware trends and ML optimization techniques
  • Ability to understand complex research requirements and translate them into infrastructure needs
  • Comfort with ambiguity and rapidly evolving technical landscapes
  • Experience supporting research workflows and experimental systems

About Us

We’re dedicated to making large language models faster, cheaper, and more accessible. Our infrastructure team is laser-focused on LLM inference optimization, pushing the boundaries of what’s possible in terms of performance and cost efficiency while maintaining the reliability needed to serve these models at scale.

We provide competitive compensation, comprehensive benefits, and opportunities for professional growth in one of the most exciting fields in technology.

