Paris, France - June 11, 2025 - Doubleword, the leading self-hosted inference platform for enterprises, today announced that it is doubling down on its collaboration with NVIDIA. Fresh from a $12 million Series A round, Doubleword is expanding its platform by integrating NVIDIA NIM microservices. This will enable its customers to deploy a range of LLMs on NVIDIA infrastructure, all while benefiting from the enhanced monitoring, observability, and scalability provided by Doubleword.
Key highlights of the collaboration include:
- Expanded Inference Platform: Doubleword now integrates NVIDIA universal LLM NIM microservices, letting enterprises deploy model-optimized AI with ease.
- Universal GPU Compatibility: The platform runs on any NVIDIA GPU, including the latest releases, for maximum performance and deployment flexibility.
- Sovereign AI by Design: Doubleword's on-prem inference stack, built around NVIDIA NIM microservices, ensures enterprises have the tools they need to maintain control, privacy, and trust in their AI deployments.
- Integration with NVIDIA AI Blueprints: Doubleword integrates seamlessly with NVIDIA AI Blueprints, enabling teams to follow best practices in application design and fully leverage their AI infrastructure without the burden of managing complex underlying systems.
Doubleword was founded in London, pre-ChatGPT, by Meryem Arik (CEO), Dr. Jamie Dborin (CSO), and Dr. Fergus Finn (CTO) to solve the inference problem. The team recently raised $12 million from Europe's leading B2B investor, Dawn Capital, and is now on a mission to help enterprises overcome one of the biggest barriers to large-scale enterprise AI adoption: self-hosted inference.
Inference is where AI delivers real-world value: from answering questions to generating images, it transforms models into business outcomes. As AI adoption grows, inference has become mission-critical - a capability enterprises must own and control - but it brings with it the enormous task of building and maintaining performant, scalable inference infrastructure. With the integration of NVIDIA NIM microservices, Doubleword expands its full-stack solution that simplifies deployment and enhances the effectiveness of enterprise AI initiatives, offering enterprises of all sizes a production-ready, future-proof self-hosted inference platform.
“We’re delighted to further support our customers in owning and scaling their AI through this initiative with NVIDIA. By integrating the universal LLM NIM microservices, we’re making it even easier for enterprises to deploy state-of-the-art AI models, fully optimized for their target NVIDIA hardware deployments.”
— Meryem Arik, CEO & Co-founder, Doubleword
About Doubleword
Doubleword is a self-hosted inference platform purpose-built for enterprises. Committed to making self-hosting AI as easy as using third-party APIs, Doubleword is on a mission to help enterprises own and control their AI. Doubleword was founded by AI researchers and has received backing from top investors and industry leaders, including Dawn Capital, Hugging Face CEO Clément Delangue, and Dataiku CEO Florian Douetteau. To learn more about Doubleword’s Self-Hosted Inference Platform, click here.