January 8, 2023

Why We Created a Community Version of the Takeoff Server

Fergus Finn

Open source has been the backbone of most technological advancements. It has eliminated the barriers and silos of proprietary software, enabling brilliant minds not only to use, but to contribute to, the cutting edge of technology, fueling a beautiful technological flywheel. Today we are taking a step to be part of this community by open sourcing the core of our Takeoff server.

What Is the Takeoff Server?

Takeoff is a hyper-optimised LLM inference server that ‘just works’. It is the fastest way to experiment with and deploy LLMs locally.

The Takeoff Server in action — demonstrating some pretty impressive latency improvements.
  • Inference optimisation: Achieve 5–25x latency improvements and deploy to smaller and cheaper hardware including CPUs.
  • Crazily easy to use: Everything is containerised so you don’t need to worry about weird dependencies or versions. Getting to deployment takes just a few lines of code.
  • Comes with chat and playground interfaces so you can test out the latest open-source models in minutes.
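As a rough illustration of what a containerised, few-lines-of-code deployment looks like, here is a hypothetical sketch. The image name, environment variable, port, and endpoint path below are assumptions for illustration only, not the documented Takeoff interface; check the GitHub repo README for the real invocation.

```shell
# Hypothetical sketch: image name, env var, port, and endpoint are
# assumptions, not the documented Takeoff interface.

# Pull and run the containerised server, pointing it at a Hugging Face model:
docker run -it \
  -e TAKEOFF_MODEL_NAME=google/flan-t5-small \
  -p 8000:8000 \
  tytn/takeoff

# Then, from another terminal, query the running server:
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "What is open-source software?"}'
```

The point of the containerised approach is exactly what the bullet above describes: all dependencies (CUDA versions, Python packages, model runtimes) live inside the image, so the only things you supply are a model name and a port.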

Why Are We Open Sourcing It?

Beyond simply making it clear what's permissible with the Takeoff server, a few things have really driven this decision.

🤓 Driving really Open AI. We don’t think that playing with and deploying open source models should be reserved for expert ML engineers. Anyone can use ‘OpenAI’ — so genuinely open AI should be just as accessible.

💪🏽 We are standing on the shoulders of giants. Building Takeoff would have been impossible without all of the open source contributions that came before us — so it is only right that we give back.

🧑🏾‍🔬 We love open science. Two of our three co-founders have a strong background in academia. We have spent years contributing towards open science and are deeply fond of it — we want to keep this tradition alive.

👯‍♂️ We want to build products that people love. There is no better way to do that than by building with the community.

🔍 We form our reputation from what we do, not what we say. We care about trust, transparency, and quality. There is no better way for us to give our clients confidence than to show them what goes on under the hood.

🌳 We want to build something bigger than us. Open source is bigger than a single company or contributor — we are building something important and we want that to exist beyond us.

But We Are Thinking About Commercialisation…

We want to be upfront about how we commercialise the Takeoff Server.

The parts of our platform that are most relevant for in-production usage, such as multi-GPU serving, enhanced throughput, int4 quantisation, and batching, will remain closed source, and we will license them to businesses that need these features to put Takeoff into production.

We think this strikes the right balance between maintaining an offering for the open-source community and ensuring that we can support our customers' needs.

Let’s Drive AI Together.

🌟 Github repo 🌟

👾 Join the Discord server! 👾

About TitanML

TitanML enables machine learning teams to effortlessly and efficiently deploy large language models (LLMs). Their flagship product, the Takeoff Inference Server, is already supercharging the deployments of a number of ML teams.

Founded by Dr. James Dborin, Dr. Fergus Finn and Meryem Arik, and backed by key industry partners including AWS and Intel, TitanML is a team of dedicated deep learning engineers on a mission to supercharge the adoption of enterprise AI.
