LLM optimisation and deployment platform for enterprise AI
Adaptive ML is a Paris-based platform for customising, optimising, and deploying open-source LLMs using reinforcement learning. Founded by former Hugging Face researchers behind Falcon and BLOOM, it automates preference tuning to align model outputs with business objectives.
Headquarters: Paris, France
Founded: 2023
Pricing: Contact Sales
Billing: annual
EU Data Hosting: No
Employees: 11-50
Before Adaptive ML existed, its founding team had already shaped the open-source AI landscape. Julien Launay, Daniel Hesslow, Baptiste Pannier, and Guilherme Penedo were the researchers at Hugging Face responsible for Falcon and BLOOM — two of the most significant open-weight language models ever released. In March 2024, they emerged from stealth with $20 million in seed funding led by Index Ventures and a singular thesis: enterprises deploying open-source LLMs need continuous optimisation, not just one-off fine-tuning.
Headquartered in Paris with additional operations in New York, Adaptive ML builds what it calls the "Adaptive Engine" — a platform that applies reinforcement learning from human and AI feedback (RLHF/RLAIF) to production LLMs. The approach goes beyond static fine-tuning. Instead of training a model once and deploying it, Adaptive creates an automated feedback loop. As users interact with the model, their behaviour signals get captured and fed back into the optimisation pipeline, producing progressively better-aligned outputs.
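The continuous loop described above can be sketched in a few lines. The schema and class names below are purely illustrative, not Adaptive's actual API: the idea is simply that every interaction is recorded, buffered, and periodically drained into a tuning pipeline rather than used once.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """One user interaction with the deployed model (hypothetical schema)."""
    prompt: str
    response: str
    accepted: bool  # True if the user kept the response, False if they regenerated it

@dataclass
class FeedbackLoop:
    """Minimal sketch of a continuous preference-capture loop.

    Interactions accumulate in a buffer; once enough signals are
    collected, the batch is handed to the optimisation pipeline.
    """
    buffer: list = field(default_factory=list)
    batch_size: int = 64

    def record(self, interaction: Interaction) -> None:
        """Capture a behaviour signal from production traffic."""
        self.buffer.append(interaction)

    def ready(self) -> bool:
        """True once enough signals exist for a tuning step."""
        return len(self.buffer) >= self.batch_size

    def drain(self) -> list:
        """Hand accumulated signals to the tuning pipeline and reset."""
        batch, self.buffer = self.buffer, []
        return batch

loop = FeedbackLoop(batch_size=2)
loop.record(Interaction("What is our refund policy?", "Refunds within 30 days.", True))
```

In a real deployment the `drain` step would trigger a preference-tuning job rather than return a list, but the loop structure is the same.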
The target market is clear: enterprises running open-source LLMs (Llama, Falcon, Mistral) that need those models to perform reliably on specific business tasks without the cost or data exposure of routing everything through a third-party API. Financial services, healthcare, and government organisations with strict data governance requirements form the core customer base.
At the heart of the platform sits the Adaptive Engine, which orchestrates the full reinforcement learning loop. It captures preference signals from real user interactions — not just explicit thumbs-up/thumbs-down ratings, but implicit behavioural signals like which responses users actually use versus regenerate. These signals feed into automated preference tuning that continuously improves model alignment. In published benchmarks, Adaptive tuned a Llama 3.1 8B model to reduce hallucinations by 25% compared to GPT-4o and 42% compared to the untuned base model. Those are meaningful numbers for enterprises where accuracy directly impacts revenue and compliance.
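One way to picture how implicit signals become training data: a regenerated response can be treated as implicitly rejected in favour of the answer the user finally kept, yielding (prompt, chosen, rejected) triples of the kind preference-tuning objectives such as DPO or RLHF reward modelling consume. This is a generic sketch of the idea, not Adaptive's pipeline, and the field names are invented.

```python
def build_preference_pairs(events):
    """Turn regeneration events into (prompt, chosen, rejected) triples.

    `events` maps a prompt to an ordered list of (response, kept) tuples.
    Every response the user kept is paired against every response they
    regenerated away from, producing implicit preference pairs.
    """
    pairs = []
    for prompt, attempts in events.items():
        kept = [r for r, k in attempts if k]
        rejected = [r for r, k in attempts if not k]
        for good in kept:
            for bad in rejected:
                pairs.append({"prompt": prompt, "chosen": good, "rejected": bad})
    return pairs

events = {
    "Summarise the Q3 report": [
        ("It was a report.", False),          # user hit regenerate
        ("Revenue rose 12% quarter on quarter.", True),  # user kept this answer
    ]
}
pairs = build_preference_pairs(events)
```

Each triple then feeds a preference-tuning loss, which is the step the platform automates.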
Adaptive provides built-in A/B testing between model versions, letting teams compare tuned variants against baselines without building custom evaluation infrastructure. The platform handles traffic splitting, metric collection, and statistical significance calculations. For enterprise ML teams, this removes weeks of custom engineering that would otherwise be required to validate whether a new model version actually improves performance in production.
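The statistics involved are standard. A minimal version of such a comparison is a two-sided two-proportion z-test on a binary metric, for instance "response accepted without regeneration". The function below is a generic textbook sketch, not Adaptive's implementation, and the traffic numbers are invented.

```python
from math import sqrt, erf

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Two-sided two-proportion z-test.

    Compares a tuned variant (B) against a baseline (A) on a binary
    success metric. Returns the z statistic and two-sided p-value.
    """
    p_a, p_b = success_a / total_a, success_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic split: baseline accepted 40% of the time, variant 46%
z, p = two_proportion_z(400, 1000, 460, 1000)
```

With these made-up numbers the difference is significant at the 1% level, which is exactly the kind of call a managed platform makes automatically instead of leaving it to ad-hoc spreadsheets.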
All data and model operations run within the customer's private cloud network. Adaptive deploys into existing AWS, Google Cloud, or Azure environments without data leaving the customer's infrastructure. This architecture eliminates the data sovereignty concerns that prevent many regulated enterprises from using third-party AI APIs. The model weights, training data, and inference logs all stay within the customer's security perimeter.
The platform includes performance dashboards that track model accuracy, user engagement, and business KPIs in a single view. Teams can monitor hallucination rates, response latency, user satisfaction scores, and custom metrics specific to their use case. This monitoring layer is essential for enterprises that need to demonstrate model governance and performance accountability to regulators.
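The aggregation behind such a dashboard is straightforward. The sketch below shows the kind of headline metrics one might compute from raw inference logs; the log schema, including the `flagged` field (e.g. from an automated hallucination detector), is hypothetical.

```python
import statistics

def summarise(logs):
    """Aggregate inference logs into dashboard-style headline metrics.

    Each log entry is assumed to carry a latency and a boolean flag
    marking a detected hallucination.
    """
    latencies = sorted(entry["latency_ms"] for entry in logs)
    # Nearest-rank p95: index clamped to the last element for small samples
    p95_idx = min(len(latencies) - 1, int(0.95 * len(latencies)))
    return {
        "requests": len(logs),
        "hallucination_rate": sum(entry["flagged"] for entry in logs) / len(logs),
        "p50_latency_ms": statistics.median(latencies),
        "p95_latency_ms": latencies[p95_idx],
    }

logs = [
    {"latency_ms": 120, "flagged": False},
    {"latency_ms": 340, "flagged": True},
    {"latency_ms": 180, "flagged": False},
    {"latency_ms": 150, "flagged": False},
]
metrics = summarise(logs)
```

A production monitoring layer would add windowing, custom business KPIs, and per-use-case breakdowns on top of this core.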
Adaptive ML operates on a custom enterprise pricing model. There is no self-serve plan, no free tier, and no published price list. Pricing is negotiated based on deployment scope, model count, and support requirements. Given the $20M seed round and the enterprise-only focus, typical contracts likely start in the six-figure range annually, though Adaptive has not disclosed specific numbers.
This pricing approach makes sense for the platform's positioning. The value proposition is reducing costly hallucinations and improving model performance at scale. For a financial services firm where a single inaccurate AI-generated response could trigger compliance issues, the ROI calculation favours a premium platform. However, it completely excludes startups, small teams, and individual researchers who might benefit from the same technology.
For organisations evaluating Adaptive against alternatives like AWS SageMaker's RLHF tooling or Hugging Face's TRL library, the key question is whether managed preference tuning with automated feedback loops justifies the premium over assembling these capabilities from open-source components.
Adaptive ML holds a structural advantage for EU compliance. As a French SAS headquartered in Paris, the company falls under EU jurisdiction by default. All model training and inference operations run within the customer's own cloud infrastructure, which means customer data never transits through Adaptive's servers.
This architecture goes further than typical "EU data processing" guarantees. There is no data residency question because the data never leaves the customer's environment. For organisations subject to GDPR, the EU AI Act, or sector-specific regulations like the Digital Operational Resilience Act (DORA), this deployment model simplifies compliance significantly.
The company does not currently advertise specific certifications like ISO 27001 or SOC 2, which is typical for a seed-stage company. As Adaptive matures and pursues larger enterprise contracts, these certifications will likely become a priority.
Regulated enterprises deploying open-source LLMs in financial services, healthcare, or government. The privacy-preserving architecture and continuous optimisation loop directly address their governance and accuracy requirements.
ML teams with existing LLM deployments who have moved past initial fine-tuning and need ongoing model improvement without building custom RLHF infrastructure from scratch.
Organisations with strict data sovereignty requirements that cannot send data to third-party APIs but still need production-grade model optimisation.
Not suitable for individual developers, startups, or teams without an existing LLM deployment. The enterprise-only model and absence of self-serve access put Adaptive out of reach for smaller organisations.
Adaptive ML brings genuine research pedigree to a real enterprise problem: making open-source LLMs reliably good at specific business tasks. The founding team's credentials are verifiable and impressive — Falcon and BLOOM remain landmark models. The technical approach is sound, with published benchmark results that demonstrate measurable improvement. The privacy-preserving deployment model is a significant advantage for EU-regulated enterprises. However, this is still an early-stage company with seed funding, limited public documentation, and no transparent pricing. Organisations considering Adaptive should expect a bespoke engagement rather than a self-serve product, and should weigh the platform's specialised focus against more general-purpose MLOps alternatives.
Yes. Adaptive ML is a French company (SAS) headquartered in Paris, falling under EU jurisdiction by default. All model training and deployment runs within the customer's private cloud infrastructure, so no customer data transits through Adaptive's servers.
Adaptive ML was founded in 2023 by Julien Launay, Baptiste Pannier, and Daniel Hesslow, along with other former Hugging Face researchers. The team led the development of the Falcon and BLOOM open-source language models before starting Adaptive.
Hugging Face TRL is an open-source library for RLHF that requires significant engineering to operationalise. Adaptive ML provides a managed platform with automated feedback loops, A/B testing, and enterprise monitoring — essentially productising what TRL offers as raw components. The trade-off is cost versus implementation effort.
No. Adaptive ML is an enterprise-only platform with custom pricing. There is no self-serve plan, free trial, or individual developer access. Organisations must engage with the sales team for evaluation and pricing.
Adaptive ML works with major open-source LLM families including Meta Llama, Falcon, and Mistral. The platform is designed to optimise these models for specific business tasks through continuous reinforcement learning, rather than building or hosting proprietary models.