Enterprise AI platform with proprietary European language models
LightOn is a French AI company founded in 2016, originally as a photonic computing startup before pivoting to enterprise AI. The company offers Paradigm, a platform for deploying large language models in secure, sovereign environments. LightOn has developed proprietary models and provides on-premise or EU-hosted AI infrastructure designed specifically for enterprises and public sector organizations that cannot send data to US cloud providers. The company has secured contracts with French government entities and major European enterprises, positioning itself as a sovereign AI infrastructure provider rather than a consumer-facing product.
Headquarters
Paris, France
Founded
2016
Pricing
Contact Sales
EU Data Hosting
Yes
Employees
51-200
Billing: annual, custom
When a French government ministry needed to deploy large language models for sensitive document analysis, the usual suspects — OpenAI, Anthropic, Google — were immediately disqualified. Not because of capability, but because of jurisdiction. Sending classified government documents to US cloud infrastructure was not an option the compliance team could approve, regardless of contractual assurances or data processing agreements.
This is the exact scenario LightOn was built for. Founded in Paris in 2016, LightOn started as a photonic computing research company before pivoting to enterprise AI. Today, it offers Paradigm, a platform for deploying, managing, and orchestrating large language models in sovereign environments — either on-premise within a customer's own infrastructure or in EU-hosted cloud environments with no data leaving European borders.
LightOn is not an AI company that happens to offer EU hosting. It is an AI company whose entire value proposition is sovereignty. The Paradigm platform supports both LightOn's proprietary models and popular open-source models, giving organisations the flexibility to choose the right model for each use case without compromising on data residency. The company has deep ties to French research institutions including CNRS and INRIA, and its photonic computing heritage — using light rather than electricity for certain computations — gives it a technical differentiation that extends beyond software.
This is enterprise AI for organisations that cannot send data outside their walls: government agencies, defence contractors, healthcare systems, financial institutions, and critical infrastructure operators. It is not for startups experimenting with GPT wrappers. It is not for indie developers building chatbots. LightOn exists at the intersection of AI capability and sovereign infrastructure, serving a market that grows every time a new regulation tightens the rules around data processing.
Paradigm is LightOn's unified AI platform for deploying and orchestrating large language models. It provides a managed environment where organisations can run text generation, summarisation, document analysis, semantic search, and retrieval tasks against one or more LLMs. The platform abstracts away the infrastructure complexity of running large models — GPU allocation, model serving, scaling — and presents a consistent API interface regardless of whether the underlying deployment is cloud-hosted or on-premise. For organisations deploying AI for the first time, Paradigm reduces the engineering burden significantly. For those already running models, it provides orchestration and management tooling.
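The "consistent API regardless of deployment" idea can be illustrated with a small sketch. The endpoint path, field names, and model identifiers below are hypothetical placeholders, not LightOn's documented API; the point is only that the request shape stays identical whether the target is an EU-hosted cloud deployment or an on-premise one.

```python
# Hypothetical client sketch: the same request payload works against any
# Paradigm-style deployment; only the base URL changes. Endpoint paths and
# field names here are illustrative assumptions, not LightOn's actual API.

def build_request(base_url: str, model: str, task: str, text: str) -> dict:
    """Assemble a generic LLM request against a single API shape."""
    return {
        "url": f"{base_url}/v1/tasks",   # same path for every deployment
        "payload": {
            "model": model,              # proprietary or open-source model
            "task": task,                # e.g. "summarise", "generate"
            "input": text,
        },
    }

# Cloud-hosted and on-premise deployments differ only in where requests go.
cloud = build_request("https://eu.paradigm.example", "base-model",
                      "summarise", "Quarterly report text")
onprem = build_request("https://llm.internal.corp", "base-model",
                       "summarise", "Quarterly report text")

assert cloud["payload"] == onprem["payload"]  # identical payload either way
```

The design choice being sketched is the one the paragraph describes: callers target one interface, and the platform absorbs the difference between cloud and on-premise serving.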
LightOn's strongest differentiator is full on-premise deployment, including support for air-gapped environments with no internet connectivity. This means the models, the data, and the compute all reside within the customer's physical infrastructure. For classified government work, defence applications, or any environment where data exfiltration risks must be eliminated, this architecture is not a nice-to-have — it is a requirement. LightOn provides hardware optimisation consulting to ensure that on-premise GPU infrastructure runs models efficiently, which is non-trivial given the compute demands of modern LLMs.
Paradigm is not locked to LightOn's proprietary models. The platform supports deploying open-source models from ecosystems like Hugging Face alongside LightOn's own offerings. This multi-model approach allows organisations to match model capabilities to specific use cases: a smaller, faster model for real-time classification, a larger model for complex document analysis, and a specialised fine-tuned model for domain-specific terminology. Model management, versioning, and A/B testing are handled within the platform.
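The multi-model matching described above can be sketched as a simple routing table. The task names and model identifiers are invented for illustration; nothing here reflects Paradigm's actual model catalogue or configuration format.

```python
# Sketch of multi-model routing: send each task type to an appropriately
# sized model. Task and model names are placeholder assumptions.
ROUTES = {
    "classification": "small-fast-model",    # real-time, low latency
    "document_analysis": "large-model",      # complex reasoning
    "legal_terminology": "finetuned-legal",  # domain-adapted fine-tune
}

def pick_model(task: str) -> str:
    """Return the model for a task, falling back to the general model."""
    return ROUTES.get(task, "large-model")
```

In practice such a table would live in platform configuration rather than application code, but it captures the trade-off the paragraph describes: capability matched to each use case instead of one model for everything.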
For organisations with domain-specific requirements — legal language, medical terminology, technical documentation — LightOn offers custom model fine-tuning. This involves training or adapting models on the customer's proprietary data to improve accuracy for specific tasks. Fine-tuning happens within the sovereign environment, meaning proprietary training data never leaves the customer's infrastructure. This addresses a fundamental concern with cloud-based fine-tuning services, where training data is uploaded to third-party infrastructure.
LightOn holds French government security approvals and aligns with emerging AI Act requirements. The platform includes audit logging, access controls, and usage monitoring for compliance reporting. For organisations operating under sector-specific regulations — financial services, healthcare, government — LightOn provides the compliance infrastructure that generic AI APIs do not.
LightOn's pricing is entirely custom and quote-based, with no self-service tier, no free trial accessible to individuals, and no publicly listed prices. This is enterprise software in the traditional sense: you engage with sales, define your requirements, receive a proposal, and negotiate terms.
Paradigm Cloud offers EU-hosted AI infrastructure with usage-based compute billing. You pay for the compute resources your model deployments consume, with pricing varying by model size, request volume, and required latency. This is the lower-barrier entry point for organisations that need sovereign AI but do not require on-premise deployment.
Paradigm On-Premise involves a more substantial commitment: hardware consulting, deployment engineering, and ongoing support. Pricing reflects the complexity of running production AI infrastructure within a customer's data centre, and contracts are typically annual.
Enterprise engagements with custom model development, dedicated support engineers, and integration consulting sit at the top of the pricing spectrum.
The lack of a self-service tier is LightOn's most significant limitation for market adoption. Organisations cannot experiment with the platform before committing to a sales conversation, which creates friction for technical evaluators who want to test capabilities before involving procurement.
LightOn's compliance credentials are its primary selling point. As a French SAS (Société par Actions Simplifiée) headquartered in Paris, LightOn operates under French and EU law. The Paradigm Cloud option processes data exclusively within EU infrastructure, with no data transfers to non-EU jurisdictions.
The on-premise deployment option goes further: data never leaves the customer's physical premises. For air-gapped deployments, there is no network path for data to exit the environment. This eliminates an entire category of compliance risk — there are no sub-processors, no cloud provider dependencies, and no cross-border transfer mechanisms to evaluate.
LightOn's alignment with the EU AI Act, French government security requirements, and sector-specific regulations (financial services, healthcare) positions it as a compliance-first AI provider. The trade-off is clear: you sacrifice the convenience and model breadth of OpenAI's API for the certainty that your data and your AI processing stay within your sovereign boundaries.
Government agencies and defence organisations that require AI capabilities in classified or restricted environments where data cannot leave sovereign infrastructure.
Large European enterprises in regulated industries — financial services, healthcare, legal — that need LLM capabilities but face strict data residency and processing requirements.
Public sector organisations subject to EU procurement rules that mandate European-headquartered vendors and EU data processing.
Organisations with existing on-premise infrastructure looking to add AI capabilities without introducing cloud dependencies or external data flows.
LightOn is not competing with OpenAI on model capability or developer experience. It is competing on a dimension that matters more to its target market: sovereignty. For the organisations that need it — and there are more of them with every new regulation — LightOn provides something no US-based AI provider can credibly offer: AI infrastructure that is European by design, deployable behind your own firewall, and legally unreachable by non-EU authorities. The 6.5 overall score reflects the trade-offs: exceptional EU compliance (9.5) and solid feature depth (7.0), but limited accessibility (5.5 ease of use), a narrow integration ecosystem (4.5), and pricing opacity that excludes smaller organisations entirely.
Mistral focuses on building frontier models and offering them via API, competing directly with OpenAI on model quality. LightOn focuses on sovereign deployment infrastructure — the platform for running models securely, whether they are LightOn's own, Mistral's, or open-source. An organisation could theoretically deploy Mistral's models on LightOn's Paradigm platform for a fully sovereign French AI stack.
Practically, no. LightOn's enterprise-only model, custom pricing, and sales-driven process make it inaccessible to small organisations. Startups and small businesses looking for European AI should consider Mistral's API, Hugging Face's inference endpoints, or Scaleway's GPU cloud as more accessible alternatives.
LightOn was originally founded to develop optical processing units (OPUs) that use light instead of electricity for certain AI computations. While the company pivoted to enterprise AI software, the photonic computing research continues and informs its approach to efficient model inference. This heritage is more of a long-term research differentiator than a current product feature.
Yes. The Paradigm platform supports deploying models from Hugging Face and other open-source ecosystems alongside LightOn's proprietary models. This gives organisations the flexibility to choose models based on capability, licensing, and cost rather than being locked into a single model family.
Enterprise customers receive dedicated account managers, integration consulting, and direct engineering support. There is no community forum or self-service documentation hub in the way that developer-focused AI companies provide. Support quality is reported as strong for committed enterprise clients, but the lack of public resources makes independent evaluation difficult.
LLM optimisation and deployment platform for enterprise AI
Alternative to OpenAI
Sovereign AI for European enterprises and government institutions
Alternative to OpenAI
AI-powered translation that outperforms Google Translate in quality
Alternative to Google Translate
The open-source AI platform for models, datasets, and machine learning applications