Local AI Station
Confidential AI, zero cloud, 100% sovereign — deployed in Geneva by Optinova
A local AI station is a powerful AI system running entirely on your own infrastructure — no data ever leaves your premises, no external server is involved. Optinova designs and deploys custom stations for businesses and regulated professions in Geneva and French-speaking Switzerland that cannot afford to compromise on data confidentiality.
Why a 100% local AI in 2026?
Cloud AI tools like ChatGPT, Gemini, or Copilot are powerful — but they send every prompt to external servers. For a lawyer, doctor, fiduciary, or business handling sensitive data, that is a non-starter.
Switzerland's nFADP (in force since 2023) and the GDPR impose a strict framework. A 100% local AI structurally eliminates these risks: no cross-border transfer, no DPA to negotiate, no risk of your data being used to train third-party models.
Who is it for?
Any organization in Geneva handling confidential data
Law firms
Professional secrecy guaranteed. Analyze contracts, draft documents, query your case files — without a single character leaving your network.
Fiduciaries & accountants
Process balance sheets, tax returns, and client data in full confidentiality. nFADP compliance documentation included in every deployment.
Medical & dental practices
Patient data protected by design. AI analyzes your files, drafts reports, and answers clinical questions — 100% local, 100% confidential.
Meeting transcription & summaries
Record your meetings, board sessions, or confidential briefings and get a full transcription and structured summary — 100% offline, never sent outside.
SMEs with sensitive data
Intellectual property, commercial data, client information — protected by an AI running exclusively on your Swiss infrastructure.
RAG on your internal documents
Query your PDFs, Word files, contracts, and reports in natural language. AI answers based solely on your data, sending nothing outside.
What you get
No data ever leaves your infrastructure. Zero transfer to external servers during inference.
From audit to go-live: between 1 and 5 business days depending on your infrastructure complexity.
Complete compliance documentation provided: processing register, usage policy, client notices.
Monthly follow-up for a full year: model updates, optimizations, new use cases, and dedicated technical support.
Our approach: from hardware to model
Optinova handles the entire chain — from initial diagnosis to team training
Hardware audit & sizing
Based on your use case (number of users, document volume, task types), we size the optimal setup: local workstation, dedicated mini-server, or reinforced NAS with GPU.
Ollama + open-source model deployment
Installation and configuration of Ollama with the right model for your needs: Mistral, LLaMA, Phi-3, Gemma, or others. Selected based on your hardware and requirements.
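As a sketch, a first deployment on a workstation can look like the following (the model names are examples only; the actual choice follows the hardware audit):

```shell
# Install Ollama (macOS/Linux one-liner; Windows uses the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model sized for the machine -- illustrative choices
ollama pull mistral   # ~7B general-purpose model, strong in French
ollama pull phi3      # lighter model for more modest hardware

# Smoke test from the terminal before wiring up the user interface
ollama run mistral "Summarize the key obligations in this clause: ..."
```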
Offline meeting transcription & summaries
Configuration of a 100% local audio transcription pipeline (Whisper or equivalent) for your meetings, board sessions, and confidential briefings. Automatic structured summary, zero cloud.
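As an illustration, assuming the open-source Whisper CLI and a local Ollama model are already installed, the core of such a pipeline is just two offline steps (the file names are hypothetical):

```shell
# Transcribe a recording fully offline (model weights stay local after the
# first download)
whisper board-meeting.m4a --model medium --language French --output_format txt

# Feed the transcript to the local model for a structured summary
ollama run mistral "Produce a structured summary with decisions and action items: $(cat board-meeting.txt)"
```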
Local user interface
Open WebUI or custom interface deployed on your internal network — accessible only from your premises or VPN. No external exposure.
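For reference, Open WebUI is typically started as a container on the internal server, with external access blocked at the firewall or restricted to the VPN:

```shell
# Start Open WebUI (talks to the local Ollama instance on the same host)
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# Staff browse to http://<server>:3000 from the office LAN or VPN only
```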
macOS & Windows exclusively
We deploy on the environments your teams already know. No Linux, no cryptic terminal. The interface opens in a browser — your staff is operational immediately.
nFADP/GDPR compliance documentation
nFADP processing register, internal usage policy, client information notice if required. You are covered from a regulatory standpoint from day one.
Built for Geneva's requirements
Geneva concentrates some of the world's most demanding regulated professions: law, finance, healthcare, international organizations. These professions cannot afford to entrust their data to American servers.
Optinova deploys AI stations calibrated for these realities: FR/EN/DE multilingualism, documented nFADP compliance, integration with Swiss tools (Abacus, Bexio, Office suites), offline meeting transcription.
Our deployment method
Audit & diagnosis
Analysis of your existing infrastructure, identification of priority use cases (documents, meetings, internal Q&A), hardware sizing, and selection of the AI model suited to your needs and constraints.
Stack deployment
Ollama installation, selected model configuration, user interface deployment on your internal network. Security hardening, disk encryption, and offline transcription pipeline configuration.
RAG configuration & integrations
RAG system configuration to query your internal documents (PDF, Word, etc.), connection to existing tools if needed, performance and quality testing on your real use cases.
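At its core, the RAG loop pairs a local embedding model with the generation model. A minimal sketch against Ollama's local HTTP API (a production deployment adds document chunking and a vector store such as Chroma or Qdrant; the document text and question below are placeholders):

```shell
# 1. Index time: embed each document chunk locally
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "Clause 4.2: the lessee shall ..."}'

# 2. Query time: retrieve the closest chunks from the vector store
#    (retrieval step omitted here), then ground the answer in them
curl -s http://localhost:11434/api/generate \
  -d '{"model": "mistral", "stream": false, "prompt": "Answer using only the context below.\nContext: <retrieved chunks>\nQuestion: Which notice period applies?"}'
```

Nothing in this loop leaves the machine: both endpoints are served by the local Ollama instance.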
Training & ongoing support
Team training session and nFADP/GDPR compliance documentation handover. Then monthly support for 12 months: model updates, optimizations, new use cases, and dedicated technical assistance.
