Security
Your cloud. Your data.
No exceptions.
Wimsi is self-hosted by design. Not as a premium tier. Not as an add-on. It's the only way we ship.
Runs in your tenant
Wimsi is a Docker container you deploy in your own cloud — AWS, Azure, GCP, or on-prem. We never host your instance.
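As a rough sketch of what that deployment looks like (the image name, registry, port, and volume below are placeholders, not Wimsi's published values):

```
# Placeholder image, port, and volume names -- substitute your own registry
# and environment. The point: it is a single container you run yourself.
docker run -d \
  --name wimsi \
  -p 8080:8080 \
  -v wimsi-data:/data \
  registry.example.com/wimsi:latest
```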
Data stays with you
Your application data, user data, and generated apps live entirely in your infrastructure. Nothing is transmitted back to us.
No telemetry
Wimsi doesn't phone home. No usage analytics, no tracking, no heartbeat. Install it, disconnect the internet — it still works.
Architecture
What lives where
Here's exactly what happens when someone builds an app with Wimsi. No ambiguity.
In your cloud
100% under your control
- Wimsi application container
- All generated apps and their source code
- Application databases (PGlite / Turso)
- User sessions and authentication
- File uploads and CSV data
- All configuration and settings
External (your choice)
Optional, provider-agnostic
- LLM API calls
Your API key, your provider. Only app descriptions are sent.
- Local LLM option
Run Ollama or vLLM on your network. Zero external calls.
- SSO provider
Connect to your existing identity provider (SAML, OIDC).
Zero trust by default — Wimsi makes no outbound connections except to your chosen LLM provider. Verifiable with tcpdump or your network monitoring.
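As an illustrative spot check from the host (203.0.113.10 stands in here for your LLM provider's address; with a local model there is no external endpoint at all):

```
# Capture any packet headed somewhere other than your configured LLM endpoint.
# With a local model or an air-gapped install, expect this to stay quiet.
sudo tcpdump -i any -nn not host 203.0.113.10
```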
Your AI, your rules
Choose your AI provider
Wimsi doesn't lock you into a single AI vendor. Your IT department picks the provider that matches your security posture, budget, and preferences.
- Google Gemini (cloud API)
- OpenAI (cloud API)
- Anthropic Claude (cloud API)
- Ollama / vLLM (self-hosted)
How it works
During setup, your IT team configures which AI provider to use — it's a single environment variable. Swap providers anytime without touching application code. Each option uses your own API key, billed directly by the provider.
Want to run a fully private model on your own GPU hardware? Ollama and vLLM let you do exactly that — zero data ever leaves your network.
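A sketch of what that switch could look like. The variable names and image reference below are illustrative, not the exact keys from the Wimsi setup docs:

```
# Illustrative variable names -- check the setup docs for the real keys.

# Cloud provider, using your own API key (billed directly by the provider):
docker run -d \
  -e WIMSI_LLM_PROVIDER=anthropic \
  -e WIMSI_LLM_API_KEY="$ANTHROPIC_API_KEY" \
  registry.example.com/wimsi:latest

# Local model served by Ollama inside your network -- zero external calls:
docker run -d \
  -e WIMSI_LLM_PROVIDER=ollama \
  -e WIMSI_LLM_BASE_URL=http://ollama.internal:11434 \
  registry.example.com/wimsi:latest
```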
Need help choosing or hosting an AI model?
Our team can help you select the right provider for your needs, or even host and manage a dedicated AI model on your behalf.
Contact us directly
Transparency
AI data transparency
What exactly is sent to the AI provider?
Only the app description and refinement instructions your users type. The actual generated code, application data, CSV uploads, and database contents are never sent to the LLM.
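As a purely schematic illustration (not Wimsi's actual wire format), a request to a cloud provider such as OpenAI carries the typed description and nothing from your data layer:

```
# Schematic only: the user's description is the sole user-originated content.
# No generated code, CSV rows, or database contents appear in the payload.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Build an expense tracker for the finance team"}
    ]
  }'
```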
Can I use a local model instead of a cloud API?
Yes. Wimsi supports local models via Ollama and vLLM. Run the LLM on your own hardware and no data ever leaves your network.
Can I switch AI providers later?
Anytime. It's a configuration change, not a migration. Your apps, data, and users are completely unaffected. Swap from OpenAI to Gemini to a local model in minutes.
Can Wimsi run without internet access?
With a local LLM (Ollama/vLLM), Wimsi can run completely disconnected from the internet. Zero external dependencies at runtime.
Compliance
You control the perimeter
Because Wimsi runs in your infrastructure, your existing security controls, audit processes, and compliance frameworks apply automatically.
Data residency
Deploy in any region. Your data stays where your policies require.
Audit trail
All operations are logged. Integrate with your existing SIEM.
Access control
SSO, RBAC, and your existing identity provider. No separate user directory.
Pen test friendly
It's your infrastructure. Run your own security assessments anytime.
Security questions?
We're happy to walk through the architecture with your security team. Or just try it — the demo on our homepage runs entirely in your browser.