Responsible AI

Steinkauz puts you in charge: configure your environment to fit your needs and get transparent information wherever possible. Transparency, privacy, control, and security are built into the product — not as abstract principles, but in how it behaves. Below we highlight some of the ways these four tenets manifest in the application.

Transparency

See exactly which model you're using, where your data goes, and what it costs. Model origins, capabilities, pricing, data flow, and usage statistics — no hidden fees or surprises.

Click the info icon on the example message below to see provider, model, tokens, and cost for that reply.

Control

Decide which providers handle which conversations and how your data is routed: provider configuration, gateway vs. BYOK, and security classifications.

Interactive demo — no data is saved.

Specify the order in which providers should be tried. Providers not in the list will be tried after the ordered ones.

Choice 1: OpenAI
Choice 2: Anthropic
Rest (tried after the ordered providers): Google, Mistral AI, Meta

Select providers to allow. Only selected providers will be considered for routing. Unselected providers will be excluded. Mutually exclusive with Provider Order.

Allowed providers: 0
Excluded providers: 5

No restrictions applied. All providers will be considered for routing.

Enable zero data retention policy. Only providers with ZDR agreements will be used.

Privacy & Security

This section lists some of Steinkauz's key privacy and security capabilities, then shows how the different providers can be combined into a flexible, secure setup.

  • On our servers, your data is encrypted at rest and in transit, so your conversations and model traffic stay protected end to end.
  • For the Gateway, you can enable optional zero data retention and fine-tune provider configuration (as in the section above) to match your policies.
  • BYOK (bring your own keys) lets you use your own keys with supported providers for additional control over data and compliance.
  • Security classifications ensure that only providers at or above a conversation’s sensitivity level are used, preventing conversation-level data leaks to lower-trust providers.

Usage scenario: Three providers, one interface — routing follows sensitivity

Gateway · Public

Instant access with gateway plans. A wide variety of models, perfect for everyday use.

Azure OpenAI · Internal · BYOK

Private cloud deployment with higher trust. Perfect for sensitive data and compliance requirements.

OpenAI Compatible (e.g. vLLM) · Secret · BYOK

On-premise deployment on your own infrastructure: full control over your data and models.

Gateway is set to Public for everyday, non-sensitive use, with optional zero data retention available. Azure OpenAI is added via BYOK as Internal for internal data in a private cloud. A self-hosted vLLM instance is connected with the OpenAI Compatible provider and assigned Secret for the most sensitive workloads.

Result: one interface; routing and provider choice stay aligned to sensitivity and requirements — no conversation data leaks to lower-classification providers.

This is just one way to combine these options; configure Steinkauz to match your own trust boundaries and policies.
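Written out as configuration, the scenario above might look like the following. The keys, values, and URL are hypothetical placeholders, not Steinkauz's actual config schema:

```python
# Hypothetical configuration mirroring the three-provider scenario.
# All keys, values, and the base_url are illustrative placeholders.
providers = {
    "gateway": {
        "classification": "Public",
        "zero_data_retention": True,   # optional ZDR, per the Gateway section
    },
    "azure_openai": {
        "classification": "Internal",
        "byok": True,                  # your own Azure OpenAI keys
    },
    "openai_compatible": {
        "classification": "Secret",
        "byok": True,
        "base_url": "https://vllm.example.internal/v1",  # self-hosted vLLM
    },
}
```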

Ready for responsible AI?

Sign up and use transparency, privacy, control, and security from day one.