SignalVault - The trust layer for AI applications

The trust layer for AI applications

Audit logging, PII detection, and guardrails for production AI. Integrates with your existing stack in minutes.

100 free requests · no credit card required

Stripe → payments · Sentry → errors · PostHog → analytics · SignalVault → AI trust
[Dashboard preview: "production-api" (Growth plan) · Requests (30d): 0, +12% vs prev · Violation rate: 2.4% (30 violations) · Cost (30d): $48.32 (~$0.04/req) · Top violation: PII detected, 18 occurrences · request log columns: Time, Prompt, Model, Tokens, Decision]
The problem

AI moved to production. Trust didn't.

Logging is an afterthought

Teams ship AI features fast. Audit logs come later — if at all.

Silent data leaks

PII and secrets slip through prompts. You won't know until it's too late.

No audit trail

When things break, you have no record of what was sent to the AI.

Compliance after the fact

Security teams ask for logs after an incident. You have nothing to show.

How it works

One proxy, complete visibility

Route AI requests through SignalVault. Every prompt and response is logged, scanned, and governed.

Your App → SignalVault → LLM Provider
01

Route requests

Use our SDK or point your base URL at SignalVault's proxy.

02

Log everything

Every prompt and response encrypted with AES-256-GCM.

03

Enforce rules

PII detection, secret scanning, and token limits on every request.

04

Get alerts

Violations trigger webhooks and email alerts. Export anytime.
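The alert step can be consumed with an ordinary webhook handler. A minimal sketch of verifying and parsing a violation alert; the payload shape, signing scheme, and secret format are illustrative assumptions, not SignalVault's documented schema:

```python
import hashlib
import hmac
import json

# Hypothetical webhook secret; the real name/format would come from the dashboard.
WEBHOOK_SECRET = b"whsec_example_secret"

def verify_signature(payload: bytes, signature: str) -> bool:
    """Check an HMAC-SHA256 hex signature against the raw request body."""
    expected = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_alert(payload: bytes, signature: str) -> dict:
    """Verify and parse a violation alert; raise on a bad signature."""
    if not verify_signature(payload, signature):
        raise ValueError("invalid webhook signature")
    return json.loads(payload)

# Example payload (shape is an assumption, not SignalVault's actual schema)
body = json.dumps({
    "event": "violation.detected",
    "rule": "pii.email",
    "request_id": "req_123",
}).encode()
sig = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
event = handle_alert(body, sig)
print(event["rule"])
```

Signature verification before parsing is the key design point: it stops forged alerts from reaching your incident tooling.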

Integration

SDK, proxy, or both

Python and Node.js SDKs that wrap OpenAI's client. Or use the proxy with any provider — no SDK required.

import os
from signalvault import SignalVaultClient

client = SignalVaultClient(
    api_key="sk_live_your_signalvault_key",
    openai_api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.signalvault.io",
    environment="production",
)

# Use exactly like OpenAI SDK
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
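For the proxy-only path, any OpenAI-compatible HTTP client works. A minimal sketch of building such a request with the standard library; the endpoint path and the `X-SignalVault-Key` header are illustrative assumptions, so check the real docs for the actual URL and auth convention:

```python
import json
import urllib.request

# Assumption: the proxy accepts OpenAI's chat-completions wire format;
# the URL and SignalVault auth header below are illustrative, not documented.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.signalvault.io/v1/chat/completions",  # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk-your-openai-key",
        "X-SignalVault-Key": "sk_live_your_signalvault_key",  # hypothetical
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; any HTTP client works the same way.
```

Because the wire format is OpenAI-compatible, switching an existing integration over is a base-URL change plus one auth header, with no SDK dependency.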
Features

Everything you need to ship AI safely

Encrypted audit logs

Every prompt and response stored with AES-256-GCM. Immutable, tamper-proof, export-ready.

PII detection

Catch emails, SSNs, phone numbers, and credit cards before they reach the AI provider.

Secret scanning

Detect API keys, tokens, and credentials in prompts. Block or redact automatically.

Budget controls

Set monthly cost limits and daily token caps per app. Get alerts before you overspend.
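Budget enforcement amounts to a pre-flight check on each request. A sketch of the idea; the limits, pricing, and function names below are made-up examples, not SignalVault defaults:

```python
from dataclasses import dataclass

@dataclass
class Budget:
    """Illustrative per-app limits; real values come from your dashboard."""
    monthly_cost_limit: float  # USD
    daily_token_cap: int

def allow_request(budget: Budget, month_cost: float, day_tokens: int,
                  est_cost: float, est_tokens: int) -> tuple[bool, str]:
    """Return (allowed, reason) for a request given current usage."""
    if month_cost + est_cost > budget.monthly_cost_limit:
        return False, "monthly cost limit exceeded"
    if day_tokens + est_tokens > budget.daily_token_cap:
        return False, "daily token cap exceeded"
    return True, "ok"

budget = Budget(monthly_cost_limit=50.0, daily_token_cap=100_000)
# Within both limits: allowed.
print(allow_request(budget, month_cost=48.32, day_tokens=90_000,
                    est_cost=0.04, est_tokens=1_200))
# Would push monthly spend past $50: blocked.
print(allow_request(budget, month_cost=49.99, day_tokens=90_000,
                    est_cost=0.04, est_tokens=1_200))
```

Checking the *estimated* cost before the call, rather than after, is what lets alerts fire before you overspend instead of once the bill arrives.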

Compliance exports

One-click CSV and JSON exports for SOC2, GDPR, and security audits.

Mirror mode

Log requests asynchronously without sitting in the request path. Zero added latency.
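Mirror mode's zero-latency claim follows from keeping logging off the hot path. A minimal sketch of the pattern with a background worker thread; the in-memory list stands in for SignalVault's encrypted log store:

```python
import queue
import threading

log_queue: queue.Queue = queue.Queue()
captured = []  # stand-in for the encrypted, persisted log store

def log_worker() -> None:
    """Drain log records in the background so callers never block."""
    while True:
        record = log_queue.get()
        if record is None:  # shutdown sentinel
            break
        captured.append(record)  # real impl: encrypt, persist, scan
        log_queue.task_done()

worker = threading.Thread(target=log_worker, daemon=True)
worker.start()

def call_llm(prompt: str) -> str:
    """The hot path: enqueue the log record and return immediately."""
    response = f"echo: {prompt}"  # stand-in for the provider call
    log_queue.put({"prompt": prompt, "response": response})
    return response

print(call_llm("Hello!"))
log_queue.put(None)
worker.join()
```

The trade-off versus in-path proxying is that mirror mode can only observe and alert; it cannot block a violating request before it reaches the provider.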

Under the hood

Built for reliability

Runtime
Elixir / BEAM
Telecom-grade fault tolerance and concurrency
Encryption
AES-256-GCM
All payloads encrypted at rest via Cloak
Key hashing
HMAC-SHA256
API keys never stored in plaintext
API format
OpenAI-compatible
Drop-in proxy, no proprietary lock-in
Detection
Pattern + rules
Regex-based PII, secret, and policy scanning
Dashboard
Phoenix LiveView
Real-time updates, zero JS frameworks
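The "Pattern + rules" detection row can be illustrated with a tiny regex scanner. The patterns below are deliberately simplified examples, far looser than a production-grade detector, which would add validation (e.g. Luhn checks for card numbers) and context scoring:

```python
import re

# Simplified illustrative patterns; not SignalVault's actual rule set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule, match) pairs for every PII hit in the prompt."""
    hits = []
    for rule, pattern in PII_PATTERNS.items():
        hits.extend((rule, m) for m in pattern.findall(text))
    return hits

prompt = "Contact jane@example.com or 555-867-5309, SSN 123-45-6789."
print(scan(prompt))
```

Each hit carries the rule name, which is what lets a policy engine decide per-rule whether to log, redact, or block the request.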

Start protecting your AI stack

Free to start: 100 free requests, no credit card required.