Terms of Service

Last updated: April 10, 2026

1. Acceptance of Terms

By accessing or using ProxyLLM ("the Service"), operated by Sysdev TechStrategy & Consulting ("we", "us"), you agree to be bound by these Terms of Service. If you do not agree, do not use the Service.

2. Description of Service

ProxyLLM is an API proxy service that sits between your applications and LLM providers (such as OpenAI, Anthropic, and others). The Service provides semantic caching, smart model routing, cost tracking, and a management dashboard. We do not provide the underlying LLM models — we proxy requests to third-party providers using your configured API keys.

3. Account Registration

You must create an account to use the Service. You are responsible for maintaining the confidentiality of your account credentials and API keys, and for all activity that occurs under your account.

4. API Keys and Security

You may configure your own LLM provider API keys within the Service. We store these keys encrypted at rest and transmit them only over TLS-encrypted connections to the configured providers. You are solely responsible for the security and usage of your upstream provider API keys.

5. Acceptable Use

You agree not to:

  • Use the Service for any unlawful purpose or in violation of any applicable law
  • Attempt to gain unauthorized access to the Service or its infrastructure
  • Interfere with or disrupt the Service or other users' access
  • Circumvent or otherwise exceed the rate limits of your subscription plan
  • Resell or redistribute access to the Service

6. Billing and Subscriptions

The Service offers Free, Pro, and Scale subscription tiers. Billing is handled by Stripe. By subscribing to a paid plan, you authorize us to charge your payment method on a recurring monthly basis. You may cancel at any time through the billing portal, and your access will continue until the end of the current billing period.

7. Caching

The Service caches LLM responses to reduce costs and latency. Cached responses are stored in encrypted Redis instances and are scoped to your workspace. Cache entries expire based on your plan tier (24h–168h). You may request cache clearing by contacting support.

8. Service Availability

We strive to maintain high availability but do not guarantee uninterrupted service. The Service depends on third-party infrastructure (Railway, Vercel, Clerk, Stripe) and upstream LLM providers. We are not liable for outages caused by these third-party services.

9. Limitation of Liability

To the maximum extent permitted by law, we shall not be liable for any indirect, incidental, special, consequential, or punitive damages, including but not limited to loss of profits, data, or business opportunities, arising from your use of the Service.

10. Changes to Terms

We may update these Terms from time to time. We will notify you of material changes via email or through the dashboard. Continued use of the Service after changes constitutes acceptance of the updated Terms.

11. Contact

For questions about these Terms, contact us at contact@proxyllm.dev.