vibex.sh Documentation
Zero-config log monitoring, charting, and AI log analysis. Pipe logs and visualize instantly.
Quickstart
Get up and running with vibex.sh in under a minute. First authenticate, then pipe your data and watch it appear.
Step 1: Authenticate
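Authentication is a single command:

```shell
npx vibex-sh login
```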
This opens your browser for signup/login. Your token is saved to ~/.vibex/config.json and used automatically.
Step 2: Your First Log
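A minimal first pipe, assuming you have already logged in (the JSON values are illustrative):

```shell
echo '{"cpu": 45, "memory": 78, "status": "healthy"}' | npx vibex-sh
```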
A dashboard URL will appear. Open it in your browser to see your data visualized in real-time.
Monitoring from Pipe
The simplest way to use vibex.sh. After authenticating with npx vibex-sh login, pipe any output from your scripts, applications, or commands directly to the CLI.
JSON Logs
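For example, piping structured JSON straight into the CLI (values illustrative):

```shell
echo '{"level": "info", "cpu": 45, "memory": 78}' | npx vibex-sh
```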
Script Output
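Any command's stdout and stderr can be piped the same way; the script name here is hypothetical:

```shell
./deploy.sh 2>&1 | npx vibex-sh
```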
Reusing Sessions
Keep sending logs to the same session for continuous monitoring. Sessions are automatically created and linked to your account:
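A sketch using the `-s` flag from the CLI reference below (session ID illustrative):

```shell
tail -f app.log | npx vibex-sh -s vibex-abc123
```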
💡 Pro Tip: JSON logs are automatically parsed and visualized. Non-JSON text logs are displayed in the terminal view and can be parsed with auto-facet detection.
Monitoring using a Logger
Integrate vibex.sh directly into your application using language-specific SDKs. Perfect for production deployments.
🔐 Get Your Authentication Token
Before using the SDKs, you need to authenticate and get your token. Run this command in your terminal:
`npx vibex-sh login`

This will generate your VIBEX_TOKEN, which you'll use in your environment variables below.
🐍 Python Project
Installation
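The PyPI package name below is an assumption inferred from the `vibex_sh` import used in the integration code; check the package registry for the canonical name:

```shell
pip install vibex-sh
```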
Quick Setup
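Export your token before starting your app; the token value is a placeholder:

```shell
export VIBEX_TOKEN=vb_live_xxx
```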
Code Integration
```python
import logging
import json

from vibex_sh import VibexHandler

logger = logging.getLogger('my_app')
logger.setLevel(logging.INFO)  # default level is WARNING; INFO records would otherwise be dropped
vibex_handler = VibexHandler()
logger.addHandler(vibex_handler)

# Send logs (JSON format recommended for structured data)
logger.info(json.dumps({'cpu': 45, 'memory': 78, 'status': 'healthy'}))
logger.info(json.dumps({'error': 'connection_failed', 'retry_count': 3}))
```

Fail-Safe Behavior

The handler is fail-safe: if VIBEX_TOKEN or VIBEX_SESSION_ID is missing, logs are skipped silently and your application keeps running.

📦 Node.js Project
Installation
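The npm package name matches the `require('vibex-sdk')` call in the integration code; winston is installed alongside it because the example below uses it as the logging framework:

```shell
npm install vibex-sdk winston
```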
Quick Setup
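Export your token (and, optionally, a session ID to reuse) before starting your app; both values are placeholders:

```shell
export VIBEX_TOKEN=vb_live_xxx
# Optional: send logs to an existing session instead of creating a new one
export VIBEX_SESSION_ID=vibex-abc123
```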
Code Integration
```javascript
const winston = require('winston');
const { VibexHandler } = require('vibex-sdk');

const logger = winston.createLogger({
  level: 'info',
  transports: [
    new winston.transports.Console(),
    new VibexHandler({ verbose: true }),
  ],
});

// Send logs (JSON format recommended for structured data)
logger.info(JSON.stringify({ cpu: 45, memory: 78, status: 'healthy' }));
logger.info(JSON.stringify({ error: 'connection_failed', retry_count: 3 }));
```

Note: The CLI can handle any log format (JSON, Nginx, Syslog, Docker, Kubernetes, etc.). SDKs send structured JSON data. For best results with auto-facet detection, use JSON format.

Fail-Safe Behavior

The handler is fail-safe: if VIBEX_TOKEN or VIBEX_SESSION_ID is missing, logs are skipped silently and your application keeps running.

AI Assistant Guide
vibex.sh includes an AI assistant that analyzes your logs and answers questions. The assistant is context-aware and can understand your application stack, infrastructure, and log patterns when you provide context.
Setting Up OpenAI API Key
The AI Assistant uses your own OpenAI API key for privacy and cost control. Configure it in your session settings:
- Open your session dashboard
- Click the settings icon (gear) in the top right
- Navigate to "AI Assistant" section
- Enter your OpenAI API key
- Your key is encrypted and stored securely
Providing Context
For better responses, provide context about your application:
- Technology Stack: e.g., "Node.js 18, Next.js 14, PostgreSQL"
- Infrastructure: e.g., "AWS Lambda, Vercel, Redis"
- Documentation URLs: Links to your API docs or relevant documentation
This context helps the AI understand your codebase and provide more accurate analysis and answers.
Attaching Logs to Conversations
You can attach specific logs to your questions for focused analysis:
- Click on a log entry in the console view
- Click the "Attach to AI" button
- Open the AI Assistant drawer
- Ask your question - the attached logs will be included in context
The AI also automatically includes surrounding log context (before and after attached logs) for better understanding.
Best Practices
- Be specific in your questions - "Why is this error happening?" is better than "What's wrong?"
- Attach relevant logs when asking about specific errors
- Provide context about what you were doing when the error occurred
- Use the assistant for pattern detection, log analysis, and getting insights about your data
🔒 Privacy: Your OpenAI API key is encrypted and never shared. All AI processing uses your own API key, giving you full control over costs and data.
IDE Integration (Vibe Coding)
Configure your IDE to generate vibex.sh-optimized JSON logs. Rule files guide AI coding assistants to ensure consistent log formatting that works seamlessly with vibex.sh's auto-facet detection.
What are Rule Files?
Rule files are configuration files that guide AI coding assistants in your IDE. They help ensure consistent code generation and adherence to project standards. By adding a vibex rule file to your project, your IDE will automatically generate JSON logs that are compatible with vibex.sh, making it easy to visualize and monitor your application's output.
Rule Content
# vibex.sh Log Format Rules
When generating logs or debugging output, always format data to be compatible with vibex.sh log monitoring. vibex.sh supports multiple log formats and intelligently parses them to extract metrics, context, and structured data.
## Core Principles
1. **Hybrid JSON Structure**: vibex.sh uses a hybrid JSON structure that combines metrics, logs, and context
2. **Multiple Log Types**: Support for JSON, text, web server logs, stack traces, and more
3. **Auto-Detection**: The system automatically detects and parses various log formats
4. **Structured Data**: Numeric values become metrics, strings become categorical data, nested objects become facets
## Supported Log Types
vibex.sh supports 13 different log types, each optimized for specific use cases:
### 1. JSON Logs (Default)
Structured JSON logs with hybrid structure. Best for application logs, metrics, and structured data.
### 2. Text Logs
Plain text logs that will be parsed by smart pattern detection. Best for unstructured logs.
### 3. Web Server Logs
Nginx, Apache, and other web server access logs. Automatically extracts HTTP method, status, path, query parameters.
### 4. Load Balancer Logs
HAProxy, AWS ALB, and other load balancer logs.
### 5. Stack Traces
Error stack traces with file, line, and function information.
### 6. Firewall Logs
iptables, pfSense, Cisco ASA firewall logs.
### 7. Kubernetes Logs
Kubernetes pod and container logs.
### 8. Docker Logs
Docker container logs.
### 9. Network Logs
tcpdump, wireshark, and other network packet logs.
### 10. Key-Value Logs
Key-value pair formatted logs (e.g., `key1=value1 key2=value2`).
### 11. JSON-in-Text
JSON objects embedded in text logs.
### 12. Smart Pattern
Multi-language pattern matching for various log formats.
### 13. Raw Logs
Fallback parser for any other log format.
## Hybrid JSON Structure
When using JSON logs, vibex.sh expects a hybrid structure that combines multiple data types:
```json
{
  "message": "Human readable log message",
  "level": "info|warn|error|debug",
  "metrics": {
    "cpu": 45,
    "memory": 78,
    "latency_ms": 200,
    "requests_per_sec": 1200
  },
  "context": {
    "trace_id": "abc-123-def",
    "user_id": "u_999",
    "request_id": "req_456",
    "environment": "production"
  },
  "timestamp": "2024-01-15T10:00:00Z"
}
```
### Field Guidelines
- **message**: Human-readable log message (optional but recommended)
- **level**: Log level - `info`, `warn`, `error`, `debug` (defaults to `debug` if not set)
- **metrics**: Numeric values that will be charted (cpu, memory, latency, etc.)
- **context**: Indexed fields for filtering (trace_id, user_id, request_id, etc.)
- **timestamp**: ISO 8601 format (auto-filled if missing)
## Detailed Examples by Log Type
### JSON Logs - Application Metrics
```json
{"timestamp": "2024-01-15T10:00:00Z", "level": "info", "message": "User logged in", "user_id": 123, "cpu": 45, "memory": 78}
{"timestamp": "2024-01-15T10:00:01Z", "level": "error", "error_code": "DB_001", "message": "Database connection failed", "retry_count": 3}
{"timestamp": "2024-01-15T10:00:02Z", "cpu": 45, "memory": 78, "requests": 1200, "latency_ms": 200}
{"timestamp": "2024-01-15T10:00:03Z", "level": "info", "message": "API request completed", "duration_ms": 150, "status_code": 200, "path": "/api/users", "method": "GET"}
```
### JSON Logs - With Context for Tracing
```json
{"timestamp": "2024-01-15T10:00:00Z", "level": "info", "message": "Processing payment", "trace_id": "trace-abc-123", "user_id": "u_999", "order_id": "order_456", "amount": 99.99}
{"timestamp": "2024-01-15T10:00:01Z", "level": "info", "message": "Payment gateway called", "trace_id": "trace-abc-123", "duration_ms": 250, "gateway": "stripe"}
{"timestamp": "2024-01-15T10:00:02Z", "level": "info", "message": "Payment confirmed", "trace_id": "trace-abc-123", "user_id": "u_999", "transaction_id": "txn_789"}
```
### JSON Logs - Performance Metrics
```json
{"timestamp": "2024-01-15T10:00:00Z", "cpu": 45.2, "memory": 78.5, "disk_io": 1200, "network_io": 3400}
{"timestamp": "2024-01-15T10:00:01Z", "level": "info", "message": "Cache hit", "cache_hit_rate": 0.95, "cache_size_mb": 512}
{"timestamp": "2024-01-15T10:00:02Z", "level": "info", "message": "Database query", "query_duration_ms": 45, "rows_returned": 100, "table": "users"}
```
### Text Logs - Plain Text
```
Application started successfully
User [email protected] logged in from IP 192.168.1.100
High memory usage detected: 85%
Database connection pool exhausted
```
### Web Server Logs - Nginx/Apache Format
```
127.0.0.1 - - [25/Dec/2024:10:00:00 +0000] "GET /api/users HTTP/1.1" 200 1234 "-" "Mozilla/5.0"
192.168.1.50 - - [25/Dec/2024:10:00:01 +0000] "POST /api/orders HTTP/1.1" 201 5678 "https://example.com" "Mozilla/5.0"
10.0.0.1 - - [25/Dec/2024:10:00:02 +0000] "GET /api/products?category=electronics HTTP/1.1" 200 8901 "-" "curl/7.68.0"
```
### Stack Traces - Error Logs
```
Error: Connection failed
at Database.connect (db.js:45:12)
at UserService.getUser (user-service.js:23:8)
at APIHandler.handleRequest (api-handler.js:67:15)
```
### Key-Value Logs
```
level=info message="User logged in" user_id=123 ip=192.168.1.100 duration_ms=45
level=error error_code=DB_001 message="Database connection failed" retry_count=3
cpu=45.2 memory=78.5 requests=1200 latency_ms=200
```
### Docker Container Logs
```
2024-12-25T10:00:00.123Z [INFO] Application started on port 3000
2024-12-25T10:00:01.456Z [ERROR] Failed to connect to database: connection timeout
2024-12-25T10:00:02.789Z [WARN] High memory usage: 85%
```
### Kubernetes Pod Logs
```
2024-12-25T10:00:00.123Z stdout F [INFO] Processing request from pod: frontend-abc-123
2024-12-25T10:00:01.456Z stderr F [ERROR] Health check failed: timeout
2024-12-25T10:00:02.789Z stdout F [INFO] Pod restarted successfully
```
### Firewall Logs
```
Dec 25 10:00:00 firewall kernel: [12345.678] IN=eth0 OUT= MAC=00:11:22:33:44:55 SRC=192.168.1.100 DST=10.0.0.1 LEN=60 TOS=0x00 PROTO=TCP SPT=54321 DPT=80
Dec 25 10:00:01 firewall kernel: [12346.789] IN=eth0 OUT= MAC=00:11:22:33:44:55 SRC=192.168.1.200 DST=10.0.0.1 LEN=60 TOS=0x00 PROTO=TCP SPT=54322 DPT=443
```
### Network Logs
```
10:00:00.123 IP 192.168.1.100.54321 > 10.0.0.1.80: Flags [S], seq 1234567890, win 65535
10:00:01.456 IP 10.0.0.1.80 > 192.168.1.100.54321: Flags [S.], seq 9876543210, ack 1234567891, win 65535
10:00:02.789 IP 192.168.1.100.54321 > 10.0.0.1.80: Flags [.], ack 9876543211, win 65535
```
### JSON-in-Text Logs
```
[2024-12-25 10:00:00] Application started with config: {"port": 3000, "env": "production", "debug": false}
[2024-12-25 10:00:01] User action: {"action": "login", "user_id": 123, "ip": "192.168.1.100"}
[2024-12-25 10:00:02] Metrics: {"cpu": 45, "memory": 78, "requests": 1200}
```
## SDK Usage Examples
### Node.js SDK
```javascript
const { VibexClient } = require('vibex-sdk');
const client = new VibexClient();
// JSON log with hybrid structure
await client.sendLog('json', {
  message: 'User logged in',
  level: 'info',
  metrics: { cpu: 45, memory: 78 },
  context: { user_id: 123, trace_id: 'abc-123' }
});
// Text log
await client.sendLog('text', 'Application started successfully');
// Web server log
await client.sendLog('web-server', '127.0.0.1 - - [25/Dec/2024:10:00:00 +0000] "GET /api/users HTTP/1.1" 200');
// Stack trace
await client.sendLog('stacktrace', 'Error: Connection failed\n at file.js:10:5');
```
### Python SDK
```python
from vibex_sh import VibexClient
client = VibexClient()
# JSON log with hybrid structure
client.send_log('json', {
    'message': 'User logged in',
    'level': 'info',
    'metrics': {'cpu': 45, 'memory': 78},
    'context': {'user_id': 123, 'trace_id': 'abc-123'}
})
# Text log
client.send_log('text', 'Application started successfully')
# Web server log
client.send_log('web-server', '127.0.0.1 - - [25/Dec/2024:10:00:00 +0000] "GET /api/users HTTP/1.1" 200')
# Stack trace
client.send_log('stacktrace', 'Error: Connection failed\n at file.py:10:5')
```
## Best Practices
### 1. Use Appropriate Log Types
- **JSON**: For structured application data, metrics, and events
- **Text**: For unstructured logs that need pattern detection
- **web-server**: When you know the log is from a web server
- **stacktrace**: For error stack traces
- **keyvalue**: For key-value formatted logs
### 2. Include Context for Tracing
Always include `trace_id`, `user_id`, or `request_id` in the `context` object to enable trace filtering:
```json
{
  "message": "Processing request",
  "context": {
    "trace_id": "trace-abc-123",
    "user_id": "u_999",
    "request_id": "req_456"
  }
}
```
### 3. Use Consistent Field Names
- **Metrics**: `cpu`, `memory`, `latency_ms`, `duration_ms`, `requests_per_sec`
- **Context**: `trace_id`, `user_id`, `request_id`, `correlation_id`, `span_id`, `session_id`
- **Levels**: `info`, `warn`, `error`, `debug`
### 4. Numeric Values Should Be Numbers
```json
// ✅ Good
{"cpu": 45, "memory": 78, "latency_ms": 200}
// ❌ Bad
{"cpu": "45", "memory": "78", "latency_ms": "200"}
```
### 5. Include Timestamps
Timestamps are auto-filled if missing, but it's better to include them:
```json
{"timestamp": "2024-01-15T10:00:00Z", "message": "Event occurred"}
```
### 6. Use Nested Objects for Complex Data
Nested objects are automatically extracted as facets:
```json
{
  "message": "Order processed",
  "order": {
    "id": "order_123",
    "total": 99.99,
    "items": 3
  },
  "customer": {
    "id": "cust_456",
    "tier": "premium"
  }
}
```
## When to Use Each Log Type
- **JSON**: Default choice for application logs, metrics, structured events
- **Text**: When you have unstructured logs or want automatic pattern detection
- **web-server**: For Nginx, Apache, or other web server access logs
- **loadbalancer**: For HAProxy, AWS ALB, or other load balancer logs
- **stacktrace**: For error stack traces from exceptions
- **firewall**: For iptables, pfSense, or Cisco ASA firewall logs
- **kubernetes**: For Kubernetes pod or container logs
- **docker**: For Docker container logs
- **network**: For tcpdump, wireshark, or other network packet logs
- **keyvalue**: For key-value pair formatted logs
- **json-in-text**: When JSON is embedded in text logs
- **smart-pattern**: For multi-language pattern matching
- **raw**: Fallback for any other log format
## Auto-Facet Detection
vibex.sh automatically detects and extracts:
- **Metrics**: Numeric fields become chartable metrics
- **Categorical Data**: String fields become filterable categories
- **Context Fields**: Known fields (trace_id, user_id, etc.) are indexed for filtering
- **Nested Objects**: Automatically extracted as facets
Following these rules ensures your logs are automatically parsed, visualized, and made searchable in vibex.sh dashboards.

Location

Project root

File Name

`.cursor/rules/vibex.mdc`

Installation Steps
1. Navigate to your project root directory
2. Create the `.cursor` folder if it doesn't exist
3. Create the `rules` folder inside `.cursor`
4. Create a file named `vibex.mdc` in `.cursor/rules/`
5. Copy the rule content into the file
6. Restart Cursor IDE to apply the rules
Directory Structure
```
project-root/
└── .cursor/
    └── rules/
        └── vibex.mdc
```

Auto-Facet Detection
vibex.sh automatically detects and extracts facets from your logs without any configuration. This saves hours of manual mapping and setup time.
How It Works
When logs arrive, vibex.sh runs multiple parsers in parallel to detect patterns:
- JSON Logs: All fields are automatically extracted as facets
- Web Server Logs: IP, method, path, status codes, user agents are detected
- Syslog: Handled by smart pattern detection. Timestamps, hosts, facilities, priorities are automatically extracted
- Application Logs: Smart pattern detection for any format
- Container Logs: Pod/container metadata, stream types are extracted
Supported Parsers
Web
- • Web Server Access Log (Nginx, Apache)
- • Load Balancer (HAProxy, AWS ALB)
System
- • Stack Trace
- • Firewall Logs
- • Kubernetes Pod/Container Logs
- • Docker Container Logs
- • Network Logs
Generic
- • JSON in Text
- • Key-Value Pairs
- • Smart Pattern (Multi-language)
- • Raw (Fallback)
Chart Type Selection
vibex.sh automatically selects the best chart type for each facet:
- Numeric fields: Time series area charts
- Categorical fields (≤5 values): Pie charts
- Categorical fields (>5 values): Bar charts
- Relational data: Stacked bar charts
- Long labels: Horizontal bar charts for readability
⚡ Zero Config: All detection happens automatically. No manual mapping, no configuration files. Just pipe your logs and watch the charts appear.
Authentication & Config
Authentication is required to use vibex.sh. Run npx vibex-sh login to authenticate and get your token.
Authentication
Authenticate with `npx vibex-sh login`. This will open your browser for signup/login, and your token will be saved to `~/.vibex/config.json`.
Config File Location
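The token lives at the path used during authentication; a quick way to inspect it (assuming a POSIX shell):

```shell
cat ~/.vibex/config.json
```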
Config File Format
```json
{
  "token": "vb_live_your_token_here",
  "webUrl": "https://vibex.sh",
  "updatedAt": "2024-01-01T12:00:00.000Z"
}
```

Automatic Session Creation
When you're authenticated, new sessions are automatically created and linked to your account:
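For example, any authenticated piped run starts a fresh session (the log path is illustrative):

```shell
tail -f /var/log/app.log | npx vibex-sh
```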
🔒 Security: Your token is stored locally and never shared. It's used only for authenticating API requests to vibex.sh.
Session Sharing & Collaboration
Share your sessions with team members for incident response, log analysis, or client demos. Perfect for war room scenarios and collaborative troubleshooting.
How to Share
- Open your session dashboard
- Click the "Share" button in the top right
- Copy the shareable URL
- Send the URL to team members (they'll need the auth code)
Auth Codes
Each session has a unique 6-8 character auth code for security:
- Auth codes are displayed when you create a session
- Share the code along with the URL
- Recipients enter the code to access the session
- Codes can be regenerated from session settings
Use Cases
War Room
Share incident logs with on-call engineers for collaborative debugging
Client Sharing
Share application metrics with clients for transparency
Team Analysis
Collaborate on complex issues with multiple team members
Read-Only Access
Shared sessions are read-only by default for security
🔒 Privacy: Sessions are private by default. Only share when you explicitly choose to. Auth codes provide an additional layer of security.
Command References
CLI Commands
| Flag | Description | Example |
|---|---|---|
-s, --session-id <id> | Reuse existing session ID | npx vibex-sh -s vibex-abc123 |
--web <url> | Web server URL | npx vibex-sh --web https://vibex.sh |
--socket <url> | Socket server URL | npx vibex-sh --socket wss://ingest.vibex.sh |
--server <url> | Shorthand for --web (auto-derives socket) | npx vibex-sh --server https://vibex.sh |
--token <token> | Authentication token (or use VIBEX_TOKEN env var) | npx vibex-sh --token vb_live_xxx |
login | Authenticate with vibex.sh | npx vibex-sh login |
Environment Variables
| Variable | Description | Example |
|---|---|---|
VIBEX_WEB_URL | Web server URL | export VIBEX_WEB_URL=https://vibex.sh |
VIBEX_SOCKET_URL | Socket server URL | export VIBEX_SOCKET_URL=wss://ingest.vibex.sh |
VIBEX_TOKEN | Authentication token | export VIBEX_TOKEN=vb_live_xxx |
VIBEX_CONFIG_PATH | Custom config file path | export VIBEX_CONFIG_PATH=/path/to/config.json |
Priority Order
Configuration is resolved in the following priority order (highest to lowest):
1. Flags (`--web`, `--socket`, `--server`)
2. Environment variables (`VIBEX_WEB_URL`, `VIBEX_SOCKET_URL`)
3. Production defaults (`https://vibex.sh`, `wss://ingest.vibex.sh`)
Limits & Quotas
vibex.sh uses two types of limits to ensure fair usage and system stability: MPS (Messages Per Second) limits and monthly log volume limits.
MPS (Messages Per Second) Limits
MPS limits control the throughput rate at which you can send logs. Think of it as a speed limit for log ingestion.
- Why it matters: Prevents system overload, ensures fair usage, and maintains quality of service for all users
- When exceeded: Excess logs are rate-limited and dropped. SDKs handle this gracefully with 429 responses
- Impact: Your application continues running normally. The SDKs drop excess logs automatically, so there's no impact on your code
Monthly Log Volume Limits
Monthly limits control the total number of logs you can send per billing cycle.
- Hobby: 10K logs/month
- Pro: 100K logs/month
- Business: 1M logs/month
- When exceeded: New logs are rejected until the next billing cycle
- Notifications: You'll receive warnings as you approach your limit (80-90% usage)
Rate Limit Thresholds
Here's what happens at different usage levels:
Approaching Limits (80-90%)
You'll receive notifications in the dashboard warning you that you're approaching your limits. Consider upgrading your plan if you consistently hit these thresholds.
MPS Limit Exceeded
Excess logs are dropped gracefully. SDKs automatically handle 429 (rate limit) responses by dropping logs, so your application continues running normally. No code changes needed.
Monthly Volume Exceeded
New logs are rejected with appropriate error responses until the next billing cycle. Upgrade your plan for immediate access, or wait for the cycle to reset.
Frequently Asked Questions
Do I need to create an account?
Yes. Authentication is required. Run npx vibex-sh login which opens your browser for signup/login. Your token is automatically saved and used for all future commands. This allows you to access your sessions, use the AI Assistant, and access all features.
How long are sessions stored?
Hobby (free) tier: 7 days retention - data older than 7 days is automatically deleted from both charts and raw logs. Pro tier: 30 days retention. Sessions continue to exist and can receive new data even after old data expires. This is a rolling window based on the current time, so you always see data from the last X days where X is your retention period.
Can I use vibex.sh in production?
Yes! Use the Python SDK or Node.js SDK, or pipe logs from your production applications. The SDKs are fail-safe and won't break your application if there are network issues or missing configuration.
What data formats are supported?
JSON logs are automatically parsed and visualized. Text logs (Nginx, Syslog, etc.) are parsed with auto-facet detection. For best results, use structured JSON with numeric values for metrics and strings for categories.
How does the AI Assistant work?
The AI Assistant uses your own OpenAI API key for privacy and cost control. You configure it in session settings, and it provides context-aware log analysis based on your logs, application stack, and infrastructure.
Is my data secure?
Yes. Sessions are private by default. Your authentication token is stored locally and never shared. All communication uses HTTPS/WSS encryption. Your OpenAI API key (for AI Assistant) is encrypted at rest.
What happens if the connection drops?
The CLI automatically reconnects and will queue logs while disconnected. The SDKs handle network errors gracefully and won't affect your application.
Are there rate limits?
Yes, there are two types of rate limits to ensure fair usage:
- MPS (Messages Per Second) Limits: Controls throughput rate. Each plan has a maximum MPS limit (Hobby: 10 MPS, Pro: 50 MPS, Business: 200 MPS). When exceeded, excess logs are rate-limited and dropped.
- Monthly Log Volume Limits: Controls total log count per billing cycle. When exceeded, new logs may be rejected until the next billing cycle.
What happens at different thresholds:
- Approaching limits (80-90%): You'll receive notifications in the dashboard
- MPS limit exceeded: Excess logs are dropped gracefully. SDKs handle 429 responses automatically, so your application continues normally
- Monthly volume exceeded: New logs are rejected with appropriate error responses. Upgrade your plan or wait for the next billing cycle
The SDKs handle rate limits gracefully by dropping logs when limits are exceeded, but your application continues normally. This fail-safe behavior ensures your production systems aren't affected by rate limiting.
Support
Need help? We're here for you. Reach out through any of these channels:
Response Time
We typically respond within 24 hours. For urgent issues, please mark your email as "Urgent" in the subject line.