HTTP API
The LogPulse HTTP API lets you ingest and query log data programmatically. Send logs from any language or platform using simple REST calls with JSON payloads. The API supports single and batch ingestion, LPQL queries, and provides real-time quota tracking.
Overview
The LogPulse API is a RESTful JSON API. All requests and responses use JSON encoding. Authentication is done via Bearer token in the Authorization header.
Authentication
All API requests require a valid API key passed as a Bearer token in the Authorization header. Create API keys in the LogPulse dashboard under Integrations → HTTP API.
Authorization: Bearer YOUR_API_KEY

Each API key is scoped to your organization and can optionally be linked to an ETL pipeline for automatic processing. You can create multiple API keys for different applications or environments.
Base URL
https://api.logpulse.io

All API endpoints are relative to this base URL. The API is served over HTTPS only.
Ingest Logs
Send log events to LogPulse for indexing and analysis. The ingest endpoint accepts both single log objects and arrays of logs for batch ingestion.
Log Schema
Each log event follows a simple schema. All fields are optional with sensible defaults:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| event | string \| object | No | "" | The log event content — a text string or structured object |
| level | string | No | "info" | Log level: debug, info, warn, error, fatal |
| timestamp | string \| number | No | now | Event timestamp as ISO 8601 string or Unix timestamp. Auto-generated if omitted. |
| index | string | No | "main" | Target index for the event |
| source | string | No | "" | Source identifier (e.g., service name, hostname) |
| sourcetype | string | No | "" | Source type for categorization (e.g., 'application', 'nginx_access') |
| host | string | No | "" | Originating host name |
| attributes | Record<string, string> | No | {} | Arbitrary key-value pairs for structured data (searchable via LPQL) |
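Since all fields are optional, the server fills in the defaults above when they are omitted. If you want to see (or validate) the resolved entry client-side, a minimal sketch in Python (field names and defaults taken from the table; the helper itself is illustrative, not part of any SDK):

```python
from datetime import datetime, timezone

# Documented defaults from the schema table above.
DEFAULTS = {"level": "info", "index": "main", "source": "",
            "sourcetype": "", "host": ""}

def with_defaults(entry):
    """Return a copy of a log entry with the documented defaults filled in."""
    out = {**DEFAULTS, "event": "", "attributes": {}, **entry}
    # timestamp defaults to "now" (ISO 8601) when omitted
    out.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
    return out

log = with_defaults({"event": "cache miss", "source": "api-server"})
# log["level"] == "info", log["index"] == "main"
```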
Single Log
POST /api/v1/logs

Send a single log event as a JSON object in the request body.
{
"event": "User login successful",
"level": "info",
"timestamp": "2026-03-21T10:30:00.000Z",
"index": "production",
"source": "auth-service",
"sourcetype": "application",
"host": "web-01",
"attributes": {
"user_id": "usr_abc123",
"ip_address": "192.168.1.100",
"method": "POST",
"path": "/api/auth/login"
}
}

Response:

{
"data": {
"accepted": 1,
"rejected": 0,
"timestamp": "2026-03-21T10:30:00.000Z",
"quotaStatus": {
"plan": "starter",
"limitMB": 1024,
"usedMB": 42.5,
"percent": 4.15,
"blocked": false
}
}
}

Batch Ingest
POST /api/v1/logs

Send multiple log events at once as a JSON array. Batch ingestion is more efficient for high-volume logging — fewer HTTP requests and lower overhead.
[
{
"event": "GET /api/users 200 12ms",
"level": "info",
"source": "nginx",
"sourcetype": "access_log",
"attributes": { "status": "200", "duration_ms": "12" }
},
{
"event": "Database connection timeout",
"level": "error",
"source": "api-server",
"sourcetype": "application",
"attributes": { "db": "postgres", "timeout_ms": "5000" }
}
]

Vector Ingest
POST /api/v1/ingest/vector

A dedicated endpoint optimized for the Vector log agent. It accepts the same JSON payload format but includes additional processing for Vector-specific metadata fields.
Query Logs
GET /api/v1/logs

Search and retrieve log events using LPQL queries. Results are returned in reverse chronological order by default.
| Parameter | Type | Description |
|---|---|---|
| query | string | LPQL query string (e.g., level="error" source="api") |
| level | string | Filter results by log level |
| source | string | Filter results by source identifier |
| sourcetype | string | Filter results by source type |
| from | string | Start of time range (ISO 8601) |
| to | string | End of time range (ISO 8601) |
| limit | number | Maximum number of results to return (default: 100, max: 10000) |
| offset | number | Number of results to skip for pagination |
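The parameters above combine into a standard query string. A small Python helper for building query URLs (the helper and its parameter names are illustrative; `from` is passed as `frm` because it is a Python keyword):

```python
from urllib.parse import urlencode

BASE_URL = "https://api.logpulse.io"

def build_query_url(query, level=None, source=None, frm=None, to=None,
                    limit=100, offset=0):
    """Build a GET /api/v1/logs URL from the documented query parameters."""
    params = {"query": query, "limit": limit, "offset": offset}
    if level:
        params["level"] = level
    if source:
        params["source"] = source
    if frm:
        params["from"] = frm   # sent as "from" on the wire
    if to:
        params["to"] = to
    return f"{BASE_URL}/api/v1/logs?{urlencode(params)}"

url = build_query_url("timeout", level="error", source="api-server",
                      frm="2026-03-20T00:00:00Z")
```

Increase `offset` by `limit` on each request to page through larger result sets.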
GET /api/v1/logs?query=timeout&level=error&source=api-server&from=2026-03-20T00:00:00Z&limit=100

Recent Logs
GET /api/v1/logs/recent

Quickly fetch the most recent log events without specifying a query. Useful for tail-like functionality and dashboard widgets.
Rate Limits
The API enforces rate limits to ensure fair usage and platform stability. Limits are applied per API key.
| Endpoint | Limit | Window |
|---|---|---|
| /api/v1/logs (POST) | 10,000 requests | Per minute |
| /api/v1/ingest/vector | 10,000 requests | Per minute |
| /api/v1/logs (GET) | 100 requests | Per minute |
Rate limit information is included in response headers:
X-RateLimit-Limit: 10000
X-RateLimit-Remaining: 9542
X-RateLimit-Reset: 1711018860

Quotas
Each organization has a daily data ingestion quota based on their plan tier. The quota is measured in megabytes of raw log data per day and resets at midnight UTC.
| Threshold | Behavior |
|---|---|
| < 80% | Normal operation — logs are ingested without restrictions |
| 80–100% | Warning threshold — a quota warning is included in API responses |
| 100% | Quota exceeded — new log ingestion is blocked until the next day |
Every ingest response includes a quotaStatus object with usedMB, limitMB, percent, and blocked fields so you can monitor your consumption.
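The quotaStatus object maps directly onto the thresholds above, so client-side monitoring is a few lines of code. A minimal sketch (the classifier function is illustrative, not part of any SDK):

```python
def check_quota(quota_status):
    """Classify a quotaStatus object from an ingest response."""
    if quota_status.get("blocked"):
        return "blocked"   # 100%: ingestion rejected until the daily reset
    if quota_status.get("percent", 0) >= 80:
        return "warning"   # 80-100%: warning threshold
    return "ok"            # < 80%: normal operation

status = check_quota({"plan": "starter", "limitMB": 1024,
                      "usedMB": 42.5, "percent": 4.15, "blocked": False})
# → "ok"
```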
Error Handling
The API uses standard HTTP status codes and returns structured JSON error responses with details about what went wrong.
| Status Code | Meaning | Action |
|---|---|---|
| 400 | Bad Request — invalid JSON or schema | Check the request body against the log schema. Ensure field names and types match the schema above. |
| 401 | Unauthorized — invalid or missing API key | Verify the Authorization header contains a valid Bearer token. |
| 413 | Payload Too Large — body exceeds 10 MB | Split the batch into smaller chunks (recommended: < 5 MB per request). |
| 429 | Too Many Requests — rate limit exceeded | Implement exponential backoff. Check X-RateLimit-Reset header for retry timing. |
| 500 | Internal Server Error | Retry with backoff. If persistent, contact [email protected]. |
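The recommended 429/5xx handling can be sketched as a retry loop with exponential backoff that honors X-RateLimit-Reset when present. A rough Python version (the helper functions are hypothetical, not part of any SDK):

```python
import time
import requests

def backoff_delay(attempt, reset_header, now):
    """Seconds to wait: honor X-RateLimit-Reset if given, else 2**attempt."""
    if reset_header:
        return max(0.0, float(reset_header) - now)
    return float(2 ** attempt)

def post_with_backoff(url, payload, api_key, max_retries=5):
    """POST with exponential backoff on 429 and transient 5xx responses."""
    headers = {"Authorization": f"Bearer {api_key}"}
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, headers=headers)
        if resp.status_code not in (429, 500, 502, 503):
            return resp
        # Use the server's reset time for 429s, exponential backoff otherwise.
        reset = resp.headers.get("X-RateLimit-Reset") \
            if resp.status_code == 429 else None
        time.sleep(backoff_delay(attempt, reset, time.time()))
    return resp
```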
{
"error": "Validation failed",
"code": "VALIDATION_ERROR",
"message": "Invalid log entry format"
}

OTLP Ingest
LogPulse natively accepts OpenTelemetry logs via OTLP/HTTP. Send logs to the standard OTLP endpoint using either protobuf or JSON encoding. This is the recommended ingestion method for teams already using OpenTelemetry collectors.
POST /v1/logs

| Parameter | Value |
|---|---|
| Endpoint | https://api.logpulse.io/v1/logs |
| Content-Type (protobuf) | application/x-protobuf |
| Content-Type (JSON) | application/json |
| Encoding | gzip recommended (Content-Encoding: gzip) |
| Authentication | Bearer token via Authorization header |
| Rate limit | 10,000 requests/min per organization |
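In practice an OpenTelemetry SDK or collector produces the payload for you, but for reference, a minimal hand-built OTLP/JSON request might look like this in Python (structure follows the OTLP/HTTP JSON encoding; field values are illustrative):

```python
import json
import time
import urllib.request

# Minimal OTLP/JSON log payload (camelCase field names per the OTLP spec).
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "auth-service"}},
        ]},
        "scopeLogs": [{
            "logRecords": [{
                "timeUnixNano": str(time.time_ns()),
                "severityText": "INFO",
                "body": {"stringValue": "User login successful"},
                "attributes": [
                    {"key": "user_id", "value": {"stringValue": "usr_abc123"}},
                ],
            }],
        }],
    }],
}

req = urllib.request.Request(
    "https://api.logpulse.io/v1/logs",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
)
# urllib.request.urlopen(req)  # uncomment to send
```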
Schema Mapping
OTLP log records are mapped to LogPulse fields as follows:
| OTLP Field | LogPulse Field | Notes |
|---|---|---|
| Body | event | Log message body (string or structured) |
| SeverityText / SeverityNumber | level | Mapped to info/warn/error/debug/trace |
| TimeUnixNano | timestamp | Nanosecond precision preserved |
| Resource attributes | attributes.* | Merged into attributes with resource. prefix |
| Scope attributes | attributes.* | Merged into attributes with scope. prefix |
| Log record attributes | attributes.* | Stored directly as attributes |
| Resource service.name | source | Used as the log source identifier |
Schema Conventions
Following consistent naming conventions for attributes improves query performance and makes your logs easier to search. LogPulse recommends alignment with OpenTelemetry semantic conventions.
| Convention | Example | Description |
|---|---|---|
| Dot-separated namespaces | k8s.pod.name, http.method | Group related attributes under a common prefix |
| Lowercase snake_case | error_type, user_id | Consistent casing for attribute keys |
| Reserved prefixes | k8s.*, cloud.*, host.*, service.* | Aligned with OTel semantic conventions; used for built-in filters |
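Applied to a log entry, these conventions look like the following (attribute names are illustrative examples, not a required set):

```python
# A log entry whose attribute keys follow the conventions above:
# dot-separated namespaces, lowercase snake_case, OTel-aligned prefixes.
entry = {
    "event": "GET /api/users 200 12ms",
    "level": "info",
    "source": "api-server",
    "attributes": {
        # Dot-separated namespaces aligned with OTel semantic conventions
        "http.method": "GET",
        "http.status_code": "200",
        "k8s.pod.name": "api-server-7f9c",
        # Lowercase snake_case for custom keys
        "duration_ms": "12",
        "user_id": "usr_001",
    },
}
```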
Sampling & Redaction (Planned)
Server-side sampling and PII redaction are planned features that will allow you to control log volume and protect sensitive data at the ingestion layer.
| Feature | Status | Description |
|---|---|---|
| Server-side sampling | Planned | Define rules to keep only a percentage of matching logs (e.g., keep 10% of debug logs) |
| PII redaction | Planned | Regex-based field masking applied at ingest time (e.g., mask email addresses, credit card numbers) |
| Attribute filtering | Planned | Drop specific attributes before storage to reduce cardinality and storage costs |
In the meantime, you can achieve similar results using Visual ETL Pipelines. The redactMask node supports regex-based PII masking, and condition nodes can filter or sample logs before they reach storage.
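Sensitive values can also be masked in application code before logs are sent. A rough client-side sketch (the regexes are illustrative only, not the redactMask implementation — tune them for your data before relying on them):

```python
import re

# Illustrative patterns: email addresses and card-like digit runs.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<card>"),
]

def redact(text):
    """Mask email addresses and card-like digit runs in a log string."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

redact("payment failed for alice@example.com card 4111 1111 1111 1111")
# → "payment failed for <email> card <card>"
```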
Code Examples
Here are complete examples for sending logs to LogPulse from popular languages and tools:
cURL
curl -X POST https://api.logpulse.io/api/v1/logs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"event": "Payment processed successfully",
"level": "info",
"source": "payment-service",
"attributes": {
"amount": "49.99",
"currency": "EUR",
"order_id": "ord_xyz789"
}
}'

Python
import requests
API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.logpulse.io"
# Send a batch of logs
logs = [
{
"event": "User signed up",
"level": "info",
"source": "auth-service",
"attributes": {"user_id": "usr_001", "plan": "growth"}
},
{
"event": "Welcome email sent",
"level": "info",
"source": "email-service",
"attributes": {"user_id": "usr_001", "template": "welcome"}
}
]
response = requests.post(
f"{BASE_URL}/api/v1/logs",
json=logs,
headers={"Authorization": f"Bearer {API_KEY}"}
)
print(response.json())
# {"data": {"accepted": 2, "rejected": 0, "timestamp": "...", "quotaStatus": {...}}}

Node.js
const API_KEY = process.env.LOGPULSE_API_KEY;
async function sendLog(event, level = 'info', attributes = {}) {
const response = await fetch('https://api.logpulse.io/api/v1/logs', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${API_KEY}`,
},
body: JSON.stringify({
event,
level,
source: 'my-node-app',
sourcetype: 'application',
attributes,
}),
});
if (!response.ok) {
throw new Error(`LogPulse API error: ${response.status}`);
}
return response.json();
}
// Usage
await sendLog('Order created', 'info', { order_id: 'ord_123' });

Go
package main
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"os"
)
type LogEntry struct {
Event string `json:"event"`
Level string `json:"level,omitempty"`
Source string `json:"source,omitempty"`
Sourcetype string `json:"sourcetype,omitempty"`
Attributes map[string]string `json:"attributes,omitempty"`
}
func SendLog(entry LogEntry) error {
body, err := json.Marshal(entry)
if err != nil {
return err
}
req, err := http.NewRequest("POST",
"https://api.logpulse.io/api/v1/logs",
bytes.NewBuffer(body))
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization",
"Bearer "+os.Getenv("LOGPULSE_API_KEY"))
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status: %d", resp.StatusCode)
}
return nil
}