Integrations

HTTP API

The LogPulse HTTP API lets you ingest and query log data programmatically. Send logs from any language or platform using simple REST calls with JSON payloads. The API supports single and batch ingestion, LPQL queries, and provides real-time quota tracking.

Overview

The LogPulse API is a RESTful JSON API. All requests and responses use JSON encoding. Authentication is done via Bearer token in the Authorization header.

Protocol: RESTful API
Encoding: JSON
Max request body: 10 MB

Authentication

All API requests require a valid API key passed as a Bearer token in the Authorization header. Create API keys in the LogPulse dashboard under Integrations → HTTP API.

Authorization header
Authorization: Bearer YOUR_API_KEY

Each API key is scoped to your organization and can optionally be linked to an ETL pipeline for automatic processing. You can create multiple API keys for different applications or environments.

Warning: Keep your API keys secret. Never expose them in client-side code, public repositories, or logs. If a key is compromised, revoke it immediately from the dashboard and create a new one.

Base URL

https://api.logpulse.io

All API endpoints are relative to this base URL. The API is served over HTTPS only.

Ingest Logs

Send log events to LogPulse for indexing and analysis. The ingest endpoint accepts both single log objects and arrays of logs for batch ingestion.

Log Schema

Each log event follows a simple schema. All fields are optional with sensible defaults:

Field      | Type                   | Required | Default | Description
event      | string or object       | No       | ""      | The log event content: a text string or structured object
level      | string                 | No       | "info"  | Log level: debug, info, warn, error, fatal
timestamp  | string or number       | No       | now     | Event timestamp as ISO 8601 string or Unix timestamp; auto-generated if omitted
index      | string                 | No       | "main"  | Target index for the event
source     | string                 | No       | ""      | Source identifier (e.g., service name, hostname)
sourcetype | string                 | No       | ""      | Source type for categorization (e.g., 'application', 'nginx_access')
host       | string                 | No       | ""      | Originating host name
attributes | Record<string, string> | No       | {}      | Arbitrary key-value pairs for structured data (searchable via LPQL)

Single Log

POST /api/v1/logs

Send a single log event as a JSON object in the request body.

Request body
{
  "event": "User login successful",
  "level": "info",
  "timestamp": "2026-03-21T10:30:00.000Z",
  "index": "production",
  "source": "auth-service",
  "sourcetype": "application",
  "host": "web-01",
  "attributes": {
    "user_id": "usr_abc123",
    "ip_address": "192.168.1.100",
    "method": "POST",
    "path": "/api/auth/login"
  }
}
Response (200)
{
  "data": {
    "accepted": 1,
    "rejected": 0,
    "timestamp": "2026-03-21T10:30:00.000Z",
    "quotaStatus": {
      "plan": "starter",
      "limitMB": 1024,
      "usedMB": 42.5,
      "percent": 4.15,
      "blocked": false
    }
  }
}

Batch Ingest

POST /api/v1/logs

Send multiple log events at once as a JSON array. Batch ingestion is more efficient for high-volume logging — fewer HTTP requests and lower overhead.

Batch request body
[
  {
    "event": "GET /api/users 200 12ms",
    "level": "info",
    "source": "nginx",
    "sourcetype": "access_log",
    "attributes": { "status": "200", "duration_ms": "12" }
  },
  {
    "event": "Database connection timeout",
    "level": "error",
    "source": "api-server",
    "sourcetype": "application",
    "attributes": { "db": "postgres", "timeout_ms": "5000" }
  }
]
Note: Batch ingestion supports partial success. If some log events in a batch fail validation, the valid events are still ingested; the response reports the accepted and rejected counts.
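The accepted/rejected counts can be checked after each batch. The sketch below is illustrative; `check_partial_success` is a hypothetical helper built only on the documented response shape (`data.accepted` / `data.rejected`):

```python
def check_partial_success(response_json, sent_count):
    """Return True when every event in the batch was ingested.

    Relies on the documented response shape: data.accepted / data.rejected.
    """
    data = response_json["data"]
    if data["rejected"] > 0:
        print(f"{data['rejected']} of {sent_count} events failed validation")
    return data["accepted"] == sent_count

# Example against the documented response shape
resp = {"data": {"accepted": 1, "rejected": 1}}
check_partial_success(resp, 2)  # prints a warning, returns False
```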

Vector Ingest

POST /api/v1/ingest/vector

A dedicated endpoint optimized for the Vector log agent. It accepts the same JSON payload format but includes additional processing for Vector-specific metadata fields.

Tip: Use this endpoint with Vector's HTTP sink for optimal compatibility. See the Vector Agent documentation for full configuration details.

Query Logs

GET /api/v1/logs

Search and retrieve log events using LPQL queries. Results are returned in reverse chronological order by default.

Parameter  | Type   | Description
query      | string | LPQL query string (e.g., level="error" source="api")
level      | string | Filter results by log level
source     | string | Filter results by source
sourcetype | string | Filter results by sourcetype
from       | string | Start of time range (ISO 8601)
to         | string | End of time range (ISO 8601)
limit      | number | Maximum number of results to return (default: 100, max: 10000)
offset     | number | Number of results to skip for pagination
Example request
GET /api/v1/logs?query=timeout&level=error&source=api-server&from=2026-03-20T00:00:00Z&limit=100
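The same request can be issued from Python with requests. This is a sketch, not an official client; `build_query_params` and `search_logs` are illustrative helper names:

```python
import requests

BASE_URL = "https://api.logpulse.io"
API_KEY = "YOUR_API_KEY"

def build_query_params(query, limit=100, offset=0, **filters):
    """Assemble query-string parameters, dropping unset optional filters."""
    params = {"query": query, "limit": limit, "offset": offset}
    params.update({k: v for k, v in filters.items() if v is not None})
    return params

def search_logs(query, **filters):
    """GET /api/v1/logs with the given LPQL query and optional filters."""
    resp = requests.get(
        f"{BASE_URL}/api/v1/logs",
        params=build_query_params(query, **filters),
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()
    return resp.json()

# Usage: errors from api-server since March 20.
# "from" is a Python keyword, so pass it via a dict:
# search_logs("timeout", level="error", source="api-server",
#             **{"from": "2026-03-20T00:00:00Z"})
```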

Recent Logs

GET /api/v1/logs/recent

Quickly fetch the most recent log events without specifying a query. Useful for tail-like functionality and dashboard widgets.

Rate Limits

The API enforces rate limits to ensure fair usage and platform stability. Limits are applied per API key.

Endpoint              | Limit           | Window
/api/v1/logs (POST)   | 10,000 requests | Per minute
/api/v1/ingest/vector | 10,000 requests | Per minute
/api/v1/logs (GET)    | 100 requests    | Per minute

Rate limit information is included in response headers:

Rate limit response headers
X-RateLimit-Limit: 10000
X-RateLimit-Remaining: 9542
X-RateLimit-Reset: 1711018860
Warning: When a rate limit is exceeded, the API returns a 429 status code. Implement exponential backoff in your client to handle rate limiting gracefully.
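A retry delay can be derived from these headers. The sketch below is a hypothetical helper, not part of any official client: it honors X-RateLimit-Reset (a Unix epoch) when available, and otherwise falls back to capped exponential backoff with jitter:

```python
import random

def backoff_delay(attempt, reset_epoch=None, now=None, cap=60.0):
    """Seconds to wait before retrying after a 429.

    Prefer the X-RateLimit-Reset header (Unix epoch seconds) when the
    current time is known; otherwise use exponential backoff with jitter.
    """
    if reset_epoch is not None and now is not None:
        return max(0.0, reset_epoch - now)
    return min(cap, (2 ** attempt) + random.random())

# Usage: sleep for backoff_delay(attempt) between retries,
# or backoff_delay(attempt, reset_epoch=..., now=time.time())
# when the X-RateLimit-Reset header is present.
```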

Quotas

Each organization has a daily data ingestion quota based on their plan tier. The quota is measured in megabytes of raw log data per day and resets at midnight UTC.

Threshold | Behavior
< 80%     | Normal operation: logs are ingested without restrictions
80–100%   | Warning threshold: a quota warning is included in API responses
100%      | Quota exceeded: new log ingestion is blocked until the next day

Every ingest response includes a quotaStatus object with plan, limitMB, usedMB, percent, and blocked fields so you can monitor your consumption.
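These thresholds can be checked against the quotaStatus object in each ingest response. `quota_state` is a hypothetical helper based only on the documented fields and thresholds:

```python
def quota_state(quota_status):
    """Classify a quotaStatus object against the documented thresholds."""
    if quota_status.get("blocked") or quota_status["percent"] >= 100:
        return "blocked"
    if quota_status["percent"] >= 80:
        return "warning"
    return "ok"

# Using the quotaStatus from the single-log response example above:
status = {"plan": "starter", "limitMB": 1024, "usedMB": 42.5,
          "percent": 4.15, "blocked": False}
quota_state(status)  # returns "ok"
```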

Error Handling

The API uses standard HTTP status codes and returns structured JSON error responses with details about what went wrong.

Status Code | Meaning                                    | Action
400         | Bad Request: invalid JSON or schema        | Check the request body against the log schema; ensure field names and types match (e.g., event is a string or object).
401         | Unauthorized: invalid or missing API key   | Verify the Authorization header contains a valid Bearer token.
413         | Payload Too Large: body exceeds 10 MB      | Split the batch into smaller chunks (recommended: < 5 MB per request).
429         | Too Many Requests: rate limit exceeded     | Implement exponential backoff. Check the X-RateLimit-Reset header for retry timing.
500         | Internal Server Error                      | Retry with backoff. If persistent, contact [email protected].
Error response format
{
  "error": "Validation failed",
  "code": "VALIDATION_ERROR",
  "message": "Invalid log entry format"
}
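The status-code table can be condensed into a small retry/fail classifier. `handle_response` is a hypothetical helper; it simply echoes whatever code value the error body carries (only VALIDATION_ERROR appears in the documented format):

```python
RETRYABLE = {429, 500}  # per the table: back off and retry these

def handle_response(status_code, body):
    """Map a status code and parsed JSON body to a (decision, code) pair."""
    if status_code == 200:
        return ("ok", None)
    code = body.get("code")
    if status_code in RETRYABLE:
        return ("retry", code)
    return ("fail", code)

# Usage with the documented error response format:
handle_response(400, {"error": "Validation failed",
                      "code": "VALIDATION_ERROR",
                      "message": "Invalid log entry format"})
# returns ("fail", "VALIDATION_ERROR")
```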

OTLP Ingest

LogPulse natively accepts OpenTelemetry logs via OTLP/HTTP. Send logs to the standard OTLP endpoint using either protobuf or JSON encoding. This is the recommended ingestion method for teams already using OpenTelemetry collectors.

POST /v1/logs

Parameter               | Value
Endpoint                | https://api.logpulse.io/v1/logs
Content-Type (protobuf) | application/x-protobuf
Content-Type (JSON)     | application/json
Encoding                | gzip recommended (Content-Encoding: gzip)
Authentication          | Bearer token via Authorization header
Rate limit              | 10,000 requests/min per organization
Tip: Use the OpenTelemetry Collector as a local aggregator to batch, compress, and forward logs to LogPulse. See the Kubernetes integration docs for a complete Helm chart setup with the OTel Collector DaemonSet.

Schema Mapping

OTLP log records are mapped to LogPulse fields as follows:

OTLP Field                    | LogPulse Field | Notes
Body                          | event          | Log message body (string or structured)
SeverityText / SeverityNumber | level          | Mapped to debug/info/warn/error/fatal
TimeUnixNano                  | timestamp      | Nanosecond precision preserved
Resource attributes           | attributes.*   | Merged into attributes with resource. prefix
Scope attributes              | attributes.*   | Merged into attributes with scope. prefix
Log record attributes         | attributes.*   | Stored directly as attributes
Resource service.name         | source         | Used as the log source identifier
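The mapping can be illustrated with a minimal OTLP/JSON payload. This is a sketch of the standard OTLP/HTTP JSON encoding; `otlp_log_record` is a hypothetical helper, and the inline comments note which LogPulse field each OTLP field maps to:

```python
def _kv(key, value):
    """OTLP JSON key-value attribute (string values only, for brevity)."""
    return {"key": key, "value": {"stringValue": value}}

def otlp_log_record(message, severity_text, time_unix_nano, service_name, attrs=None):
    """Build a minimal OTLP/JSON body for POST /v1/logs."""
    return {
        "resourceLogs": [{
            "resource": {
                "attributes": [_kv("service.name", service_name)],  # -> source
            },
            "scopeLogs": [{
                "logRecords": [{
                    "timeUnixNano": str(time_unix_nano),   # -> timestamp
                    "severityText": severity_text,          # -> level
                    "body": {"stringValue": message},       # -> event
                    "attributes": [_kv(k, v) for k, v in (attrs or {}).items()],
                }],
            }],
        }],
    }

# Usage: POST this dict as JSON (Content-Type: application/json)
# to https://api.logpulse.io/v1/logs with your Bearer token.
```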

Schema Conventions

Following consistent naming conventions for attributes improves query performance and makes your logs easier to search. LogPulse recommends alignment with OpenTelemetry semantic conventions.

Convention               | Example                            | Description
Dot-separated namespaces | k8s.pod.name, http.method          | Group related attributes under a common prefix
Lowercase snake_case     | error_type, user_id                | Consistent casing for attribute keys
Reserved prefixes        | k8s.*, cloud.*, host.*, service.*  | Aligned with OTel semantic conventions; used for built-in filters
Note: Individual log events should not exceed 1 MB. Events larger than 1 MB are rejected with HTTP 413. The maximum request body size is 10 MB (can contain multiple events). See Platform Limits for all documented limits.
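To stay under these limits, a large batch can be split by serialized size before sending. `chunk_batch` is a hypothetical helper; it approximates each event's on-the-wire size and targets the recommended < 5 MB per request:

```python
import json

MAX_BODY_BYTES = 5 * 1024 * 1024  # recommended < 5 MB per request

def chunk_batch(logs, max_bytes=MAX_BODY_BYTES):
    """Split a list of log dicts so each JSON-encoded chunk stays under max_bytes."""
    chunks, current, size = [], [], 2  # 2 bytes for the enclosing []
    for log in logs:
        # +2 approximates the ", " separator between array elements
        entry = len(json.dumps(log).encode()) + 2
        if current and size + entry > max_bytes:
            chunks.append(current)
            current, size = [], 2
        current.append(log)
        size += entry
    if current:
        chunks.append(current)
    return chunks

# Usage: POST each chunk to /api/v1/logs as its own batch request.
```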

Sampling & Redaction (Planned)

Server-side sampling and PII redaction are planned features that will allow you to control log volume and protect sensitive data at the ingestion layer.

Feature              | Status  | Description
Server-side sampling | Planned | Define rules to keep only a percentage of matching logs (e.g., keep 10% of debug logs)
PII redaction        | Planned | Regex-based field masking applied at ingest time (e.g., mask email addresses, credit card numbers)
Attribute filtering  | Planned | Drop specific attributes before storage to reduce cardinality and storage costs

In the meantime, you can achieve similar results using Visual ETL Pipelines. The redactMask node supports regex-based PII masking, and condition nodes can filter or sample logs before they reach storage.

Code Examples

Here are complete examples for sending logs to LogPulse from popular languages and tools:

cURL

Send a single log
curl -X POST https://api.logpulse.io/api/v1/logs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "event": "Payment processed successfully",
    "level": "info",
    "source": "payment-service",
    "attributes": {
      "amount": "49.99",
      "currency": "EUR",
      "order_id": "ord_xyz789"
    }
  }'

Python

python — requests
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.logpulse.io"

# Send a batch of logs
logs = [
    {
        "event": "User signed up",
        "level": "info",
        "source": "auth-service",
        "attributes": {"user_id": "usr_001", "plan": "growth"}
    },
    {
        "event": "Welcome email sent",
        "level": "info",
        "source": "email-service",
        "attributes": {"user_id": "usr_001", "template": "welcome"}
    }
]

response = requests.post(
    f"{BASE_URL}/api/v1/logs",
    json=logs,
    headers={"Authorization": f"Bearer {API_KEY}"}
)

print(response.json())
# {"data": {"accepted": 2, "rejected": 0, "timestamp": "...", "quotaStatus": {...}}}

Node.js

node.js — fetch
const API_KEY = process.env.LOGPULSE_API_KEY;

async function sendLog(event, level = 'info', attributes = {}) {
  const response = await fetch('https://api.logpulse.io/api/v1/logs', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      event,
      level,
      source: 'my-node-app',
      sourcetype: 'application',
      attributes,
    }),
  });

  if (!response.ok) {
    throw new Error(`LogPulse API error: ${response.status}`);
  }

  return response.json();
}

// Usage
await sendLog('Order created', 'info', { order_id: 'ord_123' });

Go

go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

type LogEntry struct {
    Event      string            `json:"event"`
    Level      string            `json:"level,omitempty"`
    Source     string            `json:"source,omitempty"`
    Sourcetype string            `json:"sourcetype,omitempty"`
    Attributes map[string]string `json:"attributes,omitempty"`
}

func SendLog(entry LogEntry) error {
    body, err := json.Marshal(entry)
    if err != nil {
        return err
    }

    req, err := http.NewRequest("POST",
        "https://api.logpulse.io/api/v1/logs",
        bytes.NewBuffer(body))
    if err != nil {
        return err
    }

    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization",
        "Bearer "+os.Getenv("LOGPULSE_API_KEY"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("unexpected status: %d", resp.StatusCode)
    }
    return nil
}
