Ship Your First Log in 5 Minutes

This quickstart walks you through everything you need to go from zero to a working LogPulse setup. By the end, you will have accomplished three things: ingested log data into LogPulse, searched those logs using LPQL, and configured your first alert rule.

The entire process takes about five minutes. No agents or SDKs are required for this initial setup -- a simple HTTP request is all you need to start sending logs.

Prerequisites

Before you begin, make sure you have the following:

| Requirement | Details |
| --- | --- |
| LogPulse account | Sign up at app.logpulse.io -- the free tier includes 100 MB/day ingestion and 7-day retention. |
| API key | You will create one in Step 1. Requires an active LogPulse account. |
| HTTP client | curl (pre-installed on macOS and most Linux distributions), or any HTTP client such as Postman, httpie, or wget. |
Tip
If you prefer to use a language-specific SDK, skip ahead to the Code Examples section for ready-to-use snippets in Python and Node.js.

Step 1: Get Your API Key

Your API key authenticates all requests to the LogPulse ingestion and query APIs. Each key can be scoped to specific permissions (ingest-only, read-only, or full access).

To create your first API key:

1. Log in to your LogPulse dashboard at app.logpulse.io.

2. Navigate to Integrations, then select HTTP API from the left sidebar.

3. Click Create API Key.

4. Give the key a descriptive name (for example, "quickstart-test") and select Full Access as the scope.

5. Click Create. Copy the key immediately -- it will not be shown again.

Warning
Store your API key securely. Do not commit it to version control or share it in plaintext. For production use, store it in environment variables or a secrets manager.
Set your API key as an environment variable
export LOGPULSE_API_KEY="lp_your_api_key_here"

Step 2: Send Your First Log

Send a single log entry to LogPulse using the HTTP ingestion API. The endpoint accepts JSON payloads with a timestamp, severity level, event, source identifier, and optional attributes. All fields are optional with sensible defaults.

cURL -- Send a single log
curl -X POST https://api.logpulse.io/api/v1/logs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LOGPULSE_API_KEY" \
  -d '{
    "timestamp": "2026-03-21T10:15:30.123Z",
    "level": "info",
    "event": "User login successful",
    "source": "auth-service",
    "attributes": {
      "user_id": "usr_8a3b2c1d",
      "ip_address": "192.168.1.42",
      "method": "oauth2",
      "region": "us-east-1"
    }
  }'

A successful response returns HTTP 200 with a JSON body containing the ingestion summary:

Response
{
  "data": {
    "accepted": 1,
    "rejected": 0,
    "timestamp": "2026-03-21T10:15:30.456Z",
    "quotaStatus": {
      "plan": "starter",
      "limitMB": 1024,
      "usedMB": 42.5,
      "percent": 4.15,
      "blocked": false
    }
  }
}
Note
Logs are available for search within 2-5 seconds of ingestion. If you do not see your log immediately, wait a few seconds and refresh the Log Explorer.
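
Every ingestion response carries the quotaStatus block shown above, so you can track usage programmatically. The following is a small sketch of a helper (illustrative, not part of any LogPulse SDK) that inspects the block and warns as you approach the daily limit:

```python
def check_quota(response_body, warn_percent=80.0):
    """Inspect the quotaStatus block of an ingestion response.

    Returns a short status string; raises if ingestion is blocked.
    """
    quota = response_body["data"]["quotaStatus"]
    if quota["blocked"]:
        raise RuntimeError(f"Ingestion blocked: quota exhausted on plan '{quota['plan']}'")
    if quota["percent"] >= warn_percent:
        return f"warning: {quota['usedMB']} of {quota['limitMB']} MB used ({quota['percent']}%)"
    return f"ok: {quota['percent']}% of daily quota used"

# Example using the response shown above
body = {"data": {"accepted": 1, "rejected": 0,
                 "quotaStatus": {"plan": "starter", "limitMB": 1024,
                                 "usedMB": 42.5, "percent": 4.15, "blocked": False}}}
print(check_quota(body))  # ok: 4.15% of daily quota used
```

Calling this after each batch lets you surface quota pressure in your own monitoring before ingestion is blocked.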

Step 3: Send a Batch of Logs

For better throughput, you can send multiple logs in a single request using the same endpoint. Instead of a single object, send an array of log entries. The request body limit is 10 MB of uncompressed JSON.

cURL -- Send a batch of logs
curl -X POST https://api.logpulse.io/api/v1/logs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LOGPULSE_API_KEY" \
  -d '[
      {
        "timestamp": "2026-03-21T10:16:00.000Z",
        "level": "error",
        "event": "Database connection timeout after 30s",
        "source": "order-service",
        "attributes": {
          "db_host": "db-primary.internal",
          "timeout_ms": "30000",
          "retry_count": "3"
        }
      },
      {
        "timestamp": "2026-03-21T10:16:01.000Z",
        "level": "warn",
        "event": "Falling back to read replica",
        "source": "order-service",
        "attributes": {
          "db_host": "db-replica-1.internal",
          "fallback_reason": "primary_timeout"
        }
      },
      {
        "timestamp": "2026-03-21T10:16:02.500Z",
        "level": "info",
        "event": "Order processed successfully via replica",
        "source": "order-service",
        "attributes": {
          "order_id": "ord_9f8e7d6c",
          "processing_time_ms": "245"
        }
      }
    ]'

The batch response includes a summary with the count of accepted and rejected entries:

Batch response
{
  "data": {
    "accepted": 3,
    "rejected": 0,
    "timestamp": "2026-03-21T10:16:02.800Z",
    "quotaStatus": {
      "plan": "starter",
      "limitMB": 1024,
      "usedMB": 42.5,
      "percent": 4.15,
      "blocked": false
    }
  }
}
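
Step 4: Search Your Logs

Verify that the logs you just sent are searchable. The steps below are a minimal sketch -- the exact Log Explorer layout may differ, and the filter uses the same LPQL form as the alert condition in Step 5.

1. Navigate to Log Explorer in the left sidebar.

2. Enter the following LPQL filter and run the search:

LPQL -- Find recent errors from order-service
```
source=order-service level=error
```

3. The "Database connection timeout" entry from Step 3 should appear within a few seconds. Remove the level filter to see all three batch entries.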

Step 5: Set Up Your First Alert

Configure an alert that notifies you when error logs exceed a threshold. This ensures you are aware of issues before they impact users.

1. Navigate to Anomaly Detection in the left sidebar, then click Create Rule.

2. Set the rule name to "High Error Rate".

3. Enter the following LPQL condition:

Alert condition
level=error | stats count as error_count | where error_count > 50

4. Set the evaluation window to 5 minutes and the evaluation interval to 1 minute.

5. Under Notification Channel, select your preferred channel (email, Slack, or PagerDuty). If you have not configured a channel yet, click Add Channel and follow the setup wizard.

6. Set the severity to Warning and click Save Rule.

Note
The alert will trigger when more than 50 error-level logs are detected within any 5-minute window. You can adjust the threshold and window to match your application's normal error rate.
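
To see the rule fire, you can generate a burst of error logs above the threshold. A quick sketch that builds such a batch for the Step 3 endpoint (the count, event text, and source name are illustrative):

```python
from datetime import datetime, timezone

def build_error_burst(count=60, source="quickstart-test"):
    """Build a batch of error-level entries large enough to cross the 50-error threshold."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {
            "timestamp": now,
            "level": "error",
            "event": "Synthetic error for alert test",
            "source": source,
            "attributes": {"sequence": str(i)},
        }
        for i in range(count)
    ]

# Send the list as a single batch request, exactly as in Step 3, e.g.:
#   requests.post(LOGPULSE_URL, headers=..., json=build_error_burst())
```

Because 60 errors land within one 5-minute window, the rule should trigger on its next evaluation.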

Code Examples

Python

Python -- Send logs with requests
import os
import requests
from datetime import datetime, timezone

# Read the key from the environment variable set in Step 1
LOGPULSE_API_KEY = os.environ["LOGPULSE_API_KEY"]
LOGPULSE_URL = "https://api.logpulse.io/api/v1/logs"

def send_log(level, event, source, attributes=None):
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "event": event,
        "source": source,
        "attributes": attributes or {}
    }

    response = requests.post(
        LOGPULSE_URL,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {LOGPULSE_API_KEY}"
        },
        json=payload
    )
    response.raise_for_status()
    return response.json()

# Send an info log
result = send_log(
    level="info",
    event="Payment processed successfully",
    source="billing-service",
    attributes={
        "amount_cents": "4999",
        "currency": "USD",
        "customer_id": "cus_abc123"
    }
)
print(f"Accepted: {result['data']['accepted']}")
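
The helper above sends one entry per request; for the batch endpoint from Step 3, large lists need to stay under the 10 MB request limit. A sketch of a chunking helper (illustrative, not part of any LogPulse SDK) that splits entries into size-bounded batches:

```python
import json

MAX_BATCH_BYTES = 10 * 1024 * 1024  # uncompressed JSON request limit

def chunk_logs(logs, max_bytes=MAX_BATCH_BYTES):
    """Split log entries into batches whose serialized JSON stays under max_bytes."""
    batches, current, size = [], [], 2  # 2 bytes for the enclosing brackets
    for log in logs:
        # Compact encoding plus one byte for the separating comma
        entry = len(json.dumps(log, separators=(",", ":"))) + 1
        if current and size + entry > max_bytes:
            batches.append(current)
            current, size = [], 2
        current.append(log)
        size += entry
    if current:
        batches.append(current)
    return batches

# Each batch can then be POSTed as an array, exactly as in Step 3:
# for batch in chunk_logs(entries):
#     requests.post(LOGPULSE_URL, headers=..., json=batch)
```

The size estimate is conservative (it slightly overcounts separators), so every emitted batch serializes below the limit.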

Node.js

Node.js -- Send logs with fetch
// Read the key from the environment variable set in Step 1
const LOGPULSE_API_KEY = process.env.LOGPULSE_API_KEY;
const LOGPULSE_URL = "https://api.logpulse.io/api/v1/logs";

async function sendLog(level, event, source, attributes = {}) {
  const payload = {
    timestamp: new Date().toISOString(),
    level,
    event,
    source,
    attributes,
  };

  const response = await fetch(LOGPULSE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${LOGPULSE_API_KEY}`,
    },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(`Ingestion failed: ${response.status}`);
  }

  return response.json();
}

// Send an error log (top-level await requires an ES module context)
const result = await sendLog(
  "error",
  "Failed to connect to cache layer",
  "api-gateway",
  {
    cache_host: "redis-primary.internal",
    error_code: "ECONNREFUSED",
    retry_attempt: "1",
  }
);
console.log("Accepted:", result.data.accepted);

Next Steps

You now have a working LogPulse setup with log ingestion, search, and alerting. Here are some recommended next steps to expand your configuration:

| Topic | Description |
| --- | --- |
| LPQL Syntax Reference | Learn the full query language for advanced filtering, aggregations, and transformations. |
| HTTP API Documentation | Complete API reference for ingestion, querying, and management endpoints. |
| Vector Agent Setup | Install the Vector agent for automatic log collection from files, syslog, and other sources. |
| Kubernetes Integration | Deploy LogPulse as a DaemonSet to collect container logs from your Kubernetes cluster. |
| ETL Pipelines | Build data transformation pipelines to parse, enrich, and route logs before storage. |
| Dashboard & UI Guide | Create custom dashboards with widgets, charts, and saved searches. |
| Alerting & Notifications | Configure advanced alert rules with escalation policies and multiple notification channels. |
Tip
For production deployments, we recommend using the Vector agent instead of direct HTTP ingestion. The agent handles batching, retries, backpressure, and local buffering automatically.