Ship Your First Log in 5 Minutes
This quickstart walks you through everything you need to go from zero to a working LogPulse setup. By the end, you will have accomplished three things: ingested log data into LogPulse, searched those logs using LPQL, and configured your first alert rule.
The entire process takes about five minutes. No agents or SDKs are required for this initial setup -- a simple HTTP request is all you need to start sending logs.
Prerequisites
Before you begin, make sure you have the following:
| Requirement | Details |
|---|---|
| LogPulse account | Sign up at app.logpulse.io -- the free tier includes 100 MB/day ingestion and 7-day retention. |
| API key | You will create one in Step 1. Requires an active LogPulse account. |
| HTTP client | curl (pre-installed on macOS and most Linux distributions), or any HTTP client such as Postman, httpie, or wget. |
Step 1: Get Your API Key
Your API key authenticates all requests to the LogPulse ingestion and query APIs. Each key can be scoped to specific permissions (ingest-only, read-only, or full access).
To create your first API key:
1. Log in to your LogPulse dashboard at app.logpulse.io.
2. Navigate to Integrations, then select HTTP API from the left sidebar.
3. Click Create API Key.
4. Give the key a descriptive name (for example, "quickstart-test") and select Full Access as the scope.
5. Click Create. Copy the key immediately -- it will not be shown again.
Store the key in an environment variable so the examples below can reference it:

```shell
export LOGPULSE_API_KEY="lp_your_api_key_here"
```

Step 2: Send Your First Log
Send a single log entry to LogPulse using the HTTP ingestion API. The endpoint accepts JSON payloads with a timestamp, severity level, event, source identifier, and optional attributes. All fields are optional with sensible defaults.
```shell
curl -X POST https://api.logpulse.io/api/v1/logs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LOGPULSE_API_KEY" \
  -d '{
    "timestamp": "2026-03-21T10:15:30.123Z",
    "level": "info",
    "event": "User login successful",
    "source": "auth-service",
    "attributes": {
      "user_id": "usr_8a3b2c1d",
      "ip_address": "192.168.1.42",
      "method": "oauth2",
      "region": "us-east-1"
    }
  }'
```

A successful response returns HTTP 200 with a JSON body containing the ingestion summary:
```json
{
  "data": {
    "accepted": 1,
    "rejected": 0,
    "timestamp": "2026-03-21T10:15:30.456Z",
    "quotaStatus": {
      "plan": "starter",
      "limitMB": 1024,
      "usedMB": 42.5,
      "percent": 4.15,
      "blocked": false
    }
  }
}
```

Step 3: Send a Batch of Logs
For better throughput, you can send multiple logs in a single request using the same endpoint. Instead of a single object, send an array of log entries. The request body limit is 10 MB of uncompressed JSON.
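Because each request body is capped at 10 MB of uncompressed JSON, a client that buffers many entries needs to split them into multiple requests. A minimal sketch of such a chunker — the 10 MB figure comes from this guide, while the helper name and size accounting are illustrative:

```python
import json

MAX_BATCH_BYTES = 10 * 1024 * 1024  # 10 MB uncompressed-JSON limit per request


def chunk_logs(entries, max_bytes=MAX_BATCH_BYTES):
    """Split log entries into batches whose serialized size stays under max_bytes."""
    batches, current, current_size = [], [], 2  # 2 bytes for the surrounding "[]"
    for entry in entries:
        size = len(json.dumps(entry).encode("utf-8")) + 1  # +1 for the separating comma
        if current and current_size + size > max_bytes:
            batches.append(current)
            current, current_size = [], 2
        current.append(entry)
        current_size += size
    if current:
        batches.append(current)
    return batches


# Three small entries easily fit in a single batch
entries = [
    {"level": "info", "event": f"event {i}", "source": "demo"} for i in range(3)
]
print(len(chunk_logs(entries)))  # 1
```

Each resulting batch can then be POSTed as its own array, exactly as in the curl example that follows.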
```shell
curl -X POST https://api.logpulse.io/api/v1/logs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LOGPULSE_API_KEY" \
  -d '[
    {
      "timestamp": "2026-03-21T10:16:00.000Z",
      "level": "error",
      "event": "Database connection timeout after 30s",
      "source": "order-service",
      "attributes": {
        "db_host": "db-primary.internal",
        "timeout_ms": "30000",
        "retry_count": "3"
      }
    },
    {
      "timestamp": "2026-03-21T10:16:01.000Z",
      "level": "warn",
      "event": "Falling back to read replica",
      "source": "order-service",
      "attributes": {
        "db_host": "db-replica-1.internal",
        "fallback_reason": "primary_timeout"
      }
    },
    {
      "timestamp": "2026-03-21T10:16:02.500Z",
      "level": "info",
      "event": "Order processed successfully via replica",
      "source": "order-service",
      "attributes": {
        "order_id": "ord_9f8e7d6c",
        "processing_time_ms": "245"
      }
    }
  ]'
```

The batch response includes a summary with the count of accepted and rejected entries:
```json
{
  "data": {
    "accepted": 3,
    "rejected": 0,
    "timestamp": "2026-03-21T10:16:02.800Z",
    "quotaStatus": {
      "plan": "starter",
      "limitMB": 1024,
      "usedMB": 42.5,
      "percent": 4.15,
      "blocked": false
    }
  }
}
```

Step 4: Search Your Logs
Now that you have ingested some logs, open the Log Explorer to search and analyze them. LogPulse uses LPQL (LogPulse Query Language) for searching, filtering, and aggregating log data.
1. Navigate to Log Explorer in the left sidebar of your LogPulse dashboard.
2. In the search bar, type the following LPQL query to find error-level logs:
```
level=error | head 10
```

3. Press Enter or click Run Query. You should see the database timeout error from Step 3.
Here are a few more useful queries to try:
```
source=order-service level=error | timechart count by level
attributes.order_id="ord_9f8e7d6c"
"connection timeout" | stats count by source | sort -count
```

Step 5: Set Up Your First Alert
Configure an alert that notifies you when error logs exceed a threshold. This ensures you are aware of issues before they impact users.
1. Navigate to Anomaly Detection in the left sidebar, then click Create Rule.
2. Set the rule name to "High Error Rate".
3. Enter the following LPQL condition:
```
level=error | stats count as error_count | where error_count > 50
```

4. Set the evaluation window to 5 minutes and the evaluation interval to 1 minute.
5. Under Notification Channel, select your preferred channel (email, Slack, or PagerDuty). If you have not configured a channel yet, click Add Channel and follow the setup wizard.
6. Set the severity to Warning and click Save Rule.
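To see the rule fire, you can send a burst of more than 50 error-level entries within one evaluation window. A sketch that only builds such a batch — the `alert-test` source name and event text are illustrative; POST the resulting array to the ingestion endpoint as in Step 3:

```python
import json
from datetime import datetime, timezone

# Build 60 error logs -- enough to cross the error_count > 50 threshold
burst = [
    {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "error",
        "event": f"Synthetic error {i} for alert testing",
        "source": "alert-test",
    }
    for i in range(60)
]

body = json.dumps(burst)
print(len(burst), "entries,", len(body), "bytes")
```

Within a minute or two of ingestion, the "High Error Rate" rule should evaluate above its threshold and notify your configured channel.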
Code Examples
Python
```python
import requests
from datetime import datetime, timezone

LOGPULSE_API_KEY = "lp_your_api_key_here"
LOGPULSE_URL = "https://api.logpulse.io/api/v1/logs"


def send_log(level, event, source, attributes=None):
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "event": event,
        "source": source,
        "attributes": attributes or {},
    }
    response = requests.post(
        LOGPULSE_URL,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {LOGPULSE_API_KEY}",
        },
        json=payload,
    )
    response.raise_for_status()
    return response.json()


# Send an info log
result = send_log(
    level="info",
    event="Payment processed successfully",
    source="billing-service",
    attributes={
        "amount_cents": "4999",
        "currency": "USD",
        "customer_id": "cus_abc123",
    },
)
print(f"Accepted: {result['data']['accepted']}")
```

Node.js
```javascript
const LOGPULSE_API_KEY = "lp_your_api_key_here";
const LOGPULSE_URL = "https://api.logpulse.io/api/v1/logs";

async function sendLog(level, event, source, attributes = {}) {
  const payload = {
    timestamp: new Date().toISOString(),
    level,
    event,
    source,
    attributes,
  };

  const response = await fetch(LOGPULSE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${LOGPULSE_API_KEY}`,
    },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(`Ingestion failed: ${response.status}`);
  }
  return response.json();
}

// Send an error log (top-level await requires an ES module context)
const result = await sendLog(
  "error",
  "Failed to connect to cache layer",
  "api-gateway",
  {
    cache_host: "redis-primary.internal",
    error_code: "ECONNREFUSED",
    retry_attempt: "1",
  }
);
console.log("Accepted:", result.data.accepted);
```

Next Steps
You now have a working LogPulse setup with log ingestion, search, and alerting. Here are some recommended next steps to expand your configuration:
| Topic | Description |
|---|---|
| LPQL Syntax Reference | Learn the full query language for advanced filtering, aggregations, and transformations. |
| HTTP API Documentation | Complete API reference for ingestion, querying, and management endpoints. |
| Vector Agent Setup | Install the Vector agent for automatic log collection from files, syslog, and other sources. |
| Kubernetes Integration | Deploy LogPulse as a DaemonSet to collect container logs from your Kubernetes cluster. |
| ETL Pipelines | Build data transformation pipelines to parse, enrich, and route logs before storage. |
| Dashboard & UI Guide | Create custom dashboards with widgets, charts, and saved searches. |
| Alerting & Notifications | Configure advanced alert rules with escalation policies and multiple notification channels. |