Integrations

Vector Agent

Vector is a lightweight, high-performance log collection agent that ships logs from your servers, containers, and cloud services to LogPulse. Use LogPulse's built-in configuration templates to generate ready-to-use Vector configs, or write your own from scratch.

Overview

Vector by Datadog is an open-source observability data pipeline that collects, transforms, and routes log data. LogPulse uses Vector as its recommended agent for shipping logs from infrastructure and applications.

Lightweight Agent
Written in Rust, Vector uses minimal CPU and memory while handling thousands of events per second.
Config Templates
Generate ready-to-use YAML configurations from the LogPulse dashboard — no manual editing required.
Multi-Source
Collect from files, Docker, syslog, journald, Kubernetes, AWS S3, CloudWatch, Kafka, and more.

What is Vector?

Vector is an open-source, high-performance observability data pipeline built in Rust. It supports over 50 sources and sinks, making it ideal for collecting logs from diverse environments and routing them to LogPulse. Vector handles log rotation, backpressure, and disk-based buffering out of the box.

Note: Vector is developed by Datadog and licensed under the Mozilla Public License 2.0. LogPulse uses Vector's HTTP sink to receive log data — no proprietary agent required.

Installation

Vector can be installed on Linux, macOS, and as a Docker container. Choose the method that best fits your environment.

Linux

Install Vector on Linux using the official install script or package manager.

Install via script (recommended)
curl --proto '=https' --tlsv1.2 -sSfL https://sh.vector.dev | bash
Install via APT (Debian/Ubuntu)
bash -c "$(curl -L https://setup.vector.dev)"
apt-get install vector
Install via YUM (RHEL/CentOS)
bash -c "$(curl -L https://setup.vector.dev)"
yum install vector

macOS

Install via Homebrew
brew install vector

Docker

Run Vector in Docker
docker run -d \
  --name vector \
  -v /var/log:/var/log:ro \
  -v $(pwd)/vector.yaml:/etc/vector/vector.yaml:ro \
  timberio/vector:latest-alpine
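
For longer-lived container deployments, the same setup can be managed with Docker Compose. A minimal sketch — the service name and file paths are illustrative:

```yaml
# docker-compose.yml — illustrative sketch
services:
  vector:
    image: timberio/vector:latest-alpine
    restart: unless-stopped
    volumes:
      - /var/log:/var/log:ro                    # host logs, read-only
      - ./vector.yaml:/etc/vector/vector.yaml:ro
    environment:
      - LOGPULSE_API_KEY=${LOGPULSE_API_KEY}    # passed through from the host
```

Keeping the API key in the host environment (rather than the Compose file) follows the same secrets guidance as the Configuration section below.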

Configuration

Vector is configured via a YAML file that defines sources (where to collect logs), transforms (how to process them), and sinks (where to send them). The LogPulse sink uses Vector's built-in HTTP sink with bearer token authentication.

vector.yaml — Minimal Example
sources:
  app_logs:
    type: file
    include:
      - /var/log/myapp/*.log
    read_from: beginning

transforms:
  add_metadata:
    type: remap
    inputs:
      - app_logs
    source: |
      .index = "main"
      .source = "myapp"
      .sourcetype = "application"

sinks:
  logpulse:
    type: http
    inputs:
      - add_metadata
    uri: "https://api.logpulse.io/api/v1/ingest/vector"
    method: post
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: "${LOGPULSE_API_KEY}"
    batch:
      max_bytes: 1048576
      timeout_secs: 5
Warning: Never hardcode API keys in your Vector configuration. Use environment variables (${LOGPULSE_API_KEY}) and store secrets in your environment file or secrets manager.

Configuration Templates

LogPulse provides a template system for generating Vector configurations. Templates are managed from the dashboard and automatically include the correct sink configuration, metadata transforms, and batch settings.

Navigate to Settings → Vector Templates in the LogPulse dashboard. Create a new template, select your sources (file, docker, syslog, etc.), configure paths and filters, and download the generated YAML configuration.

Tip: Templates automatically inject your organization's metadata (template ID, version) into each log event. This enables tracking which template and version generated each log entry.

Supported Sources

Vector supports dozens of log sources. Here are the most commonly used ones with LogPulse:

File Source

The file source reads log data from files on the local filesystem. It handles log rotation, multiline events, and tracks file positions to avoid data loss.

File source
sources:
  app_logs:
    type: file
    include:
      - /var/log/myapp/*.log
      - /var/log/syslog
    exclude:
      - /var/log/myapp/*.gz
    read_from: beginning
    fingerprint:
      strategy: device_and_inode
Option        Description
include       Glob patterns for files to include (required)
exclude       Glob patterns for files to exclude
read_from     Where to start reading: 'beginning' or 'end' (default: 'end')
fingerprint   Strategy for tracking files across renames: 'device_and_inode' (Linux) or 'checksum'
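
For stack traces and other multi-line events, the file source's multiline options group continuation lines into a single event. A sketch assuming each new log line begins with an ISO-8601 timestamp:

```yaml
sources:
  app_logs:
    type: file
    include:
      - /var/log/myapp/*.log
    multiline:
      # A new event starts at a leading timestamp; anything else
      # (e.g. a stack trace line) is appended to the previous event.
      start_pattern: '^\d{4}-\d{2}-\d{2}'
      mode: halt_before
      condition_pattern: '^\d{4}-\d{2}-\d{2}'
      timeout_ms: 1000
```

Adjust the patterns to match your application's actual line prefix.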

Docker Source

Collect logs from Docker containers. Filter by container name, label, or image to target specific services.

Docker Logs source
sources:
  docker_logs:
    type: docker_logs
    include_containers:
      - "myapp-*"
    include_labels:
      - "environment=production"

Syslog Source

Receive syslog messages over TCP or UDP. Supports both RFC 3164 and RFC 5424 formats.

Syslog source
sources:
  syslog_input:
    type: syslog
    address: "0.0.0.0:514"
    mode: tcp
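
Note that binding port 514 requires root (or CAP_NET_BIND_SERVICE). When running Vector unprivileged, a high port and UDP mode work the same way — a sketch:

```yaml
sources:
  syslog_udp:
    type: syslog
    address: "0.0.0.0:5514"   # unprivileged port
    mode: udp
```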

Journald Source

Read logs from the systemd journal. Filter by unit name to collect logs from specific services.

Journald source
sources:
  journal:
    type: journald
    include_units:
      - myapp.service
      - nginx.service

Kubernetes Source

Collect container logs from a Kubernetes cluster. Automatically enriches events with pod name, namespace, and labels.

Kubernetes Logs source
sources:
  k8s_logs:
    type: kubernetes_logs
    extra_label_selector: "app=myapp"
    extra_namespace_label_selector: "environment=production"
Note: Deploy Vector as a DaemonSet to collect logs from all nodes. Use label selectors to filter which pods are collected.
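
One way to set up that DaemonSet is the official Vector Helm chart. A values.yaml sketch — chart option names assumed from the upstream chart, sink details elided:

```yaml
# values.yaml for the vector/vector Helm chart — illustrative
role: Agent                  # deploys Vector as a DaemonSet
customConfig:
  sources:
    k8s_logs:
      type: kubernetes_logs
      extra_label_selector: "app=myapp"
  # sinks: add the LogPulse HTTP sink from the Configuration section here;
  # customConfig replaces the chart's default pipeline entirely.
```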

LogPulse Sink

The LogPulse sink uses Vector's built-in HTTP sink to send logs to the LogPulse ingest API. Configure batch size, concurrency, and retry settings for optimal throughput.

LogPulse HTTP Sink
sinks:
  logpulse:
    type: http
    inputs:
      - add_metadata
    uri: "https://api.logpulse.io/api/v1/ingest/vector"
    method: post
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: "${LOGPULSE_API_KEY}"
    batch:
      max_bytes: 1048576
      timeout_secs: 5
    request:
      concurrency: 10
      rate_limit_num: 100
      retry_max_duration_secs: 30
Option                Default          Description
uri                   (required)       The LogPulse Vector ingest endpoint
auth.strategy         bearer           Bearer token authentication using your LogPulse API key
batch.max_bytes       1048576 (1 MB)   Maximum batch size in bytes before flushing
batch.timeout_secs    5                Maximum time to wait before flushing a batch
request.concurrency   10               Number of concurrent requests to LogPulse
Warning: The LogPulse ingest endpoint supports up to 10,000 requests per minute. If you're shipping high-volume logs, increase the batch size to reduce request count while staying within limits.
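
The disk-based buffering mentioned in the overview pairs well with these limits: events are persisted locally while LogPulse is unreachable instead of being dropped. A sketch — the buffer size is illustrative:

```yaml
sinks:
  logpulse:
    type: http
    # ... sink options as above ...
    buffer:
      type: disk
      max_size: 536870912   # 512 MB of on-disk buffer
      when_full: block      # apply backpressure rather than drop events
```

Use 'when_full: drop_newest' instead if you prefer shedding load over slowing upstream sources.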

Transforms

Use Vector's remap transform to add LogPulse metadata fields to your logs before shipping. These fields control how logs are indexed and displayed in the LogPulse search interface.

Add LogPulse metadata fields
transforms:
  enrich:
    type: remap
    inputs:
      - app_logs
    source: |
      .index = "production"
      .source = get_hostname!()
      .sourcetype = "nginx_access"

      # Parse structured fields
      parsed = parse_json!(.message)
      .fields.status_code = parsed.status
      .fields.method = parsed.method
      .fields.path = parsed.path
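
Transforms can also cut volume before shipping. For example, a filter transform that drops debug events — a sketch; the level field name is an assumption about your log schema:

```yaml
transforms:
  drop_debug:
    type: filter
    inputs:
      - enrich
    # VRL condition; events for which this is false are dropped
    condition: '.fields.level != "debug"'
```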

LogPulse recognizes the following metadata fields in incoming log events:

Field         Description
.index        Target index for the log event (default: 'main')
.source       The source identifier (e.g., hostname, application name)
.sourcetype   The source type for parsing and categorization (e.g., 'nginx_access', 'application')
.fields.*     Custom key-value pairs for structured data (searchable via LPQL)
.host         The originating host (auto-detected if not set)
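
Put together, a single event as the JSON codec would POST it might look like this — all values are illustrative:

```json
{
  "index": "production",
  "source": "web-01",
  "sourcetype": "nginx_access",
  "host": "web-01.internal",
  "message": "GET /health 200",
  "fields": {
    "status_code": 200,
    "method": "GET",
    "path": "/health"
  },
  "timestamp": "2024-01-15T09:30:00Z"
}
```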

Running Vector

Always validate your configuration before starting Vector. The validate command checks syntax, source connectivity, and sink configuration.

Start Vector
# Validate configuration first
vector validate /etc/vector/vector.yaml

# Run Vector
LOGPULSE_API_KEY=your_api_key_here vector -c /etc/vector/vector.yaml
Tip: Run 'vector validate' after every config change. It catches syntax errors, missing files, and unreachable endpoints before Vector starts.

Systemd Service

For production deployments, run Vector as a systemd service. This ensures Vector starts automatically on boot and restarts on failure.

/etc/systemd/system/vector.service
[Unit]
Description=Vector Log Agent
After=network-online.target
Requires=network-online.target

[Service]
Type=simple
User=vector
Group=vector
ExecStartPre=/usr/bin/vector validate /etc/vector/vector.yaml
ExecStart=/usr/bin/vector -c /etc/vector/vector.yaml
Restart=always
RestartSec=5
EnvironmentFile=/etc/default/vector

[Install]
WantedBy=multi-user.target
/etc/default/vector
LOGPULSE_API_KEY=your_api_key_here
Enable & start
sudo systemctl daemon-reload
sudo systemctl enable vector
sudo systemctl start vector
sudo systemctl status vector

Troubleshooting

Common issues and their solutions when setting up Vector with LogPulse:

No data appearing in LogPulse
  Check that the API key is correct, the sink URI is reachable, and your source files/containers are producing new log data.
401 authentication error
  Verify that your LOGPULSE_API_KEY environment variable is set and that the API key is active in the LogPulse dashboard.
High memory usage
  Reduce batch.max_bytes or request.concurrency. Check for slow sinks causing backpressure.
SELinux blocking file access
  Run 'setsebool -P vector_read_all_files 1' or add the appropriate SELinux policy for Vector.
Environment variables not substituted
  Use ${VAR_NAME} syntax (not $VAR_NAME). Ensure the variable is exported in the environment file.
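
On the last point: ${VAR} references are resolved from the process environment when Vector loads the config. A purely illustrative shell approximation of that substitution, useful for previewing what a reference will expand to:

```shell
# Illustrative only: emulate Vector's ${VAR} interpolation with sed
export LOGPULSE_API_KEY="lp_test_123"
printf '%s\n' 'token: "${LOGPULSE_API_KEY}"' \
  | sed "s/\${LOGPULSE_API_KEY}/$LOGPULSE_API_KEY/"
# prints: token: "lp_test_123"
```
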
Enable verbose logging
VECTOR_LOG=debug vector -c /etc/vector/vector.yaml
Tip: Enable debug logging with VECTOR_LOG=debug to see detailed connection, parsing, and delivery information.
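
Vector's own local API (disabled by default) is another useful diagnostic surface; enabling it allows health checks and the 'vector top' command:

```yaml
# Add to vector.yaml; then check with: curl http://127.0.0.1:8686/health
api:
  enabled: true
  address: "127.0.0.1:8686"
```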

Template API

Manage Vector configuration templates programmatically via the REST API. All endpoints require authentication.

Method   Endpoint                             Description
GET      /api/v1/vector-templates             List all configuration templates
GET      /api/v1/vector-templates/:id         Get a specific template with full YAML configuration
POST     /api/v1/vector-templates             Create a new configuration template
PUT      /api/v1/vector-templates/:id         Update an existing template
DELETE   /api/v1/vector-templates/:id         Delete a template
GET      /api/v1/vector-templates/:id/hosts   List hosts connected using this template
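
For example, listing templates is a single authenticated GET; the bearer token is your LogPulse API key:

```
GET /api/v1/vector-templates HTTP/1.1
Host: api.logpulse.io
Authorization: Bearer ${LOGPULSE_API_KEY}
```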