
LPQL Syntax Reference

LPQL (LogPulse Query Language) is a powerful search language for querying, filtering, and analyzing your log data. It supports field expressions, boolean logic, time ranges, and a rich set of pipe commands for transformation and aggregation.

Overview

An LPQL query begins with an optional search expression, followed by optional time range modifiers and zero or more pipe commands separated by the pipe character (|).

Query structure
[search] <expression> [earliest=<time>] [latest=<time>] [| command1] [| command2] ...

Here are some common query patterns:

Examples
level=error
level=error source=api host=prod-*
(level=error OR level=fatal) source=payment earliest=-1h
level=error | stats count by host | sort -count | head 10

Search Expressions

Field Expressions

Field expressions compare a field name to a value using comparison operators. Field names can contain letters, digits, underscores, and dots.

Operator   Description       Example
= or ==    Exact match       level=error
!=         Not equal         status!=200
>          Greater than      duration>1000
<          Less than         status<400
>=         Greater or equal  host>=prod-01
<=         Less or equal     latency<=100

level=error
status!=200
duration>1000
host>=prod-01
field="quoted value"

Values support wildcards with * for pattern matching:

source=api-*       # starts with "api-"
host=*prod*        # contains "prod"
field=*            # field exists and is non-empty
field!=*           # field does not exist or is empty

Boolean Logic

Combine expressions with AND, OR, and NOT operators. Spaces between expressions act as implicit AND. Use parentheses to control evaluation order. Operator precedence (lowest to highest): OR, AND, NOT.

# Implicit AND (space-separated)
level=error source=api

# Explicit operators
level=error AND source=api
level=error OR level=warn
NOT debug

# Grouping with parentheses
(level=error OR level=fatal) AND source=api

# Operator precedence: NOT binds tightest, then AND, then OR,
# so this parses as: level=error OR (level=warn AND NOT debug)
level=error OR level=warn NOT debug

IN Operator

The IN operator matches a field against a list of values. Values can include wildcards.

level IN (error, fatal, critical)
source IN ("api-gateway", "web-server")
host IN (prod-*, dev-*)

Time Ranges

Use earliest and latest to limit the time window. Supports relative time offsets and absolute ISO 8601 timestamps.

Unit  Meaning  Example
m     Minutes  earliest=-15m
h     Hours    earliest=-1h
d     Days     earliest=-7d
w     Weeks    earliest=-1w
M     Months   earliest=-1M
y     Years    earliest=-1y

# Relative time
earliest=-1h
earliest=-15m latest=now

# Absolute time
earliest=2024-01-01T00:00:00Z latest=2024-12-31T23:59:59Z

# Combined with search
earliest=-1h error status>500
level=error earliest=-24h latest=-1h

Known Fields

LogPulse recognizes these built-in fields that map directly to stored columns:

Field       Description
event       Log message / event text
level       Log level (debug, info, warn, error, fatal)
index       Index name
source      Source / file name
sourcetype  Source type
host        Hostname
timestamp   Event timestamp
cluster     Kubernetes cluster
namespace   Kubernetes namespace
pod         Kubernetes pod
container   Kubernetes container
node        Kubernetes node

Dynamic fields are stored in map columns and accessed via dot notation:

# Map columns use dot notation
labels.app="my-service"
annotations.owner="platform-team"
attributes.status="active"
parsed_fields.request_id="abc-123"

JSON Paths

Events often contain nested objects and arrays. JSON paths let you walk into those structures anywhere a field name is accepted — in search expressions, where, eval, stats, timechart, table, and more. Paths combine dot notation with bracket syntax:

Syntax     Meaning                    Example
a.b.c      Walk into nested objects   user.profile.country
a[N]       Array index (0-based)      io_data[0].value
a[*]       Any array item             io_data[*].name
a[k="v"]   Filter array by predicate  io_data[io="239"].value

Use them directly as a field in search expressions and where filters:

Search / where filters
# Filter events by a nested value
io_data[io="239"].name=Movement
user.profile.country=NL

# Combine with boolean logic
level=error AND io_data[io="66"].value > 0

# In the where pipe (eval expressions)
| where io_data[io="66"].value > 0
| where user.profile.country != "NL"

In eval, a JSON path behaves like any other field — read from it, compute with it, alias it:

eval — compute new fields from nested values
# Extract a nested value into a top-level field
| eval voltage = io_data[io="66"].value

# Compute with it
| eval voltage_ok = if(io_data[io="66"].value >= 12, "ok", "low")

# Combine paths from multiple sources
| eval full = user.profile.country . "-" . io_data[0].name

Aggregate and group by nested values with stats and timechart:

stats / timechart — aggregate nested values
# Latest voltage reading per device
| stats latest(io_data[io="66"].value) as voltage by device_id

# Average temperature over time
| timechart span=1h avg(sensors[type="temp"].reading)

# Group by a nested value
| stats count by user.profile.country

Project nested values as output columns with table:

table — display nested values as columns
# Project nested values as columns
| table timestamp, device_id, io_data[io="66"].value

# Mix with dot-notation and regular fields
| table host, user.profile.country, status

Comparison against a number (e.g. value < 12) automatically coerces the extracted string to a number. Predicate filters like [io="239"] match string equality on the field inside each array item.
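The coercion rule means the same nested value can feed both string and numeric logic. A minimal sketch, reusing the io_data fields from the examples above (the threshold of 12 is illustrative):

# "value" is stored as a string; the numeric comparison coerces it
| eval low_voltage = if(io_data[io="66"].value < 12, "yes", "no")

# The predicate itself stays a string-equality match
| where io_data[io="66"].name == "voltage"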

Pipe Commands

Pipe commands transform, filter, and aggregate search results. Chain multiple commands with the pipe (|) character.

stats

Aggregates results by one or more fields, computing statistical functions.

| stats count by level
| stats avg(duration) as avg_dur by source
| stats count, sum(amount), avg(price) by level, host
| stats dc(host) as unique_hosts by level
| stats latest(io_data[io="66"].value) as voltage by device_id

Available aggregation functions:

Category     Functions
Count        count, c
Average      avg, mean
Sum          sum, sumsq
Min/Max      min, max
Distinct     dc, distinct_count
Statistical  median, mode, stdev, stdevp, var, variance, range
Selection    values, list, first, last, latest, earliest
Percentile   perc50, p95, exactperc99, upperperc75
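Functions from different categories can be mixed in a single stats call, each with its own alias. A sketch, assuming a numeric duration field exists in your events:

| stats p95(duration) as p95_dur, median(duration) as med_dur by host
| stats values(source) as sources, dc(source) as source_count by level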

timechart

Creates time-bucketed aggregations for charting. Similar to stats but automatically groups by time intervals.

| timechart count
| timechart span=5m count
| timechart span=1h count by level
| timechart span=1h cont=true avg(duration) by source

Available options:

Option  Description             Example
span    Time bucket size        span=5m, span=1h, span=1d
cont    Fill gaps with zeros    cont=true (default)
by      Split by field (max 1)  by level

eval

Computes new fields from expressions. Supports arithmetic, string concatenation, comparisons, and a rich set of built-in functions.

| eval ratio = count / total
| eval msg = "Status: " . status
| eval is_error = (status >= 500)
| eval x=1, y=2, z=x+y
| eval voltage = io_data[io="66"].value

Category    Operators
Arithmetic  + - * / %
Comparison  == != < > <= >= LIKE
Boolean     AND OR NOT XOR
String      . (concatenation)
Unary       - (negation)
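A sketch of the less common operators from the table; the field names here (is_retry, is_replay) are illustrative, and XOR is assumed to be infix like AND/OR:

| eval remainder = duration % 60
| eval exactly_one = (is_retry XOR is_replay)
| eval looks_prod = host LIKE "prod%"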

where

Filters results using eval expressions. Only events where the expression evaluates to true are kept.

| where status > 400
| where level == "error" OR level == "fatal"
| where duration/1000 > 5
| where like(source, "api%")
| where io_data[io="66"].value < 12

table

Selects specific fields to display in the output. Supports wildcard patterns for field names.

| table level, event, host
| table timestamp, source_*, host
| table *
| table timestamp, device_id, io_data[io="66"].value

top / rare

top shows the most common values of a field, rare shows the least common. Both support a limit, percentage display toggle, and split-by fields.

| top level
| top 5 host
| top limit=10 showperc=false source
| top level by host

| rare level
| rare 5 host by source

dedup

Removes duplicate events based on one or more fields. Optionally keep a set number of duplicates or sort by a specific field.

| dedup host
| dedup 3 level
| dedup host, source
| dedup keepevents=true sortby=timestamp host

sort

Sorts results by one or more fields. Use - prefix for descending order, + or no prefix for ascending.

| sort timestamp        # ascending (default)
| sort -level           # descending
| sort limit=100 -duration timestamp

head / tail

head returns the first N results, tail returns the last N. Default is 10.

| head          # first 10 results (default)
| head 20       # first 20 results
| tail 50       # last 50 results

fields

Include (+) or exclude (-) specific fields from the output. Supports wildcard patterns.

| fields level, event, host      # include only these
| fields +source                 # include
| fields -attributes             # exclude
| fields source_*, host          # wildcard include

rename

Renames fields in the output using field as new_name syntax.

| rename old_field as new_field
| rename level as log_level, host as hostname

rex

Extracts new fields using regular expressions with named capture groups. Defaults to extracting from the event field.

| rex field=event "(?<app>\w+)-(?<env>\w+)"
| rex "(?<status_code>\d{3})"

Eval Functions Reference

These functions can be used in eval and where commands:

Category     Functions
Conditional  if(cond, a, b), case(...), coalesce(a, b, ...)
String       lower, upper, len, substr, trim, ltrim, rtrim, replace, split, urldecode, urlencode
Conversion   tostring, tonumber
Math         round, ceil, floor, abs, pow, sqrt, log, ln, exp, pi, min, max
Type checks  isnull, isnotnull, isnum, isstr, typeof, null
Time         now, strftime, strptime, relative_time, time
Pattern      match, like, searchmatch, cidrmatch
Multivalue   mvindex, mvcount, mvjoin, mvappend, mvdedup, mvsort, mvfilter, mvfind
Crypto       md5, sha1, sha256, sha512

Examples
| eval greeting = if(level=="error", "ALERT", "OK")
| eval name = lower(host)
| eval dur_sec = round(duration / 1000, 2)
| eval is_prod = like(host, "prod%")
| eval hash = md5(event)
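The conditional and type-check helpers compose the same way. A sketch combining them with JSON-path fields (labels.region is a hypothetical dynamic field):

| eval region = coalesce(labels.region, "unknown")
| eval has_user = if(isnotnull(user.profile.country), 1, 0)
| eval kind = typeof(duration)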
