LPQL Syntax Reference
LPQL (LogPulse Query Language) is a powerful search language for querying, filtering, and analyzing your log data. It supports field expressions, boolean logic, time ranges, and a rich set of pipe commands for transformation and aggregation.
Overview
An LPQL query starts with an optional search expression, followed by optional time range modifiers, and zero or more pipe commands separated by the pipe character (|).
[search] <expression> [earliest=<time>] [latest=<time>] [| command1] [| command2] ...
Here are some common query patterns:
level=error
level=error source=api host=prod-*
(level=error OR level=fatal) source=payment earliest=-1h
level=error | stats count by host | sort -count | head 10
Search Expressions
Field Expressions
Field expressions compare a field name to a value using comparison operators. Field names can contain letters, digits, underscores, and dots.
| Operator | Description | Example |
|---|---|---|
| = or == | Exact match | level=error |
| != | Not equal | status!=200 |
| > | Greater than | duration>1000 |
| < | Less than | status<400 |
| >= | Greater or equal | host>=prod-01 |
| <= | Less or equal | latency<=100 |
level=error
status!=200
duration>1000
host>=prod-01
field="quoted value"
Values support wildcards with * for pattern matching:
source=api-* # starts with "api-"
host=*prod* # contains "prod"
field=* # field exists and is non-empty
field!=* # field does not exist or is empty
Text Search
Free-text search matches against the event message field. Unquoted terms are matched individually, quoted strings match exact phrases, and wildcards allow pattern matching.
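One way to picture wildcard matching (for both field values and free text) is translation to an anchored regular expression. This Python sketch is illustrative only; the actual engine may implement matching differently:

```python
import re

def wildcard_to_regex(pattern: str) -> re.Pattern:
    """Translate an LPQL-style wildcard pattern into an anchored regex.

    Literal segments are escaped; each * becomes .* (match anything).
    Illustrative sketch only, not the real implementation.
    """
    parts = pattern.split("*")
    return re.compile("^" + ".*".join(re.escape(p) for p in parts) + "$")

# source=api-*  -> starts with "api-"
assert wildcard_to_regex("api-*").match("api-gateway")
# host=*prod*   -> contains "prod"
assert wildcard_to_regex("*prod*").match("eu-prod-01")
# no match: pattern requires the "api-" prefix
assert wildcard_to_regex("api-*").match("web-server") is None
```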
error # free text search
"connection refused" # exact phrase (double quotes)
'timeout error' # exact phrase (single quotes)
*exception* # wildcard text search
Boolean Logic
Combine expressions with AND, OR, and NOT operators. Spaces between expressions act as implicit AND. Use parentheses to control evaluation order. Operator precedence (lowest to highest): OR, AND, NOT.
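Because NOT binds tighter than AND, which binds tighter than OR, a query like level=error OR level=warn NOT debug groups as level=error OR (level=warn AND (NOT debug)). A minimal Python model of that grouping, assuming the free-text term debug means "the message contains debug":

```python
def matches(event: dict) -> bool:
    """Model of: level=error OR level=warn NOT debug
    i.e. level=error OR (level=warn AND (NOT debug)).
    Free-text NOT is modeled as substring absence (assumption)."""
    return event["level"] == "error" or (
        event["level"] == "warn" and "debug" not in event["event"]
    )

assert matches({"level": "warn", "event": "slow request"})
assert not matches({"level": "warn", "event": "debug trace"})
# the left OR branch matches regardless of the message text
assert matches({"level": "error", "event": "debug trace"})
```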
# Implicit AND (space-separated)
level=error source=api
# Explicit operators
level=error AND source=api
level=error OR level=warn
NOT debug
# Grouping with parentheses
(level=error OR level=fatal) AND source=api
# Operator precedence: NOT > AND > OR
level=error OR level=warn NOT debug
IN Operator
The IN operator matches a field against a list of values. Values can include wildcards.
level IN (error, fatal, critical)
source IN ("api-gateway", "web-server")
host IN (prod-*, dev-*)
Time Ranges
Use earliest and latest to limit the time window. Both accept relative time offsets and absolute ISO 8601 timestamps.
| Unit | Meaning | Example |
|---|---|---|
| m | Minutes | earliest=-15m |
| h | Hours | earliest=-1h |
| d | Days | earliest=-7d |
| w | Weeks | earliest=-1w |
| M | Months | earliest=-1M |
| y | Years | earliest=-1y |
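The relative offsets above resolve against the query's evaluation time, roughly as in this sketch (months and years need calendar arithmetic and are omitted; the real parser may differ):

```python
import re
from datetime import datetime, timedelta, timezone

UNITS = {"m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def resolve_relative(offset: str, now: datetime) -> datetime:
    """Resolve an offset like '-15m' or '-7d' against 'now' (sketch only).
    M (months) and y (years) are left out: timedelta has no calendar units."""
    m = re.fullmatch(r"-(\d+)([mhdw])", offset)
    if not m:
        raise ValueError(f"unsupported offset: {offset}")
    return now - timedelta(**{UNITS[m.group(2)]: int(m.group(1))})

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
assert resolve_relative("-1h", now) == datetime(2024, 6, 1, 11, 0, tzinfo=timezone.utc)
assert resolve_relative("-15m", now) == datetime(2024, 6, 1, 11, 45, tzinfo=timezone.utc)
```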
# Relative time
earliest=-1h
earliest=-15m latest=now
# Absolute time
earliest=2024-01-01T00:00:00Z latest=2024-12-31T23:59:59Z
# Combined with search
earliest=-1h error status>500
level=error earliest=-24h latest=-1h
Known Fields
LogPulse recognizes these built-in fields that map directly to stored columns:
| Field | Description |
|---|---|
| event | Log message / event text |
| level | Log level (debug, info, warn, error, fatal) |
| index | Index name |
| source | Source / file name |
| sourcetype | Source type |
| host | Hostname |
| timestamp | Event timestamp |
| cluster | Kubernetes cluster |
| namespace | Kubernetes namespace |
| pod | Kubernetes pod |
| container | Kubernetes container |
| node | Kubernetes node |
Dynamic fields are stored in map columns and accessed via dot notation:
# Map columns use dot notation
labels.app="my-service"
annotations.owner="platform-team"
attributes.status="active"
parsed_fields.request_id="abc-123"
JSON Paths
Events often contain nested objects and arrays. JSON paths let you walk into those structures anywhere a field name is accepted — in search expressions, where, eval, stats, timechart, table, and more. Paths combine dot notation with bracket syntax:
| Syntax | Meaning | Example |
|---|---|---|
| a.b.c | Walk into nested objects | user.profile.country |
| a[N] | Array index (0-based) | io_data[0].value |
| a[*] | Any array item | io_data[*].name |
| a[k="v"] | Filter array by predicate | io_data[io="239"].value |
Use them directly as a field in search expressions and where filters:
# Filter events by a nested value
io_data[io="239"].name=Movement
user.profile.country=NL
# Combine with boolean logic
level=error AND io_data[io="66"].value > 0
# In the where pipe (eval expressions)
| where io_data[io="66"].value > 0
| where user.profile.country != "NL"
In eval, a JSON path behaves like any other field — read from it, compute with it, alias it:
# Extract a nested value into a top-level field
| eval voltage = io_data[io="66"].value
# Compute with it
| eval voltage_ok = if(io_data[io="66"].value >= 12, "ok", "low")
# Combine paths from multiple sources
| eval full = user.profile.country . "-" . io_data[0].name
Aggregate and group by nested values with stats and timechart:
# Latest voltage reading per device
| stats latest(io_data[io="66"].value) as voltage by device_id
# Average temperature over time
| timechart span=1h avg(sensors[type="temp"].reading)
# Group by a nested value
| stats count by user.profile.country
Project nested values as output columns with table:
# Project nested values as columns
| table timestamp, device_id, io_data[io="66"].value
# Mix with dot-notation and regular fields
| table host, user.profile.country, statusComparison against a number (e.g. value < 12) automatically coerces the extracted string to a number. Predicate filters like [io="239"] match string equality on the field inside each array item.
Pipe Commands
Pipe commands transform, filter, and aggregate search results. Chain multiple commands with the pipe (|) character.
stats
Aggregates results by one or more fields, computing statistical functions.
| stats count by level
| stats avg(duration) as avg_dur by source
| stats count, sum(amount), avg(price) by level, host
| stats dc(host) as unique_hosts by level
| stats latest(io_data[io="66"].value) as voltage by device_id
Available aggregation functions:
| Category | Functions |
|---|---|
| Count | count, c |
| Average | avg, mean |
| Sum | sum, sumsq |
| Min/Max | min, max |
| Distinct | dc, distinct_count |
| Statistical | median, mode, stdev, stdevp, var, variance, range |
| Selection | values, list, first, last, latest, earliest |
| Percentile | perc50, p95, exactperc99, upperperc75 |
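Conceptually, stats partitions events by the by-fields and reduces each group with the named functions. A minimal Python sketch of count and dc (the data here is invented for illustration):

```python
from collections import Counter

events = [
    {"level": "error", "host": "prod-01"},
    {"level": "error", "host": "prod-02"},
    {"level": "warn",  "host": "prod-01"},
]

# | stats count by level  -> one output row per distinct level
counts = Counter(e["level"] for e in events)
assert counts == {"error": 2, "warn": 1}

# | stats dc(host) as unique_hosts  -> distinct count of host values
unique_hosts = len({e["host"] for e in events})
assert unique_hosts == 2
```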
timechart
Creates time-bucketed aggregations for charting. Similar to stats but automatically groups by time intervals.
| timechart count
| timechart span=5m count
| timechart span=1h count by level
| timechart span=1h cont=true avg(duration) by source
Available options:
| Option | Description | Example |
|---|---|---|
| span | Time bucket size | span=5m, span=1h, span=1d |
| cont | Fill gaps with zeros | cont=true (default) |
| by | Split by field (max 1) | by level |
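The span bucketing amounts to flooring each event timestamp to the start of its interval, as in this sketch (the engine's exact bucket alignment is an assumption here):

```python
from datetime import datetime, timezone

def bucket(ts: datetime, span_seconds: int) -> datetime:
    """Floor a timestamp to the start of its span bucket (sketch)."""
    epoch = ts.timestamp()
    return datetime.fromtimestamp(epoch - epoch % span_seconds, tz=timezone.utc)

t = datetime(2024, 1, 1, 12, 7, 30, tzinfo=timezone.utc)
# span=5m: 12:07:30 falls in the bucket starting at 12:05:00
assert bucket(t, 300) == datetime(2024, 1, 1, 12, 5, 0, tzinfo=timezone.utc)
```

Events sharing a bucket are then aggregated together, and with cont=true any empty buckets in the range are emitted with zero counts.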
eval
Computes new fields from expressions. Supports arithmetic, string concatenation, comparisons, and a rich set of built-in functions.
| eval ratio = count / total
| eval msg = "Status: " . status
| eval is_error = (status >= 500)
| eval x=1, y=2, z=x+y
| eval voltage = io_data[io="66"].value
Available operators:
| Category | Operators |
|---|---|
| Arithmetic | + - * / % |
| Comparison | == != < > <= >= LIKE |
| Boolean | AND OR NOT XOR |
| String | . (concatenation) |
| Unary | - (negation) |
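The . concatenation operator joins its operands as strings, so | eval msg = "Status: " . status works even when status is numeric. A sketch of the assumed coercion (null handling in the real engine may differ):

```python
def lpql_concat(a, b) -> str:
    """Sketch of the eval '.' operator: coerce both sides to strings
    and join them. Assumed semantics, not the real implementation."""
    return str(a) + str(b)

event = {"status": 503}
event["msg"] = lpql_concat("Status: ", event["status"])
assert event["msg"] == "Status: 503"
```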
where
Filters results using eval expressions. Only events where the expression evaluates to true are kept.
| where status > 400
| where level == "error" OR level == "fatal"
| where duration/1000 > 5
| where like(source, "api%")
| where io_data[io="66"].value < 12
table
Selects specific fields to display in the output. Supports wildcard patterns for field names.
| table level, event, host
| table timestamp, source_*, host
| table *
| table timestamp, device_id, io_data[io="66"].value
top / rare
top shows the most common values of a field, rare shows the least common. Both support a limit, percentage display toggle, and split-by fields.
| top level
| top 5 host
| top limit=10 showperc=false source
| top level by host
| rare level
| rare 5 host by source
dedup
Removes duplicate events based on one or more fields. Optionally keep a set number of duplicates or sort by a specific field.
| dedup host
| dedup 3 level
| dedup host, source
| dedup keepevents=true sortby=timestamp host
sort
Sorts results by one or more fields. Use - prefix for descending order, + or no prefix for ascending.
| sort timestamp # ascending (default)
| sort -level # descending
| sort limit=100 -duration timestamp
head / tail
head returns the first N results, tail returns the last N. Default is 10.
| head # first 10 results (default)
| head 20 # first 20 results
| tail 50 # last 50 results
fields
Include (+) or exclude (-) specific fields from the output. Supports wildcard patterns.
| fields level, event, host # include only these
| fields +source # include
| fields -attributes # exclude
| fields source_*, host # wildcard include
rename
Renames fields in the output using field as new_name syntax.
| rename old_field as new_field
| rename level as log_level, host as hostname
rex
Extracts new fields using regular expressions with named capture groups. Defaults to extracting from the event field.
| rex field=event "(?<app>\w+)-(?<env>\w+)"
| rex "(?<status_code>\d{3})"
Eval Functions Reference
These functions can be used in eval and where commands:
| Category | Functions |
|---|---|
| Conditional | if(cond, a, b), case(...), coalesce(a, b, ...) |
| String | lower, upper, len, substr, trim, ltrim, rtrim, replace, split, urldecode, urlencode |
| Conversion | tostring, tonumber |
| Math | round, ceil, floor, abs, pow, sqrt, log, ln, exp, pi, min, max |
| Type checks | isnull, isnotnull, isnum, isstr, typeof, null |
| Time | now, strftime, strptime, relative_time, time |
| Pattern | match, like, searchmatch, cidrmatch |
| Multivalue | mvindex, mvcount, mvjoin, mvappend, mvdedup, mvsort, mvfilter, mvfind |
| Crypto | md5, sha1, sha256, sha512 |
| eval greeting = if(level=="error", "ALERT", "OK")
| eval name = lower(host)
| eval dur_sec = round(duration / 1000, 2)
| eval is_prod = like(host, "prod%")
| eval hash = md5(event)
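The conditional helpers behave like their usual counterparts elsewhere: if(cond, a, b) picks one of two values and coalesce returns its first non-null argument. A Python sketch of the assumed semantics (if_ is named with an underscore only because if is reserved in Python):

```python
def coalesce(*args):
    """Sketch of coalesce(a, b, ...): first non-null argument, else null."""
    return next((a for a in args if a is not None), None)

def if_(cond, a, b):
    """Sketch of if(cond, a, b): a when cond is true, otherwise b."""
    return a if cond else b

assert coalesce(None, None, "fallback") == "fallback"
level = "error"
assert if_(level == "error", "ALERT", "OK") == "ALERT"
```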