Query all your Laminar data directly with SQL. Find patterns, debug issues, and answer questions the dashboard doesn’t anticipate.

What You Can Query

| Table | Contains |
| --- | --- |
| spans | Individual spans (LLM/tool/custom/eval spans) |
| traces | Trace-level aggregates derived from spans |
| events | Custom events you’ve sent |
| tags | Tags attached to spans |
| dataset_datapoints | Dataset datapoints (latest version per datapoint) |
| dataset_datapoint_versions | Dataset datapoints (all versions/history) |
| evaluation_datapoints | Evaluation datapoints incl. scores (JSON map), executor output, and dataset links |
Only SELECT queries are allowed.

Getting Started

Open the SQL Editor from the sidebar. Write a query:
SELECT name, input, output, start_time
FROM spans
WHERE start_time > now() - INTERVAL 3 DAY
Results appear in a table or raw JSON view. Export results to a dataset or labeling queue for further use. You can also query via API at /v1/sql/query—authenticate with your project API key and pass { "query": "..." }.
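As a sketch of the API route, the snippet below builds the POST request in Python. The /v1/sql/query path and the { "query": "..." } body come from the docs; the base URL and the Bearer authorization scheme are assumptions here, so check your project settings for the exact values.

```python
# Build a request against the SQL query API. The endpoint path and body shape
# are from the docs; the base URL and Bearer auth scheme are assumptions.
import json
import urllib.request

LAMINAR_URL = "https://api.lmnr.ai/v1/sql/query"  # base URL is an assumption

def build_sql_request(api_key: str, query: str) -> urllib.request.Request:
    """Build the POST request; urllib.request.urlopen(req) would send it."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        LAMINAR_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # project API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_sql_request(
    "your-project-api-key",
    "SELECT name FROM spans WHERE start_time > now() - INTERVAL 1 DAY",
)
# urllib.request.urlopen(req) would execute the query and return the rows.
```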

Writing Queries

Laminar uses ClickHouse, a columnar analytics database. The basics work like standard SQL (SELECT, FROM, WHERE, GROUP BY, ORDER BY, LIMIT), with a few differences.

Always filter by time

Spans are ordered by start_time. Adding a time filter dramatically speeds up queries and prevents memory issues:
-- Slow: scans everything
SELECT * FROM spans WHERE trace_id = 'abc-123'

-- Fast: scans only relevant time range
SELECT * FROM spans 
WHERE trace_id = 'abc-123'
  AND start_time > now() - INTERVAL 1 DAY

Avoid joins

ClickHouse isn’t optimized for joins. Instead, run two queries and combine results in your application:
-- First: find the spans you care about
SELECT trace_id, name, input, output
FROM spans
WHERE span_type = 'LLM' AND start_time > now() - INTERVAL 1 DAY

-- Second: get trace-level data for those trace_ids
SELECT id, duration
FROM traces
WHERE id IN ('id1', 'id2', 'id3')
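The application-side half of this pattern is a plain dictionary lookup. A minimal sketch, using hard-coded rows standing in for the two query results:

```python
# Two-query pattern: fetch spans and traces separately, then join them in
# application code. These rows are stand-ins for real query results.
llm_spans = [
    {"trace_id": "id1", "name": "openai.chat"},
    {"trace_id": "id2", "name": "openai.chat"},
]
traces = [
    {"id": "id1", "duration": 1.2},
    {"id": "id2", "duration": 3.4},
]

# Index traces by id, then attach the trace duration to each span row.
trace_by_id = {t["id"]: t for t in traces}
combined = [
    {**span, "trace_duration": trace_by_id[span["trace_id"]]["duration"]}
    for span in llm_spans
]
```

This keeps each query fast on its own table while the merge stays O(1) per row in your code.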

Working with dates

Truncate timestamps for grouping with toStartOfInterval:
-- Spans per day over the last month
SELECT
    toStartOfInterval(start_time, INTERVAL 1 DAY) AS day,
    count(*) AS span_count
FROM spans
WHERE start_time > now() - INTERVAL 1 MONTH
GROUP BY day
ORDER BY day
Works with any interval: INTERVAL 15 MINUTE, INTERVAL 1 HOUR, etc. Shortcuts exist for common intervals: toStartOfDay(value), toStartOfHour(value), toStartOfWeek(value).

Working with JSON

Many columns (like attributes) store JSON as strings. Use simpleJSONExtract* functions for fast extraction:
-- Extract token counts from LLM spans
SELECT
    name,
    simpleJSONExtractInt(attributes, 'gen_ai.usage.input_tokens') AS input_tokens,
    simpleJSONExtractInt(attributes, 'gen_ai.usage.output_tokens') AS output_tokens
FROM spans
WHERE span_type = 'LLM' AND start_time > now() - INTERVAL 1 DAY
Check if a key exists with simpleJSONHas:
SELECT count(*)
FROM spans
WHERE simpleJSONHas(attributes, 'gen_ai.request.structured_output_schema')
For complex operations (array indexing, nested paths), use JSONExtract* functions—more flexible but slower.

Data types

| Type | Used for |
| --- | --- |
| UUID | Most ID columns (trace_id, span_id, id) |
| DateTime64(9, 'UTC') | Timestamps (always UTC) |
| String | Text, JSON stored as strings, and enum-like columns (span_type, trace_type, status, tags.source) |
| LowCardinality(String) | Low-cardinality enums (for example, events.source) |
| Float64 | Floating point numbers |
| Int64 | Counts, token numbers |
| UInt64 | Indexes and counters (evaluation_datapoints.index) |
| Bool | Flags like has_browser_session |
| Array(String) | Tag lists and other string arrays |
Cast with CAST(value AS Type) or toDateTime64('2025-01-01 00:00:00', 9, 'UTC').

Table Schemas

These are the logical tables exposed in the SQL Editor. The schemas below reflect the columns available for queries.

spans

| Column | Type | Example value |
| --- | --- | --- |
| span_id | UUID | "00000000-0000-0000-1234-426614174000" |
| name | String | "openai.chat" |
| span_type | String | "LLM" |
| start_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| end_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| duration | Float64 | 1.23 |
| input_cost | Float64 | 0.5667 |
| output_cost | Float64 | 0.123 |
| total_cost | Float64 | 0.6897 |
| input_tokens | Int64 | 150 |
| output_tokens | Int64 | 100 |
| total_tokens | Int64 | 250 |
| request_model | String | "gpt-4.1-mini" |
| response_model | String | "gpt-4.1-mini-2025-04-14" |
| model | String | "gpt-4.1-mini-2025-04-14" |
| trace_id | UUID | "12345678-90ab-cdef-1234-426614174000" |
| provider | String | "openai" |
| path | String | "workflow.process.step1.openai.chat" |
| input | String | "[{\"role\": \"user\", \"content\": \"Hello\"}]" |
| output | String | "[{\"role\": \"assistant\", \"content\": \"Hi\"}]" |
| status | String | "success" |
| parent_span_id | UUID | "00000000-0000-0000-a456-abcd5667ef09" |
| attributes | String | "{\"gen_ai.system\": \"openai\", \"gen_ai.model\": \"gpt-4o\"}" |
| tags | Array(String) | ["needs-review", "tool-call"] |

Path

The Laminar span path is stored as an array of span names in the span attributes. In SQL queries, however, it is exposed as a string with the items joined by dots. For example, if the span path is ["outer", "inner"], the path column will be "outer.inner". If needed, you can still access the array value by reading attributes with simpleJSONExtractRaw(attributes, 'lmnr.span.path').
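In application code, the dotted string can be split back into its parts. A quick sketch (note the split is ambiguous if a span name itself contains a dot, in which case read lmnr.span.path from attributes instead):

```python
# Recover the list form of a dot-joined path column value.
# Ambiguous if a span name contains a dot; use lmnr.span.path then.
path = "workflow.process.step1.openai.chat"
path_parts = path.split(".")
```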

Parent span ID

If the current span is the top span of the trace, parent_span_id will be the nil (all-zero) UUID, i.e. "00000000-0000-0000-0000-000000000000".

Span type

The span_type column takes the following values:
DEFAULT
LLM
EXECUTOR
EVALUATOR
EVALUATION
TOOL
HUMAN_EVALUATOR
EVENT
UNKNOWN

Status

Status is normalized to "success" or "error".

Input and output

The input and output columns are stored as either raw strings or stringified JSON. The best way to parse them is to try parsing them as JSON and, if that fails, fall back to the raw string. You can also use the isValidJSON function directly in the query to test for this. The input and output columns are also indexed on content, so you can use them in WHERE conditions. Use ILIKE instead of LIKE, because the index is case-insensitive.
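The try-JSON-else-raw strategy is a one-liner helper on the application side. A minimal sketch:

```python
# Parse a span input/output value: JSON when valid, raw string otherwise.
import json
from typing import Any

def parse_span_io(value: str) -> Any:
    """Return parsed JSON when value is valid JSON, else the raw string."""
    try:
        return json.loads(value)
    except (json.JSONDecodeError, TypeError):
        return value

messages = parse_span_io('[{"role": "user", "content": "Hello"}]')
plain = parse_span_io("just a plain string")
```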

Attributes

The attributes column is stored as a string in JSON format. That is, you can safely JSON.parse / json.loads it. In addition, you can use JSON* and simpleJSON* functions on it right in the queries. Attributes are guaranteed to be a valid JSON object.

Model

The model column is set to the response model if present, otherwise it is set to the request model.

Total tokens and total cost

Usually, total_tokens = input_tokens + output_tokens and total_cost = input_cost + output_cost. However, you can manually report these values using the relevant attributes. In this case, totals may not be equal to the sum of the input and output tokens and costs.

traces

| Column | Type | Example value |
| --- | --- | --- |
| id | UUID | "01234567-1234-cdef-1234-426614174000" |
| start_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| end_time | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| input_tokens | Int64 | 150 |
| output_tokens | Int64 | 100 |
| total_tokens | Int64 | 250 |
| input_cost | Float64 | 0.5667 |
| output_cost | Float64 | 0.123 |
| total_cost | Float64 | 0.6897 |
| duration | Float64 | 1.23 |
| metadata | String | "{\"key\": \"value\"}" |
| session_id | String | "session_123" |
| user_id | String | "user_123" |
| status | String | "success" |
| top_span_id | UUID | "00000000-0000-0000-1234-426614174000" |
| top_span_name | String | "run" |
| top_span_type | String | "DEFAULT" |
| trace_type | String | "DEFAULT" |
| tags | Array(String) | ["needs-review", "production"] |
| has_browser_session | Bool | true |
id is the trace ID; join to spans with spans.trace_id = traces.id.

Trace type

The trace_type column takes the following values:
DEFAULT
EVALUATION
PLAYGROUND

Duration

The duration is in seconds, and is calculated as end_time - start_time.

Status

Status is set to error if any span in the trace has status error, otherwise it is success.

Metadata

Metadata is stored as a string in JSON format. That is, you can safely JSON.parse / json.loads it. In addition, you can use JSON* and simpleJSON* functions on it right in the queries. Metadata is guaranteed to be a valid JSON object.

events

| Column | Type | Example value |
| --- | --- | --- |
| id | UUID | "01234567-89ab-4def-1234-426614174000" |
| span_id | UUID | "00000000-0000-0000-1234-426614174000" |
| name | String | "My custom event" |
| timestamp | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| attributes | String | "{\"key\": \"value\"}" |
| user_id | String | "user_123" |
| session_id | String | "session_123" |
| trace_id | UUID | "01234567-1234-cdef-1234-426614174000" |
| source | LowCardinality(String) | "CODE" |

Attributes

The attributes column is stored as a string in JSON format. That is, you can safely JSON.parse / json.loads it. In addition, you can use JSON* and simpleJSON* functions on it right in the queries. Attributes are guaranteed to be a valid JSON object.

Source

source is either "CODE" or "SEMANTIC".

tags

| Column | Type | Example value |
| --- | --- | --- |
| id | UUID | "01234567-89ab-4def-1234-426614174000" |
| span_id | UUID | "00000000-0000-0000-1234-426614174000" |
| name | String | "needs-review" |
| created_at | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| source | String | "HUMAN" |

Source

source is "HUMAN" (set in the Laminar UI), "CODE" (attached from code), or "UNKNOWN".

evaluation_datapoints

| Column | Type | Example value |
| --- | --- | --- |
| id | UUID | "01234567-89ab-4def-1234-426614174000" |
| evaluation_id | UUID | "98765432-1098-4654-3210-987654321098" |
| data | String | "{\"key\": \"value\"}" |
| target | String | "{\"key\": \"value\"}" |
| metadata | String | "{\"key\": \"value\"}" |
| executor_output | String | "{\"key\": \"value\"}" |
| index | UInt64 | 0 |
| trace_id | UUID | "01234567-1234-cdef-1234-426614174000" |
| group_id | String | "group_a" |
| scores | String | "{\"score1\": 0.85}" |
| created_at | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| dataset_id | UUID | "00000000-0000-0000-0000-000000000000" |
| dataset_datapoint_id | UUID | "00000000-0000-0000-0000-000000000000" |
| dataset_datapoint_created_at | DateTime64(9, 'UTC') | "1970-01-01 00:00:00" |
data, target, metadata, executor_output, and scores are JSON stored as strings. scores is a JSON object of string keys to numeric values. When the datapoint is not sourced from a dataset, dataset_id and dataset_datapoint_id are a nil UUID (all zeroes) and dataset_datapoint_created_at is the Unix epoch.
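In application code that means parsing the scores JSON and checking for the nil UUID to tell dataset-sourced datapoints apart. A sketch, with a hard-coded row dict standing in for a query result:

```python
# Read an evaluation_datapoints row: parse the scores JSON map and detect
# datapoints not sourced from a dataset (nil dataset_id, per the docs).
import json

NIL_UUID = "00000000-0000-0000-0000-000000000000"

row = {
    "scores": '{"score1": 0.85}',
    "dataset_id": NIL_UUID,
}

scores = json.loads(row["scores"])           # string keys -> numeric values
from_dataset = row["dataset_id"] != NIL_UUID
```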

dataset_datapoints

| Column | Type | Example value |
| --- | --- | --- |
| id | UUID | "01234567-89ab-4def-1234-426614174000" |
| created_at | DateTime64(9, 'UTC') | "2021-01-01 00:00:00" |
| dataset_id | UUID | "11111111-2222-3333-4444-555555555555" |
| data | String | "{\"query\": \"What is 2+2?\"}" |
| target | String | "{\"answer\": \"4\"}" |
| metadata | String | "{\"source\": \"prod\"}" |
data, target, and metadata are JSON stored as strings.

dataset_datapoint_versions

Same schema as dataset_datapoints, but includes all versions and history for each datapoint.

Example Queries

Cost breakdown by model:
SELECT
    model,
    sum(total_cost) AS total_cost,
    count(*) AS call_count
FROM spans
WHERE span_type = 'LLM' AND start_time > now() - INTERVAL 7 DAY
GROUP BY model
ORDER BY total_cost DESC
Slowest operations (duration is in seconds):
SELECT name, avg(duration) AS avg_duration_s
FROM spans
WHERE start_time > now() - INTERVAL 1 DAY
GROUP BY name
ORDER BY avg_duration_s DESC
LIMIT 10
Error rate by span name:
SELECT
    name,
    countIf(status = 'error') AS errors,
    count(*) AS total,
    round(errors / total * 100, 2) AS error_rate
FROM spans
WHERE start_time > now() - INTERVAL 1 DAY
GROUP BY name
HAVING total > 10
ORDER BY error_rate DESC

Exporting Results

Select results and click “Export to Dataset.” Map columns to dataset fields (data, target, metadata). Use this to build evaluation datasets from query results.

Full Reference

For complete ClickHouse SQL syntax, see the ClickHouse documentation.