push. pipe. query. ship.

Event streaming
without the infrastructure

One binary. Zero config. Push events in, pipe them into streams, query with a language that makes sense. Install in 5 seconds.

Get Started Free View Docs
$ curl -sSL https://tailpush.com/install | sh
How it works
Simple by design.
No clusters. No partitions. No consumer groups. No YAML. Four concepts, one binary.
01
Push
Send any JSON over HTTP. No schemas, no topics, no setup. Events are timestamped and stored instantly.
02
Pipe
Create live pipes that filter and route events into focused streams. One firehose in, many streams out.
03
Query
A Unix-inspired query language. Chain operators with pipes. Results stream back in real time.
04
Ship
Deploy as a single binary. Integrate with your apps via HTTP and SSE. Route events to webhooks, storage, or anywhere with pipes.
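Because the whole surface is plain HTTP, any language's standard library is enough to integrate. A minimal Python sketch of the Push step, assuming a local Tailpush instance on port 3000 and the /v1/events/push endpoint shown in the terminal example; this is an illustration, not an official client.

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # assumed local Tailpush instance

def build_push_request(event, base_url=BASE_URL):
    """Build the POST request for the push endpoint."""
    return urllib.request.Request(
        f"{base_url}/v1/events/push",
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def push_event(event):
    """Send one JSON event and decode the acknowledgement."""
    with urllib.request.urlopen(build_push_request(event)) as resp:
        return json.loads(resp.read())

# With Tailpush running locally:
# push_event({"type": "user.signup", "email": "alice@example.com"})
```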
terminal
# Start Tailpush
$ tailpush

# Push an event
$ curl -X POST localhost:3000/v1/events/push \
    -d '{"type":"user.signup","email":"alice@example.com"}'
{"ok":true,"stream":"events","appended":1,"offset":1}

# Tail live events (in another terminal)
$ curl -N localhost:3000/v1/events/tail
data: {"offset":1,"ts":"2026-02-28T14:00:00Z","data":{"type":"user.signup",...}}

# Query
$ curl -X POST localhost:3000/v1/events/query \
    -d 'scan(-1h) | filter(type = error) | count()'
{"count": 42}

# Create a pipe — errors stream updates automatically
$ curl -X POST localhost:3000/v1/pipes \
    -d '{"source":"events","destination":"errors","pipeline":"filter(type = error)"}'
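The tail endpoint speaks standard server-sent events (SSE), so any HTTP client can consume it line by line. A minimal sketch of parsing `data:` lines in Python; a real consumer would read lines from a streaming HTTP response rather than a list, but the parsing is the same.

```python
import json

def parse_sse_events(lines):
    """Collect the JSON payloads from `data:` lines of an SSE stream."""
    events = []
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

# The kind of line the tail endpoint emits:
sample = ['data: {"offset":1,"ts":"2026-02-28T14:00:00Z","data":{"type":"user.signup"}}']
events = parse_sse_events(sample)
print(events[0]["offset"])  # 1
```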
Architecture
One firehose in.
Many streams out.
Pipes filter and route events in real time. Chain them. Backfill history. Export to external systems. Each derived stream is tiny and fast to query.
events                                  (raw firehose)
├── filter(type = error)           →  errors
├── filter(event = purchase)       →  purchases
├── filter(service = api)          →  api-logs
│   └── filter(status >= 500)      →  api-errors      (chained)
├── group_by(type) | rate(1m)      →  event-rates     (materialized)
└── filter(level = critical)       →  webhook:slack   (external)
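Each branch in the tree above is just one pipe definition. A sketch of the payloads you would POST to /v1/pipes to build part of it, assuming the same fields as the terminal example (source, destination, pipeline); note the chained pipe reads from a derived stream, not the firehose.

```python
import json

# One pipe per derived stream: read a source, apply a pipeline,
# write to a destination stream.
pipes = [
    {"source": "events",   "destination": "errors",     "pipeline": "filter(type = error)"},
    {"source": "events",   "destination": "purchases",  "pipeline": "filter(event = purchase)"},
    {"source": "events",   "destination": "api-logs",   "pipeline": "filter(service = api)"},
    # Chained: this pipe's source is itself a derived stream.
    {"source": "api-logs", "destination": "api-errors", "pipeline": "filter(status >= 500)"},
]

for pipe in pipes:
    print(json.dumps(pipe))  # POST each body to localhost:3000/v1/pipes
```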
Query language
Queries that make sense.
Unix-inspired pipe syntax. No SQL, no special query builder. Chain operators, stream results.
Recent errors
scan(-1h) | filter(type = error)
Revenue today
scan(-24h) | filter(event = purchase) | sum(amount)
Top error endpoints
scan(-24h)
| filter(status >= 500)
| group_by(endpoint)
| count()
| sort(count, desc)
| head(10)
P99 latency by service
scan(-1h)
| group_by(service)
| percentile(latency_ms, 99)
Event rate over time
scan(-24h) | window(1h) | count()
Unique users this week
scan(-7d) | distinct(user_id) | count()
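Because a query is just a string of operators joined by pipes, queries compose mechanically in any host language. A hypothetical Python helper (not part of Tailpush) that builds query strings like the ones above:

```python
def pipeline(*steps):
    """Join operator steps into a pipe-syntax query string."""
    return " | ".join(steps)

q = pipeline(
    "scan(-24h)",
    "filter(status >= 500)",
    "group_by(endpoint)",
    "count()",
    "sort(count, desc)",
    "head(10)",
)
print(q)
# scan(-24h) | filter(status >= 500) | group_by(endpoint) | count() | sort(count, desc) | head(10)
```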
Performance
Absurdly fast on a single machine.
Rust, io_uring, SIMD JSON parsing, zero-copy reads. No GC pauses. No JVM. No overhead.
1M+
events/sec append
batched io_uring writes
<100μs
p99 write latency
pre-allocated segments
<50μs
tail notification
in-process, zero-copy
Why Tailpush
Built for builders.
Not another distributed system for platform teams. A simple tool that works.
                       tailpush         Kafka                Loki                 Datadog
Time to first event    5 seconds        Hours                30 minutes           Minutes + $$
Architecture           One binary       Cluster + ZK/KRaft   Grafana + Promtail   Cloud SaaS
Query language         Pipe syntax      None                 LogQL                Custom
Real-time streaming    Native SSE       Consumer poll        Via Grafana          Yes
Pricing                Free / $29/mo    Free (self-host)     Free / Cloud $       $$
Start in 5 seconds.
Free tier. 100K events/month. No credit card.