Tape Stream

I was running a few small services on my VPS and needed logs. Not analytics—just “what happened when things broke.” The options felt like overkill: ELK stack (2GB RAM minimum), cloud services (ongoing cost, external dependency), or syslog + grep (painful for structured data).

Tape Stream is a middle path. Services POST JSON logs over HTTP/2. The server appends to a rotating log file (the “tape”). A separate process imports batches into SQLite for querying. The twist: the import process is idempotent and reversible, so you can re-import a corrupted section without losing data.
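The idempotent, reversible import can be sketched like this. Everything here is illustrative (table, function, and column names are my assumptions, not the project's actual code): the key idea is that each event is keyed by its tape file and byte offset, so re-importing the same span is a no-op, and a corrupted span can be deleted and imported again.

```python
import sqlite3

def open_db(path=":memory:"):
    # (tape, offset) uniquely identifies an event, making imports idempotent.
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS events (
        tape   TEXT NOT NULL,
        offset INTEGER NOT NULL,
        raw    TEXT NOT NULL,
        PRIMARY KEY (tape, offset))""")
    return db

def import_tape(db, tape_name, lines):
    """lines: iterable of (byte_offset, json_line). Safe to run twice."""
    db.executemany(
        "INSERT OR IGNORE INTO events (tape, offset, raw) VALUES (?, ?, ?)",
        ((tape_name, off, line) for off, line in lines))
    db.commit()

def revert_span(db, tape_name, lo, hi):
    """Delete a byte range so it can be re-imported cleanly."""
    db.execute("DELETE FROM events WHERE tape = ? AND offset BETWEEN ? AND ?",
               (tape_name, lo, hi))
    db.commit()
```

Running `import_tape` twice over the same batch leaves the table unchanged; `revert_span` plus a fresh `import_tape` replaces a corrupted section.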

Design decisions

Why not just SQLite directly? Write-heavy SQLite on a VPS with modest I/O is risky. The append-only log is sequential, cheap, and survives crashes. SQLite is for analysis, not ingestion.
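The append side is small enough to sketch in full. This is a minimal illustration, not the actual implementation (the class name, segment naming scheme, and rotation threshold are assumptions): one JSON object per line, fsync after each write so a crash loses at most the in-flight event, and a new segment when the file gets large.

```python
import json
import os
import time

MAX_TAPE_BYTES = 64 * 1024 * 1024  # illustrative rotation threshold

class Tape:
    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)
        self._open_segment()

    def _open_segment(self):
        # Timestamp-named segments sort chronologically for the importer.
        name = f"tape-{int(time.time() * 1000)}.jsonl"
        self.path = os.path.join(self.directory, name)
        self.fh = open(self.path, "ab")

    def append(self, event: dict):
        line = json.dumps(event, separators=(",", ":")) + "\n"
        self.fh.write(line.encode("utf-8"))
        self.fh.flush()
        os.fsync(self.fh.fileno())  # sequential, crash-safe append
        if self.fh.tell() > MAX_TAPE_BYTES:
            self.fh.close()
            self._open_segment()    # rotate to a fresh segment
```

Writes are purely sequential, which is exactly the access pattern cheap VPS storage handles well.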

HTTP/2 specifically? Binary framing was less important than the ability to multiplex multiple log streams over one connection without head-of-line blocking at the HTTP layer. I run this behind nginx, so the complexity is hidden.
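The nginx side can look roughly like this (a hedged sketch; the server name, port, and path are placeholders, and note that nginx terminates HTTP/2 from clients but speaks HTTP/1.1 to the upstream via `proxy_pass`):

```nginx
server {
    listen 443 ssl http2;          # clients multiplex log streams over one connection
    server_name logs.example.com;  # placeholder

    location /ingest {
        # HTTP/2 faces clients only; the backend sees plain HTTP/1.1
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
    }
}
```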

Schema evolution: Logs change. The importer uses SQLite’s json1 extension and creates views that normalize common fields while keeping raw JSON accessible for rare queries.
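A minimal sketch of the view approach, assuming json1 is compiled into SQLite (it is by default in modern builds; the field names here are illustrative): common fields are normalized by a view, and anything the view doesn't cover is still reachable through the raw column.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (raw TEXT NOT NULL)")

# The view extracts common fields; the raw column rides along for rare queries.
db.execute("""CREATE VIEW events_v AS
    SELECT json_extract(raw, '$.ts')      AS ts,
           json_extract(raw, '$.service') AS service,
           json_extract(raw, '$.level')   AS level,
           raw
    FROM events""")

db.execute("INSERT INTO events VALUES (?)",
           ('{"ts":"2024-01-01T00:00:00Z","service":"api",'
            '"level":"error","msg":"boom"}',))

row = db.execute("SELECT service, level FROM events_v").fetchone()
# Fields the view doesn't know about stay queryable from raw JSON:
extra = db.execute("SELECT json_extract(raw, '$.msg') FROM events").fetchone()
```

When log shapes change, only the view needs recreating; the stored rows are untouched.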

Current status

Running on my VPS for 6 months, handling ~10K events/day across 4 services. CPU usage stays under 5%. The SQLite database is 400MB with 6 months of retention. Query time for “errors in the last hour” is <10ms.