turn messy text logs into structured json.
logparse reads unstructured log data from standard input and writes json lines to standard output. it includes built-in formats for common servers and accepts custom regular expressions for bespoke logs.
logparse is a cli utility that converts unstructured server logs into structured json lines. it ships with pre-configured parsers for Nginx, Apache, syslog, and common application log formats. you can pipe logs directly into it and get machine-readable output.
grep and awk are fine for simple searches, but they fall apart when you need to aggregate data or extract specific fields reliably. sending unstructured text to a centralized logging system ruins your ability to query it. logparse structures the data at the source.
sysadmins and backend engineers use this to feed logs into tools like jq or observability stacks. if your legacy application only writes flat text files, logparse acts as a bridge to modern infrastructure.
understands standard Nginx, Apache, and syslog formats out of the box. no configuration required.
converts messy text into structured, newline-delimited json. ready to be piped into jq or a database.
supply your own regular expressions for proprietary application logs. extracts named capture groups as json keys.
processes hundreds of thousands of lines per second. handles massive log rotations without choking.
skips or flags lines that do not match the expected format instead of crashing the entire pipeline.
convert raw Nginx access logs into json lines.
// input
tail -n 2 access.log | logparse --format nginx
// output
{"ip":"192.168.1.1","method":"GET","path":"/api/users","status":200}
{"ip":"10.0.0.5","method":"POST","path":"/login","status":401}
the unstructured text was parsed using the built-in Nginx regex and converted into queryable json.
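conceptually, the built-in parser matches each line against a regex with named capture groups. the sketch below is not logparse's actual source, just a minimal python illustration of the idea against a simplified Nginx-style line (the real parser handles more fields):

```python
import json
import re

# simplified stand-in for a built-in Nginx access-log pattern;
# the real one also extracts timestamp, bytes, referer, user agent, ...
NGINX = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3})'
)

line = '192.168.1.1 - - [10/Oct/2024:13:55:36 +0000] "GET /api/users HTTP/1.1" 200 512'
record = NGINX.match(line).groupdict()
record["status"] = int(record["status"])  # built-in parsers cast numeric fields
print(json.dumps(record))
# {"ip": "192.168.1.1", "method": "GET", "path": "/api/users", "status": 200}
```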
trying to write a regex to parse Nginx access logs on the fly during a production incident is a miserable experience. unstructured logs are essentially write-only memory.
we needed a way to pipe standard logs into jq without setting up a massive centralized cluster. logparse does one thing and outputs cleanly formatted json lines.
pass a regex string with named capture groups. the names become the json keys, and the matched text becomes the values.
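to picture the convention, here is a python sketch of that mapping. the log line and pattern are made-up examples, not anything shipped with logparse:

```python
import json
import re

# hypothetical bespoke app log; the group names chosen here
# (ts, level, message) become the keys of the output object
pattern = re.compile(r'(?P<ts>\S+) (?P<level>[A-Z]+) (?P<message>.*)')

line = "2024-10-10T13:55:36Z ERROR disk quota exceeded"
record = pattern.match(line).groupdict()
print(json.dumps(record))
# {"ts": "2024-10-10T13:55:36Z", "level": "ERROR", "message": "disk quota exceeded"}
```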
yes. built-in parsers will automatically cast status codes and bytes to integers. custom parsers treat everything as strings by default.
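the difference is easy to sketch in python (this illustrates the behavior described above, not logparse's internals):

```python
import re

m = re.match(r'(?P<status>\d{3}) (?P<bytes>\d+)', "401 5120")

# a custom parser stops here: every captured value is a string
as_strings = m.groupdict()            # {'status': '401', 'bytes': '5120'}

# a built-in parser additionally casts fields it knows are numeric
as_cast = {k: int(v) for k, v in as_strings.items()}
print(as_strings, as_cast)
```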
no. pipe compressed files through zcat first and feed the decompressed stream into logparse: zcat access.log.gz | logparse --format nginx
it outputs a json object containing the original string under a 'raw' key plus an error flag, so you do not lose the data.
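sketched in python, assuming the 'raw'/'error' shape described above (a minimal illustration, not logparse's code):

```python
import json
import re

NGINX = re.compile(r'(?P<ip>\S+) .+ (?P<status>\d{3})$')  # deliberately simplified

def parse(line):
    m = NGINX.match(line)
    if m is None:
        # preserve the unmatched line rather than dropping it or crashing
        return {"raw": line, "error": True}
    return m.groupdict()

print(json.dumps(parse("Traceback (most recent call last):")))
# {"raw": "Traceback (most recent call last):", "error": true}
```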
// stop reading flat text files.