DevOps 2026-03-09

Debug Logs: Convert Timestamps → Format JSON → Diff Log Versions

Systematic log debugging: convert Unix timestamps to readable dates, format structured JSON logs, and diff log snapshots to find what changed. A repeatable workflow for narrowing down regressions faster.

Workflow uses: Timestamp Converter · JSON Formatter · Diff Checker — All Free

The Problem

Your service is throwing errors at 3am. The log file shows Unix timestamps nobody can read at a glance, one minified JSON payload per line, and you need to figure out what changed between the last deployment and now. This workflow turns that chaos into actionable debugging.

Why This Matters

Logs are only useful if you can read them. Unix timestamps are compact for storage but opaque to humans. Minified JSON in logs is space-efficient but unreadable during incidents. Diff-based log comparison is the fastest way to isolate regression causes in production systems.

Step-by-Step Instructions

1. Convert Unix timestamps to readable dates

Copy timestamp values from your logs and paste them into the Timestamp Converter. Convert to your local timezone to understand exactly when events occurred. Compare timestamps to deployment times to correlate incidents.
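When a log has dozens of timestamps, the same conversion is easy to script. A minimal Python sketch (the timestamp value is illustrative):

```python
from datetime import datetime, timezone

def to_readable(ts: int, tz=timezone.utc) -> str:
    """Convert a Unix timestamp in seconds to a human-readable date."""
    return datetime.fromtimestamp(ts, tz=tz).strftime("%Y-%m-%d %H:%M:%S %Z")

print(to_readable(1772881200))  # → 2026-03-07 11:00:00 UTC
```

Pass a different `tz` (e.g. from `zoneinfo.ZoneInfo`) to see events in your local timezone instead of UTC.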

2. Format structured JSON log entries

Many modern logging systems (Datadog, Splunk, CloudWatch) emit structured JSON per line. Copy a problematic log entry and paste it into the JSON Formatter to see all fields with proper indentation — spot missing or unexpected fields immediately.
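Pretty-printing a minified log entry is a one-liner in most languages. A Python sketch, using a sample payload shaped like the example later in this guide:

```python
import json

# One minified log line, as emitted by a structured logger (sample payload)
raw = '{"svc":"api","lvl":"err","msg":"timeout","req_id":"abc123","ts":1772881200,"dur_ms":30001,"upstream":"db"}'

entry = json.loads(raw)
print(json.dumps(entry, indent=2))  # every field on its own line, indented
```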

3. Diff before/after deployment logs

Copy a representative log snapshot from before and after the problematic deployment. Paste both into the Diff Checker. Changed log fields, new error types, or missing events pinpoint the regression.
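The same comparison can be done from a script with Python's standard-library `difflib`. The log lines below are hypothetical pre/post-deploy snapshots:

```python
import difflib

# Hypothetical snapshots of the same endpoint's logs, before and after a deploy
before = ["lvl=info msg=request_ok dur_ms=120", "lvl=info msg=request_ok dur_ms=95"]
after = ["lvl=err msg=timeout dur_ms=30001", "lvl=info msg=request_ok dur_ms=110"]

# Unified diff: lines starting with "-" vanished, "+" are new
for line in difflib.unified_diff(before, after, "before.log", "after.log", lineterm=""):
    print(line)
```

New error types show up as `+` lines; events that stopped appearing show up as `-` lines.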

Try It Now — Unix Timestamp Converter

Unix Timestamp Converter — Live Demo

All processing happens in your browser — no data is sent to any server.

Before & After Example

Unreadable production log (raw)
1772881200 {"svc":"api","lvl":"err","msg":"timeout","req_id":"abc123","ts":1772881200,"dur_ms":30001,"upstream":"db"}
1772881260 {"svc":"api","lvl":"err","msg":"timeout","req_id":"def456","ts":1772881260,"dur_ms":30002,"upstream":"db"}

# When is 1772881200? What fields matter?
Readable log with context
Timestamp: 2026-03-07 11:00:00 UTC

{
  "svc": "api",
  "lvl": "err",
  "msg": "timeout",
  "req_id": "abc123",
  "ts": "2026-03-07T11:00:00Z",
  "dur_ms": 30001,       // ← 30 second timeout!
  "upstream": "db"      // ← database is slow
}

# Root cause: DB latency spike at 11:00 UTC

Frequently Asked Questions

What's the difference between Unix timestamp and ISO 8601?

A Unix timestamp counts seconds (or milliseconds) since 1970-01-01 00:00:00 UTC — compact but unreadable. ISO 8601 writes the same instant as 2026-03-07T11:00:00Z — human-readable and timezone-explicit. ISO 8601 is best for logs; Unix timestamps are best for calculation and sorting.
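The two formats round-trip losslessly. A Python sketch with an illustrative timestamp:

```python
from datetime import datetime, timezone

ts = 1772881200  # illustrative Unix timestamp (seconds)

# Unix → ISO 8601 (UTC, with the conventional "Z" suffix)
iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat().replace("+00:00", "Z")
print(iso)  # → 2026-03-07T11:00:00Z

# ISO 8601 → Unix ("Z" rewritten as an explicit offset for older Pythons)
back = int(datetime.fromisoformat(iso.replace("Z", "+00:00")).timestamp())
print(back)  # → 1772881200
```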

Are log timestamps in seconds or milliseconds?

It depends on the system. Unix traditionally uses seconds (10 digits: 1709804000). JavaScript and many modern systems use milliseconds (13 digits: 1709804000000). If your timestamp is 13 digits, divide by 1000 first.
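The digit-count check can be automated by testing the magnitude instead of counting characters. A small sketch:

```python
def normalize_ts(ts: int) -> float:
    """Return seconds; treat 13-digit values (>= 1e12) as milliseconds."""
    return ts / 1000 if ts >= 1_000_000_000_000 else float(ts)

print(normalize_ts(1709804000))     # 10-digit seconds, unchanged → 1709804000.0
print(normalize_ts(1709804000000))  # 13-digit milliseconds → 1709804000.0
```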

What's the best structured logging format?

JSON Lines (JSONL) — one JSON object per line. Each line should include: timestamp (ISO 8601), level (info/warn/error), message, service name, request/trace ID, and relevant context. This format is natively supported by Datadog, CloudWatch, and most log aggregators.
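Emitting a compliant JSONL line is straightforward. A minimal Python sketch (the field names follow the list above; `jsonl_record` is a hypothetical helper, not a library API):

```python
import json
from datetime import datetime, timezone

def jsonl_record(level: str, message: str, service: str, request_id: str, **context) -> str:
    """Build one JSONL log line: minified JSON, one object per line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="seconds").replace("+00:00", "Z"),
        "level": level,
        "message": message,
        "service": service,
        "request_id": request_id,
        **context,  # any extra structured context fields
    }
    return json.dumps(record, separators=(",", ":"))

print(jsonl_record("error", "timeout", "api", "abc123", upstream="db"))
```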

Related Workflows

Try all 3 tools in this workflow

Each tool is free, runs in your browser, and requires no signup.
