JSON Parsing Performance
The cluster centers on debates about whether JSON parsing is a performance bottleneck, especially for large documents, with frequent mentions of fast libraries like simdjson achieving GB/s speeds and advice on profiling and alternatives.
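Several comments in the cluster recommend profiling before reaching for a faster parser. A minimal sketch of that advice in Python, timing the standard-library parser on a synthetic document (the payload shape and size here are invented for illustration):

```python
import json
import time

# Build a synthetic multi-megabyte JSON document (illustrative payload shape).
doc = json.dumps(
    [{"id": i, "name": f"item-{i}", "values": [i, i + 1, i + 2]}
     for i in range(100_000)]
)
size_mb = len(doc) / 1e6

# Time a single parse; real profiling would repeat this and take the best run.
start = time.perf_counter()
parsed = json.loads(doc)
elapsed = time.perf_counter() - start

print(f"parsed {size_mb:.1f} MB in {elapsed * 1000:.1f} ms "
      f"({size_mb / elapsed:.0f} MB/s)")
```

A measured throughput far below the gigabytes per second that SIMD parsers report would suggest parsing is worth optimizing; a tiny absolute cost would suggest it is not the bottleneck.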
Sample Comments
If JSON parsing is a bottleneck in your application, you're doing it wrong.
Is JSON parsing still a bottleneck? https://github.com/simdjson/simdjson
Probably anywhere that requires parsing large JSON documents. Off-the-shelf JSON parsers are notoriously slow on large JSON documents.
What is the reason not to use the micro-optimized JSON implementation if parsing becomes your bottleneck?
I guess it depends on your use case. It looks like this was primarily made for large JSON files, not the typical small JSON payloads you encounter in HTTP bodies and the like. On top of that, JSON.parse() is already heavily optimized. Profiling is key.
There are some quite big JSON files out there; you might also want to parse megabytes of JSON without spending more than 1 ms to get through it.
You might also move to something other than JSON if parsing it is a significant part of your workload.
It can be a big chunk of the cost of parsing JSON.
If only they had used a JSON parser instead of a full-blown (slow) language!
Further evidence is the fact that optimized SIMD JSON and UTF-8 libraries exist. If I/O were the bottleneck, there would be no need to parse JSON using SIMD.
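One comment above suggests moving to something other than JSON when parsing dominates the workload. A hedged sketch of how such a comparison might look for a Python-to-Python workload, using the standard-library pickle module as the alternative format (the payload is invented, and relative speeds vary by data shape and Python version):

```python
import json
import pickle
import time

# Illustrative payload; real workloads will differ.
data = [{"id": i, "values": [i, i + 1, i + 2]} for i in range(50_000)]
as_json = json.dumps(data).encode()
as_pickle = pickle.dumps(data)

def best_of(parse, payload, reps=5):
    # Best of several runs, to reduce timing noise.
    times = []
    for _ in range(reps):
        start = time.perf_counter()
        parse(payload)
        times.append(time.perf_counter() - start)
    return min(times)

json_t = best_of(json.loads, as_json)
pickle_t = best_of(pickle.loads, as_pickle)
print(f"json: {json_t * 1000:.1f} ms, pickle: {pickle_t * 1000:.1f} ms")
```

Note that pickle only makes sense when both ends are Python and the data source is trusted; for cross-language workloads a schema-based binary format would be the more common escape hatch.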