MongoDB BSON Size Calculator
Paste any document (JSON or Extended JSON) to see its exact BSON size, its share of the 16 MB per-document limit, and which top-level fields are eating the most bytes. Useful for catching a runaway array before production does.
Extended JSON wrappers ($oid, $date, $numberDecimal, $binary, …) are counted per the real BSON spec.
Bytes per BSON type
Every field adds 1 type byte + the UTF-8-encoded key + 1 null terminator on top of the value.
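As a worked example of that rule, here is the byte count for the one-field document {"a": 1}, done by hand in Python (the variable names are just for illustration):

```python
# Size of {"a": 1}: length prefix + one int32 element + document terminator.
prefix, terminator = 4, 1                           # int32 length prefix + trailing 0x00
type_byte = 1                                       # 0x10 marks an int32 element
key_bytes = len("a".encode("utf-8"))                # UTF-8 key, here 1 byte
key_nul = 1                                         # CString null terminator
value = 4                                           # int32 payload
total = prefix + type_byte + key_bytes + key_nul + value + terminator
print(total)  # 12
```

This matches what a real driver produces: in PyMongo, len(bson.encode({"a": 1})) is also 12.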
Frequently asked questions
Why does MongoDB have a 16 MB document limit?
The limit prevents a single document from monopolizing RAM, the network, and the oplog. It's been 16 MB since MongoDB 1.x and hasn't moved. In practice it's a design signal: if a document is close to the ceiling, it probably wants to be split into multiple documents or stored in GridFS.
Is the size really 16 MB exactly?
16 × 1024 × 1024 = 16,777,216 bytes. The server rejects inserts and updates of any BSON document larger than that. Query results aren't limited as a whole; each returned document just has to fit individually.
How is BSON size computed?
Each document is a 4-byte length prefix, a sequence of elements, and a 0x00 terminator. Every element adds 1 type byte, a null-terminated key (CString), and the value payload. int32 takes 4 bytes, int64/double/date take 8, ObjectId takes 12, decimal128 takes 16, and strings take 4 (length) + UTF-8 bytes + 1 (null).
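The layout above can be sketched in a few lines of Python. This is a simplified sizer that handles only a handful of types; bson_size is a hypothetical helper for illustration, not a driver API:

```python
def bson_size(doc: dict) -> int:
    """Approximate BSON size for ints, floats, bools, strings, None,
    and nested dicts. Real drivers cover many more types."""
    size = 4 + 1                                    # length prefix + 0x00 terminator
    for key, value in doc.items():
        size += 1 + len(key.encode("utf-8")) + 1    # type byte + key CString
        if isinstance(value, bool):
            size += 1                               # bool is a 1-byte payload
        elif isinstance(value, int):
            size += 4 if -2**31 <= value < 2**31 else 8   # int32 vs int64
        elif isinstance(value, float):
            size += 8                               # double
        elif isinstance(value, str):
            size += 4 + len(value.encode("utf-8")) + 1    # int32 length + UTF-8 + NUL
        elif isinstance(value, dict):
            size += bson_size(value)                # embedded document, own prefix included
        elif value is None:
            pass                                    # null has no payload
        else:
            raise TypeError(f"unhandled type: {type(value).__name__}")
    return size

print(bson_size({}))             # 5
print(bson_size({"a": 1}))       # 12
print(bson_size({"name": "hi"})) # 18
```

The empty document is 5 bytes (4-byte prefix plus terminator), which is the floor every BSON document pays before any fields are added.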
Why do my int fields look like 4 bytes here but my driver sends 8?
This tool treats JSON numbers that fit in int32 range as int32, matching what most drivers do when converting JSON to BSON. If your driver always emits int64 (Java/Go/.NET behavior with Long), wrap the value as { "$numberLong": "123" } and the size will reflect it.
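The difference is easy to see by counting the element bytes directly (the key name "n" here is just an example):

```python
import struct

# Element cost for key "n": 1 type byte + "n\0" key (2 bytes) + numeric payload.
key_overhead = 1 + len("n") + 1
as_int32 = key_overhead + len(struct.pack("<i", 123))  # plain JSON number
as_int64 = key_overhead + len(struct.pack("<q", 123))  # {"$numberLong": "123"}
print(as_int32, as_int64)  # 7 11
```

Four extra bytes per field is negligible for one value, but it doubles the numeric payload of a large array of integers.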
What should I do if I’m close to the limit?
Three common fixes. (1) Move unbounded arrays to a separate collection with a foreign key. (2) Store large blobs in GridFS, which chunks them across multiple BSON documents. (3) If the hot fields are only a fraction of the document, split into a summary doc plus a detail doc.
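Fix (3) can be sketched as a plain dict split; all field names here are assumptions, and in practice the detail document would live in its own collection keyed back to the parent:

```python
# Hypothetical near-limit order document: the unbounded "events" array is the risk.
big = {"_id": 42, "status": "paid", "events": [{"seq": i} for i in range(3)]}

# Hot, small summary document: everything except the heavy array.
summary = {k: v for k, v in big.items() if k != "events"}

# Heavy detail document, linked back to the summary by parent_id.
detail = {"parent_id": big["_id"], "events": big["events"]}
```

Queries that only need status now read the small summary document; the array grows in a separate document (or one document per event) instead of pushing the parent toward 16 MB.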
Does this include index entries or overhead?
No. The number is just the BSON-on-the-wire document size. Index entries, replication overhead, storage-engine compression, and _id metadata are separate. Those matter for collection sizing, not for the 16 MB per-document limit, which applies to the document itself.
Want to understand your document structure?
Monghoul's schema analysis samples your collection and maps out every field, with
types, occurrence rates, and enum detection. Collection stats show document counts
and average sizes. The query sandbox includes BSON.calculateObjectSize() for exact measurements.