It's extremely fast and it's easy to work with for reading, but it's just a key-value store. Also, it's sort of weird to work with on the write side. You have to do atomic replaces of the read-only data files.
Regarding writes, most likely the expectation is that you'll update your entire data set periodically in a batch job, or export and cache a copy of another authoritative data store. I understood the atomic replace feature as a convenient way to flip to a new version of the database after a batch job has produced and distributed it. It sounds like the "cdbmake" tool is designed to facilitate this.
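A minimal sketch of that batch-job pattern in Python (the file name and the plain tab-separated on-disk format here are just illustrations; cdbmake does the same write-temp-then-rename dance for real cdb files):

```python
import os
import tempfile

def rebuild_dataset(path, records):
    """Rebuild a read-only data file and atomically flip to the new version.

    `records` is an iterable of (key, value) string pairs; the on-disk
    format here is plain key<TAB>value text, purely to illustrate the
    replace pattern.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    # Write the new version to a temp file in the same directory, so the
    # final rename stays within one filesystem and is atomic.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            for key, value in records:
                f.write(f"{key}\t{value}\n")
            f.flush()
            os.fsync(f.fileno())
        # Atomic on POSIX: readers see either the old file or the new
        # one in its entirety, never a partially written mix.
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

rebuild_dataset("aliases.data", [("postmaster", "alice"), ("root", "bob")])
```

Writing the temp file into the target's own directory matters: rename(2) is only atomic within a single filesystem.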
A lot of data doesn't change very often, and a lot of data can tolerate propagation delays on change. When you have an extremely high read volume against such data sets, it can be economical to cache the entire data set and distribute it periodically to the fleets of machines that require access, as opposed to servicing individual reads over a network. This provides lower cost and latency, while supporting a higher volume of reads and higher availability, at the expense of engineering cost and propagation delay.
I don't know how many people remember this, but this kind of "weird atomic replace of the read-only data file" was very common on Unix systems for a long time.
Take sendmail: people would keep their email aliases in an /etc/aliases file, in a simple ASCII format with one alias per line: "<key>: <values...>". The "newaliases" command would then convert the contents of the ASCII file /etc/aliases into the binary (originally: Berkeley DB) /etc/aliases.db.
As far as I know, CDB was initially developed for qmail, DJB's email daemon, for exactly that purpose.
Another example of this is the "Yellow Pages" (yp) / NIS directory service used to distribute usernames/userids/groups/... in a UNIX cluster or network. ASCII files were converted into e.g. /var/yp/DOMAIN/passwd.byname, a .db file whose keys are the usernames from /etc/passwd and whose values are the full ASCII lines from /etc/passwd; the same goes for passwd.byuid, group.byname, and so on. All for fast indexed lookups, so that the network server didn't have to repeatedly parse /etc/passwd.
It can be reimplemented from scratch in a few hundred lines of code?
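To give a feel for why: the core idea, records written once plus an index appended at the end, really is small. This is a deliberately simplified sketch (the real cdb format uses 256 hash tables and djb's hash function; the offset-table index here is a stand-in I made up for brevity):

```python
import io
import struct

def cdb_write(buf, items):
    """Write key/value records, then an index of (key, offset), then a
    4-byte trailer saying where the index begins. `buf` is binary."""
    offsets = {}
    for key, value in items.items():
        offsets[key] = buf.tell()
        buf.write(struct.pack("<II", len(key), len(value)))
        buf.write(key)
        buf.write(value)
    index_start = buf.tell()
    for key, off in offsets.items():
        buf.write(struct.pack("<II", len(key), off))
        buf.write(key)
    buf.write(struct.pack("<I", index_start))

def cdb_read_all(buf):
    """Load the index from the tail, then fetch each record by offset."""
    buf.seek(-4, io.SEEK_END)
    (index_start,) = struct.unpack("<I", buf.read(4))
    index_end = buf.seek(0, io.SEEK_END) - 4
    buf.seek(index_start)
    index = {}
    while buf.tell() < index_end:
        klen, off = struct.unpack("<II", buf.read(8))
        index[buf.read(klen)] = off
    result = {}
    for key, off in index.items():
        buf.seek(off)
        klen, vlen = struct.unpack("<II", buf.read(8))
        buf.seek(klen, io.SEEK_CUR)
        result[key] = buf.read(vlen)
    return result

buf = io.BytesIO()
cdb_write(buf, {b"one": b"Hello", b"two": b"world"})
db = cdb_read_all(buf)
print(db[b"one"])  # → b'Hello'
```

A real implementation would replace the linear index with cdb's two-level hash tables so a lookup costs O(1) seeks, but the file-layout discipline is the same.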
It's not really the same thing, though. CDB provides a single key -> value mapping in a single file, with updates by replacing the entire file; SQLite is a relational database with updates, locking, transactions, SQL, etc.
cdb is immutable: when you want to change the contained data set, you generally rebuild the file, flip a filesystem link to the new version, and have applications release file descriptors referencing the old version when it's time to move on. bdb updates in place, but is slower for reads.