I generally don't use MySQL, but IIRC it confirms that nothing went wrong saving the data (whereas with Mongo there is no confirmation step, you just have to hope that it was successful), and the data is written to disk. Additionally, I've heard of zero "Oh no, MySQL just lost 50% of my production database!" stories where it wasn't the user's fault, and I've heard enough of them from Mongo to stay away.
Respectfully, this isn't true. Yes, fire-and-forget is the default behavior, but there is a confirmation step you can request. It may be implemented a bit differently from driver to driver, but it is generally called a "safe insert". Plenty of people use it to verify their writes, both against a single database and across master-slave and replica-set deployments.
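To make the distinction concrete, here's a toy model in plain Python. Everything here (FakeServer, insert, safe_insert) is invented for illustration, not the real driver API; in actual drivers the "confirmation step" is a getLastError round trip after the write:

```python
# Toy model of fire-and-forget vs. "safe" (acknowledged) inserts.
# FakeServer and both insert helpers are made up for illustration.

class FakeServer:
    def __init__(self):
        self.data = []
        self.last_error = None

    def receive(self, doc):
        try:
            if not isinstance(doc, dict):
                raise TypeError("documents must be dicts")
            self.data.append(doc)
            self.last_error = None
        except Exception as e:
            self.last_error = str(e)

def insert(server, doc):
    # Fire-and-forget: send the write and return immediately.
    # Any server-side failure is invisible to the client.
    server.receive(doc)

def safe_insert(server, doc):
    # "Safe" insert: send the write, then ask the server whether it
    # reported an error (modeled on the getLastError round trip).
    server.receive(doc)
    if server.last_error is not None:
        raise RuntimeError(server.last_error)

server = FakeServer()
insert(server, "not a dict")      # failure is silently swallowed
safe_insert(server, {"x": 1})     # acknowledged, succeeds
try:
    safe_insert(server, "not a dict")
except RuntimeError as e:
    print("write failed:", e)     # failure is surfaced to the caller
```

The point is just that the acknowledged form turns a silent server-side failure into an error the application can handle.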
Respectfully, this isn't true either. Safe inserts are safer, but not safe: the acknowledgment only confirms that the server accepted the write in memory, so there can still be a failure getting the data onto disk. (My|Postgre)SQL just doesn't have this problem.
With the single-server durability changes introduced in MongoDB 1.8, writes go through a journal, so the data does get written to disk.
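For anyone unfamiliar with the technique: journaling is write-ahead logging. A minimal sketch of the idea (not MongoDB's actual implementation — `journaled_write` and the file layout are invented here) looks like this:

```python
# Minimal write-ahead journaling sketch: record the intent in a journal
# and force it to disk *before* touching the data file, so a crash
# between the two steps can be repaired by replaying the journal.
import json
import os
import tempfile

def journaled_write(journal_path, data_path, record):
    # 1. Append the intent to the journal and fsync it to disk.
    with open(journal_path, "a") as j:
        j.write(json.dumps(record) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Only then apply the write to the data file.
    with open(data_path, "a") as d:
        d.write(json.dumps(record) + "\n")

tmp = tempfile.mkdtemp()
jpath = os.path.join(tmp, "journal.log")
dpath = os.path.join(tmp, "data.db")
journaled_write(jpath, dpath, {"x": 1})
```

The durability guarantee comes entirely from the fsync on the journal: once step 1 returns, the write survives a crash even if step 2 never ran.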
That still doesn't guarantee the write reached the disk unless an fsync was issued, and of course issuing one comes at a significant cost to MongoDB's famously marketed write performance.
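The cost is easy to feel with stdlib Python alone. A plain write() just copies bytes into the OS page cache, while os.fsync blocks until the kernel confirms they reached the disk. This is a rough sketch, not a benchmark, and the numbers will vary wildly by hardware and filesystem:

```python
# Rough illustration of why per-write fsync hurts throughput.
import os
import tempfile
import time

def timed_writes(n, fsync):
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    with os.fdopen(fd, "wb") as f:
        for _ in range(n):
            f.write(b"x" * 128)
            if fsync:
                f.flush()
                os.fsync(f.fileno())  # block until bytes hit the disk
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return elapsed

buffered = timed_writes(200, fsync=False)
durable = timed_writes(200, fsync=True)
print(f"buffered: {buffered:.4f}s, fsynced: {durable:.4f}s")
```

On spinning disks the fsynced run is typically orders of magnitude slower, which is exactly the trade-off between durability and the write throughput being marketed.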
Then in MongoDB's case durability is a function of scale, which leads back to the parent's suggestion that it is a technology optimized for performance.
Personally I think this is a bad foundation for important data. There are probably plenty of use cases where data not being on disk for n seconds (or one minute, in MongoDB's case) is OK.
Even when that is the case, I still think whether you can tolerate that window is the most important question to address when choosing MongoDB as a data store.
The flexible query API, schema-less document format, secondary indices... those are siren songs of rapid development.