MaxMind - good. Not that there are many choices here, though. :)
Instead of "waking up to sync logs", consider using something like NSQ to emit events as they happen. You can scale the number of servers/processes generating messages and the number of workers consuming those messages (and committing them to your database) very easily.
You could also replace writing the transaction log with an NSQ event. That lets you avoid having to write and scale the log-shipping machinery yourself; something roughly like the sketch below.
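For the producing side, here's a rough Go sketch using github.com/nsqio/go-nsq. The nsqd address, topic name, and event fields are placeholders I made up, not anything from your setup:

    package main

    import (
        "encoding/json"
        "log"
        "time"

        nsq "github.com/nsqio/go-nsq"
    )

    // AdEvent is a stand-in for whatever you currently append to the transaction log.
    type AdEvent struct {
        AdID      string    `json:"ad_id"`
        UserID    string    `json:"user_id"`
        EventType string    `json:"event_type"` // e.g. "impression", "click"
        At        time.Time `json:"at"`
    }

    func main() {
        // Connect to a local nsqd (placeholder address).
        producer, err := nsq.NewProducer("127.0.0.1:4150", nsq.NewConfig())
        if err != nil {
            log.Fatal(err)
        }
        defer producer.Stop()

        // Instead of appending to a local log and shipping it later, publish the
        // event as it happens; downstream workers consume the topic and commit
        // to the database at their own pace.
        ev := AdEvent{AdID: "ad-123", UserID: "u-456", EventType: "impression", At: time.Now().UTC()}
        body, err := json.Marshal(ev)
        if err != nil {
            log.Fatal(err)
        }
        if err := producer.Publish("ad_events", body); err != nil {
            log.Fatal(err)
        }
    }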
We precalculated which ads a given user was eligible for, and when a bid request came in, a separate process was contacted to return the info for the ad to show. We never had to do anything funky to handle geotargeting exclusion at scale.
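Purely hypothetical, but the lookup process can be as boring as an HTTP handler over a precomputed map, swapped in atomically whenever a fresh set is built:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
        "sync/atomic"
    )

    // eligible holds the current precomputed user -> eligible ad IDs mapping.
    // It is replaced wholesale on each rebuild, so handlers never block on a rebuild.
    var eligible atomic.Value // stores map[string][]string

    func main() {
        eligible.Store(map[string][]string{"u-456": {"ad-123", "ad-789"}})

        http.HandleFunc("/bid", func(w http.ResponseWriter, r *http.Request) {
            userID := r.URL.Query().Get("user")
            ads := eligible.Load().(map[string][]string)[userID]
            if len(ads) == 0 {
                // No eligible ads for this user; nothing to filter per-request.
                w.WriteHeader(http.StatusNoContent)
                return
            }
            json.NewEncoder(w).Encode(ads)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }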
Instead of having your adserver connect to a database, have a separate process generate a working set (as JSON or whatever you fancy), compress it, and ship it to the adserver periodically. The adserver can just do a straight load from the file every minute or whatever interval you'd like, roughly like the sketch below. If the file's mtime is too old, raise an alert and stop serving ads if necessary. Keeping things separate and simple makes scaling easier. Our working sets averaged about 2 GB uncompressed and could be loaded in a few seconds (C++ + JSON, later Go + JSON).
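A rough sketch of that reload loop in Go; the file path, staleness threshold, and working-set structure are all made up for illustration:

    package main

    import (
        "compress/gzip"
        "encoding/json"
        "fmt"
        "log"
        "os"
        "time"
    )

    const (
        workingSetPath = "/var/adserver/workingset.json.gz" // wherever the builder ships it
        maxAge         = 10 * time.Minute                   // older than this: fail closed
    )

    // WorkingSet is whatever structure your builder precomputes.
    type WorkingSet struct {
        Ads map[string][]string `json:"ads"`
    }

    func loadWorkingSet() (*WorkingSet, error) {
        info, err := os.Stat(workingSetPath)
        if err != nil {
            return nil, err
        }
        // If the builder stopped shipping fresh sets, refuse to load a stale one.
        if time.Since(info.ModTime()) > maxAge {
            return nil, fmt.Errorf("working set is stale (mtime %s)", info.ModTime())
        }
        f, err := os.Open(workingSetPath)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        gz, err := gzip.NewReader(f)
        if err != nil {
            return nil, err
        }
        defer gz.Close()
        var ws WorkingSet
        if err := json.NewDecoder(gz).Decode(&ws); err != nil {
            return nil, err
        }
        return &ws, nil
    }

    func main() {
        ticker := time.NewTicker(time.Minute)
        defer ticker.Stop()
        for range ticker.C {
            ws, err := loadWorkingSet()
            if err != nil {
                // In practice: page someone and stop serving ads until it recovers.
                log.Printf("working set stale or unreadable: %v; stop serving ads", err)
                continue
            }
            log.Printf("loaded working set for %d users", len(ws.Ads))
            // Swap the new set into the serving path here.
        }
    }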
Seems like it was a fun project and I hope you learned a lot!