It probably says a lot more about me than I want it to, but I was in a Big 10 marching band and didn't really care about the performances. I mean, it was enjoyable, but I might have done it even if we did all the same long rehearsals of the performances (we memorized the drill and music for ~7 shows a season) and then just went to the games as a pep band.
There's a ton of research and discussion out there; check out the Alt Right Playbook video series on YouTube to get up to speed with everyone else.
I've watched all of the "Alt Right Playbook" videos and don't remember 'research' that went beyond "I've noticed people doing this thing and here is why I think they're doing it".
The actual tangible impact of such behaviors, as far as I'm aware, has not been properly studied.
Perhaps not in more mature organizations, but it's standard practice at every startup I've ever worked for. One place had the dev office VPN'd into production at all times.
You are technically correct: it is a dumb mistake, and sadly not that hard to imagine happening. It's also inexcusable, and I would expect even junior engineers to know better than to log credentials as part of request processing.
I doubt someone wrote "log.print(user.creds)". They probably wrote "log.print(req.args)" in what they felt was an unrelated section of code. Sucks, but it could easily happen.
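For illustration, a minimal Python sketch of that failure mode (the handler and field names are hypothetical): a generic debug line logs all request arguments, and the leak only happens once a login request flows through it.

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger(__name__)

    def handle_request(path, args):
        # Innocent-looking debug line, nowhere near any auth code...
        log.info("handling %s with args=%r", path, args)

    # ...until the login endpoint is hit and args contains credentials:
    handle_request("/login", {"user": "alice", "password": "hunter2"})
    # INFO:__main__:handling /login with args={'user': 'alice', 'password': 'hunter2'}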
I'd be interested in a system or tooling that could identify that something sensitive made it into a log. I think it is practically impossible, but would be interesting.
> I'd be interested in a system or tooling that could identify that something sensitive made it into a log. I think it is practically impossible, but would be interesting.
Have all strings printed to logs go through a common checking routine. That routine simply checks for the presence of certain hard-coded sequences and raises an alarm if any are found.
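A minimal sketch of that checking routine in Python, implemented as a logging handler filter; the sentinel values and the alerting hook are assumptions, not a real API.

    import logging

    # Sentinel values the post-deploy tests will feed through the system;
    # these exact strings must never appear in log output.
    SENTINELS = {"S3ntinelPassw0rd!", "4111111111111111"}

    class SentinelFilter(logging.Filter):
        """Check every log record for hard-coded sentinel sequences."""
        def filter(self, record):
            message = record.getMessage()
            if any(s in message for s in SENTINELS):
                alert_security_team(message)
            return True  # still emit the record; the alarm is the point

    def alert_security_team(message):
        # Stand-in for a real paging/alerting integration (assumption).
        print("ALERT: sentinel value found in log output")

    # Attach to a handler so records from all loggers are checked.
    handler = logging.StreamHandler()
    handler.addFilter(SentinelFilter())
    logging.getLogger().addHandler(handler)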
Whenever a production system is updated, run a test suite. The tests include logging in to a test account whose password is one of the aforementioned hard-coded sequences. If your system accepts payments, the tests can include a test purchase using a test credit card number that is another of those sequences. In general, for each type of sensitive information, have a test that supplies sensitive information of that type, with that information being one of the hard-coded sequences the log checker checks for.
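A sketch of those post-deployment checks, assuming an HTTP API at a hypothetical URL; the sentinel password and the standard Visa test card number are the same values the log checker above scans for.

    import requests  # assumed HTTP client

    BASE_URL = "https://example.com"         # hypothetical deployment
    SENTINEL_PASSWORD = "S3ntinelPassw0rd!"  # known to the log checker
    SENTINEL_CARD = "4111111111111111"       # standard Visa test number

    def test_login_flow():
        # Exercise the login path with the sentinel password; if any
        # code path logs the raw credentials, the checker raises the alarm.
        resp = requests.post(f"{BASE_URL}/login",
                             json={"user": "sentinel-test",
                                   "password": SENTINEL_PASSWORD})
        assert resp.status_code == 200

    def test_purchase_flow():
        # Same idea for payment data.
        resp = requests.post(f"{BASE_URL}/purchase",
                             json={"item": "test-sku",
                                   "card_number": SENTINEL_CARD})
        assert resp.status_code == 200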
This won't stop you from accidentally logging sensitive data in production, but it should catch it during the post-deployment tests so you can fix it quickly.
Interesting, but way harder than it sounds. Very often log systems live as a shared service behind multiple microservices that do real-world work. Propagating sentinel data through each of those systems is a nightmare because:
- those systems often have real-world secondary effects
- they sometimes validate away sentinel data before logging because the data is no good (e.g. a credit card number that isn't real, or a password for a user that doesn't exist), although that can depend on scale and cost
- cross-team coordination of how to handle the sentinel data introduces coupling across teams/services, which is contrary to the goal of microservices.