> That's a mainstream mantra but not true at all against High Strength Attackers (HSA's). They always have more resources than you do.
That's why obscurity doesn't work. An attacker with many resources can reverse engineer whatever concealment you've concocted, but the concealment is enough to deter the researchers who would have published their results from analyzing your system. So the HSA ends up knowing about the flaw and can take advantage of it, but you don't know it exists, so you can't counter it.
Nonsense. The attacker must expend significant effort to identify both the product-specific and the user-specific obfuscations. That by itself does two things: (a) it reduces the number of attackers who can afford to attack you; (b) it may force individual, per-target attacks instead of one-size-fits-all ones. On top of this, there's a range of monitoring and tamper-resistance measures that can be employed.
So, the combination of such obfuscations and strong, proven methods stops most attackers outright, helps catch some, and at least slows or limits the rest.
Compare that to a situation where every detail is known and the TCB (especially hardware and kernel) is the same for all users. Find one flaw easily, compromise any of them. That's your model. See the NSA TAO catalog and any reporting of mass hacks to see where that led. Your method simply doesn't work and is defeated even by average attackers, whereas mine has held off many in practice, with the few failures due to over-privileged, malicious insiders. Learn from the evidence. Apply strong methods, obfuscation, diversity, and detection in combination, or High Strength Attackers will own you without breaking a sweat.
Your argument is upside down. Unraveling obfuscation costs time and money, which means it may be able to deter amateurs (though the history of DRM implies it can't even do that). State-level attackers have nearly unlimited resources. You can't defeat them by costing them money because their money comes out of your paycheck. The only way to defend against them, if you have the hubris to think you're even capable of it, is to find and remove the vulnerabilities so there is nothing for them to exploit. Which makes obfuscation counterproductive because it makes things harder for whitehats who would otherwise help you.
And diversity and obscurity are separate things so the diversity argument is a non-sequitur.
Ok, let's put your method to the test with a simple use case. There are two protocols for secure messaging:
1. The popular one that's been battle-tested, is written in C, and uses one algorithm in a specific way.
2. My method, which starts with that protocol, applies a technique for protecting the C code, uses 2-3 of the AES-competition candidates, uses a different initial counter value, and adds a port-knocking scheme. Each of these is unknown and randomized on a per-user-group basis.
Suppose you have an encryption or protocol flaw to exploit. Your goal is to intercept these users' communications. You do not have an attack on their endpoints. Which of (1) or (2) is harder? Your argument already fails at this point. Let's continue, though.
You first have to figure out the port-knocking scheme. That means you must identify which parts of the packet headers or data are used for it, work out how it's done, and then attack how it's done. This is already a ridiculously hard problem, evidenced by the fact that nobody has ever bypassed it when I used it: attackers always went straight for endpoints or social-engineering attempts. Using a deniable strategy like SILENTKNOCK can make it provably impossible for them to even know it's in use.
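To make that concrete, here is a minimal sketch in Python of an authenticated knock check. It is purely illustrative and not the SILENTKNOCK design (which hides its authenticator inside TCP header fields); the pre-shared key, time window, and truncation length are assumptions for the example.

```python
import hmac, hashlib, time

SHARED_KEY = b"per-user-group secret"   # hypothetical provisioning value
WINDOW = 30                             # seconds per time slot (assumption)

def make_knock(key, now=None):
    """Client side: authenticator derived from the key and the current time slot."""
    slot = int((now or time.time()) // WINDOW)
    return hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()[:16]

def knock_is_valid(key, knock, now=None):
    """Server side: accept a knock for the current or previous slot; the caller
    would then whitelist the source IP for a real connection."""
    t = now or time.time()
    for slot in (int(t // WINDOW), int(t // WINDOW) - 1):
        expected = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()[:16]
        if hmac.compare_digest(expected, knock):
            return True
    return False

# A valid knock verifies; random bytes do not.
assert knock_is_valid(SHARED_KEY, make_knock(SHARED_KEY))
assert not knock_is_valid(SHARED_KEY, b"\x00" * 16)
```

To an observer without the group key, a valid knock looks like random bytes, which is the property the deniability argument leans on.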
Next are the ciphers. They don't know which one is in use. The non-security-critical values fed into them are randomized in such a way that the attacker might have to try more possibilities than exist in the universe just to start the attack. A weakness in any one algorithm won't help them. Unlike most software people, I also address covert channels: no leaks to help them. What are the odds your attack will work on my users?
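As a rough illustration of what per-user-group randomization might look like, here is a sketch that derives a group's cipher choice and initial counter from a provisioning secret. The cipher list, derivation label, and parameter sizes are assumptions for the example, not a vetted design.

```python
import hashlib

FINALISTS = ["AES", "Serpent", "Twofish"]   # AES-competition finalists (example set)

def group_parameters(group_secret: bytes) -> dict:
    """Deterministically map a group's secret to its obfuscated parameters."""
    digest = hashlib.sha256(b"param-derivation|" + group_secret).digest()
    return {
        # Which cipher this group uses; outsiders don't know which.
        "cipher": FINALISTS[digest[0] % len(FINALISTS)],
        # Randomized initial counter for CTR mode: not relied on for secrecy,
        # but different per group, so one canned attack doesn't fit all.
        "initial_counter": int.from_bytes(digest[1:17], "big"),
    }

print(group_parameters(b"group-A provisioning secret"))
print(group_parameters(b"group-B provisioning secret"))
```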
Next is the protocol engine. This is where they have a solid chance of screwing me up, because the mainstream loves dangerous languages, OSes, and ISAs. However, there are dozens of ways to obfuscate or immunize C code against the most likely attacks, and safe systems languages I can reimplement it in while checking the generated assembler. Further, if I make it static and an FSM, I can use tools like Astree for C or SPARK Ada to prove it free of errors. Without the binary and with limited online attacks, these guys would have to be geniuses to find an attack on it. And that's after they beat the port-knocking scheme that was designed similarly...
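For readers unfamiliar with the "static FSM" idea, here is a toy sketch of the structure being described: every legal (state, event) pair is enumerated up front and everything else is rejected. The states and events are hypothetical, and actual verification would be done with tools like Astree or SPARK rather than anything like this.

```python
# Every legal (state, event) transition is listed explicitly; unexpected input
# drops back to a safe state instead of reaching undefined behavior.
TRANSITIONS = {
    ("IDLE",        "hello"):   "HANDSHAKE",
    ("HANDSHAKE",   "key_ok"):  "ESTABLISHED",
    ("HANDSHAKE",   "key_bad"): "IDLE",
    ("ESTABLISHED", "data"):    "ESTABLISHED",
    ("ESTABLISHED", "close"):   "IDLE",
}

def step(state: str, event: str) -> str:
    """Advance the machine; unknown input resets the session."""
    return TRANSITIONS.get((state, event), "IDLE")

state = "IDLE"
for event in ["hello", "key_ok", "data", "close"]:
    state = step(state, event)
    print(event, "->", state)
```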
The use of provably strong mechanisms in layers is in NSA's own evaluation criteria as the kind of thing that stops nation-states. It stopped NSA's own red teams in the past during A1/EAL6/EAL7 evaluations, and they depend on it for the most critical stuff. Applying sound principles plus measures that are easy for the defender but exponentially harder for the attacker builds a fence so high that even nation-states have trouble jumping over it. They're forced to do one of several things: use their very best attacks, which might be burned if detected; get physical, increasing the odds of detection; or ignore you and attack someone else for reasons of cost-effectiveness and keeping their best stuff stealthy. From what I see, they usually do the latter, even NSA per the Snowden leaks.
Of course, I'm sure you and others on your side will continue handing them your source, protocols, exact configurations, and so on to enable them to smash it. I'm sure you'll use the reference implementation without any changes so one attack affects you along with hundreds of thousands to millions of other users.
However, anyone who wants to make the enemy work for their system should take my approach, as it provably increases their difficulty so much that almost all of them will leave you alone or try other attack vectors. This strategy should be applied from the ISA to the OS to the client and server software. And if you have holistic security, you'll prevent or eventually detect most of their other attacks, too. Worst case, one or two of the elite groups get in while you're still safe from the others (especially the most damaging ones). That's still better than the "give it all to all of them" approach to security you're endorsing.
The entirety of your argument is that it's a good idea to use code or protocol secrecy as a type of secret password. All the rest is a list of non-sequiturs; you can use port knocking or Ada and still publish the code -- the SILENTKNOCK code is public and port knocking is just a mechanism for password authentication. But this kind of security is already provided by shared secret authentication methods that are stronger and better designed than ad hoc protocol secrecy.
The most common (and quite valid) critique of secret custom protocols is that each change you've made from the well-tested common version is an opportunity for you to have made an exploitable mistake. In the event the attacker does learn the custom protocol, that is clearly a liability. But treating the protocol itself as a secret is worse than that. If one client using a secret password is compromised then you have to change the password, but if one client using a secret protocol is compromised then you have to change the protocol.
Even if no client is ever compromised, the general form of your argument is X + Y > X: TLS + secret alterations is more secure than TLS alone. But it's a false dichotomy. A secret protocol is an unnecessary liability compared to other, better security mechanisms. TLS + independent Secure Remote Password is more secure than TLS + secret alterations. And the list of sensible things you can layer together is long: if you're sufficiently paranoid you can use TLS over ssh over IPSec, etc. The point at which you run out of good published security layers is so far past the point of practicality that there is no reason to ever suffer the liabilities of protocol secrecy.
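As a trivial illustration of layering published mechanisms rather than secret ones, here is a sketch of an ordinary TLS connection made through a local port assumed to already be forwarded over an SSH tunnel (e.g. ssh -L 8443:example.com:443). The host name and port are assumptions for the example; the point is that each layer is standard and independently authenticated, so a failure of one doesn't defeat the other.

```python
import socket, ssl

context = ssl.create_default_context()   # ordinary, published TLS configuration

# The SSH tunnel terminates locally, but TLS still validates the real server's
# certificate, so a flaw or misconfiguration in one layer doesn't break the other.
with socket.create_connection(("127.0.0.1", 8443)) as tcp:
    with context.wrap_socket(tcp, server_hostname="example.com") as tls:
        print("negotiated", tls.version(), "inside the SSH tunnel")
```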
"The entirety of your argument is that it's a good idea to use code or protocol secrecy as a type of secret password."
You're getting close to understanding but you're still not there. Obfuscation is the use of secrecy in code, protocols, configuration, etc. to increase the knowledge or access required for an attack. What you wrote is partly there, but some secrets (e.g., a memory-safety scheme) require more than disclosure to break. Also, what I've always called Security via Diversity claims that everyone relying on a few protocols or implementations means each attack on them can automatically hit huge numbers of users. Therefore, each user should be running something different in a way that causes the attacker further work or reduces the odds of sharing a vulnerability. That difference can be open or obfuscated.
"All the rest is a list of non-sequiturs"
That was then followed by claims showing that this makes mass attacks nearly impossible, targeted attacks extremely difficult, and a custom attack required per target. Even getting started on an attack practically requires that they've already succeeded in other ones. You dismiss this as a non-sequitur, which makes no sense. I've substantiated in very specific detail how my method provides great protection versus totally open, vanilla, standardized-for-all methods. To avoid mere security by obscurity, my methods still utilize and compose the best of openly vetted methods, following their authors' guidelines to avoid loss of security properties.
I also noted this has worked in practice: per our monitoring, attacks on significant vulnerabilities simply didn't work, with our systems merely crashing or raising exceptions. Success via obfuscation + proven methods was reported by many others, including the famed Grimes [1], who opposes "security by obscurity" in most of his writings. Despite arguing against me for a decade plus, computer science has started coming around to the idea, with many techniques published under DARPA funding, etc., under the banner of "moving target" defense that try to make each system different, some with mathematical arguments about the security they provide. Most are obfuscations at the compiler, protocol, or processor level. (Sound familiar?) That field was largely the result of people doing what your side recommends and then being defeated with ease by attackers holding hundreds to thousands of 0-days and fire-and-forget, kit-based attacks.
So, the evidence is on my side. In practice, obfuscating otherwise proven tech in good configurations greatly benefits security, even against pros. I gave clear arguments and examples showing that it makes attackers' jobs more difficult, requires them to have inside access, and forces customized attacks instead of mass fire-and-forget kits. Your counter, on the other hand, shows little understanding of what the field has accomplished on the defensive side or of the economics of malware development on theirs. Plus, it equates all obfuscation with custom work by the least competent people on the hardest parts of software. You've only supported keeping amateurs out of obfuscation decisions unless they're following cookbooks for easy things that are hard to screw up (e.g., Grimes-style obfuscations). Additionally, you follow up with the ludicrous idea that people unable to use a few installers/scripts (all my approach requires) can safely compose and configure arbitrary, complex protocols that such people regularly get wrong even in single-protocol deployments. What lol...
So, I guess this thread has to be over until you refute the specific claims that have held up in both theory and practice when done by pros. Further, you might want to test your theory by switching to Windows, OpenSSL, Firefox, etc., because these have had the most review and "pen-tests" by malware authors. Plus, publish all your config and I.P. information online in security and black-hat forums for "many eyes" checks. That should make you extra safe if your theory is correct. Mine would imply using a Linux/BSD desktop with strong sandboxing, non-Intel hardware where possible, LibreSSL, and automatic transformation of client programs for memory/pointer safety. That's been fine, even in pentests by Linux/BSD specialists. Windows, OpenSSL, Firefox, etc.? Not so much...
Good luck with your approach of open, widely attacked software in a standard configuration that you publish online. You're going to need it. :)
> some secrets (e.g., a memory-safety scheme) require more than disclosure to break
And to that extent it isn't actually a secret. If the secrecy is providing you with anything, then it comes at the cost of having a unique design, and the benefit is lost by the compromise of any single device. That isn't cost effective.
Your Microsoft link has an excellent example of the flawed thinking that leads to the erroneous conclusion that obfuscation is productive:
> Renaming the Administrator account can only improve security. It certainly can't hurt it.
The trouble is that it can. Renaming the Administrator account not only breaks poorly written malware, it also breaks poorly written legitimate software. Then the system administrator has to spend time and resources fixing a manufactured problem, time and resources that could have been spent on other measures that achieve a bigger security improvement.
And you keep talking about things like diversity and sandboxing as if you can't use these things without hiding their design, but you can. Obfuscation of design is essentially useless because it has similar costs but a worse failure mode than other ways to improve security -- including the ones you keep talking about. Or layering independent systems.
You claim this layering is "ludicrous", but can you name a single major company that doesn't separately use all of those things already? Layering, for example, IPSec and TLS is the same amount of work as configuring them separately. The idea is that they are independent of each other, so a vulnerability or misconfiguration in one doesn't defeat the other.
Every security measure comes at a cost: you may need to configure more things, and so on. Which is why wasting resources on high-cost, low-benefit measures like protocol secrecy harms actual security.