It's usually true that secret designs are insecure. I'll give you that. I only trust designs done by pros with review, especially with high-assurance methods. Most things aren't like that, so it's easy to see how the interpretations get conflated so often.
As far as help or review goes, there's a false dilemma that goes around: either something is so open I'd put it in a Hacker News comment, or it's totally closed. In reality, there are many options in between, and the review matters more than the openness. I tried to address it in this write-up:
https://www.schneier.com/blog/archives/2014/05/friday_squid_...
The goal was to get people to consider proprietary, open-source hybrid models more, to ensure a steady flow of cash to maintain and audit security-critical software. Dual-licensing at the least. That might prevent another OpenSSL debacle.
As far as "actively developed" goes, that's a good point. The trick there is to have guidelines for how to use the security-critical functionality and stick to them. The common, error-prone stuff is done in a way that defaults to security. An example is how Ada or Rust approach memory and concurrency safety versus C. Not a guarantee by itself, but it raises the bar considerably. The parts that are obfuscated carry little risk to the security-critical aspects; they're more about reducing the likelihood that shellcode will do its job. There are even methods in academia to do this automatically at the compiler level.
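To make "defaults to security" concrete, here's a minimal sketch in Rust (one of the languages named above); the length-prefixed field format is invented for illustration. The equivalent C with a lying length byte is a classic overread; here the bounds check is the default and failure is a None, not a memory disclosure:

    // Parse a length-prefixed field out of an untrusted buffer.
    fn read_field(buf: &[u8]) -> Option<&[u8]> {
        // First byte declares the field's length.
        let len = *buf.first()? as usize;
        // get() refuses out-of-range slices instead of reading past the end.
        buf.get(1..1 + len)
    }

    fn main() {
        let wire = [3u8, b'a', b'b', b'c'];
        assert_eq!(read_field(&wire), Some(&wire[1..4]));
        // A lying length byte fails safely rather than overreading.
        assert_eq!(read_field(&[200u8, 1, 2]), None);
    }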
So, this is what I'm saying. You should definitely have pros in-house or (if possible) external ones review the design for flaws. Do Correct by Construction, make the design as boring as possible, and obfuscate the hell out of any aspect you can without introducing risk.
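On the Correct by Construction point, one everyday version of it is making invalid states unrepresentable, so validation happens exactly once at the boundary. A minimal sketch, again in Rust; the Hostname type and its rules are invented for illustration:

    // A hostname that, by construction, has already passed validation.
    struct Hostname(String);

    impl Hostname {
        fn parse(raw: &str) -> Result<Hostname, &'static str> {
            if raw.is_empty() || raw.len() > 253 {
                return Err("bad length");
            }
            if !raw.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '.') {
                return Err("bad character");
            }
            Ok(Hostname(raw.to_string()))
        }
    }

    // Code deeper in the system can't be handed an unchecked string.
    fn connect(host: &Hostname) {
        println!("connecting to {}", host.0);
    }

    fn main() {
        match Hostname::parse("example.com") {
            Ok(h) => connect(&h),
            Err(e) => eprintln!("rejected: {e}"),
        }
    }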
An example from my high-security consulting work was to put guards in front of any networked service. The guard blocked attacks at the layers below the application: DMA, TCP/IP, whatever. The messages it passed along were simple, easy to parse, and landed directly in the application. The application itself had automatic checking of buffers, etc. My biggest trick, though, was to use the guard to fake a certain platform/ISA (Windows or Linux on x86) while the app actually ran on a different OS and ISA (e.g., POWER, SPARC, Alpha, or MIPS). Attackers hit it constantly with stuff too clever for me to even want to waste time understanding. It never executed, because they couldn't see what they needed to see. Add all the usual security hardening, patches, monitoring, and whatnot on top. Strong security practices + effective obfuscation = a TLA's remote attacks stood no chance. :)
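To sketch the guard pattern (not my actual deployment; the wire format, the single whitelisted message type, and the port numbers are all invented for illustration): only a trivially parseable, whitelisted message ever reaches the app, and everything else is dropped at the guard.

    use std::io::{Read, Write};
    use std::net::{TcpListener, TcpStream};

    // Invented wire format: 1 type byte, 1 length byte, then `length`
    // bytes of ASCII payload. Nothing else gets through the guard.
    const MSG_QUERY: u8 = 0x01;
    const MAX_LEN: usize = 64;

    fn validate(header: &[u8; 2], payload: &[u8]) -> bool {
        header[0] == MSG_QUERY                      // whitelisted type only
            && payload.len() == header[1] as usize  // declared length matches
            && payload.len() <= MAX_LEN
            && payload.iter().all(u8::is_ascii)     // no binary junk
    }

    fn main() -> std::io::Result<()> {
        // Guard faces the network; the real service only sees vetted bytes.
        let listener = TcpListener::bind("0.0.0.0:8080")?;
        for stream in listener.incoming() {
            let mut outside = stream?;
            let mut header = [0u8; 2];
            if outside.read_exact(&mut header).is_err() {
                continue; // short read: drop it
            }
            let mut payload = vec![0u8; header[1] as usize];
            if outside.read_exact(&mut payload).is_err() || !validate(&header, &payload) {
                continue; // malformed or non-whitelisted: never reaches the app
            }
            // Forward the already-validated message to the inside service.
            let mut inside = TcpStream::connect("127.0.0.1:9000")?;
            inside.write_all(&header)?;
            inside.write_all(&payload)?;
        }
        Ok(())
    }

The simple format is the point: the app-side parser is a few lines with nothing clever in it, which is what lets the app behind it run on whatever OS/ISA you like.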