Crypto wallet security as seen by security engineers (cossacklabs.com)
175 points by rchaudhary on Dec 15, 2021 | hide | past | favorite | 102 comments


A few years ago I played around with MetaMask, a popular browser extension that lets you use ether and ERC20 tokens on websites.

Pretty soon afterwards I realised my browser just felt slow, so I looked into what was going on. Turns out MetaMask pulls in web3.js, which alone is 2.3mb of javascript. Simply parsing that much code in each Chrome tab was taking hundreds of milliseconds per page load. Web3.js is ineptly crafted: some of its dependencies get bundled multiple times at slightly different point releases, for no discernible reason. I opened an issue about this years ago, and despite lots of attention and promises it still hasn't been fixed[1].

The bigger issue with MetaMask is that (at least at the time) it allowed any site you visit to silently query your crypto wallet address. Talk about a privacy invasion. Every website I visit gets an ID that I keep across all my devices, through which it can find out exactly how much money I have and in what cryptocurrencies, and it can use that to track my purchases. Forever. I raised this with the developers and they said "oh yeah we know about that and we'll fix it later". (Edit: sounds like this security problem was eventually fixed months or years later.)

What on earth? People want to build the future of the web on this junk pile? Holy cow this is a house of cards.

[1] https://github.com/ChainSafe/web3.js/issues/1178


If I’m not mistaken, MetaMask fixed this via prompts that ask you to allow a website to view your addresses. At least, I see those anytime I need to connect to a site now.
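
For reference, roughly what that opt-in flow looks like from a site's perspective: a sketch of the EIP-1102/EIP-1193 provider API (the declaration is just to keep TypeScript happy without the DOM lib):

    // The site only learns your addresses after you approve MetaMask's prompt.
    declare const window: {
      ethereum: { request(args: { method: string }): Promise<string[]> };
    };

    const accounts = await window.ethereum.request({ method: "eth_requestAccounts" });
    console.log("user granted access to:", accounts); // addresses, post-approval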


Such functionality would still be very useful for criminals: even if the user says no, the mere existence of the prompt tells them which machines have a wallet, so they can focus their efforts there. That way your coin stealer never lands on exploited machines where it's useless, and it'll take longer for AV to pick it up.


It'd be a pretty obvious red flag when a site with no 'web3' functionality opens the prompt.


That's not the worst part of it, to be honest. One would guess that to deploy smart contracts you just download the Ethereum client, write a smart contract, and deploy. Technically that's possible, but in reality most developers deploy through a centralized service called Infura [1].

I hold some ETH, DOGE and BTC, but as these platforms mature it seems like governance becomes more centralized, developers use more and more centralized systems, and government will probably regulate it to the point where it becomes useless. As it stands right now, a DAO can't realistically be built on Ethereum.

[1] https://infura.io/


My company deploys our smart contracts through our own nodes. Infura is really just an Ethereum node as a service.


There is practically no attack vector in using Infura.

Infura can't fake your cryptographic signature. Worst case, they censor txs from your address, at which point you can run your own node.


I'm not even talking about an attack vector. I'm talking about a centralized system that is recommended by other developers, while Ethereum keeps making claims about moving away from centralized systems.


> The bigger issue with metamask is that (at least at the time) it allowed any site you visit to silently query your crypto wallet address.

Maybe things were different then, but I believe at present, sites you visit can't get your wallet address unless you've connected MetaMask to that site.


> The bigger issue with metamask is that (at least at the time) it allowed any site you visit to silently query your crypto wallet address.

Yeah, that's not correct, and I don't think it was ever that way. However, websites can indeed detect whether you have MetaMask installed, which is still a security concern. Chrome has mitigated that by adding the option to only run extensions when you click on them.
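
For instance, a minimal detection sketch (isMetaMask is a real flag on the injected provider; the cast is just for illustration):

    // Any page can fingerprint the injected provider object, even before
    // the user has granted account access.
    const eth = (window as any).ethereum;
    const hasMetaMask = typeof eth !== "undefined" && eth.isMetaMask === true;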


Both of the problems you mention have been fixed.


No they haven't. Web3 hasn't gotten smaller. In the last year it's gotten even worse. It's now nearly 3x as big as it was 4 years ago, when I first complained about it being slow and bloated.

I re-ran some tests to check after reading your comment, and compared to last year web3 has regressed in every measure I took:

- node_modules size: 59mb (was 53mb last year)

- Non-minified bundle size: 3.2mb (was 2.4mb last year)

- Minified bundle size: 1.85mb (was 1.15mb last year)

- Number of copies of bn.js included in the bundle: 8 (was 6)

The minified+gzipped size has increased by 88% in the last 12 months alone.

Full details in the issue:

https://github.com/ChainSafe/web3.js/issues/1178#issuecommen...
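
(For anyone who wants to reproduce the duplicate count, a rough sketch; it assumes npm 7+ for --all and a local web3 install, and counting resolved paths only approximates what ends up in a bundle:)

    // count-bn.ts - hypothetical helper to count bn.js copies in the tree
    import { execSync } from "node:child_process";

    const tree = execSync("npm ls bn.js --all --parseable || true").toString();
    const copies = tree.split("\n").filter((p) => p.includes("bn.js")).length;
    console.log(`bn.js appears ${copies} times in the dependency tree`);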


However, MetaMask no longer injects web3.js.


> Crypto wallets often miss crucial controls around password and authentication flow: password policy, rotation

Oof. NIST 101: https://pages.nist.gov/800-63-FAQ/#q-b05

Hard for me to take the rest of this article seriously when it misses something so basic.


Scheduled rotation is deprecated, but it's recommended practice to rotate passwords if your account has been compromised. Some wallets apparently don't support that, which is quite worrisome if I'm supposed to store money or money equivalents in the wallet.


Well, you can rotate any wallet, if you're willing to pay: just create a new wallet, pay the transaction fee and transfer your money over to the new address.

It's kinda like with full disk encryption: you can change the key-encrypting-key in seconds, but rotating the master key would require re-encrypting everything, which takes a long time.
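
To make that concrete, "rotating" an Ethereum wallet is just a sweep transaction. A minimal sketch with ethers v5 (RPC_URL and the gas handling are illustrative assumptions):

    import { ethers } from "ethers";

    async function rotateWallet(oldKey: string) {
      const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL);
      const oldWallet = new ethers.Wallet(oldKey, provider);
      const fresh = ethers.Wallet.createRandom(); // new key pair = "rotated" wallet

      const balance = await oldWallet.getBalance();
      const gasPrice = await provider.getGasPrice();
      const gasLimit = ethers.BigNumber.from(21000); // plain ETH transfer
      const fee = gasPrice.mul(gasLimit);

      await oldWallet.sendTransaction({
        to: fresh.address,
        value: balance.sub(fee), // sweep everything minus the fee
        gasLimit,
        gasPrice,
      });
      return fresh; // back up fresh.mnemonic offline
    }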


Whadya mean my operating system is not multi-user? If another user wants to use it, they just buy another machine and use it on that.


Duh, if you need a wallet, go get a new one. I'm not giving you mine /s


Agreed. It's sad how many security engineers still push outdated practices when there is significant research indicating _why_ they actually _reduce_ security.


Why was rotation ever a recommendation? Did it used to make sense?


There isn't a clear answer to that, but here's what Gene Spafford wrote in 2006:

> So where did the “change passwords once a month” dictum come from? Back in the days when people were using mainframes without networking, the biggest uncontrolled authentication concern was cracking. Resources, however, were limited. As best as I can find, some DoD contractors did some back-of-the-envelope calculation about how long it would take to run through all the possible passwords using their mainframe, and the result was several months. So, they (somewhat reasonably) set a password change period of 1 month as a means to defeat systematic cracking attempts. This was then enshrined in policy, which got published, and largely accepted by others over the years. As time went on, auditors began to look for this and ended up building it into their “best practice” that they expected. It also got written into several lists of security recommendations.

> This is DESPITE the fact that any reasonable analysis shows that a monthly password change has little or no end impact on improving security! It is a “best practice” based on experience 30 years ago with non-networked mainframes in a DoD environment—hardly a match for today’s systems, especially in academia!

https://www.cerias.purdue.edu/site/blog/post/password-change...

I've heard variations on this idea, all stemming from that same kind of scenario: a DoD facility where access was limited and, for example, a spy who cracked or shoulder-surfed a password might have to wait some period of time before using it. None of that makes much sense in our modern security landscape. I haven't seen anything definitive about the origins, but it's very hard to find an actual security expert who thinks it's a good idea (as opposed to a compliance process enforcement person who might have had this trained into them). These days I'd really be focused on how you could make WebAuthn mandatory.
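
The back-of-the-envelope math here is simple: keyspace divided by guess rate. All the figures below are illustrative assumptions, not the historical ones:

    const alphabet = 36;              // lowercase letters + digits
    const length = 8;
    const guessesPerSecond = 100_000; // generous for the era
    const seconds = alphabet ** length / guessesPerSecond;
    console.log(`exhaustive search: ~${(seconds / 86_400 / 30).toFixed(0)} months`);
    // ~11 months with these numbers; hence "change passwords monthly"
    // to stay ahead of an exhaustive search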


>Back in the days when people were using mainframes without networking, the biggest uncontrolled authentication concern was cracking.

When we first got our terminals (shared terminals, at first), there was absolutely no guidance on passwords. Password security wasn't part of the consciousness, at least in my corporate experience in the '80s, especially with all the data being on tapes. A security utopia, briefly.


Password rotation still makes technical sense today. The benefit is that it limits the utility of stolen credentials.

That’s basically all an MFA token is: a rapidly rotating second password. In fact, the widespread availability of MFA options is one reason memorized passwords don’t need to rotate anymore. Just implement MFA instead.
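
A TOTP code really is just that. A minimal sketch of RFC 6238 (in practice the secret is exchanged base32-encoded; here it's a raw Buffer):

    import { createHmac } from "node:crypto";

    // A "password" that rotates every 30 seconds.
    function totp(secret: Buffer, step = 30, digits = 6): string {
      const counter = Buffer.alloc(8);
      counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)));
      const mac = createHmac("sha1", secret).update(counter).digest();
      const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
      const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return code.toString().padStart(digits, "0");
    }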

Another reason is that forced rotation of memorized passwords gives users an incentive to create passwords that are simpler, and therefore easier to steal in the first place. So the technical advantage was nullified by a human factors disadvantage.


Security models from the dawn of computing operated on assumptions that no longer hold true. Passwords were stored in plaintext in /etc/passwd, and later as crypt hashes in /etc/shadow. If the /etc/passwd file were stolen, you'd have everyone's password. By forcing passwords to be changed every N days, even if an attacker got a copy of /etc/passwd, those passwords would stop working after N days.


It's not obvious (to me, at least) whether they are discussing rotation of passwords specifically or wallet-related authentication flows more generally.

In Bitcoin wallets, for instance, multi-signature wallets do not allow rotation of the signing keys. If one of your signing keys gets owned, you need to burn the wallet.


Also, in the pages he links, they explicitly warn against password rotation.

>> https://github.com/OWASP/ASVS/blob/master/4.0/en/0x11-V2-Aut...

>> Note: Passwords are not to have a maximum lifetime or be subject to password rotation. Passwords should be checked for being breached, not regularly replaced.


>User is a single point of failure

I’m surprised this blog post didn’t address coercion. People can be forced into revealing a password to unlock their wallet. It’s a point worth considering, and it will continue to happen.

https://www.independent.co.uk/life-style/gadgets-and-tech/ne...

https://www.newsweek.com/bitcoin-millionaire-zaryn-dentzel-b...

https://cointelegraph.com/news/armed-robbers-steal-450k-from...

https://www.cnbc.com/amp/2015/06/05/new-york-city-man-robbed...


All of this is well known in the blockchain community/industry. For the most part, the people developing these products either were security engineers or had access to them, but the product was developed to meet a specification that didn't permit high security, e.g. "it has to run in any web browser and not use a hardware token". Perhaps the missing part here is user education, although presumably most high-value coin balances were going to be held at Coinbase et al., or belong to prudent users who know not to use a browser wallet for their life savings.


Seems like so many ways to mess up, so many points of failure. Traditional banking works because failure is not as unforgiving and permanent as it is with crypto.


"Being open-source is a two-sided sword – attackers could also read details of implementation and find flaws easily."

umm, really?


Absolutely!

So many compromises against IoT devices or network equipment come from inspecting open or recycled firmware.

There are a lot of interesting DEF CON videos about using this method.

edit: To clarify, this isn't to imply that I think closed source would necessarily be better.

I think this quote puts it well. An all-too-common story about security in open source... of four people: Everybody, Somebody, Anybody, and Nobody:

"Anybody could have done it, but Nobody did it. Somebody got angry about that, because it was Everybody's job. Everybody thought Anybody could do it but Nobody realized that Everybody wouldn't do it.

It ended up that Everybody blamed Somebody when Nobody did what Anybody could have done."

Source: https://www.youtube.com/watch?v=OnJ18pyMncE


Tbh, I doubt that it makes a huge difference.

One of the best/easiest ways for someone with experience to find vulnerabilities is to throw automatic analysis tools at it, many of which work on binaries/disassembly too.

Don't forget that in many cases it's not too hard to disassemble binaries.

Sure, having the source can make it easier, but things being open source also means that non-evil actors might look at your code and help you find/fix problems before they become big. I believe that somewhat balances it out (for larger/widely used projects).

Imagine log4j becoming known because of multiple super-widespread attacks happening all over the internet, instead of getting known, then fixed, and then mass attacks happening because updates take so long. The log4j vulnerability is totally discoverable without source code. Idk who found it, but I wouldn't be surprised if someone ran into a non-security bug triggered by it and went, hey, wait a moment. Even if it didn't happen like that, it totally could have. Closed source would not have helped at all; at best it would have delayed it and then made the effects worse due to coordinated widespread attacks before there was a fix/mitigation.


The log4j incident is an excellent example. A fix to a closed-source vuln of a similar nature would've taken months, compared to ~5 days (if we include the 2nd CVE fixed in 2.16) for an open community response to an open source issue.


I don't know how you can say this... a closed source vuln of this severity would be patched almost immediately. See: any vuln of this severity on iOS or Windows.

On the contrary, the log4j incident is an excellent example of how relying on open source for security completely failed. This vuln existed for years, all while being open for security researchers to find. They didn't. Instead, there is evidence that black hats found the vuln first (perhaps because log4j is open source?).


There's no clear winner between open and closed source, but that's just wrong. Vendors have buried, and will continue to bury, vulnerabilities (under threat of CFAA prosecution or civil lawsuit against the hackers who would disclose them) rather than fix them. That's also why Google's Project Zero gives vendors a hard 90-day deadline for patching found vulnerabilities, and they got a lot of flak early on for disclosing.


I literally have not heard a more naive statement on this site, ever. If you work in IT, please educate yourself on this. I'm serious.


I'm a senior security engineer at a FAANG. You're completely oblivious to the state of security today if you think anything in my comment was naive.

Please, let's continue this back-and-forth of condescension. It's really productive. /s


Apologies for the tone, but the content of my point 100% stands. For what it's worth, I have dear friends at multiple FAANGs who I'd die for -- who nonetheless strongly appear to have a wildly overinflated sense of their own cybersec, in a big-picture "missing the Black Swans" sort of way -- if the past is any indicator.

Maybe it's not, maybe you guys have drastically improved things. What makes me strongly doubt it is that we still haven't seen substantial liability or other consequences for bad or negligent actors in this space.


> maybe you guys have drastically improved things.

FAANG security is just like everywhere else: mixed. There are mixed levels of knowledge & experience, and mixed views: even though there's a lot of consensus on high-level tenets and frameworks, e.g. Google's BeyondCorp stuff (along with certain security 101 stuff that's been discussed for the past 138 years), you'll always get engineers who don't like X or Y. Just like anywhere else.

Nothing is a continuum.


I'm not sure if this is a reply to my original poor phrasing, where I mistakenly said 'all' instead of 'so many'. Even that may be a bit generous. We're in niches of niches here.

It seems the places where manual digging occurs most are devices where nobody has even thought or bothered to use such tools.

The vulns people find are often laughable: unauthenticated actions leading to a clever path to higher privileges.


Heavy citation needed here I think.


I didn't write this with anything specific in mind; it's based on a binge of DEF CON videos from at least a year ago - consider that my citation.

Honestly, anyone who invests the slightest amount of effort in researching how vulnerabilities are found could arrive here. I'm surprised there's skepticism.

It often comes down to the would-be attacker downloading a published update, looking around the PHP or whatever within, and finding unprotected or bugged APIs to leverage -- often to write their own modified version of that same update.


Well if I re-read your comment, you technically don't say anything too controversial: "so many" isn't really countable.

But the implication is that compromises were found because those firmwares were open and/or that closed firmwares suffer fewer similar compromises. Which is what I would question.


Yes absolutely.

There’s definitely a bit of a cult of open source. Just because I can look at your code does not inherently, magically make it more secure. Especially in a world where the black market pays better than bug bounty programs.

Not necessarily advocating for closed source approaches, but it’s something to bear in mind.


It's not like this stuff is hard to reverse, but those skills are scarcer and it requires more effort.

My opinion is that you'll get relatively more blackhat "eyes" on your code and fewer whitehat "eyes" if you go closed source.

Personally, I don't know anyone who would reverse a closed-source wallet to report bugs "out of the goodness of their hearts", but I do know a bunch of people who would report bugs in good faith to open-source projects.


Wait a second. The biggest problem with a crypto system that is not open is not that "it may have not passed through a lot of eyes", it's that it may not have passed through EVEN A SINGLE EYE. The closed system could very well be "we xor this with ILUVPUPPIES repeated ad infinitum" and very few people would be the wiser.

An open system at least means _you_ can have a quick look into it. Every other advantage is not guaranteed, but with the amount of pure snake oil out there this one guaranteed advantage is friggin important.
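
For illustration, the "ILUVPUPPIES" scheme is two lines of code, and repeating-key XOR falls to simple frequency analysis; without source access, almost nobody would spot it:

    // Toy example of snake-oil "encryption" a closed wallet could ship.
    function xorCipher(data: Buffer, key: string): Buffer {
      return Buffer.from(data.map((b, i) => b ^ key.charCodeAt(i % key.length)));
    }
    // xorCipher is its own inverse: applying it twice with the same key
    // returns the original plaintext.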


Hopefully (blind coders aside), the code's passed through, at the very least, one set of eyes. GPT-3's not that good at writing code yet!


> Especially in a world where the black market pays better than bug bounty programs.

In crypto/DeFi/croins/crokens/whatever, the bug bounty can be the entire holdings. And there are no legal ramifications or avenues of recourse/restitution.


The whole point of the game is that the tokens belong to whoever got the code to run to transfer them to them, isn't it? Complaining that someone "stole" your tokens is like complaining that someone "stole" your gold in an MMORPG.


> And there are no legal ramifications or manners of recourse/restitution.

I'd appreciate a source on that... Both seem unlikely to me. More difficult perhaps, but impossible you say?


Well, in crypto there does seem to be a somewhat widespread notion of "code is law". If the existing business logic is used to do something undesirable (transfer all the money to another person), then under that axiom set the action would be considered in line with the "law".


"code is law" has yet to be tested in courts. It is highly suspect that actual humans in a jury would all agree this is the case.


We'll have an answer (applicable to the U.S. at least) soon enough: https://www.coindesk.com/tech/2021/10/22/after-stealing-16m-...


Time to write code that indebts the user to you forever, then get FAMAGMA to use it.

Code is law.


Code is law? You must be kidding. It is OK to chat about "code law" among like-minded individuals always on the lookout to crack open the pirate's chest. It is a different story to explain to a judge: "sir, the bank had left the vault open during the fire alarm, I just helped myself to a few $M. It is the bank's fault". Good luck.


If the smart contract is not the contract, then where is the actual contract? I imagine the judge would like to know that. If there's no contract, they'll have a hard time convincing anyone that there was a breach of contract.


This notion is only used in the context of smart contracts and on-chain virtual machine code.

This is in contrast to wallet implementations.


https://rekt.news - plenty of references


A list of compromises or exploits isn't a source for the claim that there aren't legal repercussions or avenues of recourse that could be pursued.


Most hackers can't even be found, since the funds go to Tornado Cash and the owners are clueless, or it's an inside job.

So there are plenty of examples on that site.

What would the recourse be if none of the malicious actors can be found (as in almost all cases)?


Wait, are you saying that there is recourse for reversing blockchain transactions?


See the DAO hack for precedent.


There's been no result in court yet. So what's the recourse at this moment?

Additionally, they were lucky they could find the 16 yr old kid, otherwise they wouldn't have known in which jurisdiction to sue :)

Welcome to the decentralized world!


Sorry, not following. The recourse I was referring to was a fork and modification of the transactions in question.


Okay, part of the subject was "legal repercussions" and I thought it was related to that.


> There’s definitely a bit of a cult of open source. Just because I can look at your code does not inherently, magically make it more secure. Especially in a world where the black market pays better than bug bounty programs.

And open source projects likely pay nothing! I've certainly never heard of a paid bug bounty for an open source project; that doesn't mean they don't exist, just that they're not common.


There definitely are a few; Kubernetes actually has a dedicated one that HackerOne runs. Here are a few examples:

https://www.hackerone.com/internet-bug-bounty

https://huntr.dev/

https://securitylab.github.com/bounties/

https://www.mozilla.org/en-US/security/bug-bounty/


There are some, but for the projects I've looked at, you're almost certainly getting a better payout (monetarily, not karma-wise) by selling the vulnerability to Zerodium et al.


Yes. The security wins of being open source are really only there if the project is being engaged with and reviewed by security-conscious developers, improvements are being merged in, and the new improvements are being distributed. Otherwise it's usually going to lower your security by some degree.


> going to lower your security by some degree.

Not really; the security is the same.

You just make the work for attackers a bit faster, and therefore cheaper.

But IMHO for an experienced attacker it's just a matter of "a bit faster" (their attack comes a few days earlier), not a matter of "being more secure" (their attack doesn't come at all).


Look at the lists of CVEs that come out every month for closed source software. How did they fix them?

You don’t know. I can tell you that my colleagues have found cases where major vendors don’t “fix” a defect - they prevent the public exploit from executing. With open source, you can see exactly what was fixed and how.

You don’t lower or raise security.


> lower

Using the word "lower" here implies you're comparing open source software to something (closed source software), in which case you'd be implying that closed source software is engaged with and reviewed by security conscious developers.

If you think that's the case just because you've heard of a few high-profile open source vulnerabilities, I've got news for you...


How is this controversial?

I mean, we may believe the trade-offs are worth it but this is definitely in the "con" column.


Why this is controversial is extremely obvious to anyone who works in security. The opposite of this tenet is literally the first thing generally taught about security (Kerckhoffs's principle).

It's well known, and pretty uncontroversially accepted in the security community. The general idea is that, while it can be useful to have some "security through obscurity", it's important not to rely on it; in most cases where it is cited as a measure, it is relied on (often heavily). A good phrase I've heard used to elucidate this is that "sunlight is the best disinfectant". For example, a more recent related discussion is Schneier's Law[0], which basically posits that any amateur can make a piece of software that they themselves can't break into. True security comes from sharing & discussing it with alternative perspectives (ideally whitebox - not black/greybox - discussions).

It's telling about the audience that flocks to "crypto(currency)" related posts that the comment is buried in downvotes; I would think the typical HN user would not be downvoting it.

[0] https://www.schneier.com/blog/archives/2011/04/schneiers_law...


> Kerckhoffs's principle.

You are completely missing the point. Risk analysis is a balancing act:

Pros: open source, white hats can easily inspect the source

Cons: open source, black hats can easily inspect the source

You don’t get to just delete the cons you don’t like, even if, like me, you think the pros outweigh the cons.


> if like me, you think the pro out weighs the con.

My point is simply that, in my opinion, you're wrong to think this, and that opinion is shared by most of the security community.

There is ample evidence that the cons FAR outweigh the pros in this particular comparison.

---

Another way to visualize your comparison would be:

- Closed source:

-- Pros: 1 white hat can easily inspect the source

-- Cons: 1000 hackers can blackbox-attack the application

- Open source:

-- Pros: 100 white hats can easily fix the source

-- Cons: 1000 black hats can whitebox-attack the application

You're arguing that 99 white hats don't trump black-vs-white.

---

Even if I play devil's semi-advocate here and consider that the pros balance the cons, the statement in the article is still wildly out of place in that context.


>you're wrong to think this

Security engineer here. No, they aren't. I'd say they are being much more comprehensive in their security analysis by including the cons in their calculation rather than dismissing them and assuming that the pros outweigh them.

>opinion is shared by most of the security community.

No it isn't.

Security by obscurity is a very valid defense-in-depth measure. There's a reason you disable "debug-level logging" on production webservers.

>There is ample evidence that the cons FAR outweigh the pros in this particular comparison.

Show said evidence, please, because there is ample evidence that open source software magically being reviewed by hordes of people is often a myth.

Heartbleed was caused because pretty much nobody cared to look at openssl's code and review it for vulnerabilities. Just because it is available to be looked at by white hats doesn't mean anyone actually is looking at it. And if something as critical to security as openssl isn't being reviewed by security researchers, what gives you any confidence that random software like JoesCoolLoggingLibrary is reviewed with any more scrutiny?

Speaking of logging, Log4shell is yet another example: one of the most ubiquitous libraries in use, used everywhere by some of the largest tech companies in the world, all of which have the largest and best-budgeted security organizations. The vulnerability was present in the code for years, available for anyone in the world to look at, and yet...? Instead, there is evidence that Log4shell was being exploited by black hats before any white hats discovered it.

>You're arguing that 99 white hats don't trump black-vs-white.

No, they're arguing that there is no guarantee that these 99 white hats are somehow better than the 1000 black hats, and I'd add to it that there is also no guarantee that these 99 white hats magically appear, anyway. In fact, I think a more realistic visualization for most software is:

- Closed source:

-- Pros: 1 well paid white hat can easily inspect the source

-- Cons: 1000 hackers can blackbox-attack the application

- Open source:

-- Pros: 1-2 unpaid volunteer white hats might inspect the source if they have time

-- Cons: 1000 black hats can whitebox-attack the application

As a security engineer, I know which one I feel more comfortable with.


> Show said evidence, please, because there is ample evidence that open source software magically being reviewed by hordes of people is many times a myth.

It is outright ridiculous (even malicious) to point to examples of bugs that were found _by 3rd party users reviewing the code_ as evidence of a lack of 3rd party code review.

However, try to find evidence of issues found by a 3rd party reviewing a closed crypto system.

There is a literal, objective benefit to having an open system, and that is why every engineer worth their salt nowadays is going to consider a closed crypto system the snake oil it is.

> Security by obscurity is a very valid defense-in-depth measure. There's a reason you disable "debug-level logging" on production webservers.

This is also missing the point. What is meant here is that a crypto system should not rely on obscurity of its design (rather, the design should be open for cryptanalysis), not that you have to provide debug-level access to every implementation of the system, even production.


>It is outright ridiculous ( even malicious) to point to examples of bugs that were found _by 3rd party users reviewing the code_ as evidence of lack of 3rd party code review.

Those "bugs" were found by third party hackers reviewing the code. Heartbleed was found by someone conducting a blackbox pentest. Log4shell was found by someone reviewing the code, and using the exploit before white hats discovered it as a 0-day. This is the exact opposite of championing open source white hats, and is exactly the concern raised by the original author of the statement which created this entire thread.

>However, try to find evidence of issues found by a 3rd party reviewing a closed crypto system.

I can speak from personal experience at a FAANG: this happens literally every day, multiple times a day. Just because you don't hear about it (it's behind closed doors) does not mean it isn't happening.

>There is a literal objective benefit to having an open system

And there is a literal, objective disadvantage to having an open system as well. The entire point is that you must weigh the tradeoffs.

>why every engineer worth its salt is nowadays going to consider a closed crypto system as the snake oil it is.

The majority of software you use is using closed crypto systems. Just because the core algorithm used is open source doesn't mean the rest of the implementation is. The security industry does not view this as snake oil. If you think we do, you have a misunderstanding of the security industry.

>This is also missing the point. What is meant here is that a crypto system should not rely on obscurity of its design (rather, the design should be open for cryptanalysis), not that you have to provide debug-level access to every implementation of the system, even production.

It seems like you're the one that completely missed the point. Relying on third party cryptanalysis for your security goals (especially by unpaid volunteers, or low-paid bounty hunters) is terrible and lazy security, and will almost certainly not get you what you want. The design of a crypto wallet being open is a product decision made because your users want some level of assurance that you aren't secretly stealing their BTC, but it is not a security decision that can be relied on to secure your product from vulnerabilities.


> Security by obscurity is a very valid defense-in-depth measure.

No it isn't. The whole notion of "defense-in-depth" generally does more harm than good IME, as it creates confusion about where the actual security boundaries are.

> Speaking of logging, Log4shell is yet another example. The most ubiquitous libraries in use. Used everywhere by some of the largest tech companies in the world, that all have the largest and most well-budgeted security organizations.

log4j2 was widely disliked and rarely used, IME.


>No it isn't. The whole notion of "defense-in-depth" generally does more harm than good IME, as it creates confusion about where the actual security boundaries are.

The security departments of multiple FAANGs, not to mention security experts, completely disagree with you.

>log4j2 was widely disliked and rarely used, IME.

Tell that to the tens of thousands of FAANG engineers who worked all weekend remediating the hundreds of thousands (no exaggeration) of instances in their companies where it is in use.


I'm happy to hear you're a security engineer. I'm sure you're the only one here.

> No it isn't. Security by obscurity is a very valid defense-in-depth measure.

No-one has said otherwise. I said this in the gp of the comment you're replying to.

> Show said evidence, please

No need: you've just shown two good examples yourself, bugs found by scrutinising open source software. I'd be curious how many examples you have of the same found in closed-source software (or are you suggesting there's no closed-source software out there with vulns of the same vintage as Heartbleed?).

> No, they're arguing that there is no guarantee that these 99 white hats are somehow better than the 1000 black hats

This sentence seems to make the same assumption others have: that "black hats" are exclusively looking at open source software.

> 1-2 unpaid volunteer white hats might inspect the source if they have time

This might be true of a library no one uses (in which case the impact of an exploit is limited by its popularity). For popular libraries, there's an entire SCA industry of commercial vendors selling products that disprove this (transitive dependency reporting is done throughout large corps that rely on open source supply chains). Admittedly this industry is more mature in some areas than others (e.g. package-managed language ecosystems vs orchestrated system deps), but it's still not insignificant. There's absolutely no comparison between this effort and the number of eyes looking at proprietary products internally in any given org.


>No need:

Yes, need.

>you've just shown two good examples yourself. Bugs found by scrutinising open source software.

Bugs found years after they were introduced, and not found by white hats until after black hats were already exploiting them. This is your example of open source software being secure? These are poor examples, and the exact opposite of what you're arguing for.

>This sentence seems to make the same assumption others have: that "black hats" are exclusively looking at open source software.

It makes no such assumption. You appear to have completely missed the point.

>This might be true of a library noone uses (in which the impact of an exploit is limited by it's popularity). For popular libraries, there's an entire SCA industry of commercial vendors selling products that disprove this

Nope. You again seem to have completely missed the point. Openssl and log4j completely destroy your argument here, as they are two of the most used software packages in history and yet nobody noticed the bugs for years. I don't know how you can champion this as a win for open source security with a straight face. We still do not understand the full extent to which these vulnerabilities were exploited, but we do absolutely know that they were exploited before open source white hat researchers found anything. These were abject failures for open source.

> There's absolutely no comparison between this effort and the number of eyes looking at proprietary products internally in any given org.

You're right that there's no comparison. Right now in my company there are thousands of well-paid engineers whose full-time job is to look for vulnerabilities in our closed-source code bases. The amount of scrutiny that open-source libs get doesn't hold a candle to it.


Why doesn't the NSA/FSB/... open source their crypto systems then so that people can search for flaws?

And there would be interest, imagine the street cred...


They generally do open source their crypto systems (at least the algorithms themselves), given that this is practically a requirement for any crypto system to be adopted in a widespread way (no one who is serious about security is going to use home-baked closed-source cryptosystems).

Examples: https://en.wikipedia.org/wiki/Speck_(cipher) and https://en.wikipedia.org/wiki/Simon_%28cipher%29 and https://en.wikipedia.org/wiki/GOST_(block_cipher)


Not too sure about the NSA - I know they subcontract a lot, so may be harder to weed out. GCHQ have some fascinating stuff out there though https://github.com/gchq/CyberChef/blob/master/src/core/opera...


Have you never considered this or do you actually have a valid argument as to why it isn't true? I thought this was something that was agreed upon by most engineers.


One of the main things I do when doing any application security tests is - you guessed it - read the documentation and source code (if it's open source).

It's basically like reading a manual on how to exploit the application.


Interestingly, the most popular web3 wallet for EVM chains (Metamask), is source-available but not open source.


Security through obscurity


About half of these concerns evaporate if you just use a hardware wallet. Web3 will almost certainly never achieve its goals if hardware wallet adoption isn't ubiquitous.

If you want to level criticism, criticize how expensive hardware wallets are and their lack of support.


> web3 will almost certainly never achieve its goals if hardware wallet adoption isn't ubiquitous.

If that is true, then web3 may be in trouble. For a very long time people have pushed for hardware devices for MFA, and yet most people still use phones, if they even have MFA at all. I know people with Yubikeys and other devices because of the circles I've been in, but in the wider audience of the internet they are exceedingly rare. So unless it becomes super easy for the masses to have a hardware wallet, such as their financial institution issuing one, I'm not sure how you realistically get past this problem. That of course puts people back into trusting and depending on their financial institution and its third-party providers.


Hardware wallets are basically crypto keys in a smart card with a GUI, so crypto hardware wallets are eventually going to migrate to the smart card in everyone's computer (TPM) and mobile phone (secure enclave and equivalents). WebAuthn shows what this future will be. Everyone's hardware wallet will eventually just be their mobile phone.
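
WebAuthn already exposes exactly that model: the platform authenticator (TPM / secure enclave) mints a key pair that never leaves the chip. A sketch of registration (all values here are illustrative placeholders):

    async function registerHardwareKey() {
      return navigator.credentials.create({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // server nonce in practice
          rp: { name: "example-wallet" },
          user: {
            id: new TextEncoder().encode("user-1234"),
            name: "alice@example.com",
            displayName: "Alice",
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
          authenticatorSelection: { authenticatorAttachment: "platform" },
        },
      });
    }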


How would you theoretically back up and revoke that device? Stolen phone, broken phone, compromised phone, etc.


1. Social recovery methods. 2. Even if you use a hardware wallet, you still need to back up your seed phrase offline.
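
The seed phrase, not the device, is the root secret. For example, with ethers v5 a BIP-39 phrase comes out like this (a sketch; write the words down offline):

    import { ethers } from "ethers";

    const wallet = ethers.Wallet.createRandom();
    console.log(wallet.mnemonic.phrase); // 12 words; anyone holding them holds the funds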


The money just flows to people with better OPSEC.

After they phish the people with bad OPSEC.

You might think web3 will be in trouble as people stop showing up, but there are so many other opportunities to make money: the good-OPSEC people launder their money by sending a random token's price sky high and cashing out as capital gains against their other, already-clean money. That will keep attracting people looking for gains. Many people also do secure their holdings properly; the inconvenience of bothering likely drives scarcity.


MetaMask supports the Ledger hardware wallet. It's about $60 before shipping/tax.


I'll never buy another Ledger product.

I got doxxed in their data leak(s): https://www.google.com/search?q=ledger+data+leak

In hindsight, I should've known better than to use PII at the time of purchase.


My main takeaway from this is that getting rich isn't worth the trouble


I remember back in 2014 trying to find a decently priced and widely available smartphone that could be repurposed into a secure hardware wallet. Replacing the operating system was possible for a lot of phones, but after a lot of research I could never find a phone that didn't require binary-blob drivers or that was fully open in terms of firmware, both on the application-processor side and the baseband side (even if that meant disabling the baseband, which was desirable anyway).

I wonder if that's still true today. Although I suppose there are a lot of hardware wallet choices now.


Bunnie's Precursor/Betrusted project ticks those boxes.

https://www.bunniestudios.com/blog/?p=5921


In my experience wallets are pretty secure. The problems often lie with smart contracts.

What is bad with wallets right now is UX, especially on mobile.

I hope smart contract wallets can fix these UX issues, but I'm not so happy about the potential security risks they bring.



