The myth of American moral superiority has been dead for a while. Why would China be any more evil than the US, which has waged far more colonialist wars and taken far more foreign lives in recent times? (Look at the news today for inspiration.)
I don’t see any contradiction with what the OP said, though. You don’t have to be morally superior to still be concerned about a country’s forces killing you.
It's a reversal of the more likely situation, which is the US getting it and China following in response. Nuclear weapons, anyone? Remember who started those.
The Vietnam War, the Iraq War, the Afghanistan War, the Iran war, the Gaza war, allowing Iraq to get and use chemical weapons on Iran, forced regime change in South America (then and now). Get real; it's not equivalent in any way.
How can you say the Uyghur genocide isn't "equivalent" to the things you listed? What math are you using to compare them? How do you compare regime change in South America to the Uyghur genocide, for example? Is there a spreadsheet somewhere that lists the value you're placing on lives, wars, and geopolitical actions, in order to make a fair comparison?
Use Mullvad Browser or Brave (both block ads without extensions, and Mullvad Browser is modelled on Tor Browser). Use traffic-fingerprint obfuscation even behind a VPN: yes, an observer can tell whether you're messaging, watching a video, torrenting, etc. roughly 90% of the time, even behind a VPN. Use Mullvad's DAITA (which pads packets to a uniform size; see the rough sketch below) or NymVPN (a mixnet with Tor-like routing and built-in delays). Tor doesn't protect against traffic analysis at all.
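To make the "same-size packets" point concrete, here is a minimal Python sketch of fixed-size padding, the basic idea behind size-based traffic shaping. It's an illustration only, not Mullvad's DAITA or Tor's actual implementation (DAITA also injects dummy traffic, which this doesn't show); the cell size and function names are made up.

```python
# Sketch: pad application data into uniform "cells" so a passive observer
# can't read payload lengths off the wire. Hypothetical, for illustration.

PAD_TO = 1024  # hypothetical fixed cell size in bytes


def pad_packet(payload: bytes, pad_to: int = PAD_TO) -> bytes:
    """Pad a single chunk up to the fixed cell size with filler bytes
    that the receiving side would strip."""
    if len(payload) > pad_to:
        raise ValueError("split oversized payloads into cells first")
    return payload + b"\x00" * (pad_to - len(payload))


def split_into_cells(payload: bytes, pad_to: int = PAD_TO) -> list[bytes]:
    """Break a message into chunks and pad each one, so every packet sent
    is exactly pad_to bytes regardless of the original message length."""
    chunks = [payload[i:i + pad_to] for i in range(0, len(payload), pad_to)] or [b""]
    return [pad_packet(c, pad_to) for c in chunks]


if __name__ == "__main__":
    msg = b"GET /video/segment-42 HTTP/1.1\r\n\r\n"
    print([len(c) for c in split_into_cells(msg)])  # every cell is 1024 bytes
```

The point of the uniform size is that a short chat message and a chunk of video look identical in length; timing and volume patterns still leak, which is why mixnets like NymVPN add delays and cover traffic on top.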
Also, it's probably tricky to find a Schelling point that a broad range of people can agree to.
* no military use
* no lethal use
* no use in support of law enforcement
* no use in support of immigration enforcement
* no use in mass surveillance
* no use in domestic mass surveillance (but mass surveillance of foreigners is OK)
* no use in domestic surveillance
* no use in surveillance
* require independent audits
* require court oversight
* require company to monitor use
* require company to monitor use and divulge it to employees
* some other form of human rights monitoring or auditing
* some other form of restriction on theaters/conflicts/targets
* company will permit some of these uses (not purport to forbid them by license, contract, or ToS) but not customize software to facilitate them
* company can unilaterally block inappropriate uses
* company can publicly disclose uses it thinks are inappropriate
* some other form of remedy
* the government actually has to explain why some uses are necessary or appropriate, in order to reassure the people developing these capabilities, who retain some kind of ongoing bargaining power to push back
It feels normal to me that a lot of people would want some of those things, but kind of unlikely that they would readily agree on exactly which ones.
I even think there's a different intuition about the baseline, because one version is "nobody works on weapons except for people who specifically make a decision to work for an arms company because they have decided that's OK according to their moral views" (working on weapons is an abnormal, deliberate decision) and another version is "every company might sell every technology as part of a weapons system or military application, and a few people then object because they've decided that's not OK according to their moral views" (refusing to work on weapons is an abnormal, deliberate decision).

I imagine a fair number of people in computing fields effectively thought that the norm or default for their industry was the former, because of the perception that there are "special" military contractors where people get security clearances and navigate military procurement processes, and most companies are not like that, so you were not working on any form of weapon unless you intentionally chose to do so.

But, having just been to the Computer History Museum earlier this week, I also see that a lot of Silicon Valley companies have actually been making weapons systems for as long as there has been a Silicon Valley.
There is definitely a muddle on so many levels about signaling and agreeing on ethics in technology.
But as innovation slows globally, implementation, ethics, and ideology will once again become the dominant metrics of progress, so a new window is emerging to push for this social/moral change in technology.
So it's still critically important that we actively work towards finding a meaningful, socially contagious differentiator other than "ethical technologist", even if it's difficult: look at what OpenAI gets away with under that flimsy banner.
"Starting today I will be asking prominent members of the tech community to sign their name onto this. A code of conduct, authored by me, that pledges them to a universal ethos, which I created, that I call tech ethics or Tethics for short."
I've had similar thoughts, alongside the idea that these rapid advancements maybe suggest the universe really is only "recently created". It's no absolute comfort, but I enjoyed this Boston Review article on AI psychosis as a longstanding historical phenomenon: https://www.bostonreview.net/articles/a-brief-history-of-ai-....
The confines of our reality really are so small in the face of technology like this, aren't they?
Especially when you realise that our tolerance for AI propping up the rest of the economy has stifled the "natural selection" that weeds out poor performers and poor business choices and rewards innovation and success. An imprecise economy is a worrying thing.