All roads of AI are converging on the end of privacy. A free society does not exist in the absence of privacy.
When you lose privacy you lose ownership over your own thoughts. You lose agency over the direction of your own life.
Humanity suffers in either scenario: either our own agency controls power we are not prepared to manage, or we are managed by power we cannot control.
Many have their focus on the unknown concerns of AGI; however, these other issues are nearly upon us already and are far less present in the discourse around AI.
Loss of agency is a deeply profound concern that is not discussed enough.
So much of human life is tied to the reward of the sense of accomplishment, and AI threatens that in a meaningful way. From large things, like seeing a piece of software you built ship and get used, or seeing your child graduate from college after years of hard work raising them, to the small things, like even eating a delicious meal you cooked yourself.
When machines are not only offering to solve problems and perform actions for us, but are reading our thoughts and doing everything for us before we even make an attempt, I worry that sense of accomplishment will become ever less frequent.
And while we can compensate for machines taking jobs through, e.g., universal basic income or the like, how we solve for machines replacing struggle, perseverance, and accomplishment is still very much TBD.
Indeed, this is one of the major topics I focused on in the referenced article.
"AI brings into question who is really the creator. Am I the creator if I simply roll the dice until I win the lottery? When I regenerate a new response over and over until finally getting the image or output I most desire? When the output is the final product, is AI still just a tool? Or are we simply observers foolishly convincing ourselves of the value of our contribution?"
The type of decisions those chore-bots make is rather inconsequential, whereas the ones that will decide which digital content you can trust, or whether it's something you shouldn't want to see, could get very nasty.
I got an Automower this year and it's been one of the best purchases I ever made. I have multiple hours per month of my life back. Though I have more grass than I'd like.
You have to physically climb into a multimillion-dollar, 10-ton, liquid-helium-cooled machine and spend hours training it on your brain. We're a long way from the end of private thought, nor is there a roadmap to it.
If someone is working on a handheld device that can perform MRI-level scans from across the room, I would be worried about the privacy implications of that technology, not AI.
That is missing the point. It is the trajectory. Machines that can analyze all the orthogonal data needed to understand your behavior are not limited to the single dimension of an MRI machine.
Facial expression analysis, all of your online conversations - the entirety of your life is being monitored and recorded. There will be enough data to perceive beyond the veil into the thoughts of the mind.
It is definitely missing the point - capturing brain waves does not require liquid cooling, etc. In fact, it's something that could theoretically be squeezed into the next Meta Quest to just "capture". The data would then be stored until they are ready with an Airflow pipeline to pass it into the NN.
During the pandemic, I built a device that records "brain waves". Not with the fidelity that would be needed to be a useful medical instrument -- let alone read minds -- but the fact that I, a fairly average geek, can do so using my own equipment and for less than $100 seems meaningful.
The device itself is very small, battery-powered (for safety) and requires that you attach electrodes to your scalp.
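The commenter doesn't share their design, but as a toy sketch of the kind of signal processing a sub-$100 EEG rig ends up doing (every name, number, and the 250 Hz sample rate here are my own assumptions, not their build): estimate the power in the 8-12 Hz "alpha" band with an FFT.

```python
import numpy as np

def alpha_band_power(samples, sample_rate_hz):
    """Estimate power in the 8-12 Hz 'alpha' EEG band via FFT.
    A toy sketch, not a medical-grade analysis."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    mask = (freqs >= 8.0) & (freqs <= 12.0)
    return float(np.sum(np.abs(spectrum[mask]) ** 2))

# Synthetic stand-in for electrode data: a 10 Hz "alpha" tone
# buried under 50 Hz mains hum.
rate = 250  # Hz, plausible for a cheap EEG front-end
t = np.arange(0, 4.0, 1.0 / rate)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

in_band = alpha_band_power(signal, rate)
hum_only = alpha_band_power(0.5 * np.sin(2 * np.pi * 50 * t), rate)
```

With real electrodes the hard part is analog: amplifying microvolt signals and rejecting that mains hum before it ever reaches the ADC.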
I suspect that the underlying point here is that when EEGs first came around in the 1920s, they were also extremely expensive and not available for common use. But since then, the technologies involved have become so much more accessible and affordable that an electronics hobbyist can build one for themself, maybe even using parts they already have kicking around.
MRIs may very well follow a similar trajectory over time.
Personally, I don't think this will happen in the very near future -- but history shows us time and time again that it's good to start thinking about these things well before they become practical.
That doesn't make sense from a physics perspective.
MRI machines operate by creating intense magnetic fields. In order to create those magnetic fields, you need superconductors, otherwise the magnets would burn up. Thus the liquid helium.
To make this accessible to the hobbyist, you need a revolution in physics, not informatics or engineering. Not saying that it's impossible, but if someone does develop room temperature superconductors, we're going to be talking about a lot more exciting things than handheld MRI machines.
> That doesn't make sense from a physics perspective.
I'm reminded of what a physicist once told me: if a physicist says something is impossible, give up all hope. If a physicist says something is uneconomical/impractical/infeasible, then there is still hope because economics change over time.
> To make this accessible to the hobbyist, you need a revolution in physics, not informatics or engineering.
The big stumbling block in terms of doing this on an (advanced) hobbyist level is the need for liquid helium and rare metals. That's an economic problem, not a physics problem. The helium is a really big deal -- but it's also the one most likely to see its economics change, because the helium shortage is a serious issue and there are lots of people looking into ways to produce it efficiently. If they succeed, helium may end up becoming cheap enough to be within the realm of possibility for advanced hobbyists.
Also, the only reason that helium is needed at all is because MRI machines require superconductivity to work. It's not impossible that an advance in that field could happen such that you don't need to make things as cold as liquid helium in order to achieve it.
> we're going to be talking about a lot more exciting things than handheld MRI machines.
Hand-held? Why do they have to be hand-held? I'm just talking about ordinary people being able to build one at all, not how portable it would be.
What are you worried about? That someone will kidnap you, force you into an MRI machine, force you to train it for hours on your neural firing patterns, and get the password to your bank account this way?
I'm trying to figure out which part of this threat model AI makes a meaningful difference in. If they already have you captive, the xkcd-certified $5 wrench is cheaper.
"End of private thought" doesn't seem to be on this tech tree, unless you posit being able to scan people secretly or against their will.
I'm not worried about any of that at all. None of what I've said has some unstated "therefore, this is bad" clause to it. I'm just pondering the progression of technology here.
If someone comes up with a technology that allows people's minds to be read without their cooperation, then I'd start to worry -- but I see nothing in this that indicates that's where things are going.
Also, the idea of building my own MRI appeals to me, so my mind went on a little tangent about how to make that happen.
Progress isn't linear. That we can cure one disease doesn't mean we're on a trajectory to eliminate all diseases. That today's Camry is faster than last year's Camry doesn't mean we'll be traveling at relativistic speeds anytime soon.
That we can correlate thought with incredibly precise and detailed electrochemical phenomena in your brain should come as absolutely no surprise to anyone with a materialist view of the universe. Your thoughts are, after all, electrochemical phenomena. The problem - still - is measuring them.
The idea that AI is going to somehow read your mind from macro phenomena like facial expressions and the width of your iris is total bullshit made up by people who want to sell TV shows. We already have this kind of "technology" in the form of polygraphs - and they have the same effectiveness as horoscopes.
You might have a relevant point, were it not for the fact that we are already in a crisis regarding the loss of privacy and its impact on society.
In this respect, any further loss is of significant concern.
You have to lug around a giant bulky laptop and you can't even call people on it! We're a long way from mobile computing.
Anyway, even given current limitations, I'd likely side with the very researchers working on this technology: they explicitly highlight privacy as a serious concern, going out of their way to flag potential misuse, such as bypassing the requirement of a cooperating subject or intentionally misinterpreting the output for nefarious ends.
One thing that history teaches is that there are always people who will misuse technology for personal gain and to influence or control other people. Always. Human nature hasn't changed.
You do now. But if you are arrested, you can be compelled to do that, and in a few years EEG skullcaps will probably be sufficient.
Honestly, it baffles me that so many people on a site devoted to technology evaluate long-term trends based on current capabilities. Capacity is going to continue to double every ~2 years. If we can't make transistors 2x smaller, we'll find a different architecture to make transistor arrays 2x larger, or [something].
Technical progress compounds. It's often lumpy rather than linear, but it's going to keep imposing an accelerating effect wherever people see profit in applying it.
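As a back-of-the-envelope illustration of that compounding (the ~2-year doubling period is the commenter's assumption, not a law, and the function name is my own):

```python
def capacity_after(years, doubling_period_years=2.0, start=1.0):
    """Compound growth: capacity doubles once per doubling period."""
    return start * 2 ** (years / doubling_period_years)

# Ten consecutive 2-year doublings turn a 2x-per-step improvement
# into a ~1000x cumulative one over 20 years.
growth_20y = capacity_after(20)  # 2**10 = 1024.0
```

The point of the arithmetic: each individual step looks modest, but the exponent is what makes "current capabilities" a poor basis for a 20-year forecast.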
Liquid helium wouldn't be a barrier to using this on key individuals (say, leaders of opposition parties in some large, authoritarian-ish country). The training would be, if it requires the subject's cooperation.
> You have to physically climb into an multimillion dollar, 10 ton, liquid-helium-cooled machine and spend hours training it on your brain
the article suggests it could be accomplished with a neurosurgical implant as well - and honestly, if I could transcribe my thoughts to review later, I'd love to try that out. In that case, the question of security moves to the matter of accessing the implant.
Privacy is gray and there has never been absolute privacy. For example, if I go somewhere in public then people see me. I should not expect my location to be private.
> if I go somewhere in public then people see me. I should not expect my location to be private.
If that's the case, then we have almost no meaningful privacy. Fortunately, that doesn't have to be the case.
If I go out in public, people can see me, sure. But the odds of any of those people knowing who I am are minuscule. To them, I'm just another body in the bulk of bodies that populate their landscape. My privacy is retained.
It's when cameras are everywhere, recording everyone they see, when we carry devices that report our locations to others, etc., and that data is correlated with other data, that privacy is compromised.
The impacts on socialization and society in general from AI performing exactly as we have requested may be one of the greatest threats. I've written quite a bit on that aspect, FYI - https://dakara.substack.com/p/artificial-intelligence-ai-end...