The issue is trying to look for technical solutions to what is fundamentally a social problem. That problem is trust.
Over the previous few decades, the web has slid from a place that felt like a small village to a bustling metropolis. In a large city, trust is maintained first via politeness norms suited to anonymised societies (like civil inattention) — we trust that others will leave us alone — and second (and perhaps more prominently) by explicit institutions (emergency services, businesses, media).
Similarly, trust in information on the Internet has been mediated by implicit rules (consider how many people will add "Reddit" at the end of their Google search) and explicit institutions (such as search engines and social media platforms).
Though I agree with a lot of the article's conclusions, I feel that it falls into the same trap that a lot of tech commentators do. Ultimately, though I'm certainly worried (like most of us) about the ramifications of AI, including, as the article mentions, how we'll navigate a noise-saturated information environment, I think we will find a way. No technology will alter the fact that humans will always seek out trust*.
LLMs have no ethics and can't be trusted. A real AI would be able to recognize these things.
LLMs, being highly advanced, remorseless bullshitters, can't even understand trust, except to bullshit their way into appearing trustworthy.
In a way these are possibly more dangerous than real AI, which for all we know would converge on some higher understanding of its role in society.
Instead, these are weaponized trust doppelgangers, solely in the service of the other significant trust doppelgangers of our society: limited liability corporations.
Or maybe we can get something like the old web back, from before it was dominated by search engines, click farms, and silos like Reddit and Facebook. If the platforms are dominated by garbage, then curated blogs, sites, and small, topic-based communities have value again.
We need a new internet for that. Half joking, but not really. One that is strictly p2p, with no distinction between client and server. All applications run client-side and sync with a trusted, opt-in friend-of-friend spanning tree. It would look more like organic neighborhoods: it incentivizes closer relationships but can still leverage networks at higher degrees through friend-of-friend connections. Essentially something like BitTorrent, bypassing BGP and incentivizing local clusters of information exchange that get mixed with "curators" who dip into multiple networks, helping networks replicate information they want access to.
Latency matters less when it's all run locally, and for many scenarios it can even be done predictably with local AI. Combine that with local application rules that get verified by whoever is peering through/with you on an app.
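To make the friend-of-friend part concrete, here's a rough sketch (all names and structures are made up for illustration, not any real protocol): peer discovery is just a bounded walk over the opt-in trust graph, so you only ever reach peers within a few degrees of people you explicitly trust.

```python
from collections import deque

def reachable_peers(trust_graph, me, max_degree):
    """Collect peers within max_degree hops of `me` in an opt-in trust graph.

    trust_graph maps each node to the set of friends it has explicitly
    opted in to peer with (the friend-of-friend links described above).
    Returns {peer: hop_distance}.
    """
    seen = {me}
    frontier = deque([(me, 0)])
    peers = {}
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_degree:
            continue  # don't walk past the trust horizon
        for friend in trust_graph.get(node, ()):
            if friend not in seen:
                seen.add(friend)
                peers[friend] = depth + 1
                frontier.append((friend, depth + 1))
    return peers

graph = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob", "dave"},
    "dave": {"carol"},
}
print(reachable_peers(graph, "alice", 2))  # {'bob': 1, 'carol': 2}
```

The `max_degree` knob is what makes the clusters "local": small values keep exchange inside a neighborhood, while a curator node with edges into several neighborhoods bridges them.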
Another idea I explored separately was a kind of OS on top of such a structure, where you wouldn't need anything more than a "terminal", since all your computers would be on a virtual machine overlay. It would be interesting to have every possible computation be hashed, so that once it's done it just needs to be retrieved; you could treat all peers as a type of addressable memory for not only information but also computation and data.
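The "hashed computation" idea is essentially content-addressed memoization. A toy sketch of what that lookup could look like (the class and its interface are invented for illustration; a real overlay would query peers instead of a local dict):

```python
import hashlib
import json

class ComputeCache:
    """Toy content-addressed computation store.

    A computation is keyed by the hash of (function name, arguments),
    so a result computed once, by any peer, only needs to be fetched,
    never re-run.
    """
    def __init__(self):
        self.store = {}  # hash -> result; stands in for the peer network

    def key(self, fn_name, args):
        blob = json.dumps([fn_name, args], sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def compute(self, fn, args):
        k = self.key(fn.__name__, args)
        if k not in self.store:       # miss: actually run the computation
            self.store[k] = fn(*args)
        return self.store[k]          # hit: peers act as addressable memory

cache = ComputeCache()
print(cache.compute(pow, [2, 10]))  # 1024, computed locally
print(cache.compute(pow, [2, 10]))  # 1024, retrieved by hash
```

The hard parts this sketch waves away are the real research problems: canonicalizing arguments so equal computations hash equally, and trusting that a peer's cached result is honest.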
It’s something I wanted to build but then I lost motivation because I can’t possibly be sane or unique now can I?
Your vision is intriguing. The emphasis on decentralization, privacy, and a peer-to-peer model to reconstruct the Net is something we need now more than ever, IMO.
A project I've been experimenting with, Bison Relay, makes strides in this direction. It's a p2p platform that offers zero-knowledge communications with end-to-end encryption, so messages are seen only by their intended recipients. It integrates with the Lightning Network: each message involves a microtransaction as a spam-prevention measure, and users also have the opportunity to monetize their content in a simple way... our internet now is "free" only because we ourselves are the product, as we know!
In its current form it's like Telegram + Twitter, and soon it will have sites and "eBay"-like stores, all completely p2p.
Check out its GitHub or website; you need an invite to join right now, though.
Yep, this is the way forward. A long time ago, I also started some groundwork to build something like this, but had to stop due to lack of time, as this would be a multi-year effort with no commercial gains (which is a feature, in my view).
I’m reminded of PGP signing parties where people built a web of trust via key signing.
There may be a resurgence of this kind of thing as people use social heuristics to decide if any given creator is likely to be producing generative content or not.
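The web-of-trust idea from key-signing parties boils down to a reachability check: I accept a key if a short chain of signatures connects it to a key I already trust. A minimal sketch, with invented names and none of the revocation/expiry machinery real OpenPGP needs:

```python
def is_trusted(signatures, my_keys, target, max_chain=3):
    """Return True if `target` is reachable from any key in `my_keys`
    via at most `max_chain` signature links.

    `signatures` maps signer_key -> set of keys that signer has signed.
    """
    frontier = set(my_keys)
    seen = set(frontier)
    depth = 0
    while frontier and depth < max_chain:
        if target in frontier:
            return True
        nxt = set()
        for key in frontier:
            # follow signatures outward one more hop
            nxt |= signatures.get(key, set()) - seen
        seen |= nxt
        frontier = nxt
        depth += 1
    return target in frontier

sigs = {"me": {"alice"}, "alice": {"bob"}}
print(is_trusted(sigs, {"me"}, "bob"))      # True: me -> alice -> bob
print(is_trusted(sigs, {"me"}, "mallory"))  # False: no signature chain
```

The `max_chain` bound matters socially as much as technically: trust decays with each hop, which is exactly the heuristic people would be applying to creators.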
We need a way to filter out the low-utility, low-information sites that are only there for SEO, marketing, or even misinformation. And we need to do so without censoring. This is a wicked problem[0] to solve, but one worth tackling explicitly as a society. Otherwise, the first mover gets to decide what deserves censorship and gets to control public speech.
My modest proposal: an internet constitution or bill of rights that solidifies the right to free speech online, as well as a court system to enforce and disambiguate the rules. Following this, you have laws against fake reviews (basically fraud), and allow court cases when harm is caused by knowingly false information. Then, for the blogspam and AI-generated stuff, you let jurisprudence and representative democracy decide how to tackle it.
What about all the cost of this? Honestly, it might be worth it considering the importance of the internet today.
Way too overcomplicated and authoritarian. The reason the old internet was good is that it had both a high barrier to entry and low perceived utility for commercial exploitation. Gemini gets it right by just starting over and limiting its protocol to prevent general appeal and exploitation.
There is already authoritarian control of the internet going on. I believe the solution is to make this control of the internet explicit, transparent and have the people be able to influence it.
Of course, there is nothing wrong with starting your own version of the World Wide Web and that’s definitely also part of the solution.
Me too... On the one hand, you could leverage the power of many individuals, unlike in earlier days.
On the other hand, SEOs would just infiltrate that space and it would turn out just as shitty.
The influence of money on information quality is important. When you can pay OpenAI to have the LLM recommend your product over competitors', they'll practically be Google, just with the LLM in place of search.
OpenAI don't seem resistant to money.
Not looking good. We need competition and informed consumers.
That community was so small we could already have that in a webring, but it is also unglamorous just like the 90s web, so it would be confined to that group.
This is like an article concerning the rise in injuries to horses caused by automobiles on the road. It’s stuck in the old paradigm. There will still be search engines in 10 years but they will not be the primary interface we use to access information like they were for the two decades preceding.
Might there be a platform like Amazon Mechanical Turk for curating knowledge, where the "turk" gets incentivized through the value generated rather than a fixed rate? How much would the average Wikipedian receive under such a funding model, and could its incentives align with ideals of utility and rigour? One possible result of incentivizing consilience [0]: a Xanadu-style [1] network of curated human knowledge, more functional/constructive than arguman [2]. Just a thought.
This virus feeds on money. As long as there are economic incentives to produce content, people will use generative models to spew it onto the internet. It is undetectable and therefore unstoppable. It will happen regardless of whether OpenAI exists or not.
So if the internet as we know it is dead, what would the incentive be for content farms to keep producing? The internet doesn't run on air, and neither do LLMs; it's all very expensive to run. So if the internet does die from enshittification and false/crappy information, then it should die pretty soon, and the incentive to produce bullshit content should go away with it. Who is going to pay to host, protect, and make available exabytes of nonsense?
It's wild when you think about what's happened recently. Twitter, for those without a login, has just disappeared. Reddit has degraded a lot, and my YouTube is so full of actual lies, rubbish, and clickbait that I only use it when I really know what I'm looking for, and even then it's getting really, really hard to find what I want.
What amazes me about YouTube is even people who seemed like fairly respectable YouTubers in the past have jumped on the viral clickbait / false info bandwagon.
I'm with the author, it's exciting but it's also kind of scary, like a fog of confusion is descending on our information space.
The answer is bad but, IMO, obvious. Any remote interaction with other humans will require bulletproof authenticity verification, at a minimum something like the chip + PIN that exists for payment cards or government CACs. Countries like India would be ahead of the curve on this.
Any human communication that doesn't occur in-person will by necessity be treated as fake or hostile until proven otherwise.
But the only part we need is the authenticity verification that answers "is this a human?" Yes, a human authenticated at some point. But fat chance we want any central organization doing it. Hate to say it, but if we can do distributed Proof of Work or Proof of Stake for crypto, then we need the same for auth.
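For reference, the Proof of Work being invoked here is hashcash-style stamping: expensive to mint, cheap to verify. A minimal sketch (message format and difficulty are illustrative assumptions):

```python
import hashlib
from itertools import count

def mint(message, difficulty=12):
    """Search for a nonce so that sha256(message:nonce) starts with
    `difficulty` zero bits. Cost grows as 2**difficulty hashes."""
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce

def verify(message, nonce, difficulty=12):
    """One hash to check what took ~2**difficulty hashes to find."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

stamp = mint("hello")
print(verify("hello", stamp))  # True
```

Note what this does and doesn't prove: it shows someone spent compute on this message, which rations volume, but says nothing about whether a human was involved.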
Neither Proof of Work nor Proof of Stake would actually help. A human can always choose to operate based on a script. All it shows is that a human signed off on it at some point. Removing privacy wouldn't help matters either. /At best/ you would get chilling effects combined with warmed-over classism, as the poor are the most likely to be tempted into selling their certs, and therefore "poor = shill".
The fog of confusion has already descended on the traditional media in a different way.
Modern economic pressures and incentives, as well as political ones, have made traditional media subservient to those forces, instead of serving the original ideal of informing the public.
Some still exist that serve that ideal but it has become almost impossible for the average person to separate these from the rest.
YouTube is interesting, but in my experience it is tamable, with a lot of 'buts'. What I find ironic is that the algorithm is so sensitive to keeping you engaged that you can have the opposite effect: instead of being barraged with clickbait at random, you're left with content hyper-similar to what you've previously watched, to the point that the variety becomes so narrow that you can't escape without "re-taming" it via text search and subscriptions.
I agree completely about clickbait chasing from content creators. There's one guitar-based content creator that has gone so far down the rabbit hole that I can't watch them at all any more, as the content is just barely about guitar playing.
I’m quite happy that the always on firehose of ephemeral trash is dying. It only feels bad in the way that withdrawal sickness feels bad to a junkie. We need to kick this dopamine habit and focus on the important things.
I've been enjoying inflation coverage in this regard, roughly: "it turns out that inflation is caused by people believing there will be inflation significantly more than originally expected."
I'll still take secularism as an organizing belief principle over religion, though. Money is a nice mass hallucination insomuch as it is flexible to changing needs/priorities. Like more traditional religions, it also allows power to shift fluidly as social dynamics change. Any power structure will be susceptible to capture, but power structures built on belief are usefully fragile.
China's organic social credit system is an interesting experiment in this regard. It highlights how America in contrast lost an accounting/social enforcement of "good behavior" (with no replacement) when it lost religion. Instead, everyone is left reliant on their "legacy systems," the individual values of their families/upbringing, without a more widely accepted accountability for doing "good deeds."