Kagi is already for-profit; they just haven't raised a huge amount of money:
> Kagi can afford this because they go further than being bootstrapped and profitable: As a Public Benefit Corporation, not beholden to maximizing shareholder value.
As far as I can see it explicitly allows management to consider public benefit, which is something they can do anyway (certainly if the shareholders allow it).
They are for-profit; they're just a "public benefit corporation". I don't know exactly what that means, and I don't think it comes with any obligations (the way being a non-profit would).
It's the same status Patagonia has. I think it's more of a signal to investors (we won't maximize your gains) and a brand boost with the community.
For what goal? Just for them to get instantly addicted once the ban is lifted? For them to lack any communication with their friends and to be excluded from their social circles discussing the newest TikToks or whatever?
First, lack of a phone won't cause them to be completely excluded from their social circles. If it does, then I'd argue those weren't their friends to begin with. Second, kids need to learn that social acceptance doesn't mean they have to do everything their friends are doing. Third, the long-term benefits of reducing their exposure to social media outweigh the short-term benefits of the instant gratification and shared experience of social media.
> completely excluded from their social circles. If it does, then I'd argue those weren't their friends to begin with.
I believe you underestimate the power of being "in". Even if the friends wouldn't be "true" friends, being in is still extremely valuable socially. I say that as someone who, for unrelated reasons, was prevented from fitting in fully. It may not hold much water coming from a stranger on the internet, but I would've given anything to be able to fit in more at that time. I believe it set me back socially 3-5 years, with lasting consequences from which I may never truly heal.
> Second, kids need to learn that social acceptance doesn't mean they have to do everything their friends are doing
Sure, but they won't learn that when you prevent them from participating in activities with their friends. This isn't them deciding that they don't want to participate in something.
> Third, the long-term benefits of reducing their exposure to social media outweigh the short-term benefits of the instant gratification and shared experience of social media.
Attention spans can be fixed.
And besides, you shouldn't control any child like that. You might say "they will thank me in the future". But they never will. And the damage done by controlling their life like that is more lasting: their relationship with authority, with you, and with their own autonomy will be forever changed. (I mean "you" figuratively; I don't mean to imply you specifically.) This teaches them "you don't have a right to own things the authority doesn't want you to own" (or it teaches them how to lie and hide contraband).
> And besides, you shouldn't control any child like that.
Every parent exercises some form of control over their child. (Cookie before dinner? No, sorry.) Children need to learn boundaries and it's up to the parents to set those boundaries. It's basic parenting, and isn't as nefarious as you're making it out to be.
> You might say "they will thank me in the future". But they never will.
In my experience this is untrue. I grew up when TV was the primary medium of household entertainment, and yet I was the sole child in my class, and probably my whole school, to not have a TV at home (a deliberate choice on my parents' part). Now that I'm grown up, I'm thankful for it.
Somehow kids were able to make friendships before everyone was online all the time. Perhaps they don't need to be spending time discussing the newest TikToks. Maybe their friends should be hanging out and doing things.
No. It's not the smartphones that are the problem. Smartphones are a wonderful invention, capable of connecting anyone anywhere.
It's the apps, which overload everyone's brains (not just kids'!) by algorithmically "mAxImiZinG eNgaGeMent".
It's time to ban them all. Okay, that's a bit much. Ban all algorithmic feeds: all apps must adhere to a strictly chronological feed of only the authors the user has explicitly subscribed to.
If we can all agree that cannabis is bad for the still-developing mind, and can generally get on board with the idea that kids should be kept as far away from it as possible (it's addicting, it causes long-term alterations to brain development, it diminishes motivation and hijacks executive-functioning networks), why is it so hard for society to consider treating smartphones, social media, and highly-immersive video games like MMORPG's, with essentially all of the same effects, the same way?
I am part of the generation that grew up with MMORPG's from early childhood (I was about 9 years old when I made my first RuneScape account), but approaching 30, I don't game at all anymore for the exact same reasons I don't touch cannabis anymore. Instagram, Snapchat, TikTok, Facebook, it's all the same thing for teenagers. At a neurological level, these platforms are as highly addicting and neural-network-altering as actual psychoactive pharmaceuticals, legal or otherwise.
Paleolithic emotions, medieval institutions, and god-like technology is a combination that we're not nearly as well-adapted to as we think we are.
> why is it so hard for society to consider treating smartphones, social media, and highly-immersive video games like MMORPG's, with essentially all of the same effects, the same way?
I agree with you. I would consider social media and games addictive. It's just that the SMS app on my phone isn't addictive. The Telegram app and the Photos app aren't either.
> Paleolithic emotions, medieval institutions, and god-like technology is a combination that we're not nearly as well-adapted to as we think we are.
Agreed. But my paleolithic emotions aren't addicted to the radio waves of my phone; they're addicted to the TikTok app specifically.
Sorry if my post was unclear, when I say "platforms", I am talking about Facebook, Instagram, Snapchat, TikTok, open-ended MMORPG's, etc - I agree that the problem is the addiction-optimized psychological experiments, not the operating system or device itself.
Because a phone is just a box of wires; without apps it's inert.
It's the apps, which corrode everyone's attention span. And unlike weed, I doubt there will be "algorithmic feed" dealers, because no one actually wants an algorithmic feed.
Sure - to be clear, I am not suggesting banning technology itself. Computers and the internet were also a boon of joy and discovery for me. I self-started programming in TI-BASIC back in middle school because "computer science" classes that covered anything beyond typing and "here's how to use a web browser, here's how to use a text editor" skills weren't available until high school for me. I have vivid and fond memories of learning Visual Basic and making my own GUI apps after that, before eventually starting to learn JavaScript, Python, and "real" programming languages like C.
None of this exploration ever required or involved Facebook, any other social media platform, or a highly immersive video game, save YouTube.
And to be clear, I'm no proponent of the state simply passing universal bans, or infringing upon the privacy of adults with facial-recognition requirements for using social media. This is a responsibility of parents, many of whom, I fear, haven't themselves been adequately warned about how addicting these platforms are.
I don't think DARE-style assemblies for both students and parents would be the worst idea to warn both groups about the risks of these platforms, provided they were done honestly, rather than being filled with hyperbole. It doesn't infringe upon anyone's rights, and wouldn't really "cost" anything, but would help educate those who might lack the awareness on the subject.
> I don't think DARE-style assemblies for both students and parents would be the worst idea to warn both groups about the risks of these platforms, provided they were done honestly, rather than being filled with hyperbole.
Yeah that's fair. Probably can't hurt anything with that. But it's hard to get the actual danger across.
> None of this exploration ever required or involved Facebook, any other social media platform, or a highly immersive video game, save YouTube.
That's why I am gunning to limit these kinds of platforms, specifically.
> It doesn't infringe upon anyone's rights, and wouldn't really "cost" anything,
Well, it depends. If these assemblies worked, they would "cost" the platforms potential engagement and potential revenue. Which is kind of a pointless distinction; I just thought it was interesting.
No, that doesn't address the incentives that cause all those things: maximizing engagement to maximize ad impressions for money. You have to choke the money supply off at the source or the big corporations will just find other engagement mechanisms to hook users to get at more profits.
Instead, tax ad impressions per day per user on a sliding scale that makes it quickly unprofitable to display more than a handful of ads and use the money to fund media literacy classes in schools. Restrict the number and types of advertising that can be shown to children and adolescents, like forbidding animated ads.
I think you're putting too much emphasis on The Algorithm. It's a problem, and I agree it's probably the worst offender, but similar problems were observed decades ago with children (and adults...) allowed to watch too many hours of uninterrupted TV. Cutting back to chronological feeds might improve some things but I don't think that's the root of the issue.
I would suggest the primary difference between then and now is accessibility. As a kid, my screen time was limited not just by my parents' indulgence but by the social pressure of using a shared device. Smartphones let you carry your personal distraction with you.
I agree they are a wonderful invention but I'm not sure grade school students need to be connecting to anyone, anywhere throughout the entire school day.
> I think you're putting too much emphasis on The Algorithm. It's a problem, and I agree it's probably the worst offender, but similar problems were observed decades ago with children (and adults...) allowed to watch too many hours of uninterrupted TV.
Yeah that's fair.
> I agree they are a wonderful invention but I'm not sure grade school students need to be connecting to anyone, anywhere throughout the entire school day.
Well, to their friends in other classes ("Wanna go out after the 3pm lesson?").
Additionally, on the social side, smartphones, if banned, would instantly be seen as a status symbol. A ban would also accelerate strong anti-authority sentiment. The kids won't understand it; hell, adults wouldn't. So it's also the case that you can't really ban them without really adverse social effects.
Sure, but the natural consequence is that they will be more inclined to distrust society and authority, and to vote for anti-establishment populist parties.
To quote a great man, we live in a society. And it's better to work within a system and get to know it than to just hate it. And if the first experience of a large portion of youth is the system beating them down, you can see how that's gonna grow a strong "tear it all down" mentality.
I don't buy arguments from parents about why they can't just take away their kids' phones, or simply decline to buy them a phone in the first place.
My family didn't have a TV growing up. (This was way before the Internet, when TV was king and HBO and cable were a status symbol.) My siblings and I tried every argument in the book to get them to buy one, to no avail. Out of the loop on TV pop culture? Boo-hoo. Peers make fun of you for not having a TV? Too bad, so sad. The result was that I participated in more activities that engaged my body and brain. Aside from being bad at TV pop culture trivia from those decades, I turned out just fine.
At the end of the day, parents need to set the standards that they want their children to live by, and stick with them. Even today, a phone is a luxury that a kid doesn't really need, and it will likely contribute to a low attention span and cause them all manner of anxiety. Don't take my word for it; many studies will back me up.
You sound like one of the author's students. Restricting juvenile phone use to dumb phones is obviously more feasible than banning or manipulating entire platforms.
I never said ban platforms? TikTok and Facebook could still very well exist and still make more money than any of us ever will. Just without the brain-rotting engagement algorithm.
Why not educate users about the damage that misuse and abuse do to the attention span, instead of banning things?
I vaguely recall two students back in the era when our biggest distraction was MSN Messenger and our university forums. They kept both off until late at night.
We're letting people experience the downsides of the attention economy when it's almost (if not entirely) too late to avoid the negatives.
> Why not educate users about the damage that misuse and abuse do to the attention span, instead of banning things?
Because social media sits precisely in the short-term-benefit vs. long-term-risk territory that human brains are bad at conceptualizing. Same reason we mandate seat belts in cars.
Hardly anyone in the "west" gets pulled over by police for seat belt checks (unlike, say, India or China), yet nearly everyone still wears one, because they understand that if they don't, they'll probably become a stain on the asphalt. I imagine that if a law passed tomorrow saying seat belts no longer had to be worn, most people would still use them. Perhaps the regulation and enforcement are only needed initially, when not everyone is educated on the long-term risks.
To be fair, belts and phones aren't the same thing. Belts are popular now because wearing them is barely an inconvenience compared to the improvement in safety; abstaining from phones is way harder for the average person.
Reddit and HN can be very addictive, and Instagram, YouTube, and TikTok with a mere "highest upvoted" sort per topic would still be. I'm doubtful that your strategy would do very much about the problem.
I’d actually prefer HN and Reddit to be just chronological (or “newest comment” on the above-thread level), like traditional forums.
Even on something as anonymous as 4chan, where all comments are posted in chronological order, I see a difference in behavior after they added direct links to comments, so one could easily see how many reactions your comment got, as opposed to actually reading every comment.
I've no clue why people have downvoted this; you're right as rain. A phone is nothing short of a digital slot machine and shouldn't be put in front of adults or children. These algorithms are designed for profit, not humanity. They have far greater control over us than they should.
The funny thing is, they don't even have control. They can't push propaganda. They can just accelerate human desire. For all the brain rot they have created, they didn't even gain anything significant, just a few percent bump in "kEy pErFormAnce iNdiCatoRs".
Capitalism is a system that rewards the selfish and greedy. If you don't pursue every bump in key performance indicators you can, then someone else will and they'll eat your lunch.
Well, there's AI for reading source now. In fact, a lot of the time I spend arguing with AI is about the implementation of things like replaceAll('{{$value}}').
If value is foo, then the intent here is that {{foo}} would be replaced. But $ is sometimes a regex character, and because the pattern is passed as a string, is it using the string form of replaceAll or the regex form?
AI tends to be multilingual, and sometimes it's thinking in the wrong language, so this is the vibe-code version of 10 + 10 = 1010.
The docs can be ambiguous on this, so ideally read the source. Or heck, tell AI to read it for you, but someone has to read it, and it becomes another gotcha for engineers to understand, vibe code or not.
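To make the ambiguity concrete, here is roughly the same string-versus-regex distinction, sketched in Python rather than JavaScript (the template string below is made up for illustration; the actual case above is JS's replaceAll, so treat this only as an analogue):

```python
import re

template = "Hello {{$value}}!"

# str.replace treats the pattern as a literal string,
# so the "{{$value}}" placeholder is found and substituted.
print(template.replace("{{$value}}", "foo"))   # Hello foo!

# re.sub compiles the pattern as a regex, where "$" is an
# end-of-string anchor, so "{{$value}}" never matches and
# the template comes back unchanged.
print(re.sub("{{$value}}", "foo", template))   # Hello {{$value}}!
```

Whether a given replaceAll-style helper behaves like the first call or the second is exactly the kind of detail the docs can gloss over.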
I don't think that's the problem. It's time and effort; junior devs (and more broadly, humans) always want to see how little they can spend; here they incorrectly believe they can lean on LLMs and get a guaranteed increase in quality output. Even if that were true today (and we're nowhere near that), you would still either have to know how to evaluate the output for correctness, or trust that the model has handled it.
Junior devs seem to just trust it now, because they literally don't know better.
That is an important catalyst aggravating the problem. Juniors are reading less code and mostly generating it, and then the AI-produced code dissuades them from reading code even more, and their skills take even more of a hit.
Most docs are for experienced people to get valuable, exhaustive, and correct information when they need it. Learning the same concepts from scratch is a different paradigm, and you'd be better off with an actual book/tutorial. My most-used information repositories are MDN and the like, DevDocs|Dash, and sometimes library code when the documentation is lacking. But when I'm unfamiliar with a language/platform, I seek out a book or a guide to get familiar with it.
LLMs can't help you if you want to skip the learning|training phase. They will present you with something you can't judge, because you lack the qualifications to do so. You don't learn how to play piano by only listening to it, or how to paint by looking at pictures.
Very well put. I also read books to learn concepts. I usually only turn to documentation to remember the nitty-gritty details that are easy to forget. Consider loading the first 100 rows of a large CSV. About as basic as code can get. Yet as I move between languages and libraries, it's hard to remember all the different syntaxes, so I find myself looking up official documentation.
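For instance, with pandas (just one of the libraries I rotate through; the filename below is made up), the entire detail I keep having to look up is a single keyword argument:

```python
import pandas as pd

# Read only the first 100 rows of a large CSV; the easy-to-forget
# detail is that the knob is called `nrows` here.
preview = pd.read_csv("big_file.csv", nrows=100)
print(preview.shape)
```

In another library or language the same task hides behind a differently named option, which is exactly why I end up back in the official documentation.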
I possibly disagree with the principle, if I understood the point correctly: I have always found reading the `man` pages an inefficient process and searching for literals (with '/') often ineffective, and I have hoped one could finally query `man` content (and similar) in (more) natural language.
Yeah, fair. For unix tools I usually lean on AI too. But also, pandas has much better docs than man pages, tbh. Maybe pandas is just simpler than the unix world, hah.
Language specifics you can look up and confirm easily. But recently GPT-4o tried to convince me that Python added a pipe operator in 3.13. It even had sources. To my disappointment, that's just a lie. (https://chatgpt.com/share/67de9c77-d5f4-8012-9f1c-ac15b70aee...)
On the other hand, intuition and thought process are areas where I've had good experience with ChatGPT, e.g. deciding on architecture (tRPC vs gRPC vs REST for my use case).
I would say good uses: generating small code snippets, architecture decisions.
Bad uses: anything documentation-related, any specific feature, any nitpicks (just ask the security guys how good ChatGPT is at paying attention to the little things), anything you can look up in docs/references, anything where there is a clear yes/no answer.
Note: A good use of LLMs, imho, is getting a starting point for looking up docs, like
"What's that thing in Python like [x for x in...] called, and where can I find more info?" If you ask it for the exact rules of list comprehensions, however, it's going to tell you lies sometimes.
Edit 2: Unless you mean really general language specifics, like how to make classes in Ruby. In that case, yeah, that works.
> A good use of LLMs, imho, is getting a starting point for looking up docs, like
I like this and resonate with it a lot. Sometimes you don't know what you don't know, or you know just a little about what you might not know. This helps at least give you a name for the thing, which you can then verify from the source.
> Sometimes you don't know what you don't know, or you know just a little about what you might not know.
If you find yourself in this situation, that usually means you've skipped something along the way. Kinda like wanting to contribute to ffmpeg's core with no understanding of audio codecs and containers. Or wanting to code web applications with no knowledge of HTTP, the TCP stack, sysadmin basics, the DOM API, what cascading in CSS means,...