15 years ago, if you were computing, you were almost certainly using a general-purpose computer. Phones were more locked down, but they were also basically just for texting and calling, and could never serve as a primary device.
Now, most people spend a majority of their time in a locked down environment (phones and tablets) consuming locked down content (subscription only, on approved devices). The default 15 years ago was open, now it is default closed.
Yup. Content was typically owned in physical form as a DVD or CD (maybe mp3?) and wasn’t subject to disappearing because a company went bankrupt or merely changed license agreements (stuff that happens now even for “bought” items).
We’ve also abstracted files away for long enough that the idea of playing something locally rather than streaming it is becoming more and more niche. Same for applications, which have moved more and more to web-based subscription models.
It's insane, really. Many of my friends have no idea what to do when I send them an .mp3 or an .mp4 file if it doesn't instantly embed as a media player in their chat application of choice. For them, the only way to share images or videos is by uploading them to a commercial, ad-infested sharing site.
By catering to the lowest common denominator we are creating a tech-illiterate society. This is on all of us who dumb down our features to make sure the users can understand.
Apple is intentionally hostile to sharing files and makes it very hard. It has improved slightly in recent iOS versions, but it's still super hard. You can’t just email yourself mp3s and play them like normal music on your phone; you have to use a third-party app or a PC or something to transfer them to iTunes.
_RapidGator, MegaUpload, and Mediafire angrily enter the chat_
In all seriousness, there really are very few reasons for sharing audio files these days. The only ones I can think of are:
- finding a work that is not online (like a specific live record or like 85% of early 80s hardcore that hasn't been remastered),
- pirating (which music streaming services have made ubiquitous for 99.95% of people who consume music; thanks, Sean Parker!), or
- audiophiles buying $10,000 balanced headphone cables with gold TRS jacks (because mics don't belong in headphones, _obviously_) who only listen to test tracks in FLAC format (and who don't have newer iPhones anyway)
As far as I remember, iPhones were able to play loose audio files, but you couldn't catalogue them into iTunes, which was annoying given that iPhoneOS (only 2010's kids remember this) didn't have a built-in file manager. Moreover, most of those files came compressed (_Mediafire's anger intensifies_), and iPhones didn't have a publicly usable extraction utility, which made working with them a huge chore.
I think the MP3 player in its earliest form was a concession that we lived with (because it was easier than dealing with CD jackets and anti-skip sucking) but that was absolutely destined for a streaming-only world (because my gut says that MOST people never wanted to get into the audio collection business; they only want to listen to their favorite songs from their glory days in college).
Yup. I taught a basic HTML course for grade school age kids this winter.
I had to start with how to make a file and a folder, what the desktop was, etc. Basically, the issue was that their computing experience was all platform-based. Everything for them was clicking a link, using the browser, walled Google Classroom gardens, etc.
When I was a kid, the younger you were the more computer-savvy you were. I figured it was due to being exposed to computers earlier in life, and I thought that in the future young people would be tech geniuses from growing up immersed in the internet. I was a fool.
> The default 15 years ago was open, now it is default closed.
Yes, but 15 years ago far fewer people did far less with computing devices of any kind than they do today. Imagine if, for example, I made this argument:
"In 1992, the default for using a personal computer was to both create and consume information, e.g. writing and reading email, writing and reading documents. Today the default is to just consume information, e.g. YouTube, TikTok, AppleTV."
That would be true, but not because of locking devices down. It would be true because in the last thirty years, the industry has expanded the number of users and their use cases by orders of magnitude.
The people who sent and received emails are still sending and receiving emails. Same for the people writing and reading documents. But all those people are also now watching TV and YouTube and TikTok on computers instead of analogue televisions in their recreational time AROUND the documents and emails.
And for every "maker" generating content of any type, whether it be programs, documents, videos, music, whatever, there are many, many people who just want to consume it.
How many people are involved in the construction and operation of the Webb telescope? How many people just want to see pictures of what it sees?
Makers are a small proportion of humanity, and even for makers, making is a small proportion of our use cases for tech.
The next thing then becomes, "Why can't we use GP computers for consuming all this content?"
And the tongue-in-cheek answer is, "Because Linux." Optimizing a device for makers often makes it sub-optimal for consumers.
I am a bassist. But I listen to music far more than I play music, and I have no interest in constructing a player-bass like a player-piano. For when I listen to music, closed-source "information appliance" ecosystems beat open-source general-purpose ecosystems.
I still maintain quite a few general-purpose computing ways to manage music, but honestly, it's more because I have an aversion to corporate control than any thought that it's easier to be in complete control.
Joel Spolsky wrote that the key feature of Napster wasn't that music was free, it was that you could type the name of a song and listen to it right away. The challenge for us as technologists pursuing a free future is that information appliances do this better than general-purpose ecosystems.
The challenge that you describe is primarily political, not technological. It wouldn't matter one bit if Linux became the perfectly polished consumer OS today if its users are still locked out of DRM'd video services by their owners, for example.
> Joel Spolsky wrote that the key feature of Napster wasn't that music was free, it was that you could type the name of a song and listen to it right away.
If that paraphrase is true then Joel Spolsky has no idea what he's talking about on this subject.
Without a doubt the key feature of Napster was that students could download-- not stream-- music. For free. Students would consequently fill their hard drives with everything they thought they'd want to listen to in their lifetime, often buying 2nd hard drives to populate with more mp3s. (Well, that and pron.) Keep in mind many dorms were still using dial-up connections during this period-- thus there was a pattern of students running to the library computer lab to download a few mp3 albums to zip disk (yes, zip disks) then bring them back to the dorms.
What facilitated immediate listening/viewing was sharing directories in Windows with the rest of the LAN on college campuses.
Quick digression to argue against the Gell-Mann Amnesia effect-- while your paraphrase of Joel Spolsky expresses an idea that is indeed false, I reserve judgement on anything else Spolsky has ever written (and frankly on whatever his verbatim words were on this same topic). I mean, what kind of pathetic, impulsive nimrod would I become if I simply threw out an entire body of someone's work on a single passing impression?
Edit: Just to cover my bases-- in every case I can remember, students who were playing music in their dorms or a shared space had winamp or some other such player loaded up with a playlist selected from thousands from their own collection. Napster was the place to download songs for your collection, not the place to build an ad-hoc playlist in realtime. Maybe there are cases where people were doing this. But the overwhelming supermajority of Napster users were using it because they could replicate a subset of the whole to build their own lifetime library of music. For free.
2nd edit: I almost forgot-- nearly everyone in the dorms would share a directory when they hooked up to the LAN. The process of finding immediately playable content was to browse the various shared directories in Windows, or use Windows search on them, which IIRC worked incredibly slowly or not at all. This was a common practice because, again, most people were still on dial-up and couldn't download anything from Napster nearly as fast as you could on the computer lab networks.
Later, YouTube made both music and video files immediately playable. Around that same time, torrent tech started to improve to the point where you could stream while downloading, as well as do keyword searches with vastly improved results. This is all to say that Napster kicked off a pattern of college kids grabbing free content, and this proliferation of content caused the development of realtime playback of discoverable content.
So what you paraphrased above isn't exactly "wet streets caused rain." It's more like "the issue isn't the rain-- it's the wet streets." I'm honestly not sure which is worse. :)
15 years ago the default was to consume content from TVs, which at the time typically had minimal computing power, although many people used them along with locked-down devices such as TiVo. I don't have any idea what kind of poll you could build to answer this question, but my bet would be that people spend about as much time on a general-purpose computer as they did in 2007, and the increase in phone and tablet usage reflects a computerization of previously free time.
Sadly, a lot of people prefer this over an open environment: locked-down environments usually work out of the box, and consuming recommended information is easier than searching for yourself. Centralization brings convenience at the expense of freedom, but it is much easier for us to feel convenience (or be frustrated by something inconvenient) than to understand the importance of freedom.
I'm not sure if people prefer it, or if they are strongly steered towards that environment.
My Apple Watch is super cool from a hardware perspective, but so locked down that I can't use it in the ways that I would like to. For example, it has a barometric pressure sensor, but I/my apps can't directly read the sensor data; instead, a filtered update is pushed to the app approximately every 1.5 seconds. Why? I know the sensor is capable of reading at 20+ Hz.
So that someone doesn't write an app that polls at 20+ Hz and burns the battery. Apple is optimized for user experience and simplicity, and that includes precluding bad behavior. If you want a real-time weather station, a smart watch is not the right tool.
My hot water kettle heats water, why can't I wire it up to be my whole house heater?
I mentioned it specifically because I write software for hobby devices that poll at 20 Hz and use literally the same Bosch sensors as Apple. The use case is gliding variometers (audio vertical-speed indicators).
The sensor that is in the Apple Watch draws significantly less than 1 mA when polled at 20 Hz. Without an EE degree, I have my devices, including the 90s-era processor and piezo speaker, running for 100+ hours on a 150 mAh button cell.
I cite this example because I KNOW what is possible. This is a pure software issue.
I suspect that Apple rate-limits because the raw sensor data is quite noisy and would look glitchy in a badly designed app. But there is a lot of signal in that noise that I want access to. Instead, people in the gliding hobby spend hundreds to buy devices that have the same sensor package as an iPhone 6, but are able to access the sensors in ways that are useful.
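For the curious, here's a minimal sketch of what a vario does with 20 Hz pressure samples-- Python, purely illustrative; the hardware read itself is assumed, and real varios use better filters than this. (The battery math above also checks out: 150 mAh at a ~1.5 mA average draw is ~100 hours.)

```python
P0 = 101325.0  # ISA sea-level reference pressure, Pa

def pressure_to_altitude(p_pa):
    """Standard barometric formula: pressure (Pa) -> altitude (m)."""
    return 44330.0 * (1.0 - (p_pa / P0) ** (1.0 / 5.255))

class Variometer:
    """Smoothed climb/sink rate from 20 Hz pressure samples.

    Plain exponential smoothing here; real varios use fancier
    filters, but the idea is the same: lots of noisy samples in,
    one usable vertical-speed signal out.
    """

    def __init__(self, alpha=0.1):
        self.alpha = alpha    # EMA smoothing factor
        self.alt = None       # last altitude, m
        self.t = None         # last timestamp, s
        self.vspeed = 0.0     # smoothed vertical speed, m/s

    def update(self, p_pa, t):
        alt = pressure_to_altitude(p_pa)
        if self.alt is None:              # first sample: just prime state
            self.alt, self.t = alt, t
            return 0.0
        dt = t - self.t
        if dt <= 0:                       # guard against bad timestamps
            return self.vspeed
        raw_vs = (alt - self.alt) / dt    # noisy instantaneous rate
        self.vspeed += self.alpha * (raw_vs - self.vspeed)  # EMA tames noise
        self.alt, self.t = alt, t
        return self.vspeed

# A steady ~1 m/s climb (~0.6 Pa drop per 50 ms sample at sea level):
vario = Variometer()
for i, p in enumerate([101325.0, 101324.4, 101323.8, 101323.2]):
    print(round(vario.update(p, i * 0.05), 2))
```

The point: the climb-rate signal lives in small, fast pressure changes-- exactly what a filtered push every ~1.5 seconds throws away.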
Well, some of my friends prefer Apple's walled garden because they think the applications there are better than open-source ones, and they consider policies requiring Apple to allow third-party app stores to be dangerous to users...
There can be many answers to such a 'why'. While we may see 'a sensor' with some properties like data format, refresh rate, etc., that sensor is a mere 'implementation' of a desired piece of functionality in a product design.
It used to be that the designed feature or function would be very close to the implementation, but that really hasn't been the case for a very long time. People aren't buying "a large bank of memory addresses" but rather "a device that contains pictures", for lack of a better example.
With the watch, a customer isn't actually buying a package of sensors, ARM cores, a BMS, a lithium-ion battery, and a display; they are buying the experience of having a device that tells the time, notifies them if something happens, and can track some aspects of their life so they have an overview of it later (be it for turning their life into a game or simply tracking their energy use/consumption). And all of that on a charge that lasts at least an entire day.
Why would the implementation of the feature result in a sensor that can be polled at a high frequency but whose data is actually only pushed at a lower frequency? It's anyone's guess, but here is mine:
The sensor has its own specs, but those are set in isolation and might differ based on the implementation inside a casing, so the only way to get true data would be some form of calibration or offset, where a low-power CPU core dedicated to sensor tasks gets the raw values and applies the offset/calibration. Next, there is power consumption: they might have found the perfectly balanced duty cycle between data that has had enough time to cool down and be useful, and the power requirements of the sensor core and the sensor itself. So they have some sort of RT OS doing reading, processing, etc. on a low-power core at a lower interval to get a 1% battery-life increase. Do that for 10 sensors and suddenly it's worth it. It's quite an investment to have a team of people dive into the hardware, firmware, and application development to do all that, so it's likely not a matter of "how can we spend a multi-million R&D chunk on making the hardware less useful", but rather some "how do we make millions of mass-produced devices use a little bit less power" concept.
This is also where the push vs. pull comes from: instead of having every application do some interrupt or scheduling, you just ask to be part of a list of observers, and when the data changes you get notified. Much more efficient, and if everyone has to do that, there is a much smaller chance of the user experience suddenly changing and support personnel (phone, in-store) getting complaints about something they can't fix because some third-party app did it.
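To make the push model concrete, here's a toy observer sketch (Python, purely illustrative-- not Apple's actual API): the OS reads the hardware once, on its own cadence, and fans the filtered sample out to every subscriber.

```python
class PressureSensorHub:
    """Toy push model: one reader fans samples out to observers."""

    def __init__(self):
        self._observers = []  # callbacks registered by apps

    def subscribe(self, callback):
        """An app asks to be told about new samples, instead of polling."""
        self._observers.append(callback)

    def publish(self, sample_pa):
        """Called by the OS on its own cadence (e.g. every ~1.5 s)."""
        for notify in self._observers:
            notify(sample_pa)


hub = PressureSensorHub()
hub.subscribe(lambda p: print(f"app A sees {p} Pa"))
hub.subscribe(lambda p: print(f"app B sees {p} Pa"))
hub.publish(101325.0)  # one hardware read serves every subscriber
```

One hardware read serves every app, and the cadence stays under the OS's control-- which is exactly the support-load argument above.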
> Now, most people spend a majority of their time in a locked down environment (phones and tablets) consuming locked down content (subscription only, on approved devices). The default 15 years ago was open, now it is default closed.
I love having the choice. Most of the time I want the locked-down thing. A bunch of extra "freedom" I'm not actually using is just a hassle and liability when all I wanna do is take and crop some photos, or do some writing, or whatever.
Now we have that option, and then also general-purpose computers are so cheap and widely available that you can easily score decent ones for free.
Far from being some computing apocalypse, the situation now is excellent. Computers were a barely-tolerated near-failure for most normal use cases, but now there are options that don't suck for normal people wanting to do normal things. And general-purpose computing is more accessible than ever.
Maybe some Dark Age of Computing still lurks, but it's not happened yet, and things have only gotten better so far.
IMO the worst thing about the current computing era are the handful of Web monopolies and the way "AI"/machine-learning has mostly enabled systems working against us rather than creating agents working entirely on our behalf.
This exactly. Mobiles are now the standard general-purpose computers, and they're all locked down. We don't really own our devices anymore, nor our data :(
More seriously, I wonder if we should look at the raw numbers. Before, perhaps the people who now use a mobile as a general-purpose computer did not even have a computer. And likewise maybe more people have a PC-type device than before, but still represent a smaller fraction. I take heart in this.
> And likewise maybe more people have a PC-type device than before, but still represent a smaller fraction. I take heart in this.
I also question this whenever I see that "people mostly use locked-down devices now". But I don't have any numbers; honestly it wouldn't surprise me to be wrong.
> Edit: see e.g. the rise of the home lab movement
Sure, but care should be taken for this not to skew the numbers. That more people can now afford to have a mini-datacenter in their house – and therefore there being more computers out there – doesn't mean that there are more distinct people whose main computing device is a general-purpose computer.
To me, the people interested in the home lab movement are those who already were interested in doing "more" with computers, as opposed to just "consuming recommended content".
I know it's my case and I do it now because computers are cheaper, and it's easier to find quiet ones.
> Sure, but care should be taken for this not to skew the numbers. That more people can now afford to have a mini-datacenter in their house – and therefore there being more computers out there – doesn't mean that there are more distinct people whose main computing device is a general-purpose computer.
I've always done this, even when I didn't have a lot of money. Server hardware has always been cheap because most of its audience won't even think about buying it secondhand; they will only buy new with warranty and 4hr support. I got $400 Fibre Channel cards for $10 because literally nobody wants them, and companies throw out perfectly good cards when the warranty expires. It's a joke.
In the early 2000s I had Sun Fires and Netras. I had an HP 9000 HP-UX box with 1GB (in the day when that was a ridiculous amount of memory). These days I have HPs.
A home lab has always been within reach. In fact, I find it harder now due to energy consumption, whereas energy used to be cheap.
Maybe I didn't know where to look in the early 2000s, but I do find it much easier now. Basically all of my home lab is what my work was going to throw away – so free.
However, the biggest issue for me, up until very recently, was noise. I have a ~4-year-old Lenovo ThinkServer that is quieter than some laptops. I also have my eye on a somewhat older HP rack-mount server that is also extremely quiet and should be decommissioned soon.
But up until around 2010, a rack-mount server would have driven me insane. Ditto for switches. Around 2012 I first saw a then-new HP model that was fanless, but it only did 100 Mbps; all the gigabit models came with a jet engine attached. Now I've managed to find a fanless gigabit Ruckus with a few 10G ports, since they're starting to be old enough.
Throwaways from work were one major source of my home lab too. However for me this has become a lot harder in recent years, because our company scrapped its own datacenters and moved to the cloud. A hallway full of decommissioned servers is now extremely rare. We have some "computer rooms" left (not allowed to call them "datacenters" anymore) but it's just for the stuff that really must be on-site.
Other than that, online marketplaces. Not eBay generally, because its auction system and international reach drive prices up, even for items which normally gather low interest. I tend to use local buy & sell websites where people usually offer lower prices than advertised, and these kinds of items are not very popular so they tend to go cheap.
I've never seen fanless servers, but my home lab is not something I keep running 24/7 anyway. And it sits in a dedicated room with my 3D printers and electronics workbench so it's not the kind of place I hang out for peace and quiet anyway :) It's my mancave really (though, for lack of a partner, right now my whole apartment is a mancave :) ).
My 24/7 stuff I do pick for energy efficiency and, to a much smaller extent, noise. I have 4 NUCs for this stuff: 2 nice ones with 4/6 cores and 64GB RAM, and 2 ancient ones (one Atom and one Skylake, IIRC) which are very low-power. They're the ones that keep running when everything switches to UPS.
I'm not really big into networking, so I have some semi-managed TP-Links that I bought new for 35 bucks. They're gigabit, 8 ports, and can do basic stuff like VLANs and mirroring, which is all I need. I'm not doing any CCNA stuff or anything.
I really didn't know anyone that didn't have a computer in the years 2000-2010. Even old people with hardly any PC knowledge. I fixed enough old crappy PCs for them :)
And yeah, I know you can unlock Android, but it's really, really fringe. Most people don't. And "most" here is like 95%.