I loved the Amiga but I loathed all the XT/AT segmented memory bitplaned VGA 16-bit stuff. That to me is the deep dark ages.
I get the sentiment, but I have to nitpick the details ;)
VGA isn't bitplaned. It's chunky -- to put a pixel on the screen in 320x200x8 VGA mode 13h you literally just write a single byte to the VGA memory. The Amiga, on the other hand, does use planar graphics.
(Maybe you're thinking of EGA, which is planar and a pain to program for)
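To make the contrast concrete, here's a rough sketch in C, with plain arrays standing in for video memory (on real hardware the mode 13h framebuffer lives at segment A000 and the Amiga's bitplanes live wherever you point the display hardware; the function names and the flat-array memory model are my own simplification):

```c
#include <stdint.h>

/* Chunky (VGA mode 13h): one byte per pixel, one write per pixel.
   `fb` stands in for the 64000-byte framebuffer at A000:0000. */
void putpixel_chunky(uint8_t fb[], int x, int y, uint8_t color) {
    fb[y * 320 + x] = color;   /* that's the entire operation */
}

/* Planar (Amiga-style): each bitplane holds one bit of the pixel's
   color index, so a single pixel means a read-modify-write on every
   plane. `width` is in pixels; each plane row is width/8 bytes. */
void putpixel_planar(uint8_t *planes[], int nplanes, int width,
                     int x, int y, uint8_t color) {
    int byte = y * (width / 8) + x / 8;
    uint8_t mask = 0x80 >> (x % 8);
    for (int p = 0; p < nplanes; p++) {
        if (color & (1 << p))
            planes[p][byte] |= mask;
        else
            planes[p][byte] &= ~mask;
    }
}
```

One write versus N read-modify-writes per pixel is exactly why planar pixel-pushing code leans so heavily on masks, the blitter, and other tricks.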
The old EDIT never shipped with any 64-bit Windows IIRC, since it was a 16-bit MS-DOS application. I believe 32-bit Windows 10 still has it...?
As someone who (mercifully) only occasionally has to touch Windows machines, I keep forgetting this, and then when I try to do stuff I’m flabbergasted that the operating system does not include a terminal text editor. (In a fit of pure desperation I even typed EDLIN into the Command Prompt — no go ;)
That was the case with Win11 about a year ago; if they finally started shipping EDIT64 then hey, that’s one positive recent change in Windows I suppose.
Well, there was a workaround (that I only learned today) for creating new files:
copy con file_to_edit.txt
Type text, end with CTRL+Z. Don't make any typos.
That's what web search told me, but then looking at the remarks in docs for `copy`[0], I have to wonder if this works now, and if it would've worked back then:
copy prefix.txt+con+suffix.txt output.txt
If it does, then combined with some clever use of `find`, `findstr` or `for` (whichever was available back then), you could probably get something that's half-way between EDIT.COM and a line editor.
(`more` would come in handy here, but AIUI, there's no way to run it non-interactively in cmd.exe? Don't have a Windows machine handy to check it right now.)
So then act now, because the best time to act was yesterday, and the longer you wait the worse the mess and pain becomes. Not acting at all is not an option.
i.e. we get "embrace" without the "extend and extinguish"
This only ever happens when the party trying to EEE is fighting a losing battle. If they have the upper hand, they will always get to the extend and extinguish part. Do we think movements for user freedom have the upper hand right now?
Sure, but util-linux and the BSDs won't suddenly cease to exist. If you don't like what Ubuntu is doing, just don't use it.
And then websites and applications stop working if you're not using a verified, attested, locked-down OS and you're stuck with your nice free software system that will not do your online banking, let you chat with your friends, or access your company resources.
At that point I'll just move into the woods with a typewriter and chat with my friends via ham radio
Edit: Also, why would some userspace components in a slightly-less-free license cause this to happen? If the powers-that-be want to shut you out of the internet, they can do it now; lots of proprietary software already exists.
> Also, why would some userspace components in a slightly-less-free license cause this to happen?
It won't, in itself, but it appears to be yet another little push forward on the slippery slope that probably will end where it appears to inevitably end.
But again, the BSD userspace already has a permissive license. If the mustache-twirling villains want to lock down stuff, they can do it now. They don't need any push forward.
Yeah, but people don't really want to use the BSD userspace. A lot of the Linux stuff people want to build on assumes a GNU userland and it's not trivial to build a BSD/Linux that actually does relevant computer stuff.
But in places where that stuff isn't relevant, we already see a lot of locked-down devices like the Nintendo Switch and PlayStation based on BSD precisely because they can leverage free software but still lock it down. macOS with its BSD userland is also kind of like this -- the OS is getting gradually more locked down over time, but the frog boils slowly.
If you tighten the screws too hard and fast then people will scream and yell and maybe leave your business for a competitor -- even though it's technically feasible, that means you can't disallow access to banking websites for generic-browser-on-generic-OS now. But we are, brick by brick, building a foundation where that will seem inevitable.
The argument is basically that making it easier to lock down general purpose computing devices like desktop computers (by, for example, making a non-GPL drop-in replacement for GNU *utils) will eventually aid in making it happen. The powers that be will use tried-and-true arguments about security and think-of-the-kids etc to make it seem like running a mutable, untrusted OS is an unacceptable risk.
>that means you can't disallow access to banking websites for generic-browser-on-generic-OS now. But we are, brick by brick, building a foundation where that will seem inevitable.
If you have too much non-standard stuff going on in your browser or mobile device, this is already happening, to a degree. Not a hard block, but increasing difficulties.
People give away their freedoms all the time. Most people are walking around with Facebook and TikTok tracking their every move. They don't care.
Some Linux users aren't going to stop this sort of thing from happening. If Chase Bank wants to only allow macOS and Windows 11 computers to access their website, the 1% of their userbase that uses something else isn't going to move the needle, and 99% of their users won't care (or even notice).
If this was going to happen, it would have already happened. The pieces are all there already.
> People give away their freedoms all the time. Most people are walking around with Facebook and TikTok tracking their every move. They don't care.
This is absolutely true. I'm saying someone should care, because it does matter.
> Some Linux users aren't going to stop this sort of thing from happening. If Chase Bank wants to only allow macOS and Windows 11 computers to access their website, the 1% of their userbase that uses something else isn't going to move the needle, and 99% of their users won't care (or even notice).
For some businesses, losing 1% of your customers is actually a lot of customers and a lot of money, and all else being equal they would prefer to not lose them.
> If this was going to happen, it would have already happened. The pieces are all there already.
No, they really aren't. Again, it's perhaps technically feasible to flip the switch, but it doesn't make business sense yet.
How many people are doing online banking without running on a fully cryptographically verifiable/attestable OS? This means everyone not using a TPM, Secure Boot, etc. This means grandpa with an old Windows 10 machine or an old Mac that perhaps he should not still be using but he doesn't care, he just wants to pay his bills. I don't have numbers of course but I bet you this starts looking like a hell of a lot more than 1% of the userbase.
There are web APIs for this sort of thing in all major browsers but no one is really using them yet. But they exist for a reason, much like Windows 11 requires a TPM for a reason, and this tech will at some point be deployed for things like online banking. Of course it will.
> If this was going to happen, it would have already happened. The pieces are all there already.
Same things were said for:
- Removal of DRM from music: Happened.
- Age verification on the internet: Happening.
- Locked down personal devices: Happened.
- Total surveillance in cities: Happened.
- Not being able to buy but only rent: Happened in many digital formats.
- Internet activation of software: Happened.
- Tracking individual persons real-time: Happened.
- Browser attestation: Google is trying hard.
- Attestation for Internet Banking: Reality in S. Korea.
This resonates. The aftereffects of age verification and the general exclusion of freedom-loving coders are going to leave me standing here in the tumbleweeds, with my 90s Toyota and a solar-panel laptop, unregulated radio frequencies my only communication with the outside world.
It's like those movies coming true. I've already had casual user accounts frozen just for accessing them via VPN, or for some other inscrutable reason.
I'm with you and the only solace in this dystopia is the fact that I increasingly feel like I just don't care. I don't really like using computers anymore. I liked them when they represented freedom and creativity.
So fine, exclude me from all your platforms, there's nothing there for me. It's all bad content from bad people (or increasingly: not even people) running on bad software. I'm not giving up my freedom to partake in that, I'd rather just stop using your shit.
(But I would very much like to be able to pay my bills and buy my train tickets, so I'll play your game and have a smartphone. Fine. You win this round.)
The lack of dedicated registers meant a lot of memory access, which slowed things down considerably.
It gets worse because the TI99 only has 256 bytes of RAM directly addressable on its 16-bit bus. All the other memory in the system is video RAM and is accessed 8 bits at a time through the video display processor. Oh, and you can only do this when the VDP is not accessing the memory. This is incredibly slow and severely hobbles the potential performance of the CPU.
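The access pattern looks roughly like this -- a toy C model, not real TI-99 code (the two control-port writes and the auto-incrementing data port follow the general TMS9918 protocol; the struct and helper names are my own invention):

```c
#include <stdint.h>

/* Toy model of TMS9918-style VDP access: the CPU never sees video RAM
   directly. It writes a 14-bit address to the control port as two
   bytes, then moves data one byte at a time through the data port,
   which auto-increments the address. */
typedef struct {
    uint8_t vram[16384];
    uint16_t addr;
    int latch;       /* which half of the address we're expecting */
    uint8_t low;
} Vdp;

void vdp_control_write(Vdp *v, uint8_t b) {
    if (!v->latch) { v->low = b; v->latch = 1; }
    else {
        /* bit 6 set = set up for writing; low 6 bits are address MSBs */
        v->addr = ((uint16_t)(b & 0x3F) << 8) | v->low;
        v->latch = 0;
    }
}

void vdp_data_write(Vdp *v, uint8_t b) {
    v->vram[v->addr] = b;
    v->addr = (v->addr + 1) & 0x3FFF;  /* auto-increment */
}

/* Copying N bytes into VRAM: address setup, then one transaction per
   byte, all through the 8-bit data port -- this is why it's slow. */
void vdp_copy(Vdp *v, uint16_t dst, const uint8_t *src, int n) {
    vdp_control_write(v, dst & 0xFF);
    vdp_control_write(v, 0x40 | (dst >> 8));
    for (int i = 0; i < n; i++) vdp_data_write(v, src[i]);
}
```

Every byte of "main" memory traffic squeezes through that narrow data port, on top of waiting for the VDP's own screen accesses.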
The whole thing seems like it was designed in a parallel universe, or at least it reeks of some kind of a sunk-cost-fallacy design-by-committee thing.
Supposedly what happened is that the system was originally designed to have either an 8-bit CPU, or a 16-bit CPU with an 8-bit bus (cf. 8086/8088) like TI's own TMS9985, but at some point it was decided that they should instead cram their full 16-bit TMS9900 minicomputer CPU (!) into the thing. This decision basically tanked the whole architecture.
It was too late/too expensive to redesign the 8-bit support chips into 16-bit counterparts, so they had to make some really out-there decisions like "talk to the graphics chip and give it an address to read/write every time you want to use memory" and "software is written not in machine code, but in GPL (Graphics Programming Language), which is then interpreted at runtime by an interpreter running on the TMS9900".
Software on ROM cartridge for the system is stored as GPL and is fetched from ROM by the CPU (but wait! The ROMs are not in the memory space like they would be on a sane computer; they are serial-access GROMs read 8 bits at a time through memory-mapped I/O) and interpreted. This is slow. When you write your own software in BASIC, however, it gets worse: the BASIC interpreter is itself written in GPL, so your program is stored in video RAM, fetched back from video RAM a byte at a time, interpreted by the BASIC interpreter, which is in turn interpreted by the GPL interpreter. THIS IS EVEN SLOWER.
Needless to say, the BASIC on the TI99 is dramatically slower than the already slow implementations on other contemporary micros.
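For a feel of why every interpretation layer costs so much, here's the generic shape of a bytecode dispatch loop (toy opcodes of my own invention -- not actual GPL):

```c
#include <stdint.h>

/* A toy dispatch loop with made-up opcodes (not real GPL). Every
   interpreted "instruction" costs a fetch, a decode branch, and then
   the actual work -- and on the TI-99/4A the fetch itself goes
   through the slow GROM/VDP path on top of that. */
enum { OP_PUSH, OP_ADD, OP_HALT };

int run(const uint8_t *code) {
    int stack[32], sp = 0;
    for (int pc = 0; ; ) {
        switch (code[pc++]) {          /* fetch + decode, every time */
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```

TI BASIC effectively stacks a second loop like this on top of the first, which is where the "EVEN SLOWER" comes from.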
It DOES have a full 16-bit CPU which is theoretically much more powerful than a 6502 or Z80 but this wild-ass implementation of... well, everything, makes the system probably the least capable machine of the era.
RAM was very expensive then, and 16-bit CPUs weren't enough faster to justify the cost if you were aiming for the home market.
Both true, which makes this an even more baffling choice -- why pick the more expensive, state-of-the-art 16-bit CPU* that you're getting little or no benefit from, plus 16K of extremely slow-to-access combined video and system RAM? You could have used a cheaper 8-bit CPU and maybe, for the same budget, fit 4K or 8K of system RAM on the bus plus some amount of dedicated video RAM for the VDP. This would have been faster and more useful in nearly all real-world applications, made for a much cleaner board design and easier development, and probably been cheaper. That's what everyone else did.
Then again, what was this machine's target market?
* The reason is probably that TI wanted to show off their state-of-the-art CPU tech and be able to point to the spec sheet and say "look, it's 16 bit! All our competitors are only 8 bits -- that's half as many bits!"
The design "decisions" are easy to explain. The 9985 failed. They had a development prototype with a 9900 emulating the expected CPU. The 9918 VDP was the cheapest way to add 4K, later 16K, of DRAM. And that was what they shipped after the 9985 was killed.
------------------------------
From 1977 they expected a 9985 to succeed the cheap 40-pin 9981, both having an 8-bit external bus (1). It would have 256 bytes of RAM onboard. I speculate it would have had the 9900 microcode optimizations seen in the military SBP9989.
Anecdotally, the 9985 failed seven tape-outs. It was killed. The Bedford UK team was tasked with starting over: eventually this produced the 9995.
But the Home Computer had been prototyped using a 9900 board. So that was forced into the 99/4 (not A) with some external 256 byte SRAM.
Memory was expensive. The 9918 VDP, made by a team in 1975 with junior engineer Karl Guttag, was the cheapest way to interface 4K DRAMs, which TI made and sold to itself. By the time it reached market, 16K in 8x 4116s was optimal.
Various efforts to cost-reduce and upgrade the 99/4A ran into the '82 price-war with Commodore.
Every design iteration that added more RAM (2, 8, or 16K directly accessible from the CPU) was "paid for" by reducing cost elsewhere (PALs, for instance). BOM was around $105. (3)
But in the price war, engineers were told to deploy the cost savings without any new features: this was the 99/4A 2.2, or QI for "quality improved". (3) The 99/4A was already a loss leader by Q4 1982. (5)
In 1981, Karl Guttag's new 9995 passed first silicon (2). It used the new optimized 99000 CPU core, which also famously passed on first tape-out. The 9995 was available in quantity in 1982 (3), when new consoles were started around it: the 99/2 and 99/8.
The 99/2 was supposed to be cheap enough to compete with Sinclair. (6)
The 99/8 was a technical beast for the high-end, having 64K of directly accessible RAM. Its fancy memory mapper drove 24-bit external addresses. It supported 512K off-board, which the P-Box had been designed for. It had Pascal built-in. Yet there was no Advanced VDP for it: it was stuck with the same 9918A.
In early 1983, TI assembled a team of two dozen engineers to write software for it: Pascal applications, a new LOGO, a database, a new word processor, TI FORTH, a complete accounting package, and a rumored superior easy-to-use interface. Pascal was supposed to deliver many benefits. It would be a small business machine. (4)
Of course, in November 1983, all efforts ceased as Home Computer was cancelled--just as the consoles were to be unveiled at Winter CES.
-----------
(1) An 8-bit bus was always going to be optimal--even the IBM PC 8088 saw that. 16-bit peripheral chips were never going to be made: the package size would prohibit that.
(2) Electronics Magazine and EE Times articles
(3) Internal memos of Don Bynum, program manager
(4) TI Records, DeGolyer Library, SMU : Armadillo and Pegasus
(5) "Death of a Computer", Texas Monthly, end of 1983?
(6) BYTE Magazine June 1982-ish
Based on research for my book: _Legacy: the TI Home Computer_.
Thank you for this; it's very interesting, detailed context.
Do you think there is a possible world where TI would have swallowed their pride and considered not-invented-here options like a regular 8080/Z80/6502 as the CPU?
I have a few ideas but I think they were set on using their own chip.
There was a memo asking if TI should support those other CPUs in their AMPL prototyping system (990 based tools and in-circuit emulator). That investment was rejected.
Anecdotally, Don Bynum was unhappy with slow progress on defining the Home Computer, and hacked together a Z80 based machine. The engineers redoubled their efforts... supposedly...
There's politics between the Calculator division (all consumer products), Semiconductor, and Data Systems Group.
Still, TI had a TMS8080 (and later their own 486).
I'll work on this idea, thanks...
----
As a child, I knocked some books off a garage shelf once and was plonked on the head with copies of The 8080 Bugbook. What the heck was a Bugbook? Or an 8080?
Some years later, a 9995 data sheet fell on my head and I thought how hard can it be to wire up a computer?
"Section 1.23 of the IBM Agreement states that IBM Licensed Products "shall mean IHS Products ..." The only logical conclusion is that the parties meant those IHS Products specifically identified in Section 1.2 of the Agreement. Section 1.2 does not limit the products to IBM designed products."
"Therefore, IBM has the right to act as a foundry and to make, use, lease, sell and otherwise transfer the microprocessors in question to Cyrix free of any claims of patent infringement."
This argument would sound nearly identical if you made it in the 70s or early 80s about mainframes and personal computers.
It's not that mainframes (or supercomputers, or servers, or the cloud) stopped existing, it's that there was a "good enough" point where the personal computer was powerful enough to do all the things that people care about. Why would this be different?*
And aren't we all paying for a bunch of silicon that sits mostly unused? I have a full modern GPU in my Apple SoC capable of throwing a ridiculous number of polygons per second at the screen and I'm using it to display two terminal emulator windows.
* (I can think of a number of reasons why it would in fact turn out different, but none of them have to do with the limits of technology -- they are all about control or economic incentives)
It’s different because of the ubiquity of the internet and the financial incentives of the companies involved.
Right now you can get 20TB hard drives for cheap and set up your own NAS, but way more people spend money every month on Dropbox/iCloud/OneDrive - people value convenience and accessibility over "owning" the product.
Companies also lean into this. Just consider Photoshop. It used to be a one-time purchase, then it became a cloud subscription, now virtually every new AI feature uses paid credits. Despite having that fast SoC, Photoshop will still throw your request to their cloud and charge you for it.
The big point still remains: by the time you can run that trillion parameter model at home, it’s old news. If the personal computer of the 80s was good enough, why’s nobody still using one? AI on edge devices will exist, but will forever remain behind data center AI.
> Right now you can get 20TB hard drives for cheap and set up your own NAS, but way more people spend money every month on Dropbox/iCloud/OneDrive - people value convenience and accessibility over "owning" the product.
Yes, this is a convenience argument, not a technical one. It's not that your PC doesn't have or could have more than enough storage -- it likely does -- it's that there are other factors that make you use Dropbox.
So now the question becomes: do we not believe that personal devices will ever become good enough to run a "good enough" LLM (technical barrier), or do we believe that other factors will make it seem less desirable to do so (social/financial/legal barrier)?
I think there's a very decent chance that the latter will be true, but the original argument was a technical one -- that good-enough LLMs will always require so much compute that you wouldn't want to run one locally even if you could.
> If the personal computer of the 80s was good enough, why's nobody still using one?
What people want to do changes with time, and therefore your PC XT will no longer hack it in the modern workplace. But the point is that from the moment a personal computer of any kind was good enough, people kept using personal computers. The parallel argument here is: if there is a plateau where LLM improvement slows and converges with the ability to run something good enough on consumer hardware, why would people not just keep running those good-enough models on their own hardware? The models would get better with time, sure, but so would the hardware running them.
The original point that I was making was never purely a technical one. Performance, economics, convenience, and business trends all play a part in what I think will happen.
Even if LLM improvement slows, it’ll probably result in the same treadmill effect we see in other software.
Consider MS Office, Adobe Creative (Cloud), or just about any pro level software. The older versions aren’t really used, for various reasons, including performance, features, compatibility, etc. Why would LLMs, which seem to be on an even faster trajectory than conventional software, be any different? Users will want to continue upgrading, and in the case of AI, that’ll mean continuing to access the latest cloud model.
No doubt that someone can run gpt-oss-120b five years from now on device, but outside of privacy, why would they when you can get a faster, smarter answer (for free, likely) from a service?
The benchmarks are not invented by the LLM; they are from an issue where Scott Shambaugh himself suggests this change as low-hanging (but low-importance) perf-improvement fruit:
Ah, fair enough. But then it seems the bot completely ignored the discussion in question; there's a reason they spent time evaluating and discussing it instead of just making the change. Having a bot push on an issue the humans are already well aware of is just as bad behaviour.
This is how it always is, until suddenly one day it isn't. Linux didn't play in the same league as serious and commercial UNIX systems until one fateful day it killed them all dead forever.