Legacy PC design misery (2009) (mjg59.livejournal.com)
42 points by JoshTriplett on Dec 27, 2014 | 25 comments


A lot of the non-mainstream PC clones, especially in non-conventional form factors, were more subtly different than those of major manufacturers like Compaq, and probably lacked the testing required to uncover bugs like this. I'm guessing the BIOS implemented the A20 enable call so that it returned success, but all it did was send a command to a nonexistent keyboard controller. (Especially on a PC laptop, not having an 8042 is a bad idea, since 8042 emulation is usually built into the EC and interfacing with it is far easier than requiring a whole USB stack, but I digress...) The OEM bought the BIOS and didn't fully customise it for the particular system's setup.
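For the curious, here's a minimal sketch of the classic 8042-based A20 enable sequence that such a BIOS call typically wraps (freestanding C with inline assembly; the outb/inb/kbc_wait_input_clear helper names are hypothetical, and the standard 0x64/0x60 port layout is assumed). On a machine where nothing actually answers at those ports, the writes simply go nowhere:

    /* Sketch: enabling A20 through the 8042 keyboard controller.
     * Assumes the classic port layout (0x64 = command/status, 0x60 = data). */
    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    static void kbc_wait_input_clear(void)
    {
        /* Wait until the controller's input buffer is empty (status bit 1). */
        while (inb(0x64) & 0x02)
            ;
    }

    static void enable_a20_via_8042(void)
    {
        kbc_wait_input_clear();
        outb(0x64, 0xD1);        /* "write output port" command            */
        kbc_wait_input_clear();
        outb(0x60, 0xDF);        /* output port value with the A20 bit set */
        kbc_wait_input_clear();
        /* If no real 8042 sits behind these ports, nothing happens, yet a
         * BIOS wrapper around this sequence may still report success. */
    }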

Things might've been better if the PC had been officially standardised, since AFAIK despite IBM publishing schematics and BIOS source code for everything up to the AT, other companies couldn't make use of that and still had to reverse-engineer the functionality.

I see a lot of people, mostly newcomers, complain about the "legacy" stuff, but keep in mind that the strong backwards compatibility is what made the PC platform as successful as it is. It has its quirks due to evolution, but the relative openness and stability of the design is why I'd still prefer it over some of the other platforms out there, e.g. ARM SoCs, where everything except the CPU cores varies widely between manufacturers and models. To me, a mostly openly-documented platform is far better than a proprietary one, even if the latter is "legacy free".


I wonder if such a standards group could be created now.

Support lifecycles for UEFI BIOSes, covering both security and non-security bugs, are desperately needed, for example.

Better documentation and a clearer definition for VGA would probably be nice too.


Really

Before 32-bit, x86 was a complete crapfest.

Segments, A20, chained IRQs, several controllers that are still exposed as separate devices: a whole chain of legacy stuff that has to be dealt with.
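To make the "chained IRQs" point concrete, here's a sketch of the cascaded-8259 reprogramming every x86 OS still does at boot, even if it immediately masks the PICs in favour of the APIC (freestanding C with a hypothetical outb helper; the standard master/slave ports 0x20/0x21 and 0xA0/0xA1 are assumed):

    /* Sketch: reprogramming the cascaded pair of 8259 PICs. */
    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static void remap_pics(uint8_t master_vector, uint8_t slave_vector)
    {
        outb(0x20, 0x11);  outb(0xA0, 0x11);   /* ICW1: init, expect ICW4  */
        outb(0x21, master_vector);             /* ICW2: master vector base */
        outb(0xA1, slave_vector);              /* ICW2: slave vector base  */
        outb(0x21, 0x04);                      /* ICW3: slave on IRQ2      */
        outb(0xA1, 0x02);                      /* ICW3: slave cascade id   */
        outb(0x21, 0x01);  outb(0xA1, 0x01);   /* ICW4: 8086 mode          */
        outb(0x21, 0xFF);  outb(0xA1, 0xFF);   /* mask everything for now  */
    }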

No wonder the ARM processor in your phone has a lower power consumption


ARM goes in the other direction from 'legacy': every device's boot initialisation is different.


And it's a good thing. No one needs to add redundant silicon that is used for 0.0000000001% of the device's lifetime (<1 second between power-on and loading the kernel).

People think we need ACPI on ARM; Linaro is even working on it right now :/ It's like no one gets what a clusterfuck ACPI was on the PC.


They target servers where the cost of redundant silicon is a lot less.


It's about time someone wrote "x86, the good parts"...


I believe it's titled The Intel 80386 Programmer's Reference Manual.


I don't think it's up to date. I'd consider AVX a good part, for instance.


Do Intel's fabrication plants count?


Regarding segments:

In 64-bit mode, CS, DS, SS, and ES are still there, but their bases are forced to 0; FS and GS can still have non-zero bases.

In 32-bit mode they still exist and all 6 are fully functional, afaik.
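A quick Linux-specific way to see this from user space is a sketch using the arch_prctl syscall (glibc keeps thread-local storage behind FS on x86-64, so the FS base is normally non-zero):

    /* Sketch: reading the FS and GS bases on Linux x86-64, showing they
     * can be non-zero even though CS/DS/SS/ES bases are forced to 0. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/prctl.h>

    int main(void)
    {
        unsigned long fs_base = 0, gs_base = 0;

        syscall(SYS_arch_prctl, ARCH_GET_FS, &fs_base);
        syscall(SYS_arch_prctl, ARCH_GET_GS, &gs_base);

        printf("FS base: %#lx\n", fs_base);   /* typically the TCB, non-zero */
        printf("GS base: %#lx\n", gs_base);   /* usually 0 in user space     */
        return 0;
    }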


Thanks for the info. I knew they existed in 32-bit, but didn't know about them in 64-bit.

I don't hate segments just because, but because of the whole 64K limitation in 16-bit mode.

(And real mode, and programs fighting over 640K while the machine had 4 MB, etc.)
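The arithmetic behind that complaint, as a tiny worked example: a real-mode address is segment*16 + offset, the offset is only 16 bits (hence the 64K window), and the few addresses the scheme can form past 1 MB wrap back to 0 when A20 is masked. A sketch, compilable anywhere:

    /* Sketch: real-mode segment:offset arithmetic and the A20 wraparound. */
    #include <stdio.h>
    #include <stdint.h>

    /* Linear address formed by a 16-bit segment and 16-bit offset. */
    static uint32_t real_mode_linear(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* A single segment can only reach 64 KiB of offsets. */
        printf("F000:0000 -> %#x\n", real_mode_linear(0xF000, 0x0000));
        printf("F000:FFFF -> %#x\n", real_mode_linear(0xF000, 0xFFFF));

        /* FFFF:0010 is 0x100000 -- with A20 masked, address bit 20 is
         * forced to 0 and the access wraps around to 0x000000. */
        uint32_t linear = real_mode_linear(0xFFFF, 0x0010);
        printf("FFFF:0010 -> %#x (wraps to %#x without A20)\n",
               linear, linear & 0xFFFFF);
        return 0;
    }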


I blame Intel. It's time for an x86 CPU that drops all of this compatibility nonsense. Imagine something crazy like x86 bootstrapping in protected (or long) mode.

real 16 bit? gone

virtual mode? gone

MSR? gone

CRx? gone, btw wtf happened to CR1?

I'm sure MS would be on board (if not ecstatic) with a CPU that can run only the newest version of Windows. Linux would happily adapt in a couple of weeks. There is maybe <1% of computers whose CPU ever touches this swamp of cruft and hacks outside of the BIOS/bootloader.


Intel already tried something like that with Itanium and it failed to gain any traction.


Itanium killed x86 compatibility, and that had zero chance of success in the commodity market, and little to medium chance in the server space. You can't jump in with a product that has no user base when your competitor (AMD) offers better and cheaper processors.

I am proposing removing _unused_ compatibility hacks. No one sane uses virtual 8086 mode. I wonder how much silicon real estate is taken up by all this garbage.


Was that through technical failing, or was the project just mismanaged into the ground like the Alpha? Or was it killed by AMD's backward-compatible 64-bit architecture?


Itanium failed to deliver on its basic promise: the simple in-order VLIW design was intended to be much easier to implement in hardware than the complex out-of-order, long-pipelined designs everybody else (including Intel's own x86/AMD64 line) was doing. But the actual Itanium CPUs were notorious for shipping years later than initially announced, and thanks to those delays the available out-of-order RISCs were usually much faster.


From https://lkml.org/lkml/2014/4/4/330 :

"I'll contact the people I know in Dell and see if I can find anyone from the firmware division who'll actually talk to us."

I wonder what happened afterwards?


> chunks of address space that contained lies rather than RAM

My favourite sentence fragment of the day :D


Nice writeup, but[1] I'm wondering: could the author have saved himself a lot of grief by not using an ancient, outdated bootloader in the first place?

[1]: I don't want to spoil the fun, but look at [2] in the article and wonder: who the hell is still using grub 1? Even mainstay isolinux (which in addition has to deal with lots of weird/broken BIOS implementations of CD booting (El Torito, anyone?)) does a lot better.


Because in 2009 grub 2 was a dreadful EFI bootloader and there were working patches for grub legacy. We shipped the bootloader we had, not the bootloader we wished we had.


This article was written in November 2009 and is 5 years old.


afaict, work on grub2 started back in 2004, and a backport patch[1] of the 'paranoid' A20 checking for grub legacy (the idea behind the check is sketched below) was posted to the bug-grub mailing list back in 2006.

In fact, the very first reply[2] to that patch makes exactly the same point I did above: why not use grub2?

[1]: http://lists.gnu.org/archive/html/bug-grub/2006-07/msg00015....

[2]: http://lists.gnu.org/archive/html/bug-grub/2006-07/msg00025....
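For context, that 'paranoid' checking boils down to a wraparound test: write distinct values at a low physical address and at the same address plus 1 MB, and only conclude A20 is open if they stay distinct. A rough sketch of the idea (not the actual grub patch; it assumes flat physical access, as a bootloader in protected mode has):

    /* Sketch: the classic A20 wraparound test a bootloader can perform
     * once it has flat access to physical memory. */
    #include <stdint.h>

    static int a20_appears_enabled(void)
    {
        volatile uint8_t *low  = (volatile uint8_t *)0x000500;  /* below 1 MB  */
        volatile uint8_t *high = (volatile uint8_t *)0x100500;  /* same + 1 MB */

        uint8_t saved_low = *low, saved_high = *high;
        int enabled;

        *low  = 0x55;
        *high = 0xAA;          /* if A20 is masked, this also overwrites *low */
        enabled = (*low == 0x55);

        *low  = saved_low;     /* restore whatever was there */
        *high = saved_high;
        return enabled;
    }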


The rest of the discussion on the email thread from 2006 on the A20 patch is about whether GRUB 2 is ready for use in production environments, so I am not sure what your point is.

My point wasn't that GRUB 2 did not exist in 2009, but that "Who the hell is still using grub 1?" was not as valid a question then, and is definitely not a reason for you to dismiss the article.


BTW, I think recent Intel CPUs finally got rid of A20M.



