Linear which RGB? I'm being pedantic here, but this will start to be a real problem as wide-gamut displays become more commonplace. A colour space such as sRGB is first a definition of where in CIE space the three primaries sit, and then a transfer function for how linear values are encoded in various formats. For fonts, you should probably render them in 16-bit, keeping only coverage data, i.e. a linear alpha channel only. Then you can composite in the same linear colour space as your display, but with more bits per pixel. This wasn't done in the past because it needs custom shaders for the compositing and requires quite a lot of memory.
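To make that compositing step concrete, here is a minimal sketch in C++ (purely illustrative: the variable names, the 8-bit sRGB endpoints and the 16-bit coverage value are my assumptions, not anything from a particular renderer) of blending linear coverage over a background in linear light, with the transfer function touched only at decode and at the very end:

    // Minimal sketch: composite 16-bit linear glyph coverage in linear light.
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Decode an 8-bit sRGB channel to linear light.
    static double srgb_to_linear(uint8_t v8) {
        double v = v8 / 255.0;
        return (v <= 0.04045) ? v / 12.92 : std::pow((v + 0.055) / 1.055, 2.4);
    }

    // Encode linear light back to an 8-bit sRGB channel
    // (the only place the gamma curve appears).
    static uint8_t linear_to_srgb(double l) {
        double v = (l <= 0.0031308) ? l * 12.92
                                    : 1.055 * std::pow(l, 1.0 / 2.4) - 0.055;
        v = v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
        return static_cast<uint8_t>(v * 255.0 + 0.5);
    }

    int main() {
        uint8_t  text_srgb[3] = {0, 0, 0};        // black glyph
        uint8_t  bg_srgb[3]   = {255, 255, 255};  // white background
        uint16_t coverage16   = 32768;            // ~50% coverage, linear alpha

        double a = coverage16 / 65535.0;          // coverage stays linear throughout
        for (int c = 0; c < 3; ++c) {
            double fg  = srgb_to_linear(text_srgb[c]);
            double bg  = srgb_to_linear(bg_srgb[c]);
            double out = fg * a + bg * (1.0 - a); // blend in linear light
            std::printf("%u ", (unsigned)linear_to_srgb(out));
        }
        std::printf("\n"); // prints ~188 188 188
        return 0;
    }

The naive gamma-space blend of the same inputs would give 128 128 128, which is a much darker pixel in linear terms, and is exactly why dark-on-light text looks too heavy when coverage is blended in the encoded space.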
Another option would be to rasterise directly into the final buffer; GPUs are powerful enough these days to render vector graphics in a shader. Or you could go the DirectWrite route and discretise the vector outlines into triangles and let the built-in AA handle it.
In short, I think we're at the point where either you're writing for a system that can handle rendering everything in 16bpc half-float linear colour, or it's so weak that pixel fonts are the most you can do. Neither case suffers from gamma issues, as you apply the gamma only in the final step.
I understand what you mean, as I too have said these things.
Linear RGB with sRGB/Rec 709 primaries is the correct interpretation. Outside of a strictly color-managed workflow, monitors that do not have sRGB/Rec 709 primaries and a Rec 1886 gamma ramp (as opposed to the exactly defined sRGB gamma ramp, which please stop using) should be replaced with ones that do.
Now, if your UI compositing system is modern, then yes: just output 16-bit linear and let the OS's color management handle it (i.e. Vista and up using the modern APIs, which is what DirectWrite does). However, the majority of software devs only understand "2.2" or "sRGB", and both are shorthand at best: Rec 1886 specifies a 2.4 power law (pragmatically around 2.35 on real displays), and sRGB has never been a pure 2.2 curve either; it is a 2.4 power law with an offset plus a short linear segment near black (roughly the bottom ten 8-bit codes), which only approximates 2.2 overall. Ergo, I wrote this with the average software dev in mind.
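Since the exact curve is what trips people up, here is a small sketch in C++ (the choice of code values in the table is just for illustration) comparing the piecewise sRGB decode against the pure 2.2 power law most devs assume:

    // Minimal sketch: exact piecewise sRGB decode vs a pure 2.2 power law.
    #include <cmath>
    #include <cstdio>

    // Exact sRGB EOTF: linear segment near black, then 2.4 power with an offset.
    static double srgb_decode(double v) {
        return (v <= 0.04045) ? v / 12.92
                              : std::pow((v + 0.055) / 1.055, 2.4);
    }

    // The "2.2" shorthand most devs have in mind.
    static double gamma22_decode(double v) {
        return std::pow(v, 2.2);
    }

    int main() {
        const int codes[] = {1, 5, 10, 32, 64, 128, 192, 255};
        std::printf("code   sRGB       2.2-power\n");
        for (int c : codes) {
            double v = c / 255.0;
            std::printf("%4d   %.6f   %.6f\n", c, srgb_decode(v), gamma22_decode(v));
        }
        return 0;
    }

The two columns agree to within a couple of percent through the midtones, which is why the "2.2" shorthand survives, but they diverge badly near black, where sRGB is linear rather than a power law.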