I find it amusing to recall that Stallman's Free Software efforts were more or less kicked off by frustration with crappy closed-source printer drivers. 40 years later, being subjected to abusive behavior from your printer has become a near-universal experience.
When I worked on Google Wave, one of the most requested features was print support (to allow users to print waves). Printing support is a super simple feature (especially compared to "make it run faster on IE9"). But print support didn't occur to us at Google because ... well, of course we didn't think of it. Google (and, I imagine, most tech companies) works in paperless offices. We almost never try to print things ourselves, so we didn't think of it or care.
I have a working theory that any software used by programmers will eventually get excellent (or be replaced with something excellent). And everything else stays vaguely mediocre.
Postgresql? Excellent.
The tooling to allow non-programmers to edit data in Postgres? Half-baked.
Noise cancellation in MacBooks for video calls? Fantastic.
The software bank tellers use? Garbage.
Github? Fantastic.
Github equivalent for non programmers (eg people with folders full of Word docs)? 404 not found.
Anyway, the fact that modern printer drivers are garbage should come as no surprise. Who amongst us cares enough to fix them? RMS was probably one of the last competent programmers willing to bother writing clean, minimal printer drivers. I expect the world will become paperless before HP cleans up its act.
I have the same problem at the moment with my Wacom tablet - the hardware is great but the software is truly awful, and apparently it phones home regularly. Software for artists is unfortunately off the golden path.
Ah, the tell of someone who has never implemented printing.
Pagination across different paper sizes when content is dynamic/interactive and includes things like section headers that you need to keep with their text, and images and tables that users expect not to split across pages, with their custom choices of margins, page numbers, page titles, etc. So many other issues, like ensuring monochrome prints are legible, implementing print previews, etc.
Oh boy. Some might think implementing math from papers is hard, but no, it's this sort of thing. You'll be fixing problems with it for the lifetime of the product.
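To make the "keep section headers with their text" problem concrete, here's a minimal sketch of one such pagination rule (abstract height units, invented `Block`/`paginate` names; real layout engines juggle dozens of interacting rules like this at once):

```python
from dataclasses import dataclass

@dataclass
class Block:
    kind: str    # "header" or "text"
    height: int  # abstract layout units

def paginate(blocks, page_height):
    """Greedy pagination with a keep-with-next rule for headers:
    a header must fit on a page together with at least the first
    block that follows it, otherwise both move to the next page."""
    pages, current, used = [], [], 0
    for i, b in enumerate(blocks):
        need = b.height
        if b.kind == "header" and i + 1 < len(blocks):
            need += blocks[i + 1].height  # reserve room for the next block
        if used + need > page_height and current:
            pages.append(current)         # close the current page
            current, used = [], 0
        current.append(b)
        used += b.height
    if current:
        pages.append(current)
    return pages

# A header near the bottom of a page gets pushed to the next page
# so it isn't orphaned from its text.
pages = paginate([Block("text", 5), Block("header", 2), Block("text", 4)], 8)
```

Every additional rule (widow/orphan control, unsplittable tables, per-page margins) interacts with the others, which is why this keeps generating bugs for the life of the product.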
It seems like the driver should detect paper size, broadcast a compatible print area/capabilities to something above, and then stand by to make any dot it can in the allowed area.
Why would the hardware driver need to deal with pagination, layout, or print previews (beyond advertising capabilities accurately)?
I've always imagined that drivers are hard on the hardware side. It is really difficult to reliably place dots on a page with the requisite precision/reliability at the price-point to which we have all become accustomed.
Sure, that would make for a decent PoC. It's all the work that will come later as you discover the browser is NOT doing a good job that will be the problem.
It may be different now, but not long ago browsers would, without remorse, let a page break cut even images in half if one happened to fall there. It seemed as if all they did was create an image long enough to fit the printable content and just feed that to the printer. No smart layout at all.
If things are different now, it certainly was not during the brief time Wave was around.
Browsers don't normally work with paper-like pages, they work with essentially infinitely scrollable areas. So a browser layout engine is not in any way, shape or form optimized for laying out HTML + CSS in a printer friendly way, especially if you start having 'weird' requirements like A4 vs Letter, smart layout of tables across pages so that they are still readable etc.
In fact, in general, a human being must actually lay these things out by hand for print if that is the desired medium.
Which is an excellent attitude for a short PoC, but unimaginable for a paid product.
In the end, the user doesn't care. They want the printed page to be usable. If the browser makes it usable, excellent! If it doesn't, then the product is bad and an alternative must be found.
Sure, Google, being in control of Chrome, could implement a browser-level page layout engine.
Unfortunately, there is obviously no way to make a good general-purpose layout engine, so that will never fly. Any site that wants to offer printing as a paid feature must implement its own printable version, no question.
Strong disagree. Software quality is driven almost entirely by either insane dedication from hobbyists or (more commonly) monetary incentives. Lots of businesses depend on Postgres so it's good. Github makes money off of providing a good product. Wacom has a total stranglehold on the market so they don't care because they know artists will put up with it.
Wacom is seeing a lot of their lunch get eaten by iPad Pros now, so hopefully they’ll be pressured to improve. I wish Apple could make Sidecar integration work at Wacom levels so I’d never have to use one of their devices again though.
I don't know enough about the APIs involved, but that seems like something you should be able to write as a pair of apps.
You'd need a native iPad app to capture and transmit pen events, and then something on the desktop side to receive the events and turn them into the equivalent Wacom tablet events. How does Wacom send tablet events to apps? Is it a named socket or something? It shouldn't be too hard to emulate. I wonder if anyone's made something like that in the App Store.
Mind you, I'd much rather Apple baked it into macOS through Sidecar.
Wacom uses kernel extensions, which is another reason why I’d rather it be a first party solution. Not sure what Astropad uses because their service requires an overpriced subscription, has a bad UI, and only functions well with an extra peripheral.
But Sidecar is also pretty half-baked, and Apple is flaky about supporting features like this in general, so who knows how it will all pan out.
There is indeed an app, Duet Display Pro, that will relay Apple Pencil input to your Mac. I've only used it for its second screen functionality; no idea how well the pencil part works.
Amazon offices use absurd amounts of paper... however, it's mainly for raising desks and leveling monitors. Literally every desk has something propped up on a ream of paper. The actual paper document culture is finally moving online.
Most people also position their monitor too high, to be fair. The top of the monitor should be at or slightly below eye level. If you reach out from your chair with your arm held level, it should be roughly in line with the middle of the screen, and your fingertips should be ~10cm (a few inches) away from the screen.
I very purposefully position my monitor so that my main console window is viewed, with my head level. It is also horizontally centred on the mid-left of that console.
This is because most text, eg bash work, vi of files or code, rests there.
In this way, my head is not tilted down, or eyes tilted down constantly. My head is mostly centred, and looking straight ahead.
I find that elsewise, my body follows the constant downward look, either slouching or the neck bent forwards.
Why do you believe your monitor should be positioned so you are looking downward constantly?
Multiple ergonomics guides give the guidance I mentioned. Here’s a link to a few, but I encourage you to do your own research.
At the end of the day, if you're finding that something helps reduce long-term fatigue, then it's probably not too bad. But don't forget that slouching in your chair feels good yet is terrible ergonomically - this is similar. Comfortable short term, but not always long term.
All of these recommend at or slightly below eye level:
Of course though, because the monitor and keyboard should be able to be raised and lowered independently. If you set the desk to a height that's suitable for the monitor, it may be too high for the keyboard.
> I have a working theory that any software used by programmers will eventually get excellent
Most programmers rarely use office suites and prefer plain text editors. This has gone so far that developers prefer a sadistically under-featured file format (.md) to office files.
Personally, I prefer a lightweight, portable, easier to edit, technically just plain text file vs whatever mess of useless fonts/font sizes/unrelated garbage is crammed into a .odf/.docx file.
Literally anyone with a computer that has an ASCII- or Unicode-compatible text editor can open a .md file and get useful info.
Even better, add a little bit of lightweight extra tooling, and now you've generated a nice looking html page, if you need to make something with fancier presentation.
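As a sketch of how little tooling that takes, here's a deliberately tiny Markdown-to-HTML converter (my own toy `md_to_html`, handling only ATX headings, bold/italics, and links; real tools like pandoc or the python-markdown package handle the full format):

```python
import re

def md_to_html(text: str) -> str:
    """Toy Markdown-to-HTML converter: headings, bold, italics, links."""
    html_lines = []
    for line in text.splitlines():
        # ATX headings: 1-6 leading '#' characters
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            line = f"<h{level}>{m.group(2)}</h{level}>"
        elif line.strip():
            line = f"<p>{line}</p>"
        else:
            line = ""
        # Inline spans: bold before italics so '**' isn't eaten by '*'
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
        line = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', line)
        html_lines.append(line)
    return "\n".join(l for l in html_lines if l)

print(md_to_html("# Notes\nSome **bold** text with a [link](https://example.com)."))
```

Because the source stays plain text, the fancier presentation is strictly opt-in: the .md file remains readable with or without the tooling.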
The only thing I miss is the ability to embed images in the md file itself, but even that is not hard to work around.
Obviously I keep software on hand that can deal with docx/odf, but I'd really rather just keep it simple, due to the fact that I'm stupid.
What you call under-featured, many would call correctly featured. A more complex format doesn't add value for most use cases, while being harder to reason about and fix issues in.
.md files avoid the copy paste font/size mess by being plain text and rendering in the reader's choice of font. Bold, italics, hyperlinks and such are all explicitly added, easy to Ctrl + F for and aren't hidden behind finicky context menus as in standard word processors.
> I have a working theory that any software used by programmers will eventually get excellent (or be replaced with something excellent). And everything else stays vaguely mediocre.
Hmmm... I think... sometimes. There's also software that gets excellent, becomes big, becomes profitable, and squeezes out everything else. Then becomes mediocre. Word and Google Docs?
I would be unsurprised if a lot of drivers remain closed source to hide where the bodies are buried.
There's probably a lot of dodgy hardware that's somewhat papered over with fixes in a 200 MB proprietary driver -- much easier to filter the inbound data and build a workaround than ship everyone a new controller board when it turns out sending 0x08675309 grenades the print head.
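The "filter the inbound data" workaround is easy to picture as code. A minimal sketch, using the hypothetical head-killing opcode 0x08675309 from the comment above (the no-op replacement and function names are invented for the example):

```python
# Driver-side sanitizer: rewrite a command sequence known to crash
# buggy firmware before the job is sent over the wire. Both byte
# sequences here are hypothetical, purely for illustration.
BAD_SEQ = bytes.fromhex("08675309")  # hypothetical head-grenading command
NOOP    = bytes.fromhex("00000000")  # hypothetical safe no-op

def sanitize_job(data: bytes) -> bytes:
    """Replace every occurrence of the dangerous sequence in a print job."""
    return data.replace(BAD_SEQ, NOOP)

job  = b"\x1b%-12345X" + BAD_SEQ + b"rest of job"
safe = sanitize_job(job)
assert BAD_SEQ not in safe
```

A few dozen accumulated patches like this, shipped as opaque binary blobs, would go a long way toward explaining both the driver bloat and the reluctance to open the source.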
(There are now browser-based versions of MS Office apps that can simultaneously edit documents stored, and automatically version-controlled, in SharePoint.)
In some ways it was truly awful, to mention two examples:
- a reason for using it was to automatically apply Azure Information Protection to documents. All well and good until it turns out the mechanism can trivially be subverted, so trivially that we found out by accident by not following the exact login procedure.
- the usability makes it comparable to Oracle software. I have extensive experience with that too, but I can't say which is worse.
By teaching the current school-aged kids to program.
I seriously think the world would be better off if programming were seriously taught in schools, on a level equivalent to language, math, science, etc.
If we start with Scratch in early grade school and work up to C in high school, so many more people would be comfortable and efficient using computers, and would have the understanding necessary to understand the legislation of tech.
This would take a decade+ to pay off / for the bank tellers to know how to program, but it seems like the best long-term plan.
I agree. Programming shouldn't be (just) thought of as engineering, but also as simply giving the computer complex instructions. There is also the story about RMS teaching secretaries to program Emacs by not telling them Lisp is a programming language.
> Programming shouldn't be (just) thought of as engineering, but also as simply giving the computer complex instructions.
I really like this phrasing. I'm not really an engineer. My code is mostly rough, but it is functional and does what I need it to. I don't program to build a skyscraper, I program so my computer can do a thing I need it to do.
It was about not being able to customize the behavior, lacking the access to source code, not about “crappiness.”
“In 1980, Stallman and some other hackers at the AI Lab were refused access to the source code for the software of a newly installed laser printer, the Xerox 9700. Stallman had modified the software for the Lab's previous laser printer (the XGP, Xerographic Printer), so it electronically messaged a user when the person's job was printed, and would message all logged-in users waiting for print jobs if the printer was jammed. Not being able to add these features to the new printer was a major inconvenience, as the printer was on a different floor from most of the users. This experience convinced Stallman of people's need to be able to freely modify the software they use.[29]”
It was missing what would quickly be considered a standard feature, and one that was obviously necessary for the use case of a shared printer. That's driver crappiness. The only justification for a shared network printer with no user feedback mechanism is if nothing ever goes wrong with the printer and it always completes your job by the time you can get up and walk down the hall (and stairs, in this case) to pick it up.
> It was missing what would quickly be considered a standard feature
That is an anachronistic claim which sounds logical to a reader in 2020 but doesn't match the environment of decades ago. The feature he added could have been based on, from the perspective of the printer's producer, a completely nonexistent API.
That is, something available to different customers, but completely specific to the setup of every customer.
Additionally, it was the principle that mattered to RMS. One can often do some reverse engineering intervention to achieve the desired modification even based on the closed source, but it’s still against the conceptual advantages of working on the codebase which is by policy free. As in:
“Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software.”
The principle works even once the network messaging API in some environment is standardized and starts to allow some, from that point on, “standard feature”.
I think you misunderstood my use of the word "standard". I did not mean "standard" in the sense of complying to some particular RFC or anything like that. I meant that some kind of feedback/monitoring capability is de rigueur for that product segment, and that feature needs to be listed on the spec sheet for the product to be considered adequate. Even a completely proprietary network printing protocol needs to have a status reporting capability to be taken seriously.
Does anyone know a list of safe printers (can be second hand postscript laser printers) that do not phone home, embed invisible yellow dot patterns for unique identification and so forth?