It seems a bit too coincidental that images to which human beings assign semantic value are being transformed into images to which human beings assign different semantic value.

I don't expect the scanner to have any semantic awareness of the document content, so when I hear "lossy compression", my expectation is "image may become illegible", and not "image may remain legible, but become inaccurate".



This is Hacker News -- I don't expect everyone to know how JBIG2 or other compression schemes work. But before you insinuate that the scanner has semantic awareness of the document and is altering that meaning in a less-than-coincidental way, I would hope that you could have a cursory look at how such compression works.

The issue only involves small letters, because the compression scheme breaks up the image into patches and then tries to identify visually similar blocks and reuse them. Certain settings can allow for small blocks of text to be deemed identical, within a threshold, and thus replaced. That's all. Coincidence, not semantic awareness.

Hence the advisory notice to use a higher resolution -- each glyph then spans more pixels, so near-matches between different characters become less likely.
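
To make that concrete, here's a rough sketch of the idea in Python -- not JBIG2 or Xerox's actual code, just the general "match patches within a threshold and reuse them" scheme, with made-up names and a made-up threshold:

    import numpy as np

    def mismatch(a, b):
        # fraction of pixels that differ between two same-sized binary patches
        return np.count_nonzero(a != b) / a.size

    def build_dictionary(patches, threshold=0.06):
        # Greedy symbol dictionary: each patch either matches an existing entry
        # (and will be rendered as that entry on decode) or becomes a new entry.
        dictionary = []    # representative bitmaps
        assignments = []   # which dictionary entry stands in for each patch
        for patch in patches:
            for idx, rep in enumerate(dictionary):
                if rep.shape == patch.shape and mismatch(rep, patch) < threshold:
                    assignments.append(idx)  # "close enough" -- reuse the stored glyph
                    break
            else:
                dictionary.append(patch)
                assignments.append(len(dictionary) - 1)
        return dictionary, assignments

At a low scan resolution a small digit is only a handful of pixels, so two different characters can land inside the same threshold; with more pixels per glyph, the differences between, say, a 6 and an 8 are much harder to mistake for noise.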


> The issue only involves small letters, because the compression scheme breaks up the image into patches and then tries to identify visually similar blocks and reuse them. Certain settings can allow for small blocks of text to be deemed identical, within a threshold, and thus replaced. That's all. Coincidence, not semantic awareness.

Copiers very commonly copy printed material. This sort of algorithm makes it likely that sometimes one character will be replaced by another, so it is a bad algorithm for the job.

Xerox should have known better.


>>This is Hacker News -- I don't expect everyone to know how JBIG2 or other compression schemes work.

As opposed to what, ImageCompression News where you can expect everyone to know it?


Or maybe comp.compression


I'm aware - I'm merely responding to the previous commenter's point about how the compression algorithm is "starting off with a bit mapped image that your brain happens to interpret as the number 17", and pointing out that if this were the case, the likely outcome would be a fuzzier-looking "17", not a "21".

Clearly, the compression algorithm is designed around human perception (i.e. looking for visually-similar segments to, I assume, tokenize), and therefore does relate to the actual semantics of the document, albeit in a coarse and mechanical way. It did know enough to replace character glyphs with other character glyphs, but didn't know enough to choose the right ones.

My point is that it's not coincidental at all - this algorithm is obviously in a sort of "uncanny valley" in its attempt to model human visual perception.


You'd expect that anyone who knows how JBIG2 works would also know it should never have been used for this.


It's not a coincidence that the thing that looks most like a blurred number is another blurred number.

A document will be covered in numbers, and the compression algorithm looks for similar blocks it can re-use; the side effect is that sometimes it says "that blurry 4 looks pretty close to this blurry 2, so I'll just store that block once and reuse it".
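
The decode side is what makes this so insidious. A hypothetical sketch (invented names, not the real file format) of how a symbol dictionary gets stamped back onto the page:

    import numpy as np

    def decode_page(dictionary, placements, page_shape):
        # placements is a list of (symbol_index, row, col); the decoder just
        # stamps the stored glyph at each position. If the encoder matched a
        # blurry "4" against the dictionary entry holding a "2", a clean "2"
        # is rendered there -- the original pixels no longer exist in the file.
        page = np.zeros(page_shape, dtype=np.uint8)
        for sym_idx, row, col in placements:
            glyph = dictionary[sym_idx].astype(np.uint8)
            h, w = glyph.shape
            page[row:row + h, col:col + w] |= glyph
        return page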

The problem is that what looks like a minor side effect to a programmer is an absolutely massive issue to an end user, one that no one had thought of previously, and now we all have to worry that any of our scanned documents might be incorrect. (Just because this was found in Fuji Xerox scanners doesn't mean other brands don't also have the issue.)



