
I'm curious about the "core logging" photo. Where can I find one? Do you have an implementation of your solution? I would be curious to have a look at it.


I wasn't able to find any imagery online, and I don't have anything I can share publicly.

These are some of the existing commercial solutions (just found these on Google, can't remember which I was comparing my own work against):

- https://koregeosystems.com/digital-core-logging/

- https://mountsopris.com/wellcad/core-logging-software/

- https://www.geologicai.com/logging/

I don't know enough about the science side to take it any further on my own.

The "tech" part of what I started building is really quite simple: convert the images to Cloud-optimised GeoTIFF, then do range requests to S3 from the browser.


Might not be possible to find any, they’re expensive and niche. If you reach out (email in profile) I can show/share how it works (nothing currently public).


@carderne I think el_pa_b has an idea on how to commercialize it.

In all seriousness, how is it not useful for gold mining or fracking?


Nice!


Thanks for sharing! I agree that newer scientific formats will need to be designed so they can be read directly from cloud storage.


IMO Zarr is that newer format. It abstracts over the features of all these other formats so neatly that it can literally subsume them.

I feel we no longer really need TIFF etc. For scientific use cases in the cloud, Zarr is all that's needed going forward. The other file formats become just archival blobs that are either converted to Zarr or pointed at by virtual Zarr stores.
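As a sketch of why the cloud story is so clean (the bucket path is hypothetical; this assumes zarr plus s3fs are installed so the URL resolves via fsspec):

    import zarr

    # Open a hypothetical Zarr store directly on S3. Only the chunks
    # overlapping the requested window are fetched over the network.
    arr = zarr.open("s3://example-bucket/slide.zarr/0", mode="r")
    window = arr[0:512, 0:512]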


Thanks for sharing!


Yes, they can be huge, and for modalities like multiplex immunofluorescence with up to 20 channels, you're often dealing with very faint proteomic signals. Preserving that signal is critical, and compression can destroy it quickly.
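A toy numpy illustration of the failure mode (the numbers are made up, but representative of a faint marker): naively squeezing a low-count 16-bit channel through an 8-bit lossy pipeline erases it before the codec even runs.

    import numpy as np

    # Faint 16-bit channel: photon counts nowhere near the 65535 ceiling.
    faint = np.random.poisson(lam=40, size=(64, 64)).astype(np.uint16)

    # Rescaling the full 16-bit range to 8-bit (as a JPEG pipeline would)
    # quantises the entire signal into the bottom bin.
    as_8bit = (faint.astype(np.float64) / 65535 * 255).astype(np.uint8)
    print(as_8bit.max())  # 0: the signal is gone before JPEG is even applied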


CODEX can do up to 120 channels, I think. They are also 16/32-bit. They are usually just deflate-compressed.
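For reference, writing such a stack losslessly is straightforward with tifffile (the array shape and filename here are illustrative, not from any real dataset):

    import numpy as np
    import tifffile

    # Toy 120-channel, 16-bit stack written with lossless deflate (zlib),
    # as is typical for CODEX-style multiplexed data.
    stack = np.zeros((120, 256, 256), dtype=np.uint16)
    tifffile.imwrite("codex_stack.tif", stack, compression="zlib")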


Yes, I agree. I'm not persisting the WSI locally, which creates a smoother user experience. But I do need to transfer tiles from server to client. They are stored in an LRU cache and evicted if not used.
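Something like the following, as a minimal sketch of the eviction policy (the real cache is presumably more involved; the class and its API are made up for illustration):

    from collections import OrderedDict

    class TileCache:
        """Minimal LRU sketch: recent tiles stay, stale ones get evicted."""

        def __init__(self, max_tiles: int = 256):
            self.max_tiles = max_tiles
            self._tiles: OrderedDict = OrderedDict()

        def get(self, key):
            if key not in self._tiles:
                return None
            self._tiles.move_to_end(key)  # mark as most recently used
            return self._tiles[key]

        def put(self, key, tile):
            self._tiles[key] = tile
            self._tiles.move_to_end(key)
            if len(self._tiles) > self.max_tiles:
                self._tiles.popitem(last=False)  # evict least recently used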


Currently we only support TIFF and SVS with JPEG and JPEG2000 compression formats. I plan on supporting more file extensions (e.g. NDPI, MRXS) in the future, each with their own compression formats.
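For anyone wanting to check what their own files use, tifffile can report the compression per pyramid level (the filename is a placeholder):

    import tifffile

    # Inspect which compression scheme each level of a WSI uses.
    with tifffile.TiffFile("slide.svs") as tif:
        for page in tif.pages:
            print(page.shape, page.compression)  # e.g. JPEG, APERIO_JP2000_RGB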


As data scientists, we usually don't get to choose. It's usually up to the hospital or digital lab's CISO to decide where the digitized slides are stored, and S3 is a fairly common option.

That being said, I plan to support more cloud platforms in the future, starting with GCP.


Yes there is a requirement to work with the vendor format. For instance, TCGA (The Cancer Genome Atlas - a large dataset of 12k+ human tumor cases) has mostly .svs files (scanned with an Aperio scanner). We tend to work with these formats as they contain all the metadata we need.

Sometimes we re-write the image in a pyramidal TIFF format (this has happened to me a few times, when NDPI images had only the highest resolution level and no pyramid), in which case COGs could work.
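That rewrite step can be as simple as the sketch below (assumes pyvips is installed; filenames and settings are illustrative, not the exact pipeline I used):

    import pyvips

    # Load the flat (single-level) scan and save it back as a tiled,
    # pyramidal BigTIFF so viewers can fetch low-resolution levels cheaply.
    img = pyvips.Image.new_from_file("scan.ndpi")
    img.tiffsave("scan_pyramidal.tif", tile=True, pyramid=True,
                 compression="jpeg", Q=90, bigtiff=True)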


When WSIs are stored on-premise, they are typically stored on hard drives with a filesystem. If you have a filesystem, you can use OpenSlide, with a viewer like OpenSeadragon to visualize the slide.

WSIStreamer is relevant for storage systems without a filesystem. In that case, OpenSlide cannot work (it needs to open the file and seek within it).
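To illustrate the constraint (the file path is a placeholder): OpenSlide's API is built around a local, seekable file, and there is no way to hand it a stream of bytes coming from object storage.

    import openslide

    # Works only with a real file on a filesystem.
    slide = openslide.OpenSlide("slide.svs")
    region = slide.read_region((0, 0), level=0, size=(512, 512))  # PIL.Image
    slide.close()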


Then mount the S3 bucket as a filesystem. It's slow, though. But it's good if you have tools to filter the files properly.


Thanks! Indeed, digital pathology, satellite imaging and geospatial data share a lot of computational problems: efficient storage, fast spatial retrieval/indexing. I think this could be doable.

As for digital pathology, the field is very much tied to scanner-vendor proprietary formats (SVS, NDPI, MRXS, etc.).

