
Happened to me as well. I found out who the people who hijacked it actually were, thanks to their poor operational security. It turned out they did this to someone every two weeks. Nobody cared, since they successfully stole $0. I've watched the news to see if they were ever caught; I assume they're still doing it to this day.


Bang-on.

This article should be thrown in the trash.

The examples for raidz, raidz2, and raidz3 in the article are misaligned. They will obviously have terrible performance and waste space, because blocks don't fit evenly once you account for non-standard physical block sizes.

Mirrors are easy because you don't have to worry about ashift values, the physical block sizes of devices, or whether 128K records divide nicely across the data drives.

I get the feeling that the author of the article has never actually run a properly configured raidz(x) with SLOG and cache devices.
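
For reference, a minimal sketch of what a properly configured pool looks like (the pool name, device names, and layout are placeholders, not from the article):

  # Hypothetical 12-wide raidz2 with NVMe SLOG and L2ARC cache.
  # ashift=12 forces 4K-aligned allocations regardless of what
  # the drives advertise.
  zpool create -o ashift=12 tank \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
    log nvme0n1 \
    cache nvme1n1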


Each 12-drive array would have an interesting physical block layout.

What was your ashift, and what was your drives' raw physical sector size? With ashift=9, a 12-drive raidz2 vdev (10 data drives) gives 5120-byte full stripes; with ashift=12, 40960-byte stripes. Neither divides evenly into a 128K record, so neither lines up well with ZFS.

Additionally, if this was a long time ago, the pool would have defaulted to ashift=9, and if you added any 4K drives you could have been getting write amplification.

It looks as if your drives were doing a lot of unnecessary extra work.
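
A quick way to check for that mismatch (a sketch; the pool name and the 10-data-drive arithmetic are assumptions):

  # Compare the pool's ashift against the drives' real sectors.
  zdb -C tank | grep ashift        # 9 = 512B, 12 = 4K allocation
  lsblk -o NAME,PHY-SEC,LOG-SEC    # physical vs logical sector size

  # With 10 data drives a full stripe is 5120B (ashift=9) or
  # 40960B (ashift=12); a 128K record divides evenly into neither.
  echo $((131072 % 5120)) $((131072 % 40960))   # prints: 3072 8192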


This is a huge thing (called "slop space") that I only noticed after I had bought all my ZFS NAS parts!

It wastes a lot of capacity by default if the drive count isn't ideal. IIRC, using a larger recordsize (1MB or above) and enabling compression almost entirely solves the issue. I use the default 128K recordsize on a small-files dataset for performance, and a larger recordsize for the rest.

https://wintelguy.com/zfs-calc.pl
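
Concretely, something like this (tank/media is a placeholder dataset name):

  # Larger records spread more evenly across raidz data drives,
  # and compression reclaims the padding.
  zfs set recordsize=1M tank/media
  zfs set compression=lz4 tank/media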


ashift=9, native 512B sector drives.


I disagree.

With co-located boxes and drop-shipped drive replacements, the time between a FAULT and the resilver event can be multiple days. Even though a resilver goes faster on a mirror vdev with one disk remaining than on raidz2 (or higher), mirrors increase the risk of data loss irrespective of resilver times: while you wait for the drop-shipped replacement, a degraded mirror has no remaining redundancy, whereas raidz2 can still survive another failure.

The 3TB resilver on my last mechanical drive failure took 6 hours 30 minutes, plus an additional 3 days for the replacement drive to arrive.

With mirror vdev setups you also lose significantly more space. If you argue the speed is worth it, I would instead go with raidz2 and invest the money saved in an NVMe cache and SLOG.

With a significant amount of memory, an NVMe cache and NVMe SLOG, a high /sys/module/zfs/parameters/zfs_dirty_data_max, and a larger-than-default /sys/module/zfs/parameters/zfs_txg_timeout, users won't notice the resilver event at all.
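
For example (a sketch; the values are illustrative, not recommendations):

  # Allow more dirty data to accumulate (4 GiB here) and flush
  # transaction groups less often (15s vs the 5s default), so
  # resilver I/O competes less with user writes.
  echo 4294967296 > /sys/module/zfs/parameters/zfs_dirty_data_max
  echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout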


Maybe this will be useful to someone: building with LDFLAGS='-z relro -z now' helps with one of the holes (they mentioned the mitigation, but not the flags). Check out: https://github.com/slimm609/checksec.sh

  * System-wide ASLR (kernel.randomize_va_space): Full (Setting: 2)
  * Does the CPU support NX: Yes
  * Core-Dumps access to all users: Restricted
  # checksec --proc-all | grep qmail
           COMMAND         PID RELRO      STACK CANARY            SECCOMP          NX/PaX        PIE                     FORTIFY
      qmail-injectXXXXX27 Full RELRO      Canary found            Seccomp-bpf      NX enabled    PIE enabled             Yes
       qmail-queueXXXXX97 Full RELRO      Canary found            Seccomp-bpf      NX enabled    PIE enabled             Yes
        qmail-sendXXXXX78 Full RELRO      Canary found            Seccomp-bpf      NX enabled    PIE enabled             Yes
      qmail-lspawnXXXXX89 Full RELRO      Canary found            Seccomp-bpf      NX enabled    PIE enabled             Yes
      qmail-rspawnXXXXX90 Full RELRO      Canary found            Seccomp-bpf      NX enabled    PIE enabled             Yes
       qmail-cleanXXXXX91 Full RELRO      Canary found            Seccomp-bpf      NX enabled    PIE enabled             Yes
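
A sketch of applying and verifying the flags (qmail-style source trees read the link command from conf-ld; the binary path is hypothetical, and checksec's flag syntax varies by version):

  # Add full RELRO to the link step, rebuild, then verify.
  echo 'cc -s -z relro -z now' > conf-ld
  make
  checksec --file=/var/qmail/bin/qmail-queue   # expect "Full RELRO"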

