SSH hardening guide bonus edition: Disable password login if you can, leave the algorithm settings as they are, and use an up-to-date version of OpenSSH.
OpenSSH already aggressively deprecates algorithms that are problematic. None of the algorithms enabled by default has any known security issue. But your manual tweaks from a random document you read on the Internet may enable an algorithm that later turns out to be problematic.
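For reference, that advice amounts to only a few sshd_config lines; a minimal sketch (everything not shown stays at its default):

# /etc/ssh/sshd_config -- minimal sketch, not exhaustive
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
# deliberately no Ciphers/MACs/KexAlgorithms lines: the defaults are fine
# validate the config before restarting sshd:
$ sshd -t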
In the same vein, protecting your SSH server with spiped[1] does 99% of the job. (= No need to set up fail2ban, password auth is not a big deal anymore, protects against out-of-date SSH servers and/or zero-day exploits, ...)
spiped looks like netcat with symmetric encryption. If your SSH server has password auth disabled, then all you're doing is moving the attack surface from one thing to another.
You're making a trade-off no matter which way you go. spiped probably has a smaller attack surface than sshd due to being less code, but it's also less "tried and true" than openssh. Not to mention, managing symmetric keys securely is more difficult than with asymmetric openssh keys where you generally only need to copy around the public key.
OpenSSH is plenty secure enough to be exposed to the public internet as long as you keep it up to date and do not have it misconfigured. But if you have a strong reason to not make it public, then I feel that something like Wireguard is really a better way to go.
> spiped looks like netcat with symmetric encryption
True. But to be more specific, it does symmetric encryption and authentication.
> If your SSH server has password auth disabled, then all you're doing is moving the attack surface from one thing to another.
I get what you're saying. But I see spiped as port-knocking with a 256-bit combination. So basically, you are reducing the attack surface. In order for the attacker to get through, they need a vulnerability in both spiped and openssh-server. (If those probabilities were 50-50 each, the overall probability would be 0.25.)
At the end of the day, spiped should run in a chroot, as an unprivileged user, so the attack surface of spiped is really low. If it gets compromised, the only thing the attacker can do is "be able to try to establish a connection to the SSH server".
The goal of spiped for me is to eliminate the need for constant monitoring of openssh vulnerabilities and for installing fail2ban/blacklistd (which can lock legitimate users out).
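For anyone who hasn't seen it, the setup is roughly this (a sketch; the port, key path and host name are illustrative):

# on the server: generate a shared key, then accept encrypted connections
# on 8022 and forward them to the local sshd
$ dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/ssh.key
$ spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key
# on the client: expose a local port that encrypts traffic to the server
$ spiped -e -s '[127.0.0.1]:8022' -t '[server.example.org]:8022' -k ssh.key
$ ssh -p 8022 localhost
# (port 22 on the server can then be firewalled off entirely)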
Sort of. Not really. spiped operates at the level of individual stream connections, so you can e.g. make one end a local socket in a filesystem and use UNIX permissions to control access to it.
In fact that's exactly why I wrote it -- so I could have a set of daemons designed to communicate via local sockets and transparently (aside from performance) have them running on different systems.
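A sketch of that pattern (names are made up; if I'm reading the man page right, spiped addresses can be UNIX socket paths as well as host:port pairs):

# machine A: take a daemon's local socket and serve it encrypted over TCP
$ spiped -d -s '[0.0.0.0]:8025' -t /var/run/mydaemon.sock -k /etc/spiped/daemon.key
# machine B: re-expose it as a local socket, protected by ordinary UNIX permissions
$ spiped -e -s /var/run/mydaemon.sock -t '[machine-a]:8025' -k /etc/spiped/daemon.key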
Is it possible to use tarsnap's deduplication code on my own server? We're setting up an ML dataset distribution box, and I was hoping to avoid storing e.g. imagenet as a tarball + untar'd (so that nginx can serve each photo individually) + imagenet in TFDS format.
Has anyone made an interface to tarsnap's tarball dedup code? A python wrapper around the block dedup code would be ideal, but I doubt it exists.
(Sorry for the random question -- I was just hoping for a standalone library along the lines of tarsnap's "filesystem block database" APIs. I thought about emailing this to you instead, but I'm crossing my fingers that some random HN'er might know. I'm sort of surprised that filesystems don't make it effortless. In fact, I delayed posting this for an hour to go research whether ZFS is the actual solution -- apparently "no, not unless you have specific brands of SSDs: https://www.truenas.com/community/resources/my-experiments-i..." which rules out my non-SSD 64TB Hetzner server. But like, dropbox solved this problem a decade ago -- isn't there something similar by now?)
EDIT: How timely -- Wyng (https://news.ycombinator.com/item?id=28537761) was just submitted a few hours ago. It seems to support "Data deduplication," though I wonder if it's block-level or file-level dedup. Tarsnap's block dedup is basically flawless, so I'm keen to find something that closely matches it.
True, but a couple years ago I ported most of the Tarsnap dedup algorithms to Python. It wasn't too hard, just time consuming. I was hoping someone else did that in a thorough way, but I guess the intersection of "I love tarsnap's design!" and "I have the time to port it from C!" might not be too large.
> Redistribution and use in source and binary forms, without modification,
> is permitted for the sole purpose of using the "tarsnap" backup service
> provided by Tarsnap Backup Inc.
The codebase is a jewel. I love the design, the way it's organized, the coding style, the algorithms, everything.
Then I started making a mental map of tarsnap: How does it build its deduplication index? How does it decide where block boundaries start within a file? Etc.
Eventually I started coding the algorithms in Python, mostly as a way of understanding the code. It's not actually as hard as it sounds, but you have to be rigorous. (It's a C -> Python conversion, after all, so there's not much room for error.)
My process was basically: Copy the C code into a Python file; comment out the code; for each line, write the corresponding Python; try to get something running as quickly as possible.
It worked pretty well, but I eventually lost interest.
Over the years, I've wanted a deduplication library, and 2021 is no exception. Someday I'll just roll up my sleeves and finish porting it.
OpenVPN may have its issues (complicated setup vs. e.g. Wireguard, but not vs. e.g. IPsec), but I wouldn’t call it “not good” and it predates spiped by a decade.
Ok. I don’t agree there. What I’ve heard from security experts is that WireGuard is vastly superior to OpenVPN.
Addendum: OpenVPN was released in 2001, and there were lots of cryptography-related systems from that era that certainly didn’t age well – IMO OpenVPN is one of those examples.
OpenVPN's encryption is just TLS. It uses OpenSSL for this, not rolling their own implementation. Yes, there are parts of SSL/TLS that haven't aged well, but... it's good enough for the world's web traffic.
> security experts is that WireGuard is vastly superior to OpenVPN
Superior doesn’t imply the other is “not good”.
> lots of cryptography-related systems from that era that certainly didn’t age well
This doesn’t really mean anything.
> IMO OpenVPN is one of those examples
That’s your opinion, but so far you’ve given no evidence.
As the other commenter said: OpenVPN is just TLS via OpenSSL. Yes, at some points it has used now-insecure algorithms, but so have web browsers and most everything else. One wouldn’t configure OpenVPN today the way they did in 2001.
Not that it necessarily means much, but AWS Client VPN is just OpenVPN. AWS, GCP, & Azure all support IPsec VPN which dates back to the ’90s. Just because something has been around for a long time doesn’t mean it hasn’t evolved its cryptography at all.
> None of the algorithms enabled by default has any known security issue.
For the tinfoil among us, jump on the post-quantum key exchange train: there's little overhead, and it's still combined with traditional elliptic-curve crypto, so you get the best of both worlds.
The jump-off: sntrup4591761x25519-sha512@tinyssh.org
Note that the above post-quantum key exchange method was removed in OpenSSH 8.5 (released March 3, 2021) in favor of a newer one:
sntrup761x25519-sha512@openssh.com
So if you add the previous one to your server config, sshd may fail to start after upgrading to 8.5 or newer (8.7 is the most recent release).
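If you want to opt in explicitly, a hedged sketch (run `ssh -Q kex` on both ends first so a typo or an old binary can't lock you out; the classical algorithms at the end keep older clients working):

# /etc/ssh/sshd_config, OpenSSH 8.5 or newer (sketch)
KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org
# confirm what actually got negotiated:
$ ssh -v user@host 2>&1 | grep 'kex: algorithm'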
I started using sntrup4591761x25519-sha512@tinyssh.org about two years ago, and the new one when OpenSSH 8.5 came out. They've both worked flawlessly for me. Thank you to TinySSH and OpenSSH, and of course cryptographers, for making this possible.
I actually filed the github request for Jan Mojžíš to update TinySSH with the new KEX.
I run tinysshd on some RedHat 5 servers, and I rely upon the latest post-quantum exchange. I also jack all my putty users into it with an agent, and I don't assign them passwords.
It would be nice if I was allowed to upgrade from RedHat 5, but I am not.
Thanks for the update, guessing this is the round 2 submission?
Despite being sidelined in NIST's post-quantum standardisation project, it's interesting to see their Streamlined NTRU Prime algorithm still being the main real-world adoption out there right now. I'd be interested if anyone knows of a more widely used post-quantum algorithm in the real world.
I prefer tinyssh to dropbear and even to openssh. https://github.com/janmojzis/tinyssh
It's not for the tinfoil; the reason is the size. OpenSSH is too large for some small systems. In June, dropbear finally got support for chacha20-poly1305 and ed25519 keys, but it only offers the TweetNaCl version. tinyssh has had this since 2014 and can use the original NaCl library, not just TweetNaCl.
tinyssh does what the top comment in this thread suggests, and in fact goes further. It does not include bad algorithms at all. This is arguably better than merely "deprecating" them.
All of today's practical public-key crypto could be broken by a sufficiently powerful quantum computer - if one existed, which today it does not - thanks to a trick called Shor's algorithm.
However, there are new public-key algorithms that aren't affected by Shor's algorithm. Two problems. 1. They're all worse: they have huge keys, for example, or they're slow, so that sucks. 2. We don't know if they actually work, beyond the fact that Shor's algorithm doesn't break them on hypothetical quantum computers.
Even if you decide you don't care about (1) and you're very scared of the hypothetical quantum computers (after all, once upon a time the atom bomb was also hypothetical), you still need to deal with (2).
Post-quantum schemes like this for SSH generally arrange that by doing two things: they use the shiny new quantum-resistant algorithm, but they also keep an old-fashioned public-key algorithm, and you need to break both to break the security. Breaking either alone won't help you.
> All of today's practical public key crypto could be broken by a sufficiently powerful quantum computer - if one existed which today it does not
That we know of. There are commercially available quantum computers [0], and there are roadmaps for commercial capability to ramp up significantly [1]. Remember, that's a roadmap to projected commercial viability. It's hard to say what state actors are capable of right now.
Then, using aluminum foil and unfolded crisp packets, make your room into a Faraday cage. Never open the door for any reason whatsoever. In fact, remove the door altogether.
Is it really necessary to disable an E521 ECDSA host key? By all means, replace a P256 host key with E521, but are E521 keys truly weak enough to justify removal?
E521 is listed as safe on DJB's main evaluation site:
More specific DJB commentary: "To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 – 1; but the sheer size of this prime makes it much slower than NIST P-256."
I believe that OpenSSH is using the E521 provided by OpenSSL (as seen on Red Hat 7):
$ openssl ecparam -list_curves
secp256k1 : SECG curve over a 256 bit prime field
secp384r1 : NIST/SECG curve over a 384 bit prime field
secp521r1 : NIST/SECG curve over a 521 bit prime field
prime256v1: X9.62/SECG curve over a 256 bit prime field
These appear to have been contributed by Sun Microsystems, and were designed to avoid patent infringement.
Ignoring the fact that some of the SafeCurves criteria are questionable (reasonably performant complete short Weierstrass formulae have existed for a while; indistinguishability is a completely niche feature that is hardly ever required)...
These are not the same curves. NIST P-521 is a short Weierstrass curve defined by NIST. E-521 is an Edwards curve introduced by Aranha/Barreto/Pereira/Ricardini.
To the best of my current knowledge, it's at most possible that the NSA backdoored the NIST curves. I'm unaware of anyone in academia positively proving the existence thereof.
If your threat model doesn't include the NSA or other intelligence agency level state actors, ECDSA with NIST P-521 will serve you just fine.
(ECDSA is per se a questionable abuse of elliptic curves born from patent issues now long past, but it's not a real, exploitable security problem, either, if implemented correctly.)
Jumping on the bandwagon here, SSH also now supports FIDO/U2F. This allows for hardware security keys like Yubikeys to be used directly for auth, rather than via TOTP/HOTP codes.
The FIDO tokens have no intention of allowing you to do anything else except FIDO with their keys (in fact the cheapest ones literally couldn't if they wanted to, good) and the SSH protocol of course was not originally designed for these tokens (it's from last century!) so the result is that the OpenSSH team had to design a custom key type for this purpose.
In consequence, although this technology is excellent and I endorse choosing it, especially in tandem with other FIDO usage (e.g. WebAuthn for web sites, and I believe Windows can use it to authenticate users to their desktops/laptops), you need to understand that both sides must have the necessary feature for it to authenticate you: both your clients and any SSH servers you need to authenticate against must recognise the FIDO-specific key types in SSH.
If you mostly administer shiny modern *BSD or Linux boxes, they have a new enough OpenSSH, so this Just Works™. But if you've got some creaky five-year-old VMs or, worse, real servers running something like RHEL 6, that may be an obstacle to practically deploying this.
Good news is that this will improve over time, and e.g. GitHub did eventually learn the new key type.
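For reference, generating one of these keys is a one-liner once both ends are on OpenSSH 8.2 or newer (a sketch; the file names are arbitrary):

# create a hardware-backed key; you'll be asked to touch the token
$ ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
# older/cheaper tokens may only support the ECDSA variant
$ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
# install the public half as usual
$ ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@host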
And you can use a Yubikey hardware key as an ecdsa-sha2-nistp384 secret store, without messing with PAM or needing custom key types or special files on the client host: https://github.com/FiloSottile/yubikey-agent
I was just searching for someone bringing this up. If there's one piece of advice I could give someone setting up SSH, it would be to use a FIDO key + short-lived sessions.
I did not notice until skimming this Arch wiki page that it is now possible to require both MFA (or PAM auth in general) and key auth at the same time to authenticate. Great!
I made a mistake once setting this up and managed to require password AND key AND MFA. This was a misconfiguration, but it might be useful for some use cases. So it's good to know it's not either/or.
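The knob for this is AuthenticationMethods; a hedged sketch of requiring a key plus a PAM-driven prompt (e.g. a TOTP module):

# /etc/ssh/sshd_config (sketch): the comma means both methods are required, in order
AuthenticationMethods publickey,keyboard-interactive
ChallengeResponseAuthentication yes
UsePAM yes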
The first thing I do on a new remote box is to move SSH to a non-standard port other than 22. I use the same port for every remote box I have. Then I add that port to `.ssh/config` on my local box.
Second is to disable root login.
Third is to copy my public key over and disable password login.
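A minimal sketch of that (the port number and host names are placeholders):

# /etc/ssh/sshd_config on each remote box
Port 2222
PermitRootLogin no
PasswordAuthentication no

# ~/.ssh/config locally, so a plain `ssh box1` still works
Host box1 box2
  Port 2222
  IdentityFile ~/.ssh/id_ed25519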
I always move sshd to a non-standard port, and blacklistd would not address the reason why I do it.
One IP abusing it and trying to gain access isn't what I address by moving the port. I move the port to cut out the noise of the thousands of IPs that will connect to it once to probe it and never again. The volume of those one-off probes is so dramatic that it makes the logs entirely useless. By moving the port, I cut out that noise so that if I glance at the logs I actually have a chance of noticing anything that is worth noticing.
My use for it was to have a 5-way residential VPN across multiple countries, for obvious reasons. Just ssh wouldn't really suffice for that. It also makes the shared infrastructure a lot easier to use for the rest of my family.
Also a globally accessible Pi-hole connected to DoH, which gives some measure of privacy everywhere.
They are basically the perfect use for my Raspberry Pis. Extremely low power, but perfectly capable of handling, say, 1080p video streams or RDPing into machines for access to cross-country resources.
The same thing that happens if the SSH daemon dies, I guess?
FWIW, I’ve been using Wireguard for a while (probably ~2 years?) as an always-on VPN for multiple mobile devices, and also as a reverse tunnel to pinhole service access inside a LAN. The Wireguard config and daemon has been rock solid. The only time it’s failed is when I messed up the AllowedIPs, but that failure occurs at configuration time. It has never crashed, or stopped routing traffic correctly, or otherwise failed in a way that interrupted traffic flows.
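For context, the whole per-device config that can be mis-set is about this big (a sketch; keys, addresses and the endpoint are placeholders):

# /etc/wireguard/wg0.conf on a mobile device (sketch)
[Interface]
PrivateKey = <client private key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.org:51820
# 0.0.0.0/0, ::/0 = route everything through the tunnel; a too-narrow
# prefix here is exactly the AllowedIPs mistake mentioned above
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25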
I have 5 locations running effectively independent VPNs, each hub connected to the others for redundancy if a VPN falls over.
i.e. Each hub has 1 VPN in, or is connecting 4 ways out.
If the port forwarding or something fails inbound, then I can connect via another VPN and try to debug/diagnose what is wrong.
If all VPNs are reporting down, then I know the Pi/internet is completely down. Either it will restore connectivity on its own, or I have someone there who can plug/unplug/restore the system if necessary. The same kind of problem would occur if ssh or wireguard falls over.
ssh is typically the only thing I expose (publicly if needed), because in most environments where it is running it is used for troubleshooting issues. If your issue is that your wireguard peer can't connect, you are lost with that suggestion.
Changing the port is obfuscation and by itself would not enhance security; however, it does cut out all the noise from the automated bots. This allows you to have better alerting on brute-force attempts, because all of the remaining attempts are a human manually targeting your server. The end result is effectively a better security posture. I have servers sprinkled all over the internet, and in the last 30 years or so bots have never tickled my ssh daemon.
As I said: changing the port is just a means to avoid having to `apt install logrotate`
Active alerting on brute force attempts on an internet-facing SSH service is an exercise in human suffering. At best you don’t get any alerts, and at worst you get alerts that you do… what, precisely, with? Block the IP? Look up the “human” attacker and send them an email asking them to stop?
There are environments and entities for whom pattern detection on incoming connections makes sense, and those environments aren’t running internet-facing SSH.
I feel like this doesn’t actually address any of my comment.
I’m specifically saying that the act of reading SSH logs for an internet-facing server is an exercise in futility. The kinds of things that will show up in the logs (brute-force attempts, general nmapping, etc.) are not credible risks to even a largely unconfigured SSH daemon (as noted elsewhere in this thread, the bar for an above-average secure SSH service is basically “apply pubkey, disable password auth, celebrate”).
The attackers that are problematic don’t look out of place in your logs: somebody who stole a valid pubkey/password, the unlikely case of an SSH zero day, etc. Those are going to be single access attempts that just work. Unless you’re literally alerting on every successful auth, the logs aren’t helping you for active alerting.
Keeping your internet-facing SSH logs is important for investigative work: once you find out that your buddy accidentally put their private key in a pastebin, you can check if somebody used it to log into your server.
I got a new cloud virtual machine and didn't login for 2 hours. When I did the logs showed there were about 50 attempts to login from random IP addresses.
I changed my port to a random 4 digit number. Not a single failed login attempt in 6 months.
Obviously follow good security practices too, but I like not having to rotate and filter the logs with yet another tool.
But geezus, it's daunting to address SSH weaknesses unless you know ssh and its configuration top to bottom. I don't! And I am not afraid to admit it. I just use ssh "as-is" on mainstream platforms, for example whatever Amazon gives me on Lightsail Linux images or Windows 10 or whatever's on my Mac, and hope for the best.
I mean, there are 4 different groups of algorithms to think about: "Key Exchange", "Server Host Key", "Encryption" and "MAC". Each with a bunch of choices, all different, all consisting of mouthfuls of impossible-to-remember complicated names.
The sshcheck tool indicates that one of these is "insecure" because it may be "broken by nation states". What does that _really_ mean for a business or individual? ¯\_(ツ)_/¯ There are others which are labeled as "weak" so what does that mean? That it might someday be broken by nation-states?
I think it's still useful, however. Why wouldn't you want to have the most secure ssh connections if it's just a matter of configuration?
Ultimately, someone who uses the report from sshcheck has to decide whether it's worth it to google around, spend a solid 30 minutes or so, and figure out how to change their "out-of-the-box" ssh config to get a fully secure report from sshcheck.
Though I'll have to hunt out (or try to knock together) something that we can run locally for checking internal-only/white-listed hosts (like https://testssl.sh/ for HTTPS config checking).
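Two things that can already be pointed at internal hosts (hedged; I haven't compared their coverage against sshcheck): nmap's ssh2-enum-algos script and the ssh-audit tool.

# list every KEX/cipher/MAC/host-key algorithm an internal host offers
$ nmap -p 22 --script ssh2-enum-algos internal-host
# or a more opinionated report (pip install ssh-audit)
$ ssh-audit internal-host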
Generally they’re going to be for legacy ciphers/MACs/etc.
If you don’t need them, you can turn them off. If you’re the only one accessing your servers, you can honestly just pick a single option for each based on the highest security option that’s supported by all your client devices.
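If you do go that route, it's only a few lines; a sketch, assuming all of your clients are reasonably current OpenSSH:

# /etc/ssh/sshd_config (sketch): exactly one choice per category
KexAlgorithms curve25519-sha256
HostKeyAlgorithms ssh-ed25519
Ciphers chacha20-poly1305@openssh.com
MACs hmac-sha2-512-etm@openssh.com
# verify from a client before closing your existing session:
$ ssh -o BatchMode=yes user@host true && echo still reachable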
I feel like an important part of "hardening" a server is to remove/disable unused services. Does anyone know if NanoBSD is actively worked on by the FreeBSD team and/or still in use? For those not aware, NanoBSD is an official build from the FreeBSD team that allows you to compile a slimmed-down FreeBSD image that is read-only yet can run any/all FreeBSD software.
I can find very little about NanoBSD other than a handful of posts from 10 years ago. It seems like a great foundation for hardening a server.
mfsbsd builds a custom live CD using an easily-customized Makefile. NanoBSD expects multiple partitions, some of which are mutable, so is more oriented toward booting small systems from flash.
For my purposes I needed an application server that would reboot to a known-good state, so I embedded the application code inside a mfsbsd live CD.
I've used Martin Matuska's mfsbsd in the past to install a system with ZFS on root. I believe now that's natively supported, but back in the day it was quite an involved thing :)
I always heard that FreeBSD has unparalleled networking.
Does that mean it'd be worth picking FreeBSD over Linux for my C# CRUD app if it had to handle a lot of requests/sec? (let's ignore the DB for the moment)
As with all things, you would really need to benchmark the system, preferably with real load, both ways to know for sure. But that takes a lot of time, especially if you're going to put in the time to tweak both systems.
People can do amazing stuff with enough time in both FreeBSD and Linux. I honestly think most server applications wouldn't be held back by either OS. You need your application to be really lightweight and focused before the OS makes a big difference, and even then, the differences only show if you're maxing out the hardware.
I worked at WhatsApp, and enjoyed working with FreeBSD there, and clearly it worked for us. Linux in FB datacenters also worked, but the server components were a lot different so there was never an apples-to-apples comparison. I run FreeBSD on my personal servers because I enjoyed working with it at Yahoo and then WhatsApp; but my personal servers don't have any performance needs. Sure, the networking stuff is nice (and it was nice to work with in the kernel), but what I like most about FreeBSD is the lack of churn. I can look at old administrative recipes and all the commands still work. I can expect (and mostly get) that when I upgrade, everything will keep working, and maybe a little better; occasionally, a lot better.
IIRC, I saw a presentation by someone (Rick?) where, at your previous employer, you guys slimmed FreeBSD down to be unbelievably minimal, such that only 2-3 total services ran on the entire server.
Was that done for performance reason? Or for hardening reasons?
If someone wanted to do that today with FreeBSD: would you recommend it and how would you go about doing it (NanoBSD)?
We ran a pretty vanilla FreeBSD. IIRC, we disabled locales, and had 5-10 patches depending on exactly when and what bottlenecks we were running into.
We didn't go out of our way to turn off daemons, we just didn't start anything we didn't need. Out of the box, you get sshd, crond, getty, ntpd, syslogd, our system activity report, and that's really all you need to administer the box. Then on our Erlang machines, we'd run one big Erlang process. Unless something went seriously wrong, only Erlang ever had real work to do.
I don't know that there was a performance benefit, you can run a lot of daemons that are waiting on sockets and don't use meaningful amounts of ram before noticing a performance hit. Hardening has something to do with it, can't get into our smtpd if we're not running it. But mostly it kept things simple and reduced work. No extra daemons means no configuration of them and no update headaches. Even with that limited set, I had to go around and reconfigure sshd sometimes and ntp rather more than I'd have liked, and we used caveman automation, so stuff that needs super user is extra painful.
On hardening, we didn't exclude things from FreeBSD's base, even though that would probably be a good idea. Some things are there for reasonable reasons, but had no relevance to us, and less installed stuff would be better, but tradeoffs. We did run stud (now known as hitch, which we needed when Erlang TLS was too slow) in a totally locked down jail though; statically configured binary, only executable in the jail, etc.
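For anyone wanting to approximate that on a stock FreeBSD box today, most of it is just rc.conf; a sketch (knob names vary a bit by release):

# /etc/rc.conf (sketch): enable only what you need to administer the box
sshd_enable="YES"
ntpd_enable="YES"
syslogd_enable="YES"
sendmail_enable="NONE"
# list what is actually enabled
$ service -e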
Personally, I would use FreeBSD. But, to be honest, that's 99% because of familiarity and only 1% because I hope the performance is better. Some of the familiarity is just knowing how the system works and how to do routine tasks easily. Some of it is knowing what manual pages will give me the answers I need and being familiar with the writing so I can look it up. But also, I know a lot of important sysctls to adjust behaviors and where to look if there's something that needs adjusting that I think might have a knob. And, I'm fairly handy at digging through the kernel now that I've done it enough. The Linux kernel is organized differently, and I'd need to develop that skill/knowledge. Getting changes upstreamed is a challenge and I've had some success with FreeBSD, but the Linux kernel would be new processes, and some things that are really core to a usable system (for example ifconfig/ip) are developed separately, so they have their own processes, so potentially more processes to learn.
Mostly our software ran fine in Linux and the things that bothered me the most were mostly artifacts of how Facebook runs their servers. That said, some things in Linux bother me: why does ss exist as a faster netstat instead of just making netstat faster? I don't find the new init framework compelling on servers (I don't hotplug anything, and I don't think init should restart crashed server processes) and it actively hinders me on desktop/laptop, so it's frustrating that the ecosystem has embraced it (but I'm trying to avoid an offtopic rant, email in profile if you want a rant).
We had a couple migration issues, but nothing serious. epoll and kqueue are similar, but not similar enough that we didn't need to tweak some things in Erlang to make it work better. The chat machines in FB have much less ram, because we had to select from FB's menu of server hardware and more nodes with less ram in each makes the most sense on their menu, so I can't say for sure if epoll scales as well as kqueue. I understand epoll takes more syscalls to do the same work (for network I/O anyway), so that's probably worse, but it likely doesn't make a big difference; then there's also io_uring, which I haven't evaluated and might be better than either epoll or kqueue. I think Linux's default memory overcommit behavior is problematic for a single-process server, but it's hard to disable because the ecosystem has formed around it. Tools allocate huge amounts of memory because 'it doesn't matter, it only really allocates when I write to it', and, like, I understand how that's useful, but it's impossible for a program to handle out of memory when the trigger is a write to already-allocated space; alloc failures are hard to handle, but with careful design you can get a fairly consistent crash report most of the time.
That's super interesting and insightful to hear. I really appreciate the time you spent detailing your perspective!
I too love FreeBSD (and don't have much in-depth Linux experience). You hit on a topic I'm also interested in, which is io_uring. A week doesn't go by, it seems, without bumping into an article/post about it. Unfortunately, I haven't found any solid comparison yet of kqueue vs io_uring (or at least none that isn't written by someone in the Linux community who doesn't understand FreeBSD well ... which can be frustrating).
Nonetheless, again - really appreciate your comment.
Well, thank you for asking nice questions and reading my long posts. ;)
I must have missed some of the weekly io_uring posts, but it sure does look interesting. But I'm not working on anything performance-driven at the moment, so I'm not going to dig into it, certainly not enough to try to set up a good comparison of the three options. A tricky bit would be trying to get the test scenario to avoid the IP stack, so that differences between IP handling on the two OSes aren't significant in the results. Comparing OS TCP performance is fun too, but we'd want to compare one thing at a time.
> but what I like most about FreeBSD is the lack of churn
I agree completely. As I mature in this field, this becomes an ever more important characteristic of the technology I adopt. Erlang also shares this property.
I feel like Erlang churn is more acceptable because they're usually honest in naming. You get pg2, which does what pg did, but differently. And now pg (which could have been pg3, but the old pg had been gone long enough to reclaim the name). Similar with phash2, although phash may live for much longer.
FreeBSD doesn't promise compatibility between major versions; a major version is a breaking change. E.g. in FreeBSD 12 they revamped some system types and roughly half of the system interface became incompatible with FreeBSD 11.
It depends on the specifics, but if you install the -compat packages and enable the compat kernel flags, broadly speaking old software continues to work across major versions. Anyway, just because I expect it to happen, doesn't mean it happens (and I acknowledged that).
Things I've run into include when they made the CPU masks bigger, old binaries couldn't set CPU affinity (even when the old code had a mask big enough for the current CPU; we had a local patch for this, since upstream didn't seem interested in making it right).
Updating to 13 at home with the approved process (install kernel, reboot, install userland, reboot (maybe), do ports/pkg update) resulted in a bad network configuration, because the 12.x ifconfig didn't fully work with the 13.0 kernel, but I believe that's been addressed. I never hit similar issues in prod because we would install both the new kernel and new userland before rebooting, because redundant servers mean you can live dangerously (if the first server fails to start, you can improve the upgrade process on the next ones). This one wasn't terrible for me, but I can imagine someone having a very negative experience and holding a grudge if they had a less accessible machine and specific network configs that made the machine unreachable after the first reboot. My machine was accessible but misbehaving, and since I've got easy access to the console, I rolled back and later did a more cautious update.
FreeBSD 11 (+/- 1) changed page inactivity policies in an unexpected way and resulted in undesired swapping for our use cases, and because it was a slow process it wasn't obvious. That release also negatively impacted our heavy disk I/O systems in a way I didn't have time to get to the bottom of; accurate benchmarking was time-consuming and user-impacting, and a migration was in progress, so those systems were forbidden to upgrade. They didn't need the positive networking changes because of their use case, so it was acceptable.
There was a time when FreeBSD had much better networking performance than Linux, but that was many years ago.
Now, on supported hardware, their performance should be similar.
However, Linux has drivers for a much wider variety of networking hardware. FreeBSD has very good support for Intel NICs, but some hardware from other vendors may not be supported.
FreeBSD has a few nicer kernel features for those who develop their own networking applications, but more networking libraries useful for high-performance applications are available for Linux, even if DPDK, which was mentioned in another reply, is available for both Linux and FreeBSD.
So, while I am a very satisfied FreeBSD user, I would recommend that someone with less experience use Linux, as there are more resources readily available.
On the other hand, for someone who wants to learn more about the implementation of networking applications, it can be useful to also try FreeBSD, to understand more about alternative solutions.
If you want better latency and throughput, you wouldn't be using the kernel network stack and instead be opting for some userspace networking stack like DPDK or onload.
Depends obviously on what the bottlenecks of your application are, your NIC and the characteristics of your hardware as well.
True, and the Linux kernel has zero-copy AF_XDP, which enables memory to be shared with userspace. However, low-latency networking is a lot more than just simple kernel bypass.
It's things like pinning CPU cores dedicated to networking, disabling C-states, epolling, and being able to utilize bespoke firmware interfaces designed for smartNICs. Also the application protocol, i.e. using features like TCP checksum offload and TSO.
Heck, the application would also need to be adjusted for a low-latency environment, probably via a custom JVM and by doing things like reading data structures/variables to ensure they are in CPU cache.
Frankly, I would recommend trying OpenOnload, which at least is compatible with native Linux socket programming, unlike DPDK.
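A few of those knobs for concreteness (a sketch; interface names and core numbers are placeholders, and none of this is a substitute for measuring):

# check and toggle NIC offloads (TSO, checksum offload)
$ ethtool -k eth0 | grep -E 'segmentation|checksum'
$ ethtool -K eth0 tso on
# pin the network-facing process to a dedicated core
$ taskset -c 2 ./latency-sensitive-app
# keep the CPU out of deep C-states (cpupower from the linux-tools package)
$ cpupower idle-set -D 0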