The first thing I do on a new remote box is move SSH off the standard port 22 to a non-standard one. I use the same port for every remote box I have, then add that port to `.ssh/config` on my local box.
Second is to disable root login.
Third is to copy my public key over and disable password login.
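For reference, those three steps boil down to a handful of config lines. The port number, host alias, address, and usernames below are made-up examples:

```
# /etc/ssh/sshd_config.d/10-hardening.conf on the remote box (sketch)
Port 2222
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
```

```
# ~/.ssh/config on the local box, so plain `ssh mybox` keeps working
Host mybox
    HostName 203.0.113.10
    Port 2222
    User admin
    IdentityFile ~/.ssh/id_ed25519
```

Restart sshd after the change, and copy the public key over with `ssh-copy-id` (or append it to `~/.ssh/authorized_keys` by hand) before disabling password auth, or you lock yourself out.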
I always move sshd to a non-standard port, and blacklistd would not address the reason why I do it.
One IP abusing it and trying to gain access isn't what I address by moving the port. I move the port to cut out the noise of the thousands of IPs that will connect to it once to probe it and never again. The volume of those one-off probes is so dramatic that it makes the logs entirely useless. By moving the port, I cut out that noise so that if I glance at the logs I actually have a chance of noticing anything that is worth noticing.
My use for it was to have a 5-way residential VPN across multiple countries, for obvious reasons. Plain ssh wouldn't really suffice for that. It also makes the shared infrastructure a lot easier for the rest of my family to use.
Also a globally accessible Pi-hole with a DoH upstream, which gives some measure of privacy wherever we are.
They are basically the perfect use for my Raspberry Pis: extremely low power, but perfectly capable of handling, say, a 1080p video stream, or RDP into machines for access to cross-country resources.
The same thing that happens if the SSH daemon dies, I guess?
FWIW, I’ve been using Wireguard for a while (probably ~2 years?) as an always-on VPN for multiple mobile devices, and also as a reverse tunnel to pinhole service access inside a LAN. The Wireguard config and daemon have been rock solid. The only time it’s failed is when I messed up the AllowedIPs, but that failure occurs at configuration time. It has never crashed, or stopped routing traffic correctly, or otherwise failed in a way that interrupted traffic flows.
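For anyone who hasn't hit it: the AllowedIPs foot-gun lives in the peer section of the config. Everything below (interface address, port, keys) is a made-up sketch, not a working config:

```
# /etc/wireguard/wg0.conf on the hub side (placeholder values)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

[Peer]
# A phone that should reach the tunnel subnet only.
PublicKey = <phone-public-key>
AllowedIPs = 10.8.0.2/32
```

AllowedIPs does double duty: it's both the routing table entry (which destinations go to this peer) and an ingress filter (which source addresses this peer may use). Get it wrong and traffic is silently dropped rather than producing an error, which is why the failure shows up at configuration time and nowhere else.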
I have 5 locations running effectively independent VPNs, each hub connected to the others for redundancy if a VPN falls over.
i.e. Each hub has 1 VPN in, or is connecting 4 ways out.
If the port forwarding or something fails inbound, then I can connect via another VPN and try and debug/diagnose what is wrong.
If all VPNs are reporting down, then I know the pi/internet is completely down. It will usually restore connectivity on its own, but I also have someone there who can plug/unplug/restore the system if necessary. The same kind of problem would occur if ssh or wireguard fell over.
ssh is typically the only thing I expose (publicly if needed), because in most environments where it is running it is used for troubleshooting issues. If your issue is that your wireguard peer can't connect, you are lost with that suggestion.
Changing the port is obfuscation and by itself does not enhance security; however, it does cut out all the noise from the automated bots. This allows you to have better alerting on brute force attempts, because any remaining attempts are a human manually targeting your server. The end result is effectively a better security posture. I have servers sprinkled all over the internet, and in the last 30 years or so bots have never tickled my ssh daemons.
As I said: changing the port is just a means to avoid having to `apt install logrotate`
Active alerting on brute force attempts on an internet-facing SSH service is an exercise in human suffering. At best you don’t get any alerts, and at worst you get alerts that you do… what, precisely, with? Block the IP? Look up the “human” attacker and send them an email asking them to stop?
There are environments and entities for whom pattern detection on incoming connections makes sense, and those environments aren’t running internet-facing SSH.
I feel like this doesn’t actually address any of my comment.
I’m specifically saying that the act of reading SSH logs for an internet-facing server is an exercise in futility. The kinds of things that will show up in the logs (brute force attempts, general nmapping, etc.) are not credible risks to even a largely unconfigured SSH daemon (as noted elsewhere in this thread, the bar for an above-average secure SSH service is basically “apply pubkey, disable password auth, celebrate”).
The attackers that are problematic don’t look out of place in your logs: somebody who stole a valid pubkey/password, the unlikely case of an SSH zero day, etc. Those are going to be single access attempts that just work. Unless you’re literally alerting on every successful auth, the logs aren’t helping you for active alerting.
Keeping your internet-facing SSH logs is important for investigative work: once you find out that your buddy accidentally put their private key in a pastebin, you can check if somebody used it to log into your server.
I got a new cloud virtual machine and didn't log in for 2 hours. When I did, the logs showed about 50 login attempts from random IP addresses.
I changed my port to a random 4-digit number. Not a single failed login attempt in 6 months.
Obviously follow good security practices too, but I like not having to rotate and filter the logs with yet another tool.
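If you do want to eyeball that noise, a few lines of Python are enough to tally failed attempts per source IP. The sample log lines below are fabricated, and the regex assumes the classic `auth.log` message format:

```python
from collections import Counter
import re

# Fabricated sample of sshd log lines; the real ones live in
# /var/log/auth.log or `journalctl -u ssh`.
log = """\
Jan 10 03:12:01 vm sshd[812]: Failed password for root from 203.0.113.7 port 51122 ssh2
Jan 10 03:12:04 vm sshd[815]: Failed password for invalid user admin from 203.0.113.7 port 51124 ssh2
Jan 10 03:13:40 vm sshd[899]: Failed password for root from 198.51.100.23 port 40210 ssh2
Jan 10 04:02:17 vm sshd[1024]: Accepted publickey for me from 192.0.2.5 port 60000 ssh2
"""

# Pull the source address out of each "Failed password" line and count per IP.
pat = re.compile(r"Failed password for .* from (\S+) port")
failed = Counter(m.group(1) for line in log.splitlines() if (m := pat.search(line)))
print(failed.most_common())  # → [('203.0.113.7', 2), ('198.51.100.23', 1)]
```

On a non-standard port that counter stays empty for months, which is exactly the point: anything it does catch is worth looking at.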
Second is to disable root login.
Third is to copy my public key over and disable password login.
3 essential steps to secure SSH.