
The first thing I do on a new remote box is move SSH from 22 to a non-standard port. I use the same port for every remote box I have, then add that port to `.ssh/config` on my local box.

Second is to disable root login.

Third is to copy my private key over and disable password login.

3 essential steps to secure SSH.
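Sketched as config, those three steps look roughly like this (2222 and the host names are arbitrary examples; open the new port in your firewall before restarting sshd, and keep an existing session open while you test):

```
# /etc/ssh/sshd_config on the remote box
Port 2222                     # step 1: non-standard port (2222 is just an example)
PermitRootLogin no            # step 2: no direct root logins
PasswordAuthentication no     # step 3: keys only
PubkeyAuthentication yes

# ~/.ssh/config on the local box
Host mybox
    HostName mybox.example.com
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
```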



Just use blacklistd [0] on FreeBSD instead of changing the port. It works with sshd and temporarily blocks abusive IPs.

[0]: https://docs.freebsd.org/en/books/handbook/firewalls/#firewa...
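For reference, a minimal FreeBSD setup per the handbook looks something like this (three failures from one IP earn a 24-hour block; verify the option names against your release):

```
# /etc/rc.conf
blacklistd_enable="YES"

# /etc/blacklistd.conf (the stock file ships a line much like this)
[local]
# location       type    proto  owner  name  nfail  duration
ssh              stream  *      *      *     3      24h

# /etc/ssh/sshd_config -- FreeBSD's sshd can notify blacklistd directly
UseBlacklist yes

# /etc/pf.conf -- pf needs the blacklistd anchor to enforce the blocks
anchor "blacklistd/*" in on $ext_if
```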


I always move sshd to a non-standard port, and blacklistd would not address the reason why I do it.

One IP abusing it and trying to gain access isn't what I address by moving the port. I move the port to cut out the noise of the thousands of IPs that will connect to it once to probe it and never again. The volume of those one-off probes is so dramatic that it makes the logs entirely useless. By moving the port, I cut out that noise so that if I glance at the logs I actually have a chance of noticing anything that is worth noticing.


Please do not copy your private key to remote machines ;) You can use the ssh-copy-id tool; it does the right thing for you.


Yeah, my mistake, I meant to write "copy the public key" :D Copying it by hand is OK, but ssh-copy-id is simpler. Can't edit the comment now, hmmm...
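The real invocation is just `ssh-copy-id user@host`; it appends your *public* key to the remote `~/.ssh/authorized_keys` and fixes the permissions. A local sketch of the equivalent, with `/tmp` directories standing in for the remote home:

```shell
#!/bin/sh
# Simulate what ssh-copy-id does, using /tmp in place of a real remote host.
mkdir -p /tmp/fake-remote/.ssh
chmod 700 /tmp/fake-remote/.ssh

# A placeholder public key line (a real one comes from ~/.ssh/id_ed25519.pub).
printf 'ssh-ed25519 AAAAC3Nza...example user@laptop\n' > /tmp/fake-local-id.pub

# Append the key and tighten permissions, as ssh-copy-id would do remotely.
cat /tmp/fake-local-id.pub >> /tmp/fake-remote/.ssh/authorized_keys
chmod 600 /tmp/fake-remote/.ssh/authorized_keys
```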


I think there are better approaches than this.

1) Set up a VPN via WireGuard and only expose that one random UDP port. WireGuard doesn't respond to unauthenticated packets, so the port is effectively invisible to scans.

2) Set up 2FA via libpam-google-authenticator
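A minimal sketch of both steps (addresses, the port, and the key placeholders are all arbitrary examples); once the tunnel is up, sshd can listen on the tunnel address only:

```
# /etc/wireguard/wg0.conf on the server
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820            # any random UDP port; no reply without a valid key
PrivateKey = <server-private-key>

[Peer]                        # one [Peer] block per client
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32

# /etc/ssh/sshd_config -- bind SSH to the tunnel only
ListenAddress 10.8.0.1

# /etc/pam.d/sshd -- 2FA via the libpam-google-authenticator package
auth required pam_google_authenticator.so
```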


Yes, at work we use OpenVPN and only expose VPN, HTTP, HTTPS ports to the public.

But I find VPN a bit overkill on my personal machines.


My use for it was a 5-way residential VPN across multiple countries, for obvious reasons. Just SSH wouldn't really suffice for that. It also makes the shared infrastructure a lot easier for the rest of my family to use.

Also a globally accessible pihole connected to DoH which ensures somewhat global privacy.

They are basically my perfect use for my raspberry pis. Extremely low power, but perfectly capable for handling say 1080p video streams or to RDP into machines for access to cross-country resources.


What would you do if your Wireguard tunnel dies?

That's the one thing that's prevented me from actually doing this.


The same thing that happens if the SSH daemon dies, I guess?

FWIW, I’ve been using Wireguard for a while (probably ~2 years?) as an always-on VPN for multiple mobile devices, and also as a reverse tunnel to pinhole service access inside a LAN. The Wireguard config and daemon has been rock solid. The only time it’s failed is when I messed up the AllowedIPs, but that failure occurs at configuration time. It has never crashed, or stopped routing traffic correctly, or otherwise failed in a way that interrupted traffic flows.


That's a good point.

I guess I'll give it a try for some time.


I have 5 locations running effectively independent VPNs, each hub connected to each other for redundancy if a VPN falls over.

i.e. Each hub has 1 VPN in, or is connecting 4 ways out.

If the port forwarding or something fails inbound, then I can connect via another VPN and try and debug/diagnose what is wrong.

If all VPNs are reporting down, then I know the pi/internet is completely down. Connectivity will usually restore itself, but I also have someone on site who can plug/unplug/restore the system if necessary. The same kind of failure could happen with ssh or WireGuard alone.


> What would you do if your Wireguard tunnel dies?

WireGuard tunnels are pretty robust to failure.

They can survive you changing your Wi-Fi access point and IP, for example.


SSH is typically the only thing I expose (publicly if needed), because in most environments where it is running, it is used for troubleshooting issues. If your issue is that your WireGuard peer can't connect, you are lost with that suggestion.


Just so we’re clear, this is 1 step to secure SSH, 1 step to avoid installing logrotate, and 1 step to encourage good admin practices.

Changing the port and using a non-root user for SSH don’t appreciably change the strength of the server’s security.


Changing the port is obfuscation and by itself does not enhance security, but it does cut out all the noise from automated bots. That allows better alerting on brute-force attempts, because any attempt that remains is a human deliberately targeting your server. The end result is effectively a better security posture. I have servers sprinkled all over the internet, and in the last 30 years or so bots have never tickled my ssh daemon.


As I said: changing the port is just a means to avoid having to `apt install logrotate`

Active alerting on brute force attempts on an internet-facing SSH service is an exercise in human suffering. At best you don’t get any alerts, and at worst you get alerts that you do… what, precisely, with? Block the IP? Look up the “human” attacker and send them an email asking them to stop?

There are environments and entities for whom pattern detection on incoming connections makes sense, and those environments aren’t running internet-facing SSH.


Only if you never read your logs.


I feel like this doesn’t actually address any of my comment.

I’m specifically saying that the act of reading SSH logs for an internet-facing server is an exercise in futility. The kinds of things that will show up in the logs (brute force attempts, generally nmapping, etc) are not credible risks to even a largely unconfigured SSH daemon (as noted elsewhere in this thread, the bar to have an above average secure SSH service is basically “apply pubkey, disable password auth, celebrate”).

The attackers that are problematic don’t look out of place in your logs: somebody who stole a valid pubkey/password, the unlikely case of an SSH zero day, etc. Those are going to be single access attempts that just work. Unless you’re literally alerting on every successful auth, the logs aren’t helping you for active alerting.

Keeping your internet-facing SSH logs is important for investigative work: once you find out that your buddy accidentally put their private key in a pastebin, you can check if somebody used it to log into your server.
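That investigative check boils down to a grep over the auth log (the path and line format vary by distro; `/var/log/auth.log` is the Debian/Ubuntu default, and the sample entries below are made up for illustration):

```shell
#!/bin/sh
# Build a small sample auth log; real sshd entries follow this general shape.
cat > /tmp/sample-auth.log <<'EOF'
Jan 10 09:12:01 host sshd[1001]: Accepted publickey for alice from 203.0.113.5 port 51000 ssh2: ED25519 SHA256:AbCdEf
Jan 10 09:15:22 host sshd[1002]: Failed password for root from 198.51.100.7 port 40022 ssh2
Jan 11 14:03:10 host sshd[1003]: Accepted publickey for alice from 192.0.2.99 port 60111 ssh2: ED25519 SHA256:AbCdEf
EOF

# List every successful key-based login: when, who, and from where.
grep 'Accepted publickey' /tmp/sample-auth.log |
  awk '{print $1, $2, $3, $9, "from", $11}'
```

If the compromised key was only ever used from known addresses, an unfamiliar source IP in that output is the smoking gun.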


I got a new cloud virtual machine and didn't log in for 2 hours. When I did, the logs showed about 50 login attempts from random IP addresses.

I changed my port to a random 4 digit number. Not a single failed login attempt in 6 months.
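Picking the number can be as simple as this (the range is an example matching "4 digits"; `shuf` is from GNU coreutils, and anything below 1024 would need root):

```shell
#!/bin/sh
# Pick a random 4-digit unprivileged port for sshd.
port=$(shuf -i 1025-9999 -n 1)
echo "Port $port"
```

Worth a quick check that the chosen port isn't already claimed by another service (`grep " $port/" /etc/services`).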

Obviously follow good security practices too, but I like not having to rotate and filter the logs with yet another tool.


obscurity increases security, doesn't it?



