Background: 15 years of experience in software and apparently spoiled because it was already set up correctly.
Been practicing doing my own servers, published a test site and 24 hours later, root was compromised.
Rolled back to the backup before I made it public and now I have a security checklist.
I’ve been quite stupid with this but never really had issues. Ever since I changed the open ssh port from 22 to something else, my server is basically ignored by botnets. These days I obviously also have some other tricks like fail2ban, but it was funny how effective that was.
Almost the same here. I also change some ssh settings: disable root login, disable password, allow only public key login. That’s about it. I never had any problems.
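For anyone new to this, those settings are just a few lines in /etc/ssh/sshd_config (a minimal sketch; reload sshd after editing):

```
PermitRootLogin no
PasswordAuthentication no
# called ChallengeResponseAuthentication in older OpenSSH releases
KbdInteractiveAuthentication no
PubkeyAuthentication yes
```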
We’re not really supposed to expose the ssh port to the internet at all. Better to hide it behind a vpn.
But it’s too damn convenient for so many use cases. Fuck it. Fail2Ban works fine.
You can also set up an ssh tarpit on port 22, which will tie up the bot’s resources and get them stuck in a loop for a while. But I didn’t think it was worth attracting extra attention from the bot admins to satisfy my pettiness.
I can’t even figure out how to expose my services to the internet; honestly, it’s probably for the best. Wireguard gets the job done in the end.
I’m interested: how do you expose your services (on your PC, I assume) to the internet through Wireguard? Is it through some VPN?
VPNs are neat. Besides being able to mask your IP and DNS, they can also provide resources to devices outside a network.
A good example: the server at my work is only accessible on my work’s network, so the way to access it remotely without exposing it directly to the internet is to use a VPN tunnel.
Wireguard IS a VPN. Somehow, despite his struggles with exposing services to the internet, he has exposed WireGuard from his home to the internet so he can connect to it. Then he can reach his internal services from there.
It’s honestly the best option and how I operate as well. I only have a handful of items exposed and even those flow through a DMZ proxy before hitting their destination servers.
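For the person asking how that works: the only thing exposed is WireGuard’s UDP port, forwarded to a box at home, and everything else is reached over the tunnel. A bare-bones sketch (keys, addresses, and the LAN subnet are placeholders):

```
# Home box: /etc/wireguard/wg0.conf -- only UDP 51820 is port-forwarded
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# your laptop/phone
[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32

# Client: /etc/wireguard/wg0.conf
[Interface]
Address    = 10.8.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey           = <server-public-key>
Endpoint            = home.example.com:51820
# tunnel subnet plus the home LAN
AllowedIPs          = 10.8.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```

Reaching the rest of the LAN through the home box also needs IP forwarding (and usually NAT) enabled on it.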
Oh, I thought it was merely a protocol for virtual networks that VPNs used. The more you know!
Edit: spelled out VPN 😅
I’m having the opposite problem right now. Tightened a VM down so hard that now I can’t get into it.
One time, I didn’t realize I had allowed all users to log in via ssh, and I had a user “steam” whose password was just “steam”.
“Hey, why is this Valheim server running like shit?”
“Wtf is `xrx`?”
“Oh, it looks like it’s mining crypto. Cool. Welp, gotta nuke this whole box now.”
So anyway, now I use NixOS.
Basic setup for me is scripted on a new system. In regards to ssh, I make sure (rough sketch below the list):
- Root account is disabled, sudo only
- ssh only by keys
- sshd blocks all users but a few, via AllowUsers
- All ‘default usernames’ are removed, like ec2-user or ubuntu for AWS ec2 systems
- The default ssh port moved if ssh has to be exposed to the Internet. No, this doesn’t make it “more secure” but damn, it reduces the script denials in my system logs, fight me.
- Services are only allowed connections by an allow list of IPs or subnets. Internal, when possible.
My systems are not “unhackable” but not low-hanging fruit, either. I assume everything I have out there can be hacked by someone SUPER determined, and have a vector of protection to mitigate backwash in case they gain full access.
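Roughly, the sshd part of a script like that might look like this (usernames, port, and the cloud accounts are placeholders, and it assumes the distro’s sshd_config includes the sshd_config.d drop-in directory):

```
#!/bin/sh
# Sketch of the sshd hardening step on a fresh box.
set -eu

cat > /etc/ssh/sshd_config.d/90-hardening.conf <<'EOF'
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers admin deploy
Port 2222
EOF

# Drop cloud-image default accounts if they exist (e.g. on EC2).
for u in ubuntu ec2-user; do
    if id "$u" >/dev/null 2>&1; then
        userdel -r "$u"
    fi
done

# Validate the config before reloading ("ssh" instead of "sshd" on Debian/Ubuntu).
sshd -t && systemctl reload sshd
```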
> The default ssh port moved if ssh has to be exposed to the Internet. No, this doesn’t make it “more secure” but damn, it reduces the script denials in my system logs, fight me.
Gosh, I get unreasonably frustrated when someone says “yeah, but that’s just security through obscurity.” Like, yeah, we all know what nmap is; a persistent threat will just scan all 65535 ports and figure out where ssh is listening. But if you change your threat model and talk about bots? Logs are much cleaner, and moving ports gets rid of a lot of traffic. Obviously so does enabling keys only.
Also does anyone still port knock these days?
Literally the only time I got somewhat hacked was when I left a service on its default port. Obscurity is reasonable; combined with other things like the ones mentioned here, it makes you pretty much invulnerable to casuals. Somebody needs to target you specifically to get anything.
> Also does anyone still port knock these days?
Enter Masscan, probably a net negative for the internet, so use with care.
I didn’t see anything about port knocking there, it rather looks like it has the opposite focus - a quote from that page is “features that support widespread scanning of many machines are supported, while in-depth scanning of single machines aren’t.”
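For anyone who hasn’t seen port knocking, the classic knockd config is roughly this (the sequence and iptables command are the usual man-page example; pick your own ports):

```
# /etc/knockd.conf
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn
```

Then `knock yourhost 7000 8000 9000` from the client opens port 22 for your IP, and the reverse sequence closes it again.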
At least you had a backup
I do worry about putting up public servers that other people might rely on because there’s something I might not realize making it vulnerable.
So far I have pubkey root login only on the VPSs I’m messing around with, but my ol’ reliable private key from 6 years ago might be beginning to fall behind on encryption standards.
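If the old key is the worry, rotating to a modern ed25519 key only takes a minute (file name and host are placeholders):

```
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "main key 2024"
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@your-vps.example
# Once the new key logs in fine, remove the old entry from ~/.ssh/authorized_keys on the VPS.
```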
And this is why every time a developer asks me for shell access to any of the deployment servers, I flat out deny the request.
Good on you for learning from your mistakes, but a perfect example for why I only let sysadmins into the systems.
We have it at my company; it’s just a very small group, we have to manually enable it for production, and it’s through tools like Teleport. Staging and the like is fair game for them for debugging, same infra though. Gives us the best of all worlds.
You’re not wrong! Devops made me lazy
Please examine where devops allowed non-system people to be the last word on altering systems. This is a risk that needs block-letter indemnification or correction.
It’s not that devops made ya lazy. I’ve been doing devops since before they coined the term, and it’s a constant effort to remind people that it doesn’t magically make things safe, but keeping it safe is still the way.
Ah not to discount devops, I mean that in a good way.
Devops made me lazy in that for the past decade, I focus on just everything inside the code base.
I literally push code into a magic black box that then triggers a rube goldberg of events. Servers get instanced. Configs just get magically set up. It’s beautiful. Just years of smart people who make it so easy that I never have to think about it.
Since I can’t pay my devops team to come to my house, I get to figure it all out!
Permitting inbound SSH attempts, but disallowing actual logins, is an effective strategy to identify compromised hosts in real-time.
The origin address of any login attempt is betraying that it shouldn’t be trusted, and can be fed into tarpits and block lists.
If it is your single purpose to create a blocklist of suspect IP addresses, I guess this could be a honeypot strategy.
If it’s to secure your own servers, you’re only playing whack-a-mole using this method. For every IP you block, ten more will pop up.
Instead of blacklisting, it’s better to whitelist the IP addresses or ranges that have a legitimate reason to connect to your server, or alternatively use something like geoip firewall rules to limit the scope of your exposure.
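As a sketch, an allowlist like that in nftables could look like this (the trusted subnet is a placeholder):

```
# /etc/nftables.conf -- only the trusted subnet may reach ssh; web stays public
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    ip saddr 203.0.113.0/24 tcp dport 22 accept
    tcp dport { 80, 443 } accept
  }
}
```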
Endlessh and fail2ban are great for setting up an SSH honeypot. There’s even a Prometheus exporter version for some nice stats.
Just expose endlessh on your public port 22 and if needed, configure your actual ssh on a different port. But generally: avoid exposing ssh if you don’t actually need it or at least disable root login and disable password authentication completely.
https://github.com/skeeto/endlessh
https://github.com/shizunge/endlessh-go
https://github.com/itskenny0/fail2ban-endlessh
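A minimal endlessh config along those lines (the path and the relocated sshd port are whatever your setup uses):

```
# /etc/endlessh/config -- tarpit on the public port 22
# (binding below 1024 needs root or CAP_NET_BIND_SERVICE)
Port 22
# milliseconds between banner lines; bots hang here
Delay 10000
MaxClients 4096
LogLevel 1
```

With that in place, the real sshd gets moved (e.g. `Port 2222` in sshd_config) or is only reachable over the VPN.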
I’m confused. I never disable the root user and never got hacked.
Is the issue maybe that the app is coded in a shitty way?
You can’t really disable the root user. You can make it so they can’t log in remotely, which is highly suggested.
sudo passwd -l root
This disables the root user
There’s no real advantage to disabling the root user, and I really don’t recommend it. You can disable SSH root login, and as long as you ensure root has a secure password that’s different from your own account’s, your system is just as safe, with the added advantage of having the root account in case something happens.
That wouldn’t be defense in depth. You want to limit anything that’s not necessary as it can become a source of attack. There is no reason root should be enabled.
I don’t understand. You will still need to do administrative tasks once in a while, so it isn’t really unnecessary, and if root can’t be logged in, you will have to use sudo instead, which could be an attack vector just like su.
Why do like, houses have doors man. You gotta eliminate all points of egress for security, maaaan. /s
There’s no particular reason to disable root, and with a hardened system, it’s not even a problem you need to worry about…
Do not allow username/password login for ssh. Force certificate authentication only!
Why though? If you have a strong password, it will take an eternity to brute force.
Interesting. Do you know how it got compromised?
I published it to the internet and the next day, I couldn’t ssh into the server anymore with my user account and something was off.
Tried root + password, also failed.
Immediately facepalmed because the password was a generic 8-character one and there was no fail2ban to stop the guessing.
wow crazy that this was the default setup. It should really force you to either disable root or set a proper password (or warn you)
Most distributions disable root by default
Which ones? I’m asking because that isn’t true for cent, rocky, arch.
We’re probably talking about different things. Virtually no distribution comes with a root password set; you have to explicitly give the root user a password, and without a password no amount of brute-force sshing as root will work. I’m not saying the root user is entirely disabled. So either the service OP is building on is basically a goldmine for compromised machines, or OP literally shot themselves in the root by giving root a password manually, something you should never do.
Yeah I was confused about the comment chain. I was thinking terminal login vs ssh. You’re right in my experience…root ssh requires user intervention for RHEL and friends and arch and debian.
Side note: did you mean to say “shot themselves in the root”? I love it either way.
Mostly Ubuntu. And… I think it’s just Ubuntu.
Ah fair enough, I know that’s the basis of a ton of distros. I lean towards RHEL so I’m not super fluent there.
Don’t use passwords for ssh. Use keys and disable password authentication.
More importantly, don’t open up SSH to public access. Use a VPN connection to the server. This is really easy to do with Netbird, Tailscale, etc. You should only ever be able to connect to SSH privately, never over the public net.
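A rough sketch of that with Tailscale (Netbird is similar; the 100.x address is a placeholder):

```
# On the server: join the tailnet and note its tailnet address
sudo tailscale up
tailscale ip -4

# In /etc/ssh/sshd_config: bind sshd only to the tailnet address
# ListenAddress 100.101.102.103

# From any other device on the tailnet
ssh admin@100.101.102.103
```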
It’s perfectly safe to run SSH on port 22 towards the open Internet with public key authentication only.
On a new linux install or image I will always (rough sketch after the list):
- Make new user(s)
- Setup new user to sudo
- Change ssh port
- Change new user to authenticate ssh via key+password
- Disable root ssh login
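A rough sketch of that list, assuming a Debian-ish box (usernames and port are placeholders):

```
# Make the new user and give it sudo
adduser admin
usermod -aG sudo admin

# Copy the key you logged in with over to the new user (assumes root has a key installed)
mkdir -p /home/admin/.ssh
cp ~/.ssh/authorized_keys /home/admin/.ssh/
chown -R admin:admin /home/admin/.ssh
chmod 700 /home/admin/.ssh && chmod 600 /home/admin/.ssh/authorized_keys

# In /etc/ssh/sshd_config:
#   Port 2222
#   PermitRootLogin no
#   AuthenticationMethods publickey,password   # require key AND password, per the list
systemctl reload sshd
```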
I hope it is not passwordless sudo; that is basically the same as root.
That’s more or less the advice I’ve gotten as well. I’ve also read good things about fail2ban which tries to ban sources of repeated authentication failures to prevent brute force password attempts. I’ve used it, but the only person who has managed to get banned is myself! I did get back in after the delay, but I’m happy to know it works.
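For reference, a minimal fail2ban jail.local along those lines; the ignoreip range is a placeholder, but it’s the setting that keeps you from banning yourself:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
# Don't ban your own LAN/VPN addresses
ignoreip = 127.0.0.1/8 192.168.1.0/24
```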
How are people’s servers getting compromised? I’m no security expert (I’ve never worked in tech at all) and have a public VPS, never been compromised. Mainly just use SSH keys, not passwords; I don’t do anything too crazy. Like, if you have open SSH on port 22 with root login enabled and your root password is `password123`, then maybe, but I’m surprised I’ve never been pwned if it’s so easy to get got…
That’s incredible, I’ve got the same combination on my luggage.
The one db I saw compromised at a previous employer was an AWS RDS with public Internet access open and default admin username/password. Luckily it was just full of test data, so when we noticed its contents had been replaced with a ransom message we just deleted the instance.
Glad my root pass is `toor` and not something as obvious as `password123`.
`toor`, like Tor, the leet hacker software. So it must be super secure.
By allowing password login and using weak passwords or by reusing passwords that have been involved in a data breach somewhere.
That makes sense. It feels a bit mad that the difference between getting pwned super easy vs not is something simple like that. But also reassuring to know, cause I was wondering how I heard about so many hobbyist home labs etc getting compromised when it’d be pretty hard to obtain a reasonably secured private key (ie not uploaded onto the cloud or anything, not stored on an unencrypted drive that other people can easily access, etc). But if it’s just password logins that makes more sense.
This is like browsing /c/selfhosted, where everyone port-forwards every experimental piece of garbage across their router…
Meh. Each service in its own isolated VM and subnet, plus a generally good firewall setup. Currently hosting ~10 services publicly, never had any issue.