

Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.
Take it a step further and remove the default backend on your reverse proxy so that requests to anything but the correct DNS name are dropped (bots are mostly just probing raw IPs), and you basically don’t have to worry at all. Just make sure to keep your reverse proxy up to date.
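For anyone wondering what “no default backend” actually buys you, here’s a rough sketch of the logic in Python (just an illustration, not a real proxy config; the hostname and port are made up): anything that doesn’t ask for the right DNS name gets its connection closed without a response.

```python
# Sketch of the "no default backend" idea: only requests whose Host header
# matches the expected DNS name get an answer; everything else is dropped.
# "apps.example.home" and port 8443 are placeholders, not a real setup.
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_HOST = "apps.example.home"

class DropUnknownHosts(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        if host != EXPECTED_HOST:
            # Bots scanning raw IPs never send the right Host header,
            # so just close the connection with no response at all.
            self.close_connection = True
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"the real backend would answer here\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), DropUnknownHosts).serve_forever()
```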
The reverse proxy ends up enabling security through obscurity, which shouldn’t be your only line of defence, but it is an effective first line of defence, especially for anyone who isn’t the target of nation-state-level attacks.
Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form-based logins on your apps might be a lot prettier, but it’s a lot harder to probe for what’s running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.
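To make the 401 point concrete, basic auth at the proxy level really does boil down to something this simple; here’s a small Python sketch of the check (the credentials and realm are placeholders, and in practice the proxy does this for you):

```python
# Sketch of what basic auth in front of every endpoint boils down to:
# no valid Authorization header means a bare 401 for any URI, which gives
# probes nothing to fingerprint. Credentials here are placeholders.
import base64

VALID_USER, VALID_PASS = "homelab", "correct horse battery staple"

def check_basic_auth(authorization: str | None) -> bool:
    """True only if the header is 'Basic <base64(user:pass)>' with the right creds."""
    if not authorization or not authorization.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(authorization[len("Basic "):]).decode()
    except Exception:
        return False
    user, _, password = decoded.partition(":")
    return user == VALID_USER and password == VALID_PASS

def respond(authorization: str | None) -> tuple[int, dict[str, str]]:
    # Same answer for every URI: either you have credentials or you get a 401.
    if not check_basic_auth(authorization):
        return 401, {"WWW-Authenticate": 'Basic realm="proxy"'}
    return 200, {}
```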
I always see posts on Lemmy about elaborate VPN setups as the only way to access internal services, but that seems like awful overkill to me.
A VPN is still needed for things that are inherently insecure or that just should never be exposed to the outside, but if it’s a web service that requires authentication, a reverse proxy is plenty of security for a home lab.
Btrfs is a copy-on-write (COW) filesystem, which means that when you modify a file the data can’t be changed in place. Instead, a new block is written elsewhere and a single atomic operation flips the metadata so that new block becomes the location of the data.
This is a really good thing for protecting your data from power outages or system crashes, because the data on disk is always in a good state. Either the update happened or it didn’t; there is never any in-between.
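A toy illustration of the COW idea in Python (just the concept, not how btrfs is implemented internally; the function and paths are invented): the new data is written somewhere else first, and a single atomic rename is the “flip”, so a crash at any moment leaves either the old version or the new one.

```python
# Toy copy-on-write update: never overwrite in place. Write a new copy,
# fsync it, then flip one pointer (here, the filename) atomically.
import os
import tempfile

def cow_update(path: str, new_data: bytes) -> None:
    directory = os.path.dirname(os.path.abspath(path))
    # 1. Write the new "block" to a completely separate location.
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as tmp:
        tmp.write(new_data)
        tmp.flush()
        os.fsync(tmp.fileno())
    # 2. One atomic operation makes the name point at the new data.
    #    Crash before this line: old data intact. Crash after: new data intact.
    os.replace(tmp_path, path)
```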
While COW is good for data integrity, it isn’t always good for speed. If you are doing lots of updates smaller than a block, you first have to read the rest of the block, then seek to a new location and write out the whole new block. On SSDs this isn’t an issue, but on HDDs it can slow things down and fragment your filesystem considerably.
Btrfs has a defragmentation utility though, so fragmentation is a fixable problem. If you were using ZFS there would be no way to reverse that fragmentation.
Other filesystems like ext4/XFS are “journaling” filesystems. Instead of applying every change to its final location immediately, they record the pending changes in a “journal” on the disk (by default ext4 and XFS only journal metadata, not file data). When there is time, the changes from the journal are flushed out to make the actual updates happen. Writing the journal is a sequential operation, which makes it more efficient on HDDs. If the system crashes, the filesystem replays the journal to get back to a consistent state.
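Here’s a very simplified sketch of that journaling idea in Python (nothing like ext4’s actual jbd2 implementation; the file names and functions are invented): changes are appended sequentially to a journal first, then applied to their real location, and after a crash the journal is simply replayed.

```python
# Very simplified journaling sketch (not ext4/jbd2 internals): changes are
# appended sequentially to a journal first, then applied to their real
# location; after a crash, replaying the journal restores consistency.
import json
import os

JOURNAL = "journal.log"   # placeholder paths for the sketch
DATAFILE_DIR = "data"

def journal_write(key: str, value: str) -> None:
    # Sequential append + fsync: cheap on HDDs compared to scattered writes.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"key": key, "value": value}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    apply_change(key, value)

def apply_change(key: str, value: str) -> None:
    os.makedirs(DATAFILE_DIR, exist_ok=True)
    with open(os.path.join(DATAFILE_DIR, key), "w") as f:
        f.write(value)

def replay_journal() -> None:
    # After a crash, re-apply every journaled change; applying twice is harmless.
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            apply_change(entry["key"], entry["value"])
```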
ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on a fast SSD (a separate log device, often called a SLOG) while the data itself is on your HDDs. This also helps with the fragmentation issues for ZFS, because synchronous writes land in the ZIL right away while the actual data is flushed to the main disks in batches every few seconds. That means fewer, larger writes to the HDDs.
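The batching part can be sketched the same way (loosely inspired by ZFS transaction groups, not real ZFS code; the WriteBatcher class, flush interval, and pool.img file are all made up for illustration): small writes pile up in memory and hit the slow disks as one larger sequential write every few seconds.

```python
# Sketch of the "batch and flush every few seconds" idea: small incoming
# writes accumulate in RAM and reach the slow disks as fewer, larger writes.
import time

class WriteBatcher:
    def __init__(self, flush_interval: float = 5.0):
        self.flush_interval = flush_interval
        self.pending: list[bytes] = []
        self.last_flush = time.monotonic()

    def write(self, data: bytes) -> None:
        self.pending.append(data)          # fast: stays in memory for now
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            blob = b"".join(self.pending)  # one large write instead of many tiny ones
            with open("pool.img", "ab") as disk:   # placeholder standing in for the HDDs
                disk.write(blob)
        self.pending.clear()
        self.last_flush = time.monotonic()
```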
Another downside of COW is that, because the filesystem is trusted to keep its on-disk state consistent, in some extremely rare cases where corruption does get written to disk you can lose the entire filesystem. There are lots of checks in software to prevent that from happening, but occasionally hardware issues let corruption slip past.
This is why anyone running ZFS/btrfs for their NAS is recommended to run ECC memory. A random bit flip in RAM might mean the wrong data gets written out, and if that data is part of the filesystem’s own metadata the entire filesystem may be unrecoverable. This is exceedingly rare, but it is a risk.
Most traditional filesystems, on the other hand, were built assuming they had to clean up corruption from system crashes and the like, so they have fsck tools that can go through and recover as much as possible when that happens.
Lots of other posts here talk about the other features that make btrfs a great choice. If you were running a high-performance database, a journaling filesystem would likely be faster, but maybe not by much, especially on SSDs. For an end-user system, the snapshots, file checksumming, etc. are far more important than a tiny bit of performance. As for the potential corruption issues, if you are lacking ECC then backups are the proper mitigation (DDR5 does include on-die ECC in every stick, though that isn’t the same as full ECC).