• 0 Posts · 29 Comments · Joined 2 years ago · Cake day: November 5th, 2023

  • greyfox@lemmy.world to Selfhosted@lemmy.world · Raid Z2 help · 2 months ago (edited)

    Well, I am not claiming to be a ZFS expert, but I have been using it since 2008ish, both personally and professionally, so I am fairly certain that what I have said here is correct, semantics aside.

    > Both lz4 and zstd have almost no performance impact on modern hardware.

    So “almost” is zero in your mind? Why waste CPU cycles compressing data that is already compressed? I recognize that you might not care, but I sure do, and I wouldn’t say it is wrong to think that way.

    > compression acts on blocks in ZFS, therefore it is enabled at the pool level

    This is incorrect. You can zfs set compression=lz4 dataset (or off) on a per-dataset basis. You can see your compression efficiency per dataset by running zfs get compressratio dataset; if your blocks were written to a dataset with compression=off, you will see no compression for that dataset. You can absolutely mix compressed and uncompressed datasets in the same pool.
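
    For example (pool and dataset names are hypothetical):

        zfs set compression=lz4 tank/documents
        zfs set compression=off tank/media
        zfs get compressratio tank/documents tank/media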

    OP added a -O option to set compression when he created the pool, but that is not a pool-level setting. If you look at the documentation for zpool-create you will see that -O options are just properties passed to the root dataset, versus -o options, which are actual pool-level parameters.
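
    A quick sketch of the difference (device names and property values are illustrative only):

        # -o sets pool-level properties; -O sets properties on the root dataset
        zpool create -o ashift=12 -O compression=lz4 tank raidz2 sda sdb sdc sdd sde sdf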

    You might be confusing compression with deduplication; deduplication is more pool-wide.

    > ZFS does indeed need to allocate some space at the front and end of a pool for slop, metaslab, and metadata. I think you are confusing filesystem and datasets.

    Well, yes and no here. You are right that I should have been calling them datasets. Dataset is the generic term, and there are different dataset types: file systems, volumes, and snapshots.

    So yes, maybe I should have been more generic and called them datasets, but unless OP is using block volumes we are probably talking about ZFS file systems here. Look at the zfsprops man page, for example, and you will see “file system” mentioned about 60 times in the discussion of properties that can be set on file-type datasets.

    > I’m not sure what you’re trying to say about NFS and ZFS here, but this is completely false, even if you mean datasets.

    It sounds like you are unaware of the native NFS/SMB integrations that ZFS has.

    It is totally optional, but instead of using your normal /etc/exports to set NFS settings, ZFS can dynamically load export settings when your dataset is mounted.

    This is done with the sharenfs parameter: zfs set sharenfs=<export options> dataset. Doing this means you keep your export settings with your pool instead of with the system it is mounted on, so if you, say, replicate your pool to another system, those export settings automatically come with it.

    There are also sharesmb options for Samba.
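
    As a rough example on Linux (the network, export options, and dataset name are made up; sharenfs takes standard exports(5)-style options):

        zfs set sharenfs="rw=@192.168.1.0/24" tank/media
        zfs set sharesmb=on tank/media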

    My point was that you should lay out your dataset hierarchy based on the permissions you expect to hand out over NFS/SMB. You could certainly skip all of this and handle the exports manually yourself, in which case you wouldn’t have to worry about separate filesystems and this point is moot.

    My post was less about compression and more about saying that you should consider splitting your datasets based on what is in them, because the more separated they are, the more control you have. You gain a lot of control and lose very little, since it all comes from the same pool.

    Some of the reasons I use these options are less important than they were a decade ago. For example, doing a 20TB ZFS send before resumable sends were possible sucked: any little problem and you had to start over. Having more, smaller filesystems meant smaller sends. And yes, I was using ZFS before lz4 was even an option, when CPU was more precious, but I still don’t see a reason to waste CPU cycles when you can create a separate filesystem for your media and set compression to off on it.

    And most importantly, I want different snapshot policies for different data types. I don’t need years’ worth of retention for a movie collection, but I would like years’ worth of retention on my documents filesystem; it is relatively small, so the storage consumed to protect against accidental deletion is minimal.
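
    As a sketch of per-dataset snapshot policy, assuming a tool that honors the zfs-auto-snapshot property convention (dataset names hypothetical):

        zfs set com.sun:auto-snapshot=false tank/media             # no automatic snapshots for media
        zfs set com.sun:auto-snapshot:monthly=true tank/documents  # keep monthly snapshots of documents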


  • greyfox@lemmy.world to Selfhosted@lemmy.world · Raid Z2 help · 2 months ago

    Yeah it won’t make much difference these days.

    I suppose my point was more that, because ZFS is a pool that can be split up into filesystems, new users should think a little differently than they are used to with traditional RAID volumes/partitions.

    With a normal filesystem, partitions are extremely limiting: you have to know up front how much space you need for each partition. Because ZFS filesystems are just part of the pool, you get logical separation between data types without that kind of pre-planning.

    There are many settings in ZFS that you may want to vary between data types: compression, export settings, snapshot schedules, replication of particular datasets to other systems, quotas, etc.

    So I was mostly just saying “you should consider splitting those up so that you can adjust settings per filesystem that make sense”.

    There is also a bit of danger with a single ZFS filesystem if you have no snapshots. ZFS being a copy-on-write filesystem means that even deleting something requires space. It is a bit counterintuitive, but a delete means writing a new block first and then updating the filesystem to point at the new block. If you fill the pool to 100%, you can’t delete anything to free up space; your only options are to delete a snapshot or destroy an entire filesystem to free up blocks so you can clean up. If you don’t have a snapshot to delete, you have to destroy a whole filesystem, and if you only have one filesystem, you need to back up and delete everything… ask me how I know this ;)

    If you have several filesystems, you only need to back up and destroy the smallest one to get things moving again. Better yet, keep some snapshots around that you can roll off to free up space, or put quotas in place so that you can’t fill the pool entirely.
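
    One common safeguard, sketched with made-up names and sizes: keep a small reservation you can release in an emergency, and cap the bulk data with a quota:

        zfs create -o reservation=10G tank/slack   # emergency space; shrink or destroy it to recover
        zfs set quota=8T tank/media                # media can never fill the whole pool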



  • greyfox@lemmy.world to Selfhosted@lemmy.world · Raid Z2 help · 2 months ago

    You probably shouldn’t enable compression on the root filesystem of the pool. Since you mention movies/TV shows/music, that would just waste CPU cycles compressing incompressible data.

    Instead, consider separate ZFS filesystems for each data type. Since ZFS is a pool, you don’t have to pre-allocate space the way you do with partitions, so there is no harm in having a separate filesystem for each data type rather than a single large filesystem for everything. You can then turn on compression only for the filesystems that benefit from it.
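
    A minimal sketch (pool and dataset names are hypothetical):

        zfs create -o compression=off tank/media
        zfs create -o compression=lz4 tank/documents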

    Also remember that many permissions, like NFS export settings, are applied on a per-filesystem basis, so you should lay out your filesystems according to your data types and to the permissions you want to grant for each one.

    e.g. if you are going to run a Navidrome server to stream your music, you don’t want to give that server access to your entire pool, just to your music.

    Separate filesystems also mean you can have different snapshot schedules and retentions. Documents might need to be snapshotted more often, and kept longer, than media.



  • greyfox@lemmy.world to Linux@lemmy.ml · *Permanently Deleted* · 6 months ago

    +1 to this. There is lots of talk in this thread about drivers, but the only driver involved here is the Bluetooth driver. Half the point of Bluetooth is that peripherals don’t need their own drivers; they just provide standardized profiles that the Bluetooth service can consume from any device.

    I am not an expert in this area, but I believe most of those profiles are implemented in user space, so the proper place to be debugging is the Bluetooth service or PulseAudio. Start with your Bluetooth service logs; they might give you some idea of what is going on. Try to get a list of which profiles are supported by your OS and which by the device; maybe the device only supports some newer lossless profile that hasn’t been implemented in Linux yet.
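
    A couple of starting points on a typical BlueZ/systemd setup (the MAC address is a placeholder):

        journalctl -u bluetooth -f            # watch the Bluetooth service logs live
        bluetoothctl info AA:BB:CC:DD:EE:FF   # list the profile UUIDs the device advertises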




  • I always see this argument, but I really don’t want anything plugged into a port as important as the USB-C port while the phone is in my pocket.

    3.5mm plugs protrude very little from the phone (at least on headphones with 90° plugs), which minimizes the leverage put on the port. Being able to rotate also means less stress on the port.

    The USB-C adapters are pretty short but lack the rotation. I have replaced USB-C ports in dozens of Nintendo Switches and other devices; it is pretty clear they aren’t designed to take much stress.

    Long story short: if anything happens, I would much rather have a 3.5mm pin stuck in the headphone jack than a broken USB-C port that turns my phone into a brick.


  • I don’t use the webOS app, but generally default subtitle/audio languages are set on your profile, and the apps pick up those settings.

    Try logging in to the web interface and going to your user profile. There is a “Playback” section where you can set your preferred languages. If this isn’t set, it is likely taking the default language from your media files instead.


  • Unauthorized VPNs (not government approved) are illegal in China. If a business needs its own, it can get approval, but it has to apply for that exception.

    It isn’t really enforced, probably especially for non-citizens, but if you do something they don’t like, it is something they can use against you.

    You would probably be breaking the law less by opening up SSH directly and accessing that, instead of tunneling through a VPN, even though SSH can do tunneling of its own.
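
    For example, SSH’s own tunneling can stand in for a VPN (host and ports are placeholders):

        ssh -D 1080 user@home.example.com                 # SOCKS proxy through your home server
        ssh -L 8096:localhost:8096 user@home.example.com  # or forward a single service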


  • Your $1 has absolutely changed in value by 10pm. What do you think inflation is? It might not change enough for the store to bother updating prices, but the value changes constantly.

    Watch the foreign exchange markets: your $1 is constantly changing in value against every other currency.

    The only real differences between fiat and crypto are that changing the prices in a store is difficult, and that the volume of trade in dollars is high enough to dampen the volatility of your $. There are plenty of cases of hyperinflation in history where stores had to change prices on a daily basis, so fiat is not immune to volatility.

    To prevent that volatility we have things like the Federal Reserve, debt limits, and federal regulations, all designed to keep you, the investor (money holder), happy keeping that money in dollars instead of assets. The value is somewhat stable as long as the government is solvent.

    Crypto doesn’t have those external controls; instead it has internal controls, i.e. mining difficulty, which from a user perspective is better because the supply can’t be printed at will by a government.

    Long story short, fiat is no different from crypto: there is no real tangible value, so the value is whatever people think it is. Unfortunately, crypto’s value is driven more by speculative “investors” than by actual trade demand, which makes it more volatile. If enough of the world switched to crypto, it would be just as stable as your $.

    I’m not saying crypto is a good thing, just that it isn’t inherently better or worse. It needs daily usage for real trade by a large portion of the population to reduce the volatility, instead of just being used to gamble against the dollar.

    Our governments would likely never let that happen, though; they can’t give up their ability to print money. It’s far easier to keep getting elected when you can print the cash to operate the government than it is to raise taxes to pay for the things you need.

    The absolutely worthless meme-coin scams/forks/etc. are just scammers and gamblers trying to rip each other off, and they make any useful critical mass of trade less and less plausible because they give all crypto a bad name. Not that Bitcoin/Ethereum started out any differently, but now that enough people are using them, splitting the user base is just self-defeating.



  • Nope, the Switch only keeps saves on internal storage, or synced to their cloud if you pay for it. When doing transfers between devices like this, there is no copy option, only move and delete.

    There are some legitimate reasons they want to prevent this, like stopping users from duplicating items in multiplayer games. Even if you got access to the files, they are encrypted so that only your user can use them.

    I think the bigger reason is that exploits are occasionally delivered through corrupted saves, so preventing users from importing their own saves helps protect the Switch from getting soft-modded.

    If you mod your Switch you can get access to the save files, and since the mod has full access it can also decrypt them so you can back them up. One of several legitimate reasons to mod your Switch.



  • Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

    Say you deployed two different docker compose apps, each with its own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).

    This also facilitates easier cleanup. The app’s documentation can say “docker compose down -v” and be done with it, instead of listing a bunch of directories that need to be cleaned up.

    Those lingering directories can also cause problems for users who wanted a clean start when their app broke: with a bind mount, the broken database schema won’t have been deleted for them when they start the services back up.
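
    To illustrate the two styles (the image, names, and paths are arbitrary):

        # Named volume: Docker manages the storage under /var/lib/docker/volumes
        docker run -d --name db1 -e MARIADB_ROOT_PASSWORD=example -v app1_db:/var/lib/mysql mariadb

        # Bind mount: you pick the host path, and cleanup never deletes it for you
        docker run -d --name db2 -e MARIADB_ROOT_PASSWORD=example -v /srv/app2/db:/var/lib/mysql mariadb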

    All that said, I very much agree that when you go to deploy a docker service you should consider changing the named volumes to standard bind mounts, for a few reasons:

    • When running production applications I don’t want the volumes to be quite so easy to clean up; a little extra protection from accidental deletion is handy.

    • The default location for named volumes doesn’t work well with advanced partitioning strategies, e.g. if you want your database volume on a different partition than your static web content.

    • An older reason, and maybe more a matter of preference at this point: back before the Docker overlay2 storage driver had matured, we used the btrfs driver instead, and occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem, so I personally want to keep anything persistent out of that directory.

    So basically, application writers should use named volumes to simplify the documentation, installation, maintenance, and cleanup of their applications.

    Systems administrators running those applications should understand the docker compose file well enough to change those settings and make them production-ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.


  • For shared lines like cable and wireless, the split is often asymmetrical so that everyone gets better speeds, not so the provider can hold you back.

    For wireless service providers, for instance, let’s say you have 20 customers on a single access point. Like a walkie-talkie, a radio can’t transmit and receive at the same time, and no two customers can transmit at the same time either.

    To get around this problem, TDMA (time-division multiple access) is used: time is split into slices, and each user is given a certain percentage of those slices.

    Since the AP is transmitting to everyone, it usually gets the bulk of the slices, 60+%. That is the shared download speed for everyone on the network.

    Most users don’t upload much, so giving the user radios slices equal to the AP’s would be a massive waste of air time. And since there are 20 customers on this theoretical AP, every 1Mbit cut from each user’s upload speed adds 20Mbit of total download capacity for anyone downloading on that AP.

    So let’s say we have an AP and clients capable of 1000Mbit. With 20 users and 1 AP, symmetrical speeds would need 40 equal slots: 20 slots on the AP, one per user, for download, and 1 slot per user for upload. Every user gets 25Mbit download and 25Mbit upload.

    Contrast that with asymmetrical: say we use an 80/20 AP/client airtime split. We end up with 800Mbit of download shared among everyone and 10Mbit of upload per user.

    In the worst-case scenario, every user is downloading at the same time, so you get about 40Mbit of that 800, still quite an improvement over 25Mbit; and if some of those people aren’t home or aren’t active at the time, that means that much more for those who are.
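
    A quick sanity check of that arithmetic in shell:

        echo $(( 1000 / 40 ))             # symmetrical: 40 slots -> 25 Mbit each way per user
        echo $(( 1000 * 80 / 100 ))       # 80% AP airtime -> 800 Mbit shared download
        echo $(( 1000 * 20 / 100 / 20 ))  # 20% split across 20 clients -> 10 Mbit upload each
        echo $(( 800 / 20 ))              # worst case, all 20 downloading -> 40 Mbit each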

    I think the size of the slices is more dynamic on modern systems, where the AP adjusts the user radios’ slices on the fly so that idle clients don’t hold a bunch of dead air, though each client still needs a little time allocated for when data does start to flow.

    A quick Google shows that DOCSIS cable modems use TDMA as well, so all of this likely applies to cable users too.



  • I am assuming this is the LVM volume that Ubuntu creates if you selected the LVM option when installing.

    Think of LVM as a simpler, more flexible version of RAID0. It isn’t there to offer redundancy, but it can aggregate multiple disks’ storage and performance into a single block device. It doesn’t have all of the performance benefits of RAID0, particularly for sequential reads, but in the case of a file server with multiple active users it can probably perform even better than a RAID0 volume would.

    The first thing to do is look at what volume groups you have. A volume group is one or more drives that form a pool of storage from which we can allocate space to create logical volumes. Run vgdisplay and you will get a summary of all of your volume groups. If you see a lot of storage available on the “Free PE / Size” line (PE means physical extents), you have storage in the pool that hasn’t been allocated to a logical volume yet.

    If you have a set of OS disks and a separate set of storage disks, it is probably a good idea to create a separate volume group for the storage disks instead of combining them with the OS disks. This keeps the OS and your storage separate, making it easier to do things like rebuild the OS or migrate to new hardware. If you have enough storage to keep your data volumes separate, consider ZFS or btrfs for those volumes instead of LVM; ZFS/btrfs have a lot of extra features that can protect your data.

    If you don’t have free space, you might be missing additional drives that you wanted added to the pool. You can list all of the physical volumes that have been formatted for use with LVM by running the pvs command, which shows each formatted drive and whether it is associated with a volume group. If you have additional drives that you want to add to your volume group, run pvcreate /dev/yourvolume to format them.

    Once the new drives have been formatted, they need to be added to the volume group. Run vgextend volumegroupname /dev/yourvolume to add the new physical device to your volume group, then re-run vgdisplay and verify that the new physical extents have been added.

    If you are looking for redundancy in this storage, you would usually build an mdadm array and then run pvcreate on the volume created by mdadm. LVM is usually not used to provide redundancy; other tools are better for that. Typically LVM is used for pooling storage, snapshots, carving multiple volumes out of a large device, etc.

    So one way or another your additional space should now be in the volume group; however, that doesn’t make it usable by the OS yet. On top of the volume group we create logical volumes, which are virtual block devices made up of physical extents on the physical disks. If you run lvdisplay you will see a list of the logical volumes created by the Ubuntu installer, which is probably only one by default.

    You can create new logical volumes with the lvcreate command, or resize the volume that is already there with lvresize. I see other posts have already explained those commands in more detail.

    Once you have extended the logical volume (the virtual block device), you have to extend the filesystem on top of it. That procedure depends on which filesystem you are using on your logical volume: likely resize2fs for ext4 (Ubuntu’s default), or xfs_growfs if you are on XFS.
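
    Putting it all together, a hedged end-to-end example (the device name is a placeholder; ubuntu-vg/ubuntu-lv are the usual Ubuntu installer defaults, but check yours with vgdisplay/lvdisplay):

        pvcreate /dev/sdb                               # format the new disk for LVM
        vgextend ubuntu-vg /dev/sdb                     # add it to the existing volume group
        lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv  # grow the logical volume into the free space
        resize2fs /dev/ubuntu-vg/ubuntu-lv              # grow the ext4 filesystem (xfs_growfs for XFS)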