i’m lizard

  • 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 21st, 2024


  • These containers are/were for self-hosting. VMware previously owned Bitnami; it was their attempt to make self-hosting easier than paying a cloud provider. That directly benefited VMware, because its money came from businesses that self-host, plus self-hosters who grew up learning on free homelab ESXi and wanted to apply that at work. It helps a lot if there are well-maintained solutions for deploying popular stuff.

    Then Broadcom bought VMware for a ridiculous price and is doing none of that.


  • Tends to change by playthrough, but 50% water scale / 150% water coverage / ~200% resource frequency+size+richness is my go-to. It creates lots of natural chokepoints, the available resources end up feeling similar to default map settings, and you get enough area for a reasonable bus-style starter base while eventually being pushed towards a more train-spaghetti playstyle.


  • “scripts mix configuration with logic and this was a big reason why a lot of distributions switched to systemd in the first place”

    What was really wrong with the old BSD-style rc/init systems is that they mixed configuration with the logic required to start/stop the service at all, and that that logic ran in the same session it was executed from (inheriting the environment, FDs and the like). These daemontools-style supervisors don’t have that problem: the run script is essentially just systemd’s ExecStart=, and it gets forked off from the supervisor itself and is then managed by it. Lots of them are just #!/bin/sh \n exec coolservice.

    There are plenty of things systemd does well that this doesn’t do (dependency management in particular seems to be sorely lacking), but this kind of approach is much closer to systemd than the old rc scripts were.
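
    To make that concrete, here’s roughly what such a run script looks like under runit; “coolservice” and its flag are placeholders, and the logging line varies by supervisor:

      #!/bin/sh
      # e.g. /etc/sv/coolservice/run. The supervisor forks this script itself,
      # so it doesn’t inherit an admin shell’s session, environment or FDs.
      exec 2>&1                      # hand stderr to the supervisor’s log pipe too
      exec coolservice --foreground  # exec: the service process replaces the shell

    Because of the exec, the supervisor ends up as the direct parent of the real service process, which is what makes this the moral equivalent of ExecStart=.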


  • All true, wanted to add on to this:

    “Note that smart peeps say that the docker socket is not safe as read-only.”

    That’s true, and it’s not just something mildly imperfect: read-only straight up does nothing. For connecting to a socket, Linux ignores the read-only mount state and only checks write permission on the socket itself; read-only would only make it impossible to create a new socket there. Once you have a connection, that connection can write anything it wants. And Traefik and other “read-only” consumers still have to send GET requests for the data they need, so even the legitimate use cases are writing requests to the socket.

    If you really need a “GET-only” Docker socket, it has to be done with some other kind of mechanism, and frankly the options aren’t very good. Docker has authorization plugins that seem like too much of a headache to set up, and proxies don’t seem very good to me either.

    Or, TL;DR: :ro or stripping off permission bits does nothing except potentially break all uses of the socket. If something can connect at all, it’s root-equivalent (or has all privileges of your rootless user) unless you took other steps. That may or may not be a massive problem for your setup, but it’s something you should know when doing it.
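
    A quick sketch of why :ro doesn’t help; the container id is a placeholder and the API paths are the standard Docker Engine API ones:

      # Mount the socket read-only, then send a state-changing request through it
      # from inside a container (curlimages/curl’s entrypoint is curl):
      docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
        curlimages/curl --unix-socket /var/run/docker.sock \
        -X POST http://localhost/containers/<id>/kill
      # The POST still goes through: connect() only checks write permission on the
      # socket inode, not the mount flags, and a connected client can send any
      # request. :ro only stops you from creating a new socket file at that path.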


  • Then it can’t be booted with new media. Microsoft has been very, very slow with the automatic rollout of their own key updates, and made just about no progress over the past two years. It’s been manual updates + newly produced systems only.

    The trick here is that they have a key exchange key (KEK) that can be used to update the other keys, and that one doesn’t expire (or rather, not in a meaningful way). But a Windows image is still only going to boot on a system that trusts the key it was signed with. If you make a Windows image on a 2011-key system now, it’s going to be signed with the 2011 key and won’t boot on a system that distrusts that key. The same is true in reverse.

    Their key update documentation is all available, and some enterprises have been on the new key for a while, but it’s a lot of manual work and a lot of problems have popped up, most of them documented in there. How they’re going to roll this out automatically to normal users isn’t obvious to me. There’s technically nothing stopping a system from trusting both the 2011 and 2023 keys, and I wouldn’t be entirely surprised if they end up never pushing the 2011 revocation.

    The keys they use for their own OS don’t truly expire until late 2026, and I expect they’ll do their best to delay it until then, but the next time they have to update their boot manager is going to be painful and introduce all kinds of new problems.
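
    If you want to see where a given machine stands, the firmware’s trusted-certificate database can be dumped from Linux. A sketch using mokutil; the cert names are the well-known Microsoft ones, and a system can legitimately trust both at once:

      # Dump the UEFI signature database (db) and look for Microsoft’s certs:
      mokutil --db | grep -E 'Microsoft Windows Production PCA 2011|Windows UEFI CA 2023'
      # An actual revocation of the 2011 key would show up in the forbidden
      # signature database (dbx) instead:
      mokutil --dbx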


  • PUID is indeed handled inside the container itself: the container first runs a container-provided script as whatever its UID 0 happens to be, and that script then drops to whatever $PUID is inside the container. user= is enforced by Podman itself before the container starts, but Podman will still run as root in that setup. That means Podman is running “rootful”, while if you started the container manually as $uid using the regular Podman CLI, it would be “rootless”. That’s a major difference in a lot of respects, including security, and you can find quite a bit of documentation on the differences between those operating modes online; more than would fit in a comment. Rootless is generally considered the better mode, though there are some things that still require a rootful container.
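
    A minimal sketch of that distinction on the CLI (image and uid are placeholders):

      sudo podman run --rm --user 1000 docker.io/library/alpine id  # rootful: Podman runs as root, then starts the container as uid 1000
      podman run --rm docker.io/library/alpine id                   # rootless: Podman itself runs as your own uid
      # Ask Podman which mode the current invocation would use:
      podman info --format '{{.Host.Security.Rootless}}'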

    In the upcoming NixOS 25.05 or current unstable, there are some tools that make it easier to run containers rootless as another user, via a new $name.podman.user = ""; setting. From what I understand they’ll still be root-managed systemd system services that require sudo to operate, but privileges get dropped by systemd before running Podman, instead of by Podman before running the container. This stuff is recent and I haven’t used it, I just happen to know it exists; relevant nixpkgs commit if you wanna dig into it yourself: https://github.com/NixOS/nixpkgs/commit/7d443d378b07ad55686e9ba68faf16802c030025
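
    Going by that commit, the wiring would look roughly like the Nix below; an unverified sketch (container name, image and user are made up), so check the 25.05 module docs before relying on it:

      virtualisation.oci-containers = {
        backend = "podman";
        containers.mycontainer = {
          image = "docker.io/library/nginx:latest";
          # New option: systemd drops to this user before Podman starts, so the
          # container itself runs rootless as that user.
          podman.user = "myuser";
        };
      };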