  • True, but there are also some legitimate applications for 100s of gigabytes of RAM. I’ve been working on a thing for processing historical OpenStreetMap data, and it is several orders of magnitude faster to fill the database by loading the ~300GiB of point data into memory, sorting it there, and then partitioning and compressing it into pre-sorted table files which RocksDB can ingest directly without additional processing. I had to get 24x16GiB (384GiB) of RAM to do that, though.
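
    To illustrate, the ingestion step looks roughly like this with RocksDB’s external-SST-file API. This is a minimal sketch; the paths and the key/value layout are made-up placeholders, not my actual schema:

    ```cpp
    #include <string>
    #include <utility>
    #include <vector>

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <rocksdb/sst_file_writer.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;

      // Step 1: write an already-sorted run of point data into an SST file.
      // SstFileWriter requires keys in strictly ascending order, which is
      // why sorting everything in memory first pays off.
      rocksdb::SstFileWriter writer(rocksdb::EnvOptions(), options);
      if (!writer.Open("/tmp/points-000.sst").ok()) return 1;  // placeholder path

      // Placeholder key/value layout standing in for the real point data.
      std::vector<std::pair<std::string, std::string>> sorted_points = {
          {"node/00000001", "lat=48.8566,lon=2.3522"},
          {"node/00000002", "lat=52.5200,lon=13.4050"},
      };
      for (const auto& [key, value] : sorted_points) {
        if (!writer.Put(key, value).ok()) return 1;
      }
      if (!writer.Finish().ok()) return 1;

      // Step 2: ingest the finished file. RocksDB links it into the LSM tree
      // directly, skipping the memtable/WAL/compaction work that the same
      // data written via ordinary Put() calls would have to go through.
      rocksdb::DB* db = nullptr;
      if (!rocksdb::DB::Open(options, "/tmp/osm-db", &db).ok()) return 1;
      rocksdb::IngestExternalFileOptions ifo;
      if (!db->IngestExternalFile({"/tmp/points-000.sst"}, ifo).ok()) return 1;
      delete db;
    }
    ```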


  • In my experience, nouveau is painfully slow and crashes so often that it is virtually unusable for anything. The developers agree: over the last couple of months, the nouveau OpenGL driver has been phased out of Mesa entirely. Recent Mesa versions instead implement OpenGL on Nvidia using Zink on top of NVK, and the result is quite a bit faster and FAR more stable.

    If your distribution still ships a Mesa version that uses nouveau, I would personally recommend sticking with the Intel graphics for now.


  • Aside from checking the kernel log (sudo dmesg) and the system log (sudo journalctl -xe) for any interesting messages, I’d suggest watching for processes whose resource usage is abnormally high while the system is running slow. My initial approach would be to run htop (disable “Hide Kernel Threads” and enable “Detailed CPU Time”) and see which processes, if any, are eating up your CPU time. The colored core-utilization bars at the top show how the CPU time is being spent: gray for disk wait, red for kernel, green for regular user processes, etc. That information will be a good starting point.
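
    If you’re curious where those bar colors come from, here is a rough sketch (Linux-only; it reads the aggregate “cpu” line of /proc/stat, which holds the same counters htop derives them from):

    ```cpp
    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <thread>

    // Cumulative CPU jiffies from the first line of /proc/stat, in the order
    // the kernel reports them: user nice system idle iowait irq softirq steal.
    struct CpuTimes {
      unsigned long long user = 0, nice = 0, system = 0, idle = 0;
      unsigned long long iowait = 0, irq = 0, softirq = 0, steal = 0;
    };

    CpuTimes read_cpu_times() {
      std::ifstream stat("/proc/stat");
      std::string label;  // "cpu", the all-cores aggregate line
      CpuTimes t;
      stat >> label >> t.user >> t.nice >> t.system >> t.idle >> t.iowait >>
          t.irq >> t.softirq >> t.steal;
      return t;
    }

    int main() {
      // Sample twice and diff, since the counters only ever accumulate.
      CpuTimes a = read_cpu_times();
      std::this_thread::sleep_for(std::chrono::seconds(1));
      CpuTimes b = read_cpu_times();

      double total = double((b.user - a.user) + (b.nice - a.nice) +
                            (b.system - a.system) + (b.idle - a.idle) +
                            (b.iowait - a.iowait) + (b.irq - a.irq) +
                            (b.softirq - a.softirq) + (b.steal - a.steal));

      // Roughly the same categories as htop's bar colors:
      // green = user, red = kernel (system), gray = disk wait (iowait).
      std::cout << "user:   " << 100.0 * (b.user - a.user) / total << "%\n"
                << "kernel: " << 100.0 * (b.system - a.system) / total << "%\n"
                << "iowait: " << 100.0 * (b.iowait - a.iowait) / total << "%\n";
    }
    ```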


  • Again, that would be TIFF. TIFF images can be encoded either as strips (one or more rows each) or as rectangular tiles, with each strip or tile compressed separately, and independently compressed blocks can be read and decompressed in parallel. I have some >100GiB TIFFs containing elevation maps for entire countries, and my very old laptop can happily zoom and pan around in them with virtually no delay.
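
    To illustrate why that layout parallelizes so well, here is a rough sketch using libtiff (the file name and tile index are placeholders): each tile is an independent compressed block, so decoding one region never touches the rest of the file, and different tiles can be decoded on different threads.

    ```cpp
    #include <cstdint>
    #include <iostream>
    #include <vector>

    #include <tiffio.h>  // libtiff

    int main() {
      // Hypothetical tiled TIFF; large elevation datasets are often stored this way.
      TIFF* tif = TIFFOpen("elevation.tif", "r");
      if (!tif || !TIFFIsTiled(tif)) return 1;

      uint32_t tile_width = 0, tile_height = 0;
      TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tile_width);
      TIFFGetField(tif, TIFFTAG_TILELENGTH, &tile_height);

      // Decode only the tiles covering the current viewport instead of the
      // whole image. Because each tile is compressed independently, this
      // read does not depend on any other part of the file.
      std::vector<uint8_t> buf(TIFFTileSize(tif));
      ttile_t tile_index = 0;  // placeholder: the tile under the viewport
      if (TIFFReadEncodedTile(tif, tile_index, buf.data(), buf.size()) < 0) {
        std::cerr << "tile decode failed\n";
      }

      TIFFClose(tif);
    }
    ```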


  • I have tried hosting a Tor relay on a VPS in the past, and it was bottlenecked by the CPU at barely 20MB/s, although to be fair that was without hardware AES. More importantly for you, the server’s IP started getting DDoSed constantly, and a whole bunch of big internet services immediately blocked the address (the list of relay IPs is public, and many services simply block every address on that list instead of only exit nodes). So any of your machines is probably at least somewhat up to the task (ideally one with hardware AES support), but this is definitely not something I’d run on my home network.
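
    If you want to check a machine for hardware AES from code, here is a tiny sketch using the GCC/Clang x86 feature-detection builtin (on Linux you can also just look for the aes flag in /proc/cpuinfo):

    ```cpp
    #include <iostream>

    int main() {
      // GCC/Clang builtin for x86 CPU feature detection; "aes" means AES-NI.
      __builtin_cpu_init();
      if (__builtin_cpu_supports("aes")) {
        std::cout << "AES-NI available: relay crypto is unlikely to bottleneck on AES\n";
      } else {
        std::cout << "No hardware AES: expect the CPU to cap relay throughput\n";
      }
    }
    ```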