• 0 Posts
  • 401 Comments
Joined 7 months ago
Cake day: October 16th, 2025


  • It’s comparable because it’s a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily - but the negative outcome, and the ease of checking, varies vastly across the space of all possible information.

    I am encouraging you to think about situations where the negative outcome is not that bad, and the ease of checking quite high. Does that make using AI more practical?



  • If you can’t know if it’s right or wrong, and have to double check it, why use it in the first place?

    My partner and I alternate doing the cooking. She doesn’t know whether I’m going to make a mistake and serve her something she doesn’t like (it has happened). Does that mean she’d be better off doing all the cooking herself?

    “If it’s not perfect, it’s useless” is a fallacy. So the question is, how good does it have to be to be useful? That depends on the task, and especially on the cost (however you measure it - dollars or hours or whatever) of verifying whether the result is good compared to the cost of a person doing the task.


  • That’s for laptop keyboards - I don’t see any spare parts there for membrane keyboards. I said “rubber dome” but I guess that was ambiguous; I meant the PC keyboards where there’s a single moulded rubber sheet inside that forms the switch and “spring”. I was not able to quickly find anywhere offering spare membranes for ordinary keyboards, but I’m pretty sure they’ll be more than 5 euro :P

  • Well I just had to work it out again myself and you’re right. I dunno what scenario I was thinking of that had worse complexity and whether it was really due to dynamic arrays; I just remember getting asked about it in some interview and somehow the answer ended up being “use a linked list and the time complexity goes down to linear” /shrug

    Thanks for the correction!
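A quick sketch of the amortized argument (my own illustration, not from the thread): simulate a dynamic array that doubles its capacity when full, and count how many element copies the resizes cost in total.

```python
def copies_for_appends(n):
    """Return the total element copies performed while appending n items
    to an array that doubles its capacity when full (capacity starts at 1)."""
    capacity = 1
    size = 0
    copies = 0
    for _ in range(n):
        if size == capacity:
            copies += size      # resize: copy every existing element over
            capacity *= 2
        size += 1
    return copies

# The total copy count stays below 2*n for any n (1 + 2 + 4 + ... < 2n),
# so building the array is O(N) amortized, not O(N^2).
for n in (10, 1000, 100000):
    assert copies_for_appends(n) < 2 * n
```

That geometric doubling is the trick: each resize is expensive, but resizes get exponentially rarer, so the total work stays linear.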


  • Yes, but dynamic resize typically means copying all of the old data to the new destination, whereas a linked list does not need to do this. The time complexity of reading a large quantity of data into a linked list is O(N), but reading it into an array can end up being O(N^2) or at best O(N log N).

    You can make the elements of your list big chunks of data, so you don’t pay much of a cache-performance penalty.

    I thought of another good example situation: a text buffer for an editor. If you use an array, then on large documents inserting a character at the beginning of the document requires you to rewrite the rest of the array, every single character, to move everything up. If you use a linked list of chunks, you can cap the amount of rewriting you need to do at the size of a single chunk.
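To make the chunk idea concrete, here is a hypothetical sketch (the class names and chunk size are my own, not taken from any real editor) of a text buffer stored as a linked list of chunks, where an insert rewrites at most one chunk and splits it if it grows too large:

```python
class Chunk:
    """One node in the buffer: a small string plus a link to the next chunk."""
    def __init__(self, text):
        self.text = text
        self.next = None

class ChunkBuffer:
    CHUNK_SIZE = 64  # cap on rewriting: at most one chunk is ever rebuilt

    def __init__(self, text=""):
        self.head = Chunk(text)

    def insert(self, pos, s):
        """Insert string s at absolute position pos.

        Only the chunk containing pos is rewritten; if it grows past
        CHUNK_SIZE it is split in two, but no other chunk is touched.
        """
        node = self.head
        while pos > len(node.text) and node.next is not None:
            pos -= len(node.text)
            node = node.next
        node.text = node.text[:pos] + s + node.text[pos:]
        if len(node.text) > self.CHUNK_SIZE:      # split the oversized chunk
            mid = len(node.text) // 2
            tail = Chunk(node.text[mid:])
            tail.next = node.next
            node.text = node.text[:mid]
            node.next = tail

    def text(self):
        """Reassemble the full document by walking the chunk list."""
        parts, node = [], self.head
        while node is not None:
            parts.append(node.text)
            node = node.next
        return "".join(parts)
```

A real editor would likely use something more refined (a rope or a gap buffer), but the sketch shows the point: inserting at the start of a huge document rewrites one chunk, not the whole array.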