

- Sometimes you are using language features your team is unfamiliar with.
Had this happen before with pattern matching.
Because you created a first draft. Your first draft should include all that info. It isn’t writing the whole doc for you lol, just making minor edits to turn it from notes into prose.
Without that? No clue, good luck. They can usually read source files to put something together, but that’s unreliable.
This would infuriate me to no end. It’s literally the definition of a data race. All data between threads needs to either be accessed through synchronization primitives (mutexes, atomic access, etc) or needs to be immutable. For the most part, this should include fds, though concurrent writes to stderr might be less of an issue (still a good idea to lock/buffer it and stdout though to avoid garbled output).
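A minimal sketch of that rule in safe Rust (the function name and counts are invented for illustration): shared mutable state goes behind a `Mutex`, and the compiler refuses the unsynchronized version outright.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n_threads` threads that each add `per_thread` to a shared counter.
// The counter is mutable data crossing threads, so it lives behind a Mutex.
fn locked_count(n_threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Lock before every write; the guard unlocks when dropped.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Without the Mutex this would be a data race
    // (and safe Rust would refuse to compile it).
    assert_eq!(locked_count(4, 1000), 4000);
}
```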
The main value I found from Copilot in vscode back when it first released was its ability to recognize and continue patterns in code (like in assets, or where you might have a bunch of similar but slightly different fields in a type that are all initialized mostly the same).
I don’t use it anymore though because I found the suggestions to be annoying and distracting most of the time and got tired of hitting escape. It also got in the way of standard intellisense when all I needed was to fill in a method name. It took my focus away from thinking about the code because it would generate plausible looking lines of code and my thinking would get pulled in that direction as a result.
With “agents” (whatever that term means these days), the article describes my feelings exactly. I spend the same amount of time verifying a solution as I would just creating the solution myself. The difference is I fully understand my own code, but I can’t reach that same understanding of generated code as fast because I didn’t think about writing it or how that code will solve my problem.
Also, asking an LLM about the generated code is about as reliable as you’d expect on average, and I need it to be 100% reliable (or extremely close) if I’m going to use it to explain anything to me at all.
Where I found these “agents” to be the most useful is expanding on documentation (markdown files and such). Create a first draft and ask it to clean it up. It still takes effort to review that it didn’t start BSing something, but as long as what it generates is small and it’s just editing an existing file, it’s usually not too bad.
This depends. Many languages support one-liner aliases, whether that’s `using`/`typedef` in C++, `type` in Rust, Python, and TS, etc.
In other languages, it may be more difficult and not worth it, though this particular example should just use a duration type instead.
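A rough Rust sketch of both options, for illustration (`TimeoutMs` and `connect_timeout` are made-up names): the alias is only a new spelling for the same type, while a real duration type carries its unit with it.

```rust
use std::time::Duration;

// One-liner alias: cheap to add, but it is only a new name, not a new type.
type TimeoutMs = u64;

// Better for this particular case: a duration type makes the unit explicit.
fn connect_timeout() -> Duration {
    Duration::from_millis(250)
}

fn main() {
    let raw: TimeoutMs = 250; // still just a u64 under the alias
    assert_eq!(Duration::from_millis(raw), connect_timeout());
}
```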
Ah yes, one of the major questions of software development: to comment, or not to comment? This is almost as big of a question as tabs vs spaces at this point.
Personally? I don’t really care. Make the code readable to whoever needs to be able to read it. If you’re working on a team, set the standard with your team. No answer is the universally correct one, nor is any answer going to be eternally the correct one.
Regardless of whether code comments should or shouldn’t exist, I’m of the opinion that doc comments should exist for functions at the very minimum. Describe preconditions, postconditions, the expected parameters (and types if needed), etc. I hate seeing an undocumented `**kwargs` in a function, and I’ll almost always block a PR on my team if I see one where the valid arguments there are not blatantly obvious from context.
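A hypothetical example of the kind of doc comment I mean, in Rust’s `///` style (the function itself is invented):

```rust
/// Splits a byte budget evenly across `n` workers.
///
/// # Preconditions
/// * `n` must be non-zero.
///
/// # Returns
/// The per-worker share, rounded down; remainder bytes are dropped.
fn share_per_worker(budget_bytes: u64, n: u64) -> u64 {
    assert!(n != 0, "n must be non-zero");
    budget_bytes / n
}

fn main() {
    assert_eq!(share_per_worker(1024, 4), 256);
}
```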
DDR5-6000/CL28 should be fine. Make sure to enable the XMP/EXPO profile in your BIOS after installing it.
You can follow hardware reviewers like GamersNexus, LTT, HardwareUnboxed, etc if you want to stay up to date (which is what I do), or look at their content if you just want a review for a product you’re looking at.
It’s less of an issue now, but there were stability issues in the early days of DDR5. Memory instability can lead to a number of issues including being unable to boot the PC (failing to post), the PC crashing suddenly during use, applications crashing or behaving strangely, etc. Usually it’s a sign of memory going bad, but for DDR5 since it’s still relatively young it can also be a sign that the memory is just too fast.
Always check and verify that the RAM manufacturer has validated their RAM against your CPU.
Air cooling is sufficient to cool most consumer processors these days. Make sure to get a good cooler though. I remember Thermalright’s Peerless Assassin being well reviewed, but there may be even better (reasonably priced) options these days.
If you don’t care about price, Noctua’s air coolers are overkill but expensive, or an AIO could be an option too.
AIOs have the benefit of moving heat directly to your fans via fluid instead of heating up the case interior, but that usually doesn’t matter that much, especially outside of intense gaming.
Very few things need 64GB of memory to compile, but some do. If you think you’ll be compiling web browsers or clang or something, then 64GB would be the right call.
Also, higher speeds of DDR5 can be unstable at higher capacities. If you’re going with 64GB or more of DDR5, I’d stick to speeds around 6000 (or less) and not focus too much on overclocking it. If you get a kit of 2x32GB (which you should rather than getting the sticks independently), then you’ll be fine. You won’t benefit as much from RAM speed anyway as opposed to capacity.
This is why claims about security should always be backed by an audit. Someone will inevitably (rightfully) tear you a new hole if there are any gaps.
Let chains are finally stable! Yay! Thanks everyone who made that happen.
uv’s workspaces work, yep. It’s honestly great. Haven’t really run into any issues with them yet.
Python can also be used for large codebases (thanks, uv), but I agree that Rust is better suited to the job.
I agree here. I always find it difficult to navigate a Go codebase, especially when public members just seem to magically exist as opposed to being explicitly imported.
Quoting OpenAI:
Our goal is to make the software pieces as efficient as possible and there were a few areas we wanted to improve:
- Zero-dependency Install — currently Node v22+ is required, which is frustrating or a blocker for some users
- Native Security Bindings — surprise! we already ship Rust for linux sandboxing since the bindings were available
- Optimized Performance — no runtime garbage collection, resulting in lower memory consumption
- Extensible Protocol — we’ve been working on a “wire protocol” for Codex CLI to allow developers to extend the agent in different languages (including TypeScript/JavaScript, Python, etc) and MCPs (already supported in Rust)
Now to be fair, these dashes scream “LLM generated” throughout their entire post. Regardless, if these really are their reasons:
As for the difficulty in making a CLI, clap makes CLIs dead simple to build with its derive macro. To be clear, other languages can be just as easy (Python has a ton of libraries for this, for example, including argparse and Typer).
Personally, if I were to choose a language for them, it’d be Python, not Go. It would have the most overlap with their users and could attract a lot more contributors as a result, in my opinion. Go, on the other hand, may be a language their devs are less familiar with or don’t use as much as Rust or other languages.
Rust does not check arrays at compile time if it cannot know the index at compile time, for example in this code:
fn get_item(arr: [i32; 10]) -> i32 {
    let idx = get_from_user(); // a usize only known at runtime
    arr[idx] // runtime bounds check; panics if idx >= 10
}
When it can know the index at compile time, it omits the bounds check, and iterators are an example of that. But Rust cannot always omit a bounds check. Doing so could lead to a buffer overflow/underflow, which violates Rust’s rules for safe code.
Edit: I should also add that the compiler makes optimizations around slices and vectors at compile time if it statically knows their sizes. Blanket statements here around how it optimizes will almost always be incorrect - it’s smarter than you think, but not as smart as you think at the same time.
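For contrast with the snippet above, here is a sketch of two forms where the check is either unnecessary by construction or surfaced explicitly (this is illustrative only; it is not a claim about which optimizations fire in any given build):

```rust
// Iterating never produces an out-of-range index, so no per-element
// bounds check is needed.
fn sum_all(arr: &[i32; 10]) -> i32 {
    arr.iter().sum()
}

// Explicitly checked access: returns None instead of panicking on a bad index.
fn get_checked(arr: &[i32; 10], idx: usize) -> Option<i32> {
    arr.get(idx).copied()
}

fn main() {
    let arr = [2i32; 10];
    assert_eq!(sum_all(&arr), 20);
    assert_eq!(get_checked(&arr, 3), Some(2));
    assert_eq!(get_checked(&arr, 99), None);
}
```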
Rust’s memory safety guarantees only work for Rust due to its type system, but another language could also make the same guarantees with a higher runtime cost. For example, a theoretical Python without a GIL (so 3.13ish) that also treated all mutable non-thread-local values as reentrant locks and required you to lock on them before read or write would be able to make the same kinds of guarantees. Similarly, a Python that disallowed coroutines and threading and only supported multiprocessing could offer similar guarantees.
Are you suggesting that Rust can perform compile time array bounds checking for all code that uses arrays?
I’ll answer this question: no.
It does, however, make some optimizations around iterators and elide unnecessary bounds checks written in code where it can.
And yes, it does runtime bounds checking where necessary.
The distribution is super important here too. Hashing any value to zero (or `h(x) = 0`) is valid, but a terrible distribution. The challenge is getting real-world values hashed in a mostly uniform distribution to avoid collisions where possible.
Still, the contents of the article are useful even outside of hashing. It should just disclaim that the width of the output isn’t the only thing important in a hash function.
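A toy illustration of the point (neither function is a real hash; both are sketches): `h(x) = 0` is a perfectly valid hash, but every key collides into the same bucket, while even a crude multiplicative mix spreads keys around.

```rust
use std::collections::HashSet;

// A deliberately terrible hash: every key lands in bucket 0.
fn h_zero(_x: u64) -> u64 {
    0
}

// A crude toy mix (an illustration, not a production hash function).
fn h_mix(x: u64) -> u64 {
    x.wrapping_mul(0x9E3779B97F4A7C15)
}

fn bucket(hash: u64, n_buckets: u64) -> u64 {
    hash % n_buckets
}

fn main() {
    let n = 8;
    // h(x) = 0 is valid, but all 100 keys collide into bucket 0:
    assert!((0..100u64).all(|x| bucket(h_zero(x), n) == 0));
    // the mix spreads consecutive keys across more than one bucket:
    let used: HashSet<u64> = (0..100u64).map(|x| bucket(h_mix(x), n)).collect();
    assert!(used.len() > 1);
}
```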