• 0 Posts
  • 151 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • This is super cool! I love seeing these new implementations of JS. Boa is another JS engine written in Rust.

    I’m curious how easy it is to embed this. Can I use it from another Rust project? Can I customize module loading behavior, set limits on CPU or memory usage, or intercept network calls? Can I use it from a non-Rust project? Or is this intended to be a standalone JS runtime called from the CLI? I’ve been looking at Boa as a JS engine for one of my projects, but I’m open to checking out brimstone too if it’ll work.
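    For anyone else evaluating Boa for embedding, the basic loop is pretty small. A sketch, assuming a recent boa_engine release (the exact API has shifted between versions):

    ```rust
    use boa_engine::{Context, Source};

    fn main() {
        // Create a fresh JS context and evaluate a small script.
        let mut context = Context::default();
        let value = context
            .eval(Source::from_bytes("1 + 2"))
            .expect("evaluation failed");

        // display() gives a human-readable rendering of the JS value.
        println!("{}", value.display());
    }
    ```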


  • Another commenter already explained why this is unsound, so I’ll skip that; suffice it to say that static mut is nearly impossible to use soundly.

    Note, of course, that main() won’t be called more than once, so if you can, I would honestly just make this a local variable holding a Box<[u8; 0x400]> instead. Alternatively, a Box<[u8]> can be simpler to pass around, and a Vec<u8> pre-allocated with Vec::with_capacity lets you track the currently used length alongside the buffer (useful if the amount of actually useful data varies).

    If you want to make it a static for some reason, I’d recommend making it a plain static marked thread_local, wrapped in some kind of cell. Making it thread-local means you don’t need a lock to access it safely.
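    Here’s a minimal sketch of both options (the buffer size comes from the original example; everything else is illustrative):

    ```rust
    use std::cell::RefCell;

    fn main() {
        // Option 1: heap-allocate the buffer and let main() own it.
        let mut buffer: Box<[u8; 0x400]> = Box::new([0u8; 0x400]);
        buffer[0] = 42;

        // A Vec pre-allocated with with_capacity also tracks how much of
        // the buffer is actually in use.
        let mut growable: Vec<u8> = Vec::with_capacity(0x400);
        growable.push(42);
    }

    // Option 2: if it must be a static, make it thread-local and wrap it
    // in a cell; no locking is needed because no other thread can see it.
    thread_local! {
        static BUFFER: RefCell<[u8; 0x400]> = RefCell::new([0u8; 0x400]);
    }

    fn use_buffer() {
        BUFFER.with(|buf| {
            buf.borrow_mut()[0] = 42;
        });
    }
    ```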


  • I already do #1, and I push for #3 (specifically Python or TS) where I can at work, but people there have this weird obsession with bash even though none of these scripts run natively on Windows (outside WSL). Currently I do #2, but I often end up stuck in bash the whole time because it’s needed for things as simple as building our code. I want to try out Fish as an alternative for those situations.


  • Yeah I normally use Nushell as well. It was the one cross-platform shell I really liked.

    I’ll still use it. I just need something a bit closer to bash for when I need bash commands to get something done, or when I’m working in an environment where others use bash. Nushell has some pretty major syntax differences, like && not being used to chain commands (you sequence with ; instead).



  • Is this your first time here?

    Your account is brand new, and you’ve already posted three posts related to JPlus in this community in one day. Please tell me you’re joking with this one.

    This post is a GitHub link to the project. Cool, I love seeing new projects, especially when the goal is to make it harder to write buggy code.

    The other post is an article that immediately links to the GitHub. The GitHub repo contains a link at the top to, from what I can tell, the exact same article. Both the article and the GitHub README explain what JPlus is and how to use it.

    Why is this two posts when they contain the same information and link to each other directly at the top?





  • The conclusion of this experiment is objectively wrong when generalized. At work, to my disappointment, we have spent years trying to make this work, and it has been failure after failure. I wish we’d just stop, but at least we eventually moved on to more useful things, like building tools adjacent to the problem, which is honestly the only reason I stuck around.

    There are several reasons why this approach cannot succeed:

    1. The outputs of LLMs are nondeterministic. Most problems require determinism. For example, REST API standards require idempotency for some kinds of requests, and an LLM without a fixed seed and a temperature of 0 will return different responses at least some of the time.
    2. Most real-world problems are not simple input-output machines. Take, say, an API endpoint that posts a message to Lemmy: that endpoint does a lot of work. It needs to store the message in the database, federate the message, and verify that the message is safe. It also needs to validate the user’s credentials before all of this, and it needs to record telemetry for observability purposes. LLMs cannot do all of this. They might, if you’re really lucky, be able to generate code that does it, but a single LLM call can’t do it by itself.
    3. Some real-world problems operate on unbounded input sizes. Context windows are bounded and, as currently designed, cannot handle unbounded inputs. See signal processing for an example of a problem an LLM cannot solve simply because it cannot receive the input.
    4. LLM outputs cannot be deterministically improved. You can tweak prompts and so on, but the output will not monotonically improve as you do; improving one result often means sacrificing another.
    5. The kinds of models you want to run are not in your control. Using Claude? Well, Anthropic updated the model, and now your outputs have all changed and you need to update your prompts again. This has fucked us over many times.

    The list goes on. My suggestion? Just don’t. You’ll spend less time implementing the thing yourself than trying to get an LLM to do it. You’ll save on operating expenses. You’ll be less of an asshole.





  • Used Claude 4 for something at work (not much of a choice here, and that team said they generate all their code). It’s sycophantic af. Between “you’re absolutely right” and it confidently making stuff up, I wasted 20 minutes and an unknown number of tokens on it generating a non-functional unit test and then failing to fix the type errors and eslint errors.

    There were some times it was faster to use, sure, but only because I didn’t have time to learn the APIs myself, thanks to having to deliver an entire feature in a week by myself (the rest of the team doesn’t know frontend) and other shitty high-level management decisions.

    At the end of the day, I learned nothing by using it, the tests pass but I have no clue if they test the right edge cases, and I guess I get to merge my code and never work on this project again.



  • This to me feels like the author trying to understand library code, failing to do so, then complaining that it’s too complicated rather than taking the time to learn why that’s the case.

    For example, the bit about nalgebra is wild. nalgebra does a lot, but it has only one goal, and it serves that goal well. To quote its README:

    nalgebra is a linear algebra library written for Rust targeting:

    • General-purpose linear algebra (still lacks a lot of features…)
    • Real-time computer graphics.
    • Real-time computer physics.

    Note that it’s a general-purpose linear algebra library, hence a lot of non-game features, though it can absolutely be used for games. This also explains its complexity: for example, it needs to support many mathematical operations between arbitrary compatible types (say, a Vector6 and a Matrix6x6, though nalgebra supports arbitrarily sized matrices, so it’s not just the 6x6 case that needs to work here).
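    For a concrete sense of that genericity, here’s a small sketch (assuming nalgebra as a dependency; the types are real, the values made up):

    ```rust
    use nalgebra::{DMatrix, Matrix6, Vector6};

    fn main() {
        // Fixed-size types: a 6x6 matrix times a 6-component vector.
        let m = Matrix6::<f64>::identity();
        let v = Vector6::new(1.0, 2.0, 3.0, 4.0, 5.0, 6.0);
        let result = m * v;
        println!("{result}");

        // Dynamically sized matrices go through the same operator traits.
        let a = DMatrix::<f64>::identity(100, 100);
        let b = DMatrix::<f64>::from_element(100, 1, 2.0);
        let c = &a * &b;
        println!("{} rows", c.nrows());
    }
    ```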

    Now looking at glam:

    glam is a simple and fast linear algebra library for games and graphics.

    “For games and graphics” means glam can simplify itself by disregarding features it doesn’t need for that purpose; nalgebra can’t. glam can get away with only square matrices up to 4x4 because it doesn’t care about general linear algebra, just what’s needed for graphics and games. That also means glam can’t do general linear algebra and would be the wrong choice for anyone who needs it. glam was also released after nalgebra, so it should come as no surprise that its authors learned from nalgebra and simplified the interface for their specific needs.
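    A typical “transform a point” task in glam shows how much smaller the surface area gets when you only care about games and graphics (a sketch, assuming the glam crate):

    ```rust
    use glam::{Mat4, Quat, Vec3};

    fn main() {
        // glam sticks to the fixed-size types games need: Vec2-4, Mat2-4, Quat.
        let transform = Mat4::from_scale_rotation_translation(
            Vec3::ONE,                  // scale
            Quat::from_rotation_y(0.5), // rotation
            Vec3::new(0.0, 1.0, 0.0),   // translation
        );
        let world = transform.transform_point3(Vec3::new(1.0, 0.0, 0.0));
        println!("{world}");
    }
    ```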

    So what about wgpu? Well…

    wgpu is a cross-platform, safe, pure-Rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.

    GPUs are complicated af. wgpu is also trying to mirror a very actively developed standard by following WebGPU. So why is it so complicated? Because WebGPU is complicated, because GPUs are very complicated, and because its users want that complexity exposed so they can do whatever crazy magic they want with the GPU rather than being blocked because the complexity was hidden away. It’s abstracted to hell and back because GPU interfaces are all wildly different: OpenGL is nothing like Vulkan, which is nothing like DirectX 12, which is nothing like WebGPU.
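    Even the “hello device” step reflects that. Below is roughly the ceremony needed just to get a device handle (a sketch, assuming a wgpu 0.19-era API and pollster as a minimal async executor; signatures shift between wgpu versions):

    ```rust
    fn main() {
        let instance = wgpu::Instance::default();

        // Pick a physical adapter (GPU) matching the default preferences.
        let adapter = pollster::block_on(
            instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
        )
        .expect("no suitable GPU adapter found");

        // Open a logical device plus its command queue on that adapter.
        let (device, queue) = pollster::block_on(
            adapter.request_device(&wgpu::DeviceDescriptor::default(), None),
        )
        .expect("failed to create device");

        println!("backend: {:?}", adapter.get_info().backend);
        let _ = (device, queue);
    }
    ```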

    Having contributed to Bevy, there are also two things to keep in mind there:

    1. Bevy is not “done”. The code has a lot of churn because they are trying to find the right way to approach a very difficult problem.
    2. The scope is enormous. The goal with bevy isn’t to create a game dev library. It’s to create an entire game engine. Compare it to Godot or Unreal or Unity.

    What this article really reminds me of isn’t any Rust library I’ve seen, but Python libraries. It shouldn’t take an entire course to learn how to use numpy or pandas, for example. But honestly, even those libraries mostly have a single goal each that they strive to solve, and there’s a reason for their popularity.



  • For a graphics-intensive application, this (or something custom with egui).

    Bevy also doesn’t need to redraw every N milliseconds or anything like that. You can create a custom game loop and redraw only when needed, whether that’s at 60fps or only on window events.
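    A minimal sketch of that reactive setup (assuming a recent Bevy; WinitSettings::desktop_app() is the actual knob, the rest is boilerplate):

    ```rust
    use bevy::prelude::*;
    use bevy::winit::WinitSettings;

    fn main() {
        App::new()
            .add_plugins(DefaultPlugins)
            // Reactive mode: only run the schedule (and redraw) in response
            // to window/input events instead of spinning a fixed-rate loop.
            .insert_resource(WinitSettings::desktop_app())
            .run();
    }
    ```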

    There’s also no reason a Bevy app couldn’t be embedded within a larger application. You can create the Bevy app when needed, render to a render target rather than the window surface, and then manually draw the result wherever you need it in your egui app. This also means you can stop the app, or at least the game loop, when it’s no longer needed.
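    A rough sketch of the render-to-texture half (exact types shift between Bevy versions; `offscreen` is a hypothetical Handle<Image> created elsewhere with RENDER_ATTACHMENT | TEXTURE_BINDING usage):

    ```rust
    use bevy::prelude::*;
    use bevy::render::camera::RenderTarget;

    // Spawn a camera that renders into an offscreen image instead of the
    // window surface; the host UI (egui, say) can then sample that image.
    fn spawn_offscreen_camera(commands: &mut Commands, offscreen: Handle<Image>) {
        commands.spawn(Camera3dBundle {
            camera: Camera {
                target: RenderTarget::Image(offscreen),
                ..default()
            },
            ..default()
        });
    }
    ```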


  • I have a simple approach to comments: do whatever makes the most sense to you, your team, and anyone else who is expected to read or maintain the code.

    All these hard rules about comments (where they should live, whether they should exist, and so on) exist only to be broken by edge cases. Personally, I agree with this post for the given example, but eventually an edge case will come along where it no longer works well.

    I think far too many people fixate on comments, especially in Clean Code circles. At the end of the day, what I want to see is:

    • Does the code work? How do you know?
    • What does the code do? How do you know? How do I know?
    • Can I easily add to your code without breaking it?

    Whether you use comments at all, where you place them, and whether they’re full sentences, fragments, lowercase, sentence case, etc. makes no difference to me, as long as I know what the code does when I see it (assuming sufficient domain knowledge).