• 0 Posts
  • 190 Comments
Joined 1 year ago
Cake day: July 10th, 2024


  • Zacryon@feddit.org to Asklemmy@lemmy.ml · What's a Tankie?
    20 days ago

    Tankie is a pejorative label generally applied to authoritarian communists, especially those who support or defend acts of repression by such regimes, their allies, or deny the occurrence of the events thereof. More specifically, the term has been applied to those who express support for one-party Marxist–Leninist socialist republics, whether contemporary or historical. It is commonly used by anti-authoritarian leftists, anarchists, libertarian socialists, left communists, social democrats, democratic socialists, and reformists to criticise Leninism, although the term has seen increasing use by liberal and right‐wing factions as well.

    https://en.wikipedia.org/wiki/Tankie

  • Good question! I have read a bit more about it, and this does indeed depend heavily on the respective compiler implementation. A compiler may prefer a plain if-else ladder for a small number of cases. For many densely packed case values, a jump table (a LUT) may play a larger role, while for sparse case values a binary search over them might be chosen (see the sketch below).
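
    To illustrate the binary search strategy: the compiler can compare against a middle case value first and halve the candidate set with each comparison. A conceptual, hand-written sketch of that lowering (the case values and handler names here are made up for illustration; real compilers emit this in assembly):

    void handle_3();
    void handle_400();
    void handle_9000();
    
    // Sketch of a sparse switch over the values 3, 400 and 9000,
    // lowered to a binary search: the median value 400 is tested
    // first, so at most two comparisons decide where to go.
    void sparse_switch(int value) {
        if (value < 400) {
            if (value == 3) handle_3();
        } else if (value > 400) {
            if (value == 9000) handle_9000();
        } else {
            handle_400();
        }
    }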

    I inspected the generated assembler code of your (slightly extended) example via https://godbolt.org/

    The code I used:

    void check_uv();
    void check_holograph();
    void check_stripe();
    void check_watermark();
    
    void switch_test(int banknoteValue) {
    
        switch (banknoteValue) {
            case 5000:
                check_uv();
                check_holograph();
                // no break: intentionally falls through, so a 5000 note
                // also gets the stripe and watermark checks
            case 2000:
                check_stripe();
                // no break: falls through to the watermark check
            case 1000:
                check_watermark();
        }
    
    }
    

    Using x86-64 gcc 15.2, this leads to a series of cmp instructions each followed by a je, i.e. “compare” and “jump to label if equal”, which is essentially a typical if-else ladder. I get the same for x64 msvc v19.43.
    Changing the cases to 1, 2 and 3 instead of 5000, 2000 and 1000 does not change the outcome.

    Increasing to 23 different but incrementally increasing case values (1 to 23) does not change the outcome for gcc either. But here msvc introduces a performance optimization: it decrements the input value by one to map it into the range 0 to 22 and then builds a jump table, i.e. a LUT of execution addresses. (I am not going to detail the assembler logic here, but you can use the C++ code below and take a look yourself. :) )
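
    To make the jump table idea concrete, here is a rough C++ model of it (my own sketch, reduced to four handlers; the real table holds raw jump targets rather than function pointers, and the fall-through between cases is flattened into the handlers here for the sake of the sketch):

    void check_uv();
    void check_holograph();
    void check_stripe();
    void check_watermark();
    
    // One handler per case; fall-through semantics are baked in.
    static void handle_1() { check_uv(); check_holograph(); check_stripe(); check_watermark(); }
    static void handle_2() { check_stripe(); check_watermark(); }
    static void handle_3() { check_watermark(); }
    static void handle_4() { check_watermark(); }
    
    using Handler = void (*)();
    static const Handler jump_table[] = { handle_1, handle_2, handle_3, handle_4 };
    
    void switch_test_lut(int banknoteValue) {
        // Decrementing shifts the case range to 0..3; the unsigned cast
        // makes values below 1 wrap around and fail the range check too.
        unsigned index = static_cast<unsigned>(banknoteValue) - 1;
        if (index < 4)
            jump_table[index](); // one indirect call replaces a cmp/je ladder
    }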

    So even in this simple example we can already see how different compilers may implement a switch differently depending on its structure. Even though gcc chose the apparently less efficient solution here, one can usually trust the compiler to pick an efficient switch implementation. ;)

    As far as I know, we would not necessarily get the chance of similar optimizations when writing if-else ladders directly instead of a switch statement. It would be interesting to put this to a test, though, and see whether some compilers translate if-else ladders equivalently, with the performance benefits that can currently come with switch statements; a semantically equivalent ladder is sketched below.
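
    For reference, a hand-written if-else ladder that preserves the fall-through semantics of the first example could look like this (my own sketch, in case someone wants to compare the generated assembly on godbolt):

    void check_uv();
    void check_holograph();
    void check_stripe();
    void check_watermark();
    
    // If-else equivalent of the 3-case switch including its
    // fall-through: a 5000 note runs all checks, a 2000 note the
    // last two, a 1000 note only the watermark check.
    void ifelse_test(int banknoteValue) {
        if (banknoteValue == 5000) {
            check_uv();
            check_holograph();
        }
        if (banknoteValue == 5000 || banknoteValue == 2000) {
            check_stripe();
        }
        if (banknoteValue == 5000 || banknoteValue == 2000 || banknoteValue == 1000) {
            check_watermark();
        }
    }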

    The inflated code:

    void check_uv();
    void check_holograph();
    void check_stripe();
    void check_watermark();
    
    void switch_test(int banknoteValue) {
    
        switch (banknoteValue) { // 23 contiguous cases, enough for msvc to build a jump table
            case 1:
                check_uv();
                check_holograph();
            case 2:
                check_stripe();
            case 3:
                check_watermark();
            case 4:
                check_watermark();
            case 5:
                check_watermark();
            case 6:
                check_watermark();
            case 7:
                check_watermark();
            case 8:
                check_watermark();
            case 9:
                check_watermark();
            case 10:
                check_watermark();
            case 11:
                check_watermark();
            case 12:
                check_watermark();
            case 13:
                check_watermark();
            case 14:
                check_watermark();
            case 15:
                check_watermark();
            case 16:
                check_watermark();
            case 17:
                check_watermark();
            case 18:
                check_watermark();
            case 19:
                check_watermark();
            case 20:
                check_watermark();
            case 21:
                check_watermark();
            case 22:
                check_watermark();
            case 23:
                check_watermark();
        }
    
    }
    

  • That falls into the “very desperate” part. As long as there are companies with better recruitment processes, it helps if most people prefer those over the others. If all of those companies reject an applicant, the applicant might become “desperate” and turn towards worse companies. So it’s more the mass of people that influences the market and can thereby improve its conditions.



  • There was a similar study / survey by Microsoft (I don’t remember anymore if it was really them) recently where similar results were found. In my experience, LLM-based coding assistants are pretty okay for low-complexity tasks and creating boilerplate code, especially if the task does not require deeper understanding of the system architecture.

    But the more complex the task becomes, the harder they start to suck and fail. This is where the time drag begins. They also rather often reproduce common mistakes or outdated coding approaches instead of newer standards, and deviate from the given instructions way too often. And if you do not check the generated code thoroughly, which can happen if it “looks okay” at first glance, then tracking down the resulting bugs and error sources can become quite cumbersome.

    Debugging is where I have wasted most of my time with AI assistants. While there is some advantage in having a somewhat more capable rubber duck, it is usually not really helpful in fixing things. Either the error/bug sources are completely missed (even some beginner mistakes), or it applies band-aid solutions rather than addressing the cause, or, and this is the worst of all, it is very stubborn about the alleged problem cause (possibly combined with forgetting earlier debugging findings, resulting in a tedious reasoning and chat loop). I have found myself arguing with the machine more often than I’d like. Hallucinations or unfounded fix hypotheses regularly make this worse.
    However, letting the AI assistant add some low-level debug code to help analyze the problem has often been useful in my experience. But this requires clear and precise instructions; you can’t just hope the assistant will cover all important values and aspects.

    When I ask the assistant to logically go through some lines of code step by step, possibly using an example, to nudge it towards seeing how its reasoning was wrong, it’s funny to see, e.g. with Claude, how it first says things like “This works as intended!” and a moment later “Wait… this is not right. Let me think about it again.”

    This becomes less funny for very fundamental stuff. There were times where the AI assistant told me that 0.5 is greater than 0.8, for example, which really shows the “autocorrect on steroids” nature of LLMs rather than an active critical thinking process. This is bad, obviously. But it also keeps jobs for humans in various fields of IT safe.

    Typing during the whole conversation is naturally also really slow, especially when writing more than a few sentences to provide context.

    Where I find AI coding assistants most useful is in exploring APIs that I do not know well, or code written by others that is possibly underdocumented. (Which is unfortunately really common. Most devs don’t seem to like writing documentation.)
    Generating documentation for such code, or for my own, also works pretty well in most cases, but it tends to contain mistakes or miss important mechanisms.

    Overall, in my experience, AI assistance gives a mild productivity boost for tasks with low complexity and low contextual-knowledge requirements. It is useful for exploring code and writing documentation, but I cannot really recommend it for debugging. It is important to learn how to use such AI tools precisely in order to save time instead of wasting it, since as of now they are not really capable of much.