• 1 Post
  • 26 Comments
Joined 11 months ago
Cake day: April 27th, 2024






  • I have one big frustration with that: your voice input has to be understood PERFECTLY by the speech-to-text (STT) system.

    If you have a “To Do” list and say “Add cooking to my To Do list”, it will do it! But if the STT system understood:

    • Todo
    • To-do
    • to do
    • ToDo
    • To-Do

    The system will say it couldn’t find that list. Same for the names of your lights, asking for the time,… and you have very little control over this.

    HA Voice Assistant either needs to find a PERFECT match, or you need to be running a full-blown LLM as the backend, which honestly works even worse in many ways.

    They recently added the option to use an LLM as a fallback only, but on most people’s hardware that means a big chunk of requests take a suuuuuuuper long time to get a response.

    I do not understand why there’s no option to just fall back to the most similar command when there’s no exact match, using something like the Levenshtein distance (rough sketch below).
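
    As a rough illustration (this is not how Home Assistant resolves names today; the list names, threshold, and function names here are made up), fuzzy matching a heard name against the known lists could look something like this:

    ```python
    # Toy sketch of fuzzy list-name matching via Levenshtein distance.
    # Hypothetical example, not Home Assistant code.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance (insert / delete / substitute)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            cur = [i]
            for j, cb in enumerate(b, start=1):
                cur.append(min(
                    prev[j] + 1,               # deletion
                    cur[j - 1] + 1,            # insertion
                    prev[j - 1] + (ca != cb),  # substitution
                ))
            prev = cur
        return prev[-1]

    def best_match(heard: str, known_names: list[str], max_distance: int = 2) -> str | None:
        """Return the closest known name if it is 'close enough', else None."""
        norm = lambda s: s.lower().replace("-", " ").strip()
        distance, name = min((levenshtein(norm(heard), norm(n)), n) for n in known_names)
        return name if distance <= max_distance else None

    # "Todo", "to-do", "ToDo", ... would all resolve to the actual list name:
    print(best_match("Todo", ["To Do", "Shopping", "Movies"]))  # -> "To Do"
    ```

    Anything above the distance threshold would still fail, but all five spellings above would land on the right list.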






  • smiletolerantly@awful.systems to 196@lemmy.blahaj.zone · rule · 4 points · 10 days ago

    Hmm, it’s a bit cheaper here (I think - it’s been a while!), but yeah.

    Electricity is expensive here; I think the server setup draws about 40€/month worth of power, but that is for the entire setup of course, not just the pirating-related stuff. Plus ~9€/month for the two usenet backbones, and a couple of bucks for trackers.



  • smiletolerantly@awful.systems to 196@lemmy.blahaj.zone · rule · 5 points · 11 days ago

    That’s mostly true, although (in my case at least) I am usually aware of all the shows I have available on Jellyfin, and they’re only ones I like.

    For discovering new shows to download, things like Jellyseerr actually do give recommendations… No idea how good they are though.

    But frankly, Netflix used to recommend a lot of things that sounded interesting on a surface level and then turned out to be utter shit. Probably not an entirely bad thing to be lacking recommendations :D


  • smiletolerantly@awful.systems to 196@lemmy.blahaj.zone · rule · 26 points · edited · 11 days ago

    For non-English ones in my native language. There aren’t a lot of them. AFAICT the one I mostly use is free for a handful of requests/day, but generously lifts that limit in exchange for a “donation” 😄

    (It’s only around 20/year)

    Edit: and just to be clear, that one Tracker took us from “basically nothing is available in our language” to “literally everything is”, so it’s money well spent.


  • smiletolerantly@awful.systems to 196@lemmy.blahaj.zone · rule · 51 points · 11 days ago

    For a very long time, I was one of the people who kept saying:

    “I used to pirate until Netflix came along; now I pirate because of the fragmentation of services; should a good service become available at a reasonable price again, I will be happy to switch back.”

    But at some point, that stopped being true. More precisely, my *arr-stack + Jellyfin setup became so stable that I no longer really think about it, while also getting better-quality content, often sooner than it would legally be available here thanks to global licensing shenanigans.

    Another factor is that at some point we crossed the “enough content to mindlessly scroll until we find something to watch” barrier, which my GF actually kinda missed from Netflix.

    The crazy thing, though, is that we pay actual money for this: hardware, electricity, access to trackers and two usenet backbones. All in all, I do not think it’s cheaper than getting Netflix+Prime+Disney.

    It’s just better. And we will not be switching back, ever.






  • No. I am not saying that to put man and machine in two boxes. I am saying that because it is a huge difference, and yes, a practical one.

    An LLM can talk about a topic for however long you wish, but it does not know what it is talking about; it has no understanding or concept of the topic. And that shines through the instant you hit a spot where the training data was lacking and it starts hallucinating. LLMs have “read” an unimaginable amount of text on computer science, and yet as soon as I ask something niche, they spout bullshit. Not their fault, they’re not lying; they’re just doing what they always do, putting statistically likely token after statistically likely token (see the toy sketch at the end), only in this case the training data was insufficient.

    But it does not understand or know that either; it just keeps talking. I go “that is absolutely not right, remember that <…> is <…>”, and whether or not what I said was true, it will go “Yes, you are right! I see now, <continues to hallucinate>”.

    There’s no ghost in the machine. Just fancy text prediction.
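
    To make “statistically likely token after statistically likely token” concrete, here is a toy sketch of the generation loop. The probabilities are made up, and a real LLM conditions on the whole context with a neural network rather than on the last word with a lookup table, but the shape of the loop is the same: pick what usually comes next, never check whether it is true.

    ```python
    # Toy next-token generator. Purely hypothetical probabilities,
    # keyed on the last token only; a real LLM conditions on the full context.
    import random

    NEXT = {
        "the":      {"cat": 0.5, "dog": 0.3, "computer": 0.2},
        "cat":      {"sat": 0.6, "meowed": 0.4},
        "dog":      {"sat": 0.5, "barked": 0.5},
        "computer": {"crashed": 1.0},
        "sat": {".": 1.0}, "meowed": {".": 1.0}, "barked": {".": 1.0}, "crashed": {".": 1.0},
    }

    def generate(start: str = "the") -> str:
        tokens = [start]
        while tokens[-1] != ".":
            dist = NEXT[tokens[-1]]
            # Sample the next token by likelihood; there is no notion of truth
            # or meaning anywhere, just "what tends to come next".
            tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
        return " ".join(tokens)

    print(generate())  # e.g. "the cat sat ."
    ```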