Essentially, yes. Great point! I think it needs more features to function more like a social network (transitive topic-based sharing, for one).
Hah, I designed one as well!
I think the flow of information has to be fundamentally different.
In mine, people only receive data directly from people they know and trust in real life. This makes scaling easy, and makes it impossible for centralized entities to broadcast propaganda to everyone at once.
I described it at freetheinter.net if you’re interested
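The direct-trust delivery rule described above can be sketched in a few lines. This is a hypothetical illustration, not the actual freetheinter.net design: all names and the `deliver` function are invented for the example. It shows how posts reach only accounts that directly trust the author, so no single node can broadcast to the whole network.

```python
# Sketch of a friend-to-friend delivery rule: a post is delivered only to
# users who directly trust its author in real life, with no transitive
# forwarding. All names here are illustrative.

def deliver(author, post, trust_edges):
    """Return the set of users who receive `post`: exactly those who
    directly trust `author`."""
    return {user for (user, trusted) in trust_edges if trusted == author}

# trust_edges: (user, person_they_trust_in_real_life)
trust_edges = {
    ("alice", "bob"),
    ("carol", "bob"),
    ("dave", "carol"),
}

print(deliver("bob", "hello", trust_edges))      # only alice and carol receive it
print(deliver("mallory", "spam", trust_edges))   # nobody trusts mallory: empty set
```

Because delivery requires a pre-existing real-life trust edge, a centralized broadcaster with no such edges reaches no one, which is the scaling and anti-propaganda property claimed above.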
helopigs@lemmy.world to Technology@lemmy.world • OpenAI declares AI race “over” if training on copyrighted works isn’t fair use • English • 1 • 4 months ago

The issue is that foreign companies aren’t subject to US copyright law, so if we hobble US AI companies, our country loses the AI war.
I get that AI scraping seems unfair, but there isn’t really a way to prevent it (domestic or foreign) aside from removing all public content from the internet.
helopigs@lemmy.world to Technology@lemmy.world • Sergey Brin says AGI is within reach if Googlers work 60-hour weeks • English • 1 • 4 months ago

Sorry for the late reply - work is consuming everything :)
I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”
Considering the claim that LLMs have “no true understanding”, I think there isn’t a definition of “true understanding” that would cleanly separate humans and LLMs. It seems clear that LLMs are able to extract the information contained within language, and use that information to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what’s relevant, and that’s solved.
Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.
I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.
There is still progress to be made before we’re obsolete, but I think it will be just a few years, and then it’s just a question of cost efficiency.
Anyways, we’ll see! Thanks for the thoughtful reply
helopigs@lemmy.world to Technology@lemmy.world • Reddit will warn users who repeatedly upvote banned content • English • 3 • 4 months ago

Niche communities are still struggling due to the chicken-and-egg problem (and Reddit’s dominance), but it’s improving.

If there is a party, it’s about Lemmy’s inevitable growth amidst Reddit enshittification.
helopigs@lemmy.world to Technology@lemmy.world • Sergey Brin says AGI is within reach if Googlers work 60-hour weeks • English • 1 • 4 months ago

Relative to where we were before LLMs, I think we’re quite close.
I think 10x is a reasonable long-term goal, given continued improvements in models, agentic systems, and tooling, and proper use of them.
It’s close already for some use cases; for example, understanding a new codebase with the help of Cursor’s agent is kind of insane.
We’ve only had these tools for a few years, and I expect software development will be unrecognizable in ten more.