I used a hybrid of near-shore telepresence and on-site scrum sessions to move fast and put the quantum metaverse on a content-addressable DeFi AI blockchain
They convinced a good chunk of the country that it’s a good thing.
But then you’ll need dog water to deal with the cat water
It’s a multi-faceted problem.
Opaqueness and lack of interop are one thing. (Although, I’d say the Lemmy/Reddit comparison is a bit off-base, since those center around user-to-user communication, so prohibition of interop is a bigger deal there.) Data dignity or copyright protection is another thing.
And then there’s the fact that anything can (and will) be called AI these days.
For me, the biggest problem with generative AI is that its most powerful use case is what I’d call “signal-jamming”.
That is: Creating an impression that there is a meaningful message being conveyed in a piece of content, when there actually is none.
It’s kinda what it does by default. So the fact that it produces meaningless content so easily, and even accidentally, creates a big problem.
In the labor market, I think the problem is less that automated processes replace your job outright and more that if every interaction is mediated by AI, it dilutes your power to exert control over how business is conducted.
As a consumer, having AI as the first line of defense in consumer support dilutes how much you can hold a seller responsible for their services.
In the political world, astro-turfing has never been easier.
I’m not sure how much fighting back with your own AI actually helps here.
If we end up just having AIs talk to other AIs as the default for all communication, we’ve pretty much forsaken the key evolutionary feature of our species.
It’s kind of like solving nuclear proliferation by perpetually launching nukes from every country to every country at all times forever.
Should be using Australium
Need bag water to counteract the mouse water
Thelen brought a jar of lithium iron phosphate to the podium. Grim-faced and wearing a navy blue suit, he poured out a small sample of the substance into a bottle for the audience to pass around. Then he began reading safety guidelines for handling it. “If you get it on the skin, wash it off,” he said. “If you get it in your mouth, drink plenty of water.”
Then, Thelen opened the jar again, this time dipping his index finger inside. “This is my finger,” he said, putting his finger in his mouth. A sucking sound was heard across the room. He raised his finger up high. “That’s how non-toxic this material is.”
The No Gos were not impressed.
Worked fine for Midgley, after all.
FWIW: The major technical advancements are usually public sector R&D.
The private sector doesn’t have the same tolerance for risk.
They just swoop in to package it and monetize it, while patenting every possible way to combine this publicly available tech.
Bring knowledge of CFCs to a time when we’re able to make them but not able to detect the ozone problems they cause.
A very reassuring technology to have!
But my worry was more about them changing their business model once they get big enough.
I’ve been using Kagi for about a month now, and I think I’m gonna stick with it. Paying with dollars instead of data/attention feels more healthy for everyone involved.
(Fully realizing, of course, that there’s nothing stopping them from doing both, and that’s why we need better laws. Voting with your wallet will never be a complete solution… but it is something I can do right now.)
The dichotomy of “freedom to” and “freedom from” is pretty well-worn territory in philosophy. There are many different formulations of it (including options beyond just these two), but the simplest model is this:
“Freedom to”: The protected right to do something, like fire a gun in the air.
“Freedom from”: The enforced guarantee that you will not be impacted by the actions of others, like your neighbor’s falling bullets.
An egalitarian society can’t grant everyone the “freedom to” take any action while also guaranteeing everyone “freedom from” the consequences of all others’ actions.
If I have the freedom to drive a monster truck on any public motorway, I necessarily lose the freedom to walk those streets without worrying about monster trucks.
The only way around it is to have a privileged class that has extra “freedom to” do things when the consequences mainly impact the underclass, and extra “freedom from” the actions of the underclass.
Like, most states allow you the “freedom to” openly carry a firearm, but also employ police to protect your “freedom from” people being an immediate threat to your life.
In theory, you can’t have both. So in practice, this means that only white people get to openly carry guns. Black people get disarmed or shot.
—
That said, I’d disagree that labor freedom reduces economic security in general, but if you got more specific I’m sure there are some instances where that’s true.
Just specifically don’t take an employer’s word when they say “if you unionize we can’t protect you anymore”.
Tracking down my references to obscure cognitive science podcasts gives people a sense of pride and accomplishment.
…much like the escalation from vegetarianism to local-first, as described by McRaney’s guest whose book cover features a rubber duck.
Gotcha. Yeah, I can endorse that viewpoint.
To me, “engineer” implies confidence in the specific result of what you’re making.
So like, you can produce an ambiguous image like The Dress by accident, but that’s not engineering it.
The researchers who made the Socks and Crocs images did engineer them.
Privacy doesn’t mean that nobody can tell what you’re thinking. It means that you will always be more justified in believing yourself to be conscious than in believing others are conscious. There will always be an asymmetry there.
Replaying neural activity is impressive, but it doesn’t prove the original recorded subject was conscious quite as robustly as my daily subjective experience proves my own consciousness to myself. For example, you could conceivably fabricate an entirely original neural recording of a person who never existed at all.
I added some episodes of Walden Pod to my comment, so check those out if you wanna go deeper, but I’ll still give a tl;dl here.
Privacy of consciousness is simply that there’s a permanent asymmetry of how well you can know your own mind vs. the minds of others, no matter how sophisticated you get with physical tools. You will always have a different level of doubt about the sentience of others, compared to your own sentience.
Phenomenal transparency is the idea that your internal experiences (like what pain feels like) are “transparent”, where transparency means you can fully understand something’s nature through cognition alone, without needing to measure anything in the physical world to complete your understanding. For example, the concept of a triangle, or the fact that 2+2=4, is transparent. Water is opaque, because you have to inspect it with material tools to understand the nature of what you’re referring to.
You probably immediately have some questions or objections, and that’s where I’ll encourage you to check out those episodes. There’s a good reason they’re longer than 5 sentences.
If you wanna continue down the rabbit hole, I added some good stuff to my original comment. But if you’re leaning towards epiphenomenalism, might I recommend this one: https://audioboom.com/posts/8389860-71-against-epiphenomenalism
Edit: I thought of another couple of things for this comment.
You mentioned consciousness not being well-defined. It actually is, and the go-to definition comes from Nagel’s 1974 essay “What Is It Like to Be a Bat?”
It’s a pretty easy read, as are all of the essays in his book Mortal Questions, so if you have a mild interest in this stuff you might enjoy that book.
Very Bad Wizards has at least one episode on it, too. (Link tbd)
Speaking of Very Bad Wizards, they have an episode about sex robots (link tbd) where (IIRC) they talk about the moral problems with having a convincing human replica that can’t actually consent, and that doesn’t even require bringing consciousness into the argument.
Not technically, because there are scenarios where you can give up some freedom or safety without improving the other in return (and therefore restore freedom/safety afterwards without diminishing the other)… but it’s a close enough approximation to be useful, kinda like classical physics vs general relativity.
If you want to be more detailed, you can look at “freedom to” vs “freedom from”. This has its own limitations, but it’s precise enough while still being useful.
For example, assuming everyone involved is constrained by the same rules:
You can’t have the freedom to fire a gun in the air, and have freedom from your neighbor’s falling bullets.
You can’t have the freedom to drive a tank down the street, and have freedom from fear of being squashed as a pedestrian.