

They convinced a good chunk of the country that it’s a good thing.
It’s a multi-faceted problem.
Opaqueness and lack of interop are one thing. (Although, I’d say the Lemmy/Reddit comparison is a bit off-base, since those center around user-to-user communication, so prohibition of interop is a bigger deal there.) Data dignity or copyright protection is another thing.
And also there’s the fact that anything can (and will) be called AI these days.
For me, the biggest problem with generative AI is that its most powerful use case is what I’d call “signal-jamming”.
That is: Creating an impression that there is a meaningful message being conveyed in a piece of content, when there actually is none.
It’s kinda what it does by default. So the fact that it produces meaningless content so easily, and even accidentally, creates a big problem.
In the labor market, I think the problem is less that automated processes replace your job outright and more that if every interaction is mediated by AI, it dilutes your power to exert control over how business is conducted.
As a consumer, having AI as the first line of defense in customer support dilutes your ability to hold a seller responsible for their services.
In the political world, astro-turfing has never been easier.
I’m not sure how much fighting back with your own AI actually helps here.
If we end up just having AIs talk to other AIs as the default for all communication, we’ve pretty much forsaken the key evolutionary feature of our species.
It’s kind of like solving nuclear proliferation by perpetually launching nukes from every country to every country at all times forever.
Should be using Australium
Thelen brought a jar of lithium iron phosphate to the podium. Grim-faced and wearing a navy blue suit, he poured out a small sample of the substance into a bottle for the audience to pass around. Then he began reading safety guidelines for handling it. “If you get it on the skin, wash it off,” he said. “If you get it in your mouth, drink plenty of water.”
Then, Thelen opened the jar again, this time dipping his index finger inside. “This is my finger,” he said, putting his finger in his mouth. A sucking sound was heard across the room. He raised his finger up high. “That’s how non-toxic this material is.”
The No Gos were not impressed.
Worked fine for Midgley, after all.
Could just say:
If you accept either privacy of consciousness or phenomenal transparency, then philosophical zombies must be conceivable, and therefore physicalism is wrong and you can’t engineer consciousness by mimicking brain states.
Edit:
I guess I should’ve expected this, but I’m glad to see multiple people wanted to dive deep into this!
I don’t have the expertise or time to truly do it justice myself, so if you want to go deep on this topic I’m going to recommend my favorite communicator on non-materialist (but also non-religious) takes on consciousness, Emerson Green:
But I’m trying to give a tl;dl to the first few replies at least.
I used a hybrid of near-shore telepresence and on-site scrum sessions to move fast and put the quantum metaverse on a content-addressable de-fi AI blockchain