Comments
Jesus, I think I preferred burning up in a fiery hell of climate catastrophe.
The way I see it is...there's a 50% chance that the singularity will be beneficial. However, there's a near-100% chance that, in the absence of AI, we'll destroy our only habitable environment with climate change or nukes...yet there's a decent (but unknown) possibility that even pre-singularity AI could save us from both those things.
Accordingly, AI is a necessity for humanity's survival; the idea that it might destroy us is incidental since that's an inevitability in AI's absence.
As a complete non-expert, the biggest risk I see from current AI is that it accelerates the race to the bottom, with even more "news" websites regurgitating empty nonsense into the howling void of the Internet. And that people carry on using it to create shit code that doesn't quite do what it's meant to.
"Well, gather 'round, my friends, I've got a tale to tell,
'Bout a world where machines and silicon dwell,
They said progress was the path we should take,
But little did we know the cost it would make.
They built 'em smart, with algorithms so keen,
Learnin' from our data, every sight and every scene,
They promised us convenience, a life made so sweet,
But now we're standin' on the edge of a perilous feat."
Ask it to throw in a freestyle rap about the benefits of fish oil, tying this back to the original theme. Then ask it to turn the song into a screenplay. Then ask it to summarise the plot in bullet points. It is very good at these sorts of tasks lol
Funny, because even basic AI was science fiction not so long ago. So were electric cars, virtual reality, gene splicing, robotics and much more. But here we are. I just find it unbelievable how short-sighted a lot of people can be, quickly dismissing risks just because something seems impossible today.
Obvs all that will happen in America and the smart cookie will have his face chiselled into the nearest mountain.
The risks with (current or near future) AI are not "superintelligence" (or whatever equivalent term you want to use) - the risk is in target misalignment, misuse, or blind optimism.
There is a dark web, and there will be a dark AI, which is very frightening, and I'm not sure regulation could stop it even if it wanted to.
Proper generalised AI with intent, cunning, motivation, and guile enough to "take over" is fucking decades away. Possibly more.
Of course there are dangers in hooking LLMs up inappropriately to real-world infrastructure, but that doesn't make it intelligen^H^H^HNO CARRIER
Trading feedback here
Kids in future will be bullied for the quality of their AI, and parents will go into debt trying to buy a less embarrassing one for the bullied kid's Christmas present.
If someone now uses a gun to commit a crime, chances are the law will eventually find the shooter. If someone uses AI to commit a crime, good luck finding the perpetrator. Like that incident in Spain last week where someone used AI to turn normal young girls' photos into nude photos. The photos spread on the web, and they have no idea who did it.
So who cares if the current form of AI is "no more intelligent than a Speak and Spell toy" when, in its current form, it can already make it easier and easier for people to do things like this. We don't really need to wait decades to think long and hard about where this is eventually going to end up, do we?
We're in agreement on that.
As I said, "Of course there are dangers in hooking LLMs up inappropriately to real-world infrastructure".