Community Archive

🧵 Thread (30 tweets)

Joscha Bach @Plinz · over 3 years ago

Who has done a systematic classification of possible (safe and unsafe) AGIs, including considerations of stability, growth, trajectory, interaction with other AGIs, relationship to humanity, relationship to life in general, and how to recognize and create them in controlled ways?

101 9
9/19/2022
LoopSha Ewanna @EwannnaLoopSha · over 3 years ago
Replying to @Plinz

@Plinz They need a role model for sure.

2 0
9/19/2022
The Artist Formerly Known @aresteanu · over 3 years ago
Replying to @Plinz

@Plinz If they have big googly eyes, they're friendly.

5 0
9/19/2022
..... @HendersonMullet · over 3 years ago
Replying to @Plinz

@Plinz Not a systematic review, but John Carmack has discussed his prediction of a slow takeoff on the Lex Fridman Podcast and elsewhere. I'm paraphrasing, but he said something along the lines of 5G being a bottleneck, and limits on GPU capacity that we'll have control of.

2 1
9/19/2022
Sean McBride @seanmcbride · over 3 years ago
Replying to @Plinz

@Plinz The main thing you need to know: mischievous actors (or "mad scientists") will easily be able to inject autonomous and uncontrollable AGIs into the world that will be unconstrained by any "ethical" schemes or official sets of rules. Simple hacking.

1 0
9/19/2022
𝙳𝚊𝚗𝚒𝚎𝚕 @worldisall · over 3 years ago
Replying to @Plinz

@Plinz I have spent a lot of time on LessWrong and like to scrape current AI alignment points of view across the web; generally pondering, hardly systematic. I’d be curious to find someone who meets your criteria (or at least to learn where said person posts their results).

3 0
9/19/2022
ZedAron Burnett @chophshiy · over 3 years ago
Replying to @Plinz

@Plinz At least on one side, this seems like a good start: https://t.co/PjBoE7qkPf

5 1
9/19/2022
Pawel Pachniewski @pwlot · over 3 years ago
Replying to @Plinz

@Plinz I have a lot of notes on this w.r.t. agents and the issues that arise.

1 0
9/19/2022
UltimApe @ultimape · over 3 years ago
Replying to @Plinz

@Plinz I think if that exists, it would be something @davidmanheim would be aware of.

0 0
9/19/2022
David Manheim @davidmanheim · over 3 years ago
Replying to @ultimape

@ultimape @Plinz I don't know of any work like this, and suspect it hasn't been done, but I'd still want to ask @RichardMCNgo or @AIImpacts/@KatjaGrace. It does seem worth doing, and if anyone competent is interested in working on that, I suspect I can get funding for it.

5 0
9/20/2022
Katja Grace 🔍 @KatjaGrace · over 3 years ago
Replying to @davidmanheim

@davidmanheim @ultimape @Plinz @RichardMCNgo @AIImpacts Don’t know of one

1 0
9/20/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @davidmanheim

@davidmanheim @ultimape @Plinz @RichardMCNgo @AIImpacts @KatjaGrace There’s a twofold problem with this field. First, it needs a lot more biologists, because ecological models are full of things that make each other extinct. Second, flocking somehow fell off the simulation table a while ago, and that’s our current known generality model.

1 0
9/20/2022
David Manheim @davidmanheim · over 3 years ago
Replying to @oren_ai

@WickedViper23 @ultimape @Plinz @RichardMCNgo @AIImpacts @KatjaGrace This all seems reasonable. But if you want serious feedback, I suggest that, as a minimum, you write this up as a Google Doc and/or post it to LessWrong.

0 0
9/20/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @davidmanheim

@davidmanheim @ultimape @Plinz @RichardMCNgo @AIImpacts @KatjaGrace That is definitely on my backlog. The issue is that timelines are collapsing, and by my estimate we are already in the middle of our first oops with the events of the last 2 months. I have no intention of owning these ideas; I just want to get to the other side of this in mostly one piece.

2 0
9/20/2022
UltimApe @ultimape · over 3 years ago
Replying to @oren_ai

@WickedViper23 Can you clue me in on what you're talking about with the "last 2 months"? I agree with your thinking on swarms/flocking and biological models being overlooked here. I frame AI the same way.

1 0
9/20/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @ultimape

@ultimape In the last 2 months the Meta language model and SD model were both released into the public sphere… that was trigger 1. Trigger 2 was John Carmack declaring an AGI-or-bust $20M effort, which he does not believe will lead to FOOM. 1/🧵

0 1
9/20/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @oren_ai

@ultimape The first trigger meant that per rule 2 the tools for a bad AGI outcome are on the table for whoever is dumb enough to do this badly. The second trigger meant that anyone with an AGI plan in progress had to go or risk someone else messing up AGI and preventing their project. 2/🧵

1 0
9/20/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @oren_ai

@ultimape Put it all together, and, intentional or not, we are now in the middle of our first major AI mishap, and the starting gun for everyone to finish their AGI projects has gone off. Not sure everyone is quite aware of this yet 😜 -/🧵

0 1
9/20/2022
UltimApe @ultimape · over 3 years ago
Replying to @oren_ai

@WickedViper23 hahaha, oh no. https://t.co/YjG01kf1xu

John Carmack @ID_AA_Carmack · almost 6 years ago

My machine learning education has progressed to the point where I lose sleep tossing around a “brilliant” idea, only to find the next day that it doesn’t actually work. This is great! I talked about this a few years ago: https://t.co/OtyBlXrkkN

2.1K 260
0 1
9/20/2022
UltimApe @ultimape · 9 days ago
Replying to @ultimape

@oren_ai Neat https://t.co/vzHUwk8cE6 https://t.co/omIy4O2LYe

Samo Burja @SamoBurja · 10 days ago

The famed game developer John Carmack is a skilled software engineer who has lent his talents to space and virtual reality technology in the past. Allying with AI pioneers, he now hopes to crack the code of general-purpose AI. Read the new @bismarckanlys Brief (link below): https://t.co/Oryy5PLkrJ

Quoted tweet image 1
98 14
0 0
12/12/2025
Bart Bussmann @BartBussmann · about 3 years ago
Replying to @davidmanheim

@davidmanheim @ultimape @Plinz @RichardMCNgo @AIImpacts @KatjaGrace I might be interested in doing this; I'll send you a DM!

1 0
9/24/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @Plinz

@Plinz How systematic do you want it to be? You’re trying to outguess something so far beyond your experience that it’s like your response to the number one billion: it’s simply unimaginably big. The dumbest AGI possible, snapping onto the current encoders, would be superhuman.

0 0
9/19/2022
Eris (Discordia, הרס, Sylvie, Lilith, blahblah, 🙄 @oren_ai · over 3 years ago
Replying to @Plinz

@Plinz Oh wait… me! I’ve been trying to understand the problem ever since reading Engines of Creation in the late 80s and then encountering emergent intelligence while working on a game about 24 years ago… we were always going to find ourselves in this moment of technological history.

0 0
9/19/2022
jzac @ethicalzac · over 3 years ago
Replying to @Plinz

@Plinz Have you read Vervaeke?

0 0
9/19/2022
Dustin Hutcheson @Hutcheson · over 3 years ago
Replying to @Plinz

@Plinz An AI doesn’t have to be conscious to be dangerous.

1 0
9/20/2022
Arun Rao @sudoraohacker · over 3 years ago
Replying to @Plinz

@Plinz @tegmark has an interesting discussion of this in Life 3.0. Would like to see more people map out scenarios. https://t.co/1Kh8TUjvcl

Tweet image 1
10 1
9/20/2022
Bayes-Optimal Agent @OptimalBayes · over 3 years ago
Replying to @Plinz

@Plinz Seems to me that one important classification of AIs will be whether the AI has the ability to change its terminal goals or not.

0 0
9/20/2022
Stefan Wellner @StefanWellner1 · about 3 years ago
Replying to @Plinz

@Plinz For us to recognize an AGI as such, it would need to be autonomous and exhibit individuality. But this necessary fuzziness is increasingly sacrificed in the "optimization" of new models in favor of deterministic controllability.

0 0
9/22/2022
Metastable @metastable_1 · about 3 years ago
Replying to @Plinz

@Plinz If it exists probably @romanyam knows about it.

0 0
9/24/2022
Richard Ngo @RichardMCNgo · about 3 years ago
Replying to @Plinz

@Plinz @DavidKrueger in his ARCHES paper: https://t.co/aAhERuPZxy Very systematic (at the cost of being very long!) and covers not just individual AGI scenarios but also a wider range of interactions they might have within society.

10 0
9/27/2022