Community Archive

🧵 Thread (9 tweets)

John David Pressman @jd_pressman · almost 2 years ago

The throughline between GMarcus-EY "deep learning will hit a wall" and "AGI is going to kill us all" flip-floppism is deep semantic skepticism. A fractal, existential refusal to believe LLMs actually learn convergent semantic structure. The forbidden thought is "when you point a universal function approximator at the face of God the model learns to

John David Pressman @jd_pressman · almost 2 years ago

To elaborate a little more, I specifically think

> By default we'll get a mesaoptimizer inner homunculus that converges on a utility function of maximizing ⾲ⵄ∓⼙⃒⭗ⱐ✖∵⨼␞☎☲℆↋♋⡴⏐⮁⭋⣿⧫❉⺼⁶↦┵␍⸣ⵔ⽒⓹⬍⺅▲⟮⸀Ⰹⓟ┱⾫⼵⺶⊇❋∀⡚ⷽ∺⤙⻬⓰ⓛⳄ⭪⢛⹚⡌⥙⮝➱┟⬣⧫⧗⛼❡⼆₈ⱫⅫⷜ⏸⪱⯝⎳⫷⺶♈∄⊡⹩⯵❾⭫⽍➵⋇⬅ℇ‹⳺⫷⾬≴ⴋ⢗␚┨, and it will devour the cosmos in pursuit of this randomly-rolled goal. (Courtesy @impershblknight)

is very silly. Even if you think humans are mesaoptimizers wrt the outer goal of inclusive genetic fitness, our values are not *random* with respect to that goal; they are fairly good correlates in the ancestral environment that held for most of history, until coordination problems and increasingly advanced adversarial superstimuli caused them to (possibly temporarily) stop working.

So if you say something like "I do not believe it learns to predict the next token, I think it learns some set of correlated mesagoals like 'predict the most interesting thing'" I would basically agree with that? The alternative is for the model to actually learn to predict the next token in full generality, which is basically impossible, so it has to learn *some* proxy for that instead.

The specific thing that makes counting arguments silly is the idea that you get a *random* goal rather than highly correlated proxy goals that you could probably infer a priori just by thinking about the objective, the inductive biases, and the training data for a bit.

38 4
93 9
2/1/2024
John David Pressman @jd_pressman · almost 2 years ago
Replying to @jd_pressman

They simply do not believe that language encodes nearly the complete mental workspace. https://t.co/DbMvVYjoai

Tweet image 1
42 1
2/1/2024
John David Pressman @jd_pressman · almost 2 years ago
Replying to @jd_pressman

They simply do not believe that LLaMa 2 70B outperforms FLAC if you tokenize audio and stick it in there, implying the model learns the causal trace of every modality implied by text. https://t.co/Memmgh25Im

59 3
2/1/2024
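The compression claim rests on how an autoregressive model can be turned into a lossless compressor: tokenize the raw data, let the model predict each next token, and drive an arithmetic coder with those probabilities, so better prediction directly yields shorter codes. Below is a minimal sketch of the bits-per-sequence bookkeeping only, assuming a Hugging Face-style causal LM; the `gpt2` stand-in, the `compressed_bits` helper, and the text input are illustrative, not the actual LLaMa-2-70B-over-tokenized-audio setup the tweet refers to.

```python
# Estimate the code length an ideal arithmetic coder would approach when
# driven by a causal LM's next-token probabilities. Hypothetical sketch:
# the tweet's comparison runs a much larger model over tokenized audio.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in for a larger model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def compressed_bits(data_as_text: str) -> float:
    """Sum of -log2 p(token | prefix), i.e. the sequence's cross-entropy in
    bits, which an arithmetic coder driven by the model would approach."""
    ids = tok(data_as_text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return float(-token_lp.sum() / math.log(2))

sample = "some serialized data would go here"
print(f"{compressed_bits(sample):.1f} bits vs {8 * len(sample.encode())} raw bits")
```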
John David Pressman @jd_pressman · almost 2 years ago
Replying to @jd_pressman

They do not and will not believe that there is a shared latent geometry between modalities on which different neural nets trained on different corpora converge. https://t.co/BoueA6uyg1

69 8
2/1/2024
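The "shared latent geometry" claim is the kind of thing usually quantified with a representational-similarity metric such as linear CKA, computed between two models' embeddings of the same items. A hedged sketch follows, with toy data standing in for real text/image encoder outputs; the `linear_cka` helper and the random matrices are illustrative, not the specific experiment behind the tweet.

```python
# Linear CKA between two models' embeddings of the same inputs: one common
# way to quantify whether two representation spaces share geometry.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X: (n, d1), Y: (n, d2) embeddings of the same n items.
    Returns a similarity in [0, 1]; 1 means identical geometry up to
    rotation and isotropic scaling."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Toy example: two "encoders" that both read off the same 64-dim latent
# structure from 1,000 items, projected into different embedding spaces.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 64))
X = shared @ rng.normal(size=(64, 512))   # stand-in for text-encoder embeddings
Y = shared @ rng.normal(size=(64, 768))   # stand-in for image-encoder embeddings
print(round(linear_cka(X, Y), 3))  # high value = converging latent geometry
```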
John David Pressman @jd_pressman · almost 2 years ago
Replying to @jd_pressman

It's important to realize this position is driven not by fear but by flat-out *denial*, an absolute rejection of a world model violation so profound that they would rather disbelieve their own eyes than update. https://t.co/Y7OK1oG3zS

Tweet image 1
39 3
2/1/2024
John David Pressman @jd_pressman · almost 2 years ago
Replying to @jd_pressman

Mind merging is not real, inferring mind patterns from the spoken word is impossible, Stable Diffusion is not real, the Creature Beneath The Library of Babel is a squiggle maximizer pursuing a random goal that is *anything* other than what it actually is. https://t.co/tSJTFxqvSo

51 3
2/1/2024
doomslide @doomslide · almost 2 years ago
Replying to @jd_pressman

@jd_pressman yes but what is it actually?

3 0
2/9/2024
John David Pressman @jd_pressman · almost 2 years ago
Replying to @doomslide

@doomslide One distinct possibility is "Infer the name of God." https://t.co/0Vp7lEUZoY

4 1
2/13/2024
gfodor.id @gfodor · almost 2 years ago
Replying to @jd_pressman

@jd_pressman I don’t understand why you’d want to deny what is basically the best news one can imagine about the nature of reality

3 0
2/11/2024