🧡 Thread (22 tweets)

Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago

Reading this post by @JanelleCShane and I'm like "oh man, our current AI bots are basically left hemispheres". > "When confused, it tends not to admit it" https://t.co/zdgi1HW5KC https://t.co/wKowi16Rty

Tweet image 1
13 1
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

To explain what I mean, first I need to be clear that I'm talking about new models of the brain hemispheres, not the old and well-debunked models from the 60s! And I'll share a few of the new findings that are relevant. https://t.co/wbxfAgtsvI

Quoted tweet:
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • over 6 years ago

(Note: there's lots of myths & bs about the brain hemispheres, but this new model seems pretty solid and better than a naive model that just says "all we know is that these functions are on the left and these are on the right")

11 0
6 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

Early research on split-brain patients, by Gazzaniga, noticed that the left hemisphere would invent ("confabulate") explanations for things the right hemisphere had done, without any apparent regard to the possibility that it might be missing information (that the RH had).

5 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

Building on knowledge that LHem handles speech, Gazzaniga concluded that LH was the "interpreter". Later he realized his answer to "what does each hemisphere do?" was an oversimplification, and abandoned the prospect of an LH/RH model, in favor of ~"100s or 1000s of modules"

2 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

This was an improvement, on one level, since his model *was* an oversimplification, but it leaves a question unanswered: why then *are* these modules divided into two very distinct groups, not just in us but in most vertebrates? https://t.co/rnG0jPyZWW

Quoted tweet:
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • over 6 years ago

Pitch for why McGilchrist's @divided_brain model matters: Lateralization is not just in humans but in mice & birds & fish, so there's gotta be SOMETHING going on there. If you try to understand the rest of the brain while ignoring hemispheres, you're going to be confused!

7 0
4 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

And unfortunately, people got so embarrassed about how oversimplified the early hemisphere models were that for decades it's been unfashionable to research hemisphere models, and so most people basically pretend that it's not important. However... https://t.co/UCX5mC7qEx

Quoted tweet:
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • over 6 years ago

The obvious importance of the hemisphere axis to our experience of thinking & acting means that people are going to try to talk about itβ€”so if they refuse to use hemisphere terms, they'll use others. This then confuses the meanings of those other terms! https://t.co/4jhsw8EPJA

2 0
4 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

Fortunately, a psychiatrist named Iain McGilchrist has spent 2+ decades working on a new @divided_brain framework, which is less about "what does each hemisphere do?" (functionally) and more about the "how": the personality, attitude & paradigm of each. https://t.co/qiDmn1gf9e

Quoted tweet:
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • over 6 years ago

Meta-thread of various threads (mostly mine) about Iain McGilchrist's brain hemisphere model! https://t.co/iZHEKsKnOF

Quoted tweet image 1
102 9
3 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

McGilchrist talks about how the LHem perceives everything in terms of the known categories it understands, so is remarkably unaware of its own blindspotsβ€”both conceptual blindspots and visual blindspots (eg not shaving left side of face) https://t.co/nRoG2yqHay

Tweet image 1
5 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

Coming back to @JanelleCShane's post, there's a clear parallel between current ML developments and how the LH functions. This was boringly obvious when all classifiers could do was categorize things, but now they're attempting to describe scenes & the blindspots are glaring.

5 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

To be clear, these classifiers don't appear to even be *attempting* to update their model of the scene based on someone's skepticism of their original description, and I imagine there could be straightforward hacks to train them to do this by tweaking priors etc based on Qs.

2 0
10/25/2019
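[A minimal sketch of what that "tweak priors based on Qs" idea could look like; the function name, labels, and 0.5 doubt strength here are all hypothetical, treating a user's skeptical question as soft evidence against the model's top label:]

```python
# Hypothetical sketch: treat a user's skeptical question ("those aren't sheep?")
# as soft evidence against the top label, then renormalize. This is crude
# post-hoc reweighting, not real belief revision: the model can only shuffle
# probability mass among categories it already knows.
def revise_on_skepticism(label_probs, doubted_label, doubt_strength=0.5):
    revised = dict(label_probs)
    revised[doubted_label] *= (1.0 - doubt_strength)
    total = sum(revised.values())
    return {label: p / total for label, p in revised.items()}

# "Those aren't sheep?" -- mass shifts toward the other *known* labels only.
probs = {"sheep": 0.70, "goat": 0.20, "dog": 0.10}
print(revise_on_skepticism(probs, "sheep"))
# -> roughly {'sheep': 0.54, 'goat': 0.31, 'dog': 0.15}
```
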
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

...I'm imagining an exchange like: person: "those aren't goats?" visual chatbot: "holy shit wait maybe those are goats!? I've never seen a goat in a tree before..." 🀣 (instead of what we actually see πŸ‘‡) https://t.co/XlLJvzHdlA

Tweet image 1
4 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

But even then, it'd still be thinking purely in known categories. People are researching AI+uncertainty, but there's a difference between in-model uncertainty and uncertainty about which model to use. And choosing among a finite set of models is still technically model-uncertainty, just up a level.

3 0
10/25/2019
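[One way to picture "model-uncertainty, just up a level": Bayesian model averaging over a finite menu of candidate models. A hypothetical sketch; the model names and numbers are invented:]

```python
# Hypothetical sketch: Bayesian model averaging over a *finite* model set.
# The higher-level posterior p(model | data) is still confined to the menu
# of models we enumerated; a model not on the menu can never gain mass.
def model_posterior(model_priors, model_evidences):
    # p(m | data) is proportional to p(data | m) * p(m)
    joint = {m: model_priors[m] * model_evidences[m] for m in model_priors}
    z = sum(joint.values())
    return {m: v / z for m, v in joint.items()}

priors = {"linear": 0.5, "quadratic": 0.3, "periodic": 0.2}        # p(m)
evidences = {"linear": 0.01, "quadratic": 0.20, "periodic": 0.02}  # p(data | m)
print(model_posterior(priors, evidences))
# "quadratic" dominates -- but if the true process is none of these three,
# that possibility has probability zero by construction.
```
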
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

Bayesian updating requires a fixed ontology & even a fixed collection of hypotheses stated in that ontology, which means it's completely inadequate as an account of how reasoning/learning actually works, since any interesting problem (/life itself) starts w/o adequate ontology! https://t.co/UhP1dCnsvC

Tweet image 1
6 0
10/25/2019
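[A tiny worked example of the fixed-ontology point, with invented numbers: a hypothesis outside the enumerated set, or one assigned prior 0, can never gain probability mass, because the update only multiplies and renormalizes:]

```python
# Invented numbers for illustration: Bayesian updating over a fixed
# hypothesis set. posterior(h) is proportional to prior(h) * p(obs | h),
# so a zero-prior hypothesis stays at zero no matter the evidence, and a
# hypothesis you never wrote down can't even appear.
def bayes_update(prior, likelihood):
    post = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

# Ontology: "animals in fields". "Goats climb trees" was deemed impossible.
prior = {"sheep_in_field": 0.6, "dogs_in_field": 0.4, "goats_in_tree": 0.0}
likelihood = {"sheep_in_field": 0.05, "dogs_in_field": 0.05, "goats_in_tree": 0.9}
print(bayes_update(prior, likelihood))
# -> {'sheep_in_field': 0.6, 'dogs_in_field': 0.4, 'goats_in_tree': 0.0}
```
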
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

Relatedly: DeepMind winning at StarCraft is still waaaay closer to Go than to real-world strategy. SC has all known categories. Nobody can invent new units you've never seen, with not just different stats but abilities you can't imagine. The terrain can't be modified/destroyed.

5 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

There's an implicit model among many people, particularly computer scientists, that informal thinking is just a poor approximation of formal thinking, as opposed to being able to accomplish things that cannot be done formally, due to limits on certainty https://t.co/I5QQwq85TL

5 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

It seems to me that concerns (which I share) about Robust & Beneficial AI are oriented to the apparent impossibility of aligning something with the basic architecture of a LHem... given its native categorical tendency towards goodharting, & the complex nature of real world value.

3 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

I think many people who are *not* concerned about AI Safety are unconcerned because they assume that anything intelligent enough to be scary will have the level of non-blindspot-ness & interconnectedness that a human does (with its right hemisphere).

3 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

But we need to differentiate 2 things: 1. It may be impossible to build AGI without it also having a more right-hemisphere architecture, which *may* imply that AI safety for actual AGI won't be as impossible as the corrigibility problem currently seems. (Still very risky!)

5 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

2. It may *also* be that it is possible (& likely) to build a very powerful Narrow AI with more of the kind of left-hemisphere-esque architecture of current AI systems, which could totally fuck us up in ways @ESYudkowsky has warned about. (I seem to currently think both 1 & 2.)

3 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

This is obviously an oversimplification in many ways, but it at least starts to distinguish things so we can talk about them. Although having serious conversations about it will require shared frameworks for discussing eg brain hemispheres (much more than this tweetstorm offers!)

3 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

If you're wanting to learn more about this new hemisphere model, this podcast is a pretty good place to start. It's just 44 minutes long and touches on a lot of the things I want to include in an intro write-up when I finally make one: https://t.co/LSjHNdaYct

2 0
10/25/2019
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago
Replying to @Malcolm_Ocean

If you'd rather read than listen, McGilchrist's book is also excellent. Detailed, well-structured, and well-sourced. Amazon link: https://t.co/ryXgz6AupI https://t.co/pTT9TSYg1c https://t.co/bxi01bowoU

Tweet image 1
Quoted tweet:
Malcolm Ocean 🏴‍☠️ @Malcolm_Ocean • almost 6 years ago

Excerpts from the intro to Iain McGilchrist's book, The Master and his Emissary, which presents a new model of what's going on with the brain hemispheres. https://t.co/sbNMk4B6d6

Quoted tweet image 1
7 0
2 0
10/25/2019