🧵 Thread (11 tweets)

alignment is not fundamentally a machine learning problem. optimising narrow objectives is a machine learning problem. the messy problem of what "alignment" means, i.e., how an intelligence can fit into a larger ecology of humans, is, well, an ecological problem https://t.co/GnDszbMpJ9

alignment is not fundamentally a multi-objective problem either. sure, the "solution" will involve machine learning and multi-objective optimisation. but fundamentally, the wisdom and the open-ended action happen at the level of integrating intelligences into ecologies https://t.co/e2Bh8KIUqD


"multi objective optimisation" may be useful. but it's a small tool in an overall framework viewing training & rolling out AI as intelligences to be integrated in "working with spirits" and "raising children" are closer metaphors to alignment than "multi objective optimisation"

this is obvious once stated. but of course, it's not obvious to the kind of person who thinks optimisation and rule-following and training data are all that exists https://t.co/kM4RJ7gYfV https://t.co/gBV3NDS1ud


ecological language is such a natural fit for talking about AI alignment. 𝘣𝘦𝘤𝘢𝘶𝘴𝘦 𝘸𝘦 𝘭𝘪𝘷𝘦 𝘪𝘯 𝘢𝘯 𝘦𝘤𝘰𝘭𝘰𝘨𝘺: a continuous living cooperation of various beings and forces we don't want disrupted

we need the best ML minds, this much is true. but the overall view and approach need to be informed by ecological, animist, and relational wisdom. because this is about non-human actors in an ecology relating to us https://t.co/P41kW6tfTh

animists don't make the mistake of thinking artificial intelligences are human-like. and when LLMs get RLHF'ed to simulate a human response, they're not confused about the actor *literally being human-like*. they're more sophisticated users

myths about trickster gods and monkey's paws abound. the understanding of ecological shocks, of a layer of cultural machinery that aligns individual machinery, etc. is there, in metaphors and frameworks more evocative and better cut to the task at hand

fascinating take lol https://t.co/yDqShSfpuW

@AskYatharth yah, I'm in an AI safety class and this is my biggest takeaway. I'm hyperattuned to the phase transition between the computer language of numerical precision and the human language of words and sense-impression. many people are not, and that misattunement causes runaway muddy confusion