🧵 Thread (2 tweets)

ah, this expresses it nicely: doomers gonna doom because they're experts in the wrong field, trying to solve the wrong problem using the wrong methods, and we can only hope their mistakes will not fail to cancel out https://t.co/fkpfYQirJY

alignment is not fundamentally a machine learning problem; optimising narrow objectives is a machine learning problem. the messy problem of what "alignment" means, i.e., how an intelligence can fit into a larger ecology of humans, is, well, an ecological problem https://t.co/GnDszbMpJ9

@aleksil79 yes! "experts in the wrong domain" is something I've only started to see since I picked up expertise in domains as different as math and bodywork. different domains simply work *really* differently https://t.co/gHeMcgllmd

experts in one domain often have completely incorrect models of other domains. a materials scientist friend said he's always shocked at how off-base engineers are about how his field works, and according to him, "it's not even hard. they just lack context and don't know it"