Community Archive


🧵 Thread (2 tweets)

Aleksi Liimatainen @aleksil79 · over 1 year ago

ah, this expresses it nicely. doomers gonna doom because they're experts in the wrong field, trying to solve the wrong problem using the wrong methods, and we can only hope their mistakes will not fail to cancel out https://t.co/fkpfYQirJY

yatharth in asheville @AskYatharth · over 1 year ago

alignment is not fundamentally a machine learning problem. optimising narrow objectives is a machine learning problem. the messy problem of what "alignment" means, i.e., how an intelligence can fit into a larger ecology of humans, is, well, an ecological problem https://t.co/GnDszbMpJ9

3/27/2024
Elena Lake 🌿 @relic_radiation · over 1 year ago
Replying to @aleksil79

@aleksil79 yes! "experts in the wrong domain" is something I've only started to see once I picked up expertise in domains as different as math and bodywork. different domains simply work *really* differently https://t.co/gHeMcgllmd

Elena Lake 🌿 @relic_radiation · over 1 year ago

experts in one domain often have completely incorrect models of other domains. a materials scientist friend said he's always shocked when engineers are way off base about how his field works, and according to him, "it's not even hard. they just lack context and don't know it"

3/27/2024