Now that I think of it, this seems to be at the core of some issues both with training AI agents via reinforcement learning (if you choose the wrong metric, the agent learns behavior that makes sense under that metric but is not what you actually want) and with any kind of planned economy (you need targets for planning, but people game those targets, so again you do not get what you want).
The economic calculation problem is not only about Goodhart, but Goodhart certainly doesn't help.
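The metric-gaming dynamic above can be sketched in a toy example (hypothetical, assuming a keyword-counting proxy as the "wrong metric"): the true goal is varied, informative text, but the optimizer only sees the proxy, so the highest-scoring candidate is degenerate spam.

```python
def true_objective(text: str) -> float:
    """What we actually want (crudely approximated here by vocabulary size)."""
    return len(set(text.split()))

def proxy_metric(text: str) -> float:
    """What we actually measure and optimize: occurrences of one keyword."""
    return text.split().count("synergy")

candidates = [
    "a short but varied and informative sentence about synergy",
    "synergy synergy synergy synergy synergy synergy synergy",
]

# The optimizer picks whichever candidate scores highest on the proxy...
best = max(candidates, key=proxy_metric)

# ...which is the degenerate one, even though it scores worst on the true objective.
print(best)
print(true_objective(best))
```

The same shape applies to a planned economy's targets: once the target is the thing being optimized (or reported) rather than the underlying goal, the gap between proxy and goal is where the gaming happens.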