The Turing test is an absolutely garbage metric for identifying whether a computer qualifies as human
It’s a useful metric because it relies on the primary means by which humanity is evaluated: evaluation by other humans. You can set up a synthetic test to determine whether a response is computer generated. But that won’t measure behavior as evaluated by humans. If the results diverge, it’s because of some set of characteristics that humans aren’t reliably picking up on.
The original name for the Turing Test was “The Imitation Game”. And the fact that computers could pass the test as early as the 1960s only proves that humans (in this particular case, humans with very low exposure to computer behaviors) can be reliably deceived. But the consequence of this game iterating out over sixty years of practice is a hyper-sensitivity to computer output, such that end users will mistake humans for computers instead of the other way around.
entirely dependent on the whims of the individuals that make up the test group
Not whims, but learned observational patterns. This is what ultimately separates people from machines - patterns of behavior. If a computer and a human exhibit the exact same behavioral pattern, there’s no way to distinguish one from the other.