odysseus2000 wrote: Is it possible, or will it ever be possible, to develop a test or series of tests that would objectively show that a machine is not sentient?
Never mind a machine, how would you test that with another human being? (Or an animal?)
Obviously, in practice, we simply don't need to treat this as an issue. The simplest assumption is that we share what we share - this was mentioned by Demis Hassabis in one of the Lex Fridman interviews.
odysseus2000 wrote: I am coming round to feeling that tests such as the one proposed by Turing (and its updates) are no longer relevant, and in any case they don’t address whether the AI is useful or not, which in a practical sense is what matters.
I could be completely wrong & welcome comments.
As for whether AI is "useful", etc., I suspect the Turing Test isn't really that important, and in general it may be a bit of a distraction. After all, what is even meant in this context by "think"?
I think I'd go with this:
Impracticality and irrelevance: the Turing test and AI research
https://en.wikipedia.org/wiki/Turing_test#Impracticality_and_irrelevance:_the_Turing_test_and_AI_research