The Turing test may cause controversy because it is concerned strictly with how the subject acts, that is, with the external behavior of the machine. In this regard, it takes a behaviorist or functionalist approach to the study of intelligence. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behavior by following a simple (but large) list of mechanical rules, without thinking or having a mind at all. An artificial brain of this kind would act in the same way as a human brain and externally appear as human-like as you or I, yet internally it would be made of unconscious materials. This ultimately brings me to the question of whether unconscious materials can be assembled in a way that becomes conscious, Dr. Johnson?
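To make the phrase "a simple (but large) list of mechanical rules" concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. The patterns, templates, and pronoun table are invented for illustration; Weizenbaum's actual DOCTOR script was far larger and used keyword ranking, but the principle of substitution without understanding is the same.

```python
import re

# Each "mechanical rule" pairs a regex pattern with a response template.
# These three rules are hypothetical examples, not Weizenbaum's script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

# Swap first/second-person pronouns so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}


def reflect(fragment: str) -> str:
    """Mechanically swap pronouns in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())


def respond(utterance: str) -> str:
    """Apply the first matching rule; no comprehension, only substitution."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return DEFAULT
```

For example, `respond("I need my coffee")` yields "Why do you need your coffee?", which can feel conversational despite the program having no mind at all, which is exactly the point of the ELIZA objection.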
Great question and concept.
There are two approaches. The first holds that there is nothing mechanically natural about human cognition; rather, human beings in the modern era, raised amid the omnipresent machine, have taken on the characteristics of mechanism. Thought has been stunted, and all that seems to matter is causality, which comes down to power.
The other approach is to say that the entire project is absurd because there is no real similarity between organism and mechanism. Simulating the production of even a few proteins would take up a massive amount of computer space. So long as we remain in the realm of novelty and imitation, there seems to be no problem, and certain AI units can be very helpful to law enforcement and the like.
The main question is about the nature of thought. If thought is reducible to a mechanism that correlates objects and abstract definitions, then not only is ELIZA thinking, but human beings are, in fact, machines (and of course, should be treated as such).
On a slightly less scientific note, let's not forget that promoting these machines as semi-human means millions of dollars to that industry. Defining thought in the most convenient way can earn these institutions a huge amount of federal or private cash. It is not a good idea to dismiss that clear economic and professional interest. Talking the way I do, for example, will get you denied tenure every time, since no one wants to fund someone who believes in spirit, etc.
Pat Langley holds that there are two components of "higher order" thought. The first is the recognition of concepts. I fail to see how this is "higher order," since the nominalist school holds these concepts to be non-existent. The other is to produce a machine that can go through a set of logical deductions flawlessly; moreover, this multi-step reasoning would be put to use in achieving goals. How these goals are to be formulated apart from outside interference is a mystery (Langley, 2012).
I can argue that the typical human being is just a machine (or, more accurately, that it has proved convenient to consider itself a machine). The existentialists cited exactly this as an example of bad faith: if all is mechanism, then I am not morally responsible; at worst, I am sick and need psychotropics. It is no accident that the existentialists have railed against AI as a mockery of cognition. They hold, and I think I do too, that cognition assumes both consciousness and freedom (or that freedom is inherently a part of ...