Institute for Christian Teaching

Education Department of Seventh-day Adventists

 

 

 

 

 

 

 

 

ARTIFICIAL INTELLIGENCE: FROM THE FOUNDATIONS OF

MATHEMATICS TO INTELLIGENT COMPUTERS

 

 

 

 

 

 

 

by

 

Raymond L. Paden

 

Computer Science Department

Andrews University

Berrien Springs, Michigan

 

 

 

 

Prepared for the

Faith and Learning Seminar

held at Union College

Lincoln, Nebraska

September 1992

 

106-92 Institute for Christian Teaching

12501 Old Columbia Pike

Silver Spring MD 20904, USA

 

ABSTRACT

 

The ramifications of artificial intelligence for Christian faith are explored. The discussion is intended to motivate thoughtful discussion by Christian scholars from numerous disciplines and to serve as a catalyst for ideas to be used by computer science and mathematics professors in integrating faith and learning. Artificial intelligence and its relationship to faith are developed within a historical context. Thus the discussion progresses from AI's early precursors in the foundations of mathematics, through the development of computational theory, and ends with its modern program. The point is not to resolve issues but to generate dialogue; thus questions are posed and cautions are stated rather than definitive answers given to complex issues. Moreover, the language used is largely nontechnical, assuming that the specialist can easily supply the concomitant rigor only hinted at in the text that follows.

 

Keywords:    Artificial Intelligence, Computer Science, Mathematics, Christian Education, Integration of Faith and Learning

 

1. Introduction

 

It is the intention of this paper to examine the ramifications of artificial intelligence (AI) from a Christian perspective for two purposes. First, the program of contemporary AI is broad and far-reaching, having implications for such diverse disciplines as mathematics, computer science, engineering, physics, psychology, sociology, history, ethics and philosophy. Though much has been said about AI from a secular perspective regarding these disciplines, little has been said regarding the theological and spiritual aspects of AI. Thus this paper hopes to generate thoughtful religious comment from Christian scholars over a broad range of other disciplines. Second, from the perspective of Christian education, this paper offers insights and observations for integrating faith and learning. These will be most relevant to appropriate classes in computer science and mathematics[1], though there will be spin-off benefits to the other disciplines as well.

 

Regarding the structure of this paper, it should be noted that AI as a subdiscipline of computer science grew out of the pregnant milieu of late nineteenth and early twentieth century mathematics. It was the aspirations, unfulfilled dreams and discoveries of this generation of mathematicians which led to the development of computational theory, which in turn made possible the quest for machine intelligence. It was also these aspirations which gave AI its unique character and philosophical foundation. Thus, to understand AI within its modern context and to understand its impact regarding spiritual matters, it is important to understand these early developments. Therefore, this paper is structured upon the historical progression of AI; it begins with the development of the foundations of mathematics at the turn of the century, proceeds to the formal theory of computation and ends by exploring the program of AI. Along the way questions, observations and insights of a spiritual nature are offered as appropriate. In addition, since this paper provides insights regarding the integration of faith and learning for computer science and mathematics professors, it also describes the academic context of computer science in general, AI in particular and the relationship of mathematics to AI and computer science.

 

Numerous works have addressed many of the issues raised in this paper from a secular perspective. Several intended for the persistent layman include [BOLT84, FORE90, HOFS80, JOHN85, PENR89, WEIZ76]. Other texts examining these topics in a manner easily accessible to those in mathematically oriented disciplines include [COHE91, MINS67, PINT71]. Finally, several of an advanced nature include [LEWI81, ROGE87].

 

2. Computer Science and AI as an Academic Discipline in a Christian University

 

As an academic discipline, computer science is a latecomer to the multiplicity of academic degrees offered in contemporary colleges and universities. Historically, its theoretical heritage is based upon the long-standing traditions of symbolic logic and mathematics and the relatively more recent developments in electrical engineering. It was, however, the theoretical work of the mathematician Alan Turing in the 1930's and the implementation of his theory by the mathematician John von Neumann in the early 1950's that led to the development of the modern stored-program computer.[2] Though advanced study in the theory of computation and the development of computing devices had been occurring in universities since the time of Turing and von Neumann, it was not until the mid sixties that the first formal degrees in computer science were offered. Today, computing degrees are offered at most colleges and universities in a variety of specialties (e.g., computer science, information systems, software engineering, computer engineering, etc.). These degrees are often professionally oriented, much like engineering or accounting, with varying degrees of rigor and theory and differing cognate requirements in the arts and sciences.

 

AI and its theoretical foundations have long been a prominent subdiscipline within the field of computer science. For instance, the Association for Computing Machinery (ACM) lists AI as an elective course in its recommended undergraduate and master's degree curricula [ACM81]. Moreover, numerous universities granting doctoral degrees in computer science have prominent AI labs and/or students doing advanced work in AI [KURZ90]. In addition to AI courses, the theoretical components of AI (e.g., Turing machines, pushdown automata, lambda calculus, etc.) are typically covered in several other courses from the ACM recommended curriculum [ACM81]. AI is also used significantly in such areas as robotics, expert systems and virtual reality.[3]

 

Computing degrees, which include courses in AI, are offered at most Christian colleges and universities. Thus it is relevant to ask at what points Christian academia can bring spiritual discernment to bear upon computer science. The answer is multifaceted. First, as people we are created in the image of God (Genesis 1:27). Since God is a creative being, it follows that we too are creative beings ordained to express this creativity in a responsible manner. In so far as computer science involves creativity and imagination, like most other academic disciplines, the Christian computer scientist is therefore using his unique talents to express his creative mandate. Second, the Christian ethicist has abundant opportunities and responsibilities to comment on how computers are used in society. This includes issues such as privacy, security, software piracy, military applications, availability of computers to minorities and the developing world, professional standards of conduct and so forth. The third facet involves the intrinsic relevance of computer science to faith. This is perhaps the most difficult facet, because computer science is a product of human development bearing, in most of its aspects, little insight into our humanity. For instance, it is difficult to find meaningful and direct spiritual nuance in compilers, operating systems, word processors, computer architectures and the like. However, AI and its theory are different in character, since many believe that AI offers profound insights into human intelligence [MARX79]. Indeed, it is from this aspect that this paper finds inspiration for many of the faith issues it discusses.

 

3. The Foundations of Mathematics Sow the Seeds of Artificial Intelligence

 

The seeds of AI were sown by the general thrust of mathematics between 1870 and 1930. During this period considerable attention was being given to the foundations of mathematics. The elusive goal of this period was to unify all of mathematics using a small collection of basic principles. In this quest unresolvable logical paradoxes surfaced, ultimately leading to the shocking discovery that mathematics was incomplete; this discovery was the seed of AI. The period is reminiscent of the classicist's dream to find perfection, as they saw it, within this world [BLAM63]. But, like the tower of Babel, man's dream to reach the heavens of mathematical endeavor by his own intellectual prowess was doomed to failure from the beginning by the basic structure of logic itself.

 

The best hope for this goal came from the development of set theory by Georg Cantor and Richard Dedekind. The basic idea was that one could start with the notion of a set and the ability to specify that an object is an element of a set. For example, if x = 3 and S = {1, 2, 3}, then we say that x is an element of the set S. Using these notions of set and element, the natural numbers can then be defined by first specifying that the number 0 is represented by the empty set; i.e., 0 = {}. Next, the number 1 can be represented by the set {0} = {{}}, the number 2 can be represented by the set {0, 1} = {{}, {{}}}, and so on. The natural numbers can then be used to define the set of integers (i.e., positive and negative whole numbers as well as zero), which in turn can be used to define the set of rational numbers (i.e., fractions), which in turn can be used to define the set of real numbers (i.e., all decimal numbers). From here it was hoped that all of mathematics could be defined.
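
For readers who enjoy seeing such constructions made concrete, the following is a minimal sketch in Python; it is an illustration only, and the name succ and the use of Python's frozenset are choices made here, not part of the historical development.

    # The von Neumann construction of the natural numbers from the empty set:
    # 0 = {} and n + 1 = n ∪ {n}.  Python's frozenset is hashable, so one set
    # may appear as an element of another, as the construction requires.

    zero = frozenset()                   # 0 = {}

    def succ(n):
        """Return n ∪ {n}, the set-theoretic successor of n."""
        return frozenset(n) | {n}

    one = succ(zero)                     # {0}       = {{}}
    two = succ(one)                      # {0, 1}    = {{}, {{}}}
    three = succ(two)                    # {0, 1, 2}

    # Each natural number is exactly the set of its predecessors:
    assert len(three) == 3
    assert zero in three and one in three and two in three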

 

The elegance of set theory was very attractive to the mathematicians of this period, but difficulties started showing themselves in this paradise of perfection they were trying to create. The work of Bertrand Russell in 1902 is illustrative in this regard. Russell observed that a set can contain sets; for example, a set of lines is a set where each line is a set of points. Moreover, under Cantor's intuitive notion a set can even contain itself. He then considered the set A, which is the set of all sets that are not elements of themselves. Is it then possible that A is an element of itself? Well, if A is an element of itself, then it is not an element of itself, and if A is not an element of itself, then it is an element of itself! This is known as Russell's paradox [PINT71]. The basic problem was that the intuitive notion of a set used by Cantor was too unrestricted. This and other paradoxes thus forced mathematicians to overhaul set theory. The common alternative to naive set theory used today is a formal theory based on classes developed by von Neumann, in which a class is a restricted form of Cantor's unrestricted notion of a set. Class theory avoids the classical paradoxes of set theory but is less encompassing. For example, there is no formal mathematical means to consider the class of all protons in the universe. Though it is possible to build the whole of modern mathematics from classes and even to use them as a modeling device in the sciences, it is not possible to include the whole of reality under the banner of "pure" mathematics.
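
Stated compactly in modern notation (a restatement of the paradox just described, not an addition to it):

    A \;=\; \{\, x \mid x \notin x \,\}
    \qquad\text{yields at once}\qquad
    A \in A \;\Longleftrightarrow\; A \notin A .

Whichever answer one gives to the question "is A an element of itself?", the definition of A immediately forces the opposite answer.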

 

All of this was very disturbing to the mathematician David Hilbert, who declared that "we will not be expelled from the paradise into which Cantor has led us" [PINT71]. His program to resolve the difficulties of this period, called the Entscheidungsproblem, was perhaps the most ambitious, for he desired to prove that mathematics is consistent (it has no contradictions), complete (every mathematical statement can be proven or disproved) and computable (a mechanical device exists that could in principle automatically determine the truth value of any mathematical statement). In other words, Hilbert desired to prove that mathematics has no contradictions, that all of its problems have solutions and that an algorithm exists to solve all of these problems mechanically. However, given the unnerving discoveries of set theory, it was considered necessary to state the problems and their proofs using strictly formal methods. Such formal methods would replace the need for human insight and judgment regarding the validity of proofs with a mechanical means for accomplishing the same task[4] [PENR89]. Thus it was believed that pure mathematics could once again be placed upon an unassailable pedestal.

 

In spite of his optimism, Hilbert's ambitions were ultimately shattered in 1931 by the mathematician Kurt Godel. In his famous and shocking incompleteness theorem, Godel proved that mathematics based upon formal methods cannot be both complete and consistent. In other words, if all mathematical problems have solutions, then there necessarily exist mathematical statements which are simultaneously true and false; and if there are no simultaneously true and false mathematical statements, then there necessarily exist mathematical problems which have no solution [COHE91]. Today, given the necessity for consistency in mathematics, most people assume (hope?) that formal mathematics is incomplete but consistent.

 

The consequences of Godel's incompleteness theorem and the paradoxes of set theory provide fascinating insights into the nature of God's created order. To begin with, earlier generations of mathematicians and philosophers would have found it unthinkable that mathematics could harbor paradoxes. When mathematics was restricted to eliminate the paradoxes, it was a devastating blow to learn that it was incomplete.[5] However, these "inadequacies" arise out of the common everyday logic that we use in our daily lives. If one accepts that such logic emanates from the very construction of the human brain that God has created (i.e., the logic we as people use was created by God, like the air we breathe), then are we to assume that God has given us imperfect logic? Certainly not, for everything God has created "was very good" (Gen. 1:31). The fallacy lies in our notions of perfection and how we use our logic. Logic restricted to formal systems has allowed us to see things that thinkers in past generations had not even the slightest hope of resolving. But logic within formal systems is incomplete; not even God can ascertain truth-values of the unprovable statements under these circumstances. However, there are other ways of determining the truth-value of some of these unprovable statements, using a combination of logic, insight and judgment [PENR89]. In other words, the limitations that God has created in human logic are no excuse for careless thinking!

 

4. From Incompleteness to The Formal Theory of Computation

 

At the core of incompleteness lies the use of formal methods to establish truth-values for mathematical statements. It was these formal methods that led the mathematicians of the 1930's to the development of computational theory. At the heart of this theory is the mechanical procedure for proving mathematical statements; this procedure is called an algorithm.[6] It is the algorithm that lies at the core of AI.

 

As pointed out above, Hilbert's Entscheidungsproblem involved three things: consistency, completeness and computability. Godel resolved the consistency and completeness issues but had not resolved the computability issue. However, the options for computability were significantly narrowed after Godel made his discovery; since formal mathematics was incomplete, all that remained to be done was to find a mechanical means to decide whether a mathematical statement could be proven or disproved, for there was no point in trying to mechanically solve an unsolvable problem. In 1936 three seminal papers were independently published providing complete and equivalent solutions to Hilbert's problem [COHE91]. Like Godel, their authors proved that mathematics could not be both consistent and complete. Moreover, they defined precisely the notion of an algorithm and used it to prove that mathematics was not computable; i.e., it was not even possible to decide mechanically ahead of time whether a mathematical statement could be solved! Since the paper published by Alan Turing [TURI36] has been the most influential of the three in computer science, his work is presented below.

 

Turing's model of computation, called a Turing machine (TM), is a simple abstract computing device. It consists of an infinitely long tape divided into blocks and a marker that points to these blocks; the marker can move forward and backward and can read, write and erase the blocks on the tape. The marker works with the two symbols "0" and "1". A finite number of blocks on the tape initially contain a string of 0's and 1's; all other blocks are blank. In addition to this structure there is also a finite number of states and a finite number of instructions to direct the action of the TM. The TM works by always knowing its current state and where the marker is pointing [MINS67]. Note that if the initial string is acceptable, then the TM will naturally terminate, ending in a "halt" state. It is this logical device, the TM, that is formally the definition of an algorithm.[7] An example of a TM is given in Figure 1, which shows three things: a tape containing the string "1110010" with its marker, a sample instruction set (i.e., the TM's program) and a step-by-step sample execution of that program [BOLT84].

 

A TM tape containing the string 1110010, with the marker pointing to the leftmost block (all blocks to the right of the string are blank):

    | 1 | 1 | 1 | 0 | 0 | 1 | 0 |   |   | . . .
      ^
    marker

Instruction Set

  1. If the current state is Q1 and the current symbol is "1", then
     a) Write the symbol "1"
     b) Move the marker to the right one square
     c) Change the state to Q2

  2. If the current state is Q2 and the current symbol is blank, then
     a) Write the symbol "0"
     b) Move the marker to the right one square
     c) Change the state to halt

Sample execution using the previous instruction set, starting with a tape containing the single symbol "1":

  State = Q1, symbol = 1          | 1 |   |   | . . .    (marker on the "1")
  State = Q2, symbol = blank      | 1 |   |   | . . .    (marker on the first blank block)
  State = halt                    | 1 | 0 |   | . . .    (marker one block past the "0")

Figure 1. Example of a Turing machine.
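
For readers who wish to experiment, the following is a minimal sketch in Python of a simulator for the machine of Figure 1; the rule table, the dictionary representation of the tape and the name run_tm are illustrative choices made here, not part of Turing's formalism.

    # A minimal simulator for the two-instruction Turing machine of Figure 1.
    # The tape is a dictionary mapping block positions to symbols; positions
    # that have never been written are treated as blank (None).

    # Rules: (state, symbol) -> (symbol to write, marker movement, next state)
    rules = {
        ("Q1", "1"):  ("1", +1, "Q2"),    # instruction 1 of Figure 1
        ("Q2", None): ("0", +1, "halt"),  # instruction 2 of Figure 1
    }

    def run_tm(rules, tape_string, start_state="Q1"):
        tape = {i: s for i, s in enumerate(tape_string)}
        marker, state = 0, start_state
        while state != "halt":
            symbol = tape.get(marker)               # None means a blank block
            if (state, symbol) not in rules:
                raise ValueError("no rule applies; this input is not acceptable")
            write, move, state = rules[(state, symbol)]
            tape[marker] = write
            marker += move
        # Render the tape from its leftmost to its rightmost used block.
        return "".join(tape.get(i) or "_" for i in range(min(tape), max(tape) + 1))

    print(run_tm(rules, "1"))    # prints "10", matching the sample execution above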

 

 

Turing used this model to solve Hilbert's problem. First consider the computability issue. Turing asked: given an arbitrary string of 0's and 1's (e.g., 1110010) and an arbitrary TM, does there exist another Turing machine, called a universal TM (UTM), which can decide whether the given TM halts on this string? The answer is no; informally, the UTM would have to run forever in order to confirm that the given TM runs forever! This problem, known as the halting problem, is only one example of a problem whose answer cannot be determined mechanically ahead of time[8] [LEWI81]. To answer the consistency and completeness issues Turing developed the notion of recursive enumerability [COHE91]. Since his solution to this problem is equivalent to Godel's results, it will not be developed further in this paper.
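
The standard form of Turing's argument is a proof by contradiction, and the following Python sketch conveys its flavor; the function names halts and contrary are hypothetical, invented here for illustration, and the whole point of the argument is that no real halts can ever be written.

    # Suppose, for the sake of contradiction, that a universal procedure
    # halts(program, data) existed which always answers correctly whether
    # `program` eventually halts when run on `data`.
    def halts(program, data):
        ...   # assumed to exist; Turing showed that it cannot

    # Then the following program could also be written:
    def contrary(program):
        if halts(program, program):    # if `program` halts when fed itself ...
            while True:                # ... then loop forever;
                pass
        return                         # ... otherwise halt immediately.

    # Now ask whether contrary(contrary) halts.  If it halts, then by its own
    # definition it loops forever; if it loops forever, then by its own
    # definition it halts.  Either answer contradicts the assumed correctness
    # of halts, so no such universal decision procedure can exist.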

 

One of the remarkable qualities of the TM is its simplicity. In spite of this simplicity, however, it is believed to be the most powerful form of mechanical computation known to man; yet, as demonstrated above, it cannot solve all problems given to it [PENR89]. Though this assertion regarding a TM's power cannot be formally proven, no mechanical model of computation has been discovered which is more powerful [MINS67]. Since the publication of this model, numerous other models of computation have been proposed. These include the lambda calculus, the Post machine, the pushdown automaton, the context-free grammar and the regular expression, none of which is more powerful. These and other models were ordered into a four-level hierarchy by the linguist Noam Chomsky in 1959 [COHE91], with the TM in the top level.
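
A small illustration in Python may make the hierarchy more concrete; Python's re module stands in here for a formal regular expression, and the function name zeros_then_ones is chosen for illustration only.

    import re

    # A regular expression (the weakest level of the hierarchy) recognizes
    # strings of the form (01)* with no memory at all:
    print(bool(re.fullmatch(r"(01)*", "010101")))    # True
    print(bool(re.fullmatch(r"(01)*", "0110")))      # False

    # But no regular expression, in the formal sense, recognizes the language
    # { 0^n 1^n : n >= 0 }; a machine with memory (a pushdown automaton, or of
    # course a TM) is required.  A TM-style check is easy to program:
    def zeros_then_ones(s):
        n = len(s) // 2
        return s == "0" * n + "1" * n

    print(zeros_then_ones("000111"))   # True
    print(zeros_then_ones("0101"))     # False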

 

Ten years after Turing first published his results, John von Neumann and several colleagues began working to convert Turing's abstract model of computation into an actual computer [BOLT84]. By 1951 the UNIVAC had become the first computer to embody all of the logical components of the Turing machine and the functional equivalents of modern computers [HAYE88]. Therefore, it must be borne in mind that any comments made regarding Turing machines apply equally in principle[9] to a modern computer, no matter how simple or complex it is.

 

To summarize, Turing's legacy was threefold. First, he solved Hilbert's Entscheidungsproblem. Second, in solving this problem he developed a model of computation that precisely defines what an algorithm is and that led to the development of modern computers. Third, his solution to this problem demonstrated that computers are not omniscient, even if given infinite resources!

 

The theories developed by Godel [ROGE87] and Turing [LEWI81], only sketched above, require an arduous development to present in their full rigor.[10] To provide the mathematically inclined reader with a more demonstrative understanding of a computer's limitations, an alternative argument is given below; a compact sketch of its diagonal construction appears at the close of this section. Recall that there are at least two different sizes of infinity; a countable set is an infinite set whose members are in a one-to-one correspondence with the integers, and an uncountable set is one that is neither finite nor countable. For instance, the set of all rational numbers over [0, 1] is countable with measure 0, but the set of all real numbers over [0, 1] is uncountable with measure 1 [RUDI76]. It can be shown using a diagonalization technique [WOOD87] that the number of algorithms is countable and that the number of integer functions is uncountable. Thus there exists an uncountable number of integer functions which cannot be computed. For most mathematicians of this period, the results of Godel's incompleteness theorem and Turing's theory were very distressing, for they had been unsuccessful in solving all problems within mathematics. However, if Hilbert's program had been successful, then mathematics would have ceased to be a viable field for intellectual investigation. That would have been unfortunate. To the author, the uncertainty provided by this theory is a source of good news, because there remain unlimited opportunities by which to fulfill one aspect of God's creative mandate (Genesis 1:27) for humankind. Schumacher's words are relevant here.

 

When the Lord created the world and people to live in it ... I could well imagine that He reasoned with himself as follows: "If I make everything predictable, these human beings, whom I have endowed with pretty good brains, will undoubtedly learn to predict everything, and they will thereupon have no motive to do anything at all, because they will recognize that the future is totally determined and cannot be influenced by any human action. On the other hand, if I make everything unpredictable, they will thereupon have no motive to do anything at all. Neither scheme would make sense. I must therefore create a mixture of the two. Let some things be predictable and let others be unpredictable. They will then, amongst other things, have the very important task of finding out which is which" [SCHU75].

 

In God's ideal for creation, He apparently did not plan for a closed universe. His understanding of perfection is very different from that of fallen humanity (I Cor. 2:9). In most disciplines today, rationalism's quest for scientific understanding, with its clean, well-defined boundaries, simple theoretical bases and rigid metaphysics, seems to be showing shortcomings in its utility [SMIT82]. As this paper has discussed, even mathematics and computer science are not impervious to the changing nature of post-modern epistemology. Though they will continue to be more dependent upon classical forms of epistemology than the humanities, the answers they provide are nonetheless incomplete!
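
As promised above, here is a compact version of the counting argument; it is a sketch only, and it assumes the standard fact that the algorithms can be listed A_0, A_1, A_2, ... because each algorithm is a finite string of symbols over a finite alphabet, and hence the algorithms form a countable set. Define the "diagonal" function

    f(n) \;=\;
      \begin{cases}
        A_n(n) + 1, & \text{if algorithm } A_n \text{ halts on input } n,\\
        0,          & \text{otherwise.}
      \end{cases}

For every n the total function f disagrees at the argument n with the (possibly partial) function computed by A_n, so f is an integer function that no algorithm computes. Since only countably many functions can ever be computed, while the integer functions form an uncountable set, "most" integer functions are uncomputable in exactly this sense.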

 

5. From Formal Theory to Artificial Intelligence

When mathematicians tried to "fix" the logical paradoxes in the theory of sets between 1870 and 1930, they discovered to their dismay that mathematics was incomplete. Out of this investigation, however, emerged the formal theory of computation in the mid 1930's. This theory led to a framework in which to model intelligence and to the invention of the modern computer, by which, many believed, intelligence could be programmed.

 

Defining AI[11] is a difficult task since there is no uniform definition for it. The definitions cited by various specialists typically represent their particular research interests. Here are several representative definitions found in [KURZ90].

 

AI is the attempt to answer the question ... how does the human brain give rise to thoughts, feelings, and consciousness.

 

AI is the study of computer problems that have not been solved.

 

AI is the art of creating machines that perform functions that require intelligence when performed by people.

The second definition has been called the "moving frontier" definition. Once a problem has been reasonably solved (e.g., chess), removing its mystique, it seems ordinary and researchers move on to new fields of endeavor. The third definition, which is a standard textbook definition, is circular; it never defines intelligence directly but merely implies that AI is the study of emulating human intelligence. In this respect the first and third definitions converge. However, the third definition goes beyond the philosophical and psychological aspects of AI to include such practical problems as pattern recognition and expert systems, where intelligence is merely a metaphor for the design of heuristic algorithms.

 

It is as the research in AI moves closer to the human, philosophical and psychological aspects of the field that the debate becomes more impassioned and more clearly involved with metaphysical and theological issues. Yet, with its theoretical base in the mathematical theories of formal computation, AI's philosophical roots clearly lie in logical positivism [KURZ90, SMIT82]. Are the philosophical aspects of AI, then, an oxymoron? Perhaps it is in AI that the threat people feel from machines becomes the most acute.[12]

 

In trying to emulate intelligence with a TM, and hence a computer, the question arises as to just how powerful a TM is. Earlier it was stated that a TM is the most powerful form of mechanical computation known to man; i.e., any procedure which can be mechanically performed by a human can be executed by a TM. This is known as Church's thesis [COHE91]. But the question can be asked, is there anything else? Can all of human intelligence be performed by equivalent mechanical procedures? To this Douglas Hofstadter asks rhetorically:

 

Here one runs up against a seeming paradox. Computers by their very nature are the most inflexible, desireless, rule-following of beasts. Fast though they may be, they are nonetheless the epitome of unconsciousness. How, then, can intelligent behavior be programmed? Isn't this the most blatant of contradictions in terms? [HOFS80]

 

Though proponents on the humanistic side of AI, like Hofstadter, assert that it is not a contradiction, they do not believe people always resort to formal methods in their thinking; indeed, people use insight, intuition and other such things. However, these proponents believe that all intelligence can be modeled equivalently by mechanical algorithms, thus equating computers and people in the top level of the Chomsky hierarchy. Hence they believe that in the next century computers will exist that are the functional equivalents of human beings [KURZ90]. It is at this point that one moves from mere computational theory and useful programming paradigms to the world of strong AI [PENR89].

 

The implications of these observations are significant for Christians. For instance, if the contentions of strong AI are true, then computers can be programmed to feel, love and have faith in God. Does this imply, then, that man potentially has the power to create morally responsible agents, or that feeling, love and faith are merely convenient metaphors describing complex human behavior? Since the theoretical models of AI are deterministic in nature, are human feelings of free choice merely an illusion? Given AI's roots in logical positivism, and if one accepts Huston Smith's maxim that "an epistemology that aims relentlessly at control rules out the possibility of transcendence in principle" [SMIT82], then is AI necessarily at odds with Christian faith? These and other questions are explored below.

 

6. The Biblical vs. Strong AI View of Humanity

 

The Psalmist asks, "what is man?" (Ps. 8:4). This question is the subject of numerous volumes in theology, and this paper cannot hope to do justice to it. However, since one of the most distressing aspects of AI is its claim that it will duplicate man in the near future, an attempt is made to resolve this issue by comparing the Biblical and AI views of our humanity.

 

What is man? He is a being who is "a little lower than God" (Ps. 8:5), who rules "over the works of" God (Ps 8:6), who is "fearfully and wonderfully made" (Ps 139:14) and who is made "in the image of God" (Gen 1:27). Man is a being who is spiritual (Gen 2:8, 1 Cor. 2:14-16), intellectual (Isa. 1:18, 1 Pe 3:15), creative (Ex. 31:1-5, Ps 100:2), social (Gen 2:18), affectionate (Ecc 3:5) and sexual (Gen 4:1, SS 4:16-5:1). God has given man freedom of choice (Deut. 30:19, Jos 24:15, John 7:17), but man's freedom is not absolute (Rom 6:23). God has made man to be a loving creature (Mt 22:37-39), but he also has the capacity to hate (Ecc 3:8). Through man's choice he has fallen (Rom 5:12, 17), but God has chosen to send us His Son (John 3:16, Phil 2:6-11) to restore us into His image (John 15:16) provided we consent[13] (Jn 14:15). Moreover, God holds man accountable for his choice in the judgment (Ecc. 12:13-14). Owen Hughes organizes many of these and other aspects of human personality into a Christian model; it is an eight-level hierarchy based upon the idea that man is created in the image of God. The levels comprising this hierarchy are physical attributes, self-awareness, rational thought, feelings, free choice, freedom to act, creativity and relationship [HUGH88].

 

In contrast to this picture of humanity, the logical positivism of AI asserts that the brain is a machine [MINS85] creating a mind that is equivalent to the mathematical formalism of a TM[14] [HOFS80]. At first glance this biblical view of humankind seems at odds with the TM model of mind. Though this gut-level response is warranted, the conflict is not immediately obvious, for the proponents of strong AI would argue that these lofty ideals can either be programmed or are illusions.

 

Consider the perceived human notion of free will. As stated earlier, the proponents of strong AI equate the mind with a TM, thus placing people at the top of the Chomsky hierarchy. At this level it can be proven that determinism and nondeterminism are equivalent; this is done by proving that a deterministic TM and a nondeterministic TM are equivalent in power [LEWI81]. Thus Hofstadter can account for the "feeling" of free will in the human mind as follows:

 

It is irrelevant whether the system is running deterministically; what makes us call it a "choice maker" is whether we can identify with a high-level description of the process which takes place when the program runs. On a low ... level, the program looks like any other program; on a high ... level, qualities such as "will", "intuition", "creativity", and "consciousness" can emerge. [HOFS80]

 

Thus it is argued that at the low level of neurophysiology, deterministic choices are made in the brain, as in a TM, while at the high level of consciousness people merely perceive free will. At this point man becomes an automaton.[15] This runs counter to traditional Adventist teachings [WHIT58]. It also runs counter to the scriptures, which assert that man must choose whom he is to follow (Jos. 24:15). Since man is to be held accountable for this choice in the judgment (Ecc. 12:13-14), fairness dictates that he must have a reasonable degree of choice over his destiny. Even in Luther's understanding of grace, which tends toward determinism [PELI84, MARX79], one must choose to accept or reject God's gift. Finally, love is at the center of God's ideals for man (Mt. 22:37-39), and since free choice is "the infrastructure of love" [BERK79], man must be free in order to be able to love God, even if marred by the results of sin! If one accepts these observations, then theologically the equivalence between the human mind and the TM must be rejected on the basis of free choice.[16]

 

7. Other Aspects of Computerized Intelligence

 

In the previous section it was argued that the modeling of human intelligence is unlikely, given Biblical perspectives on free choice and love's dependence upon it. However, is it possible, or even desirable, to model some of the other aspects of our humanity mentioned above? Moreover, if the previous arguments are accepted, then what can be expected from AI research?

 

One of the major dilemmas in dealing with the human side of AI is to define exactly what intelligence is. None of the definitions given above spells out exactly what intelligence is. So how would researchers impartially recognize a truly intelligent computer if they saw one? Alan Turing attempted to answer this question in 1950 with an operational view of AI using what is now called the Turing Test [TURI50, PENR89]. The test works as follows. A computer, which is claimed to be intelligent, and a person are hidden from the view of a panel of judges. The judges must, by interviewing the computer and the person through a keyboard and monitor,[17] determine which is the computer and which is the person. Suppose that a judge asks the respondents to factor a 30-digit integer; it would be a quick matter for the computer to answer the question but quite tedious for the person. Thus it would be necessary to program the computer to slow down on mathematical responses and even to make mistakes. It would also be necessary to program the computer to get angry, lie and cheat, as well as to emulate the more noble aspects of humanity such as appreciating the aesthetic appeal of a musical composition, catching the humor in a joke or understanding the spiritual domain of faith.

 

Two observations are in order. First, suppose it were possible for a computer to appear genuinely human intellectually (i.e., to think and feel). Does this necessarily imply a real aspect of our humanity, or even awareness? This, of course, is an issue of significant debate within the cognitive sciences, where, for example, primates are taught sign language [VESS85]. To the proponents of strong AI, it is only the TM that matters; if the computer acts intelligent by means of a computer program, then it is intelligent. To others, if there is no semantic insight, then there is no intelligence; "acting, no matter how skilful, is not the real thing" [PENR89].

 

Second, is it desirable to create a "machine" identical in every respect to our humanity?[18] From a practical perspective the answer is probably no.[19] If researchers are trying to create a beast of burden, it makes no sense to program it to make arithmetic errors, to get angry or to lie. It would be ethically reprehensible to create a machine which could form relationships, only to barter it like a slave. Moreover, it would be cruel to program a computer with a sense for anticipating the future, only to have it "dismantled" once it became obsolete and its software no longer transferable to a new platform. If strong AI were possible, responsible researchers would likely create a machine manifesting an alien intelligence which is understandable and submissive to people in an ethically acceptable manner, much like the robots in science fiction movies. Such machines would perform useful tasks and would interact with people in a socially acceptable manner. They would be beasts of burden smarter than an ox. Clearly it is not necessary to make these machines out of flesh and blood.

 

This vision of practical AI is based upon the assumption that a genuine intelligence compatible with human intelligence is possible at the TM level of the Chomsky hierarchy. However, even if one accepts the theological arguments offered above suggesting that strong AI is unlikely, this vision for AI is still reasonable in some respects. Rather than creating truly intelligent machines, heuristic programs could be designed emulating those aspects of the human mind accessible to a TM. In other words, intelligence becomes a practical metaphor used in the design of algorithms. The less accessible aspects of intelligence such as free will, spiritual vitality and so forth would not, could not, be programmed. However, it would be possible to create a robot manifesting the appearance of intelligence in some respects. Perhaps it could recognize speech and be given vision. Perhaps it could be given a socially pleasing and accessible interface, making human-machine interaction more palatable. This approach would also remove some of the vexing ethical dilemmas mentioned above.

 

When considering the spiritual implications of AI for our humanity, it must be remembered that its philosophical roots lie in logical positivism and scientism. Thus the existence of transcendence is typically dismissed outright, and many of these questions become meaningless. However, even in this case it must be remembered that there exist persuasive metaphysical arguments suggesting that intelligence is not computable [PENR89].

 

8. Christian Involvements with AI

 

Is it proper for Christians to be involved in AI? Should Christian colleges teach courses in AI? As society devotes increasing resources to this enterprise, more and more Christians will be confronted with its impact and will have to answer these questions. Several answers are offered below.

 

First, it is helpful to recognize the difference between the human, philosophical and psychological aspects of AI and its more practical components. As a software technology, AI is being applied to pattern recognition, robotics, user interfaces, expert systems and the like. These technologies are not intrinsically antithetical to Christian teaching; they are merely amoral tools whose use determines their ethical implications. In this regard the Christian, like many others, has exciting career opportunities with the potential for providing an important and useful service to society. Moreover, if strong AI is not possible, then there is nothing to fear, since truly intelligent machines will never be created.

 

Second, it should be recognized that the goals of strong AI cannot be ruled out categorically. The theological arguments given above reflect the author's current understanding of scripture, so one must always recognize, in a spirit of intellectual humility, that these arguments may be incomplete or even wrong. Though science is incapable of discovering the totality of truth [SMIT82], it is nonetheless very concrete in many of its experimental discoveries. So if intelligent machines were someday created, how should the Christian respond?[20] On the other hand, if the eventuality of strong AI is impossible, as argued above, it still must be recognized that its philosophy has had a major impact upon society [MARX79]. In either case Christian scholars, teachers and laymen should be prepared with well-reasoned responses to the challenges and opportunities in the field of AI (I Pe. 3:15).

 

9. Conclusion

Within this context of our humanity, the scientism and logical positivism of AI, which seek mechanical explanations to account for the phenomenon of mind, have overstepped their bounds[21] [SMIT82]. While the author endorses the useful software technologies emerging from the study of AI, he agrees with Joseph Weizenbaum, who states:

 

We are capable of listening with the third ear, of sensing living truth[22] that is truth beyond any standards of provability. It is that kind of understanding, and that kind of intelligence that is derived from it, which I claim is beyond the abilities of computers to simulate. [WEIZ76]

 

Many people feel threatened by AI's encroachment on their humanity. In an age when we are reduced to numbers and bullied around by computers, is not the ultimate threat of modernity to make machines our equal? Regardless of the successes or failures of AI, we must remember that we are "fearfully and wonderfully made" (Ps 139:14) by God, that He sent His Son to redeem us (I Jn. 2:1-2) and that we are welcome before His throne (Heb. 4:16). Nothing can separate us from the love of God (Rom 8:38-39). If nothing else, this sets us apart from machines.

 

BIBLIOGRAPHY

 

[ACM81] Committee on Computer Curricula of the ACM Education Board. ACM Recommended Curricula for Computer Science and Information Processing Programs in Colleges and Universities, 1968-1981, Association for Computing Machinery, New York, NY., 1981.

 

[BLAM63] Blamires, H. The Christian Mind, Servant Books, Ann Arbor, MI., 1963.

 

[BOLT84] Bolter, J.D. Turing's Man: Western Culture in the Computer Age, Univ. of North Carolina Press, Chapel Hill, NC., 1984.

 

[COHE91] Cohen, D.I.A. Introduction to Computer Theory, rev. ed., John Wiley & Sons, Inc., New York, NY., 1991.

 

[FORE90] Forester, T., Morrison, P. Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, The MIT Press, Cambridge, MA., 1990.

 

[HAYE88] Hayes, J.P. Computer Architecture and Organization, McGraw-Hill Book Co., New York, NY., 1988.

 

[HOFS80] Hofstadter, D.R. Godel, Escher, Bach: an Eternal Golden Braid, Vintage Books, New York, NY., 1980.

 

[HUGH88] Hughes, O.L. Created in the image of God: a Christian view of human personality. In Christ in the Classroom, vol. 2, (ed. H. Rasi), Institute for Christian Teaching, Silver Spring, MD., 023-88 (Aug. 1988).

 

[JOHN85] Johnson, D.G., Snapper, J.W. Ethical Issues in the Use of Computers, Wadsworth Pub. Co., Belmont, CA., 1985.

 

[JUMP69] Jump, J.D. Marlowe: Doctor Faustus, a Casebook, Macmillan Pub. Co., London, England, 1969.

 

[MARX79] Marx, M.H., Hillix, W.A. Systems and Theories in Psychology, 3rd ed., McGraw-Hill Book Co., New York, NY., 1979.

 

[MINS67] Minsky, M. Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ., 1967.

 

[MINS85] Minsky, M. The Society of Mind, Simon and Schuster, New York, NY., 1985.

 

[LEWI81] Lewis, H.R., Papadimitriou, C.H. Elements of the Theory of Computation, Prentice-Hall, Englewood Cliffs, NJ., 1981.

 

[PELI84] Pelikan, J. The Christian Tradition, Vol. 4: Reformation of Church and Dogma, The Univ. of Chicago Press, Chicago, IL., 1984.

 

[PENR89] Penrose, R. The Emperor's New Mind, Penguin Books, New York, NY., 1989.

 

[PINT71] Pinter, C. Set Theory, Addison-Wesley Pub. Co., Reading, MA., 1971.

 

[RUDI76] Rudin, W. Principles of Mathematical Analysis, McGraw-Hill Book Co., New York, NY., 1976.

 

[SCHU75] Schumacher, E.F. Small is Beautiful: A Study of Economics as if People Mattered, Sphere Books, Ltd., London, England, 1975.

 

[SMIT82] Smith, H. Beyond the Post Modern Mind, Crossroads Press, New York, NY., 1982.

 

[TURI36] Turing, A.M. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., ser. 2, 42 (1936), 230-265.

 

[TURI50] Turing, A.M. Computing Machinery and Intelligence. Mind, 59, n.s. 236 (1950), 433-460.

 

[VESS85] Vessels, J., Cohen, R.H. Koko's Kitten. National Geographic, 167, 1 (Jan. 1985), 110-113.

 

[WEIZ76] Weizenbaum, J. Computer Power and Human Reason: from Judgement to Calculation, W.H. Freeman and Co., San Francisco, CA., 1976.

 

[WHIT58] White, E.G. Patriarchs and Prophets, Pacific Press Pub. Assoc., Mountain View, CA., 1958.

 

[WOOD87] Wood, D. Theory of Computation, John Wiley & Sons, Inc., New York, NY., 1987.



[1] There exists a common thread between mathematics and computer science; it lies in the historical development of computational theory from work on the foundations of mathematics at the end of the nineteenth century and on the Entscheidungsproblem (i.e., Hilbert's decision problem) [COHE91, PINT71, PENR89]. This work has led to the quest for intelligent machines. Since this paper is structured around this historical development, the author felt that both Christian computer science and mathematics professors could benefit from this discussion. Hence its comments on the integration of faith and learning are presented in terms of a dual thrust.

 

[2] The computer developed by von Neumann was known as the UNIVAC. There were, however, other more primitive computing devices that had been designed earlier which incorporated a logical subset of the components in the UNIVAC [HAYE88].

 

[3] It is believed that Christian professors teaching subjects mentioned in this paragraph will find the content of this paper relevant to integrating faith with their course.

[4] Formal methods rely upon the syntax of mathematical statements and precisely stated rules of inference for manipulating these statements to derive their truth values without resorting to semantically based insight. However, when mathematicians work on problems they seldom use overtly formal methods because they are exceedingly tedious. But a mathematical formalist will assert that informal methods relying on insight and judgement are equivalent to formal methods.

 

[5] For many of the people whose entire lives were devoted to mathematics, the emotional stress of their disappointment can be compared somewhat to the great disappointment of the Millerite movement.

[6] Terms such as algorithm [LEWI81], effective procedure [MINS67] and recursive function [ROGE87] are used more or less synonymously.

[7] Within the context of this paper, a computer with a program can be considered equivalent to an algorithm.

[8] The reader should not underestimate the significance of this result; though the halting problem seems contrived it has far reaching consequences.

[9] Computational efficiency is not the issue in this paper; rather, it is theoretical possibility. Thus to say a TM is equivalent to a modern computer is to say that it is logically equivalent and that, given adequate time and resources, both are capable of solving the same problems. In practice a computer is much more efficient than a TM.

[10] One account of this theory readily accessible to the layman is [PENR89]. In this fascinating book the highlights of Turing theory are presented with generous explanations, resorting only to minimal amounts of mathematical notation.

[11] John McCarthy is credited with coining the name "artificial intelligence" in 1956. Another once-popular name, which has since faded, was "cybernetics" [KURZ90].

[12] See [WEIZ76] for an articulate, impassioned criticism of AI's perceived encroachment upon human reason.

[13] Love implies consent.

[14] Douglas Hofstadter provides an example of the claimed equivalence between computers and mind in a dialogue entitled "A Conversation with Einstein's Brain" [HOFS80]. Here he conjectures the existence of a book containing the workings of Einstein's brain. To ask Einstein a question one merely needs to read the book. To the proponents of strong AI, this book, like a TM, would have insight and awareness; it would be Einstein. See also [PENR89].

[15] In other words, they seem to believe that the purely rudimentary mathematical operations of a computer can be used to emulate the complex psychological operations of the brain, thereby making mathematics the most universally applicable epistemology known to man!

[16] Roger Penrose gives an articulate, metaphysical argument for the belief that the human mind is not equivalent to a TM. Unfortunately, he believes the mind is, nonetheless, deterministic [PENR89].

[17] At this point in history it is not demanded that the computer look and tactilely feel like a person.

[18] Accepting a Christian world view, it is highly unlikely that man could create a "machine" manifesting the ideal of humanity, given his corrupt nature.

[19] Researchers trying to understand human intelligence may answer this question affirmatively, but many of the ethical issues associated with such experiments seem to make it unwise.

[20] Recall such things as the Great Disappointment of the Millerite movement, or statements asserting that man would never land on the moon because he was sinful and the moon had known no sin. If we base our beliefs on ephemeral desires and fears of the unknown, they are likely to be shattered, resulting in a crisis of faith. Sound thinking, intellectual fortitude and a balanced regard for tradition are necessary components of a strong faith (Isa 1:18, 8:20).

[21] Are the gurus of AI guilty of the sin of Christopher Marlowe's Dr. Faustus? [JUMP69]

[22] Emphasis added. The author is not aware of Weizenbaum's religious commitments, but the reference to "living truth" clearly strikes a resonant chord within the Christian's heart.