A Review and Proposed Framework for Artificial General Intelligence

Long, Lyle N. and Cotner, C., "A Review of and Proposed Framework for Artificial General Intelligence," presented at IEEE Aerospace Conference, Mar. 2019.
Lyle N. Long
The Pennsylvania State Univ.
233 Hammond Building
University Park, PA 16802
814-865-1172
lyle.long@gmail.com

Carl F. Cotner
Applied Research Lab
The Pennsylvania State Univ.
University Park, PA 16802
814-865-7299
carl.cotner@psu.edu

978-1-5386-6854-2/19/$31.00 ©2019 IEEE

Abstract— This paper will review the current status of artificial general intelligence (AGI) and describe a general framework for developing these systems for autonomous systems. While AI has been around for about 70 years, almost everything that has been done to date has been what is called "narrow" AI. That is, the theory and applications have mostly been focused on specific applications, not "strong" or general-purpose AI (or Artificial General Intelligence, AGI). As stated in [5], Deep Blue was great at playing chess but could not play checkers. The typical AI of today is basically a collection of well-known algorithms and techniques that are each good for certain applications.

TABLE OF CONTENTS

1. INTRODUCTION
2. ELEMENTS OF ARTIFICIAL INTELLIGENCE
3. PREVIOUS AGI WORK
4. PROPOSED AGI FRAMEWORK
6. CONCLUSIONS
REFERENCES
BIOGRAPHIES

1. INTRODUCTION

There is enormous interest in unmanned autonomous systems, but these systems need significant onboard intelligence to be effective. This requires powerful computers [12], powerful algorithms, complex software, lots of training, and lots of data, all of which now exist to some extent. Also, it is not feasible to hand-code software for truly intelligent autonomous systems. Even the software used on manned aircraft can cost hundreds of millions of dollars and can run to tens of millions of lines of code. There is an enormous need for artificial general intelligence for unmanned autonomous systems.

Machine learning will be essential to making these systems effective. The systems will need to learn much as an infant learns, by exploring their environment. Humans get their training data by exploring their environment and using all their sensors, in addition to being taught by other humans. Humans can also learn by doing thought experiments. One aspect of this scenario that scares some people is that once these systems reach human-level (or beyond) intelligence, we will be able to make copies of them easily, whereas each human requires about 20 years to train.

Humans are often thought of as being the pinnacle of intelligence, but machines will eventually surpass humans. The processing power and memory of supercomputers already exceed those of humans, and human memory is really very poor compared to computers. Eye-witness accounts are known to be unreliable, and the Miller number demonstrates how little can be stored in short-term memory.

Dictionary definitions of intelligence are often roughly: "the capacity to acquire and apply knowledge." But in [2] a group of researchers defined it as:

"A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do."

While artificial intelligence (AI) has been around for about 70 years, almost everything that has been done to date has been what is called "narrow" AI. That is, the theory and applications have mostly been focused on specific applications, not "strong" or general-purpose AI (or Artificial General Intelligence, AGI). And the systems have been very far from having the intelligence capabilities described above. As stated in [5], Deep Blue was great at playing chess but could not play checkers. The typical AI of today is basically a collection of well-known algorithms and techniques that are each good for certain applications. A few of these will be discussed in this paper. None of these algorithms alone is currently capable of AGI. Domingos [7] claims that no one knows the ultimate master algorithm for AI, but the current algorithms described herein could be the master algorithm. There are pure AGI theories (e.g. AIXI [27], Gödel machines [30], etc.), but these are not practical on even the largest supercomputers. They are essentially
theoretical or mathematical approaches, but they are very useful for guidance in algorithm development.

This paper will review the current status of artificial general intelligence (AGI) and describe a general framework for developing these systems for autonomous systems. The framework builds upon neural networks [36] and Monte Carlo tree search [38]. It is reminiscent of reinforcement learning (RL) [13] but does not use any of the standard RL approaches. This system would continually build models of the world, and would take actions both in its mind and in the real world. To have a truly intelligent autonomous system it would need a wide variety of multi-modal sensor inputs, all using neural networks for processing. Humans have an enormous amount of sensory input from touch, hearing, vision, taste, and smell. Table 1 shows approximate human sensor input data.

Table 1. Human Sensory Inputs

  Sensor Modality    Approximate Size
  Vision             120 million rods/cones
  Olfaction          10 million receptor neurons
  Touch              100,000 neurons in hand
  Hearing            15,500 hair cells
  Taste              10,000 taste buds

A portion of the proposed system would be similar to that used in the world-class AlphaGo Zero [24] and AlphaZero, which was able to learn the game Go in about three days with no human input (and chess in just four hours). The approach described in this paper expands upon this and describes an intelligent system (such as a UAV) embedded in the real world, which requires perception and a model of the world. While most UAVs have all their behaviors and actions preprogrammed, the approaches described here would allow a system to learn on its own. It is not possible to program all the features of an intelligent system (e.g., using rule-based systems such as Soar, ACT-R, EPIC, and SS-RICS).

2. ELEMENTS OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) has been discussed since the Dartmouth conference in 1956, but interest in it has been cyclic, primarily because it has been repeatedly oversold. Figure 1 shows a graph of computer power vs. years, where the computers are the fastest supercomputers of the time (i.e. IBM 701, Cray 1, ASCI Red, and TaihuLight). The trend far exceeds Moore's law due to parallel computing. From this figure it is easy to show how AI has been oversold. In the 1950's (when AI was first discussed) the computers had roughly the power of a C. elegans worm. In the 1970's the Cray supercomputers were roughly equivalent to a cockroach in memory and speed. In the 1990's we saw the first teraflop computer (ASCI Red), which approached the speed of a rat nervous system. It was not until the last few years that the largest computers in the world surpassed the memory and processing of humans. So for most of the last 70 years there was little chance that AI could achieve human-level intelligence. Now, however, AI has the potential to be very powerful (even superhuman), and people are both interested in it and afraid of it.

It should be pointed out, however, that while computers such as the TaihuLight have achieved the memory and speed of a human brain, they require about a million times more energy than the brain and are about a million times larger. General-purpose processors cannot compete with the efficiency of the brain. Recent efforts to build special-purpose neuromorphic chips are promising to change that, however. IBM [1], Intel [3], and Google [4] have all developed special-purpose chips for efficiently simulating neural networks. These are in the early stages of development, and learning algorithms in particular need much more work.

Computing machinery dates back to the abacus and Babbage's difference and analytical engines, but the theory of modern-day computing dates back to Turing [6] and the universal Turing machine. With computers being ubiquitous now, it is difficult to appreciate the achievements of Turing and others, but in the 1930's it was not at all clear how to build a computer. And technology has been changing exponentially, as described by Moore's law. It is now possible to buy a compute server with fifty-six Intel cores (2.5 GHz) for only about $25K. This would have a peak speed of about 10% of the ASCI Red machine at about 1/1000th the price.

Figure 2 shows a comparison of the memory and computational power of biological systems and human-developed systems [42]. A PC-based server today exceeds the power and memory of the Cray 2. This server also exceeds the computing capacity of a mouse, which is a very intelligent animal. The TaihuLight exceeds the power and memory of a human.

The AI of today is basically a collection of well-known algorithms and techniques that are each good for certain applications, for example:

• Neural networks
• Decision trees
• Reinforcement learning
• Supervised/unsupervised learning
• Bayesian methods
• Natural language processing
• Cognitive architectures
• Genetic algorithms
• Others…
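As a rough worked example of the exponential trend described above, the sketch below compounds Moore's law over the six decades since the Dartmouth conference. The two-year doubling period is an illustrative assumption, not a figure from this paper (and, as noted above, parallel supercomputers have grown even faster):

```python
# Illustrative Moore's-law growth sketch. The two-year doubling
# period is an assumption commonly quoted for Moore's law.
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor after `years` of exponential doubling."""
    return 2.0 ** (years / doubling_period)

# Growth from 1956 (Dartmouth conference) to 2016 (TaihuLight era):
factor = moores_law_factor(2016 - 1956)
print(f"{factor:.2e}")  # 1.07e+09, i.e. roughly a billion-fold
```

Sixty years of doubling every two years is 2^30, about a factor of one billion, which is consistent with the worm-to-human progression the figure describes.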
Figure 1. Computer power vs. years

A few of these are described below. Most of these algorithms are good for some problems but not others. Throughout history people have developed and used the best algorithms they could at the time, given the computer power available at that time. As computers get more and more powerful, some algorithms are abandoned because better ones become feasible.

We know that intelligence exists in humans (and other animals) and it is based on neural networks. We just don't understand the detailed behavior of the neurons, the synapses, learning, or the network connectivity (the connectome [8]). Progress is being made, but building a human-like nervous system is extremely challenging. In addition, the brain itself is of little use without the massive sensor input from the nervous system. Nevertheless, we do know human intelligence is neural-network based.

Figure 2. Comparison of biological and human-made systems [42].

Artificial neural network (ANN) algorithms have a long history, dating back to about 1943 [37]. Most of these have little in common with real electro-chemical biological neurons and synapses. The most common type of ANNs are feed-forward networks trained using back-propagation. Back-propagation uses stochastic gradient descent (SGD). Gradient descent is usually a fairly poor optimization algorithm, but SGD seems to work well on ANNs, and better methods have not been found yet. Deep learning methods (e.g., TensorFlow) are based upon this approach too. Given enormous amounts of data and lengthy training periods, they are very effective and widely used. These deep networks, however, are not intelligent [45].

There is no standard mathematical definition of a neural network, but as commonly used in machine learning, a neural network is a parameterized function f(x; θ) that maps input data from R^M to R^N. It is usually used to interpolate a set of data points (X_i, Y_i). Typically this function can be factored through d − 1 intermediate Euclidean spaces R^{N_k}, and d is called the depth of the neural network. This is part of what makes neural networks so effective: the data is much easier to separate when it is transformed into higher-dimensional spaces using nonlinear activation functions.
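The definition above can be made concrete with a small sketch: a network of depth d = 2 is a parameterized function f(x; θ) from R^M to R^N factored through one intermediate space R^K. The layer sizes M, K, N and the tanh activation here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# A neural network viewed as a parameterized function f(x; theta)
# mapping R^M -> R^N, factored through one intermediate space R^K,
# so the depth is d = 2. All sizes are illustrative assumptions.
M, K, N = 4, 16, 2
rng = np.random.default_rng(0)
theta = {
    "W1": rng.normal(size=(K, M)), "b1": np.zeros(K),
    "W2": rng.normal(size=(N, K)), "b2": np.zeros(N),
}

def f(x, theta):
    h = np.tanh(theta["W1"] @ x + theta["b1"])  # nonlinear hidden layer in R^K
    return theta["W2"] @ h + theta["b2"]        # linear output layer in R^N

y = f(np.ones(M), theta)
print(y.shape)  # (2,)
```

Training (e.g. by SGD, as described above) would adjust θ so that f interpolates the data points (X_i, Y_i); the structure of f itself is just this composition of affine maps and nonlinear activations.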
More interesting, but less developed, are biologically plausible spiking neural networks [12]. For these, each neuron is modeled using time-dependent differential equations. There is a range of mathematical models, from leaky integrate-and-fire (LIF) methods [9] to the Hodgkin-Huxley equations [10]. The simpler models have been implemented in silicon chips, as mentioned above, by Intel and IBM, but training them is not a solved problem. In addition, there are roughly 100 different kinds of neurons in the human brain, and the synapses are very complex chemical systems. One of the authors here (Long) has shown that massive networks of Hodgkin-Huxley neurons can be solved very efficiently and scalably on massively parallel computers [11,12].

Another very powerful and biologically plausible algorithm is reinforcement learning [13]. In this approach the system continually performs actions in the environment while attempting to optimize the reward. These techniques are related to Markov Decision Processes (MDP), control theory, and dynamic programming. This approach is quite different from supervised learning, as it does not need a collection of correct input-output results. Legged robots have been shown to learn to walk without human training [34], just using reinforcement learning, and it appears similar to watching a pony learn to walk. The approach might not be the most efficient method, but biological systems learn very slowly also. It takes roughly 20 years to train a human. One difference with machines is that if we can train one, we can then make millions of copies relatively quickly, which scares people.

Cognitive architectures are another AI approach. These are rule-based systems that date back to Newell and Simon's General Problem Solver [14,15]. The most common implementations are ACT-R [16], SOAR [17], SS-RICS [18], and EPIC [20]. These have been shown to be able to simulate some aspects of human performance, but are also very far from AGI. They are difficult to use and have very limited ability to learn. These have been implemented on mobile robots [21], but they are not well suited to that application, primarily because they have very little ability to learn. Also, people must develop essentially all the software and rules. It is simply not feasible to build intelligent systems that rely on humans to code all the behaviors and actions. As Turing said [19]: "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?"

Symbolic systems such as these are an example of an approach that was good at the time, but since this approach was developed computers have gotten about 10 million times faster. The key to AGI systems is learning and neural networks.

Another key algorithm is Monte Carlo Tree Search (MCTS) [38]. This is most often used as a way to sample game trees to find good moves when a full game-tree search would be too expensive. So instead of searching the whole subtree to evaluate the strength of a given node, one plays out random moves all the way to the end of the game many times (each called a playout). The nodes involved are then weighted based on the wins and losses. During future playouts, higher-weighted nodes are preferentially chosen. UCT [22] stands for Upper Confidence bounds for Trees and provides an optimal way to balance exploitation and exploration of moves to minimize a mathematically defined quantity called regret.

Figure 3. Example MCTS with UCT [41].

UCT balances exploitation and exploration using the following equation to choose which node to visit next:

    UCT_i = w_i / n_i + c * sqrt(ln(N_i) / n_i)

where:

• w_i stands for the number of wins for the node considered after the ith move
• n_i stands for the number of simulations for the node considered after the ith move (e.g. wins + losses + draws)
• N_i stands for the total number of simulations after the ith move (at the top node)
• c is the exploration parameter; theoretically equal to √2, in practice usually chosen empirically

The next node in the tree to consider is the one that has the highest value of UCT. The first term is the exploitation term, which will be large if this choice has produced positive results in the past. The second term is the exploration term, and will drive the algorithm to choose nodes that have not been explored enough. This approach is one form of reinforcement learning.

Figure 3 shows how UCT works. The first panel shows the current status of the tree, the second panel shows the selection of the next node, and the third panel shows running the simulation on this tree. In the case shown the simulation produces a loss (i.e. wins/total = 0/1). The fourth panel shows how the result of the simulation is back-propagated up the tree.

One of the most interesting AI systems to date is AlphaGo Zero [24] (and AlphaZero). This uses MCTS and neural networks. It also uses tensor processing units [4]. This system, without human training or data, learned to play one of the most difficult games known (Go) in only 70 hours (it used a computer which cost roughly $25 million, and there were 17 people listed as co-authors on the paper). It beat all other software systems and the best human Go champions; Go is considered an AI-hard problem. Many Go experts have spent their entire lives learning to play the game, while AlphaZero did it in 3 days. So this system exceeded human performance, and it learned to do it roughly 3000 times faster than a human. This system was also able to learn to play chess in only 4 hours, which demonstrates more generalizability than previous game-playing systems (e.g. Watson [25]). And most importantly, this system learns on its own through trial and error, and does not need humans to teach it. This is a very exciting breakthrough. It is a step in the right direction, and as discussed below may help lead to AGI eventually. What we are proposing here is to implement this approach in a mobile robot embedded in the real world.

The power of this approach comes from the combination of neural networks and MCTS/UCT. Figure 4 shows a flowchart of MCTS/UCT combined with a neural network. As the MCTS runs, the neural network is trained on the data from the simulation. The inputs to this system would be the definition of the problem (e.g. definition of game rules, goals for a robot, etc.), the definition of the tree structure, and the state of the system. The other powerful feature of this approach is the relatively small amount of code required.

The other feature illustrated in Fig. 4 is the conversion of the process to unconscious processing. Once the task has been learned (solution known), the MCTS and neural network training can be skipped. Humans do this continuously: as they learn skills, they are converted to unconscious "programs" that can be executed without thinking about them. Only about 5% of the human brain is used for conscious processing [44], and it is too slow for many tasks.

To illustrate the MCTS/UCT approach, consider the trivial problem of tic-tac-toe. The tree structure would have layers of 1-9-8-7-6-5-4-3-2-1, and the state could be represented by 18 bits. The first 9 bits would be 0 or 1 depending on whether an X was in one of the nine squares, and the second nine bits would be 0 or 1 depending on whether there was an O in the squares. The board at the start of the game would have all zeros in this state vector. As the state vector changes, some leaves in the tree will become unavailable. The program would choose where to put an X using UCT, and then it could choose where to put an O randomly (or more intelligently). A win can be determined by the values of the state. As the many different games are played, UCT is used to update the value of each leaf in the tree, and the neural network is trained to choose the next move given the current state. Eventually the MCTS/UCT will not be needed and the neural network can be used, which is somewhat analogous to humans developing "muscle memory" on some task, or chunking.

While Go is one of the most difficult games known (orders of magnitude more difficult than chess), it is still just a game. It has well-defined structure, rules, and states, and the result is either a win or a loss. It can hardly be called an intelligent system using the definition in Section 1. And an unmanned system embedded in the real world is much more complicated. But AlphaZero shows us that given a decision tree structure, a goal, and a definition of the state vector, it is possible for the system to learn how to succeed. The road ahead is still not easy, but there is now sufficient evidence of how we might proceed.

3. PREVIOUS AGI WORK

A Turing machine [6] is a formal computational model that consists of a finite state machine and an infinite tape (memory). Its primary importance is that it provides a well-defined computational model that allows one to prove theorems about computation. It is theoretically equivalent to a variety of other standard mathematically defined computational models, and it is believed by most that anything that is computable is computable by a Turing machine. This conjecture is known as Church's thesis (or the Church–Turing thesis) [28].

AIXI [27] is a precise information-theoretic formula for near-optimal reinforcement learning. It combines Solomonoff induction with sequential decision theory. 'Information-theoretic' means that it does not take computational complexity into account; indeed, it is uncomputable because of the halting problem [28,29]. Moreover, even if it were computable, it would be completely intractable as a practical algorithm. One can use it, however, as the basis of a practical algorithm by approximating it. AIXItl [27] approximates it by putting
limits on the time and space used in the computation, and MC-AIXI-CTW [27] approximates it stochastically.

Figure 4. Flowchart showing MCTS with UCT and neural network.

A Gödel machine [30-32] is a Turing machine that is a provably optimal (in run-time) reinforcement 'learner'. At the moment, its primary importance is that it shows that such a thing exists; although no full implementation has been created yet, one could actually be created, and in the future it may become a practical solution to the general learning problem. It recursively improves itself by rewriting its own code when it can prove the new code provides a better strategy.

Steunebrink and Schmidhuber [40] state:

"One can view a Gödel Machine as a program consisting of two parts. One part, which we will call the solver, can be any problem-solving program. For clarity of presentation, we will pretend the solver is a Reinforcement Learning (RL) program interacting with some external environment. This will provide us with a convenient way of determining utility (using the RL program's reward function), which will be an important topic later on. But in general, no constraints are placed on the solver. The second part of the Gödel Machine, which we will call the searcher, is a program that tries to improve the entire Gödel Machine (including the searcher) in a provably optimal way."

Additional approaches to AGI are discussed in the proceedings of the Artificial General Intelligence conferences [39].

4. PROPOSED AGI FRAMEWORK

The main message of this paper is that there is a Master Algorithm, despite what Domingos [7] says. It is neural networks. This is what all mammals use, and this is what AlphaZero uses. There are also many types of computational neural networks: artificial neural networks (ANN), spiking neural networks (SNN), recurrent neural networks, deep neural networks, convolutional neural networks, etc. These can be trained in many different ways also: backpropagation, supervised, unsupervised, STDP, MCTS/UCT, reinforcement learning, etc.

Intelligence can be defined in a variety of mathematically equivalent ways, and one of them is the ability to learn to predict the future state of a system from the current state. For an agent in an environment, the current state includes the state of the agent and any action the agent performs.

Animals have brains so they can (imperfectly) predict what actions would result in the best outcomes for themselves. Without brains, an animal's actions would either be random or hard-coded (reactive) like an insect, and, indeed, the actions of primitive animals and most current robots are essentially reactive. But to survive and prosper in an unknown and/or complicated environment, an animal needs to be able to learn the environment through experience (physical experiments or thought experiments) to be able to make predictions about the environment.

Figure 5 illustrates the information flow of a proposed intelligent system in its environment. It is embedded in the world and has numerous sensor inputs. This could, for example, be a mobile robot with vision, smell, touch, or other sensor inputs. The inputs are all processed using artificial neural networks (ANN). These neural networks can be trained ahead of time to recognize different sensor inputs, or they could implement reinforcement learning.

As in animals, there is a "data fusion" step where the different sensor modalities are fused together. This could be accomplished using a hierarchical neural network that uses the sensor ANNs as input. It is much more effective to use all the input data than just one set, especially with noisy or incomplete data, which often occurs. Evolution has enabled us to predict danger is nearby (e.g., a tiger) even if we have incomplete sensor data. A partial view combined with a partial sound or smell might be enough to recognize the
tiger. Sensor data fusion is already being used in some military systems [33].

Figure 5. Unmanned System Architecture [43]

Once the system perceives what is in the world, then it either reacts very quickly (e.g., to danger), or it reasons about how to accomplish its goals. For example, if an object is thrown at a person and could potentially cause harm, there is no time to think about this; you just have to react very quickly. This is also the sort of behavior a professional baseball player needs to be successful. There is not enough time for our brains to process a very fast pitch; the player just has to react and rely on years of training. Some call this "muscle memory." This sort of processing takes place in the more primitive parts of the brain. Roughly 95% of the human brain is devoted to unconscious processing [44].

If the system does not need to react quickly, it can take the time to reason about its options and perform thought experiments. This is the sort of processing that takes place in the cortex, where most of the complex processing takes place. But this processing takes time. Human reaction time to a visual stimulus is about 0.25 seconds. In the system proposed here this area would perform neural network processing and Monte Carlo tree search, as in AlphaZero. The goal here, as in AlphaZero, is for the system to learn on its own. This is essential to having a truly intelligent system, since it is not feasible to program all aspects of behavior.

The only way to be able to predict the future state of an environment is to learn a model of it and then perform 'thought experiments' (simulations) in the model. If the model is precise enough and there is sufficient time to do enough simulations to explore a wide variety of possible future outcomes, then these simulations will allow an agent to pick an action that has a good outcome. It is important to understand that doing 'thought experiments' allows one to explore actions with bad outcomes more cheaply and safely than having to perform the experiment in reality.

For example, if a human is walking near the edge of a cliff and is trying to decide what to do next, it is essential for survivability to think about what will happen if they step over the edge. To actually perform the experiment in the real world would obviously not be effective.

The simulations in the learned model can be performed using the same algorithms as AlphaZero, which uses neural networks and Monte Carlo decision trees. If the model is sufficiently detailed and precise, then the results in the model will be a close approximation to the results in the real world if the same actions are performed as were performed in the simulation. We note that this process essentially results in a reinforcement learning algorithm.

One of the difficult aspects of this system is determining the objectives to go into the MCTS/UCT approach for all the different goals and sub-goals. For a game (even one as difficult as Go) the objectives, tree structure, and state information are well known. However, imagine a new-born baby trying to survive in the world. They have some built-in instincts, but have to learn an enormous amount. They have urges to eat, avoid danger, etc., but to accomplish these they have to learn sub-goals such as learning to walk, learning to recognize objects and people, learning to use their hands, etc. Other humans might teach them some of these. Until this system can reason about goals and sub-goals, humans could be used to input goals, sub-goals, state information, and decision tree structures.

Neural networks can be used both to learn a model of the world and to learn the 'physics' of the real world. This is true no matter what the 'physics' is. For the real world, the physics is the standard physics we are accustomed to studying, but in other circumstances the 'physics' might
correspond to the rules of a game or some other formal system.

One of the very interesting aspects of this approach is the small number of lines of code required, and the relatively small amount of computation required. A simple program to implement MCTS/UCT can be written in about 20 lines of Python code. A neural network including backpropagation can be implemented in about 500 lines of C++ code. These are very tiny codes compared to those in traditional systems, which could easily have tens of millions of lines of code. For example, the Soar cognitive architecture contains about 500,000 lines of C++ code. And the Boeing 787 has 6.5 million lines of code.

6. CONCLUSIONS

This paper reviews artificial intelligence algorithms and describes a framework for artificial general intelligence that could, conceivably, be implemented in software for autonomous systems. While approaches like this were not feasible in the past, with the rapid increase in computing power (even surpassing humans), sensors, and algorithms, they may now be feasible.

REFERENCES

1. Modha, D. S. (2017). Introducing a brain-inspired computer. Published online at http://www.research.ibm.com/articles/brain-chip.shtml.

2. Gottfredson, Linda S. (1997). "Mainstream Science on Intelligence (editorial)," Intelligence, 24: 13–23. doi:10.1016/s0160-2896(97)90011-8.

3. Davies, M., et al., "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning," IEEE Micro, Jan. 2018.

4. Jouppi, Norman P., Young, Cliff, Patil, Nishant, Patterson, David, et al. (Google, Inc., Mountain View, CA, USA), "In-Datacenter Performance Analysis of a Tensor Processing Unit," in Proceedings of ISCA '17, Toronto, ON, Canada, June 24-28, 2017.

5. Kelley, T. and Long, L.N., "Deep Blue Can't Play Checkers: the Need for Generalized Intelligence for Mobile Robots," Journal of Robotics, Vol. 2010, May 2010.

6. Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, Series 2, Vol. 42, pp. 230–265.

7. Domingos, P., "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World," Basic Books, 2018.

8. Sporns, Olaf; Tononi, Giulio; Kötter, Rolf (2005). "The Human Connectome: A Structural Description of the Human Brain," PLoS Computational Biology, 1 (4).

9. Koch, C., Biophysics of Computation: Information Processing in Single Neurons, Oxford Univ. Press, Oxford, 1999.

10. Hodgkin, A. L., and Huxley, A. F., "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve," Journal of Physiology, Vol. 117, No. 4, 1952, pp. 500–544.

11. Skocik, M.J. and Long, L.N., "On The Capabilities and Computational Costs of Neuron Models," IEEE Trans. on Neural Networks and Learning, Vol. 25, No. 8, Aug. 2014.

12. Long, Lyle N., "Toward Human Level Massively Parallel Neural Networks with Hodgkin-Huxley Neurons," presented at 16th Conference on Artificial General Intelligence, New York, NY, July 16-19, 2016.

13. Sutton, R.S. and Barto, A.G., Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning), second edition, 1998.

14. Newell, A., and Simon, H. A., Human Problem Solving, Prentice-Hall, Englewood Cliffs, NJ, 1972.

15. Newell, A., Unified Theories of Cognition, Harvard Univ. Press, Cambridge, MA, 1990.

16. Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., and Qin, Y., "An Integrated Theory of the Mind," Psychological Review, 2004, 111(4), pp. 1026-1060.

17. Laird, J.E., The Soar Cognitive Architecture, MIT Press, 2012.

18. Kelley, T. D., "Symbolic and Sub-symbolic Representations in Computational Models of Human Cognition: What Can be Learned from Biology?" Theory & Psychology, Vol. 13, No. 6, 2003, pp. 847–860.

19. Turing, A.M. (1950). "Computing machinery and intelligence," Mind, 59, 433-460.
 of the London Mathematical Society. 2 (published
 1937). 42: 230–265. doi:10.1112/plms/s2-42.1.230. (and 20. Kieras, D.E. and D.E. Meyer, “An Overview of the EPIC
 Turing, A.M. (1938). "On Computable Numbers, with an Architecture for Cognition and Performance With
 Application to the Entscheidungsproblem: A correction". Application to Human-Computer Interaction,”
 Proceedings of the Lon-don Mathematical Society. 2 HumanComputer Interaction, 1997, 12, pp. 391-438.
 (published 1937). 43 (6): 544–6. doi:10.1112/plms/s2-
 43.6.544.).
21. Hanford, S. D., Janrathitikarn, O., and Long, L. N., "Control of Mobile Robots Using the Soar Cognitive Architecture," Journal of Aerospace Computing, Information, and Communication, Vol. 6, No. 6, 2009, pp. 69-91.

22. Abramson, B., "The Expected-Outcome Model of Two-Player Games," Ph.D. Dissertation, Columbia Univ., 1987.

23. Kocsis, Levente and Szepesvári, Csaba, "Bandit Based Monte-Carlo Planning," Proceedings of the 17th European Conference on Machine Learning (ECML'06), Berlin, Germany, September 18-22, 2006, pp. 282-293.

24. Silver, D., et al., "Mastering the Game of Go Without Human Knowledge," Nature, Vol. 550, October 19, 2017.

25. Ferrucci, David, Levas, Anthony, Bagchi, Sugato, Gondek, David, and Mueller, Erik T., "Watson: Beyond Jeopardy!," Artificial Intelligence, Vols. 199-200, June-July 2013, pp. 93-105.

26. Davis, Martin, ed., The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Raven Press, New York, 1965.

27. Hutter, M., http://www.hutter1.net/ai/

28. Turing, A.M., 1948, "Intelligent Machinery." Reprinted in Cybernetics: Key Papers, ed. C.R. Evans and A.D.J. Robertson, University Park Press, Baltimore, 1968, p. 31. Reprinted in Turing, A. M. (1996). "Intelligent Machinery, A Heretical Theory," Philosophia Mathematica, 4 (3): 256. doi:10.1093/philmat/4.3.256.

29. Teuscher, C., Alan Turing: Life and Legacy of a Great Thinker, Springer Science & Business Media, June 29, 2013.

30. Schmidhuber, Jürgen (December 2006). "Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements," arXiv:cs/0309048, Vers. 5, 2006.

31. Steunebrink, B.R. and Schmidhuber, J. (2011). "A Family of Gödel Machine Implementations." In: Schmidhuber J., Thórisson K.R., Looks M. (eds), Artificial General Intelligence (AGI 2011), Lecture Notes in Computer Science, vol. 6830, Springer, Berlin, Heidelberg.

32. Gödel, K., "On Formally Undecidable Propositions of Principia Mathematica and Related Systems," (original: Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I," Monatshefte Math. Phys., 38 (1931), pp. 173-198.)

33. Liggins II, Martin, Hall, D., and Llinas, J., Handbook of Multisensor Data Fusion: Theory and Practice, 2nd Edition, CRC Press, 2017.

34. https://www.youtube.com/watch?v=RZf8fR1SmNY

35. https://faculty.washington.edu/chudler/facts.html

36. Haykin, S., Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice-Hall, 1999.

37. McCulloch, W.S. and Pitts, W., "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, Vol. 5, No. 5, 1943, pp. 115-133.

38. Browne, C. B., Powley, E., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., and Colton, S., "A Survey of Monte Carlo Tree Search Methods," IEEE Transactions on Computational Intelligence and AI in Games, Vol. 4, No. 1, Mar. 2012.

39. http://agi-conf.org/

40. Steunebrink, B.R. and Schmidhuber, J. (2012). "Towards an Actual Gödel Machine Implementation: A Lesson in Self-Reflective Systems." In: Wang P., Goertzel B. (eds), Theoretical Foundations of Artificial General Intelligence, Atlantis Thinking Machines, vol. 4, Atlantis Press, Paris.

41. https://en.wikipedia.org/wiki/Monte_Carlo_tree_search

42. Moravec, H., Mind Children: The Future of Robot and Human Intelligence, Harvard Univ. Press, Cambridge, 1988.

43. Long, Lyle N., Kelley, Troy D., and Avery, Eric S., "An Emotion and Temperament Model for Cognitive Mobile Robots," 24th Conference on Behavior Representation in Modeling and Simulation (BRIMS), March 31-April 3, 2015, Washington, DC.

44. Lipton, B. H., The Biology of Belief: Unleashing the Power of Consciousness, Matter and Miracles, Mountain of Love/Elite Books, Santa Rosa, CA, 2005.

45. Lohr, S., "Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So," New York Times, June 20, 2018.
BIOGRAPHIES
Lyle N. Long is a Professor of Aerospace Engineering, Computational Science, Neuroscience, and Mathematics at The Pennsylvania State University. He has a D.Sc. from George Washington University, an M.S. from Stanford University, and a B.M.E. from the University of Minnesota. In 2007-2008 he was a Moore Distinguished Scholar at Caltech. He has been a visiting scientist at the Army Research Lab, Thinking Machines Corp., and NASA Langley Research Center, and was a Senior Research Scientist at the Lockheed California Company for six years. He received the Penn State Outstanding Research Award, the 1993 IEEE Computer Society Gordon Bell Prize for achieving the highest performance on a parallel computer, a Lockheed award for excellence in research and development, and the 2017 AIAA Software Engineering Award and Medal. He is a Fellow of the American Physical Society (APS) and the American Institute of Aeronautics and Astronautics (AIAA). He has written more than 260 journal and conference papers.

Carl F. Cotner is an Associate Research Professor in the Access and Effects Department of the Applied Research Laboratory at The Pennsylvania State University. He has a Ph.D. in Mathematics from the University of California at Berkeley.
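As an illustration of the compactness claimed in the text (an MCTS/UCT program in roughly 20 lines of Python), the following is a minimal sketch. It is a hypothetical reconstruction, not code from the paper: the `Node` and `uct_search` names and the toy subtraction game used as the domain (players alternately take 1-3 stones; whoever takes the last stone wins) are all assumptions for demonstration.

```python
import math
import random

# Illustrative sketch only: minimal MCTS with the UCT (UCB1) selection rule,
# applied to a toy subtraction game (assumed domain, not the authors' code).

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones left after `move` was played
        self.parent = parent
        self.move = move
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who played `move`

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def uct_child(node, c=1.4):
    # UCB1: exploitation (win rate) plus an exploration bonus
    return max(node.children, key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def uct_search(stones, iterations=3000):
    root = Node(stones)
    root.visits = 1
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes
        while not node.untried_moves() and node.children:
            node = uct_child(node)
        # 2. Expansion: add one untried child, if any
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node.children.append(Node(node.stones - m, node, m))
            node = node.children[-1]
        # 3. Simulation: random playout from the new state
        s, plies = node.stones, 0
        while s > 0:
            s -= random.randint(1, min(3, s))
            plies += 1
        # The player who moved into `node` wins iff the playout length is
        # even (0 plies means that player just took the last stone).
        win = (plies % 2 == 0)
        # 4. Backpropagation: alternate the result up the tree
        while node is not None:
            node.visits += 1
            node.wins += win
            win = not win
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

The four MCTS phases (selection, expansion, simulation, backpropagation) are marked in the loop; with a few thousand iterations the search concentrates its visits on winning moves in small positions.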
