Wednesday, September 9, 2009

Artificial Intelligence: Neural Networks and Genetic Algorithms

Artificial intelligence has advanced considerably since Alan Turing's article "Computing Machinery and Intelligence", published in 1950. Turing speculated that, given sufficient computing power, computer programs could one day emulate learning behaviour and heuristics, and he set out certain principles regarding the components such programs would require. Since the article was published, some of these principles have gradually been developed into sophisticated algorithms, and new algorithms independent of Turing's research have been explored. Many of Turing's propositions became fundamental components of advanced artificial intelligence today; others became technologically obsolete because sufficiently powerful computers did not exist at the time of publication.

One such field of computer science, only briefly suggested in Turing's article, is a fascinating and, quite frankly, mysterious type of algorithm called Artificial Neural Networks. Neural networks are analogous to the processes that occur in our own human brains. More specifically, the algorithm consists of an abstract "brain" in which "neurons" are connected to each other in an intricate network pattern. Neurons may send and receive signals according to pre-designed rules, and by using defined "input" and "output" neurons it is possible to train the network to perform tasks in ways similar to how humans do, and, more importantly, to learn.
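To make the idea concrete, here is a minimal sketch of such a network: signals enter at the input neurons, pass through a layer of hidden neurons, and emerge at the output neurons. The layer sizes, weights and the sigmoid "firing rule" are all illustrative assumptions, not taken from any particular system.

```python
import math

def sigmoid(x):
    # Squash a neuron's summed input into the range (0, 1),
    # playing the role of the neuron's firing rule.
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron receives a weighted signal from every input neuron.
    hidden = [sigmoid(sum(w * i for w, i in zip(ws, inputs)))
              for ws in hidden_weights]
    # Each output neuron receives a weighted signal from every hidden neuron.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]

# Two input neurons, two hidden neurons, one output neuron; the weights
# here are arbitrary placeholders. "Training" means adjusting them.
out = feed_forward([1.0, 0.5],
                   [[0.4, -0.6], [0.7, 0.1]],
                   [[0.5, -0.3]])
print(out)
```

The interesting part, of course, is not this forward pass itself but how the weights come to encode useful behaviour, which is where learning (and, below, evolution) enters the picture.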

Neural networks are so fascinating and ground-breaking because their inner workings are so hard to grasp. To explain with an analogy, imagine seeing a team of construction workers and their end result, a tall skyscraper, while the intermediary process remains hidden. The individual components (the workers, their tools and materials) are simple, but the end result is highly complex. Because you do not know how the building was constructed from these simple parts, you are naturally fascinated. The same applies to neural networks. We still understand their intermediary process only at a very shallow level, yet they have shown amazing results, especially when combined with genetic algorithms. By evolving the networks, letting artificial selection keep the well-suited networks and eliminate the badly-suited ones, one can "teach" networks to perform complex tasks. One can also make a neural network teach itself through the "reward and punishment" system described by Turing, but compared with networks evolved by genetic algorithms, such networks usually do not develop behaviours as radical and scientifically interesting.
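The artificial-selection loop described above can be sketched in a few lines. In a real system the genes would be a network's weights and the fitness would come from watching the network perform its task; here, as a stand-in, the hypothetical fitness simply rewards gene lists whose sum lands near a target value. The population size, mutation strength and target are all arbitrary choices for illustration.

```python
import random

TARGET = 10.0

def fitness(genes):
    # Hypothetical stand-in for "how well did this network perform?":
    # higher (closer to zero) is better.
    return -abs(sum(genes) - TARGET)

def evolve(pop_size=20, n_genes=5, generations=50):
    # Start from a random population of gene lists (think: weight vectors).
    population = [[random.uniform(-5.0, 5.0) for _ in range(n_genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Artificial selection: keep the well-suited half,
        # eliminate the badly-suited half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [[g + random.gauss(0.0, 0.5) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the best individual always survives unchanged, fitness can only improve from one generation to the next; after a few dozen generations the winning gene list sums very close to the target.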

Some may think that a learning machine, especially after seeing films like The Terminator, is a frightening offspring of science gone wrong. I oppose this view: it is easy to contain an artificial intelligence by giving it no physical tools to use. Weighing this drawback against the scientific interest of neural networks, I believe we should accelerate research in this field. Some results of neural networks, however, are unexpected and perhaps not really useful. I have researched neural networks myself: I once contributed to a computer game in which the opponents (in the shape of tanks) were controlled by neural networks. The opponents were rewarded points for shooting opposing tanks and for collecting powerups. I watched the neural networks evolve, and curiously, the tanks did not learn to shoot each other; instead they chose a pacifistic approach and only drove around the track collecting powerups. In total, this actually gave more points than shooting at the enemy tanks, which explains the strange result. Another amusing example comes from military research in the 60s, in which neural networks were trained to recognize hidden tanks in pictures. All went well: eventually the computer seemed able to tell the difference between pictures that contained tanks and pictures that did not. However, when presented with a new set of pictures of hidden tanks, the neural network failed, which puzzled the researchers. They eventually found the problem: the pictures of the hidden tanks had been taken on cloudy days, and the pictures without tanks on sunny days. Thus, the military now had a multi-million-dollar computer that could tell whether it was sunny or not.

1 comment:

  1. This sounds like a potential definition paper in the works. Your discussion of neural networks, and in particular the analogy with the construction of a skyscraper, for me recalls the Descartes epigraph we discussed earlier this semester. Perhaps the "wonder" we experience in the face of now-not-so-new media is tied to this phenomenon. It's not that we've never seen a computer or a tall building, but we experience awe when we attempt to contemplate the process by which it came into being.
