“Cognitive Science can be roughly summed up as the scientific interdisciplinary study of the mind” – Jay Friedenberg and Gordon Silverman [1]
Cognitive Science is a truly interdisciplinary field of study. It combines areas such as Philosophy, Psychology, Linguistics, Artificial Intelligence, Robotics, and Neuroscience, and it benefits from the interaction of all these disciplines and their worldviews. As interdisciplinarity is the central notion, I have decided to present cognitive scientists from diverse backgrounds and illustrate how each discipline contributes to the expansion of knowledge in cognitive science. I will also link their discoveries to state-of-the-art AI research.
The chosen scientist from the psychological approach is Burrhus Frederic Skinner (1904-1990). Skinner's main field of interest was the study of learning, and he is the father of the theory of operant conditioning. Operant conditioning states that behavior is strengthened when it is followed by reinforcement [2, 3] and diminished when it is followed by punishment. Both reinforcement and punishment come in two types, positive and negative: positive means a stimulus is added (a reward, or an aversive consequence), while negative means a stimulus is removed (a pleasant or unpleasant circumstance is taken away). Does reinforcement learning in AI sound familiar? I must add that there are multiple differences between reinforcement learning and operant conditioning (for example, the timing of the reward); however, both build on the assumption that a reward (reinforcement) facilitates learning.
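The shared assumption — that rewarded behavior is emitted more often — can be sketched as a minimal value-update loop in the style of reinforcement learning. The "Skinner box" environment, the two actions, and the learning rate below are illustrative assumptions, not Skinner's own formulation:

```python
import random

# Tabular value estimates for two actions in a hypothetical Skinner box.
values = {"press_lever": 0.0, "do_nothing": 0.0}
ALPHA = 0.1  # learning rate

def reward(action):
    # Pressing the lever dispenses food (reward 1); doing nothing yields nothing.
    return 1.0 if action == "press_lever" else 0.0

random.seed(0)
for _ in range(200):
    # Epsilon-greedy choice: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Reinforced actions gain value, so they are chosen more and more often.
    values[action] += ALPHA * (reward(action) - values[action])

print(values)
```

After a few hundred trials the value of the rewarded action approaches 1, while the unrewarded one stays near 0 — the computational analogue of reinforcement strengthening behavior.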
David Marr formulated the trilevel hypothesis of description [4]. It explains the three levels at which every information-processing “machine” should be understood: the computational level, the algorithmic level, and the implementational level. The computational level describes the goal of the computation, why the computation is appropriate, and the logic behind the strategy. The algorithmic level answers what the representations for the input and the output are, and what algorithm transforms one into the other. The implementational level explains how these representations and the algorithm can be realized physically. Modern-day machine learning is still evaluated and analyzed through these three levels of description.
Donald O. Hebb proposed a theory that could explain learning through changes in the neural network of the brain. The theory postulates that “cells that fire together, wire together” [5]: if one neuron repeatedly activates another, their connection strength will increase. Such changes in connection strength can form neuronal pathways and circuits. This neuroscientific insight contributed to the development of Hebbian learning, a type of unsupervised machine learning.
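Hebb's postulate translates directly into an update rule: the change in a connection's weight is proportional to the product of pre- and postsynaptic activity. A minimal sketch (the learning rate, activity values, and two-synapse setup are illustrative assumptions):

```python
def hebbian_update(weight, pre, post, eta=0.1):
    """Hebb's rule: the change in connection strength is proportional
    to the product of pre- and postsynaptic activity."""
    return weight + eta * pre * post

# Two synapses onto one postsynaptic cell; only cell A fires together with it.
w_a, w_b = 0.0, 0.0
for _ in range(10):
    pre_a, pre_b, post = 1.0, 0.0, 1.0  # A and the target co-fire; B stays silent
    w_a = hebbian_update(w_a, pre_a, post)
    w_b = hebbian_update(w_b, pre_b, post)

print(w_a, w_b)
```

The co-firing synapse strengthens with every repetition while the silent one is unchanged — "cells that fire together, wire together" as arithmetic.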
The network approach sees the mind as a web of interacting neurons. Warren McCulloch and Walter Pitts proposed a theory about the function of biological networks, the first computational model of a neuron. They assumed that neurons have a binary output: they are either “on” (fire) or “off” (stay silent/inactive) [6]. Firing depends on whether the aggregated value of the inputs, which can be excitatory or inhibitory, reaches a threshold. These assumptions kickstarted research in the artificial neural network field and resulted in the perceptron.
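The model is simple enough to fit in a few lines. The sketch below is a simplified weighted-threshold variant (the original McCulloch–Pitts formulation uses unit weights and absolute inhibition; negative weights here stand in for inhibitory inputs):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: fire (1) iff the weighted sum of inputs
    reaches the threshold. Inhibitory inputs carry negative weights."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A two-input AND gate: both excitatory inputs must be on to reach threshold 2.
and_on = mcculloch_pitts([1, 1], [1, 1], threshold=2)   # fires
and_off = mcculloch_pitts([1, 0], [1, 1], threshold=2)  # stays silent

# An inhibitory input (weight -1) can veto firing.
vetoed = mcculloch_pitts([1, 1], [1, -1], threshold=1)

print(and_on, and_off, vetoed)  # 1 0 0
```

Replacing the hard threshold with a learnable rule over these same weighted sums is precisely what Rosenblatt's perceptron later did.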
The evolutionary approach builds on the idea of the survival of the fittest. Gerald Edelman, a follower of the evolutionary approach, developed the Theory of Neuronal Group Selection [7], also known as Neural Darwinism. It states that variation and selection within neuronal populations are key to the development and functioning of the brain [8]. The Darwinian view is highly regarded among AI researchers as well and led to the development of evolutionary algorithms for task and process optimization.
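An evolutionary algorithm needs only the two ingredients Edelman emphasized: variation and selection. A minimal sketch on bit-string genomes (the population size, mutation scheme, and toy "count the ones" fitness function are illustrative assumptions):

```python
import random

def evolve(fitness, pop_size=20, genome_len=8, generations=40, seed=1):
    """Minimal evolutionary loop: selection keeps the fitter half of the
    population; variation produces one mutated offspring per survivor."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives unchanged (elitism).
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: each survivor yields a child with one random bit flipped.
        offspring = []
        for genome in survivors:
            child = genome[:]
            child[rng.randrange(genome_len)] ^= 1
            offspring.append(child)
        population = survivors + offspring
    return max(population, key=fitness)

# Toy task ("OneMax"): maximize the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(best, sum(best))  # typically converges to the all-ones genome
```

Real evolutionary algorithms add crossover, tournament selection, and richer genome encodings, but the variation-plus-selection core is exactly this loop.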
Transformational (generative) grammar (TTG) is Noam Chomsky’s solution to the limitations of phrase structure grammar in explaining sentence rearrangement. It is a “set of rules for modifying a sentence into a closely related one” [1]. TTG research extends our knowledge of the computational structure of language, which is valuable input for NLP and NLG. For example, GPT-3 and other large language models build on decades of NLP research whose questions and benchmarks have their roots in the linguistic approach.
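A classic example of such a rule is subject–auxiliary inversion, which maps a declarative sentence onto its closely related yes/no question. A toy rule-based sketch (the flat word-list representation and small auxiliary set are illustrative assumptions, far simpler than Chomsky's formal machinery):

```python
def yes_no_question(sentence):
    """Subject-auxiliary inversion: move the auxiliary verb to the front,
    turning a declarative sentence into the related yes/no question."""
    auxiliaries = {"is", "are", "was", "were", "can", "will"}
    words = sentence.rstrip(".").split()
    for i, word in enumerate(words):
        if word in auxiliaries:
            inverted = [words[i]] + words[:i] + words[i + 1:]
            return " ".join(inverted) + "?"
    return sentence  # no auxiliary found; the transformation does not apply

print(yes_no_question("the cat is sleeping"))  # is the cat sleeping?
print(yes_no_question("she can swim"))         # can she swim?
```

The point of the example is the shape of the rule — a structure-sensitive mapping between related sentences — not the linguistic coverage, which a real grammar handles with tree structures rather than word lists.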
Alan Turing described the “Imitation Game” in 1950. This game is now known as the Turing Test, which is supposed to answer the question “can machines think?” [9]. The game functions in the following way: there are three participants, a person, a machine, and an interrogator. The interrogator is separated from the other two participants. After some interaction among the participants, the interrogator is tasked with determining the identity of the others. Could the Turing Test be passed?
Does passing the Turing Test mean that the machine has thoughts and understands the conversation? John Searle would argue otherwise with his famous Chinese Room argument. “The heart of the argument is Searle imagining himself following an [sic!] symbol processing program is written in English […]. The English speaker (Searle) sitting in the room follows English instructions for manipulating Chinese symbols, whereas a computer “follows” (in some sense) a program written in a computing language.” [10]. The argument proposes that there can be an illusion of understanding: just as the person in the room appears (to an outside observer) to understand Chinese, the computer can appear to understand language; however, it only manipulates symbols according to a given rulebook.
Charles Rosen was a pioneer in the AI/Robotics field. He led the project team at Stanford Research Institute (SRI) that developed the first AI robot, Shakey. As SRI describes, “Shakey was the first mobile robot with the ability to perceive and reason about its surroundings” [11]. It could perform various tasks that required planning, route-finding, and even the rearrangement of simple objects. The development of this robot was a huge milestone influencing robotics research all around the globe.
Last, but not least, I would like to thank all the cognitive scientists, who manage to overcome the limitations of one specific field of study and make the impossible possible by utilizing theories and methods from a multiverse of disciplines. They deserve to be on every list.
“Rebecca Varga is a Data Scientist with an undergraduate degree in International Business Consultancy and graduate studies in Cognitive Science and Data Science. She has experience with mentoring (Templeton Program), and she is a Founding Member of Mensa Youth Austria, a daughter organization of Mensa International. Her work mainly focuses on carbon- and silicon-based learning, and computational modeling.”
[1] Friedenberg, J., and G. Silverman, 2006. “Cognitive Science: An Introduction to the Study of Mind”. Sage Publications, Thousand Oaks.
[2] Skinner, B.F., 1951. “How to teach animals”. Scientific American, 185(6), 26-29.
[3] Skinner, B.F., 1953. “Science and Human Behavior”. Simon & Schuster Inc., New York.
[4] Marr, D., 1982. “Vision: A Computational Investigation into the Human Representation and Processing of Visual Information”. W.H. Freeman & Co., San Francisco.
[5] Hebb, D.O., 1949. “The Organization of Behavior”. John Wiley and Sons, New York.
[6] McCulloch, W.S., and W. Pitts, 1943. “A logical calculus of the ideas immanent in nervous activity”. The Bulletin of Mathematical Biophysics 5, 115-133.
[7] Edelman, G.M., 1987. “Neural Darwinism: The Theory of Neuronal Group Selection”. Basic Books, New York.
[8] Edelman, G.M., 1993. “Neural Darwinism: selection and reentrant signaling in higher brain function”. Neuron 10(2), 115-125.
[9] Stanford Encyclopedia of Philosophy, 2021. “The Turing Test”. https://plato.stanford.edu/entries/turing-test/#Tur195ImiGam. Accessed 13.09.2021.
[10] Stanford Encyclopedia of Philosophy, 2021. “The Chinese Room Argument”. https://plato.stanford.edu/entries/chinese-room/. Accessed 13.09.2021.
[11] SRI, 2021. “Shakey the Robot”. https://www.sri.com/hoi/shakey-the-robot/. Accessed 13.09.2021.