From Thinking Machines to Rational Agents: A Guide to Different Approaches to AI
Shashank Rajak
May 11, 2025
10 min read

As someone learning about Artificial Intelligence, I came across an interesting idea in the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig. It said, "A student in physics might feel that all the good ideas in physics have already been taken by Newton, Einstein, Galileo and others. AI, on the other hand, still has openings for several full-time Einsteins and Edisons."
This really struck me. As someone who is still learning a lot about AI, these lines were a huge motivation to go deeper into the field and work on some groundbreaking ideas.
We humans call ourselves Homo sapiens, "man the wise," because of our intelligence. Our intelligence is what truly sets us apart from other living beings in the world. A big goal in AI is to build the same level of intelligence in machines, so that they can do the same smart things we humans do. There has been a lot of trial and error in the field of AI, and it took a long road to reach where it is today.

This is also one of the interesting parts that attracts me to AI: here we have to think first about human intelligence, and then about how we can re-create that intelligence in machines. We should also keep in mind that, to date, we do not clearly understand how the mind (the intellectual part) emerges from the physical brain; there is still a huge amount to uncover in this area. Yet at the same time, we are on a quest to build systems that are intelligent and can do the same work for which we humans need intelligence. This blend of human and computer knowledge is what attracts me even more to AI.
In this blog post, I'll share the ways people from different schools of thought have approached AI, and what each school considers AI to be. Understanding these core ideas is really important if we want to build truly intelligent systems. So, let's take a look.
Historically, there have been four major approaches to AI. In the figure below, the bottom boxes are concerned with "action," while the boxes on top are concerned with "thought process." Similarly, the boxes on the left take "humanly" approaches, while those on the right are concerned with "rationality."
Thinking Humanly - The Cognitive Modeling Approach
As the name suggests, this approach to AI aims to build machines that think like humans. The fundamental goal here is not just to get a machine to perform a task, but to perform it using cognitive processes that mirror human reasoning, problem-solving, and even learning. As John Haugeland aptly put it in 1985, this is about "...machines with minds, in the full and literal sense."
As I mentioned earlier, if we have to build a system that is as intelligent as we humans are and thinks like humans, then the first thing we need to understand is how the human mind thinks. This is why this approach is deeply intertwined with cognitive science, an interdisciplinary field that studies the human mind and its processes. By developing a robust theory of how the human brain processes information, solves problems, and makes decisions, we can model these theories as computer programs. These programs can then be tested and validated by comparing their inputs and outputs, as well as their reasoning steps, to those of humans tackling the same tasks.
A prime example of this approach in early AI is the General Problem Solver (GPS), developed by Allen Newell and Herbert Simon in 1961. The primary objective of GPS wasn't solely to find the correct solution but, more importantly, to compare the sequence of reasoning steps taken by the program to that of humans solving the same problems. The focus was on replicating the human thought process itself.
Interestingly, modern AI is also seeing a resurgence of this focus on explainability and mimicking human-like reasoning. Many contemporary AI chatbots strive to demonstrate their "thought process" to users, aiming to build trust and convince them that the answers provided are derived through a human-like cognitive journey.
A current illustration of this can be seen in models like ChatGPT and other Large Language Models (LLMs). As shown in the figure below, features like the "Reason" option in ChatGPT allow users to see a glimpse of what the model "thought" before generating its response, offering a window into its internal processing that attempts to mirror human-like reasoning.
The underlying belief in this approach remains that the most effective way to build truly intelligent machines is to understand, model, and ultimately replicate the intelligence we already know best – our own human minds.
Thinking Rationally: The Laws of Thought Approach
This approach focuses on building AI that thinks logically, based on rules and reasoning. Given some knowledge, such machines are expected to reach the "right" decision, one that is rational and logical. Many times we humans also find ourselves in situations where, given some information, we need to make a rational decision, and we know it's not an easy task, even though it looks simple on paper.
The Greek philosopher Aristotle was among the first to formalize the idea of rationality and logical thinking; he tried to lay down rules for "right thinking." His syllogisms, like "All humans are mortal; Socrates is a human; therefore, Socrates is mortal," are examples of arguments that lead to correct conclusions if the starting points are correct. This idea led to the field of logic, where people developed precise ways to write down statements and the relationships between them.
Early AI researchers tried to use these logical systems to build intelligent machines. The idea was that if a computer followed these logical "laws of thought," it could reason correctly and solve problems.
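To make the "laws of thought" idea concrete, here is a minimal sketch (my own illustration, not from the book) of a forward-chaining rule engine: it repeatedly fires if-then rules whose premises are already known, deriving new facts until nothing changes. Aristotle's syllogism is encoded as a single rule.

```python
def forward_chain(facts, rules):
    """Derive all facts reachable from `facts` by applying if-then `rules`.

    facts: set of strings (known statements)
    rules: list of (premises, conclusion) pairs
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if every premise is known and the conclusion is new.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Aristotle's classic syllogism as a rule: human(Socrates) -> mortal(Socrates)
rules = [(["Socrates is a human"], "Socrates is mortal")]
derived = forward_chain({"Socrates is a human"}, rules)
print("Socrates is mortal" in derived)  # True
```

Real logic-based systems use formal languages with variables and quantifiers rather than plain strings, but the core loop of deriving conclusions from premises is the same.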
However, there were a couple of big challenges. First, it's hard to take everyday knowledge and put it into strict logical form, especially when things aren't always 100% certain. Second, just because a problem can be solved logically doesn't mean it can be solved quickly. Even with a small amount of information, logical reasoning can take a very long time for a computer without good guidance.
Imagine trying to plan your route to a new restaurant using only strict logical rules. You might know the address and some traffic laws, but dealing with unexpected road closures or traffic jams requires more than just pure logic. You need to be able to adapt and make decisions based on incomplete information.
Despite these hurdles, the "Thinking Rationally" approach highlighted the importance of logical reasoning in AI and paved the way for systems that could make deductions and solve problems in well-defined areas.
Acting Humanly: The Turing Test Approach
Instead of focusing on how a machine thinks, the "Acting Humanly" approach, most famously embodied by the Turing Test, shifts the focus to behavior. Proposed by Alan Turing in 1950, the idea is that if a machine can behave in a way that is indistinguishable from a human in a conversation, then we can say that it is intelligent.
The standard Turing Test involves a human evaluator (the interrogator) engaging in natural language conversations with both a human and a machine, without knowing which is which. The computer passes the test if it can answer the questions posed by the interrogator in such a way that the interrogator cannot reliably tell the difference between the computer's responses and those of the human.
This approach sidesteps the complex and often philosophical questions of consciousness and genuine understanding. It focuses purely on the outward manifestation of intelligence – the ability to communicate and interact in a way that is convincingly human.
To pass the Turing Test, a machine would need to possess a wide range of capabilities:
Natural Language Processing: To understand and generate human language fluently.
Knowledge Representation: To store and access a vast amount of information about the world.
Automated Reasoning: To use the stored knowledge to answer questions and draw inferences.
Machine Learning: To adapt to new information and improve its responses over time.
While the original Turing Test focused solely on textual conversation, Turing also envisioned a Total Turing Test that accounts for the physical embodiment of a human being, which would require two additional capabilities of a machine:
Computer Vision: To perceive the physical world.
Robotics: To interact with and move around objects in the physical world.
Together, these six disciplines encompass much of the research in AI today, highlighting the enduring relevance of the Turing Test even after more than 70 years.
However, a significant critique of this approach is its overemphasis on mimicking human behavior rather than focusing on implementing the underlying principles of intelligence. The analogy of the Wright brothers is often used here: they succeeded in creating airplanes not by mimicking the flapping of birds' wings, but by understanding and applying the principles of aerodynamics. Similarly, some argue that AI should focus on the fundamental principles of intelligence, even if it doesn't result in systems that perfectly imitate human behavior.
Despite this criticism, the Turing Test remains a powerful benchmark and a driving force behind progress in areas like natural language understanding and human-computer interaction.
Acting Rationally: The Rational Agent Approach
The "Acting Rationally" approach moves beyond simply thinking or acting like a human. Instead, it focuses on designing rational agents – systems that act to achieve the best outcome or, when there is uncertainty, the best expected outcome. A rational agent strives to do the "right thing" given its goals and its current knowledge of its environment.
This approach introduces us to a key entity in the AI world: the agent. An agent is simply anything that can act (the word comes from the Latin "agere," meaning "to do"). While all computer programs perform actions, what distinguishes an AI agent is its ability to understand its current environment, operate autonomously, adapt to changes, and ultimately pursue the specific goal for which it was created.
Sounds so filmy! Agent 007 reporting for duty, sir. But this is the new reality: we have a variety of AI agents to do tasks for us.
If we look at the current AI landscape, there's a significant emphasis on creating agents for specific domains. For example, "vibe coding," a term recently coined by Andrej Karpathy, describes a scenario where a coder prompts a coding agent, and the agent proceeds to write code, update existing files, and create new ones.
How is this different from a tool like ChatGPT providing coding snippets? The key difference lies in the agent's autonomy and goal-oriented behavior. These AI agents can perform a sequence of tasks, from creating new files and installing necessary packages to writing and integrating code across multiple files, all to achieve a specific feature or goal defined by the prompter. There are already numerous coding agents available, with GitHub Copilot being my personal favorite.
This illustrates perfectly what is expected from an AI agent according to the rational agent approach: to perceive, reason, and act in a way that maximizes the chances of achieving its objectives within its environment.
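The perceive–reason–act loop described above can be sketched in a few lines. The thermostat agent and `run` loop below are hypothetical toys of my own, not any real framework's API; the point is only the structure of a rational agent choosing the action that best serves its goal.

```python
class ThermostatAgent:
    """A tiny rational agent whose goal is to keep a room at a target temperature."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # Reason: pick the action that moves the world toward the goal.
        if percept < self.target:
            return "heat"
        elif percept > self.target:
            return "cool"
        return "idle"

def run(agent, temperatures):
    # The agent loop: perceive the environment, then act on each percept.
    return [agent.act(t) for t in temperatures]

agent = ThermostatAgent(target=21)
print(run(agent, [18, 21, 25]))  # ['heat', 'idle', 'cool']
```

Real agents face uncertain, partially observable environments and must maximize *expected* outcomes, but even this toy shows the essential perceive–reason–act cycle.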
The "Acting Rationally" approach is considered a more principled and general approach to AI because it focuses on the fundamental goal of intelligent behavior – achieving the best possible outcomes – without being constrained by the limitations or peculiarities of human intelligence. This allows for the development of AI systems that can surpass human capabilities in specific domains, and it's likely that we will see the majority of future developments in the AI world following this paradigm.
Conclusion - The multifaceted quest for AI
Our journey through the different schools of thought in Artificial Intelligence, as illuminated by Russell and Norvig's "Artificial Intelligence: A Modern Approach," reveals a fascinating evolution in our understanding and pursuit of creating intelligent machines. From the early aspirations of building machines that think like humans, mimicking our cognitive processes, to the logical precision of systems striving to think rationally based on the laws of thought, the field has explored diverse paths.
The Acting Humanly approach, epitomized by the Turing Test, shifted the focus to observable behavior, challenging machines to convincingly emulate human interaction. While influential, it also highlighted the potential for focusing on mimicry over genuine intelligence.
Ultimately, the dominant modern paradigm leans towards Acting Rationally, with the goal of creating intelligent agents that can perceive their environment and act to achieve the best possible outcomes. This approach, exemplified by the rise of autonomous systems and goal-oriented AI agents, prioritizes effectiveness and efficiency, often transcending the limitations of human-like thinking or behavior.
Personally, delving into these different approaches has been incredibly insightful. It underscores the immense challenge of replicating something as intricate and still not fully understood as the human brain and the emergent mind it produces. While AI strives to create intelligence, the very blueprint of our own intelligence remains a profound mystery in many ways. Understanding these foundational perspectives not only provides a strong historical context but also helps in appreciating the nuances and trade-offs involved in building intelligent systems today. The excitement lies in witnessing how these different ideas continue to converge and inspire new innovations. The journey, as Russell and Norvig suggest, still holds vast potential for groundbreaking discoveries, with many "Einsteins and Edisons" yet to emerge in the field of Artificial Intelligence. I'm eager to see where this quest for intelligence takes us next, perhaps even shedding more light on our own cognitive abilities along the way.