“Artificial intelligence does not force us to ask whether machines are becoming human, but whether our idea of being human was ever as clear as we thought.”

Upcoming book
The Psychology of AI helps readers understand artificial intelligence by comparing it to how the human mind works. It explains why machines can appear intelligent, emotional, or creative, and why this impression is often misleading. The book shows where AI differs from human thinking, memory, and motivation, and why those differences matter in everyday life. Rather than asking whether machines are becoming human, it clarifies how living with intelligent machines changes how we work, make decisions, and define what it means to be human. Overviews of example chapters can be found below:

Why Compare Human and Artificial Minds
This chapter examines why comparisons between human and artificial minds are both attractive and problematic. It introduces fundamental architectural differences between biological and artificial systems and discusses the limits of anthropomorphism. The chapter establishes why treating AI systems as “human-like” often obscures rather than explains their functioning.

Consciousness: Biological, Cognitive, and Metaphysical Perspectives
This chapter explores consciousness from biological, psychological, and philosophical perspectives. Key questions include whether consciousness is necessarily tied to biological systems, and whether humans attribute consciousness to others based on similarity to their own experience.

Evolutionary Pressure and Gradient Descent: Training Systems Compared
Human minds are examined as products of evolutionary selection acting on embodied organisms, while artificial minds are shaped by training procedures and optimization objectives. The chapter extends this comparison by asking whether artificial systems may themselves become subject to forms of evolutionary pressure, such as competition, resource constraints, or adaptation to dynamic environments.
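The contrast between the two shaping processes can be sketched in miniature: below, the same toy objective is minimized once by gradient descent, which follows the exact slope of the loss, and once by a simple mutate-and-select loop standing in for evolutionary pressure. The objective, step sizes, and loop counts are illustrative assumptions, not material from the book.

```python
import random

def loss(x):
    # Toy objective: both "minds" are judged by the same fitness measure.
    return (x - 3.0) ** 2

# Gradient descent: compute the slope analytically and step downhill.
x = 0.0
for _ in range(100):
    grad = 2 * (x - 3.0)   # derivative of the loss at x
    x -= 0.1 * grad        # small step against the gradient

# Evolutionary selection (a (1+1) strategy): mutate blindly, keep survivors.
random.seed(0)
y = 0.0
for _ in range(1000):
    child = y + random.gauss(0, 0.5)   # random, undirected mutation
    if loss(child) < loss(y):          # selection: fitter variant replaces parent
        y = child

print(round(x, 3), round(y, 3))  # both settle near the optimum at 3.0
```

The point of the sketch is that the two procedures reach similar endpoints by very different routes: one exploits explicit knowledge of the objective's structure, the other only needs a way to compare outcomes.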

Can Tokens Think? Language, Prolog, and Symbolic Minds
Language is examined as a tool for representing and structuring thought. Traditional symbolic systems model reasoning through explicit rules and symbols, while large language models instead rely on statistical patterns to produce language-like behavior. The chapter explains why AI systems are designed to communicate in human-like ways and shows that, when machines develop their own forms of communication, these can differ fundamentally from human language.
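The symbolic/statistical distinction can be made concrete with two toy systems: one derives a conclusion by applying an explicit Prolog-style rule to stated facts, the other "speaks" purely by continuing frequency patterns from a tiny corpus. The facts, rule, and corpus are illustrative assumptions.

```python
from collections import Counter

# Symbolic system: explicit facts plus a Prolog-style rule.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent(x, z):
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    return any(("parent", x, y) in facts and ("parent", y, z) in facts
               for (_, _, y) in facts)

# Statistical system: pick the most frequent continuation seen in training.
corpus = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    candidates = {b: c for b, c in bigrams.items() if b[0] == word}
    return max(candidates, key=candidates.get)[1]

print(grandparent("alice", "carol"))  # True: reached by rule application
print(next_word("the"))               # "cat": reached by frequency alone
```

The symbolic system can justify its answer by pointing to the rule and facts it used; the statistical one produces fluent-looking output with no such derivation behind it, which is the core of the contrast the chapter draws.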

Fear, Love, and Reward Functions: What Minds Want
Human emotions are analyzed as adaptive control systems that compress complex decision spaces and guide behavior. Artificial agents are examined in terms of reward functions and externally defined objectives. The chapter draws a clear distinction between intrinsic valuation and instrumental optimization and examines whether “emotion” is a meaningful scientific concept for describing artificial behavior.
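The distinction between intrinsic valuation and instrumental optimization can be illustrated with a minimal agent: it has no preferences of its own and simply climbs whatever scalar reward its designer specifies. The reward function, target value, and hill-climbing loop are assumptions made for the sketch.

```python
import random

def external_reward(state):
    # The objective is defined entirely by the designer, not "felt" by the
    # agent: the agent values nothing, it only climbs this number.
    return -abs(state - 7)

random.seed(1)
state = 0
for _ in range(200):
    proposal = state + random.choice([-1, 1])    # try a small random change
    if external_reward(proposal) >= external_reward(state):
        state = proposal                          # keep whatever scores higher

print(state)  # the agent settles at 7, the designer's target
```

Swapping in a different reward function redirects the agent's entire "motivation" instantly, with no analogue of the slow, embodied history that shapes human emotion; this is why the chapter treats "emotion" as a questionable label for such behavior.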

Living Inside the Algorithm: Work, Surveillance, and Truth
AI is treated as part of the social and cognitive environment. Alongside risks such as misinformation, reduced epistemic trust, and surveillance, the chapter also explores positive applications, including AI as support systems, companions, and partners in domains such as mental health, care, and everyday life.