Exploring the Physics of Artificial Intelligence
When teaching young children to understand the world around them, parents rely on associations and pattern recognition. This learning process is strikingly similar to the way artificial intelligence (AI) systems are trained. By feeding AI systems a plethora of examples, researchers enable them to identify patterns and make connections, strengthening the links within their artificial 'neural networks'. However, the reasoning behind these connections often remains obscure, an issue known in the field as the 'black box problem'. This opacity in AI decision-making becomes a serious concern when weighing AI's trustworthiness and safety.
The consequences of this opacity surface across sectors, from AI-powered vehicles involved in accidents they failed to prevent to AI-assisted medical devices used in diagnostics. Such challenges have given rise to a new academic discipline, the physics of AI, which applies the methods of physics to deepen human understanding of how AI systems work.
NTT’s Initiative for AI Trust and Safety
NTT Research has taken a significant step toward addressing these challenges with the launch of its new 'Physics of Artificial Intelligence Group'. Announced during NTT's Upgrade 2025 conference in San Francisco, California, the group is an offshoot of NTT Research's Physics & Informatics (PHI) Lab. Leading the initiative is Dr. Hidenori Tanaka, an expert with a PhD in Applied Physics & Computer Science and Engineering from Harvard University.
Dr. Tanaka is keen on exploring fundamental philosophical questions through the lens of AI: “As a physicist, I am excited about intelligence because, mathematically, how can you define concepts like creativity or kindness? These remain abstract without a mathematical framework. AI challenges us to express such concepts mathematically, which becomes essential if we want AI to emulate kindness, for instance,” explained Dr. Tanaka at the conference.
Addressing AI’s ‘Black Box’ Nature
The PHI Lab at NTT has long recognized the importance of deciphering the 'black box' nature of AI in order to develop systems that not only perform efficiently but also prioritize safety and trust. As AI technology advances and systems make increasingly autonomous decisions, the demand for reliable governance of AI adoption has grown rapidly.
The newly formed group at NTT Research aims to articulate the parallels between biological and artificial intelligence, seeking to demystify the mechanics of AI and foster cooperative human-AI interactions. While integrating AI into society is a contemporary endeavor, philosophers and scientists have for centuries sought to understand the relationship between technology and humanity.
AI is structurally akin to the human brain: it comprises networks of artificial neurons connected by synapse-like weights, all modeled numerically in software. Dr. Tanaka believes physics has a role to play here, stating, "Physics involves formulating and testing mathematical hypotheses about the inner workings of anything in the universe."
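The "neurons connected by synapses" picture above can be sketched in a few lines of code. The following is a minimal illustrative example, not NTT's or Dr. Tanaka's model: each artificial neuron computes a weighted sum of its inputs (the synapse strengths) and passes the result through a nonlinearity. All names and weight values here are hypothetical, chosen only to show the structure.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed into (0, 1) by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid nonlinearity

def tiny_network(inputs):
    """Two hidden neurons feeding one output neuron. The weights are
    fixed here for illustration; training would adjust them from
    examples, which is exactly what makes the result hard to interpret."""
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(tiny_network([1.0, 0.0]))  # a single scalar "decision" in (0, 1)
```

Even in this toy network, the output is a composition of numeric weights with no obvious human-readable meaning; scale this to billions of weights and the 'black box problem' the article describes becomes clear.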
Global Academic Collaborations
Continuing its collaboration with the Harvard University Center for Brain Science (CBS), NTT's new research group seeks to partner with institutions such as Stanford University, engaging scholars including Associate Professor Surya Ganguli.
Dr. Tanaka underscores the necessity of interdisciplinary approaches: "In 2017, as a PhD candidate at Harvard, I realized I wanted to pursue something beyond traditional physics, to open new conceptual realms within it."
Within the context of AI, conversations across disciplines can be enlightening. Dr. Tanaka notes that AI is a focal point everyone is eager to discuss. "NTT's mission is to catalyze these dialogues across diverse backgrounds, because every interaction is a learning opportunity," he concluded.
For more updates on AI advancements, subscribe to aitechtrend.com.
