Published 2026-01-31
Keywords
- Large Language Models
- Symbolic Artificial Intelligence
- Anthropocentrism
Copyright (c) 2026 Stefano Ferilli

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Abstract
Despite their long history and the variety of approaches they encompass, Artificial Intelligence (AI) and Machine Learning have nowadays become, in everyday language, synonymous with Large Language Models (LLMs) and Deep Learning. These two groundbreaking technologies have recently become widespread, overcoming technological and social barriers and making AI available and familiar to everyone. Their success and performance are leading many people to believe that they are the ultimate solution to many problems, and that we can blindly trust their outcomes, even when important values are at stake. This talk will frame these technologies, and especially LLMs, within the overall history of AI, highlighting their shortcomings, discussing some of the ethical issues they pose, and advocating a more balanced attitude towards them, so as to take full advantage of their use while avoiding the risks they inherently bring. Knowing them better will allow us to decide if, and under what conditions, we can safely hand over the keys to our home, or when other less popular, but more appropriate, approaches to AI should be used.