Artificial intelligence

Ignacio Escañuela Romana

I often come across reflections on whether artificial intelligence will be conscious, will acquire human or superhuman capabilities, or will advance in intelligence far beyond the human species that created it.

What do I think about this? Well, frankly, I don’t know. Why the ignorance? Because we fundamentally don’t know how human consciousness and its capacity for intelligence work. So we can hardly understand anything else.

But I do dare to offer a reflection that is a little out of the ordinary these days.

The first is a reflection on intelligence. Intelligence is the ability to do things and to find new ways of doing them. I usually teach that there are three types of these capacities, which do not coincide very much with what I find these days (but which, I must admit, I read years ago in a newspaper article that unfortunately I cannot reference, because I simply have not managed to find it again; perhaps it drew on Piaget and his stages of intelligence as adaptation). They are:

The ability to deny an impulse. Well, a dog at home has it. I doubt that current AI has it, starting with the fact that it has no impulses.

The ability to find a new way to satisfy the impulse: to innovate a procedure. Many higher primates have this; they improvise tools and new ways of doing things. Again, an AI does not have this, because it lacks impulses in the first place.

Abstract or symbolic ability, which I like to see, in Cassirer’s way, as the ability to construct symbols (arbitrary signs). Anyway, sorry to be such a doomsayer, but an AI doesn’t have it either. The signs it handles are not symbolic for it, and they are not symbolic simply because it lacks semantics (Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424). It has no meanings. It gives me a wonderful recipe for cod, but it doesn’t know what hunger is, what cod is, what onion is, and so on.

So why do we call it artificial intelligence if, as I have pointed out, I don’t see that intelligence? Let’s say I want to clean a floor of breadcrumbs; I could do that by putting an anthill there for a while. Do the ants intend to do that because I have put them there? No. Do they know the reasons why I put them there? No. And yet they clean the floor of breadcrumbs far better than I could.

(Yes, I know that AI is often classified into AI that uses simple rules and little learning, AI that uses complex rules and learning to apply them, and AI that uses learning to generate new complex rules. But AI rules are correlation tools, and the results are an output. The meaning is assigned by humans.)
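The point about correlation can be made concrete with a toy sketch. This is a hypothetical minimal example of my own, not any real AI system: a bigram model that only counts which word follows which in its training text, and then strings words together from those counts. It can emit a plausible-looking "recipe" fragment while having no notion whatsoever of what cod or onion are; the output acquires meaning only when a human reads it.

```python
import random

def train_bigrams(text):
    """Count which word follows which in the training text.

    The 'model' is nothing but a table of observed successions;
    the words are opaque tokens to it.
    """
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Emit a chain of words by sampling successors from the table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A tiny invented training corpus, chosen only for illustration.
corpus = ("fry the cod with onion . salt the cod and onion . "
          "serve the cod with onion and salt .")
table = train_bigrams(corpus)
print(generate(table, "fry", 8))
```

Every word the generator produces comes straight out of the correlations in its training data; there is no semantics anywhere in the procedure, which is exactly the distinction drawn above.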