CONNECTIONISM vs SYMBOLISM
To understand Artificial Intelligence, you have to start at the beginning, with the two main rival currents that have animated the discipline from the start: connectionism and symbolism. This rivalry still exists today, but connectionism has taken the lead in recent years, thanks to recent breakthroughs in artificial neural networks.
Both currents share the same ultimate goal (even if some dare not admit it): the development of the famous Artificial General Intelligence (AGI). They simply approach it in two different ways.
The first current, connectionism, aims to copy the human body as closely as possible, down to the smallest details. In this vision of things, brilliant scientists like Yann LeCun, Yoshua Bengio, Geoffrey Hinton and many others analyzed the electrical signals of biological neurons (axons, synapses...). Neurons are the base unit: the smallest component which, connected in a network, can model mental or behavioral phenomena. From these circulating electrical signals, they deduced mathematical formulas that reproduce the biological behavior. These worked so well that they managed to reproduce, for example, vision, hearing, other senses, and even algorithms for games like Go and chess, and virtual humans convincing enough to be mistaken for real ones... But each time, nothing is explainable. These are black boxes that do incredible things but cannot tell you "why" or "how". Artificial neural networks function like reflexes, like instinct: they react immediately, but no explanation is possible.

In this connectionist vision, the hope of achieving AGI lies in reproducing the living as faithfully as possible, on the understanding that "if we reproduce the biological neuron identically in an artificial neuron, artificial general intelligence will necessarily emerge sooner or later." And it's not a bad idea at all. It holds up. In this world, we copy the body and work our way back up to the mind. Be careful, though: there is also a lot of hype designed to charm investors who do not necessarily understand much. But it's fashionable. Neural networks are very efficient at the low level, at reproducing what is instinctive or reflexive, which I think explains why the first breakthroughs came so quickly. I think that at the high level it is possible to fully reproduce reasoning too, but it is much harder than with symbolic algorithms. In fact, neural networks only work after training on a considerable mass of data.
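To make the biological inspiration concrete, here is a minimal sketch of a single artificial neuron: a weighted sum of inputs passed through a sigmoid activation, loosely mimicking a biological neuron that fires once incoming signals cross a threshold. The input values and weights below are arbitrary, chosen only for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs passed
    through a sigmoid activation -- the smallest unit of the
    networks discussed above."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # "firing rate" between 0 and 1

# Arbitrary example values, for illustration only.
print(neuron([0.5, 0.8], [0.9, -0.4], 0.1))
```

Connected in layers, thousands of these units form the networks whose collective behavior is so hard to explain: each individual formula is simple, but the trained whole is a black box.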
To reproduce reflection, you would therefore need a mass of data that captures reflection... something we do not have at the moment. The algorithm is locked in by repetitive training. The brain does not need 10,000 repetitions to learn; this is a flaw of neural networks. But when a network is trained properly it becomes more efficient than a human, and that is its strength. Strength and weakness, as always. I think that to go further, neural networks must be improved to get past the black-box stage. Research should focus on explainability, which is the step above. Reaching only the instinct stage is not enough to make a machine think.
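The point about repetitive training can be seen in a toy sketch: a single neuron learning the logical AND function by gradient descent. Even for this trivial task, the network only gets there by looping over the same four examples thousands of times, where a human would need one explanation. The learning rate and epoch count are arbitrary choices for illustration.

```python
import math, random

random.seed(0)  # fixed seed so the run is reproducible

# Four training examples for the AND function -- the entire "dataset".
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def predict(x1, x2):
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

for epoch in range(5000):  # thousands of passes over just 4 examples
    for (x1, x2), target in data:
        out = predict(x1, x2)
        grad = (out - target) * out * (1 - out)  # squared-error + sigmoid gradient
        w[0] -= 0.5 * grad * x1
        w[1] -= 0.5 * grad * x2
        b -= 0.5 * grad

print([round(predict(x1, x2)) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

And once trained, the knowledge is just three numbers (two weights and a bias): the network answers correctly, but nothing in those numbers explains "why" in any human-readable way.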
The problem with this vision: it takes a lot of data, quality data, strongly tied to this ultimate algorithm, because it is this data that will shape and lock in the algorithm... Complicated. You would have to put sensors in a brain you do not fully understand and record... and then, with this data, try to fit a neural network... Really complicated. Second problem: the black-box effect... no explanation possible. AGI seems possible to me by this route, but still far off, likely not within my lifetime.
The second current, symbolism, takes the reverse path. We try to reproduce the mind first, with symbolic algorithms that reproduce mental states, and then descend to the body, which matters little. Whether the AGI wakes up in a silicon computer, in a programming language, in a DNA computer, or in any other possible medium, not necessarily a neural network, it doesn't matter: in all cases, the AGI will be "awake". This current has been somewhat forgotten; few discoveries have been made in recent years, so this is a "winter" period for symbolism. The summer and winter periods alternate, first for one current, then for the other.

In this current, everything is seen as a sequence of instructions and manipulations of symbols. Indeed, a mental state is a functional state, part of a sequence of instructions that can be nested (the ultimate algorithm), that is, a state that is connected (or not) to other mental states. When you read, you live the book, but you do not take it for your own life: it remains a book, a "story". The story in the book is part of a parent mental state that is aware at all times that it is just a story.

This current is therefore, for me, a simpler means of reproducing reflection and consciousness, because the algorithms can be written directly rather than locked in by training data. Here, no data is required: no data strongly tied to the algorithm, no recordings dug out of the head of a human permanently plugged into his brain's recording mode. But you still have to succeed in writing this ultimate algorithm. The problem with this vision: symbolism doesn't solve everything either. You cannot reproduce instinct without neural networks.
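A minimal sketch of what "manipulation of symbols" means in practice: knowledge written directly as rules, with conclusions derived by forward chaining. The facts and rules below are hypothetical examples; the point is that no training data is involved, and every conclusion can be explained by pointing at the rule that produced it.

```python
# Knowledge written directly as symbols and rules -- no training data.
# Each rule maps a set of premise symbols to a conclusion symbol.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

# Forward chaining: keep applying rules until no new fact appears.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            # Unlike a neural network, the system can say *why*.
            print(f"{conclusion} (because of {sorted(premises)})")
            changed = True
```

The contrast with the trained neuron is the whole debate in miniature: here the "algorithm" is legible and explainable line by line, but someone had to write every rule by hand.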
CAN ONE DO WITHOUT THE OTHER?
You will have understood: each current has its advantages and disadvantages. And so the big question is there: "Can one do without the other?" I think that for the moment (when neural networks become more powerful, we may change our minds...), connectionism cannot do without symbolism. Indeed, building a reasoning machine cannot be done by instinct, in black-box mode, because the essence of reasoning is precisely that it is not done by instinct. When we reason, we reflect; it is not a reflex. Unless there is a noticeable improvement in neural networks in the coming years, I think a "summer / winter" swing will soon take place, and symbolism should regain the upper hand. I also think that symbolism can do without neural networks to begin with, because the missing brick, and not just any brick, the most important one, is "writable" with symbolic algorithms. A machine that reasons is a super AI. Even if it does not see and has no instinct, it will be able to add instinct itself, since it reasons. This is the advantage I see in symbolism: the simplicity of writing the minimal brick that makes the machine autonomous. The singularity will handle the rest...