The artificial intelligence systems available today are works of wonder. Inspired by the brain and fueled by training data within clearly defined limits, they can defeat masters at chess and Go, identify objects, and tell nearly identical objects apart. Yet they still fall short in more subjective arts, such as music composition or writing.
To overcome this limitation, some researchers have shifted their focus to a surprisingly humble ability: the sense of smell. They are now studying how living beings convert chemical information into sensations, a mechanism that could address several problems in A.I. Moreover, olfactory circuits resemble other complex brain regions, so understanding them could lead to better machines.
Cutting-edge machine learning already mimics at least part of our visual system, which prioritizes the extraction of relevant information. Data reaching the visual cortex is sorted hierarchically: salient details such as edges, colors and textures are extracted first, along with a spatial map of where they occur. As the information is processed by larger numbers of neurons, the representation becomes more precise, and details like the shape of a dress can be identified and categorized.
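The edge-extraction step described above can be illustrated with a simple image filter. The tiny image and the kernel values below are illustrative assumptions, not data from any real vision system:

```python
import numpy as np

# A tiny grayscale "image": a bright square on a dark background.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# Sobel-style kernel that responds strongly to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def filter2d(img, k):
    """Slide the kernel over the image (valid mode, no padding)."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * k)
    return out

edges = filter2d(image, kernel)
# The strongest responses line up with the left and right
# borders of the bright square.
```

Stacking many such filters, and feeding their outputs into further layers, is essentially how the early stages of a deep vision network build up from edges to shapes.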
Deep neural networks work in a similar fashion, prioritizing relevant information, a trait that made them revolutionary when they were first introduced. The catch is that they require intensive training to do it. Thousands of pictures of blue dresses must be fed in before a network can reliably classify the differences. Given enough samples, it can then find the dress even in unfamiliar settings.
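The sample-hungry training loop described above can be sketched with a minimal classifier. This is not a deep network, just logistic regression on synthetic two-feature "images" under an invented ground-truth rule, but it shows the same pattern: many labelled examples, repeated gradient updates, then generalization to unseen data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each "image" is reduced to two features
# (say, average hue and brightness); class 1 plays the "blue dress".
def make_samples(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # invented ground-truth rule
    return X, y

def train_logistic(X, y, steps=500, lr=0.1):
    """Minimal logistic-regression classifier trained by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Train on thousands of labelled samples, then test on unseen ones.
X_train, y_train = make_samples(2000)
w, b = train_logistic(X_train, y_train)
X_test, y_test = make_samples(500)
accuracy = np.mean((X_test @ w + b > 0).astype(float) == y_test)
```

Cut the training set down to a handful of samples and the test accuracy drops, which is exactly the data dependence the article describes.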
Neural networks have become very popular among researchers and in practical applications such as language recognition and other tasks. While they are incredibly powerful, they still demand amounts of data and computation that biological beings do not, which remains a major drawback.
One such situation arises with self-driving cars. While you are driving, the environment shifts constantly in real time, but your brain compensates for the changes and lets you stay focused on the task at hand. A.I. tends to make mistakes here because the information keeps changing and its visual system cannot keep pace.
To smell or not to smell
While vision-based information is concrete, smells are fundamentally different: they have no clear shapes, colors or sizes. The structure of the olfactory system is also radically different. Odors are analyzed by a shallower network that is considered less complex than the visual one. Neurons in olfactory areas therefore do not prioritize information; they capture scattered traces that follow no specific set of rules. Where the visual cortex builds maps, the olfactory system cannot offer such a specific result, as the information it receives is far more diffuse.
This is why the olfactory system uses a considerably smaller number of neurons. When we are tracking a particular smell, only a small fraction of those neurons become highly active, focused on hunting down that one odor.
Several experiments have already shown that a neural network which mimics our olfactory system is better at pinpointing targeted data, since it can focus without needing a large amount of prior data to run comparisons. It is also faster, because the system learns to recognize only what is already considered important and to ignore anything outside the given parameters.
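One well-known olfactory-inspired scheme works roughly as the article describes: expand the input through a fixed sparse random projection, then keep only the most active units, so each odor gets a sparse "tag" rather than a detailed map. The sketch below is a loose illustration of that idea; the dimensions, the 10% connection density and the top-k cutoff are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Olfactory-style coding: a fixed, random, sparse expansion followed
# by winner-take-all. No training on thousands of samples is needed;
# the projection is wired once and reused.
N_IN, N_OUT, K = 50, 2000, 100

# Each output "neuron" samples a random ~10% of the inputs.
projection = (rng.random((N_OUT, N_IN)) < 0.1).astype(float)

def olfactory_tag(odor):
    """Return a sparse binary code: only the K most active units fire."""
    activity = projection @ odor
    winners = np.argsort(activity)[-K:]   # winner-take-all cutoff
    tag = np.zeros(N_OUT)
    tag[winners] = 1.0
    return tag

odor = rng.random(N_IN)
tag = olfactory_tag(odor)
# Exactly K of the 2000 units are active; similar odors produce
# largely overlapping tags, which makes lookup fast.
```

Because only K units fire per input, downstream comparisons touch a tiny slice of the network, which is where the speed and data-efficiency advantages described above come from.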
Building a network that can learn as it goes could dramatically reduce costs and increase flexibility in the long run.
It remains to be seen how the new methods can be applied in the future.
Juana loves to cover the tech and gaming industry; she always sits in the front row at the CES conference and reports live from there.