This original, exclusive content offers you an introduction to how AI acquires knowledge and what to consider before building your own AI solution.
To talk about Artificial Intelligence, we must first talk about the human intelligence it tries to imitate. Here, intelligence is defined as the ability to acquire knowledge from information in the environment. Humans do this in three ways (and not all of them have been imitated by AI):
1. Deduction: based on combining already known ideas through syllogisms. E.g. if we know that “All men are mortal” and “Socrates is a man”, then we can deduce: “Socrates is mortal”. Interestingly, deduction does not allow us to acquire new knowledge about the world; it simply combines already known rules to derive new ones. This was the first approach AI tried to implement, with little success.
2. Induction: based on generating rules from experience.
E.g. if I have only ever seen black crows, I can generate the rule “All crows are black”. If one day we see an albino crow, this rule is invalidated and becomes “Most crows are black”.
Virtually everything we call AI in the 21st century works this way: current supervised models need training, a process in which the model is shown examples so that it can generate a new rule by induction. They rely exclusively on experience, which makes it impossible for them to act in a general way across all problems; they only work on those they have been trained for (e.g. if I want a model to detect chairs, I will first have to show it thousands of different chairs during training, and it will only work for that task: it will not recognise people). A code sketch of this appears just after this list.
3. Abduction: based on acquiring knowledge through assumptions and probabilities. E.g. if we see the floor is wet, we can suppose that someone spilled water. This inference is made on the spot, without training, and with a high failure rate. Faced with the puddle we infer this possibility, but we know it is not the only one: there may be humidity, a broken tap, condensation from the window… we keep several explanations alive at the same time until we find the correct one.
Currently, no AI is capable of imitating this kind of logic, which would allow us to escape the dictatorship of training and build a generalist artificial intelligence.
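As a minimal sketch of induction and the “dictatorship of training” from point 2, here is a toy classifier built with scikit-learn; the features (height in cm, has a backrest) and all values are invented for the example:

```python
# A toy illustration of induction by training, assuming scikit-learn
# is installed. Features and values are invented for the example.
from sklearn.tree import DecisionTreeClassifier

X = [[45, 1], [50, 1], [47, 1],   # chairs
     [75, 0], [72, 0], [78, 0]]   # tables
y = ["chair", "chair", "chair", "table", "table", "table"]

model = DecisionTreeClassifier().fit(X, y)

# The induced rule generalises to unseen chairs and tables...
print(model.predict([[48, 1]]))   # ['chair']

# ...but anything outside its experience is forced into a known
# category: shown a person (175 cm, no backrest), the model cannot
# say "I don't know what this is".
print(model.predict([[175, 0]]))  # ['table'], confidently wrong
```

The model only ever answers within the categories it was trained on, which is exactly why a chair detector will never recognise people.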
If we want to set up our own AI, we will first have to think about what kind of problem we intend to attack, or how to adapt ours so that it looks like one of these. If we can do this, we will have taken the first step towards building our own AI. The fundamental problems that AI can solve are:
So, if we want to set up a Netflix-type film recommendation model, we can approach it in two different ways:
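The two approaches are not spelled out in the text above; a common pair in practice is content-based filtering versus collaborative filtering (our reading, not the article’s wording). Below is a minimal sketch of the collaborative-filtering route, with invented users and ratings:

```python
# User-based collaborative filtering with cosine similarity.
# Users, titles and ratings are invented; a production recommender
# would be far richer than this sketch.
import math

ratings = {
    "ana":   {"Matrix": 5, "Titanic": 1, "Alien": 4},
    "luis":  {"Matrix": 4, "Alien": 5, "Up": 2},
    "marta": {"Titanic": 5, "Up": 4},
}

def cosine(u, v):
    # Similarity between two users' rating vectors.
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[m] * v[m] for m in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    # Score unseen movies by similar users' ratings.
    seen = set(ratings[user])
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for movie, r in their.items():
            if movie not in seen:
                scores[movie] = scores.get(movie, 0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))  # ['Up'], weighted by similar users' tastes
```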
If our solution does not correspond to any of these problems… it may not require AI at all, and there may be other, simpler solutions. Or worse, it may be impossible to solve.
E.g. compare the severity of Netflix failing to recommend a movie with that of a self-driving car failing. The latter cannot be deployed even if its failure rate is negligible and much lower than Netflix’s, because a single failure could cost us a life.
It is crucial to conduct a risk analysis to determine whether the implementation of an AI model is appropriate in terms of cost and benefit.
For this reason, if the severity of failure is high, it is important to reframe the problem so that it falls within the safe zone.
This can be done by changing the AI model to reduce errors, either by retraining it or by using more recent state-of-the-art models; or by reducing the severity of the error by changing the problem itself.
E.g. we cannot achieve fully autonomous driving, but we can achieve assisted driving, which helps the human drive and whose failure does not imply an accident.
The quality and quantity of training data are critical to the success of an AI model. It is necessary to ensure that the data are:
It is quite common not to find any public database compatible with our solution. In these cases, we have a few last resorts we can turn to:
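Whichever route we end up taking, it is worth sanity-checking the resulting dataset. As a hedged sketch (the dataset and threshold are invented), two checks that almost always pay off are duplicate detection and class balance:

```python
# Two basic data-quality checks on a labelled dataset.
from collections import Counter

labelled_data = [("img_001.jpg", "chair"), ("img_002.jpg", "chair"),
                 ("img_003.jpg", "table"), ("img_001.jpg", "chair")]

# Duplicates inflate the apparent dataset size and can leak
# between training and test sets.
files = [f for f, _ in labelled_data]
duplicates = [f for f, n in Counter(files).items() if n > 1]
print("duplicates:", duplicates)

# A strongly imbalanced class distribution biases the induced rule.
counts = Counter(label for _, label in labelled_data)
total = sum(counts.values())
for label, n in counts.items():
    share = n / total
    flag = "  <-- check balance" if share < 0.2 or share > 0.8 else ""
    print(f"{label}: {share:.0%}{flag}")
```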
It is important to consider the time it takes an AI model to return results, especially in real-time applications. We can divide AI solutions into two types according to their average inference time: real-time solutions, which must respond within a fraction of a second, and batch (offline) solutions, for which minutes or even hours are acceptable.
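As a minimal sketch of how to classify a solution by its average inference time (the 100 ms budget and the stand-in model are invented for illustration):

```python
# Measure the average latency of a prediction callable.
import time

def average_latency_ms(predict, sample, runs=100):
    start = time.perf_counter()
    for _ in range(runs):
        predict(sample)
    return (time.perf_counter() - start) / runs * 1000

# Dummy stand-in for a real model call.
def predict(x):
    return sum(v * v for v in x)

latency = average_latency_ms(predict, [0.1] * 1000)
print(f"{latency:.2f} ms -> {'real-time' if latency < 100 else 'batch'}")
```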
It is essential to consider the cost of maintaining an AI model on a cloud platform, such as AWS or Azure, including the price of infrastructure and resource consumption. Each machine has its own characteristics regarding power, memory and price per hour, which we will have to take into account.
Real-time solutions are always more expensive, as they require the model to be available 24 hours a day. Heavier, more complex models also increase the cost of our solution, as they require more powerful and expensive machines. A subtler detail is the number of concurrent calls: if many people call the model at the same time, more machines must be hired simultaneously, doubling or tripling the cost.
For this reason, before implementing an AI model we should always budget how much it will cost to keep it in the cloud. The cost may be too high for the benefit it provides, leaving us with a surprise at the end of the month.
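As a back-of-the-envelope example of such a budget (all prices, capacities and traffic figures are invented placeholders; real numbers come from the provider’s pricing pages and from load tests):

```python
# Rough monthly cost estimate for a real-time model in the cloud.
PRICE_PER_HOUR = 1.20        # $/hour for one inference machine (invented)
HOURS_PER_MONTH = 24 * 30    # real-time => always on
CALLS_PER_SECOND_PEAK = 250  # expected simultaneous traffic (invented)
CALLS_PER_MACHINE = 100      # measured capacity of one machine (invented)

machines = -(-CALLS_PER_SECOND_PEAK // CALLS_PER_MACHINE)  # ceiling division
monthly_cost = machines * PRICE_PER_HOUR * HOURS_PER_MONTH
print(f"{machines} machines -> ${monthly_cost:,.2f}/month")
# 3 machines -> $2,592.00/month: peak traffic alone tripled the bill.
```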
AI models can deteriorate over time due to the phenomenon of model drift. E.g. a content recommendation model will not return new movies because they were not included in its training, so its results become obsolete over time.
It is essential to establish evaluation metrics (e.g. the number of clicks on recommended content, the percentage of undetected images in a chair detector, etc.) and to monitor the model’s performance continuously in order to detect problems and correct them with retraining.
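As a hedged sketch of such continuous monitoring, here is a toy alert on a weekly click-through rate; the figures and the threshold are invented:

```python
# Flag possible model drift when the click-through rate (CTR) on
# recommendations drops well below a healthy baseline.
weekly_ctr = [0.31, 0.30, 0.29, 0.24, 0.19]  # clicks / recommendations shown
BASELINE = sum(weekly_ctr[:3]) / 3            # healthy reference period
ALERT_DROP = 0.20                             # alert at a 20% relative drop

for week, ctr in enumerate(weekly_ctr, start=1):
    drop = (BASELINE - ctr) / BASELINE
    status = "ALERT: consider retraining" if drop > ALERT_DROP else "ok"
    print(f"week {week}: CTR={ctr:.2f} ({status})")
```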
Even though they have complex mathematics underneath, it is not complicated to understand in general terms how a given AI model works.
It is necessary to understand the strengths and weaknesses of AI models in order to use them correctly.
– ChatGPT and other LLMs are predictive language models: they return combinations of linguistic patterns and keywords associated with the questions they are given. Being models focused on autocompleting our sentences, they depend entirely on the data they were trained on, so they can neither give absolute truths nor be used as a source of truth (see the toy sketch after this list).
– Image classifiers associate visual patterns with one or more categories through training, but those patterns are not necessarily the ones a human would find meaningful. E.g. in 2016, a model trained to differentiate between wolves and Husky dogs was published. As the two are so similar, it was thought the model could be useful for distinguishing subtle visual features. Unfortunately, after several tests it turned out the model had learned to detect the snow in the background of the image, not the animal: if there was snow, it was a wolf; if not, it was a Husky.
– Generative AIs form an image, video or audio from visual/auditory patterns based on a prompt. This content is generated using noise as part of its composition, so the results are random and change each time. For rare words, the lack of matching images in the database makes the results worse or non-existent.
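To make the “autocomplete” point about language models concrete, here is a toy bigram model; it is orders of magnitude simpler than a real LLM, but it shows why such a system can only echo patterns from its training data, never verify truth:

```python
# A toy bigram "autocomplete": continue text with the next word most
# often observed in a (tiny) training corpus.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        followers = nxt.get(out[-1])
        if not followers:
            break
        # Sample the next word in proportion to observed frequency,
        # which is why the output changes from run to run.
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat on the mat"
```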
Disclosure and transparency about how the models work are essential to avoid misuse of AI.
At Multimarkts we apply our own AI, taking all of the above into account, so that we can offer our customers the best experience while minimising errors. We invite you to get in touch and see a real, highly profitable demonstration of AI in use, completely free of charge.
We want to share knowledge that helps you in your day-to-day work, and to let you get to know us a little better.