
Risks and Fears of Artificial Intelligence (AI)

Diego Cabrejo, Columnist, Más Colombia


Mathematician and Electronic Engineer, Master's in Pure Mathematics, Risk Manager, and Co-Founder of the fintech Prestanza®. [email protected]

It is common to fear the impact of Artificial Intelligence (AI) on humanity and the workplace. However, it is important to analyze the risks with objective data and a rational approach, establishing controls to avoid worst-case scenarios.

It is curious that the fear of traveling by plane is considerably greater than the fear of traveling by car, even though flying carries a lower mortality risk per kilometer traveled. This is due to several factors, such as the sense of control, previous traumas or experiences, cultural upbringing, and habit. Similarly, the fear of AI is greater than the objective risks associated with it. Let's take a closer look.


According to a meta-study by Gavin I. Clark and Adam J. Rock, up to 40% of the population suffers some degree of fear of air travel, while studies in the United States show that about 10% of people are afraid of driving a car.

How is it possible that the smaller risk (traveling by plane) causes four times more fear? There are several explanations, or thinking patterns, such as:

  1. A sense of control makes fear decrease (even though the risk remains constant).
  2. Trauma or painful experiences make fear increase (even though the risk remains constant).
  3. Education or culture can make fear of certain situations increase (even though the risk remains constant).
  4. Habit, or repeated exposure to a circumstance, makes fear decrease.

Likewise, lack of control, change, or poor adaptation makes our fear of things increase.

For example, when we combine the fear of traveling by car with Artificial Intelligence, we find that 71% of people are afraid of riding in a self-driving car, according to surveys by the American Automobile Association (AAA). That is far more than the fear of flying!

This brings us to the risks and fears generated by a technology that is changing our lives by leaps and bounds: Artificial Intelligence.


In the conversations and content I come across in daily life and on the internet, the following concerns appear again and again:

  • Artificial intelligence is going to kill millions of jobs.
  • Artificial intelligence is going to destroy humanity.
  • Artificial intelligence is going to make all our decisions in a few years.
  • Artificial intelligence is going to increase inequality and the accumulation of power.
  • Artificial intelligence is going to bring changes so big and so fast that humans are not going to be able to understand or process them.
  • Artificial intelligence has racial, political and ethical biases.
  • And a long etcetera.

All of these are valid concerns, but we must apply critical thinking to understand and define them one by one. In this way we can measure and estimate them, turning vague fears into quantifiable risks, and finally define controls and policies that reduce those risks to a level we are comfortable with.
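To make that exercise concrete, here is a minimal sketch of how a risk manager might score such concerns, using a simple probability-times-impact model with a mitigation factor for controls. Every concern, number, and threshold below is an illustrative assumption, not measured data.

```python
# A minimal sketch of the fear-to-risk exercise described above.
# All names and figures are illustrative assumptions, not real data.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float     # estimated likelihood of the event (0..1)
    impact: float          # estimated severity if it occurs (0..10)
    control_effect: float  # fraction of the risk mitigated by controls (0..1)

    @property
    def inherent_score(self) -> float:
        """Risk before controls: probability times impact."""
        return self.probability * self.impact

    @property
    def residual_score(self) -> float:
        """Risk remaining after controls are applied."""
        return self.inherent_score * (1 - self.control_effect)

# Hypothetical AI-related concerns, scored with made-up estimates
concerns = [
    Risk("Massive job displacement", probability=0.6, impact=8, control_effect=0.5),
    Risk("Biased automated decisions", probability=0.8, impact=6, control_effect=0.7),
    Risk("Loss of human oversight", probability=0.3, impact=9, control_effect=0.6),
]

TOLERANCE = 2.0  # the residual level we decide we are comfortable with

for r in sorted(concerns, key=lambda r: r.residual_score, reverse=True):
    status = "acceptable" if r.residual_score <= TOLERANCE else "needs more controls"
    print(f"{r.name}: inherent={r.inherent_score:.1f}, "
          f"residual={r.residual_score:.1f} -> {status}")
```

The point is not the particular numbers but the discipline: once a fear is expressed as a score, it can be compared, prioritized, and driven below a tolerance threshold with concrete controls.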

Let us recall some fundamental principles about Artificial Intelligence (AI) set out in the book Power and Prediction, by A. Agrawal, J. Gans and A. Goldfarb.

First, AI requires data to improve, so with no data, or with poor data, it will not be able to replace jobs or drive cars effectively. Second, AI is created by people, and it inherits human biases and limitations. Third, the information AI provides is an input to decision making, but responsibility always lies with people.

Therefore, as long as processes keep humans in the loop, responsible for adjusting, feeding, analyzing, and validating the information that Artificial Intelligence provides, we will have enough power to control the risks and exorcise our fears of this new force, which has arrived and is not going away.

To close this trilogy on Artificial Intelligence, my next analysis will turn to the risks that destroy big companies (and small ones too) and how we can mitigate them.

Note: Proofreading of this article was done by attorney Alexandra Kurmen and ChatGPT.