John Giorgi shares that 2019 witnessed a buzz about AI (Artificial Intelligence), tempered by reservations about the dangers it brings, and many people urged caution in its use. However, the global outbreak of the COVID-19 pandemic renewed the relevance of AI. Experts hope AI will act as a silver lining by helping us understand this deadly virus, anticipate its evolution, and combat its spread efficiently. Yet in the rush to innovate, AI researchers risk producing incorrect results by relying on limited datasets and questionable expert validation.
Views by John Giorgi
John Giorgi draws his suggestions from input gathered from employees and managers at large European insurance organizations that set out to develop AI processes. Insurance companies' interest in getting AI right is growing, yet the risks of these applications are increasing because they go unrecognized during the innovation process. Some of the essential lessons to learn are:
A clear definition of AI is the starting point for understanding the risks
Organizations need to understand the harm AI can cause. Without an agreed definition, managers and employees fall back on their own understanding and speculate about how AI will affect the workplace. Organizations often learn about reputational and other costs only very late, by which point the harm is done and the firm's reputation is at stake.
Know the biases surrounding AI
People generally form their expectations of AI risks from preconceived notions and assumptions, which usually stem from media consumption and individual biases. For instance, some managers see AI as a purely technical issue, while employees see it as a social one. These mismatched expectations lead both groups to underestimate the overall AI risks and overlook emerging risks during incubation.
Don't conceal AI to avert conflicts
According to research and studies, managers who begin experimenting with AI tend to conceal the technology to prevent the conflicts linked with it. Such behavior is counterproductive, as it diverts the very attention AI risks require, attention that is crucial to keep those risks from manifesting at or after launch. For some experts, the risks are greatest where AI is deployed in settings where human interaction and contact play a vital role.
Finally, AI risks can amplify a company's conventional risks: legal, financial, human, and reputational, among others. When they manifest, they can cause prolonged damage to an organization and cast the AI team and the technology itself in a dangerous light. That is why AI risks are a top priority for regulators and companies alike, and why it is essential to manage them with advanced and innovative tools. These lessons will help reduce the AI risks involved in the wake of the COVID-19 pandemic.