Artificial Intelligence challenges us far more than we imagine, not so much because of its scope but, long before that, because of the questions it demands of us.
AI remains a mystery to many and a source of fear for many others. However, and it is worth remembering this, AI is above all an exact science. It is built on the most solid thing we humans have managed to generate: data, which, once organized, becomes objective information. As such, AI leaves no room for ambiguity or free interpretation.
It is a discipline that guarantees a concrete result based on the premises we give it. Yet this is precisely where we often commit the original sin of our relationship with AI, especially when we think about it within our organizations.
Over the past year, our conversations with executives and companies focused almost exclusively on Artificial Intelligence and how to leverage it. Surprisingly, in most cases the preliminary step had not been taken. We cannot think about AI as an organization unless we first understand that, in order to talk about Artificial Intelligence, we have to prepare the entire organization for that new universe: one built on information that, in the best of cases, becomes knowledge for all of us. Otherwise, we will quickly find ourselves talking more about fears than about opportunities. A quick search of the Internet for the future scenarios AI promises often leads to the conclusion that AI is ultimately a "threat" to humanity. This is where I see an ambiguity that many leaders are not yet aware of.
Short-term Application
In the world we live in, Artificial Intelligence is already a reality. We work with it when we implement automation technologies; when we use a chatbot as users; when we search for something on the Internet; when we buy something on an e-commerce platform; and especially when we interact with fascinating tools like ChatGPT or related engines.
Where we fail as leaders who should be preparing our organizations for the AI era is in not holding the internal conversations needed to make the mindset shift required to operate in a world that will, without doubt, be anchored in Artificial Intelligence.
By this I mean leaving behind the idea that AI has come to wipe us off the face of the earth and starting to think about how we need to reconfigure our companies' dynamics so that we are ready to be enhanced by AI.
In other words, it is not about implementing AI-based technologies and assuming that is enough; it is about embracing the idea of learning to operate with AI across the board. It is not about technology but about culture; not about implementation but about change management.
The first step is to learn to split our conversation about AI in two. The first conversation focuses on understanding, and limiting, the discussion to the type of AI-based technologies we are using or need to use. The second requires us to ask how we want to operate with AI: within what framework we are willing to do it and, better still, with what set of values we, as an organization, want to work with AI.
The Long-Term Ethical Vision of AI
Because we have already learned and accepted that AI is not just another technological evolution: it is a new world, one that will change our world, as anticipated by the author Yuval Noah Harari (Sapiens: A Brief History of Humankind; Homo Deus).
Now it is up to us to shape, within our companies, the cultural framework in which we want to take that evolutionary step.
We need to (re)define how we do what we do as an organization, but now for a world in which artificial intelligence is the driving force behind everything we do.
This demands that we know how to sustain a culture of change and, first of all, that we define an ethical framework: the set of values that will govern how we work with people, with resources, and with and for our community in a world of artificial intelligence. The path we have traveled in genetic research illustrates this: we accepted cloning an animal, but we refrained from attempting it with a human being.
This is the conversation we are not yet having in the realm of AI, neither as organizations nor as their leaders. Given the speed at which AI is advancing, it is urgent that we have it sooner rather than later, because no one doubts any longer that AI is a reality permeating ever more of what we do. As mentioned above, there is hardly an area today that is not implementing or building on an application of AI to improve processes and experiences. However, considering AI from that perspective alone only allows us to work with it in the short term.
The ethical framework is the vision that opens our eyes to how we can take advantage of AI in the long term. We are on a journey to a new world. Like Christopher Columbus in his time, or the Apollo 11 crew, we do not know what we will encounter along the way. Having values and an ethical framework to guide and support us on that journey will allow us to adapt to what we find, instead of destroying ourselves.
By Alejandro Goldstein, Partner at OLIVIA