I will start with the seemingly naïve call to pause AI. We all know that technology is fluid and democratizing. No one can control the spread of tech, given a sizeable international ecosystem of millions of businesses, entrepreneurs, developers, investors, and entire countries. Even governments have tried to rein in technology and failed (social media, for example). China, for one, might disagree if the US tried to pause AI. Investors working with entrepreneurs under tight profit goals will likely ignore the call. Now, if that argument sounds logical to you, why make the call in the first place?
The truth is that technology spreads and disrupts lives and businesses so fast (25 years ago, the internet was just taking off; 16 years ago, the iPhone was launched; about a decade ago, the first Tesla Model S rolled out) that we all fear the potentially devastating consequences of AI. It is the most impactful technology ever created. In a recent blog post, Microsoft co-founder Bill Gates said, "the development of artificial intelligence (AI) is the most important technological advance in decades." The tech executives making the "pause it" call know all of that and now seem to be adopting a rhetoric to shield themselves from future blame for its unintended consequences. Given the magnitude of the disruptions ahead, that poor rhetoric will not save face. In my book "(NON) HUMAN INTELLIGENCE," I carefully describe the implications of these threats and potential solutions.
We all need to accept that the nature of AI makes it challenging to predict the frequency and magnitude of the shifts in our social tectonic plates. The problem is that every model generation arrives faster than the last. As a down-to-earth example, we all witnessed the jump in quality and applicability when GPT-4 replaced GPT-3.5 in ChatGPT less than four months after its launch. The more computing power we give these models, and the more data they access as we adopt and use them, the faster they learn and become functional, and the bigger the impact we feel. And as we develop new learning and generative models, less human intervention will be needed to evolve AI or to do any work. Remember, machines can now pick strawberries, or write convincing ads, measure their results, and rewrite them for even better performance while you sleep or have lunch, soon with no human supervision. Still, I do not believe in a dystopian future.
We urgently need to get organized and act fast to avoid unnecessary suffering. The good news about this new wave of technology is that it will create an immense amount of new economic value. Cathie Wood has estimated that new technologies will generate four hundred trillion dollars of incremental value in the coming decades. That is enough value to fund social cushions for severely impacted families globally! Remember that global GDP has not yet reached one hundred trillion dollars, so the additional value would be four times everything we produce today.
Our generation's homework comprises two main groups of activities. First, create and enforce strict ethical rules for every piece of software built on AI. That is not an easy task, as basic notions of right and wrong seem less evident in a divided country. Now imagine that same discussion elevated to a global forum at the United Nations in today's polarized world. Second, create social mechanisms to protect disrupted families and guarantee high-quality education for all children. The impact on education deserves an entire discussion in a separate article. I believe it is time to seriously discuss mechanisms such as social bonds and Universal Basic Income (UBI). Pilots in many countries show that the social and economic benefits surpass the programs' costs and can catapult children to living standards that would otherwise take generations to achieve. In the coming years, as we watch the rich get richer and the poor get poorer ever faster, we will have to use our hearts more than our minds. Those profound changes will put our morals to the test in ways they have not been tested before.
In 1942, Isaac Asimov proposed the "Three Laws of Robotics" in a short story called "Runaround." The laws were later expanded and used repeatedly in his works. The laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first and second laws.
Would it not be great if we could apply that first law to ourselves? A human being may not injure another human being or, through inaction, allow a human being to come to harm physically, psychologically, or socially. It reads almost like a distant dream. If we cannot achieve higher morals among ourselves and do not actively promote unconditional happiness for our kind, how can we expect to regulate and teach machines to help us with the task? Certainly, pausing AI is not the answer.