It has been reported that Artificial Intelligence has predicted the winning number for this year’s Christmas lottery draw: 03695. This can only mean one thing: for the time being, AI is not as smart as we thought. You cannot predict a random event. (No, you can’t, even though you’ve heard a thousand times that it is possible if you just gather enough data.) In independent random events, whatever happened previously has no influence on what will happen next.
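To put numbers on that claim, here is a minimal Python sketch (the 00000–99999 ticket range is that of the Spanish Christmas draw; the simulation size and seed are arbitrary):

```python
import random

# Spanish Christmas lottery tickets run 00000-99999, so any single
# number (03695 included) wins the top prize with probability 1/100,000.
N = 100_000
p_any_number = 1 / N  # 1e-05, regardless of any previous draws

# Independence: simulate many draws. Knowing the whole history of past
# winners does nothing to improve the odds for the next one.
random.seed(42)
draws = [random.randrange(N) for _ in range(1_000_000)]
hits = sum(d == 3695 for d in draws)

print(p_any_number)       # prints 1e-05
print(hits / len(draws))  # hovers around 1e-05; more data, same odds
```

However much "data" the model is fed, the empirical frequency only converges to that same 1/100,000; it never points at next year's winner.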
Obviously, I don’t think this is a real AI bug. Rather, it looks like a poorly formulated question and a poorly understood answer, served up by some clickbait-oriented journalism. However, the number sold out within a few hours at the lottery offices in Elche where it was available. This suggests that either most of us don’t understand mathematics, or we lack a critical spirit when reading the news. Or both.
Maybe the issue with AI isn’t that machines are too intelligent, but that we are not intelligent enough.
Exactly a year ago, like many of us, I approached generative AI with curiosity and distrust. Here is my experience: My interview with AI
Today, a year later, we have all changed a lot, the AI and I included.
Some daily applications for office work
As a fan of continuous learning, I’ve been incorporating changes into my routines over the past months, and ChatGPT in its free version played a crucial role. It simplified tasks like drafting contracts or NDAs, summarizing pre-2021 company annual reports, and improving the readability and SEO rating of web pages.
Exploring the subscription version revealed further practical possibilities. Since it accepts user-uploaded documents, the system can “analyze” Excel reports or other files, identify patterns and present usable conclusions. It can also access post-2021 information from the internet, extending beyond the free version’s training cut-off.
I also tested the “collaborative articles” on Linkedin. Essentially, these are articles orchestrated by AI with the valuable input of some humans who voluntarily contribute ideas and knowledge in exchange for a few “likes.” Yes, it is exactly that, and even though this description might sound slightly dystopian, many of us are already happily participating – including me.
However, these small changes in my life are nothing in comparison to the transformations witnessed in the industry.
A hectic year
A brief summary of what has happened in this field over the year starts with the boom in generative AI. ChatGPT opened for public use at the end of 2022 and now boasts 100 million weekly users worldwide. Graphic design AI applications have evolved from the crude first drawings of Dall-E a year ago to very sophisticated video creation systems. The latest toy, Freepik Picasso, allows graphic instructions to be given in the form of a simple drawing and has a waiting list for free tests.
Along the way, Pope Francis has been dressed in a rapper’s anorak, which is funny but shows how easy it is to create fake news, and unfortunately a scandal involving the abuse of images of minors has also become possible. A technology is neither good nor bad; it depends on the use we make of it. It is difficult to find a technology that humanity has not used for evil at some point.
Among those concerned about the good use of AI, the Future of Life Institute, founded by Max Tegmark, stands out. They are especially active in the preventive fight against smart weapons. Tegmark, alongside other scientists, signed a controversial letter in March calling for a pause in the development of giant AI systems until secure regulations are in place.
The signatories included business figures such as Elon Musk and Steve Wozniak, who might be suspected of simply seeking to block their smaller competitors. On the other hand, there were also esteemed scientists such as Turing awardee Yoshua Bengio, historian Yuval Noah Harari and Max Tegmark himself. The fact that these scientists are concerned is concerning to many of us. They pose the question: if a general AI is developed, how can we be sure that it will not learn to establish and pursue its own goals?
General Artificial Intelligence denotes an AI capable of learning across any domain in a manner equivalent to humans, in contrast to specific AI. Specific AI works within the limits of a narrow field of specialization: it plays chess or writes texts or creates photos, but does not learn to do everything, since it is subject to very specific objectives. Theoretically, general AI could pose a threat to humanity if it learned to reason and set its own goals independently of ours.
Other experts, such as Andrew Ng, believe that there is no need to worry about the risk of extinction of the human species. However, this does not mean a total lack of concern: Ng sees real threats to information, democratic participation and the labor market, among other areas.
Are we still in time to regulate in order to mitigate these risks? The European Union has just taken a first step by approving the Artificial Intelligence Act, following intense debates among institutions, experts, and the technology lobby. In the coming months we will see whether the balance has been found to mitigate risks while allowing growth of emerging European industries, and whether this regulation serves as a model for other regions. Many questions are still open.
That debate was on the table last November when a sensational surprise broke: OpenAI fired its CEO, Sam Altman. Subsequent reports suggested that the company might be developing a disruptive and high-risk General AI under the Q* project, and that Altman’s dismissal might be linked to it. However, shortly after this leak, Altman was reinstated, apparently due to pressure from the employees themselves.
The details around these events remain opaque and trigger speculations.
Should we worry?
Among the risks potentially posed by AI, I’m least worried about it taking over the world. The possibility of AI creating its own goals seems like theoretical speculation, too remote for us to influence or worry about. And, on the other hand, looking around us – climate change, inequality, wars, genocides… – it does not seem that we have been very good at writing history. Pardon the cynicism, but perhaps the machine would not do worse than us.
The second danger, and this one seems more real, is that it will take our jobs – or at least profoundly alter the structure of the labor market. Undoubtedly, changes are coming, and the challenge lies in ensuring that the transition to a new work model doesn’t cause suffering but instead fosters progress.
We need to discuss what we want work to be like in the future, and how we will prepare for it. Will the great improvement in productivity result in all of us working less? In that case, how will we fill our leisure time and ensure our self-esteem? Or will only an intellectual elite work? And then, how will we finance a decent life for all? Politicians should be debating proposals about this… instead of talking about the same old things we already know.
Thirdly, and what worries me most in the short term, is the impact on the quality of the cultural and informative products we will consume. Generative AI makes it easy to create text or images even for those who have nothing to say. As consumers, we need to develop our critical spirit, because we will be exposed to a lot of empty and superficial publications. ChatGPT might be the autotune of writing.
It’s already happening. There is abundant advice from the Linkedin content gurus on how to let ChatGPT write for us, generating not only the text but even the ideas. The result is fewer new ideas, all expressed in an identical style. Journalism is also affected, and we are seeing many biased and superficial articles in the media. Asking the AI for the winning lottery number is one example. Because they mention a new technology, such articles attract clicks and attention. As seen in Elche.
On a personal note, I resist this trend. Although I use AI in productive tasks, I have not let it write or think for me – its only participation in this blog was the aforementioned interview and occasional help with translation. I approach the art of writing with great respect and, although my style is far from perfect, I try to keep it personal. I don’t create content for any algorithm: I write articles for you, a person.
It is said that the lottery is a special tax for those who don’t know mathematics. However, many scientists play the lottery. Why? I don’t think it has anything to do with the hope of winning, which is approximately equal to zero. It probably has more to do with preserving an old tradition with strong emotional roots. The sound of the children from the San Ildefonso school singing the numbers and prizes all morning on the 22nd is, for many in Spain, an announcement that Christmas is coming. Just like the smell of roasted chestnuts, Christmas carols or the city lights.
This contradictory and irrational behavior, which connects us and leads even the most skeptical of us to end up hugging each other at Christmas, is probably part of what makes us human. That is something AI is very far from being able to do.
So Merry Christmas and… may you win the lottery!