AI – Generative Artificial Intelligence – Great prospects and great fears

“Generative AI” refers to artificial intelligence systems that are capable of generating new content, often in ways that mimic or simulate human creativity.

These systems use various machine learning techniques, particularly neural networks, to produce output that can include text, images, music, and even code.
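
As a minimal, concrete illustration of what such a system looks like in practice (a sketch assuming the Hugging Face transformers library and the publicly available GPT-2 model; any comparable generative model could be substituted), a few lines of Python are enough to have a neural network write new text from a prompt:

```python
# Minimal sketch: generating new text with a pretrained neural language model.
# Assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint;
# any comparable generative model could be used instead.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with newly generated text that did not exist before.
result = generator("Generative AI is", max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```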

Rapid technological progress, combined with advances in algorithms and machine learning techniques, has made AI a critical tool in myriad industries.

From scientific research to the financial industry, from robotics to the field of justice, to the universe of toys and beyond, AI is now omnipresent, significantly influencing our existence and redefining our interactions with the world around us.

Among the various branches of AI, Generative Artificial Intelligence, commonly known as generative AI or GenAI, stands out. This innovative discipline uses advanced Machine Learning and Deep Learning techniques to create entirely new data, such as images, musical compositions and texts, that did not exist before.

Unlike discriminative AI, which focuses on classifying and interpreting inputs, generative AI is designed to go beyond simple analysis: it aims to deeply understand the data provided to generate new and original content.
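
A toy sketch can make the contrast concrete (purely illustrative, using scikit-learn and NumPy rather than any real AI product; the data, class labels and parameters are invented for the example): a discriminative model learns the boundary between existing inputs, while a generative model learns the distribution of the data itself and can therefore sample new examples that were never observed.

```python
# Toy illustration of discriminative vs. generative modelling (not any specific product).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
class_a = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))   # existing samples, class A
class_b = rng.normal(loc=[4, 4], scale=1.0, size=(100, 2))   # existing samples, class B
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

# Discriminative: learns P(label | input) and can only classify data it is given.
clf = LogisticRegression().fit(X, y)
print("predicted class:", clf.predict([[3.5, 3.8]]))

# Generative (here, a simple Gaussian fit to class B): learns the data distribution
# itself, so it can produce brand-new samples that were never in the training set.
mean, cov = class_b.mean(axis=0), np.cov(class_b, rowvar=False)
new_samples = rng.multivariate_normal(mean, cov, size=3)
print("newly generated samples:\n", new_samples)
```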

This ability to ‘create’ rather than simply ‘analyze’ opens up unexplored horizons and offers extraordinary opportunities. Generative AI can be used to create unique artistic concepts, compose music that has never been heard before, write texts ranging from fiction to poetry, and much more.

In an increasingly interconnected and digitalized world, generative AI not only amplifies our creativity and innovation, but also represents a fundamental paradigm shift in the way we interact with technology: no longer simple consumers of content, but co-creators in a continuous dialogue with the machines. In this context, generative AI emerges as a new frontier of artificial intelligence, a field in which the potential for growth and development seems to be infinite.

Some examples of generative AI applications include:

  1. Text Generation: Systems like OpenAI’s GPT-3 (and its successors), which can write text on various topics, simulating different writing styles.
  2. Visual Art: Tools like DALL-E, also from OpenAI, which can create images and artwork based on textual descriptions.
  3. Music: Algorithms that can compose new pieces of music, imitating existing styles or creating unique combinations.
  4. Programming: Some forms of generative AI can help write code, offering solutions to programming problems or optimizing existing code.
  5. DeepMind: A subsidiary of Alphabet (Google’s parent company), DeepMind is known for its advanced research in artificial intelligence. They have developed systems such as AlphaGo, which beat world champions in the game of Go, and WaveNet, a system that generates realistic human voices for speech synthesis.
  6. Adobe Sensei: Part of the Adobe Creative Cloud package, Sensei uses machine learning to improve various design and multimedia tools. For example, it can automatically crop photos, optimize layouts, and even suggest artistic changes.
  7. IBM Watson: Famous for winning the game show ‘Jeopardy!’, Watson uses generative AI to analyze and interpret large amounts of data in various industries, from healthcare to customer service, providing data-driven answers and solutions.
  8. Artificial Creativity in Music: Tools like AIVA (Artificial Intelligence Virtual Artist) and Jukedeck use artificial intelligence to compose music in various styles. These systems can create unique music tracks suitable for films, video games, and other applications.
  9. Generative Narrative: Platforms like Narrative Science and Automated Insights use AI to transform data and information into narratives and written reports, widely used in fields such as financial journalism and sports reporting.
  10. AI in Visual Art: Artists and researchers are experimenting with neural networks to create art. One example is “The Next Rembrandt,” a project that used deep learning to create a new painting in the style of Rembrandt.

Generative AI is opening new frontiers in creativity and innovation, pushing the limits of what machines can create and offering new tools for artists, writers, musicians and programmers.

So why the doubts?

Despite the enormous progress achieved, there is no shortage of doubts and criticism about how today’s Artificial Intelligence should be defined and what it actually is.

According to Noam Chomsky (*), ChatGPT is ‘basically high-tech plagiarism’ and a ‘way to avoid learning’.
(*) – Philosopher, linguist, cognitive scientist, communication theorist, and professor emeritus of linguistics at the Massachusetts Institute of Technology, Chomsky is recognized as the founder of generative-transformational grammar, often regarded as the most significant contribution to theoretical linguistics of the 20th century.

AI-powered chatbots use large language models trained on terabytes of data to find detailed information and write it out as text. But the AI merely guesses which word would make the most sense next in a sentence, without knowing whether it is true or false or what the user wants to hear.
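
That "guessing" can be made visible with a short sketch (assuming the Hugging Face transformers library, PyTorch and the publicly available GPT-2 checkpoint; the prompt is invented for illustration): the model simply ranks every token in its vocabulary by how plausible it would be as the continuation, with no mechanism for checking whether the resulting statement is true.

```python
# Sketch: a language model only ranks candidate next words by plausibility.
# Assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The largest city in the world is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # The model outputs plausibility, not truth: it has no way to verify its answer.
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```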

Chomsky said of today’s artificial intelligence programs: ‘Their greatest flaw is that they do not have the most important ability of any intelligent being.’

“To say not only what is, was and will be the case, but also what is not the case and what may or may not be the case, is to describe and to predict. These are necessary parts of understanding what something means, which is a sign of true wisdom.”

Chomsky says the human brain is designed to ‘construct explanations’ rather than ‘infer brute correlations’.

This means it can use that information to come to new and insightful conclusions.

“The human mind is a system that works with small amounts of information in a surprisingly efficient and even elegant way.”

AI cannot think critically and iteratively, so it self-censors what it says.

Pause Giant AI Experiments: An Open Letter

The recent open letter Pause Giant AI Experiments, released on March 23, 2023 by the Future of Life Institute (which has gathered more than 33,700 signatures), states:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

The letter is complemented by Policymaking in the Pause – What can policymakers do now to combat risks from advanced AI systems?

Are the fears well-founded or is it even too late?

In recent debates about the fears raised by AI’s enormous potential, a crucial question naturally arises when evaluating the opinions expressed by leading figures in the field of artificial intelligence, such as Max Tegmark, Nick Bostrom, Ray Kurzweil, Elon Musk, Sam Altman, Yann LeCun, Geoffrey Hinton and others. These individuals play an influential role in the digital industry and in the evolution of AI, and their opinions can have a significant impact on public opinion and on technology development policies.

The issue of impartiality and the presence of potential conflicts of interest is complex. On the one hand, it is true that for these individuals, progress in AI is intrinsically linked to professional and financial success. By working directly in the AI industry, they can benefit personally and professionally from advances in this field. This connection could potentially influence their opinions, making them more favorable towards the development and adoption of AI, even when there are risks or ethical issues to consider.

Ultimately, while it is important to be aware of potential conflicts of interest and personal motivations, it is also essential to evaluate the statements and opinions of these figures in the broader context of their work and knowledge. Often, they can offer valuable insights based on years of experience and research in the field.

However, it is equally crucial to consider a variety of perspectives, including those of independent experts, philosophers, ethicists, and other professionals who can offer different angles and critiques on the growth and use of AI in society.

The fear is that, if the alarms raised by independent experts prove to be well founded, then there really is cause for concern.

Eliezer Yudkowsky is an American decision theorist and conducts research at the Machine Intelligence Research Institute. He has been working on aligning artificial general intelligence since 2001 and is widely considered one of the founders of the field.

In his appeal in TIME on March 29, 2023, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, he writes:

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.

I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.

The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how. Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.

We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

Conclusions (maybe)

In these scenarios it is honestly not easy to form clear ideas, but the fears expressed by multiple sources certainly rest on solid foundations which, combined with the enormous interests at stake (at a supranational, planetary level), make the issue extremely delicate and the fears about future developments well-founded.

A parallel could be drawn with the difficulties and contradictions surrounding the management of climate change: there we see enormous difficulty in reaching planet-wide agreement on a common approach and, unfortunately, there is little reason for more optimism on the AI front, where the economic interests at stake are likely to become even larger than those of fossil fuels…
