Science and Technology

When will the wave of artificial intelligence hype subside? From the peak of inflated expectations to the productive use of the technology

Eviralnews Reports:

Generative AI, like many new technologies, follows what is known as the Gartner hype cycle. It currently sits at the peak of inflated expectations and is still evolving, an estimated two to five years away from maturity.

Eviralnews: Less than two years ago, the launch of ChatGPT set off the wave of excitement around generative artificial intelligence. Some said the technology would usher in a fourth industrial revolution and completely change the world as we know it.

In March 2023, Goldman Sachs predicted that 300 million jobs would be lost or degraded because of AI, and it looked as though a major shift was underway.

Eighteen months later, generative artificial intelligence has not transformed business. Many projects using the technology are being scrapped, such as McDonald's attempt to automate drive-thru ordering, which made headlines after its comical failures went viral on TikTok. Government efforts to build systems that summarize public submissions and calculate welfare entitlements have met the same fate. So what happened?

The artificial intelligence hype cycle

Generative AI, like many new technologies, follows a path known as the hype cycle, first described by the American technology research firm Gartner.

This model describes a recurring pattern in which a technology's early successes inflate public expectations that are ultimately not met. After the peak of inflated expectations comes a trough of disillusionment, followed by a slope of enlightenment that finally reaches a plateau of productivity.

According to a Gartner report released in June, most generative AI technologies are either at the peak of inflated expectations or still climbing toward it. The report states that most of these technologies are two to five years away from full maturity.

Attractive prototypes of generative AI products have been developed, but their use in practice has been less successful. A study published last week by the American research institute RAND found that 80 percent of AI projects fail, more than double the rate of non-AI projects.

Current shortcomings of generative artificial intelligence

RAND's report lists many problems facing generative AI, from the heavy investment required in data and infrastructure to a shortage of the necessary human talent. But the unusual nature of generative AI's limitations poses a particular challenge.

For example, generative AI systems can solve some highly complex university entrance exams yet fail at very simple tasks. This makes it very hard to judge the potential of these technologies, which leads to misplaced confidence.

After all, if an AI can solve complex differential equations or write an essay, surely it should be able to take simple drive-thru orders, right?

A recent study found that the abilities of large language models such as GPT-4 do not always match what people expect of them. In particular, more capable models performed surprisingly poorly in high-stakes cases where wrong answers could be catastrophic.

These results suggest the models can instill false confidence in their users. Because they answer questions fluently, people draw overly optimistic conclusions about their capabilities and deploy the models in situations they are not suited to.

Experience from deployed projects also shows that it is difficult to get a generative model to follow instructions. For example, Khan Academy's Khanmigo tutoring system often revealed the correct answers to questions, despite being instructed not to.

Why isn't the generative AI boom over yet?

There are several reasons for this. First, despite its challenges, generative AI technology is improving rapidly, with scale being the main driver of that improvement.

Research shows that the size of a language model (its number of parameters), along with the amount of data and computing power used for training, all contribute to improved performance. By contrast, the particular neural network architecture underpinning the model appears to matter least.

Large language models also exhibit so-called emergent abilities: unexpected capabilities on tasks for which they were never trained. Researchers have reported that new capabilities "emerge" when models reach a certain scale.

Studies have shown that sufficiently complex language models can develop the ability to reason by analogy and even reproduce optical illusions like those experienced by humans. The exact reasons for this are debated, but there is no doubt that large language models keep getting more capable.

As a result, AI companies are still working on ever bigger and more expensive models, while tech companies such as Microsoft and Apple are betting on returns from their existing investments in generative AI. According to one recent estimate, generative AI would need to earn $600 billion a year to justify current investments, a figure likely to grow to $1 trillion in the coming years.

Right now, the biggest winner from the generative AI boom is Nvidia, the largest maker of the chips powering the generative AI arms race. Its stock price tripled in a year, and in June it became the world's most valuable public company, with a market value of around $3 trillion.

What will the future be like?

As AI hype dies down and we move into a period of disillusionment, we're also seeing more realistic AI adoption strategies.

First, AI is being used to support humans rather than replace them. A recent survey of American companies found that they mainly use artificial intelligence to improve efficiency (49 percent), reduce labor costs (47 percent) and increase product quality (58 percent).

Second, we are seeing the rise of smaller (and cheaper) generative AI models that are trained on specific data and deployed locally to cut costs and optimize efficiency. Even OpenAI, which has led the charge toward ever larger models, has released GPT-4o mini to reduce costs and improve performance.

Third, we see a strong focus on providing AI literacy training and educating the workforce about how AI works, its potential and limitations, and best practices for the ethical use of AI. We will likely have to learn (and re-learn) how to use various AI technologies for years to come.

Ultimately, the AI revolution will look more like an evolution. Its use will grow gradually over time, and it will gradually change and transform human activities rather than replace them.

Get the latest science and technology news on Eviralnews

Mhd Narayan

Bringing over 8 years of expertise in digital marketing, I serve as a news editor dedicated to delivering compelling and informative content. As a seasoned content creator, my goal is to produce engaging news articles that resonate with diverse audiences.
