
The end of AI scaling may not be nigh: Here's what's next


As artificial intelligence systems achieve superhuman performance on increasingly complex tasks, the industry is grappling with whether bigger models are even possible, or whether innovation must take a different path.

The general approach to large language model (LLM) development has been that bigger is better, with performance scaling alongside more data and more computing power. However, recent media discussion has focused on how LLMs may be approaching their limits. "Has AI hit a wall?" The Verge asked, while Reuters reported that "OpenAI and others seek new path to smarter AI as current methods hit limitations."

The concern is that the scaling that has driven progress for years may not extend to the next generation of models. Reporting suggests that the development of frontier models such as GPT-5, which push the limits of current AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI, and Bloomberg covered similar news at Google and Anthropic.

The issue raises concerns that these systems may be subject to the law of diminishing returns, where each additional unit of input yields progressively smaller gains. As LLMs scale, the costs of acquiring high-quality training data and of scaling infrastructure grow exponentially, reducing the payoff from performance improvements in new models. Compounding the challenge is the limited availability of high-quality new data, since much of the accessible information has already been absorbed into existing training data sets.
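To put rough numbers on that intuition: empirical scaling-law studies (notably Kaplan et al., 2020) found that a language model's loss falls off as a power law in model size. A minimal sketch of that published relationship follows; the exponent is quoted from those fits and is illustrative here, not specific to any model discussed in this article.

```latex
% Empirical power-law scaling of language model loss (Kaplan et al., 2020).
% L = cross-entropy loss, N = non-embedding parameter count,
% N_c and \alpha_N are fitted constants (\alpha_N was roughly 0.076 in those fits).
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
% Because \alpha_N is small, doubling N multiplies loss by 2^{-0.076} \approx 0.95:
% each doubling of model size (and its cost) buys only about a 5% loss reduction.
```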

This doesn't mean the end of AI performance improvements. It simply means that maintaining progress will require further engineering through innovation in model architecture, optimization techniques, and data use.

Learning from Moore's Law

A similar pattern of diminishing returns played out in the semiconductor industry. For decades, the industry benefited from Moore's Law, which predicted that the number of transistors on a chip would double every 18 to 24 months, driving dramatic performance improvements through smaller and more efficient designs. This too eventually hit diminishing returns, beginning somewhere between 2005 and 2007, as Dennard scaling, the principle that shrinking transistors also reduces their power consumption, reached its limits, fueling predictions of the death of Moore's Law.

I observed this issue up close while working at AMD from 2012 to 2022. It did not mean that semiconductors, and by extension computer processors, stopped improving from one generation to the next. It did mean that improvements came more from chiplet designs, high-bandwidth memory, optical switches, larger caches and accelerated computing architectures than from shrinking transistors.

New paths to progress

Similar phenomena are already being observed with current LLMs. Multimodal AI models such as GPT-4o, Claude 3.5 and Gemini 1.5 have demonstrated the power of integrating text and image understanding, enabling advances in complex tasks such as video analysis and contextual image captioning. Further tuning of training and inference algorithms will deliver additional performance gains. Agent technologies, which enable LLMs to perform tasks autonomously and coordinate seamlessly with other systems, will soon significantly expand their practical applications.

Future model breakthroughs may come from one or more hybrid AI architectures that combine symbolic reasoning with neural networks. OpenAI's o1 reasoning model has already shown the potential for model integration and performance gains. Quantum computing, although still emerging from its earliest stages of development, is expected to accelerate AI training and inference by addressing current computational bottlenecks.

A perceived scaling wall is unlikely to end future gains, as the AI research community has consistently proven its ingenuity in overcoming challenges and unlocking new capabilities and performance advances.

In fact, not everyone agrees that a scaling wall even exists. OpenAI CEO Sam Altman put it simply: "There is no wall."

Source: X, https://x.com/sama/status/1856941766915641580

Speaking on the "Diary of a CEO" podcast, former Google CEO and Genesis co-author Eric Schmidt largely agreed with Altman, saying he doesn't believe a scaling wall exists, at least not for the next five years. "Five years from now, these LLM cranks will have turned two or three more times. Each turn of the crank looks like it doubles, triples or quadruples capability, so let's just say that turning the crank on all these systems will make them 50 or 100 times more powerful," he said.

Leading AI innovators remain optimistic about the pace of progress and the potential of new approaches. This optimism is reflected in a recent conversation on "Lenny's Podcast" with OpenAI chief product officer Kevin Weil and Anthropic chief product officer Mike Krieger.

Source: YouTube, https://www.youtube.com/watch?v=IxkvVZua28k

During the discussion, Krieger described how what OpenAI and Anthropic are doing today "feels like magic," but acknowledged that in just 12 months, "we'll look back and say, 'Can you believe we used that crap?' … That's how fast [AI development] is moving."

It's true: it does feel like magic, as I recently experienced with OpenAI's Advanced Voice Mode. Talking to "Juniper" felt completely natural and fluid, showing how AI is evolving to understand and respond to emotion and nuance in real-time conversation.

Krieger also discussed the recent o1 model, calling it "a new way to scale intelligence" that he feels is only in its earliest stages. "These models will get smarter at an accelerating rate," he added.

These anticipated advances suggest that, whether or not traditional scaling faces diminishing returns in the near term, the AI field is poised to continue delivering breakthroughs through new methods and creative engineering.

Does scaling still matter?

While scaling challenges dominate much of the current discussion around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising a provocative question: does more scaling even matter?

A recent study showed that ChatGPT can outperform doctors in diagnosing complex patient cases. Conducted with an early version of GPT-4, the study compared ChatGPT's diagnostic capabilities against those of doctors working with and without AI assistance. The surprising result was that ChatGPT alone substantially outperformed both groups, including the doctors using AI assistance. The reasons range from doctors' lack of understanding of how best to use the bot to a belief that their own knowledge, experience and intuition were inherently superior.

This isn't the first study to show bots outperforming professionals. VentureBeat reported on a study earlier this year showing that LLMs can perform financial statement analysis with accuracy rivaling, and even exceeding, that of professional analysts. Also using GPT-4, that study aimed to predict future earnings growth. GPT-4 achieved 60% accuracy in predicting the direction of future earnings, notably higher than the 53% to 57% range of human analyst forecasts.

It's worth noting that both of these examples are based on models that are already a generation old. The results underscore that even without new scaling breakthroughs, existing LLMs can already outperform experts on complex tasks, challenging assumptions that further scaling is needed to achieve impactful results.

Scaling, skills or both

These examples show that current LLMs are already highly capable, and that scaling alone may not be the only path for future innovation. But with further scaling still possible, alongside other emerging techniques that promise performance gains, Schmidt's optimism reflects the rapid pace of AI progress, suggesting that in just five years, models could evolve into polymaths, seamlessly answering complex questions across multiple domains.

Whether through scaling, skills or entirely new approaches, the next frontier of AI promises to transform not just the technology itself, but the role it plays in our lives. The challenge ahead is ensuring that progress remains responsible, equitable and impactful for everyone.

Gary Grossman is EVP of the technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is a place where experts, including technologists working in data, can share data-related insights and innovations.

If you want to stay up to date on cutting-edge thinking and the latest news, best practices and the future of data and data technologies, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read more from DataDecisionMakers


