GenAI’s Pace of Development Is Shattering Moore’s Law


We are still in the early innings of the artificial intelligence (AI) era.

Just don’t tell that to the products and generative AI foundation models that firms like Microsoft, OpenAI, Google, Amazon, Anthropic, Meta and others are commercializing. Many of them are already on their second, third, even fourth versions and beyond.

That’s because the initial development stage of the AI economy, in just the one year and change since OpenAI brought its ChatGPT system to proverbial life in November 2022, has already started to surpass the pace of technical progression outlined by Moore’s Law.

On Thursday (Feb. 8), Google renamed its artificial intelligence chatbot, formerly known as Bard, to Gemini; launched a new version of it, Gemini Advanced; introduced mobile experiences; and added a new subscription tier through its Google One plan, which boasts over 100 million subscribers.

Google announced Bard in February 2023 and launched it in March, marking the company’s first public entry into the generative AI race.

And the rapid pace of Google’s AI development and product commercialization, far from being an outlier, is increasingly par for the AI course. OpenAI reportedly surpassed $2 billion in annualized revenue late last year, a milestone that puts the company among the ranks of the fastest-growing firms in history.

After all, OpenAI is already onto GPT-4 Turbo, Anthropic’s Claude chatbot is on version 2.1, and Meta is advancing AI across nearly all of its platforms and products. Amazon is looking to build a large language model (LLM) twice the size of OpenAI’s GPT-4 and has launched an AI shopping assistant called Rufus, while Microsoft is pushing forward with its Copilot AI companion product.

Hardly a static landscape — and that is just a snapshot of the model developers, not the many companies devising use cases.

See also: 12 Payments Experts Share How AI Changed Everything in 2023

A Lot Can Change in Just One Year

Advancing AI systems involves continuous research and development, and improvements can be made in various aspects, including model architecture, training methodologies and application-specific enhancements.

AI models work by splitting data into tokens — chunks of text such as words or characters — and associating each token with a number. Those numbers are assembled into vectors and matrices, which are then fed into neural networks to train a deep learning model.
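The pipeline above can be sketched in a few lines of Python. This is an illustrative toy, not any real model’s tokenizer: it uses naive word-level tokens and randomly initialized embedding vectors, where production systems use learned subword vocabularies and trained embeddings.

```python
import random

def build_vocab(text):
    """Assign each unique word-level token an integer ID."""
    vocab = {}
    for tok in text.split():
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab):
    """Map a string to the list of token IDs a model would consume."""
    return [vocab[tok] for tok in text.split()]

text = "the model reads the text"
vocab = build_vocab(text)
ids = encode(text, vocab)
print(ids)  # [0, 1, 2, 0, 3] -- repeated tokens share an ID

# Each ID is then looked up in an embedding table, yielding the
# matrix of vectors that actually flows into the neural network.
random.seed(0)
embedding_dim = 4  # illustrative; real models use hundreds or thousands
embeddings = {i: [random.random() for _ in range(embedding_dim)]
              for i in vocab.values()}
vectors = [embeddings[i] for i in ids]
```

The key point is that by the time data reaches the network, it is no longer text at all, just numbers arranged in matrices.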

Increasing the depth of neural network architectures can enhance their capacity to capture complex patterns and dependencies in data.

Many of the version updates to AI models are meant to expand the systems’ context window — how much input they can process at once — as well as the types of content they can take in.

For example, OpenAI’s GPT-4 can read, analyze or generate up to 25,000 words of text, a significant improvement over the earlier GPT-3.5 model and its predecessors.
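A crude sketch of what a context-window limit means in practice: input beyond the budget must be dropped (or summarized) before the model ever sees it. This example counts whitespace-separated words for simplicity; real systems count subword tokens, which do not map one-to-one onto words, and the 25,000-word figure is used here purely as an illustrative budget.

```python
def truncate_to_budget(text, max_words=25_000):
    """Keep only the first max_words words of the input.
    Real models enforce their limit in tokens, not words."""
    words = text.split()
    if len(words) <= max_words:
        return text
    return " ".join(words[:max_words])

long_doc = "word " * 30_000          # a document over the budget
clipped = truncate_to_budget(long_doc)
print(len(clipped.split()))          # 25000
```

A larger context window means less of this lossy clipping, which is why expanding it is a headline feature of so many model updates.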

Improving the underlying training algorithms represents another key opportunity. Better optimization algorithms can contribute to faster convergence and better generalization, and techniques like adaptive learning rates, momentum, and advanced optimizers like Adam or RMSprop have been successful in improving training stability and efficiency.
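To make the optimizer idea concrete, here is a minimal, self-contained sketch of the Adam update rule applied to a toy one-dimensional problem. The hyperparameters and the quadratic objective are illustrative; real training runs apply the same update across millions or billions of parameters.

```python
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=200):
    """Minimal Adam: adapts the step size per iteration using
    exponential moving averages of the gradient (m) and of its
    square (v), with bias correction for the early steps."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # momentum term
        v = beta2 * v + (1 - beta2) * g * g      # scale term
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
result = adam_minimize(lambda x: 2 * (x - 3), x0=0.0)
print(result)  # converges toward the minimum at x = 3
```

The moving averages are what make the method "adaptive": parameters with consistently large gradients get smaller effective steps, which is a big part of why such optimizers stabilize training.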

“We always overestimate the first three years of a technology, and severely underestimate the 10-year time horizon,” Bushel CEO Jake Joraanstad told PYMNTS in December.

Read more: Will AI’s Biggest Questions Find Their Answers This Year?

Capturing the Multimodal Movement

PYMNTS examined the rapid rise in demand for generative AI tools such as OpenAI’s ChatGPT last week in a conversation with Andy Hock, senior vice president of product and strategy at Cerebras.

“The ChatGPT light bulb went off in everybody’s head, and it brought artificial intelligence and state-of-the-art deep learning into the public discourse,” Hock told PYMNTS during a conversation for the “AI Effect” series.

“And from an enterprise standpoint, a light bulb went off in the heads of many Fortune 1000 CIOs and CTOs, too,” he added. “These generative models do things like simulate time series data. They can classify the languages and documents for applications, say, in finance and legal. They can also be used in broad domains to do things like help researchers develop new pharmaceutical therapies or better understand electronic health records and predict health outcomes from particular treatments.”

Read also: Who Will Power the GenAI Operating System?

And before the widespread adoption of generative AI, several applications of AI 1.0 were popular in the financial services and payments sector, including machine learning for fraud detection and prevention via rules-based processes, algorithmic trading, credit scoring and risk assessment, customer service chatbots, AI-driven customer relationship management systems, and regulatory compliance.

Perhaps the biggest jump AI has made over the past year is the ability of these systems to parse visual, verbal, audio and textual data together, multimodally.

“Hyper-personalized, really immersive experiences are going to be so important going forward,” Ed Chandler, senior vice president and head of Commercial and Money Movement Solutions for Europe at Visa, told PYMNTS in an interview posted in August.

But it is going to require massive resources and engineering talent to keep AI moving forward at its current pace — so much so that OpenAI CEO Sam Altman is reportedly pitching a multitrillion-dollar AI ecosystem project to investors.