Are you ready for what’s coming next for LLMs and AI agents?

ComputerWorld.com reported that “Tech industry experts have described AI as more revolutionary than electricity and the internet. (It’s also been called more dangerous than nuclear weapons, because the technology in the wrong hands could wreak havoc.)”  The March 23, 2026 article entitled “What’s coming next for LLMs and AI agents?” (https://www.computerworld.com/article/4148846/whats-coming-next-for-llms-and-ai-agents.html) included these comments from Jeff Dean, chief scientist of Google DeepMind and Google Research, made during a panel discussion at Nvidia’s GTC developer show:

That’s not happening quite yet, but there are signs it’s coming, Dean said, noting that AI agents can already self-evolve by accepting and dismissing ideas.

There’s history here, too. In 2017, AI researchers came up with the concept of “meta-learning,” where AI could search for the models best suited to experiments and problem solving. The search parameters at the time were mostly specified in code, but now that can be done with natural language, Dean said.

Natural language interaction makes it easier for agents to find ways to get better, such as finding new information, specific algorithms, or distillation mechanisms. AI can be seen as a performance multiplier that frees researchers to think up new ideas. “It’s a partnership between super-capable researchers and super-capable agents,” Dean said.
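Distillation, one of the mechanisms Dean mentions, means training a smaller “student” model to match a larger “teacher” model’s softened output distribution. A minimal sketch of the core loss, in plain Python (illustrative only; the logits and temperature value here are made up, and real systems use tensor libraries and combine this with a hard-label loss):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's softened "soft targets" -- the core of knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A student that tracks the teacher incurs a lower loss than one that doesn't.
teacher = [4.0, 1.0, 0.5]
good_student = [3.9, 1.1, 0.4]
bad_student = [0.5, 4.0, 1.0]
print(distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student))
```

The temperature is the key knob: raising it exposes the teacher’s relative confidence across wrong answers, which is the extra signal the student learns from.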

More interactive LLMs: As AI technology progresses, LLMs could become more interactive with the real world, actually re-learning and updating themselves in real time and taking actions based on that new knowledge.

Today’s LLMs are basically strapped on a board, streamed through internet data, and then presented to the world, Dean said, with results that are largely predetermined.

But future models will learn on the fly by instantly interleaving physical and digital information. With that information, LLMs will be better able to direct robotic actions and predict answers to questions.

While that is already done in post-training, what’s better is interleaving at the pre-training stage. “We sort of have this artificial distinction now…. It seems like that shouldn’t exist for the long term,” Dean said.

Continual-learning models without a fixed parameter count are already emerging: organically growing models that advance, prune, and compress their parameters over time, Dean said.
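One simple way a model can shed parameters as it grows is magnitude pruning: zero out the weights with the smallest absolute values. This sketch is purely illustrative (the weight list and pruning fraction are made up; real continual-learning systems are far more involved):

```python
def magnitude_prune(weights, fraction=0.5):
    """Zero out the given fraction of weights with the smallest magnitudes.

    Keeps the largest-magnitude weights on the theory that they carry
    most of the model's learned signal.
    """
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest magnitude; prune everything at or below it.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

# The three smallest-magnitude weights (0.01, -0.05, 0.1) are zeroed.
print(magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.1], fraction=0.5))
```

Pruning like this pairs naturally with compression: once half the weights are exactly zero, the model can be stored and served in a sparse format.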

The Master agent: Nvidia and Google are already using AI for chip design; the next step is to figure out how to automate the process so chip designers and developers can do other things.

What do you think?
