Andrew Ng: Why Agentic AI is the smart bet for most enterprises

After his ScaleUp:AI ‘24 keynote, Andrew Ng, a luminary in the field of artificial intelligence, shared his expertise on the current landscape of generative AI, Agents, and the most valuable path for enterprises moving forward.
Jon Krohn, cofounder and Chief Data Scientist of Nebula.io, spoke with Ng about the strategic choices companies face when investing in AI. Ng’s key message came through clearly: the majority of businesses should focus on building applications using agentic workflows rather than solely chasing the most powerful foundational models.
Key takeaways
- Building practical applications with agentic workflows should be the priority for most businesses, rather than focusing on the latest foundational models.
- The falling cost of generative AI models makes it easier for companies to experiment with advanced models without worrying about prohibitive expenses.
- Starting with the best available model allows businesses to build functional products, with optimization efforts focused only when scaling costs become significant.
- The evolution of agentic workflows enables AI models to collaborate and break complex tasks into manageable steps, improving efficiency and effectiveness.
- The potential of visual AI to process unstructured data is transforming industries like security and media by offering new capabilities for image and video analysis.
These insights came from our ScaleUp:AI event in November 2024, an industry-leading global conference that features topics across technologies and industries.
Krohn began the discussion by referencing Ng’s observation that GPT-3.5 with an agentic workflow could outperform a more advanced foundational model like GPT-4 using a zero-shot approach. This prompted the central question: How should companies balance their investments between pursuing cutting-edge models and leveraging more effective Agent architectures?
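To make that contrast concrete, here is a minimal sketch of the two patterns, assuming a hypothetical `call_llm()` helper that wraps whichever chat-completion API you use; it illustrates the general idea of an agentic (reflection-style) workflow versus a single zero-shot call, not the specific setup Ng benchmarked.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a chat model (e.g., GPT-3.5) and return its reply."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def zero_shot(task: str) -> str:
    # One pass: ask the model to solve the task directly.
    return call_llm(f"Solve the following task:\n{task}")

def agentic(task: str, rounds: int = 2) -> str:
    # Draft, self-critique, and revise: a simple reflection-style agentic workflow.
    draft = call_llm(f"Solve the following task:\n{task}")
    for _ in range(rounds):
        critique = call_llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\nList any errors or gaps in the draft."
        )
        draft = call_llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\nWrite an improved answer."
        )
    return draft
```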
Ng’s response was unambiguous. Unless a company has “an extra few billion dollars to spare,” the priority should be on building practical applications through agentic workflows. He acknowledged the allure of competing with giants like OpenAI, Anthropic, and Google (with Gemini) but emphasized the immense opportunities available in application development.
“Worry much more about building something valuable”
Ng highlighted the significant drop in the cost of using generative AI models, pointing to a decrease of roughly 80% year over year across the past 18 months. This trend suggests that the initial concerns about the expense of powerful models like GPT-4 are becoming less significant. His advice is clear: “Worry much more about building something valuable,” as the cost of using these APIs is likely to decrease over time.
Krohn followed up by asking whether enterprises should always aim for the latest and greatest LLM or prioritize effective agentic workflows, considering the trade-off between cost and efficiency. Ng reiterated his stance: “Don’t worry about the price of LLMs to get started.”
He suggested using the best available model to build something functional first. Only if the application becomes so successful that its usage costs become prohibitive should teams then focus on cost optimization by experimenting with lower-cost models or different agentic workflows. “The hardest thing is just building something that works,” Ng stated.
The evolution of agentic workflows
Reflecting on the evolution of AI thinking, Ng recalled Marvin Minsky’s early “Society of Mind” concept, in which intelligence arises from many simple Agents working together, and his own later embrace of the “single algorithm theory” that propelled deep learning. He noted that the current trend marries these ideas in multi-agent systems built on the power of large language models.
Ng drew an analogy between this approach and human intelligence, where specialization for different tasks can be achieved through prompting and feeding additional data. Agentic workflows allow AI models to specialize, breaking down complex tasks into smaller, manageable steps.
This was further illustrated in the context of Vision AI, a field Ng believes is on the cusp of a revolution following the text processing breakthroughs. He explained that while large multimodal models are becoming proficient at interpreting images, complex tasks often require an iterative, agentic approach. For instance, counting people in a crowded scene is more accurately done by detecting individuals one by one (an agentic workflow expressed in code) rather than a single “glance” approach.
LandingAI’s Vision Agent exemplifies this by generating a plan expressed in code to utilize various tools and function calls for image processing. This approach significantly enhances accuracy for critical vision tasks and lowers the barrier for developers by automating the process of finding the right libraries and writing integration code.
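As a rough sketch of the “detect, then count” idea, the snippet below assumes a hypothetical `detect_people()` helper backed by an off-the-shelf object detector; it is an illustration of the iterative approach, not LandingAI’s actual Vision Agent implementation.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def detect_people(image_path: str, confidence: float = 0.5) -> List[Box]:
    """Placeholder: run a person detector (e.g., a YOLO-style model) and return one box per person."""
    raise NotImplementedError("Swap in the detection model of your choice.")

def count_people(image_path: str) -> int:
    # Count by enumerating individual detections rather than asking a
    # multimodal model to estimate the total in a single "glance".
    return len(detect_people(image_path))
```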
Unlocking the potential of unstructured data
Ng also touched on the vast potential of visual AI to transform industries beyond traditional areas like manufacturing and healthcare, driven by its ability to tackle unstructured data. While hesitant to provide overly specific predictions (likening it to predicting all uses of electricity), he highlighted sectors like robotic automation, security, and media as ripe for disruption.
He showcased a compelling demo of a video retrieval system powered by Vision Agent, capable of indexing and searching through vast video libraries based on visual content. This ability to analyze and understand unstructured visual data opens up possibilities across many industries.
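One plausible way to build such a system is to sample frames, describe them with a multimodal model, and search over embeddings of those descriptions. The sketch below assumes hypothetical `caption_frame()` and `embed()` helpers and may differ from how the Vision Agent demo actually works.

```python
import cv2            # pip install opencv-python
import numpy as np

def caption_frame(frame: np.ndarray) -> str:
    """Placeholder: describe the frame with a multimodal model of your choice."""
    raise NotImplementedError

def embed(text: str) -> np.ndarray:
    """Placeholder: embed text with any text-embedding model."""
    raise NotImplementedError

def index_video(path: str, every_n_seconds: float = 2.0) -> list:
    # Sample one frame every few seconds, caption it, and store (timestamp, caption, embedding).
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    index, frame_id = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_id % step == 0:
            caption = caption_frame(frame)
            index.append((frame_id / fps, caption, embed(caption)))
        frame_id += 1
    cap.release()
    return index

def search(index: list, query: str, top_k: int = 5) -> list:
    # Rank indexed moments by cosine similarity between the query and the frame captions.
    q = embed(query)
    scored = [
        (float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e))), t, c)
        for t, c, e in index
    ]
    return sorted(scored, reverse=True)[:top_k]  # (similarity, timestamp_seconds, caption)
```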
Building trustworthy and safe AI
The conversation concluded with an audience question on mitigating the risk of users relying indiscriminately on probabilistic answers from AI Agents. Ng acknowledged that machine learning outputs are often not fully deterministic, even in areas like web search. He proposed a multi-faceted approach to address this:
- User training: Educating users about the nature of AI-generated responses.
- Guardrails and mechanisms: Implementing safety features like confirmation flows before critical actions are taken.
He cited the example of a confirmation step before an AI Agent makes a purchase, requiring explicit user approval. While acknowledging past incidents of AI errors, such as the lawyer who submitted AI-generated fake case citations, Ng emphasized that current AI systems are significantly more reliable. He cautioned against assuming AI will never make mistakes or deploying it indiscriminately, but noted that its perceived limitations are often overstated.
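A confirmation flow of this kind can be very simple in practice. The sketch below uses a hypothetical `place_order()` action to show the pattern; the names are illustrative, not a specific framework’s API.

```python
def place_order(item: str, price: float) -> str:
    """Placeholder: the real, irreversible side effect the Agent would perform."""
    return f"Ordered {item} for ${price:.2f}"

def confirm_and_execute(item: str, price: float) -> str:
    # Guardrail: require explicit user approval before the Agent takes a critical action.
    answer = input(f"The Agent wants to buy '{item}' for ${price:.2f}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        return place_order(item, price)
    return "Purchase cancelled by the user."
```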
Watch more sessions from ScaleUp:AI, and see scaleup.events for updates on ScaleUp:AI 2025.
*Note: Insight Partners has invested in LandingAI.