“There’s going to be no one model that rules the world,” says Amazon’s VP of AI and Data at Seattle Tech Week event
While many companies have raced to build a be-all, end-all AI model that becomes the consistent go-to tool inside and across tech companies, the reality is shaping up to be more fluid and interconnected, with constantly changing combinations of models. Swami Sivasubramanian, Amazon Web Services’ Vice President of AI and Data, offered valuable insight into this landscape during an event in Seattle, observing that companies are continuously reshuffling their relationships with AI tools, often mixing, matching, and switching between different models.
“There’s going to be no one model that rules the world,” said Sivasubramanian. “More than half of our customers use more than one model for a given application.” In other words, “model loyalty is near zero,” added S. “Soma” Somasegar, Madrona’s Managing Director, who interviewed Sivasubramanian during the “AI Unleashed” event at Amazon HQ as part of Seattle Tech Week.
Looking to the future, Sivasubramanian predicted that building large language models will become second nature for new computer science graduates, providing companies with even greater flexibility and options.
The dynamic nature of generative AI development and adoption was a key theme at the event. A panel discussion featuring startup founders – WhyLabs CEO Alessya Visnjic, Gradial CEO Doug Tallmadge, and OctoAI CEO Luis Ceze – moderated by Madrona partner Jon Turow, echoed this sentiment.
The event highlighted several broader trends. Inside many companies and corporate boards, the focus is shifting to real-world use cases and measurable return on investment for AI projects, alongside ongoing concerns about security, compliance, and accuracy. The field has been further energized by the rise of competitive open-source models such as Meta’s Llama 3.1, and companies now adopt a mix of off-the-shelf models, customized open-source models, and proprietary models, depending on their specific needs and use cases.
In many instances, the greatest value comes from combining AI models with enterprise data rather than relying on the models alone. The key to successful adoption is integrating AI functionality into existing enterprise workflows, so that enhancements land inside day-to-day business operations rather than alongside them.
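That pattern is most often implemented as retrieval-augmented generation, in which passages retrieved from company data are fed to the model alongside the question. The sketch below is a minimal, self-contained illustration of the idea, not any particular vendor’s implementation: the toy embed function and the in-memory document list stand in for a real embedding model and vector store.

```python
from math import sqrt

# Toy embedder standing in for a real embedding model (an assumption,
# not any provider's API): hashes words into a small fixed-size vector.
def embed(text: str, dims: int = 64) -> list[float]:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# "Enterprise data": in practice these would live in a vector store.
documents = [
    "Refund requests over $500 require manager approval.",
    "Support tickets are triaged within four business hours.",
    "Quarterly access reviews are mandatory for all admin accounts.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "Who has to approve a large refund?"
context = "\n".join(retrieve(question))

# The retrieved passages are prepended to the prompt, so the model answers
# from the company's own data rather than from its training set alone.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```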
AWS, competing against giants like Microsoft, Google, and OpenAI, operates across every layer of the AI stack. It develops its own Titan foundation models, offers AI models as a service through Amazon Bedrock, and creates AI applications such as the Amazon Q AI assistant.
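The model-as-a-service layer is where the near-zero model loyalty Somasegar described becomes practical: Bedrock’s Converse API presents a uniform request shape across providers, so switching models can be a one-line change. Below is a minimal sketch, assuming configured AWS credentials and account access to the models in question; the model IDs and the prompt are illustrative.

```python
import boto3

# Bedrock's Converse API uses one request shape across model providers,
# so swapping models is just a different modelId string.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative model IDs; availability varies by account and region.
MODELS = [
    "amazon.titan-text-express-v1",
    "meta.llama3-1-70b-instruct-v1:0",
]

def ask(model_id: str, prompt: str) -> str:
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256},
    )
    return response["output"]["message"]["content"][0]["text"]

# Same application code, multiple models: the pattern Sivasubramanian
# says more than half of AWS customers now follow.
for model_id in MODELS:
    print(model_id, "->", ask(model_id, "Summarize our Q3 support backlog."))
```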
Sivasubramanian, who serves on the National AI Advisory Committee advising the White House, was named to Amazon’s senior leadership team last year. He offered insights into the future of AI without revealing specific AWS plans, hinting at developments likely to be discussed at the company’s annual re:Invent conference. He spoke about the future of “agentic workflows” — systems designed to act independently with minimal human intervention — despite current challenges with their success rates. He also highlighted the growing use of multimodal AI in industries like healthcare, moving beyond basic “text in, text out” applications to tackle more complex problems.
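For a sense of what “agentic” means in code, the sketch below shows the basic control loop: a model picks an action, the program executes the matching tool, and the result is fed back until the task is done. A rule-based decide function stands in for the model call here, purely as an assumption to keep the example runnable, and the step cap reflects the success-rate challenges Sivasubramanian mentioned.

```python
# Minimal agentic loop: the "model" chooses an action, the program runs
# the matching tool, and the result is appended to the history until done.

def search_tickets(query: str) -> str:
    return f"3 open tickets match '{query}'"

def draft_reply(ticket: str) -> str:
    return f"Drafted reply for {ticket}"

TOOLS = {"search_tickets": search_tickets, "draft_reply": draft_reply}

def decide(history: list[str]) -> tuple[str, str]:
    # Stand-in for a model call (an assumption): a real agent would send
    # `history` to an LLM and parse a structured tool call from the reply.
    if not any("tickets match" in step for step in history):
        return "search_tickets", "billing errors"
    if not any("Drafted" in step for step in history):
        return "draft_reply", "ticket #1042"
    return "done", ""

history: list[str] = ["task: respond to billing complaints"]
for _ in range(5):  # step budget: agents need a cap, not an infinite loop
    action, arg = decide(history)
    if action == "done":
        break
    history.append(TOOLS[action](arg))

print("\n".join(history))
```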
The event concluded with Madrona Managing Director Matt McIlwain addressing the startup and business leaders in attendance, emphasizing the importance of focusing on new and emerging business models, not just AI models. “There is a comprehensive stack of technological innovation and business-model innovation that is going on right now,” McIlwain said. “It was only 20 years ago that the adoption of software-as-a-service was taking over as a business model. We don’t know what the model is going to look like in the future.”