2025 Predictions: Cloud Architectures, Cost Management and Hybrid By Design

In this episode of our predictions series, we consider the evolving nature of Cloud, across architecture, cost management, and, indeed, the lower levels of infrastructure. We asked our analysts Dana Hernandez, Ivan McPhee, Jon Collins, Whit Walters, and William McKnight for their thoughts. 

Jon: We’re seeing a maturing of thinking around architecture, not just with cloud computing but across technology provision. Keep in mind that what we know as Cloud is still only 25% of the overall space – the other three quarters are on-premise or hosted in private data centers. It’s all got to work together as a single notional platform; the closer we can get to that ideal, the more efficient we can be.

Whilst the keyword may be ‘hybrid’, I expect to see a shift from hybrid environments by accident, towards hybrid by design – actively making decisions based on performance, cost, and indeed governance areas such as sovereignty. Cost management will continue to catalyze this trend, as illustrated by FinOps. 

Dana: FinOps is evolving, with many companies considering on-prem or moving workloads back from the Cloud. At FinOpsX, companies were looking at blended costs of on-prem and Cloud. Oracle has now joined the big three (Microsoft, Google, and AWS), and it’ll be interesting to see who else will jump in.

Jon: Another illustration is repatriation, moving workloads away from the Cloud and back on-premise.

William: Yes, repatriation is accelerating, but Cloud providers will likely respond by 2025 with more competitive pricing and technical advancements that offer greater flexibility and security. Overall, we’re still moving heavily to the Cloud, and it may be a few years before the repatriation trend slows down. 

Whit: The vendor response to repatriation has been interesting. Oracle with Oracle Cloud Infrastructure (OCI), for example, is undercutting competitors with their pricing model, but there’s skepticism—clients worry Oracle might increase costs later through licensing issues. 

Jon: We’re also seeing historically pure-play Cloud providers move to an acceptance of hybrid models, even though they probably wouldn’t say that out loud. AWS’ Outposts on-premise cloud offering, for example, can now work with local storage from NetApp, and it’s likely this type of partnership will accelerate. I maintain that “Cloud” should be seen primarily as an architectural construct around dynamic provisioning and elastic scaling, and only secondarily around who the provider is – recognizing that hosting companies can do a better job of resilience. Organizations need to put architecture first.

Ivan: We’ll also see more cloud-native tools to manage those workloads. For instance, on the SASE/SSE side, companies like Cato Networks are seeing success because people don’t want to install physical devices across the network. We also see this trend in NDR with companies like Lumu Technologies, where security solutions are cloud-native rather than on-premises. 

Cloud-native solutions like Cato Networks and Lumu Technologies have more pricing flexibility than those tied to hardware components. They will be better positioned to adjust pricing to drive adoption and growth than traditional on-premises solutions. Some vendors are exploring value-based pricing, considering factors like customer business value to get into strategic accounts. This could be an exciting shift as we move into the future.

The post 2025 Predictions: Cloud Architectures, Cost Management and Hybrid By Design appeared first on Gigaom.

The Evolving Revolution: AI in 2025

AI was 2024’s hot topic, so how is it evolving? What are we seeing in AI today, and what do we expect to see in the next 12-18 months? We asked Andrew Brust, Chester Conforte, Chris Ray, Dana Hernandez, Howard Holton, Ivan McPhee, Seth Byrnes, Whit Walters, and William McKnight to weigh in. 

First off, what’s still hot? Where are AI use cases seeing success?

Chester: I see people leveraging AI beyond experimentation. People have had the opportunity to experiment, and now we’re getting to a point where true, vertical-specific use cases are being developed. I’ve been tracking healthcare closely and seeing more use-case-specific, fine-tuned models, such as the use of AI to help doctors be more present during patient conversations through auditory tools for listening and note-taking. 

I believe ‘small is the new big’ is the key trend: specialized models per discipline, such as hematology versus pathology versus pulmonology. AI in imaging technologies isn’t new, but it’s now coming to the forefront with new models used to accelerate cancer detection. It has to be backed by a healthcare professional: AI can’t be the sole source of diagnoses. A radiologist needs to validate, verify, and confirm the findings. 

Dana: In my reports, I see AI leveraged effectively from an industry-specific perspective. For instance, vendors focused on finance and insurance are using AI for tasks like preventing financial crime and automating processes, often with specialized, smaller language models. These industry-specific AI models are a significant trend I see continuing into next year.

William: We’re seeing cycles reduced in areas like pipeline development and master data management, which are becoming more autonomous. An area gaining traction is data observability—2025 might be its year. 

Andrew: Generative AI is working well in code generation—generating SQL queries and creating natural language interfaces for querying data. That’s been effective, though it’s a bit commoditized now. 

More interesting are advancements in the data layer and architecture. For instance, Postgres has a vector database add-in, which is useful for retrieval-augmented generation (RAG) queries. I see a shift from the “wow” factor of demos to practical use, using the right models and data to reduce hallucinations and make data more accessible. Over the next two or three years, vendors will move from basic query intelligence to creating more sophisticated tools.
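As an aside, the retrieval step behind RAG can be illustrated in a few lines. The Python sketch below ranks documents by cosine similarity to a query embedding; the documents and three-dimensional “embeddings” are invented for illustration, and a production setup would generate embeddings with a model and store them in a vector-capable database such as Postgres with the pgvector extension:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "document store": the vectors are hypothetical 3-d embeddings,
# chosen by hand for this example.
documents = {
    "Q3 revenue grew 12% year over year": [0.9, 0.1, 0.2],
    "The cafeteria menu changes on Mondays": [0.1, 0.8, 0.3],
    "Churn declined after the pricing change": [0.8, 0.2, 0.4],
}

def retrieve(query_embedding, store, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query about financial performance (again, a hypothetical embedding):
# the finance-related documents rank above the cafeteria notice.
print(retrieve([0.85, 0.15, 0.3], documents))
```

The retrieved passages would then be handed to the model as context, which is what reduces hallucinations relative to answering from the model’s weights alone.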

How are we likely to see large language models evolve? 

Whit: Globally, we’ll see AI models shaped by cultural and political values. It’s less about technical developments and more about what we want our AIs to do. Consider Elon Musk’s xAI, based on Twitter/X. It’s uncensored—quite different from Google Gemini, which tends to lecture you if you ask the wrong question. 

Different providers, geographies, and governments will either move towards freer speech or seek to control AI’s outputs. The difference is noticeable. Next year, we’ll see a rise in models without guardrails, which will provide more direct answers.

Ivan: There’s also a lot of focus on structured prompts. A slight change in phrasing, like using “detailed” versus “comprehensive,” can yield vastly different responses. Users need to learn how to use these tools effectively.

Whit: Indeed, prompt engineering is crucial. Depending on how words are embedded in the model, you can get drastically different answers. If you ask the AI to explain what it wrote and why, it forces it to think more deeply. We’ll see domain-trained prompting tools soon—agentic models that can help optimize prompts for better outcomes.

How is AI building on and advancing the use of data through analytics and business intelligence (BI)?

Andrew: Data is the foundation of AI. We’ve seen how generative AI over large amounts of unstructured data can lead to hallucinations, and projects are getting scrapped. We’re seeing a lot of disillusionment in the enterprise space, but progress is coming: we’re starting to see a marriage between AI and BI, beyond natural language querying. 

Semantic models exist in BI to make data more understandable and can extend to structured data. When combined, we can use these models to generate useful chatbot-like experiences, pulling answers from structured and unstructured data sources. This approach creates business-useful outputs while reducing hallucinations through contextual enhancements. This is where AI will become more grounded, and data democratization will be more effective.
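To make the pattern concrete, here is a deliberately tiny Python sketch of the idea: answer from a structured source when the question maps to a known metric, and fall back to unstructured passages otherwise. All data and routing rules here are hypothetical stand-ins; a real system would use a semantic model and an LLM rather than keyword checks:

```python
# Hypothetical grounded-answer sketch: structured data first, text fallback.
metrics = {                       # structured source (e.g. a BI semantic model)
    "q3_revenue": "$4.2M",
    "active_users": "18,400",
}
passages = [                      # unstructured source (e.g. indexed documents)
    "The Q3 revenue increase was driven by the new enterprise tier.",
    "Support ticket volume fell 9% after the onboarding redesign.",
]

def answer(question):
    q = question.lower()
    # Structured lookup: exact, low-hallucination answers for known metrics.
    if "revenue" in q and "why" not in q:
        return f"Q3 revenue: {metrics['q3_revenue']}"
    # Unstructured fallback: return a supporting passage for open questions.
    for passage in passages:
        if any(word in passage.lower() for word in q.split()):
            return passage
    return "No grounded answer found."

print(answer("What was Q3 revenue?"))     # answered from structured data
print(answer("Why did revenue increase?"))  # answered from a passage
```

The point of the split is exactly the grounding Andrew describes: numeric facts come from governed structured data, while explanatory context comes from retrieved text, so the chatbot layer has much less room to invent answers.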

Howard: Agreed. BI hasn’t worked perfectly for the last decade. Those producing BI often don’t understand the business, and the business doesn’t fully grasp the data, leading to friction. However, this can’t be solved by Gen AI alone; it requires a mutual understanding between both groups. Forcing data-driven approaches without this doesn’t get organizations very far.

What other challenges are you seeing that might hinder AI’s progress? 

Andrew: The euphoria over AI has diverted mindshare and budgets away from data projects, which is unfortunate. Enterprises need to see them as the same. 

Whit: There’s also the AI startup bubble—too many startups, too much funding, burning through cash without generating revenue. It feels like an unsustainable situation, and we’ll see it burst a bit next year. There’s so much churn, and keeping up has become ridiculous.

Chris: Relatedly, I’m seeing vendors build solutions to “secure” GenAI / LLMs. Penetration testing as a service (PTaaS) vendors are offering LLM-focused testing, and cloud-native application protection platform (CNAPP) vendors are offering controls for LLMs deployed in customer cloud accounts. I don’t think buyers have even begun to understand how to use LLMs effectively in the enterprise, yet vendors are pushing new products and services to “secure” them. This is ripe for popping, although some LLM security products and services will persist. 

Seth: On the supply chain security side, vendors are starting to offer AI model analysis to identify models used in environments. It feels a bit advanced, but it’s starting to happen. 

William: Another looming factor for 2025 is the EU Data Act, which will require AI systems to be able to shut off with the click of a button. This could have a big impact on AI’s ongoing development.

The million-dollar question: how close are we to artificial general intelligence (AGI)?

Whit: AGI remains a pipe dream. We don’t understand consciousness well enough to recreate it, and simply throwing compute power at the problem won’t make something conscious—it’ll just be a simulation. 

Andrew: We can progress toward AGI, but we must stop thinking that predicting the next word is intelligence. It’s just statistical prediction—an impressive application, but not truly intelligent.

Whit: Exactly. Even when AI models “reason”, it’s not true reasoning or creativity. They’re just recombining what they’ve been trained on. It’s about how far you can push combinatorics on a given dataset.

Thanks all!

The post The Evolving Revolution: AI in 2025 appeared first on Gigaom.

Bridging Wireless and 5G

Wireless connectivity and 5G are transforming the way we live and work, but what does it take to integrate these technologies? I spoke to Bruno Tomas, CTO of the Wireless Broadband Alliance (WBA), to get his insights on convergence, collaboration, and the road ahead.

Q: Bruno, could you start by sharing a bit about your background and your role at the WBA?

Bruno: Absolutely. I’m an engineer by training, with degrees in electrical and computer engineering, as well as a master’s in telecom systems. I started my career with Portugal Telecom and later worked in Brazil, focusing on network standards. About 12 years ago, I joined the WBA, and my role has been centered on building the standards for seamless interoperability and convergence between Wi-Fi, 3G, LTE, and now 5G. At the WBA, we bring together vendors, operators, and integrators to create technical specifications and guidelines that drive innovation and usability in wireless networks.

Q: What are the key challenges in achieving seamless integration between wireless technologies and 5G?

Bruno: One of the biggest challenges is ensuring that our work translates into real-world use cases—particularly in enterprise and public environments. For example, in manufacturing or warehousing, where metal structures and interference can disrupt connectivity, robust solutions are needed from the outset. At the WBA, we’ve worked with partners from the vendor, chipset, and device communities, as well as integrators, to address these challenges by building field-tested guidelines. On top of that comes innovation. For instance, our OpenRoaming concepts enable seamless transitions between networks, including IoT, reducing the complexity for IT managers and CIOs.

Q: Could you explain how WBA’s “Tiger Teams” contribute to these solutions?

Bruno: Tiger Teams are specialized working groups within our alliance. They bring together technical experts from companies such as AT&T, Intel, Broadcom, and AirTies to solve specific challenges collaboratively. For instance, in our 5G & Wi-Fi convergence group, members define requirements and scenarios for industries like aerospace or healthcare. By doing this, we ensure that our recommendations are practical and field-ready. This collaborative approach helps drive innovation while addressing real-world challenges.

Q: You mentioned OpenRoaming earlier. How does that help businesses and consumers?

Bruno: OpenRoaming simplifies connectivity by allowing users to seamlessly move between Wi-Fi and cellular networks without needing manual logins or configurations. Imagine a hospital where doctors move between different buildings while using tablets for patient care, supported by an enhanced security layer. With OpenRoaming, they can stay connected without interruptions. Similarly, for enterprises, it minimizes the need for extensive IT support and reduces costs while ensuring high-quality service.

Q: What’s the current state of adoption for technologies like 5G and Wi-Fi 6?

Bruno: Adoption is growing rapidly, but it’s uneven across regions. Wi-Fi 6 has been a game-changer, offering better modulation and spectrum management, which makes it ideal for high-density environments like factories or stadiums. On the 5G side, private networks have been announced, especially in industries like manufacturing, but the integration with existing systems remains a hurdle. In Europe, regulatory and infrastructural challenges slow things down, while the U.S. and APAC regions are moving faster.

Q: What role do you see AI playing in wireless and 5G convergence?

Bruno: AI is critical for optimizing network performance and making real-time decisions. At the WBA, we’ve launched initiatives to incorporate AI into wireless networking, helping systems predict and adapt to user needs. For instance, AI can guide network steering—deciding whether a device should stay on Wi-Fi or switch to 5G based on signal quality and usage patterns. This kind of automation will be essential as networks become more complex.
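A minimal sketch of that steering decision might look like the following Python. The thresholds are illustrative assumptions, not drawn from any WBA specification, and a deployed system would learn and adapt them from live signal and usage data rather than hard-coding them:

```python
def steer(wifi_rssi_dbm, cellular_rsrp_dbm, wifi_load_pct):
    """
    Decide whether a device should stay on Wi-Fi or switch to 5G.
    All thresholds below are invented for illustration.
    """
    WIFI_MIN_RSSI = -75   # below this dBm, Wi-Fi is considered weak
    WIFI_MAX_LOAD = 80    # above this utilization %, offload to cellular
    CELL_MIN_RSRP = -110  # below this dBm, cellular isn't a usable fallback

    wifi_ok = wifi_rssi_dbm >= WIFI_MIN_RSSI and wifi_load_pct <= WIFI_MAX_LOAD
    cell_ok = cellular_rsrp_dbm >= CELL_MIN_RSRP

    if wifi_ok:
        return "wifi"
    if cell_ok:
        return "5g"
    return "wifi"  # nothing is good: stay put rather than force a handover

print(steer(-60, -100, 30))  # strong, lightly loaded Wi-Fi -> stay on Wi-Fi
print(steer(-85, -95, 30))   # weak Wi-Fi, usable 5G -> switch
```

The role of AI in a real network would be to replace these static thresholds with predictions: anticipating congestion or signal degradation and steering the device before quality actually drops.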

Q: Looking ahead, what excites you most about the future of wireless and 5G?

Bruno: The potential for convergence to enable new use cases is incredibly exciting. Whether it’s smart cities, advanced manufacturing, or immersive experiences with AR and VR, the opportunities are limitless. Wi-Fi 7 will bring even greater capacity and coverage, making it possible to deliver gigabit speeds in dense environments like stadiums or urban centers. Meanwhile, we are starting to look into 6G. One trend is clear: Wi-Fi should be integrated within a 6G framework, enabling densification. At the WBA, we’re committed to ensuring these advancements are accessible, interoperable, and sustainable.

Thank you, Bruno! 

N.B. The WBA Industry Report 2025 has now been released and is available for download. Please click here for further information.

The post Bridging Wireless and 5G appeared first on Gigaom.
