The EU’s AI Act

Have you ever been in a group project where one person decided to take a shortcut, and suddenly, everyone ended up under stricter rules? That’s essentially what the EU is saying to tech companies with the AI Act: “Because some of you couldn’t resist being creepy, we now have to regulate everything.” This legislation isn’t just a slap on the wrist—it’s a line in the sand for the future of ethical AI.

Here’s what went wrong, what the EU is doing about it, and how businesses can adapt without losing their edge.

When AI Went Too Far: The Stories We’d Like to Forget

Target and the Teen Pregnancy Reveal

One of the most infamous examples of AI gone wrong happened back in 2012, when Target used predictive analytics to market to pregnant customers. By analyzing shopping habits—think unscented lotion and prenatal vitamins—they managed to identify a teenage girl as pregnant before she told her family. Imagine her father’s reaction when baby coupons started arriving in the mail. It wasn’t just invasive; it was a wake-up call about how much data we hand over without realizing it.

Clearview AI and the Privacy Problem

On the law enforcement front, tools like Clearview AI created a massive facial recognition database by scraping billions of images from the internet. Police departments used it to identify suspects, but it didn’t take long for privacy advocates to cry foul. People discovered their faces were part of this database without consent, and lawsuits followed. This wasn’t just a misstep—it was a full-blown controversy about surveillance overreach.

The EU’s AI Act: Laying Down the Law

The EU has had enough of these oversteps. Enter the AI Act: the first major legislation of its kind, categorizing AI systems into four risk levels:

  1. Minimal Risk: Systems like spam filters or AI in video games; low stakes, little oversight.
  2. Limited Risk: Chatbots and similar systems that must disclose users are interacting with an AI; transparency requirements, but little more.
  3. High Risk: This is where things get serious—AI used in hiring, law enforcement, or medical devices. These systems must meet stringent requirements for transparency, human oversight, and fairness.
  4. Unacceptable Risk: Think dystopian sci-fi—social scoring systems or manipulative algorithms that exploit vulnerabilities. These are outright banned.

For companies operating high-risk AI, the EU demands a new level of accountability. That means documenting how systems work, ensuring explainability, and submitting to audits. If you don’t comply, the fines are enormous—up to €35 million or 7% of global annual revenue, whichever is higher.
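The arithmetic behind that ceiling is simple enough to sketch. A minimal example in Python, using a hypothetical revenue figure for illustration:

```python
def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Top penalty tier of the AI Act: EUR 35 million or 7% of
    global annual revenue, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A hypothetical company with EUR 2 billion in global revenue:
print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For most large enterprises, the percentage clause dominates, which is exactly the point: the penalty scales with the size of the offender.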

Why This Matters (and Why It’s Complicated)

The Act is about more than just fines. It’s the EU saying, “We want AI, but we want it to be trustworthy.” At its heart, this is a “don’t be evil” moment, but achieving that balance is tricky.

On one hand, the rules make sense. Who wouldn’t want guardrails around AI systems making decisions about hiring or healthcare? But on the other hand, compliance is costly, especially for smaller companies. Without careful implementation, these regulations could unintentionally stifle innovation, leaving only the big players standing.

Innovating Without Breaking the Rules

For companies, the EU’s AI Act is both a challenge and an opportunity. Yes, it’s more work, but leaning into these regulations now could position your business as a leader in ethical AI. Here’s how:

  • Audit Your AI Systems: Start with a clear inventory. Which of your systems fall into the EU’s risk categories? If you don’t know, it’s time for a third-party assessment.
  • Build Transparency Into Your Processes: Treat documentation and explainability as non-negotiables. Think of it as labeling every ingredient in your product—customers and regulators will thank you.
  • Engage Early With Regulators: The rules aren’t static, and you have a voice. Collaborate with policymakers to shape guidelines that balance innovation and ethics.
  • Invest in Ethics by Design: Make ethical considerations part of your development process from day one. Partner with ethicists and diverse stakeholders to identify potential issues early.
  • Stay Dynamic: AI evolves fast, and so do regulations. Build flexibility into your systems so you can adapt without overhauling everything.

The Bottom Line

The EU’s AI Act isn’t about stifling progress; it’s about creating a framework for responsible innovation. It’s a reaction to the bad actors who’ve made AI feel invasive rather than empowering. By stepping up now—auditing systems, prioritizing transparency, and engaging with regulators—companies can turn this challenge into a competitive advantage.

The message from the EU is clear: if you want a seat at the table, you need to bring something trustworthy. This isn’t about “nice-to-have” compliance; it’s about building a future where AI works for people, not at their expense.

And if we do it right this time? Maybe we really can have nice things.

Demystifying data fabrics – bridging the gap between data sources and workloads

The term “data fabric” is used across the tech industry, yet its definition and implementation vary from vendor to vendor. In autumn last year, British Telecom (BT) talked about their data fabric at an analyst event; meanwhile, in storage, NetApp has been re-orienting their brand to intelligent infrastructure but previously used the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas.

At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The principle is a single, synchronized layer between those sources and everything that needs access to the data: applications, workloads, and, increasingly, AI algorithms or learning engines.

There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer: it plugs into different data sources, gives applications, workloads, and models a single point of access, and keeps the underlying sources synchronized.
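To illustrate the principle rather than any particular vendor’s product, here is a minimal sketch in Python. The source names and interfaces are hypothetical; a real fabric adds synchronization, caching, governance, and security on top:

```python
from typing import Any, Callable, Dict

class DataFabric:
    """Minimal sketch: one access layer over many data sources."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[str], Any]] = {}

    def register(self, name: str, reader: Callable[[str], Any]) -> None:
        # Plug in any source: a database, an object store, a SaaS API...
        self._sources[name] = reader

    def read(self, source: str, query: str) -> Any:
        # Workloads ask the fabric for data, not the source directly.
        return self._sources[source](query)

# Hypothetical sources standing in for a warehouse and a CRM API.
fabric = DataFabric()
fabric.register("warehouse", lambda q: f"rows for {q!r}")
fabric.register("crm", lambda q: f"records for {q!r}")

print(fabric.read("warehouse", "orders this quarter"))
print(fabric.read("crm", "accounts updated today"))
```

The value is in the indirection: workloads depend on the fabric’s interface, so sources can move or change behind it.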

So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples:

  • BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances.
  • NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management.
  • Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools. 
  • MongoDB (and other structured data solution providers) consider data fabric principles in the context of data management infrastructure.

How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage.

If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services!
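Reduced to an interface, data as a service looks something like the sketch below (names illustrative; a real contract adds authentication, validation, and error handling):

```python
from typing import Any, Dict, Optional, Protocol

class DataService(Protocol):
    """The four operations behind most data-as-a-service contracts."""

    def create(self, record: Dict[str, Any]) -> str: ...
    def read(self, record_id: str) -> Optional[Dict[str, Any]]: ...
    def update(self, record_id: str, changes: Dict[str, Any]) -> None: ...
    def delete(self, record_id: str) -> None: ...
```

A workload written against this interface does not care whether the records live in one database or a dozen.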

I am also reminded of the origins of network acceleration, which used caching to speed up data transfers by holding versions of data locally rather than repeatedly accessing the source. Akamai built its business on transferring unstructured content such as music and films efficiently over long distances.
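The caching idea itself fits in a few lines. A minimal read-through cache, assuming a hypothetical fetch function and ignoring the expiry and invalidation that real systems cannot:

```python
from typing import Any, Callable, Dict

def cached(fetch: Callable[[str], Any]) -> Callable[[str], Any]:
    """Read-through cache: serve local copies, hit the source once per key."""
    local: Dict[str, Any] = {}

    def read(key: str) -> Any:
        if key not in local:  # only the first request travels to the source
            local[key] = fetch(key)
        return local[key]

    return read

# Hypothetical slow origin standing in for a distant content server.
fetch = cached(lambda key: f"content for {key}")
fetch("film-42")  # fetched from the origin
fetch("film-42")  # served from the local copy
```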

That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness.

If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards what kind of data fabric might be the most appropriate, but this approach also helps avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:

  1. Network level: To integrate data across multi-cloud, on-premises, and edge environments.
  2. Infrastructure level: To serve coherent data pools from the storage layer, if your data is centralized with one storage vendor.
  3. Application level: To pull together disparate datasets for specific applications or platforms.

For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and supporting application rationalization.

In the end, data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources.

While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves.

Making Sense of Cybersecurity – Part 2: Delivering a Cost-effective Response

At Black Hat Europe last year, I sat down with one of our senior security analysts, Paul Stringfellow. In this section of our conversation (you can find the first part here), we discuss balancing cost and efficiency, and aligning security culture across the organization.

Jon: So, Paul, we’re in an environment where there are problems everywhere and the instinct is to fix everything; we need to move beyond that. In the new architectures we now have, we need to be thinking smarter about our overall risk. This ties into cost management and service management—being able to grade our architecture in terms of actual risk and exposure from a business perspective.

So, I’m kind of talking myself into needing to buy a tool for this because I think that in order to cut through the 50 tools, I first need a clear view of our security posture. Then, we can decide which of the tools we have actually respond to that posture because we’ll have a clearer picture of how exposed we are.

Paul: Buying a tool goes back to vendors’ hopes and dreams—that one tool will fix everything. But I think the reality is that it’s a mix of understanding what metrics are important. Understanding the information we’ve gathered, what’s important, and balancing that with the technology risk and the business impact. You made a great point before: if something’s at risk but the impact is minimal, we have limited budgets to work with. So where do we spend? You want the most “bang for your buck.”

So, it’s understanding the risk to the business. We’ve identified the risk from a technology point of view, but how significant is it to the business? And is it a priority? Once we’ve prioritized the risks, we can figure out how to address them. There’s a lot to unpack in what you’re asking. For me, it’s about doing that initial work to understand where our security controls are and where our risks lie. What really matters to us as an organization? Go back to the important metrics—eliminating the noise and identifying metrics that help us make decisions. Then, look at whether we’re measuring those metrics. From there, we assess the risks and put the right controls in place to mitigate them. We do that posture management work. Are the tools we have in place responding to that posture? This is just the internal side of things; there’s also external risk, which is a whole other conversation, but it’s the same process.

So, looking at the tools we have, how effective are they in mitigating the risks we’ve identified? There are lots of risk management frameworks, so you can probably find a good fit, like NIST or something else. Find a framework that works for you, and use that to evaluate how your tools are managing risk. If there’s a gap, look for a tool that fills that gap.
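Paul’s prioritization logic can be sketched in a few lines: score each risk by likelihood times business impact, then spend a limited budget from the top down. The names and figures below are purely illustrative:

```python
# Illustrative only: rank risks by likelihood x business impact,
# then mitigate in order until the budget runs out.
risks = [
    {"name": "unpatched VPN gateway", "likelihood": 0.6, "impact": 9, "cost": 40},
    {"name": "stale admin accounts",  "likelihood": 0.4, "impact": 7, "cost": 10},
    {"name": "isolated lab server",   "likelihood": 0.8, "impact": 2, "cost": 25},
]

budget = 60
for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = risk["likelihood"] * risk["impact"]
    if risk["cost"] <= budget:
        budget -= risk["cost"]
        print(f"mitigate: {risk['name']} (score {score:.1f})")
    else:
        print(f"defer:    {risk['name']} (score {score:.1f})")
```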

Jon: And I was thinking about the framework because it essentially says there are six areas to address, and maybe a seventh could be important to your organization. But at least having the six areas as a checklist: Am I dealing with risk response? Am I addressing the right things? It gives you that, not quite a Pareto view, but one of diminishing returns: cover the easiest stuff first. Don’t try to fix everything until you’ve fixed the most common issues. That’s what people are trying to do right now.

Paul: Yeah, I think—let me quote another podcast I do, where we do “tech takeaways.” Yeah, who knew? I thought I’d plug it. But if you think about the takeaways from this conversation, I think, you know, going back to your question—what should I be considering as an organization? I think the starting point is probably to take a step back. As a business, as an IT leader inside that business, am I taking a step back to really understand what risk looks like? What does risk look like to the business, and what needs to be prioritized? Then, we need to assess whether we’re capable of measuring our efficacy against that risk. We’re getting lots of metrics and lots of tools. Are those tools effective in helping us avoid the risks we deem important for the business? Once we’ve answered those two questions, we can then look at our posture. Are the tools in place giving us the kind of controls we need to deal with the threats we face? Context is huge.

Jon: On that note, I’m reminded of how organizations like Facebook, for example, had a pretty high tolerance for business risk, especially around customer data. Growth was everything—just growth at all costs. So, they were prepared to manage the risks to achieve that. It ultimately boils down to assessing and taking those risks. At that point, it’s no longer a technical conversation.

Paul: Exactly. It probably never is just a technical conversation. To deliver projects that address risk and security, it should never be purely technical-led. It impacts how the company operates and the daily workflow. If everyone doesn’t buy into why you’re doing it, no security project is going to succeed. You’ll get too much pushback from senior people saying, “You’re just getting in the way. Stop it.” You can’t be the department that just gets in the way. But you do need that culture across the company that security is important. If we don’t prioritize security, all the hard work everyone’s doing could be undone because we haven’t done the basics to ensure there aren’t vulnerabilities waiting to be exploited.

Jon: I’m just thinking about the number of conversations I’ve had with vendors on how to sell security products. You’ve sold it, but then nothing gets deployed because everyone else tries to block it—they didn’t like it. The reality is that the company needs to work towards something and make sure everything aligns to deliver it.

Paul: One thing I’ve noticed over my 30-plus years in this job is how vendors often struggle to explain why they might be valuable to a business. Our COO, Howard Holton, is a big advocate of this argument—that vendors are terrible at telling people what they actually do and where the benefit lies for a business. But an example that came up yesterday shows a better approach. One representative I know works for a vendor offering an orchestration and automation tool, but when he starts a meeting, the first thing he does is ask why automation hasn’t worked for the customer. Before he pitches his solution, he takes the time to understand where their automation problems are. If more of us did that—vendors and others alike—if we first asked, “What’s not working for you?” maybe we’d get better at finding the things that will work.

Jon: So we have two takeaways for end users – to focus on risk management, and to simplify and refine security metrics. And for vendors, the takeaway is to understand the customer’s challenges before pitching a solution. By listening to the customer’s problems and needs, vendors can provide relevant and effective solutions, rather than simply selling their aspirations. Thanks, Paul!

Making Sense of Cybersecurity – Part 1: Seeing Through Complexity

At the Black Hat Europe conference in December, I sat down with one of our senior security analysts, Paul Stringfellow. In this first part of our conversation, we discuss the complexity of navigating cybersecurity tools and defining relevant metrics to measure ROI and risk.

Jon: Paul, how does an end-user organization make sense of everything going on? We’re here at Black Hat, and there’s a wealth of different technologies, options, topics, and categories. In our research, there are 30-50 different security topics: posture management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. However, from an end-user organization perspective, they don’t want to think about 40-50 different things. They want to think about 10, 5, or maybe even 3. Your role is to deploy these technologies. How do they want to think about it, and how do you help them translate the complexity we see here into the simplicity they’re looking for?

Paul: I attend events like this because the challenge is so complex and rapidly evolving. I don’t think you can be a modern CIO or security leader without spending time with your vendors and the broader industry. Not necessarily at Black Hat Europe, but you need to engage with your vendors to do your job.

Going back to your point about 40 or 50 vendors, you’re right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which research you refer to. So, how do you keep up with that? When I come to events like this, I like to do two things—and I’ve added a third since I started working with GigaOm. One is to meet with vendors, because people have asked me to. Two, go to some presentations. Three is to walk around the Expo floor talking to vendors, particularly ones I’ve never met, to see what they do. 

I sat in a session yesterday, and what drew me in was the title: “How to identify the cybersecurity metrics that are going to deliver value to you.” That caught my attention from an analyst’s point of view, because part of what we do at GigaOm is create metrics to measure the efficacy of a solution in a given topic. But if you’re deploying technology as part of SecOps or IT operations, you’re gathering a lot of metrics to try and make decisions. One of the things they talked about in the session was that we create so many metrics, across so many tools, that there’s a lot of noise. How do you begin to find the value?

The long answer to your question is that they suggested something I thought was a really smart approach: step back and think as an organization about what metrics matter. What do you need to know as a business? Doing that allows you to reduce the noise and also potentially reduce the number of tools you’re using to deliver those metrics. If you decide a certain metric no longer has value, why keep the tool that provides it? If it doesn’t do anything other than give you that metric, take it out. I thought that was a really interesting approach. It’s almost like, “We’ve done all this stuff. Now, let’s think about what actually still matters.”

This is an evolving space, and how we deal with it must evolve, too. You can’t just assume that because you bought something five years ago, it still has value. You probably have three other tools that do the same thing by now. How we approach the threat has changed, and how we approach security has changed. We need to go back to some of these tools and ask, “Do we really need this anymore?”
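That test (does any metric this tool produces still matter to the business?) can be sketched mechanically. The tools and metrics below are invented for illustration:

```python
# Illustrative only: flag tools whose metrics no longer inform any decision.
tool_metrics = {
    "legacy-av-console": ["signature hit rate"],
    "edr-platform":      ["mean time to respond", "endpoints covered"],
    "old-netflow-box":   ["packets per second"],
}
metrics_that_matter = {"mean time to respond", "endpoints covered"}

for tool, metrics in tool_metrics.items():
    if not metrics_that_matter.intersection(metrics):
        print(f"candidate for removal: {tool}")
```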

Jon: We measure our success with this, and, in turn, we’re going to change.

Paul: Yes, and I think that’s hugely important. I was talking to someone recently about the importance of automation. If we’re going to invest in automation, are we better now than we were 12 months ago after implementing it? We’ve spent money on automation tools, and none of them come for free. We’ve been sold on the idea that these tools will solve our problems. One thing I do in my CTO role, outside of my work with GigaOm, is to take vendors’ dreams and visions and turn them into reality for what customers are asking for.

Vendors have aspirations that their products will change the world for you, but the reality is what the customer needs at the other end. It’s that kind of consolidation and understanding—being able to measure what happened before we implemented something and what happened after. Can we show improvements, and has that investment had real value?

Jon: Ultimately, here’s my hypothesis: Risk is the only measure that matters. You can break that down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise data and, therefore, damage your business? Or will you expose data and upset your customers, which could hit you like a ton of bricks? But then there’s the other side—are you spending way more money than you need, to mitigate risks? 

So, you get into cost, efficiency, and so on, but is this how organizations are thinking about it? Because that’s my old-school way of viewing it. Maybe it’s moved on.

Paul: I think you’re on the right track. As an industry, we live in a little echo chamber. So when I say “the industry,” I mean the little bit I see, which is just a small part of the whole industry. But within that part, I think we are seeing a shift. In customer conversations, there’s a lot more talk about risk. They’re starting to understand the balance between spending and risk, trying to figure out how much risk they’re comfortable with. You’re never going to eliminate all risk. No matter how many security tools you implement, there’s always the risk of someone doing something stupid that exposes the business to vulnerabilities. And that’s before we even get into AI agents trying to befriend other AI agents to do malicious things—that’s a whole different conversation.

Jon: Like social engineering?

Paul: Yeah, very much so. That’s a different show altogether. But, understanding risk is becoming more common. The people I speak to are starting to realize it’s about risk management. You can’t remove all the security risks, and you can’t deal with every incident. You need to focus on identifying where the real risks lie for your business. For example, one criticism of CVE scores is that people look at a CVE with a 9.8 score and assume it’s a massive risk, but there’s no context around it. They don’t consider whether the CVE has been seen in the wild. If it hasn’t, then what’s the risk of being the first to encounter it? And if the exploit is so complicated that it’s not been seen in the wild, how realistic is it that someone will use it?

It’s such a complicated thing to exploit that nobody ever will, yet it has a 9.8, and it shows up on your vulnerability scanner saying, “You really need to deal with this.” The reality is that no context gets applied to that score, such as whether the exploit has actually been seen in the wild.
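To see how much context changes the queue, compare a list sorted on CVSS alone with one that discounts scores for exploits never observed in the wild. The weighting here is a made-up illustration, not a standard scheme:

```python
# Hypothetical triage rule: discount CVSS for vulnerabilities with no known
# exploitation in the wild; severity alone doesn't set the priority.
vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "exploited_in_wild": False},
    {"cve": "CVE-B", "cvss": 7.5, "exploited_in_wild": True},
]

def context_score(v: dict) -> float:
    return v["cvss"] * (1.0 if v["exploited_in_wild"] else 0.3)

for v in sorted(vulns, key=context_score, reverse=True):
    print(v["cve"], round(context_score(v), 2))
# CVE-B (7.5, exploited) now outranks CVE-A (9.8, never seen in the wild).
```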

Jon: Risk equals probability multiplied by impact. So you’re talking about probability and then, is it going to impact your business? Is it affecting a system used for maintenance once every six months, or is it your customer-facing website? But I’m curious because back in the 90s, when we were doing this hands-on, we went through a wave of risk avoidance, then went to, “We’ve got to stop everything,” which is what you’re talking about, through to risk mitigation and prioritizing risks, and so on. 

But with the advancement of the Cloud and the rise of new cultures like agile in the digital world, it feels like we’ve gone back to the direction of, “Well, you need to prevent that from happening, lock all the doors, and implement zero trust.” And now, we’re seeing the wave of, “Maybe we need to think about this a bit smarter.”

Paul: It’s a really good point, and actually, it’s an interesting parallel you raise. Let’s have a little argument while we’re recording this. Do you mind if I argue with you? I’ll question your definition of zero trust for a moment. So, zero trust is often seen as something trying to stop everything. That’s probably not true of zero trust. Zero trust is more of an approach, and technology can help underpin that approach. Anyway, that’s a personal debate with myself. But, zero trust…

Now, I’ll just crop myself in here later and argue with myself. So, zero trust… If you take it as an example, it’s a good one. What we used to do was implicit trust—you’d log on, and I’d accept your username and password, and everything you did after that, inside the secure bubble, would be considered valid with no malicious activity. The problem is, when your account is compromised, logging in might be the only non-malicious thing you’re doing. Once logged in, everything your compromised account tries to do is malicious. If we’re doing implicit trust, we’re not being very smart.

Jon: So, the opposite of that would be blocking access entirely?

Paul: That’s not the reality. We can’t just stop people from logging in. Zero trust allows us to let you log on, but not blindly trust everything. We trust you for now, and we continuously evaluate your actions. If you do something that makes us no longer trust you, we act on that. It’s about continuously assessing whether your activities are appropriate or potentially malicious and then acting accordingly.
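In code, the shift Paul describes is from a one-time check at login to a policy consulted on every sensitive action. A toy sketch, with invented risk signals:

```python
# Hypothetical continuous-evaluation policy: every sensitive action is
# re-checked against risk signals instead of being trusted after login.
def risk_signals(action: dict) -> dict:
    return {
        "unusual_location": action.get("new_country", False),
        "bulk_download": action.get("rows", 0) > 10_000,
    }

def allow(action: dict) -> bool:
    # Any raised signal means step up authentication or block.
    return not any(risk_signals(action).values())

print(allow({"rows": 50}))                           # True: looks routine
print(allow({"rows": 50_000, "new_country": True}))  # False: re-evaluate trust
```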

Jon: It’s going to be a very disappointing argument because I agree with everything you say. You argued with yourself more than I’m going to be able to, but I think, as you said, the castle defense model—once you’re in, you’re in. 

I’m mixing two things there, but the idea is that once you’re inside the castle, you can do whatever you like. That’s changed. 

So, what to do about it? Read Part 2 for how to deliver a cost-effective response.
