Securing Identities: The Foundation of Zero Trust

Welcome back to our zero trust blog series! In our previous post, we took a deep dive into data security, exploring the importance of data classification, encryption, and access controls in a zero trust model. Today, we’re shifting our focus to another critical component of zero trust: identity and access management (IAM).

In a zero trust world, identity is the new perimeter. With the dissolution of traditional network boundaries and the proliferation of cloud services and remote work, securing identities has become more important than ever. In this post, we’ll explore the role of IAM in a zero trust model, discuss common challenges, and share best practices for implementing strong authentication and authorization controls.

The Zero Trust Approach to Identity and Access Management

In a traditional perimeter-based security model, access is often granted based on a user’s location or network affiliation. Once a user is inside the network, they typically have broad access to resources and applications.

Zero trust turns this model on its head. By assuming that no user, device, or network should be inherently trusted, zero trust requires organizations to take a more granular, risk-based approach to IAM. This involves:

  1. Strong authentication: Verifying the identity of users and devices through multiple factors, such as passwords, biometrics, and security tokens.
  2. Least privilege access: Granting users the minimum level of access necessary to perform their job functions and revoking access when it’s no longer needed.
  3. Continuous monitoring: Constantly monitoring user behavior and access patterns to detect and respond to potential threats in real-time.
  4. Adaptive policies: Implementing dynamic access policies that adapt to changing risk factors, such as location, device health, and user behavior.

By applying these principles, organizations can create a more secure, resilient identity and access management posture that minimizes the risk of unauthorized access and data breaches.
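
To make the adaptive-policy principle concrete, here is a minimal sketch, in Python, of a risk-based access decision. The signals, weights, and thresholds are hypothetical stand-ins; in practice they would come from your identity provider, device management, and behavioral analytics tooling.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Signals gathered at request time (names are illustrative)."""
    known_device: bool      # device is enrolled and reports healthy
    trusted_location: bool  # request comes from an expected network/region
    unusual_behavior: bool  # e.g., impossible travel, odd-hours activity

def evaluate_access(ctx: AccessContext) -> str:
    """Map aggregated risk to an action. The weights and thresholds
    here are arbitrary examples, not recommendations."""
    risk = 0
    risk += 0 if ctx.known_device else 2
    risk += 0 if ctx.trusted_location else 1
    risk += 2 if ctx.unusual_behavior else 0

    if risk == 0:
        return "allow"        # low risk: grant access
    if risk <= 2:
        return "require_mfa"  # medium risk: step-up authentication
    return "deny"             # high risk: block and alert

# Unknown device, trusted location, no anomalies: step up to MFA
print(evaluate_access(AccessContext(False, True, False)))
```

The point of the sketch is the shape of the decision, not the numbers: access is evaluated per request, and the answer can be "yes, but prove it again" rather than a binary allow or deny.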

Common Challenges in Zero Trust Identity and Access Management

Implementing a zero trust approach to IAM is not without its challenges. Some common hurdles organizations face include:

  1. Complexity: Managing identities and access across a diverse range of applications, systems, and devices can be complex and time-consuming, particularly in hybrid and multi-cloud environments.
  2. User experience: Balancing security with usability is a delicate task. Overly restrictive access controls and cumbersome authentication processes can hinder productivity and frustrate users.
  3. Legacy systems: Many organizations have legacy systems and applications that were not designed with zero trust principles in mind, making it difficult to integrate them into a modern IAM framework.
  4. Skill gaps: Implementing and managing a zero trust IAM solution requires specialized skills and knowledge, which can be difficult to find and retain in a competitive job market.

To overcome these challenges, organizations must invest in the right tools, processes, and talent, and take a phased approach to zero trust IAM implementation.

Best Practices for Zero Trust Identity and Access Management

Implementing a zero trust approach to IAM requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  1. Implement strong authentication: Use multi-factor authentication (MFA) wherever possible, combining factors such as passwords, biometrics, and security tokens. Consider using passwordless authentication methods, such as FIDO2, for enhanced security and usability.
  2. Enforce least privilege access: Implement granular, role-based access controls (RBAC) based on the principle of least privilege. Regularly review and update access permissions to ensure users only have access to the resources they need to perform their job functions.
  3. Monitor and log user activity: Implement robust monitoring and logging mechanisms to track user activity and detect potential threats. Use security information and event management (SIEM) tools to correlate and analyze log data for anomalous behavior.
  4. Use adaptive access policies: Implement dynamic access policies that adapt to changing risk factors, such as location, device health, and user behavior. Use tools like Microsoft Entra Conditional Access or Okta Adaptive Multi-Factor Authentication to enforce these policies.
  5. Secure privileged access: Implement strict controls around privileged access, such as admin accounts and service accounts. Use privileged access management (PAM) tools to monitor and control privileged access and implement just-in-time (JIT) access provisioning.
  6. Educate and train users: Provide regular security awareness training to help users understand their role in protecting the organization’s assets and data. Teach best practices for password management, phishing detection, and secure remote work.

By implementing these best practices and continuously refining your IAM posture, you can better protect your organization’s identities and data and build a strong foundation for your zero trust architecture.
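
For a sense of what one common MFA factor does under the hood, here is a minimal sketch of time-based one-time password (TOTP) generation per RFC 6238, the algorithm behind most authenticator apps. It is for illustration only; in production you would rely on your identity provider or a vetted library rather than rolling your own.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code (the common SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # current 30s time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret: "12345678901234567890" base32-encoded
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Server-side verification is the same computation, typically allowing one time step of clock drift in either direction.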

Conclusion

In a zero trust world, identity is the new perimeter. By treating identities as the primary control point and applying strong authentication, least privilege access, and continuous monitoring, organizations can minimize the risk of unauthorized access and data breaches.

However, achieving effective IAM in a zero trust model requires a commitment to overcoming complexity, balancing security and usability, and investing in the right tools and talent. It also requires a cultural shift, with every user taking responsibility for protecting the organization’s assets and data.

As you continue your zero trust journey, make IAM a top priority. Invest in the tools, processes, and training necessary to secure your identities, and regularly assess and refine your IAM posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of network segmentation in a zero trust model and share best practices for implementing micro-segmentation and software-defined perimeters.

Until then, stay vigilant and keep your identities secure!

Protecting Your Crown Jewels

Welcome back to our zero trust blog series! In the previous posts, we introduced the concept of zero trust and explored the essential building blocks of a comprehensive zero trust architecture. Today, we’re diving deeper into one of the most critical aspects of zero trust: data security.

Data is the lifeblood of modern organizations. From intellectual property and financial records to customer information and employee data, your organization’s data is its most valuable asset. However, in a world where data breaches make headlines almost daily, protecting this asset has never been more challenging or more critical.

In this post, we’ll explore the role of data security in a zero trust model, discuss the dangers of data misclassification, and share best practices for safeguarding your organization’s crown jewels.

The Zero Trust Approach to Data Security

In a traditional perimeter-based security model, data is often treated as a monolithic entity. Once a user or device is granted access to the network, they can typically access a wide range of data with little or no additional verification.

Zero trust turns this model on its head. By assuming that no user, device, or network should be inherently trusted, zero trust requires organizations to take a more granular, risk-based approach to data security. This involves:

  1. Data discovery and classification: Identifying and categorizing data based on its sensitivity, value, and criticality to the organization.
  2. Micro-segmentation: Isolating data into smaller, more manageable units and applying granular access controls based on the principle of least privilege.
  3. Encryption: Protecting data at rest and in transit using strong encryption methods to ensure confidentiality and integrity.
  4. Continuous monitoring: Constantly monitoring data access and usage patterns to detect and respond to potential threats in real-time.

By applying these principles, organizations can create a more robust, adaptable data security posture that minimizes the risk of data breaches and limits the potential damage if a breach does occur.
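
As a toy illustration of the discovery and classification step, the sketch below tags text by simple pattern matching. Real classification engines combine content inspection, context, and machine learning; the patterns, labels, and sensitivity ordering here are purely illustrative.

```python
import re

# Illustrative patterns only; production classifiers use far richer
# detection than a few regular expressions.
PATTERNS = {
    "restricted":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-like
    "confidential": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-number-like
    "internal":     re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
}

def classify(text: str) -> str:
    """Return the most sensitive matching label, else 'public'."""
    for label in ("restricted", "confidential", "internal"):
        if PATTERNS[label].search(text):
            return label
    return "public"

print(classify("SSN on file: 123-45-6789"))   # restricted
print(classify("Contact: jane@example.com"))  # internal
print(classify("Quarterly roadmap deck"))     # public
```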

The Dangers of Data Misclassification

One of the most significant challenges in implementing a zero trust approach to data security is ensuring accurate data classification. Misclassifying data–or failing to classify it at all–can have severe consequences for your organization:

  • Overexposure: If sensitive data is misclassified as non-sensitive, it may be accessible to a broader range of users and systems than necessary, increasing the risk of unauthorized access and data breaches.
  • Overprotection: Conversely, if non-sensitive data is misclassified as sensitive, it may be subject to overly restrictive access controls, hindering productivity and collaboration.
  • Compliance violations: Misclassifying regulated data, such as personally identifiable information (PII) or protected health information (PHI), can result in compliance violations and hefty fines.
  • Delayed breach detection and response: Without accurate data classification, it’s difficult to prioritize security efforts and detect potential breaches in a timely manner. This can lead to longer dwell times and more extensive damage.

To mitigate these risks, organizations must invest in robust data discovery and classification processes, leveraging a combination of automated tools and manual review to ensure data is accurately categorized and protected.

Best Practices for Data Security in a Zero Trust Model

Implementing a zero trust approach to data security requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  • Establish clear data classification policies: Develop and communicate clear policies and guidelines for data classification, including criteria for determining data sensitivity and procedures for handling each data category.
  • Implement strong access controls: Enforce granular, role-based access controls (RBAC) based on the principle of least privilege. Regularly review and update access permissions to ensure users only have access to the data they need to perform their job functions.
  • Encrypt data at rest and in transit: Use strong encryption methods, such as AES-256, to protect data both at rest and in transit. Ensure encryption keys are securely managed and rotated regularly.
  • Monitor and log data access: Implement robust monitoring and logging mechanisms to track data access and usage patterns. Use security information and event management (SIEM) tools to correlate and analyze log data for potential threats.
  • Develop a data breach response plan: Create and regularly test a comprehensive data breach response plan that outlines roles, responsibilities, and procedures for detecting, containing, and recovering from a data breach. Ensure the plan includes clear guidelines for notifying affected parties and complying with relevant regulations.
  • Provide employee training and awareness: Educate employees on the importance of data security, their role in protecting sensitive data, and best practices for handling and sharing data securely. Conduct regular training and phishing simulations to reinforce these concepts.

By implementing these best practices and continuously refining your data security posture, you can better protect your organization’s crown jewels and build trust with customers, partners, and stakeholders.
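
To ground the encryption guidance, here is a brief sketch of encrypting a record at rest with AES-256-GCM using the Python `cryptography` package (the library choice is an assumption; any vetted AES-GCM implementation works). Key management, the genuinely hard part, is deliberately out of scope.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In production, keys come from a KMS or HSM and are rotated on a
# schedule; they are never hardcoded or stored beside the data.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"customer_id=42,balance=1000"
nonce = os.urandom(12)  # 96-bit nonce; must be unique per (key, message)

# Associated data is authenticated but not encrypted (e.g., a record ID).
ciphertext = aesgcm.encrypt(nonce, plaintext, b"record:42")

# Decryption verifies integrity; any tampering raises InvalidTag.
assert aesgcm.decrypt(nonce, ciphertext, b"record:42") == plaintext
```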

Conclusion

In a zero trust world, data security is paramount. By treating data as the new perimeter and applying granular, risk-based controls, organizations can minimize the risk of data breaches and limit the potential damage if a breach does occur.

However, achieving effective data security in a zero trust model requires a commitment to accurate data classification, strong access controls, encryption, and continuous monitoring. It also requires a cultural shift, with every employee taking responsibility for protecting the organization’s most valuable assets.

As you continue your zero trust journey, make data security a top priority. Invest in the tools, processes, and training necessary to safeguard your crown jewels, and regularly assess and refine your data security posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of identity and access management (IAM) in a zero trust model and share best practices for implementing strong authentication and authorization controls.

Until then, stay vigilant and keep your data secure!

Building Blocks of Zero Trust: A Comprehensive Guide

In our previous post, we introduced the concept of zero trust and explored why it’s becoming an essential approach to cybersecurity in today’s digital landscape. We discussed the limitations of the traditional “trust but verify” model and highlighted the key principles and benefits of embracing a zero trust philosophy.

Now that you have a solid understanding of what zero trust is and why it matters, it’s time to dive deeper into the building blocks that make up a zero trust architecture. In this post, we’ll explore the core components of zero trust and how they work together to create a robust, resilient security posture.

The Six Pillars of Zero Trust

While various frameworks and models exist for implementing zero trust, most of them share a common set of core components. These six pillars form the foundation of a comprehensive zero trust architecture:

  1. Identity: In a zero trust model, identity becomes the new perimeter. It’s essential to establish strong authentication and authorization mechanisms to ensure that only verified users and devices can access resources.
  2. Devices: Zero trust requires continuous monitoring and validation of all devices accessing the network, including IoT and BYOD devices. This pillar focuses on ensuring device health, integrity, and compliance.
  3. Network: By segmenting the network into smaller, isolated zones and enforcing granular access controls, organizations can minimize the blast radius of potential breaches and limit lateral movement.
  4. Applications: Zero trust principles extend to applications, requiring secure access, continuous monitoring, and real-time risk assessment. This pillar involves implementing application-level controls and securing communication between applications.
  5. Data: Protecting sensitive data is a core objective of zero trust. This pillar involves data classification, encryption, and access controls to ensure that data remains secure throughout its lifecycle.
  6. Infrastructure: Zero trust requires securing all infrastructure components, including cloud services, servers, and containers. This pillar focuses on hardening systems, applying security patches, and monitoring for vulnerabilities.

By addressing each of these pillars, organizations can create a comprehensive zero trust architecture that provides end-to-end security across their entire digital ecosystem.

Implementing the Zero Trust Building Blocks

Now that you understand the six pillars of zero trust, let’s explore some practical steps for implementing these building blocks in your organization.

  1. Establish strong identity and access management (IAM): Implement multi-factor authentication (MFA), single sign-on (SSO), and risk-based access policies to ensure that only verified users can access resources. Use tools like Azure Active Directory or Okta to streamline IAM processes.
  2. Implement device health and compliance checks: Use mobile device management (MDM) and endpoint protection platforms to enforce device health policies, monitor for threats, and ensure compliance with security standards. Solutions like Microsoft Intune or VMware Workspace ONE can help manage and secure devices.
  3. Segment your network: Use micro-segmentation to divide your network into smaller, isolated zones based on application, data sensitivity, or user roles. Implement software-defined networking (SDN) and network access control (NAC) to enforce granular access policies (see the sketch after this list).
  4. Secure your applications: Implement application-level controls, such as API gateways, and use tools like Cloudflare Access or Zscaler Private Access to secure application access. Regularly assess and test your applications for vulnerabilities and ensure secure communication between applications.
  5. Protect your data: Classify your data based on sensitivity, implement encryption for data at rest and in transit, and enforce strict access controls. Use data loss prevention (DLP) tools to monitor for data exfiltration and prevent unauthorized access.
  6. Harden your infrastructure: Regularly patch and update your systems, use hardened images for virtual machines and containers, and implement infrastructure as code (IaC) to ensure consistent and secure configurations. Leverage tools like Terraform or Ansible to automate infrastructure provisioning and management.
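
As referenced in step 3, here is a minimal sketch of the default-deny logic behind micro-segmentation. The zone names, ports, and flow table are hypothetical, and real enforcement lives in SDN controllers, firewalls, or service meshes rather than application code.

```python
# Explicit allow-list of (source zone, destination zone, port) flows;
# anything not listed is denied by default.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(is_allowed("web-tier", "app-tier", 8443))  # True
print(is_allowed("web-tier", "db-tier", 5432))   # False: no direct path to data
```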

Measuring the Success of Your Zero Trust Implementation

As you implement zero trust in your organization, it’s crucial to establish metrics and key performance indicators (KPIs) to measure the success of your efforts. Some key metrics to consider include:

  • Reduction in the number of security incidents and breaches
  • Decreased time to detect and respond to threats
  • Improved compliance with industry regulations and standards
  • Increased visibility into user and device activity
  • Enhanced user experience and productivity

By regularly monitoring and reporting on these metrics, you can demonstrate the value of your zero trust initiatives and continuously improve your security posture.
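
For teams starting from scratch, even simple arithmetic over incident records can seed these KPIs. The sketch below computes mean time to detect (MTTD) and mean time to respond (MTTR); the record fields are hypothetical, and a real pipeline would pull them from your SIEM or ticketing system.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident records with intrusion, detection, and
# resolution timestamps.
incidents = [
    {"start": datetime(2024, 5, 1, 9, 0),
     "detected": datetime(2024, 5, 1, 9, 45),
     "resolved": datetime(2024, 5, 1, 13, 0)},
    {"start": datetime(2024, 5, 7, 22, 0),
     "detected": datetime(2024, 5, 8, 6, 30),
     "resolved": datetime(2024, 5, 8, 11, 0)},
]

def mean_hours(deltas):
    return mean(d.total_seconds() / 3600 for d in deltas)

mttd = mean_hours([i["detected"] - i["start"] for i in incidents])
mttr = mean_hours([i["resolved"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 4.6 h, MTTR: 3.9 h
```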

Conclusion

Building a zero trust architecture is a complex and ongoing process, but by understanding the core components and implementing them systematically, you can create a robust, adaptable security posture that meets the challenges of the modern threat landscape.

Remember, zero trust is not a one-size-fits-all solution. It’s essential to tailor your approach to your organization’s unique needs, risk profile, and business objectives. Start small, focus on high-impact initiatives, and continuously iterate and improve your zero trust implementation.

In our next post, we’ll explore some real-world examples of successful zero trust implementations and share lessons learned from organizations that have embarked on their own zero trust journeys.

Until then, start evaluating your current security posture against the six pillars of zero trust and identify opportunities for improvement. The road to zero trust is long, but every step you take brings you closer to a more secure, resilient future.

Reflections on Snowflake Summit 2024

This past week, I had the opportunity to attend Snowflake Summit 2024 in San Francisco. As an analyst, I was treated to an exclusive pre-day of content from the Snowflake team, which proved both enlightening and thought-provoking.

The event kicked off with Snowflake addressing the recent “hack” reported in the news. They assured us that their collaboration with CrowdStrike and other partners has revealed no signs of compromise within Snowflake itself. The evidence points to compromised customer credentials, and the investigation remains ongoing.

One of the highlights was the introduction of new products, including the Trust Center. This innovative tool assesses the security of your Snowflake data estate, utilizing AI and ML to identify potential issues such as privacy mismatches, user access inconsistencies, and poor data classification. However, there was no mention of whether the Trust Center provided similar insights into user accounts—a critical consideration given the hack discussion. I managed to get clarity on this from the Trust Center’s product manager later in the day.

This experience underscored a recurring issue I see in the vendor space: a distinct lack of cohesive storytelling. Vendors often present without any sense of overarching narrative. This fragmented approach can feel like a song composed entirely of solos. Individually, each instrument may showcase talent, but without the common purpose of harmony, no audience will ever sing along. It’s chaotic and discordant—much like many of the tech presentations I sat in on.

We need change. We don’t just need better messaging; we need better stories. We need to ask ourselves: Who are we? Why do we exist? Where are we going? What are the stops along the way? What sights will we see? Where do we board? Improving how we communicate is essential.

On a positive note, Snowflake showcased all the components necessary to build a compelling data story for your enterprise. Their partners can fill in any gaps in your business narrative. The only missing piece is a cohesive purpose, which you can personalize to meet your specific needs.

The good news is that there are people like me here to help. I aim to demystify what is presented, help you create a strategy, and build a plan to achieve your goals. Together, we can create your story, one chapter at a time.

Zero Trust 101: It’s Time to Ditch “Trust but Verify”

Welcome to our new blog series on zero trust! If you’re an IT executive trying to navigate the complex world of cybersecurity, you’re in the right place. Over the next few posts, we’re going to demystify this buzzworthy concept and show you how to make it work for your organization. No jargon, no fluff, just practical insights you can use to enhance your security posture and protect your business.

In this first post, we’ll explore why the traditional “trust but verify” approach to security is no longer enough and why zero trust is the way forward. We’ll also give you some action items to get started on your zero trust journey. But first, let’s talk about the elephant in the room: what exactly is zero trust, and why should you care?

The Problem with “Trust but Verify”

For years, the “trust but verify” model was the gold standard in cybersecurity. The idea was simple: once a user or device was authenticated and allowed into the network, they were trusted to access resources and data. This approach worked well enough when most employees worked in the office and used company-issued devices.

Times have changed, however, and the limitations of “trust but verify” have become increasingly apparent:

  • It doesn’t effectively limit the blast radius of a breach. If an attacker compromises a trusted user or device, they can move laterally within the network, accessing sensitive data and systems. The damage can be extensive and costly.
  • It’s too focused on access control alone. It doesn’t adequately address other critical areas like device security, network segmentation, and data protection. In today’s complex, distributed IT environments, this narrow focus leaves organizations vulnerable.

The bottom line is that “trust but verify” is no longer sufficient to protect against modern cyber threats. We need a more comprehensive, adaptable approach to security – and that’s where zero trust comes in.

Zero Trust: A Philosophy, Not a Product

Zero trust is a security model that assumes no user, device, or network should be trusted by default, regardless of whether they’re inside or outside the organization’s perimeter. It’s a philosophy that emphasizes continuous verification, least privilege access, and granular control over resources and data.

Now, you might be thinking, “Great, another cybersecurity buzzword to add to the pile.” And it’s true that the term “zero trust” has been co-opted by many vendors to align with their product offerings. But don’t be fooled: zero trust is not a product you can buy off the shelf. It’s a mindset, a set of principles that guide your approach to security:

  • Never trust, always verify
  • Assume breach
  • Verify explicitly
  • Use least privilege access
  • Monitor and audit continuously

By adopting these principles, organizations can create a more robust, resilient security posture that addresses the limitations of “trust but verify” and reduces the blast radius of potential breaches.

Why You Need Zero Trust

Embracing zero trust is not just about staying on top of the latest cybersecurity trends. It is a business decision that can deliver real, tangible benefits:

  • Reduced risk: By not trusting anyone or anything by default and continuously verifying access, you can significantly reduce your attack surface and limit the potential damage of a breach. This is crucial in an era where the average cost of a data breach is $4.35 million (IBM Security, 2022).
  • Improved visibility and control: Zero trust gives you granular control over who can access what and helps you spot potential threats more quickly. With better visibility into your environment, you can respond to incidents faster and more effectively.
  • Enabling digital transformation: As you adopt cloud services, implement IoT devices, and enable remote work, zero trust provides a framework for securing these new environments and use cases. It allows you to embrace innovation without compromising security.
  • Competitive advantage: By demonstrating a strong commitment to security, you can build trust with customers, partners, and regulators. In a world where data breaches make headlines almost daily, being able to showcase your robust security posture can set you apart from the competition.

Getting Started with Zero Trust

Implementing zero trust is not a one-and-done project. It’s a journey that requires a shift in mindset and a willingness to rethink traditional approaches to security. But just because it’s not easy doesn’t mean there’s nothing you can do to get started. Here are some action items you can tackle right away:

  1. Educate yourself and your team: Share this blog post with your colleagues and start a conversation about zero trust. The more everyone understands the concept, the easier it will be to implement.
  2. Assess your current security posture: Take a hard look at your existing security controls and identify gaps or weaknesses that a zero trust approach could address. This will help you prioritize your efforts and build a roadmap for implementation.
  3. Start small: Identify a specific use case or area of your environment where you can pilot zero trust principles, such as a particular application or user group. Starting small allows you to test and refine your approach before scaling up.
  4. Engage stakeholders: Zero trust is not just an IT initiative. It requires buy-in and participation from business leaders, end-users, and other stakeholders. Start talking to these groups about the benefits of zero trust and how it will impact them. Getting everyone on board early will make the transition smoother.

Wrapping Up

Adopting zero trust is a significant undertaking, but it’s one that’s well worth the effort. By embracing a philosophy of “never trust, always verify,” you can reduce your risk, improve your visibility and control, enable digital transformation, and gain a competitive edge in the market.

Over the course of this blog series, we’ll dive deeper into the key components of a zero trust architecture, explore best practices for implementation, and show you how to measure the success of your zero trust initiatives. We’ll also dispel common myths and misconceptions about zero trust and provide practical guidance for overcoming challenges along the way.

So, whether you’re just starting to explore zero trust or you’re well on your way to implementation, this series is for you. Stay tuned for our next post, where we’ll take a closer look at the building blocks of a zero trust architecture and how they work together to protect your assets and data.

In the meantime, start exploring zero trust and thinking about how it can benefit your organization. The future of security is here, and it’s time to embrace it.

Near, Far, Wherever Your CPUs Are

Over the past two years, I’ve been making the point that near edge and far edge are utilitarian terms at best: they fail to capture some really important architectural and delivery mechanisms for edge solutions, such as as-a-service consumption versus purchasing hardware, global networks versus local deployments, and suitability for digital services versus suitability for industrial use cases. This distinction came into play as I began work on a new report with a focus on specific edge solutions.

The first edge report I wrote was on edge platforms (now edge dev platforms), which was essentially a take on content delivery networks (CDN) plus edge compute, or a far-edge solution. Within that space, there was a lot of attention on where the edge is, which is irrelevant from a buying perspective. I won’t base a selection on whether a solution is a service provider edge or a cloud edge as long as it meets my requirements—which may involve latency but are more likely to be the ones I mentioned in the opening paragraph.

Near Edge Vs. Far Edge

I talked about this CDN perspective in an episode of Utilizing Edge. The conversation—co-hosted by former GigaOm analyst Alastair Cooke—went into the far-edge and near-edge conundrum. Alastair, who wrote the GigaOm Radar for Hyperconverged Infrastructure (HCI): Edge Deployments report (which I didn’t realize until a year later), brought experience from the near-edge perspective, just as I came in with the far-edge background.

One of my takeaways from this conversation is that the difference between CDN-based edges (far edge) and HCI deployments (near edge) is pushing versus pulling. I’m glad I only realized Alastair wrote the Edge HCI report after the fact because I had to work through this push versus pull thing myself. It’s quite obvious in retrospect, mainly because a CDN delivers content, so it’s always been about web resources centrally hosted somewhere that get pushed to the users’ locations. On the other hand, an edge solution deployed on location has the data generated at the edge, which you can then pull to a central location if necessary.

So, I made the case to also write a report on the near edge, where we evaluate solutions that are deployed on customers’ preferred locations for local processing and can call back to the cloud when necessary.

Why the Edge?

You may ask yourself, what’s the difference between deploying this type of solution at the edge and just deploying traditional servers? Well, if your organization has edge use cases, you likely have a lot of locations to manage, and a traditional server architecture scales only linearly with them, in both time and effort.

An edge solution would need to make this worthwhile, which means it must be:

  • Converged: I want to deploy a single appliance, not a server, a switch, external storage, and a firewall.
  • Hyperconverged: As per the above, but with software-defined resources, namely through virtualization and/or containerization.
  • Centrally managed: A single management plane to control all these geographically distributed deployments and all their resources.
  • Plug-and-play: The solution will provide everything needed to run applications. For example, I do not want to bring my own operating system and manage it if I don’t have to.

In other words, these must be full-stack solutions deployed at the edge. And because I like my titles to be representative, I’ve called this evaluation “full-stack edge deployment.”

Defining Full-Stack Edge

All the bullet points above became the table stakes—features that all solutions in the sector support and therefore do not materially impact comparative assessment. Table stakes define the minimum acceptable functionality for solutions under consideration in GigaOm’s Radar reports. The most significant change between the initial scoping phase and the finished report is the hardware requirement. I first defined the report by looking at integrated hardware-software solutions, such as Azure Stack Edge, AWS Outposts, and Google Cloud Edge. I have since dropped the hardware requirement as long as the solution can run on converged hardware. This is for two reasons:

  • The first reason is that evaluating hardware as part of the report would take away from all the other value-adding features I was looking to evaluate.
  • The second reason is that we had a lot of engagement from software-only vendors for this report, which is a rear-view way of gauging that there is demand in this market for just the software component. These software-only vendors typically have partnerships with bare metal hardware providers, so there is little to no friction for a customer to procure both at the same time.

The final output of this year-long scoping exercise—the full-stack edge deployment Key Criteria and Radar Reports—defines the features and architectural concepts that are relevant when deploying an edge solution on your preferred location.

Simply saying “near edge” will never capture nuances such as an integrated hardware-software solution running a host OS with a type 2 hypervisor where virtual resources can be defined across clusters and third-party edge-native applications can be provisioned through a marketplace. But full-stack edge deployments will.

Next Steps

To learn more, take a look at GigaOm’s full-stack edge deployment Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

The Challenge of Securing User Identities

Several businesses I’ve worked with recently have had the misfortune of being victims of cybersecurity incidents. While these incidents come in many forms, there is a common thread: they all started with a compromise of user identity.

Why Identities are Targeted

Identity security—whether it involves usernames and passwords, machine names, encryption keys, or certificates—presents a real challenge. These credentials are needed for access control, ensuring only authorized users have access to systems, infrastructure, and data. Cybercriminals also know this, which is why they are constantly trying to compromise credentials. It’s why incidents such as phishing attacks remain an ongoing problem; gaining access to the right credentials is the foothold an attacker needs.

Attempts to compromise identity do leave a trail: a phishing email, an attempted logon from an incorrect location, or more sophisticated signs such as the creation of a new multifactor authentication (MFA) token. Unfortunately, these things can happen many days apart, are often recorded across multiple systems, and individually may not look suspicious. This creates security gaps attackers can exploit.

Solving the Identity Security Challenge

Identity security is complex and difficult to address. Threats are constant and numerous, with focused attackers targeting users and machines through increasingly innovative methods. A compromised account can be highly valuable to an attacker, offering hard-to-detect access that can be used to carry out reconnaissance and craft a targeted attack to deploy malware or steal data or funds. The problem of compromised identities is only going to grow, and the impact of compromise is significant, as many organizations do not have the tools or knowledge to deal with it.

It was the challenge of securing user identities that made me leap at the chance to work on a GigaOm research project into identity threat detection and response (ITDR) solutions, which gave me the opportunity to learn how security vendors can help address this complex challenge. ITDR is a growing IT industry trend, and while it is a discipline rather than a product, the trend has led to software-based solutions that help enforce that discipline.

How to Choose the Right ITDR Solution

Solution Capabilities
ITDR tools bring together identity-based threat telemetry from many sources, including user directories, identity platforms, cloud platforms, SaaS solutions, and other areas such as endpoints and networks. They then apply analytics, machine learning, and human oversight to look for correlations across data points to provide insight into potential threats.

Critically, they do this quickly and accurately—within minutes—and it is this speed that is essential in tackling threats. In the examples I mentioned, it took days before the identity compromise was spotted, and by then the damage had been done. Tools that can quickly notify of threats and even automate the response will significantly reduce the risk of potential compromise.
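
To illustrate the correlation idea in miniature, the sketch below flags any identity where a new MFA token registration is followed by a sign-in from an unfamiliar location within a set window. The event shapes, sources, and the 48-hour window are hypothetical; production ITDR engines correlate far richer telemetry, at scale and in near real time.

```python
from datetime import datetime, timedelta

# Hypothetical, pre-normalized events from different sources (identity
# platform, endpoint, network). Individually, none looks suspicious.
events = [
    {"user": "jdoe",   "type": "mfa_token_added", "time": datetime(2024, 6, 3, 8, 15)},
    {"user": "jdoe",   "type": "login_new_geo",   "time": datetime(2024, 6, 4, 2, 40)},
    {"user": "asmith", "type": "login_new_geo",   "time": datetime(2024, 6, 4, 9, 0)},
]

WINDOW = timedelta(hours=48)

def correlate(events):
    """Flag users with a new-location sign-in within WINDOW of a new
    MFA token; the combination is the signal, not either event alone."""
    flagged = set()
    for a in (e for e in events if e["type"] == "mfa_token_added"):
        for b in (e for e in events if e["type"] == "login_new_geo"):
            if b["user"] == a["user"] and timedelta(0) <= b["time"] - a["time"] <= WINDOW:
                flagged.add(a["user"])
    return flagged

print(correlate(events))  # {'jdoe'}
```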

Proactive security that can help reduce risk in the first place adds additional value. ITDR solutions can help build a picture of the current environment and apply risk templates to it to highlight areas of concern, such as accounts or data repositories with excessive permissions, unused accounts, and accounts found on the dark web. The security posture insights provided by highlighting these concerns help improve security baselines.

Deception technology is also useful. It works by using fake accounts or resources to attract attackers, leaving the true resources untouched. This reduces the risk to actual resources while providing a useful way to study attacks in progress without risking valuable assets.

Vendor Approach
ITDR solutions fall into two main camps, and while neither approach is better or worse than the other, they are likely to appeal to different markets.

One route is the “add-on” approach, usually from vendors in either the extended detection and response (XDR) space or the privileged access management (PAM) space. This approach uses existing insights and applies identity threat intelligence to them. For organizations already using XDR or PAM tools, adding ITDR can be an attractive option, as these vendors are likely to have more robust and granular mitigation controls and the capability to use other parts of their solution stack to help isolate and stop attacks.

The other approach comes from vendors that have built specific, identity-focused tools from the ground up, designed to integrate broadly with existing technology stacks. These tools pull telemetry from the existing stacks into a dedicated ITDR engine and use that to highlight and prioritize risk and potentially enforce isolation and mitigation. The flexibility and breadth of coverage these tools offer can make them attractive to users with broader and more complex environments that want to add identity security without changing other elements of their current investment.

Next Steps

To learn more, take a look at GigaOm’s ITDR Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

ownCloud vulnerability with maximum 10 severity score comes under “mass” exploitation

Security researchers are tracking what they say is the “mass exploitation” of a security vulnerability that makes it possible to take full control of servers running ownCloud, a widely used open-source filesharing server app.

The vulnerability, which carries the maximum severity rating of 10, makes it possible to obtain passwords and cryptographic keys allowing administrative control of a vulnerable server by sending a simple web request to a static URL, ownCloud officials warned last week. Within four days of the November 21 disclosure, researchers at security firm Greynoise said they began observing “mass exploitation” in their honeypot servers, which masqueraded as vulnerable ownCloud servers to track attempts to exploit the vulnerability. The number of IP addresses sending the web requests has slowly risen since then. At the time this post went live on Ars, it had reached 13.

Spraying the Internet

“We’re seeing hits to the specific endpoint that exposes sensitive information, which would be considered exploitation,” Glenn Thorpe, senior director of security research & detection engineering at Greynoise, said in an interview on Mastodon. “At the moment, we’ve seen 13 IPs that are hitting our unadvertised sensors, which indicates that they are pretty much spraying it across the internet to see what hits.”
