Amazon Q: A new ChatGPT-like assistant for work

At the AWS re:Invent conference on Tuesday, Amazon Web Services (AWS) CEO Adam Selipsky announced a new product called Amazon Q, which aims to improve on Microsoft’s Copilot assistant. Amazon Q is a generative AI-powered assistant that can help users with various tasks at work, such as chatting, generating content, and taking actions.

Google Drive users say Google lost their files; Google is investigating


Did Google Drive lose some people’s data? That’s the question swirling around the Internet right now as Google announces it’s investigating “sync issues” for Google Drive for desktop. On Monday, The Register spotted a trending post on the Google Drive forums where a user claimed that months of Drive data suddenly disappeared and their files reverted to a state from May 2023. A few other users chimed in with the same issue, the worst of whom wrote: “This is going to cause me major issues if I cannot get the files back. It’s all my work for the last 1-2 years. All my business work, all my personal files. Everything, just vanished. It must be 100’s of files suddenly gone.”

Google has a post up on the Google Drive help forums more or less acknowledging the issue. The post, titled “Drive for desktop (v84.0.0.0 – 84.0.4.0) Sync Issue,” says, “We’re investigating reports of an issue impacting a limited subset of Drive for desktop users and will follow up with more updates.” Google adds an ominous list of things not to do in the meantime:

  • Do not click “Disconnect account” within Drive for desktop
  • Do not delete or move the app data folder:
    • Windows: %USERPROFILE%\AppData\Local\Google\DriveFS
    • macOS: ~/Library/Application Support/Google/DriveFS 
  • Optional: If you have room on your hard drive, we recommend making a copy of the app data folder.

Those instructions sound like they’re aimed at preserving whatever file cache might exist on your computer. The description of this as a “sync” issue doesn’t really make a ton of sense, since no matter what, the Drive web interface should show all your files and let you download them. If the problem were with uploading, you would still have your local files.
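
If you want to follow Google’s optional advice and copy the app data folder before touching anything else, a minimal sketch in Python 3 might look like the following. The default paths come from Google’s list above; the helper name, the timestamped destination folder, and the choice of the Desktop as a backup location are illustrative assumptions, not anything Google prescribes.

```python
import os
import platform
import shutil
from datetime import datetime

def backup_drivefs(dest_root: str) -> str:
    """Copy the Drive for desktop app data folder into dest_root (read-only backup)."""
    if platform.system() == "Windows":
        src = os.path.expandvars(r"%USERPROFILE%\AppData\Local\Google\DriveFS")
    else:  # macOS
        src = os.path.expanduser("~/Library/Application Support/Google/DriveFS")

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(dest_root, f"DriveFS-backup-{stamp}")
    # copytree only reads from src; the original folder is left untouched.
    # Consider quitting Drive for desktop first so files aren't changing mid-copy.
    shutil.copytree(src, dest)
    return dest

if __name__ == "__main__":
    # Assumes you have enough free disk space for a full copy, per Google's note.
    print("Backed up to:", backup_drivefs(os.path.expanduser("~/Desktop")))
```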


Baldur’s Gate 3 bug caused by game’s endless mulling of evil deeds

Conscience do cost, as a certain fictional denizen of Baltimore’s East Side once said. (credit: Larian Studios)

One of the best things about playing Baldur’s Gate 3 (BG3) is the way that it simulates the feeling of having an actual Dungeon Master overseeing your session. The second-person narration, the dice rolls, and even the willingness to say “Yes” to your quirkiest ideas all add to the impression that there’s some conscious intelligence on the other side.

But consciousness can sometimes be a curse, and a recent patch to BG3 introduced burdensome complexity into the game’s thinking. Essentially, the game was suffering from lag and slowdowns as players progressed because its decision engine couldn’t stop assessing previous instances where a party member had gotten away with theft, murder, or other nefarious deeds.
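
To illustrate why that kind of bookkeeping hurts performance over a long playthrough (this is a toy sketch, not Larian’s actual code), compare a check that rescans an ever-growing history of past misdeeds on every AI evaluation with one that only reacts to crimes as they are witnessed:

```python
from dataclasses import dataclass

@dataclass
class Misdeed:
    actor: str
    kind: str         # "theft", "vandalism", "murder", ...
    witnessed: bool

def laggy_reaction_check(history: list[Misdeed]) -> bool:
    # Scans the full list of past deeds on every evaluation, so the cost
    # grows with everything the party has ever gotten away with.
    return any(m.witnessed for m in history)

def witnessed_only_check(crime_seen_now: bool) -> bool:
    # Reacts only to a crime observed as it happens; no per-tick history scan.
    return crime_seen_now
```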

The performance issues have affected some players ever since Patch 4, released on Nov. 2 with more than 1,000 changes. One of those changes was a seemingly small-scope, situational bit: “Scrying Eyes in Moonrise Towers will now only react to theft and vandalism if they see the crime being committed.” The floating orbs in that area were, apparently, ignoring players’ best attempts at sneaking, invisibility, or other cover-ups.


Win hardware, collectibles, and more in the 2023 Ars Technica Charity Drive

Just some of the prizes you can win in this year’s charity drive sweepstakes. (credit: Kyle Orland)

It’s once again that special time of year when we give you a chance to do well by doing good. That’s right—it’s time for the 2023 edition of our annual Charity Drive!

Every year since 2007, we’ve encouraged readers to give to Penny Arcade’s Child’s Play charity, which provides toys and games to kids being treated in hospitals around the world. In recent years, we’ve added the Electronic Frontier Foundation to our charity push, aiding in their efforts to defend Internet freedom. This year, as always, we’re providing some extra incentive for those donations by offering donors a chance to win pieces of our big pile of vendor-provided swag. We can’t keep it, and we don’t want it clogging up our offices, so it’s now yours to win.

This year’s swag pile is full of high-value geek goodies. We have 40 prizes valued at over $2,500 total, including gaming hardware and accessories, collectibles, books, apparel, and more. In 2022, Ars readers raised over $31,500 for charity, contributing to a total haul of more than $465,000 since 2007. We want to raise even more this year, and we can do it if readers dig deep.


Microsoft’s ugly sweater for 2023 is Windows XP’s iconic default wallpaper

Windows XP was an actively supported Microsoft product for 13 years, including five years where it was the newest version available and another three years where it was vastly more popular than its successor. That longevity—plus Microsoft’s total domination of personal computing in the pre-iPhone, pre-Android world—helped make its default wallpaper one of the most recognizable images on the planet.

Microsoft is returning to the Bliss hill once again with this year’s entry in its now-traditional ugly retro-computing sweater series. Blue hemming at the bottom and on the sleeves evokes Windows XP’s bright-blue taskbar, and in case people don’t immediately recognize Bliss as “a computer thing,” there’s also a giant mouse pointer hovering over it.

The sweater is available in sizes from small to 3XL and costs $70 regardless of size. All sizes are currently expected to arrive sometime between December 2 and 6.


GigaOm Research Bulletin #005

Welcome to GigaOm’s research bulletin for November 2023

Hi, and welcome back!

GigaOm’s partnership with Ingram Micro clearly shows how we bring something new and unique to the industry. Read more here from Karl Connolly, Field CTO at Ingram Micro, about how our practitioner-led research and advisory approach enables partner, product and sales teams at Ingram and beyond. 

Plus, we’ve listened to your feedback on our research process, report formats and radar scoring and have implemented a raft of changes as a result. You can find specifics of scoring changes here, and you will already start to see differences in research and briefing requests. As always, we welcome any comments you may have! 

Finally, our latest podcast discusses the importance and role of observability in today’s market landscape from both an enterprise and an end-user standpoint. Give it a listen!

Research Highlights

See below for our most recent reports, blogs, and articles, and where to meet our analysts in the next few months. If you have any questions, reply directly to this email and we will respond.

Trending: Patch Management, released in September, is one of our top Radar reads right now. “Monitoring and observability are crucial IT functions that help organizations keep systems up and running and performance levels high. Knowing that a problem is likely to occur means that it can be rectified before it impacts systems,” says author Ron Williams.

We are currently taking briefings on: Primary Storage, Kubernetes Data Protection and SOAP.

Warming up are: Vector Database, CCaaS, DNS Security, App & API Security, Kubernetes Resource Management, Data Governance, Data Lake, Data Center Switching, eSignature, Threat Hunting Solutions, SASE, ASM, ITAM and CIEM.

 

Recent Reports

We’ve released 24 reports in the period since the last bulletin. 

In Analytics and AI, we have released reports on Data Observability and Data Catalogs.

For Cloud Infrastructure and Operations, we have Application Performance Management (APM), Cloud Management Platforms (CMPs) and Patch Management, and in Storage, we have covered Cloud Based Data Protection.

In the Security domain, we have released reports on Cloud Security Posture Management (CSPM), Continuous Vulnerability Management, API Security, Data Loss Prevention, Application Security Testing, Security Orchestration, Automation & Response (SOAR), Endpoint Detection & Response (EDR), Multifactor Authentication (MFA) and Ransomware Detection.

And in Networking, we have covered Network Detection & Response (NDR), DDI, Service Mesh, Network Validation and Network as a Service (NaaS).

In Software and Applications, we have reports on Digital Experience Platforms (DXPs), E-Discovery and Intelligent Document Processing (IDP).

 

Quoted in the Press

GigaOm analysts are quoted in a variety of publications. 

…and Jon was recently interviewed for an episode of ‘The State of Startups with Industry Analysts’.

 

Blogs and Articles

We’ve published several additional blogs over the past couple of months, including:

 

Where To Meet GigaOm Analysts

In the next few months you can expect to see our analysts at VMware Explore Europe, Black Hat Europe and Mobile World Congress Barcelona. Do let us know if you’d like to arrange a meeting.

Jon recently interviewed our VP of Sales, Adrian Escarcega, about his role at GigaOm. Take a listen here!

For news and updates, add [email protected] to your lists, and please get in touch with any questions. Thanks and speak soon!

Jon Collins, VP of Engagement
Claire Hale, Engagement Manager
 
P.S. Here is last month’s bulletin if you missed it.

The post GigaOm Research Bulletin #005 appeared first on Gigaom.

What’s the Score?

Why have we been making changes to the GigaOm Key Criteria and Radar reports?

We are committed to a rigorous, defensible, consistent, coherent framework for assessing and evaluating enterprise technology solutions and vendors. The scoring and framework changes we’ve made are directed toward making our assessments verifiable, grounding them in agreed concepts, and ensuring that scoring is articulated, inspectable, and repeatable.

This adjustment is designed to make our evaluations more consistent and coherent, which makes it easier for vendors to participate in research and results in clearer reports for end-user subscribers.

What are the key changes to scoring?

The biggest change is to the feature and criteria scoring in the tables of GigaOm Radar reports. Scoring elements are weighted as they have been in the past, but we do so in a more consistent and standardized fashion between reports. The goal is to focus our assessment scope on the specific key features, emerging features, and business criteria identified as decision drivers by our analysts.

Scoring of these features and criteria determines the plotted distance from the center for vendors in the Radar chart. We are extending our scoring range from a four-point system (0, 1, 2, or 3) to a six-point scoring system (0 through 5). This enables us to distinguish truly exceptional products from those that are just very good. It affords us greater nuance in scoring and better informs the positioning of vendors on the Radar chart.

Determining vendor position along the arc of the Radar chart has been refined as well. Analysts previously were asked to determine where they believed solutions should be positioned on the radar—first, to determine if they should occupy the upper (Maturity) or lower (Innovation) hemisphere, then to identify position left-to-right, from Feature Play to Platform Play. Similar to how we’ve extended our feature and criteria scoring, the scheme for determining quadrant position is now more granular and grounded. Analysts must think about each aspect individually—Innovation, Maturity, Feature Play, Platform Play—and score each vendor solution’s alignment accordingly.

We have now adapted how we plot solutions along the arc in our Radar charts, ensuring that the data we’re processing is relevant to the purchase decision within the context of our reports. Our scoring focuses primarily on key differentiating features and business criteria (non-functional requirements), then, to a lesser extent, on emerging features that we expect to shape the sector going forward.
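
GigaOm doesn’t publish its exact weights, but a minimal sketch of that emphasis — key features and business criteria weighted most heavily, emerging features less so, all on the new 0-to-5 scale — could look like this. The category weights and example scores below are assumptions for illustration, not GigaOm’s actual formula.

```python
# Illustrative weights only; the 0-5 scale comes from the post, the rest is assumed.
WEIGHTS = {"key_features": 0.5, "business_criteria": 0.375, "emerging_features": 0.125}

def radial_score(scores: dict[str, list[int]]) -> float:
    """Blend per-category 0-5 scores into one 0-5 value; higher lands closer to the bullseye."""
    return sum(weight * (sum(scores[cat]) / len(scores[cat]))
               for cat, weight in WEIGHTS.items())

example = {
    "key_features": [5, 4, 4, 3],
    "business_criteria": [3, 4],
    "emerging_features": [2, 4],
}
print(radial_score(example))  # 3.6875 on the 0-5 scale
```

This radial value is the “quality” dimension of the chart; the arc position described next captures character rather than quality.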

For example, when you look at Feature Play and Platform Play, a feature-oriented solution is typically focused on going deeper, perhaps on functionality, or on specific use cases or market segments. However, this same solution could also have very strong platform aspects, addressing the full scope of the challenge. Rather than deciding one or the other, our system now asks you to provide an independent score for each.

Keep in mind, these aspects exist in pairs. Maturity and Innovation are one pair, and Feature and Platform Play the other. One constraint is that paired scores cannot be identical—one “side” must be higher than the other to determine a dominant score that dictates quadrant residence. The paired scores are then blended using a weighted scheme to reflect the relative balance (say, scores of 8 and 9) or imbalance (like scores of 7 and 2) of the feature and platform aspects. Strong balanced scores for both feature and platform aspects will yield plots that tend toward the y-axis, signifying an ideal balance between the aspects.

But you have to make a choice, right?

Yes, paired scores must be unique; the analysts must choose a winner. It’s tough, but in those situations, they will be giving scores like 6 and 5 or 8 and 7, which will typically land them close to the middle between the two aspects. You can’t have a tie, and you can’t be right on the line.
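
The post doesn’t spell out the blending formula, but a toy sketch of the idea — paired, never-equal scores combined so that strong, balanced pairs land near the y-axis while lopsided pairs swing toward one aspect — might look like this. The 0–10 scale and the weighting below are assumptions inferred from the example scores mentioned above, not GigaOm’s actual math.

```python
def arc_position(feature: int, platform: int) -> float:
    """
    Toy blend of a Feature Play vs. Platform Play score pair into a horizontal
    offset: negative leans Feature Play, positive leans Platform Play, and
    values near 0 plot close to the y-axis. Assumes a 0-10 scale.
    """
    if feature == platform:
        raise ValueError("Paired scores cannot be identical; one side must win.")
    imbalance = platform - feature   # direction and size of the gap
    total = feature + platform       # overall strength of the pair
    return imbalance / total         # illustrative weighting only

# Examples echoing the post: 8 and 9 plot close to the axis with a slight
# Platform lean; 7 and 2 swing strongly toward Feature Play.
print(arc_position(8, 9))   # ≈ 0.06
print(arc_position(7, 2))   # ≈ -0.56
```

The same pairing logic applies to the Maturity and Innovation pair, which determines placement in the upper or lower hemisphere.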

Is Platform Play better than Feature Play?

We talk about this misconception a lot! The word “platform” carries a lot of weight and is a loaded term. Many companies market their solutions as platforms even when they lack aspects we judge necessary for a platform. We actually considered using a term other than Platform Play but ultimately found that platform is the best expression of the aspects we are discussing. So, we’re sticking with it!

One way to get clarity around the Platform and Feature Play concepts is to think in terms of breadth and depth. A platform-focused offering will feature a breadth of functionality, use-case engagement, and customer base. A feature-focused offering, meanwhile, will provide depth in these same areas, drilling down on specific features, use cases, and customer profiles. This can help reason through the characterization process. In our assessments, we ask, “Is a vendor deepening its offering on the feature side, or are there areas it intentionally doesn’t cover and instead relies on third-party integrations?” Ultimately, think of breadth and depth as subtitles for Platform Play and Feature Play.

The challenge is helping vendors understand the concept of platform and feature and how it is applied in scoring and evaluating products in GigaOm Radar reports. These are not expressions of quality but character. Ultimately, quality is expressed by how far each plot is from the center—the closer you are to that bullseye, the better. The rest is about character.

Vendors will want to know: How can we get the best scores?

That’s super easy—participate! When you get an invite to be in our research, respond, fill out the questionnaire, and be complete about it. Set up a briefing and, in that briefing, be there to inform the analyst and not just make a marketing spiel. Get your message in early. That will enable us to give your product the full attention and assessment it needs.

We cannot force people to the table, but companies that show up will have a leg up in this process. The analysts are informed, they become familiar with the product, and that gives you the best chance to do well in these reports. Our desk research process is robust, but it relies on the quality of your external marketing and whatever information we uncover in our research. That creates a potential risk that our analysis will miss elements of your product.

The other key aspect is the fact-check process. Respect it and try to stay in scope. We see companies inserting marketing language into assessments or trying to change the rules of what we are scoring against. Those things will draw focus away from your product. If issues need to be addressed, we’ll work together to resolve them. But try to stay in scope and, again, participate, as it’s your best opportunity to appeal before publication.

Any final thoughts or plans?

We’re undergirding our scoring with structured decision tools and checklists—for example, to help analysts determine where a solution fits on the Radar—that will further drive consistency across reports. It also means that when we update a report, we can assess against the same rubric and determine what changes are needed.

Note that we aim to update Key Criteria and Radar reports based on what has changed in the market. We’re not rewriting the report from scratch every year; we’d rather put our effort into evaluating changes in the market and with vendors and their solutions. As we take things forward, we will seek more opportunities for efficiency so we can focus our attention on where innovation comes from.

The post What’s the Score? appeared first on Gigaom.

SIEM and SOAR – Will They or Won’t They?

A considerable percentage of SIEM vendors share a vision for how to help security operations centers deal with the high volume and complexity of security attacks. These vendors are integrating acquired SOAR solutions or natively developing SOAR capabilities to create a unified platform for security analysts.

A combined SIEM and SOAR solution will make up most of the SOC analyst’s daily toolset, reallocating their brainpower from repetitive analysis and response tasks to investigating only incidents of significant interest and importance.

The core of this offering therefore enables the SOC to address the biggest hindrance for analysts: volume. Instead of dealing with high-volume, low-complexity attacks, businesses can dedicate analysts to truly important attacks, such as unknown unknowns or zero-day attacks. 

We can define this combined toolset of SIEM and SOAR as “autonomous SOC solutions” because, with adequate configuration, the number of analysts will no longer be the only way a business can scale up its security operations to deal with more threats. I’ve been covering this in the upcoming Key Criteria report.

I previously wrote about the positive outlook for standalone SOAR, so I want to preface this by saying that I’m not one to make broad predictions. My role as an industry analyst is to observe vendors’ strategic decisions and their responses to customer demands, this being one such trend. We’ve got a large sample size of SIEM vendors, having identified roughly 40 solutions. Out of these, as many as 16 vendors have entered the autonomous SOC arena.

Between the 16 vendors we identified as delivering these autonomous SOC solutions and the remaining 20+ pure-play SIEM vendors, I can only classify this as a “will-they-won’t-they” situation on whether SIEM and SOAR will remain distinct or merge.

Quiet developments over noisy acquisitions

Security acquisitions make a lot of noise in the market, and SOAR acquisitions have been some of the loudest. Google acquired Siemplify, Devo acquired LogicHub, Fortinet acquired CyberSponse, Palo Alto Networks acquired Demisto, Splunk acquired Phantom (and is itself being acquired by Cisco), Sumo Logic acquired DFLabs, and Micro Focus acquired Atar Labs before itself being acquired by OpenText.

With this in mind, most observers would expect the majority of vendors delivering autonomous SOC solutions to have acquired and integrated a SOAR solution. However, if we filter out SIEM vendors who’ve acquired SOAR solutions but have not integrated them into a unified solution—the likes of Google, IBM, Fortinet, and Splunk—we quickly find that the majority of vendors featured in the Radar report for Autonomous SOC solutions have actually developed their solutions in-house.

So, in the acquire-then-integrate bucket, we have Devo, LogPoint, Sumo Logic, and OpenText.

In the developing-SOAR-in-house category, we have a threefold increase in the number of vendors, including Elastic, Exabeam, Hunters, Huntsman, LogRhythm, NetWitness, Palo Alto Networks, Securonix, Logsign, ManageEngine, Microsoft, and Rapid7.

Coexisting solutions

SIEM has so far stood the test of time, so organizations are unlikely to swap out their existing solutions unless there’s a compelling reason to do so. SIEM is also an important checkbox for many regulations. As my esteemed colleague Chris Ray points out, as long as security standards have the acronym SIEM on the requirements list, the solution and its name will remain a constant, regardless of how much it evolves from a technical point of view. SOAR also has a strong mandate for existing as a standalone solution, which we further explore here.

So, as much as we like to put things in boxes, the reality is that SIEM, SOAR, and solutions combining the two will coexist for the foreseeable future. The only force that will validate the autonomous SOC market is whether customers are willing to invest money in combined solutions, replacing or augmenting the individual parts of incumbent tools. 

However, it would be disadvantageous for vendors with a combined solution to position themselves in the same space as pure-play SIEM competitors. Simply reframing the combined offering as a ‘next-generation SIEM’ doesn’t capture the extensive difference between a SIEM and an autonomous SOC solution. So, even if SIEM remains a compliance checkbox, these vendors need to distinguish their solutions in a very crowded SIEM market.

The post SIEM and SOAR – Will They or Won’t They? appeared first on Gigaom.

Find the soul