Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
Understanding the Secure Boot certificate challenge
Secure Boot is a foundational security feature that validates the integrity of your device's boot process, ensuring only trusted software can run...
Apr 27, 2026
Why MCP, A2A, OAuth, and SPIFFE are the backbone of agentic identity.
Apr 24, 2026
Where AV helps—and what it may not cover
Antivirus engines and traditional code scanners are highly effective at identifying known or suspicious executable content, such as binaries, scripts, or e...
Apr 24, 2026
Use Microsoft Entra Agent ID to manage every agent, govern its lifecycle, and enforce access controls as agents scale.
Apr 24, 2026
Recent Discussions
Auto Labeling Policy Delay for Old Files (Existing Files)
Hi everyone, we are observing a difference in auto-labeling policy behaviour in Purview for SharePoint. An auto-labeling policy has been enabled and scoped to SharePoint with a metadata-based rule (document creation date or document modification date). The scoped SharePoint site contains only 7 unlabeled files that were uploaded before the policy was turned on. The policy is working: any new file placed after enabling the policy gets labelled within about 5 minutes, but the existing files remain unlabelled. It seems new files are evaluated in near real time while existing files rely on asynchronous processing. Can anyone explain why existing files take longer to be processed even when there are only a few of them, or share whether you have faced similar behaviour? This is a test scenario; we plan to enable the same policy across more than 50 sites containing millions of unlabeled files, and we want to be confident that, even though it takes time, all existing unlabeled files will eventually be labelled. This is very crucial, so please help us understand this behaviour. Regards, BanuMuralie
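A quick configuration check is worth doing before drawing conclusions about backfill latency: confirm the policy is actually enabled in enforcement mode and scoped the way you expect. A minimal sketch using Security & Compliance PowerShell; the policy name is a hypothetical placeholder, and exact output properties can vary by ExchangeOnlineManagement module version.

# Hedged sketch: verify the auto-labeling policy state and scope.
# "SPO-Metadata-AutoLabel" is a placeholder name; substitute your own.
Connect-IPPSSession
Get-AutoSensitivityLabelPolicy -Identity "SPO-Metadata-AutoLabel" |
    Format-List Name, Mode, Workload

A Mode of Enable (rather than a simulation/test mode) is what allows existing files to be labeled at all, so it is the first thing to rule out.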
Discovery search: Sites not available when adding a Group data source
Hi, I am attempting to use Purview eDiscovery to search a SharePoint site associated with a Group. When adding the data source, I search for the URL of the SharePoint site, and the Group is returned. However, after selecting the group and clicking Manage, it indicates Sites are "Not Available". What causes this, and how do I fix it? My user is a member of the "eDiscovery Manager" role group as an "eDiscovery Administrator", and is licensed with "Microsoft 365 E3" and "Microsoft Purview Suite". It is also an Owner of the target Group / SP site.
Label usage rights not working correctly in Office web view
Hello everyone, I’ve created three simple Purview sensitivity labels (PUBLIC, RESTRICTED, CONFIDENTIAL). RESTRICTED and CONFIDENTIAL use usage rights, so every internal employee and group is the owner of the document; external users are not permitted to open the document. Unfortunately, I can’t export or print my labeled documents in the Office web view. In Office Desktop, however, the labels work correctly. This issue occurs for various internal users. Tested with Word, Excel, and PowerPoint. Co-authoring is enabled, user access never expires, and offline access is permitted. Do you have any ideas?
Your Sentinel AMA Logs & Queries Are Public by Default — AMPLS Architectures to Fix That
When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly — but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network. For many organisations, that trade-off is fine. For others — regulated industries, healthcare, financial services, critical infrastructure — it is the exact problem they need to solve. Azure Monitor Private Link Scope (AMPLS) is how you solve it.

What AMPLS Actually Does

AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:
- Where logs are allowed to go (ingestion mode: Open or PrivateOnly)
- Where analysts are allowed to query from (query mode: Open or PrivateOnly)

Change those two settings and you fundamentally change the security posture — not as a policy recommendation, but as a hard platform enforcement. Set ingestion to PrivateOnly and the public endpoint stops working. It does not fall back gracefully. It returns an error. That is the point. It is not a firewall rule someone can bypass or a policy someone can override. Control is baked in at the infrastructure level.

Three Patterns — One Spectrum

There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. These three patterns cover the full range.

Architecture 1 — Open / Public (Basic)

No AMPLS. Logs travel to public Data Collection Endpoints over the internet. The workspace is open to queries from anywhere. This is the default — operational in minutes with zero network setup. Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side/API/Graph pulls and are unaffected by AMPLS. Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-prem machines via the public network.
Simplicity: 9/10 | Security: 6/10
Good for: Dev environments, teams getting started, low-sensitivity workloads

Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for most)

AMPLS is in place. Ingestion is locked to PrivateOnly — logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline. Query access stays open, so analysts can work from anywhere without needing a VPN or jumpbox to reach the Sentinel portal — the investigation workflow stays flexible, but the log ingestion path is fully ring-fenced. You can also split ingestion mode per DCE if you need some sources public and some private. This is the architecture most organisations land on as their steady state.
Simplicity: 6/10 | Security: 8/10
Good for: Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access

Architecture 3 — Fully Private (Maximum Control)

Infrastructure is essentially identical to Architecture 2 — AMPLS, Private Endpoints, Private DNS zones, VPN or dedicated circuit, Azure Arc for on-premises machines. The single difference: query mode is also set to PrivateOnly. Analysts can only reach Sentinel from inside the private network; a VPN or jumpbox is required to access the portal.
Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary. This is the right choice when your organisation needs to demonstrate — not just claim — that security data never moves outside a defined network perimeter.
Simplicity: 2/10 | Security: 10/10
Good for: Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)

Quick Reference — Which Pattern Fits?
- Getting started / low-sensitivity workloads → Arch 1: no network setup, public endpoints accepted
- Private log ingestion, analysts work anywhere → Arch 2: AMPLS PrivateOnly ingestion, query mode open
- Both ingestion and queries must be fully private → Arch 3: same as Arch 2, plus query mode set to PrivateOnly

One thing all three share: Microsoft 365, Entra ID, and Defender connectors work in every pattern — they are server-side pulls by Sentinel and are not affected by your network posture. Please feel free to reach out if you have any questions regarding the information provided.
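For completeness, the two access-mode switches discussed above live in a single property bag on the AMPLS resource, so locking a scope down is a small deployment. A rough Azure PowerShell sketch of Architecture 3; the resource group and scope names are placeholders, and the API version shown should be checked against what your subscription supports (for Architecture 2, leave queryAccessMode at 'Open').

# Hedged sketch: create an AMPLS with both modes locked down (Architecture 3).
# Requires Az.Resources and an existing resource group; names are placeholders.
New-AzResource `
    -ResourceGroupName 'rg-security-network' `
    -ResourceType 'microsoft.insights/privateLinkScopes' `
    -ResourceName 'ampls-sentinel' `
    -Location 'global' `
    -ApiVersion '2021-07-01-preview' `
    -Properties @{
        accessModeSettings = @{
            ingestionAccessMode = 'PrivateOnly'   # public DCE ingestion now fails hard
            queryAccessMode     = 'PrivateOnly'   # analysts must come in over Private Link
        }
    } -Force

The workspace and Data Collection Endpoints still need to be attached to the scope as scoped resources, with Private Endpoints and Private DNS zones wired up, before the PrivateOnly modes are usable.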
Defender XDR - how to grant "undo action" Permissions on File Quarantine?
Dear Defender XDR community, I have a question regarding the permissions to "undo action" on a file quarantine action in the Action center. We have six locations, and each location manages its own devices. We have created six device groups so that accounts from Location 1 can only manage and see devices from Location 1. Then we created a custom "Microsoft Defender XDR" role with the following permissions (screenshot omitted). This way the admins from Location 1 can manage all Defender for Endpoint devices, incidents, recommendations, etc. without touching devices they aren't managing... very cool, actually! BUT: if a file gets quarantined, it might need to be released again because of a false positive, etc. I can do that as a global admin, but not as an admin with granularly assigned rights - the option just isn't there. I don't want to give these admins a more privileged role because of - you know - least privilege. But I don't have the option to allow "undo action" on file quarantine events, despite that being a critical feature for them to manage their own devices, so they don't have to rely on me to de-quarantine files I don't care about. Any thoughts on how to give users this permission?
Automated Attack Disruption Testing
In the past I vaguely remember seeing attack simulation walkthroughs for MDE, and there is still a link in the MDE onboarding to explore simulations and tutorials, but that now just takes me to the XDR homepage. There are cases where we're talking to customers about the capability of Defender XDR and want to showcase it in a safe way, without endangering demo devices. With the Automated Attack Disruption announcements at Ignite 2024, I'd like to be able to showcase this particularly in the area of ransomware protection, similar to the case study "protecting against ransomware when others couldn't" from the Ignite AI-driven Ransomware Protection session. Does anyone have an updated link to the attack simulation walkthroughs that were available, and also any similar walkthroughs for Automated Attack Disruption?
Sentinel RBAC in the Unified portal: who has activated Unified RBAC, and how did it go?
Following the RSAC 2026 announcements last month, I have been working through the full permission picture for the Unified portal and wanted to open a discussion here, given how much has shifted in a short period.

A quick framing of where things stand. The baseline is still that Azure RBAC carries across for Sentinel SIEM access when you onboard, no changes required. But there are now two significant additions in public preview: Unified RBAC for Sentinel SIEM itself (extending the Defender Unified RBAC model to cover Sentinel directly), and a new Defender-native GDAP model for non-CSP organisations managing delegated access across tenants.

The GDAP piece in particular is worth discussing carefully, because I want to be precise about what has and has not changed. The existing limitation from Microsoft's onboarding documentation, that GDAP with Azure Lighthouse is not supported for Sentinel data in the Defender portal, has not changed. What is new is a separate, Defender-portal-native GDAP mechanism announced at RSAC, which is a different thing. These are not the same capability. If you were using Entra B2B as the interim path based on earlier guidance, that guidance was correct and that path remains the generally available option today.

A few things I would genuinely like to hear from practitioners:
- For those who have activated Unified RBAC for a Sentinel workspace in the Defender portal: what did the migration from Azure RBAC roles look like in practice? Did the import function bring roles across cleanly, or did you find gaps, particularly around custom roles?
- For environments using Playbook Operator, Automation Contributor, or Workbook Contributor role assignments: how are you handling the fact that those three roles are not yet in Unified RBAC and still require Azure portal management? Is the dual-management posture creating operational friction?
- For MSSPs evaluating the new Defender-native GDAP model against their existing Entra B2B setup: what factors are driving the decision either way at your scale?

Writing this up as Part 3 of the migration series, and the community experience here is directly useful for making sure the practitioner angle is grounded.
Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction: Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier.

The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands — analytics for high-value detections and investigations, and data lake for large or archival types — allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting. The flow diagram below depicts the new Sentinel pricing model.

Now let's walk through the new pricing model with three scenarios:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution: To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes:
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent: Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion — it does not apply to Analytics tier retention costs or to any Data Lake tier–related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow: Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings (cost per month):
- Scenario 1, 10 GB/day in the Analytics tier: $1,520.40
- Scenario 2, 10 GB/day directly into the Data Lake tier: $202.20 (without compute); $257.20 (with sample compute price)

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs — such as audit, compliance, or long-term retention data — can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios — active investigation, archival storage, compliance retention, or large-scale telemetry ingestion — to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
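The savings arithmetic above can be sanity-checked in a few lines. This sketch only re-computes the subtractions from the monthly figures quoted in the post; it is not a pricing calculator, and the underlying rates are taken on faith from the tables above.

# Hedged sketch: reproduce the Scenario 1 vs Scenario 2 savings figures.
$analyticsPerMonth   = 1520.40   # 10 GB/day in the Analytics tier
$dataLakePerMonth    = 202.20    # 10 GB/day directly into the Data Lake tier
$dataLakeWithCompute = 257.20    # Data Lake plus the sample compute price
'Savings, no compute:   {0:N2}' -f ($analyticsPerMonth - $dataLakePerMonth)
'Savings, with compute: {0:N2}' -f ($analyticsPerMonth - $dataLakeWithCompute)
# Outputs 1,318.20 and 1,263.20, matching the Realized Savings list above.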
Short survey: Feedback on Sensitivity Label Suggestions in Microsoft 365 Apps
Hi everyone, I’m looking to gather feedback on user experiences with Sensitivity Label suggestions in Microsoft 365 apps. This short survey aims to understand how label recommendations are working in practice and where improvements may be needed. Your responses will help identify common challenges and opportunities to make the label recommendation process more accurate, useful, and seamless for users. Survey link: Experience with Recommended Sensitivity Labels in Microsoft 365 – Fill out form. The survey takes around 3 minutes to complete. Your feedback will directly help us better understand real-world experiences with label suggestions. Thank you very much for taking the time to contribute.
Security Copilot Agents in Defender XDR: where things actually stand
With RSAC 2026 behind us and the E5 inclusion now rolling out between April 20 and June 30, anyone planning SOC workflows or sitting on a capacity budget needs a clear picture of what is GA, what is preview, and what was just announced. The marketing pages tend to blur those lines. This is my sober look at the current state, with the operational details that matter for adoption decisions.

What is actually shipping right now

The Phishing Triage Agent is GA. It only handles user-reported phish through Defender for Office 365 P2, but for most SOCs that is a meaningful chunk of the L1 queue. Verdicts come with a natural-language rationale rather than just a label, which is the part that determines whether analysts will trust it. The agent learns from analyst confirmations and overrides, so the feedback loop matters more than the initial setup.

There is a setup detail that is easy to miss: the agent will not classify alerts that have already been suppressed by alert tuning. The built-in rule "Auto-Resolve - Email reported by user as malware or phish" needs to be off, and any custom tuning rules that touch this alert type need review. If you skip this, the agent runs on an empty queue and you wonder why nothing is happening.

The Threat Intelligence Briefing Agent is also GA. It produces tenant-tailored intel briefings on a regular cadence. Useful, but lower operational impact than the triage agents.

Copilot Chat in Defender went GA with the April 2026 update. Conversational Q&A inside the portal, grounded in your incident and entity data. This is the lowest-risk way to get value out of Security Copilot and probably where most teams should start.

Public preview, worth watching

The Dynamic Threat Detection Agent is the most technically interesting one. It runs continuously in the Defender backend, correlates across Defender and Sentinel telemetry, generates its own hypotheses, and emits a dynamic alert when the evidence converges. Detection source on the alert is Security Copilot. Each alert includes the structured fields (severity, MITRE techniques, remediation) plus a narrative explaining the reasoning.

For EU tenants the residency point is worth confirming with whoever owns data protection in your org: the service runs region-local, so customer data and required telemetry stay inside the designated geographic boundary. During public preview it is enabled by default for eligible customers and is free. At GA, currently targeted for late 2026, it transitions to the SCU consumption model and can be disabled.

The Threat Hunting Agent is also in public preview. Natural language to KQL with guided hunting. Lower stakes, but useful for teams without deep KQL expertise on hand.

Announced at RSAC, still preview

Two agents got the headlines in March:
- The Security Alert Triage Agent extends the agentic triage approach beyond phishing into identity and cloud alerts. The longer-term direction is consolidating phishing, identity, and cloud triage under a single agent. Rollout is from April 2026, in preview.
- The Security Analyst Agent is the multi-step investigation agent. Deeper context across Defender and Sentinel, prioritised findings, transparent reasoning trace. Preview since March 26.

Both look promising on paper, but Microsoft's history of preview features that take a long time to mature is well-documented. I would not plan production workflows around either of them yet.

What you actually get with the E5 inclusion

This is the licensing change most people are dealing with right now.
Security Copilot has been part of the E5 product terms since January 1, 2026. Tenant rollout is phased between April 20 and June 30, 2026, with a 7-day notification before activation.

The numbers (a quick sanity check of this scaling rule appears after the adoption list below):
- 400 SCUs per month for every 1,000 paid user licenses
- Capped at 10,000 SCUs per month, which you hit at around 25,000 seats
- Linear scaling below that, so a 3,000-seat tenant gets 1,200 SCUs per month
- No rollover; the pool resets monthly

What is included: chat, promptbooks, agentic scenarios across Defender, Entra, Intune, Purview, and the standalone portal. Agent Builder and the Graph APIs are in. If you also run Sentinel, the included SCUs apply to Security Copilot scenarios there.

What is not included: Sentinel data lake compute and storage. Those still run through Azure on the regular meters. Beyond the included pool you pay 6 USD per SCU pay-as-you-go, with 30 days notice before that mode kicks in.

Practical things worth knowing before activation

A few details that are easy to miss in the docs:
- Under System > Settings > Copilot in Defender > Preferences, switch from Auto-generate to Generate on demand. Auto-generate will burn SCUs on incidents nobody is going to look at. Generate on demand gives you direct control.
- In the Security Copilot portal workspace settings, check the data storage location and the data sharing toggle. Data sharing is on by default, which means Microsoft uses interaction data for product improvement. If your compliance position does not allow that, change it before agents start running. Changing it requires the Capacity Contributor role.
- Agent runs are not equivalent to the same number of analyst chat prompts. A triage agent processing fifty alerts in one run consumes meaningfully more SCUs than fifty manual prompts on the same data. If you have a high-volume phishing pipeline, model that out before you flip the switch broadly. The usage dashboard in the Security Copilot portal breaks down consumption by day, user, and scenario.
- Output quality depends on telemetry quality. Flaky connectors, gaps in log sources, or a high baseline of misconfigured alerts will produce verdicts that match. Connector health monitoring (the SentinelHealth table in Advanced Hunting is a sensible starting point) is a precondition.
- The agents only improve if analysts feed the override loop. If your team treats the verdicts as background noise rather than confirming or correcting them, the feedback signal is lost and calibration stays where it shipped. That is a process problem, not a product problem, but it determines whether any of this is worth the SCUs.

A reasonable adoption order

A rough sequence that minimises capacity surprises:
1. Copilot Chat in Defender first. Lowest risk, immediate value through natural language Q&A in the investigation context.
2. Phishing Triage Agent on a controlled subset, with a review cadence in place. Check the built-in tuning rules first.
3. Watch the SCU dashboard for the first month before adding anything else.
4. Let the Dynamic Threat Detection Agent run while it is in public preview, since it is default-on and free anyway. Compare its alerts against existing Sentinel detections.
5. Security Alert Triage Agent for identity and cloud once the phishing baseline is stable.
6. Establish a monthly review covering agent decisions, false-positive rate, SCU cost, and MTTD/MTTR trends.
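Since the inclusion scales linearly with seats up to the cap, the capacity-planning math is easy to script. A tiny sketch of the scaling rule exactly as stated in the numbers above; not an official calculator, and it ignores any future changes to the product terms.

# Hedged sketch: included SCUs = 400 per 1,000 paid seats, capped at 10,000.
function Get-IncludedScu {
    param([int]$PaidSeats)
    [math]::Min(($PaidSeats / 1000) * 400, 10000)
}
Get-IncludedScu -PaidSeats 3000    # 1200, the example from the numbers above
Get-IncludedScu -PaidSeats 25000   # 10000, where the cap kicks in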
Technically, agentic triage is moving past phishing into identity and cloud, and the Dynamic Threat Detection Agent represents a genuine attempt at the false-negative problem rather than just another rule engine. On the licensing side, the E5 inclusion removes the biggest barrier to adoption that previously existed. The risk is enabling everything at once. Agents that nobody reviews are agents that consume capacity without delivering value, and the SCU dashboard is the only thing that will tell you that is happening. One agent, one use case, a 30-day baseline, then the next one. The order matters more than the speed.
'Registering user becomes local admin on Joined Devices' - WHAT
Stumbled on a tenant with 'JOIN' available for all users. Haven't worked with this much; most tenants I see only have registration. But then I noticed the horrifying 'Registering user is added as local administrator on the device during Microsoft Entra join' option was ALSO set to ALL. This is a tenant we just took on, but I've never seen that control before. This is terrifying, considering that, AFAIK, there is no real way for a registering user to know whether they're registering or joining.

Beneath it is an option to 'Manage Additional local administrators on all Microsoft Entra joined devices', which leads to the Role page for Device Administrators, which is empty. Under Description, this describes what appears to be the same thing mentioned in the previous control: 'Users with this role become local machine administrators on all Windows 10 devices that are joined to Microsoft Entra'. But no one is assigned this role.

Conveniently, on my own tenant, I happened to let someone JOIN yesterday. We have this limited to 2 (now 3) people; most just register. But this user joined, and the 'Joining user becomes local admin' option was set to ALL. Yet I can't validate that the user ever became a local admin. They don't have the role, their device shows as joined, but there are no additional roles. The audit logs don't look weird. They're not in that 'Device Administrators' group, which describes itself as 'Users with this role become local machine administrators on all Windows 10 devices that are joined to Microsoft Entra'.

Thoughts? Freaking out, honestly. We have a mix of DC and cloud users. I've inherited them all, and had the understanding that join was essentially registration but with org ownership. I've tried to get some input from Copilot, but he has basically waffled between 'No, this setting is just badly named', 'no, actually it's this other setting', and 'no, you know what, it all makes sense somehow'.

1. Does that option actually make the joining user a local admin? Is that really the default setting?
2. Can you validate this ANYWHERE in Entra? Or does it just disappear?
3. What is that Device Admin group? A separate group, independent of these two settings, that gives local admin?
4. Is there a Graph endpoint that can be used to set this?

Thanks
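On question 4: the tenant-wide device registration policy is exposed through Microsoft Graph, which at least gives you a scriptable way to confirm what the portal is showing. A read-only sketch with the Microsoft Graph PowerShell SDK; treat the exact shape of the response (the Entra join and local-admin settings) as something to inspect rather than assume, and check the beta endpoint if v1.0 does not return the fields you need.

# Hedged sketch: read the device registration policy, which carries the
# Entra join configuration including the local administrator setting.
Connect-MgGraph -Scopes 'Policy.Read.All'
$policy = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/policies/deviceRegistrationPolicy'
$policy | ConvertTo-Json -Depth 10   # inspect the azureADJoin / local admin block

Writing the policy back goes against the same endpoint, but given the blast radius of this particular setting, reading and confirming first seems like the prudent move.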
Purview: how do I filter “Data products” results by glossary terms?
Hello, I am testing Microsoft Purview (Unified Catalog) with data products to which I have associated glossary terms. The glossary terms are published and visible in the Discovery → Business glossary tab. The data products are also published and can be found via search. However, I do not see an option (or it returns no results) to filter data product search results by glossary terms, unlike other available filters (e.g. Owner, Product type). Is filtering data products by glossary terms supported in the Discovery tab? If so, are there any prerequisites or particular conditions (e.g. glossary type, indexing/delay, association at the data product level vs. assets, etc.)?
MFA Options for Employees without Phones
Hello everybody, we're currently trying to implement MFA in our company, but only approximately 1 in 10 of our employees has a work phone, and employees are not allowed to use their personal phones. Since we also recently introduced Intune, the idea was to just use Windows Hello for Business, but when trying to provision it, we realized that you need to have MFA active for an account to even be able to activate it, which kind of defeats the purpose. So my question is: is there some way to circumvent the MFA requirement for WHfB? Or what other options do we realistically have? Thanks in advance!
Unified Catalog Self-serve analytics integration
I'm hoping someone has gone through the process of setting up Self-serve analytics in the Unified Catalog settings to push the Unified Catalog information down to a Fabric lakehouse. I created a workspace, then created a lakehouse in this workspace, and created a folder under the Files section in the lakehouse. I used the MSI that is shown in Purview when you configure the storage for the connection and granted it Contributor access to the workspace. I then went into Purview settings for Unified Catalog and, under solution integrations, set up Fabric storage and provided the URL to the file folder I set up on the lakehouse. I tested the connection and it tested successfully. When I set up the scheduler to run, I received an error (the blacked-out value in the screenshot is the Workspace ID). I'm trying to understand what I'm missing; I'm assuming write permissions are missing somewhere, but I'm not sure. Any assistance is appreciated.
Microsoft Purview PowerShell: Interactive Sign-In Basics + Fixing Common Connect-IPPSSession Errors
If you’re new to Microsoft Purview PowerShell and your interactive sign-in fails when you run Connect-IPPSSession, you’re not alone. In this post, I’ll walk through the quick setup (module install + connection) and then cover practical fixes for a common authentication failure: “A window handle must be configured” (WAM / MSAL window handle error). Once connected, you can run Purview-related cmdlets for tasks like working with sensitivity labels, DLP policies, eDiscovery, and other compliance operations (depending on your permissions).

Step 1: Install the Exchange Online PowerShell module

Install-Module ExchangeOnlineManagement
Import-Module ExchangeOnlineManagement

Step 2: Connect to Microsoft Purview (Security & Compliance) PowerShell

For interactive sign-in, you can start with the standard connection pattern below (replace the placeholder with your User Principal Name).

Connect-IPPSSession -UserPrincipalName <yourUPN>

Common issue: interactive sign-in fails with a WAM “window handle” error

The ExchangeOnlineManagement module uses modern authentication. In some hosts/environments, the sign-in UI can’t attach to a parent window, so token acquisition fails and you may see the error below. This is commonly associated with WAM (Web Account Manager) / MSAL interactive sign-in.

Error Acquiring Token: A window handle must be configured. See https://aka.ms/msal-net-wam#parent-window-handles
A window handle must be configured. See https://aka.ms/msal-net-wam#parent-window-handles
At C:\Program Files\WindowsPowerShell\Modules\ExchangeOnlineManagement\3.9.2\netFramework\ExchangeOnlineManagement.psm1:591 char:21
+ throw $_.Exception.InnerException;
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (:) [], MsalClientException
    + FullyQualifiedErrorId : A window handle must be configured. See https://aka.ms/msal-net-wam#parent-window-handles

You’ll often hit this on secured devices, PowerShell ISE, or hardened corporate images. Below are two solutions to bypass this error. Start with the recommended option first.

1. Recommended workaround: use Get-Credential without disabling WAM

This approach avoids the WAM-based interactive prompt. You’ll be asked for credentials via a standard PowerShell credential dialog, and the module will complete modern authentication.

$cred = Get-Credential
Connect-IPPSSession -Credential $cred

A credential prompt appears: enter your username and password. After authentication, you should be connected to the Security & Compliance (Microsoft Purview) PowerShell session. As a quick validation, try a lightweight cmdlet such as Get-Label or Get-DlpCompliancePolicy (availability depends on permissions). If this works in your environment, it’s a simple way to proceed without changing system-wide WAM behavior.

2. Alternative workaround: disable WAM for the session (use with caution)

If the interactive UI is failing, you can try disabling WAM. Newer versions of the ExchangeOnlineManagement module support a -DisableWAM switch on the connection cmdlets, which bypasses the WAM broker and can avoid the “window handle” failure.

Connect-IPPSSession -UserPrincipalName <yourUPN> -DisableWAM

If you can’t use -DisableWAM, or it is not working as expected (or you’re troubleshooting a specific host issue), some admins set an environment variable to disable WAM for MSAL using the commands below. Treat this as a temporary troubleshooting step and follow your organization’s security guidance.

$env:MSAL_DISABLE_WAM = "1"
setx MSAL_DISABLE_WAM 1

Important warning!
Changing authentication/broker behavior can have security and supportability implications. Use this only for troubleshooting and revert when you’re done, using the following commands.

$env:MSAL_DISABLE_WAM = "0"
setx MSAL_DISABLE_WAM 0

Quick summary

If you’re scripting for Microsoft Purview and interactive sign-in fails due to the WAM “window handle” error, try the sequence below.

Install-Module ExchangeOnlineManagement
Import-Module ExchangeOnlineManagement
$cred = Get-Credential
Connect-IPPSSession -Credential $cred

Hope this helps! If you’ve hit this in a specific host (PowerShell ISE vs Windows PowerShell vs PowerShell 7, RDP/jump box, etc.), share what worked for you in the comments. Thanks for reading. Happy Scripting!

Reference: Connect to Security & Compliance PowerShell | Microsoft Learn
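As a follow-up to the validation tip above, a minimal post-connection smoke test; which cmdlets actually return data depends on your role assignments, so swap in Get-DlpCompliancePolicy if labels are out of scope for you.

# Hedged sketch: confirm the session is live by listing a few labels.
Get-Label | Select-Object -First 5 -Property DisplayName, Priority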
Purview DLP Behaviours in Outlook Desktop
We are currently testing Microsoft Purview DLP policies for user awareness, where sensitive information shared externally triggers a policy tip, with override allowed (justification options enabled) and no blocking action configured. We are observing the following behaviours in Outlook Desktop:

1. Inconsistent policy tip display (across Outlook Desktop Windows clients) – for some users, the policy tip renders correctly, while for others it appears with duplicated/stacked lines of text. This is occurring across users with similar configurations.
2. Override without justification – users are able to click “Send Anyway/Confirm and send” without selecting any justification option (e.g. business justification, manager approval, etc.), which bypasses the intended control.

(Screenshots: New Outlook, Classic Outlook)

This has been observed on Outlook Desktop (Microsoft 365 Apps), including:
- Version 2602 (Build 19725.20170 Click-to-Run)
- Version 2602 (Build 16.0.19725.20126 MSO)

Has anyone experienced similar behaviour with DLP policy tips or override enforcement in Outlook Desktop? Keen to understand if this is a known issue or if there are any recommended fixes or workarounds.
Enable per‑user language selection for phishing simulation emails and landing pages
We use Attack Simulation Training to deliver phishing simulations to a global, multilingual user base. While Microsoft Defender supports multi‑language content, phishing simulation emails and landing pages are currently delivered in a single selected language per campaign. We are requesting a feature that allows phishing simulation emails and associated landing pages (including credential‑harvest pages) to automatically render in each user’s preferred language, based on:
- Outlook mailbox language settings, and/or
- Microsoft Entra ID user language preferences

This capability would:
- Improve realism and accuracy of phishing simulations
- Ensure users experience simulations in the same language they normally work in
- Improve behavioral measurement in global organizations
- Reduce the need to create and manage multiple parallel simulations by language

Providing consistent, per‑user language alignment across simulation emails, landing pages, and follow‑up training would significantly enhance the effectiveness of Attack Simulation Training for large, multilingual enterprises.
Events
Purview Lightning Talks
Join the Microsoft Security Community for Purview Lightning Talks: quick technical sessions delivered by the community, for the community.
You’ll pick up practical Pu...
Thursday, Apr 30, 2026, 08:00 AM PDT (Online)