Geo-Replication is Here! Now generally available for Event Hubs Premium & Dedicated
Today, we are thrilled to announce the General Availability of the Geo-replication feature for Azure Event Hubs, now available in both Premium and Dedicated tiers. This milestone marks a significant enhancement in our service, providing our customers with robust business continuity and disaster recovery capabilities – ensuring high availability for their mission-critical applications.

The Geo-replication feature allows you to replicate your Event Hubs data across multiple regions, either synchronously or asynchronously, ensuring that your data remains accessible in the event of maintenance activities, regional degradation, or a regional outage. With Geo-replication, you can seamlessly promote a secondary region to primary, minimizing downtime and ensuring business continuity.

[Figure: Before failover (promotion of secondary to primary)]
[Figure: After failover (promotion of secondary to primary)]

With general availability, we are excited to announce that the Geo-replication feature now supports all the features that are generally available in the service today. This includes private networking, customer-managed key encryption, Event Hubs Capture, and many more. These enhancements ensure that you can leverage the full capabilities of Event Hubs while benefiting from the added reliability of Geo-replication.

We have also increased visibility into the health and metrics of your replicas. This means you can now monitor the status of your replicas more effectively and know exactly when it is appropriate to promote your secondary to primary. This added visibility ensures that you can make informed decisions and maintain the high availability of your applications.

Since the announcement of public preview, we’ve had several customers try out the Geo-replication feature and appreciate the enhanced reliability and peace of mind that comes with having a robust disaster recovery solution in place.
Learn more

Learn more about geo-replication concepts and the pricing model, and try out this quickstart to learn how to set up geo-replication for your premium and dedicated tier namespaces. We encourage our customers to try out the Geo-replication feature and experience the benefits of turnkey business continuity and disaster recovery firsthand. Your feedback is invaluable to us, and we look forward to hearing about your experiences.

Announcing the General Availability of New Availability Zone Features for Azure App Service
What are Availability Zones?

Availability Zones, or zone redundancy, refers to the deployment of applications across multiple availability zones within an Azure region. Each availability zone consists of one or more data centers with independent power, cooling, and networking. By leveraging zone redundancy, you can protect your applications and data from data center failures, ensuring uninterrupted service.

Key Updates

- The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two, while still maintaining a 99.99% SLA.
- Many existing App Service plans with two or more instances will automatically support Availability Zones without additional setup.
- The zone redundant setting for App Service plans and App Service Environment v3 is now mutable throughout the life of the resources.
- Enhanced visibility into Availability Zone information, including physical zone placement and zone counts, is now provided.
- For App Service Environment v3, the minimum instance fee for enabling Availability Zones has been removed, aligning the pricing model with the multi-tenant App Service offering.

The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two. You can now enjoy the benefits of Availability Zones with just two instances, since we continue to uphold a 99.99% SLA even with the two-instance configuration. Many existing App Service plans with two or more instances will automatically support Availability Zones without necessitating additional setup. Over the past few years, efforts have been made to ensure that the App Service footprint supports Availability Zones wherever possible, and we’ve made significant gains in doing so. Therefore, many existing customers can enable Availability Zones on their current deployments without needing to redeploy.
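For readers who want to check whether an existing plan is on a zone-enabled footprint and toggle the setting from a script, here is a hedged Azure CLI sketch. The resource group and plan names are placeholders, and the generic `az resource` commands are used with raw ARM property names (`maximumNumberOfZones`, `zoneRedundant`) assumed from the properties described in this post; check the linked documentation for the exact, supported syntax.

```shell
# Placeholder names; substitute your own resource group and plan.
RG=my-rg
PLAN=my-plan

# Inspect zone support on the current footprint. A value greater than 1
# means the plan can be made zone redundant (assumed ARM property name).
az resource show \
  --resource-group "$RG" --name "$PLAN" \
  --resource-type "Microsoft.Web/serverfarms" \
  --query "properties.maximumNumberOfZones"

# Toggle zone redundancy on; two or more instances are required.
az resource update \
  --resource-group "$RG" --name "$PLAN" \
  --resource-type "Microsoft.Web/serverfarms" \
  --set properties.zoneRedundant=true sku.capacity=2
```

Because the setting is now mutable, the same `az resource update` call with `properties.zoneRedundant=false` turns zone redundancy back off without redeploying.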
Along with supporting the 2-instance Availability Zone configuration, we have enabled Availability Zones on the App Service footprint in regions where only two zones may be available. Previously, enabling Availability Zones required a region to have three zones with sufficient capacity. To account for growing demand, we now support Availability Zone deployments in regions with just two zones, which allows us to provide Availability Zone features across more regions. We uphold the 99.99% SLA even with the 2-zone configuration.

Additionally, we are pleased to announce that the zone redundant setting (zoneRedundant property) for App Service plans and App Service Environment v3 is now mutable throughout the life of these resources. This enhancement allows customers on Premium V2, Premium V3, or Isolated V2 plans to toggle zone redundancy on or off as required. With this capability, you can reduce costs and scale to a single instance when multiple instances are not necessary, and conversely scale out and enable zone redundancy at any time to meet your requirements. This ability has been requested for a while now, and we are excited to finally make it available.

For App Service Environment v3 users, this also means that an individual App Service plan's zone redundancy status is now independent of other plans in your App Service Environment. You can have a mix of zone redundant and non-zone redundant plans in an App Service Environment, something that was previously not supported.

In addition to these new features, we also have a couple of other exciting things to share. We are now providing enhanced visibility into Availability Zone information, including the physical zone placement of your instances and zone counts. For our App Service Environment v3 customers, we have removed the minimum instance fee for enabling Availability Zones, meaning that you now only pay for the Isolated V2 instances you consume.
This aligns the pricing model with the multi-tenant App Service offering.

For more information, as well as guidance on how to use these features, see the docs: Reliability in Azure App Service. Azure Portal support for these new features will be available by mid-June 2025; in the meantime, see the documentation to use these new features with ARM/Bicep or the Azure CLI. Also check out the BRK200 breakout session at Microsoft Build 2025, live on May 20th or anytime after via the recording, where my team and I will be discussing these new features and many more exciting announcements for Azure App Service. If you’re in the Seattle area and attending Microsoft Build 2025 in person, come meet my team and me at our Expert Meetup Booth.

FAQ

Q: What are availability zones?
Availability zones are physically separate locations within an Azure region, each consisting of one or more data centers with independent power, cooling, and networking. Deploying applications across multiple availability zones ensures high availability and business continuity.

Q: How do I enable Availability Zones for my existing App Service plan or App Service Environment v3?
There is a new toggle in the Azure portal that will be enabled if your App Service plan or App Service Environment v3 supports Availability Zones. Your deployment must be on an App Service footprint that supports zones in order to have this capability. There is a new property called “MaximumNumberOfZones”, which indicates the number of zones your deployment supports. If this value is greater than one, you are on a footprint that supports zones and can enable Availability Zones as long as you have two or more instances. If this value is equal to one, you need to redeploy. Note that we are continually working to expand the zone footprint across more App Service deployments.

Q: Is there an additional charge for Availability Zones?
There is no additional charge; you only pay for the instances you use.
The only requirement is that you use two or more instances.

Q: Can I change the zone redundant property after creating my App Service plan?
Yes, the zone redundant property is now mutable, meaning you can toggle it on or off at any time.

Q: How can I verify the zone redundancy status of my App Service plans?
We now display the physical zone for each instance, helping you verify zone redundancy status for audits and compliance reviews.

Q: How do I use these new features?
You can use ARM/Bicep or the Azure CLI at this time. Starting in mid-June, Azure Portal support should be available. The documentation currently shows how to use ARM/Bicep and the Azure CLI to enable these features; the documentation, as well as this blog post, will be updated once Azure Portal support is available.

Q: Are Availability Zones supported on Premium V4?
Yes! See the documentation for more details on how to get started with Premium V4 today.

Microsoft Azure Cloud HSM is now generally available
Microsoft Azure Cloud HSM is now generally available. Azure Cloud HSM is a highly available, FIPS 140-3 Level 3 validated single-tenant hardware security module (HSM) service designed to meet the highest security and compliance standards. With full administrative control over their HSM, customers can securely manage cryptographic keys and perform cryptographic operations within their own dedicated Cloud HSM cluster.

In today’s digital landscape, organizations face an unprecedented volume of cyber threats, data breaches, and regulatory pressures. At the heart of securing sensitive information lies a robust key management and encryption strategy, which ensures that data remains confidential, tamper-proof, and accessible only to authorized users. However, encryption alone is not enough: how cryptographic keys are managed determines the true strength of security. Every interaction in the digital world, from processing financial transactions, securing applications like PKI, database encryption, and document signing, to securing cloud workloads and authenticating users, relies on cryptographic keys. A poorly managed key is a security risk waiting to happen. Without a clear key management strategy, organizations face challenges such as data exposure, regulatory non-compliance, and operational complexity.

An HSM is a cornerstone of a strong key management strategy, providing physical and logical security to safeguard cryptographic keys. HSMs are purpose-built devices designed to generate, store, and manage encryption keys in a tamper-resistant environment, ensuring that even in the event of a data breach, protected data remains unreadable. As cyber threats evolve, organizations must take a proactive approach to securing data with enterprise-grade encryption and key management solutions. Microsoft Azure Cloud HSM empowers businesses to meet these challenges head-on, ensuring that security, compliance, and trust remain non-negotiable priorities in the digital age.
Key Features of Azure Cloud HSM

Azure Cloud HSM ensures high availability and redundancy by automatically clustering multiple HSMs and synchronizing cryptographic data across three instances, eliminating the need for complex configurations. It optimizes performance through load balancing of cryptographic operations, reducing latency. Periodic backups enhance security by safeguarding cryptographic assets and enabling seamless recovery. Designed to meet FIPS 140-3 Level 3, it provides robust security for enterprise applications.

Ideal use cases for Azure Cloud HSM

Azure Cloud HSM is ideal for organizations migrating security-sensitive applications from on-premises to Azure Virtual Machines or transitioning from Azure Dedicated HSM or AWS CloudHSM to a fully managed Azure-native solution. It supports applications requiring PKCS#11, OpenSSL, and JCE for seamless cryptographic integration and enables running shrink-wrapped software such as Apache/Nginx SSL Offload, Microsoft SQL Server/Oracle TDE, and ADCS on Azure VMs. Additionally, it supports tools and applications that require document and code signing.

Get started with Azure Cloud HSM

Ready to deploy Azure Cloud HSM? Learn more and start building today: Get Started Deploying Azure Cloud HSM. Customers can download the Azure Cloud HSM SDK and Client Tools from GitHub: Microsoft Azure Cloud HSM SDK. Stay tuned for further updates as we continue to enhance Microsoft Azure Cloud HSM to support your most demanding security and compliance needs.

New Automation enhancements in AVS Landing Zone for Migration-Ready Infrastructure
Azure VMware Solution (AVS) Landing Zone offers PowerShell automation scripts that streamline deployment and management of key AVS components: a jumpbox for secure access, the HCX Connector for hybrid connectivity, and the HCX Service Mesh for workload mobility. These scripts enable consistent, repeatable setups that reduce manual effort, improve operational readiness, and accelerate migration timelines across multiple environments and regions.

Introducing the Data-Bound Reference Layer in Azure Maps Visual for Power BI
The Data-Bound Reference Layer in Azure Maps for Power BI elevates map-based reporting by allowing users to visually explore, understand, and act on their data. This feature opens new possibilities for data analysts, business leaders, and decision-makers who rely on spatial insights.

Benchmark Different Capacities for EDA Workloads on Microsoft HPC Storages
Overview

Semiconductor (or Electronic Design Automation [EDA]) companies prioritize reducing time to market (TTM), which depends on how quickly tasks such as chip design validation and pre-foundry work can be completed. Faster TTM also helps save on EDA licensing costs, since finishing work sooner means licenses are needed for less time. To achieve shorter TTM, storage solutions are crucial. As illustrated in the article “Benefits of using Azure NetApp Files for Electronic Design Automation (EDA)” (1*), with the Large Volume feature, which requires a minimum size of 50TB, a single Azure NetApp Files Large Volume can reach an I/O rate of up to 652,260 at 2 ms latency, and 826,379 at the performance edge (~7 ms).

Objective

In real-world production, EDA files, such as tools, libraries, temporary files, and output, are usually stored in different volumes with varying capacities. Not every EDA job needs extremely high I/O rates or throughput. Additionally, cost is a key consideration, since larger volumes are more expensive. The objective of this article is to share benchmark results for different storage volume sizes: 50TB, 100TB, and 500TB, all using the Large Volume feature. We also included a 32TB case, where the Large Volume feature isn't available on ANF, for comparison with Azure Managed Lustre File System (AMLFS), another Microsoft HPC storage solution. These benchmark results can help customers evaluate their real-world needs, considering factors like capacity size, I/O rate, throughput, and cost.

Testing Method

EDA workloads are classified into two primary types, Frontend and Backend, each with distinct requirements for the underlying storage and compute infrastructure. Frontend workloads focus on logic design and the functional aspects of chip design; they consist of thousands of short-duration parallel jobs with an I/O pattern characterized by frequent random reads and writes across millions of small files.
Backend workloads focus on translating logic design to physical design for manufacturing and consist of hundreds of jobs involving sequential reads/writes of fewer, larger files. The choice of a storage solution to meet this unique mix of frontend and backend workload patterns is non-trivial. Frontend and backend EDA workloads are very demanding on storage solutions: standard industry benchmarks indicate a high I/O profile for the workloads described above, including a substantial amount of NFS access, lookup, create, getattr, link, and unlink operations, as well as small and large file read and write operations. This blog contains the output from the performance testing of an industry standard benchmark for EDA. For this particular workload, the benchmark represents the I/O blend typical of a company running both frontend and backend EDA workloads in parallel.

Testing Environment

We used 10 E64ds_v5 client VMs connecting to a single ANF or AMLFS volume, with the nconnect mount option (for ANF) to ensure enough load was generated for the benchmark. The client VMs' tuning and configuration are the same as specified in (1*).

ANF mount options: nocto,actimeo=600,hard,rsize=262144,wsize=262144,vers=3,tcp,noatime,nconnect=8
AMLFS mount options: sudo mount -t lustre -o noatime,flock

All resources reside in the same VNET and, where possible, the same Proximity Placement Group to ensure low network latency.

Figure 1. High level architecture of the testing environment

Benchmark Results

EDA jobs are highly latency sensitive. For today’s more complex chip designs, 2 milliseconds of latency per EDA operation is generally seen as the ideal target, while the edge performance limit is around 7 milliseconds. We list the I/O rates achieved at both latency points for easier reference. Throughput (in MB/s) is also included, as it is essential for many backend tasks and the output phase. (Figure 2, Figure 3, Figure 4, and Table 1.)
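As a concrete illustration, the mount options above assemble into commands along these lines. The server addresses, export path, and Lustre file system name are placeholders, not values from the actual test environment:

```shell
# ANF large volume over NFSv3 with the options listed above.
# nconnect=8 spreads traffic across multiple TCP connections per mount.
sudo mkdir -p /mnt/eda
sudo mount -t nfs \
  -o nocto,actimeo=600,hard,rsize=262144,wsize=262144,vers=3,tcp,noatime,nconnect=8 \
  10.0.0.4:/eda-vol /mnt/eda        # placeholder server IP and export path

# AMLFS client mount with the options listed above.
sudo mkdir -p /mnt/amlfs
sudo mount -t lustre -o noatime,flock \
  10.0.0.8@tcp:/lustrefs /mnt/amlfs  # placeholder MGS address
```

The `nocto` and `actimeo=600` options relax close-to-open consistency and extend attribute caching, which suits EDA's read-heavy small-file patterns at the cost of strict cross-client coherence.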
For cases where the Large Volume feature is enabled, we observe the following:

- 100TB with the Ultra tier and 500TB with the Standard, Premium, or Ultra tier can reach an I/O rate of over 640,000 at 2 ms latency. This is consistent with the 652,260 stated in (1*). An Ultra 500TB volume can even reach a 705,500 I/O rate at 2 ms latency.
- For workloads not requiring as much I/O, either 50TB with the Ultra tier or 100TB with the Premium tier can reach a 500,000 I/O rate. For an even smaller job, 50TB with the Premium tier reaches 255,000 and is less expensive.
- For scenarios where throughput is critical, 500TB with the Standard, Premium, or Ultra tier can reach 10–12 GB/s of throughput.

Figure 2. Latency vs. I/O rate: Azure NetApp Files, one Large Volume
Figure 3. Achieved I/O rate at 2 ms latency & performance edge (~7 ms): Azure NetApp Files, one Large Volume
Figure 4. Achieved throughput (MB/s) at 2 ms latency & performance edge (~7 ms): Azure NetApp Files, one Large Volume
Table 1. Achieved I/O rate and throughput at both latencies: Azure NetApp Files, one Large Volume

For cases with less than 50TB of capacity, where the Large Volume feature is not available for ANF, we included Azure Managed Lustre File System (AMLFS) for comparison. With the same 32TB volume size, a regular ANF volume achieves about 90,000 I/O at 2 ms latency, while an AMLFS Ultra volume (500 MB/s/TiB) can reach roughly double that, around 195,000. This shows that AMLFS is the better choice for performance when the Large Volume feature isn't available on ANF. (Figure 5.)

Figure 5. Achieved I/O rate at 2 ms latency: ANF regular volume vs. AMLFS

Summary

This article shared benchmark results for different storage capacities needed for EDA workloads, including 50TB, 100TB, and 500TB volumes with the Large Volume feature enabled. It also compared a 32TB volume, where the Large Volume feature isn’t available on ANF, to Azure Managed Lustre File System (AMLFS), another Microsoft HPC storage option.
These results can help customers choose or design storage that best fits their needs by balancing capacity, I/O rate, throughput, and cost. With the Large Volume feature, 100TB Ultra and 500TB Standard, Premium, or Ultra tiers can achieve over 640,000 I/O at 2 ms latency. For jobs that need less I/O, 50TB Ultra or 100TB Premium can reach 500,000, while 50TB Premium offers 255,000 at a lower cost. When throughput matters most, 500TB volumes across all tiers can deliver 10–12 GB/s. If you have a smaller job or can’t use the Large Volume feature, Azure Managed Lustre File System (AMLFS) gives you better performance than a regular ANF volume.

A final reminder: this article primarily provided benchmark results to help semiconductor customers design their storage solutions, considering capacity size, I/O rate, throughput, and cost. It did not address other important criteria, such as heterogeneous integration or legacy compliance, which are also important when selecting an appropriate storage solution.

References

1. Benefits of using Azure NetApp Files for Electronic Design Automation (EDA)
2. Learn more about Azure Managed Lustre

Security Review for Microsoft Edge version 138
We have reviewed the new settings in Microsoft Edge version 138 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 128 security baseline, which can be downloaded from the Microsoft Security Compliance Toolkit, continues to be our recommended configuration. Microsoft Edge version 138 introduces 6 new Computer and User settings, and we have included a spreadsheet listing the new settings. There are two settings we would like to highlight for consideration, as they enable previewing behavior that will be on by default in a future release.

Control whether TLS 1.3 Early Data is enabled in Microsoft Edge
This setting allows enterprises to control whether the browser uses TLS 1.3 Early Data, a performance feature that sends HTTPS requests in parallel with the TLS handshake, allowing for faster use of secure connections. Enterprise customers are encouraged to test to identify any compatibility issues prior to enablement.

Specifies whether to block requests from public websites to devices on a user's local network
This setting helps prevent malicious websites from probing or interacting with internal resources (e.g., printers, routers, or internal APIs), reducing the risk of lateral movement or data exposure. Enterprise customers are encouraged to test for any intentional requests from public sites to local devices. One thing to note on this policy setting: you may see a deprecation claim in the setting title. This was in error and will be corrected in a subsequent release.

As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here. Please continue to give us feedback through the Security Baselines Discussion site or this post.

Security baseline for Windows Server 2025, version 2506
Microsoft is pleased to announce the June 2025 revision of the security baseline package for Windows Server 2025 (v2506)! You can download the baseline package from the Microsoft Security Compliance Toolkit, test the recommended configurations in your environment, and customize/implement them as appropriate. Starting with this release, we plan to revise the Windows Server baseline more frequently to keep pace with evolving threats, new Windows features, and community feedback.

Summary of Changes in This Release (v2506)

This release includes several changes made since the last release of the security baseline for Windows Server 2025 in January 2025, to further assist in the security of enterprise customers and to better align with the latest standards. The changes are summarized in the table below.

Security Policy: Change Summary
- Deny log on through Remote Desktop Services: Allow remote logon for non-admin local accounts on MS and add “BUILTIN\Guests” to both DC and MS.
- WDigest Authentication: Remove from the baseline.
- Allow Windows Ink Workspace: Remove from the baseline.
- Audit Authorization Policy Change: Set to “Success” in both DC and MS.
- Include command line in process creation events: Enable in both DC and MS.
- Control whether exclusions are visible to local users: Moved to Not Configured, as it is overridden by the parent setting.

Deny log on through Remote Desktop Services

We updated SeDenyRemoteInteractiveLogonRight on member servers to use S-1-5-114 (Local account and member of Administrators group) instead of S-1-5-113 (all local accounts) to strike a better balance between security and operational flexibility. This change continues to block remote RDP access for high-risk local admin accounts, our primary threat vector, while enabling legitimate use cases for non-admin local accounts, such as remote troubleshooting and maintenance during failover or domain unavailability.
By allowing non-admin local accounts to log on interactively, we preserve a secure recovery path without weakening protection for privileged accounts. In addition, to strengthen the Remote Desktop Services (RDS) posture on both Windows Server 2025 Domain Controllers and Member Servers, we added the Guests group to the "Deny log on through Remote Desktop Services" policy. While the Guest account is disabled by default, explicitly denying its RDP access adds a defense-in-depth measure that helps prevent misuse if the group is ever enabled or misconfigured. This complements the existing restriction on Local Account logon for DCs and helps ensure a consistent security posture across server roles.

WDigest Authentication

We removed the policy "WDigest Authentication (disabling may require KB2871997)" from the security baseline because it is no longer necessary for Windows Server 2025. This policy was originally enforced to prevent WDigest from storing users' plaintext passwords in memory, which posed a serious credential theft risk. However, starting with the 24H2 update (KB5041160) for Windows Server 2022 and continuing into Windows Server 2025, the engineering teams have deprecated this policy. As a result, there is no longer a need to explicitly enforce this setting, and the policy has been removed from the baseline to reflect the current default behavior.

Allow Windows Ink Workspace

We removed the policy “Allow Windows Ink Workspace” from the Windows Server 2025 security baseline. This policy applies only to Windows client editions and is not available on Windows Server. Including it in the baseline caused confusion; removing an unnecessary setting reduces GPO processing time and helps ensure all recommended settings are applicable to the Windows Server environment.
Audit Authorization Policy Change

We set Audit Authorization Policy Change (Success) in the baseline for both Domain Controllers and Member Servers to ensure visibility into any changes that affect the system’s security posture, including modifications to user rights and audit policies. These changes directly impact how access is granted and how activity is monitored, making them critical to detect for both security and compliance purposes. Logging successful changes helps identify misconfigurations, unauthorized privilege assignments, or malicious tampering — especially in cases of lateral movement or privilege escalation. Because these events occur infrequently, they generate minimal log volume while offering high forensic and operational value. While Failure auditing is not set, it is available as an optional setting on both Domain Controllers and Member Servers for organizations that have the monitoring capability to interpret and act on failed attempts to modify security policies. This provides an added layer of visibility in high-assurance or tightly controlled environments.

Include command line in process creation events

We added Include command line in process creation events to the baseline to improve visibility into how processes are executed across the system. Capturing command-line arguments allows defenders to detect and investigate malicious activity that may otherwise appear legitimate, such as abuse of scripting engines, credential theft tools, or obfuscated payloads using native binaries. This setting supports modern threat detection techniques with minimal performance overhead and is widely recommended.

Visibility of Microsoft Defender Antivirus Exclusions

We updated the configuration for the policy "Control whether exclusions are visible to local users" (Computer Configuration\Windows Components\Microsoft Defender Antivirus) to Not Configured in this release.
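For a quick spot check outside of GPO deployment, the two auditing changes above can be applied or verified on a single machine from an elevated command prompt. This is only a sketch (the baseline package itself delivers these settings via Group Policy); the registry value shown is the standard backing value for the command-line-in-process-creation policy:

```bat
:: Enable Success auditing for the Authorization Policy Change subcategory.
auditpol /set /subcategory:"Authorization Policy Change" /success:enable

:: Confirm the effective setting.
auditpol /get /subcategory:"Authorization Policy Change"

:: Include command line in process creation events (surfaces in Event ID 4688).
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" ^
  /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f
```

Note that values set locally this way will be overwritten the next time the domain GPO applies, so production deployments should use the baseline package rather than ad hoc commands.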
This change was made because the parent policy "Control whether or not exclusions are visible to Local Admins" is already set to Enabled, which takes precedence and effectively overrides the behavior of the former setting. As a result, explicitly configuring the child policy is unnecessary and may introduce confusion without impacting actual behavior. You can continue to manage exclusion visibility through the parent policy, which provides the intended control over whether local administrators can view exclusion lists.

UEFI Lock and Virtualization-Based Protections

In Windows, some security features are protected by Secure Boot and the TPM. When combined with firmware protections that lock UEFI configuration variables, these protections become tamper-resistant: Windows can detect and respond to unauthorized hardware changes or tamper attempts, making it significantly harder for attackers to disable key security features after deployment. In the Windows Server 2025 security baseline, two policy categories are configured to take advantage of UEFI lock:

- Virtualization-Based Security (VBS) — managed via the policy: System\Device Guard\Turn On Virtualization Based Security
- Local Security Authority (LSA) Protection — managed via the policy: System\Local Security Authority\Configure LSASS to run as a protected process

While there are no changes to the recommended settings for these policies in this release, we want to highlight their role in strengthening system defenses and provide guidance to help you make informed deployment decisions. UEFI lock enforces these protections in a way that prevents local or remote tampering — even by administrators. This aligns with strong security requirements in sensitive or high-assurance environments.
However, it also introduces important operational considerations:

- Some hardware platforms may not fully support UEFI lock
- Compatibility issues, reduced performance, or system instability may occur
- Once enabled, UEFI lock is difficult to reverse

Please let us know your thoughts by commenting on this post or through the Security Baseline Community.