Unified by design: mirroring Azure Databricks Unity Catalog to Microsoft OneLake in Fabric (Generally Available)

We are thrilled to announce the general availability of Mirroring for Azure Databricks Unity Catalog in Microsoft Fabric—a secure, high-performance integration that provides seamless access to Azure Databricks tables from Fabric.

With Fabric and Azure Databricks, we are building the future of data platforms on a lakehouse foundation, powered by open data formats, full interoperability, and the flexibility to choose the right tools for any scenario. Now, with expanded integration with Azure Databricks Unity Catalog, you can enjoy a unified, governed experience across both platforms, without data duplication.

This powerful new capability is already being used by Fabric and Azure Databricks customers including The Adecco Group, the world’s leading talent advisory and solutions company. Guillaume Berthier, a Cloud Solution Architect for Data and Analytics at The Adecco Group said, “One of the main challenges in building our new unified Global Data Platform was bridging Azure Databricks and Microsoft Fabric. The Mirrored Unity Catalog feature has been a game-changer—it enables us to expose Databricks-managed datasets directly in Fabric, making them instantly usable for Power BI Direct Lake semantic models and to power our Fabric-hosted GraphQL APIs. This integration delivers real-time insights with minimal latency across both analytics and applications.”

Seamlessly bring your Azure Databricks Unity Catalog data to Microsoft OneLake

With OneLake, Fabric’s unified data lake, you can access your entire multi-cloud data estate from a single data lake that spans the whole organization. OneLake is automatically wired into every Fabric engine, and because data is stored in the open Delta Parquet format, you can use it for any data project, regardless of vendor or service. Mirroring and shortcuts let you unify your multi-cloud and on-premises sources so your teams work from a single copy of data. Together they already cover most Microsoft data services, including storage sources such as Azure Data Lake Storage, Azure Blob Storage, and Dataverse (Power Platform and Dynamics 365), and databases such as SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure Cosmos DB, and Azure Database for PostgreSQL. Now we are completing this story with the expansion to Azure Databricks.

Creating a mirrored Azure Databricks catalog item is the easiest way to bring your Unity Catalog data into Microsoft OneLake. From the Fabric portal, you can configure a new item and, with just a few clicks, add an entire catalog, a schema, or individual tables to OneLake. Once connected, the data becomes available in OneLake as read-only: you can write SQL queries against it, use it across Fabric workloads, and drive interactive analytics in Power BI using Direct Lake mode. As data is updated, or as tables are added, removed, or renamed in Azure Databricks, Fabric stays automatically in sync, so you are always working with the most current data.
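
For example, once the mirrored catalog item exists, its SQL analytics endpoint can be queried like any other Fabric warehouse endpoint. Here is a minimal sketch using pyodbc with Microsoft Entra authentication; the server address, catalog, schema, and table names are placeholders you would replace with the values shown on your item’s settings page:

```python
# Sketch: query a mirrored Unity Catalog table through its Fabric SQL analytics
# endpoint. The endpoint address, database, schema, and table are placeholders --
# copy the real connection string from the item's "SQL connection string" in Fabric.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"  # hypothetical endpoint
    "Database=<mirrored_catalog_name>;"
    "Authentication=ActiveDirectoryInteractive;"  # interactive Microsoft Entra sign-in
    "Encrypt=yes;"
)

cursor = conn.cursor()
# Mirrored tables are read-only; standard T-SQL SELECT statements work as usual.
cursor.execute("SELECT TOP 10 * FROM <schema>.<table>")
for row in cursor.fetchall():
    print(row)
conn.close()
```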

Watch the capability in action with this demo:

The Mirrored Azure Databricks Unity Catalog item in Microsoft Fabric is more than a technical bridge; it is a strategic enabler for modern, data-driven enterprises. By eliminating traditional data movement and providing real-time access to governed data, it unlocks a wide range of business advantages:

  • One copy of your data with no ETL needed
    Data in Azure Databricks is immediately available in Fabric with no ETL pipelines, reducing time and complexity.
  • Seamless integration with full Fabric and OneLake capabilities
    Mirrored Unity Catalog tables function like native Fabric tables: they can be queried, secured, and used with Power BI Direct Lake, semantic models, and AI tools, all without duplication. You can even create OneLake shortcuts from a Fabric lakehouse to your Mirrored Azure Databricks Unity Catalog item (see the sketch after this list).
  • Reduced data sprawl and optimized cost efficiency
    Remove unnecessary data duplication and eliminate extra data pipelines with this seamless, no-copy integration, saving time and lowering data storage costs.
  • Enhanced data governance and security
    Deliver consistent, enterprise-grade governance and security policies through the OneLake security framework, ensuring data compliance and secure access across all environments.
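
To illustrate the shortcut scenario mentioned in the list above, here is a hedged sketch that creates a OneLake shortcut from a lakehouse to a table in a mirrored catalog item using the Fabric Shortcuts REST API. The workspace and item IDs, the token acquisition, and the path values are placeholders, and the payload shape should be verified against the Shortcuts API documentation:

```python
# Sketch: create a OneLake shortcut in a lakehouse that points at a table in a
# Mirrored Azure Databricks Unity Catalog item. IDs and paths are placeholders.
import requests

TOKEN = "<entra-access-token>"          # e.g. acquired via azure-identity
WORKSPACE_ID = "<workspace-guid>"
LAKEHOUSE_ID = "<lakehouse-item-guid>"  # the shortcut lives in this lakehouse

url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
       f"/items/{LAKEHOUSE_ID}/shortcuts")

payload = {
    "path": "Tables",                 # where the shortcut is created
    "name": "customers",              # hypothetical shortcut name
    "target": {
        "oneLake": {                  # internal OneLake target
            "workspaceId": WORKSPACE_ID,
            "itemId": "<mirrored-catalog-item-guid>",
            "path": "Tables/<schema>/customers",  # hypothetical table path
        }
    },
}

resp = requests.post(url, json=payload,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())
```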

What’s new in this generally available (GA) release?

Here’s a summary of the key improvements and new capabilities:

  • Network security and compliance

Supports secure access to Azure Data Lake Storage (ADLS) with firewalls enabled, letting organizations enforce strict network boundaries without losing functionality. Visit the documentation to learn how to configure firewall access for ADLS.

  • Public APIs for automation and CI/CD

Offers public APIs to create, manage, and monitor mirrored catalog items, simplifying integration with enterprise workflows and CI/CD pipelines. Explore the API documentation.
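
As a sketch of what automation might look like, the snippet below creates and then lists mirrored catalog items with the Fabric public items API. The item type string, the definition payload, and the response shape are assumptions here; confirm them in the Mirrored Azure Databricks Catalog API documentation before use:

```python
# Sketch: create and monitor mirrored catalog items with the Fabric public API.
# The item type name and payload are assumptions -- verify against the API docs.
import requests

TOKEN = "<entra-access-token>"
WORKSPACE_ID = "<workspace-guid>"
BASE = f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Create the item (the definition, e.g. the Databricks connection and the
# catalog/schema/table selection, is supplied per the API documentation).
create = requests.post(f"{BASE}/items", headers=HEADERS, json={
    "displayName": "sales_mirrored_catalog",
    "type": "MirroredAzureDatabricksCatalog",  # assumed item type name
})
create.raise_for_status()

# Monitor: list workspace items and filter to mirrored catalogs
# (assumes the list response wraps results in a "value" array).
items = requests.get(f"{BASE}/items", headers=HEADERS).json()["value"]
mirrored = [i for i in items if i["type"] == "MirroredAzureDatabricksCatalog"]
print(mirrored)
```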

  • OneLake security integration

Fully integrates with the OneLake security framework, allowing workspace admins to enforce fine-grained, enterprise-grade access controls and compliance. To learn more, refer to the Secure Mirrored Azure Databricks Data in Fabric with OneLake security blog post.

How the mirrored Azure Databricks Unity Catalog item works and how to set it up:

The Mirrored Azure Databricks Catalog item provides real-time, secure access to Azure Databricks Unity Catalog data in Fabric without ETL or duplication.

Step 1: Start the Mirrored Azure Databricks Catalog Item Setup in Fabric

Start in your Fabric workspace by clicking Create New and selecting Mirrored Azure Databricks Catalog.

Step 2: Connect to Your Azure Databricks Workspace

Next, provide your Azure Databricks workspace URL and credentials. This secure connection lets Fabric browse your accessible catalogs and prepare data for mirroring without requiring an active cluster.

Step 3: Finalize and View Mirrored Tables in Fabric

Once the connection is established and your catalog, schema, and table selections are confirmed, Fabric creates the mirrored catalog and automatically syncs any subsequent catalog changes. The mirrored Unity Catalog tables then appear in Fabric, ready for reporting, modeling, and visualization with Power BI and Direct Lake.
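
The mirrored tables are also reachable from Fabric notebooks. Here is a minimal sketch, assuming a lakehouse shortcut named customers has been created to one of the mirrored tables (the lakehouse and table names are hypothetical, and spark is the session Fabric notebooks provide automatically):

```python
# Sketch for a Fabric notebook: read a mirrored Unity Catalog table through a
# lakehouse shortcut. "my_lakehouse" and "customers" are hypothetical names;
# "spark" is the preconfigured Spark session in Fabric notebooks.
df = spark.read.table("my_lakehouse.customers")

# The data stays in OneLake -- the same single copy that Azure Databricks writes.
df.groupBy("country").count().show()
```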

Step 4: Build Power BI Reports with Direct Lake

With mirrored catalog tables in Fabric, you can build a Power BI semantic model directly using Direct Lake. From the SQL analytics endpoint, create a new model, define DAX measures, establish relationships, and assign business-friendly names. You can combine mirrored data with other sources, then create interactive reports from the file menu or in Power BI Desktop by connecting to the semantic model. Once published, reports can be easily shared across your organization while querying data directly from OneLake, with no duplication.
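
If you want to sanity-check the semantic model programmatically, the semantic-link (SemPy) library can evaluate DAX against it from a Fabric notebook. A brief sketch; the dataset, table, column, and measure names below are placeholders:

```python
# Sketch for a Fabric notebook: query the Direct Lake semantic model with
# semantic-link (SemPy). Dataset, table, column, and measure names are placeholders.
import sempy.fabric as fabric

df = fabric.evaluate_dax(
    dataset="Sales Semantic Model",   # hypothetical semantic model name
    dax_string="""
        EVALUATE
        SUMMARIZECOLUMNS(
            'customers'[country],
            "Total Sales", [Total Sales]
        )
    """,
)
print(df.head())
```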

What’s Next

As we continue to evolve the Mirrored Azure Databricks Catalog item, we may explore support for additional table types, including tables with row-level security and column-level masking (RLS/CLM) policies, Lakehouse Federation tables, Delta Sharing tables, streaming data, and views or materialized views.

Try it Today

Ready to simplify your data architecture and unlock real-time insights? Try the Mirrored Azure Databricks Catalog item in Microsoft Fabric today, build on your Azure Databricks investment, and gain fast, secure, and integrated access to your data.

Refer to the Tutorial: Configure Microsoft Fabric mirrored databases from Azure Databricks to get started now!

Also check out the latest Azure Databricks blog to learn why Databricks runs best on Azure.
