Recent Discussions
Synapse Webhook Action with Private Logic App
Hi all, I have a Synapse workspace with public access disabled, using private endpoints for both inbound and outbound access from the managed VNet. I also have a Logic App with private endpoints. Synapse and the Logic App are in separate virtual networks, peered through a central hub. Each has access to private DNS zones with records to resolve each resource. Since I disabled public network access on the Logic App, I can no longer use a Webhook activity with a callback URI from a Synapse pipeline. A Web activity works just fine, but with the Webhook activity I get a 403 Forbidden response from the Logic App. Ordinarily this looks like a permission issue, but the workflow runs fine when public network access is enabled. When the Webhook activity fails, no activity run is logged on the Logic App side, so there's something the Webhook activity is not getting back from the Logic App when public network access is disabled. I've been trying to find a solution (including sending a 202 response back to Synapse from the Logic App), but it continues to baffle me. Has anyone else successfully configured a Synapse Webhook activity to call a workflow in a Standard Logic App over private endpoints? Any ideas or suggestions to troubleshoot this?

June 2025 updates for Azure Database for PostgreSQL
Big news this month: PostgreSQL 17 is now GA with in-place upgrades, and our Migration Service fully supports PG17, making adoption smoother than ever. Also in this release:
- Online Migration is now generally available
- SSD v2 HA (preview) with 10-second failovers and better resilience
- Azure PostgreSQL now available in Indonesia Central
- VS Code extension enhancements for a smoother dev experience
- Enhanced role management for improved admin control
- Ansible collection updated for the latest REST API
Check out all these updates in this month's recap blog: https://techcommunity.microsoft.com/blog/adforpostgresql/june-2025-recap-azure-database-for-postgresql/4412095 and tell us which feature you're most excited about!

Copy Activity Successful, But Times Out
This appears to be an edge case, but I wanted to share. A copy activity succeeds but times out: the duration is 1:58:55, and it times out at 2:00:12. It then runs a second time and succeeds, loading duplicate records. The duplicate records are the undesired result.

Copy Activity
- General: Timeout 0.02:00:00, Retry 2
- Source: MySQL, parameterized SQL
- Sink: Synapse SQL pool, parameterized; copy method: COPY command
- Settings: use V2 hierarchy storage for staging
- General: Synapse/ADF managed network

Oracle 2.0 Upgrade Woes with Self-Hosted Integration Runtime
This past weekend my ADF instance finally got the prompt to upgrade linked services that use the Oracle 1.0 connector, so I thought, "no problem!" and got to work upgrading my self-hosted integration runtime to 5.50.9171.1. Most of my connections use service_name during authentication, so according to the docs I should be able to connect using the Easy Connect (Plus) naming convention. When I do, I encounter this error:

Test connection operation failed. Failed to open the Oracle database connection. ORA-50201: Oracle Communication: Failed to connect to server or failed to parse connect string ORA-12650: No common encryption or data integrity algorithm https://docs.oracle.com/error-help/db/ora-12650/

I did some digging on this error code, and the troubleshooting doc suggests that I reach out to my Oracle DBA to update Oracle server settings, which I did, but I have zero confidence the DBA will take any action. https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-oracle Then I happened across this documentation about the upgraded connector: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory#upgrade-the-oracle-connector Is this for real? ADF won't be able to connect to old versions of Oracle? If so, I'm effed, because my company is so, so legacy and all of our Oracle servers are at 11g. I also tried adding additional connection properties in my linked service connection like this, but I honestly have no idea what I'm doing:

Encryption client: accepted
Encryption types client: AES128, AES192, AES256, 3DES112, 3DES168
Crypto checksum client: accepted
Crypto checksum types client: SHA1, SHA256, SHA384, SHA512

But no matter what, the issue persists. :( Am I missing something stupid? Are there ways to handle the encryption type mismatch client-side from the VM that runs the self-hosted integration runtime?
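For context, ORA-12650 means the client and server could not agree on any encryption or checksum algorithm; the server-side settings the troubleshooting doc alludes to live in the Oracle server's sqlnet.ora. A hypothetical fragment is below; the parameter names come from the Oracle Net documentation, but the specific values are an assumption and would need to be validated by your DBA against the algorithms the 2.0 connector's driver still offers:

```ini
; Hypothetical sqlnet.ora fragment on the Oracle *server* side.
; "accepted" lets the server negotiate down if the client doesn't request
; security; the TYPES lists must share at least one algorithm with the client.
SQLNET.ENCRYPTION_SERVER = accepted
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256, AES192, AES128)
SQLNET.CRYPTO_CHECKSUM_SERVER = accepted
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256, SHA384, SHA512)
```

If the 11g server only offers legacy algorithms (DES, MD5) that the newer driver has dropped, there may be no overlap to negotiate, which would match the symptom described.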
I would hate to be in the business of managing an Oracle environment and tnsnames.ora files, but I also don't want to re-engineer almost 100 pipelines because of a connector incompatibility.

Advice requested: how to capture full SQL CDC changes using Dataflow and ADLS gen2
Hi, I'm working on a fairly simple ETL process using Dataflow in Azure Data Factory, where I want to capture the changes in a CDC-enabled SQL table and store them in Delta Lake format in an ADLS gen2 sink. The resulting dataset will be further processed, but for me this is the end of the line. I don't have an expert understanding of all the details of the Delta Lake format, but I do know that I can use it to store changes to my data over time. So in the sink I enabled all update methods (insert, delete, upsert, update), since my CDC source should be able to figure out the correct row transformation. Key columns are set to the primary key columns in SQL.

All this works fine as long as I configure my source to use CDC with 'netChanges: true'. That yields a single change row for each record, which is correctly stored in the sink. But I want to capture all changes since the previous run, so I want to set the source to 'netChanges: false'. That yields rows for every change since the previous time the dataflow ran. But for every table that actually has records with more than one change, the dataflow fails with "Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways." I take that to mean that my dataflow is, as it is, not smart enough to loop through all changes in the source and apply them to the sink in order.

So apparently something else has to be done. My intuition says that, since CDC actually provides all the metadata to make this possible, there's probably an out-of-the-box way to achieve what I want. But I can't readily find the magic box I should tick. I can probably build it out by hand, by somehow looping over all changes and applying them in order, but before I go down that route, I came here to learn from the experts whether this is indeed the only way, or, preferably, whether there is a neat trick I missed to get this done easily. Thanks so much for your advice!
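For readers hitting the same merge error: one workaround is to collapse each run's captured changes to a single net row per key before the Delta merge, ordered by the CDC metadata columns, while landing the full change history append-only elsewhere. A minimal sketch, assuming SQL Server's standard `__$start_lsn`/`__$seqval` CDC columns and a placeholder key column named `id`:

```python
# Sketch: collapse CDC change rows (netChanges: false) to one net row per key,
# in commit order, so a Delta merge sees at most one source row per target row.
# __$start_lsn / __$seqval / __$operation are the standard CDC metadata
# columns; "id" stands in for your table's primary key.

def net_changes(rows, key="id"):
    """Keep only the last change per key, applying changes in LSN order."""
    ordered = sorted(rows, key=lambda r: (r["__$start_lsn"], r["__$seqval"]))
    latest = {}
    for r in ordered:
        latest[r[key]] = r          # later changes overwrite earlier ones
    return list(latest.values())

changes = [
    {"id": 1, "__$start_lsn": 1, "__$seqval": 1, "__$operation": 2, "v": "a"},  # insert
    {"id": 1, "__$start_lsn": 2, "__$seqval": 1, "__$operation": 4, "v": "b"},  # update
    {"id": 2, "__$start_lsn": 2, "__$seqval": 2, "__$operation": 2, "v": "c"},  # insert
]
print(net_changes(changes))  # one row per id, each reflecting the final change
```

This is essentially what 'netChanges: true' does server-side per run; doing it yourself lets you keep the raw all-changes feed for history while feeding the merge only the net result.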
BR

PostgreSQL 17 General Availability with In-Place Upgrade Support
We're excited to share that PostgreSQL 17 is now generally available on Azure Database for PostgreSQL - Flexible Server! This release brings community-driven enhancements including improved vacuum performance, smarter query planning, enhanced JSON functions, and dynamic logical replication. It also includes support for in-place major version upgrades, allowing customers to upgrade directly from PostgreSQL 11-16 to 17 without needing to migrate data or change connection strings. PostgreSQL 17 is now the default version for new server creations and major version upgrades. Read the full blog post: http://aka.ms/PG17 Let us know if you have feedback or questions!

Solution: Handling Concurrency in Azure Data Factory with Marker Files and Web Activities
Hi everyone, I wanted to share a concurrency issue we encountered in Azure Data Factory (ADF) and how we resolved it using a small but effective enhancement, one that might be useful if you're working with shared Blob Storage across multiple environments (like Dev, Test, and Prod).

Background: Shared Blob Storage & Marker Files
In our ADF pipelines, we extract data from various sources (e.g., SharePoint, Oracle) and store them in Azure Blob Storage. That Blob container is shared across multiple environments. To prevent duplicate extractions, we use marker files:
- started.marker - created when a copy begins
- completed.marker - created when the copy finishes successfully
If both markers exist, pipelines reuse the existing file (caching logic). This mechanism was already in place and worked well under normal conditions.

The Issue: Race Conditions
We observed that simultaneous executions from multiple environments sometimes led to:
- Overlapping attempts to create the same started.marker
- Duplicate copy activities
- Corrupted Blob files
This became a serious concern because the Blob file was later loaded into Azure SQL Server, and any corruption led to failed loads.

The Fix: Web Activity + REST API
To solve this, we modified only the creation of started.marker by:
- Replacing the Copy Activity with a Web Activity that calls the Azure Storage REST API
- The API call uses Azure Blob Storage's conditional header If-None-Match: * to safely create the file only if it doesn't exist
- If the file already exists, the API returns "BlobAlreadyExists", which the pipeline handles by skipping
The Copy Activity is still used to copy the data and create the completed.marker; no changes needed there.
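The conditional create above can be sketched as plain request construction. This is a minimal sketch: the helper names and the x-ms-version value are illustrative, authentication (managed identity or SAS) is omitted, and in ADF the same headers simply go on the Web Activity:

```python
# Sketch of the atomic "create started.marker" call against the Blob REST API.

def marker_request(blob_url):
    """Build a conditional Put Blob request that only succeeds if the blob
    does not already exist."""
    return {
        "method": "PUT",
        "url": blob_url,
        "headers": {
            "x-ms-blob-type": "BlockBlob",
            "If-None-Match": "*",        # create only if the blob doesn't exist
            "x-ms-version": "2021-08-06",  # illustrative service version
        },
        "body": b"",
    }

def interpret(status_code):
    """Map the storage response onto the pipeline's branching logic."""
    if status_code == 201:
        return "created"             # we won the race: proceed with the copy
    if status_code == 409:           # error code: BlobAlreadyExists
        return "already-started"     # another run got there first: skip/retry
    return "error"
```

Because the existence check and the create happen in one request, there is no window in which two runs can both decide the marker is missing.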
Updated Flow
1. Check marker files:
   - If both started.marker and completed.marker exist -> use the cached file
   - If only started.marker exists -> wait and retry
   - If neither exists -> continue to step 2
2. Web Activity calls the REST API to create started.marker
   - Success -> proceed with the copy in step 3
   - Failure -> another run already started -> skip/retry
3. Copy Activity performs the data extract
4. Copy Activity creates completed.marker

Benefits
- Atomic creation of started.marker -> no race conditions
- Minimal change to the existing marker-file pipeline logic
- Reliable downstream loads into Azure SQL Server
- Preserves the existing architecture (no full redesign)

Would love to hear: have you used similar marker-based patterns in ADF? Any other approaches to concurrency control that worked for your team? Thanks for reading! Hope this helps someone facing similar issues.

Copy Activity - JSON Mapping
Hello, I have created a copy activity in Azure Synapse Analytics. I have a JSON file as input and would like to unpack it and save it as a CSV file. I have tried several times but cannot get the data in the correct output. Below is my input file:

{
  "status": "success",
  "requestTime": "2025-06-26 15:23:41",
  "data": [
    "Monday",
    "Tuesday",
    "Wednesday"
  ]
}

I would like to save it in the following output:

status  | requestTime      | data
success | 26/06/2025 15:23 | Monday
success | 26/06/2025 15:23 | Tuesday
success | 26/06/2025 15:23 | Wednesday

I am struggling to configure the mapping section correctly; I cannot understand how to unpack the data array. $['data'][0] gives me the first element, but I would like to extract all elements in the format above. Any help would be appreciated.

Export to Excel is not working
Hi, after the recent Azure Data Explorer Web UI update, the "Export to Excel" feature is no longer functioning as expected. It still works for simple tables, though it takes longer than before, but it fails for tables containing complex data outputs such as empty, null, array [], or JSON values. Clicking the "Export to Excel" option does not produce the expected results. Could you please investigate this issue and provide guidance on a resolution? Thank you.

Oracle 2.0 property authenticationType is not specified
I just published the upgrade to the Oracle 2.0 connector (linked service), and all my pipelines ran OK in dev. This morning I woke up to lots of red pipelines that ran during the night, with the following error message:

ErrorCode=OracleConnectionOpenError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to open the Oracle database connection.,Source=Microsoft.DataTransfer.Connectors.OracleV2Core,''Type=System.ArgumentException,Message=The required property is not specified. Parameter name: authenticationType,Source=Microsoft.Azure.Data.Governance.Plugins.Core,'

Here is the code for my Oracle linked service:

{
    "name": "Oracle",
    "properties": {
        "parameters": {
            "host": { "type": "string" },
            "port": { "type": "string", "defaultValue": "1521" },
            "service_name": { "type": "string" },
            "username": { "type": "string" },
            "password_secret_name": { "type": "string" }
        },
        "annotations": [],
        "type": "Oracle",
        "version": "2.0",
        "typeProperties": {
            "server": "@{linkedService().host}:@{linkedService().port}/@{linkedService().service_name}",
            "authenticationType": "Basic",
            "username": "@{linkedService().username}",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "Keyvault",
                    "type": "LinkedServiceReference"
                },
                "secretName": {
                    "value": "@linkedService().password_secret_name",
                    "type": "Expression"
                }
            },
            "supportV1DataTypes": true
        },
        "connectVia": {
            "referenceName": "leap-prod-onprem-ir-001",
            "type": "IntegrationRuntimeReference"
        }
    }
}

As you can see, "authenticationType" is defined, but my guess is that the publish and deployment step somehow drops that property. We are using "modern" deployment in Azure DevOps pipelines using Node.js. Would appreciate some help with this!

Blob Storage Event Trigger Disappears
Yesterday I ran into an odd situation where there was a resource lock and I was unable to rename pipelines or drop/create storage event triggers. An admin cleared the lock and I was able to remove and clean up the triggers and pipelines. Today, when I try to recreate the blob storage trigger to process a file when it appears in a container, the trigger creates just fine, but on refresh it disappears. If I try to recreate it with the same name as the one that went away, the ADF UI says it already exists, yet I cannot assign it to a pipeline because the UI does not see it. Any insight as to where it is, how I can see it, or which logs would record such activity to give a clue as to what is going on? This seems like a bug.

Parameter controls are not showing Display text
Hi, after a recent update to the Azure Data Explorer Web UI, the parameter controls are not displaying correctly. The Display Text for parameters is not shown by default; instead, the raw Value is displayed until the control is clicked, at which point the correct Display Text appears. Could you please investigate this issue and provide guidance on a resolution? Thank you.

June 2023 Update: Azure Database for PostgreSQL Flexible Server Unveils New Features
The Azure Database for PostgreSQL Flexible Server June 2023 update is live! Now enjoy:
- Easier major version upgrades with reduced downtime
- A server recovery feature for dropped servers
- A more user-friendly Connect experience
- Improved server performance with new IO enhancements
- Auto-growing storage and online disk resize, now in public preview
We also support minor versions PostgreSQL 15.2 (preview), 14.7, 13.10, 12.14, and 11.19. Big thanks to our dedicated team! Check out our blog for more details: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/june-2023-recap-azure-database-postgresql-flexible-server/ba-p/3868650
July 2023 Recap: Azure Database PostgreSQL Flexible Server
- Support for PostgreSQL 15 is now generally available.
- Automation Tasks have been introduced for streamlined management (preview).
- Flexible Server migration tooling has been enhanced (general availability).
- Hardware options have been expanded with the addition of AMD compute SKUs (general availability).
These updates represent substantial improvements in performance, scalability, and efficiency. Whether you are a developer, a database administrator (DBA), or simply passionate about PostgreSQL, we trust these enhancements will contribute positively to your experience with our platform. Should you find these updates valuable, we encourage you to engage with us through the usual channels. Thank you for your continued support and interest in Azure Database for PostgreSQL Flexible Server.

Autoscaling with Azure: A Comprehensive Guide to PostgreSQL Optimization Using Azure Automation Task
Autoscaling Azure PostgreSQL Server with Automation Tasks: read our latest article detailing the power of autoscaling Azure Database for PostgreSQL Flexible Server using Azure Automation Tasks. This new feature can revolutionize how we manage resources, streamlining operations and minimizing human error.

August 2023 Recap: Azure Database for PostgreSQL Flexible Server
Absolutely thrilled to unveil our latest blog post, "August 2023 Recap: Azure Database for PostgreSQL Flexible Server". This month is jam-packed with feature updates designed to amplify your experience!
1. Autovacuum Monitoring - elevate your database health with improved tools and metrics.
2. Flexible DNS Zone Linking - simplify your server setup process for multiple networking models.
3. Server Parameter Visibility enhancements - now view hidden parameters for better performance optimization.
4. Single to Flexible Server Migration Tooling - a simplified migration experience with automated extension allow-listing.
Don't miss out! Read the full scoop here: August 2023 Recap: Azure Database for PostgreSQL Flexible Server

PostgreSQL 16 generally available (September 14, 2023)
Detailed Release Notes - https://www.postgresql.org/about/news/postgresql-16-released-2715/ How has PostgreSQL 16's new feature set changed the game for your database operations? Share your favorite enhancements and unexpected wins!

November 2023 Recap: Azure PostgreSQL Flexible Server
Excited to share our November 2023 updates for Azure Database for PostgreSQL Flexible Server:
- Server logs management has been streamlined for better monitoring and troubleshooting, along with customizable retention periods.
- Embracing the latest in security, we now support TLS version 1.3, ensuring the most secure and efficient client-server communications.
- Migrations are smoother with our new pre-migration validation feature, making your transition to Flexible Server seamless.
- Microsoft Defender integration provides proactive anomaly detection and real-time alerts to safeguard your databases.
- Additionally, we've upgraded user and role migration capabilities for a more accurate and hassle-free experience.
Link - https://lnkd.in/gMMGaiAK Stay tuned for more updates, and feel free to share your experiences with these new features!

February 2024 Recap: Azure PostgreSQL Flexible Server
Azure Database for PostgreSQL Flexible Server - Feb '24 feature recap:
- General availability of Private Endpoints across all public Azure regions for secure, flexible connectivity.
- Latest extension versions to enhance your PostgreSQL performance and security.
- Latest Postgres minor versions (16.1, 15.5, 14.10, 13.13, 12.17, 11.22) now supported for automatic upgrades.
- Enhanced Major Version Upgrade Logging for smoother upgrades.
- pgvector 0.6.0 introduced for better vector similarity searches.
- Real-time Text Translation now available with the Azure_AI extension.
- Easier Online Migration from Single Server to Flexible Server in public preview.
We recommend reading our latest blog post to explore these updates in detail: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/february-2024-recap-azure-postgresql-flexible-server/ba-p/4089037

March 2024 Recap: Azure PostgreSQL Flexible Server
Azure Database for PostgreSQL - Flexible Server March '24 feature recap:
- Migration Made Easy: seamlessly transfer PostgreSQL instances with the Migration Service (GA).
- Latest Postgres Minor Versions: automatically updated to include Postgres versions 16.2, 15.6, 14.11, 13.14, and 12.18.
- Postgres 16 Major Version Upgrade: test drive the newest features of PostgreSQL 16 with minimal disruption.
- AI Predictions in Real-Time: integrate machine learning predictions directly within your database with Azure_AI.
- New Monitoring Metric: monitor 'Database Size' for precise capacity planning and performance optimization.
Team Microsoft delivered impactful sessions and engaged with the community in Bengaluru at PGConf India. Check out our blog for a full rundown of March's updates and how they can empower your projects: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/march-2024-recap-azure-postgresql-flexible-server/ba-p/4107275
Events
Recent Blogs
- We're excited to announce a new database migration experience for SQL Server enabled by Azure Arc - now in public preview. This experience is designed to simplify and accelerate SQL Server migration ... Jul 17, 2025
- Introduction: Transactional replication is a powerful SQL Server feature used to copy and synchronize data and database objects across servers. It's commonly employed in high-throughput scenarios su... Jul 16, 2025