devops
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure.
Lately, n8n has been gaining serious traction in the automation world, and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table, like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support.

In this post, I'll walk you through how to get your own n8n instance up and running on Azure, from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our Resource Group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs, for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Proceed to configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and, most importantly, select the region closest to you or your clients.

Service Plan configuration. Here, you should select the plan based on your specific needs. Keep in mind that even though this is a PaaS offering, underlying compute resources such as CPU and RAM are still being consumed, so size the plan for the expected workload. Just as importantly, consider the features offered by each tier, such as redundancy, backup, autoscaling, and custom domains. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and proceed to fill in the required fields: the first being the server, and the second the image name. This step is very important; use the exact same values to avoid issues.

In the Networking section, we will select the values as shown in the image. This configuration will depend on your specific use case, particularly whether or not to enable Virtual Network (VNet) integration. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network.
Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers; this is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'.

Finally, click on 'Create' and wait for the deployment process to complete.

Now we will stop our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section, then select 'Environment Variables'. Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables in Azure Web Apps work the same way as they do outside of Azure: they let you configure the application's behavior without modifying its code. In this case, we will add the variables required for n8n to operate properly (a command-line sketch with typical values appears at the end of this post). Note: the variable APP_SERVICE_STORAGE should only be modified by setting it to true.

Once the environment variables have been added, proceed to save them by clicking 'Apply' and confirming the changes. A confirmation dialog will appear to finalize the operation.

Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service.

I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out; I'd be happy to help.
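For reference, here is a minimal command-line sketch of the kind of app settings the walkthrough configures in the portal. The post does not list the exact variables, so the names below are common n8n settings plus standard App Service settings and should be treated as assumptions to adapt, not as the author's exact values; the 'APP_SERVICE_STORAGE' note above most likely refers to the App Service setting WEBSITES_ENABLE_APP_SERVICE_STORAGE. The app name is a placeholder.

# Sketch only: typical n8n settings on App Service, applied with the Az PowerShell module.
# Note: Set-AzWebApp replaces the whole app-settings collection, so include any existing
# settings you want to keep. "<your-app-name>" is a hypothetical placeholder.
$appName  = "<your-app-name>"
$settings = @{
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "true"    # the 'APP_SERVICE_STORAGE' value mentioned above
    "WEBSITES_PORT"                       = "5678"    # n8n listens on 5678 by default
    "N8N_PROTOCOL"                        = "https"
    "N8N_HOST"                            = "$appName.azurewebsites.net"
    "WEBHOOK_URL"                         = "https://$appName.azurewebsites.net/"
    "GENERIC_TIMEZONE"                    = "UTC"
}
Set-AzWebApp -ResourceGroupName "n8n-rg" -Name $appName -AppSettings $settings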
In Azure DevOps, how to view all child work items for a dependent feature?

I have an Epic with a few features. In one of the features, I have a user story that has a related link to other features. Is it possible to see all the features and their child user stories, tasks, and bugs that are open for all the associated features?

Epic -> Feature 1 -> User stories -> Tasks, Bugs
Epic -> Feature 2 -> User stories -> Tasks, Bugs
Epic -> Feature 3 -> User stories -> Tasks, Bugs
Epic -> Feature 4 -> Special User story (with relation to Feature 1, 2) -> Tasks, Bugs

In the view, I want to see all the features that are associated with the Special User story and their children:

Epic -> Feature 1 -> User stories, Tasks, Bugs
Epic -> Feature 2 -> User stories, Tasks, Bugs
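Not from the original question, but as one hedged building block: a WIQL link query can return everything under a given Epic or Feature via parent/child links. It does not by itself follow the 'Related' links from the Special User story, so it would need to be combined with a direct-links query or the query editor in the UI. The organization URL, project, and work item ID below are hypothetical, and $cred is assumed to hold a PSCredential built from a personal access token.

# Sketch: submit a recursive hierarchy query to the WIQL REST endpoint.
$org     = "https://dev.azure.com/<your-org>"
$project = "<your-project>"
$wiql = @{
    query = @"
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItemLinks
WHERE ([Source].[System.Id] = 1234)
  AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward')
  AND ([Target].[System.State] <> 'Closed')
MODE (Recursive)
"@
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "$org/$project/_apis/wit/wiql?api-version=7.0" `
    -Credential $cred -Body $wiql -ContentType 'application/json'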
Azure Pipeline extension - doesn't give any error nor able to show dynamic dropdown value

I have created an extension, pushed it to the Marketplace, and then used it in my org; all of this went smoothly. But when I started building a pipeline, at the task step, when I choose my extension, it populates the field that has pre-defined options. When it comes to the dynamic field, however, it says "Not Found", i.e. it is empty.

Details: the custom step has 3 fields.
Category - Cars [predefined option list]
Color - Blue [predefined option list]
Car List - this uses the endpoint https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json

task.json:

{
  "id": "d9bafed3-2b89-4a4e-89b8-21a3d8a4f1d3",
  "name": "TestExecutor",
  "friendlyName": "Execute ",
  "description": "Executes",
  "helpMarkDown": "",
  "category": "Test",
  "author": "s4legen",
  "version": { "Major": 5, "Minor": 4, "Patch": 0 },
  "instanceNameFormat": "Execute Test Suite $(carlist)",
  "inputs": [
    {
      "name": "category",
      "type": "pickList",
      "defaultValue": "Web",
      "label": "Category",
      "required": true,
      "helpMarkDown": "Select the ",
      "options": { "mobile": "car", "web": "truck", "api": "Plan" }
    },
    {
      "name": "Color",
      "type": "pickList",
      "defaultValue": "Blue",
      "label": "color",
      "required": true,
      "helpMarkDown": "Select the ",
      "options": { "nonProd": "Blue", "prod": "Red" }
    },
    {
      "name": "Carlist",
      "type": "pickList",
      "defaultValue": "BMWX7",
      "label": "Carlist",
      "required": true,
      "helpMarkDown": "Select the list to execute",
      "properties": { "EditableOptions": "true", "Selectable": "true", "Id": "CarInput" },
      "loadOptions": {
        "endpointUrl": "https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json",
        "resultSelector": "jsonpath:$[*]",
        "itemPattern": "{ \"value\": \"{value}\", \"displayValue\": \"{displayValue}\" }"
      }
    }
  ],
  "execution": { "Node16": { "target": "index.js" } },
  "messages": { "TestSuiteLoadFailed": "Failed to load test from endpoint. Using default options." }
}

index.js:

const tl = require('azure-pipelines-task-lib/task');
const axios = require('axios');

const TEST_ENDPOINT = 'https://gist.githubusercontent.com/Satyabsai/b3970e2c3d229de2c70f1def3007ccfc/raw/02dc6f7979a83287adcb6eeecddb5575bef3516e/data.json';

async function getValue(field) {
  if (field === 'Carlist') {
    try {
      const response = await axios.get(TEST_ENDPOINT, { timeout: 5000 });
      return {
        options: response.data.map(item => ({
          value: item.value,
          displayValue: item.displayValue
        })),
        properties: { "id": "CarlistDropdown" }
      };
    } catch (error) {
      tl.warning(tl.loc('TestLoadFailed'));
    }
  }
  return null;
}

async function run() {
  try {
    const color = tl.getInput('color', true);
    const category = tl.getInput('category', true);
    const carlist = tl.getInput('Carlist', true);
    const result = await axios.post(tl.getInput('clicQaEndpoint'), { testSuite, category, environment }, { timeout: 10000 });
    tl.setResult(tl.TaskResult.Succeeded, `Execution ID: ${result.data.executionId}`);
  } catch (err) {
    tl.setResult(tl.TaskResult.Failed, err.message);
  }
}

module.exports = { run, getValue };

Can someone tell me what JSON response is acceptable by Azure to populate the dropdown dynamically when the source is an API?
Collaborative Function App Development Using Repo Branches

In this example, I demonstrate a Windows-based Function App using PowerShell, with deployment via Azure DevOps (ADO) and a Bicep template. Local development is done in VSCode.

Scenario: Your Function App project resides in a shared repository maintained by a team. Each developer works on a separate branch. Whenever a branch is updated, the Function App is deployed to a slot named after that branch. If the slot doesn't exist, it will be automatically created.

How to use it:

1. Create a Function App. You can create a Function App using any method of your choice.

2. Prepare a corresponding repo in Azure DevOps. Set up your repo structure for the Function App source code.

3. Create Function App code using the VSCode wizard. In this example, we use PowerShell and create an anonymous HTTP trigger. Then, we manually add three additional files. The resulting directory structure looks like this:

deploy.yml

trigger:
  branches:
    include:
      - '*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: '<YOUR_CONNECTION_STRING_FROM_ADO>'
  functionAppName: '<YOUR_FUNCTION_APP_NAME>'
  resourceGroup: '<YOUR_RG_NAME>'
  location: '<YOUR_LOCATION_NAME>'

steps:
  - checkout: self

  - task: AzureCLI@2
    name: DeploySlotInfra
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying production infrastructure"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy-master.bicep \
            --parameters functionAppName=$(functionAppName) location=$(location)
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying slot: $SLOT_NAME"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy.bicep \
            --parameters functionAppName=$(functionAppName) slotName=$SLOT_NAME location=$(location)
        fi

  - task: ArchiveFiles@2
    displayName: 'Package Function App as ZIP'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/'
      includeRootFolder: false
      archiveType: zip
      archiveFile: '$(Build.ArtifactStagingDirectory)/functionapp.zip'
      replaceExistingArchive: true

  - task: AzureCLI@2
    name: ZipDeploy
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying code to production"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying code to slot: $SLOT_NAME"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --slot $SLOT_NAME \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        fi

Please replace all <YOUR_XXX> placeholders with values relevant to your environment. Additionally, update the two instances of "master" to match your repo's default branch name (e.g., main), as updates from this branch will always deploy to the production slot.
deploy-master.bicep

@description('Function App Name')
param functionAppName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource appSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionApp
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}

deploy.bicep

@description('Function App Name')
param functionAppName string

@description('Slot Name (e.g., dev, test, feature-xxx)')
param slotName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource functionSlot 'Microsoft.Web/sites/slots@2022-09-01' = {
  name: slotName
  parent: functionApp
  location: location
  properties: {
    serverFarmId: functionApp.properties.serverFarmId
  }
}

resource slotAppSettings 'Microsoft.Web/sites/slots/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionSlot
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}

4. Deploy from the master branch. Once deployed, the HTTP trigger becomes active in the production slot and can be accessed via:
https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

5. Switch to a custom branch like member1 and create a test HTTP trigger. After publishing, a new deployment slot named member1 will be created (if it does not already exist). You can open it in the Azure Portal and view its dedicated interface. The branch-specific HTTP trigger will now work at the following URL:
https://<FUNCTION_APP_NAME>-<BRANCH_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

Notice: Using deployment slots for collaborative development is subject to slot count and SKU limits. For example, the Premium SKU supports up to 20 slots. See the Azure subscription and service limits, quotas, and constraints - Azure Resource Manager | Microsoft Learn page for details. If you need to delete a slot after use, you can do so using PowerShell with the Remove-AzWebAppSlot command (Remove-AzWebAppSlot (Az.Websites) | Microsoft Learn), as sketched below.
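For example, removing the slot created for the member1 branch could look like this; the resource group and Function App names are placeholders to adapt, not values from the post.

# Deletes the deployment slot that was created for the 'member1' branch.
Remove-AzWebAppSlot -ResourceGroupName "<YOUR_RG_NAME>" -Name "<YOUR_FUNCTION_APP_NAME>" -Slot "member1"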
SSL certificate problem while doing Git pull from Azure DevOps Repos

We are using a proxy server that does SSL inspection of traffic and thus replaces the certificate with one that it issues in the process. That certificate is issued by the certificate authority on the proxy itself. This is fairly common with modern proxies. But users are getting the following error while doing a Git pull:

git pull
fatal: unable to access 'https://ausgov.visualstudio.com/Project/_git/Repo': SSL certificate problem: self-signed certificate in certificate chain

Do I need to import the proxy CA issuing certificate somewhere in the DevOps portal to resolve this, or does the SSL inspection need to be removed? Has anybody got it to work with proxy inspection still turned on?
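Not part of the original question, but for context: the usual client-side approach in this situation is to make Git trust the proxy's root CA rather than disabling verification. A minimal sketch, assuming the CA certificate has been exported to a local PEM file; the path below is hypothetical.

# Assumes the proxy's root CA has been exported as a Base64/PEM file at this (hypothetical) path.
# Git will then validate server certificates against that CA bundle instead of rejecting the chain.
git config --global http.sslCAInfo "C:\certs\proxy-root-ca.pem"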
Has anyone here integrated JIRA with Azure DevOps
We are currently using Azure Pipelines for our deployment process and Azure Boards to track issues and tickets. However, our company recently decided to move the ticketing system to JIRA, and I have been tasked with integrating JIRA with Azure DevOps. If you have done something similar, I would appreciate any guidance, best practices, or things to watch out for.
Azure DevOps REST API - tag DeploymentGroups' target

Hello everyone, I am trying to set up a function in PowerShell to set tags on specific targets of a deployment group, and for that I am using this documentation page: https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/targets/update?view=azure-devops-rest-7.0&tabs=HTTP#request-body

I created the request body as described in the page, like below:

{
  "id": 541,
  "tags": [
    "tag1-backendWithDb",
    "tag1-backendWithDb-active-node",
    "tag2-backendWithDb-database",
    "tag2-backendWithDb",
    "tag2-backendWithDb-active-node",
    "tag3-blazor",
    "tag3-blazor-active-node",
    "tag4-yarp",
    "tag4-yarp-active-node"
  ]
}

Then I run the following command:

Invoke-RestMethod -Method Patch -Uri "$baseurl/distributedtask/deploymentgroups/$($DGid)/targets?api-version=6.0-preview.1" -Credential $cred -Body ($body | ConvertTo-Json) -ContentType 'Application/json'

But then I get an error like this:

Invoke-RestMethod: {
  "$id": "1",
  "innerException": null,
  "message": "Value cannot be null.\r\nParameter name: machinesToUpdate",
  "typeName": "System.ArgumentNullException, mscorlib",
  "typeKey": "ArgumentNullException",
  "errorCode": 0,
  "eventId": 0
}

The problem is that the documentation does not specify any parameter named 'machinesToUpdate'. What is it that I am missing here?
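Not from the original thread, but one detail worth checking, stated here as an assumption: the Targets - Update endpoint documents its request body as a list of target update objects, and ConvertTo-Json unwraps a single-element array when it arrives via the pipeline, so the service may be receiving a bare object where it expects an array, which would explain the null 'machinesToUpdate' collection. A sketch of forcing an array-shaped payload, reusing the $baseurl, $DGid, and $cred variables from the snippet above (the tag list is trimmed for brevity):

# Wrap the update in an array and serialize it with -InputObject so the outer
# array survives even with a single element (piping would unwrap it in PowerShell 5.1).
$body = @(
    @{
        id   = 541
        tags = @(
            "tag1-backendWithDb",
            "tag1-backendWithDb-active-node",
            "tag2-backendWithDb"
        )
    }
)
$json = ConvertTo-Json -InputObject $body -Depth 5
Invoke-RestMethod -Method Patch `
    -Uri "$baseurl/distributedtask/deploymentgroups/$($DGid)/targets?api-version=6.0-preview.1" `
    -Credential $cred -Body $json -ContentType 'application/json'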