Microsoft's New Fabric Azure DevOps Extension: Promising Wrapper, Limited Value Today
A walkthrough of what works, what doesn't, and whether you actually need it

Principal Architect | Microsoft Fabric Expert | Data & AI Enthusiast
With over 15 years of experience in Data and BI, I specialize in Microsoft Fabric, helping organizations build scalable data platforms with cutting-edge technologies. As a Principal Architect at twoday, I focus on automating data workflows, optimizing CI/CD pipelines, and leveraging Fabric REST APIs to drive efficiency and innovation.
I share my insights and knowledge through my blog, Peer Insights, where I explore how to leverage Microsoft Fabric REST APIs to automate platform management, manage CI/CD pipelines, and kickstart Fabric journeys.
Microsoft recently released a new Azure DevOps extension for Microsoft Fabric, bringing Fabric CLI usage directly into ADO Pipelines. The promise is simple: install the extension, drop a FabricCLI@0 task into your YAML, and the Fabric CLI is automatically provisioned in the pipeline runtime. No manual pip install, no version pinning gymnastics. Just write your fab commands and go.
That's the promise. The reality, at the time of writing, takes a bit more work to get to — and once you're there, it's worth asking what the extension actually buys you.
In this post I'll walk through what the extension provides, the bumps I hit getting it to actually work, the to-go pipeline that does work, and the question I think is worth asking honestly: does the extension currently give you anything you can't do in a few lines with AzureCLI@2 and a pip install?
👉 You can find the extension's source code on GitHub: microsoft/ms-fabric-azure-devops-extensions.
👉 Install the extension from the Azure DevOps Marketplace.
👉 The launch announcement is here: Fabric CLI in Azure DevOps: automation without friction (Preview).
What the extension provides on paper
The extension adds a single task FabricCLI@0 to your ADO pipeline. The task auto-provisions the fab CLI at the version you pin, supports inline scripts or external script files, and works with PowerShell, PowerShell Core, Bash, and Batch. The README highlights:
- Zero-setup Fabric CLI: the task installs fab for you
- Multi-platform scripting: ps, pscore, bash, batch
- Inline or file-based scripts
- Workspace management, item deployment, Git integration, deployment pipelines: anything fab can do
- Version-pinned CLI for reproducible builds
The expected pattern from the README is essentially this:
- task: FabricCLITask@0
  displayName: 'Create Fabric Workspace'
  env:
    FAB_SPN_CLIENT_ID: $(FAB_SPN_CLIENT_ID)
    FAB_TENANT_ID: $(FAB_TENANT_ID)
    FAB_SPN_FEDERATED_TOKEN: $(FEDERATED_TOKEN)
  inputs:
    scriptType: inlineScript
    FabricCLIVersion: V1.5.0
    inlineScript: |
      fab mkdir "MyWorkspace.Workspace" -P capacityname=$FAB_CAPACITY_NAME
Clean, declarative, and on the surface a reasonable wrapper for the CLI.
Use cases the extension targets
The set of scenarios the extension is aimed at is genuinely valuable:
- Workspace provisioning: Create and configure Fabric workspaces, assign them to capacities, set permissions
- Item deployment: Publish notebooks, lakehouses, semantic models, pipelines and more directly from your repo
- Git integration: Connect workspaces to ADO Repos, commit, sync from branches
- Deployment pipelines: Promote items across stages (Dev → Test → Prod) from your CI/CD
- End-to-end Fabric CI/CD without managing the CLI installation in every pipeline
If you've been doing Fabric automation with raw REST API calls, or with pip install ms-fabric-cli scattered across pipelines, the goal of this extension is to remove that ceremony.
The reality: three gaps between docs and code
Working through the documented examples, I hit three issues that prevented the extension from working out of the box.
Issue 1: The task name in the docs doesn't match what's installed
The launch blog post tells you to use FabricCLI@1. The README in the source repo tells you to use FabricCLITask@0. The task that actually gets installed, once the extension is added to your ADO organization, is FabricCLI@0. Three sources of truth, three different names, and only one works.
⚠️ If you get "A task is missing. The pipeline references a task called 'FabricCLITask'" (or similar), it's almost certainly because you copied the YAML from the README. Use FabricCLI@0.
That's a documentation issue, not a code issue, but it costs you the first 10 minutes of trying the extension on a pipeline that simply won't validate.
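Until the docs catch up, a cheap pre-flight check is to grep your repo's YAML for the two names that don't resolve before ADO ever validates the pipeline. A minimal sketch — the sample file it scans is purely illustrative:

```shell
# Hypothetical pre-flight check: flag the task names from the blog post and
# README (FabricCLI@1, FabricCLITask@0) that do not resolve in ADO.
tmp=$(mktemp -d)
cat > "$tmp/azure-pipelines.yml" <<'EOF'
steps:
  - task: FabricCLITask@0   # copied from the README - will not validate
EOF
if grep -rqE --include='*.yml' 'FabricCLITask@0|FabricCLI@1' "$tmp"; then
  echo "stale task name found - use FabricCLI@0"
fi
rm -rf "$tmp"
```

Wire the same grep into a lint step and the mismatch costs you seconds instead of a failed pipeline run.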
Issue 2: The pscore script type generates a malformed pwsh command line
This one is more serious. Some of the samples use scriptLanguage: pscore, and with that setting the task constructs a PowerShell Core command line that looks like this in the agent log:
"C:\Program Files\PowerShell\7\pwsh.exe" -NoLogo -NoProfile -NonInteractive "-ExecutionPolicy Unrestricted" -Command ". 'D:\a\_temp\fabricclitask_xxx.ps1'"
Notice "-ExecutionPolicy Unrestricted". The flag and its value are wrapped in a single set of quotes, so pwsh receives them as one argument with an embedded space. pwsh rejects this with:
The argument '-ExecutionPolicy Unrestricted' is not recognized as the name of a script file.
##[error]Script execution failed with exit code 64
Your inlineScript never runs. The task fails before reaching any of your code. I reproduced this on both ubuntu-latest and windows-latest, with task version 0.1.0.
There's no YAML workaround for this. The broken argv is constructed inside the task's wrapper code. The fix is to avoid the pscore code path entirely by using scriptLanguage: bash (on Linux agents).
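The failure mode is easier to see outside the task: in any shell, quoting a flag and its value together produces a single argv element with an embedded space, which is exactly what pwsh is choking on. A tiny bash illustration:

```shell
# count_args prints how many arguments it receives
count_args() { echo "$#"; }

count_args "-ExecutionPolicy Unrestricted"   # quoted together: prints 1
count_args -ExecutionPolicy Unrestricted     # separate tokens: prints 2
```

The wrapper builds the first form; pwsh needs the second, so it sees one unrecognized "argument" instead of a flag plus a value.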
Issue 3: The keyring problem on hosted Linux agents
Switching to bash gets your script running, but introduces a different error:
x [EncryptionFailed] An error occurred with the encrypted cache.
Enable plaintext auth token fallback with 'config set encryption_fallback_enabled true'
The Fabric CLI uses the OS keyring (libsecret on Linux) to encrypt cached auth tokens. Microsoft-hosted Linux agents don't have a running keyring service, so the encrypted-cache call fails. The error message helpfully tells you the fix: enable the plaintext fallback.
The fix is to run the corresponding fab config set commands as the very first lines of your inlineScript, before any other fab command:
inlineScript: |
  fab config set encryption_fallback_enabled true
  fab config set context_persistence_enabled false
  fab ls
Both settings matter on ephemeral CI agents. encryption_fallback_enabled lets fab store tokens unencrypted when no keyring is available. context_persistence_enabled=false skips trying to persist navigation context, which is meaningless across one-shot pipeline runs.
⚠️ This config is safe on hosted ephemeral agents. They're destroyed after each run and the federated token is short-lived anyway. Don't enable plaintext fallback on a self-hosted persistent agent without considering the implications.
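If you want to verify the root cause yourself, you can probe the agent for a Secret Service before fab ever runs. A hedged sketch — it assumes a D-Bus based Linux agent with busctl available, and org.freedesktop.secrets is the standard Secret Service bus name that libsecret looks for:

```shell
# Probe for the freedesktop Secret Service that libsecret (and therefore
# fab's encrypted token cache) depends on. Microsoft-hosted Linux agents
# typically have no such service running.
if command -v busctl >/dev/null 2>&1 \
   && busctl --user status org.freedesktop.secrets >/dev/null 2>&1; then
  echo "keyring available - encrypted cache should work"
else
  echo "no keyring - enable encryption_fallback_enabled"
fi
```

Dropping this into a diagnostic step tells you up front whether a given agent image needs the plaintext fallback.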
A small detour: authentication via Workload Identity Federation
Before showing the working pipeline, a quick note on authentication. The whole point of the modern setup is to avoid storing client secrets in your variable groups. We use Workload Identity Federation (WIF) so that ADO can mint a short-lived OIDC token for each pipeline run, which Entra trusts because of a federated credential on the SPN's app registration.
Setting that up the first time has its own learning curve. And yes, you may run into AADSTS700213: No matching federated identity record found errors along the way if the issuer/subject in Entra doesn't match exactly what ADO is presenting. The error message itself tells you the exact subject ADO sent; copy-paste that into the federated credential in Entra (don't retype) and verify will pass.
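For reference, the subject identifier ADO presents for a service connection follows a fixed, documented shape, so you can sanity-check what you typed into Entra before the first run. A sketch with placeholder values — the org, project, and connection names below are assumptions you must replace:

```shell
# The federated credential in Entra must match these exactly
# (case-sensitive, no trailing whitespace). Placeholder values.
ADO_ORG="myorg"
ADO_PROJECT="MyProject"
SERVICE_CONNECTION="FabricServiceConnection"

echo "subject: sc://$ADO_ORG/$ADO_PROJECT/$SERVICE_CONNECTION"
# prints: subject: sc://myorg/MyProject/FabricServiceConnection
# the issuer is https://vstoken.dev.azure.com/<organization-id> (a GUID, not the org name)
```

The GUID issuer is a common trip-up: it's your ADO organization ID, not the organization's display name.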
🔗 For the full federated setup, the official docs are here: Configure workload identity federation in Azure DevOps
⚠️ One thing the docs don't make obvious: federation working is not the same as the SPN being allowed to do anything in Fabric. Once auth succeeds, you'll still need:
- Tenant setting "Service principals can use Fabric APIs" enabled (Fabric admin portal → Tenant settings → Developer settings), scoped to a security group containing your SPN
- Tenant setting "Service principals can create workspaces, connections, and deployment pipelines" enabled if you want to provision workspaces
- Your SPN added as a Capacity Admin or Contributor on the target Fabric capacity
You will find additional information in the documentation of the Developer Tenant settings.
If you skip these, the extension will authenticate beautifully and then return [Unauthorized] on every command. Authentication and authorization are separate problems; the extension only solves the first.
The to-go pipeline that actually works
Here's the pipeline that finally got fab ls to return real data after a fair amount of trial and error. Two tasks: one to fetch the federated token via WIF, one to run fab with the right config.
trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: AzureCLI@2
    displayName: 'Get federated token'
    inputs:
      azureSubscription: 'FabricServiceConnection'
      scriptType: bash
      scriptLocation: inlineScript
      addSpnToEnvironment: true
      inlineScript: |
        echo "##vso[task.setvariable variable=FAB_SPN_FEDERATED_TOKEN;issecret=true]$idToken"
        echo "##vso[task.setvariable variable=FAB_SPN_CLIENT_ID]$servicePrincipalId"
        echo "##vso[task.setvariable variable=FAB_TENANT_ID]$tenantId"

  - task: FabricCLI@0
    displayName: 'List Fabric Workspace'
    env:
      FAB_SPN_CLIENT_ID: $(FAB_SPN_CLIENT_ID)
      FAB_TENANT_ID: $(FAB_TENANT_ID)
      FAB_SPN_FEDERATED_TOKEN: $(FAB_SPN_FEDERATED_TOKEN)
    inputs:
      scriptType: inlineScript
      scriptLanguage: bash
      FabricCLIVersion: 'v1.5.0'
      inlineScript: |
        fab config set encryption_fallback_enabled true
        fab config set context_persistence_enabled false
        fab ls
A few things worth highlighting:
- addSpnToEnvironment: true on AzureCLI@2 is what exposes $idToken, $servicePrincipalId, and $tenantId from your WIF service connection
- scriptLanguage: bash sidesteps the pscore argument-quoting bug
- The fab config set lines are the very first thing in inlineScript. They have to run before any fab command that touches auth or cache state
- The FAB_SPN_* env vars on the task pass the federated token and SPN identity into the fab process
If you're targeting multiple Fabric tasks in one pipeline, my recommendation is to wrap each Fabric-using task in its own AzureCLI@2 step (or fetch a fresh token per task). The OIDC idToken from ADO is short-lived (~10 minutes), so a long-running pipeline that fetches once and reuses can hit token expiry mid-run.
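If you want to see how much lifetime a token actually has left, you can decode the exp claim from the JWT's middle segment. A self-contained sketch using a fabricated token — a real $idToken has the same three-part dot-separated shape and decodes the same way:

```shell
# Build a fake JWT for illustration; only the payload segment matters here.
payload='{"exp":1700000000}'
b64=$(printf '%s' "$payload" | base64 | tr -d '=\n')
token="header.$b64.sig"

# Decode the middle (payload) segment: convert base64url to base64,
# then re-add the padding that JWTs strip.
mid=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#mid} % 4 )) in 2) mid="$mid==" ;; 3) mid="$mid=" ;; esac
printf '%s\n' "$mid" | base64 -d
# prints: {"exp":1700000000}
```

Compare the decoded exp (a Unix timestamp) against date +%s and you know whether a fetch-once-reuse pattern will survive your pipeline's runtime.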
The question worth asking: is the extension actually adding value?
This is where I want to be honest. The extension's headline benefit is "zero-setup Fabric CLI." Look at what that's worth:
pip install --quiet --user ms-fabric-cli==1.5.0
export PATH="$HOME/.local/bin:$PATH"
Two lines. That's the "setup" the extension is saving you.
Here's the same end-to-end workflow done with vanilla AzureCLI@2, no extension needed:
trigger: none

parameters:
  - name: workspaceName
    type: string
    default: 'MyExtensionCreatedWorkspace'

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: AzureCLI@2
    displayName: 'Run Fabric CLI'
    inputs:
      azureSubscription: 'FabricServiceConnection'
      scriptType: bash
      scriptLocation: inlineScript
      addSpnToEnvironment: true
      inlineScript: |
        set -e
        pip install --quiet --user ms-fabric-cli==1.5.0
        export PATH="$HOME/.local/bin:$PATH"
        export FAB_SPN_CLIENT_ID="$servicePrincipalId"
        export FAB_TENANT_ID="$tenantId"
        export FAB_SPN_FEDERATED_TOKEN="$idToken"
        fab config set encryption_fallback_enabled true
        fab config set context_persistence_enabled false
        fab auth login
        fab ls
Single task. No extension required. No org-level install. No marketplace dependency. No bugs to work around.
Compared to the extension version:
| Aspect | Extension (FabricCLI@0) | Plain AzureCLI@2 + pip install |
|---|---|---|
| Tasks needed | 2 (token fetch + Fabric CLI) | 1 |
| Lines of YAML | ~30 | ~30 |
| Org admin install | Required | Not required |
| fab auto-installed | ✅ | One-line pip install |
| Authentication style | Identical (WIF) | Identical (WIF) |
As it stands, the extension is essentially a thin wrapper around the CLI. Its value-add today comes down to running pip install ms-fabric-cli for you, and that's about it. Everything else (authentication, configuration, and the actual fab commands) is still something you write yourself.
Verdict
The extension is a reasonable starting point and clearly the right direction for first-class Fabric CI/CD in Azure DevOps. But in its current preview state, the official approach costs you more setup time than it saves, and the documentation is materially out of sync with the shipped code. As I see it today, there's no real value in using the extension over a plain AzureCLI@2 + pip install step. It's a thin wrapper around the CLI, and right now it's a wrapper that adds friction rather than removing it.
If you want to try it today, here's my honest recommendation:
1. Use the to-go pipeline above. It works. Use bash, not pscore.
2. File issues on the repo. 👉 microsoft/ms-fabric-azure-devops-extensions — Issues. The maintainers genuinely need to hear about the pscore argument bug, the keyring/wrapper ordering issue, and the doc/code mismatches. Constructive feedback now is how preview features improve.
3. Don't feel bad about using AzureCLI@2 + pip install instead. It's not a hack. It's the same authentication, the same CLI, the same fab commands. Until the extension provides meaningful value beyond running pip install for you, the plain approach is simpler, has fewer moving parts, and doesn't depend on a marketplace install your org admin has to manage.
4. Or go further with a custom build accelerator. If you're doing serious Fabric automation, a thin task wrapper is rarely where the real productivity gains come from. Purpose-built helpers, recipe files, and reusable scripts that encode your team's conventions are what actually move the needle. 👉 That's the approach I take in my own FabricOps repo, where the goal is to give you opinionated building blocks for Fabric CI/CD rather than another way to invoke fab.
I'll update this post (or write a follow-up) once the extension hits a state where it earns its place over the plain approach. The potential is there. The execution just needs another iteration or two.
Until then... automate everything, automate it smartly, and don't be afraid to skip a tool that isn't ready yet 😉

