Proper application lifecycle management for Dataverse has always relied on pro-developer tools such as the PAC CLI, Azure DevOps, or PACX.

The application lifecycle schema may vary based on the project complexity, the number of development teams involved, the speed of the release cycles you want to achieve, and other factors. But regardless of the complexity, if you want control over your development stream, you need to store the following in a proper version control system:

  1. unpacked solution(s)
  2. source code of
    1. custom web resources
    2. PCFs you build for your project
    3. plugin packages / assemblies
  3. configuration data required for a proper environment setup
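As a sketch, a repo covering the three points above might be laid out like this (all folder and file names here are hypothetical, not prescribed by any tool):

```
repo-root/
├── solutions/
│   └── MyCoreSolution/             ← unpacked solution files (e.g. via `pac solution unpack`)
├── src/
│   ├── webresources/               ← custom web resources (JS/HTML/CSS sources)
│   ├── pcf/                        ← PCF control projects
│   └── plugins/                    ← plugin packages / assemblies
└── config/
    └── environment-settings.json   ← configuration data for environment setup
```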

Pro-devs are used to the typical routine of:

  1. coding locally (in Visual Studio / Visual Studio Code),
  2. unit-testing the stuff they've built
  3. manually deploying their changes to Dataverse
  4. system-testing the same stuff
  5. synchronizing locally (on the dev machine) the changes made to the solution they're working with
  6. committing everything to a given Azure DevOps / GitHub repo, in a dev or a feature\something branch, with a proper comment 😉
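Steps 5 and 6 of this routine are typically scripted with the PAC CLI and git. A minimal sketch, where the org URL, solution name, branch name, and paths are all hypothetical placeholders:

```shell
# Authenticate against the dev environment (URL is a placeholder)
pac auth create --url https://yourorg.crm.dynamics.com

# Export the unmanaged solution you're working on
pac solution export --name MySolution --path ./out --managed false

# Unpack the zip into source-control-friendly files
pac solution unpack --zipfile ./out/MySolution.zip --folder ./solutions/MySolution --packagetype Unmanaged

# Commit the synced customizations to a feature branch
git checkout -b feature/my-change
git add solutions/MySolution
git commit -m "Sync MySolution customizations from dev"
git push origin feature/my-change
```

This is exactly the manual loop that the new embedded source control feature aims to replace.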

Proper source control management directly embedded into our Dataverse instance is a long-awaited feature, aimed at streamlining solution versioning to facilitate #1 of the first list and #5 and #6 of the second.

As of now, the capability is still in preview; the official MS article describing it is dated 11/05/2024.

Now that the preview has finally arrived on my tenant, I'm looking forward to testing it out, comparing it with the way I manage that stuff today, and seeing the differences.

This is the first post of a series where I'll deep-dive into this topic and provide my POV, as usual. Let's start from the beginning...

You'll find the following icons to highlight important stuff:

  • 🟢 stuff that for me is really useful/powerful/best practice
  • ⚠️ stuff to be aware of, because it can have unwanted side effects
  • 🔴 stuff that should be avoided
  • 🚩 red flags, design decisions that IMHO should be reconsidered before going GA

🔧 Set up Git connection

There is an official doc on this topic, but I'll show you my experience.

Everything starts from https://make.powerapps.com: after selecting your Environment, go to Solutions, and you should see the following button on the command bar:

Once clicked, you can pick which connection strategy (type) you want to adopt:

The difference between the Environment and Solution types is explained in the official docs, but briefly:

  • 🔴 Environment is when you want to sync everything you do, regardless of how you do it. It's the no-brain/low-control approach, designed (I imagine) for basic users. Stay away from it as much as possible in professional or enterprise scenarios.
  • 🟢 Solution can be used to control which solutions to version, and where. This is the one to pick.

Once you choose Solution, you need to fill in where you want to store the customizations:

🚩 Dataverse automatically connects to Azure DevOps with your current credentials. This means that a "limitation" is that Azure DevOps must be on the same Azure tenant as Dataverse. I work in a consulting firm, and sometimes, when clients do not provide us with an Azure DevOps instance, we use an internal one to host versioning and pipeline management... in those cases we could not use this feature.

Organization, Project and Repository are terms that should be very familiar to anyone reading this, so I won't spend any words on them.

Root Git Folder, however, is the name of the folder inside your repo where all your solution files will be saved. If it doesn't exist, it will be created automatically on your first commit, so no worries.

Once you've filled in all the required info you can finally click on Next.

Now it's time to configure the first solution for commit.

You need to choose which solution to save, amongst all the unmanaged solutions available in your environment. Then you must select the branch in the repo that will contain all the commits made on the solution.

🚩 This is something I don't quite like. Typically, a proper ALM process requires generating a new branch for each feature developed, to properly manage merging and deployments. The fact that the branch is strictly tied to the solution, and cannot be changed, forces you to always use the same branch for all your customizations. It's quite a limitation; you need to figure out how to make it work properly in enterprise-scale scenarios.

⚠️ The last piece of info to provide is the Git folder. At first it may seem irrelevant but... how you decide to manage the versioning of your solutions in your repo affects your ability to track and evolve your solutions' organization over time. And this cannot be changed later.

My suggestion (valid @ 03/02/2025) is: even if you have several solutions to version, always pick the same root folder chosen in the previous step. It may seem odd but, as mentioned in the official docs,

The system allows for multiple solutions to use a single root folder location and keeps track of which components belong to each solution in a separate file.

I tried both approaches, one folder per solution and all solutions in the same folder, and the first approach led to a lot of duplicated folders in your repo. More info on this in a dedicated post.
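To illustrate the difference between the two approaches (folder and file names here are hypothetical sketches, not the actual ones generated by the feature):

```
# One folder per solution: shared components get duplicated
git-root/
├── SolutionA/
│   └── WebResources/shared_script.js
└── SolutionB/
    └── WebResources/shared_script.js     ← same component, stored twice

# Single root folder: components stored once, membership tracked
# per solution in a separate file (file names are hypothetical)
git-root/
├── WebResources/shared_script.js
├── SolutionA.components.xml
└── SolutionB.components.xml
```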

Once you have everything set up, click on Connect. A confirmation dialog will show up:

Click Continue and the connection process starts:

Once ready you'll see a green icon in the Source control status column of your solutions grid.

If you now enter your solution, the navbar will display a new item called Source control (Preview):

By clicking on it you'll access the page where you can manage your commits. We'll deep-dive into it in the next post. For now, a few aspects to notice:

  1. 🟢 After the configuration the system may take a while to identify all the changes to commit. A warning message appears (image below) telling that "Source control components are being processed in the background". You need to wait until the process completes before being able to perform your first commit.

  2. ⚠️ Not all solution component types are currently managed by the feature. At the moment there is no official list of the supported component types in the docs.

🤔 Conclusions

We've seen how to configure the binding between a Dataverse environment and an Azure DevOps Git repo, and the stuff to be aware of in the process.

The configuration is pretty straightforward, with a few things to plan in advance to get the most out of this new capability.

Stay tuned for the next posts, where we'll deep-dive into the commit process and into a few considerations on the overall application lifecycle management scenario enabled by this new feature.

📚 References

Author of article: Riccardo Gregori