Since we released our solution for gathering custom inventory from Intune-managed devices and sending it to Log Analytics, we have received a lot of feedback, and many variants have popped up in the community. We love to see the community both use what we build and enrich it with their own variants. One of the main pieces of feedback has been that the original solution exposes the Log Analytics Workspace ID and Shared Key in the proactive remediation script locally on the clients. We have now addressed this and made the solution even more secure by reusing parts of the client verification features from CloudLAPS by Nickolaj.
The Azure Function
By introducing an Azure Function as our own custom “API”, we moved the actual log ingestion away from the Proactive Remediation and over to the backend. This means we don’t need any information about the backend Azure Log Analytics workspace in the scripts running on our clients. All we need is the trigger URL for the Azure Function so we know where to send the payload. In addition, the Azure Function verifies that the request comes from an active client in your Azure AD before it allows any log data to be sent to Log Analytics.
As you can see above, if the device ID does not exist in Azure AD, the request fails and the call to the Azure Function returns HTTP status code Forbidden.
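To make the flow a bit more concrete, the snippet below is a simplified sketch of what that verification step can look like inside a PowerShell Azure Function. It is not the exact code from our function; the payload property name and the pre-acquired Graph token are assumptions.

# Simplified sketch of the device check inside the Azure Function (PowerShell).
# Assumes $Request is the HTTP trigger binding and $GraphToken was acquired
# earlier in the function using the managed identity.
using namespace System.Net
param($Request, $TriggerMetadata)

$DeviceID = $Request.Body.AzureADDeviceID   # property name is an assumption

# Look the device up in Azure AD via Microsoft Graph
$Uri = "https://graph.microsoft.com/v1.0/devices?`$filter=deviceId eq '$DeviceID'"
$Device = (Invoke-RestMethod -Uri $Uri -Headers @{ Authorization = "Bearer $GraphToken" }).value

if (-not $Device) {
    # Unknown device - reject the request before anything reaches Log Analytics
    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [HttpStatusCode]::Forbidden
        Body       = "Device not found in Azure AD"
    })
    return
}

# Device is known - continue and forward the payload to the HTTP Data Collector API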
The Proactive Remediation
We are using roughly the same script as before, with a few changes. We are no longer sending any data directly to Log Analytics, and we are adding a few bits to the payload to handle the client verification.
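Conceptually, the client-side change boils down to posting the inventory as JSON to the function URL together with the identifiers the function needs for verification, instead of signing and sending it to the Data Collector API yourself. A minimal sketch, where the property names are illustrative rather than the exact ones used in the script:

# Illustrative sketch only - see the Azure Function version of the script in the repo for the real payload
$AzureFunctionURL = "https://<appname>.azurewebsites.net/api/<functionname>"

$Payload = [PSCustomObject]@{
    AzureADDeviceID = $AzureADDeviceID   # used by the function for client verification
    LogType         = "DeviceInventory"  # name of the custom log table (assumed)
    LogPayload      = $DeviceInventory   # the inventory object collected by the script
} | ConvertTo-Json -Depth 9

# No Workspace ID or Shared Key anywhere on the client
$Response = Invoke-RestMethod -Method Post -Uri $AzureFunctionURL -Body $Payload -ContentType "application/json"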
The whole setup
As the full solution requires a bit of setup, I have created a deployment template that will help you get it up and running easily. The full setup requires the following resources in Azure:
- Function App
- Application Insights
- App Service Plan
- Storage Account
- Key Vault
- Log Analytics Workspace (your existing Intune diagnostics log workspace)
In addition to setting up these resources, there is also a bit of configuration to be done.
As you can see, we are storing the Workspace ID and the Shared Key for your Log Analytics workspace inside a Key Vault. The Function App’s managed system identity is given permission to read those secrets, so the function has the information it needs to inject logs into your workspace via the native HTTP Data Collector API.
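Inside the function, fetching those secrets with the managed identity can be as simple as the sketch below. The vault and secret names are assumptions and depend on what the deployment template created; the same result can also be achieved with Key Vault references in the Function App settings.

# Sketch: read the workspace secrets with the function's managed identity (Az modules)
Connect-AzAccount -Identity | Out-Null   # sign in as the managed identity

$WorkspaceID = Get-AzKeyVaultSecret -VaultName "kv-intuneinventory" -Name "WorkspaceID" -AsPlainText
$SharedKey   = Get-AzKeyVaultSecret -VaultName "kv-intuneinventory" -Name "SharedKey" -AsPlainText

# $WorkspaceID and $SharedKey are then used to build the signature for the
# HTTP Data Collector API call against your Log Analytics workspace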
All of this setup is included in our Bicep deployment template, which can be found in the GitHub repo MSEndpointMgr/IntuneEnhancedInventory, where you can simply click the Deploy to Azure button.
In this repo you will also find the updated Azure Function version of the Proactive Remediation script. To perform the setup with the template you need to fill out some information:
Note the selection of the App Service Plan SKU. For larger environments, a consumption plan might not scale. Read more about the options here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale
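If you would rather deploy the template from the command line than use the Deploy to Azure button, something along these lines should work with Azure PowerShell. The resource group, location and parameter names below are assumptions; check the template in the repo for the real parameter names.

# Sketch: deploy the Bicep template with Azure PowerShell (requires the Bicep CLI on the machine)
New-AzResourceGroup -Name "rg-intune-inventory" -Location "westeurope"

New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-intune-inventory" `
    -TemplateFile ".\main.bicep" `
    -TemplateParameterObject @{
        # parameter names here are placeholders
        FunctionAppName = "funintuneinventory001"
    }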
Post-Deployment actions
After the solution has been deployed, take note of the output from the deployment.

The function app hostname is the URL you need to insert into the Proactive Remediation script so it can talk to your function. Example below:
# Define your Azure Function URL:
# Example 'https://<appname>.azurewebsites.net/api/<functionname>'
$AzureFunctionURL = "https://funbiceptesting001.azurewebsites.net/api/mylogcollectorapi"
Next, you must give the Azure Function’s managed system identity permission to talk to Azure AD via the Microsoft Graph API. To do that, run the Add-MSIGraphPermissions.ps1 script as a Global Administrator. This grants the managed identity permission to read all devices in Azure AD.
Change the variables $TenantID and $ServicePrincipalAppDisplayName to target your tenant and function app.
#Requires -Modules Microsoft.Graph
# Install the module (you need admin on the machine):
# Install-Module Microsoft.Graph

# Set static variables
$TenantID = ""
$ServicePrincipalAppDisplayName = ""
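A permission grant like this typically boils down to assigning a Graph application role such as Device.Read.All (the permission that allows reading all devices) to the function app’s managed identity. The sketch below shows roughly how that is done with the Microsoft.Graph module, but use the actual Add-MSIGraphPermissions.ps1 from the repo for the real run.

# Rough sketch of the permission grant - not a replacement for Add-MSIGraphPermissions.ps1
Connect-MgGraph -TenantId $TenantID -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All"

# The Microsoft Graph service principal and the app role we want to assign
$GraphSP = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"
$AppRole = $GraphSP.AppRoles | Where-Object { $_.Value -eq "Device.Read.All" }

# The function app's managed identity service principal
$MSI = Get-MgServicePrincipal -Filter "displayName eq '$ServicePrincipalAppDisplayName'"

# Assign the app role to the managed identity
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MSI.Id `
    -PrincipalId $MSI.Id -ResourceId $GraphSP.Id -AppRoleId $AppRole.Id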
Verify settings in Azure Active Directory – Enterprise Applications – Select Managed Identities
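If you prefer PowerShell over the portal for this check, listing the app role assignments on the managed identity should show the Graph permission (the display name variable is the same assumption as in the snippet above):

# Sketch: list the app roles assigned to the function's managed identity
$MSI = Get-MgServicePrincipal -Filter "displayName eq '$ServicePrincipalAppDisplayName'"
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MSI.Id |
    Select-Object PrincipalDisplayName, ResourceDisplayName, AppRoleId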
Scaling
As the Proactive Remediation typically runs on all clients at the same time, you might see scaling issues on a consumption plan if you have more than 4,000-5,000 clients. To work around this without moving to a more expensive plan, there is a built-in option in the Proactive Remediation script to randomize the runtime over 50 minutes. As Proactive Remediations have a hard-coded 60-minute timeout, this is the window we have to work with. To enable the randomization, uncomment the following lines in the script.
# Randomize over 50 minutes to spread the load on the Azure Function - disabled on the date of enrollment
$JoinDate = Get-AzureADJoinDate
$DelayDate = $JoinDate.AddDays(1)
$CompareDate = ((Get-Date) - $DelayDate)
# Only randomize once at least a full day has passed since enrollment
if ($CompareDate.TotalDays -ge 0) {
    Write-Output "Randomizing execution time"
    #$ExecuteInSeconds = (Get-Random -Maximum 3000 -Minimum 1)
    #Start-Sleep -Seconds $ExecuteInSeconds
}
To avoid this causing issues during the provisioning phase (Autopilot), randomization is always disabled on the day of enrollment.
Pricing
With this you should be ready to test this upgraded and more secure solution. The introduction of the Azure Function adds a potential cost to the solution. The Azure Functions consumption plan is billed based on per-second resource consumption and number of executions. Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per subscription in pay-as-you-go pricing, across all function apps in that subscription. Read more about pricing here: https://azure.microsoft.com/en-us/pricing/details/functions/#pricing
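As a rough illustration (these numbers are assumptions, not a quote): 5,000 clients each reporting inventory once a day generate about 5,000 × 30 = 150,000 function executions per month, well inside the 1 million free executions. Whether you also stay inside the 400,000 GB-s compute grant depends on the memory and duration of each execution, so keep an eye on the consumption metrics on the function app after rollout.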
I am running this script with the Azure Function deployed as described, and it is successfully sending logs to my workspace. However, the Invoke-CustomInventoryAzureFunction script is returning an error at the end:
InventoryDate:11-02 16:35
Inventory:OK
DeviceInventory: 200 : Upload payload size is 3.4Kb
AppInventory: 200 : Upload payload size is 161.2Kb
DeviceInventory:Fail
AppInventory:Fail
How do I troubleshoot this? Is there any logging built in?
This is just a “bug” in the output of the script. From what you are saying here, everything is working as expected. This output bug will be fixed soon.