This solution is continuously improving. The change log can be found at https://github.com/MSEndpointMgr/IntuneEnhancedInventory
Since we released our solution to gather custom inventory from Intune-managed devices and send it to Log Analytics, we have received a lot of feedback, and many variants have popped up in the community. We love to see that the community both uses what we do and enriches it by developing their own variants. One of the main pieces of feedback has been that the original solution exposes the Log Analytics Workspace ID and shared key in the proactive remediation script locally on the clients. We have developed a solution for this and made it even more secure by using parts of the client verification features from CloudLAPS from Nickolaj.
The Azure Function
By introducing an Azure Function as our own custom “API”, we move the actual log injection away from the Proactive Remediation and over to the backend. This means we don’t need any information about the backend Azure Log Analytics workspace in the scripts running on our clients; all we need is the trigger URL for the Azure Function so we know where to send the payload. In addition, the Azure Function verifies that the request comes from an active client in your Azure AD before it allows any log data to be sent to Log Analytics.
As you can see above, if the DeviceID does not exist in Azure AD, the request fails and the call to the Azure Function comes back with HTTP status code Forbidden (403).
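Conceptually, the verification inside the function looks something like the sketch below. This is an illustration only: the parameter and property names are assumptions, the Graph token acquisition with the managed identity is omitted, and the real function code lives in the GitHub repo.

using namespace System.Net
param($Request, $TriggerMetadata)

# DeviceID sent by the client as part of the inventory payload (property name assumed)
$DeviceID = $Request.Body.AzureADDeviceID

# Look the device up in Azure AD via Microsoft Graph ($GraphToken acquired earlier with the managed identity)
$Uri = "https://graph.microsoft.com/v1.0/devices?`$filter=deviceId eq '$DeviceID'"
$Device = (Invoke-RestMethod -Uri $Uri -Headers @{ Authorization = "Bearer $GraphToken" } -Method Get).value

if (-not $Device) {
    # Unknown device - refuse to forward anything to Log Analytics
    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [HttpStatusCode]::Forbidden
        Body       = "Device not found in Azure AD"
    })
    return
}
# Device is known: forward the inventory payload to the Log Analytics HTTP Data Collector API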
The Proactive Remediation
We are using roughly the same script as before, with a few changes. We are no longer sending any data directly to Log Analytics, and we are adding a few bits to the payload for managing the client verification.
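As an illustration of the shape of that change, a client-side call could look like the sketch below; the payload property names are assumptions for illustration, and the script in the repo defines the actual schema:

# Illustrative payload - property names are assumptions, not the repo's exact schema
$Payload = [PSCustomObject]@{
    AzureADDeviceID = $AzureADDeviceID      # used by the Azure Function for client verification
    LogType         = "DeviceInventory"     # target custom log table
    LogPayload      = $DeviceInventoryJson  # the inventory data itself, as JSON
} | ConvertTo-Json -Depth 9

# Send the payload to the Azure Function instead of directly to Log Analytics
$Response = Invoke-WebRequest -Uri $AzureFunctionURL -Method "POST" -Body $Payload -ContentType "application/json" -UseBasicParsing
$Response.StatusCode   # 200 on success, 403 (Forbidden) if the device is unknown in Azure AD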
The whole setup
As the full solution requires a bit of setup, I have created a deployment template to help you get this set up easily. The full setup requires the following resources in Azure:
- Function App
- Application Insights
- App Service Plan
- Storage Account
- Key Vault
- Log Analytics Workspace (your current Intune diagnostics log workspace)
In addition to the resources that need to be set up, there is also a bit of configuration to be done.
As you can see, we are storing the Workspace ID and the shared key for your Log Analytics workspace inside a Key Vault. The function app’s managed system identity is given permission to get those secrets, so it has the information needed to inject logs into your workspace via the native HTTP Data Collector API.
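Inside the function, retrieving those secrets with the managed identity typically looks something like the sketch below; the KeyVaultName app setting and the secret names are assumptions here, not necessarily what the Bicep template deploys:

# Sign in as the function app's managed identity (Az modules assumed to be available)
Connect-AzAccount -Identity | Out-Null

# Secret names are illustrative - check the Key Vault created by the template for the real ones
$WorkspaceID = Get-AzKeyVaultSecret -VaultName $env:KeyVaultName -Name "LogAnalyticsWorkspaceId" -AsPlainText
$SharedKey   = Get-AzKeyVaultSecret -VaultName $env:KeyVaultName -Name "LogAnalyticsSharedKey" -AsPlainText

# The Workspace ID and shared key are then used to sign and post the payload to the
# HTTP Data Collector API: https://<WorkspaceID>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01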
All of this setup is included in our Bicep deployment template, which can be found in the GitHub repo MSEndpointMgr/IntuneEnhancedInventory, where you can simply click the Deploy to Azure button.
In this repo you will also find the updated Azure Function version of the Proactive Remediation script. To perform the setup with the template, you need to fill out some information:
Note the selection of the App Service Plan SKU. For larger environments, a consumption plan might not scale. Read more about the options here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale
Post-Deployment actions
After the solution has been deployed, take note of the output from the deployment.
The function app hostname from the output is the URL you need to insert into the Proactive Remediation script so it can talk to your function. Example below:
# Define your Azure Function URL
# Example: 'https://<appname>.azurewebsites.net/api/<functionname>'
$AzureFunctionURL = "https://funbiceptesting001.azurewebsites.net/api/mylogcollectorapi"
Then you must give the Azure Function’s managed system identity permission to talk to Azure AD via the Microsoft Graph API. To do that, run the Add-MSIGraphPermissions.ps1 script as a Global Administrator. This grants the managed identity permission to read all devices in Azure AD.
Change the variables $TenantID and $ServicePrincipalAppDisplayName to target your tenant and function app.
#Requires -Modules Microsoft.Graph
# Install the module. (You need admin on the machine.)
# Install-Module Microsoft.Graph

# Set Static Variables
$TenantID = ""
$ServicePrincipalAppDisplayName = ""
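For reference, an app role assignment of this kind done with the Microsoft Graph PowerShell module typically looks something like the sketch below; Device.Read.All is assumed as the role being granted, and Add-MSIGraphPermissions.ps1 in the repo remains the authoritative implementation:

# Connect with an account that can grant app roles (e.g. a Global Administrator)
Connect-MgGraph -TenantId $TenantID -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All"

# The Microsoft Graph service principal (well-known AppId) and the function app's managed identity
$GraphSP = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
$MSI     = Get-MgServicePrincipal -Filter "DisplayName eq '$ServicePrincipalAppDisplayName'"

# The application permission to grant - Device.Read.All is an assumption for this sketch
$AppRole = $GraphSP.AppRoles | Where-Object { $_.Value -eq "Device.Read.All" -and $_.AllowedMemberTypes -contains "Application" }

# Assign the app role to the managed identity
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MSI.Id -PrincipalId $MSI.Id -ResourceId $GraphSP.Id -AppRoleId $AppRole.Id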
Verify settings in Azure Active Directory – Enterprise Applications – Select Managed Identities
Scaling
As the Proactive Remediation typically runs on all clients at the same time, you might see scaling issues on a consumption plan if you have more than 4,000-5,000 clients. To work around this without moving to a more expensive plan, there is a built-in option in the Proactive Remediation script to randomize the runtime over 50 minutes. As Proactive Remediations have a hard-coded 60-minute timeout, this is the window we have to work with. To enable this, you need to uncomment the following lines in the script.
#Randomize over 50 minutes to spread load on Azure Function - disabled on date of enrollment
$JoinDate = Get-AzureADJoinDate
$DelayDate = $JoinDate.AddDays(1)
$CompareDate = ($DelayDate - $JoinDate)
if ($CompareDate.Days -ge 1){
    Write-Output "Randomizing execution time"
    #$ExecuteInSeconds = (Get-Random -Maximum 3000 -Minimum 1)
    #Start-Sleep -Seconds $ExecuteInSeconds
}
To avoid this causing issues during the provisioning phase (Autopilot), randomization is always disabled on the day of enrollment.
Pricing
With this you should be ready to test this upgraded and more secure solution. The introduction of the Azure Function adds a potential cost to the solution. The Azure Functions consumption plan is billed based on per-second resource consumption and executions. Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month per subscription in pay-as-you-go pricing, across all function apps in that subscription. Read more about pricing here: https://azure.microsoft.com/en-us/pricing/details/functions/#pricing
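As a rough, back-of-the-envelope illustration (assuming one inventory run per device per day and ignoring resource consumption): 10,000 devices × 30 days ≈ 300,000 function executions per month, which is well within the 1 million executions included in the monthly free grant.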
For devices with MANY apps installed, the payload string gets truncated to length 32766 in the log analytics table. Have you found a way to avoid this problem?
This is a limitation in the API on the payload size; the payload can be a maximum of 32 MB. You could rewrite the collecting PR script and split it up to send the data in batches. Sadly, I have no immediate plan to fix this on my side, but you can add it to the GitHub repo as a feature request.
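If you want to experiment with that approach, a rough sketch of splitting the app inventory into smaller batches before posting could look like the following; the batch size and variable names are assumptions, not part of the released script:

$BatchSize = 50
for ($i = 0; $i -lt $AppArray.Count; $i += $BatchSize) {
    # Take the next slice of the application inventory
    $Chunk = $AppArray[$i..([Math]::Min($i + $BatchSize, $AppArray.Count) - 1)]
    $Body  = $Chunk | ConvertTo-Json -Depth 5
    # Post each batch separately so no single payload hits the size limits
    Invoke-WebRequest -Uri $AzureFunctionURL -Method "POST" -Body $Body -ContentType "application/json" -UseBasicParsing | Out-Null
}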
With the end of support for runtime 3 (and therefore .NET 4) in Azure Function apps, is it safe to upgrade the function app?
If you upgrade to the latest version of this solution using the “Deploy to Azure” button, it will change to runtime 4. I have tested the latest version on both runtime 4 and PowerShell 7.2.
I am trying to deploy the template in my dev tenant and I receive an error. I have filled in all the fields as required, so I am not sure what I am missing.
{
  "code": "InvalidTemplateDeployment",
  "details": [
    {
      "message": "Object reference not set to an instance of an object."
    }
  ],
  "message": "The template deployment 'Microsoft.Template-20220826104811' is not valid according to the validation procedure."
}
Microsoft changed some validation of the templates; this was fixed in early September.
We saw this during MMS this year and have implemented it, but we’re having trouble understanding the consumption costs of the Function App. Does this essentially double our data ingestion costs because the data is being ingested twice?
Fantastic way to secure our PRs, regardless!
Azure Functions consumption plan is billed based on per-second resource consumption and executions. Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month per subscription in pay-as-you-go pricing across all function apps in that subscription.
When testing this Azure Function I get an error, “Device is not in my tenant”, as shown above. But all my devices are HAADJ in my tenant… When verifying the inbound DeviceID, it matches a device which is co-managed and listed in Azure AD. Is there something different when it comes to HAADJ devices?
There is a known issue with getting the correct device match for HAADJ devices in the released version. We are looking at it.
Hello Jan Ketil,
Thanks for this awesome post!
We would like to extend this inventory to gather Lenovo warranty information from WMI on each computer.
As I am kind of a newbie in this department, I am not sure how to start. Have you had a look at this?
We are taking inspiration from this blog post: https://thinkdeploy.blogspot.com/2021/06/collecting-and-storing-lenovo-warranty.html
We have not looked into warranty information yet, as many vendors do not have a public API for this.
I am running this script with the Azure Function deployed as described and it is successfully sending logs to my workspace. However, the invoke-custominventoryazurefunction script is returning an error at the end:
InventoryDate:11-02 16:35
Inventory:OK
DeviceInventory: 200 : Upload payload size is 3.4Kb
AppInventory: 200 : Upload payload size is 161.2Kb
DeviceInventory:Fail
AppInventory:Fail
How do I troubleshoot this? Is there any logging built in?
This is just a “bug” in the output of the script. From what you are saying here, everything is working as expected. This output bug will be fixed soon.