ConfigMgr – Mike's Tech Blog

Nomad Software Update LsZ File Purge aka whack-a-mole


In short, there is an issue in Configuration Manager CB 1806 where an additional file (*express.cab) gets included with some software updates. This causes Nomad to generate a LsZ file that contains download instructions for two files as seen below:

; Format=0003 (UNICODE) with alt hash
; Package Version = 1       .
; Data Format = 0   (ORIGINAL)
; Generated by "CM02" from "D:\SCCMContentLib bc72d1d6-2283-4df9-bfe5-7f1d7f0e5ca8". 11/13/2018 21:17:11(UTC) 0x1d47b963e1d6877
; BlockSize=00008000
; RDC data generation was enabled
0: "bc72d1d6-2283-4df9-bfe5-7f1d7f0e5ca8_1.LsZ" ln=0x0 dt=0
1: "Windows10.0-KB4467686-x64-express.cab" ln=0x1879155 dt=1d47b85de243723 CRC=50E7B98AAA4EC667600179BFB237E3A441E52F7B8FCD8397C77DC86DF0535581
; Large Hash 11/13/2018 21:17:17
; 11/13/2018 21:17:17..
; Large Hash complete 11/13/2018 21:17:29
785: "Windows10.0-KB4467686-x64.cab" ln=0x36EF89D2 dt=1d47b857f42088b CRC=FDAC6287883BB398EFA9BBBBB3D5DB307B2EB501F5E7680440571F02D0124A7A
; 947329831 bytes in 2 Files (blks=28913)
; LastModifiedTime 0x1d47b85de243723 11/13/2018 19:19:58(UTC)
; Hash B8EA224C75FCBEB721408B332FEB253FEF26C264
; HashV4 CB7188D42C59B4469BABFA321972604E49134D2A3A5BF3F50D3D5C82DC0EF939
; List Complete - 1

This CB 1806 issue is described in KB4465865:

For software updates that contain express installation files, Configuration Manager synchronizes the Express.cab file and distributes it to the client. This behavior occurs even if the client does not require the Express.cab file for a given deployment.

This behavior is mainly cosmetic. It does not involve the complete set of express installation files, only the Express.cab file.

Note Installing the hotfix will stop the unconditional download attempts for express CAB files for new updates. After the hotfix is installed, downloading will occur only if express installation file support is enabled for Windows 10 updates.

This will work fine until either this hotfix is applied or Configuration Manager is upgraded to a newer version like 1810. The old LsZ files will not automatically get updated, and since Nomad peer-to-peers these files, it is difficult to get rid of the bad ones. The problem arises when a peer needs a patch and is running the newer version of CM. It will request the patch and corresponding LsZ file and could potentially get a bad LsZ file (one that contains two files instead of one) from a peer. Nomad cannot be configured to always pull LsZ files directly from the DPs and not peers, so only the first client in a subnet will download the LsZ from the DP. At this point, as far as Nomad is concerned, it has been instructed to download two files as part of the patch. Once Nomad hands the job back to the CM Client, the CM Client fails the hash check since it gets two files back when it is now only expecting one.

Thus, there are two things that need to be done. The first is to ‘clean’ the ‘bad’ LsZ files from all clients in order to force clients that need the content to pull the LsZ file from the DP. The second is to ‘clean’ the ‘bad’ LsZ files from all of the DPs after they have been upgraded with the hotfix listed above or to CB 1810.

For the client cleanup, we can use a CI/Baseline to perform this action. Since Nomad may not be on all systems, create an Application type CI called “Nomad Software Update LsZ Purge” and use the following PowerShell script for the Detection Method:

<#
.SYNOPSIS
Confirm if NomadBranch is 6.x
.DESCRIPTION
Detection logic for a Configuration Item, specifically for the Nomad 6.x version
.NOTES
2019-1-3 Mike Terrill
#>
$ErrorActionPreference = 'SilentlyContinue'
$key = 'HKLM:\SOFTWARE\1E\NomadBranch'
if ( ((Get-ItemProperty -path $key -Name ProductVersion).ProductVersion).Substring(0,2) -eq '6.') {
    write-host ((Get-ItemProperty -path $key -Name ProductVersion).ProductVersion).Substring(0,3)
}

For the Settings, use the same name (Nomad Software Update LsZ Purge) and for the Discovery script, use the following PowerShell script:

$NomadCachePath = Get-ItemPropertyValue -Path 'HKLM:\SOFTWARE\1E\NomadBranch' -Name 'LocalCachePath'
$LsZFiles = Get-ChildItem $NomadCachePath*.LsZ -Name
$SoftwareUpdatesLsZFiles = $LsZFiles | Select-String -Pattern '^[\dA-F]{8}-(?:[\dA-F]{4}-){3}[\dA-F]{12}_1\.LsZ$'
If ($SoftwareUpdatesLsZFiles.count -ge 1) {
    Write-Host "Non-compliant"
}
ElseIf ($SoftwareUpdatesLsZFiles.count -eq 0) {
    Write-Host "Compliant"
}

Since this only applies to the LsZ files for Software Updates, we only need to delete these LsZ files and not any others. Since Software Updates start with a GUID, we can filter on them using the pattern above (thanks to Keith Garner and Gary Blok). Packages, OS Upgrade Packages, OS Images, Boot Images, etc. will show up as XXXYYYYY_Z (where XXXYYYYY is the site code and package ID from where the content was generated and Z is the version). Application DTs will be in the form of Content_GUID_Z.
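As a quick illustration of how the pattern separates these (the first file name is from the LsZ listing above; the other two are made-up examples of the Package and Application DT forms):

$Pattern = '^[\dA-F]{8}-(?:[\dA-F]{4}-){3}[\dA-F]{12}_1\.LsZ$'
'bc72d1d6-2283-4df9-bfe5-7f1d7f0e5ca8_1.LsZ' -match $Pattern          # True  - Software Update
'PS100005_3.LsZ' -match $Pattern                                      # False - Package (SiteCode+PackageID_Version)
'Content_bc72d1d6-2283-4df9-bfe5-7f1d7f0e5ca8_1.LsZ' -match $Pattern  # False - Application DT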

For the Remediation script, use the following PowerShell script:

$NomadCachePath = Get-ItemPropertyValue -Path 'HKLM:\SOFTWARE\1E\NomadBranch' -Name 'LocalCachePath'
$LsZFiles = Get-ChildItem $NomadCachePath*.LsZ -Name
$SoftwareUpdatesLsZFiles = $LsZFiles | Select-String -Pattern '^[\dA-F]{8}-(?:[\dA-F]{4}-){3}[\dA-F]{12}_1\.LsZ$'

ForEach ($SoftwareUpdatesLSZFile in $SoftwareUpdatesLSZFiles)
{
    Write-Host "Deleting: $NomadCachePath$SoftwareUpdatesLSZFile" -ForegroundColor Red
    Remove-Item $NomadCachePath$SoftwareUpdatesLSZFile -Force
}

And for the Compliance Rule, use “The value returned by the specified script equals Compliant” and select “Run the specified remediation script when this setting is noncompliant”.

Add this to a Baseline as Optional and deploy it to a set of test clients to make sure it is operating as intended and deleting the correct LsZ files. When a client requests the LsZ from a peer, it will get a file not found error, cycle through a few retry attempts and then disqualify that peer before going on to the next. It may take a while to completely rid the environment of the “bad” LsZ files, hence why I refer to it as “Whack-a-mole”: as soon as you get one batch cleaned, another pops up.

The second thing that needs to be done is to clean up the “bad” LsZ files from the DPs once they have been upgraded to the hotfix or CB 1810. This can be done by running the following PowerShell script on each DP:

$NomadCachePath = Get-ItemPropertyValue -Path 'HKLM:\Software\1E\NomadBranch' -Name 'LocalCachePath'
$NomadDPLSZFILESPath = $NomadCachePath + "LSZFILES\"
$LsZFiles = Get-ChildItem $NomadDPLSZFILESPath*.LsZ -Name
$SoftwareUpdatesLsZFiles = $LsZFiles | Select-String -Pattern '^[\dA-F]{8}-(?:[\dA-F]{4}-){3}[\dA-F]{12}_1\.LsZ$'

ForEach ($SoftwareUpdatesLSZFile in $SoftwareUpdatesLSZFiles)
{
    Write-Host "Deleting: $NomadDPLSZFILESPath$SoftwareUpdatesLSZFile" -ForegroundColor Red
    Remove-Item $NomadDPLSZFILESPath$SoftwareUpdatesLSZFile -Force
}

Once this is done, any new requests coming in *should* result in Nomad generating a new LsZ file for the Software Update that only contains the single file and not one that has the express.cab file included. Otherwise, contact the vendor if you continue to have issues.

Originally posted on https://miketerrill.net/


1 WDS PXE Server and Boot Images from multiple ConfigMgr Sites


Have you ever had (or wanted) the need to PXE boot from different Configuration Manager sites? Maybe your test machines are all on the same network and can talk to your ConfigMgr lab site, your ConfigMgr Technical Preview site, or your production ConfigMgr site. Heck, maybe you want to PXE boot from stand alone MDT as well from the same WDS PXE server. Well if you want this ability, keep on reading. If you are content with booting off USB sticks, then you can save some time and go get ready to update all your keys with the next ADK WinPE release (oh, and don’t forget to label them).

In my home lab I have a non-domain joined stand alone MDT server that also has WDS installed. I use this MDT when I need to prep an existing device for Autopilot (BTW – Per Larsen has a great blog on how to do this called How to deploy Autopilot device fast with MDT). I also have a few other environments that I run – one is in my Contoso domain and the other one is in my ViaMonstra domain (thanks Johan!). Depending on what I am doing, sometimes I will want to test OSD out in one or the other, as I usually upgrade one of them during the fast ring and the other one once CB hits the slow ring. But having multiple PXE servers responding on the same network is much like mixing cats and dogs. That’s when I got a light bulb moment – ‘I can just import the Boot Image from each site into my WDS server and life will be great’. Except there was one little thing that I was forgetting about…when you PXE boot from a ConfigMgr DP, there is some black magic that happens. The initial TS environment variables (the things that tell the client what site to contact, certificate information, etc.) get tftp’ed from the ConfigMgr PXE DP. Since my stand alone WDS server has no idea what was being asked of it (since it knows nothing of ConfigMgr), the client never gets a response on how to proceed as you can see below:

Luckily I have dabbled a bit with PXE booting (yeah – I was P2P PXE booting before P2P PXE booting was cool) and know a few tricks on how to make things work the way we want it to work. The first thing you are going to need to do is Create Task Sequence Media from one of your ConfigMgr sites (you are thinking – ‘but I thought we are PXE booting’, relax, we are and this is where the trick comes in). Start Create Task Sequence Media Wizard and select the Bootable media option. Run through the wizard selecting the options you would normally choose for your environment and select the media type to be ISO.

Copy the boot.wim to a working directory on your WDS server. If you are using the default boot image in ConfigMgr, you can find the x64 version under \Program Files\Microsoft Configuration Manager\OSD\boot\x64. Be sure to copy the boot.{PackageID}.wim file (in this example it would be boot.PS100005.wim) as this is the one that will have all the ConfigMgr binaries. You can rename it if you like when you copy it (I usually just rename it to Site_xxxx-yyyy_x64.wim but this is optional).

 

On my WDS/MDT server, I created a directory called Boot Images and I created a sub folder for each of my labs to stay organized. I like to version my boot images by creating a subdirectory that corresponds to the production client version of the site and the ADK WinPE version of the boot image. For example, 1810-1809 means that my production client version is 1810 and the ADK WinPE version is 1809. Now I have my ViaMonstra_1810-1809_x64.wim (boot wim) and my ViaMonstra_1810-1809_x64.iso (boot media ISO) in the following directory.

Now mount the ISO and navigate to the SMS\data directory. Copy TsmBootstrap.ini and Variables.dat back into the working directory. NOTE: You could just mount the ISO and inject them into the boot image on the fly, but I like having the files outside of the ISO.

 

Open an elevated command prompt and mount the boot_x64.wim to a temporary mount directory (I created one called D:\mount):
dism /mount-wim /wimfile:"D:\Boot Images\ViaMonstra\1810-1809\boot_x64.wim" /index:1 /mountdir:d:\mount

 

Create a directory called data under sms and copy both TsmBootstrap.ini and Variables.dat to this directory:
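For reference, the equivalent PowerShell commands (paths assume the working directory and the D:\mount mount point used above):

New-Item -Path 'D:\mount\sms\data' -ItemType Directory -Force
Copy-Item 'D:\Boot Images\ViaMonstra\1810-1809\TsmBootstrap.ini' -Destination 'D:\mount\sms\data'
Copy-Item 'D:\Boot Images\ViaMonstra\1810-1809\Variables.dat' -Destination 'D:\mount\sms\data'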

Exit out of the directory and unmount the wim:
dism /unmount-wim /mountdir:d:\mount /commit

 

Add the Boot Image into WDS and give it a meaningful name and description:

 

Optionally, change the Priority to control the display order and default selection:

PXE boot a test client:

and you should see a happily PXE booted system that is ready for some OSD action.

Now it is time to head over to 2Pint Software and checkout how you can incorporate BranchCache into your boot images using their free OSD Toolkit.

Originally posted on https://miketerrill.net/

How to easily launch CMTrace


Over the years I have gotten in the habit of dropping CMTrace.exe into the System32 directory so that it is in the path and easy to launch. I had also been adding it to WinPE since before it was called CMTrace. In Configuration Manager Current Branch 1802, the ConfigMgr Team granted one of my UserVoice items and starting in that release it is automatically added to WinPE and can be launched from the command prompt:

Manage boot images with Configuration Manager

In Configuration Manager Current Branch 1806, the ConfigMgr Team started installing CMTrace with the Configuration Manager client:

CMTrace

Unfortunately, the %WinDir%\CCM directory is not in the path, so hitting the Windows key and typing CMTrace does not launch it. Either the path needs to be fully qualified or it has to be launched by finding it in Windows Explorer. Instead of adding %WinDir%\CCM to the path, or copying CMTrace to %WinDir%\System32, I had a better idea – how about just creating an NTFS hard link in %WinDir%\System32 to the original CMTrace.exe (in %WinDir%\CCM). An NTFS hard link is just another pointer to content that is already on the disk elsewhere. This can be done using either the command line utility called fsutil or by the PowerShell cmdlet New-Item with the -ItemType HardLink parameter. Since it is easy to use PowerShell in a Configuration Item, this makes it really easy.
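Either way it is a one-liner; a sketch of both (run from an elevated prompt, and note that hard links only work within the same volume):

# Using the built-in fsutil utility
fsutil hardlink create C:\Windows\System32\CMTrace.exe C:\Windows\CCM\CMTrace.exe
# Or the PowerShell equivalent
New-Item -Path C:\Windows\System32\CMTrace.exe -ItemType HardLink -Value C:\Windows\CCM\CMTrace.exe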

I was originally going to show this at the MMSMOA 2019 Tips and Tricks session, but I wanted to give others a chance to get up on stage and showcase their tips for a chance to win a top of the line Surface Book 2 (plus, they said MVPs were not eligible to win). After getting home I was going to create a quick blog, but then got to questioning the robustness of my original solution. I figured I would give it to my colleague Gary Blok (who is a great bug finder) and he would find something wrong with it. So I improved it a bit to account for a few more scenarios that I could think of, including the case where another version had already been copied to the %WinDir%\System32 directory.

Download the CI here: CMTrace – System32.cab

Create a new Operating System CI:

Create a new Setting:

Add the Discovery Script:

$source = "C:\Windows\CCM\CMTrace.exe"
$target = "C:\Windows\System32\CMTrace.exe"

If (!(Test-Path $target)) {
    Write-Output "Non-compliant"
    }
Elseif ((Get-FileHash $source).hash -ne (Get-FileHash $target).hash) {
    Write-Output "Non-compliant"
    }
Else {Write-Output "Compliant"}

Add the Remediation Script:

$source = "C:\Windows\CCM\CMTrace.exe"
$target = "C:\Windows\System32\CMTrace.exe"

If (!(Test-Path $target)) {
    New-Item -Path $target -ItemType HardLink -Value $source -Force
    }
Elseif ((Get-FileHash $source).hash -ne (Get-FileHash $target).hash) {
    Remove-Item $target -Force
    New-Item -Path $target -ItemType HardLink -Value $source -Force
    }

Add the Compliance Rule:

Create a Baseline and add the CI. Deploy it to machines or a User/User Group. Once it is run, the results should look something like this:

 

We can see that this is hard linked to the CMTrace.exe in the %WinDir%\CCM directory by running the following command:
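One command that does this is fsutil’s hard link listing:

fsutil hardlink list C:\Windows\System32\CMTrace.exe

It should return both \Windows\CCM\CMTrace.exe and \Windows\System32\CMTrace.exe, confirming the two names point at the same content.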

No more hunting to run CMTrace, just WinKey + cmtrace + Enter.

Originally posted on https://miketerrill.net/

AZSMUG: Presents Intune, RBAC, Graph, Autopilot and more…


On Monday, May 13th, AZSMUG held their Q2 meeting of 2019. It was broadcast and recorded using Microsoft Teams. A big thanks to the sponsors, Microsoft and HP, Inc. for making this a great day and to all of our speakers listed below.

AZSMUG and co-sponsors Microsoft and Hewlett-Packard are putting together a half day of expert sessions focusing on Microsoft Intune, Graph API, Autopilot, PC management and much, much more.

11:00 – 11:30 Welcome
11:30 – 12:00 Lunch
12:00 – 2:00 Intune, Graph, RBAC, Autopilot (Part 1)
2:00 – 2:05 Break
2:05 – 3:00 HP (Part 2)
3:00 – 3:05 Break
3:05 – 4:00 PowerShell Deployment Toolkit (Part 3)
4:00 – 4:05 Break
4:05 – 4:55 Automating Application Packaging and Deployment (Part 4)
4:55 – 5:00 Closing

“Leveraging HP’s OEM Image validation in your environment with HP’s Client Manageability tools”, Nathan Kofahl
This session will be primarily focused on BIOS and driver management best practices as demonstrated through demos of:
•HP Image Assistant
•The MIK plug-in for SCCM
•The HP Client Management Script Library

Automating application packaging and deployment in ConfigMgr, Andrew Jimenez
Learn how the SCCM Application Packager downloads, prepares, packages and deploys the latest versions of popular software, with a focus on the ConfigMgr cmdlets that make the entire process possible. After, we’ll discuss one method of deploying these applications during OSD.

Ramya Chitrakar, Group Engineering Manager in Intune, Microsoft
Ramya has been at Microsoft since 2006 and worked as a developer for several years on SCCM – software updates and application deployment – before moving to Intune. On Intune she worked as a developer on Hybrid MDM and Conditional Access, after which she became a manager handling Intune UI, Graph API, Enrollment, Autopilot, Reporting, iOS DEP and Intune for Edu, among others. She is passionate about constant learning and a growth mindset, and has a crazy passion for good cars!

Jon Andes, Software Developer, Microsoft
Jon is a Software Developer in Intune for Autopilot on the Windows client. He’s worked at Microsoft since 2009, starting out in Windows security, then joining a small Windows team to build apps for Edu before finally moving to Autopilot in Intune. In his free time, he enjoys skiing, biking, and playing guitar.

Josh Stetson, Senior Software Engineer, Microsoft
Josh is a Senior Software Engineer in Intune, specializing in Graph API and Role Based Access Control. He has worked at Microsoft since 2011, starting on Intune while it was still a small service specializing in the basic PC management needs of small to medium businesses and has enjoyed watching the service grow to the market leader it has become today. In his free time, Josh focuses on photography, still shooting film from time to time, and working with and restoring vintage computers from the mainframe and minicomputer days.

Ciaran Murphy, Senior Software Engineer, Microsoft
Ciaran is a Senior Software Engineering Manager in Intune. His team owns Intune’s integration with Graph (i.e. RBAC, Auditing, Intune Powershell SDK, etc.) and all the services which back the autopilot scenario. He has been working for Intune since 2013 when he relocated from Ireland and originally specialized in the conditional access space, but transferred about a year ago to his current team. Ciaran’s passions are rugby, skiing and reading. He is also attempting to prove that all it requires to run a marathon is stubbornness.

Nathan Kofahl, Manageability Lead, HP
As the Manageability lead for HP’s US Technical Escalations team, Nathan’s role is to act as an escalation point for HP’s US Presales teams. He spends his time communicating best practices to help customers eliminate deployment and manageability roadblocks. Additionally he is responsible for driving innovation into HP’s product portfolio based upon interaction, insights, and feedback gained from his time with HP’s customers.

Mikael Nystrom, Microsoft MVP and Principal Technical Architect, TrueSec
Mikael is a very popular instructor and is often used by Microsoft for partner trainings as well as to speak at major conferences such as TechEd, MMS, Ignite, etc. Lately Mikael has been deeply engaged in the development of Windows 10 and Windows Server 2016 as part of the TAP. Mikael works as a Consultant, Speaker and Trainer with the world as his backyard.

Andrew Jimenez, Systems Analyst Senior, Arizona State University
Andrew Jimenez is a Systems Analyst Senior at Arizona State University, where he helps manage a ConfigMgr environment of over 20,000 clients. He has honed his skills in OSD, PowerShell, packaging, and automation in higher education environments over the last 9 years.

Sponsors:

Microsoft
Microsoft enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

HP
About Us
Our vision is to create technology that makes life better for everyone, everywhere — every person, every organization, and every community around the globe. This motivates us — inspires us — to do what we do. To make what we make. To invent, and to reinvent. To engineer experiences that amaze. We won’t stop pushing ahead, because you won’t stop pushing ahead. You’re reinventing how you work. How you play. How you live. With our technology, you’ll reinvent your world.
This is our calling. This is a new HP.

Videos:

Intune, Graph, RBAC, Autopilot (Part 1)

HP (Part 2)

PowerShell Deployment Toolkit (Part 3)

Automating Application Packaging and Deployment (Part 4)

Originally posted on https://miketerrill.net/

Getting Network Efficiency from Data Deduplication and BranchCache


December 8, 2019

Moving large amounts of content across a diverse network environment happens to be one of my specialties, one that I have focused the better part of my career on over the last 12+ years. Basically it boils down to how to get the least amount of data from point A to point Z, across several networks, in the most efficient manner possible, in the shortest amount of time, all without clobbering the network and the systems that are touched. If you have done any kind of operating system deployment in the last decade, then you know operating systems and related content are getting larger, not smaller, and the network links are not increasing at the same pace. This is not your typical party conversation topic, so if you are bored already then feel free to head back to Twitter, Facebook or whatever you like doing. Otherwise, feel free to stick around and maybe this will give some inspiration or help you in some way with your network struggles.

I have been meaning to write about this topic for a while, since I presented on this at MMSMOA last May with my good friend Andreas Hammarskjold. However, as some of you that follow me on Twitter might know, I have taken up an interest in Tesla lately and have been spending my extra time outside of work racing and tweaking my new Model 3 Performance, which is an awesome car BTW and runs 11.5s in the quarter mile (shameless plug – referral link in my Twitter profile). Plus, my colleague Gary Blok is much younger and tends to beat me when it comes to blogging anything WaaS/IPU lately. I’ll say something like ‘man, this is awesome, we should totally blog this’, only to wake up the next morning to find out he did it that night (I don’t think he sleeps).

This is going to focus on driver packages, however, it can be used for any type of content (like large software packages). At MMSMOA, I used Dell driver packages as the example, but I want to show that the same efficiencies can be gained using HP (or any vendor). I am not going to spend any time on what data deduplication is or how it works, or how to set up and use BranchCache, however these are two of the main technologies that make this all possible.

We have been working on a driver package strategy lately and we will probably try to present on it at MMSMOA 2020. The basic concept with all of our IPU and OSD content is that we have a production (Prod) version and a pre-production (Pre-prod) version. This way we can test and certify any changes before they get pushed to production. For driver packages, we are looking at getting even more efficiencies since these are one of the larger types of content on the network. In my example, I will be taking two different driver package versions for the HP Elitebook 840 G3. For our production package we will use SP93541 and for the pre-production package we will use SP96613.

Prod expanded size: 1.93 GB
Pre-prod expanded size: 1.45 GB

If you are using straight up CM and you sent both of these packages, you would be using 3.38 GB of network bandwidth. So let’s see what happens if we zip them with the PowerShell command Compress-Archive and use Optimal compression:
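For reference, a sketch of that zip step (source and destination paths are placeholders):

Compress-Archive -Path 'D:\Drivers\840G3-Prod\*' -DestinationPath 'D:\Drivers\840G3-Prod.zip' -CompressionLevel Optimal
Compress-Archive -Path 'D:\Drivers\840G3-PreProd\*' -DestinationPath 'D:\Drivers\840G3-PreProd.zip' -CompressionLevel Optimal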

Prod zip (optimal): 810 MB
Pre-prod zip (optimal): 624 MB

Combined they are now only 1.40 GB – a 1.98 GB savings! Now the big question is, can we get even more efficiency from BranchCache and data deduplication? Absolutely!! However, the first thing to point out is that not all zip compression algorithms are dedup friendly as the really aggressive ones scramble and compress the content as much as possible (like Software Updates).

Step 1 – Download the Prod Zipped Driver Package

For this test, I have created a simple Package-Program that does nothing more than cache the content (cmd /c). Prior to starting the test, I flushed the BranchCache cache and the CCM Cache.

Step 2 – Get the BITS job ID and monitor with BCMon

The BITS job ID can be found using the deprecated BITSADMIN:

Or by using the PowerShell command Get-BitsTransfer and then run the free BCMon utility from 2Pint Software passing it the BITS Job ID:
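For reference, either of these will list the job IDs (run them elevated so the CM client’s jobs, which run as SYSTEM, are visible):

# Deprecated, but still works
bitsadmin /list /allusers
# PowerShell equivalent
Get-BitsTransfer -AllUsers | Select-Object JobId, DisplayName, BytesTransferred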

Here we can see that we got 786.65 MB from the server and 23.40 MB from BranchCache:

You might be wondering how this is possible since I flushed the cache before starting. Well, it means that the zip file itself was able to be deduped, meaning there were some redundant file blocks in the zip file (hint: the same thing can happen on WIM files but at an even greater scale).

Step 3 – Download the Pre-prod Zipped Driver Package

Step 4 – Get the BITS job ID and monitor with BCMon

Here we can see that we got only 113.64 MB from the server and 510.51 MB from BranchCache for a whopping 81.79% efficiency!!! Starting at 3.38 GB, we managed to only send about 900 MB across the network. Also, this is what I like to call a single dimensional test, since I only had one client on the network for testing. If there are other clients on the network and they have some of the necessary file blocks (not necessarily from the same CM package), then clients will pull from them vs going back to the DP. BranchCache couldn’t care less about CM Package IDs or App DT Content IDs – this is something that severely limits P2P efficiency in other P2P technologies, like the native CM Peer Cache or 1E Nomad. These technologies only operate at the CM Package ID or App DT Content ID level. In other words, there can be two peers on the same network with 99.99% of the same content but just in a different Package and the two will never share.

BranchCache FTW (even with zip files)!

Originally posted on https://miketerrill.net/

Forcing Configuration Manager VPN Clients to get patches from Microsoft Update


3/18/2020

By now IT departments are scrambling to get as many users as possible to work from home as a result of the COVID-19 outbreak. What they are finding out is that Microsoft patches chew up a lot of VPN bandwidth, even though these clients could download the patches directly from Microsoft Update (yet still be managed by Configuration Manager). I have seen a few blog posts on the topic that ultimately end up leading to more questions than they answer. So hopefully I can make this as complete as possible and answer as many of those outstanding questions as possible.

To set the stage, I am not going to be talking about scenarios that involve CMG (I am going to assume that you are already ahead of the game and do not face this challenge). This is more for the customers on the trailing edge that have not (been able to) adopt the cloud strategy and are stuck with distribution points on the corpnet. The goal is to work with your VPN team so that they configure it for split tunneling. I will not go into this part as each VPN configuration is unique, however, I will help provide you with the URLs that need to be excluded from coming back through the corpnet. Also be sure to factor in other things like proxy servers or other apps that inspect/filter web traffic, as they will need to exclude this traffic as well so it does not come back through corpnet.

Everything starts with boundaries and if you know me, I have never been a fan of boundaries for content location (p2p FTW!). The only boundaries that I configure for content location are when I need to protect a DP in a build center where I do not want other clients outside of the build center leeching off the build center DP. Other than that, who has time to manage boundaries that are constantly changing? Plus, in my environment I could not even tell you how many subnets we have, let alone pretend to get it right.

The other goal of this is to keep the operational aspect as simple as possible. Meaning, don’t expect the Software Update person to now configure a bunch of different software update deployments just to allow the VPN clients to get their updates from MU.

This is hopefully going to be a simple example to get you up and running (plus I can’t really show our production environment, so don’t ask). IP Ranges are your friend. Ever since the CM Team optimized the queries for client location requests, big honking IP Ranges are the way to go. Forget IP Subnets and AD Sites (unless you really like to cause yourself pain). So think big, like 0.0.0.0 – 255.255.255.255. However for this example I am going to keep it simple. The following are my three ranges:

Boundary Groups are pretty simple as well:

Corpnet Boundary Group Properties

Uses the Data Center DP:

In this example, every IP range is accounted for so I have not defined a relationship to the Default Site Boundary Group (or any other Boundary Groups). However, your configuration may be different:

And I am not using peer cache (BranchCache FTW!) and we do not want our corpnet devices going out to MU:

VPN Boundary Group Properties:

VPN Boundary Group uses the dedicated VPN DP(s):

Not making any assumptions, I like to explicitly state that the VPN Boundary Group should never fallback to another boundary group’s distribution point (in case an admin screws up a check box on a deployment). And if your MP(s) and SUP(s) are in the Default BG, then you will want the VPN clients to be able to get to them:

Once again I am not using peer cache (BranchCache FTW!). The “Prefer cloud based scenarios over on-premise sources” option is an interesting checkbox. Since we have everything pretty much protected, it would not hurt to check it; however, it isn’t necessary. It could be a good safeguard in the event that someone screws up and distributes Microsoft patches to the VPN DP Group:

Be sure to set up dedicated DP Groups. You will only want to distribute Microsoft patches to the Data Center Distribution Points Group (Corpnet) and not the VPN Distribution Points Group. However, 3rd Party Updates will need to be staged on both DP Groups (and for third party updates check out Patch My PC):

IMPORTANT: When you set up the Software Update Deployment, configure it exactly as follows. Since we hopefully have defined all possible IP Ranges (remember I said think big and carve up 0.0.0.0 – 255.255.255.255 accordingly), every client should have either a DP to get content from without falling back or, in the case of VPN clients and Microsoft patches, Microsoft Update:

On the clients, you are going to want to check out two logs. The first one will be the CAS.log:

And the second one will be the ContentTransferManager.log:

And remember, just because it says it is getting it from Microsoft Update does not necessarily mean it is getting it directly from MU. It could still be going back through the corpnet because the split tunnel was not set up correctly or a proxy is re-directing traffic. So work closely with these teams. From a CM standpoint, mission accomplished.

Now for the URLs. There is not a public Microsoft doc/KB (at least that I know of) that says these are exactly the URLs that are required for this feature when defining Client -> MU traffic. However, most of them are similar to what the SUP uses when it downloads the content.

(from Software Updates)

Office 365 Updates will be further down the page:

(from Manage Office 365)

Lastly, Windows 10 Updates have a slightly different URL:

(from Windows 10 servicing)

The download location can be found in the meta data for each patch:

Plus you can run a query in SQL to find it:

SELECT TOP 1000 SourceUrl
FROM vSMS_CIContentFiles

Hopefully this helps in getting the Microsoft Update traffic off of your VPN links. If you have any suggestions or other useful tips, please leave them in the comment section below.

Originally posted on https://miketerrill.net/

Protected: Configuring WoL with Configuration Manager – Part 1


This post is password protected. You must visit the website and enter the password to continue reading.

Protected: Configuring WoL with CM for HP Desktops – Part 2


This post is password protected. You must visit the website and enter the password to continue reading.


Configuring WoL with CM for Dell Desktops – Part 3


4/26/2020

[Download exported Configuration Baseline and Configuration Items here. This includes the CIs from Part 2 and Part 3.]

In Configuring WoL with Configuration Manager – Part 1, I covered the settings that are required to enable Wake-On-LAN that are not hardware manufacturer specific. In Part 2 I covered the BIOS (UEFI) specific settings for current HP desktops, and in Part 3, I am going to go over the BIOS (UEFI) specific settings for current model Dell desktops.

There are multiple ways to configure BIOS settings on Dell desktops and laptops. Dell provides Dell Command | Configure, which is a command line utility that can be used to get and set BIOS settings and can even be used to set multiple settings using an answer file. Dell also provides the Dell Command | PowerShell Provider, which is a module that makes BIOS configuration manageable through PowerShell. Another method for managing Dell systems is the Dell Command | Integration Suite for System Center, a console extension for Microsoft Endpoint Manager Configuration Manager (MEMCM – previously called System Center Configuration Manager) that integrates the other Dell Command Suite components. Dell is also starting to provide direct WMI access (without any required dependencies), but it is only supported on Gen 10 systems and Gen 9 systems running a current BIOS (any generation below is out of luck).

Lastly, Dell also provides Dell Command | Monitor. Dell Command | Monitor not only enables administrators to inventory and monitor Dell systems (see my blogs How to Inventory Dell BIOS and UEFI Settings with ConfigMgr Part 1 and Part 2), it also enables BIOS settings to be modified using WMI. The downside is that it needs to be running in the full OS and cannot be used in WinPE. However, since I like collecting Dell specific inventory with MEMCM and I will be enforcing/monitoring WoL BIOS settings in the full OS, I will be using the WMI methods that Dell Command | Monitor enables in this blog.

For configuring the Dell desktop WoL settings, I will be using Configuration Manager Configuration Items (CIs) that are deployed via a Configuration Baseline. CIs can not only be used to report on settings, they can also be used to enforce settings and manage drift. Unlike GPOs, reporting is natively built in to Configuration Manager, which makes compliance reporting really easy.

The current Dell desktop models have two BIOS settings that need to be configured in order to perform successful WoL – “Wake-On-LAN” and “Deep Sleep Control“. If power utilization is not a concern and you want to add a little more redundancy to systems that should always stay up, there are a few other settings that are of interest. The first one is what the system should do in the event of a power loss and is called “AC Power Recovery Mode“. The other settings have to do with the capability of enabling a power on event and can power on a system at a pre-determined hour, minute and day. These settings are: “Auto On“, “Auto On Hour“, and “Auto On Minute“.

The following chart summarizes the settings, the values that I am going to configure, and the possible values. These settings will enable WoL, disable Deep Sleep Control (which, if enabled, will prevent WoL from being successful), always turn the system back on after a power loss, and enable the system to power up every day at 4:44 AM:

Setting                  Value  Possible Values
Wake-On-LAN              4      1-Disable, 4-LAN, 5-LAN or WLAN, 6-WLAN only
Deep Sleep Control       2      1-S4andS5, 2-Disable, 3-S5Only
AC Power Recovery Mode   3      1-Off, 2-Last, 3-On
Auto On                  2      1-Disable, 2-Everyday, 3-Weekdays, 4-Select days
Auto On Hour             4      0-23
Auto On Minute           44     0-59
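On a machine with Dell Command | Monitor installed, the current and possible values for these attributes can be pulled from the same WMI class that the CIs below use; a quick sketch (property names follow the standard CIM_BIOSEnumeration schema):

# List the WoL-related attributes along with their current and possible values
Get-CimInstance -Namespace root\dcim\sysman -ClassName DCIM_BIOSEnumeration |
    Where-Object { $_.AttributeName -in 'Wake-On-LAN','Deep Sleep Control','AC Power Recovery Mode' } |
    Select-Object AttributeName, CurrentValue, PossibleValues, PossibleValuesDescription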

As mentioned above, I like configuring BIOS settings using CIs. When creating a CI that may or may not be applicable to other systems, it is a good idea to create an Application type CI (instead of an Operating System type CI). This way detection logic can be applied to see if the CI should or should not be evaluated on a system. For BIOS settings, I like to limit my CIs to the hardware models that I have certified and tested that it actually works. For Dell models, I use the Win32_ComputerSystem Model identifier. The custom script for the Dell desktop detection logic is the following:

$ErrorActionPreference = 'SilentlyContinue'
$SupportedModels = @("OptiPlex 5050","OptiPlex 5060","OptiPlex 5070","OptiPlex 7040","OptiPlex 7050")
#-------------------------------------
$CS = gwmi -Class Win32_ComputerSystem
If ($SupportedModels -Contains $CS.Model) {
    Write-Output $CS.Model
    }

This corresponds to the following Dell desktop models: Dell OptiPlex 5050, 5060, 5070, 7040, 7050. NOTE: Add your own models here in each of the CI Detection Methods.

For the CI Name and CI Setting Name, I like to use the following naming structure for easy identification and purpose:

{Manufacturer} BIOS – {Laptop/Desktop/All} – {BIOS Setting Name}

So the Dell Desktop WoL setting would look like the following:

Dell BIOS – Desktop – Wake On LAN

For the CI Description, I like to include the desired setting value and the models that are supported. For the Dell Desktop WoL setting I have the following:

4(LAN):Dell OptiPlex 5050, 5060, 5070, 7040, 7050

I also like to use categories for easy searching/filtering and use “BIOS Settings” and “WoL” for this CI. The CI General tab looks like the following:

CI Setting Name is the same:

I also like to keep the Discovery Script modular so that it is easy to re-use for multiple BIOS settings. By keeping the Setting name at the top of the script, that is the only thing that needs to be changed for creating Discovery Scripts for other BIOS settings. For the Dell Desktop WoL the discovery script would be the following:

#Discovery Script:
$SettingName = 'Wake-On-LAN'
#-------------------------------------
$BIOSSetting = Get-CimInstance -Namespace root\dcim\sysman -ClassName DCIM_BIOSEnumeration | Where-Object { $_.AttributeName -eq $SettingName}
Write-Output $BIOSSetting.CurrentValue

The Compliance Rule will be a string value that should be compared to the desired setting. In this case, I want this setting to be equal to “4” and I want to run the remediation script when the setting is non-compliant. NOTE: In order for the “Run the specified remediation script when this setting is noncompliant” to be visible, there needs to be a remediation script defined which is below.

Like the modular Discovery Script, I also like to keep the Remediation Script modular so that it is easy to re-use for multiple BIOS settings. By keeping the setting name, setting value and BIOS password at the top of the script, those are the only things that need to be changed when creating Remediation Scripts for other BIOS settings. For the Dell Desktop WoL, the remediation script would be the following:

#Remediation Script:
$SettingName = 'Wake-On-LAN'
$Value = '4' #1-Disable,4-LAN,5-LAN or WLAN,6-WLAN only
$BIOSPW = 'Password1'
#-------------------------------------
$BIOS = Get-CimInstance -Namespace root\dcim\sysman -ClassName DCIM_BIOSService
$BIOSPWSetting = Get-CimInstance -Namespace root\dcim\sysman -classname dcim_biospassword
If (($BIOSPWSetting | ?{$_.AttributeName -eq 'AdminPwd' }).IsSet -eq $false)
{
    $Result = Invoke-CimMethod -InputObject $BIOS -MethodName SetBIOSAttributes -Arguments @{AttributeName=@($SettingName);AttributeValue=@($Value)}
}
elseif (($BIOSPWSetting | ?{$_.AttributeName -eq 'AdminPwd' }).IsSet -eq $true)
{
    $Result = Invoke-CimMethod -InputObject $BIOS -MethodName SetBIOSAttributes -Arguments @{AttributeName=@($SettingName);AttributeValue=@($Value);AuthorizationToken=$BIOSPW}
}
 
Exit $Result.ReturnValue

In order to change a BIOS setting, a BIOS password is required if one is set. Above is one method for a single static BIOS password. If you have multiple static BIOS passwords or dynamic BIOS passwords, then more would need to be done in order to determine the correct BIOS password to use. This approach is more secure than using the Dell Command | Configure utility and passing the password on a command line. If CM is secured properly (which it should be, otherwise you have more important things to worry about), then only the CM admin (or admins) that are scoped to manage CIs will be able to read these directly in the console. Getting the password from the Management Point is probably not impossible, but it would require a bit of work and some luck. As for the client, I have not yet been able to find it. However, if there is a way to easily grab this information, please reach out to me and let me know via the comments below or a DM on Twitter.

The other settings, Deep Sleep Control and AC Power Recovery Mode, follow the same approach. For the daily Power-On, I combine each setting in the same CI and it will look like the following:

Lastly, we need to create a Configuration Baseline. I like to use a similar naming structure for easy identification and purpose:

BIOS Settings – {Purpose} – {Intended Platform} {Prod/Pre-Prod}

So for these WoL settings I use the following:

BIOS Settings – WoL – Desktop Pre-Prod

I like to duplicate Baselines (and even some CIs) into a production and pre-production. That way it is easy to test and make changes once it is already rolled out to production.

For the Baseline Description, I like to include a brief description for the Baseline. For this Baseline, I use the following:

Enabled WoL Settings and daily Power On settings on select desktops

Just like the CI, I also like to use categories for easy searching/filtering and use “BIOS Settings” and “WoL” for this Baseline. The Baseline General tab looks like the following:

Since all Settings might not apply to all targeted systems, it is very important to change the “Purpose” from “Required” (default) to “Optional”. Otherwise, systems that are not applicable will show up as non-compliant. Here I have combined the Dell CIs into the same Baseline that contains the HP CIs.

This Configuration Baseline can now be deployed to a target collection like All Desktops, and the settings will only be applied to the applicable systems based on the detection methods. Be sure to enable “Remediate noncompliant rules when supported” (and “Allow remediation outside of maintenance window” if desired).

Once again, if you have made it all the way to the bottom of this post, thanks for reading and congratulations! Hopefully this helps you to configure your systems for Wake-On-LAN so that they can be woken up and/or kept powered on during this time when there is a push to get more people to work from home. It will also help with other deployments, upgrades and patching as well. Now scroll back up to the top and download the provided Configuration Baseline and Configuration Items, modify them for your Dell models and test it out in your environment.

Originally posted on https://miketerrill.net/

IP Ranges, IP Subnets and Client Counts in MEMCM


Boundaries are used in Boundary Groups in Configuration Manager, which in turn are used for site assignment and site system/content location. There are multiple ways to configure Boundaries: IP Subnets, AD Sites, IPv6 prefixes, IP ranges and lastly VPN. I work in a 419K seat global environment, which means we have a CAS and three Primary Sites. A few years ago, when we migrated to a new hierarchy, I somehow got the job of doing the boundaries. Even after being in this environment for several years, I still could not tell you how many IP Subnets there are in total.

Luckily, Microsoft made vast performance improvements in using IP Ranges, so this seemed like an easy way to do boundary management. Starting at 0.0.0.0 and going up to 255.255.255.255. Simple, right? Well, not so fast. We like to keep our sites in balance as much as possible for performance reasons, and for this reason it is important to keep these ranges split up as evenly as possible. If you have ever tried to determine how many clients are in a particular IP Range, you probably banged your head a little (and it wasn’t because of the music you were listening to at the time).

Luckily, there is a nice little function already in CM that can be used for this task and a quick SQL query looks like this:

--Count of IP Addresses in an IP range
SELECT Count(IP_Addresses0) AS [Count]
FROM v_RA_System_IPAddresses
WHERE
([dbo].fnGetNumericIPAddress(IP_Addresses0)
BETWEEN [dbo].fnGetNumericIPAddress('192.168.5.1')
AND [dbo].fnGetNumericIPAddress('192.168.7.255') )

In my lab at home (which is not nearly as impressive as work), I get 12 clients returned in this IP range. Now, you are probably saying to yourself ‘well, this is nice and all but I don’t have multiple Primary Sites’. Have you ever had the need to build a collection query using IP subnets but all you have is a list of IP ranges? You could use a ‘like’ operator in your collection query, but this can be really expensive on coll eval.

In order to achieve more performance optimizations, we decided it would be best to average out the physical clients and virtual clients across the three sites. One of our sites always had more of a backlog come Monday, and it turned out it had the bulk of the laptops and desktops and very few virtual machines. This meant more tweaking of the IP Ranges and setting up collections so that a baseline could be gradually targeted at them to get them to move sites. In other words, don’t attempt to move 15K+ clients from one site to another in one go. It will likely turn into a long day and probably a bad day as well. This is where the IP subnet-based collection queries come into play. I had a count of clients in a particular range (and yes, 15K+ were in one of those ranges that needed to be moved), but it was such a big range that even using the ‘like’ operator on an IP address would be madness (plus my team wouldn’t be happy if coll eval came to a halt). Using the same function as above but slightly modified, we are able to get all of the IP subnets in a given range:

--Subnets in an IP Range
SELECT Distinct sub.IP_Subnets0 AS [IP Subnet]
FROM v_RA_System_IPAddresses ipa
JOIN v_RA_System_IPSubnets sub on sub.ResourceID=ipa.ResourceID
WHERE
([dbo].fnGetNumericIPAddress(IP_Addresses0)
BETWEEN [dbo].fnGetNumericIPAddress('192.168.5.1')
AND [dbo].fnGetNumericIPAddress('192.168.7.255') )
AND
([dbo].fnGetNumericIPAddress(IP_Subnets0)
BETWEEN [dbo].fnGetNumericIPAddress('192.168.5.1')
AND [dbo].fnGetNumericIPAddress('192.168.7.255') )

Again, in my home lab this is not impressive, but it does return the only subnet that I have clients on (192.168.7.0). Hopefully you find this function useful the next time you need to deal with IP ranges and IP subnets.
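Those subnets can then be dropped into a cheap equality-based collection query instead of a ‘like’ on IP addresses; a sketch using the subnet returned above:

select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.Name from SMS_R_System where SMS_R_System.IPSubnets = "192.168.7.0"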

Originally posted on https://miketerrill.net/

Lord of the Deployment Rings


If you have been involved with any kind of Windows deployments or Windows Update deployments in the last five or so years, you have probably heard of the concept of deployment rings. Microsoft has been pushing this concept ever since they moved to an ‘as a Service’ model for Windows 10 and other Microsoft products that are frequently updated. Also, if you work with or for a company that seems to be stuck in the 1990s, you are probably also familiar with the phrase ‘pets vs. cattle’. If not, the simple explanation is managing every computer like it is a pet, instead of doing activities in a controlled, bulk fashion, more like herding cattle. As you can imagine, the ‘pets’ method has a rather high TCO and usually results in much slower deployments, which makes it much harder to keep up with the ‘as a Service’ model.

Even using a deployment ring methodology, there is still a good reason to minimize risk but maximize velocity (this is what my Windows as a Service in the Enterprise process is based on). Because of this, I always like to implement a crawl-walk-run approach. Using Configuration Manager, this approach can be accomplished by setting up ring-based collections. This also has side benefits by being able to re-use these collections for deployments and updates instead of constantly creating hundreds (or possibly thousands) of collections (which can have a drastic impact on colleval).

The design goal was to come up with rings that represented 2%, 3%, 5% (crawl), 10%, 20% (walk), 30%, 30% (run). Starting with 400K (the approximate size of the workstations we manage) and then subtracting the high-risk systems, executives, insiders, etc., we have a target subset of 386,500 (adjust the decimal to reflect your own environment). With this number, the target rings look like the following:
Crawl Rings:
Ring 0 – Pilot testing, early adopters, etc., populate as desired
Ring 1 – 2% (~7730)
Ring 2 – 3% (~11,595)
Ring 3 – 5% (~19,325)
Walk Rings:
Ring 4 – 10% (~38,650)
Ring 5 – 20% (~77,300)
Run Rings:
Ring 6 – 30% (~115,950)
Ring 7 – 30% (~115,950)

By using the last two characters of the SMSUniqueIdentifier, we are able to get really close to these target percentages by splitting up the 256 possible two-character combinations into the percentages listed above. For example, the collection query for Enterprise Ring 1 would look like the following:

select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('43','CC','C4','5B','30')
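If you are wondering where those hex pairs come from: there are 256 possible two-character suffixes (00-FF), and the rings simply carve them up by percentage. A sketch of generating such a split (the random shuffle is my own choice; any disjoint split works, since the suffixes are effectively uniformly distributed):

# Shuffle the 256 possible suffixes and deal them out by ring percentage
$Suffixes = 0..255 | ForEach-Object { $_.ToString('X2') } | Sort-Object { Get-Random }
$Percents = 2,3,5,10,20,30,30
$Index = 0
for ($Ring = 1; $Ring -le $Percents.Count; $Ring++) {
    # The last ring takes whatever is left so all 256 suffixes are covered exactly once
    if ($Ring -eq $Percents.Count) { $Count = 256 - $Index }
    else { $Count = [math]::Round(256 * $Percents[$Ring - 1] / 100) }
    $Set = $Suffixes[$Index..($Index + $Count - 1)] | ForEach-Object { "'$_'" }
    "Ring $Ring ($($Percents[$Ring - 1])%): " + ($Set -join ',')
    $Index += $Count
}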

Instead of manually creating these collections by copying and pasting collection queries, here is a quick PowerShell script that will create them for you. Adjust the limiting collections and collection schedules to fit your needs. Feel free to increase or decrease the number of rings by adjusting the collection queries.

In the future, I will show other ways that these Enterprise Rings can also be leveraged. If you find this useful, please let me know by leaving a comment below.

#Lord of the Deployment Rings
#22.06.02

#Get the next Sunday for the collection refresh schedule
$Date = Get-Date
while ($Date.DayOfWeek -ne "Sunday") {$Date = $Date.AddDays(1)}
$Schedule = New-CMSchedule -DayOfWeek Sunday -Start $Date.Date -RecurCount 1
$x = 0
$LimitingCollection = 'All Desktop and Server Clients'

#Create Enterprise High Risk Collection
$ExcludeCollection = New-CMCollection -CollectionType Device -LimitingCollectionName $LimitingCollection -Name 'Enterprise High Risk Ring' -Comment 'Place high-risk systems in this collection' -RefreshType None

#Create Enterprise Ring 0 Pilot Collection
$PilotCollection = New-CMCollection -CollectionType Device -LimitingCollectionName $LimitingCollection -Name 'Enterprise Ring 0 Pilot' -Comment 'Place pilot systems in this collection' -RefreshType None


#Define Enterprise Ring Collections
$Rings = @(
    @{ CollectionName = 'Enterprise Ring 1'; Comment = 'Approximately 2%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('43','CC','C4','5B','30')"}
    @{ CollectionName = 'Enterprise Ring 2'; Comment = 'Approximately 3%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('6C','AA','55','C9','72','BD','54')"}
    @{ CollectionName = 'Enterprise Ring 3'; Comment = 'Approximately 5%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('9E','F1','12','8C','34','FC','ED','77','87','D1','48','57','5A')"}
    @{ CollectionName = 'Enterprise Ring 4'; Comment = 'Approximately 10%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('0F','74','0B','2D','59','AE','27','DD','99','A9','4F','FB','BB','1B','66','C3','52','AC','85','84','B9','A8','26','8F','BC')"}
    @{ CollectionName = 'Enterprise Ring 5'; Comment = 'Approximately 20%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('21','63','9A','3A','D2','36','AF','E3','5C','AD','B5','25','3D','88','DF','D5','DE','6E','15','7B','09','FE','B8','3F','CA','0A','95','0D','EE','33','97','A7','3C','D0','5D','E4','9C','1F','4C','1C','18','49','4E','3E','AB','89','D4','8D','C6','0C','53')"}
    @{ CollectionName = 'Enterprise Ring 6'; Comment = 'Approximately 30%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('DA','CB','67','11','40','BF','16','7F','D3','6D','08','50','7C','1A','14','94','E6','60','3B','38','7D','7E','98','F8','E9','37','E5','FF','A3','B3','10','90','81','1E','4B','51','DB','8E','35','F4','47','CD','A5','00','5E','19','4D','69','92','75','06','CF','31','F0','E1','93','03','45','1D','5F','E8','91','F2','CE','B1','73','D7','22','82','76','71','4A','86','EC','B7','80','F3')"}
    @{ CollectionName = 'Enterprise Ring 7'; Comment = 'Approximately 30%'; Query = "select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where Substring(SMS_R_System.SMSUniqueIdentifier,39,2) in ('C5','A4','46','79','D8','BA','C0','A1','58','BE','68','78','29','02','E2','39','05','F5','E7','D9','28','24','6F','9B','8B','20','83','70','B4','61','A0','6A','96','23','2F','A6','04','DC','13','F6','0E','6B','01','E0','65','62','9D','2E','44','F7','C8','B0','FA','8A','C2','F9','2A','C1','D6','B2','41','EA','EF','FD','A2','17','7A','56','B6','2B','64','9F','42','EB','C7','07','2C','32')"}
    )

#Create Enterprise Ring Collections
foreach ($Ring in $Rings) {
    $x++
    $CollectionName = $Ring.CollectionName
    $Comment = $Ring.Comment
    $Query = $Ring.Query
    Write-Host "Creating Collection $CollectionName"
    New-CMCollection -CollectionType Device -LimitingCollectionName $LimitingCollection -Name $CollectionName -Comment $Comment -RefreshSchedule $Schedule -RefreshType Periodic
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName $CollectionName -RuleName $CollectionName -QueryExpression $Query
    Add-CMDeviceCollectionExcludeMembershipRule -CollectionName $CollectionName -ExcludeCollection $ExcludeCollection
    }

Originally posted on https://miketerrill.net/

Adding DaRT to 2PS iPXE


The Microsoft Diagnostics and Recovery Toolset (aka DaRT) is a powerful toolset that is available to Software Assurance customers. Johan Arwidmark recently wrote a post called Back to Basics – Reset the Windows 10 Admin Password using DaRT that covers the steps for creating the recovery media. The DaRT Recovery Image Wizard will generate a WIM file and ISO file and it will even create a CD, DVD (if you have one of those attached to your system) or a USB key. For my test below, I was curious and generated my DaRT boot image using Windows 11 21H2 and the corresponding ADK and it worked just fine.

If you work in a lab where you do a lot of diagnostics and testing, it can be really handy to add it to a PXE server so that you do not have to keep track of USB keys. I have started using 2Pint Software’s iPXE Anywhere solution a lot lately and it is mind blowing. Let’s jump right in and see how we can add the DaRT boot.wim that we just generated.

  • Create a directory on your IIS server and copy the boot.wim to this location (Note: I am using the default IIS location, C:\inetpub\wwwroot, since this is just a lab; these staging steps are also sketched in PowerShell after this list):
  • Copy Boot.sdi and wimboot.x86_64.efi (assuming this is for 64-bit) from C:\ProgramData\2Pint Software\2PXE\Remoteinstall\Boot to the directory that was previously created:
  • If you have already staged a CM Boot Image, then you can simply copy one of the bcd files from C:\ProgramData\2Pint Software\2PXE\Remoteinstall\Tmp to the directory that was previously created. It should just be called bcd with no file extension. Creating the BCD from scratch can be done but is beyond the scope of this blog. The final directory contents should look like the following:
  • Edit the 2Pint.2PXE.Service.exe.config as admin (the default location is C:\Program Files\2Pint Software\2PXE).
  • Find the section <CustomMenuItems> and add a new entry <add key="CustomWinPEDaRT" value="DaRT"/>:
  • Look for and copy a section called <add key="CustomWimWithFilesFrom2PXE"… (I already modified this one to point to OSDCloud):
  • Insert it before the closing custom items tag (</CustomItems>) and update the key name and value:
  • Save the config file and then cycle the 2Pint 2PXE Server service.
  • Create a new Virtual directory that points to C:\inetpub\wwwroot\DaRT:
  • Boot a client and notice the new DaRT entry under Other Actions:
  • If all works correctly, you will see the files being downloaded via http and you might even see some BranchCache action:
  • And then we get the start of the DaRT boot process:
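
For those who prefer to script the staging, here is a minimal PowerShell sketch of the copy and virtual directory steps above. The source path for the wizard-generated boot.wim is an assumption (point it at wherever your DaRT media landed), and the virtual directory is created with the WebAdministration module:

#Stage the DaRT boot files under the default IIS site (lab layout)
$DaRTDir = 'C:\inetpub\wwwroot\DaRT'
New-Item -Path $DaRTDir -ItemType Directory -Force | Out-Null
Copy-Item 'C:\DaRT\boot.wim' $DaRTDir   #assumed wizard output location
Copy-Item 'C:\ProgramData\2Pint Software\2PXE\Remoteinstall\Boot\Boot.sdi' $DaRTDir
Copy-Item 'C:\ProgramData\2Pint Software\2PXE\Remoteinstall\Boot\wimboot.x86_64.efi' $DaRTDir
#Copy a bcd file from C:\ProgramData\2Pint Software\2PXE\Remoteinstall\Tmp as described above (named bcd, no extension)

#Publish the folder as a virtual directory on the Default Web Site
Import-Module WebAdministration
New-WebVirtualDirectory -Site 'Default Web Site' -Name 'DaRT' -PhysicalPath $DaRTDir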

Originally posted on https://miketerrill.net/

Protecting My Precious MEMCM


If you have worked with Microsoft Endpoint Manager Configuration Manager (MEMCM, CM for short and previously known as SCCM) for more than a day, you are probably aware of the immense power it can wield over any and all of the clients it manages. It has an extremely mature Role-Based Administration model that allows for very granular control, giving only a certain level of access to only those that need it. This can minimize disasters such as deploying an unintended task sequence to the All Systems collection.

In larger organizations (and those that take risk seriously), this feature is hopefully utilized. However, in smaller organizations, or those with only a few CM admins, this might not be the case and those admins might have Full Administrator rights. Regardless of the model, CM Full Administrator accounts should be protected with the maximum security possible. This means that a separate elevated account should be used for this function, not a day-to-day user account that is also used for email, web browsing, etc. If you are using a day-to-day user account that has CM Full Administrator access, stop reading now and implement a separate elevated account!

In today’s threat landscape, just a separate elevated account is not good enough. These elevated accounts should also be protected using multi-factor authentication (MFA). The good news is that CM natively supports two different types of MFA – certificate authentication and Windows Hello for Business (which is not to be confused with Windows Hello).

In this post, I am going to focus on using certificate authentication using smart cards. If you are thinking “great, I might as well stop reading now since I don’t have any access to smart cards”, don’t worry as I am going to focus on using virtual smart cards. This is something that anyone with a PKI and TPM will be able to easily configure and implement.

Part 1: Creating the Virtual Smart Card Certificates

The first thing we need to do is create a certificate template that can be used with smart cards. Open up the Certificate Authority and then right-click on Certificate Templates and select Manage.

In the Certificate Templates Console, find and right-click on the template called Smartcard Logon and select Duplicate Template.

On the Properties of New Template General tab, give it a display name of TPM Virtual Smart Card Logon and select the desired Validity period and Renewal period.

On the Request Handling tab, ensure the purpose is set to Signature and smartcard logon, and set Do the following when the subject is enrolled and when the private key associated with this certificate is used: to Prompt the user during enrollment.

On the Cryptography tab, select Requests must use one of the following providers: and select the Microsoft Base Smart Card Crypto Provider.

On the Security tab, add a group that you would like to control who has access to request Smart Card certificates. For this example, I am just going to use Authenticated Users. Grant the group both Read and Enroll permissions.

Back in the Certificate Authority console, right-click on Certificate Templates again and select New > Certificate Template to Issue.

In the Enable Certificate Templates window, select the newly created TPM Virtual Smart Card Logon template and click OK.


Part 2: Installing the Virtual Smart Card and requesting the Virtual Smart Card Certificate

If you are doing this in a VM, ensure that it has a TPM enabled. This can be done under the Security node in the Hyper-V settings.

In the running Operating System, make sure the TPM is ready for use. This can be done by running the TPM Management console (tpm.msc).

Open an elevated command prompt, run the following command, and select the desired PIN:
tpmvscmgr.exe create /name VirtualSC /pin prompt /adminkey random /generate
NOTE: this command needs to be run from a console session and cannot be run from an RDP session (or a Hyper-V Enhanced session).
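
If you ever need to remove the virtual smart card later (for example, to re-create it in a lab), tpmvscmgr also supports a destroy command. The instance path below is a placeholder; use the device instance ID that the create command printed:

tpmvscmgr.exe destroy /instance ROOT\SMARTCARDREADER\0000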

Running under the account that the Smart Card certificate needs to be assigned to, open the Certificate Manager for the current user (certmgr.msc). Right-click the Personal folder and select All Tasks > Request New Certificate.

In the Certificate Enrollment wizard, on the Before you Begin step, click Next.

On the Select Certificate Enrollment Policy step, select Active Directory Enrollment Policy and then click Next.

On the Request Certificates step, select the TPM Virtual Smart Card Logon certificate and then click Enroll.

On the Windows Security prompt – Enrolling for: TPM Virtual Smart Card Logon, enter the PIN that was created earlier with the Virtual Smart Card.

If the certificate enrollment was successful, the Certificate Installation Results will look like the following.

Part 3: Configuring the account to require the Smart Card

In Active Directory Users and Computers, locate the user account. On the Account tab, under Account options:, select Smart card is required for interactive logon. This will force multi-factor authentication for logging on to Windows and include the MFA claim in the user’s authentication token.
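
As a side note, the same account option can be set with PowerShell, which is handy if several admin accounts need it. A minimal sketch, assuming the ActiveDirectory RSAT module is available and using a hypothetical ‘CM Full Administrators’ group:

#Require smart card for interactive logon on each account in the group
Import-Module ActiveDirectory
Get-ADGroupMember 'CM Full Administrators' |   #hypothetical group name
    ForEach-Object { Set-ADUser -Identity $_.DistinguishedName -SmartcardLogonRequired $true }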

If the user attempts to log on to Windows using only a password, the following message is displayed – “You must use Windows Hello or a smart card to sign in.”.

Part 4: Configuring CM to use Certificate Authentication

First, configure a designated account or group that is already a Full Administrator in CM as a MEMCM Break Glass Admin, and then disable the account(s). This will be used in case of an emergency (for example, if a certificate has expired or a machine that had the Virtual Smart Card installed is no longer accessible).

After this is done, sign on with the account that was used to request the Virtual Smart Card certificate, using the Smart Card to log into Windows. In the CM Console, open the Hierarchy Settings Properties. On the Authentication tab, select Certificate authentication. Read the warning about the impact – yes, all administrators that need to access CM will now need a Virtual Smart Card (unless you create an exclude group for them – but the idea is to make CM more secure, right?).

Under the Exclude the following users or groups section, add the Break Glass account(s) or group that was configured above.

Attempting to launch the CM Console without using the Virtual Smart Card will show that the console is unable to connect.

Next, sign into Windows using the Virtual Smart Card. It is the Sign-in option that looks like a security chip.

Launch the CM Console and it will now be able to connect.

Summary

Hopefully this post helps get you on the right track towards further locking down and protecting My Precious MEMCM. I have recently seen blog posts from the InfoSec community on Twitter poking more and more at CM; they know that once they get access to CM, it is game over! Therefore, the more deadbolts that can be locked, the better.

Originally posted on https://miketerrill.net/


MEMCM Package MIF Matching


If you are familiar with the work that Gary Blok and I have done on BIOS and Drivers, you will already know that we like to use extra fields on the Package properties to store metadata that we use for reporting and automation. Four of the fields that we like to use are on the Reporting tab of the Package Properties:

These fields have been around forever in the product and are used to provide enhanced status information about the success of the deployment (meaning that they really need to match if this option is enabled). Since we are just ‘borrowing’ these fields, we want this option to remain set to “Use package properties for status MIF matching”. I was recently working on a script that did package promotion and syncing of these properties and was curious about being able to programmatically configure this setting to ensure it stayed in the desired state. Looking at both the New-CMPackage cmdlet and the Set-CMPackage cmdlet, I found no parameter that allows for this configuration (at least directly).

I decided to ping Gary and ask him if he had discovered a way to configure this setting, but his response was “Not that I know of, I would just make sure it was set properly on the package”. This meant that it was time to fire up WMI Explorer and see if I could find a hidden setting that controlled this property. With the property set to “Use package properties for status MIF matching”, I noticed that PkgFlags was set to 16777216:

Changing the setting in the UI to “Use these fields for status MIF matching”, the PkgFlags value changed to 553648128:

Ok, this meant it was time to fire up the handy documentation and figure out what these values actually mean. This is in the SMS_PackageBaseclass class, and there is a nice definition of the properties that are controlled by PkgFlags. The difference between the two values is 553648128 - 16777216 = 536870912 (0x20000000), so 0x20000000 looks like the bit we want to check for:

Using a little bit of PowerShell, we can get the PkgFlags value, check to see if it is enabled, and then correct it:

$Package = Get-CMPackage -Id PS1000C8 -Fast
if ($Package.PkgFlags -band 0x20000000) {
    Write-Output "MIF field matching enabled"
    Write-Output "Setting it back to use package properties"
    #Clear only the 0x20000000 bit, leaving the rest of PkgFlags intact
    $Package.PkgFlags = $Package.PkgFlags -band (-bnot 0x20000000)
    $Package.Put()
    }
else {
    Write-Output "Use package properties for status MIF matching already set"
    }
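
As a follow-up, the four Reporting tab fields themselves can be set with Set-CMPackage, which is handy when syncing this metadata during package promotion. A quick sketch (the values are placeholders for whatever metadata scheme you use):

#Write the four Reporting (MIF) fields used to store package metadata
$MifFields = @{
    MifFileName  = 'BIOS.txt'   #placeholder values
    MifName      = 'EliteBook 840 G5'
    MifPublisher = 'HP'
    MifVersion   = '1.12.01'
}
Set-CMPackage -Id PS1000C8 @MifFields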

Originally posted on https://miketerrill.net/

Windows KB5012170 failing to install (0x800F0922)


August 25, 2022

Windows KB5012170 is turning out to be one of those patches that is causing a ton of headaches. The biggest one is for those users that got the lovely BitLocker Recovery screen but had no idea what their BitLocker recovery key was or where to find it. This happened to my father-in-law’s laptop and unfortunately the recovery key was not listed in his Microsoft account. His laptop was basically ransomwared without the ability to pay the ransom (luckily his son-in-law knows a thing or two about deploying Windows).

The intent of this patch is actually good as it addresses three security vulnerabilities (CVEs) by updating the Secure Boot Forbidden Signature Database (DBX) and blocking these compromised boot loaders. In other words, do not skip it, but rather take a crawl-walk-run approach when deploying it and be sure to test it thoroughly. Microsoft has now acknowledged another issue with this patch where it fails to install with error 0x800F0922. This is now listed on the support article under known issues:

Although not as bad as the BitLocker issue, this one has some administrators scratching their heads (and rightfully so). Microsoft says that updating the UEFI BIOS could help in some cases. However, if some systems of a given model install this update without any issues while other systems of the same model fail, the problem may not be the UEFI BIOS version but rather the TPM settings.

Running this update on an HP ProBook 445 G8 that I have in my lab, I was getting this exact error code:

Checking the TPM State configuration of this machine, I can see that the TPM State is set to Disable:

(See my post How to Inventory HP BIOS and UEFI Settings with ConfigMgr on how to get this data back via hardware inventory)

This also happens to be one of the settings that I talked about in my TPM Management – Getting Ready for Windows 11 session that I gave with Michael Niehaus at MMSMOA this past May:

This can be monitored and remediated by creating a CI with the following PowerShell. Along with TPM State, be sure to set TPM Activation Policy to ‘No prompts’ so that the end user does not get prompted after the reboot:

#Discovery Script:
$SettingName = 'TPM State'
#-------------------------------------
$BIOSSetting = gwmi -class hp_biossetting -Namespace "root\hp\instrumentedbios" -Filter "Name='$SettingName'"
Write-Output $BIOSSetting.CurrentValue

#Remediation Script:
$SettingName = 'TPM State'
$Value = 'Enable' #Disable,Enable
$BIOSPW = 'Password1'
#-------------------------------------
$BIOS= gwmi -class hp_biossettinginterface -Namespace "root\hp\instrumentedbios"
$BIOSSetting = gwmi -class hp_biossetting -Namespace "root\hp\instrumentedbios"
If (($BIOSSetting | ?{ $_.Name -eq 'Setup Password' }).IsSet -eq 0)
{
    $Result = $BIOS.SetBIOSSetting($SettingName,$Value)
}
elseif (($BIOSSetting | ?{ $_.Name -eq 'Setup Password' }).IsSet -eq 1)
{
    $PW = "<utf-16/>$BIOSPW"
    $Result = $BIOS.SetBIOSSetting($SettingName,$Value,$PW)
}

#Discovery Script:
$SettingName = 'TPM Activation Policy'
#-------------------------------------
$BIOSSetting = gwmi -class hp_biossetting -Namespace "root\hp\instrumentedbios" -Filter "Name='$SettingName'"
Write-Output $BIOSSetting.CurrentValue
Write-Output $BIOSSetting.PossibleValues

#Remediation Script:
$SettingName = 'TPM Activation Policy'
$Value = 'No prompts' #See the Discovery script's PossibleValues output for the valid options
$BIOSPW = 'Password1'
#-------------------------------------
$BIOS= gwmi -class hp_biossettinginterface -Namespace "root\hp\instrumentedbios"
$BIOSSetting = gwmi -class hp_biossetting -Namespace "root\hp\instrumentedbios"
If (($BIOSSetting | ?{ $_.Name -eq 'Setup Password' }).IsSet -eq 0)
{
    $Result = $BIOS.SetBIOSSetting($SettingName,$Value)
}
elseif (($BIOSSetting | ?{ $_.Name -eq 'Setup Password' }).IsSet -eq 1)
{
    $PW = "<utf-16/>$BIOSPW"
    $Result = $BIOS.SetBIOSSetting($SettingName,$Value,$PW)
}
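
Neither remediation script checks the outcome of SetBIOSSetting, so a failed change (for example, a wrong setup password) would go unnoticed. Here is a small sketch that could be appended after either SetBIOSSetting call; the 0 = success and 6 = access denied mappings are from HP's client management documentation as I recall them, so verify them against your platforms:

#SetBIOSSetting returns 0 on success; non-zero indicates a failure
switch ($Result.Return) {
    0       { Write-Output "'$SettingName' set to '$Value'" }
    6       { Write-Output "Access denied - check the BIOS setup password" }
    default { Write-Output "SetBIOSSetting failed with return code $($Result.Return)" }
}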

These will configure the following settings:

These settings do need a reboot before attempting the patch install again, but after the reboot, the patch should install just fine.

Originally posted on https://miketerrill.net/


Useful W365 Cloud PC Intune Filters


The Microsoft Intune Team has been busy at work releasing new features. One of the features released in Service release 2301 is the ability to create filters based on a device’s Azure AD Join Type (aka deviceTrustType). This is really handy when there is a need to target devices based on this property.

In addition, I have started experimenting more with Windows 365 Cloud PC. Yes – I did manage to pull it up on my Tesla. But now it is time to take a closer look. There is a nice filtering option for these devices as well. Here are a few filters that I have found useful so far:

All Cloud PCs
(device.model -contains "Cloud PC")

All AADJ Cloud PCs
(device.model -contains "Cloud PC") and (device.deviceTrustType -eq "Azure AD joined")

All HAADJ Cloud PCs
(device.model -contains "Cloud PC") and (device.deviceTrustType -eq "Hybrid Azure AD joined")
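
The inverse can also be handy. For example, a filter for physical (non-Cloud PC) AADJ devices could be built from the same two properties (this one is my own addition, so test it before relying on it):

All AADJ devices excluding Cloud PCs
(device.model -notContains "Cloud PC") and (device.deviceTrustType -eq "Azure AD joined")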

If there are any W365 Cloud PC filters that you find useful, drop them in the comments below.

Originally posted on https://miketerrill.net/

Disable Bluetooth File Transfer with ConfigMgr


March 27, 2024

Update: Add {0000111F-0000-1000-8000-00805F9B34FB} for certain Polycom Bluetooth devices.

I recently got a request to see if it was possible to disable Bluetooth file transfer without disabling Bluetooth (because that would be bad) using Configuration Manager. Challenge accepted! First things first, read up on the Bluetooth Policy CSP to see if it is possible and what all is involved. Second, do a search on the internet to see if someone else has already solved this problem. After searching and searching, the only things I found were blogs that regurgitated the docs on how to completely disable Bluetooth. That was a non-starter given all of the Bluetooth devices in use today.

Back to the policy to see if I could create a PowerShell script that could be used in a Configuration Item and Configuration Baseline. In the CSP, there is a setting called ServicesAllowedList. The default for this setting is an empty string, which means that everything is allowed. However, once a value is defined, whatever is in that value is allowed and everything else is blocked. The services are identified by UUID.

In my searches, I found some suggestions of using AppLocker to block Fsquirt, the executable used for Bluetooth file transfer. However, it appears that this might not block file transfers from 3rd-party apps that use the Microsoft Bluetooth API. There are two services, OBEX Object Push (OPP) (the profile used for file transfer) and Object Exchange (OBEX) (the underlying protocol used for file transfer), that, when omitted from the allowed list, should block any Bluetooth file transfer.

This left 14 UUIDs that needed to be set as the value in order for other Bluetooth devices to continue to work. Using a little bit of PowerShell running as SYSTEM (this setting can only be set as SYSTEM; running as Admin is not enough), we can check that this value is set correctly and set it if it is not.
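
Since the setting can only be written as SYSTEM, an easy way to test the snippets below interactively before wrapping them in a CI is to launch PowerShell as SYSTEM with Sysinternals PsExec (assuming PsExec is available; any run-as-SYSTEM technique will do):

psexec.exe -s -i powershell.exe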

In ConfigMgr, create a new CI. Be sure to select “This configuration item contains application settings”:

On the Detection Methods step, enter the following PowerShell. This will be used to determine if the system has Bluetooth. It could also be modified to be used for exceptions.

#Detect Bluetooth Devices
$BluetoothDevices = Get-WmiObject 'Win32_PnPEntity' | Where-Object {$_.Caption -like '*Bluetooth*'}
if ($BluetoothDevices) {
    Write-Output "Bluetooth Device(s) Detected"
    }

On the Settings step, click the New button to create a new Setting. Give it the name Bluetooth File Transfer – Disable.

For the Discovery script, use the following PowerShell. This contains the 14 UUIDs that are to be allowed:

#Disable Bluetooth File Transfer
#Discovery Script
#22.10.15
$Compliance = "Compliant"
$TargetServicesAllowedList = "{0000111E-0000-1000-8000-00805F9B34FB};{00001203-0000-1000-8000-00805F9B34FB};{00001108-0000-1000-8000-00805F9B34FB};{00001200-0000-1000-8000-00805F9B34FB};{0000110B-0000-1000-8000-00805F9B34FB};{0000110C-0000-1000-8000-00805F9B34FB};{0000110E-0000-1000-8000-00805F9B34FB};{0000110F-0000-1000-8000-00805F9B34FB};{00001124-0000-1000-8000-00805F9B34FB};{00001801-0000-1000-8000-00805F9B34FB};{00001812-0000-1000-8000-00805F9B34FB};{00001800-0000-1000-8000-00805F9B34FB};{0000180A-0000-1000-8000-00805F9B34FB};{00001813-0000-1000-8000-00805F9B34FB}"
$CurrentServicesAllowedList = (Get-CimInstance -Namespace 'root\cimv2\mdm\dmmap' -Query 'Select * from MDM_Policy_Result01_Bluetooth02').ServicesAllowedList

if ($CurrentServicesAllowedList -ne $TargetServicesAllowedList) 
    {
    $Compliance = "Non-compliant"
    }

$Compliance

For the Remediation script, use the following PowerShell.

#Disable Bluetooth File Transfer
#Remediation Script
#22.10.15
$Compliance = "Compliant"
$TargetServicesAllowedList = "{0000111E-0000-1000-8000-00805F9B34FB};{00001203-0000-1000-8000-00805F9B34FB};{00001108-0000-1000-8000-00805F9B34FB};{00001200-0000-1000-8000-00805F9B34FB};{0000110B-0000-1000-8000-00805F9B34FB};{0000110C-0000-1000-8000-00805F9B34FB};{0000110E-0000-1000-8000-00805F9B34FB};{0000110F-0000-1000-8000-00805F9B34FB};{00001124-0000-1000-8000-00805F9B34FB};{00001801-0000-1000-8000-00805F9B34FB};{00001812-0000-1000-8000-00805F9B34FB};{00001800-0000-1000-8000-00805F9B34FB};{0000180A-0000-1000-8000-00805F9B34FB};{00001813-0000-1000-8000-00805F9B34FB}"
$CurrentServicesAllowedList = (Get-CimInstance -Namespace 'root\cimv2\mdm\dmmap' -Query 'Select * from MDM_Policy_Result01_Bluetooth02').ServicesAllowedList

if ($CurrentServicesAllowedList -ne $TargetServicesAllowedList) 
    {
    $Compliance = "Non-compliant"
    }

if ($Compliance -eq "Non-compliant") {
    #Check for Instance
    $BluetoothPolicy = Get-CimInstance -Namespace 'root\cimv2\mdm\dmmap' -Query 'Select * from MDM_Policy_Config01_Bluetooth02'

    #Turn off Bluetooth file transfer
    #If Bluetooth policy exists then set ServicesAllowedList
    if ($BluetoothPolicy)
        {
        $Result = Set-CimInstance -InputObject $BluetoothPolicy -Property @{ParentID="./Vendor/MSFT/Policy/Config";InstanceID="Bluetooth";ServicesAllowedList=$TargetServicesAllowedList}
        }
    #If Bluetooth policy does not exist then create it and set ServicesAllowedList
    else {
        $Result = New-CimInstance -Namespace 'root\cimv2\mdm\dmmap' -ClassName 'MDM_Policy_Config01_Bluetooth02' -Property @{ParentID="./Vendor/MSFT/Policy/Config";InstanceID="Bluetooth";ServicesAllowedList=$TargetServicesAllowedList}
        }
    }
Exit $Result.ReturnValue

NOTE: If you are wondering why I double-check for compliance again in the remediation script, it is because there is a bug in CM that we have hit twice now after upgrades: the remediation script will just randomly run on systems. This created a mess when our site-balancing script decided to run, causing tens of thousands of clients to re-assign their site (the discovery script does this in a very controlled fashion, but the remediation did not have that logic). Now we include this check for any major changes just to be safe.

Create a Compliance Rule using the following settings:

Finish creating the CI and then create a Baseline. Be sure to change the Purpose to Optional. This way, if a targeted device does not have Bluetooth, the Baseline will simply not be applicable.

Create a test collection and deploy the Baseline.

An easy way to test this is to pair two Windows systems. On Windows, open up the Bluetooth & other devices menu in Settings and click on Add Bluetooth or other device.

On the Add a device window, select Bluetooth – Mice, keyboards, pens, or audio and other kinds of Bluetooth devices.

Find the device you want to pair. In my case, I am using my Surface Book.

Once the connection is successful, back in the Bluetooth & other devices menu, click on the Send or receive files via Bluetooth link.

Test a file transfer to see if it is working correctly by clicking Send files.

Select the device you would like to send the file to.

Choose a file.

And the result should be a successful file transfer.

Add the device to the collection that was set up earlier for the Baseline deployment and make sure it shows up under the Configuration Manager Properties under the Configurations tab and that it has been evaluated and is compliant.

Click the View Report button and it should show that setting has been remediated.

Repeat the test above and this time it should say that the transfer was not completed and that file transfer is disabled by policy.

Keep in mind that this setting will persist even if the device is no longer targeted with the Bluetooth File Transfer – Disable Baseline. In the links above, I have also created a CI and Baseline that will enable file transfer again by clearing the ServicesAllowedList value. Be sure to test this in your environment with the various Bluetooth devices that are used to ensure there are no issues. In the testing that I have done, everything has continued to work, and I am able to pair/use Bluetooth devices. If you do run into any issues, please leave a comment below. Also, open up a support case with the vendor as they might have their own GUID that can be added.
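
For reference, the re-enable CI mentioned above only needs to clear ServicesAllowedList back to the CSP default (an empty string means everything is allowed). A minimal sketch of that remediation, using the same MDM bridge class as the scripts above and, like them, run as SYSTEM:

#Enable Bluetooth File Transfer again by clearing the allowed list
$BluetoothPolicy = Get-CimInstance -Namespace 'root\cimv2\mdm\dmmap' -Query 'Select * from MDM_Policy_Config01_Bluetooth02'
if ($BluetoothPolicy) {
    Set-CimInstance -InputObject $BluetoothPolicy -Property @{ParentID="./Vendor/MSFT/Policy/Config";InstanceID="Bluetooth";ServicesAllowedList=""}
    }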

Originally posted on https://miketerrill.net/

Disable Bluetooth File Transfer with Intune


March 27, 2024

Update: Add {0000111F-0000-1000-8000-00805F9B34FB} for certain Polycom Bluetooth devices.

After figuring out how to Disable Bluetooth File Transfer with ConfigMgr, I figured it would be worthwhile to do this for all of the Intune admins out there. This is easy since all of the hard work (figuring out which services to allow) was already done.

Sign on to the Intune portal and head over to Devices > Configuration profiles and select + Create profile.

Under Platform, select Windows 10 and later and for Profile type, select Settings catalog, then click Create.

  1. Give it a Name, like Disable Bluetooth File Transfer, click Next.
  2. On the Configuration settings step, click + Add settings.
  3. In the Settings picker, select Bluetooth.
  4. Under the 7 settings in “Bluetooth” category, select Services Allowed List, then close the Settings picker.
  5. Back on the Configuration settings step, copy and paste the following into the field and then click Next:
    {0000111E-0000-1000-8000-00805F9B34FB};{00001203-0000-1000-8000-00805F9B34FB};{00001108-0000-1000-8000-00805F9B34FB};{00001200-0000-1000-8000-00805F9B34FB};{0000110B-0000-1000-8000-00805F9B34FB};{0000110C-0000-1000-8000-00805F9B34FB};{0000110E-0000-1000-8000-00805F9B34FB};{0000110F-0000-1000-8000-00805F9B34FB};{00001124-0000-1000-8000-00805F9B34FB};{00001801-0000-1000-8000-00805F9B34FB};{00001812-0000-1000-8000-00805F9B34FB};{00001800-0000-1000-8000-00805F9B34FB};{0000180A-0000-1000-8000-00805F9B34FB};{00001813-0000-1000-8000-00805F9B34FB}
    This will automatically expand and look like the following:
  6. On the Scope tags step, add any necessary Scope tags, then click Next.
  7. On the Assignments step, select a group that contains a test machine, then click Next.
  8. On the Review + create step, review all the settings and then click Create.

An easy way to test this is to pair two Windows systems. On Windows, open up the Bluetooth & other devices menu in Settings and click on Add Bluetooth or other device.

On the Add a device window, select Bluetooth – Mice, keyboards, pens, or audio and other kinds of Bluetooth devices.

Find the device you want to pair. In my case, I am using my Surface Book.

Once the connection is successful, back in the Bluetooth & other devices menu, click on the Send or receive files via Bluetooth link.

Test a file transfer to see if it is working correctly by clicking Send files.

Select the device you would like to send the file to.

Choose a file.

And the result should be a successful file transfer.

Add the device to the targeted group and sync the policy. Repeat the test above and this time it should say that the transfer was not completed and that the file transfer is disabled by policy.

Be sure to test this in your environment with the various Bluetooth devices that are used to ensure there are no issues. In the testing that I have done, everything has continued to work, and I am able to pair/use Bluetooth devices. If you do run into any issues, please leave a comment below. Also, open up a support case with the vendor as they might have their own GUID that can be added.

Originally posted on https://miketerrill.net/
