
How to open CMTrace in WinPE like a boss

If you have ever done OSD, then chances are you have had to open up CMTrace a time or two to look at the smsts.log file. CMTrace displays one of the most annoying pop-up boxes of all time, and it is usually hiding behind the running task sequence dialog window: “Do you want to make this program the default viewer for log files?”


The answer is yes for the millionth time! Wouldn’t it be nice if it just did this automatically and never asked you again? If I had to guess, I bet you are nodding your head yes. In this post, I am going to show you how to modify your boot images so that it never asks you this annoying question again while running in WinPE.

Several years ago (back in 2009) I developed a solution to automatically open CMTrace (called Trace32 in those days) for a debug task sequence. I finally got around to blogging the solution a few years ago as ConfigMgr 2012 OSD: Automatically Open SMSTS log. That post contains the important registry keys that we need to set in order to bake this behavior into our boot images (eliminating those steps during the WinPE phases).

  1. The first thing we need to do is create a directory that we can use to mount our boot images (for example, MD d:\mount).
  2. Next we need to mount the boot image. This can be done using DISM (be sure to run it from an elevated command prompt that has DISM in the path – like the Deployment and Imaging Tools Environment shortcut under Windows Kits). Select the boot image you would like to modify; I am using the default x86 boot image (in this example Configuration Manager is installed in D:\Program Files\):
    Dism /mount-wim /wimfile:”D:\Program Files\Microsoft Configuration Manager\OSD\boot\i386\boot.wim” /index:1 /mountdir:D:\mount
  3. Now we need to load the DEFAULT registry hive from the WinPE image. In my other blog post we create the key under HKEY_CURRENT_USER; however, since this is an offline registry, we are going to set it in the HKEY_USERS hive, which is where HKCU loads its defaults from:
    Reg load HKU\winpe d:\mount\Windows\System32\config\default
  4. Now that we have the offline registry loaded, we can create the entries that we need. We will create the following registry keys – the very ones CMTrace sets when you answer Yes to that annoying pop-up box – if they don’t already exist:
    Reg add HKU\winpe\Software\Classes\.lo_ /ve /d Log.File /f
    Reg add HKU\winpe\Software\Classes\.log /ve /d Log.File /f
    Reg add HKU\winpe\Software\Classes\Log.File\shell\open\command /ve /d “\”x:\sms\bin\x64\CMTrace.exe\” \”%1\”” /f
    NOTE: I used to put CMTrace in the Windows\System32 directory, but this is no longer needed since x:\sms\bin\i386 (use this path above for 32-bit boot images) and x:\sms\bin\x64 are now in the path and ConfigMgr places the proper architecture of CMTrace in these locations by default. Also, be careful of smart quotes if copying and pasting.
  5. Next, unload the WinPE registry:
    Reg unload HKU\winpe
  6. Now we need to unmount the WIM file and commit the changes:
    Dism /unmount-wim /mountdir:d:\mount /commit
  7. In the Configuration Manager Console, select the boot image and update the distribution points. Generate new boot media based on the modified boot image, boot up a system (or PXE boot) and test by opening up a command prompt and typing CMTrace.

If everything worked right, CMTrace will open up without asking you “Do you want to make this program the default viewer for log files?”, as it already knows the answer. Repeat for other boot images you would like to modify. Here is a link to a text file that can be downloaded and renamed to .bat.
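The steps above can be rolled into a single batch file. The sketch below assumes the default x86 boot image path, a mount directory of D:\mount, and a 64-bit CMTrace path inside the image; adjust the paths (and i386 vs x64) for your environment, and run it elevated from the Deployment and Imaging Tools Environment:

```bat
@echo off
REM Sketch only: mounts the boot image, pre-answers the CMTrace
REM default-viewer prompt in the offline DEFAULT hive, then commits.
REM Assumes ConfigMgr is installed in D:\Program Files and D:\mount exists.
set "WIM=D:\Program Files\Microsoft Configuration Manager\OSD\boot\i386\boot.wim"
set "MOUNT=D:\mount"

Dism /Mount-Wim /WimFile:"%WIM%" /Index:1 /MountDir:%MOUNT%
Reg load HKU\winpe %MOUNT%\Windows\System32\config\default

Reg add HKU\winpe\Software\Classes\.lo_ /ve /d Log.File /f
Reg add HKU\winpe\Software\Classes\.log /ve /d Log.File /f
REM Use x:\sms\bin\i386 below for 32-bit boot images.
Reg add HKU\winpe\Software\Classes\Log.File\shell\open\command /ve /d "\"x:\sms\bin\x64\CMTrace.exe\" \"%%1\"" /f

Reg unload HKU\winpe
Dism /Unmount-Wim /MountDir:%MOUNT% /Commit
```

Note that inside a .bat file the %1 in the open command must be escaped as %%1; if you run the Reg add line interactively instead, use a single %1.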

If you think this should automatically be in Configuration Manager, head on over to UserVoice and vote for Stop CMTrace from asking us if we want to use it as the default viewer for log files in WinPE.

Originally posted on https://miketerrill.net/



BIOS and Secure Boot State Detection during a Task Sequence

With all of the security issues and malware lately, BIOS to UEFI for Windows 10 deployments is becoming a pretty hot topic (unless you have been living under a rock, you know that UEFI is required for a lot of the advanced security functions in Windows 10). In addition, with the Windows 10 Creators Update, Microsoft has introduced a new utility called MBR2GPT that makes the move to UEFI a non-destructive process. If you have already started deploying Windows 10 UEFI devices, it can be tricky to determine what state these devices are in during a running Task Sequence. In Current Branch 1702, the Configuration Manager team introduced a new class called SMS_Firmware with an inventory property called UEFI that helps determine which computers are running in UEFI. This can be used to build queries for targeting and reports, but it would be nice to handle this plus Secure Boot state (and CSM) during a running Task Sequence. We do have the Task Sequence variable called _SMSTSBootUEFI that we will use, but we need to determine the exact configuration in order to execute the correct steps.

There are three different BIOS modes that a system can be running:
  • Legacy BIOS – also known as BIOS emulation, this requires an MBR-partitioned disk in order to boot. Most Windows 7 systems are running this configuration.
  • UEFI Hybrid – the system is running UEFI, but with the Compatibility Support Module (CSM), also known as Legacy ROMs, enabled. Unlike Legacy BIOS, this mode requires a GPT-partitioned disk in order to boot. Windows 7 can run in this configuration, and before MBR2GPT existed this was the recommended mode for deploying Windows 7 so that it could later be upgraded to Windows 10 without repartitioning the disk.
  • UEFI Native – the system is running UEFI without the CSM. It also requires a GPT-partitioned disk in order to boot. Windows 7 cannot run on a system that is configured for UEFI Native.

Now let’s talk about Secure Boot. Secure Boot and the CSM are incompatible – if the CSM is enabled, you cannot enable Secure Boot, and when Secure Boot is enabled, you cannot enable the CSM. Based on this, we know that Secure Boot will be unsupported in Legacy BIOS and UEFI Hybrid modes (Note: by unsupported, I am not talking about whether the device is capable of running Secure Boot; Secure Boot requires a device running UEFI 2.3.1 Errata C or later and an operating system capable of running Secure Boot). Configuration Manager currently does not have out-of-the-box functionality for reporting on Secure Boot, but the feature has shown up in the Technical Preview 1703 release. In the meantime, see my blog called Inventory Secure Boot State and UEFI with ConfigMgr on how to extend hardware inventory in Current Branch 1702 or older in order to collect this information.

From this information, we can create a handy chart to help visualize the configuration options:

  BIOS Mode     Disk   CSM        Secure Boot
  Legacy BIOS   MBR    n/a        Unsupported
  UEFI Hybrid   GPT    Enabled    Unsupported
  UEFI Native   GPT    Disabled   Supported (On or Off)

NOTE: For UEFI Hybrid, Secure Boot is unsupported while the CSM is enabled; however, an operating system that supports Secure Boot will show the status as Off (Disabled) in System Information.

Now, with this information and MBR2GPT, we should be able to create a single Windows 10 Feature Update Task Sequence for Windows 7/8/8.1/10 clients, and it should not matter whether they are already running UEFI or Legacy BIOS. The actions that we need to perform do matter, and this is where we can set some Task Sequence variables to help with the logic on the various steps. But first, let’s see what needs to be done based on the four configurations above. We already said that Legacy BIOS is the only configuration that uses an MBR-partitioned disk; therefore, it is the only configuration where we need to run MBR2GPT. When we run MBR2GPT, we also need to configure the device’s firmware settings for UEFI and enable Secure Boot (the Microsoft solution does not do this for you – you are on your own to use the vendor methods for this piece).

If you are one of the few that took last year’s recommendation and started deploying Windows 7 in UEFI mode, then those systems will be running UEFI Hybrid. We do not need to run MBR2GPT on these systems since they are already running a GPT partitioned disk. We simply need to turn off the CSM (or Legacy ROMs) and enable Secure Boot (once again, the Microsoft solution does not do this for you).

For systems that are running UEFI Native but do not have Secure Boot enabled, we simply need to enable Secure Boot. Lastly, for systems that are already running UEFI Native with Secure Boot enabled, we do not need to do anything additional. Adding these actions to our chart makes it very clear what needs to be done under each scenario:

  Legacy BIOS                       Run MBR2GPT, configure firmware for UEFI, enable Secure Boot
  UEFI Hybrid                       Disable the CSM, enable Secure Boot
  UEFI Native (Secure Boot off)     Enable Secure Boot
  UEFI Native (Secure Boot on)      No action needed
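The detection half of this logic can be sketched in batch for a system that is already running Windows 10. This is an illustration only, not the Task Sequence implementation: it assumes the UEFISecureBootEnabled registry value is absent on Legacy BIOS systems and present (0x0 or 0x1) on UEFI systems, and it cannot tell UEFI Hybrid from UEFI Native on its own (the CSM state has to come from the vendor’s own tooling):

```bat
@echo off
REM Sketch: classify a running Windows 10 system against the chart above.
REM Assumption: UEFISecureBootEnabled only exists on UEFI-booted systems.
set SB=
for /f "tokens=3" %%A in ('reg query HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\State /v UEFISecureBootEnabled 2^>nul') do set SB=%%A

if not defined SB (
    echo Legacy BIOS: run MBR2GPT, configure firmware for UEFI, enable Secure Boot
) else if "%SB%"=="0x0" (
    echo UEFI with Secure Boot off: disable the CSM if enabled, then enable Secure Boot
) else (
    echo UEFI Native with Secure Boot on: no action needed
)
```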

In a follow-up blog post, I will go into more detail on how we can use this logic in a single Windows 10 In-Place Upgrade Task Sequence, what the steps look like, and where each of them goes.

Originally posted on https://miketerrill.net/



User Group Getting Started Guide


I just got back from the best systems management conference called the Midwest Management Summit. The reason this conference is so great (besides all of the awesome technical sessions and information) is because it is built for the community, by the community. In fact, the Microsoft Management Summit actually started out as a User Conference – I still remember the very first one I attended – The SMS & Windows 2000 User Conference.

Every year at the Midwest Management Summit, there is a session on user groups – how to find them, start them, or make one better. The first place to start is by searching for a user group in your area. The Minnesota System Center User group maintains a pretty comprehensive list of systems management/System Center user groups here. And if you know of any that should be added, just reach out to them and they will gladly add it.

Since I have been running the Arizona Systems Management User Group (AZSMUG for short) for over a decade, I often get questions from individuals on how to start a user group. I recently decided to put this in my OneNote for future use, but thought it would be great to share with everyone who is looking to start a user group. There isn’t an exact formula for what works, and many user groups run a little differently. But the most important thing is to stay the course and keep the group going.

Fellow User Group Leader Daniel Ratliff also has a blog on his tips here.

Now for my tips and information on how to get a user group up and running:

Domain Name
Purchase and register a domain name. For AZSMUG, I use GoDaddy because it works well with Office 365 (the DNS settings for Skype for Business are important). I also use it to redirect www.azsmug.org to our Office 365 SharePoint page so that others can find the user group.

Technical Community
Register your user group with Technical Community. This will get you access to a free Office 365 E3 subscription that you can use to set up a few email accounts. It also gets you access to SharePoint, which you can use as a public website for your user group so that others can find it. You may need to let them know that you are a new user group just getting started and use one of the other user groups or user group leaders as verification. Note that the Office 365 subscription needs to be renewed/verified every year by proving that you are still an active user group.


In addition, you can get funding (if you are lucky) from Technical Community for user group meetings, although I have given up on requesting funding after being denied several times.

Email Addresses
As soon as you get your domain name and Office 365 account set up and configured, create a few email accounts that you will use for official user group communication. I suggest setting up a shared mailbox for your user group – like usergroup@usergroup.org (where usergroup is the name of your user group). Also set up accounts for any members that will be helping to run the user group, and give those accounts access to the shared mailbox.

Chair Members or Board
If you are just getting started, you may just have yourself or another person or two helping to run the user group. In this case, it is probably fine not to have anything official in terms of who runs the group and how it is run. If, after you start the user group, you find that you have lots of interest from members wanting to help run it, you may want to adopt formal bylaws under which elections are held yearly for various positions (like President, Vice President, Secretary, Treasurer). I know groups that use both models and each works well depending on the group.

Email lists
Email can be a primary method of communication with your user group members. There are several email distribution lists available today, plus you can create your own distribution list with your Office 365 account. Otherwise, myITforum still maintains some distribution lists. MailChimp is another option that some user groups use (it has some integration with Eventbrite).

Sponsors & Sponsorship
How your group is set up will determine how you get sponsorship. Some user groups get non-profit status so that they can have an operating budget and checking account. Other user groups get sponsors to come in and pay for food and beverages (and maybe guest speaker travel). Sponsors are given the option to present on their products at your user group – just make sure they keep it technically focused and do not turn it into a timeshare sales pitch (your user group members will thank you). Also, some vendors will want your user group member list of names/companies/email addresses. Depending on your sponsorship agreements, you may turn this over – just make sure your members know in advance and are okay with it. Otherwise, have the vendor raffle off an item at the meeting they present at; that way, user group members can opt in to the raffle by providing their information. This keeps you and the user group out of any privacy issues.

Speakers
Try to speak at your own user group at least once a year. This will help you in your current position at work and be beneficial for your career to get some public speaking experience. Also try to encourage other user group members to present at your meetings. This will help them out as well. Plus, chances are that someone else is facing the same problem and needs to come up with a solution. Or use it to demonstrate your knowledge about a specific feature and how that helps in your day to day job.

Getting guest speakers can generate interest and get more people to attend the meetings. Many of the Microsoft MVPs will gladly present at your user group if they happen to be in town and are available. Otherwise, use sponsors or sponsorship money to pay for their travel to come to your meeting.

User Group Focus
Some user groups run a general focus (like all things Microsoft), whereas other user groups are more specific (like System Center or just even one product focus). Find out what fits for your audience. The last thing you want to do is put a bunch of time and effort into a meeting only to get a few people to show up because the topic is not of interest to the other user group members.

Meeting Invites
There are two good services for sending out invites and tracking registrations that cost nothing at the basic level of service: Eventbrite and Meetup both work well and have professional-looking invitations. They also provide other things (like promotion and the ability to email notifications and reminders – either natively or through an external service like MailChimp).

Social Media
In addition to the meeting invitation service you provide (which can be used to promote events), use social media to promote your user group. Twitter, LinkedIn and Facebook are all great ways to spread the word about your user group and upcoming meetings.

Meeting frequency
This can be a tricky one. If you are just starting off, don’t bite off more than you can chew. In other words, start by planning for one meeting per quarter or four times per year (summers can be slow). Getting everything lined up for a user group meeting is a lot of work. I have run meetings every other month for several years and changed to a quarterly schedule a couple of years ago. This seems to draw more interest and attendance from user group members. If it is too frequent, members are more likely to have conflicts or decide to skip a meeting and catch the next one. But I do know groups that run every other month or even every month. Just keep in mind that it can be a lot of work keeping up with that cadence unless you have others helping out. If you can set a fixed date, say the third Thursday of the third month in the quarter, great – this will help people plan and know when the next meeting is going to be. If you cannot have a fixed date, then be sure to let your user group members know enough time in advance when you are in the pre-planning phases of the next meeting so that they can ‘pencil’ it into their calendar.

Meeting duration and times
This is another one that can be tricky, and you will not be able to please everyone. The time might also depend on when your guest speakers can present. If they are busy working/consulting/teaching during the day, then you might have to run evening meetings (like from 5 to 7).

Meeting locations
If there is a local Microsoft office, then chances are you will be able to have your meetings there. Reach out to your local Microsoft contacts as you will need a Blue badge sponsor if you plan on having meetings outside of working hours (like after 5 PM). Also, they will be able to check the meeting room schedule and book the room for you. They can also help promote and generate interest in the user group with their customers in the area. Otherwise, a local library, training center or a member’s work location are all other alternatives. Pick a location with free parking or one that will validate parking.

Online Meetings/Recording Meetings
If you get to a point where you want to open up meetings online for others to attend or if you just want to record them, you can use Skype for Business that is part of the Office 365 E3 subscription.

If you have any other suggestions, please let me know in the comments below or send me a message on Twitter.

Originally posted on https://miketerrill.net/



BIOS and Secure Boot State Detection during a Task Sequence Part 2

In BIOS and Secure Boot State Detection Part 1, I talked about the various states a system can be in for BIOS mode and Secure Boot. Having these states defined as OSD variables can be useful in determining what actions need to be performed in order to switch a system to UEFI Native with Secure Boot enabled. Depending on how you perform the vendor firmware changes, you may or may not need to distinguish between UEFI Hybrid and UEFI Native. UEFI Hybrid is when the system is running UEFI with the Compatibility Support Module (CSM) enabled (this is how you can run Windows 7 in UEFI mode – yes, really). In order to enable Secure Boot, the CSM needs to be disabled first. Also, for Secure Boot state, you may or may not need to define all of the possible options. If the goal is to get to Secure Boot enabled, it may be good enough to just test for that. However, Secure Boot disabled may be a nice-to-have in case you have systems that do not play well with Secure Boot enabled.

I start off by creating a group called Set BIOS and Secure Boot Variables. For a Windows 10 In-place Upgrade Task Sequence, I place this group after the Install Updates step in the Post-Processing group (but more on that in another post). This way, the system is already running Windows 10, which is a Secure Boot capable operating system (unlike Windows 7, which is not). The first Task Sequence variable I like to define is called BIOSMode; I set this to LegacyBIOS on the condition that _SMSTSBootUEFI equals FALSE.


We could just use the _SMSTSBootUEFI variable; however, it is not as intuitive to other administrators if they need to read or edit the Task Sequence, or to read Status Messages and/or log files.

Next, add another Task Sequence variable called SecureBootState with the value Enabled. The condition on this is going to be based on the registry value:  HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\State\UEFISecureBootEnabled = 1.


Now add another Set Task Sequence variable step with the same name, SecureBootState, but this time set the value to Disabled. The condition on this is going to be based on the registry value:  HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\State\UEFISecureBootEnabled = 0.


There is also the Secure Boot state of unknown or NA, but for the time being I do not use that one in any of my Task Sequences. I also do not currently use the condition where SecureBootState equals Disabled, but I figured it would be handy to have in the future if needed. I have also created an item on UserVoice so that maybe one day we will see a variable like this as part of the product: Create an OSD variable for Secure Boot – _SMSTSSecureBootState.
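For reference, the logic of the three Set Task Sequence Variable steps can be expressed as the following sketch (an illustration only – in the real Task Sequence these are step conditions, not a script; the variable name and registry value come straight from the steps above):

```bat
@echo off
REM Sketch of the three Set Task Sequence Variable steps and their conditions.
REM Step 1: BIOSMode = LegacyBIOS when _SMSTSBootUEFI equals FALSE.
if /i "%_SMSTSBootUEFI%"=="false" set BIOSMode=LegacyBIOS

REM Steps 2 and 3: SecureBootState from the UEFISecureBootEnabled value.
for /f "tokens=3" %%A in ('reg query HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\State /v UEFISecureBootEnabled 2^>nul') do (
    if "%%A"=="0x1" set SecureBootState=Enabled
    if "%%A"=="0x0" set SecureBootState=Disabled
)
```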

Feel free to download my exported Set BIOS and Secure Boot Variables Task Sequence here (created on Configuration Manager Current Branch 1702). Stay tuned on how to use these variables in a BIOS to UEFI Task Sequence…

Originally posted on https://miketerrill.net/



Upgrading the BIOS Part 1

Operating systems and software are not the only things that need to be upgraded these days – it is really important that the BIOS firmware gets updated as well. Lately, I have been talking to a lot of IT Pros at conferences and user group meetings, and I have discovered that not too many people upgrade or ‘flash’ the BIOS on systems after they have been deployed (or even ever – sometimes they are sent out with the version they came with from the vendor). It is really important to change this going forward. I recommend developing standard versions that you support so that all systems are running your minimum standard version or newer. Periodically, a review of BIOS releases should be done to see if a later version should become the new minimum standard.

So why even upgrade the BIOS in the first place? There are a few reasons I can think of that answer this question. The first reason is Windows 10 support. Believe it or not, the hardware vendors test the latest operating systems on the models that they currently support. Take the Lenovo ThinkPad T450: looking at the BIOS release history, you can see that Windows 10 support was added in version 1.17:

<1.17> 2015/09/07
– (New) Added win10 support.
– (New) Enabled N25Q128 SPI ROM support.
– (New) Added security fix addresses LEN-2015-002 SMM “Incursion” Attack.
– (New) Included security fixies.
– (New) Added new incompatibility bit for Back Flash Prevention.

Now this does not mean that Windows 10 will not work on versions lower than version 1.17. It means that this is probably the version that they validated and tested Windows 10 against. If you happen to run into an issue running Windows 10 on a version lower than 1.17 and you call in for support, chances are they will have you upgrade the BIOS to the latest version to see if that addresses your issue.

The second reason to upgrade the BIOS is to get fixes. It makes more sense to start off on one of the latest releases than on a version that is a year or more behind in fixes. By not upgrading to a recent version as part of the deployment process, you are potentially wasting everyone’s time – the end user’s, the help desk’s, desk side’s (and your own if the problem comes back to you). Save the hassle and be proactive. Looking at a newer BIOS release for the same Lenovo ThinkPad T450, we see that there is even an ‘SCCM’ fix listed in version 1.19:

<1.19>
– (New) Updated verbtable for noise.
– (New) Changed Haswell + N16s Tolud.
– (New) Updated Winuptp & Winuptp64.
– (Fix) Fixed an issue that srsetupwin fails to install pop/hdp with clearing SVP.
– (Fix) Fixed an issue related to SCCM 80070490 error when HDP is set.
– (Fix) Fixed an issue related to silent install auto restart issue.

The third reason to upgrade the BIOS is to get security-related fixes. Yes, they find and fix security issues in BIOS firmware just like they do in operating systems and software. Do your security team (and yourself) a favor and deploy versions that contain these security fixes. Looking at the BIOS release history for the HP EliteBook Folio 9470m, we can see some of these security fixes listed in this version:

Version:F.60 A (20 Jan 2015)
Fixes
– Fixes an intermittent issue where enabling the LAN/WLAN switching feature in the F10 BIOS settings causes the system to stop functioning properly (hang) at POST after a warm boot.

Enhancements
– Provides improved security of UEFI code and variables. HP strongly recommends transitioning promptly to this updated BIOS version which supersedes all previous releases.

NOTE: Due to security changes, after this BIOS update is installed, previous versions cannot be reinstalled.

Pay close attention to the note at the end of the release text – it states that previous versions cannot be reinstalled. What this means is that you can no longer ‘flash’ back to an earlier BIOS version. This is important when it comes to deploying BIOS updates and how we detect which systems need to be updated, but more on that later.

The fourth reason that comes to mind has to do with manipulating BIOS settings programmatically. I have written blogs and talked on the topic of using the vendor utilities to programmatically change BIOS settings (like BIOS to UEFI) during a Configuration Manager task sequence. Just as it is important to standardize on BIOS versions, you should also develop standards on how each BIOS setting should be configured in order to maintain consistency and ensure devices are configured accordingly. By running the latest BIOS version, you will ensure that these utilities work and configure the settings correctly.
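To give a flavor of what those vendor utilities look like on the command line (the utility names are real, but the exact flags vary by version and model, so treat these lines as hedged examples and check each vendor’s documentation):

```bat
REM HP BIOS Configuration Utility - apply a settings file, password in a bin file
BiosConfigUtility64.exe /setconfig:UEFISettings.txt /cpwdfile:password.bin

REM Dell Command Configure (CCTK) - example: set the active boot list to UEFI
cctk.exe bootorder --activebootlist=uefi

REM Lenovo exposes settings through WMI (Lenovo_SetBiosSetting and
REM Lenovo_SaveBiosSettings classes) rather than a dedicated settings exe.
```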

I am sure I can think of many more reasons why you should start baselining and upgrading the BIOS versions for the supported systems in your environment, but hopefully I have identified the top four and convinced you that this needs to be done on a regular basis. In the next blog, Upgrading the BIOS Part 2, I will discuss the approach to flashing the BIOS along with some lesser-understood caveats as they relate to BitLocker, BIOS passwords, and UEFI 64-bit systems.

Originally posted on https://miketerrill.net/



Upgrading the BIOS Part 2

In Upgrading the BIOS Part 1, I gave some very important reasons why you should be proactive about upgrading the BIOS on supported systems in your environment. In this blog, I want to discuss the approach to flashing the BIOS along with some lesser-understood caveats as they relate to BitLocker, BIOS passwords, and UEFI 64-bit systems.

I like any solution that I create and implement to be as modular as possible so that I can get maximum use out of it (it is the engineer in me, and probably the reason that I still enjoy playing with Legos at my age). When flashing the BIOS, we need to be able to do it under two different operating systems – a full operating system like Windows 7/8.1/10 and the lightweight WinPE. This allows us to handle existing clients that are already deployed and also bare metal/break-fix scenarios. After all, it would not make much sense to have to boot into a full operating system just to flash the BIOS. Other solutions that I found relied on Configuration Manager Applications and Package/Programs. While these may work for specific scenarios, they cannot cover all of them. The Install Application and Install Package task sequence steps only run under a full operating system, not WinPE, so those methods eliminate the bare metal scenarios. Sure, we could create another task sequence or a duplicate package that just does bare metal, but then we have twice as much to manage, update, and maintain – no thanks!

Now, there is a slight disclaimer that I need to put out there for the time being. Because of certain limitations with some vendor systems, plus the fact that Configuration Manager can only have one boot image assigned to a task sequence and that you need to use the correct boot image architecture to boot a UEFI system, you will need a separate task sequence to handle the bare metal/break-fix scenarios (or better yet, pressure the vendor into supporting 64-bit WinPE). The problem is that some vendor models currently only support a 32-bit flash utility. If a system is configured for UEFI (or we are doing BIOS to UEFI in a single task sequence – yes, this is possible now), then you need to use the corresponding boot image architecture, which is going to be 64-bit for modern PC systems (within the last four years or so). Long story short, be sure to check with the vendor of the models that you currently support in order to handle those exceptions (or get rid of them and buy something you can support).

Another important point: both the HP and Lenovo flash BIOS utilities require the WinPE-HTA component to be included in the WinPE boot image. Do not ask me why; I just know that they do not work without it. Just make sure that component is included and things should work fine. Dell, HP, and Lenovo do supply 32-bit and 64-bit flash BIOS utilities (with certain models being the exception), so you only need one task sequence if you have these vendors (and supported models). These are the only three vendors that I will be covering, but I will gladly take donated test systems of other vendors you would like me to test.

BIOS passwords can be tricky. Usually you password protect something in the first place in order to make it secure. However, the flash BIOS utilities will take the password as a command line parameter or, in some cases (HP), a bin file. Neither of these methods is ideal for automation with Configuration Manager. A bin file is downloaded to the cache (or TS working directory) at some point in the process, and you do not need the password in order to make changes (including clearing the password) as long as you have the bin file. Command lines get logged, and there is nothing like having a password in clear text sitting in a log file. If you would like to see better handling/log file suppression, then head on over to UserVoice and vote up Secret task sequence variable value Exposed. I am not advocating against using BIOS passwords – you should absolutely be using them in order to lock down settings that should not be changed (like Boot Order). You may have to get clever and write a compiled exe that masks the password(s) in your environment (yes – I know that even this can be cracked depending on how it is done, but at least it is more secure than clear text in log files or bin files). Lastly, if dealing with multiple passwords, most of the vendors allow three tries before requiring a reboot and attempting again. If you have multiple passwords in the environment, have a group in the Task Sequence that removes the password before flashing the BIOS. You typically only get one shot at specifying a password when flashing the BIOS, so this is a way to overcome that limitation.

When it comes to BitLocker, it will need to be suspended before flashing the BIOS (which is one of the reasons I like using a Task Sequence). If it is not suspended first, BitLocker will detect a change to the system and you will be prompted for the BitLocker recovery key upon restart. It is easy to suspend BitLocker, but keep in mind the native Configuration Manager step only suspends BitLocker for one restart. Newer Windows operating systems allow BitLocker to be suspended for a set number of reboots or indefinitely. See my previous blog, How to detect, suspend, and re-enable BitLocker during a Task Sequence, for more information and examples. For 3rd party disk encryption you are on your own. Your best bet is to contact the vendor on how they support flashing the BIOS. If they do not understand what you are asking, then start seeking alternative disk encryption products (like BitLocker).
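For reference, suspending BitLocker can also be done from a command line instead of the native step; a sketch (standard manage-bde syntax, not a verbatim copy from the post):

```text
REM Suspend BitLocker on C: for a single restart (what the native step does)
manage-bde -protectors -disable C:

REM Windows 10 1511 and later: a RebootCount of 0 suspends indefinitely
manage-bde -protectors -disable C: -RebootCount 0
```

The second form is handy when the flash utility triggers more than one reboot.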

So, here are my tips and tricks in a nutshell when flashing the BIOS:

  • Use a task sequence for total control.
  • Incorporate flashing the BIOS into your OSD process for Refresh/In-place Upgrade/New Computer/Break-Fix.
  • Suspend BitLocker!!! (or be prepared to enter the BitLocker recovery key)
  • Disable/re-enable BIOS passwords or pass the password on the command line of the flash utility (just be careful not to have passwords in clear text in log files or command lines).
  • Dell requires a utility called Flash64w in order to flash in WinPE x64 (see First look – Dell 64-bit Flash BIOS Utility).
  • HP and Lenovo systems work in WinPE x64, but require WinPE-HTA to work.
  • Test, test, test!
  • Baseline and document supported configurations (including how each setting is configured), current BIOS version and release date (this part is key).

In an upcoming blog series, I will be covering some of my tips and tricks on how to deploy BIOS updates in a modular, dynamic fashion during a Task Sequence (bonus – the same method can be used for drivers as well).

Originally posted on https://miketerrill.net/



Configuration Manager OSD, Recovery Partitions and MBR2GPT

As I was preparing for my Midwest Management Summit 2017 session, Building the Ultimate Windows 10 UEFI Task Sequence, I did a full end-to-end run of the In-place Upgrade Task Sequence and started running into problems. This led me to discover a couple of issues with Configuration Manager (specifically the Format and Partition Disk step) and Windows 10 Recovery partitions. This version of the Task Sequence flips the machine from BIOS to UEFI using the new MBR2GPT utility. The high-level process goes something like this:
1. Deploy TS to current OS (can be Win7/8/8.1/10) running in Legacy BIOS mode
2. In-place Upgrade to Windows 10 (still in Legacy BIOS after upgrade)
3. Once upgrade is done, reboot into WinPE 1703
4. Run MBR2GPT (the supported and recommended method is to run it in this version of WinPE)
5. Flip firmware settings (if successful)
6. Reboot to Windows 10 running UEFI
Only one little problem: MBR2GPT was not able to convert the disk (and I even managed to make it crash, but that is another story). After inspecting the disk layout, I noticed that there were 4 partitions (MBR2GPT can only work with 3 or fewer, since it needs to create the EFI partition). After further investigation, it appeared that there were now two Recovery partitions (which seemed a bit odd):


The test system was first built from the following Configuration Manager Windows 7 OSD Task Sequence:


The problem is with the highlighted Partition Disk 0 – BIOS step. Behind the scenes, it creates a diskpart answer file that it uses to partition and format the disk. The Recovery partition is getting set to type 7 instead of type 27 (hidden). When the Windows 10 Setup runs, it does not recognize (or use) the Recovery partition that was created by Configuration Manager and proceeds to create a proper 450 MB hidden Recovery partition after the Windows partition. This is why the system is ending up with four partitions. Even if the Recovery partition that Configuration Manager created were the correct type, it would need to have a certain amount of free space based on its size in order for the Windows 10 Setup to be able to use it (see BIOS/MBR-based hard drive partitions for details on the Recovery tools partition sizes). During the upgrade, setup should either resize the existing partition or create a new one if needed (see the /ResizeRecoveryPartition switch description in the Windows Setup Command-Line Options). In my testing, it never attempted to resize the partition and always created another Recovery partition after the Windows partition. This is still bad because we end up with 4 partitions and that does not work well for MBR2GPT.
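You can check ahead of time whether MBR2GPT will accept a given layout using its validation mode; a quick sketch (documented MBR2GPT switches, disk 0 assumed):

```text
REM From WinPE 1703 - validate only, no changes are made to the disk
mbr2gpt /validate /disk:0

REM From the full OS (Windows 10 1703 or later)
mbr2gpt /validate /disk:0 /allowFullOS
```

If validation fails here, conversion will fail too, which makes this a useful pre-flight check.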

Luckily, there are a few things that can be done to avoid both of these issues. When creating partitions for BIOS based systems, I still like to use the built-in Format and Partition Disk step for creating the System Reserved and the Windows partitions. The reason for this is that I can assign the Windows partition to an OSD variable (which can be useful later on in the Task Sequence). For the Recovery partition, I create this right after the Format and Partition Disk step using a diskpart script in a Package. Now this is what my Format and Partition Disk step looks like for BIOS systems:


On the Windows partition, I now set this to 100% of the remaining space on the disk, and notice the OSDisk variable that gets assigned for later use. Leaving this at 99% would leave far too much space for the recovery partition on really large disks, when we only need about 499 MB. The Task Sequence step directly after this is a Run Command Line step that calls diskpart with an answer file (with the same conditions as the Format and Partition Disk step).


NOTE: this is statically set to Disk 0 (like the Format and Partition Disk step). If you have systems where the OS disk shows up as Disk 1, then be sure to create multiple steps with conditions.

After selecting the disk number in my MBR_RecoveryPartition.txt file, the first thing I do is select the Windows partition (#2) and shrink it by 499 MB (to keep within the Recovery Partition parameters). Then simply create a new partition with the remaining space, format it and then set the partition type to 27 (hidden). I list the partition information before and after the commands so that the information gets picked up in the logs and status messages. Using this method, we now have a 499 MB hidden Recovery partition.
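Putting that description together, the answer file might look like this (my reconstruction from the steps above; the 499 MB shrink amount is from the post, the label and exact syntax are assumptions):

```text
list disk
select disk 0
list partition
select partition 2
shrink desired=499
create partition primary
format quick fs=ntfs label="Recovery"
set id=27
list partition
```

The Run Command Line step would then call something like diskpart /s MBR_RecoveryPartition.txt from the Package.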

Windows 10 In-Place Upgrade

If we just leave this as-is, then chances are the Windows 10 Setup will still create another recovery partition (which is not what we want to happen). Since the previous Windows recovery partition will be replaced, we can create a step right before the Upgrade Operating System step that cleans the Recovery partition. This way, the Windows 10 Setup will use it, since there is enough free disk space on that partition (which is exactly what we want so that MBR2GPT will run). This is also a Run Command Line step that calls diskpart with an answer file.
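A sketch of what that answer file might look like (my reconstruction based on the description; partition 3 is per the note below about targeting, the label is an assumption):

```text
select disk 0
list partition
select partition 3
format quick fs=ntfs label="Recovery"
set id=27
list partition
```

Formatting resets the partition type to 7, so the set id=27 at the end puts the hidden type back.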


NOTE: this is also statically set to Disk 0. If you have systems where the OS disk shows up as Disk 1, then be sure to create multiple steps with conditions. In addition, only target systems where the third partition is the recovery partition, or put a condition on this step that checks for a recovery partition. All data on this partition will be lost after this step executes.

When formatting this partition, the partition type gets reset to 7. The above diskpart script resets the partition type back to 27 (hidden) after the format. Windows 10 Setup will now use the third partition as the recovery partition and after the upgrade there will only be three partitions. This will now allow MBR2GPT to run correctly after the upgrade so that BIOS to UEFI can be done as part of the in-place upgrade to Windows 10.

Originally posted on https://miketerrill.net/



Windows 10 BIOS to UEFI In-place Upgrade Task Sequence using MBR2GPT

At the Midwest Management Summit 2017, I gave a session called Building the Ultimate Windows 10 UEFI Task Sequence. In this session, I covered both types of BIOS to UEFI Task Sequences – Wipe-and-Load and In-place Upgrade. This blog is going to cover the In-place Upgrade version of the BIOS to UEFI Task Sequence. This Task Sequence will use variables that I previously wrote about in the blog posts: BIOS and Secure Boot State Detection during a Task Sequence Part 1 & Part 2, as the goal is to have a single Task Sequence that covers the various scenarios. In addition, this blog also replaces the original blog I wrote, Using MBR2GPT with Configuration Manager OSD, when I first discovered MBR2GPT in one of the Windows Insider builds.

When converting from BIOS to UEFI, it is best to do this after the system has been upgraded to Windows 10. The version of Windows 10 does not matter, although, it should be a version that is still supported. Also, even though MBR2GPT will run in the full OS (starting with Windows 10 1703), it is a best practice recommendation to run it from WinPE (version 1703 or later). The reason for this is that there can be other applications on the system that use filter drivers for disk access (antivirus, antimalware, 3rd party disk encryption and other 3rd party p2p solutions). These applications could interfere with the disk conversion and potentially cause a failure, therefore, always run MBR2GPT in WinPE for best results.

Typically, a Boot Image is not assigned to an In-place Upgrade Task Sequence. However, since we are going to use WinPE as part of our Task Sequence, a WinPE 1703 (or later) Boot Image should be assigned to the Task Sequence. Also, it is important to use the 64-bit Boot Image when running on a 64-bit UEFI System.


The basic flow goes like this after the OS has been upgraded:
1. Disable BitLocker
2. Set BIOS and Secure Boot Variables
3. Restart into WinPE (if running Legacy BIOS)
4. BIOS to UEFI
5. Run MBR2GPT (if running Legacy BIOS)
6. Configure BIOS Firmware Settings
7. Restart into Windows
8. Re-enable BitLocker

If you are not using BitLocker, then you can skip the two BitLocker groups. Also, even though this process works with BitLocker using earlier algorithms, if you are coming from a version of Windows before Windows 10 1511 (such as Windows 7), then you might want to consider the new encryption type AES-XTS (see the blog BitLocker: AES-XTS new encryption type for more information). Moving to the new encryption type will require decryption/re-encryption of the drive.

Disable BitLocker


The reason for putting this Group in after the OS has upgraded is to cover the scenario when coming from Windows 7. As I mentioned in my blog How to detect, suspend, and re-enable BitLocker during a Task Sequence, the built-in Disable BitLocker Task Sequence step only suspends BitLocker for one reboot. Therefore, I run this Group one more time just in case BitLocker was re-enabled after the In-place Upgrade.

Set BIOS and Secure Boot Variables


I cover these steps in detail in the two blogs mentioned above, but the two variables that get used in the BIOS to UEFI Group are BIOSMode and SecureBootState.

Restart into WinPE (if running Legacy BIOS)


On this step, we only need to reboot if the system is running in Legacy BIOS Mode. If it is running in UEFI Hybrid or UEFI Native without Secure Boot, the disk will already be configured for GPT. On the Options tab, add the condition: Task Sequence Variable BIOSMode equals “LegacyBIOS” (Note: you could also use _SMSTSBootUEFI equals FALSE, but LegacyBIOS is easier to find in log files and status messages and is easier for help desk personnel to understand). Also add the hardware manufacturers that you want to support. This is important because you cannot convert BIOS to UEFI on a GEN 1 Hyper-V VM, and you will probably want to test the rest of the Task Sequence on a VM outside of the BIOS to UEFI steps.

BIOS to UEFI


On this Group, we only need to perform BIOS to UEFI or BIOS Firmware Settings if the system is running Legacy BIOS, UEFI Hybrid or UEFI Native without Secure Boot. On the Options tab, add the condition: Task Sequence Variable SecureBootState not equals “Enabled”. Once again, also add the hardware manufacturers that you want to support.

MBR2GPT (if running Legacy BIOS)


I like to run this step prior to configuring the BIOS settings. Secure Boot can be programmatically enabled; however, per the specification it cannot be programmatically disabled. If you enable Secure Boot prior to converting the disk and MBR2GPT is not able to convert the disk for some reason (like too many MBR partitions, see my blog Configuration Manager OSD, Recovery Partitions and MBR2GPT), then the machine will require a desk side visit to reset the BIOS settings and manually disable Secure Boot.

This step will run under WinPE. MBR2GPT can be called directly using a Run Command Line step since it is in the path in WinPE. If dealing with systems that do not install the OS on disk 0, then you will need to create multiple steps and put the necessary conditions on each. MBR2GPT generates useful log files, and I like to save them in the Task Sequence log directory (_SMSTSLogPath) so that they are available after the Task Sequence runs. On the Options tab, add the condition: Task Sequence Variable BIOSMode equals “LegacyBIOS”. This will ensure that this step only runs under this condition. Note: we could also have added _SMSTSInWinPE equals “TRUE”. Also enable Continue on error. This is important because we do not necessarily want the entire In-Place Upgrade to fail just because MBR2GPT was not able to run. If it is a hard failure, then the Task Sequence will definitely not continue anyway, as the system will probably no longer boot up.
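A sketch of the Run Command Line for this step (documented MBR2GPT switches; disk 0 assumed, per the note about systems that do not install the OS on disk 0):

```text
mbr2gpt.exe /convert /disk:0 /logs:%_SMSTSLogPath%
```

Pointing /logs at _SMSTSLogPath keeps the MBR2GPT logs alongside smsts.log after the Task Sequence finishes.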

Configure BIOS Firmware Settings


In the Firmware Settings Group, you will add your own BIOS settings commands, utilities or tools. These commands, utilities and tools can run in a full OS or WinPE. If you use Dell systems, please see my previous blog post Automating Dell BIOS-UEFI Standards for Windows 10 for the commands (and order) of switching the UEFI settings using the Dell CCTK (aka Command Monitor). On the Options tab, add the condition: Task Sequence Variable _SMSTSLastActionSucceeded equals “TRUE”. This will ensure that this group is only entered if the previous step that ran was successful. In the case of a Legacy BIOS system, if MBR2GPT is not successful, we want the Task Sequence to continue, but we do not want to flip the BIOS settings to UEFI and enable Secure Boot. In the other case of a system running UEFI Hybrid or UEFI Native without Secure Boot, it will run if the previous non-skipped step was successful. NOTE: It is important to be running the latest BIOS version and BIOS utilities for best results. Also, be sure to account for BIOS passwords if used in your environment. It is best to disable the BIOS passwords and then re-enable them at the end of the process.

Restart into Windows


After the Firmware Settings are changed, a system reboot is required for them to be applied. This restart will boot the system back into Windows 10.

Re-enable BitLocker


Once the system has been configured for UEFI Native with Secure Boot and booted back up into Windows 10, it is time to re-enable BitLocker. The Re-enable BitLocker Group will run in a full OS and only if OSDBitLockerStatus equals “Protected”. This variable gets set earlier in the Task Sequence, before the operating system is upgraded. For more information, see my blog How to detect, suspend, and re-enable BitLocker during a Task Sequence.

MBR2GPT and BitLocker

If you read the Microsoft documentation for using MBR2GPT, it only tells you that you need to delete the existing protectors and recreate them (it does not mention that you need to reset the Windows Recovery Environment to generate a new ReAgent.xml and update the BCD). It does not really give any clear guidance on how to do this.

Reset Windows Recovery Environment


Resetting the Windows Recovery Environment only needs to be done when using MBR2GPT with BitLocker. On the Options tab, add the condition: Task Sequence Variable BIOSMode equals “LegacyBIOS”.

I have seen some forum posts on the internet that talk about deleting the ReAgent.xml file (found in C:\Windows\System32\Recovery). Windows will re-create this file on the next reboot and it should modify the BCD accordingly. However, I prefer to update it (and the BCD) by simply disabling WinRE and re-enabling it. I also display the status after re-enabling it. Each of these commands will pipe output to the smsts.log and also CM Status Messages. For clarity they are split into three different steps.
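The three commands in question are standard reagentc switches; as a sketch, each one becomes its own Run Command Line step:

```text
REM Step 1: disable WinRE (updates ReAgent.xml and the BCD)
reagentc /disable

REM Step 2: re-enable WinRE
reagentc /enable

REM Step 3: display the status so it lands in smsts.log and status messages
reagentc /info
```

Splitting them keeps each command's output attributable to its own step in the logs.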


Reset BitLocker Protectors for MBR2GPT


Just like resetting the Windows Recovery Environment, resetting the BitLocker protectors only needs to be done when using MBR2GPT with BitLocker. On the Options tab, add the condition: Task Sequence Variable BIOSMode equals “LegacyBIOS”.


Now we just need to delete the BitLocker protectors. This can be done by running the following command: manage-bde -protectors -delete c:


It is extremely important to perform a restart after deleting the BitLocker protectors and before enabling BitLocker. If it is not done in this order, the system will prompt for the BitLocker recovery key on the next reboot.

Enable BitLocker


The last thing to do in the Re-enable BitLocker Group is to enable the BitLocker protectors. This can be done using the native Enable BitLocker Task Sequence step. Since the operating system drive is already encrypted, just the BitLocker protectors are being created and/or enabled (depending on the scenario).

In summary, this approach will cover multiple upgrade scenarios, including BIOS to UEFI, when performing an in-place upgrade to Windows 10.

Originally posted on https://miketerrill.net/



Configuration Manager Dynamic Drivers & BIOS Management with Total Control Part 1

When approaching any solution, it is a good idea to come up with a list of requirements that the solution needs to meet. This way you will be sure to start off on the right foot and not have to rip and replace, add to, or redo the solution in the future. In terms of Driver & BIOS management, I have come up with the following requirements that need to be met in order to have the best solution possible that can be used in multiple scenarios:

1. Runs in Full OS and WinPE
2. Same method works across baremetal, refresh and in-place upgrade Task Sequences
3. Dynamic without the need to edit the TS or scripts
4. Supports Production and Pre-Production in the same TS
5. Intuitive and easy to use

There are a lot of blogs out there on how to manage drivers and BIOS updates with Configuration Manager; however, each of them falls short of the above requirements in one way or another. I first started out on this quest back in 2015 when I was investigating what it would take to go from BIOS to UEFI. Long story short, you need to use the vendor utilities (or methods) to change the firmware settings, and they worked best when the BIOS was running the latest version. I wanted to be able to flash the BIOS in the full OS, as well as WinPE (requirement #1). This meant that Configuration Manager Packages needed to be used, since Applications cannot be used in WinPE. This way, the same process could be used regardless of whether the system was bare metal from the vendor or an existing machine getting a refresh or in-place upgrade Task Sequence (requirement #2). By the way, some vendors still have limitations on flashing the BIOS in WinPE x64, but a lot of models now support this for the most part.

Another goal was to be able to do this without having a 5 mile long Task Sequence that needs to be edited every time there was a new model, new BIOS version or new Driver Package (requirement #3). Every time a Task Sequence changes, it has the possibility of stopping imaging for the environment while replication takes place. If you have a small environment, this may be okay, but in a large environment it can be like stopping a production assembly line (not good). Next, the solution needs to be able to support BIOS versions and Driver Packages that are marked as production, as well as support BIOS versions and Driver Packages that are pre-production (requirement #4). This way a proper Test > QA > Pilot > Production methodology can be carried out using the current production Task Sequence (this is the Total Control part). If you look at the BIOS releases or driver releases over the past two years, you will notice that the hardware vendors have been busy releasing updates. As newer versions of Windows 10 get released, the vendors usually release new drivers starting a month after the CB release. Lastly, the solution needs to be intuitive and easy to use so that it can be managed by junior level administrators (requirement #5).

At the 2016 Midwest Management Summit, I had come up with a solution that covered most of the above requirements for doing the BIOS updates. At the time, I had split out each of the vendors because it made the solution more modular, but also because some vendors (to remain nameless) did not support flashing under WinPE x64 at the time. The only thing that I did not have figured out was how to do the dynamic content location request (CLR). In the Task Sequence below, I was cheating by creating a dummy group to handle the CLR (the dummy group is one that never executes, but the TS does not know that and will still do the CLR at the start of the TS).


Little did I know, the step that I was using to get the content locations for the BIOS packages (Download Package Content), actually can do a dynamic content location request (I learned this during a trip to Redmond last November). Fast forward a bit and this is what the Flash BIOS portion of the Task Sequence looks like now:


Now, ‘how do drivers fit into this?’ you say. Well, the same concepts can be applied – in fact, drivers are even easier. For a Wipe-n-Load Task Sequence, we can now do driver management in three easy steps:


And for an In-Place Upgrade Task Sequence, driver management can be done in two easy steps all using the same process:


So by now you are probably thinking that this is all too good to be true and there has to be a catch. No catch – it is really this simple. In Configuration Manager Dynamic Drivers & BIOS Management with Total Control Part 2, I go into detail on how to set up, configure and use the solution.

Originally posted on https://miketerrill.net/



Configuration Manager Dynamic Drivers & BIOS Management with Total Control Part 2

In Configuration Manager Dynamic Drivers & BIOS Management with Total Control Part 1, I talked about the requirements for the various scenarios when coming up with a solution for driver and BIOS management. I also gave a glimpse of what the Task Sequence steps look like once the solution is in place. In Part 2, I am going to show what it takes to get the solution set up and running (at first glance it looks complicated, but it is actually pretty easy, especially if you download the templates below).


Before getting started, there are a few assumptions:
1. Configuration Manager is already running Current Branch
2. Windows ADK is Windows 10 Creators Update (1703) (required for MBR2GPT)
3. Microsoft Deployment Toolkit (MDT) 8443 is integrated with Configuration Manager
4. A MDT Toolkit package is available in Configuration Manager
5. A MDT database is setup and configured

If you do not already have MDT installed and configured, please see this excellent guide at windows-noob.com (just make sure to use the MDT 8443 release): How can I deploy Windows 10 with MDT 2013 Update 2 integrated with System Center Configuration Manager (Current Branch): https://www.windows-noob.com/forums/topic/14057-how-can-i-deploy-windows-10-with-mdt-2013-update-2-integrated-with-system-center-configuration-manager-current-branch/
For setting up the MDT database, see Use the MDT database to stage Windows 10 deployment information: https://docs.microsoft.com/en-us/windows/deployment/deploy-windows-mdt/use-the-mdt-database-to-stage-windows-10-deployment-information

Starting off, there will be a one-time modification of the MDT database, extending it to include some custom fields that we are going to define.

1. Add the following columns to the dbo.Settings table: TARGETBIOSDATE, FLASHBIOSCMD, BIOSPACKAGE, W10X64DRIVERPACKAGE, W7X64DRIVERPACKAGE. If you manage 32-bit operating systems, you can add columns for those as well. Also, as of now, there should not be a need for a build specific Windows 10 driver package (like one for 1607 and another for 1703), but if that changes then additional columns can be added to support them in the future. BIOS stepping – this is where you need to apply one or more BIOS versions to get to the latest version. Some older models require this and additional BIOSPACKAGE columns can be created to support this. This is not going to be covered in this blog, but if there is enough interest I will cover it in a future blog.

There is already a great blog called How to extend the MDT 2010 database with custom settings that is still applicable to MDT 8443 and can be used as a reference. Be sure to refresh the views after adding the columns.
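The column additions from step 1 can be sketched in T-SQL (column names are from the post; the nvarchar lengths are my assumption, so match whatever the existing custom columns use):

```sql
ALTER TABLE dbo.Settings ADD
    TARGETBIOSDATE      nvarchar(50)  NULL,
    FLASHBIOSCMD        nvarchar(255) NULL,
    BIOSPACKAGE         nvarchar(50)  NULL,
    W10X64DRIVERPACKAGE nvarchar(50)  NULL,
    W7X64DRIVERPACKAGE  nvarchar(50)  NULL;
```

After adding the columns, refresh the ComputerSettings and MakeModelSettings views so the new fields show up.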

2. Create BIOS Packages and Driver Packages for each make/model. If you do not already have them for each of the models you support or if you want to get the updated releases, then check out the Driver Automation Tool the awesome guys over at SCConfigMgr have created. This is a great tool and will save you a ton of time.

3. Define each make/model in the MDT database. I do not cover Lenovo systems in this blog, so if you manage those systems then check out The Deployment Bunny’s blog Modelalias User Exit for Microsoft Deployment Toolkit 2010/2012.

4. On the Details tab, scroll down to the bottom where the custom properties are listed and enter the Package IDs, Target BIOS Date and Flash BIOS command. The Target BIOS Date is the BIOS release date that shows up in WMI (also seen in msinfo32), in YYYYMMDD format.

For the Custom Settings and the Task Sequences, feel free to save some time and download them here:
Dynamic BIOS and Drivers Blog.zip

Disclaimer: Your use of these example Task Sequences is at your sole risk. This information is provided “as-is”, without any warranty, whether express or implied, of accuracy, completeness, fitness for a particular purpose, title or non-infringement. I shall not be liable for any damages you may sustain by using these examples, whether direct, indirect, special, incidental or consequential.

5. Create the following custom settings file and add it to an existing reference settings package or create a new one (feel free to replace SQL connection with a webservice of your choice):

[Settings]
Priority=CSettings,MMSettings,Default
Properties=TARGETBIOSDATE,FLASHBIOSCMD,BIOSPACKAGE,W7X64DRIVERPACKAGE,W10X64DRIVERPACKAGE

[Default]

[CSettings]
SQLServer=CM02
Database=MDT
Netlib=DBNMPNTW
SQLShare=DeploymentShare$
Table=ComputerSettings
Parameters=UUID, AssetTag, SerialNumber, MacAddress
ParameterCondition=OR

[MMSettings]
SQLServer=CM02
Database=MDT
Netlib=DBNMPNTW
SQLShare=DeploymentShare$
Table=MakeModelSettings
Parameters=Make, Model
ParameterCondition=AND

The following section is for a Wipe-n-Load Task Sequence

6. Create a MDT Gather step in the Task Sequence that uses the custom settings created above. This gather step will get the above entries that have been populated in the MDT database.

NOTE: Be sure to suspend BitLocker before flashing the BIOS in order to prevent being prompted for the recovery key. Also, if BIOS passwords are used, they will either need to be turned off or passed to the flash BIOS command line.

7. Create the Update BIOS Group with the following steps and conditions:

8. Set the BIOSUpdate Task Sequence variable. This variable will determine if a BIOS update is necessary based on the BIOS release date and also if TARGETBIOSDATE, FLASHBIOSCMD and BIOSPACKAGE exist.
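To illustrate the comparison that drives BIOSUpdate, here is a minimal sketch of the logic in Python (illustrative only; in the Task Sequence this is implemented with a Set Task Sequence Variable step and conditions, and the function name and sample values here are mine):

```python
def needs_bios_update(current_release_date, target_bios_date,
                      flash_bios_cmd, bios_package):
    """Decide whether the Flash BIOS group should run.

    current_release_date: the BIOS release date from WMI (Win32_BIOS),
        reduced to YYYYMMDD.
    target_bios_date: the TARGETBIOSDATE value from the MDT database.
    """
    # TARGETBIOSDATE, FLASHBIOSCMD and BIOSPACKAGE must all exist...
    if not (target_bios_date and flash_bios_cmd and bios_package):
        return False
    # ...and the installed BIOS must be older than the target date.
    # YYYYMMDD strings compare correctly as plain strings.
    return current_release_date < target_bios_date

print(needs_bios_update("20161101", "20170301", "Flash.cmd", "PRI00123"))  # True
print(needs_bios_update("20170301", "20170301", "Flash.cmd", "PRI00123"))  # False
```

Using the YYYYMMDD release date rather than the version string sidesteps the inconsistent version numbering schemes across vendors.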


9. Create a Flash BIOS group. The conditions on this group are BIOSUPDATE is TRUE and IsOnBattery is False. Since many BIOS update utilities require AC power, we do not even want to try to update the BIOS if the system is running on battery. The IsOnBattery variable is set by the MDT Gather step. It should also be checked as part of pre-flight checks, but by also keeping it in the Update BIOS group, we keep it modular and this group can be used in a Software Distribution Task Sequence to update the BIOS on existing clients.


10. Set BIOS Variables is where part of the magic happens and is a Set Dynamic Variables Task Sequence step. This is where we set the following variables: OSDDownloadDestinationLocationType, OSDDownloadContinueDownloadOnError, OSDDownloadDownloadPackages and OSDDownloadDestinationVariable. I talked about these variables in my Hacking the Task Sequence 2017 session at the Midwest Management Summit and they may lead to a future blog post. But for now, just understand that they work with the Download Package Content step executable. We are downloading the package found in the BIOSPACKAGE variable and we are going to store the download location in a base variable called BIOS. Since there is only one package, the location will get stored in the variable BIOS01. DISCLAIMER: Although these Task Sequence variables are not read only (meaning they do not start with an “_”), they are not publicly documented, which translates to “use at your own risk”.
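As a sketch, the Set Dynamic Variables step ends up assigning values along these lines (the variable names are from the post; the values shown are an illustrative guess at a typical configuration, not a verbatim copy of the screenshot):

```text
OSDDownloadDestinationLocationType = TSCache
OSDDownloadContinueDownloadOnError = True
OSDDownloadDownloadPackages        = %BIOSPACKAGE%
OSDDownloadDestinationVariable     = BIOS
```

Because only one package ID is passed, the downloaded path lands in BIOS01.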


11. The Download BIOS step is a Run Command Line step that calls OSDDownloadContent.exe. This exe is tied to the Download Package Content Task Sequence step and will consume and use the variables set in the previous step. This step is also part of the magic as it will do a dynamic content location request for the package and then download it to the TS Cache.



[Update] Directly after the Download BIOS step, it is important to insert a Reset Variables step. Since the variables are being set outside of the Download Package Content step, the variables do not get deleted. Resetting them to blank will allow any subsequent Download Package Content steps outside of this process to work normally if inserted into the Task Sequence.


12. The Flash BIOS step is another Run Command Line step that executes the command stored in the FLASHBIOSCMD variable. It also sets the working directory to the location where the BIOS package was downloaded (BIOS01). Use Continue on error or define the exit return codes so the Task Sequence does not fail.


13. The Flashing BIOS… step is another Run Command Line step that executes timeout.exe for 60 seconds. Timeout.exe is not part of the MDT Toolkit Package, so you will need to add the correct versions (x86/x64) to your MDT Toolkit Package if you want to use it. The Windows 10 version of timeout.exe will not run on Windows 7; however, the Windows 7 version will run on Windows 10. Alternatively, simply include timeout.exe in the path on your Boot Images and it will run regardless of the operating system. Otherwise, I have seen ping commands used to create a sleep cycle. The reason I add this delay is that some vendors recommend not rebooting right away (even though the main process has finished), so I stick it in there for good measure.


14. Once the first phase of the BIOS update has run, the system needs to be rebooted so that the second phase can run. Since this is for a Wipe-n-Load Task Sequence, we will go ahead and reboot into the Boot Image after the BIOS update has completed. Provide the end user a message like “A new BIOS is being installed. DO NOT Power Off or unplug the system during this process. The computer must restart to continue.”


15. Directly below the Apply Network Settings step, create a new group called Apply Drivers. The condition on this group is if W10X64DRIVERPACKAGE exists. If the variable does not exist, then it was not populated in the MDT db and this group will be skipped. This is by design for the scenario where a model does not have a Windows 10 Driver Package and/or the out of the box Windows 10 drivers work just fine.


16. Set Driver Variables (similar to the Set BIOS Variables) is where part of the magic happens and is a Set Dynamic Variables Task Sequence step. This is where we also set the following variables: OSDDownloadDestinationLocationType, OSDDownloadContinueDownloadOnError, OSDDownloadDownloadPackages and OSDDownloadDestinationVariable. Here, we are downloading the package found in the W10X64DRIVERPACKAGE variable and we are going to store the download location in a base variable called DRIVERS. Since there is only one package, the location will get stored in the variable DRIVERS01.


17. The Download Driver Package step is a Run Command Line step that calls OSDDownloadContent.exe. This exe is tied to the Download Package Content Task Sequence step and will consume and use the variables set in the previous step. This step is also part of the magic as it will do a dynamic content location request for the package and then download it to the TS Cache.


[Update] Directly after the Download Driver Package step, it is important to insert a Reset Variables step. Since the variables are being set outside of the Download Package Content step, the variables do not get deleted. Resetting them to blank will allow subsequent Download Package Content steps outside of this process to work normally if inserted into the Task Sequence.


18. The Apply Driver Package step is another Run Command Line step that simply executes DISM to apply the drivers to the Windows installation contained in the OSDisk variable (which is set in the Format and Partition Disk step).


The following section is for an In-Place Upgrade Task Sequence

The same approach can be used for an In-Place Upgrade Task Sequence with a few changes.

[Update] Directly after the Download BIOS step, it is important to insert a Reset Variables step. Since the variables are being set outside of the Download Package Content step, the variables do not get deleted. Resetting them to blank will allow subsequent Download Package Content steps to work normally if inserted into the Task Sequence.


19. After the Flashing BIOS… step, change the Restart Computer step to restart to the currently installed default operating system. Provide the end user a message like “A new BIOS is being installed. DO NOT Power Off or unplug the system during this process. The computer must restart to continue.” NOTE: After the reboot, if using BitLocker on Windows 7, you will need to disable/suspend it again since the built-in Configuration Manager step only suspends it for one reboot.


20. Create a group called Download Drivers with the condition that the variable W10X64DRIVERPACKAGE exists (similar to the Apply Drivers group created above for the Wipe-n-Load Task Sequence).


21. The Set Driver Variables step is the same as Step 16 above.


22. The Download Driver Package step is the same as Step 17 above.


[Update] Directly after the Download Driver step, it is important to insert a Reset Variables step. Since the variables are being set outside of the Download Package Content step, the variables do not get deleted. Resetting them to blank will allow subsequent Download Package Content steps outside of this process to work normally if inserted into the Task Sequence.


23. Now we need to inform the Update Operating System step of the location of the drivers. Enable the “Provide the following driver content to Windows Setup during upgrade” option and enter %DRIVERS01% for the “Staged content” location. Since we only want this to run when the Driver Package exists, add the condition Task Sequence Variable DRIVERS01 exists.


24. In the case that the Driver Package does not exist, duplicate the previous step, clear the “Provide the following driver content to Windows Setup during upgrade” option and add the condition Task Sequence Variable DRIVERS01 not exists. This can be reduced to a single step by using the undocumented OSDUpgradeStagedContent variable that Johan Arwidmark talks about in his blog post Improving the ConfigMgr Inplace-Upgrade Task Sequence.


25. Chances are, there will be a need to have more than one BIOS Package or Driver Package in the production environment for a given model. As new BIOS updates and drivers are released, they can be pilot tested in the environment using the same exact Task Sequences without modification. There is no need to set up different Task Sequences; simply define your pilot systems in the MDT database under the Computers node.


26. This can be done by adding a new record in the MDT database using the Asset tag, UUID, Serial number or MAC address.


27. The same extended fields show up on the Details tab for the computer record. Add in the new BIOS package and Driver package information and the next time the system is built it will use these packages. Once the packages have passed the pilot testing phase and have been deemed production worthy, simply change the BIOS package and Driver package information in the Make and Model node for that particular model.


Lastly, as new models enter the environment, simply create the BIOS and Driver Packages in Configuration Manager and then create the entry in MDT for the new model – once again, all done without modifying either Task Sequence. In summary, this solution meets all of the five requirements that I defined in Part 1:
1. Runs in Full OS and WinPE
2. Same method works across baremetal, refresh and in-place upgrade Task Sequences
3. Dynamic without the need to edit the TS or scripts
4. Supports Production and Pre-Production in the same TS
5. Intuitive and easy to use

Now, who would like to see this functionality built into Configuration Manager out of the box?
And, who would like to see the hardware vendors publish the BIOS WMI date stamp so that it can be consumed electronically?

Originally posted on https://miketerrill.net/



Nomad, WinPE and Assumed Link Speed

Lately, we have been ramping up our OSD in one of our test environments. One of the servers that hosts some of our test VMs sits behind a pretty solid firewall with only the required ports open for Configuration Manager traffic. OSD was rocking and rolling along, and then we started to ‘Nomadize’ the Task Sequence. On the first test, the TS fell over on the first step that attempted to get content (a step which had previously worked).

This one had me scratching my head, as I know Nomad pretty much inside and out. It uses the same ports as Configuration Manager when getting content from a Distribution Point. I have seen proxies configured (even for the SYSTEM account), but that usually does not break anything; it just makes downloads slower than they need to be since the proxy slows things down. After combing the log, one line in particular caught my eye:

LinkSpeed {url} cannot be calculated (try using AssumedLinkSpeed setting)


It turns out Ping (ICMP) was blocked on the firewall and Nomad was not able to calculate an initial link speed value. On the full client, there is a registry setting called AssumedLinkSpeed (referenced in the log); Nomad will use this value instead when it cannot initially reach the DP via ICMP. For this task sequence, we were in WinPE since it was a bare metal build. There is a step in the task sequence actions called Install and Configure Nomad in Windows PE that enables Nomad to work in WinPE. When I opened the registry in WinPE, I noticed that AssumedLinkSpeed was missing (hence the failure). To correct this, I created a Run Command Line step with the following command directly after each of the Install and Configure Nomad in Windows PE steps:

Step Name: Set Nomad Assumed Link Speed FIX
Command: reg add HKLM\Software\1E\NomadBranch /v AssumedLinkSpeed /t REG_DWORD /d 100 /f

After adding this step and rerunning the task sequence, it was off and running, downloading content. I have submitted a case with the vendor and hopefully it will get fixed in an upcoming hotfix.

Originally posted on https://miketerrill.net/



Better together-BranchCache and Nomad…wait, not so fast…

A few months ago, I noticed that Nomad had become really slow, almost to the point that I thought I had a serious problem with my home lab. It did not matter if it was a virtual machine or a physical machine: downloading content from a distribution point using Nomad had become painfully slow (averaging 1 blk/s). However, downloading without using Nomad was completely normal. I could also copy files over SMB from the distribution point at normal speeds and download over HTTP using a browser. By the way, this is a good troubleshooting step: look in the CAS.log for the DP location for the package; it will look something like this:


Then copy and paste that URL in IE or Edge


Then try downloading a big file and compare the transfer speeds you are getting with what you expect:


Downloading a similar package using Nomad yielded the following painful 1 blk/s (typically it would be about 100x this value):


This system is on a gigabit network with a gigabit interface card, but it was only using 584 Kbps:


Looking a little closer at the Network Activity in Resource Monitor, I saw PeerDist running and remembered that I had enabled BranchCache in my lab to do some testing:


Sure enough, the BranchCache service was running:


After stopping (and disabling) the BranchCache service, there was a little hiccup in the Nomad log:


But shortly afterward, things were back to normal (getting ~30 Mbps):


Still a bit conservative on the network but at least it was moving along again:


Nomad log back to normal getting around 100 blk/s:


In summary, I will definitely be letting 1E know about this issue. BranchCache is used for things other than ConfigMgr traffic in the Enterprise (e.g. SharePoint and other web applications). Also, BranchCache could even be used for ConfigMgr traffic that Nomad does not handle (like WSUS, Automatic Client Deployment, etc.), even for Nomad customers!

Originally posted on https://miketerrill.net/



Adding Windows 10 Version, BIOS Mode and Secure Boot State to BGInfo

Recently, my team has been doing a lot of testing for our next big Windows 10 In-place Upgrade. We are designing and developing a new process that I call Windows as a Service in the Enterprise (and we plan on sharing this at MMSMOA in May). As part of our testing, we need to test both physical and virtual systems, both legacy BIOS and UEFI. Since the days seem to run together, oftentimes I find myself wondering not only what system I am looking at, but what OS it is currently running and how it is configured. Sure, this is easy to find from System Information, but typing msinfo32 gets old. Having used BGInfo in the past, I thought this would be a perfect solution to just display this information on the desktop.

I took the time one weekend to figure out how I could use BGInfo to display the friendly version of Windows 10 – the YYMM one, not the OS build (I can barely remember my phone number these days, let alone a bunch of Windows 10 build numbers – see sKatteRbrainZ’s blog My First Day as Microsoft CEO for a good laugh about Windows 10 versions, names and other things that don’t make sense). Also, being called the ‘UEFI guy’ at times, I wanted to display whether a system was running legacy BIOS (bad) or UEFI (good) and whether Secure Boot was On (good) or Off (bad).

First things first, you want to be sure that you download the latest version of BGInfo that works with Windows 10 (at the time of this writing, the latest version is 4.25). Running BGInfo, we see that there are a few built-in fields that we can use (but not what we are looking to add) and also a Custom button:


Looking at the possibilities, we see options to select Environment variable, Registry, WMI Query, Version information for a file, Time stamp of a file, Contents of a file, or VB Script file (wait – no PowerShell option?? Quick, someone tell Mark Russinovich to update this thing before Jeffrey Snover finds out).


The friendly version of Windows 10 is actually easy to get since it is stored in the registry value ReleaseId under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion.


Simply add a new custom user defined field and call it Version pointing to that registry value.


Next, add a field called Manufacturer and another one called Model. These are handy to have when connecting to systems remotely, you quickly know what make/model you are on. Both will be WMI queries:
SELECT Manufacturer FROM Win32_ComputerSystem
SELECT Model FROM Win32_ComputerSystem


Now it is time to add the BIOS Mode. At first I thought this was going to be another easy WMI query. Back in Configuration Manager 1702, the CM team added this information to WMI under the class SMS_Firmware (and then added Secure Boot state in 1706). This was known as the Mike Terrill feature per David James.

However, my hopes were shattered once I found out that BGInfo can only use the default namespace. So with that out, and PowerShell nowhere to be found, I had to resort to good old VBScript. The trick was to use the Secure Boot registry key to determine both the BIOS mode and the Secure Boot state.

BIOSMode.vbs:

' BIOSMode.vbs - BGInfo custom field script
' If the SecureBoot\State key exists, the system booted UEFI; otherwise legacy BIOS.
Const HKEY_LOCAL_MACHINE = &H80000002
strComputer = "."
hDefKey = HKEY_LOCAL_MACHINE
strKeyPath = "SYSTEM\CurrentControlSet\Control\SecureBoot\State"
Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")

' EnumKey returns 0 when the key exists and can be read
If oReg.EnumKey(hDefKey, strKeyPath, arrSubKeys) = 0 Then
  Echo "UEFI"
Else
  Echo "BIOS"
End If

 

SecureBoot.vbs:

' SecureBoot.vbs - BGInfo custom field script
' Reads the UEFISecureBootEnabled value: 1 = Secure Boot on, 0 = off.
Const HKEY_LOCAL_MACHINE = &H80000002
strComputer = "."
hDefKey = HKEY_LOCAL_MACHINE
Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
strKeyPath = "SYSTEM\CurrentControlSet\Control\SecureBoot\State"
strValueName = "UEFISecureBootEnabled"
oReg.GetDWORDValue hDefKey, strKeyPath, strValueName, dwValue

' dwValue is Null on legacy BIOS systems where the value does not exist
If dwValue = 0 Then
	Echo "OFF"
ElseIf dwValue = 1 Then
	Echo "ON"
Else
	Echo "NA"
End If
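For readers more comfortable outside VBScript, the branching in the two scripts reduces to this small Python sketch (the registry reads themselves are omitted; only the decision logic is shown, and the function names are hypothetical):

```python
def bios_mode(secure_boot_key_exists):
    """SecureBoot\\State key present -> firmware booted UEFI, else legacy BIOS."""
    return "UEFI" if secure_boot_key_exists else "BIOS"

def secure_boot_state(uefi_secure_boot_enabled):
    """UEFISecureBootEnabled DWORD: 1 = ON, 0 = OFF, missing/other = NA."""
    if uefi_secure_boot_enabled == 1:
        return "ON"
    if uefi_secure_boot_enabled == 0:
        return "OFF"
    return "NA"

print(bios_mode(True), secure_boot_state(1))      # UEFI ON
print(bios_mode(False), secure_boot_state(None))  # BIOS NA
```

Both fields are driven by the same registry location: the presence of the SecureBoot\State key indicates the BIOS mode, and the UEFISecureBootEnabled value indicates the Secure Boot state.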

Place both scripts where BGInfo can access them when it is run (I use C:\Program Files\BGInfo\) and create a new custom user defined field for each (BIOS Mode and Secure Boot). Once done, the User Defined Fields will look like the following:


Lastly, add, order and position the desired fields that you want displayed, and click the Apply button to test it. I have selected the following fields:


This looks like the following when run on different configurations of Windows 10:


Download BGInfo file and scripts (rename to .vbs after unzipping) here.

Hopefully you find this as useful as I do when doing your Windows 10 In-place Upgrade testing. If you have other suggestions on fields you have added, tell me about them in the comments below.

Originally posted on https://miketerrill.net/

Status Message delay during disconnect parts of a Task Sequence

Back in my MMSMOA session Hacking the Task Sequence 2014, I presented on what at the time was a unique situation – speeding up Task Sequences that were running in disconnected states. Earlier that year, I was creating a demo on how to OSD (wipe-and-load) over Wi-Fi, only I ran into a problem: it was taking way too long. This was a simple Task Sequence for a Surface Pro 2 and my goal was to get it down to sub 15 minutes. However, even the simple Task Sequence was taking 35+ minutes. No one is going to sit around watching a demo for that long, so I needed to figure out why it was taking so long.

As for the goals, they looked like this:

  • Goal – to rebuild (refresh) a Surface Pro 2 over Wi-Fi in sub 15 mins
  • Did not want to make wireless work in WinPE – we already have what we need prior to WinPE
  • Want to control what was downloaded and therefore ‘download all contents’ before starting the TS was not a viable option
  • Even with a simple TS, build time was taking 35+ mins

When checking out the smsts.log, I noticed that while in WinPE, each step had a ~10 second, then ~25 second, then ~45 second retry delay. That was approximately an additional 80 seconds lost on every step, because the task sequence was trying to send a status message back to the management point. And since the device was in WinPE with no Wi-Fi support, it was unable to send these messages. I figured there had to be a way to turn this off, and after checking with some of my Microsoft contacts, the answer was ‘nope – no way to turn them off’.
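Those per-step delays compound quickly; a rough back-of-the-envelope calculation (with a hypothetical step count) shows where the extra build time went:

```python
# Approximate retry backoffs observed in smsts.log for each failed
# status message while disconnected (seconds).
RETRY_DELAYS = (10, 25, 45)

per_step = sum(RETRY_DELAYS)   # ~80 seconds lost on every step
steps_in_winpe = 15            # hypothetical number of WinPE steps

print(f"~{per_step * steps_in_winpe / 60:.0f} minutes lost to retries")
```

At 15 disconnected steps that is roughly 20 minutes of pure retry delay, which lines up with a simple Task Sequence ballooning past the 15-minute goal.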

Well, usually where there is a will, there is a way. I started thinking: what types of scenarios do not send status messages? Then the light bulb turned on – Full Media. In a Full Media build, ConfigMgr assumes that the device is not network connected and therefore does not even attempt to send status messages. I just needed to figure out how I could trick the currently running task sequence into thinking it was now running as a Full Media build. Dumping the task sequence variables of each type, I found a variable called _SMSTSMediaType. When running my deployed task sequence it was set to BootMedia; when running from a Full Media build, it was set to FullMedia.

The solution was then simple – before restarting into WinPE, change the read-only TS variable _SMSTSMediaType to FullMedia using TSEnv2 (a handy utility from 1E):

  • TSEnv2.exe set _SMSTSMediaType=FullMedia

Once back in the new OS, re-enable status messages by changing it back to bootable media (BootMedia):

  • TSEnv2.exe set _SMSTSMediaType=BootMedia


Now for the disclaimer – changing read-only variables is not supported (but it does work).

Now for the good news – in Configuration Manager Current Branch 1802 there is a new task sequence variable called SMSTSDisableStatusRetry. Set this variable to TRUE when you want to turn off the retry functionality and then set it to FALSE when you want to turn it back on. Note that it only stays off until the next reboot, so you will have to set it to TRUE again after each reboot if you have multiple reboots during the disconnected state.

Originally posted on https://miketerrill.net/

Unloading a Disk Filter Driver in WinPE

At MMSMOA 2018, in the Hacking the Task Sequence 2018 session that I presented with my good friend Andreas Hammarskjöld, one of the demonstrations I did showed how to unload a disk filter driver in WinPE without doing a reboot. The number one reason for wanting to do this is to provide a zero touch method for converting systems that are running 3rd party disk encryption from BIOS to UEFI. None of the 3rd party disk encryption vendors that I know of support MBR2GPT, which makes it extremely difficult (not to mention costly) to get these systems that are currently running BIOS over to UEFI. If you attempt the destructive process (wipe and load) using the method that was first supported in Configuration Manager 1610, it fails when attempting to boot up after the conversion steps. The reason is that diskpart is running under the filter driver. It appears to do a diskpart clean, partition and format, but since it is doing this under the filter driver it is only cleaning the contents within the encrypted container (this applies when the boot image is booting from the hard drive – it does not apply to PXE or USB booted systems, but then again those are not zero touch).

The following method can be used to switch systems from BIOS to UEFI during a wipe and load operating system deployment; the 3rd party disk encryption can then be re-installed after the new OS is installed. Alternatively, this opens the opportunity to make the switch to BitLocker and ensure that your future upgrades are not delayed because you are waiting on the 3rd party disk encryption company to get their product working on the latest version of Windows 10 (further putting you behind the eight ball and delaying your upgrade). It also enables you to get away from those ‘legacy’ preboot authentication methods that use cached AD credentials (yuck) and get with the times using a trusted boot process combined with things like Secure Boot, Credential Guard and Device Guard.

If your disk encryption team insists on using 3rd party disk encryption, ask them how many of the recent breaches were a result of someone cracking disk encryption – zero; most of them happened because of bad credential hygiene. In other words, disk encryption is just one attack vector, and if it is preventing you from getting to UEFI and Secure Boot, then you are not going to be able to take advantage of the modern virtualization based security available in Windows 10. Which brings me to my favorite line of my Hacking the Task Sequence 2018 session – “Because 3rd party disk encryption sucks and prevents zero touch BIOS to UEFI, which makes you less secure, not more secure”.

*Disclaimer: the following process may or may not work with your 3rd party disk encryption software. I have had success with both McAfee and Check Point. This does not work with WinMagic; however, I am told they will provide a method for cleaning the disk. If you are successful with other 3rd party disk encryption software using this method, leave a comment below so that I can update the post.

The first thing you will need is Devcon. This is part of the WDK, Visual Studio and the Windows SDK for desktop apps (see the Devcon link for more information and download links). You can include this in your boot image (see my post ConfigMgr 2012: Always including certain files in your Boot Images) or use a reference package and copy it to WinPE in x:\windows\system32 before you start (because we are deleting the contents of the disk, and you will not be able to run it from a package once that happens).

Booting the system from media shows that the disk is encrypted and unreadable (note volume 2 shows as RAW):


Now boot the system like we would during OSD from the hard drive with a Boot Image that contains the necessary disk filter drivers. Running the same command we see that the disk is unlocked and readable:


The first thing I like to do is clean the disk to free up any processes that may be using it, and then take the disk offline. This is done using the following diskpart commands:
clean
offline disk
detail disk (shows that the disk is offline)
exit


Run the following devcon command to show the loaded disk filter drivers; we can see that Prot_2k is loaded:
devcon classfilter diskdrive upper


Now we are going to use the following commands to unload the filter driver. In order to avoid a reboot, we simply restart the ide and scsi buses (the disk will likely be on one or the other, and it does not hurt to restart both). This is done using the following commands:
devcon classfilter diskdrive upper !Prot_2k (Prot_2k is the Check Point filter; for McAfee use !MfeEpePC)
devcon restart ide\*
devcon restart scsi\*


Go back into diskpart to bring the disk back online and clean the encryption from the disk using the following commands:
diskpart
sel dis 0
online disk
detail disk
clean


At this point, if running in a Task Sequence, the next step could be the built in Format and Partition Disk step. However, for the sake of this example, I am going to go ahead and create a partition and format it to show that the disk encryption is now gone and the disk is readable if booting from other media.


Rebooting from media, we can now see that the encryption is gone and the disk is no longer encrypted.


Whether your company chooses to stay on 3rd party disk encryption or move to BitLocker as part of the process, hopefully this provides you with information on how to make a successful zero touch transition from BIOS to UEFI so that you can take advantage of the advanced security features in Windows 10.

As for BitLocker, it no longer needs to be suspended during a Feature Update (i.e. moving from one version of Windows 10 to another) starting with Windows 10 1803. As for 3rd party disk encryption, there are ways to get around the encryption during a Feature Update ;).

Originally posted on https://miketerrill.net/


Optimizing Win10 OS Upgrade WIM Sizes

If you are using the full Win10 media to perform OS upgrades (like using a Configuration Manager Task Sequence), then you are going to want to put the install.wim on a diet. In addition, if you are servicing your WIM with the latest patches, you want to make sure that you are doing it in a way that does not bloat the WIM. My colleagues and I have played with various ways to optimize the WIM size so that we could keep it current and also keep upgrade times to a minimum. In the past at MMSMOA and on Twitter, I have presented the following graphic on what happens to the WIM size if you ‘stack’ CUs:


The first blue bar represents the original Windows 10 1703 x64 install.wim. The subsequent blue bars represent applying the latest CU to the original install.wim. The orange bars represent stacking the CUs, so it would be install.wim + CU1 + CU2 + CU3 + CU4 + CU5 + CU6. As you can see, the WIM grows an extra 1.12 GB larger when it is done this way. The lesson learned here is do not stack CUs; always apply the latest CU to the latest available install.wim. By the way, unless you are replacing the install.wim in the source directory of your OS Upgrade package each month, the Servicing option in Configuration Manager will end up stacking CUs, since it starts from the previously serviced WIM (oh – and it will want to patch multiple indexes if those exist, making the process take forever).

That analysis was done on Windows 10 1703, however, things changed a bit with the 1709 release. Someone at Microsoft got lazy and thought it would be a great idea to put not 2, not 3, but 6 indexes in the 1709 WIM.


You are probably thinking ‘yeah, but WIMs are single instance so it shouldn’t make that much of a difference’. WRONG! What you want to do (unless of course you are deploying all of those other editions) is export the index you need (in our case Enterprise).

Starting with the install.wim from the ISO en_windows_10_multi-edition_vl_version_1709_updated_dec_2017_x64_dvd_100406172.iso (remember – you want to start with the latest available media from Microsoft), we see that the size is 4.05 GB.


Now, export index 3 to a new WIM in folder 1709.1712.ORG.1

dism /export-image /sourceimagefile:e:\win10\1709\1709.1712.ORG\install.wim /sourceindex:3 /destinationimagefile:e:\win10\1709\1709.1712.ORG.1\install.wim

And notice the size of the WIM is now 3.63 GB (a 0.42 GB savings).

Now, I have read some other blogs recently that follow a very similar procedure to the one below, but they are all missing one critical step – adding in .NET. You are probably wondering ‘why add .NET?’. The ultimate goal is to NOT have to apply the CU after the OS upgrade. If you upgrade a system running 1607 that has .NET installed with a 1709 WIM that has the latest CU but not .NET, guess what happens? On the first scan the system will want to download the latest CU and apply it, turning our 60 – 120 minute upgrade into a 120 – 240 minute upgrade. Props to my colleague Jeff Carreon for discovering this last summer (2017). We have been adding .NET as a base standard to all of our WIMs since then to prevent this from happening.

Adding .NET is simple. I like to copy the install.wim out to a Temp directory and then re-export the WIM each time. (Sorry – I don’t have this scripted yet, but the first part is a one-time process to get to the WIM that we will use each month.)

Copy install.wim from 1709.1712.ORG.1 to Temp

Add .NET

dism /mount-image /imagefile:e:\win10\1709\Temp\install.wim /index:1 /mountdir:e:\win10\1709\mount
dism /image:e:\win10\1709\mount /enable-feature /featurename:NetFx3 /All /LimitAccess /Source:g:\sources\sxs
dism /commit-image /mountdir:e:\win10\1709\mount
dism /image:e:\win10\1709\mount /get-features /format:table
dism /unmount-image /mountdir:e:\win10\1709\mount /commit

Notice the size of the WIM is now 3.79 GB.

Export index 1 to a new WIM in folder 1709.1712.ORG.1.NET

dism /export-image /sourceimagefile:e:\win10\1709\Temp\install.wim /sourceindex:1 /destinationimagefile:e:\win10\1709\1709.1712.ORG.1.NET\install.wim

Now the size of the WIM is 3.70 GB (0.09 GB less).

This is your new install.wim starting point until Microsoft releases an updated media ISO.

We service our WIM on the third Tuesday of the month. This gives one week to discover any issues with the released patches (not like that has ever happened) before making a change to our production WIM.

May 2018 Servicing:

The first patch that we apply to the WIM is the latest Servicing Stack Update. It is not only important to do this, it is critical that you do it if you want your WIM to work correctly. Also, pay attention to the fine print on the KB articles for the CUs and you will likely see something like this at the bottom:

Again, copy the install.wim from 1709.1712.ORG.1.NET to Temp and mount the WIM:

dism /mount-image /imagefile:e:\win10\1709\Temp\install.wim /index:1 /mountdir:e:\win10\1709\mount

Add in May Servicing Stack KB4132650.

dism /image:e:\win10\1709\mount /add-package /packagepath:e:\win10\1709\temp\windows10.0-kb4132650-x64_80c6e23ef266c2848e69946133cc800a5ab9d6b3.msu

The next patch that we apply is the one for Adobe Flash Player. The latest available in May was KB4093110 (however, check for a new one each time).

dism /image:e:\win10\1709\mount /add-package /packagepath:e:\win10\1709\temp\windows10.0-kb4093110-x64_2422543693a0939d7f7113ac13d97a272a3770bb.msu

Add in May quality CU KB4103714.

dism /image:e:\win10\1709\mount /add-package /packagepath:e:\win10\1709\temp\windows10.0-kb4103714-x64_97bad62ead2010977fa1e9b5226e77dd9e5a5cb7.msu

Commit the image

dism /commit-image /mountdir:e:\win10\1709\mount

Unmount the image:

dism /unmount-image /mountdir:e:\win10\1709\mount /commit

Notice the size of the patched install.wim is 4.60 GB.

Export index 1 to a new WIM in folder 1709.1712.ORG.1.NET.461

dism /export-image /sourceimagefile:e:\win10\1709\Temp\install.wim /sourceindex:1 /destinationimagefile:e:\win10\1709\1709.1712.ORG.1.NET.461\install.wim

Now notice the size of the install.wim is 4.45 GB (saving an additional 0.15 GB). Not bad for a fully patched WIM with .NET and it is only 0.40 GB larger than the original multi-index WIM.
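
The monthly sequence above (copy, mount, apply SSU then Flash then LCU, commit, unmount, export) lends itself to scripting. Here is a minimal sketch of that idea in Python – an assumption on my part, since the process above is not scripted; the paths and .msu names are placeholders you would swap for your own. It builds the dism command lines in the required order so they can be reviewed before being run on Windows:

```python
import subprocess

def build_servicing_commands(wim_dir, mount_dir, msu_files, export_dir):
    """Return the dism commands for one servicing cycle, in order."""
    wim = f"{wim_dir}\\install.wim"
    cmds = [f"dism /mount-image /imagefile:{wim} /index:1 /mountdir:{mount_dir}"]
    # Package order matters: the Servicing Stack Update must be applied
    # first, then the Flash update, then the latest cumulative update.
    for msu in msu_files:
        cmds.append(f"dism /image:{mount_dir} /add-package /packagepath:{msu}")
    cmds.append(f"dism /commit-image /mountdir:{mount_dir}")
    cmds.append(f"dism /unmount-image /mountdir:{mount_dir} /commit")
    # Exporting single-instances the patched image into a smaller WIM.
    cmds.append(
        f"dism /export-image /sourceimagefile:{wim} /sourceindex:1"
        f" /destinationimagefile:{export_dir}\\install.wim"
    )
    return cmds

def run_servicing(wim_dir, mount_dir, msu_files, export_dir):
    """Execute each dism step on Windows, stopping at the first failure."""
    for cmd in build_servicing_commands(wim_dir, mount_dir, msu_files, export_dir):
        subprocess.run(cmd, shell=True, check=True)
```

Pass the .msu files in SSU-first order; the final export is what produces the smaller install.wim that replaces the one in the OS Upgrade package.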

NOTE: I have also experimented with image cleanup using component cleanup and reset base (dism /image:e:\win10\1709\mount /cleanup-image /startcomponentcleanup /resetbase). While it did manage to produce a slightly smaller WIM (4.42 GB), dism reports an error (Error: 0x800f0806 The operation could not be completed due to pending operations) and my guess is that there are online operations that need to be completed for .NET. So for now I am leaving this step out.

Take this new patched install.wim with .NET and replace the one in your OS Upgrade Package each month.

The first time you do this, be sure to go into the OS Upgrade package and ‘reload’ the image properties:

Then you will notice that it only lists one index:

Go into any Task Sequences that use this OS Upgrade package and make sure that all of the Upgrade Operating System steps reflect image index 1 for the edition:

Originally posted on https://miketerrill.net/

Windows as a Service in the Enterprise Overview Part 1

[Windows as a Service Table of Contents – this link contains a list of blogs covering the solution.] Coming Soon!

Windows 10 brings several new challenges to the Enterprise – one of the major challenges is deploying Windows 10 and then keeping up to date with the Feature Update releases (i.e. 1709, 1803, etc.). Although Microsoft has done a great job of making things easier (like a non-destructive in-place upgrade), there are still several technical hurdles that need to be overcome. Businesses just want things to work with as little impact as possible, and this is where things start to get tricky. There is a much larger payload that needs to be moved around the network, competing with the network traffic that the business uses (as opposed to systems management traffic). There is also the time it takes to perform the upgrade. Several factors determine how long this takes, but the goal should be to not interrupt business operations.

If you are reading this and thinking that you are alone, you are not – we all (Enterprise IT) are facing the same challenges. At my place of business, we knew that we had to do something in order to be able to survive the cadence and volume of Windows 10 upgrades that we need to do. Luckily, my colleagues and I have been doing OS deployments for a very long time. We put our heads together and came up with a process that we call Windows as a Service (WaaS) in the Enterprise and recently presented two sessions (part 1 and part 2), both repeated at MMSMOA. Our goal is to make this available to the community so that people can implement parts of it in their own environments or just spawn new ideas on how to make Windows 10 deployments easier. Our goal was simple – Minimize Risk, Maximize Velocity.

In Part 1 of the session we covered the ‘What’ and ‘Why’. We needed to have a reason ‘why’ we needed to do something and get into ‘what’ we are doing to solve the problem. A majority of this was a result of our first feature update, 1511 to 1607, which exposed shortcomings not only in the tools, but also in our processes for a relatively new concept – the OS in-place upgrade. Our areas of improvement were the following:

  • Be proactive rather than reactive
  • Better leverage existing technology and tools
  • Give the user better control
  • Don’t just report it, remediate (fix) it
  • Positive hand off between the teams involved

The Windows 10 In-place Upgrades are handled using a Configuration Manager Task Sequence. We realized that once the Task Sequence begins, the user gives up control for the duration of the process. There are things that we do to make this duration as fast as possible, however, if there are any errors in the Task Sequence, it fails and we have to start all over again. This experience was extremely frustrating for end users. The lesson learned was to do as much as possible before the end user even knows that anything is happening.

Applications continue to be the major hurdle for companies keeping up with Windows 10. Software vendors are not on board with the cadence of Windows 10 and an in-place upgrade of the OS is a foreign concept to most of them. They think that if their app installs fine net-new on a new release of Windows 10 then all is good. I doubt that many of them actually test their applications after the in-place upgrade of the OS. Needless to say, we have run into a few that cease to work after the upgrade. Third party security and disk encryption products are also a major pain point. Not only does it take them months after a Windows 10 release to be ready, most of these products are deeply rooted in the OS and really mess with the in-place upgrade process. My recommendation is to drop them like a hot potato – it will save you time, money and frustration, and you are likely to be more secure without them.

Disk space and cache management were a big problem. That great idea to buy systems with 120 GB SSDs isn’t turning out to be such a great idea today, especially since Windows 10 x64 wants 20 GB of free disk space for the upgrade. Cowboy management of disk space also made matters worse – randomly deleting the cache without regard to proper cache cleaning methods (like using the COM object) leaves things in a mess.

We talked about leveraging existing technology and tools: Windows 10 setup has a special switch that allows you to perform a compatibility scan on a system before attempting the upgrade. The disadvantage of having to download the Windows 10 OS Upgrade package is actually an advantage – it allows us to pre-cache the content that we are going to need to do the upgrade ahead of time. And a double bonus is that it can be run completely silently without the user ever knowing. This will let you know with near certainty that the upgrade is going to complete without any failures. If there are failures or blockers, you will find out about them before ever disrupting the end user.

End user experience – this is something that we wanted to do better. During the previous in-place upgrade, a deferral method was used. The problem with this approach is that it can be misleading based on the execution time. A deferral to one person might mean a 24-hour deferral, however, depending on the execution schedule, it could mean something very different. I personally do not like this approach and wanted to use something more native to Configuration Manager – give the users the ability to opt-in a week or two prior to their deadline.

Improve – don’t just report it, fix it! If something fails for a specific reason, then we want to fix it automatically and have the process continue once things are remediated. This also allows us to route specific issues to the responsible teams in a more automated fashion. We wanted to develop a clear process with defined entry and exit points, also one that was going to be repeatable since we would be doing this frequently.

Lastly, some of our other requirements involved gathering better metrics. We wanted to know what our first time success rates were going to be, as we didn’t have any of this information before. Other things of interest were runtimes – how long is this taking? Are the end users opting into the process or waiting for the deadline? Are they on the corporate network or VPN? We want to collect all of this data so that we can use it to help demonstrate success to our management, use it to diagnose problems, and also use it to further improve our processes.

What we came up with was a gated process to maximize success and improve the end user experience. We only wanted to start the process (the part that the end user is aware of) if everything was ready to go, content cached and we were 99% sure the upgrade was going to be successful. We also wanted to minimize the in-place upgrade time for the end users that would be experiencing it on their own time. Systems need to be patched and ready to go after the upgrade, we didn’t want them to be upgraded and then have to sit through another 30-60 minute cumulative update.

Servicing vs Task Sequences: Task Sequences make this all possible in complex environments, and provide a depth of reporting not obtainable using servicing.
Newer functionality in recent CM releases has made things easier: Run TS from TS (nested Task Sequences), Persist in Cache / Preserve in Cache variables, and Pre-Deploy Content / Download Package Content.
With all of this, the WaaS in the Enterprise concept started to take shape and look like this:

This turned into a multi-phase, gated approach:

Import Wizard:
The Import Phase is the part where systems enter the WaaS process. For most organizations, this might not need to be split up, as all Windows 10 workstations are probably managed by the same team within the company. For our company, we have approximately four different ways we segment workstation clients, so we needed to account for this when the deployment teams are submitting systems into the process. Since they are already scoped to only see what they have rights to see, the Import Wizard needed to have access to these collections under the technician’s credentials. They could add systems directly to these Ready for Pre-assessment collections in the console, but the goal of the Import Wizard was to make it easier. Plus, with code optimization, we are able to add 5000 systems to a collection in about 60 seconds.

Ready for Pre-assessment:
The Ready for Pre-assessment collections are just place holder collections for systems that are entering the WaaS process and are scoped for the various workstation clients. A backend job processes systems in these collections and moves them into the Pre-assessment collection.

Pre-assessment:
The goal of the pre-assessment phase is to prevent systems that have known issues based on inventory data from proceeding in the process. These could be issues that would create a hard blocker for Windows 10 setup or applications that are known not to survive the in-place upgrade of the OS. This is split up into three categories:
General (Pass/Fail)

  • OS
  • OS Architecture
  • OS Build
  • Last HW Inv
  • Last MP Client Registration
  • Last Heartbeat
  • CCMCache size

Hardware (Pass/Fail)

  • Free disk space
  • Memory
  • Models (descoped for v1)

Software (Pass/Fail/NA)

  • CM Client
  • 3rd party disk encryption (earlier version is a known blocker)
  • 3rd party anti-virus (earlier version is a known blocker)
  • Earlier app versions that either do not work with 1709 or do not survive the in-place upgrade
  • 16 checks and growing
  • Will probably add more from Compat Scan data.

Currently this is done using SMA Automation and systems that fail any tests remain in the Pre-assessment collection. Systems that pass all checks are moved at night to the next phase – Pre-cache/Compat Scan.
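
As a rough illustration of how the Pre-assessment gate works, here is a minimal Python sketch. The rule names and thresholds below are illustrative assumptions (the real checks run in SMA against inventory data), but the pattern is the same: every rule must pass before a device moves on.

```python
def evaluate_preassessment(device):
    """Return (passed, failures) for one device's inventory record.

    The rules and thresholds here are hypothetical examples of the kinds
    of checks described above (architecture, free disk, memory, inventory
    freshness, known-blocker apps) -- not the actual production rule set.
    """
    rules = {
        "os_architecture": device.get("arch") == "x64",
        "free_disk_gb": device.get("free_disk_gb", 0) >= 20,  # Win10 x64 wants ~20 GB free
        "memory_gb": device.get("memory_gb", 0) >= 4,
        "recent_hw_inventory": device.get("hw_inv_age_days", 999) <= 14,
        "no_blocked_apps": not device.get("blocked_apps"),
    }
    failures = [name for name, ok in rules.items() if not ok]
    return (len(failures) == 0, failures)
```

Devices that fail any rule stay in the Pre-assessment collection for re-evaluation; devices that pass every rule move on to Pre-cache/Compat Scan.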

Pre-cache/Compat Scan:
The goal of the Pre-cache/Compat Scan phase is to prevent systems that have known issues from proceeding in the process. This is done by using a Task Sequence and it accomplishes the following:

  • Pre-cache content ahead of the scheduled deployment
  • Run Windows 10 Setup with the Compatibility Scan Option
  • Collect the results for evaluation and metrics
  • Discover previously unknown blockers (and add them into the Pre-assessment checks)
  • Deployment is configured to ‘Download contents before starting’
  • Driver packages are dynamically downloaded during the Task Sequence
  • Metrics are written to the registry

Running the Compat Scan as part of the actual upgrade is possible, but by then the end user has already been disrupted and will likely be frustrated if the Compat Scan fails and the upgrade does not happen. For this reason (along with pre-caching) we have split this out as a separate phase. Systems that fail remain in the Pre-cache/Compat Scan collection and the deployment reruns daily. Systems that pass are moved to the next phase – Ready for Scheduling. Another note is that the Upgrade Operating System Task Sequence step places the client in provisioning mode (see UserVoice item here). There are a few options on how to handle this to prevent clients from getting stuck in provisioning mode that will be covered in later blog posts.

This is the first phase where we start writing some key metrics to the registry on the target system. This will enable us to collect key metrics, such as how many times Compat Scan has run, how long it took to run, the return code and return status, along with a few other data points and an overall WaaS Stage progress. This information is used for troubleshooting and reporting metrics.
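
For illustration, the per-stage metric record might be composed like the following Python sketch. The value names are modeled on the data points listed above (attempts, runtime, return code/status, overall WaaS stage); the exact registry path and naming convention are assumptions – on a client this dictionary would be written under HKLM with winreg.

```python
import time

def build_stage_metrics(stage, attempts, runtime_secs, return_code, status):
    """Compose the value set written to the registry after each stage run.

    Hypothetical names for illustration; swap in your own convention and
    write the pairs to something like HKLM\SOFTWARE\<Company>\WaaS.
    """
    return {
        f"{stage}_Attempts": attempts,
        f"{stage}_LastRun": time.strftime("%Y-%m-%d %H:%M:%S"),
        f"{stage}_RuntimeSecs": runtime_secs,
        f"{stage}_ReturnCode": return_code,
        f"{stage}_ReturnStatus": status,
        "WaaS_Stage": stage,  # overall progress marker for reporting
    }
```

Because the values are inventoried later, keeping them flat (one value per data point, prefixed by stage) makes the hardware inventory extension and reporting queries simple.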

Ready for Scheduling:
The Ready for Scheduling collections are just place holder collections for systems that have completed the Pre-cache/Compat Scan phase and are now ready to be scheduled for the actual in-place upgrade. The deployment technicians are scoped to see these collections and they are used by the Scheduling Wizard.

Scheduling Wizard:
The Scheduling Wizard facilitates the scheduling of systems for the in-place upgrade. It is based on a monthly cycle. End users have the ability to opt-in and run the in-place upgrade before the scheduled date. The Task Sequence pop-up notification is enabled for certain systems (like laptop users). 31 day collections with corresponding deployments and maintenance windows are used for each day of the month. This makes it easier to track when systems are scheduled and makes daily reporting simpler.
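
The 31-day model means a device's scheduled date maps directly to one of the daily collections. A tiny Python sketch of that mapping (the collection naming convention here is an assumption for illustration):

```python
from datetime import date

def scheduling_collection(scheduled: date, prefix: str = "WaaS IPU Day") -> str:
    """Map a device's scheduled upgrade date to its daily collection name.

    One collection per day of the month, each with its own deployment and
    maintenance window -- the prefix is hypothetical, not the real naming.
    """
    return f"{prefix} {scheduled.day:02d}"
```

Because the collection encodes the day, tracking when systems are scheduled and producing daily reports becomes a simple name lookup.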

Pre-flight:
The Pre-flight phase runs prior to in-place upgrade to double check the readiness rules that are defined in the Pre-assessment phase to make sure nothing has changed. In addition, it also checks a few execution rules as well. The execution rules that are currently defined are: AC/Battery check, MP connectivity, VPN status, Pending reboot, and most importantly – the Kill Switch. The Kill Switch is an extra safety measure (along with the maintenance windows) to prevent the in-place upgrade from running in the case of an emergency. These metrics are also written to the registry, things like the number of Pre-flight attempts, when it was last run, the return code and status, and the version of the Pre-flight script that was run.

The original plan was to run this as a Package/Program before running the Task Sequence. I don’t like stopping the Task Sequence once it starts, like for prompting the user to plug into power with a countdown, as this messes with runtimes for maintenance windows and screws up Task Sequence runtime metrics. However, we discovered that the Package/Program does not adhere to maintenance windows. Systems that were off during the required time would power up the next morning. Instead of waiting for the next available maintenance window, the Pre-flight job would go ahead and execute (and possibly prompt the user to plug into power). Therefore, we decided to move this back into the Task Sequence at the very start.

In-place Upgrade:
This is where the main In-place Upgrade Task Sequence runs. Additional metrics are collected as part of this process as well. Things like – In-place Upgrade runtime, return code and return status, if a user is logged on and also if it was kicked off by the user (opt-in vs. running at the required time).

The initial design called for auto-rescheduling of certain events – failed Pre-flight, No Status, Accepted, Waiting. Running (i.e. hung) or Failed states would remain in the collection for 7 days to allow time for investigation. This will give the end user or technician the ability to re-run the Task Sequence in the case that the system did not experience a hard down failure. Lastly, systems that Succeeded would be pulled from the collection and removed from the process. However, we made a tweak to leave all systems in the collections for 7 days in order to simplify reporting and minimize collection churn. After 7 days, devices that have been successfully upgraded are removed from the WaaS process. Those systems in certain states go into a Needs Remediation collection so that a technician can have a closer look and see exactly why they are having issues. Systems that have not tried to run (that qualify) go into a Mandatory collection that does not have any restrictions. This means if a system was powered off each night during the scheduled deployment period, on the 8th day it would get a mandatory upgrade during the day.

Exclusions/Tech Led:
There are always those scenarios where a certain group of systems need to be excluded from the process. In order to handle this situation, exclusion collections were setup so that a system (or group of systems) could be completely excluded from the WaaS process. In addition, a separate, On-demand Task Sequence and deployment was created for the following reasons: manual tech led deployments (i.e. executives) and one off testing. This Task Sequence uses nested Task Sequences and has both the Pre-cache/Compat Scan and In-place Upgrade Task Sequences. The Deployment is configured as ‘Available’.

Hopefully this helps define ‘why’ we needed to do something and ‘what’ we are doing to solve the problem. Not all organizations will need to get this detailed – adopting just a Pre-cache/Compat Scan strategy with some extra registry information might be good enough for some organizations. The goal is also to spawn other ideas on how to handle upgrades in your own environment. In Windows as a Service in the Enterprise Part 2, we will go into the ‘how’ and talk about the technical details behind the solution and how all of it fits together.

Originally posted on https://miketerrill.net/

How to install a Win10 SSU before the LCU using Configuration Manager

If you are involved in patching Windows 10 systems, then you might be familiar with the Servicing Stack Update (SSU) dilemma that has been going on in the Configuration Manager (or SCCM as some like to call it) world lately. If you read the notes at the bottom of a KB for any of the cumulative updates you will see the following:

Microsoft strongly recommends you install the latest servicing stack update (SSU) for your operating system before installing the latest cumulative update (LCU). SSUs improve the reliability of the update process to mitigate potential issues while installing the LCU. For more information, see Servicing stack updates

Now, if you are getting updates via Microsoft Update, then you have nothing to worry about as MU knows to sequence the SSU before the LCU. However, if you are deploying updates with Configuration Manager, it uses WSUS and cannot (currently) handle sequencing the SSU before the LCU. So what is a ConfigMgr admin to do? Simple – it involves a little pixie dust (who doesn’t like pixie dust?), configuration items, collections and deployments. You see, I got this crazy idea as I was watching my Twitter feed and internal emails going back and forth on how to handle this issue. It was a relatively peaceful afternoon and I had decided to configure some CIs and Baselines to enable and configure BranchCache when I had a light bulb moment. So be sure to thank the 2Pint Software guys for spawning this idea (and be sure to check out their downloadable CI to enable BranchCache here).

Now here is where the light bulb moment happened – as I was creating the Configuration Baseline, I happened to notice that they can be composed of Configuration Items, Software Updates, or other Configuration Baselines. After all, Software Updates are really just CIs. Then I remembered that we can create a collection based on the results of the Configuration Baseline. Create a collection for your Windows 10 systems, target the Configuration Baseline and SSU to this collection, and then target the LCU to the resulting Compliant collection. This way we can be sure that the SSU gets installed before the LCU.

Here is a simple example that you can follow for your environment:

  1. Create a collection called All Windows 10 1709 x64 Clients
    I use DDR information for this: select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where SMS_R_System.OperatingSystemNameandVersion like "Microsoft Windows NT Workstation 10.0%" and SMS_R_System.Build = "10.0.16299"
    NOTE: Be careful of “smart quotes” if you are copying and pasting this query.
  2. Make sure you have the latest SSU synchronized in CM. For this example I am using 2018-12 Update for Windows 10 Version 1709 for x64-based Systems (KB4477136).
  3. Create a new Configuration Baseline
    Name: Windows 10 1709 x64 SSU
    Description: Checks compliance for the Servicing Stack Update KB4477136
    Complete the other items like filtering and the option for co-managed clients as required by your environment.
  4. Click the Add button and select Software Updates. Search for KB4477136 and then expand the Name field so that you select the correct update (and check the box next to it in order to select it).
  5. Deploy the Configuration Baseline to the All Windows 1709 x64 Clients collection created in step 1. Pick a schedule that works with your environment.
  6. On the Deployments tab of the Configuration Baseline, right-click and select Create New Collection > Compliant
  7. For the collection name, enter: All Windows 10 1709 x64 SSU Compliant Clients and configure an evaluation schedule that works for your environment.
  8. Run the Configuration Baseline on a client that you know is missing the SSU and it should show up as Non-compliant.
  9. Target the SSU to the All Windows 10 1709 x64 Clients collection, make sure a client updates and then rerun the Configuration Baseline. It should now show as Compliant.
  10. Back in the Configuration Manager Console, after a collection evaluation, the All Windows 10 1709 x64 SSU Compliant collection should now show Win 10 devices that have the SSU installed. You can now use this collection to target the deployment for the LCU.
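
On the smart-quote warning in step 1: if a WQL query is pasted from a web page, typographic quotes will silently break the membership rule. A small helper (illustrative Python, not part of ConfigMgr) that normalizes them before pasting:

```python
# Map of common typographic quote characters to plain ASCII quotes.
SMART_QUOTES = {
    "\u201c": '"', "\u201d": '"',   # left/right double quotes
    "\u2018": "'", "\u2019": "'",   # left/right single quotes
}

def normalize_quotes(wql: str) -> str:
    """Replace smart quotes in a pasted WQL query with straight quotes."""
    for smart, plain in SMART_QUOTES.items():
        wql = wql.replace(smart, plain)
    return wql
```

Run any copied query through this (or do the same find-and-replace in a text editor) before saving the collection query rule.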

For more information on SSUs, see the following links:

ADV990001 | Latest Servicing Stack Updates

Servicing stack updates

Servicing Stacks by @SeguraOSD (he goes into a lot of detail in this post)

Originally posted on https://miketerrill.net/

Windows as a Service in the Enterprise Table of Contents

Windows as a Service in the Enterprise Overview Part 2

Windows as a Service in the Enterprise Table of Contents – this link contains a list of blogs covering different parts of the solution.

In Windows as a Service in the Enterprise Overview Part 1, we talked about the challenges that Windows 10 brings to the Enterprise and covered ‘why’ we needed to do something and also ‘what’ we are doing to solve the problem. In Part 2, we are going to talk about ‘how’ the solution fits together, along with some of the technical details. We also plan on writing other detailed blogs on particular parts of the solution that you will find listed in the Windows as a Service Table of Contents.

Our number 1 objective is to be able to Minimize Risk and Maximize Velocity. With that, we set a goal to create an extensible, modular and reusable framework. What we came up with was a gated process to maximize success and improve the end user experience. We only wanted to start the upgrade (or inform the user they could opt-in) if everything was ready to go, content was cached and we were 99% sure the upgrade was going to be successful. We also wanted to minimize the in-place upgrade time for the end users that would be experiencing the upgrade on their own time. Systems need to be patched and ready to go after the upgrade, hence we did not want them to be upgraded and then have to sit through another 30-60 minute cumulative update (see Optimizing Win10 OS Upgrade WIM Sizes for more information on how we accomplish this).

Multi-phase, gated approach consisting of the following phases: Pre-assessment, Pre-cache/Compat Scan, Ready for Scheduling, Pre-flight, In-place Upgrade.

Considerations and Challenges:
Different device types – when we talk about building a modular process, we wanted it to include all possible device types that are running Windows 10 that need to be upgraded. This goes beyond the traditional office worker laptop and desktop systems. We also needed to cover kiosk devices, devices that display information in public view, and VDIs, to name a few. What we did not want was a different process/method for upgrading each of these, since the maintenance would become unbearable.
Different types of users – mobile users are quite common these days, especially ones that work remotely on the road or at home over VPN. It was extremely important that the solution work for this type of user since we have a large population of them. We also have users in various time zones around the world with different working hours and need to be sure not to disrupt their working day. We still have some groups that prefer to do managed deployments – meaning they do not want the end user of the system involved at all. They want the upgrade to happen at night without any user interaction. On the flip side, we wanted to also see how many users would opt into upgrading at their convenience before the deadline. Initially, we were told that we would be lucky to get 15% opt-in, but so far that percentage is more like 65% out of all users including the ones that do not get the TS notification (so it is actually an even higher percentage). It is a new workforce – not the same as it was 20 years ago or even 10 years ago. People upgrade the OS on their smart phone all the time without IT holding their hand. Make it available and people will install it on their schedule.
Improve the end user experience – this is something that we wanted to do better from the last upgrade which was based on deferrals. I’ll admit, I am not a fan of the deferral process. It causes issues with reporting. It is not a true deferral. Depending on when the system is added to the collection, it could run at different times. I remember getting my deferral notification from the previous upgrade. It was about 3 PM in the afternoon and I was still in the middle of something. I deferred the upgrade thinking it would be a 24 hour deferral before asking again. Nope! At 8 PM that night it attempted again and since I was not in front of my system to click the defer button the upgrade ran. So we decided to use native functionality in Configuration Manager (with some modifications but more on that later). Users would know the required deadline from Software Center and get reminder notifications the closer it got to the deadline. We also customized some of the screens so that end users knew that something was happening (there is nothing worse than an end user rebooting the system in the middle of the upgrade).
Minimize the business impact – there are many parts to this, but loss of productivity and maximizing success are the main ones. The purpose of the business is to well…run the business, not spend time running OS upgrades. So the goal of doing as much of the prep work ahead of time so that a) the upgrade is able to run (eliminating things that are going to block the upgrade from running), b) that it runs as fast as possible and c) that it has a high chance of success. By maximizing the 1st time success rates (and tracking them for data that we can use in our annual review), we minimize the impact to the business and also minimize technician time and the amount of touches they need to do.
Collect metrics for better reporting and future use – how do you think some of the large companies that you buy from seem to know you so well? They collect data and (among other things) use this to provide a better experience in the future. We did the same thing by collecting information that we will be able to analyze and use to create an even better experience for the next upgrade.
Content distribution and updated WIM – this helps speed things up. Start pre-caching content if you are not already, and I highly recommend using peer-to-peer technologies (especially BranchCache, because it is great at deduping WIM files among other types of content). Otherwise, you might not become best friends with your network team.
Change Control process – this is probably the favorite of everyone who has to deal with change control. The solution that we come up with has to meet the change control requirements.
Lastly, automate as much as possible. There are going to be times when you need to troubleshoot an issue by digging through log files, running process monitors or something else to help diagnose the problem. So the more automation, the more time it frees up for things that are not easily automated.

Backend Setup

Collections

There are certain collections that are tied to the various phases of the process. Not everyone can see all of these collections as we want the automation moving devices through the process based on the rules that have been defined.
NOTE: With the extended support on the Fall release, we will likely move to a yearly cadence (which is what we wanted anyway) and skip the Spring release altogether, and we will likely drop the Spring collections and the Fall text from the Fall collections.

Ready for Pre-assessment
The entry point into the process is the Ready for Pre-assessment collections. These are nothing more than placeholders that are used when the deployment team imports devices into the process. Each deployment team is scoped to see its respective collection (we have multiple groups that manage their devices, so it makes it a little more complex than the average environment). These collections are called:
OSD_W10_{Fall/Spring}_Ready_for_PreAssessment_xxx
Fall indicates the xx09 releases, whereas Spring indicates the xx03 releases of Windows 10. You could also use xx09/xx03 or H1/H2 to differentiate between the two sets of WaaS collections.

Pre-assessment
Once the first WaaS job runs (currently it runs twice a day), it moves the machines from the Ready for Pre-assessment collections into the Pre-assessment collection. These collections are called:
OSD_W10_{Fall/Spring}_PreAssessment
Once in the Pre-assessment collection, this is where another backend job runs once a day and evaluates all of the Pre-assessment rules mentioned in Part 1 against the systems in the collection using inventory data. Devices that pass all of the rules are moved into the Pre-cache/Compat Scan collection once a day at night. Those that do not pass, stay in the Pre-assessment collection until they pass or are removed from the WaaS process.

Pre-cache/Compat Scan
The Pre-cache/Compat Scan collection is the first place in the process that something is being run on the device. These collections are called:
OSD_W10_{Fall/Spring}_Precache_Compat_Scan
This collection has the Pre-cache/Compat Scan Task Sequence deployed to it. It runs completely hidden and the intent is to cache the content that will be used during the actual in-place upgrade and also run the Windows 10 compatibility scan to uncover any hard blockers or other issues that we do not already know about. Once we discover issues, we create a Pre-assessment rule so that any other devices with this problem do not make it past Pre-assessment until the issue is resolved. Devices that pass Pre-cache/Compat Scan get moved to Ready for Scheduling. Devices that do not pass, stay in the Pre-cache/Compat Scan collection until they pass, are moved back to Pre-assessment, or are removed from the WaaS process.

Ready for Scheduling
The Ready for Scheduling collection is another placeholder collection that contains the devices that have passed every phase up to this point and are ready to be scheduled for the upgrade. These collections are called:
OSD_W10_{Fall/Spring}_Ready_for_Scheduling
When the deployment team runs the Scheduling Wizard, it reads the devices from this collection.

In-place Upgrade
There are 31 daily collections for In-place Upgrade, one for each day of the month. In months with fewer than 31 days, the extra collections are simply not used. These collections are called:
OSD_W10_{Fall/Spring}_Day_{01-31}_8PM
8 PM indicates the time that the corresponding maintenance window opens and also the required time of the deployment. It is possible to have more than one deployment time per day if required, but it does increase complexity. We also have certain scenarios where 8 PM does not work for some business groups and handle those slightly differently (see Next Available Maintenance Window). We also have the mandatory collection for devices that missed the 7 required nightly deployment times. These collections are called:
OSD_W10_{Fall/Spring}_Mandatory
This collection has a required deployment without any restrictions and will run the next time the device comes on the network. Other devices go into a needs remediation collection so that a technician can investigate why the device did not upgrade during the 7 required night deployment times. These collections are called:
OSD_W10_{Fall/Spring}_Needs_Remediation
Although the 8 PM deadline works for most scenarios, we did have a few cases where the device could not run until later (like midnight). These devices already had maintenance windows defined (in one case from 12 AM to 5 AM). Since the daily collections also have maintenance windows, they were combining to create an 8 PM to 5 AM window. Since these devices did not qualify for user opt-in and already had a maintenance window, we created another scheduling option called Next Available Maintenance Window. These collections are called:
OSD_W10_{Fall/Spring}_Next_Available_MW
These collections do not have a maintenance window on them. When a device is scheduled, the wizard checks to see if the device already has an assigned maintenance window. Those that do get placed in the Next Available Maintenance Window collection. The next window could be that night or the upcoming weekend.
For the tech led, on-demand scenarios, we have collections that have the On-demand Task Sequence deployment. These collections are called:
OSD_W10_{Fall/Spring}_Available_On_Demand
The deployment team has access to this collection via LOB specific collections mentioned below and can add machines into it since there is only an available deployment targeted to it.

Exclusion collections
There are multiple exclusion collections that are configured for the various deployment teams. They can add devices to their collection so that those devices do not appear in the WaaS process. These collections are called:
OSD_W10_{Fall/Spring}_Exclude_xxx

On Demand
There are multiple on demand collections that are configured for the various deployment teams. These are for the manual tech led upgrades, and techs can add devices to their collection so that the available in-place upgrade deployment can be run from Software Center. Devices in this collection may or may not go through Pre-assessment first. They do run a task sequence that contains both the Pre-cache/Compat Scan and In-place Upgrade Task Sequences. These collections are called OSD_W10_{Fall/Spring}_Available_On_Demand_xxx and roll into the main OSD_W10_{Fall/Spring}_Available_On_Demand collection mentioned above.

Mandatory In-place Upgrade
After the opt-in period and seven nightly attempts (at 8 PM), certain devices that have not run go into a collection that has a mandatory deployment without any restrictions on it. This means it will run as soon as the device comes online. These collections are called OSD_W10_{Fall/Spring}_Mandatory.

Needs Remediation
After the opt-in period and seven nightly attempts (at 8 PM), certain devices that have not run, or devices that have run but failed for some reason, go into a needs remediation collection so that a deployment team member can investigate either the failure or determine if the device should be rescheduled. These collections are called OSD_W10_{Fall/Spring}_Needs_Remediation_xxx.

Next Available Maintenance Window
There were certain devices that already had defined maintenance windows that were more restrictive than the nightly WaaS 8 PM to 4 AM OSD maintenance windows that are set up on all of the day collections. Some of them did not start until midnight or the weekend. So for scheduling purposes, the wizard was modified to determine if a device was already in a collection with a defined maintenance window and, if it was, it would be placed in the next available maintenance window collection. This way the upgrade would only happen during the time period that the device owner had previously defined. These collections are called OSD_W10_{Fall/Spring}_Next_Available_MW.

Collection Membership Rules
With the exception of the Exclude collections, all membership rules are Direct Membership rules. This is so there can be granular control over individual devices. Also, we do not need to set an aggressive collection evaluation cycle and beat on the collection evaluator. Direct Membership rules are pretty efficient since the collection only evaluates when a rule is added or removed. There were several internal team discussions on the best way to handle this and both Stephen Owen and Keith Garner came up with some really efficient methods for bulk adding and bulk removing devices from a collection.
The WaaS process is state based, meaning that devices should only be in one state (collection) at a time and follow the flow of the process. The deployment teams are only scoped to see some of the collections (Exclude, On-demand, Ready for Pre-assessment and Ready for Scheduling), the rest is hands off – collection membership is all done by the automation framework (WaaS Jobs).
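As a sketch of what the bulk approach can look like against the SMS Provider (the site server, site code, collection name and $ResourceIDs input are all placeholders here, not our actual automation):

```powershell
# Bulk-add direct membership rules in a single call to the SMS Provider,
# instead of one Add-CMDeviceCollectionDirectMembershipRule call per device.
# $SiteServer, $SiteCode, the collection name and $ResourceIDs are placeholders.
$SiteServer  = 'CM01'
$SiteCode    = 'PS1'
$ResourceIDs = @(16777220, 16777221, 16777222)

$Collection = Get-WmiObject -ComputerName $SiteServer -Namespace "root\SMS\site_$SiteCode" `
    -Class SMS_Collection -Filter "Name='OSD_W10_Fall_PreAssessment'"

$Rules = foreach ($ResourceID in $ResourceIDs) {
    $Rule = ([WmiClass]"\\$SiteServer\root\SMS\site_${SiteCode}:SMS_CollectionRuleDirect").CreateInstance()
    $Rule.ResourceClassName = 'SMS_R_System'
    $Rule.ResourceID        = $ResourceID
    $Rule
}

# One AddMembershipRules call means one collection evaluation for the whole batch
$Collection.AddMembershipRules($Rules) | Out-Null
```

Bulk removal works the same way with the SMS_Collection DeleteMembershipRules method.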

WaaS Jobs
There are a few WaaS Jobs that are configured to perform the automation behind the scenes. This can be done using various methods, however, our automation team has selected SMA. Hopefully they will publish some blogs on the technical aspects of how these jobs run.

Job 1
This job first checks for any devices in the Ready for Pre-assessment collection (devices that were placed here by the Import Wizard) and moves them into the Pre-assessment collection (adds the devices to the Pre-assessment collection and then removes them from the Ready for Pre-assessment collection). This job runs twice a day, at 3 PM and 7 PM.
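The "move" is really an add followed by a remove, so a device is never left outside the process. A simplified per-device sketch using the ConfigMgr cmdlets (collection names follow the convention above; $ResourceIDs is a placeholder input):

```powershell
# Move devices from Ready for Pre-assessment into Pre-assessment:
# add to the destination collection first, then remove from the source.
foreach ($ResourceID in $ResourceIDs) {
    Add-CMDeviceCollectionDirectMembershipRule `
        -CollectionName 'OSD_W10_Fall_PreAssessment' -ResourceId $ResourceID
    Remove-CMDeviceCollectionDirectMembershipRule `
        -CollectionName 'OSD_W10_Fall_Ready_for_PreAssessment_xxx' `
        -ResourceId $ResourceID -Force
}
```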

Job 2.1
This job runs the pre-assessment rules against the devices in the Pre-assessment collection. The devices that do not pass pre-assessment will stay in the Pre-assessment collection. This job runs twice a day at 1:30 PM and 9:30 PM.

Job 2.2
This job moves the devices that passed pre-assessment into the Pre-cache/Compat Scan collection (adds the devices to the Pre-cache/Compat Scan collection and then removes them from the Pre-assessment collection). This job runs once per day at 10:30 PM so that pre-caching starts at night time during the off hours for the devices that are online. Originally jobs 2.1 and 2.2 were combined but were later split out so that pre-assessment could be run more than once per day in the event that a device’s pre-assessment issue was resolved the same day.

Job 3
This job moves devices that have successfully completed the Pre-cache/Compat Scan phase into the Ready for Scheduling collection (adds devices to the Ready for Scheduling collection and then removes them from the Pre-cache/Compat Scan collection). The ones that do not finish or do not pass the Pre-cache/Compat Scan phase stay in the collection. This job runs once per day at 7 AM.

Job 3.5
This job moves the devices that we submitted from the Scheduling Wizard into the correct day collection or Next Available Maintenance Window collection. If a device was scheduled for the 31st of the month then it would get placed into the Day 31 collection. This job runs once per day at 9 PM. This way any changes in content from the time the device was pre-cached will start to download at night if the device is online.
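Picking the target collection is just a matter of formatting the scheduled day into the naming convention, for example:

```powershell
# Map a scheduled upgrade date to its daily collection name
# (naming per the convention described above).
$ScheduledDate  = Get-Date '2018-10-31'
$CollectionName = 'OSD_W10_Fall_Day_{0:D2}_8PM' -f $ScheduledDate.Day
# $CollectionName is now 'OSD_W10_Fall_Day_31_8PM'
```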

Job 4
This job does collection clean up. It looks at the n-7 day collection and determines which devices are done and removes them from the process (i.e. successful upgrade), which devices should go into the Mandatory collection, and which devices should go into the Needs Remediation collection. It also looks at the Next Available Maintenance Window collection, the Needs Remediation collection, the On Demand collection and the Mandatory collection and pulls out any devices that were successfully upgraded.

Maintenance Windows
The OSD_W10_{Fall/Spring}_Day_{01-31}_8PM collections have maintenance windows on them. The maintenance window is from 8 PM to 4 AM and runs for 7 days from the starting day. The reason for this is to prevent required deadline deployments from running during the day. If a device is simply offline or powered off, it will not get hit with the upgrade the next morning once it is powered on. Most of the devices fall into the category where there is not a pre-defined maintenance window; however, certain devices do have pre-defined maintenance windows. These get scheduled into the OSD_W10_{Fall/Spring}_Next_Available_MW collection (as mentioned above) and will run during the device's next scheduled maintenance window time frame. Maintenance windows do not apply to Opt-in (user invoked upgrade before the deadline) or On Demand Deployments.
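Creating these windows can be scripted as well. A hedged sketch of one way to do it (the collection ID, start date and the ApplyToTaskSequenceOnly choice are assumptions for illustration, not necessarily how our automation is built):

```powershell
# Sketch: one non-recurring 8-hour (8 PM-4 AM) window per night for 7 nights
# on a day collection. Collection ID and start date are placeholders.
$StartDate = Get-Date '2018-10-01 20:00'
for ($Night = 0; $Night -lt 7; $Night++) {
    $Schedule = New-CMSchedule -Start $StartDate.AddDays($Night) `
        -DurationInterval Hours -DurationCount 8 -Nonrecurring
    New-CMMaintenanceWindow -CollectionId 'PS100123' `
        -Name ("WaaS 8PM-4AM Night {0}" -f ($Night + 1)) `
        -Schedule $Schedule -ApplyToTaskSequenceOnly
}
```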

Deployments
There is one required deployment for each day that recurs for 7 days. Rerun if previously failed is configured in order to account for Pre-flight failures, since there is currently not an easy way to perform pre-flight checks before running a task sequence that still adheres to maintenance windows (i.e. running another program before the task sequence does not honor the maintenance windows and would make it confusing for end users). The deployments are also configured to allow users to run them independently of assignment, which allows end users to opt in ahead of the required deadline. For the daily deployments, task sequence progress is displayed. This can be selectively enabled and disabled starting in CB 1706. The deployments are configured to download all content locally before starting the task sequence. This is why it is important to keep the number of reference packages to a minimum. For driver packages, we use a dynamic method that downloads the correct driver package during the running task sequence. Also, the deployments are configured to use remote distribution points and the default boundary group. We use peer-to-peer technology and do not use boundary groups for content location (which would be a nightmare in our environment). Lastly, we rename the deployment “advertisement name” for better/cleaner report filtering:

# Find the task sequence deployment (advertisement) and rename it for cleaner report filtering
$NewTS = Get-CMTaskSequenceDeployment -TaskSequenceId $TSID | Where-Object AdvertisementID -eq $DPID
$NewTS.AdvertisementName = 'Windows 10 In-Place Upgrade Fall - Day 01 PM'
$NewTS.Put()  # commit the change back to the site

Pre-cache/Compat Scan Task Sequence
There are two goals to the Pre-cache/Compat Scan phase – the first is to pre-cache the content (including dynamically downloading the correct driver package), and the second is to see if the system will actually pass the Windows 10 compatibility scan. If it does not, the user is not disrupted or even aware that anything is happening at this point, since it is configured to run silently in the background. There is no point sending the actual upgrade to a system that fails the compatibility scan. That just leads to end user frustration. Also, it helps us identify previously unknown hard blockers (applications that prevent the in-place upgrade from running). This was invaluable for two reasons – first, we discovered applications that we did not know about and were able to add those back to our pre-assessment rules; second, we found out that old install binaries would prevent the upgrade as well and these simply needed to be deleted from the flagged system. If there is only one thing that you adopt from this WaaS process, it should be this – running a pre-cache/compat scan ahead of time. This alone will increase your first-time success rates.

In addition to the pre-caching and the compat scan, we also wanted to gather some other information, so the task sequence also collects the following information by writing to the registry: start and stop time, start and stop time for downloading content (drivers), check readiness failure reason, compat scan results. This information is then collected by extending the hardware inventory.
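A minimal sketch of such a metrics step (the registry path, value names and the use of the setup return code variable are our conventions for illustration, not a ConfigMgr default):

```powershell
# Run as a TS step: record metrics to a registry key that a hardware
# inventory extension then picks up. Key path and value names are examples.
$Key = 'HKLM:\SOFTWARE\WaaS\1709'
if (-not (Test-Path $Key)) { New-Item -Path $Key -Force | Out-Null }

$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
New-ItemProperty -Path $Key -Name 'PrecacheStart' `
    -Value (Get-Date -Format 'yyyyMMdd-HHmmss') -PropertyType String -Force | Out-Null
New-ItemProperty -Path $Key -Name 'CompatScanResult' `
    -Value $TSEnv.Value('_SMSTSOSUpgradeActionReturnCode') -PropertyType String -Force | Out-Null
```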


One interesting thing that we found out was that running a compat scan only (i.e. using the TS Upgrade Operating System step with the 'Perform Windows Setup compatibility scan without starting upgrade' option) put the client in provisioning mode. This was bad since we were running the Pre-cache/Compat Scan TS completely silently, and sometimes a device would be powered off during this process, effectively leaving the client in provisioning mode. We put some failsafes in the TS so that it would pull it back out after a period of time and we also created a User Voice item. This is now fixed in CB 1806 – the client no longer goes into provisioning mode when only running a compat scan. My colleague Gary Blok has a post called WaaS – Post 1 – PreCache Compat Scan TS that goes into the details of the Pre-Cache/Compat Scan TS and even provides a sample that can be downloaded and imported.

Pre-Flight
Originally we tried to get the Pre-flight checks to run as a script before kicking off the In-place Upgrade Task Sequence, but we quickly found out that this does not adhere to maintenance windows. So the laptop user who had their device off the night before and powered it up to check some emails while running on battery would all of a sudden get a notification to plug their laptop in so the upgrade could run – but it would not run, since it was outside the maintenance window, which led to a very confusing experience. Therefore, we had to move the Pre-flight phase into the task sequence, which is something that I absolutely despise. Since there is no abort exit code on a task sequence, you are left with sending either a failure or a success. If you send a success for a pre-flight failure, you have a false positive since the machine did not upgrade, and if you send a failure, you have a pre-flight failure that is hard to tell apart from a real failure, skewing your metrics and results. The Task Sequence engine could use some improvement in this area and hopefully we will see something some day in a future CB release (hint hint, DJam or Rob if you are reading this post).

As for the actual Pre-flight checks, these are mainly the same checks that we did for the Pre-assessment, but they are done on the client at run time and not against inventory data. We also added some execution checks, like running on battery, emergency kill switch, MP connectivity, pending reboot and VPN status. There are two steps in the Task Sequence for this – one that will create pop-up messages using ServiceUI if a user is logged on and one that will run silently if no user is logged on. Pop-up notifications inform the user if there is a failure, the reason for it and what needs to be done to remediate it. It is designed to automatically re-check the rules every 5 minutes and continue if the issue(s) are resolved. It retries 12 times (60 minutes total) and then fails the step. If the Pre-flight fails, it skips the Main TS and continues to the end, where it records the failure(s). The results are captured in a PreFlight.log in the ccm\logs directory and also in the registry, like the Pre-cache/Compat Scan metrics.
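The retry logic can be sketched like this (Test-PreFlight here is a stand-in for the real rule checks, with only a battery check as an example):

```powershell
# Re-check the pre-flight rules every 5 minutes, up to 12 attempts
# (60 minutes total), then fail the step.
function Test-PreFlight {
    # Placeholder for the real checks; this example only fails on battery
    $OnBattery = (Get-CimInstance -ClassName Win32_Battery).BatteryStatus -eq 1
    return (-not $OnBattery)
}

$MaxRetries = 12
for ($Attempt = 1; $Attempt -le $MaxRetries; $Attempt++) {
    if (Test-PreFlight) { exit 0 }                 # all checks passed, TS continues
    if ($Attempt -lt $MaxRetries) { Start-Sleep -Seconds 300 }
}
exit 1                                             # still failing after 60 minutes
```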


This enables easier identification of Pre-flight failures.

In-Place Upgrade Task Sequence
The In-Place Upgrade Task Sequence is similar to the Pre-cache/Compat Scan Task Sequence, as it also sets variables along the way to keep track of metrics. This Task Sequence also makes heavy use of nested Task Sequences. The goal was to make things modular, so we split out the parts of the Task Sequence that could be re-used in other Task Sequences. Tracking some of the same metrics was important so that we could know the number of attempts for first-time success rates, run time, if a user was logged on, if a user invoked the upgrade (opt-in) and a variety of other things. Just like the other phases, this is written to the registry, inventoried and used in reporting.


Gary also has a detailed blog on a sample In-Place Upgrade Task Sequence and modules that can be downloaded and imported on his blog WaaS – Post 2 – In Place Upgrade TS.

On Demand Task Sequence
We had another requirement for technicians to be able to run tech-led in-place upgrades (think executives). They needed to be able to kick these off from Software Center while sitting in front of the system. We did not want to have to maintain a completely separate task sequence for this use case, so we decided to use nested Task Sequences. It is a very simple Task Sequence that combines the Pre-Cache/Compat Scan and In-Place Upgrade Task Sequences into one Task Sequence. It sets a variable called SMSTS_OnDemand to TRUE. Then it enables the TS Progress UI, so that the Pre-cache/Compat Scan Task Sequence progress is visible, by setting the variable TSDisableProgressUI to FALSE. Step three is the Pre-cache/Compat Scan Task Sequence and step four is the In-place Upgrade Task Sequence. It has a single, available deployment to the OSD_W10_Fall_Available_On_Demand collection with the 'Pre-download content for this task sequence' option enabled. This way, as soon as devices are added to the on demand collection, they should start downloading the OS Upgrade Package and be ready for when the tech shows up to do the upgrade. They could also be run through the Pre-assessment and Pre-cache/Compat Scan phases as well, but it is not a requirement. Lastly, the deployment is configured like the other deployments to 'download all content before starting'.
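The first two steps are plain Set Task Sequence Variable steps; their script equivalent (if you preferred a Run PowerShell Script step) would be:

```powershell
# Script equivalent of the first two Set Task Sequence Variable steps
$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$TSEnv.Value('SMSTS_OnDemand')      = 'TRUE'   # custom flag used by the nested TSs
$TSEnv.Value('TSDisableProgressUI') = 'FALSE'  # show progress during pre-cache/compat scan
```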

User Experience
The user experience is always a fun thing to deal with and decide how to handle. There are a lot of creative ideas on the internet from the community that provide different front ends, notification screens and deferrals. However, we wanted to use native Configuration Manager functionality as much as possible. Plus, if the CM Product Team does not hear otherwise, they assume that everyone is satisfied with the out of the box functionality. The goal should be to let them know how we want or think the product should work and provide that feedback using User Voice, Twitter, User Groups and conferences.

We decided to go with the built in notifications and instantly found some bugs and limitations. For some of our devices and lines of business, there should be no pop up notifications. The upgrade should just run off hours without any notification or user involvement. For some of our other devices and lines of business we wanted pop up notifications and reminders so that they would have the ability to opt-in to the upgrade before the deadline. The challenge here is that notifications are enabled on the Task Sequence. Having to duplicate the Task Sequences seemed a bit crazy. So we used a little local policy trick and enabled notifications on only those devices that should get notifications. This was covered at MMSMOA in the Windows 10 OS Deployment – MVP Showcase session. It is on the list of WaaS items to be blogged, so stay tuned on how we do this…

The notification screens and text were another point of concern (bug and limitation). Depending on how it was launched – from the pop-up notification or from Software Center – different screens, fonts, and font colors would be used.



A support case was opened for this issue and it is mostly fixed in 1806. The text boxes are still very rigid in what can be put in each one and there is no ability to simply provide a hyperlink to an internal support page. Also, from the pop-up notification, the default selection cannot be customized; we would like the ability to specify the default or have it changed to Snooze and remind me: Later. Although our opt-in numbers were nice and high for 1709, we suspect the usual human behavior of just clicking OK kicked off a fair number of the upgrades.

Another thing we did to make sure users knew an upgrade was in progress was to customize the lock and logon screens. We thought about actually preventing users from logging in, but instead we just warned them that an upgrade was in progress. Depending on which step the Task Sequence is running, the progress notification may have already happened prior to logon, and if it is in the middle of the OS Upgrade step, an end user logging on will not see the progress UI. So this helps prevent issues like that from happening.



Reporting
We built reports from scratch for Pre-assessment and In-place Upgrades and also modified some existing reports to only show data for machines currently in the WaaS process. I am sure that we will be blogging about some of these in the future and we are also hoping to get more Power BI style dashboards for daily status.




Troubleshooting
There is always a need for troubleshooting. For most of our unexplained errors, I blame our 3rd party security software. These issues are extremely difficult to pinpoint and show the actual cause. Microsoft recommends removing 3rd party anti-virus/anti-malware if you are having issues with upgrades. However, we structured things so that each phase could use various reports, status messages, and log files to help find the issue at hand. For certain failures, logs were automatically zipped up and copied to a log share so they could be further analyzed. After we figured out how to incorporate Dynamic Updates into the OS Upgrade Package, systems that previously would not upgrade did so without any issues. Unfortunately, we did not figure this out until the tail end of our upgrade. Gary has a blog on it called IPU & Offline Dynamic Updates. Special mention to Adam Gross (http://www.asquaredozen.com/), David Segura (https://www.osdeploy.com/), and Marc Graham (https://blog.ctglobalservices.com/author/mag/) – we tested this over and over until we finally got it to work. Follow them on Twitter and read their blogs as they have some nice scripts and solutions for building and optimizing install.wim files and OS Upgrade Packages.

WaaS 1809
We have some things planned for WaaS 1809 that we did not get to in the first iteration or things we thought of since then that we want to incorporate. We have some exciting things planned around new and not so new technology (Ledbat++, Delivery Optimization and BranchCache) that will make pre-caching content even better. We plan on continuing to share our knowledge, best practices, tips and tricks and ways of doing things with the community so that you might find that one thing that makes your upgrade that much easier and that much more successful. Hopefully you enjoyed this detailed overview of our WaaS process and have been able to take away something useful. Stay tuned for more posts and good luck with your Windows 10 upgrades.

Originally posted on https://miketerrill.net/
