Removing a non-existent domain

I recently lost a huge storage array and, with it, the DC for a demo child domain. I therefore had to clean up the now non-existent domain, which I did with ntdsutil. Below are the steps involved. Basically the domain controllers for the domain are removed, then the DNS naming contexts and finally the domain itself.
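The ntdsutil sequence looks roughly like this transcript, run on a surviving DC. The server name, domain names and DN paths are placeholders for your environment, and the remove selected server step repeats for each DC in the dead domain:

```text
C:\> ntdsutil
ntdsutil: metadata cleanup
metadata cleanup: remove selected server cn=CHILDDC01,cn=servers,cn=Default-First-Site-Name,cn=sites,cn=configuration,dc=contoso,dc=com
metadata cleanup: quit
ntdsutil: partition management
partition management: delete nc dc=DomainDnsZones,dc=child,dc=contoso,dc=com
partition management: quit
ntdsutil: metadata cleanup
metadata cleanup: select operation target
select operation target: list domains
select operation target: select domain 1
select operation target: quit
metadata cleanup: remove selected domain
metadata cleanup: quit
ntdsutil: quit
```

Use list before delete nc to confirm the exact naming context names in your forest.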

Note that if you receive an error that the domain cannot be removed because of a leaf object, run the following to force replication.

repadmin /syncall /aped

Deploying the OMS Agent Automatically

I needed to deploy a new OMS workspace to every machine in my environment and these machines were at various states of configuration. Some had the OMS agent already installed, some had various workspaces already added and some had the new OMS workspace already configured. Additionally these machines were in different environments with differing access to file services where the agent could be stored.

Therefore I created a script that downloads the agent from the Internet if it's not already installed, checks whether the desired workspace is already configured and adds it if it's not.

Update the workspace ID and Key to that of your environment.
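A sketch of that script, using the Microsoft Monitoring Agent's documented COM interface; the download link and silent install switches are from memory and should be verified, and the workspace values are placeholders:

```powershell
$workspaceId  = '<your workspace ID>'
$workspaceKey = '<your workspace key>'

# Install the agent if the HealthService service is missing
if (-not (Get-Service HealthService -ErrorAction SilentlyContinue)) {
    $url = 'https://go.microsoft.com/fwlink/?LinkId=828603'  # MMA x64 download; verify this link
    $exe = "$env:TEMP\MMASetup-AMD64.exe"
    Invoke-WebRequest -Uri $url -OutFile $exe
    Start-Process $exe -ArgumentList '/C:"setup.exe /qn AcceptEndUserLicenseAgreement=1"' -Wait
}

# Add the workspace only if it is not already configured on this agent
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
if (-not ($mma.GetCloudWorkspaces() | Where-Object { $_.workspaceId -eq $workspaceId })) {
    $mma.AddCloudWorkspace($workspaceId, $workspaceKey)
    $mma.ReloadConfiguration()
}
```

The GetCloudWorkspaces check is what makes the script safe to run repeatedly across machines in mixed states.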


Downloading a Cumulative Update for manual installation

Recently I needed to manually install a cumulative update. It's actually a simple process. First you need to find the KB number for the update. This can be found by navigating to https://support.microsoft.com/en-us/help/4000825/windows-10-windows-server-2016-update-history and selecting the OS build, which will show a list of all the updates.

Once you know the KB, head over to https://www.catalog.update.microsoft.com/ and search for it. You will then be able to download the update and install it manually as required.

Solving a strange Remote Desktop Gateway authentication problem

I recently deployed a new Remote Desktop Gateway server, but when I authenticated it would tell me the logon failed, even though I knew the policies were valid for the user (because I could log on from a different computer) and I knew the credentials were correct.

There were no logs under TerminalServices-Gateway\Operational, which meant the problem was not a policy issue, as the connection was not getting that far.

To start the troubleshooting Kerberos logging was enabled on the client machine:
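Kerberos logging is a registry switch; from an elevated prompt on the client:

```cmd
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v LogLevel /t REG_DWORD /d 1 /f
```

Set LogLevel back to 0 when finished, as this logs noisy and often benign Kerberos errors to the System event log.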

On connecting again I checked the System event log and examined the Kerberos events. Sure enough, there were Kerberos errors. The problem seemed to be that on my machine the authentication did not fall back to NTLM, which it did on the machines where it worked.

The solution was to add an SPN for the public-facing name, which resolved the problem.
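For example, where the public name and gateway host below are placeholders for your environment:

```cmd
setspn -S HTTP/rdg.contoso.com RDGW01
```

The -S switch checks for duplicates before adding, which is safer than -A.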

Easily configure Remote Desktop Gateway firewall rules

When you install Remote Desktop Gateway, which enables RDP to be encapsulated in HTTPS, a number of firewall exceptions are required and these are enabled automatically. An RDG typically also has both a public and a private IP address.

There are many other firewall exceptions, normal for Windows functionality, that by default are enabled for the Any profile, which when you have a public IP address on a NIC means they are also open to the Internet. What you really need is for those exceptions to be bound to the Domain profile, i.e. the internal NIC. This is easy to do with PowerShell. First you can list all the exceptions that are enabled for Any.
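A simple way to list them:

```powershell
# List enabled rules bound to the Any profile
Get-NetFirewallRule | Where-Object { $_.Enabled -eq 'True' -and $_.Profile -eq 'Any' } |
    Format-Table DisplayName, Direction, Profile
```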

This will list a lot of exceptions. Next we want to change them to the Domain profile, except for the two required for RDG: the RDG UDP and HTTPS rules. This can be done with the following:
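A sketch of the change; the display-name match for the RDG rules is an assumption, so check the exact names of the RDG HTTPS and UDP rules on your server before running it:

```powershell
# Rebind the enabled Any-profile rules to Domain, skipping the RDG rules
Get-NetFirewallRule |
    Where-Object { $_.Enabled -eq 'True' -and $_.Profile -eq 'Any' -and
                   $_.DisplayName -notlike 'Remote Desktop Gateway*' } |
    Set-NetFirewallRule -Profile Domain
```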

Done!

Note, you could adjust which rules are excluded to meet other requirements you may have for other types of server.

Deploying Operating Systems in Azure using Windows PE

In this article I want to walk through deploying operating systems in Azure using a custom Windows PE environment, and along the way cover some basics around PE and OS deployment. Before going any further I would stress I don't recommend this. The best way to deploy in Azure is to use templates, have generic images and then inject configuration into them using declarative technologies such as PowerShell DSC, Chef or Puppet. However, there are organizations with multiple years of custom image development at their core that, at least in the short term, need to be maintained, and that was my goal for this investigation: is it even possible to use your own Windows PE-based deployment?

My starting point was to get a deployment working on-premises on Hyper-V. Azure uses Hyper-V, and at this level there really is nothing special about what Azure does, so my thinking was that if I got a process running on-premises I should be able to take that VHD, upload it to Azure, make an image out of it and create VMs from it (and this proved to be true!). The benefit of this approach was speed of testing and the ability to interact with the Windows PE environment during the development and testing phase, something that is much harder in Azure as there is no console access.

The first step was to create a VHD (not VHDX, for Azure compatibility) that contained Windows PE which I would boot to. I downloaded the latest Windows ADK (1709) from https://developer.microsoft.com/en-us/windows/hardware/windows-assessment-deployment-kit and installed it on a machine. Once installed, I created my own Windows PE (x64) instance via the Deployment and Imaging Tools Environment. I used the following commands:
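From an elevated Deployment and Imaging Tools Environment prompt, the base PE copy is a one-liner (the target folder is my choice of path):

```cmd
copype amd64 C:\WinPE_amd64
```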

I then wanted to add PowerShell and other components to it including imagex.exe:
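Something along these lines, assuming the default ADK install location; imagex.exe is copied in from wherever you have it (it shipped with older ADK/WAIK deployment tools), and the matching en-us language CABs under the en-us subfolder should also be added:

```cmd
rem Mount the boot.wim from the copied PE environment
Dism /Mount-Image /ImageFile:C:\WinPE_amd64\media\sources\boot.wim /Index:1 /MountDir:C:\WinPE_amd64\mount

rem Add PowerShell and its prerequisites
set OCS=C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\amd64\WinPE_OCs
Dism /Add-Package /Image:C:\WinPE_amd64\mount /PackagePath:"%OCS%\WinPE-WMI.cab"
Dism /Add-Package /Image:C:\WinPE_amd64\mount /PackagePath:"%OCS%\WinPE-NetFx.cab"
Dism /Add-Package /Image:C:\WinPE_amd64\mount /PackagePath:"%OCS%\WinPE-Scripting.cab"
Dism /Add-Package /Image:C:\WinPE_amd64\mount /PackagePath:"%OCS%\WinPE-PowerShell.cab"

rem Copy in extra tools, e.g. imagex.exe, then commit the changes
copy imagex.exe C:\WinPE_amd64\mount\Windows\System32\
Dism /Unmount-Image /MountDir:C:\WinPE_amd64\mount /Commit
```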

Notice in the code above that when I add packages I do so by mounting the boot.wim file that is part of my copied PE environment, performing actions against it, then committing those changes when I unmount it. I'm modifying that boot.wim, which is an important point.

Once the PE was ready I wanted to test it quickly, so I built a VHD based on that PE environment.
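Roughly like this, with the diskpart portion shown as an interactive session (the VHD path and size are my choices):

```text
C:\> diskpart
DISKPART> create vdisk file=C:\pe.vhd maximum=1024 type=fixed
DISKPART> select vdisk file=C:\pe.vhd
DISKPART> attach vdisk
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label=WinPE
DISKPART> assign letter=V
DISKPART> active
DISKPART> exit

C:\> MakeWinPEMedia /UFD C:\WinPE_amd64 V:

C:\> diskpart
DISKPART> select vdisk file=C:\pe.vhd
DISKPART> detach vdisk
```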

This creates a new VHD file and attaches it to the current OS as drive V:. I then make bootable media of my PE folder on the V: drive, then detach it. I copied this VHD file to a Hyper-V box and created a VM that used it as its boot disk. Sure enough, it booted and I was facing a PE environment. The next step was to format the disks and apply an image automatically. My initial thought was "how can I format the disk and apply an OS to the disk I booted from (PE)?", but it quickly became obvious that the PE I was booted into wasn't really running from the local disk. Instead, on boot, the boot.wim file on the PE media is read into a writable RAM disk, which is where the PE actually runs from (the X: drive). Therefore even though the C: drive contained that boot.wim it's not actually being used, and so it can be wiped. I therefore created a script that did three things:

  1. Wipe the disk and create the system and Windows partitions
  2. Apply a Windows Server image (1709 Server Core)
  3. Make the disk bootable

To partition the disk I created a text file, parts.txt which contained:
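My parts.txt was along these lines, a diskpart script that wipes the disk and creates an active system partition plus a Windows partition (sizes and drive letters are my choices):

```text
select disk 0
clean
create partition primary size=350
format quick fs=ntfs label="System"
assign letter=S
active
create partition primary
format quick fs=ntfs label="Windows"
assign letter=W
exit
```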

I could then call this with (I would copy this to my Windows PE environment as well):
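Calling the partition script from within PE is a one-liner:

```cmd
diskpart /s parts.txt
```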

The WIM file I placed on a file share (this would be an Azure Files share once in Azure) so I had to map to the network drive and apply so the complete file became:
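Putting it together, the complete autolaunch.bat might look like this; the server, share and credentials are placeholders, and I'm assuming the partition script assigned S: to the system partition and W: to the Windows partition:

```cmd
rem Map the file share holding install.wim (credentials are placeholders)
net use z: \\fileserver\images /user:CONTOSO\deploy Passw0rdPlaceholder

rem Wipe and partition the local disk
diskpart /s parts.txt

rem Apply the Windows Server image to the Windows partition
dism /Apply-Image /ImageFile:z:\install.wim /Index:1 /ApplyDir:W:\

rem Make the disk bootable by writing boot files to the system partition
W:\Windows\System32\bcdboot W:\Windows /s S:
```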

I saved this as autolaunch.bat and added it to the root of my Windows PE boot.wim (by remounting it) along with the parts.txt. I also modified the startnet.cmd found under the Windows\System32 folder of my mounted PE environment to call my autolaunch.bat file, e.g.
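Since the boot.wim contents surface as the X: drive at runtime, the startnet.cmd would be something like:

```cmd
wpeinit
X:\autolaunch.bat
```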

I then unmounted and created a new VHD. I made sure my install.wim was present on my file server as referenced, copied the VHD over to my Hyper-V server, changed the VM to use the new VHD, and sure enough it booted, formatted the disks and laid down the image. Note you are putting a password in the file, which is not ideal. Also note that if your password contains special characters you may have to escape them in the batch file or they won't work correctly; for example, if your password contains % you actually need %% in the string!

The next step was to try this in Azure. I created a storage account in Azure, added an Azure Files share and uploaded the install.wim file to it. I changed the autolaunch.bat to map to the Azure Files share instead of the local file share (along with the path to the WIM file). It therefore became:
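The mapping line changed to point at the Azure Files endpoint; with Azure Files the username is AZURE\ plus the storage account name and the password is a storage account key (the names below are placeholders):

```cmd
net use z: \\mystorageacct.file.core.windows.net\images /user:AZURE\mystorageacct <storage-account-key>
```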

And execute to:
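The apply command simply referenced the WIM on the mapped Azure Files drive:

```cmd
dism /Apply-Image /ImageFile:z:\install.wim /Index:1 /ApplyDir:W:\
```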

To upload the VHD to Azure and create an image from the uploaded file I used the PowerShell below. This is important: trying to upload with other tools or via the Azure portal seems to leave the VHD in a strange, unusable state.
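A sketch using the AzureRM cmdlets current at the time; the resource group, storage URL, location and image name are placeholders:

```powershell
$rgName = 'RG-Images'
$vhdUri = 'https://mystorageacct.blob.core.windows.net/vhds/winpe.vhd'

# Upload the local VHD as a page blob
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $vhdUri -LocalFilePath 'C:\pe.vhd'

# Create a managed image from the uploaded VHD
$imageConfig = New-AzureRmImageConfig -Location 'SouthCentralUS'
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $vhdUri
$image = New-AzureRmImage -ImageName 'WinPEImage' -ResourceGroupName $rgName -Image $imageConfig
```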

From here I created a new VM from my image, using the PowerShell below. Note I'm enabling boot diagnostics; this allowed me to view the console even though I couldn't interact with it, so I had some idea of what was happening.
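A sketch of the VM creation; it assumes $image from the previous step, an existing NIC in $nic, and an existing diagnostics storage account (all names are placeholders):

```powershell
$cred = Get-Credential

$vm = New-AzureRmVMConfig -VMName 'PEVM01' -VMSize 'Standard_DS1_v2'
$vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName 'PEVM01' -Credential $cred
$vm = Set-AzureRmVMSourceImage -VM $vm -Id $image.Id
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

# Boot diagnostics give a console screenshot even with no interactive access
$vm = Set-AzureRmVMBootDiagnostics -VM $vm -Enable -ResourceGroupName 'RG-Images' -StorageAccountName 'mydiagacct'

New-AzureRmVM -ResourceGroupName 'RG-Images' -Location 'SouthCentralUS' -VM $vm
```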

I then jumped over to the portal and via the Support + Troubleshooting section – Boot diagnostics – Screenshot I could see it deploying in Azure (updated about every 30 seconds or so).

[Screenshot: the Windows PE deployment in progress, viewed via boot diagnostics]

This worked! The OS installed, and strangely I could RDP to it even though I never enabled this; it had the right name and it had the Azure agent installed. What trickery is this? Then it hit me: I never added my own unattend.xml file. All I did was apply a 2016 image to a disk and reboot, basically the same as if I had used a template with 2016. The ISO file that Azure automatically creates when deploying a VM, which contains an unattend.xml file and other setup files, still got created, still got attached and was therefore still used. This was good but also bad, as I wanted to use my own unattend.xml file to further prove we could customize.

The next step was to generate my own unattend.xml file and use it. At this point I didn't want to keep rebuilding the VHD every time I made a script change, so I broke apart the logic: autolaunch.bat just connected to the Azure Files share, partitioned the disk, then copied down an imageinstall.bat file and executed it. This way I could change imageinstall.bat on the file share whenever I wanted to change the functionality. autolaunch.bat became:
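Roughly this (storage account, share and key are placeholders):

```cmd
rem Map Azure Files, partition, then hand off to the share-hosted script
net use z: \\mystorageacct.file.core.windows.net\images /user:AZURE\mystorageacct <storage-account-key>
diskpart /s parts.txt
copy z:\imageinstall.bat x:\imageinstall.bat
x:\imageinstall.bat
```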

And imageinstall.bat which was placed on the file share became:
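Something like the following, again assuming W: is the Windows partition and S: the system partition, and that the unattend files described below sit on the same share:

```cmd
rem Apply the image and make the disk bootable
dism /Apply-Image /ImageFile:z:\install.wim /Index:1 /ApplyDir:W:\
W:\Windows\System32\bcdboot W:\Windows /s S:

rem Copy the custom unattend into the deployed image's Panther folder
md W:\Windows\Panther
copy z:\unattend.xml W:\Windows\Panther\unattend.xml

rem Patch the computer-name placeholder via the metadata service
powershell -ExecutionPolicy Bypass -File z:\unattendupdate.ps1
```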

I created a new VHD with the reduced autolaunch.bat and uploaded to Azure (and created a new image after deleting the old one with Remove-AzureRmImage -ImageName $imageName -ResourceGroupName $rgImgName).

Now I'm jumping over a few steps here, but basically I created an unattend file to set a default password, have a placeholder for the computer name, enable auto-mount of disks, move the pagefile to D:, enable RDP and the required firewall rules, and also launch the install.cmd that Azure normally runs. This would install the agent, register with the Azure fabric etc. Because I place my unattend.xml in the Windows\Panther folder it overrides any found on removable media, i.e. the Azure one! My unattend file was:
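A trimmed sketch of such an unattend.xml; the pagefile and firewall components are omitted for brevity, the password is a placeholder, and the drive letter of the Azure-generated ISO may vary:

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend"
          xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
        publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <!-- Placeholder replaced at deploy time with the Azure-assigned name -->
      <ComputerName>TSCAEDPH</ComputerName>
    </component>
    <component name="Microsoft-Windows-TerminalServices-LocalSessionManager" processorArchitecture="amd64"
        publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <fDenyTSConnections>false</fDenyTSConnections>
    </component>
  </settings>
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
        publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <UserAccounts>
        <AdministratorPassword>
          <Value>Passw0rdPlaceholder</Value>
          <PlainText>true</PlainText>
        </AdministratorPassword>
      </UserAccounts>
      <FirstLogonCommands>
        <SynchronousCommand wcm:action="add">
          <Order>1</Order>
          <!-- Run the Azure setup script from the fabric-attached ISO -->
          <CommandLine>E:\install.cmd</CommandLine>
        </SynchronousCommand>
      </FirstLogonCommands>
    </component>
  </settings>
</unattend>
```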

Now in this file I have a placeholder string for the computer name, TSCAEDPH. I wanted to replace this with the computer name specified on the Azure fabric. How would I get this from inside the guest? Well, Azure has an endpoint at 169.254.169.254 that can be called from within a VM to retrieve basic instance information, so I created a PowerShell script that would find the computer name and update the unattend.xml I had copied to the Panther folder of the deployed image:
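unattendupdate.ps1 was essentially this; the api-version is the one current at the time, and W: is the drive letter my partition script assigned to the deployed Windows volume:

```powershell
# Ask the Azure instance metadata endpoint for the VM name
$uri  = 'http://169.254.169.254/metadata/instance/compute/name?api-version=2017-04-02&format=text'
$name = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = 'true' }

# Swap the placeholder in the unattend copied to the deployed image
$file = 'W:\Windows\Panther\unattend.xml'
(Get-Content $file) -replace 'TSCAEDPH', $name | Set-Content $file
```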

This was saved as unattendupdate.ps1 on the Azure Files share, which now contained install.wim, unattend.xml, imageinstall.bat and this ps1 file. Fingers crossed, I kicked off a new VM build. It worked. It used my unattend.xml file but still got the Azure agent etc. installed. It also still renamed the local administrator account to the one specified as part of the VM creation, as that happens as part of the Azure install step process, which I was now calling from my unattend.xml file.

Now there are some problems here. If Azure changes the structure of the ISO file containing install.cmd, this will break and would have to be re-investigated; however, this is still better than trying to duplicate manually everything Azure does, which is far more likely to change far more often.

So there you go. You can use your own PE in Azure to customize and create deployments including unattend. You can still call the Azure agent install and finalize. But ideally, use images 😉