Deploying an Azure IaaS VM using PowerShell

I recently had to deploy some new VMs using PowerShell, join them to a domain, and enable the anti-malware extension. Below is the PowerShell I used. You would need to modify the variables to match your own environment.

$Region = "EastUS"
$VNetName = "savilltech-vnet-east"
$VNetRG = "savilltech-vnets_rg"
$VNetSubnetName = "savilltech-vnet-east-savhybridinfra-vms"
$VNetSubnetCIDR = "10.244.3.64/26"
$NSGName = "savNSGLockdownEastUS"
$VMRG = "savilltech-savhybridinfra_rg"

$SQLVMName = "AZUUSESQL01"
$SQLVMSize = "Standard_DS3_v2" #4vcpu 16GB memory
$SQLVMIP = "10.244.3.68"

$VMDiagName = "savhybridinfradiag"

#Domain Join Strings (note: backslashes inside JSON must be escaped)
$string1 = '{
    "Name": "savilltech.net",
    "User": "savilltech.net\\adminname",
    "OUPath": "OU=Servers,OU=Hybrid,OU=Environments,DC=savilltech,DC=net",
    "Restart": "true",
    "Options": "3"
        }'
$string2 = '{ "Password": "rawpasswordhere" }'


#Get the NSG and the network subnet
$NSG = Get-AzureRmNetworkSecurityGroup -Name $NSGName -ResourceGroupName $VNetRG
$VNet = Get-AzureRmVirtualNetwork -Name $VNetName -ResourceGroupName $VNetRG
$VNetSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $VNetSubnetName -VirtualNetwork $VNet    

#Local Credential
$user = "localadmin"
$password = 'localadminpasshere'
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($user, $securePassword) 

# Antimalware extension
$SettingsString = '{ "AntimalwareEnabled": true,"RealtimeProtectionEnabled": true}';
$allVersions= (Get-AzureRmVMExtensionImage -Location $Region -PublisherName "Microsoft.Azure.Security" -Type "IaaSAntimalware").Version
$typeHandlerVer = $allVersions[($allVersions.count)-1]
$typeHandlerVerMjandMn = $typeHandlerVer.split(".")
$typeHandlerVerMjandMn = $typeHandlerVerMjandMn[0] + "." + $typeHandlerVerMjandMn[1]


#Create the resource group
New-AzureRmResourceGroup -Name $VMRG -Location $Region

#Create the diagnostics storage account
New-AzureRmStorageAccount -ResourceGroupName $VMRG -Name $VMDiagName -SkuName Standard_LRS -Location $Region

# Create VM Object
$vm = New-AzureRmVMConfig -VMName $SQLVMName -VMSize $SQLVMSize 

$nic = New-AzureRmNetworkInterface -Name ('nic-' + $SQLVMName) -ResourceGroupName $VMRG -Location $Region `
    -SubnetId $VNetSubnet.Id -PrivateIpAddress $SQLVMIP -NetworkSecurityGroupId $NSG.Id #attach the NSG retrieved earlier

# Add NIC to VM
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

# VM Storage
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName MicrosoftWindowsServer -Offer WindowsServer `
    -Skus 2016-Datacenter -Version latest
$vm = Set-AzureRmVMOSDisk -VM $vm  -StorageAccountType PremiumLRS -DiskSizeInGB 512 `
    -CreateOption FromImage -Caching ReadWrite -Name "$SQLVMName-OS"
$vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName $SQLVMName `
    -Credential $cred -ProvisionVMAgent -EnableAutoUpdate

$diskConfig = New-AzureRmDiskConfig -AccountType PremiumLRS -Location $Region -CreateOption Empty `
        -DiskSizeGB 2048
$dataDisk1 = New-AzureRmDisk -DiskName "$SQLVMName-data1" -Disk $diskConfig -ResourceGroupName $VMRG
$vm = Add-AzureRmVMDataDisk -VM $vm -Name "$SQLVMName-data1" -CreateOption Attach `
    -ManagedDiskId $dataDisk1.Id -Lun 1

$vm = Set-AzureRmVMBootDiagnostics -VM $vm -Enable -ResourceGroupName $VMRG -StorageAccountName $VMDiagName

# Create Virtual Machine
New-AzureRmVM -ResourceGroupName $VMRG -Location $Region -VM $vm

Set-AzureRmVMExtension -ResourceGroupName $VMRG -VMName $SQLVMName -Name "IaaSAntimalware" `
    -Publisher "Microsoft.Azure.Security" -ExtensionType "IaaSAntimalware" `
    -TypeHandlerVersion $typeHandlerVerMjandMn -SettingString $SettingsString -Location $Region

Set-AzureRmVMExtension -ResourceGroupName $VMRG -VMName $SQLVMName -ExtensionType "JsonADDomainExtension" `
    -Name "joindomain" -Publisher "Microsoft.Compute" -TypeHandlerVersion "1.0" -Location $Region `
    -SettingString $string1 -ProtectedSettingString $string2

 

Using Azure Application Gateway to publish applications

I was recently part of a project to deploy SharePoint and Office Online Server (OOS) to Azure IaaS as part of a hybrid deployment. A requirement was to make SharePoint available to the Internet, in addition to OOS (enabling online editing and previews of documents).

The deployment was very simple: three VMs were deployed to a subnet with connectivity to an existing AD:

  • SQL Server – 10.244.3.68
  • SharePoint Server – 10.244.3.69, alias record sharepoint.onemtcqa.net
  • OOS Server – 10.244.3.70, alias oos.onemtcqa.net

The alias records were created on both the internal and external DNS (a split-brain DNS configuration). We also had a wildcard certificate for onemtcqa.net, which we could therefore use for HTTPS on both sites.

Azure has two built-in load balancer solutions (with more available through third-party solutions and virtual appliances):

  • The layer 4 Azure Load Balancer, which supports any protocol and could have been used by configuring the front end with a public IP
  • The layer 7 Azure Application Gateway, which in addition to capabilities like SSL offload and cookie-based affinity also offers an optional Web Application Firewall (WAF) for additional protection. More information on the Application Gateway can be found at https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-introduction. The front-end IP can be internal or public, and the back end can load balance across multiple targets (like the layer 4 load balancer option).

Because the services being published were HTTP based, it made sense to utilize the Azure Application Gateway, and it provided a great reason to get hands-on with the technology. Additionally, the added protection via the WAF was a huge benefit.

There are various SKU sizes available for the Azure Application Gateway, along with the choice of the Standard or WAF tier. Information on sizes and pricing can be found at https://azure.microsoft.com/en-us/pricing/details/application-gateway/. I used the Medium size, which is the smallest possible when using the WAF tier.

An App Gateway consists of a number of settings that link to each other in a specific manner to provide the complete solution. A single App Gateway can publish multiple sites, which meant I only needed a single App Gateway instance with a single public IP for both the sites I needed to publish.

Below is a basic picture of the key components related to an App Gateway that I put together to aid my own understanding! The arrows show the direction of the links; the Rule links to three other items, which really binds everything together.

When deploying the Application Gateway through the portal there are some initial configurations:

  • The SKU
  • The virtual network it will connect to; you must specify an empty subnet that can only be populated by App Gateway resources. This should be at least a /29
  • The front-end IP; if a public IP is created it must be dynamic and cannot have a custom DNS name
  • Whether the listener is HTTP or HTTPS, and the port

Note: if using a public IP, because it is dynamic and cannot have a custom DNS name, you can check its actual DNS name using PowerShell and then create an alias on the Internet pointing to that name. Use Get-AzureRmPublicIpAddress and read the DnsSettings.Fqdn attribute. For example:

(Get-AzureRmPublicIpAddress -Name HybridInfraAppGatewayQA -ResourceGroupName onemtcqa-exphybridinfra_rg |
    Select-Object -ExpandProperty DnsSettings).Fqdn

The name will be <GUID>.cloudapp.net. I created two alias records, sharepoint and oos, both pointing to this name on the public DNS servers.

Once created, we need to tweak some of the objects created by the portal wizard.

The virtual subnet used for the App Gateway needs its NSG modified, as some additional ports must be opened from the Any source to the VirtualNetwork tag (this is in addition to the default AzureLoadBalancer inbound rule). Add an inbound rule to allow TCP 65503-65534 from Any to VirtualNetwork, as shown in the sketch below. Note this only needs to be enabled on the NSG applied to the Application Gateway's subnet and NOT the subnets containing the actual back-end resources. Also ensure the Application Gateway subnet can communicate with the subnets hosting the services.
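
A minimal PowerShell sketch of adding that rule; the NSG name, resource group and rule priority here are hypothetical placeholders:

$AppGwNSG = Get-AzureRmNetworkSecurityGroup -Name AppGwSubnetNSG -ResourceGroupName onemtcqa-exphybridinfra_rg
$AppGwNSG | Add-AzureRmNetworkSecurityRuleConfig -Name "AllowAppGwMgmt" -Direction Inbound -Access Allow `
    -Protocol Tcp -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange "65503-65534" -Priority 100 |
    Set-AzureRmNetworkSecurityGroup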

By default, the built-in probe that checks whether a back-end target is healthy (and therefore a possible target for traffic) treats a response between 200 and 399 as healthy (per https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-probe-overview). For the SharePoint site this won't work, as it prompts for authentication, so we need to create a custom probe on HTTPS that accepts 200-401. This can be done with PowerShell (I'm using the internal DNS name here, which is the same as the external one):

$gw = Get-AzureRmApplicationGateway -Name HybridInfraAppGatewayQA -ResourceGroupName onemtcqa-exphybridinfra_rg  

# Define the status codes to match for the probe 
$match=New-AzureRmApplicationGatewayProbeHealthResponseMatch -StatusCode 200-401  

# Add a new probe to the application gateway 
Add-AzureRmApplicationGatewayProbeConfig -Name AppGatewaySPProbe -ApplicationGateway $gw `
    -Protocol Https -Path / -Interval 30 -Timeout 120 -UnhealthyThreshold 3 `
    -Match $match -HostName sharepoint.onemtcqa.net

Set-AzureRmApplicationGateway -ApplicationGateway $gw

Open the HTTP Settings object, ensure it is set to HTTPS, upload the certificate, then select to use a custom probe and pick the probe that was just created. A rough PowerShell equivalent is shown below.
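
This sketch assumes the default HTTP setting name appGatewayBackendHttpSettings and that the back-end authentication certificate has already been added as $authcert (e.g. via Add-AzureRmApplicationGatewayAuthenticationCertificate); adjust CookieBasedAffinity to your needs:

$probe = Get-AzureRmApplicationGatewayProbeConfig -ApplicationGateway $gw -Name AppGatewaySPProbe
Set-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name appGatewayBackendHttpSettings `
    -Port 443 -Protocol Https -CookieBasedAffinity Enabled -Probe $probe `
    -AuthenticationCertificates $authcert
Set-AzureRmApplicationGateway -ApplicationGateway $gw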

A default listener was created, but it can't be used for this scenario, so create a new multi-site listener instead:

  • Use the existing frontend IP configuration and 443 port
  • Enter the hostname, e.g. sharepoint.onemtcqa.net
  • Protocol is HTTPS
  • Use an existing certificate or upload a new certificate to use

Open the backend pool and add the internal IP address of the target(s).
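
Continuing with the $gw object from earlier, a PowerShell sketch of the same step (it assumes the default pool name appGatewayBackendPool):

Set-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name appGatewayBackendPool `
    -BackendIPAddresses 10.244.3.69
Set-AzureRmApplicationGateway -ApplicationGateway $gw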

The initial default rule should work as created; it links the listener just created, the backend pool, and the modified HTTP setting.

If you open Backend health under Monitoring, it should show a status of healthy, and you should be able to connect via the external name (which points to the DNS name of the public IP address).
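
The same health information can be queried from PowerShell:

Get-AzureRmApplicationGatewayBackendHealth -Name HybridInfraAppGatewayQA -ResourceGroupName onemtcqa-exphybridinfra_rg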

Now OOS has to be published. It does not require authentication, so a different probe must be used, which in turn means a different listener and different targets. Even though it will be a different listener, this isn't like old-style listeners where only one can listen on a specific port; a listener here is just a set of configurations, so multiple 443 listeners can share the same frontend configuration (and therefore the same public IP).

  1. Create a new Backend pool with the OOS machines as the target
  2. Create a new multi-site listener that uses the existing frontend IP configuration and port with the OOS public hostname, HTTPS and the OOS certificate (the same certificate if a wildcard or with subject alternative names); a PowerShell sketch of this step follows the list
  3. Create a new health probe. Use the OOS internal DNS name, HTTPS and for path use /hosting/discovery
  4. Create a new HTTP setting that is HTTPS, uses the certificate and uses the new health probe
  5. Create a new basic rule that uses the new listener, the new backend pool and the new HTTP setting
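
As an example, a hedged PowerShell sketch of step 2; it assumes a single front-end IP configuration, the default front-end port name and a certificate already uploaded as 'wildcardcert':

$gw = Get-AzureRmApplicationGateway -Name HybridInfraAppGatewayQA -ResourceGroupName onemtcqa-exphybridinfra_rg
$fip = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $gw  #assumes a single front-end IP config
$port = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $gw -Name appGatewayFrontendPort
$cert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $gw -Name wildcardcert
Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $gw -Name OOSListener `
    -FrontendIPConfiguration $fip -FrontendPort $port -Protocol Https `
    -SslCertificate $cert -HostName oos.onemtcqa.net -RequireServerNameIndication true
Set-AzureRmApplicationGateway -ApplicationGateway $gw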


Now your OOS should also be available and working! You have now published two sites through a single Application Gateway.

Migrate from ATA to Azure ATP with easy PowerShell

This week Azure Advanced Threat Protection (ATP) was made available as part of EMS E5; it is essentially ATA in the cloud. ATA is a service that takes a data feed from all domain controllers and uses that data to help identify various types of attack, such as pass-the-hash, golden ticket, DNS dumps, and more. Those capabilities are now available via the Azure ATP service, removing the need for the on-premises components. Like the lightweight gateway option for ATA, where the agent runs on each DC (instead of the full gateway, where port forwarding is used), Azure ATP deploys a sensor to each DC (or, if you prefer, a standalone sensor can be deployed with port forwarding from the DCs, just like the regular ATA gateway). The sensor sends only a fraction of the traffic with minimal overhead.

Head over to https://portal.atp.azure.com and create a new workspace. Once you select that workspace, select Configurations – Sensors. From here you can download the sensor setup file and get the access key that links your DCs to the specific workspace.

I already had ATA deployed in my environment and wanted to simply uninstall the ATA lightweight gateway and silently deploy the Azure ATP sensor on all DCs, so I created a simple PowerShell script to do just that. You can pass it a list of DCs, have it read from a file, or let it scan the Domain Controllers OU. Of course, you could remove the part that uninstalls ATA and just use it to deploy Azure ATP. Note that I saved the agent to a file share, so you would want to change the share used in this script in addition to adding your access key.

#$servers = Get-Content .\Documents\dcs.txt
#$servers = "AZUASEDC2","AZUEUEDC1","AZUEUEDC2","AZUUSWDC2","AZUUSWDC1","AZUASEDC1"
$servers = Get-ADComputer -SearchBase "OU=Domain Controllers, DC=savilltech, DC=net" -Filter * | Select-Object -ExpandProperty Name

$cred = Get-Credential #account used to map to share where the Azure ATP client is

foreach($server in $servers)
{
    Write-Output "Trying to move from ATA to Azure ATP for $server"
    Invoke-Command -ComputerName $server -ScriptBlock {
        Write-Output "   Uninstalling ATA"
        $app = Get-WmiObject -Class Win32_Product | Where-Object {
            $_.Name -match "Microsoft Advanced Threat Analytics Gateway" }
        $app.Uninstall()

        Write-Output "   Installing Azure ATP monitor"
        New-PSDrive -Name X -PSProvider FileSystem -Root \\AZUUSEDC1\Core -Credential $args[0] | Out-Null
        #https://docs.microsoft.com/en-us/azure-advanced-threat-protection/atp-silent-installation
        & 'X:\ATP\Azure ATP Sensor Setup.exe' /quiet NetFrameworkCommandLineArguments="/q" AccessKey="WORKSPACEACCESSKEYHERE"
        Start-Sleep -Seconds 60 #Enable the install to complete
        Remove-PSDrive X

    } -ArgumentList $cred
} 

Once the deployment is finished, complete the configuration via the Azure ATP portal, e.g. enabling some sensors as domain synchronizer candidates. Bask in the great monitoring now happening for your domain!

Azure NSG Integration with Storage and Other Services

Network Security Groups (NSGs) are a critical component of Azure networking, enabling the flow of traffic to be controlled both within the virtual network, i.e. between subnets (and even VMs), and external to the virtual network, i.e. the Internet, other parts of known IP space (such as an ExpressRoute-connected site), and Azure components such as load balancers. Rules are grouped into NSGs and applied to subnets (and sometimes vNICs, though it's easier management to apply at the subnet level). Rules are based on:

  • Source IP range
  • Destination IP range
  • Source port(s)
  • Destination port(s)
  • Protocol
  • Allow/Deny
  • Priority

In place of the IP ranges, certain tags can be used, such as VirtualNetwork (known IP space, which includes IP spaces connected to the virtual network, e.g. an on-premises IP space connected via ExpressRoute), Internet (not known IP space) and AzureLoadBalancer. Additionally, through the use of service tags, other Azure services can be included in rules; these tags cover the IP ranges of certain services, for example Storage, SQL and AzureTrafficManager. It is also possible to limit these to specific regions for the service, for example the service tag Storage.EastUS to enable access only to Storage in East US. Such a tag can then be used in a rule instead of an IP range, as in the example below. This is very beneficial, as you can now enable only specific machines in a specific subnet to communicate with specific services in specific regions. Without this functionality you would have to try to create rules based on the public IP addresses each service used. More information on service tags can be found at https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#service-tags.
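
A hedged sketch of such a rule (the rule name and priority are arbitrary, and regional service tags require a recent AzureRM.Network module):

$rule = New-AzureRmNetworkSecurityRuleConfig -Name "AllowStorageEastUS" -Description "Allow HTTPS to Storage in East US only" `
    -Direction Outbound -Access Allow -Protocol Tcp -Priority 100 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix "Storage.EastUS" -DestinationPortRange 443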

Another useful feature is application security groups (ASGs). Using ASGs you can create a number of groups for the various application tiers you have (using New-AzureRmApplicationSecurityGroup), use them in NSG rules (e.g. via -DestinationApplicationSecurityGroupId) and then assign a VM's network interface to a specific ASG (using the ApplicationSecurityGroup parameter at creation time). You no longer have to worry about the actual IP address or subnet of the VM in the NSG rules: the NIC is part of the ASG and automatically has the rules applied based on that membership. Imagine creating an ASG for all the VMs in a certain tier of an application; they would all automatically have the correct rules regardless of their IP address or subnet membership.
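
A minimal sketch tying the pieces together (all names here are hypothetical, and $VNetSubnet is assumed to be an existing subnet object):

$asg = New-AzureRmApplicationSecurityGroup -ResourceGroupName app_rg -Name WebTierASG -Location EastUS
$rule = New-AzureRmNetworkSecurityRuleConfig -Name "AllowHttpsToWebTier" -Direction Inbound -Access Allow `
    -Protocol Tcp -Priority 110 -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationApplicationSecurityGroupId $asg.Id -DestinationPortRange 443
$nic = New-AzureRmNetworkInterface -Name nic-web01 -ResourceGroupName app_rg -Location EastUS `
    -SubnetId $VNetSubnet.Id -ApplicationSecurityGroup $asg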

On the other side of the equation you have Azure services like Storage and SQL, which by default have public-facing endpoints. While there are some ACLs to limit access, it can be very difficult or impossible to restrict them to only specific Azure IaaS VMs in your environment. For example, you may have a storage account or Azure SQL Database instance you only want accessible from VMs in a specific subnet of a virtual network. This is now possible through a combination of service endpoints and the Azure service firewall capability.

First, on the virtual network, service endpoints are enabled for specific services (e.g. Storage) on specific subnets. This makes that subnet available as part of the firewall configuration for the target service. (Note that if you skip this step, it can be done automatically when performing the configuration on the actual service!)

Next, on the actual service (which must be in the same region as the virtual network), select the 'Firewalls and virtual networks' option, change 'Allow access from' to 'Selected networks', choose 'Add existing virtual network', select the virtual network and subnets, then click Add and Save. The service will now only be available to the selected subnets.
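
A hedged PowerShell sketch of both steps for a storage account (all names and the address prefix are hypothetical; keep the subnet's existing prefix):

#Enable the Microsoft.Storage service endpoint on the subnet
$vnet = Get-AzureRmVirtualNetwork -Name myvnet -ResourceGroupName vnet_rg
Set-AzureRmVirtualNetworkSubnetConfig -Name mysubnet -VirtualNetwork $vnet `
    -AddressPrefix 10.0.0.0/24 -ServiceEndpoint "Microsoft.Storage" | Set-AzureRmVirtualNetwork

#Restrict the storage account to that subnet and deny everything else
$subnet = Get-AzureRmVirtualNetwork -Name myvnet -ResourceGroupName vnet_rg |
    Get-AzureRmVirtualNetworkSubnetConfig -Name mysubnet
Add-AzureRmStorageAccountNetworkRule -ResourceGroupName storage_rg -Name mystorageacct -VirtualNetworkResourceId $subnet.Id
Update-AzureRmStorageAccountNetworkRuleSet -ResourceGroupName storage_rg -Name mystorageacct -DefaultAction Deny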

When you put all these various features together there are now great controls available between VMs in virtual networks and key Azure services to really help lock down access in a simple way.

More information on service endpoints can be found at https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview.

Quickly check who are Global Admins in your Azure AD with PowerShell

The code below will list the Global Admins in your Azure AD. Note that if you are using Privileged Identity Management, any users currently elevated will also show.

Connect-AzureAD   #requires the AzureAD PowerShell module
$role = Get-AzureADDirectoryRole | Where-Object {$_.DisplayName -eq 'Company Administrator'}
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId

Also note that the PowerShell/Graph API name for the Global Administrator role is Company Administrator.

Checking the creation time of an Azure IaaS VM

I recently had a requirement to check the age of VMs deployed in Azure. As I looked, it became clear there is no metadata on a VM that shows its creation time. When you think about it, this may be logical: if you deprovision a VM (and therefore stop paying for it) and then provision it again, what is its creation date? When it was first created, or when it was last provisioned?

As I dug in, I found there is a log written at VM creation. By default these logs are only stored for 90 days (unless sent to Log Analytics), BUT if the VM was created within the last 90 days I could find its creation date. For example:

$logs = Get-AzureRmLog -ResourceProvider Microsoft.Compute -StartTime (Get-Date).AddDays(-90)
 
foreach($log in $logs)
{
    if(($log.OperationName.Value -eq 'Microsoft.Compute/virtualMachines/write') -and ($log.SubStatus.Value -eq 'Created'))
    {
        Write-Output "- Found VM creation at $($log.EventTimestamp) for VM $($log.Id.split("/")[8]) in Resource Group $($log.ResourceGroupName) found in Azure logs"
        # write-output "   - ID $($log.Id)"
        $vmCreationTime = $($log.EventTimestamp)
        #$log
    }
}

What if the VM was not created in the last 90 days? If the VM uses a managed disk, you can check the creation date of the managed disk. If it is NOT using a managed disk, there is no creation time on a page blob; however, by default VHDs include the creation date and time as part of the file name. Therefore, to try to find the creation date of a VM, I use a combination of all three methods: first look for logs for the VM, then look for a managed disk, and finally try the date embedded in the unmanaged OS disk's name.

#Must run elevated

#Find any creation
#$logs = Get-AzureRmLog -ResourceProvider Microsoft.Compute -StartTime (Get-Date).AddDays(-30)
 
#OR

#Find creation for a specific VM
$vm = get-azurermvm -name 'testnotmanaged' -ResourceGroupName 'testunmanagedrg'
$logs = Get-AzureRmLog -ResourceId $vm.Id -StartTime (Get-Date).AddDays(-90)
$vmCreationTime = $null

foreach($log in $logs)
{
    if(($log.OperationName.Value -eq 'Microsoft.Compute/virtualMachines/write') -and ($log.SubStatus.Value -eq 'Created'))
    {
        Write-Output "- Found VM creation at $($log.EventTimestamp) for VM $($log.Id.split("/")[8]) in Resource Group $($log.ResourceGroupName) found in Azure logs"
        # write-output "   - ID $($log.Id)"
        $vmCreationTime = $($log.EventTimestamp)
        #$log
    }
}

#If could not find a match
if($vmCreationTime -eq $null)
{
    #Disk method

    #Managed Disk
    $osdisk = $null
    try {
        $osdisk = Get-AzureRmDisk -DiskName $vm.StorageProfile.OsDisk.Name -ResourceGroupName $vm.ResourceGroupName -ErrorAction SilentlyContinue
        $vmCreationTime = $($osdisk.TimeCreated)
    }
    catch {}
    
    if($osdisk -ne $null)
    { Write-Output "VM creation at $vmCreationTime for VM $($vm.Name) in Resource Group $($vm.ResourceGroupName) via managed disk creation time"  }

    #Obviously there are flaws here if an existing disk was used, but it's good enough. For unmanaged disks it looks like I can do:
    if($osdisk -eq $null)
    {
    #Unmanaged Disk
        $vmosdiskloc = $vm.StorageProfile.OsDisk.vhd.Uri
        $vmosdiskstorageact = $vmosdiskloc.Substring(8).Split(".")[0]  #extract out the storage account name by removing the https:// then finding the first part before period (.)
        $storaccount = Get-AzureRmStorageAccount | where {$_.StorageAccountName -eq $vmosdiskstorageact}
        $vmosdiskcontainer = $vmosdiskloc.Substring(8).Split("/")[1]  #extract the middle path between location and vhd which would be the container
        $vmosdiskname = $vmosdiskloc.Substring(8).Split("/")[2]
        $OSBlob = Get-AzureStorageBlob -Context $storaccount.Context -Container $vmosdiskcontainer -Blob $vmosdiskname 
        try {$vmCreationTime = [datetime]::ParseExact(($OSBlob.Name.Substring(($OSBlob.Name.Length-18),14)),'yyyyMMddHHmmss',$null)}
        catch { $vmCreationTime = $null}
        if($vmCreationTime -ne $null)
        {  Write-Output "VM creation at $vmCreationTime for VM $($vm.Name) in Resource Group $($vm.ResourceGroupName) via date in page blob for OS disk" }
    }
}

 

Removing a non-existent domain

I recently lost a huge storage array and with it the DC for a demo child domain. I therefore had to clean up the now non-existent domain, which I did with ntdsutil. Below are the steps involved: basically, the domain controllers for the domain are removed, then the DNS naming context, and finally the domain itself.

C:\Users\Administrator>ntdsutil
ntdsutil: metadata cleanup
metadata cleanup: connections
server connections: connect to server savdaldc01
Binding to savdaldc01 ...
Connected to savdaldc01 using credentials of locally logged on user.
server connections: quit
metadata cleanup: select operation target
select operation target: list sites
Found 5 site(s)
0 - CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
1 - CN=Azure,CN=Sites,CN=Configuration,DC=savilltech,DC=net
2 - CN=Houston,CN=Sites,CN=Configuration,DC=savilltech,DC=net
3 - CN=Austin,CN=Sites,CN=Configuration,DC=savilltech,DC=net
4 - CN=SanAntonio,CN=Sites,CN=Configuration,DC=savilltech,DC=net
select operation target: select site 0
Site - CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
Domain - DC=dev,DC=savilltech,DC=net
No current server
No current Naming Context
select operation target: list servers in site
Found 2 server(s)
0 - CN=SAVDALDC01,CN=Servers,CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
1 - CN=SAVDALDEVDC01,CN=Servers,CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
select operation target: select server 1
Site - CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
Domain - DC=dev,DC=savilltech,DC=net
Server - CN=SAVDALDEVDC01,CN=Servers,CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
        DSA object - CN=NTDS Settings,CN=SAVDALDEVDC01,CN=Servers,CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net
        DNS host name - savdaldevdc01.dev.savilltech.net
        Computer object - CN=SAVDALDEVDC01,OU=Domain Controllers,DC=dev,DC=savilltech,DC=net
No current Naming Context
select operation target: quit
metadata cleanup: remove selected server
Transferring / Seizing FSMO roles off the selected server.
Unable to determine FRS owner for role PDC.
Unable to determine FRS owner for role Rid Master.
Unable to determine FRS owner for role Infrastructure Master.
"CN=SAVDALDEVDC01,CN=Servers,CN=Dallas,CN=Sites,CN=Configuration,DC=savilltech,DC=net" removed from server "savdaldc01"


metadata cleanup: select operation target
select operation target: list naming contexts
Found 7 Naming Context(s)
0 - CN=Configuration,DC=savilltech,DC=net
1 - DC=savilltech,DC=net
2 - CN=Schema,CN=Configuration,DC=savilltech,DC=net
3 - DC=DomainDnsZones,DC=savilltech,DC=net
4 - DC=ForestDnsZones,DC=savilltech,DC=net
5 - DC=dev,DC=savilltech,DC=net
6 - DC=DomainDnsZones,DC=dev,DC=savilltech,DC=net
select operation target: select naming context 6
No current site
Domain - DC=dev,DC=savilltech,DC=net
No current server
Naming Context - DC=DomainDnsZones,DC=dev,DC=savilltech,DC=net
select operation target: quit
metadata cleanup: remove selected naming context
"DC=DomainDnsZones,DC=dev,DC=savilltech,DC=net" removed from server "savdaldc01"


metadata cleanup: select operation target
select operation target: list domains
Found 2 domain(s)
0 - DC=savilltech,DC=net
1 - DC=dev,DC=savilltech,DC=net
select operation target: select domain 1
No current site
Domain - DC=dev,DC=savilltech,DC=net
No current server
Naming Context - DC=DomainDnsZones,DC=dev,DC=savilltech,DC=net
select operation target: quit
metadata cleanup: remove selected domain
"DC=dev,DC=savilltech,DC=net" removed from server "savdaldc01"
metadata cleanup: quit
ntdsutil: quit

Note that if you receive an error that the domain cannot be removed because of a leaf object, run the following to force replication and then retry:

repadmin /syncall /aped

Deploying the OMS Agent Automatically

I needed to deploy a new OMS workspace to every machine in my environment, and these machines were in various states of configuration. Some already had the OMS agent installed, some had various workspaces already added, and some already had the new OMS workspace configured. Additionally, these machines were in different environments with differing access to the file services where the agent could be stored.

Therefore I created a script that downloads the agent from the Internet if it's not already installed, checks whether the desired workspace is already configured, and adds it if it's not.

Update the workspace ID and key to those of your environment.

#OMSInstallCSE.ps1   John Savill 
 
#Check if OMS Agent is installed
$MMAObj = Get-WmiObject -Class Win32_Product -Filter "name='Microsoft Monitoring Agent'"
 
#If the agent is not installed then install it
if($MMAObj -eq $null)
{
    $OMS64bitDownloadURL = "https://go.microsoft.com/fwlink/?LinkId=828603"
    $OMSDownloadPath = "c:\Temp\"
    $OMSDownloadFileName = "MMASetup-AMD64.exe"
    $OMSDownloadFullPath = "$OMSDownloadPath$OMSDownloadFileName"
 
    #Create temporary folder if it does not exist
    if (-not (Test-Path $OMSDownloadPath)) { New-Item -Path $OMSDownloadPath -ItemType Directory | Out-Null }
 
    Write-Output "Downloading the agent..."
 
    #Download to the temporary folder
    Invoke-WebRequest -Uri $OMS64bitDownloadURL -OutFile $OMSDownloadFullPath | Out-Null
 
    Write-Output "Installing the agent..."
 
    #Install the agent
    $ArgumentList = '/C:"setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=0 AcceptEndUserLicenseAgreement=1"'
    Start-Process $OMSDownloadFullPath -ArgumentList $ArgumentList -ErrorAction Stop -Wait | Out-Null
}
 
#Add the CSE Workspace
$WorkspaceID = 'IDofWorkspace'
$WorkspaceKey = 'KeyofWorkspace'
 
#Check if the CSE workspace is already configured
$AgentCfg = New-Object -ComObject AgentConfigManager.MgmtSvcCfg
$OMSWorkspaces = $AgentCfg.GetCloudWorkspaces()
 
$CSEWorkspaceFound = $false
foreach($OMSWorkspace in $OMSWorkspaces)
{
    if($OMSWorkspace.workspaceId -eq $WorkspaceID)
    {
        $CSEWorkspaceFound = $true
    }
}
 
if(!$CSEWorkspaceFound)
{
    Write-Output "Adding CSE OMS Workspace..."
    $AgentCfg.AddCloudWorkspace($WorkspaceID,$WorkspaceKey)
    Restart-Service HealthService
}
else
{
    Write-Output "CSE OMS Workspace already configured"
}
 
# Get all configured OMS Workspaces
$AgentCfg.GetCloudWorkspaces()

 

Downloading a Cumulative Update for manual installation

Recently I needed to manually install a cumulative update. It's actually a simple process. First you need to find the KB number for the update. This can be found by navigating to https://support.microsoft.com/en-us/help/4000825/windows-10-windows-server-2016-update-history and selecting the OS build, which will show a list of all the updates.

Once you know the KB, head over to https://www.catalog.update.microsoft.com/ and type in the KB number. You can then download it and install it manually as required.

Solving a strange Remote Desktop Gateway authentication problem

I recently deployed a new Remote Desktop Gateway server, but when I authenticated it would tell me the logon failed, even though I knew the policies were valid for the user (because I could log on from a different computer) and I knew the credential was correct.

There were no logs under TerminalServices-Gateway\Operational, which meant the problem was not a policy issue, as the connection was not getting that far.

To start troubleshooting, Kerberos logging was enabled on the client machine:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]
"MaxTokenSize"=dword:0000ffff
"LogLevel"=dword:00000001

On connecting again, I checked the System event log and examined the Kerberos events. Sure enough, there were Kerberos errors. The problem seemed to be that on my machine the authentication did not fall back to NTLM, as it did on the machines where it worked.

The solution was to add an SPN for the public-facing name, which solved the problem:

setspn -S HTTP/rdg.dallas.savilltech.com dal-rdg01$