Two new videos on Azure AD – Conditional Access and Tokens!

Recorded two new videos this week. The first covers how tokens work with Azure AD, and the second looks at conditional access (which can control access to get those tokens for various scenarios).

Word of caution – I talk about terms of use in the second video. If you just enable this for ALL users it will break things that can’t accept it, for example the account Azure AD Connect uses to sync to Azure AD, so make sure you exclude accounts that can’t accept the terms!

Understand the authentication pros and cons with Azure AD

When using Azure AD there are two types of authentication available:

  • Cloud authentication where the authentication takes place against Azure AD
  • Federated authentication where the authentication takes place against the federated service, for example using ADFS against Active Directory Domain Services

When using the cloud authentication there are two ways to validate the password:

  • A hash of the password hash from AD is replicated to Azure AD and is used for the cloud-based authentication. (No matter which authentication option you use, replicating the hash is recommended: it enables Azure AD to help detect leaked credentials and gives a “break the glass” fallback authentication option if your primary configuration fails.)
  • The password validation is done against Active Directory Domain Services using Pass-through Authentication (PTA). This works by writing the username/password (in a form encrypted for each PTA agent configured) to a Service Bus instance; the entries are then read by PTA agents deployed to Windows OS instances, which decrypt them, authenticate against AD DS and respond with the result to complete the authentication request

There are therefore three options for the authentication configuration:

  • Password hash
  • PTA
  • Federation

The order I have them in is generally my preference, but there are some pros and cons for each (in addition to a few considerations) and I wanted to outline them briefly here.

Password Hash

  • Pro – Cloud scale/resilience since this is all native Azure AD with no other reliance during authentication
  • Pro – Provides breach replay protection and reports of leaked credentials, since the stored hash can be compared against credentials found on the dark web (visibility varies depending on the Azure AD license; P2 provides the best insight). Also enables the ability to block banned passwords during password change. This benefit applies to any configuration provided the password hash is replicated; it does not have to be used for the authentication
  • Pro – As above, even if not using password hash for authentication, if it is stored and the primary method (e.g. PTA or federation) fails (such as loss of connectivity to infrastructure) you can quickly switch to password hash based authentication
  • Con – If the AD DS account has been locked out, has logon hour restrictions set or an expired password, this will not impact the ability to log on via Azure AD
  • There is a delay for new accounts or changes to be reflected from AD to Azure AD. This is typically a 30 minute replication window (except for passwords, which replicate every 2 minutes), so plan for that delay before new accounts/changes appear in Azure AD
  • You may hear it called a con if you want the authentication to occur against on-premises DCs; however, because of the way tokens and specifically refresh tokens work, only the first authentication would hit AD. After that, future access in the same session would not re-authenticate via PTA/federation anyway, as the refresh token would be used to acquire additional access tokens. I will cover this in a separate video.

Pass-through Authentication (PTA)

  • Pro – If storing password hashes in Azure AD is a concern, with this method you don’t have to (however this is a risk vs reward discussion and the benefit of having the hash greatly outweighs any downside IMO)
  • Pro – This is lighter than using federation and establishes an outbound 443 connection to Azure AD not requiring any inbound port exceptions
  • Pro – Any AD account restrictions like hours, account lockout, password expired would be enforced
  • Con – Legacy authentication (pre 2013 Office clients) may not work with PTA
  • This is lighter than federation, and it is easy to deploy multiple PTA agents on-premises for scale and resiliency, but it does still require some deployment
  • When users authenticate, their password is sent to Azure AD (encrypted via HTTPS) and then passed to a PTA agent for the actual authentication

Federation

  • Pro – 3rd party MFA, Azure MFA Server and custom policies/claim rules (outside of the Azure AD 3rd party MFA integrations like Duo). It is also possible to create a multi-site ADFS farm; coupled with some type of geo-DNS solution you can then authenticate a user against their closest ADFS “presence”
  • Pro – Certificate based authentication
  • Single sign-on if on an AD-joined machine on the corporate network. This can be matched by password hash and PTA with Seamless Single Sign-On enabled
  • Password never hits the cloud; it is sent to the federation server. With both of the other options the password is sent to the cloud
  • INITIAL authentication hits the federation servers for policy (but subsequent app requests won’t go via ADFS since they will use the refresh token gained)
  • INITIAL authentication is performed against AD DS domain controllers
  • Con – Large amount of infrastructure required (proxies, ADFS servers), especially once other application federations move to Azure AD. The OpEx cost is also a major consideration. Think about the maintenance (managing servers, trusts, certificates) and the staff to operate it all.
  • Con – The ADFS proxy requires firewall exceptions to enable inbound traffic
  • Con – Can limit scale/availability

Note that for all of these scenarios I can still use features like Conditional Access. I try to start at the top of the options and work down only if needed. I really consider federation a legacy option that most organizations are moving away from, since Azure AD would be used for the actual application federations moving forward.

Microsoft has a good doc at https://docs.microsoft.com/en-us/azure/security/azure-ad-choose-authn which should also be reviewed.

Any thoughts, please post below!

Getting PowerShell 6

PowerShell 5.1 marks the last major update to the PowerShell built into Windows that we are likely to see. The future of PowerShell has gone down the open-source path, with PowerShell 6 available via GitHub not just for Windows but also for multiple Linux distributions and macOS. This is made possible because PowerShell 6, or rather PowerShell Core 6.0, is built on .NET Core (which is cross platform) instead of the Windows-exclusive .NET Framework.

The good news is PowerShell 6 can be installed alongside the PowerShell that is part of Windows/WMF. Download and install it from https://github.com/PowerShell/PowerShell/releases. Once installed you can launch it by running pwsh.exe. If you look at $PSVersionTable you will see you have the Core PSEdition instead of the standard Desktop.
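For example, launching it and checking the edition:

pwsh.exe
$PSVersionTable.PSEdition   # Core here, while Windows PowerShell 5.1 reports Desktop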

I recommend installing this and running it alongside the regular PowerShell to get used to it. The good news is most regular PowerShell will run, and if you execute Get-Module -ListAvailable you will see the built-in modules. For non-built-in modules you will need to check if they are supported with PowerShell Core.

Read the Microsoft article at https://blogs.msdn.microsoft.com/powershell/2018/01/10/powershell-core-6-0-generally-available-ga-and-supported/ for a great overview; it walks through the key features that are not part of PowerShell Core 6 (e.g. workflows) in addition to other key considerations.

Tools like Visual Studio Code can be used with both PowerShell 5.1 and PowerShell Core 6.0. Simply change the settings for Visual Studio Code to add pwsh, e.g. add the following to the user settings (File – Preferences – Settings), changing the path to your specific PowerShell version. I added this just under the existing User Setting (the comma goes after the existing line in the file).
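A minimal example of the line to add (the setting name assumes the PowerShell extension for VS Code; the path will vary with your install location and version):

"powershell.powerShellExePath": "C:\\Program Files\\PowerShell\\6.0.1\\pwsh.exe"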


Happy PowerShelling!

Understand precedence with PowerShell

There are many ways to create functionality in PowerShell, including basic cmdlets, aliases and functions. When you use multiple combinations it’s important to understand the precedence. This is best understood by walking through a basic example.

Firstly just run:

get-process

This will result in processes being displayed as expected.

Now let’s create a function called get-process that lists child items.

function get-process { Get-ChildItem }

Now if you run get-process it will show child items so the function trumps the built-in cmdlet.

Now let’s create an alias so get-process points to get-service.

New-Alias get-process -Value get-service

Run get-process and it shows services so an alias trumps a function (which trumps the native cmdlets).
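If you want to see what is going on, Get-Command shows which definition wins and -All lists every definition with that name:

Get-Command get-process        # returns the alias, i.e. what would actually run
Get-Command get-process -All   # lists the alias, the function and the cmdlet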

Note, you can always force running the cmdlet by its full name.

microsoft.powershell.management\get-process

Once you’ve finished you can reverse this by deleting the definitions one at a time.
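For example, removing the alias and then the function returns get-process to the built-in cmdlet:

Remove-Item Alias:\get-process
Remove-Item Function:\get-process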

 

Email people via Office 365 from PowerShell when passwords about to expire

I have a demonstration environment where many users have accounts but they never log on to AD directly nor look at the demonstration email mailbox. They only use the environment via Azure AD, where they log on using the replicated password hash. Because of this they don’t get password expiry notifications and continue to log on; however, if they try to access something that hooks into AD rather than Azure AD, the logon fails.

They wanted to be emailed about upcoming password expiry at their real email address. To accomplish this, their real email was stored in extensionAttribute10 (I didn’t use proxyAddresses as this may contain SIP information). This attribute can easily be set with:
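For example (a hypothetical user and address):

Set-ADUser -Identity johndoe -Replace @{extensionAttribute10='john.doe@example.com'}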

I had a mailbox for a core process I use. That user has no other rights, so I placed the password in the script, but that’s not ideal at all. If this was Azure Automation I could have used a credential object, and I could have at least made the password harder to read by creating an encrypted version of it and storing that in a file (it’s still reversible, just slightly harder to glance at!), e.g.
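Something along these lines (a sketch; the file path and account are placeholders, and note that ConvertFrom-SecureString without -Key ties the encryption to the user and machine that created it):

# One time: capture the password and store an encrypted copy in a file
Read-Host "Mailbox password" -AsSecureString | ConvertFrom-SecureString | Set-Content C:\Scripts\mailpwd.txt

# In the script: read it back and build a credential object
$secpwd = Get-Content C:\Scripts\mailpwd.txt | ConvertTo-SecureString
$emailcred = New-Object System.Management.Automation.PSCredential ("notify@contoso.com", $secpwd)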

However, the account can’t do anything except send email and access to the script location was highly restricted, so I left it as text, which was also easier to demonstrate below (in my own environment I used the alternate approach above just to make the password a little harder to read at a glance :-)). Replace this with your own email and password.

The script looks for any password expiring in less than 10 days and emails a simple message. Customize as you like! It has a basic HTML block with a placeholder (MESSAGEHOLDER) that is replaced by a custom string for each user.
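A rough sketch of that approach (the sender details, HTML and threshold are placeholders, and reading the expiry from the msDS-UserPasswordExpiryTimeComputed constructed attribute is my assumption):

# Sender details - replace with your own (see the note above about storing the password)
$smtpUser = "notify@contoso.com"
$smtpPass = ConvertTo-SecureString "NotARealPassword" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($smtpUser, $smtpPass)

$html = "<html><body><p>MESSAGEHOLDER</p></body></html>"

# Only users with a real email stored in extensionAttribute10
$users = Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
    -Properties extensionAttribute10,"msDS-UserPasswordExpiryTimeComputed" |
    Where-Object { $_.extensionAttribute10 }

foreach ($user in $users)
{
    $expires = [datetime]::FromFileTime($user.'msDS-UserPasswordExpiryTimeComputed')
    $daysLeft = ($expires - (Get-Date)).Days
    if ($daysLeft -lt 10 -and $daysLeft -ge 0)
    {
        $message = "Hello $($user.Name), your password expires in $daysLeft days on $expires."
        Send-MailMessage -To $user.extensionAttribute10 -From $smtpUser `
            -Subject "Your password expires in $daysLeft days" `
            -Body ($html -replace 'MESSAGEHOLDER',$message) -BodyAsHtml `
            -SmtpServer smtp.office365.com -Port 587 -UseSsl -Credential $cred
    }
}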

Have fun!

Add group members to another tenant via Azure AD B2B and PowerShell

I needed to add members of a number of groups in one Azure AD tenant to a group in another Azure AD tenant that would then be given access to a resource. The goal was to not require the added users to redeem the invite, which is normally required when adding a B2B user. The first step was to invite a user via B2B the normal way; that user redeemed the invite and in this case was then made a global admin (although another option would have been to enable guests to invite guests). The key point was this user had the ability to invite people via B2B and could enumerate users in the invited Azure AD instance, which meant invites would not have to be redeemed.

My first version of the script was very simple; however, I soon realized I would have to rerun the script to add new users, so I enhanced it to extract the current members of the group and convert them to regular email format (when invited to Azure AD, users have the @ replaced with an _ and are put in a string with various components separated by #). The script therefore extracts the first part and converts the _ back to an @, then invites only the people who are not already members.

In the script below replace the group names, Azure AD names and IDs to meet your requirements.
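A rough sketch of that logic using the AzureAD module (the tenant ID, group name, redirect URL and the $sourceMembers list of source email addresses are placeholders; it assumes you are connected as the B2B account described above):

# Connect to the target tenant as the B2B account that has invite rights
Connect-AzureAD -TenantId 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' -Credential $cred

# Existing members of the target group, converted back to regular email format
$targetGroup = Get-AzureADGroup -SearchString 'MTC Resource Access'
$existing = Get-AzureADGroupMember -ObjectId $targetGroup.ObjectId -All $true |
    ForEach-Object { ($_.UserPrincipalName -split '#')[0] -replace '_(?=[^_]*$)','@' }

# $sourceMembers holds the email addresses gathered from the groups in the home tenant
foreach ($email in $sourceMembers)
{
    if ($existing -notcontains $email)
    {
        $invite = New-AzureADMSInvitation -InvitedUserEmailAddress $email `
            -InviteRedirectUrl 'https://myapps.microsoft.com' -SendInvitationMessage $false
        Add-AzureADGroupMember -ObjectId $targetGroup.ObjectId -RefObjectId $invite.InvitedUser.Id
    }
}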

 

 

Create AD sites through PowerShell

I recently needed to create an AD site for each MTC (an office), add the IP range assigned to that MTC (which was in a CSV file) and then associate the site with a site link for its region. This is so the Active Directory automatic site coverage feature will enable DCs to populate per-site DNS records for the MTCs ensuring authentication traffic uses the most optimal DC. The DCs are spread over four regional locations.

The CSV file simply had one or two second-octet numbers for the /16 IP ranges associated with each MTC. The code therefore enumerates through each OU and checks whether the MTC can be found in the CSV data for the IP ranges. Then, if the site does not already exist, it is created, added to its regional site link (based on the parent OU name and, for NA, whether it’s East or West) and the IP ranges for the MTC are assigned.
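A simplified sketch of that flow (the OU paths, site link names and CSV layout are assumptions based on the description above):

# CSV assumed to have MTC and SecondOctet columns, e.g. "Chicago,34"
$ranges = Import-Csv C:\Scripts\MTCIPRanges.csv
$regionOUs = Get-ADOrganizationalUnit -SearchBase 'OU=MTCs,DC=onemtc,DC=net' -SearchScope OneLevel -Filter *

foreach ($regionOU in $regionOUs)
{
    foreach ($mtcOU in Get-ADOrganizationalUnit -SearchBase $regionOU.DistinguishedName -SearchScope OneLevel -Filter *)
    {
        $entries = $ranges | Where-Object { $_.MTC -eq $mtcOU.Name }
        if ($entries -and -not (Get-ADReplicationSite -Filter "Name -eq '$($mtcOU.Name)'"))
        {
            # Create the site, add it to its regional site link and register its subnets
            New-ADReplicationSite -Name $mtcOU.Name
            Set-ADReplicationSiteLink -Identity "$($regionOU.Name)-SiteLink" -SitesIncluded @{Add=$mtcOU.Name}
            foreach ($entry in $entries)
            {
                New-ADReplicationSubnet -Name "10.$($entry.SecondOctet).0.0/16" -Site $mtcOU.Name
            }
        }
    }
}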

 

Bulk create group policy objects with PowerShell

A lot of the work I do around Active Directory and Azure AD is for our OneMTC.net environment used by our global Microsoft Technology Centers. It is built around a number of region-based organizational units which then have child OUs for each MTC.

The requirement was to create a number of GPOs for each MTC which could then be modified by the local administrator of the MTC. To do this I created two template GPOs with most of the basic settings, which I then just needed to copy to new, per-MTC GPO instances and link to each MTC’s OU. This was very easy with PowerShell and the GroupPolicy module.

I had also already created the GPOs for a couple of MTCs, so I wanted to skip creating the objects for those. In the PowerShell below you can see I have a variable for the top level of the MTC OUs and then an array of the top-level regional OUs. From there I have the names of the GPO templates and an array of the MTCs to skip. At that point I just enumerate the OUs, copy the GPOs and link the new per-instance GPOs to the OU.
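A cut-down sketch of that loop (the OU paths, template names and skip list are placeholders):

$topOU = 'OU=MTCs,DC=onemtc,DC=net'
$regions = 'NorthAmerica','EMEA','Asia','LATAM'
$templates = 'MTC-Template-Computer','MTC-Template-User'
$skipMTCs = 'Chicago','Dallas'

foreach ($region in $regions)
{
    foreach ($mtcOU in Get-ADOrganizationalUnit -SearchBase "OU=$region,$topOU" -SearchScope OneLevel -Filter *)
    {
        if ($skipMTCs -notcontains $mtcOU.Name)
        {
            foreach ($template in $templates)
            {
                # Copy the template to a per-MTC GPO and link it to the MTC's OU
                $newGPO = Copy-GPO -SourceName $template -TargetName "$($mtcOU.Name)-$template"
                New-GPLink -Name $newGPO.DisplayName -Target $mtcOU.DistinguishedName | Out-Null
            }
        }
    }
}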

 

Delivering a Customizable, Graphical Insight into Azure VM Security, Health and Connectivity Using Several Azure Services Together

In this blog I want to walk through a solution I recently architected and implemented along with two other MTC architects to deliver something we needed for two reasons:

  1. To provide insight into the VMs hosted in Azure across the global Microsoft Technology Center environment
  2. Showcase the use of some key Microsoft cloud technologies

The Requirement

The global MTC organization is made up of around 30 offices which each have several Azure subscriptions to host the projects they are working on and environments used in customer activities. Additionally, there are several global, shared Azure subscriptions that host core infrastructure and experiences. These subscriptions are tied to various Azure AD tenants depending on requirements. The primary subscription for each MTC also hosts a virtual network that is part of a global IP space that is connected via one of four regional ExpressRoute circuits to the MTC worldwide VPN that provides connectivity between all MTC offices.

While there is a standard governance and process guide, each MTC has control of their own subscriptions and resources; however, from a central MTC organization perspective, insight into several key factors was required:

  • Are the VMs registered with the central Log Analytics instance to report inventory and patch state? Log Analytics is part of the Operations Management Suite and is used to accept log information of almost any sort, then provides powerful analytical capabilities to use that information to provide insight into the environment. A number of solutions are included that provide visibility into best practices, patch status, anti-malware status and much more. For OS instance visibility Log Analytics uses the Microsoft Monitoring Agent (MMA), which is the same agent used by System Center Operations Manager.
  • What is the current patch status of the VM? This is provided by information sent to Log Analytics and to Azure Security Center if registered. Azure Security Center (ASC) provides a central security posture location for Azure resources including VM health, network health, storage health and more.
  • Is the VM connected to ExpressRoute? This can be found by checking the virtual network the VM is attached to and whether that virtual network has an ExpressRoute gateway connected (see the sketch after this list).
  • Does the VM have a public IP and is it healthy? Public IP existence can be found through the properties of the VM’s IP configurations, and the health is based on the use of Network Security Groups to lock down communication, as reported through ASC.
  • Is the VM older than 30 days? Object creations are logged in Azure. By default these logs are kept for 60 days, which enables a search of the logs for the VM creation. If not found it would mean the VM is older than 60 days, and if found the exact age can be determined. The age is useful as short-term VMs do not have the same levels of reporting requirements, i.e. they do not have to be registered to OMS.
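As an example of the ExpressRoute check, something along these lines builds the list of virtual networks with an ExpressRoute gateway using the AzureRM module (the resource group name is a placeholder):

# Virtual networks in a resource group that have an ExpressRoute gateway attached
$gateways = Get-AzureRmVirtualNetworkGateway -ResourceGroupName 'MTC-Network-RG' |
    Where-Object { $_.GatewayType -eq 'ExpressRoute' }
$erVNetIds = $gateways | ForEach-Object {
    # The gateway's IP configuration sits in .../subnets/GatewaySubnet; trim to the virtual network ID
    ($_.IpConfigurations[0].Subnet.Id -split '/subnets/')[0]
}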

The health insight needed to be in a form that provides easy overall visibility while allowing detail to be exposed by drilling down into the data.

The Solution

I started off crafting a solution in PowerShell, through which I can access the full capabilities of Azure Resource Manager via the AzureRM module along with other solutions such as Log Analytics, Azure Security Center and Azure Storage.

If you like to read the end of a book first, below is the final solution; what I will walk through is some of the detail you see in the picture.

The first challenge was the context to run the script under, since multiple Azure AD tenants were utilized and I didn’t want to have to manage multiple credentials. Therefore, Azure AD B2B (business-to-business) was utilized. A single identity in the main Azure AD tenant was created, and a communication was then sent to each MTC to add that identity via Azure AD B2B to any local Azure AD tenant instances and to give that account Read permission on all subscriptions. This enabled a single credential to be used across every subscription, regardless of the Azure AD tenant the subscription was tied to. This same credential was also given rights to the Log Analytics instance all VMs report to, which enables queries to be run.

Now that the access was available, the next step was the actual PowerShell to gather the required information. A storage account was created to store the output of each execution: a basic execution report and two JSON files containing custom objects representing the VM state and Azure subscription information.

The basic PowerShell flow is as follows (a condensed code sketch follows the list):

  • Import the ASC and Log Analytics PowerShell modules
  • Access the credential that will be used
  • Connect to Azure using the credential
  • Store a list of every subscription associated to the credential in an array
  • Connect to the Azure Storage account to create a context for BLOB storage
  • Connect to the Log Analytics workspace and trigger two queries whose results were stored in two arrays
    • List of all machines that report to the instance that are stored in Azure
    • List of all machines that are missing patches that are stored in Azure
  • Create three file names containing today’s date: the log file, the VM JSON file and the subscription JSON file
  • Create two empty arrays that will store custom objects for VM state and subscription information
  • For every subscription perform the following:
    • List the administrators and write to the log
    • Retrieve the ASC status for the subscription and store in an array
    • For every Resource Group
      • Find the virtual networks connected to ExpressRoute gateway and store in an array
      • For every VM in the Resource Group
        • Find the creation time by scanning the operational log of Azure; save the creation time if found and note whether it is older than 30 days, or report it as older than 30 days if no log entry is found
        • For each NIC inspect the IP configurations
          • Is it connected to a virtual network that has ExpressRoute connectivity
          • Does it have a public IP address and if so what is the health of that public IP based on information previously saved from ASC
        • Is the VM registered in OMS
        • Is the VM missing patches based on information from OMS or ASC
        • Create a custom object using a hash table with all desired information about the VM and add to the VM object array
    • Add a subscription information custom object to the subscription array
  • Upload the three data files generated to the Azure storage account as BLOBs
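A heavily condensed skeleton of that flow (the credential asset, storage account and container names are placeholders; the real script also pulls in ASC status, the Log Analytics query results and the ExpressRoute/public IP checks described above):

# Credential comes from an Azure Automation credential asset rather than the script itself
$cred = Get-AutomationPSCredential -Name 'MTCScanAccount'
Add-AzureRmAccount -Credential $cred | Out-Null
$subscriptions = Get-AzureRmSubscription

# Storage context used for the output BLOBs
$storageKey = '<storage account key>'
$storageContext = New-AzureStorageContext -StorageAccountName 'mtcscandata' -StorageAccountKey $storageKey

$vmReport = @()
foreach ($sub in $subscriptions)
{
    Set-AzureRmContext -SubscriptionId $sub.Id | Out-Null
    foreach ($rg in Get-AzureRmResourceGroup)
    {
        foreach ($vm in Get-AzureRmVM -ResourceGroupName $rg.ResourceGroupName)
        {
            # Creation time from the Azure operational log (kept for 60 days by default)
            $createLog = Get-AzureRmLog -ResourceId $vm.Id -StartTime (Get-Date).AddDays(-60) |
                Where-Object { $_.OperationName.Value -like '*virtualMachines/write' } |
                Sort-Object EventTimestamp | Select-Object -First 1

            $vmReport += [pscustomobject]@{
                Subscription  = $sub.Name
                ResourceGroup = $rg.ResourceGroupName
                Name          = $vm.Name
                Created       = $(if ($createLog) { $createLog.EventTimestamp } else { 'Over 60 days old' })
            }
        }
    }
}

# Write the VM state JSON and upload it to the azurescan container
$fileName = "VMState-$(Get-Date -Format yyyyMMdd).json"
$vmReport | ConvertTo-Json | Set-Content $fileName
Set-AzureStorageBlobContent -File $fileName -Container 'azurescan' -Context $storageContext | Out-Null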

To actually run the PowerShell I used Azure Automation, which not only provides a resilient engine to run the code but also capabilities such as credential assets, which securely store the identity used and remove any need to hardcode it in the script itself. The schedule capability was used to trigger the runbook (the container for the PowerShell in Azure Automation) to run daily at 11pm.

At this point the Azure Storage account held a report and two JSON files, with the VM state JSON file being the most useful since it enabled all the information to be queried easily. However, the goal was to make the data more easily digestible, which meant Power BI, and ideally to make it more easily available to everyone, e.g. in Teams, along with a notification that the night’s execution was successful.

The solution was to use a Logic App (created by Ali Mazaheri, https://blogs.msdn.com/alimaz), which enables activities to be chained together using various connectors, including Azure Storage, Teams and SharePoint. The Logic App was designed with a recurrence trigger (but could also trigger based on object creation and other events) and then performs the following:

  • List the blobs in the azurescan container (a container is like a folder in Azure Storage)
  • For each object that is not empty
    • Get the BLOB content
    • Create a file containing that content in SharePoint
    • Copy the BLOB to an archive BLOB
    • Delete the original BLOB

 

  • Write a message to a Teams channel that the log migration was completed (or send an email, a notification to a phone, etc.)

A great feature of Logic Apps is that they are implemented by adding the built-in connectors or your own API apps and Azure Functions, and then graphically laying out the flow using conditions, branches and those connectors, passing the output of one connector as the input of the next (plus, in this case, some custom expressions). Below is the key content of the Logic App (as an alternative we could also have used Azure Functions and Event Grid to achieve the same goal).

The final step was the Power BI portion to read in the file from SharePoint and provide a visualization of the data contained in the JSON. David Browne created this powerful dashboard that enabled various visualizations of the data and easy access to change the criteria of the data contained.

The Power BI Service can connect directly to SharePoint Online to read the files.  Power Query in Power BI is used to identify the latest data files, convert them from JSON to a tabular format and to clean the data.  The data is then loaded into an in-memory Tabular Model hosted by Power BI and configured for daily refresh.

Fix WSUS Console Crash

I recently deployed a new WSUS server on Windows Server 2016 but the console would crash. The WSUS engine itself had crashed, and it turns out the problem is that its IIS application pool runs out of memory. Make sure your WSUS server has at least 8GB of memory, then perform the following:

  1. Open IIS Manager
  2. Select <server> – Application Pools
  3. Right click on WsusPool and select Advanced Settings
  4. Change the Recycling – Private Memory Limit (KB) from 1.4GB to around 4.8GB, e.g. 5033164
  5. Click OK
  6. Start or Recycle the WsusPool
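If you prefer to script the change, the same settings can be applied with the WebAdministration module (a sketch; the value is in KB and matches the figure above):

Import-Module WebAdministration
# Raise the WsusPool private memory limit (KB) and then recycle the pool
Set-ItemProperty -Path 'IIS:\AppPools\WsusPool' -Name 'recycling.periodicRestart.privateMemory' -Value 5033164
Restart-WebAppPool -Name 'WsusPool'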

Problem should be fixed!