Hell’s Cloud Ops

Been watching Hell’s Kitchen in the background while working on some projects, and I think it would make an awesome cloud operations show and a fun way to communicate some core concepts. Imagine…

Chef in calm voice – OK team, today we are working on providing a tasty SQL service for our customer that will be used from a fairly basic application. Off you go.

<contestants scurry off to their workstation areas>

<chef wanders over to Bob>

Chef angry – Bob, WHAT ARE YOU DOING?

Bob – I’m creating each VM that will be part of the SQL cluster.

Chef furious – You’re creating each VM one at a time in the portal???? Oh my god! Is your computer made of red and yellow plastic with “My first” written on the top of it? At least I see you’re using Availability Sets for some resiliency, but this is ridiculous. How will you ensure consistency? How will you scale to creating 50 instances of this? How would this integrate with DevOps? Start again, use Infrastructure as Code, and if I see you in a portal that mouse will be going where the sun doesn’t shine.

Bob – Yes chef!

<15 minutes later Bob presents his template>

Chef – OK, nice template, good resources. Oh no no no no. What have you done????? WHY HAVE YOU HARD CODED values in the resources section??? WHERE IS THE PARAMETER FILE?? How are you going to change control this? How will you deploy this to different environments, to different instances? You donkey! Take environment-specific values out of the template and get them in a parameter file! Then you have one change-controlled template, with environment- and instance-specific values kept completely separate! IDIOT! FIX THIS!

<5 minutes later Bob returns>

Chef – Let’s see. Good parameter use; let’s look at the parameter file. DONKEY! Are you here to destroy the company??? WHYYYYY do you have the administrator password in the parameter file???

Bob – I needed it to join the machines to the domain via the domain join extension chef

Chef – And you felt the best way to do that was to place that password in a file that you then uploaded to a repository??? Your company’s most important password is now known to everyone, a group of teenagers has taken over your company, your wife has left you, and your kids pretend they’re adopted, they’re so embarrassed. Good luck stocking vending machines after destroying your company. IDIOT! Where would be a better place, do you think? CAN YOU THINK?

Bob – Azure Key Vault chef

Chef – Can you do that? Are you capable? DO IT! And heaven help you if you forget to update the vault’s advanced access policy to allow use of the secret from ARM template deployments.

<5 minutes and Bob returns>

Chef – Let’s see how you can ruin my day now. This is acceptable. Will work well. Nice use of secrets. I see you even created a release pipeline. Now tell me, why didn’t you just use Azure SQL Database?

<A small tear rolls down Bob’s cheek and credits roll>

Using AD extensionAttributes in Azure AD

I had a value in one of my extensionAttributes in AD populated with data I needed to leverage in Azure AD dynamic groups. The specific attribute was extensionAttribute5. Without doing anything else this attribute is replicated to Azure AD and can be used as part of a dynamic group. For example, I created a rule:

(user.extensionAttribute5 -contains "Chief Technical Architect")

However, I was unable to see this value when looking at users through the AzureAD PowerShell module. The values are visible through the Exchange Online PowerShell environment; however, I wanted to leverage Azure AD PowerShell. I therefore added the attributes as part of the Azure AD Connect replication. (Note I also added one of the msDS-cloudExtensionAttributes to show another available attribute.)

[Screenshot: selecting extensionAttribute5 and msDS-cloudExtensionAttribute1 as directory extension attributes in the Azure AD Connect wizard]

Once replicated you are now able to view the values as shown:

PS Azure:\> Get-AzureADUser -ObjectId johnsav@onemtc.net | Select-Object -ExpandProperty ExtensionProperty

Key              Value
---              -----
odata.metadata   https://graph.windows.net/32dc2feb-7fd6bf/$…
odata.type       Microsoft.DirectoryServices.User
createdDateTime  9/26/16 6:32:37 PM
employeeId
userIdentities []
userState
userStateChangedOn
extension_391c602828_msDS_cloudExtensionAttribute1   Chief Technical Architect
extension_391c602828_extensionAttribute5             Chief Technical Architect

If you need a specific value then reference it by its full name as shown above (note your name will be different), for example:

(Get-AzureADUserExtension -ObjectId johnsav@onemtc.net).get_item("extension_391c602828_extensionAttribute5")

Deploying Agents to Azure IaaS VMs using the Custom Script Extension

In an ideal world, organizations should try to avoid creating custom images with their own special agents and configurations. Custom images mean a lot of image management, as each time an agent is updated the image has to be updated in addition to the normal patching of OS instances. The Azure Marketplace has a large number of OS images that are kept up to date; these should be used where possible, with any customization performed on top. I recently had a Proof of Concept where a number of agents needed to be deployed post VM deployment along with other configurations. Items such as domain join can be done with the domain join extension, but for the other agent installs we decided to use the Custom Script Extension to call a bootstrap script which would do nothing other than pull down all content from a certain container using azcopy.exe and then launch a master script. The master script would be part of the downloaded content and would then perform all the silent installations and customizations required.

A storage account is utilized with two containers:

  • Artifacts – This contains the master script and all the agent installers etc. A zip file could be used here to maintain a folder structure for the various agents, with the master script unzipping it at the start
  • Bootstrap – This contains azcopy.exe (in my case version 10) and the bootstrap.ps1 file that does nothing other than call azcopy to copy everything from the artifacts container to the c:\ root, then launch the master script from the local copy

Below is my example bootstrap.ps1 file. Notice it has one parameter, the URI of the container which will be the shared access signature enabling access.
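A minimal sketch of such a bootstrap.ps1 follows; the azcopy.exe location, the container name, and the master script name (master.ps1) are assumptions, so adjust them to your environment:

```powershell
# bootstrap.ps1 - minimal sketch, not the original script
# Assumptions: azcopy.exe sits alongside this script, the artifacts
# container is named 'artifacts', and the master script is master.ps1
param(
    [Parameter(Mandatory = $true)]
    [string]$ContainerUri   # SAS URI granting read/list on the artifacts container
)

# Pull everything from the artifacts container down to the C:\ root
& "$PSScriptRoot\azcopy.exe" copy $ContainerUri 'C:\' --recursive

# Launch the master script from the local copy to run the silent installs
& 'C:\artifacts\master.ps1'
```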

Azcopy.exe was downloaded from https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10 and copied to the bootstrap container along with the bootstrap.ps1 file. In my case there is nothing sensitive in the file, so I made the container public. This avoids needing an access key as part of my ARM template that would ultimately call this script.

All the installers and the master script were uploaded to the artifacts container. For this container I wanted a shared access signature (SAS) that would give read and list rights. The idea is that some automation would generate a new SAS each week and write it to a secret in a key vault that only the people who should deploy have access to. The SAS would have a lifetime of 2 weeks to overlap with the newly generated one. In addition to generating and storing the complete SAS, I needed a second version escaped for cmd.exe. This is because the SAS contains & characters, which were being interpreted during my testing, breaking its use. I tried the stop-parsing token (--%) but this did not work since the command was being called by cmd.exe; the escape is therefore to replace each & with ^&. The script below generates the SAS and the escaped SAS and writes both versions as secrets to key vault.
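As a sketch of that automation (the storage account, container, and vault names below are placeholders, and the Az PowerShell module is assumed):

```powershell
# Sketch: generate a 2-week read/list SAS for the artifacts container and
# store both the raw and cmd.exe-escaped versions as Key Vault secrets.
# All resource names here are placeholders.
$ctx = (Get-AzStorageAccount -ResourceGroupName 'RG-Artifacts' `
        -Name 'mystorageacct').Context

# Read + list permissions, full URI, two-week lifetime
$sas = New-AzStorageContainerSASToken -Context $ctx -Name 'artifacts' `
        -Permission rl -ExpiryTime (Get-Date).AddDays(14) -FullUri

# Escape each & so cmd.exe does not interpret it when the CSE runs
$sasEscaped = $sas -replace '&', '^&'

Set-AzKeyVaultSecret -VaultName 'kv-deploy' -Name 'ArtifactsSAS' `
    -SecretValue (ConvertTo-SecureString $sas -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName 'kv-deploy' -Name 'ArtifactsSASEscaped' `
    -SecretValue (ConvertTo-SecureString $sasEscaped -AsPlainText -Force)
```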

Once this was done I had a SAS available in the key vault that would give read and list access to the artifacts container. Remember to configure the Access Policy on the vault to enable use of secrets from ARM templates (advanced settings), and additionally for the users/groups to have access to the secret. A test of this process from my local machine worked.
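For example, a local test could look like the following (the vault and secret names are hypothetical, and a recent Az.KeyVault module is assumed for -AsPlainText):

```powershell
# Local test: fetch the (unescaped) SAS from Key Vault and copy the
# artifacts container contents down to C:\ - names are placeholders
$sas = Get-AzKeyVaultSecret -VaultName 'kv-deploy' -Name 'ArtifactsSAS' -AsPlainText
& .\azcopy.exe copy $sas 'C:\' --recursive
```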

Next I tried calling it as I would via the Custom Script Extension, which with the escaped version worked great (note it’s the escaped URL, as this will get expanded in the template).

Initially my test was against an existing Azure VM, so I used the following (note I’m getting the escaped version of the secret from Key Vault):
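Something along these lines (the resource names are placeholders, not the ones from the PoC):

```powershell
# Fetch the cmd.exe-escaped SAS from Key Vault (placeholder names)
$sasEsc = Get-AzKeyVaultSecret -VaultName 'kv-deploy' `
    -Name 'ArtifactsSASEscaped' -AsPlainText

# Apply the Custom Script Extension to an existing VM, passing the SAS
# as the bootstrap script's argument
Set-AzVMCustomScriptExtension -ResourceGroupName 'RG-POC' -VMName 'vm-test01' `
    -Location 'eastus' -Name 'Bootstrap' `
    -FileUri 'https://mystorageacct.blob.core.windows.net/bootstrap/bootstrap.ps1' `
    -Run 'bootstrap.ps1' -Argument $sasEsc
```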

Once this worked I finally created an ARM template that included a reference to the secret and all worked as planned.

The parameter file (note I also get a secret to join the domain even though I’m not using the domain join extension in this example):
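The Key Vault reference pattern in a parameter file has this shape (the subscription ID, resource group, vault, and secret names are placeholders):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "scriptSasUri": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/RG-POC/providers/Microsoft.KeyVault/vaults/kv-deploy"
        },
        "secretName": "ArtifactsSASEscaped"
      }
    }
  }
}
```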

The actual template (note in the CSE extension at the end I need the single quotes around the URI or it once again tries to interpret it, so you have to use two, i.e. '', to get one ' when it actually executes):
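The CSE resource in the template would follow this shape, showing the doubled single quotes around the URI (resource names, parameter names, and the API version here are illustrative):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/Bootstrap')]",
  "apiVersion": "2018-06-01",
  "location": "[parameters('location')]",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://mystorageacct.blob.core.windows.net/bootstrap/bootstrap.ps1" ]
    },
    "protectedSettings": {
      "commandToExecute": "[concat('powershell.exe -ExecutionPolicy Unrestricted -File bootstrap.ps1 -ContainerUri ''', parameters('scriptSasUri'), '''')]"
    }
  }
}
```

Note the '' sequences inside the concat: each pair collapses to a single ' by the time cmd.exe executes the command, which keeps the SAS URI quoted.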

And the execution (note the network and RG already existed in my environment).
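The deployment itself is then a standard resource group deployment (the resource group and file names are placeholders):

```powershell
# Deploy the template into the existing resource group
New-AzResourceGroupDeployment -ResourceGroupName 'RG-POC' `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json
```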

Hope this helps!

PowerShell Master Class Module 1 Available!

I’ve finally got round to starting recording on my PowerShell Master Class. This will be a long course that will be free on YouTube. I’ll be adding modules over the next month and updating as needed. I’ll tweet (@NTFAQGuy) when I add new things.

Module 1 is PowerShell Fundamentals and available at https://youtu.be/sQm4zRvvX58.

The course playlist is available at https://www.youtube.com/playlist?list=PLlVtbbG169nFq_hR7FcMYg32xsSAObuq8.

The course assets are available via GitHub at https://github.com/johnthebrit/PowerShellMC.

New Free Data Courses on Pluralsight Available!

Over the past two months I’ve been busy on some Data in Azure courses for Microsoft and Pluralsight. These are free; you just need to sign up for a free account on Pluralsight. They will shortly be available via Azure as well but are available now through Pluralsight.

Azure Stack Marketplace Management

Been doing some work with Azure Stack and wanted to easily update all the Microsoft provided extensions and a set of core images if there are new versions by running a simple script.

Script available at https://github.com/johnthebrit/AzureStack/blob/master/azurestackmarketplace.ps1. Simply run the script and after it downloads the assets it will check if there are older versions and prompt you if you want to delete the old ones.

Lots of new Azure Design and Identity free training available

I may have seemed very quiet over the past few months, but that’s because I’ve been working pretty much every night and weekend on 11 new courses for azure.com that will shortly be available via the site but are immediately available for free via Pluralsight. If you don’t have an account, simply sign up for a free one and you can then access my (and other people’s) tracks.

Planning Microsoft Azure Identity and Security

Planning Microsoft Azure Infrastructure

The identity track looks at identity management before diving into authentication, authorization, auditing, monitoring and risk. The infrastructure track looks at compute, storage, networking and monitoring.

I hope you find these courses useful and there are more to come.

On a side note I’m trying to raise money for Cure Childhood Cancer as part of my Ironman Chattanooga on 9/30/2018. This will be my 5th Ironman this year and 12th overall. If you can help even a little please head over to https://www.firstgiving.com/fundraiser/john-savill/IM2018 and maybe your company matches so if they do that helps as well. I’ll be trackable on the day via https://bat.live/track/imchattanooga2018?bib=356.

Thank you!

Two new videos on Azure AD – Conditional Access and Tokens!

Recorded two new videos this week. The first is an understanding of how tokens work with Azure AD and then one looking at conditional access (which can control the access to get those tokens for various scenarios).

Word of caution – I talk about terms of use in the second video. If you just enable this for ALL users it will break things that can’t accept it, for example the account Azure AD Connect uses to sync to Azure AD, so make sure you exclude accounts that can’t accept!