Finding your Exchange Servers

For reasons similar to those I mentioned in the post ‘How to locate your ARS servers using the service connection point’, I wrote a function to find my Exchange servers.

It’s never a good idea to hard-code values into your scripts: it makes your code less portable and leaves you at the mercy of environment changes. Your scripts will fail when the hard-coded value no longer matches the server or object you are trying to connect to.

By default the function returns Exchange servers from the local site of the machine running the script. If the -InSiteOnly switch is specified the function only returns Exchange servers from the local / specified site, unless there are no servers in that site, in which case it returns servers from all sites.

Function Get-ExchangeServers { # Version 2.00
 param (
  [parameter(Mandatory=$false,Position=1,HelpMessage='Returns the exchange server names from the specified site in preference to any other site')]
  [string]$ADSiteName,
  [parameter(Mandatory=$false,Position=2,HelpMessage='When present will only return exchange servers from the local / specified site unless that site contains no exchange servers, in which case servers from all sites are returned')]
  [switch]$InSiteOnly,
  [parameter(Mandatory=$false,Position=0,HelpMessage='The maximum number of exchange servers to return, it will by default return local / specified site servers at the top of the list')]
  [ValidateRange(1,[int]::MaxValue)]
  [int]$maxNumberOfServers = 1, # default of 1 is assumed
  [parameter(Mandatory=$false,Position=3,HelpMessage='Returns the specified exchange server version only from the specified site in preference to any other site')]
  [ValidateSet("2013","2016")][string]$Version
 )
 if ( $Version ) {
  switch ( $Version ) {
   "2013" { $VersionString = "Version 15.0" }
   "2016" { $VersionString = "Version 15.1" }
  }
 }
 if ( $ADSiteName ) { # AD Site name specified so get the site DN
  $computerSiteDN = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Sites |
   Where-Object { $_.Name -eq $ADSiteName } |
   Select-Object @{name="DN";expression={$_.GetDirectoryEntry().distinguishedName}} |
   Select-Object -ExpandProperty DN
 }
 else {
  $ADSiteName = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().GetDirectoryEntry().name
 }
 if ( $computerSiteDN -eq $null ) {
  # AD Site name not specified or not found so get the local machine's site
  $computerSiteDN = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().GetDirectoryEntry().distinguishedName
 }
 if ( $computerSiteDN -eq $null ) {
  Throw "FATAL ERROR: Unable to get the AD site DN"
 }
 $returnData = @() # ensures that an array of server names is always returned
 # search the configuration partition for exchange servers
 $SearchTool = New-Object DirectoryServices.DirectorySearcher([ADSI]('LDAP://' + ([ADSI]'LDAP://RootDse').configurationNamingContext))
 $SearchTool.Filter = "(objectClass=msExchExchangeServer)"
 $ExchangeServers = $SearchTool.FindAll()
 # get the exchange servers that are in the local / specified AD Site
 $exchangeServersInSite = @($ExchangeServers | Where-Object { $_.Properties.msexchserversite -eq $computerSiteDN })
 if ( $VersionString ) {
  $exchangeServersInSite = @($exchangeServersInSite | Where-Object { $_.Properties.serialnumber.substring(0,12) -eq $VersionString })
 }
 $exchangeServersInSite = @($exchangeServersInSite | Select-Object @{name="name";expression={$_.Properties.name}} | Select-Object -ExpandProperty name)
 if ( $exchangeServersInSite.count -le 0 ) { # no servers found in the local / specified AD site so get exchange servers from all other sites
  $exchangeServersInSite = @($ExchangeServers | Where-Object { $_.Properties.msexchserversite -ne $computerSiteDN })
  if ( $VersionString ) {
   $exchangeServersInSite = @($exchangeServersInSite | Where-Object { $_.Properties.serialnumber.substring(0,12) -eq $VersionString })
  }
  $exchangeServersInSite = @($exchangeServersInSite | Select-Object @{name="name";expression={$_.Properties.name}} | Select-Object -ExpandProperty name)
 }
 if ( $exchangeServersInSite.count -le 0 ) {
  Throw "FATAL ERROR: Unable to find any Exchange servers"
 }
 if ( $InSiteOnly ) {
  # Return just the exchange servers we have so far; if the specified site
  # had no servers the list already contains servers from all other sites
  $returnData += $exchangeServersInSite | Get-Random -Count $(if ($exchangeServersInSite.count -le $maxNumberOfServers) { $exchangeServersInSite.count } else { $maxNumberOfServers })
 }
 else {
  if ( $maxNumberOfServers -le $exchangeServersInSite.count ) {
   # the number of servers requested can be delivered from the in-site server list
   $returnData += $exchangeServersInSite | Get-Random -Count $maxNumberOfServers
  }
  else {
   # we need more servers so add in additional servers from the other sites
   $exchangeServersNotInSite = @($ExchangeServers | Where-Object { $_.Properties.msexchserversite -ne $computerSiteDN })
   if ( $VersionString ) {
    $exchangeServersNotInSite = @($exchangeServersNotInSite | Where-Object { $_.Properties.serialnumber.substring(0,12) -eq $VersionString })
   }
   $exchangeServersNotInSite = @($exchangeServersNotInSite | Select-Object @{name="name";expression={$_.Properties.name}} | Select-Object -ExpandProperty name)
   $returnData += $($exchangeServersInSite + $($exchangeServersNotInSite | Get-Random -Count $(if ($exchangeServersNotInSite.count -le ($maxNumberOfServers - $exchangeServersInSite.count)) { $exchangeServersNotInSite.count } else { ($maxNumberOfServers - $exchangeServersInSite.count) })))
  }
 }
 if ( $returnData.count -le 0 ) {
  Write-Error "ERROR: No exchange Servers Returned for site: '$ADSiteName'"
 }
 Return ,$returnData
}            # Get-ExchangeServers            Version 2.00
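For reference, here are a few sketches of how the function can be called; the site name and URI suffix below are made up for illustration:

```powershell
# Sketch: calling Get-ExchangeServers (site name and URI are hypothetical)

# Up to 2 Exchange 2016 servers, preferring the local AD site
$servers = Get-ExchangeServers -maxNumberOfServers 2 -Version "2016"

# Servers from a named site only (falls back to all sites if that site is empty)
$siteServers = Get-ExchangeServers -ADSiteName 'London' -InSiteOnly

# Use the first server returned to build a remoting connection URI
$uri = "http://$($servers[0])/PowerShell/"
```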


Extracting Photos from AD

This post is actually about the ‘DontConvertValuesToFriendlyRepresentation’ switch on Get-QADUser, but I came across it because I was trying to extract photos from AD, so I named the post ‘Extracting Photos from AD’ as most people will probably be searching for that and not the command-line switch.

Getting the photo from AD is pretty simple but there are a couple of things to know. When you upload a photo to AD it’s converted from a jpeg to an array of bytes, so you can’t just download it; you have to convert it back. The Quest commandlets are helpful and convert lots of the raw data stored in AD into more readable formats. What this means is that sometimes the help is more of a hindrance: the value you wanted for the photo has been converted, so the byte conversion ‘[System.Io.File]::WriteAllBytes( $Filename,$photoAsBytes )’ fails.

There are two solutions to this. The first is to just access the directory entry like this ‘$user.DirectoryEntry.thumbnailPhoto.Value’ and the second is to tell the commandlet not to convert the values by using the ‘DontConvertValuesToFriendlyRepresentation’ switch.

As I was comparing the speed of the AD commandlets I extracted the thumbnailPhoto attribute using both the AD and Quest commandlets. The AD commandlets are faster, but not by much as long as you use ‘-DontUseDefaultIncludedProperties’. The Quest commandlets pull down lots of attributes, which is why they take longer, so when getting lots of AD objects it’s worth using this switch too.

$ldapFilter = "(&(employeeID=*)(sAMAccountType=805306368)(thumbnailPhoto=*)(!(|(userAccountControl:1.2.840.113556.1.4.803:=2))))"
$searchRoot = "OU=User Accounts,DC=MyADDomain,DC=com"
$useADCommandlets = $false 
$sizelimit = 0
$OutputPath = 'c:\Temp\Photos'
Function ConvertTo-Jpeg {
 param ($userName,$photoAsBytes,$path='c:\temp')
 if ( ! ( Test-Path $path ) ) { New-Item $path -ItemType Directory }
 $Filename = Join-Path $path "$userName.jpg" # build the output file name; "$userName.jpg" is an assumed naming scheme
 [System.Io.File]::WriteAllBytes( $Filename,$photoAsBytes )
}

if ( $useADCommandlets ) {
 #Import-Module ActiveDirectory
 $Users = Get-ADUser -LDAPFilter $ldapFilter -Properties thumbnailPhoto # | Select-Object -First $sizelimit # remove the select to get all users
 ForEach ( $User in $Users ) {
  ConvertTo-Jpeg -userName $user.SamAccountName -photoAsBytes $user.thumbnailPhoto -path $OutputPath
 }
}
else {
 $Users = Get-QADUser -LdapFilter $ldapFilter -SearchRoot $searchRoot -DontUseDefaultIncludedProperties -DontConvertValuesToFriendlyRepresentation -IncludedProperties thumbnailphoto -SizeLimit $sizelimit # set sizelimit to 0 to get all users
 ForEach ( $User in $Users ) {
  #ConvertTo-Jpeg -userName $user.SamAccountName -photoAsBytes $user.DirectoryEntry.thumbnailPhoto.Value -path $OutputPath # if you didn't use the -DontConvertValuesToFriendlyRepresentation switch
  ConvertTo-Jpeg -userName $user.SamAccountName -photoAsBytes $user.thumbnailPhoto -path $OutputPath
 }
}

Group Nesting Strategy – stop the madness.

Talk about getting side-tracked: while looking to confirm the syntax of a PowerShell command I came across this post on the Quest One Identity forum and started to answer it, then realised that I should really be posting here, not in the forum, and then use a shameless plug on the One Identity forum.

This was the question:

“We use the lousy nested structure for shared folder ntfs permissions where a domain local group contains a universal which contains a global and the global has the users.  I want to find a way to create the 3 groups required when a new folder is setup, then add users to the global group”

Woah! 3 groups for every file share!!! I want to scream stop the madness now!

I’ve been trying to explain this for years and I really want to enlist a few people in the Guido school of administration:

“Our Guido School of Admin Training is a very informal school held outside in the nice, fresh air in the alley between two office buildings. There are no formal registration procedures, however you do have to be nominated to attend.

What to expect: The instructor, Guido, will shake your hand and then gently haul you over to the nearest wall by your collar. Then it becomes really exciting and lots of fun. With a jaunty smile, Guido grabs you securely by the back of the neck and smacks your face against the wall while saying in a firm tone of voice:

Do {smack} not {smack} nest {smack} security {smack} groups {smack} into {smack} protected {smack} groups.”

Only in my case it’s do {smack} not {smack} just create groups {smack} for the sake of nesting them {smack}!

I’ve never heard anyone recommend a 3-level group nesting strategy like the one suggested in the post; 2 is already one too many, and possibly two too many.

Use the appropriate groups for the job and NEVER create a group unless you have a good reason to.

The standard Microsoft teaching is to create 2 groups, a domain local (or universal) group and a global group, nesting the global in the local.

That’s not the idea at all, and is why almost every AD I’ve ever seen is a complete mess.

Often an Active Directory will have more groups than the company has employees by several multiples which should tell you something is wrong.  Do a quick count in your environment and see how deep you are into this bad practice.
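A quick way to do that count, assuming the ActiveDirectory module is available:

```powershell
# Compare the number of security groups to the number of users in the domain
Import-Module ActiveDirectory
(Get-ADGroup -Filter *).Count
(Get-ADUser -Filter *).Count
```

If the first number is several times the second, you are deep into the bad practice described above.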

Let’s take a quick look at this. You create a local group or a universal group – well, which is it? A local group can only be used to secure resources in the domain in which it is created. A universal group is available in all domains in the forest. Both groups can contain users and groups from all trusted domains. So which group you use depends on whether you have resources in multiple domains or just one.

Now we come to the crux of the problem here.  You create a global group, add the users to the global group and then nest the global group in the local group.

Why not just add the users to the local group?

Good question. Why not? There is no real reason, but then what’s the point of a global group at all? Another good question.

Did you know that a global group takes up less room in the security token? So you can be a member of more global groups than local groups before you get authentication issues caused by running out of allocated memory space for the security token.
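If you want a rough idea of how many groups end up in a user’s token, one way is to read the constructed tokenGroups attribute, which includes nested membership; the DN below is a placeholder:

```powershell
# Sketch: count the security groups in a user's access token via the
# constructed tokenGroups attribute. The user DN is made up for illustration.
$user = [ADSI]'LDAP://CN=Joe Bloggs,OU=User Accounts,DC=MyADDomain,DC=com'
$user.psbase.RefreshCache('tokenGroups')      # load the constructed attribute
$user.psbase.Properties['tokenGroups'].Count  # number of group SIDs in the token
```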

So why not just use global groups then?

If you don’t have any trusted domains then you could just use global groups but we are getting away from the point here.

Why do Microsoft tell us to do the nesting?

Well the short answer is because this is the most flexible way of doing things.

So what’s wrong with it then – what are they not telling you on the training courses?

The problem is that if you are creating a local group and a global group with a one-to-one relationship every time, then as shown above you could just use the local or global group on its own. This happens because the delegation model was designed in isolation from everything else; the solution is actually quite sound, but if you step back a little and look at the wider picture you might see what’s wrong.

What was it Microsoft said the global groups were for?

You add the users to the global groups, right? What’s the global group called? The group name should reflect the group of users you just added to it. If, for example, we were controlling access to a finance file share, the global group could reasonably be expected to be called ‘Finance Users’. Now what about the local group – what’s that called and what’s it for? Well, the local or universal group should be named after what it’s controlling access to. I’ve adopted the term ‘capability group’ from a book which I promise I’ll look up and post the name, as it’s an excellent book explaining how to delegate properly. The local group name might be ‘Finance Share’ (I’m not getting into naming conventions; this is simplified just for this post). Let’s also suppose there are some applications the ‘Finance Users’ need installed on their PCs; this could be set up using a local group called ‘Finance Applications’ and we could nest the ‘Finance Users’ group in the two local groups.

A self documenting solution

If I look at a user and I see he is a member of the ‘Finance Users’ group, then even if I don’t add HR data to my user objects, e.g. set the department attribute to ‘Finance’, I can see that the user is a finance user. If I did populate the AD user’s department value then the ‘Finance Users’ group could be managed automatically by creating an ARS dynamic group, or you could write a custom script to manage the group as a scheduled task. Also, when I look at the ‘Finance Users’ group I can see it is a member of two local groups, ‘Finance Applications’ and ‘Finance Share’, so just from looking at the groups I can see what the ‘Finance Users’ have access to.

Now let’s say you build another file server and want to share this out to Sales and Finance. Create a ‘capability’ group called ‘Sales and Finance Share’, create a global group called ‘Sales Users’ and add all the sales employees to it, and then nest BOTH the ‘Sales Users’ and ‘Finance Users’ groups in the ‘Sales and Finance Share’ group.
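If you want to script that setup, a sketch using the ActiveDirectory module might look like this; the OU path and user names are made up, and the group names are the ones from the example above:

```powershell
# Sketch: create the capability and role groups and nest them.
# OU path and member names are placeholders.
Import-Module ActiveDirectory
$ou = 'OU=Groups,DC=MyADDomain,DC=com'

# the 'capability' group that goes on the share ACL
New-ADGroup -Name 'Sales and Finance Share' -GroupScope DomainLocal -GroupCategory Security -Path $ou

# the role group holding the users
New-ADGroup -Name 'Sales Users' -GroupScope Global -GroupCategory Security -Path $ou
Add-ADGroupMember -Identity 'Sales Users' -Members 'jbloggs','asmith'   # hypothetical users

# nest the role groups in the capability group
Add-ADGroupMember -Identity 'Sales and Finance Share' -Members 'Sales Users','Finance Users'
```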

What no one is pointing out is that we want to REUSE the global groups – maybe they just think it’s obvious but trust me in my experience it’s not.

The global groups should be reused, and they could be considered role groups. This is where there possibly is a case for a 3-group nesting strategy, but be careful: in a large environment you will hit the group limit. When you hit the limit the access token is truncated, causing access issues when the group you need to access the resource cannot fit in the allocated memory. This will cause an ‘access denied’, or WORSE, the user will get access because the DENY group wouldn’t fit in the memory, which is another reason NOT to use DENY anywhere; just don’t give them access in the first place.

In an ideal world…..

The ideal is that when a new employee starts they are added to a single group, e.g. the ‘Finance Users’ group, automatically, and this group gives access to everything they need to do their job, either by adding the global group directly to the ACL or by nesting it in a local group that has been applied to the ACL of the resource. If they move departments, e.g. from ‘Sales’ to ‘Finance’, their group membership automatically gets updated.


How to get your mugshot photo into AD

My previous posts on this topic explained how to convert the format to a jpeg, then resize it to fit into the AD size limit, and finally how to upload it into AD in the thumbnailPhoto attribute.

This worked fine for years but recently we had some complaints that the pictures were not consistent: AD, SharePoint and Outlook were not all showing the same photograph.

The long and the short of this is that Exchange creates its own copy on the Exchange server, and if you don’t use the Exchange commandlet ‘Set-UserPhoto’ to upload the photo then this cache may not be updated. You can force it to be updated by deleting the cached photo, but it’s better to just use the commandlet like this:

Set-UserPhoto -Identity $guid -PictureData ([System.IO.File]::ReadAllBytes($jpegFile)) -Confirm:$false

Date and Time Formatting

I’ve made a few posts about the hassles of dates and time – I just wish we could all agree on a format and stick to it.

If you are dealing with dates in Powershell in a production script where the script and the updates it makes are viewed globally then you have my sympathy.

Hopefully this post will help you solve your problems.

Dealing with dates is awkward because everybody likes to have their own system for writing down dates.  Our cousins in the US use MM/DD/YYYY and we use DD/MM/YYYY.  Other countries use different separators like the period ( Germany ) or a hyphen (Canada).

This is before we get into the different ways to display a date; try this command at a PoSH prompt:

(Get-Date).GetDateTimeFormats()

This lists all the ways that a date can be displayed.


This usually isn’t a problem when getting dates out of an AD attribute as the data is stored as UTC and is in a standard format.  When you extract the information and display it the system does the conversion for you and displays it using the ‘culture’ of your host operating system.

What your OS is cultured?

You can find out what date and time formats your PC is currently using at a PoSH prompt.

A cultured PoSH prompt?

Get-Culture

In the UK this will return en-GB for English – Great Britain.

Anyway the problem is when you start extracting date strings from CSV files.  Now you need to know if the string 12/02/2016 is the 12th of February or the 2nd of December.

There is no fix for this, by the way; you just have to know! You could parse the whole file to check your assumption that the dates are in UK or US format, but there is no guarantee that any date string will violate either assumption. Any date string that does not have a number above 12 will pass both tests; only days 13 and above will reveal the formatting.

Right, so let’s assume we know that the file has dates in US format. Our US cousins can now stop reading as there is nothing for you to do, unless of course the date strings in your file are in UK format, in which case you have exactly the same issue as your UK counterparts.

Excel gets it wrong too, so don’t feel bad for your PowerShell code

Did you know Excel also makes assumptions based on your regional settings, and you know what happens when you assume something. Yup, Excel will make a complete mess of your data! Try it if you don’t believe me! Anyway, back to PowerShell.

Converting dates

I’m only going to cover, for now at least, the conversion between UK and US dates. The same principles apply to any conversion. You need to know the format in the file and the ‘culture’ of the system processing it. Why do I need the ‘culture’? Because this can actually be different for the user of any system, so rather than assume what it is, use Get-Culture; this way your script should in theory work on any system using any date ‘culture’.

These are not the droids you are looking for

In converting my date strings to match my system date I have 3 possible states.

  1. The date string in the file matches my system ‘culture’
  2. The date string is in US and my system is UK – convert from US to UK
  3. The date string is in UK and my system is in US – convert from UK to US

Once we know which action we need to take, we just use substring statements to rearrange the date string.

Oh, one other thing: I’m assuming we are using a 4-digit year in this example and that every day and month uses 2 characters.

if ( ( ( $dateFormat -eq "US" ) -and ( $(Get-Culture).Name -eq "en-US" ) ) -or ( ( $dateFormat -eq "GB" ) -and ( $(Get-Culture).Name -eq "en-GB" ) ) ) {
 # format matches the culture so leave as is
 $Users = Import-Csv $File -Encoding "UTF7" | Select-Object 'ID','Date'
}
elseif ( ( $dateFormat -eq "US" ) -and ( $(Get-Culture).Name -eq "en-GB" ) ) {
 # US (MM/DD/YYYY) in file, GB system - swap the first two components
 $Users = Import-Csv $File -Encoding "UTF7" |
  Select-Object 'ID',@{Name='Date';Expression={"$($_.'Date'.substring(3,2))/$($_.'Date'.substring(0,2))/$($_.'Date'.substring(6,4))"}}
}
elseif ( ( $dateFormat -eq "GB" ) -and ( $(Get-Culture).Name -eq "en-US" ) ) {
 # GB (DD/MM/YYYY) in file, US system - swap the first two components
 $Users = Import-Csv $File -Encoding "UTF7" |
  Select-Object 'ID',@{Name='Date';Expression={"$($_.'Date'.substring(3,2))/$($_.'Date'.substring(0,2))/$($_.'Date'.substring(6,4))"}}
}

I think I got that right 🙂 anyway, all you need to do is to split up that date string to suit your culture. The code above shows you how, so now you should be able to write your own conversions.
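An alternative to the substring surgery, assuming the same fixed MM/dd/yyyy input, is to parse the string with an explicit format and let .NET re-emit it in the current culture:

```powershell
# Sketch: parse a known-format date string with an explicit format and the
# invariant culture, then display it using the current culture's short date
# pattern. Assumes US-style MM/dd/yyyy input.
$usDate = '12/02/2016'   # 2nd of December in US format
$parsed = [datetime]::ParseExact($usDate, 'MM/dd/yyyy', [System.Globalization.CultureInfo]::InvariantCulture)
$parsed.ToString('d')    # e.g. 02/12/2016 on an en-GB system
```

This avoids hand-counting character positions and copes with any output culture, not just UK and US.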


Fixing the auto correct – today’s pet hate! 

The mysterious case of the missing hyphen…

Whilst most of the time auto correct does a great job, the millions of funny posts showing where it got it wrong just highlight the fact that automation is a double-edged sword.

For example here are the top 25 funniest auto corrects

This post is not really about auto correct on your mobile rather the auto correct that office does as you type text.

Mostly it does a good job

Mostly it does a good job but some changes are pretty annoying, like its conversion of the humble hyphen to a dash – obviously someone at Microsoft thought a dash looks nicer than a hyphen.

Here’s the top ten annoying things in Office, apparently 🙂

Getting the ASCII Code

In a PowerShell prompt you can get the ASCII code by doing this: [byte][char]"-" and this returns 45, not quite the meaning of life, which is 42 according to the Hitchhiker’s Guide to the Galaxy, but it’s closer to it than the code for a “dash”, which is 150.

Why do we care what the ASCII code is?

Why is this a problem? Well, if you use Word, paste to Excel and enter a hyphen, it might get converted to a dash. Still not a problem until you want to use Import-Csv to get that data into a variable so that PowerShell can do something useful with it. What you’ll discover is that the dash disappears and is replaced with a space, or maybe a nice little square box, depending on the encoding you select. It’s not a space, it’s CHR 150, but you can’t see it so it’s displayed as a space. Try splitting the string with .substring(position,1) and then casting it to a byte as mentioned above.
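When you suspect an invisible character like this, dumping each character’s code makes it obvious; the example string here is made up:

```powershell
# Sketch: reveal hidden characters in a string by printing each character
# alongside its numeric code. The string is a made-up example containing a
# dash (char 150) where you would expect a hyphen (char 45).
$text = 'Senior Admin ' + [char]150 + ' EMEA'
$text.ToCharArray() | ForEach-Object { '{0,4} -> {1}' -f [int]$_, $_ }
```

Any 150 in the output is a dash masquerading as a hyphen.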

A real world problem

Today I was importing data into AD using a CSV file provided by the HR department. Somehow one of the cells had a dash instead of a hyphen, and when I pushed the data into AD the hyphen, or more accurately the dash, got lost. There was another problem with the file in that the column names didn’t match the AD attribute names. This was easily dealt with by formatting the data as I imported it like this… (the UTF encoding keeps those pesky UMLAUTS where they need to be, in case you are wondering why I used it)

Translating the column names and calculating values

$HRUsers = Import-Csv $HRFile -Encoding "UTF7" |
Select-Object @{Name="EmployeeID";Expression={$_.'PS ID'}},
@{Name="givenName";Expression={$_.'First Name (PRF)'}},
@{Name="sn";Expression={$_.'Last Name (PRF)'}},
@{Name="Title";Expression={$_.'Business Title'.replace([char]150,"-")}},

All I need to do is set the Name to the AD attribute name; the Expression then uses the column name from the HR extract.

Replacing the dash with a hyphen

To deal with the dash problem I used a replace in the expression:

$_.'Business Title'.replace([char]150,"-")

Which replaces the DASH with a hyphen, just how we like it 🙂

Easy – well it is, now I’ve posted this.  There were a lot of people with similar issues but no full solution that I saw, so I cobbled this together from the pieces I found.

Don’t send too many emails

So this might sound obvious, but it’s easy to send an email using PowerShell, and a corollary is that if you don’t put limits into your scripts you could send a lot of email!

I used a working function in a new script to save time, and this is good practice, isn’t it, to reuse code you have already written and tested?

Unfortunately the script that called the function was sending the wrong information so the net result was my script sent lots and lots of debug messages.  Not cool!

When I write scripts that automate updates in AD I always put limits in so that, should an unexpected event happen, the script doesn’t destroy my AD environment, and I do mean destroy. Automation is a double-edged sword: get it wrong and you can end up doing really bad things, so just in case I ALWAYS put limits in so that a script will not do more than an expected number of updates.

Sending 1000s of emails might not be as destructive but clearly it’s not right so I’ll be adding something like this to any function that can send emails from now on.

if ( ! ( Test-Path variable:script:maxEmailLimit ) ) {
 $script:maxEmailLimit = 10
}
if ( ! ( Test-Path variable:script:emailsSent ) ) {
 $script:emailsSent = 0
}

The point of this blog is also to bring up another technique I use related to variable scope. You can create variables anywhere in your script with scope script.  This means you can call a function that can create variables that will not disappear once the function completes.  Remembering how many emails you sent already is a good example of using this.

The Test-Path checks if I have already created a variable and if not I create it with scope script which means all my other functions will have access to the variable too, in case I have more than one function that can send emails.

Then before sending any emails I compare $script:emailsSent with $script:maxEmailLimit, and if I’ve already sent too many emails my script will send a diagnostic email to me telling me I breached the limit and will then stop sending emails. You can decide if the script should continue or fail gracefully depending on your application.
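Putting those pieces together, a wrapper function along these lines is one way to do it; the function name, SMTP server and addresses are placeholders, not part of my original script:

```powershell
# Sketch: a hypothetical wrapper that enforces the script-scoped email limit
# before calling Send-MailMessage. Server and addresses are made up.
Function Send-LimitedEmail {
 param ($To,$Subject,$Body)
 # create the counters at script scope if a previous call hasn't already
 if ( ! ( Test-Path variable:script:maxEmailLimit ) ) { $script:maxEmailLimit = 10 }
 if ( ! ( Test-Path variable:script:emailsSent ) )   { $script:emailsSent = 0 }
 if ( $script:emailsSent -ge $script:maxEmailLimit ) {
  Write-Warning "Email limit of $script:maxEmailLimit reached - not sending '$Subject'"
  return
 }
 Send-MailMessage -SmtpServer 'smtp.mydomain.com' -From 'scripts@mydomain.com' -To $To -Subject $Subject -Body $Body
 $script:emailsSent++
}
```

Because the counters live at script scope they are shared by every function that sends email, so the limit applies to the script as a whole.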