Finding your Exchange Servers

For similar reasons to those I mentioned in the post ‘How to locate your ARS servers using the service connection point’, I wrote a function to find my Exchange servers.

It’s never a good idea to hard code values into your scripts: it makes your code less portable and leaves you at the mercy of environment changes.  Your scripts will fail when the hard coded value no longer matches the server or object you are trying to connect to.

By default the function returns Exchange servers from the local site of the machine running the script.  If the $InSiteOnly parameter is specified, the function only returns Exchange servers in the local / specified site unless there are no servers in that site, in which case it returns servers from all sites.

Function Get-ExchangeServers { # Version 2.00
 param (
  [parameter(Mandatory=$false,Position=1,HelpMessage='Returns the exchange server names from the specified site in preference to any other site')]
  [string]$ADSiteName,
  [parameter(Mandatory=$false,Position=2,HelpMessage='When present only returns exchange servers from the local / specified site, unless that site contains no exchange servers, in which case servers from all sites are returned')]
  [switch]$InSiteOnly,
  [parameter(Mandatory=$false,Position=0,HelpMessage='The maximum number of exchange servers to return; local / specified site servers are returned at the top of the list')]
  [ValidateRange(1,[int]::MaxValue)]
  [int]$maxNumberOfServers = 1, # variable name reconstructed from the function body; the default of 1 is an assumption
  [parameter(Mandatory=$false,Position=3,HelpMessage='Returns the specified exchange server version only, from the specified site in preference to any other site')]
  [ValidateSet("2013","2016")][string]$Version
 )
 if ( $Version ) {
  switch ( $Version ) {
   "2013" { $VersionString = "Version 15.0" }
   "2016" { $VersionString = "Version 15.1" }
  }
 }
 if ( $ADSiteName ) { # AD Site name specified so get the site DN
  $computerSiteDN = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Sites |
   Where-Object { $_.Name -eq $ADSiteName } |
   Select-Object @{name="DN";expression={$_.GetDirectoryEntry().distinguishedName}} |
   Select-Object -ExpandProperty DN
 }
 else {
  $ADSiteName = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().GetDirectoryEntry().name
 }
 if ( $computerSiteDN -eq $null ) {
  # AD Site name not specified or not found so get the local machine's site
  $computerSiteDN = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::GetComputerSite().GetDirectoryEntry().distinguishedName
 }
 if ( $computerSiteDN -eq $null ) {
  Throw "FATAL ERROR: Unable to get the AD site DN"
 }
 $returnData = @() # ensures that an array of server names is always returned
 # search the configuration partition for exchange servers
 $SearchTool = New-Object DirectoryServices.DirectorySearcher([ADSI]('LDAP://' + ([ADSI]'LDAP://RootDse').configurationNamingContext))
 $SearchTool.Filter = "(objectClass=msExchExchangeServer)"
 $ExchangeServers = $SearchTool.FindAll()
 # get the exchange servers that are in the local / specified AD Site
 $exchangeServersInSite = @($ExchangeServers | Where-Object { $_.Properties.msexchserversite -eq $computerSiteDN })
 if ( $VersionString ) {
  $exchangeServersInSite = @($exchangeServersInSite | Where-Object { $_.Properties.serialnumber.substring(0,12) -eq $VersionString })
 }
 $exchangeServersInSite = @($exchangeServersInSite | Select-Object @{name="name";expression={$_.Properties.name}} | Select-Object -ExpandProperty name)
 if ( $exchangeServersInSite.count -le 0 ) { # no servers found in the local / specified AD site so get the exchange servers from all the other sites
  $exchangeServersInSite = @($ExchangeServers | Where-Object { $_.Properties.msexchserversite -ne $computerSiteDN })
  if ( $VersionString ) {
   $exchangeServersInSite = @($exchangeServersInSite | Where-Object { $_.Properties.serialnumber.substring(0,12) -eq $VersionString })
  }
  $exchangeServersInSite = @($exchangeServersInSite | Select-Object @{name="name";expression={$_.Properties.name}} | Select-Object -ExpandProperty name)
 }
 if ( $exchangeServersInSite.count -le 0 ) {
  Throw "FATAL ERROR: Unable to find any Exchange servers"
 }
 if ( $InSiteOnly ) {
  # return just the exchange servers we have so far; if the specified site
  # had no servers this list already holds servers from all the other sites
  $returnData += $exchangeServersInSite | Get-Random -Count $(if ($exchangeServersInSite.count -le $maxNumberOfServers) { $exchangeServersInSite.count } else { $maxNumberOfServers })
 }
 else {
  if ( $maxNumberOfServers -le $exchangeServersInSite.count ) {
   # the number of servers requested can be delivered from the in site server list so return them
   $returnData += $exchangeServersInSite | Get-Random -Count $maxNumberOfServers
  }
  else {
   # we need more servers so add in additional servers from the other sites
   $exchangeServersNotInSite = @($ExchangeServers | Where-Object { $_.Properties.msexchserversite -ne $computerSiteDN })
   if ( $VersionString ) {
    $exchangeServersNotInSite = @($exchangeServersNotInSite | Where-Object { $_.Properties.serialnumber.substring(0,12) -eq $VersionString })
   }
   $exchangeServersNotInSite = @($exchangeServersNotInSite | Select-Object @{name="name";expression={$_.Properties.name}} | Select-Object -ExpandProperty name)
   $returnData += $($exchangeServersInSite + $($exchangeServersNotInSite | Get-Random -Count $(if ($exchangeServersNotInSite.count -le ($maxNumberOfServers - $exchangeServersInSite.count)) { $exchangeServersNotInSite.count } else { ($maxNumberOfServers - $exchangeServersInSite.count) })))
  }
 }
 if ( $returnData.count -le 0 ) {
  Write-Error "ERROR: No exchange Servers Returned for site: '$ADSiteName'"
 }
 Return ,$returnData
}            # Get-ExchangeServers            Version 2.00
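As a sketch of how I might call the function ( the parameter values and the remote PowerShell URI are just examples; the standard Exchange connection URI convention is assumed and may differ in your environment ):

```powershell
# Pick one Exchange 2016 server, preferring servers in the local AD site
$exchServer = Get-ExchangeServers -maxNumberOfServers 2 -Version "2016" | Select-Object -First 1

# Connect to it using a standard Exchange remote PowerShell session
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
                         -ConnectionUri "http://$exchServer/PowerShell/" -Authentication Kerberos
Import-PSSession $session | Out-Null
```

Because the function returns in-site servers first, a simple `Select-Object -First 1` gives you a nearby server without hard coding a name.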



ARS bugs and annoyances – Managed Unit not sorted

In reshaping my deployment of ARS 7 I’ve made extensive use of the dynamic objects ARS provides, i.e. Managed Units and Dynamic groups.  Both of these are defined by a set of membership rules.  In doing so I came across one limitation ( or bug ) and one annoyance.  I’d like these to be ‘Feature Requests’ for the next version of ARS.

  • The bug – objects in a Managed Unit are not sorted
  • The annoyance – You cannot rename the membership rules in a dynamic object

The bug ( although I suspect Quest / Dell / Quest / One Identity never thought about this ) is that if I use a Custom Include Query that displays the OUs below a target ‘searchRoot’, the OUs are not displayed in any order and there is no control over this.  E.g. if I target a users OU and under this OU there is an OU for each country, the MU displays the countries in a random order.  If you want to try this out, use this query as a membership rule: ‘(&(objectCategory=organizationalUnit)(street=DisplayOUInMU))’, where I tag each OU’s street attribute with either ‘DisplayOUInMU’ or ‘Don’tDisplayInMU’.  I also have a 3rd setting, ‘DisplayObjectsinMU’, which allows me to also display the objects in the OU in the MU.

I think that the MU should by default always sort the objects it displays in alphabetical order.  In case you were wondering why I don’t just add the OUs explicitly, there are two reasons: one, there are a lot of them and two, what if we add another country OU?  I wanted the MU to automatically pick it up.  I have a fix for this by the way: add the dynamic rule but also add the explicit OU objects that already exist in the OU that you want to display.  Any new OUs will get the correct ‘street’ attribute value, as I use an ARS policy to update the street attribute based on the parent OU.  The new OUs won’t be sorted, so you will need to go and update the MU membership rules, although now I am writing this I realise I could write an ARS policy script to automate that.  I’ll wait a little though, in case One Identity decide to add this feature / bug fix to the next ARS version.

The annoyance – you cannot rename the membership rules in a dynamic object.  This should be an easy thing to allow, in the same way as you can rename the PVG rules in an ARS Policy.  I have dynamic objects with 3 or even 4 ‘custom searches’; wouldn’t it be nice to be able to give these a meaningful name so you don’t have to open each one when you need to modify it?

How to locate your ARS servers using the service connection point

In case you need to know which servers to connect to using Connect-QADService, and you don’t want your script to contain hard coded domain information ( so the code is more portable, i.e. will run in any environment ), I came up with a script to locate the available ARS servers using the service connection points published into AD by the ARS servers.

Quest / Dell / Quest / One Identity still haven’t updated the path and use the product’s original name ( Enterprise Directory Manager ), so the SCPs are located here: CN=Enterprise Directory Manager,CN=Aelita,CN=System,<Domain DN>.  Version 6 didn’t have a port number in the SCP but ARS 7 does.  I don’t know if this is unique to my environment, as I was running the two services ( ARS 6.9 and 7.x ) in parallel, or if this was a hard coded change in ARS 7.  If you use this script and find the port is different in your environment let me know.

Call the function like this 

 $ARSServerFQDN = Get-ARS7Servers | select -First 1 

or without the select statement if you want to see all the servers.  You can then use this to control which server you are connecting to 

 connect-QADService -service $ARSServerFQDN -proxy
Function Get-ARS7Servers { # Version 2.00
 $searchRoot = "CN=Enterprise Directory Manager,CN=Aelita,CN=System,$([System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().GetDirectoryEntry() | Select-Object -ExpandProperty DistinguishedName)"
 Get-QADObject -SearchRoot $searchRoot -Type serviceConnectionPoint |
  Select-Object -ExpandProperty Name |
  Where-Object { $_.indexOf(":17228") -gt 0 } |
  Select-Object @{name='serverName';expression={$_.split(":")[0]}} |
  Select-Object -ExpandProperty serverName
}            # Get-ARS7Servers                Version 2.00

Documenting ARS delegated permissions

One of the killer reasons to use ARS is the ease with which you can answer the two auditor questions: who can manage that group / OU / user, and what can that user manage.

ARS provides another layer between the security principal and the AD object.  This layer is an Access Template.  An Access Template is a list of rights that can be delegated over objects.  You delegate rights to a security principal via the template, and if you update the template then all the links where the template is used are also updated.  Let’s say you create a telephone number template ( called ‘user-Telephone Numbers’ ) that allows security principals to edit the telephone number of user objects.  You delegate this right to the security principal ‘Telephony Admins’ by linking the security principal via the new ‘user-Telephone Numbers’ template to 3 different user OUs in AD.  Doing this natively in AD is a bit of a chore because you need to select the same rights on each OU location separately.  If you then wanted to change the rights to include, say, the mobile number attribute, in an AD only world you would first need to check where the original ‘right’ was delegated and then apply a second ACL to all the OUs; difficult, tedious and error prone for sure.  In an ARS managed environment you simply update the template and the rights change in all of the locations where you have used the template.

When it comes to answering the audit questions, just view the object you are interested in and select the Administration tab.  There are three buttons, but the interesting ones in the context of this blog are:

  • Security – shows who can administer the object
  • Delegation – shows what objects the user can administer


What if the auditor wants a document of the rights being delegated

They usually ask for screenshots, although I’m not sure why.  Anyway, I wanted a way to export this information into a CSV file so I could compare files later to see if anything had changed, and also to use as a backup allowing me to restore rights if they had changed.  If I get to send these reports to the auditors then that’s a bonus.

ARS includes a cmdlet, Get-QARSAccessTemplateLink, that makes this really easy to do:


There is a Trustee parameter that I suspect would make this faster, but I could not get it working so I just added a where clause into the pipeline.

Get-QARSAccessTemplateLink -Proxy |
 Where-Object { $_.Trustee.NTAccountName -eq "MyDomain\adminlandrews" } | # filter first, before Select-Object drops the Trustee property
 Select-Object DirectoryObjectDN,
               DN

Now you can pipe that into Export-Csv and you will have everything you need to show the auditors the rights delegated to a user.
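For example, a sketch of the full pipeline ( the trustee name and output path are just placeholders ):

```powershell
# Export one trustee's Access Template links to a CSV for the auditors
Get-QARSAccessTemplateLink -Proxy |
 Where-Object { $_.Trustee.NTAccountName -eq "MyDomain\adminlandrews" } |
 Select-Object DirectoryObjectDN, DN |
 Export-Csv -Path "C:\Reports\ARS-Delegations.csv" -NoTypeInformation
```

Running the same export later and comparing the two files ( e.g. with Compare-Object ) shows you what has changed.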

Remove the where clause and the report will include all trustees; it can then be manually filtered to show any rights delegated to or for objects in AD.

I have actually taken this a bit further and added a front end GUI to the commands using PrimalForms.  In about 4 lines of code that I wrote, rather than the 1000s written by PrimalForms, I have something that can:

  • Report on a trustee’s rights
  • Clone a trustee’s rights to another security principal
  • Remove a trustee’s rights
  • Replace a trustee with another trustee
  • Backup and restore settings applied to a security principal
  • Backup and restore all permissions

Oh and just one more thing…….. ARS admins don’t show up in any of the delegation reports as they have full access to everything, so you need to make sure you tell the auditors this fact, and give them a list of the ARS admins of course.


Are your functions standalone?

Are you sure you can take a function from one script, paste it into another and it will work?  Or do you find that your new script fails because a variable is set to null or does not exist at all?  If this happens it’s probably because, when you wrote the original script, you used variables inside the function that existed in the main script.  When you ‘transplanted’ the function into a new script it was not obvious that you also needed to grab the lines of code that instantiated those variable values, and so now the function does not work as expected, if at all.  The latter is probably the better outcome, as at least you know that the function needs to be fixed.  If it runs without errors then the silent failure probably means your new script is actually not working properly and you might never notice, e.g. a reporting script that does not show all the expected users because a variable is not instantiated.  This might lead to an audit failure when it turns out users still have access but your script is failing to highlight this.

$testVariable = 10 in the main script is the same as $script:testVariable = 10

It’s easy to use variables inside a function that were instantiated in the main script, because variables have script scope by default when created in the main script body.  A variable instantiated like this, $testVar = 10, in the main script is actually the same as typing $script:testVar = 10 and, for good measure, $local:testVar is of script scope there too ( confusing, a little, maybe ).
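A small sketch of this scoping behaviour ( variable names are just illustrative ):

```powershell
$testVar = 10                  # created in the main script body - script scope by default

Function Show-Scopes {
 Write-Host $testVar           # 10 - an unqualified read falls through to the script scope
 $testVar = 20                 # creates a NEW local copy; the script scope copy is untouched
 Write-Host $local:testVar     # 20 - the function's own local copy
 Write-Host $script:testVar    # 10 - the main script's variable, explicitly scoped
}
Show-Scopes
Write-Host $testVar            # still 10 - the function never changed the script scope variable
```

The trap is the middle line: assigning to an unqualified variable inside a function silently shadows the script scope copy, which is exactly why being explicit matters.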

Warning: contains explicit variables 🙂

Your function can choose to reference $script:testVariable or $testVariable, and sloppy coding could mean you get the wrong value if you also have a local scope variable of the same name in the function.  You really should be explicit when referencing a variable in a function.  If you want to use a script scope variable in the function you should either pass it in as a parameter or use $script:testVar within the function.  If you don’t do this then you might be heading for a debug nightmare, as carelessly ignoring the scope of variables will bite you one day for sure.  I guess technically we should always use the variable scope as part of the variable name, i.e. use $local:, $script: or $global: if we really want to be pedantic about it.  This might have been a useful strategy if using $local: in the main script meant the variable could not be used in any of the called functions, but alas this is not the case, so maybe this is just too much effort for little gain.

$script:ShowDebug and $script:Padright

I spend, probably too much, time making my scripts not only look nice when reading the code but also making the debug messages look ‘readable’ 🙂  I learnt a long time ago that debugging can be painful, so I now use a function to control my debug messages to screen.  My Write-Msg function can write messages to the screen, the event log and even to a log file, as well as highlight particular words in the message string.  This means that when I’m debugging, or ‘hand cranking’ the code as I like to say, I can see just the information that’s important in the mass of messages as they fly by on screen.  This beats the Write-Verbose or Write-Debug strategies hands down, which are all one colour and where you just can’t see what’s important.

For neatness I also like to force a common line length on every message displayed, which is a simple case of using the .padright method of a string object.  So that I can easily control the length of every message I use a variable, making it easy to change later, i.e. $padright = 150 and Write-Host “”.padright($padright,”-“) will draw a nice delimiter across the screen 150 characters wide.  If I update the variable then every Write-Host command is updated, as they all use $padright as the line length.

Using script scope variables in a function and setting them if they don’t exist

I use the same variable name in all my functions so that they can write debug messages, and I’ll either pass it as a parameter or I’ll just use the $script:padright variable.  Even better are the functions that accept the value as a parameter but, if the parameter is not supplied, not only use the script scope variable but create it if it does not exist.  That way, even if my default value in one function was 100 and in another 150, they all use the same value, depending on which function was called first.

Using this strategy for any non local variables in your functions guarantees that the function is portable.

Example usage

Function Test-Params {
 param (
  [parameter(Mandatory=$false,Position=0,HelpMessage = "Sets the debug message line padding to be applied - Default is 150")][int]$padright
 )
 if ( $padright -eq 0 ) { # no value passed in
  if ( Test-Path variable:script:padright ) { $padright = $script:padright } # use the existing script scope value
  else {
   # no script scope variable either, so set the default and create it for other functions to use
   $padright = 150
   $script:padright = $padright
  }
 }
 if ( $script:showDebug ) {
  Write-Host "".padright($padright,"-") -ForegroundColor Green -BackgroundColor Black
 }
}

PowerShell Profiles and why I don’t use them

The idea behind a PowerShell profile is that you can customise your PowerShell environment and have your system remember the setup the next time you open a PowerShell prompt / ISE.

What happens when you send your script to someone else?

It’s actually quite a cool idea and you can make sure all your PowerShell modules are loaded in the profile too.  The problem I see with this is that now your script has some hidden dependencies.  These are the modules etc. that you loaded into your PowerShell profile.

Unless the person you send the script to has the same modules loaded in their PowerShell profile, the script won’t work.

I find it better to just add the couple of lines to import the module into every script I write.  This makes sure that the script is portable, assuming of course that the modules are available on the user’s system where they are running the script.  You could of course write more code to download and install the module, but let’s not get carried away; just write a message to the screen explaining why the script won’t run and let them source the required modules.
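As a minimal sketch of those ‘couple of lines’ ( using the ARS module name from elsewhere in this post ):

```powershell
# Fail fast with a helpful message rather than relying on a profile to load the module
if ( -not ( Get-Module -ListAvailable -Name ActiveRolesManagementShell ) ) {
 Write-Host "This script requires the ActiveRolesManagementShell module - please install the ARS Management Shell"
 exit 1
}
Import-Module -Name ActiveRolesManagementShell -ErrorAction Stop
```

This keeps the dependency visible in the script itself instead of hidden in a profile.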

I use this function in my scripts and then handle the return value in my main script


function Get-ModuleStatus { # Version 2.00
 param (
  [parameter(Mandatory=$true , HelpMessage="Enter the Module Name, e.g. ActiveRolesManagementShell")]
  [string]$name,
  [parameter(Mandatory=$false, HelpMessage="Optionally Enter the Version Number, e.g. 7.2")]
  [string]$version,
  [parameter(Mandatory=$false, HelpMessage="When present the exact version specified must be loaded rather than that version or higher")]
  [switch]$forceVersion # parameter reconstructed from the function body
 )
 if ( $version ) {
  if ( $forceVersion ) {
   if ( $module = Get-Module -Name $name | Where-Object { $_.Version -eq [version]$version } ) { Return $true }
  }
  else {
   if ( $module = Get-Module -Name $name | Where-Object { $_.Version -ge [version]$version } ) { Return $true }
  }
  if ( $module = Get-Module -Name $name ) {
   # wrong version loaded so unload it
   Remove-Module -Name $name
   $module = $null
  }
 }
 elseif ( $module = Get-Module -Name "$name" ) { Return $true }
 if ( $version ) {
  try { Import-Module -Name $name -MinimumVersion $version -ErrorAction Stop | Out-Null }
  catch { return $false }
 }
 else {
  try { Import-Module -Name "$name" -ErrorAction Stop }
  catch { return $false }
 }
 Return $true
}           # Get-ModuleStatus           Version 2.00

Here is an example of how to call the function and handle the error:

if ( ( Get-ModuleStatus "ActiveRolesManagementShell" ) -eq $false ) { # load the quest cmdlets
 # $emailParameters, $scriptName and Stop-ScriptRun are defined elsewhere in my script framework
 $message = "ActiveRolesManagementShell could not be loaded SCRIPT HALTING on $($Env:COMPUTERNAME) - Please investigate"
 $emailParameters.Add("Subject","$scriptName Script FATAL ERROR - Unable to Load ARS COMMANDLETS on $($Env:COMPUTERNAME)")
 Stop-ScriptRun -emailParameters $emailParameters -sendMail -stop -throwMessage "FATAL ERROR - Unable to Load ARS COMMANDLETS"
} # throw if we are unable to load the Quest cmdlets