How to locate your ARS servers using the service connection point

If you need to know which servers to connect to with Connect-QADService and you don’t want hard-coded domain information in your script (so the code is portable and will run in any environment), here is a script I came up with that locates the available ARS servers using the service connection points (SCPs) the ARS servers publish into AD.

Quest / Dell / Quest / One Identity still haven’t updated the path and it still uses the product’s original name (Enterprise Directory Manager), so the SCPs are located here: CN=Enterprise Directory Manager,CN=Aelita,CN=System,<Domain DN>. Version 6 didn’t include a port number in the SCP name but ARS 7 does.  I don’t know if this is unique to my environment, as I was running the two services (ARS 6.9 and 7.x) in parallel, or whether this was a hard-coded change in ARS 7.  If you use this script and find the port is different in your environment, let me know.

Call the function like this:

 $ARSServerFQDN = Get-ARS7Servers | select -First 1 

or without the select statement if you want to see all the servers.  You can then use this to control which server you are connecting to:

 connect-QADService -service $ARSServerFQDN -proxy
Function Get-ARS7Servers { # Version 2.00
 # The SCPs are still published under the product's original 'Enterprise Directory Manager' name
 $searchRoot = "CN=Enterprise Directory Manager,CN=Aelita,CN=System,$([System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().GetDirectoryEntry() | Select-Object -ExpandProperty DistinguishedName)"
 # ARS 7 SCP names look like '<serverFQDN>:17228', so filter on the port and return just the server names
 Get-QADObject -SearchRoot $searchRoot -Type serviceConnectionPoint | Select-Object -ExpandProperty Name |
  Where-Object { $_.IndexOf(":17228") -gt 0 } | ForEach-Object { $_.Split(":")[0] }
}            # Get-ARS7Servers                Version 2.00
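
If you want a simple fallback rather than always taking the first server, something along these lines works (just a sketch; it tries each published server in turn until a connection succeeds):

$connected = $false
ForEach ( $ARSServerFQDN in Get-ARS7Servers ) {
 try {
  Connect-QADService -Service $ARSServerFQDN -Proxy -ErrorAction Stop | Out-Null
  $connected = $true
  break   # stop at the first server that answers
 }
 catch { Write-Host "Unable to connect to $ARSServerFQDN - trying the next server" }
}
if ( -not $connected ) { throw "No ARS servers could be contacted" }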

Documenting ARS delegated permissions

One of the killer reasons to use ARS is the ease with which you can answer the two auditor questions: who can manage that group / OU / user, and what can that user manage?

ARS provides another layer between the security principal and the AD object: an Access Template.  An Access Template is a list of rights that can be delegated over objects.  You delegate rights to a security principal via the template, and if you update the template then every link where the template is used is updated too.  Let’s say you create a telephone number template (called ‘user-Telephone Numbers’) that allows security principals to edit the telephone number of user objects.  You delegate this right to the security principal ‘Telephony Admins’ by linking the principal, via the new ‘user-Telephone Numbers’ template, to 3 different user OUs in AD.  Doing this natively in AD is a bit of a chore because you need to select the same rights on each OU separately.  If you then wanted to change the rights to include, say, the mobile number attribute, in an AD-only world you would first need to check where the original right was delegated and then apply a second ACL to all the OUs, which is difficult, tedious and error prone for sure.  In an ARS-managed environment you simply update the template and the rights change on every location where you have used the template.

When it comes to answering the audit questions, just view the object you are interested in and select the Administration tab.  There are three buttons, but the interesting ones in the context of this blog are:

  • Security – shows who can administer the object
  • Delegation – shows what objects the user can administer

[Screenshot: the Administration tab]

What if the auditor wants a document of the rights being delegated?

They usually ask for screenshots, although I’m not sure why.  Anyway, I wanted a way to export this information into a CSV file so I could compare files later to see if anything had changed, and also to use as a backup allowing me to restore rights if they had changed.  If I get to send these reports to the auditors then that’s a bonus.

ARS includes a commandlet that will make this really easy to do:

Get-QARSAccessTemplateLink

There is a -Trustee parameter that I suspect would make this faster, but I could not get it working, so I just added a Where clause to the pipeline.

Get-QARSAccessTemplateLink -Proxy |
 select DirectoryObjectDN,
        AppliedTo,
        Trustee,
        TemplateDN,
        SynchronizedToAD,
        DN |
 where { $_.Trustee.NTAccountName -eq "MyDomain\adminlandrews" }

Now you can pipe that into Export-Csv and you will have everything you need to show the auditors the delegated rights given to a user.
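
For example (the output path is just an illustration; I also flatten the Trustee object to its NTAccountName so the CSV is readable):

Get-QARSAccessTemplateLink -Proxy |
 where { $_.Trustee.NTAccountName -eq "MyDomain\adminlandrews" } |
 select DirectoryObjectDN,
        AppliedTo,
        @{name='Trustee';expression={$_.Trustee.NTAccountName}},
        TemplateDN,
        SynchronizedToAD,
        DN |
 Export-Csv -Path 'C:\Temp\ARS-DelegatedRights.csv' -NoTypeInformation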

Remove the Where clause and the report will include all trustees; it can then be manually filtered to show any rights delegated to a trustee or over particular objects in AD.
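
Since the whole point of keeping the CSV files was to compare them later and spot changes, Compare-Object does the job (file names are illustrative):

$old = Import-Csv 'C:\Temp\ARS-DelegatedRights-LastMonth.csv'
$new = Import-Csv 'C:\Temp\ARS-DelegatedRights.csv'
# anything returned here is a template link that was added or removed between the two exports
Compare-Object -ReferenceObject $old -DifferenceObject $new -Property DirectoryObjectDN, Trustee, TemplateDN, AppliedTo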

I have actually taken this a bit further and added a front-end GUI to the commands using PrimalForms.  In about 4 lines of code that I wrote (rather than the 1000s written by PrimalForms) I have something that can:

  • Report on a trustee’s rights
  • Clone a trustee’s rights to another security principal
  • Remove a trustee’s rights
  • Replace a trustee with another trustee
  • Backup and restore settings applied to a security principal
  • Backup and restore all permissions

Oh and just one more thing…….. ARS admins don’t show up in any of the delegation reports as they have full access to everything, so make sure you tell the auditors this fact and give them a list of the ARS admins, of course.

 

Are your functions standalone?

Are you sure you can take a function from one script, paste it into another and it will work?  Or do you find that your new script fails because a variable is null or does not exist at all?  If this happens it’s probably because, when you wrote the original script, the function used variables that existed in the main script; when you ‘transplanted’ the function into a new script it was not obvious that you also needed to grab the lines of code that set those variable values, so now the function does not work as expected, if at all.  The latter is probably the better outcome, as at least you know the function needs to be fixed.  If it runs without errors then the silent failure probably means your new script is not working properly and you might never notice, e.g. a reporting script that does not show all the expected users because a variable was never set.  That might lead to an audit failure when it turns out users still have access but your script is failing to highlight it.

$testVariable = 10 in the main script is the same as $script:testVariable = 10

It’s easy to use variables inside a function that were instantiated in the main script, because variables created in the main script body have script scope by default.  A variable instantiated like this, $testVar = 10, in the main script is actually the same as typing $script:testVar = 10, and for good measure $local:testVar in the main script body is of script scope too (confusing? a little, maybe).

Warning: contains explicit variables 🙂

Your function can reference $script:testVariable or $testVariable, and sloppy coding could mean you get the wrong value if you also have a local variable of the same name in the function.  You really should be explicit when referencing a variable in a function: if you want to use a script-scope variable, either pass it in as a parameter or use $script:testVar within the function.  If you don’t, you might be heading for a debugging nightmare, as carelessly ignoring variable scope will bite you one day for sure.  I guess technically we should always use the scope as part of the variable name, i.e. use $local:, $script: or $global: if we really want to be pedantic about it.  That might have been a useful strategy if using $local: in the main script meant the variable could not be used in any of the called functions, but alas that is not the case, so maybe it’s just too much effort for little gain.
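
A minimal illustration of the shadowing problem (the variable names are just for demonstration):

$testVar = 10                        # created in the main script body, so it has script scope

Function Show-Scope {
 $testVar = 99                       # a local variable that shadows the script-scope one
 Write-Host "Local value  : $testVar"              # 99 - the local copy
 Write-Host "Script scope : $($script:testVar)"    # 10 - always the main script's value
}

Show-Scope
Write-Host "Back in the main script: $testVar"      # still 10 - the function only changed its local copy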

$script:ShowDebug and $script:Padright

I spend, probably, too much time making my scripts not only look nice when reading the code but also making the debug messages look ‘readable’ 🙂  I learnt a long time ago that debugging can be painful, so I now use a function to control my debug messages to screen.  My Write-Msg function can write messages to the screen, the event log and even a log file, as well as highlight particular words in the message string.  This means that when I’m debugging, or ‘hand cranking’ the code as I like to say, I can see just the information that’s important in the mass of messages as they fly by on screen.  This beats Write-Verbose or Write-Debug strategies hands down, which are all one colour so you just can’t see what’s important.

For neatness I also like to force a common line length on every message displayed, which is a simple case of using the .PadRight method of a string object.  So that I can easily control the length of every message I use a variable, which makes it easy to change later, i.e. $padright = 150, and Write-Host “”.PadRight($padright,”-“) will draw a nice delimiter across the screen 150 characters wide.  If I update the variable then every Write-Host command is updated as they all use $padright as the line length.
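
Something along these lines (a much simplified sketch of the idea, not my full Write-Msg function; the highlight logic here is the bare minimum):

$script:padright = 150    # one place to control the width of every debug line

Function Write-Msg {
 param ( [string]$message, [string]$highlight )
 $colour = 'Gray'
 if ( $highlight -and $message -match $highlight ) { $colour = 'Yellow' }   # make the important messages stand out
 Write-Host $message.PadRight($script:padright) -ForegroundColor $colour
}

Write-Host "".PadRight($script:padright,"-") -ForegroundColor Green   # delimiter line across the screen
Write-Msg -message "Processing user adminlandrews" -highlight "admin"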

Using script scope variables in a function and setting them if they don’t exist

I use the same variable name in all my functions so that they can write debug messages, and I’ll either pass it in as a parameter or just use the $script:padright variable.  Even better are the functions that accept the value as a parameter but, if the parameter is not supplied, not only use the script-scope variable but create it if it does not exist.  That way, even if my default value in one function was 100 and in another 150, they all end up using the same value depending on which function was called first.

Using this strategy on any non local variables in your functions guarantees that the function is portable.

Example usage

Function Test-Params { # demonstrates defaulting a script-scope variable inside a function
 param (
  [parameter(Mandatory=$false,Position=0,HelpMessage = "Sets the debug message line padding to be applied - Default is 150")]
  [int]$padright
 )
 if ( $padright -eq 0 ) {                                                       # no value was passed in
  if ( Test-Path variable:script:padright ) { $padright = $script:padright }    # use the script-scope value if it exists
  else {
   $padright = 150                                                              # otherwise fall back to the default
   $script:padright = $padright                                                 # and create the script-scope variable for the other functions
  }
 }
 if ( $showDebug ) {
  Write-Host "".padright($padright,"-") -ForegroundColor Green -BackgroundColor Black
 }
}

PowerShell Profiles and why I don’t use them

The idea behind a PowerShell profile is that you can customise your PowerShell environment and have your system remember the setup the next time you open a PowerShell prompt / ISE.

What happens when you send your script to someone else?

It’s actually quite a cool idea, and you can make sure all your PowerShell modules are loaded in the profile too.  The problem I see with this is that your script now has hidden dependencies: the modules etc. that you loaded into your PowerShell profile.

Unless the person you send the script to has the same modules loaded in their PowerShell profile, the script won’t work.

I find it better to just add the couple of lines to import the module into every script I write.  This makes sure the script is portable, assuming of course that the modules are available on the user’s system where they are running the script.  You could of course write more code to download and install the module, but let’s not get carried away; just write a message to the screen explaining why the script won’t run and let them source the required modules.

I use this function in my scripts and then handle the return value in my main script:

 

function Get-ModuleStatus { # Version 2.00
 [CmdletBinding()]
 param (
  [parameter(Mandatory=$true , HelpMessage="Enter the Module Name, e.g. ActiveRolesManagementShell")]
  [string]$name,
  [parameter(Mandatory=$false, HelpMessage="Optionally Enter the Version Number, e.g. 7.2")]
  [string]$Version,
  [switch]$forceVersion
 )
 if ( $version ) {
  if ( $forceVersion ) {
   # exact version match required
   if ( $module = Get-Module -Name $name | Where-Object { $_.Version -eq [version]$Version } ) { Return $true }
  }
  else {
   # any loaded version at or above the requested one will do
   if ( $module = Get-Module -Name $name | Where-Object { $_.Version -ge [version]$Version } ) { Return $true }
  }
  if ( $module = Get-Module -Name $name ) {
   # wrong version loaded so unload it
   Remove-Module -Name $name
   $module = $null
  }
 }
 elseif ( $module = Get-Module -Name "$name" ) { Return $true }
 # not loaded (or wrong version) - try to import it; -ErrorAction Stop makes a failed import catchable
 if ( $version ) {
  try { Import-Module -Name $name -MinimumVersion $version -ErrorAction Stop | Out-Null }
  catch { return $false }
 }
 else {
  try { Import-Module -Name "$name" -ErrorAction Stop }
  catch { return $false }
 }
 Return $true
}           # Get-ModuleStatus           Version 2.00

Here is an example of how to call it and handle the error:

if ( ( Get-ModuleStatus "ActiveRolesManagementShell" ) -eq $false ) { # load the quest cmdlets
 $message = "ActiveRolesManagementShell could not be loaded SCRIPT HALTING on $($Env:COMPUTERNAME) - Please investigate"
 $emailParameters.Add("body",$message)
 $emailParameters.Add("Subject","$scriptName Script FATAL ERROR - Unable to Load ARS COMMANDLETS on $($Env:COMPUTERNAME)")
 Stop-ScriptRun -emailParameters $emailParameters -sendMail -stop -throwMessage "FATAL ERROR - Unable to Load ARS COMMANDLETS"
} # throw if we are unable to load the Quest cmdlets
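
If you don’t have the email and Stop-ScriptRun plumbing from my scripts, a minimal version of the same check is simply:

if ( ( Get-ModuleStatus "ActiveRolesManagementShell" ) -eq $false ) {
 Write-Host "ActiveRolesManagementShell could not be loaded on $($Env:COMPUTERNAME) - please install the ARS Management Shell" -ForegroundColor Red
 exit 1   # stop the script here, nothing below will work without the cmdlets
}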

Forcing Admins to do the right thing

No, I don’t mean enrolling them in the ‘Guido school of Admins‘; I mean getting them to do the right thing without even telling them a thing, by preventing bad behaviour using features of ActiveRoles Server.

Moving Objects around in AD can cause major issues!

Moving objects around in AD can cause major issues, especially with LDAP-based applications, which always seem to hard code the object DNs.

ARS Managed Units can present a virtualised OU structure – Bruce Lee might have referred to this by saying  “The art of Moving objects without moving objects”  or was it the “Art of fighting without fighting” – Enter the Dragon.

ARS Managed Units allow you to virtualise an AD OU Structure making it simpler to delegate rights and also provides a more logical view of AD and of course a less cluttered view for the admins as they only see the objects they need to manage.

Another advantage is that administrators cannot create new objects directly in a Managed Unit.  A Managed Unit can contain an OU, but you can’t create objects in the Managed Unit itself.

What this means is that you can prevent an administrator from creating a new parent OU, and more importantly they can only move objects between the OUs you expose, i.e. where possible they can move an object from the ‘incorrect’ location to the correct OU.

You could achieve similar results with a complex delegation model, but let’s stop for a minute: the AD is already a mess and you can’t move the objects around for fear of breaking things, so here’s a solution that makes them appear in the right place when they are not.

The solution then is to create a Managed Unit that has a membership rule that includes only the immediate child objects of the targeted OU.

This simple LDAP query achieves this:

(&(objectCategory=organizationalUnit)(street=DisplayOUinMU))

All I need to do now is set the street attribute on all the objects in the target OU using a simple PowerShell script, and create an ARS policy that ensures any objects created in or moved to the OU get this value too, which is a very simple PVG policy.  The policy sets the attribute from the parent OU.
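
A rough sketch of the tagging script (the OU path is an illustration; I limit it to OUs because that is what the membership rule query above matches):

$targetOU = "OU=MyTargetOU,DC=MyADDomain,DC=com"   # the OU whose immediate children should appear in the Managed Unit

# tag the immediate child OUs so the Managed Unit membership rule picks them up
Get-QADObject -SearchRoot $targetOU -SearchScope OneLevel -Type organizationalUnit -SizeLimit 0 |
 Set-QADObject -ObjectAttributes @{ street = 'DisplayOUinMU' }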

Then I set some additional membership rules to display any objects that are not in the target OU, i.e. are in the wrong OU path, and Bob’s your uncle.

Now when an admin user looks, they see all the objects in the right location, almost anyway 🙂  The objects in the target OU (the Managed Unit with the same name as the target OU) are mostly from the wrong locations, BUT the admin can no longer create more objects in that location, only in the correct OUs exposed via the MU.

This is driving the correct behaviour with little management or even training overhead.  Without this I found that admins, despite being told time and time again, would create more objects in the wrong OUs because related groups or service accounts were already in the wrong OU location.

Putting objects in an OU ‘just because’ is up there with cloning a user object for a new start. ARRRRGGGHHHHH, don’t do it!  (Please start working on a role-based design today.)

I’ve requested a product enhancement with One Identity (Quest) as I think it would be good to have this as an include rule, in a similar way to the include group members rule.  Then you would not need the PVG policy to set the attribute; you would just include the immediate child objects of the targeted OU.  Even more useful would be the ability to filter that further to specific object types, e.g. just the users, or the groups, or perhaps just the OUs.  Maybe they will include it in the next release, you never know.

ARS policy not working on Deprovisioned accounts

I just discovered, well actually another One Identity forum user (a One Identity employee) discovered, how to get ARS policies to apply to deprovisioned accounts.

I’d thought for ages that all the policies stopped working when an account was deprovisioned. Generally this has been fine and I’ve not been bothered by it, but a couple of times the SD (Service Desk) have enabled a deprovisioned account.

The problem now is that none of the ARS policies work on this account, or so I thought, and this can be a big problem, especially if the account in question is a personal functional account that used to be linked to a personal employee account. As part of the leaver process all accounts related to the personal employee account are disabled and deprovisioned together, and most of the attributes are cleared. Enable that account and it’s no longer linked to the HR data feed, i.e. the JLT process is broken. As none of the policies apply, any restriction or automation also does not work on this account, and of course the deprovision option can’t be selected as it’s already been deprovisioned.

It would be a good feature if ARS prevented an account from being enabled while it is in the deprovisioned state. We could of course use an ARS policy to do this, but wait, ARS policies don’t work on deprovisioned accounts. Or do they?

It turns out that a workflow works regardless of the deprovision status, and a deprovisioning policy also works on deprovisioned accounts. I always wondered why there is a provisioning and a deprovisioning choice when creating a new policy; now I know. Clearly there are overheads in watching every change to objects in AD, so I’m guessing that to reduce the load they decided that in most instances you wouldn’t want to trigger an ARS policy on a deprovisioned account, so they created two policy types. Provisioning policies only work on accounts where the edsvaDeprovisionStatus attribute is not present, and deprovisioning policies work on both.

This actually opens up a whole new way of thinking about which policies you want to apply and when.

I already have a script that prevents the SD enabling an account if it was disabled by the HR data feed. The quick fix then is to recreate that as a deprovisioning policy, and now the SD can’t enable the account in any way other than by undoing the deprovision. Then all the required identity attributes are restored, all the ARS policies apply and everyone is happy, for a while at least 🙂

By the way, did you also know that by default Managed Units do not contain deprovisioned users? There is a membership rule, right at the bottom of the list, ‘Maintain deprovisioned Users’, that adds them back in.

I think the only things that can’t be configured to work on deprovisioned accounts are group family and dynamic groups, and I see no reason why I would want to override that rule anyway.

Extracting Photos from AD

This post is actually about the ‘DontConvertValuesToFriendlyRepresentation’ switch on Get-QADUser, but I came across it while trying to extract photos from AD, so that’s how I ended up naming the post ‘Extracting Photos from AD’, as most people will probably be searching for that and not the command-line switch.

Getting the photo from AD is pretty simple, but there are a couple of things to know. When you upload a photo to AD it’s converted from a JPEG to an array of bytes, so you can’t just download it, you have to convert it back. The Quest commandlets are helpful and convert lots of the raw data stored in AD into more readable formats. What this means is that sometimes the help is more of a hindrance, because the value you wanted for the photo has been converted, and then the byte conversion ‘[System.Io.File]::WriteAllBytes( $Filename,$photoAsBytes )’ fails.

There are two solutions to this. The first is to access the directory entry directly, like this: ‘$user.DirectoryEntry.thumbnailPhoto.Value’. The second is to tell the commandlet not to convert the values by using the ‘DontConvertValuesToFriendlyRepresentation’ switch.

And as I was comparing the speed of the AD commandlets, I extracted the thumbnailPhoto attribute using both the AD and the Quest commandlets. The AD commandlets are faster, but not by much as long as you use ‘-DontUseDefaultIncludedProperties’. The Quest commandlets pull down lots of attributes by default, which is why they take longer, so when getting lots of AD objects it’s worth using this switch too.


cls
$ldapFilter = "(&(employeeID=*)(sAMAccountType=805306368)(thumbnailPhoto=*)(!(|(userAccountControl:1.2.840.113556.1.4.803:=2))))"
$searchRoot = "OU=User Accounts,DC=MyADDomain,DC=com"
$useADCommandlets = $false 
$sizelimit = 0
$OutputPath = 'c:\Temp\Photos'
Function ConvertTo-Jpeg {
 param ($userName,$photoAsBytes,$path='c:\temp')
 if ( ! ( Test-Path $path ) ) { New-Item $path -ItemType Directory }
 $Filename="$($path)\$($userName).jpg"
 [System.Io.File]::WriteAllBytes( $Filename,$photoAsBytes )
}

if ( $useADCommandlets ) {
 #Import-Module ActiveDirectory
 $Users = GET-ADUser -LDAPFilter $ldapFilter  -Properties thumbnailPhoto # | select -First $sizelimit # remove the select to get all users 
 ForEach ( $User in $Users ) {
  ConvertTo-Jpeg -userName $user.SamAccountName -photoAsBytes $user.thumbnailPhoto -path $OutputPath 
 }
}
else {
 $Users = get-qaduser  -LdapFilter $ldapFilter -SearchRoot $searchRoot -DontUseDefaultIncludedProperties -DontConvertValuesToFriendlyRepresentation  -IncludedProperties thumbnailphoto -SizeLimit $sizelimit   # set sizelimit to 0 to get all users
 ForEach ( $User in $Users ) {
  #ConvertTo-Jpeg -userName $user.SamAccountName -photoAsBytes $user.DirectoryEntry.thumbnailPhoto.Value -path $OutputPath # if you didn't use the -DontConvertValuesToFriendlyRepresentation switch 
  ConvertTo-Jpeg -userName $user.SamAccountName -photoAsBytes $user.thumbnailPhoto -path $OutputPath
 }
}