Uploading a JPEG into AD

This final function will upload a suitably sized JPEG into AD. It’s not quite as simple as loading the JPEG into the thumbnail attribute: you have to byte-encode it first. This is, however, as easy as casting the variable to an array of bytes ( variable type [byte[]] ) and letting PowerShell do the heavy lifting for you, as always.

The only two lines of code you really need are [byte[]]$jpg = Get-Content $jpegFile -Encoding Byte and Set-QADUser -Identity $guid -ObjectAttributes @{thumbnailPhoto=$jpg}

I’ve used the GUID to identify the target user, but you can obviously use anything in your version of the function. The function also writes to a log file for you.

This was quite an early script for me and I’ve not made the parameters mandatory or error-checked them; see my other posts on creating functions ( this one probably breaks all my recommendations 🙂 ). The function just assumes the parameters are always going to be correct. I’d recommend you add some error handling, and these functions are probably due a rewrite. As the three functions I just posted are currently part of a much larger script, I don’t have to worry about the error handling here because I know what’s being passed to each function. It’s still good practice to add error handling to all of your functions, though; then you can just copy and paste them into other scripts safe in the knowledge that they will work and deal with all errors.
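As a rough idea of the kind of validation I’d add today, here’s a minimal sketch; the function name Test-ThumbnailInput is made up for illustration, it just shows mandatory parameters plus a ValidateScript check rather than the author’s actual approach:

```powershell
# Hypothetical example only - reject bad inputs before doing any work
function Test-ThumbnailInput {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [ValidateScript({ Test-Path -Path $_ -PathType Leaf })]  # the jpeg file must exist
        [string]$jpegFile,

        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [string]$guid
    )
    $true  # both parameters passed validation
}
```

With this in place a missing file or empty GUID fails at parameter binding, before any AD call is attempted.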

Function Update-Thumbnail {
 param (
  [string]$jpegFile,
  [string]$guid,
  [string]$sAMAccountName,
  [string]$logfile
 )
# check if the GUID was passed as a parameter - if not we can't upload the picture
 if ( (!($guid)) -or ( $guid.Length -le 0) ) {Return $false	}
# tidy up any leftover $jpg variable from a previous run
 if ($jpg) { if ($jpg -is [IDisposable]) {try {$jpg.Dispose() | Out-Null } catch{}}; Remove-Variable -Name jpg -ErrorAction SilentlyContinue | Out-Null }
# convert the picture file - byte encoded
 Try { [byte[]]$jpg = Get-Content $jpegFile -Encoding Byte -ErrorAction Stop } Catch{}
 If ( $jpg )	{
  $logAction = " thumbnail updated "
  # -ErrorAction Stop is needed here, otherwise the Catch block never fires
  Try { Set-QADUser -Identity $guid -ObjectAttributes @{thumbnailPhoto=$jpg} -ErrorAction Stop | Out-Null } Catch { $logAction = " thumbnail failed to update " }
  $($(Get-Date).ToString() + " " + $(RPAD -stringToPad $sAMAccountName -paddedLength 20) + $logAction) | Out-File $logfile -Append -Encoding default
 }
 }
 if ($jpg) { if ($jpg -is [IDisposable]) {try {$jpg.Dispose() | Out-Null } catch{}}; Remove-Variable -Name jpg -ErrorAction SilentlyContinue | Out-Null }
 Return $true
}

Umlauts, what umlauts?

My company, being international, has a lot of employees whose names include umlauts. Exchange doesn’t support these, so they automatically get converted. I needed to do the same when creating logon names for the users.

Using a naming standard for logon names based on the employee’s name is a disaster waiting to happen, but everyone seems to do it.

I have to be careful not to go off on a tangent here, but I once wrote a paper on naming standards and why we should not derive user names from the users’ actual names.  In short, apart from the hilarious or outright rude names that can be generated when using a rule to translate a user’s name into a logon name, this solution just isn’t scalable!

A user name should be unique across domains, so that really means using a numeric logon name that will never change.

My recommendation was to use the employeeID or some other arbitrary number that I could guarantee was unique and would never clash with any other user names, especially those of another AD when we merge with or acquire another company.  Line management just didn’t understand this; well, they did understand it but decided to ignore it, as they preferred using their names to log on :-).  We still create user names today based on the users’ real names, although I never knew anyone called jones2 or jones3, and it’s funny how the directors never have numbers in their logon names 🙂 .

I’d be happy to post my document if someone were actually interested, and whilst I had no success in getting support for the naming standard, you might be luckier than me.

Back to the real post: converting umlauts

Anyway, umlauts!  It’s not a good idea to use these in a user’s logon name, the CN, or anything really, as to find these users you would need to search using the umlauts, which is going to be a little awkward with a UK keyboard.

I use a script to create users automatically in AD based on an HR feed.  The username is created using the initials and the last name.  The data feed includes the first and last names with umlauts, which is a problem.  This little function will convert the umlauts to the same characters as Exchange does.

function Convert-DiacriticCharacters {
 param ( [string]$inputString )
 $inputString = $inputString.Replace("ß","ss")
 $inputString = $inputString.Replace("ä","ae")
 $inputString = $inputString.Replace("Ä","Ae")
 # $inputString = $inputString.Replace("ë","ee") # exchange appears to convert this to e
 $inputString = $inputString.Replace("ö","oe")
 $inputString = $inputString.Replace("Ö","Oe")
 $inputString = $inputString.Replace("ü","ue")
 $inputString = $inputString.Replace("Ü","Ue")
 [string]$formD = $inputString.Normalize( [System.text.NormalizationForm]::FormD )
 $stringBuilder = new-object System.Text.StringBuilder
 for ($i = 0; $i -lt $formD.Length; $i++){
  $unicodeCategory = [System.Globalization.CharUnicodeInfo]::GetUnicodeCategory($formD[$i])
  $nonSpacingMark = [System.Globalization.UnicodeCategory]::NonSpacingMark
  if($unicodeCategory -ne $nonSpacingMark){ $stringBuilder.Append($formD[$i]) | out-null }
 }
 $stringBuilder.ToString().Normalize([System.text.NormalizationForm]::FormC)
} # end function Convert-DiacriticCharacters
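To see what the FormD pass at the end of the function is doing on its own: it decomposes each accented character into a base letter plus a combining mark, then drops the marks. On its own it would turn ö into plain o, which is why the explicit umlaut replacements run first. A minimal sketch:

```powershell
# Strip combining accents using Unicode FormD decomposition
$formD = "Renée Ångström".Normalize([System.Text.NormalizationForm]::FormD)
$sb = New-Object System.Text.StringBuilder
foreach ($ch in $formD.ToCharArray()) {
    # keep everything except the combining (non-spacing) marks
    if ([System.Globalization.CharUnicodeInfo]::GetUnicodeCategory($ch) -ne
        [System.Globalization.UnicodeCategory]::NonSpacingMark) {
        [void]$sb.Append($ch)
    }
}
$result = $sb.ToString().Normalize([System.Text.NormalizationForm]::FormC)
$result   # Renee Angstrom
```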

Find the ARS Scheduled Task Script Policy Name

I often write some useful info into a log file at the beginning of a script, as this helps me debug later on. When I send out emails I also like to include the name of the script that was called. Rather than hard-code this, you can get the script name using this line of code.

$thisScript = $($MyInvocation.MyCommand).Name # holds the script name for use in logging messages

I noticed that when I used the script as an ARS policy script and called it from an ARS scheduled task, this returned what looked like a GUID rather than the script name, which wasn’t a lot of use in my log file or email report.

Taking a quick look at the scheduled task’s attributes, I could see a reference to a very similar-looking GUID, but it was prefixed with an “s”. Then I looked at the attributes of the script policy, and its name attribute was the GUID (without the “s” prefix).

I did a quick test using Get-QADObject and found that the line of code below returned the name of the script policy when given the GUID with the “s” stripped off:

$thisScript = $(Get-QADObject $thisScript.substring(1) -Proxy ).name

As I debug my ARS policy scripts from the PowerGUI editor, I needed a way of working out whether my script was being run manually within the editor (or from the command line) or as an ARS scheduled task.   The solution I came up with was to set a script-scope variable, $Script:taskDNDefault, to a known value and then, in the Get-TaskParameters function, set it to something else if the script is being called as a scheduled task, like this:

In the main script, set $taskDN to the expected task object location in ARS; this lets you run the script from the command line and still read the task parameters rather than manually feeding them to the script.

$taskDN = "CN=ScriptPolicyName,CN=Audit Tasks,CN=MyDomain,CN=Scheduled Tasks,CN=Server Configuration,CN=Configuration"

function Get-TaskParameters {
try {
 # running as an ARS scheduled task - ARS provides the $Task object
 $Task.DirObj.GetInfo()
 $Task.DirObj.GetInfoEx(@("edsaParameters"),0)
 $strParameters = $Task.DirObj.Get("edsaParameters")
 $Script:taskDNDefault = "Scheduled Task..: $($Task.DirObj.Get("distinguishedName")). `n"
}
catch {
 # running manually - read the task object directly from ARS
 $Task = Get-QADObject -Identity $TaskDN -Proxy -IncludedProperties edsaParameters
 $strParameters = $Task.edsaParameters
}

$strParameters = "" + $strParameters + ""  # force the result to a string
if( $Task ){ if ( $Task -is [IDisposable] ){ try { $Task.Dispose() } catch{} } }; Remove-Variable -Name Task -ErrorAction SilentlyContinue
return $strParameters
}

Set $taskDNDefault to some known value, e.g. “Default”, then call the Get-TaskParameters function to get the parameters. As part of that function, if it’s running as a scheduled task, the $taskDNDefault variable will be set to the DN of the task; see the try {} section of code and how it extracts the distinguished name from the $Task object.  If you are running the script from the command line this will fail, the catch {} section will run instead, and that section of code does not update the $taskDNDefault variable.

Then when we write the log entry we can simply check the value of $taskDNDefault, and if it’s not “Default” we use our extra bit of code to find the script name.  Now it works whether I call the script from the command line or from a scheduled task.

Here’s an example:

$thisScript = $($MyInvocation.MyCommand).Name  # holds the script name for use in logging messages

# if the script is called using an ARS scheduled task
# the script name returned will be a GUID prefixed
# with the letter "s" otherwise it will be the actual script name

$taskDNDefault = "Default"
$parameters = Get-TaskParameters

if ( $taskDNDefault -ne "Default" ) {
$thisScript = $(Get-QADObject $thisScript.substring(1) -Proxy ).name
}

Note: I strip off the “s” prefix by using .Substring(1).

Remembering what you were doing…

Hopefully not too misleading a title, as the post is really about variable scope and how this little trick allowed me to build a function that remembered what it was doing, so that I could reuse it without having to write additional code in the main script block to get it to work.

I was working on a script that called a lookup function in several places, and I thought it was wasteful to potentially keep repeating the same lookups. Caching the lookups in the script would both speed up my code and, I hoped, make it more readable. The rationale is that a call to AD to look up some attributes can be quite time consuming, while a lookup in a hash table is much quicker. I used a hash table so I could store the actual objects found in the search, keyed on the lookup attribute.

Initially I stored these lookups in the main script but quickly realised that if I wanted to change the logic I’d have to change it in multiple locations, so the smart place to do it was inside the called function.

The task then was to make a self-contained function that just returned the user details. The solution was to add a script- (or global-) scope variable inside the function on the first call and then reuse it. I didn’t need to change a line of code in my main script block to add this functionality either, as the call was the same; the new logic was all inside the function. The return value was still a user object.

The script below is only to demo the technique, not to show you how to look up a user’s details. The caveat here is that if you are doing 1000s of lookups then you may need a lot of memory to store the user details, so make sure you only store the data you need, not the whole user object. My environment isn’t big enough to hit any problems of scale, so I don’t generally worry about storing user info in memory, but I still try to keep it to a minimum; after all, it’s also quicker to do this and just a good habit to get into.

If I had added the cached lookup logic in my main script block around each function call, i.e. if I initialised a variable in the main script to store the lookups and then checked this before actually calling the function that does the lookup, then I would have two problems. Every time I want to call the function I have to add the additional code surrounding the function call, and secondly, if I copy and paste the function into another script I have to remember all the additional lines of code needed to get it working.

The first issue is only really a problem if I call the function in a lot of places in my main script block. By repeating the checks before I call the function I’m adding complexity and bloating my code. It also adds scope for typos and issues with variable scope etc. If I later revise the way the function works I might also need to revisit every function call to make sure it still works.

The ideal then is to have a self-contained function that stores the contacts I’ve already looked up; but how? My solution was to use a script- or even global-scope variable that is initialised inside the function using the Script: or Global: prefix on the variable name. I check whether the variable exists and, if not, create an empty one. If the variable does exist, I search the hash table before dropping back to the expensive AD lookup. This also simplifies the code, as now I just call the function and I don’t care whether the return value is from an actual lookup or a cached lookup. My code is easier to read and faster, win win 🙂

Make sure you clean the memory before running this script (use my Clean-Memory function), although maybe you want to keep the cached lookups over multiple runs of the script to save even more time when running it repeatedly.


function getUserDetails {
 [CmdletBinding(SupportsShouldProcess=$false,ConfirmImpact="None")]
 Param (
  [parameter(Mandatory=$true)]
  [ValidateNotNull()]
  [Int]$contactId
 ) 

 if ( !  $Script:contacts ) {  $Script:contacts = @{} }
 if ( $Script:contacts.contains($contactId) ) {
   # we already looked up this contact so just return it from the hash table
   return  $Script:contacts.item($contactId)
 }
 $contactData = " < code to get the contact data > "
 # update the hash table with the contact data and then return the object
 $Script:contacts.add($contactId,$contactData)
 Return $contactData
}
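Here’s a self-contained sketch of the same caching pattern; Get-CachedValue is a made-up name for illustration, and the counter stands in for the expensive AD call so you can see the cache working:

```powershell
function Get-CachedValue {
    param ( [Parameter(Mandatory=$true)][int]$Id )
    if ( -not $Script:demoCache ) { $Script:demoCache = @{} }   # first call creates the cache
    if ( $Script:demoCache.Contains($Id) ) { return $Script:demoCache[$Id] }
    $Script:lookupCount++                                       # stands in for the slow AD lookup
    $value = "details for $Id"
    $Script:demoCache.Add($Id, $value)
    return $value
}

$Script:lookupCount = 0
$null = Get-CachedValue 42
$null = Get-CachedValue 42    # second call is served from the hash table
"Expensive lookups performed: $Script:lookupCount"   # 1
```

The calling code is identical either way; whether the result came from the cache or the “real” lookup is invisible to the caller.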

Clean-Memory

First post of the new year and I’m repeating myself, again. I was updating some scripts and my Clean-Memory function didn’t do what I was expecting, so I delved in a little more and cleaned it up a little.

I run this at the top of my main scripts using the -Force parameter, or providing a list of variable names that I want to make persistent. Working in an ISE is great, but variables do tend to bleed all over the place, and just to make sure that your script will run when you give it to someone else or run it on another computer, I always use this function to remove any variables I initialised at the end of the script so my environment is nice and tidy.

Anyway – here’s the script if it’s of any use to you.

function Clean-Memory {
<#
.SYNOPSIS
Removes all variables from memory that did not exist before this script was run.
.DESCRIPTION
Removes all variables from memory that did not exist before this script was run.
 
without the -force parameter the script uses $Script:startupvariables to store
any variables you have already instantiated.  Watch out if your script crashed 
on the last run as there will be more variables in memory than would exist 
before the first run of the script.  $Script:startupvariables  allows you to
identify new variables which must have been created during the script run and
you can therefore choose to clear them or save them for the next run depending 
on how and when you call this function.

Use the -force parameter to remove all variables regardless of any stored variables.  
         
The script uses the Remove-Variable cmdlet to force the deletion of the variables
not stored in the $Script:startupvariables variable.

.PARAMETER PersistantVariables
  An array of variable names that should not be removed from memory          
.PARAMETER Force
  Removes all variables from memory 
.EXAMPLE
$startupvariables = Clean-Memory 
Stores instantiated variable names in the array $startupvariables
.EXAMPLE
Clean-Memory $startupvariables
clears any variable not passed as a parameter or is in the list contained in $startupvariables
.EXAMPLE
Clean-Memory "exitcode" 
Clears all variables except $exitcode
.INPUTS
An optional array of variable names
.OUTPUTS
A script scope variable $Script:startupvariables
.NOTES
NAME     : Clean-Memory
VERSION  : Version 3.0
AUTHOR   : Lee Andrews
.LINK
Remove-Variable
.LINK
  https://clan8blog.wordpress.com       
#>
[CmdletBinding(SupportsShouldProcess=$False,ConfirmImpact="None")]
param(
  [Parameter(Mandatory=$false,HelpMessage="Array of persistent variable names",Position=0)]
  [string[]]$PersistantVariables,
  [Parameter(Mandatory=$false,HelpMessage="When present the function will clear all variables")]
  [switch]$Force 
 )
if ( ( $PersistantVariables ) -or ( $Force ) )  {
  $PersistantVariables += $script:MyInvocation.MyCommand.Parameters.Keys
  Get-Variable |
  Where-Object { ( $PersistantVariables -notcontains $_.name ) } |
  ForEach-Object {
   #write-debug "$($_.name)" 
   try { Remove-Variable -Name "$($_.Name)" -Force -Scope "script" -ErrorAction SilentlyContinue -WarningAction SilentlyContinue}
   catch { }
  }
}
New-Variable -name startupVariables -Force -Scope "Script" -value ( Get-Variable | ForEach-Object { $_.Name } )
} # Removes all redundant variables from memory

Which nested group do I need to remove the user from?

I had a little problem today where the service desk wanted to take away someone’s rights to a resource controlled by a group.  The problem was they knew which group granted the rights and they knew which user, but not which nested group he was actually in.

What?  Howzat?  ( I hate cricket actually, but always liked that catchphrase. )  Well, the user is in groupA, groupA is nested in groupB, and groups B, C, D and E are all nested in groupF.  I know that groupF is used to delegate the rights, but when I look at the user he’s not a direct member of groupF, and if I look at groupF I can’t see any common groups.

So it’s trickier than it looks.  In my environment this particular group had about 20 other nested groups, so the service desk wanted a quicker way of finding out which group they needed to take the user out of.

Here’s my solution: use a function that is called recursively and lists all the groups the user is in that are ultimately nested in the target group.

The function takes a userDN and a groupDN as inputs.  It gets the group’s members using Get-QADGroupMember, then iterates through them.  If a member is another group, it calls the function recursively; if it’s a user, we check whether it’s the one we are looking for, and if it is, we store the groupDN in an array and return.  The end result is that the function returns an array of groupDNs the user is in that are nested in the target group.

Here’s the code, including the helper functions:

$GroupDN = "CN=SomeGroupName,DC=AD,DC=com"
$targetUserDN = "CN=SomeUserName,OU=Accounts,DC=AD,DC=com"
cls

function Get-SnapinStatus { 
	[CmdletBinding(SupportsShouldProcess=$true)]
	param	(
		[parameter(ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true, Mandatory=$true, HelpMessage="Enter the snapin name, e.g. Microsoft.Exchange.Management.PowerShell.Admin")]
		[string]$name
	)
	if(!(Get-PSSnapin -Name "$name" -ErrorAction SilentlyContinue)) {
		if(Get-PSSnapin -Registered | Where-Object {$_.name -eq "$name"}) {
			try { 
				Add-PSSnapin -Name "$name" -ErrorAction Stop
    Write-Host "PS Snapin $name was imported..."
				return $true
			}
			catch {
    Write-Host "ERROR: PS Snapin $name failed to be imported"				
				return $false 
			}
		} 
		else {
   Write-Host "ERROR: PS Snapin $name was not available (Windows feature isn't installed)"
   return $false
		}
	}
	else {
	 #Write-Host "PS Snapin $name was already imported...." 
  return $true
	}
} # Loads snapins if they are not already loaded

function Get-GroupMembership($userDN,$groupDN){
 $groups = @()
 $groupMembers = Get-QADGroupMember -identity $groupDN
 $script:objectCount += $groupMembers.count 
 Write-Debug $processed 
 Write-progress -Activity "Searching For Groups" -Status ("Checking - "+$GroupDN) -percentcomplete ($processed / $objectCount*100) -id 1
 ForEach ( $groupMember in $groupMembers ) {
  if ( $groupmember.gettype().name -eq "ArsGroupObject" ) {
   $script:processed++
   $groups += Get-GroupMembership $userDN $groupmember.DN
  }
  if ( $groupmember.gettype().name -eq "ArsUserObject" ) {
   $script:processed++
   Write-Debug $processed 
   if ( $groupMember.DN -eq $userDN ) {
    $groups += $groupDN
    Write-Debug "Found group: $groupDN"
    Return ,$groups 
   }
  }
 }
 Return ,$groups 
} 

# main script
Write-Host "Script Started"
Write-Host "Loading Quest CMDLETS"
if ( ! ( Get-SnapinStatus "Quest.ActiveRoles.ADManagement" ) ) {	throw "Unable to load the Quest AD management snapin - please investigate." }
Write-Host "Checking group members..." 
$objects = Get-QADGroupMember -Identity $GroupDN -Indirect | select -ExpandProperty DN 
if ( $objects -notcontains $targetUserDN ) { 
 Write-Host "User is not in the target group so nothing to do"
 exit
}
$objectCount = 0
$processed = 0  
Write-Host "Searching nested groups...."
$groups = Get-GroupMembership $targetUserDN $GroupDN 
Write-progress -Activity "Searching For Groups" -Status ("Checking - "+$GroupDN) -Completed -id 1 
if ( $groups -ne $null ) { 
 Write-Host "User is in the following nested groups:" 
 ForEach ( $group in $groups ) { Write-Host "`t $group" }
}
else { Write-Host "User not found in any nested groups" }
Write-Host "Script Finished" 

SupportsShouldProcess

This wasn’t what I planned to blog about tonight, but I wanted to check my facts before posting and realised my level of understanding of this topic was a little low. Specifically, I was planning to blog about how I implemented a debug switch in some of my recent scripts.

Before publishing my notes I quickly googled for other people’s notes on this topic, and the top hit was this one:

http://becomelotr.wordpress.com/2013/05/01/supports-should-process-oh-really/

Having read that, I was more confused and wanted to do some testing, BUT I didn’t fancy moving a lot of files around, so I thought: why not just write a function that does something really innocuous but will always prompt for confirmation? This thought takes you to the heart of this post, which is getting your own functions to support the -Confirm and -WhatIf switches. This is actually simple….. but, as the post above shows, maybe a little harder to implement than you thought.

In the parameter declaration add [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="None")] and, hey presto, you can use the -Debug, -Confirm and -WhatIf switches. Note that SupportsShouldProcess must be $true for -Confirm and -WhatIf to work.

 “Not so fast!” did I hear someone say?

Well then you’d be right! The -Debug switch sets $DebugPreference to “Inquire”, which is about as useful as a chocolate teapot. My solution, which was my originally planned post, was to check the value of $DebugPreference and, if it was anything other than “SilentlyContinue”, set it to “Continue”.

“just like that” , “not like this and not like that” , “Just like that”.. 

I also set a $debug variable as this can be handy in if statements.

#region declare parameters
[CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="None")]
param (
 [Parameter(Mandatory=$true,HelpMessage="The inputFolder",Position=0)]
 [string]$inputDir
)
#endregion
#region check debug switch and optionally print to screen

if ( $DebugPreference -ne "SilentlyContinue" ) {
  $DebugPreference = "Continue"
  $debug = $true
 }
else { $debug = $false }
#endregion

So now to deal with the -WhatIf switch, which is mentioned in quite a few posts; I was finding it a little confusing, to say the least, and as I said, I wanted to write a little SAFE test program and not have to move any files around. I came up with a function that takes a string and writes it to screen using Write-Host. What’s that? Write-Host will just print to the screen anyway, so how’s that a test? Simples: use the cmdlet binding and set ConfirmImpact to High. Now when I call this function it will always prompt me before writing the message to screen, unless I use -Confirm:$false.

Function print-msg {
 [cmdletbinding(SupportsShouldProcess=$True,ConfirmImpact="High")]
 Param (
  [parameter(Mandatory=$true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
  [string]$MessageText
 )

 Begin {
  Write-Host "In Begin Block: print-msg" -ForegroundColor Green
 }

 Process {
  Write-Host "In Process Block: print-msg" -ForegroundColor Green
  Write-Host "ConfirmPreference = $ConfirmPreference"
  if ($pscmdlet.ShouldProcess("Prompt - but don't make it too long as it may get truncated")) {
   Write-Host "Hello World: $MessageText"
   Write-Host "ConfirmPreference = $ConfirmPreference"
  }
 }

 End {
  Write-Host "In end block: print-msg" -ForegroundColor Green
 }
} 

Then I wrote another test function to test what was in the post above. This illustrated the point perfectly, so try testing it yourself to improve your understanding too.  To test, you will need to make sure that you push messages into the function via the pipeline, like this: @("INPUT TEXT1","INPUT TEXT2") | show-msg.  The trick to getting this to work is pushing $PSBoundParameters through to the called function and, before you do, adding the Confirm switch to the parameters using $PSBoundParameters.Confirm = $false.

Function show-msg {
 [cmdletbinding(SupportsShouldProcess=$True,ConfirmImpact="Low")]
 Param (
  [parameter(Mandatory=$true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
  [string]$MessageText
 )

 Begin {
  cls
  Write-Host "In Begin block: show-msg" -ForegroundColor Blue
  Write-Host "This is the input list of parameters" -ForegroundColor Blue
  $PSBoundParameters.GetEnumerator()
 }

 Process {
  Write-Host "In Process Block: show-msg" -ForegroundColor Blue
  Write-Host "ConfirmPreference = $ConfirmPreference"
  try {
   if ($pscmdlet.ShouldProcess("Print: $MessageText")) {
    $PSBoundParameters.Confirm = $false
    Write-Host "Prompted for confirm - response was YES or ALL" -ForegroundColor Cyan
    print-msg @PSBoundParameters
   }
   else {
    Write-Host "Prompted for confirm - response was NO" -ForegroundColor Magenta
    $PSBoundParameters.Confirm = $true
   }
  }
  catch { }
 }

 End {
  Write-Host "In end block: show-msg" -ForegroundColor Blue
 }
}