Wednesday, December 17, 2014

Service Manager Performance Optimizations Checklist

This is intended as a reference compilation of performance optimizations for Service Manager. I have divided them into multiple parts for quicker reference.

Everything that follows is collected from various blog posts; most of them can be found in my previous blog post. All credit goes to the guys and gals who wrote those.

This is a work in progress, so keep coming back :D


SQL

  • Make sure the SQL Server meets the recommended hardware requirements (look up the SM Sizer tool). Also, in order of importance: disk I/O > RAM > CPU.
  • If possible keep Service Manager, Data Warehouse (and Orchestrator) on separate SQL boxes. This makes them easier to scale later on.
Post install:
  • Create additional tempdb files for both the Service Manager and Data Warehouse instances. Rule of thumb is one file per two CPU cores, up to one file per core. Put them on fast disks (if possible separate LUNs/disks).
  • Disable autogrow on the ServiceManager database and tempdb (size them properly to begin with). Have SCOM or similar monitor them, and resize manually if needed.
  • Set maximum memory for the/each instance so that the OS has 4 GB RAM available.
  • For Service Manager make sure that SQL Service Broker is enabled (set to 1) (read more here, page 14).
  • Make sure autoshrink is disabled (it is by default).
  • Some experience increased performance by setting max degree of parallelism to between 1 and 4 (read more here, page 15).
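Some of the settings above can be checked and applied with T-SQL from PowerShell. Below is a minimal sketch, assuming the SqlServer module is installed; the instance name is a placeholder, and the memory and MAXDOP values are examples only, not recommendations for your box:

```powershell
# Sketch only - instance name and values are placeholders, adjust to your environment
Import-Module SqlServer

$Instance = "SQLSRV01\SCSM"   # hypothetical instance name

# Check that SQL Service Broker is enabled (should return 1 for the ServiceManager DB)
Invoke-Sqlcmd -ServerInstance $Instance -Query "SELECT name, is_broker_enabled FROM sys.databases WHERE name = 'ServiceManager'"

# Cap max server memory so the OS keeps ~4 GB (example: 28 GB on a 32 GB box)
Invoke-Sqlcmd -ServerInstance $Instance -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 28672; RECONFIGURE;"

# Set max degree of parallelism (some see gains with a value between 1 and 4)
Invoke-Sqlcmd -ServerInstance $Instance -Query "EXEC sp_configure 'max degree of parallelism', 4; RECONFIGURE;"
```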
Service Manager

  • Install a secondary management server and have consoles connect to it, and it alone. The primary management server will then be a dedicated workflow server.
    Rule of thumb is 12 concurrent console sessions per CPU, but you can likely handle more.
  • Make sure there is a low-latency, high-bandwidth connection between consoles and the (secondary) management server. This can be a problem in a geographically dispersed organization. If the connection is an issue consider using remote desktop, Citrix or third-party alternatives (Cireson/GridPro) to the console.
Post install:
  • Apply UR2 - it has a critical console performance fix.
  • Configure the Global Operators Group (read FAQ: Why Does It Take So Long to Find Users in the Assigned To and Primary Owner Fields?)
  • Disable app pool recycling (read FAQ: Why is the self-service portal so slow?)
  • Consider increasing the group calculation interval (read Service Manager Performance)
  • Only create SLOs that are really needed. An alternative to the built-in service level management is using Orchestrator or SMA.
  • Disable workflow: Incident_Adjust_PriorityAndResolutionTime_Custom_Rule.Add if using SLOs.
  • Disable the first assigned workflow if not used (read SCSM - The item cannot be updated.....aka. Click Apply and die) - it is really frustrating for your analysts to have this enabled.
  • Consider data retention settings. Do you really need closed service requests for more than 90 days? Fewer work items mean better performance.
  • Set up workflows to close resolved incidents, completed service requests, etc. Cireson has an auto close app, or you can roll your own. I did a piece on auto-resolving incidents, but you can easily edit the script to close resolved incidents.
  • When creating AD connectors, point only at a specific OU containing the users you want to import into Service Manager. If you then need to import from more than one OU, create more AD connectors. Also use this LDAP query to import only enabled accounts.
    Remember to check the 'Do not write null values for properties not set in Active Directory' box.
    If you have more than one AD connector, use a different Run As account (each based on a different AD user) for each.
    Read more on AD-connector optimizations here.
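For reference, a commonly used LDAP filter for importing only enabled user accounts looks like this (the bitwise matching rule excludes accounts with the ACCOUNTDISABLE flag set):

```
(&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
```

The OID 1.2.840.113556.1.4.803 is the LDAP bitwise AND matching rule, and 2 is the disabled-account bit in userAccountControl.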
I will try and keep this updated as I learn new tricks. There are tons more, but I find these fairly trivial to apply, with still a lot to gain.

Sunday, December 14, 2014

Service Manager 2012 Performance (A collection of blog posts)

Service Manager performance is essential, and almost any SCSM admin will have tried (or will try) some of the tricks in the following. Before we get started I would like to quote Donald Knuth (first time I heard about him too):

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil"

In other words, don't just optimize for the sake of it! To quote another piece of Wikipedia (because I am too lazy to write it myself):

Rewriting sections "pays off" in these circumstances because of a general "rule of thumb" known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed — if the correct part(s) can be located.

I believe this applies to Service Manager too.

Now let's dig into it. I will do a "lazy summary" of some of the links for those who cannot be bothered to read it all ;) And then point out some nifty optimizations that are worth considering (which may or may not be valid for your configuration). Or just if I find something cool or new (to me).

Also, I would like to encourage you to comment on particular tips or tricks that helped you make Service Manager perform better, or if I left something out that you feel is worth mentioning.

  • Don't use the "advanced type" for views. Ever!
  • Size your SM DB properly (to avoid it growing on demand in a production environment).
  • If possible keep all DB- and log-files on separate physical disks.
  • Don't skimp on console computers. Multiple cores and 4+ GB RAM.
The section on "Group, Queue, and User Role Impact on Performance" may apply to you. If you are not using queues for service level management or to control access to work items, CIs, etc. for users, or if you are using service level management but it is not a time-critical part of your process, then this optimization may be for you.
By default Service Manager computes what goes into what queue every 30 seconds, and consequently which SLOs should be applied, or who can access what (defined using groups). That sounds like hard work, and quite wasteful when we could do it much less often, like every 10 minutes (as suggested).
Beware: The value is entered in hexadecimal (base 16) by default, not decimal.
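If you are unsure about the conversion, PowerShell can translate between decimal and hexadecimal for you. For example, a 10-minute interval (600 seconds):

```powershell
# 600 seconds (10 minutes) expressed in hexadecimal for the registry value
$Seconds = 600
$Hex = [Convert]::ToString($Seconds, 16)
$Hex                           # -> 258

# and back again, to verify
[Convert]::ToInt32($Hex, 16)   # -> 600
```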

  • Download the queries from here, extract the zip, and run the one called "SubscriptionStatus.sql" against the ServiceManager DB. Look at the top rows; if the column "minutes behind" is greater than 3 you may have a problem. Read the entire article to dig in deeper.
I actually had a workflow in my system that was behind by 192 days (and counting...). It turned out to be the exact same workflow recorded twice, but only one of them was updated as having run.

Also more on this further down.

Not really a blog post, but there is a critical performance fix for the console in UR2. So apply that (no questions asked) if you have performance issues with the console.

  • Configuration is key (I think he actually says that somewhere).
  • Simply watch the video. Start at 16:00 if you want to dig right in, and watch about 30 minutes (some of it can be skipped where he talks about testing at MS bla bla). Remember to take notes, but remember the caveat at the beginning of this post - there are a lot of possible configurations for optimization, but you are likely to get the most out of just a few of them.
On a personal note: I don't get why he shows that Service Manager can run on a beast of a backend with many more users, computers, work items, etc. than Microsoft tested for, when the moral is that configuration is the critical component (he disables some not-needed workflows and reconfigures stuff). Why not test it on some more down-to-earth hardware? Then the moral of the story could be that Service Manager can run at a very large scale on decent, but not out-of-this-world, hardware with proper configuration.

Just watch it already!

FAQ: A Collection of Tips to Improve System Center 2012 Service Manager Performance (by Peter Zerger)

He did a collection of performance hints, so I will include him in this collection :D

Service Manager slow performance (By Mihai Sarbulescu)

An elaboration on what Travis talked about troubleshooting workflows and delays (linked earlier in this post).

Poor Performance on Service Manager 2012? (by Thomas Mortsell)

Some cool tips, especially on the SQL backend. I hadn't heard about splitting the SM DB into multiple files (across multiple disks, controllers, etc.). I would suspect some tables to be a lot busier than others, and those could possibly benefit from being in a separate filegroup. Anyone had luck with this?

That was it. Remember to comment below. I may do a post someday with performance optimizations that might as well be done as part of a Service Manager installation. Or in most cases some easy to do post-install optimizations.

Service Manager Request Query Result Filtering (By Nathan Lasnoski)

Keep this in mind if you are using query results in your request offerings. Not only is this a performance optimization; there is also a (configurable) limit to how many objects are returned, which can easily confuse the requester.

Thursday, September 25, 2014

Mapping SCOM Incidents to support group

The integration between Operations Manager and Service Manager is sadly lacking. A connector can import SCOM alerts and create incidents, and some incident properties can be configured based on the alert (one can set a number of custom fields on the alert), but an incident template is required for each combination of properties (urgency, impact, and support group). Depending on the number of possible (relevant) support groups this can amount to quite a large number of templates. This issue will be addressed later on, but to begin with we need to identify which support group should handle which alert.

I may (quite possibly) be missing a few details in the following, so feel free to post a comment below.

My approach is similar to how IP tables work: a list of rules, where one starts from the top and goes down until a rule criterion matches a given alert. An example could look like this, where each row corresponds to a rule.
Index,SCSM_SG,Tag,Rule_ID,MP_name,Group,Comment
1,SG1,,b59f78ce-c42a-8995-f099-e705dbb34fd4,,,Health Service Heartbeat Failure
2,SG2,,308c0379-f7f0-0a81-a947-d0dbcf1216a7,,,Failed to Connect to Computer
7,SG2,,,,CB - Sharepoint servers,Sharepoint
9,SG1,,,Microsoft.Windows.FileServer.*,,File Service
11,SG2,,,Microsoft.SystemCenter.2012.Orchestrator,,System Center
12,SG1,,,Microsoft.SystemCenter.OperationsManager.Infra,,System Center
13,SG2,,,Microsoft.SystemCenter.OperationsManager.DataAccessService,,System Center
14,SG2,,,Microsoft.SystemCenter.Apm.Infrastructure.Monitoring,,System Center
15,SG2,,,Microsoft.SystemCenter.Apm.Infrastructure,,System Center
17,SG2,,,Microsoft.SystemCenter.Apm.NTServices,,Application Performance
18,SG1,,,Microsoft.SystemCenter.Apm.Web,,Application Performance
19,SG3,,,*,,Catch all

Where Index defines the priority of the rule (lower index means higher priority), SCSM_SG is the support group the alert (incident) should be mapped to, Tag, Rule_ID, MP_name and Group are alert criteria, and Comment is, well, a comment for the reader to better understand each rule.
Each alert is then matched against this table, stopping when the first match is found. This allows generic alerts such as a heartbeat failure to be handled by a specific support group, while all alerts for a specific computer group must be handled by the specified support group. In this example, rule 7 says that alerts from a SharePoint group in the Codebeaver firm (possibly containing SharePoint-related computers) are handled by SG2, unless the alert matches one of the rules with a lower index (i.e. higher priority).
Note that each row/rule should only contain a single criterion, as the logic cannot handle multiple criteria (it should be fairly trivial to edit the script to allow for multiple criteria on a single rule).
  • Tag is, well, a tag that is defined in the description of an alert, allowing custom alerts to be tagged by adding #tag:mytag to the end of the alert description. This allows alerts defined in the same management pack to be routed based on a tag in the description (possibly inserted based on some variable)
  • Rule_ID is just that, the rule ID
  • MP_name is the name of a management pack. It supports wildcards, basically anything that the PowerShell -like comparison will accept
  • Group is a computer group. Some monitoring objects will be child monitoring objects of a given computer group, i.e. a disk monitor in the CB - Sharepoint Servers group.
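The first-match logic can be illustrated without SCOM at all. This is a simplified, self-contained sketch of what the full script does: collect every rule that matches an alert and pick the one with the lowest index (the rule values and mock alert below are made up for illustration):

```powershell
# Simplified illustration of the rule matching - not the full script
$Rules = @(
    [PSCustomObject]@{Index=9;  SCSM_SG="SG1"; MP_name="Microsoft.Windows.FileServer.*"; Group=""}
    [PSCustomObject]@{Index=7;  SCSM_SG="SG2"; MP_name="";                               Group="CB - Sharepoint servers"}
    [PSCustomObject]@{Index=19; SCSM_SG="SG3"; MP_name="*";                              Group=""}
)

# A mock alert: originates from the file server MP, on a computer in the Sharepoint group
$AlertMPName = "Microsoft.Windows.FileServer.Monitoring"
$AlertGroups = @("CB - Sharepoint servers")

# Collect all matching rules (one criterion per rule, as in the table above)
$RuleMatches = @()
$RuleMatches += $Rules | Where-Object { $_.MP_name -and $AlertMPName -ilike $_.MP_name }
foreach($Group in $AlertGroups)
{
    $RuleMatches += $Rules | Where-Object { $_.Group -and $Group -ilike $_.Group }
}

# Lowest index wins
$SupportGroup = $RuleMatches | Sort-Object {[int]$_.Index} | Select-Object -First 1 -ExpandProperty SCSM_SG
$SupportGroup   # -> SG2 (rule 7 beats rule 9 and the catch-all)
```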

The script is listed below (this is a long one; you may want to get a cup of coffee/mug of beer before proceeding).

# Processes alerts in SCOM and marks the alert as ready to be
# forwarded to Service Manager.
# Authored by:
# Anders Spælling,
# And a little help from my friends

# 600 - Information - Enumerating monitoring objects from specific group
# 601 - Information - Forwarding alerts from specific group
# 602 - Information - Is alive ping
# 603 - Information - Forwarding remaining alerts (not member of specified groups)
# 604 - Information - No new alerts
# 605 - Information - Going to sleep
# 606 - Information - Alert processed and forwarded to SM
# 607 - Information - Connected to SCOM mgt. srv.
# 700 - Error       - No matching group found
# 701 - Error       - Unable to connect to SCOM mngt. server
# 702 - Error       - Unable to update alert
# 703 - Error       - Alert forwarding for group failed
# 703 - Error       - Alert forwarding for remaining alerts failed
# 704 - Error       - Alert mapping failed
# 705 - Error       - Unable to load alert mapping rules

# will not commit changes to alerts or write to event-log
# will instead output these to write-host
$DEBUG_MODE = $false

# Load SCOM module
Import-Module OperationsManager

# Define constants
$SCOMComputerName = "FILL THIS OUT"
$EventLogName = "SCOM Alert Forwarding"
# sleep loop for 240 seconds
$SLEEPTIME = 240
# how long the loop runs, in minutes, set to 3h55m
$LOOPTIME = 3*60+55

# customfield1 values - used by the SCSM connector to route IRs
$HIGH =  "High"
$MEDIUM = "Medium"
$LOW = "Low"
$NOT_DEFINED = "Not defined"
$NOT_MEMBER_OF_GROUP = "Not member of group"


# recursive depth to list monitoring objects (value lost from the original listing - adjust as needed)
$MAXDEPTH = 10
# location of alert mapping data
$RuleMappingFileLocation = "AlertMapping.csv"


Function Write-SCOMEventLog
{
    Param($EventDescription, $EventID, $Type)

    $EventlogExists = Get-EventLog -ComputerName $SCOMComputerName -List | Where-Object {$_.LogDisplayName -eq $EventLogName}

    If(-not $EventlogExists)
    {
        New-EventLog -LogName $EventLogName -Source AlertUpdate -ComputerName $SCOMComputerName
    }

    # will not write to event log in debug mode
    if(-not $DEBUG_MODE)
    {
        Write-EventLog -ComputerName $SCOMComputerName -LogName $EventLogName -Source AlertUpdate -Message "$EventDescription" -EventId $EventID -EntryType $Type
    }
    else
    {
        Write-Host "*DEBUG_MODE: Write-EventLog -ComputerName $SCOMComputerName -LogName $EventLogName -Source AlertUpdate -Message `"$EventDescription`" -EventId $EventID -EntryType $Type"
    }
}


# Get parent monitoring objects recursively
Function Get-ParentMonitoringObject
{
    Param($MonitoringObjects, [int]$Depth=0)

    # keep an eye on how deep we go in the recursion but only report it in debug mode
    if(++$Depth -gt $MAXDEPTH -and $DEBUG_MODE)
    {
        Write-Host "Reached max depth for recursion, depth = $Depth"
    }

    $S = [array]$MonitoringObjects

    # Get all parent monitoring objects for each monitoring object and append these to $S
    foreach($MonitoringObject in $S)
    {
        $Result = Get-ParentMonitoringObject $MonitoringObject.GetParentMonitoringObjects() $Depth
        #Write-Host $Result
        $S += $Result
    }
    return $S
}

Function Get-SupportGroup
{
    Param($Alert, [array]$Groups)

    # Check if rules are loaded
    if($Rules -eq $null)
    {
        throw "Alert mapping rules not loaded"
    }

    # We wish to map according to '#tag' in the alert description, rule id, MP name and finally group
    # The rules are loaded from a CSV file

    # *** optimizations/todo ***
    #     if the index is 1 then return support group

    # * TAG MATCH * #

    # check if the alert description is tagged. this is possible in custom made monitors where we wish to direct an alert to a specific support group
    $AlertDescription = $Alert.Description.ToLower()
    $IndexOfTag = $AlertDescription.IndexOf("#tag:")
    $TagMatch = $null
    if($IndexOfTag -ge 0)
    {
        $Tag = $AlertDescription.Substring($IndexOfTag).Replace("#tag:","")

        # look for the first tag match in the rules
        $TagMatch = $Rules | ? {$Tag -ilike $_.Tag} | Sort-Object {[int] $_.Index} | select -First 1 | select Index, SCSM_SG
    }

    # DEBUG
    if($TagMatch -and $DEBUG_MODE)
    {
        Write-Host ("Tag match for '" + $Alert.Name + "': " + $TagMatch)
    }

    # * RULE ID MATCH * #

    $RuleId = $Alert.RuleId.Guid
    $RuleIdMatch = $Rules | ? {$_.Rule_ID -eq $RuleId} | Sort-Object {[int] $_.Index} | select -First 1 | select Index, SCSM_SG

    # DEBUG
    if($RuleIdMatch -and $DEBUG_MODE)
    {
        Write-Host ("Rule ID match for '" + $Alert.Name + "': " + $RuleIdMatch)
    }

    # * MANAGEMENT PACK NAME MATCH * #

    # Get the management pack name - the alert may originate from either a monitor or a rule
    try
    {
        $ManagementPackName = (Get-SCOMMonitor -ComputerName $SCOMComputerName -Id $Alert.MonitoringRuleId).GetManagementPack().Name
    }
    catch
    {
        $ManagementPackName = (Get-SCOMRule -ComputerName $SCOMComputerName -Id $Alert.MonitoringRuleId).ManagementPackName
    }

    $ManagementPackNameMatch = $Rules | ? {$ManagementPackName -ilike $_.MP_name} | Sort-Object {[int] $_.Index} | select -First 1 | select Index, SCSM_SG

    # DEBUG
    if($DEBUG_MODE)
    {
        Write-Host ("Management pack name: '" + $ManagementPackName + "'")
        if($ManagementPackNameMatch)
        {
            Write-Host ("Management pack match for '" + $Alert.Name + "': " + $ManagementPackNameMatch)
        }
    }

    # * COMPUTER GROUP MATCH * #

    $ComputerGroupMatch = @()
    if($Groups.Count -gt 1)
    {
        Write-Host "More than 1 matching group found for $($Alert.Name)"
    }
    # There may not be a matching computergroup
    foreach($Group in $Groups)
    {
        $ComputerGroupMatch += $Rules | ? {$Group -ilike $_.Group} | Sort-Object {[int] $_.Index} | select -First 1 | select Index, SCSM_SG
    }

    # DEBUG
    if($ComputerGroupMatch -and $DEBUG_MODE)
    {
        Write-Host ("Computer group match for '" + $Alert.Name + "': " + $ComputerGroupMatch)
    }

    # add all the matching rules, sort them by index and select the first rule
    $SupportGroup = ([array]$TagMatch + [array]$RuleIdMatch + [array]$ManagementPackNameMatch + [array]$ComputerGroupMatch) | Sort-Object {[int] $_.Index} | select -First 1 | select -ExpandProperty "SCSM_SG"
    #write-host ([array]$TagMatch + [array]$RuleIdMatch + [array]$ManagementPackNameMatch + [array]$ComputerGroupMatch)

    Return $SupportGroup
}

Function Get-AlertMapping
{
    Param($Alert)

    #Write-Host "Getting all monitoring objects for `"$($Alert.Name)`""

    # Get monitoring object associated with alert
    $MonObj = Get-SCOMMonitoringObject -Id $Alert.MonitoringObjectId -ComputerName $SCOMComputerName
    # we only care about monitoring objects that will potentially match "CB groups"
    $MonitoringObjects = Get-ParentMonitoringObject $MonObj | select -ExpandProperty DisplayName | Sort-Object -Unique | ? {$_ -ilike "CB -*"}

    # Find matching groups
    #Write-Host "`"$($Alert.Name)`" monitoring objects"
    $MatchingGroups = @()
    foreach($MyGroup in $MyGroups)
    {
        if($MyGroup -in $MonitoringObjects)
        {
            $MatchingGroups += $MyGroup
        }
    }

    return (Get-SupportGroup $Alert $MatchingGroups)
}


Function Update-SCOMAlert
{
    Param($Alert, $CustomFieldText, $ResState)

    try
    {
        try
        {
            $SupportGroup = Get-AlertMapping $Alert
        }
        catch [System.Exception]
        {
            Write-SCOMEventLog "Unable to map alert to support group for alert: $($Alert.Name)`nException message: $($_.Exception.Message)" 704 "Error"
            # we will route it to RTI server
            $SupportGroup = "RTI Server"
        }

        $Alert.customfield2 = $CustomFieldText
        $Alert.customfield3 = $SupportGroup
        $Alert.resolutionstate = $ResState

        # we will not commit changes to alert in debug mode
        if(-not $DEBUG_MODE)
        {
            $Alert.Update("Alert processed and ready for Service Manager")
        }
        # Write-Host "*DEBUG_MODE: Updating alert $($Alert.Name), criticality is set to $($Alert.customfield2), support group: $($Alert.customfield3)"
    }
    catch [System.Exception]
    {
        Write-SCOMEventLog "Unable to update alert: $($Alert.Name)`nException message: $($_.Exception.Message)" 702 "Error"
        $Alert = $Null
    }

    # Alert variable set to null if unable to update
    if($Alert -ne $Null)
    {
        $EventDescription = "Alert processed and ready for Service Manager.  Alert: " + $Alert.Name + ", " + "AlertID: " + $Alert.ID + ". Priority : " + $CustomFieldText + ". Support group: " + $SupportGroup
        Write-SCOMEventLog $EventDescription 606 "Information"
    }
}

Function Forward-SCOMAlert
{
    Param($GroupDisplayName, $CustomFieldText, $ResState)

    $Alerts = $null
    $Alert = $null

    try
    {
        $SCOMGroup = Get-SCOMGroup -DisplayName $GroupDisplayName
        if ($SCOMGroup)
        {
            Write-SCOMEventLog "Enumerating related monitoring objects from `"$GroupDisplayName`"" 600 "Information"
            $ClassInstances = $SCOMGroup.GetRelatedMonitoringObjects('Recursive')
            $Alerts = Get-SCOMAlert -ComputerName $SCOMComputerName -Instance $ClassInstances -ResolutionState (0) -Severity 2

            Write-SCOMEventLog "Forwarding $($Alerts.Count) alerts in `"$GroupDisplayName`"" 601 "Information"
            Foreach ($Alert in $Alerts)
            {
                Update-SCOMAlert $Alert $CustomFieldText $ResState
            }
        }
        else
        {
            Write-SCOMEventLog "No matching Group was found $GroupDisplayName" 700 "Error"
        }
    }
    catch [System.Exception]
    {
        Write-SCOMEventLog "Alert forwarding for group $GroupDisplayName failed`nException message: $($_.Exception.Message)" 703 "Error"
    }
}

# Connect to SCOM Management Server
try
{
    New-SCOMManagementGroupConnection -ComputerName $SCOMComputerName
    Write-SCOMEventLog "Connected to SCOM management server: $SCOMComputerName" 607 "Information"
}
catch [System.Exception]
{
    Write-SCOMEventLog "Unable to connect to $SCOMComputerName, terminating script...`nException message: $($_.Exception.Message)" 701 "Error"
    exit
}


$Starttime = Get-Date
Do
{
    $LoopStart = (Get-Date)
    # sleep for $SLEEPTIME seconds if no new alerts (no adjusting for time spent in loop)
    $SleepTimeInSeconds = $SLEEPTIME

    try
    {
        # load mapping rules
        $Rules = Import-Csv $RuleMappingFileLocation
    }
    catch [System.Exception]
    {
        Write-SCOMEventLog "Unable to load mapping rules`nException message: $($_.Exception.Message)" 705 "Error"
        $Rules = $null
    }

    # My groups starts with CB (filter for relevant computer groups here)
    $MyGroups = Get-SCOMGroup -DisplayName "CB - *" -ComputerName $SCOMComputerName | select -ExpandProperty DisplayName

    # Get number of new alerts with critical severity
    $AlertCount = ([array](Get-SCOMAlert -ComputerName $SCOMComputerName -ResolutionState (0) -Severity 2)).Count

    # only forward alerts if there are any new
    if($AlertCount -gt 0)
    {
        # Ping!
        Write-SCOMEventLog "Is Alive" 602 "Information"

        # forward alerts for different groups at a time - here 3 differently rated servers in terms of criticality
        Forward-SCOMAlert "High Criticality Servers" $HIGH 10
        Forward-SCOMAlert "Medium Criticality Servers" $MEDIUM 10
        Forward-SCOMAlert "Low Criticality Servers" $LOW 10
        Forward-SCOMAlert "Criticality Undefined" $NOT_DEFINED 10

        try
        {
            # handle remaining alerts
            $Alerts = [array](Get-SCOMAlert -ComputerName $SCOMComputerName -ResolutionState (0) -Severity 2)
            if($Alerts.Count -gt 0)
            {
                Write-SCOMEventLog "Forwarding $($Alerts.Count) remaining alerts (not member of group)" 603 "Information"
                Foreach ($Alert in $Alerts)
                {
                    Update-SCOMAlert $Alert $NOT_MEMBER_OF_GROUP 10
                }
            }
        }
        catch [System.Exception]
        {
            Write-SCOMEventLog "Alert forwarding for remaining alerts failed`nException message: $($_.Exception.Message)" 704 "Error"
        }

        $LoopEnd = (Get-Date)

        # adjust for time spent forwarding alerts
        $SleepTimeInSeconds = $SLEEPTIME - (New-TimeSpan -Start $LoopStart -End $LoopEnd).TotalSeconds
        Write-Host "sleep time $SleepTimeInSeconds"
        # account for a loop that takes longer than the default sleep time
        if($SleepTimeInSeconds -lt 0) { $SleepTimeInSeconds = 0 }
    }
    else
    {
        Write-SCOMEventLog "No new alerts" 604 "Information"
    }

    # in debug mode we only run the loop once
    if($DEBUG_MODE)
    {
        Write-Host "*DEBUG_MODE: Exiting loop"
        break
    }

    Write-SCOMEventLog "Sleeping for $SleepTimeInSeconds Seconds" 605 "Information"
    Start-Sleep -s $SleepTimeInSeconds
}
Until ((Get-Date).AddMinutes(-$LOOPTIME) -gt $Starttime)

Note that the script does not forward alerts as such; it simply updates the custom fields on the alert and sets the status to a custom alert status that a SCOM connector should then pick up on and lift the alert to Service Manager. This is a fairly trivial setup that I will not be covering here.

Now that the alert is updated in a way that allows us to identify which support group should be assigned to the incident we need to do the mapping. One way is to create a bunch of templates and use the SCOM alert connector, or we can just use Orchestrator!
I will not go through the details of how to do this, but here is an outline. Start by creating a "Monitor Object" activity that monitors the Operations Manager-Generated Incident class for new instances (trigger on new). Either invoke a different runbook (do not check "Wait for completion") or add the activities directly in the runbook (not at all best practice).
The runbook should first retrieve the SCOM incident (using Get Object) and then update the Support Group field with the value from custom field 3 (this is the field where the script puts the support group). Note that the support group must match one of the support groups listed in the incident tier queue list.

That's it. The alert has gone all the way from SCOM to an incident and the proper support group is assigned.

Obviously the alert mapping script must be scheduled to run automatically. It is designed so that one can run it on 2 (or more) SCOM management servers, but at different times. The script is set to run for 3 hours 55 minutes, where each loop is 240 seconds long. To provide high availability, schedule the script on both servers but at different times (with a 120-second offset). This makes the forwarding effectively run every 2 minutes as long as both servers are up. As the loop stops after 3 hours 55 minutes, the script should be restarted every 4 hours. This is a good alternative to a scheduled task that runs every 2 minutes (replicating the loop in the script).
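As a sketch, the offset scheduling could be set up with the ScheduledTasks module - one task on each management server, repeating every 4 hours, with start times 2 minutes apart. The task name, script path and start times below are placeholders:

```powershell
# Sketch - task name, script path and start times are placeholders, adjust as needed
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\Scripts\AlertMapping.ps1"

# On management server 1: start on the hour, repeat every 4 hours
$Trigger = New-ScheduledTaskTrigger -Once -At "00:00" -RepetitionInterval (New-TimeSpan -Hours 4) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName "SCOM Alert Forwarding" -Action $Action -Trigger $Trigger

# On management server 2: register the same task, but with -At "00:02" (the 120-second offset)
```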
The script requires more or less admin rights in SCOM. I was having issues running it with anything less (on top of needing some rights in the context of running a scheduled task).

I have had this lying around as a draft for weeks now. I have a million other things to do, so little time to keep polishing before publishing :D

Tuesday, September 2, 2014

Auto-approving review activities after x time

I recently needed to auto-approve a review activity if none of the assigned reviewers responded in a timely manner. As I was pressed for time I wanted something that I could implement fast and simple (like this blog-post).

The problem is then: A review activity has not been approved (status = completed) after x hours. When that time passes it should auto-approve (ie. set status to completed).

I cannot use a workflow because it is unable to detect this state. I also cannot use a monitor object activity in Orchestrator to detect this state. I was left with a scheduled PS script, but then it occurred to me: Subscription! The subscription is unable to apply a template and thereby set the status to completed, but I could send an email to Service Manager with the subject [RA1234] (the ID of the review activity) and the body: Automatically approved by Service Manager [Approved].

The criteria in the subscription are:
when meets criteria:
status = in progress
created date is less than or equal to [now-1d]

Now, I haven't actually gotten around to testing this, and it may need a bit of tweaking to get exactly right, but I am pretty confident it will work. Since the sender is the workflow account, which has admin rights, it should be able to approve the RA without being a reviewer. I will update here when I know more.
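For completeness, the approval mail could also be sent from a script or a runbook instead of a subscription. A sketch, where the SMTP server and both mail addresses are placeholders for your environment:

```powershell
# Sketch - SMTP server and addresses are placeholders
$RAID = "RA1234"   # ID of the review activity to approve
Send-MailMessage -SmtpServer "smtp.contoso.com" `
                 -From "workflow@contoso.com" `
                 -To "servicemanager@contoso.com" `
                 -Subject "[$RAID]" `
                 -Body "Automatically approved by Service Manager [Approved]"
```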

Read more on approving review activities via email here.

I have two more blog posts lined up: one on speeding up development using the SCSM SDK, and one on mapping support groups intelligently (and automatically) to SCOM-generated incidents using a fancy script and an Excel sheet!
I just need some time to polish both posts. Hope to get them out soon.

Tuesday, July 22, 2014

Restarting a workflow - Scripted

In my previous post I showed a way to restart a stuck Service Request workflow. Detecting stuck SRs via the console can be quite tedious, though, so I wrote a script that detects possibly stuck workflows. It is actually rather simple: loop over all relevant SRs and test whether there is no active activity and one or more pending activities.

Import-Module SMLets

# Activity statuses
$ContainsActivity = Get-SCSMRelationshipClass System.WorkItemContainsActivity
$InProgressStatus = Get-SCSMEnumeration ActivityStatusEnum.Active
$PendingStatus = Get-SCSMEnumeration ActivityStatusEnum.Ready
# SR in progress statuses
$SRStatusInProgressId = (Get-SCSMEnumeration ServiceRequestStatusEnum.InProgress$).Id
$SRStatusInProgressPendingId = (Get-SCSMEnumeration ServiceRequestStatusEnum.InProgress.PendingUserResponse).Id
$SRStatusInProgressUpdatedId = (Get-SCSMEnumeration ServiceRequestStatusEnum.InProgress.UpdatedByUser).Id

$Now = Get-Date
$Then = $Now.AddHours(-2)
# Get SRs with active status that has not been modified since X hours ago
$sCriteria = "(Status = '$SRStatusInProgressId' or Status = '$SRStatusInProgressPendingId' or Status = '$SRStatusInProgressUpdatedId') and LastModified < '$Then'"
$SRClass = Get-SCSMClass system.workitem.servicerequest$

$Criteria = New-Object "Microsoft.EnterpriseManagement.Common.EnterpriseManagementObjectCriteria" $sCriteria, $SRClass
# Get all SRs matching the critera
$SRs = Get-SCSMObject -Criteria $Criteria

foreach($SR in $SRs)
{
    $HasActivityInProgress = $false
    $HasPendingActivity = $false
    foreach($Activity in Get-SCSMRelatedObject -SMObject $SR -Relationship $ContainsActivity)
    {
        if($Activity.Status -eq $InProgressStatus)
        {
            $HasActivityInProgress = $true
        }
        elseif($Activity.Status -eq $PendingStatus)
        {
            $HasPendingActivity = $true
        }
    }

    # If true then the SR is possibly stuck with a pending activity
    if(-not $HasActivityInProgress -and $HasPendingActivity)
    {
        Write-Host $($SR.DisplayName)
    }
}

Next up is trying to get the workflow started again. One approach is to put the SR on hold, wait a bit (10-20 seconds), activate the SR, and optionally restore the original SR status (typically some custom status like "pending user response").

Import-Module SMLets

$SRID = 'SRxxxx';
$SR = Get-SCSMObject -Class (Get-SCSMClass system.workitem.servicerequest$) -Filter "DisplayName -like '$SRID*'"
$PrevStatus = $SR.Status
$StatusInProgress = Get-SCSMEnumeration ServiceRequestStatusEnum.InProgress$

# set on hold
$SR | Set-SCSMObject -PropertyHashtable @{Status = (Get-SCSMEnumeration ServiceRequestStatusEnum.OnHold)}

# wait for activities to go on hold
Start-Sleep -s 20

# resume SR
$SR | Set-SCSMObject -PropertyHashtable @{Status = $StatusInProgress}

if($PrevStatus -ne $StatusInProgress)
{
    # wait for activities to "reset"
    Start-Sleep -s 20
    # restore previous status
    Write-Host "Setting status to: $($PrevStatus.DisplayName)"
    $SR | Set-SCSMObject -PropertyHashtable @{Status = $PrevStatus}
}

Leave in a comment below how many stuck workflows you found ;)

Thursday, June 26, 2014

Managing activities and restarting a workflow - Hold and resume

I recently discovered a bug in the workflow engine by unintended use of an activity workflow. You can read more on Technet: Help reproducing a bug - Activity status stuck in pending mode, Service Request still "in progress"

There is however a simple way of kickstarting a Service Request or other work item with an activity flow - behold the "Put on Hold" task (pun intended).

This SR will be stuck with an MA in pending mode forever - or until we do something to fix it!
Now let's fix this bugger. Click the "Put on Hold" task to the right and click Ok. Wait a bit and the activity flow should look like this.
Click the "Resume" task and click Ok. The workflow engine will recalculate the flow and put the activity that was stuck into "In Progress".
This can fix a large number of workflows gone haywire. Tried adding activities to a Parallel Activity that is already in progress? Forget about it! They will never get out of pending mode. At least not before you put the request on hold and resume it.

You can also edit the flow (within reason) in ways you cannot while it is running. Suppose you have a sequential flow where you need to insert an activity before another activity already in progress. Just put the request on hold, add the activity, place it where needed and resume. The completed activities will be unchanged, but the first not-completed activity in line will be in progress and the rest following it will be pending.
Activities can also be deleted (useful when skipping is not a valid option - skip is available to admins only, though Rob Ford has a workaround for that). You cannot delete already completed activities - but really you can if you want to - read on...

Make sure the request is "In progress" - use "Return to activity" on the completed activity that you wish to delete. Note that all completed activities following it will also be un-completed. Click Ok to commit the change and reopen the request when the activity is "In Progress". Put the request on hold. The formerly completed activity is now subject to deletion. You really wanted to get rid of that activity, did you?
If you use "Return to activity" in a request that is on hold, the request will resume again.

Friday, May 30, 2014

Lend out IT-equipment in Service Manager using custom forms and console tasks - Part 2a

In this part I will be discussing console tasks that will allow a console operator to lend out an item as well as return it.

In order to expose the console task to the console we will need a MP telling Service Manager the necessary details. First off we will define when a console task should be shown. When using a console task in a form (a so called FormTask) we have access to an interface called IDataItem. Changes made using this interface will reflect immediately in the form (and we will not have to bother with saving the changes).
When calling a console task from a view we will be editing an EnterpriseManagementObject (or some variant thereof).

First off, I will limit the console tasks to only work from a view:

<Category ID="LendItemTaskHandler.DonotShowFormTask.Category" Target="LendItemTaskHandler" Value="Console!Microsoft.EnterpriseManagement.ServiceManager.UI.Console.DonotShowFormTask" />
<Category ID="ReturnItemTaskHandler.DonotShowFormTask.Category" Target="ReturnItemTaskHandler" Value="Console!Microsoft.EnterpriseManagement.ServiceManager.UI.Console.DonotShowFormTask" />

Next we define the console tasks. I will just show the code for the first one. The ID is the target we defined above, and the target of the console task is defined as a class, just like when doing type projections.
What we are doing is telling Microsoft.EnterpriseManagement.UI.SdkDataAccess.ConsoleTaskHandler to load the assembly CB.LendableItem.ConsoleTasks (the DLL file) when someone clicks the task in the console. The Type argument is the fully qualified name of the handler class, i.e. the class LendableTaskHandler defined in the namespace CB.LendableItem.TaskHandlers. Finally we provide a single argument, "LendItem", which we can look for in the code later on.

<ConsoleTask ID="LendItemTaskHandler" Accessibility="Public" Enabled="true" Target="LendableLibrary!CB.LendableItem" RequireOutput="false">
  <Argument Name="Assembly">CB.LendableItem.ConsoleTasks</Argument>
  <Argument Name="Type">CB.LendableItem.TaskHandlers.LendableTaskHandler</Argument>
  <Argument>LendItem</Argument>
</ConsoleTask>

The entire XML can be viewed here.

Next up is adding an empty project to the solution that contains the custom form. We call the project CB.LendableItem.ConsoleTasks (this will also be the name of the DLL). Go to project properties and change the output type to "Class Library" and make sure the target framework is .NET Framework 3.5. Optionally you can also sign the assembly in the Signing tab - the console will complain when executing console tasks from an unsigned assembly.

In order to avoid writing the same code over and over again when creating console tasks I use inheritance:

    class TaskHandler : ConsoleCommand
    {
        private IDataItem _dataItem;
        private EnterpriseManagementObject _emo;
        private EnterpriseManagementObjectProjection _emop;
        private EnterpriseManagementGroup _mg;

        public override void ExecuteCommand(IList<NavigationModelNodeBase> nodes, NavigationModelNodeTask task, ICollection<string> parameters)
        {
            base.ExecuteCommand(nodes, task, parameters);

            NavigationModelNodeBase node = nodes.First();

            //Get the server name to connect to
            String strServerName = Registry.GetValue("HKEY_CURRENT_USER\\Software\\Microsoft\\System Center\\2010\\Service Manager\\Console\\User Settings", "SDKServiceMachine", "localhost").ToString();

            //Connect to the server
            _mg = new EnterpriseManagementGroup(strServerName);

            if (nodes[0] is EnterpriseManagementObjectNode)
            {
                _emo = (nodes[0] as EnterpriseManagementObjectNode).SDKObject;
            }
            else if (nodes[0] is EnterpriseManagementObjectProjectionNode)
            {
                _emop = (EnterpriseManagementObjectProjection)(nodes[0] as EnterpriseManagementObjectProjectionNode).SDKObject;
                _emo = _emop.Object;
            }

            _dataItem = Microsoft.EnterpriseManagement.GenericForm.FormUtilities.Instance.GetFormDataContext(node);
        }

        public IDataItem DataItem
        {
            get { return _dataItem; }
        }

        public EnterpriseManagementObject ManagementObject
        {
            get { return _emo; }
        }

        public EnterpriseManagementObjectProjection ManagementObjectProjection
        {
            get { return _emop; }
        }

        public EnterpriseManagementGroup ManagementGroup
        {
            get { return _mg; }
        }
    }
What I have done here is create a generic TaskHandler. I can then simply inherit it like this

    class LendableTaskHandler : TaskHandler
    {
        // variables go here

        public override void ExecuteCommand(IList<NavigationModelNodeBase> nodes, NavigationModelNodeTask task, ICollection<string> parameters)
        {
            base.ExecuteCommand(nodes, task, parameters);
            // ...
        }
    }

And get on with the code specific to this console task. Before we continue we need to make sure we have a proper object projection through which we can access e.g. the user who borrowed an item.

// search criteria for ObjectProjectionCriteria
String sId = ManagementObject[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_ItemID"].Value.ToString();
String sLendableItemSearchCriteria = "";
sLendableItemSearchCriteria = String.Format(@"<Criteria xmlns=""http://Microsoft.EnterpriseManagement.Core.Criteria/"">" +
                "<Expression>" +
                "<SimpleExpression>" +
                    "<ValueExpressionLeft>" +
                    "<Property>$Context/Property[Type='CB.LendableItem']/CB_ItemID$</Property>" +
                    "</ValueExpressionLeft>" +
                    "<Operator>Equal</Operator>" +
                    "<ValueExpressionRight>" +
                    "<Value>" + sId + "</Value>" +
                    "</ValueExpressionRight>" +
                "</SimpleExpression>" +
                "</Expression>" +
                "</Criteria>");

ManagementPackTypeProjection mptpLendable = mpLendableItemLibrary.GetTypeProjection("TypeProjection.LendableItem");

ObjectProjectionCriteria opcLendable = new ObjectProjectionCriteria(sLendableItemSearchCriteria, mptpLendable, mpLendableItemLibrary, ManagementGroup);

IObjectProjectionReader<EnterpriseManagementObject> oprLendables =
    ManagementGroup.EntityObjects.GetObjectProjectionReader<EnterpriseManagementObject>(opcLendable, ObjectQueryOptions.Default);

_emop = oprLendables.First();

This is based on something Travis posted. In short we retrieve the item already provided to us in ExecuteCommand, but with the necessary type projections.

Remember the argument provided in the XML earlier? It can be accessed like this

if (parameters.Contains("LendItem"))
{
    LendItem(); // handler method names are illustrative
}
else if (parameters.Contains("ReturnItem"))
{
    ReturnItem();
}

When either of those two methods is done executing we refresh the view.

I will also set up some helper functions

public EnterpriseManagementSimpleObject GetCurrentStatus()
{
    return ManagementObject[mpcLendableItem, "CB_Status"];
}

I will be looking up the current status a lot. mpcLendableItem is defined in ExecuteCommand, and ManagementObject in the parent ExecuteCommand (the generic one).

I will also be in need of retrieving related users, such as the user who reserved the item

public EnterpriseManagementObject GetReservedByUser()
{
    ManagementPackRelationship mprReservedBy = mpLendableItemLibrary.GetRelationship("CB_ReservedBy");

    foreach (EnterpriseManagementRelationshipObject<EnterpriseManagementObject> obj in
        ManagementGroup.EntityObjects.GetRelationshipObjectsWhereSource<EnterpriseManagementObject>(ManagementObject.Id, TraversalDepth.OneLevel, ObjectQueryOptions.Default))
    {
        if (obj.RelationshipId == mprReservedBy.Id)
        {
            return obj.TargetObject;
        }
    }
    return null;
}

This is just an altered code snippet from Rob Ford.

Now let's get on with lending out an item. First I will validate that the item is actually lendable, i.e. someone reserved it and the status is 'Reserved'.

EnterpriseManagementSimpleObject currentStatusEMO = GetCurrentStatus();
EnterpriseManagementObject reservedBy = GetReservedByUser();

if (reservedBy != null && currentStatusEMO.ToString().Equals(mpEnumReserved.ToString()))
{
    // proceed with lending out the item
}

I am already using the helper functions! See this post on comparing enumerations.

Next we will be creating a 'borrowed' relationship between the user who reserved the item and the item.

EnterpriseManagementObjectProjection projection = ManagementObjectProjection;

ManagementPackRelationship mprBorrowedBy = mpLendableItemLibrary.GetRelationship("CB_BorrowedBy");

projection.Add(reservedBy, mprBorrowedBy.Target);

So we simply retrieve the projection defined earlier in this post and then add the relationship. Note that the relationship is defined as

<RelationshipType ID="CB_BorrowedBy" Accessibility="Public" Abstract="false" Base="System!System.Reference">
  <Source ID="Source_bad06373_9362_433d_be2f_adf7aa2b5912" MinCardinality="0" MaxCardinality="2147483647" Type="CB.LendableItem" />
  <Target ID="Target_87f8bbbd_5aba_4013_aaf1_b2f15c00addc" MinCardinality="0" MaxCardinality="1" Type="MicrosoftWindowsLibrary!Microsoft.AD.User" />
</RelationshipType>

which is why we use mprBorrowedBy.Target and not mprBorrowedBy.Source.

In order to avoid commit clashing (calling Commit on the same object in succession), properties in the projection are entered as

DateTime now = DateTime.Now;
projection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_BorrowedDate"].Value = now;

// must be returned within 28 days
projection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_ReturnDate"].Value = now.AddDays(28);

// status is now borrowed
projection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_Status"].Value = mpEnumBorrowed;

// commit on the projection will also commit the object
projection.Commit();

Returning an item is somewhat similar, except that we need to remove some relationships. This is what I ended up with:

EnterpriseManagementSimpleObject currentStatusEMO = GetCurrentStatus();
EnterpriseManagementObject borrowedBy = GetBorrowedByUser();

if (borrowedBy != null && currentStatusEMO.ToString().Equals(mpEnumBorrowed.ToString()))
{
    ManagementPackRelationship mprReservedBy = mpLendableItemLibrary.GetRelationship("CB_ReservedBy");
    ManagementPackRelationship mprBorrowedBy = mpLendableItemLibrary.GetRelationship("CB_BorrowedBy");

    // Remove the related users
    (ManagementObjectProjection[mprReservedBy.Target].First() as IComposableProjection).Remove();
    (ManagementObjectProjection[mprBorrowedBy.Target].First() as IComposableProjection).Remove();

    ManagementObjectProjection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_BorrowedDate"].Value = null;
    ManagementObjectProjection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_ReservedDate"].Value = null;
    ManagementObjectProjection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_ReturnDate"].Value = null;
    ManagementObjectProjection.Object[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_Status"].Value = mpEnumAvailable;
}

In part 2b I will be adding an offering on the portal allowing a user to reserve the item. I may also elaborate a bit on the current solution (e.g. returning items in bulk).

Full source-code available here.

Wednesday, May 28, 2014

Comparing enumeration values in a Service Manager Console Task

While working on part 2 of my adventure into Service Manager customization I came across a seemingly simple problem: comparing two enumeration values. I wanted to change the value bound to a custom configuration item if it had a specific value, and it seemed a bit lackluster to compare the DisplayNames of the two. I would rather compare GUIDs or something similarly unique.
After spending ages figuring out how to get the GUID out of the bound enumeration value, I found that the ToString method actually provided me with what I needed.

First a bit of setup. I need the MP that defines the custom class I made

ManagementPack mpLendableItemLibrary = ManagementGroup.ManagementPacks.GetManagementPack(new Guid("370a302c-9b0c-6c1a-033d-9b97f8406db5")); 

I also need the instance as an EnterpriseManagementObject. In the ExecuteCommand a list of nodes is provided (containing as many nodes as selected in a view, or just one in a form). Thus I can get it by

managementObject = node["$EMOInstance$"] as EnterpriseManagementObject;

Or (Suggested by Rob Ford)

if (node is EnterpriseManagementObjectNode)
{
    managementObject = (node as EnterpriseManagementObjectNode).SDKObject;
}
else if (node is EnterpriseManagementObjectProjectionNode)
{
    EnterpriseManagementObjectProjection emop = (EnterpriseManagementObjectProjection)(node as EnterpriseManagementObjectProjectionNode).SDKObject;
    managementObject = emop.Object;
}

I actually couldn't use the instance of IDataItem to do the comparison. The enumeration is defined in the same MP as the custom class.

// the 'Borrowed' value of the status enumeration (element name assumed)
ManagementPackEnumeration mpEnumBorrowed = mpLendableItemLibrary.GetEnumerations()
    .First(e => e.Name.EndsWith("Borrowed"));

I can get the property CB_Status by

EnterpriseManagementSimpleObject currentStatusEMO = managementObject[mpLendableItemLibrary.GetClass("CB.LendableItem"), "CB_Status"];

And finally I can do the comparison (both ToString methods provide me with the name of the enumeration which also must be unique).

if (currentStatusEMO.ToString().Equals(mpEnumBorrowed.ToString()))
{
    // do something!
}

Thursday, May 15, 2014

Lend out IT-equipment in Service Manager using custom forms and console tasks - Part 1

Inspired by John Hennen's Building Custom Forms for Service Manager with Visual Studio I will give an example that is somewhat closer to a real-life Service Manager customization. Many IT departments lend out equipment to employees. One could use something like a service request to keep track of who has borrowed what, but besides the fact that a service request shouldn't be long-lived by design, a separate system (be that post-it notes or something more advanced) is needed to keep track of the equipment. So what we wish for from a Service Manager customization is
  1. Users can browse and reserve equipment on the self service portal
  2. Items can be managed in the console (details will follow)
First we will create a new custom class based on the configuration item class with the following properties and relationships:
  • Borrowed Date - Datetime - The date the item was borrowed by a user
  • Reserved Date - Datetime - The date the item was reserved by a user
  • Return Date - Datetime - The date the item must be returned by a user
  • Status - List - A list of the different states an item can be in: Available, Reserved, Borrowed, Overdue
  • Reserved by - Relationship - The user the item is reserved by
  • Borrowed by - Relationship - The user the item is borrowed by
We then create a type projection exposing these two relationships allowing us to easily access these when building the form.
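The class definition above could be sketched in MP XML roughly as follows. This is only an illustration - the element IDs, the base-class alias and the enumeration layout are assumptions on my part; the actual definitions are in the management pack linked at the end of the post:

```xml
<!-- sketch only: IDs, aliases and the enumeration layout are assumptions -->
<ClassType ID="CB.LendableItem" Accessibility="Public" Abstract="false"
           Base="System!System.ConfigItem" Hosted="false" Singleton="false">
  <Property ID="CB_BorrowedDate" Type="datetime" />
  <Property ID="CB_ReservedDate" Type="datetime" />
  <Property ID="CB_ReturnDate" Type="datetime" />
  <Property ID="CB_Status" Type="enum" EnumType="CB.LendableItem.StatusEnum" />
</ClassType>
```

The two relationships are based on System!System.Reference with the item as source and an AD user as target.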

We need a custom form that can display all of these properties (and more), and console tasks to manage them:
  • Borrow item - Changes the status to 'Borrowed' and updates the 'Borrowed by' relationship.
  • Return item - Changes the status to 'Available' and deletes the 'Borrowed by' relationship.
  • Reset item - Sets all properties to default values and removes relationships.
We also need a runbook that reserves the requested item. For the sake of it I will be using SMA (or die trying).

Enough talk, more action! First I created the custom form. You can view the entire XAML-code here.

It looks like this btw:

It seems that there is currently a bug in WPFToolkit, where the DatePicker resides, which makes the "Show Calendar" button look greyed out as if it was disabled.
This is supposedly a fix to the issue, but the datepickers are still wrong in my implementation. Bummer :(

We will be using the form primarily for viewing and not editing. For this purpose I will create a few console tasks.

As explained by John, one will need to target the custom form at a type projection in order to access class relationships directly using XAML. The class definition is described here, along with type projections and values for the status enumeration.
The custom form is defined in XML here. Note that I have signed the assembly using the same key as I use for signing management packs. This can be done in Visual Studio in properties for a project in the signing tab. Check the "Sign the assembly" box and select the key to sign with. I am also signing all MPs except the one containing views.
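To illustrate what targeting the projection enables: a relationship can be bound directly in XAML via its alias from the type projection, no code-behind needed. The IsReservedBy alias below comes from my type projection; the exact controls are in the linked XAML:

```xml
<!-- the IsReservedBy alias is exposed by the type projection the form targets -->
<scwpf:UserPicker User="{Binding Path=IsReservedBy, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
```

Plain class properties such as CB_Status bind the same way, just without an alias.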

All source code can be found here, and a ready to import MP-bundle here.

In part 2 I will be doing console tasks and putting an offering on the portal for end-users to request reservation of an item. In part 3 I will attempt to add an easy-to-view history to the custom form that shows who reserved or borrowed an item in the past.

I just realized I was not using a UserPicker, the obvious choice for picking users, DOH! Simply use this code in place of the SingleInstancePicker
<scwpf:UserPicker User="{Binding Path=IsReservedBy, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"/>
