Thursday, April 9, 2015

Script to Monitor Service Manager Workflows

I was asked to create a better way to check for workflow failures in Service Manager. If you use Operations Manager to monitor SCSM workflows, you know there is a single rule that throws an alert for every failure. I didn't like this. Instead, this script provides a summary of any failures since the last time it checked.

The script I am posting is meant to be scheduled or run ad-hoc, or it can be modified to drop into SCOM - property bags and such.

The script could have been a lot smaller if I connected directly to the database, but people don't seem to like doing that, so it uses the NATIVE SCSM powershell module. No SMLETS needed.

This script should be run on the workflow server.

You can also throw a -Verbose behind it so you can see what it is actually doing. Modify to your heart's content. I am not putting this into the Microsoft Gallery, because it would need some cleanup and such.

The first time you run it, it will look for a workflow status log file and report all failures. It will then save the most recent status Id in the log file and, on subsequent runs, only report failures since that status Id.
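The bookkeeping the script does on each log line - split on commas, check the field count, and confirm the instance Id parses as an integer - can be sketched in isolation (the line below is hypothetical sample data, not output from a real workflow):

```powershell
# Hypothetical log line in the "WorkflowName,WorkflowId,WorkflowInstanceId" format
$line = "MyWorkflow,3f2504e0-4f89-11d3-9a0c-0305e82c3301,42"

# Split on commas and confirm there are exactly three fields
$parts = $line.Split(",")
$hasThreeFields = ($parts.Count -eq 3)

# [int32]::TryParse works on every .NET version, which is why the script prefers it
[int]$intRef = 0
$instanceIdIsInt = [int32]::TryParse($parts[2], [ref]$intRef)

if ($hasThreeFields -and $instanceIdIsInt) {
    Write-Output "Valid line; last status row id was $intRef"
}
```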





param(
  [Parameter(Mandatory=$False)][string] $LogFilePathandName = ".\MonitorWorkflowFailuresLog.txt"
  )
write-verbose "Starting Script"
#Write-EventLog -LogName "Operations Manager" -Source "Health Service Script" -EntryType Information -EventID 12345 -Message "Script Starting"
[int]$IntRef = $null
$ScriptRunTime = get-date
Write-verbose  @"
Parameters Used:
  LogFilePathandName = $LogFilePathandName
"@
<#
We need to create a log file entry for each workflow and keep track of the latest workflow instance ID.
This way, we only check the latest IDs.
#>




if ((get-childItem $LogFilepathandName -ErrorAction silentlycontinue) -eq $null) 
  {
    Write-verbose "Could not find a log file. A new log file will be created"
    New-Item $LogFilePathandName -type file -value "WorkflowName,WorkflowId,WorkflowInstanceId
sample,123,123
"
  }
  else 
    {
<#
If a log file has been found, it needs to be in the correct format.
The format should be:
WorkflowName,WorkflowId,WorkflowInstanceId
ThisisaWorkflowName,1234,2134123
Thisisaworkflownametoo,123425,43251
#>

write-verbose @"
Found a Log File, make sure the Log file is in the following format:
WorkflowName,WorkflowId,WorkflowInstanceId
ThisisaWorkflowName,1234,2134123
Thisisaworkflownametoo,123425,43251
"@

      #check to make sure the first line is the header
      $RetrievedLogFileFirstTwoLines = Get-Content $LogFilePathandName -totalcount 2
      $RetrievedLogFileHeaderLine = $RetrievedLogFileFirstTwoLines[0]
      if ($RetrievedLogFileHeaderLine -eq "WorkflowName,WorkflowId,WorkflowInstanceId")
        {
          write-verbose "The Log file header is in the expected State - $RetrievedLogFileHeaderLine"
          $RetrievedLogFileHeaderLineInExpectedState = $true
          <#The header is in the expected state, so let's make sure there is at least one data line to compare.
          #We need to take the first data line and see if it is in a "string,string,int" format.
          #We really just want to make sure items 2 and 3 (indexes 1 and 2 in the array) can be converted#>
          $RetrievedLogFileFirstDataLine = $RetrievedLogFileFirstTwoLines[1]
          write-verbose "Checking RetrievedLogFileFirstDataLine - $RetrievedLogFileFirstDataLine"
          $RetrievedLogFileFirstDataLineSplit = $RetrievedLogFileFirstDataLine.Split(",")
          Write-verbose "Checking to see if there are three array items in the file Line"
          if ($RetrievedLogFileFirstDataLineSplit.Count -eq 3)
            {
write-verbose "Count is 3 for data line split"
#There are three comma Delimited Items. Can Items 2 and 3 (1 and 2) be converted to an integer?
<##NONE OF THIS WORKS IN .NET 3.5 OR EARLIER
[System.Guid]::Parse(($RetrievedLogFileFirstDataLineSplit[1])) | Out-Null
Try {
$RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState = [System.Guid]::Parse(($RetrievedLogFileFirstDataLineSplit[1])) | Out-Null # test if is possible to cast and put parsed value in reference variable
$RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState = $true
 }
 catch {
$RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState = $false
}
#>
<#Guid.TryParse is not available before .NET 4, so the GUID check is skipped here and the WorkflowId is assumed to be valid.#>
write-verbose "Skipping the WorkflowId GUID check; assuming it is valid"
$RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState = $true
write-verbose "Checking to see if WorkflowInstanceId is in an int State"
$RetrievedLogFileFirstDataLineWorkflowInstanceIdInExpectedState = [int32]::TryParse(($RetrievedLogFileFirstDataLineSplit[2]) , [ref]$IntRef) # test whether the value can be parsed; the parsed value lands in the reference variable
 if ($RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState -eq $true -and $RetrievedLogFileFirstDataLineWorkflowInstanceIdInExpectedState -eq $true)
{
 #The First Data line integers are in the expected state.
 $RetrievedLogFileFirstDataLineInExpectedState = $true
 write-verbose "The First Data line integers are in the expected state."
}
else
 {
#Either RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState or RetrievedLogFileFirstDataLineWorkflowInstanceIdInExpectedState
#was not true, therefore the first data line was not in the correct format
$RetrievedLogFileFirstDataLineInExpectedState = $false
write-verbose "RetrievedLogFileFirstDataLineWorkflowInstanceIdInExpectedState is $RetrievedLogFileFirstDataLineWorkflowInstanceIdInExpectedState"
write-verbose "RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState $RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState"
write-verbose "Either RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState or RetrievedLogFileFirstDataLineWorkflowInstanceIdInExpectedState was not true, therefore the first data line was not in the correct format"
 }
}
else
 {
#RetrievedLogFileFirstDataLineSplit did not have 3 array items, therefore was not in the correct format
$RetrievedLogFileFirstDataLineInExpectedState = $false
write-verbose "RetrievedLogFileFirstDataLineSplit did not have 3 array items, therefore was not in the correct format"
 }
}
else
{
 $RetrievedLogFileHeaderLineInExpectedState = $false
 write-verbose "RetrievedLogFileHeaderLine was not in the expected State"
}
}
if ($RetrievedLogFileHeaderLineInExpectedState -eq $false -or $RetrievedLogFileFirstDataLineInExpectedState -eq $false)
 {
#Need to write the log file and then exit.
write-error "Something was false. RetrievedLogFileHeaderLineInExpectedState is $RetrievedLogFileHeaderLineInExpectedState and RetrievedLogFileFirstDataLineInExpectedState is $RetrievedLogFileFirstDataLineInExpectedState. There is a formatting issue; the script is stopping."
exit
 }
<#Now that the Log file is taken care of, we need to get all of the workflows from Service Manager#>
write-verbose "About to Import the System.Center.Service.Manager Module, checking to see if it is already imported"
if ((get-module -name "System.Center.Service.Manager") -eq $null)
 {
write-verbose "System.Center.Service.Manager is not imported, importing now"
#Write-EventLog -LogName "Operations Manager" -Source "Health Service Script" -EntryType Information -EventID 12345 -Message $Log
import-module -force "C:\Program Files\Microsoft System Center 2012\Service Manager\Powershell\System.Center.Service.Manager.psd1"
 }
 
 if ((get-module -name "System.Center.Service.Manager") -eq $null)
 {
write-error "System.Center.Service.Manager did not import, cannot continue, exiting."
exit
 }
#declare the failure array
$FailureArray = @()
#load the entire log file
write-verbose "Load the log file $LogFilePathandName"
$EntireLogFile = Get-Content $LogFilePathandName
# Save all WF into $workflow
$workflow = Get-SCSMWorkflowStatus
# Loop $workflow
write-verbose "Looping through each workflow"
foreach ($wf in $workflow) {
write-verbose $wf.Name
write-verbose "Searching for the workflow in the log file"
#If the workflow is found in the log file, get the most recent status id; otherwise, make the status id 0
$WorkflowFromLogFile = $null
$wfid = $wf.id
[array]$WorkflowFromLogFile = $EntireLogFile | ? {$_ -like "*,$wfId,*"}
if ($WorkflowFromLogFile -eq $null)
 {
write-verbose "The Workflow was not found in the log File, adding a line in the log file for the workflow."
[int]$LastStatusRowId = 0
$NewWorkflowLineforLogFile = [string]$wf.Name + "," + [string]$wf.id + "," + [string]0
Add-Content $LogFilePathandName "$NewWorkflowLineforLogFile"
#Track the new line so the line replacement further down has a real pattern to match
[array]$WorkflowFromLogFile = $NewWorkflowLineforLogFile
$WorkflowFromLogFileSplit = $NewWorkflowLineforLogFile.split(",")
$LogLineWorkflowInstanceIdCovertableStatus = "GOOD"
$LogLineStatus = "GOOD"
 }
 else
{
 if ($WorkflowFromLogFile.count -gt 1)
{
 #found more than one line, throw error for now
 write-error "Found more than one line in the log file for the workflow. This means that the workflow will be checked two times, which does not cause errors. However, it would be a good idea to find and remove the duplicate."
 $LogLineStatus = "GOOD"
}
 if ($WorkflowFromLogFile.count -eq 1)
{
 write-verbose "Found one line for the workflow in the log file."
 $LogLineStatus = "GOOD"
 #We need to get the WorkflowStatusRowId from the log file. To do this, we can split the line from the log file into an array and look at the third item (array index 2).
 $WorkflowFromLogFileSplit = $WorkflowFromLogFile[0].split(",")
  if ([int32]::TryParse(($WorkflowFromLogFileSplit[2]) , [ref]$IntRef) -eq $true)
{
 [int]$LastStatusRowId = $WorkflowFromLogFileSplit[2]
 $LogLineWorkflowInstanceIdCovertableStatus = "GOOD"
}
else 
 {
#The id would not convert to an integer; there is a problem with the log file. Skip and report the error, but do not kill the script.
write-error  "The id would not convert to an integer, there is a problem with the log file. Skip and report the error, the script will continue."
$LogLineWorkflowInstanceIdCovertableStatus = "BAD"
 }
}
 } # this is the else, where it found a line
 
if ($LogLineWorkflowInstanceIdCovertableStatus -eq "GOOD" -and $LogLineStatus -eq "GOOD")
 {
write-verbose "Get the workflow status"
$status = Get-SCSMWorkflowStatus -name $wf.Name
$status = $status.GetStatus()
$status = $status | ? {[int]$_.RowId -gt $LastStatusRowId}
#Get the last Row Id for the status
$MaximumStatusRowId = ($status | measure-object -Property RowId -maximum).maximum

write-verbose "Check if the workflow has run since the last check"
if ($MaximumStatusRowId -eq $null)
 {
  write-verbose "No need to update anything on the log file here. The max ID is the same."
 }
 else 
{
 write-verbose "Replace the line in the text file with a new line of the same data"
 $NewWorkflowFromLogFile = $WorkflowFromLogFile -replace ",$LastStatusRowId",",$MaximumStatusRowId"
 write-verbose "THIS IS THE NEW FILE ENTRY:"
 write-verbose "$NewWorkflowFromLogFile"
 write-verbose "Setting Content"
 (Get-Content $LogFilePathandName) | Foreach-Object {$_ -replace [regex]::Escape($WorkflowFromLogFile[0]), $NewWorkflowFromLogFile[0]} | Set-Content $LogFilePathandName -verbose
 foreach ($st in $status) 
{
[string]$strowid = ($st.RowId)
write-verbose "checking each status in the workflow and adding the FailureArray if failed. Status Row Id: $strowid"
#If the status is failed, we need to report it. Rather than reporting a failure for each workflow, we are going to summarize it for all of them.
#So we are going to add them all to an array.
 if ($st.status -eq "Failed") 
 {
#$Log = $wf.name
##Write-EventLog -LogName "Operations Manager" -Source "Health Service Script" -EntryType Information -EventID 12345 -Message "Added to FailureLog $Log"
$object = New-Object -TypeName PSObject
$object | Add-Member -Name 'Name' -MemberType Noteproperty -Value $wf.name
$object | Add-Member -Name 'Status' -MemberType Noteproperty -Value $st.status
$object | Add-Member -Name 'TimeStarted' -MemberType Noteproperty -Value $st.TimeStarted
$object | Add-Member -Name 'TimeFinished' -MemberType Noteproperty -Value $st.TimeFinished
$object | Add-Member -Name 'RelatedObject' -MemberType Noteproperty -Value $st.RelatedObject
$FailureArray += $object
 }
}     
}
 } #If everything is good statement
write-verbose "Next Workflow"
} #foreach workflow statement


 #$FailureArray | foreach {[string]$FailureArrayString = [string]$FailureArrayString + $_ + "`n"}
 $FailureArray
 $FailureArray | ConvertTo-HTML | Out-File .\Report.htm
 Invoke-Item .\Report.htm # open the report with its associated application
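If you want the GUID check back without [System.Guid]::Parse (which the script notes does not work on .NET 3.5 or earlier), a regular-expression test behaves the same on every version. This helper is my own sketch, not part of the original script:

```powershell
# Regex-based GUID validation that works on any .NET version.
# Accepts the plain 8-4-4-4-12 hex form, i.e. the WorkflowId field of the log file.
function Test-IsGuid {
    param([string]$Value)
    return $Value -match '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'
}

Test-IsGuid "3f2504e0-4f89-11d3-9a0c-0305e82c3301"   # True
Test-IsGuid "not-a-guid"                             # False
```

You could drop this in where the script currently hardcodes $RetrievedLogFileFirstDataLineWorkFlowIdInExpectedState to $true.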

Wednesday, September 3, 2014

Tip: Approve all In Progress Activities in Service Manager

Stop manually approving each test review activity. Service Manager implementations usually include immense amounts of testing. If you are testing Service Requests or Change Requests, you probably have tons of review activities to approve. It can be time consuming to approve each activity, because, unlike manual activities, we can't just select them all and complete them. We have the option of deleting or canceling the work items, but this isn't really testing.

SMLETS and powershell make our lives much easier when it comes to administration. However, review activity relationships are slightly different from most other relationships. It can sometimes be difficult to figure out what to do when it comes to powershell and reviewers.

The below powershell script retrieves all in-progress review activities, retrieves the reviewers for those activities, and then sets each decision to approved. This allows the review activity to evaluate the decisions and then complete itself - exactly as it would happen inside the console.

If you don't care how it works, then just grab the script and run it on your management server where you have SMLETS installed.
If you want to learn a little and become a better Service Manager Administrator, I have broken down the script.
If you don't have SMLETS, you can get it from Codeplex.

The script with explanation comments:
#This is pretty self-explanatory, but we are importing SMLETS.
Import-Module SMLETS
#Before we can retrieve any objects, we need to get the object class. The Object class we are looking for is "ReviewActivity". Why the "$" (dollar sign) at the end? As part of this particular cmdlet,  the search is using regexp. The "$" marks the end of the line. We use this because there are cases where the cmdlet would retrieve more than one object class due to regexp matching.
$RAC = get-SCSMClass System.WorkItem.Activity.ReviewActivity$
#We need to filter our object results to only "In Progress" review activities. "Activity Status" is an enumeration.
$ActStatusEnumInProgress = Get-SCSMEnumeration ActivityStatusEnum.Active$
#We need the GUID of the enumeration so our "filter" switch will retrieve the correct results.
$InProgressEnumId = $ActStatusEnumInProgress.id
#Here, we are querying the review activity objects. The "class" switch specifies what class we want, in this case, "review activity".
#The "Filter" switch is a server side filter that is far more efficient than "Where-object". We can filter on any column for that object.
$RAS = get-SCSMObject -class $RAC -filter "Status -eq '$InProgressEnumId'"
#So far, we should have all of the Review activities that are in progress. You can type $RAS to see a list of the review activities.
#We probably have more than one activity in an array, but we need to perform actions on each individual object. 
foreach ($RA in $RAS){
  #We are going to retrieve any relationships for each activity, where the activity is the source of the relationship.
  $RElObj = Get-SCSMRelationshipObject -BySource $RA
  #We do not want all relationships, only the reviewer relationship.
  foreach ($Obj in $RELObj) {
    if ($Obj.TargetObject.ClassName -eq "System.Reviewer") {
      #Now we are getting the reviewer object itself, but rather than specifying class and filter, we have the GUID. We can use the "id" switch.
      #Once we get the object, we are piping the object into the "Set-SCSMObject" command, which will update the object. The only thing we have to do is set the status to approved for each of the reviewers and internal SCSM workflow will take care of the rest.
      #The command below will not actually make any changes. The "whatif" switch is a powershell switch that essentially tells you what the command is going to do, but does not commit the command. This is a great way to test non-destructively.
      #Many commands perform actions without output, so it is sometimes difficult to see what is going on. The "verbose" switch will output additional details at command execution.
      #If you are ready to execute the command and approve your activities, simply remove "-whatif".
      get-SCSMObject -id ($Obj.Targetobject.ID) | Set-SCSMObject -Property Decision -Value "Approved" -whatif -verbose
    }
  }
}

The Script with no comments:
Import-Module SMLETS
$RAC = get-SCSMClass System.WorkItem.Activity.ReviewActivity$
$ActStatusEnumInProgress = Get-SCSMEnumeration ActivityStatusEnum.Active$
$InProgressEnumId = $ActStatusEnumInProgress.id
$RAS = get-SCSMObject -class $RAC -filter "Status -eq '$InProgressEnumId'"
foreach ($RA in $RAS){
  $RElObj = Get-SCSMRelationshipObject -BySource $RA
  foreach ($Obj in $RELObj) {
    if ($Obj.TargetObject.ClassName -eq "System.Reviewer") {
      get-SCSMObject -id ($Obj.Targetobject.ID) | Set-SCSMObject -Property Decision -Value "Approved" -whatif -verbose
    }
  }
}

When you begin writing powershell scripts, it is a good idea to keep them in a repository, as you will most likely need them more than once, especially when it comes to System Center. If you want to talk more about powershell or System Center, come see me at Techfest Saturday, September 13th at the Sparkhound booth!
Techfest Registration: Techfest Registration
Techfest Site: Houston Techfest

Friday, August 1, 2014

Orchestrator - Misrepresented and Misunderstood

Purpose:
1.       Talk about the misrepresentation of Orchestrator.
2.       Point out some things about Orchestrator that people don't quite understand.
3.       Convince people to give Orchestrator a try.

Summary:
System Center Orchestrator is a product in the System Center Suite. If you have heard of Orchestrator, you have probably heard it along with other buzzwords like "automation" and "cloud." This text is not about writing a cool runbook, automation, pitching cloud services, or discussing how System Center can solve all of the world's IT problems (it can though). I want to discuss a misconstrued view of Orchestrator that I continue to see over and over, which I think prevents IT Organizations from using it or trying it out.

I think that Orchestrator is both misrepresented and misunderstood.

What was my original perception of Orchestrator?

As an Operations Manager and Service Manager guy, I knew that I could build automation for nearly anything without ever relying on any products outside those two systems. I have been using Operations Manager to automate IT tasks, and perform cleanup as a result of alerts, for years.

Service Manager provides the tools to take a ticket system and turn it into an automation machine using the authoring console, Powershell, and the native flexibility of the system.

So when Microsoft began touting Orchestrator, my first reaction was, “why in the world do I need that?” The simple answer is, I don't…

As a matter of fact, I have never had to use Orchestrator for anything…ever. I still haven't found a task, workflow, process, or anything else I couldn't complete by using the other System Center products that were already implemented, even when they were communicating outside the realm of Microsoft. (Thanks PowerShell!)

So, in a sense, Orchestrator almost seems like a novelty - an unnecessary System Center Product…but it's not.

Why use a complex system when a simple system is available?

Let's take a look at Operations Manager and Service Manager. We know they both can do just about anything that we want in terms of process and automation. When you get down deep, both products can get somewhat complicated. They both run complex operations on the back-end, which allows them to present a simple front-end. I mean, have you looked at the databases or stored procedures? My point is, there is a lot going on that we don't see. Even without much of anything configured, the systems are churning away.

The beauty in Orchestrator is its back-end simplicity. What is Orchestrator doing? Nothing. A little maintenance here or there, checking schedules, checking the clock, etc. What are the others doing? Constantly performing complex operations, even at base load. And what are they doing when you begin automating your processes, performing remediation, or waiting for a criteria match? Even more. So much, in fact, that you can easily drop 40 GB of RAM or more, with several separated disks, on the SQL server just to maintain an acceptable level of performance while the systems are performing their operations.

So why are so many people taking such complex operations and stuffing them full of more complex operations creating a memory and disk eater, when we have this nice, little calculator waiting for its command? A nice little calculator that will do anything you tell it to do, and communicate with any machine you want to communicate with. Yet a lot of people still don't use it.

I think the answer is simply misrepresentation.

Everyone talks about all the cool, complex things Orchestrator can do, all the systems it can talk to, and all the integration packs that are available. It actually sounds kind of scary. I mean, who has time to do all that?

But, at its base, Orchestrator is not much more than a scheduling engine - a simple calculator.

"Simple" is what Orchestrator was meant to be all along. Take a complex operation and break it down into simple steps.

Do this…

o    Find something you would like to do – such as automation, remediation, communication between 2 systems. Whatever it is, start out with a goal.
o    Document the exact technical process that should happen on paper. If you can't write your process on paper, you can't write it with a computer.
o    Read through Kevin Holman's Orchestrator quick start guide.
o    Take 15 minutes and install Orchestrator 2012 R2.
o    Take another 10 minutes to download and install the integration packs.
o    Create a Runbook

Remember this…
1.       You will stumble through your first runbook, but keep at it, it will get easier.
2.       The more complex operations you remove from other systems and enter into Orchestrator, the easier it will be to maintain, document, and transfer knowledge.
3.       Don't make it complicated. Don't write a giant PowerShell script and enter it into Orchestrator; this defeats the purpose of simplicity.  Break out your steps into multiple activities.

Tuesday, April 22, 2014

Query ALL Service Manager ENUMS and their Hierarchy

I find myself listing out all of the enumerations for lists in Service Manager quite a bit. Rather than spending time doing this over and over, I wrote a query that retrieves all of the enumeration items from Service Manager. I tried to keep it simple so anyone could adjust to his or her needs. It does not require the DW, as I am pulling directly from the ServiceManager database.

   
  SELECT [EnumType].[EnumTypeId] AS Id,
      [EnumType].[ManagementPackId] AS ManagementPackId,
      ep.EnumTypeName,
      [EnumType].[EnumTypeName] AS Name,
      [EnumType].[EnumTypeAccessibility] AS Accessibility,
      [EnumType].[ParentEnumTypeId] AS ParentId,
      DisplayName
  into #eview
  FROM dbo.EnumType
  LEFT Join dbo.EnumType ep on EnumType.EnumTypeId = ep.EnumTypeId and ep.ParentEnumTypeId IS NULL
  LEFT OUTER JOIN DisplayStringView DS1 ON DS1.LTStringId = dbo.[EnumType].[EnumTypeId] AND DS1.LanguageCode = 'ENU'
 
  INNER JOIN dbo.ManagementPack
   ON dbo.ManagementPack.ManagementPackId = [EnumType].ManagementPackId AND dbo.ManagementPack.ContentReadable = 1;
   
  with tree as (
  SELECT ManagementPackid, Id, name,
  cast(DisplayName as varchar(max)) as Hierarchy,
  DisplayName,
  ParentId
  FROM #eview
  Where ParentId IS NULL and displayName IS NOT NULL
  UNION ALL
  SELECT c.ManagementPackId, c.Id, c.name,
  p.hierarchy + ', ' + cast(c.DisplayName as varchar(max)),
  c.DisplayName, c.ParentId
  FROM #eview c
  join tree p on p.Id = c.parentID
  WHERE c.displayName IS NOT NULL
  )
select ManagementPackid, parentid, Name, Hierarchy, DisplayName
from tree
order by 3
drop table #eview

Thursday, February 13, 2014

Get Parent Affected User for Notifications

This is more of a note for myself. For an activity, this will get the affected user of the parent work item.

$Context/Path[Relationship='CoreActivity!System.WorkItemContainsActivity' SeedRole='Target' TypeConstraint='WorkItem!System.WorkItem']/Path[Relationship='WorkItem!System.WorkItemAffectedUser' TypeConstraint='System!System.User']/Property[Type='System!System.User']/FirstName$

Obviously it is only the first name and the references will need to be changed to match the reference alias in the MP.
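For example, swapping the trailing property reference returns the affected user's last name instead (LastName here is just an illustration; substitute whatever property of System.User the notification needs):

```
$Context/Path[Relationship='CoreActivity!System.WorkItemContainsActivity' SeedRole='Target' TypeConstraint='WorkItem!System.WorkItem']/Path[Relationship='WorkItem!System.WorkItemAffectedUser' TypeConstraint='System!System.User']/Property[Type='System!System.User']/LastName$
```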

Tuesday, September 3, 2013

SCSM 2012: Self Service Portal Service category color customization

This is a great solution and worthy of a repost.

http://www.expiscornovus.com/2012/05/06/scsm2012-self-service-portal-service-category-color-customization/


SCSM 2012: Self Service Portal Service category color customization

Lately I’ve been wandering a bit more on the Technet Forums, and it has been pretty useful. Friday I came across this thread from Bart Timmermans about customization in the Self Service Portal of System Center Service Manager 2012. He asked if it was possible to adjust the styling of Service category headers. Of course I accepted the challenge.
Analysis
The Self Service Portal is a solution on SharePoint 2010 which deploys some web parts. As Travis Wright described in his latest Self Service Portal blog post, those web parts use Silverlight .xap files. After some .NET Reflector work on the Portal.BasicResources DLL, I found that a lot of the color styling for the portal is done using brushes.
Brushes
A brush is a Silverlight object which can be used to paint, for example, solid colors or linear gradients. In that DLL I found a .xaml file which defined some SolidColorBrushes, each with a key. In Silverlight they use an 8-digit notation for the color; this is an RGBA value.
Settings.xml
The Self Service Portal actually has a Settings.xml file which can be used to define some basic settings. I noticed it also had some setting keys for colors. This prompted me to add a key for one of the brushes, ExpanderHeaderBgBrush. My attempt worked. After adjusting the Settings.xml and clearing my browser's cache I saw a new green color!
Service Category background color
Solution
1. Go to C:\inetpub\wwwroot\System Center Service Manager Portal\ContentHost\Clientbin (or another location if your installation directory was different)
2. Open the Settings.xml file
3. Add a setting key for ExpanderHeaderBgBrush with your desired RGBA color:
ExpanderHeaderBgBrush setting key
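The screenshot of the entry didn't survive the repost, but based on the description above (a setting key holding an 8-digit RGBA color), the added line looks something like this hypothetical fragment; match the element shape of the keys already in your Settings.xml:

```xml
<!-- Hypothetical entry; the key name is from the post, the color value is an example RGBA -->
<setting key="ExpanderHeaderBgBrush" value="2E8B57FF" />
```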
Happy customizing!

Tuesday, July 23, 2013

Isolating Powershell Sessions in Workflows in Service Manager

Problem:
One common issue I have run into when writing workflows for Service Manager is that the powershell sessions seem to be shared. When I call a set of cmdlets, such as the Active Directory cmdlets, they do not always load/unload properly, causing issues with subsequent scripts.

Solution:
We can isolate the powershell sessions inside the Service Manager Workflows. While powershell experts may know this, most of us don't, so here is my non-expert explanation.

  • When you run Service Manager Workflows, they run in the same process. 
  • If these workflows are running powershell, the powershell sessions are sometimes (or always) shared. 
  • By "running powershell inside a powershell" we can isolate our scripts, preventing issues between shared sessions.
  • Once the script is complete, it cleans itself up, and completely closes the sessions.
The only issue I see with this method is that it might take the workflow a second or two longer to run because of having to open the new session - plan accordingly.

How we do it:
Take your completed script, and simply wrap the script in powershell.
For example, if my script is: 

Import-Module ActiveDirectory
Get-ADUser -Filter *

I simply wrap it like this:

Powershell {
Import-Module ActiveDirectory
Get-ADUser -Filter *
}

Another Example with Parameters:
Powershell {
param($Name, $pcc)
get-process $Name -ComputerName $pcc
} -args "explorer", "localhost"



Thanks goes out to Thomas Bianco for coming up with this simple workaround as a way to isolate Powershell Sessions.

Friday, May 3, 2013

Create and Assign Service Manager Incidents Directly from SCOM on Demand

The Issue

If you use Operations Manager and Service Manager, you know by now that SCOM will automatically create Incidents in Service Manager. However, for most organizations, this just doesn’t make sense because they do not have a 1-to-1 Alert-to-Action ratio. You can set up basic criteria to limit the automatic creation, but this usually still results in too many unnecessary incidents. As a result, most organizations do not utilize this connector, which at one point was one of the most requested features of SCOM – to do really cool things with ticketing systems.

The Solution

So, instead, I have created a solution that will allow you to create incidents on demand directly from a SCOM Alert, while utilizing all the cool features of the Service Manager SCOM Alert connector. All you have to do is right click the alert(s) to create the on-demand tickets.

What are some features of the solution in conjunction with the native Connector:

  • Right click one or multiple alerts and assign incidents directly to the specified group/user
  • Closing the alert closes the ticket and vice-versa
  • The Assigned User and the Ticket Id are maintained in the alert as sourced from SCSM
  • The affected component in SCOM is automatically added as a related configuration item in SCSM
  • Easily can be extended to do more fun stuff with only basic PowerShell Knowledge

How Does it Work

The solution utilizes the following components:

  1. SCOM and SCSM obviously
  2. A very small PowerShell Script
  3. SCOM CMDLETS

Workflow:

  1. A user right clicks the alert and sets the resolution State.
  2.       A Command Subscription triggers based on the resolution state, sets a couple of custom fields, and changes the resolution state to “Generate Incident”
  3. The SCSM Alert connector triggers based on the new resolution state, generates an incident, and applies an incident template based on data in the custom fields.

How to Implement the Solution

These Steps need to be performed in SCOM

Step One

Copy the following PowerShell script and save it on your SCOM management server as UpdateCustomFieldPowershell.ps1. (I took this code from another blog online and modified it for my own use. Unfortunately, I don’t know who wrote the original script.)

Param($alertid)

$alertid = $alertid.toString()

write-eventlog -logname "Operations Manager" -source "Health Service Script" -eventID 1234 -entrytype "Information" -message "Running UpdateCustomFieldPowershell"

#Dot-source the SCOM shell startup scripts so the module functions are available
Import-Module OperationsManager
. "C:\Program Files\System Center 2012\Operations Manager\Powershell\OperationsManager\Functions.ps1"
. "C:\Program Files\System Center 2012\Operations Manager\Powershell\OperationsManager\Startup.ps1"

$alert = Get-SCOMAlert -Criteria "Id = '$alertid'"

write-host $alert

If ($alert.CustomField2 -ne "AlertProcessed")
    {
    $AlertResState = (get-SCOMAlertResolutionState -ResolutionStateCode ($Alert.ResolutionState)).Name
    $AlertResState

    # $alert.CustomField1 = $alert.NetBIOSComputerName
    $alert.CustomField1 = $AlertResState
    $alert.CustomField2 = "AlertProcessed"
    $alert.ResolutionState = 254

    $alert.Update("")
    }

exit

Step Two

We need to create some new alert resolution states; these states will trigger the script. You want to create a resolution state for each support group to which you would assign an alert. You can use whatever format you want. I used the format “Assign to GROUPNAME”. Also keep in mind the resolution state Ids and the order you will use. I made mine alphabetical. DO NOT use resolution states 0, 1, 254, or 255.

To create new resolution states:

  • Go to the SCOM Console
  • Go to the Administration Workspace
  • Go to Settings
  • Select Alerts
  • Select the new button, create a resolution state and assign an Id. Resolution states will always be ordered by their Id
  • Repeat for each resolution state

After you create your alert resolution states, you will need to create one more that triggers the SCSM Connector. Name this alert resolution state “Generate Incident” and set its Id to 254, because the script sets the resolution state to 254. If you want to use a different Id, you will have to update the script.
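If you prefer scripting over clicking through the console, the resolution states can also be created from the Operations Manager Shell. The group names and Ids below are examples; pick Ids that are free in your environment (anything except 0, 1, 254, and 255 for the assignment states):

```powershell
# Assumption: run in the Operations Manager Shell on a SCOM management server.
Import-Module OperationsManager

# Example assignment states - the names must match exactly what you later
# enter in the SCSM alert routing rules (Custom Field one).
Add-SCOMAlertResolutionState -Name "Assign to NetworkTeam" -ResolutionStateCode 10
Add-SCOMAlertResolutionState -Name "Assign to ServerTeam"  -ResolutionStateCode 15

# The state that triggers the SCSM alert connector. The script hard-codes 254.
Add-SCOMAlertResolutionState -Name "Generate Incident" -ResolutionStateCode 254
```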

Step Three

We need to set up a command channel and subscription that will trigger and run the script.

  • Open the SCOM Console
  • Go to the Administration Workspace
  • Go to Channels
  • Create a new Command Channel
  • Enter the full path of the above script
  • Enter the command line parameters as shown in the example below (Be sure to use the double and single quotes correctly)
    • "C:\OpsMgrProductionScripts\UpdateCustomFieldPowershell.ps1" '$Data/Context/DataItem/AlertId$'
  • Enter the startup folder as C:\windows\system32\windowspowershell\v1.0\
  • Save the new Channel

Next, we need to set up the subscriber for the command channel.

  • Open the SCOM Console
  • Go to the Administration Workspace
  • Open subscribers
  • Create a new subscriber
  • In the addresses tab, click Add
  • In the subscriber address, set the channel type to command and then select the channel you set up in the previous steps.
  • Save the address and the subscriber

Next, we need to set up the Command Subscription

  • Open the SCOM Console
  • Go to the Administration Workspace
  • Open Subscriptions
  • Create a new Subscription
  • On the subscription criteria, check the checkbox “with a specific resolution state”
  • Select all the new resolution states except “Generate Incident” (Do not select anything other than the assignment states)
  • On the subscribers, add the new subscriber you created in the previous steps
  • On the Channels, add the new channel you created in the previous steps
  • Save the subscription

Step Four

The last thing we have to do in SCOM is set up the Alert connector. The alert connector will be triggered based on the resolution status of “Generate Incident”.

  • Open the SCOM Console
  • Go to the Administration Workspace
  • Go to connectors and select Internal Connectors
  • Open the SCSM Alert Connector
  • Create a new subscription in the connector
  • In the criteria of the subscription, select only the resolution state “Generate Incident” so the connector picks up alerts the script has marked

These Steps need to be performed in SCSM 

Step One

The first thing you want to do is enable and connect your SCSM SCOM Alert Connector. If you do not know how to do that, you can refer to TechNet: http://technet.microsoft.com/en-us/library/hh524325.aspx. Verify it works before moving any further.

Step Two

  • Create a new Management Pack dedicated to storing the SCOM Incident Templates in SCSM
  • Create a SCOM incident template for each group that you want to assign via SCOM. Typically, this is about 10-20 templates. For testing purposes, I would just start with one or two.
  • Add the correct group as the Assigned To in each template. It is not necessary to fill in any other information.

Step Three

  • In SCSM open the SCOM Alert Connector
  • Go to the alert routing rules and add a new rule
    • For each rule select one of the templates that you created
    • On the select criteria type, select the Custom Field radio button
    • For custom field one, enter the exact name of the resolution state you used in SCOM. For example, if you are going to assign to the server team, and the name of resolution state is called “Assign to ServerTeam”, this is the exact phrase you need to enter into Custom Field one.
  • Select Custom Field two from the drop down
  • For custom field two, enter “AlertProcessed”
  • Click OK
  • Repeat for each template

Time for Testing! 

Now you are ready to test. Find an alert in SCOM, right click the alert, and set it to a resolution state for assignment. Give the subscription time to run and the SCSM connector time to run. If the connector is running every 2 minutes, the total process usually takes about 5 minutes to complete. While the actual workflows run in seconds, it simply takes time for both of them to trigger.
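You can also drive the test from the Operations Manager Shell instead of right clicking in the console. The alert name and the resolution state Id (15) below are examples from my walkthrough; substitute one of your own assignment state Ids:

```powershell
Import-Module OperationsManager

# Grab an open test alert by name (example name - use one from your environment)
$alert = Get-SCOMAlert -Criteria "Name = 'Test Alert' AND ResolutionState = 0" |
    Select-Object -First 1

# 15 = "Assign to ServerTeam" in this example; the command subscription
# should fire, stamp the custom fields, and flip the state to 254
Set-SCOMAlert -Alert $alert -ResolutionState 15
```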

 

Troubleshooting

If there are any issues with the configuration, the event logs will usually tell you about failures. If it is not working, but you don’t see any failures, your criteria probably do not match.
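Since the script writes event 1234 to the Operations Manager log every time it runs, a quick way to confirm the command channel is actually firing is to look for those events (the event Id matches the Write-EventLog call in the script above):

```powershell
# Show the most recent runs of the command channel script
Get-WinEvent -FilterHashtable @{ LogName = 'Operations Manager'; Id = 1234 } -MaxEvents 10 |
    Format-Table TimeCreated, Message -AutoSize
```

If no events show up after setting an alert's resolution state, the problem is in the channel or subscription, not the script.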

Conclusion

This is a great alternative solution for automatically creating tickets from SCOM. You can still automatically create tickets as well, simply by adding subscriptions to the SCSM SCOM Alert connector. If you have any issues or questions, leave a comment.

Tuesday, January 15, 2013

Notes Regarding SCSM 2012 Upgrade


I just wanted to share some notes regarding the Service Manager 2012 SP1 upgrade that might not be obvious unless you thoroughly read the documentation. I hope these notes help prevent some problems.

Release Notes:
http://technet.microsoft.com/en-us/library/jj614520.aspx

The SCSM Console - New Requirements:
  • 2-core 2.0 GHz CPU
  • 4 GB of RAM
  • 10 GB of available disk space
  • New requirement: Microsoft SQL Server 2012 Analysis Management Objects (AMO). Microsoft SQL Server 2012 AMO is supported on SQL Server 2008 and SQL Server 2012


Self-Service Portal: Web Content Server with SharePoint Web Parts
  • 8-core 2.66 GHz, 64-bit CPU for medium deployments
  • 16 GB of RAM for 20,000 users; 32 GB of RAM for 50,000 users (see the Hardware Performance section in the guide)
  • 80 GB of available hard disk space

When you upgrade from System Center 2012 – Service Manager, you perform an in-place upgrade of the Self-Service Portal. - This is the only thing the documentation says. I am not sure what it means.

Authoring Tool Workflows
When you use the Service Manager SP1 version of the Authoring tool to create a workflow, then custom scripts using Windows PowerShell cmdlets called by the workflow fail. This is due to a problem in the Service Manager MonitoringHost.exe.config file.

To work around this problem, update the MonitoringHost.exe.config XML file using the following steps.



1.     Navigate to %ProgramFiles%\Microsoft System Center 2012\Service Manager\ or the location where you installed Service Manager.
2.     Edit the MonitoringHost.exe.config file and add the Microsoft.EnterpriseManagement.Modules.PowerShell dependentAssembly section from the example below in the corresponding section of your file. You must insert the section before <publisherPolicy apply="yes" />.
3.     Save your changes to the file.
4.     Restart the System Center Management service on the Service Manager management server.


<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="uri" type="System.Configuration.UriSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <uri>
    <iriParsing enabled="true" />
  </uri>  
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Mom.Modules.DataTypes" publicKeyToken="31bf3856ad364e35" />
        <publisherPolicy apply="no" />
        <bindingRedirect oldVersion="6.0.4900.0" newVersion="7.0.5000.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.EnterpriseManagement.HealthService.Modules.WorkflowFoundation" publicKeyToken="31bf3856ad364e35" />
        <publisherPolicy apply="no" />
        <bindingRedirect oldVersion="6.0.4900.0" newVersion="7.0.5000.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.EnterpriseManagement.Modules.PowerShell" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="6.0.4900.0" newVersion="7.0.5000.0" />
      </dependentAssembly>
      <publisherPolicy apply="yes" />
      <probing privatePath="" />
    </assemblyBinding>
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>

SCOM Agent Supported in SCSM 2012 SP1


System Center 2012 – Operations Manager
System Center 2012 – Operations Manager agents were not supported with System Center 2012 – Service Manager. However, the agent that is automatically installed by System Center 2012 – Service Manager SP1 is compatible with System Center 2012 – Operations Manager and System Center 2012 – Operations Manager SP1.  After Service Manager Setup completes, you must manually configure the agent to communicate with the Operations Manager management server.
To validate that the Operations Manager Agent was installed, open Control Panel and verify that the Operations Manager Agent is present. To manually configure the Operations Manager agent, see Configuring Agents.
You can upgrade Service Manager servers in the presence of a System Center 2012 – Operations Manager console.

Source: MS Documentation
http://www.microsoft.com/en-us/download/details.aspx?id=27850

Wednesday, July 11, 2012

SCSM Cube Processing and Analysis Services is a Beast

If you are using the Service Manager DW and cubes, you may have run into some issues with the cubes not processing, failing, data issues, or something else. I have provided a couple of resources to help with troubleshooting at the bottom of my post, but I want to give a little insight on my experience dealing with what could possibly become a maintenance headache.

Back story: I am currently working on a DEV and PRD SCSM 2012 RTM environment for a client. Each environment is up. DEV is being heavily used, but PRD is not. They have slightly different configurations, and each cube processing issue was resolved using two different methods.

Because development is non-impacting, after troubleshooting for a few hours and not being able to resolve the issue, I figured it was best to reinstall the DW. After the uninstall, reinstall, and re-sync, everything works with ALL data intact.


DEV Steps:

  1. Go into the console > Administration
  2. Unregister the DW
  3. On the DW Server, Go to Add/Remove Programs > Uninstall System Center 2012 Service Manager
    1. If you get an error about a log not being found, do this:
      1. Shift Right Click "Add/Remove Programs" and select run in a different process
  4. After the uninstall, restart the machine
  5. After the machine restarts, go into the registry of the DW Management Server and remove the following keys and all sub keys:
    1. System Center
    2. Microsoft Operations Manager
  6. Restart the Machine again
  7. Go REMOVE/DELETE the DW databases
  8. Go REMOVE/DELETE the DW Analysis services database
  9. On the DW Management Server, perform a fresh install, following the prompts and creating new databases.
  10. Once the install is complete, re-register SCSM to the DW
  11. Leave it alone for 24 hours
  12. After 24 hours check to see if all the jobs and cubes have processed
The steps above (for dev) were quicker than troubleshooting. Hope this helps.

PRD Steps - Actual troubleshooting, not a re-install
Analysis Services is installed on the DW server. I noticed event 33573 in the event log. One of the events stated "The operation has been cancelled due to memory pressure." This seemed pretty obvious, so I opened Task Manager, attempted to process the cube, and noticed that it maxed out my 8 GB of memory in a couple of minutes, then the memory utilization dropped. I checked the event log again, and I received the same error. So, I increased the memory to 16 GB and processed again - no more memory errors. 4 of the 6 cubes processed. I still have two that are failing, but not because of memory. You might need to increase your memory above 16 GB depending on the number of work and config items.

After fixing the memory issue, I noticed the following events:





Event 33573 (Warning, 7/11/2012 7:53, Data Warehouse):
Message : An Exception was encountered while trying to process a cube.  Cube Name: SystemCenterChangeAndActivityManagementCube Exception Message: An exception occurred while processing the cube.  Please see the event viewer log for more information.  Cube:  SystemCenterChangeAndActivityManagementCube Stack Trace:    at Microsoft.SystemCenter.Warehouse.Olap.OlapCube.Process(ManagementPackCube mpCube).

Event 33574 (Warning, 7/11/2012 7:53, Data Warehouse):
Message : An Exception was encountered while trying during cube processing.  Message=  Processing warning encountered - Location: , Source: Microsoft SQL Server 2008 R2 Analysis Services Code: 1092550657, Description: Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'ActivityAssignedToUser', Column: 'ActivityDimKey', Value: '2'. The attribute is 'ActivityDimKey'..

Event 33573 (Error, 7/11/2012 7:53, Data Warehouse):
Message : Cube Processing workitem has failed.
This is most likely caused by the DWDataMart (primary datamart) being out of sync from other marts.
This is an intermittent problem and will resolve on its own as the Load jobs complete their runs.
However, to work around this issue, administrators can manually start the Load.Common load job, wait for it to complete and then start the Cube processing job.

Message : An Exception was encountered while trying during cube processing.  Message=  Processing error encountered - Location: , Source: Microsoft SQL Server 2008 R2 Analysis Services Code: -1054932986, Description: Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation..       Processing error encountered - Location: , Source: Microsoft SQL Server 2008 R2 Analysis Services Code: -1054932978, Description: Errors in the OLAP storage engine: An error occurred while processing the 'ActivityAssignedToUser' partition of the 'ActivityAssignedToUser' measure group for the 'SystemCenterChangeAndActivityManagementCube' cube from the DWASDataBase database..       Processing error encountered - Location: , Source: Microsoft SQL Server 2008 R2 Analysis Services Code: -1054932986, Description: Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation..       Processing error encountered - Location: , Source: Microsoft SQL Server 2008 R2 Analysis Services Code: -1056964601, Description: Internal error: The operation terminated unsuccessfully..       Processing error encountered - Location: , Source: Microsoft SQL Server 2008 R2 Analysis Services Code: -1055129598, Description: Server: The operation has been cancelled..
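The workaround described in the 33573 error above (run Load.Common, wait for it to finish, then start cube processing) can be scripted from the Service Manager shell on the DW management server. This is a sketch assuming the standard Service Manager DW cmdlets are available; the cube processing job name is my guess based on the cube name in the error, so check Get-SCDWJob for the exact names in your environment:

```powershell
# Kick off the Load.Common job first
Start-SCDWJob -JobName "Load.Common"

# Poll until the load job finishes before starting cube processing
while ((Get-SCDWJob -JobName "Load.Common").Status -eq "Running") {
    Start-Sleep -Seconds 60
}

# Job name is an assumption - confirm with Get-SCDWJob
Start-SCDWJob -JobName "Process.SystemCenterChangeAndActivityManagementCube"
```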