Tips & Tricks for Writing your Own PowerShell System Monitoring Tool

Editor's note: The following post was written by Office Servers and Services MVP Hilton Giesenow as part of our Technical Tuesday series. Albert Duan of the MVP Award Blog Technical Committee served as the technical reviewer for this piece.

In simple terms, monitoring means keeping an eye on something. In an IT context, it means ensuring that something like a server doesn’t run out of disk space. It likely means doing this on a scheduled basis - and in truth, it’s something that absolutely can be done by hand.

In fact, in some cases it might even be preferable to do it this way. For instance, if you’ve installed a new tool or application and want to find out more about what it does and how it behaves ‘in the wild’ in your environment, you may want to keep an eye on the servers yourself for a few days. Even with well-known tools this is worthwhile – when I do a SharePoint installation for a customer, I like to see what the servers are reporting after a few hours, in the first few days, and after the first weekend, to see if there are any environment-specific issues that we weren’t expecting or informed about. The application or stack you’re working with might also be entirely new, such that no effective monitoring tool exists yet – as was the case in the recent past with the arrival of containers, for example.

However, monitoring by hand is for the most part a problematic approach. Here’s why:  

  • Manual monitoring requires performing all of the steps by hand, making it a considerably slower process. Errors may become critical simply because the monitoring takes too long and cannot be repeated with sufficient frequency.
  • Being manual means these checks are bound to human time fluctuations – they’re unlikely to be performed well, if at all, during nights, weekends and holidays.
  • Being manual also means that there’s a chance of human error. Important events or notifications could be missed.

Thankfully, one can overcome these challenges by switching to an automated monitoring tool – this way, monitoring is performed in a fraction of the time, can be repeated with considerably higher frequency, and is almost guaranteed to be consistent.

Nevertheless, we do continue to face situations where out-of-the-box automated monitoring doesn’t fit the bill. For instance, in one recent customer situation, short-term budget issues meant the company’s standard tool could not monitor the SharePoint farm and its underlying SQL Server cluster for a period of a few months. In another example, the standard tool in the environment may not cater well for a more specialized workload – perhaps you have a very good network and server monitoring tool, but one that doesn’t report particularly well on SharePoint, SQL Server, Microsoft CRM or similar.

In cases like these, PowerShell can be a tremendous ally, allowing you to fill temporary or even permanent gaps by creating scripts for specific needs.

Indeed, such scripts exist on the internet for specific products, and a search with your favourite engine will return many personal blogs, or sites like TechNet. Oftentimes these resources combine the experience of various experts in a single technology, and can thus be a great start to monitoring a particular application or platform.

Depending on your context and needs, such scripts might even exactly fit the bill. However, if you’re looking to create something completely from scratch (perhaps for the PowerShell challenge) or simply to combine the capabilities of multiple scripts into one, this article will highlight some of the key issues you’ll encounter and will present some possible solutions.

It parallels a journey I undertook when faced with just such a scenario – extending an existing monitoring environment with enhanced SharePoint-specific functionality based on scripts I’d both downloaded and written over the years, while at the same time creating a framework that was more integrated, standardized and extensible. The fruit of this labor is the PoShMon open source project, a fully functional PowerShell agentless monitoring tool available on GitHub (https://github.com/HiltonGiesenow/PoShMon).

Of course, PoShMon can be downloaded from the PowerShell Gallery (https://www.powershellgallery.com/packages/PoShMon) and used as is, but it can also serve as a sample for more in-depth analysis and exploration of the topics we’ll cover below.

Getting Started

To get the ball rolling on our monitoring tool, let’s implement a very simple check to examine the free drive space on the current machine. We’ll go ahead and include the total drive space in our analysis while we’re at it, so that we can get a more complete picture as well as do “free percentage” calculations across the drives.

To do this, we can call the relevant WMI provider, isolate the fixed (non-removable) storage – excluding things like virtual DVD drives – and calculate the free percentage. Finally, we’ll write out the results, including a warning of “low” space (which we’ll hard-code as being below 10% free space). For completeness’ sake, I’m going to wrap this into a function as well:

  Function Test-DriveSpace
{
    $driveSpaceOutput = @()

    $serverDriveSpace = Get-WmiObject win32_logicaldisk

    foreach ($drive in ($serverDriveSpace | Where DriveType -eq 3)) # fixed drive
    {
        $totalSpace = $drive.Size/1GB
        $freeSpace = $drive.FreeSpace/1GB
        $freeSpacePercent = $freeSpace / $totalSpace * 100
        $highlight = $false

        Write-Verbose ("`t`t" + $drive.DeviceID + " : " + $totalSpace.ToString(".00") + " : " + $freeSpace.ToString(".00") + " (" + $freeSpacePercent.ToString("00") + "%)")

        if ($freeSpacePercent -lt 10)
        {
            Write-Warning "`t`tFree drive Space ($("{0:N0}" -f $freeSpacePercent)%) is below variance threshold"
            $highlight = $true
        }
      
        $driveSpaceOutput += [pscustomobject]@{
            'DriveLetter' = $drive.DeviceID;
            'TotalSpace' = $totalSpace.ToString(".00");
            'FreeSpace' = $freeSpace.ToString(".00") + " (" + $freeSpacePercent.ToString("00") + "%)";
            'Highlight' = $highlight;
        }
    }
    
    return $driveSpaceOutput
}

At this point we have something that’s at least of some minimal use. We can see the free versus total drive space and, if we’re monitoring a file server for instance, we arguably now have the most important aspect monitored.

We can also add further functionality from here, such as monitoring memory, CPU, event logs, and so on in a similar manner. Moreover, we can run this script whenever we want and get the results. However, we’re still far short of a “monitoring tool.”

Some key elements are still missing, the most important of which are notifications, scheduling and configuration. In addition, irrespective of your viewpoint on “test-driven development,” there’s no doubt that at least some testing would be useful – as would the ability to monitor multiple remote servers in a group, especially those that represent a single application or “farm” (such as a custom website’s front-end server farm, or a SharePoint or CRM farm).

Let’s tackle each of these topics in turn.

A Note on Modules

There’s one thing to note before we turn to the aforementioned essential topics. Something we won’t address in this article is the idea of wrapping your functionality into a proper PowerShell module. It is highly recommended to do so, especially as your codebase grows into multiple files. However, it’s a relatively standard PowerShell topic and therefore not one we’ll spend time on in this monitoring-specific article. You can of course research the topic further yourself, and PoShMon provides a working example to which you can turn for reference.
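That said, the basic shape is easy to sketch. A minimal, purely illustrative .psm1 (the module and file names here are just examples) might simply dot-source the function scripts and export the public functions:

# MyMonitoring.psm1 - minimal illustrative module file
# Dot-source each function script that lives alongside the module file
. $PSScriptRoot\Test-DriveSpace.ps1

# Expose only the functions callers should use
Export-ModuleMember -Function Test-DriveSpace

Callers would then simply Import-Module the .psm1 (or its accompanying manifest) and call the exported functions.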

Notifications, or ‘What’s Up, Doc?’

As I mentioned, our script has some (albeit limited) value to offer. However, our having to run it on demand still leaves us prey to most of the issues highlighted in the introduction of this article. Fortunately, only two main steps remain to correct this – introducing notifications and scheduling. Let’s get started with the first one.

Notifications inform us of the outcome of our monitoring scans and they can take many forms. Traditional SMS messages to our phones, push notifications to our mobile devices and toast popups on our desktops can all be of use, as can messages to our “ChatOps” tools like Slack or the new Microsoft Teams tool. However, the classics like email notifications are still the most widespread and are the easiest to implement. PowerShell provides an extremely easy mechanism for doing this in the form of the Send-MailMessage [https://msdn.microsoft.com/en-us/powershell/reference/5.1/microsoft.powershell.utility/send-mailmessage] command. With it, we simply need to provide some relevant settings like the SMTP server to use, a list of “to” addresses, the subject, body and so on. It’s commonly used, so there’s no need to examine the function in detail here. However, the following tips are useful to consider when sending email notifications:

  • For the “From” address, consider using something that’s easy for the recipient to base a mail rule on. This lets recipients flag the items, tag them, file them easily into a subfolder, and so on. Note that this doesn’t even need to be a valid email address, so you can easily set it to something like MyMonitoringTool@ [yourdomain.com].
  • For the “To” address, consider using a mailing list instead of just your direct email box. This means that if you’re unavailable, at least one other person on your team will receive the alerts, even if they simply ignore them for the other 364 days of the year.
  • For the email subject, consider setting it to something that shows the key information immediately. That way, you’ll be able to easily see whether something really is at fault, versus just a status update. This can be particularly useful on a mobile device, for instance to decide whether it warrants unlocking your phone and actually reading the message. I find this particularly relevant when a lot of messages are coming in during a busy period, as it allows me to set a flag to increase the visibility of the item. Also consider setting the email priority to High if a failure occurs.
  • Consider adding a parameter or setting to your script to filter when (or rather whether) to send the notification at all. I like to see a complete report on a daily basis of what the environment looks like, and how it’s changing and growing over time. But within the day, especially at the 15-minute intervals I call “critical” monitoring, I only want to be notified of any actual issues encountered. You could introduce a parameter to handle this, perhaps with options like “All” or “OnlyOnFailure”, using the ValidateSet parameter attribute.

An example function is below, which implements these suggested points:

  Function Send-MonitoringEmail
{
    [CmdletBinding()]
    param (
        [pscustomobject]$driveSpaceOutput,
        [ValidateSet("All","OnlyOnFailure")][string]$SendNotificationsWhen
    )

    $atLeastOneFailure = $false
    foreach ($driveSpace in $driveSpaceOutput)
    {
        if ($driveSpace.Highlight)
        {
            $atLeastOneFailure = $true
            break
        }
    }

    if ($SendNotificationsWhen -eq "All" -or ($SendNotificationsWhen -eq "OnlyOnFailure" -and $atLeastOneFailure -eq $true))
    {
    
        $subjectSuffix = if ($atLeastOneFailure) { "Failure" } else { "Successful" }

        $params = @{
            Subject = "[PSMonitoring] - " + $subjectSuffix
            Body = "[left as an exercise for the reader to decide how to format the html]"
            BodyAsHtml = $true
            To = "sharepointteam@mycompany.com"
            From = "psmonitoring@mycompany.com"
            Priority = if ($atLeastOneFailure) { [System.Net.Mail.MailPriority]::High } else { [System.Net.Mail.MailPriority]::Normal }
            SmtpServer = "smtp.gmail.com"
            UseSsl = $true
            Port = 587
            Credential = Get-Credential -Message "mail credential" -UserName "hilton@giesenow.com"
        }

        Send-MailMessage @params
    } else {
        Write-Verbose "Skipping email"
    }
} 

Scheduling

Discussing when to send notifications naturally leads us to discussing when to actually run the monitoring itself. Now, we could choose to build our own complex timer/jobbing/scheduling engine in PowerShell. But while it might be a fun exercise, it would hardly be a good use of time given the tremendous scheduling capability that Windows has had for decades, called the Task Scheduler.

If your monitoring code still exists within a single standalone script, all you need to do is call your script directly from the scheduled task. However, if you’ve taken the time to wrap your functionality into a module, you will need to create a standalone script that actually invokes the module. Either way, as we’ve learned from the Notifications section above, we may want to have two separate scheduled jobs running – one running at a far higher frequency of every 10 or 15 minutes that sends notifications only for emergencies, and another performing a more comprehensive overview scan on a daily or weekly basis.

In this case, it is possible to combine these into a single script and have the scheduled tasks each send a separate parameter. However, my “critical” and “daily” runs differ quite a bit, so I like to split them into distinct scripts that I can also run on demand if desired, as well as debug more easily. I then set up the relevant scheduled tasks to call their matching script.

For a step-by-step guide on setting up the Scheduled Task, see this blog post [ https://blogs.technet.microsoft.com/heyscriptingguy/2012/08/11/weekend-scripter-use-the-windows-task-scheduler-to-run-a-windows-powershell-script/ ] which provides a good starting point. One change I make is to have the Program/Script box contain just “powershell.exe”, with the path to our script file set in the Arguments box. I also set the ExecutionPolicy to Bypass, as our scripts will likely need that for what they have to do. The final settings are something like this (depending on your path):

Program/Script: PowerShell.exe
Arguments: -ExecutionPolicy Bypass -File "[YourPath]\[YourScript].ps1"

These steps could be repeated to create a daily run as well. If you’re really brave, however, it is possible to set up this Scheduled Task using PowerShell, via the New-ScheduledTask [ https://technet.microsoft.com/itpro/powershell/windows/scheduledtasks/new-scheduledtask ] cmdlet.
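As a rough sketch (the task name, schedule and script path below are purely illustrative), the daily run could be registered with the ScheduledTasks module cmdlets like so:

# Create and register a daily task that runs our monitoring script at 06:00
$action  = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
             -Argument '-ExecutionPolicy Bypass -File "C:\Monitoring\Invoke-DailyMonitoring.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am

Register-ScheduledTask -TaskName 'PSMonitoring - Daily' -Action $action -Trigger $trigger

You may also want to supply the -User and -Password (or -Principal) parameters so the task runs under an appropriate service account rather than the interactive user.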

A Note on Structure

There are still some key requirements missing from our tool. But before we get to those, let’s do some forward planning and pay off some technical debt before it comes back to bite us. If you’ve followed along with our concepts so far, you might have taken the time to wrap things up into a module, as discussed above, or at the very least to have split the code up into reasonable functions to divvy up some of the responsibility. At a minimum, you should have a function to do the actual hard drive space test (e.g. Test-DriveSpace) and one to send the email notifications (e.g. Send-EmailNotifications), and then a main method to coordinate the calls to each of these. If you haven’t done this yet, it will be worthwhile to do so before jumping to the next steps. Go on and do it – I’ll wait.
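If it helps to visualize, the shape we’re after is roughly the following skeleton (using the function names from this article; the parameters get fleshed out in the next sections):

# One function per check, one for notifications, and a coordinator to tie them together
Function Test-DriveSpace { <# examine the drives and return the results #> }

Function Send-EmailNotifications { <# send the results via Send-MailMessage #> }

Function Invoke-PSMonitoring
{
    # The coordinator runs each test and hands the output to the notification function
    $driveSpaceOutput = Test-DriveSpace
    Send-EmailNotifications $driveSpaceOutput
}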

Smarter Configurations

To make sure our tool is flexible and adaptable, and certainly if we ever plan to share it with anyone, we need to avoid one of the biggest pitfalls that exist in coding and scripting – hard coding. Instead we need to make these values configurable. PowerShell’s primary way of sending values into functions is via specific parameters and we certainly could start this way. It would mean that our “Test-DriveSpace” function might change such that the threshold for reporting low disk space can be specified instead of hard-coded, resulting in a change from something like this:

  Function Test-DriveSpaceOldVersion
{
    …

    foreach ($drive in ($serverDriveSpace | Where DriveType -eq 3)) # fixed drive
    {
        …

        if ($freeSpacePercent -lt 10)
        {
            Write-Warning "`t`tFree drive Space ($("{0:N0}" -f $freeSpacePercent)%) is below variance threshold"
            $highlight = $true
        }
      
        …
    }
    
    return $driveSpaceOutput
} 

To this:

  Function Test-DriveSpace
{
    [CmdletBinding()]
    param (
        [hashtable]$Configuration
    )

    …

    foreach ($drive in ($serverDriveSpace | Where DriveType -eq 3)) # fixed drive
    {
        …

        if ($freeSpacePercent -lt $Configuration.MinimumFreeDrivePercent)
        {
            Write-Warning "`t`tFree drive Space ($("{0:N0}" -f $freeSpacePercent)%) is below variance threshold"
            $highlight = $true
        }
      
…
    }
    
    return $driveSpaceOutput
} 

Similarly, our notification code might well change to receive its email settings as parameters. Our main coordinator function (the one that calls Test-DriveSpace and Send-EmailNotifications) would now also need to change. But here’s where the complexity starts to creep in. This main function needs to address two issues: first of all, it now needs to explicitly deal with every “Test” method’s unique parameter requirements. For instance, if we add CPU monitoring we now need a parameter to deal with CPU warning thresholds, or if we add monitoring for SharePoint content databases, we might want to monitor the number of site collections inside them, and so on. Virtually every “test” method is likely to need at least one setting like this. Secondly – and perhaps even worse – this complexity needs to be passed up to the consumer of the main function via an ever-growing list of parameters.

Fortunately for us, PowerShell lets us deal with all this complexity by allowing us to create a hashtable of values that we can pass around. It lets us build this hashtable outside of the function calls altogether, and we can even persist and read it from an external file, for example in JSON format. This hashtable can be multi-tiered (i.e. including nested hashtables), allowing settings to be grouped together. For instance, the email details can be part of a “Notification” or “Email” nested hashtable. Using a hashtable like this would require setting it up, something like:

  $Configuration = @{
                    MinimumFreeDrivePercent = 30
                    Email = @{
                        To = "sharepointteam@mycompany.com"
                        From = "psmonitoring@mycompany.com"
                        SmtpServer = "smtp.gmail.com"
                        UseSsl = $true
                        Port = 587
                        Credential = Get-Credential
                    }
                } 

And then changing our main function and email function to use it, something like:

  Function Send-MonitoringEmail
{
    [CmdletBinding()]
    param (
        [pscustomobject]$driveSpaceOutput,
        [hashtable]$Configuration,
        [ValidateSet("All","OnlyOnFailure")][string]$SendNotificationsWhen
    )

    …

    if (…)
    {
        …

        $params = @{
            Subject = "[PSMonitoring] - " + $subjectSuffix
            Body = "[left as an exercise for the reader to decide how to format the html]"
            BodyAsHtml = $true
            To = $Configuration.Email.To
            From = $Configuration.Email.From
            Priority = if ($atLeastOneFailure) { [System.Net.Mail.MailPriority]::High } else { [System.Net.Mail.MailPriority]::Normal }
            SmtpServer = $Configuration.Email.SmtpServer
            UseSsl = $Configuration.Email.UseSsl
            Port = $Configuration.Email.Port
            Credential = $Configuration.Email.Credential
        }

        Send-MailMessage @params
    } else {
        Write-Verbose "Skipping email"
    }
} 

Function Invoke-PSMonitoring
{
    [CmdletBinding()]
    param (
        [hashtable]$Configuration,
        [ValidateSet("All","OnlyOnFailure")][string]$SendNotificationsWhen,
        [string[]]$ServersToMonitor = 'localhost'
    )

    $driveSpaceOutput = Test-DriveSpace $Configuration

    Send-MonitoringEmail $driveSpaceOutput $Configuration $SendNotificationsWhen
}
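As mentioned above, this configuration hashtable can also live in an external file. Below is a minimal sketch of loading it from JSON (the file path is just an example; note that ConvertFrom-Json returns PSCustomObjects rather than hashtables, and a credential shouldn’t be stored in plain JSON, so we still acquire it at run time):

# PSMonitoring.config.json might contain:
# {
#   "MinimumFreeDrivePercent": 30,
#   "Email": { "To": "sharepointteam@mycompany.com", "From": "psmonitoring@mycompany.com",
#              "SmtpServer": "smtp.gmail.com", "UseSsl": true, "Port": 587 }
# }

$loaded = Get-Content "C:\Monitoring\PSMonitoring.config.json" -Raw | ConvertFrom-Json

# Rebuild the nested hashtable structure our functions expect
$Configuration = @{
    MinimumFreeDrivePercent = $loaded.MinimumFreeDrivePercent
    Email = @{
        To         = $loaded.Email.To
        From       = $loaded.Email.From
        SmtpServer = $loaded.Email.SmtpServer
        UseSsl     = $loaded.Email.UseSsl
        Port       = $loaded.Email.Port
        Credential = Get-Credential   # prompt for (or securely retrieve) the mail credential at run time
    }
}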


In PoShMon, I’ve taken this a few steps further and implemented a structured set of functions to make the configuration process a little more strongly typed, as well as to introduce some basic validation of the configuration. You can see more of that here [ https://github.com/HiltonGiesenow/PoShMon/tree/master/src/Functions/PoShMon.Configuration ] for further investigation.

Taking the Test

As mentioned previously, not everyone is in favor of “test driven development.”  However, there’s no doubt that unit testing itself is of tremendous benefit and this is especially so when accessing something like system state as we’re doing in our monitoring tool.  As a result, let’s look at how to introduce testing into the mix. We’re going to do so using the powerful and popular Pester [https://github.com/pester/Pester] testing framework for PowerShell.

If you’ve not used Pester before, there are many guides available on how to get started with it, so we’ll skip the basics. Essentially, Pester allows you to write PowerShell code that tests other PowerShell code. That means, of course, that we can use our regular PowerShell skills to write the tests. However, there’s no requirement for the tests to use the exact same kind of PowerShell. In this context, that means that although the monitoring code itself should be as backwards compatible as possible to address as many scenarios as it can, the same is not true for the tests. When it comes to those, the newer the version of PowerShell the better, as we can tap into newer feature sets. In particular, we can use things like PowerShell 5 classes, which are useful in our tests as they very easily encompass both state and behavior, especially when it comes to mocking certain system components and their behaviors. For example, our monitoring code is likely to use a lot of WMI objects, and these contain methods that can be of great use. Should you want to test Event Log entries, for instance, the native WMI object’s date values are confusing to work with, but the objects have a “ConvertToDateTime” method that normalizes these for us.

In our tests, we can create a mock WMI object and stub out this method to just return a regular PowerShell date structure, as follows:

  class EventLogItemMock {
            [int]$EventCode
            [string]$SourceName
            [string]$User
            [datetime]$TimeGenerated
            [string]$Message

            EventLogItemMock ([int]$NewEventCode, [String]$NewSourceName, [String]$NewUser, [datetime]$NewTimeGenerated, [String]$NewMessage) {
                $this.EventCode = $NewEventCode;
                $this.SourceName = $NewSourceName;
                $this.User = $NewUser;
                $this.TimeGenerated = $NewTimeGenerated;
                $this.Message = $NewMessage;
            }

            [string] ConvertToDateTime([datetime]$something) {
                return $something.ToString()
            }
        }

Right now, our monitoring tool assesses hard drive space. So let’s look at what a complete sample test for that could look like below.

# Derive the path of the script under test from the test file's own path
# (e.g. Test-DriveSpace.tests.ps1 -> Test-DriveSpace.ps1) and dot-source it
$sutFilename = ($MyInvocation.MyCommand.Path).Replace(".tests", "")
. $sutFilename

class DiskMock {
    [string]$DeviceID
    [int]$DriveType
    [string]$ProviderName
    [UInt64]$Size
    [UInt64]$FreeSpace
    [string]$VolumeName

    DiskMock ([string]$NewDeviceID, [int]$NewDriveType, [String]$NewProviderName, [UInt64]$NewSize, [UInt64]$NewFreeSpace, [String]$NewVolumeName) {
        $this.DeviceID = $NewDeviceID;
        $this.DriveType = $NewDriveType;
        $this.ProviderName = $NewProviderName;
        $this.Size = $NewSize;
        $this.FreeSpace = $NewFreeSpace;
        $this.VolumeName = $NewVolumeName;
    }
}

Describe "Test-DriveSpace" {
    Mock -CommandName Get-WmiObject -MockWith {
        return [DiskMock]::new('C:', 3, "", [UInt64]50GB, [UInt64]11GB, "MyCDrive")
    }

    It "Should not warn on space above threshold" {

        $Configuration = @{
                MinimumFreeDrivePercent = 10
            }

        $actual = Test-DriveSpace $Configuration

        $actual.FreeSpace | Should Be "11.00 (22%)"
        $actual.Highlight | Should Be $false
    }

    It "Should warn on space below threshold" {

        $Configuration = @{
                MinimumFreeDrivePercent = 25
            }

        $actual = Test-DriveSpace $Configuration

        $actual.FreeSpace | Should Be "11.00 (22%)"
        $actual.Highlight | Should Be $true
    }
}

Of course we could flesh this out with further tests, introduce coverage metrics, and testing for the other functions and so on, but hopefully this small sample provides an idea of how to get started.
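For example, assuming the file naming used above, a test run with code coverage enabled might look something like this (these parameters apply to Pester 3.x/4.x; Pester 5 moved to a configuration object):

# Run the drive space tests and report which lines of the script under test were exercised
Invoke-Pester -Script .\Test-DriveSpace.tests.ps1 -CodeCoverage .\Test-DriveSpace.ps1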

Agentless Monitoring & PowerShell Remoting

We have covered a lot of concepts in this article, but one key concern still remains – how to remotely monitor servers using our tool. There are two main reasons you might want to do this. First, while our script is schedulable and growing in usefulness and functionality, setting it up as a scheduled task on every server we want to monitor leads to a lot of management complexity and pain. Of course, all of this could itself be scripted and controlled via PowerShell, but it’s significantly simpler in the long term to manage and maintain if it all executes centrally, especially considering PowerShell has such a powerful remoting capability. Monitoring remotely like this also fulfills the dream of “agent-less” monitoring, as there’s nothing we need to install or deploy onto each target server.

Secondly, even if you have just a single platform you want to monitor, such as a SharePoint or CRM farm, there may end up being certain operations within the monitoring tests themselves that you would want to run on certain machines. For example, when I configure a SharePoint farm I create HOSTS file entries on each of the Front End and Search servers so that they can resolve requests back to the local machine. This significantly improves crawl performance and it also makes it easier to confirm, from within each Front End itself, that it is serving each of the sites correctly. As a result, if I had a monitoring test for this I’d want the web request to physically execute on each machine directly. As an added bonus, it means the sites are automatically kept alive, negating the need for all-too-common “warmup” scripts.

So, with these reasons in mind, let’s explore the setup, challenges and use of PowerShell remoting for our needs. There are essentially two stages to this, depending on your requirements.

Enabling Standard PowerShell Remoting

The most basic step to enabling remoting for PowerShell, and sometimes all you’ll need to do at all, is to run the “Enable-PSRemoting” command from an elevated (Administrator) command prompt. Running the command will require confirmation, but passing a “-Force” parameter will suppress this if necessary. This command performs a short sequence of steps on the target machine, including enabling the WinRM service itself as well as setting the requisite firewall rule. You can read more about it here [https://msdn.microsoft.com/en-us/powershell/reference/5.1/microsoft.powershell.core/enable-psremoting] if you’ve not worked with it before.
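In other words, on each server you intend to monitor, from an elevated prompt:

# Enable WinRM-based PowerShell remoting, skipping the confirmation prompts
Enable-PSRemoting -Force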

At this point, the WMI objects we’d be looking for are now accessible remotely. You can easily test this by running our main hard drive space command but this time passing a “-ComputerName” parameter value. For instance, to get the hard drive space on a server called Server1, you’d call:

  Get-WmiObject win32_logicaldisk -ComputerName Server1 

What becomes immediately clear at this point, however, is that our scripts need to change to accept a list of servers to monitor, as well as a way to indicate which result comes from which server. That means our main entry function now needs to accept a list of server names (perhaps with a default of ‘localhost’ in case it’s not supplied by callers), and it needs to pass that list to the Test-DriveSpace function, something like this:

  Function Invoke-PSMonitoring
{
    [CmdletBinding()]
    param (
        [hashtable]$Configuration,
        [ValidateSet("All","OnlyOnFailure")][string]$SendNotificationsWhen,
        [string[]]$ServersToMonitor = 'localhost'
    )

    $driveSpaceOutput = Test-DriveSpace $Configuration $ServersToMonitor

    Send-MonitoringEmail $driveSpaceOutput $Configuration $SendNotificationsWhen
}

The Test-DriveSpace function needs to change as well so that it accepts this list and sends it to the Get-WmiObject function. It also needs to change the return result to include the server name. Fortunately, when you invoke a remote command this way, PowerShell returns an additional property on each resulting object, called PSComputerName, which we can pass back in the return from our function. An example adjusted Test-DriveSpace appears below.

  Function Test-DriveSpace
{
    [CmdletBinding()]
    param (
        [hashtable]$Configuration,
        [string[]]$ServersToMonitor = 'localhost'
    )

    …

    $serverDriveSpace = Get-WmiObject win32_logicaldisk -ComputerName $ServersToMonitor

    foreach ($drive in ($serverDriveSpace | Where DriveType -eq 3)) # fixed drive
    {
        …
      
        $driveSpaceOutput += [pscustomobject]@{
            'Server' = $drive.PSComputerName;
            'DriveLetter' = $drive.DeviceID;
            'TotalSpace' = $totalSpace.ToString(".00");
            'FreeSpace' = $freeSpace.ToString(".00") + " (" + $freeSpacePercent.ToString("00") + "%)";
            'Highlight' = $highlight;
        }
    }
    
    return $driveSpaceOutput
}

The final step to this is deciding how you might want to display the results within the notification output. For instance, you may want to group the drives by server in the final email, which you can do using the Group-Object cmdlet.
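A small sketch of that grouping might look like the following (how each group is then rendered into the email body is still up to you):

# Group the drive space results by server so each server gets its own section in the email
$bodyHtml = ""
foreach ($serverGroup in ($driveSpaceOutput | Group-Object -Property Server))
{
    # $serverGroup.Name is the server name; $serverGroup.Group holds that server's drive results
    $bodyHtml += "<h3>$($serverGroup.Name)</h3>"
    $bodyHtml += $serverGroup.Group |
                    ConvertTo-Html -Fragment -Property DriveLetter, TotalSpace, FreeSpace |
                    Out-String
}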

Running Ad-Hoc Remote Commands

Thus far, we’ve been looking at standard commands that allow you to pass an optional “-ComputerName” parameter. However, for more robust tests you might need to execute blocks of code on the remote machine – for instance, if you want to centrally monitor a SharePoint farm, you may need to execute certain SharePoint PowerShell commands to assess things like the Search component or other Service Application health. To do this, you can wrap the script block you intend to execute within an Invoke-Command cmdlet call. For example, running the following code on its own will run on the local machine:

  $env:COMPUTERNAME 

This will show the contents of the ComputerName environment variable. However, wrapping the command in an Invoke-Command and supplying an alternate computer name via the now-familiar “-ComputerName” parameter will cause the script block to run on that remote server, and will therefore return the remote computer’s name, as follows:

Invoke-Command -ComputerName Server1 {
    $env:COMPUTERNAME
}
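As a purely illustrative example of the SharePoint case mentioned above (assuming the SharePoint snap-in is installed on the target farm server), such a remote check might look something like:

Invoke-Command -ComputerName Server1 {
    # Load the SharePoint cmdlets on the remote server and check service instance health
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    Get-SPServiceInstance | Select-Object TypeName, Status
}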

However, there’s typically a bit more to the story with these kinds of commands, as we’ll see next.

Solving the ‘Double Hop’ Issue

All of the above works well if you’re monitoring metrics that exist directly on the remote server, like drive space, event logs, memory, performance counters and so on. However, if you want to execute commands against the remote environment that in turn need resources from one or more environments further down the line, you’re going to face a further challenge.

In this case, PowerShell will happily carry your credentials over to the remote server, but it may well not be able to carry them further (i.e. down to the database). This problem has been around for many, many years - even well before PowerShell - and it’s commonly referred to as the “Double Hop” issue because our credential is typically only able to make one “hop” – that is, to one single remote server. In a PowerShell-specific context the issue is elegantly described in this blog post [https://blogs.technet.microsoft.com/ashleymcglone/2016/08/30/powershell-remoting-kerberos-double-hop-solved-securely/]. It describes the challenge of managing the hop, dealing with potential credential issues, and so on, and presents a few possible solutions.

These may work fine for you, but you may not be able to get Kerberos set up correctly in your environment, or you may not be allowed to use CredSSP for this, given its (valid) security concerns. Fortunately, another great TechNet blog [https://blogs.msdn.microsoft.com/sergey_babkins_blog/2015/03/18/another-solution-to-multi-hop-powershell-remoting/] comes to the rescue. In essence, it describes a PowerShell-specific solution whereby we register a session configuration with a stored credential that we can “teleport” into as necessary, using the “Register-PSSessionConfiguration” [https://msdn.microsoft.com/en-us/powershell/reference/5.1/microsoft.powershell.core/register-pssessionconfiguration] command. (Incidentally, JEA, which is presented as an option in the first article, uses PSSessionConfigurations as well, but JEA itself may not be something your organization wants to jump on just yet.)

To get this up and running, you create a named session configuration with a stored credential, using the “-Name” and “-RunAsCredential” parameters. Take care to use a credential that makes sense and that has a password policy you can deal with (i.e. if it changes, you’ll have a way to update the PSSessionConfigurations). Thereafter, when calling the Invoke-Command cmdlet we just need to pass a “-ConfigurationName” parameter that matches the Name of the PSSessionConfiguration we created. The final script would look something like this (using our earlier example):

  Invoke-Command -ComputerName Server1 -ConfigurationName MySessionName { 
    $env:COMPUTERNAME
}
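For reference, registering that session configuration in the first place (run once, from an elevated prompt on the remote server) might look something like the sketch below, where the credential is whatever account has the rights your monitoring commands need:

# Run on the target server: commands invoked via this configuration run as the stored credential
Register-PSSessionConfiguration -Name MySessionName -RunAsCredential (Get-Credential) -Force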

Closing Out

We’ve covered a lot of ideas and concepts in this article. Of course, you can simply make use of (and even contribute to) the PoShMon project, in which case this article might provide some useful background to how and why it does some of what it does.

If you decide instead to create your own similar framework or project, hopefully what we’ve looked at here will help you on your adventures with PowerShell-based monitoring. There are many more topics and practices that will likely be of interest in that case, like continuous integration, additional notification mechanisms and even things like “self-healing”, whereby your systems can begin to repair themselves following known recurring issues (you can read more about this here [https://github.com/HiltonGiesenow/PoShMon/wiki/Creating-a-Self-Healing-System-Using-PoShMon]). Finally, we can also export critical monitoring values to text/CSV files on a regular basis and feed them into BI tools (like Power BI) for rich data visualization. But those are topics for another time.


Hilton Giesenow is an industry veteran of almost 20 years and a 12-times Microsoft MVP with experience across varied IT, development and consulting roles, geographies and client industries and types. These days Hilton can mostly be found helping customers understand and craft strategies to successfully plan for, implement and adopt Office 365, SharePoint and Azure. You can find his SharePoint podcast at https://www.TheMossShow.com/ (now retired) and company details at https://www.expertsinside.com/.