Using Pester and the Operation Validation Framework to Verify a System is Working

If you haven’t seen the Operation Validation Framework on GitHub, it’s definitely worth taking a look at. The framework allows you to write Pester tests that perform end-to-end validation that a system is operating properly. Pester is typically used for test-driven development or unit testing of your PowerShell code, but in this scenario it’s used for operational testing.

You need to have Pester installed, but if you’re running Windows 10 then you already have Pester. Download the Operation Validation Framework and place it in one of the locations in your $env:PSModulePath, just like you would any other module.
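
A quick way to confirm that Pester is available and to see the locations where the downloaded module can be placed:

# Verify that Pester is available (it ships with Windows 10)
Get-Module -Name Pester -ListAvailable

# The locations searched for modules; place the Operation Validation Framework in one of these
$env:PSModulePath -split ';'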

Write your operational tests:
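
What follows is only a sketch of what these tests could look like. The server name, service name, and the parameters of the Invoke-MrSqlDataReader function are assumptions to adjust for your environment, and the test file would typically be saved under the module’s Diagnostics\Simple or Diagnostics\Comprehensive folder so the framework can discover it.

Describe 'Simple Validation of SQL Server' {

    # Placeholder name of the SQL Server being validated
    $ComputerName = 'SQL01'

    It 'The SQL Server service should be running' {
        (Get-Service -ComputerName $ComputerName -Name MSSQLSERVER).Status |
        Should Be 'Running'
    }

    It 'The server should be listening on port 1433' {
        (Test-NetConnection -ComputerName $ComputerName -Port 1433).TcpTestSucceeded |
        Should Be $true
    }

    It 'A query should return data from one of its databases' {
        Invoke-MrSqlDataReader -ServerInstance $ComputerName -Database master -Query 'SELECT name FROM sys.databases' |
        Should Not BeNullOrEmpty
    }
}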

The last test uses a function that can be found in my MrSQL module on GitHub if you’re interested in it. I’m simply using it to remove the requirement of having the SQLPS module installed in order to query a SQL Server with PowerShell.

One thing I would like to figure out is how to skip tests (set dependencies) based on the results of previous tests, since in this scenario all of the subsequent tests will fail if the first one fails, and those failures take a lot longer to return results than successes do. Based on some responses to a tweet of mine, skipping tests may not be the best way to approach this; it may be better to run a simple test and only run a comprehensive test if the simple one completes successfully. I noticed that Invoke-OperationValidation has a TestType parameter whose valid values are Simple and Comprehensive, with the default being both @('Simple', 'Comprehensive'), so I plan to research that option.
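
A rough sketch of that approach, assuming the result objects report Passed or Failed in a Result property:

# Run only the quick checks first
$simple = Invoke-OperationValidation -TestType Simple

# Only spend the time on the comprehensive tests if the simple ones all passed
if ($simple.Result -notcontains 'Failed') {
    Invoke-OperationValidation -TestType Comprehensive
}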

Now to run the operational tests using the Invoke-OperationValidation function. Omitting the IncludePesterOutput parameter eliminates the top portion of the results, which is the output you would typically see when running Pester directly.
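
For example, to run all of the discovered operational tests and include the Pester output:

Invoke-OperationValidation -IncludePesterOutput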


I came up with this scenario because I recently experienced a problem with a SQL Server where all of the necessary services were running and it was listening on port 1433, but it was unresponsive when trying to query actual data from any of its databases.

Operational tests aren’t limited to testing a single system, as you’ve seen in this scenario. They can also perform end-to-end validation of environments with multiple tiers, such as web front-end servers, application servers, reporting servers, and back-end database servers, along with the things those systems depend on, such as directory services and network connectivity.

Just think, you could use these types of operational tests to check the amount of latency a system is experiencing and automatically spin up or remove VMs or containers as needed once a certain threshold is reached. You could also take corrective action based on failing tests. It’s my understanding that the default output from Invoke-OperationValidation is returned in a format that can be consumed by other monitoring systems.
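
As a minimal sketch of taking corrective action, assuming the result objects report Passed or Failed in a Result property and that restarting the SQL Server service on a placeholder server is an appropriate response:

$results = Invoke-OperationValidation

if ($results.Result -contains 'Failed') {
    # Example corrective action only; this could just as easily raise an alert instead
    Invoke-Command -ComputerName SQL01 -ScriptBlock { Restart-Service -Name MSSQLSERVER }
}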

I knew learning how to write Pester tests would pay off, not only in the form of writing better code with fewer bugs, especially when changes are made down the road, but also in other uses for Pester such as the operational tests shown here.

By the way, I’ve been blogging on this site since 2009 and today’s blog article is number 400.



  1. Matt Hitchcock

    Hey Mike. So, my view on this is that all tests should always run, even if 90% will fail because one test which the others depend on has failed. The reason for this is that it gives true visibility into the extent of the issue and the impact that issue has. For example, if you just show that one test has failed, you think “ok, just one thing to fix”. So you fix it, then another test shows red, you repeat, and so on. Also, just showing the one failure and not running the rest that depend on it wouldn’t then show “hey, SQL isn’t running so you can’t connect, you can’t write data, you can’t do x, etc.”.
    With the Operational Monitoring side of stuff, the people watching these monitors typically have less technical knowledge of the back end, so for me, the more Red, the better 🙂

    Interested in what others think.

  2. Kevin Marquette (@KevinMarquette)

    I am doing some similar types of operational readiness testing. Here are some examples of a few SQL tests I run:

    You could use different tags for quick tests and long tests. If the issue is with tests that have a long timeout (a bad SQL connection vs. a missing database), you could test for that connection outside of a test, then use that status to fail early or skip those other tests.

    You could either have a $connected | should be $true test or use if ($connected) { It "does something" {} }.
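
    For example, a minimal sketch of those two patterns (the connection check here is just a placeholder):

    Describe 'SQL connectivity' {

        # Cheap connection check performed once, outside of any It block (placeholder check)
        $connected = Test-NetConnection -ComputerName SQL01 -Port 1433 -InformationLevel Quiet

        # Pattern 1: a test that simply asserts the connection succeeded
        It 'is connected' {
            $connected | Should Be $true
        }

        # Pattern 2: only define the slower tests when the connection check passed
        if ($connected) {
            It 'returns data' {
                # longer-running query would go here
                $true | Should Be $true
            }
        }
    }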

  3. Kevin Marquette (@KevinMarquette)

    How about something like this:

    describe "test" {

        $flag = $false

        BeforeEach {
            $flag | should be $true
        }

        It "is connected" {
            $true | should be $true
        }

        It "is connected2" {
            $true | should be $true
        }
    }

  4. jamesone111

    I’d have

    It "is working" {
        $script:skipNext = $true
        BLAH BLAH | should be something
        $script:skipNext = $false
    }

    it "has the right values" -skip:$skipNext { do some more stuff }

    If the test in “is working” fails, the next line doesn’t run, so the next test is skipped. If the test gets to the end, the Boolean swaps values and the next test runs.

    • jamesone111

      That line
      | should be something

      got mangled because it was

      BLAH BLAH | should be something

      but I wrapped blah blah in angle brackets and the site said “huh – illegal HTML there”

  5. nzspambot

    Hi Mike, check out as well for running things well remotely

  6. Cody

    It looks interesting. I’ve been using Jenkins for operational validation instead. Maybe the two can be combined in a useful way.

