Module 18 - T313

T313: Applying Your Test Plan to Environmental Sensor Stations (ESS) Based on NTCIP 1204 v04 ESS Standard

HTML of the Course Transcript

(Note: This document has been converted from the transcript to 508-compliant HTML. The formatting has been adjusted for 508 compliance, but all the original text content is included.)

Ken Leonard: ITS standards can make your life easier. Your procurements will go more smoothly and you’ll encourage competition, but only if you know how to write them into your specifications and test them. This module is one in a series that covers practical applications for acquiring and testing standards-based ITS systems.

Ken Leonard: I’m Ken Leonard, the Director of the U.S. Department of Transportation’s Intelligent Transportation Systems Joint Program Office. Welcome to our ITS Standards Training Program. We’re pleased to be working with our partner, the Institute of Transportation Engineers, to deliver this approach to training that combines web-based modules with instructor interaction to bring the latest in ITS learning to busy professionals like yourself. This combined approach allows interested professionals to schedule training at your convenience without the need to travel. After you complete this training, we hope that you’ll tell your colleagues and customers about the latest ITS standards and encourage them to take advantage of these training modules as well as archived webinars. ITS standards training is one of the first offerings of our updated Professional Capacity Building (PCB) Program. Through the PCB Program, we prepare professionals to adopt proven and emerging ITS technologies that will make surface transportation safer, smarter, and greener. You can find information on additional modules and training programs on our website at ITS PCB Home. Please help us make even more improvements to our training modules through the evaluation process. We look forward to hearing your comments. Thank you again for participating, and we hope you find this module helpful.

Ken Vaughn: Hi, this is Ken Vaughn. I’m your instructor today for this course, which is T313, Applying Your Test Plan to Environmental Sensor Stations Based on NTCIP 1204 v04 of the ESS Standard.

Ken Vaughn: As I mentioned, I am the presenter today. I am the former chair of the NTCIP ESS Working Group. I’m also a founder of Trevilon LLC, which developed the NTCIP software that is available. I’m also a member of, and a national expert to, ISO TC204, which is the international ITS standards group.

Ken Vaughn: During our module today, we’ll work through four key learning objectives. The first one is to describe the role of test plans within the testing lifecycle and the testing to be undertaken. The second is to identify key elements of NTCIP 1204 v04 relevant to this particular test plan. The third is to describe the application of a good test plan to an ESS system that is being procured. And then finally we’ll talk about the testing of the ESS using standard procedures.

Ken Vaughn: With that, we’ll get into the first learning objective—describing the role of test plans and the testing to be undertaken. This will include four particular key objectives. One is understanding what an ESS is, and then we’ll review the concept of the systems lifecycle, go on and describe the purpose of the testing process, and then we’ll describe the testing documentation itself.

Ken Vaughn: So with that—what is an ESS? Well, an ESS typically can monitor the environment—things like wind speed and direction; temperature, humidity, and pressure; precipitation type and its rate of fall; snow accumulation totals; visibility; pavement conditions; radiation—like solar, and how much light is hitting the earth; water levels; and air quality. Now it may also support snapshot cameras and pavement treatment systems. So there are actually a few items you could control with an ESS station, although most of the features relate to sensing what’s out in the field. Perhaps contrary to what most people might think, the station might actually be mobile. That mobile device could be on, say, a snowplow, and you might have instrumentation and processing power on that mobile station to be able to identify the exact location where, perhaps, a temperature was taken, or how deep the snow is, or something like that. It is worth pointing out that within the standard, virtually all of these capabilities are optional. NTCIP 1204 defines how to implement these capabilities in a standardized fashion, but which particular features you need must be specified in your procurement, because all of these are going to be optional. You may not get them if you don’t ask for them in your procurement. The process of procuring that equipment is covered in previous modules in this series.

Ken Vaughn: Specifying and acquiring a device is only half the job. Just because a device works with the current system does not mean that it is conformant to the standard or that it’ll work with your next central system. The goal of requiring conformance to a standard is to ensure that the product will interoperate with any conforming system over a prolonged period—not just the system you have in place today. This is achieved by following the systems engineering lifecycle. The testing is depicted on the right side of the diagram, as shown here. Each testing stage corresponds to a specific stage in the left-hand side of the diagram.

Ken Vaughn: The left side of the diagram focuses on user needs and system requirements and the right side of the diagram deals with verification and validation. Each portion of the right side of the diagram corresponds to the equivalent portion of the diagram on the left side. For example, Unit Testing verifies the implementation of the Detailed Design of interoperable components. Subsystem Verification—where NTCIP testing usually falls—tests a Higher Level of Design of a system, and it tests the subsystem as a whole. System Verification and Deployment tests System-wide Requirements—not just the individual device, but how the device works with the central system and provides overall functionality to the end-user. And then finally, System Validation verifies or validates the Concept of Operations and ensures that the entire system provides the services that the users need. Finally, systems that are properly designed and documented will include traceability tables that trace the detailed test procedures all the way back to the original requirements that they are testing. The NTCIP standards for ESS have followed this process, and the traceability tables contained in the standards are key components in ensuring a smooth testing process. This presentation will focus on NTCIP testing. This is typically a part of Subsystem Verification, but participants should be aware of how this testing fits into the larger picture.

Ken Vaughn: When performed properly, testing will provide objective evidence that the system under test solves the right problem and solves it right. Validation is sometimes referred to as solving the right problem—which means that the user needs are being satisfied. By comparison, verification is sometimes referred to as solving the problem right—meaning that the solution meets the stated requirements and design. NTCIP standardizes the user needs and requirements. This allows users to select the desired features from a list rather than creating their own specifications that might differ from other agencies. The result is a high degree of consistency among deployments. However, every deployment site is unique, and many of these standardized user needs and requirements are defined to be optional. The NTCIP standard and related testing is not designed to force a single one-size-fits-all solution on every deployment. It allows customization within defined boundaries. However, this means that a conformant implementation for one site might not include the right set of features for another site. NTCIP testing is designed to verify conformance to the design of the standard. It is not designed to validate whether the designer selected the right requirements or even whether the standard documented those requirements correctly. Those are tests that are left to the procuring agency as a part of validation. NTCIP testing is only designed to verify a system and make sure that an implementation performs correctly per the definition in the standard. Specifically, NTCIP testing will ensure that a delivered product is compliant with the procurement specification—i.e., that it supports the selected user needs and requirements—and that the product is conformant with the standard—i.e., that it implements the features according to the standardized design.

Ken Vaughn: Just like there’s an app for just about anything, there’s also pretty much a standard for just about everything. And yes, there is a standard that defines how one should produce test documentation for systems and software. The standard is a generalized information technology standard known as IEEE 829 that was last updated in 2008. The IEEE 829-2008 standard defines several types of test documents, as well as how they relate to one another. At the highest level is the test plan. A Test Plan is primarily a management document that guides the overall effort for one stage of testing. It points to a Test Design Specification that provides further details about how each feature will be tested with one or more test cases. The Test Case Specification then details the exact inputs and outputs that will be used to implement the test, and will include a reference to the Test Procedures to implement the test case. Once all of this material is available and a product is available to test, testing can commence. The testing will then result in various test logs and anomaly reports that will be used to summarize the testing in a Test Summary Report.

Ken Vaughn: A Test Plan is primarily a management-level document. It answers the standard questions of who, what, where, when, why, and how.

Ken Vaughn: Let’s take a look at these. The “who” question can be refined to be: Who is responsible for each of the testing tasks? Obviously the test plan should identify who will perform the testing, but it will also identify who is responsible for various other tasks—such as who will provide the items to be tested, who will provide the test facilities, and who will set up the environment. Each of these tasks requires a unique set of skills and resources that should be identified. For NTCIP test plans, the actual testing may be performed by the agency, the vendor, or an independent third party. Each has advantages and challenges. For example, the agency knows what it needs, but it may not have sufficient knowledge of NTCIP standards to do the testing in-house efficiently. On the other hand, the vendor has the knowledge but may be less knowledgeable about the user needs and may have conflicts of interest. A third party may have the knowledge and be independent, but it may be more difficult for the agency to access for contractual reasons. These issues should be considered for each project during the development of the test plan—with the reasoning for the selection documented.

Ken Vaughn: The test plan should also address what will be tested. The first half of this issue is addressed by identifying the component that will be tested. Is it a software module, a system component, or the entire deployed system? Within NTCIP testing, we generally focus on testing a component of a system—which could potentially be the ESS itself or the management system. Both of those are different components. We usually test one or the other in isolation for NTCIP first and then move on to do the system test later.

Ken Vaughn: The test plan also needs to explain what aspects of the requirements will be tested. For example, specifications will generally include requirements for communications, functionality, performance, hardware, and environmental capabilities. All of these requirements should be tested at some point, but they may be the subject of different test plans. By definition, NTCIP testing will include testing communications, but it may also include testing other requirements. The NTCIP test plan should document if and how these other requirements will be tested during the NTCIP testing. For example, if the ESS reports a temperature that does not seem to reflect current conditions, should the test report an error? This may not be an NTCIP protocol issue, but it is a type of error that could be detected during the test, if desired. Sometimes these issues become more complex. For example, if it is freezing and the device reports 32 (degrees Fahrenheit) when it should report 0 (degrees Celsius), that may reflect a problem with the NTCIP implementation rather than with the sensor itself. Somewhere we need to make sure that that sort of testing occurs. It’s up to the user to document that within the test plan—defining where those sorts of issues will be tested. Likewise, communication response times are performance-related requirements that should typically be measured during NTCIP testing if your communications environment will support meaningful readings. What we mean by “meaningful readings” is, if you’re testing remotely, then the response time received at the remote site might be longer than if you’re testing locally. The requirements for performance are generally measured locally. There’s also the testing of more complex conditions, such as performing operations when there are sporadic power outages or other anomalies. The test plan should document to what extent these other features and requirements should be tested.
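
To make that freezing-point example concrete, a test harness could run a plausibility check that compares the reported value against an independent reference reading and flags values that look like a units error. This is a minimal sketch under assumed conventions (air temperature reported in tenths of degrees Celsius, a hypothetical check_temperature helper); it is not taken from the standard.

```python
# Sketch: distinguish a likely units (protocol) error from ordinary sensor
# inaccuracy. Assumes, for illustration, that the device reports air
# temperature in tenths of degrees Celsius; the helper is hypothetical.
def check_temperature(reported_tenths_c, reference_c, tolerance_c=2.0):
    reported_c = reported_tenths_c / 10.0
    if abs(reported_c - reference_c) <= tolerance_c:
        return "plausible"
    # Near freezing, a device wrongly reporting whole degrees Fahrenheit
    # sends ~32 where ~0 is expected (the scenario described above).
    if abs(reported_tenths_c - (reference_c * 9 / 5 + 32)) <= tolerance_c:
        return "suspected units error (Fahrenheit?)"
    return "out of tolerance"

# Device reports 32 while an independent thermometer reads 0 degrees C.
print(check_temperature(reported_tenths_c=32, reference_c=0.0))
```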

Ken Vaughn: Next, the test plan should also define when the associated testing will be performed. Testing occurs throughout the right side of the V-diagram, with each stage designed to build the system from smaller tested components. The final round of testing focuses on ensuring that the delivered product fully meets the user needs it was designed to fulfill. Each stage of testing should have its own test plan and may have multiple plans testing different sets of requirements. NTCIP testing is generally performed during Subsystem Verification, which may include other testing as well—for example, environmental testing. Subsystem Testing—which is testing a component like ESS—occurs after the Unit Testing but before System Testing. In the case of ESS, it often occurs on a deployed station, due to the desire to collect real sensor data. The deployed device may be a sample device hosted at the manufacturer facility or may be one of the first installations of the device for the project. The test plan should define what preconditions are required for that testing to begin. The “when” aspect of a test plan may also include a set of tentative dates so that the various involved parties are able to properly prepare for the test with defined contingencies if the schedule slips. NTCIP testing can also be included in other stages of this project. All of this should be documented within the test plan documents.

Ken Vaughn: The test plan also needs to define where the test will take place. For example, will it be a simple bench test without any connected live sensors? A laboratory environment where sensors are connected—perhaps even allowing manipulation of environmental conditions? Or perhaps a real-world deployment site? Each of these options has advantages and challenges. For example, a simple bench test is easy to perform, but does not provide much—if any—actual sensor data. A laboratory environment can test the limits of the sensors, but the costs involved in artificially altering the environment may be prohibitive. As a result, many ESSs are tested at an installed location, either using the initial deployment for the project or perhaps using a previously installed test site at the manufacturer’s facility. However, when testing in this fashion, it is important to consider that sensors generally will not be tested over their full dynamic range. If the site is on a live roadway, there may be safety implications that have to be considered. For example, if you want to test ice on the roadway and make sure that your sensors actually detect the ice on the roadway, then that involves having ice on the roadway—which has a safety challenge. Another factor is the location of the tester and the type of connection between the tester and the ESS. The tester might be onsite in the field with a direct connection, nearby in a local office using an agency-owned network, or halfway across the world with a connection through the Internet. The location of the tester will not only impact what actions the tester is able to perform—such as easily altering the test environment by spraying water on a device—but will also impact the communications delay experienced. Nonetheless, the cost savings offered by allowing remote testing is often perceived to be a significant benefit. The test plan just needs to identify which option is being taken. The tradeoffs need to be considered and then documented within the test plan.

Ken Vaughn: Yet another question to answer within the test plan is, “Why are we doing the testing?” The test plan should answer this and explain why the testing is being performed by providing an actual justification in the document. Now, at the most basic level, NTCIP testing is designed to verify that the product is conformant to the standard—but this is only a surface analysis. NTCIP tests are often performed to verify that the device is compliant with a set of specifications. In other words, not only should the device conform to the standard; it also needs to implement all the options defined for the specific project. The test plan should identify the source of the project specifications and provide an indication of whether they are specific to a deployment project, specific to a product line offered by the manufacturer, or perhaps defined by some other scope. The test plan should also explain any other practical implications of the testing and the consequences of passage or failure. For example, how does this testing relate to contractual issues? Does the device need to pass the test prior to the vendor being paid? Does the device need to pass the test before the project moves to the next phase? Or perhaps the test plan is designed to be used after the system is accepted and running and is more of a troubleshooting procedure. The test plan needs to provide the context in which it is to be used and justify its existence.

Ken Vaughn: Finally, the test plan will also explain how the testing will be performed—especially explaining the significant tools that will be used. This figure shows a typical NTCIP setup with the device under test connected to a test application through some sort of communications cloud. In addition to these components, a passive data analyzer may also be connected to the communications cloud to record the bytes sent back and forth between the test application and the device under test. The test plan should fully identify each of these components. The test application is an essential component, since at a minimum there’s a need to exchange electronic communication packets with the device under test. Advanced test applications will automate many of the steps required in a test procedure, and thereby increase reliability while speeding the performance of the test.

Ken Vaughn: We’ve mentioned that there will likely be multiple test plans for any project. The relationship among these test plans should ideally be defined in a Master Test Plan document. This document will not only identify the major testing stages but will also identify how different test plans within the stage—such as communications versus environmental—may interrelate. This will explain the purpose of each test plan and the order in which they should be performed.

Ken Vaughn: We’ve also mentioned that the test plan is a management-level document that is specific to each project. It does not define the testing details. However, it will define the features of the device that should be tested, and it should provide references to other documents that define the full details. Normally these documents are developed in order. After you define your test plan, you develop a high-level design of testing that maps requirements to specific test cases along with any other refinements. Then you define test cases more completely by defining inputs and outputs. Finally, you detail how each of these test cases should be implemented through a test procedure. In the case of NTCIP 1204 v04, most of this more detailed information is already standardized in Annex C, but you still need to customize this information for your project. For example, your ESS will likely only support a subset of the requirements in the standard, and your test plan for a particular phase of the project may only address a subset of the supported requirements. We’ll address how you can write your test plan to properly link to existing standards text in the next portion of this presentation.

Ken Vaughn: That brings us to the first pop quiz.

Ken Vaughn: Which of the following most accurately describes a benefit of having standardized NTCIP test documentation included in NTCIP 1204 v04? Four possible answer choices: A) Eliminates the need for customized test documentation completely; B) Reduces the effort to customize test documentation; C) Ensures that all devices conform to the standard; D) Eliminates the need for additional tools to perform testing. A is completely eliminating the need for customized test documentation; B is reducing the effort to produce that test documentation; C is that all devices will conform to the standard; or D eliminates the need for additional test tools.

Ken Vaughn: The answer for that is B. It reduces the effort to customize test documentation, because most of the test documentation has been standardized in Annex C of the standard. It does not completely eliminate the need for customized test documentation. The test plan document itself is still needed to customize testing to each specific project. Remember, we talked about the need to define the particular schedule, which particular tests you’ll perform for this particular test plan, etc. Likewise, it does not ensure that all devices conform to the standard. The testing documentation merely ensures that there’s a standard way to test these devices. And then finally, it does not eliminate the need for test tools. We still rely on those tools to actually communicate to the device under test.

Ken Vaughn: That completes the first learning objective of describing the role of test plans and the testing to be undertaken. We’ll now move on to the second learning objective—identifying key elements of NTCIP 1204 relevant to the test plan.

Ken Vaughn: There are three key objectives under this learning objective—explaining the relationship among NTCIP standards, explaining the structure of the standard itself, and explaining elements related to testing.

Ken Vaughn: This figure is based on the NTCIP guide. When testing a device claiming conformance to NTCIP 1204 v04, it is important to realize that the standard does not exist in isolation. It relies upon a variety of other standards. Many of those standards have their own options that may need to be considered. NTCIP 1204 v04 is an information-level standard. In other words, it defines data that can be exchanged with the device to allow information sharing and device control. It also includes references to NTCIP 1201, called Global Objects. This standard defines additional data that are common to many different device types. For example, it defines how to store the current time, the standards that the device supports, and other issues—but these standards only define data. They do not define how this information is exchanged. The preferred application-level exchange mechanism for NTCIP 1204 is NTCIP 2301, which is based on SNMP—an internet standard. This defines how data are logically exchanged between the two applications, but it is largely independent of how the data are transported over a network. ESSs typically use UDP/IP for this purpose, but there are other options. As you are probably aware, IP communications can occur across a wide variety of subnetworks—including various wired and wireless technologies. Increasingly, ESSs are connected over some form of Ethernet, which may use any number of physical plants to connect devices.
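
As an illustration of this protocol stack in practice, the sketch below performs a single SNMP GET against an ESS over UDP/IP using the open-source pysnmp library. The IP address, community string, and numeric OID are placeholders only; a real test tool would use the actual object identifier from the NTCIP 1204 MIB.

```python
# Minimal sketch: one SNMP GET to an ESS over UDP/IP (pysnmp 4.x hlapi).
# The address, community string, and OID are placeholders; substitute the
# real OID of the NTCIP 1204 object you want to read.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

ESS_ADDRESS = ("192.0.2.10", 161)               # hypothetical ESS IP and port
PLACEHOLDER_OID = "1.3.6.1.4.1.1206.4.2.5.0.0"  # under the NEMA/NTCIP tree

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=0),  # SNMPv1 community (assumption)
        UdpTransportTarget(ESS_ADDRESS),     # UDP/IP transport layer
        ContextData(),
        ObjectType(ObjectIdentity(PLACEHOLDER_OID)),
    )
)

if error_indication or error_status:
    print("GET failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())
```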

Ken Vaughn: NTCIP 1204 v04 follows the standard outline for NTCIP data dictionary standards. Section 1 provides general background. Section 2 provides a concept of operations that includes a definition of user needs. Section 3 provides the formal functional requirements—the “shall” statements of what the device is required to do. Section 4 defines dialogs of how sequences of messages are exchanged in order to fulfill the requirements documented in Section 3. Section 5 defines the data objects in the Management Information Base. These are the individual data elements that are included in the messages exchanged in the dialogs that fulfill the functional requirements that map to and fulfill the user needs defined in the concept of operations. It is one logical process all the way through, from the high-level concept of operations to the detailed issues within the data elements contained in Section 5.

Ken Vaughn: There are also several annexes, following the standard NTCIP data dictionary outline. The mapping between the requirements and the design elements is provided in Annex A. We talked about how the requirements fulfill the user needs of the concept of operations; that’s what is documented in Section 3. Annex A defines how the design elements of the dialogs and the data elements map to the requirements. So: dialogs in Section 4, data elements in Section 5, and how they map to the requirements in Section 3. Annex B provides a graphical depiction of the major nodes of the object tree. All of the data defined for NTCIP exist under this object tree; Annex B provides an overview of how that works. Annex C provides standardized testing documentation for the standard, which includes traceability from requirements to test cases. So now we’re moving from requirements to the test cases; those test cases will include references to the dialogs and the data elements, connecting all of the sections of the standard together. Annex D provides a summary of revisions that have been made to the document from version to version. Annex E records other user needs that have been requested—but are not currently included in this particular standard—with a reason provided. Annexes F and G contain generic clauses that apply to several standards and may eventually be moved to NTCIP 1201 or some other document. Finally, Annex H provides a summary of the objects within an ESS that define its configuration.

Ken Vaughn: Several of these components relate to testing. As we mentioned, Section 2 contains the user needs and the PRL—the mapping between user needs and functional requirements. The PRL references requirements that are fully defined in Section 3, and then the main portion of interest for testing is Annex C. Based on the requirements selected in the PRL, Annex C provides traceability to the test cases that apply to a particular implementation, along with the definition and associated test procedures. The procedures themselves are based on the design defined in Sections 4 and 5—as mapped through the RTM, which is provided in Annex A. You see the tight interrelationships among each of the sections of the standard; that relationship exists for virtually all of the data dictionary standards within NTCIP.

Ken Vaughn: When properly linked, these elements of the standard simplify the effort to produce test documentation for your project, but everyone should realize that some portions of the documentation—mainly the test plan—are specific to each project. The outline that we present here, and the complete example contained in the Student Supplement, are based on the outline contained in IEEE 829-2008. As discussed in the previous section, much of the material in the test plan—such as testing location, personnel, and other items—will be specific to each project. However, while the features to be tested will also be unique to each project, they will be largely based on the selections contained in the PRL, as contained in Section 2 of NTCIP 1204 v04.

Ken Vaughn: This is a small example of the PRL from NTCIP 1204 v04. The first two columns of the PRL are used to identify user needs. Each user need is traced to one or more requirements, which are identified in Columns 3 and 4. Each user need and requirement is associated with a conformance statement in Column 5. Finally, during procurement, the agency will fill out the table by selecting the desired options; that is, making the appropriate choice within the sixth column, which is called Support. This process identifies all of the requirements the device will be required to support for the project. Each of these requirements should be tested at some point during the project. However, this does not mean that every requirement must be tested as part of every test plan. For example, a project may decide to perform remote pretesting of the device that only includes some tests. Once the pretest is passed, the vendor may be allowed to ship and install the first site, after which all requirements will be tested using a second test plan. Once this stage has passed, the agency may allow the vendor to ship and install all remaining ESS sites. After each site is installed, the agency may want to use a third test plan and ensure that everything is working at each site—or at least perform a spot check. Each one of those test plans would be a separate complete test plan, all based on the same PRL—just testing a smaller or a different selection of the requirements.

Ken Vaughn: For most projects, you will want to perform all of the test cases that trace to all selected requirements, at some point before accepting the component. The easy way to specify this in your test plan is to simply reference the standardized table and then note any exceptions that you may want to make from the standardized mapping. This traceability is defined in Annex C, which defines at least one test per requirement—but there may be multiple test cases listed for some requirements and the same test case may appear multiple times in the table.
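
As a sketch of how that “run everything that traces to a selected requirement” rule can be mechanized: given the completed PRL as a set of selected requirement IDs and the Annex C traceability table as a mapping, the test case list falls out directly. The IDs below are illustrative placeholders, not quotations from the standard.

```python
# Sketch: derive the test cases to run from the PRL selections and the
# Annex C requirements-to-test-case traceability table. All IDs are
# illustrative placeholders.
selected_requirements = {"3.5.1.1", "3.5.1.2", "3.5.2.1"}  # PRL "Yes" rows

req_to_test_cases = {
    "3.5.1.1": ["C.2.3.3.3"],
    "3.5.1.2": ["C.2.3.3.3", "C.2.3.3.4"],  # a requirement may trace to many
    "3.5.2.1": ["C.2.3.4.1"],
    "3.5.3.1": ["C.2.3.5.1"],               # not selected, so not run
}

# Deduplicate: a test case that traces to several requirements runs once.
test_cases_to_run = sorted(
    {tc for req in selected_requirements for tc in req_to_test_cases.get(req, [])}
)
print(test_cases_to_run)  # ['C.2.3.3.3', 'C.2.3.3.4', 'C.2.3.4.1']
```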

Ken Vaughn: Using the standardized traceability tables, the requirements are linked directly to completely standardized definitions of test cases, as shown here. The documentation for a test case defines the purpose, inputs, and outputs of the test case. The purpose is captured in the description. As you see here, it provides a succinct description of the purpose of the test case. The required inputs for the test case are identified in the variables clause of the test case. Since the standardized procedures do not define precise values for inputs, the test plan should include an annex that defines the value to be used for each variable for each test case selected. For example, this test case defines the variable “Required_Temperature_Sensors.” As you see here, in many test cases the values to assign to variables may relate to options that are defined in the PRL. In this case, the variable identifies the location within the PRL where this value is defined—PRL 3.6.3. In other cases, the values that can be assigned to a variable may be more flexible. For example, the text string to be entered to configure a sensor location could be any random text string. Expected output is partially captured in the description and pass/fail criteria, but it is further detailed in the test procedure details.
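
Sketched in code, the variable-assignment annex described above might be nothing more than a mapping keyed by test case, with each value traced back to the PRL entry or project decision that justifies it. All entries shown are illustrative.

```python
# Sketch: a project-specific test plan annex assigning values to the input
# variables that the standardized test cases leave open. Entries are
# illustrative, not taken from an actual project.
test_case_variables = {
    "C.2.3.3.3": {                              # retrieve wind data
        "Required_Wind_Sensors": 1,             # from the PRL Support column
    },
    "C.2.3.x.x": {                              # hypothetical temperature case
        "Required_Temperature_Sensors": 2,      # per PRL 3.6.3 in this example
        "Sensor_Location_Text": "NB I-95 at MP 12",  # any text string is valid
    },
}
```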

Ken Vaughn: Underneath each test case definition is the associated test procedure. A test procedure is simply a step-by-step process for the test. The documentation includes a pass/fail indication for any test step that includes verification, along with a reference to the standardized requirement that relates to the verification step. Thus, by properly referencing the standardized test documentation from your project-specific test plan, you can easily provide a thorough definition of test cases and test procedures to test conformance to the standard and compliance to your project—to the extent that your project is conformant.

Ken Vaughn: In summary, the test plan is almost entirely project-specific, although we do provide a detailed outline with sample text in the Student Supplement to assist you in developing this plan. The standardized test documentation defines traceability between requirements and test cases that would normally be contained in a Test Design Specification. However, if you wish to take any exceptions to this mapping to perform a pretest, you need to indicate this. Rather than creating an entirely new document, you could easily add this as an annex to your own test plan. Likewise, the test cases are largely defined in the standardized test documentation as well, but many of the inputs are defined as variables rather than precise values. Thus to fully specify a test, one would need to define the input values that will be used. Once again, this can be achieved by adding an annex to the test plan. The standardized test procedures can then be used without any further modification or customization.

Ken Vaughn: That brings us to our second pop quiz.

Ken Vaughn: Which statement most closely describes the documentation that a project should prepare before conducting an NTCIP 1204 v04 test? Answer choices are: A) Just reference Annex C of NTCIP 1204 v04; B) Develop a test plan with appropriate additions to link to NTCIP 1204 v04; C) Develop a test plan and a set of test procedures with appropriate additions to link to 1204; or D) Develop all documents defined by IEEE 829-2008. So for A, all you have to do is reference Annex C; for B, you develop your own test plan and then link it to 1204; for C, you develop a test plan and test procedures and then link them to 1204; and for D, you develop all documents. Which one of those answer choices is most appropriate?

Ken Vaughn: The correct answer is B. You need to develop a test plan with links to 1204. As we’ve described, most of the documentation is already done for you. You just need to customize it to your project, and that’s done in the test plan—defining the timing, defining who does what, and all of those managerial sorts of issues. A is incorrect because you can’t just reference Annex C: it won’t define who does what, the dates the testing will be performed, etc. You do need a test plan. C is also incorrect because the procedures are already done for you; the test design specification is already provided by the standardized traceability between requirements and test cases, as are the procedures. And finally, D is incorrect because you don’t have to develop everything. Most of it has already been done for you.

Ken Vaughn: That concludes Learning Objective 2. We’ve talked about the role of test plans. We’ve talked about identifying the key elements of the standard. We’ll now move on to Learning Objective 3—describing the application of a good test plan to an ESS system being procured.

Ken Vaughn: This includes several subtopics. One, we’ll describe what a typical ESS site might look like. We’ll also understand which other modules assist in defining the requirements. We’ll explain how the requirements in T313 trace to test cases and test procedures through a test design specification. Then finally, we’ll create a test plan for an ESS.

Ken Vaughn: What does a typical ESS site look like? We mentioned that within the standard, almost everything at an ESS site is optional. However, a typical ESS may include a wind sensor, a temperature sensor, a humidity sensor, and an air pressure sensor—monitoring the atmosphere, if you will. There is often also a precipitation sensor to monitor rainfall and snowfall, as well as multiple pavement sensors, because a site often covers different lanes and/or different roadways at an intersection. Likewise, there may be a subsurface sensor at each of those locations as well. And modern systems often include a camera to give visual feedback in addition to data. That station information will typically be used by the system to determine not only the current conditions, but also to predict when ice may start forming or when other conditions may adversely impact travel. So that’s probably—from what I’ve seen—a very typical deployment.

Ken Vaughn: The user, in order to identify that deployment, will go through the PRL and identify which features they want on their particular site. The PRL traces each user need at a high level—“I want to monitor winds,” for example—to detailed requirements that they need to select as well. Each one of those requirements underneath may be shown as optional or mandatory. When they’re optional, in particular, the user needs to indicate whether they must be supported as well. The PRL includes a column to allow the user to select the user need or requirement so that it can be customized for any project. Rather than having to specify “You have to support this particular requirement number,” you can just take the standard form and check off the items that you’re interested in. The end result is a series of selections in the Support column—with additional notes filled in as necessary in the final column to the right. In this example, we might say the ESS shall support at least one wind sensor—or maybe we require it to support two. Whatever the number is, that’s where you would enter the value. In this example, we see the agency has specified an ESS that supports retrieving wind data with support for at least one sensor. Now let’s see how the selection of this requirement relates to the standardized test documentation.

Ken Vaughn: The Requirements to Test Case Traceability Table contained in Annex C traces our sample requirement to the retrieve wind data test case defined in Clause C.2.3.3.3. Our test plan merely needs to refer to this table and require that all test cases that trace to any requirement selected in the PRL be performed. If we are preparing a partial NTCIP test, the test plan developer would need to perform the actual traceability and identify precisely which test cases they’d like to perform. Now let’s take a look at what is defined for the retrieve wind data test case.

Ken Vaughn: If we go to Clause C.2.3.3.3 in the standard, we can find the definition of our sample test case. We will see that it will verify that the ESS will allow a management station to determine the current wind information. The one variable input is the required number of sensors—which, as we saw previously, the agency defined to be 1 in the PRL. So the input variable should be assigned the value of 1. We should record this—along with all of our variable assignments—as part of our project-specific test plan. The test case also defines the rules for passing and failing the test. In this case, it is a simple rule that every verification step in the procedure must be passed. The exact procedures for this test case are also defined in the standard directly underneath this test case definition, as we discussed before.

Ken Vaughn: We now have the key information needed to begin writing our test plan. We mentioned that the test plan defines the who, what, when, where, why, and how of a test. The IEEE 829-2008 standardized outline actually uses different terminology and a different order. The outline starts with defining an identifier for the document, so it can be easily referenced by other documents. This identifier is followed by a scope—which should answer the “Why” question associated with the test. The test plan then answers the “What” question by identifying the item to be tested and the features that will be tested for the item—while also explicitly defining what will not be tested. The test plan then answers the “How” question by specifying the approach and pass/fail criteria to be used. It also defines the criteria for suspending the test—which is important, since a complete test may take more than one day. It also identifies all the test deliverables that will be produced and the tasks that will be performed. The test plan then answers the “Where” question in the Environment/Infrastructure section. This is followed by answering the “Who” question by identifying responsibilities and staffing needs. Finally, the test plan addresses the “When” question by providing a schedule and listing out risks and contingencies. The test plan concludes with a glossary. We’ll investigate each of these further in the following slides. A complete example of a test plan using this outline is provided in the Student Supplement.

Ken Vaughn: The test plan addresses the “Why” question by defining the objectives of the test. This will include defining the primary purpose of this test—especially as it’s distinguished from other tests for the procurement—and what happens on successful completion of the test. For example, does passing the test result in payment to the vendor? The “Why” is also answered by providing a project background and scope. The background gives the reader of the document a fuller context on what the overall project is and how this test plan fits into the bigger picture. The scope defines the limits of our test—which, for the purpose of this presentation, is limited to NTCIP issues. Finally, this section of the test plan also includes a listing of documents that serve as references.

Ken Vaughn: The answer to the “What” question is fairly straightforward. We identify the device that will be tested and the requirements that will and will not be tested. If we are developing a test plan that will perform a complete NTCIP test, the features to be tested or not tested can simply be a copy of a completed PRL.

Ken Vaughn: The “How” is primarily defined by the Approach section. This section will reference the standardized test procedures. It should also include a reference to the variable values that will be used, as we mentioned earlier. While defining input values is a part of the IEEE 829-2008 test case specification, we include these as an annex to our test plan in order to fully link our project test documentation to the standardized documentation. In other words, the test case specification is part of the standardized test documentation, but it doesn’t define the detailed values; we’ll put those in an annex of our test plan. For NTCIP testing, the pass/fail criteria and suspension criteria can adequately be handled by simple statements—such as those included in the sample Student Supplement. The pass/fail criteria reference the verification steps within the standardized test procedures. The suspension criteria essentially say that testing can be paused between any two test cases. Finally, the test deliverables will generally include the test report documentation as specified in IEEE 829-2008. The testing tasks are generally divided among the agency, the tester, and the vendor, but the exact assignment of tasks may vary from project to project—for example, depending on whether you’re using the vendor’s facility or the agency’s facility.

Ken Vaughn: The test plan answers the “Where” question in the Environment/Infrastructure section by defining how the equipment will be connected. For example, will the device be installed in the field with the test software connected locally? Or perhaps this is a lab environment where the test software is remotely connected over the internet. The Environment section should also identify any particular needs. For example, the test plan should identify the location of the test facility and any special needs to be considered—such as how power is provided; protection from the elements; material needs—such as table, chair, reflective clothing; any special items to simulate environmental conditions—maybe you want to spray water onto the test site or something like that. These are important items when considering that it might take days to complete the testing while the staff is in the field.

Ken Vaughn: The “Who” section of the test plan assigns responsibilities for the various tasks defined in the test plan. It should also identify the level of effort that might be required.

Ken Vaughn: The “When” question is answered by defining a schedule and identifying risks and contingencies that explain what will happen if things go wrong during testing.

Ken Vaughn: That brings us to our third pop quiz.

Ken Vaughn: Which of the following is not included in a test plan? Identification of who will perform the testing; identification of which features will be tested; identification of the reason for the test; or identification of the steps used to test the device. Which one of those is not included in a test plan? Who will perform the test? Which features are tested? Why it’s being tested? Or the specific test steps.

Ken Vaughn: The correct answer is D. The test steps are part of the test procedures and they’re defined in a separate document. Actually, they’re standardized in Annex C of the standard. The test plan should identify who’s responsible for testing. It should also identify which features will be tested. And finally, it should also identify the reason for testing.

Ken Vaughn: That completes three of our four learning objectives. We’ve talked about the role of test plans; we’ve identified the key elements of the standard; and we’ve described the application of a good test plan. Now we’re going to talk about the actual testing of an ESS using standardized procedures.

Ken Vaughn: This will include talking about the performance of sample test procedures. It will also discuss using different types of test steps. We’ll talk about analyzing and recording test results, and then finally we’ll appreciate the benefits of automated testing.

Ken Vaughn: If you remember back to our previous test case specification, we looked at the description and it said, “The ESS allows the management station to determine current wind information.” That raises the question—what exactly do we mean by “wind information?”

Ken Vaughn: If we look back to the requirement that this test case is traced to, we see the following. Wind information is simply a reference to that particular requirement. As you can see, our one requirement in the PRL includes a number of very precise items—including the 2-minute average wind speed, the 2-minute average wind direction, the current wind speed, the current wind direction, the maximum gust speed in the last 10 minutes, the maximum gust direction in the last 10 minutes, and finally the wind situation. In order to fully test this one master requirement, each of these detailed requirements needs to be verified by the test case—which is achieved in the detailed steps of the test procedure.

Ken Vaughn: Let’s see what an example test procedure looks like. Test procedures are defined in Annex C starting in clause C.2. The procedure for our sample test case is listed immediately under the test case description and can be found in clause C.2.3.3.3 of the standard.

Ken Vaughn: The first step tells us to configure the number of sensors required by the specification. You may remember that this was one of the required inputs for the test. Configuration steps generally come at the beginning of the test procedure and define parameters that can be configured in software once, before performing the test, even if the test is performed multiple times: you configure once, perform many times. We’re also told to record the value. You’ll note that the terms CONFIGURE and RECORD are in uppercase. This is because they are defined terms with precise meanings to be used in testing, as defined in NTCIP 8007.

Ken Vaughn: The next step tells us to get the “windSensorTableNumSensors.0” object. This is a precise reference to a very precise object that the device should support if it conforms to the standard. This object defines the number of wind sensors supported by the device. The precise definition of the GET keyword indicates that a number of specific verification steps are performed when performing this step. For example, when we do a GET operation, the verification steps will include checking that there is only one response received, that the response contains the same objects as listed in the request, that it has the appropriate request ID number—all sorts of other detailed response checks. In other words, the use of the term requires the tester to verify that a response is received that corresponds to the request. If any aspect of that multi-part verification fails, the entire test step fails, and that would be marked there on the right. In addition, you see a clause number under the pass/fail statement. This indicates the likely requirement that failed if this test step failed.
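
To give a feel for the bundled verification that the GET keyword implies, here is a minimal sketch. The send_get_request transport helper is hypothetical, and the checks are the kinds of response checks described above rather than the exact list from NTCIP 8007.

```python
# Sketch of the bundled checks implied by a GET test step. The transport
# helper send_get_request() is hypothetical; the checks mirror the kinds
# of verifications described above.
def checked_get(session, oids, request_id):
    responses = session.send_get_request(oids, request_id)  # hypothetical

    assert len(responses) == 1, "expected exactly one response"
    response = responses[0]
    assert response.request_id == request_id, "request ID mismatch"
    assert [vb.oid for vb in response.var_binds] == list(oids), \
        "response must contain the same objects as the request"
    assert response.error_status == 0, "device reported an SNMP error"

    # If any check above fails, the entire GET step is marked as failed.
    return {vb.oid: vb.value for vb in response.var_binds}
```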

Ken Vaughn: The next step tells us to verify that the response value is greater than or equal to the required number of sensors. This reflects a typical NTCIP assumption: a device that supports more than the required number of sensors exceeds the required specifications but is still allowed as complying with the specification. If I require at least one wind sensor, the assumption within NTCIP is that if the device supports two wind sensors, it meets the requirement to support one. Once again, we see that the pass/fail statement is associated with a clause—and this time we realize that the clause is the same one where we specified the minimum requirements in our PRL.

Ken Vaughn: The next step tells us to record the response value to the previous request. This is because the value will be used later in the procedure.

Ken Vaughn: The next step tells us to start a loop of the next 22 substeps for each wind sensor supported by the table. Note that the loop tests each wind sensor that the device claimed it supported—which may be more than required. So if we only required one wind sensor and the device reports it supports two wind sensors, then we will test both of those wind sensors, because the device claims support for two wind sensors. This is because a failure to support the number of sensors the device claims support for is a violation of the standard itself. Such a failure is captured by testing each sensor for which support is claimed. If the device claims support for fewer sensors than the PRL requires, it would be reported in Step 3, and this loop would only attempt to test those for which support is claimed.

Ken Vaughn: The next step inside of the loop tells us to get seven objects. A closer inspection of these objects reveals that these are the seven objects that represent the seven specific subrequirements that we saw before in the refined meaning of wind information. It talks about average speed, average direction, spot speed, spot direction, gust speed, gust direction, and wind sensor situation.

Ken Vaughn: The next three steps tell us to verify the response value for “windSensorAvgSpeed.N,” where N is the sensor that we’re testing. We make sure that the value is within the defined limits of the object—i.e., the maximum and minimum allowed by the standard—and that the value also appears to be appropriate. Limit testing is easy to automate—i.e., making sure that the value reported is within the defined allowed range per the standard—but the evaluation of what is appropriate is more challenging. Previously we talked about the extent to which NTCIP testing would attempt to verify performance of sensors. It is up to the agency or tester to determine what is appropriate, based on the test plan and the intent to verify the proper operation of the sensors. It should be noted that trying to distinguish between a protocol error and a sensor hardware error can be challenging. For example, if the sensor reports values in meters per second rather than tenths of meters per second, the values will be hard to distinguish in calm conditions. Variations could be due to improper units or equipment accuracy. The extent to which this is checked is an issue that should be addressed in the test plan. Thus human interaction is generally needed to verify these steps. The next 18 steps perform these same three checks for the other six objects retrieved from the device. The loop then repeats for each sensor, and finally the test is done.
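
Pulling the walkthrough together, an automated version of this procedure might look like the sketch below. The harness helpers (configure, get, get_many, prompt_tester) and the range limits are assumptions for illustration; the control flow, though, follows the procedure as described: configure and record, check the sensor count, then loop the per-sensor checks over every sensor the device claims.

```python
# Sketch of the retrieve wind data procedure walked through above. The
# helpers configure(), get(), get_many(), and prompt_tester() are
# hypothetical, and OBJECT_LIMITS stands in for the value ranges the
# standard defines for each object.
WIND_OBJECTS = [  # the seven per-sensor objects discussed above
    "windSensorAvgSpeed", "windSensorAvgDirection",
    "windSensorSpotSpeed", "windSensorSpotDirection",
    "windSensorGustSpeed", "windSensorGustDirection",
    "windSensorSituation",
]
OBJECT_LIMITS = {obj: (0, 65535) for obj in WIND_OBJECTS}  # placeholder ranges

def test_retrieve_wind_data(dut, required_sensors):
    # Step 1: CONFIGURE the number of sensors required by the specification
    # (an input variable drawn from the PRL) and RECORD it.
    configure(dut, required_sensors=required_sensors)

    # Step 2: GET windSensorTableNumSensors.0, with the bundled response
    # checks the GET keyword implies.
    num_sensors = get(dut, "windSensorTableNumSensors.0")

    # Step 3: VERIFY the device supports at least the required count.
    assert num_sensors >= required_sensors, "too few wind sensors"

    # Remaining steps: loop over every sensor the device claims to support,
    # which may be more than the specification requires.
    for n in range(1, num_sensors + 1):
        # GET the seven objects for sensor n in a single request.
        values = get_many(dut, [f"{obj}.{n}" for obj in WIND_OBJECTS])
        for obj in WIND_OBJECTS:
            value = values[f"{obj}.{n}"]
            lo, hi = OBJECT_LIMITS[obj]
            # Automatable check: the value is within the defined limits.
            assert lo <= value <= hi, f"{obj}.{n} out of range: {value}"
            # Judgment check: does the value appear appropriate? This is
            # where human interaction is generally still needed.
            assert prompt_tester(f"Does {obj}.{n} = {value} look reasonable?")
```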

Ken Vaughn: Now, within that one sample test procedure, we identified several different key terms, like CONFIGURE or GET. There are other terms that are used within NTCIP testing as well, such as DELAY. This directs a tester to delay the performance of the following step for some amount of time. This is particularly useful when the test automation can perform a series of steps in quick succession; you may want a delay before proceeding on to the next step. PERFORM is another keyword that allows a test procedure to embed another procedure within it. This is particularly useful for repeatedly configuring the device to a known state before performing or continuing a test operation. Another keyword is SET. Similar to GET, this performs an SNMP operation and includes various checks on the response value. It allows a tester to alter the control or configuration of the device—such as controlling a pavement treatment system or the camera on the ESS equipment. Finally, there are the IF and ELSE keywords. These allow an implementation of branching logic in the test procedure, based on a condition.
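
In an automated harness, these keywords typically become small dispatch primitives. The sketch below shows one plausible mapping; the helper names (set_object, evaluate) and the step structure are hypothetical, not drawn from NTCIP 8007.

```python
import time

# Sketch: one plausible mapping of NTCIP test procedure keywords onto
# harness primitives. Helper names and step fields are hypothetical.
def run_step(step, dut, procedures):
    if step.keyword == "DELAY":
        time.sleep(step.seconds)                 # pause before the next step
    elif step.keyword == "PERFORM":
        for sub in procedures[step.procedure]:   # embed another procedure,
            run_step(sub, dut, procedures)       # e.g., reset to a known state
    elif step.keyword == "SET":
        set_object(dut, step.oid, step.value)    # SNMP SET, with response checks
    elif step.keyword == "IF":
        branch = step.then_steps if evaluate(step.condition, dut) else step.else_steps
        for sub in branch:                       # branching logic (IF/ELSE)
            run_step(sub, dut, procedures)
```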

Ken Vaughn: When completing a test, the tester needs to be aware that failures may be due to a number of different reasons, including: the device under test may have an incorrect implementation—so the failure is a failure of the device under test. But the error may be due to the fact that the user made an error—such as incorrectly configuring the variables or communication settings, or incorrectly evaluating a verification step. Also, the hardware—other than the device under test—may have malfunctioned. For example, it may have a faulty Ethernet connection. Also, the test procedure itself may contain a logical error, or the standard may contain an error or ambiguity. All of these are possible. The good news is that many of these errors decrease with wide-scale deployment. We are now operating with the fourth version of the ESS standard. Most of the standard has been well proven, so errors in the standard should be rare. You should not come across many errors in the standard itself, except for new features. Likewise, you should not come across too many errors in the procedures that have been standardized, except for the newest procedures. If it is determined to be a problem with the device under test, the error should be documented and reported to the device developer. The form of the report should be defined in your test plan and may be the anomaly report defined by IEEE 829-2008. Once all the tests are complete, the entire test execution can be summarized in a test summary report, the form of which should also be defined in the test plan. You should consider using the IEEE 829-2008 outline for this report.

Ken Vaughn: NTCIP 1204 v04 test procedures are 250 pages long and contain many looping operations. Manually performing these tests with traditional off-the-shelf SNMP tools would be extremely time intensive—to the extent that they’d be unreasonable to perform in full for any particular project. The development of automated test scripts can accelerate the performance of this testing by orders of magnitude while also allowing reliable, repeatable testing of the device by different users. This means that when a tester identifies a problem, the specific conditions that led to the problem can be relayed to the developer, and the developer should be able to reproduce the error. This is a critical first step in solving most bugs. Nonetheless, it should be noted that while automated testing dramatically reduces errors and increases the confidence of the test results, it also introduces a new source of error, which is the automated routine itself. But since this is a tool that will likely be used repeatedly on many different projects, any errors in the automation will likely be discovered quickly—which will benefit every subsequent project. Finally, while automation can drastically increase the speed of testing and improve performance, some steps will still require manual verification—which still requires time.

Ken Vaughn: That brings us to our final pop quiz.

Ken Vaughn: Which of the below is not a type of test step used in NTCIP 1204 v04 testing: UPDATE, SET, VERIFY, or IF? So which of those four keywords is not used as a standard keyword within NTCIP 1204 testing?

Ken Vaughn: The correct answer is UPDATE. There is no definition for UPDATE for NTCIP 1204 v04 testing. A SET operation can be used to alter the value of a parameter in the ESS. This might be used for pavement sensors to control their operation and/or to control the operation of the camera on an ESS. VERIFY is a test step that is used to ensure that the values received from the equipment are within the appropriate ranges. And then finally, an IF statement can be used for branching logic and is used within the standardized procedures.

Ken Vaughn: That largely concludes our module. We’ve discussed the role of test plans and the testing to be undertaken. We’ve identified the key elements of NTCIP 1204 relevant to the test plan. We’ve also described the application of a good test plan. Then finally we described the testing of an ESS using the standardized procedures.

Ken Vaughn: You’ve now completed the ESS curriculum, which included the A313a module—Understanding the User Needs for NTCIP 1204; A313b—we talked about specifying requirements for NTCIP 1204; and the most recent, this module, T313—Applying Your Test Plan to the NTCIP 1204 Standard.

Ken Vaughn: We thank you for completing this module. You can provide your feedback with the link below and share any thoughts and comments on the value of the training. We once again thank you for your participation, and we hope to hear from you in the future. Thank you.