Module 49 - T312

T312: Applying Your Test Plan to a Transportation Sensor System (TSS) Based on the NTCIP 1209 Standard v02

HTML of the Course Transcript

(Note: This document has been converted from the transcript to 508-compliant HTML. The formatting has been adjusted for 508 compliance, but all the original text content is included.)

Ken Leonard: ITS Standards can make your life easier. Your procurements will go more smoothly and you'll encourage competition, but only if you know how to write them into your specifications and test them. This module is one in a series that covers practical applications for acquiring and testing standards-based ITS systems. I am Ken Leonard, the director of the U.S. Department of Transportation's Intelligent Transportation Systems Joint Program Office. Welcome to our ITS Standards training program. We're pleased to be working with our partner, the Institute of Transportation Engineers, to deliver this approach to training that combines web-based modules with instructor interaction to bring the latest in ITS learning to busy professionals like yourself. This combined approach allows interested professionals to schedule training at your convenience without the need to travel. After you complete this training, we hope that you'll tell your colleagues and customers about the latest ITS Standards and encourage them to take advantage of these training modules as well as archived webinars. ITS Standards training is one of the first offerings of our updated Professional Capacity Building Program. Through the PCB program we prepare professionals to adopt proven and emerging ITS technologies that will make surface transportation safer, smarter, and greener. You can find information on additional modules and training programs on our website at ITS PCB Home. Please help us make even more improvements to our training modules through the evaluation process. We look forward to hearing your comments, and thank you again for participating. We hope you find this module helpful.

Narrator: Throughout the presentation this activity slide will appear, indicating there is a multiple-choice pop quiz following this slide. You will use your computer mouse to select your answer. There is only one correct answer. Selecting the Submit button will record your answer, and the Clear button will remove your answer if you wish to select another answer. You'll receive instant feedback on your answer choice. This module is T312, Applying Your Test Plan to a Transportation Sensor System (TSS) Based on the NTCIP 1209 Standard v02. Ralph Boaz is a transportation engineering and marketing consultant. He was formerly vice president of Econolite Control Products and transportation section manager for Ball System Engineering. He has been a project manager and consultant to the ATC and NTCIP standards programs. He's a member of numerous ATC and NTCIP standards working groups. He is the chair of the transportation sensor system working group for NTCIP. And in 2002, he founded Pillar Consulting Inc., where he assists companies and agencies in ITS planning, implementation, deployment, testing, and training.

Ralph Boaz: Thank you for that introduction, Nicola. Let's take a look at our target audience. It is my pleasure to be your instructor today, and we'll be covering a lot of topics regarding testing. We have a broad audience that we're aiming at: traffic management and engineering staff, operations people, systems integrators, device manufacturers, and test personnel. Now, although we'll get into pretty technical areas at times, there is a need for management people to understand the testing process as well, and that can be gleaned from this course.

Here are the recommended prerequisites. I won't go through all of these right now, but hopefully you have taken these and are familiar with the material in them. Here we see the others. I want to highlight A312a, Understanding User Needs for TSSs Based on the NTCIP 1209 Standard, and A312b, which covers specifying requirements. Those courses really give you the information you need to understand what we're talking about in this module.

Here's that curriculum path in a graphical organization for those who prefer it that way. And, again, hopefully you will have taken these courses preceding this one.

All right, these are our learning objectives. We want you, at the end of this course, to be able to recognize the purpose, structure, and content of well-written test documentation based on IEEE Standard 829-2008. Second, we want to describe TSS testing and the role of test documentation within the context of the systems lifecycle. Third, we want to identify a process to develop test documentation for a TSS specification based on the NTCIP 1209 Standard. And fourth, we want to describe the testing of a TSS using sample test documentation; we'll go right down through some examples and highlight some items in the test documentation.

So our first learning objective is to recognize the purpose, structure, and content of well-written test documentation. Here we're going to describe the documents used to specify testing, and then we're going to describe the documents used for test reporting. All of this information comes from IEEE Standard 829-2008, IEEE Standard for Software and System Test Documentation. If you don't have a copy of that and you're actually going into a testing process, we highly advise you to get one.

In this module we're going to focus more on the documents that are used to specify testing, simply because we wanted to be able to get through the module in a reasonable period of time. I think the test reporting documents are more straightforward to understand.

This PCB program has been going on for a couple of years now, maybe even a few years. As we've taught these courses, some of the content has evolved along with the standards and along with the progress of the particular NTCIP or ATC standards as they've moved forward. So on the right you'll see IEEE Standard 829-1998, and some of the previous modules used that nomenclature for the documents. If you look to the left, what we're doing now is referring to IEEE Standard 829-2008. They actually correspond pretty well, as you can see from the table. On the left you can see it uses the terms level test plan, level test design, and level test case. Well, "level" is a term that's meant to be replaced with the level of testing that you're performing.

Now, when we're talking about a collection of test documents, we'll call that test documentation, so now we have that defined. The level test plan specifies the scope, approach, resources, and schedule for a specified level of testing. And, again, "level" would be replaced by the name of the level that's being covered. The level test design specifies the refinements of the test approach within the test plan and identifies the features to be tested by that design and the associated tests. The level test case defines the information needed as it pertains to inputs and outputs from the software or software-based system being tested; a test case document can cover a single test case or a group of test cases. The level test procedure specifies the steps for executing a set of test cases, or more generally, the steps used to exercise the software product or software-based system item in order to evaluate a set of features.

So if we look at a diagram of this, we see we have our level test plan, and there may be different test designs needed to address all of the testing we want to do as part of this plan, but we show here one level test design. That level test design will refer to level test cases; there are usually many of those. It will also refer to the level test procedures, the steps necessary for performing the test cases. There may be one procedure for a whole set of test cases or one for each test case; it all depends. But the test cases and test procedures get associated in this document.

Now, if we were to have an additional test design, it would be shown to the right. It would have its own test design document, and it would refer to level test cases. These could even include some of the level test cases that were used in the first design, along with level test procedures. Usually there's a different setup necessary: when we talk about a separate test design, there's usually a different kind of setup or some major difference that we have to capture in the design of the test, and that's what causes us to create the separate level test design. And then we go down and we perform our execution.

Now, we're talking about a TSS communications test plan in this module. So here we have our TSS communications test plan and a TSS communications bench test design, and we'll talk about that a little later. We have our TSS communications test cases and our TSS communications test procedures. And then, again, we'll go into more details about this, but in our example in this module we're going to have a TSS communications integrated test design, and then we'll have a TSS communications field test design.
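To make these relationships concrete, here is a minimal sketch in Python (with hypothetical names; nothing here is prescribed by IEEE 829 or NTCIP) of how one test plan relates to its designs, cases, and procedures:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    case_id: str               # e.g., "TC3.4.3.1.7-1.03"
    name: str

@dataclass
class TestProcedure:
    proc_id: str               # e.g., "TP3.4.3.1.7-1"
    name: str
    case_ids: List[str]        # a procedure may exercise one or more test cases

@dataclass
class TestDesign:
    name: str
    cases: List[TestCase] = field(default_factory=list)
    procedures: List[TestProcedure] = field(default_factory=list)

@dataclass
class TestPlan:
    name: str
    designs: List[TestDesign] = field(default_factory=list)

# One plan, several designs; designs may share test cases.
plan = TestPlan("TSS Communications Test Plan", [
    TestDesign("TSS Communications Bench Test Design"),
    TestDesign("TSS Communications Integrated Test Design"),
    TestDesign("TSS Communications Field Test Design"),
])
print(len(plan.designs), "test designs under", plan.name)
```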

Now, let's move over to the documents used for test reporting. We have a level test log, which provides the chronological record of details regarding the execution of a test. We have an anomaly report; in previous modules this was also called a test incident report, so there's a correlation there to the older IEEE standard. It identifies any event that occurs during the testing process that requires investigation, and it can go by various names depending on your preference. We have our level interim test status report, and that's a big name for probably a small report. It summarizes the results of the designated testing activities and optionally provides evaluations or recommendations based on these results. If you go to the level test report, it summarizes the results of the designated testing activities and provides evaluations and recommendations based on these results. You can see these are very similar; in the level interim test status report it's optional whether evaluations and recommendations are included, while the level test report, which comes at the end of testing for a particular test design, is expected to provide the evaluations and recommendations.

So if you look at the documents for test reporting, we have our test execution. At any point during the execution we may want to have some status reported, so we'll use the level interim test status report as well as the level test log. We'll come out of execution with any anomaly reports, and then we'll have our level test report. If we substitute in the names for what we're talking about today, we'll have a TSS interim test status report, a TSS communications test log, the TSS communications test incident report, and the TSS communications test report.

So now we have an activity. "Which of the following is a true statement?" Your answer choices are: a) there's usually one level test case per level test design; b) always use the word "level" in test document names; c) anomaly reports provide a chronological record of tests; or d) a level test report summarizes the results of testing. Please choose the best answer. We asked which of the following is a true statement. If you said d) a level test report summarizes the results of testing, you're correct. A level test report gives us the results of a designated testing activity; remember, it was also very similar to the level interim test status report. If you said "A," you were incorrect. There are typically many level test cases per level test design. If you said "B," you were incorrect, because "level" refers to the level or type of testing to be performed, and you're supposed to replace it with something more descriptive of your testing. And if you said anomaly reports provide a chronological record of the tests, that was incorrect. The test log does that; the anomaly report documents any event during the testing process that requires investigation.

Let's look at our summary. In this learning objective number one we recognized the purpose, structure, and content of well-written test documentation. We described the documents used to specify testing, and we described the documents used for test reporting.

Let's move on to Learning Objective 2: describe TSS testing and the role of test documentation within the context of the systems lifecycle. The major points that we'll be covering in this learning objective are to identify the types of testing for a TSS, describe the stages of NTCIP communications testing, and third, discuss testing the TSS communications in the context of the systems lifecycle.

Before we get much further, I wanted to remind you of the definition of a TSS within NTCIP. A transportation sensor system, or TSS, is defined as any system or device capable of sensing and communicating near real-time traffic parameters using NTCIP. We say near real time because it's not like real time from an aeronautical point of view. It may be real time out in the field, but because we're sending this information over a system to a central site, for instance, we call it near real-time traffic parameters.

Now, some clarification on the terminology. A TSS is considered a field device from an NTCIP perspective. So you have the word "system" in the name, but it's considered a field device from an NTCIP point of view. It may be a relatively simple device or a combination of devices working together. And don't confuse the TSS with the central system that manages the TSS.

So here's a kind of architecture showing some examples of how a TSS is characterized. We have our management station on the left. A TSS may be a traffic controller connected to loop detectors that are embedded in the street. Typically, the wires in the street connect to devices in the transportation cabinet called loop detectors, which are not very robust, computing-wise; they're concentrating on the detection part of the sensing. They use the traffic controller to communicate their NTCIP 1209 information back to the central system, so it's kind of a group there. A TSS may also be a video detection system. Video detection systems typically have pretty powerful processors connected with them, so those can communicate back to the management station directly. And there are other technologies out there, such as radar detection, magnetometers, acoustic sensors, lasers, and many others.

These are the types of testing mentioned in IEEE standard 829. There is acceptance and qualification testing. There is development testing. There's operational testing, component testing, component integration testing, integration testing, systems integration testing, system testing, and there's many, many, many others. And even within this list you will have overlap. Generally speaking, what we're going to be talking about today is we're talking about aspects in the context of acceptance or qualification testing.

Now, if we think of the TSS as a device there's different things we might want to test when we're doing acceptance and qualification. We might want to do functional testing, performance testing, communications testing, environmental testing, electrical, material, shock, user interface testing, and there's many, many others. Primarily, what we're talking about in this module is communications testing. So that's the focus of the testing that we're going to be illustrating here.

I call these the stages for NTCIP communication testing. And this is just what has helped me in my practice. You could think of three levels. You could have more or less. But I have bench communications testing, integrated communications testing, and field communications testing. We'll go into details of this.

Bench communications testing's primary focus is to exercise as much of the 1209 v02 data elements and dialogs as is practical. You really want to exercise as much in this tight environment as you can. It's typically performed in a lab or workshop using test software running on a personal computer connected directly to the TSS. Now, we expand that a little bit when we're talking about integrated communications testing. The primary purpose there is to test the TSS communications with other components of the system. Importantly, it includes the use of the central system software that you're going to be using to control or manage the TSS device. This is typically performed in a lab, a workshop, or a special integration area, depending on what kind of space you need. It uses the central system to test the center-to-field communications to the TSS. What we want to do here is exercise as much of the real communications infrastructure as practical, to be able to differentiate protocol issues from other anomalies once we get out in the field.
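To give a flavor of what bench test software does under the hood, here is a minimal sketch of a single SNMP GET against a TSS. It assumes the pysnmp library, and the address, community string, and OID are placeholders; NTCIP objects live under the NEMA enterprise node 1.3.6.1.4.1.1206, but the exact OID would come from the NTCIP 1209 MIB:

```python
# A minimal bench-test sketch: one SNMP GET against a TSS.
# Assumes pysnmp (pip install pysnmp); address, community, and OID are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=0),         # SNMPv1, common for NTCIP
           UdpTransportTarget(('192.168.1.50', 161)),  # hypothetical device address
           ContextData(),
           ObjectType(ObjectIdentity('1.3.6.1.4.1.1206.4.2.4.2.1.0')))  # illustrative OID
)

if errorIndication:
    print('FAIL:', errorIndication)                    # e.g., request timed out
elif errorStatus:
    print('FAIL:', errorStatus.prettyPrint())
else:
    for varBind in varBinds:
        print('PASS:', ' = '.join(x.prettyPrint() for x in varBind))
```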

In field communications testing the primary purpose is to test the communications under real-world conditions. We use the central system software, the central system itself, to test the TSS communications. We limit full deployment out on the streets until we're confident of the system and the field devices and their functionality. In other words, agencies may want to have a certain nearby street where they're implementing this device, where they put the device out in the field so they can control it, carefully watch it, and easily access it in the beginning. Now, what's interesting here is that I worked with a city that had done unit testing to show that the required data elements were supported. They set up an office to communicate with the lab using a special set of copper wire connections, because that's what they would be using in the field. And when they did their field deployment, they had all kinds of communications errors and various communication problems. The problems would stay for days, but then while we were working on an issue, all of a sudden the problem might disappear and show up someplace else. By using this stepwise approach we were able to identify that the cause was out of our control. It had to do with the phone company's communications lines. What they were doing behind our backs, doing their phone work, was switching these around at various junction places. The wires, of course, were all still there, but they were connecting them through different networks to get the information there, and that would change the noise on the line. My point here is that even though we tried to exercise those lines in our integration testing, we really didn't find out what the true problems were until we got the unit in the street.

Here's the systems lifecycle, which everyone is familiar with. We see the decomposition and definition from user needs down through the design and the development, and we see the corresponding integration and recomposition as we move up from the development through unit testing, subsystem verification, system verification, deployment, et cetera.

For testing the TSS communications in the context of the systems lifecycle, there are some things we need to point out. Unit testing, or device testing, tests the item and its interfaces. Subsystem verification involves testing the item integrated with other parts of the subsystem, and if you have multiple devices in a center-to-field NTCIP situation, it will be like having multiple subsystems to test. System verification assures that the entire system, including its subsystems, meets the system requirements of the project. And system validation shows that the system as implemented meets the original user needs.

There are various methods used in performing verification. Items can be verified using inspection, analysis, demonstration, and test, and verifying an entire system will likely use all of these. Verification goes more to confirming the correctness of a feature, and this testing is part of the verification process. Validation goes more to confirming completeness, showing that the original user needs are met.

So if we consider a TSS in the grand scheme of a system, we really look at it as part of subsystem development. In the unit testing we're doing our bench communications testing of the TSS device. Then in the subsystem verification we're doing our integrated communications testing and the field communications testing. Again, there are probably other subsystems that would go through this part of the "Vee" diagram as well.

And we have another activity for you. Let's put on our thinking caps. "When bench testing the communications for a TSS, the primary objective is to?" a) test the TSS communications with other components; b) exercise as much of the NTCIP 1209 protocol as possible; c) test the TSS communications under real-world conditions; or d) test the central system communications to the TSS. Please make your selection; we're looking for the best answer. Let's review. Again, the question was, "When bench testing the communications for a TSS, the primary objective is to?" If you said b) exercise as much of the NTCIP protocol as possible, you were correct, and that's usually performed using a software tool on a PC. If you said "A," this was incorrect. We're not testing the communications with other components at this point; that is part of integrated testing. If you said "C," test the communications under real-world conditions, that was incorrect. Our real-world tests are part of the field communications testing. And if you said test the central system communications to the TSS, that was incorrect. Testing the central system communications to the TSS is part of integrated testing and field testing.

Summarizing our Learning Objective Number 2, we described TSS testing and test documentation within the context of the systems lifecycle. We identified the types of testing for a TSS. We described the stages of NTCIP communications testing. And we discussed testing the TSS communications in the context of the systems lifecycle.

Now, we had a question in a previous version of this module. Someone asked, "Wouldn't I want to test the entire unit at the same time?" And my answer is not necessarily. You may have units in the field already and now you're just updating the firmware so that those units can use NTCIP. All of your environmental tests, shock tests, and others may already have been done in this case. The second question we had was, "Would I want to go through this testing with every unit?" And I'd say not usually. You go through a sample that gives you assurance that they work and then verify that your purchased units have that same validated software or firmware. Many agencies go through statistical testing, where they test a certain number of units, the first five or ten coming in, let's say. After that they do spot testing until they find a problem, and then they increase the testing again. Those were great questions.

Okay. Let's go on to our Learning Objective Number 3, and that is to identify a process to develop test documentation for a TSS specification based on NTCIP 1209. In developing the test documentation, we're going to identify a process to pull information from the specification. We'll identify key elements of the NTCIP 1209 Standard and the agency spec. We'll describe a process to develop the test documentation based on the spec. We'll show you how to create a test traceability matrix. And then we'll describe test tools available for NTCIP communications.

So here are key elements of the standard and of the spec. We have user needs and features. We have requirements. We have a protocol requirements list in the standard; that's a template. We have a requirements traceability matrix in the standard. We have a management information base and the data objects in the standard. And we have dialogs in the standard; they tell you how to exchange the information, the order in which things are done. Those are all in the standard. Then a completed protocol requirements list goes into the specification, and many of the rest of these items can be referenced from within the specification. Many times the requirements and user needs are added to the specification as well.

So let's talk about a process for developing the test documentation we discussed in Learning Objective 1, based on the NTCIP 1209 spec. First we'll develop our TSS communications test plan. Then we'll ask ourselves: do our test designs cover the test plan? Well, we don't have a test design yet, so we're going to go develop our TSS communications bench test, integrated test, or field test design. Then we'll ask if our test cases are sufficient for the purpose: do they adequately test what we're trying to do within this design? Since we don't have any, we say no, and we develop a TSS communications test case. Then, do we have a procedure for that test case that's sufficient? Well, this is our first time through, so no, we don't have one, so we develop the test procedure for that test case. It may be that we want multiple test procedures to address that test case, so we may come back and create another one. Once our test procedures are sufficient, we go back and ask: did we cover all of the test cases? Since we had only one, we go back through this flow chart and create the rest of our test cases and test procedures as necessary. When we have enough test cases, we drop down through and ask again: do the test designs cover our test plan? If the first one was our bench test design, the answer is no, so we go back up, develop our integrated test design, and continue the process. Now, it is often the case that test cases may be used across the different test designs for one reason or another. And it's often the case that there will be multiple test procedures for a test case, or one test procedure could be used for multiple test cases.

Now, this flow chart is not the only way to do it. You could develop a test plan and go through all the test designs, and that would be another approach. But in real life it's kind of a hybrid thing, where you flesh out the process for a test design by maybe doing one test case and one test procedure. Then you go to the next test design and make sure you have the process fleshed out for that test design. Then you go back and fill in more, and you get to a certain point where you say, let's really get into the testing now. You start testing, and then you think of other test cases to add. So it's a process where you get an initial amount of documentation together to do what you think is necessary, and then other documents develop and are added to the pile. And when we're all done with that, we can go on into test execution.

Now, I just wanted to show how we developed this TSS communications test plan. It prescribes the scope, approach, resources, and schedule. Some of the testing aspects that are covered are the items to be tested, the features to be tested, the features not to be tested, the testing tasks, the personnel responsible for the tasks, and the risks associated with the plan. It's pretty high level but covers the whole broad gamut of the testing.

One point here: when the plan talks about the testing tasks to be performed, it's more general. We're not talking about the detailed test cases right here in the plan; that comes later. And you can see that as we take that agency specification, much of the information we need, including the features to be tested or not to be tested, becomes part of our test plan.

Here you can see where we take the protocol requirements list, go through it, and find the features. Remember, in the TSS standard user needs are described as features. So we can say yes; see this example in the first table above, which shows 2.5.4.1, Retrieve In-Progress Sample Data, and the PRL says it is included. So we add that to the features to be tested in our test plan. Then if we look at multi-version interoperability, which is 2.5.5, we said no, so that feature is not to be tested, and we'll also state that in our test plan.

So now let's look at the test designs. A test design specifies a detailed approach for exercising a collection of tests; it identifies the features to be tested, identifies the requirements to be tested, and identifies the tests or test cases associated with the design. So here we take our agency specification; not only are we including the features, but we are now looking at the requirements to be tested for this particular test design. It may be that certain requirements can only be tested in one test design and other requirements have to be tested in another test design. Here, again, we're looking at our protocol requirements list, and we're grabbing the features, in this case, and also the requirements from those identified in the PRL.

Now we're looking at the TSS communications test cases. These define the test cases identified by a test design specification, and what they especially call out are the input and output specifications. So from our specification we get the requirement to be tested and any additional specs. From the standard we get the data objects to be tested, and we also get the MIB, which has the standard values for the data objects. So we're getting the boundaries, maybe, for the testing that we want to perform, during our bench communications tests in this case. Here we illustrate the requirement that was included in this example, 3.4.3.1.7. If we go to the RTM in the standard, you can see which dialog is being used and especially which data objects are being used. We see that the zone class label is going to be used in our test case, so we have a test case called Verify Zone Class Labels.

Now we'll look at the TSS communications test procedures. These are used to specify the steps for executing one or more test cases. They can be done at different levels and, again, one procedure can be used with many test cases, or there may be one test procedure for one test case. Here we see the agency specification, where we get our requirements to be tested and the additional specs. Then from the NTCIP 1209 standard we get the data objects and the dialogs to be tested, and, again, the MIB with the standard values for data objects. To understand how to test this particular feature, go back to where we were talking about Verify Zone Class Labels: we're going to use the dialog to understand what's going on. The dialog is shown on the bottom left of the screen. We won't go through the details of it, but it shows how to get the zone class labels from the TSS device. When we're developing these procedures, we'll also need, of course, the information about the zone class label object, and both of these go into creating our test procedure.

So this is an important tool: the test traceability matrix. The TTM provides traceability from requirements to test cases and to test procedures. Each TSS test design has a TTM for the requirements and test cases applicable to that test design. Let's step through this table. On the left you see the requirement ID, and next to it the requirement. In the next column we have our test case ID; the first test case listed is TC3.4.3.1.7-1, and we'll talk about the convention that's used a little later in this module. Going horizontally from the test case ID, you get the name of the test case, in this case Verify Zone Class Labels. We have another one here, Get Number of Sample Data Entries, Nominal. Now, we also want to tie these together with the appropriate test procedures, so we can look vertically here and see the test procedure ID, TP3.4.3.1.7-1. You can already see some of our convention, where we use TC in front of the test cases and TP in front of the test procedures. Going horizontally from the test procedure ID, for the Verify Zone Class Labels test case we have Verify Zone Class Labels Procedure. If we go down to the next entry, Get Number of Sample Data Entries, Nominal, we can see that this particular test case has two test procedures. The first one, TP3.4.3.1.8-1, is Get Number of Sample Data Entries, Nominal Procedure One, and there is a second one, Get Number of Sample Data Entries, Nominal Procedure Two. We're not going to go and define those; we're just showing you how you can use this table to capture the cases where a single test case has multiple test procedures.
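Because the TTM is ultimately just a table linking requirements to test cases to test procedures, it's easy to keep in machine-readable form. Here is a minimal sketch using the identifiers from the example (the ID of the second procedure is inferred from the naming convention):

```python
# The TTM for one test design, kept as plain data.
ttm = [
    {
        "requirement": "3.4.3.1.7",
        "test_case": ("TC3.4.3.1.7-1", "Verify Zone Class Labels"),
        "procedures": [
            ("TP3.4.3.1.7-1", "Verify Zone Class Labels Procedure"),
        ],
    },
    {
        "requirement": "3.4.3.1.8",
        "test_case": ("TC3.4.3.1.8-1", "Get Number of Sample Data Entries, Nominal"),
        "procedures": [
            ("TP3.4.3.1.8-1", "Get Number of Sample Data Entries, Nominal Procedure One"),
            ("TP3.4.3.1.8-2", "Get Number of Sample Data Entries, Nominal Procedure Two"),
        ],
    },
]

# A simple coverage check: every requirement traces to at least one procedure.
for row in ttm:
    assert row["procedures"], f"requirement {row['requirement']} has no procedure"
    print(row["requirement"], "->", row["test_case"][0],
          "->", [p[0] for p in row["procedures"]])
```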

I'd like to talk a bit about tools. There are many generic SNMP test tools available for Ethernet communications. SNMP is widely used in routers and other devices for networks, and, of course, NTCIP is built on SNMP. These tools are generic, however. There are data analyzers, where you can actually have devices listening in on the communications line; for instance, it may be a pass-through on an Ethernet cable or serial cable. This idea goes back to having the little lights on a serial cable that would come on, but these data analyzers can be very sophisticated, so you can actually see the bits and bytes running across the lines. Then we have specialized NTCIP testing tools. These tools typically test both Ethernet and serial communications. They'll test all of the objects within the management information base with set and get operations. They'll verify that the read-only objects are not settable; being settable would be a bad thing. They'll use test scripting, which is essentially using a language to create a sequence of steps to form more complex tests; some of the tools use XML, others use Visual Basic. These tools can log information at various levels, depending on what the user wants, and they have reports for various levels. Some of the tools, not all of them, do performance testing; in other words, they'll track how fast the TSS responded. A particular tool developed by the USDOT is the Test Procedure Generator, or TPG, which helps develop consistent, robust test procedures for implementations of NTCIP standards. However, we can't use this tool quite yet, because the NTCIP 1209 Standard v02 needs some modification before the TPG can automatically take the standard and generate procedures. So there will be some updates coming at some point.
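As an example, the check that a read-only object rejects a SET can be scripted directly: attempt the SET and expect an error back. A minimal sketch, again assuming pysnmp and a placeholder address and OID:

```python
# Verify a read-only object is not settable: the test passes when the SET fails.
# Assumes pysnmp; the address, community string, and OID are placeholders.
from pysnmp.hlapi import (
    setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, Integer,
)

READ_ONLY_OID = '1.3.6.1.4.1.1206.4.2.4.2.1.0'   # illustrative only

errorIndication, errorStatus, errorIndex, varBinds = next(
    setCmd(SnmpEngine(),
           CommunityData('public', mpModel=0),
           UdpTransportTarget(('192.168.1.50', 161)),
           ContextData(),
           ObjectType(ObjectIdentity(READ_ONLY_OID), Integer(1)))
)

if errorIndication:
    print('INCONCLUSIVE:', errorIndication)       # e.g., timeout; not a verdict
elif errorStatus:
    print('PASS: device rejected the SET:', errorStatus.prettyPrint())
else:
    print('FAIL: read-only object accepted a SET')
```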

Let's go to another activity. True or false: "The best way to start developing your test documentation is with a test design." Your answers are true or false. Please make your selection. Well, if you said false, you were correct. It's the test plan, not the test design, that is the starting point for developing test documentation; it covers the scope, approach, resources, and schedule for testing. If you said true, that was incorrect. The test design is developed after the test plan, and it may only cover a portion of the testing to be performed.

We have another activity, another little quiz here. "What is the most appropriate test document in which to include a test traceability matrix, or TTM?" Is it a) the TSS communications test cases, b) the TSS communications test procedures, c) the TSS communications test design, or d) the TSS communications test report? Please make your selection. Let's review our answers. If you said c) the TSS communications test design, you were correct. A test design identifies the features to be tested and the test cases associated with the design, and we also include the test procedures in our TTM. If you said "A," TSS communications test cases, you are incorrect; a test case defines the input and output specifications for an identified test. If you said "B," that was incorrect; the communications test procedures specify the steps for executing one or more test cases. And if you said "D," the TSS communications test report, you are incorrect; the test report summarizes the results of the designated testing activities.

Summarizing what we just learned: our learning objective was to identify a process to develop test documentation for a TSS specification based on the NTCIP 1209 Standard v02. We identified the key elements of the standard and of an agency specification. We described the process to develop the test documentation based on the specification. We talked about developing a test traceability matrix. And we described the test tools available for NTCIP communications.

We had a question come in that I'd like to add to our discussion here today: "It seems like you could be writing test cases and test procedures forever. Where do you draw the line?" Well, it's really up to you. You need to decide up front what you will generally do for requirements and then consider which requirements you may want to test more extensively. For example, you may decide that you will have at least one nominal test case and test procedure for each requirement. You may also decide that you will set each data element with nominal values for each requirement. For others you may want to have boundary conditions tested and go to more extensive testing, depending on what you feel is needed to completely test that requirement.

Our fourth and final learning objective is to describe the testing of a TSS using sample test documentation. We're going to talk about the TSS communications test plan, the TSS communications test design, the TSS communications test cases, and the TSS communications test procedures, and we're going to go through a simple example. We won't go through the test reporting; we just ask that you look at the participant supplement for the outlines of the test reporting documents.

I want to talk about variations in preparing the test documentation. We've presented it in a fashion that's very close to the IEEE 829 method. However, the outlines of the test documents may be tailored in order to improve their effectiveness on a given project. So you may find, hey, I'm using this automated tool, it has a format like this, and we'd like to use that for our test procedures. There are different things you can modify, and it's all a matter of what's going to be effective and what's going to provide you the traceability that you need. So keep that in mind. In practice, if there's only one test design being used, the test plan and the test design may be in one document. Test cases may be combined with test procedures, especially when using a testing tool; the tools often do that. Multiple test cases may share a single test procedure, and a single test case may have multiple test procedures.

So here's our example TSS deployment. The city of Buena Vida, California, has a central system that uses NTCIP communications to communicate with their signal systems and dynamic message signs. The city wants to be able to manage their video detection systems from the same central system. The city has developed a TSS communications specification based on NTCIP 1209 v02. One of their current equipment providers, ICU Detection Systems, has implemented the NTCIP 1209 Standard v02 for their video detection system products. The city needs to test the communications to verify compliance with their specification. Now, if you haven't gathered, this is an example deployment. NTCIP 1209 v02 was only recently approved, and we don't have a full working system out there currently, but hopefully that's coming soon, because if the agencies specify it, the vendors will build it. That's for all of you agency folks listening in: if you specify it, they'll build it. You can use the same software to run your video system as you're using for your main traffic signal system. Okay. So I don't want anybody trying to call the city of Buena Vida or ICU Detection Systems, because I made those up.

Here's a reminder of our test specification documents: the test plan, the test design, the test cases, and the test procedures. We're going to step through the outline for the test plan. This TSS communications test plan is part of the Buena Vida central system project. The introduction includes the following subsections: document identifier, scope, and references. You'll find by reading the IEEE 829 document that these introductory sections become routine, very similar from document to document. We have a document identifier; in this case we have invented a scheme for our documents. We have "TSSCom," short for TSS communications; test plan we call "TP"; and then we have the version, so the written-out form is TSS communications test plan v01.04, plus the date. There's more information you can put in here as necessary. Then we have the scope: this test plan covers the communications for video detection devices as part of an overall acceptance process for traffic control field equipment under the Buena Vida central system project. Here we're just trying to summarize the software product or system items and the features to be tested in the scope.

We have our references, and you can see the various documents that were included. I'm sure there would be more if this were a real example; we're just trying to give you an idea of the kind of information that goes into these sections. Picture the level in the overall sequence: this plan may be part of a bigger plan. Our TSS communications test plan may be part of the overall video detection acceptance test plan, and this shows that organization. You may have references to standing specs or standing test procedures that you use in your agency, and those may be part of the picture also.

When we're talking about the test classes and overall test conditions, here we're trying to summarize the unique nature of the particular level of test. Here we say this test plan covers the NTCIP communications for a TSS. Testing will be performed both in the laboratory environment with test software and as integrated testing with the city central system, both in the lab and in the field.

Continuing, we have the details for this level of test plan. We talk about the test items and their identifiers, and we get pretty specific here on what we're using as part of this test. In section 2.2 we have our test traceability matrix; here's where you would plug that in. Then, again, we list the features we're testing and the features we're not testing. Then we talk about our approach: this testing will include (a) bench testing of the TSS in the city's lab using a test tool; (b) integrating the TSS with the city system within the lab and exercising the communications using the central system software; and (c) field testing the TSS at the city's test intersection using the central system software. Then we say what we consider a pass or fail for the test plan: the test item will be considered to pass if it successfully completes all test cases and their associated test procedures. So we're being hardline on this. Just like in a marriage, communication is everything.

We don't want to stop every time a test doesn't go well, because we wouldn't get very far. So we want to create suspension criteria and resumption requirements for our testing. What's spelled out here is that there are four major feature areas of the TSS communications: configure the TSS, control the TSS, monitor the TSS, and collect data from the TSS. If there are more than five failed test cases in any one major area, testing for that area will be suspended until the issues are resolved. Then we talk about our test deliverables, all the way through the testing. Some people discuss having one communications test report for all the test designs, and that can be done also, but I think it's very practical to have a communications test report for each test design. I guess you could have a summary of those reports as well.
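A suspension criterion like this is straightforward to track mechanically during test execution. Here is a minimal sketch of the rule as stated; the area names and function are illustrative:

```python
from collections import Counter

# The four major feature areas named in the plan.
AREAS = ("configure", "control", "monitor", "collect data")
MAX_FAILURES = 5        # "more than five failed test cases" suspends the area

failures = Counter()
suspended = set()

def record_result(area: str, passed: bool) -> None:
    """Record one test case result and apply the suspension rule."""
    assert area in AREAS, f"unknown feature area: {area}"
    if not passed and area not in suspended:
        failures[area] += 1
        if failures[area] > MAX_FAILURES:
            suspended.add(area)
            print(f"testing suspended for '{area}' until issues are resolved")

# Example: a sixth failure in one area trips the suspension rule.
for _ in range(6):
    record_result("monitor", passed=False)
```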

We talk about test management. We talk about our planned activities and tasks; that's the progression. We talk about the environment and the infrastructure. Again, we're getting a little more specific here: for bench TSS communications testing, the city's lab will be equipped with a laptop running NTCIP Tester Software (I just made that company and product up) loaded with the NTCIP 1209 MIB, test cases, and test procedures. Then we go on to describe what we're doing for the integrated communications testing and then the field testing. We talk about who is responsible and who has authority in this testing. Now, you see in 3.4 through 3.6 that we don't have any additional interfaces required, because we're sticking strictly to the NTCIP interface that we're going to use out in the field. There are no additional resources identified down in 3.3, and there's no additional training required, because we're hiring a consultant who knows what they're doing. You'll note, as we fill out this test plan, that it's good practice to include the sections of the test plan even if you're not using them, because it shows a reader who is familiar with these test plans that something was not forgotten.

We continue with the test plan here. We talk about quality assurance; here we reference the QA procedures of the city and of Testing Consultants Limited, who is doing our testing. For metrics, we have to decide how we're measuring our testing, and we'll use the percentage of test cases passed. For test coverage, all data elements specified by the PRL and the RTM shall be included in at least one test using nominal values. And we have a glossary, document changes, et cetera.

So now we're going to go into the bench test design. As we go through these documents, I'll be highlighting more and we won't go through as much detail as we just did for the test plan. We have our document identifier, and you'll see it follows the same convention: TSSCom, in this case bench test design, version 1.02. We have our scope: this test design describes the TSS bench communications testing for the TSS communications test plan. We have the features that we're testing, and we list those off. We have our approach refinements, where we get more specific than we did in the test plan. We said the bench testing will be conducted using a laptop running NTCIP Tester Software loaded with the NTCIP 1209 MIB, test cases, and test procedures from TestCo, and the TestCo NTCIP Tester Software will be used to drive the testing and capture results. We have our test traceability matrix here in our test design. We talk about our feature pass/fail criteria. We have our test deliverables, which would be the test design itself and all the associated documents resulting from the test design and the testing. And we have our general information: glossary, change procedures, and history.

Now let's talk about the TSS communications test cases. What's interesting is that in this case the IEEE standard creates a single test case document containing the test cases. In many instances, people will have configuration management systems for the test cases and may keep each test case as a separate document, but we'll go through this together. So you have your identifier, again using our same convention. For the scope: this document contains all of the test cases used for the TSS communications test design; there's no additional context necessary. You can put in references again.

Notation for this description: test cases have identifiers that are associated with the feature being tested, having the form TCf-n.vv, where TC indicates it's a test case, "f" is the feature paragraph number within the standard, "-n" is a sequential number uniquely identifying the test case for the feature (one, two, three, and so on), and ".vv" is the two-digit version number of the test case. So now you'll see that come up here; we have one of these for every test case. So we have the test case identifier, and here we see TC3.4.3.1.7-1.03. This is the test case for requirement 3.4.3.1.7; it is test case one for that requirement, and its version is 03. That's how we follow the convention and keep everything straight, and people using it know exactly where it falls in our test plan and test design. And it's called Verify Zone Class Labels; this is the first test case for that requirement.
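Since every test artifact carries this structured identifier, a test tool can parse the convention mechanically. Here is a minimal sketch of the convention as described (the regular expression is an illustration, not part of the standard):

```python
import re

# TCf-n.vv / TPf-n.vv: kind, feature paragraph, sequence number, two-digit version.
ID_PATTERN = re.compile(
    r'^(?P<kind>TC|TP)(?P<feature>\d+(?:\.\d+)*)-(?P<seq>\d+)\.(?P<version>\d{2})$'
)

def parse_test_id(identifier: str) -> dict:
    """Split a test case/procedure identifier into its named parts."""
    m = ID_PATTERN.match(identifier)
    if not m:
        raise ValueError(f'not a valid test identifier: {identifier!r}')
    return m.groupdict()

print(parse_test_id('TC3.4.3.1.7-1.03'))
# {'kind': 'TC', 'feature': '3.4.3.1.7', 'seq': '1', 'version': '03'}
```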

So now let's talk about our objective: this test case validates that the TSS has zone classification labels according to the city's specification. Now, when we're talking about vehicle classes in the traffic world, we're talking about trucks, small cars, compacts; there are official names for these, and thirteen classes covered by the FHWA. The city, however, doesn't have the capability of discerning between some of these numerous classes, so they've binned them into groups: they have classes eleven through thirteen, classes eight through ten, classes four through seven, classes two through three, and class one. Each entry is on a single line in a file called classlabels.txt. So in the test case we have to say what the inputs to the test case are. For the outcomes, in this case it's a self-validating test. What we mean by a self-validating test is that you don't have to have a user compare the result to some other document or other test cases to know that it was valid; this one will come out and say it passed or it didn't pass. For environmental needs, we're right down to the gory details about what we need in our environment: we specify the CPU and the capabilities of the PC, and we give the version of the tester software that we're using. We don't have any special procedural requirements or inter-case dependencies. And then we have the typical document information that we keep.

Now I'll talk about the TSS communications test procedures. Again, we have our document identifier, in this case starting with TP: this is TSS communications test procedure 1, version 8, for feature 3.4.3.1.7. The scope is that this test procedure is used to validate that the TSS has zone classification labels according to the city's specification. We have a reference to the test case, and there's no relation to other procedures. Then we have the details: the inputs, the outputs, and the special requirements. For the inputs we reference the test case, and the same for the outputs. We have our logs and the pass/fail indication. Then we need an ordered description of the steps to be taken. This is described in a kind of pseudo-language I used here; it might be done in XML or Visual Basic, but I'll step through the procedure fairly quickly. In step one, we open the file described previously that has all of our bins, the groups of classes, identified. In step two, we read the class entry count from the device. In step three, we read a file entry from the file. In step four, we get the label for that class entry in the device. Now we have the sensor zone class label, and in step five we compare it to the entry we read from the file. If they're equal, then in step six we log a pass for that label; we have a tool that, when you say log, prints the label and the fact that it passed into a file. If the file entry and the class label did not agree, it would log a fail for that label, and in our case a fail would abort the test. In step seven, if the class entry is greater than zero, in other words we're not down to the end of the class entries within the device, then we set the class entry to the previous class entry minus one and go back to step three to read another file entry from the file. Now, there are other things you could have put into this, like what happens if the file is not complete; it all depends on how robust you want to make it. But I just wanted a simple enough example to show how we could create a procedure that outputs whether we're passing or failing the test.
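Translating that pseudo-language into running code can make the loop easier to follow. Here is a minimal sketch in Python with the device access mocked out; a real test tool would issue an SNMP GET for each zone class label entry, and the file contents and the pairing of device entries to file lines are illustrative:

```python
# Verify Zone Class Labels: compare each class label reported by the device
# against the expected labels in classlabels.txt (one label per line).

# Set up the expected-labels file for the sketch (contents illustrative:
# the city's binned classes).
EXPECTED_FILE = 'classlabels.txt'
with open(EXPECTED_FILE, 'w') as f:
    f.write('Classes 11-13\nClasses 8-10\nClasses 4-7\nClasses 2-3\nClass 1\n')

# Hypothetical stand-in for the device under test; a real tool would do
# an SNMP GET per zone class label entry.
DEVICE_LABELS = {5: 'Classes 11-13', 4: 'Classes 8-10', 3: 'Classes 4-7',
                 2: 'Classes 2-3', 1: 'Class 1'}

def run_test() -> bool:
    with open(EXPECTED_FILE) as f:                 # step 1: open the file
        class_entry = len(DEVICE_LABELS)           # step 2: class entry count
        while class_entry > 0:
            file_entry = f.readline().strip()      # step 3: next expected label
            label = DEVICE_LABELS[class_entry]     # step 4: label from device
            if label == file_entry:                # step 5: compare
                print('step 6 pass label', label)
            else:
                print('step 6 fail label', label, '!=', file_entry)
                return False                       # a failure aborts the test
            class_entry -= 1                       # step 7: decrement, loop to 3
    return True

print('PASS' if run_test() else 'FAIL')
```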

An important thing about developing test procedures is that we need consistency in the expression and level of detail of the test procedures, because that really helps the understanding and makes it easier for the tester. So it's really up to the people writing the tests to address things at an appropriate level and only occasionally go deeper or make changes for particular cases. That, again, is where we decide whether we pass or fail. Here's the general information that goes in our test procedures.

And now we get to our activity. "Which of the following is a true statement?" Your answer choices are: a) only manufacturers need to be concerned with testing; b) well-written agency transportation sensor system specifications facilitate testing; c) good testing is easy; or d) the only thing that matters is the level test report. Please make your selection. Let's review our answers. If you said b) well-written agency specifications facilitate testing, that's absolutely true. We demonstrated through this module, and through the entire PCB program, how creating the specification will help you test down the road; you see this connectivity from testing all the way back to user needs and requirements. If you said only manufacturers need to be concerned with testing, that was incorrect. Agencies and their consultants cannot rely on the manufacturer to be sure that a product meets a specification; you need to test. You may have an approach to testing where, once you've approved something, you can keep buying the same units with the same software; that may be a way to mitigate the effort, but you do have to test. If you said good testing is easy, that's incorrect, and I think after going through this module you probably didn't pick this one. Good testing can be difficult and tedious, but it's necessary. In the long run it saves the agency money and reputation, and it shows the public accountability; if you have this kind of testing going on, you have something to stand behind. It certainly costs a lot more money to find out a month after you put a device out on the street that it doesn't work right, and you have to get a bucket truck or something like that, bring it down, bring it back to the lab, and figure out what's going on. If you said d) the only thing that matters is the level test report, that was incorrect. While some people want to hold to the bottom line, understanding and performing good testing practices is essential.

Summarizing our learning objective: we described the testing of a TSS using sample test documentation. We talked about the test plan, the test design, the test cases, and the test procedures.

I think a question that we had here was, "This seems really hard. Are most test procedures that complicated?" Well, for the most part, no, but it all depends on the requirement and what details and nuances you're trying to test. For simple values you could have a simple test procedure that just "sets" and "gets" the value of a data element. You could potentially use that test procedure for every read-write data object you have in your test design, so that's a very simple procedure. But in other cases, like the one we just described, we needed to really show how to get at the information that's in the device.
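That simple "set and get" procedure is generic enough to express once and reuse across objects. A minimal sketch, with the SNMP operations stubbed out as placeholders:

```python
# A generic nominal test for a read-write data object: SET a nominal value,
# GET it back, and compare. The SNMP operations are stubbed; a real test
# tool would map these to SNMP SET and GET against the device.

_DEVICE = {}                           # stand-in for the device's MIB state

def snmp_set(oid: str, value) -> None:
    _DEVICE[oid] = value               # placeholder for a real SNMP SET

def snmp_get(oid: str):
    return _DEVICE.get(oid)            # placeholder for a real SNMP GET

def set_get_nominal(oid: str, nominal_value) -> bool:
    """SET a nominal value, GET it back, and compare."""
    snmp_set(oid, nominal_value)
    return snmp_get(oid) == nominal_value

# Reusable across every read-write object in the test design (OID illustrative).
print(set_get_nominal('1.3.6.1.4.1.1206.4.2.4.2.1.0', 42))
```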

Okay, what have we learned? You've done a great job today. One, IEEE 829-2008 provides test documents that are used in test specification and test reporting. Two, there are many types of testing that can be performed on a TSS; when testing a TSS for compliance with an NTCIP 1209-based specification, we're concerned with communications. Three, both the NTCIP 1209 standard and the agency's specification are critical to developing good test documentation. Four, a test traceability matrix is a key element of the TSS communications test design. And five, in practice the IEEE Standard 829-2008 recommended document outlines may be tailored to improve their effectiveness on a given project.

Here are some resources; there are others in the student supplement. We've already gone through some of the questions. We certainly appreciate your participation, and this concludes our module.

#### End of T312_Final.mp4 ####