Research topic


  • Automation of network element testing (specifically SS7 STPs)
    • Testing of network elements is needed before deployment. Testing can be for:
      • protocol conformance and/or
      • interoperability (between different vendors' equipment)
    • In this project we focused on interoperability testing, but our results are also useful for conformance testing.
    • SS7 (Signaling System No. 7) defines a network that provides connectionless (datagram) service between PSTN switches and databases (e.g. 800-number translations)
    • STPs (Signal Transfer Points) are the datagram routers in an SS7 network.
    • Testing of STPs involves four phases:
      • Test setup
        • Test STPs are connected in a quad configuration common in actual SS7 network deployments
        • The quad STPs are connected to test equipment that emulates PSTN switches and/or databases
        • Link monitors are hooked up to collect SS7 messages generated/forwarded as a result of test actions
      • Test execution
        • The test equipment generates SS7 messages that are routed through the STPs
        • A series of tests is performed, each consisting of a set of actions. Examples of test actions include link failures, link restorals, and link congestion (induced by generating an excessive load)
      • Test results retrieval
        • Messages and logs collected on link monitors, the STPs under test, and the test equipment are retrieved onto general-purpose computers for analysis
      • Test results analysis
        • The test results collected in the previous phase are usually very large ASCII files. These must be analyzed to determine whether the STPs under test behaved per the specification.
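        • As a purely hypothetical illustration of what one such record might look like: the Python sketch below assumes only that each captured record carries a timestamp, the link it was observed on, and the decoded message; the actual monitor log format used in this project is proprietary, and the field layout, link naming, and parser here are made up:

              from dataclasses import dataclass

              # Hypothetical record layout; field names and link naming are
              # invented for illustration.
              @dataclass
              class CapturedMessage:
                  time: float    # seconds from the start of the test run
                  link: str      # e.g. "STP-A:ls1:link0" (made-up naming)
                  msg_type: str  # e.g. "COO", "TFP" (SS7 network-management messages)
                  params: dict   # decoded message parameters

              def parse_record(line: str) -> CapturedMessage:
                  # Assumed layout: "<time> <link> <msg_type> key=value ..."
                  time_s, link, msg_type, *kv = line.split()
                  return CapturedMessage(float(time_s), link, msg_type,
                                         dict(item.split("=", 1) for item in kv))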
    • The four phases of testing are tedious and repetitive, and manual execution of them is error-prone. This testing activity is therefore a prime candidate for automation
    • Specifically, we automated the test results analysis phase.
      • We developed a methodology in which
        • we first create "expected behavior" files that predict the messages expected as a result of the test actions
        • next, the test results are compared against the set of "expected behavior" files
        • all messages in the test results that remain unmatched are collected in an "unexpected event list" (UEL)
        • all events in the "expected behavior" files that remain unmatched are collected in a "missing event list" (MEL) or a "hidden event list" (HEL): an event belongs in the MEL if the link on which its message was expected was monitored, and in the HEL if that link was not monitored
        • Manual processing of the UEL, MEL, and HEL is needed to make the final pass/fail decision, unless these lists are all empty, in which case the STP "passes" the test!
        • An important aspect of this methodology is time. The test results files must record the time at which each message is detected by a monitor and saved, and test execution (phase 2) requires that the time at which each test action is performed be noted accurately. Since the expected behavior files predict the behavior that follows each test action, the test actions themselves are included in these files. The matching process first notes the time of a test action and then checks the test results collected just after that time to verify that the events expected after that action actually occurred (the sketch following this list illustrates this matching).
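        • A minimal sketch of this time-windowed matching and of the UEL/MEL/HEL bookkeeping, reusing the hypothetical CapturedMessage record from the earlier sketch (ASTRA's actual algorithm is SEBEL-driven and more elaborate; all names below are illustrative):

              from dataclasses import dataclass

              @dataclass
              class ExpectedEvent:
                  link: str         # link on which the message is expected
                  msg_type: str
                  offset: float     # expected time relative to the test action
                  tolerance: float  # allowed deviation from that time

              def analyze(action_time, expected, captured, monitored_links):
                  """Classify events after one test action into UEL/MEL/HEL."""
                  uel = list(captured)  # captured messages not yet matched -> UEL
                  mel, hel = [], []     # unmatched expected events -> MEL or HEL
                  for ev in expected:
                      lo = action_time + ev.offset - ev.tolerance
                      hi = action_time + ev.offset + ev.tolerance
                      match = next((m for m in uel
                                    if m.link == ev.link
                                    and m.msg_type == ev.msg_type
                                    and lo <= m.time <= hi), None)
                      if match is not None:
                          uel.remove(match)            # matched on both sides
                      elif ev.link in monitored_links:
                          mel.append(ev)               # monitored link: missing event
                      else:
                          hel.append(ev)               # unmonitored link: hidden event
                  return uel, mel, hel

          The STP passes outright only when all three lists come back empty; anything left over goes to the manual review described above.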
      • We implemented this methodology in a software program called ASTRA (Automated SS7 Test Results Analysis), which can be run on a Windows or UNIX machine
      • The expected behavior files cannot be simple linear databases of expected events following specific test actions.
        • This is because:
          • the times of expected events need to be specified, for the reasons mentioned earlier. These times are expressed primarily relative to the times of test actions, but there is also a need to express the time of one expected event relative to another
          • the frequency of occurrence of certain messages cannot be specified a priori
          • the exact sequence in which messages are sent depends on the specific STP implementation and hence cannot be predicted
        • Hence, we developed a language to specify the expected behavior of STPs (or of any network element that implements SS7). We call this SEBEL (STP Expected Behavior Expression Language); it can be extended to other protocols.
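        • SEBEL's concrete syntax is proprietary and not reproduced here; the purely hypothetical Python sketch below only illustrates the three kinds of constructs the reasons above call for (relative timing, unspecified repetition counts, and implementation-dependent ordering), with made-up link names and timings:

              from dataclasses import dataclass
              from typing import Tuple

              @dataclass(frozen=True)
              class Msg:          # one expected SS7 message on a given link
                  msg_type: str
                  link: str

              @dataclass(frozen=True)
              class Within:       # events expected within `seconds` of `anchor`
                  seconds: float
                  anchor: object  # a test action or another expected event
                  events: Tuple

              @dataclass(frozen=True)
              class Unordered:    # relative order is implementation-dependent
                  events: Tuple

              @dataclass(frozen=True)
              class Repeated:     # repetition count unknown a priori
                  event: object
                  at_least: int

              # Made-up example: within 2 s of a link failure, a changeover
              # order (COO) is expected on the mate link; within 5 s of that
              # COO, transfer-prohibited/-restricted messages may arrive in
              # any order, and the TFP may repeat.
              coo = Msg("COO", link="STP-A:ls1:link1")
              spec = (
                  Within(2.0, anchor="fail STP-A:ls1:link0", events=(coo,)),
                  Within(5.0, anchor=coo, events=(
                      Unordered((Repeated(Msg("TFP", link="STP-B:ls2:link0"),
                                          at_least=1),
                                 Msg("TFR", link="STP-B:ls2:link1"))),
                  )),
              )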
    • We completed the development of SEBEL, implemented ASTRA for a set of tests proprietary to our sponsor, and validated it against test results collected from one set of test runs. We demonstrated that ASTRA can complete the analysis of very large test results files in a matter of minutes, whereas manual analysis could take weeks. Furthermore, ASTRA is more reliable than manual analysis.
  • This project was sponsored by Verizon Systems Integration and Testing Laboratory.