by Benedikt Aumayr
Abstract:
Automatically creating unit tests from runtime observations is a practically relevant technique for making software testing more efficient. The primary application for unit testing is regression testing, where we want to ensure that software does not change its behavior during its evolution - or does change, depending on intention. This thesis implements an approach for generating unit tests by observing program executions such as system tests. The major challenge for making such runtime observations is minimizing the amount of state information that needs to be maintained to ensure the correct functioning of individual unit tests. Maintaining state is important because unit tests are typically executed in random order, which is contrary to the specific order of unit execution during system tests. For example, if during a system test a given stack contained a specific value during retrieval, then this specific value (i.e., state) must be restored for unit testing prior to retrieval to ensure that the unit test corresponds to the system test. The goal of this work is to reduce the overhead of capturing and maintaining such state; for example, not the entire stack state must be restored prior to retrieval. Several approaches have already been proposed that help in the capture and maintenance of state for unit testing. However, the majority of them rely on instrumentation techniques. We analyze the feasibility of gathering test data for unit tests by observing system tests with a high-level debugging technology, the Java Debug Interface, and compare it to the existing approaches. A prototypical implementation of our approach is used to evaluate this choice of technology.
Reference:
Unit Tests From Runtime Observations (Bachelor's Thesis) (Benedikt Aumayr), 2012.
BibTeX Entry:
@Baccthesis{Benedikt2012,
author = {Aumayr, Benedikt},
title = {Unit Tests From Runtime Observations (Bachelor's Thesis)},
year = {2012},
abstract = {Automatically creating unit tests from runtime observations is a practically
relevant technique for making software testing more efficient. The primary
application for unit testing is regression testing, where we want
to ensure that software does not change its behavior during its evolution
- or does change, depending on intention. This thesis implements
an approach for generating unit tests by observing program executions
such as system tests. The major challenge for making such runtime
observations is minimizing the amount of state information that needs
to be maintained to ensure the correct functioning of individual unit
tests. Maintaining state is important because unit tests are typically
executed in random order, which is contrary to the specific order of
unit execution during system tests. For example, if during a system
test a given stack contained a specific value during retrieval, then
this specific value (i.e., state) must be restored for unit testing
prior to retrieval to ensure that the unit test corresponds to the
system test. The goal of this work is to reduce the overhead of capturing
and maintaining such state; for example, not the entire
stack state must be restored prior to retrieval. Several approaches
have already been proposed that help in the capture and maintenance
of state for unit testing. However, the majority of them rely on
instrumentation techniques. We analyze the feasibility of gathering
test data for unit tests by observing system tests with a high-level
debugging technology, the Java Debug Interface, and compare it to
the existing approaches. A prototypical implementation of our approach
is used to evaluate this choice of technology.},
file = {:BSc Theses\\2012 Benedikt Aumayr\\Benedikt Aumayr - Unit Tests From Runtime Observations-preprint.pdf:PDF},
owner = {AK117794},
timestamp = {2015.09.21},
}