Goel-Okumoto (GO) Model

The model developed by Goel and Okumoto in 1979 is based on the following assumptions:

1. The cumulative number of failures experienced by time t, M(t), follows a Poisson distribution with mean value function μ(t).
2. The failure intensity at any time is proportional to the expected number of faults remaining in the software at that time.
Since each fault is perfectly repaired after it has caused a failure, the number of inherent faults in the software at the start of testing is equal to the number of failures that will have appeared after an infinite amount of testing. According to assumption 1, M(∞) follows a Poisson distribution with expected value N. Therefore, N is the expected number of initial software faults, in contrast to the fixed but unknown actual number of initial software faults μ0 in the Jelinski-Moranda model. Assumption 2 states that the failure intensity at time t is given by

dμ(t)/dt = ∅[N - μ(t)]

Just as in the Jelinski-Moranda model, the failure intensity is the product of the constant hazard rate of a single fault and the expected number of faults remaining in the software. However, N itself is an expected value.

Musa's Basic Execution Time Model

Musa's basic execution time model is an execution time model, i.e., the time used in modeling is the actual CPU execution time of the software being modeled. The model is easy to understand and apply, and its predictive value has generally been found to be good. It focuses on failure intensity while modeling reliability and assumes that the failure intensity decreases as (execution) time increases. This assumption is usually justified by what is assumed about the testing activity during which the data is collected: if a failure is observed during testing, the fault that caused it is detected and removed. Even if a particular fault removal action is unsuccessful, failures overall lead to a reduction of faults in the software, and consequently the failure intensity decreases. Most other models make a similar assumption, which is consistent with actual observations. In the basic model, each failure is assumed to cause the same amount of decrement in the failure intensity.
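As a quick illustration of this constant decrement, here is a minimal Python sketch; the initial failure intensity lambda0 and the total expected number of failures nu0 are made-up illustrative values, not parameters from the text.

```python
def failure_intensity(mu, lam0=10.0, nu0=100.0):
    """Basic-model failure intensity after mu failures have been experienced.

    lam0: initial failure intensity (illustrative value)
    nu0:  total expected number of failures in infinite time (illustrative value)
    """
    return lam0 * (1.0 - mu / nu0)

# Each failure lowers the intensity by the same amount, lam0 / nu0 = 0.1
# (up to floating-point rounding):
decrements = [failure_intensity(m) - failure_intensity(m + 1) for m in range(5)]
print(decrements)
```

Because the per-failure decrement lam0/nu0 is constant, the intensity falls linearly and reaches zero exactly when all nu0 expected failures have been experienced.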
That is, the failure intensity decreases at a constant rate with the number of failures experienced. In the more sophisticated Musa logarithmic model, the reduction is not assumed to be linear but logarithmic. Musa's basic execution time model, published in 1975, was the first to explicitly require that time measurements be in actual CPU time used in executing the application under test (called "execution time" t for short). Although it was not initially formulated this way, the model can be classified by three characteristics:

1. The number of failures that can be experienced in infinite time is finite.
2. The distribution of the number of failures observed by time t is of Poisson type.
3. The functional form of the failure intensity in terms of time is exponential.
It shares these characteristics with the Goel-Okumoto model, and the two models are mathematically equivalent. Beyond the use of execution time, a difference lies in the interpretation of the constant per-fault hazard rate ∅. Musa split ∅ into two constant factors, the linear execution frequency f and the so-called fault exposure ratio K:

dμ(t)/dt = f K [N - μ(t)]

f can be calculated as the average object-instruction execution rate of the computer, r, divided by the number of source code instructions of the application under test, ls, multiplied by the average number of object instructions per source code instruction, Qx:

f = r / (ls Qx)

The fault exposure ratio K relates the fault velocity f [N - μ(t)], the speed with which defective parts of the code would be passed if all the statements were consecutively executed, to the failure intensity actually experienced. It can therefore be interpreted as the average number of failures occurring per fault remaining in the code during one linear execution of the program.
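To make these relationships concrete, the following Python sketch computes f and the per-fault hazard rate ∅ = f·K, and uses the fact that solving dμ(t)/dt = f K [N - μ(t)] with μ(0) = 0 gives the mean value function μ(t) = N(1 - e^(-fKt)). All parameter values here (r, ls, Qx, K, N) are assumed illustrative numbers, not values from the text.

```python
import math

# Illustrative (assumed) parameters:
r  = 4_000_000   # average object instructions executed per second
ls = 20_000      # source code instructions in the program under test
Qx = 4           # average object instructions per source instruction
K  = 4.2e-7      # fault exposure ratio (order of magnitude only; illustrative)
N  = 120         # expected number of inherent faults

f = r / (ls * Qx)   # linear execution frequency: f = r / (ls Qx)
phi = f * K         # per-fault hazard rate: phi = f K

def mu(t):
    """Expected cumulative failures by execution time t: mu(t) = N(1 - e^(-phi t))."""
    return N * (1.0 - math.exp(-phi * t))

def intensity(t):
    """Failure intensity at execution time t: d mu/dt = f K [N - mu(t)]."""
    return phi * (N - mu(t))

print(f, phi, mu(1000.0), intensity(1000.0))
```

Note that mu(t) approaches but never exceeds N: with a finite number of expected faults, the failure intensity decays exponentially toward zero as testing continues.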