
Goel-Okumoto (GO) Model

The model developed by Goel and Okumoto in 1979 is based on the following assumptions:

  1. The number of failures experienced by time t follows a Poisson distribution with the mean value function μ(t). This mean value function has the boundary conditions μ(0) = 0 and lim(t→∞) μ(t) = N < ∞.
  2. The number of software failures that occur in (t, t+Δt] with Δt → 0 is proportional to the expected number of undetected errors, N − μ(t). The constant of proportionality is φ.
  3. For any finite collection of times t1 < t2 < ... < tn, the number of failures occurring in each of the disjoint intervals (0, t1), (t1, t2), ..., (tn−1, tn) is independent.
  4. Whenever a failure has occurred, the fault that caused it is removed instantaneously and without introducing any new fault into the software.

Since each fault is perfectly repaired after it has caused a failure, the number of inherent faults in the software at the start of testing is equal to the number of failures that will have appeared after an infinite amount of testing. According to assumption 1, the failure count M(∞) follows a Poisson distribution with expected value N. Therefore, N is the expected number of initial software faults, as opposed to the fixed but unknown actual number of initial software faults u0 in the Jelinski-Moranda model.

Assumption 2 states that the failure intensity at time t is given by

                dμ(t)/dt = φ[N − μ(t)]

Just like in the Jelinski-Moranda model, the failure intensity is the product of the constant hazard rate of a single fault and the expected number of faults remaining in the software. However, N itself is an expected value.
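
Solving this differential equation with the boundary condition μ(0) = 0 yields the closed form μ(t) = N(1 − e^(−φt)), and hence the failure intensity λ(t) = Nφe^(−φt). The short Python sketch below evaluates both quantities; the parameter values N = 100 and φ = 0.02 are purely illustrative assumptions, not estimates from any dataset.

    import math

    def go_mean_value(t, N, phi):
        # Expected cumulative number of failures by time t:
        # mu(t) = N * (1 - exp(-phi * t))
        return N * (1.0 - math.exp(-phi * t))

    def go_failure_intensity(t, N, phi):
        # Failure intensity lambda(t) = dmu/dt = phi * (N - mu(t))
        return phi * (N - go_mean_value(t, N, phi))

    # Illustrative (assumed) parameters: N = expected total faults,
    # phi = per-fault hazard rate.
    N, phi = 100.0, 0.02

    for t in (0, 10, 50, 100, 500):
        print(f"t = {t:4}: mu(t) = {go_mean_value(t, N, phi):7.2f}, "
              f"lambda(t) = {go_failure_intensity(t, N, phi):5.3f}")

As t grows, μ(t) approaches N and the intensity approaches zero, which matches the boundary conditions stated in assumption 1.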

Musa's Basic Execution Time Model

Musa's basic execution time model is an execution time model, i.e., the time variable it uses is the actual CPU time consumed in executing the software being modeled. The model is easy to understand and apply, and its predictive value has generally been found to be good. It targets failure intensity while modeling reliability.

It assumes that the failure intensity decreases with time, that is, as (execution) time increases, the failure intensity decreases. This assumption is generally true because of what is assumed about the software testing activity during which the data is collected: if a failure is observed during testing, the fault that caused it is detected and removed.

Even if a particular fault-removal action is unsuccessful, overall the failures lead to a reduction of faults in the software. Consequently, the failure intensity decreases. Most other models make a similar assumption, which is consistent with actual observations.

In the basic model, it is assumed that each failure causes the same amount of decrement in the failure intensity. That is, the failure intensity decreases at a constant rate with the number of failures. In the more sophisticated Musa logarithmic model, the reduction is not assumed to be linear but logarithmic.
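
To make the contrast concrete, the sketch below compares the two decay shapes as functions of the number of failures μ experienced so far, using the textbook forms λ(μ) = λ0(1 − μ/ν0) for the basic model and λ(μ) = λ0·e^(−θμ) for the logarithmic model; the parameters λ0, ν0, and θ are invented for illustration.

    import math

    def basic_intensity(mu, lam0, nu0):
        # Basic model: every failure lowers the intensity by the same
        # amount lam0 / nu0, so the decay is linear in mu.
        return lam0 * (1.0 - mu / nu0)

    def logarithmic_intensity(mu, lam0, theta):
        # Logarithmic model: every failure lowers the intensity by a
        # constant fraction, so the decay is exponential in mu.
        return lam0 * math.exp(-theta * mu)

    lam0, nu0, theta = 10.0, 100.0, 0.05   # assumed example values

    for mu in (0, 20, 40, 60, 80):
        print(f"mu = {mu:3}: basic = {basic_intensity(mu, lam0, nu0):5.2f}, "
              f"logarithmic = {logarithmic_intensity(mu, lam0, theta):5.2f}")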

Musa's basic execution time model, established in 1975, was the first to explicitly require that the time measurements be in actual CPU time utilized in executing the application under test (called "execution time" t for short).

Although it was not initially formulated this way, the model can be classified by three characteristics:

  • The number of failures that can be experienced in infinite time is finite.
  • The distribution of the number of failures observed by time t is of Poisson type.
  • The functional form of the failure intensity in terms of time is exponential.

It shares these characteristics with the Goel-Okumoto model, and the two models are mathematically equivalent. Besides the use of execution time, a difference lies in the interpretation of the constant per-fault hazard rate φ. Musa split φ into two constant factors, the linear execution frequency f and the so-called fault exposure ratio K:

                dμ(t)/dt = fK[N − μ(t)]

f can be calculated as the average object-instruction execution rate of the computer, r, divided by the number of source code instructions of the application under test, ls, times the average number of object instructions per source code instruction, Qx: f = r / (ls · Qx).

The fault exposure ratio relates the fault velocity f·[N − μ(t)], the speed with which defective parts of the code would be passed if all the statements were consecutively executed, to the failure intensity experienced. Therefore, it can be interpreted as the average number of failures occurring per fault remaining in the code during one linear execution of the program.
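
As a worked numerical sketch, suppose an application of ls = 50,000 source instructions runs on a machine executing r = 4,000,000 object instructions per second with Qx = 4 object instructions per source instruction; then f = r / (ls · Qx) = 20 linear executions per second. All values below, including the fault exposure ratio K, are assumptions chosen for illustration, not measurements.

    # All values are assumed for illustration, not measured.
    r  = 4_000_000    # average object-instruction execution rate (instr/s)
    ls = 50_000       # source instructions in the application under test
    Qx = 4            # object instructions per source instruction
    K  = 4.2e-7       # fault exposure ratio (assumed)

    f = r / (ls * Qx)                       # linear execution frequency: 20.0/s

    N, mu = 100.0, 40.0                     # expected initial faults, failures so far
    fault_velocity = f * (N - mu)           # f * [N - mu(t)]
    failure_intensity = K * fault_velocity  # lambda(t) = f * K * [N - mu(t)]

    print(f"f = {f} executions/s")
    print(f"fault velocity = {fault_velocity} faults passed per second")
    print(f"failure intensity = {failure_intensity:.6f} failures per second")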
