Renewal Processes and Repairable Systems

Dissertation

for obtaining the degree of doctor at the Delft University of Technology,

by authority of the Rector Magnificus Prof. dr. ir. J. T. Fokkema, chairman of the Board for Doctorates,

to be defended in public on Monday 3 February 2003 at 16.00 hours

by

Suyono

Magister Sains Matematika, Universitas Gadjah Mada, Yogyakarta, Indonesia,

born in Purworejo, Indonesia.


This dissertation has been approved by the promotor: Prof. dr. F. M. Dekking

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. F. M. Dekking, Technische Universiteit Delft, promotor
Prof. dr. R. M. Cooke, Technische Universiteit Delft
Prof. dr. M. Iosifescu, Centre for Mathematical Statistics, Romania
Prof. dr. C. L. Scheffer, Technische Universiteit Delft (emeritus)
Prof. dr. R. K. Sembiring, Institut Teknologi Bandung, Indonesia
Prof. dr. H. J. van Zuylen, Technische Universiteit Delft
Dr. J. A. M. van der Weide, Technische Universiteit Delft

The research in this thesis has been carried out under the auspices of the Thomas Stieltjes Institute for Mathematics, at the University of Technology in Delft.

Published and distributed by: DUP Science

DUP Science is an imprint of Delft University Press
P.O. Box 98
2600 MG Delft
The Netherlands
Telephone: +31 15 27 85 678
Telefax: +31 15 27 85 706
E-mail: [email protected]

ISBN 90-407-xxx-x

Keywords: Poisson point processes, renewal processes, repairable systems

Copyright © 2002 by Suyono

All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher: Delft University Press.

Printed in The Netherlands


To my wife Demonti, my son Afif, and my daughters Hana and Nadifa.


Acknowledgments

I am very grateful to my supervisor Dr. J. A. M. van der Weide, who introduced me to interesting research topics, for his creative guidance, his ideas, his encouragement, and his patience in guiding my research. I would also like to express my gratitude to my promotor Prof. dr. F. M. Dekking, not only for giving me the opportunity to carry out my Ph.D. research at the Delft University of Technology, but also for his constructive comments.

In addition, I wish to thank all members of the Department of Control, Risk, Optimization, Stochastics and Systems (CROSS) for their hospitality and their assistance, especially Cindy and Diana for their administrative assistance and Carl for his computer assistance. I also wish to thank Durk Jellema and Rene Tamboer for arranging almost everything I needed during my research and stay in the Netherlands.

I am pleased to acknowledge my colleagues at the Department of Mathematics, State University of Jakarta. Their encouragement and support have been invaluable. On this occasion I would also like to thank all of my friends in the Netherlands for their support and their help.

This work would not have been possible without the support of a cooperative project between the Indonesian and Dutch governments. In this connection I would like to express my appreciation to Prof. dr. R. K. Sembiring and Dr. A. H. P. van der Burgh, who organized a research workshop four years ago in Bandung, to Dr. O. Simbolon and Dr. B. Karyadi, former project managers of Proyek Pengembangan Guru Sekolah Menengah (Secondary School Teachers Development Project), and to H. P. S. Althuis, director of the Center for International Cooperation in Applied Technology (CICAT).

Finally, I would like to thank my dearest wife and our beloved children. I express my deepest gratitude to my wife, Dra. Demonti Siswari, for her great sacrifice in willingly and sincerely staying behind while I pursued knowledge in the Netherlands; for her prayers, constantly offered to Him, through which, alhamdulillah, I found a way out of many difficulties; for her motivation, which always restored my spirit whenever I lost it; for her patience, through which, alhamdulillah, we received much help and goodness from Allah; and for her hard work in caring for and educating the children entrusted to us, so that, insyaAllah, they become qurata a'yun for us. To my children, Hana Firdaus, Muhamad Afif Abdurrahim, and Nadifa Mumtaz: your father thanks you very much for your understanding and your prayers. Your sacrifice for your father's success has been truly great. You spent your early childhood without the attention and affection you deserved from your father, and even though you were still small you already had to share in the struggle that your father and mother carried out. May all of this be valuable training and provision for your lives in the future. Amiin. Finally, to my father, mother, younger siblings, parents-in-law, and all my relatives: thank you for the encouragement, prayers, and help given to me and my family, especially during the roughly four years in which I left my children and wife to pursue the Ph.D. program in the Netherlands.

Delft, February 3, 2003 Suyono


Contents

Acknowledgments vii

1 Introduction 1
  1.1 Related works 3
  1.2 Basic notions 4
    1.2.1 Point processes 4
    1.2.2 Renewal processes 9
  1.3 Outline of the thesis 12

2 Renewal Reward Processes 13
  2.1 Introduction 13
  2.2 Instantaneous reward processes 14
    2.2.1 A civil engineering example 20
  2.3 Renewal reward processes 25
  2.4 Asymptotic properties 28
  2.5 Covariance structure of renewal processes 33
  2.6 System reliability in a stress-strength model 35
    2.6.1 Type I models 36
    2.6.2 Type II models 40

3 Integrated Renewal Processes 45
  3.1 Notations and Definitions 45
  3.2 (N(t)) a Poisson or Cox process 46
  3.3 (N(t)) a renewal process 50
  3.4 Asymptotic properties 57
  3.5 An application 61

4 Total Downtime of Repairable Systems 65
  4.1 Introduction 65
  4.2 Distribution of total downtime 66
  4.3 System availability 73
  4.4 Covariance of total downtime 76
  4.5 Asymptotic properties 78
  4.6 Examples 83
  4.7 Systems consisting of n independent components 88
    4.7.1 Exponential failure and repair times 88
    4.7.2 Arbitrary failure or repair times 93

A The proof of Theorem 2.5.1 95

B Numerical inversions of Laplace transforms 105
  B.1 Single Laplace transform 105
  B.2 Double Laplace transform 108

Bibliography 113

Samenvatting 117

Summary 119

Ringkasan 121

Curriculum Vitae 123


Chapter 1

Introduction

Many situations in our daily life can be modelled as renewal processes. Two examples are arrivals of claims at an insurance company and arrivals of passengers at a train station, if the inter-arrival times between consecutive claims or passengers are assumed to be independent and identically distributed (iid) non-negative random variables. In these examples the number of claims or passengers arriving during the time interval [0, t], t ≥ 0, is usually taken as the definition of a renewal process. A formal mathematical definition of a renewal process is given in Subsection 1.2.2.

From renewal processes we may construct other stochastic processes. Firstly, consider the renewal-process example of the insurance claims. Consider the claim sizes as random variables and assume that they are independent and identically distributed. If we interpret the claim size as a reward, then the total amount of all claims during the time interval [0, t] is an example of another process, known as a renewal reward process. In this thesis we study renewal reward processes in Chapter 2.

In renewal reward processes it is usually assumed that the reward is earned all at once at the end of the 'inter-arrival times' of the corresponding renewal process. This means that only the rewards earned until the last renewal before time t are considered. The reward may also depend on the inter-arrival time. Motivated by an application in the study of traffic, see Subsection 2.2.1, it is interesting to consider a version of renewal reward processes where the reward is a function of the inter-arrival time length, is earned instantaneously, and where the reward earned in an incomplete time interval is also taken into account. In this thesis we call this version an instantaneous reward process.

Now consider the second example of renewal processes, the arrivals of passengers at a train station. Suppose that a train has just departed at time 0 and that no passengers were left behind. We are interested in the waiting time from when passengers arrive (after time 0) until the departure of the next train at some time t ≥ 0. One of the quantities of interest is the total waiting time of all passengers during the time interval [0, t]. This is an example of a class of stochastic processes which we will call integrated renewal processes. We will study integrated renewal processes in Chapter 3.

Integrated renewal processes have a connection with shot noise processes. In shot noise processes it is usually assumed that shocks occur according to a Poisson process. Associated with the ith shock, which occurred at time Si > 0, i ≥ 1, is a random variable Xi which represents the 'value' of that shock. The values of the shocks are assumed to be iid and independent of their arrival process. The value of the ith shock at time t ≥ Si equals Xi ψ(t − Si), where ψ(x) ≥ 0 for x ≥ 0, and ψ(x) = 0 otherwise. The total value of all shocks at time t is then a shot noise process. As we will see in Chapter 3, if we take Xi ≡ 1 for all i ≥ 1 and ψ(x) = x 1_{[0,∞)}(x), then the corresponding shot noise process is an 'integrated Poisson process'.
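As a quick numerical illustration of this connection (a sketch added here, not taken from the thesis), the following snippet simulates the shot noise process with Xi ≡ 1 and ψ(x) = x, i.e. the integrated Poisson process, and checks its mean against the exact value E[∫_0^t N(s) ds] = λt²/2.

```python
import random

def shot_noise_linear(t, lam, rng):
    """One sample of the shot noise process with X_i = 1 and psi(x) = x:
    the 'integrated Poisson process' value sum_i (t - S_i) over arrivals S_i <= t."""
    s, total = 0.0, 0.0
    while True:
        s += rng.expovariate(lam)  # next Poisson arrival time
        if s > t:
            return total
        total += t - s  # contribution of the shock at time s

rng = random.Random(42)
t, lam, runs = 2.0, 3.0, 20000
mean = sum(shot_noise_linear(t, lam, rng) for _ in range(runs)) / runs
# Exact mean: lam * t**2 / 2 = 6.0 here; `mean` should be close.
```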

The next topic that we will study in this thesis is the total downtime of repairable systems. We first consider a system, regarded as a single component, that can be in two states: either up (in operation) or down (under repair). We suppose that the system starts to operate at time 0. After the first failure the system is repaired and then functions like a new system. Similarly, after the second failure the system is repaired and again functions as good as new, and so on. We will assume that the alternating sequence of up and down times forms a so-called alternating renewal process. One of the interesting quantities to consider is the total downtime of the system during the time interval [0, t]. Note that the total downtime can be considered as a reward process on an alternating renewal process. The total downtime is important because it can be used as a performance measure of the system. We will study the total downtime in Chapter 4. In this chapter we also consider repairable systems comprising n ≥ 2 independent components.

Expressions for the marginal distributions of the renewal reward processes (including the instantaneous reward processes), the integrated renewal processes, and the total downtime of repairable systems, in a finite time interval [0, t], are derived in this thesis. Our approach is based on the theory of point processes, especially Poisson point processes. The idea is to represent the processes that we study (including the total downtime of repairable systems) as functionals of Poisson point processes. Important tools we will use are the Palm formula and the Laplace functional of a Poisson point process. Usually we obtain the marginal distributions of the processes in the form of Laplace transforms.

Asymptotic properties of the processes that we study are also investigated. We use Tauberian theorems to derive asymptotic properties of the expected value of the processes from their Laplace-transform expressions. Other asymptotic properties of the processes, like asymptotic variances and asymptotic distributions, are studied as well.

The rest of this chapter is organized as follows. In the next section we give an overview of the literature connected with the study of the renewal and instantaneous reward processes, the integrated renewal processes, and the total downtime of repairable systems. We also explain the position of our contributions. In Section 1.2 we introduce some basic notions and facts about point processes and summarize some facts about renewal processes which will be used in the subsequent chapters. Finally, in Section 1.3 we give the outline of this thesis.

1.1 Related works

Renewal reward processes have been discussed by many authors. Several results are known in the literature. For example, the renewal-reward theorem and its expected-value version, and the asymptotic normality of the processes, can be found in Wolff [49] and Tijms [46]. These asymptotic properties are frequently used in applications of these processes, see for example Csenki [6], Popova and Wu [31], Parlar and Perry [29], and Herz et al. [20].

Herz et al. [20] modelled the maximal number of cars that can safely cross the traffic stream on a one-way road during the time interval [0, t] as a renewal reward process. For large t the renewal-reward theorem and its expected-value version have been used. In this application it is interesting to consider the case when t is small. So we need to know the distribution properties of renewal reward processes (and also instantaneous reward processes) in every finite time interval [0, t]. Several authors have discussed this case. An integral equation for the expected value of renewal reward processes was studied by Jack [23]. Mi [25] gave bounds for the average reward over a finite time interval. Erhardsson [12] studied an approximation of a stationary renewal reward process in a finite time interval by a compound Poisson distribution.

In this thesis we derive expressions for the marginal distributions of renewal and instantaneous reward processes in a finite time interval. We give an application to the study of traffic. We also reconsider asymptotic properties of renewal reward processes. A proof of the expected-value version of the renewal-reward theorem by means of a Tauberian theorem is given. Asymptotic normality of instantaneous reward processes is proved. Another result that we derive is the covariance structure of a renewal process, which is a special case of renewal reward processes. We also study a topic on system reliability in a stress-strength model which is closely related to renewal reward processes.

It seems that the terminology 'integrated renewal process' is not yet known in the literature. But its special case where the corresponding renewal process is a homogeneous Poisson process has been considered by several authors. In Example 2.3(A) of Ross [36] the expected value of an 'integrated homogeneous Poisson process' has been calculated. The probability density function (pdf) of an integrated homogeneous Poisson process can be found in Gubner [19]. In this thesis we derive the marginal distributions of integrated renewal processes. We also consider other natural generalizations of the integrated homogeneous Poisson process, where the homogeneous Poisson process is generalized to a non-homogeneous one and to a Cox process (doubly stochastic Poisson process). Asymptotic properties of the integrated Poisson and renewal processes are also investigated.

The distribution of the total downtime of a repairable system has been widely discussed in a number of papers. An expression for the cumulative distribution function (cdf) of the total downtime up to a given time t has been derived by several authors using different methods. In Takacs [44] the total probability theorem has been used. The derivation in Muth [26] is based on consideration of the excess time. Finally, in Funaki and Yoshimoto [13] the cdf of the total downtime is derived by a conditioning technique. Srinivasan et al. [37] derived an expression for the pdf of the total uptime of a repairable system, which has an obvious relation to the total downtime. They also discussed the covariance structure of the total uptime. For longer time intervals, Takacs [44] and Renyi [33] proved that the limiting distribution of the total downtime is a normal distribution. Takacs [44] also discussed the asymptotic mean and variance of the total downtime. In all these papers it is assumed that the failure times and the repair times are independent. Usually it is also assumed that the failure times are iid, and the same for the repair times. An exception is Takacs [45], where the iid assumption has been dropped, but still under the assumption of independence between the failure times and the repair times. In his paper [45] Takacs discussed some possibilities for asymptotic distributions of the total downtime.

In this thesis we use a different method for the computation of the distribution of the total downtime. We also consider a more general situation where we allow dependence between the failure time and the repair time. Some asymptotic properties of the total downtime, which generalize the results of Takacs [44] and Renyi [33], are derived. We also discuss the total downtime of repairable systems consisting of n ≥ 2 independent components.

1.2 Basic notions

1.2.1 Point processes

A point process is a random distribution of points in some space E. In this thesis we will assume that E is a locally compact Hausdorff space with a countable base. For example, E may be a subset of a Euclidean space of finite dimension. These assumptions ensure, among other things, the existence of a Poisson point process on an infinite space, which will be used in the next chapters.


The concept of a point process is formally described as follows. Let E be the Borel σ-algebra of subsets of E, i.e., the σ-algebra generated by the open sets. For x ∈ E, define the measure δx on (E, E) by

δx(A) := 1 if x ∈ A, and δx(A) := 0 if x ∉ A,

for A ∈ E. The measure δx is called the Dirac measure in x. A point measure on (E, E) is a Radon measure µ (a non-negative measure µ with the property that µ(K) < ∞ for every compact set K) which has a representation

µ = ∑_{i∈I} δ_{xi},

where I is a countable index set and xi ∈ E. A point measure µ is called simple if µ({x}) ≤ 1 for all x ∈ E.

Denote by Mp(E) the space of all point measures on (E, E). Let Mp(E) be the smallest σ-algebra making all evaluation maps

m ∈ Mp(E) ↦ m(A) ∈ N̄0, A ∈ E,

measurable, where N̄0 = N0 ∪ {∞} and N0 denotes the set of all non-negative integers.

Definition 1.2.1 A point process on E is a measurable map from a probability space (Ω, F, P) into the measurable space (Mp(E), Mp(E)).

So if N is a point process on E and ω ∈ Ω, then N(ω) is a point measure on (E, E). The probability distribution PN of the point process N is the image measure P ∘ N⁻¹ on (Mp(E), Mp(E)).

The following proposition says that a map N from Ω into Mp(E) is a point process on E if and only if N(A) is an extended non-negative integer-valued random variable for each A ∈ E.

Proposition 1.2.1 [34] Let N be a map from a probability space (Ω, F, P) into the space (Mp(E), Mp(E)). The map N is a point process if and only if for every A ∈ E the map ω ↦ N(ω, A) from (Ω, F, P) into (N̄0, P(N̄0)) is measurable, where P(N̄0) denotes the power set of N̄0.

The intensity measure of a point process N is the measure ν on (E, E) defined, for A ∈ E, by

ν(A) := E[N(A)] = ∫_Ω N(ω, A) P(dω) = ∫_{Mp(E)} µ(A) PN(dµ).


Example 1.2.1 Let (Xn, n ≥ 1) be an iid sequence of non-negative random variables. Let Sn = ∑_{i=1}^n Xi. Then

N := ∑_{n=1}^∞ δ_{Sn}

is a point process on [0,∞). The stochastic process (N(t), t ≥ 0), where N(t) = N([0, t]) is the number of points in the interval [0, t], is called a renewal process. In this context the interval [0,∞) is usually interpreted as time. The random variable Xn is called the nth inter-arrival time or nth cycle of the renewal process. The intensity measure of the renewal process N(t) is given by ν(dt) = dm(t), where m(t) is the renewal function, see Subsection 1.2.2.

Let f be a non-negative measurable function defined on E. Recall that there exist simple functions fn with 0 ≤ fn ↑ f, where fn is of the form

fn = ∑_{i=1}^{kn} c_i^{(n)} 1_{A_i^{(n)}},   A_i^{(n)} ∈ E,

where the c_i^{(n)} are constants and the sets A_i^{(n)}, i ≤ kn, are disjoint. Define for the function f and ω ∈ Ω

N(f)(ω) := ∫_E f(x) N(ω, dx).

This is a random variable since, by the monotone convergence theorem,

N(f)(ω) = lim_{n→∞} N(fn)(ω),

and each

N(fn)(ω) = ∑_{i=1}^{kn} c_i^{(n)} N(ω, A_i^{(n)})

is a random variable.

The Laplace functional of a point process N is defined as the function ψN which maps non-negative measurable functions f on E into [0,∞) by

ψN(f) := E[exp{−N(f)}] = ∫_Ω exp{−N(f)(ω)} P(dω) = ∫_{Mp(E)} exp{−∫_E f(x) µ(dx)} PN(dµ).

Proposition 1.2.2 [34] The Laplace functional ψN of a point process N uniquely determines the probability distribution PN of N.


Poisson point processes

One of the most important examples of a point process is the Poisson point process.

Definition 1.2.2 Given a Radon measure µ on (E, E), a point process N onE is called a Poisson point process on E with intensity measure µ if N satisfies

(a) for any A ∈ E and any non-negative integer k,

P(N(A) = k) = e^{−µ(A)} µ(A)^k / k! if µ(A) < ∞, and P(N(A) = k) = 0 if µ(A) = ∞;

(b) if A1, . . . , Ak are disjoint sets in E, then N(A1), . . . , N(Ak) are independent random variables.

Example 1.2.2 Let E = [0,∞) and let the intensity measure µ satisfy µ([0, t]) = λt for some positive constant λ and all t ≥ 0. Then the Poisson point process N is called a homogeneous Poisson point process on E with intensity or rate λ. If µ([0, t]) = ∫_0^t λ(x) dx, where λ(x) is a non-negative function of x, then the Poisson process N is called a non-homogeneous Poisson process on E with intensity or rate function λ(x).
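The non-homogeneous case can be simulated by the standard thinning construction (an illustrative sketch added here, not part of the thesis): generate a homogeneous process of rate λ_max ≥ λ(x) and keep a point at x with probability λ(x)/λ_max. The rate function λ(x) = 2x on [0, 1] below is an arbitrary choice for the check.

```python
import random

def thinned_nhpp(T, lam_fn, lam_max, rng):
    """Sample the arrival times of a non-homogeneous Poisson process on [0, T]
    with rate function lam_fn, by thinning a homogeneous process of rate lam_max."""
    points, s = [], 0.0
    while True:
        s += rng.expovariate(lam_max)            # candidate arrival
        if s > T:
            return points
        if rng.random() < lam_fn(s) / lam_max:   # keep with probability lam(s)/lam_max
            points.append(s)

rng = random.Random(0)
runs = 20000
counts = [len(thinned_nhpp(1.0, lambda x: 2.0 * x, 2.0, rng)) for _ in range(runs)]
mean_count = sum(counts) / runs
# N([0,1]) is Poisson with mean mu([0,1]) = int_0^1 2x dx = 1, so mean_count ~ 1.
```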

The following theorem concerning the Laplace functional of a Poisson point process will be used in the subsequent chapters.

Theorem 1.2.1 [34] The Laplace functional of a Poisson point process N on E with intensity measure µ is given by

ψN(f) = exp{−∫_E (1 − e^{−f(x)}) µ(dx)}.
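Theorem 1.2.1 lends itself to a direct Monte Carlo check (an added illustration; the choices of f, λ, and the interval are mine): for a homogeneous process of rate λ on [0, 1] and f(x) = x on [0, 1] (zero elsewhere), ψN(f) = exp{−λ ∫_0^1 (1 − e^{−x}) dx} = exp{−λ e^{−1}}.

```python
import math
import random

def poisson_points(T, lam, rng):
    """Arrival times of a homogeneous Poisson process of rate lam on [0, T]."""
    pts, s = [], 0.0
    while True:
        s += rng.expovariate(lam)
        if s > T:
            return pts
        pts.append(s)

rng = random.Random(7)
lam, runs = 2.0, 40000
# With f(x) = x on [0,1], N(f) is the sum of the point locations themselves.
emp = sum(math.exp(-sum(poisson_points(1.0, lam, rng))) for _ in range(runs)) / runs
# int_0^1 (1 - e^{-x}) dx = e^{-1}, so the exact Laplace functional value is:
exact = math.exp(-lam * math.exp(-1.0))
```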

The next theorem says that a renewal process whose inter-arrival times are iid exponential random variables is a homogeneous Poisson process.

Theorem 1.2.2 [32] Let (Xn, n ≥ 1) be an iid sequence of exponential random variables with parameter 1. Let Sn = ∑_{i=1}^n Xi and set N = ∑_{n=1}^∞ δ_{Sn}. Then N is a homogeneous Poisson process on [0,∞) with rate 1.

Starting from a Poisson point process we may construct a new Poisson point process whose points live in a higher-dimensional space.

Theorem 1.2.3 [35] Let Ei, i = 1, 2, be two locally compact Hausdorff spaces with countable bases. Suppose (Xn, n ≥ 1) are random elements of E1 such that

∑_{n=1}^∞ δ_{Xn}

is a Poisson point process on E1 with intensity measure µ. Suppose (Yn, n ≥ 1) are iid random elements of E2 with common probability distribution Q, and suppose the Poisson point process and the sequence (Yn) are defined on the same probability space and are independent. Then the point process on E1 × E2

∑_{n=1}^∞ δ_{(Xn,Yn)}

is a Poisson point process with intensity measure µ × Q, where µ × Q(A1 × A2) = µ(A1)Q(A2) for Ai a measurable subset of Ei, i = 1, 2.

Palm distribution

Palm distributions play an important role in the study of point processes. A Palm distribution is defined as a Radon-Nikodym derivative. Let N be a point process on E with distribution PN such that ν = E[N] is a Radon measure. Let B ∈ Mp(E) and let 1B : Mp(E) → {0, 1} be the indicator function, i.e., 1B(µ) = 1 if µ ∈ B and 1B(µ) = 0 if µ ∉ B.

Consider the measure E[1B(N)N], which is absolutely continuous with respect to ν. By the Radon-Nikodym theorem there exists a ν-almost surely unique function Px(B) : E → [0, 1] such that

∫_A Px(B) ν(dx) = ∫_{Mp(E)} 1B(µ) µ(A) PN(dµ)

for all A ∈ E. The family {Px(B) : x ∈ E, B ∈ Mp(E)} can be chosen so that Px is a probability measure on Mp(E) for all x ∈ E, and so that for each B ∈ Mp(E) the function x ↦ Px(B) is measurable, see Grandell [17]. The measure Px is called the Palm distribution (or Palm measure) of the point process N.

Let N be a Poisson point process with intensity measure ν and distributionPν .

Theorem 1.2.4 [17] For every non-negative measurable function f on E × Mp(E),

∫_{Mp(E)} ∫_E f(x, µ) µ(dx) Pν(dµ) = ∫_E ∫_{Mp(E)} f(x, µ + δx) Pν(dµ) ν(dx).   (1.1)

Equation (1.1) is known as the Palm formula for a Poisson point process. We will frequently use this formula later. The Palm distribution for the Poisson point process N can be obtained by taking f(x, µ) = 1A(x)1B(µ), A ∈ E, B ∈ Mp(E), which gives

Px = ∆x ∗ Pν,

where ∆x ∗ Pν denotes the convolution of ∆x, the Dirac measure in δx, and Pν, i.e.,

∆x ∗ Pν(B) := ∫_{Mp(E)} ∫_{Mp(E)} 1B(µ1 + µ2) ∆x(dµ1) Pν(dµ2) = ∫_{Mp(E)} 1B(µ + δx) Pν(dµ).
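The Palm formula can likewise be illustrated numerically (a sketch with an arbitrarily chosen f, not from the thesis): take N homogeneous with rate λ on [0, 1] and f(x, µ) = x µ([0, 1]). The left side of (1.1) is then E[N([0,1]) ∑_i Si], while the right side evaluates exactly to ∫_0^1 x E[N([0,1]) + 1] λ dx = λ(λ + 1)/2.

```python
import random

def poisson_points(T, lam, rng):
    """Arrival times of a homogeneous Poisson process of rate lam on [0, T]."""
    pts, s = [], 0.0
    while True:
        s += rng.expovariate(lam)
        if s > T:
            return pts
        pts.append(s)

rng = random.Random(3)
lam, runs = 2.0, 40000
# Left side of (1.1) with f(x, mu) = x * mu([0,1]), by Monte Carlo:
lhs = 0.0
for _ in range(runs):
    pts = poisson_points(1.0, lam, rng)
    lhs += sum(pts) * len(pts)
lhs /= runs
# Right side: int_0^1 x * E[N([0,1]) + 1] * lam dx = lam * (lam + 1) / 2
rhs = lam * (lam + 1) / 2.0
```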

1.2.2 Renewal processes

In the previous section we saw that a renewal process on the non-negative real line is an example of a point process. In this section we summarize some facts about renewal processes, including delayed renewal processes, which will be used in the subsequent chapters. The notion of a renewal process from Example 1.2.1 can equivalently be stated as follows.

Definition 1.2.3 Let (Xi, i ≥ 1) be an iid sequence of non-negative random variables. A renewal process (N(t), t ≥ 0) is a process such that

N(t) = sup{n ≥ 0 : Sn ≤ t},

where

Sn = ∑_{i=1}^n Xi, n ≥ 1, and S0 = 0.

The renewal process (N(t)) represents the number of occurrences of some event in the time interval [0, t]. We commonly interpret Xn as the time between the (n − 1)st and nth events and call it the nth inter-arrival time or nth cycle, and we interpret Sn as the time of the nth event or the time of the nth arrival. The link between N(t) and the sum Sn of iid random variables is given by

N(t) ≥ n if and only if Sn ≤ t.   (1.2)

The distribution of N(t) can be represented in terms of the cdfs of the inter-arrival times. Let F be the cdf of X1 and Fn the cdf of Sn. Note that Fn is the n-fold convolution of F with itself. From (1.2) we obtain

P(N(t) = n) = P(N(t) ≥ n) − P(N(t) ≥ n + 1) = P(Sn ≤ t) − P(Sn+1 ≤ t) = Fn(t) − Fn+1(t).   (1.3)
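For iid Exp(1) inter-arrival times, Fn is the Gamma(n, 1) cdf and (1.3) gives the Poisson probabilities P(N(t) = n) = e^{−t} tⁿ/n!, consistent with Theorem 1.2.2. This can be checked by simulation (an added sketch, not from the thesis):

```python
import math
import random

def renewal_count(t, rng):
    """N(t) for a renewal process with Exp(1) inter-arrival times."""
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(1.0)
        if s > t:
            return n
        n += 1

rng = random.Random(11)
t, n, runs = 2.0, 2, 50000
emp = sum(renewal_count(t, rng) == n for _ in range(runs)) / runs
# For Exp(1) cycles, F_n(t) - F_{n+1}(t) = e^{-t} t^n / n!  (the Poisson pmf)
exact = math.exp(-t) * t**n / math.factorial(n)
```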


Let m(t) be the expected number of events in the time interval [0, t], i.e., m(t) = E[N(t)]. The function m(t) is called the renewal function. The relationship between m(t) and F is given by the following proposition.

Proposition 1.2.3 [18]

m(t) = ∑_{n=1}^∞ Fn(t).   (1.4)

The renewal function m(t) also satisfies the following integral equation.

Proposition 1.2.4 [18] The renewal function m satisfies the renewal equation

m(t) = F(t) + ∫_0^t m(t − x) dF(x).   (1.5)
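Equation (1.5) also suggests a simple numerical scheme for m(t) (a rough sketch added here, not the thesis's method): discretize [0, t], replace the integral by a Riemann sum, and solve for m on the grid recursively. For Exp(1) inter-arrival times m(t) = t exactly (the rate-1 Poisson process), which serves as a check.

```python
import math

def renewal_function(F, t_max, h):
    """Solve m(t) = F(t) + int_0^t m(t-x) dF(x) on a grid of step h
    by a first-order Riemann-sum discretization of the convolution."""
    n = int(round(t_max / h))
    Fv = [F(i * h) for i in range(n + 1)]
    m = [0.0] * (n + 1)
    for i in range(1, n + 1):
        conv = sum(m[i - j] * (Fv[j] - Fv[j - 1]) for j in range(1, i + 1))
        m[i] = Fv[i] + conv
    return m

# Exponential(1) case: the exact renewal function is m(t) = t.
m = renewal_function(lambda x: 1.0 - math.exp(-x), 2.0, 0.005)
# m[-1] should be close to 2.0 (discretization error of order h).
```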

Let µ = E(X1) and σ² = Var(X1) be the mean and the variance of X1, and assume that µ and σ are strictly positive. Some asymptotic properties of the renewal process (N(t)) are given in the following.

Theorem 1.2.5 [18] Assume that µ < ∞. Then:

(a) lim_{t→∞} N(t)/t = 1/µ with probability 1;

(b) lim_{t→∞} m(t)/t = 1/µ;   (1.6)

(c) provided σ² is finite,

m(t) = t/µ + (σ² − µ²)/(2µ²) + o(1),   (1.7)

where o(1) denotes a function of t tending to zero as t → ∞.
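The expansion (1.7) can be checked by simulation (an added sketch; the uniform(0, 1) inter-arrival distribution is an arbitrary choice): with µ = 1/2 and σ² = 1/12, (1.7) gives m(t) ≈ 2t − 1/3.

```python
import random

def count_renewals(t, rng):
    """N(t) for a renewal process with uniform(0,1) inter-arrival times."""
    n, s = 0, 0.0
    while True:
        s += rng.random()
        if s > t:
            return n
        n += 1

rng = random.Random(5)
t, runs = 50.0, 5000
m_est = sum(count_renewals(t, rng) for _ in range(runs)) / runs
# Expansion (1.7): m(t) ~ t/mu + (sigma^2 - mu^2)/(2 mu^2) = 2t - 1/3
approx = 2 * t - 1.0 / 3.0
```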

Theorem 1.2.6 [7] Assume that σ² < ∞. Then

lim_{t→∞} Var[N(t)]/t = σ²/µ³.   (1.8)

If µ3 = E(X1³) < ∞, then

Var[N(t)] = σ²t/µ³ + (1/12 + 5σ⁴/(4µ⁴) − 2µ3/(3µ³)) + o(1).   (1.9)

For large t the distribution of N(t) is approximately normal, i.e.,

(N(t) − t/µ)/√(tσ²/µ³) →d N(0, 1) as t → ∞.
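Similarly, (1.8) can be illustrated by estimating Var[N(t)]/t by Monte Carlo (an added sketch, again with uniform(0, 1) cycles, for which σ²/µ³ = (1/12)/(1/2)³ = 2/3):

```python
import random

def count_renewals(t, rng):
    """N(t) for a renewal process with uniform(0,1) inter-arrival times."""
    n, s = 0, 0.0
    while True:
        s += rng.random()
        if s > t:
            return n
        n += 1

rng = random.Random(9)
t, runs = 100.0, 3000
xs = [count_renewals(t, rng) for _ in range(runs)]
mean = sum(xs) / runs
var_rate = sum((x - mean) ** 2 for x in xs) / (runs - 1) / t
# Limit (1.8): Var[N(t)]/t -> sigma^2/mu^3 = 2/3 for uniform(0,1) cycles.
```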


Delayed renewal processes

Let X0, X1, X2, . . . be a sequence of non-negative independent random variables. Let G be the cdf of X0 and F the common cdf of the Xi, i ≥ 1. Let

Sn = ∑_{i=0}^{n−1} Xi, n ≥ 1, and S0 = 0.

The stochastic process (ND(t), t ≥ 0), where

ND(t) = sup{n ≥ 0 : Sn ≤ t},

is called a delayed renewal process. It is easy to see that

P(ND(t) = n) = G ∗ Fn−1(t) − G ∗ Fn(t),

where F ∗ G denotes the convolution of F and G. The corresponding delayed renewal function satisfies

mD(t) := E[ND(t)] = ∑_{n=1}^∞ G ∗ Fn−1(t).

As for ordinary renewal processes, delayed renewal processes have the following asymptotic properties:

(a) The delayed renewal function mD(t) is asymptotically a linear function of t, i.e.,

lim_{t→∞} mD(t)/t = 1/µ,   (1.10)

where µ = E(X1), see Ross [36];

(b) If σ² = Var(X1) < ∞, µ0 = E(X0) < ∞, and F is a non-lattice distribution, then

mD(t) = t/µ + (σ² + µ²)/(2µ²) − µ0/µ + o(1),   (1.11)

and if σ² < ∞ then

lim_{t→∞} Var[ND(t)]/t = σ²/µ³,   (1.12)

see Takacs [44].


1.3 Outline of the thesis

The subsequent chapters in this thesis can be read independently, and are mostly based on the results in Suyono and van der Weide [38, 39, 40, 41, 42, 43]. Firstly, in Chapter 2 we discuss renewal reward processes. In Section 2.2 we study a version of renewal reward processes which we call an instantaneous reward process. In Section 2.3 we consider the marginal distribution of renewal reward processes in a finite time interval. Section 2.4 deals with asymptotic properties of the renewal and instantaneous reward processes. The covariance structure of renewal processes is studied in Section 2.5. Finally, Section 2.6 is devoted to a study of system reliability in a stress-strength model, where the amplitude of a stress occurring at a time t can be considered as a reward.

Chapter 3 deals with integrated renewal processes. In Section 3.2 we consider the marginal distributions of integrated Poisson and Cox processes. In the next section we consider the marginal distributions of integrated renewal processes. Asymptotic properties of the integrated renewal processes are studied in Section 3.4. Finally, in the last section we give an application.

In Chapter 4 we discuss the total downtime of repairable systems. We start by investigating the distribution of the total downtime in Section 4.2. Section 4.3 is devoted to the study of system availability, which is closely related to the total downtime. Section 4.4 concerns the covariance of the total downtime. Asymptotic properties of the total downtime for the dependent case are studied in Section 4.5. Two examples are given in the next section. Finally, in Section 4.7 we consider the total downtime of repairable systems consisting of n ≥ 2 independent components.

Most of the results about the marginal distributions of the processes that we study are in the form of Laplace transforms. In some cases the transforms can be inverted analytically, but mostly the transforms have to be inverted numerically. We give numerical inversions of Laplace transforms in Appendix B.


Chapter 2

Renewal Reward Processes

2.1 Introduction

Consider an iid sequence (X_n, n ≥ 1) of strictly positive random variables. Let S_n = ∑_{i=1}^n X_i, n ≥ 1, S_0 = 0, and let (N(t), t ≥ 0) be the renewal process with renewal cycles X_n. Associated with each cycle length X_n is a reward Y_n, where we assume that ((X_n, Y_n), n ≥ 1) is an iid sequence of random vectors. The variables X_n and Y_n can be dependent. The stochastic process (R(t), t ≥ 0), where

    R(t) = ∑_{n=1}^{N(t)} Y_n    (2.1)

(with the usual convention that the empty sum equals 0), is called a renewal reward process. Taking Y_n ≡ 1, we see that renewal processes can be considered as renewal reward processes.

Motivated by an application in the study of traffic, see Subsection 2.2.1, it is interesting to consider a version of renewal reward processes where the reward is a function of the cycle length, i.e.,

    Rφ(t) = ∑_{n=1}^{N(t)} φ(X_n) + φ(t − S_{N(t)})    (2.2)

where φ is a measurable real-valued function. We will call the process (Rφ(t), t ≥ 0) an instantaneous reward process. We will assume that rewards are non-negative, i.e., the function φ is non-negative. Note that in this process we also consider the reward earned in the incomplete renewal cycle (S_{N(t)}, t]. If we only consider the reward until the last renewal before time t and take Y_n = φ(X_n), then (2.2) reduces to (2.1).
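To make definition (2.2) concrete, the following small simulation sketch (hypothetical helper names, not from the thesis) builds one realization of Rφ(t) cycle by cycle. A handy sanity check: with φ the identity, the completed cycles and the incomplete cycle partition [0, t], so Rφ(t) = t exactly.

```python
import random

def instantaneous_reward(t, phi, draw_cycle, rng):
    """One realization of R_phi(t) = sum_{n<=N(t)} phi(X_n) + phi(t - S_N(t))."""
    s = 0.0          # running partial sum S_n
    reward = 0.0
    while True:
        x = draw_cycle(rng)
        if s + x > t:                    # cycle N(t)+1 straddles t
            return reward + phi(t - s)   # reward from the incomplete cycle
        s += x
        reward += phi(x)                 # reward from a completed cycle

rng = random.Random(1)
# With phi(x) = x the rewards add up to t exactly, whatever the cycles are.
r = instantaneous_reward(10.0, lambda x: x, lambda g: g.expovariate(0.5), rng)
assert abs(r - 10.0) < 1e-9
```

Taking `phi = lambda x: 1.0` instead returns N(t) + 1, since the incomplete cycle also contributes a reward.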


In this chapter we will study these renewal and instantaneous reward processes. Firstly, in Section 2.2, we consider the marginal distribution of the instantaneous reward process defined in (2.2). An application of this process to the study of traffic is given. We give an example where the variables X_n represent the time intervals between consecutive cars at a crossing on a one-way road, and φ(x) represents the number of cars that can cross the traffic stream safely during a time interval of length x between two consecutive cars.

In Section 2.3 we consider the marginal distributions of the renewal reward process given by formula (2.1). We will only consider the case where the random variables Y_n are non-negative. Asymptotic properties of the renewal and instantaneous reward processes are discussed in Section 2.4. We give an alternative proof for the expected-value version of the renewal-reward theorem using a Tauberian theorem. Section 2.5 deals with the covariance structure of renewal processes. The last section is devoted to the study of system reliability in a stress-strength model, where the amplitude of a stress occurring at a time t can be considered as a reward. Besides considering renewal processes as the occurrence of the stresses, in this last section we also model the occurrence of the stresses as a Cox process (doubly stochastic Poisson process).

We will denote the cdfs of the random variables X_1 and Y_1 by F and G respectively, and denote the joint cdf of X_1 and Y_1 by H, i.e.,

    H(x, y) = P(X_1 ≤ x, Y_1 ≤ y).

The Laplace-Stieltjes transforms of F and H will be denoted by F* and H*, i.e.,

    F*(β) = ∫_0^∞ e^{−βx} dF(x)

and

    H*(α, β) = ∫_0^∞ ∫_0^∞ e^{−(αx+βy)} dH(x, y).

2.2 Instantaneous reward processes

In this section we will consider the marginal distributions of the instantaneous reward process as defined in (2.2). We will use the theory of point processes introduced in Chapter 1. Let (Ω, F, P) be the probability space on which the iid sequence (X_n, n ≥ 1) is defined, and also an iid sequence (U_n, n ≥ 1) of exponentially distributed random variables with parameter 1, such that the sequences (X_n) and (U_n) are independent. Let (T_n, n ≥ 1) be the sequence of partial sums of the variables U_n. By Theorem 1.2.2,

    ∑_{n=1}^∞ δ_{T_n}


is a Poisson point process on [0,∞) with intensity measure ν(dx) = dx, and by Theorem 1.2.3 the map

    Φ : ω ↦ ∑_{n=1}^∞ δ_{(T_n(ω), X_n(ω))}

defines a Poisson point process on E = [0,∞) × [0,∞) with intensity measure ν(dt dx) = dt dF(x), where F is the cdf of X_1. Note that for almost all ω ∈ Ω, Φ(ω) is a simple point measure on E satisfying Φ(ω)({t} × [0,∞)) ∈ {0, 1} for every t ≥ 0. Note also that ν([0, t] × [0,∞)) < ∞ for t ≥ 0. Let Mp(E) be the set of all point measures on E. We will denote the distribution of Φ by Pν, i.e., Pν = P ∘ Φ⁻¹.

Define for t ≥ 0 the functional A(t) on Mp(E) by

    A(t)(µ) = ∫_E 1_{[0,t)}(s) x µ(ds dx).

In the sequel we will write A(t, µ) = A(t)(µ). Suppose that the point measure µ has support supp(µ) = ((t_n, x_n))_{n=1}^∞ with t_1 < t_2 < …, see Figure 2.1 (a). It follows that

[Figure 2.1: (a) Illustration of supp(µ): points (t_1, x_1), (t_2, x_2), (t_3, x_3) in the (t, x)-plane. (b) Graph of A(t, µ): a step function with jumps of size x_1, x_2, x_3 at t_1, t_2, t_3.]

    µ = ∑_{n=1}^∞ δ_{(t_n, x_n)}

and A(t, µ) can be expressed as

    A(t, µ) = ∑_{n=1}^∞ 1_{[0,t)}(t_n) x_n.


Note that for every t ≥ 0, A(t, µ) is finite almost surely. Figure 2.1 (b) shows the graph of a realization of A(t, µ).

For (t_n, x_n) ∈ supp(µ),

    1_{[0,x_n)}(t − A(t_n, µ)) = 1 ⟺ A(t_n, µ) ≤ t < A(t_n, µ) + x_n
                                ⟺ x_1 + … + x_{n−1} ≤ t < x_1 + … + x_n.

Hence for a measurable, bounded function f on E we have

    ∫_E 1_{[0,x)}(t − A(s, µ)) f(s, x) µ(ds dx) = f(t_n, x_n),

where n is chosen such that x_1 + … + x_{n−1} ≤ t < x_1 + … + x_n.

Now define for t ≥ 0 the functional Rφ(t) on Mp(E) by

    Rφ(t)(µ) = ∫_E 1_{[0,x)}(t − A(s, µ)) [µ(1_{[0,s)} ⊗ φ) + φ(t − A(s, µ))] µ(ds dx),

where

    µ(1_{[0,t)} ⊗ φ) = ∫_E 1_{[0,t)}(s) φ(x) µ(ds dx).

Note that if

    µ = ∑_{n=1}^∞ δ_{(t_n, x_n)}

with t_1 < t_2 < …, then

    Rφ(t)(µ) = ∑_{n=1}^∞ 1_{[0,x_n)}(t − A(t_n, µ)) [ ∑_{i=1}^∞ 1_{[0,t_n)}(t_i) φ(x_i) + φ( t − ∑_{i=1}^∞ 1_{[0,t_n)}(t_i) x_i ) ]
             = ∑_{i=1}^{n−1} φ(x_i) + φ( t − ∑_{i=1}^{n−1} x_i )    (2.3)

where n satisfies x_1 + … + x_{n−1} ≤ t < x_1 + … + x_n. Then we have a representation for the instantaneous reward process (Rφ(t)) as a functional of the Poisson point process Φ, as stated in the following lemma.

Lemma 2.2.1 With probability 1,

Rφ(t) = Rφ(t)(Φ).


Proof: Let ω ∈ Ω. Since Φ(ω) = ∑_{n=1}^∞ δ_{(T_n(ω), X_n(ω))}, using (2.3) we obtain

    Rφ(t)(Φ(ω)) = ∑_{n=1}^{i−1} φ(X_n(ω)) + φ(t − S_{i−1}(ω)),

where i satisfies S_{i−1}(ω) ≤ t < S_i(ω). But this condition holds if and only if i = N(t, ω) + 1, where N(t) = sup{n ≥ 0 : S_n ≤ t}, which completes the proof. □

The following theorem gives the formula for the Laplace transform of the marginal distribution of the instantaneous reward process (Rφ(t), t ≥ 0).

Theorem 2.2.1 Let (X_n, n ≥ 1) be an iid sequence of strictly positive random variables with common cdf F. Let (S_n, n ≥ 0) be the sequence of partial sums of the variables X_n and (N(t), t ≥ 0) the corresponding renewal process: N(t) = sup{n ≥ 0 : S_n ≤ t}. Let φ : [0,∞) → [0,∞) be a measurable function. Let (Rφ(t), t ≥ 0) be the instantaneous reward process defined in (2.2). Then for α, β > 0

    ∫_0^∞ E(e^{−αRφ(t)}) e^{−βt} dt = [∫_0^∞ [1 − F(t)] e^{−βt−αφ(t)} dt] / [1 − ∫_0^∞ e^{−βt−αφ(t)} dF(t)].    (2.4)

Proof: By Lemma 2.2.1,

    E(e^{−αRφ(t)}) = ∫_{Mp(E)} e^{−αRφ(t)(µ)} Pν(dµ)
      = ∫_{Mp(E)} exp{ −α ∫_E 1_{[0,x)}(t − A(s, µ)) [µ(1_{[0,s)} ⊗ φ) + φ(t − A(s, µ))] µ(ds dx) } Pν(dµ)
      = ∫_{Mp(E)} ∫_E 1_{[0,x)}(t − A(s, µ)) exp{ −α [µ(1_{[0,s)} ⊗ φ) + φ(t − A(s, µ))] } µ(ds dx) Pν(dµ).

Applying the Palm formula for Poisson point processes, see Theorem 1.2.4, we obtain

    E(e^{−αRφ(t)})
      = ∫_0^∞ ∫_0^∞ ∫_{Mp(E)} 1_{[0,x)}(t − A(s, µ + δ_{(s,x)})) exp{ −α [(µ + δ_{(s,x)})(1_{[0,s)} ⊗ φ) + φ(t − A(s, µ + δ_{(s,x)}))] } Pν(dµ) dF(x) ds
      = ∫_0^∞ ∫_0^∞ ∫_{Mp(E)} 1_{[0,x)}(t − A(s, µ)) exp{ −α [µ(1_{[0,s)} ⊗ φ) + φ(t − A(s, µ))] } Pν(dµ) dF(x) ds.

Using Fubini's theorem and a substitution we obtain

    ∫_0^∞ E(e^{−αRφ(t)}) e^{−βt} dt
      = ∫_0^∞ ∫_0^∞ ∫_{Mp(E)} ∫_0^∞ 1_{[0,x)}(t − A(s, µ)) exp{ −α [µ(1_{[0,s)} ⊗ φ) + φ(t − A(s, µ))] } e^{−βt} dt Pν(dµ) dF(x) ds
      = ∫_0^∞ ∫_0^∞ ∫_{Mp(E)} exp{ −[βA(s, µ) + αµ(1_{[0,s)} ⊗ φ)] } Pν(dµ) [ ∫_0^x exp{−βt − αφ(t)} dt ] dF(x) ds.

The integral with respect to Pν can be calculated as follows. Note that

    βA(s, µ) + αµ(1_{[0,s)} ⊗ φ) = ∫_E 1_{[0,s)}(r) [βu + αφ(u)] µ(dr du).

So we can apply the formula for the Laplace functional of Poisson point processes, see Theorem 1.2.1, to obtain

    ∫_{Mp(E)} exp{ −[βA(s, µ) + αµ(1_{[0,s)} ⊗ φ)] } Pν(dµ)
      = exp{ −∫_0^∞ ∫_0^∞ [1 − e^{−1_{[0,s)}(r)[βu+αφ(u)]}] dF(u) dr }
      = exp{ −s ∫_0^∞ [1 − e^{−βu−αφ(u)}] dF(u) }.


It follows that

    ∫_0^∞ E(e^{−αRφ(t)}) e^{−βt} dt
      = ∫_0^∞ ∫_0^∞ exp{ −s ∫_0^∞ [1 − e^{−βu−αφ(u)}] dF(u) } [ ∫_0^x exp{−βt − αφ(t)} dt ] dF(x) ds
      = ∫_0^∞ exp{ −s ∫_0^∞ [1 − e^{−βu−αφ(u)}] dF(u) } ds ∫_0^∞ ∫_0^x exp{−βt − αφ(t)} dt dF(x)
      = [∫_0^∞ [1 − F(t)] e^{−βt−αφ(t)} dt] / [1 − ∫_0^∞ e^{−βu−αφ(u)} dF(u)]. □

We can take derivatives with respect to α in (2.4) to find Laplace transforms of the moments of Rφ(t). For example, the Laplace transforms of the first and second moments of Rφ(t) are given in the following proposition.

Proposition 2.2.1 Suppose that the same assumptions as in Theorem 2.2.1 hold. Assume also that the function φ(t) is continuous or piecewise continuous in every finite interval (0, T). Then

(a) If E[φ(X_1) e^{−βX_1}] < ∞ for some β > 0 and φ(t) = o(e^{γt}), γ > 0, as t → ∞, then for β > γ

    ∫_0^∞ E[Rφ(t)] e^{−βt} dt = [ ∫_0^∞ φ(t) e^{−βt} dF(t) + β ∫_0^∞ [1 − F(t)] e^{−βt} φ(t) dt ] / ( β[1 − F*(β)] ).    (2.5)

(b) If E[φ²(X_1) e^{−βX_1}] < ∞ for some β > 0 and φ(t) = o(e^{γt/2}), γ > 0, as t → ∞, then for β > γ

    ∫_0^∞ E[R²φ(t)] e^{−βt} dt = [ ∫_0^∞ φ²(t) e^{−βt} dF(t) + β ∫_0^∞ [1 − F(t)] φ²(t) e^{−βt} dt ] / ( β[1 − F*(β)] )
      + 2 [ ∫_0^∞ φ(t) e^{−βt} dF(t) ] [ ∫_0^∞ [1 − F(t)] φ(t) e^{−βt} dt ] / [1 − F*(β)]²
      + 2 [ ∫_0^∞ φ(t) e^{−βt} dF(t) ]² / ( β[1 − F*(β)]² ).    (2.6)

Corollary 2.2.1 If we only consider the rewards until the last renewal before time t, then (2.2) simplifies to

    Rφ(t) = ∑_{n=1}^{N(t)} φ(X_n)    (2.7)

and

    ∫_0^∞ E(e^{−αRφ(t)}) e^{−βt} dt = (1 − F*(β)) / ( β[1 − ∫_0^∞ e^{−βt−αφ(t)} dF(t)] ).

As an application of Corollary 2.2.1, consider the function φ(t) = t. In this case Rφ(t) = S_{N(t)} = ∑_{n=1}^{N(t)} X_n and the double Laplace transform of S_{N(t)} is given by

    ∫_0^∞ E(e^{−αS_{N(t)}}) e^{−βt} dt = (1 − F*(β)) / ( β[1 − F*(α + β)] ).

As another application, take φ(t) ≡ 1 in (2.7). Then Rφ(t) = N(t), which is a renewal process. In this case we have

    ∫_0^∞ E(e^{−αN(t)}) e^{−βt} dt = (1 − F*(β)) / ( β[1 − e^{−α} F*(β)] ),    (2.8)

from which, using the uniqueness theorem for power series, see Bartle [3], we can derive

    ∫_0^∞ P[N(t) = n] e^{−βt} dt = (1/β) [1 − F*(β)] F*(β)^n.    (2.9)

Also, from (2.8) we can easily deduce that

    ∫_0^∞ E[N(t)] e^{−βt} dt = F*(β) / ( β[1 − F*(β)] ).    (2.10)

Formulae (2.9) and (2.10) are standard, see for example Grimmett and Stirzaker [18], and can be derived directly using (1.3) and (1.4) or (1.5) by taking their Laplace transforms.
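As a quick consistency check (not from the thesis) of (2.9): for exponential cycles with rate λ we have F*(β) = λ/(λ+β) and (N(t)) is a Poisson process, so (2.9) predicts ∫_0^∞ P[N(t)=n] e^{−βt} dt = λ^n/(λ+β)^{n+1}, which should be the Laplace transform of the Poisson probability e^{−λt}(λt)^n/n!. The sketch below verifies this by direct quadrature.

```python
import math

lam, beta, n = 0.5, 1.0, 3

# Right-hand side of (2.9) with F*(beta) = lam/(lam+beta):
rhs = (1.0 / beta) * (1.0 - lam / (lam + beta)) * (lam / (lam + beta)) ** n
assert abs(rhs - lam**n / (lam + beta) ** (n + 1)) < 1e-12  # algebraic simplification

# Left-hand side: transform of the Poisson pmf, via the trapezoidal rule.
def f(t):
    return math.exp(-lam * t) * (lam * t) ** n / math.factorial(n) * math.exp(-beta * t)

h, T = 1e-3, 60.0
ts = [k * h for k in range(int(T / h) + 1)]
lhs = h * (sum(f(t) for t in ts) - 0.5 * (f(ts[0]) + f(ts[-1])))
assert abs(lhs - rhs) < 1e-6
```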

2.2.1 A civil engineering example

Consider a traffic stream on a one-way road. The number of cars that can cross the stream at an intersection depends on the time intervals between consecutive cars in the traffic stream. Civil engineers usually model the traffic stream as a homogeneous Poisson process, which means that the distances between consecutive cars are assumed to be independent random variables, all with the same exponential distribution, see Herz et al. [20]. The number of cars that can safely cross the traffic stream between the nth and the (n + 1)th cars of the traffic stream equals ⌊x_{n+1}/a⌋, where x_{n+1} is the time distance between the two cars, a > 0 is some parameter, and ⌊x⌋ represents the integer part of x. As a more general and more realistic model we consider a renewal process as a model for the traffic stream, i.e., the time intervals between consecutive cars are iid


with some arbitrary distribution. The number of cars that can safely cross the traffic stream during the time between two consecutive cars in the traffic stream can be considered as a reward, and the total number of cars that can cross the traffic stream up to time t is an instantaneous reward process. We will calculate the distribution of the maximal number of cars that can safely cross the traffic stream during the time interval [0, t].

Suppose that we have 100 synthetic data points for the inter-arrival times of cars, as in Table 2.1. The average of the data is equal to 5.7422. If we assume that the

Table 2.1: The synthetic data of the inter-arrival times of cars.

 1.2169   1.3508   1.5961   1.6633   2.5308
 2.5696   2.6021   2.6447   2.6762   2.6783
 2.6913   2.7065   2.8696   3.2053   3.4394
 3.5028   3.5474   3.5577   3.6191   3.7724
 3.9254   3.9400   4.0549   4.0759   4.1093
 4.1170   4.1417   4.2162   4.2280   4.2526
 4.4784   4.7046   4.7171   4.7174   4.7585
 4.7814   4.8284   4.8364   4.8691   5.0278
 5.1833   5.2221   5.2357   5.3068   5.3291
 5.3864   5.4620   5.4675   5.4865   5.4907
 5.5378   5.6410   5.6628   5.6834   5.7610
 5.7809   5.8106   5.8397   5.8755   6.1123
 6.2104   6.2269   6.3748   6.6107   6.6587
 6.6626   6.6807   6.6835   6.7116   6.7283
 6.7373   6.7529   6.7672   6.9731   7.0478
 7.1100   7.1933   7.3344   7.6249   7.6311
 7.9067   8.0114   8.0606   8.2526   8.3095
 8.3575   8.3931   8.4245   8.8314   9.0008
 9.0716   9.1862   9.4143   9.4661   9.5850
 9.6002  10.2193  10.2391  10.7850  11.8890

data is exponentially distributed with parameter λ, then the estimate for λ is equal to 0.1741 (= 1/5.7422). Suppose that the reward function φ is given by

    φ(t) = ⌊t/2⌋.    (2.11)

In this case

    ∫_0^∞ E[Rφ(t)] e^{−βt} dt = (β + λ) e^{−2(β+λ)} / ( β² [1 − e^{−2(β+λ)}] )    (2.12)

and

    ∫_0^∞ E(e^{−αRφ(t)}) e^{−βt} dt = (1 − e^{−2(λ+β)}) / ( (λ + β)[1 − e^{−α−2(λ+β)}] − λ[1 − e^{−2(λ+β)}] )    (2.13)


with λ = 0.1741. Using numerical inversions of Laplace transforms, see Appendix B, we obtain the graph of the mean of Rφ(t), see Figure 2.2 (dashed line), and the distribution of Rφ(t) for t = 10, see the first column of Table 2.2.
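Formula (2.12) can be cross-checked numerically (a sketch, not from the thesis): with F(t) = 1 − e^{−λt} and φ(t) = ⌊t/2⌋, both terms of the numerator of (2.5) collapse to (λ+β)·∫_0^∞ φ(t) e^{−(λ+β)t} dt, and the denominator is β(1 − F*(β)) = β²/(λ+β), so (2.5) evaluated by quadrature should match the closed form (2.12).

```python
import math

lam = 0.1741
phi = lambda t: t // 2   # floor(t/2) for a float t

def closed_form_2_12(beta):
    s = lam + beta
    return s * math.exp(-2 * s) / (beta**2 * (1.0 - math.exp(-2 * s)))

def formula_2_5(beta, h=1e-3, T=150.0):
    # (2.5) for exponential cycles: (lam+beta)^2/beta^2 * integral of phi e^{-(lam+beta)t},
    # with the integral done by the midpoint rule (breakpoints of phi lie on the grid).
    s = lam + beta
    n = int(T / h)
    I = h * sum(phi((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n))
    return (s * s / beta**2) * I

for beta in (0.3, 0.7, 1.2):
    assert abs(closed_form_2_12(beta) - formula_2_5(beta)) < 1e-4
```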

[Figure 2.2: Graphs of the mean of Rφ(t): solid line for Gamma(1,6), dotted line for the empirical distribution, and dashed line for exp(0.1741).]

Table 2.2: Distributions of Rφ(10) with X_n ∼ exp(0.1741), X_n ∼ Gamma(1, 6), and with F(x) = F_n(x), the empirical distribution function of the data set in Table 2.1. Entries are P(Rφ(10) = k).

    k    X_n ∼ exp(0.1741)    F(x) = F_n(x)    X_n ∼ Gamma(1, 6)
    0         0.0001               0.0000            0.0000
    1         0.0032               0.0007            0.0003
    2         0.0452               0.0026            0.0013
    3         0.2597               0.1543            0.1344
    4         0.6040               0.8223            0.8301
    5         0.0877               0.0200            0.0338

If we look at the histogram of the data, see Figure 2.3, it does not seem reasonable to assume that the data is exponentially distributed. Without assuming that the data has come from a certain family of parametric distributions, we can calculate the distribution of the instantaneous reward process using the


[Figure 2.3: Histogram of the data set in Table 2.1.]

empirical distribution F_n of the data:

    F_n(x) = #{X_i ≤ x : i = 1, …, n} / n.    (2.14)

Let X_{1:n} ≤ X_{2:n} ≤ … ≤ X_{n:n} be the order statistics corresponding to X_i, i = 1, 2, …, n, and let X_0 = 0. We denote by x_{j:n} the realizations of X_{j:n}. Using (2.14) we obtain

    ∫_0^∞ E[Rφ(t)] e^{−βt} dt
      = [ (1/n) ∑_{k=1}^n φ(x_{k:n}) e^{−βx_{k:n}} + β ∑_{k=1}^n ∫_{x_{k−1:n}}^{x_{k:n}} [1 − (k−1)/n] e^{−βt} φ(t) dt ] / ( β[1 − (1/n) ∑_{k=1}^n e^{−βx_{k:n}}] ).    (2.15)

Based on the data in Table 2.1,

    (1/n) ∑_{k=1}^n φ(x_{k:n}) e^{−βx_{k:n}}
      = (1/100) [ ∑_{k=5}^{22} e^{−βx_{k:100}} + 2 ∑_{k=23}^{59} e^{−βx_{k:100}} + 3 ∑_{k=60}^{81} e^{−βx_{k:100}} + 4 ∑_{k=82}^{96} e^{−βx_{k:100}} + 5 ∑_{k=97}^{100} e^{−βx_{k:100}} ]
      =: K_1(β)


and

    β ∑_{k=1}^n ∫_{x_{k−1:n}}^{x_{k:n}} [1 − (k−1)/n] e^{−βt} φ(t) dt
      = (1/100) [ 96e^{−2β} + 78e^{−4β} + 41e^{−6β} + 19e^{−8β} + 4e^{−10β} ] − K_1(β).

So the numerator of (2.15) equals

    (1/100) [ 96e^{−2β} + 78e^{−4β} + 41e^{−6β} + 19e^{−8β} + 4e^{−10β} ].

It follows that

    ∫_0^∞ E[Rφ(t)] e^{−βt} dt = [ 96e^{−2β} + 78e^{−4β} + 41e^{−6β} + 19e^{−8β} + 4e^{−10β} ] / ( β[100 − ∑_{k=1}^{100} e^{−βx_{k:100}}] ).    (2.16)

Inverting this transform numerically we obtain the graph of the mean of Rφ(t), see Figure 2.2 (dotted line).
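The inversion method actually used is the one of Appendix B; as a general illustration of numerical Laplace inversion (an assumption: any standard algorithm could be substituted here), the following sketch implements the Gaver–Stehfest method. It is tested on two transforms with known inverses, including (2.10) for exponential cycles, where F*(β) = λ/(λ+β) reduces the transform to λ/β², i.e., E[N(t)] = λt.

```python
import math

def stehfest_weights(N=12):
    # Gaver-Stehfest weights V_k for even N (Stehfest, 1970).
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1.0) ** (k + N // 2) * s)
    return V

def laplace_invert(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t), good for smooth f.
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Known pair: F(s) = 1/(s+1)  <->  f(t) = e^{-t}.
assert abs(laplace_invert(lambda s: 1.0 / (s + 1.0), 1.0) - math.exp(-1.0)) < 1e-4

# Transform (2.10) with exponential cycles is lam/b^2, so we should get lam*t.
lam = 0.1741
for t in (1.0, 5.0, 10.0):
    assert abs(laplace_invert(lambda b: lam / b**2, t) - lam * t) < 1e-3
```

Gaver–Stehfest uses only real evaluations of the transform, which makes it easy to apply directly to expressions such as (2.16).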

Next we calculate the double Laplace transform of Rφ(t) using the empirical distribution of the inter-arrival times. Substituting (2.14) into (2.4) we obtain

    ∫_0^∞ E(e^{−αRφ(t)}) e^{−βt} dt = [ ∫_0^∞ [1 − F_n(t)] e^{−αφ(t)−βt} dt ] / [ 1 − ∫_0^∞ e^{−αφ(t)−βt} dF_n(t) ]
      = [ ∑_{k=1}^n ∫_{x_{k−1:n}}^{x_{k:n}} [1 − (k−1)/n] e^{−αφ(t)−βt} dt ] / [ 1 − (1/n) ∑_{k=1}^n e^{−αφ(x_{k:n})−βx_{k:n}} ].    (2.17)

Based on the data in Table 2.1, the numerator of (2.17) is equal to

    ∑_{k=1}^{100} ∫_{x_{k−1:100}}^{x_{k:100}} [1 − (k−1)/n] e^{−αφ(t)−βt} dt
      = (1/(100β)) [ K_2(β) − ( 96 + 78e^{−(α+2β)} + 41e^{−2(α+2β)} + 19e^{−3(α+2β)} + 4e^{−4(α+2β)} ) (1 − e^{−α}) e^{−2β} ]

and the denominator of (2.17) is equal to

    1 − (1/100) ∑_{k=1}^{100} e^{−αφ(x_{k:100})−βx_{k:100}} = K_2(β)/100,


where

    K_2(β) = 100 − ∑_{k=1}^{4} e^{−βx_{k:100}} − e^{−α} ∑_{k=5}^{22} e^{−βx_{k:100}} − e^{−2α} ∑_{k=23}^{59} e^{−βx_{k:100}}
             − e^{−3α} ∑_{k=60}^{81} e^{−βx_{k:100}} − e^{−4α} ∑_{k=82}^{96} e^{−βx_{k:100}} − e^{−5α} ∑_{k=97}^{100} e^{−βx_{k:100}}.

The distribution of Rφ(t) for t = 10 can be seen in the second column of Table 2.2.

The data set in Table 2.1 was generated from a Gamma(1,6) random variable, which has pdf

    f(x; 1, 6) = (1/120) x⁵ e^{−x},   x ≥ 0.

Based on this cycle length distribution, the graph of the mean of Rφ(t) can be seen in Figure 2.2 (solid line) and the distribution of Rφ(t) for t = 10 can be seen in the last column of Table 2.2. From this table we see that the Kolmogorov-Smirnov distance (see e.g., Dudewicz [10]) between the cdfs of Rφ(t) based on the exponential and the Gamma cycles equals 0.2183, whereas the Kolmogorov-Smirnov distance between the cdfs of Rφ(t) based on the empirical distribution function and the Gamma cycles equals 0.0199. So we conclude that in this example the approximation of the distribution of Rφ(t) based on the empirical distribution function of the data is better than the one based on an exponential distribution with parameter estimated from the data.

2.3 Renewal reward processes

Consider the renewal reward process defined in (2.1), i.e.,

    R(t) = ∑_{n=1}^{N(t)} Y_n.

We assume that Y_1 is a non-negative random variable. In this section we will derive an expression for the distribution of R(t) for finite t.

Let (Ω, F, P) be the probability space on which the iid sequence ((X_n, Y_n)) of random vectors is defined, and also an iid sequence (U_n, n ≥ 1) of exponentially distributed random variables with parameter 1, such that the sequences ((X_n, Y_n)) and (U_n) are independent. Let (T_n, n ≥ 1) be the sequence of partial sums of the variables U_n. Then the map

    Φ : ω ∈ Ω ↦ ∑_{n=1}^∞ δ_{(T_n(ω), X_n(ω), Y_n(ω))},


where δ_{(x,y,z)} is the Dirac measure in (x, y, z), defines a Poisson point process on E = [0,∞) × [0,∞) × [0,∞) with intensity measure ν(dt dx dy) = dt dH(x, y), where H is the joint cdf of X_1 and Y_1. Let Mp(E) be the set of all point measures on E. We will denote the distribution of Φ over Mp(E) by Pν.

Define for t ≥ 0 the functionals A_X(t) and A_Y(t) on Mp(E) by

    A_X(t)(µ) = ∫_E 1_{[0,t)}(s) x µ(ds dx dy)

and

    A_Y(t)(µ) = ∫_E 1_{[0,t)}(s) y µ(ds dx dy).

In the sequel we will write A_X(t, µ) = A_X(t)(µ) and A_Y(t, µ) = A_Y(t)(µ). If

    µ = ∑_{i=1}^∞ δ_{(t_i, x_i, y_i)}

with t_1 < t_2 < …, then

    A_X(t, µ) = ∑_{i=1}^∞ 1_{[0,t)}(t_i) x_i   and   A_Y(t, µ) = ∑_{i=1}^∞ 1_{[0,t)}(t_i) y_i.

Note that with probability 1, A_X(t, µ) and A_Y(t, µ) are finite. Define also for t ≥ 0 the functional R(t) on Mp(E) by

    R(t)(µ) = ∫_E 1_{[0,x)}(t − A_X(s, µ)) A_Y(s, µ) µ(ds dx dy).

Then we can easily prove the following lemma:

Lemma 2.3.1 With probability 1,

R(t) = R(t)(Φ).

The following theorem gives the formula for the distribution of R(t) in the form of a double Laplace transform.

Theorem 2.3.1 Let ((X_n, Y_n), n ≥ 1) be an iid sequence of random vectors with joint cdf H, where the X_n are strictly positive and the Y_n are non-negative random variables. Let (N(t), t ≥ 0) be the renewal process with renewal cycles X_n. Define for t ≥ 0

    R(t) = ∑_{n=1}^{N(t)} Y_n.

Then for α, β > 0

    ∫_0^∞ E(e^{−αR(t)}) e^{−βt} dt = (1 − F*(β)) / ( β[1 − H*(β, α)] ).    (2.18)

Proof: By Lemma 2.3.1,

    E(e^{−αR(t)}) = ∫_{Mp(E)} e^{−αR(t)(µ)} Pν(dµ)
      = ∫_{Mp(E)} exp{ −α ∫_E 1_{[0,x)}(t − A_X(s, µ)) A_Y(s, µ) µ(ds dx dy) } Pν(dµ)
      = ∫_{Mp(E)} ∫_E 1_{[0,x)}(t − A_X(s, µ)) exp{ −αA_Y(s, µ) } µ(ds dx dy) Pν(dµ).

Applying the Palm formula for Poisson point processes we obtain

    E(e^{−αR(t)}) = ∫_0^∞ ∫_0^∞ ∫_0^∞ ∫_{Mp(E)} 1_{[0,x)}(t − A_X(s, µ)) exp{ −αA_Y(s, µ) } Pν(dµ) dH(x, y) ds.

Using Fubini's theorem we obtain

    ∫_0^∞ E(e^{−αR(t)}) e^{−βt} dt = (1/β) [1 − F*(β)] ∫_0^∞ ∫_{Mp(E)} exp{ −[βA_X(s, µ) + αA_Y(s, µ)] } Pν(dµ) ds.

Using the Laplace functional of Poisson point processes we obtain

    ∫_{Mp(E)} exp{ −[βA_X(s, µ) + αA_Y(s, µ)] } Pν(dµ)
      = ∫_{Mp(E)} exp{ −∫_E 1_{[0,s)}(r)(βu + αv) µ(dr du dv) } Pν(dµ)
      = exp{ −∫_0^∞ ∫_0^∞ ∫_0^∞ [1 − e^{−1_{[0,s)}(r)[βu+αv]}] dH(u, v) dr }
      = exp{ −s[1 − H*(β, α)] }.

It follows that

    ∫_0^∞ E(e^{−αR(t)}) e^{−βt} dt = (1/β) [1 − F*(β)] ∫_0^∞ exp{ −s[1 − H*(β, α)] } ds
      = (1 − F*(β)) / ( β[1 − H*(β, α)] ). □


The following proposition concerns the Laplace transforms of the first and second moments of R(t), which can be derived by taking derivatives with respect to α in (2.18) and then setting α = 0.

Proposition 2.3.1 Under the same assumptions as in Theorem 2.3.1 we have:

(a) If E[Y_1 e^{−βX_1}] < ∞ for some β > 0, then

    ∫_0^∞ E[R(t)] e^{−βt} dt = [ ∫_0^∞ ∫_0^∞ y e^{−βx} dH(x, y) ] / ( β[1 − F*(β)] ),    (2.19)

(b) If E[Y_1² e^{−βX_1}] < ∞ for some β > 0, then

    ∫_0^∞ E[R²(t)] e^{−βt} dt = [ ∫_0^∞ ∫_0^∞ y² e^{−βx} dH(x, y) ] / ( β[1 − F*(β)] )
      + 2 [ ∫_0^∞ ∫_0^∞ y e^{−βx} dH(x, y) ]² / ( β[1 − F*(β)]² ).    (2.20)

Remark 2.3.1 If (X_n, n ≥ 1) and (Y_n, n ≥ 1) are independent, then (2.18), (2.19), and (2.20) reduce to

(a) ∫_0^∞ E(e^{−αR(t)}) e^{−βt} dt = (1 − F*(β)) / ( β[1 − F*(β)G*(α)] ),

(b) ∫_0^∞ E[R(t)] e^{−βt} dt = µ_Y F*(β) / ( β[1 − F*(β)] ),

(c) ∫_0^∞ E[R²(t)] e^{−βt} dt = F*(β) / ( β[1 − F*(β)] ) · [ σ_Y² + µ_Y² + 2µ_Y² F*(β) / (1 − F*(β)) ],

where µ_Y = E(Y_1) and σ_Y² = Var(Y_1).
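As a sanity check on part (a) (not part of the thesis): for exponential cycles, (N(t)) is a Poisson process of rate λ and R(t) is compound Poisson, so E e^{−αR(t)} = exp(−λt(1 − G*(α))), whose Laplace transform in t is 1/(β + λ(1 − G*(α))). With F*(β) = λ/(λ+β), the formula in (a) reduces algebraically to exactly this; the sketch below verifies the identity numerically for exponential rewards (G*(α) = θ/(θ+α)).

```python
# Check Remark 2.3.1(a) against the compound-Poisson closed form for
# X ~ exp(lam), so F*(b) = lam/(lam+b), and Y ~ exp(th), so G*(a) = th/(th+a).
lam, th = 1.3, 0.8
for a in (0.1, 1.0, 2.5):
    for b in (0.2, 0.9, 3.0):
        Fs = lam / (lam + b)
        Gs = th / (th + a)
        remark_a = (1.0 - Fs) / (b * (1.0 - Fs * Gs))     # Remark 2.3.1(a)
        compound = 1.0 / (b + lam * (1.0 - Gs))           # compound-Poisson transform
        assert abs(remark_a - compound) < 1e-12
```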

2.4 Asymptotic properties

Asymptotic properties of renewal reward processes, like the renewal-reward theorem, its expected-value version, and asymptotic normality of the processes, are well known. In this section we will reconsider some of them. We will use Tauberian theorems to derive the expected-value version of the renewal-reward theorem. We will also investigate other asymptotic properties of renewal reward processes, including asymptotic normality of the instantaneous reward process defined in (2.2). We first consider the renewal reward process (R(t), t ≥ 0) with renewal cycles X_n and non-negative rewards Y_n, as defined in (2.1), i.e.,

    R(t) = ∑_{n=1}^{N(t)} Y_n.    (2.21)

If µ_X = E(X_1) and µ_Y = E(Y_1) are finite, then

    lim_{t→∞} R(t)/t = µ_Y/µ_X   with probability 1,    (2.22)

and

    lim_{t→∞} E[R(t)]/t = µ_Y/µ_X,    (2.23)

which are well known as the renewal-reward theorem and its expected-value version, respectively, see Tijms [46] for example. The renewal-reward theorem can easily be proved using the strong law of large numbers. A proof of (2.23) can be found for example in Ross [36], where he used Wald's equation. We will give a proof for (2.23) using the following Tauberian theorem, which can be found in Widder [48].

Theorem 2.4.1 (Tauberian theorem) If α(t) is non-decreasing and such that the integral

    f(s) = ∫_0^∞ e^{−st} dα(t)

converges for s > 0, and if for some non-negative number γ and some constant C

    f(s) ∼ C/s^γ   as s → 0,

then

    α(t) ∼ C t^γ / Γ(γ + 1)   as t → ∞.

The proof of (2.23): In Section 2.3 we have proved that the Laplace transform of E[R(t)] is given by

    ∫_0^∞ E[R(t)] e^{−βt} dt = [ ∫_0^∞ ∫_0^∞ v e^{−βu} dH(u, v) ] / ( β[1 − F*(β)] ),    (2.24)


see (2.19). Assuming µ_Y is finite, we obtain from this equation

    ∫_0^∞ e^{−βt} dE[R(t)] = [ ∫_0^∞ ∫_0^∞ v e^{−βu} dH(u, v) ] / (1 − F*(β)).

Using dominated convergence we can prove that

    ∫_0^∞ ∫_0^∞ v e^{−βu} dH(u, v) = µ_Y + o(1)   as β → 0.

Similarly, if µ_X is finite,

    F*(β) = 1 − βµ_X + o(β)   as β → 0.

It follows that

    ∫_0^∞ e^{−βt} dE[R(t)] ∼ µ_Y/(µ_X β)   as β → 0.

Obviously E[R(t)] is non-decreasing, so we can apply the Tauberian theorem with γ = 1 to conclude (2.23). □
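A Monte Carlo illustration of (2.22)/(2.23) under an assumed dependent-reward model (Y_n = X_n², not an example from the thesis): with X_1 ∼ exp(1) we have µ_X = 1 and µ_Y = E[X_1²] = 2, so R(t)/t should be near 2 for large t.

```python
import random

# X_n ~ exp(1) and dependent reward Y_n = X_n^2, so mu_X = 1, mu_Y = 2.
rng = random.Random(3)
t = 5000.0
s, r = 0.0, 0.0
while True:
    x = rng.expovariate(1.0)
    if s + x > t:
        break
    s += x
    r += x * x        # accumulate Y_n = X_n^2 over completed cycles
# sd of R(t)/t here is about sqrt(8/t) ~ 0.04, so a 0.25 tolerance is generous
assert abs(r / t - 2.0) < 0.25
```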

A stronger version of (2.23) can be derived for the case where X_1 has a density in some interval. Assume that σ_X² = Var(X_1) and σ_XY = Cov(X_1, Y_1) are finite. Let f_X and f_XY denote the density function of X_1 and the joint density of X_1 and Y_1, respectively. Let M(t) = E[R(t)]. Conditioning on X_1, from (2.21) we obtain

    M(t) = K(t) + ∫_0^t M(t − x) f_X(x) dx,    (2.25)

where K(t) = ∫_0^t ∫_0^∞ y f_XY(x, y) dy dx. Define for t ≥ 0 the function

    Z(t) = M(t) − (µ_Y/µ_X) t.

From (2.25), we find that Z(t) satisfies the following integral equation:

    Z(t) = a(t) + ∫_0^t Z(t − x) f_X(x) dx,

where

    a(t) = K(t) − µ_Y + (µ_Y/µ_X) ∫_t^∞ (x − t) f_X(x) dx.

We see that a(t), t ≥ 0, is a finite sum of monotone functions. So we can use the key renewal theorem, see e.g., Tijms [46], to obtain

    lim_{t→∞} Z(t) = (1/µ_X) ∫_0^∞ a(x) dx
      = (1/µ_X) [ ∫_0^∞ [K(x) − µ_Y] dx + (µ_Y/µ_X) ∫_0^∞ ∫_x^∞ (s − x) f_X(s) ds dx ].


It can easily be verified that

    ∫_0^∞ [K(x) − µ_Y] dx = −E[X_1 Y_1] = −(σ_XY + µ_X µ_Y)

and

    ∫_0^∞ ∫_x^∞ (s − x) f_X(s) ds dx = (1/2) E(X_1²) = (1/2)(σ_X² + µ_X²).

It follows that

    lim_{t→∞} Z(t) = (1/µ_X) [ −(σ_XY + µ_X µ_Y) + (µ_Y/(2µ_X))(σ_X² + µ_X²) ]
      = (σ_X² µ_Y − 2µ_X σ_XY − µ_X² µ_Y) / (2µ_X²).

From this we conclude that

    E[R(t)] = (µ_Y/µ_X) t + (σ_X² µ_Y − 2µ_X σ_XY − µ_X² µ_Y) / (2µ_X²) + o(1)   as t → ∞.    (2.26)

Now we will consider asymptotic properties of the instantaneous reward process defined in (2.2), i.e.,

    Rφ(t) = ∑_{n=1}^{N(t)} φ(X_n) + φ(t − S_{N(t)}).

Putting µ_Y = E[φ(X_1)], we can prove that (2.22) and (2.23) look exactly the same (the contribution of the reward earned in the incomplete renewal cycle (S_{N(t)}, t] disappears in the limit). Next, to obtain a formula close to (2.26), we put σ_XY = Cov(X_1, φ(X_1)). A similar argument as for proving (2.26) can be used to deduce

    E[Rφ(t)] = (µ_Y/µ_X) t + (σ_X² µ_Y − 2µ_X σ_XY − µ_X² µ_Y) / (2µ_X²) + A + o(1)   as t → ∞,

where A = (1/µ_X) ∫_0^∞ [1 − F(t)] φ(t) dt, provided the function φ is Laplace transformable. The extra constant A can be interpreted as the contribution of the reward earned in the incomplete renewal cycle.

Under some conditions on the function φ, the limiting distribution of the instantaneous reward process (Rφ(t)) is a normal distribution. To prove this we need the following lemmas.

Lemma 2.4.1 [8] If χ(t), ε(t) and δ(t) are random functions (0 < t < ∞) such that the asymptotic distribution of χ(t) exists, ε(t) converges in probability to 1, and δ(t) converges in probability to 0 as t → ∞, then the asymptotic distribution of χ(t)ε(t) + δ(t) exists and coincides with that of χ(t).


Lemma 2.4.2 [33] If (X_n^{(1)}), (X_n), and (X_n^{(2)}) are sequences of random variables such that X_n^{(1)} ≤ X_n ≤ X_n^{(2)} and the sequences (X_n^{(1)}) and (X_n^{(2)}) have the same asymptotic distribution as n → ∞, then X_n also has that asymptotic distribution.

Lemma 2.4.3 [33] Let (ξ_n) denote a sequence of identically distributed random variables having finite second moment. Let ν(t) denote a positive integer-valued random variable for t > 0, for which ν(t)/t converges in probability to c > 0 as t → ∞. Then ξ_{ν(t)}/√ν(t) converges in probability to 0.

Assume that the function φ is non-negative and non-decreasing, or bounded. Let Y_n := φ(X_n). Then

    ∑_{n=1}^{N(t)} Y_n = ∑_{n=1}^{N(t)} φ(X_n) ≤ Rφ(t) ≤ ∑_{n=1}^{N(t)+1} φ(X_n) = ∑_{n=1}^{N(t)+1} Y_n.    (2.27)

Assume that σ_X² and σ_Y² are finite. Then using the Central Limit Theorem for random sums, see Embrechts et al. [11], we obtain, as t → ∞,

    [ Var(Y_1 − (µ_Y/µ_X) X_1) · t/µ_X ]^{−1/2} ( ∑_{n=1}^{N(t)} Y_n − (µ_Y/µ_X) t ) →d N(0, 1),    (2.28)

where

    Var(Y_1 − (µ_Y/µ_X) X_1) · t/µ_X = ( (µ_X² σ_Y² + µ_Y² σ_X² − 2µ_X µ_Y σ_XY) / µ_X³ ) t.

Now we will consider the limiting distribution of ∑_{n=1}^{N(t)+1} Y_n. Let

    C = (µ_X² σ_Y² + µ_Y² σ_X² − 2µ_X µ_Y σ_XY) / µ_X³.

Note that

    ( ∑_{n=1}^{N(t)+1} Y_n − (µ_Y/µ_X) t ) / √(Ct) = ( ∑_{n=1}^{N(t)} Y_n − (µ_Y/µ_X) t ) / √(Ct) + Y_{N(t)+1}/√(Ct),

where the first term on the right-hand side converges in distribution to a standard normal random variable. If we can prove that Y_{N(t)+1}/√t →p 0 as t → ∞, then by Lemma 2.4.1

    ( ∑_{n=1}^{N(t)+1} Y_n − (µ_Y/µ_X) t ) / √(Ct) →d N(0, 1)   as t → ∞.    (2.29)


But since σ_Y² < ∞ it follows, by Lemma 2.4.3 and the fact that N(t)/t →a.s. 1/µ_X (> 0) as t → ∞, that

    Y_{N(t)+1}/√t = ( Y_{N(t)+1}/√(N(t) + 1) ) · √((N(t) + 1)/t) →p 0   as t → ∞.

Finally, combining (2.27), (2.28), (2.29), and Lemma 2.4.2, it follows that

    ( Rφ(t) − (µ_Y/µ_X) t ) / √(Ct) →d N(0, 1)   as t → ∞.

2.5 Covariance structure of renewal processes

Basically, the covariance structure of renewal reward processes can be derived using point processes, but it would involve more complicated calculations. In this section we will derive the covariance structure of a renewal process, a special case of renewal reward processes. Using the notation of Section 2.2, define for t ≥ 0 the functional N(t) on Mp(E) by

    N(t)(µ) = ∫_E ∫_E 1_{[0,x)}(t − A(s, µ)) 1_{[0,s)}(u) µ(du dv) µ(ds dx).

Let ω ∈ Ω. Then

    N(t)(Φ(ω)) = ∑_{n=1}^∞ ∑_{i=1}^∞ 1_{[0,X_n(ω))}(t − A(T_n(ω), Φ(ω))) 1_{[0,T_n(ω))}(T_i(ω))
      = ∑_{i=1}^∞ 1_{[0,T_{N(t,ω)+1}(ω))}(T_i(ω))
      = N(t, ω).

So we have, with probability 1,

    N(t) = N(t)(Φ).

Using this functional expression of N(t) we derive the double Laplace transform of E[N(t_1)N(t_2)], which is stated in the following theorem. The proof is given in Appendix A.

Theorem 2.5.1 Let (X_n, n ≥ 1) be an iid sequence of strictly positive random variables with common cdf F. Let (N(t), t ≥ 0) be the renewal process with renewal cycles X_n. Then for α, β > 0,

    ∫_0^∞ ∫_0^∞ E[N(t_1)N(t_2)] e^{−αt_1−βt_2} dt_1 dt_2
      = [1 − F*(α)F*(β)] F*(α + β) / ( αβ[1 − F*(α)][1 − F*(β)][1 − F*(α + β)] ).    (2.30)


Example 2.5.1 Let (N(t), t ≥ 0) be a renewal process with inter-arrival times X_n having a Gamma distribution with common pdf

    f(x; λ, m) = λ e^{−λx} (λx)^{m−1} / Γ(m),   λ > 0, x ≥ 0.

Using (2.10) and (2.30) we obtain

    ∫_0^∞ E[N(t)] e^{−βt} dt = λ^m / ( β[(β + λ)^m − λ^m] )

and

    ∫_0^∞ ∫_0^∞ E[N(t_1)N(t_2)] e^{−αt_1} e^{−βt_2} dt_1 dt_2
      = λ^m [(α + λ)^m (β + λ)^m − λ^{2m}] / ( αβ[(α + λ)^m − λ^m][(β + λ)^m − λ^m][(α + β + λ)^m − λ^m] ).

As an example, take m = 2. Transforming back these Laplace transforms we obtain

    E[N(t)] = (1/2)λt − 1/4 + (1/4)e^{−2λt}

and for t_1 ≤ t_2

    E[N(t_1)N(t_2)] = (1/16) [ 1 − 2λ(t_2 − t_1) + 4λ²t_1t_2 − (1 + 4λt_1 − 2λt_2)e^{−2λt_1}
                      − (1 + 2λt_1)e^{−2λt_2} + e^{−2λ(t_2−t_1)} ].

Hence for t_1 ≤ t_2

    Cov[N(t_1), N(t_2)] = (1/4)λt_1(1 − e^{−2λt_1} − e^{−2λt_2}) + (1/16)(e^{−2λ(t_2−t_1)} − e^{−2λ(t_1+t_2)}).

Note that for m = 1 the process (N(t), t ≥ 0) is a homogeneous Poisson process with rate λ, and in this case

    E[N(t_1)N(t_2)] = λ²t_1t_2 + λ min{t_1, t_2}.

This result can also be obtained using (2.30). Moreover, the covariance between N(t_1) and N(t_2) for t_1 < t_2 is given by

    Cov[N(t_1), N(t_2)] = λt_1.
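The Poisson product moment above is easy to confirm by simulation (a sketch, not from the thesis): generate Poisson paths from exponential gaps and estimate E[N(t_1)N(t_2)], which should be close to λ²t_1t_2 + λ min{t_1, t_2}.

```python
import random

rng = random.Random(42)
lam, t1, t2, n = 1.0, 2.0, 5.0, 100_000
acc = 0.0
for _ in range(n):
    # one Poisson(lam) path on [0, t2] built from exponential gaps
    s, n1, n2 = 0.0, 0, 0
    while True:
        s += rng.expovariate(lam)
        if s > t2:
            break
        n2 += 1
        if s <= t1:
            n1 += 1
    acc += n1 * n2
est = acc / n
exact = lam**2 * t1 * t2 + lam * min(t1, t2)   # = 12 here
assert abs(est - exact) < 0.3                  # Monte Carlo s.e. is about 0.04
```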


2.6 System reliability in a stress-strength model

In this section we consider a system which is supposed to function during a certain time, after which it fails. The failures of the system are of two kinds: proper failures, which are due to the system's own wear-out and occur even in an unstressed environment, and failures which are due to random environmental stresses. We will only consider the latter. The system we consider here is generic, and models all kinds of products, subsystems, or components.

The study of system reliability where the failures are only due to random environmental stresses is known as stress-strength interference reliability, or a stress-strength model. Many authors have paid attention to the study of such a model, see for example Xue and Yang [50], Chang [4], and Gaudoin and Soler [14]. In other literature this model is also called a load-capacity interference model, see Lewis and Chen [24].

In Gaudoin and Soler [14] three types of stresses are considered: point, alternating and diffused stresses. The systems they considered may have a memorization: a stress which occurs at a given time can influence future failures if the system has kept it in memory. Two types of stochastic influence models are proposed: stress-strength duration models (type I) and random environment lifetime models (type II). The type I models are based on the assumption that a system failure occurs at time t if the accumulated memory of all stresses that occurred before time t exceeds some strength threshold of the system, whereas the type II models are models for which, conditionally on all stresses that occurred before, the cumulative failure (hazard) rate at time t is proportional to the accumulated memory of all stresses occurring before time t. In type II models the stresses weaken the system. The accumulation of stresses will cause the system to fail, but this failure is not associated with a given strength threshold.

In this section we will consider extensions of some models proposed by Gaudoin and Soler. We will restrict ourselves to point stresses. The point stresses are impulses which occur at random times with random amplitudes. In the type I models Gaudoin and Soler assumed that the occurrence times of the stresses are modelled respectively as a homogeneous Poisson process, a non-homogeneous Poisson process and a renewal process. The two kinds of memory they considered are: (i) the system keeps no memory of the stresses, and (ii) the system keeps a permanent memory of all stresses that occurred before. The amplitudes of the stresses and their occurrence times are assumed to be independent. We give two generalizations. Firstly, the occurrence times of the stresses are modelled as a Cox process (doubly stochastic Poisson process), and we keep the independence assumption. Secondly, the occurrence times of the stresses are modelled as a renewal process, but they may depend on the amplitudes of the stresses. We discuss this in Subsection 2.6.1.

In the type II models Gaudoin and Soler model the occurrence times of the stresses respectively as a homogeneous Poisson process and a non-homogeneous Poisson process. They assumed that the amplitudes of the stresses are independent of their occurrence times, and considered any kind of memory. We give a generalization where the occurrence times of the stresses are modelled as a Cox process. We also give a further generalization where the occurrence times of the stresses are modelled as a renewal process and may depend on their amplitudes, but there we only assume that the system keeps a permanent memory of the stresses which occurred. We discuss this in Subsection 2.6.2.

2.6.1 Type I models

Suppose that a system put into operation at time $t_0 = 0$ is exposed to stresses occurring at random time points $S_1, S_2, S_3, \ldots$ where $S_0 := 0 < S_i < S_{i+1}$, $\forall i \ge 1$. Let $N(t) = \sup\{n \ge 0 : S_n \le t\}$ be the number of stresses that occurred in the time interval $[0, t]$. Let the amplitude of the stress at time $S_n$ be given by the non-negative random variable $Y_n$. Assume that the sequence $(Y_n, n \ge 1)$ is iid with a common distribution function $G$ and independent of the sequence $(S_n, n \ge 1)$. After the occurrence of a stress the system may keep the stress in its memory. In Gaudoin and Soler the memory of the system is represented by a deterministic Stieltjes measure. Here we will represent the memory of the system in terms of a recovery rate. We call a function $h$ the recovery rate of the system if it is non-negative, non-increasing, bounded above by 1, and vanishes on $(-\infty, 0)$. We will assume that at time $t$ the contribution of the stress that occurred at time $S_n \le t$ has amplitude $Y_n h(t - S_n)$. So the accumulation of the stresses at time $t$ is given by

$$L(t) = \sum_{n=1}^{\infty} Y_n h(t - S_n) = \sum_{n=1}^{N(t)} Y_n h(t - S_n). \tag{2.31}$$

If the strength threshold of the system equals a positive constant $u$, then the reliability of the system at time $t$ is given by

$$R(t) = P\Big(\sup_{0 \le s \le t} L(s) \le u\Big). \tag{2.32}$$

In general it is difficult to calculate $R(t)$. In the case the system keeps no memory of the stresses, equation (2.32) simplifies to

$$R(t) = P\big(\max\{Y_1, Y_2, \ldots, Y_{N(t)}\} \le u\big) \tag{2.33}$$

and in the case the system keeps a permanent memory of the stresses (without


recovery), equation (2.31) reduces to

$$L(t) = \sum_{n=1}^{N(t)} Y_n \tag{2.34}$$

and (2.32) simplifies to

$$R(t) = P(L(t) \le u). \tag{2.35}$$

We see that if $(N(t))$ is a renewal process, then $(L(t))$ is a renewal reward process.

Gaudoin and Soler [14] consider homogeneous Poisson processes, non-homogeneous Poisson processes and renewal processes as models for $(N(t))$. A generalization of the non-homogeneous Poisson process is the Cox process; see e.g. Grandell [17]. A Cox process can be considered as a non-homogeneous Poisson process with randomized intensity measure. For a non-homogeneous Poisson process with intensity measure $\nu$ we have

$$P(N(t) = k) = \frac{(\nu[0,t])^k}{k!}e^{-\nu[0,t]}, \qquad k = 0, 1, 2, \ldots$$

For a Cox process the intensity measure $\nu$ is chosen according to some probability measure $\Pi$ and

$$P(N(t) = k) = \int \frac{(\nu[0,t])^k}{k!}e^{-\nu[0,t]}\,\Pi(d\nu), \qquad k = 0, 1, 2, \ldots$$

So if $(N(t))$ is a Cox process then, by conditioning on the number of stresses in the time interval $[0, t]$, the reliability in (2.33) can be expressed as

$$R(t) = \int e^{-[1 - G(u)]\nu[0,t]}\,\Pi(d\nu).$$

As an example let $\nu[0,t] = \Lambda t$ where $\Lambda$ is chosen according to the uniform distribution on $[0, 1]$. Then

$$R(t) = \frac{1 - e^{-[1 - G(u)]t}}{[1 - G(u)]t}.$$

In the case without recovery, note that the reliability $R(t)$ is just the cdf of $L(t)$ at the point $u$. It follows that we only need to calculate the distribution of $L(t)$ in (2.34). By conditioning on the number of stresses in the time interval $[0, t]$ we obtain the Laplace transform of $L(t)$:

$$\psi(t, \alpha) := E\big(e^{-\alpha L(t)}\big) = \int e^{-[1 - G^*(\alpha)]\nu[0,t]}\,\Pi(d\nu) = \int e^{-[1 - G^*(\alpha)]\int_0^t \nu(ds)}\,\Pi(d\nu).$$


As an example let $\nu(ds) = X(s)\,ds$ where $(X(t), t \ge 0)$ is a continuous-time Markov chain on $\{0, 1\}$. Suppose that the chain starts at time 0 in state 1, where it stays an exponential time with mean $1/\lambda_1$. Then it jumps to state 0, where it stays an exponential time with mean $1/\lambda_0$, and so on. It follows that

$$\psi(t, \alpha) = E\big(e^{-c(\alpha)\int_0^t X(s)\,ds}\big)$$

where $c(\alpha) = 1 - G^*(\alpha)$. Let

$$\tau_i = \inf\{t \ge 0 \mid X(t) \ne i\}.$$

Starting from 1, the random variable $\tau_1$ is the time at which the process leaves state 1, and

$$P_1(\tau_1 > t) = e^{-\lambda_1 t}.$$

Similarly, for $\tau_0$ we have

$$P_0(\tau_0 > t) = e^{-\lambda_0 t}.$$

Then

$$\psi_1(t, \alpha) = E_1\big(e^{-c(\alpha)\int_0^t X(s)\,ds},\ \tau_1 > t\big) + E_1\big(e^{-c(\alpha)\int_0^t X(s)\,ds},\ \tau_1 < t\big) = e^{-(\lambda_1 + c(\alpha))t} + \int_0^t \lambda_1 e^{-(\lambda_1 + c(\alpha))x}\,\psi_0(t - x, \alpha)\,dx, \tag{2.36}$$

and

$$\psi_0(t, \alpha) = E_0\big(e^{-c(\alpha)\int_0^t X(s)\,ds},\ \tau_0 > t\big) + E_0\big(e^{-c(\alpha)\int_0^t X(s)\,ds},\ \tau_0 < t\big) = e^{-\lambda_0 t} + \int_0^t \lambda_0 e^{-\lambda_0 x}\,\psi_1(t - x, \alpha)\,dx. \tag{2.37}$$

Define for $\beta > 0$ and $i = 0, 1$

$$\psi_i(\beta, \alpha) = \int_0^\infty e^{-\beta t}\psi_i(t, \alpha)\,dt.$$

From (2.36) and (2.37) we get the system of equations

$$\psi_1(\beta, \alpha) = \frac{1}{\lambda_1 + c(\alpha) + \beta} + \frac{\lambda_1}{\lambda_1 + c(\alpha) + \beta}\,\psi_0(\beta, \alpha), \qquad \psi_0(\beta, \alpha) = \frac{1}{\lambda_0 + \beta} + \frac{\lambda_0}{\lambda_0 + \beta}\,\psi_1(\beta, \alpha).$$

It follows that

$$\psi_1(\beta, \alpha) = \frac{\lambda_1 + \lambda_0 + \beta}{\lambda_0 c(\alpha) + (\lambda_1 + \lambda_0 + c(\alpha))\beta + \beta^2}.$$


Transforming back this transform we obtain

$$\psi_1(t, \alpha) = e^{-\frac{1}{2}(\lambda + c(\alpha))t}\left[\cos\left(\frac{\sqrt{b}\,t}{2}\right) + \frac{\lambda - c(\alpha)}{\sqrt{b}}\sin\left(\frac{\sqrt{b}\,t}{2}\right)\right] \tag{2.38}$$

where $\lambda = \lambda_0 + \lambda_1$ and $b = 4\lambda_0 c(\alpha) - [\lambda + c(\alpha)]^2$. To find the distribution function of $L(t)$ we transform back numerically the Laplace transform in (2.38) with respect to $\alpha$. For example, if $\lambda_0 = \lambda_1 = 1$ and $G(x) = 1 - e^{-x}$, then we get the distribution function of $L(10)$ as in Figure 2.4. Note that there is a mass

Figure 2.4: The graph of the distribution function of L(10).

Figure 2.5: The graph of R(t) for u = 5.

at 0 which corresponds to the event that no stress occurs in the time interval


$[0, t]$. The graph of $R(t)$, for $u = 5$, is given in Figure 2.5. Next we will consider the second generalization of the reliability in (2.35),

i.e., $R(t) = P(L(t) \le u)$ where $L(t) = \sum_{n=1}^{N(t)} Y_n$. As in Gaudoin and Soler [14] we assume that $(N(t))$ is a renewal process, but we allow a dependence between the sequences $(S_n)$ and $(Y_n)$. Since $(N(t))$ is a renewal process, the increments $X_n = S_n - S_{n-1}$, $n = 1, 2, \ldots$, where $S_0 = 0$, are iid random variables. We will assume that $((X_n, Y_n), n \ge 1)$ is an iid sequence of random vectors. Note that in this case $L(t)$ is a renewal reward process. So we can use Theorem 2.3.1 to determine the distribution of $L(t)$:

$$\int_0^\infty E(e^{-\alpha L(t)})e^{-\beta t}\,dt = \frac{1 - F^*(\beta)}{\beta[1 - H^*(\beta, \alpha)]}$$

where $F$ is the cdf of $X_1$, and $H$ is the joint cdf of $X_1$ and $Y_1$.
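This transform can be checked in the simplest special case (a sketch under assumed test values: $X_1 \sim \mathrm{Exp}(\lambda)$ and $Y_1 \sim \mathrm{Exp}(\gamma)$ independent, so that $H^*(\beta,\alpha) = F^*(\beta)G^*(\alpha)$ and $L(t)$ is compound Poisson with $E\,e^{-\alpha L(t)} = e^{-\lambda t\,\alpha/(\gamma+\alpha)}$):

```python
# Sketch: in the independent exponential case the renewal reward transform must
# match the Laplace transform of the compound-Poisson expression for E exp(-alpha L(t)).
import numpy as np

lam, gam, alpha, beta = 1.3, 0.8, 0.6, 0.9
Fs = lam / (lam + beta)                     # F*(beta) for Exp(lam)
Gs = gam / (gam + alpha)                    # G*(alpha) for Exp(gam)
closed = (1 - Fs) / (beta * (1 - Fs * Gs))

t = np.linspace(0.0, 60.0, 600001)
f = np.exp(-lam * t * alpha / (gam + alpha)) * np.exp(-beta * t)
dt = t[1] - t[0]
numeric = dt * (f.sum() - 0.5 * (f[0] + f[-1]))
assert abs(numeric - closed) < 1e-6
```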

2.6.2 Type II models

Using the notation of Subsection 2.6.1, we now consider a model for the lifetime of the system where the cumulative failure rate at time $t$ is proportional to the accumulation of all stresses that occurred before time $t$. In this case the system reliability is given by

$$R(t) = E\big(e^{-\alpha\sum_{n=1}^\infty Y_n h(t - S_n)}\big)$$

where $\alpha > 0$ is a proportionality constant and $h$ is an arbitrary recovery function; see Subsection 2.6.1. We see that in this case the reliability is the Laplace transform of $L(t) := \sum_{n=1}^\infty Y_n h(t - S_n)$. The case where the occurrence times $(S_n)$ form a non-homogeneous Poisson process has been discussed by Gaudoin and Soler [14]. Here we will first consider a generalization where $(S_n)$ is a Cox process. We assume that the sequences $(S_n)$ and $(Y_n)$ are independent. We will express $L(t)$ as a functional of a Poisson point process.

Let $(\Omega, \mathcal{F}, P)$ be the probability space on which the random variables $S_n$ and $Y_n$ are defined, such that the sequences $(S_n)$ and $(Y_n)$ are independent. Since we assume that $(S_n)$ is a Cox process, the map

$$\omega \in \Omega \mapsto \sum_{n=1}^\infty \delta_{S_n(\omega)}$$

defines a Poisson point process on $[0, \infty)$ with intensity measure $\nu$, where the measure $\nu$ is chosen randomly according to some probability distribution $\Pi$. More formally, the intensity measure $\nu$ is chosen from the set $M^+([0,\infty))$ of all Radon measures on $[0,\infty)$. Moreover, since $(S_n)$ and $(Y_n)$ are independent, the map

$$\Phi : \omega \in \Omega \mapsto \sum_{n=1}^\infty \delta_{(S_n(\omega), Y_n(\omega))}$$

defines a Poisson point process on $E = [0,\infty) \times [0,\infty)$ with intensity measure $\nu \times G(dt\,dy) = \nu(dt)\,dG(y)$, where $G$ is the cdf of $Y_1$. Let $M_p(E)$ be the set of simple point measures $\mu$ on $E$. Denote the distribution of $\Phi$ over $M_p(E)$ by $P_{\nu \times G}$, i.e., $P_{\nu \times G} = P \circ \Phi^{-1}$.

As in Lemma 2.2.1 we have, with probability 1,

$$L(t) = L(t)(\Phi) \tag{2.39}$$

where $L(t)(\mu) = \int_E y\,h(t - s)\,\mu(ds\,dy)$. Using the formula for the Laplace functional of Poisson point processes, see Theorem 1.2.1, we obtain

$$R(t) = \int_{M^+([0,\infty))}\int_{M_p(E)} e^{-\int_E \alpha y h(t-s)1_{[0,t]}(s)\,\mu(ds\,dy)}\,P_{\nu\times G}(d\mu)\,\Pi(d\nu)$$
$$= \int_{M^+([0,\infty))}\exp\Big\{-\int_E\big[1 - e^{-\alpha y h(t-s)1_{[0,t]}(s)}\big]\,\nu\times G(ds\,dy)\Big\}\,\Pi(d\nu)$$
$$= \int_{M^+([0,\infty))}\exp\Big\{-\int_0^t\big[1 - G^*(\alpha h(t-s))\big]\,\nu(ds)\Big\}\,\Pi(d\nu).$$

The last equality follows from the independence assumption between $(S_n)$ and $(Y_n)$.

As an example let $M^+([0,\infty)) = \{\mu : \mu(dt) = \lambda\,dt,\ \lambda \in [0,\infty)\}$ and let $\Pi$ be the probability distribution of an exponential random variable with parameter $\eta$ on $M^+([0,\infty))$. Then

$$R(t) = \int_0^\infty\exp\Big\{-\int_0^t\lambda\big[1 - G^*(\alpha h(t-s))\big]\,ds\Big\}\,\eta e^{-\eta\lambda}\,d\lambda = \eta\int_0^\infty\exp\Big\{-\Big[t + \eta - \int_0^t G^*(\alpha h(t-s))\,ds\Big]\lambda\Big\}\,d\lambda = \frac{\eta}{\eta + t - \int_0^t G^*(\alpha h(t-s))\,ds}.$$

Moreover, if we assume that $h(t) = e^{-t}$ and $G(u) = 1 - e^{-\gamma u}$, $\gamma > 0$, then we get

$$R(t) = \frac{\eta}{\eta - \ln\Big(\dfrac{\alpha e^{-t} + \gamma}{\alpha + \gamma}\Big)}.$$
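The closed form can be checked numerically (a sketch; the values of $\eta$, $\alpha$, $\gamma$ and $t$ are arbitrary test values):

```python
# Sketch: with h(t) = e^{-t} and G exponential(gamma), so G*(u) = gamma/(gamma+u),
# the reliability eta/(eta + t - int_0^t G*(alpha h(t-s)) ds) should reduce to
# eta / (eta - log((alpha e^{-t} + gamma)/(alpha + gamma))).
import math
import numpy as np

eta, alpha, gam, t = 2.0, 1.5, 0.7, 3.0
s = np.linspace(0.0, t, 200001)
g = gam / (gam + alpha * np.exp(-(t - s)))   # G*(alpha h(t-s))
ds = s[1] - s[0]
integral = ds * (g.sum() - 0.5 * (g[0] + g[-1]))        # trapezoid rule on [0, t]
R_num = eta / (eta + t - integral)
R_closed = eta / (eta - math.log((alpha * math.exp(-t) + gam) / (alpha + gam)))
assert abs(R_num - R_closed) < 1e-8
```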

Now consider the second generalization, where the system keeps a permanent memory of the stresses ($h(t) \equiv 1$). Suppose that the occurrence times $(S_n)$ are modelled as a renewal process. As at the end of the previous subsection, if $X_n = S_n - S_{n-1}$, $n = 1, 2, \ldots$, where $S_0 = 0$, represent the inter-arrival times of the renewal process and $((X_n, Y_n))$ is assumed to be an iid sequence of random vectors, then

$$\int_0^\infty R(t)e^{-\beta t}\,dt = \frac{1 - F^*(\beta)}{\beta[1 - H^*(\beta, \alpha)]}$$

where $F$ is the cdf of $X_1$ and $H$ is the joint cdf of $X_1$ and $Y_1$. As a special case, when the sequences $(X_n)$ and $(Y_n)$ are independent, we have

$$\int_0^\infty R(t)e^{-\beta t}\,dt = \frac{1 - F^*(\beta)}{\beta[1 - F^*(\beta)G^*(\alpha)]}$$

where $G$ is the cdf of $Y_1$.

As an example assume that $((X_n, Y_n))$ is an iid sequence of random vectors.

Suppose that $X_1$ and $Y_1$ have a joint bivariate exponential distribution with

$$P(X_1 > x, Y_1 > y) = e^{-(\lambda_1 x + \lambda_2 y + \lambda_{12}\max(x, y))}; \qquad x, y \ge 0;\ \lambda_1, \lambda_2, \lambda_{12} > 0.$$

The marginals are given by

$$P(X_1 > x) = e^{-(\lambda_1 + \lambda_{12})x} \tag{2.40}$$

and

$$P(Y_1 > y) = e^{-(\lambda_2 + \lambda_{12})y}. \tag{2.41}$$

The correlation coefficient $\rho$ between $X_1$ and $Y_1$ is given by

$$\rho = \frac{\lambda_{12}}{\lambda_1 + \lambda_2 + \lambda_{12}}.$$

In this case

$$\int_0^\infty R(t)e^{-\beta t}\,dt = \frac{C_1\beta + C_2}{C_1\beta^2 + C_3\beta + C_4},$$

where $C_1 = \lambda_2 + \lambda_{12} + \alpha$, $C_2 = C_1(C_1 + \lambda_1)$, $C_3 = \lambda_1\alpha + C_2$, and $C_4 = \alpha(\lambda_1 + \lambda_{12})(C_1 + \lambda_1)$. Inverting this transform we obtain

$$R(t) = e^{-\frac{C_3}{2C_1}t}\left[\cos\left(\frac{1}{2C_1}\sqrt{4C_1C_4 - C_3^2}\,t\right) + \frac{2C_2 - C_3}{\sqrt{4C_1C_4 - C_3^2}}\sin\left(\frac{1}{2C_1}\sqrt{4C_1C_4 - C_3^2}\,t\right)\right].$$

In case the sequences $(X_n)$ and $(Y_n)$ are independent, where $X_1$ and $Y_1$ satisfy (2.40) and (2.41),

$$\int_0^\infty R(t)e^{-\beta t}\,dt = \frac{\alpha + \lambda_2 + \lambda_{12}}{\beta(\alpha + \lambda_2 + \lambda_{12}) + \alpha(\lambda_1 + \lambda_{12})}.$$

Inverting this transform we obtain

$$R(t) = \exp\Big\{-\frac{\alpha(\lambda_1 + \lambda_{12})}{\alpha + \lambda_2 + \lambda_{12}}\,t\Big\}.$$

Now we will observe the effect of the dependence between $X_1$ and $Y_1$ on the reliability $R(t)$ in this example. As examples, first take $\alpha = 1$, $\lambda_1 = \lambda_2 = \frac{2}{3}$, $\lambda_{12} = \frac{1}{3}$. Then for the dependent case $\rho = 0.2$ and

$$R(t) = e^{-\frac{3}{2}t}\big[\cosh(0.9574t) + 1.2185\sinh(0.9574t)\big],$$

and for the independent case

$$R(t) = e^{-\frac{1}{2}t}.$$

The graphs of $R(t)$ can be seen in Figure 2.6.
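The quoted numerical coefficients can be reproduced from the constants $C_1, \ldots, C_4$ (a sketch for the first parameter set; since the discriminant $4C_1C_4 - C_3^2$ is negative here, the cosine/sine pair becomes the hyperbolic pair):

```python
# Sketch: recompute rho, the exponential decay rate C3/(2 C1), the cosh/sinh rate
# sqrt(C3^2 - 4 C1 C4)/(2 C1) and the sinh coefficient (2 C2 - C3)/sqrt(C3^2 - 4 C1 C4)
# for alpha = 1, lam1 = lam2 = 2/3, lam12 = 1/3.
import math

alpha, lam1, lam2, lam12 = 1.0, 2 / 3, 2 / 3, 1 / 3
C1 = lam2 + lam12 + alpha
C2 = C1 * (C1 + lam1)
C3 = lam1 * alpha + C2
C4 = alpha * (lam1 + lam12) * (C1 + lam1)

rho = lam12 / (lam1 + lam2 + lam12)
decay = C3 / (2 * C1)
disc = C3 ** 2 - 4 * C1 * C4                 # positive, so cos/sin turn into cosh/sinh
rate = math.sqrt(disc) / (2 * C1)
coef = (2 * C2 - C3) / math.sqrt(disc)

assert abs(rho - 0.2) < 1e-12
assert abs(decay - 1.5) < 1e-12
assert abs(rate - 0.9574) < 1e-4
assert abs(coef - 1.2185) < 1e-4
```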

Figure 2.6: The graphs of R(t) (dependent and independent case) for α = 1, λ1 = λ2 = 2/3, λ12 = 1/3.

Secondly, take $\alpha = 1$, $\lambda_1 = \lambda_2 = \frac{1}{3}$, $\lambda_{12} = \frac{2}{3}$. Then for the dependent case $\rho = 0.5$ and

$$R(t) = e^{-1.25t}\big[\cosh(0.6292t) + 1.7219\sinh(0.6292t)\big],$$

and for the independent case

$$R(t) = e^{-\frac{1}{2}t}.$$

The graphs of $R(t)$ can be seen in Figure 2.7.


Figure 2.7: The graphs of R(t) (dependent and independent case) for α = 1, λ1 = λ2 = 1/3, λ12 = 2/3.

Figure 2.8: The graphs of R(t) (dependent and independent case) for α = 1, λ1 = λ2 = 1/9, λ12 = 8/9.

Finally, take $\alpha = 1$, $\lambda_1 = \lambda_2 = \frac{1}{9}$, $\lambda_{12} = \frac{8}{9}$. Then for the dependent case $\rho = 0.8$ and

$$R(t) = e^{-1.0833t}\big[\cosh(0.3436t) + 2.9913\sinh(0.3436t)\big],$$

and for the independent case

$$R(t) = e^{-\frac{1}{2}t}.$$

The graphs of $R(t)$ can be seen in Figure 2.8.


Chapter 3

Integrated Renewal Processes

3.1 Notations and Definitions

Consider a locally finite point process on the positive half line $[0, \infty)$. Denote the ordered sequence of points by $0 < S_1 < S_2 < \ldots$. We will think of the points $S_n$ as arrival times. We define $S_0 := 0$, but this does not mean that we assume that there is a point in 0. Let $N(t)$ be the number of arrivals in the time interval $[0, t]$, i.e., $N(t) = \sup\{n \ge 0 : S_n \le t\}$. Define for $t \ge 0$

$$Y(t) = \int_0^t N(s)\,ds.$$

If $(N(t), t \ge 0)$ is a renewal process, we call the stochastic process $(Y(t), t \ge 0)$ an integrated renewal process. Note that we can express $Y(t)$ as

$$Y(t) = \sum_{i=1}^{N(t)}(t - S_i) = tN(t) - Z(t), \tag{3.1}$$

where

$$Z(t) = \sum_{i=1}^{N(t)} S_i. \tag{3.2}$$

Figure 3.1 shows the graphs of $Y(t)$ and $Z(t)$. In this chapter we will discuss the distributions of $Y(t)$ and $Z(t)$. In Section 3.2 we discuss the distributions of $Y(t)$ and $Z(t)$ when $(N(t))$ is a Poisson or a Cox process. In Section 3.3 we discuss the distributions of $Z(t)$ and $Y(t)$ when $(N(t))$ is a renewal process. Their asymptotic properties are studied in Section 3.4. Finally, an application is given in Section 3.5.


Figure 3.1: Graphs of Y(t) and Z(t).

3.2 (N(t)) a Poisson or Cox process

Firstly, suppose that the process $(N(t), t \ge 0)$ is a homogeneous Poisson process with rate $\lambda$. It is well known that, given $N(t) = n$, the $n$ arrival times $S_1, \ldots, S_n$ have the same distribution as the order statistics corresponding to $n$ independent random variables uniformly distributed on the time interval $[0, t]$ (see e.g. Ross [36]). Conditioning on the number of arrivals in the time interval $[0, t]$ we obtain

$$E(e^{-\alpha Y(t)}) = \sum_{n=0}^\infty E\big[e^{-\alpha\sum_{i=1}^n(t - S_i)}\mid N(t) = n\big]\,P(N(t) = n) = \sum_{n=0}^\infty e^{-\alpha n t}\,E\big[e^{\alpha\sum_{i=1}^n V_i}\big]\,\frac{(\lambda t)^n}{n!}e^{-\lambda t}$$

where $V_i$, $i = 1, 2, \ldots, n$, are independent and identically uniform random variables on $[0, t]$. Since

$$E\big[e^{\alpha V_1}\big] = \frac{1}{\alpha t}\big[e^{\alpha t} - 1\big]$$

it follows that

$$E(e^{-\alpha Y(t)}) = \sum_{n=0}^\infty e^{-\alpha n t}\Big[\frac{1}{\alpha t}\big(e^{\alpha t} - 1\big)\Big]^n\frac{(\lambda t)^n}{n!}e^{-\lambda t} = \sum_{n=0}^\infty\frac{\Big[\frac{\lambda(1 - e^{-\alpha t})}{\alpha}\Big]^n}{n!}e^{-\lambda t} = \exp\Big\{\frac{\lambda(1 - \alpha t - e^{-\alpha t})}{\alpha}\Big\}. \tag{3.3}$$

From (3.3) we deduce that $E[Y(t)] = \frac{1}{2}\lambda t^2$ and $\mathrm{Var}[Y(t)] = \frac{1}{3}\lambda t^3$. Using a similar argument we can prove that $Z(t)$ has the same Laplace transform as $Y(t)$. So, by the uniqueness theorem for Laplace transforms, we conclude that $Z(t)$ has the same distribution as $Y(t)$.
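The moment formulas can be checked from (3.3) by numerical differentiation of the cumulant function $\log E(e^{-\alpha Y(t)})$ at $\alpha = 0$ (a sketch; $\lambda$ and $t$ are arbitrary test values):

```python
# Sketch: central finite differences of log psi(alpha) at 0 recover
# E[Y(t)] = lam t^2/2 and Var[Y(t)] = lam t^3/3; log psi(0) = 0 exactly.
import math

lam, t = 2.0, 1.5

def log_psi(a):                  # log E e^{-alpha Y(t)} from (3.3), a != 0
    return lam * (1 - a * t - math.exp(-a * t)) / a

h = 1e-3
mean = -(log_psi(h) - log_psi(-h)) / (2 * h)            # first cumulant
var = (log_psi(h) - 2 * 0.0 + log_psi(-h)) / h ** 2     # second cumulant

assert abs(mean - lam * t ** 2 / 2) < 1e-5
assert abs(var - lam * t ** 3 / 3) < 1e-3
```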

The distribution of $Y(t)$ has mass at zero with

$$P(Y(t) = 0) = e^{-\lambda t}.$$

The density function $f_{Y(t)}$ of the continuous part of $Y(t)$ can be obtained by inverting the Laplace transform in (3.3). Note that we can express (3.3) as

$$E(e^{-\alpha Y(t)}) = e^{-\lambda t}\sum_{n=0}^\infty\frac{\lambda^n(1 - e^{-\alpha t})^n}{n!\,\alpha^n} = e^{-\lambda t}\sum_{n=0}^\infty\frac{\lambda^n}{n!}\sum_{k=0}^n(-1)^k\binom{n}{k}\frac{e^{-kt\alpha}}{\alpha^n}.$$

Inverting this transform we obtain, for $x > 0$,

$$f_{Y(t)}(x) = e^{-\lambda t}\sum_{n=1}^\infty\frac{\lambda^n}{n!}\Big[\frac{x^{n-1}}{(n-1)!}1_{(0,\infty)}(x) + \sum_{k=1}^n(-1)^k\binom{n}{k}\frac{(x - kt)^{n-1}}{(n-1)!}1_{(kt,\infty)}(x)\Big]$$
$$= e^{-\lambda t}\sum_{n=1}^\infty\frac{\lambda^n x^{n-1}}{n!\,(n-1)!}1_{(0,\infty)}(x) + \lambda e^{-\lambda t}\sum_{k=1}^\infty\frac{(-1)^k}{k!}\sum_{n=k}^\infty\frac{[\lambda(x - kt)]^{n-1}}{(n-1)!\,(n-k)!}1_{(kt,\infty)}(x)$$
$$= \frac{\sqrt{\lambda}}{\sqrt{x}}e^{-\lambda t}I_1(2\sqrt{\lambda x})1_{(0,\infty)}(x) + \lambda e^{-\lambda t}\sum_{k=1}^\infty\frac{(-1)^k}{k!}[\lambda(x - kt)]^{\frac{1}{2}(k-1)}I_{k-1}\big(2\sqrt{\lambda(x - kt)}\big)1_{(kt,\infty)}(x)$$

where $I_k(x)$ is the modified Bessel function of the first kind, i.e.,

$$I_k(x) = (x/2)^k\sum_{m=0}^\infty\frac{(x^2/4)^m}{m!\,\Gamma(k + m + 1)},$$

see Gradshteyn and Ryzhik [16]. The graphs of the pdf of $Y(t)$ for $\lambda = 2$, $t = 1, 2, 3$ and $t = 10$ can be seen in Figure 3.2.

For large $t$, the distribution of $Y(t)$ can be approximated by a normal distribution having mean $\frac{1}{2}\lambda t^2$ and variance $\frac{1}{3}\lambda t^3$. To prove this we will consider the characteristic function of the normalized $Y(t)$. Firstly, note that

$$E\Big[\exp\Big(-i\alpha\,\frac{Y(t) - \frac{1}{2}\lambda t^2}{\sqrt{\frac{1}{3}\lambda t^3}}\Big)\Big] = e^{\frac{1}{2}i\alpha\sqrt{3\lambda t}}\,E\Big(e^{-\frac{i\alpha\sqrt{3}}{t\sqrt{\lambda t}}\,Y(t)}\Big).$$

Figure 3.2: The graphs of the pdf of Y(t) for λ = 2: (a) t = 1, (b) t = 2, (c) t = 3, (d) t = 10.

Using (3.3) with $\alpha$ replaced by $\frac{i\alpha\sqrt{3}}{t\sqrt{\lambda t}}$ and an expansion we obtain

$$E\Big(e^{-\frac{i\alpha\sqrt{3}}{t\sqrt{\lambda t}}\,Y(t)}\Big) = \exp\Big\{\frac{\lambda t\sqrt{\lambda t}}{i\alpha\sqrt{3}}\Big[1 - \frac{i\alpha\sqrt{3}}{\sqrt{\lambda t}} - e^{-\frac{i\alpha\sqrt{3}}{\sqrt{\lambda t}}}\Big]\Big\}$$
$$= \exp\Big\{\frac{\lambda t\sqrt{\lambda t}}{i\alpha\sqrt{3}}\Big[\frac{3\alpha^2}{2\lambda t} - \frac{i\alpha^3\sqrt{3}}{2\lambda t\sqrt{\lambda t}} + o(t^{-3/2})\Big]\Big\}$$
$$= \exp\Big\{-\frac{1}{2}i\alpha\sqrt{3\lambda t} - \frac{1}{2}\alpha^2 + \frac{\lambda t\sqrt{\lambda t}}{i\alpha\sqrt{3}}\,o(t^{-3/2})\Big\}$$

as $t \to \infty$. It follows that

$$E\Big[\exp\Big(-i\alpha\,\frac{Y(t) - \frac{1}{2}\lambda t^2}{\sqrt{\frac{1}{3}\lambda t^3}}\Big)\Big] \longrightarrow e^{-\frac{1}{2}\alpha^2}$$

as $t \to \infty$,

which is the characteristic function of the standard normal distribution.

Now consider the case where $(N(t))$ is a non-homogeneous Poisson process with intensity measure $\nu$. Given $N(t) = n$, the arrival times $S_i$, $i = 1, 2, \ldots, n$, have the same distribution as the order statistics of $n$ iid random variables having the common cdf

$$G(x) = \begin{cases}\dfrac{\nu([0,x])}{\nu([0,t])}, & x \le t,\\[1ex] 1, & x > t.\end{cases}$$

In this case the Laplace transform of $Z(t)$ is given by

$$E\big(e^{-\alpha Z(t)}\big) = \sum_{n=0}^\infty E\big(e^{-\alpha\sum_{i=1}^n S_i}\mid N(t) = n\big)\,P(N(t) = n) = \sum_{n=0}^\infty\Big[\frac{\int_0^t e^{-\alpha x}\,d\nu([0,x])}{\nu([0,t])}\Big]^n\frac{\nu([0,t])^n}{n!}e^{-\nu([0,t])}$$
$$= \exp\Big\{\int_0^t e^{-\alpha x}\,d\nu([0,x])\Big\}e^{-\nu([0,t])} = \exp\Big\{\int_0^t\big[e^{-\alpha x} - 1\big]\,d\nu([0,x])\Big\}. \tag{3.4}$$

From this Laplace transform we deduce that

$$E[Z(t)] = \int_0^t x\,d\nu([0,x]) \qquad\text{and}\qquad \mathrm{Var}[Z(t)] = \int_0^t x^2\,d\nu([0,x]).$$

Similarly, we can prove that

$$E(e^{-\alpha Y(t)}) = \exp\Big\{\int_0^t\big[e^{-\alpha(t-x)} - 1\big]\,d\nu([0,x])\Big\}, \tag{3.5}$$

$$E[Y(t)] = \int_0^t(t - x)\,d\nu([0,x]), \qquad \mathrm{Var}[Y(t)] = \int_0^t(t - x)^2\,d\nu([0,x]).$$

Note that in general $Y(t)$ has a different distribution from $Z(t)$ when $(N(t))$ is a non-homogeneous Poisson process.

Next we consider the distributions of $Y(t)$ and $Z(t)$ when $(N(t))$ is a Cox process. A Cox process is a generalization of a Poisson process: the intensity measure $\nu$ of the Poisson process is chosen randomly according to some probability distribution $\Pi$. So if $(N(t))$ is a Cox process, then from (3.4) and (3.5) we obtain

$$E\big(e^{-\alpha Z(t)}\big) = \int\exp\Big\{-\int_0^t\big[1 - e^{-\alpha x}\big]\,d\nu([0,x])\Big\}\,\Pi(d\nu)$$

and

$$E\big(e^{-\alpha Y(t)}\big) = \int\exp\Big\{-\int_0^t\big[1 - e^{-\alpha(t-x)}\big]\,d\nu([0,x])\Big\}\,\Pi(d\nu).$$

As an example let the intensity measure $\nu$ satisfy $\nu([0,t]) = \Lambda t$ for some positive random variable $\Lambda$. If $\Lambda$ is exponentially distributed with parameter $\eta$, then

$$E\big(e^{-\alpha Y(t)}\big) = E\big(e^{-\alpha Z(t)}\big) = \frac{\alpha\eta}{\alpha(\eta + t) - (1 - e^{-\alpha t})}.$$
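This example can be checked by numerically mixing the homogeneous-Poisson transform (3.3) over $\Lambda$ (a sketch; the values of $\eta$, $\alpha$ and $t$ are arbitrary):

```python
# Sketch: int_0^inf exp(lam(1 - alpha t - e^{-alpha t})/alpha) * eta e^{-eta lam} dlam
# should equal alpha*eta / (alpha*(eta + t) - (1 - e^{-alpha t})).
import math
import numpy as np

eta, alpha, t = 1.2, 0.8, 2.0
lam = np.linspace(0.0, 80.0, 400001)
integrand = np.exp(lam * (1 - alpha * t - math.exp(-alpha * t)) / alpha) * eta * np.exp(-eta * lam)
dl = lam[1] - lam[0]
numeric = dl * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
closed = alpha * eta / (alpha * (eta + t) - (1 - math.exp(-alpha * t)))
assert abs(numeric - closed) < 1e-6
```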

3.3 (N(t)) a renewal process

In this section we will consider the distributions of the processes $(Y(t))$ and $(Z(t))$ defined in (3.1) and (3.2) for the case that $(N(t))$ is a renewal process. Let $X_n = S_n - S_{n-1}$, $n \ge 1$, be the inter-arrival times of the renewal process. Note that $(X_n, n \ge 1)$ is an iid sequence of strictly positive random variables. Let $F$ denote the cdf of $X_1$. As usual we will denote the Laplace-Stieltjes transform of $F$ by $F^*$. First we consider the process $(Z(t))$. Obviously we can express $Z(t)$ as

$$Z(t) = \sum_{i=1}^{N(t)}[N(t) + 1 - i]X_i. \tag{3.6}$$

We will use point processes to derive the distribution of $Z(t)$. Let $(\Omega, \mathcal{F}, P)$ be the probability space on which the iid sequence $(X_n)$ is defined, together with an iid sequence $(U_n, n \ge 1)$ of exponentially distributed random variables with parameter 1, such that the sequences $(X_n)$ and $(U_n)$ are independent. Let $(T_n, n \ge 1)$ be the sequence of partial sums of the variables $U_n$. Then the map

$$\Phi : \omega \mapsto \sum_{n=1}^\infty \delta_{(T_n(\omega), X_n(\omega))}, \tag{3.7}$$

where $\delta_{(x,y)}$ is the Dirac measure in $(x, y)$, defines a Poisson point process on $E = [0,\infty) \times [0,\infty)$ with intensity measure $\nu(dt\,dx) = dt\,dF(x)$. Let $M_p(E)$ be the set of all point measures on $E$. We will denote the distribution of $\Phi$ by $P_\nu$, i.e., $P_\nu = P \circ \Phi^{-1}$.


Define for $t \ge 0$ the functional $A(t)$ on $M_p(E)$ by

$$A(t)(\mu) = \int_E 1_{[0,t)}(s)\,x\,\mu(ds\,dx).$$

In the sequel we write $A(t, \mu) = A(t)(\mu)$. Define also for $t \ge 0$ the functional $Z(t)$ on $M_p(E)$ by

$$Z(t)(\mu) = \int_E\int_E 1_{[0,x)}(t - A(s,\mu))\,\mu([r,s)\times[0,\infty))\,u\,1_{[0,s)}(r)\,\mu(dr\,du)\,\mu(ds\,dx).$$

Lemma 3.3.1 With probability 1,

$$Z(t) = Z(t)(\Phi).$$

Proof: Let $\omega \in \Omega$. Then

$$Z(t)(\Phi(\omega)) = \sum_{n=1}^\infty 1_{[0,X_n(\omega))}\big(t - A(T_n(\omega), \Phi(\omega))\big)\sum_{i=1}^\infty\Phi(\omega)\big([T_i(\omega), T_n(\omega))\times[0,\infty)\big)X_i(\omega)\,1_{[0,T_n(\omega))}(T_i(\omega))$$
$$= \sum_{i=1}^\infty\Phi(\omega)\big([T_i(\omega), T_{N(t,\omega)+1}(\omega))\times[0,\infty)\big)X_i(\omega)\,1_{[0,T_{N(t,\omega)+1}(\omega))}(T_i(\omega)) = \sum_{i=1}^{N(t,\omega)}[N(t,\omega) + 1 - i]X_i(\omega). \qquad \Box$$

Theorem 3.3.1 Let $(X_n, n \ge 1)$ be an iid sequence of strictly positive random variables with common distribution function $F$. Let $(S_n, n \ge 0)$ be the sequence of partial sums of the variables $X_n$ and $(N(t), t \ge 0)$ the corresponding renewal process: $N(t) = \sup\{n \ge 0 : S_n \le t\}$. Let

$$Z(t) = \sum_{i=1}^{N(t)}[N(t) + 1 - i]X_i.$$

Then for $\alpha, \beta > 0$

$$\int_0^\infty E(e^{-\alpha Z(t)})e^{-\beta t}\,dt = \frac{1}{\beta}\big[1 - F^*(\beta)\big]\sum_{n=0}^\infty\prod_{i=1}^n F^*(\alpha[n + 1 - i] + \beta) \tag{3.8}$$

(with the usual convention that the empty product equals 1).
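A numerical check of (3.8) (a sketch; exponential inter-arrival times are assumed as a test case, for which $Z(t)$ has the same transform (3.3) as $Y(t)$ by Section 3.2):

```python
# Sketch: for F exponential(lam), compare the truncated series in (3.8) with the
# numerical Laplace transform in t of exp(lam(1 - alpha t - e^{-alpha t})/alpha).
import numpy as np

lam, alpha, beta = 1.0, 0.7, 0.9
Fs = lambda s: lam / (lam + s)               # F*(s)

series = 0.0
for n in range(0, 80):                       # the products decay super-exponentially
    prod = 1.0
    for i in range(1, n + 1):
        prod *= Fs(alpha * (n + 1 - i) + beta)
    series += prod
series *= (1 - Fs(beta)) / beta

t = np.linspace(0.0, 60.0, 600001)
f = np.exp(lam * (1 - alpha * t - np.exp(-alpha * t)) / alpha) * np.exp(-beta * t)
dt = t[1] - t[0]
numeric = dt * (f.sum() - 0.5 * (f[0] + f[-1]))
assert abs(series - numeric) < 1e-6
```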


Proof: By Lemma 3.3.1,

$$E(e^{-\alpha Z(t)}) = \int_{M_p(E)} e^{-\alpha Z(t)(\mu)}\,P_\nu(d\mu)$$
$$= \int_{M_p(E)}\exp\Big\{-\alpha\int_E\int_E 1_{[0,x)}(t - A(s,\mu))\,\mu([r,s)\times[0,\infty))\,u\,1_{[0,s)}(r)\,\mu(dr\,du)\,\mu(ds\,dx)\Big\}\,P_\nu(d\mu)$$
$$= \int_{M_p(E)}\int_E 1_{[0,x)}(t - A(s,\mu))\exp\Big\{-\alpha\int_E\mu([r,s)\times[0,\infty))\,u\,1_{[0,s)}(r)\,\mu(dr\,du)\Big\}\,\mu(ds\,dx)\,P_\nu(d\mu),$$

where the last equality holds since, $P_\nu$-almost surely, exactly one atom $(s, x)$ of $\mu$ satisfies $t - A(s, \mu) \in [0, x)$.

Applying the Palm formula for Poisson point processes, see Theorem 1.2.4, we obtain

$$E(e^{-\alpha Z(t)}) = \int_0^\infty\!\!\int_0^\infty\!\!\int_{M_p(E)} 1_{[0,x)}\big(t - A(s, \mu + \delta_{(s,x)})\big)\exp\Big\{-\int_E\alpha(\mu + \delta_{(s,x)})([r,s)\times[0,\infty))\,u\,1_{[0,s)}(r)\,(\mu + \delta_{(s,x)})(dr\,du)\Big\}\,P_\nu(d\mu)\,dF(x)\,ds$$
$$= \int_0^\infty\!\!\int_0^\infty\!\!\int_{M_p(E)} 1_{[0,x)}(t - A(s,\mu))\exp\Big\{-\int_E\alpha\mu([r,s)\times[0,\infty))\,u\,1_{[0,s)}(r)\,\mu(dr\,du)\Big\}\,P_\nu(d\mu)\,dF(x)\,ds.$$

Using Fubini's theorem and a substitution we obtain

$$\int_0^\infty E(e^{-\alpha Z(t)})e^{-\beta t}\,dt = \int_0^\infty\!\!\int_0^\infty\!\!\int_{M_p(E)}\int_0^\infty 1_{[0,x)}(t - A(s,\mu))\exp\Big\{-\int_E\alpha\mu([r,s)\times[0,\infty))\,u\,1_{[0,s)}(r)\,\mu(dr\,du)\Big\}e^{-\beta t}\,dt\,P_\nu(d\mu)\,dF(x)\,ds$$
$$= \frac{1}{\beta}\big[1 - F^*(\beta)\big]\int_0^\infty\!\!\int_{M_p(E)}\exp\Big\{-\int_E\big[\alpha\mu([r,s)\times[0,\infty)) + \beta\big]u\,1_{[0,s)}(r)\,\mu(dr\,du)\Big\}\,P_\nu(d\mu)\,ds.$$


The integral with respect to $P_\nu$ can be written as a sum of integrals over the sets $B_n := \{\mu \in M_p(E) : \mu([0,s)\times[0,\infty)) = n\}$, $n = 0, 1, 2, \ldots$. Fix a value of $n$ and let $\mu \in M_p(E)$ be such that $\mu([0,s)\times[0,\infty)) = n$ and $\mathrm{supp}(\mu) = ((t_i, x_i))_{i=1}^\infty$. So $t_n < s \le t_{n+1}$. For such a measure $\mu$ the integrand with respect to $P_\nu$ can be written as

$$\int_E\big[\alpha\mu([r,s)\times[0,\infty)) + \beta\big]u\,1_{[0,s)}(r)\,\mu(dr\,du) = \sum_{i=1}^\infty\big[\alpha\mu([t_i,s)\times[0,\infty)) + \beta\big]x_i\,1_{[0,s)}(t_i) = \sum_{i=1}^n(\alpha[n + 1 - i] + \beta)x_i.$$

Now the measure $P_\nu$ is the image measure of $P$ under the map $\Phi$, see (3.7). Expressing the integral with respect to $P_\nu$ over $B_n$ as an integral with respect to $P$ over the subset $A_n := \{\omega \in \Omega : T_n(\omega) < s \le T_{n+1}(\omega)\}$ of $\Omega$, and using the independence of $(T_n)$ and $(X_n)$, we obtain

$$\int_{B_n}e^{-\int_E[\alpha\mu([r,s)\times[0,\infty)) + \beta]u1_{[0,s)}(r)\mu(dr\,du)}\,P_\nu(d\mu) = \int_{A_n}\exp\Big\{-\sum_{i=1}^n(\alpha[n + 1 - i] + \beta)X_i(\omega)\Big\}\,P(d\omega)$$
$$= E\Big[\exp\Big\{-\sum_{i=1}^n(\alpha[n + 1 - i] + \beta)X_i\Big\}\Big]P(A_n) = \prod_{i=1}^n E\Big[e^{-(\alpha[n+1-i] + \beta)X_i}\Big]\frac{s^n}{n!}e^{-s} = \prod_{i=1}^n F^*(\alpha[n + 1 - i] + \beta)\,\frac{s^n}{n!}e^{-s}.$$

Hence

$$\int_{M_p(E)}e^{-\int_E[\alpha\mu([r,s)\times[0,\infty)) + \beta]u1_{[0,s)}(r)\mu(dr\,du)}\,P_\nu(d\mu) = \sum_{n=0}^\infty\prod_{i=1}^n F^*(\alpha[n + 1 - i] + \beta)\,\frac{s^n}{n!}e^{-s}.$$

Since for each $n$, $\int_0^\infty\frac{s^n}{n!}e^{-s}\,ds = 1$, the theorem follows. $\Box$

The Laplace transform of the mean of $Z(t)$ can be derived from (3.8) as follows. Let $\varphi(\alpha) = E\big(e^{-\alpha Z(t)}\big)$ and define

$$W_{n,i}(\alpha) = F^*(\alpha[n + 1 - i] + \beta).$$


Then

$$V_n(\alpha) = \prod_{i=1}^n F^*(\alpha[n + 1 - i] + \beta) = \prod_{i=1}^n W_{n,i}(\alpha).$$

Hence

$$V_n'(\alpha) = \sum_{j=1}^n W_{n,j}'(\alpha)\prod_{i=1,\,i\ne j}^n W_{n,i}(\alpha), \tag{3.9}$$

where

$$W_{n,j}'(\alpha) = -(n + 1 - j)\int_0^\infty x\,e^{-(\alpha[n+1-j] + \beta)x}\,dF(x)$$

if $E\big[X_1 e^{-\beta X_1}\big] < \infty$ for some $\beta > 0$. Since $W_{n,i}(0) = F^*(\beta)$ and

$$W_{n,j}'(0) = -(n + 1 - j)\int_0^\infty x\,e^{-\beta x}\,dF(x),$$

it follows that

$$V_n'(0) = \sum_{j=1}^n\Big[-(n + 1 - j)\int_0^\infty x\,e^{-\beta x}\,dF(x)\Big]F^*(\beta)^{n-1} = -\int_0^\infty x\,e^{-\beta x}\,dF(x)\,F^*(\beta)^{n-1}\sum_{j=1}^n(n + 1 - j) = -\frac{1}{2}\int_0^\infty x\,e^{-\beta x}\,dF(x)\,F^*(\beta)^{n-1}\,n(n + 1).$$

Hence

$$\int_0^\infty E[Z(t)]e^{-\beta t}\,dt = -\frac{d}{d\alpha}\Big|_{\alpha=0}\int_0^\infty E(e^{-\alpha Z(t)})e^{-\beta t}\,dt = \frac{1}{2\beta}\big[1 - F^*(\beta)\big]\int_0^\infty x\,e^{-\beta x}\,dF(x)\sum_{n=0}^\infty F^*(\beta)^{n-1}n(n + 1)$$
$$= \frac{1}{\beta}\big[1 - F^*(\beta)\big]\int_0^\infty x\,e^{-\beta x}\,dF(x)\,\frac{1}{[1 - F^*(\beta)]^3} = \frac{\int_0^\infty x\,e^{-\beta x}\,dF(x)}{\beta[1 - F^*(\beta)]^2},$$

where we used $\sum_{n=1}^\infty n(n + 1)x^{n-1} = 2/(1 - x)^3$.

Thus we have the following proposition:


Proposition 3.3.1 Under the same assumptions as Theorem 3.3.1, and if $E\big[X_1 e^{-\beta X_1}\big] < \infty$ for some $\beta > 0$, then

$$\int_0^\infty E[Z(t)]e^{-\beta t}\,dt = \frac{\int_0^\infty x\,e^{-\beta x}\,dF(x)}{\beta[1 - F^*(\beta)]^2}. \tag{3.10}$$

Now we will derive the Laplace transform of the second moment of $Z(t)$. From (3.9) we obtain

$$V_n''(\alpha) = \sum_{j=1}^n W_{n,j}''(\alpha)\prod_{i=1,\,i\ne j}^n W_{n,i}(\alpha) + \sum_{j=1}^n W_{n,j}'(\alpha)\sum_{k=1,\,k\ne j}^n W_{n,k}'(\alpha)\prod_{i=1,\,i\ne j,\,i\ne k}^n W_{n,i}(\alpha),$$

where

where

W ′′n,j(α) = (n + 1− j)2

∫ ∞

0

x2e−(α[n+1−j]+β)xdF (x)

if E[X2

1e−βX1]

< ∞ for some β > 0,. Since Wn,i(0) = F ∗(β), W ′n,j(0) =

−(n+1− j)∫∞0

xe−βxdF (x), and W ′′n,j(0) = (n+1− j)2

∫∞0

x2e−βxdF (x), then

$$V_n''(0) = \int_0^\infty x^2 e^{-\beta x}\,dF(x)\,F^*(\beta)^{n-1}\sum_{j=1}^n(n + 1 - j)^2 + \Big[\int_0^\infty x\,e^{-\beta x}\,dF(x)\Big]^2 F^*(\beta)^{n-2}\sum_{j=1}^n(n + 1 - j)\sum_{k=1,\,k\ne j}^n(n + 1 - k).$$


So

$$\int_0^\infty E[Z^2(t)]e^{-\beta t}\,dt = \frac{d^2}{d\alpha^2}\Big|_{\alpha=0}\int_0^\infty E(e^{-\alpha Z(t)})e^{-\beta t}\,dt$$
$$= \frac{1}{\beta}\big[1 - F^*(\beta)\big]\sum_{n=0}^\infty\Big\{\int_0^\infty x^2 e^{-\beta x}\,dF(x)\,F^*(\beta)^{n-1}\sum_{j=1}^n(n + 1 - j)^2 + \Big[\int_0^\infty x\,e^{-\beta x}\,dF(x)\Big]^2 F^*(\beta)^{n-2}\sum_{j=1}^n(n + 1 - j)\sum_{k=1,\,k\ne j}^n(n + 1 - k)\Big\}$$
$$= \frac{1}{\beta}\big[1 - F^*(\beta)\big]\Big(\int_0^\infty x^2 e^{-\beta x}\,dF(x)\,\frac{F^*(\beta) + 1}{[1 - F^*(\beta)]^4} + 2\Big[\int_0^\infty x\,e^{-\beta x}\,dF(x)\Big]^2\frac{2 + F^*(\beta)}{[1 - F^*(\beta)]^5}\Big)$$
$$= \frac{1}{\beta[1 - F^*(\beta)]^3}\Big(\big[F^*(\beta) + 1\big]\int_0^\infty x^2 e^{-\beta x}\,dF(x) + \frac{4 + 2F^*(\beta)}{1 - F^*(\beta)}\Big[\int_0^\infty x\,e^{-\beta x}\,dF(x)\Big]^2\Big).$$

Thus we have the following proposition:

Proposition 3.3.2 Under the same assumptions as Theorem 3.3.1, and if $E\big[X_1^2 e^{-\beta X_1}\big] < \infty$ for some $\beta > 0$,

$$\int_0^\infty E[Z^2(t)]e^{-\beta t}\,dt = \frac{[1 + F^*(\beta)]\int_0^\infty x^2 e^{-\beta x}\,dF(x)}{\beta[1 - F^*(\beta)]^3} + \frac{2[2 + F^*(\beta)]\big[\int_0^\infty x\,e^{-\beta x}\,dF(x)\big]^2}{\beta[1 - F^*(\beta)]^4}. \tag{3.11}$$

Remark 3.3.1 If $X_1$ is exponentially distributed with parameter $\lambda$, then using (3.10) and (3.11) we obtain

$$\int_0^\infty E[Z(t)]e^{-\beta t}\,dt = \frac{\lambda}{\beta^3} \qquad\text{and}\qquad \int_0^\infty E[Z^2(t)]e^{-\beta t}\,dt = \frac{2\lambda[3\lambda + \beta]}{\beta^5}.$$

Inverting these transforms we obtain $E[Z(t)] = \frac{1}{2}\lambda t^2$, $E[Z^2(t)] = \frac{1}{4}\lambda^2 t^4 + \frac{1}{3}\lambda t^3$, and hence $\mathrm{Var}[Z(t)] = \frac{1}{3}\lambda t^3$. These results are the same as those in the previous section.


Now we will consider the marginal distribution of the process $(Y(t))$ when $(N(t))$ is a renewal process. It is easy to see that

$$Y(t) = \sum_{i=1}^{N(t)}(i - 1)X_i + N(t)\big[t - S_{N(t)}\big].$$

Define for $t \ge 0$ the functional $Y(t)$ on $M_p(E)$ by

$$Y(t)(\mu) = \int_E 1_{[0,x)}(t - A(s,\mu))\Big[\int_E\mu([0,r)\times[0,\infty))\,u\,1_{[0,s)}(r)\,\mu(dr\,du) + \mu([0,s)\times[0,\infty))\,(t - A(s,\mu))\Big]\,\mu(ds\,dx).$$

Then, as in Lemma 3.3.1, with probability 1, $Y(t) = Y(t)(\Phi)$. The following theorem can be proved using the same arguments as for $Z(t)$. We omit the proof.

Theorem 3.3.2 Let $(X_n, n \ge 1)$ be an iid sequence of strictly positive random variables with common distribution function $F$. Let $(S_n, n \ge 0)$ be the sequence of partial sums of the variables $X_n$ and $(N(t), t \ge 0)$ the corresponding renewal process: $N(t) = \sup\{n \ge 0 : S_n \le t\}$. Let

$$Y(t) = \sum_{i=1}^{N(t)}(i - 1)X_i + N(t)\big[t - S_{N(t)}\big].$$

Then

(a) $\displaystyle\int_0^\infty E(e^{-\alpha Y(t)})e^{-\beta t}\,dt = \sum_{n=0}^\infty\frac{1 - F^*(\alpha n + \beta)}{\alpha n + \beta}\prod_{i=1}^n F^*(\alpha[i - 1] + \beta)$,

(b) $\displaystyle\int_0^\infty E[Y(t)]e^{-\beta t}\,dt = \frac{F^*(\beta)}{\beta^2[1 - F^*(\beta)]}$,

(c) If $E\big[X_1 e^{-\beta X_1}\big] < \infty$ for some $\beta > 0$, then

$$\int_0^\infty E[Y^2(t)]e^{-\beta t}\,dt = \frac{2F^*(\beta)\big[1 - F^*(\beta)^2 + \beta\int_0^\infty t\,e^{-\beta t}\,dF(t)\big]}{\beta^3[1 - F^*(\beta)]^3}.$$

3.4 Asymptotic properties

In this section we will discuss asymptotic properties of $(Y(t))$ and $(Z(t))$ as defined in Section 3.1, for the case that $(N(t))$ is a renewal process having inter-arrival times $X_n$ with common cdf $F$. We first consider asymptotic properties of the mean of $Z(t)$.


Theorem 3.4.1 If $\mu_1 = E[X_1] < \infty$, then as $t \to \infty$

$$E[Z(t)] \sim \frac{t^2}{2\mu_1}.$$

Proof: In Section 3.3 we proved that the Laplace transform of $E[Z(t)]$ is given by

$$\int_0^\infty E[Z(t)]e^{-\beta t}\,dt = \frac{\int_0^\infty x\,e^{-\beta x}\,dF(x)}{\beta[1 - F^*(\beta)]^2}.$$

Note that

$$\int_0^\infty e^{-\beta t}\,dE[Z(t)] = \lim_{t\to\infty}E[Z(t)]e^{-\beta t} + \beta\int_0^\infty E[Z(t)]e^{-\beta t}\,dt.$$

Since $0 \le Z(t) \le tN(t)$, where $N(t)$ denotes the renewal process corresponding to the sequence $(X_n)$, it follows that

$$0 \le \lim_{t\to\infty}E[Z(t)]e^{-\beta t} \le \lim_{t\to\infty}tE[N(t)]e^{-\beta t} = \lim_{t\to\infty}t\Big[\frac{t}{\mu_1} + o(t)\Big]e^{-\beta t} = 0.$$

This implies

$$\int_0^\infty e^{-\beta t}\,dE[Z(t)] = \beta\int_0^\infty E[Z(t)]e^{-\beta t}\,dt = \frac{\int_0^\infty x\,e^{-\beta x}\,dF(x)}{[1 - F^*(\beta)]^2}.$$

By dominated convergence it is easy to see that

$$\int_0^\infty x\,e^{-\beta x}\,dF(x) = \mu_1 + o(1) \qquad\text{and}\qquad F^*(\beta) = 1 - \mu_1\beta + o(\beta)$$

as $\beta \to 0$. Hence

$$\int_0^\infty e^{-\beta t}\,dE[Z(t)] \sim \frac{1}{\mu_1\beta^2} \qquad\text{as } \beta \to 0.$$

Obviously $E[Z(t)]$ is non-decreasing, so we can apply Theorem 2.4.1 (Tauberian theorem) with $\gamma = 2$ to get the result. $\Box$


Next we will derive a stronger version of the asymptotic form of $E[Z(t)]$. We will assume that the inter-arrival times $X_n$ are continuous random variables. We also assume that the Laplace transform of $E[Z(t)]$ given in Proposition 3.3.1 is a rational function, i.e.,

$$\frac{\int_0^\infty x\,e^{-\beta x}\,dF(x)}{\beta[1 - F^*(\beta)]^2} \tag{3.12}$$

is a rational function of $\beta$. This situation holds, for example, when $X_1$ has a gamma distribution.

Since the Laplace transform of $E[Z(t)]$ is a rational function, we can split (3.12) into partial fractions. To do this, first observe that $F^*(0) = 1$ and $F^{*\prime}(0) = -\mu_1 < 0$. So we conclude that the equation

1− F ∗(β) = 0 (3.13)

has a simple root at β = 0. Hence the partial fraction expansion of (3.12)contains terms proportional to 1/β2 and 1/β. For now, we will consider β asa complex variable and denote its real part by <(β). Assuming µi = E(Xi

1),i = 1, 2, 3 are finite we can express the Laplace transform of E[Z(t)] as

∫ ∞

0

E[Z(t)]e−βtdt =1

µ1β3+

1µ2

1

(µ3

6+

µ22

4µ1

)1β

+ r(β). (3.14)

where r(β) is a rational function of β with non-zero poles at β1, β2, .... It followsfrom (3.13) that for every j, <(βj) < 0. Also, since F is continuous, thenthere can be no purely imaginary roots of (3.13). There are some other remarksabout the roots βj . If βj is a simple real root then it gives a term proportionalto 1/(β−βj) in the partial fraction of r(β), inverting into eβjt. If βj is a multipleroot then it leads to a term proportional to treβjt. Since <(βj) < 0 these termstend to zero exponentially fast as t → ∞. Note also that the roots βj mustoccur in conjugate pairs. Otherwise E[Z(t)] contains a term which is a complexnumber. So from (3.14) we obtain

E[Z(t)] =1

2µ1t2 +

1µ2

1

(µ3

6+

µ22

4µ1

)+ o(1) as t →∞. (3.15)

In (3.15) the term o(1) tends to 0 exponentially fast as t → ∞.

Similar arguments can be used to obtain the asymptotic variance of Z(t). Assume that µ4 = E[X1⁴] < ∞. If the Laplace transform of E[Z²(t)] in (3.11) is a rational function of β, then we can prove that

∫₀^∞ E[Z²(t)]e^{−βt}dt = 6/(µ1²β⁵) + 2(µ2 − µ1²)/(µ1³β⁴) + C/β + r(β),

where C is a constant depending on µi, i = 1, 2, 3, 4, and r(β) is a rational function of β having non-zero poles. Inverting this transform we obtain

E[Z²(t)] = t⁴/(4µ1²) + (µ2 − µ1²)t³/(3µ1³) + C + o(1) as t → ∞.   (3.16)

It follows from (3.15) and (3.16) that

Var[Z(t)]/t³ → (µ2 − µ1²)/(3µ1³) as t → ∞.

For the process (Y(t)) we have the following asymptotic properties. The proof is quite similar to that for (Z(t)), and is therefore omitted.

Proposition 3.4.1 Let N(t) be a renewal process with inter-arrival times Xn. Let µi = E[X1^i]. Let Y(t) = ∫₀ᵗ N(s)ds be the corresponding integrated renewal process. Then

(a) If µ1 < ∞ then

E[Y(t)] ∼ t²/(2µ1) as t → ∞.

(b) If µi, i = 1, 2, 3, are finite and if the Laplace transform of E[Y(t)] stated in Theorem 3.3.2 is a rational function of β, then

E[Y(t)] = t²/(2µ1) + (µ2/(2µ1²) − 1)t + (µ2²/(4µ1³) − µ3/(6µ1²)) + o(1) as t → ∞.

(c) If µi, i = 1, ..., 5, are finite and if the Laplace transform of E[Y²(t)] stated in Theorem 3.3.2 is a rational function of β, then

E[Y²(t)] = t⁴/(4µ1²) + ((5µ2 − 8µ1²)/(6µ1³))t³ + (1/µ1²)(µ1² − 3µ2/2 + 3µ2²/(2µ1²) − 2µ3/(3µ1))t²
+ (1/µ1²)(2µ3/3 − µ2²/µ1 + µ4/(4µ1) − 3µ2µ3/(2µ1²) + 3µ2³/(2µ1³))t + C + o(1) as t → ∞,

where C is a constant depending on µi, i = 1, 2, 3, 4, 5.

(d) Under the assumptions of (c),

Var[Y(t)]/t³ → (µ2 − µ1²)/(3µ1³) as t → ∞.


3.5 An application

Suppose that travellers arrive at a train depot according to a renewal process. Suppose that a train just departed at time 0, and that there were no travellers left. If the next train departs at some time t ≥ 0, then the sum of the waiting times of all the travellers arriving in the time interval [0, t], i.e., Σ_{i=1}^{N(t)}(t − Si), is an integrated renewal process. In Ross [36], Example 2.3(A), the special case of this process where the arrival process of the travellers is a homogeneous Poisson process with rate λ has been considered. He showed that the expected sum of the waiting times of the travellers arriving in the time interval [0, t] is equal to λt²/2. The calculation is based on conditioning on the number of arrivals in the interval [0, t]. Using the results of the preceding sections we give more information about the process in this example.

Let Y(t) = Σ_{i=1}^{N(t)}(t − Si). As stated in Section 3.2, the variance of Y(t) is equal to λt³/3, the distribution of Y(t) has mass at 0 with P(Y(t) = 0) = e^{−λt}, and the continuous part of Y(t) has a density function, for x > 0,

f_{Y(t)}(x) = (√λ/√x) e^{−λt} I₁(2√(λx)) 1_{(0,∞)}(x)
+ λe^{−λt} Σ_{k=1}^∞ ((−1)^k/k!) (λ(x − kt))^{(k−1)/2} I_{k−1}(2√(λ(x − kt))) 1_{(kt,∞)}(x),

where I_k(x) is the modified Bessel function of the first kind. The graphs of the mean and the variance of Y(t) for λ = 0.5 can be seen in Figures 3.3 and 3.4 (solid line). The graphs of the density function of Y(t) for λ = 0.5, t = 3 and t = 10 can be seen in Figures 3.5 and 3.6 (solid line).
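The mean, variance and atom at 0 in the Poisson case are easy to probe by simulation. The following sketch is ours (NumPy assumed; λ = 0.5 and t = 10 are arbitrary choices):

```python
# Monte Carlo check of the Poisson case: Y(t) = sum of waiting times t - S_i.
# Targets: E[Y(t)] = lam*t^2/2, Var[Y(t)] = lam*t^3/3, P(Y(t)=0) = exp(-lam*t).
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_paths = 0.5, 10.0, 50_000

ys = np.empty(n_paths)
for k in range(n_paths):
    total, s = 0.0, rng.exponential(1/lam)   # s = arrival time S_1
    while s <= t:
        total += t - s                       # waiting time of this traveller
        s += rng.exponential(1/lam)
    ys[k] = total

print(ys.mean())          # should be close to lam*t^2/2 = 25
print(ys.var())           # should be close to lam*t^3/3 = 166.67
print((ys == 0).mean())   # should be close to exp(-lam*t) = 0.0067
```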

Now suppose that the inter-arrival times Xn have a common Gamma(γ, 2) distribution with density function

f(x; γ, 2) = γ²xe^{−γx}, γ > 0, x ≥ 0.

Note that if γ = 2λ then these inter-arrival times have the same mean as an exponential random variable with parameter λ. Using Theorem 3.3.2 we obtain

∫₀^∞ E[Y(t)]e^{−βt}dt = γ²/(β³(β + 2γ)),

and

∫₀^∞ E[Y²(t)]e^{−βt}dt = 2γ²(β³ + 4γβ² + 8γ²β + 6γ³)/(β⁵(β + 2γ)³).

Inverting these transforms we obtain

E[Y(t)] = (γ/4)t² − (1/4)t + 1/(8γ) − (1/(8γ))e^{−2γt}


Figure 3.3: Graphs of the mean of Y(t) when the (Xn) are iid exp(0.5) (solid line), uniform(0,4) (dotted line) and Gamma(1,2) (dashed line).

Figure 3.4: Graphs of the variance of Y(t) when the (Xn) are iid exp(0.5) (solid line), uniform(0,4) (dotted line) and Gamma(1,2) (dashed line).

and

E[Y²(t)] = (γ²/16)t⁴ − (γ/24)t³ + (1/8)t² − (1/(8γ))t + 1/(32γ²) + ((1/16)t² + (1/(16γ))t − 1/(32γ²))e^{−2γt}.

Hence the variance of Y(t) is given by


Figure 3.5: Graphs of the probability density function of Y(3) when the (Xn) are iid exp(0.5) (solid line) and Gamma(1,2) (dashed line).

Figure 3.6: Graphs of the probability density function of Y(10) when the (Xn) are iid exp(0.5) (solid line) and Gamma(1,2) (dashed line).

Var[Y(t)] = (γ/12)t³ − (1/(16γ))t + 1/(64γ²) + (1/8)t²e^{−2γt} − (1/(64γ²))e^{−4γt}.

The graphs of the mean and the variance of Y(t) for γ = 1 (so that X1 has the same mean as an exponential random variable with parameter 0.5) can be seen in Figures 3.3 and 3.4 (dashed line).
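The closed form of E[Y(t)] can be checked against its Laplace transform numerically. A hedged sketch (SciPy assumed; the function name and the parameter choices γ = 1, β ∈ {0.7, 1.3} are ours):

```python
# Verify numerically that the closed form of E[Y(t)] for Gamma(gamma,2)
# inter-arrival times has Laplace transform gamma^2/(beta^3*(beta+2*gamma)).
import numpy as np
from scipy.integrate import quad

gamma_ = 1.0

def mean_Y(t):
    # closed form from this section
    return gamma_*t**2/4 - t/4 + (1 - np.exp(-2*gamma_*t))/(8*gamma_)

for beta in (0.7, 1.3):
    lhs, _ = quad(lambda t: mean_Y(t)*np.exp(-beta*t), 0, np.inf)
    rhs = gamma_**2/(beta**3*(beta + 2*gamma_))
    assert abs(lhs - rhs) < 1e-6
```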


The double Laplace transform of Y(t) when the inter-arrival times Xn have a common Gamma(γ, 2) distribution is given by

∫₀^∞ E(e^{−αY(t)})e^{−βt}dt = Σ_{n=0}^∞ [(αn + β + γ)² − γ²]/[(αn + β + γ)²(αn + β)] ∏_{i=1}^n γ²/[α(i − 1) + β + γ]².

The pdf of Y(t) can be approximated by first truncating the infinite sum in this transform and then inverting the truncated transform. The graphs of the pdfs of Y(3) and Y(10) for γ = 1 can be seen in Figures 3.5 and 3.6 (dashed line).

If we assume that the inter-arrival times Xn are independent and uniformly distributed on [0, 4] (so that X1 has the same mean as an exponential random variable with parameter 0.5), then the Laplace transforms of the first and second moments of Y(t) are given by

∫₀^∞ E[Y(t)]e^{−βt}dt = (1 − e^{−4β})/(β²[4β − 1 + e^{−4β}]),

and

∫₀^∞ E[Y²(t)]e^{−βt}dt = (1 − e^{−4β})[4β² + β − 1/4 − (4β² + β − 1/2)e^{−4β} − (1/4)e^{−8β}] / (β³[2β − (1/2)(1 − e^{−4β})]³).

Using the numerical inversion of Laplace transforms described in Appendix B we obtain the graphs of E[Y(t)] and Var[Y(t)]; see Figures 3.3 and 3.4 (dotted line). The marginal pdf of Y(t) is more complicated to obtain in this case.
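Numerical Laplace inversion of this kind can be sketched with mpmath's invertlaplace routine (an illustration of the idea only; the algorithm in Appendix B may differ). Inverting the Gamma(γ, 2) mean transform at γ = 1 recovers the closed form obtained earlier in this section:

```python
# Sketch of numerical Laplace inversion (Talbot contour, via mpmath).
# Invert gamma^2/(beta^3*(beta+2*gamma)) for gamma = 1 and compare with the
# closed form E[Y(t)] = t^2/4 - t/4 + (1 - exp(-2t))/8 from this section.
import mpmath as mp

mp.mp.dps = 30                          # working precision

def transform(p):
    return 1/(p**3*(p + 2))             # gamma = 1

for t in (1.0, 3.0, 10.0):
    inv = mp.invertlaplace(transform, t, method='talbot')
    exact = t**2/4 - t/4 + (1 - mp.e**(-2*t))/8
    assert abs(inv - exact) < 1e-10
```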


Chapter 4

Total Downtime of Repairable Systems

4.1 Introduction

Consider a repairable system which is at any time either in operation (up) or under repair (down) after failure. The effectiveness of the system can be measured by the total downtime, i.e., the total amount of time the system is down during a given time interval. An expression for the cumulative distribution function (cdf) of the total downtime in the time interval [0, t] has been derived by several authors using different methods. In Takacs [44] the total probability theorem has been used. The derivation in Muth [26] is based on consideration of the excess time. Finally, in Funaki and Yoshimoto [13] the cdf of the total downtime is derived by a conditioning technique. Srinivasan et al. [37] derived an expression for the probability density function (pdf) of the total uptime of the system in the time interval [0, t]. They also discussed its covariance structure. For longer time intervals, Takacs [44] and Renyi [33] proved that the distribution of the total downtime approaches a normal distribution. Takacs [44] also discussed the asymptotic mean and variance of the total downtime. In all these papers it is assumed that the failure time and the repair time are independent.

We use a different method for the computation of the distribution of the total downtime. We also consider a more general situation in which we allow dependence between the failure time and the repair time. Our derivation is based on a representation of the total downtime as a functional of a Poisson point process.

This chapter is organized as follows. In Section 4.2 we define the total downtime and derive its distribution in a fixed time interval. In Section 4.3 we discuss the system availability, which is closely related to the total downtime.


In Section 4.4 we study the covariance structure of the total downtime in the dependent case. Asymptotic properties of the total downtime in the dependent case are derived in Section 4.5. We give examples in Section 4.6, and finally in Section 4.7 we consider repairable systems consisting of n ≥ 2 independent components.

4.2 Distribution of total downtime

We consider a repairable system which is at any time either in operation (up) or under repair (down) after failure; these states are denoted by 1 and 0 respectively. Suppose that the system starts to operate at time t = 0. Let (Xi) and (Yi), i ≥ 1, denote the times spent in state 1 and state 0 respectively during the ith visit to that state. The random variables Xi and Yi are known as the failure time and the repair time respectively. We assume that the sequence (Xi, Yi) of random vectors is iid with strictly positive components. However, our set-up is more general than that in Takacs [44], Muth [26], Funaki and Yoshimoto [13], Renyi [33], and Srinivasan et al. [37], as we allow Xi and Yi to be dependent.

Let Sn = Σ_{i=1}^n (Xi + Yi), n ≥ 1, S0 = 0, and N(t) = sup{n ≥ 0 : Sn ≤ t}. Then the total downtime D(t) can be expressed (with the usual convention that the empty sum equals 0) as

D(t) = Σ_{i=1}^{N(t)} Yi                if S_{N(t)} ≤ t < S_{N(t)} + X_{N(t)+1},
D(t) = t − Σ_{i=1}^{N(t)+1} Xi          if S_{N(t)} + X_{N(t)+1} ≤ t < S_{N(t)+1}.   (4.1)

Denote the state of the system at time t by Z(t). Then the total downtime D(t) can also be expressed as

D(t) = ∫₀ᵗ 1_{{0}}(Z(s))ds.   (4.2)
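A direct implementation of (4.1) is straightforward; the sketch below (our own plain-Python helper, not from the thesis) walks through the up/down cycles and is checked on a small hand-computed example:

```python
# Total downtime D(t) computed directly from definition (4.1): walk the
# alternating up (x) / down (y) cycles until time t is reached.
def total_downtime(xs, ys, t):
    down, s = 0.0, 0.0                  # accumulated downtime, cycle start S_{i-1}
    for x, y in zip(xs, ys):
        if t < s + x:                   # t falls inside an up period
            return down
        if t < s + x + y:               # t falls inside a down period
            return down + (t - (s + x))
        down += y                       # full cycle completed before t
        s += x + y
    raise ValueError("need more (x, y) pairs to cover time t")

# Hand-checked example: up 2, down 1, up 3, down 4, so the system is
# down on [2,3) and [6,10).
assert total_downtime([2, 3], [1, 4], 4) == 1.0   # down only on [2,3)
assert total_downtime([2, 3], [1, 4], 7) == 2.0   # down on [2,3) and [6,7)
```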

We will assume that Z(t) is right continuous. Throughout this chapter we will use the following notation for cdfs:

F(x) = P(X1 ≤ x),
G(y) = P(Y1 ≤ y),
H(x, y) = P(X1 ≤ x, Y1 ≤ y),
K(w) = P(X1 + Y1 ≤ w).

We denote by Fn and Gn the cdfs of Σ_{i=1}^n Xi and Σ_{i=1}^n Yi, respectively. The Laplace-Stieltjes transforms of a cdf F and a joint cdf H will be denoted by F* and H*, i.e.,

F*(β) = ∫₀^∞ e^{−βx}dF(x)


and

H*(α, β) = ∫₀^∞ ∫₀^∞ e^{−(αx+βy)}dH(x, y).

We will use point processes for the derivation of the distribution of the total downtime D(t). Let (Ω, F, P) be the probability space on which the iid sequence (Xi, Yi) is defined, together with an iid sequence (Ui, i ≥ 1) of exponentially distributed random variables with parameter 1 such that the sequences (Ui) and (Xi, Yi) are independent. Let (Tn, n ≥ 1) be the sequence of partial sums of the variables Ui. Then the map

Φ : ω ↦ Σ_{n=1}^∞ δ_{(Tn(ω), Xn(ω), Yn(ω))},

where δ_{(x,y,z)} is the Dirac measure in (x, y, z), defines a Poisson point process on E = [0,∞) × [0,∞) × [0,∞) with intensity measure ν(dt dx dy) = dt dH(x, y). Note that for almost all ω ∈ Ω, Φ(ω) is a simple point measure on E such that there is at most one point from the support of Φ(ω) on each fibre {t} × [0,∞) × [0,∞), and Φ(ω)([0, t] × [0,∞) × [0,∞)) < ∞ for every t ≥ 0. Let Mp(E) be the set of all point measures on E. We will denote by Pν the distribution of Φ over Mp(E).

Define on Mp(E), for t ≥ 0, the functionals

A_X(t)(µ) = ∫_E x 1_{[0,t)}(s) µ(ds dx dy),

A_Y(t)(µ) = ∫_E y 1_{[0,t)}(s) µ(ds dx dy),

and

A(t)(µ) = A_X(t)(µ) + A_Y(t)(µ).

So, for example, A(t)(µ) is the sum of the x- and y-coordinates of the points in the support of µ up to time t. In the sequel it is convenient to write an expression like A_X(t)(µ) as A_X(t, µ). Define also, for t ≥ 0,

D(t)(µ) = ∫_E {1_{[0,x)}(t − A(s, µ)) A_Y(s, µ) + 1_{[x,x+y)}(t − A(s, µ))[t − A_X(s+, µ)]} µ(ds dx dy).

The next lemma motivates the definition of D(t).

Lemma 4.2.1 With probability 1,

D(t) = D(t)(Φ).


Proof: Let ω ∈ Ω. Then

D(t)(Φ(ω)) = Σ_{i=1}^∞ [1_{[0,Xi(ω))}(t − A(Ti(ω), Φ(ω))) A_Y(Ti(ω), Φ(ω)) + 1_{[Xi(ω),Xi(ω)+Yi(ω))}(t − A(Ti(ω), Φ(ω)))[t − A_X(Ti(ω)+, Φ(ω))]].

Note that 1_{[0,Xi(ω))}(t − A(Ti(ω), Φ(ω))) = 1 if and only if S_{N(t,ω)}(ω) ≤ t < S_{N(t,ω)}(ω) + X_{N(t,ω)+1}(ω), and this last statement implies that i = N(t, ω) + 1. The same conclusion holds when 1_{[Xi(ω),Xi(ω)+Yi(ω))}(t − A(Ti(ω), Φ(ω))) = 1. Since the intervals {[S_{i−1}, S_{i−1} + Xi), [S_{i−1} + Xi, S_{i−1} + Xi + Yi) : i ≥ 1} partition [0,∞), for any t > 0 one and only one of the indicators in the sum will be non-zero. So if 1_{[0,Xi(ω))}(t − A(Ti(ω), Φ(ω))) = 1 then i = N(t, ω) + 1 and

D(t)(Φ(ω)) = A_Y(T_{N(t,ω)+1}(ω), Φ(ω)) = 0 if N(t, ω) = 0, and = Σ_{j=1}^{N(t,ω)} Yj(ω) if N(t, ω) ≥ 1,

and if 1_{[Xi(ω),Xi(ω)+Yi(ω))}(t − A(Ti(ω), Φ(ω))) = 1 then

D(t)(Φ(ω)) = t − A_X(T_{N(t,ω)+1}(ω)+, Φ(ω)) = t − Σ_{j=1}^{N(t,ω)+1} Xj(ω). □

The following theorem gives the distribution of the total downtime D(t) in the form of a double Laplace transform.

Theorem 4.2.1 Let D(t) be as defined in (4.1). Then for α, β > 0

∫₀^∞ E[e^{−αD(t)}]e^{−βt}dt = {α[1 − F*(β)] + β[1 − H*(β, α + β)]} / {β(α + β)[1 − H*(β, α + β)]}.   (4.3)


Proof: By Lemma 4.2.1,

E(e^{−αD(t)}) = ∫_{Mp(E)} e^{−αD(t)(µ)} Pν(dµ)
= ∫_{Mp(E)} exp{−α ∫_E [1_{[0,x)}(t − A(s, µ)) A_Y(s, µ) + 1_{[x,x+y)}(t − A(s, µ))(t − A_X(s+, µ))] µ(ds dx dy)} Pν(dµ)
= ∫_{Mp(E)} ∫_E [1_{[0,x)}(t − A(s, µ)) e^{−αA_Y(s,µ)} + 1_{[x,x+y)}(t − A(s, µ)) e^{−α(t−A_X(s+,µ))}] µ(ds dx dy) Pν(dµ)
=: C1(α, t) + C2(α, t),

where the third equality holds because, as noted in the proof of Lemma 4.2.1, for fixed t exactly one of the indicators in the integrand is non-zero.

Applying the Palm formula for Poisson point processes, see Theorem 1.2.4, we obtain

C1(α, t) := ∫_{Mp(E)} ∫_E 1_{[0,x)}(t − A(s, µ)) e^{−αA_Y(s,µ)} µ(ds dx dy) Pν(dµ)
= ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} 1_{[0,x)}(t − A(s, µ + δ_{(s,x,y)})) exp{−αA_Y(s, µ + δ_{(s,x,y)})} Pν(dµ) dH(x, y) ds
= ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} 1_{[0,x)}(t − A(s, µ)) e^{−αA_Y(s,µ)} Pν(dµ) dH(x, y) ds

and

C2(α, t) := ∫_{Mp(E)} ∫_E 1_{[x,x+y)}(t − A(s, µ)) e^{−α(t−A_X(s+,µ))} µ(ds dx dy) Pν(dµ)
= ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} 1_{[x,x+y)}(t − A(s, µ + δ_{(s,x,y)})) exp{−α[t − A_X(s+, µ + δ_{(s,x,y)})]} Pν(dµ) dH(x, y) ds
= ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} 1_{[x,x+y)}(t − A(s, µ)) exp{−α[t − A_X(s+, µ) − x]} Pν(dµ) dH(x, y) ds.


Using Fubini's theorem and a substitution we obtain

∫₀^∞ C1(α, t)e^{−βt}dt = ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} [∫₀^x e^{−βt}dt] exp{−[αA_Y(s, µ) + βA(s, µ)]} Pν(dµ) dH(x, y) ds
= (1/β)[1 − F*(β)] ∫₀^∞ ∫_{Mp(E)} exp{−[αA_Y(s, µ) + βA(s, µ)]} Pν(dµ) ds.

Note that

αA_Y(s, µ) + βA(s, µ) = ∫_E 1_{[0,s)}(u)(αy + β(x + y)) µ(du dx dy).

So we can use the formula for the Laplace functional of Poisson point processes, see Theorem 1.2.1, to obtain

∫_{Mp(E)} exp{−[αA_Y(s, µ) + βA(s, µ)]} Pν(dµ) = exp{−∫₀^∞ ∫₀^∞ ∫₀^∞ [1 − e^{−1_{[0,s)}(u)(αy+β(x+y))}] dH(x, y) du} = exp{−s[1 − H*(β, α + β)]}.

It follows that

∫₀^∞ C1(α, t)e^{−βt}dt = (1/β)[1 − F*(β)] ∫₀^∞ exp{−s[1 − H*(β, α + β)]} ds = [1 − F*(β)] / (β[1 − H*(β, α + β)]).   (4.4)


Similarly we calculate the Laplace transform of C2(α, t) as follows:

∫₀^∞ C2(α, t)e^{−βt}dt
= ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} ∫₀^∞ 1_{[x,x+y)}(t − A(s, µ)) exp{−α[t − A_X(s+, µ) − x]} e^{−βt} dt Pν(dµ) dH(x, y) ds
= ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} [∫_x^{x+y} e^{−(α+β)t}dt] exp{−(α + β)A(s, µ) + α[A_X(s+, µ) + x]} Pν(dµ) dH(x, y) ds
= (1/(α + β)) ∫₀^∞ ∫₀^∞ ∫₀^∞ ∫_{Mp(E)} [e^{−(α+β)x} − e^{−(α+β)(x+y)}] e^{αx} exp{−(α + β)A(s, µ) + αA_X(s+, µ)} Pν(dµ) dH(x, y) ds
= (1/(α + β)) [∫₀^∞ ∫₀^∞ (e^{−βx} − e^{−(βx+(α+β)y)}) dH(x, y)] ∫₀^∞ ∫_{Mp(E)} exp{−(α + β)A(s, µ) + αA_X(s+, µ)} Pν(dµ) ds
= (1/(α + β)) [F*(β) − H*(β, α + β)] ∫₀^∞ ∫_{Mp(E)} exp{−(α + β)A(s, µ) + αA_X(s+, µ)} Pν(dµ) ds.

The integral with respect to Pν can be calculated using the formula for the Laplace functional of Poisson point processes as follows:

∫_{Mp(E)} exp{−(α + β)A(s, µ) + αA_X(s+, µ)} Pν(dµ)
= ∫_{Mp(E)} exp{−∫_E [1_{[0,s)}(u)(α + β)(x + y) − 1_{[0,s]}(u)αx] µ(du dx dy)} Pν(dµ)
= exp{−∫₀^∞ ∫₀^∞ ∫₀^∞ [1 − e^{−[1_{[0,s)}(u)(α+β)(x+y) − 1_{[0,s]}(u)αx]}] dH(x, y) du}
= exp{−s ∫₀^∞ ∫₀^∞ [1 − e^{−[βx+(α+β)y]}] dH(x, y)}
= exp{−s[1 − H*(β, α + β)]}.


It follows that

∫₀^∞ C2(α, t)e^{−βt}dt = (1/(α + β))[F*(β) − H*(β, α + β)] ∫₀^∞ exp{−s[1 − H*(β, α + β)]} ds
= [F*(β) − H*(β, α + β)] / ((α + β)[1 − H*(β, α + β)]).   (4.5)

Summing (4.4) and (4.5) we get the result. □

Taking derivatives with respect to α in (4.3) and setting α = 0 we get the Laplace transforms of E[D(t)] and E[D²(t)] as stated in the following proposition:

Proposition 4.2.1 For β > 0,

(a) ∫₀^∞ E[D(t)]e^{−βt}dt = [F*(β) − H*(β, β)] / (β²[1 − H*(β, β)]),   (4.6)

(b) ∫₀^∞ E[D²(t)]e^{−βt}dt = (2/β³) [ (F*(β) − H*(β, β))/(1 − H*(β, β)) − β[1 − F*(β)] ∫₀^∞ ∫₀^∞ ye^{−β(x+y)}dH(x, y) / [1 − H*(β, β)]² ].   (4.7)

Remark 4.2.1 For the case that (Xi) and (Yj) are independent, (4.3) simplifies to

∫₀^∞ E(e^{−αD(t)})e^{−βt}dt = {α[1 − F*(β)] + β[1 − F*(β)G*(α + β)]} / {β(α + β)[1 − F*(β)G*(α + β)]}.   (4.8)

Takacs [44], Muth [26], and Funaki and Yoshimoto [13] derived for the independent case the following formula for the distribution function of the total downtime:

P(D(t) ≤ x) = Σ_{n=0}^∞ Gn(x)[Fn(t − x) − F_{n+1}(t − x)] if t > x, and P(D(t) ≤ x) = 1 if t ≤ x.   (4.9)

Taking double Laplace transforms on both sides of (4.9) we obtain (4.8).
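Formula (4.9) is easy to evaluate in the independent exponential case, since Fn and Gn are then gamma cdfs (with F0 = G0 = 1 on [0,∞)). A hedged sketch of ours (SciPy/NumPy assumed, parameter values arbitrary) compares it with a direct simulation of D(t):

```python
# Check (4.9) for independent X ~ Exp(1), Y ~ Exp(2). Illustration only.
import numpy as np
from scipy.stats import gamma

lam, mu, t, x = 1.0, 2.0, 5.0, 2.0

def Fn(n, u):   # cdf of the sum of n Exp(lam) variables (gamma cdf)
    return 1.0 if n == 0 else gamma.cdf(u, n, scale=1/lam)

def Gn(n, u):
    return 1.0 if n == 0 else gamma.cdf(u, n, scale=1/mu)

p_formula = sum(Gn(n, x)*(Fn(n, t - x) - Fn(n + 1, t - x)) for n in range(60))

# Monte Carlo of D(t) via the alternating up/down construction:
rng = np.random.default_rng(7)
n_paths, hits = 100_000, 0
for _ in range(n_paths):
    s = down = 0.0
    while True:
        up = rng.exponential(1/lam)
        if t < s + up:
            break
        rep = rng.exponential(1/mu)
        down += min(rep, t - s - up)    # clip a repair running past t
        s += up + rep
        if s >= t:
            break
    hits += down <= x
assert abs(p_formula - hits/n_paths) < 0.01
```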


4.3 System availability

This section concerns the system availability of repairable systems, which is closely related to the total downtime. The system availability A11(t) at time t is defined as the probability that the system is working at time t, i.e.,

A11(t) = P(Z(t) = 1).

The relationship between the system availability A11(t) and the total downtime D(t) is given by the following equation:

E[D(t)] = t − ∫₀ᵗ A11(s)ds,   (4.10)

which can easily be verified using (4.2).

In Pham-Gia and Turkkan [30] the system availability of a repairable system where both the uptime and the downtime are gamma distributed has been considered. They calculate the system availability by computing numerically the renewal density of a renewal process whose inter-arrival times are the sum of two gamma random variables, and then using the following integral equation:

A11(t) = 1 − F(t) + ∫₀ᵗ [1 − F(t − u)]dm(u),   (4.11)

where m(t) = E[N(t)]. This equation can be found for example in Barlow [2].

In general an expression for the system availability can be derived using our result for the expected value of the total downtime given in (4.6), together with (4.10). Taking Laplace transforms on both sides of (4.10) we obtain

∫₀^∞ E[D(t)]e^{−βt}dt = 1/β² − (1/β) ∫₀^∞ A11(t)e^{−βt}dt.

Taking (4.6) into consideration we obtain

∫₀^∞ A11(t)e^{−βt}dt = [1 − F*(β)] / (β[1 − H*(β, β)]).   (4.12)

In particular, if (Xi) and (Yi) are independent, then

∫₀^∞ A11(t)e^{−βt}dt = [1 − F*(β)] / (β[1 − F*(β)G*(β)]).   (4.13)

Remark 4.3.1 The Laplace transform of A11(t) can also be derived from (4.11). Taking Laplace transforms on both sides of that equation we obtain

∫₀^∞ A11(t)e^{−βt}dt = (1/β)[1 − F*(β)][1 + m*(β)],   (4.14)


where m* is the Laplace-Stieltjes transform of m(t). But it is well known that

m*(β) = K*(β)/(1 − K*(β)),

where K is the cdf of X1 + Y1. Substituting this into (4.14) and using the fact that K*(β) = H*(β, β), we get (4.12).

Example 4.3.1 Let (Xi, i ≥ 1) be an iid sequence of non-negative random variables having a common Gamma(λ, m) distribution with pdf

f(x; λ, m) = λ^m x^{m−1} e^{−λx} / Γ(m), x ≥ 0.

Let (Yi, i ≥ 1) be an iid sequence of non-negative random variables having a common Gamma(µ, n) distribution. Assume that (Xi) and (Yi) are independent. Then using (4.13) we obtain

∫₀^∞ A11(t)e^{−βt}dt = (µ + β)^n[(λ + β)^m − λ^m] / (β[(λ + β)^m(µ + β)^n − λ^m µ^n]).   (4.15)

The system availability A11(t) can be obtained by inverting this transform.

As an example let m = n = 1. Then X1 and Y1 are exponentially distributed with parameters λ and µ respectively. The system availability is given by

A11(t) = µ/(λ + µ) + (λ/(λ + µ))e^{−(λ+µ)t}.   (4.16)
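A quick numerical check of (4.16) against the transform (4.13) — a sketch of ours assuming SciPy, with arbitrary rates:

```python
# Verify that A11(t) = mu/(lam+mu) + lam/(lam+mu)*exp(-(lam+mu)*t) has
# Laplace transform (1 - F*(beta))/(beta*(1 - F*(beta)*G*(beta))), eq. (4.13).
import numpy as np
from scipy.integrate import quad

lam, mu = 1.0, 2.0

def A11(t):
    return mu/(lam + mu) + lam/(lam + mu)*np.exp(-(lam + mu)*t)

for beta in (0.5, 1.0, 2.5):
    lhs, _ = quad(lambda t: A11(t)*np.exp(-beta*t), 0, np.inf)
    F = lam/(lam + beta)              # Laplace-Stieltjes transform of Exp(lam)
    G = mu/(mu + beta)
    rhs = (1 - F)/(beta*(1 - F*G))
    assert abs(lhs - rhs) < 1e-7
```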

As another example let m = n = 2, λ = 1 and µ = 2. In this case

A11(t) = 2/3 + (1/12)e^{−3t} + (1/4)e^{−3t/2}cos(√7 t/2) + (5√7/28)e^{−3t/2}sin(√7 t/2).

For non-integer m and n we can invert the transform in (4.15) numerically. As an example let m = 7.6, n = 2.4, λ = 2 and µ = 1.1765. In this case

∫₀^∞ A11(t)e^{−βt}dt = (1.1765 + β)^{2.4}[(2 + β)^{7.6} − 2^{7.6}] / (β[(2 + β)^{7.6}(1.1765 + β)^{2.4} − 2^{7.6}·1.1765^{2.4}]).

The graph of A11(t) can be seen in Figure 4.1, which is the same as Figure 1 in Pham-Gia and Turkkan [30].

Example 4.3.2 Let ((Xn, Yn), n ≥ 1) be an iid sequence of non-negative random vectors having a common joint bivariate exponential distribution given by

P(X1 > x, Y1 > y) = e^{−(λ1 x + λ2 y + λ12 max(x,y))}, x, y ≥ 0; λ1, λ2, λ12 > 0.


Figure 4.1: Graph of A11(t).

Obviously

P(X1 > x) = e^{−(λ1+λ12)x}   (4.17)

and

P(Y1 > y) = e^{−(λ2+λ12)y}.   (4.18)

The correlation coefficient ρXY between X1 and Y1 is given by

ρXY = λ12/(λ1 + λ2 + λ12).

The Laplace-Stieltjes transforms of the cdf F of X1 and the joint cdf H of X1 and Y1 are given by

F*(β) = (λ1 + λ12)/(β + λ1 + λ12)

and

H*(α, β) = [(λ1 + λ2 + λ12 + α + β)(λ1 + λ12)(λ2 + λ12) + λ12αβ] / [(λ1 + λ2 + λ12 + α + β)(λ1 + λ12 + α)(λ2 + λ12 + β)],

see Barlow [2]. It follows that

∫₀^∞ A11(t)e^{−βt}dt = (λ + 2β)(β + λ2 + λ12) / (β[2β² + (3λ + λ12)β + λ(λ + λ12)]),


where λ = λ1 + λ2 + λ12. Inverting this transform we obtain

A11(t) = (λ2 + λ12)/(λ + λ12) + (λ1/(λ1 + λ2))e^{−λt} + [λ12(λ2 − λ1)/((λ1 + λ2)(λ + λ12))]e^{−(λ+λ12)t/2}.

4.4 Covariance of total downtime

Let U(t) = t − D(t) be the total uptime of the system up to time t. Obviously

Cov(D(t1), D(t2)) = Cov(U(t1), U(t2)),

so we might as well study Cov(U(t1), U(t2)).

Let 0 ≤ t1 ≤ t2 < ∞. Then

E[U(t1)U(t2)] = E[∫₀^{t1} ∫₀^{t2} 1_{{1}}(Z(x)) 1_{{1}}(Z(y)) dy dx]
= 2∫₀^{t1} ∫_x^{t1} P(Z(x) = 1, Z(y) = 1) dy dx + ∫₀^{t1} ∫_{t1}^{t2} P(Z(x) = 1, Z(y) = 1) dy dx.   (4.19)

Let ϕ(x, y) = P(Z(x) = 1, Z(y) = 1). For 0 ≤ x ≤ y < ∞,

ϕ(x, y) = P(Z(x) = 1, Z(y) = 1, y < X1) + P(Z(x) = 1, Z(y) = 1, x < X1 < y) + P(Z(x) = 1, Z(y) = 1, X1 ≤ x).   (4.20)

Obviously

P(Z(x) = 1, Z(y) = 1, y < X1) = 1 − F(y).

For the second term, note that the event {Z(x) = 1, Z(y) = 1, x < X1 < y} is equivalent to the event {x < X1 and Sn ≤ y < Sn + X_{n+1} for some n ≥ 1}, where Sn = Σ_{i=1}^n (Xi + Yi). Let Rn = Σ_{i=2}^n (Xi + Yi), n ≥ 2. Then (X1, Y1), Rn and X_{n+1} are independent. Denote by Kn the cdf of Rn. Then

P(Z(x) = 1, Z(y) = 1, x < X1 < y)
= Σ_{n=1}^∞ P(x < X1, Sn ≤ y < Sn + X_{n+1})
= P(x < X1, X1 + Y1 ≤ y < X1 + Y1 + X2) + Σ_{n=2}^∞ P(x < X1, (X1 + Y1) + Rn ≤ y < (X1 + Y1) + Rn + X_{n+1})
= ∫_{x1∈(x,y]} ∫_{y1∈[0,y−x1]} ∫_{x2∈(y−x1−y1,∞)} dF(x2) dH(x1, y1)
+ Σ_{n=2}^∞ ∫_{x1∈(x,y]} ∫_{y1∈[0,y−x1]} ∫_{rn∈[0,y−x1−y1]} ∫_{x_{n+1}∈(y−x1−y1−rn,∞)} dF(x_{n+1}) dKn(rn) dH(x1, y1)
= ∫_{x1∈(x,y]} ∫_{w∈[x1,y]} {∫_{x2∈(y−w,∞)} dF(x2) + Σ_{n=2}^∞ ∫_{rn∈[0,y−w]} ∫_{x_{n+1}∈(y−w−rn,∞)} dF(x_{n+1}) dKn(rn)} dH(x1, w − x1)
= ∫_{x1∈(x,y]} ∫_{w∈[x1,y]} Σ_{n=1}^∞ P(Rn ≤ y − w < Rn + X_{n+1}) dH(x1, w − x1)   (with the convention R1 = 0)
= ∫_{x1∈(x,y]} ∫_{w∈[x1,y]} P(Z(y − w) = 1) dH(x1, w − x1)
= ∫_{x1∈(x,y]} ∫_{w∈[x1,y]} A11(y − w) dH(x1, w − x1),

where A11(t) denotes the availability of the system at time t, starting in state 1 at time 0. Finally, the last term in (4.20) can be obtained by conditioning on X1 + Y1, i.e.,

P(Z(x) = 1, Z(y) = 1, X1 ≤ x) = P(Z(x) = 1, Z(y) = 1, X1 + Y1 ≤ x)
= ∫₀^∞ P(Z(x) = 1, Z(y) = 1, X1 + Y1 ≤ x | X1 + Y1 = w) dK(w)
= ∫₀^x P(Z(x − w) = 1, Z(y − w) = 1) dK(w)
= ∫₀^x ϕ(x − w, y − w) dK(w).


So we obtain

ϕ(x, y) = 1 − F(y) + ∫_{x1∈(x,y]} ∫_{w∈[x1,y]} A11(y − w) dH(x1, w − x1) + ∫₀^x ϕ(x − w, y − w) dK(w).   (4.21)

Taking double Laplace transforms on both sides of (4.21) over the region 0 ≤ x ≤ y we obtain

ϕ(α, β) := ∫₀^∞ ∫_x^∞ ϕ(x, y) e^{−αx−βy} dy dx
= {α[1 − F*(β)] − β[F*(β) − F*(α + β)]} / (αβ(α + β)[1 − K*(α + β)]) + A11(β)[H*(β, β) − H*(α + β, β)] / (α[1 − K*(α + β)]),

where

A11(β) := ∫₀^∞ A11(t)e^{−βt}dt = [1 − F*(β)] / (β[1 − H*(β, β)]),

see (4.12). This formula is a generalization of the result in Srinivasan et al. [37], since H*(β, β) = F*(β)G*(β) when Xi and Yi are independent.

Now from (4.19) we obtain

∫₀^∞ ∫_{t1}^∞ E[U(t1)U(t2)] e^{−αt1−βt2} dt2 dt1 = 2ϕ(0, α + β)/(β(α + β)) + [ϕ(α, β) − ϕ(0, α + β)]/(αβ)
= ϕ(α, β)/(αβ) + (α − β)ϕ(0, α + β)/(αβ(α + β)).

It follows that

∫₀^∞ ∫₀^∞ E[U(t1)U(t2)] e^{−αt1−βt2} dt1 dt2 = [ϕ(α, β) + ϕ(β, α)]/(αβ).

4.5 Asymptotic properties

In this section we want to address asymptotic properties of the total downtime D(t). To this end we use a method of Takacs [44] which is based on a comparison with the asymptotic properties of a delayed renewal process related to the process that we are studying. First we summarize some known results about delayed renewal processes (N(t), t ≥ 0) which will be used in the following.


Let (Vn, n ≥ 1) be an iid sequence of non-negative random variables. Let V0 be a non-negative random variable which is independent of the sequence (Vn). Define S0 = 0, Sn = Σ_{i=0}^{n−1} Vi, n ≥ 1, and N(t) = sup{n ≥ 0 : Sn ≤ t}. The Laplace-Stieltjes transforms of the first and the second moments of N(t) are given by

∫₀^∞ e^{−βt}dE[N(t)] = E(e^{−βV0}) / (1 − E(e^{−βV1}))   (4.22)

and

∫₀^∞ e^{−βt}dE[N²(t)] = 2E(e^{−βV0}) / [1 − E(e^{−βV1})]² − E(e^{−βV0}) / (1 − E(e^{−βV1})),   (4.23)

respectively; see Takacs [44].

Now we are in a position to derive the asymptotic properties of D(t). The same argument as used in Takacs [44] for the independent case can be employed to derive asymptotic properties of the total downtime in the dependent case.

Let µX = E(X1), µY = E(Y1), σX² = Var(X1), σY² = Var(Y1) and σXY = Cov(X1, Y1). Let

Vn = Xn + Yn, n = 1, 2, 3, ....   (4.24)

Lemma 4.5.1 Let (N(t), t ≥ 0) be the delayed renewal process determined by the random variables (Vn), n = 0, 1, 2, ..., where V0 has the distribution

P(V0 ≤ x) = (1/µX) ∫₀^x [1 − F(y)]dy for x ≥ 0, and P(V0 ≤ x) = 0 for x < 0,   (4.25)

and Vn, n = 1, 2, ..., are defined as in (4.24). Then

E[D(t)] + µX E[N(t)] = t.   (4.26)

Proof: It is easy to verify that E(e^{−βV0}) = [1 − F*(β)]/(βµX) and E(e^{−βV1}) = H*(β, β). Substitution in (4.22) yields

∫₀^∞ e^{−βt}dE[N(t)] = [1 − F*(β)] / (βµX[1 − H*(β, β)]).   (4.27)

Now from (4.6) we obtain

∫₀^∞ e^{−βt}dE[D(t)] = [F*(β) − H*(β, β)] / (β[1 − H*(β, β)]).   (4.28)


Taking the Laplace-Stieltjes transform of the left-hand side of (4.26) and taking (4.27) and (4.28) into consideration we obtain

∫₀^∞ e^{−βt}dE[D(t)] + µX ∫₀^∞ e^{−βt}dE[N(t)] = [F*(β) − H*(β, β)] / (β[1 − H*(β, β)]) + [1 − F*(β)] / (β[1 − H*(β, β)]) = 1/β,

which is equal to the Laplace-Stieltjes transform of t. □

Remark 4.5.1 The relation (4.26) has been proved by Takacs [44] for the case that (Xi) and (Yj) are independent.

Theorem 4.5.1 If µX + µY < ∞, then

lim_{t→∞} E[D(t)]/t = µY/(µX + µY),   (4.29)

and if σX² and σY² are finite and X1 + Y1 is a non-lattice random variable, then

lim_{t→∞} (E[D(t)] − µY t/(µX + µY)) = [µY σX² − µX σY² − 2µX σXY] / (2(µX + µY)²) − µX µY / (2(µX + µY)).   (4.30)

Proof: From Lemma 4.5.1 and (1.10) we obtain

lim_{t→∞} E[D(t)]/t = 1 − µX lim_{t→∞} E[N(t)]/t = 1 − µX/(µX + µY) = µY/(µX + µY).

For the second part, from (4.26) we obtain

lim_{t→∞} (E[D(t)] − µY t/(µX + µY)) = −µX lim_{t→∞} (E[N(t)] − t/(µX + µY)).   (4.31)

Now if X1 + Y1 is a non-lattice random variable, then using (1.11) we obtain

lim_{t→∞} (E[N(t)] − t/(µX + µY)) = [σX² + σY² + 2σXY + (µX + µY)²] / (2(µX + µY)²) − µ0/(µX + µY).   (4.32)


Next it is easy to see that

µ0 = E(V0) = (σX² + µX²)/(2µX).   (4.33)

Substituting (4.32) and (4.33) into (4.31) completes the proof. □

Remark 4.5.2 The first result (4.29) of Theorem 4.5.1 can also be proved using a Tauberian theorem. From (4.28), if µX and µY are finite, we obtain

∫₀^∞ e^{−βt}dE[D(t)] ∼ µY/(β(µX + µY)) as β → 0.

Obviously E[D(t)] is non-decreasing, so we can use Theorem 2.4.1 to conclude (4.29).

Now we will derive the asymptotic variance of D(t).

Lemma 4.5.2 Let (N(t), t ≥ 0) be the delayed renewal process determined by the random variables (Vn), n = 0, 1, 2, ..., where V0 has the Laplace transform

E(e^{−βV0}) = [1 − F*(β)] ∫₀^∞ ∫₀^∞ ye^{−β(x+y)}dH(x, y) / (βµXµY)

and Vn, n = 1, 2, ..., are defined as in (4.24). Then

E[D²(t)] = 2∫₀ᵗ E[D(u)]du − µXµY(E[N(t)] + E[N²(t)]).   (4.34)

Proof: Firstly, it is easy to see from the Laplace transform of V0 that

µ0 = E(V0) = µX + (σX² + µX²)/(2µX) + (σXY + σY² + µY²)/µY.   (4.35)

Then take the Laplace-Stieltjes transforms of both sides of (4.34) and use (4.6), (4.7), (4.22), (4.23), and (4.35). □

Theorem 4.5.2 If σX² and σY² are finite and X1 + Y1 is a non-lattice random variable, then

lim_{t→∞} Var[D(t)]/t = (µX²σY² + µY²σX² − 2µXµYσXY) / (µX + µY)³.

Proof: If X1 + Y1 is a non-lattice random variable then, using (1.11) and (1.12) respectively, we can show that the delayed renewal process defined in Lemma


4.5.2 has the following properties as t → ∞:

E[N(t)] = t/(µX + µY) + [σX² + σY² + 2σXY + (µX + µY)²]/(2(µX + µY)²) − [µY(σX² + µX²) + 2µX²µY + 2µX(σXY + σY² + µY²)]/(2µXµY(µX + µY)) + o(1)

and

E[N²(t)] = E[N(t)]² + [(σX² + σY² + 2σXY)/(µX + µY)³] t + o(t).

From (4.30) we deduce

2∫₀ᵗ E[D(u)]du = µY t²/(µX + µY) + [(µYσX² − µXσY² − 2µXσXY)/(µX + µY)² − µXµY/(µX + µY)] t + o(t) as t → ∞.

It follows, using Lemma 4.5.2, that

E[D²(t)] = µY²t²/(µX + µY)² − [µXµY³ + (µX² − 2σX²)µY² + (σY² + 4σXY)µXµY − µX²σY²]/(µX + µY)³ · t + o(t),

and hence, taking (4.30) into consideration,

Var[D(t)] = E[D²(t)] − E[D(t)]² = [(µX²σY² + µY²σX² − 2µXµYσXY)/(µX + µY)³] t + o(t). □
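The asymptotic mean and variance can be probed by simulation in the dependent case, using the bivariate exponential of Example 4.3.2 sampled via its well-known construction X = min(Z1, Z12), Y = min(Z2, Z12) with independent unit-rate exponentials. A hedged sketch of ours with λ1 = λ2 = λ12 = 1, where µX = µY = 1/2, σX² = σY² = 1/4 and σXY = 1/12, so that E[D(t)]/t → 1/2 and Var[D(t)]/t → 1/12:

```python
# Monte Carlo check of Theorems 4.5.1 and 4.5.2 for a dependent (X, Y) pair
# (Marshall-Olkin bivariate exponential, lam1 = lam2 = lam12 = 1).
import numpy as np

rng = np.random.default_rng(3)
t, n_paths = 400.0, 1500

def downtime(t):
    s = down = 0.0
    while True:
        z1, z2, z12 = rng.exponential(1.0, 3)
        x, y = min(z1, z12), min(z2, z12)   # dependent failure/repair pair
        if t < s + x:
            return down
        down += min(y, t - s - x)           # clip a repair running past t
        s += x + y
        if s >= t:
            return down

d = np.array([downtime(t) for _ in range(n_paths)])
assert abs(d.mean()/t - 0.5) < 0.005        # (4.29): mean/t -> 1/2
assert abs(d.var()/t - 1/12) < 0.02         # Thm 4.5.2: var/t -> 1/12
```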

Now we will consider the asymptotic distribution of the total downtime. For the case that (Xi) and (Yj) are independent, the limiting distribution of the total downtime D(t) is normal as t → ∞, i.e.,

[D(t) − µY t/(µX + µY)] / √{[(µX²σY² + µY²σX²)/(µX + µY)³] t} →d N(0, 1),   (4.36)

provided σX² and σY² are finite; see Takacs [44] and Renyi [33]. In the following theorem we give the limiting distribution of the total downtime D(t) for the dependent case.

Theorem 4.5.3 If σX² and σY² are finite then, as t → ∞,

[D(t) − µY t/(µX + µY)] / √{[(µX²σY² + µY²σX² − 2µXµYσXY)/(µX + µY)³] t} →d N(0, 1).


Proof: First note that

\sum_{i=1}^{N(t)} Y_i \le D(t) \le \sum_{i=1}^{N(t)+1} Y_i,

where N(t) = \sup\{n \ge 0 : \sum_{j=1}^n (X_j + Y_j) \le t\}. Using the Central Limit Theorem for random sums, see Embrechts et al. [11], we obtain

\left[\mathrm{Var}\left(Y_1 - \frac{\mu_Y(X_1 + Y_1)}{\mu_X + \mu_Y}\right) \frac{t}{\mu_X + \mu_Y}\right]^{-1/2} \left(\sum_{i=1}^{N(t)} Y_i - \frac{\mu_Y t}{\mu_X + \mu_Y}\right) \xrightarrow{d} N(0, 1),

where

\mathrm{Var}\left(Y_1 - \frac{\mu_Y(X_1 + Y_1)}{\mu_X + \mu_Y}\right) \frac{t}{\mu_X + \mu_Y}
= \left(\sigma_Y^2 + \frac{\mu_Y^2}{(\mu_X + \mu_Y)^2}\,\mathrm{Var}(X + Y) - \frac{2\mu_Y}{\mu_X + \mu_Y}\,\mathrm{Cov}(Y, X + Y)\right) \frac{t}{\mu_X + \mu_Y}
= \frac{\sigma_Y^2(\mu_X + \mu_Y)^2 + \mu_Y^2(\sigma_X^2 + \sigma_Y^2 + 2\sigma_{XY}) - 2\mu_Y(\mu_X + \mu_Y)(\sigma_{XY} + \sigma_Y^2)}{(\mu_X + \mu_Y)^3}\, t
= \frac{\mu_X^2 \sigma_Y^2 + \mu_Y^2 \sigma_X^2 - 2\mu_X \mu_Y \sigma_{XY}}{(\mu_X + \mu_Y)^3}\, t.

The proof is complete if we can show that

\frac{Y_{N(t)+1}}{\sqrt{t}} \xrightarrow{P} 0 \quad \text{as } t \to \infty. \qquad (4.37)

But by the fact that \frac{N(t)}{t} \xrightarrow{P} \frac{1}{\mu_X + \mu_Y} (> 0) and the assumption that \sigma_Y^2 < \infty, then using Lemma 2.4.3 we obtain

\frac{Y_{N(t)}}{\sqrt{N(t)}} \xrightarrow{P} 0.

Hence (4.37) follows. □
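As a numerical illustration of Theorems 4.5.2 and 4.5.3, the following Python sketch simulates the alternating process with dependent failure and repair times and compares the empirical mean and variance rates of D(t) with the limits above. The parameter choice (Marshall-Olkin pairs with \lambda_1 = 1, \lambda_2 = 2, \lambda_{12} = 3, anticipating Example 4.6.1) and the horizon t = 50 are illustrative assumptions, not part of the theorems.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, lam12 = 1.0, 2.0, 3.0      # illustrative Marshall-Olkin rates
n_rep, n_cyc, t = 10000, 200, 50.0

# Dependent failure/repair pairs: X = min(Z1, Z3), Y = min(Z2, Z3).
Z1 = rng.exponential(1 / lam1, (n_rep, n_cyc))
Z2 = rng.exponential(1 / lam2, (n_rep, n_cyc))
Z3 = rng.exponential(1 / lam12, (n_rep, n_cyc))
X, Y = np.minimum(Z1, Z3), np.minimum(Z2, Z3)

S = np.cumsum(X + Y, axis=1)                 # cycle completion times
k = (S <= t).sum(axis=1)                     # cycles completed by time t
rows = np.arange(n_rep)
sumY = np.cumsum(Y, axis=1)
full = np.where(k > 0, sumY[rows, np.maximum(k - 1, 0)], 0.0)
Sk = np.where(k > 0, S[rows, np.maximum(k - 1, 0)], 0.0)
nxt = np.minimum(k, n_cyc - 1)               # index of the running cycle
D = full + np.maximum(t - Sk - X[rows, nxt], 0.0)   # total downtime D(t)

muX, muY = 1 / (lam1 + lam12), 1 / (lam2 + lam12)
sX2, sY2 = muX**2, muY**2                    # exponential marginals
sXY = lam12 / ((lam1 + lam2 + lam12) * (lam1 + lam12) * (lam2 + lam12))
slope = (muX**2 * sY2 + muY**2 * sX2 - 2 * muX * muY * sXY) / (muX + muY) ** 3
print(D.mean() / t, muY / (muX + muY))       # empirical vs limiting mean rate
print(D.var() / t, slope)                    # empirical vs limiting variance rate
```

With these parameters \mu_X = 1/4, \mu_Y = 1/5, and the limiting variance rate is 0.0025/0.45^3 \approx 0.0274.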

4.6 Examples

In this section we give two examples. In the first example we will see the effect of dependence of the failure and repair times on the distribution of the total downtime. In the second example we will see that for some cases we have analytic expressions for the first and second moments of the total downtime.


Example 4.6.1 Let (X_i, i \ge 1) and (Y_i, i \ge 1) be the sequences of the failure times and repair times respectively, of a repairable system such that ((X_i, Y_i), i \ge 1) is an iid sequence of non-negative random vectors. Let X_1 and Y_1 have a joint bivariate exponential distribution given by

P(X_1 > x, Y_1 > y) = e^{-(\lambda_1 x + \lambda_2 y + \lambda_{12} \max(x, y))}; \quad x, y \ge 0; \ \lambda_1, \lambda_2, \lambda_{12} > 0.

The marginals are given by

P(X_1 > x) = e^{-(\lambda_1 + \lambda_{12})x} \qquad (4.38)

and

P(Y_1 > y) = e^{-(\lambda_2 + \lambda_{12})y}. \qquad (4.39)

In this case we have \mu_X = \frac{1}{\lambda_1 + \lambda_{12}}, \mu_Y = \frac{1}{\lambda_2 + \lambda_{12}}, \sigma_X^2 = \frac{1}{(\lambda_1 + \lambda_{12})^2}, \sigma_Y^2 = \frac{1}{(\lambda_2 + \lambda_{12})^2}, \sigma_{XY} = \frac{\lambda_{12}}{(\lambda_1 + \lambda_2 + \lambda_{12})(\lambda_1 + \lambda_{12})(\lambda_2 + \lambda_{12})}, and the correlation coefficient \rho_{XY} between X_1 and Y_1 is

\rho_{XY} = \frac{\lambda_{12}}{\lambda_1 + \lambda_2 + \lambda_{12}}.
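The covariance formula above can be checked numerically through the identity E[XY] = \int_0^\infty \int_0^\infty P(X_1 > x, Y_1 > y)\, dx\, dy, valid for non-negative random variables. A minimal Python sketch, with the illustrative parameter values \lambda_1 = 1, \lambda_2 = 2, \lambda_{12} = 3 used later in this example:

```python
import numpy as np

l1, l2, l12 = 1.0, 2.0, 3.0
g = np.arange(0.005, 10.0, 0.01)             # midpoint grid; tail beyond 10 is negligible
Xg, Yg = np.meshgrid(g, g, indexing="ij")
surv = np.exp(-(l1 * Xg + l2 * Yg + l12 * np.maximum(Xg, Yg)))
EXY = surv.sum() * 0.01**2                   # 2D midpoint rule for E[XY]
muX, muY = 1 / (l1 + l12), 1 / (l2 + l12)
cov_num = EXY - muX * muY
cov_formula = l12 / ((l1 + l2 + l12) * (l1 + l12) * (l2 + l12))
rho = cov_formula / (muX * muY)              # sigma_X = mu_X, sigma_Y = mu_Y here
print(cov_num, cov_formula)                  # both close to 0.025
print(rho, l12 / (l1 + l2 + l12))            # both 0.5
```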

Using (4.6) we obtain

\int_0^\infty E[D(t)] e^{-\beta t} dt = \frac{(2\lambda_1 + \lambda_{12})\beta + (\lambda_1 + \lambda_{12})^2 + \lambda_2(\lambda_1 + \lambda_{12})}{\beta^2 \bigl[2\beta^2 + (3\lambda_1 + 3\lambda_2 + 4\lambda_{12})\beta + (\lambda_1 + \lambda_2)^2 + 3\lambda_{12}(\lambda_1 + \lambda_2) + 2\lambda_{12}^2\bigr]}.

This transform can be inverted analytically. As an example, for \lambda_1 = 1, \lambda_2 = 2, and \lambda_{12} = 3 we obtain

E[D(t)] = \frac{4}{9} t - \frac{13}{162} + \frac{2}{81} e^{-9t/2} + \frac{1}{18} e^{-6t}.

The distribution of D(t) has mass at 0 with

P(D(t) = 0) = P(X_1 > t) = e^{-(\lambda_1 + \lambda_{12})t}.

The pdf of the continuous part of D(t) can be obtained by inverting its double Laplace transform. As an example let \lambda_1 = 1, \lambda_2 = 2, and \lambda_{12} = 3. In this case

\int_0^\infty E[e^{-\alpha D(t)}] e^{-\beta t} dt = \frac{\alpha + \beta - \frac{4\alpha}{4 + \beta} - \beta C(\alpha, \beta)}{\beta(\alpha + \beta)[1 - C(\alpha, \beta)]} \qquad (4.40)

where

C(\alpha, \beta) = \frac{20(6 + \alpha + 2\beta) + 3\beta(\alpha + \beta)}{(6 + \alpha + 2\beta)(4 + \beta)(5 + \alpha + \beta)}.


[Figure 4.2: The graph of the density of D(10) with \lambda_1 = 1, \lambda_2 = 2 and \lambda_{12} = 3. Solid line: exact density; dashed line: normal approximation.]

Using the numerical inversion of a double Laplace transform, see Appendix B,we get the graph of the pdf of D(10), see Figure 4.2 (solid line). In this Figurewe also compare the pdf of D(10) with its normal approximation (dashed line).We see that the pdf of D(10) is close to its normal approximation.

The effect of dependence between the failure and the repair times can be seen in Figure 4.3. In this figure we compare the graphs of the normal approximations of D(10) where (X_i) and (Y_j) are independent and satisfy (4.38) and (4.39) with the normal approximations of D(10) for various correlation coefficients \rho_{XY}. We see that the smaller the correlation coefficient, the closer the two normal approximations are to each other.

Example 4.6.2 Let (X_i, i \ge 1) and (Y_i, i \ge 1) denote the sequences of the failure times and repair times respectively, of a repairable system such that (X_i) are iid non-negative random variables having a common Gamma(\lambda, m) distribution with pdf

f_{X_1}(x) = \frac{\lambda^m}{\Gamma(m)} x^{m-1} e^{-\lambda x}, \quad x \ge 0,

and (Y_i) are iid non-negative random variables having a common Gamma(\mu, n) distribution.

Firstly we will consider the case where m = n = 1. In this case X1 and Y1

are exponentially distributed with parameters \lambda and \mu respectively.

[Figure 4.3: The graphs of the normal approximations of D(10). Solid lines: dependent cases; dashed lines: independent cases. Panels: (a) \rho_{XY} = 0.8 (\lambda_1 = \lambda_2 = 1, \lambda_{12} = 8); (b) \rho_{XY} = 0.5 (\lambda_1 = \lambda_2 = 1, \lambda_{12} = 2); (c) \rho_{XY} = 0.2 (\lambda_1 = \lambda_2 = 2, \lambda_{12} = 1).]

Using (4.6), (4.7), and (4.3) we obtain

\int_0^\infty E[D(t)] e^{-\beta t} dt = \frac{\lambda}{\beta^2 [\beta + \lambda + \mu]},

\int_0^\infty E[D^2(t)] e^{-\beta t} dt = \frac{2\lambda(\lambda + \beta)}{\beta^3 [\beta + \lambda + \mu]^2},

and

\int_0^\infty E[e^{-\alpha D(t)}] e^{-\beta t} dt = \frac{\alpha + \beta + \lambda + \mu}{(\lambda + \beta)(\alpha + \beta) + \mu\beta}.

Inverting these transforms we obtain

E[D(t)] = \frac{\lambda t}{\lambda + \mu} - \frac{\lambda}{(\lambda + \mu)^2}\left(1 - e^{-(\lambda + \mu)t}\right),


E[D^2(t)] = \frac{\lambda^2 t^2}{(\lambda + \mu)^2} + \frac{2\lambda(\mu - \lambda)t}{(\lambda + \mu)^3} + \frac{2\lambda(\lambda - 2\mu) + 2\lambda[2\mu - \lambda + \mu(\lambda + \mu)t]\, e^{-(\lambda + \mu)t}}{(\lambda + \mu)^4},

and

E[e^{-\alpha D(t)}] = e^{-(\alpha + \lambda + \mu)t/2} \left(\cos(\sqrt{c}\, t/2) + \frac{\alpha + \lambda + \mu}{\sqrt{c}} \sin(\sqrt{c}\, t/2)\right) \qquad (4.41)

where c = 4\lambda\alpha - (\alpha + \lambda + \mu)^2. The graph of the pdf of D(t) can be obtained by numerically inverting the Laplace transform in (4.41). As an example the graph of the pdf of D(20) for \lambda = 1 and \mu = 5 can be seen in Figure 4.4 (dashed line).
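For m = n = 1 the on/off process is a two-state Markov chain, so the inverted transform for E[D(t)] can be cross-checked by numerically integrating the chain's unavailability. A short Python sketch (with \lambda = 1, \mu = 5 as in the example):

```python
import numpy as np

# Two-state Markov chain: unavailability (lam/(lam+mu))(1 - e^{-(lam+mu)s}),
# whose running integral must reproduce the closed form for E[D(t)].
lam, mu = 1.0, 5.0
t = np.linspace(0.0, 20.0, 400001)
h = t[1] - t[0]
unav = lam / (lam + mu) * (1.0 - np.exp(-(lam + mu) * t))
ED_num = h * (np.cumsum(unav) - 0.5 * unav - 0.5 * unav[0])   # running trapezoid rule
ED_closed = lam * t / (lam + mu) - lam / (lam + mu) ** 2 * (1.0 - np.exp(-(lam + mu) * t))
print(np.max(np.abs(ED_num - ED_closed)))    # tiny discretization error
```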

[Figure 4.4: The graph of the pdf of D(20) with X_1 \sim \exp(1), Y_1 \sim \exp(5) (dashed line) and X_1 \sim Gamma(2, 2), Y_1 \sim Gamma(10, 2) (solid line).]

For integers m and n we have explicit expressions for the first and second moments of D(t). As an example, if m = n = 2, \lambda = 2 and \mu = 10, then using (4.6) and (4.7) we obtain

E[D(t)] = \frac{1}{6} t - \frac{1}{18} + \frac{1}{180} e^{-12t} + \frac{1}{20} \cos(2t)\, e^{-6t} + \frac{1}{10} \sin(2t)\, e^{-6t},

and

E[D^2(t)] = \frac{1}{36} t^2 + \frac{1}{216} t - \frac{1}{864} + \frac{1}{864} e^{-12t} + \frac{1}{108} t\, e^{-12t} + \frac{1}{4} t \cos(2t)\, e^{-6t} - \frac{1}{8} \sin(2t)\, e^{-6t}.


In this case we also have explicit expressions for E[e^{-\alpha D(t)}]. The graph of the pdf of D(20) for m = n = 2, \lambda = 2 and \mu = 10 can be seen in Figure 4.4 (solid line). Note that in this case X_1 and Y_1 have means 1 and 0.2 respectively, which are the same as the means of exponential random variables with parameters 1 and 5 respectively.

4.7 Systems consisting of n independent components

In this section we will consider the distribution of the total uptime (downtime) of a system consisting of n \ge 2 stochastically independent components. The system we will discuss can be a series, a parallel or a k-out-of-n system, but we will formulate the results only for a series system. The results only concern the total uptime. The corresponding results for the total downtime can be derived similarly, or using the obvious relation between the total uptime and the total downtime. Firstly we will discuss the case where both the failure and repair times of the components are exponentially distributed, and later on we will consider the case where the failure or the repair times are arbitrarily distributed.

4.7.1 Exponential failure and repair times

Consider a series system comprising n \ge 2 stochastically independent two-state components, each of which can be either up or down, denoted as 1 and 0 respectively. Suppose that the system starts to operate at time 0. If a component fails, it is repaired and put into operation again. During a repair the unfailed components may fail. There are no capacity constraints at the repair shop.

Denote by Z_i(t), i = 1, 2, \dots, n the state of the ith component at time t. Then the total uptime of the system in the time interval [0, t] is given by

U(t) = \int_0^t 1_{\{1_n\}}(Z_1(s), \dots, Z_n(s))\, ds \qquad (4.42)

where 1_n denotes the vector of ones of length n.

Let X_{ij} and Y_{ij}, j = 1, 2, \dots, denote the consecutive uptimes and downtimes, respectively, of the ith component. Assume that the sequences (X_{ij}) and (Y_{ij}) are independent. Assume also that for each i, the random variables X_{ij}, j = 1, 2, \dots, have a common exponential distribution with parameter \lambda_i, and Y_{ij}, j = 1, 2, \dots, have a common exponential distribution with parameter \mu_i. Then (Z_i(t), t \ge 0), i = 1, 2, \dots, n are independent, continuous-time Markov chains on \{0, 1\} with generators

Q_i = \begin{pmatrix} -\mu_i & \mu_i \\ \lambda_i & -\lambda_i \end{pmatrix}, \quad \lambda_i, \mu_i > 0.


Let

Y_n(t) = (Z_1(t), Z_2(t), \dots, Z_n(t)).

Then Y_n = (Y_n(t), t \ge 0) is a continuous-time Markov chain on

I = \{0, 1\}^n,

the set of row vectors of length n with entries zeros and/or ones.

Let a \in I be a state of the Markov chain Y_n. Then a has the form

a = (\varepsilon_1(a), \varepsilon_2(a), \dots, \varepsilon_n(a))

where \varepsilon_j(a) \in \{0, 1\}, j = 1, 2, \dots, n. The generator

Q = (q_{ab})_{a,b \in I}

of the Markov chain Y_n has the following properties. Suppose

b = (\varepsilon_1(b), \varepsilon_2(b), \dots, \varepsilon_n(b)).

If a and b differ in two or more entries, then q_{ab} = q_{ba} = 0. Suppose now a and b differ in only one entry. Then there exists an index j such that

\varepsilon_i(a) = \varepsilon_i(b) \quad \text{for all } i \ne j

and

\varepsilon_j(a) = 1 - \varepsilon_j(b).

Let

\nu_{i,0} = \lambda_i \quad \text{and} \quad \nu_{i,1} = \mu_i.

Then

q_{ab} = \nu_{j,\varepsilon_j(b)} \quad \text{and} \quad q_{ba} = \nu_{j,\varepsilon_j(a)}.

Lemma 4.7.1 The vector \pi = (\pi_a)_{a \in I} where

\pi_a = \nu_{1,\varepsilon_1(a)} \nu_{2,\varepsilon_2(a)} \cdots \nu_{n,\varepsilon_n(a)} \prod_{i=1}^n \frac{1}{\lambda_i + \mu_i}

is the stationary distribution of the Markov chain Y_n.

Proof: It is clear that \pi_a is non-negative for every a \in I. The fact that

\sum_{a \in I} \pi_a = 1

can be proved by induction. The proof is complete if we can show that

\pi_a q_{ab} = \pi_b q_{ba} \quad \text{for all } a, b \in I.


If a and b differ in two or more entries then

q_{ab} = q_{ba} = 0.

Now suppose that a and b differ in only one entry, say the jth. In this case

\nu_{i,\varepsilon_i(a)} = \nu_{i,\varepsilon_i(b)} \quad \text{for all } i \ne j,

and if \nu_{j,\varepsilon_j(a)} = \lambda_j then \nu_{j,\varepsilon_j(b)} = \mu_j and vice versa. The entries \pi_a and \pi_b of \pi are given by

\pi_a = C \nu_{j,\varepsilon_j(a)}

and

\pi_b = C \nu_{j,\varepsilon_j(b)}

where

C = \nu_{1,\varepsilon_1(a)} \cdots \nu_{j-1,\varepsilon_{j-1}(a)} \nu_{j+1,\varepsilon_{j+1}(a)} \cdots \nu_{n,\varepsilon_n(a)} \prod_{i=1}^n \frac{1}{\lambda_i + \mu_i}.

It follows that

\pi_a q_{ab} = C \nu_{j,\varepsilon_j(a)} \nu_{j,\varepsilon_j(b)} = C \nu_{j,\varepsilon_j(b)} \nu_{j,\varepsilon_j(a)} = \pi_b q_{ba}. □
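Lemma 4.7.1 is easy to check numerically for small n: the following Python sketch builds \pi and the transition rates explicitly (the rate values are arbitrary illustrations) and verifies that \pi sums to 1 and satisfies the detailed balance relations \pi_a q_{ab} = \pi_b q_{ba} used in the proof.

```python
from itertools import product

lam = [1.0, 2.0, 3.0]           # failure rates lambda_i (illustrative)
mu = [4.0, 5.0, 6.0]            # repair rates mu_i (illustrative)
n = len(lam)
nu = lambda i, e: mu[i] if e == 1 else lam[i]   # nu_{i,0} = lambda_i, nu_{i,1} = mu_i

states = list(product([0, 1], repeat=n))
norm = 1.0
for i in range(n):
    norm *= lam[i] + mu[i]
pi = {}
for a in states:
    p = 1.0
    for i in range(n):
        p *= nu(i, a[i])
    pi[a] = p / norm

def q(a, b):
    # q_ab is nonzero only if a and b differ in exactly one entry j,
    # and then q_ab = nu_{j, eps_j(b)}.
    diff = [j for j in range(n) if a[j] != b[j]]
    return nu(diff[0], b[diff[0]]) if len(diff) == 1 else 0.0

assert abs(sum(pi.values()) - 1.0) < 1e-12
for a in states:
    for b in states:
        assert abs(pi[a] * q(a, b) - pi[b] * q(b, a)) < 1e-12
print("pi is a reversible stationary distribution")
```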

As a consequence of the fact that the Markov chain Y_n has the stationary distribution \pi, we have the following proposition:

Proposition 4.7.1 Let U(t) be the total uptime of a series system with n stochastically independent components. Suppose that the ith component, i = 1, 2, \dots, n, has up and down times which are independent and exponentially distributed with parameters \lambda_i and \mu_i respectively. Then

(a) E_\pi[U(t)] = \prod_{i=1}^n \frac{\mu_i}{\lambda_i + \mu_i}\, t,

where E_\pi denotes the expectation when the initial distribution is the stationary distribution \pi,

(b) with probability 1,

\frac{U(t)}{t} \longrightarrow \prod_{i=1}^n \frac{\mu_i}{\lambda_i + \mu_i} \quad \text{as } t \to \infty.


Proof: Use the fact that if the initial distribution of the Markov chain Y_n is its stationary distribution then the distribution of the chain at any time equals the stationary distribution, and the fact that the long-run fraction of time the chain spends in a state equals the stationary distribution at that state, see e.g. Wolff [49]. □

Next we will derive the variance of U(t) for the case n = 2. For larger n the calculations become more complicated. For n = 2 the Markov chain Y_2 has the state space I = \{00, 01, 10, 11\} and stationary distribution

\pi = (\pi_a)_{a \in I} = \frac{1}{(\lambda_1 + \mu_1)(\lambda_2 + \mu_2)}\, (\lambda_1\lambda_2, \ \lambda_1\mu_2, \ \mu_1\lambda_2, \ \mu_1\mu_2).

The transition matrix of the Markov chain Y_2 can be calculated explicitly:

P(t) = \frac{P_1 + P_2 e^{-(\lambda_1 + \mu_1)t} + P_3 e^{-(\lambda_2 + \mu_2)t} + P_4 e^{-(\lambda_1 + \mu_1 + \lambda_2 + \mu_2)t}}{(\lambda_1 + \mu_1)(\lambda_2 + \mu_2)}

where, with the states ordered (00, 01, 10, 11),

P_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix} (\lambda_1\lambda_2 \ \ \lambda_1\mu_2 \ \ \mu_1\lambda_2 \ \ \mu_1\mu_2),

P_2 = \begin{pmatrix} \mu_1\lambda_2 & \mu_1\mu_2 & -\mu_1\lambda_2 & -\mu_1\mu_2 \\ \mu_1\lambda_2 & \mu_1\mu_2 & -\mu_1\lambda_2 & -\mu_1\mu_2 \\ -\lambda_1\lambda_2 & -\lambda_1\mu_2 & \lambda_1\lambda_2 & \lambda_1\mu_2 \\ -\lambda_1\lambda_2 & -\lambda_1\mu_2 & \lambda_1\lambda_2 & \lambda_1\mu_2 \end{pmatrix},

P_3 = \begin{pmatrix} \lambda_1\mu_2 & -\lambda_1\mu_2 & \mu_1\mu_2 & -\mu_1\mu_2 \\ -\lambda_1\lambda_2 & \lambda_1\lambda_2 & -\mu_1\lambda_2 & \mu_1\lambda_2 \\ \lambda_1\mu_2 & -\lambda_1\mu_2 & \mu_1\mu_2 & -\mu_1\mu_2 \\ -\lambda_1\lambda_2 & \lambda_1\lambda_2 & -\mu_1\lambda_2 & \mu_1\lambda_2 \end{pmatrix},

and

P_4 = \begin{pmatrix} \mu_1\mu_2 & -\mu_1\mu_2 & -\mu_1\mu_2 & \mu_1\mu_2 \\ -\mu_1\lambda_2 & \mu_1\lambda_2 & \mu_1\lambda_2 & -\mu_1\lambda_2 \\ -\lambda_1\mu_2 & \lambda_1\mu_2 & \lambda_1\mu_2 & -\lambda_1\mu_2 \\ \lambda_1\lambda_2 & -\lambda_1\lambda_2 & -\lambda_1\lambda_2 & \lambda_1\lambda_2 \end{pmatrix}.

In particular the transition probability from state 11 at time 0 into state 11 at time t is given by

P_{11,11}(t) = \frac{1}{(\lambda_1 + \mu_1)(\lambda_2 + \mu_2)} \left[\mu_1\mu_2 + \lambda_1\mu_2 e^{-(\lambda_1 + \mu_1)t} + \mu_1\lambda_2 e^{-(\lambda_2 + \mu_2)t} + \lambda_1\lambda_2 e^{-(\lambda_1 + \lambda_2 + \mu_1 + \mu_2)t}\right]. \qquad (4.43)
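The decomposition of P(t) can be validated numerically: the Python sketch below (with arbitrary illustrative rates) checks that P(t) is a stochastic matrix, that the semigroup property P(t+s) = P(t)P(s) holds, and that the (11,11) entry agrees with (4.43).

```python
import numpy as np

l1, m1, l2, m2 = 1.0, 4.0, 2.0, 5.0    # illustrative rates

P1 = np.outer(np.ones(4), [l1 * l2, l1 * m2, m1 * l2, m1 * m2])
P2 = np.array([[ m1*l2,  m1*m2, -m1*l2, -m1*m2],
               [ m1*l2,  m1*m2, -m1*l2, -m1*m2],
               [-l1*l2, -l1*m2,  l1*l2,  l1*m2],
               [-l1*l2, -l1*m2,  l1*l2,  l1*m2]])
P3 = np.array([[ l1*m2, -l1*m2,  m1*m2, -m1*m2],
               [-l1*l2,  l1*l2, -m1*l2,  m1*l2],
               [ l1*m2, -l1*m2,  m1*m2, -m1*m2],
               [-l1*l2,  l1*l2, -m1*l2,  m1*l2]])
P4 = np.array([[ m1*m2, -m1*m2, -m1*m2,  m1*m2],
               [-m1*l2,  m1*l2,  m1*l2, -m1*l2],
               [-l1*m2,  l1*m2,  l1*m2, -l1*m2],
               [ l1*l2, -l1*l2, -l1*l2,  l1*l2]])

def P(t):
    c = (l1 + m1) * (l2 + m2)
    return (P1 + P2 * np.exp(-(l1 + m1) * t) + P3 * np.exp(-(l2 + m2) * t)
            + P4 * np.exp(-(l1 + m1 + l2 + m2) * t)) / c

t, s = 0.3, 0.7
assert np.allclose(P(t).sum(axis=1), 1.0)          # rows sum to one
assert np.allclose(P(t + s), P(t) @ P(s))          # Chapman-Kolmogorov
p1111 = (m1*m2 + l1*m2*np.exp(-(l1+m1)*t) + m1*l2*np.exp(-(l2+m2)*t)
         + l1*l2*np.exp(-(l1+l2+m1+m2)*t)) / ((l1+m1)*(l2+m2))
assert abs(P(t)[3, 3] - p1111) < 1e-12             # agrees with (4.43)
print("P(t) decomposition verified")
```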


Under the stationary distribution we have

E_\pi[U(t)] = \pi_{11} t = \frac{\mu_1\mu_2\, t}{(\lambda_1 + \mu_1)(\lambda_2 + \mu_2)}.

For the second moment of U(t), under the stationary distribution,

E_\pi[U^2(t)] = E_\pi\left[\int_0^t 1_{\{11\}}(Y_2(s))\, ds\right]^2
= 2\int_0^t \int_0^s P_\pi(Y_2(r) = 11,\, Y_2(s) = 11)\, dr\, ds
= \frac{2\mu_1\mu_2}{(\lambda_1 + \mu_1)(\lambda_2 + \mu_2)} \int_0^t \int_0^s P_{11,11}(s - r)\, dr\, ds.

Using (4.43) we obtain

E_\pi[U^2(t)] = (\pi_{11} t)^2 + \sigma^2 t + r(t)

where

\sigma^2 = \frac{2\mu_1\mu_2}{(\lambda_1 + \mu_1)^2(\lambda_2 + \mu_2)^2} \left[\frac{\lambda_1\mu_2}{\lambda_1 + \mu_1} + \frac{\mu_1\lambda_2}{\lambda_2 + \mu_2} + \frac{\lambda_1\lambda_2}{\lambda_1 + \lambda_2 + \mu_1 + \mu_2}\right]

and

r(t) = \frac{2\mu_1\mu_2}{(\lambda_1 + \mu_1)^2(\lambda_2 + \mu_2)^2} \left[\frac{\lambda_1\mu_2[e^{-(\lambda_1 + \mu_1)t} - 1]}{(\lambda_1 + \mu_1)^2} + \frac{\mu_1\lambda_2[e^{-(\lambda_2 + \mu_2)t} - 1]}{(\lambda_2 + \mu_2)^2} + \frac{\lambda_1\lambda_2[e^{-(\lambda_1 + \lambda_2 + \mu_1 + \mu_2)t} - 1]}{(\lambda_1 + \lambda_2 + \mu_1 + \mu_2)^2}\right].

It follows that the variance of U(t), under the stationary distribution, is

\mathrm{Var}_\pi[U(t)] = \sigma^2 t + r(t).

From this expression we conclude

\lim_{t\to\infty} \frac{\mathrm{Var}_\pi[U(t)]}{t} = \sigma^2.

This limit is also valid under any initial distribution of the Markov chain Y_2, and can be obtained using formula (8.11) of Iosifescu [21]. Moreover, using another formula on page 256 of Iosifescu, under any initial distribution of the Markov chain Y_n, the limiting distribution of U(t) is normal, i.e.,

\frac{U(t) - \pi_{11} t}{\sigma \sqrt{t}} \xrightarrow{d} N(0, 1) \quad \text{as } t \to \infty.

Now we will consider the probability distribution of U(t). Define for \alpha, \beta > 0 and a \in I

\psi_a(\alpha, \beta) := \int_0^\infty E_a[e^{-\alpha U(t)}] e^{-\beta t} dt

and, starting from t = 0 in state a,

\tau_a := \inf\{t \ge 0 : Y_n(t) \ne a\}.

Starting from 1_n, the random variable \tau_{1_n} is the time at which the chain leaves the state 1_n and

P_{1_n}(\tau_{1_n} > t) = e^{-\sum_{i=1}^n \lambda_i t}.

Conditioning on \tau_{1_n} we obtain the system of equations

\psi_{1_n}(\alpha, \beta) = \frac{1}{\alpha + \beta + \sum_{i=1}^n \lambda_i} \Bigl[1 + \sum_{b \ne 1_n} q_{1_n b}\, \psi_b(\alpha, \beta)\Bigr],

\psi_a(\alpha, \beta) = \frac{1}{\beta + q_a} \Bigl[1 + \sum_{b \ne a} q_{ab}\, \psi_b(\alpha, \beta)\Bigr] \quad \text{if } a \ne 1_n,

where q_a = \sum_{i=1}^n \nu_{i, 1 - \varepsilon_i(a)} whenever a = (\varepsilon_1(a), \varepsilon_2(a), \dots, \varepsilon_n(a)). Solving this system of equations we get the double Laplace transform of U(t). As an example, for n = 2 the solution for \psi_{11}(\alpha, \beta) is given by

\psi_{11}(\alpha, \beta) = \frac{ABC + D\beta + 2C\beta^2 + \beta^3}{\mu_1\mu_2 C\alpha + (ABC + E\alpha)\beta + (D + F\alpha)\beta^2 + (2C + \alpha)\beta^3 + \beta^4} \qquad (4.44)

where A = \lambda_1 + \mu_1, B = \lambda_2 + \mu_2, C = A + B, D = A^2 + B^2 + 3AB, E = AB + \mu_1 A + \mu_2 B + 2\mu_1\mu_2 and F = \mu_1 + \mu_2 + C. Note that the left-hand side of (4.44) can be written as

\int_0^\infty \int_0^t f(x, t) e^{-(\alpha x + \beta t)}\, dx\, dt

where f(x, t) is the density function of U(t) at time t. Transforming back the double Laplace transform (4.44) with respect to \alpha, we get

\int_0^\infty f(x, t) e^{-\beta t} dt = \frac{ABC + D\beta + 2C\beta^2 + \beta^3}{\mu_1\mu_2 C + E\beta + F\beta^2 + \beta^3}\, e^{-\frac{(ABC\beta + D\beta^2 + 2C\beta^3 + \beta^4)x}{\mu_1\mu_2 C + E\beta + F\beta^2 + \beta^3}}.

The probability density function of U(t) can be obtained by numerically inverting this transform.
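Since (4.44) comes from a finite linear system, it can be verified directly: the Python sketch below solves the four conditioning equations numerically for n = 2 (with arbitrary illustrative rates) and compares the result with the closed form, including the consistency check \psi_{11}(0, \beta) = 1/\beta.

```python
import numpy as np

l1, m1, l2, m2 = 1.0, 4.0, 2.0, 5.0    # illustrative rates

def psi11_formula(a, b):
    A, B = l1 + m1, l2 + m2
    C = A + B
    D = A**2 + B**2 + 3 * A * B
    E = A * B + m1 * A + m2 * B + 2 * m1 * m2
    F = m1 + m2 + C
    num = A * B * C + D * b + 2 * C * b**2 + b**3
    den = (m1 * m2 * C * a + (A * B * C + E * a) * b
           + (D + F * a) * b**2 + (2 * C + a) * b**3 + b**4)
    return num / den

def psi11_solve(a, b):
    # Unknowns ordered (psi_11, psi_01, psi_10, psi_00); the uptime clock
    # alpha runs only in state 11.  From 11: rates l1 -> 01 and l2 -> 10, etc.
    M = np.array([
        [a + b + l1 + l2, -l1, -l2, 0.0],
        [-m1, b + m1 + l2, 0.0, -l2],
        [-m2, 0.0, b + m2 + l1, -l1],
        [0.0, -m2, -m1, b + m1 + m2],
    ])
    return np.linalg.solve(M, np.ones(4))[0]

for a, b in [(0.5, 1.0), (2.0, 0.3), (1.0, 1.0)]:
    assert abs(psi11_formula(a, b) - psi11_solve(a, b)) < 1e-12
assert abs(psi11_formula(0.0, 2.0) - 0.5) < 1e-12   # alpha = 0 gives 1/beta
print("formula (4.44) agrees with the direct solve")
```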

4.7.2 Arbitrary failure or repair times

In general it is complicated to obtain an explicit expression for the distribution of the total uptime of systems comprising n \ge 2 components when the failure or repair times of the components are arbitrarily distributed. In some cases it is possible to derive an expression for the mean of the total uptime.

Consider the series system in the previous subsection. Assume that for each i, the random variables X_{ij}, j = 1, 2, \dots, have a common distribution function


F_i, and the random variables Y_{ij}, j = 1, 2, \dots, have a common distribution function G_i. Denote by F_i^* and G_i^* the Laplace-Stieltjes transforms of F_i and G_i respectively. Let A_{11}^{(i)}(t) be the availability of the ith component at time t. Then from (4.42), the mean of the total uptime of the series system can be formulated as

E[U(t)] = \int_0^t \prod_{i=1}^n A_{11}^{(i)}(s)\, ds. \qquad (4.45)

In some cases we have analytic expressions for the availability, which can be obtained by inverting its Laplace transform given by

\int_0^\infty A_{11}^{(i)}(t) e^{-\beta t} dt = \frac{1 - F_i^*(\beta)}{\beta[1 - F_i^*(\beta) G_i^*(\beta)]}, \qquad (4.46)

see (4.13).

As an example let n = 2. Suppose that X_{1j} and Y_{1j} are exponentially distributed with parameters 1 and 2 respectively. Suppose also that X_{2j} \sim Gamma(1, 2), having pdf

f(x) = x e^{-x}, \quad x \ge 0,

and Y_{2j} \sim Gamma(2, 2). Then using (4.46), we obtain

A_{11}^{(1)}(t) = \frac{2}{3} + \frac{1}{3} e^{-3t}

and

A_{11}^{(2)}(t) = \frac{2}{3} + \frac{1}{12} e^{-3t} + \frac{1}{28} e^{-3t/2} \left[7\cos(\sqrt{7}\, t/2) + 5\sqrt{7} \sin(\sqrt{7}\, t/2)\right].

Using (4.45) we obtain

E[U(t)] = \frac{4}{9} t + \frac{115}{396} - \frac{1}{216} e^{-6t} - \frac{5}{54} e^{-3t} - \left[\frac{7}{264} e^{-9t/2} + \frac{1}{6} e^{-3t/2}\right] \cos(\sqrt{7}\, t/2) - \left[\frac{19}{1848} e^{-9t/2} + \frac{1}{42} e^{-3t/2}\right] \sqrt{7} \sin(\sqrt{7}\, t/2).
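The closed form for E[U(t)] can be checked against (4.45) by numerical integration of the product of the two availabilities; a short Python sketch (t = 5 is an arbitrary test point):

```python
import numpy as np

r7 = np.sqrt(7.0)

def A1(s):
    # availability of component 1: exp(1) failures, exp(2) repairs
    return 2/3 + np.exp(-3*s)/3

def A2(s):
    # availability of component 2: Gamma(1,2) failures, Gamma(2,2) repairs
    return (2/3 + np.exp(-3*s)/12
            + np.exp(-1.5*s)/28 * (7*np.cos(r7*s/2) + 5*r7*np.sin(r7*s/2)))

def EU_closed(t):
    return (4*t/9 + 115/396 - np.exp(-6*t)/216 - 5*np.exp(-3*t)/54
            - (7*np.exp(-4.5*t)/264 + np.exp(-1.5*t)/6) * np.cos(r7*t/2)
            - (19*np.exp(-4.5*t)/1848 + np.exp(-1.5*t)/42) * r7*np.sin(r7*t/2))

t = 5.0
s = np.linspace(0.0, t, 200001)
v = A1(s) * A2(s)
EU_num = (s[1] - s[0]) * (v.sum() - 0.5*(v[0] + v[-1]))   # trapezoidal rule
print(EU_num, EU_closed(t))
```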


Appendix A

The proof of Theorem 2.5.1

In Section 2.5 we have proved that, with probability 1, N(t) = N(t)(\Phi) where \Phi is a Poisson point process having intensity measure \nu(ds\,dx) = ds\,dF(x) and

N(t)(\mu) = \int_E \int_E 1_{[0,x)}(t - A(s, \mu))\, 1_{[0,s)}(u)\, \mu(du\,dv)\, \mu(ds\,dx), \qquad (A.1)

where

A(s, \mu) = \int_E 1_{[0,s)}(y)\, z\, \mu(dy\,dz)

and E = [0, \infty) \times [0, \infty). In the sequel we will write I^n = [0, \infty)^n and M = M_p(E), the set of all point measures on E. The distribution of \Phi on M is denoted by P_\nu.

The main tools or arguments that we will use in this proof are:

(1) Fubini's theorem
(2) substitution
(3) the Palm formula for Poisson point processes (see Theorem 1.2.4)
(4) the Laplace functional of Poisson point processes (see Theorem 1.2.1).

We indicate the use of these arguments by writing the corresponding numbers over the equality signs. For example, the notation \overset{(1,2)}{=} means that we have used Fubini's theorem and substitution one or several times. We will also use the following notation: for fixed \alpha, \beta \ge 0 we put

P = 1 - F^*(\alpha)
Q = 1 - F^*(\beta)
R = 1 - F^*(\alpha + \beta)
S = F^*(\alpha) - F^*(\alpha + \beta)
T = F^*(\beta) - F^*(\alpha + \beta)


where F ∗ denotes the Laplace-Stieltjes transform of F . Note that P + S =Q + T = R.

Define for fixed \alpha, \beta \ge 0

L(\alpha, \beta, s, \bar{s}) := \int_M e^{-\alpha A(s, \mu) - \beta A(\bar{s}, \mu)}\, P_\nu(d\mu), \quad s, \bar{s} \ge 0.

For s > \bar{s},

L(\alpha, \beta, s, \bar{s}) = \int_M \exp\Bigl\{-\int_E \bigl[1_{[0,s)}(y)\,\alpha z + 1_{[0,\bar{s})}(y)\,\beta z\bigr]\, \mu(dy\,dz)\Bigr\}\, P_\nu(d\mu)

\overset{(4)}{=} \exp\Bigl\{-\int_0^\infty \int_0^\infty \bigl[1 - e^{-1_{[0,s)}(y)\alpha z - 1_{[0,\bar{s})}(y)\beta z}\bigr]\, dF(z)\, dy\Bigr\}

= \exp\Bigl\{-\int_0^{\bar{s}} \int_0^\infty [1 - e^{-(\alpha+\beta)z}]\, dF(z)\, dy - \int_{\bar{s}}^s \int_0^\infty [1 - e^{-\alpha z}]\, dF(z)\, dy\Bigr\}

= \exp\bigl\{-\bar{s}[1 - F^*(\alpha+\beta)] - (s - \bar{s})[1 - F^*(\alpha)]\bigr\}

= \exp\bigl\{-s[1 - F^*(\alpha)] - \bar{s}[F^*(\alpha) - F^*(\alpha+\beta)]\bigr\}

= e^{-sP - \bar{s}S}.

In the sequel we will write L(\alpha, \beta, s, \bar{s}; s > \bar{s}) = L(\alpha, \beta, s, \bar{s}) when s > \bar{s}. Hence

L(\alpha, \beta, s, \bar{s}; s > \bar{s}) = e^{-sP - \bar{s}S}.

Similarly, it can be proved that for s < \bar{s}

L(\alpha, \beta, s, \bar{s}; s < \bar{s}) = e^{-\bar{s}Q - sT},

and for s = \bar{s}

L(\alpha, \beta, s, s) := L(\alpha, \beta, s, \bar{s}; s = \bar{s}) = e^{-sR}.

Now we will calculate the double Laplace transform of E[N(t_1)N(t_2)]. Using (A.1) we obtain

\int_0^\infty \int_0^\infty E[N(t_1)N(t_2)] e^{-\alpha t_1 - \beta t_2}\, dt_1\, dt_2
= \int_{I^2} \int_M N(t_1)(\mu)\, N(t_2)(\mu)\, P_\nu(d\mu)\, e^{-\alpha t_1 - \beta t_2}\, dt_1\, dt_2

\overset{(2)}{=} \int_{I^2} \int_M \Bigl[\int_{I^4} 1_{[0,x)}(t_1 - A(s,\mu))\, 1_{[0,s)}(u)\, \mu(du\,dv)\, \mu(ds\,dx) \int_{I^4} 1_{[0,\bar x)}(t_2 - A(\bar s,\mu))\, 1_{[0,\bar s)}(\bar u)\, \mu(d\bar u\,d\bar v)\, \mu(d\bar s\,d\bar x)\Bigr] e^{-\alpha t_1 - \beta t_2}\, P_\nu(d\mu)\, dt_1\, dt_2

\overset{(1,2,3)}{=} \frac{1}{\alpha} \int_{I^3} \int_M \int_{I^6} 1_{[0,\bar x)}(t_2 - A(\bar s, \mu + \delta_{(s,x)}))\, 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}]\, e^{-\beta t_2} e^{-\alpha A(s,\mu)}\, (\mu + \delta_{(s,x)})(d\bar u\,d\bar v)\, (\mu + \delta_{(s,x)})(d\bar s\,d\bar x)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(x)\, ds\, dt_2.

Note that this integral can be split into four terms. For one of these terms, the integration over I^6 is with respect to \delta_{(s,x)}(d\bar s\,d\bar x)\, \delta_{(s,x)}(d\bar u\,d\bar v). This integral equals 0, because the integrand contains the factor 1_{[0,\bar s)}(\bar u), which with respect to these measures integrates to 1_{[0,s)}(s) = 0. So we only need to calculate the three remaining integrals.

Case 1: the integral with \mu(d\bar s\,d\bar x) and \delta_{(s,x)}(d\bar u\,d\bar v). In this case

T_1 := \frac{1}{\alpha} \int_{I^3} \int_M \int_{I^6} 1_{[0,\bar x)}(t_2 - A(\bar s, \mu + \delta_{(s,x)}))\, 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}]\, e^{-\beta t_2} e^{-\alpha A(s,\mu)}\, \delta_{(s,x)}(d\bar u\,d\bar v)\, \mu(d\bar s\,d\bar x)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(x)\, ds\, dt_2

\overset{(1,2)}{=} \frac{1}{\alpha\beta} \int_{I^2} \int_M \int_{I^4} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(s)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu) - \beta A(\bar s,\mu) - \beta x}\, \mu(du\,dv)\, \mu(d\bar s\,d\bar x)\, P_\nu(d\mu)\, dF(x)\, ds

\overset{(3)}{=} \frac{T}{\alpha\beta} \int_{I^3} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(s)\, [1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu) - \beta A(\bar s,\mu)}\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(\bar x)\, ds\, d\bar s

\overset{(3)}{=} \frac{QT}{\alpha\beta} \int_0^\infty \int_s^\infty \int_{I^2} \int_M 1_{[0,s)}(u)\, e^{-\alpha A(s,\mu) - \alpha v - \beta A(\bar s,\mu) - \beta v}\, P_\nu(d\mu)\, dF(v)\, du\, d\bar s\, ds

= \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty s\, L(\alpha, \beta, s, \bar s; s < \bar s)\, d\bar s\, ds

\overset{(4)}{=} \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty s\, e^{-\bar s Q - sT}\, d\bar s\, ds

= \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{QR^2}

= \frac{F^*(\alpha+\beta)[F^*(\beta) - F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^2}.


Case 2: the integral with \delta_{(s,x)}(d\bar s\,d\bar x) and \mu(d\bar u\,d\bar v). In this case

T_2 := \frac{1}{\alpha} \int_{I^3} \int_M \int_{I^6} 1_{[0,\bar x)}(t_2 - A(\bar s, \mu + \delta_{(s,x)}))\, 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}]\, e^{-\beta t_2} e^{-\alpha A(s,\mu)}\, \mu(d\bar u\,d\bar v)\, \delta_{(s,x)}(d\bar s\,d\bar x)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(x)\, ds\, dt_2

= \frac{1}{\alpha} \int_{I^3} \int_M \int_{I^4} 1_{[0,x)}(t_2 - A(s, \mu))\, 1_{[0,s)}(u)\, 1_{[0,s)}(\bar u)\, [1 - e^{-\alpha x}]\, e^{-\alpha A(s,\mu)} e^{-\beta t_2}\, \mu(d\bar u\,d\bar v)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(x)\, ds\, dt_2

\overset{(1,2,3)}{=} \frac{1}{\alpha\beta} \int_{I^4} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta x}]\, e^{-(\alpha+\beta)A(s,\mu) - (\alpha+\beta)v}\, (\mu + \delta_{(u,v)})(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(v)\, du\, dF(x)\, ds

= \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \int_{I^2} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,s)}(\bar u)\, e^{-(\alpha+\beta)A(s,\mu)}\, (\mu + \delta_{(u,v)})(d\bar u\,d\bar v)\, P_\nu(d\mu)\, du\, ds.

This integral can be split into two terms.

Subcase 21: using the measure \mu(d\bar u\,d\bar v) in the inner integral. In this case

T_{21} := \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \int_{I^2} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,s)}(\bar u)\, e^{-(\alpha+\beta)A(s,\mu)}\, \mu(d\bar u\,d\bar v)\, P_\nu(d\mu)\, du\, ds

\overset{(3)}{=} \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \int_{I^3} \int_M 1_{[0,s)}(u)\, s\, e^{-(\alpha+\beta)A(s,\mu) - (\alpha+\beta)v}\, P_\nu(d\mu)\, dF(v)\, du\, ds

= \frac{(P - T)\, F^*(\alpha+\beta)^2}{\alpha\beta} \int_0^\infty s^2 L(\alpha, \beta, s, s)\, ds

\overset{(4)}{=} \frac{(P - T)\, F^*(\alpha+\beta)^2}{\alpha\beta} \int_0^\infty s^2 e^{-sR}\, ds

= \frac{(P - T)\, F^*(\alpha+\beta)^2}{\alpha\beta} \cdot \frac{2}{R^3}

= \frac{2 F^*(\alpha+\beta)^2 [1 - F^*(\alpha) - F^*(\beta) + F^*(\alpha+\beta)]}{\alpha\beta [1 - F^*(\alpha+\beta)]^3}.


Subcase 22: using the measure \delta_{(u,v)}(d\bar u\,d\bar v) in the inner integral. In this case

T_{22} := \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \int_{I^2} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,s)}(\bar u)\, e^{-(\alpha+\beta)A(s,\mu)}\, \delta_{(u,v)}(d\bar u\,d\bar v)\, P_\nu(d\mu)\, du\, ds

= \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \int_{I^2} 1_{[0,s)}(u)\, L(\alpha, \beta, s, s)\, du\, ds

\overset{(4)}{=} \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty s\, e^{-sR}\, ds

= \frac{(P - T)\, F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{R^2}

= \frac{F^*(\alpha+\beta)[1 - F^*(\alpha) - F^*(\beta) + F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^2}.

Hence

T_2 = T_{21} + T_{22} = \frac{F^*(\alpha+\beta)[1 + F^*(\alpha+\beta)][1 - F^*(\alpha) - F^*(\beta) + F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^3}.

Case 3: the integral with \mu(d\bar s\,d\bar x) and \mu(d\bar u\,d\bar v). In this case

T_3 := \frac{1}{\alpha} \int_{I^3} \int_M \int_{I^6} 1_{[0,\bar x)}(t_2 - A(\bar s, \mu + \delta_{(s,x)}))\, 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}]\, e^{-\beta t_2} e^{-\alpha A(s,\mu)}\, \mu(d\bar u\,d\bar v)\, \mu(d\bar s\,d\bar x)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(x)\, ds\, dt_2

\overset{(1,2)}{=} \frac{1}{\alpha\beta} \int_{I^2} \int_M \int_{I^6} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu)} e^{-\beta A(\bar s, \mu + \delta_{(s,x)})}\, \mu(d\bar u\,d\bar v)\, \mu(d\bar s\,d\bar x)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(x)\, ds

\overset{(3)}{=} \frac{1}{\alpha\beta} \int_{I^4} \int_M \int_{I^4} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)})} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(\bar s,\bar x)})}\, \mu(d\bar u\,d\bar v)\, (\mu + \delta_{(\bar s,\bar x)})(du\,dv)\, P_\nu(d\mu)\, dF(\bar x)\, d\bar s\, dF(x)\, ds.

This integral can be split into two terms.


Subcase 31: using the measure \delta_{(\bar s,\bar x)}(du\,dv). In this case

T_{31} := \frac{1}{\alpha\beta} \int_{I^4} \int_M \int_{I^4} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)})} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(\bar s,\bar x)})}\, \mu(d\bar u\,d\bar v)\, \delta_{(\bar s,\bar x)}(du\,dv)\, P_\nu(d\mu)\, dF(\bar x)\, d\bar s\, dF(x)\, ds

= \frac{1}{\alpha\beta} \int_{I^4} \int_M \int_{I^2} 1_{[0,s)}(\bar s)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu) - \alpha \bar x} e^{-\beta A(\bar s,\mu)}\, \mu(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(\bar x)\, d\bar s\, dF(x)\, ds

\overset{(3)}{=} \frac{PS}{\alpha\beta} \int_0^\infty \int_0^s \int_{I^2} \int_M 1_{[0,\bar s)}(\bar u)\, e^{-\alpha A(s,\mu) - \alpha \bar v} e^{-\beta A(\bar s,\mu) - \beta \bar v}\, P_\nu(d\mu)\, dF(\bar v)\, d\bar u\, d\bar s\, ds

= \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \bar s\, L(\alpha, \beta, s, \bar s; s > \bar s)\, d\bar s\, ds

\overset{(4)}{=} \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \bar s\, e^{-sP - \bar s S}\, d\bar s\, ds

= \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{PR^2}

= \frac{F^*(\alpha+\beta)[F^*(\alpha) - F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^2}.

Subcase 32: using the measure \mu(du\,dv). In this case

T_{32} := \frac{1}{\alpha\beta} \int_{I^4} \int_M \int_{I^4} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)})} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(\bar s,\bar x)})}\, \mu(d\bar u\,d\bar v)\, \mu(du\,dv)\, P_\nu(d\mu)\, dF(\bar x)\, d\bar s\, dF(x)\, ds

\overset{(3)}{=} \frac{1}{\alpha\beta} \int_{I^6} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)}) - \alpha v} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(u,v)})}\, (\mu + \delta_{(u,v)})(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds.

This integral can be split into two terms.

Sub-subcase 321: using the measure \mu(d\bar u\,d\bar v) in the inner integral. In this case

T_{321} := \frac{1}{\alpha\beta} \int_{I^6} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)}) - \alpha v} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(u,v)})}\, \mu(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

= A + B


where A is the contribution of the region \bar s < s and B that of \bar s > s. For A,

A = \frac{1}{\alpha\beta} \int_{I^2} \int_0^s \int_{I^3} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)}) - \alpha v} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(u,v)})}\, \mu(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

\overset{(3)}{=} \frac{1}{\alpha\beta} \int_{I^2} \int_0^s \int_{I^5} \int_M 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu) - \alpha \bar x - \alpha v - \alpha \bar v} e^{-\beta A(\bar s, \mu + \delta_{(u,v)}) - \beta \bar v}\, P_\nu(d\mu)\, dF(\bar v)\, d\bar u\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

= \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \int_0^{\bar s} \int_0^\infty \int_M \bar s\, e^{-\alpha A(s,\mu) - \alpha v} e^{-\beta A(\bar s,\mu) - \beta v}\, P_\nu(d\mu)\, dF(v)\, du\, d\bar s\, ds

+ \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \int_{\bar s}^s \int_0^\infty \int_M \bar s\, e^{-\alpha A(s,\mu) - \alpha v} e^{-\beta A(\bar s,\mu)}\, P_\nu(d\mu)\, dF(v)\, du\, d\bar s\, ds

= \frac{PS\, F^*(\alpha+\beta)^2}{\alpha\beta} \int_0^\infty \int_0^s \bar s^2 L(\alpha, \beta, s, \bar s; s > \bar s)\, d\bar s\, ds + \frac{PS\, F^*(\alpha) F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \bar s(s - \bar s)\, L(\alpha, \beta, s, \bar s; s > \bar s)\, d\bar s\, ds

\overset{(4)}{=} \frac{PS\, F^*(\alpha+\beta)^2}{\alpha\beta} \int_0^\infty \int_0^s \bar s^2 e^{-sP - \bar s S}\, d\bar s\, ds + \frac{PS\, F^*(\alpha) F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \bar s(s - \bar s)\, e^{-sP - \bar s S}\, d\bar s\, ds

= \frac{PS\, F^*(\alpha+\beta)^2}{\alpha\beta} \cdot \frac{2}{PR^3} + \frac{PS\, F^*(\alpha) F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{P^2 R^2}

= \frac{F^*(\alpha+\beta)[F^*(\alpha) - F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^2} \left[\frac{2F^*(\alpha+\beta)}{1 - F^*(\alpha+\beta)} + \frac{F^*(\alpha)}{1 - F^*(\alpha)}\right],

and for B,

B = \frac{1}{\alpha\beta} \int_{I^2} \int_s^\infty \int_{I^3} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)}) - \alpha v} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(u,v)}) - \beta x}\, \mu(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

\overset{(3)}{=} \frac{1}{\alpha\beta} \int_{I^2} \int_s^\infty \int_{I^5} \int_M 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar u,\bar v)}) - \alpha v} e^{-\beta A(\bar s,\mu) - \beta x - \beta v - \beta \bar v}\, P_\nu(d\mu)\, dF(\bar v)\, d\bar u\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

= \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty \int_0^s \int_0^\infty \int_M s\, e^{-\alpha A(s,\mu) - \alpha \bar v} e^{-\beta A(\bar s,\mu) - \beta \bar v}\, P_\nu(d\mu)\, dF(\bar v)\, d\bar u\, d\bar s\, ds

+ \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty \int_s^{\bar s} \int_0^\infty \int_M s\, e^{-\alpha A(s,\mu)} e^{-\beta A(\bar s,\mu) - \beta \bar v}\, P_\nu(d\mu)\, dF(\bar v)\, d\bar u\, d\bar s\, ds

= \frac{QT\, F^*(\alpha+\beta)^2}{\alpha\beta} \int_0^\infty \int_s^\infty s^2 L(\alpha, \beta, s, \bar s; s < \bar s)\, d\bar s\, ds + \frac{QT\, F^*(\beta) F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty s(\bar s - s)\, L(\alpha, \beta, s, \bar s; s < \bar s)\, d\bar s\, ds

\overset{(4)}{=} \frac{QT\, F^*(\alpha+\beta)^2}{\alpha\beta} \int_0^\infty \int_s^\infty s^2 e^{-\bar s Q - sT}\, d\bar s\, ds + \frac{QT\, F^*(\beta) F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty s(\bar s - s)\, e^{-\bar s Q - sT}\, d\bar s\, ds

= \frac{QT\, F^*(\alpha+\beta)^2}{\alpha\beta} \cdot \frac{2}{QR^3} + \frac{QT\, F^*(\beta) F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{Q^2 R^2}

= \frac{F^*(\alpha+\beta)[F^*(\beta) - F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^2} \left[\frac{2F^*(\alpha+\beta)}{1 - F^*(\alpha+\beta)} + \frac{F^*(\beta)}{1 - F^*(\beta)}\right].

Sub-subcase 322: using the measure \delta_{(u,v)}(d\bar u\,d\bar v) in the inner integral. In this case

T_{322} := \frac{1}{\alpha\beta} \int_{I^6} \int_M \int_{I^2} 1_{[0,s)}(u)\, 1_{[0,\bar s)}(\bar u)\, [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s, \mu + \delta_{(\bar s,\bar x)}) - \alpha v} e^{-\beta A(\bar s, \mu + \delta_{(s,x)} + \delta_{(u,v)})}\, \delta_{(u,v)}(d\bar u\,d\bar v)\, P_\nu(d\mu)\, dF(v)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

= \frac{F^*(\alpha+\beta)}{\alpha\beta} \int_{I^2} \int_0^s \int_0^\infty \int_0^{\bar s} \int_M [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu) - \alpha \bar x} e^{-\beta A(\bar s,\mu)}\, P_\nu(d\mu)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

+ \frac{F^*(\alpha+\beta)}{\alpha\beta} \int_{I^2} \int_s^\infty \int_0^\infty \int_0^s \int_M [1 - e^{-\alpha x}][1 - e^{-\beta \bar x}]\, e^{-\alpha A(s,\mu)} e^{-\beta A(\bar s,\mu) - \beta x}\, P_\nu(d\mu)\, du\, dF(\bar x)\, d\bar s\, dF(x)\, ds

= \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \bar s\, L(\alpha, \beta, s, \bar s; s > \bar s)\, d\bar s\, ds + \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty s\, L(\alpha, \beta, s, \bar s; s < \bar s)\, d\bar s\, ds

\overset{(4)}{=} \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_0^s \bar s\, e^{-sP - \bar s S}\, d\bar s\, ds + \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \int_0^\infty \int_s^\infty s\, e^{-\bar s Q - sT}\, d\bar s\, ds

= \frac{PS\, F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{PR^2} + \frac{QT\, F^*(\alpha+\beta)}{\alpha\beta} \cdot \frac{1}{QR^2}

= \frac{F^*(\alpha+\beta)[F^*(\alpha) + F^*(\beta) - 2F^*(\alpha+\beta)]}{\alpha\beta[1 - F^*(\alpha+\beta)]^2}.

So we obtain

T_3 = T_{31} + A + B + T_{322}

= \frac{F^*(\alpha+\beta)}{\alpha\beta[1 - F^*(\alpha+\beta)]^2} \Biggl\{ [F^*(\alpha) - F^*(\alpha+\beta)] \Biggl[\frac{2}{1 - F^*(\alpha+\beta)} + \frac{F^*(\alpha)}{1 - F^*(\alpha)}\Biggr] + [F^*(\beta) - F^*(\alpha+\beta)] \Biggl[\frac{1 + F^*(\alpha+\beta)}{1 - F^*(\alpha+\beta)} + \frac{F^*(\beta)}{1 - F^*(\beta)}\Biggr] \Biggr\}.

It follows that

\int_0^\infty \int_0^\infty E[N(t_1)N(t_2)] e^{-\alpha t_1 - \beta t_2}\, dt_1\, dt_2 = T_1 + T_2 + T_3 = \frac{[1 - F^*(\alpha)F^*(\beta)]\, F^*(\alpha+\beta)}{\alpha\beta[1 - F^*(\alpha)][1 - F^*(\beta)][1 - F^*(\alpha+\beta)]}. □
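For exponential F the renewal process is Poisson and E[N(t_1)N(t_2)] = \lambda^2 t_1 t_2 + \lambda \min(t_1, t_2), whose double Laplace transform is \lambda^2/(\alpha^2\beta^2) + \lambda/(\alpha\beta(\alpha+\beta)); this gives a quick numerical check of the identity just proved. A Python sketch (the rate \lambda = 1.7 is arbitrary):

```python
lam = 1.7
Fs = lambda s: lam / (lam + s)          # Laplace-Stieltjes transform of exp(lam)
for a, b in [(0.5, 1.0), (2.0, 0.3)]:
    lhs = lam**2 / (a**2 * b**2) + lam / (a * b * (a + b))   # Poisson moments
    rhs = ((1 - Fs(a) * Fs(b)) * Fs(a + b)
           / (a * b * (1 - Fs(a)) * (1 - Fs(b)) * (1 - Fs(a + b))))
    assert abs(lhs - rhs) < 1e-9 * lhs
print("double transform formula verified for the Poisson case")
```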


Appendix B

Numerical inversions of Laplace transforms

B.1 Single Laplace transform

Let f be a real-valued function defined on the positive half-line. The Laplace transform of f is defined to be

\hat{f}(\beta) = \int_0^\infty f(t) e^{-\beta t} dt, \qquad (B.1)

where \beta is a complex variable, whenever this integral exists. Given \hat{f}, we can retrieve the original function f using the following inversion formula:

f(t) = \frac{1}{2\pi i} \int_{a - i\infty}^{a + i\infty} e^{t\beta} \hat{f}(\beta)\, d\beta = \frac{e^{at}}{2\pi} \int_{-\infty}^\infty e^{itu} \hat{f}(a + iu)\, du, \qquad (B.2)

where a is a real number chosen such that \hat{f}(\beta) has no singularity on or to the right of the vertical line \Re(\beta) = a, see e.g. Abate and Whitt [1]. For some Laplace transforms \hat{f} we have analytic expressions for f; a table of these is available, see for example Oberhettinger [28]. When the transform cannot be inverted analytically, we can approximate the function f numerically. Several numerical inversion algorithms have been proposed by several authors, see for example Abate and Whitt [1], Weeks [47] and Iseger [22]. Following Abate and Whitt, we will use the trapezoidal rule to approximate the integral in (B.2) and analyze the corresponding discretization error using the Poisson summation formula.


The trapezoidal rule approximates the integral of a function g over the bounded interval [c, d] by the integral of the piecewise linear function obtained by connecting the n + 1 evenly spaced points g(c + kh), 0 \le k \le n, where h = (d - c)/n, i.e.,

\int_c^d g(x)\, dx \approx h\left[\frac{g(c) + g(d)}{2} + \sum_{k=1}^{n-1} g(c + kh)\right],

see Davis and Rabinowitz [9]. In case c = -\infty and d = \infty we approximate the integral of g over the real line as

\int_{-\infty}^\infty g(x)\, dx \approx h_1 \sum_{k=-\infty}^\infty g(k h_1) \qquad (B.3)

where h_1 is a small positive constant. This formula can also be obtained using the trapezoidal rule with obvious modifications.

Applying (B.3) to (B.2) with step size h_1 = \pi/t, t > 0, and letting a = A/t at the same time, we get

f(t) \approx \frac{e^A}{2t} \sum_{k=-\infty}^\infty (-1)^k \hat{f}([A + i\pi k]/t). \qquad (B.4)

This approximation can also be obtained by using the Poisson summation formula: for an integrable function g,

\sum_{k=-\infty}^\infty g(t + 2\pi k/h_2) = \frac{h_2}{2\pi} \sum_{k=-\infty}^\infty \varphi(k h_2)\, e^{-i h_2 t k} \qquad (B.5)

where h_2 is some positive constant and \varphi(u) = \int_{-\infty}^\infty g(x) e^{iux} dx, the Fourier transform of g. Taking g(x) = e^{-a_1 x} f(x)\, 1_{[0,\infty)}(x) in (B.5), where a_1 is chosen such that the function g is integrable, we obtain

\sum_{k=0}^\infty e^{-a_1(t + 2\pi k/h_2)} f(t + 2\pi k/h_2) = \frac{h_2}{2\pi} \sum_{k=-\infty}^\infty \hat{f}(a_1 - i k h_2)\, e^{-i h_2 t k} \qquad (B.6)

where \hat{f} is the Laplace transform of f, see (B.1). Letting a_1 = A/t and h_2 = \pi/t in (B.6) we obtain

f(t) = \frac{e^A}{2t} \sum_{k=-\infty}^\infty (-1)^k \hat{f}([A + i\pi k]/t) - e_d \qquad (B.7)

where

e_d = \sum_{k=1}^\infty e^{-2kA} f([2k + 1]t).


Comparing (B.4) and (B.7), we conclude that ed is an explicit expression forthe discretization error associated with the trapezoidal rule approximation. Thisdiscretization error can easily be bounded whenever f is bounded. For exampleif |f(x)| ≤ C then |ed| ≤ Ce−2A/(1− e−2A), and if |f(x)| ≤ Cx then

|ed| ≤ (3e−2A − e−4A)Cx

(1− e−2A)2.

We used (B.4) to invert numerically Laplace transforms in this thesis. Notethat the formula (B.4) can be expressed as

f(t) ≈ eA

2tf(A/t) +

eA

t

∞∑

k=1

(−1)k<(f([A + iπk]/t)) (B.8)

by using the fact that f(β) + f(β) = 2<(f(β)), where β and <(β) denote thecomplex conjugate and real part of β respectively.

Below we give a numerical-inversion example of Laplace transform of theexpected value of the instantaneous reward process in Chapter 2. We write theprograms in Matlab.

Example B.1.1 In Subsection 2.2.1 equation (2.12) we have the Laplace trans-form of the expected value of an instantaneous reward process:

∫ ∞

0

E[Rφ(t)]e−βtdt =(β + λ)e−2(β+λ)

β2[1− e−2(β+λ)]. (B.9)

Denote the right-hand side of (B.9) by f(β). Using (B.8) we get

E[Rφ(t)] ≈ eA

2tf(A/t) +

eA

t

M∑

k=1

(−1)k<(f([A + iπk]/t)).

The following is a Matlab program for approximating E[Rφ(t)] for λ = 0.1741.

function [f]=expmean(t,M)

A=5;P=exp(A)/(2*t);B=A/t;C=i*pi/t;lambda=0.1741;D=B+lambda;

Page 118: Renewal Processes and Repairable Systems

108 Numerical inversions of Laplace transforms

T1=P*D*exp(-2*D)/(B^2*(1-exp(-2*D)));

m=0;for k=1:M

beta=B+C*k;U=(beta+lambda)*exp(-2*(beta+lambda));V=beta^2*(1-exp(-2*(beta+lambda)));R=real(U/V);m=m+(-1)^k*R;

end

T2=2*P*m;f=T1+T2; %f=E[R_\phi(t)].

B.2 Double Laplace transform

This section concerns a generalization of the result in the previous section. Werefer to Choudhury at al. [5] with some modifications. Let f(t1, t2) be a real-valued function of non-negative real variables t1 and t2. Denote its doubleLaplace transform by

f(α, β) =∫ ∞

0

∫ ∞

0

f(t1, t2)e−(αt1+βt2)dt1dt2

where $\alpha$ and $\beta$ are complex variables, provided the integral exists. To retrieve the function $f$ numerically from $\hat f$, we use the two-dimensional Poisson summation formula: for a real-valued function $g$ on $\mathbb{R}^2$,
\[
\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} g(t_1+2\pi j/h_1,\,t_2+2\pi k/h_2)
= \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} \frac{h_1h_2}{4\pi^2}\,\varphi(jh_1,kh_2)\,e^{-i(h_1t_1j+h_2t_2k)}, \tag{B.10}
\]
where $\varphi$ denotes the bivariate Fourier transform of $g$, i.e.,
\[
\varphi(u,v) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x,y)\,e^{i(xu+yv)}\,dx\,dy.
\]


Taking $g(x,y) = f(x,y)\,e^{-(a_1x+a_2y)}$ for $x,y \ge 0$ and $g(x,y) = 0$ otherwise in (B.10), we obtain
\[
\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} e^{-[a_1(t_1+2\pi j/h_1)+a_2(t_2+2\pi k/h_2)]}\, f(t_1+2\pi j/h_1,\,t_2+2\pi k/h_2)
= \frac{h_1h_2}{4\pi^2}\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} \hat f(a_1-ijh_1,\,a_2-ikh_2)\,e^{-i(h_1t_1j+h_2t_2k)}.
\]

Letting $h_1=\pi/t_1$, $h_2=\pi/t_2$, $a_1=A_1/t_1$, $a_2=A_2/t_2$, we obtain, after some simplifications,
\[
f(t_1,t_2) = \frac{e^{A_1+A_2}}{4t_1t_2}\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}(-1)^{j+k}\,\hat f\bigl([A_1-i\pi j]/t_1,\,[A_2-i\pi k]/t_2\bigr) - e_d, \tag{B.11}
\]
where
\[
e_d = \sum_{\substack{j,k\ge 0\\ (j,k)\neq(0,0)}} e^{-(A_1j+A_2k)}\, f\bigl([2j+1]t_1,\,[2k+1]t_2\bigr).
\]

If $|f(t_1,t_2)| \le C$ for some constant $C$, then the error $e_d$ can be bounded:
\[
|e_d| \le \frac{C\bigl(e^{-A_1}+e^{-A_2}-e^{-(A_1+A_2)}\bigr)}{(1-e^{-A_1})(1-e^{-A_2})}.
\]

By noting that $\hat f(\alpha,\beta) + \hat f(\bar\alpha,\bar\beta) = 2\Re(\hat f(\alpha,\beta))$, from (B.11), for $A_1=A_2=A$, we get an approximation for $f(t_1,t_2)$:
\[
\begin{aligned}
f(t_1,t_2) \approx \frac{e^{2A}}{4t_1t_2}\Biggl[&-\hat f(A/t_1,\,A/t_2)
+ 2\sum_{j=0}^{M}\sum_{k=0}^{N}(-1)^{j+k}\,\Re\bigl(\hat f([A-i\pi j]/t_1,\,[A-i\pi k]/t_2)\bigr)\\
&+ 2\sum_{j=1}^{M}\sum_{k=1}^{N}(-1)^{j+k}\,\Re\bigl(\hat f([A-i\pi j]/t_1,\,[A+i\pi k]/t_2)\bigr)\Biggr]. \tag{B.12}
\end{aligned}
\]

We used this formula to obtain the numerical inversion of the double Laplace transform (4.40) of the total downtime.
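A direct, unaccelerated way to use (B.11) is to truncate the double sum symmetrically and neglect $e_d$. The sketch below (Python; the function name and truncation level $M$ are our own choices) recovers $f(t_1,t_2) = e^{-t_1-t_2}$ from its product transform $\hat f(\alpha,\beta) = 1/[(1+\alpha)(1+\beta)]$:

```python
import math


def invert_laplace2(fhat2, t1, t2, A=5.0, M=300):
    """Truncated version of (B.11): sum (-1)^(j+k) * Re fhat2 over the
    lattice |j|, |k| <= M and scale; the error term e_d is neglected."""
    total = 0.0
    for j in range(-M, M + 1):
        a = complex(A, -math.pi * j) / t1
        for k in range(-M, M + 1):
            b = complex(A, -math.pi * k) / t2
            sign = -1 if (j + k) % 2 else 1  # (-1)^(j+k)
            total += sign * fhat2(a, b).real
    return math.exp(2 * A) / (4 * t1 * t2) * total


# product transform of f(t1, t2) = exp(-t1 - t2)
f_approx = invert_laplace2(lambda a, b: 1 / ((1 + a) * (1 + b)), 1.0, 1.0)
```

Plain truncation costs $O(M^2)$ transform evaluations and converges slowly, which is why Choudhury et al. [5] combine the formula with summation acceleration in practice.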

So far we have assumed that the variables $t_1$ and $t_2$ are continuous. It is also possible to consider the case where $t_1$ or $t_2$ is discrete. Choudhury et al. [5] have discussed the cases where $t_1$ or $t_2$ are non-negative integers. We will discuss the case where one of the variables is discrete and the other is continuous in the following example.

In Chapter 2 we have the following formula: for $\alpha, \beta > 0$,
\[
\int_0^\infty E\bigl(e^{-\alpha R_\phi(t)}\bigr)e^{-\beta t}\,dt
= \frac{\int_0^\infty [1-F(t)]\,e^{-\alpha\phi(t)-\beta t}\,dt}{1-\int_0^\infty e^{-\alpha\phi(t)-\beta t}\,dF(t)}, \tag{B.13}
\]
where
\[
R_\phi(t) = \sum_{n=1}^{N(t)}\phi(X_n) + \phi\bigl(t-S_{N(t)}\bigr)
\]
and $\phi$ is a non-negative measurable function; see (2.4). It can be proved that this formula remains true if we replace $\alpha$ by $i\alpha$.

Depending on the choice of the function $\phi$, the random variable $R_\phi(t)$ may take values $x+l\lambda$, $l=0,\pm1,\pm2,\ldots$. In this case, replacing $\alpha$ by $i\alpha$, we can write equation (B.13) as
\[
\int_0^\infty \sum_{l=0}^{\infty} P\bigl(R_\phi(t)=x+l\lambda\bigr)\,e^{-i\alpha(x+l\lambda)-\beta t}\,dt
= \frac{\int_0^\infty [1-F(t)]\,e^{-i\alpha\phi(t)-\beta t}\,dt}{1-\int_0^\infty e^{-i\alpha\phi(t)-\beta t}\,dF(t)}.
\]

Denote the right-hand side of this equation by $\psi(\alpha,\beta)$. Using the inversion formulas for Laplace transforms and discrete Fourier transforms (see Abate and Whitt [1]), we obtain
\[
P\bigl(R_\phi(t)=x+l\lambda\bigr) = \frac{\lambda e^{at}}{4\pi^2}\int_0^{2\pi/\lambda}\!\!\int_{-\infty}^{\infty} \psi(\alpha,\,a+i\beta)\,e^{i[(x+l\lambda)\alpha+t\beta]}\,d\beta\,d\alpha \tag{B.14}
\]

for a suitable constant $a$. Applying the trapezoidal rule twice to (B.14), with step sizes $2\pi/(\lambda n)$ and $\pi/t$ for the integrals with respect to $\alpha$ and $\beta$ respectively, and letting $a=A/t$ at the same time, we obtain
\[
\begin{aligned}
P\bigl(R_\phi(t)=x+l\lambda\bigr)
\approx \frac{e^{A}}{2nt}\Biggl[&\frac12\sum_{k=-\infty}^{\infty}(-1)^k\Bigl(\psi\bigl(0,\,(A+i\pi k)/t\bigr)+\psi\bigl(2\pi/\lambda,\,(A+i\pi k)/t\bigr)e^{i2\pi x/\lambda}\Bigr)\\
&+\sum_{j=1}^{n-1}\sum_{k=-\infty}^{\infty}(-1)^k\,\psi\bigl(2\pi j/(\lambda n),\,(A+i\pi k)/t\bigr)e^{i2\pi j(l+x/\lambda)/n}\Biggr]. \tag{B.15}
\end{aligned}
\]

We used this formula to approximate the probability mass function of $R_\phi(t)$ in Subsection 2.2.1, as described in the following example.


Example B.2.1 We have the following transform of $R_\phi(t)$, after replacing $\alpha$ by $i\alpha$:
\[
\int_0^\infty E\bigl[e^{-i\alpha R_\phi(t)}\bigr]e^{-\beta t}\,dt
= \frac{1-e^{-2(\beta+\lambda)}}{(\beta+\lambda)\bigl[1-e^{-i\alpha}e^{-2(\beta+\lambda)}\bigr]-\lambda\bigl[1-e^{-2(\beta+\lambda)}\bigr]}, \tag{B.16}
\]
see (2.13). In this example $R_\phi(t)$ is a non-negative-integer-valued random variable. Denote the right-hand side of (B.16) by $\psi(\alpha,\beta)$. Since $\psi\bigl(0,(A+i\pi k)/t\bigr)=\psi\bigl(2\pi,(A+i\pi k)/t\bigr)$, using (B.15) we get a formula for approximating the probability mass function of $R_\phi(t)$:
\[
P\bigl(R_\phi(t)=l\bigr) \approx \frac{e^{A}}{2nt}\sum_{j=0}^{n-1}\sum_{k=-M}^{M}(-1)^k\,\psi\bigl(2\pi j/n,\,(A+i\pi k)/t\bigr)\,e^{i2\pi jl/n}.
\]

The following is a Matlab program for approximating the probability mass function of $R_\phi(t)$ for $\lambda = 0.1741$.

    function [f]=exppmf(l,t,n,M)
    % Approximate P(R_phi(t)=l) by inverting (B.16) with formula (B.15).
    A=5; P=exp(A)/(2*n*t); B=A/t; C=i*pi/t; h=2*pi/n; D=i*l*h; lambda=0.1741;
    m=0;
    for j=0:(n-1)
        E=exp(D*j); alpha=j*h; F=exp(-i*alpha);
        for k=-M:M
            beta=B+C*k;
            U1=beta+lambda; U2=exp(-2*U1); U=1-U2;
            V1=U1*(1-F*U2); V2=lambda*U; V=V1-V2;
            R=E*U/V;
            m=m+(-1)^k*R;
        end
    end
    f=real(P*m);   % f = P(R_phi(t)=l)
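The same truncated formula can be sanity-checked on a lattice variable with a known law, a check of our own devising. For a Poisson process $N(t)$ with rate $\lambda$ one has $E[e^{-i\alpha N(t)}] = \exp(\lambda t(e^{-i\alpha}-1))$, hence $\psi(\alpha,\beta) = 1/(\beta+\lambda(1-e^{-i\alpha}))$, and the inversion should return the Poisson probabilities $e^{-\lambda t}(\lambda t)^l/l!$. A Python sketch (function name and parameters ours):

```python
import cmath
import math


def pmf_from_transform(psi, l, t, n=32, M=1000, A=5.0):
    """Approximate P(R(t) = l) for an integer-valued R(t) from
    psi(alpha, beta) = int_0^inf E[exp(-i*alpha*R(t))] exp(-beta*t) dt,
    using the truncated double sum of Example B.2.1."""
    total = 0j
    for j in range(n):
        alpha = 2 * math.pi * j / n
        phase = cmath.exp(1j * alpha * l)
        for k in range(-M, M + 1):
            beta = complex(A, math.pi * k) / t
            term = psi(alpha, beta) * phase
            total += -term if k % 2 else term  # the (-1)^k factor
    return (math.exp(A) / (2 * n * t) * total).real


lam = 0.5
poisson_psi = lambda a, b: 1 / (b + lam * (1 - cmath.exp(-1j * a)))
p0 = pmf_from_transform(poisson_psi, 0, 2.0)  # target: exp(-1)
```

With $\lambda t = 1$ the aliasing over the $n$-point discrete Fourier transform is negligible, and the computed probabilities match $e^{-1}/l!$ to several decimals.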


Bibliography

[1] J. Abate and W. Whitt, The Fourier-series method for inverting transforms of probability distributions, Queueing Systems, 10, 1992, pp. 5-88.

[2] R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life Testing, Holt, Rinehart and Winston, New York, 1975.

[3] R. G. Bartle, The Elements of Real Analysis, John Wiley, New York, 1964.

[4] D. S. Chang, Reliability bounds for the stress-strength model, Computers & Industrial Engineering, vol. 29 no. 1-4, 1995, pp. 15-19.

[5] G. L. Choudhury, D. M. Lucantoni and W. Whitt, Multidimensional transform inversion with applications to the transient M/G/1 queue, The Annals of Applied Probability, vol. 4 no. 3, 1994, pp. 719-740.

[6] A. Csenki, Asymptotics for renewal reward processes with retrospective reward structure, Operations Research Letters, 26, 2000, pp. 201-209.

[7] D. R. Cox, Renewal Theory, Methuen, London, 1962.

[8] H. Cramér, Mathematical Methods of Statistics, Princeton University Press, Princeton, 1946.

[9] P. J. Davis and P. Rabinowitz, Methods of Numerical Integration, 2nd ed., Academic Press, New York, 1984.

[10] E. J. Dudewicz, Modern Mathematical Statistics, John Wiley, New York, 1988.

[11] P. Embrechts, C. Klüppelberg and T. Mikosch, Modelling Extremal Events for Insurance and Finance, Springer-Verlag, Berlin, 1999.

[12] T. Erhardsson, On stationary renewal reward processes where most rewards are zero, Probab. Theory Relat. Fields, 117, 2000, pp. 145-161.

[13] K. Funaki and K. Yoshimoto, Distribution of total uptime during a given time interval, IEEE Trans. Reliability, vol. 43 no. 3, 1994 Sep, pp. 489-492.


[14] O. Gaudoin and J. L. Soler, Failure rate behavior of components subjected to random stresses, Reliability Engineering and System Safety, 58, 1997, pp. 19-30.

[15] B. V. Gnedenko, Y. K. Belyayev and A. D. Solovyev, Mathematical Methods of Reliability Theory, Academic Press, New York, 1969.

[16] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, Academic Press, New York, 1994.

[17] J. Grandell, Doubly Stochastic Poisson Processes, Springer-Verlag, Berlin, 1976.

[18] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, Clarendon Press, Oxford, 1997.

[19] J. A. Gubner, Computation of shot-noise probability distributions and densities, SIAM Journal on Scientific Computing, vol. 17 no. 3, 1996, pp. 750-761.

[20] R. Herz, H. G. Schlichter und W. Siegener, Angewandte Statistik für Verkehrs- und Regionalplaner, 2. Auflage, Werner-Ingenieur-Texte 42, Werner-Verlag, Düsseldorf, 1992.

[21] M. Iosifescu, Finite Markov Processes and Their Applications, John Wiley, New York, 1980.

[22] P. W. den Iseger and M. A. J. Smith, A new method for inverting Laplace transforms, Econometric Institute, EUR Rotterdam, 1998, pp. 1-14.

[23] N. Jack, Repair replacement modelling over finite time horizons, J. Opl. Res. Soc., vol. 42 no. 9, 1991, pp. 759-766.

[24] E. E. Lewis and H. C. Chen, Load-capacity interference and the bathtub curve, IEEE Trans. Reliability, vol. 43 no. 3, 1994 Sep, pp. 470-475.

[25] J. Mi, Average number of events and average reward, Probab. Eng. Inform. Sc., vol. 14 no. 4, 2000, pp. 485-510.

[26] E. J. Muth, A method for predicting system downtime, IEEE Trans. Reliability, R-17, 1968 Jun, pp. 97-102.

[27] E. J. Muth, Excess time, a measure of system repairability, IEEE Trans. Reliability, R-19, 1970 Feb, pp. 16-19.

[28] F. Oberhettinger, Fourier Transforms of Distributions and Their Inverses: a collection of tables, Academic Press, New York, 1973.


[29] M. Parlar and D. Perry, Inventory models of future supply uncertainty with single and multiple suppliers, Naval Research Logistics, 43, 1996, pp. 191-210.

[30] T. Pham-Gia and N. Turkkan, System availability in a Gamma alternating renewal process, Naval Research Logistics, vol. 46 no. 7, 1999 Oct, pp. 822-844.

[31] E. Popova and H. C. Wu, Renewal reward processes with fuzzy rewards and their applications to T-age replacement policies, European Journal of Operational Research, 117, 1999, pp. 606-617.

[32] R. D. Reiss, A Course on Point Processes, Springer-Verlag, Berlin, 1993.

[33] A. Rényi, On the asymptotic distribution of the sum of a random number of independent random variables, Acta Mathematica Academiae Scientiarum Hungaricae, 8, 1957, pp. 193-199.

[34] S. I. Resnick, Extreme Values, Regular Variation, and Point Processes, Springer-Verlag, Berlin, 1987.

[35] S. I. Resnick, Adventures in Stochastic Processes, Birkhäuser, Boston, 1992.

[36] S. M. Ross, Stochastic Processes, John Wiley, New York, 1996.

[37] S. K. Srinivasan, R. Subramanian and K. S. Ramesh, Mixing of two renewal processes and its applications to reliability theory, IEEE Trans. Reliability, R-20, 1971 May, pp. 51-55.

[38] Suyono and J. A. M. van der Weide, Renewal reward processes, Journal of Indonesian Math. Society, vol. 6 no. 5, 2000, pp. 353-357.

[39] Suyono and J. A. M. van der Weide, Covariance of a renewal process, Proc. Indonesian Student Scientific Meeting (ISSM), Manchester, UK, 2001, pp. 383-386.

[40] Suyono and J. A. M. van der Weide, System reliability in stress-strength models, Prosiding Konferensi Nasional Matematika XI, Malang, Indonesia, 2002, pp. 1135-1139.

[41] Suyono and J. A. M. van der Weide, Total uptime of series systems, Proc. ISSM, Berlin, Germany, 2002, pp. 486-489.

[42] Suyono and J. A. M. van der Weide, A method for computing total downtime distributions in repairable systems, submitted to the Journal of Applied Probability, 2002.

[43] Suyono and J. A. M. van der Weide, Integrated renewal processes, submitted to Stochastic Processes and their Applications, 2002.


[44] L. Takács, On certain sojourn time problems in the theory of stochastic processes, Acta Mathematica Academiae Scientiarum Hungaricae, 8, 1957, pp. 169-191.

[45] L. Takács, On a sojourn time problem in the theory of stochastic processes, Transactions of the American Mathematical Society, 93, 1959, pp. 531-540.

[46] H. C. Tijms, Stochastic Models: An Algorithmic Approach, John Wiley, New York, 1994.

[47] W. T. Weeks, Numerical inversion of Laplace transforms using Laguerre functions, J. ACM, 13, 1966, pp. 419-426.

[48] D. V. Widder, The Laplace Transform, Princeton University Press, Princeton, 1946.

[49] R. W. Wolff, Stochastic Modeling and the Theory of Queues, Prentice-Hall, Englewood Cliffs, NJ, 1989.

[50] J. Xue and K. Yang, Upper & lower bounds of stress-strength interference reliability with random strength-degradation, IEEE Trans. Reliability, vol. 46 no. 1, 1997 Mar, pp. 142-145.


Samenvatting (Summary in Dutch)

Renewal processes and repairable systems

In this thesis we discuss the following topics.

1. Renewal reward processes

We compute the marginal distributions of renewal reward processes and of a class of processes that we call instantaneous reward processes in this thesis. Our approach is based on the theory of point processes, in particular Poisson point processes: the renewal reward process (instantaneous reward process) is represented as a functional of a Poisson point process. Important tools used in the derivation are the Palm formula and the formula for the Laplace functional of a Poisson point process. As a result we find the Laplace transforms of the marginal distributions. We give an application of instantaneous reward processes to a traffic problem.

A number of asymptotic properties of renewal reward processes are reconsidered. Using a Tauberian theorem we give a proof of the expected-value version of the renewal reward theorem, and we then derive a second-order term for this version. Similar results are investigated for instantaneous reward processes. Furthermore we prove asymptotic normality for instantaneous reward processes.

We compute the covariance structure of a renewal process, which can be regarded as a special case of a renewal reward process. We also study system reliability in a stress-strength model, in which the magnitude of the stress is interpreted as a "reward". The times at which stress occurs are modelled by renewal processes and Cox processes. With the results derived for renewal reward processes we investigate the influence of the dependence between stress and strength on system reliability.

2. Integrated renewal processes

The probability density of an integrated homogeneous Poisson process is known in the literature. It is natural to generalize this to non-homogeneous Poisson processes, Cox processes and renewal processes. In this thesis we derive formulas for the marginal distributions using conditioning arguments for integrated Poisson and Cox processes; for integrated renewal processes we use point-process representations. The results are given in the form of Laplace transforms. The asymptotic properties of integrated renewal processes are investigated. Finally there is an application to a traffic problem.

3. Total downtime of repairable systems

Several authors, using a number of different methods, have derived formulas for the distribution function of the total downtime of a repairable system, which in these studies is regarded as a single component. We derive formulas with yet another method (based on point processes), and we also consider the more general case in which dependence between failure and repair times is allowed.

The covariance structure and the asymptotic properties of the total downtime are known in the literature for the independent case. We derive results for the dependent case, and give examples of the effect of dependence between failure time and repair time on the total downtime.

We also discuss the total downtime of repairable systems consisting of two or more stochastically independent components. We derive an expression for the marginal distribution of the total uptime of the system when the failure and repair times of each component are exponentially distributed. For arbitrary failure and repair times we give an expression for the expected total uptime.


Summary

Renewal processes and repairable systems

In this thesis we discuss the following topics.

1. Renewal reward processes

The marginal distributions of renewal reward processes and of a variant of them, which we call instantaneous reward processes in this thesis, are derived. Our approach is based on the theory of point processes, especially Poisson point processes. The idea is to represent the renewal reward processes and their variant as functionals of Poisson point processes. Important tools we use are the Palm formula and the Laplace functional of Poisson point processes. The results are presented in the form of Laplace transforms. An application of the instantaneous reward processes to the study of traffic is given.

Some asymptotic properties of the renewal reward processes are reconsidered. A proof of the expected-value version of the renewal reward theorem using a Tauberian theorem is given, and a second-order term in this version is obtained. Similar results for the instantaneous reward processes are investigated. Asymptotic normality of the instantaneous reward processes is proved.

The covariance structure of renewal processes, which can be considered a special case of renewal reward processes, is derived. In addition, we study system reliability in a stress-strength model, where the amplitudes of the stresses can be considered as rewards. We consider renewal and Cox processes as models for the occurrences of the stresses. Using our results on renewal reward processes, we investigate the effect of dependence between stress and strength on system reliability.

2. Integrated renewal processes

The marginal probability density function of an integrated homogeneous Poisson process is known in the literature. It is natural to generalize the integrated homogeneous Poisson process to integrated non-homogeneous Poisson, Cox, and renewal processes. In this thesis we derive expressions for the marginal distributions of integrated Poisson and Cox processes using conditioning arguments, and derive the marginal distributions of integrated renewal processes using the theory of point processes. The results are presented in the form of Laplace transforms. Asymptotic properties of the integrated renewal processes are also investigated. An application to the study of traffic is given.

3. Total downtime of repairable systems

An expression for the cumulative distribution function of the total downtime of a repairable system, regarded as a single component, under the assumption that the failure and the repair times of the system are independent, has been derived by several authors using different methods. We use a different method (based on point processes) to compute the distribution function of the total downtime. We also consider a more general situation in which we allow dependence between the failure and the repair times of the system.

The covariance structure and asymptotic properties of the total downtime for the independent case are also known in the literature. We derive similar results for the dependent case. Examples are given showing the effect of dependence between the failure and the repair times on the total downtime.

We also discuss the total downtime of repairable systems consisting of n ≥ 2 stochastically independent components. We derive an expression for the marginal distribution of the total uptime of the system for the case where the failure and the repair times of each component are exponentially distributed. For arbitrary failure and repair times of the components we derive an expression for the mean of the total uptime.


Ringkasan (Summary in Indonesian)

Renewal processes and repairable systems

In this thesis the following topics are discussed.

1. Renewal reward processes

The marginal distributions of renewal reward processes and of a variant of them, called instantaneous reward processes, are derived. The theoretical basis is the theory of point processes, in particular Poisson point processes. The approach used is to express renewal reward processes and their variant as functionals of Poisson point processes. The Palm formula and the Laplace functional of Poisson point processes are used for the computations. The results obtained are presented in the form of Laplace transforms. An application of instantaneous reward processes to the study of traffic is given.

Some asymptotic properties of renewal reward processes are revisited. A proof, via a Tauberian theorem, of the expected-value version of the renewal reward theorem is given, and the constant term in this version is found. Similar properties for instantaneous reward processes are also investigated. Asymptotic normality for instantaneous reward processes is proved.

The covariance of a renewal process, which can also be viewed as a renewal reward process, is derived. In addition, system reliability in a stress-strength model is studied, where the magnitude of the stress can be regarded as a reward. The occurrences of the stresses are modelled by Cox and renewal processes. The effect of dependence between stress and strength on system reliability is investigated using the results on renewal reward processes.

2. Integrated renewal processes

The marginal probability density function of an integrated renewal process whose underlying process is a homogeneous Poisson process is known in the literature. It is natural to generalize the underlying homogeneous Poisson process to non-homogeneous Poisson processes, Cox processes, and renewal processes. In this thesis the marginal distributions of integrated renewal processes are derived with conditioning techniques when the underlying process is a Poisson or Cox process, and with the theory of point processes when the underlying process is a renewal process. The results are presented in the form of Laplace transforms. Asymptotic properties of integrated renewal processes are also investigated. An application to the study of traffic is given.

3. Total downtime of repairable systems

An expression for the cumulative distribution function of the total downtime of a repairable system, where the system is regarded as a single component, has been derived by several authors with different methods under the assumption that the working and repair times of the system are independent. In this thesis a different method (namely, the theory of point processes) is used to determine the distribution function of the total downtime. The situation studied is also more general, since dependence between the working and repair times of the system is allowed.

The covariance and asymptotic properties of the total downtime for the independent case are also known in the literature. Similar results for the dependent case are derived in this thesis. Examples are given showing the effect of dependence between working and repair times on the total downtime.

This thesis also discusses the total downtime of systems with two or more components that are stochastically independent of each other. An expression is derived for the marginal distribution of the total uptime of a system whose components have exponentially distributed working and repair times. For arbitrary distributions of the working and repair times of the components, the expected value of the total uptime is derived.


Curriculum Vitae

Suyono was born on December 18, 1967 in Purworejo, Indonesia. After finishing secondary school in Purworejo in 1986, he studied Mathematics Education at the Jakarta Institute for Teachers Training and Education, graduating in 1991. In August 1994 he started his Master's program in Mathematics at Gadjah Mada University in Yogyakarta, Indonesia, and obtained his degree in 1998. From October 1998 until February 2003 he carried out doctoral research at the Department of Control, Risk, Optimization, Stochastics and Systems (CROSS), Faculty of Information Technology and Systems (ITS), Delft University of Technology. Currently he is a lecturer at the Department of Mathematics, Faculty of Mathematics and Natural Sciences, State University of Jakarta, Indonesia.
