A NEW GENERALIZED VARENTROPY AND ITS PROPERTIES

URAL MATHEMATICAL JOURNAL, Vol. 6, No. 1, 2020, pp. 114-129

DOI: 10.15826/umj.2020.1.009

S. Maadani1, G. R. Mohtashami Borzadaran2, A. H. Rezaei Roknabadi3

Ferdowsi University of Mashhad, Azadi Square, Mashhad, Iran 1madani.sa@mail.um.ac.ir, 2grmohtashami@um.ac.ir, 3rezaei@um.ac.ir

Abstract: The variance of the Shannon information of a random variable X, called the varentropy, measures how the information content of X is scattered around its entropy; it has various applications in information theory, computer science, and statistics. In this paper, we introduce a new generalized varentropy based on the Tsallis entropy and obtain some results and bounds for it. We compare the varentropy with the Tsallis varentropy. Moreover, we study the Tsallis varentropy of order statistics, analyse this concept for residual (past) lifetime distributions, and then introduce two new classes of distributions based on them.

1. Introduction

Nowadays, information measures play an essential role in the analysis of statistical problems and receive considerable attention from statisticians. Shannon [21] introduced a measure of uncertainty for a discrete random variable X with probability mass function P(x) in the form $E(-\log P(X))$, which is the basis of information theory. The generalization of Shannon's measure to a continuous random variable X with density function f(x) and support S, called the differential entropy, reads as follows:

$$h(X) = -\int_S f(x)\log f(x)\,dx. \qquad (1.1)$$

This measure is the expectation of the random variable $-\log f(X)$ and has recently attracted the attention of researchers.

In computer science, the variance of $-\log p(X)$ for a discrete random variable X is called the varentropy. This measure is an essential factor in computing the optimal code length in data compression, in describing the dispersion of sources, and so on. For further studies, we refer the reader to [3, 7, 15]. Since the varentropy was originally defined for discrete random variables, in this paper we focus on the varentropy of continuous random variables and discuss it under the same name.

Let X be a continuous random variable with density function f. Then the varentropy of X is defined as

$$VE(X) = \mathrm{Var}\big(-\log f(X)\big) = E\big[-\log f(X) - h(X)\big]^2, \qquad (1.2)$$

where VE(X) is called the varentropy of X. Unfortunately, there are not many studies on the varentropy in the field of statistics. Song [22] introduced VE (of course not under that name) as an intrinsic measure of the shape of a distribution, which can be an excellent alternative to the kurtosis measure. When the traditional kurtosis is not defined, as for Student's t distributions with fewer than four degrees of freedom and for the Cauchy and Pareto distributions, VE can be used instead of the kurtosis to compare heavy-tailed distributions.

Liu [16] studied VE under the name of information volatility and established some of its mathematical properties. He calculated VE for some distributions and showed that the VE of the gamma, beta (with parameters (a, a) when $a < 2-\sqrt{2}$) and normal distributions is greater than, less than, and equal to 1/2, respectively, and that the VE of the uniform distributions is zero. Therefore VE can separate the gamma, normal, beta and uniform distributions. He also showed that the VE of the generalized Gaussian distribution is exactly the reciprocal of its shape parameter, which gives a new method to estimate this parameter. Zografos [29] found an empirical estimator of Song's measure for elliptic multivariate distributions. Enomoto et al. [13] considered a multivariate normality test based on the sample measure of multivariate kurtosis defined by Song [22]. Afhami et al. [2] introduced a goodness-of-fit test based on the entropy and varentropy of k-record values for the generalized Pareto distribution, and, more recently, applications of the varentropy in reliability theory have been studied in [10].
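As a quick numerical illustration of these reference values (this sketch is ours and not part of the original paper), the following Python snippet estimates VE(X) = Var(−log f(X)) by Monte Carlo for a normal, an exponential and a uniform distribution; the chosen parameters are arbitrary, since VE is scale and location invariant.

```python
# Monte Carlo estimate of the varentropy VE(X) = Var(-log f(X)) for a few
# distributions; expected limits are 0.5 (normal), 1.0 (exponential), 0.0 (uniform).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

samples = {
    "normal":      (rng.normal(0.0, 3.0, n),  stats.norm(0.0, 3.0)),
    "exponential": (rng.exponential(2.0, n),  stats.expon(scale=2.0)),
    "uniform":     (rng.uniform(0.0, 5.0, n), stats.uniform(0.0, 5.0)),
}

for name, (x, dist) in samples.items():
    info = -dist.logpdf(x)          # information content -log f(X)
    print(f"{name:12s}  VE ~ {info.var():.3f}")
```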

A generalization of the Shannon entropy is the Tsallis entropy (see [23]). Let X be a continuous random variable with density function f. Then the Tsallis entropy of order $\alpha$ for X is defined as

$$I_T(X,\alpha) = \frac{1}{1-\alpha}\left(\int_S f^{\alpha}(x)\,dx - 1\right), \qquad \alpha>0,\ \alpha\neq 1, \qquad (1.3)$$

and if $\alpha \to 1$, then the Tsallis entropy reduces to (1.1). The Tsallis entropy has many applications in physics, statistical mechanics and image processing. The properties of the Tsallis entropy have been investigated by several authors; see papers [17, 24, 25, 28].

On the other hand, the concentration of measure principle is one of the cornerstones of geometric functional analysis and probability theory, and it is widely used in many other areas. Hence the concentration of the information content $-\log f(X)$ is of central interest in information theory and is closely related to various other areas such as probability theory; the varentropy is the measure of this concentration. Suppose that X and Y are two random variables with the same Shannon entropy; for example, the Shannon entropy is zero both for the standard uniform distribution and for the exponential distribution with parameter e. Can we say that the uncertainty criterion is the same for both random variables? In our opinion, our confidence in the measured value depends on the degree of dispersion of the information around the entropy. Therefore, for random variables with smaller varentropy the uncertainty criterion is more reliable. The same reasoning is valid for the Tsallis measure of uncertainty: if two random variables have the same Tsallis entropy, the Tsallis varentropy indicates which of them has the more reliable criterion of Tsallis uncertainty.

The purpose of this paper is to generalize Shannon's varentropy based on the Tsallis entropy, to compare its properties with those of Shannon's varentropy, and to extend it to the fields of order statistics and reliability theory.

This paper contains the following sections. The generalized varentropy, which we call TVE, is introduced in Section 2; we also obtain some of its properties and compare TVE with VE in this section. In Section 3 we discuss the Tsallis varentropy of order statistics. In Section 4, we study TVE in lifetime studies, obtain some bounds for it in terms of the hazard rate and reversed hazard rate functions, examine the effect of the system's age on TVE, and introduce two new classes of distributions via the residual and past Tsallis varentropy. Section 5 concludes the paper.

2. Introduction of Tsallis Varentropy

Let X be a continuous random variable with density function f. Then the Tsallis entropy of order $\alpha$ for X is the expectation of the random variable $\big(f^{\alpha-1}(X)-1\big)/(1-\alpha)$, and TVE is the variance of this random variable. Following what was said above, we define TVE and introduce some properties of this measure.

Definition 1. For the continuous random variable X with density function f, the Tsallis varentropy of order $\alpha$ for X is defined as follows:

$$TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\big(f^{\alpha-1}(X)\big) = \frac{1}{(1-\alpha)^2}\left\{\int_S f^{2\alpha-1}(x)\,dx - \left(\int_S f^{\alpha}(x)\,dx\right)^2\right\}, \qquad \alpha>0,\ \alpha\neq1, \qquad (2.1)$$

where TVE(X, α) is the Tsallis varentropy of order α for X. It is clear that when $\alpha \to 1$, (2.1) reduces to (1.2).

For example, if $X \sim \mathrm{Exp}(\theta)$ with density function $f(x) = \theta e^{-\theta x}$ ($x>0$, $\theta>0$), then

$$TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\left(\frac{\theta^{2\alpha-2}}{2\alpha-1} - \left(\frac{\theta^{\alpha-1}}{\alpha}\right)^2\right) = \frac{\theta^{2\alpha-2}}{\alpha^2(2\alpha-1)}, \qquad \alpha>\frac12. \qquad (2.2)$$

We see that $\lim_{\alpha\to1} TVE(X,\alpha) = 1$, and TVE(X, 1) = 1 is the Shannon varentropy of the exponential distribution.

Remark 1. If $X \sim \mathrm{Exp}(\theta)$ and $0<\alpha<1/2$, then TVE(X, α) diverges to infinity.
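The closed form (2.2) is easy to cross-check numerically. The sketch below (our own check, not from the paper) evaluates the two integrals in (2.1) by quadrature for X ~ Exp(θ) and compares the result with θ^{2α−2}/(α²(2α−1)); θ = 2 and the values of α are arbitrary choices.

```python
# Numerical check of (2.1) against the closed form (2.2) for the exponential case.
import numpy as np
from scipy.integrate import quad

def tve_exponential_numeric(theta, alpha):
    f = lambda x: theta * np.exp(-theta * x)
    i1, _ = quad(lambda x: f(x) ** (2 * alpha - 1), 0, np.inf)   # integral of f^(2a-1)
    i2, _ = quad(lambda x: f(x) ** alpha,           0, np.inf)   # integral of f^a
    return (i1 - i2 ** 2) / (1 - alpha) ** 2

def tve_exponential_closed(theta, alpha):
    return theta ** (2 * alpha - 2) / (alpha ** 2 * (2 * alpha - 1))

for alpha in (0.7, 2.0, 3.5):
    num = tve_exponential_numeric(theta=2.0, alpha=alpha)
    cls = tve_exponential_closed(theta=2.0, alpha=alpha)
    print(f"alpha={alpha}: quadrature={num:.6f}, closed form={cls:.6f}")
```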

Theorem 1. X has a uniform distribution if and only if TVE(X, α) = 0 for all α > 0.

Proof. If $X \sim U(a,b)$ with density function $f(x) = 1/(b-a)$, $a<x<b$, then

$$TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\left[(b-a)^{2-2\alpha} - \big((b-a)^{1-\alpha}\big)^2\right] = 0.$$

On the other hand, if TVE(X, α) = 0, then $\mathrm{Var}\big(f^{\alpha-1}(X)\big) = 0$, so f(X) is almost surely constant. Suppose that $f(X) = c$ and that $(a,b)$ is the support of X; then

$$\int_a^b f(x)\,dx = \int_a^b c\,dx = 1 \quad\text{and}\quad c = \frac{1}{b-a}. \qquad\square$$

Liu [16] showed that if X is a continuous random variable with symmetric density function f with respect to x = a, then VE(|X|) = VE(X).

Proposition 1. Suppose that X is a continuous random variable with a density function f symmetric with respect to x = a. Then

$$TVE(|X|,\alpha) = 2^{2\alpha-2}\,TVE(X,\alpha).$$

Proof. Without loss of generality suppose a = 0. In this case the density function g(x) of the random variable |X| is $g(x) = f(-x) + f(x) = 2f(x)$ for $x\ge0$, and hence

$$TVE(|X|,\alpha) = \frac{1}{(1-\alpha)^2}\left\{\int_0^{\infty}\big(2f(x)\big)^{2\alpha-1}dx - \left(\int_0^{\infty}\big(2f(x)\big)^{\alpha}dx\right)^2\right\}$$

$$= \frac{2^{2\alpha-2}}{(1-\alpha)^2}\left\{\int_{-\infty}^{\infty} f^{2\alpha-1}(x)\,dx - \left(\int_{-\infty}^{\infty} f^{\alpha}(x)\,dx\right)^2\right\} = 2^{2\alpha-2}\,TVE(X,\alpha). \qquad\square$$
For example, if X has the Laplace distribution with density function $f(x) = \dfrac{1}{2\beta}e^{-|x-\mu|/\beta}$, then we can show that

$$TVE(X,\alpha) = \frac{(2\beta)^{2-2\alpha}}{\alpha^2(2\alpha-1)}, \qquad \alpha>\frac12.$$

On the other hand, if $X \sim \mathrm{Laplace}(0,\beta)$, then $|X| \sim \mathrm{Exp}(1/\beta)$. Therefore, by using (2.2), we have

$$TVE(|X|,\alpha) = \frac{\beta^{2-2\alpha}}{\alpha^2(2\alpha-1)}.$$

It implies that $TVE(|X|,\alpha) = 2^{2\alpha-2}\,TVE(X,\alpha)$. It is obvious that if $\alpha\to1$, then VE(|X|) = VE(X).

One of the most important properties of VE is the following:

The varentropy is a scale and location invariant measure, so VE(aX + b) = VE(X) for all $a,b\in\mathbb{R}$. This property implies that in a location-scale family of distributions, VE does not depend on the parameters of the distribution; therefore the empirical estimate of VE can separate the distributions of such families. Now the question arises: is TVE an affine-invariant measure? To answer this question, let us look at the following theorem and at the next example.

Theorem 2. Suppose that X is a continuous random variable and that f(x) is its density function. Then

$$TVE(aX+b,\alpha) = |a|^{2-2\alpha}\,TVE(X,\alpha).$$

Proof. If Y = g(X) and g is a strictly monotone function of X, then

$$f_Y(y) = \frac{f_X\big(g^{-1}(y)\big)}{\big|g'\big(g^{-1}(y)\big)\big|}.$$

In particular, for $g(x) = ax+b$ we have $f_Y(Y) = f_X(X)/|a|$, so that $f_Y^{\alpha-1}(Y) = |a|^{1-\alpha} f_X^{\alpha-1}(X)$ and hence $\mathrm{Var}\big(f_Y^{\alpha-1}(Y)\big) = |a|^{2-2\alpha}\,\mathrm{Var}\big(f_X^{\alpha-1}(X)\big)$. Therefore $TVE(aX+b,\alpha) = |a|^{2-2\alpha}\,TVE(X,\alpha)$. $\square$

Theorem 2 implies that in a location-scale family of distributions the Tsallis varentropy does not depend on the location parameter but does depend on the scale parameter. For example, if $X \sim N(\mu,\sigma^2)$, then the TVE of X is

$$TVE(X,\alpha) = (2\pi\sigma^2)^{1-\alpha}\,\frac{1/\sqrt{2\alpha-1} - 1/\alpha}{(1-\alpha)^2}, \qquad \alpha>\frac12.$$

We can see that if $\alpha\to1$, then TVE(X, 1) = VE(X) = 1/2 and TVE reduces to the VE of the normal distribution; we can also see that TVE depends on the scale parameter $\sigma^2$.
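The following sketch (our own illustration, with arbitrary values of σ, a and α) checks both the closed form above and the scale behaviour TVE(aX, α) = |a|^{2−2α} TVE(X, α) implied by Theorem 2, using numerical integration of (2.1) for the normal density.

```python
# Quadrature check of TVE for the normal distribution and of the scale law.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def tve_numeric(pdf, alpha, lo=-np.inf, hi=np.inf):
    i1, _ = quad(lambda x: pdf(x) ** (2 * alpha - 1), lo, hi)
    i2, _ = quad(lambda x: pdf(x) ** alpha,           lo, hi)
    return (i1 - i2 ** 2) / (1 - alpha) ** 2

def tve_normal_closed(sigma, alpha):
    return (2 * np.pi * sigma ** 2) ** (1 - alpha) * \
           (1 / np.sqrt(2 * alpha - 1) - 1 / alpha) / (1 - alpha) ** 2

sigma, a, alpha = 1.5, 3.0, 2.0
tve_X  = tve_numeric(norm(0, sigma).pdf, alpha)
tve_aX = tve_numeric(norm(0, abs(a) * sigma).pdf, alpha)    # density of aX
print(tve_X, tve_normal_closed(sigma, alpha))               # should agree
print(tve_aX, abs(a) ** (2 - 2 * alpha) * tve_X)            # scale law check
```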

Definition 2. The Tsallis varentropy of order α for a random vector $\mathbf{X} = (X_1, X_2, \dots, X_n)$ with joint density function $f(\mathbf{x})$ is defined as follows:

$$TVE(\mathbf{X},\alpha) = \frac{1}{(1-\alpha)^2}\left(\int_{\mathbb{R}^n} f^{2\alpha-1}(\mathbf{x})\,d\mathbf{x} - \left(\int_{\mathbb{R}^n} f^{\alpha}(\mathbf{x})\,d\mathbf{x}\right)^2\right), \qquad \alpha>0,\ \alpha\neq1.$$

Theorem 3. If $\mathbf{X}$ is an n-dimensional random vector, then for any invertible $n\times n$ matrix A and any $n\times1$ vector B we have $TVE(A\mathbf{X}+B,\alpha) = |A|^{2-2\alpha}\,TVE(\mathbf{X},\alpha)$, where |A| is the determinant of the matrix A.

Proof. The proof is similar to that of Theorem 2 in the n-dimensional space. $\square$

Table 1. Comparison of VE(X) and TVE(X, α) (here $\psi'(\cdot)$, $\Gamma(\cdot)$ and $B(\cdot,\cdot)$ are the trigamma, gamma and beta functions, respectively).

| Distribution | Density function | VE(X) | TVE(X, α) |
|---|---|---|---|
| uniform (a, b) | $f(x)=\frac{1}{b-a},\ a<x<b$ | $0$ | $0$ |
| exponential | $f(x)=\theta e^{-\theta x},\ \theta>0,\ x>0$ | $1$ | $\frac{\theta^{2\alpha-2}}{\alpha^2(2\alpha-1)},\ \alpha>\frac12$ |
| Laplace | $f(x)=\frac{1}{2\sigma}e^{-|x-\mu|/\sigma},\ \sigma>0$ | $1$ | $\frac{(2\sigma)^{2-2\alpha}}{\alpha^2(2\alpha-1)},\ \alpha>\frac12$ |
| Pareto | $f(x)=\frac{\theta\beta^{\theta}}{x^{\theta+1}},\ \beta>0,\ \theta>0,\ x>\beta$ | $\frac{(\theta+1)^2}{\theta^2}$ | $\frac{\beta^{2-2\alpha}\theta^{2\alpha}}{(1-\alpha)^2}\left\{\frac{1}{\theta(\theta+1)(2\alpha-1)-\theta}-\frac{1}{[\alpha(\theta+1)-1]^2}\right\},\ (2\alpha-1)(\theta+1)>1$ |
| normal | $f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/(2\sigma^2)}$ | $\frac12$ | $\frac{(2\pi\sigma^2)^{1-\alpha}}{(1-\alpha)^2}\left(\frac{1}{\sqrt{2\alpha-1}}-\frac{1}{\alpha}\right),\ \alpha>\frac12$ |
| gamma | $f(x)=\frac{\lambda^{\theta}}{\Gamma(\theta)}x^{\theta-1}e^{-\lambda x},\ \theta>0,\ \lambda>0,\ x>0$ | $(\theta-1)^2\psi'(\theta)-\theta+2$ | $\frac{\lambda^{2\alpha-2}}{(1-\alpha)^2}\left\{\frac{\Gamma((2\alpha-1)(\theta-1)+1)}{\Gamma^{2\alpha-1}(\theta)\,(2\alpha-1)^{(2\alpha-1)(\theta-1)+1}}-\frac{\Gamma^2(\alpha(\theta-1)+1)}{\Gamma^{2\alpha}(\theta)\,\alpha^{2\alpha(\theta-1)+2}}\right\},\ \alpha>\frac12$ |
| Weibull | $f(x)=\theta\lambda^{\theta}x^{\theta-1}e^{-(\lambda x)^{\theta}},\ \theta>0,\ \lambda>0,\ x>0$ | $\psi'(1)\big(1-\theta^{-1}\big)^2+2\theta^{-1}-1$ | $\frac{(\theta\lambda)^{2\alpha-2}}{(1-\alpha)^2}\left\{\frac{\Gamma\big(\theta^{-1}[(2\alpha-1)(\theta-1)+1]\big)}{(2\alpha-1)^{\theta^{-1}[(2\alpha-1)(\theta-1)+1]}}-\frac{\Gamma^2\big(\theta^{-1}[\alpha(\theta-1)+1]\big)}{\alpha^{2\theta^{-1}[\alpha(\theta-1)+1]}}\right\},\ \alpha>\frac12$ |
| beta | $f(x)=\frac{x^{m-1}(1-x)^{n-1}}{B(m,n)},\ 0<x<1,\ m>0,\ n>0$ | $(m-1)^2\psi'(m)+(n-1)^2\psi'(n)-(m+n-2)^2\psi'(m+n)$ | $\frac{B^{1-2\alpha}(m,n)}{(1-\alpha)^2}\left\{B\big((2\alpha-1)(m-1)+1,(2\alpha-1)(n-1)+1\big)-B^{-1}(m,n)\,B^2\big(\alpha(m-1)+1,\alpha(n-1)+1\big)\right\}$ |
| Rayleigh | $f(x)=\frac{x}{\sigma^2}e^{-x^2/(2\sigma^2)},\ x>0,\ \sigma>0$ | $\frac{\psi'(1)}{4}$ | $\frac{2^{\alpha-1}\sigma^{2-2\alpha}}{(1-\alpha)^2}\left\{\frac{\Gamma(\alpha)}{(2\alpha-1)^{\alpha}}-\frac{\Gamma^2((\alpha+1)/2)}{\alpha^{\alpha+1}}\right\},\ \alpha>\frac12$ |

Remark 2. Theorems 2 and 3 indicate that TVE is a location-invariant measure but is not scale-invariant, unless $\alpha\to1$.

Remark 3. If X and Y are two random variables with $X \sim \mathrm{Exp}(\theta)$, $Y \sim N(\mu,\sigma^2)$ and $\mathrm{Var}(X) = \mathrm{Var}(Y)$, then

$$TVE(X,\alpha) = k(\alpha)\,TVE(Y,\alpha), \qquad \alpha>\frac12, \qquad k(\alpha) = (2\pi)^{\alpha-1}\,\frac{\alpha+\sqrt{2\alpha-1}}{\alpha\sqrt{2\alpha-1}},$$

and if $\alpha\to1$, then VE(X) = 2 VE(Y).
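Remark 3 is a pure identity between the two closed forms, so it can be verified directly; the short sketch below (ours, with an arbitrary σ) evaluates the ratio TVE(X, α)/TVE(Y, α) for θ = 1/σ and compares it with k(α), which tends to 2 as α → 1.

```python
# Numerical check of Remark 3: ratio of the exponential and normal TVEs
# with matched variances equals k(alpha).
import numpy as np

def tve_exp(theta, a):
    return theta ** (2 * a - 2) / (a ** 2 * (2 * a - 1))

def tve_norm(sigma, a):
    return (2 * np.pi * sigma ** 2) ** (1 - a) * (1 / np.sqrt(2 * a - 1) - 1 / a) / (1 - a) ** 2

def k(a):
    return (2 * np.pi) ** (a - 1) * (a + np.sqrt(2 * a - 1)) / (a * np.sqrt(2 * a - 1))

sigma = 0.7
for a in (0.8, 1.01, 2.0, 4.0):
    ratio = tve_exp(1 / sigma, a) / tve_norm(sigma, a)
    print(f"a={a}: ratio={ratio:.5f}, k(a)={k(a):.5f}")
```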

In Table 1, we compare the VE and TVE for some continuous distributions.

Theorem 4. Let $X_1, X_2, \dots, X_n$ be independent random variables with joint density function $f(\mathbf{x})$. Then

$$TVE(X_1,X_2,\dots,X_n,\alpha) = \frac{1}{(1-\alpha)^2}\prod_{i=1}^{n}\Big\{(1-\alpha)^2\,TVE(X_i,\alpha) + \big[(1-\alpha)I_T(X_i,\alpha)+1\big]^2\Big\} - \frac{1}{(1-\alpha)^2}\prod_{i=1}^{n}\big[(1-\alpha)I_T(X_i,\alpha)+1\big]^2, \qquad (2.3)$$

and when $\alpha\to1$, (2.3) reduces to

$$TVE(X_1,X_2,\dots,X_n,1) = VE(X_1,X_2,\dots,X_n) = \sum_{i=1}^{n} VE(X_i).$$

Proof. If $X_1, X_2, \dots, X_n$ are independent random variables, we know that

$$\mathrm{Var}\Big(\prod_{i=1}^{n} X_i\Big) = \prod_{i=1}^{n}\big[\mathrm{Var}(X_i) + E^2(X_i)\big] - \prod_{i=1}^{n} E^2(X_i). \qquad (2.4)$$

Since $f(x_1),\dots,f(x_n)$ are the marginal density functions of $f(\mathbf{x})$ and $f^{\alpha-1}(X_1),\dots,f^{\alpha-1}(X_n)$ are independent random variables, (2.4) implies that

$$TVE(X_1,X_2,\dots,X_n,\alpha) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\Big(\prod_{i=1}^{n} f^{\alpha-1}(X_i)\Big) = \frac{1}{(1-\alpha)^2}\prod_{i=1}^{n}\Big[\mathrm{Var}\big(f^{\alpha-1}(X_i)\big) + E^2\big(f^{\alpha-1}(X_i)\big)\Big] - \frac{1}{(1-\alpha)^2}\prod_{i=1}^{n} E^2\big(f^{\alpha-1}(X_i)\big).$$

Equation (1.3) indicates that $E\big(f^{\alpha-1}(X_i)\big) = (1-\alpha)I_T(X_i,\alpha)+1$, and (2.1) implies

$$\mathrm{Var}\big(f^{\alpha-1}(X_i)\big) = (1-\alpha)^2\,TVE(X_i,\alpha).$$

Therefore

$$TVE(X_1,X_2,\dots,X_n,\alpha) = \frac{1}{(1-\alpha)^2}\prod_{i=1}^{n}\Big\{(1-\alpha)^2\,TVE(X_i,\alpha) + \big[(1-\alpha)I_T(X_i,\alpha)+1\big]^2\Big\} - \frac{1}{(1-\alpha)^2}\prod_{i=1}^{n}\big[(1-\alpha)I_T(X_i,\alpha)+1\big]^2.$$

It is obvious that when $\alpha\to1$, by using L'Hôpital's rule, we have

$$TVE(X_1,X_2,\dots,X_n,1) = VE(X_1,X_2,\dots,X_n) = \sum_{i=1}^{n} VE(X_i). \qquad\square$$

Corollary 1. If X and Y are two independent random variables with joint density function f(x, y) and marginal density functions $f_X(x)$ and $f_Y(y)$, respectively, then

$$TVE\big((X,Y),\alpha\big) = (1-\alpha)^2\,TVE(X,\alpha)\,TVE(Y,\alpha) + TVE(X,\alpha)\big[(1-\alpha)I_T(Y,\alpha)+1\big]^2 + TVE(Y,\alpha)\big[(1-\alpha)I_T(X,\alpha)+1\big]^2, \qquad (2.5)$$

where $I_T(X,\alpha)$ and $I_T(Y,\alpha)$ are the Tsallis entropies of X and Y, respectively, and (2.5) implies that $TVE((X,Y),1) = VE(X,Y) = VE(X) + VE(Y)$.

Corollary 2. By using (2.5), the following inequalities are valid:

(a) $TVE((X,Y),\alpha) \ge (1-\alpha)^2\,TVE(X,\alpha)\,TVE(Y,\alpha)$;

(b) $TVE((X,Y),\alpha) \ge TVE(X,\alpha)\big[(1-\alpha)I_T(Y,\alpha)+1\big]^2 + TVE(Y,\alpha)\big[(1-\alpha)I_T(X,\alpha)+1\big]^2$.

Corollary 3. If $X_1, X_2, \dots, X_n$ are iid random variables, then using Theorem 4 we have

$$TVE(X_1,X_2,\dots,X_n,\alpha) = \frac{1}{(1-\alpha)^2}\Big\{(1-\alpha)^2\,TVE(X_1,\alpha) + \big[(1-\alpha)I_T(X_1,\alpha)+1\big]^2\Big\}^{n} - \frac{1}{(1-\alpha)^2}\Big\{\big[(1-\alpha)I_T(X_1,\alpha)+1\big]^2\Big\}^{n}.$$

Theorem 5. Let X and Y be two random variables with joint density function f(x, y) and conditional density function f(x|y). If

$$E\big(f^{2\alpha-2}(X|Y)\big)\cdot E\big(f^{2\alpha-2}(Y)\big) \ge \big[E\big(f^{\alpha-1}(X,Y)\big)\big]^2, \qquad (2.6)$$

then

$$TVE\big((X,Y),\alpha\big) \ge (1-\alpha)^{-2}\,\mathrm{Cov}\big(f^{2\alpha-2}(X|Y),\,f^{2\alpha-2}(Y)\big), \qquad (2.7)$$

and the equality is established when X and Y are independent.

Proof. The joint density of X and Y is $f(x,y) = f(x|y)\cdot f(y)$; therefore,

$$TVE\big((X,Y),\alpha\big) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\big(f^{\alpha-1}(X|Y)\cdot f^{\alpha-1}(Y)\big) = \frac{1}{(1-\alpha)^2}\Big\{E\big(f^{2\alpha-2}(X|Y)\cdot f^{2\alpha-2}(Y)\big) - \big[E\big(f^{\alpha-1}(X|Y)\cdot f^{\alpha-1}(Y)\big)\big]^2\Big\}.$$

Using the definition of covariance, we have

$$\mathrm{Cov}\big(f^{2\alpha-2}(X|Y),\,f^{2\alpha-2}(Y)\big) = E\big(f^{2\alpha-2}(X|Y)\cdot f^{2\alpha-2}(Y)\big) - E\big(f^{2\alpha-2}(X|Y)\big)\cdot E\big(f^{2\alpha-2}(Y)\big);$$

therefore,

$$TVE\big((X,Y),\alpha\big) = \frac{1}{(1-\alpha)^2}\Big\{\mathrm{Cov}\big(f^{2\alpha-2}(X|Y),\,f^{2\alpha-2}(Y)\big) + E\big(f^{2\alpha-2}(X|Y)\big)\cdot E\big(f^{2\alpha-2}(Y)\big) - \big[E\big(f^{\alpha-1}(X,Y)\big)\big]^2\Big\}.$$

If (2.6) holds, then (2.7) is easily obtained. $\square$

3. Tsallis varentropy of order α for order statistics

Suppose that $X_1, X_2, \dots, X_n$ are independent and identically distributed observations from density and cumulative distribution functions f and F, respectively. If we arrange $X_1, X_2, \dots, X_n$ from the smallest to the largest, denoted $X_{1:n} \le X_{2:n} \le \dots \le X_{n:n}$, and $f_{i:n}$ denotes the density function of the ith order statistic, then

$$f_{i:n}(x) = \frac{1}{B(i,n-i+1)}\,[F(x)]^{i-1}\,[1-F(x)]^{n-i}\,f(x),$$

where

$$B(a,b) = \int_0^1 x^{a-1}(1-x)^{b-1}\,dx, \qquad a>0,\ b>0.$$

Order statistics have many applications in probability and statistics, such as the characterization of distributions, goodness-of-fit tests, reliability engineering, and many other problems; for more information, we refer the reader to [4, 8]. Order statistics have also been studied widely in information theory in [5, 12, 18, 26, 27]. Furthermore, the stochastic order also has many applications in finance, risk theory, management science and biomathematics; for example, we refer the reader to works such as [1, 6, 9, 11, 14, 19, 20]. In this section, we introduce the Tsallis varentropy of order α for the ith order statistic. This measure can be a useful information measure for system designers. We know that one of the systems in reliability engineering is the (n − i + 1)-out-of-n system, which is active when at least n − i + 1 components are operating. Assume that $X_1, X_2, \dots, X_n$ denote the identically distributed lifetimes of the system components. Then the ith order statistic indicates the lifetime of such a system. In special cases, $X_{1:n}$ and $X_{n:n}$ are the lifetimes of the series and parallel systems, respectively. Therefore the Tsallis entropy of the ith order statistic is a measure of the uncertainty of the system lifetime, and the Tsallis varentropy is the volatility of this information.

Definition 3. Let $X_1, X_2, \dots, X_n$ be a random sample from a continuous distribution with density function f, and let $X_{i:n}$ denote the ith order statistic. The Tsallis varentropy of the ith order statistic is defined as

$$TVE(X_{i:n},\alpha) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\big(f_{i:n}^{\alpha-1}(X_{i:n})\big) = (1-\alpha)^{-2}\left\{\int_S f_{i:n}^{2\alpha-1}(x)\,dx - \left(\int_S f_{i:n}^{\alpha}(x)\,dx\right)^2\right\},$$

where S is the support of $X_{i:n}$.

In the following theorem we introduce a method for calculating the Tsallis varentropy of the ith order statistic.

Theorem 6. Suppose that X is a continuous random variable with density function f and cumulative distribution function F, and let $X_{i:n}$ denote the ith order statistic. Then the Tsallis varentropy of $X_{i:n}$ can be expressed as

$$TVE(X_{i:n},\alpha) = (1-\alpha)^{-2}\big[A_{i:n}(\alpha) - (B_{i:n}(\alpha))^2\big], \qquad (3.1)$$

$$A_{i:n}(\alpha) = \frac{B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i)+1\big)}{B^{2\alpha-1}(i,n-i+1)}\;E\big(f^{2\alpha-2}(F^{-1}(T_i))\big), \qquad (3.2)$$

$$B_{i:n}(\alpha) = \frac{B\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)}{B^{\alpha}(i,n-i+1)}\;E\big(f^{\alpha-1}(F^{-1}(Z_i))\big), \qquad (3.3)$$

where $Z_i$ has the beta distribution with parameters $\alpha(i-1)+1$ and $\alpha(n-i)+1$, and $T_i$ has the beta distribution with parameters $(2\alpha-1)(i-1)+1$ and $(2\alpha-1)(n-i)+1$.

Proof. In parallel to [1, Lemma 2.1], one can show that $\int_S f_{i:n}^{2\alpha-1}(x)\,dx$ and $\int_S f_{i:n}^{\alpha}(x)\,dx$ are equal to (3.2) and (3.3), respectively. $\square$

Corollary 4. The Tsallis varentropies of order α of the first and the last order statistics are

$$TVE(X_{1:n},\alpha) = (1-\alpha)^{-2}\left\{\frac{n^{2\alpha-1}}{(2\alpha-1)(n-1)+1}\,E\big(f^{2\alpha-2}(F^{-1}(T_1))\big) - \left[\frac{n^{\alpha}}{\alpha(n-1)+1}\,E\big(f^{\alpha-1}(F^{-1}(Z_1))\big)\right]^2\right\},$$

$$TVE(X_{n:n},\alpha) = (1-\alpha)^{-2}\left\{\frac{n^{2\alpha-1}}{(2\alpha-1)(n-1)+1}\,E\big(f^{2\alpha-2}(F^{-1}(T_n))\big) - \left[\frac{n^{\alpha}}{\alpha(n-1)+1}\,E\big(f^{\alpha-1}(F^{-1}(Z_n))\big)\right]^2\right\}.$$

In the following theorem we show that if X has a symmetric density function with respect to x = a, then the Tsallis varentropy is symmetric with respect to i.

Theorem 7. Suppose that X is a continuous random variable with a density function symmetric with respect to x = a. Then

$$TVE(X_{i:n},\alpha) = TVE(X_{n-i+1:n},\alpha).$$

Proof. If X has a density symmetric with respect to x = a, then X − a has a density symmetric with respect to x = 0. Using the properties of order statistics, $X_{i:n} - a \stackrel{d}{=} -(X_{n-i+1:n} - a)$, so we have $TVE(X_{i:n} - a,\alpha) = TVE(-X_{n-i+1:n} + a,\alpha)$. Using Theorem 2, we get $TVE(X_{i:n},\alpha) = TVE(X_{n-i+1:n},\alpha)$. $\square$

Example 1. If $X \sim U(a,b)$, then

$$E\big(f^{2\alpha-2}(F^{-1}(T_i))\big) = \frac{1}{(b-a)^{2\alpha-2}} \quad\text{and}\quad E\big(f^{\alpha-1}(F^{-1}(Z_i))\big) = \frac{1}{(b-a)^{\alpha-1}}.$$

Using (3.2) and (3.3) we have

$$A_{i:n}(\alpha) = \frac{(b-a)^{2-2\alpha}\,B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i)+1\big)}{B^{2\alpha-1}(i,n-i+1)},$$

$$B_{i:n}(\alpha) = \frac{(b-a)^{1-\alpha}\,B\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)}{B^{\alpha}(i,n-i+1)}.$$

Finally, using (3.1), we get

$$TVE(X_{i:n},\alpha) = \frac{(b-a)^{2-2\alpha}}{(1-\alpha)^2}\left\{\frac{B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i)+1\big)}{B^{2\alpha-1}(i,n-i+1)} - \left[\frac{B\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)}{B^{\alpha}(i,n-i+1)}\right]^2\right\},$$

and also

$$TVE(X_{1:n},\alpha) = TVE(X_{n:n},\alpha) = \frac{(b-a)^{2-2\alpha}}{(1-\alpha)^2}\left[\frac{n^{2\alpha-1}}{(2\alpha-1)(n-1)+1} - \frac{n^{2\alpha}}{(\alpha(n-1)+1)^2}\right].$$

Remark 4. If $TVE(X_{i:n},\alpha) = TVE(X_{n-i+1:n},\alpha)$ and $TVE(X_{i:n},\alpha)$ is decreasing in i for $i \le (n+1)/2$ ($i \le n/2$) when n is odd (even), then $TVE(X_{i:n},\alpha)$ is increasing in i for $i \ge (n+1)/2$ ($i \ge n/2+1$). Therefore the median order statistic (the two middle order statistics when n is even) has the minimum Tsallis varentropy.


Figure 1. TVE(Xi:n, 2) versus i for the standard uniform distribution.

Figure 1 shows the Tsallis varentropy of the ith order statistic for the standard uniform distribution; it is symmetric with respect to i.
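The computation behind Figure 1 can be reproduced with a few lines of code. The sketch below (ours, not part of the paper) evaluates the closed form of Example 1 for the standard uniform distribution with n = 100 and α = 2.

```python
# TVE of the i-th order statistic of the standard uniform distribution (Example 1).
import numpy as np
from scipy.special import betaln

def tve_uniform_order_stat(i, n, alpha):
    # closed form of Example 1 with a = 0, b = 1, evaluated in log space
    A = np.exp(betaln((2 * alpha - 1) * (i - 1) + 1, (2 * alpha - 1) * (n - i) + 1)
               - (2 * alpha - 1) * betaln(i, n - i + 1))
    C = np.exp(betaln(alpha * (i - 1) + 1, alpha * (n - i) + 1)
               - alpha * betaln(i, n - i + 1))
    return (A - C ** 2) / (1 - alpha) ** 2

n, alpha = 100, 2.0
vals = [tve_uniform_order_stat(i, n, alpha) for i in range(1, n + 1)]
print(vals[0], vals[n // 2], vals[-1])   # symmetric in i, minimal near the median
```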

Example 2. If $X \sim \mathrm{Exp}(\theta)$, then according to Theorem 6 we have

$$E\big(f^{2\alpha-2}(F^{-1}(T_i))\big) = \frac{\theta^{2\alpha-2}\,B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i+1)\big)}{B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i)+1\big)},$$

$$E\big(f^{\alpha-1}(F^{-1}(Z_i))\big) = \frac{\theta^{\alpha-1}\,B\big(\alpha(i-1)+1,\ \alpha(n-i+1)\big)}{B\big(\alpha(i-1)+1,\ \alpha(n-i)+1\big)},$$

so that

$$A_{i:n}(\alpha) = \frac{\theta^{2\alpha-2}\,B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i+1)\big)}{B^{2\alpha-1}(i,n-i+1)}, \qquad B_{i:n}(\alpha) = \frac{\theta^{\alpha-1}\,B\big(\alpha(i-1)+1,\ \alpha(n-i+1)\big)}{B^{\alpha}(i,n-i+1)},$$

and

$$TVE(X_{i:n},\alpha) = \frac{\theta^{2\alpha-2}}{(1-\alpha)^2}\left\{\frac{B\big((2\alpha-1)(i-1)+1,\ (2\alpha-1)(n-i+1)\big)}{B^{2\alpha-1}(i,n-i+1)} - \left[\frac{B\big(\alpha(i-1)+1,\ \alpha(n-i+1)\big)}{B^{\alpha}(i,n-i+1)}\right]^2\right\}.$$
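As a sanity check of Theorem 6 in this example (our own sketch, with arbitrary i, n, θ and α), the closed form above can be compared with a direct quadrature of the density f_{i:n} from Definition 3.

```python
# Closed form of Example 2 versus direct quadrature of Definition 3.
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln

B = lambda p, q: np.exp(betaln(p, q))

def tve_exp_order_closed(i, n, theta, a):
    A = theta ** (2*a - 2) * np.exp(betaln((2*a - 1)*(i - 1) + 1, (2*a - 1)*(n - i + 1))
                                    - (2*a - 1) * betaln(i, n - i + 1))
    C = theta ** (a - 1) * np.exp(betaln(a*(i - 1) + 1, a*(n - i + 1))
                                  - a * betaln(i, n - i + 1))
    return (A - C ** 2) / (1 - a) ** 2

def tve_exp_order_numeric(i, n, theta, a):
    f = lambda x: theta * np.exp(-theta * x)
    F = lambda x: 1.0 - np.exp(-theta * x)
    fin = lambda x: F(x) ** (i - 1) * (1 - F(x)) ** (n - i) * f(x) / B(i, n - i + 1)
    i1 = quad(lambda x: fin(x) ** (2*a - 1), 0, np.inf)[0]
    i2 = quad(lambda x: fin(x) ** a, 0, np.inf)[0]
    return (i1 - i2 ** 2) / (1 - a) ** 2

print(tve_exp_order_closed(3, 10, 2.0, 2.0), tve_exp_order_numeric(3, 10, 2.0, 2.0))
```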

Figures 2a–2c show the Tsallis varentropy of the ith order statistic for the exponential distribution with θ = 2 and some selected values of α. When $\alpha \to 1$, the symmetry property is observed.

4. The Tsallis varentropy in lifetime study

In reliability science, the hazard rate and reversed hazard rate functions are essential tools that help engineers analyze the failure behaviour of systems. If f, F and $\bar F = 1-F$ are the density, distribution and survival functions of X, respectively, then the hazard rate and reversed hazard rate functions of X are $r(x) = f(x)/\bar F(x)$ and $\mu(x) = f(x)/F(x)$, respectively. We know that if a lifetime distribution has an increasing (decreasing) hazard rate, then it is called an IFR (DFR) distribution, and if it has an increasing (decreasing) reversed hazard rate, then it is called an IRFR (DRFR) distribution.

Figure 2. TVE(X_{i:n}, α) versus i for the exponential distribution with θ = 2 and n = 100: (a) α = 2, (b) α = 1.005, (c) α = 1.0005.

In this part, we introduce some bounds on TVE in terms of the hazard and reversed hazard rate functions, study TVE for residual (past) and doubly truncated lifetime distributions, and examine the effect of the system's age on it.

Theorem 8. Let X be a nonnegative continuous random variable and let r(x) be its hazard rate function. Then

$$\text{(a)}\quad TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\left\{\mathrm{Cov}\big(r^{2\alpha-2}(X),\,\bar F^{2\alpha-2}(X)\big) + \frac{E\big(r^{2\alpha-2}(X)\big)}{2\alpha-1} - \frac{E^2\big(r^{\alpha-1}(X)\big)}{\alpha^2}\right\}, \qquad (4.1)$$

$$\text{(b)}\quad TVE(X,\alpha) \le (\ge)\ \frac{1}{(1-\alpha)^2}\,\mathrm{Cov}\big(r^{2\alpha-2}(X),\,\bar F^{2\alpha-2}(X)\big), \quad\text{if } 0<\alpha<\frac12\ \Big(\alpha>\frac12\Big), \qquad (4.2)$$

$$\text{(c)}\quad TVE(X,\alpha) \le (\ge)\ \frac{1}{(1-\alpha)^2}\left\{\frac{E\big(r^{2\alpha-2}(X)\big)}{2\alpha-1} - \frac{E^2\big(r^{\alpha-1}(X)\big)}{\alpha^2}\right\}, \quad\text{if } F \text{ is IFR (DFR)}. \qquad (4.3)$$

Proof. It is obvious that

$$TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\big(r^{\alpha-1}(X)\,\bar F^{\alpha-1}(X)\big).$$

On the other hand,

$$\mathrm{Var}(XY) = \mathrm{Cov}(X^2,Y^2) + E(X^2)E(Y^2) - \big(E(X)E(Y)\big)^2. \qquad (4.4)$$

Using (4.4), we have

$$TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\Big\{\mathrm{Cov}\big(r^{2\alpha-2}(X),\,\bar F^{2\alpha-2}(X)\big) + E\big(r^{2\alpha-2}(X)\big)\,E\big(\bar F^{2\alpha-2}(X)\big) - \big[E\big(r^{\alpha-1}(X)\big)\,E\big(\bar F^{\alpha-1}(X)\big)\big]^2\Big\}.$$

Since $E\big(\bar F^{2\alpha-2}(X)\big) = 1/(2\alpha-1)$ and $E\big(\bar F^{\alpha-1}(X)\big) = 1/\alpha$, (4.1) is easily obtained. For $0<\alpha<1/2$, the inequality

$$\frac{E\big(r^{2\alpha-2}(X)\big)}{2\alpha-1} \le \frac{E^2\big(r^{\alpha-1}(X)\big)}{\alpha^2}$$

is established, and the first inequality of (4.2) is proved. We know that $E\big(r^{2\alpha-2}(X)\big) \ge E^2\big(r^{\alpha-1}(X)\big)$ and $1/(2\alpha-1) \ge 1/\alpha^2$ for all $\alpha>1/2$. Hence

$$\frac{E\big(r^{2\alpha-2}(X)\big)}{2\alpha-1} \ge \frac{E^2\big(r^{\alpha-1}(X)\big)}{\alpha^2},$$

and the second inequality of (4.2) is obtained. It is easy to see that if F has an IFR distribution, then r(x) is an increasing function of x; since $\bar F$ is decreasing, the covariance is negative, and the first inequality of (4.3) holds. The second inequality is obtained similarly. $\square$

Corollary 5. Let X be a nonnegative continuous random variable and let $\mu(x)$ be its reversed hazard rate function. Then

$$\text{(a)}\quad TVE(X,\alpha) = \frac{1}{(1-\alpha)^2}\left\{\mathrm{Cov}\big(\mu^{2\alpha-2}(X),\,F^{2\alpha-2}(X)\big) + \frac{E\big(\mu^{2\alpha-2}(X)\big)}{2\alpha-1} - \frac{E^2\big(\mu^{\alpha-1}(X)\big)}{\alpha^2}\right\},$$

$$\text{(b)}\quad TVE(X,\alpha) \le (\ge)\ \frac{1}{(1-\alpha)^2}\,\mathrm{Cov}\big(\mu^{2\alpha-2}(X),\,F^{2\alpha-2}(X)\big), \quad\text{if } 0<\alpha<\frac12\ \Big(\alpha>\frac12\Big),$$

$$\text{(c)}\quad TVE(X,\alpha) \ge (\le)\ \frac{1}{(1-\alpha)^2}\left\{\frac{E\big(\mu^{2\alpha-2}(X)\big)}{2\alpha-1} - \frac{E^2\big(\mu^{\alpha-1}(X)\big)}{\alpha^2}\right\}, \quad\text{if } F \text{ is IRFR (DRFR)}.$$

In survival analysis and reliability engineering, we usually know the system's age, so (2.1) is not suitable in such a situation. The random variables $\{X-t \mid X>t\}$, $\{t-X \mid X\le t\}$ and $\{X \mid t_1<X<t_2\}$ denote the residual, past and doubly truncated (interval) lifetimes of the system. If f and $\bar F$ are the density and survival functions of X, respectively, then the residual, past and interval lifetime density functions at time t are as follows:

$$g_R(x,t) = \frac{f(x)}{\bar F(t)}, \quad x>t; \qquad g_P(x,t) = \frac{f(x)}{F(t)}, \quad x\le t; \qquad g_I(x,t_1,t_2) = \frac{f(x)}{F(t_2)-F(t_1)}, \quad t_1<x<t_2.$$

Also, the dynamic Tsallis entropies of X for the residual, past and doubly truncated lifetime random variables are defined as

$$I_{T_R}(X,\alpha,t) = \frac{1}{1-\alpha}\left[\frac{\int_t^{\infty} f^{\alpha}(x)\,dx}{\bar F^{\alpha}(t)} - 1\right], \qquad \alpha>0,\ \alpha\neq1,$$

$$I_{T_P}(X,\alpha,t) = \frac{1}{1-\alpha}\left[\frac{\int_0^{t} f^{\alpha}(x)\,dx}{F^{\alpha}(t)} - 1\right], \qquad \alpha>0,\ \alpha\neq1,$$

$$I_{T_I}(X,\alpha,t_1,t_2) = \frac{1}{1-\alpha}\left[\frac{\int_{t_1}^{t_2} f^{\alpha}(x)\,dx}{\big(F(t_2)-F(t_1)\big)^{\alpha}} - 1\right], \qquad \alpha>0,\ \alpha\neq1.$$

Definition 4. The residual, past and interval Tsallis varentropies of the nonnegative random variables $\{X-t \mid X>t\}$, $\{t-X \mid X\le t\}$ and $\{X \mid t_1<X<t_2\}$ are defined as

$$TVE_R(X,\alpha,t) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\left(\Big(\frac{f(X)}{\bar F(t)}\Big)^{\alpha-1}\ \Big|\ X>t\right) = \frac{\bar F^{2-2\alpha}(t)}{(1-\alpha)^2}\,\mathrm{Var}\big(f^{\alpha-1}(X)\mid X>t\big), \qquad (4.5)$$

$$TVE_P(X,\alpha,t) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\left(\Big(\frac{f(X)}{F(t)}\Big)^{\alpha-1}\ \Big|\ X\le t\right) = \frac{F^{2-2\alpha}(t)}{(1-\alpha)^2}\,\mathrm{Var}\big(f^{\alpha-1}(X)\mid X\le t\big), \qquad (4.6)$$

$$TVE_I(X,\alpha,t_1,t_2) = \frac{1}{(1-\alpha)^2}\,\mathrm{Var}\left(\Big(\frac{f(X)}{F(t_2)-F(t_1)}\Big)^{\alpha-1}\ \Big|\ t_1<X<t_2\right) = \frac{\big(F(t_2)-F(t_1)\big)^{2-2\alpha}}{(1-\alpha)^2}\,\mathrm{Var}\big(f^{\alpha-1}(X)\mid t_1<X<t_2\big). \qquad (4.7)$$

It is clear that when $t\to0$ ($t\to\infty$), $TVE_R(X,\alpha,t)$ ($TVE_P(X,\alpha,t)$) tends to $TVE(X,\alpha)$, and if $t_1\to0$, $t_2\to\infty$, then $TVE_I(X,\alpha,t_1,t_2) = TVE(X,\alpha)$. For example, if X has a Pareto distribution with density function

$$f(x) = \frac{\theta\beta^{\theta}}{x^{\theta+1}}, \qquad x>\beta,\ \beta>0,\ \theta>0, \qquad \bar F(x) = \Big(\frac{\beta}{x}\Big)^{\theta},$$

then

$$TVE_R(X,\alpha,t) = \frac{t^{\theta(2\alpha-2)}\theta^{2\alpha-2}}{(1-\alpha)^2}\,\mathrm{Var}\big(X^{(\theta+1)(1-\alpha)}\mid X>t\big) = \frac{t^{2-2\alpha}\theta^{2\alpha}}{(1-\alpha)^2}\left\{\frac{1}{\theta(\theta+1)(2\alpha-1)-\theta} - \frac{1}{[\alpha(\theta+1)-1]^2}\right\}.$$

If $\alpha\to1$, then the Tsallis residual varentropy reduces to the residual varentropy of the Pareto distribution, which is $(\theta+1)^2/\theta^2$ for all $t>0$ and is therefore independent of the age of the system; the Tsallis residual varentropy, however, is not.

Theorem 9. X has a uniform distribution if and only if $TVE_R(X,\alpha,t) = 0$, $TVE_P(X,\alpha,t) = 0$, or $TVE_I(X,\alpha,t_1,t_2) = 0$.

Proof. If $X \sim U(a,b)$, then

$$TVE_R(X,\alpha,t) = \frac{\bar F^{2-2\alpha}(t)}{(1-\alpha)^2}\,\mathrm{Var}\big((b-a)^{1-\alpha}\mid X>t\big) = 0.$$

On the other hand, if $TVE_R(X,\alpha,t) = 0$, then

$$\frac{\bar F^{2-2\alpha}(t)}{(1-\alpha)^2}\,\mathrm{Var}\big(f^{\alpha-1}(X)\mid X>t\big) = 0,$$

and f(X) is almost surely constant. Similarly to Theorem 1, X has the uniform distribution. For the other two cases the proof is the same. $\square$

Proposition 2. If X has an exponential distribution, then the Tsallis residual varentropy is independent of the lifetime of the system.

Proof. In the exponential case, we know that

$$g_R(x,t) = \frac{f(x)}{\bar F(t)} = \theta e^{-\theta(x-t)}, \qquad x>t.$$

Therefore the residual lifetime distribution $X-t \mid X>t$ is again $\mathrm{Exp}(\theta)$, independently of t, and hence $TVE_R(X,\alpha,t) = TVE(X,\alpha)$. $\square$
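The contrast between the last two examples can be seen numerically. The sketch below (ours; the parameter values are arbitrary) computes TVE_R(X, α, t) from (4.5) by quadrature: for the exponential distribution the value does not change with t, while for the Pareto distribution it does.

```python
# Residual Tsallis varentropy TVE_R(X, alpha, t) computed from its definition.
import numpy as np
from scipy.integrate import quad

def tve_residual(pdf, sf, t, a):
    g = lambda x: pdf(x) / sf(t)                    # density of X | X > t
    i1 = quad(lambda x: g(x) ** (2*a - 1), t, np.inf)[0]
    i2 = quad(lambda x: g(x) ** a,         t, np.inf)[0]
    return (i1 - i2 ** 2) / (1 - a) ** 2

theta, a = 1.5, 2.0
exp_pdf = lambda x: theta * np.exp(-theta * x)
exp_sf  = lambda x: np.exp(-theta * x)

beta_, th = 1.0, 3.0                                # Pareto scale / shape
par_pdf = lambda x: th * beta_ ** th / x ** (th + 1)
par_sf  = lambda x: (beta_ / x) ** th

for t in (0.5, 1.0, 2.0):
    print(t,
          tve_residual(exp_pdf, exp_sf, t, a),          # constant in t
          tve_residual(par_pdf, par_sf, t + beta_, a))  # depends on t
```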

We can introduce two new classes of distributions using the following definition.

Definition 5. We say that F has an increasing (decreasing) Tsallis residual varentropy ITRVE(DTRVE) if TVER(X, a, t) is an increasing (decreasing) function of t, and F has an increasing (decreasing) Tsallis past varentropy ITPVE(DTPVE) if TVEP(X, a, t) is an increasing (decreasing) function of t for all t > 0.

Theorem 10. F has DTRVE (ITPVE) in $t>0$ if $TVE_R(X,\alpha,t)$ ($TVE_P(X,\alpha,t)$) $<\infty$, $I_{T_R}(X,\alpha,t)$ ($I_{T_P}(X,\alpha,t)$) $<\infty$, and $0<\alpha<1/2$.

Proof. Differentiating (4.5) and (4.6) with respect to t, we have

$$(1-\alpha)^2\,\frac{\partial}{\partial t}TVE_R(X,\alpha,t) = r(t)\Big\{(2\alpha-1)(1-\alpha)^2\,TVE_R(X,\alpha,t) - \big[(1-\alpha)I_{T_R}(X,\alpha,t)+1-r^{\alpha-1}(t)\big]^2\Big\}, \qquad (4.8)$$

$$(1-\alpha)^2\,\frac{\partial}{\partial t}TVE_P(X,\alpha,t) = \mu(t)\Big\{(1-2\alpha)(1-\alpha)^2\,TVE_P(X,\alpha,t) + \big[\mu^{\alpha-1}(t)-(1-\alpha)I_{T_P}(X,\alpha,t)-1\big]^2\Big\}, \qquad (4.9)$$

where $I_{T_R}(X,\alpha,t)$ and $I_{T_P}(X,\alpha,t)$ are the Tsallis residual and past entropies of X, respectively. We see that if $0<\alpha<1/2$, then $\partial TVE_R(X,\alpha,t)/\partial t \le 0$ and $\partial TVE_P(X,\alpha,t)/\partial t \ge 0$, so F has DTRVE (ITPVE). $\square$

Theorem 11. F has ITRVE (DTRVE) in $t>0$ if $TVE_R(X,\alpha,t)<\infty$, $I_{T_R}(X,\alpha,t)<\infty$, and for all $\alpha>1/2$,

$$(2\alpha-1)(1-\alpha)^2\,TVE_R(X,\alpha,t) \ge (\le)\ \big[(1-\alpha)I_{T_R}(X,\alpha,t)+1-r^{\alpha-1}(t)\big]^2.$$

Also, F has DTPVE (ITPVE) in $t>0$ if $TVE_P(X,\alpha,t)<\infty$, $I_{T_P}(X,\alpha,t)<\infty$, and for all $\alpha>1/2$,

$$|1-2\alpha|(1-\alpha)^2\,TVE_P(X,\alpha,t) \ge (\le)\ \big[\mu^{\alpha-1}(t)-(1-\alpha)I_{T_P}(X,\alpha,t)-1\big]^2. \qquad (4.10)$$

Proof. By Definition 5, F has ITRVE (DTRVE) in t if $\partial TVE_R(X,\alpha,t)/\partial t \ge (\le)\ 0$. By using (4.8), the proof is complete. Also, (4.10) can be proved similarly by using (4.9). $\square$

Corollary 6. If F has ITRVE (DTRVE) in $t>0$, then for all $\alpha>1/2$

$$TVE_R(X,\alpha,t) \ge (\le)\ \frac{\big[(1-\alpha)I_{T_R}(X,\alpha,t)+1-r^{\alpha-1}(t)\big]^2}{(2\alpha-1)(1-\alpha)^2}, \qquad (4.11)$$

and if F has DTPVE (ITPVE) in $t>0$, then for all $\alpha>1/2$

$$TVE_P(X,\alpha,t) \ge (\le)\ \frac{\big[\mu^{\alpha-1}(t)-(1-\alpha)I_{T_P}(X,\alpha,t)-1\big]^2}{|1-2\alpha|(1-\alpha)^2}. \qquad (4.12)$$

Therefore (4.11) and (4.12) are lower (upper) bounds for the Tsallis varentropy for all $\alpha>1/2$.

Corollary 7. Let F be both ITRVE and DTRVE, so that $\partial TVE_R(X,\alpha,t)/\partial t = 0$. Then

$$(2\alpha-1)(1-\alpha)^2\,TVE_R(X,\alpha,t) = \big[(1-\alpha)I_{T_R}(X,\alpha,t)+1-r^{\alpha-1}(t)\big]^2, \qquad \alpha>\frac12,$$

and hence

$$TVE(X,\alpha) = \frac{\big[(1-\alpha)I_T(X,\alpha)+1-r^{\alpha-1}(0)\big]^2}{(2\alpha-1)(1-\alpha)^2}, \qquad \alpha>\frac12. \qquad (4.13)$$

If F is both ITPVE and DTPVE, then $\partial TVE_P(X,\alpha,t)/\partial t = 0$ and we have

$$|1-2\alpha|(1-\alpha)^2\,TVE_P(X,\alpha,t) = \big[\mu^{\alpha-1}(t)-(1-\alpha)I_{T_P}(X,\alpha,t)-1\big]^2,$$

and therefore

$$TVE(X,\alpha) = \frac{\big[\mu^{\alpha-1}(\infty)-(1-\alpha)I_T(X,\alpha)-1\big]^2}{|1-2\alpha|(1-\alpha)^2}, \qquad \alpha>\frac12. \qquad (4.14)$$

Therefore (4.13) and (4.14) give the Tsallis varentropy when the age of the system has no effect on it.

5. Conclusion

In this paper, we introduced the generalized varentropy of order α for continuous random variables based on the Tsallis entropy. We showed that, unlike the varentropy, which is a location and scale invariant measure, the Tsallis varentropy is invariant under location transformations but not under scale transformations, unless $\alpha\to1$. After presenting some theorems on the properties of the Tsallis varentropy, we investigated it for order statistics, which can be useful for system designers studying the lifetime information of (n − i + 1)-out-of-n systems. We also studied it for lifetime distributions and obtained some bounds in terms of the hazard and reversed hazard rate functions. We then studied the effect of the age of the system via residual lifetime distributions and showed that for the uniform and exponential distributions the Tsallis residual varentropy is independent of the age of the system. Finally, we introduced two new classes of distributions via the residual and past Tsallis varentropy and described some of their properties.

Acknowledgements

The authors would like to thank the editor and anonymous referees for their valuable comments and suggestions that improved the quality of the paper.

REFERENCES

1. Abbasnejad M., Arghami N. R. Renyi entropy properties of order statistics. Comm. Statist. Theory Methods, 2010. Vol. 40, No. 1. P. 40-52. DOI: 10.1080/03610920903353683
2. Afhami B., Madadi M., Rezapour M. Goodness-of-fit test based on Shannon entropy of k-record values from the generalized Pareto distribution. J. Stat. Sci., 2015. Vol. 9, No. 1. P. 43-60.
3. Arikan E. Varentropy decreases under the polar transform. IEEE Trans. Inform. Theory, 2016. Vol. 62, No. 6. P. 3390-3400. DOI: 10.1109/TIT.2016.2555841
4. Arnold B. C., Balakrishnan N., Nagaraja H. N. A First Course in Order Statistics. Classics Appl. Math., vol. 54. Philadelphia: SIAM, 2008. 279 p. DOI: 10.1137/1.9780898719062
5. Baratpour S., Ahmadi J., Arghami N. R. Characterizations based on Renyi entropy of order statistics and record values. J. Statist. Plann. Inference, 2008. Vol. 138, No. 8. P. 2544-2551. DOI: 10.1016/j.jspi.2007.10.024
6. Baratpour S., Khammar A. Tsallis entropy properties of order statistics and some stochastic comparisons. J. Statist. Res. Iran, 2016. Vol. 13, No. 1. P. 25-41. DOI: 10.18869/acadpub.jsri.13.1.2
7. Bobkov S., Madiman M. Concentration of the information in data with log-concave distributions. Ann. Probab., 2011. Vol. 39, No. 4. P. 1528-1543. URL: https://projecteuclid.org/euclid.aop/1312555807
8. David H. A., Nagaraja H. N. Order Statistics. 3rd edition. Wiley Ser. Probab. Stat. Hoboken, New Jersey: John Wiley & Sons, Inc., 2003. 458 p. DOI: 10.1002/0471722162
9. Di Crescenzo A., Longobardi M. Statistic comparisons of cumulative entropies. In: Stochastic Orders in Reliability and Risk. Li H., Li X. (eds.). Lect. Notes Stat., vol. 208. New York: Springer, 2013. P. 167-182. DOI: 10.1007/978-1-4614-6892-9^
10. Di Crescenzo A., Paolillo L. Analysis and applications of the residual varentropy of random lifetimes. Probab. Engrg. Inform. Sci., 2020. P. 1-19. DOI: 10.1017/S0269964820000133
11. Ebrahimi N., Kirmani S. N. U. A. Some results on ordering of survival functions through uncertainty. Statist. Probab. Lett, 1996. Vol. 29, No. 2. P. 167-176. DOI: 10.1016/0167-7152(95)00170-0
12. Ebrahimi N., Soofi E. S., Zahedi H. Information properties of order statistics and spacing. IEEE Trans. Inform. Theory, 2004. Vol. 50, No. 1. P. 177-183. DOI: 10.1109/TIT.2003.821973
13. Enomoto R., Okamoto N., Seo T. On the asymptotic normality of test statistics using Song's kurtosis. J. Stat. Theory Pract., 2013. Vol. 7, No. 1. P. 102-119. DOI: 10.1080/15598608.2013.756351
14. Gupta R. C., Taneja H. C., Thapliyal R. Stochastic comparisons based on residual entropy of order statistics and some characterization results. J. Stat. Theory Appl., 2014. Vol. 13, No. 1. P. 27-37. DOI: 10.2991/jsta.2014.13.1.3
15. Kontoyiannis I., Verdu S. Optimal lossless compression: Source varentropy and dispersion. IEEE Trans. Inform. Theory, 2014. Vol. 60, No. 2. P. 777-795. DOI: 10.1109/TIT.2013.2291007
16. Liu J. Information Theoretic Content and Probability. Ph.D. Thesis, University of Florida, 2007.
17. Nanda A. K., Paul P. Some results on generalized residual entropy. Inform. Sci., 2006. Vol. 176, No. 1. P. 27-47. DOI: 10.1016/j.ins.2004.10.008
18. Park S. The entropy of consecutive order statistics. IEEE Trans. Inform. Theory, 1995. Vol. 41, No. 6. P. 2003-2007. DOI: 10.1109/18.476325
19. Psarrakos G., Navarro J. Generalized cumulative residual entropy and record values. Metrika, 2013. Vol. 76. P. 623-640. DOI: 10.1007/s00184-012-0408-6
20. Raqab M.Z., Amin W. A. Some ordering result on order statistics and record values. IAPQR Trans., 1996. Vol. 21, No. 1. P. 1-8.
21. Shannon C.E. A mathematical theory of communication. Bell System Technical J., 1948. Vol. 27, No. 3. P. 379-423 DOI: 10.1002/j.1538-7305.1948.tb01338.x
22. Song K.-S. Renyi information, log likelihood and an intrinsic distribution measure. J. Statist. Plann. Inference, 2001. Vol. 93, No. 1-2. P. 51-69. DOI: 10.1016/S0378-3758(00)00169-5
23. Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys., 1988. Vol. 52. P. 479-487. DOI: 10.1007/BF01016429
24. Vikas Kumar, Taneja H. C. A generalized entropy-based residual lifetime distributions. Int. J. Biomath., 2011. Vol. 04, No. 02. P. 171-148. DOI: 10.1142/S1793524511001416
25. Wilk G., Wlodarczyk Z. Example of a possible interpretation of Tsallis entropy. Phys. A: Stat. Mech. Appl., 2008. Vol. 387, No. 19-20. P. 4809-4813. DOI: 10.1016/j.physa.2008.04.022
26. Wong K. M., Chen S. The entropy of ordered sequences and order statistics. IEEE Trans. Inform. Theory, 1990. Vol. 36, No. 2. P. 276-284. DOI: 10.1109/18.52473
27. Zarezadeh S., Asadi M. Results on residual Renyi entropy of order statistics and record values. Inform. Sci., 2010. Vol. 180, No. 21. P. 4195-4206. DOI: 10.1016/j.ins.2010.06.019
28. Zhang Z. Uniform estimates on the Tsallis entropies. Lett. Math. Phys., 2007. Vol. 80. P. 171-181. DOI: 10.1007/s11005-007-0155-1
29. Zografos K. On Mardia's and Song's measures of kurtosis in elliptical distributions. J. Multivariate Anal., 2008. Vol. 99, No. 5. P. 858-879. DOI: 10.1016/j.jmva.2007.05.001