
The Study on Conditional Probability of Error for m-ary Hypothesis Tests


Abstract. In this paper, a new class of lower bounds on the probability of error in multiple hypothesis testing is presented. These new bounds maintain the desirable properties of continuity, differentiability, and symmetry. In the binary case, the proposed class depends on a parameter q which, in the limit as it tends to infinity, yields the minimum attainable probability of error, the one achieved by the MAP detector. It is shown that this class of bounds generalizes some existing bounds for binary hypothesis tests. It is also shown via examples that the proposed bounds outperform other existing bounds in terms of tightness and simplicity of calculation.

Key words: MAP; hypothesis testing; probability of error

Introduction

Lower bounds on the probability of error are of great importance in system design and performance analysis in many applications, such as signal detection, communications, and classification. It is well known that the minimum probability of error is attained by the maximum a-posteriori probability (MAP) criterion; however, this probability of error is often difficult to calculate and usually not tractable. In such cases, lower bounds on the probability of error are useful for performance analysis, feasibility studies, and system design. These bounds are also useful for the derivation of analytical expressions for the Ziv-Zakai family of bounds for parameter estimation [1]. One of the difficulties in the computation of the Ziv-Zakai bounds is that they involve an expression for the minimum probability of error of a binary hypothesis problem. Analytic expressions for lower bounds on the probability of error may therefore be useful to simplify the calculation of the bound.

PROBLEM STATEMENT

Consider an M-ary hypothesis testing problem, in which the hypotheses are $H_1, \ldots, H_M$ with the corresponding a-priori probabilities $P(H_i)$, $i = 1, \ldots, M$, and the random observation vector is $\mathbf{x}$. Let $P(H_i \mid \mathbf{x})$, $i = 1, \ldots, M$, denote the conditional probability of $H_i$ given $\mathbf{x}$, and let $f(\mathbf{x} \mid H_i)$ and $f(\mathbf{x}, H_i)$ be the conditional and joint probability density functions (pdf) of $\mathbf{x}$ and $H_i$, $i = 1, \ldots, M$, respectively. The probability of error of the decision problem is denoted $P_e$. It is well known that the minimum average probability of error, obtained by the MAP criterion, is given by [9]

$$P_e^{(\min)} = 1 - \mathbb{E}\left[ \max_{i = 1, \ldots, M} P(H_i \mid \mathbf{x}) \right]. \qquad (1)$$

However, the minimum probability of error in (1) is often difficult to calculate and usually not tractable. Therefore, computable and tight lower bounds on the probability of error are useful for performance analysis and system design.
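
To make (1) concrete, the following Python sketch evaluates it by Monte Carlo for a toy binary Gaussian problem and compares it with the empirical error rate of the MAP detector. The priors, means, and sample size are illustrative assumptions chosen for this sketch, not values taken from the paper.

import numpy as np
from scipy.stats import norm

# Toy binary problem (illustrative assumptions):
# H_1: x ~ N(0, 1), H_2: x ~ N(1, 1), with a-priori probabilities 0.4 and 0.6.
priors = np.array([0.4, 0.6])
means = np.array([0.0, 1.0])

rng = np.random.default_rng(0)
n = 200000

# Draw (hypothesis, observation) pairs from the joint model.
h = rng.choice(2, size=n, p=priors)
x = rng.normal(means[h], 1.0)

# Posterior probabilities P(H_i | x) via Bayes' rule.
lik = norm.pdf(x[:, None], loc=means[None, :], scale=1.0)
post = priors * lik
post /= post.sum(axis=1, keepdims=True)

# Empirical error rate of the MAP detector ...
p_e_map = np.mean(post.argmax(axis=1) != h)

# ... and the expression in (1): 1 - E[max_i P(H_i | x)].
p_e_min = 1.0 - np.mean(post.max(axis=1))

print("MAP error rate (Monte Carlo):", round(p_e_map, 4))
print("1 - E[max_i P(H_i|x)]:       ", round(p_e_min, 4))
# Up to sampling noise, the two values coincide, as (1) asserts.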

REVIEW OF EXISTING LOWER BOUNDS

In this section, some existing lower bounds on the minimum probability of error are presented. The following bounds were derived specifically for binary hypothesis testing. Most of the binary hypothesis testing bounds are based on divergence measures of the difference between two probability distributions, known as I-divergences or Ali-Silvey distances.

In [2], the divergence and two Bhattacharyya-based lower bounds were proposed. The divergence lower bound is

(2)

where and

is the likelihood ratio function, and

A simple Bhattacharyya-based lower bound is

This bound is always tighter than the divergence lower bound [2]. The second Bhattacharyya-based bound on the minimum probability of error is

Another I-divergence bound is proposed in [3]:

where L > 1. For L = 1, this bound can also be obtained by applying Jensen's inequality to the MAP probability of error.

The harmonic lower bound was proposed in [5]:

The "Gaussian-Sinusoidal" lower bound is given by

where = 1.8063. Although this bound is tight, it is usually not tractable. An arbitrarily tight lower bound is given by

for any a > 0. By selecting sufficiently large values of a, this lower bound can be made arbitrarily close to $P_e^{(\min)}$. However, this bound is, in general, difficult to evaluate.

For multiple hypothesis testing problems, the following lower bounds have been proposed. In [6], Devijver derived the following bounds using the conditional Bayesian distance:

and

where stands for the conditional Bayesian distance. In [6], it is analytically shown that for the binary case the Bayesian distance lower bound in (9) is always tighter than the Bhattacharyya bound in (4). The bound in (10) is tighter than the bound [5], [6]

The bound

was proposed in [7] in the context of Vajda's quadratic entropy and the quadratic mutual information, respectively. In [6], it is claimed that

The bound can be interpreted as an M-ary extension of the harmonic mean bound presented in (6).
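
The displayed expression for this bound (denoted $B_{quad}$ later in the text) did not survive in this copy. Assuming, for illustration only, the quadratic-entropy form $B_{quad} = \frac{1}{2}\left(1 - \mathbb{E}\left[\sum_{i=1}^{M} P^2(H_i \mid \mathbf{x})\right]\right)$, which in the binary case reduces to the product of the two posterior probabilities, the following Python sketch checks numerically that this expression never exceeds the conditional MAP error probability $1 - \max_i P(H_i \mid \mathbf{x})$, and is therefore a valid lower bound after taking the expectation over $\mathbf{x}$.

import numpy as np

# Numerical check of the assumed quadratic-entropy form:
#   0.5 * (1 - sum_i p_i^2)  <=  1 - max_i p_i
# pointwise for every posterior vector p. The inequality follows from
# sum_i p_i^2 >= (max_i p_i)^2 >= 2*max_i p_i - 1.
rng = np.random.default_rng(1)
M = 4                                         # number of hypotheses (illustrative)
p = rng.dirichlet(np.ones(M), size=100000)    # random posterior vectors

cond_pe = 1.0 - p.max(axis=1)                 # conditional MAP error probability
quad = 0.5 * (1.0 - (p ** 2).sum(axis=1))     # quadratic-entropy expression

assert np.all(quad <= cond_pe + 1e-12)
print(quad.mean(), "<=", cond_pe.mean())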

A NEW CLASS OF BOUNDS ON PROBABILITY OF ERROR

Derivation of the new class of bounds

Consider an M-ary hypothesis testing problem with detector . Let

where 0 is the true hypothesis and $1_A$ is the indicator function of a subset A. Then, according to Hölder's inequality, for

for an arbitrary scalar function . It can be seen that

where . By substituting (15) into (14), one obtains the following lower bound on the probability of error:

Using (13), the expectation term in the numerator of (16) can be rewritten as

It can be shown that, in order to obtain a valid bound which is independent of the detector, should be structured as follows

where is an arbitrary function. Without loss of generality, can be chosen to be a nonnegative function. By substituting (18) into (17), one obtains

Using (18), it can be seen that

where . By substitution of (19) and (20) into (16), the bound can be rewritten as:

Maximization of (21) with respect to (w.r.t.) results in

and by substituting (22) in (21), the attained lower bound is

for all q > 1.

Binary hypothesis testing

In the binary hypothesis problem with the hypotheses $H_1$ and $H_2$, the lower bound in (23) is

Asymptotic properties

It can be seen that the bound in (24) becomes tighter with increasing q, and for $q \to \infty$ the bound is

which is the optimal bound for the binary hypothesis test, presented in (1).
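
The exact expression in (24) is not recoverable from this copy, but the limiting behavior can be illustrated with the standard negative-power-mean fact that, for a binary posterior $(p, 1-p)$, expressions of the form $\left(p^{-(q-1)} + (1-p)^{-(q-1)}\right)^{-1/(q-1)}$ never exceed $\min(p, 1-p)$ and tighten toward it as q grows. The following Python sketch demonstrates this numerically; the particular functional form is an assumption used for illustration, not the paper's equation (24).

import numpy as np

def neg_power_mean(p, q):
    # (p^{-(q-1)} + (1-p)^{-(q-1)})^{-1/(q-1)}: a q-parameterized lower bound
    # on min(p, 1-p), used here only to illustrate the tightening mechanism.
    s = q - 1.0
    return (p ** (-s) + (1.0 - p) ** (-s)) ** (-1.0 / s)

p = np.linspace(0.01, 0.99, 99)       # binary conditional probability P(H_1|x)
exact = np.minimum(p, 1.0 - p)        # conditional MAP error probability

for q in (2, 5, 10, 50):
    gap = np.max(exact - neg_power_mean(p, q))
    print("q =", q, " largest gap to min(p, 1-p):", round(float(gap), 4))
# The gap shrinks toward zero as q increases, consistent with the asymptotic claim.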

The harmonic lower bound

For q = 2, this bound can be written in the following simple form:

which is identical to the "harmonic lower bound" in (6) and to $B_{quad}$ for the binary case in (12). Thus, the binary lower bound in [5] can be interpreted as a special case of our general M-hypothesis bound, presented in (23).
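
The identification of the q = 2 case with both (6) and (12) can be verified directly, assuming the harmonic and quadratic-entropy forms discussed above (the displayed equations (6) and (12) are missing from this copy, so these forms are reconstructions for illustration): for a binary posterior satisfying $P(H_1 \mid \mathbf{x}) + P(H_2 \mid \mathbf{x}) = 1$,

$$\frac{P(H_1 \mid \mathbf{x}) P(H_2 \mid \mathbf{x})}{P(H_1 \mid \mathbf{x}) + P(H_2 \mid \mathbf{x})} = P(H_1 \mid \mathbf{x}) P(H_2 \mid \mathbf{x}) = \frac{1}{2}\left(1 - P^2(H_1 \mid \mathbf{x}) - P^2(H_2 \mid \mathbf{x})\right),$$

so the harmonic-mean expression and the quadratic-entropy expression coincide in the binary case.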

Relation to upper bounds on minimum probability of error

In [5], an upper bound on the probability of error of the MAP detector for binary hypothesis testing is derived using the negative power mean inequalities. According to this paper:

for any q > 1. It can be seen that this upper bound can be obtained by multiplying the proposed lower bound in (23) by

This factor controls the tightness of the relation between the upper and lower bounds on the probability of error for binary hypothesis testing.

Bounds comparison

Figure 1 depicts the new lower bound for the binary hypothesis problem against the conditional probability, for different values of the parameter q. The new bounds are compared to the bounds presented in (7), (8) (with a = 5), and (11). It can be seen that the bound in (24) becomes tighter as q grows, and that for q = 10, the new bound is tighter than the other lower bounds almost everywhere.

Fig. 1. The proposed lower bounds for q = 2, 5, 10 and other existing bounds as a function of the conditional probability for binary hypothesis testing.

CONCLUSION

Practical and useful lower bounds on the probability of error are expected to be computationally simple, tight, and appropriate for general multi-hypothesis problems. In this paper, a new class of lower bounds with these desired properties is derived using Hölder's inequality. The bounds in this class are simpler to compute than the optimum probability of error provided by the MAP criterion, and they provide good predictions of the minimum probability of error in multiple hypothesis testing.

References

[1] D. Chazan, M. Zakai, and J. Ziv, "Improved lower bounds on signal parameter estimation," IEEE Trans. Inform. Theory, vol. IT-21, no. 1, pp. 90-93, 1975.

[2] T. Kailath, "The divergence and Bhattacharyya distance measures in signal selection," IEEE Trans. Commun. Technol., vol. COM-15, no. 1, pp. 52-60, 1967.

[3] J. van Tilburg and D. E. Boekee, "Divergence bounds on key equivocation and error probability in cryptanalysis," Advances in Cryptology - CRYPTO '85, vol. 218, pp. 489-513, 1986.

[4] T. S. Han and S. Verdu, "Generalizing the Fano inequality," IEEE Trans. Inform. Theory, vol. 40, no. 4, pp. 1247-1251, 1994.

[5] N. Santhi and A. Vardy, "On an improvement over Rényi's equivocation bound," Proc. 44th Allerton Conference on Communications, Control and Computing, pp. 1118-1124, 2006.

[6] P. A. Devijver, "On a new class of bounds on Bayes risk in multihypothesis pattern recognition," IEEE Trans. Comput., vol. C-23, pp. 70-80, 1974.

[7] C. E. Shannon, "Certain results in coding theory for noisy channels," Inform. Contr., vol. 1, pp. 6-25, 1957.

[8] R. M. Fano, Class notes for Transmission of Information, course 6.574, MIT, Cambridge, MA, 1952.

[9] M. Feder and N. Merhav, "Relations between entropy and error probability," IEEE Trans. Inform. Theory, vol. 40, no. 1, pp. 259-266, 1994.