Draft:Universal Hypothesis Testing

From Wikipedia, the free encyclopedia

{{See also|Goodness of fit}}

Universal hypothesis testing is a special case of binary composite hypothesis testing. The universal problem is to distinguish between a simple null hypothesis H₀ : P = P₀, and the most general composite alternative H₁ : P ≠ P₀, using independent and identically distributed samples X₁, …, Xₙ from P. The setting is sometimes referred to as goodness of fit testing, or one-sample testing.

A simple binary hypothesis testing problem involves distinguishing between H₀ : P = P₀ and H₁ : P = P₁, using samples X₁, …, Xₙ drawn from P. In the traditional setting of hypothesis testing, P₀ and P₁ are known a priori. A composite version of this problem involves sets of probability distributions 𝒫₀ and 𝒫₁, and asks to distinguish between H₀ : P ∈ 𝒫₀ and H₁ : P ∈ 𝒫₁. The universal setting corresponds to the special case of composite hypothesis testing in which the null hypothesis is simple, 𝒫₀ = {P₀}, and the alternative hypothesis is the set of all distributions other than P₀, 𝒫₁ = {P : P ≠ P₀}. For example, someone might want to know whether a particular coin is fair, i.e. H₀ : P(H) = P(T) = 1/2, or not, i.e. H₁ : P(H) ≠ 1/2, where H and T denote the coin coming up heads or tails.
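The coin example above can be tested with a standard exact binomial test; this is one common concrete instance of testing a simple null against the universal alternative, not a method specific to the literature cited below. A minimal sketch (the function name and the two-sided p-value convention are illustrative choices):

```python
import math

def coin_fairness_pvalue(heads, n, p0=0.5):
    """Two-sided exact binomial test of H0: P(heads) = p0.

    Returns the probability, under H0, of observing an outcome at
    least as unlikely as the observed number of heads in n tosses.
    """
    def pmf(k):
        # Binomial probability of exactly k heads under H0.
        return math.comb(n, k) * p0**k * (1 - p0)**(n - k)

    observed = pmf(heads)
    # Sum the probabilities of all outcomes no more likely than the
    # observed one (small tolerance guards against float round-off).
    return sum(pmf(k) for k in range(n + 1) if pmf(k) <= observed + 1e-12)
```

For instance, 9 heads in 10 tosses gives a p-value of about 0.021, so the fairness hypothesis would be rejected at the 5% level, while 5 heads in 10 tosses gives a p-value of 1.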

The asymptotics of universal hypothesis testing were first discussed in Hoeffding's work on optimal tests for multinomial distributions.[1] There have been many subsequent works on the topic[2][3][4] in many directions. While Hoeffding's initial results were restricted to distributions with finite supports, later results developed solutions for continuous distributions using extensions of the Kullback–Leibler divergence,[5] or kernel methods.[6][7]
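For finite alphabets, Hoeffding's test rejects the null hypothesis when the Kullback–Leibler divergence between the empirical distribution of the sample and P₀ exceeds a threshold. A minimal sketch, where the threshold `eta` is left as a user-supplied parameter (its calibration, which governs the test's error exponents, is not shown here):

```python
import math
from collections import Counter

def hoeffding_test(samples, p0, eta):
    """Hoeffding-style universal test on a finite alphabet.

    Rejects H0: P = p0 when the KL divergence between the empirical
    distribution of the samples and p0 exceeds the threshold eta.
    Assumes p0[x] > 0 for every symbol x observed in the samples.
    Returns True when H0 is rejected (i.e. H1 is declared).
    """
    n = len(samples)
    counts = Counter(samples)
    kl = 0.0
    for x, c in counts.items():
        p_hat = c / n
        kl += p_hat * math.log(p_hat / p0[x])
    return kl > eta
```

For example, with p0 = {'H': 0.5, 'T': 0.5} and eta = 0.05, a sample of 90 heads and 10 tails is rejected, while a sample of 50 heads and 50 tails is not.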



References

  1. ^ Hoeffding, Wassily (April 1965). "Asymptotically Optimal Tests for Multinomial Distributions". The Annals of Mathematical Statistics. 36 (2): 369–401. doi:10.1214/aoms/1177700150. ISSN 0003-4851.
  2. ^ Levitan, E.; Merhav, N. (August 2002). "A competitive Neyman-Pearson approach to universal hypothesis testing with applications". IEEE Transactions on Information Theory. 48 (8): 2215–2229. doi:10.1109/TIT.2002.800478. ISSN 0018-9448.
  3. ^ Zeitouni, O.; Gutman, M. (March 1991). "On universal hypotheses testing via large deviations". IEEE Transactions on Information Theory. 37 (2): 285–290. doi:10.1109/18.75244.
  4. ^ Li, Yun; Nitinawarat, Sirin; Veeravalli, Venugopal V. (July 2014). "Universal Outlier Hypothesis Testing". IEEE Transactions on Information Theory. 60 (7): 4066–4082. doi:10.1109/TIT.2014.2317691. ISSN 0018-9448.
  5. ^ Yang, Pengfei; Chen, Biao (April 2019). "Robust Kullback-Leibler Divergence and Universal Hypothesis Testing for Continuous Distributions". IEEE Transactions on Information Theory. 65 (4): 2360–2373. doi:10.1109/TIT.2018.2879057. ISSN 0018-9448.
  6. ^ Zhu, Shengyu; Chen, Biao; Chen, Zhitang; Yang, Pengfei (April 2021). "Asymptotically Optimal One- and Two-Sample Testing With Kernels". IEEE Transactions on Information Theory. 67 (4): 2074–2092. doi:10.1109/TIT.2021.3059267. ISSN 0018-9448.
  7. ^ Zhu, Shengyu; Chen, Biao; Yang, Pengfei; Chen, Zhitang (2019-04-11). "Universal Hypothesis Testing with Kernels: Asymptotically Optimal Tests for Goodness of Fit". Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics. PMLR: 1544–1553.