Information potential for some probability density functions


Acu A., BAŞCANBAZ TUNCA G., Rasa I.

APPLIED MATHEMATICS AND COMPUTATION, vol. 389, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 389
  • Publication Date: 2021
  • DOI: 10.1016/j.amc.2020.125578
  • Journal Name: APPLIED MATHEMATICS AND COMPUTATION
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Computer & Applied Sciences, INSPEC, Public Affairs Index, zbMATH, Civil Engineering Abstracts
  • Keywords: Probability density function, Information potential, Entropy, Positive linear operators, B-spline functions, STEKLOV OPERATORS, CONVERGENCE
  • Ankara University Affiliated: Yes

Abstract

This paper relates to the information theoretic learning methodology, whose goal is to quantify global scalar descriptors (e.g., entropy) of a given probability density function (PDF). In this context, the core concept is the information potential (IP) S_[s](x) := ∫_R p^s(t, x) dt, s > 0, of a PDF p(t, x) depending on a parameter x; it is naturally related to the Rényi and Tsallis entropies. We present several such PDFs, viewed also as kernels of integral operators, for which a precise relation exists between S_[2](x) and the variance Var[p(t, x)]. For these PDFs we determine the IP and the Shannon entropy explicitly. As an application to information theoretic learning, we determine two essential indices used in this theory: the expected value E[log p(t, x)] and the variance Var[log p(t, x)]. The latter is an index of the intrinsic shape of p(t, x) with more statistical power than kurtosis. For a sequence of B-spline functions, considered both as kernels of Steklov operators and as PDFs, we investigate the sequence of IPs and its asymptotic behaviour. Another special sequence of PDFs consists of the kernels of Kantorovich modifications of the classical Bernstein operators. Convexity properties and bounds of the associated IP, useful in information theoretic learning, are discussed. Several examples and numerical computations illustrate the general results. (C) 2020 Elsevier Inc. All rights reserved.
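The quantities named in the abstract can be illustrated numerically. The sketch below, which uses a Gaussian kernel p(t, x) purely as a hypothetical example (it is not one of the operator kernels studied in the paper), approximates the information potential S_[s](x) = ∫ p^s(t, x) dt on a grid, checks it against the Gaussian closed form S_[2] = 1/(2σ√π) (which makes the link between S_[2] and the variance σ² explicit), and derives the Rényi entropy R_s = log(S_[s])/(1 − s), the Tsallis entropy T_s = (1 − S_[s])/(s − 1), and the Shannon entropy from it.

```python
import math
import numpy as np

# Hypothetical illustrative kernel: Gaussian PDF p(t, x) with mean x
# and standard deviation sigma, so Var[p(t, x)] = sigma**2.
sigma = 0.5
x = 1.0

def p(t):
    return np.exp(-(t - x) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Grid wide enough that the Gaussian tails are negligible.
t = np.linspace(x - 10 * sigma, x + 10 * sigma, 200_001)
dt = t[1] - t[0]

def info_potential(s):
    """Information potential S_[s](x) = ∫ p^s(t, x) dt (Riemann sum)."""
    return float(np.sum(p(t) ** s) * dt)

S2 = info_potential(2.0)
# Gaussian closed form: S_[2] = 1 / (2 * sigma * sqrt(pi)),
# i.e. S_[2] is determined by the variance sigma**2 alone.
closed_form = 1.0 / (2 * sigma * math.sqrt(math.pi))

# Rényi entropy of order s and Tsallis entropy from the IP.
s = 2.0
renyi2 = math.log(info_potential(s)) / (1 - s)
tsallis2 = (1 - info_potential(s)) / (s - 1)

# Shannon entropy -∫ p log p dt; for a Gaussian it equals
# 0.5 * log(2 * pi * e * sigma**2). Also note E[log p] = -shannon.
shannon = float(np.sum(-p(t) * np.log(p(t) + 1e-300)) * dt)

print(f"S_[2] = {S2:.10f} (closed form {closed_form:.10f})")
print(f"Rényi(2) = {renyi2:.6f}, Tsallis(2) = {tsallis2:.6f}, Shannon = {shannon:.6f}")
```

The same scheme extends directly to the other descriptors in the abstract: Var[log p(t, x)] is one further weighted sum over the grid, and replacing the Gaussian by a B-spline or Bernstein–Kantorovich kernel reproduces the paper's setting.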