Competitive regret

From HandWiki

In decision theory, competitive regret is the regret of an estimator measured relative to that of an oracle with limited or unlimited power, in the context of distribution estimation.

Competitive regret to the oracle with full power

Consider estimating a discrete probability distribution p on a discrete set 𝒳 based on data X. The regret of an estimator q is defined as[1]

$$\max_{p \in \mathcal{P}} r_n(q, p),$$

where $\mathcal{P}$ is the set of all possible probability distributions on 𝒳, and

$$r_n(q, p) = \mathbb{E}\left[ D(p \| q(X)) \right],$$

where $D(p \| q)$ is the Kullback–Leibler divergence between p and q.
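As a concrete illustration of the quantity $r_n(q,p) = \mathbb{E}[D(p \| q(X))]$, the following is a minimal Monte Carlo sketch. It uses a simple add-one (Laplace) estimator as a stand-in for q — a hypothetical choice for illustration only, not an estimator discussed in the source.

```python
import math
import random

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def laplace_estimator(sample, alphabet_size):
    """Add-one (Laplace) estimator: a simple illustrative choice of q(X)."""
    counts = [0] * alphabet_size
    for x in sample:
        counts[x] += 1
    n = len(sample)
    return [(c + 1) / (n + alphabet_size) for c in counts]

def expected_regret(p, n, trials=2000, seed=0):
    """Monte Carlo estimate of r_n(q, p) = E[ D(p || q(X)) ],
    averaging the KL divergence over repeated samples X of size n."""
    rng = random.Random(seed)
    k = len(p)
    total = 0.0
    for _ in range(trials):
        sample = rng.choices(range(k), weights=p, k=n)
        total += kl_divergence(p, laplace_estimator(sample, k))
    return total / trials
```

For a fixed p, `expected_regret` approximates $r_n(q,p)$; taking the maximum of this quantity over a family of distributions p would approximate the worst-case regret defined above.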

Competitive regret to the oracle with limited power

Oracle with partial information

The oracle is restricted to partial information about the true distribution p: it knows the location of p in the parameter space only up to a given partition.[1] Given a partition 𝒫 of the parameter space, suppose the oracle knows the subset $P \in \mathcal{P}$ that contains the true p. The oracle will have regret

$$r_n(P) = \min_{q} \max_{p \in P} r_n(q, p).$$

The competitive regret to the oracle will be

$$r_n(q, \mathcal{P}) = \max_{P \in \mathcal{P}} \left( r_n(q, P) - r_n(P) \right),$$

where $r_n(q, P) = \max_{p \in P} r_n(q, p)$.
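The min–max structure of these two definitions can be sketched numerically. The toy setting below is a simplifying assumption, not from the source: estimators are taken to be data-independent (each q outputs a fixed distribution), so that the regret reduces to $D(p \| q)$, and the parameter space is a small hypothetical finite set split into two subsets.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical toy parameter space on a binary alphabet,
# partitioned into two subsets known to the oracle.
P_all = [(0.7, 0.3), (0.6, 0.4), (0.3, 0.7), (0.2, 0.8)]
partition = [P_all[:2], P_all[2:]]

# Candidate (data-independent) estimators: a grid on the 1-simplex.
grid = [(i / 100, 1 - i / 100) for i in range(1, 100)]

def worst_case(q, subset):
    """r(q, P) = max_{p in P} r(q, p), here with r(q, p) = D(p || q)."""
    return max(kl(p, q) for p in subset)

def oracle_regret(subset):
    """r(P) = min_q max_{p in P} r(q, p): the oracle tunes q to the subset."""
    return min(worst_case(q, subset) for q in grid)

def competitive_regret(q):
    """max_P ( r(q, P) - r(P) ): how much a single fixed q loses
    to an oracle that knows which subset contains the truth."""
    return max(worst_case(q, S) - oracle_regret(S) for S in partition)
```

Since the oracle may adapt its choice to each subset while q is fixed, the competitive regret is nonnegative by construction.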

Oracle with natural estimators

The oracle knows p exactly, but can only choose its estimator among natural estimators. A natural estimator assigns equal probability to all symbols which appear the same number of times in the sample.[1] The regret of the oracle is

$$r_n^{\mathrm{nat}}(p) = \min_{q \in \mathcal{Q}_{\mathrm{nat}}} r_n(q, p),$$

and the competitive regret is

$$\max_{p \in \mathcal{P}} \left( r_n(q, p) - r_n^{\mathrm{nat}}(p) \right).$$
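The defining property of a natural estimator — equal counts imply equal probabilities — is easy to check mechanically. Below is a small sketch: `is_natural` tests the property, and `add_half` is an add-constant estimator (a standard example, chosen here for illustration; the source does not single it out) which is natural because its estimate depends on a symbol only through its count.

```python
from collections import Counter

def is_natural(estimate, sample, alphabet, tol=1e-12):
    """Check the defining property of a natural estimator: symbols that
    appear the same number of times in the sample get equal probability."""
    counts = Counter(sample)
    by_count = {}
    for x in alphabet:
        by_count.setdefault(counts[x], []).append(estimate[x])
    return all(max(v) - min(v) < tol for v in by_count.values())

def add_half(sample, alphabet):
    """Add-constant estimator: add 1/2 to every count and normalize.
    Natural, since the estimate is a function of the count alone."""
    counts = Counter(sample)
    n, k = len(sample), len(alphabet)
    return {x: (counts[x] + 0.5) / (n + 0.5 * k) for x in alphabet}
```

An estimator that breaks ties between equally frequent symbols in any data-dependent way would fail the `is_natural` check and fall outside $\mathcal{Q}_{\mathrm{nat}}$.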

Example

For the estimator q proposed in Acharya et al. (2013),[2]

$$r_n^{\sigma}(q, \Delta_k) \le r_n^{\mathrm{nat}}(q, \Delta_k) \le \tilde{\mathcal{O}}\left( \min\left( \frac{1}{\sqrt{n}}, \frac{k}{n} \right) \right).$$

Here $\Delta_k$ denotes the $k$-dimensional unit simplex surface. The partition $\sigma$ denotes the permutation class on $\Delta_k$: two distributions $p$ and $p'$ are partitioned into the same subset if and only if $p'$ is a permutation of $p$.
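The permutation partition $\sigma$ can be made concrete with a canonical representative: two distributions lie in the same subset exactly when their sorted probability vectors agree. The helper below is a hypothetical illustration of this equivalence, not code from the source.

```python
def sigma_class(p, ndigits=12):
    """Canonical representative of the permutation class of p:
    p and p' fall in the same subset of the partition sigma
    iff one is a permutation of the other, i.e. iff their
    sorted probability vectors coincide."""
    return tuple(sorted(round(pi, ndigits) for pi in p))
```

For example, (0.2, 0.5, 0.3) and (0.5, 0.3, 0.2) share a class, while (0.25, 0.45, 0.3) belongs to a different one.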

References

  1. Orlitsky, Alon; Suresh, Ananda Theertha (2015), "Competitive Distribution Estimation", arXiv:1503.07940
  2. Acharya, Jayadev; Jafarpour, Ashkan; Orlitsky, Alon; Suresh, Ananda Theertha (2013), "Optimal probability estimation with applications to prediction and classification", Proceedings of the 26th Annual Conference on Learning Theory (COLT)