In probability theory, Spitzer's formula or Spitzer's identity gives the joint distribution of partial sums and maximal partial sums of a collection of random variables. The result was first published by Frank Spitzer in 1956.[1] The formula is regarded as "a stepping stone in the theory of sums of independent random variables".[2]
Let $X_1, X_2, \ldots$ be independent and identically distributed random variables, and define the partial sums $S_n = X_1 + X_2 + \cdots + X_n$ and the maximal partial sums $R_n = \max(0, S_1, S_2, \ldots, S_n)$. Then
$$\sum_{n=0}^{\infty} \phi_n(\alpha,\beta)\, t^n = \exp\left[\sum_{n=1}^{\infty} \frac{t^n}{n}\bigl(u_n(\alpha) + v_n(\beta) - 1\bigr)\right]$$
where
\begin{align}
\phi_n(\alpha,\beta) &= \operatorname{E}\left(\exp\left[i\bigl(\alpha R_n + \beta(R_n - S_n)\bigr)\right]\right)\\
u_n(\alpha) &= \operatorname{E}\left(\exp\left[i\alpha S_n^+\right]\right)\\
v_n(\beta) &= \operatorname{E}\left(\exp\left[i\beta S_n^-\right]\right)
\end{align}
and $S^{\pm}$ denotes $(|S| \pm S)/2$.
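Because both sides of the identity are formal power series in $t$, it can be checked coefficient by coefficient: the $n$-th coefficient of the right-hand side is obtained from the exponential of the series $\sum_{n\ge 1} a_n t^n$ with $a_n = (u_n + v_n - 1)/n$, via the standard recurrence $n b_n = \sum_{k=1}^{n} k\, a_k\, b_{n-k}$. The following sketch verifies this numerically for a simple symmetric $\pm 1$ random walk (an illustrative choice of distribution, not one taken from the source), computing the expectations $\phi_n$, $u_n$, $v_n$ exactly by enumerating all $2^N$ paths:

```python
import cmath
from itertools import product

def spitzer_check(alpha, beta, N=6):
    """Verify Spitzer's identity up to order N in t for a symmetric
    +/-1 random walk, by exact enumeration of all 2**N paths."""
    total = 2 ** N
    phi = [0j] * (N + 1)
    u = [0j] * (N + 1)
    v = [0j] * (N + 1)
    phi[0] = u[0] = v[0] = 1 + 0j          # empty sum: S_0 = R_0 = 0
    for path in product((1, -1), repeat=N):
        s, r = 0, 0
        for n, x in enumerate(path, start=1):
            s += x
            r = max(r, s)                   # R_n = max(0, S_1, ..., S_n)
            sp, sm = max(s, 0), max(-s, 0)  # S_n^+ and S_n^-
            phi[n] += cmath.exp(1j * (alpha * r + beta * (r - s)))
            u[n] += cmath.exp(1j * alpha * sp)
            v[n] += cmath.exp(1j * beta * sm)
    for n in range(1, N + 1):
        phi[n] /= total
        u[n] /= total
        v[n] /= total
    # Right-hand side: coefficients b_n of exp(sum a_n t^n),
    # a_n = (u_n + v_n - 1)/n, via n*b_n = sum_k k*a_k*b_{n-k}.
    a = [0j] + [(u[n] + v[n] - 1) / n for n in range(1, N + 1)]
    b = [1 + 0j] + [0j] * N
    for n in range(1, N + 1):
        b[n] = sum(k * a[k] * b[n - k] for k in range(1, n + 1)) / n
    return phi, b

phi, b = spitzer_check(0.7, -0.3)
max_err = max(abs(p - q) for p, q in zip(phi, b))
```

Up to floating-point rounding, `max_err` is zero, confirming that $\phi_n$ equals the $n$-th coefficient of the exponential on the right-hand side for each $n \le N$.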
Two proofs are known, due to Spitzer and Wendel.[3]