[Image: Portrait of John Wallis]

In mathematics, and more precisely in analysis, the Wallis integrals constitute a family of integrals introduced by John Wallis.
Definition, basic properties

The Wallis integrals are the terms of the sequence $(W_n)_{n \geq 0}$ defined by
$$W_n = \int_0^{\frac{\pi}{2}} \sin^n x \, dx,$$

or equivalently (by the substitution $x \mapsto \tfrac{\pi}{2} - x$),
$$W_n = \int_0^{\frac{\pi}{2}} \cos^n x \, dx.$$

The first few terms of this sequence are $W_0 = \frac{\pi}{2}$, $W_1 = 1$, $W_2 = \frac{\pi}{4}$, $W_3 = \frac{2}{3}$, $W_4 = \frac{3\pi}{16}$, and $W_5 = \frac{8}{15}$.
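As a quick sanity check of the definition, the following sketch approximates $W_n$ with a crude midpoint rule and compares it with the exact values listed above (standard library only; the helper name `wallis_numeric` is ours):

```python
import math

def wallis_numeric(n, steps=100_000):
    """Approximate W_n = ∫_0^{π/2} sin^n(x) dx with a midpoint rule."""
    h = (math.pi / 2) / steps
    return h * sum(math.sin((k + 0.5) * h) ** n for k in range(steps))

# Compare with the exact values listed above.
exact = [math.pi / 2, 1.0, math.pi / 4, 2 / 3, 3 * math.pi / 16, 8 / 15]
for n, w in enumerate(exact):
    print(n, round(wallis_numeric(n), 6), round(w, 6))
```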
The sequence $(W_n)$ is decreasing and has positive terms. In fact, for all $n \geq 0$:
- $W_n > 0$, because it is the integral of a non-negative continuous function which is not identically zero;
- $W_n - W_{n+1} = \int_0^{\frac{\pi}{2}} \sin^n x \, dx - \int_0^{\frac{\pi}{2}} \sin^{n+1} x \, dx = \int_0^{\frac{\pi}{2}} (\sin^n x)(1 - \sin x) \, dx > 0$, again because the integrand is a non-negative continuous function which is not identically zero.

Since the sequence $(W_n)$ is decreasing and bounded below by 0, it converges to a non-negative limit. Indeed, the limit is zero (see below).
Recurrence relation

By means of integration by parts, a reduction formula can be obtained. Using the identity $\sin^2 x = 1 - \cos^2 x$, we have for all $n \geq 2$,
$$\int_0^{\frac{\pi}{2}} \sin^n x \, dx = \int_0^{\frac{\pi}{2}} (\sin^{n-2} x)(1 - \cos^2 x) \, dx = \int_0^{\frac{\pi}{2}} \sin^{n-2} x \, dx - \int_0^{\frac{\pi}{2}} \sin^{n-2} x \cos^2 x \, dx. \qquad \text{(1)}$$

Integrating the second integral by parts, with:
- $v'(x) = \cos(x)\sin^{n-2}(x)$, whose antiderivative is $v(x) = \frac{1}{n-1}\sin^{n-1}(x)$;
- $u(x) = \cos(x)$, whose derivative is $u'(x) = -\sin(x)$,

we have:
$$\int_0^{\frac{\pi}{2}} \sin^{n-2} x \cos^2 x \, dx = \left[ \frac{\sin^{n-1} x}{n-1} \cos x \right]_0^{\frac{\pi}{2}} + \frac{1}{n-1} \int_0^{\frac{\pi}{2}} \sin^{n-1} x \sin x \, dx = 0 + \frac{1}{n-1} W_n.$$

Substituting this result into equation (1) gives
$$W_n = W_{n-2} - \frac{1}{n-1} W_n,$$

and thus
$$W_n = \frac{n-1}{n} W_{n-2} \qquad \text{(2)}$$

for all $n \geq 2$.
This is a recurrence relation giving $W_n$ in terms of $W_{n-2}$. Together with the values of $W_0$ and $W_1$, it yields two formulae for the terms of the sequence $(W_n)$, depending on whether $n$ is even or odd:
$$W_{2p} = \frac{2p-1}{2p} \cdot \frac{2p-3}{2p-2} \cdots \frac{1}{2} \, W_0 = \frac{(2p-1)!!}{(2p)!!} \cdot \frac{\pi}{2} = \frac{(2p)!}{2^{2p}(p!)^2} \cdot \frac{\pi}{2},$$

$$W_{2p+1} = \frac{2p}{2p+1} \cdot \frac{2p-2}{2p-1} \cdots \frac{2}{3} \, W_1 = \frac{(2p)!!}{(2p+1)!!} = \frac{2^{2p}(p!)^2}{(2p+1)!}.$$
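The recurrence and the two closed forms are easy to cross-check numerically. The sketch below assumes nothing beyond the formulas just derived; the helper names `wallis_recurrence` and `wallis_closed` are ours:

```python
import math

def wallis_recurrence(n):
    """W_n from W_0 = π/2, W_1 = 1 and equation (2): W_n = (n-1)/n · W_{n-2}."""
    w = math.pi / 2 if n % 2 == 0 else 1.0
    for k in range(2 + n % 2, n + 1, 2):
        w *= (k - 1) / k
    return w

def wallis_closed(n):
    """Closed forms: W_{2p} = (2p)!/(2^{2p}(p!)^2)·π/2 and W_{2p+1} = 2^{2p}(p!)^2/(2p+1)!."""
    p = n // 2
    if n % 2 == 0:
        return math.factorial(2 * p) / (4**p * math.factorial(p) ** 2) * math.pi / 2
    return 4**p * math.factorial(p) ** 2 / math.factorial(2 * p + 1)

for n in range(12):
    assert math.isclose(wallis_recurrence(n), wallis_closed(n))
```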
Another relation to evaluate the Wallis' integrals

The Wallis integrals can also be evaluated using the Euler integrals:
- Euler integral of the first kind, the Beta function:
  $$\mathrm{B}(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1} \, dt = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} \qquad \text{for } \operatorname{Re}(x), \operatorname{Re}(y) > 0;$$
- Euler integral of the second kind, the Gamma function:
  $$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt \qquad \text{for } \operatorname{Re}(z) > 0.$$

If we make the substitution $t = \sin^2 u$ inside the Beta function, so that $1 - t = \cos^2 u$ and $dt = 2\sin u \cos u \, du$, we obtain:
$$\mathrm{B}(a,b) = 2\int_0^{\frac{\pi}{2}} \sin^{2a-1} u \, \cos^{2b-1} u \, du,$$

which gives the following relation for evaluating the Wallis integrals:
$$W_n = \frac{1}{2} \mathrm{B}\!\left( \frac{n+1}{2}, \frac{1}{2} \right) = \frac{\Gamma\!\left( \frac{n+1}{2} \right) \Gamma\!\left( \frac{1}{2} \right)}{2\,\Gamma\!\left( \frac{n}{2} + 1 \right)}.$$

So, for odd $n$, writing $n = 2p+1$, we have:
$$W_{2p+1} = \frac{\Gamma(p+1)\,\Gamma\!\left( \frac{1}{2} \right)}{2\,\Gamma\!\left( p + 1 + \frac{1}{2} \right)} = \frac{p!\,\Gamma\!\left( \frac{1}{2} \right)}{(2p+1)\,\Gamma\!\left( p + \frac{1}{2} \right)} = \frac{2^p \, p!}{(2p+1)!!} = \frac{2^{2p}(p!)^2}{(2p+1)!},$$

whereas for even $n$, writing $n = 2p$ and knowing that $\Gamma\!\left( \frac{1}{2} \right) = \sqrt{\pi}$, we get:
$$W_{2p} = \frac{\Gamma\!\left( p + \frac{1}{2} \right) \Gamma\!\left( \frac{1}{2} \right)}{2\,\Gamma(p+1)} = \frac{(2p-1)!!\,\pi}{2^{p+1}\,p!} = \frac{(2p)!}{2^{2p}(p!)^2} \cdot \frac{\pi}{2}.$$
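A minimal check of the Gamma-function expression, using Python's `math.gamma` and the recurrence (2); the helper name `wallis_gamma` is ours:

```python
import math

def wallis_gamma(n):
    """W_n = Γ((n+1)/2) Γ(1/2) / (2 Γ(n/2 + 1))."""
    return math.gamma((n + 1) / 2) * math.gamma(0.5) / (2 * math.gamma(n / 2 + 1))

# Cross-check against the recurrence W_n = (n-1)/n · W_{n-2} (equation (2)).
w = [math.pi / 2, 1.0]          # current values for even / odd index
for n in range(2, 12):
    w[n % 2] *= (n - 1) / n     # advance the recurrence by one step
    assert math.isclose(w[n % 2], wallis_gamma(n))
```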
Equivalence

From the recurrence formula (2) above, we can deduce that $W_{n+1} \sim W_n$ (equivalence of sequences). Indeed, for all $n \in \mathbb{N}$:
- $W_{n+2} \leq W_{n+1} \leq W_n$ (since the sequence is decreasing);
- $\frac{W_{n+2}}{W_n} \leq \frac{W_{n+1}}{W_n} \leq 1$ (since $W_n > 0$);
- $\frac{n+1}{n+2} \leq \frac{W_{n+1}}{W_n} \leq 1$ (by equation (2), since $\frac{W_{n+2}}{W_n} = \frac{n+1}{n+2}$).

By the sandwich theorem, we conclude that $\frac{W_{n+1}}{W_n} \to 1$, and hence $W_{n+1} \sim W_n$.

By examining $W_n W_{n+1}$, one obtains the following equivalence:

$$W_n \sim \sqrt{\frac{\pi}{2n}} \qquad \left(\text{and consequently } \lim_{n \to \infty} \sqrt{n}\,W_n = \sqrt{\pi/2}\right).$$

Proof
For all $n \in \mathbb{N}$, let $u_n = (n+1)\,W_n\,W_{n+1}$.
Equation (2) shows that, for all $n \in \mathbb{N}$, $u_{n+1} = (n+2)\,W_{n+1}\,W_{n+2} = (n+2)\,W_{n+1} \cdot \frac{n+1}{n+2}\,W_n = u_n$; in other words, the sequence $(u_n)$ is constant.
It follows that for all $n \in \mathbb{N}$, $u_n = u_0 = W_0\,W_1 = \frac{\pi}{2}$.
Now, since $n + 1 \sim n$ and $W_{n+1} \sim W_n$, we have, by the product rule for equivalents, $u_n \sim n\,W_n^2$.
Thus $n\,W_n^2 \sim \frac{\pi}{2}$, from which the desired result follows (noting that $W_n > 0$).
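The proof is easy to illustrate numerically: $u_n = (n+1)\,W_n\,W_{n+1}$ stays at $\pi/2$, while $\sqrt{n}\,W_n$ drifts toward $\sqrt{\pi/2}$. A small sketch (the helper `wallis` is ours, built from recurrence (2)):

```python
import math

def wallis(n):
    """W_n via equation (2): W_n = (n-1)/n · W_{n-2}, with W_0 = π/2, W_1 = 1."""
    w = math.pi / 2 if n % 2 == 0 else 1.0
    for k in range(2 + n % 2, n + 1, 2):
        w *= (k - 1) / k
    return w

for n in (1, 10, 100, 1000):
    u_n = (n + 1) * wallis(n) * wallis(n + 1)        # stays at π/2
    print(n, u_n, math.sqrt(n) * wallis(n), math.sqrt(math.pi / 2))
```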
Deducing Stirling's formula

Suppose that we have the following equivalence (known as Stirling's formula):
$$n! \sim C\sqrt{n} \left( \frac{n}{e} \right)^n$$

for some constant $C$ that we wish to determine. From above, we have
$$W_{2p} \sim \sqrt{\frac{\pi}{4p}} = \frac{\sqrt{\pi}}{2\sqrt{p}}. \qquad \text{(3)}$$

Expanding $W_{2p}$ and using the formula above for the factorials, we get
$$W_{2p} = \frac{(2p)!}{2^{2p}(p!)^2} \cdot \frac{\pi}{2} \sim \frac{C \left( \frac{2p}{e} \right)^{2p} \sqrt{2p}}{2^{2p} C^2 \left( \frac{p}{e} \right)^{2p} \left( \sqrt{p} \right)^2} \cdot \frac{\pi}{2} = \frac{\pi}{C\sqrt{2p}}. \qquad \text{(4)}$$

From (3) and (4), we obtain by transitivity:
$$\frac{\pi}{C\sqrt{2p}} \sim \frac{\sqrt{\pi}}{2\sqrt{p}}.$$

Solving for $C$ gives $C = \sqrt{2\pi}$. In other words,
$$n! \sim \sqrt{2\pi n} \left( \frac{n}{e} \right)^n.$$
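A quick numerical illustration of the result (the helper name `stirling` is ours): the ratio $n! / \bigl(\sqrt{2\pi n}\,(n/e)^n\bigr)$ approaches 1 as $n$ grows.

```python
import math

def stirling(n):
    """Stirling's approximation √(2πn) (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20, 50, 100):
    print(n, math.factorial(n) / stirling(n))        # ratio tends to 1
```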
Deducing the Double Factorial Ratio

Similarly, from above, we have:
$$W_{2p} \sim \sqrt{\frac{\pi}{4p}} = \frac{1}{2}\sqrt{\frac{\pi}{p}}.$$

Expanding $W_{2p}$ and using the formula above for double factorials, we get:
$$W_{2p} = \frac{(2p-1)!!}{(2p)!!} \cdot \frac{\pi}{2} \sim \frac{1}{2}\sqrt{\frac{\pi}{p}}.$$

Simplifying, we obtain:
$$\frac{(2p-1)!!}{(2p)!!} \sim \frac{1}{\sqrt{\pi p}},$$

or
$$\frac{(2p)!!}{(2p-1)!!} \sim \sqrt{\pi p}.$$
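Again this is easy to illustrate numerically (the helper name `double_factorial_ratio` is ours); the ratio of the two sides tends to 1:

```python
import math

def double_factorial_ratio(p):
    """(2p)!! / (2p-1)!! computed as a running product of (2k)/(2k-1)."""
    r = 1.0
    for k in range(1, p + 1):
        r *= 2 * k / (2 * k - 1)
    return r

for p in (10, 100, 1000, 10000):
    print(p, double_factorial_ratio(p) / math.sqrt(math.pi * p))   # tends to 1
```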
Evaluating the Gaussian Integral

The Gaussian integral can be evaluated through the use of Wallis' integrals.
We first prove the following inequalities:
- $\forall n \in \mathbb{N}^*,\ \forall u \in \mathbb{R}_+:\quad u \leq n \ \Rightarrow\ (1 - u/n)^n \leq e^{-u};$
- $\forall n \in \mathbb{N}^*,\ \forall u \in \mathbb{R}_+:\quad e^{-u} \leq (1 + u/n)^{-n}.$

In fact, letting $u/n = t$, the first inequality (in which $t \in [0,1]$) is equivalent to $1 - t \leq e^{-t}$, whereas the second reduces to $e^{-t} \leq (1+t)^{-1}$, that is, $e^t \geq 1 + t$. These two inequalities follow from the convexity of the exponential function (or from an analysis of the function $t \mapsto e^t - 1 - t$).
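A small numerical spot-check of these two pointwise inequalities (a sketch only; the sample grid below is arbitrary):

```python
import math

# Spot-check (1 - u/n)^n ≤ e^{-u} and e^{-u} ≤ (1 + u/n)^{-n} on a few sample points.
for n in (1, 2, 5, 10, 100):
    for u in (0.0, 0.1, 0.5, 1.0, 2.5, float(n)):
        if u <= n:
            assert (1 - u / n) ** n <= math.exp(-u) + 1e-12
        assert math.exp(-u) <= (1 + u / n) ** (-n) + 1e-12
```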
Letting $u = x^2$ and making use of the basic properties of improper integrals (the convergence of the integrals is obvious), we obtain the inequalities:
$$\int_0^{\sqrt{n}} (1 - x^2/n)^n \, dx \leq \int_0^{\sqrt{n}} e^{-x^2} \, dx \leq \int_0^{+\infty} e^{-x^2} \, dx \leq \int_0^{+\infty} (1 + x^2/n)^{-n} \, dx$$

for use with the sandwich theorem (as $n \to \infty$).
The first and last integrals can be evaluated easily using Wallis' integrals. For the first, substitute $x = \sqrt{n}\,\sin t$ (with $t$ varying from $0$ to $\pi/2$); the integral then becomes $\sqrt{n}\,W_{2n+1}$. For the last, substitute $x = \sqrt{n}\,\tan t$ (with $t$ varying from $0$ to $\pi/2$); it becomes $\sqrt{n}\,W_{2n-2}$.
As we have shown before, $\lim_{n \to +\infty} \sqrt{n}\,W_n = \sqrt{\pi/2}$. Hence $\sqrt{n}\,W_{2n+1} = \sqrt{\tfrac{n}{2n+1}} \cdot \sqrt{2n+1}\,W_{2n+1} \to \tfrac{1}{\sqrt{2}}\sqrt{\pi/2} = \tfrac{\sqrt{\pi}}{2}$, and likewise $\sqrt{n}\,W_{2n-2} \to \tfrac{\sqrt{\pi}}{2}$, so the sandwich theorem gives $\int_0^{+\infty} e^{-x^2} \, dx = \frac{\sqrt{\pi}}{2}$.
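As a numerical illustration of the sandwich argument (a sketch only; the helper `wallis` is ours, built from recurrence (2)), the lower bound $\sqrt{n}\,W_{2n+1}$ and the upper bound $\sqrt{n}\,W_{2n-2}$ both close in on $\sqrt{\pi}/2 \approx 0.8862$:

```python
import math

def wallis(m):
    """W_m via equation (2): W_m = (m-1)/m · W_{m-2}, with W_0 = π/2, W_1 = 1."""
    w = math.pi / 2 if m % 2 == 0 else 1.0
    for k in range(2 + m % 2, m + 1, 2):
        w *= (k - 1) / k
    return w

for n in (10, 100, 1000):
    lower = math.sqrt(n) * wallis(2 * n + 1)   # ∫_0^{√n} (1 - x²/n)^n dx
    upper = math.sqrt(n) * wallis(2 * n - 2)   # ∫_0^{+∞} (1 + x²/n)^{-n} dx
    print(n, lower, upper, math.sqrt(math.pi) / 2)
```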
Remark: There are other methods of evaluating the Gaussian integral. Some of them are more direct.
Note

The same properties lead to the Wallis product, which expresses $\frac{\pi}{2}$ in the form of an infinite product.
External links

Pascal Sebah and Xavier Gourdon. Introduction to the Gamma Function. In PostScript and HTML formats.