Asymptotic series play a crucial role in understanding quantum field theory, as Feynman diagram expansions are typically asymptotic series expansions. As I will occasionally refer to asymptotic series, I have included in this appendix some basic information on the subject.
By now, as graduate students, you have seen infinite series appear many times. However, in most of those appearances, you have probably assumed either that the series converged, or that a series is only useful when it converges.
Asymptotic series are non-convergent series, that nevertheless can be made useful, and play an important role in physics. The infinite series one gets in quantum field theory by summing Feynman diagrams, for example, are asymptotic series.
To be precise, consider a function \(f(z)\) with an expansion as \[f(z) \: = \: A_0 \: + \: \frac{A_1}{z} \: + \: \frac{A_2}{z^2} \: + \: \cdots\] where the \(A_i\) are numbers. We can think of the series \(\sum A_i/z^i\) as approximating \(f(z)\) for large values of \(z\).
We say that the series \(\sum A_i/z^i\) represents \(f(z)\) asymptotically in direction \(e^{i \phi}\) if, for any given \(n\), the error made by keeping the terms through \(A_n/z^n\) vanishes faster than \(|z|^{-n}\) as \(|z|\) is made large with \(\arg z\) fixed to \(\phi\), i.e. \[\lim_{|z| \rightarrow \infty} z^n \left[ f(z) \: - \: \sum_{p=0}^n \frac{A_p}{z^p} \right] \: = \: 0 .\] (Put another way, write \(z = re^{i \phi}\), then take the limit as \(r \rightarrow \infty\) but hold \(\phi\) fixed.) We shall see later that as one varies the direction \(e^{i \phi}\), one can get different asymptotic series expansions for the same function – this is known as Stokes’ phenomenon, and we shall study it in section 1.7.
Asymptotic series need not converge; in fact, in typical cases of interest, an asymptotic series will never converge. (Nonconvergence is sometimes added to the definition of asymptotic series, so that, in that alternate definition, an asymptotic series can never converge. In our definition here, convergence is allowed, albeit it is unusual.)
It is important to note that asymptotic series are distinct from convergent series: a convergent series need not be asymptotic. For example, consider the Taylor series for \(\exp(z)\). This is a convergent power series, but the same power series does not define an asymptotic series for \(\exp(z)\). After all, for \(z\) real and positive, \[\lim_{z\rightarrow \infty} z^n \left[ \exp(z) \: - \: \sum_{p=0}^n \frac{z^p}{p!} \right] \: = \: \infty ,\] and so the series is not asymptotic to \(\exp(z)\), though it does converge to \(\exp(z)\).
Not all functions have an asymptotic expansion; as we have just seen, \(\exp(z)\) has none along the positive real axis. If a function does have an asymptotic expansion in a given direction, then that asymptotic expansion is unique. However, several different functions can have the same asymptotic expansion; the map from functions to asymptotic expansions is many-to-one, when it is well-defined.
Example: Consider the integral \[I(x) \: = \: \int_0^{\infty} \frac{ \exp(-tx) }{ 1+t } dt.\] We can generate a series approximating this function by a series of integrations by parts. The key observation is that if we define \[I(x,k) \: = \: \int_0^{\infty} \frac{ \exp(-tx) }{ (1+t)^k } dt,\] then \[\begin{aligned} I(x,k) & = & - \frac{1}{x} \int_0^{\infty} \frac{1}{ (1+t)^k } d \left( \exp(-tx) \right), \\ & = & - \frac{1}{x} \left[ \left. \frac{ \exp(-tx) }{ (1+t)^k } \right|_{0}^{\infty} \: - \: \int_0^{\infty} \frac{ \exp(-tx) }{ (1+t)^{k+1} } (-k) \, dt \right], \\ & = & + \frac{1}{x} \: - \: \frac{k}{x} \, I(x,k+1), \end{aligned}\] hence \[\begin{aligned} I(x) & = & I(x,1) \: = \: \frac{1}{x} \: - \: \frac{1}{x} I(x,2), \\ & = & \frac{1}{x} \: - \: \frac{1}{x} \left[ \frac{1}{x} - \frac{2}{x} I(x,3) \right], \\ & = & \frac{1}{x} \: - \: \frac{1}{x^2} \: + \: \frac{2}{x^2} I(x,3), \\ & = & \frac{1}{x} \: - \: \frac{1}{x^2} \: + \: \frac{2}{x^3} \: - \: \frac{3!}{x^3} I(x,4), \end{aligned}\] and so forth. Proceeding in this fashion, we find \[\label{eq:asymp-sect:int-ex-1} I(x) \: = \: \sum_{p=0}^n (-)^p \frac{p!}{x^{p+1}} \: + \: (-)^{n+1} \frac{ (n+1)! }{x^{n+1}} \int_0^{\infty} \frac{ \exp(-tx) }{ (1+t)^{n+2} } \, dt.\]
The series \[\sum_{p=0}^{\infty} (-)^p \frac{p!}{x^{p+1}}\] is not convergent, in any standard sense. For example, when \(x=1\), this is the series \[1 \: - \: 1 \: + \: 2! \: - \: 3! \: + \: 4! \: - \: 5! \: + \: \cdots.\] More generally, note that the magnitude of the ratio of successive terms in the series is \[\frac{ (p+1)! }{ |x|^{p+1} } \, \frac{ |x|^p }{p!} \: = \: \frac{p+1}{|x|},\] so that for any fixed \(x\), once \(p+1 > |x|\) (in other words, for all but finitely many terms), the magnitude of the terms increases as \(p\) grows. Since the terms do not even approach zero, the series necessarily diverges.
However, although the series diverges, it is asymptotic to \(I(x)\). To show this, we must prove that for fixed \(n\), \[\lim_{x \rightarrow \infty} x^n \left[ I(x) \: - \: \sum_{p=0}^n (-)^p \frac{p!}{x^{p+1}} \right] \: = \: 0.\] Using the expansion ([eq:asymp-sect:int-ex-1]), the limit is given by \[\begin{aligned} \lim_{x \rightarrow \infty} x^n \left[ I(x) \: - \: \sum_{p=0}^n (-)^p \frac{p!}{x^{p+1}} \right] & = & \lim_{x \rightarrow \infty} x^n \left[ (-)^{n+1} \frac{ (n+1)! }{x^{n+1}} \int_0^{\infty} \frac{ \exp(-tx) }{ (1+t)^{n+2} } \, dt \right], \nonumber \\ & = & \lim_{x \rightarrow \infty} (-)^{n+1} \frac{ (n+1)! }{x} \int_0^{\infty} \frac{ \exp(-tx) }{ (1+t)^{n+2} } \, dt, \\ & = & 0, \end{aligned}\] where the last step follows because, for \(x > 0\), \[\int_0^{\infty} \frac{ \exp(-tx) }{ (1+t)^{n+2} } \, dt \: \leq \: \int_0^{\infty} \exp(-tx) \, dt \: = \: \frac{1}{x} .\] Thus, we see that the series is asymptotic.
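To make the divergent-but-asymptotic behavior concrete, here is a minimal numerical sketch (in Python, using `scipy`; the helper names `I_exact` and `partial_sum` are ours, purely for illustration). For fixed \(x\), the truncation error first decreases, reaches a minimum near \(n \approx x\), and then grows without bound:

```python
import numpy as np
from scipy.integrate import quad
from math import factorial

def I_exact(x):
    # I(x) = int_0^infty exp(-t x)/(1+t) dt, by numerical quadrature.
    val, _ = quad(lambda t: np.exp(-t * x) / (1.0 + t), 0, np.inf)
    return val

def partial_sum(x, n):
    # First n+1 terms of the asymptotic series: sum_{p=0}^n (-1)^p p!/x^{p+1}.
    return sum((-1) ** p * factorial(p) / x ** (p + 1) for p in range(n + 1))

x = 10.0
exact = I_exact(x)
for n in (1, 2, 5, 10, 15, 20, 30):
    print(n, abs(partial_sum(x, n) - exact))
# The error shrinks until n is of order x, then blows up:
# the signature of a divergent asymptotic series.
```

This illustrates the characteristic feature of asymptotic series in practice: one gains accuracy by truncating at the optimal order, not by summing more terms.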
Example: Consider the ordinary differential equation \[y' \: + \: y \: = \: \frac{1}{x} .\] The solutions of this ODE have an asymptotic series expansion, as we shall now verify.
To begin, assume that the solutions have a power series expansion of the form \[y(x) \: = \: \sum_{n=0}^{\infty} \frac{a_n}{x^n}\] for some constants \(a_n\). Plugging this ansatz into the differential equation above and solving for the coefficients, we find \[\begin{aligned} a_0 & = & 0, \\ a_1 & = & 1, \\ a_2 & = & a_1 \: = \: 1, \\ a_3 & = & 2 a_2 \: = \: 2, \\ a_4 & = & 3 a_3 \: = \: 3!, \\ a_5 & = & 4 a_4 \: = \: 4!, \end{aligned}\] and so forth, leading to the expression \[y(x) \: = \: \sum_{n=1}^{\infty} \frac{ (n-1)! }{x^n } .\]
First, let us check convergence of this series. Apply the ratio test to find \[\lim_{n \rightarrow \infty} \left| \frac{ n!/x^{n+1} }{ (n-1)!/x^n } \right| \: = \: \lim_{n \rightarrow \infty} \frac{ n }{|x|} \: = \: \infty .\] In particular, by the ratio test, this series diverges for all \(x\) (strictly speaking, all \(x\) for which it is well-defined, i.e. all \(x \neq 0\)).
We can derive this asymptotic series in an alternate fashion, which will explain its close resemblance to the previous example. Recall the method of variation of parameters for solving inhomogeneous equations: first find the solutions of the associated homogeneous equation, then make an ansatz that the solution to the inhomogeneous equation is given by multiplying the solutions of the homogeneous equation by functions of \(x\). In the present case, the associated homogeneous equation is given by \[y' \: + \: y \: = \: 0 ,\] which has solution \(y(x) \propto \exp(-x)\). Following the method of variation of parameters, we make the ansatz \[y(x) \: = \: A(x) \exp(-x)\] for some function \(A(x)\), and plug back into the (inhomogeneous) differential equation to solve for \(A(x)\). In the present case, that yields \[A' \exp(-x) \: = \: \frac{1}{x} ,\] which we can solve as \[A(x) \: = \: \int_{- \infty}^{x} \frac{ \exp(t) }{t} dt .\] (Note that I am implicitly setting a value of the integration constant by setting a lower limit of integration. Also note that the integral above is ill-defined if \(x\) is positive, a matter I will gloss over for the purposes of this discussion.) Thus, the solution to the inhomogeneous equation is given by \[y(x) \: = \: \exp(-x) \int_{- \infty}^x \frac{ \exp(t) }{t} dt ,\] whose resemblance to the previous example should now be obvious.
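One can compare the divergent series against this integral representation numerically. The sketch below identifies the integral with the exponential integral \(\mathrm{Ei}\) (available as `scipy.special.expi`) and works at large negative \(x\), where the integral is unambiguous; the helper names are ours:

```python
import numpy as np
from scipy.special import expi   # expi(x) = Ei(x) = int_{-inf}^x e^t/t dt
from math import factorial

def y_exact(x):
    # Particular solution y(x) = exp(-x) * Ei(x); unambiguous for x < 0.
    return np.exp(-x) * expi(x)

def y_series(x, N):
    # Partial sum of the divergent asymptotic series sum_{n>=1} (n-1)!/x^n.
    return sum(factorial(n - 1) / x ** n for n in range(1, N + 1))

x = -15.0
for N in (2, 5, 10, 15, 20, 30):
    print(N, abs(y_series(x, N) - y_exact(x)))
# Again the error is minimized near N ~ |x| before the series diverges.
```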
An important example of an asymptotic series is the asymptotic series for the gamma function, known as the Stirling series.
To derive the Stirling series, begin with the result \[\psi(z+1) \: = \: \ln z \: + \: \frac{1}{2z} \: - \: \sum_{n=1}^{\infty} \frac{ B_{2n} }{ 2n z^{2n} } ,\] where \(\psi(z)\) is the digamma function. Since \[\psi(z+1) \: = \: \frac{d}{dz} \ln \Gamma(z+1)\] we can integrate to get \[\ln \Gamma(z+1) \: = \: C \: + \: \left(z + \frac{1}{2}\right) \ln z \: - \: z \: + \: \sum_{n=1}^{\infty} \frac{ B_{2n} }{ (2n)(2n-1) z^{2n-1} }\] for some integration constant \(C\), where we have used the fact that \[\frac{d}{dz} z \left( \ln z \: - \: 1 \right) \: = \: \ln z.\] We can solve for \(C\) by substituting the expression above into the Legendre duplication formula ([eq:Legendre-dup]) \[\Gamma(z+1) \Gamma\left(z + \frac{1}{2}\right) \: = \: 2^{-2z} \pi^{1/2} \Gamma(2z+1) ,\] from which one can derive that \(C = (1/2) \ln (2 \pi)\). Thus, \[\ln \Gamma(z+1) \: = \: \frac{\ln 2 \pi }{2} \: + \: \left( z \: + \: \frac{1}{2} \right) \ln z \: - \: z \: + \: \sum_{n=1}^{\infty} \frac{ B_{2n} }{ (2n)(2n-1) z^{2n-1} } ,\] which is Stirling’s series, an asymptotic series for the natural logarithm of the gamma function.
We can also derive a more commonly used expression for Stirling’s series by exponentiating the series above. We get \[\Gamma(z+1) \: = \: \sqrt{2 \pi} z^{z + 1/2} \exp(-z) \exp\left( \sum_{n=1}^{\infty} \frac{ B_{2n} }{ (2n)(2n-1) z^{2n-1} } \right) .\] We can simplify the last factor as follows. Recall the Taylor expansion \[\ln\left( 1 \: + \: x \right) \: = \: x \: - \: \frac{x^2}{2} \: + \: \frac{x^3}{3} \: - \: \frac{x^4}{4} \: + \: \cdots .\] If we find an \(x\) such that \[\sum_{n=1}^{\infty} (-)^{n+1} \frac{x^n}{n} \: = \: \sum_{n=1}^{\infty} \frac{ B_{2n} }{ (2n)(2n-1) z^{2n-1} } ,\] then we can write \[\exp\left( \sum_{n=1}^{\infty} \frac{ B_{2n} }{ (2n)(2n-1) z^{2n-1} } \right) \: = \: 1 + x .\] Although a simple closed-form expression for \(x\) is not available, we can find a series in \(z\) for \(x\). From the first terms, clearly \[x \: = \: \frac{B_2}{2z} \: + \: {\cal O}( z^{-2} ) ,\] and if we work out the expansion more systematically, we discover \[\begin{aligned} x & = & \frac{B_2}{2z} \: + \: \frac{B_2^2}{8 z^2} \: + \: {\cal O}(z^{-3}), \\ & = & \frac{1}{12 z} \: + \: \frac{ 1}{288 z^2 } \: + \: {\cal O}(z^{-3}) . \end{aligned}\] Thus, we recover Stirling’s asymptotic series in the form \[\Gamma(z+1) \: = \: \sqrt{2 \pi} z^{z + 1/2} \exp(-z) \left( 1 \: + \: \frac{1}{12 z} \: + \: \frac{ 1}{288 z^2 } \: + \: {\cal O}(z^{-3}) \right) .\]
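As a quick numerical check of the first few terms, here is a small sketch (the function name `stirling` is ours, and the choice \(z = 7\) is arbitrary):

```python
from math import gamma, sqrt, pi, exp

def stirling(z, n_corrections):
    # Truncated Stirling series for Gamma(z+1), keeping the stated corrections.
    terms = [1.0, 1.0 / (12 * z), 1.0 / (288 * z ** 2)]
    return sqrt(2 * pi) * z ** (z + 0.5) * exp(-z) * sum(terms[: n_corrections + 1])

z = 7.0
exact = gamma(z + 1)   # 7! = 5040
for k in range(3):
    print(k, stirling(z, k), abs(stirling(z, k) - exact) / exact)
# The relative error drops from roughly 1% to roughly 10^-5
# as the corrections are included.
```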
In passing, using Stirling’s approximation it is straightforward to demonstrate that \[\label{eq:Stirling-app} \lim_{x \rightarrow \infty} x^{b-a} \frac{ (x+a)! }{ (x+b)! } \: = \: 1,\] which is sometimes useful.
Given a power series that converges in some finite domain, suppose one integrates it term-by-term along a contour that extends outside of that domain. Although the resulting series need no longer converge, it is often still sensible as an asymptotic series.
Formally, this result is known as Watson’s lemma. Various versions of it can be found in Miller [chapter 2]. One version is given below:
Suppose that \(f(t)\) is a function which has the following expansion near \(t=0\): \[f(t) \: = \: \sum_{n=1}^{\infty} a_n t^{n - 1},\] convergent for \(|t| \leq a\). Furthermore, suppose that \(f(t)\) grows at most exponentially for large \(t\), meaning that there exist positive constants \(K\), \(b\) such that \[| f(t) | \: < \: K e^{b |t|}, \: \: \: |t| \geq a .\] Then for large \(|z|\), the function \[F(z) \: \equiv \: \int_0^{\infty} e^{-zt} f(t) dt\] has asymptotic series expansion \[\label{eq:watson-asymp1} \sum_{n=1}^{\infty} a_n \left( \int_0^{\infty} e^{-zt} t^{n - 1} dt \right) \: = \: \sum_{n=1}^{\infty} a_n \Gamma(n) z^{-n} \: \: \: \mbox{ for } | \arg z | < \pi/2 .\]
We can demonstrate this result by computation. Let \(S_n(z)\) be the partial sum \[S_n(z) \: \equiv \: \int_0^{\infty} e^{-zt} \left( \sum_{k=1}^n a_k t^{k - 1} \right) dt .\] Since the sum is finite, we can exchange the sum and integral: \[S_n(z) \: = \: \sum_{k=1}^n a_k \left( \int_0^{\infty} e^{-zt} t^{k - 1} dt \right) \: = \: \sum_{k=1}^n a_k \Gamma(k) z^{-k} .\] To establish that the result is an asymptotic series expansion of \(F(z)\), we need to show that \[\lim_{z \rightarrow \infty} z^{n} \left( F(z) \: - \: S_n(z) \right) \: = \: 0\] for \(| \arg z | < \pi/2\). By virtue of the convergent expansion near \(t=0\) and the exponential bound on \(f(t)\), there must exist a constant \(C_n\) such that \[\left| f(t) \: - \: \sum_{k=1}^n a_k t^{k - 1} \right| \: \leq \: C_n e^{b|t|} |t|^{(n+1) - 1} .\] Then, \[\begin{aligned} |z|^{n} \left| F(z) \: - \: S_n(z) \right| & \leq & |z|^n \int_0^{\infty} \left| e^{-zt} \right| \left| f(t) \: - \: \sum_{k=1}^n a_k t^{k-1} \right| dt, \\ & \leq & |z|^{n} C_n \int_0^{\infty} e^{-(|z| \cos \phi - b) t} t^{(n+1) - 1 } dt, \\ & = & \frac{ |z|^{n} C_n }{ \left( |z|\cos \phi - b \right)^{n+1} } \Gamma( n+1 ) , \end{aligned}\] where \(\phi = \arg z\). For \(|\arg z | < \pi/2\), \(\cos \phi > 0\), so for large enough \(|z|\) the expression above is well-defined and goes to zero as \(|z| \rightarrow \infty\). Thus, we confirm the asymptotic series expansion ([eq:watson-asymp1]) and Watson’s lemma.
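To see the lemma in action on a case where everything is computable in closed form, here is a small symbolic sketch (using `sympy`; the choice \(f(t) = \cos t\) is ours, picked because it is entire and bounded, so the hypotheses hold):

```python
import sympy as sp

z, t = sp.symbols('z t', positive=True)
f = sp.cos(t)   # entire and bounded, so Watson's lemma applies

# Exact Laplace transform: F(z) = int_0^oo e^{-z t} cos(t) dt = z/(1+z^2).
F = sp.integrate(sp.exp(-z * t) * f, (t, 0, sp.oo))

# Watson's lemma: with f(t) = sum_n a_n t^{n-1}, the asymptotic series is
# sum_n a_n Gamma(n) z^{-n}, where a_n = f^{(n-1)}(0)/(n-1)!.
N = 6
watson = sum(f.diff(t, n - 1).subs(t, 0) / sp.factorial(n - 1)
             * sp.gamma(n) / z ** n for n in range(1, N + 1))

print(sp.simplify(F))          # z/(z**2 + 1)
print(sp.expand(watson))       # 1/z - 1/z**3 + 1/z**5
print(F.series(z, sp.oo, N))   # the large-z expansion agrees term by term
```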
At least morally, Watson’s lemma is one reason why, in practice, we can often be sloppy when manipulating convergent series. Specifically, integrating term-by-term outside of the region of convergence may no longer produce a convergent series, but under reasonable hypotheses, the resulting series will at least be asymptotic.
For more information, see for example
G. N. Watson, “The harmonic functions associated with the parabolic cylinder,” Proc. London Math. Soc. (2) 17 (1918) 116-148.
Consider a contour integral of the form \[\label{eq:app:steepest-descent:proto} G(z) \: = \: \int_C g(t) \exp( z f(t) ) \, dt .\] The method of steepest descent is a systematic procedure for generating an asymptotic series that approximates integrals of this form. Briefly, the method says that for large values of \(z\), the dominant contribution to the integral \(G(z)\) will come from values of \(t\) such that \(f'(t) = 0\), known as saddle points. (Such points will make a contribution to the integral that is proportional to the integrand evaluated at the saddle point.) One replaces the contour with a line through the saddle point in a direction such that the imaginary part of \(z f(t)\) is constant, to get a new integral which is easier to evaluate and which yields an asymptotic series approximation.
One also sometimes speaks of the stationary phase approximation, or the method of stationary phase, which applies the same idea to integrals of the form \[\int g(t) \exp(i z f(t) ) \, dt ,\] and so could be described as a Wick-rotated form of the method of steepest descent. For simplicity, we will use the term ‘method of steepest descent’ to refer to all cases.
One of the most important applications of the method of steepest descent, for our purposes, is to the Feynman path integral description of quantum mechanics and quantum field theory, where it is used to recover the classical limit. In this appendix we will describe the method in more pedestrian cases.
The description of the method of steepest descent may make it sound obscure, but in fact, in examples it is not so difficult to understand. Let us first apply the method to obtain the leading term in an asymptotic series expansion in the special case of a contour located along the real line, so that \(t\) is real, involving real-valued functions \(f(t)\), \(g(t)\). Let \(t_0\) be a point such that \(f'(t_0) = 0\), and for simplicity let us assume \(f''(t_0) < 0\). Then, locally, we can approximate the integral by a Gaussian. Expand \[\begin{aligned} g(t) & = & g(t_0) \: + \: (t - t_0) g'(t_0) \: + \: \frac{1}{2}(t - t_0)^2 g''(t_0) \: + \: \cdots, \\ f(t) & = & f(t_0) \: + \: (t - t_0) f'(t_0) \: + \: \frac{1}{2} (t - t_0)^2 f''(t_0) \: + \: \cdots, \\ & = & f(t_0) \: + \: \frac{1}{2}(t - t_0)^2 f''(t_0) \: + \: \cdots . \end{aligned}\] Now, define \(s = \sqrt{z}(t - t_0)\), so that we can write the integral as \[\begin{aligned} G(z) & = & \int_{-\infty}^{\infty} dt \left( g(t_0) \: + \: (t - t_0) g'(t_0) \: + \: \cdots \right) \exp\left( z \left( f(t_0) \: + \: \frac{1}{2} (t - t_0)^2 f''(t_0) \: + \: \cdots \right) \right) , \\ & = & \exp(z f(t_0) ) \int_{-\infty}^{\infty} \frac{ds}{\sqrt{z}} \left( g(t_0) \: + \: \frac{s}{\sqrt{z}} g'(t_0) \: + \: {\cal O}(z^{-1}) \right) \exp\left( - \frac{1}{2} s^2 | f''(t_0) | \: + \: {\cal O}(z^{-1/2}) \right), \\ & = & \frac{ \exp(z f(t_0) ) }{\sqrt{z}} \int_{-\infty}^{\infty} ds \, g(t_0) \exp\left( - \frac{1}{2} s^2 | f''(t_0) | \right) \: + \: {\cal O}(z^{-1}) , \\ & = & \frac{ \exp(z f(t_0) ) }{\sqrt{z}} \, g(t_0) \sqrt{ \frac{2 \pi}{ | f''(t_0) | } } \: + \: {\cal O}(z^{-1} ) . \end{aligned}\] Not only do we see that the dominant contribution comes from \(t = t_0\) for large \(z\), in the sense that the contribution to the integral is proportional to the integrand evaluated at \(t_0\), but we also get an explicit expression for that leading contribution.
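Here is a numerical sketch of this leading-order formula. The example is our own choice: \(f(t) = -t^2/2\) (saddle at \(t_0 = 0\), \(f''(t_0) = -1\)) and \(g(t) = \cos t\), for which the exact answer \(\sqrt{2\pi/z}\, e^{-1/(2z)}\) is known:

```python
import numpy as np
from scipy.integrate import quad

# Example: f(t) = -t^2/2 (saddle at t0 = 0, f''(t0) = -1), g(t) = cos(t).
def G(z):
    val, _ = quad(lambda t: np.cos(t) * np.exp(-z * t ** 2 / 2), -np.inf, np.inf)
    return val

def leading(z):
    # g(t0) * exp(z f(t0)) * sqrt(2 pi / (z |f''(t0)|))
    return np.sqrt(2 * np.pi / z)

for z in (5.0, 20.0, 100.0):
    print(z, G(z), leading(z), G(z) / leading(z))
# The ratio tends to 1 (here it is exactly exp(-1/(2z))), confirming that
# the saddle point captures the large-z behavior.
```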
The method of steepest descent also applies to complex integrands and contour integrals. In such cases, the general idea is that in an integral over a complex exponential of the form \(\exp(z f(t))\), for large \(z\), the part of the integration contour that mostly just changes the phase will not significantly contribute to the integral, but rather will tend to cancel out. A little more systematically, if the integral involves integrating over all phases of the complex exponential, then the different contributions should sum to zero, on the grounds that \[\int_0^{2 \pi} \cos \theta \, d \theta \: = \: \int_0^{2 \pi} \sin \theta \, d \theta \: = \: 0 .\] At this same level of approximation, the leading contribution to the integral should come from parts of the contour where the phase does not change significantly. See Arfken-Weber-Harris [section 12.7] or Morse-Feshbach volume 1 [section 4.6] for more information on the general complex case.
As a prototypical example, let us apply this method to the case of the gamma function, where we shall re-derive the leading term of the Stirling series.
Recall the Euler integral description of the gamma function: \[\Gamma(z+1) \: = \: \int_0^{\infty} \exp(-t) t^{z} dt .\] Change integration variables to \(t = \tau z\): \[\label{eq:asymp-series:steep-desc:gamma1} \Gamma(z+1) \: = \: z^{z+1} \int_0^{\infty} \exp(- \tau z) \tau^z d \tau\] giving an integral proportional to \[\int_C \exp(z f(t) ) dt\] with \(f(t) = -t + \ln t\).
Solving \(\partial f / \partial t = 0\), we find that the only possible saddle point is at \[\frac{1}{t} \: - \: 1 \: = \: 0 ,\] i.e. \(t_0 = 1\). Let us expand \(f(t)\) about this saddle point. Write \(t = 1+x\); then \[\begin{aligned} f(t) & = & \ln( 1+x ) \: - \: (1+x) , \\ & = & \left( x \: - \: \frac{x^2}{2} \: + \: \frac{x^3}{3} \: - \: \frac{x^4}{4} \: + \: \cdots \right) \: - \: (1+x), \\ & = & -1 \: - \: \frac{x^2}{2} \: + \: \frac{x^3}{3} \: - \: \cdots, \end{aligned}\] from which we see the leading order approximation to the integral should be proportional to \(\exp(-z)\). We can now approximate the gamma function by \[\begin{aligned} \Gamma(z+1) & = & z^{z+1} \exp(z f(1) ) \int_{-1}^{\infty} \exp(z f''(1) x^2 / 2!) dx , \\ & = & z^{z+1} \exp(-z) \int_{-1}^{\infty} \exp(- z x^2 / 2) dx, \\ & \cong & z^{z+1} \exp(-z) \int_{-\infty}^{\infty} \exp( -z x^2 / 2 ) dx, \\ & = & z^{z+1} \exp(-z) \sqrt{ \frac{2 \pi}{z} }, \\ & = & \sqrt{2 \pi z} \, z^z \exp(-z) , \end{aligned}\] which is the leading term in Stirling’s expansion of the factorial function.
So far we have only described leading terms in asymptotic series expansions. We can also compute higher-order terms using the method of steepest descent, as we outline next. Recall that the idea is to replace the original contour \(C\) in ([eq:app:steepest-descent:proto]) by a line through the saddle point in a direction such that the imaginary part of \(z f(t)\) is constant, known as the path of steepest descent. Concretely, if our contour crosses a single saddle point \(t_0\), then along the line described, we make a change of variables, and define a real variable \(w\) by \[f(t) \: = \: f(t_0) \: - \: w^2 .\] The fact that \(w \in {\mathbb R}\) ensures that \(\mbox{Im }f(t) = \mbox{Im }f(t_0)\) everywhere along the contour. Let us also make the simplifying assumption in equation ([eq:app:steepest-descent:proto]) that \(g(t) = 1\). (Incorporating \(g(t) \neq 1\) is straightforward: one just includes its Taylor series and expands accordingly.) Then, \[G(z) \: = \: \exp(z f(t_0) ) \int_C \exp(- z w^2) \left( \frac{dt}{dw} \right) dw .\] Assume that the contour \(C\) is such that the \(w\) integral can be taken to run over the real numbers from \(- \infty\) to \(\infty\). Next, we need to write \(dt/dw\) as a function of \(w\), rather than \(t\). In general, we can accomplish such an inversion at the power-series level, so write \[\frac{dt}{dw} \: = \: \sum_{n=0}^{\infty} a_n w^n\] for some constants \(a_n\). Substituting, and noting that by symmetry the odd powers of \(w\) integrate to zero while the even powers give \[\int_{-\infty}^{\infty} \exp(-z w^2) \, w^{2m} \, dw \: = \: \Gamma\left( m + \frac{1}{2} \right) z^{-(m+1/2)} ,\] one has \[\begin{aligned} G(z) & = & \exp( z f(t_0) ) \int_{-\infty}^{\infty} \exp(-z w^2) \sum_{n=0}^{\infty}a_n w^n \, dw, \\ & = & \frac{ \exp(z f(t_0)) }{ \sqrt{z} } \sum_{m=0}^{\infty} a_{2m} \Gamma\left( m + \frac{1}{2} \right) \left( \frac{1}{z} \right)^{m}. \label{eq:asymp-series:steep-desc:genl-higher} \end{aligned}\]
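The Gaussian moment integrals used in the last step can be checked symbolically; a brief sketch with `sympy`:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
w = sp.symbols('w', real=True)

# Moments behind equation (eq:asymp-series:steep-desc:genl-higher):
# odd powers of w integrate to zero; even powers w^{2m} give
# Gamma(m + 1/2) * z^{-(m + 1/2)}.
for n in range(6):
    val = sp.integrate(w ** n * sp.exp(-z * w ** 2), (w, -sp.oo, sp.oo))
    print(n, sp.simplify(val))
# n = 0: sqrt(pi)/sqrt(z) = Gamma(1/2) z^{-1/2}
# n = 1: 0
# n = 2: sqrt(pi)/(2*z**(3/2)) = Gamma(3/2) z^{-3/2}, and so on.
```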
For more information, see Arfken-Weber-Harris [section 12.7] or Morse-Feshbach volume 1 [section 4.6].
One very important property of asymptotic series is that they do not uniquely determine a function. It is easy to check, for example, that as \(z \rightarrow \infty\) with \(\mbox{Re}(z) > 0\), the same series can be simultaneously asymptotic to both \(f(z)\) and \(f(z) + \exp(-z)\).
This fact is very important in quantum field theory, and is a reflection of nonperturbative effects in the theory. Summing over Feynman diagrams yields a series in which the coupling constant of the theory plays the part of \(1/z\). Now, a typical quantum field theory has ‘nonperturbative effects,’ which cannot be seen in a (perturbative) Feynman diagram expansion. Nonperturbative effects, which are not uniquely determined by the perturbative theory, are exponentially small in the coupling constant, i.e. multiplied by factors of \(\exp(-1/g) = \exp(-z)\). Since the Feynman diagram expansion is only an asymptotic series, and the nonperturbative effects are exponentially small, adding nonperturbative effects does not change the asymptotic expansion, i.e. does not change the Feynman diagram expansion.
Properties of asymptotic series:
Asymptotic series can be added, multiplied, and integrated term-by-term. However, an asymptotic series can be differentiated term-by-term to obtain an asymptotic expansion for the derivative only if the derivative is already known to possess an asymptotic expansion.
How can we sum, in any sense, a divergent series?
One approach is as follows. Given a divergent series \[F(z) \: = \: \sum_{n=0}^{\infty} A_n z^n\] for some constants \(A_n\), consider the related series \[B(z) \: = \: \sum_{n=0}^{\infty} A_n \frac{z^n}{n!} .\] Depending upon how badly divergent the original series \(F(z)\) was, one might hope that the new series \(B(z)\) might actually converge in some region. Assuming that \(B(z)\) converges and can be resummed, how might one recover \(F(z)\)? Well, use the formula \[\int_0^{\infty} \exp(-t/z) t^n dt \: = \: z^{n+1} n!\] to show that, formally, \[\label{b-sum} z F(z) \: = \: \int_0^{\infty} \exp(-t/z) B(t) dt .\]
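As a sanity check of this formal identity, here is a numerical sketch on a convergent example of our choosing (taking \(A_n = 1\), so \(F(z) = 1/(1-z)\) and \(B(t) = e^t\)):

```python
import numpy as np
from scipy.integrate import quad

# Check z F(z) = int_0^oo exp(-t/z) B(t) dt for A_n = 1:
# F(z) = 1/(1-z), B(t) = exp(t); the integral converges for 0 < z < 1.
z = 0.5
lhs = z / (1 - z)                                    # z F(z) = 1.0
rhs, _ = quad(lambda t: np.exp(-t / z) * np.exp(t), 0, np.inf)
print(lhs, rhs)                                      # both print 1.0
```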
To calculate \(F(z)\) using the formal trick above, we need \(B(t)\) for real positive values of \(t\) less than or of order \(z\). So long as any singularities in \(B(t)\) on the complex \(t\) plane are at distances greater than \(|z|\) from the origin, this should be OK. (In quantum field theory, singularities in \(B(t)\) are typically associated with nonperturbative effects – instantons – so again we see that nonperturbative effects limit the usefulness of resummation methods for the (asymptotic) Feynman series. See Weinberg (vol 2) for more information.)
This particular resummation technique is known as Borel summation. If the integral on the right-hand side exists at points outside the radius of convergence of the original series, then the Borel sum is defined to be \[\frac{1}{z} \int_0^{\infty} \exp(-t/z) B(t) \, dt .\] (The original reference on Borel summation is, to our knowledge, Borel, Leçons sur les Séries Divergentes (1901) pp 97-115.)
As asymptotic series expansions do not uniquely determine functions, the reader should not be surprised to learn there are additional resummation techniques, which can yield different results.
For example, Euler resummation defines the sum of the series \(\sum A_n\) to be \[\lim_{z \rightarrow 1^-} \sum_{n=0}^{\infty} A_n z^n\] when this limit exists. For example, the Euler sum of the series \[1 \: - \: 1 \: + \: 1 \: - \: 1 \: + \: 1 \: - \: \cdots\] is given by \[\lim_{z \rightarrow 1^-} \left( 1 \: - \: z \: + \: z^2 \: - \: z^3 \: + \: \cdots \right) \: = \: \lim_{z \rightarrow 1^-} \frac{1}{1 + z} \: = \: \frac{1}{2} .\] By contrast, for the Borel sum we would define \[B(z) \: = \: \sum_{n=0}^{\infty} (-)^n \frac{z^n}{n!} \: = \: \exp(-z),\] so that the Borel sum is given by \[F(z) \: = \: \frac{1}{z} \int_0^{\infty} \exp(-t/z) B(t) dt \: = \: \frac{1}{z}\int_0^{\infty} \exp(-t/z) \exp(-t) dt \: = \: \frac{1}{1 + z} ,\] which at \(z = 1\) also yields \(1/2\), in agreement with the Euler sum.
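A quick numerical sketch confirming the last formula, and its agreement with the Euler sum at \(z = 1\):

```python
import numpy as np
from scipy.integrate import quad

# Borel sum of 1 - 1 + 1 - ... : B(t) = exp(-t), so
# F(z) = (1/z) int_0^oo exp(-t/z) exp(-t) dt = 1/(1+z).
for z in (0.5, 1.0, 2.0):
    val, _ = quad(lambda t: np.exp(-t / z) * np.exp(-t), 0, np.inf)
    print(z, val / z, 1 / (1 + z))
# At z = 1 the Borel sum gives 1/2, matching the Euler sum above.
```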
With these definitions, it is entertaining to note that the series \[1 \: - \: 2! \: + \: 4! \: - \: \cdots \: = \: \sum_{n=0}^{\infty} (-)^n (2n)!\] is not Borel summable, whereas by contrast the series \[1 \: + \: 0 \: - \: 2! \: + \: 0 \: + \: 4! \: + \: \cdots\] is Borel summable. We can see this as follows. For the first series, define \[B(z) \: = \: \sum_{n=0}^{\infty} \left( (-)^n (2n)! \right) \frac{z^n}{n!}.\] This series does not converge for any \(z \neq 0\). For the second series, define \[B(z) \: = \: \sum_{n=0}^{\infty} \left( (-)^n (2n)! \right) \frac{ z^{2n} }{ (2n) ! } \: = \: \sum_{n=0}^{\infty} (-)^n z^{2n}.\] For \(|z| < 1\), this converges to \[\frac{1}{1 + z^2} ,\] so we can define the Borel sum of the second series (but not the first) to be \[\frac{1}{z} \int_0^{\infty} \frac{ \exp(-t/z) }{ 1 + t^2 } dt.\]
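For small \(z\), one can check numerically that the divergent series is asymptotic to its Borel sum: partial sums first approach the integral, then blow up. A sketch (the helper names and the choice \(z = 0.1\) are ours):

```python
import numpy as np
from scipy.integrate import quad
from math import factorial

def borel_sum(z):
    # (1/z) int_0^oo exp(-t/z) / (1 + t^2) dt
    val, _ = quad(lambda t: np.exp(-t / z) / (1 + t ** 2), 0, np.inf)
    return val / z

def partial_sum(z, N):
    # Partial sums of the divergent series sum_n (-1)^n (2n)! z^{2n}
    return sum((-1) ** n * factorial(2 * n) * z ** (2 * n) for n in range(N + 1))

z = 0.1
print('Borel sum:', borel_sum(z))
for N in (1, 2, 3, 5, 8):
    print(N, partial_sum(z, N))
# The partial sums hover near the Borel sum before diverging;
# the best truncation occurs around N ~ 1/(2 z).
```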
Further resummation methods are discussed in Hardy, Whittaker-Watson.
Stokes’ phenomenon is the observation that the operations of analytic continuation and asymptotic series expansion do not commute with one another.
For example, consider the confluent hypergeometric function \(M(a,c;z)\). It can be shown that in the limit of large real positive \(z\), \[M(a,c;z) \: \cong \: \frac{\Gamma(c)}{\Gamma(a)} z^{a-c} \exp(z),\] and in the limit of large real negative \(z\), \[M(a,c;z) \: \cong \: \frac{\Gamma(c)}{\Gamma(c-a)} (-z)^{-a}.\]
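These two limits can be checked numerically; a sketch using `scipy.special.hyp1f1` (the parameter values \(a = 0.7\), \(c = 2.3\) are an arbitrary choice of ours, and floating-point cancellation degrades accuracy for very large negative arguments):

```python
import numpy as np
from scipy.special import hyp1f1, gamma

a, c = 0.7, 2.3   # arbitrary generic parameters

z = 30.0   # large positive: M ~ Gamma(c)/Gamma(a) * z^(a-c) * e^z
print(hyp1f1(a, c, z), gamma(c) / gamma(a) * z ** (a - c) * np.exp(z))

z = -30.0  # large negative: M ~ Gamma(c)/Gamma(c-a) * (-z)^(-a)
print(hyp1f1(a, c, z), gamma(c) / gamma(c - a) * (-z) ** (-a))
```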
However, these two different limits cannot be obtained from one another by analytic continuation. For example, if we started with the \(z\rightarrow +\infty\) limit and analytically continued to large negative \(z\), we would have found (up to a constant phase) \[\frac{\Gamma(c)}{\Gamma(a)} (-z)^{a-c} \exp(z) \: \neq \: \frac{\Gamma(c)}{\Gamma(c-a)} (-z)^{-a} ;\] the continued expression is exponentially small for \(z\) large and negative, rather than exhibiting the correct power-law behavior.
Thus, analytic continuation does not commute with asymptotic series expansions. This is known as Stokes’ phenomenon. This is very unlike convergent Taylor series, for example, where analytic continuation does commute with series expansion.
It can be shown that for large \(z = |z| \exp(i \phi)\) with \(0 < \phi < \pi\), \[M(a,c;z) \: \cong \: \frac{\Gamma(c)}{\Gamma(c-a)} \frac{\exp(ia\pi)}{z^a} \: + \: \frac{\Gamma(c)}{\Gamma(a)} \exp(z) z^{a-c} .\] The first term dominates when \(\phi=\pi\), where the second term is negligible. When \(\phi=0\), the opposite is true: the second term dominates and the first is negligible. For \(\phi=\pi/2\), the two terms are comparable in magnitude.
The analysis can be repeated for \(-\pi < \phi < 0\); but as it is very similar, for brevity we shall not repeat it here.
So, what we have found is that for general \(\phi\), the leading behavior is a combination of the two terms, but in the two limits, one term dominates and the other is smaller than the corrections, so that the leading term in the asymptotic series expansion is given by only one term, not both. Which term dominates varies as \(\phi\) changes. Thus, analytic continuation does not commute with asymptotic series expansion.
Show that the function \(e^z\) has no asymptotic series expansion for \(z\) real, positive, and large.
Show that the function \(e^z\) has the asymptotic series expansion \[0 \: + \: 0 \: + \: 0 \: + \: 0 \: + \: \cdots\] for \(z\) real, negative, and of large magnitude. The previous problem plus this one give an easy example of Stokes’ phenomenon, namely that the same function can have different asymptotic series expansions as one goes in different directions on the complex plane.
(AW 8.3.8) Use the Stirling series for the gamma function to show that \[\lim_{x \rightarrow \infty} x^{b-a} \frac{ (x+a)! }{ (x+b)! } \: = \: 1\]
Use Watson’s lemma to derive an asymptotic series for \[\int_0^{\infty} e^{-zt} \left(1 \: + \: t^2 \right)^{1/2} dt\]
(AW 7.3.1) Using the method of steepest descent, evaluate the second Hankel function given by \[H_{\nu}^{(2)}(s) \: = \: \frac{1}{\pi i} \int_{-\infty}^0 \exp\left( \frac{s}{2} \left( z \: - \: \frac{1}{z} \right) \right) \frac{dz}{z^{\nu+1}}\] with contour **** FILL IN ****
(WW VIII.6) Show that the series \[1 \: - \: 2! \: + \: 4! \: - \: \cdots\] is not Borel summable, whereas the series \[1 \: + \: 0 \: - \: 2! \: + \: 0 \: + \: 4! \: + \: \cdots\] is Borel summable.
References:
G. H. Hardy, Divergent series, Oxford University Press, 1949.
E. T. Whittaker, G. N. Watson, A course of modern analysis, 4th edition, Cambridge University Press, 1963.
P. Morse and H. Feshbach, Methods of Theoretical Physics volume I, McGraw-Hill, New York, 1953.
P. Miller, Applied asymptotic analysis, Graduate Studies in Math. 75, American Math. Society, Providence, RI, 2006.
S. Weinberg, The quantum theory of fields, volume II, Cambridge University Press, 1996.