Thursday 24 June 2021

Convergence Of Dirichlet Series

The Riemann Zeta series is an example of a Dirichlet series.

$$\zeta(s)=\sum\frac{1}{n^{s}}=\frac{1}{1^{s}}+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\frac{1}{4^{s}}+\ldots$$

Dirichlet series have the general form $\sum a_{n}/n^{s}$, in contrast to the more familiar power series $\sum a_{n}z^{n}$.



In this blog post we'll explore when these series converge, first looking at absolute convergence, and then more general convergence.


The video for this post is at [youtube], and the slides are here [pdf].


Abscissa Of Absolute Convergence

A series converges absolutely if it still converges when every term is replaced by its magnitude, sometimes called its absolute value. This is quite a strong condition, and not all series that converge do so absolutely.

Let's assume a Dirichlet series converges absolutely at $s_{1}=\sigma_{1}+it_{1}$, and consider another point $s_{2}=\sigma_{2}+it_{2}$ where $\sigma_{2}\geq\sigma_{1}$. On the complex plane, $s_{2}$ is to the right of $s_{1}$.

Now let's compare the magnitudes of the terms in this series at $s_{1}$ and $s_{2}$. Remember $n^{\sigma+it}=n^{\sigma}e^{it\ln n}$, and because the magnitude of any $e^{i\theta}$ is 1, we can simplify $\left|n^{\sigma+it}\right|=n^{\sigma}$.
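We can check this identity numerically. Python's complex arithmetic handles $n^{\sigma+it}$ directly, and a quick sketch (the values of $\sigma$, $n$ and $t$ here are arbitrary) confirms the magnitude depends only on $\sigma$:

```python
# Check that |n^(sigma + it)| = n^sigma, independent of t.
sigma, n = 0.7, 5

for t in [0.0, 1.0, 4.2, -13.0]:
    s = complex(sigma, t)
    magnitude = abs(n ** s)   # |n^s|
    expected = n ** sigma     # n^sigma
    assert abs(magnitude - expected) < 1e-12
    print(f"t = {t:6.1f}: |n^s| = {magnitude:.12f}")
```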

$$\sum\left|\frac{a_{n}}{n^{s_{1}}}\right|=\sum\frac{\left|a_{n}\right|}{n^{\sigma_{1}}}\geq\sum\frac{\left|a_{n}\right|}{n^{\sigma_{2}}}=\sum\left|\frac{a_{n}}{n^{s_{2}}}\right|$$

This simply tells us that the magnitude of each term in the series at $s_{2}$ is less than or equal to the magnitude of the corresponding term at $s_{1}$. So if the series converges absolutely at $s_{1}$, it must also converge absolutely at $s_{2}$. More generally, the series converges absolutely at any $s=\sigma+it$ where $\sigma\geq\sigma_{1}$. 

If our series doesn't converge absolutely everywhere, the $s$ for which absolute convergence fails must therefore have $\sigma<\sigma_{1}$. We can see there must be a minimal $\sigma_{a}$, called the abscissa of absolute convergence, such that the series converges absolutely for all $\sigma>\sigma_{a}$. 

Notice how absolute convergence depends only on the real part of $s$. Working out the domain of convergence along the real line automatically gives us the domain of convergence in the complex plane.

For example, we previously showed the series $\sum1/n^{\sigma}$ converges for real $\sigma>1$. We also know the series diverges at $\sigma=1$. These two facts allow us to say $\sigma_{a}=1$, and so the series converges absolutely for all complex $s=\sigma+it$ where $\sigma>1$.
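A quick numerical sketch makes this concrete: at $\sigma=2$ the partial sums settle towards the known value $\zeta(2)=\pi^{2}/6$, while at $\sigma=1$ (the harmonic series) they keep growing.

```python
import math

# Partial sums of the real series zeta(sigma) = sum 1/n^sigma.
def zeta_partial(sigma, terms):
    return sum(1 / n ** sigma for n in range(1, terms + 1))

# At sigma = 2 the series converges, approaching pi^2/6.
print(zeta_partial(2, 100_000), math.pi ** 2 / 6)

# At sigma = 1 the partial sums grow without bound (like ln x).
print(zeta_partial(1, 1_000), zeta_partial(1, 100_000))
```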

It's interesting that the region of convergence for a Dirichlet series is a half-plane, whereas the region for the more familiar power series $\sum a_{n}z^{n}$ is a circle.


Abscissa Of Convergence

Absolute convergence is easier to explore because we don't need to consider cancellation between complex terms. For example, a term $e^{i\pi}=-1$ can partially cancel the effect of a term $2e^{i2\pi}=+2$. This cancelling effect means some series do converge, even if not absolutely.

Our strategy, inspired by Apostol, will be to show that if a Dirichlet series is bounded at $s_{0}=\sigma_{0}+it_{0}$ then it is also bounded at $s=\sigma+it$, where $\sigma>\sigma_{0}$, and then push a little further to show it actually converges at that $s$. 

Let's start with a Dirichlet series $\sum a_{n}/n^{s}$ whose partial sums at a point $s_{0}=\sigma_{0}+it_{0}$ are bounded for all $x\geq1$. 

$$\left|\sum_{n\leq x}\frac{a_{n}}{n^{s_{0}}}\right|\leq M$$

Being bounded is not as strong a requirement as convergence; the partial sums could oscillate, for example.

We'll use Abel's partial summation formula, explained in a later blog, which relates a discrete sum to a continuous integral.

$$\sum_{x_{1}<n\leq x_{2}}b_{n}f(n)=B(x_{2})f(x_{2})-B(x_{1})f(x_{1})-\int_{x_{1}}^{x_{2}}B(t)f'(t)dt$$
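Abel's formula can be sanity-checked numerically. Because $B(t)$ is a step function that only jumps at integers, the integral can be evaluated exactly as a sum over unit intervals. This sketch uses the arbitrary choices $b_{n}=1/n$ and $f(x)=x^{2}$:

```python
# Verify Abel's partial summation formula for the arbitrary
# choices b_n = 1/n, f(x) = x^2, with x1 = 1 and x2 = 8.
x1, x2 = 1, 8
b = lambda n: 1 / n
f = lambda x: x ** 2

def B(x):
    """Partial sum B(x) = sum of b_n for n <= x."""
    return sum(b(n) for n in range(1, int(x) + 1))

lhs = sum(b(n) * f(n) for n in range(x1 + 1, x2 + 1))

# B(t) is constant on each interval [n, n+1), so the integral of
# B(t) f'(t) reduces to a sum of B(n) * (f(n+1) - f(n)) terms.
integral = sum(B(n) * (f(n + 1) - f(n)) for n in range(x1, x2))

rhs = B(x2) * f(x2) - B(x1) * f(x1) - integral
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```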

Because we're comparing to $s_{0}$, we'll define $f(x)=x^{s_{0}-s}$ and $b_{n}=a_{n}/n^{s_{0}}$. Here $B(x)$ is defined as $\sum_{n\leq x}b_{n}$, and so $\left|B(x)\right|\leq M$.

$$\begin{align}\sum_{x_{1}<n\leq x_{2}}\frac{a_{n}}{n^{s}}&=\sum_{x_{1}<n\leq x_{2}}b_{n}f(n)\\ \\ &=\frac{B(x_{2})}{x_{2}^{s-s_{0}}}-\frac{B(x_{1})}{x_{1}^{s-s_{0}}}+(s-s_{0})\int_{x_{1}}^{x_{2}}\frac{B(t)}{t^{s-s_{0}+1}}dt\end{align}$$

We now consider the magnitude of the series, which is never more than the sum of the magnitudes of its parts, and make use of $\left|B(x)\right|\leq M$. 

$$\begin{align}\left|\sum_{x_{1}<n\leq x_{2}}\frac{a_{n}}{n^{s}}\right|&\leq\left|\frac{B(x_{2})}{x_{2}^{s-s_{0}}}\right|+\left|\frac{B(x_{1})}{x_{1}^{s-s_{0}}}\right|+\left|(s-s_{0})\int_{x_{1}}^{x_{2}}\frac{B(t)}{t^{s-s_{0}+1}}dt\right|\\ \\ &\leq Mx_{2}^{\sigma_{0}-\sigma}+Mx_{1}^{\sigma_{0}-\sigma}+\left|s-s_{0}\right|M\int_{x_{1}}^{x_{2}}t^{\sigma_{0}-\sigma-1}dt\end{align}$$

Because $x_{1}<x_{2}$ and $\sigma>\sigma_{0}$, we can say $Mx_{2}^{\sigma_{0}-\sigma}+Mx_{1}^{\sigma_{0}-\sigma}<2Mx_{1}^{\sigma_{0}-\sigma}$. Despite appearances, evaluating the integral is easy.

$$\begin{align}\left|\sum_{x_{1}<n\leq x_{2}}\frac{a_{n}}{n^{s}}\right| &\leq 2Mx_{1}^{\sigma_{0}-\sigma}+\left|s-s_{0}\right|M\left(\frac{x_{2}^{\sigma_{0}-\sigma}-x_{1}^{\sigma_{0}-\sigma}}{\sigma_{0}-\sigma}\right)\\ \\ &\le2Mx_{1}^{\sigma_{0}-\sigma}\left(1+\frac{\left|s-s_{0}\right|}{\sigma-\sigma_{0}}\right)\end{align}$$

The last step uses $\left|x_{2}^{\sigma_{0}-\sigma}-x_{1}^{\sigma_{0}-\sigma}\right|=x_{1}^{\sigma_{0}-\sigma}-x_{2}^{\sigma_{0}-\sigma}<x_{1}^{\sigma_{0}-\sigma}<2x_{1}^{\sigma_{0}-\sigma}$. 

The key point is that $\sum_{x_{1}<n\leq x_{2}}a_{n}/n^{s}$ is bounded if $\sum_{n\leq x}a_{n}/n^{s_{0}}$ is bounded, where $\sigma>\sigma_{0}$.

Let's see if we can push this result about boundedness to convergence. 

$$\left|\sum_{x_{1}<n\leq x_{2}}\frac{a_{n}}{n^{s}}\right|\le2Mx_{1}^{\sigma_{0}-\sigma}\left(1+\frac{\left|s-s_{0}\right|}{\sigma-\sigma_{0}}\right)=Kx_{1}^{\sigma_{0}-\sigma}$$

Here $K$ doesn't depend on $x_{1}$. If we let $x_{1}\rightarrow\infty$ then $Kx_{1}^{\sigma_{0}-\sigma}\rightarrow0$, which means the magnitude of the tail of the infinite sum $\sum a_{n}/n^{s}$ diminishes to zero, and so the series is not just bounded, it also converges. 
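We can illustrate this bound numerically with the alternating series $\sum(-1)^{n+1}/n^{s}$, whose partial sums at $s_{0}=0$ are bounded by $M=1$. Taking $s=0.5$ (an arbitrary point with $\sigma>\sigma_{0}$) gives $K=2M\left(1+\frac{\left|s-s_{0}\right|}{\sigma-\sigma_{0}}\right)=4$, and the tail magnitude stays comfortably under $Kx_{1}^{\sigma_{0}-\sigma}$:

```python
# Tail of sum (-1)^(n+1) / n^s at s = 0.5, with s0 = 0 and M = 1.
s0, s, M = 0.0, 0.5, 1.0
# For real s and s0, |s - s0| equals sigma - sigma0, so K = 4 here.
K = 2 * M * (1 + abs(s - s0) / (s - s0))

for x1 in [10, 100, 1000]:
    x2 = 100 * x1
    tail = sum((-1) ** (n + 1) / n ** s for n in range(x1 + 1, x2 + 1))
    bound = K * x1 ** (s0 - s)
    assert abs(tail) <= bound
    print(f"x1 = {x1:5d}: |tail| = {abs(tail):.6f} <= bound {bound:.6f}")
```

As the printout shows, the bound shrinks as $x_{1}$ grows, which is exactly the behaviour that forces convergence.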

Let's summarise our results so far:

  • If $\sum_{n\leq x}a_{n}/n^{s_{0}}$ is bounded, the infinite sum $\sum a_{n}/n^{s}$ converges for $\sigma>\sigma_{0}$.
  • With the special case of $s_{0}=0$, if $\sum_{n\leq x}a_{n}$ is bounded, the infinite sum $\sum a_{n}/n^{s}$ converges for $\sigma>0$.

The special case is particularly useful as we can sometimes say whether a series converges for $\sigma>0$ just by looking at the coefficients $a_{n}$.

Following the same logic as for $\sigma_{a}$, it is clear there is an abscissa of convergence $\sigma_{c}$ where a Dirichlet series converges for $\sigma>\sigma_{c}$, and diverges for $\sigma<\sigma_{c}$.


Maximum Difference Between $\sigma_{c}$ And $\sigma_{a}$

We know that not all convergent series are absolutely convergent, so we can say $\sigma_{a}\geq\sigma_{c}$. We shouldn't have to increase $\sigma$ by too much before a conditionally convergent series converges absolutely.

If a series converges at $s_{0}$, its terms must tend to zero, so their magnitudes are bounded. We can call this bound $C$.

$$\sum\left|\frac{a_{n}}{n^{s}}\right|=\sum\left|\frac{a_{n}}{n^{s_{0}}}\cdot\frac{1}{n^{s-s_{0}}}\right|\leq C\sum\frac{1}{n^{\sigma-\sigma_{0}}}$$

We know series of the form $\sum n^{\sigma_{0}-\sigma}$ only converge for $\sigma-\sigma_{0}>1$, so if $\sigma$ is larger than $\sigma_{0}$ by more than 1, the series converges absolutely. Since we can choose $\sigma_{0}$ as close to $\sigma_{c}$ as we like, $\sigma_{a}$ can exceed $\sigma_{c}$ by at most 1.

$$\boxed{0\leq\sigma_{a}-\sigma_{c}\leq1}$$
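The alternating series $\sum(-1)^{n+1}/n^{s}$ discussed below achieves this maximum gap, with $\sigma_{c}=0$ and $\sigma_{a}=1$. A quick sketch at $\sigma=0.5$, inside the gap, shows the signed partial sums settling down while the absolute partial sums keep growing:

```python
# At sigma = 0.5 the alternating series converges conditionally:
# the signed sums settle, the absolute sums grow without bound.
sigma = 0.5

for N in [1_000, 100_000]:
    signed = sum((-1) ** (n + 1) / n ** sigma for n in range(1, N + 1))
    absolute = sum(1 / n ** sigma for n in range(1, N + 1))
    print(f"N = {N:6d}: signed = {signed:.6f}, absolute = {absolute:.1f}")
```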


Example: Alternating Zeta Function

Let's apply our results to the alternating zeta function, also called the eta function.

$$\eta(s)=\sum\frac{(-1)^{n+1}}{n^{s}}=\frac{1}{1^{s}}-\frac{1}{2^{s}}+\frac{1}{3^{s}}-\frac{1}{4^{s}}+\ldots$$

At $s_{0}=0$ the partial sums $\sum_{n\leq x}(-1)^{n+1}$ oscillate between 0 and 1, so they are bounded by $M=1$, and so $\sum(-1)^{n+1}/n^{s}$ converges for $\sigma>0$. 
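As a concrete check, at $s=1$ the eta function becomes the alternating harmonic series, which is known to converge to $\ln2$. A minimal sketch:

```python
import math

# Partial sum of the alternating harmonic series eta(1).
N = 100_000
eta_1 = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# The partial sum should be close to ln 2.
print(eta_1, math.log(2))
assert abs(eta_1 - math.log(2)) < 1e-4
```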

