Wednesday 3 November 2021

Pairs of Symmetric Zeros

A common strategy taken by mathematicians wrestling with the Riemann Hypothesis is to explore the properties of its zeros. Here we'll take our first steps along this path. 



The video for this topic is here [youtube], and the slides are here [pdf].


Property $\zeta(\overline{s})=\overline{\zeta(s)}$

We previously showed the property

$$\zeta(\overline{s})=\overline{\zeta(s)}$$

holds for the series $\zeta(s)=\sum1/n^{s}$, valid for $\sigma>1$. 

We can show the new series extending $\zeta(s)$ to $\sigma>0$ maintains this property.

Instead of doing this with slightly laborious algebra, we'll take this opportunity to introduce the more elegant and rather powerful principle of analytic continuation, which we'll define properly in a separate blog post.


Using Analytic Continuation

Let's construct a function 

$$f(s)=\zeta(s)-\overline{\zeta(\overline{s})}$$

Wherever $\zeta(s)$ is analytic, so is $f(s)$. 

We know $f(s)=0$ along the real line where $\sigma>1$. Using analytic continuation, $f(s)$ must also be zero on any connected domain on which it is analytic, as long as that domain includes the real line $\sigma>1$.

So $f(s)=0$ in the complex half-plane $\sigma>1$, but also in $\sigma>0$, because we extended $\zeta(s)$ to this larger domain, where it remains analytic except at $s=1$. 

But $f(s)=0$ means $\zeta(\overline{s})=\overline{\zeta(s)}$, which means this property holds in $\sigma>0$. If later we are able to extend $\zeta(s)$ into $\sigma<0$, this property will continue to hold there too.


Symmetric Zeros

If $\zeta(s)=0$ then the property $\zeta(\overline{s})=\overline{\zeta(s)}$ tells us $\zeta(\overline{s})=0$. 

$$\zeta(s)=0\implies\zeta(\overline{s})=0$$

This means the zeros exist in symmetric pairs $\sigma+it$ and $\sigma-it$, or exist on the real line where $t=0$.


Thoughts

For a first attempt, this is quite an enlightening insight into the zeros of the Riemann Zeta function $\zeta(s)$.

Zeros existing in symmetric pairs $\sigma\pm it$ is compatible with the Riemann Hypothesis, but sadly it doesn't mean they all lie on a single line $\sigma=a$, never mind the holy grail $\sigma=1/2$.


Sunday 24 October 2021

Good Complex Analysis Youtube Playlists

Understanding complex analysis - the study of functions of a complex variable - is important because we're exploring the Riemann Zeta function extended into the complex plane.



The following are three youtube playlists which I have found to be high quality and accessible.


1. Open University M332

This course from the UK's amazing Open University programme is simply excellent:


2. Richard E Borcherds' Undergraduate Course on Complex Analysis

Richard E Borcherds is a leading mathematician (Fields Medal winner) and knows his subject. This combined with a gentle narrative helps us understand the subject better, not just be exposed to lots of facts.


3. Petra Bonfert-Taylor's Analysis of a Complex Kind

Petra Bonfert-Taylor is an accomplished German mathematician specialising in complex analysis. Her very friendly approach, one which empathises with the newcomer or learning student, is particularly valuable.

I found this playlist really good for whizzing through on a first pass to get an overall feel for the various themes in complex analysis.


Sunday 12 September 2021

Swapping $\lim \sum$ For $\sum \lim$

When we previously extended the Riemann Zeta function into the complex plane and visualised it, we observed that to the right, as $\sigma \rightarrow +\infty$, the magnitude $|\zeta(s)|$ appears to approach 1.



The video for this topic is here [youtube], and slides here [pdf].


Let's consider what happens to the Riemann Zeta function $\zeta(s)$ as $\sigma\rightarrow+\infty$.

$$\lim_{\sigma\rightarrow\infty}\sum_{n}\frac{1}{n^{s}}=\lim_{\sigma\rightarrow\infty}\left(\frac{1}{1^{s}}+\frac{1}{2^{s}}+\ldots\right)$$

It's tempting to look at each term and notice that $|n^{-s}|=n^{-\sigma}\rightarrow0$ as $\sigma\rightarrow\infty$ for all $n$ except $n=1$, then conclude $\zeta(s)\rightarrow1$ as $\sigma\rightarrow\infty$. In effect, we've taken the limit inside the sum.

$$\sum_{n}\lim_{\sigma\rightarrow\infty}\frac{1}{n^{s}}=\lim_{\sigma\rightarrow\infty}\left(\frac{1}{1^{s}}\right)+\lim_{\sigma\rightarrow\infty}\left(\frac{1}{2^{s}}\right)+\ldots$$

However, the limit of an infinite sum is not always the sum of the limits. Tannery's theorem tells us when we can legitimately swap the sum and limit operators.


Tannery's Theorem

The theorem has three requirements

  • An infinite sum $S_{j}=\sum_{k}f_{k}(j)$ that converges
  • The limit $\lim_{j\rightarrow\infty}f_{k}(j)=f_{k}$ exists
  • An $M_{k}\geq\left|f_{k}(j)\right|$ independent of $j$, where $\sum_{k}M_{k}$ converges

If the requirements are met, we can take the limit inside the sum.

$$\lim_{j\rightarrow\infty}\sum_{k}f_{k}(j)=\sum_{k}\lim_{j\rightarrow\infty}f_{k}(j)$$


Proof

Let's first show the sum of the limits actually exists.

By definition, $\left|f_{k}(j)\right|\leq M_{k}$, and $\sum_{k}M_{k}$ converges. Taking $j\rightarrow\infty$ gives us $\left|f_{k}\right|\leq M_{k}$, and so $\sum_{k}\left|f_{k}\right|$ converges, which in turn means $\sum_{k}f_{k}$ converges absolutely. This quantity is the sum of limits.

Now let's show the limit of the sum is the sum of the limits.

Since $\sum_{k}M_{k}$ converges there must be an $N$ so that $\sum_{k=N}M_{k}<\epsilon$, where $\epsilon$ is as small as we require.

We can therefore say,

$$\left|\sum_{k=N}f_{k}(j)\right|\leq\sum_{k=N}\left|f_{k}(j)\right|\leq\sum_{k=N}M_{k}<\epsilon$$

The same bounds hold in the limit $j\rightarrow\infty$.

$$\left|\sum_{k=N}f_{k}\right|\leq\sum_{k=N}\left|f_{k}\right|\leq\sum_{k=N}M_{k}<\epsilon$$

Let's now consider the absolute difference between $\sum_{k}f_{k}(j)$ and $\sum_{k}f_{k}$. Although the following looks complicated, it is simply splitting the sums over $[0,\infty]$ into sums over $[0,N-1]$ and $[N,\infty]$.

$$\begin{align}\left|\sum_{k}f_{k}(j)-\sum_{k}f_{k}\right|&=\left|\sum_{k}^{N-1}f_{k}(j)+\sum_{k=N}f_{k}(j)-\sum_{k}^{N-1}f_{k}-\sum_{k=N}f_{k}\right| \\ \\&\leq\left|\sum_{k=N}f_{k}(j)\right|+\left|\sum_{k=N}f_{k}\right|+\left|\sum_{k}^{N-1}f_{k}(j)-\sum_{k}^{N-1}f_{k}\right| \\ \\ &<2\epsilon+\left|\sum_{k}^{N-1}\left(f_{k}(j)-f_{k}\right)\right|\end{align}$$

As $j\rightarrow\infty$, the finite sum $\sum_{k}^{N-1}\left(f_{k}(j)-f_{k}\right)\rightarrow0$, which leaves a simpler inequality.

$$\lim_{j\rightarrow\infty}\left|\sum_{k}f_{k}(j)-\sum_{k}f_{k}\right|\leq2\epsilon$$

Because $\epsilon$ can be as small as we require, we have $\lim_{j\rightarrow\infty}\sum_{k}f_{k}(j)=\sum_{k}f_{k}$, which proves the theorem.

$$\lim_{j\rightarrow\infty}\sum_{k}f_{k}(j)=\sum_{k}f_{k}=\sum_{k}\lim_{j\rightarrow\infty}f_{k}(j)$$


Application To $\zeta(s)$

Let's apply Tannery's Theorem to $\zeta(s)$. Here the terms $f_{k}(j)$ become $f_{n}(s)=1/n^{s}$, with $n$ playing the role of $k$ and $\sigma$ the role of $j$.

We start with the convergent infinite sum.

$$\zeta(s)=\sum_{n}\frac{1}{n^{s}}\text{ converges for }\sigma>1$$

We confirm the limit of $f_{n}(s)$ exists as $\sigma\rightarrow\infty$.

$$\lim_{\sigma\rightarrow\infty}\frac{1}{n^{s}}=f_{n}=\begin{cases} 1 & n=1\\ 0 & n>1 \end{cases}$$

We also find an $M_{n}\geq\left|f_{n}(s)\right|$ independent of $\sigma$.

$$\left|\frac{1}{n^{s}}\right|=\frac{1}{n^{\sigma}}\leq M_{n}=\frac{1}{n^{\alpha}}$$

Here $\alpha>1$ is fixed; since we are taking $\sigma\rightarrow\infty$ we can assume $\sigma\geq\alpha$, so $M_{n}$ is independent of $\sigma$. The sum $\sum_{n}M_{n}$ converges because $\alpha>1$.

The criteria have been met, so we can legitimately move the limit inside the sum.

$$\lim_{\sigma\rightarrow\infty}\sum_{n}\frac{1}{n^{s}}=\sum_{n}\lim_{\sigma\rightarrow\infty}\frac{1}{n^{s}}=1+0+0+\ldots$$

So $\zeta(s)\rightarrow1$, as $\sigma\rightarrow+\infty$. 
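As a quick numerical sketch (my own, not part of the original argument), the partial sums of the series make this limit visible for increasing real $\sigma$:

```python
# Partial sums of the series at increasing real sigma, showing zeta -> 1.

def zeta_partial(sigma, terms=10000):
    """Partial sum of sum 1/n^sigma; a good approximation for sigma > 1."""
    return sum(float(n) ** -sigma for n in range(1, terms + 1))

for sigma in [2, 5, 10, 20]:
    print(sigma, zeta_partial(sigma))
```

The printed values creep towards 1 as $\sigma$ grows, matching the conclusion from Tannery's theorem.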


Monday 6 September 2021

The Riemann Zeta Function is Almost Symmetric

This blog is a quick note on the symmetry of the Riemann Zeta function.

The video for this blog is online [youtube], and the slides here [pdf].



The plots of the magnitude $|\zeta(s)|$ that we rendered previously suggest the function is symmetric about the real axis. This isn't quite true. 

Remembering the complex conjugate $\overline{s}$ is a reflection of s in the real axis, for example $\overline{3+2i}=3-2i$, let's look again at the terms in the series $\zeta(s)=\sum 1/n^s$.

$$n^{-\overline{s}}=e^{-\overline{s}\ln(n)}=\overline{e^{-s\ln(n)}}=\overline{n^{-s}}$$

This means $\zeta(\overline{s})$ is the complex conjugate of $\zeta(s)$. 

So, although the magnitude of $\zeta(s)$ is mirrored above and below the real axis, the sign of the imaginary part is inverted.

Let's also consider the recently developed series $\zeta(s)=(1-2^{1-s})^{-1}\eta(s)$. 

Following the same logic, we can say that $\eta(\overline{s})$ is the complex conjugate of $\eta(s)$, so this part has inverted phase above and below the real axis. 

We can say something similar for the other factor because $1/\overline{z}=\overline{1/z}$.

$$(1-2^{1-\overline{s}})^{-1} = \overline{\left(1-2^{1-s}\right)^{-1}}$$

Therefore we have:

$$\begin{align} \zeta(\overline{s}) &= (1-2^{1-\overline{s}})^{-1}\cdot\eta(\overline{s})\\ \\ &=\overline{(1-2^{1-s})^{-1}}\; \cdot \; \overline{\eta(s)} \\ \\ &= \overline{\zeta(s)} \end{align}$$

This gives us the same conclusions that the magnitude is mirrored in the real axis, but the phase is inverted.
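We can also see this numerically. The sketch below (my own illustration, using the eta-based series) checks that the partial sums, and hence the limit, satisfy $\zeta(\overline{s})=\overline{\zeta(s)}$; the value of $s$ is an arbitrary choice.

```python
# Check that the partial sums of the eta-based series satisfy
# zeta(conj(s)) = conj(zeta(s)) term by term, so the identity
# survives in the limit. The value of s below is arbitrary.

def zeta_eta(s, terms=2000):
    """Approximate zeta(s) via (1 - 2^(1-s))^(-1) * sum (-1)^(n+1) / n^s."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

s = 0.5 + 14.0j
lhs = zeta_eta(s.conjugate())
rhs = zeta_eta(s).conjugate()
print(abs(lhs - rhs))  # ~0, up to floating-point rounding
```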

The plot below shows the phase of $\zeta(s)$ coloured to illustrate this almost-symmetry. 


Sunday 5 September 2021

$\zeta(s)$ Has Only One Pole In $\sigma>0$

Here we show that the Riemann Zeta function has only one pole in the domain $\sigma>0$. It is based on a suggestion by user @leoli1 on math.stackexchange.



The video for this blog is here [youtube], and the slides are here [pdf].


Previously

The Riemann Zeta function represented by the series $\zeta(s)=\sum1/n^{s}$ converges for $\sigma>1$, and therefore has no poles in that domain.

In the last blog post we developed a new series for $\zeta(s)$ based on the eta function $\eta(s)$.

$$\zeta(s)=\frac{1}{1-2^{1-s}}\eta(s)$$

Because $\eta(s)$ is a Dirichlet series, it converges for $\sigma>0$, so any divergence must come from the factor $(1-2^{1-s})^{-1}$. 

The denominator $(1-2^{1-s})$ is zero at $s=1+2\pi ia/\ln(2)$ for integer $a$, so the factor $(1-2^{1-s})^{-1}$ diverges at all these points. 

Visualising $\zeta(s)$ suggested it had only one pole at $s=1+0i$. If true, this would require $\eta(s)$ to have zeros at $s=1+2\pi ia/\ln(2)$ for integers $a\neq0$ to cancel out the other poles from $(1-2^{1-s})^{-1}$. 

To prove this directly isn't easy, but there is a nice indirect path.


Yet Another Series for $\zeta(s)$

We start with a specially constructed Dirichlet series.

$$X(s)=\frac{1}{1^{s}}+\frac{1}{2^{s}}-\frac{2}{3^{s}}+\frac{1}{4^{s}}+\frac{1}{5^{s}}-\frac{2}{6^{s}}+\ldots$$

The pattern can be exploited to find yet another series for $\zeta(s)$.

$$\begin{align}\zeta(s)-X(s)&=\frac{3}{3^{s}}+\frac{3}{6^{s}}+\frac{3}{9^{s}}+\ldots \\ \\ &=\frac{3}{3^{s}}\zeta(s)=3^{1-s}\zeta(s) \\ \\ \zeta(s)&=\frac{1}{1-3^{1-s}}X(s)\end{align}$$
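We can sanity-check this identity numerically (my own sketch, not from the original post). At the arbitrarily chosen point $s=2$ we know $\zeta(2)=\pi^{2}/6$, so $X(2)$ should equal $(1-3^{-1})\,\zeta(2)$.

```python
# Summing X(s) with the repeating coefficient pattern +1, +1, -2 and
# comparing with (1 - 3^(1-s)) * zeta(s) at s = 2, where zeta(2) = pi^2/6.
import math

def X_partial(s, terms=300000):
    total = 0.0
    for n in range(1, terms + 1):
        c = -2.0 if n % 3 == 0 else 1.0
        total += c / n ** s
    return total

approx = X_partial(2)
expected = (1 - 3.0 ** -1) * math.pi ** 2 / 6
print(approx, expected)
```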


Comparing Potential Poles

The series $X(s)$ is a Dirichlet series, so it converges for $\sigma>0$, and any divergence must come from the factor $(1-3^{1-s})^{-1}$. The denominator $(1-3^{1-s})$ is zero when $s=1+2\pi ib/\ln(3)$ for integer $b$.

We can equate the two expressions for where the poles of $\zeta(s)$ could be.

$$\begin{align}1+\frac{2\pi ia}{\ln(2)}&=1+\frac{2\pi ib}{\ln(3)} \\ \\ \frac{a}{b}&=\frac{\ln(2)}{\ln(3)}\end{align}$$

There are no non-zero integers $a$ and $b$ which satisfy this because $\ln(2)/\ln(3)$ is irrational: $a\ln(3)=b\ln(2)$ would mean $3^{a}=2^{b}$, which is impossible for non-zero integers.

This leaves us with $s=1+0i$ as the only pole for $\zeta(s)$ in the domain $\sigma>0$.


Monday 23 August 2021

A New Riemann Zeta Series

Previously, we visualised the Riemann Zeta series $\sum 1/n^s$ in its domain $\sigma > 1$ and saw that its shape strongly suggested the function should continue to the left of $\sigma=1$.



Slides are here [pdf], and a video is here [youtube].


Series And Functions

Before we try to extend the Riemann Zeta series to the left of $\sigma=1$ we should first understand how it might even be possible.

Let's take a fresh look at the well known Taylor series expansion of $f(x)=(1-x)^{-1}$ developed around $x=0$.

$$S_{0}=1+x+x^{2}+x^{3}+\ldots$$

The series $S_{0}$ is only valid for $|x|<1$, but the function $f(x)$ is defined for all $x$ except $x=1$. This apparent discrepancy requires some clarification.

That series $S_{0}$ is just one representation of the function, valid for some of that function's domain, specifically $|x|<1$. We can use the standard method for working out Taylor series to find a different representation of $f(x)$ valid outside $|x|<1$. For example, the following series $S_{3}$ is developed around $x=3$, and is valid for $1<x<5$.

$$S_{3} = -\frac{1}{2} + \frac{1}{4}(x-3) - \frac{1}{8}(x-3)^{2} + \frac{1}{16}(x-3)^{3} - \ldots$$

So the series $S_{0}$ and $S_{3}$ both represent $f(x)=(1-x)^{-1}$, but over different parts of its domain. This clarifies the distinction between a function and the various series which represent it on parts of its domain. 
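A quick numerical check (my own, with arbitrarily chosen test points) confirms that $S_{3}$ agrees with $f(x)$ inside $1<x<5$. The general term used below, $(-1)^{k+1}(x-3)^{k}/2^{k+1}$, reproduces the coefficients written out above.

```python
# Evaluating partial sums of S3 at points inside 1 < x < 5 and
# comparing with f(x) = 1/(1-x).

def f(x):
    return 1.0 / (1.0 - x)

def S3(x, terms=60):
    # General term: (-1)^(k+1) * (x-3)^k / 2^(k+1)
    return sum((-1) ** (k + 1) * (x - 3) ** k / 2 ** (k + 1) for k in range(terms))

print(S3(2.0), f(2.0))  # compare with f(2) = -1
print(S3(4.0), f(4.0))  # compare with f(4) = -1/3
```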

Perhaps the series $\sum1/n^{s}$ only gives us a partial view of a much richer function that encodes information about the primes. Could that function be represented by a different series over a different domain?


A New Series

Let's write out the familiar series for $\zeta(s)$.

$$\zeta(s)=\sum\frac{1}{n^{s}}=1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\frac{1}{4^{s}}+\frac{1}{5^{s}}+\ldots$$

An alternating version of the zeta function is called the eta function $\eta(s)$.

$$\eta(s)=\sum\frac{(-1)^{n+1}}{n^{s}}=1-\frac{1}{2^{s}}+\frac{1}{3^{s}}-\frac{1}{4^{s}}+\frac{1}{5^{s}}-\ldots$$

This is a Dirichlet series which, as explained in a previous post, converges for $\sigma>0$. If we could express $\zeta(s)$ in terms of $\eta(s)$, we would have a new series for the Riemann Zeta function that extends to the left of $\sigma=1$, even if only as far as $\sigma>0$.

Looking at the difference $\zeta(s)-\eta(s)$, we can see a pattern to exploit.

$$\begin{align}\zeta(s)-\eta(s)&=\frac{2}{2^{s}}+\frac{2}{4^{s}}+\frac{2}{6^{s}}+\ldots\\ \\&=\frac{2}{2^{s}}\left(1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\ldots\right)\\ \\&=2^{1-s}\zeta(s)\end{align}$$

Isolating $\zeta(s)$ gives us a new series that is valid in the larger domain $\sigma>0$.

$$\boxed{\zeta(s)=\frac{1}{1-2^{1-s}}\sum\frac{(-1)^{n+1}}{n^{s}}}$$

The denominator $(1-2^{1-s})$ is zero at $s=1+0i$, and provides $\zeta(s)$ with its divergence at that point. 
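Before moving on, we can check the boxed series numerically (my own sketch). Evaluating it at $s=2$ should reproduce the known value $\zeta(2)=\pi^{2}/6$.

```python
# Evaluating the eta-based series at s = 2 and comparing with
# the known value zeta(2) = pi^2/6.
import math

def zeta_via_eta(s, terms=100000):
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

approx = zeta_via_eta(2)
print(approx, math.pi ** 2 / 6)
```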


Visualising The New Series

Any enthusiastic mathematician would be impatient to visualise this new series. The plot below shows a contour plot of $\ln\left|\zeta(s)\right|$, this time evaluated using the new series. 



Our previous intuition was justified: the surface does continue smoothly to the left of $\sigma=1$. In fact the contours suggest the function should again continue smoothly even further into $\sigma<0$. 

Even more interesting is the appearance of zeros, all of which seem to be on the line $s=1/2+it$. 

These zeros are critically important, but we'll have to continue our journey to see why.


Animated 3D View

The following animation shows the logarithm of the magnitude of this extended Riemann Zeta function from different angles to better illustrate its shape.


Monday 26 July 2021

Infinite Products - Revisited

Here we look again at infinite products because previously we didn't focus on infinite products of complex numbers, and presented the convergence criteria without explaining them.



The video for this topic is at [youtube], and slides are here [pdf].


Initial Intuition

Let's first develop an intuition for infinite products through some examples.

$$2\times3\times4\times5\times\ldots$$

It is easy to see the above product diverges. Each factor increases the size of the product. 

$$2\times0\times4\times5\times\ldots$$

Multiplying by zero causes a product to be zero, so this product is zero because one of its factors is zero.

$$\frac{1}{2}\times\frac{1}{3}\times\frac{1}{4}\times\frac{1}{5}\times\ldots$$

This product is more interesting. Each factor is a fraction that reduces the size of the product. As the number of these reducing factors grows, the product gets ever closer to zero. We can make the leap to say the value of the infinite product is zero.

We have found two different ways for the product to be zero. We'll need to keep both in mind as we work with infinite products. 


Definition

Similar to infinite series, we say an infinite product converges if the limit of the partial products is a finite value.

$$\lim_{N\rightarrow\infty}\prod_{n=1}^{N}a_{n}=P$$

We'll see why it is conventional to insist the finite value is non-zero.


Example 1

Does the infinite product $\prod_{n=1}^{\infty}\left(1+1/n\right)$ converge? 

Each factor $(1+1/n)$ is larger than one, so we expect the product to keep growing. Let's be more rigorous and see how the partial products actually grow.

$$\begin{align}\prod_{n=1}^{N}\left(1+\frac{1}{n}\right)&=\prod_{n=1}^{N}\left(\frac{n+1}{n}\right)\\&\\&=\frac{\cancel{2}}{1}\times\frac{\cancel{3}}{\cancel{2}}\times\frac{\cancel{4}}{\cancel{3}}\times\ldots\times\frac{N+1}{\cancel{N}}\\&\\&=N+1\end{align}$$

As $N\rightarrow\infty$, the product diverges.


Example 2

Let's now look at the similar infinite product $\prod_{n=2}^{\infty}\left(1-1/n\right)$. Notice $n$ starts at 2 to ensure the first factor is not zero. 

Each factor $(1-1/n)$ is smaller than one, so we expect the product to keep shrinking. Let's see if the partial products do indeed get smaller. 

$$\begin{align}\prod_{n=2}^{N}\left(1-\frac{1}{n}\right)&=\prod_{n=2}^{N}\left(\frac{n-1}{n}\right)\\&\\&=\frac{1}{\cancel{2}}\times\frac{\cancel{2}}{\cancel{3}}\times\frac{\cancel{3}}{\cancel{4}}\times\ldots\times\frac{\cancel{N-1}}{N}\\&\\&=\frac{1}{N}\end{align}$$

As $N\rightarrow\infty$, the product tends to zero. Remember that for convergence we insist the limit is non-zero. For this reason we say the product diverges to zero.
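Both telescoping results are easy to confirm numerically. The sketch below (my own, not from the post) computes the partial products directly: the first should equal $N+1$, the second $1/N$.

```python
# Partial products for the two examples above:
# prod (1 + 1/n) for n = 1..N telescopes to N + 1,
# prod (1 - 1/n) for n = 2..N telescopes to 1/N.

def partial_product(factors):
    p = 1.0
    for f in factors:
        p *= f
    return p

N = 1000
grows = partial_product(1 + 1 / n for n in range(1, N + 1))    # expect N + 1
shrinks = partial_product(1 - 1 / n for n in range(2, N + 1))  # expect 1 / N
print(grows, shrinks)
```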


Convergence And $a_{n}$

We know that for an infinite series $\sum a_{n}$ to converge, the terms $a_{n}$ must $\rightarrow0$. For an infinite product $\prod a_{n}$ to converge, the terms $a_{n}\rightarrow1$. 

If each factor $a_{n}$ stayed larger than 1, the product would grow without bound. If each stayed smaller than 1, the product would shrink towards zero. Negative $a_{n}$ cause the partial products to oscillate in sign, so convergence requires $a_{n}\rightarrow1$.


Removing Zero-Valued Factors

A single zero-valued factor collapses an entire product to zero. If an infinite product has a finite number of zero-valued factors, they can be removed to leave a potentially interesting different product. 

For example, the following product is zero because the first factor is zero. 

$$\prod_{n=1}\left(1-\frac{1}{n^{2}}\right)=0$$

Removing the first factor leaves a much more interesting product.

$$\prod_{n=2}\left(1-\frac{1}{n^{2}}\right)=\frac{1}{2}$$
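A quick numerical check (mine) of the claimed value, computing the partial products up to a large but arbitrary cut-off:

```python
# Partial products of (1 - 1/n^2), starting at n = 2, creep towards 1/2.
p = 1.0
for n in range(2, 100001):
    p *= 1 - 1 / (n * n)
print(p)
```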


Convergence Criterion 1

Since the terms in a convergent infinite product tend to 1, it is useful to write the factors as $(1+a_{n})$. 

$$P=\prod\left(1+a_{n}\right)$$

We can turn a product into a sum by taking the logarithm.

$$\ln(P)=\ln\prod\left(1+a_{n}\right)=\sum\ln\left(1+a_{n}\right)$$

Using $1+x\leq e^{x}$ we arrive at a nice inequality.

$$\ln(P)\leq\sum a_{n}$$

This tells us that if the sum is bounded, the product is bounded too. If the terms $a_{n}$ are always positive, then the partial sums and partial products grow monotonically (without oscillation), so boundedness implies convergence. This is a useful result, but we can strengthen it.

Expanding out the product $\prod(1+a_{n})$ gives us a sum which includes the terms 1, all the individual $a_{n}$, and also the combinations of different $a_{n}$ multiplied together. This gives us an inequality for $\sum a_{n}$ in the other direction.

$$1+\sum a_{n}\leq\prod\left(1+a_{n}\right)=P$$

This tells us that if the product converges, so does the sum. The two results together give us our first convergence criterion.

$$\sum a_{n}\text{ converges }\Leftrightarrow\prod\left(1+a_{n}\right)\text{ converges, for }a_{n}>0$$

This allows us to say $\prod(1+1/n)$ diverges because $\sum1/n$ diverges, and that $\prod(1+1/n^{2})$ converges because $\sum1/n^{2}$ converges.
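These two conclusions can be seen numerically. The sketch below (my own) computes partial products of both: the $1/n^{2}$ product settles towards a finite limit, while the $1/n$ product keeps growing.

```python
# Partial products for the two cases: (1 + 1/n^2) settles towards a
# finite limit because sum 1/n^2 converges, while (1 + 1/n) keeps growing.

def partials(a, N):
    """List of partial products of (1 + a(n)) for n = 1..N."""
    p, out = 1.0, []
    for n in range(1, N + 1):
        p *= 1 + a(n)
        out.append(p)
    return out

conv = partials(lambda n: 1 / n ** 2, 10000)
div = partials(lambda n: 1 / n, 10000)
print(conv[-1], div[-1])
```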


Convergence Criterion 2

A very similar argument that uses $1-x\leq e^{-x}$ leads to a criterion for products of the form $\prod(1-a_{n})$.

$$\sum a_{n}\text{ converges }\Leftrightarrow\prod\left(1-a_{n}\right)\text{ converges, for }0<a_{n}<1$$

So $\prod_{2}^{\infty}(1-1/n)$ diverges, because $\sum1/n$ diverges.


Divergence To Zero

The logarithmic view of infinite products is useful because it turns a product into a sum, but it has an interesting side effect. 

If the partial products tend to zero, then their logarithms diverge towards $-\infty$. This is why we say such a product diverges to zero.


Convergence Criterion 3

The previous convergence criteria are for real values of $a_{n}$. It would be useful to have a criterion for complex $a_{n}$. To do that we need an intermediate result about $|a_{n}|$.

For complex $a_{n}$, the magnitudes $|a_{n}|$ are real and non-negative, so the earlier reasoning applies to $\prod(1+|a_{n}|)$. This gives us an intermediate result. 

$$\sum|a_{n}|\text{ converges }\Leftrightarrow\prod(1+|a_{n}|)\text{ converges}$$

We are interested in products $\prod(1+a_{n})$ with complex $a_{n}$, not just $\prod(1+|a_{n}|)$. Let's start with two partial products with complex $a_{n}$.

$$p_{N}=\prod^{N}(1+a_{n})$$

$$q_{N}=\prod^{N}(1+|a_{n}|)$$

We need to assert $a_{n}\neq-1$ to ensure no zero-valued factors $(1+a_{n})$. 

For $N>M\geq1$, we can compare $|p_{N}-p_{M}|$ with $|q_{N}-q_{M}|$ with a little algebra.

$$\begin{align}\left|p_{N}-p_{M}\right|&=|p_{M}|\cdot\left|\frac{p_{N}}{p_{M}}-1\right|\\&\\&=|p_{M}|\cdot\left|\prod_{M+1}^{N}(1+a_{n})-1\right|\\&\\&\leq|q_{M}|\cdot\left|\prod_{M+1}^{N}(1+|a_{n}|)-1\right|\\&\\&=|q_{M}|\cdot\left|\frac{q_{N}}{q_{M}}-1\right|\\&\\\left|p_{N}-p_{M}\right|&\leq\left|q_{N}-q_{M}\right|\end{align}$$

If $|q_{N}-q_{M}|<\epsilon$, where $\epsilon$ is as small as we want, then $|p_{N}-p_{M}|<\epsilon$ too. This is the Cauchy criterion for convergence, and it tells us that if $q_{N}$ converges, so does $p_{N}$.

So $\sum|a_{n}|$ converges means $\prod(1+|a_{n}|)$ converges, which we can now say means $\prod(1+a_{n})$ also converges. We finally have our third convergence criterion.

$$\sum|a_{n}|\text{ converges }\implies\prod(1+a_{n})\text{ converges, for }a_{n}\neq-1$$

Notice this criterion is one way: we can't say the sum converges if the product converges. We also have the new constraint $a_{n}\neq-1$.


Why Convergence Is Non-Zero

Let's see why convergence according to these criteria means the products converge to a non-zero value. We've already seen that $a_{n}\rightarrow0$, which means $|a_{n}|<1/2$ for all but a finite number of terms. 

We use the useful inequality $1+x\leq e^{x}$ again.

$$1\leq\prod(1+|a_{n}|)<e^{\sum|a_{n}|}$$

This tells us that if the sum $\sum|a_{n}|$ converges, then the product $\prod(1+|a_{n}|)$ converges and is non-zero. 

We can use another inequality $1-x\geq e^{-2x}$ for $0\leq x\leq1/2$, and that $e^{y}>0$ for all real $y$.

$$0<e^{-2\sum|a_{n}|}\leq\prod(1-|a_{n}|)\leq1$$

This tells us that if the sum $\sum|a_{n}|$ converges, then the product $\prod(1-|a_{n}|)$ converges and is non-zero. 

Now we use another inequality $1-|a_{n}|\le|1\pm a_{n}|\le1+|a_{n}|$ to relate $\prod(1-|a_{n}|)$ to $\prod(1-a_{n})$.

$$\prod(1-|a_{n}|)\leq|\prod(1-a_{n})|\leq\prod(1+|a_{n}|)$$

Assuming $\sum|a_{n}|$ converges, we can finally say $|\prod(1-a_{n})|$ is non-zero, because its value is between two known non-zero values.

Riemann Zeta Function $\zeta(s) \neq 0$ for $\sigma>1$ 

The Riemann Zeta function can be written as an infinite product over primes. Here $s=\sigma+it$, and $\sigma>1$.

$$\zeta(s)=\sum\frac{1}{n^{s}}=\prod\left(1-\frac{1}{p^{s}}\right)^{-1}$$

It is natural to ask if $\zeta(s)$ has any zeros in the domain $\sigma>1$.

None of the factors $(1-1/p^{s})^{-1}$ is zero. That would require $p^{s}$ to be zero, and that isn't possible. 

$$\left|p^{s}\right|=\left|e^{s\ln(p)}\right|=e^{\sigma\ln(p)}>0$$

We also need to check the infinite product doesn't diverge to zero. For the moment let's consider $1/\zeta(s)=\prod(1-1/p^{s})$. Using the third convergence criterion, we check whether $\sum|1/p^{s}|$ converges.

$$\sum\left|\frac{1}{p^{s}}\right|=\sum\frac{1}{p^{\sigma}}\leq\sum\frac{1}{n^{\sigma}}$$

The reason $\sum1/p^{\sigma}\leq\sum1/n^{\sigma}$ is that the primes $p$ are a subset of the integers $n$ and every term is positive. Because $\sum1/n^{\sigma}$ converges for $\sigma>1$, so does $\sum|1/p^{s}|$. This means $1/\zeta(s)$ converges to a non-zero value, and therefore so does $\zeta(s)$.

We can now say the Riemann Zeta function has no zeros in the domain $\sigma>1$.
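To close, a numerical sketch (my own, with arbitrary truncation points) comparing the Euler product over primes with the series definition at $\sigma=2$; both should approximate $\zeta(2)=\pi^{2}/6\approx1.6449$.

```python
# Comparing the truncated Euler product over primes with the truncated
# series at sigma = 2; both approximate zeta(2) = pi^2/6.

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

sigma = 2.0
series = sum(1 / n ** sigma for n in range(1, 200001))
product = 1.0
for p in primes_up_to(1000):
    product *= 1 / (1 - p ** -sigma)  # each factor is finite and non-zero
print(series, product)
```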