# Fredholm Endomorphisms of Index 0

A while ago I posted a note on the arXiv about my attempt at applying the theory of Fredholm operators from functional analysis to the more general context of $K$-algebras. I wanted to work through an argument here because it has a suspiciously nice proof; I’ve learned to be skeptical of such proofs.

Let $f$ be a linear transformation of an infinite dimensional vector space $V$ with basis $\mathcal B$. We can break up the situation into four spaces (two attached to the domain and two to the codomain). First are the kernel of $f$, which is a subspace of the domain, and the image of $f$, which is a subspace of the codomain; we’ll denote these by the typical $\ker(f)$ and $\text{im}(f)$. For the purposes of intuition, you can think of the kernel of a linear transformation as a measurement of how far $f$ is from being injective. On the codomain side, there is a similar measurement for surjectivity called the cokernel, which is defined as the quotient $V/\text{im}(f)$. The final space is $V/\ker(f)$, the quotient space which the first isomorphism theorem assures us is isomorphic to the image.
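To make these four spaces concrete, here is a finite dimensional toy computation (the matrix is my own example, not from the note). It also shows why the interesting theory lives in infinite dimensions: for a square matrix, rank–nullity forces $\dim\ker = \dim\text{coker}$, so the two measurements always agree.

```python
import numpy as np

# Toy finite-dimensional illustration (my own example): for a square matrix A
# on K^n, rank-nullity gives dim ker(A) = n - rank(A) = dim coker(A).
A = np.array([[1., 2., 3.],
              [2., 4., 6.],   # row 2 = 2 * row 1, so A drops rank
              [0., 0., 1.]])

rank = np.linalg.matrix_rank(A)
dim_ker = A.shape[1] - rank    # nullity: how far A is from injective
dim_coker = A.shape[0] - rank  # codimension of im(A): how far from surjective
print(dim_ker, dim_coker)      # prints: 1 1
```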

The goal of the above article is to provide a classification of endomorphisms which are close to invertible, but not necessarily invertible. The measurement the paper proposes for this is inspired by the Fredholm index of a bounded endomorphism. Before getting to that, let’s remind ourselves of the setting: $B(K)$ will denote the algebra of matrices, indexed by $\mathbb Z^+ \times \mathbb Z^+$, which have only finitely many nonzero entries in any row or column. This algebra has a unique minimal ideal $M_\infty(K)$, the matrices which have only finitely many nonzero entries in total. The algebra $B(K)/M_{\infty}(K)$ will be denoted by $Q(K)$. While these three algebras are useful for building intuition, they will be less useful for the following arguments. For those we’ll be using their endomorphism analogues.

Given a linear transformation of a countable-dimensional vector space $f: V \rightarrow V$ with basis $\mathcal B = \{b_i : i \in \mathbb Z^+\}$, there is a natural representation of $f$ as a column-finite matrix $[f]_{\mathcal B}$, that is, an infinite matrix in which every column has only finitely many nonzero entries.
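As a quick sketch of this correspondence (the example is my own, not from the note): the $j$-th column of $[f]_{\mathcal B}$ is the coordinate vector of $f(b_j)$. Take differentiation on polynomials with basis $\{1, x, x^2, \ldots\}$; each column has a single nonzero entry, so the matrix is column-finite.

```python
import numpy as np

# Top-left N x N corner of the matrix of d/dx in the basis {1, x, x^2, ...}.
# Column j holds the coordinates of d/dx(x^j) = j * x^(j-1).
N = 5
D = np.zeros((N, N))
for j in range(1, N):
    D[j - 1, j] = j

# Every column has at most one nonzero entry: column-finite.
print(D)
```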

The endomorphism-ring equivalent of $B(K)$ is the ring of bounded endomorphisms, $B(V)$. For the given vector space $V$, define a descending sequence of subspaces $\{V_n : n \in \mathbb{Z}^+\}$ by $V_n = \bigoplus_{i = n}^{\infty} Kb_i$.

Then an endomorphism $T$ is bounded if for any $m \in \mathbb Z^+$ there is some $n \in \mathbb Z^+$ with the property that $T(V_n) \subseteq V_m$. To quote the paper where I first heard about this endomorphism ring, “A moment’s reflection on the standard correspondence between representations of endomorphisms as $\mathbb Z^+ \times \mathbb Z^+$ matrices confirms that $B(K) \simeq B(V)$.”
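The boundedness condition can be probed computationally. Below is a minimal sketch under my own modelling choices (nothing here is from the paper): finitely supported vectors as `{index: coefficient}` dicts, and a spot-check of $T(V_n) \subseteq V_m$ on basis vectors up to a probe bound.

```python
# Finitely supported vectors over Z^+ = {1, 2, ...} as {index: coeff} dicts.
def forward_shift(i):
    # S_i sends b_k to b_{k+i}
    return lambda v: {k + i: c for k, c in v.items()}

def lands_in(T, n, m, probe=range(1, 50)):
    # Spot-check T(V_n) ⊆ V_m on the basis vectors b_k with n <= k < 50;
    # V_m is spanned by b_m, b_{m+1}, ..., so we need min support >= m.
    return all(min(T({k: 1})) >= m for k in probe if k >= n)

S2 = forward_shift(2)
# For m = 10, taking n = 10 works: S_2 only pushes supports upward.
print(lands_in(S2, 10, 10))   # prints: True
```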

In a similar way, we’ll call the (non-unital) subalgebra of $B(V)$ consisting of the bounded endomorphisms with finite-dimensional range $M_\infty(V)$, and assert that $M_\infty(K) \simeq M_\infty(V)$. As above, $M_\infty(V)$ is a minimal ideal of $B(V)$.

Returning to infinite matrices, but feeling comfortable that we can transfer between matrices and endomorphisms, a matrix is called Fredholm if it is invertible in $B(K)/M_\infty(K)$.

Lemma:

If an endomorphism $f$ is Fredholm, then $\ker(f)$ and $\text{coker}(f)$ are both finite dimensional.

Since both of these spaces are finite dimensional, and intuition tells us that the kernel and cokernel measure the extent to which an endomorphism fails to be injective and surjective respectively, we define the index of a Fredholm endomorphism to be $\text{Ind}(f) = \dim(\ker(f)) - \dim(\text{coker}(f))$.

Is this a blunt instrument for measuring endomorphisms? Sure, but its simplicity has some facility. For $i \geq 0$ define $S_i$ to be the operator which shifts the entries of the vector $(v_1, v_2, v_3, \ldots)^t$ forward by $i$ entries, filling the now vacant entries with zeroes. A moment’s thought shows that $\text{Ind}(S_i) = -i$. One can similarly define a “backward shift” $S_j$ for $j < 0$ and show that $\text{Ind}(S_j) = -j$. Note that $S_0$ is the identity endomorphism on $V$.
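One way to see these index values concretely (a device of my own, not from the note): truncating $S_i$ compatibly with its shift produces a *rectangular* matrix, and its rank–nullity data recovers the infinite-dimensional index — a forward shift by $i$ loses $i$ dimensions of codomain, a backward shift by $|j|$ gains a $|j|$-dimensional kernel.

```python
import numpy as np

def truncated_shift(i, N=8):
    # Forward shift by i >= 0 as a map K^N -> K^(N+i);
    # backward shift by |i| (i < 0) as a map K^N -> K^(N+i).
    M = np.zeros((N + i, N))
    for k in range(max(0, -i), N):
        M[k + i, k] = 1.0          # b_k -> b_{k+i}; early b_k -> 0 if i < 0
    return M

def index(M):
    r = np.linalg.matrix_rank(M)
    return (M.shape[1] - r) - (M.shape[0] - r)   # dim ker - dim coker

print([index(truncated_shift(i)) for i in (-2, -1, 0, 1, 2)])
# prints: [2, 1, 0, -1, -2], i.e. Ind(S_i) = -i in every case
```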

This post is already getting a bit long, so let’s end it with the following proposition which has the aforementioned easy proof.

Proposition:

A Fredholm endomorphism has index zero if and only if it can be written as the sum of an invertible endomorphism from $B(V)$ and an endomorphism from $M_\infty(V)$.

Proof: The backwards direction is the proof of Proposition 2.9 in my aforementioned note. So let $f$ be a Fredholm endomorphism of index zero. Then, necessarily, the kernel and cokernel of $f$ have the same finite dimension, so they are isomorphic. Let $\phi$ be an isomorphism between $\ker(f)$ and $\text{coker}(f)$ (identifying $\text{coker}(f)$ with a finite dimensional complement of $\text{im}(f)$ in $V$). The first isomorphism theorem from linear algebra also provides that $\hat f: V/\ker(f) \rightarrow \text{im}(f)$ is an isomorphism, where $\hat f$ is the map induced by $f$ on $V/\ker(f)$ (you can also think of it as $f$ applied to cosets). Then, writing $V = \ker(f) \oplus V/\ker(f)$, define an endomorphism $\phi \oplus \hat f: V \rightarrow V$. As it is a coproduct of injective endomorphisms which takes $V$ onto $V$, this is an isomorphism, i.e. an invertible endomorphism in $B(V)$.

It’s straightforward to see that both $\phi$ and $\hat f$ are bounded, hence their coproduct is also. Moreover $(\phi \oplus \hat f) - f = \phi$ (extended by zero), which has finite-dimensional range and so lies in $M_\infty(V)$. Hence the claim. $\blacksquare$
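To sanity-check the proposition on a concrete operator (an example of my own): take $f = \text{id} - P_1$, where $P_1$ projects onto $b_1$. This $f$ kills $b_1$ and fixes every other basis vector, so $\dim\ker(f) = \dim\text{coker}(f) = 1$ and $\text{Ind}(f) = 0$, and it visibly decomposes as invertible plus finite rank: $f = I + (-P_1)$ with $-P_1$ of rank one, i.e. in $M_\infty(V)$.

```python
import numpy as np

# f = I - P_1 on a finite truncation: kills b_1, fixes b_2, b_3, ...
N = 6
I = np.eye(N)
P1 = np.zeros((N, N))
P1[0, 0] = 1.0
f = I - P1

r = np.linalg.matrix_rank(f)
dim_ker = N - r                          # = 1 (spanned by b_1)
dim_coker = N - r                        # = 1 (b_1 misses the image)
print(dim_ker - dim_coker)               # prints: 0, so Ind(f) = 0
print(np.linalg.matrix_rank(f - I))      # prints: 1, a finite-rank perturbation
```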

I have a feeling that this proof is too easy, but I can’t put my finger on what is wrong with it. Any ideas? My only thought is that possibly the redefinition of $V$ as $\ker(f) \oplus V/\ker(f)$ involves a basis change which makes $f$ not bounded? But I’m not sure about that…