What do determinants tell us about the eigenvalues of integer-valued matrices? — And what can we say about Fuglede-Kadison determinants of operator-valued semicircular elements?

Consider a matrix with integer entries, like

B=\begin{pmatrix} 1&2&1&3\\ 2&2&1&1\\ 1&1&2&3\\ 3&1&3&1 \end{pmatrix}

How can we convince ourselves that there cannot be two eigenvalues close to zero? Say, you should convince me that it is not possible that two eigenvalues both have absolute value smaller than 0.1. Of course, we should not calculate the eigenvalues, but decide this by arguments which are as soft as possible.

It is easy to get upper bounds for eigenvalues by the operator norm or the normalized trace of the matrix – those are quantities which are usually easy to control.

Not so clear are lower bounds, but here it is actually the determinant which is of good use. Namely, since the determinant is the product of the eigenvalues, a lower bound for the absolute value of the determinant, together with the easy upper bounds mentioned above, gives lower bounds for the absolute values of the non-zero eigenvalues. In our example, we can calculate the determinant as -1 and estimate the operator norm from above by the Frobenius norm, which is \sqrt{60}. From this we can conclude that two eigenvalues of absolute value smaller than 0.1 are not possible, because otherwise we would have the estimate

\vert \text{det}(B)\vert \leq 0.1\times 0.1\times \sqrt{60}\times\sqrt{60}=0.6,

which contradicts the fact that \vert \text{det}(B)\vert=1.
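If you want to double-check these numbers, here is a quick numerical sanity check (just an illustration with numpy, not part of the soft argument):

```python
import numpy as np

B = np.array([[1, 2, 1, 3],
              [2, 2, 1, 1],
              [1, 1, 2, 3],
              [3, 1, 3, 1]])

print(np.linalg.det(B))                        # -1, up to floating point error
print(np.linalg.norm(B, 'fro') ** 2)           # 60, so the Frobenius norm is sqrt(60)
print(np.sort(np.abs(np.linalg.eigvalsh(B))))  # eigenvalue moduli; by the argument above, at most one can be below 0.1
```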

Maybe you don’t like so much the idea of having to calculate the determinant, as this does not feel so soft. Assume, however, that I told you, as an oracle, that the matrix is invertible. Then you can forget about calculating the determinant and just argue that the determinant of a matrix with integer entries must itself be an integer; since it cannot be zero, its absolute value must be at least 1, and you can repeat the above estimate.

Those observations are of course not restricted to the above concrete example; actually, I have hopefully convinced you that no invertible matrix with integer entries can have too many eigenvalues close to zero, and the precise estimate depends on the matrix only through its operator norm.
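Spelled out (this is just the above argument in general form): if an invertible n x n matrix a with integer entries had k eigenvalues of absolute value smaller than \varepsilon, then

1\leq\vert\text{det}(a)\vert\leq \varepsilon^k\,\Vert a\Vert^{n-k},

since the remaining n-k eigenvalues are bounded in absolute value by the operator norm \Vert a\Vert; hence necessarily \varepsilon\geq \Vert a\Vert^{-(n-k)/k}.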

This kind of reasoning can actually be extended to “infinite-dimensional matrices”, which means for us operators in a finite von Neumann algebra. There we are usually interested in the distribution of selfadjoint operators with respect to the trace, and in recent work with Johannes Hoffmann and Tobias Mai it has become important to be able to decide that such distributions do not accumulate too much mass in a neighborhood of zero. In finite von Neumann algebras there also exists a nice version of the determinant, named after Fuglede and Kadison, and the main question is whether the above reasoning survives in such a setting. Indeed it does, for the operators in which we are interested, but for reasons which are still nice, yet quite a bit more tricky than for ordinary matrices.
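To fix notation (this is the standard definition, not spelled out in this post): for an operator a in a finite von Neumann algebra with faithful normal trace \tau, the Fuglede-Kadison determinant is

\Delta(a)=\exp\bigl(\tau(\log\vert a\vert)\bigr),\qquad \vert a\vert=(a^*a)^{1/2};

for an ordinary n x n matrix, with \tau the normalized trace, this reduces to \vert\text{det}(a)\vert^{1/n}.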

The operators we are interested in are operator-valued semicircular elements of the form S=\sum a_i\otimes s_i, where the s_i are free semicircular elements and the a_i are ordinary m x m matrices, with the only restriction that their entries are integers. We assume that such an S is invertible (as an unbounded operator); one can show that its Fuglede-Kadison determinant \Delta(S) is then not equal to zero. The main question is whether we have a uniform lower bound for \Delta(S) away from zero. As there is no formula for the determinant in terms of the entries of the matrix S, the integer values of the entries of the a_i are of no apparent use. Still, we are able to prove the astonishing fact that for all such invertible operator-valued semicircular operators the Fuglede-Kadison determinant satisfies \Delta(S)\geq e^{-1/2}.
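As a consistency check (my own remark here, not a statement taken from the paper): for m=1, n=1 and a_1=1, the operator S=s is a single semicircular element, which is invertible as an unbounded operator since its distribution has no atom at zero, and

\Delta(s)=\exp\Bigl(\int_{-2}^{2}\log\vert x\vert\,\tfrac{1}{2\pi}\sqrt{4-x^2}\,dx\Bigr)=e^{-1/2},

so, if I have not miscalculated, the bound e^{-1/2} is attained in the simplest possible example.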

For all this and much more you should have a look at my newest paper with Tobias, “Fuglede-Kadison determinants of matrix-valued semicircular elements and capacity estimates”. If you need more of an appetizer, here is also the abstract of the paper: We calculate the Fuglede-Kadison determinant of arbitrary matrix-valued semicircular operators in terms of the capacity of the corresponding covariance mapping. We also improve a lower bound by Garg, Gurvits, Oliveira, and Wigderson on this capacity, by making it dimension-independent.

If this is still not enough to get you interested, maybe a concrete challenge will do. Here is a conjecture from our work on the limiting behavior of determinants of random matrices.

Conjecture: Let m and n be natural numbers and let, for each i=1,…,n, an m x m matrix a_i with integer entries be given. Denote by \eta the corresponding completely positive map

\eta:M_m(\mathbb{C})\to M_m(\mathbb{C}), \quad b\mapsto \eta(b):=\sum_{i=1}^n a_i b a_i^*.

Assume that \eta is rank non-decreasing. Then consider n independent GUE random matrices X^{(N)}_1,\dots,X^{(N)}_n and let A_N denote the Gaussian block random matrix

A_N=\sum_{i=1}^n a_i\otimes X^{(N)}_i

of size mN x mN. We claim that A_N is invertible for sufficiently large N and that

\lim_{N\to\infty}\text{det}(A_NA_N^*)^{1/N}=\text{cap}(\eta)e^{-m},

where the capacity \text{cap}(\eta) is defined as the infimum of \text{det}(\eta(b)) over all positive definite b\in M_m(\mathbb{C}) with \text{det}(b)=1.
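For those who like to experiment, here is a minimal numerical sketch of the conjectured limit (the particular matrices a_i below are my own toy choice, not taken from the paper; the GUE normalization is the one whose spectrum converges to the semicircle on [-2,2]):

```python
import numpy as np

# Toy choice of integer matrices (my own example): a_1 = identity, a_2 = the flip.
# The map eta(b) = b + a_2 b a_2^* is rank non-decreasing, since a_1 is invertible.
a = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
m, n = 2, len(a)

def gue(N, rng):
    """GUE matrix, normalized so that the spectrum converges to the semicircle on [-2, 2]."""
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return (G + G.conj().T) / np.sqrt(2 * N)

rng = np.random.default_rng(0)
for N in [50, 100, 200, 400]:
    X = [gue(N, rng) for _ in range(n)]
    # A_N = sum_i a_i (x) X_i^(N), a Gaussian block random matrix of size mN x mN
    A = sum(np.kron(ai, Xi) for ai, Xi in zip(a, X))
    _, logdet = np.linalg.slogdet(A @ A.conj().T)  # log det(A_N A_N^*), numerically stable
    print(N, np.exp(logdet / N))                   # det(A_N A_N^*)^{1/N}; conjectured limit: cap(eta) * exp(-m)
```

If I have not miscalculated, for this particular choice one has \text{cap}(\eta)=4 (with the infimum attained at b=1), so the conjectured limit here would be 4e^{-2}\approx 0.54.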
