The noncommutative Edmonds’ problem revised: how to calculate the inner rank of matrices in noncommuting variables

Update: The paper has now been published in Foundations of Computational Mathematics as an open access article.

My paper with Johannes and Tobias on the noncommutative Edmonds’ problem has gotten a total makeover; the revised version has just appeared on the arXiv. It now relies heavily on the recent results of Tobias and myself on the Fuglede-Kadison determinant of matrix-valued semicircular elements. But also, the organization of the paper is now much more reader-friendly – at least we hope so. Everything you need to know can be found in the first three sections; if you need proofs, read the rest.

Let me first recall the noncommutative Edmonds’ problem. Consider a matrix

A=a_1\otimes x_1+\ldots+a_n\otimes x_n,

where the x_i are noncommuting variables and the a_i are arbitrary complex N\times N matrices; N is fixed, but arbitrary.

The noncommutative Edmonds’ problem asks whether A is invertible over the noncommutative rational functions in the x_i; by deep results of Cohn, this is equivalent to asking whether A is full, i.e., whether its noncommutative rank is equal to N. The noncommutative rank is here the inner rank \text{rank}(A), i.e., the smallest integer r such that we can write A as a product of an N\times r and an r\times N matrix. Fullness of A can also be decided equivalently via a more analytic object: to the matrix A we associate the completely positive map

\eta:M_N(\mathbb{C})\to M_N(\mathbb{C}),\qquad \eta(b):=\sum_{i=1}^n a_iba_i^*.

In terms of \eta, fullness of A is equivalent to \eta being rank non-decreasing, i.e., \text{rank}(\eta(b))\geq\text{rank}(b) for all positive semidefinite b (here we have of course the ordinary commutative rank on the complex N\times N matrices).
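To make the rank non-decreasing condition concrete, here is a minimal numpy sketch; the nilpotent a_1 below is my own toy illustration, not an example from the paper. Since A=a_1\otimes x_1 with nilpotent a_1 is not full, \eta must fail to be rank non-decreasing, and b=e_{11} is a witness:

```python
import numpy as np

def eta(b, coeffs):
    """The completely positive map eta(b) = sum_i a_i b a_i^*."""
    return sum(a @ b @ a.conj().T for a in coeffs)

# Toy example: a1 is nilpotent, so A = a1 (x) x1 is NOT full.
a1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
b = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # positive semidefinite, rank 1

out = eta(b, [a1])                  # a1 b a1^* = 0
print(np.linalg.matrix_rank(b), np.linalg.matrix_rank(out))   # 1 0
```

The rank drops from 1 to 0, certifying that \eta is not rank non-decreasing, in accordance with A not being full.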

This noncommutative Edmonds’ problem has become quite prominent in recent years and there are now a couple of deterministic algorithms; see, in particular, the work of Garg, Gurvits, Oliveira and Wigderson.

Let me now recall our noncommutative probabilistic approach to the noncommutative Edmonds’ problem: we replace the formal variables by concrete operators on an infinite-dimensional Hilbert space. A particularly nice choice – and our favorite one – for these operators are freely independent semicircular variables s_1,\dots,s_n, which are the noncommutative analogues of independent Gaussian random variables. In the case where all the a_i are selfadjoint, the matrix

S=a_1\otimes s_1+\ldots+a_n\otimes s_n

is a matrix-valued semicircular element.

Since one can reduce the general case to the selfadjoint one, it suffices to consider in the following the selfadjoint situation. Then the corresponding S is also a selfadjoint operator, hence its distribution is a probability measure on the real line. By our work of the last ten years or so we know that the invertibility of the formal matrix A over the noncommutative rational functions is equivalent to the invertibility of S as an unbounded operator. But this is equivalent to the question whether S has a trivial kernel, which is equivalent to the question whether its distribution has no atom at zero.

And we know how to calculate the distribution \mu of our matrix-valued semicircular element S. Namely its Cauchy transform g(z), that is, the analytic function

g(z)=\int_{\mathbb{R}} \frac 1{z-t}d\mu(t)

on the complex upper half plane is of the form

g(z)=\frac 1N \text{Tr}(G(z))

where G(z) is given as the unique solution of the matrix-valued equation

zG(z)=1+\eta(G(z)) G(z),

in the lower half-plane of the N\times N matrices. One should note that for each z in the complex upper half-plane there is (by some old results of mine with Reza Rashidi Far and Bill Helton) exactly one solution G(z) in the lower half-plane of the matrices. And, finally, we know that \mu can have an atom only at zero, and the inner rank of A is related to the mass of this atom; more precisely,

\text{rank}(A)=N(1-\mu(\{0\})).
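In practice one can solve the matrix-valued equation by plain fixed-point iteration on the equivalent form G(z)=(z\cdot 1-\eta(G(z)))^{-1}; the iterates converge to the unique solution for z in the upper half-plane, though ever more slowly as z approaches the real axis. A minimal numpy sketch of this idea, checked against the single semicircular element, whose Cauchy transform g(z)=(z-\sqrt{z^2-4})/2 is known explicitly:

```python
import numpy as np

def eta(b, coeffs):
    """The completely positive map eta(b) = sum_i a_i b a_i^*."""
    return sum(a @ b @ a.conj().T for a in coeffs)

def solve_dyson(z, coeffs, iters=2000):
    """Fixed-point iteration for z G = 1 + eta(G) G, rewritten as
    G = (z 1 - eta(G))^{-1}; the solution lives in the matricial
    lower half-plane."""
    N = coeffs[0].shape[0]
    I = np.eye(N)
    G = -1j * I                     # start inside the lower half-plane
    for _ in range(iters):
        G = np.linalg.inv(z * I - eta(G, coeffs))
    return G

# Sanity check: N = 1 and a_1 = 1 gives a single semicircular element,
# with g(z) = (z - sqrt(z^2 - 4))/2, so g(2i) = (1 - sqrt(2)) i.
G = solve_dyson(2j, [np.eye(1)])
print(G[0, 0])                      # approx -0.41421356j
```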

So it seems, problem solved: Calculate the Cauchy transform and use the Stieltjes inversion formula to get the mass of the distribution at zero. We can even make this very concrete; what we need to consider is

\theta(y):= - y \Im g(iy)

in the limit where the positive real number y tends to zero. This limit is the mass of the atom at zero.
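As a sketch of how this works numerically (on my own toy example, not one from the paper): for a_1=e_{11} we have S=\mathrm{diag}(s_1,0), whose distribution is half a semicircle plus an atom of mass 1/2 at zero, so \theta(y) should tend to 1/2. Combining the fixed-point iteration for the Dyson equation with the formula above:

```python
import numpy as np

def eta(b, coeffs):
    """The completely positive map eta(b) = sum_i a_i b a_i^*."""
    return sum(a @ b @ a.conj().T for a in coeffs)

def theta(y, coeffs, iters=20000):
    """theta(y) = -y * Im g(iy), with g = Tr(G)/N from the Dyson equation."""
    N = coeffs[0].shape[0]
    I = np.eye(N)
    G = -1j * I
    for _ in range(iters):
        G = np.linalg.inv(1j * y * I - eta(G, coeffs))
    g = np.trace(G) / N
    return -y * g.imag

# a1 = e11 gives S = diag(s1, 0), whose distribution is
# (semicircle + delta_0)/2: an atom of mass 1/2 at zero.
a1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
for y in (0.1, 0.05, 0.01):
    print(y, theta(y, [a1]))        # tends to 0.5 as y -> 0
```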

This is nice, but of course we cannot calculate anything analytically here; we have to rely on numerical approximations. So the question is: for which y, and to which accuracy, should we calculate \theta(y) in order to make a justified statement about the mass of the atom? And this is where things get tricky, but also nice.

Consider, for example, the matrix-valued semicircular element

\begin{pmatrix} s_1 + 2s_4& s_1 + s_3 + s_4& s_1 - s_2 + s_4& s_1\\ s_1 + s_3 + s_4& s_1 + s_3& s_1 - s_2 + s_3& s_1 + s_3 - s_4\\ s_1 - s_2 + s_4& s_1 - s_2 + s_3& s_1 - 2s_2& s_1 - s_2 - s_4\\ s_1& s_1 + s_3 - s_4& s_1 - s_2 - s_4& s_1 - 2s_4 \end{pmatrix}

If you want to know how we can get out of our machinery that the atom at zero of its distribution has mass 0.5 (that is, its noncommutative rank is 2), have a look at the revised version of our paper!
