“Structured random matrices and cyclic cumulants: A free probability approach” by Bernard and Hruza

The evolution of some papers of Bernard and Hruza in the context of the “Quantum Symmetric Simple Exclusion Process” (QSSEP) has been addressed before in some posts, here and here. Now there is a new version of the latest preprint, with the more precise title “Structured random matrices and cyclic cumulants: A free probability approach”. I think the connection of the considered class of random matrices with our free probability tools (in particular, the operator-valued ones) is getting nicer and nicer. One of the new additions in this version is the proof that applying non-linear functions entrywise to the matrices (as is usually done in the machine learning context) does not lead out of this class, and that one can actually describe the effect of such an application.

I consider all this as a very interesting new development which connects to many things, and I will try to describe this a bit more concretely in the following.

For the usual unitarily invariant matrix ensembles we know that the main information about the entries which contributes in the limit to the matrix distribution is given by the cyclic classical cumulants (or “loop expectation values”). Those are cumulants of the entries with cyclic index structure c(x_{i_1,i_2},x_{i_2,i_3},\dots,x_{i_n,i_1}) – and, for the unitarily invariant ensembles, the asymptotics of those cumulants does not depend on the chosen indices i_1,i_2,\dots,i_n. Actually, in the limit, with the right scaling, those give the free cumulants \kappa_n of our limiting matrix distribution. The idea of Bernard and Hruza is now to extend this to an inhomogeneous setting, where the indices matter and thus the limiting information is given by “local” free cumulants \kappa_n(t_1,\dots,t_n), where the t_k are the limits of i_k/N. For the operator-valued aficionado this has the flavor of operator-valued cumulants (over the diagonals), and this is indeed the case, as shown in the paper. For the case of inhomogeneous Gaussian matrices, where the limits are given by operator-valued semicircular elements, such results are not new – they go back to the work of Dima Shlyakhtenko on band matrices and constitute one of our most beloved indications of the power of operator-valued free probability. The paper of Bernard and Hruza goes far beyond the semicircular case and opens a very interesting direction. The fact that this is motivated by problems in statistical physics makes this even more exciting. Of course, there are many questions arising from this, on which we should follow up. In particular: is there a good version of this not only over diagonal matrices; or is there a relation with the notion of R-cyclic distributions, which I introduced with Andu and Dima a while ago in this paper?

As I mentioned at the beginning, I am also intrigued by the effect of applying non-linear functions to the entries of our matrices. This is something we usually don’t do in “classical” random matrix theory, but which has become quite popular in recent years in the machine learning context. There are statements which go under the name of Gaussian equivalence principle, which say that the effect of the non-linearity is in many cases the same as adding independent Gaussian random noise. Usually this is shown for special random matrices, like products of Wishart matrices. However, it seems to be true more generally; I was convinced that it is valid for unitarily invariant matrices, but the paper of Bernard and Hruza shows that even in their more general setting one has results of this type.

In order to give a concrete feeling of what I am talking about here, let me give a concrete numerical example for the unitarily invariant case. Actually, I prefer here the real setting, i.e., orthogonally invariant matrices. For me, canonical examples of those are given by polynomials in independent GOE matrices, so let us consider here X_N=A_N^2+A_N+B_NA_N+A_NB_N+B_N, where A_N and B_N are independent GOE. The following plot shows the histogram of the eigenvalues of one realization of X_N for N=5000.

We apply now a non-linear function to the entries of X_N. Actually, one has to be a bit careful about what this means; in particular, one needs the right scaling and should not act naively on the diagonal entries, as these would otherwise dominate everything. Here is the precise definition of the entries of the new matrix Y_N:

y_{ij}=\begin{cases} \frac 1{\sqrt{N}} f(\sqrt{N} x_{ij}), & i\not=j\\ 0,& i=j.\end{cases}

For the function f, let’s take the machine learning favorite, namely the ReLU function, i.e., f(x)=\text{ReLU}(x):=\max(0,x). Here is the histogram of the eigenvalues of the corresponding matrix Y_N.

But now the Gaussian equivalence principle says that this Y_N should asymptotically have the same distribution as the original matrix perturbed by a noise, i.e., as \theta_1 X_N+\theta_2 Z_N, where Z_N is a GOE matrix, independent from X_N, and where in our case \theta_1=1/2 and \theta_2=\sqrt{5(1-2/\pi)}/2. The following plot superimposes the eigenvalue distribution of \theta_1 X_N+\theta_2 Z_N on the preceding plot for Y_N. Looks convincing, doesn’t it?
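For readers who want to play with this themselves, here is a minimal numpy sketch of the experiment (my own reconstruction of the setup described above, not code from the paper; the matrix size and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # smaller than the N=5000 of the plots, for speed

def goe(n):
    # GOE matrix, normalized so that its spectrum converges to the semicircle on [-2, 2]
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2 * n)

A, B = goe(N), goe(N)
X = A @ A + A + B @ A + A @ B + B

# entrywise ReLU with the scaling from above; the diagonal is set to zero
Y = np.maximum(np.sqrt(N) * X, 0) / np.sqrt(N)
np.fill_diagonal(Y, 0)

# Gaussian equivalence: theta1 * X + theta2 * Z with an independent GOE matrix Z
theta1, theta2 = 1 / 2, np.sqrt(5 * (1 - 2 / np.pi)) / 2
W = theta1 * X + theta2 * goe(N)

# the (bulk) histograms of np.linalg.eigvalsh(Y) and np.linalg.eigvalsh(W)
# should then look alike, e.g. when plotted with matplotlib
```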

One should note that the ReLU function does not fall into the class of (polynomial or analytic) functions for which the results are usually proved, but ReLU is explicit enough that one could probably also treat this case more directly. Anyhow, all this should just be an appetizer; there is still quite a bit to formulate and prove in this direction, but I am looking forward to a formidable feast in the end.

Berkeley’s Probabilistic Operator Algebra Seminar is Back

Voiculescu’s online seminar on probabilistic operator algebra is running again, for the spring term, but now with a different time; it’s not on Mondays anymore, but on Tuesdays. For more info and details about the upcoming talks, see here. In particular, to get the zoom links for the talks, one should write to Jorge (jgarzav_at_caltech_dot_edu).

Actually, the next talk, on January 30 at 10:30 a.m. Pacific time (which is 7:30 p.m. German time), will be given by myself. It will be on Bi-free probability and reflection positivity and is based on my small note on the arXiv. Most of the talk will be about my (not very deep) understanding of the notion of reflection positivity, which has quite some relevance in quantum field theory and statistical physics. It seems to me that in the bi-free extension of free probability we have a canonical candidate for a reflection, namely the exchange between the two (left and right) faces, and it might be worth investigating positivity questions in this context.

Free probability, between math and physics (and also machine learning) – some updates

In the recent post of a similar title I mentioned some papers which related physics problems (eigenstate thermalization hypothesis or Open Quantum SSEP) with free probability. Let me point out that the title of the preprint by Hruza and Bernard has been changed to “Coherent Fluctuations in Noisy Mesoscopic Systems, the Open Quantum SSEP and Free Probability” and that there are some new and follow-up preprints in this direction, namely “Spectrum of subblocks of structured random matrices: A free probability approach”, by Bernard and Hruza, and also “Designs via free probability”, by Fava, Kurchan, and Pappalardi. In all of them, free cumulants and their relations to random matrices play an important role. Not too surprisingly, I find this very interesting in general, but also in particular, as during my voyage in the machine learning world I became a bit obsessed with the fact that free cumulants are given by the leading order of classical cumulants of the entries of unitarily invariant matrix ensembles (with a “cyclic” or “loop” structure of the indices). This seems to be highly relevant – though, at the moment, I am not actually sure for what exactly.

Anyhow, if anybody is interested in this: in the last lecture of my machine learning course I give a very high-level survey of these relations, and in the video on the Gaussian equivalence principle in the same course I talk about a more concrete model of this in the random features model context.

Free Probability, Random Matrices and Machine Learning

Update (August 14): The lecture notes of the course on “High-Dimensional Analysis: Random Matrices and Machine Learning” are now available.

My activity on this blog has been quite low for a while, so it might be good to give a sign of life and let you know that I still intend to keep the blog running, and hopefully to be more active again in the future.

Lately, I have become interested in machine learning, mostly from the point of view of random matrices and free probability. I am trying to get some idea of what is going on there on a mathematical and conceptual level – mostly from the perspective that neural networks should give some inspiration for interesting questions on, and extensions of, random matrix and free probability theory.

The best way to learn a subject is to teach it, and that’s what I did. I just finished here in Saarbrücken a lecture series on “High-Dimensional Analysis: Random Matrices and Machine Learning”. I have created a blog page for this course; the lectures were recorded and will be uploaded bit by bit to the corresponding YouTube playlist.

I hope to have more to say here on those topics in the (hopefully not too far) future. I know that quite a few people from our community also have some interest (maybe even already some work) in this direction. Guest blogs on such topics are highly welcome!

Black hole, replica trick, de Finetti and free probability

(guest post by Jinzhao Wang)

Hawking famously showed that black holes radiate just like a blackbody. Behaving like a thermodynamic object, a black hole has an entropy worth a quarter of its area (in units of the Planck area), which is now known as the Bekenstein-Hawking (BH) entropy. Through several thought experiments, Bekenstein already reached this conclusion up to the 1/4 prefactor before Hawking’s calculation, and he also reasoned that a black hole is the most entropic object in the universe in the sense that one cannot pack entropy more efficiently in a region bounded by the same area with the same mass than a black hole does. This is known as the Bekenstein bound. As for Hawking, the BH entropy can be deduced from the gravity partition function computed using the gravitational path integral (GPI), just like how entropy is derived from the partition function in statistical physics. 

However, Hawking’s discovery led to a new problem that put a fundamental principle of physics in question: that the information carried by a closed system cannot be destroyed under evolution. This cherished principle is respected by unitarity in quantum theory. On the other hand, since the radiation has a relatively featureless thermal spectrum, it cannot preserve all the information a star contains before it collapses into a black hole, nor the information carried by the objects that later fall into it. If the radiation is all there is after the complete evaporation, the information is apparently lost. If we are not willing to give up our well-established theories, one way out is to speculate that the black hole never really evaporates away, but somehow stops evaporating and becomes a long-living remnant once it has shrunk to the Planckian size. All the entropy production is then due to the correlation with the remaining black hole. While this could be plausible, there is already tension long before the black hole approaches the end of its life. If we examine the radiation entropy after the black hole passes its half-life, it keeps rising according to Hawking, and it has to be attributed to the correlation with the remaining black hole. This means that the middle-aged black hole has to be as entropic as the radiation, but this is impossible without violating the Bekenstein bound. In fact, Page famously argued that if we suppose a black hole indeed operates with some unknown unitary evolution, then typically the radiation entropy should start to go down at its half-life, in contrast to Hawking’s calculation. We refer to this tension past the Page time as the entropic information puzzle. The challenge is to derive the entropy curve that Page predicted, i.e. the Page curve, using a first-principle gravity calculation.

Recently, significant progress (see here and here) has been made to resolve the entropic information puzzle (cf. this review article and the references therein). The entropy of radiation is calculated in semiclassical gravity with the GPI à la Hawking, and the Page curve is derived. Remarkably and unexpectedly, the Page curve can be obtained in the semiclassical regime without postulating radical new physics. The new ingredient is the replica trick, which essentially probes the radiation spectrum with many copies of the black hole. The idea is that we’d like to compute all the moments of the radiation density matrix \mathrm{Tr}\rho^n=\mathrm{Tr}\rho^{\otimes n}\eta_n, where we rewrite the moment as the expectation value of the n-fold swap operator \eta_n on n identical and independent replicas of the state \rho. The trouble is that we don’t know explicitly what \rho is; rather, our current understanding of quantum gravity only allows us to describe the moments we’d like to compute implicitly in terms of an n-replica partition function with appropriately chosen boundary conditions:

\langle\eta_n\rangle_\mathrm{b.c.}\ \stackrel{!}{=}\ \mathrm{Tr}\rho^{\otimes n}\eta_n

where the LHS is what we really compute in gravity and we postulate on the RHS that this partition function gives the moments of \rho that we want.
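The identity \mathrm{Tr}\rho^n=\mathrm{Tr}\rho^{\otimes n}\eta_n on the RHS is itself a simple piece of linear algebra, independent of any gravity input, and can be checked directly in finite dimensions; a small numpy sketch of this (my own illustration, with an arbitrary dimension d):

```python
import numpy as np

d, n = 3, 4
rng = np.random.default_rng(1)

# a random density matrix: positive semidefinite with unit trace
g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = g @ g.conj().T
rho /= np.trace(rho)

# the n-fold cyclic swap eta_n on (C^d)^{otimes n}: |j_1 ... j_n> -> |j_2, ..., j_n, j_1>
eta = np.zeros((d,) * (2 * n))
for j in np.ndindex(*(d,) * n):
    eta[j[1:] + j[:1] + j] = 1.0  # row index is the cyclic shift of the column index
eta = eta.reshape(d ** n, d ** n)

# n independent replicas of rho
rho_reps = rho
for _ in range(n - 1):
    rho_reps = np.kron(rho_reps, rho)

lhs = np.trace(rho_reps @ eta)                  # Tr(rho^{otimes n} eta_n)
rhs = np.trace(np.linalg.matrix_power(rho, n))  # Tr(rho^n)
# lhs and rhs agree up to floating point error
```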

To evaluate \langle\eta_n\rangle_\mathrm{b.c.}, the GPI sums over all legit configurations, such as all sorts of metrics, topologies, and matter fields, consistent with the given boundary conditions. In particular, new geometric configurations show up and modify Hawking’s result. These geometries connect different replicas and are called replica wormholes. Since Hawking only ever considered a single black hole scenario, he missed these wormhole contributions in his calculation. In practice, performing the GPI over all the wormhole configurations can be technically difficult and one needs to resort to some simplifications and approximations. For the entropy calculation, one often drops all the wormholes but the maximally symmetric one that connects all the replicas. This approximation leads to a handy formula, called the island formula, for computing the radiation entropy and thus the Page curve. However, we should keep in mind that sometimes this approximation can be bad and the island formula needs a large correction. It would be interesting to see when and how this happens.

Fortunately, there is a toy model of an evaporating black hole due to Penington-Shenker-Stanford-Yang (PSSY), in which one can resolve the spectrum of the radiation density matrix without compromise. This model simplifies the technical setup as much as possible while still keeping the essence of the entropic information puzzle. This recent paper computes the radiation entropy by implementing the full GPI and identifies the large corrections to the commonly used island formula. Interestingly, the key ingredient is free probability. The GPI becomes tractable after being translated into the free probabilistic language. Here we summarise the main ideas. In the replica trick GPI, the wormholes are organized by the non-crossing partitions. Feynman taught us to sum over all contributions, weighted by the exponential of the gravity action evaluated on the wormholes (the wormhole contributions). The resulting n-replica partition function (i.e. the nth moment of \rho) is then equal to a sum over the wormhole contributions, matching exactly the free moment-cumulant relation. Therefore, the wormhole contributions shall be treated as free cumulants. Furthermore, the matter field propagating on a particular wormhole configuration (labeled by \pi) is organized by the Kreweras complement of \pi. Together, the total contribution to the n-replica partition function from both the wormholes and the matter field on them amounts to a free multiplicative convolution. It is a convolution between two implicit probability distributions encoding the quantum information from the gravity sector and the matter sector. With this observation, one can then evaluate the free multiplicative convolution using tools from free harmonic analysis to resolve the radiation spectrum and thus the Page curve.
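The free moment-cumulant relation invoked here says that the n-th free moment is a sum over non-crossing partitions of products of free cumulants, m_n = \sum_{\pi\in NC(n)}\prod_{V\in\pi}\kappa_{|V|}. A brute-force sketch of this relation (standard free probability, nothing specific to the gravity setting):

```python
from itertools import combinations
from math import prod

def set_partitions(elems):
    # all set partitions of a list, by inserting the first element everywhere
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in set_partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def blocks_cross(x, y):
    # blocks x, y cross iff there are a < b < c < d with a, c in one and b, d in the other
    return any(a < b < c < d for a, c in combinations(sorted(x), 2)
                             for b, d in combinations(sorted(y), 2)) or \
           any(a < b < c < d for a, c in combinations(sorted(y), 2)
                             for b, d in combinations(sorted(x), 2))

def free_moment(n, kappa):
    # m_n = sum over non-crossing partitions pi of prod over blocks V of kappa_{|V|}
    return sum(prod(kappa.get(len(block), 0) for block in p)
               for p in set_partitions(list(range(n)))
               if not any(blocks_cross(x, y) for x, y in combinations(p, 2)))

# semicircle: kappa_2 = 1, all other free cumulants 0 -> the even moments are Catalan numbers
print([free_moment(n, {2: 1}) for n in range(1, 7)])  # [0, 1, 0, 2, 0, 5]
```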

The figure illustrates a typical configuration for 13 replica black holes. The arrowed solid lines with length \beta indicate the boundary conditions that prepare each black hole in a thermal state of temperature 1/\beta; the dashed lines indicate the radiation quanta and they are cyclically connected to implement the observable \eta_{13}. Combinatorially, any wormhole configuration (in black) corresponds to a non-crossing partition and the configuration of the matter fields (in red) corresponds to the corresponding Kreweras complement.

By modeling the free random variables with random matrices, we can go one step further and deduce from the convolution that the radiation spectrum matches the one obtained from a generalized version of Page’s model. Therefore, we really start from a first-principle gravity calculation to address the challenge that Page posed, and free probability helps to make this connection clear and precise in the context of the PSSY model. What remains to be understood is why freeness is relevant here in the first place. To what extent is free probability useful in quantum gravity, and is there a natural reason for freeness to emerge? Free probability already has applications in quantum many-body physics (cf. the previous post). If we think of quantum gravity as a quantum many-body problem, some aspects of it can be understood in terms of random tensor networks. This viewpoint has been very successful in the context of the AdS/CFT correspondence. In this view, freeness can plausibly be ubiquitous in gravity thanks to the random tensors. Another hint comes from concrete quantum mechanical models such as the SYK model, from which simple gravity theories can emerge in the low-energy regime. The earlier work of Pluma and Speicher drew the connection between the double-scaled SYK model and the q-Brownian motion. Perhaps quantum gravity is calling for new species of non-commutative probability theories that differ from the usual quantum theory.

There is a subtle logical inconsistency that we should address. The postulate above, \langle\eta_n\rangle_\mathrm{b.c.}\ \stackrel{!}{=}\ \mathrm{Tr}\rho^{\otimes n}\eta_n, is not exactly correct. The free convolution indicates that the radiation spectrum obtained is generically continuous, suggesting that we are dealing with a random radiation density matrix. Hence, in hindsight, it’s more appropriate to say that the GPI is computing the expected n-th moment \mathbb{E}\mathrm{Tr}\rho^{\otimes n}\eta_n. However, this is puzzling because the extra ensemble average \mathbb{E} radically violates the usual Born rule in quantum physics.

In fact, we can give a proper physical explanation within the standard quantum theory. This was pointed out in this other recent paper, leveraging the power of the quantum de Finetti theorem. The key is to make a weaker postulate: the implicit state upon which we evaluate \eta_n should be correlated, rather than independent, among the n replicas. Let’s denote this joint radiation state as \rho^{(n)}, which may not be a product state \rho^{\otimes n} as postulated above. This is because in the GPI \langle\eta_n\rangle_\mathrm{b.c.} one only imposes the boundary conditions, so gravity could automatically correlate the state in the bulk even if one means to prepare the replicas independently. It is hence too strong to postulate that the joint radiation state implicitly defined via the boundary conditions has the product form \rho^{\otimes n}. Nonetheless, this joint state \rho^{(n)} should still be permutation-invariant, because we don’t distinguish the replicas. Even better, it should also be exchangeable (meaning that the quantum state can be treated as a marginal of a larger permutation-invariant state), because we can in principle consider an infinite number of replicas and randomly sample n of them to evaluate \eta_n. This allows us to invoke the de Finetti theorem to deduce that the joint radiation state on n replicas is a convex combination of identical and independent replica states, \rho^{(n)}=\int d\mu(\rho)\rho^{\otimes n}, with some probability measure \mu(\rho):

\langle\eta_n\rangle_\mathrm{b.c.}\ \stackrel{!}{=}\ \mathrm{Tr}\rho^{(n)}\eta_n = \int d\mu(\rho)\mathrm{Tr}\rho^{\otimes n}\eta_n.

The de Finetti theorem thus naturally brings in an ensemble average \int d\mu(\rho) that is consistent with the result of the free convolution calculation. Interestingly, one can further show that the de Finetti theorem implies that the replica trick really computes the regularized entropy \lim_{n\to\infty}S(R_1\ldots R_n)/n, i.e. the averaged radiation entropy of infinitely many replica black holes. This is to be contrasted with the radiation entropy of a single black hole, which can be much bigger because of the uncertainty in the measure \mu(\rho). The latter is closer to what Hawking calculated, and he was wrong because the entropy contribution due to the probability measure \mu(\rho) is not what we are after. This ensemble reflects that our theory of quantum gravity is too incomplete to correctly pin down the exact description of the radiation, but we really shouldn’t attribute this uncertainty to the physical entropy of the radiation. The gist of the replica trick is that, with many copies, the contribution from \mu(\rho) is averaged out in the regularized entropy because it doesn’t scale with the number of replicas. Therefore, when someone actually goes out and operationally measures the radiation entropy, she has to prepare many copies of the black hole and sample them to deduce the measurement statistics, just like for measuring any quantum observable. Then she will find herself dealing with a de Finetti state, where \mu(\rho) acts like a Bayesian prior that reflects our ignorance of the fundamental theory. Nonetheless, the measurement shall reveal the truth and help update the prior to peak at some particular \rho. Hence, operationally, the entropy measured should never depend on how uncertain the prior is. This is perhaps a better explanation of what Hawking did wrong. These conceptual issues are now clarified thanks to the wisdom of de Finetti.

Free probability, between maths and physics

The fun of free probability is that, just when you think you have seen everything in the subject, suddenly new exciting connections pop up. This happened, for example, a few months ago with the preprints

Eigenstate Thermalization Hypothesis and Free Probability, by Silvia Pappalardi, Laura Foini, and Jorge Kurchan

and

Dynamics of Fluctuations in the Open Quantum SSEP and Free Probability, by Ludwig Hruza and Denis Bernard

According to the authors, the occurrence of free probability in both problems has a similar origin: the coarse-graining at microscopic spatial or energy scales, and the unitary invariance at these microscopic scales. Thus the use of free probability tools promises to be ubiquitous in chaotic or noisy many-body quantum systems.

I still have to take a closer look at these connections, and thus I am very excited that there will be a great opportunity for learning more about this (and other connections) and discussing it with the authors at a special day at the IHP, Paris, on 25 January 2023. This is part of the two-day conference “Inhomogeneous Random Structures”.

Wednesday 25 January: Free probability, between maths and physics.
Moderator: Jorge Kurchan (Paris)

Free probability is a flourishing field in probability theory. It deals with non-commutative random variables where one introduces the concept of «freeness» in analogy to «independence» of commuting random variables. On the mathematical side, it has given new tools and a deeper insight into, amongst others, the field of random matrices. On the physics side, it has recently appeared naturally in the context of quantum chaos, where all its implications have not yet been fully worked out.

Speakers: Denis Bernard (Paris), Jean-Philippe Bouchaud (Paris), Laura Foini (Saclay), Alice Guionnet (Lyon), Frederic Patras (Nice), Marc Potters (Paris), Roland Speicher (Saarbrücken)

Another update on the q-Gaussians

There is presently quite some activity around the q-Gaussians, about which I talked in my last post. Tomorrow (i.e., on Monday, April 25) there will be another talk in the UC Berkeley Probabilistic Operator Algebra Seminar on this topic. Mario Klisse from TU Delft will speak on his joint paper On the isomorphism class of q-Gaussian C∗-algebras for infinite variables with Matthijs Borst, Martijn Caspers and Mateusz Wasilewski. Whereas my paper with Akihiro deals only with the finite-dimensional case (and I do not see how to extend this to infinite d), they deal with the infinite-dimensional case, and, quite surprisingly, they have a non-isomorphism result: namely, that the C*-algebras for q=0 and for other q are not isomorphic. This makes the question for the von Neumann algebras even more interesting. It could still be that the von Neumann algebras are isomorphic, but then for a reason which does not work for the C*-algebras – this would be in contrast to the isomorphism results of Alice and Dima, which show the isomorphism of the von Neumann algebras (for finite d and for small q) by actually showing that the C*-algebras are isomorphic.

I am looking forward to the talk and hope that afterwards I have a better idea what is going on – so stay tuned for further updates.

A dual and a conjugate system for the q-Gaussians, for all q

Update: On Monday, April 4, I will give an online talk on those results at the UC Berkeley Probabilistic Operator Algebra Seminar.

I have just uploaded the joint paper A dual and conjugate system for q-Gaussians for all q with Akihiro Miyagawa to the arXiv. There we report some new results concerning the q-Gaussian operators and von Neumann algebras. The interesting issue is that we can prove quite a few properties in a uniform way for all q in the open interval -1<q<1.

The canonical commutation and anti-commutation relations are fundamental relations describing bosons and fermions, respectively. In 1991, Marek Bożejko and I considered an interpolation between those bosonic and fermionic relations, depending on a parameter q with -1\le q \le 1 (where q=1 corresponds to the bosonic case and q=-1 to the fermionic case): a_ia_j^*-q a_j^* a_i=\delta_{ij} 1. These relations can be represented by creation and annihilation operators on a q-deformed Fock space. (Showing that the q-deformed inner product which makes a_i and a_i^* adjoints of each other is indeed an inner product, i.e. positive, was one of the main results in my paper with Marek.) In the paper with Akihiro we consider only the case where the number d of indices is finite.

Since then, studying the q-Gaussians A_i=a_i+a_i^* has attracted quite some interest. Especially the q-Gaussian von Neumann algebras, i.e., the von Neumann algebras generated by the A_i, have been studied for many years. One of the basic questions is whether and how those algebras depend on q. The extreme cases q=1 (bosonic) and q=-1 (fermionic) are easy to understand, and they are in any case different from the other q in the open interval -1<q<1. In the central case q=0, the algebra is generated by free semicircular elements, and free probability tools then easily give that it is isomorphic to the free group factor.
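To get a concrete feeling for the interpolation: with respect to the vacuum state, the moments of a single q-Gaussian A=a+a^* are given by the well-known formula \tau(A^{2n})=\sum_\pi q^{\mathrm{cr}(\pi)}, where the sum runs over all pair partitions \pi of 2n points and \mathrm{cr}(\pi) is the usual number of crossings. A brute-force check of this interpolation (my own sketch, not code related to the paper):

```python
from itertools import combinations

def pairings(elems):
    # all pair partitions, pairing the first element with each possible partner
    if not elems:
        yield []
        return
    first = elems[0]
    for i in range(1, len(elems)):
        for rest in pairings(elems[1:i] + elems[i + 1:]):
            yield [(first, elems[i])] + rest

def crossings(p):
    # pairs (a, c) and (b, d) cross iff a < b < c < d (or with the roles exchanged)
    return sum(1 for (a, c), (b, d) in combinations(p, 2)
               if a < b < c < d or b < a < d < c)

def q_moment(n, q):
    # tau(A^n) for a standard q-Gaussian A; odd moments vanish
    if n % 2:
        return 0
    return sum(q ** crossings(p) for p in pairings(list(range(n))))

# q = 0: Catalan numbers (free case); q = 1: Gaussian moments (2n-1)!!
print(q_moment(4, 0), q_moment(6, 0))  # 2 5
print(q_moment(4, 1), q_moment(6, 1))  # 3 15
```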

So the main question is whether the q-Gaussian algebras are, for -1<q<1, isomorphic to the free group factor. Over the years it has been shown that these algebras share many properties with the free group factors. For instance, for all -1<q<1 the q-Gaussian algebras are II1-factors, non-injective, prime, and have strong solidity. A partial answer to the isomorphism problem was achieved in the breakthrough paper by Guionnet and Shlyakhtenko, who proved that the q-Gaussian algebras are isomorphic to the free group factors for small |q| (where the size of the interval depends on d and goes to zero for d\to\infty). However, it is still open whether this is true for all -1<q<1.

In our new paper, we compute a dual system and from this also a conjugate system for q-Gaussians. These notions were introduced by Voiculescu in the context of free entropy and have turned out to carry important information about distributional properties of the considered operators and to have many implications for the generated von Neumann algebras.

Our approach starts by finding a concrete formula for dual systems; those are operators whose commutators with the q-Gaussians are exactly the orthogonal projection onto the vacuum vector. If we also normalize such dual operators by requiring that they vanish on the vacuum vector, then the commutator relation gives a recursion, which can be solved in terms of a precise combinatorial formula involving partitions and their number of crossings, where the latter, however, has to be counted in a specific way, different from the usual one. The main work consists then in showing that the dual operators given in this way do indeed have the vacuum vector in the domain of their adjoints. The action of the adjoints of the dual operators on the vacuum then gives, by general results going back to Voiculescu and Shlyakhtenko, the conjugate variables.

One should note that whereas the action of the dual operators on elements in the m-particle space is given by finite sums, going over to the adjoints necessarily results, even for their action on the vacuum, in infinite sums, i.e., power series expansions. Thus it is crucial to control the convergence of such series in order to get the existence of the conjugate variables. There had been results before on the existence of conjugate variables for the q-Gaussians, by Dabrowski, but those relied on power series expansions which involved coefficients of the form q^m for elements in the m-particle space and thus guaranteed convergence only for small q. In contrast, the precise combinatorial formulas in our work lead to power series expansions which involve coefficients of the form q^{m(m-1)/2}. This quadratic exponent is in the end responsible for the fact that our power series expansions converge for all q in the interval (-1,1).

The existence of conjugate systems for all q with -1<q<1 has then, by previous general results, many consequences for all such q (some of which had been known only for a restricted interval of q, some for all q by other methods). We can actually improve on the existence of the conjugate system and show that it satisfies a stronger condition, known as the Lipschitz property. This implies then, by general results of Dabrowski, the maximality of the microstates free entropy dimension of the q-Gaussian operators in the whole interval (-1,1).

Unfortunately, we are not able to use our results for adding anything to the isomorphism problem. However, the fact that the free entropy dimension is maximal for all q in the whole interval is another strong indication that they might all be isomorphic to the free group factor.

Topological Recursion Meets Free Probability

Before I get too lazy and just re-post here information about summer schools or postdoc positions, I should of course also come back to the core of our business, namely making progress on our main questions and getting excited about it. There are actually two recent developments about which I am quite excited. Here is the first one; the second will come in the next post.

During the last few years there has been an increasing belief that free probability (at least its higher-order versions) and the theory of topological recursion should be related, maybe even just two sides of the same coin. So our communities started to have closer contacts: I started a project on this in our transregional collaborative research centre (SFB-TRR) 195, we had summer schools (here in Tübingen in 2018) and workshops (here in Münster in 2021) on possible interactions, and finally there was the breakthrough paper Analytic theory of higher order free cumulants by Gaëtan Borot, Séverin Charbonnier, Elba Garcia-Failde, Felix Leid, and Sergey Shadrin. This paper achieves, among other things, the solution to two of our big problems or dreams, namely:

  • Rewrite the combinatorial moment-cumulant relations into functional relations between the generating power series; for first order this was done by Voiculescu’s famous formula relating the Cauchy transform and the R-transform, going back to the beginnings of free probability in the 80s; for second order this was one of the main results in my paper with Benoit, Jamie and Piotr from 2007. For higher orders, however, this was wide open – and its amazing solution can now be found in the mentioned paper.
  • Is our theory of free probability only the planar (genus 0) sector of a more general theory which takes all genera into account? This is actually the idea of topological recursion: that one should consider all orders and genera and look for relations among them. I have to admit that I was always quite skeptical about defining the notion of freeness for non-planar situations – but it seems that the paper at hand provides a consistent theory for doing so, apparently also putting the notion of infinitesimal freeness into this setting.
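For the first-order case mentioned in the first point, Voiculescu’s relation states that K(z)=1/z+R(z) is the inverse of the Cauchy transform G, i.e. K(G(z))=z. A quick numerical illustration with the standard semicircle distribution, for which R(z)=z:

```python
import numpy as np

def G(z):
    # Cauchy transform of the standard semicircle law on [-2, 2]
    return (z - np.sqrt(z * z - 4)) / 2

def K(z):
    # K(z) = 1/z + R(z), with R(z) = z for the standard semicircle
    return 1 / z + z

for z in (2.5, 3.0, 5.0):
    # Voiculescu's relation: K(G(z)) = z
    print(z, K(G(z)))
```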

Instead of having me mumble more about all this, you might go right away to the paper and read its introduction to get a more precise idea of what this is all about and what actually is proved.

Let me also add that there is another interesting preprint, On the xy Symmetry of Correlators in Topological Recursion via Loop Insertion Operator by Alexander Hock, which also addresses the functional relations between moments and free cumulants in the g=0 case.

Postdoc position on “Integrable Probability” with Alexey Bufetov at Leipzig University

Within the Institute of Mathematics of Leipzig University, Professor Alexey Bufetov is looking to fill a postdoctoral research position for up to three years, starting in Autumn 2022. The position is supported by the ERC Starting Grant 2021 “Integrable Probability”.

The research focus is integrable probability in the wide sense. Experience in one (or more) of the following topics might be of help:

– interacting particle systems,

– random matrices,

– models of statistical physics,

– asymptotic representation theory,

– algebraic combinatorics,

– random walks on groups.

The position carries no teaching load. The salary level is TV-L 13.
In order to apply, please do the following:

1) (Required) Send a full CV to the address bufetov@math.uni-leipzig.de

Please include the phrase “Application to a postdoctoral position” and your last name in the subject field.

2) (Optional) You might arrange for several (from one to four) recommendation letters to be sent directly to the address bufetov@math.uni-leipzig.de

All applications made before 25 March will be fully considered. Late applications will be considered if the position is still vacant.

For informal inquiries please contact bufetov@math.uni-leipzig.de