“Some estimates of norms of random matrices” and “Tensor sparsification via a bound on the spectral norm of random tensors”

Every now and again, I get it in my head to revisit Latala’s paper “Some estimates of norms of random matrices,” in which he gives a sharp bound on the expected spectral norm of a mean-zero random matrix with independent entries.
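For reference, the bound in question (as I recall it, and with the constant left unspecified) says that for a random matrix $A = (a_{ij})$ with independent mean-zero entries,

```latex
\mathbb{E}\,\|A\|
\le C \left(
  \max_i \Bigl(\sum_j \mathbb{E}\, a_{ij}^2\Bigr)^{1/2}
  + \max_j \Bigl(\sum_i \mathbb{E}\, a_{ij}^2\Bigr)^{1/2}
  + \Bigl(\sum_{i,j} \mathbb{E}\, a_{ij}^4\Bigr)^{1/4}
\right),
```

i.e., the largest row norm, the largest column norm, and a fourth-moment correction term.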

His proof is damn near incomprehensible (certainly to me), yields a bound with an unspecified universal constant, and doesn’t give moment bounds. But it’s clear that all of these problems can be gotten around. It looks like he’s using an entropy-concentration tradeoff, as Rudelson and Vershynin call it, but his proof is so convoluted that it’s hard to pin down exactly what’s going on.
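Even without untangling the proof, the shape of the bound is easy to check numerically. Here is a minimal sketch (names and the seed are my own) that compares the average spectral norm of a standard Gaussian matrix against the three terms of Latala’s bound with the constant taken to be 1; for Gaussian entries, $\mathbb{E}\,a_{ij}^2 = 1$ and $\mathbb{E}\,a_{ij}^4 = 3$, so the terms can be written in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 20

# Average spectral norm over a few i.i.d. standard Gaussian matrices.
norms = [np.linalg.norm(rng.standard_normal((n, n)), 2) for _ in range(trials)]
avg_norm = np.mean(norms)

# The three terms of Latala's bound, using E a_ij^2 = 1 and E a_ij^4 = 3:
row_term = np.sqrt(n)            # max_i (sum_j E a_ij^2)^(1/2)
col_term = np.sqrt(n)            # max_j (sum_i E a_ij^2)^(1/2)
fourth_term = (3 * n * n) ** 0.25  # (sum_ij E a_ij^4)^(1/4)

bound = row_term + col_term + fourth_term
print(f"avg spectral norm: {avg_norm:.1f}, Latala bound (C=1): {bound:.1f}")
```

For the Gaussian case the empirical norm concentrates near $2\sqrt{n}$ while the bound is roughly $3.3\sqrt{n}$, so the inequality holds comfortably even with $C = 1$.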

But whenever I get it in my head to attempt to clarify and extend his proof, I remember that Nguyen, Drineas, and Tran have already done so. In fact, they’ve extended it to bound the expected norm of mean-zero random tensors with independent entries. Unfortunately, they don’t seem to give Latala his full share of credit: they claim to be using a new approach (viz., the entropy-concentration tradeoff) due to Rudelson and Vershynin, when it seems to me that they’re revising Latala’s proof to make it more readable and making obvious extensions. This isn’t to say that the NDT paper isn’t worthwhile: I think it’s a great example of the power of Gaussian symmetrization techniques and a very approachable demonstration of the entropy-concentration tradeoff. It’s just that I think Latala should be given more credit for observing this tradeoff in the first place.

This case serves as a supporting point in my argument that it’s not enough simply to publish results. Your exposition should make clear the intuition and techniques being used, not merely present a chain of technical arguments. If Latala had spent more time refining his exposition, the NDT result would clearly be seen as a descendant of his work.