So one question I have about the introduction section is that the article seems to miss the difference between the number of Feynman diagrams needed to calculate a_n and the value of a_n itself. They point out that the number of Feynman diagrams grows like n!, which is much faster than the rate at which x^n shrinks (given 0<x<1). But if the a_n calculated from those Feynman diagrams doesn't grow in proportion to the number of Feynman diagrams, then it is still entirely possible for x^n to shrink faster than a_n grows.
Based on my limited knowledge of particle physics, physicists are currently able to calculate using Feynman diagrams because a_n grows more slowly than x^n shrinks. There are some equations (I think dealing with specific forces/fields) where the coupling constant is larger than others, which makes the calculations much harder: x ~= 0.7 shrinks much more slowly than x ~= 0.007. Yet even then the general trend does hold, and it does allow for making calculations which can then be tested against experimental data.
What we find is that our calculations do match the experimental data. It isn't a perfect match; there is room for error and confidence intervals and such. The important point is that what this article suggests doesn't seem to happen. If at some point a_n grew much faster than x^n shrank, then the real-world solution would diverge and our answer from calculating n out to 5 wouldn't closely match the data. It almost sounds like the article is suggesting things will diverge only once we calculate out to n>100 or so, but reality doesn't wait for those calculations. If this problem really existed, it would show up, because reality is effectively calculating n all the way out to infinity even if physicists cannot.
So I'm left with two possible conclusions.
1. The article is misunderstanding the relationship between the number of Feynman diagrams needed to calculate a_n and a_n itself.
2. The real critique is that the current model is wrong because the model diverges, not that reality itself diverges. Thus while this model approximates well everything we currently calculate, it is inherently wrong.
The second possibility is an interesting idea: a model that looks correct, and is correct for every calculation done so far, but which may not be correct for more detailed calculations that we do not, and will not, have the computational power to test.
I only read the article halfway through, and I am no physicist, but my understanding was that this was a new mathematical method that allows one to calculate past the divergent part.
Possibly because some terms cancel out later? A bit like some limit calculations.
If that's the case, I was picturing it a bit like how imaginary numbers were initially introduced to find real-valued solutions to 3rd+ degree polynomial equations. Step into another realm, perform your transformations, and find your way back to a real solution. Laplace transforms come to mind as well; there is a plethora of such tools (Fourier, Taylor series, etc.) that allow one to express the problem in a different space.
My question was more about doubting whether the divergent part actually exists.
n!x^n will eventually diverge for 0<x<1, regardless of how small x is, but was that really the issue? I thought the problem was more a question of whether g(n)x^n would diverge when g(n) involves computing n! Feynman diagrams. It can't be automatically assumed that computing n! Feynman diagrams leads to an answer that grows comparably to n!. Or maybe it can, but I didn't see that argument being made anywhere, though I may have missed it.
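Just to make the first half of that concrete, here's a throwaway sketch assuming the worst case a_n = n!, which is exactly the assumption I'm questioning. If the coefficients really did grow like n!, the terms n!·x^n shrink at first but turn around once n passes roughly 1/x, no matter how small x is:

```python
import math

def term(n, x):
    # n-th term of the series, under the pessimistic assumption a_n = n!
    return math.factorial(n) * x ** n

for x in (0.7, 0.1, 0.007):
    # Walk forward until the terms stop shrinking.
    n = 1
    while term(n + 1, x) < term(n, x):
        n += 1
    print(f"x = {x}: terms shrink until n ~ {n} (about 1/x), then grow without bound")
```

So if a_n really does track the n! count of diagrams, divergence is guaranteed; the open question, to me, is whether cancellations keep a_n well below that.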
It is generally believed that the perturbation expansions that we see in realistic quantum field theories are what are known as asymptotic expansions. These are series that have a radius of convergence of zero (i.e. they only converge when the expansion parameter is exactly zero and diverge for all nonzero values).
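For concreteness, the standard textbook definition (with g as the expansion parameter) is:

```latex
% f(g) has the asymptotic expansion \sum_n a_n g^n as g -> 0^+ if, for every
% FIXED truncation order N, the error is of the order of the first omitted term:
\[
  f(g) \;-\; \sum_{n=0}^{N} a_n g^n \;=\; O\!\left(g^{\,N+1}\right)
  \qquad \text{as } g \to 0^{+}.
\]
% Note the order of quantifiers: N is held fixed while g -> 0.
% A convergent series would instead hold g fixed and send N -> infinity.
```

That quantifier ordering is the whole game, and it's what the two questions below turn on.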
There are then two natural questions: 1. If the perturbation series diverges, why doesn't the universe explode? And 2. If the series diverges, why can we use it at all?
Let's first talk about the first part: why doesn't the universe explode? Well, it's because the perturbation series is not actually what is going on; the real answer is the solution to the full set of equations. It's just that we're using a perturbation expansion as a crutch. It's sort of like if the universe's function is 1/(1-x) but we constantly insist on using 1+x+x^2+... Clearly the first function is completely well behaved at x=2 but the second one is not. If we notice that our series explodes at x=2 we should not immediately assume that the universe must also explode; it's just that our representation of the true physics is not faithful. This is perhaps a bad example because the series in question is convergent for some x, just not for x=2. The perturbation expansions in question are more subtle, since they never converge.
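A quick numerical sketch of that toy example, just to show the failure is in the representation rather than in the function it was meant to describe:

```python
# 1/(1-x) is perfectly finite at x = 2 (it equals -1), but its power-series
# representation 1 + x + x^2 + ... blows up there.
x = 2.0
exact = 1.0 / (1.0 - x)

s = 0.0
partial_sums = []
for n in range(20):
    s += x ** n
    partial_sums.append(s)

print("exact value: ", exact)                                  # -1.0
print("partial sums:", partial_sums[:5], "...", partial_sums[-1])
# 1, 3, 7, 15, 31, ... the partial sums race off to infinity even though
# the function they were supposed to represent is perfectly well behaved.
```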
This then leads into the second question: if the series diverges, how can we even use it? Well, the idea here is that it's not just any divergent series (like my silly example with 1/(1-x) above) but rather an asymptotic series. This means that as long as you truncate the series at some point, it is in fact reasonably close to the target function for a sufficiently small value of the parameter. It's just that the more terms you want to include, the sooner the approximation breaks down in terms of the parameter. So, if you want to include 10 terms it might be a decent approximation until x ~ 0.1, but if you include 100 terms it might only be a good approximation until x ~ 0.01. Now, within the overlapping range (x < 0.01) it's better to have 100 terms than 10 terms, so it's not like including more terms is bad in all ways. But you see the issue: if you include 1000 terms you get a better approximation for your function for values x < 0.001 than you had with 100 terms, but now your approximation breaks down much sooner. If you want to include all the terms, your approximation breaks the moment you leave the point x = 0.
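Here's a self-contained sketch of that behavior using a classic stand-in (the Stieltjes integral, not an actual QFT quantity): its asymptotic series has coefficients (-1)^n n!, so for a fixed x the truncated sum improves up to roughly N ~ 1/x and then gets worse:

```python
import math
from scipy.integrate import quad

def f(x):
    # The "exact" function: integral_0^inf e^(-t) / (1 + x*t) dt
    val, _ = quad(lambda t: math.exp(-t) / (1 + x * t), 0, math.inf)
    return val

def truncated(x, N):
    # Partial sum of its asymptotic series: sum_{n=0}^{N} (-1)^n n! x^n
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(N + 1))

x = 0.1
exact = f(x)
for N in (5, 9, 15, 20, 30):
    err = abs(truncated(x, N) - exact)
    print(f"N = {N:2d}: |truncated sum - exact| = {err:.2e}")
# The error shrinks until N is near 1/x = 10, then grows without bound:
# past the optimal truncation point, adding more terms only makes things worse.
```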
Why do we think that QFT perturbation theories generally have zero radius of convergence? Well, look at QED, the quantum theory of E&M. If the theory had any nonzero radius of convergence, that would also mean that the theory would need to make sense for negative coupling constants. However, what would E&M look like for negative coupling? Well, we'd still have electron/positron virtual pair creation from the vacuum, since the interactions of the theory are still the same. However, this time around the pairs wouldn't attract each other anymore but instead repel each other, causing an instability in the vacuum of the theory. We would just constantly be producing these particle/anti-particle pairs, and they'd form two separate clusters where all the electrons attract each other and all the positrons attract each other, but the two clusters repel. In other words, the vacuum would break. This suggests that QED with a negative coupling constant doesn't make sense, which contradicts the assumption that the radius of convergence of the perturbative expansion is nonzero. So the radius of convergence must be zero.
That's not to say that all QFTs must have zero radius of convergence, but similar arguments can (I think) be made for the type of QFTs that we actually see in nature.