Full disclosure: I am one of the authors of the paper.
Note that the blog you're citing was written a year and a half ago. It refers to a select few conjectures, and naturally has no references to the developments in the past year and a half (which were the main reason the paper got published).
Furthermore, the author of the blog didn't respond to multiple emails we sent him, attempting to discuss the actual mathematics.
So essentially, the vast majority of the criticism here is based on a single, outdated blog post by a professor (respected as he may be) who has not revisited the issues and new results since first posting it, and who has not given any mathematical argument as to why the results shown in the actual, updated paper that was published are supposedly unimportant.
Not the person you're replying to, but I admit to characterizing your paper as "garbage" in another comment thread. Since you're inviting discourse, which I greatly appreciate, I'm compelled to reply.
1) To anyone who's studied algebra, it is clear that identities of the form LHS = RHS can be obtained by a nested application of transformations and substitutions in a consistent manner.
2) Of course, arriving at a new, insightful result often involves taking mundane steps. In this case, however, the new mathematical discoveries based on your algorithm's output tableaux remain hypothetical, whereas the manuscript (and its authors) have already pocketed one of the premium accolades in science in the form of a Nature publication.
3) To drive the point above home: do you think the resulting mathematical insights themselves, without riding on the "AI" novelty aspect, would clear the bar for a Nature (or similarly high-impact) publication? To be clear, I'm not a mathematician, but I believe the answer would be no. Contrast this with another AI/ML advance published in Nature quite recently: AlphaGo. Note how the gist of that paper, superhuman performance in Go, is a self-standing achievement that merely makes use of machine learning techniques.
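To give point 1 some concrete flavor: the basic check behind this kind of identity hunting is numerically evaluating a candidate formula and matching it against a known constant. Here is a minimal Python sketch using a classical, well-known continued fraction for e (this is an old textbook identity, not one of the paper's conjectures, and the code is illustrative rather than the paper's actual method):

```python
import math
from fractions import Fraction

def e_cf_terms(n):
    # Classical simple continued fraction of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
    terms = [2]
    k = 2
    while len(terms) < n:
        terms += [1, k, 1]
        k += 2
    return terms[:n]

def evaluate_cf(terms):
    # Evaluate [a0; a1, a2, ...] exactly, from the tail inwards.
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# A 20-term truncation already agrees with e to many decimal places.
approx = float(evaluate_cf(e_cf_terms(20)))
print(abs(approx - math.e))  # a very small error
```

Matching a truncation to a constant like this is, of course, only numerical evidence for an identity, not a proof.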
I would give the actual work behind this paper a "strong accept" if the claims were properly scoped, perhaps with a weak/borderline score on "significance/impact" since I'm not really sure why anyone cares about discovering these sorts of identities. Probably a Conditional Accept in its current form, because of the mismatch between the actual results plus a reasonable expectation of potential vs. what's claimed.
So, "over-hyped" and "claims wildly out of line with actual results" are definitely more than fair statements. "Fraud" or "garbage" are way too strong.
Re: Nature, I don't really understand it or care. I can say that in my own input to hiring committees I tend to treat Nature papers in CS/Math as red flags unless they're consolidations of a bunch of other work published in top sub-field journals/conferences.
For some reason Nature really loves these "automated discovery of random mathematical facts" type of papers. I don't understand it. I tend to assume it's click-through-rate-driven editorial decision making.
I think the vast majority of criticism here does not target the research per se, but rather the way the results are "hyped" and presented as a massive breakthrough. I agree with this criticism, and I also think the two positive Nature reviews seem rather shallow, at least from a non-expert's perspective (this is not your fault, of course). When it comes to long-term impact, I'd find it interesting to discuss how your work can (ideally) interact with proof assistants like Lean. Also, the work around Lean is a good example of a "hyped" topic that is presented by its contributors with caution and modesty.
I don't really see much value in debating the procedural aspects (Nature review process etc).
I see a lot of value discussing the research and its content.
We think the results shown in the paper are significant and of some importance, and so do others who reviewed our work.
This is where I think the focus should be.
Please read our paper and not only the blogs criticizing it:)
There is a link to access it here: https://rdcu.be/ceH4i
Then don't take offense at the discussion here, because it's mostly about some "meta" aspects of science communication, and you are probably not responsible for any of the aspects that have been criticized.
Regarding the research itself, I am not an expert, but I am curious to learn how this line of research (automated conjecture generation) intersects with proof automation/proof assistants, and in particular with the work that the Lean community is doing (creating an "executable" collection of mathematical knowledge). Perhaps there are some works you can point to.
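To sketch what I imagine that intersection looking like (a purely hypothetical example, not one of the paper's conjectures and not any existing pipeline): a generated conjecture could be emitted as a Lean 4 statement with `sorry` marking the missing proof, so that proof automation or human contributors can attack it later.

```lean
-- Hypothetical sketch: a machine-generated identity registered as an
-- unproven Lean 4 theorem. The statement here is a stand-in example.
import Mathlib

theorem conjectured_identity (x : ℝ) :
    Real.sin x ^ 2 + Real.cos x ^ 2 = 1 := by
  sorry
```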
Would appreciate your opinions on the matter.