An interesting comment, posted on a recent research document.
So I've laid out a few steps of authentication, based on my mathematical and physics education and on consultation with various associates.
This is something being studied for peer review and an upcoming conference. The challenge is this:
Lots of people on ResearchGate have their own unique theories. There is plenty of technical discussion about the technicalities that might invalidate these theories, but in reality, invalidating or authenticating a theory like this is a highly rigorous process, more akin to hard mathematics than to anything that can be settled verbally.
Below is a comment, included for context, that was posted under a ResearchGate paper.
After this context, a few standards for authentication are laid out, and I challenge anybody with a theory of their own to attempt to meet them.
Context:
"would like to extend congratulations to John, as anyone performing these calculations will also see as I did, that this theory is easily renormalizable at one Feynman loop, by my current calculations.
Anybody else who can verify this as well. It's either exactly at one loop, or around there, indicating high stability in versatile QM/GR scenarios and means it handles infinities that other popular theories such as String Theory Struggle with exceptionally well, among other implications.
It also means the applications of the actual elements of the framework structure are easily adaptable in many scenarios traditionally difficult, I.E, Early birth of the universe, large rotating black holes, ECT
This evidencing, that is part of a small group of theories, such as qed, string theory, loop Quantum gravity, and many others recently emerging, which have indicated high authentication rates for rigorous academic standards of how this would historically be addressed.
I'm investigating also, another colleague who seems to have a very robust framework indicating a similar confluence of being normalizablity, with just one Feynman loop also being currently calculated to renormalize his theory, this of course will take additional analysis from people beyond me, in the spirit of peer review.
We all need to remember that universal acceptance of a theory is unlikely, given how decentralized today's networks are compared with the conditions that allowed theories such as quantum mechanics and general relativity to propagate.
I fully believe that a range of unified theories is possible, all based on competent identification of similar mathematical and general principles, with a range of uses and levels of complexity, each adhering to those principles according to personal development and usage needs.
Zero-sum thinking is absurd in this matter: attempting to invalidate a theory like this on the basis of a small, minor inconsistency does not hold up to the rigorous academic standards by which one would systematically and historically judge whether a theory could be considered a functioning unified theory.
This type of thinking carries a cognitive dissonance that refuses to acknowledge that even theories like quantum mechanics and general relativity have inconsistencies and are still very valuable.
We could pretend to invalidate those frameworks on the basis of small, general technicalities as well, but that would be foolish, which is why zero-sum thinking is the bane of science. Imagine if the logic of a small inconsistency invalidating an entire framework, as is common here on ResearchGate, had been applied to general relativity, seeing as quantum mechanics was already prevalent at the moment it came out.
The fact of the matter is that authenticating unified theories boils down to something more akin to hard mathematics; such theories cannot be invalidated by simple verbal descriptions of potential technical inconsistencies. It is far more advanced and complicated than that, and no matter what you say, you are not going to be able to invalidate or supersede the mathematical authentication needed to validate a theory such as this.
Again, if you apply that logic to conflicting theories such as quantum mechanics and general relativity, the argument becomes inherently illogical, especially if any point made to argue it is itself based on quantum mechanics or general relativity.
The dissonance over when it is acceptable to ignore certain technicalities and when it isn't, based on what other people are championing, is beyond ridiculous. If you applied this even to the inconsistencies between quantum mechanics and general relativity, you could pretend that all of our advancements in these areas over the past 40 years didn't matter.
We act as though, just because some of the most major theories, quantum mechanics, general relativity, and string theory, do not integrate with one another, they are not still usable and worthwhile efforts. There seems to be a dissonance in how this logic is applied to popular theories versus one developed by somebody less well known, or a less widely accepted theory.
There will not be just one; once we start seeing the greater mosaic of understanding, we will all move forward."
In light of this, here are the standards I challenge people to meet when attempting to authenticate their own theories; post the results here if you want to:
Computational Verification:
1. Numerical Simulations
Simulation of Predictions: Use of computational models to simulate theoretical predictions and compare them against experimental data.
Algorithmic Consistency: Ensuring that the algorithms used in simulations and calculations are robust and produce consistent results (a minimal sketch covering both points follows below).
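As an illustration, here is a minimal Python sketch of both points, assuming a made-up one-parameter model and placeholder "data"; the predict() function and every number in it are hypothetical stand-ins, not results from any actual theory or experiment.

```python
import numpy as np

# Hypothetical stand-in model: the "theory" is assumed to predict an observable
# y(x; g) = g * integral_0^x exp(-t^2) dt, evaluated numerically with the
# trapezoid rule. Replace predict() with the real model; the "data" are fake.
def predict(x, g, n_steps=200):
    t = np.linspace(0.0, x, n_steps)
    f = np.exp(-t**2)
    dt = t[1] - t[0]
    return g * dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

rng = np.random.default_rng(0)
x_data = np.linspace(0.2, 3.0, 15)                      # measurement points (placeholder)
truth = np.array([predict(x, 0.5) for x in x_data])
y_data = truth + rng.normal(0.0, 0.01, x_data.size)     # fake "experimental" values
sigma = np.full_like(x_data, 0.01)                      # quoted uncertainties

# 1) Compare simulated predictions against the data with a chi-squared statistic.
g_candidate = 0.5
model = np.array([predict(x, g_candidate) for x in x_data])
chi2_per_dof = np.sum(((model - y_data) / sigma) ** 2) / (x_data.size - 1)
print(f"chi2/dof = {chi2_per_dof:.2f}")                 # values near 1 indicate compatibility

# 2) Algorithmic consistency: the numerical answer must be stable when the
#    discretization is refined (here, doubling the integration grid).
coarse = predict(2.0, g_candidate, n_steps=200)
fine = predict(2.0, g_candidate, n_steps=400)
assert abs(coarse - fine) < 1e-4, "result depends on grid resolution"
```

The same pattern, predict, compare with stated uncertainties, and re-run under different numerical settings, applies no matter how complicated the underlying model is.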
2. High-Energy Experiments
Particle Colliders: Utilizing facilities like the Large Hadron Collider (LHC) to test predictions about particle interactions at high energies, which also includes matching predictions against data from public repositories (see the sketch after this item).
Detector Sensitivity: Ensuring that detectors are sensitive enough to observe rare or subtle phenomena predicted by the theory.
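A compatibility check against a published measurement can be as simple as a pull calculation; the numbers below are placeholders, not real LHC or repository values.

```python
import math

# Placeholder numbers only; not real LHC or repository values.
sigma_predicted = 55.0   # pb, hypothetical cross section predicted by the theory
sigma_measured = 57.2    # pb, hypothetical published measurement
err_measurement = 2.4    # pb, experimental uncertainty
err_theory = 1.1         # pb, assumed theoretical uncertainty

# Pull: the deviation in units of the combined (quadrature) uncertainty.
pull = (sigma_predicted - sigma_measured) / math.hypot(err_measurement, err_theory)
print(f"pull = {pull:+.2f} sigma")   # |pull| below roughly 2 is usually taken as compatible
```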
3. Standard Classical Experiments
Reproducibility: Experiments must be reproducible by independent researchers under the same conditions.
Precision Measurements: High-precision measurements to test the predictions of the theory, such as those in electromagnetism and gravity (a simple consistency-check sketch follows below).
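One way to make the reproducibility requirement concrete is to combine independent repetitions with an inverse-variance weighted mean and check their mutual consistency; the values below are assumed for illustration only.

```python
import numpy as np

# Assumed example values: repeated measurements of the same quantity by
# independent groups, each with its own 1-sigma uncertainty.
values = np.array([9.8065, 9.8071, 9.8059, 9.8068])
errors = np.array([0.0004, 0.0005, 0.0006, 0.0004])

weights = 1.0 / errors**2
combined = np.sum(weights * values) / np.sum(weights)   # inverse-variance weighted mean
combined_err = 1.0 / np.sqrt(np.sum(weights))

# chi2/dof much larger than 1 would signal irreproducibility or underestimated
# uncertainties, before any comparison with the theory is even attempted.
chi2 = np.sum(((values - combined) / errors) ** 2)
print(f"combined = {combined:.5f} +/- {combined_err:.5f}, chi2/dof = {chi2 / (values.size - 1):.2f}")
```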
4. Quantum Verification
Wave Function Analysis: Verifying that the theory’s predictions about quantum states and their evolution match experimental observations.
Entanglement and Superposition: Testing predictions about quantum entanglement and superposition through experiments such as Bell tests and the double-slit experiment (a CHSH sketch follows below).
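For the entanglement side, the quantum-mechanical prediction can be computed directly from the state vector; the sketch below evaluates the CHSH correlator for the singlet state at the standard optimal angles and compares it with the classical bound of 2.

```python
import numpy as np

# Compute the CHSH correlator predicted by quantum mechanics for the singlet
# state directly from the state vector, using measurement axes in the x-z plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    """Spin measurement along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(theta_a, theta_b):
    """Correlation <psi| A(theta_a) (x) B(theta_b) |psi>."""
    op = np.kron(spin_op(theta_a), spin_op(theta_b))
    return float(np.real(psi.conj() @ op @ psi))

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # standard CHSH settings
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"CHSH S = {S:.3f}  (classical bound 2, Tsirelson bound {2*np.sqrt(2):.3f})")
```

An experiment then has to reproduce a value above 2 (with the usual detection and locality loopholes addressed) for the quantum prediction to count as authenticated.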
5. Computational Authentication
Feynman Loop Calculations: Performing and independently verifying one-loop (and higher-loop) Feynman diagram calculations to confirm that the theory is renormalizable and mathematically consistent.
Renormalization Data: Comparing the renormalization results, such as running couplings, against known standards to ensure consistency (an illustrative sketch follows below).
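A genuine one-loop check means computing the loop integrals and counterterms of the specific theory, which does not fit in a comment. As a downstream illustration only, the sketch below integrates the known one-loop renormalization-group equation of QED (single electron loop) to show the kind of finite, scale-dependent prediction a one-loop-renormalizable theory is expected to produce.

```python
import math

# Illustration only: this does NOT perform a loop integral. It uses the known
# one-loop renormalization-group equation of QED with a single electron loop,
#   d(alpha)/d(ln mu) = 2 * alpha^2 / (3 * pi),
# whose closed-form solution gives the running of 1/alpha with scale mu.
alpha_me = 1.0 / 137.035999   # fine-structure constant at the electron mass scale
m_e = 0.000511                # GeV
M_Z = 91.1876                 # GeV

inv_alpha_MZ = 1.0 / alpha_me - (2.0 / (3.0 * math.pi)) * math.log(M_Z / m_e)
print(f"1/alpha(M_Z) ~ {inv_alpha_MZ:.1f}  (electron loop only; with all "
      "charged fermions included this comes down to about 128)")
```

Reproducing this kind of running and matching it against measured couplings is part of what "comparing renormalization results against known standards" means in practice.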
6. Logical and Theoretical Framework Consistency
Group Theory and Symmetry: Ensuring that the theory adheres to established symmetries and group structures, such as those in the Standard Model (e.g., SU(3)xSU(2)xU(1)).
Lorentz Invariance: Maintaining Lorentz invariance in the regimes where special relativity holds (a numerical sanity-check sketch follows below).
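Both requirements can at least be sanity-checked numerically; the sketch below verifies that a sample boost preserves the Minkowski metric and that the su(2) generators close under commutation (the boost velocity is an arbitrary example value).

```python
import numpy as np

# (a) Lorentz invariance: any candidate transformation Lambda must preserve the
#     Minkowski metric, Lambda^T eta Lambda = eta. Here Lambda is a boost along x.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
beta = 0.6                              # arbitrary example boost velocity (v/c)
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])
assert np.allclose(boost.T @ eta @ boost, eta), "metric not preserved"

# (b) Group structure: the su(2) generators T_i = sigma_i / 2 must satisfy
#     [T_i, T_j] = i * epsilon_ijk * T_k (checked here for i=1, j=2, k=3).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T1, T2, T3 = sx / 2, sy / 2, sz / 2
assert np.allclose(T1 @ T2 - T2 @ T1, 1j * T3), "su(2) algebra not satisfied"
print("Lorentz-invariance and su(2) algebra checks passed")
```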
7. Predictive Power and Experimental Validation
Predictions of New Phenomena: The theory should predict new phenomena that can be tested and potentially falsified by experiments.
Data Compatibility: Predictions must be compatible with existing experimental data, and any deviations must be accounted for and explained (see the sketch below for one way to quantify compatibility).
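"Compatible with existing data" can be stated quantitatively as a goodness-of-fit p-value; the chi-squared value and degrees of freedom below are placeholders.

```python
from scipy.stats import chi2

# Placeholder inputs: the chi-squared of the theory against some data set and
# the corresponding degrees of freedom (data points minus fitted parameters).
chi2_value = 18.3
dof = 15

# Goodness-of-fit p-value: the probability of a chi-squared at least this large
# if the theory is correct. A very small p-value (e.g. below 0.05) flags a
# deviation that must be explained or treated as evidence against the theory.
p_value = chi2.sf(chi2_value, dof)
print(f"p = {p_value:.2f}")
```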
8. Peer Review and Publication
Publishing in Reputable Journals: The theory must be published in peer-reviewed journals where it can be scrutinized by the scientific community. Alternatively, to avoid gatekeeping, this can be done by consulting experts in the field who are within your peer network and having them verify the work in some documentable way.
Transparency and Collaboration: Maintaining transparency in methods and data, and encouraging collaborative efforts to test and validate the theory.
9. Research-Based Comparison
Comparison Against Known Models: Conducting research-based comparisons against known models, including those with weaker loop-level consistency, to highlight the advantages of the new theory (a simple model-comparison sketch follows below).
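One way to make such comparisons concrete, assuming Gaussian uncertainties so that chi-squared acts as -2 log-likelihood up to a constant, is an information-criterion comparison; the chi-squared values and parameter counts below are hypothetical.

```python
# Hypothetical inputs: each model's chi-squared on the same data set and its
# number of free parameters. AIC = chi2 + 2k (valid under Gaussian errors);
# the lower AIC balances fit quality against parsimony.
models = {
    "new theory": {"chi2": 21.4, "n_params": 2},
    "reference model": {"chi2": 20.9, "n_params": 5},
}
for name, m in models.items():
    aic = m["chi2"] + 2 * m["n_params"]
    print(f"{name:16s} chi2 = {m['chi2']:5.1f}  AIC = {aic:5.1f}")
```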