Brian Dyson (https://www.researchgate.net/scientific-contributions/BF-Dyson-2003401809) used to joke that there were many times more models for predicting creep-fatigue than there were researchers working on the topic.

Yet time fraction remains overwhelmingly the most frequently used method for calculating the effects of creep, even though many papers (https://www.researchgate.net/profile/Yukio-Takahashi-2, https://www.researchgate.net/scientific-contributions/J-Wareing-2041878341, https://www.researchgate.net/profile/Michael-Spindler) have been written that highlight time fraction's limitations and propose demonstrably better models.
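For anyone who has not applied it, here is a minimal sketch of how the time fraction rule is typically evaluated over a strain hold; the rupture-life correlation, its constants and the relaxation curve are assumptions for illustration only, not material data.

```python
# Minimal sketch only: the time fraction rule accumulates creep damage as
# D_c = sum(dt / t_R(sigma, T)) over each dwell, where t_R is the time to
# rupture at the instantaneous stress and temperature.  The rupture-life
# correlation and relaxation history below are placeholders, not data for
# any real alloy.
import numpy as np


def rupture_time_hours(stress_mpa, temperature_c):
    """Hypothetical power-law rupture correlation t_R = A * sigma**(-n).
    Temperature is ignored in this toy model; real correlations are
    strongly temperature dependent."""
    A, n = 1.0e12, 4.0  # assumed constants for illustration only
    return A * stress_mpa ** (-n)


def time_fraction_damage(time_h, stress_mpa, temperature_c):
    """Accumulate D_c = sum(dt / t_R) over a (relaxing) stress history."""
    dt = np.diff(time_h)
    sigma_mid = 0.5 * (stress_mpa[:-1] + stress_mpa[1:])  # mid-interval stress
    return float(np.sum(dt / rupture_time_hours(sigma_mid, temperature_c)))


# Example: a 1 h strain-controlled dwell in which the stress relaxes
# from 200 MPa towards 150 MPa (assumed relaxation curve).
t = np.linspace(0.0, 1.0, 61)              # hours
sigma = 150.0 + 50.0 * np.exp(-5.0 * t)    # MPa
print(f"Creep damage per cycle (time fraction): "
      f"{time_fraction_damage(t, sigma, 550.0):.3e}")
```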

Why is it so difficult to change the methods used to calculate the effects of creep, which are routinely applied in both research and industrial settings?

It occurs to me that it is difficult simply because creep-fatigue is a two-parameter problem rather than a one-parameter problem (a rough sketch of the two damage terms follows the list below). Furthermore, I believe it is because the effect of fatigue is known with significantly smaller uncertainty than the effects of creep. I am confident that we know the effect of fatigue on creep-fatigue because the following experimental considerations are the same for fatigue tests and creep-fatigue tests, but differ between creep tests and creep-fatigue tests:

1. The test specimens (fatigue specimens used for both)

2. The test machine (servo controlled)

3. The control mode used to conduct the test (typically extensometer or strain control)

4. The data that is logged during testing

5. The failure criterion used (drop in maximum load or stress)
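As a rough illustration of what I mean by a two-parameter problem, codified assessments typically track a fatigue damage fraction and a creep damage fraction separately, broadly of the form

$$
D_f = \sum_k \frac{n_k}{N_{f,k}},
\qquad
D_c = \sum_j \int_{\text{dwell } j} \frac{\mathrm{d}t}{t_R\!\left(\sigma(t),\, T(t)\right)},
$$

with failure predicted when the point $(D_f, D_c)$ reaches an interaction locus (for example, the bilinear diagrams used in ASME-type assessments). It is the creep term $D_c$, not the fatigue term, that I would argue carries most of the uncertainty.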

I am interested in hearing alternative points of view and will be happy to reply with further evidence, share my own thoughts, and answer any questions that arise.
