I am trying to understand the different methods and techniques used to calibrate heat exchanger (HX) models.
Say I am building an HX model from scratch to predict performance at multiple operating points, e.g., under changes in mass flow rates or entering fluid temperatures. I am using the relevant standard heat transfer correlations (HTCs) from textbooks for the hot- and cold-side fluids and applying them to each discretized element of the HX. Since the HTCs are derived from experiments and rest on certain assumptions, they carry an error band or uncertainty. Therefore, the model's predictions will not exactly match the actual performance. To account for these uncertainties, a 'correction' or 'adjustment' factor could be tacked onto the model to bring the predicted performance in line with the measured value(s). Now my questions are:
1. To which equation should this correction be applied? For example, should it go on the heat transfer correlations, on the final heat transfer rate equation, or somewhere else? (In my opinion it should be the heat transfer correlation, since that is the main source of uncertainty, and correcting it also corrects the temperatures at the discretized elements of the HX, but I am open to suggestions for a better place to apply the correction.)
2. What is an acceptable magnitude for these corrections? I understand it will vary with the type of HX, whether there is phase change in the HX, etc.; I am just trying to get a ballpark value. For example, is a factor of 1.5 on the heat transfer correlations acceptable? If not, at what point does it enter the acceptable range: below 1.2, or below 1.05?
3. Is there a standard or document that describes a procedure for calibrating HX models, so that with a single (or a limited number of) data points I can calibrate the model at one point and make it sufficiently predictive over its operating map?
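For context on what I mean by "applying the correction at the heat transfer correlation", here is a minimal sketch of the kind of calibration I have in mind. All names, property values, and the choice of a single hot-side multiplier `c` are my own illustrative assumptions, not a standard procedure; the HX is reduced to a single counterflow effectiveness-NTU element rather than a full discretized model, and the multiplier is fitted by bisection against one measured hot-side outlet temperature:

```python
import math

def ua_from_correlations(h_hot, h_cold, area, c_hot=1.0):
    """Overall conductance UA (W/K) with a calibration multiplier c_hot
    applied to the hot-side correlation h_hot (W/m^2-K).
    Wall and fouling resistances are neglected for brevity."""
    return 1.0 / (1.0 / (c_hot * h_hot * area) + 1.0 / (h_cold * area))

def counterflow_outlets(mdot_h, mdot_c, cp_h, cp_c, Th_in, Tc_in, UA):
    """Effectiveness-NTU solution for a counterflow HX; returns
    (hot outlet, cold outlet) temperatures."""
    Ch, Cc = mdot_h * cp_h, mdot_c * cp_c
    Cmin, Cmax = min(Ch, Cc), max(Ch, Cc)
    Cr = Cmin / Cmax
    NTU = UA / Cmin
    if abs(1.0 - Cr) < 1e-9:          # balanced-flow limit
        eps = NTU / (1.0 + NTU)
    else:
        e = math.exp(-NTU * (1.0 - Cr))
        eps = (1.0 - e) / (1.0 - Cr * e)
    Q = eps * Cmin * (Th_in - Tc_in)  # heat transfer rate, W
    return Th_in - Q / Ch, Tc_in + Q / Cc

def calibrate_c(measured_Th_out, th_out_of_c, lo=0.5, hi=2.0, tol=1e-6):
    """Bisection on the multiplier c so the modeled hot outlet matches
    the measurement; th_out_of_c(c) is monotone decreasing in c."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if th_out_of_c(mid) > measured_Th_out:
            lo = mid   # too little heat transfer -> increase c
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative conditions (water on both sides; all values assumed).
h_hot, h_cold, area = 800.0, 1200.0, 10.0        # W/m^2-K, W/m^2-K, m^2
mdot_h, mdot_c, cp = 0.5, 0.4, 4180.0            # kg/s, kg/s, J/kg-K
Th_in, Tc_in = 80.0, 20.0                        # deg C

def th_out_of_c(c):
    UA = ua_from_correlations(h_hot, h_cold, area, c_hot=c)
    return counterflow_outlets(mdot_h, mdot_c, cp, cp, Th_in, Tc_in, UA)[0]

# Synthetic "measurement" generated with a known multiplier, then recovered.
c_true = 1.3
measured_Th_out = th_out_of_c(c_true)
c_fit = calibrate_c(measured_Th_out, th_out_of_c)
```

The same idea carries over to a discretized model: `c` would scale the per-element film coefficient before assembling each element's UA, so the element-by-element temperature profile is corrected too, which is exactly why I lean toward correcting the correlation rather than the final heat rate.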