Equivalent Series Resistance (ESR) and Internal Resistance (Ri) are two measures for evaluating a supercapacitor's resistance. The former is obtained from electrochemical impedance spectroscopy (EIS), and the latter from the cyclic charge-discharge (CCD) experiment, via the potential drop (IR drop) at the initiation of the discharge step. Setting aside the obvious difference in the measurement method, what is the difference between them (if any)?
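To make the two definitions concrete, here is a minimal sketch of how each quantity is typically extracted. The component values, frequency, and current are assumptions for illustration only; in EIS, ESR is often read as the real part of the impedance at a fixed high frequency, while in CCD, Ri is computed from the instantaneous IR drop divided by the current (some standards use ΔV/(2I) when the drop spans the charge-to-discharge transition):

```python
import numpy as np

# --- ESR from EIS: real part of Z at a high frequency ---
# Idealized series-RC model of a supercapacitor: Z(w) = R + 1/(j*w*C).
# R_true and C are assumed values, not measured data.
R_true, C = 0.05, 10.0          # ohm, farad (assumptions)
f = 1000.0                      # Hz; ESR is often quoted at ~1 kHz
w = 2 * np.pi * f
Z = R_true + 1 / (1j * w * C)
esr = Z.real                    # for this ideal model, equals R_true

# --- Ri from CCD: ohmic (IR) drop at the start of discharge ---
I = 1.0                         # discharge current in A (assumption)
dV = I * R_true                 # instantaneous voltage step on current reversal
ri = dV / I                     # ΔV/(2I) is used instead in some conventions

print(f"ESR = {esr:.3f} ohm, Ri = {ri:.3f} ohm")
```

For the ideal series-RC model the two numbers coincide, which is exactly why the question of their difference in real (non-ideal, porous) electrodes arises.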
Basically, the sources of resistance in a supercapacitor are the same regardless of the method used to measure and quantify them:
The intrinsic resistance of the electrolyte (R_electrolyte), the diffusion resistance of the electrolyte among and into the pores of the active material (R_diff), and the contact resistance between the active material and the current collector (R_cont) are the best-known contributions.
On the other hand, ESR and Ri are, to the best of my knowledge, intended to measure the same thing. So, as far as the overall magnitude of the cell resistance is concerned (as in power density estimation, where the individual contributions to the cell resistance do not matter, only their sum), why should ESR and Ri differ? And if they do, what is the criterion for choosing one over the other? There is no clear consensus on this in the literature (some authors use ESR, others Ri).