Univariate maximum entropy maximizes $S[P] = -\int P(x)\,\ln P(x)\,dx$. The relative univariate entropy is $S[P] = -\int P(x)\,\ln\!\left(\frac{P(x)}{P_{\text{prior}}(x)}\right)dx$. For a discrete variable space, summation is used instead of integration. The former can be viewed as the Shannon-Jaynes entropy extended to probability density functions; the latter is sometimes called relative entropy and is, up to sign, the Kullback-Leibler divergence.
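As a minimal numeric illustration of the two formulas in the discrete case (the distribution p and the prior m below are made-up example values, not from the text above), here is a short Python sketch:

```python
import numpy as np

def shannon_entropy(p):
    """Discrete Shannon-Jaynes entropy: -sum p_i ln p_i (zero-probability terms contribute 0)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def relative_entropy(p, prior):
    """Relative entropy -sum p_i ln(p_i / m_i); this is the negative of the KL divergence D(p || m)."""
    p = np.asarray(p, dtype=float)
    m = np.asarray(prior, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz] / m[nz]))

# Illustrative distributions
p = np.array([0.1, 0.2, 0.3, 0.4])
m = np.array([0.25, 0.25, 0.25, 0.25])   # uniform prior

print(shannon_entropy(p))        # -sum p ln p
print(relative_entropy(p, m))    # equals shannon_entropy(p) - ln(4) for this uniform prior
```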
We can assume a uniform prior when no prior is known. In that case the relative entropy reduces to the Shannon-Jaynes entropy, so practically it is the same entropy; see the proof in the book "Data Analysis: A Bayesian Tutorial", page 116. We can obtain a prior from our observations, or from a mathematical model/forecast, by applying Bayes' rule.
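The uniform-prior reduction is easy to verify directly in the discrete case with $N$ outcomes and $P_{\text{prior},i} = 1/N$ (a one-line check, not quoted from the book):

$$-\sum_i P_i \ln\frac{P_i}{1/N} \;=\; -\sum_i P_i \ln P_i \;-\; \ln N,$$

so the two functionals differ only by a constant and are maximized by the same $P$.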
Since it is essentially the same entropy, there is no simple answer to the general question of which is better. A rule of thumb you can use is this: if you do not have any prior information, only constraints, apply the Shannon-Jaynes entropy; if you know at least something about your result a priori, use the relative entropy (see the sketch below).
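To make the rule of thumb concrete, here is a minimal numerical sketch, assuming a small discrete support, an illustrative mean constraint, and a made-up informative prior (none of these values come from the answer above). It maximizes the Shannon-Jaynes entropy and the relative entropy under the same constraint with scipy:

```python
import numpy as np
from scipy.optimize import minimize

x = np.arange(1, 7)        # discrete support, e.g. faces of a die (illustrative)
target_mean = 4.5          # constraint: <x> = 4.5 (illustrative value)

def neg_entropy(p, prior=None):
    """Negative (relative) entropy to be minimized; prior=None gives Shannon-Jaynes."""
    p = np.clip(p, 1e-12, None)   # avoid log(0)
    if prior is None:
        return np.sum(p * np.log(p))
    return np.sum(p * np.log(p / prior))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},              # normalization
    {"type": "eq", "fun": lambda p: np.sum(p * x) - target_mean},  # mean constraint
]
bounds = [(0.0, 1.0)] * len(x)
p0 = np.full(len(x), 1.0 / len(x))   # start from uniform

# No prior information, only constraints: maximize Shannon-Jaynes entropy
res_shannon = minimize(neg_entropy, p0, bounds=bounds, constraints=constraints)

# Some prior knowledge (here an assumed prior favouring small x): maximize relative entropy
prior = np.array([0.3, 0.25, 0.2, 0.12, 0.08, 0.05])
res_relative = minimize(neg_entropy, p0, args=(prior,), bounds=bounds, constraints=constraints)

print("Shannon-Jaynes solution  :", np.round(res_shannon.x, 3))
print("Relative-entropy solution:", np.round(res_relative.x, 3))
```

Both solutions satisfy the same mean constraint; the prior only pulls the relative-entropy solution toward it in the directions the constraints leave free.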
A caution: bear in mind that observations can be included both in the prior and in the constraints (through Dirac delta functions), as Adom Giffin showed. That means you have to pick the form of entropy that is most relevant to your problem.
Univariate means a single random variable, here x.