Andrew, I am not an expert in this area, but here are my suggestions. By "sample frame data," my guess is that you mean the list of population units from which your sample units were selected for the survey; some responded, others did not. By "supplemental data," I take you to mean data or estimates from another source or sources that are closely related to the response being studied. If my premise is correct, then the following are, in my opinion, the differences between the two:
1) The sample frame data can provide you with a direct estimate of the likelihood of response (the response propensity), which can help you quantify nonresponse bias. The supplemental data may not give you a good estimate of response propensity.
2) The supplemental data can provide estimates similar to the ones you are interested in, which can be compared with the estimates from your current survey to assess nonresponse bias. But for it to be useful, the response rate for the survey that produced the supplemental data has to be high; otherwise that other survey may itself be subject to large nonresponse error.
3) Related to 2) above, nonresponse bias estimated using the supplemental data contains errors from both your current survey and the supplemental data, and this error will likely be much larger if the supplemental data has a low response rate. That is why a high response rate for the supplemental data is important.
4) Consistency in how measurements are made may also be an issue; if measurements are not made consistently across the two surveys, errors can be even larger.
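Points 2) through 4) above can be sketched numerically: compare the current survey's estimate with the supplemental estimate, remembering that the uncertainty of the comparison combines the errors of both sources. All numbers below are invented for illustration.

```python
import math

def bias_check(est_survey, se_survey, est_supp, se_supp):
    """Compare a survey estimate against a supplemental-source estimate.

    Returns the estimated difference and its combined standard error.
    The difference is only a rough indicator of nonresponse bias: it
    inherits error from BOTH sources, which is why a low-quality
    (e.g., low-response-rate) supplemental source can mislead.
    """
    diff = est_survey - est_supp
    se_diff = math.sqrt(se_survey**2 + se_supp**2)  # errors from both sources add
    return diff, se_diff

# Hypothetical numbers: survey mean 52.0 (SE 1.5), supplemental mean 48.0 (SE 2.0)
diff, se_diff = bias_check(52.0, 1.5, 48.0, 2.0)
```

A large difference relative to the combined standard error suggests possible nonresponse bias, but only if the supplemental source is itself trustworthy, as point 3) warns.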
I hope this is helpful. I will be happy to discuss any follow-ups, but note that this is not my core area of expertise.
I agree with Thompson that there is a danger of bias in any supplementary data as well.
The response propensity groups on your primary survey data work something like post-stratification based on response rates. I think the idea is that whatever causes some group of respondents to have a substantially biased distribution of responses to a given variable/question, compared with other groups, would also give that group a different response rate (a different propensity to respond). So a group with a lower response rate will be underrepresented if you do not weight the data in that group to account for the missing data. But because the units that did respond were not drawn at random, while this weighting treats them as if they were, some bias will remain. Also, the groups themselves may not be that easy to determine. They would have to be based on similar characteristics, I think, much like the strata used in stratified random sampling. If too few respond in a given "response propensity" group, that would not be good.
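The propensity-group weighting described above can be sketched as a weighting-class adjustment: within each class, respondents' base weights are inflated by the ratio of the class's total base weight to its respondents' base weight, so respondents "stand in" for the nonrespondents in their class. The class labels and weights here are invented for illustration.

```python
from collections import defaultdict

def weighting_class_adjust(units):
    """units: list of dicts with 'group', 'base_weight', 'responded' (bool).

    Returns adjusted weights for respondents (None for nonrespondents).
    Within each class the adjustment factor is
        (sum of base weights, all sampled units) / (sum, respondents only).
    As noted above, if respondents within a class are not missing at
    random, some bias remains after this adjustment.
    """
    total = defaultdict(float)
    resp = defaultdict(float)
    for u in units:
        total[u['group']] += u['base_weight']
        if u['responded']:
            resp[u['group']] += u['base_weight']
    return [u['base_weight'] * total[u['group']] / resp[u['group']]
            if u['responded'] else None
            for u in units]

# Hypothetical sample: two propensity classes; class 'B' responds at 50%
sample = [
    {'group': 'A', 'base_weight': 10.0, 'responded': True},
    {'group': 'A', 'base_weight': 10.0, 'responded': True},
    {'group': 'B', 'base_weight': 10.0, 'responded': True},
    {'group': 'B', 'base_weight': 10.0, 'responded': False},
]
w = weighting_class_adjust(sample)
```

Here the responding unit in class 'B' has its weight doubled to cover the nonrespondent in its class.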
If you are doing stratified random sampling to begin with, you might simply reweight the strata, as if the realized sample sizes per stratum were what you had wanted all along. :-) However, this is still going to increase your bias.
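One way to sketch that stratum reweighting: give each responding unit in stratum h the weight N_h / r_h (population stratum size over respondent count), so each stratum still represents its full population count. The stratum sizes and counts below are invented; as the comment notes, this does nothing about within-stratum nonresponse bias.

```python
def stratum_weights(N_h, r_h):
    """N_h: population size per stratum; r_h: respondent count per stratum.

    Returns a per-respondent weight for each stratum, treating the
    realized respondent counts as if they were the planned sample sizes.
    This restores the stratum population totals, but the respondents
    within a stratum were not a random subsample, so bias can remain.
    """
    return {h: N_h[h] / r_h[h] for h in N_h}

# Invented strata: 1000 units with 40 respondents, 500 units with 10
w = stratum_weights({'h1': 1000, 'h2': 500}, {'h1': 40, 'h2': 10})
```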
If you are comparing to supplemental data, I don't know; I suppose that could be done in several ways. But if the supplemental data is not of high quality and quantity, as I think Thompson also surmised, its bias might do more harm than good. Comparing to results from a completely separate survey would be a great check, but I don't think that is what you were saying.
I think I have seen people substitute administrative data, or propose to, for cases they could not collect, but that sounds rather risky to me. There are various imputation methods, and I suppose some could use data collected from the frame (nearest neighbor, etc.), or from elsewhere. But I prefer regression, where you can take advantage of related data you already have, say when you have collected an annual census and then monthly samples on the same data elements (variables of interest/questions). In fact, such regression (model-based) "predicted" numbers can be used for imputation for nonresponse, or even for out-of-sample cases. (My RG contributions page has a lot of information on that.) A great advantage is that you can estimate the "variance of the prediction error." And even for probability design-based sampling, a model-assisted design-based approach to estimation can be very useful, greatly improving the accuracy of results. However, the supplemental data has to cover the entire population: for prediction it is called regressor data, and for model-assisted design-based sampling and estimation it is called auxiliary data. You might have some administrative data you can use (something available for the population that you did not collect yourself), or the results of a related survey.
One qualifying comment I should make, Andrew: I have recently become aware that I should note that I worked with continuous data for many years, so my remarks come from that experience and may sometimes only apply to continuous data. I thought I had better add that caveat here. - Best wishes.