I just re-discovered WebCite (http://www.webcitation.org) among the many bookmarks I have collected over the years. I never used it, but I recognize the problem it is supposed to solve. Who knows more?
Following the link you included in the question, I liked the core idea of WebCite. I will probably start to learn more about its capabilities and applications soon. Thank you for the question, which is also informative.
I don't have experience using this either, but I have heard of this function to save cited webpages exactly as they are at the moment of citation, and I have been googling for it because I need it. So thank you, now I know.
It is otherwise very annoying when a cited web page disappears, and even more so if it changes in a critical way, so that the citation becomes meaningless or even looks unmotivated or wrong. Of course, there has been the possibility of using web archives such as the Wayback Machine to check the appearance of web pages at earlier points in time, but readers of articles probably very seldom go that deep.
Perhaps WebCite also means that the practice of time-stamping citations of web pages (Retrieved 08-03-2018, etc.) becomes less necessary?
But perhaps it would have been best if the Internet had had a different design from the beginning. Xanadu, Ted Nelson's hypertext project, started in 1960, had (and has) the idea of only adding information to the system, never taking anything away. There would be new versions of a document, with the old ones always there in parallel. And citation would be "transclusion": a new document would always go to the source, fetch the citation, and leave a micropayment. So nothing would need to be, or could be, copied either. Another term here is "transcopyright". But we can't change that now.
@Anders Norberg: Wow, probably you and I are the only ones remembering Xanadu/Nelson ;-) But what about DOI: isn't it at least partially fulfilling the role of Nelson's ideas about permanent accessibility of documents?
I guess there are a couple more people remembering Xanadu. (Internet pre-history is a very fascinating subject. Vannevar Bush's "As We May Think" in the Atlantic Monthly, 1945, is another of my favourites.)
Yes, DOI handles are at least meant to provide sustainable access to and retrievability of documents, but, if I have understood this right(?), they work with document metadata instead of saving a copy somewhere. So if the article is removed from the web, it can't be found via the DOI handle, and if a webmaster fails to update the metadata correctly when moving a document, the handle does not work either (so we have to google again)? Or have I misunderstood this? I also wonder what happens if a document (not only scientific articles) which already has a DOI is edited/updated/corrected for something minor. Does the DOI now point to the edited document? In that case: not optimal. Or can a document with a DOI not be updated without it prompting a new DOI, or something?
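For anyone curious about the mechanics described above, here is a minimal sketch (my own illustration, not anyone's official code): a DOI is resolved by the doi.org proxy, which looks up the handle's currently registered URL and redirects there. Nothing is archived, so if the registered target disappears, the handle leads to a dead page.

```python
# Sketch of DOI resolution: the doi.org proxy redirects a handle to
# its currently registered location. It is a lookup, not an archive.

def doi_to_url(doi: str) -> str:
    """Build the resolver URL for a DOI handle via the doi.org proxy."""
    return "https://doi.org/" + doi

# 10.1000/182 is the DOI of the DOI Handbook itself.
print(doi_to_url("10.1000/182"))

# Following the redirect (requires network access) would reveal the
# *current* registered location, e.g. with the standard library:
#
# import urllib.request
# resp = urllib.request.urlopen(doi_to_url("10.1000/182"))
# print(resp.url)  # wherever the document is currently registered
```

If the publisher moves the document and updates the registration, the same handle redirects to the new location; if they forget to update it, the handle silently points at the old, dead URL — which matches the failure mode discussed above.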
Hi! Yes, VB is another one I read every now and then (when I have time). You are probably right about the severe restrictions on DOI availability/accessibility. Someone more versed than I am in web technology should be able to contribute interesting details about Xanadu versus DOI, or rather: the non-zero difference between Xanadu and DOI ;-)
Of course, for people who prefer self-publishing, https://bitly.com is a convenient alternative. Just try it out. But... this doesn't answer my original question. I am still waiting...
As others have pointed out, there are other, more robust options for solving the core problem that WebCite addresses. DOI is clearly the choice of publishers, but it is not easily accessible to others. PURL (Persistent URL) is another very useful service that has been around for a long time and is now maintained by the Internet Archive: https://archive.org/services/purl/
Not sure that bit.ly really deals with this problem in the same way, and URL-shortener services like it often become targets for phishing etc. and can end up blocked.
Amazing: as a researcher, I have been a dedicated PC user (since ca. 1980) and a user of internet services (since ca. 1990), with a genuine interest in scholarly documentation and publishing (since ca. 1975). Still, the acronym PURL never crossed my desk, nor has it been mentioned by my many colleagues working and publishing in computer science and related fields.
What's going wrong there? Apparently, PURL has to do more for its visibility and image (going from availability to acceptability).
Even here on RG I found only one (sic!) mention of it, in a question two years ago. Perhaps it is also a problem of (young?) researchers not being aware of, or interested in, genuine solutions for their publishing goals?
Interesting reflection, Paul! You really have a point there. I had not heard of PURL either. Of course the Internet Archive has developed something.
An enigma, or two: there is a lot of information out there that is highly relevant to us for some specialised use, yet we have no clue about it before it becomes more popular. But now I will probably hear about PURL several times during the next month... it is also about human perception.