I am currently working on using ontologies as the knowledge base of a web app. I am using Apache Jena as the mediator between the ontology and the Java code.
This is an opinionated rant about why, despite these objectives, SPARQL endpoints have not in practice achieved much: https://daverog.wordpress.com/2013/06/04/the-enduring-myth-of-the-sparql-endpoint/
As Hoan also said, by implementing a SPARQL endpoint you can provide a data service to both people and machines. People and software programs can explore and play with your data without worrying about storing it locally. If you instead publish RDF data on the Web as a dump file, people first need to load your data into a local system before they can explore it.
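As a minimal sketch of that "no local storage" point: a SPARQL endpoint is just an HTTP service, so a client only needs to build a GET request with the query URL-encoded. The endpoint URL and query below are illustrative (DBpedia's public endpoint is used only as an example):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class SparqlRequest {

    // Build a GET request URL for a SPARQL endpoint. The server evaluates
    // the query; the client never downloads or stores the full dataset.
    static String buildQueryUrl(String endpoint, String query)
            throws UnsupportedEncodingException {
        return endpoint
                + "?query=" + URLEncoder.encode(query, "UTF-8")
                + "&format=" + URLEncoder.encode("application/sparql-results+json", "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String url = buildQueryUrl(
                "https://dbpedia.org/sparql",                     // example endpoint
                "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5");          // example query
        System.out.println(url);
    }
}
```

Fetching that URL (with any HTTP client, or Jena's own query APIs) returns the result set directly, which is the contrast with a dump file.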
Thanks, Leslie, for the insight. Regarding federated queries, don't you think even RDF dumps can be queried in one go if the data is linked? Thanks again.
The recommended way to publish RDF data is both per resource and as a complete dump. RDF resources do not have to be out of date, as Leslie suggests. See e.g. http://vocab.getty.edu/doc/#Export_Files.
For an approach that bridges the gap between SPARQL endpoints (complex server-side processing) and plain RDF resources, see Linked Data Fragments (LDF). It moves SPARQL evaluation to the client, which sends the server only simple requests (single triple patterns).
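To make the single-triple-pattern idea concrete, here is a sketch of what a Triple Pattern Fragments request can look like. The fragment base URL is hypothetical, and the `subject`/`predicate`/`object` parameter names follow the convention of common TPF servers; real clients discover the exact request form from the hypermedia controls embedded in each fragment:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class TpfRequest {

    // Build a Triple Pattern Fragments request URL. The server only has to
    // answer one triple pattern per request; the client joins the results
    // of many such requests to evaluate a full SPARQL query.
    static String buildFragmentUrl(String fragmentBase, String s, String p, String o)
            throws UnsupportedEncodingException {
        StringBuilder url = new StringBuilder(fragmentBase).append('?');
        // An empty string means the position is unbound, so it is omitted.
        if (!s.isEmpty()) url.append("subject=").append(URLEncoder.encode(s, "UTF-8")).append('&');
        if (!p.isEmpty()) url.append("predicate=").append(URLEncoder.encode(p, "UTF-8")).append('&');
        if (!o.isEmpty()) url.append("object=").append(URLEncoder.encode(o, "UTF-8")).append('&');
        return url.substring(0, url.length() - 1); // drop the trailing '?' or '&'
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical fragment endpoint; pattern: ?s rdf:type dbo:City
        String url = buildFragmentUrl(
                "http://example.org/fragments/dbpedia",
                "",
                "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
                "http://dbpedia.org/ontology/City");
        System.out.println(url);
    }
}
```

Because each request is this cheap, the server stays simple and cacheable, and the query-planning burden shifts to the client, which is exactly the trade-off LDF proposes.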