Ontology is a branch of philosophy: a systematic account of existence, i.e. of the entities that exist. In artificial intelligence, the term has come to mean a systematic hierarchy of declared concepts from which knowledge bases are designed. The study of expert systems is a branch of artificial intelligence; an expert system consists of an inference engine (software) and at least one knowledge base.
Concepts in an ontology range from the most general, e.g. "thing", to the very specific, e.g. a part number in the engine of a submarine.
Context, in contrast to concept, is a collection of concepts related to the subject concept that helps to explain the meaning of the subject concept. For example, the zoology part of an ontology may contain the concept that elephants don't fly. To know that you are in that part of the ontology, you could look at the next most general concept: elephants are a subset of mammals, and so on. However, if you look at the concepts in the vicinity of the part of the ontology that deals with children's stories and fiction, you will find that elephants, horses, etc. can and do fly. Remember, not all concepts in an ontology need to correspond to anything tangible.
These are examples of concepts in an ontology that at face value look totally opposite and contradictory. It is the context that resolves the apparent discrepancy.
As we have seen from the above examples, you will need to look at higher levels of generality to understand the context of a concept. This will put it into perspective with other parts of the ontology.
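The idea that the same concept can carry contradictory assertions in different parts of an ontology can be sketched very simply. The following toy example is hypothetical (the context names, concepts, and attributes are all invented for illustration): the same concept, "elephant", is resolved against different contexts, and the context determines which assertion holds.

```python
# Toy sketch of context-dependent concepts (all names are hypothetical).
# Each context maps a concept to the assertions that hold *in that context*,
# so "elephants can fly" and "elephants cannot fly" never conflict.
ONTOLOGY = {
    "Zoology": {
        "elephant": {"can_fly": False, "is_a": "mammal"},
    },
    "ChildrensFiction": {
        "elephant": {"can_fly": True, "is_a": "character"},
    },
}

def lookup(context: str, concept: str, attribute: str):
    """Resolve an attribute of a concept relative to a given context."""
    return ONTOLOGY[context][concept][attribute]

print(lookup("Zoology", "elephant", "can_fly"))           # False
print(lookup("ChildrensFiction", "elephant", "can_fly"))  # True
```

Looking "upward" here means moving from the concept to its enclosing context; the apparent contradiction disappears once the context is part of every lookup.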
I hope this is on the right track. If you were looking for something different let me know and I will try to help you with that.
Context provides associated concepts that help with reasoning: they tend to exclude lines of reasoning that would have been errors in the absence of the clarifying context. I am not sure that I understand your question completely. Please elaborate if this is not what you mean.
Thank you for your insightful answer, respected Dr Marion. I have observed literature that uses lightweight ontologies to form the concepts. The development of the Semantic Web toward metadata interoperability introduced DAML+OIL, RDFS, and other ontology representation languages and reasoning tools. With these tools, users can reason about which objects and classes can be automatically compared, contrasted, or manipulated based on the input ontologies. What I mean is a rigorous development of the meaning of concepts and relations, toward modelling context on the basis of logical representations.
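The kind of automatic class comparison that RDFS and DAML+OIL reasoners provide rests largely on entailment over the subclass relation. A minimal hand-rolled sketch of that one inference (the class names and the simple chain structure are assumptions for illustration, not any standard vocabulary) might look like this:

```python
# Hypothetical toy version of the subclass reasoning that RDFS/OWL-family
# tools automate: follow subClassOf links so that any two classes can be
# compared automatically.
SUBCLASS_OF = {
    "Elephant": "Mammal",
    "Mammal": "Animal",
    "Animal": "Thing",
}

def superclasses(cls: str) -> list:
    """All ancestors of cls under the (transitive) subClassOf relation."""
    chain = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        chain.append(cls)
    return chain

def is_subclass(a: str, b: str) -> bool:
    """Every class is a subclass of itself and of all its ancestors."""
    return a == b or b in superclasses(a)

print(is_subclass("Elephant", "Thing"))  # True
```

A real reasoner does far more (properties, domains/ranges, consistency checking), but transitive subclass closure is the core mechanism behind "comparing and contrasting" classes from input ontologies.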
In our research work we have used context to complete the semantic definition of a given concept: it defines the usage aspect of the concept, based on Sowa's conceptual graph theory. For more information you can see the Amine website, where you will find different types of ontology, depending on concept definition, that make it possible to integrate the different situations, rules, instances, etc. of a concept:
One way to tell whether a set of concepts in an ontology is right or wrong is to create a knowledge base from the concepts and reason over it with an inference engine. The approach then would be to induce the system to produce a logical conclusion in a case where you already know the answer. If the set of concepts is wrong, it should produce faulty knowledge that leads to a wrong answer. If the set of concepts is right, it is not likely to produce a wrong answer. However, the set of concepts may be right and still produce a wrong answer if the ontology is not rich and specific enough to address the question at hand. Also, the concepts can be wrong and the reasoning can also be faulty, yet the test may in some rare instances produce the right answer because the errors cancel out. This method of testing concepts is not perfect, as there are exceptions, but you can get insight into the accuracy and completeness of a set of concepts from the results that they produce.
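The testing idea above can be sketched in a few lines. This is a minimal, assumed implementation (the rule names and facts are invented for illustration): a tiny forward-chaining engine derives conclusions from the concept set, and a known-answer case checks whether the derived knowledge is right.

```python
# Sketch of validating a concept set: run inference over a knowledge
# base built from the concepts, then compare the conclusions against
# cases whose answers are already known.

def forward_chain(facts, rules):
    """Apply if-then rules repeatedly until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical concept set expressed as (premises, conclusion) rules.
RULES = [
    (frozenset({"is_elephant"}), "is_mammal"),
    (frozenset({"is_mammal"}), "breathes_air"),
]

# Known-answer test: from "is_elephant" the engine should conclude
# "breathes_air". A wrong concept set would likely fail this check.
result = forward_chain({"is_elephant"}, RULES)
assert "breathes_air" in result
```

As the post notes, a passing test is evidence rather than proof: an impoverished but correct concept set can still fail, and compensating errors can occasionally pass.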