Hello everyone!
I am currently exploring how well large models understand knowledge in specific domains, and I am attempting to construct a knowledge framework similar to the one humans build when learning a subject. This understanding does not need to be flawless, but it should capture the subject's core concepts and structure comprehensively.
Research Questions:
Has the academic community studied extracting a large model's understanding of specific content or domains?
Is there established terminology for the process of eliciting such a knowledge framework from a model?
Purpose:
My goal is to extract a large model's understanding of a subject or theme, represent that understanding as vectors or text, and then "feed" it to other AI systems to achieve knowledge transfer and application.
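One established framing for this in the literature is *knowledge distillation* (Hinton et al.'s soft-label setup), where a teacher model's output distribution supervises a student model. As a purely illustrative sketch (not tied to any specific library or model, and only one of several approaches to the transfer you describe), the core loss term looks like this:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution. A temperature > 1
    softens the targets, exposing more of the teacher's relative
    preferences among non-top classes ('dark knowledge')."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's distribution to the teacher's,
    both computed at the same temperature -- the soft-label term of a
    Hinton-style distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher (target)
    q = softmax(student_logits, temperature)  # student (prediction)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# any mismatch yields a positive loss for the student to minimize.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))              # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # True
```

Other relevant search terms include "knowledge elicitation", "model editing", and "representation/embedding transfer", depending on whether the extracted understanding is text or vectors.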
Request:
If you are aware of any relevant research, could you share some professional terminology or key concepts? If possible, please include links to relevant academic papers or resources; this would be of great help to my research.
Application Scenarios:
The application scenarios I envision include, but are not limited to, using the output of large models as input for other AI systems to expand their knowledge scope and application capabilities.
Thanks:
Heartfelt thanks in advance for any suggestions, guidance, or shared resources; they would be a great help in advancing my research.
Looking forward to everyone's replies and discussions!