Is it possible, now or in the future, to create an artificial intelligence that will draw knowledge directly from analyzing Internet resources and learn from that knowledge?
Read Stephen Hawking's answer to this question in his last book, Brief Answers to the Big Questions. He had no doubt that the answer is yes. It will start with an Asperger-like kind of knowledge, performing very well in a narrow area, but it will grow into something comparable to human intelligence if cognitive traits are programmed in.
I agree with Leon Hoffman. In my view, it needs some kind of separate update channel for improvement; other kinds of updating could be harmful to us. Alternatively, the improvement should always be controlled by a human who has the relevant expertise. Many problems could also arise if someone hacked into these networks. I am amazed at how you come up with such ideas; as a technology person, I had not even thought about this. A rough sketch of such a channel follows below.
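To make the idea of a separate, human-controlled update channel concrete, here is a minimal Python sketch. The dict-based system state and all function names are my own illustrative assumptions, not a real system design: the point is only that every change must pass through one audited channel and an expert's approval.

```python
# Minimal sketch of a human-gated update channel (illustrative names).
from dataclasses import dataclass


@dataclass
class Update:
    description: str
    payload: dict  # e.g. new parameters or rules to merge in


def human_approves(update: Update) -> bool:
    # Stand-in for review by a human with relevant expertise;
    # here reduced to a console prompt for demonstration.
    answer = input(f"Apply update '{update.description}'? [y/N] ")
    return answer.strip().lower() == "y"


def apply_update(state: dict, update: Update) -> dict:
    # The only channel through which the system may change:
    # rejected updates leave the state untouched.
    if not human_approves(update):
        return state
    new_state = dict(state)
    new_state.update(update.payload)
    return new_state
```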
You can answer this question for yourself quite logically if you can honestly say how many times you have been able to fake or mask your actual state of mind in front of others.
I would like to clarify some of the assumptions behind the question and the answers given:
1) There is an implied assumption that there is knowledge on the Internet. The Internet is the system of interconnected computers; more likely the question is referring to the Web as a source of information.
2) The existence of a repository of information does not mean that there is information in the repository. Sometimes there is information, sometimes there is only data, and sometimes there is junk.
3) For there to be knowledge, a piece of information must be supported by additional verifying data or information. The piece of information representing knowledge must follow from the other pieces of data.
Given these parameters, it is possible with the current state-of-the-art algorithms to extract particular pieces of data and find support for them (there are different methods for this). Under the definition of knowledge given above, the answer is definitely yes.
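As a toy illustration of "knowledge as corroborated information": the sketch below treats a statement as knowledge only when enough independent sources support it. The documents, the naive sentence-level extraction rule, and the threshold are illustrative assumptions, not a real web-scale extraction method.

```python
# Toy corroboration check: a statement counts as knowledge only if it
# is supported by at least `min_sources` independent documents.
from collections import Counter


def extract_statements(document: str) -> list[str]:
    # Naive extraction: treat each sentence as a candidate statement.
    return [s.strip() for s in document.split(".") if s.strip()]


def corroborated(documents: list[str], min_sources: int = 2) -> set[str]:
    counts = Counter()
    for doc in documents:
        # Count each statement once per source, not per occurrence.
        counts.update(set(extract_statements(doc)))
    return {s for s, n in counts.items() if n >= min_sources}


docs = [
    "Water boils at 100 C at sea level. The moon is made of cheese",
    "Water boils at 100 C at sea level. Cats are mammals",
    "Cats are mammals. Water boils at 100 C at sea level",
]
print(corroborated(docs))
# Only statements repeated across independent sources survive;
# the unsupported cheese claim is filtered out as mere data.
```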
Evolution does not necessarily imply improvement, so a measure of improvement must be defined before anyone can positively state that there is one.
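One way such a measure could be operationalized, purely as an assumption for illustration: score the system on a fixed held-out benchmark before and after a change, and call the change an improvement only if the score rises.

```python
# Hypothetical "measure of improvement": a change counts as an
# improvement only if the score on a fixed benchmark increases.
def improved(score_before: float, score_after: float, eps: float = 0.0) -> bool:
    return score_after > score_before + eps


print(improved(0.81, 0.84))  # True: measurable gain on the benchmark
print(improved(0.81, 0.80))  # False: evolution without improvement
```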
Please define 100% accurate learning. Is it for the entire Web as a source of information, or for a subset? What are the criteria for 100% accurate learning in non-separable spaces? How are those criteria going to handle conflicting or contradictory data? False data?
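To illustrate why 100% accurate learning is problematic in non-separable spaces: if the same input appears with conflicting labels, no classifier can be right every time. The tiny synthetic dataset below is an assumption for demonstration; it sketches the Bayes-error argument, not a proof.

```python
# With conflicting labels for the same input, even the optimal
# classifier (majority label per input) cannot reach 100% accuracy.
from collections import Counter, defaultdict

data = [
    ("x1", 0), ("x1", 0), ("x1", 1),  # x1 is usually 0, sometimes 1
    ("x2", 1), ("x2", 1), ("x2", 0),  # x2 is usually 1, sometimes 0
]


def best_possible_accuracy(samples):
    # Accuracy of the optimal classifier that picks the majority
    # label for each input; no learner can exceed this bound.
    by_x = defaultdict(Counter)
    for x, y in samples:
        by_x[x][y] += 1
    correct = sum(max(c.values()) for c in by_x.values())
    return correct / len(samples)


print(best_possible_accuracy(data))  # 4/6 ≈ 0.667, never 1.0
```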