An interesting question, James. Can you give some idea of what an instance of 'formulating an idea' would mean in physical dynamic terms? And also what would be required for this event to be attributed to a 'system' or a 'person'? I am thinking that you might have six modular AI devices, each of which had ten parts that could be switched to become parts of one of the other systems instead at the flick of a switch or trip of a logic gate. What would justify attributing the formulating event to any one system? (Trivially, one part could be one of those software chips you used to stick into a BBC-B computer and could lift out and put into another.)
I have a suspicion that AI devices do not formulate ideas. They just instantiate computations. But that would depend on what formulating an idea meant in dynamic terms.
I found a few articles and I hope that they are useful for you:
Dougherty, E. R. (1988). Mathematical methods for artificial intelligence and autonomous systems.
Cohen, P. R. (1984). Heuristic reasoning about uncertainty: An artificial intelligence approach.
Lenat, D. B. (1976). AM: An artificial intelligence approach to discovery in mathematics as heuristic search (Report No. STAN-CS-76-570). Stanford University, Department of Computer Science.
Bender, E. A. (1996). Mathematical methods in artificial intelligence (pp. 589-593). Los Alamitos: IEEE Computer Society Press.
Luger, G. F. (2005). Artificial intelligence: Structures and strategies for complex problem solving. Pearson Education.
Turner, R. (1984). Logics for artificial intelligence.
Poole, D., Mackworth, A., & Goebel, R. (1998). Computational intelligence: A logical approach.
Do you think we create concepts based on our experience, or do you think our experience reflects the way we construct concepts? I think the concepts have to come before the experiences, because our experience is based on complex inferences about objects and dynamic relations.
And I wonder what you think it would be for an AI system to 'formulate' a concept. James has not answered that yet. Does formulation of a concept actually mean anything unless it is cashed out in some sort of experience? Maybe it does but I worry that we may have a sterile discussion if we have not decided what we mean by these words in simple dynamic terms.
When you say dimensions do you just mean degrees of freedom? Mathematicians, and sometimes the rest of us, work with ideas that involve a lot more than four degrees of freedom. But if by dimension we mean the elements of the spacetime metric of physics, do we have any reason to think there are any more of them to have ideas about anyway? String theory seems to have fallen pretty flat.
An initial response to Jonathan’s excellent question re meaning of "formulate an idea" -- following is just a "line of thought" with some non-standard ideas:
Consider what might be called an “Idea-Stream Based” (ISB) AI system.
Core components of an ISB are:
A knowledge representation structure (for KR see any AI textbook, e.g. Russell and Norvig) -- a type of database storing what the agent currently knows/believes, including memories and the current spatial context as known
An “ideas-stream” (compare “stream of consciousness”) which, suitably manipulated and controlled, can be built upon for action, remembering, planning, abstraction, learning etc (Really??? I hear you say)
Among required enabling processes are those that (i) select the next idea to appear in the stream, trigger action from the stream, incorporate sensory input into the stream… and (ii) those that grow the system’s KR structure and resolve its contents into ideas as required.
NB An ISB is, superficially at least, very different from the currently fashionable “Deep Learning” systems.
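The control cycle implied by the components above can be put into a very rough sketch. Everything here -- the class names, the trivial selection rule, the toy percepts -- is an illustrative assumption, not part of the proposal; the sketch only shows the shape of enabling processes (i) and (ii):

```python
# Hedged sketch of one ISB cycle. All names are hypothetical illustrations.

class KnowledgeBase:
    """Stands in for the KR structure: what the agent currently knows/believes."""
    def __init__(self):
        self.facts = set()

    def grow(self, idea):
        # Enabling process (ii): grow the KR structure from stream contents.
        self.facts.add(idea)

def select_next_idea(kb, stream, sensory_input):
    """Enabling process (i): pick the next idea for the stream.
    Toy rule: prefer fresh sensory input, else recall a stored fact."""
    if sensory_input is not None:
        return sensory_input
    return max(kb.facts, default=None)

def isb_step(kb, stream, sensory_input=None):
    """One cycle: incorporate input, extend the ideas-stream, grow the KR."""
    idea = select_next_idea(kb, stream, sensory_input)
    if idea is not None:
        stream.append(idea)
        kb.grow(idea)
    return stream

kb, stream = KnowledgeBase(), []
for percept in ["red", "round", None]:   # None = no new sensory input this cycle
    isb_step(kb, stream, percept)
print(stream)   # ['red', 'round', 'round']
```

The third cycle shows remembering in miniature: with no percept available, the stream is extended from the KR structure instead.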
A little more precisely, an “idea” may be defined as
EITHER a sensum (cf “The Grounding Problem”)
OR a set of ideas (recursion) with certain relationships between them drawn from a defined set of possible relationships between ideas
In sum, an “idea” is a fragment of the knowledge representation structure which can be handled as a unit by the ideas-stream, which in turn is the basis of cognition.
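The EITHER/OR definition above is naturally expressed as a recursive data type. A minimal sketch, in which `Sensum`, `Idea`, and the contents of `RELATIONS` are all illustrative assumptions rather than a fixed proposal:

```python
# Hedged sketch: an idea is EITHER a sensum OR a set of ideas with
# relationships drawn from a defined set. Names are illustrative only.

from dataclasses import dataclass

RELATIONS = {"part-of", "before", "causes"}   # the "defined set" of relations

@dataclass(frozen=True)
class Sensum:
    label: str          # a grounded sensory primitive (cf. the Grounding Problem)

@dataclass(frozen=True)
class Idea:
    parts: frozenset    # sub-ideas, each a Sensum or another Idea (the recursion)
    links: frozenset    # (relation, a, b) triples, relation drawn from RELATIONS

def is_idea(x):
    """EITHER a sensum OR a set of ideas with permitted relations between them."""
    if isinstance(x, Sensum):
        return True
    if isinstance(x, Idea):
        return (all(is_idea(p) for p in x.parts)
                and all(r in RELATIONS for r, _a, _b in x.links))
    return False

red, round_ = Sensum("red"), Sensum("round")
apple = Idea(parts=frozenset({red, round_}),
             links=frozenset({("part-of", red, round_)}))
print(is_idea(apple))   # True
```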
Then:
CONJECTURE 1 -- any specific AI system of the foregoing type (ISB) will have associated with it a fixed set of all the ideas possibly creatable by its processes in all possible circumstances in its “lifetime” -- call this set AIS. Note that which ideas the system will actually create in any particular period of time will depend upon the particular experiences to which it is exposed during that time.
CONJECTURE 2 -- there is a set analogous to AIS associated with any human brain as genetically specified -- call this set HBS
? We have no reason to believe that the sets AIS and HBS are necessarily identical
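Conjecture 1 can be illustrated in miniature: with a finite sensa alphabet and a bounded combination rule, the set of constructible ideas is fixed in advance of any particular experience. A toy sketch (the alphabet and the union-based combination rule are arbitrary assumptions chosen for brevity; relations are omitted):

```python
# Toy illustration of Conjecture 1: AIS is determined by the system's
# processes, not by which experiences actually occur.

from itertools import combinations

SENSA = {"red", "round"}            # assumed finite sensory alphabet

def ideas_up_to(depth):
    """Depth 0: the sensa themselves; each further depth: unions of pairs
    of earlier ideas. A stand-in for 'all ideas creatable by its processes'."""
    all_ideas = {frozenset([s]) for s in SENSA}
    for _ in range(depth):
        all_ideas |= {a | b for a, b in combinations(all_ideas, 2)}
    return all_ideas

print(len(ideas_up_to(0)))   # 2
print(len(ideas_up_to(1)))   # 3
print(len(ideas_up_to(5)))   # still 3 -- the set saturates, fixed in advance
```

Which of these ideas a run of the system actually constructs depends on its inputs; the set it could construct does not.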
>> Arthur -- Thanks for the list of articles. All are certainly relevant to the application of mathematics/logic to artificial intelligence in general, but I’m not sure any of them homes in even a little on the question I posed. By the way, I used to work with Ray Turner and read some draft chapters of the excellent little book by Ray (on your list) long, long ago...
>> Stefan -- I like (I think) your definition and development of “idea/concept”, but English words are so imprecise. Any chance you can put up some computer pseudo-code that would capture more exactly your definition? No doubt that would lose generality, but in a good cause?
Use a Kolmogorov complexity argument. If we assume we are finite state machines -- very large ones, approximating Turing machines but bounded in time and space, and so having limited access to external storage -- then there is a limit to the complexity of the strings we can generate. Take the highest-complexity string generated by any human being from the beginning of human existence to its end, i.e. to when we cease to exist as a species. Then build a machine that can generate a more complex string, i.e. a machine with more states or better access to external storage. This machine can produce strings of higher complexity. Are these strings ideas/concepts? That’s up to you and the definitions you use and the design you come up with :-)
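The bound in this argument can be illustrated, though of course not proved, in code: a string emitted by a small fixed generator has Kolmogorov complexity at most the generator's description length plus O(log n), while a random string of length n typically admits no comparably short description. Since K is uncomputable, compressed size serves below as a crude upper-bound proxy (the generator and sizes are arbitrary illustrative choices):

```python
# Compressed size as a rough upper bound on Kolmogorov complexity.

import os
import zlib

def compressed_size(s: bytes) -> int:
    return len(zlib.compress(s, 9))

n = 100_000
machine_output = b"ab" * (n // 2)   # output of a tiny two-state generator
random_string = os.urandom(n)       # stands in for a high-complexity string

# The bounded machine's output compresses to almost nothing; the random
# string barely compresses at all, whatever its length.
print(compressed_size(machine_output))
print(compressed_size(random_string))

assert compressed_size(machine_output) < compressed_size(random_string)
```

A machine with a longer description (more states, more storage) can emit strings whose complexity exceeds this bound for any smaller machine; whether such strings count as ideas is, as said, a matter of definition.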
We (humans) can formulate rational entities in general: we are irrational beings and, as such, we can formulate anything. Computers, on the other hand, are a subset of those rational entities, so they cannot formulate anything -- the "formulate" action belongs to an irrational category and is not innate to machines. We can formulate any computer. Any pseudo-formulation process a computer may acquire is a form of artificial-intelligence manifestation in which the "formulate" action is an artificial one, i.e. the computer mimics intelligence but is not intelligent. Any artificial formulation a computer may establish is thus only a rational state of the machine that we, on our part, can formulate. The answer, hence, is no.
Svetoslav Zabunov - your answer is based on an implicit and unexplained distinction between ‘artificial’ and ‘natural’ properties or behaviours, and upon the idea, stated without any form of evidence, that a logical mechanism such as a computer, or indeed a finite state machine, can only exhibit ‘rational’ behaviours.
The distinction between artificial and human (not natural) intelligence was explained in my previous message, using irrational means. You do not separate an explanatory statement from a rational statement; these are not the same.
Artificial properties of machine intelligence are such by definition; that is why they are called artificial. This definition is fully explicit. Further, a machine (computer, state machine, etc.) is rational by definition and need not be based on evidence.
If you, on the other hand, want to prove that matter is rational, then you are in big philosophical trouble.
Dear Svetoslav Zabunov, you shy away from discussion of the basis of your assertions. I understand that is because these assertions are not rational and have no basis that can be explained rationally. My irrational reaction to this is “utter nonsense”. Since this is irrational you need not reply, as there is no rational response, and an irrational response is pointless as it will be irrationally ignored :-)
Dear Svetoslav Zabunov - you do lack arguments, but I did not take your reply as an insult. As you said above, the distinction you draw is drawn by irrational means, so how can it be supported by rational argument? As it cannot be, I do not find the lack of argument insulting. Rather, I accept that you take a position irrationally, but I find no reason to take that position seriously, because it is not supportable by rational argument and I see no other form of support offered for the position.