How could we describe the act of "thinking" with mathematical tools? Which paradigm is best suited for it? What does "thought" mean mathematically? Is there any alternative to the procedural (linear) conception of neural calculus?
For a rigorous mathematical treatment you would need a proper definition of the term "thinking".
If I had to guess, I would describe it as a nonlinear mapping from a denumerable set (of all possible data inputs from your senses) onto another denumerable set (the "saving" options in your brain).
What you describe is quite reductive: a function from a set A of nervous receptors (spread throughout the body) to a set B of recordings (with some flags). Moreover, that kind of calculus regards "the interpretation of external inputs", while "thinking" seems to involve synaptic transmission that is outwardly unpredictable - "logical" only for the "internal" observer.
A simple case. Imagine being in an empty room, without any clue from the outside: you may indifferently begin thinking about a bag or an apple. An outsider could never predict which one you are thinking about: what he can do, rather, is establish the probability of each, weighing your previous experience - or better, what he thinks he knows about you. Presumably, recent and local events have, on average, a greater influence than remote ones (i.e. you would never think about apples just because you thought about them two years ago, but you might if you had bought some a few moments ago), so he should weigh this too. In the end, the "logic" you follow is "masked": in order to describe it (to "read your mind"), the outsider must predict the most probable choices and determine the likelihood of his judgement about you.
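To make the outsider's reasoning concrete, here is a minimal sketch (my own construction, with made-up numbers): each past exposure to a thought is weighted by an exponential recency decay, and the weights are normalized into a prediction. The function name, the half-life parameter and the timestamps are all illustrative assumptions.

```python
from math import exp

def thought_probability(exposures, now, half_life=1.0):
    """exposures: thought -> list of times it was encountered.
    Recent exposures weigh exponentially more than remote ones."""
    scores = {
        thought: sum(exp(-(now - t) / half_life) for t in times)
        for thought, times in exposures.items()
    }
    total = sum(scores.values())
    return {thought: s / total for thought, s in scores.items()}

# You bought apples moments ago (t=9.9); you saw a bag two years back (t=0).
probs = thought_probability({"apple": [9.9], "bag": [0.0]}, now=10.0)
print(max(probs, key=probs.get))  # 'apple'
```

The outsider's "reading of your mind" is then just the argmax of this distribution, together with its probability as his confidence.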
Dear Frederico,
as I'm no expert in this field (but I'm very interested in the question), could you tell me: isn't the more interesting question in your empty-room example whether the person can also think about an apple if he/she has never seen one?
So, my only point is that you need a map from the world around you into your brain (as a figure of speech), since you need some sensory experience (hearing, seeing, feeling...) in order to think properly about things - I, for example, cannot imagine thinking about something I have never seen or heard of.
Replying to J.Gruenwald: I think it is extremely improbable for a person to conceive something he/she has never come up against before - but not impossible! Just think about composing music: the composer (if skilled enough) produces an arrangement never designed before. This happens because he is capable of relating the objects he can experience (in this case, musical notes, sounds, and the sense of rhythm) into a single thing - a miscellany performed by imitation, which means you must have [had] sensory experiences in order to perform representation.
Likewise, if you try, you may conceive a fruit you have never seen before (although, actually, you'd be visualizing it at that very moment, at every stage of its gestation): whether it is an existing object or not is only a matter of probability; in particular, a botanist (or maybe a bioinformatician) is more likely to conceive it, since he knows better the laws that describe the growth of an efflorescence.
Now I provide some hypothetical clues in order to help define "thinking" (it would also be valuable for answering A.K.Pathak):
- "A nervous signal is a sequence of nodes (cells)."
- "Being conscious of some signals through their representation in a known language (visual, acoustic, tactile, abstract, etc.)" means "Recording" them.
- "Associating [by resemblance] some current signals with previously recorded ones" means "Remembering".
- "Visiting a sequence of nodes in the same order as they appear in a previously visited signal [, possibly ignoring the presence of some nodes between n1 and n2 in the recorded sequence (these two nodes belonging to both)]" means "Recognizing": the longer the sequence, the better the recognition.
And so "Thinking" could be "Miming/Simulating the path of certain signals acting on current feedback in order to recognize previous patterns".
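The definition of "Recognizing" above (revisiting recorded nodes in the same order, possibly skipping some in between) can be sketched as a greedy in-order match of the recorded sequence inside the current one. This is just an illustrative toy; the node labels and the scoring are my assumptions.

```python
def recognition_score(current, recorded):
    """Count how many nodes of `recorded` reappear, in order, within
    `current`; intermediate nodes in `current` may be skipped.
    The higher the count, the better the recognition."""
    it = iter(current)
    # `node in it` advances the iterator, enforcing the original order
    return sum(1 for node in recorded if node in it)

# Hypothetical node sequences (signals as paths through cells)
recorded = ["n1", "n4", "n7", "n9"]
current  = ["n1", "n2", "n4", "n7", "n8", "n9"]

# All four recorded nodes reappear in order: full recognition.
print(recognition_score(current, recorded))  # 4

# Only a prefix of the recorded path reappears: weaker recognition.
print(recognition_score(["n1", "n4"], recorded))  # 2
```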
Replying to A.K.Pathak: it seems you have noticed the similarity between my example and Schrödinger's Cat paradox. So, how plausible is a "multi-tasking" brain? And how deep is its kind of multi-tasking? As you say, a current belief in neuroscience is that the brain performs one task at a time, switching from one to another by simply storing the first as a kind of record, then turning to the second. Now, the question is: if the first is simply put aside in order to be recalled within a certain time lapse, and otherwise "forgotten", what happens when every previous thought's latency time elapses? You forget everything! But if every thought were encoded by multiple sets of nervous signals, each of which is sufficient to characterize it (so, linearly dependent), forgetting it would be harder - because not every set would be recorded at the same time. This means that "thinking" would mean "strengthening memory": creating [similar] alternatives for any thought is the key to safeguarding memory from brain damage - as a matter of fact, there is evidence that surgical removal of brain material does not "delete" specific memories but weakens them: the same memories become gradually vaguer (like the projection of a snipped fragment of a holographic plate).
As for the timeline-like representation you provide, it would rather depict the "Record-Remember history", whereas it is not certain that thinking (as defined above) implies a unique recognition - which means the graph may not be a function.
It would be optimal to provide a definition of "thinking" that encompasses non-locality, so if you have any suggestions, you are welcome.
So, without much thought on the subject, two models come to mind: BDI (Belief, Desire and Intention) in MAS (Multi-Agent Systems), and a formalism which I believe is the way to prove such models: process calculus and the Algebra of Communicating Processes.
@ P.T.Breuer
Can you please specify why it would be more appropriate to speak about "beliefs" instead of "thoughts"? I agree (see my first answer) that it is more correct to say "believes" instead of "thinks" - then using epistemic connectives; but, in my opinion, although these two concepts are strongly linked to each other, they are not equivalent: I see "thinking" as the process which shapes "beliefs".
Accordingly, we can say that "A thought is consistent [with a belief] insofar as it does not speculate (add) propositions not deducible from the records it is meant to be associated with" (e.g. Record: "The clock fell" -> Thought: "A clock is no longer in its original position" -> this strengthens the belief that it fell), where its "meaning" refers to the idea of mimicry/simulation mentioned above, and the "would-be-associated-with" records are themselves the arguments of the belief/knowledge. On the other hand, "A belief is consistent insofar as its properties describe the predictability of a certain behavior".
@ S.Gruner
Thanks for suggesting those readings; actually, I have just read about von Neumann's opinion regarding the differences between machine language and that of the brain. [I am fond of analytic philosophy.]
@Frederico:
I gave your points some thought (regarding thinking about new things) - I guess you are right, and in such a model you would rather need three sets: the two I mentioned above and a third one.
The second set must also be mappable onto the third one in such a way that set no. 3 contains all elements of sets 1 and 2 and, additionally, all possible sets containing all the possible mappings between the elements of sets 1 and 2.
In this way, using the third set as domain, it would be possible to map the signal itself as the transition from one to another of the subsets you introduced in no. 3 - admitting that each nerve cell is part of the first set (some related to outer inputs like touch, sight and so on; others, the "internal" cells, to the "thought sense" - which is actually a projection of the ordinary senses).
Besides, I reckon these subsets could have geometrical properties: perhaps they represent all possible sets of cells (nodes) within which any one can be reached through a single synapse.
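J.Gruenwald's construction of the third set can be made concrete in a toy form. Assuming finite sets for illustration (the element names are invented), set no. 3 collects the elements of sets 1 and 2 together with every total mapping from set 1 to set 2:

```python
from itertools import product

# Hypothetical toy model: set1 = sensory inputs, set2 = stored records.
set1 = {"sight", "sound"}
set2 = {"rec_a", "rec_b", "rec_c"}

# All possible total mappings from set1 to set2, each represented
# as a frozenset of (input, record) pairs so that it is hashable.
mappings = {
    frozenset(zip(sorted(set1), choice))
    for choice in product(sorted(set2), repeat=len(set1))
}

# Set no. 3: the elements of sets 1 and 2 plus every mapping between them.
set3 = set1 | set2 | mappings

# There are |set2| ** |set1| total mappings: 3 ** 2 = 9.
print(len(mappings))  # 9
print(len(set3))      # 2 + 3 + 9 = 14
```

A "signal" would then be a transition between elements of set3, e.g. from a sensory element to one of the mappings it participates in.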
@Federico: I think, this could get us somewhere.
I am not sure about the geometrical properties, though - for sure there have to be certain algebraic properties (maybe even some sort of group, if one can identify the inverse element, etc...)
Thank you very much! I will let you know how this reasoning continues, as soon as I improve it with more mathematical rigor: right now I'm working on some plausible axioms and operators based on the model that was mentioned. Again, thanks for helping me refine my ideas about it!
Anyway, further reading or advice is welcome.
If "thinking" is meant as a process, and not as a static list of thoughts (see Peter's answer), then it might be a kind of jigsaw-puzzle solving. We have a situation and we look in our memory for matching patterns. Once a piece is added, we have a new situation, and the thinking goes on. Unlike the geometric jigsaw puzzle, no piece matches perfectly, so one must build a tree of situations. However, we have limited ability to think about many situations in parallel, and the number of branches increases exponentially, so we have a good ability to neglect many branches and, of course, all their successors. Also unlike the geometric jigsaw puzzle, we have the ability to create new puzzle pieces by combining some of the pieces existing in our memory. This is important in mathematical and other cases of theoretical thinking. I believe that humans differ greatly in how they organize their internal database and handle it, so for the moment this description cannot be given with much more accuracy if it must remain general (without becoming too particular to some kind of thinker...).
We might say that, given a situation (i.e. the cover with the puzzle's full image printed on it), we look for elements in which similar attributes occur. This search can be minimal (I scrutinize small regions, piece by piece, then each time quickly infer a few simple matching patterns) or maximal (I first look over the whole situation for a long time in order to select some different, maybe complex or apparent, features, then search for a single definitive matching pattern): the first realizes a low-level discrimination and has a [small] error rate [but] one updated at each cycle, while the second realizes a higher-level one, with an error margin commensurate with the time and accuracy of the observation. I believe the first represents a way to creativity, while the second is more conservative: in this sense, I mean, creativity could be mathematically produced by odd errors occurring during pattern recognition.
Then, I must define this kind of "oddity". Given a situation, what do we do? Actually, we think about the first thing that comes to mind! It comes, and then we can't help but begin inferring from it: so, as soon as we find the first element that somehow corresponds to the situation (there are enough attributes in common), we get the starting point, and so on - but nothing says it is the best one (it is just offhand and irrepressible). Therefore, we can call "odd errors" those for which a partial matching provides (adds) alien attributes, which are then taken into account during further recognitions: some may be ineffective, others may change the subject.
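The tree of situations with aggressive pruning of branches, as described above, resembles what is known as beam search; here is a rough sketch under my own simplifying assumptions (a fixed candidate pool of pieces and a user-supplied matching score):

```python
import heapq

def beam_search(start, pieces, score, width=2, steps=3):
    """Grow a tree of 'situations', keeping only the `width` best
    branches at each step; the rest (and all their successors)
    are neglected, as described above."""
    beam = [(score(start), start)]
    for _ in range(steps):
        expanded = []
        for _, path in beam:
            for piece in pieces:
                new_path = path + [piece]      # add a puzzle piece
                expanded.append((score(new_path), new_path))
        # prune: keep only the best partial matches
        beam = heapq.nlargest(width, expanded, key=lambda t: t[0])
    return beam[0][1]

# Hypothetical scoring: how many pieces sit where the target expects them.
target = ["a", "b", "c"]
def match_score(path):
    return sum(1 for p, t in zip(path, target) if p == t)

print(beam_search([], ["a", "b", "c"], match_score))  # ['a', 'b', 'c']
```

An "odd error" in this picture would be a scoring function that occasionally rewards a partial match carrying alien attributes, steering later expansions toward a branch a purely conservative score would have pruned.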
Regarding the combining of pieces: in my opinion, its nature is closely linked to the aforementioned concept of oddity, in the measure that a record is arbitrarily affiliated with another.
I've just read an abstract of the book you reviewed: it seems to gather, and give some uniformity to, everything that was discussed here - from the idea of simulation (imitation, miming inherent to the act of thinking), to the translation into a known language to represent physiological dynamics, to the operation of interlocking (jigsaw puzzle). I am definitely quite curious to explore what you speculate on, especially the concept of microlect you provide.
In order to quicken the discussion, would you mind describing an example of yours? It would be as valuable as it would be appreciated.
Now I think I understand your concept of microlect better: it represents the vocabulary regarding a certain meme, or a group of them. I think it is a good instrument for comparing "imagination machines", because you can develop operators to study the relationships between different microlects - I assume that a person has many of these.
But consider this case: you have two microlects that assign two distinct meanings to the same term. In a given circumstance, which mental model will the individual consider? Intuitively, the closer it is to the situation, the higher the chance... And yet I find it hard to believe that the other is simply dismissed, especially if deeply rooted (in which case it might even prevail!). I think mental models are context-free, and that, if codominant (or if there simply exists no relation by which one could prevail over the other), both concur to outline the response, mutually interfering. Then we would speak of "radicalism" whenever one subjugates all others, and - since there is no progress with extremism - say that the evolution of thoughts (and of memes themselves) depends on the participation of many models in the same evaluation. The scope of a microlect evolves in time: in similar circumstances, years later, an individual could interpret the same phenomenon differently, although both memes take part. This is how education works.
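The idea that two codominant microlects both concur to outline the response, weighted by closeness to the situation, could be toy-modeled like this (the terms, meanings and similarity numbers are all invented for illustration):

```python
# Two hypothetical microlects assigning distinct meanings to one term.
microlects = {
    "zoology":   {"mouse": "rodent"},
    "computing": {"mouse": "pointing device"},
}

def interpret(term, context_sim):
    """Blend candidate meanings in proportion to each microlect's
    similarity to the current context, instead of dismissing the
    weaker model outright."""
    total = sum(context_sim.values())
    return {
        ml[term]: context_sim[name] / total
        for name, ml in microlects.items()
    }

# At a desk with a computer, the computing microlect is closer,
# but the zoological meaning still interferes with weight 0.2.
print(interpret("mouse", {"zoology": 0.2, "computing": 0.8}))
```

"Radicalism" would then correspond to one similarity weight collapsing to 1 and all others to 0, so that a single model subjugates the response.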
Yes, I used "radicalism" in a more general sense. A person who thinks cats are better than dogs is an extremist in that sense.
Hello,
I have no complete model of "thinking" in mathematics or logic, but I believe that my "layer logic" is one step in this direction, as it can deal better with self-reference and is easier than classical mathematics in handling infinity.
Just have a look at this thread:
Yours
Trestone
https://www.researchgate.net/post/Is_this_a_new_valid_logic_And_what_does_layer_logic_mean
The signal can definitely be represented as mediated by neurons. Alas, a bijection between the subsequent neurons involved in the transmission is about as elucidative as "machine code": being a low-level programming language (i.e. close to the hardware), it does not provide any hint that would effectively describe the process [of thinking] above.
After all, the brain is essentially an interpreter: it receives impulses from the outside, then translates them into a more abstract language so that it can compare them with others more effectively; without abstraction, the abilities to think and remember are pointless. In order to be conscious, a thinking machine should at least recognize the domain[s] (which could actually be microlect[s]) to which the elements it uses to postulate its propositions belong: it is not a problem of codomain (you could define, as J.Gruenwald suggested, a set of mappings as codomain), but of domain - since, if you have a unique homogeneous set (as it is, because of the low-level syntax you advance), you cannot discern effective groups for such a discrimination. By "effectiveness" I mean the capability to segregate in order to refer to something in particular.
Your architecture is lacking in terms of extrapolating a law by which these groups are recognizable among all the possible parts of such a set - what tells you that the progressive activation of a particular chain of neurons implies a one-to-one relation to a given model? Actually, nothing; moreover, there is evidence that such a connection is non-linear (I spoke about this before).
I think you should start by modeling what is necessary in order to think: basically memory status, energy flow, etc.
The purpose of thinking should always be such that the result tends to decrease entropy and increase the structured states of memorized information: I mean the entropy inside the thinking entity. So, logically, the more thinking it does, the more the entropy outside tends to increase. But that is another story...
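The claim that thinking structures internal information can be phrased with Shannon entropy: a belief distribution concentrated by thought has lower entropy than a uniform one. A minimal illustration (the distributions themselves are made up):

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Before thinking: four hypotheses, maximally unstructured.
before = [0.25, 0.25, 0.25, 0.25]
# After thinking: the information has been structured around one hypothesis.
after = [0.85, 0.05, 0.05, 0.05]

print(entropy(before))  # 2.0 bits
print(entropy(after))   # lower: the internal state is more ordered
```

The entropy exported to the outside (heat, discarded alternatives) is not modeled here; the sketch only shows the internal decrease the post describes.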
I think that the ultimate state of thinking should be able to "stop time". Or, if that is not legitimate to say, it would then be able to "travel" at infinite speed, so as to interpret the different states of information (possible routes) through the past and back to the future. The conundrum in such a possibility is that, for the thinking entity, time should be infinite too. An example may be the way our mind interprets dreams: a dream seems very long, while in reality (for an external referent) it does not last that much.
The hard part is: where can one begin? Even we humans cannot think so as to fulfill the ultimate meaning. Our consciousness is a barrier that jumbles the process with external signals.
One should also know the boundaries of thinking, such as finding the answer to: "Why are things what they are and not otherwise?"
Sorry I did not give you a concrete mathematical object that could model thinking. But mathematics - is that not what thinking invented?
I think that trying to dive directly into modeling thinking, as if it were a closed process sustained by itself, is like trying to find something in a big picture that is zoomed in a million times. I think we should zoom out and see it as a whole of different but related processes. Why don't we discuss Creation, Resistance, Selection, Destruction and Evolution? Together, they sound to me like the ultimate state of thinking.
It is very unlikely that thinking can be completely represented mathematically, for otherwise computers would replace humans with full capacity. Human thinking is not completely mathematical; imagination, one of the distinctive cognitive parts of our thinking, can by no means be described mathematically. That is why computers, whose information is all coded mathematically, will never replace humans, however knowledgeable and logically well equipped they are.
However, mathematics enables us to augment our thinking capacity, just as technology augments how we live and do things efficiently.
@ A. Ayadi
I find a consideration based on entropy really interesting. You said that thinking decreases entropy inside the thinking entity; indeed, the formation of sentences in abstract languages seems to be in line with this: there are lots of possible phonemes, but only a small set of these is [or has been] selected in order to have a clearly defined meaning for whoever knows the grammar and lexicon. Then you say that the entropy outside would tend to increase; but, as language provides an organized and ordered way to communicate with others, such communication should be entropy-decreasing... unless we say the content of a message is never really grasped, and thus it causes confusion. But then the discussion takes a Pirandellian turn (in which case we should introduce a [not at all absurd] kind of "mismatching entropy").
Otherwise, if, before the communication, we consider the phonemes above (or everything that can be used as bricks of thought) as being collected in tubes of paint (order), the assembled sentence would resemble a mural (disorder/less order), a painting with some complex patterns; but in this case, the first assertion of yours I quoted falls.
Regarding "light-speed thinking", would you mind considering my last answer in https://www.researchgate.net/post/What_information_transmits_excited_neuron-detector_What_sense_is_this_information ?
@D. A. Lakew
Mathematics and logic are essentially languages for describing recurrences in consistent reality and its behaviors. I think that, insofar as thinking is real, it can be represented by some model. Moreover, as I discussed at the beginning of this question, the act of creating something "new" is reducible - by imitation - to the juxtaposition of previously encountered or reasoned things: once the set of rules that makes this composition "juxt-" is established, you can start studying "imagination dynamics". In addition, I could say that the outcome of such imagination machinery (allow me the neologism: "imaginery") can extend the set, since it may extract and result in new rules - and this leads to something evolutionary.
I agree, but what about a common way of describing them? Keeping in mind Turing's machine, a sort of language above languages...
Yes, I used Turing's [universal] machine as a paragon, as it represents a way to implement any particular machine using the "instruments" of a single one. Analogously, a "universal microlect" - I would so call a microlect with infinite spatial and temporal scope - would be a jargon above jargons; nay, a microlect which absolutizes any other microlect, extending their scopes to infinity - while "normal" microlects, interacting with each other, mutually extend or reduce in the measure they are "aggressive".
I understand your argumentation... If it were to contain those exceptions, it should rather be a composite of verbal, mathematical and diagrammatic tools; but it's difficult to imagine...
The classic work that deals with this issue is George Boole's An Investigation of the Laws of Thought
http://www.gutenberg.org/files/15114/15114-pdf.pdf
Based on my experience in computer programming, teaching, problem solving, and neuroanatomy [I am a pathologist], it is my opinion that we first have to address the concept of memory before dealing with the thought process.
I have postulated that memory is associative and may be established along specific axonal interconnections between neurons, with some structural or other functional support by astrocytic elements. In addition, memory has further characteristics: it is triggered by images, sound, smell, taste, and touch, causing a cascade that leads to one or more stored experiences and associated memories being 'loaded into our conscious work space'.
Now, thinking is the act of applying a series of mental actions to external stimuli and/or internally stored memories, and this includes learned schemas retained as an internal list of actions, either physical and/or mental, as well as simple events.
Therefore, a potentially useful model would be the development of an associative database where any unitary memory or schema could be stored and then linked together in any order - linear or nonlinear, circular or noncircular - where each link can be absolute or conditional.
Then, one could create a program allowing for the initialization of these memories, links, and logical conditions with additional code that allows for the activated "Brain" to:
Add/modify/delete stored memories
Add/modify/delete associative links
Add/modify/delete conditional logic that activates those links
When time has allowed, I have experimented with this concept and it does look promising.
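As a sketch of what such a program might look like (this is my own toy reconstruction, not Dr. Gusack's actual implementation), here is an associative store with add/delete of memories, absolute and conditional links, and a recall cascade:

```python
class AssociativeBrain:
    def __init__(self):
        self.memories = {}   # id -> stored content
        self.links = []      # (src, dst, condition or None)

    def add_memory(self, mid, content):
        self.memories[mid] = content

    def delete_memory(self, mid):
        self.memories.pop(mid, None)
        self.links = [l for l in self.links if mid not in (l[0], l[1])]

    def add_link(self, src, dst, condition=None):
        """condition=None makes the link absolute; otherwise it is a
        predicate on the current context (conditional logic)."""
        self.links.append((src, dst, condition))

    def recall(self, start, context):
        """Follow links whose conditions hold, collecting the cascade
        of associated memories loaded into the 'conscious work space'."""
        loaded, frontier, seen = [], [start], set()
        while frontier:
            mid = frontier.pop(0)
            if mid in seen or mid not in self.memories:
                continue
            seen.add(mid)
            loaded.append(self.memories[mid])
            for src, dst, cond in self.links:
                if src == mid and (cond is None or cond(context)):
                    frontier.append(dst)
        return loaded

brain = AssociativeBrain()
brain.add_memory("smell_pie", "smell of apple pie")
brain.add_memory("grandma", "grandmother's kitchen")
brain.add_memory("recipe", "the pie recipe")
brain.add_link("smell_pie", "grandma")                      # absolute link
brain.add_link("grandma", "recipe", lambda c: c["hungry"])  # conditional link

print(brain.recall("smell_pie", {"hungry": True}))
```

Circular link chains are handled by the `seen` set, so the cascade terminates even on nonlinear, circular associations.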
I look forward to hearing what you think.
I have the honor to be, respectfully yours, Mark Gusack, M.D.
304 429-6741 x2477
If we consider the brain (meant as a memory device) as an elaborator of what you call schemas, the domain of these should be a topographic space, given that a schema is not simply a linear list but resembles a graph, with plural conditions to satisfy for parallel nodes.
As you say, the senses animate a cascade of associations, so I find it logical that, at each step, external stimuli and stored memories (input signals) are linked/applied to a schema in order to test their "meaning of association": the initial input signal joins with some aspect (node) of some schema, then the output joins with "neighboring" nodes [and schemas - I'm inspired by non-synaptic transmission between neighboring neurons], whose outputs join with respectively near nodes and schemas, and so on - each step inhibits some routes; finally, after enough iterations, some schema proves to be stable (enough internal conditions - the nodes' trigger points - are satisfied and there is no signal dispersion toward close schemas) - it might be precisely what is 'loaded into our conscious work space'. Maybe the brain as a memory device is a simplifier: it trains itself to keep track exclusively of links between stable schemas - what we may call the "architecture of memory".
Furthermore, what if I call these schemas... memes?
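The cascade just described (an input joining schema nodes, activation spreading to neighbors, partial inhibition at each hop, and one schema finally proving stable) resembles spreading activation over a graph. A rough sketch, with the decay rate, the trigger threshold and the example graph all being my assumptions:

```python
def spread(adjacency, start, steps=5, decay=0.5, threshold=1.0):
    """adjacency: node -> list of neighboring nodes (a schema graph).
    Activation spreads outward from `start`, attenuated by `decay`
    at each hop; the 'stable' schema is the non-input node whose
    accumulated activation exceeds `threshold` the most."""
    activation = {start: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for node, a in activation.items():
            for nb in adjacency.get(node, []):
                # each hop transmits a decayed share of activation
                new[nb] = new.get(nb, 0.0) + a * decay
        activation = new
    winners = {n: a for n, a in activation.items()
               if a >= threshold and n != start}
    return max(winners, key=winners.get) if winners else None

graph = {
    "stimulus": ["schema_A", "schema_B"],
    "schema_A": ["schema_A2"],
    "schema_A2": ["schema_A"],   # mutually reinforcing (stable) loop
}

print(spread(graph, "stimulus"))  # 'schema_A'
```

Note how the mutually reinforcing loop makes schema_A accumulate more activation than the dead-end schema_B: stability here is exactly the absence of dispersion the post describes.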
Category theory, with its unique stress on morphisms, could advance a certain kind of "connectionism" between different thought systems, and perhaps abstract away from the complexities of lower-level mathematics to arrive at something more intuitive. This would allow a mimicking of both brains and minds - the connectivity between neural nets collectively constitutes a mind. For to understand and to conceive of something in thought is to consider categories and the associative relations between them.
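As a toy illustration of the categorical view (my own construction, not a serious formalization): concepts as objects, associations as morphisms, and composition yielding the derived association that chained thought produces:

```python
def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x)).
    Composition is associative, as category theory requires."""
    return lambda x: g(f(x))

# Hypothetical morphisms between concept 'objects':
# World -> Features, then Features -> Concept.
perceive = lambda stimulus: {"red", "round", "sweet-smelling"}
recognize = lambda features: "apple" if "round" in features else "unknown"

# Thinking as the composite morphism World -> Concept.
think = compose(recognize, perceive)
print(think("fruit on the table"))  # 'apple'
```

The point of the abstraction is that only the connectivity (which morphisms exist and how they compose) matters, not the internal complexity of each object, which is exactly the "connectionism" suggested above.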