Consciousness defies definition. We need both an understanding of it and a metric to measure it. Can trust provide both, even if in a limited fashion?
Preprint Consciousness: The 5th Dimension
In my view, both "consciousness" and "trust" may be naturally correlated by virtue of their common origin in "mind"; hence, you may well be right in your measurement endeavours.
A detailed (indirect) answer to your query may be available at
https://www.amazon.com/Conscious-Thinking-Chandra-Bhushan-Dwivedi/dp/1482884232.
It is not clear whether you are speaking here about consciousness or conscience.
It is not only by trust that consciousness can be measured. For every action of our life we must have faith, trust, in that action; with this we can turn to our inner urge and the divinity within us, which may direct us to move towards consciousness.
Our body is connected with our health, and for our problems we must seek care through a physician. However, it is in this body that our soul resides, and our mind and brain, in tune with our heart, help us to understand ourselves. This is a gift every human being has, but whether they tune into it is a matter of destiny, and destiny is controlled by our creator, not in our hands.
With the above qualities of mind and heart, our consciousness offers us a direction for our actions, in which we must have faith and trust.
This is my personal opinion
The presumption of measurable trust is unusual. In work I have explored, a trustworthy agent is considered one that has the capability of violating assumptions of dependability and is trusted not to do so. That is, breach of trust is a possibility. When a breach is impossible, trust is not required. This arises in the context of security systems and human affairs. See http://orcmid.com/blog/2008/05/trust-but-demonstrate.asp
Quantifying trust strikes me as a peculiar reduction. It is unclear to me how information theory is a factor. In particular, this (higher-order) view of trustworthy entities includes what the entity does to remedy a breach and to demonstrate care for those (potentially) impacted.
For consciousness, one might want to consider awareness in some sense combined with how perceptions are tied into recognition of (similar) previous experiences and selection of behaviors that seek future outcomes. This might involve what we consider formulation of beliefs and their revision in the light of experience. An important aspect to consider is the awareness of other conscious entities as part of one's consciousness, and the prospect of developing communication at some level of reliability despite the inability of one entity to "know the mind" of another.
Hello Rohit Manilal Parikh, Dennis Hamilton, and all: I thank you all for your answers; they are all important for broadening the discussion, and I would like for now to address some specific points.
To Dennis: it is explained in ref. 2 that trust can, in a more limited fashion, be instantiated as "that which can break your security design" -- a view often used in cybersecurity. However, trust is more than that, following the abstract definition mentioned. The reason to follow the abstract definition is that all instantiations are then included, not just one (ref. 2, op. cit.).
To Rohit: we are excluding overly variable concepts, such as faith. Alternatively, you can view this as saying that trust is needed alongside faith, and faith can be treated by others, not in this approach. The objective is, mutatis mutandis, to separate information from whether the information is correct or not, as done in information theory and explained in ref. 2, op. cit.
Cheers, Ed Gerck
I need some help understanding the Ed Gerck reply just above. I don't understand the point about all instantiations versus one. How does that relate to my comment? Is this a presumption about transitivity of trust?
I see in your "Toward Real-World Models of Trust: Reliance on Received Information" that the general definition is "trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel." So the issue is about the trustworthiness of the source + channel?
PPS: Coming back to the question at hand, I have difficulty envisioning how consciousness fits into this reduction. My remarks about the nature of trust are with respect to social trust, of course. Hence my fascination with the Solomon and Flores book, "Building Trust in Business, Politics, Relationships, and Life."
Hello Dennis Hamilton and all, To your questions, please refer to the following passage in [2]:
The author considers (and the paper shows) that an abstract definition is much more general, and preferable, than an explicit definition that would depend on a particular set of environment assumptions. The different environment assumptions then represent nothing more than different stances for the abstract definition of trust, not different concepts of trust. Semantically, the abstract definition of trust is a logical proposition which is assumed to contain the Fregian [17] seed-thought for the full channel, "channel" as [Ger97], which may highlight the differences and similarities with the given definition of trust, above -- and also does not use any a priori uncertainty models. Mathematically, the author views an abstract definition as an abstract class, which can be represented by appropriate operators in almost any number of formalisms or stances, that may not be isomorphic to one another and which can be calculated in specific reference frames or observer coordinates. Such operators do not have to be transformable into one another and can directly yield final values -- which operators and values, clearly, may be very different as a function of formalism and reference frame but which, nonetheless, result all from the same abstract class.
See also the Abstract.
Cheers, Ed Gerck
The real association between consciousness and trust is that a human being who enters a state of unconsciousness (sleep, general anesthesia, trauma, syncope) trusts that he or she will re-enter consciousness in the same world he or she was in prior to losing it.
If consciousness seems as hard to define as space and time, and is not just human, organic, neurological, social, a collective effect, or based on each individual's behavior, as proven by Artificial Intelligence passing the Turing test, then we can consider how to measure it accordingly.
Space is what we measure with a yardstick, and time is what we measure with a clock, although space and time are much more, and can even change into one another, in physics theory and experiments. We can entertain the same with consciousness: we can measure consciousness with a "yardstick" based on a common denominator of ALL factors, in Information Theory terms.
That yardstick has been researched since 1998 [1] above, vetted by thousands of references and current practical use in theoretical computer science, as well as in networking protocols and other applications, and is given by an abstract, implicit definition in Information Theory terms: "Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel." [2] above.
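As a toy illustration (my own sketch, not taken from refs. 1-2), the definition can be read operationally: any "trust me" data sent in-band can be rewritten by the channel itself, for example by a man-in-the-middle (MITM) attacker, so verification must rest on something established outside the channel. Here that out-of-band element is modeled, for concreteness only, as a pre-shared key:

```python
import hmac
import hashlib

def send(message: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Sender attaches a keyed tag; the key was agreed OUTSIDE the channel."""
    return message, hmac.new(key, message, hashlib.sha256).digest()

def mitm(message: bytes, tag: bytes) -> tuple[bytes, bytes]:
    """An in-channel attacker can rewrite anything sent in-band --
    including any 'trust me' claims -- but never learns the key."""
    return b"pay the attacker", tag

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"agreed out of band"       # the trust anchor: never crosses the channel
msg, tag = send(b"pay the grocer", key)
assert verify(msg, tag, key)              # honest delivery checks out
assert not verify(*mitm(msg, tag), key)   # in-channel tampering is caught
```

If the key itself were sent over the same channel, the MITM could replace it too, and verification would prove nothing; that is the sense in which trust is essential to the channel but cannot be transferred through it.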
Then, using that implicit definition of trust, which encompasses all possible stances, we can identify and measure consciousness better, not just as a human quality, as AI has shown.
Thus, the discussion advances the hypothesis that consciousness has a yardstick as defined by Information Theory: the abstract, implicit definition of trust [1,2].
We may then hope to be able to measure consciousness in diverse situations, with a common reference in that same abstract definition of trust -- in human, animal, bacterial, viral, or any other form, including AI, down to the molecular, atomic, particle, and energy levels, even to space and time in a 5-dimensional space of "spacetimetrust".
We are led to this by the inconsistencies in the current description of intelligence, and the difficulty in establishing the boundaries of life itself, not just consciousness, where we can no longer separate the human from the non-human (e.g., AI), life from non-life.
This may also allow us to explore extra-terrestrial discoveries, as predicted by NASA for example, instead of using only the current biological model of consciousness, of life. We should not expect to find the same thing we know, which is what the current model of consciousness assumes on Earth -- but what have we been missing? Right here?
I think it is well established that passing the Turing Test says something about human judgment and not Artificial (or Mechanical) "intelligences."
My second concern is that I see no implication that a successful AI would need to possess consciousness, or that a conscious entity would need to be an AI.
Putting this on scientific grounds, what would be the experiments necessary to confirm such things?
You could justify using the HEXACO model/questionnaire (M. Ashton) by saying honesty is correlated with trustworthiness.
Hello Dennis Hamilton, Sandra Kroeker, and all: The Turing test was proposed, and is used in cybersecurity and AI, for the purposes I explained (such as: can machines think? can a machine successfully pretend to be a person?). This is easy to verify and not a controversial point. We have used it for that purpose since 1997, along with many colleagues in computer science, and you can still see it online. It is now outdated.
Dennis' second concern is not present in our studies, either way. If you define consciousness using current methods, an AI can never have it. That is just bias, as was revealed for the Turing test. Today, no person should waste too much time on a career in playing chess; programming chess might be more efficient. A human can never beat a chess bot, and a chess bot does not need any human play to learn chess.
A bot can start from zero, with just the chess rules, play against itself, learn, and surpass human level. But is the bot conscious? Is it relevant that some humans say no? How can we measure consciousness?
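Such self-play learning from the rules alone can be sketched in miniature (my own toy example, not from the refs.) with tabular Q-learning on the game of Nim instead of chess: given only the move rules, an agent playing against itself converges to perfect play, which in this Nim variant means always leaving the opponent a multiple of 4 stones:

```python
import random

random.seed(0)

# Nim: N stones, players alternately take 1-3; taking the last stone wins.
N, ALPHA, EPS, EPISODES = 10, 0.5, 0.2, 20000

# Q[s][a]: value of taking a stones from state s, for the player to move.
Q = {s: {a: 0.0 for a in range(1, min(3, s) + 1)} for s in range(1, N + 1)}

def pick(s, greedy=False):
    """Epsilon-greedy during learning; purely greedy for evaluation."""
    if not greedy and random.random() < EPS:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

for _ in range(EPISODES):
    s = N
    while s:
        a = pick(s)
        s2 = s - a
        # Negamax-style target: a win now, or minus the opponent's best value.
        target = 1.0 if s2 == 0 else -max(Q[s2].values())
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# Perfect Nim play leaves the opponent a multiple of 4 stones.
assert (N - pick(N, greedy=True)) % 4 == 0   # from 10: take 2, leave 8
```

No human game record enters the loop: only the rules and the win signal, which is the same structural point made about chess bots above, at a vastly smaller scale.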
On Sandra's suggestion: honesty is indeed an aspect already considered in trustworthiness. However, although the abstract definition of trust does not include overly variable notions such as honesty (consider, for example, honesty among thieves), one can create instances of trust that do.
The computer theoretic understanding of trust has to be minimal, abstract, to be more widely usable. That role is given by Information Theory, in terms of a communication process.
Putting this on scientific grounds, as Dennis asked, could be done in physics, in theoretical computer science, in AI, using inconsistencies.
We can be led to this by the inconsistencies in the current description of intelligence, and the difficulty in establishing the boundaries of life itself, not just consciousness, where we can no longer separate the human from the non-human (e.g., AI), life from non-life. In physics, the inconsistency will have to be discovered, and might involve the spacetime formulation, where the transformation of time into space would not follow the expected path.
Cheers, Ed Gerck
"The computer theoretic understanding of trust has to be minimal, abstract, to be more widely usable. That role is given by Information Theory, in terms of a communication process. "
I don't understand how the necessity of that is arrived at. I see the claim. What supports it as a deduction? Also, what does computer-theoretic have to do with it?
And what can be its empirical confirmation? And what would constitute a confirmed refutation?
Finally, I am not the one who poses a question about consciousness tied to such a notion of trust. Isn't that the question at the top of this thread?
Ed, just recently, I had a fun conversation with an elderly retired laboratory manager. In a joking mood he asked me and another colleague: who are the experts?
It reminded me of two things: my recent dispute with dear Dragan Pavlovic here ( https://www.researchgate.net/project/Philosophy-of-Science/update/58f8c8cf82999cfc94623313?replyToId=5909cfc182999ca63adf17eb ) about professionalism, as well as a video interview where Feynman talked about how in high school he got into the Arista, "a group of kids who got good grades", and also his other words from that interview: "You have no responsibility to live up to what other people think you ought to accomplish. I have no responsibility to be like they expect me to be. It's their mistake, not my failing."
In response, with a smile, I asked the elderly manager: did you know that professional philosophers do not exist?
NB: Trust, like any system of authorities, is not a sign of consciousness, in contrast to the conscious ability to sacrifice personal benefit, or life, for the sake of a win for the structure, awareness of one's involvement in which is part of self-awareness.
The ability to doubt is a sign of consciousness, and finite automata are the best professionals/experts... of course, one can be a super expert in philosophy (-;
Hello Dennis Hamilton: You will have to read ref. 2 to see the necessity, and this is not psychology, philosophy, nor sociology -- which are useful but are not sciences. Computer theoretic, or theoretical computer science (TCS), is a subset of general computer science and mathematics that focuses on the more mathematical topics of computing and includes the theory of computation.
The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description:
TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.
Computer theoretic, i.e., TCS as defined, means the computer science used in creating software, such as for Internet protocols, to define suitable data structures and rules, usually with multi-valued logic. For example, we do not use YES/NO in the two-valued logic called Boolean, as it creates indeterminacy in practical implementations.
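To make the multi-valued point concrete (a sketch of mine, not drawn from the refs.), Kleene's strong three-valued logic adds an "unknown" value to YES/NO, letting a protocol defer a decision instead of being forced into a possibly wrong binary answer:

```python
# Kleene's strong three-valued logic: True, False, and None ("unknown").
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False          # a single False decides the conjunction
    if a is None or b is None:
        return None           # otherwise any unknown keeps it unknown
    return True

def k_or(a, b):
    if a is True or b is True:
        return True           # a single True decides the disjunction
    if a is None or b is None:
        return None
    return False

# A protocol forced into YES/NO must guess on unknowns; the third value
# lets it defer the decision instead.
assert k_and(True, None) is None   # cannot conclude YES yet
assert k_or(True, None) is True    # already decided; the unknown is moot
assert k_not(None) is None
```

The same pattern appears in SQL's NULL handling and in protocol state machines that distinguish "verified", "rejected", and "not yet determined".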
Finally, to your last point: in terms of TCS, would it be necessary for an observer to pose such a question about consciousness, tied to such a notion of trust? No. If one ignores the question, or never poses it, reality is not modified.
Whatever people understand by "trust" is included in the abstract definition given in [1], as shown in [2], where the necessity of this notion of trust is also explained. It started to be felt in large multi-user applications in 1998, in TCS, with the opening of the Internet to the public. It was already felt, though, since about 1976, with chip technology and Moore's law.
Ed, behavior under protocols and compliance with them is not related to consciousness. This is a question of formalism, of the integrity of the system, but not a criterion of consciousness. Since this is a mechanical, reproducible action, it does not require understanding, only reaction to predetermined parameters. From the point of view of formalism, it is possible to establish an agreement that some criteria and a formal model are an indicator of consciousness, and then use it mechanically (as with a Turing test, for example), but any conscious system to which you try to apply this agreement will have its own opinion about your consciousness and about the group in which your protocol of agreement operates. (imo)
What if consciousness has a gradation, ranging from complete absence?
Michael Polanyi proposed the right concept of tacit (personal) knowledge. As I already said in an earlier discussion, understanding any hypothetical model sufficiently close to the object of consciousness can only be an individual act of self-knowledge. It is impossible to understand such a model just by studying others of one's own kind. That is my opinion.
Hello Vasyl: Trust, as in refs. 1 and 2, is not about authority, and disclaims it; please read. Trust, in refs. 1 and 2, is also subjective -- trust is in the eyes of the beholder -- so different agents can indeed trust the same agent to different extents (see ref. 2). One cannot impose objective trust. But intersubjective trust can exist, as a medical diagnosis (see ref. 2) exemplifies. It is also possible to trust (completely, as always) but to zero extent, which measures to a complete absence (just like an integral over a zero set) of measured consciousness; someone else may measure differently, and that is acceptable, as with a circle viewed along the normal versus sideways.
The ability to doubt does not precede self-trust, or any form of trust -- doubt what? You can only doubt what you have internalized, crossing a barrier between outside and inside for analysis, i.e., what you trust in the first place. If not internalized, it is not doubt but rejection, also possible and analyzed with regard to trust, as a bias, in ref. 2.
Ed, I see here precisely the problem of authorization, a name that shares a common root with authority. To trust I can only apply an analogy with truth. The concept of truth makes sense only within a pre-established (definitely biased) system, be it a criminal code, a medical diagnosis based on accepted symptoms, the formalism of mathematics, or any other formal system (or other communication agreement). Medicine is very sensitive to this, because the complexity of its object and the extremely wide semantics of its language are poorly formalizable for the exchange of experience and for protocol routines.
Anything that "breaks" the protocol is either part of the system (in a state of communication) or definitely beyond the capabilities of the protocol. If it is part of the system, it is already automation. Of course, a threshold approach to assessment can always be used (as we often do in practice; the IQ test is an example).
Having established an agreement once about the intellect or consciousness, you “deprive” it of the possibility of further evolution.
In my understanding of the theory of evolution via self-organization, the intellect cannot be a unitary object (that is, it is always biased at any scale).
In a concurrent evolving system, it is difficult to figure out who measures whom at the moment.
The allegory of the ivory tower for advancing the intellect must be accompanied by the understanding that it is not a guarantee of independence and protection against external factors, but rather maximally one-sided communication, because your protocols of communication and areas of interest/needs (i.e., attraction centers) are not fully backward-compatible with the far greater number of less complex surrounding systems.
It is impossible to set a standard of intelligence or consciousness: that is a quest for an unlimited model. A threshold by limited criteria is possible. But one must be aware that a subsystem that meets a certain threshold means nothing without the rest of the system: the self-organized critical state is about oppositions.
The same thing in simple words:
1) not all the people are identical for the model,
2) an individual person, if viewed in isolation from everything else, is not an example of intelligence or consciousness.
One more important detail. The existence of some protocol is a coherent state. Evolution is a coherent continuous change of protocols -- a cross-cutting process for all scales of the system. About evolution we can say for sure that there is no favorite, preferred scale or unique system. Subsystems with outdated protocols may die off in a changing landscape, which does not change the essence of the overall process: evolution is "asymptotically" unbiased.
Trust, contrary to and in distinction from confidence or authorization in a network, which requires a source, does not use authority. There are multiple ways of knowing, and none is fundamental. Some languages do not have this concept. It is missing in the social computation. So, first ask: is there a word for trust in my language, in all sources I cite, or is it somehow overloaded with confidence?
This is important, for example, in computer protocols, to defend against MITM attacks, and in finer use. Some countries did not have it originally, though. This happens in other areas: Russians can see more colors than Americans because they name them in a more comprehensive system; it is not the DNA.
In Portuguese, for example, and in Latin languages in general, there is no word for trust. Portuguese speakers only use and hear about confiança, which is confidence. But confidence requires a source, and does not represent trust as a social concept. Curiously, those societies have difficulties developing that missing linguistic concept in their collective structures, for example requiring a Pope in their religious expression when, actually, none is required, as many organizations show. Finding the "head" of a movement can also be used repressively, and is detrimental.
In normal use, these two different concepts are confused, and manipulated. Consider, for example, a MITM attacker, which can serve as a model for social events such as con games. Computers can also play con games, and that is a major tool in hacking techniques. Computers, for example, learned to hack chess, using techniques that humans never dreamed of in thousands of years.
Thus, the use of trust is special and necessary. Let us not overload it with confidence, to begin with -- there is no external authorization or authority in trust. It is more holistic, more inclusive, more synergistic, less dependent. In theoretical computer science, the distinction becomes critical, you must not mix these concepts -- they are different cardinal systems.
Thank you, Vasyl, for mentioning Michael Polanyi. When I first read of his notion of focal versus subordinate attention, it struck me as very relevant to the way that software is useful, and even to how the stored-program concept moves between procedure and data, etc. That has stayed with me to this day, and it figures in how I introduce software engineering into a model of computation. The idea of self-knowledge is also interesting with respect to consciousness.
Ed Gerck , I do not see how an appeal to Theoretical Computer Science provides a mathematical structure that can be interpreted usefully as a theory of trust and especially as a bridge from that reductive notion of trust to consciousness. I don't find this very scientific. I think you and I have exhausted this matter, however.
PS: I do commend the "1.2 Three Levels of Communication Problems" discussion in Warren Weaver's "Recent Contributions to the Mathematical Theory of Communication," found in conjunction with Shannon's paper in "The Mathematical Theory of Communication." I note that Weaver considers that Shannon's work -- which applies to the reliability of the subordinate (i.e., encoding) level -- has some bearing on the semantic and what is called the effectiveness level of communication. It appears that Level A is not the problem with successful communication on the theme of this thread, since the texts are successfully communicated. Our difficulties lie elsewhere.
Reflections on trust vs. confidence, computer theoretic methods, and references are available in the technical report "Overview of Certification Systems: X.509, PKIX, CA, PGP & SKIP"
and in other versions since 1997 (search Google), with more than 133 peer-reviewed citations in total and thousands of informal uses in reports. The Abstract is given below.

Cryptography and certification are considered necessary Internet features and must be used together, for example in ecommerce. This work deals with certification issues and reviews the three most common methods in use today, which are based on X.509 Certificates and Certification Authorities (CAs), PGP, and SKIP. These methods are respectively classified as directory, referral and collaborative based. For two parties in a dialogue the three methods are further classified as extrinsic, because they depend on references which are outside the scope of the dialogue. A series of conceptual, legal and implementation flaws are catalogued for each case, emphasizing X.509 and CAs, which helps to provide users with safety guidelines to be used when resolving certification issues. Governmental initiatives introducing Internet regulations on certification, such as by TTP, are also discussed with their pros and cons regarding security and privacy. Throughout, the paper stresses the basic paradox of security versus privacy when dealing with extrinsic certification systems -- which is very important in voting systems. This paper has benefitted from the feedback of the Internet community, and its online versions received over 250,000 Internet visitors from more than 80,000 unique Internet sites in 1997/2000. The paper was also presented by invitation at the Black Hat Conference, Las Vegas '99. THE BELL is publishing the first part of the paper. The footnotes, references and the full PDF version, as well as the original (larger) HTML version, are available at www.thebell.net/papers/
PPS: In mentioning Warren Weaver's "Recent Contributions to the Mathematical Theory of Communication" I would be remiss if I failed to acknowledge the section 3, "The Interrelationship of the Three Levels of Communication Problems." There Weaver climbs the ladder of inference, imagining that the theory of communication applicable to the successful creation and conveyance of a communication via signaling over a channel can be raised up to apply to conveyance of the (intended?) semantics and even elicitation of an intended action. This part is pure speculation, with Weaver supposing that such an upraising is not particularly technically difficult. We are now a long way from 1949 when that prospect was suggested and I suggest we remain stuck with the speculation alone (along with contemporary speculations about quantum theory and the universe as computer). What seems apparent to me is the lack of scientific foundation to inference at higher levels than the basic notion of information with respect to signals.
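As a minimal reminder of what Level A does quantify (a sketch of mine, assuming nothing beyond Shannon's definition), the entropy of a source depends only on symbol statistics, never on meaning:

```python
from collections import Counter
from math import log2

def entropy_bits(text: str) -> float:
    """Shannon entropy of the empirical symbol distribution, in bits/symbol.
    Level A sees only these statistics, never what the symbols mean."""
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in Counter(text).values())

# Two anagrams have identical symbol statistics, hence identical Level-A
# information content, whatever their semantics (Level B) or effect (Level C).
assert abs(entropy_bits("trust") - entropy_bits("strut")) < 1e-12
assert abs(entropy_bits("abab") - 1.0) < 1e-12   # two equiprobable symbols
```

That "trust" and "strut" are indistinguishable at Level A is exactly the gap Weaver speculated about: nothing in the signal-level theory distinguishes meaningful from meaningless text.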
Michael Polanyi is well-known, albeit tangential to this discussion. Trust, contrasted with confidence, finds an explanation in his idea of the self-co-ordination of independent initiatives leading to a joint result that is unpremeditated by any of those who bring it about. This was frequently adopted by Stefferud and many in our group (see, for example, ref. 4, below) as we worked to verify the abstract definition of trust. We came to the same idea as Polanyi, as an instantiation. In that sense his ideas, though tangential to the TCS effort, were useful.
His ideas were perhaps still too positivist; he used forced language with "all" and "any" -- frequently false in logic -- even though he was against positivism. He denied that a scientific method can yield truth mechanically, which we can do today with AI, even far away in space probes. All knowing, he affirmed, no matter how formalised, relies upon commitments; but quantum mechanical collapse can be postponed, as can computational collapse with hash functions in TCS, for higher gain. He denied, contrary to Turing, that minds are reducible to collections of rules, and today we have AI surpassing the mind in many aspects that we considered quintessential to mind, such as the games of Chess and Go. He could not envision consciousness without mind. This discussion takes the other road.
Further, Polanyi contradicted himself as a natural scientist by arguing that the information contained in the DNA molecule is not reducible to the laws of physics and chemistry. Also, in self-contradiction, he argued that a free market economy should not be left to be wholly self-adjusting, whereas he previously defended the absolutist, positivist-like view that "Any attempt to organize the group ... under a single authority would eliminate their independent initiatives, and thus reduce their joint effectiveness to that of the single person directing them from the centre. It would, in effect, paralyse their co-operation."
As we say in ref. 3, above, such external regulation is yet necessary for effective progress, as we see here in RG as well, in society, and civilization itself -- but a single point of control can become a single point of failure:
"The author does not believe that strengthening centralized control and making it a single handle of control is a solution, because such control then becomes a single point of failure."
and we caution that,
"... the answer does not lie in an increased centralized control, that would be impossible to attain. .. Internet control must be decentralized in order to be effective."
It has already turned out that the abstract notion of trust in [1] was necessary and sufficient, as proven in numerous protocols and in billions of messages every day. More details in refs. 1, 2, 3, op. cit., and 4.
The discussion now is whether that same abstract definition of trust is effective to measure consciousness, like a clock is effective to measure time, or a yardstick to measure space. Not necessarily direct in all situations, as one cannot use a yardstick when a target is moving, but it may be done indirectly.
Mind may not be necessary for consciousness, contrary to Polanyi and others, as we enlarge our vision to online, AI, extraterrestrial life, and matter itself.
[4] Technical report: "On ABSTRACT, OBJECTIVE, SUBJECTIVE and INTERSUBJECTIVE Modes"
That Polanyi theorized some matters in error does not invalidate all of his effort.
In addition, we now know that free market economies, if they ever exist, are not wholly self-adjusting with respect to externalities, since free market economies are not closed systems in reality. That does not imply a single-authority system, so perhaps Polanyi should not be labelled as binary thinking in this case.
It strikes me that Polanyi must certainly have meant truth in a fashion not corresponding to the Ed Gerck notion of truth being accessible to an AI.
I don't believe that successful chess-playing programs qualify as AI, although they do demonstrate that intelligence is not required to play chess successfully. I don't know about the more recent demonstration of Go mastery, but I suspect that it does not require an AI either. I concede the achievement of successful heuristic approaches, and the computing power available to apply them, beyond the capacities of human opponents.
I love playing computer adventure games of the highly-animated, cinematic form. My favorites are termed third-person shooters because one can observe and operate a character without being trapped behind the character's eyes. I am currently working through "Shadow of the Tomb Raider," a great demonstration of the genre. That the operation of non-player characters and other entities that appear to exhibit agency is sometimes claimed to be evidence of AI is not much evidence for that claim, whatever its appeal in popular culture.
Ed, confidence is the opposite of doubt. This refers to learning/mechanical replication, not science/thinking.
I am a primitive person; I like simple rational things. I don't like lengthy or poorly structured texts, especially those based on an abundance of special terminology, abbreviations, personalities, etc.
Some texts here are an excellent example of what I said above, where even the summary appeals to (external) authority and not to (internal) content.
I also don't like refined formalism that loses a direct connection with semantics; I dislike it just as I dislike the Turing tarpit. Both equally interfere with thinking.
There is a multitude of things with various formulations and definitions but a close meaning. Of these, I prefer to choose the most simple and transparent for me (when you try to understand something, the task is to minimize the formalism and biased labels in order to see what is behind them).
Ed, I am not interested in Polanyi as a formal label for everything that can be associated with him. I was speaking only about tacit knowledge.
Accordingly, the point was that absolutely everything the system (the human individual, in this case) learns from the outside comes through the system's edge (the senses), by processing the incoming information flow, including any formal information. But the understanding of any information, including formal information, is an individual act of interpretation (here the name of Emilio Betti should be mentioned in addition). This is qualia.
Understanding something is a fundamentally individual act, and the understanding of consciousness is, moreover, self-referential.
At the same time, for the integrity of periodic structures (humans), the possibility of coherent evolution is fundamental: the possibility of being internally in the same configuration or, simply speaking, of perceiving an identical flow of information from the outside in the same way. In this context, that literally means the same (coherent) understanding.
What the invariants of transformations are, and how they affect this process of interpretation, is a separate story. I have discussed it repeatedly (for example, in dear Abdul's topic on the possibility of an effective refutation of the theory of relativity).
NB: It is necessary to begin by cutting off the roughest, fundamentally incompatible, contradictory things. Occam's razor does not work in the opposite direction. It is effective as a machete, not as a surgeon's scalpel.
By the way, I don't know whether Polanyi said that "the information contained in the DNA molecule is not reducible to the laws of physics and chemistry", but I totally agree with it: within the modern demarcation of physics and chemistry, it is definitely right.
Thanks for the support. Note that this treatment of trust stands at the intersection of theoretical computer science, AI, and information theory, but it reaches out to neuroscience, linguistics, philosophy, analytical psychology, physics, maths, and other branches. That is possible, and it is evidence of the unity of knowledge. No matter where one starts, eventually one is led to other areas that seem disconnected at first.
It makes sense to bring the personal discourse in the topic to its logical conclusion. Given all the above I must draw attention to the role of understanding in the whole process. Consciousness is a function of understanding.
The phenomenon of understanding is the most difficult process from the point of view of understanding (sorry for the tautology; it is literally about self-reference), not to mention its formalization (or even the fundamental possibility of such formalization, given that all of this lies simultaneously within the framework of holism in the form of reality, with no "beyond", in my opinion).
What is understanding?
One problem entails another. Understanding is a kind of sensation (feeling).
Then what is the feeling (in general)?
By the way, consciousness (as a derivative of understanding) is also a feeling.
Because existence is a process (the dynamics of the system), for starters we can decompose it into canonically conjugate components (the simplest "projections").
What do we have?
At the scale of a single individual, we have a spatial structure (energy in the form of dynamical memory) evolving along a trajectory from the conditional point of (birth) singularity to the moment of dissipation (decoherence), against the background of a superstructure.
It should be noted that the singularity point corresponds to a simple enough but fundamentally ready system, which is able to lose its coherent state with the generating system with respect to some set of parameters, and from then on to have its own stable evolutionary trajectory within the framework of a certain supporting supersystem.
There is a first significant reference point on this trajectory: the acquisition by the structure (system) of a new quality, self-awareness (which usually happens somewhere on the way to the age of two), since cutting off an object from non-object is the dichotomous way to cognize an object. Cutting off the subject (the self) marks the first fully conscious act of cognition for the observer and is, at the same time, an act of forming a fundamentally irremovable self-reference. That is the reason why consciousness only makes sense when there is an opposition between the system and the rest of the landscape.
Both up to this point and after it, the state of the structure continuously becomes more and more complex, moving away from the pubertal stage of the fuzzy but initially split ego (which operates in the primitive "bad-good" mode), overgrown with a complex, interconnected system of mnestic syntheses, along with the increasing division of the brain into hemispheres and the growth of connections and of the cortex.
There is a second reference point, the cutoff of the process of rapid growth (the synthesis stage) of the brain, before and after which there is some dynamics, manifested in the gradual alignment of asymmetry and the enhancement of introversion. (There are some issues related to asymmetry, because there is an uncertainty of priorities at the initial stage of brain growth, with a weak division of the hemispheres, similar to the leveling of skills in old age.)
We can definitely say that at each point of the trajectory, starting from a certain moment (the one from which there is understanding of something), a coherent state is formed in the structure of the individual (such a dynamic system), with a feedback loop representing the feeling of understanding. It is a spatio-temporal structure within the boundaries of the system, with all of its internal semantic biases (the tacit knowledge mentioned above), which is also in a state of coherence with certain biases spread through the surrounding landscape of the superstructure (other observer-like periodic structures, etc.).
The citation is: "For example, by Michael Polanyi [8], in his idea of self-coordination of independent initiatives leading to a joint result."
This is useful, as today's tendency, with bots in administration, might be toward centralization and authority as "code" -- code being both software and law, with no recourse in the system.
But keeping a focus on trust as a metric of consciousness (although the word is not present a priori in Latin languages, Korean, and others), rather than just confidence, can help provide the diversity and variety needed to correct errors in the measurement (Shannon's Tenth Theorem). Different channels may measure trust differently, but what matters is a joint result that is unpremeditated by any of those who bring it about (Polanyi).
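The error-correction idea invoked here can be illustrated with a minimal sketch: several independent, imperfect channels each report a measurement, and a majority vote among them is wrong far less often than any single channel. The channel model, the 20% error rate, and the nine-channel vote below are illustrative assumptions of mine, not part of any cited trust metric.

```python
import random
from collections import Counter

def majority_vote(readings):
    """Return the most common value among the channel readings.

    Models the redundancy idea: independent noisy channels,
    combined, out-vote the errors of any single channel.
    """
    return Counter(readings).most_common(1)[0][0]

def noisy_channel(true_value, error_rate, rng):
    """A hypothetical channel that reports the wrong bit with
    probability `error_rate`, independently of other channels."""
    return true_value if rng.random() > error_rate else 1 - true_value

rng = random.Random(42)
true_value = 1      # the quantity being "measured" (a single bit)
error_rate = 0.2    # each channel is wrong 20% of the time
trials = 10_000

# One channel alone vs. nine independent channels with a majority vote.
single_errors = sum(
    noisy_channel(true_value, error_rate, rng) != true_value
    for _ in range(trials)
)
voted_errors = sum(
    majority_vote([noisy_channel(true_value, error_rate, rng)
                   for _ in range(9)]) != true_value
    for _ in range(trials)
)

print(f"single-channel error rate: {single_errors / trials:.3f}")
print(f"9-channel majority error rate: {voted_errors / trials:.3f}")
```

The single channel stays near its 20% error rate, while the nine-channel vote drops to a few percent; adding more independent channels drives the joint error rate down further, which is the sense in which diverse measurements of trust could correct one another.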