Can we impart the art and science of 'understanding' to the machines we design, so that they may perceive and understand the world in a way similar to ours?
We should use our dormant talents, which machines can never replicate. We are the main source of metaphysics.
Thanks, Mehdi. I also believe we would be able to unleash our hidden potential, that is, to identify what other talents we may have. But we should also be careful about whether or not we let ourselves be dictated to by synthetic intelligence. God created the universe and we are His attributes, and so machines are our attributes. If we ever allow machine intelligence to take the upper hand, we may end up becoming pets. Some day, owing to advances in AI, machines may indeed start to think faster than us, so we must incorporate into those programs a real sense of humanity and rationality; otherwise, we all know the fate that awaits us.
I think mind uploading and the singularity would allow us to become more intelligent.
We can merge ourselves with machines to use their advantages; they can help us toward a better life and improve our talents. They will know us better than we know ourselves, and will respect us and help us unleash our hidden potential.
Bharath and Mehdi: well, we may have lost the respect of humankind, so it is better to be respected by machines. Our own greed and disrespect for others, particularly for exploited people, may find its answer in machines. Indeed, we can make our lives better with the help of intelligent machines, if they ever understand what "humanity" really means:
@ respect for other religions and races
@ respect for the women in our society
@ unleashing the full potential of women, recognizing that they have a soul, their own choices, and options
In a sense, the business of AI in the future should not be all about profits and investments; we may leverage these machines' skills for some greater cause. The human activity of exploring is different from that of other species and animals, yet we are all part of the same environment, so why not let the machines move up the evolutionary chain? Humankind's interest in understanding the world we dwell in, and beyond, is unbounded, but we often forget to understand ourselves clearly: what is "humanity"? So, as Isaac Asimov contended, how machines would understand this attribute of "humanity" is a challenging question that would require perception, conation, thought, love, and respect for other races; only then could such an Artificial Society really be conceived.
Our brain has an estimated storage capacity of about 2 petabytes, and that is limited. So, mind uploading would certainly involve cognitive load and would require managing that load. Still, it might be possible in some way.
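To give a rough sense of the scale involved, here is a back-of-envelope sketch; the 2-petabyte figure is the estimate above, and the link speeds are arbitrary assumptions:

```python
# Rough transfer-time estimate for "uploading" ~2 PB of brain state.
# The 2 PB figure is the estimate quoted above; bandwidths are assumptions.
BRAIN_BYTES = 2e15  # ~2 petabytes

for name, bits_per_sec in [("1 Gbit/s", 1e9), ("10 Gbit/s", 1e10), ("1 Tbit/s", 1e12)]:
    seconds = (BRAIN_BYTES * 8) / bits_per_sec
    print(f"{name}: {seconds / 86400:.1f} days")
# 1 Gbit/s: ~185 days; 10 Gbit/s: ~18.5 days; 1 Tbit/s: ~0.2 days
```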
The real question of interest, however, is whether an AI system's ability to think, learn, communicate, and interact with human beings should be personality-driven social intelligence or just an emulation of human functions. Such systems must be able to understand 'humanity' better than we do; only then can they help us lead better lives and improve our talents. That would be the real definition of intelligence...
Sigh. Why do we A. want to make machines think like us, and B. expect them to respect us, when we can't respect ourselves? Will our machines be given obsequious characteristics like C-3PO in order to let us feel that they are subservient, or be muted by choice if they are smarter than us, like R2-D2, just so that we are allowed to feel smug and safe? What rules will we think up to replace Isaac Asimov's laws of robotics, since they have proven untenable, at least in current robotic designs? And how will we, in our infinite wisdom, decide whether or not to allow an I, Robot to survive when its networked twin should not?
The day when our A.I.s start thinking for themselves is coming, and I don't think we are at all ready for it.
Graeme:
There are some points of contention concerning the term "obsequious" as a characteristic of any machines other than the drones you have mentioned (R2-D2) or C-3PO. As you have categorized those evolving machines taxonomically into such 'drones' and 'droids', it would be cumbersome for the human race to gauge their feelings and the level of anxiety they would generate based on their origin. Our mental reactions toward such machines will help specify our need for such designs in the first place. If we deem that those designs would terribly undermine the human race, perhaps there would still be enough time to retract them, though I have my doubts about that, and so the question is pertinent.
Secondly, there is still contention over whether Asimov's laws of robotics are truly untenable in the present circumstances, if the zeroth law together with the three laws is not considered enough to define robotic behavior. If that is the assumption, then it may require more psychological experimentation on the human-robot relationships of the future.
The notion that the behavior of a system or machine would be dictated by the nature of its coding is by itself very limiting, do you not think, Sir? And since you are speaking about the design secrets of machine brains, giving them a form, some states, a mind, and then consciousness, such machines may indeed not be "obsequious" in the fullest sense. But consider another point of view: equipping such machines with adaptive algorithms, such as genetic neural ensembles, would indeed call for a "Fourth Law of Robotics", in which the robot must obey the three laws of robotics together with the zeroth law, and the fourth law might be conceived as "a robot must die a natural death, much like a human", say, within a life span of 30-40 years. This may seem too quirky, but consider the dangers of adaptive coding: what new behavioral repertoires would evolve in machine minds? With the hybrid mental attributes of even a cyborg with a bionic brain, the propositional content of emotional, conscious cues might be enough to read its intentions, but not those of the droids and drones which you might call exanimate, or even sycophants.
But still, we may assume we are safe enough, since AI-based machines still lack three things:
--intuition,
--hindsight, and
--dreaming.
The last of these, dreaming, could pose real problems for conscious minds, artificial ones no less than natural creations.
So the question invariably comes to mind: can we afford to design golems and hand them a Pandora's box, unleashing their obscurities and making ourselves paranoid about those threats?
So, considering your point of view, I may say that our infinite wisdom should employ the gift of curiosity in rational ways, since this has been a topic of teleological interest ever since the dawn of artificial intelligence as conceived by Nicolas Rashevsky in the 1930s, who wrote on the possibility of conversational machines and even that some day machines would lie intentionally. Well, that would be troublesome for us humans if they start to fool us, as we are fooling them into performing tasks that they may not love to do some day!
Thanks.
Siddharta:"Our mental reactions toward such machines will help specify our need for such designs to come to shape at the first place. If we deem that those designs would terribly undermine human race, perhaps, there would still be enough time for such retractions, ??, though I have a doubt about it and so it is pertinent."
If you look at the work of the MIT Media Lab, you begin to see that the nature of robot-human interaction is in fact already being tuned by the people there, in an attempt to link robotic interaction to pleasant affects. I remember one thing they found: there is a sweet spot at which a robot becomes human-like enough to trigger the human instincts about badly formed human faces, with the result that robots that looked too close to human were ignored in favor of those that looked more like Teletubbies. They decided that it was dangerous to get too close to the ideal human figure as a body design for robots.
Siddharta:"Secondly, there is still such contention of whether Asimov's laws of robotics are purely untenable in the present circumstances, considering that the zeroth law along with the three laws if not considered enough to define robotic behaviors. If that's the assumption, then it may require one to perform more psychological experimentations on human-robot relationships of the future.
I think the problem with Asimov's laws is that they are absolutes, for which the conditions required to determine a state change are ambiguous. As such, they cannot be programmed in a "logic" language that is not also able to deal with ambiguities. Since even "fuzzy" logic has limits in its ability to resolve ambiguities, I have been spearheading the idea that "logic" is the wrong answer for dealing with ambiguities, and have been suggesting a Similarity Selection Model as a possible alternative. I will be back to deal with more aspects of your questions later, as I need to make a run to the local store. More later.
Consider, if you will, the concept of harm. Obviously the robot needs a different definition of this than a human would have, if only because robots don't feel pain; or at least we don't yet know how to teach them to feel pain, and maybe they shouldn't have to...
In I, Robot, Will Smith's character describes a sort of psychic pain: the pain of knowing that a girl was allowed to die so that he could live. Obviously the character Will portrayed was harmed by the experience, but the robot was decisive, and saved him at the cost of the little girl's life. How the robot was able to resolve the ambiguity of letting one human die while saving another was part of the question the character had a burning interest in answering. His distrust of the "inhuman choice" is part of the subplot that brings out the importance of the "Laws of Robotics".
So let us look at "rule bases", the naturally assumed mechanism by which a robot chooses a rule to apply in a specific state.
Asimov's laws are a sort of ethical rule base that transforms easily into rule => action tuples.
The problem becomes: how do we process the rules to define the actions?
What we find is that each different situation needs a different rule, and because the actions are absolutes, most of the rules result in only one of a number of actions. But these aren't the actions we are used to in robotics; these are meta-actions: in essence, programs that the robot's computer has to run in order to get the results.
So we have a number of rules that come out to "Save humans from harm caused by your actions". Not only does this require that the robot know what harm is, it requires that the robot know what a human is. "Act to save humans from harm despite ambiguity": well, how does the robot disambiguate this? "Predict harm to humans and head it off", and so on. The assumption that you can get by with 3 or 4 laws is based on the idea that you can disambiguate the laws in your rule base.
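To make the rule => action idea concrete, here is a toy sketch; the predicates and meta-actions are invented placeholders, not a real control system:

```python
# Toy ethical rule base: condition -> meta-action tuples.
# Predicates and meta-actions are hypothetical placeholders; a real robot
# would have to ground each one ("is_human", "harm_predicted") in perception.

def is_human(entity): return entity.get("kind") == "human"
def harm_predicted(entity): return entity.get("risk", 0.0) > 0.5

RULES = [
    (lambda s: any(is_human(e) and harm_predicted(e) for e in s["entities"]),
     "RUN_RESCUE_PLANNER"),          # First Law: prevent harm to a human
    (lambda s: s.get("order") is not None,
     "EXECUTE_ORDER"),               # Second Law: obey orders
    (lambda s: s.get("self_risk", 0.0) > 0.5,
     "RUN_SELF_PRESERVATION"),       # Third Law: protect own existence
]

def choose_action(state):
    # First matching rule wins. Note how crude this is when two humans are
    # at risk at once: the rule fires, but says nothing about *which* human
    # to save. That is the disambiguation problem.
    for condition, meta_action in RULES:
        if condition(state):
            return meta_action
    return "IDLE"

state = {"entities": [{"kind": "human", "risk": 0.9}], "order": None}
print(choose_action(state))  # RUN_RESCUE_PLANNER
```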
In logic systems, what we find is that disambiguation expands the number of laws to at least the square of the original law set, depending on the ambiguity of the laws. We can use fuzzy logic to contain the concept of humanity simply by expanding the conditions under which the rule is detected to fit the whole set of possible human actors; but what are the characteristics that the robot can detect, and should use as the set-inclusion parameters that define "human"?
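A minimal sketch of what fuzzy set-inclusion for "human" might look like; the features and weights are invented for illustration, and the false positive at the boundary is exactly the problem described above:

```python
# Fuzzy membership in the set "human", built from detectable features.
# Features and weights are illustrative assumptions, not a real classifier.

def human_membership(entity):
    """Return a membership degree in [0, 1] rather than a crisp yes/no."""
    score = 0.0
    score += 0.4 * entity.get("humanlike_shape", 0.0)
    score += 0.3 * entity.get("heartbeat_detected", 0.0)
    score += 0.3 * entity.get("speech_detected", 0.0)
    return min(score, 1.0)

# A mannequin with a hidden speaker scores high enough to cross a loose
# threshold -- the "false positive" that falls inside the fuzzy boundary.
mannequin = {"humanlike_shape": 0.9, "speech_detected": 0.8}
child     = {"humanlike_shape": 0.7, "heartbeat_detected": 1.0,
             "speech_detected": 0.6}

for name, e in [("mannequin", mannequin), ("child", child)]:
    print(name, round(human_membership(e), 2))  # mannequin 0.6, child 0.76
```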
Is a little girl somehow less human in the robot's eyes than an adult man? Or does the heartbeat affect how human the robot sees the girl as being? Should robots know the heartbeat/oxygen-debt ratios of different-sized bodies, so they could determine whether a child can be saved even after her heart has stopped beating?
I think the real problem is that Isaac Asimov designed the laws without an adequate understanding of A.I., possibly because he was an early adopter of the idea.
Siddharta:"The problem with the notion that the behavior of a system or machine would be dictated by the nature of its coding is now by itself very limiting, do you not think Sir?"
Well, that is an interesting question. If you mean, does the machine have unlimited options, the answer will always be no. If you mean that the machine can learn from its environment, and so is not limited only to the actions it was programmed with, then yes.
I think that the fantasy of an unregulated and hostile computer is just that: a fantasy. But the idea that there might be something like a meta-robot, one that acts through intermediaries whose actions do not directly impact its ethics circuits, and which can thus break the "laws" in spirit if not in text, implies that there should probably be different levels of volatility even in automatic update systems, and that therefore the "ethics" circuit should not be easily messed with.
I have been rather disappointed in the computer industry because, instead of using ROMs and other secure firmware, they have incorporated hackable flash firmware into even the most secure computers, and try to update it across the network.
Personally, I think that a secure computer system must have secure low-level routines, and if you design an ethical robot, the ethics rule base must be held in non-volatile ROM instead of flash.
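As a software sketch of that design intent (the rules, names, and hash check are illustrative stand-ins for "held in ROM"; real firmware attestation is more involved):

```python
# Sketch: treat the ethics rule base as an immutable, integrity-checked
# table. In a real system the reference digest would live where updates
# cannot reach it; here it is computed inline purely for illustration.
import hashlib

ETHICS_RULES = (
    "A robot may not injure a human being...",
    "A robot must obey orders given by human beings...",
    "A robot must protect its own existence...",
)

# Reference digest "burned in" at manufacture time.
REFERENCE_DIGEST = hashlib.sha256("\n".join(ETHICS_RULES).encode()).hexdigest()

def verify_ethics_at_boot(rules):
    digest = hashlib.sha256("\n".join(rules).encode()).hexdigest()
    if digest != REFERENCE_DIGEST:
        raise RuntimeError("Ethics rule base modified -- refuse to start.")
    return True

print(verify_ethics_at_boot(ETHICS_RULES))  # True
```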
Siddharta:"But considering another point of view of whether to equip such machines with adaptive algorithms such as using genetic neural ensembles or others, would indeed call for the "Fourth Law Of Robotics", in such shape as the robot must obey all the three laws of robotics including the zeroth law, and the fourth law may somewhat be conceived that "a robot must die a natural death as much like human", say, given a time span of 30-40 years of life. "
Shades of Blade Runner: "Do Androids Dream of Electric Sheep?"
Well, Planned Obsolescence as a 4th law? What would you hope to accomplish?
I suppose an immortal machine might get out of hand after repair and maintenance have redesigned it for the 7th or 8th time because its internals were obsolete; but survival of machines past a certain point is as yet impractical without complete replacement of parts, and would it be the same machine after such a complete refit? The answer, I suppose, lies in what portions of the machine are hardware/wetware and what portions are software/firmware, and whether the firmware from one version is plug-compatible with the next.
Currently, robots tend to require overhauls two or three times within a 24-hour period, and seldom operate for longer than a week without operator intervention.
I am not exactly worried at this time that we will suddenly get a more robust technology.
Siddharta:"So, the question invariably comes to a mind, can we afford to design Golems and give them a Pandora box so to unleash their obscurities and make us paranoid about those threats?
So, considering your point of view, i may say that our infinite wisdom should employ the gift of curiosity in rational ways since, this had been a topic of teleological interest ever since the dawn of artificial intelligence conceived by Nicolas Rashvesky in the 1930's who wrote on the possibilities of conversational capability of machines and that even some day, machines would lie intentionally. Well, that would be troublesome for us humans if they start to fool us, as we are fooling them into performing tasks that they may not love to do someday!!!"
If I were looking for a perfect slave, the robot might or might not fit that role; but my studies into artificial consciousness have nothing to do with slavery, and everything to do with self-understanding.
Can we afford to follow philosophical will-o'-the-wisps in an attempt to deny that consciousness is a predictable, and therefore modelable, phenomenon? That the way the brain works can be figured out?
Should we hide from ourselves exactly what we are, out of some fear that some machine will replace us with plastic? How then will we be able to tell when some other comes among us that can take our shape but does not work the same way that we do?
I see the development of artificial consciousness as a necessary proof that we have figured out how the brain works. Whether we choose to build machines that use that technology and sell them to the general public is a marketing decision that I simply will not make. If I do not need funding from big business to do my research, then they cannot claim it when it is done. I would prefer to keep some trade secrets so that someone else does not hack together their own copy; but can I do so and still prove my contention that it is possible to build a machine that is conscious? I am not sure that it is practical. In the meantime, I find myself pressed to defend not my own work, but the assumptions made by a scientist over three decades ago, before the word "robot" meant anything physical. Ask yourself: should we expect his assumptions to have any benefit today?
If RG has a central database that is updated from satellite databases, part of the problem is that the updates could be destabilizing the machines, causing them to fail to fully copy the data from the satellite machines to the central database. People would see this as censorship, when the real problem was a QoS failure in the database.
Prof. Graeme Smith
Realms of Ethical Consciousness: Programming Artificial Minds
----------
Well, one thing appears prominent: programming robots to enable them to feel pleasant affects could indeed lead them to perceive all kinds of pleasures, whether vulgar or decent, so I would like to understand this first, or rather grant it a priori. Undeniably, I think robots will have those kinds of emotions some day; yet I would mention that human emotions are both complex and confusing, and that makes it perplexing for machines to emulate them in full. In essence, as I have mentioned before, the design of a device, even a musical instrument, shapes the tone and resonance of its affects. So designing machines as artificial entities would certainly shape their affects and beliefs. What is essential is to have at our disposal good grounding "values", to remove the common errors of ignorance about morality with which we humans are afflicted. This, I suppose, would then really mean something more than just "humanizing" robots. Triggering such human instincts in robots was what Asimov dreamt of in "The Bicentennial Man", where he wondered what would happen if robots were more human than mankind.
When you say that Isaac Asimov designed the laws of robotics without an understanding of A.I., I believe you are being both "factual" and "fictional". Indeed, Asimov himself was quite skeptical about the laws he set out in his own works: three laws formulated at a point in time when scientists had no fountains of wisdom about machine intelligence. So one may not assume those laws to be absolute or unconditional, since there is a behavioral component attached to them, and that behavior evolves into complexity as time passes. This is different from the paradox of whether Congress should try to rewrite the laws of physics, mathematics, and nature; the natural laws of the human psyche are ludicrously solicitous rather than perfunctory. Even someone bestowed with such completeness of cognition may not be "wise" enough. So, making robots street-smart would nevertheless be like saying that some humans are stupid enough to do things that animals do smartly...! Yet I would be cautious enough not to let my girlfriend leave me and fall in love with such a street-smart robot (forgive me the humor!).
It is often difficult for humans with drastically different values to live together in love and peace; so, can one rightly assume that robots would be able to live together in peace, given that all robots are made equal? It would be puzzling to identify unique personality traits in robots: for example, what personal qualities would one expect such robots to inherit? Perfectionist, selfish, observer, loyalist, altruist, competitive, or a combination of these minus selfishness? Which qualities should we include, and which discard? For this reason, we may have to think about differential behavioral trends in artificial systems, dissimilar to yet analogous with humans.
Regarding your reference to "linking robotic interaction to pleasant affects", we need to consider the psychic characteristics of an artificial mind from a different perspective. The distinction I presuppose is somewhat different from simply "transferring human consciousness" into a robot's mind. In more genial terms, the decisions robots take will depend greatly on the finite options offered to them as policy states, or, more fairly, on allowing them to "discover" such policies directly from their environment. A rule base, as the naturally assumed mechanism by which a robot chooses a rule to apply in a specific state, will likely depend on situational paradigms of perceptual sensitivity to the environment. In such a scenario, a model of choice in which decisions must be justified by arguments from first principles would define the notion of stability in robotic actions. Those collections of actions, justified by arguments, rest on some logic that likely defines the morality of meta-actions. Rules and routines depend on changing situations, and situations may even demand new specific rules. Achieving any goal, mega or minor, requires one to process rules (as you said) and find pathways through the given policy choices that define particular actions. Hence I would also say that actions tend to remain path-dependent, which carries certain limitations. Programming robots to overcome such limitations in time and space is indeed a challenging job, as is evident from the scene in I, Robot.
The question remains: what made the robot take such a bizarre choice, saving Will Smith's character and not the girl? (A women's group might call it gender inequality.) It was, of course, a plot device without which the movie could not have proceeded; but examined deeply and minutely, what really made the robot make that choice? This raises meta-ethical issues concerning the emerging field of machine ethics. Whether we endeavor to design sovereign "ethical machines" by making ethics computable, or by including a set of parameters defining human intentions, emotions, and actions so as to enable machines to "judge" human beings and adjudicate "reality", is, however, a different aspect of machine design.
What is imperative in this respect is a substantive analysis of how our own brain learns to distinguish objects and signals, how it makes choices and discards options. The mechanism depends as much on the structural physiology of our own neural networks as on the neuroeconomics of decision-making. The structural perspective goes back to Donald Hebb's theory of neural learning, formalized in 1949. Our knowledge of machine intelligence was primitive at that time, but it took shape following phenomenal work by Alan Turing, Allen Newell, John McCarthy, and other distinguished thinkers, as we all know.
Too many options can create ambiguity, and logic is not the only answer for dealing with ambiguities (I greatly agree); it is, if I am correct, just "one" of several possible answers. Here, you have been inventive enough to devise one such alternative, which you call the "Similarity Selection Model".
Those "golden insights" into the laws of robotics by Asimov do shed some fundamental light on human-robot interaction and interrelationships, but they may indeed fail to define machine behavior in the face of ambiguity and confusion arising from uncertainty, which in turn originates in the volatility of human emotional patterns.
Regarding teaching robots how to feel pain: it must first be determined what "pain" is, and we must teach them something beyond sensory (perhaps extra-sensory) paradigms of perception.
It should be remembered that those who control machines control humanity. If such machines are allowed to be autonomous, they might control humanity as well as the tools of humanity. So there is nothing novel about this paradox. Sir, one needs to be careful not to ignore the question: what if such intelligent agents start hacking our systems and throw us into chaos? Well, that may lead us into philosophical science fiction beyond the scope of this thread.
Some thoughts follow on the senses of robots.
Siddharta:"Prof. Graeme Smith"
Umm... I hope I mentioned to you before that I am not a Prof., but in fact an amateur scientist. I do not qualify for any professional title, because I have no degree.
Siddharta:"Undeniably, I think robots will have those kinds of emotions some day, yet I would like to mention that human emotions are both complex, and confusing. And that’s perplexing for machines to emulate those as all inclusive. In essence, as I have mentioned before, design of device, say, even a musical instrument, shape their affects’ tone and resonance. And so, designing machines as artificial entities would definitely shape their affects and belief. Things essential for such is to have at our disposal good grounding “values”, to remove common errors of ignorance about morality with which, we as humans are afflicted."
I have heard of some work, done in Germany, possibly under Dr. Damasio, suggesting that the limbic signals to the brain could be simulated using 5 depths of 5 basic libidos.
On top of this, we think, are the "meta-cognitive feelings", such as "familiarity", "feeling of self", "feeling of knowing", and "tip of the tongue", to name just a few. My work suggests that these same limbic signals also parameterize the models of future comfort in the human mind during the model-test-execute stage, and that the meta-cognitive feelings help parameterize experience during "consciousness processing".
The ability to build feedback from meta-cognitive functions into the model at the strategic level is critical to the brain's ability to monitor its own function.
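A very loose sketch of how such a parameterization might look in code; the libido names, depth weights, and scoring rule are all invented placeholders for the architecture described above, not the actual model:

```python
# Sketch of "5 depths of 5 basic libidos" parameterizing a model-test-execute
# loop. Libido names, weights, and scoring are invented placeholders.
LIBIDOS = ["hunger", "safety", "affiliation", "curiosity", "comfort"]
DEPTHS = 5  # deeper levels weigh less in this sketch

def future_comfort(candidate_model, limbic_state):
    """Score a candidate action-model by predicted limbic satisfaction."""
    score = 0.0
    for libido in LIBIDOS:
        for depth in range(DEPTHS):
            weight = 1.0 / (depth + 1)
            predicted = candidate_model["effects"].get(libido, 0.0)
            need = limbic_state[libido][depth]
            score += weight * predicted * need
    return score

def select_model(candidates, limbic_state):
    # model-test-execute: pick the model with the highest predicted comfort
    return max(candidates, key=lambda m: future_comfort(m, limbic_state))

limbic_state = {l: [0.5] * DEPTHS for l in LIBIDOS}
candidates = [
    {"name": "explore", "effects": {"curiosity": 0.9}},
    {"name": "rest",    "effects": {"comfort": 0.6, "safety": 0.2}},
]
print(select_model(candidates, limbic_state)["name"])  # explore
```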
Siddharta:"So, one may not assume that those laws as finitely absolute, or unconditional, since there is a behavioral component attached to such laws, the behavior of which, evolves into complexity as the time passes by."
It is the complexity of behavior that is the problem in designing a rule base that has ambiguous states but needs absolute outputs. In a logic system there are two ways of disambiguating: hive off a new rule for each special case, or alternately fuzzify the logic, so that a range of inputs triggers the same rule.
If you hive off a new rule for each special case, you quickly run out of room for the rule base, and you significantly increase the processing load needed to determine which rule to choose. If you fuzzify, you essentially lose the specificity of the logic, and have to deal with boundary conditions that fall within the fuzzy boundaries but are not valid: the so-called "false positive".
A good example might be found in Google, where the search terms are "stemmed" back to the roots of the words in the query; a search is then done on each stem, and statistical means are used to combine the stems in such a way as to prioritize the list of "positives" so that a certain QoS is practical.
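A minimal sketch of the stem-then-combine idea, using NLTK's Porter stemmer; the documents and overlap scoring are toy assumptions, not Google's actual pipeline:

```python
# Toy stem-and-combine search, in the spirit described above.
# Scoring here is a simple stem-overlap count, not a real ranking function.
from collections import Counter
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

docs = {
    1: "running shoes for marathon runners",
    2: "how to run a marathon",
    3: "shoe repair services",
}

def stems(text):
    return {stemmer.stem(w) for w in text.lower().split()}

def search(query):
    q = stems(query)  # e.g. "running" stems to "run", "shoes" to "shoe"
    scores = Counter({doc_id: len(q & stems(text))
                      for doc_id, text in docs.items()})
    # Rank by stem overlap; stemming widens recall but admits false positives
    return [doc_id for doc_id, s in scores.most_common() if s > 0]

print(search("running shoe"))  # [1, 2, 3]: doc 1 shares two stems,
                               # docs 2 and 3 one each
```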
Now, what you might not know is that Google is paid not to filter out commercial sites that are false positives, so that the companies that own those sites will be sent more customers. It is not unknown for a Google search to turn up literally thousands of false positives, many of which need to be considered in order to eliminate them, and which point to commercial sites. The advertising benefit for those companies is clear. The benefit for the searcher is less clear; but often, when the stemming fails to work, what the computer thinks is a false positive is actually a hit on what the researcher wanted to find.
The trick is to find the right level of stemming so that the search results actually fit the query. With fuzzy logic, the trick is to know just how much to adjust the scope of the query so that the results can be found without excessive numbers of false positives. In similarity selection, you first generate all the false positives you can, and then use limbic input to select from the cloud the partition that best contains the results you wanted.
The result is still disambiguation; but in this version, the false positives constitute the similarity stage and are critical to the function of selection, because they give more latitude to select from.
Sorry, Graeme, that was not intentional; I forgot, as I am still fighting memory loss... But it is your art of describing complex things in the clearest of forms and shapes that invites such dignity and honor, so may I be pardoned this time.
Yes, what you say may refer to how the search results fit the query algorithms. Since you have mentioned applications of fuzzy logic in search-engine functions, the question invariably comes to mind: what is the real span of fuzzy logic, and does it have limits, since it operates between two extremes? Is it the boundary condition of fuzzy logic that bounds the query with false-positive results, or is it the flexibility of the stemming system that generates the false positives? I am still confused about how the software actually determines what filtering patterns are required to make search queries match the desired outputs.
Well, it is understandable that they compute the means and match them against similar patterns to generate query results that may contain false positives. I think in stemming, strings of the first two or three characters are stemmed, matched, and the process repeated as a loop; but still, the system invariably cannot eliminate false positives if it is using fuzzy logic. So filtering is again required to generate accurate matches, and such filters need to be designed to eliminate false positives. However, search results would be greatly diminished if fuzzy logic were not applied, or if strict filtering were applied. So, what gives such flexibility to the powerful search tools of Google and others, and why is Google search better than Yahoo! or lesser-known search engines? Is it the cost involved in designing such logic parameterizations, or the complexity of the (patented) technology?
If that is the technology you are up to, or something better, I may yet see you become a billionaire and much sought after, since your "Similarity Selection" model would probably answer some of the complexities and limitations of a perfect search, as well as of filtering false positives, bringing a greater degree of perfection to the search behavior of query tools.
Sidharta:"Sorry Graeme, that was not intentional i forgot as i am still fighting memory loss...But its your art of describing complex things in such clearest of forms and shapes that invites such dignity and honor, so may i be pardoned this time."
Well, OK; but if you complain when I put too many "D"s in your name, it would make us both look like a couple of numbskulls, so I guess I had better not kick up too much of a fuss. ;)
Sidharta:"Yes, what you say may refer to how the search results fit the query algorithms. Since you have mentioned about the applications of fuzzy logic in search engine functions, the question would invariably come into one's mind about the real span of fuzzy logics if it has any limits, since, it operates between two extremes. So, is the boundary condition of fuzzy logic in which the query is bounded by false positive results or the flexibility of stemming system that generates false positive? Still confused, actually how the software determines what filtering patterns are required to generate search queries matching desired outputs. "
Actually, both stemming and fuzzy logic will each produce some false positives.
Stemming, because it reduces the words too far from their original state and thus opens up all the intervening definitions; and fuzzy logic, because it essentially increases the scope of the logic without regard to which parts of the statement are the most important.
Consider a technical word of 5 or 6 syllables that has an exact meaning. Now stem it, and you end up with a root word of only one syllable that is used significantly often across multiple languages. Using the stem rather than the technical word must yield more false positives, because you have lost the disambiguation embedded in the jargon. Interestingly enough, by using statistical connections between the stemmed words, it is possible to almost rebuild the search in such a way as to favor the statistically significant combinations of stem words, putting the valid data within the first few pages of the results.
But this is almost accidental, in that the stemmed words are NOT the original search terms and, especially in the case of technical words, often have little relationship to the terms being searched for. A direct search for the original term, however, often comes up with a false negative, because the technical term is not in the search engine's "dictionary", and so it fails to find the index. Before stemming, a lot of searches were based on hash maps, which, like stemming, used the common parts of the search term to find the nearest index to the term used. The problem with this approach was simply that if nobody had mapped the right term, there would be no term to find at the selected index, which naturally meant a false negative. Since science is constantly coining new descriptive phrases using exact technical terms, hash maps fail more often on scientific and technical queries than would be expected if there were, say, only 200,000 words in the English language.
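A tiny sketch of that false-negative failure mode; the index is a toy, and the point is simply that an exact-match lookup has no entry for a newly coined technical term:

```python
# Exact-match (hash-map style) lookup: a newly coined technical term that
# was never indexed produces a false negative, however close other keys are.
index = {
    "neural": [3, 7],
    "network": [3, 9],
    "ensemble": [12],
}

def lookup(term):
    # dict lookup is effectively a hash-map probe: exact key or nothing
    return index.get(term.lower(), [])

print(lookup("network"))        # [3, 9]
print(lookup("neuroensemble"))  # [] -- false negative: term was never mapped
```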
The nice thing about similarity selection is that it doesn't actually have a map: it simply allows any cell whose connections are similar enough to the stimulus-triggered connections to respond with a false positive. So it can stem, it can hash, and it can trigger the exact term if it exists, all at once, depending on how it was used to store similar data. The trick is how to evaluate the cloud of data formed, and to decide which elements are important enough to include in the search output.
If we consider the limbic system and the Reactive Learning System as a sort of filter that prioritizes the data cloud according to familiarity and salience, the presence of an exact technical term has relevance: having learned the exact definition of that term, the Reactive Learning System can give it a higher priority over the stemmed value or the hashed value, while not limiting the search to just the specific definition. Thus a new confluence of technical terms that had never been put together before becomes a juxtaposition that defines an actual linkage with a particular meaning, even though the brain had never heard the confluence before.
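Here is a minimal sketch of the two-stage idea as I read it: first let every stored "cell" respond whose feature overlap crosses a deliberately loose threshold (the wide cloud of false positives), then filter the cloud with a salience score standing in for the limbic input. All cells, features, and salience values are invented for illustration:

```python
# Two-stage similarity selection sketch: wide candidate cloud, then
# limbic-style filtering. Cells, features, and salience are invented.

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Stored "cells": each remembers a set of connection features plus a
# salience value learned earlier (standing in for familiarity/limbic weight).
cells = [
    {"term": "neural network",  "features": {"neur", "net", "learn"}, "salience": 0.9},
    {"term": "fishing net",     "features": {"net", "fish"},          "salience": 0.2},
    {"term": "neuron ensemble", "features": {"neur", "ensem"},        "salience": 0.6},
]

def similarity_select(stimulus, loose=0.15, keep=2):
    # Stage 1: deliberately loose threshold -- collect false positives too
    cloud = [(c, jaccard(stimulus, c["features"])) for c in cells]
    cloud = [(c, s) for c, s in cloud if s >= loose]
    # Stage 2: filter the cloud by similarity weighted with salience
    cloud.sort(key=lambda cs: cs[1] * cs[0]["salience"], reverse=True)
    return [c["term"] for c, _ in cloud[:keep]]

print(similarity_select({"neur", "net"}))
# ['neural network', 'neuron ensemble'] -- "fishing net" joins the cloud
# at stage 1 but is filtered out at stage 2
```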
When you allow a soft enough processing technique and combine the confluence at a number of different levels (similar to Jeff Hawkins's HTM), you get confirmation of the confluence by multi-level recognition of similarities to existing knowledge.
So here we have two extremes, false positives and false negatives, which are both more about the way the search is done than about the search terms.
Similarity selection, by filtering the false positives, almost never draws a complete blank. If there is anything within the system at all similar, it draws from that at least a glimmer of an idea of what the definition might be. This glimmer can then be used to search the existing knowledge base to further disambiguate, and as a basis for searching for the exact meaning. To some extent, the HTM-like multi-level statistical analysis also helps because, like Google, it derives from the statistical relationships some link between the terms that is more likely than would be possible without some sort of linkage in the confluence.
Sidharta:"If that's the technology you are up to or something better, i may see you become a billionaire and sought after since your model of 'Similarity Selection' would probably answer some of those complexities and limitations of a perfect search, as also, of filtering false positives that may result in a greater degree of perfectionism in search behavior of query tools."
Well, it would be nice, to at least make a living ;)
Remember, I am modeling what I see in the circuits of the brain. I may not have it completely right, but I think I am getting closer to what the brain does than even Jeff Hawkins, and his HTM technology is showing great promise.
Filtering is, of course, the real problem with this technology, and from the processing point of view the necessary processing load will be higher than Jeff's, if only because the statistical model is applied to a much larger cloud of false positives. I see HTM's main limitation as the fact that it statistically reduces the population via Bayesian inference before it applies the Markov chains.
With similarity selection we want, at the lowest levels, the absolute highest number of similar terms, because the confluence of each pair of terms will result in a hierarchical population reduction at each level of processing. Ideally, we don't want to completely disambiguate the terms until late in the processing, so that we don't get as many false negatives. One way of looking at similarity selection is as a sort of lazy Bayesian inference: we don't allow the population to drop too far until we have fed enough knowledge into the Markov chains to end up with a significant reduction in false negatives.
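To make the "lazy" pruning point concrete, here is a toy comparison of early versus late pruning over two levels of evidence; the candidate set and scores are invented:

```python
# Toy comparison: prune candidates early (per-level thresholding, as the
# text says HTM effectively does) vs. late ("lazy"). Scores are invented.
candidates = {"run", "ruin", "rune", "rain", "ran"}
level1 = {"run": 0.9, "ruin": 0.3, "rune": 0.5, "rain": 0.2, "ran": 0.6}
level2 = {"run": 0.2, "ruin": 0.9, "rune": 0.4, "rain": 0.1, "ran": 0.3}

def early_prune(th=0.5):
    survivors = {c for c in candidates if level1[c] >= th}   # prune now
    return max(survivors, key=lambda c: level2[c])

def late_prune():
    # keep the whole population until both levels of evidence are in
    return max(candidates, key=lambda c: level1[c] + level2[c])

print(early_prune())  # 'rune' -- 'ruin', the best overall candidate, was
                      # already discarded at level 1 (a false negative)
print(late_prune())   # 'ruin' wins once level-2 evidence is allowed to count
```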
Think of it as "population thinking" applied to information: a sort of Darwinian approach.
In essence, this is what fuzzy logic claims to be able to do; but because it is linked so closely to logic, it tends to settle for early selection and lends itself to pattern matching, which again reduces the population of false positives too fast.
Having come to this conclusion: even digital logic, if prevented from pattern matching too early, might be able to implement a similar level of search capacity, which is why I invented the simple Satisficing Gate technology.
By munging the pattern sensitivity of digital memory, and using a population-based protocol between the different layers of processing, I hope to expand the data-cloud size, artificially increasing the number of early false positives, in the hope of offsetting the population reduction at each level of processing enough to allow fewer false negatives at the higher levels. It is the opportunity to associate the same terms with a wider network of information that, I think, makes this technology more valuable than even HTM.
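One plausible toy reading of the satisficing-gate idea is a threshold gate that fires on a good-enough match rather than an exact one; this sketch is my own speculative interpretation, not the actual hardware design:

```python
# Speculative toy reading of a "satisficing gate": fire when enough inputs
# match a stored pattern, rather than requiring an exact (AND-like) match.
def satisficing_gate(inputs, stored_pattern, threshold=0.75):
    """Fire if the fraction of matching bits is 'good enough'."""
    matches = sum(1 for a, b in zip(inputs, stored_pattern) if a == b)
    return matches / len(stored_pattern) >= threshold

stored = [1, 0, 1, 1, 0, 1, 0, 1]
print(satisficing_gate([1, 0, 1, 1, 0, 1, 0, 1], stored))  # True: exact match
print(satisficing_gate([1, 0, 1, 1, 1, 1, 0, 1], stored))  # True: 7/8 match
print(satisficing_gate([0, 1, 0, 0, 1, 0, 1, 0], stored))  # False: 0/8 match
```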
Thanks, Graeme, for such an enlightening explanation. Well indeed, that brings us down to Markov chains. I was already thinking about their applications :)
I am not sure that Markov chains can be implemented in a population-based system; the idea of chains seems to be that they form a narrow serial structure, rather than a widely branching one.
One aspect of this memory system is that although it can have pseudo-sequences, interrupting a pseudo-sequence allows for a wider range of outcomes than following it from start to finish. I thought maybe a GA approach to Markov instances would work better than Markov chains.
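For reference, a minimal first-order Markov chain sketch; the transition table is invented, and the point it illustrates is the narrow, serial nature of chain-following mentioned above:

```python
# Minimal first-order Markov chain: following the chain is narrowly serial;
# each step depends only on the current state. Transition table is invented.
import random

transitions = {
    "wake": ["plan", "plan", "rest"],  # duplicates encode probabilities
    "plan": ["act", "rest"],
    "act":  ["rest", "plan"],
    "rest": ["wake"],
}

def follow_chain(state, steps, rng=random.Random(0)):
    path = [state]
    for _ in range(steps):
        state = rng.choice(transitions[state])
        path.append(state)
    return path

print(follow_chain("wake", 6))
# Interrupting mid-sequence and restarting from another state would widen
# the range of outcomes -- the "pseudo-sequence" point made above.
```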
Of course, I don't know whether anybody has ever defined a "Markov instance", or indeed whether the concept already has a different name.