Please be explicit in your question: what do you mean by "model algorithm" and "physical mechanism of choice"?
To model the dynamical behavior of a physical mechanism, a history of input and output data is necessary. The choice of neural network architecture, type, and learning algorithm depends on what you want to do with this model.
As you have correctly interpreted, I would first define what the physical mechanism of choice is. It is not much different from the mental or psychological mechanism of choice defined by Prof. Herbert Simon and by Kahneman and Tversky; I mean it by analogy. Yes, it covers both decision-making theories and logical methods of choosing among finite (or infinite) options fed to an intelligent system.
Our environment offers infinite options to choose from, some well defined and many ill-defined in terms of value maximization. Our ability to choose is a wonderful phenomenon; even animals have certain models of choice. We impart this particular mechanism of choice to machines as a physical mechanism by defining the choice variables in mathematical form, so that machines, too, have logical methods of choosing among 'finite' options. Let me be specific here about these 'finite' options.
Given the autonomy to choose, machines and humans will choose differently, or a system's choice mechanism could be programmed on some finite set of defined parameters. As for humans, we work harder than machines to take certain decisions even when we 'do not have any option', which makes us smarter than machines in some sense. To make machines more understanding, or even smart, the goal is to expand the 'options' to choose from and then make the parameters of those options machine-readable.
Now suppose such algorithms could be defined to 'understand' options as we do (we often misunderstand them). Since we often choose wrongly, or make a wrong choice from a given set of parameters, can machines be prevented from choosing suboptimal options? Given some options to choose from, what algorithms or programs could be developed to let the machine 'imagine' other possibilities that are not even present? Since decision making depends heavily on a system's ability to choose from given options, what would a machine decide if there were no given options? That would define its behavior outside the box. We all know that expert decision-making systems and automated decision makers exist; would such machines find it genuinely difficult to deal with options whose parameters are not defined, yet still be required to take some decision, since taking action is their primary goal?
So, what I mean here by "physical mechanism" is making the choice procedure flexible enough for machines to choose the best alternatives as options (or, say, maximizers), since AI systems have no expectations about undefined options. One can then envisage that, given an undefined set of variables, a system would settle for a "second best", or read from the environment, if the "first best" is not available.
For decision making, this might amount to contextual comparison. An algorithm in that sense is a search module whose parameters are well defined, ill-defined, or even undefined. Kahneman and Tversky showed that people 'respond' more strongly to losses arising from a suboptimal or bad choice. So machines may be programmed to perceive such a 'response' from a given finite (or infinite) set of policy actions as 'states' of probable choice, with undefined parameters for a few of them (among many well-defined ones), so that an intelligent system can optimize, or rather determine outright, whether those options are maximizing or suboptimal.
My question, in that respect, was whether such algorithms could be developed to model choice mechanisms that replicate the psychological aspects of indeterminacy in a physical system such as an AI, using neural network models.
I hope I have made my point clear, but I will explain further if this turns out to be suboptimal.
Can you provide some details about the sample input and output data you will be working with? The semantics of the data will help determine whether a neural network (NN) will work for you, or whether you will need to apply fuzzy logic: I foresee that some of your data will be unstructured or subjective, with the possibility of vagueness, which may call for a fuzzification step before applying the NN.
To make a decision, a human performs a kind of weighting: practically, he compares the advantages and disadvantages of the choices on offer. So in general you cannot make a good choice if you have no prior knowledge, something that allows you to opt for a particular choice. Otherwise, one might choose randomly and land on a choice that could be good, bad, somewhat bad, etc.
I think that if you want to model algorithms to analyze the physical mechanism of choice, it will be necessary to incorporate into these algorithms a criterion that allows the neural network to know whether it has made a good or a bad choice, and so to refine its decision-making.
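The weighting of advantages and disadvantages described above can be sketched as a simple weighted score per choice. This is a minimal illustration only; the criteria names, weights, and values are my own assumptions, not part of the discussion:

```python
def score(choice, weights):
    """Weighted sum of a choice's attribute values (pros weighted positive, cons negative)."""
    return sum(weights[k] * v for k, v in choice.items())

# Illustrative criteria: a benefit counts for, a cost and a risk count against.
weights = {"benefit": 1.0, "cost": -0.5, "risk": -2.0}
choices = {
    "A": {"benefit": 8.0, "cost": 3.0, "risk": 2.0},
    "B": {"benefit": 6.0, "cost": 1.0, "risk": 0.5},
}
# The "good choice" criterion: pick the alternative with the highest score.
best = max(choices, key=lambda name: score(choices[name], weights))
```

Here the criterion the network would need is exactly this scoring rule: it tells the system after the fact whether its pick matched the highest-scoring alternative.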
Yes, you have pointed out exactly the relevant facts in your analysis above; thanks for that.
Indeed, to make a decision we need to weigh the choices and then take a course of action. It is also true that one cannot make a good choice without prior knowledge of the states of actions or options. This limitation is the basis of my research: how a system with no (or, say, little) prior knowledge about the options would behave and learn, by discovering as well as evaluating those options against pre-defined parameters. In effect, incorporating a large number of variables as options introduces ambiguity, and the system would already be fighting such ambiguities. The question is how to resolve the ambiguity that props up and may slow down the machine's decision-making. So this should not be considered purely an algorithmic approach to thought; rather, the aim is to give the algorithms some degree of abstraction so they can model the physical mechanism of choice and navigate uncertainty.
Research on humans has shown that merely increasing the number of options to choose from can have negative effects on the choice mechanism (Barry Schwartz). And here I am speaking about 'unknown options', which makes things more complicated. To make an optimal choice, you need to choose policy states that are, in essence, optimal. How can a system know the efficacy of an option that is itself undefined?
Here I am proposing to move to an 'open system' that goes beyond defining sets of options with subsets as alternatives (say, lists), as in the axiom of choice under a classical system. Random choice is also possible within classical choice sets, so what I am seeking is something different: not pure randomness, but a system that 'learns' and seeks out patterns among such undefined axioms and distinguishes them from randomness, using backpropagation to evaluate the optimality of the search process, that is, to let the neural network know that these states 'exist' and may or may not be random, or may be patterns. Fuzzy logic is indeed one such method: a non-local, oscillating logic working between two extremes, and a possible extension of a closed system. But there is some possibility that it might limit the system's learning and weaken the learning strength of the neural network, even if only in theory. I have already considered a Bayesian algorithm using Markov chains, but that is for a linear model. So I suppose that in open-system learning such as this, creating expectations about options and then evaluating them in the background would make it hard to define the boundaries of rational thinking in machines, as much as in humans, since the axioms are not predefined and the NN has no prior knowledge of the 'unknown'; even then, a superposition of options would create an entanglement limited by choice in quasi-states.
So my first option is to give the algorithm the idea of creating such quasi-states, in particular by moving away from non-locality. My attempt is to design an algorithm that helps 'localize' non-singular states, as I have previously worked with singular states using Markov chains, and thus to devise such states of options even when the given parameters are violated. This may sound confusing, as it amounts to redefining the learning system without imposing the limitations of the membership functions in a fuzzy rule-based system.
However, in a nutshell, I might describe this endeavor as a new way of searching for natural patterns while filtering out randomness. It could thus be a unifying approach, bridging the gap between the semantics of fuzzy logic and a formalism in which an unconditioned distribution of random options defines conditioned constructs of patterned entities.
Yet, as you have mentioned, fuzzy logic has immense application in this type of modeling, for approximate reasoning when the axioms serving as options are undefined; but my point is a different one, as I am seeking to identify similarities between pairs of possible known and unknown options.
Boluwaji:
Let me provide a mathematical construct that I am applying as one of the methods in my NN model, just to give some sense of the underlying approach. The first assumption is that the system bases its decision on some given input. The second model would express the paradox of choice in mathematical analogy:
This topic is the subject of my ongoing research paper, so let me be content not to elaborate in such detail in the public domain, since the model is not yet stable (I am myself still in search of a stable system), but I may upload the full paper once it is complete.
(Sorry, my playbook does not support notation, nor does RG have math input functions, so I need to keep things simple.)
A membership function μ_F takes values in the interval (0, 1).
Now,
μ_F : U → (0, 1)
The fuzzy set F in U, whose elements u carry their membership values as support, is F = { (u, μ_F(u)) | u ∈ U }, where u belongs to the support of F if μ_F(u) > 0.
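As a small sketch of the membership-function idea, here is a triangular membership function in code; the triangular shape and the breakpoints are my own illustrative choices, not part of the model:

```python
def triangular_membership(u, a, b, c):
    """Membership grade of u in a fuzzy set with triangular shape (a, b, c).

    Returns a value in [0, 1]: 0 outside (a, c), rising linearly to 1 at b.
    """
    if u <= a or u >= c:
        return 0.0
    if u < b:
        return (u - a) / (b - a)
    return (c - u) / (c - b)

# The support of the fuzzy set is every u with membership grade > 0.
grades = {u: triangular_membership(u, 0.0, 5.0, 10.0) for u in range(11)}
support = [u for u, g in grades.items() if g > 0]
```

The pairs (u, grade) in `grades` correspond to the set F = { (u, μ_F(u)) } above.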
Now I want to define an open state with finite f^i and 0 as the limits of S, with policy actions defined as p_n for a known given definition.
Again, I would like to define such a system as having infinite policy states S'^i with undetermined values fV(x) = x(dS'^i + S (matrix of given states within a set) + S_n (matrix of unknown states) + X_ni).
In numerical form, the probability of such a matrix-based output would be:

{ A, B, C, D, E, F } with u ∈ U, where the given cardinality is |S| = 6

        [ 1  3  2  9  0  2 ]
        [ 0  2  3  6  9  0 ]
        [ x  5  3  5  6  7 ]      [ x_1 ]
        [ 5  3  6  y  5  6 ]      [ x_2 ]
X =     [ 6  3  5  6  4  6 ]  *   [ y_1 ]  ,  fV(x) = x(dS'^i + S + S_n + X_ni) = [ ]
        [ 9  3  5  3  6  9 ]      [ y_2 ]
        [ 8  0  2  0  1  6 ]      [ z_1 ]
        [ 0  2  3  4  1  5 ]      [ z_2 ]
        [ 1  0  z  6  3  2 ]
        [ 5  2  2  9  0  1 ]

F = { (u, μ_F(u)) | u ∈ U } + { A, B, C, D, E, F }, applying the same matrix step again, and finally
F = { (u, μ_F(u)) | u ∈ U } + X_n as a recursive function.

*Note: x_1 means x subscript 1, and the values are quite arbitrary.
Now, say, to define the variables x, y, and z, we assign set values, leaving only 'z' undefined. That would give some output, but a highly optimized one, and I do not need it to be so optimized. What I want is for the system to find some probability match that is a very close approximation. This is difficult, as we need to perform recursive analysis on the matrix module to get anywhere near what I would like the system to achieve.
One could assume subsets of A through F as input lists, with some undefined matrix inputs as subsets of policy states with unknown parameters.
We will get some output, say a value 'n', derived from the above model.
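A minimal numeric sketch of the matrix step above, assuming trial values for the undefined entries x and y and a single candidate value for z; the input vector and all values are arbitrary, as noted in the construct itself:

```python
def matvec(X, v):
    """Multiply matrix X (a list of rows) by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in X]

# Trial values for two of the undefined entries; 'z' is left as a parameter
# so that a recursive search could sweep over candidate values for it.
x_val, y_val = 4.0, 7.0

def build_matrix(z_val):
    return [
        [1, 3, 2, 9, 0, 2],
        [0, 2, 3, 6, 9, 0],
        [x_val, 5, 3, 5, 6, 7],
        [5, 3, 6, y_val, 5, 6],
        [6, 3, 5, 6, 4, 6],
        [9, 3, 5, 3, 6, 9],
        [8, 0, 2, 0, 1, 6],
        [0, 2, 3, 4, 1, 5],
        [1, 0, z_val, 6, 3, 2],
        [5, 2, 2, 9, 0, 1],
    ]

v = [1, 1, 1, 1, 1, 1]               # placeholder input vector
out = matvec(build_matrix(2.0), v)   # one candidate value for z
```

Each candidate z gives a different output vector; a recursive loop over z values could then compare the outputs against a target to find the closest "probability match".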
This is the first phase of the model. The rest of the model works with purely undefined states and a few defined variables. The neural network can thus be fed with these probability matches for as many defined states as we like, but one may observe that value maximization is reduced as we add more functional states.
Literally speaking into the wind would not help much in conceiving such a model. So the goal is to create such quasi-states, determine their probable courses of action, and produce a series of probability maps to be fed into the network. This would help the NN 'learn' certain probable patterns even if those patterns are never taught, and then let the system encounter the environment, where it could start matching its computed outputs against the real environment, gain knowledge about undefined options, incorporate them as 'learned' entities, and discard randomness. The more such maps are fed in, the more flexible the choice mechanism becomes, letting the machine understand ambiguities and even deal with them.
I tender my apology if you misunderstood the phrase 'in the wind', which I actually meant for myself. The idea did not arise abruptly: I read it in a paper by Colin Camerer, which speaks of "applying algorithmic models of neural hardware for choice". That paper is cited as "Why Neuroeconomics has generated raucous debate?" It was enough to make one delve deeper into these issues confronting machine learning.
The problems arise, as you have indicated, in the theory of decision weights as well as in connecting states within the network's policy rules:
# Specifically identifying the factors that hinder learning strength, i.e., threads, rate, and momentum
# The mathematics describing hidden layers that carry decision weights
# The limitations that training error places on learning strength
# Most important, imagining quasi-states within those hidden layers, which would depend on specific pattern-learning algorithms
The validation cycles are not important if the learning is not backpropagation-based; so if the undefined options I refer to act as noise, the average error would likely increase many-fold. That is one problem; the other is assigning quasi-parameterizations to the subset options. What formulas could be conceived so that occurrences of randomness can be identified with some accuracy when patterns tend to break? Such a formula, I suppose, would define the behavior of the algorithm in question, whether it concerns sigmoid functions, the optimal number of hidden layers that would fit the system, or similar things.
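On the sigmoid point above, here is a minimal sketch of the logistic activation and the derivative that backpropagation multiplies into the error signal; this is the generic textbook formula, not anything specific to the model under discussion:

```python
import math

def sigmoid(x):
    """Logistic activation, squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative s(x) * (1 - s(x)); backprop scales the error signal by this."""
    s = sigmoid(x)
    return s * (1.0 - s)

# The derivative peaks at x = 0 (value 0.25) and vanishes for large |x|,
# which is one reason saturated hidden units slow down learning: noisy,
# extreme inputs push units into the flat regions where the gradient dies.
```

This is one concrete way the "undefined options acting as noise" problem shows up: noise that saturates units shrinks the very gradients the network needs to learn.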
So I would like to thank you again for taking such pains to go through my supposition in order to understand my work; and if you think something interesting may be worked out, perhaps we can shed some light on these problems in concert.
This paper, which I found after a rigorous search, may provide some glimpse: it discusses mind and matter, neural correlates, quantization of states, and something on neural networks.
I took a little time to think about the problem you asked.
Neural networks in A.I. (Artificial Intelligence), as we use them as control engineers, are very different from those used in neuroscience or neurobiology.
In automatic control, the ANN (Artificial Neural Network) is a crude representation of reality. In neurobiology it is a model that represents the complexity of reality, with ever more refined detail.
I am not an expert in neurobiology, and I can tell you that I understand nothing of human psychology. Since our last discussion, however, I have been able to get an idea of neurobiology, or neuroscience.
From what I seem to have understood on this topic, and according to Amoroso et al.:
A triune noumenon (a neural network in A.I.) consists of:
1. Localized Fermi brain states,
2. The generally temporal Psychon world of conscious thought (at the semi-classical limit), and
3. The structural-phenomenological Noumenon of nonlocal atemporal connectivity and integration.
So it is a complex neural network, which must take into account: space-time (localization; past, present, and future), the interactions (physical, electrical, electromagnetic, chemical, and biological) between microscopic particles during information transmission (via dendrites and synapses), and the faculty of consciousness.
It is really complex.
About the structure or architecture of the neural network, I cannot help you.
Concerning the optimization algorithm, maybe I can offer you an idea that could lead to a likely solution.
Reinforcement learning is learning what to do--how to map situations to actions--so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics--trial-and-error search and delayed reward--are the two most important distinguishing features of reinforcement learning.
You said you have no prior knowledge of the choices to make, but when we make a choice, it is made for a specific purpose, or for purposes of which we have at least an idea. That idea can be seen as a reward.
The reward may be computed using a genetic algorithm that optimizes a cost function over a population of choices, and the cost function must evaluate the results obtained for each choice taken.
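The trial-and-error search with delayed discovery described above can be illustrated with a minimal epsilon-greedy bandit; the reward means, noise level, and exploration rate below are illustrative assumptions, not part of the original suggestion:

```python
import random

def epsilon_greedy_bandit(true_rewards, steps=2000, epsilon=0.1, seed=0):
    """Learn action-value estimates by trial and error.

    true_rewards: the mean reward of each action (unknown to the learner).
    Returns the estimated value of each action after `steps` pulls.
    """
    rng = random.Random(seed)
    n = len(true_rewards)
    estimates = [0.0] * n
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:            # explore a random action
            a = rng.randrange(n)
        else:                                 # exploit the current best estimate
            a = max(range(n), key=lambda i: estimates[i])
        reward = true_rewards[a] + rng.gauss(0, 0.1)         # noisy reward signal
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean
    return estimates

values = epsilon_greedy_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda i: values[i])
```

The learner is never told which action is best; it discovers the highest-reward action purely by sampling, which is the distinguishing feature of reinforcement learning mentioned above.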
You have drawn a very interesting and far-reaching conclusion on the subject matter; it now appears you have shown me some patterns out of randomness. From here I would like to continue with your thoughts on such quantum states of consciousness, which is indeed a very complex phenomenon. And through our discussion we are, to the extent I can observe, able to generalize these meta-concepts into some formidable language of mathematics and algorithms.
In particular, you have mentioned one way to dig deeper into the problem of choice without its attributes being defined, which makes the theory a bit complex. Your suggestion to design a genetic algorithm that optimizes the cost function over a population of choices would require the cost function to be defined first, letting the network do the rest. This is interesting, but tricky too. Indeed, I have very few options left, so I should look into the reinforcement learning model: define some actions, leave others undefined whose states may be anything within fuzzy limits, and then let the algorithm run a trial-and-search module to discover the value functions of both defined and undefined states (actions) and derive a reward table.
So, would it be possible for the algorithm to assign values randomly, within some arbitrary limits set by fuzzy systems, to those actions that are not predefined as costs of choices, and then have the reward values computed using both feedforward and backpropagation algorithms? Also, how can I expect the algorithm to decide by itself what decision weights to consider when those are defined as hidden Markov chains (hidden layers) within the neural network? And what should the descriptive parameters of the independent variables be? I can include some noise, but I cannot leave missing gaps, or the network will fail to learn. Using randomness in learning would require designing a functional algorithm; I have the formula, but only in mathematical form. I think it may require some gating mechanism(s).
Assigning quantum states to the actions during reinforcement learning, as a hidden-layer algorithm, may open up a new field of adaptive learning in machines, I would suppose (though this is already a mature field). I was going through one of your previous papers at the ACM, and another paper by Dr. Lotfi Zadeh, and I found some things very interesting that I would like to discuss in some detail in a short communication paper.
Around 60 papers there discuss various aspects of liquid networks, reinforcement learning, handwriting and speech recognition, robotics, and machine learning.
The lab seems to have a substantial number of publications on RNN-based neural networks and on developing new concepts of machine learning.
Thanks, Sidharta, for suggesting those links. I'm really interested in the subject; only, being responsible for several courses, I'm heavily involved with exams right now. Whenever possible we will resume our discussion; I do not like to leave a problem in suspension.