I suggest considering a comparison with the behavior of the human brain: any simulation of emotion should be judged by its similarity to the effects of emotion in humans.
a) Emotion appears to be a strong focus on one aspect or on multiple related aspects. That claim supports the pattern approach of Mr. Bugnoli.
b) Emotion appears to have the potential to override other judgement. Be honest: the stronger any human emotion, the less the brain is able to focus on other tasks or conclusions.
c) In contrast to plain consciousness (the set of active thoughts at a given moment), an emotional pattern seems to have the special feature of twisting perception and deduction: not only ignoring rational thinking, but squeezing it to fit the given pattern.
d) A simple emotion seems to be related to good or bad, while complex emotions seem to be related to concepts or other emotions. So it is true that the ANN from the beginning must have one node that it absolutely wants to fire and one node that it absolutely avoids firing; however, any new emotion would be linked WITH A WEIGHT to either of these basic nodes rather than being wired to be "only good" or "only bad".
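A minimal sketch of point (d), with two fixed valence nodes and learned emotions attached to them by weights. All names and weight values here are illustrative assumptions, not part of any established model:

```python
# Sketch: two fixed base nodes ("good", "bad") and emotion nodes that
# connect to them with weights, rather than being hard-wired as purely
# good or purely bad. Names and weights are illustrative assumptions.

class EmotionNode:
    def __init__(self, name, w_good, w_bad):
        # w_good / w_bad: how strongly this emotion pulls on each base node
        self.name = name
        self.w_good = w_good
        self.w_bad = w_bad

    def valence(self, activation):
        # Net drive toward the "good" node minus the "bad" node,
        # scaled by how strongly the emotion is currently active.
        return activation * (self.w_good - self.w_bad)

# A simple emotion links to one base node; a complex, mixed emotion
# such as nostalgia links to BOTH with different weights.
joy = EmotionNode("joy", w_good=1.0, w_bad=0.0)
nostalgia = EmotionNode("nostalgia", w_good=0.6, w_bad=0.4)

print(joy.valence(1.0))        # 1.0 -> purely positive
print(nostalgia.valence(1.0))  # ~0.2 -> mixed, slightly positive
```

The point of the weights is exactly the one made above: "nostalgia" is neither only good nor only bad, and its net valence follows from its weighted links to the two basic nodes.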
First you need to define "emotion" with some mathematical tools, then implement it. Moreover, emotions are quite different from one another, so you should define each one separately, while taking care to use the same model for all of them.
For now, you can start by estimating the fluctuation of some parameters while observing human behavior and responses to stimuli intended, for instance, to incite sympathy; then build an algorithm able to recognize these fluctuations. The next step would be an algorithm which, depending on input values, returns step by step the interpretation of its data processing.
If interested, see the attached paper below (and also the PhD thesis of the first author):
An Emotional Learning-inspired Ensemble Classifier (ELiEC)
Mahboobeh Parsapoor, Urban Bilstrup
Abstract— In this paper, we suggest an architecture inspired by brain emotional processing for classification applications. The architecture is a type of ensemble classifier and is referred to as the 'emotional learning-inspired ensemble classifier' (ELiEC). We suggest the weighted k-nearest neighbor classifier as the basic classifier of ELiEC, and we evaluate ELiEC's performance by classifying some benchmark datasets.
Yes, of course! As Federico points out, it depends on the range of your definition. See these: http://mitpress.mit.edu/sites/default/files/titles/content/ecal13/ch122.html, http://www.ncbi.nlm.nih.gov/pubmed/25078111 or http://www.sciencedirect.com/science/article/pii/S1571064506000327. Levine has also made interesting contributions to the IJSE journal on synthetic emotions, which I edit. https://www.google.es/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB8QFjAA&url=http%3A%2F%2Fwww.igi-global.com%2Fjournal%2Finternational-journal-synthetic-emotions-ijse%2F1144&ei=Kt-zVPODOYTNygPzsoHICw&usg=AFQjCNGYGqzR-NnsyK1tzTnsbEyrv4wY3g&sig2=z7CuQUvY8ewm2lzH4K5IaQ&bvm=bv.83339334,d.ZGU
Undoubtedly, you can characterize individual nodes, but that might be quite reductive: as we know, emotions take shape through the activation (nervous activity) of certain regions of the brain; they are essentially "patterns". So, in my opinion, it would be more effective to model them within a set of connected nodes (a graph, indeed): the path the signal runs (the sequence of switches from one node to another, each switch shifting some parameters) describes the emotion the inputs were meant to express.
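One way to read this "emotions are patterns" view in code: an emotion is not a single node's value but the path of activations through a graph. The region names and paths below are purely illustrative assumptions:

```python
# Sketch of the pattern view: an emotion is identified by the path a
# signal takes through a small graph of regions, not by one node.
# Region names and the two example paths are illustrative assumptions.

GRAPH = {
    "stimulus": ["amygdala", "cortex"],
    "amygdala": ["arousal"],
    "cortex": ["appraisal"],
    "appraisal": ["arousal"],
}

# Each emotion is defined by the activation sequence it produces.
EMOTION_PATTERNS = {
    ("stimulus", "amygdala", "arousal"): "fear (fast route)",
    ("stimulus", "cortex", "appraisal", "arousal"): "anxiety (slow, appraised route)",
}

def classify(path):
    """Map an observed activation path to an emotion label."""
    return EMOTION_PATTERNS.get(tuple(path), "unknown pattern")

print(classify(["stimulus", "amygdala", "arousal"]))  # fear (fast route)
```

Two paths can pass through the same nodes in a different order and still name different emotions, which is exactly what a single-node encoding cannot express.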
ANNs are mathematical models; they have very little in common with real neurons, so you cannot generate emotions, consciousness, etc. using Artificial Neural Networks or any other algorithm on digital computers.
If you want to simulate emotion, you need to start with a model. I suggest looking at Robert Plutchik's model as a starting point. Then consider how to convert it into a numerical representation.
I'd personally say it probably requires a little more than a standard artificial neural network, as ANNs typically follow the idea of electrical signals and learn by changing weights. It seems that when it comes to emotion a little more is required, perhaps more detail in terms of simulating neurotransmitters and hormones.
I guess it all depends on how detailed you want to get. ANNs are good at solving specific problems, but if you want to model real neurons in detail, you would do best to look into "Computational Neuroscience".
The mathematics for "learning" improved and ANNs became "deep learning", which is just a remarketing of ANNs. If you want to generate emotions, consciousness, etc., you need to use biological building blocks; see "Can we build a conscious machine?" http://dx.doi.org/10.13140/2.1.2286.5608
You can have an ANN with two nodes on the output: if one node is on, it's happy; if the other node is on, it's sad. You train it with (some representation of) birthday cake and bad marks in homework, and that's it. If you want more emotions and/or less obvious situations, it becomes more complicated.
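A toy version of this two-output idea, with the input features assumed to be [saw_birthday_cake, got_bad_marks]. A bare perceptron is enough for this linearly separable toy problem:

```python
# Two output nodes: node 0 = "happy", node 1 = "sad". The feature
# encoding [saw_birthday_cake, got_bad_marks] is an assumption made
# for illustration.

def train(samples, labels, epochs=20, lr=0.5):
    w = [[0.0, 0.0], [0.0, 0.0]]  # one weight row per output node
    b = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            for node in range(2):
                out = 1 if w[node][0]*x[0] + w[node][1]*x[1] + b[node] > 0 else 0
                err = (1 if y == node else 0) - out   # perceptron rule
                w[node][0] += lr * err * x[0]
                w[node][1] += lr * err * x[1]
                b[node] += lr * err
    return w, b

def predict(w, b, x):
    scores = [w[n][0]*x[0] + w[n][1]*x[1] + b[n] for n in range(2)]
    return "happy" if scores[0] > scores[1] else "sad"

X = [[1, 0], [0, 1], [1, 0], [0, 1]]   # cake -> happy, bad marks -> sad
y = [0, 1, 0, 1]
w, b = train(X, y)
print(predict(w, b, [1, 0]))  # happy
print(predict(w, b, [0, 1]))  # sad
```

As the post says, this is really "it" for the obvious cases; the difficulty starts when inputs are ambiguous mixtures (cake on the day of bad marks) and the two nodes have to compete.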
However, in this way you would not be able to have it distinguish the "intensity" of similar emotions: which is the greater happiness, receiving a gift or getting good results? This clearly depends on the context (indeed, emotions emerge from the circumstances). To segregate an emotion, I'm saying, you first need to train it on its various ranks of intensity; otherwise, you would have to characterize every degree as a brand new emotion. In these terms, I do not think emotions can be compared on a discrete scale, but rather on a continuous one.
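The continuous-scale point can be sketched by replacing the discrete output nodes with a single sigmoid output in (0, 1). The feature names and weights below are assumptions chosen only to show context-dependent intensity:

```python
import math

# Instead of one node per emotion (or per intensity level), happiness
# is a single continuous output. Features and weights are assumptions.

def happiness(features, weights, bias=-1.0):
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # continuous value, not a class

# features: [received_gift, good_results, context_was_expected]
weights = [1.5, 2.0, -1.0]

surprise_gift = happiness([1, 0, 0], weights)   # unexpected gift
expected_marks = happiness([0, 1, 1], weights)  # good, but expected
print(surprise_gift > expected_marks)  # True
```

The same two events would swap ranks under different context weights, which is the post's point: intensity is a continuous, circumstance-dependent quantity, not a new emotion per degree.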
Of course, you only have to specify the set of emotions on which you want to work, then determine the existing relationships between them and translate them into a data representation the neural network can work with.
Interesting debate. Following Federico's point on emotional intensity, I wonder if we can represent intensity by varying the number of nodes in the ANN. In this way, using a neuroevolutionary technique, the system could adjust the number of nodes depending on the intensity of an emotion, which would reflect the level of intensity of that emotion.
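A rough sketch of this idea (an assumption for discussion, not an established method): let the evolutionary step resize a hidden layer so that more intense emotions get more nodes devoted to them:

```python
# Hypothetical mapping from emotion intensity to hidden-layer width,
# as a stand-in for what a neuroevolution step might evolve.
# The bounds and the linear mapping are assumptions.

def resize_hidden_layer(intensity, min_nodes=2, max_nodes=32):
    """Map an emotion intensity in [0, 1] to a hidden-layer width."""
    intensity = max(0.0, min(1.0, intensity))   # clamp out-of-range input
    return min_nodes + round(intensity * (max_nodes - min_nodes))

print(resize_hidden_layer(0.0))  # 2
print(resize_hidden_layer(1.0))  # 32
```

In a real neuroevolution setting the mapping itself would be evolved rather than fixed; this only illustrates the structural knob being proposed.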
N. Hewahi and A. Baraka, "Emotion Recognition Model Based on Facial Expressions, Ethnicity and Gender using Backpropagation Neural Network", International Journal of Technology Diffusion, 3(1), pp. 33-34, 2012.
N. Hewahi and A. Baraka, "Impact of Ethnic Group on Human Emotion Recognition using Backpropagation Neural Network", BRAIN: Broad Research in Artificial Intelligence and Neuroscience, 2(4), pp. 20-27, 2011.
First we have to understand the role of emotions in a living system like a man or an animal. If we fully understood their function, it would be easy to build a model which represents this functionality. The only problem is that our understanding of the role of emotions is incomplete.
If you read the book (Neuroelectrodynamics) and then http://dx.doi.org/10.13140/2.1.2286.5608 you may understand that different forms of computation in the brain exhibit different characteristics. Emotion, consciousness, etc. cannot be generated on digital computers. One can describe what he or she thinks about emotions, make a model and simulate it, but the simulation does not generate emotions on digital computers; it doesn't matter whether one uses ANNs, DNNs...
One can efficiently "compute" using biological structure and solve problems, e.g. in NLP, that have never been solved using digital computers (cf. the Chinese room argument). That's the basic idea in http://dx.doi.org/10.13140/2.1.2286.5608, since emotions and consciousness can be generated within biological structure.
I am sorry, I did not read the book Neuroelectrodynamics. It would be great if you could clarify your insight into computing with biological structures.
We do not need the same functionality as neurons to achieve the same results. A functionality can be realized in different ways, and neurons are one of them: the biological way. So we do not need cyborg computers to find consciousness or emotions, but we do need to understand the mechanisms from which such phenomena emerge.
The biological way of computing is far more powerful and more efficient than our digital techniques (see computing by physical interaction). The outcome of non-digital computation is rich; it includes emotions, consciousness...
Importantly, we can fully use such computations generated within a biological substrate and build powerful systems that can exhibit emotions and consciousness: http://dx.doi.org/10.13140/2.1.2286.5608
A living system (animal, human) has emotions and consciousness at different stages of development. Why do you think that an artificial system like a cyborg would have such states?
A natural brain is able to have such states, but there is no reason why a mere bunch of neurons should have them. There is no structure like that of the brain; it would be a wet NN but nothing else. An ANN, which does not need biological nutrition, would have the same functionality.
It depends on your theory of emotion. My approach is that emotions are the result of a state indication between the needs of an agent and the situation of the environment and bodily states.
I agree. In fact, I've been able to create a matrix set that allows an ANN to learn and behave according to an emotional state captured from voice frequencies.
Yes it does depend on a model - which was the reason behind my question to all and sundry. The idea of direct recognition of emotional output needs to be set into a context for the use of such information.
My model, which I have floated before on ResearchGate, is one in which we have internalised feedback (i.e. not emitted to the environment) but reflected in a recursive process, in which some sort of convergence mechanism is required to resolve open-loop problems implicit in the current input.
I propose that there is an evolutionary genetic component which provides what might be termed generic actions (goals); these determine convergence criteria in the feedback-based process. Such a mechanism must include a test (derived from goals), and that could be the emotional state. The positive feedback creates a driving force, but the direction is determined by the emotional state (fear = avoid; pleasure/anticipation = approach/converge), and this whole system constitutes a model for closing somewhat-open loops in an overall control-based system. Thus we have a process driven in a control-like manner, with a driving force (akin to resonance) and with positive and negative convergence (emotional states). This means that a detected emotion sits only in the input phase of this process, and that is why I asked the question about the PURPOSE. There are two sources of purpose at this point.
(& Heman)
Your purpose for doing this (i.e. what is in your model and why you need it), and, as also indicated, a system purpose for processing other people's emotions. In this case it would feed into an internal (mind) model, so there should be some sort of reason for identifying other people's emotional state, which is an open-loop problem to be solved: that of handling other people's emotional state.
Identifying emotional state is the input part of solving a problem in which other people's emotional state matters. It is not what emotions do for the mind mechanism, hence my question:
What would be your purpose for simulating emotions as an output from an ANN?
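The control-loop view described above can be sketched minimally: the emotional state supplies the sign of convergence (fear = avoid, pleasure/anticipation = approach), and a simple feedback step drives the agent's state toward or away from a target. The gain and the emotion labels are illustrative assumptions:

```python
# Emotional state as the direction of a feedback loop: positive
# emotions converge on the target, fear diverges from it.
# Gain value and emotion set are illustrative assumptions.

DIRECTION = {"fear": -1, "pleasure": +1, "anticipation": +1}

def control_step(position, target, emotion, gain=0.5):
    error = target - position          # open-loop discrepancy
    return position + DIRECTION[emotion] * gain * error

pos = 0.0
for _ in range(10):
    pos = control_step(pos, 1.0, "pleasure")
print(round(pos, 3))  # 0.999 -> converges toward the target
```

Under "fear" the same step moves the state away from the target, so a single detected emotion flips the whole loop from approach to avoidance, which is the role the model assigns it.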
Why do we find emotions in humans and animals? Of course they have a purpose!
In my model, emotions are signals to the mind of the relation between the needs of the agent and the situation of the environment and bodily states. I use Plutchik's 8 primary emotions.
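A minimal numeric reading of this need-vs-situation model (the encoding, the need names, and the set-points are all assumptions for illustration):

```python
# Each need has a set-point; the environment/body supplies a current
# level; the "emotion signal" reports the discrepancy to the mind.
# Need names and set-points are illustrative assumptions.

NEEDS = {"safety": 1.0, "energy": 1.0}   # assumed set-points

def emotion_signal(state):
    """Per-need discrepancy; a large negative value = an urgent signal."""
    return {need: state.get(need, 0.0) - target
            for need, target in NEEDS.items()}

signal = emotion_signal({"safety": 0.2, "energy": 0.9})
print(signal)  # safety is far below its set-point -> strong signal
```

A further step, not shown here, would map these discrepancy signals onto Plutchik's 8 primaries (e.g. a large safety deficit mapping to fear).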
A simple thought related to this: we humans, with our neuron-based brains, normally (in a calm and serene state of mind) take a decision based on some input conditions, yet for the same set of inputs our decision differs when we are emotionally stressed. Even during an emotional crisis (depressed, angry, in a bad mood, etc.) we take decisions far too easily (scolding a good person over a very small fault) that we otherwise would never have taken. Almost all extreme decisions (like suicide) would not have been taken for the same input reasons if the state of emotional stress had been different (cool and calm). So can this be modeled in an ANN whose outcome changes based not only on inputs but also on state of mind? Please don't mind if my question seems silly.
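The question is not silly at all, and can be sketched directly: give the decision function both the situation inputs and a "state of mind" value, so identical inputs can yield different decisions. The thresholds and scaling below are assumptions:

```python
# A decision that depends on both the input and a mood state, so the
# same provocation yields different outcomes. Thresholds are assumed.

def decide(provocation, mood):
    """provocation in [0, 1]; mood in [-1, 1] (-1 = stressed, +1 = calm).
    A stressed mood lowers the threshold for an extreme reaction."""
    threshold = 0.5 + 0.4 * mood   # calm raises the bar, stress lowers it
    return "react harshly" if provocation > threshold else "stay composed"

# The same small fault, two different states of mind:
print(decide(0.3, mood=0.8))    # calm -> stay composed
print(decide(0.3, mood=-0.8))   # stressed -> react harshly
```

In an actual ANN this would amount to concatenating a mood vector to the input layer (or modulating the activation thresholds), so that the learned mapping itself is conditioned on emotional state.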