If machines can be made artificially intelligent, could we also teach them to love artificially?
In response to Mohammed's comment: so can a human act as if loving. What we perceive are the expressions of love, and these can be encoded or learnt by an AI.
In moving forward with the question, I would suggest certain parameters of evaluation for such a possibility:
When I think about love, I think about books. For me, love is about creation, about knowledge, or about solutions for life.
I don't think robots are as fully creative as humans, since they operate with goals. It is also difficult (as far as I know) to construct knowledge or solutions for life from a large set of ideas.
Arturo Geigel's points are so interesting. I am curious: do you refer to AI as an entity, or do you believe that the machine is an entity that holds AI?
Sergio Silva Ribeiro ,
If we are to evaluate an AI on the same footing as a human, it has to be embodied. As with a human, it must be embodied to channel the expressions of love; otherwise you are channeling your love to the 'memory of' or 'conception of' the entity, not the entity itself.
Beatrice Luca,
'I don't think robots are fully creative as humans since they operate with goals' — in this sentence there is ambiguity in the 'they', so I will assume you are referring to robots.
I have been painting since I was an adolescent, and one thing I can tell you is that creativity is in large part a random selection of ideas which can then be ordered coherently into an idea for a painting. Then one "enters into a contract with the medium" (quoted from my former painting teacher Andy Bueso http://www.mapr.org/es/museo/proa/artista/bueso-andy ). This means that the medium limits your expression by its physicality (e.g., two dimensions if it is a canvas, black and white if it is charcoal, etc.). Tacit knowledge (i.e., know-how) is difficult to describe but not impossible; it can be refined to the point of an algorithm.
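That account of creativity — sample ideas at random, then keep an ordering that hangs together, subject to the medium's constraints — can be sketched as a toy program. Everything here (the idea list, the coherence metric, the constraint table) is an illustrative assumption, not a claim about how creativity actually works:

```python
import random

# Hypothetical sketch: creativity as random selection of ideas,
# then ordering them by a coherence score, under medium constraints.
IDEAS = ["storm", "harbor", "old door", "crow", "lantern", "tide"]

# The "contract with the medium": physical limits the artist must accept.
MEDIUM_CONSTRAINTS = {"canvas": {"dimensions": 2}, "charcoal": {"palette": "grayscale"}}

def coherence(sequence):
    # Toy coherence metric: prefer alphabetically adjacent first letters,
    # standing in for whatever semantic relatedness measure one would use.
    return -sum(abs(ord(a[0]) - ord(b[0])) for a, b in zip(sequence, sequence[1:]))

def compose(n_ideas=3, trials=100, seed=0):
    # Randomly sample many candidate idea sequences, keep the most coherent.
    rng = random.Random(seed)
    candidates = [rng.sample(IDEAS, n_ideas) for _ in range(trials)]
    return max(candidates, key=coherence)

print(compose())
```

The point of the sketch is only that "random selection plus a coherence filter" is already mechanizable; the hard part, as noted above, is describing the tacit knowledge well enough to replace the toy metric.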
The answer to this question essentially boils down to the question of: How do we define love? In particular, do we assume consciousness to be a prerequisite for love?
The human understanding of love is generally conditional upon the existence of consciousness. In that case, the necessary condition for solving the problem of love is to solve the hard problem of consciousness first.
Arturo Geigel
Great observations! My opinion, though, is that constructing a benchmark for testing love will probably be non-trivial. However, I suspect that it may be possible to make the problem of 'simulating' love (in AI entities) independent of the problem of constructing a benchmark, if 'love' is carefully defined.
Sergio Silva Ribeiro
As far as I understand, a machine is an entity that holds AI, which in turn, is its own entity.
Shounak Datta,
The reason for laying down benchmarks early on is to steer the discussion towards more formal aspects rather than mere opinions (which happens a lot in RG threads of this type). I do agree it is non-trivial to derive a benchmark. I also believe that establishing a benchmark provides a guidepost for the development of AI. One of the fundamental problems of the field (and now of AGI) is the moving-goalpost problem: as soon as a goal is achieved, people decide that the metric was not what they were expecting and move it further out, sometimes to the point that not even humans can comply with it (the reason for my preemptive evaluation parameter 5).
Shounak Datta
If we accept that a machine holds AI, the natural conclusion is that it could hold Artificial Love (AL) as well.
Sergio Silva Ribeiro , The position that you hold is valid if one assumes a strict position on points 1 and 2 of my evaluation criteria. If one can prove that an algorithm concludes for x, where x is an AI routine, then this can be extended to y, where y is a love routine. Shounak Datta 's position is that "The human understanding of love is generally conditional upon the existence of consciousness"; this involves metaphysical considerations that fall outside the scientific domain.
It remains to be seen whether there can be agreement between both positions, which I do not see as likely unless one person denies one of the underlying premises of the domain of inquiry (keeping it strictly scientific, or expanding it to non-naturalist metaphysics).
Arturo Geigel
You seem to be close to defining the AL equation using x and y in your explanation ;-) We could discuss it in the metaphysical arena, in the field of ideas. But using "A" before "L" is an excellent indication that we can avoid that eternal discussion (I mean the metaphysical one) and instead discuss something closer to experimental and practical (empirical) science. Because it is artificial, we can verify, measure, and reproduce it.
Sergio Silva Ribeiro ,
I personally tend to favor your line of thought but Shounak Datta 's position is that "The human understanding of love is generally conditional upon the existence of consciousness. In that case, the necessary condition for solving the problem of love is to solve the hard problem of consciousness first."
While I think this is debatable, I would need to hear more on why he finds that consciousness is a precondition for love. I think that metaphysically both are separate entities, especially if one takes consciousness as constrained by current scientific stances. But again I will withhold any conclusive judgement until he expands on it.
Arturo Geigel, Sergio Silva Ribeiro - Philosophically speaking, an entity has to be 'self-aware' in order to be able to 'love'. Simply put, 'love' is the practice of valuing the needs of some other entity above one's own needs. Hence, one cannot 'love' without having any knowledge of one's own existence.
That being said, if one defines 'love' in some other concrete way, which is independent of consciousness (I'm treating self-awareness and consciousness as synonyms for this discussion), then we might be able to simulate this consciousness-independent version of 'love' artificially, without requiring that the problem of consciousness be solved first. Essentially, it all boils down to how you define this 'love'. Sergio Silva Ribeiro do you have any concrete, scientific definition in mind?
Sergio Silva Ribeiro - In any case, if we assume that it is possible to simulate 'love' artificially, then yes I do believe that a machine can harbor an entity with such characteristics.
Shounak Datta,
If I got your line of thought correctly it is something like this:
1) 'love' is the practice of valuing needs of some other entity
2) To be love, the value assigned must be greater than one's own needs.
3) Hence, one cannot 'love' without having any knowledge of one's own existence (minor conclusion).
4) Therefore, an entity has to be 'self-aware' in order to be able to 'love' (major conclusion).
There appear to be several propositions missing, and I am having trouble filling in the missing premises. Using what I have, I can provide the following analysis.
a) Assuming 1 and 2, I have no problem.
b) I am having trouble with the content of the minor conclusion. How is knowledge of one's own existence a precondition for love? Love can be described as an internal state, and one can, for all intents and purposes, be in love without being aware of it. Awareness is necessary only to state that one is in 'love'.
c) I am having trouble with the inference from 2 to 3, since assigning greater value to external entities does not require having knowledge of one's own existence (and therefore self-awareness), just a recognition of the boundaries between entities and concepts.
Premises 1 and 2 are easily satisfied by an AI, since:
1) Premise 1 can be equated to a valuation function of an AI with its own internal metrics of valuation.
2) This can be programmed on the hardware as a state machine.
3) State machines have transitions between states.
4) The states of the software running the state machine on hardware are internal states of the hardware (i.e., the software runs on a chip which is encased in a surface, and is therefore internal to the chip).
5) There can be a program that defines the hardware boundaries in physical space, and the sensors will take that into account to provide information on what is internal and external, and therefore the boundaries of the robot/AI (this even fulfills the notion of awareness given, since no constraints have been stated that exclude this case).
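The steps above can be sketched as a minimal state machine. The state names, events, and valuation function below are hypothetical illustrations of premises 1 and 2, not an actual design:

```python
# Minimal sketch of premises 1-2: a valuation function (premise 1) scores
# another entity's need against the agent's own, and a transition table
# bounds which "affective" states are reachable (premise 2).

TRANSITIONS = {
    ("neutral", "valued_higher"): "caring",
    ("caring",  "valued_higher"): "caring",
    ("caring",  "valued_lower"):  "neutral",
    ("neutral", "valued_lower"):  "neutral",
}

def valuation(other_need: float, own_need: float) -> str:
    # Premise 1: an internal metric deciding whether the other's need
    # outweighs the agent's own.
    return "valued_higher" if other_need > own_need else "valued_lower"

def step(state: str, other_need: float, own_need: float) -> str:
    # One state-machine transition driven by the valuation event.
    event = valuation(other_need, own_need)
    return TRANSITIONS[(state, event)]

state = "neutral"
state = step(state, other_need=0.9, own_need=0.4)  # other's need outweighs own
print(state)  # caring
```

Because the table fixes which transitions exist at all, the reachable behavior is bounded in exactly the sense point 2 describes.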
@Arturo Geigel, This seems interesting, can you elaborate on the exact layout of the system you have in mind?
I am still thinking about whether the machine can be self-aware. I prefer to use the expression 'artificial love' rather than 'love'. The machine is not natural, it is artificial. So everything about it should be artificial, for today. Of course, machines could evolve in the future. But I guess we could think of AL as something more embryonic.
I would suggest the following reading: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
Shounak Datta,
I would break the discussion down into what is debatable in terms of the question. So that we can start talking about the components of the system, we can divide them into:
Most of the debate on whether a system can be conscious, or love, or have human capabilities and properties lies mostly in 2 and 3, mostly because:
a) We do not know enough about ourselves
b) We have not given proper context and delimitation to the problem
c) Preconceptions
d) Religious belief
Before going forward, I would like to address d. In my view, science and religion have different axiomatic systems that cannot be mixed. There is nothing wrong with that; it is just that they are incompatible, and each should be respected in its separate nature.
With respect to 2, I would say that a symbolic system is best suited, with a mapping via a neural network from inputs to internal representations. Due to the complexity of the representations, I would tend to favor a graph structure to interlink them.
With respect to 3 is where my proposed system does vary. I previously built a system for robotic vision in which the system's camera was drawn to a particular position via color saliency and optical flow. To avoid getting stuck on highways or other motion-intensive scenes, I designed a saturation curve for the algorithm, which "unstuck" the robotic vision. This saturation curve, along with transition states of "feeling", can provide a more viable algorithm for "love" without the robot/AI becoming a stalker. The other part would be a ramp-up curve based on inputs, adjustable to already established relationships and transition states. The curves themselves can be subject to another agent, which would be the equivalent, if you like, of a restricted awareness of its own state, and which can adjust the curves via reinforcement learning and built-in rules.
The other important part of the system is implementing the inputs and outputs with enough programming detail to overcome the uncanny valley.
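A saturation curve of the kind described can be sketched in a few lines. The exponential form, ceiling, and rate constant below are assumptions chosen for illustration; the original system's actual curve is not specified:

```python
import math

# Hypothetical sketch of the saturation idea: a response (attention,
# "affection") ramps up with repeated stimulus but saturates at a ceiling,
# so the agent does not fixate on one target -- analogous to the
# color-saliency camera getting "unstuck" from motion-heavy scenes.

def saturating_response(stimulus_count: float,
                        ceiling: float = 1.0,
                        rate: float = 0.5) -> float:
    # Exponential saturation: rises steeply at first, approaches `ceiling`,
    # never exceeds it no matter how much stimulus accumulates.
    return ceiling * (1.0 - math.exp(-rate * stimulus_count))

for n in (1, 5, 50):
    print(n, round(saturating_response(n), 3))
```

The key design property is the bounded output: unbounded stimulus (a "stalker" fixation) cannot push the response past the ceiling, which is what makes the behavior controllable.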
If you want we can drill down further on any of the points here.
Sergio Silva Ribeiro ,
I think the distinction between artificial love and love should be based on essential properties of love itself. The distinction of artificial vs. non-artificial can be prone to criticism for arbitrariness, rather than being a proper account of what is absent from robot love. I think this was the point brought up by Shounak Datta . This needs to be resolved metaphysically, but I have not seen a solid argument for it, nor for consciousness (the Chinese room argument by Searle seems to me rather weak in terms of grounding a solid distinction between what an AI cannot do and what humans provably do, corroborated by a third party).
I'm not sure I fully understand your proposal for point 3. The part that is reasonably clear is:
"With respect to 3 is where my proposed system does vary. I have previously built a system for robotic vision where the system's camera was drawn to a particular position via color saliency and optical flow."
I did not understand the rest of the proposal.
Shounak Datta ,
That was just an example of where my notion of saturation applied to inputs came from. It can provide effective controls against programming side effects that might lead to robotic misbehavior. It worked extremely well in that case, and I think it is crucial for a proper transition to a "loving" AI, since it provides ways of getting "unstuck" from a particular state. The example was the idea of a stalker robot that confuses love with stalking; to produce proper behavior, saturation curves are needed to avoid such extremes. The second implied mechanism for bounding behavior is to encode the behavior as a state machine, so transitions are "limited" among themselves. Purely event-driven behavioral patterns would bring about state transitions that might be unwelcome or otherwise plainly unpredictable.
If we give the additional agent the ability to extend the state-diagram behavior (with linkable modules and metaprogramming), the rules would sit at the metaprogramming level but would be quite complex in nature.
Arturo Geigel
"As way of example was the idea of a stalker robot that confuses love with stalking."
To avoid the situation where your robot falls in love at first sight or becomes a stalker, you could use the concept of a state machine for "love." I mean, before reaching the status of "in love," some status should be increased based on interaction.
For example, creating rules for impression formation (IF), your robot could identify, in a set of objects, those that match its internal set of IF-rules. IF-rules could be organized into different groups, like physical characteristics, common interests, beliefs, etc.
Some IF-rules could be checked via input devices like a camera, microphone, heat sensor, etc. A status like "interest" could be set for some objects.
The robot could try to interact with the objects, getting results like success or failure on each attempt. Some IF-rules could be based on interaction. A status like "affection" could be set for some objects.
So, you could build up AL progressively. Or we could just set a robot up to love someone/something and skip all those processes.
In fact, the crucial point is the concept of AL adopted.
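The progressive IF-rules scheme can be sketched concretely. The rule groups, thresholds, and status names below are all hypothetical stand-ins for whatever a real system would use:

```python
# Hypothetical sketch of IF-rules for impression formation: rules grouped by
# category; matching rules raises the status progressively
# ("none" -> "interest" -> "affection") rather than jumping to "in love".

IF_RULES = {
    "physical":  lambda obj: obj.get("height_cm", 0) > 150,
    "interests": lambda obj: "painting" in obj.get("hobbies", []),
    "beliefs":   lambda obj: obj.get("optimistic", False),
}

def impression_status(obj: dict, interactions_succeeded: int = 0) -> str:
    # Count how many rule groups the candidate object satisfies.
    matches = sum(rule(obj) for rule in IF_RULES.values())
    # "Affection" additionally requires a history of successful interactions,
    # so first sight alone can never reach it.
    if matches >= 2 and interactions_succeeded >= 3:
        return "affection"
    if matches >= 1:
        return "interest"
    return "none"

candidate = {"height_cm": 170, "hobbies": ["painting"], "optimistic": True}
print(impression_status(candidate))                            # interest
print(impression_status(candidate, interactions_succeeded=3))  # affection
```

Gating the higher status on interaction history is what rules out "love at first sight" and stalker-like escalation by construction.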
Sergio Silva Ribeiro ,
That is why I brought up the camera example to put in context the type of AI I am aiming for.
I think that state transitions are more nuanced and fuzzy than those that can be implemented with if-rules (though you can easily rule out stalking if the rules are written properly). I am thinking of emulating the activation functions of neural networks, but for rules. This would be similar to fuzzy rules, except that each specified interval along the ramp can be mapped to a specified behavior. Overlapping intervals can then be randomly selected between, and the decay or ramp can indicate intensity. The curves would also be more complex than those you usually find in fuzzy rules (you can even have superposition of curves).
This can give a wider behavioral range that would fall in line with neural networks.
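A minimal version of that idea can be sketched as follows. The behaviors, intervals, and triangular ramp are illustrative assumptions; the proposal above envisions more complex, possibly superposed curves:

```python
import random

# Hypothetical sketch of "activation functions for rules": each rule has a
# ramp over an intensity interval; when intervals overlap, the behavior is
# sampled at random, weighted by activation strength, giving a wider
# behavioral range than crisp if-rules.

RULES = [
    # (behavior, interval start, interval end)
    ("smile",    0.2, 0.6),
    ("approach", 0.4, 0.9),
    ("embrace",  0.7, 1.0),
]

def activation(x: float, lo: float, hi: float) -> float:
    # Triangular ramp peaking at the interval midpoint (a fuzzy-like curve).
    if not lo <= x <= hi:
        return 0.0
    mid = (lo + hi) / 2
    return 1.0 - abs(x - mid) / ((hi - lo) / 2)

def select_behavior(intensity: float, rng: random.Random) -> str:
    weights = [(b, activation(intensity, lo, hi)) for b, lo, hi in RULES]
    active = [(b, w) for b, w in weights if w > 0]
    if not active:
        return "idle"  # no rule fires at this intensity
    behaviors, ws = zip(*active)
    # Random selection among overlapping intervals, weighted by activation.
    return rng.choices(behaviors, weights=ws)[0]

print(select_behavior(0.5, random.Random(0)))
```

At intensity 0.5 both "smile" and "approach" are active, so either may be chosen; the ramp heights act as the intensity weighting described above.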
In my opinion, love is an emotion, which will be "unfeelable" by an AI.
Marianne Levon Shahsuvaryan ,
How do you define emotion and "unfeelable"?
If we take Britannica's definition of emotion as "a complex experience of consciousness, sensation, and behavior reflecting the personal significance of a thing, event, or state of affairs" [1], I can easily come up with programming that meets all of the criteria above.
To fulfill "unfeelable" would mean the negation of "an emotional state or reaction" [3]. This would entail not being able to achieve the behavioral component of the definition and not being able to implement state-machine programming for the AI, both of which I see as very achievable.
[1] https://www.britannica.com/science/emotion
[2-3] Google's definition box
Marianne Levon Shahsuvaryan
Using the expression "Artificial Love" instead of just "Love," we can skip the metaphysical discussion. So, a machine "loves" not exactly like us and can "feel" but not exactly like us.
Abderrahim Benkhaled
"Machines do not have feelings"
No they don't. Just sensors to help them to "feel."
Sergio,
If we support the assertion that machines do not have feelings, then we also have to deny that humans have feelings, since neither can be proven independently by third parties (if we are to base the evaluation empirically and not philosophically).
If you allow anecdotal evidence based on human perception of love, then it can equally be proven for AI.
Arturo Geigel
Can we tickle a robot to produce a smile? Why not?
Machines have feelings. Why do machines need sensors? To touch, check, see, hear, etc. Are these not feelings? I suppose they are.
Do machines have human feelings?
No they don't. Just sensors to help them to "feel."
;)
Sorry for jumping in.
Sergio is right, kind of.
Sensory perception is the robot's way of feeling.
We may be "complicating" the very definition of love too much.
If we reduce the definition of love to something rewarding when we do a certain task, like a child loving chocolate, or a mom nurturing a child,
then an AI robot is definitely able to be prepared to do such things.
If we prepare the robot to love the earth and be ready to annihilate those who dare to tarnish it,
yup, we can do that too. :P
The one thing I am sure a robot can't do is innovate a new way to feel or to love, and the same goes for us humans too. To a certain degree we humans may synthesize new ways to love, based on what we have, and a robot can do the same.
Sergio,
Let us disambiguate what is meant by "feelings":
The 1st sense is feeling as in touch, which can sense pressure, etc. This can be processed by a neuron or by a sensor with programming.
The 2nd sense is feeling as in passion, fear, etc. These are usually abstracted by people and categorized as not achievable by AI.
The 2nd sense is the one I am rejecting as not being empirically defined. This makes it unachievable by both humans and AI, unless it is treated as the first sense for both humans and AI.