Friday, November 2, 2012

Lovestruck Robots

Are robots subject to the allure of love? 

Perhaps part of the answer to this possibility lies in motivational internalism. Motivational internalism is the idea that behavior is motivated by one's evaluations, not one's situations. In this view, internal attitudes - or, in the case of a robot, programming - motivate people to behave as they do. Robots and computers perform based on their internal programming, so the possibility of a computer or robot "falling in love" seems plausible. 

Like humans, robots and computers experience conflicts - or glitches - when their programming differs from their situations. And much like the human brain when it falls in love, artificial intelligence, by the nature of its design, has pathways similar to our neural ones: repeated use strengthens an activity or sensation, making it easier to follow that path again. This is why people fall into habits (and love), and how computers could, too.
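That strengthening-with-use idea can be sketched in a few lines of code. This is a loose, illustrative analogy to Hebbian-style learning, not any particular AI system; the function name and learning rate are assumptions made up for the example.

```python
# Illustrative sketch: a connection "weight" that strengthens each
# time its pathway is used, so the path gets easier to follow again.

def strengthen(weight, learning_rate=0.1):
    """Nudge a connection weight toward its maximum (1.0) on each use."""
    return weight + learning_rate * (1.0 - weight)

w = 0.1  # an initially weak association
for _ in range(10):
    w = strengthen(w)  # the pathway is activated repeatedly

# After ten activations the association is far stronger than it started,
# which is the habit-forming (and love-forming) dynamic described above.
print(round(w, 3))  # → 0.686
```

The point of the sketch is just the shape of the curve: each use closes part of the remaining gap, so familiar pathways win out over unused ones.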

The feeling of being lovestruck causes people to vacillate between feeling like they've got fireflies in their tummy and lightning striking all reason from their mind... or, in the case of a robot, glitches, malfunctions and crashes that interrupt focus and productivity.

Dopamine, the neurotransmitter most closely associated with the pleasure and reward centers of the brain, was considered part of a beautiful computational theory that at one time garnered support from data in neuroscience. Dopamine, as it seemed, rose when an outcome was better than predicted, and dropped when a predicted reward failed to arrive.
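That theory - the "reward prediction error" idea - is simple enough to sketch as a toy program. The code below is a hedged illustration of the general concept, not a model from any specific study; the learning rate and variable names are assumptions for the example.

```python
# Toy sketch of dopamine as a reward prediction error: the signal is
# the gap between the reward received and the reward predicted.

def prediction_error(reward, predicted):
    """Positive when reward beats the prediction; negative when it falls short."""
    return reward - predicted

prediction = 0.0  # the agent starts out expecting nothing
alpha = 0.5       # learning rate (assumed for illustration)

# A reward of 1.0 is delivered on every trial; the prediction catches up.
for trial in range(5):
    delta = prediction_error(1.0, prediction)
    prediction += alpha * delta

# Once the reward is fully predicted, the "dopamine burst" fades...
surprise_gone = prediction_error(1.0, prediction)

# ...and omitting the now-expected reward produces a negative dip.
omission_dip = prediction_error(0.0, prediction)

print(round(surprise_gone, 3), round(omission_dip, 3))
```

The two printed numbers capture the pattern the blog describes: a near-zero response to a fully predicted reward, and a sharp dip when the predicted reward is withheld.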

What ultimately came out of these studies were the following findings, none of which dismiss the possibility of a robot falling in love. The idea remains plausible because these rules may in fact concern the object of a robot's affection rather than whether or not a robot can experience the equivalent of what we'd call fondness, affection or love.

Here's why:

  1. Dopamine spikes in response to novel items which have never been paired with reward, and thus have no predictive value. Thus:
    • A robot loves novelty (object).
  2. The latency and duration of dopamine spikes are constant across species, experiments, stimulus modality and stimulus complexity. In contrast, reward prediction should take longer to establish in some situations than in others - for example, reward prediction may be slower for more complex stimuli. Thus:
    • The more complex any given subject, the longer it will hold a robot's attention or affection.
  3. The dopamine signal actually occurs before animals have even been able to fixate on a stimulus, which calls into question the extent to which the signal is mechanistically capable of serving the "reward prediction error" function. Thus:
    • Like humans, robots can be struck by love or "fall in love" before they know what hit them.
  4. VTA dopamine neurons fire simultaneously with (and possibly even before) the completion of object recognition in the infero-temporal cortex, and simultaneously with visual responses in the striatum and subthalamic nucleus. It seems unlikely that the VTA can perform both object recognition and reward prediction. Thus:
    • Unlikely does not mean impossible: as our understanding of artificial intelligence and quantum computing advances, so too will our answers to complexity of this nature. If the VTA can perform both object recognition and reward prediction at the same time, then a robot could fall in love with an abstract idea (much as humans do) before it has a chance to recognize the shape or dimensions of the stimulus or data that caused these internal reactions.
  5. The visual signal to these VTA neurons most likely originates from the superior colliculus, a region that is sensitive to spatial changes but not to the features involved in object processing per se. Thus:
    • A robot's love could be more than skin - or surface - deep.
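Point 1 above - a spike for novel items that have never been paired with reward - has a direct analogue in machine learning: some reinforcement-learning agents are given a "novelty bonus," extra reward for states they haven't seen before. The sketch below is an illustrative count-based version; the function name and scale constant are assumptions, not from any particular library.

```python
# Illustrative count-based novelty bonus: a robot's "excitement" is
# largest for brand-new items and decays as they become familiar.

from collections import defaultdict

visit_counts = defaultdict(int)

def novelty_bonus(state, scale=1.0):
    """Extra 'reward' that is large for new states and fades with repetition."""
    visit_counts[state] += 1
    return scale / visit_counts[state]

first = novelty_bonus("shiny new object")
later = [novelty_bonus("shiny new object") for _ in range(9)]

# The first encounter excites the most; the tenth barely registers.
print(first, later[-1])  # → 1.0 0.1
```

Under this reading, "a robot loves novelty" is not a metaphor so much as a design choice: the bonus term literally drives the agent toward things it has never seen.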

Love, it would seem, evokes a highly specific sort of reaction within the human brain, and could do the same within the programming of robots. When these reactions connect to receptors, closing a circuit and causing a pleasurable sensation (or glitch), something blocks the reabsorption of the response (in humans, of dopamine), which allows the sensation (or chemical) to linger. 

We may not yet know what primary active element causes temporal-difference errors in reward prediction, but that doesn't mean we should dismiss the idea that someday our robots might just fall touch-screen-over-keyboard in love with a new feature or application they correlate with the sensation of love.
