Robots and Emotions
Posted: April 25, 2009
Designing robots with true emotions — a topic I ran into on a different forum —
may be a matter of quibbling about semantics and definitions: if 'true' emotions are understood as reactions to attempts to satisfy human needs (successes or failures of such attempts), then robots would not fall within that realm. I don't want to enter that controversy, nor do I have anything to contribute from a programming / AI point of view, my programming competence being quite rudimentary as well as outdated. I feel, however, that I can add some useful contributions from a design or planning perspective, which may help in overcoming one important limitation of focus I have noticed there: the focus on emotions as reactions to events or processes related to physiological and survival needs.

I am not denying the pertinence of that perspective, and am sure that pursuing it will produce interesting results. However, as an architect, I am concerned with how the built environment produces not only satisfactory physiological conditions for the survival and functioning of the human body, but also emotions that, at least at first glance, seem unrelated to mere physiological mechanisms. Further investigation might clarify how proportions, rhythm, scale, composition of building form, etc. produce physiological responses that contribute to our sense of well-being or displeasure, and therefore to emotions. But I am convinced that there are additional factors involved which can be classified as survival mechanisms only with some Procrustean difficulty.

I see humans as endowed with a need to define themselves as individuals, that is, basically, to be 'different' from others. This can include adopting some archetypal or culturally defined identity: a social role, a group identity based on age, work or career, philosophy, even fashion, lifestyle, hobbies, etc. I call this the 'image' of who we want to be. And the desire for 'difference' ultimately amounts to a tendency, or need, to design an identity, one 'different' from any we have known before.
And the images we humans adopt (the concept of ‘role model’ aims at this but misses the ‘design’ aspect of the need in presupposing a ‘model’ to imitate) can be quite contrary to the rationality of mere survival of body or species: it can include aspects of asceticism, of service to others, self-sacrifice, for example. Other criteria come into play here: criteria of ‘nobility’, of friendship, the good, beauty, for instance, that may appear ‘irrational’ to the survival-focused warrior.
The role of the built environment in this now becomes clearer: it can 'match' or reinforce that image of who we are or want to be. Or it can fail to do so: mismatch, conflict with our desired image. And that sense of match or mismatch is arguably an emotion that we as designers are crucially interested in.

More importantly: if we recognize this human tendency or desire to 'design' and redesign ourselves according to a new image, to become individuals of our own choice, as a human right, we must ask ourselves how our building designs can assist people in this quest. This becomes the supreme responsibility of the architect. It cannot consist of merely expressing and asserting the creativity of the architect / designer (making design creativity a consumer item and thereby arguably cheapening it, even as we are asked to pay for fashionable design...) but should ask how design can help the user create that new identity, that new image.

One might imagine that the image first appears as a mental construct in the user's mind, and then entices the person to actions that define the life that goes with it. In reality the process may be more interactive: the building may invite users to engage in occasions and activities that define, or are more in tune with, a new image, which only becomes defined and recognized over time through the activities and design forms with which it is associated.
The pleasure or displeasure of this 'matching' or mismatching of built environment form, and the imagery it evokes in users' minds, with the images those users might want to adopt as their desired 'way of life': images not just 'chosen' from a pre-established menu of societal options or opportunities, but actually created by the individual, who emerges as such in the very process of creating. Those are the emotions we should be concerned about.
The question I have for the robot designers, arising from this, would be: how can robots be given an internal representation of such images, against which to match external messages conveying (or attempting to evoke) imagery? Can they be given image preferences for themselves, which would then be the basis for the emotions derived from the match or mismatch between internal image and external image messages? Can they 'have' their own image preferences? I assume they can be programmed with preferences (if these can be adequately represented), but whose preferences would they be? Can they be programmed to 'design' new imagery against which to evaluate their environment? How? According to what criteria? And finally, should it not be clear that any robot lacking such capabilities, but acting in the environment, IS representing an image? One that may just be so 'poor' (in the sense of undefined) that we don't even want to acknowledge it as such?
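To make the question concrete, here is a minimal sketch, purely illustrative, of what such a mechanism might look like in code. All names (`ImageDrivenAgent`, `appraise`, `redesign_image`) are hypothetical, and the representation is a deliberate oversimplification: the internal 'image' is a feature vector, external messages are vectors in the same space, and the match/mismatch signal (cosine similarity) plays the role of an emotional valence. Whether such a score deserves to be called an emotion is precisely the open question above.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

class ImageDrivenAgent:
    """Toy agent holding an internal 'image' (desired-identity preference)
    as a feature vector; environmental messages are appraised against it,
    and the resulting match/mismatch score stands in for emotional valence."""

    def __init__(self, internal_image):
        self.internal_image = list(internal_image)

    def appraise(self, external_message):
        # Valence in [-1, 1]: positive means the environment reinforces
        # the agent's desired image; negative means it conflicts with it.
        return cosine(self.internal_image, external_message)

    def redesign_image(self, experiences, rate=0.1):
        # Crude stand-in for 'designing' a new image: drift the internal
        # image toward experiences that were appraised positively.
        for message, valence in experiences:
            if valence > 0:
                self.internal_image = [
                    i + rate * valence * (m - i)
                    for i, m in zip(self.internal_image, message)
                ]

agent = ImageDrivenAgent([1.0, 0.0, 0.5])
match = agent.appraise([1.0, 0.0, 0.5])    # identical imagery: strong match
clash = agent.appraise([-1.0, 0.0, -0.5])  # opposed imagery: mismatch
```

Note how the sketch sidesteps the harder questions in the text: the initial `internal_image` is supplied by the programmer (so the preferences are the programmer's, not the robot's), and `redesign_image` merely averages toward past experience rather than genuinely creating a new image by the agent's own criteria.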
A related, equally interesting and potentially controversial issue is that of power. Being able to act according to a chosen 'different' image requires some degree of power: empowerment. The ability to creatively design and act upon new images guiding one's life requires it even more so. The plans and activities arising from such efforts will likely influence, or get in the way of, the plans of other entities, human or otherwise. What provisions should be built into a being (human or otherwise) to deal 'responsibly' with the resulting conflicts? Merely giving robots abilities commensurate with their mechanical power (which exceeds that of humans, the very reason for their being designed) may not be enough. Is responsibility an emotion?