Publications:Pitfalls of Affective Computing: How can the automatic visual communication of emotions lead to harm, and what can be done to mitigate such risks?
Do not edit this section
Keep all hand-made modifications below
|Title||Pitfalls of Affective Computing: How can the automatic visual communication of emotions lead to harm, and what can be done to mitigate such risks?|
|Author||Martin Cooney and Sepideh Pashami and Anita Sant'Anna and Yuantao Fan and Sławomir Nowaczyk|
|HostPublication||WWW '18 Companion Proceedings of the The Web Conference 2018|
|Conference||The Web Conference 2018 (WWW '18), Lyon, France, April 23-27, 2018|
|Abstract||What would happen in a world where people could "see" others' hidden emotions directly through some visualizing technology? Would lies become uncommon, and would we understand each other better? Or, to the contrary, would such forced honesty make it impossible for a society to exist? The science fiction television show Black Mirror has exposed a number of darker scenarios in which such futuristic technologies, by blurring the lines of what is private and what is not, could also catalyze suffering. Thus, the current paper first turns an eye towards identifying some potential pitfalls in emotion visualization which could lead to psychological or physical harm, miscommunication, and disempowerment. Then, some countermeasures are proposed and discussed--including some level of control over what is visualized and provision of suitably rich emotional information comprising intentions--toward facilitating a future in which emotion visualization could contribute toward people's well-being. The scenarios presented here are not limited to web technologies, since one typically thinks about emotion recognition primarily in the context of direct contact. However, as interfaces develop beyond today's keyboard and monitor, more information becomes available also at a distance--for example, speech-to-text software could evolve to annotate any dictated text with a speaker's emotional state.|