Value sensitive design

Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. VSD originated within the fields of information systems design and human-computer interaction to address design issues in those fields by emphasizing the ethical values of direct and indirect stakeholders. It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and early 1990s. In 2019, Batya Friedman and David Hendry published a book on the topic, "Value Sensitive Design: Shaping Technology with Moral Imagination". Value sensitive design takes human values into account in a well-defined manner throughout the whole design process. Designs are developed through an investigation consisting of three phases: conceptual, empirical and technical. These investigations are intended to be iterative, allowing the designer to modify the design continuously.

The VSD approach is often described as fundamentally predicated on its ability to be modified depending on the technology, value(s), or context of use. One example of a modified VSD approach is Privacy by Design, which is concerned with respecting the privacy of personally identifiable information in systems and processes. Another is Care-Centered Value Sensitive Design (CCVSD), proposed by Aimee van Wynsberghe, which modifies the VSD approach to account for the values central to care in the design and development of care robots.
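To make the Privacy by Design concern concrete, the sketch below shows one pattern it commonly motivates: dropping or pseudonymising personally identifiable information before a record enters a system. The field names, the salted-hash pseudonym and the minimise helper are illustrative assumptions, not a prescribed implementation.

```python
import hashlib

# Illustrative record containing personally identifiable information (PII).
raw_record = {
    "name": "Ada Lovelace",
    "email": "ada@example.org",
    "page_viewed": "/checkout",
}

# Assumption: the fields this particular design treats as PII.
PII_FIELDS = {"name", "email"}

def minimise(record, salt="per-deployment-secret"):
    """Drop PII before storage and keep a salted pseudonym instead,
    so events can be linked to a user without storing the identifier."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    digest = hashlib.sha256((salt + record["email"]).encode()).hexdigest()
    cleaned["user_pseudonym"] = digest[:12]
    return cleaned

print(minimise(raw_record))
# -> {'page_viewed': '/checkout', 'user_pseudonym': '...'}
```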

Design process

VSD uses an iterative design process that involves three types of investigations: conceptual, empirical and technical. Conceptual investigations aim at understanding and articulating the various stakeholders of the technology, as well as their values and any value conflicts that might arise for these stakeholders through the use of the technology. Empirical investigations are qualitative or quantitative design research studies used to inform the designers' understanding of the users' values, needs, and practices. Technical investigations can involve either analysis of how people use related technologies, or the design of systems to support values identified in the conceptual and empirical investigations. Friedman and Hendry catalogue seventeen VSD methods, noting each method's main purpose, an overview of its function, and key references.
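As a rough illustration of how the three investigations interleave, the following Python sketch models the iterative loop. Everything in it (the Stakeholder class, the toy conflict heuristic, the vsd_loop driver) is a hypothetical rendering of the process, not published VSD tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    """A direct or indirect stakeholder and the values they hold."""
    name: str
    direct: bool  # True if they use the system; False if merely affected by it
    values: set = field(default_factory=set)

def conceptual_investigation(stakeholders):
    """Articulate stakeholder values and surface candidate conflicts:
    here, naively, any value not shared by every stakeholder group."""
    conflicts = set()
    for a in stakeholders:
        for b in stakeholders:
            conflicts |= a.values - b.values
    return conflicts

def empirical_investigation(conflicts, field_findings):
    """Qualitative/quantitative studies refine the picture; here a finding
    of False means the candidate conflict did not materialise in practice."""
    return {c for c in conflicts if field_findings.get(c, True)}

def technical_investigation(confirmed_conflicts):
    """Derive design requirements supporting the values still in tension."""
    return [f"support the value of '{c}' in the design"
            for c in sorted(confirmed_conflicts)]

def vsd_loop(stakeholders, field_findings, iterations=3):
    """The three investigations run iteratively, each pass refining the design."""
    requirements = []
    for _ in range(iterations):
        conflicts = conceptual_investigation(stakeholders)
        confirmed = empirical_investigation(conflicts, field_findings)
        requirements = technical_investigation(confirmed)
        if not confirmed:  # no unresolved conflicts: the design stabilises
            break
    return requirements

drivers = Stakeholder("drivers", direct=True, values={"convenience", "safety"})
pedestrians = Stakeholder("pedestrians", direct=False, values={"safety", "privacy"})
print(vsd_loop([drivers, pedestrians], field_findings={"convenience": False}))
# -> ["support the value of 'privacy' in the design"]
```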

Criticisms

Two commonly cited criticisms target the value heuristics on which VSD is built. These critiques have been forwarded by Le Dantec et al. and Manders-Huits. Le Dantec et al. argue that formulating a pre-determined list of implicated values runs the risk of ignoring important values that could be elicited from any given empirical case, by mapping those values a priori. Manders-Huits instead takes on the concept of 'values' itself within VSD as the central issue. She argues that the traditional VSD definition of values as "what a person or group of people consider important in life" is nebulous and runs the risk of conflating stakeholders' preferences with moral values.

Wessel Reijers and Bert Gordijn have built upon the criticisms of Le Dantec et al. and Manders-Huits, arguing that the value heuristics of VSD are insufficient given their lack of moral commitment. They propose that a heuristic of virtues stemming from a virtue ethics approach to technology design, largely influenced by the work of Shannon Vallor, provides a more holistic approach to technology design. Steven Umbrello has criticized this approach, arguing not only that the heuristic of values can be reinforced, but that VSD does make moral commitments to at least three universal values: human well-being, justice and dignity. Batya Friedman and David Hendry, in "Value Sensitive Design: Shaping Technology with Moral Imagination", argue that although earlier iterations of the VSD approach did not make explicit moral commitments, it has evolved over the past two decades to commit to at least those three fundamental values.

VSD as a standalone approach has also been criticized as insufficient for the ethical design of artificial intelligence. This criticism rests on the self-learning and opaque character of artificial intelligence techniques such as machine learning and, as a consequence, on the unforeseen or unforeseeable values or disvalues that may emerge after the deployment of an AI system. Steven Umbrello and Ibo van de Poel propose a modified VSD approach that uses the Artificial Intelligence for Social Good (AI4SG) factors as norms to translate abstract philosophical values into tangible design requirements. They argue that full-lifecycle monitoring is necessary to encourage redesign in the event that unwanted values manifest during the deployment of a system.
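The translation of abstract values into norms and design requirements, together with post-deployment monitoring, can be pictured roughly as in the sketch below. This is a conceptual illustration under invented assumptions (the value hierarchy, the metric and the drift tolerance), not Umbrello and van de Poel's published formalisation.

```python
# Assumed hierarchy: abstract value -> norms -> concrete design requirements.
# The strings and the drift rule below are invented for illustration.
value_hierarchy = {
    "fairness": {
        "avoid discriminatory outcomes": [
            "audit training data for group imbalance",
            "report error rates per demographic group",
        ],
    },
}

def design_requirements(hierarchy):
    """Flatten the values -> norms -> requirements hierarchy into a checklist."""
    return [req
            for norms in hierarchy.values()
            for reqs in norms.values()
            for req in reqs]

def needs_redesign(metric_history, tolerance=0.05):
    """Full-lifecycle monitoring: flag redesign when a deployed system's
    observed value metric drifts beyond tolerance from its design target."""
    target, *observed = metric_history
    return any(abs(m - target) > tolerance for m in observed)

print(design_requirements(value_hierarchy))
# Per-group error-rate gap: design-time target first, then deployment readings.
if needs_redesign([0.02, 0.03, 0.09]):
    print("value drift detected: schedule redesign of the affected component")
```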

Uses material from the Wikipedia article Value sensitive design, released under the CC BY-SA 4.0 license.