Artificial Intelligence engineers should enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, in order to reduce the potential harm of their creations and to better serve society as a whole, a pair of researchers has concluded in an analysis that appears in the journal Nature Machine Intelligence.
"There is mounting evidence that AI can exacerbate inequality, perpetuate discrimination, and inflict harm," write Mona Sloane, a research fellow at New York University's Institute for Public Knowledge, and Emanuel Moss, a doctoral candidate at the City University of New York.
"To achieve socially just technology, we need to include the broadest possible notion of social science, one that includes disciplines that have developed methods for grappling with the vastness of the social world and that help us understand how and why AI harms emerge as part of a large, complex, and emergent techno-social system."
The authors outline ways in which social science approaches, and their many qualitative methods, can broadly enhance the value of AI while also helping to avoid documented pitfalls.
Studies have shown that search engines may discriminate against women of color, while many analysts have raised questions about how self-driving cars will make socially acceptable decisions in crash situations (e.g., avoiding humans rather than fire hydrants).
Sloane, also an adjunct faculty member at NYU's Tandon School of Engineering, and Moss acknowledge that AI engineers are currently seeking to instill "value-alignment"--the idea that machines should act in accordance with human values--in their creations, but add that "it is exceptionally difficult to define and encode something as fluid and contextual as 'human values' into a machine."