Closely associated with the human ecosystem, robotics tends to absorb many behavioral traits from humans. When those traits are positive, robotics serves humanity; adverse traits, however, carry an equally negative impact. Read on to learn how human bias and related injustices appear in robotic applications and what steps can be taken to eliminate them at the root. After all, a fair society is one where everyone is equal.
Most roboticists focus on designing intelligent machines with the goal of positively affecting the world, i.e., building robots in service to humanity. To this end, roboticists should embrace the principle that our robot systems be explicitly designed to perform uniformly well across the diversity of their users. Unfortunately, researchers have shown that this is not always the case. Object detection systems of the kind used in autonomous vehicles perform worse at detecting pedestrians with darker skin tones. Researchers have also shown that racial bias exists in commercial facial recognition application programming interfaces (APIs).
It is not just the responsibility of society or governing bodies to take on the challenge of fixing racial bias and inequity. Roboticists also need to take on the responsibility of making sure we do not cause equivalent harm in developing new technologies. And if the harm we create affects some groups more than others, it is our responsibility to fix that. After all, roboticists are quite skilled at finding solutions to hard, seemingly unsolvable problems. It is time to apply those skills to this one.
We propose that developers should consider the ethical implications of robotic usage—namely, ethical use and equity in performance—especially when robot use could result in harm to any group. We define ethical use as the process of weighing the potential benefits against the possible risk of harm to all affected groups; only when this weighting is positive and sufficiently mitigates harm should deployment of the technology be considered. We define equity in performance as a metric that determines to what extent a deployed technology’s performance is uncorrelated with a group’s protected characteristics (race, ethnicity, age, gender, sex, etc.). If equity in performance is lacking, then the implications of deploying such a technology, as well as the extent to which it should be relied upon, should be carefully considered.
We believe that an important step in addressing equity in performance as well as ethical use is to ensure that more diverse teams are the creators of these technologies and to understand how to draw on their diverse backgrounds for team success. The very practical consequence of this concept is that diverse backgrounds allow the use and implications of a technology to be seen from unique perspectives, increasing the chances for equity and ethics. Diverse teams have also been shown, time and again, to deliver better performance. Thus, to begin the process of addressing this problem, a new organization was founded: Black in Robotics (BiR) (www.blackinrobotics.org). BiR was born to address the systemic inequities found in our robotics community by focusing on three primary pillars—community, advocacy, and accountability.
Community can be defined as a sense of fellowship with those who share similar characteristics and goals, and it has been shown to correlate directly with success in STEM higher education for underrepresented minorities. While no U.S. statistics are collected specifically about the demographics of the robotics workforce, we can examine engineering workforce statistics as an indicative metric. In 2018, 12.7% of the U.S. population was Black or African American, but only 4.2% of bachelor’s degrees in engineering went to Black scholars. This lack of diversity is also found in the tightly integrated field of artificial intelligence (AI), particularly when it comes to algorithm design and testing for AI systems that affect diverse populations. BiR plans to build community through networking and mentorship. We believe that establishing community is the first step toward increasing the presence of Black and other diverse groups in the field of robotics.
For robotics, advocacy is defined as explicit action that supports or defends equity in performance as well as ethical use on behalf of all, with a focus on ensuring equal outcomes across diverse communities. BiR’s contribution toward the goal of advocacy is to showcase Black excellence in our community and to help connect academia and industry to the talent found in diverse communities. One such activity is the Black in Robotics Reading List, whose objectives are to provide academic role models for aspiring researchers and to normalize Black scholarship.
Our pillar of accountability is about designing pathways for all roboticists, including allies, to participate in the solution. Just as identifying as Black does not exempt one from being discriminated against based on skin color, not identifying as Black should not exclude one from involvement in dismantling issues around robotics and race. For accountability, BiR seeks to function as a conduit to engage communities, to identify best practices, and to hold all of us accountable for making the robots that we design and deploy usable for all groups and communities.
We hope that the BiR organization inspires individuals to increase diversity in their own spaces. We believe this diversity is crucial for answering the next big questions in robotics as we integrate robots more deeply into our daily lives. Therefore, our mission is a call to action for the entire robotics community: increase diversity, and build with thoughtfulness for disadvantaged groups.
Complicity through silence is not an option.
(This is a slightly modified version of an article originally published in Science Robotics. The original article can be found at https://robotics.sciencemag.org/content/5/48/eabf1364)