In recent months, headlines about artificial intelligence (AI) have ranged from concerning to downright dystopian. In Lockport, NY, the public school system rolled out a massive facial recognition system to surveil over 4,000 students, despite recent demographic studies showing that the technology frequently misidentifies children. Ongoing investigations into Clearview AI, the facial recognition startup built on photos illicitly scraped from the web, have revealed widespread, unsupervised use not only by law enforcement but by banks, schools, department stores, and even wealthy investors and friends of the company.
But the recent Social Impact in AI Conference, hosted by the Harvard Center for Research on Computation and Society (CRCS), offered another path forward for the future of artificial intelligence.
In just two days, conference attendees shared their work applying artificial intelligence and machine learning to a shockingly wide range of domains, including homelessness services, wildlife trafficking, agriculture for low-literate farmers, HIV prevention, adaptive interfaces for blind users and users with dementia, tuberculosis medication, climate modeling, social robotics, movement-building on social media, medical imaging, and education. These algorithms don't center on surveillance and profit maximization, but on community empowerment and resource optimization.
What's more, many of these young researchers are centering considerations of bias and equity, (re)structuring their designs and methodologies to minimize harm and maximize social benefit. Many of the researchers voiced concerns about how their work interacts with privacy and technocratic values, not shying away from difficult questions about how to responsibly use personal data that is often collected through opaque methods.
The conference was designed around a central question: "What does it mean to create social impact with AI research?" Over the course of the event, at least one answer became clear: it means listening.
Indeed, a central takeaway from the conference was the importance of knowing what you don't know, and of not being afraid to consult others when your expertise falls short. AI-based solutions will only ever solve problems on behalf of the many if they are designed in consultation with affected populations and relevant experts. No responsible discussion of a new AI system's computational advances is complete without an interdisciplinary discussion of its social, economic, and political impacts. In this sense, the conference practiced what it preached: many attendees and speakers came from social work, biology, law, behavioral science, and public service.
The success of CRCSâs conference serves as a reminder that AI, for all its futuristic hype, is fundamentally just a tool. How we use it is determined by us: our cultural priorities and societal constraints. And as a tool, AI deserves better.
Recent advances in artificial intelligence algorithms have too frequently been used to the detriment of our civil liberties and society's most vulnerable populations, further consolidating power in the hands of the powerful and further exacerbating existing inequities. But reproducing and consolidating power is not what the world needs. Indeed, the AI that the world needs will not be developed by Silicon Valley technocrats in pursuit of profit, nor by Ivory Tower meritocrats in pursuit of publications. The AI we need must be developed by interdisciplinary coalitions in pursuit of redistributing power and strengthening democracy.
To that end, I offer the following six guiding questions for AI developers, as a roadmap to designing socially responsible systems:
- How might my tool affect the most vulnerable populations?
- Have I consulted with relevant academics, experts, and impacted people?
- How much do I know about the issue I am addressing?
- How am I actively countering the effects and biases of oppressive power structures that are inherently present in my data, tool, or application?
- How could my tech be used in a worst-case scenario?
- Should this technology exist?
Answering these questions is unquestionably hard, but it's only by engaging with them head-on that we stand any chance of truly changing the world for the better, with AI or with any other technology.