Prof. Julia Stoyanovich
Director of the Center for Responsible AI at New York University
I’m an inherently intellectually curious person, and I firmly believe that maintaining curiosity about the world is a privilege and a responsibility. It’s crucial for personal development and active participation in society. We have the right and duty to engage democratically in our environment, especially in areas that broadly impact us, such as AI.
We must ensure that AI usage enhances lives without causing environmental, economic, or personal harm. Education is crucial in this realm, serving as a key enabler of responsible AI. My vision is for a future in which “responsible AI” and “AI” are synonymous.
It’s important to recognize that AI itself isn’t responsible; the responsibility lies with the people who design, develop, and use it, and with those who oversee its implementation. Currently, what we need isn’t necessarily more technological advancements but rather better-informed individuals who can control how their data is collected and used, and how they use AI to affect others.
Education about AI is essential for every individual and citizen. Those of us with expertise have a particular responsibility to educate others about AI’s impact. Failure to do so leads to a concentration of power in the hands of a few, which is unjust. We must share information to create a distributed accountability regime around AI use.
Integrating tools like ChatGPT into education can potentially enhance learning, but doing so requires careful risk-benefit analysis, rigorous evaluation, and stringent oversight.
We are not yet at a point where we can confidently assert that AI helps students learn effectively and affordably, especially in resource-limited areas. The deployment of AI-based tutors brings numerous externalities, not least of which is the potential displacement of human teachers. In many parts of the world, particularly in less affluent regions, teaching is a profession dominated by women. Replacing these educators with AI not only threatens their economic stability but can also affect the quality of education delivered.
Furthermore, the environmental impact of AI, such as the significant energy required to train large language models, adds another layer of concern. We must weigh these externalities against the potential benefits of using AI in the classroom. A more nuanced approach is to use AI tutors under the guidance of (human) teachers. This, once again, requires us to understand what – if anything – these tools can help with, and also how to train teachers to use them productively.
At the NYU Center for Responsible AI, we developed and have been teaching a public education course called “We are AI: Taking Control of Technology.” This course was developed in collaboration with P2PU, a public education nonprofit, and with New York City’s Queens Public Library. The goal of the course is to introduce the basics of AI, discuss some of the social and ethical dimensions of the use of AI in modern life, and empower individuals to engage with how AI is used and governed. It has been offered to NYC library patrons, as well as to librarians and non-academic staff at New York University.
“We are AI” runs as a learning circle: a facilitated study group for people who want to meet regularly and learn about a topic with others. There are no teachers or students in a learning circle—it is a group where everyone learns the material together. We have also partnered with the NYU Tandon Ability Project to improve the accessibility of AI education across abilities and levels of expertise. The result of this work is our All Aboard! primer on making AI education accessible.
“We are AI” is accompanied by a comic book series of the same name, currently available in English and Spanish, with immediate plans to issue German and Ukrainian translations.