Stanford and Harvard Deepen the Conversation on AI Imagery

On Feb. 11, the IT communities at Stanford University and Harvard University reconvened for part two of AI Image Generation: Shaping Perception and Visual Influence. Co-sponsored by both institutions, the session built on the first event last September, this time devoting more of the conversation to bias, ethics, human agency, and the future of visual AI.

Opening the session, Emily Bottis, managing director of academic technology at Harvard University IT, grounded the conversation with a reminder that AI image generation is not merely a technical advancement but a cultural force with implications for accessibility, equity, education, and public trust.

“Images do more than illustrate,” Bottis shared. “They shape how we see, how we remember, and how we understand the world around us.” 

Bias in AI 

Tela Vessa from Educational Technology at Stanford School of Medicine led a panel featuring Dr. Douglas Guilbeault of the Stanford Graduate School of Business; Nava Haghighi, a PhD candidate in Human-Computer Interaction at Stanford; and Madeleine Woods, project lead for AI initiatives at Harvard’s Derek Bok Center for Teaching and Learning.

The panel opened with a discussion of bias in AI. AI is not built to be biased, but it reflects the biases in the data used to train it.

Dr. Guilbeault shared research showing that large-scale image datasets and language models associate men with older age and experience, while representing women as younger, mirroring cultural narratives rather than empirical reality. When applied to AI-generated hiring scenarios, these patterns influence resume scoring outcomes. Generative systems inherit and amplify the bias that’s embedded in their training data.
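The mechanism behind this inheritance can be illustrated with a toy sketch. The dataset below is entirely hypothetical — the skew is fabricated to imitate the kind of imbalance the research describes, not drawn from Dr. Guilbeault’s actual data — and the “model” is nothing more than frequency counting, but it shows how a purely statistical system reproduces whatever pattern its training corpus contains:

```python
from collections import Counter

# Hypothetical toy caption dataset pairing a gendered term with an age
# descriptor. The imbalance is deliberately constructed for illustration.
captions = (
    [("man", "older")] * 70 + [("man", "younger")] * 30 +
    [("woman", "older")] * 25 + [("woman", "younger")] * 75
)

def most_likely_age(gender, data):
    """Return the age descriptor most frequently paired with `gender`."""
    counts = Counter(age for g, age in data if g == gender)
    return counts.most_common(1)[0][0]

# A frequency-based "model" trained on skewed data reproduces the skew:
print(most_likely_age("man", captions))    # older
print(most_likely_age("woman", captions))  # younger
```

Nothing in the counting logic is biased; the output simply mirrors the distribution it was given — the same dynamic, at vastly larger scale, that lets generative systems amplify cultural narratives embedded in their training data.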

For a deeper exploration of this research and its implications, see Dr. Guilbeault’s paper.

Prompting is key

Madeleine Woods addressed the unpredictability many users experience when prompting AI images. These models, she explained, are statistical systems predicting pixel patterns associated with text. Nava Haghighi expanded on this by asking the audience to imagine a tree: everyone pictures it differently, but AI assumes a “default” representation.

Rather than abandoning AI tools when the expected output is hard to get, panelists encouraged confronting these moments of friction. AI can’t read human minds, but human minds can mold AI. Iterating on prompts is key to understanding a model’s defaults and steering it toward the intended result.
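The iteration the panelists describe can be sketched as progressively layering constraints onto a vague prompt. This is a minimal, hypothetical illustration — `refine_prompt` and the example details are invented for this sketch, and in practice each refined prompt would be sent to an image-generation tool and the output inspected before the next pass:

```python
def refine_prompt(base, details):
    """Layer specifics onto a vague prompt, one iteration at a time,
    returning every intermediate version so the progression is visible."""
    prompts = [base]
    for detail in details:
        prompts.append(prompts[-1] + ", " + detail)
    return prompts

# Starting from the model's "default" tree, each pass narrows the output
# space toward the image the user actually has in mind.
for p in refine_prompt("a tree", ["gnarled oak", "winter dusk", "watercolor style"]):
    print(p)
```

Each printed line is a candidate prompt; reviewing the generated image at each step, then adding or revising a detail, is the friction-confronting loop the panel recommended.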

Community in the loop

Panelists emphasized that corporations shape AI model architectures, but individuals and institutions remain responsible for how the tools are used. Gains in AI productivity can come with a decline in peer consultation, potentially eroding collective intelligence.

Dr. Guilbeault reframed the AI concept of “human in the loop” as something broader: “community in the loop.” For IT leaders, this means fostering spaces where AI outputs are questioned rather than accepted passively. By questioning, people can continue to exercise their own intelligence instead of relying on AI.

Looking ahead

When asked, panelists reflected on what the next decade might bring. Woods expressed hope that AI image generation could bring equal opportunity for visual creation for students and educators, regardless of their artistic ability. Haghighi envisioned a future where output variety is embraced rather than smoothed away. Dr. Guilbeault highlighted the potential for exploratory visualization that helps us imagine new conceptual breakthroughs.

AI is here and has made its mark, but individuals are key to its success. 

Continuing the exploration

To support ongoing learning:

  • Hold a guided, adversarial conversation with an AI in the Harvard AI Sandbox or Stanford AI Playground to probe where image prompts reveal hidden bias and narrow “ideals.”
  • Read the Stanford HAI 2025 AI Index Report.
  • Listen to recent research on bias in image generation and language models.
  • Watch and reflect on ethical implications of visual AI in both professional and personal contexts.

As generative AI evolves, so must our frameworks for understanding it. This cross-institutional collaboration demonstrates that when institutions come together not only to adopt technology but to interrogate it, innovation becomes more thoughtful, inclusive, and human-centered.

DISCLAIMER: IT Community News is accurate as of the publication date. We do not update past news items, but we make every effort to keep our webpages up-to-date.