Campus Voices on AI: Exploring Environmental and Social Impacts
As AI becomes increasingly embedded in daily life, concerns about ethics, misinformation, and environmental impacts are rising alongside its rapid expansion. At CSULB, faculty experts and sustainability leaders are actively discussing these concerns and helping guide the campus community toward responsible, thoughtful use of AI technologies.
Expert Panel Uncovers Multiple Impacts & Perspectives
In February, a panel discussion titled “AI’s True Costs” brought together faculty from multiple disciplines to examine AI’s environmental and social implications. The event was moderated by Dr. Alexis Pavenick, Director of Digital Literacy and Ethics in the University Library, and featured experts from the College of Business and the College of Engineering.
Panelists Dr. Laura Gonzalez Alana, Dr. Reo Song, and Dr. Ali Talebi from Finance and Marketing, along with Dr. Shadnaz Asgari and Dr. Alireza Mehrnia from Computer Engineering and Computer Science, explored a broad array of challenges and opportunities related to artificial intelligence.
They discussed the substantial resource consumption of AI systems, including the water and energy demands of data centers, as well as greenhouse gas emissions associated with training large models. They noted that data centers are often built in communities that have little say in their development—communities that frequently bear disproportionate environmental burdens. Economic concerns, such as potential worker displacement, and social issues, including AI’s ability to reinforce bias and discrimination, were also central to the conversation.
Despite these concerns, the panelists highlighted emerging opportunities. Some data centers are exploring ways to capture excess heat from graphics processing units (GPUs) and repurpose it to heat buildings. They also pointed to advances in “neuromorphic chips,” which mimic the functionality of human neurons by activating only when needed. These chips could enable more efficient energy management systems capable of predicting when and where cooling is required in a data center.
Beyond environmental factors, the panel discussed the rise of misinformation generated by AI, challenges around algorithmic transparency, and growing uncertainty about AI’s impact on the economy and workforce. Two themes emerged across the discussion: the urgent need for regulations to guide AI development and deployment, and the need for increased investment in research to better understand and mitigate AI’s risks.
Sustainability Champions Roundtable
Other campus groups are engaging deeply with these issues as well. The Sustainability Champions recently hosted a roundtable led by Dr. Lily House-Peters, Director of Environmental Science & Policy. Participants arrived with varying levels of familiarity with AI, but the dialogue quickly became a thoughtful and nuanced exploration.
“On the one hand, AI can serve as a useful and innovative tool for optimizing energy systems, forecasting climate risks, and accelerating breakthroughs in environmental conservation and ecological systems modeling,” House-Peters shared. Yet she also emphasized that AI currently depends on fossil fuels, significant water use, and data centers that are often located in communities already overburdened by pollution.
During the roundtable, participants stressed the need for clear regulations and guidelines governing how AI systems and data centers are developed, used, and sited.
“I really appreciate opportunities to talk with colleagues on campus about AI from a sustainability perspective,” remarked Greg Camphire, a CPaCE marketing copywriter and web content specialist who has been part of the Sustainability Champions program since its inception. “There are so many practical and ethical questions about this powerful technology, but there isn’t much training or regulation available. I hope these conversations continue so we can prioritize environmental justice and human values.”
As conversations across campus continue, CSULB is building a foundation for responsible AI use—one that recognizes both the promise of these technologies and the environmental and social challenges they present. These discussions highlight that responsible AI development must prioritize sustainability, transparency, and justice—values that will hopefully guide the university as it navigates an increasingly AI-driven world.