Unpublished Letter to Inside Higher Ed

Estimated Reading Time: 5 minutes

I wrote the following Letter to the Editor about a month ago when Inside Higher Ed decided to publish a piece by Paul LeBlanc, the president of Southern New Hampshire University.  Clearly, I had thoughts about him coming into a conversation he demonstrated very little understanding of, in what felt largely like another opportunity to pimp his book.  I'm not entirely surprised that IHE didn't publish it, but I figured I would still put it out there.
[Image: a block toy robot. Image from randychiu]

I really wish Paul LeBlanc had done the minimum we ask of college students (sufficiently explore the topic before you start writing about it) when he decided to chastise the rest of academia for not having his forward-thinking approach to using generative artificial intelligence. (Granted, I wish I were a betting man, as he also proved my prediction from last month that SNHU or other behemoth institutions would be the first to employ generative AI tools in the process of course creation.)


He has missed or ignored the richness of the conversation around generative AI tools in higher education over the last four months, as well as the deeper AI ethics conversation that has been happening for years.  There's nothing he encourages academia to investigate and consider that isn't already going on.


If he really wanted to dip his toe into the conversation and engage in the questions he thinks we should be asking, then he had ample avenues to explore. For instance, instead of citing some statistics about the size of higher education, the origin of “cobot”, and of course, promoting his book, he could have cited actual voices involved in this conversation.

He could have read up on the interconnections between AI and creativity in The Artist in the Machine by Arthur I. Miller or Joseph E. Aoun's Robot-Proof: Higher Education in the Age of Artificial Intelligence.  If he really wanted to show us he understood the concerns of academia (those philosophers and ethicists and humanities folks he cheers on so much), he might have also highlighted the work of Ruha Benjamin (Race After Technology), Cory Doctorow (Chokepoint Capitalism), Virginia Eubanks (Automating Inequality), Xiaowei Wang (Blockchain Chicken Farm), or any of the many books tackling the very questions he supposedly wants to examine.


He could have just mentioned this one crucial journal article on the problematic and biased structure of knowledge in these large language models (or just watched the video) to show he's paying attention, or even this Time Magazine piece raising the profound ethical questions around how ChatGPT was created: exploitative labor practices by third-party vendors hiring Kenyan workers for $2 to do content moderation.


LeBlanc could have read some blog posts or watched some videos of the compelling work on AI and education being done by Maha Bali, Autumm Caines, Anna Mills, Ethan Mollick, Chrissi Nerantzi, Antonio M. Arboleda, Marianna Karatsiori & Sandra Abegglen, Mike Sharples & Rafael Pérez y Pérez, even me, and countless others.


Instead of praising the idea that we should include philosophers, ethicists, and folks in the humanities in this discourse, LeBlanc missed an opportunity to actually honor and recognize those who have been having the very complex conversations he seems to be begging for but is somehow unable to find.


Here's a hint: he should try Google.  If that doesn't work, then he might want to pause to question the benefits of AI, because he's being served up an AI-generated filter bubble that's inhibiting him from meaningfully contributing to this conversation.


In the end, LeBlanc warns academia to do something because we don't want the tech bros in charge. But realistically, he leads a massive organization that has been quick to replace nuanced classroom relationships with technology in order to increase outputs (i.e., degrees) at scale and at lower cost (i.e., deskilling faculty so they have no agency to adapt curriculum in their classrooms and are basically just there to grade and discuss with students within limited parameters). If he's also excited about deploying new (and questionable) technology to further replace humans, he just might be a tech bro.


And if he hasn't even bothered to listen to the range of voices discussing this new technology, all signs indicate he has achieved tech-bro status.  After all, it's standard practice for tech bros to dismiss or refuse to listen to their researchers, as seen in the past few years: Google fired or dismissed its lead ethicists, Facebook disbanded its Responsible Innovation Team, and Musk laid off his AI ethics team.


So by all means, let's continue to have the conversation about the possibilities and problems this new technology represents, and let's also do our work to make sure we're aware of the contours of that conversation before charging into it like we have the answer.



Did you enjoy this read? Let me know your thoughts down below, or feel free to browse around and check out some of my other posts! You might also want to keep up to date with my blog by signing up for posts via email.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
