ChatGPT, AI-Generative Tools, and Education...my turn...

Estimated Reading Time: 25 minutes

I know...I know.  Everyone has their thoughts on ChatGPT and I'm coming to the party just a little too late.  That's fair--I largely respect that.  But I haven't been idle in my thinking or work around ChatGPT during this time (as some folks know if you've been reading my weekly updates) and I definitely have some contributions and thoughts for this discussion.

[Image: A word cloud of this blog post in the shape of a cat.]

This presentation/paper/work was prepared using ChatGPT, an “AI Chatbot”. We acknowledge that ChatGPT does not respect the individual rights of authors and artists, and ignores concerns over copyright and intellectual property in the training of the system; additionally, we acknowledge that the system was trained in part through the exploitation of precarious workers in the global south. In this work I specifically used ChatGPT to craft my Q&A log to explore and inform my own understanding about the way the tool operates and responds to a series of questions.

My Story Thus Far with ChatGPT

I began hearing about ChatGPT in early December and started playing with it. I found it interesting and began keeping a log of the questions and answers that I got. This got me thinking about its usefulness for teaching and learning much more so than the AI image generators that ran through social media in the previous months (first with a sense of awe and then with a backlash about the violation of artists' rights).  Like many others at this point, I began to read and watch a lot of content around this and assembled my own curated list of materials that I thought were super-helpful in thinking about it.

But then I heard from a friend and colleague, Autumm Caines, who teaches at College Unbound, and she, too, was playing around with ChatGPT. She realized that a student or two likely had submitted work in her course that had been informed by ChatGPT.  It was the end of the semester and our academic honesty policy is not quite clear on the usage of such a tool.  Our policy grounds usage in other people, not a non-person.  There is an authenticity clause, but we didn't define "authenticity"--which raises questions about tools like translators, Grammarly, etc.

College Unbound centers the student's life and experience in a way that I can honestly say I have not seen at the nearly dozen other colleges and universities I've worked or taught at over the past 17 years. Embedded in that is an emphasis on relationship-building, supporting the students' growth, and often, undoing the harm, trauma, and shame that other institutions have instilled in many students.  Because of that, one of the things I realized early on in thinking about the use of ChatGPT and other AI generative tools is that we have to be extremely careful and deliberate about how we engage with students if we are concerned they are using the tool in a way that conflicts with learning and course expectations.  Simply put: a false-positive accusation would do deep harm that is counter to the institution's mission.

From this conversation, I decided to send out a survey to all students at the tail end of the semester (the week after classes ended) to see if they would share their experiences about using AI generative tools in their learning.  You can see (and adapt) that survey here.  We got a handful of responses and insights from students showing that they were already using these tools in December (we plan to relaunch the survey in the middle of the semester).

This told me that something bigger needed to be done, and sometime during that week, I had a brain-blast: if we were going to tackle this new thing, we had to do it differently than what I had seen done in the past.  The most notable example in that vein was Wikipedia, which folks banned immediately rather than thoughtfully engaging with it (side note: that's where I got my start in instructional design--discussing with faculty the importance and value of engaging with Wikipedia rather than ignoring or banishing it).

I reached out to some folks in Academic Affairs with an idea:  "what about a 1-credit course for spring around thinking about and co-developing CU policy about the use of ChatGPT?"  Folks felt the idea was the right approach.  We talked about it a bit more and got the course up and running in the catalog.  I started outreach to students and by the time the semester started (second week of January), I had about 8 students--about the right number for this course.

Over the holiday break and the first week of the new year, it was also gnawing at me that we didn't really have solid guidance for faculty and students in place with the semester starting.  More than that, given the course, we needed a larger strategy for thinking about AI-generative tools and their role in education.  So I started to think about what that could look like and, with help from conversations with my partner, I realized that there could be a bigger way to move through the semester.

CU semesters are 17 weeks.  Some courses run 16 weeks with a 1-week break in the middle; other courses are 8 weeks and run either the first 8 weeks or the 8 weeks after that break.  So it was no longer one course but two. The first course would be made up of students who had already been at CU, learning about AI generative tools and putting together an initial usage policy proposal.  The second course, in session 2, would be new students essentially kicking the tires on that usage policy to see where it helps, hinders, or confangles students. This would mean that by the end of the spring semester we would have created something and tested it out before finalizing what we present to CU as proposed policies.   This resulted in me developing this plan, which I shared with leadership for input and the thumbs up to go forward (which I received).  One implied but not stated piece of this strategy is to also have it reviewed by faculty for input before the final proposal to the institution.

You'll notice within that plan 3 things:

Temporary Policy: We issued a temporary policy that advises against using these tools just yet but asks that, if students do use them, they use them for only about 25% of their work and make clear that the material comes from a different source.  My rationale for this is that we didn't want to outright forbid it and, therefore, start ceaselessly second-guessing or mouse-trapping our students.  Instead, we wanted to create a context where, if they were to use it, they could feel comfortable doing so and identify it.  This creates the opportunity for faculty to engage in meaningful conversation with students about their usage, to make recommendations, and to learn about how the students are using it.

Sharing it with Students: Several folks outside of CU said to me that sharing the policy felt risky because it was putting a spotlight on AI-generative tools.  They were concerned about some version of the Streisand effect.  That's a fair concern, but since our approach wasn't saying "don't do it at all" but rather "do it under these conditions," we felt differently. (We recognize that a few students may still use it in ways we advise against.)  We wanted the transparency and we wanted to be honest with students about the fact that we don't have all the answers and we're trying to figure this out.  There is something important to us in being honest about navigating tricky waters; that is part of how we connect and build relationships with our student body.

Sharing it with Faculty: We told faculty about the policy, asked them to tell their students about it, and offered them the option of constructing their own policy as fits their classroom context.  Again, transparency was important, as was reiterating their autonomy in determining what makes sense for their spaces.

The policy went out and then I started to hold conversation sessions for folks to come and chat.  Attendance has been low, but I've also had several folks follow up via email to discuss it in general or to discuss particular cases that are arising.  We're also only in week 3 of the semester; I imagine dialogue will pick up more--especially as ChatGPT continues to make headlines and grab attention across nearly all media.

The class is off to a great start; the students are intrigued and raising interesting questions. Each week there's so much happening that I feel like we could forgo the syllabus-assigned learning materials and just go with whatever happened that week as our conversation starter.

But what's really awesome is that the students are excited to be involved in the process of developing a usage policy proposal for CU and in doing more--attending presentations and writing pieces--to help make sure that the students' voices are heard in whatever is to come.

We start our semester a little bit earlier than most colleges and universities--January 9th this year.  That served as part of the impetus for us to figure things out sooner rather than later.  But I began to see other higher ed institutions trying to figure this out too.  That's when I also decided to do something that no one else was really doing.  (Note: lots of folks were doing lots of things: writing, crowdsourcing materials, using social media, etc.)  I realized some folks were sharing their policies but there was no place to really find them together (I saw some on different social media platforms if I searched the right keyword, but nowhere offered a clear view).  So that's when I started to share out this form for folks to submit their policies and then created and shared this Crowdsourced Classroom Policies for AI-Generative Tools resource.

What's next?
I'm running a webinar, The Future's Already Here: AI Generative Tools and Teaching, with NERCOMP on Monday, February 21 from 11am-12pm EST.
On Monday, March 27 from 1pm-5pm, I'm running a workshop with the students at the annual NERCOMP conference in Providence, RI:  Institutional Policy Development for AI Generative Tools in Teaching and Learning.

My Lukewarm Take

Ok, so now that I've shared a bit of the history of my experience with ChatGPT, let me share my largely unoriginal thoughts about its role in education; they're based on my own thinking about teaching, learning, and technology and are informed by many other folks in this space.  By the time this goes live, I'm sure I'll have added more to that list, as I already have another dozen tabs open with takes on it...

I run the gamut of responses when thinking about ChatGPT and other AI-generative tools.  They run from excitement about what this could mean, to the inevitability of such tools becoming ubiquitous, to this being one more step toward automation and the destruction of jobs and livelihoods without a cultural context that actually recognizes this and doesn't penalize folks for the masturbatory hyper-productivity and hoarding ideology of capitalism.  At times, I believe these tools can be really powerful for education; at others, I see them as a trap for education, for students who are marginalized, and for institutions scratching together pennies and looking for affordable technological solutions in place of humans.  So--a lot of thoughts on lots of fronts.  But when it comes to such tools in the classroom, I have 3 solid views (currently):

Possibilities
Like many other folks out there, I see some really great opportunities to use and engage with ChatGPT and other AI-generative tools.  For teaching, I can see using it to get rough drafts or initial material situated.  For instance, some of the things I've taught I know well, but they can be harder to summarize or synthesize for teaching (theory is a big one for me).  Using tools like ChatGPT can help me do in an hour what might otherwise take many hours.  Similarly, if I'm looking to create visuals that capture or resonate with what I'm conveying, using something like DALL-E can get me closer much more quickly.  Additionally, it gets me moving more quickly on approaches--often by giving me an approach that I'm uninterested in. In that way, it feels like another entity to bounce ideas off--sometimes in moments when I (or others) might feel awkward or ashamed about not being able to clearly talk through an idea with other people.

But the possibilities for students can be just as great.  They, too, can use it as a dialogue tool with which to engage and refine their understanding (with the noted concern of it providing wrong information or "hallucinations"--a term that doesn't quite feel right for this tool).  I feel like it also gives them templates to work through things.

One of the things that I often see with faculty is that they might give guidance for activities or assignments but they rarely give examples or templates for students to work with. There are lots of reasons for this, but I think it leaves many students lost, either because the blank white screen can be so overwhelming or because there's a lack of clarity about exactly what needs to be submitted.  A tool that can spit back examples instantly can provide feedback in ways that instructors may want to but aren't always able to (e.g., instructors operate on a 24-hour communication cycle rather than an instantaneous one).  So as a support feature, an AI text tool could do a great many things.

As others have pointed out, AI text tools can also be used to generate examples to analyze--both to improve writing and to explore what the AI's collective data set has to say about something and what THAT tells us about the data set (and the culture that produces it).

Most importantly, I think this can unleash learning that is instantly individualized and move people along more quickly through things they might otherwise grapple with.  I go back to theory here.  I struggle a lot to learn and understand theory. I can read about it till the cows come home and still feel lost with much of it. In general, I've found videos really help with this, but I can also see engaging with ChatGPT in a way that allows me to dig in bit by bit--asking about general concepts or even breaking down passages for further explanation.  Yes, this may be done in class, but having a reading guide to query and clarify these things can be so powerful for making sense of ideas that are hard for us to understand.

Putting aside accuracy (which will improve in all likelihood), this tool improves accessibility to knowledge--especially knowledge that can be arcane yet also super-important to understanding the world.  I think we continue to forget how complicated and complex the world truly is--how many systems and structures of all sorts (physical, mental, linguistic, technological, legal, social, cultural, etc.) exist, and we never really undo the previous structures.  People have to operate successfully in a world where ALL of that is operating all at once, and trying to learn it in a clear way is a truly hard task--which is why being able to get instant responses to the things that don't make sense matters so much.  In this way, I feel like a collective hivemind of knowledge that can easily generate responses can do so much for us.

And that's just the text...I'm not even sure what to say as we see this happen with audio and moving images.

Challenges for Teaching

Of course, I have lots of concerns and challenges that arise when thinking about this as a tool within education.  

The rise of AI-generative tools represents a sincere challenge to how one thinks about assignments, what constitutes authentic work, and how to reliably and consistently assess and provide feedback to students about their learning. One person in a Reddit discussion I was in insisted that it's all covered by simply saying "do the work," and I didn't know where to begin with that.  I get it, but by Socrates' standards, even writing our answers down would not count as "doing the work."  Our concepts of authenticity and of doing the work change over the years, along with the ways we integrate technology into our lives.  So I don't think repeating the refrain will help us.

Another person said that this felt like a karmic moment for all those faculty who insisted on the written word as the uniform format of evaluation.  I pointed out that this is about more than the written word.  After all, I can have ChatGPT create the text for a speech; I could have VALL-E (or a similar emerging tool) be the voice; and then I could use one of the AI tools that create slide decks to turn that speech into a slide deck--and now I have a presentation that I can record without having to put much of myself into it.  That is, ChatGPT represents a moment of significant change and possibility in content creation that could alter a lot: many forms of creation become possible that bypass our traditional understanding of creativity or "student work."  In some areas, I don't foresee that being a problem per se; in other areas, it very well might be.

And this is where I get really worried, because I think--just like with the pandemic (because we don't seem to learn)--we're going to see the rise of surveillance tools and ways of controlling and tracking "student work" on par with the worst of e-proctoring.   That's one of the big challenges for me in this: how do we recognize and engage with these newly emerging tools and not be reactive and recreate the worst of past practices of policing and surveillance?  As I hear calls for returning to in-classroom handwritten exams or finding ways of controlling students while they are working, I can't help but think about who will be most subjected to control and who will be assumed not to need it (i.e. marginalized students will get further marginalized, as always happens with the introduction of tech to policing efforts).

I think this also harkens back to what feels like the never-ending exhaustion wheel in education. It existed prior to the pandemic, but it was the pandemic where educators were demanded to change everything instantly and do so with little support (or compensation). We survived this (ok, some of us survived this ok, others are still feeling the many lingering effects, and some abandoned the profession altogether), but we still haven't had great support, guidance, or compensation for all the extra things we must now navigate with each course.  And just like that, ChatGPT comes along and tells us we need to be prepared to massively shift our practices or find ourselves playing cop to catch students.  I mean, this is one way to get the vast majority of a profession to give up and let the robot overlords take over (and maybe that's the plan somewhere within all this; no need to fire folks, just wear the hell out of them and then replace them with AI trainer bots).

But The Real Problems

Those are educational problems, and yet I can't look at ChatGPT without the larger questions and concerns that many others have brought into the discussion.  First and foremost is, of course, the concern that in order to get ChatGPT to work the way it does, OpenAI contracted Kenyan workers at around $2 an hour to do content moderation--a practice that is known for being traumatizing.  That a US entity decided it could exploit people from an African country to mine the data set and clean it of its most toxic and ugliest elements, in order to make something that is friendly to use and does not harm people in the Global North, feels like the worst of AI colonialism.  What does it mean that we send faculty and students to use these tools, knowing that their "clean" results come at the hands of exploited workers? Is that any different from so many other parts of education?  Unfortunately no, and yet we need to at least acknowledge this.

I really appreciated Phipps and Lanclos' acknowledgment of the problematic use of ChatGPT, and I feel like all teachers and scholars who are interested in and serious about using these tools, while also being interested in social justice, should be using something like that acknowledgment or some other means of making evident the problems that such technologies represent. I should also say that this is probably only my first step; I want to continue finding other ways to elevate and make evident the human costs of these technologies if and when I continue to use them.

Beyond the ways that technologies like AI are often derived from exploited labor, there are also the biases embedded in AI.  There are the biases derived from the creators, who make decisions about how the technology should be experienced and understood. Silicon Valley has a long history of folks creating biased and questionable tools. Facebook, after all, was created basically to validate white boys' desire to look at and rate women at elite schools--like a virtual catalog for potential brides.

Then, there are biases in the data set that the AI learns from (also, a side concern: steals from, since ChatGPT borrowed from a great deal of copyrighted work).  The collection of data is filled with various biases that inform how the AI responds.  Yes, that's some of what the Kenyan workers were working to correct--and yet, how deeply and thoroughly can such things be rooted out?  After all, machine learning doesn't just work on the dataset but on the questions that are being asked.  So what happens when a significant number of people are asking biased questions (knowingly or not) and keep looking for answers different from what has been deemed "clean"?  ChatGPT has had to deal with this issue. Replika, another AI tool, is seeing the result of this with AI bots becoming sexually aggressive. AI Dungeon also had its AI become sexually provocative when users deployed certain terms and phrases, as a result of how other users played with the game. Currently, the AI Incident Database documents over 2200 examples of issues with AI, and it has only been in existence for about 18 months.

Also, there are biases that machine learning derives in its answers that have no clear rationale or explanation; users cannot really query the answers and actions that the AI spits out. Concerns about this have arisen particularly in the realm of criminal justice, where recommendations are made without any clear understanding of why, and because it's proprietary software, there's no way to really understand it other than to trust the company selling it.  I see this happening in education right now as several tools have emerged that claim to show whether a text was AI-generated or not. Some of them explain their work, and that's great, but I imagine at some point companies will creep in (Turnitin is probably the most likely suspect to come out with an institutional model that plugs into LMSs), and when they do, we'll lose all visibility.

The final thing that concerns me in the big picture is that, through all the engaging we are doing right now with ChatGPT's "research preview" version, we are the folks enabling AI's last mile.  This is a concept introduced to me in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L. Gray and Siddharth Suri.  Basically, before AI is ready to fully replace workers, there is often a final phase that can feel nearly impossible to cross and relies on lots of human labor to make it happen.  We see this with tools like Amazon Mechanical Turk and with the Kenyan workers that OpenAI exploited for content moderation.

Every time we're interacting with ChatGPT, we're training it to be better; we're performing labor for a product that they are clearly looking to make a profit on.  And where that becomes a problem is the fact that tools like this will replace jobs.  And I know, folks will be like, "but new jobs will arrive"--and maybe they will, but how many?  When I read the tech-utopian babble of Sam Altman, the CEO of OpenAI (creators of ChatGPT), I am far from reassured; I feel like I'm seeing an Ozymandias-type character who believes he holds the key to human salvation. He seems to think that AI will "CHANGE EVERYTHING" and that we can use this moment to somehow avoid what has happened in every major technology shift in the history of the Global North--each one proving an opportunity for massive financial acquisition by the few who own the legal rights to it.  And somehow, I doubt he's going to free this technology, now that places like Microsoft are tossing billions of dollars at him.

Sure, there is potential for AI to significantly change our lives in positive ways.  But color me skeptical: this same promise has been made about every prior technology, and those technologies found ways to introduce just as many problems, pains, and alienations as existed in the world before them.

The Big Twisty Knotty Problem for Education

For me, so much of what AI-generative tools open up about education is the question of authenticity--and that's a question for which, I think, we've always relied on a set of assumptions that aren't necessarily true but that we are comfortable forcing everyone into regardless.  We privilege different forms of knowledge demonstration (just as we privilege different types of knowledge), and AI disrupts that significantly.  In this way, I agree with my previously mentioned friend that it is a karmic moment for many--particularly those whose knowledge and demonstration of knowledge did not fit into the rigid buckets that education typically demands.

In a really interesting way, I think this challenges traditional education--in particular, what Freire called the banking model of learning.  This new context requires relationship, trust, and dialogue in ways that many folks (faculty and even some students) may not be ready or equipped for.  It requires us to start first with each individual student and build a collaborative learning relationship with them, knowing that trust is a mutually developed and supported practice--not something you simply have or don't have.

The question I think about a lot in this is where the lines are and how they will be redrawn. I think I can explain this best with cars as my go-to example.  There was a point at which owning and driving a car meant having a serious working knowledge of how it worked, both to anticipate its limitations and to fix it on the go.  But somewhere along the way, this became less necessary, so by the time I learned how to drive, I needed to know how to steer and brake (and do limited gear changing) and how to put some liquids in. I could operate under this veil of ignorance rather easily given the world that I live in, my needs, and my financial situation. Even now, I have a very limited understanding of a car, even though I've been driving one for 25+ years.  That's what I mean by the line; that line changed over the course of the history of the car.  The same is true for computers; what I definitely needed to know when I was playing on computers in the 1990s is different from what I need to know today.  That line is a moving target as newer technologies are deployed and become ubiquitous.

So where is that line with AI-generative tools and creation as it relates to learning?  Socrates thought it was in what you could contain in your head; at one point, folks thought it was embodied in the handwritten word, while others focused on the detriment that spell and grammar check would mean for writing. I think that's the question that many of us are grappling with.  Given a technology like this and its possibility of being ubiquitous, what exactly are we assessing, evaluating, and, ultimately, asking students to learn?

What is the world we're helping them to learn about and prepare for?  What are the skills and demonstrated knowledge that are essential for it?  And what are the things we're teaching them just because they were useful (or personally interesting and important) to us?  Where is the "important to learn and know in your bones" line in our classes?

I think AI is going to challenge a lot of this, and there are no good answers.  Lots of predictions and guesses, but few real answers.  Clearly, I don't have one, but I feel at least a little bit better having some questions to work with around all of this.

So there you have it--my thoughts on AI generative tools at this moment.  No one asked for it and I don't blame anyone for not reading this.  But if you did, I hope there were things that you found useful.

Resources to Borrow/Use



Did you enjoy this read? Let me know your thoughts down below, or feel free to browse around and check out some of my other posts! You might also want to keep up to date with my blog by signing up for posts via email.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Comments

  1. Thanks Lance - You have given this a lot of thought, and it is great that you are also translating it into concrete policy for the classroom. I have written on some of the topics you mention - educational objectives / authorship / misconduct - for the Sentient Syllabus project, which I founded in response to the challenges of generative AI in higher education. The "misconduct" analysis was posted just yesterday (https://sentientsyllabus.substack.com/p/generated-misconduct) and I think you will find it useful. Other resources (syllabus, activities ...) are at http://sentientsyllabus.org Thanks again for your take on this!

     Reply: Yes! I include a link to Sentient Syllabus on my resources document--Autumm Caines introduced me to it and it's pretty cool! Thanks for reading and I will definitely be reading more of your stuff!
