Recent Keynote: The Questions to Be AIsking
Introduction
Hi folks,
I’m so glad to be here in community with you, and I thank everyone who helped get me here. We’re here to talk about generative AI and education, and I promise you, none of this presentation was created by generative AI, but there are times when I’ve used it, and I’ll make sure that’s evident.
So, what are we doing today?
Well, I’m going to frame some of the conversation today, then we’ll talk about AI possibilities in general, in education, and for work at large. We’ll then get into questions from y’all and hopefully, give some strong ideas about what might be next!
2 things to note before we get started:
There’s a resource link. There’s a lot there. There’s a part of me that considered ditching the entire talk and saying, “dig in!!!” but I’m pretty sure that would be frowned upon.
At the end of each section, we’re going to pause for 2 minutes to do a reflection. I say this just so it doesn’t surprise you and I encourage you to have something to write on physically or digitally.
So let’s get started.
First off, I’m going to give you some background on me and how I got here. Then we’re gonna center some important considerations, and then I’m going to acknowledge some elephants in the room. Sound good?
I’ve been teaching since 2006. In the 2000s, I was doing the full-time adjunct thing, and that led me into instructional design, because when you are teaching 9 face-to-face courses and 2 online courses in a semester, you really need to make synergistic choices that utilize technology effectively.
In the last 12 years, I've worked at the intersection of technology and education at institutions including community colleges, small liberal arts colleges, research universities, and the Ivy League.
Simultaneously, I've developed a deeper consideration of, and participated in rich and complex conversations about, the roles of technologies in our world. And, of course, in the last decade, with the rise of movements like Black Lives Matter, Standing Rock, MeToo, and others, organizations and communities have risen up to bring to our attention the ways that technologies can be both empowering and complicit in historical and contemporary violence toward BIPOC communities, LGBTQIAA folks, people with disabilities, and other groups who have been pushed to the periphery. Those things too have made me think about my pedagogical practices, the technologies I use, and the ways that I work with faculty.
All of that led me to College Unbound as their Director of Digital Pedagogy. At a young and growing college still building its infrastructure, I have to think deeply about what technologies we bring into practice, given that we serve a largely adult population that is more than 70% women of color.
Being thoughtful about the possibilities and downstream effects of technology is particularly important for our students, who are more often digitally surveilled and digitally redlined – that is, subjected to technology rather than agents of technology.
It’s because these things are often in my head that I also start this presentation with an Equity Acknowledgement to further ground our conversation.
This presentation was prepared using generative AI tools. I acknowledge that many generative AI tools do not respect the individual rights of authors and artists, and ignore concerns over copyright and intellectual property in the training of the system.
Additionally, I acknowledge that many AI systems are trained in part through the exploitation of precarious workers in the Global South. I also recognize that the structures supporting the expansion of AI rest on continued large-scale extraction of resources from environments, in methods that have lasting effects on local populations. In the end, many of those resources (i.e., hardware) cause further harm through global climate change and environmental degradation, particularly and directly for the Global South and for communities that are historically and presently marginalized.
In this work I specifically used generative AI as a collaborative exercise and to test out some ideas about its usage, better understand the tool, and may also demonstrate some of the ways it generates answers. (Inspired by Lawrie Phipps and Donna Lanclos's An Offering)
There are 2 elephants I need to acknowledge.
Elephant #1
The first elephant is the whole academic dishonesty, plagiarism, cheating frame around generative AI and higher ed.
I’m not talking about this for 3 reasons.
The first is that for the foreseeable future, it’s going to be impossible to distinguish human from AI outputs effectively, and I refuse to make students defend themselves against machines and prove their innocence against probabilities and no actual evidence. There are also a lot of questions and problems I have with folks trying to AI-proof their assignments. Whether it’s making everyone do oral presentations or increasing hand-written assignments, these feel like steps backwards to me.
The second reason is that it bores me. I was once a true believer that plagiarism was the highest form of academic crime one could commit. I spoke about it at length to students; I found insidious methods to catch them. But none of this made me or them feel better or made me a better educator. It’s a toxic dynamic and does nothing to really build trust, respect, and opportunities for learning in ways that align with my values. In the end, plagiarism is not the problem we think it is, at least in a world that values ceaseless productivity, quick answers, and a level of unaffordability that means most adults cannot afford to retire.
The third is that I cannot be both educator and cop in a system that barely pays me to be an educator. Yes, I can assess good work in the fields that I teach. I can sometimes know whether it was or wasn’t produced by the student in question. But what I refuse to do is to be an investigative officer of the institution, rooting out academic dishonesty in all its big and small forms. That’s not my job, and if institutions want me to do that, well, they better start paying me like it–which would include a hell of a lot of overtime for all the extra legwork to make such appraisals.
Elephant #2
The second elephant is the abundant problems with generative AI–some of which I indicate in my equity acknowledgement.
A handful of those issues include:
Bias in the large language model
Bias in the outputs
Labor questions for content moderators
Labor questions for people using the platform
Copyright
Privacy
Inability to ground answers in any clarity about how the machine arrived at them
Questions about social/emotional health with these tools
The impact of generative AI outputs on how we develop a new normality around language
The impact on workers–both in terms of being replaced but also in terms of having more asked of them.
These are genuine concerns–concerns that I continue to grapple with and think about in my own exploration of this topic, but in a way, they feel abstract in the face of an oncoming semester where students will be curious, exploratory, and most likely using these tools. Because of that, I frame much of this talk in a constructive manner.
Here’s our first reflection. Take a moment. You can reflect in your head, write in your notes or on your device. But consider one or more of these questions.
Excluding academic honesty, what concerns do you have about generative AI?
Which ones do you feel that you can address or figure out before the start of the semester?
Using AI In General
So let’s take a look at some of the possibilities with AI in our lives in general.
The first here are some of the most popular and well-known tools. I’m guessing most folks here have heard of or used one or more of these.
Let’s see. Raise your arm or nod if you have used at least one of these.
Two of these?
Three of these?
Four of these?
All of these?
Right–these tools are becoming increasingly ubiquitous. And I want to point out what is distinct about these tools and, really, about the overall AI hype of the last 8 months that feels different from prior cycles.
Generative AI feels different because of its ease of use.
I mean, we’re several years out from the launch of the Metaverse, and can anyone tell me what that really is, or whether they have visited it?
The lift to figure out what the Metaverse is, how to access it, how to create in or with it, and why it would be better than other things–that’s a lot to figure out.
But much of the generative AI stuff comes in the form of a chatbox…something that’s been around for decades and with which we’re quite familiar. See a textbox on the computer, enter text.
That’s what all of these did well right out of the gate (though Midjourney was a lil more complicated than that). Still, it took something terribly complex and nuanced and made it usable in a textbox. That’s a devilishly easy invitation.
Those are solid tools, but there are some others here that are really useful to learn more about.
Again, raise your arm or nod if you have used at least one of these.
Two of these?
Three of these?
Four of these?
All of these?
I’m not going into all of these, but in the resource there’s a table that breaks down useful information about each of them and why you might use them. Still, I wanted to highlight some of these:
Elicit.org: Operates as an AI tool that can help with formulating, refining, and identifying research literature. It’s an interesting playground to begin to work with around research.
NOLEJ: Is getting a lot of attention in instructional design because of the ways it can help build out courses. My rule of thumb is that if it’s good for instructional designers, some faculty are going to love this one.
Character.AI: This is one of those AI bots that create the sense that you can talk to a particular character, fictional or historical. I find this to be an interesting opportunity to make certain aspects of learning more engaging and thoughtful. If you can interview a facsimile of Marie Curie, does that change how you make sense of her work?
These tools are gaining more attention and interest in academia and while I think some schools will start to use them more consistently going forward, I also don’t think they will be the tools we’re using in 5 years.
We’re in the rapid diffusion of AI tools right now and so I’m hesitant to put my stakes on any one tool because I think in the next year or two, we’ll also see a rapid consolidation of tools or clear platforms that will be the winners who scoop up the others.
And yes, those are likely to be folks like Google, Microsoft, Apple, and Facebook, because one of the things that’s going to make a difference in the usage of these systems is how easily they can integrate our digital selves and create hyper-personalized guidance. And those big companies often already have a decade or more of usage history about us.
Conversational & thought-partner
I've been seeing people use this really well as a dialogue partner, or as something to challenge, reframe, or reflect their ideas and thoughts. My partner finds really interesting ways to engage with AI to elicit ideas or refine her thinking. In truth, this area is really helpful for all of us when we're trying to figure things out and don't have colleagues, mentors, or friends to help us make sense of ideas in the world or just in our heads.
Task minimizer
I've started a practice now where anytime I need to do a tedious task, I ask myself if generative AI could help.
Last week, I was looking to create a list of dates, as I must every semester. Rather than toggling back and forth between a calendar and writing out the dates, I just asked ChatGPT to generate a list of dates in a particular format for every Monday from August to December, and to indicate if and when there are holidays that week. The best part is that now I have that prompt to do this every year.
I might not have eliminated the task but I have reduced it and I think that's where we'll be with generative AI for the next few years.
Another example: I created a list of all the things I typically want to do in a day, such as exercise, leisure reading, creative writing, etc. I gave it that list along with the rough amount of time I want to spend on each. I also included things I must do, such as work 8 hours each day, and explained when I prefer to do some tasks, such as working out in the morning. I then loaded all of that into ChatGPT and asked it to design a schedule for me.
Then the cool thing happened. I asked it to create the code for a calendar file and I imported it to automatically populate my Google Calendar.
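The Monday-list and calendar-file steps above can also be sketched directly in code. This is a hypothetical illustration, not the actual output from my prompt: the year (2023), the event title, and the omission of holidays are all my assumptions. It builds the list of Mondays and a bare-bones .ics file of the kind Google Calendar can import.

```python
from datetime import date, timedelta

def mondays(start: date, end: date) -> list[date]:
    """Return every Monday between start and end, inclusive."""
    # advance to the first Monday on or after the start date
    d = start + timedelta(days=(7 - start.weekday()) % 7)
    out = []
    while d <= end:
        out.append(d)
        d += timedelta(weeks=1)
    return out

def to_ics(dates: list[date], title: str = "Class meeting") -> str:
    """Build a minimal iCalendar file with one all-day event per date."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//sketch//EN"]
    for d in dates:
        lines += [
            "BEGIN:VEVENT",
            f"UID:{d.isoformat()}@sketch",
            f"DTSTART;VALUE=DATE:{d.strftime('%Y%m%d')}",
            f"SUMMARY:{title}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)

# every Monday of a fall 2023 semester window (an assumed example range)
fall = mondays(date(2023, 8, 1), date(2023, 12, 31))
ics = to_ics(fall)
```

Saving the `ics` string to a `.ics` file and importing it is what auto-populates the calendar, which is essentially what the generated code from ChatGPT did for me.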
First Draft/Review
I and others have found that it's really great at creating first drafts of things, particularly rhetorically standardized work. If I'm trying to come up with a policy, a feedback form, instructions around certain processes, and the like, I'll use generative AI as my first go-to.
Especially now with tools like Claude, where I can upload a large text document to provide additional tone, context, and information, it can really help in just creating a first draft rather than staring at a blank screen.
And I think that's a thing we don't always value. In much of our work, even if we love it, there are things we need to do that we either don't like, that take longer because we procrastinate out of that dislike, or that are just clunkier for us to do.
Embedded AI
There are 2 types of embedded AI that I'm thinking about. The first is an AI system that seamlessly integrates across your devices and systems, to call upon as needed through verbal cues, gestures, or the like.
If your spidey sense is tingling as you think about the surveillance, data-mining, and many other issues around that kind of AI, I'm there with you.
That's a little further off, and for some of us, it's a hard no. But we are already seeing generative AI embedded in different tools, right? Google tools and the Office suite increasingly have AI-embedded features. Grammarly is also getting in on the game. Zoom just suffered a public shaming for its user policy, but there, too, we'll see more generative AI tools embedded in the experience.
So as we can see, generative AI is slowly showing up in everyday life. We’re going to have to think about how and where we as individuals want to implement these tools, and where they are already implemented. What does it mean that Google Docs can already leverage generative AI if that’s a student’s primary space for doing their work?
Here we are again. So, here I’m going to throw out there the following:
What tools are you most interested or excited about based on what we’ve discussed or what you’ve learned elsewhere?
Why? What is the value offered in those tools that you feel is important for you?
AI in Learning, Teaching, and Assessment
Now, let’s take a look at what AI means for teaching and learning. Some of the thoughts that follow are still in flux because, of course, the technology continues to change.
How can AI help students learn better?
What I love about generative AI is that it has endless patience. It’s available 24 hours a day and can provide as much time and attention as a student needs or wants.
No matter how much I want to, I can’t be all of that–even full-time faculty can’t be. But the reality of higher education in a capitalist society is that students often come to us with a customer framework. So having a tool or resource that can be this responsive, available, and patient can be really powerful.
Many of the AI chatbots now allow you to provide a link to your conversation. This means the student could use their chat as a study guide or share it with their instructor for guidance and clarity if the AI leads them astray.
How might using AI in class provoke deeper engagement and learning?
If these tools existed when I was in undergrad or even high school, I’d be using the hell out of them.
There are lots of things I learned, but dang, I struggled with philosophy and theory. The readings are hard, inaccessible, and felt beyond my ability to read. Hell, it took me more than 10 years to read Foucault’s History of Sexuality…and it only happened because I had the audiobook.
Now, if I could have engaged with Foucault's writings in a chatbot to test out understanding, ask for repeated and different explanations, or the like, I think it wouldn’t have been as hard.
That’s one of the value added elements of working with generative AI. It can support and guide students as a learning tool. They can keep asking it questions and learning without you needing to direct and guide everything.
Because I’ll tell you, when I was trying to learn Foucault–and I’m not trying to hate on Foucault but I’m guessing other folks had this experience too–I was in a classroom with folks fully familiar and deep into him and I wasn’t about to show how much I didn’t get. I wasn’t about to divert the entirety of class to say “I didn’t understand the pages we read and I don’t even know where to start.”
Additionally, AI can help students to delve deeper into a subject and fulfill their curiosity or revisit things in their notes. As educators, we can open up challenges to students about using generative AI. We can task them each with coming up with their own comprehensive resource or guide for a course and then have students peer review, edit, and revise what each student comes up with.
How can students leverage AI to improve their work?
Students can also leverage AI to improve their work. One of my students shared with me one of my favorite use-cases around this. She used ChatGPT to organize her notes. She knows that she’s not going to organize them herself, but she can throw them into ChatGPT to help put them in order. It helped her gain clarity around what she wrote. And this kind of practice can include notes and quotes from research that one might be using for a paper or project.
Of course we have to be mindful about whether what we get from AI is accurate or not–but that is something we are always navigating.
We have had to do it with social media, the internet, newspapers, and books. That’s a perennial problem.
It’s also worth considering that generative AI is only in its infancy and the growth and refinement we’ve seen in just 10 months tells me that leveraging it as a learning tool is going to be really helpful for many.
How can AI help us with tedious tasks?
As I mentioned, I’ve used it with tedious tasks such as creating dates for my syllabus. I’ve used it to clean up and tighten language around policy in the syllabus.
How can AI help us in course design?
I mentioned NOLEJ as one of the generative AI tools out there, but in general, I find that other tools like ChatGPT are effective for course design on their own.
I will get pretty good results if I ask it to produce the essential objectives for a course and provide it with specific details such as the course length, number of credits, who the students are, the class duration, and preferred pedagogical approaches. From there, I can do similar prompts homing in on assessments, learning activities, and materials.
How does AI help us with creating course content?
It can also help to create useful course content. For instance, it can create examples of reflections, answers, responses in discussions, blog posts, or other things you want students to see good and bad examples of.
How might AI help engagement in the course?
Finally, it can also help with engagement. It can generate or enhance your assignments, your content, or your prompts for live or asynchronous discussions. It may not do so on the first shot but I love that I can ask it for examples and then even more examples.
Where can it help in assessment design?
I’m a fan and practitioner of choice in my courses. I’m from the world of backwards design, where you start with your objectives and then design assessments that map to those objectives. And for me, this means that if I create an objective well, it should be flexible enough to be assessed in more than one way.
That is, a presentation, a creative work, an essay, or a blog might all be valid ways of achieving an objective in a course.
This allows me to offer students choices. “Oh, you wanna demonstrate you have achieved this objective–cool! Then here are 3 options. Don’t like those options, no problem. Option 4 is to pitch your own.”
This is a rewarding process but it can be a challenge to come up with different assignments with their respective guidelines and examples.
So, I can leverage ideas from generative AI to help me think about what kinds of assignments might achieve an objective. For instance, I might say, “My objective is for students to be able to meaningfully analyze a piece of literature in part or full.” That’s a mediocre objective, I’ll admit, but let’s roll with it.
When I go to ChatGPT, the prompt might be: “Provide five distinct assignments that can be completed in different formats or mediums that can achieve the following objective…and also, make sure each of the assignments are constructed through the lens of open pedagogy.” Whatever the list is, I can always adjust what I see or ask it for 5 more distinct ideas.
But the bigger picture is that this becomes a useful and focused space to expand and experiment with what kinds of assessments I want to use, along with leveraging AI to improve the assignment guidelines to make them clear and effective.
How can it help in feedback?
It can also help me to create rubrics–something for me that is a chore and yet often quite helpful to me and my students.
In terms of feedback, I have used it, and seen it used, to improve the kind of feedback that balances detail, tone, and next steps without tipping into too much information.
TO BE CLEAR, I’m not putting any student’s work into any of these tools. I have strong adverse feelings about putting students’ work into any generative AI tool at this point.
Rather, I would first engage with ChatGPT about what kind of details and feedback are useful for your student population, in your particular discipline, for a particular assignment. I would then dialogue with it to say, “Ok, create a template for what that feedback would look like in written form.” Once I’ve done that, I can return to ChatGPT and say, “Use the feedback template created and generate feedback that focuses on these issues…” and I’d paste a bullet list of concerns from having reviewed the work. It takes a little investment in creating the template, but what’s great about this is that it improves your consistency and helps you counteract your own shortcomings.
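The template-then-fill workflow described above can be sketched as a small piece of code. This is a hypothetical illustration: the template text, field names, and example issues are my assumptions, not an actual template from the talk, and no student work is involved. It just shows how a reusable template plus a per-student bullet list of concerns combine into one prompt.

```python
# A reusable feedback template; the {course}, {tone}, and {issues}
# placeholders are filled in per assignment and per student.
FEEDBACK_TEMPLATE = """\
Use the feedback template below and generate written feedback for a
{course} assignment. Keep the tone {tone} and end with concrete next steps.

Template:
1. What worked well
2. One or two priority issues
3. Suggested next steps

Issues observed by the instructor:
{issues}
"""

def build_feedback_prompt(course: str, tone: str, issues: list[str]) -> str:
    """Combine the saved template with a bullet list of instructor concerns."""
    bullet_list = "\n".join(f"- {issue}" for issue in issues)
    return FEEDBACK_TEMPLATE.format(course=course, tone=tone, issues=bullet_list)

# hypothetical course, tone, and concerns for illustration
prompt = build_feedback_prompt(
    "Intro to Sociology",
    "encouraging",
    ["thesis is unclear in the opening paragraph",
     "citations are missing page numbers"],
)
```

The point of investing in the template once is that each later round of feedback only requires pasting in a new bullet list, which is where the consistency gain comes from.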
What do I mean by that? Well, we know we are biased and imperfect people. We sometimes lose ourselves in small things–like the 10th paper in a row that decided not to follow APA format or spell-check or that does something else that hits a nerve. That can shape how we treat and view the 10th paper vs. the first. This is a tool that can help limit the negative downstream effects of that.
How might assessments change?
Ok, but the real question everyone wants answered is: what will this mean for assessments overall? I don’t have a perfect answer for you. The first thing to say is that generative AI doesn’t change much in some ways. Some portion of students will not do the work in the way that is expected of them. That’s always been the case; now, the scale of possibility feels bigger.
But here’s an interesting moment of reflection. Many folks in higher education hold more liberal values–they often believe in stronger safety nets that provide support and aid for folks who are struggling. They want folks to be able to access what they need when they need it.
Yet so much of our reaction around what students are going to do with generative AI is a worry that “everyone’s going to cheat the system”–the exact same concern that folks have when it comes to social supports. We won’t be able to tell who deserves to pass and who doesn’t. That angst around fraud is real and does have implications, but I just like to highlight that the same premise in a different context produces a very different answer for many of us.
It does that because we know it’s not that clear and straightforward. Often, folks who “cheat” are doing so out of necessity, precarity, and a sense that what they do doesn’t matter. Now, what if we responded to the heart of those issues rather than the framework of “cheating”?
Right now, I don’t think there’s much change that we can or should make around assessments that doesn’t feel like it’s going to alienate, marginalize, or exclude people–never mind being a mere stopgap before folks find a way to game the system.
Let’s take the calls for a return to oral exams. What does that mean for folks who stutter? What does it mean for folks who navigate ADHD? What does it mean for how class time is actually used? What does it mean for online courses or courses on Zoom? There is a reason why they faded away. They may meet objectives but in limited ways and ways that often have little to do with the larger purpose of the course or how students might demonstrate their learning in the real world.
My friend and colleague Autumm Caines has a good blog post that is in the materials. Her point is clear: good pedagogy–pedagogy that comes from a place of curiosity, care, and inclusive practices–is what works and has always worked to overcome lots of complicated and challenging things. Heck, that’s much of what got folks through the pandemic. That’s really what we need to lean into.
That’s also not me saying that doing so will solve all of it, but it can go a long way in reducing the chance that students will over-rely on these tools.
I preface this with the fact that I’m trying to predict into the future which means take it all with a grain of salt.
Josh Brake has a Substack post about personal vs. personalized education, and that’s definitely something I’ve been thinking about.
Generative AI is going to allow for a lot more personalized education. Learning that dynamically responds to the specific prompts and challenges of a given student. That can be incredibly useful.
So useful, in fact, that I imagine some of the massive online institutions are scheming how to integrate it further so that they can assign more students in each course to their fleet of adjunct faculty, OR find a way of reducing the instructor role further and paying them less. Either way, it is going to be leveraged to improve productivity while adversely impacting instructors.
Personal education, though, is more relationally based and grounded in the idea that learning isn’t just about processing content; there is also a social component to it. Relationships are a central part of learning, which is something that bears out in research–there has to be meaning and connection for transformative learning to take place.
That’s particularly true when we are considering students who have been marginalized and alienated by society or higher ed. We need to build more trust, and an AI tool isn’t going to do that particularly well or in healthy ways.
And what I imagine in the best case scenario is generative AI providing space for deeper content connections while the student is learning on their own. That learning can be further developed in the classroom where the instructor or peers can reinforce, challenge, or change the learning. AI is the learning tool but the classroom is the space of meaning-making and social learning.
One real potential down the road with generative AI is that we’re actually going to figure out how to learn better and how learning happens. Learning is still so much of a black box. We haven’t really studied how learning works in any way that centers learning.
Rather, we have largely studied learning as we currently practice it. That is, we create the structures of learning, or what we assume is how one learns, and largely study that. Right…we had the systems of learning–schools, mentors, etc.–long, long before we had a science of learning.
This approach to learning prioritizes convenience, not learning. For instance, we create F2F classes that meet 1, 2, or 3 times a week for 15 weeks and adhere to a standard of measurement–the Carnegie credit hour–that’s over 100 years old and comes from an industrialized view, not a scientific view, of learning. Sure, we have other models–the 1-week, the 6-week, the 8-week, and so on–but they’re all focused around a frequency of meeting.
Online courses largely flip this and say, “We think your learning should take about this long, but you figure it out.”
I’m not saying folks don’t learn in these spaces but we center convenience and efficiency. Heck, how often does the word “convenient” pop up when talking about online learning?
Those elements have a place, but here’s what I think may happen as we get to the stage of embedded AI–the kinda scary AI that’s always with us across devices.
That AI may end up tracking and monitoring how effectively we hold onto, learn, and apply ideas. And my theory is that it’s going to find out and help us redevelop learning. We’re going to find out that learning can’t happen in the linear way we structure our classes. We’re going to learn that learning can’t be effectively chunked into perfect slices of hours, weeks, or semesters.
We’re going to find that individually, some folks will fly through topics A through K but then need to spend days or weeks on topic L. And that time is going to be scheduled around when it’s maximally beneficial to the individual student’s learning. Some learning may need to be social, other learning may not. Some learning may get shifted around based upon what’s going on with the student–if they’re going through something serious, the AI may determine from elevated levels in a learner’s body that it’s not going to be useful.
Ultimately, I think we’ll have a different understanding of when, how, and to what degree learning happens, driven in part by a better understanding that convenience, while helpful, has also significantly limited how learning can happen and has missed opportunities for more substantial learning that sticks.
I know that’s quite futuristic, and I don’t think it will be that perfect, but I think that’s a direction we’re heading now that AI is here. Having a tool that can support learning and actively respond to our questions at any given moment is only going to feed our curiosity, defuse our confusion, and allow us to keep seeking clarity when we want and need it.
Some new questions to consider
How do you imagine you would have used these tools in your learning if they were made available when you were in high school or college?
How would you have approached teaching differently if these were in place from the beginning?
AI in the World of Work
All right–now for the final arc, we’re gonna look at what AI might mean for the world of work.
How can it become an instant upskiller for job-seekers?
One of the things I appreciate about AI is how it can help me and others prepare for new challenges that might be less familiar to us, or help us hone in on things we’re uncertain about. In the job search, I have seen it used to review a job description and a resume to more effectively help the job seeker apply for the position. It might mean polishing up the resume, clarifying the cover letter, or making stronger connections between one’s past work and the role they are applying for. I’ve also seen folks use some context and the job description to generate the most likely questions they will face in an interview, and even answer those questions to gain feedback.
How can AI open the hidden curriculum for the world?
But in particular, I appreciate that generative AI can often open up the hidden curriculum of the world. To me, that’s the most powerful…and most problematic aspect of this tool. It’s powerful because it can be used to help folks get past artificial, superficial, or discriminatory barriers.
There’s no better example that I can think of than the Cover Letter. The cover letter is the most trite piece of writing ever. It is this rhetorically-loaded piece of garbage that requires the applicant to supplicate themselves before the employer in the hopes of an interview. It’s highly loaded and susceptible to cultural, gendered, and racist interpretations. And the thing is, in the vast majority of instances, the job that one is applying for has nothing to do with how well they write a cover letter. In layman’s terms, it’s bullshit. And I say that as someone who can write a pretty strong cover letter–at least if my history of getting interviews for jobs I apply for is any indication.
So the fact that people can now use the job description and their resume to largely craft a cover letter and save themselves a tremendous amount of time is great. Whether folks are multilingual learners, neurodivergent, dyslexic, or just struggle with groveling in written form before a potential employer, it provides a level of support that I think can help job-seekers.
The other side of that hidden curriculum, though–the one that I worry about–is that it will come at a cost to language diversity and our expectations of language. The more we use this tool as it currently stands, the more it pushes us toward a homogenous form of the English language–one that is trained upon and values English derived from predominantly white and male cultures. There are lots of examples and explanations of why this is problematic. For me, it’s that it pulls us to get used to and expect a certain way of talking and writing–one that dismisses those who talk and write differently–and that doesn’t feel right either.
How can it help job-seekers become better at understanding their value and abilities?
I also think there are ways to leverage prompts with generative AI to better understand what skills you have or what value you offer. One way I’ve played with it, and seen others use it, is to provide the details of a project I worked on and ask what skills I was using and how.
Often, it reveals things I had not considered or hadn’t thought to frame in that way. Sometimes it offers something that isn’t true, but more often it captures something real. This can improve my own thinking about the value I offer in my current work when I’m doing an annual evaluation, advocating for improved conditions, or requesting a promotion. It can also help me think about how I communicate my value if I’m job-searching.
Where has it been showing up in the hiring process?
AI was in the hiring process for a while before ChatGPT. A few years ago, I wrote a piece about how AI was being used to review resumes and even to evaluate applicants through video-recorded interviews.
How might it be leveraged in employment and onboarding?
AI is likely to be used increasingly to review materials, but also to go a step further and crawl the internet to create a profile of the applicant based upon statistically probable places that person shows up. I’m sure many of us can see the problem with that, because while I know there are only about two other Lance Eatons with some kind of presence on the internet–neither of whom operates in my field–that’s not the case for many other names. Still, it’s likely that AI will be used to evaluate applicants and anticipate things about them.
I know some organizations are trying to use AI-plagiarism checkers to make sure cover letters are legit, and of course, that’s going to fail as miserably as it is in higher education with students’ writing. However, I do think there will be ways it can be helpful. For instance, I’ve used it to help come up with both job descriptions and onboarding guidance.
I foresee using it to help me think through how to support a new employee based upon their experience, the job description, and the initial projects I want them to take on. And yes, these are things I can do on my own. But being able to quickly get ideas on paper about how to map out a new staff member’s first year–with milestones and markers to share from the beginning as something we discuss and find consensus on–is also going to save me ten or more hours.
So assembling the onboarding process, creating policies or associated training, and updating and revising the onboarding guide at work are some useful ways I see generative AI helping.
What might this mean for job-seekers?
Of course, I also see AI being used in more problematic ways by employers. Sure, it can be used to provide contextual information, insights, and feedback to an employee. But it can also be used to calculate and determine whether the new employee is sufficiently meeting goals. Now, that is part of the hiring process–making sure the employee is capable of the work.
But this is where it can get tricky. When we’re using the tool to assess human effort directly, we’re going to run into situations where the AI is wildly off. But if we have created so much trust in the tool, we might forget or not question its fallibility. And that’s where current AI worries me–as we’re seeing with AI plagiarism checkers…when we use machines we do not understand to assess humans, we create a situation where humans cannot defend themselves effectively.
That’s probably my biggest worry and lesson in all of this for the foreseeable future. Use generative AI to help you and guide you, but be very very very careful about how you use it to judge and evaluate others, especially when there are real-life consequences on the line.
In this case, job-seekers are going to have to carry some awareness and understanding of where and when AI is being used on them. In fact, that needs to be one of the regulations or laws: that it’s always clear when employers are using AI on employees or potential employees, and that there is a clear and effective means to challenge the tool’s outputs.
How will integration of AI tools vary with individuals?
In the next few years, I suspect many folks will find themselves making use of AI to do things they don’t like or find tedious, or to get a quicker start on things they do like. I know many folks around me are finding it helpful for generating ideas, getting initial text, or looking for cracks in their thinking.
How may integration of AI tools vary across sectors & industries?
I think some sectors are going to be fast to incorporate generative AI if they haven’t already–such as accounting, graphic design, computer science, business, and marketing and communications–while others may be a bit slower, such as the health sciences, social work, psychology, and other types of people-centered work. In those industries, there are still significant concerns about ethics, privacy, and security, along with the work of educating practitioners about what AI will mean for clients and customers.
How might it reduce challenges for different styles of individuals?
There’s promise here for helping folks through the things they struggle with, or for quickly getting aid and guidance as they need it. Again, having a go-to persona that you can constantly ask questions of and get answers from is really powerful. It bypasses a lot of possible concerns, such as office politics, angst about coming across as insufficient, or feeling like you’re being disruptive or too needy. None of those worries may be warranted, and yet folks are likely to spend as much mental energy on the worry as on the task of getting help.
And of course, I have a concern here too. Right now, AI is still new enough that some folks are using it and being more productive. Ideally, technology should make work easier. But in 5-10 years, I worry about when AI is everywhere.
Because that means there is going to be a new, heightened level of expected productivity at work. That is, AI will not give us back time but will increasingly ask more of us. “Oh, it used to take you a month to produce a report. With AI, you can and should now produce a report a week.” And while sometimes these higher-level tasks are rewarding or feel more valuable, they are likely to also feel more stressful when the frequency of demand increases.
That is, while I think there are lots of ways AI will help, AI is also going to ask more of us the more we use it. We’ve seen this with other technologies and their impact on our expected productivity.
Scholarship is a great example. I’m currently finishing my dissertation and it’s focused on how scholars engage on academic pirate platforms to access research literature to produce their own research. One reason scholars are accessing pirate networks is because they feel the demand to produce even more research.
Because we can use computers and email and the internet, the amount of publications required to attain tenure or legitimacy in many fields has changed. I posit that the demand for productivity will increase even more with the rise of AI and that’s something we’ll need to think about and figure out in the years to come.
Final reflection for now.
Where do you imagine using AI in your life?
What feels like tangible ways that this might be helpful for your life and work?
Where are we now? We’re at the end–I promise!
So as you noticed, I created all these questions throughout the presentation. I did this for two reasons.
The first is that the resources for this presentation include an annotated slides section. There, you will find the text of this talk (or at least a close enough version–I may have strayed a bit). The text is aligned with the questions and also, where useful and relevant, I have included resources.
The second is that this is also one of the things that I’ve been thinking about with generative AI in the current form. It relies on the prompt and what is the prompt but a question. Nearly a decade ago, I started to read Warren Berger. He has several great books. His premise is that knowledge is valuable but questions are equally valuable and if we can learn to ask better questions, we can learn, grow, and work better. That to ask questions is to poke at and investigate a subject with a curious mind and through that process, one can learn better.
I feel like this is a great approach not just for folks looking to pursue advanced education (I won’t share with you how long it took me to come up with the right research question for my dissertation) but also a great methodology of learning for the age of generative AI. How can we use effective questioning to get the most out of these tools? I think there’s a lot of promise in using the questions framework to meaningfully engage with generative AI. So I figured I would make the slides filled with questions to get you thinking.
So I hope much of what I have said has been helpful, but some of y’all are still wondering what to do next, and my guess is that there’s a bit of overwhelm–my cat, Pumpkin, completely understands. I do have some practical recommendations in this regard.
Here are the ten things you can do to get yourself started, knowing that you’re going to have to keep developing and stretching as you go along.
First up: there’s no silver bullet. Some students will use this. Your goal is to figure out what makes the class compelling and meaningful enough that fewer students use generative AI in fewer ways.
In that, it means recognizing that there are ways we have lost control (or, I might argue, lost the facade of control). Students have a power with this tool that means we have to think about how we work with them rather than believing we have power over them. And I’m not saying that we all thought this way about the classroom.
However, I would encourage you to look at your syllabus–how many rules or expectations do you have in that document? Do you assume it to be, or refer to it as, a contract? Many of us treat syllabi like end-user agreements–documents that cover all the possibilities and put all the responsibility on the user or student. I would guess that if you did a Venn diagram of those of us who complain about students not reading the syllabus and those of us who don’t read end-user agreements, there’d be a lot of overlap. All that’s to say, we assume a lot of control, explicit and implicit, in our courses. I think we’re going to have to recognize that generative AI opens up ways that’s no longer true.

I’m here providing thoughts and frameworks for thinking about generative AI, but get out there and find your people in your disciplines. I assure you–there are rich conversations taking place on social media, on listservs, and at online and physical gatherings like conferences. Find the conversation and start learning what others are already doing.
Do some reflection about what you are ready for and what you aren’t. Don’t take on things you’re not ready for. If you’re not interested in fully integrating generative AI tools into your classroom, then don’t. But have clarity about why, and have curiosity about what it would mean if you did or didn’t.
Take 30 minutes and find some AI tools to play with. Give it some tries to see what it can do. Not just for learning but for you. See if there are things it can help you with.
I include a prompt guide in the resources; if you’re going to play with AI, try some of the prompts and see where they get you. Also, think about how prompts might be used in your class, directly or indirectly.
Give some time to thinking about where it makes sense to use it in your life, your work, and your classroom. You don’t have to use it, but consider where it might make sense and feel right and in alignment.
Have the talk with your students. Even if it is just to say, “Look, it’s new, I’m not sure, here’s where I stand.” You can also frame that conversation as “How do we want to use it, or what level of use do we accept, in this course?” But don’t pretend it doesn’t exist. Even better, provide some clarity about how you might recommend they use it. There are lots of ways they might use it that aren’t in the realm of what one would consider academic dishonesty.
Realize that one of the best ways to get students to use it less is to make sure you are building connection and trust in the classroom. When students feel connected and that they can trust you, they will ask. It’s happened in my classes: students asked if they could use AI for an assignment. I explained why I don’t think it’s ideal for that assignment but still gave them the final say. They opted to do the assignment without AI.
Most importantly, include students. I’ll end this with what we’ve done at College Unbound. In December 2022, I realized that we needed a plan for ChatGPT. With help from my colleague, Autumm Caines, and support from my Provost, I pitched and ran a course called AI & Education.
In the course, the students and I learned about AI, education and policy. We used that to help form the usage guidelines for students and faculty at College Unbound. We did that in Spring Session 1 and then test-piloted it in a second run of the course in Spring Session 2. We then had the faculty review it and we’re setting it up as the policy for Fall 2023.
But it was important to do this with students because their insights and considerations are incredibly important since they are the ones who are going to be looking for jobs in a world where these tools are increasingly common and expected.
Did you enjoy this read? Let me know your thoughts below, or feel free to browse around and check out some of my other posts! You might also want to keep up to date with my blog by signing up for email updates.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.