Trailblazers in Higher Ed: How the Best Use AI and Other Emerging Technologies to Innovate


Today’s tech-savvy students expect a tech-savvy experience. Watch this on-demand webcast to learn how MIT and ASU are taking visionary strides in embracing technology and innovation to prepare for the future of education.

Video Transcript
Thank you for joining us for the Trailblazers in Higher Ed webinar. We will give it about thirty seconds, and then we will start. For those entering the room, we're just waiting for everybody to join. Okay. It looks like our attendees have started to slow down coming into the room.

I'm sure we'll have some late joiners as well. But welcome, and thank you for joining us for the Trailblazers in Higher Education webinar. We'll be talking about AI, and we've got a couple of great perspectives on that today from our panelists, whom I'll introduce in just a minute. But first, I wanna give you some brief overviews.

We will actually be taking questions in the Q&A and trying to integrate those into the conversation. So please use the Q&A box as opposed to the chat box, although we'll be monitoring both, and we would like to pull some of those questions into the conversation. We will be making a recording of the presentation available to all of the registrants after the event.

So you can count on that. And without further ado, let me introduce our panelists, starting with Elizabeth Reilley. Elizabeth joins us from Arizona State University.

She has over a decade of experience working with data and analytics in higher ed across a variety of areas, including strategy, policy, information technology, admissions, career services, student affairs, and academic affairs. She currently leads the new AI Acceleration team, part of a large initiative led by ASU's Enterprise Technology, to drive strategy across the university that empowers all students, faculty, and staff to leverage the advantages of AI to enhance their daily work. Elizabeth, welcome. Our next panelist is Sheryl Barnes, bringing a very different perspective. Sheryl has more than twenty years of experience working at the intersection of faculty, technology, and teaching, at Tufts and Harvard before MIT.

Sheryl spent almost ten years as an academic advisor living in a freshman dorm at Harvard. She received her Ed.M. in Technology in Education from the Harvard Graduate School of Education, and her BA with a double major in biology and society, and government, from Cornell University. Sheryl is the Director of Digital Learning in Residential Education at MIT's Office of Open Learning, where she leads a team helping faculty leverage digital technology to transform teaching and learning for MIT students. Sheryl, welcome as well. So this is gonna be kind of an informal conversation.

We wanna talk about what both of your institutions, and you personally, are doing around AI and education. This is one of those topics where we get an incredibly high number of attendees for our webinars, because there's so much interest and, I think, so much disparity going on across the industry right now. So let's start by talking a little bit about last November: November thirtieth of twenty twenty-two, my birthday. ChatGPT dropped, and it very quickly reached a million users, in like five days. And that led to what we're seeing now, this generative AI revolution. So let's talk a little bit about that initial reaction from your institutions when that dropped.

Maybe, Elizabeth, we start with you: what was the initial reaction to ChatGPT and AI? Sure. Absolutely. So hi, everybody. Pleasure to be here today. My initial reaction was excitement.

There are few times that I can think of where a technology that had been evolving for so many years was put in people's hands in such a compelling way that it changed things overnight. A lot of us who have been in this field for a while, in data and analytics and statistical analysis and modeling, had seen some of these technologies evolve, but seeing how a user interface in front of a model like that can change the world when folks can really experience it was a really big deal, and it was a turning point for me in how I think about technologies and how people interact with them. Arizona State University pretty immediately embraced the possibilities of AI as something that could potentially be a big equalizer in higher education and create that personalized learning experience; there was an immediate openness to it. I think part of our culture of being number one in innovation, nine years in a row as of this week, is to be open to those new technologies and those possibilities, but also to look at them from an ethical and principled lens, which the community also immediately did. And, Sheryl, how about MIT? What was that initial reaction? Yeah.

Well, I guess I would say, for myself personally, my very first reaction, because this has been the most common reaction to new technologies that people say are gonna change everything over the last twenty-five years or more, was like, oh, yeah, whatever, there's nothing really to this. And then about ten minutes later, I was like, oh, wow.

I was thinking, there's this hype cycle and the trough of disillusionment, and how many times we've been through this. And then I was like, oh, no, this is different, very different.

And like what Elizabeth said, the power of it in everybody's hands, just sort of bursting on the scene that way. I'm not involved in the development of these technologies at all, so I was sort of with the rest of the world. This didn't come out of nowhere, but it really sprang onto the scene; crossing that line to something that's actually usable was really kind of dramatic. And in terms of MIT's reaction, and I'll probably say this a lot today, it's hard to summarize in one way.

Of course, we have CSAIL, the Computer Science and Artificial Intelligence Laboratory, here since the fifties, so I can't characterize their reaction. They may have been slightly less surprised than I was. The folks who are actually helping develop these technologies, I actually don't know what their reactions were. But on the teaching side, I would say there was a range: from excitement, among those who could immediately see the potential for their students, to despair.

I think that's fair. You know, concern, fear; November is not that far ahead of the spring semester. Like, what the heck does this mean for my class, for my exams, for my writing assignments in the spring? So I think we had the full range of reactions, I would say. I think one of the upsides of speaking to such innovative universities is that, where the initial knee-jerk reaction for a lot of institutions was to see it only as a cheating tool,

I think both ASU and MIT very quickly saw the potential for innovation and that potential for positive usage. And, you know, to Elizabeth's point, it was interesting: AI is not new. What was new was the more powerful large language model behind the tool, and then that easy-to-use chatbot interface that would give such massively improved results over the chatbots we were used to using previously. Right?

AI has been around for a while, with auto-complete and simple chatbots and predictive text, things like that. This is just kind of the next step of that. But was there, at any point, a movement on either of your campuses to ban ChatGPT or other AI-powered tools? Yeah. There was no movement to ban

those tools at Arizona State University, but there was an immediate embracing of academic freedom, a reiteration of our principles of academic freedom, allowing faculty to use AI how they see fit in their classrooms, whether that is a no-AI experience or a very heavy embracing-of-AI experience. So, not a ban, but an embracing of that freedom for people to navigate the space in the way that they see fit. Yeah. I would say probably a little bit similar here. I mean, MIT lets people do a lot of things here.

Like, we let the students build a roller coaster out of wood that other students ride. At some point, I think in the not-too-distant past, students were allowed to throw pianos off the top of a building to measure something or other. You may know we have a really strong history of hacking culture here; they put a police car on top of the dome years ago. So I don't know if it's exactly an embrace of that. I think part of it is an embrace of that sort of creativity and that intellectual energy, and part of it is maybe just understanding that it wouldn't work; any sort of ban would just be impractical.

And we all have energetic, creative students, so there's a real interest in not putting down dictates that people know are unenforceable, because it makes you look bad. Yeah. And we've seen the progression since November; we're, what, nearly ten months in, I think. We've seen the release of ChatGPT Plus, powered by GPT-4. Right?

That's trained with over a trillion parameters, versus GPT-3's 175 billion. So it keeps getting smarter, and we keep finding new tools. How has the perception changed from that initial response, that initial reaction, to where we are now? Elizabeth, I think your case is interesting, because you are now leading a program that didn't exist before, right, around AI and putting it to use. And I think the hype cycle has led to some of that positive momentum.

What does that look like for you? So, yeah, I have a job today that I never thought I would have, or never thought would exist, a year ago. So that's very exciting and a very tangible change for me personally. For the university, I've seen a lot of quick maturing in how we approach this technology. Just this past week, we had a convening of folks from across the university in a community of practice to talk about how different faculty members are using AI, and about opportunities in the provost's office, in our Learning Enterprise, in our Knowledge Enterprise and research. It's really been an opportunity to bring folks together in a way where we feel like we really need each other, where we need each other's skills.

We need to help each other to make the best use of the technology, and to make the best use of economies of scale in this space, as these technologies, as we'll probably talk about a little bit later, are very expensive. So we're making sure that we're being good financial stewards of our resources. It's been very exciting to see all the different ways that faculty are approaching the use of AI and using these technologies in the classroom. There's just an infinite number of ways for faculty members to use AI, in ways that teach me, as a technologist, different things about how to approach the technologies and how to create technologies that are going to be pedagogically sound, that are going to help along that journey of encouraging discernment, encouraging critical thinking, and not just providing an answer, which doesn't help any of us grow as human beings.

Sheryl, your thoughts on kind of the current state? Yeah. I guess I would say, in terms of teaching, again, the classroom, maybe I'll share a little bit about that. There are two classes that I know of, and probably others that I don't know of, that somehow were ready, from November to the beginning of January, to embrace the use of generative AI. So that's pretty extraordinary. And one of them actually has an open website, so I can just post that in the chat now. Yeah.

Like I said, there are probably others, but one was in the Sloan School of Management: Melissa Webster, who's a lecturer there; she teaches communications. She was ready to start experimenting with it right away. She worked with our teaching center and changed her spring class in order to embrace the use of ChatGPT, and she is scaling that up in her fall class. And then the class that I posted also happened in the spring: more of a discussion-based, project-based, iterative kind of co-developed course, with the folks in the Center for Constructive Communication at the Media Lab. So again, two communications-related classes, so maybe not super surprising that they would be on it right away.

And then this term has given a little more time for folks who have different kinds of classes, who have bigger classes. Our introductory physics class is fairly well known for having switched to an interactive, sort of flipped model about fifteen years ago, I think, now. At MIT, all our classes have numbers instead of names, so this is 8.01.

The eight means physics. 8.01 has a name too, but nobody uses that. 8.01 has got seven hundred students. Every student at MIT takes a year of physics and a year of calculus, so it's a big class for us.

We only have about twelve thousand students total. So I'm laughing; we're like a hiccup compared to ASU in terms of number of students. But the amazing course team in 8.01 has developed a UI over ChatGPT that they're using in their class now, in the fall. They're using it with about a hundred students right now, and they're scaling up to all seven hundred. And they're using it for the kinds of things I think you were just referring to: helping students synthesize information, helping students get more practice with the problems, generating extra problems and checking the accuracy of the response, and also then introducing discussions about it.
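
For readers curious what a thin "UI over ChatGPT" like the one Sheryl describes can look like, here is a minimal sketch using the OpenAI Python client; the system prompt, model choice, and tutoring behavior are illustrative assumptions, not the 8.01 team's actual implementation.

```python
# Minimal sketch of a course-specific wrapper over a chat API.
# The system prompt and model name are assumptions for illustration;
# this is not the actual 8.01 tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a tutor for an introductory mechanics course. "
    "Help the student synthesize concepts and generate extra practice "
    "problems, and remind them to check answers against the course notes."
)

def tutor_reply(student_message: str) -> str:
    """Send one student message through the course-specific system prompt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("Give me two extra practice problems on momentum."))
```

The value of a wrapper like this is that the course team, not the student, controls the framing: what the bot will help with, and what it pushes back to the course materials.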

So it's pretty amazing to me. I can't imagine myself in charge of a class of that size moving that quickly. But I think that characterizes the change here: some folks are all in from the start, others are getting in now, and there's a great deal of experimentation going on. There are also, like everywhere, people who have banned it in their individual class. The university doesn't have a position on that; instructors are allowed to handle the situation how they want to. So there are plenty of instructors who have told their students, no,

you're not allowed to use it. That's not the most common position, though, because banning, with college students, is a little difficult, especially if it's unenforceable. I would say the most common position is: you can use it as you would any other resource or a human collaborator, and you need to cite that.

You know, if you had an important contribution from your roommate, you should document that in your project too. So, yeah. Sheryl, you reminded me, when you were talking about the communications folks and how you might expect them to be embracing ChatGPT and these large language models: you also mentioned a lot of other folks you wouldn't necessarily expect to be those early adopters. We're seeing the same thing here at Arizona State University, where Jeffrey Cohen, our dean of humanities, has leaned very heavily into the AI space and has been very supportive and encouraging of his faculty finding ways to use AI in the classroom to enhance that learning experience and to make learning more accessible to more diverse groups of students. And our English department, with Kyle Jensen as its director, has leaned very heavily into AI.

He's done some experiments in the classroom using things like ChatGPT, a tool called Wordtune, as well as Google Bard, in the English classroom, collecting feedback from students about their feelings about having used those technologies and how they felt it had contributed to their learning. So I'm really excited to see the feedback from students directly, and there will only be more and more over time. Yeah. There are some great questions coming in in the Q&A box; we will get to many of those in just a bit. But we would be remiss if we didn't talk a little bit about the academic integrity aspect of AI, generative AI, and the impact there.

I hosted a webinar a couple of weeks ago, and I heard something very disturbing in the chat. Somebody said, "We are going medieval on them. We are going back to paper and pencil." And that focus on cheating, to the detriment of learning, is actually kind of alarming to me. But we know that when we look at academic integrity tools and their ability to properly identify AI-generated content, we've already seen headlines of students being falsely accused, and things like that. How do you create an environment where you don't go medieval,

where you actually maintain the use of technology but address those cheating considerations? Sheryl, do you wanna start with that one? Sure. I can start with that one. My first response is that there's an aspect of us that's actually still medieval; we're not really "going" that way. And I mean that in a funny way.

What I mean is this: there are a lot of classes, especially the STEM classes here, where students are encouraged to use whatever helps them learn, and then at the end of the day, they need to take an exam, in person, and they don't have these tools to help them with that exam. It's not intended to be punitive at all. It's just that there's a difference between using a tool to help you learn and using a tool to help you avoid learning. So there's one simple mode that's very well established here that controls for that a little bit, which is a final assessment: an in-class exam, an in-class unaided presentation, an oral exam, a hands-on project.

In some ways, this tool points and pushes in the direction of more engaging pedagogy, and that's not a bad thing. So it's not intended to be medieval in a punitive sense, but it is a rather traditional mode; sitting in a room and taking an exam is maybe not what you think of happening at MIT, but it definitely happens here. And it's a more holistic assessment approach than just a twenty-page paper might be, which is easily accomplished with ChatGPT. Elizabeth, how about you? I don't think it's a bad thing that this moment is forcing us to think of assessment in more authentic ways, in different ways, in the long term, if we're careful about it.

This moment in time, how we assess learning, how we assess growth and knowledge, can lead us to more equitable ways of assessing knowledge and growth, outside of the traditional "write a twenty-page paper and I'll grade that paper." I see a lot of our faculty looking at assessing the process: assessing students on critical thinking, assessing them through different pieces of their process, whether it involves interacting with AI tools or not, and being able to ask more questions of why. How did you get there? Why did you go there? Questions that are more difficult to use AI to answer, in really unique and meaningful ways. And yes, there are faculty who are having students come into the classroom, pencil and paper, computers down.

There are also a lot of other ways to assess students while using AI technologies at the same time: doing things like checking in verbally through a writing process that might be done online, or a test-taking process that might be done online, and having students verbally talk about, why did you answer the question that way, or how did you get to that response? I saw that you used this technology to change your wording from this to this; why did you do that? Why did you think that was a better approach or a more meaningful way to describe that particular situation? So I think there's a lot of opportunity, and I've been really heartened to see the dozens and dozens of different ways that our faculty have leaned into this moment to look at assessment in a more critical and more authentic way. I think to both of your points: I have a twelve-year-old son

who's in seventh grade, and he said, "I truly don't understand the difference between using Grammarly and using ChatGPT." And I actually think, Sheryl, your point about whether you're using it to enhance your learning or to avoid learning is a pretty good bellwether. But have your institutions created policies to help guide students, to provide clarity around that? Or are you erring on the side of openness?

What's the right approach there? There are no across-the-board policies for our students; the instructors set those in their own classes. I would say that myself and the head of our teaching center have been working together on this topic, and they have a lot of really great resources on their site; I'll share the link to their website. That includes example syllabus statements for faculty to choose from, so they can be encouraged to put something in their syllabus and to actually decide what their approach is gonna be. But it's up to the instructors, and the students need to follow what their instructors tell them. Yeah.

It's very much the same here. Our provost's office put out some sample syllabus statements of what I call, like, a small, medium, and large: a no-AI-in-the-classroom, a some-AI-in-the-classroom, and a very AI-heavy one. Faculty don't have to use the syllabus statements, but they're suggestions that are out there to help faculty evolve their policies in the classroom in this new place that we're in. You know, one of the looming challenges as we look at AI, and we've got some really good questions in the Q&A that kind of align with this, is the accessibility gap. There are a couple of different angles to that.

One is the cost of ChatGPT: GPT-4 is available through ChatGPT Plus as a paid service, so the most up-to-date, fastest version is only available if you pay. But some of the other issues are also around ADA compliance; these tools are not ADA compliant at this point. How do we address both accessibility in that aspect and accessibility from a financial standpoint? How do we keep this from being a tool for the students who can afford it, to the detriment of students who can't? Yeah.

Those are really great questions, and we don't have all the answers to those today. That's part of the reason why we created an AI team within our enterprise technology office: so that we can create experiences that are accessible and ADA compliant, using the models and APIs that are available from the large cloud providers and through other means, but building user interfaces and experiences that can be accessed by screen readers and other technologies. We wanna make sure that we're developing these technologies in an accessible way, whether the providers are doing that today or not, and that folks have the same level of access across all kinds of varying abilities. Because we see this as a moment where we can continue to develop quickly and experiment quickly, but also take a beat and say: okay, this is our opportunity, where we can either increase the gaps between the haves and the have-nots, or we can really decrease those gaps. Let's move forward in a way where we're decreasing those gaps, even if it means taking a second or two before we put some technologies out into our broader communities.

And some of that is a cost calculation. These technologies are extremely expensive, so we're making sure that we are creating funding models for them that are sustainable, where we're not creating a technology that one college might be able to take advantage of because they have more income than another college whose students are no less deserving. We wanna make sure that, from a funding aspect, we're shored up in a way that when we release things, we are not having to charge huge amounts to our community to use these tools, because they are very expensive when we look at scaling these technologies. They're rather inexpensive to experiment with, to iterate with, to do rapid proofs of concept with, which is what we're doing now.

But when we talk about the scale of the hundred and forty thousand plus students that we have at Arizona State University, and over twenty thousand faculty and staff, those token numbers look small individually, but they start to add up rapidly. Yeah. We actually had a question specifically around the economics of AI, and that cost at scale is one of the things that didn't get a lot of attention early on; now, as we've experimented with it and we try to put it into practice, we look at those costs, and they could be prohibitive.
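
As a rough illustration of how per-token pricing adds up at that scale, here is a back-of-envelope sketch; every figure in it (usage, tokens per query, price) is an assumption for illustration, not an actual ASU number or a current vendor rate.

```python
# Back-of-envelope estimate of annual LLM API cost at university scale.
# All figures are illustrative assumptions, not real usage data or pricing.

USERS = 160_000                  # ~140k students plus ~20k faculty and staff
QUERIES_PER_USER_PER_WEEK = 10   # assumed usage
TOKENS_PER_QUERY = 2_000         # assumed prompt plus completion tokens
PRICE_PER_1K_TOKENS = 0.03       # assumed blended price in dollars

weekly_tokens = USERS * QUERIES_PER_USER_PER_WEEK * TOKENS_PER_QUERY
annual_cost = weekly_tokens / 1_000 * PRICE_PER_1K_TOKENS * 52

print(f"Tokens per week: {weekly_tokens:,}")          # 3,200,000,000
print(f"Estimated annual cost: ${annual_cost:,.0f}")  # ~$4,992,000
```

Fractions of a cent per query, but roughly five million dollars a year under these assumptions: that is the "adds up rapidly" point.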

Yeah. And I would say here, it concerns me greatly; just plus-one to everything Elizabeth said in terms of the concerns. Because right now, in this moment, GPT-4 is so much better than GPT-3.5, and there are a lot of students for whom twenty dollars a month is absolutely impossible, and a decent number of students for whom twenty dollars a month is totally fine. So I think it's something for instructors to be aware of, again, when they're thinking about their assessments. I think it's definitely a real problem, and the accessibility issue from a sort of ADA standpoint also. And in terms of the broader issues, not happening in a classroom here right now, I wanted to mention one of our great faculty members, Cynthia Breazeal.

She's a professor of media arts and sciences in the Media Lab, and she's also Open Learning's dean for digital learning. She's leading a research project where MIT is collaborating with Georgia State University and Quinsigamond Community College, which is near here, to design, implement, and evaluate a generative AI tutor in intro to computer science at Georgia State and Quinsigamond. The goal of the project, which is related to this question, is to provide initial evidence on how generative AI can be designed and deployed to promote inclusive and equitable learning outcomes for underserved students; to look at the factors that facilitate and hinder the effectiveness of utilizing generative AI tutors; and to look at how generative AI and human tutors can synergistically support the unique learning needs of students from diverse backgrounds in a personalized and scalable manner. So it's gonna take a little while to see what the results of that are, but that's at least one project where folks are looking at this. Yeah. It's interesting, because we've already seen some early data that students actually really like working with AI tutors in support of educators.

Right? When a teacher can't be with them and they can't be in the classroom, they can actually engage with an AI tutor and benefit from that. We at Instructure announced a partnership with Khan Academy and their Khanmigo tool to do something similar. And a big portion of what we're looking at as well, just like you said, Sheryl, is the data behind it: are these AI tools helping students achieve their academic goals?

If we can actually work with mature tools, use them in Canvas in the classroom, and see what that looks like, we can actually prove that these tools are beneficial and outweigh the potential negatives. One of the other elements there, I think, that is related to that... Sure. I'm sorry,

did you have another point? Yeah, that's alright, but I can go after you if you wanna move on. I'll just add, and it's sort of in line with this: some people may know MIT has a long history that we're proud of, of openness, starting OpenCourseWare more than twenty years ago. These are my colleagues;

I work with them closely now. Sustaining that all this time is amazing. So, somewhat in that vein, I will just say, there's a course going on now, looking at some of these issues, including equity: another course happening right now at MIT. And, I don't wanna bury the lede: it's open.

So if you go to that link, you can actually see the course materials. It's not fully open in the sense that you're not gonna get instructor feedback on your assignment, but you can see all the materials, and you can actually sign up as a listener in the class. It's taught by an amazing team that includes a lot of renowned faculty, including Cynthia Breazeal, whom I just mentioned, and Hal Abelson, who's a professor of computer science and actually a founding director of Creative Commons and the Free Software Foundation. It's a class, as you'll see there, looking at generative AI and K-12 education. So if you are looking for a really high-quality resource that's happening right now, you might find that to be interesting.

It looks like an amazing class to me. I just double-posted those to make sure they were available to everyone in the chat there, so you should be able to see those. One of the challenges, and it's actually interesting, I watched a news story this morning on a number of authors who have actually filed suits against OpenAI around intellectual property: they feel that their works were used to train the model,

and that their proprietary material is now part of a much larger model. We see the same thing with students, and the potential for students to put personal information, IP, things like that into AI. Do we train students to be cautious with that? Are your institutions doing anything specifically on training students there? It's definitely a challenge with a lot of aspects, but is there an approach that you've adopted? Yeah. So at Arizona State University, we do a lot of thinking around digital trust. What does it mean to have trust in technologies, to build trust with users, with folks who are using and interacting with technology?

Recently, we released a set of guidelines around digital trust that includes things like thinking about intellectual property and copyright implications, and things to look for in terms and conditions in these various softwares. The space is changing so quickly that there isn't one set of guidelines that one can put out into the world, but that also helps to encourage that critical thinking, being thoughtful about how you interact with technologies. We try to provide that scaffolding, or those guidelines, for people to zero in on what to think critically about when they're going to interact with a technology: those intellectual property implications, the downstream implications on the ability to copyright something. I think we will see lots and lots of litigation and court cases and tons of evolution over the upcoming years, and I'm very interested to see where things land there with artistic freedom, and also with being able to copyright your work and get credit and be compensated for your work. Yeah.

I don't have too much special to add from here; it's similar here, and I totally agree. I guess the only thing I'll add is that I think the pace of the change right now in this realm is driving a lot of the overwhelm, a lot of the chaos, a lot of the lack of attention to boundaries and all the rest, and it'll be interesting to see if things stabilize at some point or if they don't. But yeah, those are really important concerns. One of the things I think about a lot in this space is compensation. With GDPR and other similar types of regulations,

we see a move towards compensating people for their data, for their information. And so I wonder if that's where this goes: making sure that people can consent to, or be compensated for, data that they produce, that they create, that goes into these models. That's something that I've been thinking a lot about over the last few weeks. Have you done any work with, and there's a question that riffs off of that very well, closed large language models? That is, large language models that are using your own data, that are shut off from the open versions, that are a little more "protectable," if that's a word. Have you done anything around that, just to kind of build walls? Yeah.

So we've done some work with taking open-source models and then hosting them internally, doing some fine-tuning training in that space. Our Knowledge Enterprise, the research arm of our university, does a lot of wonderful work in this space, building large models and training them. With the costs at this point in time, while we're experimenting with creating and training our own models in house, we also know that to scale those is prohibitive: to create a model with billions of parameters and then host that model. We're certainly doing experimentation in that space, but we're also, at this time, very cautious about using any student data. So the model training that we have done has been on information that's already public, or that we would all feel comfortable making public, like some of our knowledge bases and things like that.

We haven't included any student data in any of our model training or fine-tuning at this point in the large language model space, because we wanna think through very carefully all of those implications, and think through consent processes: how do we build trust and make sure that we're doing things in such a way that students know how their data is being used and have an opportunity to consent or not? And I'm sure there's a lot of stuff like that going on around here. It's a little outside of my realm with regard to the teaching, but I will say, in physics, they're working on training a tutor for their introductory class on their own data. So that's sort of foundational work that's underway right now.

And I'm sure there are lots of other folks doing similar things around here. Like ASU, actually, we have a lot of MOOC data, content, course content that could be used, I think, in some really interesting ways. I don't know if anybody's digging into that yet. But I'm not talking about student data; I'm talking about content.
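
Here is a minimal sketch of what hosting an open-source model internally, as Elizabeth describes, can look like, assuming a Hugging Face checkpoint; the model name and prompt are placeholder assumptions, and neither school has said which models or stacks they actually run.

```python
# Sketch: serving an open-source LLM on internal infrastructure, so prompts
# and institutional content never leave your own servers. The model name is
# a placeholder assumption, not what ASU or MIT actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed open checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# Only public, non-student content (e.g., knowledge-base articles) goes in.
prompt = "Summarize this public knowledge-base article on course registration: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The "walls" the question asks about come from the deployment, not the model: nothing in this setup calls out to a third-party API, so data-governance questions reduce to what you choose to feed it.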

Yeah, and I think there's a caveat out there that we're definitely abiding by all of the student data privacy and security aspects when we're talking about anonymized and aggregated data. So, Heather put a really great comment in the Q&A that I love, because I'm somewhat fascinated with the idea that an AI can hallucinate. Right? The question is: "I'm a bit concerned about how frequently ChatGPT answers questions in ways that are just factually wrong." And whether that's a misinterpretation of the information that's out there, or the hallucination piece, we've seen that much more in GPT-3 than in GPT-4.

GPT-4 is apparently much less likely to hallucinate and give you wrong answers with high confidence. There's a parallel here with Wikipedia: when Wikipedia first came out, you could look at it and it would give you an answer, but there was bias built in by whoever wrote those entries. How do we build that trust with AI? How do we make sure that we're having students keep a critical eye towards the results they get when working with generative AI tools? Yeah.

So we've explored a few areas on this front. For everything that we do, we have an evaluation framework that we've created for evaluating both vendor products, third-party products that we have looked into, and our own models and products that we create in house. One of the pieces in that evaluation framework is looking at the percent of correct responses in various domains. For example, we've done some work on: can we make a help bot that can ultimately help our students navigate the ASU landscape in a more seamless way, and make sure that their questions are answered? But our threshold for putting something like that in front of our students is extremely high, because of this issue of hallucinations.

So what we're doing in this space is first experimenting with help bots in front of our student-facing staff: folks in our Experience Center, which is our one-stop shop, twenty-four hours a day, seven days a week. Any question any student, faculty, or staff member has, they can call our Experience Center, chat with our Experience Center, email our Experience Center, and they will find the answer to that question or get them to the right place. We have done some experimentation with our Experience Center specialists, providing them a bot so that, if they don't know the answer to the question being asked, they can put that question into the bot. Instead of them having to search a couple of different knowledge bases and our website, it provides all that information to them in one place. But what it does when it creates that nicely worded large-language-model result is that it cites every single place where that information came from.

So all of the links to any knowledge-base article, any website: most of the answers come from two or three, sometimes four, different places and are brought together in that large-language-model response, and staff can fact-check that data. We've also experimented with different products where it actually shows them the text from which that answer came, which makes it a lot easier for them to check on the spot. Instead of having to go out to the links, they're able to fact-check right there.
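
What Elizabeth is describing is essentially retrieval-augmented generation with mandatory citations. A minimal sketch of that pattern follows; the function names and the retrieval step are hypothetical stand-ins, not ASU's actual implementation.

```python
# Sketch of a retrieval-augmented help bot that cites its sources.
# `search_knowledge_bases` and `llm_complete` are hypothetical stand-ins
# for a real search index and a real LLM API call.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source_url: str

def search_knowledge_bases(question: str, k: int = 4) -> list[Passage]:
    """Hypothetical: return the k most relevant passages across KBs and the website."""
    ...

def llm_complete(prompt: str) -> str:
    """Hypothetical: call whatever model the institution licenses or hosts."""
    ...

def answer_with_citations(question: str) -> dict:
    passages = search_knowledge_bases(question)
    context = "\n\n".join(
        f"[{i + 1}] ({p.source_url})\n{p.text}" for i, p in enumerate(passages)
    )
    prompt = (
        "Answer ONLY from the numbered passages below. Cite passage numbers "
        "for every claim; if the passages don't answer it, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {
        "answer": llm_complete(prompt),
        # Surface both links and raw text so staff can fact-check on the spot.
        "sources": [{"url": p.source_url, "text": p.text} for p in passages],
    }
```

Returning the raw passages alongside the answer is what enables the "check right there" workflow, rather than making the specialist chase the links.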

So those are some of the things that we're doing. And then we have a suite of questions across various and sundry domains that we enter into any of these products, and we bring in the response to each question. Then we have human beings who go in and evaluate the correctness of that response, and then evaluate the quality of that response. So we're able to compare apples to apples across the various products that we're looking at, and make sure that we're really looking at that aspect of correctness and of hallucinations, because that's one of my biggest fears, the thing that keeps me up at night: that I put a product in front of a student that gives them incorrect information and leads them astray on their student journey.
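
And one way to structure that apples-to-apples comparison: the sketch below runs one shared question suite through every candidate product and writes out a sheet for human raters. The questions, products, and scoring columns are made up for illustration, not ASU's actual framework.

```python
# Sketch of a human-in-the-loop evaluation harness: the same question suite
# is run through every candidate product, then humans score each response.
# Products, questions, and the rating scale are illustrative assumptions.
import csv

QUESTION_SUITE = [
    {"domain": "financial_aid", "question": "When is the FAFSA deadline?"},
    {"domain": "registration", "question": "How do I drop a class after the deadline?"},
]

def run_suite(products: dict, outfile: str = "eval_sheet.csv") -> None:
    """`products` maps a product name to a callable: question -> answer."""
    with open(outfile, "w", newline="") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["product", "domain", "question", "response",
                        "correct (y/n)", "quality (1-5)"],
        )
        writer.writeheader()
        for name, ask in products.items():
            for item in QUESTION_SUITE:
                writer.writerow({
                    "product": name,
                    "domain": item["domain"],
                    "question": item["question"],
                    "response": ask(item["question"]),
                    "correct (y/n)": "",   # filled in by human raters
                    "quality (1-5)": "",   # filled in by human raters
                })
```

Because every product answers the identical suite, the correctness rate per domain becomes directly comparable across vendors and in-house models.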

And Sheryl, your thoughts on that? You know, in terms of things in the classroom, if I'm making the right kind of connection here, I actually haven't heard about too much of this going on, but if I were teaching a class this semester, I would be inclined to have a fairly generous policy. One of our instructors, Melissa Webster at the Sloan School, has said she thinks these are the Microsoft Office tools of the future. So I think our students want us to help them learn how to use them so they can be successful in their careers using these tools, rather than investing time in holding them back in a certain way. And I guess if I were in a class right now, I would probably be inclined to turn it back on the students: you control the grading scheme as an instructor, and you can make it very costly for a student if they submit an assignment with actual factual errors, things that most likely would have only been produced by a bot.

So I think, alright, if you wanna use the tool... And I don't know; if I were a student, I think it would seem faster, maybe, to start with the bot. One thing I think about a lot is that, as an expert, I've had GPT-4 create lesson plans for classes that I imagine teaching, in things that I know, and it's amazing. Right? Because I can go through it, and eighty percent looks great, and I can see the other twenty percent very quickly, because I'm an expert in that field.

It's something I know very well, so it's delightful. But for a novice, I actually think it might be slower, because they don't know anything yet. Like, I can look at the list of references and see: oh, those four don't exist.

Those are not real authors in this field; I just know that. Or, is that name right? Let me have a look at that one, and I can see immediately. But if I were a student who took the output, then I've gotta go and check everything, because I don't know anything yet. And how do you double-check without those sources? Awful.

Much better to start from ERIC or JSTOR or a reliable source that your university provides, in my opinion, but it won't look that way to them at the start, and then you're kind of down a rabbit hole. And in between, it's so good at summarizing complex issues. Right? Tell me about social-emotional learning: why is it controversial, why is it positive? It summarizes all of those very well. But without the links, like Elizabeth was saying, if you don't have the links to what it's referencing, you don't know about the bias.

Right? And so that's why I think that next step of actually including references is a big deal. But to your point, Sheryl, we've got a couple of questions on this that I love. What about students? Are we getting students' feedback on what tools are being used? And what if a student says they don't wanna use AI, that they're not comfortable with using AI? How do we address those conversations? Well, my personal view, and, you know, I'm not the boss of the MIT faculty, thank god,

that's a hard enough job for the provost, is that I don't advise that any instructor require students to use any tool that's not licensed by the university. Right? We know these aren't free tools; we know the exchange is your data for use of the tool. So I think that applies the same here. And it's tricky,

right? Because people are like, "I use Gmail too," but I'm making a personal choice there. I think it's different if you're an instructor requiring something of your students. If it's not a tool where the student data is protected, I don't think it's right to require use of it. But, again, people do, and that just would be my advice: find an alternative for students who aren't comfortable with that, or an alternative tool that is licensed, though in this case there isn't one. Yeah. Elizabeth, your thoughts around that, bringing students into the conversation? Yeah.

I think students have to be at the table and have to be in the conversation about their thoughts on AI. A lot of our faculty have done a really great job of collecting student feedback on the uses of AI that they have had in their classroom. And the way that we build our tools, which are in front of staff today but will eventually be in front of students, is with constant feedback mechanisms: the ability to like or not like, thumbs up, thumbs down; the ability to flag something that they feel has bias or potentially harmful or dangerous content in it; the ability, at any time, to answer a survey about their experience with the tools. Feedback absolutely needs to flow constantly into all of these technologies.
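
A minimal sketch of the per-response feedback record such mechanisms might capture; the field names are inferred from what Elizabeth lists, not an actual ASU schema.

```python
# Sketch of the per-response feedback record described above: thumbs up/down,
# flags for bias or harmful content, and free-text survey comments.
# Field names are inferred from the talk, not an actual ASU schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal, Optional

@dataclass
class ResponseFeedback:
    response_id: str
    rating: Optional[Literal["thumbs_up", "thumbs_down"]] = None
    flags: list[Literal["bias", "harmful", "dangerous", "incorrect"]] = field(
        default_factory=list
    )
    survey_comment: Optional[str] = None
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a user flags a biased answer and leaves a comment.
fb = ResponseFeedback(
    response_id="resp-123",
    rating="thumbs_down",
    flags=["bias"],
    survey_comment="The answer assumed all students live on campus.",
)
```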

When I think about how we build technologies, that's the thing that we talk about first: how can we create that constant feedback mechanism with something that's so new and so emerging? Yeah. I actually love this next question, if you have some answers at the ready. We've got a couple of folks who've asked about it; their job is to support teachers in the classroom with AI and things like that. How would you recommend they get smart? Are there resources that you like to point people to around AI, either great lists of tools or great best practices, things like that?

I can jump in on that. So I actually just posted two websites, one from our Teaching and Learning Lab and one from the technology services group at the business school, that are sites chiefly for instructors; there's stuff for students on there too. So those are both great sites. And the third thing isn't a link, but if anybody Googles "MIT physics YouTube," they can find a playlist from the GPT event in July. It's not really related to the other two;

I just copied it by accident at the same time. That's great, though. Yeah. We have a course in Canvas that is available to all of our faculty and staff that has a lot of content, pulling from YouTube videos and other places as well, not just our own created content. And I will find the link to this, but Andrew Maynard, who's a professor at Arizona State University, just came out with a really wonderful article.

It walks through kind of a 101 of these technologies and then provides a list of exercises that you can do with ChatGPT to look at things like ethics and bias. I will find that and put a link to it in the chat. I was just going through it yesterday, and it was a really invaluable resource for kind of getting up to speed and then getting really hands-on with the technology, but in a guided way. Yeah.

In most of my presentations, that's one of the things I underscore quite a bit: the collaboration. There are so many new tools that I've been introduced to by my peers that I would not have otherwise captured. These conversations are a great way to share that knowledge with each other, and it makes us all smarter across the board. But are either of you aware of any institutions that have licensed ChatGPT or GPT-4 for students across their institutions? I'm not, at this point, but I don't know if either of you are. I heard that Moderna licensed it for all their staff; they're right next door.

So, they're not students, but... Yeah. I'm not sure. I've heard of folks who are doing that for faculty- and staff-facing uses, but I'm not sure about students. Yeah. The number of companies I know that have banned it exceeds the number of companies that I know are actually providing universal licenses.

Companies like Samsung, which had an employee drop proprietary code into the public-facing ChatGPT interface, things like that. So we see a lot of that across the board. We talk a lot about the classroom; obviously, inherently, that's kind of the focus of this group, you know, working for Instructure.

We are all about teaching and learning. But more broadly, there's one question we had around: are we seeing this impacting admissions, onboarding, financial aid for students? Are we seeing it affect that broader process? And how else across the college or university are these AI tools being leveraged? And again, I know that our world is teaching and learning, generally. Yeah. Yes. So we're seeing the use of AI technologies for things like process efficiency.

Creating that quick first draft in offices across our university: I think that's one space that is maybe not so glamorous, but can really make a big impact for our students down the line, when staff are able to get through some of those more mundane processes much more quickly and spend more time face to face with students, spend more time thinking about those big questions, those big issues, about how we transform our processes to be more student-focused. If folks are spending less time on rote tasks, or tasks where ChatGPT can create a really good first draft that would have taken a human being two or three hours, then they can spend thirty or forty-five minutes improving upon that draft, as Sheryl talked about earlier, using their subject-matter expertise and not having to create something out of nothing. That can really lead to big improvements in the student experience because of those efficiencies. So we're seeing that across our university offices. One of the questions I ran into in the first webinar that we hosted, I think back in January, when we had a high number of attendees, was: what does this mean for librarians? Is this gonna replace librarians? And Rose asked in the Q&A: how do your institutions utilize academic libraries and librarians to support faculty and students in relation to AI? I actually think it's one of those areas where every librarian should make themselves the AI guru.

I think there's so much opportunity. I don't think it will replace librarians; I think it's like a superpower for them, a power-up for them. And I think that's true across the board. But have you seen any of that in practice at your institutions? That's a great question. I gotta go talk to them;

I've been trying to just get a handle on what's happening in the classroom, and that's hard enough. Yeah, and there's a lot of opportunity there. Actually, I do know one thing.

Our president and provost put out a funding opportunity for folks in the community here, and our librarian got one of the grants. So, sorry, I'm posting a lot in the chat today, but the list of those grants is there. Our head librarian got one, so she's working on something related to this; she got the money from the president and the provost.

Chris Bourg is her name. Excellent. Well, we're wrapping up; I think we've got about three minutes. There are a lot of questions that we weren't able to get to, some great conversations around:

is it right to call large language models' and other generative AI's wrong responses "hallucinations"? It's the best term we've come up with, but in some cases, it's just bias-based incorrect answers. And one of the challenges is that we don't really know why, because large language models tend to be a black box. We control the inputs, the prompts; we don't always understand what magic goes on inside before results come out the other end.

So that's one of the biggest challenges. Ryan, I'm sorry, I'm jumping in here. I have a really strong negative reaction to that word, the hallucination word. Yeah.

The train has left the station, and I'll probably lose this battle. But the problems with that word are, one, we need to be careful about anthropomorphizing these tools. It's hard enough already not to say please and thank you and think this is an actual human. And that word describes an involuntary biological process that happens in living creatures. So not

only is it anthropomorphic, but it lets the technology off the hook. These are not hallucinations. These are errors; these are something stronger than errors.

They're just wrong. Yeah. And I think that was the point of the question, really; exactly what you're aligned with.

Yeah. And I've thought about it: is there a better answer? What comes to me first is "lie," but that's a human thing too. There's an intention behind a lie; you have to intend to lie to lie.

And that's, I feel like, another anthropomorphization that we wanna get away from in this space. How about "untruth"? Yeah, I think "untruth." At first, when you said "error," I thought, oh, that kind of gets at it, but in a way it's not an error, because it's what the model was intended to do. It's not an error, in a sense.

But it goes beyond inaccuracy, right? It could be like that, but harsher than "inaccuracy." "Fabrications" is not a bad one; Christian suggests that, and I'm not annoyed with that one. But we are running out of time.

We have about a minute left. I do wanna thank both of our panelists, Sheryl and Elizabeth. We could talk about this all day, and I would be just as excited; your input has been amazing. I think your perspectives are so valuable.

I hope all of our audience found this useful and helpful. A recording will be sent out to all of the registrants, and that'll have, I think, the links from the chat as well, so some great resources that you both shared with us. I really appreciate you doing that today. Thank you so much for joining.

Thanks for all the great questions in the chat. Really appreciate it. Perfect. Thank you, everyone. Bye.