Improving Student Success with Canvas Outcomes at an R1 University


This session shows the training approach used to deploy the Canvas Outcomes feature at an R1 University. The results and experiences of faculty and instructional design staff will be shared. This session seeks to help institutions improve student success by implementing effective assessment practices using Canvas Outcomes.

Video Transcript
Hello, everybody. Thank you, and welcome to my session. Thank you for joining me today for my session on improving student success with Canvas Outcomes at an R1 university.

My name is Swati Romani, and I'm a faculty development specialist at the Center for Teaching and Learning at the University of California, Riverside. Our center for teaching and learning is called XCITE, and the full form is Exploration Center for Innovative Teaching and Engagement. Along with me, I also have our second presenter, Dr. Joe Corey, who's an assistant teaching professor in the Department of Economics at UC Riverside, and he's here to give a faculty perspective. He was part of my project.

He actually experienced it, and he's here to share his experience from the faculty side: how he went through the entire project and how it helped the students. What to expect from this session: I'm going to show the training approach that I used to deploy the Canvas Outcomes feature at my university. I'll also be sharing the results and experiences of faculty and of us at the Center for Teaching and Learning. And the whole goal is to help you with your institutions, when you go out there and implement Canvas Outcomes, so you can succeed with it.

So it's just an idea for you to try as well. Before I get started and share how my experience was at my school, I want to hear from the audience: what is your experience with Canvas Outcomes at your institution? Would anybody like to share? Okay. [Audience member:] So we use outcomes at our program level in our subaccounts, and then also at our root level for our institutional-level outcomes. So we're trying to move to the course level.

Oh, okay. So you went with the top-down approach. Yeah. Okay. Cool.

Cool. Yes. [An audience member describes college-level GE outcome tracking at their institution, which kind of works and kind of doesn't depending on the granularity and centralization of curriculum development; some departments also use it on their own.] Okay. Cool.

Thank you. Anybody else? Yes.

Kind of similar to us as well; I'll be sharing a little bit about that too. Anybody else? Okay. So, Canvas outcomes and assessment. Let me share how it all got started.

First, we just transitioned from Blackboard to Canvas LMS. It's just been a year; we moved to Canvas last year, and this is our first year using Canvas on campus after twenty years of Blackboard, so we have that twenty-year Blackboard tradition. And our provost has strategic goals of improving graduation rates and closing equity gaps.

There are also talks between the leadership and the faculty senate about considering achievement of student learning outcomes as a replacement for course evaluations for faculty. At that time it was in the talks; now there is an okay to go. And it's also one of the things they use for the eFile, the tenure and promotion file. That was one of the important things that came up, and it gave this project a boost and a motivation. So we formed a group of stakeholders. One of them is me; I'm part of the Community Engaged team at XCITE.

XCITE has two teams: one is the Community Engaged team, and the second is the Academic Innovations team. I belong to the Community Engaged team, and I took the lead on this. We also had the director of assessment and evaluation at UC Riverside join as a stakeholder in this group, and we had the entire staff of the Academic Innovations team at XCITE be part of this as well, some of whom are also present in the audience.

The whole goal was to start it as a pilot, because, as I said, Canvas is new. We are new to Canvas, and faculty are new to Canvas. So we wanted to get the whole experience first; we wanted to get started as a pilot.

This was the first year of a five-year plan. Our five-year goal is that we'll hopefully have the entire institution come on board with this. For the first year we weren't that ambitious, but I'll talk more about that in the next few slides. The goal is eventually to merge this with the standardized trainings that we do for the entire Canvas series, but for this particular start, we ran Canvas Outcomes as a whole separate project.

Just in case I run late, I wanted to share this project repository with you up front. You can use this QR code or the link, which is tinyurl.com/ucrcanvasoutcomes. I have shared all my documents there: my slides, some of my training slides, some diagrams, and some checklists, and I'll be going over those in this presentation. It's just for you to have access to when you need it.

So I'm going to move on to the next slide if everybody's okay. You can just take a picture also. Good. Okay. One, two, three.

So this is a chart, and it's also in that repository, so if it's not clear here, you can access it at the link I shared with you. I drew this chart based on the entire experience. As I said, at the top is the provost's strategic goals, then Canvas Outcomes and Assessment, and you see this dotted line to the Canvas Bootcamp series. When I end this presentation, I'm going to show this chart again, and that's where the dotted line to the Canvas Bootcamp series will make more sense.

But right now I just want to continue with what is below that. Below the Canvas Outcomes and Assessment project, on the very left, you see the Canvas Outcomes and Assessment live synchronous trainings. We started off with live trainings, hybrid in person and over Zoom. We didn't want to be very ambitious and go straight to the program and institutional levels; we first started at the course level, got a little experience, then moved on to the program level, and then to the institutional level.

Each faculty member who implemented Canvas Outcomes for at least one quarter in their course received a certificate. Then we moved on to the Canvas asynchronous training, which was self-paced: we designed a self-paced asynchronous course. Part one was a basic one, with introductory concepts of assessment and the full walkthrough of how to create Canvas Outcomes, how to run reports, and how to get your data on student learning achievement. That was part one. Part two is when they submit an actual course map using our course template.

That's part two, and part three is when they actually implement the whole thing, the actual Canvas Outcomes, in their course, share their experiences, and give us their student data. That was part three. Each of these individually earns a digital badge. Eventually, if they finish all three parts, they get a certificate,

which is an Award of Instructional Development. And the last one, which is actually a future phase that we are just in talks about right now, is connecting this outcome data, the student outcomes data, to the institutional research data. We are in touch with the vice chancellor of institutional research, who has all the student demographics data, DFW (D, F, and withdrawal) rates, the gaps in equity, all that institutional data, and we're discussing how to connect it with our outcomes data. That way we get more connection, and we can try to find where the student learning gaps are so we can improve accordingly. So that's the future conversation that is going on right now.
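A future phase like this, joining per-student outcome scores to institutional demographics to surface equity gaps, could be sketched roughly as below. This is only an illustration: the column names (`student_id`, `group`, `outcome_score`) are assumptions, not the actual Canvas export or institutional research schema.

```python
from collections import defaultdict

def mastery_by_group(outcome_rows, demo_rows):
    """Average outcome score per demographic group.

    outcome_rows: dicts with hypothetical keys "student_id" and "outcome_score"
    demo_rows:    dicts with hypothetical keys "student_id" and "group"
    """
    group_of = {r["student_id"]: r["group"] for r in demo_rows}
    scores = defaultdict(list)
    for r in outcome_rows:
        g = group_of.get(r["student_id"])
        if g is not None:  # skip students with no demographic record
            scores[g].append(float(r["outcome_score"]))
    return {g: round(sum(s) / len(s), 2) for g, s in scores.items()}
```

Comparing the per-group averages (or, with the same join, DFW rates) is one simple way to see where learning gaps line up with equity gaps.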

So that's the future phase. These are the three main ideas of the implementation, and I'm going to come back to this chart again at the end. Now, faculty incentives.

A good evidence-based faculty development practice is that when faculty do something, you give them an incentive. So what we have is that they earn a certificate, which we also frame as an Award of Instructional Development to align with the language of the eFile. They get it if they implement Canvas Outcomes throughout the quarter in at least one course. They can do it in two, three, four, five, as many courses as they want, but if they show us at least one, we give them the certificate, or Award of Instructional Development. They can also use this in their CV, on their website, and even share it with their students in the syllabus.

So we defined our training across three levels: course level, program level, and institutional level, each as a separate track. When we started, the initiation was that we spoke to a selective set of faculty we knew, and then we, the group of stakeholders I discussed a couple of slides ago, decided to open the training to the entire campus. So live training workshops on Canvas Outcomes started in fall 2022, and they were available both in person and over Zoom.

As a starting point, we offered the training only at the course level. Marketing happened through emails, the Canvas dashboard area, the XCITE UCR events page, and word of mouth; we knew some other groups and departments and did some connecting that way. Our goal for the first quarter was three pilot participants, because we wanted to see what obstacles we'd face and keep the group small, and we did get them.

We got three pilot participants who implemented Canvas Outcomes in their own courses throughout the fall, and these three faculty sat in the campus-wide training. Those who joined the pilot received multiple one-on-one consultations because of the obstacles they sometimes faced; sometimes they had to adjust their assessments to fit, in order to implement outcomes. I'm going to talk in more detail about what these obstacles were.

So the one-on-one consultations were very much needed, and they happened at least twice a month, sometimes more. That's one of the very important parts: the effort from the instructional design staff was essential to make this happen for faculty. Now, my training approach. The main piece of this was actually very important, at least to me.

What's very important to me is that you can't just bring in the technology alone; you have to make sure you're integrating the technology with your pedagogy and with your academic discipline, the content. So my approach used the Mishra and Koehler TPACK model: we not only show them how to use Canvas Outcomes, but also show them how they can best integrate it into their pedagogy, and show them how things will look when everything is done, in sample courses in their academic disciplines, with real examples of the content itself. There needs to be that whole middle place; that's where the implementation of any technology tool can really be successful. So I'm going to show a brief demo of some of my slides and a little bit of dummy data.

What I'm going to share now is also available in the repository I shared with you. Can everybody see the screen? These are slides from my training itself. I'm not going to spend a lot of time on this, because it's just to give you the main idea of how I implemented the TPACK approach. These are my training slides from the course-level outcomes workshop. I didn't just jump to the how-to-do-outcomes phase itself; I first went into the pedagogy behind it.

What is an outcome? I clarified some common misconceptions, because we often use these terms interchangeably: what is the difference between course goals versus course objectives versus course outcomes? I gave them examples, and I made it very clear that what you put inside the Canvas Outcomes feature is what's called student learning outcomes, which is where you use measurable verbs. Then I went over what you can do with Canvas and the benefit of doing this. All of that was pedagogy; that's one part of TPACK.

The second part was the hands-on training. My training wasn't just showing how to use these outcomes; it was hands-on, in parallel, play. With the help of my team members, we gave them a sandbox course to play with, with dummy data, assessments, and some dummy students, when they came to our hands-on workshop. So they got that, and they were actually doing it right there in front of us with the dummy data.

As I showed them step by step, they did it step by step as well. For example, I didn't just jump straight in; I segmented my training into these steps. Again, these slides are provided in the link I shared with you. I had these steps, and I also had this checklist, which I checked off as I proceeded: this is how you do that.

This is how you do that, and so on. This checklist was available to them as well, to keep track, because Outcomes can be difficult for some faculty depending on their experience with technology. The segmentation really helped with that. So that's the technology piece. Now, for the content, if you remember the TPACK model: I had some dummy examples, which I worked on before the hands-on workshops, with some dummy students in two dummy demonstration courses I came up with.

I played around with assessments, I played around with quizzes, and I aligned them with some dummy outcomes. In a sample course, I graded, I believe it was art history, and I showed them: once they are done with outcomes across the entire course, this is what the outcomes are going to look like at the end.

Then I went into a little of the pedagogy: how this data can help once you've graded all your students with these outcomes, when your quizzes, your assignments, everything is done with outcomes and with the skills your students are achieving. This is what you see, and there's a lot more they can do with it. That's what I explained to them during the workshop. This view shows all the averages; you can see which student is struggling out here.

Maybe some students are doing really well on one assessment and not so well on another. Is it the question? Is it the learning activities? Are you preparing them well? Or is it your facilitation? Or is it the student themselves? All the pedagogy behind assessment and closing the loop went into this part of the workshop. But yes, this is how the actual workshop training happened with the TPACK model. So now I'm going to go back to my slides and hold this back a little bit. Okay.
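The kind of reading described here, scanning the mastery averages for students who are struggling, can be mimicked with a tiny script. This is a sketch over made-up data; the scale and the threshold of 3 are assumptions standing in for whatever mastery scale a course actually uses.

```python
def flag_struggling(results, threshold=3.0):
    """results: {student: {outcome: score}}.

    Returns {student: (average, weakest_outcome)} for every student
    whose average outcome score falls below the mastery threshold."""
    flagged = {}
    for student, outcomes in results.items():
        avg = sum(outcomes.values()) / len(outcomes)
        if avg < threshold:
            weakest = min(outcomes, key=outcomes.get)  # lowest-scoring outcome
            flagged[student] = (round(avg, 2), weakest)
    return flagged
```

The flagged list answers the first half of the closing-the-loop questions (who is struggling, and on which outcome); the why (question, activity, facilitation) is the pedagogical follow-up.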

So, the three faculty who participated in last year's 2022-2023 pilot: one from statistics, one from economics, and one from chemistry. Faculty one, who taught a statistics course, implemented outcomes on all high-stakes assessments by aligning outcomes to rubrics. This faculty did not use quizzes, as those were not high-stakes for them. Every faculty member had their own story.

Now, the difficulties we faced. As I said, there were obstacles, and that's where the one-on-one consultations really helped. For this faculty, the thing was that this was a statistics course, and they were used to giving exams on paper. With paper, they have to upload the paper-based exams and then grade outcomes in Canvas.

That's the hard part, and this faculty didn't want to do that much work, so they decided to implement outcomes just for signature assessments. And this faculty had a large number of students: two hundred in the course.

But this faculty had a good experience overall and has continued to implement Canvas Outcomes in their courses moving forward. Faculty two is actually here, Dr. Corey, and I'm going to pass it over to him.

Thanks, Swati. My name is Joe Corey, faculty number two on the slides, number one in your hearts. So I teach economics, and Swati came to me and asked me to start implementing these outcomes.

I mostly teach these big five-hundred-fifty-person classes, and I don't like implementing anything right away in a class that size. Fortunately, the first quarter she came to me, I was also teaching a small honors course of fifteen students. If I want to roll something out, I want to roll it out to a smaller class first, so if something goes wrong, I have a dozen students emailing me instead of five hundred.

So we started with that honors course, which was just a series of reflection papers. Basically, I attached a grading rubric to those reflection papers, and the outcomes were tagged via those rubrics. Then the next quarter, that winter quarter, when I taught my five-hundred-fifty-person introduction to microeconomics class, I started tying these outcomes to performance on the module quizzes. So I have both module learning outcomes and then my overall seven course learning outcomes. For example, I teach about supply and demand a lot.

A common module outcome would be something like interpreting the law of demand and applying it to a market. Whereas my course learning outcome would be something like identifying a market equilibrium and analyzing how economic changes affect that equilibrium. So on a module quiz, there might be a question bank of ten questions about the law of demand, and a student randomly gets one of them; they complete these online. That question bank would be tied to both the module learning outcome on demand and the course learning outcome on identifying equilibrium and showing how changes affect it.
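The alignment described here, one question bank feeding both a module-level outcome and a course-level outcome, with each student drawing one random question, amounts to a small data structure. A sketch with hypothetical names (the bank, outcomes, and question IDs are made up for illustration):

```python
import random

# Hypothetical bank: ten questions on the law of demand, aligned to one
# module-level outcome and one course-level outcome at the same time.
banks = {
    "law_of_demand": {
        "module_outcome": "Interpret the law of demand",
        "course_outcome": "Identify market equilibrium and analyze changes",
        "questions": [f"q{i}" for i in range(1, 11)],
    },
}

def draw_quiz(banks, rng=random):
    """Each student gets one randomly drawn question per bank; the
    result counts toward BOTH outcomes the bank is aligned to."""
    return {name: rng.choice(bank["questions"]) for name, bank in banks.items()}
```

Scoring the drawn question then updates two running tallies, per module and per course, which is exactly the dual view described above.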

So I can see how they're performing in each module and then how they're performing on the overall course objectives. And I will say this: I really appreciated the help I got from the XCITE team, Swati, and the staff there. That one-on-one faculty training helped a lot as I was navigating the ins and outs of using these objectives and outcomes. So, yeah, I'd recommend that to anybody who wants to implement this at their university.

But just make sure you're there for your faculty, because when I ran into trouble, I could email somebody: "Hey, can we meet on Monday? Because I don't know what I'm doing anymore." And they'd make time and kind of bail me out. I'll say I definitely had a good experience implementing these outcomes in both of these courses. Great. Thank you, Dr. Corey.

The third faculty member taught a chemistry course with two hundred seventy students and only used quizzes. So, again, the obstacle we faced with this particular faculty was that they only used quizzes, and classic quizzes at that. New Quizzes came along afterwards, and we were still getting used to it.

They had multiple quizzes in the classic quiz format, and for outcomes to be aligned to quizzes, you have to create question banks. We trained the faculty member to create question banks and align the outcomes, but then they forgot that they have to pull the questions from the question banks. So they did all the work, but there was an alignment gap, because they were creating quizzes without pulling the questions from the question banks. We had to redo many quizzes with this faculty to get it working. Again, that's where the one-on-one consultation came in to good effect.

Finally, faculty feedback. We had a review meeting where all the faculty and all the stakeholders were present. What they said: they suggested starting the design and integration earlier rather than later. They also said it is difficult to set up for a large class, especially if you have paper exams. And they said the rubric needs to be set up properly with clear details and directions, which connects to interrater reliability, especially if you have co-instructors or teaching assistants grading outcomes.

That's where it's very important to be clear about your rubric criteria and how they should be graded. They said that the outcomes are very valuable for departments to prove students are achieving the outcomes, and they found the one-on-one trainings useful. One or two also commented on the user experience: they wished they didn't have to create outcomes and could instead just pull outcomes that are already available from somewhere, a find-outcomes feature, something like that. And they did say that tracking outcomes can help track and close equity gaps. So what did we learn from this pilot at the course level? A very important struggle I saw was faculty getting confused between traditional grade scores and outcomes tracking.

Traditional grade scores are just grades, but when it comes to outcomes, it's about finding out whether your student has that skill set, that measurable verb we identified. I saw faculty trying again and again to match the two, and that was a struggle. So I felt that when I run the training again, I need to be clearer on that, and maybe even bring in some people from the assessment office to explain it further.
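The distinction can be made concrete with numbers: a student can earn a solid percentage grade while still not demonstrating mastery of a specific outcome. A toy illustration (the 0-4 scale and the cutoff of 3 are assumptions, not Canvas defaults):

```python
def grade_vs_mastery(earned, possible, outcome_scores, cutoff=3):
    """Contrast a traditional percentage grade with per-outcome mastery.

    outcome_scores: {outcome: score on an assumed 0-4 mastery scale}."""
    pct = round(100 * earned / possible, 1)
    mastered = sorted(o for o, s in outcome_scores.items() if s >= cutoff)
    not_yet = sorted(o for o, s in outcome_scores.items() if s < cutoff)
    return pct, mastered, not_yet
```

An 85% is a fine grade, yet the outcomes view can still say a specific measurable skill has not been demonstrated; that mismatch is exactly what faculty kept running into.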

That was the biggest struggle, and I think I'm still facing it, but hopefully I'll get there. Also, when we started, we had the new format of Canvas Outcomes. For those of you who are aware, the new format requires standardized mastery scales for all the outcomes you create, whereas in the old format you can have separate mastery scales for each outcome. That was actually a struggle; we didn't realize it at first.

So from the next quarter, winter quarter, onwards, we worked with ITS to move to the old format of Canvas Outcomes. That way faculty could set different mastery levels and scaling, where mastery is achieved, for each outcome. That was very, very helpful. The hands-on training went very well, and we also learned how to analyze gaps in our Canvas training components; remember that dotted line to the Canvas Bootcamp series.

As I said, in the chemistry course, that faculty member wasn't pulling the quiz questions from the banks, and we realized we hadn't covered that; we needed to be stronger on that piece when we trained in the general Canvas boot camp, so we worked to make a change there. We also noticed very high attendance by faculty during the training but somewhat less follow-through in the actual implementation, because they did say that this is a lot of work. So we then moved on to the program and institutional levels. We had two faculty on this overall, with one faculty member from Bioengineering at the program level.

We decided to have the entire Bioengineering program do it, but it's very hard for all faculty of that program to be in one meeting. So we went with a train-the-trainer model: we trained somebody who was in charge of ABET accreditation, and this person is in charge of training the rest of the faculty to track outcomes for signature assessments and submit them for accreditation. And there were two faculty at the institutional level, from Education and Bioengineering; the Bioengineering one was the same person, who implemented both the program level and the institutional level.

And again, this was a pilot at that level. So overall, we did reach the total of five faculty. This particular group was given only one-on-one training and consultation, because some things had to be tweaked for their program. And because outcomes had to be created at the admin level,

we did place some restrictions on their permissions, and we created these outcomes for them at the admin or account level for their program. We also created the rubrics for them. So we did that piece, and we only trained them on how to pull the outcomes, how to grade, and how to work with assignments and rubrics. The results gave valuable data at the account or admin level, and faculty had a positive experience. So, key findings for all levels: the faculty testimonials.

I also have a document in the link I shared with you that has the faculty testimonials in detail, describing the experiences faculty had. They said it was easy to track student learning class-wide, program-wide, and institution-wide. Most of them said they were able to track struggling students and connect with them to provide further support. It was very useful for program-level accreditation, specifically for Bioengineering, since they had been working in manual Excel sheets. In this case, they get all the data in Canvas, export it to a CSV file, filter it, and submit what they need to submit for accreditation.
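The export-and-filter step can be done with a few lines of standard-library code. This is a sketch: the column names below (`student`, `outcome`, `score`) and the "ABET" prefix are placeholders, not the real headers of a Canvas outcome report.

```python
import csv
import io

def filter_for_accreditation(csv_text, prefix):
    """Keep only rows whose (hypothetical) 'outcome' column matches the
    program's accreditation prefix, e.g. ABET-tagged outcomes."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["outcome"].startswith(prefix)]
```

The same pattern replaces the manual Excel workflow: export once from Canvas, filter programmatically, and hand the remaining rows to the accreditation report.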

The whole process also made faculty redesign and rethink their assessments, and I feel that led to some UDL, Universal Design for Learning, and inclusivity pieces as well. And, as I said, there was the important piece about assessment and closing the loop: what they saw in their Learning Mastery Gradebook about all their students, when they found the average, the median, the overall percentage of students who were struggling, where the outcomes were achieved and where they were not. Is it the assessment? Is it the question itself? Is it the faculty, how I am teaching? Is it the learning activity? Or is it some institutional support that we are missing, like writing center support I need to point them toward? It identified all of those pieces.

This whole project helped the faculty with that, which led to improved assessments, course design, and facilitation. And I would like to pass it over to Dr. Corey to give his perspective. Yeah, so I really enjoyed being a part of this pilot.

I learned a lot about my course. Again, it was a lot of work to set up, making sure all my questions are in banks aligned to both a module learning outcome and a course learning outcome. But I ended up getting a lot of useful data feedback from this class. For example, in economics there are some basic ways of thinking, like people respond to incentives: if the benefit of an action goes up, people are more likely to undertake that action.

Or from trade: if someone trades two boxes of chocolates for one pair of running shoes, which of the following is true? Both sides win from that exchange. That's what we call the economic way of thinking. And then there's the more technical stuff, like being able to identify the amount of profit a firm is making by looking at a graph. It's always been my impression teaching this class that students struggle more with the technical stuff than with the basic economic-way-of-thinking stuff, so I had designed more practice questions for the technical material. Then, when I looked at these outcomes, to my surprise, they were scoring a lot higher on the technical stuff than on the basic economic-way-of-thinking stuff.

That led me to start changing the way I incorporate practice questions; I now include more practice questions focused on the basic economic way of thinking. So, again, I could see where my students were excelling better than I otherwise would have known, and where they weren't, and I started to address the weak spots by incorporating more practice questions, more chances at certain assignments, and things like that.

So, again, a lot of valuable feedback from these assessments. Another good thing is that our university is putting less weight on teacher evaluations these days, because there has been demonstrated bias when it comes to those evaluations, and they're not sure how much stock to put into them when only ten to twenty percent of the class actually completes them.

These learning outcomes are something we can put in our file to show that our classes are effectively helping students achieve these objectives. So this is something I'm definitely going to start including in my file. Of course, when you're teaching a fifteen-person honors class, a small honors class, they usually do well, so when you see a hundred percent of students achieving the objectives, it certainly makes me look good.

There was a bit more of a reality check when I got to that five-hundred-fifty-person intro class, with freshmen and sophomores, for some of whom it's their first course ever in college. But again, it showed what they were achieving, and it's something I can definitely use now to demonstrate my teaching effectiveness, in addition to course evaluations alone. Thank you, Dr. Corey. So that was the live training piece.

Since we have the time, I just want to go over the asynchronous training that we worked on. This is going to launch this fall; the course has been created, with just minor updates needed. Not all faculty can show up to the live trainings, whether over Zoom or in person; they have class conflicts or other meetings.

We also wanted to scale. The goal is to start scaling, since this is a five-year effort: we want to do as much as we can to bring the entire institution on board and track student outcomes using the Canvas Outcomes feature. So we decided to build the Canvas Outcomes and Assessment self-paced asynchronous course, which they can participate in at any time of day and complete on their own. This is where part one, part two, and part three come in; this course itself is just part one, which covers the basic skills to design assessments and close the loop, and how to use the Canvas Outcomes feature to effectively track student learning outcomes. And I do have the time to go over this course.

Unfortunately, I cannot give access to this course in the link I shared with you, but I'm happy to just go over this piece here. That said, I think I may need to log in. I don't know. It's working. Okay.

Right there. Perfect. So the home page is not showing up, but that's fine; I'm just going to go to Modules and jump in. Again, the Mishra and Koehler TPACK model was also applied for this asynchronous training.

So we have the introduction. It's six modules, plus one introductory module, and we will be using the lock feature in Canvas. We are also tracking our own outcomes in this course: because we are showing how to use outcomes, we track our own XCITE and institutional learning outcomes in this course. The first module is just general.

What is assessment? The second is how to create a learning outcome, so it has the pedagogy, and then every module had a video from XCITE staff on how to do it in Canvas. It was actually demonstrated by XCITE staff, and that adds to the motivation, rather than showing something from Google or elsewhere. And we have some assessments for them to complete self-paced on their own. The next module was constructive alignment, how to map your course: we show all the pedagogy, we share the template, and then we also show the actual alignment piece, how to do it in Canvas.

We also value DEI, diversity, equity, and inclusion, and it's one of our strategic goals, so we have a separate module on that. Then rubrics and interrater reliability, since grading outcomes is very important here. And because our campus has a large number of students, courses have lots of TAs, so interrater reliability makes a lot of sense here.

We need to train our faculty on that. And then finally, how to evaluate your students and how to actually track your outcomes; this is where the explanation of how to differentiate traditional grade scores versus outcomes tracking comes in. And then lastly, they submit a survey. Once they submit the survey, they receive the digital badge for this particular course.
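As a toy illustration of that grades-versus-outcomes distinction (assuming a 4-point rubric mastery scale with a threshold of 3.0, which is an assumption for this sketch, not the presenter's actual setup): a student can earn a strong overall percentage while still missing mastery on one tracked outcome.

```python
def percent_grade(points_earned, points_possible):
    """Traditional grading: one aggregate percentage across everything."""
    return 100 * points_earned / points_possible

def outcome_mastered(rubric_scores, mastery_threshold=3.0):
    """Outcomes tracking: look only at rubric scores tied to one outcome.

    The 4-point scale and 3.0 threshold are assumed values for illustration.
    """
    avg = sum(rubric_scores) / len(rubric_scores)
    return avg >= mastery_threshold

# A solid overall grade...
grade = percent_grade(88, 100)                 # 88.0
# ...can coexist with a missed outcome (average 2.5 < 3.0).
mastered = outcome_mastered([2.5, 2.0, 3.0])   # False
```

The point of the contrast is that the Learning Mastery Gradebook answers a different question than the traditional gradebook: not "how many points did the student earn?" but "did the student demonstrate this specific outcome?"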

And if they are even more ambitious and submit a course map, they get a second digital badge for part two. If they are more ambitious still and show us that they have implemented this in their own Canvas course instance, and they share the link to the course and we see the Learning Mastery Gradebook, that completes part three and earns the part three digital badge as well. If they finish all the parts, they get a certificate. Okay. So that's that. I'm going to go back to my slides; we always run a little short on time.

Future phases, future steps, what we are working on for the future. Again, to really achieve the Provost's strategic goals, which are to improve graduation rates and to close equity gaps, it's very important that we connect this outcomes data for each student to the actual demographics, the DFW rates, all the work that we do in Power BI, basically all the institutional data. Connecting it to the outcomes data gives us a broader picture of which groups of students are not achieving the learning outcomes and where we can close learning gaps. This is in progress. And the future plan, since as I said this is a five-year plan, is to have the entire institution's faculty on board.
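A minimal sketch of that data-merging step, assuming access to the Canvas REST API's outcome-results endpoint; the demographics table, its field names, and the join key are hypothetical stand-ins for whatever an institution exports from its data warehouse into Power BI.

```python
import json
import urllib.request

def fetch_outcome_results(base_url, course_id, token):
    """Pull outcome results for one course from the Canvas REST API.

    Endpoint: GET /api/v1/courses/:course_id/outcome_results
    (base_url and token come from your own Canvas instance).
    """
    req = urllib.request.Request(
        f"{base_url}/api/v1/courses/{course_id}/outcome_results?per_page=100",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["outcome_results"]

def merge_with_demographics(results, demographics):
    """Join each outcome score to institutional data by Canvas user id.

    `demographics` stands in for the warehouse extract (demographic
    fields, DFW rates, etc.) keyed by the same user id.
    """
    merged = []
    for r in results:
        uid = r["links"]["user"]
        merged.append({
            "user": uid,
            "outcome": r["links"]["learning_outcome"],
            "score": r["score"],
            **demographics.get(uid, {}),
        })
    return merged
```

Grouping the merged rows by a demographic field then shows which groups of students are falling short on which outcomes, which is the equity-gap picture described above.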

Now that we have learned so much, we plan to go out there and do more large-scale work. We're also trying to get stipends, requesting funding to give to faculty as an additional incentive. And again, this is the chart I keep coming back to, and I provided the link to the repository I've shared with you. That's where the dotted line is for the Canvas boot camp series.

That's because I think that piece is a prerequisite. It's really important that they know how to use Canvas before jumping into the actual Canvas Outcomes piece, which is the left piece that you see. Eventually, we plan to merge it with the Canvas boot camp series, because that will go faster if you really want to move at large scale. Any questions?

Yes. Well, again, it leads to alignment, and what you're talking about is program alignment. Eventually, the case should be that a department chair or a program director creates program-level outcomes and then connects them with all the courses at the course level, and identifies signature assessments in particular courses that the faculty want to track for accreditation.
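A minimal sketch of that alignment chain as data; every program, course, and assessment name here is invented for illustration, not taken from the presenters' actual setup.

```python
# Hypothetical alignment map: program outcome -> course -> signature assessment.
ALIGNMENT = {
    "ECON-PLO-01: Apply microeconomic models": {
        "ECON 101 (Intro to Micro)": "Midterm 2, free-response question",
        "ECON 103 (Intermediate Micro)": "Final project",
    },
}

def courses_covering(program_outcome):
    """List the courses whose signature assessments feed one program outcome."""
    return sorted(ALIGNMENT.get(program_outcome, {}))
```

In Canvas terms, the program outcome would live at the sub-account level and each signature assessment's rubric would be tied to it, so mastery rolls up from the course into the program view.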

So I think it starts with program outcomes. Program outcomes need to be aligned with course outcomes, course outcomes need to be aligned with modular outcomes, and all of this needs to be aligned with the assignments and the learning activities that we do in the program. So I think it's constructive alignment. When it comes to actual integration in Canvas, it all depends on the way the department chair or the program director has designed their program. It's up to them what outcomes they want to track, and which particular course has a signature assessment that is aligned to their program outcome. So that's the connection.

Doctor Corey, do you have anything to say on that? Well, at least at UCR, it kind of depends on the department. In the economics department, we have micro outcomes for microeconomics, and we've got macroeconomics outcomes.

We've got statistical or econometric outcomes, and then we have research and writing outcomes. So my intro to micro class has a series of outcomes that students are supposed to achieve that relate to the micro outcomes of the department. And at the department level, they have levels: the outcome is introduced, the outcome is practiced, and the outcome is demonstrated. Right. So in my intro to micro class, the outcome needs to be introduced and practiced.

And then in the intermediate class, the outcome needs to be demonstrated. Right? So that's kind of how my course outcomes relate to those program outcomes. Again, that's the way it's set up in the economics department, but it would be set up differently depending on what other departments are doing. Something like that.

Any other questions? Yes. It's in the works. I can get back to you, but we aren't completely done; we have to do minor updates and add some videos. I'm open to keeping it generic, maybe not the videos, but I think I'll keep an outline in Canvas Commons.

Yes, like a timeline. In faculty development, we call it a timeline, a faculty development timeline, instead of a syllabus. Yes. Yes.

I'll add that to the generic template. Any other questions? If you have any questions about the faculty experience... How familiar were you and the rest of the faculty with learning outcomes in general? I'm an instructional designer, and when I talk with faculty, especially older faculty, they have no idea what a learning outcome is. They know what a course goal is, they know what a teaching objective is, but as far as learning outcomes go, they have no idea, and I'm really hitting roadblocks trying to teach that. Yeah.

So it is not easy. The first confusion, as I said at the start, is that there needs to be clarification on whether it's course goals, course objectives, or course outcomes that need to go in here. And different departments at our school use different terminology at times. Maybe the school of humanities is using the term course objectives instead of course outcomes, economics is using course outcomes, and somebody else is using course goals.

So that's one thing we are struggling with, and I just make sure that, regardless of the terminology they use, whatever they put in Canvas is the measurable one, which is what actually takes the skill. So that's one struggle. The second is, yes, as you said, especially with faculty who have been at the school for years, getting used to tools and technology did take time. But I think we did give our TA support: at XCITE, the center for teaching and learning, we also have grad student support.

And we did work with them during the one-on-one consultations, and that's where that piece took time. So one-on-one consultation is the key to that, along with constant follow-ups and motivation. That's how I worked with them. But I was fortunate enough to have worked with XCITE before, so I was familiar with course outcomes and how to write them using Bloom's taxonomy and things like that. I'm also a professor of teaching in economics, which means that my job is focused more on the teaching side than the research side.

If you were to go to a research professor who cares much more about publishing in high-level journals than about the quality of learning outcomes in a class, pushback would be more likely. I just have a question for those academic units that really need external accreditation: is there a way to... Yes. That's at the admin level.

The account-level work. Yes. Yes. Did y'all have a specific naming syntax that you use to name outcomes? Yes. That's actually a very good question.

So there's no standardization, but we did come up with a naming convention. We gave faculty free rein to name things however they want at the course level. But when it came to the program and institutional levels, we have a specific naming convention, and we use a certain four-letter term with the particular department name, or the school or major name. Yeah. Yes.

But at the program and institutional levels, we did have the short form of the outcome and the program name. Great. Good question. Thank you for asking that. Yeah.
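The exact convention isn't spelled out in the talk, so as a sketch only: a hypothetical validator for names built from a short program code plus a numbered outcome, which is the general shape described (the real pattern at the presenters' institution may differ).

```python
import re

# Hypothetical pattern: a 2-4 letter program code, "PLO", and a two-digit
# number, e.g. "ECON-PLO-01". All details here are invented for illustration.
OUTCOME_NAME = re.compile(r"^[A-Z]{2,4}-PLO-\d{2}$")

def is_valid_outcome_name(name):
    """Check a proposed outcome name against the assumed convention."""
    return bool(OUTCOME_NAME.match(name))
```

A check like this is only needed at the program and institutional (sub-account and root-account) levels, since course-level naming was left to faculty.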

We did spend a good two meetings on that. Any other questions? Okay. Well, feel free to reach out to me if you have further questions, and let me know if you have any questions on the project repository I shared. Other than that, thank you for coming. I hope you enjoyed my session. Thank you.