Leveraging an LTI Assessment Dashboard for an Enhanced User Experience

ARU built an Assessment Dashboard LTI to provide Course Leaders with the tools to manage and have oversight of their assessment tasks (including allocating named Markers, Moderators, and External Examiners), with notifications to keep them informed. Staff involved in assessing have individualized Dashboards to access and manage their assessment allocations.

Video Transcript
My name is James Truman, an academic by trade. I've been in the higher ed business for about twenty years. I started off in psychiatric nursing, to be fair, and then got into technology enhanced assessment probably about thirteen, fourteen years ago. And I currently work in a center for learning and teaching in a university; I've been doing that for about twelve years, I'd say. I'll hand over to Stephen.

Hi. My name's Stephen Polter. I'm a senior solutions developer with Anglia Ruskin University, and I've been working on Canvas integrations for the last six years now, I believe. Yeah, I reckon so.

Just very briefly, my role at the university is academic lead for assessment. So, basically, I take a central role looking after assessment, and the way things are at the moment, a lot of that is technology based, which is really what relates to this particular session. So, a little bit about us, or more about the university itself, because we appreciate we've come from over the pond, and thank you very much for taking a risk on some guys from the UK. We hope you get something out of the session. We appreciate the markets and the models can be different.

So bear with us on that one. There will be some terminology things going on, and possibly practice issues that may not a hundred percent align with what you're used to. So, you know, forgive us if it does leave you asking why. Just, you know, roll with it in that respect.

We'll look at what led up to the build of the assessment dashboard, why we got there, and the sort of work we were trying to achieve in general terms. Then we're gonna do a bit of a whistle-stop tour through what the assessment dashboard is, and that's pretty much a show and tell. We've got a range of slides, some of them static, some of them a little bit more moving, and we're just gonna try and get through it. What we're gonna try and do is reiterate what the workflow is all about, so that you can see it fundamentally. But if there are any questions that arise from that, I'm here till Saturday.

So feel free to, you know, sit down with me and ask any questions you want. He leaves on Friday, so if you've got a technical question, pin him down first. Thanks. And on that note, just a very quick show of hands, and I'm gonna be really simple about it.

I know that other presenters have asked far more technically and granularly. Show of hands: anyone who's here from more of a technical background? Whoa. That's a lot. That is a lot.

And now, anyone who's here more from a teaching perspective? Oh, that's a smattering of hands. Okay, great. Well, we've got some teachers in the room, so I feel much blessed now.

Alright, over to you. So, a little bit more about us as a university: we were founded back in eighteen fifty-eight as the Cambridge School of Art. Since then, we've grown a little bit and spread out. We now have four main campuses across the east of England, in Peterborough, Cambridge, Chelmsford, and London, and I think now we've got close to forty-four thousand students from rather a lot of different countries.

We've been using Canvas since twenty seventeen, and we now have over a hundred and forty thousand users. We don't seem to take them out of our system; we're happy to put them in, but we seem a little bit more reluctant to take them out. So we've got a large number of course enrollments.

And a large number of courses. It's all automated from our student record system, using a variety of integrations that have developed a long way since we first started. The journey that we went on really started with an organization called Jisc in the UK. They're effectively a research-oriented organization that supports higher education institutions, and they did a bit of work back in twenty twelve, which is roughly when we got involved with them, around the electronic management of assessment. And fundamentally, what they were finding was that the data exists in university systems but is being used extremely poorly, and that there are learning management systems out there, but many of them don't actually fit the requirements of most universities. And, you know, there have been various talks about why don't universities all get together and build their own; I'll leave that debate for another time.

But, effectively, what we did with Jisc is that universities in the UK got together. We came up with workflows. We came up with basic models. We took what universities said, where they all described thirty different ways of marking and moderating and all these sorts of things, and we broke that down and summarized it into what is actually probably four models, and that's all you really need to worry about.

So if an LMS can produce a platform that supports four models, you can pretty much sell that anywhere. That was kind of the work that we got involved in. We really flew with this, because what we wanted to do was grow the work that Jisc undertook with us into a model of automation, using data that sits in our student record system, or SIS, depending on however you wanna think about it. So the first thing that we did was build configuration tools in our student record system for the academics, the instructors, however you wanna use terminology to describe them, which effectively gave them a seven-question drop-down. And what that did is gave them the opportunity to preconfigure what Canvas was gonna create in terms of assessment.

So we set down certain descriptives: what type of submission method, etcetera. A lot of this stuff is in Canvas, but we were doing it outside of Canvas, because what we wanted to do was set up a system whereby, once that data was recorded in a background database table, we could then fire that information at Canvas programmatically whenever there was a required change. So it might well be that a student got a new due date, and that required a separate assignment because of certain rules that were being applied. So, therefore, we got the new due date, we got the record for the student, and then we fired the same configuration at Canvas on the fly, based on the requirements that had been preconfigured by the academic.
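To make that concrete, here is a minimal sketch in C# (the team's stated back-end language) of what firing one preconfigured assessment at Canvas could look like, using the standard Canvas REST API create-assignment endpoint. The AssessmentConfig record is an illustrative stand-in for a row in the background configuration table, not the actual schema.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Illustrative stand-in for a row in the background configuration table.
public record AssessmentConfig(string Name, string SubmissionType, DateTime DueAt);

public class CanvasAssignmentCreator
{
    private readonly HttpClient _http;

    public CanvasAssignmentCreator(string canvasBaseUrl, string apiToken)
    {
        _http = new HttpClient { BaseAddress = new Uri(canvasBaseUrl) };
        _http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiToken);
    }

    // Fires one preconfigured assessment at Canvas as a new assignment.
    public async Task CreateAssignmentAsync(long courseId, AssessmentConfig cfg)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["assignment[name]"] = cfg.Name,
            ["assignment[submission_types][]"] = cfg.SubmissionType, // e.g. "online_upload"
            ["assignment[due_at]"] = cfg.DueAt.ToUniversalTime().ToString("o"),
            ["assignment[published]"] = "false" // publishing stays with the academic
        });

        var response = await _http.PostAsync(
            $"/api/v1/courses/{courseId}/assignments", form);
        response.EnsureSuccessStatusCode();
    }
}
```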

And what you're just seeing here is that wherever Turnitin (a plagiarism service; we use Turnitin) was preconfigured, we would automatically create the draft assignment where that was relevant, and likewise when there was a resit, a reassessment, whatever terminology you want to use or do use; if that happened, then we would just create it. If the student got a further date (we call them extensions in the UK), effectively we then just removed them from their original assessment and fired them into the extension. So it was all just being fired programmatically. And because we were doing it programmatically, what we also then did is started shutting down areas of Canvas, to make sure that the academic who was managing this didn't change it.

The top red box, for example, was the assessment description. We don't use that; it's not relevant to our system, and we're not reading it. We're using IDs in the background. But what we wanted to do was also create a system of consistency, so that students saw assessments, wherever they were in the institutional framework, wherever they were in our ecosystem, in exactly the same way.

So if they moved from one faculty organization to another, the assessment was presented to them in an identical format. We left certain areas open, such as the green box there, and others we locked down. What we were doing was effectively firing information from our student record system that was being dumped into Canvas, and then we didn't allow academics to change that. We then created a whole range of different notifications.

This is just two of them. The top one there, for example, is where an assessment is sitting in Canvas but hasn't been published. We would then let them know that we've got an assessment sitting there, there's a due date coming up, and students need access to it.

It's probably not been published because they haven't finished their little bit, the green box. So we're giving them a bit of a kick up the bum to get on and do it. The other one there is where a student has been given an extension, so there's a different date sitting there. We were effectively looking at the things that Canvas did, and it does do an awful lot of stuff extremely well.

And then we just looked at the holes and thought, how can we fill those holes? We also then introduced a range of notifications for students. I know that at previous InstructureCons they've talked about nudges, and we really liked the term nudges, so we stole it, and we call our emails to our students nudges. But basically, they're very much basic, data-driven nudges. So if a student's got a submission date and it's coming up, or they've missed it...

...we send them a range of different emails based on various conditions, and we also send them an email receipt. That was largely because we'd used Turnitin, and Turnitin was firing receipts at them by email; students were used to that, and they wanted to continue it. And then, effectively, because Jisc's work was about end-to-end management of assessment, we also then programmatically shared the results back into our student record system.
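As a flavor of how simple those conditions can be, here's an illustrative sketch of the nudge selection; the categories and thresholds below are our own assumptions for the example, not the actual production rules.

```csharp
using System;

// Which reminder email, if any, a student should get for one assessment.
// The nudge categories and the three-day threshold are assumptions.
public enum Nudge { None, DueSoon, DueToday, Missed }

public static class NudgeRules
{
    public static Nudge Select(DateTime dueAt, bool hasSubmitted, DateTime now)
    {
        if (hasSubmitted) return Nudge.None;
        if (now > dueAt) return Nudge.Missed;                   // past the deadline
        if (now.Date == dueAt.Date) return Nudge.DueToday;      // due today
        if ((dueAt - now).TotalDays <= 3) return Nudge.DueSoon; // coming up
        return Nudge.None;
    }
}
```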

It's amazing, yeah. Alright. So we implemented the main assessments into Canvas from the student record system back in twenty twenty-one. And since then, we seem to have fired an awful lot of information into Canvas.

It takes a lot of processing power overnight to maintain all of the records and make sure they're up to date. Sometimes the information coming from the student record system gets filled in last minute, so it's important that we can get everything in on time for the students to be able to submit, because if their assignment isn't there when they're ready to submit, they tend to get a little bit upset. That, I think, is back to you, James. So this is the stage that led to the dashboard.

And basically, once we'd released all of that work and people were using it, and generally they were very happy with it, then, as is normal in any institution, a whole bunch of questions were raised. This is a sample of the questions that we were getting, and they effectively drove what we ended up doing with the assessment dashboard. Because whilst our programmatic system was working well, and it was managing the student experience pretty well, what was actually missing was the academic's experience in terms of accessing those submissions, marking them, knowing what to mark, and knowing where they were. Sometimes we've got what I'm gonna call a module leader; I know that there are modules in Canvas, and that's a completely different concept.

But, effectively, a module leader is a course leader: the academic who's responsible for that particular period of learning. And they could have a wide range of modules across a wide range of different students. They could be dealing with a module that has twelve students on it and another one which has nine hundred students on it, and keeping track of that, and knowing what they need to do, was a challenge for them. And there was a whole range of challenges which they raised with us, and they wanted us to go and solve those.

And so now it's over to you.

So we ran this as probably a traditional agile project, running it with three-weekly... well, sprints lasting three weeks. So we'd get our user stories, organize them all up, and prioritize the ones that needed to be done first, in consultation with the wider university community, because if you don't engage with your users, you've produced something that they don't want and they won't engage with. And then, at the end of every sprint, we made sure that we fed back, and made sure they were happy with what they'd been given.

And we continued along that line until... I think we're pretty much there; we've got one sprint left, haven't we? Yeah, one wrap-up sprint to finish up the project. So, we at the university are primarily a Microsoft shop, so it's all been done in C# for the web API back end. We've made a bit of a leap and done React for the front end, which seems to give us a lot more flexibility than what we'd seen doing a traditional MVC pathway, with the views and the controllers and the model.

And then, for a variety of reasons, we're hosting it on our own internal servers, running IIS. The dashboard is added into Canvas as an LTI, and it appears in the course and user navigation; it's all controlled by Canvas permissions and the role that you're given within Canvas. And then we've been using Azure DevOps to manage our pipelines, and that way we can attach user stories to a particular release of code. So far, it's been a bit of a learning curve, because we've only just started using Azure DevOps, and this project has been a guinea pig for that.
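For reference, wiring a tool into both the course and user navigation can be done through Canvas's classic external-tools API, roughly as sketched below; an LTI 1.3 tool would instead be registered through a developer key JSON configuration. The tool name, URL, key, secret, and account id here are all placeholders, not ARU's actual values.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;

var http = new HttpClient { BaseAddress = new Uri("https://canvas.example.edu") };
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("CANVAS_TOKEN"));

// Register the tool with both placements; the account id (1) is illustrative.
var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["name"] = "Assessment Dashboard",
    ["privacy_level"] = "public",
    ["consumer_key"] = "dashboard-key",                     // placeholder
    ["shared_secret"] = "dashboard-secret",                 // placeholder
    ["url"] = "https://dashboard.example.ac.uk/lti/launch", // placeholder
    ["course_navigation[enabled]"] = "true",
    ["course_navigation[text]"] = "Assessment Dashboard",
    ["user_navigation[enabled]"] = "true",
    ["user_navigation[text]"] = "Assessment Dashboard"
});

var response = await http.PostAsync("/api/v1/accounts/1/external_tools", form);
response.EnsureSuccessStatusCode();
```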

But I think it's worked really well so far. Yeah. So, I mentioned earlier that there are some terminology issues. Effectively, what the dashboard is offering is a management tool for the module leader, or the course leader if you wanna think in those terms, the professor running the actual period of learning, and then a process tool for the individuals involved in the assessment process.

And I've used the icons there because, on the subsequent slides, what we're going to be doing is looking at the workflow, and these icons will refer to those individuals who are the primary targets. So the module leader is effectively administering this; then you've got a marker and a moderator. And this is how it works over in the UK, just in case it's in any way different over here: a marker-moderator model is effectively one person going in and marking, and then a moderator taking a sample and looking at it for QA purposes, just to make sure that the standards are being met. And that's an internal process. The other way in which it might be done is double marking, which is where two academics take exactly the same sample of work and both mark it, and then they have to come to an agreement about what the feedback should be and what the mark, the grade that the student receives, should be. And those are the two primary models that happen in a range of markets, similar to over here.

No? Okay, great. You can leave whenever you want. And then the external examiner is another layer of QA that we have. They're effectively an academic from another institution in the sector, and they're employed by your institution, your home institution, and their job is to come in and undertake effectively a QA audit of your standards, and to confirm that they meet national standards, or professional discipline standards as well.

So there are various layers of QA going on. And what we needed to do was make sure these individuals all had access to the relevant parts of the system, and to the assessments that were being undertaken by the students, in as easy a way as possible. That was really the objective. And so what it started with was this tool, which we made available, as Stephen has said, both in the user profile and also in a Canvas course. And so there, that's just demonstrating the two links into the dashboard itself.

And what it's providing is a range of different tools for the module leader. So here, you've just got a glimpse of what the dashboard is. In fact, you've got the different assignments being shown there. And then the module leader is clicking in to start undertaking one of the primary functions, which is marker allocation.

In the marker process, the module leader is effectively grabbing the names of markers, of staff, from our system (effectively an Active Directory call), and they're going to pull that information in. So you can set it up by using a CSV import, which is quite useful for some of our module leaders who do it all offline and then just wanna pull it all in. And you can also set it up so that, if students subsequently have to resit, the same markers are reallocated to the students that they originally marked. If you don't wanna do that, you can just set up a fresh set of markers. It might well be that the resit's gonna be six months down the line...

...and the person who first marked it is actually on holiday, or on some sort of academic leave, and there's no point allocating it to them. And effectively, as you can see here, what you're doing is just picking members of staff from the database and creating a pool of individuals who are then gonna be allocated to the students. And you can just see there at the bottom: you can either allocate manually, so you'd go through a process of allocating to the individual students, or you can use an auto-allocation system in the tool. And that, once you click on the button, will then create a list of students. I've just blanked them out, because this is a particular module which actually doesn't have anonymization enabled.

So there's a column there which doesn't have any data in it. And on the right-hand side, you can see Schmidt, who's a colleague of ours, and Stephen, who have been allocated using the system. If the module leader wanted to, they could switch back over to manual allocation and then move one of the students from one marker to another. So it's entirely flexible. But using the auto-allocation system also allows for the fact that students sometimes join our courses late.
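A rough sketch of what that auto-allocation could look like, including the late-enrolment handling described next. The round-robin and smallest-group strategies are our illustration of the behaviour, not the tool's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class MarkerAllocator
{
    // Spread students evenly across the marker pool, round-robin.
    // Assumes at least one marker in the pool.
    public static Dictionary<string, List<string>> Allocate(
        IEnumerable<string> students, IReadOnlyList<string> markers)
    {
        var groups = markers.ToDictionary(m => m, _ => new List<string>());
        int i = 0;
        foreach (var student in students)
            groups[markers[i++ % markers.Count]].Add(student);
        return groups;
    }

    // A late enroller gets chucked into whichever group is currently smallest.
    public static void AddLateEnroller(
        Dictionary<string, List<string>> groups, string student)
    {
        groups.OrderBy(g => g.Value.Count).First().Value.Add(student);
    }
}
```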

And by having auto-allocation enabled, the system just chucks them into a marking group automatically once they're in the course. So it deals with those sorts of late enrollments on the fly, which again is one of our primary objectives: to try and take all of that workload away from the academics, so they can just get on with the teaching. We also then have the moderator, as I say. And effectively, that's the person who is gonna be looking at the sample for QA purposes. And whilst you can have different moderators for different markers (and that's commonly done where you've got the nine hundred students), what you actually need to do is set up a pool of moderators and then break the workload out.

The tool won't allow a moderator to also be the marker for the same student, so they can't moderate their own work; it throws an error back at them, and that would have just shown up there. Here, I've just set myself up as the moderator for both markers. What we also have in the system is the double-marking setup, and that's very similar. Here, I've enabled the double-marking tool, and I create the pool of markers.

And then what I'm doing is selecting a marker against each of the students: first marker, second marker. You can see there, in the top row, I've tried to put Stephen down as both first marker and second marker, and the system, again, is throwing an error. And then the last one is just allocating the external examiner.
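Those two guards might look something like this; an illustrative sketch rather than the tool's actual validation code.

```csharp
using System;

public static class AllocationRules
{
    // A moderator can't moderate their own marking.
    public static void EnsureModeratorIsNotMarker(string markerId, string moderatorId)
    {
        if (markerId == moderatorId)
            throw new InvalidOperationException(
                "A moderator cannot moderate their own marking.");
    }

    // The same person can't be both first and second marker on a script.
    public static void EnsureDistinctDoubleMarkers(string firstMarkerId, string secondMarkerId)
    {
        if (firstMarkerId == secondMarkerId)
            throw new InvalidOperationException(
                "First and second marker must be different people.");
    }
}
```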

The external examiner allocation is quite a basic tool. Then we have the assessments part of it, which is more of the workflow. And so here, I've got a marker who's going in and having a look at the scripts that they've been allocated; they've all been submitted. And what we do is generate a submission link in that part of the tool, so that by clicking on that, it takes the marker directly to the submission in Canvas's SpeedGrader.
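That submission link is essentially a SpeedGrader deep link, built from the standard SpeedGrader URL pattern, roughly as below. As comes up later in the talk, the anonymous-marking variant of this URL uses a temporary anonymous id rather than a student id, and stops resolving once anonymity is lifted.

```csharp
using System;

public static class SpeedGraderLinks
{
    // Deep link straight to one student's submission in SpeedGrader.
    public static Uri ForStudent(string canvasBaseUrl, long courseId,
                                 long assignmentId, long studentId)
    {
        return new Uri(
            $"{canvasBaseUrl}/courses/{courseId}/gradebook/speed_grader" +
            $"?assignment_id={assignmentId}&student_id={studentId}");
    }
}
```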

They can then enter a mark once they've reviewed the submission, whatever that might be or however it's rendered in SpeedGrader, and that will then automatically fire back to the tool. So once they come back here, it just updates and shows in the submission status that it's been graded. The submission status of "graded" is interesting in Canvas, because if you then remove the mark, it still maintains that graded status; I'd like them to fix that, if anyone's listening. But at the moment, it just stays and shows that it's been graded. Once the marking has been completed by that particular marker, they then complete marking.

So there's a button there which they click, and it then shows marking as completed. One of the reasons we did that is that we wanted to track the process, and that's fundamentally for the administration, for the module leader, which I'll show you in a little minute. So we use those as values. Once all the marking's been completed, the module leader, the person running it, can then create the sample. So it starts off...

We use internal rules as part of the tool. You may have your own rules, and I'm sure that would be configurable if you wanted to create something similar to this. So for us, it's a minimum sample of eight, or ten percent of all the submissions. So when it opens up, it starts off at red, and as you work your way through, it's still red, but it's counting off the number of samples that have been created. And when you've met the minimum sample, it will then turn green and say, right, you're okay.
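Reading that rule as the larger of eight scripts or ten percent of submissions (our interpretation of it), the check behind the red/green indicator is nearly a one-liner:

```csharp
using System;

public static class ModerationSample
{
    // Minimum sample: the larger of 8 scripts or 10% of submissions,
    // capped at the total available. Adjust if your rule differs.
    public static int RequiredSize(int totalSubmissions) =>
        Math.Min(totalSubmissions,
                 Math.Max(8, (int)Math.Ceiling(totalSubmissions * 0.10)));

    // Green once the sample meets the minimum; red otherwise.
    public static bool IsMet(int sampled, int totalSubmissions) =>
        sampled >= RequiredSize(totalSubmissions);
}
```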

Similarly, when the moderator then accesses it, they've got a submission link. And this was probably one of the biggest things that caused us grief when we were first using SpeedGrader, because we would have nine hundred students submitting, and we would have a relatively small sample out of all of that. Trying to find those students in SpeedGrader was quite challenging, particularly where marking was being undertaken under anonymous conditions. Because whilst you could use the drop-down at the top of the screen, if a piece of work is marked under anonymous conditions and you copy the URL, then once those anonymous conditions are lifted (and that sometimes coincides with when the moderator might be looking at it), that anonymous URL doesn't take the moderator to that particular paper. It just takes them to the first student in the list. And that was one of the challenges that our academics really felt quite frustrated about.

So by inserting these submission links into the tool, it gives the moderator direct access to the particular submission that they've been asked to look at and review in terms of QA. And then they just go through the process of checking off that they've completed the moderation, and that then enables us to track that they've actually done their job too. They leave some marks, some comments, basically, and this is just a process for recording that QA, and it's available to our external examiner. So it's a way of having an audit trail for that particular process. And once they've submitted, that's a good tracking aid for us.

And the first markers are also recorded in the system. So here, you've got a first marker, an academic who's been allocated as first marker to some students and second marker to others. And when they access either of those options (here it's the second marker), you can see that the grade column has got a couple of marks: fifty-nine percent and sixty-seven percent. Those have been entered into SpeedGrader by the first marker.

Because SpeedGrader only offers one place for grades to be entered, we couldn't get around that. We've had little chats with Canvas Pro Serve and all those sorts of individuals, and they won't let us at the UI there, and I can quite understand that. You know, it's quite a protected piece of kit. So we had to work around it.

So what we've done is created a second grade column in the tool, and that's where the second marker records their grade. And again, it's an audit trail. You can see, on the far right there, there's a little comment box, and that allows the markers to have an offline discussion which isn't available to the students. The only way we could find to do that before this was effectively to leave comments in SpeedGrader, but not save them.

They were therefore recorded as drafts, but that always ran the risk that someone would then actually submit that comment, and it would then be available to the student. And we didn't quite fancy the idea of that. So we created this little facility, and it's really just a chat screen. So here, as the second marker, I've left a comment; then the first marker pops in, and they can respond. And this is generally used where there's a bit of a disagreement about the mark, you know, and they're having that academic conversation offline, trying to agree.

And in the era we're in at the moment, where there's a lot of remote working and people aren't doing stuff in offices next door to each other, having some sort of facility where they can have that dialogue, without it being in any way exposed to the students, was quite a critical thing that we wanted to achieve. We then have the process that the module leader has, which is that tracking. So here, the module leader is able to review which markers and moderators have done their job. That's really useful when one of them hasn't, because then they can track down the right person and give them a bit of a push. Yes, it was you.

Yeah. Pretty normal, really, for you, isn't it? Once the module leader's then made sure everything's been done, they can leave some sort of comment against the script. So again, it's all about audit trails and making sure that there's a record of what's going on. That's really important, because when the external examiner accesses it, they then have access to all that information.

So here, what you've got is the sample being created for the external examiner. And when they access the sample, they have all of the comments that have been left by the internal moderator. They will also then have access to the various submission comments, if it was double marking, and they will then leave their own comments in the tool itself, again. So it's all being recorded within the tool directly.

And as part of the whole process, what we included was a whole range of other notifications. You saw some of the emails that we put up there. We haven't really replicated them here because, quite frankly (oops, sorry), there's loads of them. And you can see on the screen there a range of some of the notifications which are generated. It's actually this bit which we're in the closing stages of just wrapping up, because whilst we've got the notifications in play, we're also now building in, in a similar way to a student's ability to manage when and how frequently they get their notifications, exactly the same control for academics.

So we're just spinning that up, really, to make sure they can, you know, get it once a week or whatever. So it's those sorts of elements of the project that we're just wrapping up, but the basic build is pretty much there. And that's the end of giving you an overview. It was whistle-stop, and that's largely because we wanted to make sure that there was some time for any questions, if you wanted to ask us any.

Yes, go ahead. I'm Scott Hides, from Palo Alto University here in the US. Hi. And a lot of us use assessment platforms (Watermark, eLumen, those kinds of things) to do some of these things.

Mhmm. But you're way ahead of us, I think; we don't do as much moderation. We do maybe a little bit of external review. Yeah. But I think you have something here that a lot of us can pull in and use as a supplement to those assessment-platform kinds of processes that we're using. You should sell this to the assessment platforms, too.

That's really weird. How much does it cost? Are there any plans to share it, or have you already shared it (that's my question), with others in the UK, aside from just putting it on GitHub? Let's put it this way: we were encouraged to come here by Instructure.

I think there might have been some early conversations, and I'm also presenting this at a UK and Ireland version of this event in November. Yeah, I thought there might be some interest, because I follow the community a little bit; I don't have time to read everything, you know, there's loads of pages, isn't there? And I know that large course management is quite a big issue over here, and a lot of people are using sections. We use sections: when we create those marking groups, we're effectively creating sections in Canvas. One of the questions we had last night at hack night, with some of the Canvas colleagues, was that we can link directly to the paper, but the thing we haven't been able to address so far is to then pre-filter the section.

So where you've got shell sections and you filter down, that filtering doesn't yet appear in the URL; they're doing it within the interface. And we were saying, why can't you give us a URL to drill directly into that pre-filtered section? That was the only one where they thought they might, very kindly, be able to do it, compared to the other questions.

Again, if you're listening, go for it. So yeah, we think there might be some things there that could be useful, though we do appreciate there's some very EMEA-centric stuff going on. We know that. You know, we're not immune to the fact that we're a little bit weird. Yes.
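Since those marking groups are implemented as Canvas sections, here's a rough sketch of creating one and enrolling the allocated students, using the standard sections and enrollments endpoints. It assumes an authenticated HttpClient like the ones in the earlier examples; the Section record is just the one field we need from the response.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class MarkingGroupSections
{
    private record Section(long id);

    // Create a section for one marking group, then enrol its students.
    public static async Task<long> CreateGroupSectionAsync(
        HttpClient http, long courseId, string groupName, IEnumerable<long> studentIds)
    {
        var createForm = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["course_section[name]"] = groupName
        });
        var resp = await http.PostAsync($"/api/v1/courses/{courseId}/sections", createForm);
        resp.EnsureSuccessStatusCode();
        var section = await resp.Content.ReadFromJsonAsync<Section>();

        foreach (var studentId in studentIds)
        {
            var enrolForm = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["enrollment[user_id]"] = studentId.ToString(),
                ["enrollment[type]"] = "StudentEnrollment",
                ["enrollment[enrollment_state]"] = "active"
            });
            var enrolResp = await http.PostAsync(
                $"/api/v1/sections/{section!.id}/enrollments", enrolForm);
            enrolResp.EnsureSuccessStatusCode();
        }
        return section!.id;
    }
}
```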

So with the multiple graders, people doing a few different marks, and you've got a moderator: let's say they disagree. How would that resolution work? Do they eventually talk it through and both end up at the same grade? Well, it does all get recorded. We didn't show you the full data, but basically, it's a collegiate chat; let's put it that way. So there's an expectation that they get on with it and sort it out amongst themselves, because they're all big boys and girls.

And if there is an agreement at the end of that, then effectively one of the comment boxes for the double marking has three options: the second marker agreed with the first, the first marker agreed with the second, or we mutually agreed that we were gonna go with X. And that's then available to the external examiner, so they can see at a glance where it ended up, you know, how it went. If they can't agree, and it does happen, then the rule is you bring in a third person, and they are the arbiter of wherever this is gonna go. So they both agree to hand over the decision to somebody else, who goes in, reviews it, and comes to a view. And academics can be quite interesting.

Can't we? You know, we like to know that our opinion is really important. So sometimes it does happen, and then you have to just bring in a third person. That's the way we handle it. So when the grade is decided on, does it go back in? Which grade gets recorded, just to be clear? Yeah. That's right.

The first one gets recorded in the grade book. The second one is, you know, in our system. Yeah. And then eventually one of those gets recorded back into the main record, so it goes to the SIS or whatever you're using. Basically, the agreed grade is stored in SpeedGrader.

So whatever's gonna be presented to the student, you dump that in SpeedGrader, because that's what they're gonna see. Everything else is just a record of the academic discourse, for audit purposes. And there is a missing piece of the puzzle which we developed, which is the student information system interface, and that is where the module leader pulls all the grades back in and ticks to say that they're happy.
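A rough sketch of that pull step, reading graded submissions from the standard Canvas submissions endpoint so they can be reviewed before being written back. The CanvasSubmission shape is trimmed to the fields this needs, the write to the student record system is institution-specific and omitted, and real code would follow the pagination Link headers rather than relying on per_page.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Just the submission fields needed for grade pushback.
public record CanvasSubmission(long user_id, double? score, string workflow_state);

public static class GradeCollector
{
    // Pull all submissions for one assignment; the module leader reviews
    // these and approves them before anything is sent to the SIS.
    public static async Task<List<CanvasSubmission>> FetchGradesAsync(
        HttpClient http, long courseId, long assignmentId)
    {
        var submissions = await http.GetFromJsonAsync<List<CanvasSubmission>>(
            $"/api/v1/courses/{courseId}/assignments/{assignmentId}/submissions?per_page=100");
        return submissions ?? new List<CanvasSubmission>();
    }
}
```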

So it's ported from whatever was decided there in your system and pushed back? Yeah. They literally have to hit a button that says they're approving these grades. So, there was a hand there. Yeah. I have a question about extensions, which I think in our parlance is an accommodation, for a student, for someone who has time and a half on an assignment, a second try, or something like that.

I was wondering if this can be applied to multiple assignments in a course: if, for a student taking a course for a term, it is applied across each assessment that they're doing, or if it has to be applied individually. We would have slightly different rules. So in the UK, they'll have variations, which will be called something like a statement of reasonable adjustments, which is an accommodation. They would be assessed by a particular department in the university, which will say this particular student has to have X. In most cases, for an assignment where there is a reasonable period of time for the student to develop it, that accommodation will actually be the support provided in advance of the submission, so they get increased levels of support, and the date on which they have to submit is the same as everyone else's. If it's a timed event, that's when it might well be different.

So: you guys have got an hour and a half, but you particularly have got two hours. And then that will be applied to every assessment which is an event-based, timed assessment, automatically, by rule, effectively; it's recorded on the student's record. An extension is slightly different, and that's more of a "my dog ate my homework the night before, I really just need a bit extra." It's on the expectation that the student is there, but not quite; some things happen, and there are criteria, obviously. And then again, there's an area in the university... in some schools or some universities the academic can give a bump of the date, but in a lot of them, there will be specific departments who are independent of everything else, and it's their job to ensure the student journey is protected and that student well-being is ensured. So they will then consider what that student is saying and go, okay, you've got extra time, and that's what the extension is. Okay, thank you for clarifying.

But you did answer my question, because I was more concerned with how the accommodations are applied. If the instructors have to do it assignment by assignment, that's a lot of extra work here. If they can apply it for a student as an accommodation throughout the term, that would be a lot more convenient.

Yeah. Yeah, you can. Yep. Sorry. Go ahead.

With regards to your dashboard guiding the users through the submissions linked in SpeedGrader: SpeedGrader normally has that drop-down where you can then view, as a grader, any of the students that you're assigned to be assessing. Do you have that suppressed in some way, or do they still get it? No, that's still available to them. And the only thing that we would like to do is actually pre-select that drop-down, which we can't do at the moment, just because Canvas won't allow us to in the back end.

But maybe we'll get there. So the externals don't access the course directly? They're only accessing the module piece through the dashboard and some authentication piece? At the moment... well, prior to this, they were going in and then having to find the sample that they'd been asked to look at, and that was costing them quite a bit of time. So with the dashboard, they'll go to the course, but then, just as in the first run-through, they'll click on the assessment dashboard. It will then recognize them as the external examiner for that course, and it will show them: this is what you've been assigned to review for this course.

They will just click on the drop-down, they'll have their sample, and then they just go through. There are options in there (there's a lot more in there than we've shown you), in that it will default to the sample that they've been given, but some of our externals just wanna look at everything. You know, if there are nine hundred students, they wanna be able to skim through and pick their own students. So they can just change the filter and it will show every student, and then they can just wander around to their heart's content.

So you're issuing institutional accounts? Yeah, that's right. They're employed by us, and they have to sign all the right paperwork. Yeah. Yep.

On that: do the assessors need to be, like, constituents of the university? I'm just thinking this could be an interesting competitor for, like, K-12 education licensure, where you don't have traditional accounts for access, like Watermark. So, just for the recording: do the markers have to be employed by the institution and have accounts, etcetera? I think the short answer to that is yes. And that does present issues already for us, because we have partnership arrangements with some organizations where we validate their courses.

The students who are taking those periods of study are automatically our students, and so they've got access to everything, but those partner organizations employ those individuals, and then there are legal, licensing, and financial implications about what they've got access to. And we're, like any other institution, constantly rumbling through those challenges, but at the end of the day, they would have to have an account with us. And it's entirely possible to give them that account; we've just gotta go through all the legal stuff of making them appropriately positioned to have that account. You know, everything's doable. That's the thing I've learned through all this.

You can pretty much do anything, even if the outcome is just to completely accept that it's not doable; that's still an option. Yes: why do we create the assessments outside of the LMS? Yep. So, again, for the recording, why do we create the assessments outside of the LMS? Largely, that's because, rather than having the academic go in and manually do that every single time, our student record system has a record of every single assessment that's associated with that particular course that is credit bearing.

And because that record already exists, it goes back to that primary principle: if we've got that information, why do we then make someone do swivel-chair integration, where they go and create the assessments that are recorded over there already? So we know what they are, and we know what they need to have. What we needed was a little bit more granular information about how that needed to be created in Canvas, rather than it being just a basic description. So that seven-question drop-down is translating the record, the assessments, into a Canvas-oriented artifact, which we will then fire at the system whenever a student's state changes, whether it's an extension, a resit, or they move to a different delivery of that course. It doesn't really matter. We just use that basic information that sits in our student record system, filter it through that particular form, and fire it at Canvas, and it just happens overnight automatically.

Sorry, is that tied to a student as well? Totally. Yeah. Absolutely.

Yeah. And that's kind of a... I'll come to you in just one minute. It's one of the other challenges: we start student first, so everything is students. So, where a lot of institutions and a lot of practices will use the global option in Canvas, the "Everyone" assignment...

...we don't do that, because it's not everyone that needs to have access to this assessment. It's a selection of those students that need to have access, or one or four of those students might need to have a different date. So we start with an individual override, because each student is recorded in our student record system as an individual with an individual journey. And when we translate that into Canvas, we want to translate it as a mirror: they're an individual person with an individual journey.

So we went straight for the individual override, but the system hasn't been built that way; all the APIs, the code, haven't been written to reflect that. So a lot of things, most things, pretty much everything works with "Everyone", the global option; not everything works in the same way if you use an individual or section override. Because I think it was originally conceived to be something else; that wasn't its start point.
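For completeness, the per-student override described here maps onto the Canvas assignment-overrides endpoint; a minimal sketch, again assuming an authenticated HttpClient as in the earlier examples.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class StudentOverrides
{
    // Give one student their own due date on an existing assignment.
    public static async Task CreateIndividualOverrideAsync(
        HttpClient http, long courseId, long assignmentId,
        long studentId, DateTime dueAt)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["assignment_override[student_ids][]"] = studentId.ToString(),
            ["assignment_override[due_at]"] = dueAt.ToUniversalTime().ToString("o")
        });
        var resp = await http.PostAsync(
            $"/api/v1/courses/{courseId}/assignments/{assignmentId}/overrides", form);
        resp.EnsureSuccessStatusCode();
    }
}
```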

Sorry. Yes, you were first, so please follow on. Forgive me if I've missed this, but because the SIS specifies what assessments are required, course by course, on a student basis, are you then doing grade pushback for those assessments from Canvas into the SIS? Totally.

Yeah. And it stands to reason, because then you have the consistency; that is what the SIS is for. Yeah. Yeah.

Yeah. It's managed via... the module leader has to have control of that. We could have just built it so it fired straight back, but they want to have academic oversight, someone signing off. So they will then basically review the grades that exist, and then hit a button, and then they're in control of doing it. But it's basically a three-step: pull, say yes, go. Okay. Any special challenges from New Quizzes? Lots.

We haven't got time; tomorrow. And one of the things, just to note: when we started this, Classic Quizzes were on the way out and New Quizzes were on the way in. We were only gonna build a solution to generate one of those, so we went with New Quizzes. Teachers can still create Classic Quizzes if they want to, and there are phenomenal amounts of Classic Quizzes being created in the platform for non-credit-bearing assessments, but we only ever create New Quizzes, and there are challenges. The way it's been built is an LTI, a whole... we won't go there, but thank you very much for coming.