Generative AI and assessment

A key purpose of assessment is to yield evidence of student learning. This evidence depends on submitted work accurately representing each student’s contribution.

The use of generative AI technologies to produce text and other media as part of student submissions (or as part of the process of developing such submissions) needs to be thoughtfully supported and/or limited to ensure responsible use. Any such use also needs to be clearly and openly acknowledged.

Providing a clear rationale for the assessment conditions that are imposed will help students to comply and to see the value in responsible uses of AI technologies. Limitations should align with Monash University's institutional position on responsible use of AI within the Assessment and Academic Integrity Policy.

Considerations for assessment design

An important concern about generative AI relates to how it could be used by students to complete summative assessments. Emphasising responsible use of AI creates an opportunity to consider why we are assessing our students, what is being evaluated, and how evidence of learning is being gathered.

Here are ideas and guidance on ways that assessment could be modified or overhauled when rethinking what evidence matters for effectively assessing students' learning (see also Choosing assessment tasks on Teach HQ).

AI and assessment security

Detection of AI

As generative artificial intelligence tools such as ChatGPT become more accessible to students, other tools are being developed that claim to detect AI-generated text. However, these tools raise significant concerns about privacy, equity and data security, and there is little evidence that they are actually effective. AI models are being developed and updated at an increasing pace, so detection tools are unlikely to keep up with the latest developments.

Watch: Tim Fawns (Monash Education Academy) and Dave Cormier (University of Windsor) discuss AI detection.

Blocking, invigilation and high stakes examinations

Some institutions are responding to AI technologies by prohibiting their use, blocking access to them from university networks and reverting to traditional modes of assessment such as in-person pen and paper exams or oral exams.

In many cases, high stakes exams do not promote authentic learning experiences, and they limit opportunities for assessment via multiple modalities and for feedback that can contribute meaningfully to learning.

Accessibility and inclusivity principles are also compromised in high stakes exams. For example, hand-written and oral exams are known to significantly disadvantage students with physical or mental disabilities. Attempts to block remote access to technology can often easily be bypassed by students, and using other technology to detect such behaviour is unlikely to be effective or sustainable.

See more on Active learning on Learn HQ

Summary

  • Consider the ways in which the integrity of current assessments could be compromised by students' use of AI tools.
  • Think about more subtle ways of protecting and refining assessments.
  • Consider the use of alternative forms of assessment to gauge understanding in a low-risk environment, including lower stakes and formative tasks.
  • Make time for clear communication and discussion with your students about the possibilities and limitations of these tools.