Saturday, 9th December 2017, 10 AM. As I search for appropriate resources on effective technological interventions for assessment, I ponder why we need technology to intervene in assessing students at all. Can assessments delivered through technology provide a clearer picture of student needs, interests, and abilities than traditional assessments? Can they contribute to adaptive learning, making learning a more personalized experience? Might they be more costly? Can technology support both formative and summative assessment?
Technological interventions for assessment are desirable because they let teachers analyze a student's learning while it is in progress. Learning management systems (LMSs) like Ultrabot provide mastery-based assessment, where tests are adapted to a learner's current level of understanding. As the student's learning progresses, the system draws skills from other modules and adapts them to assess the student. In this way, learning becomes a more personalized experience (Khan, 2017).
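To make the idea of mastery-based adaptation concrete, here is a minimal sketch of how such a system might choose the next question. Everything here is invented for illustration: the item bank, skill names, and the "one level above current mastery" rule are assumptions, not how Ultrabot actually works.

```python
import random

# Toy item bank: each question is tagged with a skill and a difficulty level.
ITEM_BANK = [
    {"skill": "fractions", "level": 1, "question": "1/2 + 1/4 = ?"},
    {"skill": "fractions", "level": 2, "question": "3/4 - 2/3 = ?"},
    {"skill": "decimals",  "level": 1, "question": "0.2 + 0.3 = ?"},
    {"skill": "decimals",  "level": 2, "question": "1.5 * 0.4 = ?"},
]

def next_item(mastery, bank=ITEM_BANK):
    """Pick the next question for the learner's weakest skill.

    `mastery` maps skill -> estimated level (0 = untested).
    """
    # Target the skill with the lowest current mastery estimate.
    weakest = min(mastery, key=mastery.get)
    # Offer an item one level above the learner's current estimate.
    target_level = mastery[weakest] + 1
    candidates = [i for i in bank
                  if i["skill"] == weakest and i["level"] == target_level]
    return random.choice(candidates) if candidates else None

def update_mastery(mastery, item, correct):
    """Raise the mastery estimate on a correct answer; hold it otherwise."""
    if correct:
        mastery[item["skill"]] = max(mastery[item["skill"]], item["level"])
    return mastery
```

The loop of "assess weakest skill, update estimate, reassess" is what makes the experience feel personalized: two students with different mastery profiles see different questions.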
These systems also make it possible for teachers to give students real-time feedback through powerful dashboards, and this increased student-teacher interaction further supports the learning process. Parents, too, can use these dashboards to follow what their children learn at school and how they perform (Assessment, n.d.).
However, how do these systems account for both formative and summative assessments? What factors should be kept in mind while designing them? Teachers should not be made to spend time designing formative tests that students will never take. Should such tests, then, be high-stakes, low-stakes, or mandatory? How will the system account for such assessments? (Groeneveld, 2017)
Some research suggests that MCQs can be used for both formative and summative assessment (UK Universities' Staff Development, n.d.). Students can attempt them in a short amount of time, and they can be tested for reliability. Teachers can identify learning gaps and pass feedback to students, or grade their work if the assessment is summative. In formative use, however, care must be taken that the assessment does not become a mere clicking exercise. As mentioned before, LMSs like Ultrabot and even Khan Academy can be used for formative assessment: a student's skills, gauged through MCQs, are assessed to identify learning gaps, and the student is then guided to the right lesson. Coursera allows students to re-attempt exams until they get their concepts right.
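The "identify gaps, then route to a lesson" step can be sketched in a few lines. The quiz, topics, and 50% threshold below are all hypothetical; a real platform would use a richer model of mastery than a simple per-topic score.

```python
# Each MCQ is tagged with a topic and the lesson to revisit on failure.
QUIZ = [
    {"topic": "photosynthesis", "answer": "B", "lesson": "Unit 3.1"},
    {"topic": "photosynthesis", "answer": "D", "lesson": "Unit 3.1"},
    {"topic": "respiration",    "answer": "A", "lesson": "Unit 3.2"},
]

def learning_gaps(responses, quiz=QUIZ, threshold=0.5):
    """Return topics scoring below `threshold`, each with a suggested lesson."""
    per_topic = {}
    for q, given in zip(quiz, responses):
        correct, total, lesson = per_topic.get(q["topic"], (0, 0, q["lesson"]))
        per_topic[q["topic"]] = (correct + (given == q["answer"]), total + 1, lesson)
    return {topic: lesson
            for topic, (correct, total, lesson) in per_topic.items()
            if correct / total < threshold}
```

A student who misses both photosynthesis questions would be pointed back to "Unit 3.1", which is the formative feedback loop the paragraph describes.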
Online learning platforms like Coursera also employ peer assessment, where students share their assignments with their peers; generally, three peers grade each assignment. This allows for a diversity of approaches, as participants come from across the globe and are sometimes experts in the area themselves. EdX has a system that looks for particular words in an essay-type answer, matches them against a built-in rubric, and then gives the student feedback on what could be added.
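Both mechanisms above can be sketched simply. The rubric terms, hints, and the median rule for combining peer grades are my own illustrative assumptions; production systems (EdX's included) use trained models and calibration rather than plain keyword lookup.

```python
import re

# Hypothetical rubric: a keyword the grader looks for, and the hint shown
# to the student if that keyword is missing from the essay.
RUBRIC = {
    "supply": "Discuss how supply interacts with demand.",
    "demand": "Explain the role of demand in price setting.",
    "equilibrium": "Mention the market equilibrium point.",
}

def essay_feedback(essay, rubric=RUBRIC):
    """List hints for rubric terms not found in the essay."""
    words = set(re.findall(r"[a-z]+", essay.lower()))
    return [hint for term, hint in rubric.items() if term not in words]

def peer_grade(grades):
    """Combine peer grades by taking the median (robust to one outlier)."""
    return sorted(grades)[len(grades) // 2]
```

Taking the median of three peer grades means one overly harsh or generous grader cannot swing the final mark, which is one reason multiple peers are assigned per submission.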
How can a student's progress be monitored? LMSs used for education or training usually have dashboards that display a student's progress in terms of percentage scores, course completion, and so on. Some, like Ultrabot, also display a student's proficiency in terms of skills (Khan, 2017).
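A dashboard row of the kind described might be assembled as below. The field names and the (current level, max level) representation of skill proficiency are assumptions for the sketch, not a real LMS schema.

```python
def dashboard_row(name, lessons_done, lessons_total, skill_levels):
    """Summarize one student's progress as a dashboard might display it.

    `skill_levels` maps a skill name to (current level, max level).
    """
    return {
        "student": name,
        "completion": f"{100 * lessons_done // lessons_total}%",
        "skills": {skill: f"level {cur} of {top}"
                   for skill, (cur, top) in skill_levels.items()},
    }
```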
Can these systems guard against cheating? To an extent, yes: through time checks for each answer, examination conditions, and randomly generated tests in which each candidate receives a slightly different paper. Packages such as Question Mark are also available for designing and assessing MCQs (UK Universities' Staff Development, n.d.). Coursera also records a typing profile at the start, which helps it recognize whether the same person is attempting the exam.
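The randomization idea is easy to sketch: draw the same number of items per topic for every candidate, but a different sample and order for each. The item bank, per-topic count, and time limit below are invented; seeding the generator with the candidate ID (so a paper can be regenerated for audit) is a design assumption, not a documented practice of any named package.

```python
import random

# Hypothetical item bank, keyed by topic.
BANK = {
    "algebra":  ["A1", "A2", "A3", "A4"],
    "geometry": ["G1", "G2", "G3", "G4"],
}

def generate_test(candidate_id, per_topic=2, time_limit_s=90):
    """Build a candidate-specific test: same coverage, different items and order."""
    rng = random.Random(candidate_id)  # deterministic per candidate, for audit
    questions = []
    for topic, items in BANK.items():
        questions += rng.sample(items, per_topic)  # no repeats within a topic
    rng.shuffle(questions)
    return {"questions": questions, "time_limit_s": time_limit_s}
```

Because every paper covers the same topics at the same length, candidates are assessed comparably even though copying a neighbour's answers no longer helps.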
What if digital summative assessments are conducted at exam centers? How would the environment be controlled, and what costs would be incurred? Can such tests be beneficial in any scenario? Digital summative tests conducted at exam centers are costly, as they require test software to be set up and a test expert to interpret its results. They are beneficial in that they allow mistakes to be detected easily and can be aligned closely with the course content (Groeneveld, 2017).
Now the critical question: how beneficial and appropriate are digital assessments given Pakistan's current landscape? The biggest advantage of digital assessments is that they generate data, which can be put to use if resources are managed correctly. If the Pakistani educational administration employs such systems and allocates resources appropriately, the data gathered will serve to improve the overall educational system.
Hence, we can say that technology has the potential to move assessment from disjointed, separate measures of student progress to an integrated system of assessment and personalized instruction that meets the needs of the learner (Assessment, n.d.). It can also provide large amounts of assessment data to support educational improvement. However, the availability and allocation of resources must be kept in mind before implementing such systems.