Reducing the Gap Between Oral and Written Assessment: A Comparison of How Teachers Assess Podcasts and Written Solutions
Abstract: Prior to writing our thesis, a course at the Royal Institute of Technology allowed us to present our knowledge through alternative methods, and we decided to create a podcast. While producing it, we began discussing how the examiner would assess this format compared to the written solutions our peers were creating. This led to three research questions: How does the format, written versus podcast, affect grading by teachers and student teachers? What parameters should be considered when designing a task for both written and podcast formats? What is the general view among teachers on using pupil-generated podcasts? To answer these questions, we conducted an experiment and a survey. The experiment addressed the first two questions and gave the following results: with only the format altered between podcast and written, participants assessing the podcast were more inclined to give feed-forward and comprehensive feedback, while participants assessing the written solution focused more on what was missing and commented on small details. The second research question was answered by analyzing the task we created for the experiment; we found that source integration is needed to reduce confusion. The survey suggests that no teachers were against using pupil-generated podcasts, but STEM teachers were more particular about how and when they can be implemented. We hope that this thesis inspires further research in the relatively new area of pupil- and student-generated podcasts.