Monthly Archives: May 2013

The use of Rubrics in Assessment

One of the ways we found to improve student assessment choice, increase transparency in the assessment process and encourage students to engage with their feedback was to use rubrics. I know rubrics are contested by some, and are not right for every discipline or context, but we had some really positive results using them. It isn’t easy to create a rubric, especially a scored one that works out the grade (a sketch of how this might work follows the steps below), but most academic members of staff have a marking scheme of some sort, or at least the tacit knowledge to know what a piece of work is ‘worth’, and it takes a while to get a rubric working the way it should. I worked with another member of staff at the University on a project that tried out rubrics within assessment. The steps were:

  1. The students completed the assignment. They could hand it in in any format they wanted, as long as it could be submitted directly to the University VLE or linked to: an essay, a PowerPoint presentation, a short video, a podcast, or whatever format they chose, provided it met the learning criteria and could be accessed.
  2. Whilst the member of staff was marking the assignment using the rubric, we asked the students to assess themselves against the same rubric and submit their completed rubric to us.
  3. When the work was handed back, we asked the students to compare the marked rubric against their own.
  4. If the students felt they had been unfairly marked on any of the criteria, they could appeal by writing no more than 500 words referring to the rubric and the piece of work they had originally submitted.
  5. We compared the results of the self-assessment with those of the tutors.
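
As a rough illustration of what a scored rubric might do, here is a minimal sketch of combining weighted criterion marks into an overall mark and classification band. The criterion names, weights and band cut-offs are hypothetical, not the rubric we actually used:

```python
# A minimal, hypothetical scored rubric: the criterion names, weights
# and band cut-offs are invented, not the rubric the project used.

CRITERIA = {                 # criterion -> weight (weights sum to 1.0)
    "argument": 0.4,
    "secondary resources": 0.3,
    "presentation": 0.3,
}

# UK classification bands, highest cut-off first
BANDS = [(70, "1st"), (60, "2.1"), (50, "2.2"), (40, "3rd"), (0, "Fail")]

def overall_mark(scores):
    """Combine per-criterion marks (0-100) into a weighted overall mark."""
    mark = sum(weight * scores[c] for c, weight in CRITERIA.items())
    band = next(label for cutoff, label in BANDS if mark >= cutoff)
    return round(mark, 1), band

# A student can score a 1st on one criterion and a 3rd on another yet
# still average out at a 2.1 (a pattern noted in the findings below).
print(overall_mark({"argument": 72,
                    "secondary resources": 45,
                    "presentation": 65}))    # -> (61.8, '2.1')
```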


We found the following:

  1. Using the rubric made it completely transparent where the marks came from. The students really liked this, rather than being left unsure how a particular mark had been arrived at.
  2. The rubric gave students complete freedom in their choice of format, which played to their strengths rather than favouring only those students who were good at writing or exams.
  3. Breaking the mark down into the different criteria was also useful, as it gave the students some really clear information on how they could improve their piece of work or future assignments.
  4. Giving the students the chance to appeal gave them agency over the process. Only two students took the tutor up on this option: one convinced the tutor they should have been graded slightly higher; the other admitted they were trying it on and accepted the given grade.
  5. The self-assessment task gave the tutor some really interesting data (a sketch of this kind of comparison follows the list):
    1. Firstly, because the results were broken down into individual criteria, it gave the tutor some really rich diagnostic data: the tutor could see whether any of the criteria had been misunderstood. In this case the students rated themselves much higher than the tutor did on one criterion, which was to do with secondary resources. They had obviously misunderstood what this involved, so an intervention could take place, and some training/explanation could immediately be put in place to correct it.
    2. Most students saw themselves as in the 2.1 category (60-70%). I am not sure if this was wishful thinking by some, but the 2.2 and 3rd students rated themselves higher (as 2.1s), and, perhaps through modesty, the 1st students (over 70%) also rated themselves as 2.1s.
    3. The students who rated themselves as 2.1s and were graded as 2.1s by the tutor were not scoring consistent 2.1s across all criteria. This was interesting: I think those students who expected a 2.1 and subsequently got one would probably not have engaged much with their feedback had it not been for the rubric. The rubric showed them that they were perhaps scoring a 1st on some criteria and as low as a 3rd on others, while averaging out at a 2.1 overall. Having that in front of them showed them clearly what they needed to do to improve their work and gave them the aspiration of getting a 1st (perhaps they had never thought this possible).
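
To illustrate the per-criterion diagnostic described above, here is a small sketch that flags criteria where the cohort’s self-assessed marks diverge sharply from the tutor’s. The criteria, marks and the 15-point threshold are invented for illustration:

```python
# Invented marks for three hypothetical criteria; in finding 5 the
# real cohort over-rated themselves on 'secondary resources'.

tutor   = {"argument": 68, "secondary resources": 48, "presentation": 63}
student = {"argument": 65, "secondary resources": 70, "presentation": 62}

# A large positive gap on a criterion suggests the students have
# misunderstood what that criterion involves (threshold is arbitrary).
for criterion, tutor_mark in tutor.items():
    gap = student[criterion] - tutor_mark
    flag = "  <- possible misunderstanding" if gap >= 15 else ""
    print(f"{criterion}: self {student[criterion]}, tutor {tutor_mark},"
          f" gap {gap:+d}{flag}")
```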


This activity was improved in subsequent years by having the students engage with the criteria before submitting their work, and by adding peer assessment before they had a chance to improve their own piece of work. The tutor was also able to compare years and cohorts to see how students had improved. It was very interesting to be involved with this, and the students really liked the rubrics because it was crystal clear how the marks had been arrived at and exactly what they needed to do to improve their grade.


Some further references about using rubrics (including our conference presentations about the above project):

Campbell, A. (2005). Application of ICT and rubrics to the assessment process where professional judgement is involved: the features of an e-marking tool. Assessment & Evaluation in Higher Education, 30(5), 529-537.

Ellis, C., & Folley, S. (2009). Improving student assessment choice using Blackboard’s e-assessment tools. Paper presented at BbWorld Europe 2009, April 6–8, in Barcelona, Spain.

Ellis, C., & Folley, S. (2009). The use of scoring rubrics to assist in the management of increased student assessment.

Hafner, J., & Hafner, P. (2003). Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer-group rating. International Journal of Science Education, 25(12), 1509-1528.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.

Meier, S., Rich, B., & Cady, J. (2006). Teachers’ use of rubrics to score non-traditional tasks: factors related to discrepancies in scoring. Assessment in Education: Principles, Policy & Practice, 13(1), 69-95.

Creating Virtual Learning Sessions (webinars)


One of the discussion threads this week on ocTEL is about running online synchronous sessions using platforms like Blackboard Collaborate or Adobe Connect. I have used both of these tools for delivering and attending webinars, but I am always looking to improve my skills in this area. I attended a free webinar run by Citrix in March this year on how to create virtual learning sessions, which I found really useful, so I thought I would share the notes I made from the session here:

1. How to keep participants engaged and active:

  • Call a session by its rightful name: terming a session a ‘meeting’, ‘presentation’, ‘webinar’ or ‘learning event’ creates different expectations in terms of participation, so make sure you label your session correctly.
  • Every 3-5 minutes, have the audience do something different to hold attention and prevent multitasking. You have plenty of tools at your disposal, including asking questions, using the whiteboard, using chat, polling, giving a break, allowing thinking time, asking people to speak, having them read something, or showing a video.
  • For a small group, a tip to make sure everyone is engaging is to write everyone’s name down on a piece of paper and mark when they contribute; that way you can invite those who are quieter to give their opinion.

2. 3-step instructional design technique:

  • Identify the instructional goal and performance objectives: the instructional goal is the session’s mission statement; the performance objectives are what the learners will be able to do when they leave the session.
  • Determine the assessment needs: how do we know each of the objectives has been achieved? Take each objective separately and determine whether it is suitable for online instruction. A general rule of thumb is ‘if it can be tested online, it can be taught online’.
  • Determine the collaboration needs: think about whether you need to bring people together to learn this. What does collaboration bring? Will it make the experience richer? A general rule is that if something does not need collaboration, it can be taught via self-paced online materials rather than in a collaborative session, e.g. learning road signs. (These two rules of thumb are sketched as simple decision logic below.)
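
Taken together, those two rules of thumb amount to a simple decision procedure. A minimal sketch (my own framing of the advice, not anything from the Citrix session):

```python
# The two rules of thumb expressed as decision logic; the function and
# its wording are my own framing, not taken from the Citrix session.

def delivery_mode(testable_online: bool, needs_collaboration: bool) -> str:
    if not testable_online:
        return "teach and assess face-to-face"   # fails the first rule
    if needs_collaboration:
        return "live collaborative session"      # richer done as a group
    return "self-paced online materials"         # e.g. learning road signs

print(delivery_mode(testable_online=True, needs_collaboration=False))
# -> self-paced online materials
```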

3. Determine when and how to design interaction and collaboration online:

  • Differentiate between interaction and collaboration. Interaction involves participation, such as polling and adding ideas to the chat space; collaboration involves working as a group to come up with a solution to a shared problem.
  • Two main reasons to include interaction and collaboration: firstly to support participant engagement and secondly to support learning outcomes.
  • Interactions usually sit at the lower end of Bloom’s taxonomy and promote communication between the tutor and participants and among the participants themselves.
  • Collaborations usually target the higher levels of Bloom’s taxonomy, like evaluation, analysis and synthesis, and achieve deeper learning.
  • The goal with collaboration is to help participants to achieve better results than they would individually.
  • Interactions can be serial or concurrent. Serial means that people take it in turns; concurrent means that everyone can participate at the same time. A unique feature of online platforms is that concurrent interactions can take place. If interactions online are serial, people will quickly start to lose interest and multitask.

Summary:

  1. Set realistic expectations for your students.
  2. Create opportunities for learners to engage.
  3. Follow the 3-step process to make sure the content is suitable for an online webinar session.
  4. Ensure that the learners are interacting and collaborating with you, the other participants and the technology.

Digital Methodologies in Educational Research Conference

This week, unfortunately, I have not had a great deal of time for ocTEL, but I did attend the webinar and played with some of the OER resource websites. One of the reasons I have been busy is that I attended and presented at the Digital Methodologies in Educational Research Conference, held in Preston on 10th May. As the theme of the conference fits in very much with the ocTEL themes, I thought I would write a blog post about some of the presentations while it is all still fresh in my mind.

The conference was held at the Brockholes Nature Reserve near Preston, which was a lovely location for a conference. The conference centre was in a building surrounded by water, with a lot of bird life around; it is just a shame that the weather was dreadful.

First up was Paul Seedhouse from Newcastle University, talking about the French Digital Kitchen and European Digital Kitchen projects, which were set up to teach language skills with the additional benefit of the students acquiring cooking skills. The idea was to teach language skills in a more authentic setting than a classroom. The kitchen was fully equipped with technology to help the students, who had to follow recipes in the language they were learning. They were paired up to support peer learning and could use the technology provided to translate things they did not understand. The utensils were all fitted with sensors that detected movement in a similar way to Wii controllers, so it could be checked that the students were carrying out the correct movements. It is not a technology intervention that others could easily repeat elsewhere, due to the heavy investment needed, but it was interesting nonetheless.

Second up was Jeff Bezemer, talking about using a multimodal framework in educational research. This is something I hadn’t come across before, but it looks really useful. It focuses on capturing all the different modes in the research context, such as audio, speech, images, dress, gesture and gaze, and, very importantly, how these inter-relate. The situation being researched is usually videoed so that all the modes can be captured and later analysed in minute detail. He had applied the framework to research in primary school teaching and operating theatre contexts. He used a software package called ELAN to analyse the data, which is timeline-based, so all the different modes can be examined in relation to each other and to when they occurred. The rationale behind using the framework is to make the unspoken visible and explicit, so as to explore and identify the tacit/embodied knowledge in the given context.

Next up was Stephen Bax from the University of Bedfordshire, speaking about a project which used eye-tracking software to track how language students read a text to answer questions in an exam. This was fascinating. The students had to locate the answers to questions in a text set in their second language. The software could show the order in which students focused on elements on the screen, and how long their eyes were fixed on certain places. The object of the exercise was to compare how successful students read compared to less successful ones. The results showed that successful students did have a reading strategy, but the strategies used varied widely: some would read the whole text first and then look at the questions; others would read the questions first and then look for the answers in the text. The less successful students seemed to have less of a strategy and looked around the screen a great deal, moving backwards and forwards. It was really interesting, and you could see the application to other research projects.

Next up were Liz Bennett and I from the University of Huddersfield, with a presentation about the use of digital tools to support the doctoral process. We particularly focused on the use of social media and Web 2.0 tools and their impact on identity. We argued that using these tools can be both enhancing and exposing, and that these feelings are amplified by the widespread audiences the tools reach. We theorised our experiences in terms of the notions of liminality and hybridised identity.

I had to leave the conference after our paper to travel back, so I was sorry to miss Cedric Sarre speaking about social network analysis, and the closing keynote, given by Stephen Downes via video conference from Canada.

So overall it was a good conference with some really interesting presentations.

Notpron – Far too frustrating

I chose Notpron from the list for activity 3.2 because the word ‘puzzle’ was next to the name, and I normally like puzzle-type games. I say normally: I hated this one; it was extremely frustrating. Left on my own, I would not have got further than the first level, which was easy because you just clicked on the door. This ‘puzzle’ was far too hard for me. You needed a degree in computing/programming and a very creative imagination to work through it, and I have neither. Hints were provided, but they were insufficient to help me. Thinking about this in terms of learning, it reminded me of Vygotsky’s (1978) Zone of Proximal Development (ZPD). The ZPD stretches students to learn new things within their capabilities, given guidance and suitable scaffolding. This game was way outside my ZPD, and the hints provided were not sufficient scaffolding for me even to guess what to try in order to reach the next level. I suppose the lesson learned is to make sure enough scaffolding is provided for every level of student; the experience also gave me an insight into what it feels like to be way out of your comfort zone. In my case, I could just give up on the game (or cheat – see below), but what if it was someone’s degree course and they felt like that?

I didn’t give up straight away, as I don’t like to be beaten. I asked my son, who enjoys that type of thing, and together (mainly him) we worked out a few more levels. I also Googled the game and found a walkthrough, which provided step-by-step answers up to level 10. Then we gave up, as it was way too hard and the hints and walkthrough stopped at that point (no scaffolding at all). I didn’t feel that I built on my knowledge from one level sufficiently to get to the next; it was all really hard and far-fetched. Maybe others with more creative minds found it easier, but I just found it frustrating.

So the lessons learnt were: build learning activities that build up knowledge (slowly if necessary); provide enough support and guidance to help students work within their ZPD; and remember that if students feel the tasks are way out of their comfort zone, they are likely to give up.