Tuesday, December 14, 2010

Analysis of Competing Instruments

I am comparing the Media Evaluation (ME) Traditional Media evaluation form and the Techie Tigers (TT) evaluation form. The comparison is limited to the Rating Areas each instrument has chosen. The ME and TT evaluations both rely on the use of a rubric, and both are three-tier rubrics.
I come from a science/research background. I look at data. Data is objective. Data must be clearly defined. When comparing the ME evaluation form to the TT evaluation form, what stands out the most is the difference in subjectivity versus objectivity. Entering this project, I had a problem with the objectivity of each rubric: it relies on the evaluator being an expert in all the different fields the rubric is drawn from. My question is, if two people use the same rubric, will they be able to give an identical evaluation? If the answer is yes, then it is a viable scientific assessment. If the answer is no, then the assessment is invalid. For example, when the rubric asks “Is this media age appropriate?”, could two people independently and accurately identify the age the material is intended for?
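The worry about whether two evaluators would produce the same ratings is what measurement research calls inter-rater reliability, and it can be quantified. As a hypothetical sketch (the raters, items, and scores below are invented), Cohen's kappa is one standard way to measure how much two raters agree on a three-tier rubric beyond what chance alone would produce:

```python
# Hypothetical example: two evaluators score the same five media items
# on a three-tier rubric (1 = poor, 2 = adequate, 3 = excellent).
rater_a = [3, 2, 3, 1, 2]
rater_b = [3, 3, 3, 1, 1]

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(a)
    categories = set(a) | set(b)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement, from each rater's marginal score frequencies.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

print(round(cohens_kappa(rater_a, rater_b), 3))  # prints 0.412
```

A kappa near 1 means the rubric produces nearly identical evaluations regardless of who applies it; a kappa near 0 means agreement is no better than chance, which is the "invalid assessment" case described above.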
While the ME evaluation is cumbersome, the Narrative Criteria attached to each rating area give a specific way of addressing the questions. They are intended as an explanation to orient the evaluator’s thoughts toward specific features and toward consistency of perception. The information given here is based on published research and offers the evaluator a snapshot of what is considered important in that specific area of learning.
There is an appropriate place for subjective reflection when evaluating any media. This “gut feeling” is difficult to quantify but still has merit. The ME evaluation form needs a place for the evaluator to comment within the context of the rating area being focused on.
For the Web 2.0 evaluation, I looked at Shamelle Nash’s evaluation. Comparing the two, I see objective statements in her rubric. Both hers and the ME evaluation give specific outlines for each rating area.
Where our work differs is that she chose a more balanced, three-format review. She starts with a meta-analysis of her questions, then continues to offer objective responses in her narrative and in her use of a three-tier rubric. In the end, Ms. Nash’s evaluation helps to compensate for different evaluators’ personal approaches to learning material.
I can see that the ME approach of a three-tier rubric alone can be limiting in how it adapts to different evaluation styles. Of course, for someone in a research field, personal differences are exactly what a researcher wants to filter out.
