Because I hope to develop a learning software platform for ASD learners, this course has helped me find the best way to create it. By writing my standard of evaluation to the highest level of specificity, I have set the bar for what I must reach to be successful. But there is so much more that I have learned.
First, if the majority of a class shares one professional background, don’t let the ones from outside this group form their own team. They will go off on some eclectic tangent. Instead, place an outsider within each team of people from the same profession. This will benefit each group by bringing in an outside perspective.
Second, when a group is formed, certain guidelines of “group collaboration” protocol need to be followed, such as deciding who is the group leader, who is the secretary (in charge of the document process), and so forth.
Third, when you commit to a group, a certain level of communication is needed, whether instant messaging, chat rooms, or good old face-to-face meetings at a scheduled time and on a regular basis.
Fourth, there is a “social” aspect to this “hybrid” type of class that is much different from a physical classroom. There is a routine to attending a physical class. The “virtual” classroom is missing this basic level of structure that helps those who tend to be distractible stay focused and attentive.
Fifth and finally, online classes do not provide the entertainment value of watching your professor perform. As I have learned from my other class this semester, only 10% of a subject’s information is to be found in the literature; 90% of the knowledge is in the heads of those in the field. “Online only” classes offer the student only that 10% of the field’s knowledge if they are left without the professor’s “performance.”
Tuesday, December 14, 2010
Analysis of Competing Instruments
I am comparing the Media Evaluation (ME) Traditional Media evaluation form and the Techie Tigers (TT) evaluation form. Any comparison of the two is limited to the Rating Areas each has chosen. The ME and TT evaluations both rely on the use of a rubric, and both use three-tier rubrics.
I come from a science/research background. I look at data. Data is objective. Data must be clearly defined. When comparing the ME evaluation form to the TT evaluation form, what stands out the most is the difference between subjectivity and objectivity. Entering into this project, I have had a problem with the objectivity of each rubric. Each relies on the evaluator being an expert in all the different fields the rubric is drawn from. My question is this: if two people use the same rubric, will they be able to give an identical evaluation? If the answer is yes, then it is a viable scientific assessment. If the answer is no, then the assessment is invalid. For example, when asked “Is this media age appropriate?”, could two people accurately agree on what age the material is intended for?
While the ME evaluation is cumbersome, the Narrative Criteria for each rating area give the evaluator a specific way of addressing the questions. They are intended as an explanation that orients the evaluator’s thoughts toward specific features, encouraging consistency of perception. The information given here is based on published research and offers the evaluator a snapshot of what is considered important in that specific area of learning.
There is an appropriate place for subjective reflection when evaluating any media. This “gut feeling” is difficult to quantify but still has merit to offer. The ME evaluation form needs a place for the evaluator to comment within the context of the rating area being considered.
For the Web 2.0 evaluation, I looked at Shamelle Nash’s evaluation. When comparing the two, I see objective statements in her rubric. Both hers and the ME evaluation give specific outlines for each rating area.
What contrasts our work is that she chose a more balanced, three-format review. She starts with a meta-analysis of her questions, then continues to offer objective responses in her narrative and in her use of a three-tier rubric. In the end, Ms. Nash’s evaluation helps to compensate for different evaluators’ personal approaches to learning material.
I can see that the ME approach of a three-tier rubric alone can be limited in its adaptability to different evaluation styles. Of course, coming from a research field, I see personal differences as exactly the kind of variable a researcher wants to filter out.