Sunday, March 8, 2015

@ONE Designing Effective Online Assessments Course Week 2

Essential course content from this week:

Why Use Traditional Assessments?
The following pairs each advantage of objective tests with an explanation.

Advantage: Students can provide a great deal of information on a broad range of learning outcomes in a short period of time.
Explanation: This is called efficiency. If you have only 45 minutes to assess students and a large number of outcomes or areas to assess, then a multiple choice test will be more efficient than an essay test.

Advantage: Objective tests encourage broader, but shallower, learning than subjective assessments because of their efficiency.
Explanation: If you are interested in seeing whether students have a general understanding of a broad area, such as English literature in the US, then a multiple choice test is a good approach. If you wish to see whether a student understands a specific work of literature, such as Of Mice and Men, then an essay approach would be better.

Advantage: Objective tests are fast and easy to score, although they are difficult to construct well.
Explanation: Because of the time involved in constructing good objective assessments, reuse is essential to their overall utility. This means that the items and tests must be securely stored.

Advantage: Objective assessment results can be summarized into a single number.
Explanation: This number, also called a performance indicator, makes the tests appealing to those who have to make decisions at the course, program, or institution levels.


OrganizedTeaching.com provides additional resources about traditional assessments: some of the basic concepts, links to resources that help you create traditional assessments, and a contrast with authentic assessment.
There are two general types of objective or traditional assessment items: closed-ended (sometimes referred to as forced choice) and open-ended (sometimes referred to as production items). We will first examine some of the specific item formats within each of these two categories, and then discuss principles for designing good items.

Writing Good Multiple Choice Items

There are two basic principles to follow in creating good multiple choice items. In fact, these principles apply to all item types:
  • Remove all barriers that may keep a knowledgeable student from answering the item correctly. Students who have learned the concept or truly know it should choose the correct answer.
  • Remove all clues that would help a less-than-knowledgeable student answer the item correctly. Students who do not know or have not learned the content or skill should answer the item incorrectly.

True/False Items

True/False items are multiple choice items with two alternatives. This very simple item format is used to test whether a student has basic factual knowledge; such items essentially answer the question: is the statement correct or not?

Because True/False items contain only two alternatives they have a number of undesirable characteristics, and should be used in rare instances:
  • Students who haven’t learned or don’t know the content or skill have a high probability of guessing the correct answer (probability = 50% per item; see the sketch after this list).
  • True/False items provide no evidence of where a student went wrong in their thinking.
  • It is difficult to assess thinking skills with True/False items.
  • Students may correctly recognize a false statement without knowing its true counterpart.
  • It can be very difficult to write unambiguous and unqualified statements that are definitely true or false.
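To make the guessing problem concrete, here is a minimal sketch (mine, not from the course) that computes the chance a student who knows nothing passes an all-True/False quiz by guessing alone; the quiz length and passing score are invented for illustration.

```python
# Minimal sketch: probability of passing an all-True/False quiz by pure
# guessing. Each item is an independent coin flip (p = 0.5), so the number
# of correct guesses follows a binomial distribution.
from math import comb

def p_pass_by_guessing(n_items: int, passing_score: int) -> float:
    """Probability of guessing at least `passing_score` of `n_items` correctly."""
    favorable = sum(comb(n_items, k) for k in range(passing_score, n_items + 1))
    return favorable / 2 ** n_items

# Invented example: a 10-item quiz where 7 correct is a pass.
print(f"{p_pass_by_guessing(10, 7):.1%}")  # 17.2%
```

Even with no knowledge at all, roughly one student in six passes this quiz by guessing, which is why True/False items are best reserved for rare, well-chosen cases.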

Some things to keep in mind if you are going to write True/False items:
  • Keep them simple: avoid long statements and lengthy qualifiers.
  • Use them only to assess important factual knowledge: it is easy for these items to descend into trivial details or facts.
  • Avoid negative and double-negative statements: they lead to confusion in the reader.
  • Keep the proportion of true statements close to 50% of the items.
  • Be careful not to have an obvious pattern of true vs. false responses.


Matching Items

Matching items are another special case of the multiple choice item format. In this format, a common set of alternatives applies to a set of questions. Matching items can be used to assess thinking skills, especially the application of knowledge to new situations, as well as basic knowledge.

The following are some general guidelines for Matching items:
  • A matching set should consist of homogeneous items, that is, every option in the answer set should be a plausible answer for every item or question.
  • There should be an unequal match between the column of answers and the set of questions: students should be allowed to use the same answer for more than one question, or not use some alternatives at all. A one-to-one match between the two columns increases the likelihood of guessing through elimination (see the sketch after this list).
  • Make it easy for students who know the material to select the correct answer. Keep the answers to a single word or short phrase. The questions should be longer statements.
  • Give clear directions: it is important to explain how the two columns are related; and that some options may not be used at all while others may be used more than once.
  • Be creative: you can use more than words or text, for example, charts, graphs, pictures, videos, etc.
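As a concrete illustration of the guidelines above, here is a minimal sketch of an "unequal match" set; the authors and works are invented example content, echoing the literature example earlier in these notes.

```python
import random

# Questions are longer statements; answers are short, homogeneous options.
key = {
    "Wrote 'Of Mice and Men'": "Steinbeck",
    "Wrote 'The Grapes of Wrath'": "Steinbeck",   # one answer used twice
    "Wrote 'The Old Man and the Sea'": "Hemingway",
}

# The answer column mixes the correct answers with plausible distractors
# that are never used, so the columns do not match one-to-one and
# elimination alone cannot reveal the final pairing.
options = sorted(set(key.values()) | {"Faulkner", "Fitzgerald"})

prompts = list(key)
random.shuffle(prompts)

print("Options (use any more than once, or not at all):", ", ".join(options))
for i, prompt in enumerate(prompts, 1):
    print(f"{i}. {prompt}")
```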

Completion or Fill-in-the-Blank items

Completion items are, in effect, multiple choice items with no alternatives provided. Instead, the student must generate their own response, usually a word, phrase, number, or symbol. Completion items should have only a single correct answer; if they have more than one correct answer, then they are not, strictly speaking, objective items. Completion items are a good alternative for testing essential facts where you do not wish to give the student the opportunity to simply recognize the correct response from a list. Thus, these items test recall of facts rather than recognition of facts. If you wish to test deeper understanding of content and its application, use either performance assessments or open-ended items.

Completion items are widely used in mathematics, where they can test thinking skills in addition to memory. Using completion items in this discipline reduces the likelihood of guessing or of working backwards from the different alternatives of a multiple choice item. You can also use students' incorrect answers to completion items as distractors when creating future multiple choice items.
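A minimal sketch of scoring such an item, assuming a single correct answer (the answer here is invented): normalizing case and spacing removes trivial barriers for knowledgeable students without giving anything away.

```python
def check_completion(response: str, correct_answer: str = "photosynthesis") -> bool:
    """True only if the normalized response matches the one correct answer."""
    return response.strip().lower() == correct_answer

print(check_completion("  Photosynthesis "))  # True: case and spacing ignored
print(check_completion("respiration"))        # False
```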

Maximize Learning with Closed-Ended Items

TIP – one way to elevate the usefulness of this type of assessment is to include targeted feedback on each item. Regardless of whether the student gets the answer right or wrong, they are able to learn WHY it was the right or wrong choice (a minimal sketch follows the list below).
Key components of the instant feedback are:
  • Provide feedback for correct and incorrect answers.
  • Be enthusiastic about a correct answer. While providing feedback for correct answers may feel redundant, it is another opportunity to reinforce learning. Also, consider a student who was not 100% sure of the correct answer but made their best guess and chose correctly. In this case, they were not completely clear on the reasoning behind why the answer was correct, and the feedback you provide will bring the needed clarity.
  • Identify why the answer was correct or incorrect and, if incorrect, what the best response would be and why.
  • Provide page numbers or a reference to the source information that would have provided them with the correct answer.
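Here is a minimal sketch of how such targeted feedback might be attached to an item; the stem, options, feedback text, and page reference are all invented for illustration.

```python
# Each option carries its own feedback so the student learns WHY their
# choice was right or wrong, plus a pointer back to the source material.
item = {
    "stem": "Which format is slow to construct but fast and easy to score?",
    "options": {
        "A": {"text": "Essay", "correct": False,
              "feedback": "Essays are quick to write but slow to score. See p. 12."},
        "B": {"text": "Multiple choice", "correct": True,
              "feedback": "Good items take time to write, but scoring is mechanical. See p. 12."},
        "C": {"text": "Oral exam", "correct": False,
              "feedback": "Oral exams are slow to run and to score. See p. 12."},
    },
}

def respond(item: dict, choice: str) -> str:
    """Return the targeted feedback for whichever option the student picked."""
    option = item["options"][choice]
    verdict = "Correct!" if option["correct"] else "Not quite."
    return f"{verdict} {option['feedback']}"

print(respond(item, "A"))  # Not quite. Essays are quick to write but slow to score...
```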
In some cases, correct and incorrect answers can feed a whole learning tree leading to tutorials or content that either remediates or advances the student through the material. To find out more about how this process can be used most effectively, watch (optional):
2010 Online Teaching Conference June 18 General Session:
Continuous Improvement in Teaching and Learning: The Community College Open Learning Initiative (1:11:25 uncaptioned YouTube video opens in new window)

by Candace Thille, Director of the Open Learning Initiative (OLI) at Carnegie Mellon University (opens new window)

Using intelligent tutoring systems, virtual laboratories, simulations, and frequent feedback, the Community College-Open Learning Initiative (CC-OLI) builds open and free learning environments that support continuous improvement in teaching and learning. CC-OLI is a development and research collaboration between Carnegie Mellon University and community colleges across the country. We will discuss how you can use these free web-based learning environments to support your teaching and your students' learning, and how faculty and colleges across the country can participate in the development, adaptation, and evaluation of these environments.

Open-Ended Items

The item formats presented so far are all referred to as “objective” items because they can be scored with a simple, straightforward answer key that any individual can apply reliably. They are “objective” in the sense that a person’s score does not vary with the scoring process or with who applies it. The open-ended item formats and the performance assessment formats introduce “subjective” items: there may be more than one correct answer, and the scoring often depends on who does the scoring.
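A minimal sketch of what that objectivity looks like in practice (the key and responses are invented): the answer key is applied mechanically, so every scorer produces the same number.

```python
# An answer key applied mechanically: no judgment involved, so any scorer
# (or a program) produces exactly the same score.
answer_key = {1: "B", 2: "D", 3: "A", 4: "True", 5: "C"}

def score(responses: dict) -> int:
    """Count the responses that match the key."""
    return sum(responses.get(q) == answer for q, answer in answer_key.items())

student = {1: "B", 2: "C", 3: "A", 4: "True", 5: "C"}
print(f"Score: {score(student)}/{len(answer_key)}")  # Score: 4/5
```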

Open-ended items allow the student to produce their own response rather than select from alternative responses presented by the instructor. They are similar to completion or “fill-in-the-blank” items, except that the responses are longer (more than a single word or phrase) and there is more than one correct answer.

Open-ended items are used most frequently in disciplines like literature, English, art, philosophy, history, and reading. The most common forms of open-ended items are short answer and essay questions; the primary difference between these two item formats is the length of response expected from the student. These formats are good for testing deep understanding of a topic area, understanding of relationships within a topic area, or the ability to synthesize or draw conclusions and implications.


Content Analysis

Content analysis is a technique developed in the early 1930s but popularized in the 1960s by Glaser.

The technique provides a systematic mechanism for analyzing bodies of text and reducing them to one or more numbers that can then be statistically summarized and analyzed. Frequently the approach is a count of certain keywords or phrases, or of common communication structures such as rhetorical questions or passive voice. On today’s web, you can see examples of content analysis in features such as “trending” and “popular tags” and in applications such as Wordle (http://www.wordle.net/).
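For instance, here is a minimal sketch of the quantitative, keyword-counting approach; the text and keyword list are invented.

```python
import re
from collections import Counter

text = """Assessment drives learning. Good assessment gives feedback,
and feedback drives improvement in both teaching and learning."""

# Reduce the body of text to numbers: counts of chosen keywords.
keywords = {"assessment", "feedback", "learning", "teaching"}
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(word for word in words if word in keywords)

print(counts)  # assessment, learning, and feedback appear twice; teaching once
```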

Ole Holsti (1969) groups 15 uses of content analysis into three basic categories:
  • make inferences about the antecedents of a communication
  • describe and make inferences about characteristics of a communication
  • make inferences about the effects of a communication.

He also places these uses into the context of the basic communication paradigm.

The following lists the fifteen uses of content analysis by general purpose, the element of the communication paradigm to which each applies, and the general question each is intended to answer.

Purpose: Make inferences about the antecedents of communications
  • Element: Source. Question: Who? Use: answer questions of disputed authorship (authorship analysis).
  • Element: Encoding process. Question: Why? Uses: secure political & military intelligence; analyze traits of individuals; infer cultural aspects & change; provide legal & evaluative evidence.

Purpose: Describe & make inferences about the characteristics of communications
  • Element: Channel. Question: How? Uses: analyze techniques of persuasion; analyze style.
  • Element: Message. Question: What? Uses: describe trends in communication content; relate known characteristics of sources to the messages they produce; compare communication content to standards.
  • Element: Recipient. Question: To whom? Uses: relate known characteristics of audiences to messages produced for them; describe patterns of communication.

Purpose: Make inferences about the consequences of communications
  • Element: Decoding process. Question: With what effect? Uses: measure readability; analyze the flow of information; assess responses to communications.


Performing a Content Analysis

According to Dr. Klaus Krippendorff (1980 and 2004), six questions must be addressed in every content analysis:
  1. Which data are analyzed?
  2. How are they defined?
  3. What is the population from which they are drawn?
  4. What is the context relative to which the data are analyzed?
  5. What are the boundaries of the analysis?
  6. What is the target of the inferences?

It should be noted that there are two general types of content analysis: quantitative and qualitative. In the quantitative approach, the essential data is a count of keywords or phrases. In the qualitative approach, the content is categorized and classified; this approach may also produce “numbers” that represent the categories or classifications.
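As a minimal sketch of the qualitative side (the categories, codes, and responses are invented), each piece of content is classified and the category encoded as a number so the classifications can be tallied:

```python
from collections import Counter

# Categories are assigned by a human coder; numeric codes make them countable.
category_codes = {"procedural question": 1, "content question": 2, "off topic": 3}

coded_responses = [
    ("When is the exam?", "procedural question"),
    ("Why did the author end the novel that way?", "content question"),
    ("Nice weather today.", "off topic"),
]

codes = [category_codes[category] for _, category in coded_responses]
print(Counter(codes))  # one response in each category
```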


Providing Feedback
We say that assessment can be more than just a mechanism for grading and benchmarking our students; it can truly be an agent for learning. A powerful stance to take when providing feedback is to assume the role of a coach, helping guide the student back to the right path, as opposed to simply offering a judgment of correct or incorrect.

The key components of instant feedback are the same as those listed above under Maximize Learning with Closed-Ended Items.
From The Chronicle of Higher Ed, Cheating Lessons series by James M. Lang, part 3, Aug. 19, 2013:

Frequent low-stakes assessment, combined with a firm and consistent academic integrity policy, reduces cheating and promotes learning.
Online assessment tools outside of Blackboard include SurveyMonkey, Zoomerang, Doodle, and Google Docs.


For more on how to use Google forms:
Matt Silverman's directions (opens new window).
Using Google Forms to Create an Online Quiz (4:54 YouTube video - uncaptioned)

Materials and Resources for Traditional Assessments
Overview on Designing Test Questions

Multiple Choice Items and Tests

Matching Items

My assignment for this week: A traditional assessment for one course SLO

Traditional Assessment for Noncredit ESL Level 7


SLO: Interpret meaning from a variety of authentic readings in identified areas of interest

15 questions

Information:
In my NCESL department, we assess the reading SLO listed here, as required by the federal grant we receive, using the CASAS reading test (life-skills types of reading; see the example questions for Level D, not literature). But I wanted to create something that would go along with this performance-based assessment and with a lesson I have created on reading a college class schedule, to help students prepare for the transition to credit classes.

I have found, in helping students (even those with university degrees from their countries of origin) apply and register for classes, that the procedures and class schedule are confusing and new to them. The test I created would follow lessons on these topics.

