Friday, July 30, 2010
Permutation Test
In linguistics, the permutation test is a valuable tool for analyzing abundant and highly divergent language data for which the usual normality assumptions about the population cannot be made. The data are typically obtained by comparing two languages in contact, and the purpose of the test is usually to detect syntactic differences between those languages. Factors influencing the degree of difference include the speakers' mother tongue, the other languages they know, the length and timing of their experience in the second language, and the role of formal instruction. The permutation test makes the data from such studies tractable: it yields not only a numerical value for the difference between two syntactic varieties, but also a measure of confidence for examining the source of the difference and for answering linguistic questions about the relative stability or volatility of syntactic structures. Because complete enumeration of all permutations is infeasible for large data sets, Monte Carlo sampling is introduced as a complement: it approximates the permutation distribution from random samples and thus makes testing on large corpora, as in linguistic studies, practical. Below is an example of a linguistic application of the permutation test (a code sketch follows the steps):
1. Compute the difference between the two vectors of trigrams from the two languages under comparison; this difference is the test statistic.
2. Permute a pair of sentences between the two sub-corpora and compare the resulting two vectors of trigrams (i.e., compute the test statistic for this permutation).
3. Repeat step (2) many times, e.g. 10,000 times, picking sentences at random each time (the Monte Carlo technique).
4. Estimate the statistical significance: the probability (the p-value) that a difference as large as the one observed in the original samples arose by chance.
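Below is a minimal Python sketch of this procedure, for concreteness. The trigram extraction, the choice of test statistic (here the sum of absolute frequency differences), and all names are illustrative assumptions, not a fixed method; note also that the sketch reshuffles the whole pooled corpus on each iteration, a common variant of the pairwise swapping described in step (2).

```python
import random
from collections import Counter
from itertools import chain

def trigrams(sentence):
    """Split a sentence into word trigrams."""
    words = sentence.split()
    return [tuple(words[i:i + 3]) for i in range(len(words) - 2)]

def trigram_vector(sentences):
    """Aggregate trigram counts over one sub-corpus into a frequency vector."""
    return Counter(chain.from_iterable(trigrams(s) for s in sentences))

def difference(vec_a, vec_b):
    """Illustrative test statistic: total absolute difference in trigram counts."""
    return sum(abs(vec_a[k] - vec_b[k]) for k in set(vec_a) | set(vec_b))

def permutation_test(corpus_a, corpus_b, n_permutations=10_000, seed=0):
    """Monte Carlo permutation test on two sub-corpora (lists of sentences)."""
    rng = random.Random(seed)
    observed = difference(trigram_vector(corpus_a), trigram_vector(corpus_b))
    pooled = corpus_a + corpus_b
    n_a = len(corpus_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # random reassignment of sentences (Monte Carlo)
        stat = difference(trigram_vector(pooled[:n_a]),
                          trigram_vector(pooled[n_a:]))
        if stat >= observed:
            extreme += 1
    # p-value: share of permutations at least as extreme as the observed split
    return (extreme + 1) / (n_permutations + 1)
```

A small p-value (e.g. below .05) suggests that the syntactic difference between the two sub-corpora is unlikely to be due to chance.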
Tuesday, July 27, 2010
Test Items Analyses
Item Discrimination Analysis
Group 1: 0.40 and up
- Items number (3), (4), (6), and (8) are considered very good items. They are candidates for retention in the revised test.
- For item (3), three students from the strong category and one weak student gave the correct answer.
- Item (4) is an ideal item: its IF is 0.50 and its ID is 1.00, which means it is well centered. Fifty percent of the students answered correctly and the other 50 percent answered incorrectly.
- For item (6), only one weak student gave the correct answer.
- Item (8) is acceptable because its IF is 0.63 and its ID is 0.40.
Group 2: 0.20 to 0.29
- Items (1) and (10) are marginal items. They are candidates for improvement in the revised test.
- For item (1), most students answered correctly; only one weak student gave a wrong answer. The item therefore needs to be revised to make the distractors more efficient.
- For item (10), almost all students gave a wrong answer; only one student, from the strong category, answered correctly. It, too, needs to be improved.
Group 3: Below 0.19
- Items number (2), (5), (7), and (9) are poor items. They should be discarded or improved by revision.
- For item (2), the number of students from the strong category who answered correctly is exactly the same as the number from the weak category.
- Item (5) is clearly a bad item because all students answered correctly.
- Item (7) should also be rejected because not a single student from the strong category gave the correct answer, whereas all the weak students did.
- For item (9), more weak students than strong students answered correctly; only one student from the strong category got it right.
Test-Retest Reliability
Since the same reading comprehension test is administered twice, over a period of time, to one group of students, we use the test-retest strategy to estimate the test's reliability. The calculation uses Pearson's product-moment correlation coefficient:
r = [Σxy – (Σx)(Σy)/n] / √[(Σx² – (Σx)²/n)(Σy² – (Σy)²/n)]
r = [180.91 – (0.04 × 0.08)/10] / √[(173.88 – 0.04²/10)(353.68 – 0.08²/10)]
r = 0.729178253 → rounded to 0.73
From this Pearson product-moment calculation between the two sets of scores, we get a reliability estimate of .73, which means that about 73% of the variance in the students' observed test scores is attributable to true ability and the other 27% is attributable to error.
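For reference, here is a small Python sketch of the raw-score Pearson formula used above. The two score lists would be the students' scores on the first and second administrations; the data themselves are not reproduced here.

```python
from math import sqrt

def pearson_r(x, y):
    """Raw-score Pearson product-moment correlation between two score lists."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    numerator = sum_xy - (sum_x * sum_y) / n
    denominator = sqrt((sum_x2 - sum_x ** 2 / n) * (sum_y2 - sum_y ** 2 / n))
    return numerator / denominator
```

Feeding in the summary values shown above (Σxy = 180.91, Σx = 0.04, Σy = 0.08, and so on) reproduces the r ≈ 0.73 estimate, up to the rounding of those summary values.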
Split-Half Reliability
Since the test can be administered only once, the test items (40 in total) are divided equally into two categories: the odd-numbered items and the even-numbered items, just as though the scores came from two different forms. The two sets of scores are then correlated. The resulting coefficient gives the reliability of either the odd-numbered or the even-numbered items, that is, of only half the test. It must therefore be adjusted to provide a coefficient that represents the full-test reliability. This adjustment of the half-test correlation is accomplished with the Spearman-Brown formula.
The half-test correlation coefficient:
r = [nΣxy – (Σx)(Σy)] / [√(nΣx² – (Σx)²) × √(nΣy² – (Σy)²)]
r = [(30 × 367.52) – (1.5 × –0.1)] / [√((30 × 405.12) – 1.5²) × √((30 × 389.5) – 0.1²)]
r = 0.919036585 → rounded to 0.92
Spearman-Brown Formula:
rsb = 2rxy / (1 + rxy)
rsb = (2 × 0.92) / (1 + 0.92)
rsb = 0.96
After adjusting with the Spearman-Brown formula, we get a full-test reliability value of .96, which is almost perfect. The test items can therefore be considered consistent, or stable, under repeated administration. Only a small percentage (4%) of the test score variance is attributable to error; the other 96% reliably represents the true ability of the assessed test takers.
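A compact sketch of the whole split-half procedure, under the assumption that each student's item-level scores (0/1 per item) are available; statistics.correlation (Python 3.10+) computes the same Pearson coefficient as the formula above.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

def split_half_reliability(item_scores):
    """Full-test reliability: odd/even half-test correlation plus Spearman-Brown.

    item_scores: one list per student, one 0/1 (or point) score per item.
    """
    odd_totals = [sum(s[0::2]) for s in item_scores]   # items 1, 3, 5, ...
    even_totals = [sum(s[1::2]) for s in item_scores]  # items 2, 4, 6, ...
    r_half = correlation(odd_totals, even_totals)      # half-test coefficient
    return (2 * r_half) / (1 + r_half)                 # Spearman-Brown adjustment
```

With a half-test coefficient of 0.92, the function returns (2 × 0.92) / 1.92 ≈ 0.96, matching the calculation above.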
The item facility index is a statistical technique used in test item analysis to examine the percentage of students who answered a particular item correctly. The objective of analyzing an item's Item Facility (IF) is to evaluate to what extent the item is easy or difficult for the students. The IF value is calculated with the following equation:
IF = Ncorrect / Ntotal
where:
Ncorrect = number of students answering correctly
Ntotal = total number of students taking the test
The result of this formula is an item facility value that can range from 0.00 to 1.00. Thus, if 45 out of 50 students answered a particular item correctly, the proportion would be 45/50 = .90. An IF of .90 means that 90% of the students answered the item correctly and, by extension, that the item is very easy.
Item Discrimination (ID) is a statistic that indicates the degree to which an item separates the students who performed well from those who did poorly on the test as a whole. These groups are sometimes referred to as the "high" and "low" scorers or the "upper" and "lower" proficiency students. Item discrimination is calculated by first identifying the upper and lower students on the test (using their total scores to sort them from highest to lowest). The upper and lower groups should be made up of equal numbers of students, each representing approximately one third of the total group. The formula is as follows:
ID = IF upper – IF lower
ID = item discrimination for an individual item
IF upper = item facility for the upper group on the whole test
IF lower = item facility for the lower group on the whole test
Ideal items in an NRT should have an average IF of .50. Such items would be well centered: 50 percent of the students would have answered correctly and, by extension, 50 percent would have answered incorrectly. In reality, however, items rarely have an IF of exactly .50, so those that fall in the range between .30 and .70 are usually considered acceptable for NRT purposes.
Once those items that fall within the .30 to .70 range of IFs are identified, the items among them that have the highest IDs should be further selected for inclusion in the revised test. This process would help the test designer to keep only those items that are well centered and discriminate well between the high and the low scoring students.
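The IF and ID computations, together with this selection rule, can be sketched in a few lines of Python. The response-matrix layout and the one-third group size are assumptions for illustration; the .30 to .70 IF band follows the text.

```python
def item_facility(responses, item):
    """IF: proportion of students answering the given item correctly."""
    return sum(student[item] for student in responses) / len(responses)

def item_discrimination(responses, item, group_size):
    """ID = IF(upper group) - IF(lower group).

    `responses` (one 0/1 list per student) must be sorted by total score,
    highest first; `group_size` is roughly one third of the students.
    """
    upper, lower = responses[:group_size], responses[-group_size:]
    return item_facility(upper, item) - item_facility(lower, item)

def select_items(responses, n_items):
    """Keep well-centered items (.30 <= IF <= .70), best discriminators first."""
    ranked = sorted(responses, key=sum, reverse=True)  # sort students by total
    third = len(ranked) // 3
    kept = []
    for item in range(n_items):
        if 0.30 <= item_facility(ranked, item) <= 0.70:
            kept.append((item_discrimination(ranked, item, third), item))
    return [item for _, item in sorted(kept, reverse=True)]
```

Applied to the ten items analyzed above, such a routine would, for example, retain item (4) (IF = 0.50, ID = 1.00) and reject item (5), which every student answered correctly.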
THE ROLE OF MULTIPLE INTELLIGENCES AND LEARNING STYLES IN CONSTRUCTING READING ASSESSMENT FOR TEENAGE ENGLISH LEARNERS
CHAPTER I: INTRODUCTION
1.1 Background of the Study
Reading is one of the most important skills to develop in the process of learning English, since students must integrate all their knowledge of the language components to comprehend a reading passage. Through reading, students may learn new vocabulary, the structures of the language, or even the cultural aspects of a passage. Students' progress in reading can be monitored and analyzed through appropriate assessment. Despite the importance of this skill, teachers sometimes find it difficult to assess students' progress objectively in real classroom practice. They often find surprising outcomes on a given reading assessment: the results do not seem to reflect each student's true ability, and the students do not appear genuinely interested or eager when doing the assessment. Based on my teaching experience, this may be caused by the traditional approach of standardized assessment, which does not adequately accommodate students' learning styles and sometimes neglects their other types of intelligences. This is the main reason this study proposes research into these influential factors.
According to Dr. Howard Gardner, the psychologist who developed the theory of Multiple Intelligences, standardized tests measure only linguistic and logical/mathematical intelligences, within artificial settings, and tend to ignore capability in the other intelligences. In his view, assessment should measure students' learning processes in order to obtain information about their understanding of skills and knowledge as well as their approach to solving problems. In addition, assessment, from Gardner's perspective, should connect class work to real-life experiences and to applying knowledge in new situations. There is therefore a need to analyze how a reading assessment can accommodate students' learning styles, so that it truly reflects how well students have learned and yields results reliable enough to measure their true ability without ignoring their other types of intelligences.
1.2 Statement of the Problem
Many teenage students of English are not reaching their full potential because they are less adept than their peers in the two traditional intelligences: verbal-linguistic and logical-mathematical. In addition, these students may have unique individual learning styles that do not always match the learning style expected by a given reading material. It is therefore unfair if their progress in reading is measured by standardized tests that cover only the two traditional intelligences mentioned above. To address this problem, this study analyzes the role of multiple intelligences and learning styles as influential factors in constructing reading assessment. The result of the study will be a sample reading assessment that accommodates the unique learning styles with which students learn best and objectively measures their true ability. The study therefore focuses only on teenage English learners, whose learning styles and multiple intelligences are the influential factors to be considered in constructing reading assessment.
1.3 Research Questions
1. To what extent should a teacher consider students' learning styles and multiple intelligences in constructing reading assessment?
2. How can a teacher construct a reliable reading assessment that objectively accommodates students’ learning styles with which they can learn best and measures their true ability?
1.4 Objective
The aim of this study is to strengthen the focus on students' learning styles and to reveal the importance of considering students' multiple intelligences in constructing reading assessment. The discussion is expected to produce a sample reading assessment that serves as a model for objectively accommodating students' learning styles and measuring their true ability, which in turn is hoped to increase students' confidence and build a strong positive attitude toward learning English.
1.5 Significance of the Study
This study is expected to contribute to the development of reading assessment theory and to the quality of educational practitioners, especially teachers and test makers, in constructing reading assessments that take students' unique learning styles and multiple intelligences into account in order to measure their true ability in reading objectively.
CHAPTER TWO: REVIEW OF RELATED LITERATURE
The theory of Multiple Intelligences
Dr. Howard Gardner, a psychologist and professor at Harvard University, developed the theory of Multiple Intelligences (MI) in 1983. The theory is an important contribution to educational practice and to reform movements around the world: it challenges the traditional view of "IQ" and enables educators to take a renewed look at our views of learning and development. In his book Frames of Mind, Gardner questioned the validity of the "IQ" score as a measure of human intelligence, because IQ tests only measure one's ability to handle academic subjects and predict little about success in later life. He proposed that there are at least seven basic intelligences: (1) Visual/Spatial Intelligence, (2) Musical Intelligence, (3) Verbal/Linguistic Intelligence, (4) Logical/Mathematical Intelligence, (5) Interpersonal Intelligence, (6) Intrapersonal Intelligence, and (7) Bodily/Kinesthetic Intelligence. More recently, in 1996, Gardner added an eighth intelligence, the naturalist intelligence, to his theory. As Gardner pointed out, "it is not if you are smart, but how you are smart" (Gardner, 1983). The following criteria have been used in MI theory to identify an intelligence: it "entails the ability to solve problems," it involves a "biological proclivity," it has "an identifiable neurological core operation or set of operations," and it is susceptible to "encoding in a symbol system…which captures and conveys important forms of information" (Gardner 1999: 15-16). These different kinds of intelligences reflect learners' myriad ways of interacting with the world. Although each person possesses all the intelligences to some degree, some are more strongly exhibited than others. Depending on stimuli and education, the intelligences can be nurtured and strengthened, or ignored and weakened.
a. Description of the Eight Intelligences:
1) Linguistic intelligence involves the capacity to use language effectively and creatively, whether in writing or in speaking. Linguistic people like to use language to express their ideas, convey information, and understand other people. They are good at memorizing names, places, and other detailed information.
2) Logical-mathematical intelligence is the ability to use numbers effectively and to engage in higher-order thinking. People with this intelligence like to reason, analyze problems, work with numbers, and explore patterns and relationships.
3) Spatial intelligence is the ability to perceive and manipulate objects or forms mentally and then to express those perceptions either mentally or concretely. Spatial learners can hold and manipulate mental pictures from various perspectives. They like to learn and think through visual stimuli and tend to organize things spatially, so they learn best through graphic images.
4) Bodily-kinesthetic intelligence involves using the whole body, or parts of the body, to solve problems and to express ideas and emotions. Bodily-kinesthetic learners like to touch, talk, create things, and move around. They are good at physical activities such as dance, hands-on tasks, constructing models, and any kind of movement.
5) Musical intelligence is the capacity to think and express oneself in musical forms. People with this intelligence are sensitive to melody, sound, pitch, and tone. They learn best through activities in which they discriminate, transform, and express sounds.
6) Interpersonal intelligence involves the capacity to perceive others' feelings, intentions, and motivations. Interpersonal learners can read cues from facial expressions, gestures, or intonation and respond effectively to those cues. They like to join groups, communicate with others, and make many friends. Such learners learn best by interacting with people, cooperating, and leading others.
7) Intrapersonal intelligence means that learners have the ability to understand themselves. They have a clear picture of who they are, what they can do, and what they want to do. They like to work alone and achieve their own goals. They learn best by getting in touch with their inner moods, intentions, and self-motivations.
8) Naturalist intelligence enables learners to relate better to their surroundings. They show a strong interest in animals and natural phenomena. Being outside, making observations about subtle changes in the environment, and interacting with plants and animals allow such learners to perform with more confidence and ease.
b. Key points in Multiple Intelligences theory
Following Thomas Armstrong (1994: 11-12), four points are listed below to display a few of the key ideas of MI theory.
1) Each person possesses capacities in all the intelligences. A few people display extremely high levels of functioning in all of them, while others appear to display hardly any. Most of us, however, possess some highly developed intelligences as well as some weaker ones.
2) Most people have the capacity to develop each intelligence to an adequate level of competency. Environmental influences such as school instruction, parents, and exposure to cultural activities can combine to strengthen or weaken particular intelligences. Given appropriate instruction and encouragement, all the intelligences can develop and reach a higher level.
3) Intelligences usually work together in complex ways. No intelligence works alone; the intelligences always interact with one another. For example, to make a cake one must read the recipe, weigh the flour, and decide on a flavor that satisfies all the members of the family as well as one's own preference. Making a cake thus draws on the linguistic, logical-mathematical, interpersonal, and intrapersonal intelligences.
4) There are many ways to be intelligent within each category. A person may, for instance, be unable to read yet be highly linguistic because he or she can tell a wonderful story; there is no standard set of attributes one must have in order to be considered intelligent in a given area.
The Theory of Learning Styles and the Application of Multiple Intelligences
Everyone has different learning styles and preferences. Some people find they have one preferred style of learning, or way of encountering the world, and less facility or experience with other styles; others find they use different styles in different situations. As teachers, we need to know our students' learning preferences, help them make good use of their learning styles, and develop ability in their less dominant ones. Teachers therefore need to present information in a variety of styles. This variety in the presentation of content and in the overall instructional approach helps students learn better and more quickly, especially when the chosen teaching methods match their preferred learning styles; it also lets students learn in ways other than their preferred style. Various schemes are used to distinguish learners' styles. Kanar (1995), in her book The Confident Student, describes the three most common styles: (1) visual, (2) auditory, and (3) kinesthetic. Teachers can integrate the following teaching strategies into class to meet students' learning styles.
A. Visual Learning Style
The visual learning style involves learning through seeing images, as in reading or writing tasks. Such students learn better by writing information down, reading, and watching. They seem to hold vivid images in their minds, so visual learners can easily recall what they learn with a glance at the context.
Strategies for Teaching Visual Students
1. Present various visual materials in class. For example, pictures, charts, flash cards, videos, and maps are good resources for visual learners.
2. Use bright colors to draw or write key points or concepts on the board.
3. Write detailed information in handouts for students to reread.
4. Draw pictures on the board when necessary, or have students draw pictures on the board or in the margin to connect the concepts.
5. Provide assignments that involve writing and reading.
B. Auditory Learning Style
The auditory learning style involves filtering and absorbing information through listening. Auditory learners learn better by talking to people and hearing what is said. They may, however, have some problems with reading and writing.
Strategies for Teaching Auditory Students
1. Give a brief explanation about the content of the lesson in the beginning and summarize the new material at the end of the class.
2. Have students read out loud the questions or whisper new information to themselves.
3. Auditory activities such as group discussion, brainstorming, and presentation all allow students to acquire auditory stimuli.
4. Advise the students to take notes by using tape recorders so that they can review what they learn or discuss in the class.
5. Ask questions and encourage students to share their ideas.
C. Kinesthetic Learning Style
The kinesthetic learning style involves learning through moving and touching. These learners tend to have more difficulty paying attention in the traditional classroom. They like to talk about what they learn and to express emotion physically. They learn best through physical experience such as touching, holding, or doing hands-on activities.
Strategies for Teaching Kinesthetic Students
1. Advise students to take notes during lectures and underline the key points in the text.
2. Provide activities such as role-plays, project work, and games to help students engage in learning.
3. Take frequent stand-up and stretch breaks.
4. Have students transfer new information from the text books to another medium such as computers or posters.
5. Provide objects related to the subject of the lesson so that students can learn by touching, feeling, or operating the objects.
The more detailed table below expands on the original seven intelligences shown above and suggests ideas for applying the model and its underpinning theories, so as to optimize learning and training, design accelerated learning methods, and assess training and learning suitability and effectiveness.
1. Linguistic
- Description: words and language, written and spoken; retention, interpretation and explanation of ideas and information via language; understands the relationship between communication and meaning.
- Typical roles: writers, lawyers, journalists, speakers, trainers, copy-writers, English teachers, poets, editors, linguists, translators, PR consultants, media consultants, TV and radio presenters, voice-over artistes.
- Related tasks, activities or tests: write a set of instructions; speak on a subject; edit a written piece of work; write a speech; commentate on an event; apply positive or negative 'spin' to a story.
- Preferred learning style clues: words and language.

2. Logical-Mathematical
- Description: logical thinking, detecting patterns, scientific reasoning and deduction; analysing problems, performing mathematical calculations; understands the relationship between cause and effect towards a tangible outcome or result.
- Typical roles: scientists, engineers, computer experts, accountants, statisticians, researchers, analysts, traders, bankers, bookmakers, insurance brokers, negotiators, deal-makers, trouble-shooters, directors.
- Related tasks, activities or tests: perform a mental arithmetic calculation; create a process to measure something difficult; analyse how a machine works; create a process; devise a strategy to achieve an aim; assess the value of a business or a proposition.
- Preferred learning style clues: numbers and logic.

3. Musical
- Description: musical ability, awareness, appreciation and use of sound; recognition of tonal and rhythmic patterns; understands the relationship between sound and feeling.
- Typical roles: musicians, singers, composers, DJs, music producers, piano tuners, acoustic engineers, entertainers, party-planners, environment and noise advisors, voice coaches.
- Related tasks, activities or tests: perform a musical piece; sing a song; review a musical work; coach someone to play a musical instrument; specify mood music for telephone systems and receptions.
- Preferred learning style clues: music, sounds, rhythm.

4. Bodily-Kinesthetic
- Description: body movement control, manual dexterity, physical agility and balance; eye and body coordination.
- Typical roles: dancers, demonstrators, actors, athletes, divers, sports-people, soldiers, fire-fighters, PTIs, performance artistes; ergonomists, osteopaths, fishermen, drivers, crafts-people; gardeners, chefs, acupuncturists, healers, adventurers.
- Related tasks, activities or tests: juggle; demonstrate a sports technique; flip a beer-mat; create a mime to explain something; toss a pancake; fly a kite; coach workplace posture; assess work-station ergonomics.
- Preferred learning style clues: physical experience and movement, touch and feel.

5. Spatial-Visual
- Description: visual and spatial perception; interpretation and creation of visual images; pictorial imagination and expression; understands the relationship between images and meanings, and between space and effect.
- Typical roles: artists, designers, cartoonists, story-boarders, architects, photographers, sculptors, town-planners, visionaries, inventors, engineers, cosmetics and beauty consultants.
- Related tasks, activities or tests: design a costume; interpret a painting; create a room layout; create a corporate logo; design a building; pack a suitcase or the boot of a car.
- Preferred learning style clues: pictures, shapes, images, 3D space.

6. Interpersonal
- Description: perception of other people's feelings; ability to relate to others; interpretation of behaviour and communications; understands the relationships between people and their situations, including other people.
- Related tasks, activities or tests: interpret moods from facial expressions; demonstrate feelings through body language; affect the feelings of others in a planned way; coach or counsel another person.
- Preferred learning style clues: human contact, communications, cooperation, teamwork.
7. Intrapersonal
- Description: self-awareness, personal cognisance, personal objectivity; the capability to understand oneself, one's relationship to others and the world, and one's own need for, and reaction to, change.
- Typical roles: arguably anyone who is self-aware and involved in the process of changing personal thoughts, beliefs and behaviour in relation to their situation, other people, and their purpose and aims; there is a clear association between this type of intelligence and what is now termed 'Emotional Intelligence' or EQ.
- Related tasks, activities or tests: consider and decide one's own aims and the personal changes required to achieve them (not necessarily revealing this to others); consider and decide one's own options for development.
- Preferred learning style clues: self-reflection, self-discovery.
NEWER FORMS OF READING ASSESSMENT AND THE APPLICATION OF MULTIPLE INTELLIGENCES
Standardized tests, the prevailing kind of measurement, are no longer considered adequate to fully judge students' progress in reading skill. Test makers now tend to develop new forms of reading assessment that are more performance-based and authentic (Mitchell, 1992; O'Neil, 1992) and that can accommodate more of students' learning styles and multiple intelligences. Lamme & Hysmith (1991), Mitchell (1992), and Wiggins (1992) argue that the newer forms of assessment are designed to bring about alignment and congruence between enlightened concepts of what reading is and how it should be taught, on the one hand, and the assessment of reading on the other. Teachers should no longer feel compelled to "teach to tests," since tests will be in harmony with good teaching practices. In the past, there was clear evidence that teachers frequently narrowed their curriculum to improve test scores (Herman & Golan, 1991; Shepherd, 1991; Smith & Rottenberg, 1991).
Some of the characteristics of new reading tests include:
1. Building the reading assessment within a framework that views reading as a dynamic, interactive, constructive process; therefore, isolated skills are not measured.
2. Using longer passages that were not written for the test but that were originally written for students to read for information and enjoyment.
3. Assessing students' ability to read a variety of text types for a variety of purposes, such as reading expository, narrative, and procedural texts for enjoyment, for literary appreciation, for information, and so forth.
4. Asking students to respond to open-ended questions that allow for a variety of interpretations and a range of acceptable responses rather than asking students to choose the correct answer from four choices as in the standardized test.
Students who are engaged in programs of instruction that use quality literature as a basis for reading, comparing, reflecting, and writing will clearly have an advantage on new forms of reading assessment. The emphasis is no longer on choosing a single answer from a multiple-choice format; the emphasis is on reading. There is good evidence that students who engage in extensive reading and writing achieve higher levels of literacy (Anderson, Wilson, & Fielding, 1988).
The primary effect that new ideas in reading assessment are having is that classroom teachers rather than tests are being viewed as the most important instruments in assessment. The assessment information that teachers gather is seen as having the potential for being by far the most valuable and valued form of assessment (Lamme & Hysmith, 1991).
The concepts of performance-based and authentic assessment clearly imply that the observations that teachers make and the products that result from classroom instructional events are the most valuable and valid measures of reading (Hansen, 1992; Shavelson, 1992; Wiggins, 1992). As authentic approaches to assessment are increasingly implemented, the distinction between instruction and assessment should diminish.
By thinking of assessment as part of instruction, teachers obtain immediate instructional suggestions and make any adjustments that are necessary. Teacher observation is a legitimate, necessary, valuable source of assessment information. By asking students to read aloud or to retell a portion of a selection they are reading, the teacher receives immediate information about the level of challenge that the selection presents to various students (Bembridge, 1992; Morrow, 1985). Classroom organization and management suggestions flow from ongoing assessment data. Students who need added support, for example, may be encouraged to work in cooperative groups. Students who are having difficulty gain the support they need, and very able students gain deeper understanding of the materials they are reading as they explain the materials to others (Johnson & Johnson, 1992).
Portfolio Assessment
Portfolio approaches to assessing literacy have been described in a wide variety of publications (Flood & Lapp, 1989; Lamme & Hysmith, 1991; Matthews, 1990; Tierney, Carter, & Desai, 1991; Valencia, 1990; Wolf, 1989) so that many descriptions of portfolios exist. Generally speaking, a literacy portfolio is a systematic collection of a variety of teacher observations and student products, collected over time, that reflect a student's developmental status and progress made in literacy.
A portfolio is not a random collection of observations or student products; it is systematic in that the observations that are noted and the student products that are included relate to major instructional goals. For example, book logs that are kept by students over the year can serve as a reflection of the degree to which students are building positive attitudes and habits with respect to reading. A series of comprehension measures will reflect the extent to which a student can construct meaning from text. Developing positive attitudes and habits and increasing the ability to construct meaning are often seen as major goals for a reading program.
Portfolios are multifaceted and begin to reflect the complex nature of reading and writing. Because they are collected over time, they can serve as a record of growth and progress. By asking students to construct meaning from books and other selections that are designed for use at various grade levels, a student's level of development can be assessed. Teachers are encouraged to set standards or expectations in order to then determine a student's developmental level in relation to those standards (Lamme & Hysmith, 1991).
Portfolios can consist of a wide variety of materials: teacher notes, teacher-completed checklists, student self- reflections, reading logs, sample journal pages, written summaries, audiotapes of retellings or oral readings, videotapes of group projects, and so forth (Valencia, 1990). All of these items are not used all of the time.
An important dimension of portfolio assessment is that it should actively involve the students in the process of assessment (Tierney, Carter, & Desai, 1991).
There are many ways in which portfolios have proven effective. They provide teachers with a wealth of information upon which to base instructional decisions and from which to evaluate student progress (Gomez, Grau, & Block, 1991). They are also an effective means of communicating students' developmental status and progress in reading and writing to parents (Flood & Lapp, 1989). Teachers can use their record of observations and the collection of student work to support the conclusions they draw when reporting to parents. Portfolios can also serve to motivate students and promote student self-assessment and self-understanding (Frazier & Paulson, 1992).
Linn, Baker, and Dunbar (1991) indicate that major dimensions of an expanded concept of validity are consequences, fairness, transfer and generalizability, cognitive complexity, content quality, content coverage, meaningfulness, and cost efficiency. Portfolios are an especially promising approach to addressing all of these criteria.
Bringing Assessment in Line with Instruction
Portfolios are an effective way to bring assessment into harmony with instructional goals. Portfolios can be thought of as a form of "embedded assessment"; that is, the assessment tasks are a part of instruction. Teachers determine important instructional goals and how they might be achieved. Through observation during instruction and collecting some of the artifacts of instruction, assessment flows directly from the instruction (Shavelson, 1992).
Portfolios can contextualize and provide a basis for challenging formal test results based on testing that is not authentic or reliable. All too often students are judged on the basis of a single score from a test of questionable worth (Darling-Hammond & Wise, 1985; Haney & Madaus, 1989). Student performance on such tests can show day-to-day variation. Such scores diminish in importance, however, when contrasted with the multiple measures of reading and writing that are part of a literacy portfolio.
Valid Measures of Literacy
Portfolios are extremely valid measures of literacy. A new and exciting approach to validity, known as consequential validity, maintains that a major determinant of the validity of an assessment measure is the consequence that the measure has upon the student, the instruction, and the curriculum (Linn, Baker, & Dunbar, 1991). There is evidence that portfolios inform students, as well as teachers and parents, and that the results can be used to improve instruction, another major dimension of good assessment (Gomez, Grau, & Block, 1991).
Portfolios and Self-Assessment
A sizable number of authors and researchers indicate that students can and do improve in their ability to assess their strengths and weaknesses in reading and writing and their progress in these areas (Frazier & Paulson, 1992; Lamme & Hysmith, 1991; Tierney, Carter, & Desai, 1991). These sources describe how students improve in their awareness of what they know, what they are learning, areas that need improvement, and so forth. Students learn how to interact effectively with their teachers and parents to gain an even fuller picture of their own achievements and progress. The work of Gomez, Grau, and Block (1991) suggests that in order for students to use portfolio assessment to grow in their understanding of themselves as learners, they need guidance and support from their teacher.
Below are excerpts from the booklet prepared by the International Reading Association and National Council of Teachers of English Joint Task Force on Assessment in 1994.
The Standards for the Assessment of Reading
1. The interests of the student are paramount in assessment.
2. The primary purpose of assessment is to improve teaching and learning.
3. Assessment must reflect and allow for critical inquiry into curriculum and instruction.
4. Assessments must recognize and reflect the intellectually and socially complex nature of reading and writing and the important roles of school, home, and society in literacy development.
5. Assessment must be fair and equitable.
6. The consequences of an assessment procedure are the first, and most important, consideration in establishing the validity of the assessment.
7. The teacher is the most important agent of assessment.
8. The assessment process should involve multiple perspectives and sources of data.
9. Assessment must be based in the community.
10. All members of the educational community -- students, parents, teachers, administrators, policymakers, and the public -- must have a voice in the development, interpretation, and reporting of assessment.
11. Parents must be involved as active, essential participants in the assessment process.
Standard 1: The interests of the student are paramount in assessment.
This standard refers to individual students, not students on average nor students collectively. Assessment must serve, not harm, each and every student. This means that each individual's intellectual, social, and emotional well-being must be considered, even when the decision to be made from the assessment will affect other individual students or even an entire class or school.
We must recognize that assessment experiences, formal or informal, have consequences for students (see standard 6 -- consequential validity). Assessment procedures have profound effects on students' lives. Assessments may alter their educational opportunities, increase or decrease their motivation to learn, elicit positive or negative feelings about themselves and others, and influence their understanding of what it means to be literate, educated, or successful.
What features of assessment are likely to serve students' interests? First and foremost, assessment must encourage students to reflect on their own reading and writing in productive ways, to evaluate their own intellectual growth, and to set goals. In this way, students become involved in and responsible for their own learning and better able to assist the teacher in focusing instruction. Past assessment practices, particularly normative practices, have often produced conditions of threat and defensiveness for students, and constructive reflection is particularly difficult under such conditions. Thus, assessment should emphasize what students can do rather than what they cannot do. Portfolio assessment, for example, if managed properly, can be reflective, involving students in their own learning and assisting teachers in refocusing their instruction.
Standard 2: Assessment must provide useful information to inform and enable reflection.
The information must be both specific and timely. Specific information on students' knowledge, skills, strategies, and attitudes helps teachers, parents, and students set goals and plan instruction more thoughtfully. Information about students' confusions, counterproductive strategies, and limitations, too, can help students and teachers reflect on and learn about students' reading and writing as long as it is provided in the context of clear descriptions of what they can do. The timeliness of the information is equally important. If information from assessment is not provided immediately, it is not likely to be used. Nor is it likely to be useful, because needs, interests, and aspirations are likely to change with the passage of time. In either case the opportunity to influence and promote learning may be missed.
Standard 3: The assessment must yield high-quality information.
The quality of information is suspect when tasks are too difficult or too easy, when students do not understand the tasks or cannot follow the directions, or when they are too anxious to be able to do their best or even their typical work. In these situations students cannot produce their best efforts or demonstrate what they know. Requiring students to spend their time and efforts on assessment tasks that do not yield high-quality, useful information results in students losing valuable learning time. Such a loss does not serve their interests and is thus an invalid practice (see standard 6).
Implications
This standard implies that if any individual student's interests are not served by an assessment practice, regardless of whether it is intended for administration or decision making by an individual or by a group, then that practice is not valid for that student. Since group-administered, machine-scorable tests do not normally encourage students to reflect constructively on their reading and writing, do not provide specific and timely feedback, and generally do not provide high-quality information about students, they seem unlikely to serve the best interests of students. Similarly, many less formal classroom assessments fail to meet these criteria. Regardless of the source or motivation for any particular assessment, states, school districts, schools, and teachers must demonstrate how these assessment practices benefit and do not harm individual students.
Assessment instruments or procedures themselves are not the only consideration in this standard; the context in which they are used can be equally important. For example, a portfolio assessment that satisfies this standard when used in one class may also satisfy it in the context of a high-stakes assessment, such as an accountability assessment in which comparative scores are published in the newspaper. Students will perform "authentic" or "real-life" tasks over time, and these tasks can be evaluated at the district, state, and national levels and provide much more meaningful information about what a student knows and is able to do. Rather than a simple comparative reporting of aggregate test scores by school or district, which provides numbers only and is more likely to produce defensiveness and anxiety than insight, such task-oriented assessments can produce meaningful information that shows the level of teaching and learning actually taking place in a learning community.
Indeed, the most powerful assessments for students are likely to be those that occur in the daily activity of the classroom. Maximizing the value of these for students and minimizing the likelihood that they are damaging will involve an investment in staff development and the creation of conditions that enable teachers to reflect on their own practice.
Glossary of Assessment Terminology
Rapid changes in the field of reading and writing assessment have generated a variety of new terms as well as new uses for many established terms. The purpose of this glossary is to specify how assessment terms are generally used in discussions of literacy assessment and to point out alternative meanings of terms where they are common. We begin with curriculum since it is the foundation for our understanding of assessment as curriculum inquiry.
Curriculum
We can think of curriculum as having three components: the envisioned curriculum, the enacted curriculum, and the experienced curriculum. The envisioned curriculum is what we plan and intend to happen; the enacted curriculum is our daily attempt in classrooms to put that vision into practice. The experienced curriculum is the sense the language learner makes of what goes on in the classroom, and it is thus constructed within the language of that classroom. For example, if most of the reading material in one class involves racial or gender stereotypes, then that is likely to be reflected in students' learning; by contrast, students are likely to construct different knowledge about human relationships from a more balanced selection of reading material. However, the knowledge and attitudes students construct from those works are strongly influenced by the ways the teacher talks about them, the nature of group discussions, and the ways teachers and other students respond to each other. Ultimately, it is the experienced curriculum that is our concern, and that is why students must be our primary curricular informants. At the same time, it is the discrepancies among the envisioned, enacted, and experienced curricula that drive curriculum inquiry, the process of assessment. Standards 1, 3, and 4 are particularly closely related to issues of curriculum.
Aggregation
In assessment, aggregation is the process of collecting data together for the purpose of making a more general statement. For example, it is common practice for school districts to add together all of the test scores for their students in order to find the average performance of students in the district. This process strips away all of the differences among the various cultural groups, schools, and students within the district in order to make the larger statement. Even an individual student's test score is a result of aggregating all of the individual items to which the student responded in order to make a general statement about a student's "reading ability." It is also common then to "disaggregate" the scores to see how subgroups performed within the larger group. Aggregation and disaggregation are in some ways a matter of deciding what are relevant and what are irrelevant data.
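As a toy illustration of aggregation and disaggregation, here is a short Python sketch; the records and school names are invented for the example.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical (student, school, score) records for one district.
records = [
    ("A", "Oak School", 78), ("B", "Oak School", 85),
    ("C", "Elm School", 62), ("D", "Elm School", 71),
]

# Aggregation: one number stands in for the whole district.
district_mean = mean(score for _, _, score in records)   # 74.0

# Disaggregation: recover the per-school means the average hides.
by_school = defaultdict(list)
for _, school, score in records:
    by_school[school].append(score)
school_means = {s: mean(v) for s, v in by_school.items()}  # Oak 81.5, Elm 66.5
```

The district mean of 74.0 conceals a 15-point gap between the two schools, which is exactly the kind of particular that aggregation strips away.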
There are powerful tensions in this society around the issue of aggregation -- reflecting, on the one hand, the need to make general statements about students, teachers, and schools, and, on the other, the problem of stripping away the particulars of individual performances and situations in the process. It is not universally agreed that it is valuable to reduce students or schools to numbers, let alone for which purposes or on what grounds that might be reasonable. It is often argued that administrators need highly aggregated data to make programmatic and budgetary decisions. However, both in education and in industry administrators make different decisions when facing aggregated data than they do when facing real situations with real persons.
Authentic, Performance-based Assessment
These terms and the kinds of assessment to which they refer arise from the realization that widely employed assessment tools generally have been poor reflections of what literate people actually do when they read, write, and speak. The logic of authentic assessment suggests, for example, that merely identifying grammatical elements or proofreading for potential flaws is not an acceptable measure of writing ability. For their writing to be assessed, students must write, facing the real challenges faced by literate people.
The general issue of the "realness" of what is being measured (its construct-validity) is alluded to by the terms: authentic assessment, performance-based assessment, performance assessment, and demonstrations. Regardless of what the assessments are called, the issue is that tests must measure what they purport to measure: a reading test requires a demonstration of, among other things, constructing meaning from written text; a writing assessment requires a demonstration of producing written text.
Controversy continues to exist about whether machine-scorable, multiple-choice tests have a place in a world in which the criterion of authenticity is applied systematically and rigorously to the evaluation of assessments. The issues of authentic, performance-based assessment are particularly relevant to standards 4 and 6.
Equity
Issues of fairness surround literacy assessment. Testing originated as a means to control nepotism in job selection -- providing an independent perspective on selection to uphold fairness. But equity cannot be assured through testing alone. Those who control the assessment process control what counts, or what is valued. As we pointed out in the introduction, language assessment is laden with cultural issues and biases. Although equity cannot be assured through assessment, it must be pursued relentlessly in assessment and in schooling. It is more likely to be achieved through the involvement of multiple, independent perspectives than through the use of a single perspective.
Tests have traditionally been administered, their results published, and their impact on instruction instigated with little regard to issues such as cultural, economic, and gender equity. But many equity issues affect assessment, rendering comparisons difficult and often meaningless. Because traditional test makers have all too frequently designed assessment tools reflecting narrow cultural values, students and schools with different backgrounds and concerns often have not been fairly assessed.
Equity issues also include the kinds of educational experiences available to students who will face similar assessments, particularly in certification or gatekeeping situations. Questions of access to sound instruction, appropriate materials, and enriching learning opportunities are critical. Educators have become increasingly aware of the connections between assessment results and levels of safety, health and welfare support, and physical accessibility. Any responsible assessment must engage the full complexity of situations faced by educational communities. These issues related to equity are most closely tied to standard 5, but touch all the standards here.
Norm-referenced or Criterion-referenced Assessment
"Referencing" is choosing a framework for interpreting something, in this case assessment data. Norm-referenced interpretations are based on comparisons with others, usually resulting in a ranking. A norm-referenced interpretation of a student's writing will assert, for example, that the sample of writing is "as good as that of 20 percent of the students in that grade nationally." Criterion-referenced assessment is based on predetermined criteria that serve as "yardsticks" or "benchmarks" of performance. Neither frame of reference is particularly illuminating instructionally.
Other, less common frames of reference are more productive in that regard. For example, performance can be interpreted in the context of previous performance (self-referenced). Performance can also be interpreted in the context of a particular theory of literate learning (theory-referenced). But these frames of reference have consequences for the whole process of assessment. They bring with them consequential changes in assessment procedures. In order to make self-referenced assessments one needs to arrange for the collection of historical examples. In order to make theory-referenced interpretations one has to have a coherent theory. To make norm-referenced assessments, assessment practices need to be standardized and focus on maximizing the differences among individuals on a single scale.
Norm-referenced testing is the most prevalent form of large-scale testing, in which large groups of students take a test and the scores are grouped and interpreted in relation to other scores. In other words, the score of any student or group (school, district, state, or nation) has meaning only in relationship to all the other scores of like entities, e.g., school to school, district to district, state to state. In order to make such comparisons, we have to make the assumption of "all else being equal." In other words, we try to make everything the same so that differences in performance can be attributed to one source: the student, or school, or district -- whichever is the level of aggregation. This assumption, as we pointed out earlier in our discussion of language, is extremely dubious. It does not usually take into account the differences that abound throughout the thousands of schools and districts relating to curriculum, culture, gender, ethnicity, economic circumstance, per-pupil funding, and so forth. National norm-referenced tests assume that all students in our society have had similar cultural and curricular experiences.
Norm-referenced interpretations often occur in classrooms, too. A teacher who has little knowledge of the complexity of literacy learning will often have to resort to comparisons and rankings in order to interpret students' reading and writing. Such normative assessments often turn up as grades on report cards. Teachers with a reasonably detailed knowledge of their students' reading and writing, on the other hand, will have difficulty reducing their knowledge to simple rankings for such purposes. Indeed, the process poses highly stressful ethical dilemmas for them. Although grades and rankings are a common part of the educational history of most individuals in this culture, this committee believes the practice to be unnecessary and generally counterproductive.
Some of the stakeholders in assessment -- parents, teachers, students, administrators, policymakers -- have been seduced into believing that norm-referenced test scores are readily interpretable and productive. However, when it comes to assessing reading and writing, norm-referenced test scores have little utility because they oversimplify highly complex processes. These processes cannot be evaluated by a machine-scored, multiple-choice test -- the most common form of norm-referenced assessment. Assessments based on norm-referenced tests give at best inadequate and often actually misleading information about many students. Most unfortunately, norm-referenced test scores have too often become the single most important criterion for decisions about placement and promotion that have a powerful impact on students' lives.
Criterion-referenced testing involves tests that compare students' performance against established benchmarks. These benchmarks or criteria are usually expressed as numerical ranges that define levels of achievement. For example, an 80-85 score may mean high performance among levels of achievement ranging from unsatisfactory to outstanding. Criterion-based testing can also involve holistic scoring of writing, for example, where a score is based on a set of pre-established consensual criteria.
Standards 1, 2, 3, 4, and 6 raise issues related to norm-referenced and criterion-referenced assessments.
Reliability
Broadly speaking, reliability is an index of the extent to which a set of results or interpretations can be generalized across tasks, over time, and among interpreters. In other words, it is a particular kind of generalizability. For example, a common concern raised by newer forms of literacy assessment is whether different examiners, evaluating a complex response and using complex scoring criteria, will draw similar conclusions about a student's performance (whether an assessment will generalize across different examiners). Experience from scoring complex student writing samples does suggest that when people are well trained in the application of specific criteria, high rates of agreement can be achieved; however, this agreement does not guarantee a high-quality assessment. Indeed, current assessment practices stressing reliability as the central quality of assessments generally focus on trivial matters, on which it is easiest to gain agreement. Reliability is only important within the context of validity -- the extent to which the assessment leads to useful, meaningful conclusions and consequences.
In order to provide more "authentic" tasks, newer approaches to testing reading use more substantial bodies of text than the brief excerpts typical of older tests. Because these require more reading and response time, fewer assessment tasks or "items" are typical. For example, rather than having students read and answer multiple-choice questions about a dozen or more short passages, students may be asked to read one or two long pieces. The specific content of those passages may seriously influence that student's performance. This would limit the generalizability of any statements made about the student's reading of expository materials. In "one-shot" tests, there is thus a trade-off between the extent to which one can generalize performance in reading and writing to real ("authentic") situations, and whether one can generalize across examiners or tests.
One way to increase the reliability of statements about students' reading and writing performance while maintaining authenticity is to avoid dependence on one-shot tests, taking more advantage of continuous classroom assessment, at least where classroom practices reflect the literate activities of the real world. Standards 4 and 5 raise issues related to the reliability of assessment.
Validity
Historically, a common definition of a valid measure is that it measures what it purports to measure. The evidence for the validity of most reading and writing assessment tasks in the past was very thin, or nonexistent, often consisting only of how well a new test of reading, for example, correlated with some other measure of reading. If assessments of literate learning are to measure what they purport to measure, they will need to concern themselves with the nature of language. Valid assessments must then respect and value student diversity and acknowledge that there is generally no single "correct" response. Such assessments would allow for and encourage multiple interpretations of a reading selection and make provisions for allowing students to demonstrate their ability to construct meaning through multiple response modes such as writing, drawing, speaking, or performing.
To a very great extent, a valid assessment is one that reflects a valid curriculum. But more recent conceptions of validity include an examination of the consequences of assessment practices. In other words, one cannot have a valid assessment procedure that destroys curriculum in the process. Consequently, a more productive definition of a valid assessment practice would be one that reflects and supports a valid curriculum. As standard 6 asserts, assessment must have consequential validity. Validity issues are addressed particularly in standards 1, 2, 3, 4, 5, and 6.
The validity of any assessment should be judged in terms of the purpose of the assessment. Validity is much easier to achieve when assessment is closely aligned with instructional goals and integrated into instructional activities. This is why many assessment specialists see performance-based assessment as a potentially more valid alternative to traditional testing. To understand why alternative assessment systems (performance tasks, portfolio assessments, and other integrated measures of knowledge and skills) may be more valid than traditional forms of testing (multiple-choice, fill-in-the-blank, and other discrete measures of knowledge and skills) for various purposes, look back at the three general areas of validity: construct validity, consequential validity, and face validity.
In terms of construct validity, the advantage of alternative assessment is the opportunity to create more direct and more authentic measures of desired knowledge, skills, and abilities than is typically possible with traditional testing. On the other hand, construct validity also includes concerns about reliability (consistency of scores/ratings over time and among raters). Standardized tests are strong on reliability. Performance-based assessments are scored more subjectively, so reliability must be strengthened through well-structured scoring guidelines and through training teachers to apply those guidelines consistently.
In terms of consequential validity, alternative assessment systems again have the advantage over traditional testing in many situations, because performance-based assessments, and especially portfolio assessment systems, give learners more opportunities (in more "real-world" contexts) to demonstrate desirable knowledge, skills, and abilities.
On the issue of face validity, performance assessment is again a clear winner. Scoring criteria used in performance-based assessment are more easily communicated (and often more meaningful) to learners and teachers than is the case in traditional forms of assessment. Within an alternative assessment system, learning and assessment activities are combined. A well-structured performance task should also be a learning activity. Good use of a portfolio is one way to capitalize on the potential for strong face validity in performance assessment. The primary purpose of the portfolio should be to aid communication between the learner and the instructor so that learning goals and progress can be reviewed and evaluated in an ongoing dialogue. In the end, the portfolio can become a richly textured and substantial piece of evidence of learning achievement. The challenge is to convince policy makers and funding agencies that such evidence is as valid (and reliable) as standardized test results. Basically, this means changing the ways that policy makers think about validity. The face validity of a standardized test rests mostly on the authority of the experts who design the test and analyze its results.
Action Research: An Educational Leader's Guide to School Improvement
Applying Educational Research: How to Read, Do, and Use Research to Solve Problems of Practice (6th Edition)
Handbook of Research on Educational Communications and Technology, Third Edition
(Part 2) THE ROLE OF MULTIPLE INTELLIGENCES AND LEARNING STYLES IN CONSTRUCTING READING ASSESSMENT FOR TEENAGE ENGLISH LEARNERS
CHAPTER THREE: RESEARCH METHODOLOGY
3.1 Research Design
This study will be a case study because I will analyze only the reading assessment problem that appears in classrooms of teenage English learners at LIA. First, I will give the students a standardized reading test and examine the difficulties and demotivating factors they face in doing the assessment. Then I will give them a multiple intelligences test and a VAK learning styles test to identify each student's dominant intelligences and preferred learning style. After that, I will analyze the difficulties found in the standardized reading test in light of the students' dominant multiple intelligences and preferred learning styles. The results will inform how to accommodate all types of students' multiple intelligences and learning styles in constructing an alternate reading assessment in the most suitable form, whether a portfolio assessment, a performance-based test, or another integrated measure of knowledge and reading skills. Finally, whenever possible, the test's reliability will be calculated using one of the standard test reliability measurements to find out to what extent the test can measure the students' true ability objectively. If the result shows that the test is reliable, it will serve as an authentic sample of an alternate reading assessment that accommodates students' multiple intelligences and learning styles while measuring their true reading ability. To sum up, I will discuss the role of multiple intelligences and learning styles in constructing reading assessment based on the results of the whole research.
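As one example of such a test reliability measurement, the Kuder-Richardson formula 20 (KR-20) is a standard internal-consistency index for tests scored right/wrong, such as multiple-choice reading items. Below is a minimal Python sketch; the response matrix is invented purely for illustration.

def kr20(matrix):
    # matrix: one row per student, one column per item; 1 = correct, 0 = wrong.
    n = len(matrix)     # number of students
    k = len(matrix[0])  # number of items
    totals = [sum(row) for row in matrix]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n  # proportion correct on item j
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Invented responses of five students to a four-item test:
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
]
print(kr20(responses))  # about 0.31 here; a real test needs far more items and students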
3.2 Subject
The subjects of this study are LIA teenage students, usually gathered in the English for Teens and General English course programs. Their ages range from 11 to 20 years old. I chose to do my research at LIA Mercu Buana because it is the school where I teach and apply my knowledge of teaching English as a second language, and it is also where I face the problem of giving reading assessments described earlier.
3.3 Instrument
I will use several instruments to help me determine the students' dominant multiple intelligences and preferred learning styles. I will try to find ready-to-use and scientifically validated multiple intelligences and VAK learning styles tests. Where required and valid, I will also use test reliability measurements.
3.4 Collecting Data
a) I will gather the results of the standardized test and examine its weak points, that is, the test items on which most students make the most mistakes.
b) I will interview the students to learn the difficulties and demotivating factors they face in doing the test.
c) I will collect the results of the multiple intelligences test to identify each student's dominant intelligences according to Howard Gardner's definition.
d) I will collect the results of the VAK learning styles test to obtain information on each student's preferred learning style.
e) Based on the results of these observations, I will determine the most suitable sample of alternate reading assessment for these students, one that adequately accommodates their preferred learning styles and measures their true ability objectively.
3.5 Data Analysis
First, I will analyze how the students cope with the standardized test and examine their reading progress based on their scores on that type of test. Then I will investigate each student's dominant multiple intelligences and preferred learning style from the test results. Based on these findings, I will modify the assessment into another type, an alternate reading assessment. Finally, I will analyze the reliability and validity of that test in order to draw conclusions about the role of multiple intelligences and learning styles in constructing reading assessment.
BIBLIOGRAPHY
Gardner, H. (1993). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Smith, M. K. (2002). Howard Gardner and multiple intelligences. The Encyclopedia of Informal Education. http://www.infed.org/thinkers/gardner.htm. Last updated: November 27, 2007.
Armstrong, T. (1994). Multiple intelligences in the classroom. Alexandria, VA: Association for Supervision and Curriculum Development.
Vincent, A., & Ross, D. (2001). Personalize training: Determine learning styles, personality types and multiple intelligences online. The Internet Journal, 8(1), 36-43.
Kelly, D., & Tangney, B. (2003). A framework for using multiple intelligences in an ITS. Retrieved July 15, 2008, from https://www.cs.tcd.ie/crite/publications/sources/EDMEDIA03Paper4.pdf
Mind Tools. (2004). Learning styles: Learn effectively by understanding your learning preferences. Retrieved July 17, 2008, from http://www.mindtools.com/mnemlsty.html
Currie, K. L. (2003). Multiple intelligence theory and the ESL classroom. The Internet TESL Journal, 9(4). http://iteslj.org/Articles/Currie-MITheory.html
Smagorinsky, P. (1995). Multiple intelligences in the English class: An overview. The English Journal, 84(8), 19-26.
New Horizons for Learning and America Tomorrow. (2000). Applying MI in schools. Retrieved July 17, 2008, from http://www.newhorizons.org/strategies/mi/hoerr2.htm
New Horizons for Learning and America Tomorrow. (2000). Five phases to PBL: MITA (Multiple Intelligence Teaching Approach). Retrieved July 18, 2008, from http://www.newhorizons.org/strategies/mi/weber3.htm
The s-files. (2006). Implementing Howard Gardner's theory of multiple intelligences. Retrieved July 18, 2008, from http://www.studentretentioncenter.ucla.edu.sfiles/multipleintelligences.htm
Reid, J. M. (1987). The learning style preferences of ESL students. TESOL Quarterly, 21(1), 87-111.
Anderson, R.C., Wilson, P.T., & Fielding, L.G. (1988). Growth in reading and how children spend their time outside of school. Reading Research Quarterly, 23, 285-303.
Bembridge, T. (1992). A MAP for reading assessment. Educational Leadership, 49, 46-48.
Cambourne, B., & Turbill, J. (1990). Assessment in whole language classrooms: Theory into practice. Elementary School Journal, 90, 337-349.
Darling-Hammond, L., & Wise, A. (1985, January). Beyond standardization: State standards and school improvement. Elementary School Journal, 315-336.
Flood, J., & Lapp, D. (1989). Reporting reading progress: A comparison portfolio for parents. The Reading Teacher, 42, 508-514.
Frazier, D.M., & Paulson, F.L. (1992, May). How portfolios motivate reluctant writers. Educational Leadership, 49(8), 62-65.
Gomez, M.L., Grau, M.E., & Block, M.N. (1991). Reassessing portfolio assessment: Rhetoric and reality. Language Arts, 68, 620-628.
Haney, W., & Madaus, G. (1989). Searching for alternatives to standardized tests: Whys, whats, and whatever. Phi Delta Kappan, 70, 683-687.
Hansen, J. (1992). Literacy portfolios: Helping students know themselves. Educational Leadership, 49, 66-68.
Herman, J., & Golan, S. (1991). Effects of standardized tests on teachers and learning - another look. CSE Technical Report #334. Los Angeles: Center for the Study of Evaluation.
Hiebert, E.A. (1992). Portfolios invite reflection - from students and staff. Educational Leadership, 49, 58-61.
Johnson, D.W., & Johnson, R.T. (1992). What to say to advocates of the gifted. Educational Leadership, 50, 44-47.
Johnston, P. (1984). Assessment in reading. In P.D. Pearson (Ed.), Handbook of reading research (147-182). New York: Longman.
Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English. (in press). Standards for the assessment of reading and writing.
Lamme, L.L., & Hysmith, C. (1991). One school's adventure into portfolio assessment. Language Arts, 68, 629-640.
Linn, R., Baker, E., & Dunbar, S. (1991). Complex performance-based assessment: Expectations and validation criteria. Educational Researcher, 20, 15-21.
Matthews, J.K. (1990, February). From computer management to portfolio assessment. The Reading Teacher, 43, 420-421.
Meyer, C.A. (1992). What's the difference between authentic and performance assessment? Educational Leadership, 49, 39-41.
Mitchell, R. (1992). Testing for learning: How new approaches to evaluation can improve American schools. New York: The Free Press.
Morrow, L.M. (1985). Retelling stories as a diagnostic tool. In S. Glazer, L. Searfoss, & L. Gentile (Eds.), Reexamining reading diagnosis (128-149). Newark, DE: International Reading Association.
NAEP Reading Consensus Project (1992). Reading Framework for the 1992 National Assessment of Educational Progress. Washington, DC: US Government Printing Office.
O'Neil, J. (1992). Putting performance assessment to the test. Educational Leadership, 49, 14-19.
Shavelson, R.J. (1992). What we've learned about assessing hands-on science. Educational Leadership, 49, 20-25.
Shepard, L. (1991). Will national tests improve student learning? Phi Delta Kappan, 73, 232-238.
Smith, M.L., & Rottenberg, C. (1991). Unintended consequences of external testing in elementary schools. Educational Measurement: Issues and Practices, 10, 7-11.
Tierney, R.J., Carter, M.A., & Desai, L.E. (1991). Portfolio assessment in the reading-writing classroom. Norwood, MA: Christopher-Gordon Publishers.
Valencia, S.W., (1990, January). A portfolio approach to classroom reading assessment: The whys, whats, and hows. The Reading Teacher, 43, 338-340.
Valencia, S.W. & Pearson, P.D. (1987, April). Reading assessment: Time for a change. The Reading Teacher, 43, 726-732.
Wiggins, G. (1992). Creating tests worth taking. Educational Leadership, 49, 26-33.
Winograd, P., Paris, S., & Bridge, C. (1991). Improving the assessment of literacy. The Reading Teacher, 45, 108-116.
Wolf, D.P. (1989). Portfolio assessment: Sample student work. Educational Leadership, 46, 35-39.
Wolf, D., Bixby, J., Glenn, J., & Gardner, H. (1991). To use their minds well: Investigating new forms of student assessment. In G. Grant (Ed.), Review of research in education (Vol. 17, 31-74). Washington, DC: AERA.
APPENDIXES
1. Gardner’s Multiple Intelligences Test
2. VAK Learning Styles Test
3. Assessment Reliability
4. Reading Test Scoring System
VAK Learning Styles Self-Assessment Questionnaire
Circle or tick the answer that most represents how you generally behave.
(It’s best to complete the questionnaire before reading the accompanying explanation.)
1. When I operate new equipment I generally:
a) read the instructions first
b) listen to an explanation from someone who has used it before
c) go ahead and have a go, I can figure it out as I use it
2. When I need directions for travelling I usually:
a) look at a map
b) ask for spoken directions
c) follow my nose and maybe use a compass
3. When I cook a new dish, I like to:
a) follow a written recipe
b) call a friend for an explanation
c) follow my instincts, testing as I cook
4. If I am teaching someone something new, I tend to:
a) write instructions down for them
b) give them a verbal explanation
c) demonstrate first and then let them have a go
5. I tend to say:
a) watch how I do it
b) listen to me explain
c) you have a go
6. During my free time I most enjoy:
a) going to museums and galleries
b) listening to music and talking to my friends
c) playing sport or doing DIY
7. When I go shopping for clothes, I tend to:
a) imagine what they would look like on
b) discuss them with the shop staff
c) try them on and test them out
8. When I am choosing a holiday I usually:
a) read lots of brochures
b) listen to recommendations from friends
c) imagine what it would be like to be there
9. If I was buying a new car, I would:
a) read reviews in newspapers and magazines
b) discuss what I need with my friends
c) test-drive lots of different types
10. When I am learning a new skill, I am most comfortable:
a) watching what the teacher is doing
b) talking through with the teacher exactly what I’m supposed to do
c) giving it a try myself and working it out as I go
11. If I am choosing food off a menu, I tend to:
a) imagine what the food will look like
b) talk through the options in my head or with my partner
c) imagine what the food will taste like
12. When I listen to a band, I can’t help:
a) watching the band members and other people in the audience
b) listening to the lyrics and the beats
c) moving in time with the music
13. When I concentrate, I most often:
a) focus on the words or the pictures in front of me
b) discuss the problem and the possible solutions in my head
c) move around a lot, fiddle with pens and pencils and touch things
14. I choose household furnishings because I like:
a) their colours and how they look
b) the descriptions the sales-people give me
c) their textures and what it feels like to touch them
15. My first memory is of:
a) looking at something
b) being spoken to
c) doing something
16. When I am anxious, I:
a) visualise the worst-case scenarios
b) talk over in my head what worries me most
c) can’t sit still, fiddle and move around constantly
17. I feel especially connected to other people because of:
a) how they look
b) what they say to me
c) how they make me feel
18. When I have to revise for an exam, I generally:
a) write lots of revision notes and diagrams
b) talk over my notes, alone or with other people
c) imagine making the movement or creating the formula
19. If I am explaining to someone I tend to:
a) show them what I mean
b) explain to them in different ways until they understand
c) encourage them to try and talk them through my idea as they do it
20. I really love:
a) watching films, photography, looking at art or people watching
b) listening to music, the radio or talking to friends
c) taking part in sporting activities, eating fine foods and wines or dancing
21. Most of my free time is spent:
a) watching television
b) talking to friends
c) doing physical activity or making things
22. When I first contact a new person, I usually:
a) arrange a face to face meeting
b) talk to them on the telephone
c) try to get together whilst doing something else, such as an activity or a meal
23. I first notice how people:
a) look and dress
b) sound and speak
c) stand and move
24. If I am angry, I tend to:
a) keep replaying in my mind what it is that has upset me
b) raise my voice and tell people how I feel
c) stamp about, slam doors and physically demonstrate my anger
25. I find it easiest to remember:
a) faces
b) names
c) things I have done
26. I think that you can tell if someone is lying if:
a) they avoid looking at you
b) their voice changes
c) they give me funny vibes
27. When I meet an old friend:
a) I say “it’s great to see you!”
b) I say “it’s great to hear from you!”
c) I give them a hug or a handshake
28. I remember things best by:
a) writing notes or keeping printed details
b) saying them aloud or repeating words and key points in my head
c) doing and practising the activity or imagining it being done
29. If I have to complain about faulty goods, I am most comfortable:
a) writing a letter
b) complaining over the phone
c) taking the item back to the store or posting it to head office
30. I tend to say:
a) I see what you mean
b) I hear what you are saying
c) I know how you feel
Now add up how many A’s, B’s and C’s you selected.
A’s = B’s = C’s =
If you chose mostly A’s you have a VISUAL learning style.
If you chose mostly B’s you have an AUDITORY learning style.
If you chose mostly C’s you have a KINAESTHETIC learning style.
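For anyone scoring the questionnaire electronically, here is a minimal tallying sketch in Python; the answer sheet is invented purely for illustration, and the function name is hypothetical.

from collections import Counter

VAK = {"a": "visual", "b": "auditory", "c": "kinaesthetic"}

def vak_profile(answers):
    # answers: one letter ('a', 'b' or 'c') per questionnaire item.
    tally = Counter(answers)
    counts = {style: tally[letter] for letter, style in VAK.items()}
    dominant = max(counts, key=counts.get)
    # Near-equal counts suggest a blend of two or three styles.
    return counts, dominant

# An invented answer sheet for the 30 items above:
answers = list("aacbabcaacbbacaabcabacbaacabca")
print(vak_profile(answers))  # ({'visual': 14, 'auditory': 8, 'kinaesthetic': 8}, 'visual')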
Some people find that their learning style is a blend of two or three styles; in that case, read about each of the styles that apply to you in the explanation below.
When you have identified your learning style(s), read the learning styles explanations and consider how this might help you to identify learning and development that best meets your preference(s).
Now see the VAK Learning Styles Explanation.
The VAK learning styles model suggests that most people can be divided into one of three preferred styles of learning. These three styles are as follows (and there is no right or wrong learning style):
Someone with a Visual learning style has a preference for seen or observed things, including pictures, diagrams, demonstrations, displays, handouts, films, flip-chart, etc. These people will use phrases such as ‘show me’, ‘let’s have a look at that’ and will be best able to perform a new task after reading the instructions or watching someone else do it first. These are the people who will work from lists and written directions and instructions.
Someone with an Auditory learning style has a preference for the transfer of information through listening: to the spoken word, of self or others, of sounds and noises. These people will use phrases such as ‘tell me’, ‘let’s talk it over’ and will be best able to perform a new task after listening to instructions from an expert. These are the people who are happy being given spoken instructions over the telephone, and can remember all the words to songs that they hear!
Someone with a Kinaesthetic learning style has a preference for physical experience - touching, feeling, holding, doing, practical hands-on experiences. These people will use phrases such as ‘let me try’, ‘how do you feel?’ and will be best able to perform a new task by going ahead and trying it out, learning as they go. These are the people who like to experiment, hands-on, and never look at the instructions first!
People commonly have a main preferred learning style, but this will be part of a blend of all three. Some people have a very strong preference; others have a more even mixture of two or, less commonly, three styles.
When you know your preferred learning style(s) you understand the type of learning that best suits you. This enables you to choose the types of learning that work best for you.
There is no right or wrong learning style. The point is that there are types of learning that are right for your own preferred learning style.
Reading Assessment, Second Edition: A Primer for Teachers and Coaches (Solving Problems in the Teaching of Literacy)
Day-to-Day Assessment in the Reading Workshop: Making Informed Instructional Decisions in Grades 3-6
3-Minute Reading Assessments: Word Recognition, Fluency, and Comprehension: Grades 5-8
Friday, July 23, 2010
VOICE SYSTEM AND ASPECTS IN LINGUISTICS
The clips are visualizations of the voice system and aspects in English and Indonesian. Hopefully, they will be useful and fun to use in a linguistics class that might otherwise be boring.
Second Language Acquisition: An Advanced Resource Book (Routledge Applied Linguistics)
Classification and Modeling with Linguistic Information Granules: Advanced Approaches to Linguistic Data Mining
Grammar and Context: An Advanced Resource Book (Routledge Applied Linguistics)
Tuesday, July 20, 2010
Write and Earn!
PARTNERSHIP PROGRAM
English teachers, English learners, English practitioners, English scholars, and anyone else interested in English teaching and learning are invited to contribute and share related articles, research, papers, theses, interesting lesson plans, educative English games, inspirational teaching aids, or English teaching and learning videos on this blog.
PURPOSE
Your articles will not only be publicly recognized but also rewarded. An article contribution can earn the author up to 70% of the advertisement revenue on each displayed post page. This blog is not meant to be profit-oriented; rather, it aims to encourage English writing and creative thinking for educational purposes. The incentive is simply to recognize your work and to provide a lifetime reward for as long as the articles appear on this blog.
TERMS AND CONDITIONS
By submitting an article to the admin, you agree to abide by the submission eligibility terms stated here. Every submission will be manually reviewed by the admin to check its originality, language use, and grammar. An article must be well composed and at least 100 words long. Plagiarism is strictly prohibited. You may rephrase or rewrite any published article in your own words, provided you credit the original writer and embed the source link whenever possible. You must not defraud any party by including copyrighted materials without citation in your articles. You must not promote your article links on automatically generated traffic websites such as traffic exchanges, auto-surf websites, or other similar services, since this will violate the advertisers' TOS (Terms of Service).
PAYOUT
Earnings will be calculated on the 10th of every month. Payouts will be made on the 15th whenever earnings reach $10 or more. Payouts are based on the credential levels explained below and will be delivered via Paypal as the sole secure payment method. For this purpose, you must provide your Paypal ID when you submit your article. If you don't have a Paypal ID, you may create one free here:
Whenever earnings are less than $10, they will be carried over to the next payment period. Earnings will be reported monthly by email.
Credential levels are determined according to the number of submitted articles:
Level 1: 1-10 submitted original articles earn the author 50% of the ads revenue.
Level 2: 11-20 submitted original articles earn the author 60% of the ads revenue.
Level 3: 21 or more submitted original articles earn the author 70% of the ads revenue.
PROCEDURE
If you are interested in participating in this partnership program, just leave a message in the comment box with your email address or Skype name (if any) as your contact information, and register as a follower of this blog. You will then be informed where to send your articles. Please understand that, for your privacy, your contact information will be treated confidentially and will not be revealed to any third parties.