Monday, November 28, 2011

23 Ways To Speed Up Windows XP

Since defragging the disk won't do much to improve Windows XP performance, here are 23 suggestions that will. Each can enhance the performance and reliability of your customers' PCs. Best of all, most of them will cost you nothing.
1.) To decrease a system's boot time and increase system performance, use the money you save by not buying defragmentation software -- the built-in Windows defragmenter works just fine -- and instead equip the computer with an Ultra-133 or Serial ATA hard drive with 8-MB cache buffer.

2.) If a PC has less than 512 MB of RAM, add more memory. This is a relatively inexpensive and easy upgrade that can dramatically improve system performance.

3.) Ensure that Windows XP is utilizing the NTFS file system. If you're not sure, here's how to check: First, double-click the My Computer icon, right-click on the C: Drive, then select Properties. Next, examine the File System type; if it says FAT32, then back up any important data. Next, click Start, click Run, type CMD, and then click OK. At the prompt, type CONVERT C: /FS:NTFS and press the Enter key. This process may take a while; it's important that the computer be uninterrupted and virus-free. The file system used by the bootable drive will be either FAT32 or NTFS. I highly recommend NTFS for its superior security, reliability, and efficiency with larger disk drives.

4.) Disable file indexing. The indexing service extracts information from documents and other files on the hard drive and creates a "searchable keyword index." As you can imagine, this process can be quite taxing on any system.

The idea is that the user can search for a word, phrase, or property inside a document, should they have hundreds or thousands of documents and not know the file name of the document they want. Windows XP's built-in search functionality can still perform these kinds of searches without the Indexing service. It just takes longer. The OS has to open each file at the time of the request to help find what the user is looking for.

Most people never need this feature of search. Those who do are typically in a large corporate environment where thousands of documents are located on at least one server. But if you're a typical system builder, most of your clients are small and medium businesses. And if your clients have no need for this search feature, I recommend disabling it.

Here's how: First, double-click the My Computer icon. Next, right-click on the C: Drive, then select Properties. Uncheck "Allow Indexing Service to index this disk for fast file searching." Next, apply changes to "C: subfolders and files," and click OK. If a warning or error message appears (such as "Access is denied"), click the Ignore All button.

5.) Update the PC's video and motherboard chipset drivers. Also, update and configure the BIOS. For more information on how to configure your BIOS properly, see this article on my site.

6.) Empty the Windows Prefetch folder every three months or so. Windows XP can "prefetch" portions of data and applications that are used frequently. This makes processes appear to load faster when called upon by the user. That's fine. But over time, the prefetch folder may become overloaded with references to files and applications no longer in use. When that happens, Windows XP is wasting time, and slowing system performance, by pre-loading them. Nothing critical is in this folder, and the entire contents are safe to delete.

7.) Once a month, run a disk cleanup. Here's how: Double-click the My Computer icon. Then right-click on the C: drive and select Properties. Click the Disk Cleanup button -- it's just to the right of the Capacity pie graph -- and delete all temporary files.

8.) In your Device Manager, double-click on the IDE ATA/ATAPI Controllers device, and ensure that DMA is enabled for each drive you have connected to the Primary and Secondary controller. Do this by double-clicking on Primary IDE Channel. Then click the Advanced Settings tab. Ensure the Transfer Mode is set to "DMA if available" for both Device 0 and Device 1. Then repeat this process with the Secondary IDE Channel.

9.) Upgrade the cabling. As hard-drive technology improves, the cabling requirements to achieve these performance boosts have become more stringent. Be sure to use 80-wire Ultra-133 cables on all of your IDE devices with the connectors properly assigned to the matching Master/Slave/Motherboard sockets. A single device must be at the end of the cable; connecting a single drive to the middle connector on a ribbon cable will cause signaling problems. With Ultra DMA hard drives, these signaling problems will prevent the drive from performing at its maximum potential. Also, because these cables inherently support "cable select," the location of each drive on the cable is important. For these reasons, the cable is designed so drive positioning is explicitly clear.

10.) Remove all spyware from the computer. Use free programs such as AdAware by Lavasoft or SpyBot Search & Destroy. Once these programs are installed, be sure to check for and download any updates before starting your search. Anything either program finds can be safely removed. Any free software that requires spyware to run will no longer function once the spyware portion has been removed; if your customer really wants the program even though it contains spyware, simply reinstall it. For more information on removing spyware, visit this Web Pro News page.

11.) Remove any unnecessary programs and/or items from the Windows Startup routine using the MSCONFIG utility. Here's how: First, click Start, click Run, type MSCONFIG, and click OK. Click the StartUp tab, then uncheck any items you don't want to start when Windows starts. Unsure what some items are? Visit the WinTasks Process Library. It contains known system processes and applications, as well as spyware references and explanations. Or quickly identify them by searching for the filenames using Google or another Web search engine.

12.) Remove any unnecessary or unused programs from the Add/Remove Programs section of the Control Panel.

13.) Turn off any and all unnecessary animations, and disable Active Desktop. In fact, for optimal performance, turn off all animations. Windows XP offers many different settings in this area. Here's how to do it: First, click on the System icon in the Control Panel. Next, click on the Advanced tab. Select the Settings button located under Performance. Feel free to play around with the options offered here, as nothing you can change will alter the reliability of the computer -- only its responsiveness.

14.) If your customer is an advanced user who is comfortable editing their registry, try some of the performance registry tweaks offered at Tweak XP.

15.) Visit Microsoft's Windows update site regularly, and download all updates labeled Critical. Download any optional updates at your discretion.

16.) Update the customer's anti-virus software on a weekly, even daily, basis. Make sure they have only one anti-virus software package installed. Mixing anti-virus software is a sure way to spell disaster for performance and reliability.

17.) Make sure the customer has fewer than 500 type fonts installed on their computer. The more fonts they have, the slower the system will become. While Windows XP handles fonts much more efficiently than did the previous versions of Windows, too many fonts -- that is, anything over 500 -- will noticeably tax the system.

18.) Do not partition the hard drive. Windows XP's NTFS file system runs more efficiently on one large partition. The data is no safer on a separate partition, and a reformat is never necessary to reinstall an operating system. The same excuses people offer for using partitions apply to using a folder instead. For example, instead of putting all your data on the D: drive, put it in a folder called "D drive." You'll achieve the same organizational benefits that a separate partition offers, but without the degradation in system performance. Also, your free space won't be limited by the size of the partition; instead, it will be limited by the size of the entire hard drive. This means you won't need to resize any partitions, ever. That task can be time-consuming and also can result in lost data.

19.) Check the system's RAM to ensure it is operating properly. I recommend using a free program called MemTest86. The download will make a bootable CD or diskette (your choice), which will run 10 extensive tests on the PC's memory automatically after you boot to the disk you created. Allow all tests to run until at least three passes of the 10 tests are completed. If the program encounters any errors, turn off and unplug the computer, remove a stick of memory (assuming you have more than one), and run the test again. Remember, bad memory cannot be repaired, but only replaced.

20.) If the PC has a CD or DVD recorder, check the drive manufacturer's Web site for updated firmware. In some cases you'll be able to upgrade the recorder to a faster speed. Best of all, it's free.

21.) Disable unnecessary services. Windows XP loads a lot of services that your customer most likely does not need. To determine which services you can disable for your client, visit the Black Viper site for Windows XP configurations.

22.) If you're sick of a single Windows Explorer window crashing and then taking the rest of your OS down with it, then follow this tip: open My Computer, click on Tools, then Folder Options. Now click on the View tab. Scroll down to "Launch folder windows in a separate process," and enable this option. You'll have to reboot your machine for this option to take effect.

23.) At least once a year, open the computer's case and blow out all the dust and debris. While you're in there, check that all the fans are turning properly. Also inspect the motherboard capacitors for bulging or leaks. For more information on this leaking-capacitor phenomenon, you can read numerous articles on my site.


Following any of these suggestions should result in noticeable improvements to the performance and reliability of your customers' computers. If you still want to defrag a disk, remember that the main benefit will be to make your data more retrievable in the event of a crashed drive.

How to Change your IP Address?


Before you can change your IP, you need some information. This information includes your IP range, subnet mask, default gateway, DHCP server, and DNS servers.


1. Getting your IP range - Getting information about your IP range is not difficult; I recommend using Neo Trace on your own IP. But for this test, just look at your IP address. Say it's 24.193.110.13: you can generally use the IPs between 24.193.110.1 < [new IP] < 24.193.110.255, but don't use x.x.x.1 or x.x.x.255. To find your IP, simply open a DOS/command prompt window and type ipconfig at the prompt, then look for "IP Address. . . . . . . . . . . . : x.x.x.x".


2. Subnet Mask, Default Gateway, DHCP Server - These are very easy to find: just open a DOS/command prompt window and type 'ipconfig /all' (without the quotes). You should see something like this:
Windows IP Configuration:

Host Name . . . . . . . . . . . . . . : My Computer Name Here
Primary Dns Suffix . . . . . . . . . :
Node Type . . . . . . . . . . . . . . .: Unknown
IP Routing Enabled. . . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No

Ethernet adapter Local Area Connection:

Connection-specific DNS Suffix . . . . . . .: xxxx.xx.x
Description . . . . . . . . . . . . . . . . . . . . : NETGEAR FA310TX Fast Ethernet Adapter (NGRPCI)
Physical Address. . . . . . . . . . . . . . . . . : XX-XX-XX-XX-XX-XX
Dhcp Enabled. . . . . . . . . . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . . . . . . : Yes
IP Address. . . . . . . . . . . . . . . . . . . . . : 24.xxx.xxx.xx
Subnet Mask . . . . . . . . . . . . . . . . . . . .: 255.255.240.0
Default Gateway . . . . . . . . . . . . . . . . . : 24.xxx.xxx.x
DHCP Server . . . . . . . . . . . . . . . . . . . .: 24.xx.xxx.xx
DNS Servers . . . . . . . . . . . . . . . . . . . . : 24.xx.xxx.xxx
24.xx.xxx.xx
24.xx.xxx.xxx
Lease Obtained. . . . . . . . . . . . . . . . . . .:Monday, January 20, 2003 4:44:08 PM
Lease Expires . . . . . . . . . . . . . . . . . . . .:Tuesday, January 21, 2003 3:43:16 AM


This is all the information you will need for now. I suggest you either keep the DOS/command prompt window open or copy and paste the information somewhere; to copy, right-click the window, select the text, and click once.
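If you'd rather compute the range than eyeball it, here is a minimal Python sketch using the standard-library ipaddress module (Python 3.3+); the address and mask below are the example values from above, so substitute your own:

```python
import ipaddress

# Example values based on the ipconfig /all output above -- substitute your own.
my_ip = "24.193.110.13"
subnet_mask = "255.255.240.0"

# strict=False lets us pass a host address instead of a network address.
network = ipaddress.ip_network(f"{my_ip}/{subnet_mask}", strict=False)
hosts = list(network.hosts())  # excludes the network and broadcast addresses

print(f"Network:      {network}")
print(f"Usable range: {hosts[0]} - {hosts[-1]}")
print(f"Host count:   {len(hosts)}")
```

Note that the range derived from the subnet mask is what actually matters; it can be considerably wider than the simple x.x.x.1 to x.x.x.255 guess above.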



III. Changing your IP Address


To change your IP address, first pick any IP you like out of your IP range and write it down or remember it. It is usually a good idea to make sure the IP is dead (except for what we are going to do later on), so ping it via "ping x.x.x.x"; if it times out, you can use it. Now go to My Computer, then Control Panel. In Control Panel, select Network Connections and pick your active connection, probably Local Area Connection or your ISP name. Open that connection by double-clicking its icon in Network Connections, then select Properties under the General tab. In the window that pops up, select Internet Protocol (TCP/IP) and click Properties; it's under the General tab. In this new window, select the General tab, choose "Use the following IP address," and enter the IP you would like to use (the one you picked from your subnet earlier). For the Subnet Mask, enter the subnet mask you got when you ran ipconfig /all; the same goes for the Default Gateway. Now select "Use the following DNS server addresses," enter the DNS information you got earlier, and click OK. To test that it worked, try to refresh a website; if it loads, everything is okay and you are connected. To confirm the change, type ipconfig again; the IP address should have changed to your new one.



IV. DDoS & DoS Protection


If your firewall shows that you are being DDoSed -- usually constant attempted UDP connections several times a second, from either the same IP address (DoS) or multiple IP addresses (DDoS) -- you can protect yourself by changing your IP address via the method I described above.



V. Web servers & Other Services


If you know someone on your IP range is running a web server and he or she has pissed you off, or you just like messing around, you can "steal" their IP address, so any DNS pointing to that IP will show your site instead, because you would be running a web server yourself.

To "steal" an IP is to basically use the changing IP address method above and picking an IP that someone that is running a web server has in use. Often you will be able to keep that IP at least for some time, other times you won’t be able to use it so just keep trying until it works. You your self will need to have a web server on the same port with your message. You can do this with other services too. You can also DoS or DDoS the IP address you are trying to steal to kick him off the net, but I don't recommend as its pretty illegal.

Amplitude


The amplitude is the height of the crest of a sound wave, and it determines the loudness of the sound. Loudness can be harmful if it crosses safe limits.

dB in Sound
Amplitude, or loudness of sound, is measured in dB (decibels). The decibel is not an absolute scale; it is a relative scale, used only for measuring ratios, and the reference depends on the situation. There is no fixed answer to what exactly 1 dB is, the way there is for a kilogram or a kilometre.

Therefore these assumptions come into play, and they are different for real sound and for audio.
When we talk about real sound, the threshold of hearing, which is pure silence, is set at 0 dB, and with that reference we measure the loudness of real sound as we go up by 5 dB, 10 dB, 15 dB, and so on. On this scale, it is assumed that loudness up to about 100 dB is safe to listen to; around 115 dB can be harmful.

dB in Audio
For audio, it is totally reversed: 0 dB is the maximum value that any audio device can handle, and all audio is expressed in negative decibels below it. That is, 0 dB (maximum), -5 dB (loud), -10 dB (less loud), -15 dB (quieter still), and so on.
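As a concrete illustration, here is a minimal Python sketch (the function and variable names are my own) that converts a linear amplitude into decibels relative to full scale using the standard 20 * log10 ratio formula; full scale comes out at 0 dB, quieter signals are negative, and silence heads toward negative infinity:

```python
import math

def amplitude_to_db(amplitude, full_scale=1.0):
    """Convert a linear amplitude to dB relative to full scale (0 dB = maximum)."""
    if amplitude <= 0:
        return float("-inf")  # pure silence: negative infinity dB
    return 20 * math.log10(amplitude / full_scale)

for a in (1.0, 0.5, 0.1, 0.01):
    print(f"amplitude {a:5} -> {amplitude_to_db(a):6.1f} dB")
```

This is also why the dynamic range of digital audio, discussed below, is said to run from negative infinity up to 0 dB.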

Frequency


Frequency is defined as the number of cycles per second and is measured in Hz (hertz). It affects the pitch of the sound: the higher the pitch, the sharper the sound, and the lower the pitch, the deeper the sound. Compare, for example, a typical female voice with a typical male voice (deep or bass).

The range of frequencies a human ear can hear is known as the audible range: 20 Hz to 20,000 Hz. Frequencies below 20 Hz are infrasonic, and frequencies above 20,000 Hz are ultrasonic.
Infrasonic and ultrasonic frequencies are harmless to our hearing in the simple sense that we cannot hear them at all.
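As a quick illustration, here is a tiny Python sketch (the function name is my own) that classifies a frequency against the 20 Hz to 20,000 Hz audible range described above:

```python
def classify_frequency(freq_hz):
    """Classify a frequency relative to the human audible range (20 Hz - 20,000 Hz)."""
    if freq_hz < 20:
        return "infrasonic (below the audible range)"
    if freq_hz > 20000:
        return "ultrasonic (above the audible range)"
    return "audible"

for f in (5, 440, 15000, 30000):
    print(f"{f} Hz -> {classify_frequency(f)}")
```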

Sound


Sound is a form of energy that travels in the form of waves.

Sound is created from a source of energy. When sound is produced, the air molecules, which are at rest in silence, are disturbed; the disturbance moves back and forth, and waves are created that have peaks and troughs (compressions and rarefactions). Humans have a small membrane in the ear called the eardrum, and the eardrum vibrates in exactly the same way the sound waves are vibrating.
The only difference between air and water is that we can see water waves but cannot see waves in air.

Dynamic Range


The difference between the highest and the lowest levels of loudness in audio is known as the dynamic range, i.e. from negative infinity (silence) up to 0 dB (maximum).

In real sound, the dynamic range runs from 0 dB up to the point where our ears get damaged.

Phase


When two waves are totally identical, we say that the waves are in phase.

Constructive Interference:
When we add two identical in-phase waves, the resultant wave has double the amplitude of each original wave. This process is known as constructive interference.

Phase Cancellation:
But if two waves are totally identical yet inverted (180 degrees out of phase) and they are added, the amplitudes cancel out: the resultant has neither a peak nor a trough, only a straight line. This process is known as destructive interference (phase cancellation).
Good examples: karaoke devices, noise-reduction headphones, etc.
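Both effects are easy to see numerically. The following Python sketch is a toy example (the sample count and names are my own): it builds one cycle of a sine wave, adds it to an identical copy (constructive interference), and then adds it to an inverted copy (phase cancellation):

```python
import math

SAMPLES = 8  # a coarse, one-cycle wave is enough to show the effect

wave = [math.sin(2 * math.pi * n / SAMPLES) for n in range(SAMPLES)]
inverted = [-s for s in wave]  # the same wave, 180 degrees out of phase

constructive = [a + b for a, b in zip(wave, wave)]   # doubles the amplitude
cancelled = [a + b for a, b in zip(wave, inverted)]  # flat line: silence

print("in phase:    ", [round(s, 2) for s in constructive])
print("out of phase:", [round(s, 2) for s in cancelled])
```

Noise-reduction headphones exploit exactly this: they play an inverted copy of the incoming noise so the two cancel.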

Audio Sampling


When sound is recorded as audio through any device, it is said to be sampled. Just as pixels are the building blocks of a picture, samples are the building blocks of digital audio: the audio wave is sliced into tiny parts called samples, and the higher the sampling rate, the better the quality of the sound (just like image resolution).


Sample Rate: 
The sample rate is the number of samples per second. The standard is 44,100 Hz (44.1 kHz), which gives good-quality sound.
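As a small illustration, here is a Python sketch (the 440 Hz test tone and the names are my own choices) that slices a continuous sine wave into discrete samples at the standard 44,100 Hz rate:

```python
import math

SAMPLE_RATE = 44100  # samples per second: the standard CD-quality rate
FREQUENCY = 440.0    # an A4 tone, comfortably inside the audible range

def sample_sine(duration_seconds):
    """Slice a continuous sine wave into discrete samples."""
    count = int(SAMPLE_RATE * duration_seconds)
    return [math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE) for n in range(count)]

samples = sample_sine(0.001)  # just one millisecond of audio
print(f"{len(samples)} samples in one millisecond at {SAMPLE_RATE} Hz")
```

Even one millisecond of audio yields 44 samples at this rate; raising the sample rate slices the wave more finely, just as a higher resolution packs in more pixels.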

Difference Between Sound And Audio


Sound is the natural sound produced by natural sources.
But when that sound is recorded in digital form, it is converted into audio.

But what is the final destination of audio?
The speakers! There it is converted back into sound that reaches our ears.
SOUND = Acoustic Energy, and AUDIO = Electronic Energy

Tools And Techniques Of Measurement And Evaluation


VALIDITY:
It is the degree to which a test measures what it is supposed to measure. (L. R. Gay)
  • Test validity refers to the degree to which the test actually measures what it claims to measure.
  • Test validity is also the extent to which inferences, conclusions, and decisions made on the basis of test scores are appropriate and meaningful.
  • Validity is the strength of our conclusions, inferences or propositions.
  • Validity refers to the accuracy of an assessment -- whether or not it measures what it is supposed to measure.
  • If a test is valid, it is almost always reliable.
Measurement of validity:
There are three ways in which validity can be measured.
  • Content validity: The extent to which the content of the test matches the instructional objectives. Example: A semester or quarter exam that only includes content covered during the last six weeks is not a valid measure of the course's overall objectives -- it has very low content validity.
  • Criterion validity: The extent to which scores on the test are in agreement with (concurrent validity) or predict (predictive validity) an external criterion. Example: If the end-of-year math tests in 4th grade correlate highly with the statewide math tests, they would have high concurrent validity.
  • Construct validity: The extent to which an assessment corresponds to other variables, as predicted by some rationale or theory. Example: If you can correctly hypothesize that ESOL students will perform differently on a reading test than English-speaking students (because of theory), the assessment may have construct validity.
In order to have confidence that a test is valid (and therefore the inferences we make based on the test scores are valid), all three kinds of validity evidence should be considered. So, does all this talk about validity and reliability mean you need to conduct statistical analyses on your classroom quizzes? No, it doesn’t. (Although you may, on occasion, want to ask one of your peers to verify the content validity of your major assessments.) However, you should be aware of the basic tenets of validity and reliability as you construct your classroom assessments, and you should be able to help parents interpret scores for the standardized exams.
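For criterion validity in particular, the "agreement" between test scores and an external criterion is normally expressed as a correlation coefficient. Here is a minimal Python sketch with made-up scores for illustration: it computes Pearson's r between a hypothetical end-of-year 4th-grade math test and a statewide math test, where a value near 1.0 would indicate high concurrent validity:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented scores for eight students -- purely illustrative.
class_test = [62, 71, 55, 90, 78, 84, 66, 73]
state_test = [60, 75, 58, 92, 74, 88, 63, 70]

print(f"concurrent validity coefficient: {pearson_r(class_test, state_test):.2f}")
```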
Types of Validity:
There are four types of validity commonly examined in social research.
  1. Conclusion validity asks: is there a relationship between the program and the observed outcome? Or, in our example, is there a connection between the attendance policy and the increased participation we saw?
  2. Internal validity asks: if there is a relationship between the program and the outcome we saw, is it a causal relationship? For example, did the attendance policy cause class participation to increase?
  3. Construct validity is the hardest to understand, in my opinion. It asks: is there a relationship between how I operationalized my concepts in this study and the actual causal relationship I'm trying to study? Or, in our example, did our treatment (the attendance policy) reflect the construct of attendance, and did our measured outcome - increased class participation - reflect the construct of participation? Overall, we are trying to generalize our conceptualized treatment and outcomes to broader constructs of the same concepts.
  4. External validity refers to our ability to generalize the results of our study to other settings. In our example, could we generalize our results to other classrooms?
Characteristics of Validity:
The following characteristics of validity are commonly examined in social research.
  1. Content Validity: How well the sample of test items represents the content the test is designed to measure.
  2. Predictive validity: How well predictions made by a test are confirmed by later behavior of subjects.
  3. Concurrent validity: Similar to predictive validity, but behavior is measured at same time as test.
  4. Construct validity: How well a particular test can be shown to measure a particular construct (a theoretical construction about the nature of human behavior, such as intelligence, anxiety, or creativity).
  5. Face validity: How closely the test appears to measure what it's supposed to measure.

RELIABILITY:
Reliability is the degree to which a test consistently measures whatever it measures.
(L. R. Gay)
Test reliability refers to the degree to which a test is consistent and stable in measuring what it is intended to measure. Most simply put, a test is reliable if it is consistent within itself and across time. Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. In short, it is the repeatability of your measurement. It is the level of internal consistency or stability of the test over time, or the ability of the test to obtain the same score from the same student at different administrations (given the same conditions). Reliability is usually expressed as some sort of correlation coefficient, with values ranging from .00 (low reliability) to 1.00 (perfect reliability).

Reliability refers to the extent to which assessments are consistent. Just as we enjoy having reliable cars (cars that start every time we need them), we strive to have reliable, consistent instruments to measure student achievement. Another way to think of reliability is to imagine a kitchen scale. If you weigh five pounds of potatoes in the morning, and the scale is reliable, the same scale should register five pounds for the potatoes an hour later (unless, of course, you peeled and cooked them). Likewise, instruments such as classroom tests and national standardized exams should be reliable: it should not make any difference whether a student takes the assessment in the morning or afternoon, one day or the next.

Another measure of reliability is the internal consistency of the items. For example, if you create a quiz to measure students' ability to solve quadratic equations, you should be able to assume that if a student gets an item correct, he or she will also get other, similar items correct. The following table outlines three common reliability measures.
  • Stability or Test-Retest: Give the same assessment twice, separated by days, weeks, or months. Reliability is stated as the correlation between scores at Time 1 and Time 2.
  • Alternate Form: Create two forms of the same test (vary the items slightly). Reliability is stated as the correlation between scores on Form 1 and Form 2.
  • Internal Consistency: Compare one half of the test to the other half, or use methods such as Kuder-Richardson Formula 20 (KR20) or Cronbach's Alpha.

Estimation of Reliability:
There are two ways by which reliability is usually estimated
  1. Test/Retest: Test/retest is the more conservative method to estimate reliability. The idea is that you should get the same score on test 1 as you do on test 2.
The three main components to this method are as follows:
o    Implement your measurement instrument at two separate times for each subject;
o    Compute the correlation between the two separate measurements;
o    Assume there is no change in the underlying condition (or trait you are trying to measure) between test 1 and test 2.
  2. Internal Consistency: Internal consistency estimates reliability by grouping questions in a questionnaire that measure the same concept. For example, you could write two sets of three questions that measure the same concept (say, class participation) and, after collecting the responses, run a correlation between those two groups of three questions to determine whether your instrument is reliably measuring that concept.
The primary difference between test/retest and internal consistency estimates of reliability is that test/retest involves two administrations of the measurement instrument, whereas the internal consistency method involves only one administration of that instrument.
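Here is a minimal Python sketch of both estimates, again with invented scores: test/retest correlates two administrations of the same quiz, while the internal-consistency (split-half) estimate correlates two halves of a single administration and then steps the result up with the standard Spearman-Brown correction:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Test/retest: the same quiz given to the same students two weeks apart (made-up data).
test1 = [14, 18, 11, 20, 16, 9, 17, 13]
test2 = [15, 17, 12, 19, 15, 10, 18, 12]
print(f"test-retest reliability: {pearson_r(test1, test2):.2f}")

# Split-half: correlate the odd-item half with the even-item half of one administration,
# then correct the half-test estimate with the Spearman-Brown formula.
odd_half = [7, 9, 5, 10, 8, 4, 9, 6]
even_half = [7, 9, 6, 10, 8, 5, 8, 7]
half_r = pearson_r(odd_half, even_half)
full_r = (2 * half_r) / (1 + half_r)  # Spearman-Brown prophecy formula
print(f"split-half reliability:  {full_r:.2f}")
```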
The Relationship of Reliability and Validity
In order for assessments to be sound, they must be free of bias and distortion. Reliability and validity are two concepts that are important for defining and measuring bias and distortion. Test validity is requisite to test reliability. If a test is not valid, then reliability is moot. In other words, if a test is not valid there is no point in discussing reliability because test validity is required before reliability can be considered in any meaningful way. Likewise, if a test is not reliable it is also not valid.
OBJECTIVITY:
Objectivity is the extent to which the instrument is free from personal error (personal bias), that is, subjectivity on the part of the scorer. (C.V. Good)
The objectivity of a test refers to the degree to which equally competent scorers obtain the same results. (Norman E. Gronlund)
ACCURACY:
A term used to describe the size of the relative error. (C.V. Good)
ADEQUACY:
A characteristic evidenced by sufficient length to sample widely the behaviour the test is designed to measure. (C.V. Good)


Methods of Data Recording
The assessment techniques in this category may be used with any of the ongoing student activities as well as with the quizzes and tests. The appropriateness of the technique for the purpose intended should act as a guide.
Anecdotal records refer to written descriptions of student progress that a teacher keeps on a day-to-day basis.
A teacher may decide to keep anecdotal records on students' ability to manipulate materials at assessment stations, to work in a group, to work in a test-taking situation, or to complete a project or a written report. There are situations where a teacher will keep anecdotal comments on the development of specific skills related to instructional objectives, on the behavior of a student, or on the attitude expressed or demonstrated by a student. Anecdotal records are as flexible as a teacher wishes to make them.
Observation checklists are lists of criteria a teacher determines are important to observe in students at a particular time. Beside each of the criteria, a notation is made as to whether that particular criterion was observed.
Checklists can be used to record the presence or the absence of knowledge, particular skills, learning processes, or attitudes. They may be used to record such information in relation to written assignments, presentations, classroom performance, test-taking behaviors, individual or group work, fulfillment of the requirements of a contract, self- and peer-assessment of work, or completion of an assessment station. How a teacher wishes to use an observation checklist depends upon the type of student progress information required.
  • Rating Scales
Rating scales have the same usage as observation checklists. The essential difference lies in what is indicated. Observation checklists record the presence or absence of a particular knowledge item, skill, or process. Rating scales record the degree to which they are found or the quality of the performance.
Anecdotal Records Description
An anecdotal record is a written description of the observations made on students. These records are usually collected in a specific book or folder.


Evaluation Context
  • Formative
The very act of recording observations may serve to alert you to some aspect of a student's learning or attitude that may need immediate attention; for example, an outburst caused by frustration.
  • Summative
Since the anecdotal record concentrates on describing incidents of student performance over a period of time, the sequence of anecdotes can serve as a record of the student's development towards long term goals such as lifelong learning, healthy self-concept, cooperative learning, skill development, work/study habits, knowledge attainment, and interest/attitude.
  • Diagnostic
Through the regular spotlighting of a student's performance, areas needing special attention may emerge. Examples include communication skills and personal development. Your anecdotal records may start to show that Billy is consistently having trouble in expressing coherent thoughts. As a consequence, you may decide to investigate the causes of this behavior more thoroughly.
Using Technique to Best Advantage

Entries must be made with appropriate frequency. They should eventually encompass all the students, although some students may warrant more entries than others. Anecdotal records offer you a way of recording aspects of your students' learning that might not be identified by other techniques.
Guidelines for use
  • What to write
First, you write a description of the incident in an objective way by describing what actually happened. Then make further notes on your analysis of the situation, any comments you want to make, and any questions you pose to yourself that may guide further observations.
  • When to use
For many teachers, the time when students are engaged in writing offers an opportunity to demonstrate that teachers are writing, too. You can use a portion of your writing time for recording your anecdotes. Teachers who do not have these opportunities may use times when students are engaged in independent work. In program areas such as physical education and home economics, there are parts of the period when students change clothes or tidy up equipment. You might be able to use these times for recording entries. Whichever scheme is chosen, it should offer regular opportunities for entering observations.
  • How to record
Various formats have been developed. A notebook with each entry dated offers a powerful chronological record, although it is sometimes difficult to locate a particular student. Alphabetized notebooks, looking like large address books, are available, and they permit easy reference by student name. Alternatively, a loose-leaf format may be used so that the entries may be entered chronologically and, at the end of the year, reformatted by student name. One further idea: modern technology has provided us with conveniences for recording and storing student progress data that range from electronic student data files available in various software programs to removable self-stick notes that can be used to record the anecdote and then be affixed to the student record.
Example: No example is required for the open-ended, unstructured anecdotal record. The examples that follow are formats for anecdotal records designed to give you ideas as to how to set up this type of data recording method. Keep in mind these are only examples.
Using the Information for Student Evaluation
While the entries themselves are usually not shown to the student or the parents/guardians, they can form a valuable basis for communication. They allow you to flesh out your year-end reports on the more holistic dimensions of student growth.
Observation Checklists Description
The observation checklist is a listing of specific concepts, skills, processes, or attitudes, the presence or absence of which you wish to record. If the observation checklist is used relatively frequently and over time, a longitudinal profile of a student is assembled and ultimately evaluated.
Evaluation Context
The observation checklist is most appropriately used in situations where you wish to assess your students' abilities, attitudes, or performance in process areas. For example, it can assess communication skills, cooperative learning skills, extent of participation, interest in the topic, and psychomotor skills.
Using Technique to Best Advantage
Used on a single occasion, the observation checklist can provide formative evaluation information for the situation in which it is used. For example, to learn how effective students are when working in groups, a checklist to observe them in a single group session can be used. This will provide information to guide future instruction.
Observation checklists are most useful when collected over time and used summatively or diagnostically. Once you decide to use observation checklists in your evaluation plan, you must use them systematically. They are misleading when used sporadically.
Guidelines for Use
Usually the observation checklist is used during class time. Therefore, it must be simple. The most efficient way to collect data is to record learning progress on four or five students at the same time. If you choose to observe four students per lesson and you have 28 students, you will cover the class once every seven lessons. At the end of the term or unit, you will have several observations on every student. If your class is working in groups, do one group every day. If not, use your seating plan to identify groups of students sitting in the same area. If you choose students alphabetically, you may find that your eyes have to cover too much of the room in order to encompass the selected students.
  • Before the unit or course begins, develop an estimate of what would constitute appropriate learning outcomes for your students. If you intend to use the information for making criterion-referenced judgments, decide on what your criteria will be. You may wish to develop minimum criteria (e.g., "six of the eight behaviors must be observed over the course of the unit"), or you may wish to develop different criteria levels for what would constitute excellent, satisfactory, or unsatisfactory work. Decisions on criteria should be made before the observation sequence begins.
  • Before every class, enter the names of the students, the date, and the activity. During class, pay special attention to the selected group so that you build an impression of their level of competence or execution of the skills, processes, or attitudes you wish to record.
Recording options: You may simply mark an entry on the item's first appearance and leave it at that, or you may record an item's every appearance (for example, with a tally mark for each occurrence). If you develop some measure of degree to describe the item (e.g., !, ?, or X), you have transformed your observation checklist into a rating scale. This is a characteristic of rating scales and checklists that gives you more flexibility. Make sure you record the date and the class on every observation checklist you use.
  • After class, annotate the checklist sheet with any appropriate thoughts. For example, "Fire drill interrupted the group activity - recorded instances are therefore lower than I anticipated." File the checklist sheet with the others so that the class set is available for evaluation at the end of the course or unit. Large envelopes are useful here.
Example: The example checklists are designed to give you ideas as to how to set up this type of data recording technique. Keep in mind these are only examples.
Using the Information for Student Evaluation
Arrange the sheets into piles according to the student groups. Read them all over once or twice to develop a feeling for the overall class picture. For criterion-referenced judgments, refer to the criterion levels you made initially. For norm-referenced judgments, estimate where each student lies relative to the others in the class and make your judgment. If you have looked for very general or broad items, be careful not to over-interpret your data - for example, "On these aspects of the course Kim seems to be performing a little bit more consistently than most of the students." This may be about the level of sophistication that is possible, depending on how you constructed the instrument. For self-referenced judgments, all the checklists on one particular student can be studied, providing a measure of progress over the span of the unit or course. This is one of the most powerful uses of the checklist.
  • Where you can, start with an existing checklist and modify it according to your needs.
  • Choose items that relate to the intended learning outcomes of the unit. If you wish to use checklists in several courses and they have many overlapping items, develop a master list and eliminate those items that are inappropriate for the specific unit or course.
  • Choose items that you can observe or reasonably infer. If an item is too vague (e.g., interest in the subject), you may not be consistent throughout the term in your estimation and recording of it.
  • Keep the list of items manageable. Twelve is about the maximum.
  • Keep the language of the items simple and jargon-free. In that way you can use the checklists at parent-teacher or student-teacher interviews.
Variants

Develop checklists that detail one particular series of components. For example, a checklist on the correct operation of a microscope may be useful in minimum competency situations where something just has to be done correctly.
As previously mentioned, the observation checklist shares many characteristics with the rating scale. This is an advantage that can be a time-saver for you.

Rating Scales Description
Rating scales are measuring instruments that allow representation of the extent to which specific concepts, skills, processes, or attitudes exist in students and their work.
Evaluation Context
Rating scales enable the teacher to record student performance on a wide range of skills and attitudes. They are particularly useful in situations where the student performance can be described along a continuum, such as participation in a debate or skill in preparing a microscope slide.
Guidelines for Use
As the rating scale is usually used during class time, it must be simple to use.
  • Developing the rating scale
Once you decide upon the activity you wish to rate, break it up into its constituent parts. Make the parts as specific as possible so as to increase the scale's reliability. For example, instead of globally rating "performance in debates," decide on what performance criteria you wish to observe in the student. Perhaps "states argument," "demonstrates background preparation," "responds to opposition arguments relevantly" might together give a less inferential picture of the student's performance than the rating on the global behavior alone.
The next task is to develop the scale points. You might use the old stand-by, "very good/good/average/poor/very poor," or you can develop more descriptive scale points. For the criterion mentioned above, "states argument," you could choose to use points based upon how forceful the student was: "very forceful/forceful/average/diffident/very diffident."
  • Before the unit or course begins
If you intend to use the information for making criterion-referenced judgments, decide on what your criteria will be. You may wish to develop minimum criteria such as, "six of the eight behaviors must be rated at the satisfactory level or higher over the course of the unit." Or you may wish to develop different criteria levels for what would constitute excellent, satisfactory, or unsatisfactory work.
  • Before every class
Enter the names of the students, the date, and the activity. This will usually be governed by the activity being rated. If Peter and Petra are facing off in today's debate, then theirs are the names entered.
  • Recording
As you form an impression of student behavior on each criterion, mark the point on the continuum.
  • After class
Examine the individual criteria and decide on an overall rating for each student on the total behavior being rated. File the rating sheet with the others so that the class set is available as a record. Large envelopes are useful here.

Example: In the first example provided, the full sheet on 'Performance in Debates' is developed. The other examples that follow are designed to give you ideas as to how to set up this type of data recording method. Keep in mind these are only examples.
Two Variants
Rating scales have many variants and any book on measurement will offer examples. Two variants are described here.
  • Self-evaluation
Rating scales are very useful in allowing students to perform self-evaluation on their own work. Present the student with a rating scale that covers the aspects of the unit or project which you wish him or her to self-evaluate. Examples may be the amount of effort expended in research, the amount of effort expended on initial organization, the extent to which the student reflected on the initial organization, the amount of reorganization, or the effort spent on writing. The student's ratings on the five-point scale can form a useful starting-point for teacher-student dialogue.
  • Number line
The number line is a variant that is particularly useful with pre-reading students. On a long piece of paper, draw a horizontal line and mark off five to ten intervals. On the extreme left-hand mark, draw a sad face; at the mid-point, draw a neutral face; and at the right-hand mark, draw a happy face. Mount the number line on the wall at a suitable height. The student then places the left palm on the sad face and, in response to a question (such as "How much did you like that story?"), positions the right palm accordingly. If the story was not a success, then both hands overlap on the unhappy face. By training the students to pass by the number line fairly quickly, you can obtain rapid feedback on the question you pose. With experience, more sophisticated questions can be asked. Here are examples from a unit on estimation. "When you guessed the number of peas in the pea pod that I showed you, how sure were you of your answer?" "Now, when you guessed the number of Smarties in the bottle, how sure were you?"

Norm Reference Test (NRT)

Any objective test that is standardized on a group of individuals whose performance is evaluated in relation to the performance of others, in contrast with a criterion-referenced test.

NRT is designed to measure knowledge taught at a particular grade level. NRT involves comparing a student’s performance to that of other students.

Criterion Reference Test (CRT)

A measurement of achievement of specific criteria or skills in terms of absolute levels of mastery. The focus is on the performance of an individual as measured against a standard or criterion, rather than against the performance of others who take the same test, as with a norm-referenced test.

CRT is designed to compare a student's performance with a clearly defined curricular objective, skill, standard, or area of knowledge (rather than with the scores of a sample of other students). CRT involves comparing a student's performance to a well-defined content domain (can do or cannot do).