Survey Methods

The survey is a non-experimental, descriptive research method. Surveys can be useful when a researcher wants to collect data on phenomena that cannot be directly observed (such as opinions on library services). Surveys are used extensively in library and information science to assess attitudes and characteristics of a wide range of subjects, from the quality of user-system interfaces to the reading habits of library users. In a survey, researchers sample a population.
Busha and Harter (1980) state that “a population is any set of persons or objects that possesses at least one common characteristic.” Examples of populations that might be studied are 1) all 1999 graduates of GSLIS at the University of Texas, or 2) all users of the UT General Libraries. Since populations can be quite large, researchers directly question only a sample (i.e., a small proportion) of the population.

Types of Surveys
Data are usually collected through the use of questionnaires, although sometimes researchers directly interview subjects. Surveys can use qualitative (e.g., open-ended questions) or quantitative (e.g., forced-choice questions) measures. There are two basic types of surveys: cross-sectional surveys and longitudinal surveys. Much of the following information was taken from an excellent book on the subject, Survey Research Methods by Earl R. Babbie.

Cross-Sectional Surveys

Cross-sectional surveys are used to gather information on a population at a single point in time.
An example of a cross-sectional survey would be a questionnaire that collects data on how parents feel about Internet filtering, as of March of 1999. A different cross-sectional survey questionnaire might try to determine the relationship between two factors, like the religiousness of parents and their views on Internet filtering.

Longitudinal Surveys

Longitudinal surveys gather data over a period of time. The researcher may then analyze changes in the population and attempt to describe and/or explain them. The three main types of longitudinal surveys are trend studies, cohort studies, and panel studies.
Trend Studies

Trend studies focus on a particular population, which is sampled and scrutinized repeatedly. While the samples are drawn from the same population, they are typically not composed of the same people. Since trend studies may be conducted over a long period of time, they do not have to be carried out by just one researcher or research project: a researcher may combine data from several studies of the same population in order to show a trend. An example of a trend study would be a yearly survey of librarians asking about the percentage of reference questions answered using the Internet.
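Summarizing such a trend study amounts to simple arithmetic on the yearly figures. The sketch below uses invented numbers purely for illustration (they are not from any real survey):

```python
# Hypothetical yearly results from a trend study: each value is the average
# percentage of reference questions answered using the Internet, as reported
# by that year's (different) sample of librarians.
yearly_pct = {1995: 12.0, 1996: 18.5, 1997: 26.0, 1998: 34.5, 1999: 41.0}

def average_yearly_change(results):
    """Average year-over-year change in the surveyed percentage."""
    years = sorted(results)
    changes = [results[b] - results[a] for a, b in zip(years, years[1:])]
    return sum(changes) / len(changes)

print(average_yearly_change(yearly_pct))  # (41.0 - 12.0) / 4 = 7.25 points per year
```

Because each year draws a fresh sample, a summary like this describes the population's trend, not changes in any particular individuals.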
Cohort Studies

Cohort studies also focus on a particular population, sampled and studied more than once, but they have a different focus. For example, a sample of 1999 graduates of GSLIS at the University of Texas could be questioned regarding their attitudes toward paraprofessionals in libraries. Five years later, the researcher could question another sample of 1999 graduates and study any changes in attitude. A cohort study samples the same class every time; if the researcher instead studied the class of 2004 five years later, it would be a trend study, not a cohort study.
Panel Studies

Panel studies allow the researcher to find out why changes in the population are occurring, since they use the same sample of people every time. That sample is called a panel. A researcher could, for example, select a sample of UT graduate students and ask them questions about their library usage. Every year thereafter, the researcher would contact the same people, ask them similar questions, and ask the reasons for any changes in their habits. Panel studies, while they can yield extremely specific and useful explanations, can be difficult to conduct.
They tend to be expensive, they take a lot of time, and they suffer from high attrition rates. Attrition is what occurs when people drop out of the study.

Instrument Design

One criticism of library surveys is that they are often poorly designed and administered (Busha and Harter 1980), resulting in data that is not very accurate but that is energetically quoted and used to make important decisions. Surveys should be just as rigorously designed and administered as any other research method.
Meyer (1998) has identified five preliminary steps that should be taken when embarking upon any research project: 1) choose a topic, 2) review the literature, 3) determine the research question, 4) develop a hypothesis, and 5) operationalize (i.e., figure out how to accurately measure the factors you wish to measure). For research using surveys, two additional considerations are of prime importance: representative sampling and question design. Much of the following information was taken from the book Research Methods in Librarianship: Techniques and Interpretation by Charles H.
Busha and Stephen P. Harter.

Representative Sampling

A sample is representative when it is an accurate proportional representation of the population under study. If you want to study the attitudes of UT students regarding library services, it would not be enough to interview every 100th person who walked into the library. That technique would only measure the attitudes of UT students who use the library, not those who do not. In addition, it would only measure the attitudes of UT students who happened to use the library during the time you were collecting data.
Therefore, the sample would not be very representative of UT students in general. In order to be a truly representative sample, every student at UT would have to have had an equal chance of being chosen to participate in the survey. This is called randomization. If you stood in front of the student union and walked up to students, asking them questions, you still would not have a random sample. You would only be questioning students who happened to come to campus that day, and further, those that happened to walk past the student union.
Those students who never walk that way would have had no chance of being questioned. In addition, you might unintentionally be biased about whom you question. You might unconsciously choose not to question students who look preoccupied or busy, or students who do not look friendly. This would invalidate your results, since your sample would not be randomly selected. If you took a list of UT students, loaded it onto a computer, and instructed the computer to randomly generate a list of 2 percent of all UT students, your sample still might not be representative.
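The computer draw just described is a simple random sample: every name on the list has an equal chance of selection. A minimal sketch in Python, using a made-up roster (the names and roster size are hypothetical):

```python
import random

# Hypothetical roster standing in for the full list of UT students.
roster = [f"student_{i}" for i in range(10_000)]

# Draw a simple random sample of 2 percent: every student on the
# roster has an equal chance of being chosen.
sample = random.sample(roster, k=len(roster) // 50)

print(len(sample))  # 200
```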
What if, purely by chance, the computer did not include the correct proportion of seniors, honors students, or graduate students? In order to further ensure that the sample is truly representative of the population, you might want to use a sampling technique called stratification. In order to stratify a population, you need to decide which sub-categories of the population are likely to differ in ways relevant to your study. For instance, graduate students as a group probably have different opinions than undergraduates regarding library usage, so they should be recognized as a separate stratum of the population.
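A stratified version of the same computer draw randomizes within each stratum, so the sample mirrors the population's proportions by construction. The strata labels and enrollment proportions below are invented for illustration, not real UT figures:

```python
import random

# Hypothetical roster with a stratum label per student; the proportions
# here are illustrative, not real enrollment data.
roster = (
    [("undergraduate", f"ug_{i}") for i in range(7_000)]
    + [("graduate", f"g_{i}") for i in range(2_000)]
    + [("honors", f"h_{i}") for i in range(1_000)]
)

def stratified_sample(population, fraction):
    """Randomly sample within each stratum so the overall sample
    preserves each stratum's share of the population."""
    strata = {}
    for stratum, person in population:
        strata.setdefault(stratum, []).append(person)
    sample = {}
    for stratum, members in strata.items():
        sample[stratum] = random.sample(members, round(len(members) * fraction))
    return sample

sample = stratified_sample(roster, 0.02)  # a 2 percent sample
print({stratum: len(members) for stratum, members in sample.items()})
# {'undergraduate': 140, 'graduate': 40, 'honors': 20}
```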
Once you have a list of the different strata, along with their respective percentages, you could instruct the computer to again randomly select students, this time taking care that a certain percentage are graduate students, a certain percentage are honors students, and a certain percentage are seniors. You would then come up with a more truly representative sample.

Question Design

It is important to design questions very carefully. A poorly designed questionnaire renders results meaningless. There are many factors to consider.
Babbie gives the following pointers:

• Make items clear (don’t assume the person you are questioning knows the terms you are using).
• Avoid double-barreled questions (make sure the question asks only one clear thing).
• Respondents must be competent to answer (don’t ask questions that the respondent won’t be able to answer accurately).
• Questions should be relevant (don’t ask questions on topics that respondents don’t care about or haven’t thought about).
• Short items are best (so that they may be read, understood, and answered quickly).
• Avoid negative items (if you ask whether librarians should not be paid more, it will confuse respondents).
• Avoid biased items and terms (be sensitive to the effect of your wording on respondents).

Busha and Harter provide the following list of 10 hints:

1. Unless the nature of a survey definitely warrants their usage, avoid slang, jargon, and technical terms.
2. Whenever possible, develop consistent response methods.
3. Make questions as impersonal as possible.
4. Do not bias later responses by the wording used in earlier questions.
5. As an ordinary rule, sequence questions from the general to the specific.
6. If closed questions are employed, try to develop exhaustive and mutually exclusive response alternatives.
7. Insofar as possible, place questions with similar content together in the survey instrument.
8. Make the questions as easy to answer as possible.
9. When unique and unusual terms need to be defined in questionnaire items, use very clear definitions.
10. Use an attractive questionnaire format that conveys a professional image.

As may be seen, designing good questions is much more difficult than it seems. One effective way of making sure that questions measure what they are supposed to measure is to test them out first, using small focus groups.

Examples of Survey Research
Graphics, Visualization & Usability (GVU) Center. Graphic, Visualization, & Usability Center’s (GVU) 10th WWW User Survey. Internet WWW page, at URL: http://www.cc.gatech.edu/gvu/user_surveys/survey-1998-10/#methodology Accessed on 8/9/99. The Graphics, Visualization & Usability (GVU) Center at Georgia Tech has conducted a series of WWW user surveys; included here are both the instrument and the results of the tenth.

New Ideas in Pollution Regulation, The World Bank Group. Quantifying Environmental Performance Survey Result. Internet WWW page, at URL: http://www.worldbank.org/nipr/data/envperf/ Accessed on 8/9/99. Includes survey questions, methodology, data sets, and results.

Rylander, Carole Keeton. “Central Administrators and Support Staff Survey Results.” Texas School Performance Review, Mount Pleasant Independent School District. Internet WWW page, at URL: http://www.window.state.tx.us/tpr/tspr/mtplsnt/appendd.htm Accessed on 8/9/99. Sponsored by the Texas Comptroller of Public Accounts, this presents the results and methodology of a management and performance survey of central administrators and support staff at the Mount Pleasant Independent School
District.

Gwartney, Patricia A. ORSP Survey Results – Executive Summary – Fall 1995. Internet WWW page, at URL: http://darkwing.uoregon.edu/~osrl/orsp/orsp.html Accessed on 8/9/99. “As one part of an internal and external review of the University of Oregon’s Office of Research and Sponsored Programs (ORSP), the Oregon Survey Research Laboratory (OSRL) was asked by the office of the Vice Provost for Research to conduct a survey of faculty members and grant administrators. This report summarizes the results of that survey.”