Introduction
Many educational research methods are descriptive; that is, they set out to describe and to interpret what is. Descriptive research, according to Best, is concerned with:
“conditions or relationships that exist; practices that prevail; beliefs, points of view, or attitudes that are held; processes that are going on; effects that are being felt; or trends that are developing. At times, descriptive research is concerned with how what is or what exists is related to some preceding event that has influenced or affected a present condition or event” (Best 1970).
Such studies look at individuals, groups, institutions, methods and materials in order to describe, compare, contrast, classify, analyse and interpret the entities and the events that constitute their various fields of inquiry.
Typically, surveys gather data at a particular point in time with the intention of describing the nature of existing conditions, or identifying standards against which existing conditions can be compared, or determining the relationships that exist between specific events. Thus, surveys may vary in their levels of complexity from those that provide simple frequency counts to those that present relational analysis.
Surveys may be further differentiated in terms of their scope. A study of contemporary developments in post-secondary education, for example, might encompass the whole of western Europe; a study of subject choice, on the other hand, might be confined to one secondary school. The complexity and scope of surveys in education can be illustrated by reference to familiar examples. The surveys undertaken for the Plowden Committee on primary school children (Central Advisory Council for Education 1967) collected a wealth of information on children, teachers and parents and used sophisticated analytical techniques to predict pupil attainment. By contrast, the small-scale survey of Jackson and Marsden (1962) involved a detailed study of the backgrounds and values of 88 working-class adults who had achieved success through selective secondary education. Similarly, a study of training in multicultural perspectives by Bimrose and Bayne (1995) used only 28 participants in the survey research.
A survey has several characteristics and several claimed attractions; typically it is used to scan a wide field of issues, populations, programmes etc. in order to measure or describe any generalized features. It is useful (Morrison, 1993: 38–40) in that it usually:
- gathers data on a one-shot basis and hence is economical and efficient
- represents a wide target population (hence there is a need for careful sampling)
- generates numerical data
- provides descriptive, inferential and explanatory information
- manipulates key factors and variables to derive frequencies (e.g. the numbers registering a particular opinion or test score)
- gathers standardized information (i.e. using the same instruments and questions for all participants)
- ascertains correlations (e.g. to find out if there is any relationship between gender and scores)
- presents material which is uncluttered by specific contextual factors
- captures data from multiple choice, closed questions, test scores or observation schedules
- supports or refutes hypotheses about the target population
- generates accurate instruments through their piloting and revision
- makes generalizations about, and observes patterns of response in, the targets of focus
- gathers data which can be processed statistically
- usually relies on large-scale data gathering from a wide population in order to enable generalizations to be made about given factors or variables.
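Two of the features listed above, frequency counts and correlations between variables (e.g. gender and scores), can be sketched in a few lines of code. This is a minimal illustration only: the toy responses and variable names below are invented, not drawn from any survey cited in the text.

```python
# Illustrative sketch only: the toy data and names below are hypothetical,
# not taken from any survey discussed in the text.
from collections import Counter

# Each record is (gender, opinion on a survey item, test score)
responses = [
    ("F", "agree", 62), ("M", "disagree", 55), ("F", "agree", 70),
    ("M", "agree", 58), ("F", "neutral", 64), ("M", "disagree", 51),
]

# Simple frequency count: the numbers registering each opinion
frequencies = Counter(opinion for _, opinion, _ in responses)

def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlation between gender (coded 0 = M, 1 = F) and test score
gender_coded = [0 if g == "M" else 1 for g, _, _ in responses]
scores = [s for _, _, s in responses]
r = pearson(gender_coded, scores)
```

The same frequency and correlation statistics scale directly from this toy list to the large data sets that surveys typically generate.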
Examples of surveys are as follows:
- opinion polls, which refute the notion that only interviews can catch opinions
- test scores (e.g. the results of testing students nationally or locally)
- students’ preferences for particular courses (e.g. humanities, sciences)
- reading surveys (e.g. Southgate et al.’s (1981) example of teaching practices in the United Kingdom).
Web sites for the National Child Development Study (NCDS) can be found at:
- http://www.cls.ioe.ac.uk/Ncds/nibntro.htm
- http://www.cls.ioe.ac.uk/Ncds/narchive.htm
- http://www.mimas.ac.uk/surveys.ncds/
- http://www.mimas.ac.uk/surveys.ncds/ncds info.html
Surveys in education often use test results, self-completion questionnaires and attitude scales. A researcher using this model typically seeks to gather large-scale data from as representative a sample population as possible, in order to say with a measure of statistical confidence that certain observed characteristics occur with a degree of regularity, that certain factors cluster together (see Chapter 25) or correlate with each other (correlation and covariance), or that they change over time and location. For example, test scores may be used to ascertain the ‘value-added’ dimension of education, perhaps using regression analysis and an analysis of residuals to determine the difference between a predicted and an observed score, or regression analysis may use data from one variable to predict an outcome on another variable.
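The ‘value-added’ idea mentioned above, comparing a predicted score with an observed score via regression residuals, can be sketched as follows. The scores and variable names are invented for illustration; a real value-added analysis would involve far larger samples and more covariates.

```python
# Hedged sketch of the 'value-added' analysis described in the text:
# regress observed scores on a prior measure, then inspect the residuals
# (observed minus predicted). All data here are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

prior_scores = [40, 45, 50, 55, 60, 65]   # e.g. an intake test (hypothetical)
later_scores = [44, 46, 55, 54, 66, 64]   # e.g. an outcome test (hypothetical)

a, b = fit_line(prior_scores, later_scores)
residuals = [y - (a + b * x) for x, y in zip(prior_scores, later_scores)]
# A positive residual suggests a pupil scored above what the prior measure
# alone would predict ('value added'); a negative residual, below it.
```

By construction the least-squares residuals sum to zero, so the analysis focuses on which individual residuals are large and in which direction.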
Surveys can be exploratory, in which no assumptions or models are postulated, and in which relationships and patterns are explored (e.g. through correlation, regression, stepwise regression and factor analysis). They can also be confirmatory, in which a model, causal relationship or hypothesis is tested (see the discussion of exploratory and confirmatory analysis in Part Five). Surveys can be descriptive or analytic (e.g. to examine relationships). Descriptive surveys simply describe data on variables of interest, while analytic surveys operate with hypothesized predictor or explanatory variables that are tested for their influence on dependent variables.
Most surveys will combine nominal data on participants’ backgrounds and relevant personal details with other scales (e.g. attitude scales, data from ordinal, interval and ratio measures). Surveys are useful for gathering factual information, data on attitudes and preferences, beliefs and predictions, behaviour and experiences – both past and present (Weisberg et al. 1996).
The attractions of a survey lie in its appeal to generalizability or universality within given parameters, its ability to make statements which are supported by large data banks and its ability to establish the degree of confidence which can be placed in a set of findings.
On the other hand, a survey approach is probably unsuitable if a researcher wishes to catch local, institutional or small-scale factors and variables – to portray the specificity of a situation, its uniqueness and particular complexity, its interpersonal dynamics – or to explain why a situation occurred, why a person or group of people returned a particular set of results or behaved in a particular way, or how a programme changes and develops over time. A survey’s degree of explanatory potential or fine detail is limited; these are lost to broad-brush generalizations which are free of temporal, spatial or local contexts, i.e. its appeal rests largely on positivism. The individual instance is sacrificed to the aggregated response (which has the attraction of anonymity, non-traceability and confidentiality for respondents).
Surveys typically, though by no means exclusively, rely on large-scale data, e.g. from questionnaires, test scores, attendance rates, results of public examinations etc., all of which enable comparisons to be made over time or between groups. This is not to say that surveys cannot be undertaken on a small-scale basis, as indeed they can; rather it is to say that the generalizability of such small-scale data will be slight. In surveys the researcher is usually very clearly an outsider; indeed, questions of reliability must attach themselves to researchers conducting survey research on their own subjects, such as participants in a course that they have been running (e.g. Bimrose and Bayne 1995; Morrison 1997). Further, it is critical that attention is paid to rigorous sampling, otherwise the basis of the survey’s applicability to wider contexts is seriously undermined. Non-probability samples tend to be avoided in surveys if generalizability is sought; probability sampling will tend to lead to generalizability of the data collected.
Some preliminary considerations
Three prerequisites to the design of any survey are: the specification of the exact purpose of the inquiry; the population on which it is to focus; and the resources that are available. Hoinville and Jowell’s (1978) consideration of each of these key factors in survey planning can be illustrated in relation to the design of an educational inquiry.
The purpose of the inquiry
First, a survey’s general purpose must be translated into a specific central aim. Thus, ‘to explore teachers’ views about in-service work’ is somewhat nebulous, whereas ‘to obtain a detailed description of primary and secondary teachers’ priorities in the provision of in-service education courses’ is reasonably specific.
Having decided upon and specified the primary objective of the survey, the second phase of the planning involves the identification and itemizing of subsidiary topics that relate to its central purpose. In our example, subsidiary issues might well include: the types of courses required; the content of courses; the location of courses; the timing of courses; the design of courses; and the financing of courses.
The third phase follows the identification and itemization of subsidiary topics and involves formulating specific information requirements relating to each of these issues. For example, with respect to the type of courses required, detailed information would be needed about the duration of courses (one meeting, several meetings, a week, a month, a term or a year), the status of courses (non-award-bearing, or award-bearing with certificate, diploma or degree granted by a college or university), and the orientation of courses (theoretically oriented, involving lectures, readings, etc., or practically oriented, involving workshops and the production of curriculum materials).
The population upon which the survey is focused
The second prerequisite to survey design, the specification of the population to which the inquiry is addressed, affects decisions that researchers must make both about sampling and resources. In our hypothetical survey of in-service requirements, for example, we might specify the population as ‘those primary and secondary teachers employed in schools within a thirty-mile radius of Loughborough University’. In this case, the population is readily identifiable and, given sufficient resources to contact every member of the designated group, sampling decisions do not arise. Things are rarely so straightforward, however. Often the criteria by which populations are specified (‘severely challenged’, ‘under-achievers’, ‘intending teachers’ or ‘highly anxious’) are difficult to operationalize. Populations, moreover, vary considerably in their accessibility; pupils and student teachers are relatively easy to survey, gypsy children and headteachers are more elusive. More importantly, in a large survey researchers usually draw a sample from the population to be studied; rarely do they attempt to contact every member. We deal with the question of sampling shortly.
The resources available
The third important factor in designing and planning a survey is the financial cost. Sample surveys are labour-intensive (see Davidson 1970), the largest single expenditure being the fieldwork, where costs arise out of the interviewing time, travel time and transport claims of the interviewers themselves. There are additional demands on the survey budget. Training and supervising the panel of interviewers can often be as expensive as the costs incurred during the time that they actually spend in the field. Questionnaire construction, piloting, printing, posting and coding, together with computer processing, all eat into financial resources.
Proposals from intending education researchers seeking governmental or private funding are often weakest in the amount of time and thought devoted to a detailed planning of the financial implications of the projected inquiries. (In this chapter we confine ourselves from this point to a discussion of surveys based on self-completion questionnaires.)
Planning a survey
Whether the survey is large scale and undertaken by some governmental bureau or small scale and carried out by the lone researcher, the collection of information typically involves one or more of the following data-gathering techniques: structured or semi-structured interviews, self-completion or postal questionnaires, telephone interviews, Internet surveys, standardized tests of attainment or performance, and attitude scales.
The process moves from the general to the specific. A general research topic is broken down into complementary issues and questions, and, for each component, questions are set. As will be discussed in relation to questionnaires, it is important, in the interests of reliability and validity, to have several items or questions for each component issue, as this does justice to the all-round nature of the topic. Sapsford (1999: 34–40) suggests that there are four main considerations in planning a survey:
- Problem definition: deciding what kinds and contents of answers are required; what hypotheses there are to be tested; what variables there are to explore
- Sample selection: what is the target population; how can access and representativeness be assured; what other samples will need to be drawn for the purpose of comparison
- Design of measurements: what will be measured, and how (i.e. what metrics will be used); what variables will be required; how reliability and validity will be assured
- Concern for participants: protection of confidentiality and anonymity; avoidance of pain to the respondents; avoiding harm to those who might be affected by the results; avoiding over-intrusive questions; avoiding coercion; informed consent.
A fourteen-stage process of planning a survey can be considered:
1. Define the objectives.
2. Decide the kind of survey required.
3. Formulate research questions or hypotheses (if appropriate): the null hypothesis and alternative hypothesis.
4. Decide the issues on which to focus.
5. Decide the information that is needed to address the issues.
6. Decide the sampling required.
7. Decide the instrumentation and the metrics required.
8. Generate the data collection instruments.
9. Decide how the data will be collected (e.g. postal survey, interviews).
10. Pilot the instruments and refine them.
11. Train the interviewers (if appropriate).
12. Collect the data.
13. Analyse the data.
14. Report the results.
Rosier (1997) suggests that the planning of a survey will need to include clarification of:
- The research questions to which answers need to be provided.
- The conceptual framework of the survey, specifying in precise terms the concepts that will be used and explored.
- Operationalizing the research questions (e.g. into hypotheses).
- The instruments to be used for data collection, e.g. to chart or measure background characteristics of the sample (often nominal data), academic achievements (e.g. examination results, degrees awarded), attitudes and opinions (often using ordinal data from rating scales) and behaviour (using observational techniques).
- Sampling strategies and subgroups within the sample (unless the whole population is being surveyed, e.g. through census returns or nationally aggregated test scores etc.).
- Pre-piloting the survey.
- Piloting the survey.
- Data collection practicalities and conduct (e.g. permissions, funding, ethical considerations, response rates).
- Data preparation (e.g. coding, data entry for computer analysis, checking and verification).
- Data analysis (e.g. statistical processes, construction of variables and factor analysis, inferential statistics).
- Reporting the findings (answering the research questions).
It is important to pilot and pre-pilot a survey. The difference between the pre-pilot and the pilot is significant. Whereas the pre-pilot is usually a series of open-ended questions that are used to generate categories for closed, typically multiple choice questions, the pilot is used to test the actual survey instrument itself.
A rigorous survey, then, formulates clear, specific objectives and research questions, ensures that the instrumentation, sampling and data types are appropriate to yield answers to the research questions, and undertakes as sophisticated a level of data analysis as the data will sustain (but no more!).
Survey sampling
Sampling is a key feature of a survey approach. Because questions about sampling arise directly from the second of our preliminary considerations, that is, defining the population upon which the survey is to focus, researchers must take sampling decisions early in the overall planning of a survey. We have already seen that due to factors of expense, time and accessibility, it is not always possible or practical to obtain measures from a population. Researchers endeavour therefore to collect information from a smaller group or subset of the population in such a way that the knowledge gained is representative of the total population under study. This smaller group or subset is a ‘sample’. Notice how competent researchers start with the total population and work down to the sample. By contrast, novices work from the bottom up, that is, they determine the minimum number of respondents needed to conduct a successful survey. However, unless they identify the total population in advance, it is virtually impossible for them to assess how representative the sample is that they have drawn. There are two methods of sampling. One yields probability samples in which, as the term implies, the probability of selection of each respondent is known. The other yields non-probability samples, in which the probability of selection is unknown.
Probability samples include:
- simple random samples
- systematic samples
- stratified samples
- cluster samples
- stage samples
- multi-phase samples.
Their appeal is to the generalizability of the data that are gathered. Non-probability samples include:
- convenience sampling
- quota sampling
- dimensional sampling
- purposive sampling
- snowball sampling.
These kinds of sample do not seek to generalize from the data collected. Each type of sample seeks only to represent itself. The researcher will need to decide the sampling strategy to be used on the basis of fitness for purpose, in parallel with considerations of, for example, the representativeness of the sample, the desire to generalize, the access to the sample, and the size of the sample.
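Three of the probability strategies listed above (simple random, systematic and stratified sampling) can be sketched in code. The population, strata and sample sizes below are invented for illustration; real surveys would work from an actual sampling frame.

```python
# Minimal sketches of three probability-sampling strategies from the list
# above. The sampling frame and strata here are hypothetical.
import random

random.seed(42)  # fixed seed so the example is reproducible
population = [f"teacher_{i:03d}" for i in range(300)]  # the sampling frame

# Simple random sample: every member has an equal, known chance of selection
simple = random.sample(population, 30)

# Systematic sample: every k-th member of the frame, from a random start
k = len(population) // 30          # sampling interval
start = random.randrange(k)
systematic = population[start::k][:30]

# Stratified sample: draw randomly within strata (e.g. primary vs secondary),
# with the sample size allocated proportionally to each stratum's size
strata = {"primary": population[:180], "secondary": population[180:]}
stratified = []
for name, members in strata.items():
    share = round(30 * len(members) / len(population))
    stratified.extend(random.sample(members, share))
```

Each strategy yields a sample of the same size, but with different guarantees: stratified sampling, for instance, ensures that both primary and secondary teachers appear in their population proportions, which a simple random sample only achieves on average.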
Reference
Cohen, Louis, Lawrence Manion & Keith Morrison. (2007). Research Methods in Education. London & New York: Routledge.