RESEARCH CONNECTIONS
An experiment is a study in which the researcher manipulates the level of some independent variable and then measures the outcome. Experiments are powerful techniques for evaluating cause-and-effect relationships. Many researchers consider experiments the "gold standard" against which all other research designs should be judged. Experiments are conducted both in the laboratory and in real-life situations.
Types of Experimental Design:
There are two basic types of experimental design:
True experiments
Quasi-experiments
The purpose of both is to examine the cause of certain phenomena.
True experiments, in which all the important factors that might affect the phenomena of interest are completely controlled, are the preferred design. Often, however, it is not possible or practical to control all the key factors, so it becomes necessary to implement a quasi-experimental research design.
Similarities between true and quasi-experiments:
Study participants are subjected to some type of treatment or condition
Some outcome of interest is measured
The researchers test whether differences in this outcome are related to the treatment
Differences between true experiments and quasi-experiments:
In a true experiment, participants are randomly assigned to either the treatment or the control group, whereas they are not assigned randomly in a quasi-experiment
In a quasi-experiment, the control and treatment groups differ not only in terms of the experimental treatment they receive, but also in other, often unknown or unknowable, ways. Thus, the researcher must try to statistically control for as many of these differences as possible
Because control is lacking in quasi-experiments, there may be several "rival hypotheses" competing with the experimental manipulation as explanations for observed results
Key Components of Experimental Research Design
The Manipulation of Predictor Variables. In an experiment, the researcher manipulates the factor that is hypothesized to affect the outcome of interest. The factor that is being manipulated is typically referred to as the treatment or intervention. The researcher may manipulate whether research subjects receive a treatment (e.g., antidepressant medicine: yes or no) and the level of treatment (e.g., 50 mg, 75 mg, 100 mg, and 125 mg). Suppose, for example, a group of researchers was interested in the causes of maternal employment. They might hypothesize that the provision of government-subsidized child care would promote such employment. They could then design an experiment in which some subjects would be provided the option of government-funded child care subsidies and others would not. The researchers might also manipulate the value of the child care subsidies in order to determine if higher subsidy values might result in different levels of maternal employment.
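As a minimal sketch of how the manipulated factor might be represented (the condition names and subsidy amounts below are hypothetical, chosen only for illustration), the child care experiment could define one condition per subsidy level, with the control condition receiving no subsidy:

```python
# Minimal sketch: encoding the manipulated factor (child care subsidy level).
# Condition names and dollar amounts are hypothetical, not from the text.
CONDITIONS = {
    "control": 0,         # no subsidy offered
    "low_subsidy": 200,   # modest monthly subsidy (illustrative value)
    "high_subsidy": 500,  # larger monthly subsidy (illustrative value)
}

def describe_conditions(conditions):
    """Print each experimental condition and the subsidy level it receives."""
    for name, amount in conditions.items():
        print(f"{name}: ${amount} per month")

describe_conditions(CONDITIONS)
```

Varying the subsidy amount across conditions is what allows the researchers to ask whether higher subsidy values produce different levels of maternal employment.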
Random Assignment. Study participants are randomly assigned to different treatment groups
All participants have the same chance of being in a given condition
Participants are assigned either to the group that receives the treatment, known as the "experimental group" or "treatment group," or to the group that does not receive the treatment, referred to as the "control group"
Random assignment neutralizes factors other than the independent and dependent variables, making it possible to directly infer cause and effect (a minimal sketch of random assignment follows this list)
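A minimal sketch of simple random assignment, assuming a hypothetical list of participant IDs: shuffling the list and splitting it gives every participant the same chance of ending up in either condition.

```python
import random

def randomly_assign(participant_ids, seed=None):
    """Randomly split participants into a treatment group and a control group."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)          # every ordering is equally likely
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Hypothetical participant IDs, for illustration only.
groups = randomly_assign([f"P{i:03d}" for i in range(1, 21)], seed=42)
print(len(groups["treatment"]), "assigned to treatment;",
      len(groups["control"]), "assigned to control")
```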
Random Sampling. Traditionally, experimental researchers have used convenience sampling to select study participants. However, as research methods have become more rigorous, and the problems with generalizing from a convenience sample to the larger population have become more apparent, experimental researchers are increasingly turning to random sampling. In experimental policy research studies, participants are often randomly selected from program administrative databases and randomly assigned to the control or treatment groups.
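A minimal sketch of that two-stage process, assuming a hypothetical list of case IDs standing in for a program administrative database: a random sample is drawn first, and only the sampled cases are then randomly assigned to the two groups.

```python
import random

rng = random.Random(7)

# Hypothetical administrative database: one record ID per program participant.
admin_database = [f"case-{i:04d}" for i in range(1, 1001)]

# Stage 1 (random sampling): draw the study sample from the full database.
study_sample = rng.sample(admin_database, k=100)

# Stage 2 (random assignment): split the sampled cases into two groups.
rng.shuffle(study_sample)
treatment_group = study_sample[:50]
control_group = study_sample[50:]

print(len(treatment_group), "in treatment,", len(control_group), "in control")
```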
Validity of Results. Experiments have two types of validity: internal and external. It is often difficult to achieve both in social science research experiments.
Internal Validity. The extent to which researchers provide compelling evidence that the causal (independent) variable causes changes in the outcome (dependent) variable. To do this, researchers must rule out other potential explanations for the changes in the outcome variable.
When an experiment is internally valid, we are certain that the independent variable (e.g., child care subsidies) caused the outcome of the study (e.g., maternal employment)
When subjects are randomly assigned to treatment or control groups, we can assume that the independent variable caused the observed outcomes because the two groups should not have differed from one another at the start of the experiment
For example, consider the child care subsidy example above. Because research subjects were randomly assigned to the treatment (child care subsidies available) and control (no child care subsidies available) groups, the two groups should not have differed at the outset of the study. If, after the intervention, mothers in the treatment group were more likely to be working, we can assume that the availability of child care subsidies promoted maternal employment (a sketch of this comparison follows this list)
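As a sketch of how that outcome comparison might be tested (all counts below are invented for illustration), a hand-rolled two-proportion z-test compares the employment rate in the subsidy group with the rate in the control group:

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two group proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical results: 70 of 100 subsidized mothers employed vs. 55 of 100 controls.
diff, z, p = two_proportion_ztest(70, 100, 55, 100)
print(f"difference in employment rates = {diff:.2f}, z = {z:.2f}, p = {p:.3f}")
```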
One potential threat to internal validity in experiments occurs when participants either drop out of the study or refuse to participate in the study. If particular types of individuals drop out or refuse to participate more often than individuals with other characteristics, this is called differential attrition. For example, suppose an experiment was conducted to assess the effects of a new reading curriculum. If the new curriculum was so demanding that many of the slowest readers dropped out of school, the school with the new curriculum would experience an increase in average reading scores. The reason for the increase, however, is that the worst readers left the school, not that the new curriculum improved students' reading skills (a small simulation illustrating this bias follows the definition below).
Differential attrition: Differential or selective attrition occurs when the rates of dropping out of or leaving a study with several data collection waves (e.g., a longitudinal study or experimental research) vary across the different study groups. This is particularly troublesome when the characteristics of those who drop out are systematically different from those who remain, and may introduce bias in the study findings.
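A small simulation, with invented score distributions, makes the bias concrete: even though the "new curriculum" here has no true effect, dropping the weakest readers from the treatment school raises its average score.

```python
import random
import statistics

rng = random.Random(1)

# Invented baseline reading scores; the new curriculum has no true effect,
# so both schools are drawn from the same distribution.
control_scores = [rng.gauss(100, 15) for _ in range(200)]
treatment_scores = [rng.gauss(100, 15) for _ in range(200)]

# Differential attrition: the 20% weakest readers leave the treatment school.
treatment_scores.sort()
remaining_treatment_scores = treatment_scores[40:]

print(f"control school mean:            {statistics.mean(control_scores):.1f}")
print(f"treatment school mean (biased): {statistics.mean(remaining_treatment_scores):.1f}")
```

The apparent gain reflects who remained in the sample, not any improvement caused by the curriculum.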
External Validity. The degree to which the results of a study can be generalized beyond the study sample to a larger population.
External validity is also of particular concern in social science experiments
It can be very difficult to generalize experimental results to groups that were not included in the study
Studies that randomly select participants from the most diverse and representative populations are more likely to have external validity
The use of random sampling techniques makes it easier to generalize the results of studies to other groups
For example, suppose a research study shows that a new curriculum improved the reading comprehension of third-grade children in Iowa. To assess the study's external validity, you would ask whether this new curriculum would also be effective with third graders in New York or with children in other elementary grades.
Ethics. It is particularly important in experimental research to follow ethical guidelines. Protecting the health and safety of research subjects is imperative. To assure subject safety, all researchers should have their project reviewed by an Institutional Review Board (IRB). The National Institutes of Health provides strict guidelines for project approval. Many of these guidelines are based on the Belmont Report.
The basic ethical principles:
Respect for persons -- requires that research subjects are not coerced into participating in a study and requires the protection of research subjects who have diminished autonomy
Beneficence -- requires that experiments do not harm research subjects, and that researchers minimize the risks for subjects while maximizing the benefits for them
Justice -- requires that all forms of differential treatment among research subjects be justified
Advantages and Disadvantages of Experimental Design
Advantages. The environment in which the research takes place can often be carefully controlled. Consequently, it is easier to estimate the true effect of the variable of interest on the outcome of interest.
Disadvantages. It is often difficult to assure the external validity of the experiment, due to the frequently nonrandom selection processes and the artificial nature of the experimental context.