Wednesday, 3 December 2014

RESEARCH DESIGN - QUANTITATIVE CLASS, 1/11/2014

Experimental Research

Experimental research is one of the most powerful research methodologies that researchers can use. Of the many types of research that might be used, the experiment is the best way to establish cause-and-effect relationships among variables.

The Uniqueness of Experimental Research

Of all the research methodologies described in this book, experimental research is unique in two very important respects: it is the only type of research that directly attempts to influence a particular variable, and, when properly applied, it is the best type for testing hypotheses about cause-and-effect relationships.

In an experimental study, researchers look at the effect(s) of at least one independent variable on one or more dependent variables. The independent variable in experimental research is also frequently referred to as the experimental, or treatment, variable. The dependent variable, also known as the criterion, or outcome, variable, refers to the results or outcomes of the study.



The major characteristic of experimental research
that distinguishes it from all other types of research is
that researchers manipulate the independent variable.

Independent variables frequently manipulated in educational research
include methods of instruction, types of assignment,
learning materials, rewards given to students, and types
of questions asked by teachers.

Dependent variables that are frequently studied include achievement, interest in a subject, attention span, motivation, and attitudes toward school.

Experimental research, therefore, enables researchers
to go beyond description and prediction, beyond the
identification of relationships, to at least a partial determination
of what causes them.



Some actual examples of the kinds of experimental studies
that have been conducted by educational researchers are
as follows:
• The effect of small classes on instruction.
• The effect of early reading instruction on growth
rates of at-risk kindergarteners.
• The use of intensive mentoring to help beginning
teachers develop balanced instruction.
• The effect of lotteries on Web survey response rates.
• Introduction of a course on bullying into preservice
teacher-training curriculum.
• Using social stories to enhance the interpersonal conflict
resolution skills of children with learning disabilities.
• Improving the self-concept of students through the
use of hypnosis.

Essential Characteristics of Experimental Research

COMPARISON OF GROUPS

An experiment usually involves two groups of subjects,
an experimental group and a control or a comparison
group, although it is possible to conduct an experiment
with only one group (by providing all treatments to
the same subjects) or with three or more groups. The
experimental group receives a treatment of some sort
(such as a new textbook or a different method of teaching),
while the control group receives no treatment (or
the comparison group receives a different treatment).
The control or the comparison group is crucially important
in all experimental research, for it enables the
researcher to determine whether the treatment has had
an effect or whether one treatment is more effective
than another.
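As a minimal sketch of this comparison logic (all scores are hypothetical), the difference between group means is exactly what the control or comparison group makes interpretable:

```python
# Sketch of the basic logic of group comparison (hypothetical posttest scores).
experimental = [78, 85, 90, 72, 88]   # received the treatment (e.g., a new textbook)
control      = [70, 75, 80, 68, 77]   # received no treatment, or the usual one

def mean(scores):
    return sum(scores) / len(scores)

# The apparent treatment effect is the difference between group means;
# without a control or comparison group there is nothing to difference against.
difference = mean(experimental) - mean(control)
print(round(difference, 2))
```

Whether such a difference can actually be attributed to the treatment depends on how the groups were formed, which is where the remaining characteristics below come in.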

MANIPULATION OF THE INDEPENDENT
VARIABLE

The second essential characteristic of all experiments is
that the researcher actively manipulates the independent
variables. What does this mean? Simply put, it means
that the researcher deliberately and directly determines
what forms the independent variable will take and then
which group will get which form. For example, if the independent
variable in a study is the amount of enthusiasm
an instructor displays, a researcher might train two
teachers to display different amounts of enthusiasm as
they teach their classes.

Although many independent variables in education
can be manipulated, many others cannot. Examples of
independent variables that can be manipulated include
teaching method, type of counseling, learning activities,
assignments given, and materials used; examples of independent
variables that cannot be manipulated include
gender, ethnicity, age, and religious preference. Researchers
can manipulate the kinds of learning activities
to which students are exposed in a classroom, but they
cannot manipulate, say, religious preference—that is,
students cannot be “made into” Protestants, Catholics,
Jews, or Muslims, for example, to serve the purposes
of a study. To manipulate a variable, researchers must
decide who is to get something and when, where, and
how they will get it.

RANDOMIZATION

An important aspect of many experiments is the random
assignment of subjects to groups. Although there are
certain kinds of experiments in which random assignment
is not possible, researchers try to use randomization
whenever feasible. It is a crucial ingredient in the
best kinds of experiments. Random assignment is similar,
but not identical, to the concept of random selection.

Random assignment means that every individual who is
participating in an experiment
has an equal chance of being assigned to any of
the experimental or control conditions being compared.
Random selection , on the other hand, means that every
member of a population has an equal chance of being
selected to be a member of the sample.
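The distinction can be sketched in a few lines (the population size, sample size, and group sizes here are all hypothetical):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population of 100 student IDs.
population = list(range(100))

# Random SELECTION: every member of the population has an equal chance
# of being chosen for the sample.
sample = random.sample(population, 20)

# Random ASSIGNMENT: every participating individual has an equal chance
# of landing in any of the conditions being compared.
shuffled = sample[:]
random.shuffle(shuffled)
experimental_group = shuffled[:10]
control_group = shuffled[10:]

print(len(experimental_group), len(control_group))
```

A study can use one without the other: a researcher might randomly assign an intact, non-randomly selected class, or randomly select subjects who are then assigned non-randomly.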

CONTROL OF EXTRANEOUS VARIABLES
Researchers in an experimental study have an opportunity
to exercise far more control than in most other
forms of research. They determine the treatment (or
treatments), select the sample, assign individuals to
groups, decide which group will get the treatment, try
to control other factors besides the treatment that might
influence the outcome of the study, and then (finally)
observe or measure the effect of the treatment on the
groups when the treatment is completed.

Group Designs in Experimental Research


The quality of an experiment
depends on how well the various threats to internal
validity are controlled.

POOR EXPERIMENTAL DESIGNS

Designs that are “weak” do not have built-in controls for
threats to internal validity. In addition to the independent
variable, there are a number of other plausible explanations
for any outcomes that occur. As a result, any
researcher who uses one of these designs has difficulty
assessing the effectiveness of the independent variable.
The One-Shot Case Study. In the one-shot case
study design , a single group is exposed to a treatment
or event and a dependent variable is subsequently observed
(measured) in order to assess the effect of the
treatment.

The One-Group Pretest-Posttest Design.
In the one-group pretest-posttest design , a single
group is measured or observed not only after being
exposed to a treatment of some sort, but also before.
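A sketch of this design's logic, with hypothetical scores, shows what it measures and why it is weak:

```python
# One-group pretest-posttest design (a weak design), sketched with
# hypothetical scores for a single group measured before and after treatment.
pretest  = [60, 65, 55, 70, 62]
posttest = [72, 74, 66, 80, 71]

gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)

# With no control group, this gain cannot be attributed to the treatment:
# history, maturation, or testing effects could explain it just as well.
print(mean_gain)
```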

The Static-Group Comparison Design. In
the static-group comparison design , two already existing,
or intact, groups are used. These are sometimes referred
to as static groups, hence the name for the design.
This design is sometimes called a nonequivalent control
group design .

The Static-Group Pretest-Posttest Design.
The static-group pretest-posttest design differs from
the static-group comparison design only in that a pretest
is given to both groups.


TRUE EXPERIMENTAL DESIGNS

The essential ingredient of a true experimental design
is that subjects are randomly assigned to treatment
groups. As discussed earlier, random assignment is a
powerful technique for controlling the subject characteristics
threat to internal validity, a major consideration
in educational research.

The Randomized Posttest-Only Control Group Design. The randomized posttest-only control
group design involves two groups, both of which
are formed by random assignment. One group receives
the experimental treatment while the other does not,
and then both groups are posttested on the dependent
variable.
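A minimal sketch of this design follows; the subject records, the treatment function, and the 10-point effect are all hypothetical:

```python
import random

# Skeleton of the randomized posttest-only control group design:
#   Treatment group:  R  X  O
#   Control group:    R     O
# (R = random assignment, X = treatment, O = posttest)
def randomized_posttest_only(subjects, treat, posttest):
    pool = subjects[:]
    random.shuffle(pool)                        # R: random assignment
    half = len(pool) // 2
    experimental, control = pool[:half], pool[half:]
    for s in experimental:
        treat(s)                                # X: only one group is treated
    return ([posttest(s) for s in experimental],  # O: both groups posttested
            [posttest(s) for s in control])

# Hypothetical use: 10 subjects, a treatment that raises scores by 10 points.
random.seed(1)
subjects = [{"id": i, "score": 50} for i in range(10)]
exp_scores, ctl_scores = randomized_posttest_only(
    subjects,
    treat=lambda s: s.update(score=s["score"] + 10),
    posttest=lambda s: s["score"],
)
print(sum(exp_scores) / 5 - sum(ctl_scores) / 5)  # difference between group means
```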


The Randomized Pretest-Posttest Control
Group Design. The randomized pretest-posttest
control group design differs from the randomized
posttest-only control group design solely in the use of
a pretest. Two groups of subjects are used, with both
groups being measured or observed twice. The first measurement
serves as the pretest, the second as the posttest.
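With hypothetical scores, the pretest lets the researcher compare gains rather than raw posttest scores:

```python
# Randomized pretest-posttest control group design, sketched with
# hypothetical scores: both groups are measured twice.
experimental = {"pre": [55, 60, 58, 62], "post": [70, 73, 69, 75]}
control      = {"pre": [56, 59, 57, 63], "post": [60, 64, 61, 66]}

def mean_gain(group):
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

# The treatment effect is read off the difference in mean gains.
treatment_effect = mean_gain(experimental) - mean_gain(control)
print(treatment_effect)
```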

QUASI-EXPERIMENTAL DESIGNS
Quasi-experimental designs do not include the use
of random assignment. Researchers who employ these
designs rely instead on other techniques to control (or
at least reduce) threats to internal validity. We shall describe
some of these techniques as we discuss several
quasi-experimental designs.

The Matching-Only Design.
The matching-only design differs from random assignment with matching
only in the fact that random assignment is not used. The
researcher still matches the subjects in the experimental
and control groups on certain variables, but he or she has
no assurance that they are equivalent on others.
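Mechanical matching of this sort can be sketched as pairing subjects whose scores on a matching variable are closest (the names, scores, and tolerance below are hypothetical); note that subjects with no close match are dropped, one known cost of mechanical matching:

```python
# Mechanical matching, sketched: pair subjects from two intact groups whose
# scores on a matching variable (here, a pretest) fall within a tolerance.
group_a = [("a1", 50), ("a2", 72), ("a3", 61)]
group_b = [("b1", 51), ("b2", 60), ("b3", 90)]

def match_pairs(a, b, tolerance=3):
    pairs, used = [], set()
    for name_a, score_a in a:
        best = None
        for name_b, score_b in b:
            if name_b in used:
                continue
            close_enough = abs(score_a - score_b) <= tolerance
            better = best is None or abs(score_a - score_b) < abs(score_a - best[1])
            if close_enough and better:
                best = (name_b, score_b)
        if best:                       # subjects without a match are eliminated
            used.add(best[0])
            pairs.append((name_a, best[0]))
    return pairs

print(match_pairs(group_a, group_b))
```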

Counterbalanced Designs.
Counterbalanced designs represent another technique for equating experimental
and comparison groups. In this design, each group
is exposed to all treatments, however many there are, but
in a different order. Any number of treatments may be involved.
An example of a diagram for a counterbalanced design involving three treatments is as follows:

    Group I     X1 O    X2 O    X3 O
    Group II    X2 O    X3 O    X1 O
    Group III   X3 O    X1 O    X2 O

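Such a rotation can also be generated programmatically (treatment labels are hypothetical):

```python
# Counterbalanced design, sketched: each group receives every treatment,
# but in a rotated order (a simple Latin-square rotation).
treatments = ["X1", "X2", "X3"]

def counterbalanced_orders(treatments):
    n = len(treatments)
    return [treatments[i:] + treatments[:i] for i in range(n)]

for group, order in enumerate(counterbalanced_orders(treatments), start=1):
    print(f"Group {group}: {' -> '.join(order)}")
```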
Time-Series Designs. The typical pre- and posttest
designs examined up to now involve observations
or measurements taken immediately before and after
treatment. A time-series design , however, involves
repeated measurements or observations over a period
of time both before and after treatment.
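A sketch with hypothetical observations: the evidence for an effect is a shift in the level of the series at the treatment point, which the repeated measurements make visible:

```python
# Time-series design, sketched: repeated hypothetical observations before
# and after the treatment point.
before = [40, 42, 41, 43, 42]   # O1..O5, prior to treatment X
after  = [55, 54, 56, 55, 57]   # O6..O10, following treatment X

def level(observations):
    return sum(observations) / len(observations)

# A stable baseline followed by an abrupt, sustained shift is harder to
# explain away as maturation or testing effects than a single pre-post gain.
shift = level(after) - level(before)
print(round(shift, 1))
```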

FACTORIAL DESIGNS
Factorial designs extend the number of relationships
that may be examined in an experimental study. They
are essentially modifications of either the posttest-only
control group or pretest-posttest control group designs
(with or without random assignment), which permit
the investigation of additional independent variables.
Another value of a factorial design is that it allows a
researcher to study the interaction of an independent
variable with one or more other variables, sometimes
called moderator variables. Moderator variables may
be either treatment variables or subject characteristic
variables.
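A 2 x 2 example with hypothetical cell means shows how an interaction is read: the effect of the treatment variable is compared across the levels of the moderator:

```python
# 2 x 2 factorial design, sketched: one treatment variable (method) crossed
# with a moderator (aptitude level). All cell means are hypothetical.
cell_means = {
    ("method_A", "high_aptitude"): 85,
    ("method_A", "low_aptitude"):  65,
    ("method_B", "high_aptitude"): 75,
    ("method_B", "low_aptitude"):  72,
}

# The interaction asks whether the effect of method differs by aptitude:
# (A - B) among high-aptitude students vs (A - B) among low-aptitude students.
effect_high = cell_means[("method_A", "high_aptitude")] - cell_means[("method_B", "high_aptitude")]
effect_low  = cell_means[("method_A", "low_aptitude")]  - cell_means[("method_B", "low_aptitude")]
interaction = effect_high - effect_low

# A nonzero difference of differences means method A helps one aptitude
# group more than the other -- an effect a single-variable design would miss.
print(effect_high, effect_low, interaction)
```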

A Summary of EXPERIMENTAL RESEARCH DESIGN

THE UNIQUENESS OF EXPERIMENTAL RESEARCH
• Experimental research is unique in that it is the only type of research that directly
attempts to influence a particular variable, and it is the only type that, when used
properly, can really test hypotheses about cause-and-effect relationships. Experimental
designs are some of the strongest available for educational researchers to use in
determining cause and effect.

ESSENTIAL CHARACTERISTICS OF EXPERIMENTAL RESEARCH
• Experiments differ from other types of research in two basic ways—comparison of
treatments and the direct manipulation of one or more independent variables by the
researcher.

RANDOMIZATION
• Random assignment is an important ingredient in the best kinds of experiments.
It means that every individual who is participating in the experiment has an equal
chance of being assigned to any of the experimental or control conditions that are
being compared.

CONTROL OF EXTRANEOUS VARIABLES
• The researcher in an experimental study has an opportunity to exercise far more control
than in most other forms of research.
• Some of the most common ways to control for the possibility of differential subject
characteristics (in the various groups being compared) are randomization, holding
certain variables constant, building the variable into the design, matching, using subjects
as their own controls, and using analysis of covariance.

POOR EXPERIMENTAL DESIGNS
• Three weak designs that are occasionally used in experimental research are the one-shot case study design, the one-group pretest-posttest design, and the static-group
comparison design. They are considered weak because they do not have built-in controls
for threats to internal validity.
• In a one-shot case study, a single group is exposed to a treatment or event, and its
effects are assessed.
• In the one-group pretest-posttest design, a single group is measured or observed both
before and after exposure to a treatment.
• In the static-group comparison design, two intact groups receive different treatments.

TRUE EXPERIMENTAL DESIGNS
• The essential ingredient of a true experiment is random assignment of subjects to
treatment groups.
• The randomized posttest-only control group design involves two groups formed by
random assignment.
• The randomized pretest-posttest control group design differs from the randomized
posttest-only control group only in the use of a pretest.
• The randomized Solomon four-group design involves random assignment of subjects
to four groups, with two being pretested and two not.

MATCHING
• To increase the likelihood that groups of subjects will be equivalent, pairs of subjects
may be matched on certain variables. The members of the matched groups are then
assigned to the experimental and control groups.
• Matching may be either mechanical or statistical.
• Mechanical matching is a process of pairing two persons whose scores on a particular
variable are similar.
• Two difficulties with mechanical matching are that it is very difficult to match on
more than two or three variables, and that in order to match, some subjects must be
eliminated from the study when no matches can be found.
• Statistical matching does not necessitate a loss of subjects.

QUASI-EXPERIMENTAL DESIGNS
• The matching-only design differs from random assignment with matching only in
that random assignment is not used.
• In a counterbalanced design, all groups are exposed to all treatments, but in a different
order.
• A time-series design involves repeated measurements or observations over time, both
before and after treatment.

FACTORIAL DESIGNS
• Factorial designs extend the number of relationships that may be examined in an
experimental study.



