Experimental Research


Assignment 8, Paper 8 – Experimental Research
Read the article, 'The nature of experimental and quasi-experimental research in postgraduate education research in South Africa: 1995–2004'.

Think about the following to guide your reading:

What problems does the article point out with the random assignment of participants into experimental and control groups?

Activity:
Write a paper that compares and contrasts the differences between experimental and quasi-experimental research. Your response should use critical thinking skills, such as critique, compare, defend, evaluate, recommend, formulate, analyze, categorize, and assess.
The nature of experimental and quasi-experimental research in postgraduate education research in South Africa: 1995–2004
B. Goba
Faculty of Education, University of KwaZulu-Natal, South Africa. E-mail: gobab@ukzn.ac.za

R. J. Balfour
Dean of Education Sciences, North-West University (Potchefstroom Campus); University of KwaZulu-Natal, South Africa. E-mail: Robert.Balfour@nwu.ac.za

T. Nkambule
University of the Witwatersrand, South Africa. E-mail: Thabisile.Nkambule@wits.ac.za

Abstract
It is widely known that there is a dearth of education research in South Africa which takes experimentation as its methodological basis. In the first decade of democracy after apartheid, the emphasis fell on educators' and learners' experiential understanding, and qualitative research predominated. The article investigates, first, the extent of experimental and quasi-experimental research designs. Second, drawing upon a sample of theses which self-report as experimental and quasi-experimental research, we examine the extent to which such methodologies are actually deployed, and with what success or efficacy. Third, we interrogate the associations of experimental and quasi-experimental designs with particular disciplines within education. This article points out the problems with the random assignment of participants into experimental and control groups in educational settings. Most of the experimental research is concentrated in three institutions in the Gauteng Province, while there are six institutions where this methodology is not used at all. Experimental designs are also most prevalent in the psychology of education discipline. This points, ultimately, to a lack of supervision capacity for experimental designs in South African higher education institutions.

INTRODUCTION

It is well known ‘that there is a dearth of quantitative research’ in education, particularly in South African postgraduate education research in the ten year period immediately after 1994 (Karlsson et al. 2009, 1099). In addition, in their paper, Karlsson et al. (2009, 1091) contend that ‘not all institutions teach research
© Unisa Press ISSN 1011-3487


SAJHE 25(2)2011 pp 947–964


methodology in the same way'. Against this backdrop, this article explores, first, the extent of experimental and quasi-experimental design within the quantitative research approach, with specific reference to postgraduate education research in South Africa, 1995–2004. Second, it investigates the nature of experimental and quasi-experimental research and the framing of methodological choices in relation to these designs. Third, it discusses the associations of the experimental and quasi-experimental research designs used by postgraduate education students with the institutions where the theses were produced and the discipline areas in which the studies are based.

We are interested in experimental and quasi-experimental designs because there is a paucity of such research in education. Education as a field is critiqued for over-relying on qualitative methods and personal experience in understanding educational phenomena (Whitehurst 2002). This critique persists even though experimental and quasi-experimental designs are argued to be appropriate for 'evidence-based' research (Desimone 2002, 2; Campbell 1969). Some scholars even refer to experimental designs as the 'gold standard' of research efficacy (Cook and Payne 2002; Sorensen, Smaldino and Walker 2005). It is for these reasons that we seek to understand how postgraduate education students in South Africa use experimental designs when undertaking their research.

What do we seek to achieve by undertaking an analysis of this nature? What emerges almost immediately is that only a very small proportion of postgraduate education research using quantitative approaches employs experimental and quasi-experimental designs. Also, the association of experimental and quasi-experimental design with particular institutional research cultures is tenuous.
Further, the associations with particular disciplines demonstrate that experimental work has not been widely considered an accessible mode of inquiry in educational research in South Africa. These points support our argument concerning the paucity of experimental research, and make a strong case for understanding the predominance of qualitative modes in the majority of higher education institutions in South Africa. Before focusing on the above, we first offer a discussion of the literature reviewed on experimental and quasi-experimental designs. Second, we describe the methodology and the results of the data selected for this article. Finally, we offer a discussion of our findings and the conclusion.
LITERATURE REVIEW

How are experimental and quasi-experimental designs understood in educational research? Experimental designs originate in scientific disciplines such as medicine and psychology (Cook and Campbell 1979; Kember 2003; Neuman 2006). An experimental design involves an empirical investigation in which the researcher controls some independent variables that impact on the dependent variable under scrutiny (Howitt and Cramer 2000; Neuman 2006; Moses and Knutsen 2007), and measures the correlations/associations between these variables. Usually the experiment is conducted in a laboratory setting where the researcher subjects the experimental

group to a treatment, while keeping a control group that is not subjected to the same treatment – in other words, one that receives a placebo or no treatment. There are four broad categories of experimental design used in educational research. Singleton and Straits (1999) summarise these broad categories as pre-experimental designs, true experimental designs, factorial experimental designs and quasi-experimental designs. Each of these categories comprises at least two types of design, as summarised in the table below.
Table 1: Categories of experimental designs

Pre-experimental designs: one-shot case study; one-group pre-test–post-test design; static-group comparison
True experimental designs: pre-test–post-test control group design; post-test-only control group design; Solomon four-group design
Factorial experimental designs: main effect; interaction effect
Quasi-experimental designs: separate-sample pre-test design; non-equivalent control group design; interrupted time-series design; multiple time-series design

In the literature review we focus our attention on 'true' experimental (referred to here as experimental) and quasi-experimental designs. Experimental designs simulate experimentation in conditions similar to laboratory settings. The defining feature of experimental design is the random assignment of participants into experimental and control groups; the experimental group is subjected to an intervention or treatment, and the effectiveness of the intervention is measured using pre- and post-testing (Cook and Campbell 1979; Singleton and Straits 1999; Howitt and Cramer 2000; Neuman 2006; Moses and Knutsen 2007). Random assignment is the process of first selecting participants randomly from the population, and then assigning the selected participants to experimental and control groups such that the two groups are similar (Singleton and Straits 1999; Neuman 2006; Moses and Knutsen 2007). In experimental designs, the causal relationship between independent and dependent variables is measured. Quasi-experimental designs, unlike experimental designs, are based on experimentation that 'does not directly manipulate variables as in a typical laboratory experiment' (Elmes, Kantowitz and Roediger III 2006, 254). Variables in quasi-experimental designs are the effects of natural 'treatments' such as disasters, age, sex, etc. (Elmes et al. 2006, 254). In addition, random assignment of participants into experimental and control groups is not necessary in quasi-experimental designs.
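The two-step procedure just described – random selection from the population, followed by random assignment into two similar groups – can be sketched as follows. This is a minimal illustration only; the population of learners and the group sizes are hypothetical.

```python
import random

def randomly_assign(population, sample_size):
    """Randomly select a sample from the population, then randomly
    split it into experimental and control groups of equal size."""
    # Step 1: random selection from the population.
    sample = random.sample(population, sample_size)
    # Step 2: random assignment - shuffle, then split down the middle,
    # so that on average the two groups are similar.
    random.shuffle(sample)
    half = sample_size // 2
    return sample[:half], sample[half:]

# Hypothetical population of 200 learners.
learners = [f"learner_{i}" for i in range(1, 201)]
experimental, control = randomly_assign(learners, 40)
print(len(experimental), len(control))  # 20 20
```

Because both the selection and the split are random, any systematic difference between the two groups is a matter of chance rather than of researcher choice, which is precisely what the design requires.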

However, the random selection of participants remains important, and is sufficient, in quasi-experimental designs. This article seeks to understand the extent of quasi-experimentation relative to true experimentation in postgraduate education research in the period delineated.

In experimental designs, researchers use either a single case or groups in the experiment. Group experimental designs have come under criticism lately (Lundervold and Belwood 2000; Harvey et al. 2004; Kennedy 2004; Barger-Anderson et al. 2004). The criticism levelled against group experimental designs, such as the pre-test/post-test control group design and the Solomon four-group design, is that they 'are insensitive to the exigencies of everyday practice' (Lundervold and Belwood 2000, 92). Single case experimental designs are the preferred way of researching classroom practice. Lundervold and Belwood (2000) consider the single case experimental design the 'best kept secret', since it is infrequently used in educational research. Single case experimental designs are better adapted to studying 'behavioural processes operating on student learning' (Kennedy 2004, 209), as opposed to group experimental designs, which are appropriate for testing the effectiveness of an intervention (Lundervold and Belwood 2000). An example of a single case design is the multiple baseline experimental design. This kind of experimental design focuses on individuals, not the schooling system. Also, depending on the type of multiple baseline experimental design (concurrent or non-concurrent), it staggers the period of data collection between the baseline and the intervention (Harvey, May and Kennedy 2004, 268). In this article we are interested in understanding whether postgraduate education students use single case or group experimental designs when undertaking their research.

Experimental designs have limitations when applied to educational settings.
The literature points to the random assignment of participants and the control of independent variables as some of the limitations of the design (Howitt and Cramer 2000; Borman 2002; Seethaler and Fuchs 2005; Elmes, Kantowitz and Roediger III 2006). The selection of control and experimental groups denies the teachers and learners in the control group the opportunity to benefit from the educational intervention under scrutiny (Burtless 2002; Kember 2003). Experimentation is difficult to conduct in educational settings, unlike in scientific/medical settings (Seethaler and Fuchs 2005). Seethaler and Fuchs (2005, 98) argue that 'the classroom settings are contextually complex for researchers to successfully control confounding independent variables'. Further, they contend that experimental research relies heavily on substantial funding and requires a team of researchers to conduct 'large-scale, randomized trials', which limits the number of such research projects in education, especially in postgraduate education (Seethaler and Fuchs 2005). As a result, there is a reduced preference for experimental designs when conducting educational research. Moses and Knutsen (2007) take this debate further by pointing to the ontology of the scientific paradigm, which brings about resistance to the use of experimental designs. This is an ontological argument about the nature of the thing we study, and asks (Moses and Knutsen 2007, 54): is the world of social science made up of atomistic, interchangeable parts (like a clock), or is it an organic whole, where the very context

provides it with meaning (and where manipulating the context will change its meaning)?

Pre-testing is important in the experimentation process. There are, however, different conceptions of pre-testing in the literature. For example, Singleton and Straits (1999) use the term pre-testing interchangeably with piloting an instrument. They define a pre-test as a process of 'trying out the survey instrument on a small sample of persons having characteristics similar to those of the target group of respondents' (Singleton and Straits 1999, 266). Neuman, by contrast, defines a pre-test as 'the measurement of the dependent variable prior to introduction of the treatment' (Neuman 2006, 252–253). It is our intention to read the data and explore how postgraduate education students understand and use these different concepts in experimental designs.
METHODOLOGY

In this article, we review a corpus of postgraduate education research produced in South African institutions from 1995 to 2004 for the Project on Postgraduate Education Research (PPER). We use meta-interpretation as a methodology to review and synthesise the postgraduate education research using experimental designs. This methodology was chosen from among a number of methods that can be used for research synthesis, as espoused by Weed (2008, 15), namely meta-analysis, meta-synthesis, systematic review, narrative review, meta-ethnography, meta-study, meta-evaluation, and meta-interpretation. Meta-interpretation, unlike meta-analysis, treats the role of the synthesiser as crucial. Meta-interpretation, according to Weed (2008), takes an idiographic approach to the inclusion or exclusion of studies: no studies conforming to the selection criteria are excluded before the analysis. In this article, we selected all studies captured on the PPER database which self-reportedly used experimental, quasi-experimental and pre-post test research designs. Only after careful reading of the postgraduate education theses did we exclude studies that were not experimental, quasi-experimental or pre-post test in nature.

The context of the studies reviewed in meta-interpretation is important. In this method, the meaning of context is multifaceted, as it includes not only the context of the research sites mentioned in the studies, but also the 'academic context' where studies are 'produced and written' (Weed 2008, 20). This is an important point in our analysis, as we seek to determine whether there is an association between disciplines and institutions and the preference for using experimentation in postgraduate education research.

The data for this article were based on masters and doctoral education theses which were collected from 20 universities in South Africa and produced in the period 1995 to 2004.
The postgraduate education theses were read and captured on EndNote. To select data for this article, we searched the EndNote database of 3 776 theses for experimental, quasi-experimental and pre-post test methodologies. We decided to keep the pre-post test as a separate category because the authors of those theses did not indicate their methodology as either true-experimental design or pre-experimental

design (see Table 1). As pointed out in the literature review, Singleton and Straits (1999) identify two categories of experimental designs (pre-experimental and true-experimental), with the pre-post test as a type of experimental design. From the PPER database, a total of 149 out of 3 776 theses use experimental, quasi-experimental or pre-post test research designs. We read the 149 theses to explore the association of experimental design with the discipline, degree and institution of each study, as well as the sampling strategies used. We opted to read a further sample of the English theses (37 out of 111) in depth: whether randomization or a treatment/intervention was done, what type of research instruments were used, and whether issues of validity and reliability were discussed (see Appendix A). We did not read the Afrikaans theses for in-depth analysis because we are not fluent in the language and would have needed a translation of each whole thesis. The Afrikaans theses were read globally and are part of the 149 theses mentioned here. The subset of 37 theses from the 111 English titles selected for close reading comprises experimental (20 out of 74), quasi-experimental (12 out of 31), and pre-post test (5 out of 44) research designs in the database.

The data were analysed using descriptive statistics. We read how the participants were selected, examined the samples as to who the participants were and how many were selected, and also considered whether the participants were randomly assigned or selected in these studies. This process represents an important feature of the experimental design. We read the narrative in each study on whether experimental and control groups were used in the research process. We gathered information about the instruments used in each study, how the validity and reliability of those instruments were determined, and how the data were analysed in each study.
We believe that the academic context of each study is important, as discussed in relation to the meta-interpretation method. In the next section, we discuss the results. For each of the analytical frames mentioned, we offer an analysis in the order experimental, quasi-experimental, and pre-post testing, following a description of how we read the data.
Table 2: Sampling strategies used in the experimental, quasi-experimental and pre-post test research designs in the English theses

Design | Random assignment | Random selection | Matched | Purposive/Volunteer/Convenience | Stratified | Quota | Systematic | Total
Experimental | 5 | 18 | 1 | 27 | 3 | – | – | 54
Quasi-experimental | – | 4 | – | 19 | 3 | 1 | – | 27
Pre-post test | 1 | 10 | – | 16 | 2 | – | 1 | 30
Total | 6 (5%) | 32 (29%) | 1 (1%) | 62 (56%) | 8 (7%) | 1 (1%) | 1 (1%) | 111


RESULTS

Sampling strategies used in the studies
What emerges from a close reading of the data, in relation to the sampling strategies used in the studies, is that the majority of studies (56%) use purposive/convenience sampling, as opposed to either randomly assigning (experimental design) or randomly selecting (quasi-experimental design) participants (see Table 2). Even though the experimental design constitutes 49 per cent of the sample of English theses, random assignment of participants is the least used strategy, at 5 per cent of the sample. This is of concern, as experimental design ordinarily requires the random assignment of participants into experimental and control groups. While random assignment of participants is not important in quasi-experimental design, random selection is; however, only 4 out of the 27 studies using quasi-experimental designs selected their participants randomly. This illustrates the constraints postgraduate education researchers face when entering the research field to collect data: they are expected not to disturb the normal running of teaching, hence intact classrooms are selected rather than randomly selected participants.

In some studies, for example theses 3390 and 1323 (Appendix A), the postgraduate education researchers indicate in their methodology sections that they were aware they should use randomization, but that it was impossible given the schooling context. The two excerpts below illustrate this:

Two of the intact groups (i.e. classes) were used for the study. Random cluster sampling was used to choose the experimental and control groups from the three classes. The reason for using intact groups was that the smooth running of the school would not be disturbed during the study (1323).

The participating school agreed to a two term duration of the research, provided that the classroom remained intact with the existing class teacher.
In this convenience or accidental sample, pupils were not randomly assigned to the control and experimental groups (3390).

In other studies, however, for example thesis 1088, the author states that the sample was randomly assigned into experimental and control groups, when what is described is random selection into groups. In the methodology section of this particular thesis, it is not mentioned whether the principals were matched by gender, age, the level of the school they lead, or any other variables, as seen in the excerpt below:

To select the experimental and control group from the entire population group, random assignment turned out to be the most appropriate method. Random selection was done by cutting a piece of paper into 24 equal pieces, that was the number of equivalent participants. Each paper was numbered from 1 to 24. The numbered pieces of paper were placed in a container. Each participant was requested to pick one piece of paper and check the number written on it. The participants who chose numbers 1–12 formed the experimental group while the remaining 12 participants formed the control group (1088).
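The lottery quoted from thesis 1088 – numbering 24 slips of paper and letting each participant draw one – is, in effect, a random assignment of 24 already-selected participants into two groups of 12. A sketch of the equivalent procedure follows; the participant labels are hypothetical.

```python
import random

# The 24 selected participants (hypothetical labels).
participants = [f"principal_{i}" for i in range(1, 25)]

# Numbered slips 1-24 in random order: shuffling the slips and handing
# them out is equivalent to drawing slips from a container.
slips = list(range(1, 25))
random.shuffle(slips)
drawn = dict(zip(participants, slips))

# Participants who drew numbers 1-12 form the experimental group,
# those who drew 13-24 the control group.
experimental = [p for p, n in drawn.items() if n <= 12]
control = [p for p, n in drawn.items() if n > 12]
print(len(experimental), len(control))  # 12 12
```

Note that this procedure randomises only the assignment of the 24 participants; whether those 24 were themselves drawn randomly from the population is a separate question, which is precisely the distinction the thesis blurs.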

Who are the participants in the studies using experimental design?
Most authors of studies using experimental designs prefer to use learners (88 out of 149), students (32 out of 149) and teachers (22 out of 149) as participants. The choice of these participants suggests that they are the most accessible to postgraduate education students, who are themselves most often teachers.

Experimental and control groups
The norm in experimentation is the use of pre- and post-testing to measure whether there is a strong correlation between variables. This is done by subjecting the experimental group to a treatment/intervention while keeping a control group. Researchers can measure a single case or groups using the pre-test and post-test. A close reading of the selected 37 studies (Appendix A) illustrates that 28 out of 37 (76%) utilise experimental and control groups to measure the impact of the intervention (see Table 3). This suggests that single case experimental designs were not popular in postgraduate education research conducted in South Africa from 1995 to 2004.
Table 3: Use of experimental and control groups or a single case in experimentation

Grouping | Experimental design | Quasi-experimental design | Pre-post test | Total
Single group (N = 1) | 4 | 3 | 2 | 9 (24%)
Experimental & control groups | 16 | 9 | 3 | 28 (76%)
Total | 20 | 12 | 5 | 37

One of the criteria when selecting experimental and control groups is that the groups must match one another: there should not be much difference between them, so that the results are not jeopardised. However, this is not done in some of the studies reviewed in this article. For example, in thesis 274 (Appendix A), the two groups selected for the quasi-experimental design do not match one another. In thesis 274, the author's aim is 'the comparison of the performance of normal Black South African 13 to 16 month[s] old infants on the Griffiths Scales of Mental Development (GSMD) with that of the British 1996 normative sample'. It could be argued that, by comparing groups that are not the same, the credibility of the results in this study is questionable, or the methodology is flawed.

Sample size
Of the 37 studies that were read closely, 73 per cent (27) utilise a large sample (a sample size of 30 and above is conventionally treated as large); see Table 4. Most of the studies using large samples are located in education psychology, with the remaining studies distributed between Language Education, Mathematics Education

and other disciplines (see Appendix A). One of the strengths of experimental work is its ability to make meaningful use of large quantities of data. The size of the sample plays a significant role in the credibility of results: results from large samples are more trustworthy because sampling error is reduced, though never removed entirely.
Table 4: Sample size in the studies reviewed

Design | Small sample | Large sample | Total
Experimental design | 4 | 16 | 20
Quasi-experimental design | 4 | 8 | 12
Pre-post test | 2 | 3 | 5
Total | 10 (27%) | 27 (73%) | 37
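The link between sample size and credibility can be made concrete: the spread of a sample mean across repeated samples (the standard error) shrinks roughly with the square root of the sample size. The simulation below illustrates this with an invented population of test scores; the numbers are purely illustrative.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Simulated population of test scores (mean 50, spread 10) - invented data.
population = [random.gauss(50, 10) for _ in range(10_000)]

def mean_of_random_sample(n):
    """Mean test score of one random sample of size n."""
    return statistics.mean(random.sample(population, n))

# For each sample size, measure how much the sample mean varies across
# 1 000 repeated samples: this spread is the empirical standard error.
spreads = {}
for n in (5, 30, 200):
    means = [mean_of_random_sample(n) for _ in range(1_000)]
    spreads[n] = statistics.stdev(means)
    print(n, round(spreads[n], 2))  # spread shrinks roughly as 1/sqrt(n)
```

The spread falls as n grows but never reaches zero, which is why larger samples make results more trustworthy without eliminating sampling error.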

Treatment/intervention
The close reading of the data reveals that a significant 22 per cent of studies do not include an intervention/treatment in their experimental design (see Table 5). In Appendix A, studies 274, 2269 and 2434 do not indicate what treatment/intervention was given to the experimental group. This indicates a lack of understanding of the logic of experimentation by some postgraduate education students during the period 1995–2004.
Table 5: Use of treatment/intervention in the experimentation

Design | No | Yes | Total
Experimental design | 3 | 17 | 20
Quasi-experimental design | 4 | 8 | 12
Pre-post test | 1 | 4 | 5
Total | 8 (22%) | 29 (78%) | 37

Research instruments, validity and reliability
The synthesis of the 149 postgraduate education studies on the PPER database using experimentation (English and Afrikaans theses) reveals that questionnaires and tests are the most common instruments used in experimental designs (see Table 6). This is not surprising, as experimental designs are quantitative in nature and associated with the positivist paradigm. It suggests that most of this research is not concerned with perceptions, attitudes or experiences. It would be useful to understand why this is the case, although that is not the focus of our article. The questionnaire data tend to address only multiple choice or restricted choice items and do not appear to require experiential responses. Given that most of this work occurs in educational psychology, it is not unexpected to find that such instruments have been used in many studies over a period of time and are,

in fact, well documented within the literature associated with this field of research, for example the Griffiths Scales of Mental Development (GSMD).
Table 6: Research instruments used in the experimental designs

Instrument | Experimental | Quasi-experimental | Pre-post test | Total
Questionnaire | 27 | 9 | 17 | 53
Test | 36 | 19 | 18 | 73
Interview | 9 | 8 | 13 | 30
Observation | 6 | 5 | 6 | 17

Nevertheless, a small proportion of the studies reviewed make use of research instruments that are qualitative in nature (see Table 6). Furthermore, the comprehension of research terminology (for example, pre-test/post-test) by postgraduate students is revealed to be inconsistent across institutions. For example, in thesis 1201 (Appendix A), pre-testing is understood as the piloting of instruments, rather than as the measurement of progress over time or as a consequence of an intervention. The author of thesis 1201 writes, under the sub-heading pre-testing:

Many experienced researchers emphasize the importance of pilot studies before the actual research is done. A pilot study yields data concerning instrument deficiencies as well as suggestions for improvements. This research principle was adhered to in this study. The questionnaire was pre-tested as follows.

The implementation of the instruments during the data collection phase is also sometimes inconsistent with the experimentation process. On the one hand, experimentation requires that the instruments used to measure the dependent variable be administered to the experimental and control groups at the same time, to avoid sensitisation of participants to the measurement and attrition of the original sample. In theses 1988 and 2575 (see Appendix A), the tests were not administered at the same time. In thesis 1988 the test for the control group was administered in November 1991, while the experimental group took the same test a year later in 1992. The staggered time between the measurement of the experimental and control groups does not guarantee that the samples retain the same characteristics, because the experimental group might have matured by the time they took the test.
In thesis 2575, the pre-test was administered to the experimental group a week earlier than to the control group, whereas the post-test was written by the control group a week before it was taken by the experimental group. In this sense, the data might be contaminated, as either group could have heard about one of the tests. On the other hand, both experimental and control groups should be subjected to the same measurement. In theses 1323 and 3068, the post-test was administered only to the experimental group. In thesis 3068, the author suggests that only the experimental group was given a post-test since they were exposed to graphic

calculators and not the control group. Likewise, thesis 1323 argues the same point, citing that 'it was this [experimental] group that had been trained in learning (reading) strategies'. In this sense, the authors cannot measure whether the intervention was successful or whether the improvement resulted from other variables.

Further reading of the studies in Appendix A reveals that few theses deploy quantitative methods (9 out of 37; 24%) to verify constructs such as content validity and reliability. A high percentage of the studies reviewed, 60 per cent (22) (refer to Table 7), do not discuss or mention issues of validity and reliability of the instruments, while 8 per cent (3) refer to validity and reliability without deploying any methods to actually establish either. These findings suggest that some postgraduate education students use experimental designs without a full understanding of the design.
Table 7: Validity and reliability of instruments

Design | Validity & reliability calculated quantitatively | Validity & reliability mentioned briefly | Validity & reliability not discussed | Total
Experimental design | 6 | 3 | 11 | 20
Quasi-experimental design | 4 | 0 | 8 | 12
Pre-post test | 2 | 0 | 3 | 5
Total | 12 (32%) | 3 (8%) | 22 (60%) | 37

Analytical tools used in the studies
Almost all of the studies in the psychology discipline relied on t-tests and descriptive statistics to analyse data. In the remaining studies, a range of methods was adopted, such as chi-square tests, correlations, ANOVA, regressions, hypothesis testing and the difficulty index (see Appendix A).
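The t-test these studies rely on compares the mean post-test scores of the experimental and control groups. A minimal sketch using only the standard library follows; the scores are invented for the example, and in practice a package such as scipy.stats.ttest_ind would also report a p-value.

```python
import statistics
from math import sqrt

def independent_t(group_a, group_b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / sqrt(pooled * (1 / na + 1 / nb))

# Invented post-test scores for an experimental and a control group.
experimental = [68, 72, 75, 70, 74, 71, 69, 73]
control      = [64, 66, 63, 67, 65, 62, 68, 65]

print(round(statistics.mean(experimental), 1))       # 71.5 (descriptive statistic)
print(round(statistics.mean(control), 1))            # 65.0 (descriptive statistic)
print(round(independent_t(experimental, control), 2))  # 5.81 (t statistic)
```

A large t statistic relative to its degrees of freedom (here na + nb - 2 = 14) indicates that the difference in group means is unlikely to be due to chance alone, which is the inference these theses draw from their pre- and post-test data.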

Association of experimentation to institutions, disciplines and degrees
Having provided an analysis of the nature of education research that uses experimentation, we also provide information on the institutions where such work seems to be encouraged, and where evident capacity for such work exists in education faculties. Figure 1 reveals that most of this research occurs at the universities of the Witwatersrand (19%), South Africa (15%), and Pretoria (9%). There is no record of such research ever having been done, at least within education, in six South African institutions within the period forming the focus of our study (1995–2004). This points to a lack of capacity among postgraduate education supervisors to supervise studies using experimental designs.

Figure 1: Institutions where experimentation is used

More studies use experimentation in Psychology of Education (23%) than in any other discipline. Mathematics Education (10%), Language Education (7%), Science Education (7%), Curriculum (5%) and Computer Education (4%) follow the leading discipline (refer to Figure 2). The analysis of the 149 studies using experimental designs shows a high prevalence of masters studies based on experimentation. Figure 3 shows that of the 149 studies employing experimentation, 117 (79%) are masters studies.

Figure 2: Distribution of studies using experimentation over disciplines

The nature of experimental and quasi-experimental research in postgraduate education research in SA

Figure 3: Distribution of experimental designs across degrees

DISCUSSION

The principal findings in this article are that experimental and control groups are often used in the experimental designs in postgraduate studies. While the selection of experimental and control groups is an important characteristic of experimentation, we find that it is often difficult to randomly assign participants to these groups in education. There is also a small proportion of postgraduate education students who do not include an intervention or treatment in the experimentation process. Regarding the instruments used in the experimental designs, questionnaires and tests predominate; however, there is still a need for a better understanding of their administration and conception. A reasonable percentage of postgraduate education studies using experimental designs utilise large samples (n = 30). There is a high prevalence of the t-test as an analytical tool in the experimental studies, especially in psychology of education. The absence of experimental research at six institutions seems to suggest a lack of supervision capacity. The findings tabled above are discussed further in the following paragraphs.

Randomly assigning participants to experimental and control groups
The data show that in educational research it is difficult to randomly assign participants into experimental and control groups. Instead, postgraduate education students use purposive and convenience sampling strategies, which are more often associated with qualitative research. Purposive sampling is to be expected in schooling contexts, since it is not always possible for researchers to adopt a random sampling method within the constraints of the curriculum and its attendant syllabi and timetable issues within the school. Further, it is impossible for a teacher to divide a class and teach one half as an experimental group receiving a treatment while the control group sits in the very same class. We have shown earlier that most studies use learners, students and teachers as participants. These findings concur with what other authors have found (see Howitt and Cramer 2000; Elmes, Kantowitz and Roediger III 2006; Borman 2002; Seethaler and Fuchs 2005). Burtless (2002) and Kember (2003) posit that teachers and learners who are part of the control group are denied the opportunity to benefit from the intervention or treatment under investigation.
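Where randomisation is feasible, its mechanics are straightforward; the obstacle described above is practical access to participants, not computation. A minimal sketch of random assignment, using an invented roster of thirty learners (the identifiers, seed and class size are all illustrative):

```python
import random

# Hypothetical roster of thirty learner IDs (illustrative only).
learners = [f"learner_{i:02d}" for i in range(1, 31)]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
rng.shuffle(learners)

# Random assignment: split the shuffled roster in half, so every learner
# has an equal chance of landing in either group.
midpoint = len(learners) // 2
experimental_group = learners[:midpoint]
control_group = learners[midpoint:]

print(len(experimental_group), len(control_group))  # 15 15
```

In a school setting, as the discussion above notes, it is rarely possible to shuffle intact classes in this way, which is why purposive and convenience samples dominate.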

Intervention/treatment
While the literature suggests that an intervention or treatment is essential in the experimentation process (Cook and Campbell 1979; Neuman 2006), we discovered that some postgraduate education students conducted 'true' experimental designs without a treatment or intervention. Experimentation involving an intervention is understood to be an important research design for 'evidence-based research'. In a country where educational policies change constantly, experimental designs can provide evidence of whether or not the policies are viable. This suggests the need for experimental designs to be taught at postgraduate level: postgraduate education students need to be exposed to different genres of research methodologies and need deep insight into their chosen methodology.

Instrumentation and analytical tools in experimental designs
As expected, the postgraduate education students utilised questionnaires and tests when collecting data in the experimental designs. This lends itself to the assumption that most of this research is not concerned with perceptions, attitudes or experiences. It would be useful to understand why this is the case, although, again, this is not our focus in this article. Nevertheless, postgraduate education students need to understand the conceptual meaning of the instruments they use and the implications of a delay in administering the instruments between experimental and control groups. For example, the pre-test is understood by some postgraduate education students as the piloting of a questionnaire rather than as a measurement of the variables before the intervention. This could be attributed to the literature giving differing conceptions of the word pre-test: Singleton and Straits (1999) offer an explanation similar to that of the postgraduate education students mentioned above, while Neuman (2006) defines the pre-test as the measurement of a variable before the treatment (see also Cook and Campbell 1979; Howitt and Cramer 2000). The administration of the instruments has an impact on the results. The literature suggests that, for the results to be valid, the experimental and control groups should be treated the same in the experimentation process (Singleton and Straits 1999; Neuman 2006; Moses and Knutsen 2007). In one case in the data that was read closely, the experimental group took a post-test a year later than the control group. This surely disadvantaged the control group: when the experimental group took the post-test, they might have had greater maturity than the control group had at the time it wrote the test. Consequently, there could be other independent variables impacting on the dependent variable. Also, when the administration of an instrument is delayed, there might be attrition of the original sample, so that not all participants take the test.
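To make the distinction concrete: a pre-test in the experimental sense is a baseline measurement taken before the intervention, and the post-test repeats the same measurement afterwards so that each participant's gain can be computed. A minimal sketch with invented scores:

```python
import statistics

# Invented pre- and post-intervention scores for five participants.
# The pre-test here is a baseline measurement of the dependent variable,
# not the piloting of a questionnaire.
pre_scores = [55, 60, 48, 62, 58]
post_scores = [63, 70, 55, 71, 66]

# Per-participant gain: the change between the two measurements.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_gain = statistics.mean(gains)

print(gains, mean_gain)  # [8, 10, 7, 9, 8] 8.4
```

The comparison is only meaningful if both groups are measured with the same instrument at the same points in time; the year-long delay described above breaks exactly this assumption.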

Sample size
A reasonable number of postgraduate education studies utilised experimental designs in the period 1995–2004, with large sample sizes. Though experimental designs form only 4 per cent of the total postgraduate education studies collected for PPER, South African higher education institutions with large faculties of education still have a responsibility to prepare researchers to meet the needs of society with respect to understanding education as a phenomenon worthy of large-scale studies. Over the past ten years, the South African government has been gathering EMIS and HEMIS data, which remain largely unanalysed but represent a critical resource for understanding the impact of education on learning lives. Universities ought to be leading these processes and reflecting on them. However, given that most of our postgraduates are masters students with limited time and resources, it is little wonder that institutions have not been able to devote either human or physical resources to large-scale studies.

The use of t-test as an analytical tool in experimentation
There is a high prevalence of the t-test in the postgraduate education studies using experimentation, especially in psychology of education. This could be attributed to the predominant use of tests and questionnaires as research instruments. Student's t-test, as it is called, determines whether the difference between sets of test marks is statistically significant. Thus, in the experimental designs, the t-test tests the hypothesis that the means of the experimental and control groups differ after the intervention.
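As an illustration of what such an analysis computes, the pooled-variance independent-samples t statistic can be written out directly with the standard library; the marks below are invented for the sketch, and in practice a library routine such as scipy.stats.ttest_ind would normally be used instead:

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled-variance (equal-variance) independent-samples t statistic."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        pooled_var * (1 / na + 1 / nb))

# Invented post-test marks (out of 100) for the two groups.
experimental = [68, 74, 71, 80, 77, 69, 75, 82, 73, 78]
control = [61, 65, 70, 64, 66, 59, 68, 63, 67, 62]

t = two_sample_t(experimental, control)
# A large |t|, judged against the t distribution with na + nb - 2 degrees
# of freedom, indicates the difference in group means is unlikely to be chance.
print(round(t, 2))  # 5.64
```

The hypothesis being tested is precisely the one described above: that the experimental and control group means differ after the intervention.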

Use of experimental designs in South African institutions within disciplines
The nature of experimental designs is not the only subject of this article: the 'academic context' in which the education postgraduate studies under investigation were produced also bears scrutiny. We have found that there are six institutions in South Africa where experimental designs were not used in postgraduate education research within the period 1995–2004, while only three institutions show reasonable use of these designs. In addition, most of these studies are in the psychology of education discipline. This implies that there is a lack of supervision capacity for experimental designs in South African institutions within disciplines other than educational psychology. This is a plausible finding, as education psychologists, or researchers associated with this field, tend to focus their research on particular phenomena such as child abuse, the use of drugs and alcohol, the effect of dysfunction in the family, and learning. They usually look for relationships between the case at hand and learning, or impediments to learning, as they occur in formal environments (such as the classroom or the counselling session).
CONCLUSION

On reflection, we argue that there is cause for concern when reviewing the existing research which claims to make use of experimental designs. First, such work is associated almost exclusively with Educational Psychology, with very little in mathematics and science education or in any other areas of the formal curriculum. In terms of the features of the work that has been done, it is evident that postgraduate students struggle to identify samples using classical design principles in experimental research; this can be attributed to the lack of flexibility in school timetables and to the difficulty of obtaining permission and consent from participants in such environments. Here, single-case experimental designs can often be used in education to avoid the problem of random assignment of participants. Further research in this area is needed to understand what can be done to promote educational research based on large-scale, evidence-based studies that will be able not only to influence policy making, but also to influence the ways in which education stakeholders make sense of large-scale education research. While it is evident that experimental and quasi-experimental research remains associated with particular disciplines, it is equally evident that the education sector does not make adequate provision for this kind of research to take place in schools, despite its obvious educational, pedagogic and research advantages. We know that experimental work internationally is often at the forefront of innovation in learning (see, for example, fields such as cognitive linguistics and neurolinguistics) and of illuminating the impact of learning strategies, and even materials, on the cognitive advancement of learners; but equally, because of the difficulty associated with setting up and conducting such research, it remains scarce.
In South Africa, with its history of rapid change and social engineering, the need for such work is more critical now than ever before.
NOTES
1. Source: Singleton, R. A. Jr. and B. C. Straits. 1999. Approaches to social research. 3rd ed. New York: Oxford University Press, 210–236.
2. For PPER's methodology see: Balfour et al. 2006. Project for Postgraduate Educational Research: Issues, trends and reflections.

REFERENCES
Balfour, R., L. Moletsane and P. Rule. 2007. Project on Postgraduate Educational Research in Education: Issues, Trends, and Reflections (1995–2004). An unpublished project brief. University of KwaZulu-Natal.


Barger-Anderson, R., J. W. Domaracki, N. K. Vakulick and R. M. Kubina. 2004. Multiple baseline designs: The use of single-case experimental designs in literacy research. Reading Improvement 41(4): 217–225.
Borman, G. D. 2002. Experiments for educational evaluation and improvement. Peabody Journal of Education 77(4): 7–27.
Botha, J., D. van der Westhuizen and E. de Swardt. 2005. Towards appropriate methodologies to research interactive learning: Using a design experiment to assess a learning programme for complex thinking. International Journal of Education and Development using Information and Communication Technology 1(2): 105–117.
Burtless, G. 2002. Randomized field trials for policy evaluation: Why not in education? In Evidence matters: Randomized trials in education research, eds. F. Mosteller and R. Boruch, 150–178. Washington, DC: Brookings Institute.
Campbell, D. 1969. Reforms as experiments. American Psychologist 24: 409–429.
Cook, T. D. and D. T. Campbell. 1979. Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Cook, T. D. and M. R. Payne. 2002. Objecting to the objections to using random assignment in educational research. In Evidence matters: Randomized trials in education research, eds. F. Mosteller and R. Boruch, 150–178. Washington, DC: Brookings Institute.
Desimone, L. M. 2002. Introduction. Peabody Journal of Education 77(4): 1–6.
Elmes, D. G., B. H. Kantowitz and H. L. Roediger III. 2006. Research methods in psychology. Australia: Vicki Knight.
Harvey, M. T., M. E. May and C. H. Kennedy. 2004. Nonconcurrent multiple baseline designs and the evaluation of educational systems. Journal of Behavioural Education 13(4): 267–276.
Howitt, D. and D. Cramer. 2000. First steps in research and statistics: A practical workbook for psychology students. London: Routledge.
Johnson, B. and L. Christensen. 2008. Educational research: Quantitative, qualitative, and mixed approaches. 3rd ed. Los Angeles: SAGE.
Karlsson, J., R. Balfour, R. Moletsane and G. Pillay. 2009. Researching postgraduate educational research. South African Journal of Higher Education 23(6): 1086–1100.
Kember, D. 2003. To control or not to control: The question of whether experimental designs are appropriate for evaluating teaching innovations in higher education. Assessment and Evaluation in Higher Education 28(1): 89–101.
Kennedy, C. H. 2004. Recent innovations in single case designs. Journal of Behavioural Education 13(4): 209–211.
Kobus, M. and J. Pietersen. 2007. The quantitative research process. In First steps in research, ed. M. Kobus. Pretoria: Van Schaik Publishers.
Lundervold, D. A. and M. F. Belwood. 2000. The best kept secret in counselling: Single case (N=1) experimental designs. Journal of Counselling and Development 78: 92–102.
Moses, J. W. and T. L. Knutsen. 2007. Ways of knowing: Competing methodologies in social and political research. New York: Palgrave.
Neuman, W. L. 2006. Social research methods: Qualitative and quantitative approaches. 6th ed. Boston: Pearson Education Inc.
Seethaler, P. M. and L. S. Fuchs. 2005. A drop in the bucket: Randomized controlled trials testing reading and math interventions. Learning Disabilities Research and Practice 20(2): 98–102.


Singleton, R. A. Jr. and B. C. Straits. 1999. Approaches to social research. 3rd ed. New York: Oxford University Press.
Sorensen, C., S. Smaldino and D. Walker. 2005. The perfect study: Demonstrating 'what works' in teacher preparation using 'Gold Standard' research designs in education. TechTrends: Linking Research and Practice to Improve Learning 49(4): 16–19.
Weed, M. 2008. A potential method for the interpretive synthesis of qualitative research: Issues in the development of 'meta-interpretation'. International Journal for Social Research Methodology 11(1): 13–28.

