Abstract
Learner-centered classrooms encourage critical thinking and communication among students and between students and their instructor, and engage students as active learners rather than passive participants. However, students, faculty, and experts often have distinct definitions of learner-centeredness, and the paucity of research comparing the perspectives of these different groups must be addressed. In the current study, our central research question was: how do student, faculty, and expert observer perceptions of learner-centeredness within biology classrooms compare to one another? We sampled 1114 students from fifteen sections of a general biology course for non-majors, and complete responses from 490 students were analyzed. Five valid and reliable tools (two faculty, two student, and one expert observer) evaluated the learner-centeredness of each participating section. Perceptions of learner-centered instructors often aligned with those of expert observers, while student perceptions tended not to align with either group. Interestingly, students perceived learner-centered instructors as less learner-centered if they taught at non-traditional times and/or in large-enrollment sections, despite their focus on student learning. Perceptions of learner-centeredness in the biology classroom are complex and may be best captured with more than one instrument. Our findings encourage instructors to be cognizant that the approaches they employ in the classroom may not be interpreted as learner-centered in the same manner by students and external observers, particularly when additional course factors such as enrollment and scheduling may encourage negative perceptions of learner-centered practices.
Citation: Heim AB, Holt EA (2018) Comparing student, instructor, and expert perceptions of learner-centeredness in post-secondary biology classrooms. PLoS ONE 13(7): e0200524. https://doi.org/10.1371/journal.pone.0200524
Editor: Andrew R. Dalby, University of Westminster, UNITED KINGDOM
Received: February 7, 2018; Accepted: June 28, 2018; Published: July 11, 2018
Copyright: © 2018 Heim, Holt. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: To protect the privacy and confidentiality of the human research subjects used in this study (i.e., faculty and students), and in accordance with University of Northern Colorado IRB regulations, data can be obtained by contacting the first author at ashley.heim@unco.edu. Upon written request, we can release data as de-identified and in aggregate form (i.e., by section, as we used in our analyses). For more information about this restriction, please contact Sherry May, IRB Administrator, Office of Sponsored Program at UNC (sherry.may@unco.edu) or Dr. Megan Stellino, UNC IRB Co-Chair (megan.stellino@unco.edu).
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Active learning is broadly defined as engaged teaching approaches that encourage critical thinking and communication among students and between students and their instructor [1–3]. Further, active learning contributes to the learner-centeredness of a classroom, which can also be characterized by the level of bilateral learning in a course, and whether students have a role in this process as active learners rather than passive participants [4]. While active classrooms tend to share goals of higher cognitive learning and separate the roles of instructors and students in a similar way, they can, on the ground, look very different, depending on the learner-centered practices administered in the classroom.
Experts within education fields have developed these broad descriptions of learner-centeredness and learner-centered practices. However, as Andrews et al. [5] noted, the definition of a “learner-centered” classroom is often generated by the instructors or students themselves, generally documented through self-reported survey responses in educational research. It remains unclear to what degree these expert, instructor, and student definitions of learner-centeredness can be interwoven or if they are discrete, potentially diverging perceptions.
Student challenges with learner-centered classrooms
Learner-centered classrooms reportedly lead to improvements in students’ metacognitive abilities, critical thinking skills, and subject knowledge [6–14], and have also been linked with improvements in student performance in the classroom [1, 12–13, 15]. Further, increases in student motivation, persistence, self-confidence, and attitudes in science fields have been correlated with learner-centered teaching and learning approaches in STEM (i.e., science, technology, engineering, and mathematics) courses [16–18]. The multi-faceted, positive impact on students from active learning [6, 19] is of particular significance in light of the continued leakiness of the STEM pipeline [20–21]; perhaps by actively engaging students in STEM courses from the start of their undergraduate careers, instructors can both increase retention rates and ensure a more authentic experience in the sciences for incoming students.
Despite these numerous benefits, many students resist learner-centered pedagogies. University students often have mixed feelings about the use of active learning techniques in lecture [15, 17]; several studies have reported that students prefer traditional lectures over active learning and consider the former method of teaching more conducive to learning [22–24]. Herreid and Schiller [25] noted that students often feel that more learner-centered classrooms (e.g., the flipped classroom) require more out-of-class time for reading, homework, etc., than traditional classrooms. Clicker questions or small group discussions in lectures, which require self-directed learning and critical thinking of students, have been shown to leave some students feeling frustrated or withdrawn from the course [26]. Similarly, Cooper and Brownell [27] reported that students of the LGBTQIA community often feel unwelcomed in active learning biology lectures and perceive increased pressure to reveal their identities during the frequent group learning activities characteristic of such sessions. While their study focused on a particular population of students, arguably the transition to a more active classroom likely increases scholastic accountability and social pressure on all students as they are forced into a more collaborative learning environment.
In a study by Watters & Watters [28], first-year undergraduate biochemistry students reported that they believe effective learning involves information transfer and prefer surface to deep strategies. Therefore, if students understand “learner-centered teaching” as strategies which maximize student learning, which they may erroneously equate with lecture-style presentations, their interpretations of learner-centeredness in the science classroom may be quite skewed from those of instructors and experts. The finding of Tsang and Harris [24] that students are unfamiliar with pedagogical practices, and with the process of learning in general, supports the presence of these student misconceptions. Subsequently, students’ negative perceptions of truly learner-centered classrooms and their unwillingness to engage in these practices may be rooted in their misconception that the extra expectations are burdens rather than benefits to them [29].
Faculty challenges with learner-centered classrooms
As mentioned above, learner-centered practices may improve student-faculty relations [18], which consequently improve the overall quality of the classroom environment by providing increased opportunity for discussion amongst the class [30] and shifting the accountability and responsibility of learning from the instructor onto the student [29]. Despite these reported benefits, many instructors remain hesitant to translate learner-centered pedagogies into their current teaching practices, citing lack of support and training [17, 31–32], increased time and effort required to reform a class [17, 24, 33], and loss of “professional identity” [34]. Some instructors view the lab component of a course as sufficient engagement and thus fail to incorporate active learning approaches in lecture, demonstrating a form of passive resistance [16, 35]. Andrews et al. [5] argue that the link between active learning and increased student learning gains may be attributed to instructors’ pedagogical experience and not the teaching strategy itself. These findings combined with personal ambivalence may deter science faculty from reforming their classrooms, which helps to explain the persistence of didactic lecture [11] in the face of contradictory evidence.
However, a gradual shift from traditional lecturing to more active strategies is occurring in undergraduate courses [36], and individual instructors are reforming their classes and experimenting with more learner-centered strategies. Regrettably, approximately 75% of the instructors that Ebert-May et al. [37] surveyed claimed that they used learner-centered practices but in fact used a lecture-based, teacher-driven pedagogy, demonstrating a large disconnect between faculty perceptions and actual teaching practices. This disconnect may derive from the possibility that instructors have their own disparate definition of learner-centeredness compared to students and expert observers, or perhaps because instructors undergo a cognitive shift after pedagogical development that is not necessarily transferred to their actual classroom practices [38–39]. Dall’Alba and Sandberg [40] note that, even after educators complete professional development programs, a broad understanding of pedagogical practice is uncommon among participants; the authors further argue that professional development incorporates not only the development of skills but knowledge and attitudes as well, which could at least partially explain the aforementioned disconnect between instructors’ perceptions of learner-centeredness compared to those of experts. Further, McCombs and Quiat [41] found that student perceptions tended to be a better measure of learner-centeredness than instructor perceptions and, additionally, that these student perceptions were more aligned with those of trained educational and developmental psychologists than with those of course instructors [42].
Instruments for measuring learner-centeredness
A variety of valid and reliable instruments are available to analyze the learner-centeredness of a classroom, whether from the perspective of the student, the instructor, or an expert observer. Previous work has used some of these tools to contrast why students learn and how they learn [43–46], and how the teaching-learning environment influences student approaches to studying and learning [47–48]. Faculty instruments provide teachers with formal opportunities for self-reflection and self-assessment. Data from these tools may serve as a compass to focus reform efforts to best achieve a student-driven learning environment [49–51]. Meanwhile, expert observer protocols are often used to enhance student learning via critiquing and reforming teaching practices from an objective vantage point. Such protocols can quantify the learner-centeredness of instruction in a classroom, providing meaningful feedback to the instructor [52–54].
Many previous studies measure the degree of learner-centeredness of classrooms from just a single perspective: only the student view [43–46], only the instructor view [49–51], or only the expert view [52–54], based on a single instrument; yet, there is a dearth of studies which cross-evaluate student, faculty, and expert perceptions. As students, faculty, and experts often have distinct definitions of learner-centeredness, the paucity of research based on instruments which capture the perspectives of these different groups must be addressed. One exception, Trigwell et al. [55], compared faculty and student perceptions with separate faculty (i.e. the Approaches to Teaching Inventory, or ATI) and student tools (i.e. the Study Process Questionnaire, or SPQ). They found student and faculty perspectives on learner-centeredness generally agreed [55]. In courses where instructors self-reported a more teacher-centered focus on transmitting knowledge, students adopted a more surface approach to learning that subject; in contrast, but less strongly, in courses where instructors self-reported a more student-centered focus on conceptual change, students adopted a deeper approach to learning [55]. These findings were not compared to an expert observer’s perceptions of learner-centeredness and therefore may have incorporated bias due to instructors’ over-estimation of teaching skills or students’ resistance or lack of pedagogical knowledge regarding learner-centeredness.
In another study, Gibbs and Coffey [56] compared an instructor tool to two student surveys and found that instructors, who were pedagogically trained, tended to believe that they were encouraging deeper learning approaches compared to instructors who received no pedagogical training. While student learning gains improved in courses with pedagogically trained versus untrained instructors, student scores on the “Deep Approach” subscale of a student questionnaire did not significantly increase; in contrast, student learning gains remained unchanged in courses taught by the untrained cohort of instructors [56]. This study suggests that students may be misjudging their learning by performing at a high level but not attributing that success to learner-centered approaches; meanwhile, instructors in their sample who participated in pedagogical training appear more likely to use learner-centered teaching practices and may excel in such aspects of teaching as enthusiasm, organization, and rapport [56].
The current study is unique in that it used multiple student and instructor instruments within the same classrooms, and compared these perspectives to one another in addition to expert perceptions of the same biology classrooms. Redundancy in tools for individual populations can allow us to capture different elements of learner-centeredness, providing a more complete understanding of how learner-centeredness is perceived in the undergraduate biology classroom.
Purpose and research questions
In the current study, our central research question was: how do student, faculty, and expert observer perceptions of learner-centeredness within biology classrooms compare to one another? Specifically, we wanted to (a) compare subscales within individual student and faculty instruments, (b) compare subscales across student, faculty, and expert observer instruments and describe those relationships, and (c) describe the structure of learner-centered classrooms using multiple instruments. We predicted that different instruments, or subscales within a single instrument, measuring learner-centeredness from a single perspective (i.e., faculty or student) would correlate positively and linearly. We envisaged that faculty perceptions would generally be disconnected from expert perceptions, as supported by Ebert-May et al. [37]. Contrastingly, we predicted that student perceptions would be more aligned with expert perceptions, as supported by McCombs and Quiat [41] and Daniels et al. [42]. We also predicted that student perceptions of learner-centeredness would be disconnected from faculty perceptions, supported by Fraser’s [57] findings that student perceptions of instruction and the overall class environment are more negative than instructor perceptions, even in post-secondary education. We hypothesized that a single-dimension framework, characterized by highly learner-centered classrooms at one end and highly teacher-centered classrooms at the opposing end, would best describe biology classrooms from various perspectives.
Materials and methods
Ethics statement
The procedures for this study were approved by the Institutional Review Boards of Utah Valley University (IRB# 01103) and the University of Northern Colorado (IRB #932641–1). Written informed consent was obtained from all participating students and faculty at the beginning of the study.
Participants
We conducted an observational study in introductory biology classrooms at one public post-secondary institution in the western US. While this institution is self-described as “engaged” in its mission, instructors were not considered pedagogical experts. We assumed that the fifteen class sections and nine instructors in our study were representative of average undergraduate biology classrooms.
We sampled 1114 students from fifteen sections of a general biology course for non-majors, and complete responses from 490 students were analyzed (i.e., students who completed both the student surveys administered in this study). While volunteer participation can result in non-response bias, our response rate of 44% is close to the accepted average noted in psychological studies [58] when considering the removal of three course sections from the original data set (n = 244 students enrolled; further described below). Our twelve participating class sections varied by student enrollment (min = 16 students per section, max = 391, mean = 91.4) and class meeting time (1 section was a weekend course, 3 were night classes, and 8 met during the weekday).
Nine instructors taught these fifteen sections during Fall 2013 and Spring 2014; six of these instructors taught two sections during the same semester. One of the participating instructors failed to complete both faculty surveys, and consequently both of this instructor’s sections were removed from our data set (n = 94 students enrolled). Additionally, one of the participating instructors voiced concern after completing the faculty surveys regarding their inconsistent interpretation of survey questions; to preserve the validity and reliability of our analyses, we also removed this instructor’s section from our data set (n = 150 students enrolled). Our final analyses included twelve sections. The remaining seven instructors had various levels of teaching experience: one instructor had taught for 2–3 years; one for 3–5 years; two for 11–20 years; and three for 21 or more years. Additionally, the population of instructors used in this study included tenured and tenure-track professors, as well as adjunct instructors. Course section numbers used in this paper (1–12) reflect their ranked Reformed Teaching Observation Protocol (RTOP) score (i.e., section one had the highest RTOP score, while section twelve had the lowest RTOP score), and to protect participant anonymity do not link to actual institutional numbering schemes.
Conceptual framework
We used five valid and reliable tools (2 for faculty, 2 for students, and 1 for expert observers) to evaluate the learner-centeredness of each section participating in this study. The conceptual framework, or null hypothesis, for our work is a one-dimensional gradient, in which each tool or subscale within an instrument falls at one end of a learner- to teacher-centered continuum and opposes the other end (Fig 1). We expect the student-centered end of our gradient to include classrooms where faculty hold more learner-centered beliefs and focus more on conceptual change in their students, and where students incorporate deeper learning approaches and dedicate more class time to building models and sharing ideas with one another. In contrast, at the opposing end of our gradient, we expect a more teacher-centered classroom to include more non-learner-centered beliefs and be more focused on information transfer by faculty to students, and for students to incorporate more surface learning approaches and rarely interact with the instructor or their peers during class.
Fig 1. Examples of student behaviors and instructor practices at the learner-centered end (in gray) juxtapose those that are more teacher-centered (black) at the other end of the framework. Learner-centered descriptors (gray) were expected to positively correlate with each other, while teacher-centered descriptors (black) were expected to positively correlate with each other. Negative correlations (dashed line) were expected between two related but contrasting descriptors, as both would fall on opposite ends of the learner- to teacher-centered framework. For example, deep approaches are more learner-centered, while surface approaches are more teacher-centered; a student that engaged in deeper learning approaches would not be expected to engage in as many surface approaches, or vice versa.
We assumed that subscales or factors of different instruments would overlay onto our conceptual framework (Fig 1), and likewise relate to other tools positioned within this framework. If factors, from different instruments or within the same instrument, both attempted to capture learner-centered behaviors, we expected that those factors would positively covary, and fall at the same end of our gradient. Alternatively, we predicted that if one subscale measures teacher-centered beliefs and another measures learner-centered beliefs, they will negatively covary, representing opposite ends of our 1-D framework.
Instruments for comparing perceptions of learner-centeredness
Nine factors (italicized) were derived from five published instruments (Table 1) to describe learner-centered perceptions in the classroom within our conceptual framework (Fig 1). The Assessment of Learner-Centered Practices (ALCP) [59], a faculty instrument, assessed characteristics of effective teaching, assessment of classroom practices most relevant to motivation and achievement, and beliefs and assumptions about learners, learning, and teaching. The ALCP has been extensively validated and has undergone multiple item reliability analyses (α = 0.76–0.91) [60–62]. Two of the three scales within the ALCP measured learner-centered beliefs (LC Bel) and non-learner-centered beliefs (NLC Bel) of faculty. We expected learner-centered beliefs to fall closer to the learner-centered end of the gradient, while non-learner-centered beliefs may fall toward the teacher-centered end of the gradient (Fig 1). The Approaches to Teaching Inventory (ATI) [63], founded on research perspectives applied by Marton et al. [64], functioned to capture faculty approaches to teaching and learning; the ATI measured information-transfer/teacher-focused (ITTF) and conceptual change/student-focused (CCSF) practices. Prior studies have conducted psychometric analyses, including confirmatory factor analysis, on the ATI to ensure both its validity and reliability across a range of participants and settings (α = 0.66–0.74) [63, 65–66]. ITTF practices were expected to overlap with non-learner-centered beliefs at the teacher-centered end of the gradient, while CCSF practices were expected to overlap with learner-centered beliefs near the learner-centered end of the gradient (Fig 1).
Table 1. Within each student and instructor instrument exist the primary and secondary subscales that we used in our study; we indicate the possible score ranges for each subscale and the end of the learner-centered (LC) gradient that a high score on that subscale would capture.
Two student surveys were used to evaluate student learning approaches on a deep or surface level and to better understand the general learning-teaching environment, respectively. The Revised 2-Factor Study Process Questionnaire (R-SPQ-2F) [43], based on the original Study Process Questionnaire (SPQ) developed by John Biggs in the 1980s, measured deep and surface approaches. Psychometric analyses, including confirmatory factor analysis, conducted by many prior researchers suggest that the R-SPQ-2F collects reliable data (α = 0.64–0.73) [43, 66–68]. While deeper approaches are motivated by a student’s intrinsic interests and desire to maximize meaning, surface approaches are motivated by a student’s fear of failure and rote learning strategies [43]. We expected deeper approaches to correspond with the learner-centered end of the gradient, while more surface approaches may fall on the teacher-centered end of the gradient (Fig 1). The Shortened Experiences of Teaching and Learning Questionnaire (SETLQ) [69] was produced as part of the Enhancing Teaching-Learning Environments in Undergraduate Courses Project and was intended to enhance student achievement via the strengthening of student-instructor relations and of the learning-teaching environment in general [69]. The SETLQ measured six scales, and we focused on two of those scales: student self-reported experiences of teaching and learning (ETL) and knowledge and learning acquired (KLA). Validity and reliability analyses for the SETLQ have been conducted in several prior studies (α = 0.56–0.83) [69–71].
We anticipated that students who self-reported increased learning gains in the classroom (KLA), in addition to having positive teaching and learning experiences (ETL), would cluster near the learner-centered end of the gradient; it should be noted that this is the only pair of subscales from a single instrument that was expected to associate with the same end (i.e., the learner-centered end) of the learner- and teacher-centered spectrum.
The Reformed Teaching Observation Protocol (RTOP) [54] quantified the learner-centeredness of instruction within each classroom, as determined by an external observer. The RTOP, originally designed by the Evaluation Facilitation Group of the Arizona Collaborative for Excellence in the Preparation of Teachers (ACEPT), allowed trained experts to objectively classify teaching in a classroom on the same learner- to teacher-centered spectrum described above (Fig 1). More learner-centered classrooms should earn higher RTOP scores, while more teacher-centered classrooms should earn lower RTOP scores. Sawada et al. [54] used RTOP to quantify the learner-centeredness of undergraduate science classrooms after instructors participated in professional development workshops.
In the current study, we chose to use RTOP rather than other expert observer tools such as the Classroom Observation Protocol for Undergraduate STEM (COPUS), because RTOP requires more rigorous multi-day training to achieve sufficient interrater reliability [54] and contains protocol items that are more closely aligned with quantification of learner-centeredness in the classroom. Considering expert observer tools, RTOP was the best fit for our research objectives centered on learner-centeredness in the undergraduate biology classroom; per Sawada et al. [54], RTOP is “standards based, inquiry oriented, and student centered” (p. 1).
Administration and analysis of faculty instruments.
Faculty surveys were administered online during the last week of the semester (via www.surveymonkey.com); however, instructors were given up to two weeks to complete the two faculty surveys to maximize response rates. In this study, ALCP [59] items were ranked on a 4-level Likert scale, and answers were ultimately categorized into either “learner-centered beliefs” or “non-learner-centered beliefs” (Scales 1 and 3, respectively); scores were then summed based on the system described by McCombs and Miller [59]. The ALCP Scale 2, or “Non-Learner-Centered Beliefs about Learners,” was not used in this study because it focused on personal reflection and emotional aspects of teaching [59, 72]. We felt that personal beliefs about student performance or persistence may or may not translate into an instructor’s pedagogical practices, and thus did not cleanly overlay with one end of our framework as we have defined it. The learner-centered beliefs and non-learner-centered beliefs subscales of the ALCP were not further broken down into secondary subscales as the other instructor and student instruments were.
The ATI consisted of sixteen five-point Likert scale items. Answers were ultimately categorized into one of two pedagogical categories of eight items each based on reported teaching practices: teacher-focused and information transfer-based, or student-focused and conceptual change-based [63]. We then summed scores for items in each category. Within the ATI, ITTF can be further broken down into information transfer and teacher-focused, and CCSF can be further broken down into conceptual change and student-focused. Hence, an instructor with a high ITTF score would tend to lecture at students more, while an instructor with a high CCSF score would generally focus more on students’ understanding of concepts rather than simply transferring knowledge.
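To make the summing step concrete, here is a minimal Python sketch of this kind of Likert subscale scoring. The item-to-subscale assignments shown are placeholders rather than the published ATI key [63]; the same pattern, with its own item map and scale length, applies to the ALCP and to the student instruments described below.

```python
# Minimal sketch of Likert subscale scoring (hypothetical item keys).
# The mapping below is a placeholder, not the published ATI key [63].
from typing import Dict, List

ATI_SUBSCALES: Dict[str, List[str]] = {
    "ITTF": [f"item_{i}" for i in (1, 4, 6, 9, 11, 13, 15, 16)],  # placeholder items
    "CCSF": [f"item_{i}" for i in (2, 3, 5, 7, 8, 10, 12, 14)],   # placeholder items
}

def score_subscales(responses: Dict[str, int],
                    subscales: Dict[str, List[str]]) -> Dict[str, int]:
    """Sum a respondent's Likert answers (here 1-5) within each subscale."""
    return {name: sum(responses[item] for item in items)
            for name, items in subscales.items()}

# Example: an instructor's sixteen responses yield one ITTF and one CCSF sum.
# scores = score_subscales(instructor_responses, ATI_SUBSCALES)
```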
Administration and analysis of student instruments.
The R-SPQ-2F asked students to respond to twenty items related to attitudes towards and usual methods of studying; the scale for each item ranged from 1 (never or only rarely) to 5 (always or almost always). Main scale scores were categorized into one of two categories and summed: deep or surface approaches [43]. Within the R-SPQ-2F, the deep subscale can be further broken down into deep motive and deep strategy, while the surface subscale can be similarly broken down into surface motive and surface strategy. In this case, motive refers to a student’s justification for learning and succeeding in the classroom, while strategy refers to a student’s plan for learning the material in a particular course and how effective they are in doing so.
Although the SETLQ is composed of six sections, we used only two subscales (the ETL and KLA, described above) in this study due to our perception of their direct relevance to learner-centeredness. The ETL asked students to indicate their level of agreement on 25 items, each on a 5-level Likert scale, based on their general approaches to studying and learning. The KLA asked students to respond to eight items regarding their perceptions of what they had learned in the course (i.e., Introductory Biology); the scale for each item ranged from 1 (very little) to 5 (a lot). Scores for each subscale were calculated by summing item responses in a given subscale. Within the SETLQ, the ETL can be further broken down into Aims and congruence (aims), Choice allowed (choice), Teaching for understanding (understanding), Set work and feedback (feedback), Assessing understanding (assessment), Staff enthusiasm and support (staff), Student support (students), and Interest and enjoyment (interest), while the KLA can be further broken down into knowledge and subject-specific skills (k-skills), generic skills (g-skills), and information skills (i-skills).
Both student surveys were administered online during the last week of the semester (via www.surveymonkey.com) and students were given a week and compensated 1% of their final grade to complete them. Additionally, at the beginning of the semester, students were administered a demographic questionnaire and a critical thinking survey used for another study [11]. The demographic survey included seven questions and collected the ethnic and educational backgrounds of the student participants. Demographic information was available for 94% of students in the current study.
Collection and scoring of expert instrument.
During Fall of 2013 and Spring of 2014, 65 classroom sessions of the 12 introductory biology sections were recorded. Filming days were generally selected at random, and each section was recorded between four and eight times during the semester, usually without advance notice to the instructor. Three to four usable videos from each section were randomly selected for evaluation using the RTOP. We expected that analyzing multiple class sessions would provide a more comprehensive range of the pedagogical strategies the instructors employed throughout the semester, hence representing a more genuine measure of learner-centeredness in the classroom. The RTOP is a tool, considered both valid [54, 73] and reliable [74–75], which quantitatively measures the learner-centeredness of instruction in a classroom. In this study, videos were independently rated by at least two trained raters, and interrater reliability was high (generalizability coefficient = 0.787; see [11]).
Three scales exist within the RTOP, including lesson design and implementation, content, and class culture; items within each scale (25 total) were ranked on a scale from zero (absent) to four (present) [54]. The summed scores from the 25 items result in an RTOP lesson score ranging from 0–100. Two trained raters [11] independently scored each class session. Each score was categorized into one of five RTOP levels [37, 76]. If both raters’ scores placed the same class session into the same RTOP level, the scores were averaged; however, if the two scores for a single class session fell into different RTOP levels, an additional tie-breaker rater scored the session, and the two scores sharing an RTOP level were averaged. Multiple class-session RTOP scores for each section were then averaged into a single score. We could not use the natural scales within the RTOP, since our final RTOP score for each section represented an average across several raters and class sessions.
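As a rough illustration of the rater-reconciliation logic just described, consider the following Python sketch. The level cut points are placeholders rather than the published RTOP level boundaries [37, 76], and the fallback when the tie-breaker matches neither original rater is our assumption, since the protocol above does not specify that case.

```python
# Sketch of per-session RTOP reconciliation; LEVEL_CUTS are placeholder
# boundaries, not the published five-level scheme [37, 76].
LEVEL_CUTS = [30, 45, 60, 75]  # hypothetical boundaries between levels 1-5

def rtop_level(score: float) -> int:
    """Map a 0-100 RTOP score onto one of five ordinal levels."""
    return 1 + sum(score > cut for cut in LEVEL_CUTS)

def session_score(rater_a: float, rater_b: float, tie_breaker: float) -> float:
    """Average the two primary raters if their levels agree; otherwise
    average the scores that share a level with the tie-breaker rater."""
    if rtop_level(rater_a) == rtop_level(rater_b):
        return (rater_a + rater_b) / 2
    agree = [s for s in (rater_a, rater_b)
             if rtop_level(s) == rtop_level(tie_breaker)]
    # If the tie-breaker matches neither rater (a case the protocol text
    # does not specify), this sketch falls back to the tie-breaker alone.
    return (sum(agree) + tie_breaker) / (len(agree) + 1)

def section_score(session_scores: list) -> float:
    """Average multiple class-session scores into one section-level score."""
    return sum(session_scores) / len(session_scores)
```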
Data and analyses
Cronbach’s alpha reliability analyses for each scale were calculated in SPSS [77]. From the nine subscales representing three perspectives (student, instructor, and expert observer), we created five data matrices which were used in multivariate analyses. We initially created two sets of these five data matrices; one set used section (n = 12) as the sample unit and the other set used individual students (n = 490) as the sample unit. For each set, the first two matrices included student data: student primary subscales (4 factors) and student secondary subscales (15 factors). The next two matrices included faculty data: instructor primary subscales (4 factors) and instructor secondary subscales (6 factors). The final data matrix, RTOP scores (1 factor), represented expert observations of the same classes.
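These reliabilities were computed in SPSS; for readers without SPSS, Cronbach's alpha can be reproduced directly from its definition, as in the Python sketch below (which assumes a complete respondents-by-items matrix with no missing answers).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of
    Likert responses; assumes no missing data."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```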
Pairwise Pearson correlations of univariate factors were run in SPSS [77]. We compared all our factors, including RTOP (expert) scores and student and faculty instruments, at either the primary subscale (i.e. ITTF, CCSF, LC-bel, NLC-bel, Deep, Surface, ETL, and KLA; Table 2) or secondary subscale (listed in italics in the Administration and Analysis of Student/Faculty Instruments sections above; S1 Table). Correlations were compared to a null hypothesis of no relationship, and the resulting p-values were compared to a Bonferroni-adjusted alpha of 0.000806 for the primary subscale comparisons (Table 2) and 0.000113 for the secondary subscale comparisons (S1 Table). The Bonferroni-adjusted alpha corrected for multiple comparisons to reduce the possibility of measuring false-positive results.
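A sketch of this pairwise procedure in Python (using scipy's `pearsonr`; the number of comparisons, and hence the adjusted alpha, depends on which factor set is analyzed):

```python
from itertools import combinations
from scipy.stats import pearsonr

def bonferroni_pairwise(factors: dict, alpha: float = 0.05):
    """All pairwise Pearson correlations among named factor vectors, judged
    against a Bonferroni-adjusted alpha (alpha / number of comparisons);
    the study reports adjusted alphas of 0.000806 (primary subscales) and
    0.000113 (secondary subscales)."""
    pairs = list(combinations(sorted(factors), 2))
    adj_alpha = alpha / len(pairs)
    results = {}
    for a, b in pairs:
        r, p = pearsonr(factors[a], factors[b])
        results[(a, b)] = {"r": r, "p": p, "significant": p < adj_alpha}
    return results, adj_alpha
```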
We ran nonmetric multidimensional scaling (NMS) analyses, using a Euclidean distance measure, in PC-ORD 7 [78] to identify gradients in perceptions of learner-centeredness and visually capture how various perceptions overlap. NMS is a multivariate ordination technique that represents the sample units in as few dimensions as possible using measured similarities. We chose NMS over other ordination techniques (e.g., PCA, factor analysis) because NMS allows the researcher to select the distance measure and to rigidly rotate the final configuration to align with a variable of interest, rather than loading the greatest variance on the first axis. The purpose of our NMS analysis was to (1) test the hypothesis of our framework (i.e., if learner-centeredness is one-dimensional), (2) understand the relationship of secondary subscales to the learner-centered structure defined by primary scales, and (3) understand the relationship of faculty scales to the learner-centered structure defined by student scales [79–80].
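Our ordinations were run in PC-ORD 7; a rough Python analogue using scikit-learn's nonmetric MDS is sketched below. Note that PC-ORD's iteration scheme and the post-hoc rigid rotation used to align ETL with a single axis are not reproduced here, and the data are random stand-ins for the section-averaged subscale scores.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# 12 sections x 4 student primary subscales (Deep, Surface, ETL, KLA);
# random values stand in for the real section-averaged scores.
X = np.random.default_rng(0).random((12, 4))

dist = squareform(pdist(X, metric="euclidean"))  # Euclidean distance matrix
nms = MDS(n_components=2, metric=False,          # nonmetric MDS
          dissimilarity="precomputed", random_state=0)
coords = nms.fit_transform(dist)                 # sections in 2-D ordination
print(nms.stress_)                               # final stress of the solution
```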
We chose to use the student primary subscale data as the main matrix upon which to build ordinations and all other data as secondary matrices to investigate after-the-fact relationships with this matrix. We selected the student matrix, instead of the faculty matrix, because it represented a larger sample (i.e., 490 students vs. 7 faculty members); further, students are the natural center point of a learner-centered classroom, so we wanted to align all other perspectives to theirs.
Mantel tests differ from simple univariate correlations in that they measure correlations across matrices rather than individual pairwise comparisons. Our Mantel tests evaluated collective differences between students, instructors, and expert observers using all instruments together [81–82]. Lastly, cluster analyses using Ward’s minimum variance method [83] to estimate the expected number of clusters (based on a Euclidean distance measure) were run in PC-ORD 7 to further analyze how alike course sections were based on instructor versus student perceptions. Cluster analysis is a multivariate classification technique that separates data into meaningful groups (or clusters) based on overall relatedness; hence, items that cluster together are more related than items that do not cluster into the same group [84]. We used cluster analysis to independently separate primary student and instructor data into meaningful groups based on course section for later comparison.
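Both procedures have straightforward Python analogues; the sketch below implements a simple permutation Mantel test and notes Ward's clustering via scipy. The actual analyses were run in PC-ORD 7, so details such as the permutation count here are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

def mantel_test(X: np.ndarray, Y: np.ndarray, n_perm: int = 999, seed: int = 0):
    """Permutation Mantel test: correlate two Euclidean distance matrices
    built from the same sample units (e.g., sections described by student
    subscales vs. by instructor subscales)."""
    dx, dy = pdist(X, "euclidean"), pdist(Y, "euclidean")
    r_obs = pearsonr(dx, dy)[0]
    Dy = squareform(dy)
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(len(X))              # shuffle one matrix's units
        r_perm = pearsonr(dx, squareform(Dy[np.ix_(p, p)]))[0]
        exceed += abs(r_perm) >= abs(r_obs)
    return r_obs, (exceed + 1) / (n_perm + 1)

# Ward's minimum-variance clustering of sections by primary subscale scores:
# Z = linkage(section_by_subscale_matrix, method="ward")  # Euclidean distance
# dendrogram(Z)
```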
Data adjustments
Unfortunately, we found that cluster analyses with student as the sample unit were unwieldy in size (i.e., 490 branch tips), not informative, and did not produce identifiable patterns within the cluster dendrograms. Further, the overall patterns in the ordinations and the proportion of variance explained were similar using students or sections (i.e., all students within a section averaged) as sample units. We further discovered that secondary subscales in ordination analyses may be more accurate in parsing out perceptions of learner-centeredness with section as the sample unit compared to using student responses as the sample unit, though we found no difference in comparing primary subscales using section versus student responses as sample units. Particularly in science education, the use of individual student responses as sample units often leads to an inability to distinguish between learning gains due to instructional practices or learning gains due to extrinsic factors (e.g. experiences and backgrounds) of individual students [85]. While individual student responses may seem more attractive as a sample unit, they act as pseudoreplicates; therefore, sections as sample units are statistically superior. Results using students as sample units, therefore, are not reported here and all subsequent analyses reflect sections.
Results
Participating students, instructors, and class sections
To better describe our student population, we collected self-reported demographic data from our participants. Of the 490 students in our sample who fully completed the demographic portions of the student surveys, 30.8% (151 students) were freshmen, 43.3% (212) were sophomores, 19.6% (96) were juniors, 5.1% (25) were seniors, and 1.2% (6) were post-baccalaureate. The mean self-reported grade-point average within this student population was 3.3 on a 0.0–4.0 scale, while the mean ACT score was 22.9. The majority of participants (79%; 389 students) were Caucasian; 9% (46) were Latina/o; and 12% (55) were other ethnicities. Students, on average, had taken 1.2 biology courses in high school and 0.2 biology courses at the college level. Summary data for each instrument are available in Table 3.
Pairwise univariate correlations
Primary subscales.
When primary subscales (i.e., ITTF, CCSF, LC-bel, NLC-bel, Deep, Surface, ETL, and KLA) and RTOP were compared across sections via Pearson correlations (Table 2), the strongest negative correlation was measured between ETL and Surface (r = -0.97; p < 0.000806), which represent student subscales from different instruments. We found no strong positive correlations between primary subscales (p > 0.000806; Table 2) across sections.
Multivariate trends among instruments
Ordinations.
In analyzing average student responses of primary subscales (i.e., Deep, Surface, ETL, and KLA) across our twelve sections, the final stress for a two-dimensional solution was 1.2067 (p = 0.0199), with a final instability of <0.001 after 52 iterations (Fig 2). We rotated this ordination by the strongest variable, ETL (353 degrees), to load it on a single axis. Axis one explained 96.3% of the variance and axis two explained 3.3% of variance in student primary subscale scores. ETL (r = 0.99) and KLA (r = 0.83) explained most of the positive end of axis one, while the opposing end of axis one was associated with Surface approaches (r = -0.60). Axis two contrasted Deep approaches (r = 0.91) and, to a lesser extent, KLA scores (r = 0.57) at the positive end with Surface approaches (r = -0.67) at the negative end. The positive end of Axis 1 was characterized by learner-centered strategies, while the negative end was indicative of non-learner-centered strategies. Similarly, the positive end of Axis 2 was characterized by learner-centered motives, while the negative end was indicative of non-learner-centered motives (Fig 2).
Fig 2. (a) Several components of the ETL and KLA positively correlate with Axis 1, the strategy axis. Conceptual change of the ATI also correlated with the positive end of axis one, though it was not included in the ordination figure. (b) The Deep and Surface approaches of the R-SPQ-2F associate with the positive and negative ends of Axis 2, the motive axis, respectively. In this panel, the relative symbol size of the 12 course sections is coded by RTOP score; high RTOP scores (i.e., larger circles) correlate with the positive end of Axis 2.
When student secondary subscales by section were overlaid onto the ordination of student primary subscales, the positive end of axis one was associated with several of the secondary subscales, including those of the ETL (SETLQ): feedback (r = 0.97), understanding (r = 0.97), choice (r = 0.90), aims (r = 0.90), interest (r = 0.82), staff (r = 0.59), and student (r = 0.58); those of the KLA (SETLQ): k-skills (r = 0.84), i-skills (r = 0.72), and g-skills (r = 0.67); and one from the R-SPQ-2F: deep strategy (r = 0.53). Assessment was the only secondary subscale of the ETL that did not strongly correlate with the positive end of axis one (r = 0.35). It should be noted that Deep approaches among the primary subscales above did not strongly associate with axis one, although strong correlations did arise between the Deep secondary subscales and axis one. The opposing end of axis one was only strongly associated with the R-SPQ-2F’s surface strategy (r = -0.72). The positive end of axis two was correlated with deep strategy (R-SPQ-2F; r = 0.91), deep motive (R-SPQ-2F; r = 0.88), and g-skills (r = 0.66), while surface motive (R-SPQ-2F; r = -0.74) was the only secondary subscale strongly related to the negative end of axis two (Fig 2).
When instructor primary subscales were overlaid onto the ordination of mean student responses per section in primary subscale space, CCSF (ATI) was related to the positive end of axis one (r = 0.63), while no factors were strongly associated (all |r| < 0.5) with the negative end of axis one or either end of axis two. When instructor secondary subscales were overlaid onto the student primary subscales, conceptual change (ATI) associated with the positive end of axis one (r = 0.61), while no other factors were strongly associated (all |r| < 0.5) with the negative end of axis one or either end of axis two. The single factor which captured expert perceptions, RTOP, correlated with the positive end of axis two (r = 0.68) but was not strongly associated with axis one. The primary subscales from the second instructor tool, the ALCP, were not strongly associated with either axis (all |r| < 0.5) (Fig 2).
Multivariate correlations
Pairwise Mantel tests jointly compared multiple indices of student, instructor, and expert perceptions of the learner-centeredness of participating classes. No significant correlations (at α = 0.05) existed among class sections based on similarities using primary subscales of instructors and students or RTOP (Table 4). Similarly, no significant correlations (at α = 0.05) existed among class sections based on similarities using secondary subscales of instructors and students or RTOP (Table 4).
Table 4. Correlation coefficients and p-values in the upper corner compare primary subscale scores, while correlation coefficients in the lower corner compare secondary subscale scores.
Cluster analyses
To further analyze the relatedness of instructor to student perceptions of learner-centeredness, we compared independent cluster dendrograms based on section-averaged primary subscale responses. Dendrogram nodes were rotated to best align clusters of sections between student and instructor perspectives (Fig 3). Some pairs of course sections (i.e., 2 and 4; 11 and 12; 7 and 9; 5 and 6; and 8 and 10) were taught by the same instructor, thus their faculty survey scores are identical. In grouping course sections by student primary subscales (Fig 3A), we identified two main clusters with 50% information remaining. The first student cluster (top cluster; Fig 3A) included three course sections (i.e. 2, 12, and 4) in which students tended to have higher ETL, KLA, and deep scores and lower surface scores; this first group was categorized as the more learner-centered group in which learning was based on deep approaches. Interestingly, this cluster also included more of the low enrollment course sections (mean = 57.67 students per section, range = 48–75 students). The second student cluster (bottom cluster; Fig 3A) included nine course sections (i.e. 11, 10, 1, 8, 3, 6, 7, 5, and 9) in which students tended to have low ETL, KLA, and deep scores and high surface scores; this second group was categorized as the more non-learner-centered group in which learning was based on surface approaches. Interestingly, this cluster also appeared to include more of the higher enrollment course sections (mean = 102.67 students per section, range = 16–391 students).
Fig 3. In the dendrogram, information remaining (%) is indicative of the strength of the relationship between class sections; clusters joined with greater information remaining are more closely related. Sections are clustered by student perceptions in the dendrogram to the left (a), while the same sections are clustered by instructor perceptions in the right dendrogram (b). Identical course sections are connected in the center to aid in visualization of similarities; connector line patterns denote enrollment size (dashed line ≤70 students, solid line = 71–150 students, bolded double line >150 students [one section, n = 391]). In the instructor dendrogram, Cluster A is the true learner-centered cluster; Cluster B is characterized by internal confusion within individual faculty; Cluster C is epitomized by the conflict in perspectives among groups; and Cluster D is the non-learner-centered cluster based on instructor and student perceptions.
In grouping course sections by instructor primary subscales (Fig 3B), we identified four main clusters with approximately 85% information remaining. The first faculty cluster (cluster A; Fig 3B) included three course sections (i.e. 2, 4, 3) in which instructors were more learner-centered as evidenced by high CCSF scores and three of the top four RTOP scores; interestingly, students also perceived two out of three of these moderately-sized classes to be learner-centered (Fig 3A). Cluster A is the only truly learner-centered cluster, where student, faculty, and expert perceptions of learner-centeredness tended to generally align.
The second faculty cluster, cluster B, included four course sections (i.e. 12, 11, 8, and 10) in which instructors were less learner-centered as evidenced by generally higher ITTF and NLC-bel scores; however, sections twelve and eleven had average to high CCSF and LC-bel scores while sections eight and ten had average CCSF and LC-bel scores (Fig 3B). The high CCSF scores in sections twelve and eleven are attributed to high conceptual change scores, as student-focused scores were quite low in these sections. Interestingly, the single instructor of these two sections had more than twenty years of teaching experience and earned relatively low RTOP scores. Thus, while this instructor may have identified with the ideas of learner-centeredness in theory, they may not have put this theory into practice while teaching the sessions we observed. Notably, the instructor of sections 8 and 10 had little teaching experience, which likely influenced their counterintuitive perception of their own teaching as both teacher-focused and student-centered. Students within cluster B perceived these classes to be non-learner-centered, except for section 12, in which students perceived the class to be highly learner-centered (Fig 3A). Generally, students and experts agreed that the sections in cluster B were non-learner-centered, while these instructors expressed mixed views of which end of the spectrum their teaching occupied. Three of the four sections in this second cluster had the greatest student enrollments, excepting section 10, which was closer to the average.
Faculty cluster C included three course sections (i.e. 1, 6, and 5), where instructors had low ITTF scores and high CCSF and LC-bel scores (Fig 3B). Cluster C epitomized the conflict in perspectives among groups; while these instructors ranked themselves as highly learner-centered, their students ranked all three of these course sections as non-learner-centered (Fig 3A), and experts rated section 1 as learner-centered yet the other two as transitioning to learner-centered. While section 1 had the largest enrollment (n = 391) and was taught during weekday mornings, sections 5 and 6 had the smallest enrollments (n = 16 and n = 30, respectively) and were taught at more non-traditional times (on weekday evenings and weekends, respectively).
Finally, faculty cluster D included two course sections (i.e. 7 and 9) in which the single instructor who taught both sections had high ITTF scores and low CCSF and LC-bel scores (Fig 3B); these two courses represented the most teacher-centered faculty cluster. Students agreed that these sections were non-learner-centered, and experts scored them as in the low range of the RTOP level 2, just above teacher-centered.
While most course sections within the instructor and student dendrograms could be roughly aligned (as denoted by straight or nearly straight dashed lines connecting Fig 3A and 3B), some misalignments of sections based on instructor primary subscales versus student primary subscales occurred. Expert scoring of the learner-centeredness of these sections also did not necessarily agree with these designations. Additionally, student primary subscale scores of two sections taught by the same instructor were never more similar to one another than they were to scores from other instructors’ sections. For example, though sections 11 and 12 were taught by the same instructor, students perceived section 11 as non-learner-centered and section 12 as learner-centered.
Discussion
How did subscales within and among student instruments compare?
Most of the primary and secondary subscales of the SETLQ positively and linearly correlated, suggesting that students’ positive experiences with learning coincide with their perceived knowledge gained. Entwistle [86] reported similar associations linking classroom experiences with conceptual understanding and knowledge acquired, and noted that the extent of conceptual understanding or knowledge acquired may also be influenced by a student’s decision to approach learning at a deep or surface level. While students’ strategies and motives for learning were orthogonal in our analysis, Deep and Surface approaches fell at opposing ends of both ordination axes (Fig 2). The ETL, KLA, and deep strategies fell together at the learner-centered end of the same axis, axis one. This alignment supports the idea that students who report having more positive classroom experiences and highly valuing course content tend to adopt deeper strategies [87]. The agreement between the two student surveys administered in this study suggests that the R-SPQ-2F and SETLQ can be used in conjunction with one another to capture students’ strategies and motives, experiences in teaching and learning, and knowledge acquired along a learner- to non-learner-centered gradient.
How did subscales within and among instructor instruments compare?
In univariate contrasts, neither primary nor secondary subscales of the ATI significantly related to one another, in agreement with prior studies [88]. Surprisingly, the two subscales of the ALCP did not significantly correlate with one another or with any of the other faculty scales. Affective aspects of teaching, measured by the ALCP, were likely not captured with the other instruments we used in our study. Low reliability of ALCP scales within our sample population, particularly for the non-learner-centered beliefs subscale, suggests this tool is not reliable with our small instructor population (n = 7) and thus may be ineffective for measuring our desired factor, learner-centeredness. The lack of alignment we observed between the ATI and ALCP, at least the learner-centered beliefs scale that was moderately reliable, might suggest there is an additional dimension of learner-centeredness among instructors that the ATI did not capture, and which may reflect affective rather than practical aspects of learner-centered pedagogy.
Is learner-centeredness best represented as a one-dimensional gradient?
We found student perceptions of learner-centeredness in introductory biology classrooms are multidimensional (Fig 2). Most of the variance among class sections, however, is loaded along one gradient, in line with our original hypothesis that perceptions of learner-centeredness would fall on a single-dimensional framework with two opposing ends. In the student survey, the R-SPQ-2F, the two secondary subscale factors (i.e., strategy and motive) became important but separate factors with surface and deep ends, which defined our two ordination gradients. While strategy represents one’s process or plan for learning, and motive represents one’s orientation for learning, it is important to keep in mind that multiple motive-strategy combinations may be possible; for example, a student may have deep motives but surface strategies for learning a topic [89].
We defined Axis 1 as the strategy gradient. Positive experiences of teaching and learning, increased knowledge acquired, deep strategies, and conceptual change describe the learner-centered end of this axis, whereas surface approaches describe the opposing, teacher-centered end (Fig 2). While the various primary and secondary subscales measured in this study did not covary using linear, univariate analyses, many of the subscales did overlay when viewed in multidimensional space; all subscales on Axis 1 (i.e. KLA, ETL, conceptual change, and deep strategies) aligned as predicted (Fig 1). The fact that LC-beliefs did not correlate with these other learner-centered measures may suggest that the ALCP is capturing an additional dimension of learner-centeredness (e.g., perhaps one more focused on affective aspects of instruction). Further, though conceptual change and student-focused comprised the CCSF subscale of the ATI, student-focused did not align with other measures of learner-centeredness. Elsewhere, secondary science teachers who intended to teach toward conceptual change rather than based on information transfer often were not able to implement student-focused practices into their lessons [90], which might explain the disconnect we measured between conceptual change and student-focused of the CCSF in the current study. Moreover, we also cannot overlook the considerable unreliability of the SF subscale in our sample, which likely disrupted any potential underlying trend. The low reliability of the CCSF subscale was almost certainly driven by the very low reliability of the SF portion of the subscale (α = 0.090) rather than the CC portion (α = 0.634).
We labeled Axis 2 as the motive gradient. At one end of this gradient, students expressed deep motives and strategies for learning and increased general learning skills, and experts perceived these classrooms as highly learner-centered. Surface motives defined the opposing end of this gradient (Fig 2). Sambell, Brown, and McDowell [91] noted that even in a learner-centered environment, a student may not adopt deep learning strategies if he or she is not motivated to engage in high-quality learning. However, students in a classroom are reportedly more motivated to succeed if they perceive that they have some control of their learning [92]. Further, alignment of expert and student perceptions of learner-centeredness has also been reported previously, including the correlation of high RTOP scores with student conceptual gains and classroom collaboration in a learner-centered course [53].
In its entirety, Axis 1 (i.e., the strategy gradient) explained substantially more variance in student scores and thus may be more informative of students' perceptions of learner-centeredness than Axis 2 (i.e., the motive gradient). While many have discussed the close relationship between conceptions of learning and approaches to learning [43, 93], others have argued that the interplay among conceptions of learning, approaches to learning, and extraneous factors such as culture is more complicated than a simple causal relationship [94–95]. While our study design cannot support causal inference, the strategies students use correlate with a perceived gain in learning (in the form of ETL and KLA scores), whereas motive is uncoupled from strategy. Though some prior studies have reported that students engaging in deep strategies may not always possess deep motives for learning in a particular course, and vice versa [89], other studies have described a strong coupling of deep intrinsic motives and strategies among undergraduate students [96]. Further, students may perceive their strategies and motives as quite separate entities in the learning process [89], which could be related to the idea that students' conceptions of learning (e.g., motives) may influence their approaches to learning [97–98], whether deep or surface.
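For readers unfamiliar with the ordination underlying Axes 1 and 2, non-metric multidimensional scaling (NMDS; [79]) arranges class sections in a low-dimensional space so that distances between sections preserve the rank order of their pairwise dissimilarities. Our ordination was run in PC-ORD [78]; the sketch below is an analogous, simplified workflow in Python using scikit-learn, with hypothetical data and a Euclidean distance measure chosen for illustration.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical matrix: rows = class sections, columns = subscale means
rng = np.random.default_rng(1)
sections = rng.random((12, 8))

# Pairwise distances among sections
dist = squareform(pdist(sections, metric="euclidean"))

# Two-dimensional non-metric multidimensional scaling
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0, n_init=20)
coords = nmds.fit_transform(dist)  # section scores on Axis 1 and Axis 2
print(coords.shape, round(nmds.stress_, 3))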
Are two dimensions of learner-centeredness enough?
Instructor perceptions of learner-centeredness, as measured by the CCSF and the CC secondary subscale of the ATI, agreed with student perceptions and fell along the strongest gradient of learner-centeredness, the strategy gradient (Fig 2). While instructors in our sample may desire learner-centered outcomes in their classes (i.e., high CC), some do not engage in the pedagogy necessary to ensure a learner-centered class (i.e., high SF). The paradox of conceptual change in the absence of student-focused learning has been discussed by others in the context of limitations of the original conceptual change model, namely that it focused too much on the instructor's role, rather than the student's, in facilitating conceptual change in the classroom [33, 99–101]. A class based largely on conceptual change was perceived by our sampled students as one requiring deep strategies and promoting positive learning experiences and increased knowledge and learning. Interestingly, Trigwell et al. [55] found that student-focused instructors were more likely to encourage deep learning approaches, a finding our data did not support, since high CCSF scores in the current study were driven mainly by the conceptual change secondary subscale rather than the student-focused one. The student-focused subscale was not strongly correlated (r < 0.50) with either student gradient, which may suggest that instructors perceived additional dimensionality that students did not.
Similarly, the two subscales of the ALCP and the ITTF scale of the ATI did not associate with either gradient that students identified as learner-centered. This lack of relationship between the ALCP and the other subscales in this study lends further evidence for a multidimensional framework of learner-centeredness, extending even beyond the 2-D model identified in our student ordination (Fig 2), rather than the one-dimensional framework described by our null hypothesis. The ALCP, for example, describes faculty affect that may represent its own dimension of learner-centeredness, unrelated to the motive and strategy gradients we identified. While prior studies have found strong associations between affective traits of teachers and student outcomes [102], affective measures of instructors have not historically been linked to instructor and student perceptions of learner-centeredness, as was done in this study by using multiple tools to quantify the perceptions of each group.
How did subscales across student, faculty, and expert observer instruments compare?
All univariate and multivariate linear correlations showed no relationships among the student, faculty, and expert instruments, suggesting a disconnect across the subscales of these instruments. However, using data reduction and agglomeration techniques (i.e., ordination and cluster analysis), we were able to identify some overlap in learner-centered perceptions. We found that expert and faculty perceptions mostly aligned based on cluster analysis; that expert and student perceptions aligned along the motive axis of the ordination; and that student and faculty perceptions generally did not agree, with the exception of the conceptual change subscale correlating with the learner-centered strategy end of Axis 1 within the ordination.
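The multivariate comparisons referenced above were Mantel tests [81–82], which correlate two distance matrices built over the same set of class sections and assess significance by permuting one matrix. Our tests were run in PC-ORD [78]; the following is a minimal permutation-based sketch in Python with hypothetical data.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

def mantel(d1, d2, permutations=999, seed=0):
    """Permutation Mantel test on two square distance matrices."""
    idx = np.triu_indices_from(d1, k=1)      # upper-triangle entries only
    r_obs = pearsonr(d1[idx], d2[idx])[0]    # observed matrix correlation
    rng = np.random.default_rng(seed)
    n, hits = d1.shape[0], 0
    for _ in range(permutations):
        p = rng.permutation(n)               # shuffle rows/columns of d1
        if abs(pearsonr(d1[p][:, p][idx], d2[idx])[0]) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)

# Hypothetical distances among 12 sections by student vs. faculty subscales
rng = np.random.default_rng(2)
student = squareform(pdist(rng.random((12, 4))))
faculty = squareform(pdist(rng.random((12, 3))))
print(mantel(student, faculty))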
Consistent with our original hypothesis, guided by work from Ebert-May et al. [37], our univariate contrasts suggested that expert perceptions of learner-centeredness (i.e., RTOP scores) generally did not relate to faculty perceptions, though our cluster analyses suggested that instructors who perceived their practices and beliefs as learner-centered often taught course sections that were more learner-centered according to expert opinion. Additionally, RTOP scores associated only with the weaker of the two student ordination axes, suggesting that experts' perceptions of classroom learner-centeredness aligned more closely with students' perceptions of motives than of strategies. Finally, in agreement with previous work [57], student and faculty perceptions of learner-centeredness were disconnected in all analyses with one exception (i.e., the CC subscale positively associating with the student strategy gradient). Our findings contradict the general agreement between student and instructor perceptions identified by Trigwell et al. [55] using several of the same instruments administered in the current study, though Trigwell and colleagues noted that their small sample size, restricted to one field of study (i.e., physical science), warranted caution in interpreting the results. Likewise, our study included a relatively small sample (n = 12 class sections) restricted to a single discipline (i.e., biology), which may also contribute to the disagreement between our work and theirs [55].
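The cluster analyses described here grouped class sections hierarchically using Ward's minimum-variance method [83]. A brief illustrative sketch in Python with a hypothetical section-by-subscale matrix follows (our analyses used PC-ORD [78]).

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical section-by-subscale matrix (values are illustrative only)
rng = np.random.default_rng(3)
sections = rng.random((12, 8))

# Ward's minimum-variance hierarchical clustering of class sections
tree = linkage(sections, method="ward")

# Cut the dendrogram into, e.g., three groups of sections
groups = fcluster(tree, t=3, criterion="maxclust")
print(groups)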
Instructors in our study appear to perceive additional dimensions of learner-centeredness that students do not (i.e., those measured by the subscales of the ALCP), perhaps dimensions based more on affective aspects of teaching and learning. Sutton and Wheatley [103] discuss emotion as a process relevant to teaching, including how teachers' emotional expression and subjective tendencies may vary during instruction. The ALCP may capture this more affective dimension of learner-centeredness, though this dimension could not be adequately detected or aligned with other factors in the current study.
Our finding that RTOP did not associate with the strategy axis of the ordination (i.e., Axis 1) suggests that student strategies do not relate to observable classroom environment and behaviors. As mentioned above, students engaging in deep strategies may not always possess deep motives for learning in a particular course [89]. Perhaps the deep motives that many students fostered in the current study were influenced by positive aspects of the classroom environment such as group discussions with peers and a supportive instructor [104], though these motives may not have necessarily reflected students’ strategies to learn biology.
Are perceptions of learner-centeredness biased by external factors?
In our sample, we found that the combination of low enrollment (i.e., 70 or fewer students) and high RTOP scores (i.e., greater than 40) was associated with sections viewed as highly learner-centered by both students and faculty. However, in sections that experts and faculty rated as highly learner-centered but that either had very high enrollment (i.e., greater than 150 students) or met at non-traditional times (evenings or weekends), students rated the instruction as teacher-centered. Differential student success has elsewhere been tied to course scheduling; specifically, students in morning classes outperform students in non-morning classes [105]. Likewise, college science instructors often report anecdotally that class size limits their ability to implement learner-centered or inquiry-based techniques in lecture [106]. Our data empirically suggest that even if a class looks and feels learner-centered to faculty and experts, external barriers (i.e., time of day, class size) may limit students' perception of it as such.
Prior studies have concluded that learner-centered practices can be implemented effectively in large enrollment science courses [12, 33, 107]. However, our findings demonstrate that while faculty and experts perceive some larger enrollment course sections as learner-centered, students fail to perceive this learner-centeredness when enrolled in these large classes themselves. The tendency of students to perceive larger classes as more teacher-centered in the current study is similar to the trend described by Ebert-May et al. [37] and Murray and Macdonald [108], though in those prior studies it was instructors and experts, rather than students, who perceived larger classes as more teacher-centered.
Limitations
Though we assumed that the fifteen class sections and nine instructors in our study were representative of typical undergraduate biology classrooms, our findings should be generalized with caution. We conducted our study at a single institution with nine instructors who collectively taught fifteen sections of the same non-majors introductory biology course. Expanding to other institutions, to science and non-science courses, and to a wider variety of instructors could yield more generalizable patterns. Further, although none of the student, instructor, or expert instruments used in our study included items specific to the biological sciences, our study was conducted exclusively in biology courses; findings may differ if this research were replicated in other disciplines within and beyond the sciences.
Future work could include additional psychometric analyses, especially of the instruments or scales we found to be unreliable, or qualitative interviews to provide additional validity evidence for each instrument administered. The absence of clear patterns in some of our analyses may reflect issues with the instruments themselves rather than a real trend, or lack thereof. Future directions of this research should also consider interventions to better align perceptions of learner-centeredness in the biology classroom, particularly in large or non-traditionally timed courses.
Conclusions
Our sample of introductory biology classrooms indicates that learner-centeredness is multidimensional, as seen in our ordination, and more complex than a simple dichotomous learner- versus teacher-centered relationship. The alignment of student, instructor, and expert perceptions of learner- or teacher-centeredness was generally inconsistent across sections of this non-majors biology course. Pairwise univariate correlations suggest that few significant relationships exist between individual scales, and Mantel tests likewise indicate that the perspectives measured by several instruments do not significantly correlate across students, instructors, and experts. Our ordination, however, highlights that student and expert perceptions of learner-centeredness align based on motives. Lastly, cluster analyses separated student and instructor data into meaningful groups, suggesting that faculty and expert perceptions align in most contexts. Broadly, expert opinions tended to agree with instructor and student perceptions independently, while students' perceptions mostly differed from those of faculty. Unfortunately, the classroom experience for students can be negatively influenced by external factors, including enrollment size and time of lecture. Perceptions of learner-centeredness in the biology classroom are complex, and can be more completely measured and interpreted with more than one instrument. Our findings encourage instructors to be cognizant that the approaches they employ in the classroom may not be interpreted as learner-centered in the same manner by students and external observers, particularly when additional course factors such as enrollment and scheduling may encourage negative perceptions of learner-centered practices.
Supporting information
S1 Table. Pearson correlations between secondary instructor and student subscales and RTOP across all sections.
https://doi.org/10.1371/journal.pone.0200524.s001
(DOCX)
Acknowledgments
The authors acknowledge the numerous undergraduate researchers who helped collect data, the nine participating instructors, and hundreds of participating students. We also thank Barbara McCombs for her permission to use the ALCP version available in her publications.
References
- 1. Freeman S, Eddy SL, McDonough M, Smith MK, Okoroafor N, Jordt H, et al. Active learning increases student performance in science, engineering, and mathematics. Proc Natl Acad Sci. 2014 Jun 10;111(23):8410–5. pmid:24821756
- 2. McKeachie W, Svinicki M. McKeachie's Teaching Tips. Cengage Learning; 2013.
- 3. Prince M. Does active learning work? A review of the research. J Eng Educ. 2004 Jul 1;93(3):223–31.
- 4. Fahraeus A. Research supports learner-centered teaching. J Scholarsh Teach Learn. 2013 Aug 6;13(4):126–31.
- 5. Andrews TM, Leonard MJ, Colgrove CA, Kalinowski ST. Active learning not associated with student learning in a random sample of college biology courses. CBE Life Sci Educ. 2011;10(4):394–405. pmid:22135373
- 6. Hake RR. Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. Am J Phys. 1998 Jan;66(1):64–74.
- 7. Bransford JD, Brown AL, Cocking RR, editors, and Committee on Developments in the Science of Learning, National Research Council. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academies Press; 1999.
- 8. Crouch CH, Mazur E. Peer instruction: Ten years of experience and results. Amer J Phys. 2001 Sep;69(9):970–7.
- 9. Shepard LA. The role of assessment in a learning culture. Educ Res. 2000 Oct;29(7):4–14.
- 10. White BY, Frederiksen JR. Inquiry, modeling, and metacognition: Making science accessible to all students. Cogn Instr. 1998 Mar 1;16(1):3–118.
- 11. Holt EA, Young C, Keetch J, Larsen S, Mollner B. The greatest learning return on your pedagogical investment: Alignment, assessment or in-class instruction? PLoS ONE. 2015 Sep 4;10(9):e0137446. pmid:26340659
- 12. Armbruster P, Patel M, Johnson E, Weiss M. Active learning and student-centered pedagogy improve student attitudes and performance in introductory biology. CBE Life Sci Educ. 2009;8(3):203–213. pmid:19723815
- 13. Knight JK, Wood WB. Teaching more by lecturing less. Cell Biol Educ. 2005 Dec 21;4(4):298–310. pmid:16341257
- 14. Casagrand J, Semsar K. Redesigning a course to help students achieve higher-order cognitive thinking skills: from goals and mechanics to student outcomes. Adv Physiol Educ. 2017 Jun 1;41(2):194–202. pmid:28377433
- 15. Walker JD, Cotner SH, Baepler PM, Decker MD. A delicate balance: integrating active learning into a large lecture course. CBE Life Sci Educ. 2008 Dec 21;7(4):361–7. pmid:19047423
- 16. Brownell SE, Kloser MJ, Fukami T, Shavelson R. Undergraduate biology lab courses: comparing the impact of traditionally based “cookbook” and authentic research-based courses on student lab experiences. J Coll Sci Teach. 2012;41(4):36–45.
- 17. Miller CJ, Metz MJ. A comparison of professional-level faculty and student perceptions of active learning: its current use, effectiveness, and barriers. Adv Physiol Educ. 2014 Sep 1;38(3):246–52. pmid:25179615
- 18. McCombs BL. Assessing the Role of Educational Technology in the Teaching and Learning Process: A Learner-Centered Perspective. 2000. The Secretary’s Conference on Educational Technology, 2000: Shaping the Future: Alexandria, VA.
- 19. Ernst H, Colthorpe K. The efficacy of interactive lecturing for students with diverse science backgrounds. Adv Physiol Educ. 2007 Jan;31(1):41–4. pmid:17327581
- 20. Seymour E, Hewitt E. Talking About Leaving: Factors Contributing to High Attrition Rates Among Science, Mathematics, and Engineering Undergraduate Majors. Boulder, CO: Bureau of Sociological Research. 1997.
- 21. Chen X, Soldner M. STEM Attrition: College Students' Paths into and out of STEM Fields. Statistical Analysis Report. NCES 2014–001. National Center for Education Statistics. 2013 Nov.
- 22. Covill AE. College students' perceptions of the traditional lecture method. Coll Stud J. 2011;45:92–101.
- 23. Fox-Cardamone L, Rue S. Students’ responses to active-learning strategies: An examination of small-group and whole-class discussion. Res Educ Reform. 2003;8(3):3–15.
- 24. Tsang A, Harris DM. Faculty and second-year medical student perceptions of active learning in an integrated curriculum. Adv Physiol Educ. 2016 Dec 1;40(4):446–53. pmid:27697958
- 25. Herreid CF, Schiller NA. Case studies and the flipped classroom. J Coll Sci Teach. 2013 May 1;42(5):62–6.
- 26. Felder RM, Brent R. Navigating the bumpy road to student-centered instruction. Coll Teach. 1996 Apr 1;44(2):43–7.
- 27. Cooper KM, Brownell SE. Coming out in class: Challenges and benefits of active learning in a biology classroom for LGBTQIA students. CBE Life Sci Educ. 2016;15(3):ar37. pmid:27543636
- 28. Watters DJ, Watters JJ. Approaches to learning by students in the biological sciences: Implications for teaching. Int J Sci Educ. 2007 Jan 15;29(1):19–43.
- 29. Weimer M. Learner-Centered Teaching: Five Key Changes to Practice. John Wiley & Sons; 2002 Oct 16.
- 30. Anton M. The discourse of a learner‐centered classroom: Sociocultural perspectives on teacher‐learner interaction in the second‐language classroom. Mod Lang J. 1999;83(3):303–318.
- 31. Henderson R. Classroom pedagogies, digital literacies and the home-school digital divide. Int J Pedagog Learn. 2011 Aug 1;6(2):152–61.
- 32. Henderson R, editor. Teaching Literacies in the Middle Years: Pedagogies and Diversity. Oxford University Press; 2012.
- 33. Allen D, Tanner K. Infusing active learning into the large-enrollment biology class: seven strategies, from the simple to complex. Cell Biol Educ. 2005;4(4):262–268. pmid:16344858
- 34. Brownell SE, Tanner KD. Barriers to faculty pedagogical change: Lack of training, time, incentives, and… tensions with professional identity?. CBE Life Sci Educ. 2012;11(4):339–346. pmid:23222828
- 35. Modell HI, Michael JA. Promoting active learning in the life science classroom: defining the issues. Ann N Y Acad Sci. 1993 Dec 1;701(1):1–7.
- 36. Eagan K, Stolzenberg EB, Lozano JB, Aragon MC, Suchard MR, Hurtado S. Undergraduate teaching faculty: The 2013–2014 HERI faculty survey. Higher Education Research Institute, UCLA. 2014.
- 37. Ebert-May D, Derting TL, Hodder J, Momsen JL, Long TM, Jardeleza SE. What we say is not what we do: effective evaluation of faculty professional development programs. BioScience. 2011 Jul 1;61(7):550–8.
- 38. Guskey TR. Professional development and teacher change. Teachers and Teaching. 2002 Aug 1;8(3):381–91.
- 39. Huberman M. ECRI, Masepa, North Plains: Case study. Andover, MA: The Network. 1981.
- 40. Dall’Alba G, Sandberg J. Unveiling professional development: A critical review of stage models. Review of Educational Research. 2006 Sep;76(3):383–412.
- 41. McCombs BL, Quiat M. What makes a comprehensive school reform model learner centered? Urban Educ. 2002 Sep;37(4):476–96.
- 42. Daniels DH, Kalkman DL, McCombs BL. Young children's perspectives on learning and teacher practices in different classroom contexts: Implications for motivation. Early Educ Dev. 2001 Apr 1;12(2):253–73.
- 43. Biggs J, Kember D, Leung DY. The revised two‐factor study process questionnaire: R‐SPQ‐2F. Br J Educ Psychol. 2001;71(1):133–149.
- 44. Ginns P, Ellis R. Quality in blended learning: Exploring the relationships between on-line and face-to-face teaching and learning. Internet High Educ. 2007 Dec 31;10(1):53–64.
- 45. Skogsberg K, Clump M. Do psychology and biology majors differ in their study processes and learning styles? Coll Stud J. 2003 Mar 1;37(1):27–34.
- 46. Tiwari A, Lam D, Yuen KH, Chan R, Fung T, Chan S. Student learning in clinical nursing education: Perceptions of the relationship between assessment and learning. Nurse Educ Today. 2005 May 31;25(4):299–308. pmid:15896415
- 47. O'Neill GM, Guerin S. Working with the challenge of designing and implementing a stand-alone learning to learn module in a large arts programme. AISHE-J: The All Ireland Journal of Teaching and Learning in Higher Education. 2015 Oct 31;7(3).
- 48. Tudor J, Penlington R, McDowell L. Perceptions and their influences on approaches to learning. Eng Educ. 2010 Dec 1;5(2):69–79.
- 49. Crick R, McCombs B, Haddon A, Broadfoot P, Tew M. The ecology of learning: factors contributing to learner‐centred classroom cultures. Research Papers in Education. 2007;22(3):267–307.
- 50. Trigwell K. Approaches to teaching design subjects: a quantitative analysis. Art Des Commun High Educ. 2002 Jul 1;1(2):69–80.
- 51. Weinberger E, McCombs BL. Applying the LCPs to high school education. Theory Pract. 2003 May 1;42(2):117–26.
- 52. MacIsaac D, Sawada D, Falconer K. Using the Reformed Teaching Observation Protocol (RTOP) as a Catalyst for Self-Reflective Change in Secondary Science Teaching. 2001. Paper presented at the meeting of the American Association of Physics Teachers: Rochester, NY.
- 53. MacIsaac D, Falconer K. Reforming physics instruction via RTOP. Phys Teach. 2002 Nov;40(8):479–85.
- 54. Sawada D, Piburn MD, Judson E, Turley J, Falconer K, Benford R, et al. Measuring reform practices in science and mathematics classrooms: The reformed teaching observation protocol. Sch Sci Math. 2002 Oct 1;102(6):245–53.
- 55. Trigwell K, Prosser M, Waterhouse F. Relations between teachers' approaches to teaching and students' approaches to learning. High Educ. 1999 Jan 1;37(1):57–70.
- 56. Gibbs G, Coffey M. The impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students. Active Learning in Higher Education. 2004 Mar;5(1):87–100.
- 57. Fraser BJ. Research on classroom and school climate. In: Gabel D, editor. Handbook of Research on Science Teaching and Learning. New York: Macmillan; 1994. pp. 493–541.
- 58. Baruch Y. Response rate in academic studies-A comparative analysis. Hum Relat. 1999;52(4):421–438.
- 59. McCombs BL, Miller L. Learner-centered classroom practices and assessments: Maximizing student motivation, learning, and achievement. Corwin Press; 2007.
- 60. McCombs BL. The Assessment of Learner-Centered Practices (ALCP): Tools for teacher reflection, learning, and change. Denver, CO: University of Denver Research Institute. 1999.
- 61. McCombs BL, Lauer PA. Development and validation of the learner-centered battery: Self-assessment tools for teacher reflection and professional development. The Professional Educator. 1997;20(1):1–21.
- 62. McCombs BL. The case for Learner-Centered Practices: Introduction and rationale. April 2004. Paper presented in the interactive symposium “The Case for Learner-Centered Practices Across the K-12 and College Levels” at the Annual Meeting of the American Educational Research Association: San Diego, CA.
- 63. Trigwell K, Prosser M. Development and use of the approaches to teaching inventory. Educ Psychol Rev. 2004 Dec 7;16(4):409–24.
- 64. Marton F, Hounsell D, Entwistle N. The Experience of Learning. Edinburgh: Scottish Academic Press; 1997.
- 65. Prosser M, Trigwell K. Confirmatory factor analysis of the approaches to teaching inventory. Br J Educ Psychol. 2006 Jun 1;76(2):405–19.
- 66. Zhang LF. Approaches and thinking styles in teaching. J. Psychol. 2001 Sep 1;135(5):547–61. pmid:11804007
- 67. Fryer LK, Ginns P, Walker RA, Nakao K. The adaptation and validation of the CEQ and the R‐SPQ‐2F to the Japanese tertiary environment. Br J Educ Psychol. 2012 Dec 1;82(4):549–63.
- 68. Stes A, De Maeyer S, Van Petegem P. Examining the cross-cultural sensitivity of the Revised Two-factor Study Process Questionnaire (R-SPQ-2F) and validation of a Dutch version. PLoS ONE. 2013 Jan 16;8(1):e54099. pmid:23342085
- 69. Entwistle N, McCune V, Hounsell J. Approaches to studying and perceptions of university teaching-learning environments: Concepts, measures and preliminary findings. Occasional report. 2002 Sep;1.
- 70. Oliva FC, Pastor MS, Picos AP. Cuestionario sobre metodología y evaluación en formación inicial en educación física [Questionnaire on Methodology and Assessment in Physical Education Initial Training]. Revista Internacional de Medicina y Ciencias de la Actividad Física y del Deporte. 2015 Oct;(58):245–67.
- 71. Bolkan S, Goodboy AK, Griffin DJ. Teacher leadership and intellectual stimulation: Improving students' approaches to studying through intrinsic motivation. Commun Res Rep. 2011 Oct 1;28(4):337–46.
- 72. McCombs BL. Defining Tools for Teacher Reflection: The Assessment of Learner-Centered Practices (ALCP). 2003. Paper presented at the Annual Meeting of the American Educational Research Association: Chicago, IL.
- 73. Piburn M, Sawada D. Reformed Teaching Observation Protocol (RTOP) Reference Manual. Technical Report. 2000.
- 74. Amrein-Beardsley A, Popp SEO. Peer observations among faculty in a college of education: investigating the summative and formative uses of the Reformed Teaching Observation Protocol (RTOP). J Pers Eval Educ. 2012;24(1):5–24.
- 75. Marshall JC, Smart J, Lotter C, Sirbu C. Comparative analysis of two inquiry observational protocols: Striving to better understand the quality of teacher‐facilitated inquiry‐based instruction. Sch Sci Math. 2011 Oct 1;111(6):306–15.
- 76. Sawada D. Reformed Teacher Education in Science and Mathematics: An Evaluation of the Arizona State Collaborative for Excellence in the Preparation of Teachers. Arizona State University Document Production Services. 2003.
- 77. IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY: IBM Corp. 2013.
- 78. McCune B, Mefford MJ. PC-ORD. Multivariate Analysis of Ecological Data. Version 7. MjM Software Design, Gleneden Beach, Oregon, U.S.A.; 2016.
- 79. Kruskal JB. Nonmetric multidimensional scaling: a numerical method. Psychometrika. 1964 Jun 27;29(2):115–29.
- 80. Mather PM. Computational methods of multivariate analysis in physical geography. John Wiley & Sons; 1976.
- 81. Mantel N. The detection of disease clustering and a generalized regression approach. Cancer research. 1967 Feb 1;27(2 Part 1):209–20. pmid:6018555
- 82. Mantel N, Valand RS. A technique of nonparametric multivariate analysis. Biometrics. 1970 Sep 1:547–58.
- 83. Ward JH Jr. Hierarchical grouping to optimize an objective function. J Amer Stat Assoc. 1963 Mar 1;58(301):236–44.
- 84. Kaufman L, Rousseeuw PJ. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons; 2009 Sep 25.
- 85. Theobald R, Freeman S. Is it the intervention or the students? Using linear regression to control for student characteristics in undergraduate STEM education research. CBE Life Sci Educ. 2014 Mar 20;13(1):41–8. pmid:24591502
- 86. Entwistle N. Taking Stock: Teaching and Learning Research in Higher Education. In: Review Prepared for an International Symposium on Teaching and Learning Research in Higher Education. Guelph, Ontario; 2008 Apr 25.
- 87. Floyd KS, Harrington SJ, Santiago J. The effect of engagement and perceived course value on deep and surface learning strategies. Informing Science: The International Journal of an Emerging Transdiscipline. 2009 Jun 12;12:181–90.
- 88. Lasry N, Charles E, Whittaker C, Dedic H, Rosenfield S. Changing classroom designs: Easy; Changing instructors' pedagogies: Not so easy… In: AIP Conference Proceedings; 2013 Jan 22. pp. 238–241.
- 89. Chiou GL, Liang JC, Tsai CC. Undergraduate students’ conceptions of and approaches to learning in biology: A study of their structural models and gender differences. Int J Sci Educ. 2012;34(2):167–195.
- 90. Tabachnick BR, Zeichner KM. Idea and action: Action research and the development of conceptual change teaching of science. Sci Educ. 1999 May 1;83(3):309–22.
- 91. Sambell K, Brown S, McDowell L. But is it fair?: An exploratory study of student perceptions of the consequential validity of assessment. Stud Educ Eval. 1997;23(4):349–71.
- 92. Pintrich PR. A motivational science perspective on the role of student motivation in learning and teaching contexts. J Educ Psychol. 2003 Dec;95(4):667.
- 93. Dart BC, Burnett PC, Purdie N, Boulton-Lewis G, Campbell J, Smith D. Students' conceptions of learning, the classroom environment, and approaches to learning. J Educ Res. 2000 Mar 1;93(4):262–70.
- 94. Lee MH, Johanson RE, Tsai CC. Exploring Taiwanese high school students' conceptions of and approaches to learning science through a structural equation modeling analysis. Sci Educ. 2008 Mar 1;92(2):191–220.
- 95. Tsai CC. Conceptions of learning science among high school students in Taiwan: A phenomenographic analysis. Int J Sci Educ. 2004 Nov 1;26(14):1733–50.
- 96. Richardson JC, Newby T. The role of students' cognitive engagement in online learning. Am J Distance Educ. 2006 Mar 1;20(1):23–37.
- 97. Edmunds R, Richardson JT. Conceptions of learning, approaches to studying and personal development in UK higher education. Br J Educ Psychol. 2009 Jun 1;79(2):295–309.
- 98. Marton F, Saljo R. Approaches to learning. In: Marton F, Hounsell D, Entwistle NJ, editors. The Experience of Learning: Implications for Teaching and Studying in Higher Education. 3rd (Internet) edition; 2005.
- 99. Beeth ME. Teaching for conceptual change: Using status as a metacognitive tool. Sci Educ. 1998;82(3):343–356.
- 100. Martin BL, Mintzes JJ, Clavijo IE. Restructuring knowledge in biology: cognitive processes and metacognitive reflections. Int J Sci Educ. 2000 Mar 1;22(3):303–23.
- 101. Wandersee J, Mintzes J, Novak J. Research on alternative conceptions in science. In: Gabel D, editor. Handbook of Research on Science Teaching and Learning. New York: Macmillan; 1994.
- 102. Roorda DL, Koomen HM, Spilt JL, Oort FJ. The influence of affective teacher–student relationships on students’ school engagement and achievement: A meta-analytic approach. Review of Educational Research. 2011 Dec;81(4):493–529.
- 103. Sutton RE, Wheatley KF. Teachers' emotions and teaching: A review of the literature and directions for future research. Educ Psychol Rev. 2003 Dec 1;15(4):327–58.
- 104. Rocca KA. Student participation in the college classroom: An extended multidisciplinary literature review. Commun Educ. 2010 Apr 1;59(2):185–213.
- 105. Kantartzi SK, Allen S, Lodhi K, Grier IV RL, Kassem MA. Study of factors affecting students’ performance in three science classes: General biology, botany, and microbiology at Fayetteville State University. Atlas J Sci Educ. 2010;1(1):13–8.
- 106. Brown PL, Abell SK, Demir A, Schmidt FJ. College science teachers' views of classroom inquiry. Sci Educ. 2006;90(5):784–802.
- 107. Deslauriers L, Schelew E, Wieman C. Improved learning in a large-enrollment physics class. Science. 2011 May 13;332(6031):862–4. pmid:21566198
- 108. Murray K, Macdonald R. The disjunction between lecturers' conceptions of teaching and their claimed educational practice. High Educ. 1997 Apr 1;33(3):331–49.