Meeting the Challenge of Interdisciplinary Assessment

 

Introduction to Interdisciplinary Education

“As the pace of scientific discovery and innovation accelerates, there is an urgent cultural need to reflect thoughtfully about these epic changes and challenges. The challenges of the twenty-first century require new interdisciplinary collaboration, which place questions of meanings and values on the agenda. We need to put questions about the universe and the universal back at the heart of university.” William Grassie (2013)

As the world becomes more complex, given the rapid expansion of technology, the changing nature of warfare, and rising energy and environmental crises, the value of an interdisciplinary education is increasingly obvious. Social, political, economic, and scientific issues are so thoroughly interconnected that they cannot be explored productively, either by experts or students, within clear-cut disciplinary boundaries.

Despite this fact, several problems arise when institutions try to incorporate interdisciplinary education into their programs. Boix Mansilla (2005) notes that the assessment of interdisciplinary work by students is of great concern. She explains that because faculty are often discipline-specific experts, they are unfamiliar with disciplines outside their realm of expertise and have difficulty defining interdisciplinary work. She goes on to explain that, as a consequence, “the issue [of standards] is marred by controversies over the purposes, methods, and most importantly, the content of proposed assessments” (2005, 16).

This paper offers one solution to this dilemma. The following analysis explores the current state of interdisciplinary education, both in academia broadly, and specifically, at West Point through its interdisciplinary Core Program. The sections that follow will highlight the current issues inherent in interdisciplinary education, define interdisciplinary education objectives, and finally, explain the adaptable, multi-functional, interdisciplinary rubric being implemented at the United States Military Academy (USMA), a rubric designed to resolve many of the issues interdisciplinary educators encounter.

The Current State of Interdisciplinary Education in Academia

“The demand is clear. Whether we try to take a stance on the stem cell research controversy, to interpret a work of art in a new medium, or to assess the reconstruction of Iraq, a deep understanding of contemporary life requires knowledge and thinking skills that transcend the traditional disciplines. Such understanding demands that we draw on multiple sources of expertise to capture multi-dimensional phenomena, to produce complex explanations, or to solve intricate problems. The educational corollary of this condition is that preparing young adults to be full participants in contemporary society demands that we foster their capacity to draw on multiple sources of knowledge to build deep understanding.” Veronica Boix Mansilla (2005, 14)

Several current studies, including some that propose evaluation measures, attempt to define the essence of interdisciplinary education. The above quote from Boix Mansilla’s “Assessing Student Work at Disciplinary Crossroads” highlights the challenge educators are experiencing in preparing students to meet today’s most pressing problems. This paper will not attempt to address the structure of interdisciplinary education as an institutional convention, but only to define the essential skills and capacities that a student with interdisciplinary understanding would demonstrate. These definitions are essential to understanding and creating a framework for interdisciplinary learning, which is arguably the first step in adequately integrating it into educational programs. Interdisciplinarity is a difficult construct to quantify, and many educators have been unable to frame a definition of it or to assess it in student work. As a consequence of these and other challenges, only a limited number of colleges and universities have implemented formal interdisciplinary programs at the institutional level.

Several analyses (Boix Mansilla 2005; Boix Mansilla and Dawes Duraising 2007; Rhoten et al. 2008; Stowe and Eder 2002) address the key issues surrounding interdisciplinary learning in higher education and offer proposals on how to address them, starting with the definition of the term “interdisciplinary.” One definition of interdisciplinary understanding is “the capacity to integrate knowledge and modes of thinking drawn from two or more disciplines to produce a cognitive advancement—for example, explaining a phenomenon, solving a problem, creating a product, or raising a new question—in ways that would have been unlikely through single disciplinary means” (Boix Mansilla 2005, 16; Boix Mansilla and Dawes Duraising 2007, 216). A definition is particularly important because “a clear articulation of what counts as quality interdisciplinary work, and how such quality might be measured, is needed if academic institutions are to foster in students deep understanding of complex problems and evaluate the impact of interdisciplinary education initiatives” (Boix Mansilla 2005, 16). An agreed-upon definition is currently lacking in academia, and this has resulted in inconsistent grading, teaching, and learning in interdisciplinary education.

One study of well-regarded and established interdisciplinary programs in the U.S., which included Bioethics at the University of Pennsylvania, Interpretation Theory at Swarthmore College, Human Biology at Stanford University, and the NEXA Program at San Francisco State University, involved “69 interviews, 10 classroom observations, 40 samples of student work, and assorted program documentation” (Boix Mansilla and Dawes Duraising 2007, 4). The data were gathered in one-hour to 90-minute semi-structured interviews with faculty and students inquiring about the manner of assessment used in their respective programs. Samples of student work were then used to illustrate what each institution viewed as meeting the definition of interdisciplinarity. From the interviews and student examples, the authors concluded that there are three core dimensions to student interdisciplinary work: disciplinary grounding, advancement through integration, and critical awareness (Boix Mansilla 2005; Boix Mansilla and Dawes Duraising 2007). These core elements are represented graphically in Figure 1.

Figure 1. Three Interrelated Criteria for Assessing Students’ Interdisciplinary Work (Boix Mansilla and Dawes Duraising 2007, 223)

The first core element in Figure 1, disciplinary grounding, calls for strong base knowledge in individual disciplines. During the interviews, 75 percent of the interviewed faculty felt that strong subject-area knowledge was necessary for interdisciplinary education that did not sacrifice depth in exchange for breadth. However, the authors noted that the key to successful disciplinary grounding also included the thoughtful selection of which disciplines to use and how to use them. Advancement through integration, the second principle, is universal in all student work in the sense that students are supposed to learn from the work they do; however, what sets it apart in interdisciplinary education is that “students advance their understanding by moving to a new conceptual model, explanation, insight, or solution” (Boix Mansilla and Dawes Duraising 2007, 225). In the study, 68 percent of faculty identified advancement through integration as a necessary element in interdisciplinary understanding and as the quintessential element for the advancement of student understanding. However, various programs and their students interpret this core element differently. For example, some students in the NEXA Program at San Francisco State University strive for complex explanations, which evaluate the extent to which disciplines are interwoven to create a broad picture of how interconnected different disciplines are on a given topic. Other students in the same program prefer to use aesthetic reinterpretations to connect the literary, historical, and social elements of a given topic. Still other students, such as those in the Bioethics program at the University of Pennsylvania, choose to focus on the development of practical solutions based on the use of multi-disciplinary ideas. The final principle from Figure 1, critical awareness, refers to student work being able to withstand examination and criticism, and it explicitly calls for evidence of student reflection. Student work needs to “exhibit clarity of purpose and offer evidence of reflective self-critique” (Boix Mansilla and Dawes Duraising 2007, 228).

Rhoten et al. (2008) also conducted a study focused on the similarities and differences between the learning outcomes of liberal arts and interdisciplinary programs. For this particular study, the researchers used student and faculty surveys, interviews, and tests to gather data for their analysis. The authors explain that most liberal arts programs “must develop student capacities to integrate or synthesize disciplinary knowledge and modes of thinking,” which is very similar to the type of synthesis that is expected from an interdisciplinary curriculum (Rhoten et al. 2008, 3–4). The main purpose behind this study was to identify the parallels between interdisciplinary and liberal arts programs, in order to show how a program can be made more interdisciplinary without changing its structure or content. Table 1 shows a summary of several parallels between a liberal arts education and an interdisciplinary education.

Table 1. Comparison of Liberal Arts Education and Interdisciplinary Education Objectives (Rhoten et al. 2008)

Rhoten et al. (2008) also analyzed empirical data to draw out trends across the “222 institutions considered ‘Baccalaureate College-Liberal Arts institutions’ under the 2000 Carnegie Classification system,” examining whether the interdisciplinary programs offered were majors, minors, optional courses, or required courses (Rhoten et al. 2008, 5). In general, “interdisciplinary programs are still ‘personally driven,’ whereas departments are ‘self-perpetuating’” (Rhoten et al. 2008, 6). “Personally driven” simply means that if students want to broaden their subject-area exposure they must do so on their own. “Self-perpetuating” refers to the fact that departments within an institution need to act in their own self-interest in order to survive and thrive; therefore they tend to avoid interdisciplinary efforts. Interdisciplinary education does not support the mission of individual departments, and if students seek it, they must do so on their own initiative. One would therefore conclude that the only way to truly incorporate interdisciplinary education into schools is to make it institutionally mandated, at least for the core curriculum that all students are required to take.

Schools should strive to integrate interdisciplinary efforts into their institutions because “interdisciplinarity breeds innovation” (Rhoten et al. 2008, 12). Although such innovation carries tremendous benefits, the difficulty of measuring student and educator success was again identified as a barrier. Most schools that are already making efforts towards interdisciplinarity believe that they are somewhat successful, according to Rhoten et al. (2008). However, in order to mark and measure success, and to continually improve interdisciplinary programs in schools, the authors propose a value-added assessment, which is intended to provide an “assessment regime that measures growth that has occurred as a result of participation in the institution or academic program” (Rhoten et al. 2008, 14). Moreover, “some cross-cutting goals that are embedded especially in interdisciplinary studies, such as life-long learning, curiosity, creative thinking, synthesis, and integration, have acquired the reputation of being ineffable and, correspondingly, unassessable” (Stowe and Eder 2002, 83). This common problem was addressed by Stowe and Eder (2002), who identified several assessment measures that are placed on a continuum, as seen in Figure 2. These measures can also be used to better define interdisciplinary standards by providing a multi-tiered adjustable scale that can help to quantify the assessment of student work based on an instructor’s desired outcomes.

Figure 2. Perspectives on Assessment (Stowe and Eder 2002, 84)

Stowe and Eder (2002) state that using a rubric to define and measure interdisciplinary work would improve the “apparently subjective nature” of interdisciplinary assessment. They further recommend the rubric as a “visible standard—a scoring guide—that allows the assessor and the public, for that matter, to recognize expectations and make increasingly fine distinctions about the quantity and quality of student learning” (96). They expand on their recommendation by noting that assessment must be focused on both improving interdisciplinary learning and “improving student learning,” and should be “embedded within larger systems… and create linkages and enhance coherence within and across the curriculum” (80). Without cooperation across different programs, it is impossible to foster an interdisciplinary learning environment.

An example of such cooperation can be seen at USMA, where several academic departments have moved towards a cooperative environment focused on interdisciplinary learning (Elliott et al. 2013). This paper will focus on the education of the USMA Class of 2016 from their freshman year, when energy conservation and the NetZero project (a Department of the Army initiative for several Army posts, including West Point, to produce as much energy as they consume by the year 2020) were adopted as themes to infuse interdisciplinary thinking into the core courses. The five Student Learning Outcomes from this effort include four individual discipline-focused outcomes as well as a fifth, which aims to “develop an interdisciplinary perspective that supports knowledge transfer across disciplinary boundaries and supports innovative solutions to complex energy problems/projects” (Elliott et al. 2013, 33). In a larger sense, this objective illustrates that interdisciplinary education addresses the mission of USMA and the Army’s recognition, expressed by General Martin Dempsey, Chairman of the U.S. Joint Chiefs of Staff, that the “development of adaptive leaders who are comfortable operating in ambiguity and complexity will increasingly be our competitive advantage against future threats to our Nation” (Elliott et al. 2013, 30).

Framing the Problem

The Academy produces graduates who can think dynamically in the ever-changing world described in the quotes from Grassie and Boix Mansilla at the beginning of this article. At West Point, this is accomplished by taking not only a multi-disciplinary approach to education, but also an interdisciplinary one. The Academy’s Core Curriculum comprises the required classes that all cadets must complete or validate; it does not include any classes required for a cadet’s major. Additional non-academic requirements include three tactics courses and seven physical education courses. The interdisciplinary aspect is a new addition to the curriculum. In recent years, several committees have recommended promoting interdisciplinary approaches to better meet both the Academy’s and the Army’s goals as outlined in Elliott et al. (2013).

To achieve these goals, several academic departments involved in the Core Curriculum developed an interdisciplinary program for the entering plebe class, the Class of 2016. During the first week of classes, freshmen wrote an essay in their Introduction to Mathematical Modeling course (MA103) about how they would use concepts from different courses to tackle the challenges that NetZero and the alarming problem of energy consumption in the Army pose to West Point. After 30 instruction sessions (approximately thirteen weeks), the freshmen revised these essays in their Composition course (EN101), this time using the knowledge acquired throughout the semester in the English course and in the other courses they were taking. Faculty from the Department of Mathematical Sciences and the Department of English and Philosophy evaluated these revised essays from different perspectives to emphasize the importance and relevance of multiple disciplines. This led to the realization that it was impossible to adequately compare the essays, since the assignments, rubrics, and faculty were not consistent and there was no common rubric to standardize the grading approach. To mitigate this challenge, the essays in our study were compared using the Flesch-Kincaid test (a formula designed to evaluate the difficulty and complexity of technical writing, consisting of two readings: grade level and reading ease) and a comparison of the final grades for the various essays. Scores for a sample of three essays from each of 25 students, a total of 75 essays, were used to compare improvement in a measurable, quantitative manner. The test consisted of a null hypothesis that there was no significant difference among the scores, indicating neither improvement nor deterioration across the different assignments throughout the semester, and an alternative hypothesis that there was a difference. A two-tailed t-test yielded p-values ranging between 0.3 and 0.6, so the Flesch-Kincaid results were inconclusive: the null hypothesis of no difference could not be rejected.
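For readers who want to reproduce this kind of readability comparison, the sketch below illustrates the general workflow. It is a minimal sketch, not the study’s actual code: it assumes each essay is available as a plain-text file, and it uses the open-source textstat package for the Flesch-Kincaid grade level and scipy for the paired, two-tailed t-test; the file-naming scheme and the 0.05 threshold are illustrative assumptions.

# Minimal sketch of the Flesch-Kincaid / t-test comparison described above.
# Assumes plain-text essay files; file names and threshold are illustrative.
import textstat              # pip install textstat
from scipy import stats      # pip install scipy

def grade_levels(paths):
    """Return the Flesch-Kincaid grade level for each essay file."""
    scores = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            scores.append(textstat.flesch_kincaid_grade(f.read()))
    return scores

# One entry per student, in the same order in both lists (paired design).
first_essays = [f"essays/student{i:02d}_first.txt" for i in range(1, 26)]
final_essays = [f"essays/student{i:02d}_final.txt" for i in range(1, 26)]

first = grade_levels(first_essays)
final = grade_levels(final_essays)

# Two-tailed paired t-test; H0: no difference between first and final scores.
t_stat, p_value = stats.ttest_rel(first, final)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("Fail to reject H0: no detectable change in readability.")
else:
    print("Reject H0: readability scores differ between drafts.")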

Despite the inconclusive results of the Flesch-Kincaid test, there was a demonstrated improvement in student work, albeit an improvement perceived on the basis of a subjective analysis of the essays. Therefore, a new rubric was developed to re-grade all of the essays in a standardized fashion against the desired elements for that particular set of assignments. To facilitate a comparison, this new and straightforward rubric aimed to grade each assignment from the different departments on the same scale. The grades were on a 1–10 scale, and the rubric can be seen in Table 2. The essays were then re-graded according to the same rubric, and the results were compared again using a two-tailed t-test.

The challenge in evaluating interdisciplinary work is that the term “interdisciplinary” is not well defined or broadly understood. This became even clearer after the Chemistry faculty conducted an interdisciplinary group capstone in the General Chemistry course with the Class of 2016 during the second semester of their freshman year. The capstone presented the students with a complex and challenging energy problem that was both current and militarily relevant to their future roles as Army officers. The project required groups of students to write a memorandum summarizing their findings on an experimental, portable, and green battery recharger for soldiers in the field, and then to present their results to their commander. Cadets conducted an experiment on the battery recharger to test its efficiency, to compare it to current recharging methods, and to address the social and leadership challenges that would occur when this new equipment was integrated into a unit. In addition, the capstone leveraged the students’ various courses and experiences to scaffold understanding of key concepts and technology necessary to engage the problem. The freshman cadets were expected to utilize what they learned from math modeling, information technology, general psychology, and general chemistry courses in formulating their solution.

The rubric used to grade these capstones was developed by the Chemistry faculty with input from all the participating courses and was then used by the Chemistry faculty in assessing the capstones. The collaborative rubric identified numerous concepts in each course, and as a result, it was several pages long. Perhaps most significantly, it did not define the term “interdisciplinary” for the faculty and the students in the capstone, nor did it make clear the associated expectations. At the conclusion of the rubric, faculty were asked to rate on a 1–10 scale how interdisciplinary their students’ submissions were. The results, displayed in Figure 3, had a standard deviation of 1.86 and were inconsistent in both the average instructor rating and the range of ratings faculty assigned. This indicated that the faculty did not share the same understanding of “interdisciplinary” when assessing student work.
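The kind of inter-rater inconsistency visible in Figure 3 can be quantified directly. The sketch below is illustrative only, with invented ratings rather than the Figure 3 data; it computes each instructor’s mean rating and the standard deviation across those means, where a large spread signals that faculty are not applying a shared definition of “interdisciplinary.”

# Illustrative check of inter-rater consistency; the ratings below are
# invented for demonstration and are not the Figure 3 data.
import statistics

# 1-10 "how interdisciplinary" ratings, grouped by instructor.
ratings_by_instructor = {
    "Instructor A": [8, 9, 7, 8],
    "Instructor B": [4, 5, 3, 5],
    "Instructor C": [6, 9, 2, 7],
}

means = {}
for name, ratings in ratings_by_instructor.items():
    means[name] = statistics.mean(ratings)
    spread = max(ratings) - min(ratings)
    print(f"{name}: mean {means[name]:.2f}, range {spread}")

# Standard deviation of the per-instructor means: a rough proxy for how
# far apart the instructors' internal standards sit.
print(f"Std. dev. of instructor means: {statistics.stdev(means.values()):.2f}")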

Table 2. Rubric used to evaluate the population sample of NetZero essays from fall 2012

Boix Mansilla and Dawes Duraising (2007) state that student interdisciplinary work should “be well-grounded in the disciplines,” “show critical awareness,” and “advance student understanding” (223). These criteria both define the basic learning objectives of an interdisciplinary education and address the need for baseline knowledge in the subjects being addressed in student work. While these criteria may not be included in a rubric or other grading mechanism, they provide a more clearly defined objective for interdisciplinary student work.

Although the idea of graduating interdisciplinary-minded students is appealing to many programs, the challenge of measuring the success of interdisciplinary curricula in producing these “multi-disciplined” graduates has yet to be addressed. The problem of scaling and measuring interdisciplinary education is itself interdisciplinary in nature and, consequently, an abstract idea for many (Boix Mansilla and Dawes Duraising 2007, 218). Interdisciplinary education evaluation currently lacks a “sound framework” for assessment, since the effects of interdisciplinary efforts on student learning are neither well defined nor proven (Boix Mansilla 2005, 18). As seen in Figure 2 (Stowe and Eder 2002), the assessment of interdisciplinary work is a non-static scale on which the balance between the perspectives and entities is never quite the same from project to project, or from class to class. Stowe and Eder (2002) offer a flexible scale for assessment that allows each interdisciplinary quality to be judged according to faculty expectations: how discovery-oriented versus objective-oriented do they want student assignments to be? Rhoten et al. (2008) do correlate several common learning outcomes of a liberal arts education with their interdisciplinary counterparts, as seen in Table 1. Although useful for demonstrating extensive possible outcomes and correlations, the linkages are broadly defined and do not specify objectives; this exemplifies the issues of scale, definition, and the non-quantified nature of interdisciplinary education that currently prevail in academia.

Figure 3. Chemistry instructor evaluation of interdisciplinary synergy in capstone projects during Spring 2013. Courtesy of the United States Military Academy Department of Chemistry and Life Sciences.

All of the aforementioned problems can be traced to a lack of clarity on standards (Boix Mansilla 2005, 16). Stowe and Eder (2002) explicitly call for a standard for grading, collecting data, and creating a shared understanding, which they suggest could be found in a rubric. A standardized rubric, adaptable to several mediums and general enough to be applicable to several disciplines, is urgently needed for evaluating and assessing interdisciplinary work. Such a rubric needs to clearly define the necessary elements of an interdisciplinary product and be sufficiently adaptable to align with project requirements; this would resolve several of the problems we have identified. In addition, Stowe and Eder (2002) call for the inclusion of very specific elements in a rubric, so that it can address current problems and properly evaluate interdisciplinary work. Among these requirements are assessing complex intellectual processes; promoting objectivity, reliability, and validity in assessment; clearly defining learning objectives for students; and being flexible and adjustable for course or curriculum progression (96). Although we conducted a thorough search, we failed to find a rubric that adequately fulfills this need.

Interdisciplinary Rubric Development

The goal of the rubric developed at USMA is to create a grading mechanism that can be used in multiple project mediums across multiple disciplines. At the same time, the rubric maintains the integrity of the interdisciplinary goals by creating a more defined standard with which to grade interdisciplinary student work. The rubric also contains open areas for point allotment, as well as weighting for each category, which allows faculty to allot points and focus where they see fit. Developing such a rubric required several steps: defining the term interdisciplinary, identifying the elements that student work needs to demonstrate in order to illustrate interdisciplinary thinking, creating a model that visually represents the interconnectivity of these elements, and then using the defined elements and model to arrive at the rubric categories.

The first step in the rubric development process was to define the term interdisciplinary:

Interdisciplinary: The seamless integration of multi-dimensional, multi-faceted ideas into a clearly demonstrated understanding of an issue’s breadth and depth, with sound judgment and dynamic thinking.

Boix Mansilla’s definition of interdisciplinary understanding provided the starting point for the development of the rubric. Additionally, the research discussed above identified elements missing from Boix Mansilla’s definition. For example, the best student interdisciplinary work integrated ideas from multiple disciplines in a way that demonstrated the level of understanding the student had attained.

The second step in the rubric development process was to expand the definition of interdisciplinary, in order to create a shared understanding among students, faculty, and those evaluating the interdisciplinary work. To this end, the feedback and lessons learned from previous student work were used to identify the elements common to successful interdisciplinary work. These principles include discipline-specific knowledge, multi-perspective understanding, integration, practical integrated solutions, reflection, and clarity of purpose. To illustrate the interconnectivity of these principles, a conceptual model of the characteristics was created. Initially, the intention was to create a linear model to represent the core principles. However, several issues, such as missing connections and limited complexity, led to the immediate conclusion that a linear model could not completely describe complex nonlinear problem solving. The resulting model, which illustrates a cyclical thinking process, is shown in Figure 4.

Figure 4. The Cyclical Model of the Key Interdisciplinary Characteristics. This model demonstrates the interconnectivity of the defined interdisciplinary elements.

The model begins with the framing and scoping of the problem before the application of discipline-specific knowledge, which, as we have seen, is an essential starting point for interdisciplinary work. The core principle of the integration of ideas was partitioned into multi-perspective understanding, integration, and practical integrated solutions. Multi-perspective understanding and discipline-specific knowledge are connected by an addition sign, which symbolizes understanding a topic from multiple perspectives; students must be able to use discipline-specific knowledge to make this essential connection. The arrow labeled “integration” in the lower part of the model represents the synthesis of discipline-specific knowledge and multi-perspective understanding into practical integrated solutions. Practical integrated solutions are then connected to reflection via a multiplication sign to show that reflection has a multiplicative effect on interdisciplinary understanding. The arrow labeled “clarity of purpose” represents the cyclical process and shows the compilation of all the previous elements back into discipline-specific knowledge. The knowledge gained from the various parts of the cycle can be used in the further learning of other applicable disciplines. The model’s goal is not to explain the rubric, but to illustrate how interdisciplinary education is cyclical in nature, how the characteristics of interdisciplinary understanding are relevant to interdisciplinary education, and how student learning should continue to build.

Next, the core principles of what makes student work interdisciplinary were established, defined, and examined. The elements in Figure 1 above, taken from Boix Mansilla and Dawes Duraising (2007), were used as a starting point for the development of this rubric’s core principles: be well grounded in the disciplines, show critical awareness, and advance student learning through understanding (223). For the purpose of this rubric, some elements were modified and expanded to create six core principles. A list of the six core principles that were incorporated into the rubric, along with their definitions, appears in Table 3.

Problem framing and scope derives from the idea that interdisciplinary work should show critical awareness. The definition used in the rubric is very flexible, so that educators can adapt it for different project mediums and for faculty, department, and/or university requirements. Critical awareness, as defined by Boix Mansilla and Dawes Duraising (2007), includes the definition of purpose as well as the integration of ideas. The definition used for problem framing and scope in the rubric requires that the student’s work have a clearly defined purpose. This was created as a separate category because we had observed a clear trend of misunderstanding among faculty regarding the level of complexity they expected. It is an important aspect of student interdisciplinary understanding: it allows the faculty to scale assignments according to the expected level of student understanding and allows students to recognize just how complex and multi-disciplined a product the instructor is seeking. For example, if students were assigned a project on how to effectively stock a warehouse, an instructor would not have the same expectations of a freshman who has taken only introductory courses in mathematical modeling and economics as of a senior who has taken nonlinear optimization, supply chain management, and microeconomics courses. Having this requirement in the rubric makes clear the expectation that students will properly identify what they want to address, and it also gives the instructor a frame of reference for the project.

The rubric’s second core principle, discipline knowledge, derives from the criterion that work be well grounded in the disciplines (Boix Mansilla and Dawes Duraising 2007); it is intentionally more open-ended, so that it can be readily adapted to different departments, projects, and situations. Identifying theories, examples, findings, methods, etc. may not be relevant or necessary in a given problem. Therefore, although the evaluator is given an area in the rubric that calls for disciplinary knowledge, the rubric does not explicitly indicate how that knowledge is to be graded. For example, in our warehouse stocking project, a freshman might be expected to mathematically model the effects of changing employee wages on productivity. A university senior, on the other hand, might be expected to produce a business recommendation to stakeholders by addressing the intricacies of supply chain management’s effects on warehouse profits as well as its psychological implications for employees. The discipline knowledge area of the rubric enables the evaluator to determine how much knowledge and understanding students are expected to demonstrate, while ensuring that the importance of disciplinary understanding is not lost on an interdisciplinary project.

The integration of ideas principle is the quintessential element for the interdisciplinarity of this rubric. All six core principles are important interdisciplinary factors, but if this element were removed, the rubric could be used for a project that is not interdisciplinary. Integration of ideas derives its meaning from the critical awareness and advancement of student understanding pieces identified above in Figure 1. This rubric defines integration of ideas as multi-dimensional, feasible, practical solutions with multi-faceted and seamlessly connected ideas. It is important to note the difference between being integrated and being seamlessly integrated. The seamless integration of ideas, which can take on different meanings depending on the assignment, is an indicator of true multi-dimensional, multi-faceted understanding. We define the term seamlessly integrated to mean that ideas are not simply laundry-listed, but instead are connected in an intelligent and logical fashion. The definitional elements of multi-dimensional and multi-faceted identify the need for complexity in student work. Work is multi-dimensional when students make use of multiple dimensions of their education, in other words, when they use multiple disciplines in their work. Multi-faceted means that students are able to use evidence and knowledge to back up their multi-dimensional claims. The most important component is that students be able to demonstrate a clear understanding of what they are presenting. This also relates to a student’s ability to demonstrate the span of an issue’s breadth and depth. In other words, students should be able to apply disciplines to an issue or topic with an appropriate understanding of the level of each of the disciplines. The use of extraneous disciplines merely for the sake of incorporating more disciplines does not necessarily make student work interdisciplinary. In fact, it contradicts the idea of advancing the complexity of the student’s thought process. Students who apply the appropriate level of discipline breadth and depth demonstrate their ability to use sound judgment and logic, as well as their ability to think dynamically.

The next two core principles, clarity of purpose and reflection, were added to address the students’ failure to internalize what they were learning; this failure was revealed during the analysis of the USMA interdisciplinary program. The main challenge was that students did not fully grasp why a given project was interdisciplinary, or why that mattered. To alleviate this, the core principle clarity of purpose was added to the rubric to help students understand the “why”; the intent was to motivate them to define the purpose of their investigation and to take an in-depth approach to the problem. This differs from problem framing and scope in a very important way: problem framing and scope focuses on a well-defined thesis or purpose statement, whereas clarity of purpose focuses on the content of student work. In other words, problem framing and scope asks whether students have a clearly stated framework for their project, while clarity of purpose asks whether they demonstrate their personal interdisciplinary understanding and then explain it well to their audience. Similarly, the next principle, reflection, calls for a clear and delineated connection of ideas and an indication that students have reflected on the interconnectivity and importance of their areas of study. These two core principles are drivers of internalization and cognitive advancement in interdisciplinary learning. They are particularly important because students often do not reflect on what they have learned. The reflection piece is intended to facilitate a deeper understanding of what they are learning and to encourage students to consider how the material fits into the greater scheme of their education.

The final element of the rubric shown in Table 3 is the presentation principle. This principle calls for information that is presented in a suitable medium with proper tone, word choice, spelling, grammar, etc. In short, did the students address the audience correctly and present their knowledge intelligently while doing so? This section can be adapted to the type of project and course for which the rubric is being used. For example, English faculty would probably expand this section because of its importance to their learning outcomes, whereas chemistry faculty may place more emphasis on the discipline-knowledge portion.
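To make the rubric’s adjustable point allotment concrete, the sketch below shows one way a course could encode the six core principles with instructor-chosen weights. The category names follow Table 3, but the weights and the sample scores are hypothetical, not USMA’s published values.

# Hypothetical encoding of the adaptable rubric: the six core principles
# stay fixed, while each course sets its own weights. All numbers below
# are illustrative.
weights = {
    "problem framing and scope": 0.15,
    "discipline knowledge":      0.25,
    "integration of ideas":      0.30,
    "clarity of purpose":        0.10,
    "reflection":                0.10,
    "appropriate presentation":  0.10,  # an English course might raise this
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

def rubric_grade(scores, weights, scale=10):
    """Combine per-principle scores (each out of `scale`) into a 0-100 grade."""
    return sum(weights[k] * scores[k] for k in weights) / scale * 100

# One student's scores on the 1-10 scale used in the text.
scores = {
    "problem framing and scope": 8,
    "discipline knowledge":      7,
    "integration of ideas":      6,
    "clarity of purpose":        9,
    "reflection":                5,
    "appropriate presentation":  8,
}
print(f"Weighted grade: {rubric_grade(scores, weights):.1f}/100")  # 69.5/100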

The newly developed rubric was presented to the Math course leaders for use on the freshmen’s “mini” capstone exercise in December 2013. The rubric was sent to the faculty with minimal guidance. The feedback from the course director made it clear that the students and faculty did not fully grasp the intention or expectations behind the rubric. A few factors contributed to this: 66 percent of the faculty were new to the department; the interdisciplinary expectations were not fully explained to the faculty; although everyone received the rubric, each instructor created his or her own rubric for the mini-capstone; and the students who took the mini-capstone and the faculty who graded their work were under significant time pressure. The creation, execution, and grading of the mini-capstone were not given adequate time because of end-of-semester requirements at USMA during the November-December period. An important conclusion from this feedback was that the faculty needed to have a common understanding of what is expected on an interdisciplinary project. To achieve this for the General Chemistry capstone project in the spring of 2014, a grading calibration exercise was conducted. The calibration included good and poor examples of interdisciplinary work from the previous year’s chemistry capstone, and it showed faculty how to distinguish between good and poor work and how to use the rubric in assigning a grade.

Implementing the Interdisciplinary Rubric

The first step in implementing the rubric was calibration with the faculty. From such an exercise, faculty should take away a common understanding of what exactly interdisciplinarity is, as well as a sense of what constitutes a good final project. The calibration exercise developed for USMA faculty who would be grading the CH102 General Chemistry capstone in the spring of 2014 was an hour-long presentation and discussion. Prior to the presentation, faculty received a packet of examples of cadet work in each of the major portions of the previous year’s capstone project. The examples included “A” work as well as examples of common integration errors students make: the “laundry list,” the “tacked on at the end,” and the “no real knowledge” errors. In the “laundry list” error, a student may mention and be knowledgeable in multiple disciplines but does not integrate them, providing instead a “laundry list” of the different disciplines and explaining the relevance of each individually. In the “tacked on at the end” error, a student may go in-depth in one discipline, particularly the discipline for which the assignment was given, then tack on a sentence or two at the end mentioning other disciplines in order to call the project interdisciplinary. The “no real knowledge” example presents a plethora of ideas but does not demonstrate that the student learned or integrated disciplines and/or ideas. With these examples, faculty became more familiar with what correct and incorrect work looked like. The “A” level example was not meant to illustrate the perfect or only solution; it was merely one example. Faculty evaluated each example using the standard A, B, C, D, F grading scale based on how interdisciplinary they felt each project was.

At the start of the presentation portion of the rubric calibration, faculty were introduced to the interdisciplinary characteristics and the model from Figure 4. This ensured understanding of the interdisciplinary characteristics prior to the introduction of the rubric itself. After the characteristics were covered, the results of the exercise the faculty had just completed were discussed. This clarified any misunderstandings that faculty had about the interdisciplinary characteristics, while the examples of chemistry capstones from the previous year provided a frame of reference. Next, the rubric was thoroughly explained, showing how it was scalable, expandable, and concise enough to meet instructor needs for interdisciplinary student projects.

The General Chemistry capstone rubric for 2014 differs from its 2013 predecessor in two very important ways. First, it is significantly shorter; its two pages (compared to seven pages) emphasize quality over quantity. Instead of listing every detail of the project, the new capstone rubric has five categories that address the math modeling, leadership, information security, oral communication, and required submission components of the project, all without specific details. This allows the students to be more creative in their answers to the given problem.

Second, whereas the 2013 rubric was not based on any interdisciplinary principles or examples and instead listed specific requirements from the disciplines the students were supposed to integrate, this year’s capstone rubric incorporates the interdisciplinary principles described in Table 3. The slew of specific requirements had produced the opposite of integration: the 2013 capstone projects tended to be disjointed. Problem framing and scope is addressed in the Project Summary section with the requirement for a bottom line up front (BLUF), or thesis. Discipline knowledge is asked for in the Discrete Dynamic Modeling, Persuasion and Conformity in a Leadership Environment, and Information Security sections. Although the course-specific requirements must be addressed, integration of ideas is assessed in the Oral Communication and Project Summary sections, which require fluid transitions and logically ordered, related ideas. Appropriate presentation is also addressed in these sections, as the rubric lays out clear expectations for the students’ written and oral presentations, including tone, body language, and level of professionalism. Clarity of purpose and reflection are asked for in the Project Summary section, which calls for contingency plans and a thoroughly explained analysis of the total problem.

Initial instructor feedback on the use of this rubric was that it better defined expectations for the students’ interdisciplinary work, for both the instructor and the students. After using the rubric in the calibration exercise, instructors stated that they felt more confident and prepared than they had in 2013, when no such exercise and assessment tool were available; this year they understood what was asked of them and of the students. Initial comparisons of the interdisciplinary assessments of the students’ work from 2013 and 2014 are quite positive. On a scale of 0–10, with zero being the least interdisciplinary and 10 the most, the average interdisciplinary score given by instructors was 5.69 in 2013 (see Figure 3 for these data). In 2014 this improved to 7.79 (a raw mean of 15.5 on the 2014 rubric’s 20-point scale). There was also less variability between instructors. In 2013 the standard deviation of the mean scores assigned by the instructors was 1.86 (Figure 3); in 2014 it was 0.98 (1.96 on the 20-point scale), a decrease of over 47 percent.
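As a quick check of the arithmetic, the 2014 statistics were recorded on the capstone rubric’s 20-point scale and rescaled to the 10-point scale used in 2013. The snippet below, using only the figures quoted in the text, reproduces the rescaled standard deviation and the reported decrease in variability.

# Verifying the scale conversion and the drop in inter-rater variability;
# all input figures are taken from the text above.
sd_2013 = 1.86        # std. dev. of instructor means, 0-10 scale (Figure 3)
sd_2014_raw = 1.96    # std. dev. on the 2014 rubric's 20-point scale

sd_2014 = sd_2014_raw * (10 / 20)            # rescale to the 0-10 scale
decrease = (sd_2013 - sd_2014) / sd_2013     # relative reduction

print(f"2014 std. dev. on the 10-point scale: {sd_2014:.2f}")  # 0.98
print(f"Decrease in variability: {decrease:.1%}")              # 47.3%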

Future Work and Conclusion

Now that the General Chemistry capstone for the USMA Class of 2017 has concluded, several analyses must be completed to evaluate the progress of interdisciplinary education at USMA. At a minimum, an analysis of the grades and of feedback from the students and faculty needs to be conducted. The analysis of the grades should include a comparison of the grade distribution with its expected distribution, as well as quantitative and qualitative analyses of the capstones compared to the previous years’ capstones. This could be done using the methods previously employed, including the Flesch-Kincaid test, a paired t-test, a distribution of the faculty’s interdisciplinary ratings similar to Figure 3, and/or a cross-course sample of projects re-graded by the course director.

The discussion and research that have taken place at West Point since the first General Chemistry capstone project in 2013 indicate that the results of this year’s changes should be positive. Although there is as yet no statistical evidence to demonstrate improvement, the general understanding of what interdisciplinarity looks like, how to produce it, and how to assess it is much more expansive now than in 2013. The likely reason is that, over the course of a year, faculty and students at USMA have gained experience with interdisciplinary work and a clearer understanding of interdisciplinary assessment and its importance.

The world is a complex and rapidly changing place that requires its future scientists, scholars, engineers, teachers, and leaders to think dynamically and across disciplines. Interdisciplinary assessment is necessary for the future of education, particularly at West Point, where we recognize that “adaptive leaders who are comfortable operating in ambiguity and complexity will increasingly be our competitive advantage against future threats to our Nation” (Elliott et al. 2013, 30). Only time will tell whether this interdisciplinary rubric has met its goal of creating a grading mechanism that can be used in multiple project mediums across multiple disciplines. Given the extensive research and analysis done at West Point to create this much-needed and useful tool, the prospects for future interdisciplinary education are promising.

About the Authors

Elizabeth Olcese

Elizabeth Olcese graduated with a Bachelor of Science degree in Operations Research from the United States Military Academy at West Point, NY, in 2014. She served as a student researcher for West Point’s Core Interdisciplinary Team, focused on enhancing opportunities for interdisciplinary learning in West Point’s core academic curriculum. Upon completion of the Quartermaster Basic Officer Leader Course at the Army Logistics School in Fort Lee, VA, she will serve as a second lieutenant in the 25th Infantry Division at Schofield Barracks, Hawaii.

Joseph C. Shannon

Joseph C. Shannon graduated with a doctorate in Curriculum and Instruction with a focus in Science Education from the College of Education at the University of Washington, WA. He is a former member of West Point’s Core Interdisciplinary Team that was focused on enhancing opportunities for interdisciplinary learning in West Point’s core academic curriculum. He is a former Academy Professor at the United States Military Academy and Program Director for the General Chemistry Program in the Department of Chemistry and Life Science. He is currently the Dean of Academic Programs at South Seattle College in West Seattle, Washington.

Gerald Kobylski

Gerald Kobylski graduated with a doctorate in interdisciplinary studies (Systems Engineering and Mathematics) from Stevens Institute of Technology, NJ. He currently is co-leading an effort to infuse interdisciplinary education into West Point’s core academic curriculum. He is also deeply involved with pedagogy, faculty development, and assessment. Jerry is an Academy Professor at the United States Military Academy, a Professor of Mathematical Sciences, and a Commissioner for the Middle States Commission on Higher Education.

Lieutenant Colonel Charles (Chip) Elliott

Lieutenant Colonel Charles (Chip) Elliott graduated with a doctorate in Geography and Environmental Engineering from Johns Hopkins University in Baltimore, MD and is a registered professional engineer in Virginia. He is currently the General Chemistry Program Director and the Plebe (Freshman) Director for the Core Interdisciplinary Team at the United States Military Academy. He has previously taught CH101/102 General Chemistry, EV394 Hydrogeology, EV488 Solid & Hazardous Waste Treatment and Remediation, EV401 Physical & Chemical Treatment, and EV203 Physical Geography. He is currently an Assistant Professor in the Department of Chemistry and Life Science.

References

Boix Mansilla, V. n.d. “Interdisciplinary Understanding: What Counts as Quality Work?” Interdisciplinary Studies Project, Harvard Graduate School of Education.

Boix Mansilla, V. 2005. “Assessing Student Work at Disciplinary Crossroads.” Change 37 (1): 14–21.

Boix Mansilla, V., and E. Dawes Duraising. 2007. “Targeted Assessment of Students’ Interdisciplinary Work: An Empirically Grounded Framework Proposed.” The Journal of Higher Education 78 (2): 216–237.

Elliott, C., G. Kobylski, P. Molin, C.D. Morrow, D.M. Ryan, S.K. Schwartz, J.C. Shannon, and C. Weld. 2013. “Putting the Backbone into Interdisciplinary Learning: An Initial Report.” Manuscript submitted for publication, United States Military Academy, West Point, NY.

Grassie, W. (n.d.). “Interdisciplinary Quotes.” Thinkexist.com. http://thinkexist.com/quotation/as-the-pace-of-scientific-discovery-and/1457907.html (accessed November 16, 2013).

Ivanitskaya, L., D. Clark, G. Montgomery, and R. Primeau. 2002. “Interdisciplinary Learning: Process and Outcomes.” Innovative Higher Education 27 (2): 95–111.

Newell, W.H. 2006. “Interdisciplinary Integration by Undergraduates.” Issues in Integrative Studies 24: 89–111.

Repko, A.F. 2007. “Interdisciplinary Curriculum Design.” Academic Exchange Quarterly 11 (1): 130–137.

Repko, A.F. 2008. “Assessing Interdisciplinary Learning Outcomes.” Academic Exchange Quarterly 12 (3): 171–178.

Rhoten, D., V. Boix Mansilla, M. Chun, and J. Thompson Klein. 2008. “Interdisciplinary Education at Liberal Arts Institutions.” Teagle Foundation White Paper.

Stowe, D.E., and D.J. Eder. 2002. “Interdisciplinary Program Assessment.” Issues in Integrative Studies 20: 77–101.

Appendix 1

[Two appendix documents are provided as PDF files in the online edition.]

Science Bowl Academic Competitions and Perceived Benefits of Engaging Students Outside the Classroom

 

Abstract

The National Science Bowl® emphasizes a broad range of general and specific content knowledge in all areas of math and science. Over 20,000 students have chosen to enter the competition and be part of a team, and they have enjoyed the benefits of their achievements in the extracurricular Science Bowl experience. An important question to ask, in light of the effort it takes to organize and participate in regional or national science competitions, is whether the event makes a difference to the student. And if it does make a difference, does it improve student learning or student attitudes about science? In a preliminary survey, students competing in a Regional Science Bowl Competition report that the event has a positive impact and fosters learning in science and mathematics. These data support findings for other forms of extracurricular academic competitions associated with science and mathematics.

Introduction

Since 1991, the Department of Energy’s (DOE) National Science Bowl® has been sponsoring annual regional and national competitions for high school students across the United States of America, including Puerto Rico and the U.S. Virgin Islands. In addition to seeing the pragmatic value of increasing the “feed” of science-educated personnel into DOE research facilities, the DOE recognized that the improvement of science education, broadly, would be of great benefit to the nation. Expanding its focus beyond formal science education at the college level, the DOE started the Science Bowl program to encourage high school student participation and interest in math and science. The idea was to increase science literacy in general and to encourage science- and mathematics-related careers specifically. The success of the high school competitions resulted in the expansion of the program to include middle schools in 2002.

The competitions feature teams of four to five students answering multiple choice and short answer questions in the areas of science, mathematics, energy, and technology. There are currently 67 regional high school and 36 middle school competitions. The high school competitions involve more than 15,000 students and the middle school contests more than 6,000. The winning team from each regional event is invited to Washington D.C. to compete with other winners.

Participation in Science Bowl involves working as a team, and a team’s level of success is determined not only by scientific knowledge, but also by teamwork and gamesmanship. The students’ engagement in group work directly benefits the individual team members, their social groups, and society as a whole (Greif and Ephross 2011, 6). The actual team formation and function are themselves a model for both future community engagement and civic activism. In fact, creating teams is one of the three principal strategies for successfully placing students in service-learning opportunities within communities (Harris 2009).

The National Science Bowl® emphasizes a broad range of general and specific content knowledge in all areas of math and science. Science Bowl experiences are independent of the classroom environment and generally occur because the students have volunteered to enter the competition and become part of a team. Each team must have a coach, who can be a parent or other interested person but is usually a high school science teacher. The volunteer aspect of the competition as an extracurricular activity makes it similar to robotics competitions, the Science Olympiad, and other interdisciplinary, multi-disciplinary, and applied endeavors. All of these programs stress the collaborative and communal nature of the projects over the content, a characteristic shared by other civic engagement and volunteer endeavors (Jacoby and Ehrlich 2009).

An important question to ask, in light of the effort it takes to run regional or national science competitions, is whether the event makes a difference to the student. And if it does make a difference, does it improve student learning or student attitudes about science? The literature on science competitions is not extensive. Abernathy and Vineyard (2001, 274) asked students who competed in the Science Olympiad why they did so. The number one reason for participating in the Olympiad was that it was fun; the number two reason was that the participants enjoyed learning new things. These findings held for both male and female participants; they seemed to think learning science and math in this context was enjoyable. Abernathy and Vineyard suggested that competitive events “may be tapping into students’ natural curiosity and providing a new context for them to learn in, without rigid curriculum or grading constraints” (2001, 274).

Competitive events such as the National Science Bowl® may provide the “initial motivation” and catalyst for helping students to discover the joy of learning (Ozturk and Debelak 2008). Academic competitions can provide motivation for students to study, learn new material, and reinforce previously learned material so that they will be ready to compete (and collaborate) with their peers from other schools both regionally and nationally—not just in games but also in academic and work environments. This type of motivation is difficult to provide in a normal classroom environment. While it can be argued that this is solely extrinsic motivation and that students should not be dependent on it, it can nevertheless serve as the spark that ignites a discovery of the joy of learning science and math.

One of the more important affective benefits of competitions like the National Science Bowl® is that the participants, who may be the academic elite at their home schools (big fish in a little pond), must test their knowledge and skills against students from other schools who will be their peers once they reach college and the workplace. Ozturk and Debelak (2008) note that students “learn to respect the quality of work by other children and to accurately assess their own performance in light of the performance of their intellectual peers. They achieve an accurate assessment of where their level of performance stands in the world of their intellectual capacity and, in turn, develop a more wholesome self-concept” (51). Developing a more accurate and grounded self-concept is an important stage for children to go through on their way to becoming healthy and mature adults. This realistic and comparative self-assessment can be difficult to foster in the case of elite students who have never faced stiff competition or external challenges to their academic abilities in their home institution.

Students in academic competitions also benefit from learning not only how to succeed, but how to accept failure, learn from it, and, “subsequently, grow as a person and improve in performance” (Ozturk and Debelak 2008, 52). This, again, may be one of the most important aspects of interscholastic academic competitions, and one that cannot easily be provided in a typical classroom environment; learning to fail and coping with the emotional aftermath may be riskier in a classroom than in a games environment where the experience of failure is shared among the group. Being thrust into a situation where participants must deal with failure (even after they have prepared and done their best) promotes the healthy development of a student’s resilience and self-awareness. Academic competitions like the National Science Bowl® and its many regional competitions may provide the type of environment that helps students to reflect on their knowledge and abilities and to self-evaluate their performance, promoting personal growth and development for the participants.

Certainly, extreme competitiveness can cause anxiety and undue stress (see for example Davis and Rimm, 2004). Many of us can remember learning in our Psychology 101 course about test anxiety and how it can negatively affect student performance and achievement and lead to low self-esteem. But Davis and Rimm also report that competition can increase student productivity and achievement. Some students seem to need to compete with others in order to push themselves to produce at a higher level. It would follow that socially organized competitions like the National Science Bowl® and its many regional competitions could help to promote high levels of achievement and productivity in the participating math and science students. Some of the increased levels of achievement and productivity may be due to the practice in teamwork and study skills promoted by participation in this type of academic competition. Bishop and Walters (2007) report that the students involved in competition increased their ability to be leaders and team players, especially in the areas of directed studying (“cramming”), communication, and stress management.

Most studies of this nature are based on students’ reports of their own perceptions, and Bishop and Walters also discuss the viability of using a self-report, Likert-scale survey to investigate how the National Ocean Sciences Bowl (NOSB) influenced the participants’ choice of major and courses in college. They further triangulate their data using follow-up interviews, information on the colleges the students attended, and lists of the college courses the students took following their participation in the NOSB. Their longitudinal study, conducted from 2000 to 2007, establishes the credibility of the students’ self-reported data using this type of survey (Bishop and Walters 2007).

What Do the Students Get from This Competition?

A brief survey was developed for the students who compete in the Northern New England Regional Science Bowl Competition, to gather information about the students’ perceptions of the impact the competition has on them and other students. The questions were developed by the Regional Science Bowl coordinators and distributed to the students (as well as to coaches, volunteers, and audience members) on the day of the competition, which takes place each year in late February or early March. The students in the Northern New England Regional Science Bowl Competition come from the three northernmost New England states: Maine, Vermont, and New Hampshire. The competition is an extracurricular activity; the students, in grades 9–12, have self-selected to be part of a team that practices and competes during non-school hours. The students making up the teams tend to be academically successful. As might be expected, these students usually like mathematics and science and are predisposed to participate in activities involving these subjects. The teams compete in a one-day event at the University of Southern Maine, which culminates in a single-elimination tournament round. The winning team is offered an all-expenses-paid trip to Washington, D.C. to compete with other regional winners for the national championship. Students at the regional bowl are given the survey. Completing and returning it is voluntary, although the students and coaches are made aware that their responses will help improve the event.

The Instrument

The first part of the survey was designed to collect general background information about the students and their role in the day’s competition. This section was a simple checklist:

  • This is my first experience.
  • I’ve been at previous science bowls here.
  • I was a volunteer today.
  • I am a spectator/guest.
  • I was one of the student competitors today.
  • I am a coach of one of the teams.

The next set of items was intended to gain insight into the students’ perceptions of how the regional competition affected those taking part in the day’s activities and events. It consisted of three Likert-type items:

1. I think this competition had a positive impact on the students:

2. Quiz competitions foster student learning about science and mathematics:

3. Quiz competitions are stressful in a negative way:

Each of these questions had a five-choice scale that ranged from strongly agree through neutral to strongly disagree. There were also two open-ended questions:

The thing I enjoyed most about today was:

What I would recommend for next year:

And finally a yes/no question:

I’d like to come back next year.
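To make concrete how responses to these items could be coded and tallied, the sketch below shows one possible scheme. It is a minimal illustration only; the field names, Likert labels, and example records are our own assumptions, not the survey's actual coding.

    from collections import Counter

    # Hypothetical encoding of returned survey forms; field names and Likert
    # labels are illustrative assumptions, not the instrument's actual coding.
    responses = [
        {"role": "competitor", "positive_impact": "strongly agree",
         "fosters_learning": "agree", "negative_stress": "disagree"},
        {"role": "competitor", "positive_impact": "agree",
         "fosters_learning": "strongly agree", "negative_stress": "neutral"},
    ]

    def percent_agreeing(surveys, item):
        """Percentage of respondents choosing 'agree' or 'strongly agree'."""
        counts = Counter(s[item] for s in surveys)
        return 100.0 * (counts["agree"] + counts["strongly agree"]) / len(surveys)

    print(f"positive impact: {percent_agreeing(responses, 'positive_impact'):.0f}% agree")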

Findings and Discussion

Data collection began with the 2004 Northern New England Regional Science Bowl Competition and continued through 2009. (After this year the Bowl was restructured and focused exclusively on Maine students, although participants continue to be surveyed.) This six-year longitudinal study has provided data representing a consistent mix of new and returning students: throughout the course of the study, roughly equal numbers of first-time and returning students responded to the survey. Although the survey was distributed to students, coaches, and other volunteers who took part in the events, only the results of the student surveys were used in this report. Because completing the survey was voluntary, an average of only fifteen percent of the students per year returned it. Interviews with coaches and students indicate that the low response rate is most likely a result of the survey being collected at the end of a long, intense day, when many teams were eager to start their journeys back to homes throughout northern New England.

Of the students participating in the Northern New England Science Bowl who responded to the survey during the study period, 93 percent either agreed or strongly agreed that the competition had a positive impact on them (Table 1).

Campbell and Walberg (2011) suggest that this type of positive impact follows the students throughout their lives. Willingness to participate in events on their own time, especially during the weekend, demonstrates a high level of positive engagement that would foster feelings of positive impact. Akey (2006, 16) reports that “student engagement and perceived academic competence had a significant positive influence” on achievement. The survey results also suggest that the students perceive themselves as academically competent in math and science, and that this is why they participate. This mirrors the findings of Abernathy and Vineyard (2001), who report that academic competitions tap into the natural curiosity and inclinations of students and provide an arena for them to learn new things. The science bowl event could provide the platform for these students to excel and receive recognition. Further, Ozturk and Debelak (2008) report that academic competitions may provide the motivation to find the joy in learning. Curiosity and motivation are important aspects of learning that would presumably have a positive impact on the lives of the participants in academic competitions like the National Science Bowl®.

Most (91 percent) of the respondents reported either that they agreed or that they strongly agreed that the Regional Science Bowl Competition fosters student learning in science and mathematics (Table 2).

These data again appear to support the research done by Abernathy and Vineyard (2001), indicating that academic competitions provide a forum to stimulate the students’ natural curiosity about learning new things, as well as the work of Ozturk and Debelak (2008), who have concluded that academic competitions may motivate students to discover the joy of learning.

The high rate of positive responses to these two questions suggests that the student participants in the Regional Science Bowl Competition are developing a strong positive sense of self. These responses, reinforced by our interviews with participating coaches, indicate that the students are reflecting on their experiences and developing a more complete self-image and perhaps an increased sense of their personal competence. Bishop and Walters report that an enhanced and comparative sense of personal competence or capability “translates as a very high factor influencing career choice” (2007, 69). It may well be that academic competitions such as the National Science Bowl® and its associated regional competitions provide experiences that positively influence student career choices.

Interestingly, the same students who reported that the Science Bowl Competition had such a positive effect on them in general, and a positive effect on their learning, did not necessarily find the competition free of stress. Only 61 percent disagreed or strongly disagreed that the quiz competition was stressful in a negative way (Table 3).

Perhaps the wording of the question led students to equate “quiz” with “test,” which affected their response. It could also be that the students consider any kind of stress negative, and if they perceived that the competition created even a low level of stress, they would conclude that this was a negative effect.

In the open-ended question that asked what they enjoyed the most about the Science Bowl, the number one response was competition, the second most frequent response was meeting like-minded people, and the third was the hands-on nature of the activities. These students seem to be saying that they feel that testing their knowledge and skills in science and mathematics against other students of similar ability is fun! Maybe this is because they are beginning to form a deeper understanding of and respect for the quality of their work, as suggested by Ozturk and Debelak (2008). Academic competitions (such as the Science Bowl) may give students the opportunity to compete mentally the way athletic competitions allow them to compete physically (Parker 1998). Perhaps these students get the same kind of “high” that athletes get during competition, and the thrill of academic competition releases endorphins much the same way that athletic competition does.

The data indicate that a substantial majority of the students competing in the Northern New England Regional Science Bowl Competition report that the event has a positive impact on them and fosters learning in science and mathematics. These data support findings that have been reported for other forms of academic competitions involving science and mathematics (e.g., Campbell and Walberg 2011). Self-reporting indicates that the students have a high level of perceived personal competence, a high level of engagement in mathematics and science activities, and a high level of motivation toward these academic subjects. In addition to increased involvement in the community, competence, engagement, and motivation are factors that have been linked to academic achievement, personal growth, and career choices. If the education community is seeking to increase student interest and participation in science and mathematics majors and careers, and ultimately in complex science-related public policy discussions, then academic competitions like the National Science Bowl® may be an important part of the overall strategy for bringing the nation closer to that goal.

A Proposal for Further Study

A key aspect of the Science Bowl competition is its role in building a social community of contestants, which leads one to wonder whether the competitions contribute to increased involvement in the larger community and whether they encourage participants to become more effective and engaged citizens. Participating schools are likely to return to the event, as are alumni who come back as volunteer officials. Further, with the release of recent studies, such as “Steady as She Goes? Three Generations of Students through the Science and Engineering Pipeline” (Lowell et al., 2009), we (the authors of this paper) feel an ethical responsibility to continue the investigation of whether science competitions represent meaningful contributions to the experience of students and their disposition towards science.

To better understand the impact of the Science Bowls on both STEM learning and civic engagement, we recommend that surveys be administered for all the National Science Bowl® middle school and high school competitions. The surveys should be standardized, with optional regionally based questions, and should be part of a well-designed study that can inform future science bowl decisions. An existing instrument, the Student Assessment of Learning Gains (SALG, http://www.salgsite.org/), has survey questions geared toward formal academic courses but offers a no-cost, accessible means of obtaining data on students’ attitudes about science. Social media also provides opportunities for assessment and student self-reporting. Surveys can be followed up by focus-group interviews that could add depth to our understanding of the findings. Such longitudinal studies could serve to verify whether these informal, volunteer learning experiences correlate with continued interest and involvement in science and mathematics, including choice of college majors and careers, and with enhanced awareness of and involvement in our most pressing science-related civic challenges, including climate change, public health, and technology.

About the Authors

Robert Kuech

Robert Kuech (Bob) taught middle and high school physics, chemistry, physical science, biology, ecology, and computer programming for 20 years before returning to Penn State to work on a Ph.D. in science education. When he finished his studies at Penn State in 1999, he came directly to USM, where he has served as the science educator in the Teacher Education Department since that time.

Robert Sanford

Robert M. Sanford (Rob) is Professor of Environmental Science and Policy and Chair of the Department of Environmental Science and Policy at the University of Southern Maine in Gorham, Maine. He received his M.S. and Ph.D. in environmental science from SUNY College of Environmental Science & Forestry. His research interests include environmental impact assessment and planning, and environmental education. He is a co-director of the SENCER New England Center for Innovation (SCI) and is a SENCER Leadership Fellow.

References

Abernathy, T., and R. Vineyard. 2001. “Academic Competitions in Science: What Are the Rewards for Students?” The Clearing House 74 (5): 269–276.

Akey, T.M. 2006. School Context, Student Attitudes and Behavior, and Academic Achievement: An Exploratory Analysis. New York: MDRC. http://www.mdrc.org/publications/419/full.pdf. (Accessed July 7, 2014.)

Bishop, K., and H. Walters. 2007. “The National Ocean Sciences Bowl: Extending the Reach of a High School Academic Competition to College, Careers, and a Lifelong Commitment to Science.” American Secondary Education 35 (3): 63–76.

Campbell, J.R., and H.J. Walberg. 2011. “Olympiad Studies: Competitions Provide Alternatives to Developing Talents That Serve National Interests.” Roeper Review 33: 8–17.

Davis, G.A., and S.B. Rimm. 2004. Education of the Gifted and Talented. 5th ed. New York: Pearson.

Greif, G., and P. Ephross. 2011. Group Work with Populations at Risk. Oxford: Oxford University Press.

Harris, J.D. 2009. “Service-learning: Process and Participation.” In Service-learning and the Liberal Arts, C.A. Rimmerman, ed., 21–40. Lanham, MD: Rowman & Littlefield.

Jacoby, B., and T. Ehrlich, eds. 2009. Civic Engagement in Higher Education. San Francisco: Jossey-Bass.

Lowell, B.L., H. Salzman, H. Bernstein, and E. Henderson. 2009. “Steady as She Goes? Three Generations of Students through the Science and Engineering Pipeline.” Paper presented at the Annual Meetings of the Association for Public Policy Analysis and Management, Washington, D.C. http://policy.rutgers.edu/faculty/salzman/steadyasshegoes.pdf. (Accessed July 7, 2014.)

Ozturk, M., and C. Debelak. 2008. “Affective Benefits from Academic Competitions for Middle School Gifted Students.” Gifted Child Today 31 (2): 48–53.

Parker, S. 1998. “At Dawn or Dusk, Kids Make Time for This Quiz.” Christian Science Monitor 90 (116): 49–54.


SENCERizing Pre-service K-8 Teacher Education: The Role of Scientific Practices

 

Abstract

Recent policy reports are calling for curriculum reforms to address a lack of relevance and an avoidance of the core scientific practices in science courses K–16. One important cohort is K–8 teacher candidates, who need courses in which they learn core ideas in science and participate in science practices. One promising approach is infusing SENCER courses into the science course sequence for future teachers. We report a review of select SENCER courses using an Evidence-Explanation framework to assess the type and levels of science practices introduced. Results on ‘Differences in Courses,’ ‘Common Themes Among Courses,’ and ‘Demographic Patterns’ are reported.

Introduction

Recent U.S. policy reports express a growing concern for the supply of scientists, science workers, and science teachers; cf. the National Research Council 2006 report Rising Above the Gathering Storm and the National Center on Education and the Economy 2007 report Tough Choices or Tough Times. The STEM (Science Technology Engineering Mathematics) teacher and workforce shortages have two components: (1) declines in attracting and retaining individuals in science and science education programs of study, and (2) declines in attracting and retaining them in places of employment. These recent reports show that uptake of STEM courses and careers is waning. There is also documented evidence that the development of youth attitudes toward science, both negative and positive, begins in and around the middle school grades (ADEEWR, 2008). Thus, much of the focus for addressing the problems is on schools and schooling K–16.

Consensus review reports (Carnegie Corporation of New York, 2009) place much of the blame on curriculum models, citing a lack of relevance and an avoidance of the core scientific practices that frame science as a way of knowing, e.g., critiquing and communicating evidence and explanations. The NRC K–8 science education synthesis research study Taking Science to School (Duschl, Schweingruber & Shouse, 2007) is another consensus report that makes recommendations about the reform of science curriculum, instruction, and assessment. The TSTS report concludes that K–8 science education should be grounded in (1) learning and using core knowledge, (2) building and refining models, and (3) participating in discourse practices that promote argumentation and explanation. The report also concludes that a very different model of teacher education must be put into place. That raises an important set of issues: where in the undergraduate curriculum do future K–8 teachers engage in and learn to use core knowledge, model building and refinement, and argumentation and explanation practices?

The typical introductory survey science courses taken by non-science majors and elementary education candidates focus more on the ‘what we know’ of science and less on the ‘how we know’ and ‘why we believe’ dynamics and practices of science. Determining the level and degree of scientific practices in science courses is essential for shaping and understanding preservice and inservice teachers’ engagement and confidence in doing science when planning and leading science lessons in their own classrooms. Science courses that focus exclusively on teaching what we know in science are inappropriate for future teachers.

Teacher candidates need courses in which they participate in science practices. One promising approach we have been considering is infusing SENCER courses into the science course sequence for future teachers (e.g., subject matter, SENCER, science teaching methods). Science Education for New Civic Engagements and Responsibilities (SENCER) course frameworks offer a potential solution to both engagement in and understanding of science practices. The SENCER commitment is to situate science learning in civic or social problems to increase relevance, engagement and achievement in science content knowledge and inquiry practices. This article reports on an analysis of a subset of SENCER courses that take up environmental problems as the civic engagement issue.

The study investigates how the design of SENCER courses provides opportunities to practice science as inquiry. The premise is that teachers gaining experience in science practices are more likely to use these practices in their own elementary school classrooms. In turn, these teachers will be in a better position to understand and hopefully address the Taking Science To School recommendation that K–8 science education be coordinated around the 4 Strands of Proficiency:

Students who understand science:

  1. Know, use and interpret scientific explanations of the natural world.
  2. Generate and evaluate scientific evidence and explanations.
  3. Understand the nature and development of scientific knowledge.
  4. Participate productively in scientific practices and discourse (Duschl et al. 2007).

One of the three TSTS recommendations for teacher professional development speaks directly to the issue:

Recommendation 7: University-based science courses for teacher candidates and teachers’ ongoing opportunities to learn science in-service should mirror the opportunities they will need to provide for their students, that is, incorporating practices in all four strands and giving sustained attention to the core ideas in the discipline. The topics of study should be aligned with central topics in the K–8 curriculum so that teachers come to appreciate the development of concepts and practices that appear across all grades. (Duschl et al. 2007, 350)

Review of Literature and Analytical Frameworks

With respect to changing how and what science is taught, one important cohort of science students is preservice elementary (K–8) teachers, who have low self-efficacy when it comes to science (Watters & Ginns, 2000). This cohort’s lack of confidence in, and experience with, science helps maintain a cycle in which the students they teach lose interest and confidence in learning science due to poor teaching strategies, misdirected curriculum, and weak teacher knowledge (Wenner, 1993). Sadler (2009) has found that socio-scientific issues (SSI) affect learners’ interest and motivation, content knowledge, understanding of the nature of science, higher-order thinking, and community of practice. Thus, it is not a surprise that SENCER courses have successfully demonstrated increases in student enthusiasm (Weston, Seymour & Thiry, 2006). However, more information is needed to determine how SENCER courses affect student achievement in core knowledge of science and in science practices that involve model building and revision. The first step toward conducting research on the impact of SENCER courses on learning is to ascertain which SENCER courses are implementing scientific practices, e.g., raising research questions, planning measurements and observations, collecting data, deciding what counts as evidence, locating patterns and building models, and proposing explanations. The driving question is whether SENCER courses, when placed between science courses and science teaching methods courses, can affect teacher thinking and practices.

Co-designed courses represent another model that brings science and science methods courses together. Co-designed courses are planned and taught jointly by science and science education faculty. Zembal-Saul (2009, 687) has found that co-designed courses that adopt a framework for teaching science as argument to preservice elementary teachers served “as a powerful scaffold for preservice teachers’ developing thinking and practice . . . [as well as] attention to classroom discourse and the role of the teacher in monitoring and assessing children’s thinking.” Schwartz (2009) found similar positive effects on preservice teachers’ principled reasoning and practices after using an instructional framework focusing on modeling-centered inquiry, coupled with using reform-based criteria from Project 2061 to analyze and modify curriculum materials. What these two studies demonstrate, and the SENCER model supports, is the effectiveness coherently aligned courses can have on students’ engagement and learning. Such shifts in undergraduate courses and teaching frameworks will contribute to breaking the cycle that perpetuates low interest and high anxiety in the sciences at all levels of education, K–16.

Research shows that preservice elementary school teachers tend to enter the profession with inadequate knowledge of scientific content and practice. Preservice elementary teachers answer only 50 percent of questions correctly on a General Science Test Level II (Wenner, 1993). Stevens and Wenner’s (1996) surveys of upper-level undergraduate elementary education majors are consistent with other research showing that 43 percent of practicing teachers had completed no more than one year of science course work in college (Manning, Esler, & Baird, 1982; Eisenberg, 1977). This lack of courses and experiences in science is reflected in the low self-efficacy in science among preservice elementary school teachers (Stevens & Wenner, 1996; Wenner, 1993).

If no changes are made to the coursework currently required of preservice elementary school teachers, they will continue to have low self-efficacy in science and will therefore avoid teaching the subject (Stevens & Wenner, 1996). Such teachers are unlikely to use inquiry within their science lessons, with the result that students are not exposed to scientific practices. The cycle of negative experiences with science does not have to be accepted as an educational norm, as the studies by Zembal-Saul and by Schwartz demonstrate. Changes can be made that coherently align science courses with methods courses.

SENCER courses can serve as a bridge connecting real-world issues and scientific knowledge, with the positive impact of raising motivation and engagement among non-majors and preservice elementary teachers to learn science (SENCER, 2009). Evidence shows that learning science within the context of a current social problem helps to motivate preservice teachers and enables them to form goals that include learning scientific concepts and practices (Watters and Ginns, 2000; Sadler, 2009). Preservice elementary teachers who experience scientific practices and do investigations that build and refine scientific evidence and explanations can become more informed decision makers about science and the teaching of science.

Evidence-Explanation Continuum Framework

While it is important that SENCER courses successfully motivate preservice elementary teachers to learn science content, it is also essential that science courses provide opportunities to use scientific knowledge and practices. The targeted science practices for this review of SENCER courses are from the Evidence-Explanation (E-E) continuum (Duschl, 2003, 2008). The E-E continuum represents a step-wise framework of data gathering and analyzing practices. The appeal of adopting the E-E continuum as a framework for designing science education curriculum, instruction, and assessment models is that it helps work out the details of the critiquing and communicating discourse processes inherent in TSTS Strand 4, participating productively in scientific practices and discourse. The E-E continuum recognizes how cognitive structures and social practices guide judgments about scientific data texts. It does so by building into the instructional sequence select junctures of reasoning, e.g., transformations of data texts. At each of these junctures or transformations, instruction pauses to allow students to make and report judgments. Students are then encouraged to engage in rhetoric/argument, representation/communication, and modeling/theorizing practices. The critical transformations or judgments in the E-E continuum include:

  1. Selecting or generating data to become evidence,
  2. Using evidence to ascertain patterns of evidence and models, and
  3. Employing the models and patterns to propose explanations.

Another important judgment is, of course, deciding what data to obtain and what observations or measurements are needed (Lehrer & Schauble, 2006; Petrosino, Lehrer & Schauble, 2003). The development of measurement to launch the E-E continuum is critically important. Such decisions and judgments are critical opportunities for explicitly teaching students about the nature of science (Duschl, 2000; Kuhn & Reiser, 2004; Kenyon & Reiser, 2004). How raw data are selected and analyzed to become evidence, how evidence is selected and analyzed to generate patterns and models, and how the patterns and models are used for scientific explanations are important ‘transitional’ practices in doing science. Each transition involves data texts and making epistemic judgments about ‘what counts.’

In a full inquiry or a guided inquiry, students formulate scientific questions, plan methods, collect data, decide which data to use as evidence, and create patterns and explanations from the selected evidence (Duschl, 2003). Science engagement becomes more of a cognitive and social dialectical process as groups and group members discuss why they differed in the data they selected as evidence and varied in the evidence they used for explanations (Olson & Loucks-Horsley, 2000). Students participating in these interactions tend to build new knowledge and/or correct previous misconceptions about a scientific concept (Olson & Loucks-Horsley, 2000).
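As a schematic illustration of these junctures, the sketch below renders the E-E transformations as explicit pipeline stages, each recording a judgment about ‘what counts.’ This is our own toy rendering under stated assumptions, not an implementation drawn from the E-E literature; the turbidity data and function names are invented.

    # Toy rendering of the E-E continuum's judgment points as pipeline stages.
    # All names and data are invented for illustration.

    def select_evidence(raw_data, keep):
        """Juncture 1: judge which raw observations count as evidence."""
        return [d for d in raw_data if keep(d)]

    def find_pattern(evidence, summarize):
        """Juncture 2: transform the evidence into a pattern or candidate model."""
        return summarize(evidence)

    def propose_explanation(pattern, explain):
        """Juncture 3: use the pattern/model to argue for an explanation."""
        return explain(pattern)

    readings = [3.1, 2.9, 41.0, 3.3]  # raw stream-turbidity data; one suspect outlier
    evidence = select_evidence(readings, keep=lambda x: x < 10.0)          # judgment 1
    pattern = find_pattern(evidence, summarize=lambda e: sum(e) / len(e))  # judgment 2
    print(propose_explanation(pattern, explain=lambda m: f"mean turbidity is about {m:.1f} NTU"))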

Research Context and Methods

The research question asks to what extent SENCER courses model and use scientific practices that are linked to obtaining and using evidence to develop explanations. SENCER courses were selected from the SENCER website and examined to determine the opportunities they provided to engage in scientific practices. Only SENCER courses designed around environmental topics (e.g., water, earth, soil, rocks) were selected, because these courses offer integrated science opportunities. Next, course syllabi, projects, and activities were reviewed to ascertain students’ use, or potential use, of data-driven E-E scientific practices.

SENCER courses were considered to emphasize planning and asking questions if students asked their own research question, designed their own experiment, or designed an engineering project. A course that stressed data collection showed that students went into the field and collected soil, water, or air samples, or took measurements of samples. A SENCER course provided students practice in evidence if students decided which data to keep, as inferred from their representing data or creating graphs. Practice in evidence was also inferred if students later analyzed data, since students could not complete this activity without deciding which evidence to use. A course gave students experience in patterns if students determined how the evidence was modeled, as seen in analysis of evidence or running statistics on evidence. Lastly, a course allowed students to practice explanation if students connected their project to previous research or theories (as seen in library searches), made predictions for another phenomenon based on their results, or discussed recommendations. Courses that included scientific content but focused on practices used in the humanities, such as research on and communication with another culture, were left out of this study. A summary of the criteria for evaluating the courses appears in Table 1, below.

Table 1. Criteria for Evaluating SENCER Courses

The names of the courses located on the SENCER website appear in Tables 2 and 3. Each scientific practice identified was recorded as an X in Table 3, with further details on how the course fulfilled the criteria. Courses that did not meet a criterion received an N/R (no result). Each X was worth one point, yielding a scale from 1 to 5 for comparing the scientific practices identified in each of the course modules. A score of 1 indicated that the course module incorporated only one portion of scientific practice, and a score of 5 indicated that the course emphasized all five portions of scientific practice within the E-E continuum. Therefore, a course with a score of 1 placed little emphasis on scientific practice, whereas a course receiving a score of 5 heavily emphasized scientific practice.
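A minimal sketch of this scoring rule appears below, assuming a simple matrix of X and N/R marks per practice; the course names and marks are placeholders, not data from Table 3.

    # Hypothetical reconstruction of the 1-5 scoring rule: one point for each
    # of the five E-E practices marked "X". Names and marks are placeholders.
    PRACTICES = ["planning/questions", "data collection", "evidence",
                 "patterns", "explanation"]

    marks = {
        "Example Course A": {p: "X" for p in PRACTICES},           # all five -> 5
        "Example Course B": {"evidence": "X", "patterns": "X"},    # two marks -> 2
    }

    def score(course_marks):
        """Count one point per scientific practice recorded as an X."""
        return sum(1 for p in PRACTICES if course_marks.get(p) == "X")

    for course, m in marks.items():
        print(course, "->", score(m))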

Table 2. Selected SENCER Courses

Course demographics were also investigated using the SENCER website. The information gathered included type of institution, class size, student year, major, and class time (Table 2). Demographic information was then used to interpret any differences seen in the level of scientific practice among SENCER course modules.

Results and Findings

The results and findings are reported in three sections: Differences in Courses, Common Themes Among Courses, and Demographic Patterns.

Differences in Courses

Differences in courses are presented from the highest emphasis on scientific practices (a score of 5) to the lowest (a score of 1). Two courses, “The Power of Water” and “Chemistry and the Environment,” received a 5 because they provided students with practice in each aspect of scientific inquiry (Table 3). However, they approached various aspects of inquiry differently due to the nature of the problem being solved. “The Power of Water” took an engineering approach, in which students designed the most efficient micro-hydro-power turbine for a hypothetical small rural village, whereas students in “Chemistry and the Environment” formulated their own question to research about an environmental chemistry issue on their campus.

Most of the courses scored a 4 (Table 3); these included “Introduction to Statistics with Community-based Project,” “Chemistry and Policy,” “Renewable Environment: Transforming Urban Neighborhoods,” “Riverscape,” “Environment and Disease,” “Energy and the Environment,” and “Geology and the Development of Modern Africa.” Most of these courses differed from “The Power of Water” and “Chemistry and the Environment” in that they did not allow students to explain their patterns or models. Two courses that received a 4 did expose students to explanation but left out some other aspect of scientific practices in inquiry: students in “Chemistry and Policy” did not create their own scientific question to study, and “Riverscape” did not provide students with practice in creating patterns. The “Riverscape” course is of particular interest because it was designed specifically for preservice elementary school teachers, in an attempt to build their interest in science and help them learn scientific practices.

Two courses provided students with the opportunity to use three out of five practices within scientific inquiry, giving them a score of 3. “Renewable Environment: Transforming Urban Neighborhoods” and “Science in the Connecticut Coast” allowed students to collect data, provide evidence, and create patterns or models. However, students did not practice the planning and explanation stages of scientific inquiry.

Two courses gave students experience in the fewest scientific practices, scoring a 2; no course scored a 1. “Science, Society, and Global Catastrophe” and “Math Modeling” differed in the scientific practices they included. “Science, Society, and Global Catastrophe” gave students training in finding evidence and creating patterns and models but not in the remaining scientific practices. “Math Modeling” enabled students to practice finding evidence and creating explanations, but the course did not provide students with the remaining portions of scientific inquiry.

Common Themes Among Courses

SENCER courses with differing levels of scientific practices tended to have common themes for practicing scientific inquiry. One major theme was the use of collaboration, as seen through group work on a scientific project. Most course modules shown on the SENCER website specifically state that students work in groups for their projects. Others, such as “Riverscape” and “Chemistry and Policy,” do not directly state that students do group work, although collaboration is emphasized within the course. The only course that did not emphasize collaboration was “Renewable Environment: Transforming Urban Neighborhoods,” although this information may simply have been left off the SENCER website. While not specifically stated within the E-E continuum, collaboration plays an important role within inquiry. Students who are able to discuss scientific concepts with one another can articulate ideas and argue, enabling them to reconstruct their own ideas of scientific meaning (Olson & Loucks-Horsley, 2000).

Another common theme among high-practice SENCER courses was that students communicated their results with one another in various formats. Most courses incorporated formal presentations to the rest of the class at the end of the project. Others used formal presentations created for different audiences, such as the general public or a prospective buyer of land for diamond extraction. Other course modules, such as “Science in the Connecticut Coast” and “Environment and Disease,” based communication more on discussion of scientific concepts. Despite differences in the means of presenting ideas in class, communication of results is an important skill essential to inquiry-based learning.

Table 3. Scored Courses

Demographic Patterns

The SENCER courses differed in demographic information. The total number of students participating ranged widely, from 5 to 130 students (Table 2). Laboratories decreased class size to roughly 20 students; however, more information is needed on the laboratory class size for “The Power of Water.” Student year ranged from freshmen to graduate students across the courses. Student type varied greatly, from non-majors and preservice elementary school teachers to math or chemistry majors. Total class time differed among the courses, as did the way that time was scheduled (Table 2).

None of the demographic information appeared to influence the degree to which students gained practice in using science. Although class size varied among courses, it had no apparent impact on the number of scientific practices emphasized. Courses with large class sizes, such as “The Power of Water” and “Energy and the Environment,” provided students with practice in using science similar to that in smaller classes such as “Riverscape.”

Additionally, student major had little impact on the scientific practices emphasized within SENCER courses. Majors used a varying number of scientific practices among the courses studied: math students in “Introduction to Statistics with Community-Based Project” used more areas of scientific practice than math majors in “Math Modeling,” as seen in Table 3. Majors also did not use any more scientific practices than non-majors in these courses. “The Power of Water” allowed students to use all five elements of scientific practice in inquiry, whereas majors in “Math Modeling” were given the opportunity to practice only two aspects.

Class year also did not affect students’ exposure to scientific practices. As expected, SENCER courses enabled upperclassmen and graduate students to gain practice in conducting science, as seen in “Riverscape.” However, many SENCER classes also provided underclassmen with a rich experience in practicing science. For example, “The Power of Water,” consisting of sophomores, provided students with practice in every area of scientific inquiry.

Lastly, class time did not affect student exposure to scientific practices. Courses that received the same scores varied widely in scheduled time. “Chemistry and Policy” devoted much more class time than “Environment and Disease,” but students experienced the same number of scientific practices.

Conclusions

Distinctions in SENCER course characteristics have led to varying opportunities for students to gain experience in doing scientific practice, as seen in this study’s scores. The courses with the highest scores allow students the greatest ownership over their own work. Courses with a score of 5 provide the greatest ownership by allowing students to choose their own question to study. Modules with scores of 3 and 4 may not allow students to ask their own questions, but they do give students responsibility for the remainder of the scientific practices in the E-E continuum. Courses with the lowest scores provide students with the least ownership over their own work: students are given a piece of someone else’s project and continue a small portion of it, for example, analyzing a data set generated by another project. Future SENCER courses should consider giving students as much ownership over their work as possible to encourage student experience in using scientific practices.

The nature of data collection also had an impact on the level of scientific practices used within course modules. Courses with easy access to soil or water samples of interest, along with equipment to measure those samples, showed a higher level of scientific practices within the E-E continuum. Courses such as “Math Modeling” and “Science, Society, and Global Catastrophe” may not have had easy access to water or soil samples; consequently, these courses were unable to provide students with the opportunity to gain practice in data collection. “Geology and the Development of Modern Africa” found a workaround that enabled students to gather their own data by using a computer simulation. Students did not actually collect rock samples in this class but were able to collect data from their computer simulation. Perhaps computer simulations could be used in other courses that do not have easy access to samples from the environment.

While these characteristics provide critical information for increasing a SENCER course’s use of scientific practices, the traits that had no effect on the level of scientific practices also offer insight into increasing student experience in performing science.

It is reassuring that SENCER courses are flexible enough to incorporate inquiry in small as well as large classes. Future courses using the SENCER approach can be designed knowing that students can successfully learn scientific practices even in a large class. SENCER courses can serve majors and, especially, non-majors who have little experience in scientific practices. And it is appropriate to use SENCER not only for upper-level courses; it is also critical to apply these modules to lower-level classes.

SENCER courses provide a way to incorporate scientific practices within student learning. The integration of social issues with science builds preservice teachers’ interest in scientific practices. As these students gain experience in using scientific tools, they become more confident about incorporating science into their future elementary classrooms. Perhaps our future teachers’ greater enthusiasm for science will spark student interest in the sciences.

References

ADEEWR, Australian Department of Education, Employment and Workplace Relations. 2008. Opening up Pathways: Engagement in STEM Across the Primary-Secondary School Transition. Canberra, Australia.

Burns, W.D. 2002. “Knowledge to Make Our Democracy.” Liberal Education 88 (4): 20–27.

Carnegie Corporation of New York. 2009. The Opportunity Equation: Transforming Mathematics and Science Education for Citizenship and the Global Economy. www.opportunityequation.org (accessed December 14, 2009).

Duschl, R. 2003. “Assessment of Inquiry.” In Everyday Assessment in the Classroom, J.M. Atkin and J. Coffey, eds., 41–59. Arlington, VA: NSTA Press.

Duschl, R., H. Schweingruber, and A. Shouse, eds. 2007. Taking Science to School: Learning and Teaching Science in Grades K–8. Washington, DC: National Academy Press.

Eisenberg, T.A. 1977. “Begle Revisited: Teacher Knowledge and Student Achievement in Algebra.” Journal for Research in Mathematics Education 8: 216–222.

Kenyon, L., and B. Reiser. 2004. “Students’ Epistemologies of Science and Their Influence on Inquiry Practices.” Paper presented at the annual meeting of the National Association for Research in Science Teaching, April 2004, Dallas, TX.

Kuhn, L., and B. Reiser. 2004. “Students Constructing and Defending Evidence-based Scientific Explanations.” Paper presented at the annual meeting of the National Association for Research in Science Teaching, April 2004, Dallas, TX.

Lehrer, R., and L. Schauble. 2006. “Cultivating Model-based Reasoning in Science Education.” In The Cambridge Handbook of the Learning Sciences, K. Sawyer, ed., 371–388. New York: Cambridge University Press.

Manning, P.C., W.K. Esler, and J.R. Baird. 1982. “How Much Elementary Science Is Really Being Taught?” Science and Children 19 (8): 40–41.

Olson, S. and S. Loucks-Horsley, eds. 2000. Inquiry and the National Science Education Standards: A Guide for Teaching and Learning. Washington, DC: National Academy Press.

Petrosino, A., R. Lehrer, and L. Schauble. 2003. “Structuring Error and Experimental Variation as Distribution in the Fourth Grade.” Mathematical Thinking and Learning 5 (2/3): 131–156.

Sadler, T. 2009. “Situated Learning in Science Education: Socio-Scientific Issues as Contexts for Practice.” Studies in Science Education 45 (1): 1–42.

SENCER, Science Education for New Civic Engagements and Responsibilities. http://www.sencer.net (accessed December 14, 2009).

Schwartz, C. 2009. “Developing Preservice Elementary Teachers’ Knowledge and Practices Through Modeling-Centered Scientific Inquiry.” Science Education 93 (4): 720–744.

Seago, J.L. Jr. 1992. “The Role of Research in Undergraduate Instruction.” The American Biology Teacher 54 (7): 401–405.

Stevens, C., and G. Wenner. 1996. “Elementary Preservice Teachers’ Knowledge and Beliefs Regarding Science and Mathematics.” School Science and Mathematics 96 (1): 2–9.

Wenner, G. 1993. “Relationship Between Science Knowledge Levels and Beliefs Toward Science Instruction Held by Preservice Elementary Teachers.” Journal of Science Education and Technology 2 (3): 461–468.

Watters, J.J., and I.S. Ginns. 2000. “Developing Motivation to Teach Elementary Science: Effect of Collaborative and Authentic Learning Practices in Preservice Education.” Journal of Science Teacher Education 11 (4): 301–321.

Zembal-Saul, C. 2009. “Learning to Teach Elementary School Science as Argument.” Science Education 93 (4): 687–719.

About the Authors

Amy Utz graduated from Bucknell University in 2005 with a B.A. in Biology. In 2007, she graduated from Drexel University with an M.S. in Biology. She is currently a graduate student in the Master of Education program at Penn State University, where she is completing her student teaching, and she plans to become a high school biology teacher.

Richard A. Duschl, (Ph.D. 1983 University of Maryland, College Park) is the Waterbury Chaired Professor of Secondary Education, College of Education, Penn State University. Prior to joining Penn State, Richard held the Chair of Science Education at King’s College London and served on the faculties of Rutgers, Vanderbilt and the University of Pittsburgh. He recently served as Chair of the National Research Council research synthesis report Taking Science to School: Learning and Teaching Science in Grades K–8 (National Academies Press, 2007).
