Meeting the Challenge of Interdisciplinary Assessment

 

Introduction to Interdisciplinary Education

“As the pace of scientific discovery and innovation accelerates, there is an urgent cultural need to reflect thoughtfully about these epic changes and challenges. The challenges of the twenty-first century require new interdisciplinary collaboration, which place questions of meanings and values on the agenda. We need to put questions about the universe and the universal back at the heart of university.” William Grassie (2013)

As the world becomes more complex, given the rapid expansion of technology, the changing nature of warfare, and rising energy and environmental crises, the value of an interdisciplinary education is increasingly obvious. Social, political, economic, and scientific issues are so thoroughly interconnected that they cannot be explored productively, by experts or students, within clear-cut disciplinary boundaries.

Despite this fact, several problems arise when institutions try to incorporate interdisciplinary education into their programs. Boix Mansilla (2005) noted that the assessment of interdisciplinary work by students is of great concern. She explains that because faculty are often discipline-specific experts, they are unfamiliar with disciplines outside their realm of expertise and have difficulty defining interdisciplinary work. She goes on to explain that, as a consequence, “the issue [of standards] is marred by controversies over the purposes, methods, and most importantly, the content of proposed assessments” (2005, 16).

This paper offers one solution to this dilemma. The following analysis explores the current state of interdisciplinary education, both in academia broadly and, specifically, at West Point through its interdisciplinary Core Program. The sections that follow highlight the issues inherent in interdisciplinary education, define interdisciplinary education objectives, and finally explain the adaptable, multi-functional, interdisciplinary rubric being implemented at the United States Military Academy (USMA), a rubric designed to resolve many of the issues interdisciplinary educators encounter.

The Current State of Interdisciplinary Education in Academia

“The demand is clear. Whether we try to take a stance on the stem cell research controversy, to interpret a work of art in a new medium, or to assess the reconstruction of Iraq, a deep understanding of contemporary life requires knowledge and thinking skills that transcend the traditional disciplines. Such understanding demands that we draw on multiple sources of expertise to capture multi-dimensional phenomena, to produce complex explanations, or to solve intricate problems. The educational corollary of this condition is that preparing young adults to be full participants in contemporary society demands that we foster their capacity to draw on multiple sources of knowledge to build deep understanding.” Veronica Boix Mansilla (2005, 14)

Several current studies, including evaluation measures, attempt to define the essence of interdisciplinary education. The above quote from Boix Mansilla’s “Assessing Student Work at Disciplinary Crossroads” highlights the challenge educators face in preparing students to meet today’s most pressing problems. This paper will not attempt to address the structure of interdisciplinary education as an institutional convention, but only to define the essential skills and capacities that a student with interdisciplinary understanding would demonstrate. These definitions are essential to understanding and creating a framework for interdisciplinary learning, which is arguably the first step in adequately integrating it into educational programs. Interdisciplinarity is a difficult construct to quantify, and many educators have been unable to frame a definition of it or to assess it in student work. As a consequence of these and other challenges, only a limited number of colleges and universities have implemented formal interdisciplinary programs at the institutional level.

Several analyses (Boix Mansilla 2005; Boix Mansilla and Dawes Duraising 2007; Rhoten et al. 2008; Stowe and Eder 2002) address the key issues surrounding interdisciplinary learning in higher education and offer proposals on how to address them, starting with the definition of the term “interdisciplinary.” One definition of interdisciplinary understanding is “the capacity to integrate knowledge and modes of thinking drawn from two or more disciplines to produce a cognitive advancement—for example, explaining a phenomenon, solving a problem, creating a product, or raising a new question—in ways that would have been unlikely through single disciplinary means” (Boix Mansilla 2005, 16; Boix Mansilla and Dawes Duraising 2007, 216). A definition is particularly important because “a clear articulation of what counts as quality interdisciplinary work, and how such quality might be measured, is needed if academic institutions are to foster in students deep understanding of complex problems and evaluate the impact of interdisciplinary education initiatives” (Boix Mansilla 2005, 16). An agreed-upon definition is currently lacking in academia, and this has resulted in inconsistent grading, teaching, and learning in interdisciplinary education.

One study of well-regarded and established interdisciplinary programs in the U.S., which included Bioethics at the University of Pennsylvania, Interpretation Theory at Swarthmore College, Human Biology at Stanford University, and the NEXA Program at San Francisco State University, involved “69 interviews, 10 classroom observations, 40 samples of student work, and assorted program documentation” (Boix Mansilla and Dawes Duraising 2007, 4). The data were gathered in semi-structured interviews of one hour to 90 minutes, in which faculty and students were asked about the manner of assessment used in their respective programs. Samples of student work were then used to illustrate what each institution viewed as meeting the definition of interdisciplinarity. From the interviews and student samples, the authors concluded that there are three core dimensions to student interdisciplinary work: disciplinary grounding, advancement through integration, and critical awareness (Boix Mansilla 2005; Boix Mansilla and Dawes Duraising 2007). These core elements are represented graphically in Figure 1.

Figure 1. Three Interrelated Criteria for Assessing Students’ Interdisciplinary Work (Boix Mansilla and Dawes Duraising 2007, 223)

The first core element in Figure 1, disciplinary grounding, calls for a strong base of knowledge in the individual disciplines. During the interviews, 75 percent of the faculty felt that strong subject-area knowledge was necessary for interdisciplinary education that did not sacrifice depth in exchange for breadth. However, the authors noted that successful disciplinary grounding also depends on the thoughtful selection of which disciplines to use and how to use them. Advancement through integration, the second principle, is universal in all student work in the sense that students are supposed to learn from the work they do; what sets it apart in interdisciplinary education is that “students advance their understanding by moving to a new conceptual model, explanation, insight, or solution” (Boix Mansilla and Dawes Duraising 2007, 225). In the study, 68 percent of faculty identified advancement through integration as a necessary element in interdisciplinary understanding and as the quintessential element for the advancement of student understanding. However, various programs and their students interpret this core element differently. For example, some students in the NEXA Program at San Francisco State University strive for complex explanations, which weave disciplines together to show how interconnected they are on a given topic. Other students in the same program prefer aesthetic reinterpretations that connect the literary, historical, and social elements of a topic. Still others, such as those in the Bioethics program at the University of Pennsylvania, focus on developing practical solutions based on multidisciplinary ideas. The final principle from Figure 1, critical awareness, refers to student work’s ability to withstand examination and criticism, and it explicitly calls for evidence of student reflection: work needs to “exhibit clarity of purpose and offer evidence of reflective self-critique” (Boix Mansilla and Dawes Duraising 2007, 228).

Rhoten et al. (2008) also conducted a study focused on the similarities and differences between the learning outcomes of liberal arts and interdisciplinary programs. For this particular study, the researchers used student and faculty surveys, interviews, and tests to gather data for their analysis. The authors explain that most liberal arts programs “must develop student capacities to integrate or synthesize disciplinary knowledge and modes of thinking,” which is very similar to the type of synthesis that is expected from an interdisciplinary curriculum (Rhoten et al. 2008, 3–4). The main purpose behind this study was to identify the parallels between interdisciplinary and liberal arts programs, in order to show how a program can be made more interdisciplinary without changing its structure or content. Table 1 shows a summary of several parallels between a liberal arts education and an interdisciplinary education.

Table 1. Comparison of Liberal Arts Education and Interdisciplinary Education Objectives (Rhoten et al. 2008)

Rhoten et al. (2008) also analyzed empirical data to draw out trends among the “222 institutions considered ‘Baccalaureate College-Liberal Arts institutions’ under the 2000 Carnegie Classification system,” examining whether the interdisciplinary programs offered were majors, minors, optional courses, or required courses (Rhoten et al. 2008, 5). In general, “interdisciplinary programs are still ‘personally driven,’ whereas departments are ‘self-perpetuating'” (Rhoten et al. 2008, 6). “Personally driven” means that students who want to broaden their subject-area exposure must do so on their own. “Self-perpetuating” refers to the fact that departments within an institution must act in their own self-interest to survive and thrive, and they therefore tend to avoid interdisciplinary efforts. Because interdisciplinary education does not support the mission of individual departments, students who seek it must do so on their own initiative. One would therefore conclude that the only way to truly incorporate interdisciplinary education into schools is to mandate it institutionally, at least in the core curriculum that all students are required to take.

Schools should strive to integrate interdisciplinary efforts into their institutions because “interdisciplinarity breeds innovation” (Rhoten et al. 2008, 12). Although such innovation carries tremendous benefits, the difficulty of measuring student and educator success was again identified as a barrier. According to Rhoten et al. (2008), most schools already making efforts toward interdisciplinarity believe they are somewhat successful. However, in order to mark and measure success, and to continually improve interdisciplinary programs in schools, the authors propose a value-added assessment, intended to provide an “assessment regime that measures growth that has occurred as a result of participation in the institution or academic program” (Rhoten et al. 2008, 14). Moreover, “some cross-cutting goals that are embedded especially in interdisciplinary studies, such as life-long learning, curiosity, creative thinking, synthesis, and integration, have acquired the reputation of being ineffable and, correspondingly, unassessable” (Stowe and Eder 2002, 83). This common problem was addressed by Stowe and Eder (2002), who identified several assessment measures that can be placed on a continuum, as seen in Figure 2. These measures can also be used to better define interdisciplinary standards by providing a multi-tiered, adjustable scale that helps quantify the assessment of student work based on an instructor’s desired outcomes.

Figure 2. Perspectives on Assessment (Stowe and Eder 2002, 84)

Stowe and Eder (2002) state that using a rubric to define and measure interdisciplinary work would help remedy the “apparently subjective nature” of interdisciplinary assessment. They further recommend the rubric as a “visible standard—a scoring guide—that allows the assessor and the public, for that matter, to recognize expectations and make increasingly fine distinctions about the quantity and quality of student learning” (96). They expand on their recommendation by noting that assessment must be focused on both improving interdisciplinary learning and “improving student learning,” and should be “embedded within larger systems… and create linkages and enhance coherence within and across the curriculum” (80). Without cooperation across different programs, it is impossible to foster an interdisciplinary learning environment.

An example of such cooperation can be seen at USMA, where several academic departments have moved toward a cooperative environment focused on interdisciplinary learning (Elliott et al. 2013). This paper focuses on the education of the USMA Class of 2016 from their freshman year, when the plan to use energy conservation and the NetZero project (an initiative by the Department of the Army for several Army posts, including West Point, to produce as much energy as they consume by the year 2020) was adopted to infuse interdisciplinary themes into their core courses. The five Student Learning Outcomes from this effort include four individual discipline-focused outcomes as well as a fifth, which aims to “develop an interdisciplinary perspective that supports knowledge transfer across disciplinary boundaries and supports innovative solutions to complex energy problems/projects” (Elliott et al. 2013, 33). In a larger sense, this objective illustrates that interdisciplinary education addresses the mission of USMA and the Army’s conviction, as outlined by General Martin Dempsey, Chairman of the U.S. Joint Chiefs of Staff, that the “development of adaptive leaders who are comfortable operating in ambiguity and complexity will increasingly be our competitive advantage against future threats to our Nation” (Elliott et al. 2013, 30).

Framing the Problem

The Academy produces graduates who can think dynamically in the ever-changing world described in the quotes from Grassie and Boix Mansilla at the beginning of this article. At West Point, this is accomplished by taking not only a multi-disciplinary approach to education, but also an interdisciplinary one. The Academy’s Core Curriculum describes the required classes that all cadets must complete or validate. The Core Curriculum does not include any classes required for a cadet’s major. Other non-academic requirements include three tactics courses and seven physical education courses. The interdisciplinary aspect is a new addition to the curriculum. In recent years, several committees have recommended promoting interdisciplinary approaches to better meet both the Academy’s and the Army’s goals as outlined in Elliott et al. (2013).

To achieve these goals, several academic departments involved in the Core Curriculum developed an interdisciplinary program for the entering plebe class, the Class of 2016. During the first week of classes, freshmen wrote an essay in their Introduction to Mathematical Modeling course (MA103) about how they would use concepts from different courses to tackle the challenges that NetZero and the alarming problem of energy consumption in the Army pose to West Point. After 30 instruction sessions (approximately thirteen weeks), the freshmen revised these essays in their composition course, EN101, this time using the knowledge acquired throughout the semester in the English course and in the other courses they were taking. Faculty from the Department of Mathematical Sciences and the Department of English and Philosophy evaluated these revised essays from different perspectives to emphasize the importance and relevance of multiple disciplines. This led to the realization that it was impossible to adequately compare the essays, since the assignments, rubrics, and faculty were not consistent and there was no common rubric to standardize the grading approach. To mitigate this challenge, our study compared the essays using the Flesch-Kincaid test (a formula designed to evaluate the difficulty and complexity of technical writing, consisting of two readings: grade level and reading ease) and a comparison of the final grades for the various essays. Scores for a sample of three essays from each of 25 students, a total of 75 essays, were used to measure improvement in a quantitative manner. The test consisted of a null hypothesis that there was no significant difference among the ratings, indicating neither improvement nor deterioration of scores across the different assignments throughout the semester, and an alternative hypothesis that there was a difference between scores. A two-tailed t-test yielded p-values ranging between 0.3 and 0.6, indicating that the Flesch-Kincaid results were inconclusive: the null hypothesis could not be rejected.
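The mechanics of this comparison can be sketched briefly. What follows is a minimal, hypothetical sketch, assuming each student’s original and revised essays are available as plain-text strings; it uses the textstat package for the Flesch-Kincaid readings and scipy for the two-tailed paired t-test. The function and variable names are illustrative, not the study’s actual code.

```python
# Sketch of the Flesch-Kincaid comparison described above (illustrative only).
import textstat
from scipy import stats

def flesch_kincaid_readings(essay: str) -> tuple[float, float]:
    """Return the two Flesch-Kincaid readings (grade level, reading ease)."""
    return textstat.flesch_kincaid_grade(essay), textstat.flesch_reading_ease(essay)

def compare_drafts(first_drafts: list[str], revisions: list[str]) -> None:
    # Score every essay on the grade-level reading.
    first_scores = [flesch_kincaid_readings(e)[0] for e in first_drafts]
    revised_scores = [flesch_kincaid_readings(e)[0] for e in revisions]

    # Paired, two-tailed t-test: H0 = no difference between the two sets of
    # scores. A large p-value (e.g., the 0.3-0.6 range reported above) means
    # H0 cannot be rejected and the comparison is inconclusive.
    t_stat, p_value = stats.ttest_rel(first_scores, revised_scores)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    if p_value > 0.05:
        print("Inconclusive: fail to reject the null hypothesis.")
```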

Despite the inconclusive Flesch-Kincaid results, there was a demonstrated improvement in student work, albeit one perceived through a subjective analysis of the essays. Therefore, a new rubric was developed to re-grade all of the essays in a standardized fashion against the desired elements for that particular set of assignments. To facilitate comparison, this new, straightforward rubric graded each assignment from the different departments on the same 1–10 scale; the rubric can be seen in Table 2. The essays were then re-graded according to this common rubric, and the results were compared again using a two-tailed t-test.

The challenge in evaluating interdisciplinary work is that the term “interdisciplinary” is not well defined or broadly understood. This became even clearer after the Chemistry faculty conducted an interdisciplinary group capstone in the General Chemistry course with the Class of 2016 during the second semester of their freshman year. The capstone presented the students with a complex and challenging energy problem that was both current and militarily relevant to their future roles as Army officers. The project required groups of students to write a memorandum summarizing their findings on an experimental, portable, and green battery recharger for soldiers in the field, and then to present their results to their commander. Cadets conducted an experiment on the battery recharger to test its efficiency, to compare it to current recharging methods, and to address the social and leadership challenges that would arise when the new equipment was integrated into a unit. In addition, the capstone leveraged the students’ various courses and experiences to scaffold understanding of the key concepts and technology necessary to engage the problem. The freshman cadets were expected to draw on their math modeling, information technology, general psychology, and general chemistry courses in formulating their solutions.

The rubric used to grade these capstones was developed by the Chemistry faculty with input from all the participating courses and was then used by the Chemistry faculty to assess the capstones. The collaborative rubric identified numerous concepts in each course and, as a result, ran several pages long. Perhaps most significantly, it did not define the term “interdisciplinary” for the faculty and students in the capstone, nor did it make clear the associated expectations. At the conclusion of the rubric, faculty were asked to rate on a 1–10 scale how interdisciplinary their students’ submissions were. The results, displayed in Figure 3, had a standard deviation of 1.86 and were inconsistent in both the average instructor rating and the range of ratings that faculty assigned. This indicated that the faculty did not share the same understanding of “interdisciplinary” when assessing student work.
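The consistency check behind Figure 3 reduces to two simple statistics: each instructor’s mean rating and the spread of those means across instructors. Below is a minimal sketch, with placeholder ratings rather than the actual Figure 3 data.

```python
# Sketch of the Figure 3 consistency check; `ratings` maps each instructor
# to the 1-10 interdisciplinary scores he or she assigned. The numbers here
# are placeholders, not the study's data.
import statistics

ratings = {
    "instructor_a": [4, 5, 6, 5],
    "instructor_b": [8, 7, 9, 8],
    "instructor_c": [5, 6, 4, 6],
}

# Mean rating per instructor, then the standard deviation of those means;
# a large spread signals that faculty do not share a common standard.
means = {name: statistics.mean(scores) for name, scores in ratings.items()}
spread = statistics.stdev(means.values())
print(means)
print(f"std dev of instructor means = {spread:.2f}")
```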

Table 2. Rubric used to evaluate the population sample of NetZero essays from fall 2012

Boix Mansilla and Dawes Duraising (2007) state that student interdisciplinary work should “be well grounded in the disciplines,” “show critical awareness,” and “advance student understanding” (223). These criteria both define the basic learning objectives of an interdisciplinary education and address the need for baseline knowledge in the subjects treated in student work. While these criteria may not themselves appear in a rubric or other grading mechanism, they provide a more clearly defined objective for interdisciplinary student work.

Although the idea of graduating interdisciplinary-minded students is appealing to many programs, the challenge of measuring how successfully interdisciplinary curricula produce these “multi-disciplined” graduates has yet to be addressed. The problem of scaling and measuring interdisciplinary education is itself interdisciplinary in nature and, consequently, an abstract idea for many (Boix Mansilla and Dawes Duraising 2007, 218). Interdisciplinary education currently lacks a “sound framework” for assessment, since the effects of interdisciplinary efforts on student learning are neither well defined nor proven (Boix Mansilla 2005, 18). As seen in Figure 2 (Stowe and Eder 2002), the assessment of interdisciplinary work is a non-static scale on which the balance between perspectives and entities is never quite the same from project to project, or from class to class. Stowe and Eder (2002) offer a flexible scale for assessment that allows each interdisciplinary quality to be judged according to faculty expectations: how discovery-oriented versus objective-oriented do they want student assignments to be? Rhoten et al. (2008) do correlate several common learning outcomes of a liberal arts education with their interdisciplinary counterparts, as seen in Table 1. Although useful for demonstrating the range of possible outcomes and correlations, the linkages are broadly defined and do not specify objectives; this exemplifies the issues of scale, definition, and the non-quantified nature of interdisciplinary education that currently prevail in academia.

Figure 3. Chemistry instructor evaluation of interdisciplinary synergy in capstone projects during Spring 2013. Courtesy of the United States Military Academy Department of Chemistry and Life Sciences.

All of the aforementioned problems can be traced to a lack of clarity on standards (Boix Mansilla 2005, 16). Stowe and Eder (2002) explicitly call for a standard for grading, collecting data, and creating a shared understanding, which they suggest could be found in a rubric. A standardized rubric, adaptable to several mediums and general enough to be applicable to several disciplines, is desperately needed for evaluating and assessing interdisciplinary work. Such a rubric needs to clearly define the necessary elements of an interdisciplinary product and be sufficiently adaptable to align with project requirements; this would resolve several of the problems we have identified. In addition, Stowe and Eder (2002) call for the inclusion of very specific elements in a rubric, so that it can address current problems and properly evaluate interdisciplinary work. Among these requirements are assessing complex intellectual processes; promoting objectivity, reliability, and validity in assessment; clearly defining learning objectives for students; and being flexible and adjustable for course or curriculum progression (96). Although we conducted a thorough search, we failed to find a rubric that adequately fulfills this need.

Interdisciplinary Rubric Development

The goal of the rubric developed at USMA is to create a grading mechanism that can be used across multiple project mediums and multiple disciplines. Simultaneously, the rubric maintains the integrity of interdisciplinary goals by creating a more defined standard with which to grade interdisciplinary student work. The rubric also contains open areas for point allotment, as well as weighting for each category, which allows faculty to allot points and focus where they see fit. Developing such a rubric required several steps: defining the term interdisciplinary, identifying the elements that student work needs to demonstrate in order to illustrate interdisciplinary thinking, creating a model that visually represents the interconnectivity of these elements, and then using the defined elements and model to arrive at the rubric categories.
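To make the open point allotment concrete, here is a minimal sketch of how the weighted categories might be represented. The six category names follow the core principles discussed below (and listed in Table 3); the weights and the scoring helper are purely illustrative assumptions, not part of the USMA rubric itself.

```python
# Illustrative representation of a weighted, adaptable rubric.
from dataclasses import dataclass

@dataclass
class Category:
    name: str
    weight: float   # fraction of the total grade, set by the instructor
    score: int = 0  # 1-10 rating assigned during grading

# Example weighting only; faculty adjust these to fit the assignment.
rubric = [
    Category("Problem framing and scope", 0.15),
    Category("Discipline knowledge", 0.20),
    Category("Integration of ideas", 0.30),
    Category("Clarity of purpose", 0.10),
    Category("Reflection", 0.10),
    Category("Presentation", 0.15),
]

def weighted_grade(categories: list[Category]) -> float:
    """Collapse the weighted 1-10 category scores into one 0-10 grade."""
    assert abs(sum(c.weight for c in categories) - 1.0) < 1e-9
    return sum(c.weight * c.score for c in categories)
```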

The first step in the rubric development process was to define the term interdisciplinary:

Interdisciplinary: The seamless integration of multi-dimensional, multi-faceted ideas into a clearly demonstrated understanding of an issue’s breadth and depth, with sound judgment and dynamic thinking.

Boix Mansilla’s definition of interdisciplinary understanding provided the starting point for the development of the rubric. Additionally, the research discussed above helped identify elements missing from Boix Mansilla’s definition. For example, the best student interdisciplinary work included ideas from multiple disciplines, integrated so as to demonstrate the level of understanding the student had attained.

The second step in the rubric development process was to expand the definition of interdisciplinary, in order to create a shared understanding among students, faculty, and those evaluating the interdisciplinary work. To this end, the feedback and lessons learned from previous student work were used to identify the elements common to successful interdisciplinary work. These principles include discipline-specific knowledge, multi-perspective understanding, integration, practical integrated solutions, reflection, and clarity of purpose. To illustrate the interconnectivity of these principles, a conceptual model of the characteristics was created. Initially, the intention was to create a linear model to represent the core principles. However, several issues, such as missing connections and limited complexity, led to the immediate conclusion that a linear model could not completely describe complex nonlinear problem solving. The resulting model, which illustrates a cyclical thinking process, is shown in Figure 4.

Figure 4. The Cyclical Model of the Key Interdisciplinary Characteristics. This model demonstrates the interconnectivity of the defined interdisciplinary elements.

The model begins with the framing and scoping of the problem before the application of discipline-specific knowledge, which as we have seen is an essential starting point for interdisciplinary work. The core principle of the integration of ideas was partitioned into multi-perspective understanding, integration, and practical integrated solutions. Multi-perspective understanding and discipline-specific knowledge are connected by an addition sign, which symbolizes understanding a topic from multiple perspectives. This illustrates that students must be able to use discipline-specific knowledge to make this essential connection. The arrow labeled “integration” in the lower part of the model represents the synthesis of discipline-specific knowledge and multi-perspective understanding into practical integrated solutions. Practical integrated solutions are then connected to reflection via a multiplication sign to show that reflection has a multiplicative effect on interdisciplinary understanding. The arrow labeled “clarity of purpose” represents the cyclical process and shows the compilation of all the previous elements back into discipline-specific knowledge. The knowledge gained from the various parts of the cycle can be used in the further learning of other applicable disciplines. This model’s goal is not to explain the rubric, but to illustrate how interdisciplinary education is cyclical in nature, how the characteristics of interdisciplinary understanding are relevant to interdisciplinary education, and how student learning should continue to build.

Next, the core principles of what makes student work interdisciplinary were established, defined, and examined. The elements in Figure 1 above, taken from Boix Mansilla and Dawes Duraising (2007), were used as a starting point for the development of this rubric’s core principles: be well grounded in the disciplines, show critical awareness, and advance student learning through understanding (223). For the purpose of this rubric, some elements were modified and expanded to create six core principles. A list of the six core principles that were incorporated into the rubric, along with their definitions, appears in Table 3.

Problem framing and scope is derived from the idea that interdisciplinary work should show critical awareness. The definition used in the rubric is deliberately flexible, so that educators can adapt it for different project mediums and for faculty, department, and/or university requirements. Critical awareness, as defined by Boix Mansilla and Dawes Duraising (2007), includes the definition of purpose as well as the integration of ideas. The definition used for problem framing and scope in the rubric requires that the student’s work have a clearly defined purpose. This was made a separate category because we had observed a clear trend of misunderstanding among faculty regarding the level of complexity they expected. It is an important aspect of student interdisciplinary understanding: it allows the faculty to scale assignments according to the expected level of student understanding and allows the student to recognize just how complex and multi-disciplined a product the instructor is seeking. For example, if students were assigned a project on how to effectively stock a warehouse, an instructor would not have the same expectations of a freshman who has taken only introductory courses in mathematical modeling and economics as of a senior who has taken nonlinear optimization, supply chain management, and microeconomics courses. Having this requirement in the rubric makes clear the expectation that students will properly identify what they want to address, and it also gives the instructor a frame of reference for the project.

The rubric’s second core principle, discipline knowledge, reflects the criterion that work be well grounded in the disciplines (Boix Mansilla and Dawes Duraising 2007) and is intentionally more open-ended, so that it can be readily adapted to different departments, projects, and situations. Identifying theories, examples, findings, methods, and so on may not be relevant or necessary in a given problem. Therefore, although the evaluator is given an area in the rubric that calls for disciplinary knowledge, the rubric does not explicitly indicate how that knowledge is to be graded. For example, in our warehouse stocking project, a freshman might be expected to mathematically model the effects of changing employee wages on productivity. A university senior, on the other hand, might be expected to produce a business recommendation to stakeholders by addressing the intricacies of supply chain management’s effects on warehouse profits, as well as its psychological implications for employees. The discipline knowledge area of the rubric enables the evaluator to determine how much knowledge and understanding students are expected to demonstrate, while ensuring that the importance of disciplinary understanding is not lost on an interdisciplinary project.

The integration of ideas principle is the quintessential element for the interdisciplinarity of this rubric. All six core principles are important interdisciplinary factors, but if this element were removed, the rubric could be used for a project that is not interdisciplinary. Integration of ideas derives its meaning from the critical awareness and advancement of student understanding pieces identified above in Figure 1. This rubric defines integration of ideas as multi-dimensional, feasible, practical solutions with multi-faceted and seamlessly connected ideas. It is important to note the difference between being integrated and being seamlessly integrated. The seamless integration of ideas, which can take on different meanings depending on the assignment, is an indicator of true multi-dimensional, multi-faceted understanding. We define seamlessly integrated to mean that ideas are not simply laundry-listed, but instead are connected in an intelligent and logical fashion. The definitional elements multi-dimensional and multi-faceted identify the need for complexity in student work. Work is multi-dimensional when students make use of multiple dimensions of their education, in other words, multiple disciplines. Multi-faceted means that students are able to use evidence and knowledge to back up their multi-dimensional claims. The most important component is that students be able to demonstrate a clear understanding of what they are presenting. This also relates to a student’s ability to demonstrate the span of an issue’s breadth and depth. In other words, students should be able to apply disciplines to an issue or topic with an appropriate understanding of the level of each discipline. The use of extraneous disciplines merely for the sake of incorporating more disciplines does not necessarily make student work interdisciplinary; in fact, it contradicts the idea of advancing the complexity of the student’s thought process. Students who apply the appropriate level of discipline breadth and depth demonstrate their ability to use sound judgment and logic, as well as their ability to think dynamically.

The next two core principles, clarity of purpose and reflection, were added to address the students’ failure to internalize what they were learning and understanding, a failure revealed during the analysis of the USMA interdisciplinary program. The main challenge was that students did not fully grasp why a given project was interdisciplinary, or why that mattered. To alleviate this, the core principle clarity of purpose was added to the rubric to help students understand the “why”; the intent was to motivate them to define the purpose of their investigation and to take an in-depth approach to the problem. This differs from problem framing and scope in an important way: problem framing and scope focuses on a well-defined thesis or purpose statement, whereas clarity of purpose focuses on the content of student work. In other words, problem framing and scope asks whether students have a clearly stated framework for their project, while clarity of purpose asks whether they demonstrate their personal interdisciplinary understanding and then explain it well to their audience. Similarly, the next principle, reflection, calls for a clear and delineated connection of ideas and an indication that students have reflected on the interconnectivity and importance of their areas of study. These two core principles are drivers of internalization and cognitive advancement in interdisciplinary learning. They are particularly important because students often do not reflect on what they have learned. The reflection piece is intended to facilitate a deeper understanding of what they are learning and to encourage students to consider how the material fits into the greater scheme of their education.

The final element of the rubric shown in Table 3 is the presentation principle. This principle calls for information that is presented in a suitable medium with proper tone, word choice, spelling, grammar, etc. In short, did the students address the audience correctly and present their knowledge intelligently while doing so? This section can be adapted to the type of project and course for which the rubric is being used. For example, English faculty would probably expand this section because of its importance to their learning outcomes, whereas chemistry faculty may place more emphasis on the discipline-knowledge portion.

The newly developed rubric was presented to the Math course leaders for use on the freshman “mini” capstone exercise in December 2013. The rubric was sent to the faculty with minimal guidance. The feedback from the course director made it clear that the students and faculty did not fully grasp the intention or expectations behind the rubric. A few factors contributed to this: 66 percent of the faculty were new to the department; the interdisciplinary expectations were not fully explained to the faculty; although everyone received the rubric, each instructor created his or her own rubric for the mini-capstone; and the students who took the mini-capstone and the faculty who graded their work were under significant time pressure. The mini-capstone’s creation, execution, and grading were not given adequate time, owing to end-of-semester requirements at USMA during the November-December period. An important conclusion from this feedback was that the faculty needed a common understanding of what is expected on an interdisciplinary project. To achieve this for the General Chemistry capstone project in the spring of 2014, a grading calibration exercise was conducted. This calibration included good and poor examples of interdisciplinary work from the previous year’s chemistry capstone, and it showed faculty how to distinguish between good and poor work and how to use the rubric in assigning a grade.

Implementing the Interdisciplinary Rubric

The first step in implementing the rubric was calibration with the faculty. From such an exercise, the faculty should take away a common understanding of what exactly interdisciplinarity is, as well as a sense of what constitutes a good final project. The calibration exercise developed for USMA faculty who would be grading the CH102 General Chemistry capstone in the spring of 2014 was an hour-long presentation and discussion. Prior to the presentation, faculty received a packet of examples of cadet work in each of the major portions of the previous year’s capstone project. The examples included “A” work as well as examples of the common integration errors students make: the “laundry list,” the “tacked on at the end,” and the “no real knowledge” errors. In the “laundry list” error, a student mentions, and may be knowledgeable in, multiple disciplines but does not integrate them, providing instead a list of the different disciplines and explaining the relevance of each individually. In the “tacked on at the end” error, a student goes in-depth in one discipline, particularly the discipline for which the assignment was given, then tacks on a sentence or two at the end mentioning other disciplines in order to call the project interdisciplinary. The “no real knowledge” example presents a plethora of ideas but does not demonstrate that the student learned or integrated disciplines and/or ideas. With these examples, faculty became more familiar with what correct and incorrect work looked like. The “A” level example was not meant to illustrate the perfect or only solution; it was merely one example. Faculty evaluated each example on the standard A, B, C, D, F grading scale based on how interdisciplinary they felt each project was.

At the start of the presentation portion of the rubric calibration, faculty were introduced to the interdisciplinary characteristics and the model from Figure 4. This ensured understanding of the interdisciplinary characteristics prior to the introduction of the rubric itself. After the characteristics were covered, the results of the exercise the faculty had just completed were discussed. This clarified any misunderstandings faculty had about the interdisciplinary characteristics, while the examples of chemistry capstones from the previous year provided a frame of reference. Next, the rubric was thoroughly explained, showing how it was scalable, expandable, and concise enough to meet instructor needs for interdisciplinary student projects.

The General Chemistry capstone rubric for 2014 differs from its 2013 predecessor in two very important ways. First, it is significantly shorter; its two pages (compared to seven pages) emphasize quality over quantity. Instead of listing every detail of the project, the new capstone rubric has five categories that address the math modeling, leadership, information security, oral communication, and the required submission components of the project, all without specific details. This allows the students to be more creative in their answers to the given problem.

The 2013 rubric was not based on any interdisciplinary principles or examples. Instead, it listed specific requirements from the disciplines the students were supposed to integrate. The result was quite the opposite of integration: the 2013 capstone projects tended to be disjointed because of the slew of specific requirements. This year’s capstone rubric incorporates the interdisciplinary principles described in Table 3. Problem framing and scope is addressed in the Project Summary section with the requirement for a bottom line up front (BLUF), or thesis. Discipline knowledge is called for in the Discrete Dynamic Modeling, Persuasion and Conformity in a Leadership Environment, and Information Security sections. Although the course-specific requirements must be addressed, integration of ideas is assessed in the Oral Communication and Project Summary sections, which require that ideas be integrated with fluid transitions and logical ordering. Appropriate presentation is also addressed in these sections, as the rubric lays out clear expectations for students’ written and oral presentations, including tone, body language, and level of professionalism. Clarity of purpose and reflection are asked for in the Project Summary section, which calls for contingency plans and a thoroughly explained analysis of the total problem.

Initial instructor feedback on the rubric is that it better defined expectations for students’ interdisciplinary work, for both the instructor and the students. After using the rubric in the calibration exercise, instructors stated that they felt more confident and prepared than they had in 2013, when no such exercise and assessment tool were available; this year they understood what was asked of them and of the students. Initial comparisons of the interdisciplinary assessments of student work from 2013 and 2014 are quite positive. On a scale of 0–10, with zero being the least interdisciplinary and 10 the most, the average interdisciplinary score given by instructors was 5.69 in 2013 (see Figure 3 for these data). In 2014 this improved to 7.79 (a raw score of 15.5 on the 2014 rubric’s 20-point scale). There was also less variability between instructors. In 2013 the standard deviation of the mean scores assigned by the instructors was 1.86 (Figure 3); in 2014 it was .98 (a raw 1.96 on the 20-point scale), a decrease of over 47%.
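As a quick arithmetic check of the reported consistency gain, the following sketch assumes the 2014 statistics are on the capstone rubric’s 20-point scale and are normalized to the 0–10 scale used in 2013; the rescaling itself is our assumption about how the reported figures were normalized.

```python
# Normalization behind the 2013 vs. 2014 variability comparison.
sd_2013 = 1.86                              # std dev of instructor means, 0-10 scale (Figure 3)
sd_2014 = 1.96 * (10 / 20)                  # raw 1.96 on the 20-point scale -> 0.98
decrease = (sd_2013 - sd_2014) / sd_2013    # ~0.473, i.e. "a decrease of over 47%"
print(f"2014 std dev = {sd_2014:.2f}, decrease = {decrease:.1%}")
```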

Future Work and Conclusion

Now that the General Chemistry capstone for the USMA Class of 2017 has concluded, several analyses must be completed to evaluate the progress of interdisciplinary education at USMA. At a minimum, an analysis of the grades and of feedback from the students and faculty needs to be conducted. The analysis of the grades should include a comparison of their distribution with the expected distribution, as well as quantitative and qualitative analyses of the capstones compared to the previous years’ capstones. This could be done using the methods previously employed, including the Flesch-Kincaid test, a paired t-test, a distribution of the faculty’s interdisciplinary ratings similar to Figure 3, and/or a cross-course sample of projects re-graded by the course director.

The discussion and research that have taken place at West Point since the first General Chemistry capstone project in 2013 indicate that the results of this year’s changes should be positive. Although there is as yet no statistical evidence to demonstrate improvement, the general understanding of what interdisciplinarity looks like, how to produce it, and how to assess it is much more expansive now than in 2013. A likely reason is that, after a year of experience with interdisciplinary work, faculty and students at USMA now have a clearer understanding of interdisciplinary assessment and its importance.

The world is a complex and rapidly changing place that requires its future scientists, scholars, engineers, teachers, and leaders to think dynamically and across disciplines. Interdisciplinary assessment is necessary for the future of education, particularly at West Point, where we recognize that “adaptive leaders who are comfortable operating in ambiguity and complexity will increasingly be our competitive advantage against future threats to our Nation” (Elliott et al. 2013, 30). Only time will tell whether this interdisciplinary rubric has met its goal of creating a grading mechanism that can be used in multiple project mediums across multiple disciplines. Given the extensive research and analysis done at West Point to create this much-needed and useful tool, the prospects for future interdisciplinary education are promising.

About the Authors

Elizabeth Olcese

Elizabeth Olcese graduated with a Bachelor of Science degree in Operations Research from the United States Military Academy at West Point, NY in 2014. She served as a student researcher for West Point’s Core Interdisciplinary Team, focused on enhancing opportunities for interdisciplinary learning in West Point’s core academic curriculum. Upon completion of the Quartermaster Basic Officer Leader Course at the Army Logistics School in Fort Lee, VA, she will serve as a second lieutenant with the 25th Infantry Division at Schofield Barracks, Hawaii.

Joseph C. Shannon

Joseph C. Shannon graduated with a doctorate in Curriculum and Instruction, with a focus in Science Education, from the College of Education at the University of Washington. He is a former member of West Point’s Core Interdisciplinary Team, which focused on enhancing opportunities for interdisciplinary learning in West Point’s core academic curriculum. He is a former Academy Professor at the United States Military Academy and Program Director for the General Chemistry Program in the Department of Chemistry and Life Science. He is currently the Dean of Academic Programs at South Seattle College in West Seattle, Washington.

Gerald Kobylski

Gerald Kobylski graduated with a doctorate in interdisciplinary studies (Systems Engineering and Mathematics) from Stevens Institute of Technology, NJ. He is currently co-leading an effort to infuse interdisciplinary education into West Point’s core academic curriculum. He is also deeply involved with pedagogy, faculty development, and assessment. Jerry is an Academy Professor at the United States Military Academy, a Professor of Mathematical Sciences, and a Commissioner for the Middle States Commission on Higher Education.

Lieutenant Colonel Charles (Chip) Elliott

Lieutenant Colonel Charles (Chip) Elliott graduated with a doctorate in Geography and Environmental Engineering from Johns Hopkins University in Baltimore, MD and is a registered professional engineer in Virginia. He is currently the General Chemistry Program Director and the Plebe (Freshman) Director for the Core Interdisciplinary Team at the United States Military Academy. He has previously taught CH101/102 General Chemistry, EV394 Hydrogeology, EV488 Solid & Hazardous Waste Treatment and Remediation, EV401 Physical & Chemical Treatment, and EV203 Physical Geography. He is currently an Assistant Professor in the Department of Chemistry and Life Science.

References

Boix Mansilla, V. n.d. “Interdisciplinary Understanding: What Counts as Quality Work?” Interdisciplinary Studies Project, Harvard Graduate School of Education.

Boix Mansilla, V. 2005. “Assessing Student Work at Disciplinary Crossroads.” Change 37 (1): 14–21.

Boix Mansilla, V., and E. Dawes Duraising. 2007. “Targeted Assessment of Students’ Interdisciplinary Work: An Empirically Grounded Framework Proposed.” The Journal of Higher Education 78 (2): 216–237.

Elliott, C., G. Kobylski, P. Molin, C.D. Morrow, D.M. Ryan, S.K. Schwartz, J.C. Shannon, and C. Weld. 2013. “Putting the Backbone into Interdisciplinary Learning: An Initial Report.” Manuscript submitted for publication, United States Military Academy, West Point, NY.

Grassie, W. (n.d.). “Interdisciplinary Quotes.” Thinkexist.com. http://thinkexist.com/quotation/as-the-pace-of-scientific-discovery-and/1457907.html (accessed November 16, 2013).

Ivanitskaya, L., D. Clark, G. Montgomery, and R. Primeau. 2002. “Interdisciplinary Learning: Process and Outcomes.” Innovative Higher Education 27 (2): 95–111.

Newell, W.H. 2006. “Interdisciplinary Integration by Undergraduates.” Issues in Integrative Studies 24: 89–111.

Repko, A.F. 2007. “Interdisciplinary Curriculum Design.” Academic Exchange Quarterly 11 (1): 130–137.

Repko, A.F. 2008. “Assessing Interdisciplinary Learning Outcomes.” Academic Exchange Quarterly 12 (3): 171–178.

Rhoten, D., V. Boix Mansilla, M. Chun, and J. Thompson Klein. 2008. “Interdisciplinary Education at Liberal Arts Institutions.” Teagle Foundation White Paper.

Stowe, D.E., and D.J. Eder. 2002. “Interdisciplinary Program Assessment.” Issues in Integrative Studies 20: 77–101.

