Monday, August 22, 2016

Decisions about Qualitative Coding

Groups of researchers worked with C-PEER to code archival documents for specified constructs: (a) STEM equity; (b) teachers' use of available time (TAUT); (c) English Language Learner (ELL) supports; and (d) professional learning structures with coaching for culturally relevant pedagogy (CRP). Taking guidance from Basurto and Speer's (2012) calibration of qualitative data as sets for qualitative comparative analysis (QCA), researchers compared elementary schools systematically while trying to "give justice to within-case complexity" (Rihoux & Ragin, 2009; Basurto & Speer, 2012, p. 156). Our research team decided not to perform a full QCA, though its approach (thinking about how to define conditions related to outcomes) was an important part of our analysis framework. I identified the types of documents I wanted accessed and designed initial codes based on expectations from the current literature. I trialed those initial codes with my co-researchers and then refined them, performing multiple check-ins and corrections to establish inter-coder agreement. Agreement was arrived at informally through code trials, discussions, and revision of disagreements. This reflexive method of developing codes across coders allowed me to plan, code, monitor, and adjust throughout our coding sessions.
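The informal agreement checks described above can be sketched as a simple percent-agreement calculation between two coders. This is a hypothetical illustration only: the code labels, document counts, and coder values below are invented, and the study's actual checks were conducted through discussion rather than computation.

```python
# Sketch: percent agreement between two coders labeling the same documents.
# Labels reuse the construct abbreviations from the study (STEM equity,
# TAUT, ELL, CRP); the documents and assignments are hypothetical.

def percent_agreement(coder_a, coder_b):
    """Return the share of documents the two coders labeled identically."""
    assert len(coder_a) == len(coder_b), "coders must rate the same documents"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders' labels for five hypothetical archival documents.
coder_a = ["STEM_equity", "TAUT", "ELL", "CRP", "TAUT"]
coder_b = ["STEM_equity", "TAUT", "ELL", "TAUT", "TAUT"]

print(percent_agreement(coder_a, coder_b))  # 0.8
```

Disagreements flagged this way (here, the fourth document) would then be discussed and the codebook revised, matching the trial-discuss-revise cycle described above.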

Data Collection Tools

We recruited seven elementary schools in an urban Colorado school district to participate in this study (N = 7). Within these schools, we surveyed 186 teachers and educational leaders, yielding a sample subset of n = 105. Schools were chosen using the Colorado Department of Education's School Performance Framework, and the seven participating schools ranked as follows: three "Performance Level" schools, two "Improvement Level" schools, one "Priority Improvement Level" school, and one "Turnaround Level" school. Using a range of elementary schools (multi-site) allowed researchers to analyze STEM foundational thinking and instructional activities as a comparative case study. Results from student perception surveys and leader and teacher surveys (quantitative data) were analyzed against student achievement and school academic growth data from each school's Unified Improvement Plan (UIP) and relevant trend data (qualitative data), along with archival structural data (e.g., school schedules and team and committee workflows). These data also included observations from classroom visits and archival curriculum and course schedules. In collaboration with C-PEER, researchers followed a short-cycle, iterative approach to research, working in collaborative learning teams; C-PEER staff and doctoral candidates formed research teams to co-design and create survey items.

Using Qualtrics survey software, we administered the Effective Learning Communities teacher survey by sending links to building principals, who disseminated them to their staff. This allowed the greatest number of teachers to participate while remaining anonymous. Teachers were randomly assigned to one of two versions of the Qualtrics survey, which increased the total number of items surveyed while keeping each survey short enough to hold teachers' engagement throughout. The questions on each version were similar in context (e.g., STEM) but presented in random order. We grouped survey questions into nine areas: (a) lesson preparation; (b) staff collaboration; (c) student use of feedback and reflection on learning; (d) academic work relevant to students; (e) teacher access to instructional resources; (f) teachers engaging students in problem solving; (g) students participating in problem-solving activities; (h) teacher engagement with students' families; and (i) students showcasing mastery of academic content. This split design supported our collection of reliable and valid data.
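The random assignment of teachers to one of two survey versions can be sketched as below. This is a minimal illustration, not the study's actual mechanism (Qualtrics handled randomization in practice); the teacher IDs and version labels are hypothetical.

```python
import random

# Sketch: randomly assign each teacher to one of two survey versions,
# so each version stays short while the combined item pool doubles.
# Teacher IDs and version names are hypothetical placeholders.

def assign_versions(teachers, seed=None):
    """Map each teacher ID to a randomly chosen survey version."""
    rng = random.Random(seed)  # seed allows a reproducible assignment
    return {t: rng.choice(["Version A", "Version B"]) for t in teachers}

teachers = ["T01", "T02", "T03", "T04", "T05", "T06"]
assignments = assign_versions(teachers, seed=42)
for teacher, version in assignments.items():
    print(teacher, version)
```

Because assignment is random rather than tied to school or role, responses to the two versions can be pooled as comparable samples when analyzing the full item set.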

Quantitative Data Analysis

Teacher response rates for the Effective Learning Communities teacher survey ranged from 37% to 100%. With regard to STEM-foundational thinking, teachers at each elementary school responded at the following rates: Annie Easley, 84%; Benjamin Banneker, 100%; Richard Spikes, 39%; Aprille Ericsson, 81%; Mae Jemison, 50%; Shirley Jackson, 37%; and Elijah McCoy, 50%.
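As a quick check, the per-school rates reported above can be summarized to confirm the stated range; the values below are exactly those listed in the text.

```python
# Per-school teacher response rates as reported above.
response_rates = {
    "Annie Easley": 0.84,
    "Benjamin Banneker": 1.00,
    "Richard Spikes": 0.39,
    "Aprille Ericsson": 0.81,
    "Mae Jemison": 0.50,
    "Shirley Jackson": 0.37,
    "Elijah McCoy": 0.50,
}

low = min(response_rates.values())
high = max(response_rates.values())
print(f"Response rates ranged from {low:.0%} to {high:.0%}")  # 37% to 100%
```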