What were the key milestones in developing the original QM Rubric?

Foundations of the Quality Matters Rubric — The Inside Story
An Interview with Mary Wells

Mary Wells, former Director of Distance Learning at Prince George’s Community College, was the principal writer and Co-Director of the Quality Matters FIPSE Grant (2003-2006). Ron Legon interviewed her in January 2017 as part of a research project, sponsored by the International Network for Quality Assurance Agencies in Higher Education (INQAAHE), to examine the origins and role of faculty peer review in the QM course review process. The interview has been edited for length and clarity.

Ron Legon: Let’s focus on how key elements of the Quality Matters Rubric took shape in the months leading up to the three-year grant from the Fund for the Improvement of Postsecondary Education (FIPSE) and during the grant period (2003 – 2006). Take us back to the early 2000s and describe the status of online learning in Maryland and the influence of Jurgen Hilke’s 56-question online course inventory and faculty review process.

Mary Wells: At the time [the community colleges and four-year institutions represented in Maryland Online (MOL)] were all scrambling to get degrees online – nobody wanted to just have [online] courses. For the bigger schools it was not so much of a problem, but for smaller schools like Hagerstown and Frederick, it was a big problem. They didn’t have that many people who were both familiar with designing an online course and also able to embrace online learning.

From Jurgen’s perspective at Frederick CC, I think the opportunity to vet his approach at other institutions was irresistible, and he deserves credit for spreading the idea and getting others to see that this could work. To the best of my recollection, Jurgen (Frederick CC) invited Carroll, Chesapeake, and one or two others—not Prince George’s CC—to participate. I did not become aware of the project until slightly later. Though Jurgen’s project was not funded or supported by MOL, the nineteen MOL representatives [one each from fourteen community colleges and five four-year institutions in Maryland] met every month, so we were aware it was going on.

RL: What was the relevance of Jurgen’s approach when the opportunity to apply for a FIPSE grant on behalf of the Maryland Online schools arose in 2002-2003?

MW: Prince George’s wanted to write a FIPSE grant – initially around the nursing program. But the nursing program was in such high demand that there wasn’t room for additional students, and we couldn’t identify a nursing issue to build a proposal around. I said, “I have a project: MOL needs a way to fill the gap between trying to share courses and knowing what makes a good course.” Jurgen’s project proved fortuitous, because FIPSE wanted to know whether we had any experience in this area on which to build. Being able to cite this prototype in use at several MOL schools contributed hugely to convincing FIPSE that this could be a successful project. So, we chose to write the grant building on his work.

RL: What did you and your MOL colleagues see in Jurgen’s approach that had greater potential?

MW: MOL and its precursor organizations had always seen the advantages of moving together. There was a culture of collaboration. MOL had been trying to get course sharing going [since 1999], and one of the big roadblocks was, how do we know your courses are OK?

Faculty knew our courses were “perfect,” but were not sure about yours. So, Jurgen’s approach looked like a very promising way to address that huge problem; if we could say this course meets these specific standards at this level, that would resolve the issue and move us all forward.

RL: When did the “Quality Matters” label become associated with the grant project?

MW: That was during the writing of the grant in Spring 2003. MOL was looking for an external rubric that all the institutions could agree on, which we also felt was more in line with what FIPSE was looking for. How do you get 19 institutions to all go in the same direction? “Inter-institutional Organization for Quality Assurance” didn’t sound very compelling or easy to sink your teeth into. “Quality Matters” came to me as I was stepping out of the shower. This was about “quality” and quality matters! So, it seemed like something, and we had a deadline. The name turned out to be something people remembered – maybe not the best, but it worked. Now, it’s even become a verb – people talk about QM’ing their courses!

RL: How did the project plan – including developing the Rubric, testing it, creating a review process, etc. – take shape?

MW: Between the Fall 2002 FIPSE call for proposals and the early May 2003 deadline for submission. We had a working committee – one representative per institution – that met throughout the spring. We fleshed out the goals: (1) develop a rubric, (2) develop a process, and (3) conduct research to see if things were working. These were the three lynchpins. The grant requirements forced us to make this all cohesive around a budget and timeline. There were modifications over time, but the bones were pretty much there from the beginning.

RL: Can you identify the key elements added during the grant project to augment Jurgen’s original insights?

MW: We never felt that we had to preserve elements of Jurgen’s work, although we did wind up keeping a lot of it. We actually had a rubric for the Rubric. We knew it had to be faculty driven; this was going to be a faculty process. We all recognized that this project had a huge professional development impact.

Let me tell you two things that only a few people on the Project Management Committee know: Early in the process, Penn State was involved, and Chris Sax [Senior Associate Dean of Undergraduate Studies at the University of Maryland University College (UMUC) at that time and co-director of the FIPSE Grant] and I went up to give a presentation to what we thought was going to be a group of faculty members but turned out to be a room full of instructional designers. They gave us a huge amount of feedback, and Chris and I rewrote the Rubric based on that feedback. We rewrote it all the way driving back to Maryland from Penn State, and all that night and all the next day, and that really became more the basis of the Rubric as we know it than Jurgen’s list.

And the other huge, huge thing was Cynthia France—an instructional designer from Chesapeake CC—being on the Project Management Committee. She told us early on that the Rubric needed to be holistic – more than just a laundry list – and had to have an intrinsic connection; that at the end you had to have a course, not just a pile of laundry. We wanted to establish alignment from that basic internal structure, which is what makes a course so effective.

RL: How long did it take to hammer out the basic shape of the rubric?

MW: We did about eight iterations in that first year, looking for the right combination. We wanted a faculty-friendly Rubric. We spent a lot of time talking about what the final product should look like, so we wouldn’t end up with junk. And we had the right Project Management Team, because Cynthia France had a Ph.D. in instructional design. We had Jurgen, Chris, Wendy Gilbert, Kay Kane, John [Sener], me…. It was the right team.

Once we developed the Rubric, we started thinking about whether the Standards were all equal. That’s when we came up with the idea that some Standards are essential, and others are very important or important.

RL: How important was research in shaping the Rubric?

MW: We did a pretty thorough review of research, but there wasn’t that much research available at the time. So, we also relied on best practices to establish validity. We looked at standards from many sources.

RL: Why was the decision made to focus on course design rather than course delivery?

MW: The choice was very practical – “What can you get done in three years?” We didn’t think we could deal with both design and delivery and get buy-in from [all 19] institutions. We didn’t want to couple design with delivery or anything associated with promotion and tenure, so we chose not to focus on the faculty member’s performance. If we had tried to focus instead on faculty behavior, we would have gotten pretty much nowhere.
 
Design seemed a more objective or neutral issue. Course design standards were something people were looking for. The focus on design led to buy-in by so many institutions. But we always thought that this would be step one, that the next step would be a grant dealing just with delivery, and that the third step would be students: “How do the students get their input into the process?” So, we never thought a course design rubric was the end-all; it’s just what we thought we could accomplish in three years.

RL: In his pilot reviews, Jurgen had limited his course reviewers to faculty with online teaching experience. Did the project embrace Jurgen’s practice of faculty peer review as the exclusive method of QM course review without considering alternatives?

MW: We did consider an industrial model where we would hire and train a cadre of reviewers. We thought the process would be a lot messier with peer review depending on faculty taking the time and energy. We knew that if we [used instructional designers as reviewers], we would get a much more consistent approach, but that was never our problem. In most of the institutions we were working with, except for UMUC, faculty were developing courses without instructional design support. Our problem was how do we get faculty more involved in online teaching and bring them up to these standards, given that most of them never had any training in instructional design.

We talked about it in the Project Management Team. The professional development of faculty was first and foremost, and having instructional designers as reviewers wasn’t going to solve that problem.

RL: Were your instructional design advisers comfortable with limiting reviews to faculty?

MW: They became involved as we went because we created an instructional design group, and their job was to give us input. But we resisted almost every attempt to involve them in reviews because they were not faculty, and we did not think faculty would listen to them. We did consider having an instructional designer as an adviser to the review team, but that wasn’t adopted because of the cost. But we tried to be as liberal as we could be, defining experience teaching online to include adjunct faculty. I still think that was a good decision.

RL: What was the rationale for requiring the use of at least one external reviewer on each review team?

MW: We talked a lot about institutional bias, and that’s why the external reviewer was there. Within an institution you get in the habit of thinking “that’s how it should be done because that’s how we do it.” Having somebody external to the institution brought in a fresh perspective.

RL: When did you realize there was a need to train your faculty reviewers?
 
MW: Right away. I was also the chair of the Process Committee. We started the grant in September 2003, and by March of 2004 Jurgen, Chris, and I were offering training. The early trainees were hand-selected. Jurgen and I were given strict instructions on how long we could talk, because both of us could talk forever. It was a day-long training, with a debriefing afterwards about what they liked and didn’t like.

RL: Was use of faculty peer reviewers seen as a way to encourage other faculty to allow their courses to be reviewed?

MW: We thought it was critical. We knew that faculty tend to listen more to each other. Instructional designers, at that point, and possibly still to some extent, do not have the same influence with faculty.

RL: So, you thought faculty would accept recommendations for changes more readily if they came from faculty peers?

MW: Right. If I, as [online] director, said this is the wrong way to do something, they might not pay attention, but they would listen if the advice came from other faculty members, especially someone from another institution, a subject matter expert, a colleague at a conference, etc. If those people said it, suddenly it would become important. We found that when faculty understood the reason for the review – the design and how to make it better [rather than faculty evaluation] – they were not defensive, and we had no problem in most cases. They were more likely to buy in.

RL: What reaction did you get from faculty around the country when you began presenting the Rubric at conferences and institutions?

MW: Regarding the reception of the course design rubric, we felt like rock stars. We drew such large conference audiences that we often had to bar the door. People were riveted by this—talking about what makes a good online course.

RL: Is there anything you would like to add to this picture of the formative period of the Quality Matters Rubric?

MW: The underlying principles about continuous improvement and being collaborative and collegial – supported by the research and literature – were there from then until now. Who couldn’t get behind these principles? They were part of the Rubric right from the very beginning in “the rubric for the Rubric.” That was the end goal – what we needed to have. That’s why this has been so important to institutions. It’s not the instructional designers who needed training on how to design, it’s the faculty. And so, having them look at other people’s courses was the best way to get them to look at their own. When they found out they could be on a review without actually having been reviewed – that was really big for most faculty. That was one of the things they loved about this because they could test it out before anybody looked at them.