There is an abundance of literature on whether online narrated presentations are effective (i.e., lead to positive learning outcomes such as increased attendance and achievement) compared to traditional lectures, but far less on what makes online narrated presentations effective. Perhaps that is because instructors expect simply to replicate the traditional lecture in the online environment, or assume that what makes a traditional lecture successful will also make an online presentation successful.
This session will highlight research about those individuals who are “on point” for online learning at their institution. This will include their background and experience, how their position and group are organized, responsibilities of the role, and overall institutional approach. Key findings from two national studies, along with results from the recent CHLOE research reports, will be discussed and shared. This session will be helpful to chief online learning officers (COLOs) and any faculty and staff collaborating with them.
Do students agree with the items included in the QM Rubric? Do they rate QM Standards at the same level of importance? Help us find out! In 2024, a research team will repeat the 2009-2011 national study of student perspectives on quality using the recently released QM Higher Education Rubric, Seventh Edition. Join the session to discuss study details and learn how to participate.
We all know the importance of collecting data on our individual and institutional efforts and projects. But what happens when the data doesn't show what you expected? How do you move forward? This presentation will focus on a case example of creating collaborative online working groups, the efficacy data collected, and what happened when the results did not match expectations.
In 2019, Quality Matters published the Academic Rigor white paper series, which provides an observable definition of rigor, distinguishes teachers' and learners' responsibilities, disentangles academic rigor from both the curriculum and student learning, and leverages objective evidence to document rigor so it can be improved.
When it comes to the development of online courses and programming, quality certainly does matter. But what does that mean in the context of practical application and realistic implementation? As online learning professionals, faculty members, and academic leaders, you understand that the answer to this question is unique to each of your institutions. In the ever-changing landscape of online learning, the needs of students and the responsibilities of institutions are evolving just as rapidly.
All too often, online course quality is determined by a course’s compliance with industry standards rather than consultation with actual students. In this session, we will share how we used research-based methods to develop a Student Advisory Board for Instructional Technology, ensuring that student feedback is actively used in refining online course evaluation and improvement strategies.
How can you convince everyone that accessibility is their responsibility? You begin with a practice that makes their lives easier.
I have made one practice the focus of every accessibility conversation. From my internal team to my doctoral colleagues, to students, to faculty, to staff, and even my husband, I have convinced many and continue to invite everyone to implement one practice that will create a more accessible digital environment for all.
This presentation delves into the integration of the UAkron Online Promise (UAOP) with the Quality Matters rubric, a strategic approach aimed at improving student retention and learning outcomes. The UAOP, a commitment to excellence in online education, coupled with the rigorous standards of the Quality Matters rubric, creates a robust framework for course design and delivery.