Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality

Author(s): Tonya B. Amankwatia (Regent University, USA)
Copyright: 2019
Pages: 22
Source title: Crowdsourcing: Concepts, Methodologies, Tools, and Applications
Source Author(s)/Editor(s): Information Resources Management Association (USA)
DOI: 10.4018/978-1-5225-8362-2.ch004

Abstract

Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to evaluate programs regularly in order to improve quality. Are there new opportunities to expand user and stakeholder input, or to involve others in e-learning program evaluation? This chapter asks researchers and practitioners to rethink existing paradigms and methods for program evaluation. Crowdsourced input may help leaders and stakeholders address persistent evaluation challenges and improve e-learning quality, especially in Massive Open Online Courses (MOOCs). After reviewing selected evaluation paradigms, models, and methods, the chapter proposes a possible role for crowdsourced input and examines crowd definition, affordances, and problems toward a taxonomical framework with possible applications for e-learning. The goal is to provide a reference for advancing the discussion and examination of crowdsourced input.
