Learning Not Guaranteed: An Annotation

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed.), Handbook of Research for Educational Communications and Technology (pp. 605–620). New York: Macmillan.

A common theme ran through the studies cited in this chapter on hypertext-assisted learning: no single system increases learning every time; there are too many variables. Cognition depends on the learner’s prior knowledge and its impact on choices made within the hypertext, on the structure of the page (from automated self-regulation to colors and images), and on whether the activity was introduced with pre-reading strategies. No one method worked for all learners.

System structure offered some interesting theses. Do hierarchically structured hypertext activities work better than unstructured ones? It depends on learner capabilities: more structure supported learning for lower-level learners, while less structure gave more capable learners opportunities to put ideas together in new ways. More structure was also more effective for fact-finding, while less structure encouraged problem-solving and big-picture thinking. Learner variables affected outcomes (as is the case in all education research): passive interaction led to less comprehension, while active engagement led to more. Too much complexity had the potential to give learners “intellectual indigestion” (p. 614), and when goals did not encourage divergent thinking, researchers warned that the richness of hypertext might be squandered. In short, the Goldilocks principle applies (some may prefer Vygotsky’s Zone of Proximal Development): for every learner there is an educational space that offers just the right level of complexity and interest, and it is the educator’s job to help learners navigate into that space. Now that this space includes the IoE, our jobs are that much more difficult.

Even though the results were relatively similar in every section, the authors broke the information into manageable chunks. The first section laid out the theoretical stance, comparing the construction-integration model with cognitive flexibility theory. In the sections that followed, the authors used this framework to compare the influence of cognitive factors, system structures, and learner variables across the two models. In every case their point was clear: the research was inconclusive, owing to the lack of commonality among the research variables and to flaws in the studies’ designs.

While this study was based on a review of closed hypertext systems, it does reveal the challenges inherent in using internet resources today. If stand-alone hypertext activities were often too complex and led to “cognitive entropy,” how much more complex is the situation when students attempt to read and learn through web-based searches? If there is a Goldilocks Zone for every learner, and if, as I believe, it may be found on the internet, how can an educator help students find it, or teach them to find it on their own? It’s no wonder that student research can lack depth or miss the mark on learning targets. Even with carefully constructed questions or learner outcomes, even with a list of suggested research sites, the allure of the rabbit hole is too great, and the sheer volume of information is hard to sift through. Learning online IS disorienting. With the advent of programs like Go Guardian that can limit the number of sites students can access, I wonder whether using a tool like this to narrow (NOT eliminate) student choice could be helpful. Would it increase efficiency and open up time for more exploration? Would a limitation of this kind interfere with creativity, or would limited access increase creativity by increasing the need for innovation? I wonder.
