AERA 2016

This year, the American Educational Research Association will hold its annual meeting here in Washington, DC. The meeting is April 8-12. You can read more about it here. We will attend most of the sessions on online education. Perhaps we will see you there.


Article Review

This review discusses a relatively recent meta-analysis by Means, Toyama, Murphy, and Baki. Here is the citation:

Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47.

This is a meta-analysis of the effectiveness of online and blended education compared to face-to-face education, as measured by learning outcomes. The researchers searched five online databases for studies meeting the following criteria: the study used a quasi-experimental design (QED) or randomized controlled trial (RCT), it examined online learning (not audio or other media), and it measured outcomes (not perceptions or opinions). Studies were then further screened for whether an effect size could be calculated. The researchers focused on effect sizes because a similar study done in 2004 found effect sizes at or near zero, indicating no difference between online education and face-to-face education. They hypothesized that more sophisticated technology and pedagogy for online and blended education would now yield a statistically significant effect size.

For each study, an effect size, the upper and lower limits of a 95 percent confidence interval, a z-value (a two-tailed test of the null hypothesis), and the number of units (participants or groups of participants, such as a section or classroom of students) were collected. Unfortunately, not all studies reported enough data to calculate an effect size, so the analysts estimated some values. For example, the pre-test/post-test correlation r was not reported in many of these studies, so the researchers used a conservative estimate of r = 0.70 for studies where the pre-test and post-test measures were similar and r = 0.50 where the measures differed. Many other adjustments were made in order to calculate a z-score and make the studies comparable (see pages 17-18). Studies that provided enough data to calculate an effect size were retained for the meta-analysis.
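
To make concrete how an assumed correlation enters these calculations, here is a minimal Python sketch using one standard matched-groups (pre/post) effect-size formula. The function name, the numbers, and the exact formula are illustrative assumptions for this post, not the review's actual procedure; Means et al. describe their own adjustments on pages 17-18.

```python
import math

def paired_effect_size(mean_pre, mean_post, sd_diff, n, r):
    """Standardized mean difference and its variance for a pre/post (matched) design.

    r is the pre/post correlation; when primary studies did not report it, the
    reviewers imputed a conservative value (0.70 for similar measures, 0.50 otherwise).
    """
    sd_within = sd_diff / math.sqrt(2 * (1 - r))       # recover a raw-score SD from the change-score SD
    d = (mean_post - mean_pre) / sd_within             # standardized mean difference
    var_d = (1 / n + d ** 2 / (2 * n)) * 2 * (1 - r)   # sampling variance of d
    return d, var_d

# Illustrative (made-up) numbers: the same summary data under the two imputed r values.
for r in (0.70, 0.50):
    d, v = paired_effect_size(mean_pre=70, mean_post=75, sd_diff=8, n=40, r=r)
    print(f"r = {r}: d = {d:.2f}, variance = {v:.3f}")
```

Note how the imputed r changes both the effect size and its variance, which in turn changes how heavily the study counts in the pooled analysis.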

The mean of the effect sizes was used to compare the online and blended conditions with face-to-face instruction. The effect sizes were weighted to prevent smaller studies from having undue influence, and the researchers then computed the variance of the effect sizes (the Q statistic) to determine “the extent to which variation in effect sizes was not explained by sampling error alone” (p. 16).
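
For readers who want to see the mechanics, here is a small Python sketch of generic inverse-variance (fixed-effect) pooling with a Q statistic. The effect sizes and variances below are hypothetical, not taken from the review.

```python
import math

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect, 95% CI, z-test, and Q statistic."""
    weights = [1 / v for v in variances]
    total_w = sum(weights)
    mean = sum(w * d for w, d in zip(weights, effects)) / total_w
    se = math.sqrt(1 / total_w)                 # standard error of the pooled effect
    z = mean / se                               # two-tailed test of H0: mean effect = 0
    ci = (mean - 1.96 * se, mean + 1.96 * se)
    q = sum(w * (d - mean) ** 2 for w, d in zip(weights, effects))
    return mean, ci, z, q, len(effects) - 1     # Q ~ chi-square(k - 1) if effects are homogeneous

# Hypothetical effect sizes and sampling variances, not the review's data.
d_values = [0.35, 0.10, 0.28, -0.05, 0.42]
variances = [0.02, 0.05, 0.03, 0.04, 0.06]
mean, ci, z, q, df = fixed_effect_summary(d_values, variances)
print(f"pooled d = {mean:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), z = {z:.2f}, Q = {q:.2f} on {df} df")
```

Weighting each study by the inverse of its variance is what keeps a tiny study with a huge effect from dominating the pooled estimate.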

Significant Q values suggested that something other than the online/blended vs. face-to-face contrast may influence effect sizes, so the studies were further analyzed to identify variations among them that might influence effect size. Following a Sloan-C framework, three categories of variables were identified as potential influences on effect size: (a) online instruction practices, (b) the conditions under which the study was conducted, and (c) aspects of the study methodology itself. Moderator variables for the analysis were derived from these categories.
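
One common way to test a categorical moderator such as fully online vs. blended is an analog-to-ANOVA partition of Q into between-group and within-group parts. The sketch below is generic and self-contained; the grouping, numbers, and function names are hypothetical and do not reproduce the authors' exact procedure.

```python
def q_statistic(effects, variances):
    """Homogeneity statistic Q for a set of effect sizes (inverse-variance weights)."""
    weights = [1 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return sum(w * (d - mean) ** 2 for w, d in zip(weights, effects))

def q_between(groups):
    """Analog-to-ANOVA test for one categorical moderator.

    groups maps each moderator level to (effects, variances). Q_between is
    Q_total minus the sum of within-group Qs; under homogeneity it is roughly
    chi-square with (number of levels - 1) degrees of freedom.
    """
    all_d = [d for effects, _ in groups.values() for d in effects]
    all_v = [v for _, variances in groups.values() for v in variances]
    q_within = sum(q_statistic(e, v) for e, v in groups.values())
    return q_statistic(all_d, all_v) - q_within, len(groups) - 1

# Hypothetical effect sizes split by a moderator, purely to show the mechanics.
groups = {
    "fully online": ([0.05, 0.12, -0.02], [0.04, 0.05, 0.03]),
    "blended":      ([0.30, 0.42, 0.25], [0.03, 0.04, 0.05]),
}
qb, df = q_between(groups)
print(f"Q_between = {qb:.2f} on {df} df")
```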

Results indicated that online learning was more effective than face-to-face learning, with a mean effect size of +0.20 (p < .001) across the 50 contrasts analyzed under the Sloan-C framework, but this was true only when fully online and blended studies were considered as one group. By contrast, when fully online programs were considered separately, the comparison with face-to-face instruction was not statistically significant, whereas the blended comparisons were. In addition, three moderator variables were found to be statistically significant, confirming that factors other than whether the learning was online/blended or face-to-face influenced the study outcomes.

Most researchers would probably agree with the decision to include moderating variables. Even without knowing the Q-statistics for this dataset, most researchers of online education agree that blended learning varies tremendously. In fact, most would contend that no study can compare fully online and blended learning without moderating variables, if only because the field writ large does not yet fully agree on definitions of blended learning.

Readers should note the extensive adjustments made in order for these studies to be considered comparable, so the findings should be taken with caution. If future meta-analyses applied the same adjustments to all studies, at least every study would be treated equally. Statistics exist to compare disparate data validly, and perhaps this tactic would make the data more comparable; on the other hand, it could serve only to reduce the sample size to the point where power is too low to show any difference.


Some ideas for engaging other posters on an academic discussion board

Staying with the theme of engagement, here are some tips for engaging other students on an academic discussion board. Sometimes, students know what they want to write about, but they have trouble putting it in context with the rest of the discussion. This short list may help them get started.

  • Look at the bio section while reading the student posts. That way, you can bring the student’s experience into the question and personalize the interaction.
  • Bring specific lecture and textbook material to the conversation as much as possible.
  • If students are discussing the same things in different posts, refer them to each other’s posts. If students interact with each other, the discussion will be more lively and interesting.
  • Offer a possible solution to a question asked by a student, encourage others to list their own ideas or solutions, and keep a running list in another post.
  • Mention other students and their ideas by name so everyone can see all that was written about a particular subject.
  • Change the title of the post to draw in more readers/writers and add variety to the board. That way, a student can see where their ideas may fit just by looking at the board.
  • Ask a question of a particular student, then open the question to all students. (“What do you think of that? As well, if anyone else has used this, post your experience, so we can all learn from it”).

How Student Note-taking May Affect Outcomes in a College-level Course

As mentioned in the previous post, Crawford published an article about some of the first empirical studies in student note-taking. The bibliographic information for the article is:

Crawford, C. C. (1925). Some experimental studies of the results of student note-taking. Journal of Educational Research, 12(5), 379-386.

To be perfectly honest, I found the Procedure section of this article tremendously difficult to follow, which likely affected my understanding of the study; however, the Summary/Conclusion section was a little better. The study consisted of seven experiments. The first three tested students right away after the lecture, and the last four tested students a week later, allowing for a review of the notes:

  • (I) Students listened to a lecture and then took a 20-minute traditional quiz on the content; some of the students took notes and some did not.
  • (II) Students listened to a lecture and took both a traditional quiz and a true-false quiz; again, some of the students took notes and some did not.
  • (III) This experiment was similar to Experiment II, but the testing was even more comprehensive.
  • (IV) Students with notes were tested a week later and allowed to review their notes for 5 minutes before the test, while other students took the test without any review.
  • (V) Students were tested to compare taking notes with review against listening only with no review.
  • (VI) & (VII) These experiments compared note-taking in live-lecture courses with note-taking in courses where the content was read from a text.

In most cases, note-takers did better on the quiz than non-note-takers, with a slight advantage for non-note-takers only on true/false questions when the testing occurred immediately after the lecture. Crawford also found that the value of a student’s notes was more pronounced in the delayed-testing situations than in the immediate-testing experiments, that the quality of notes affected outcomes, and that students scored higher on a quiz when the class was lecture-based than when they only read the text.


Note-taking for Student Engagement

One of the first research studies on note-taking efficacy was done in 1925 by Crawford, whose findings suggested that note-taking in a college course may affect student performance on assessments of the material. Most recent research about note-taking for academic purposes tends to be highly specialized, perhaps because the field has moved beyond establishing its efficacy. At the CARDE, interest in note-taking research originates from a new research project on note-taking in online education. The data for this project have been collected, and the background information for the written reports is being gathered. A recent report published in The American Journal of Psychology was interesting because it focused specifically on note-taking while viewing instructional video compared to absorbing the same information via instructional text. Here is the bibliographic info:

Bohay, M., Blakely, D. P., Tamplin, A. K., & Radvansky, G. A. (2011). Note-taking, review, memory, and comprehension. The American Journal of Psychology, 124(1), 63-73.

This article details a research project in which note-taking was analyzed as a method of engaging students. Participants were university students, and the experiment took place in the school environment, not a lab. In the first experiment, students read a text and then completed a multiple-choice exam to test their comprehension. The exam questions were differentiated into those that required verbatim recall of the text, paraphrased recall, and inference. Some participants were given a chance to review their notes prior to the exam, while others were told they could review their notes but were never actually allowed to do so. Exams were administered both immediately and one week after the reading of the texts. In the second experiment, the information was presented in video format, and note-takers were assigned to either a written-note group or a typed-note group.

Findings suggest no meaningful difference between typed and written notes. Differences were also noted between text and video notes, but they were minor and observed only at the most complex level of comprehension. The overall findings suggest that active engagement with information in the form of taking notes supports better performance, especially at deeper levels of comprehension. There was no support for the idea that taking notes is a distraction that diverts a student’s attention. See Kintsch & Van Dijk (1978) for a better explanation of the differentiated forms of memory represented by the different question types in this experiment. The findings support other recent research on note-taking in college courses.


iNACOL 2013 Student Panel

The breakfast session on the second day of the iNACOL 2013 Online and Blended Learning Symposium featured a panel of students currently attending a virtual (online) or blended school. The students ranged from 3rd grade to high school, and they offered honest answers to questions posed by the audience. One of the most interesting questions asked was, “Why did you choose a virtual or blended school experience over a traditional experience?”

 The answers were interesting and varied, so I captured them to share with you.

Reasons students chose a virtual or blended schooling experience

  • Accelerated graduation
  • To motivate others in school (friend/family)
  • Work at own pace (not rushed)
  • Work in a smaller environment (if the student’s school is large)
  • Content
  • Meets student demands
  • Instructions are clearer (click here)
  • More personal learning experience
  • Good teachers
  • One online class is mandatory in the student’s school
  • Software adjusts to my learning (adaptive)
  • Integrated tutoring in the software (this is cool)

Any thoughts on these reasons? Did they surprise you? Why/why not? Please feel free to share your comments.

Available video feed

A video feed of the student plenary panel is available.


Presentations from iNACOL Symposium for Online and Blended Learning

As I mentioned in a previous post, I’m gathering my notes from the iNACOL symposium and the Sloan-C conference to share with you all. I hope the notes are helpful. This is the first presentation I attended.


Presentation Summary

This information was presented by a representative from Tel Aviv Center for Educational Technology, a non-profit organization inside Israel and a for-profit organization outside of Israel (selling their content—apps, videos, etc.).

This group launched a STEM virtual high school, which is now a year old. They believe it to be a new model for education. Their project was launched in cooperation with the Trump Foundation and the Israeli Ministry of Education. It is a fully online program.

The group did not refer to the current academic literature about online schools, nor did they build their school on a particular foundation or theory. I suspect that they started the school to fit their needs and were surprised to find themselves as developers, curriculum designers, etc. Their presentation was largely about an LMS they created using .NET and Flash (I think?). However, they said that they are no longer using that LMS, and they have since moved all the content and system to Moodle for management.

Though the presenters seemed less informed about the academic literature surrounding virtual schools than might be expected, this presentation was not without merit. On the contrary, this was almost a pure, instinctively created project with no outside influence, which makes it more interesting. My analysis is this:

The school administrators discovered that small-group tutoring was tremendously successful for STEM education, but it is also tremendously expensive. Expense may be a hurdle for personalized learning in the next few years. I think online tutoring—its success-rates and expense—deserves more attention as we move forward.

The exigence for this project was:

  • Global growing demand for scientists
  • High school STEM enrollment declining in Israel
  • Number of teachers declining
  • Rural schools cannot offer STEM classes
  • Widening inequality in the country
  • High tech industry in Israel is at risk

Their challenge was to increase the number of STEM students in Israel.

Their suggested solution was:

  • Virtual class teaching during in-school hours
  • Virtual tutoring (mentoring) in small synchronous groups during off-school hours
  • High-quality, monitored, self-paced drill and practice (commitment to individual work and high-quality content)
  • All sessions are recorded, so students can refer back to them
  • Provide interactive digital content
  • In this model, the students spend 3 hours in a classroom with a teacher; the rest of the time they work on their own.
  • They hope this model will bring equal opportunity to rural students
  • They found that more than 70% of students re-watch recorded class sessions, so the recordings are imperative to this model.
  • Each tutor was responsible for only 3 students.
  • Expensive as it may be, mentoring in small groups is very important. It’s a way of introducing personalization into the program.
  • Synchronous classroom met twice per week—3 hours total for 10th grade, 4 hours total for 11th grade
  • Afternoon tutoring went 2 hours per week in groups of 3 students.
  • The small-group tutoring model is very expensive, but it was worth it in this case.
  • Illusion: teachers do not see the faces or eyes of the students, so they do not know whether the students understand the material. This bothers the teachers. Students become more active learners because the teachers, worried about them, ask lots of questions.
  • Transparency: info is available 24/7. You can see everything (attendance, homework, etc.).
  • Tradition: people don’t like change.
  • Scalability: it is hard to convince policy-makers, etc.
  • Attendance
  • Completed homework
  • Quality of homework
  • Students have the info available all the time.

As mentioned above, an interesting discovery from the project is the success of online tutoring. It validates the model of a large group combined with small labs or breakout sessions for online teaching. I think other schools could find value in adding or enhancing this model in their own classrooms.
