This week, my classmates and I each set out to code the text that came out of an online class meeting. The text was not terribly long. I broke it into 134 speech acts. The individual speech acts are roughly a sentence in length. Fourteen students participated in the text part of the conversation. The instructor used his mic and interacted entirely through audio, so his part in the meeting was not captured in the text.
We were asked to conduct emic and etic coding on the text. Emic coding involves coding the text without relation to any predesigned research questions. The themes that emerge are supposed to reflect the holistic, general nature of the exchange. Etic coding revolves around predetermined research questions. The information needed to answer the research question is the guiding force behind the code selection.
This coding exercise taught me about the very different roles emic and etic coding can play. The emic coding process was more difficult for me than the etic. The lack of guiding research questions made it hard to analyze the data beyond a superficial level. The research questions from the etic coding activity, however, provided a nice structure for investigating a few concepts in depth. In my etic coding, I wanted to measure the degree of knowledge construction achieved among the learners as well as the level of peer teaching that occurred during the discussion. This was easily done using the four levels of knowledge construction proposed by Garrison, Anderson, and Archer (2001): triggering event, exploration of ideas, integration of ideas, and resolution. Using this continuum, I was able to code each speech act into a category and count how often learners were engaging at each level.
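The tallying step can be automated once the codes are recorded. A minimal sketch in Python, assuming the codes have been entered as a simple list (the labels and sample data below are illustrative, not my actual coded transcript):

```python
from collections import Counter

# The four phases from Garrison, Anderson, and Archer (2001).
PHASES = ["triggering event", "exploration", "integration", "resolution"]

# Hypothetical codes assigned to a handful of speech acts.
coded_speech_acts = [
    "exploration", "triggering event", "exploration",
    "integration", "exploration", "resolution", "integration",
]

# Count how many speech acts fall into each phase and report
# each phase's share of the total.
counts = Counter(coded_speech_acts)
for phase in PHASES:
    n = counts.get(phase, 0)
    print(f"{phase}: {n} ({n / len(coded_speech_acts):.0%})")
```

With a real transcript, the list would simply hold one code per speech act (134 entries in my case), and the same loop would give the distribution across the continuum.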
I still have one concern. It is not clear to me how beneficial it is to analyze discourse that took place partly through text and partly through audible speech. This analysis is missing all of the instructor's contributions and a few student contributions, since students occasionally asked questions or commented using their mics rather than the chat box. It is possible that certain types of contributions were expressed more through the mic than through the chat box, in which case the analysis I just conducted would overlook much of what was shared. So here I am, still in more of an integration stage than a resolution phase of my knowledge construction about CMDA.
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking and computer conferencing: A model and tool to assess cognitive presence. American Journal of Distance Education, 15(1), 7–23.