Emic and Etic coding

This week, my classmates and I each set out to code the text that came out of an online class meeting. The text was not terribly long. I broke it into 134 speech acts. The individual speech acts are roughly a sentence in length. Fourteen students participated in the text part of the conversation. The instructor used his mic and interacted entirely through audio, so his part in the meeting was not captured in the text.

We were asked to conduct emic and etic coding on the text. Emic coding involves coding the text without relation to any predesigned research questions. The themes that emerge are supposed to reflect the holistic, general nature of the exchange. Etic coding revolves around predetermined research questions. The information needed to answer the research question is the guiding force behind the code selection.

This coding exercise taught me about the very different roles emic and etic coding can play. The emic coding process was more difficult for me than the etic. The lack of guiding research questions made it hard to analyze the data beyond a superficial level. The research questions from the etic coding activity, however, provided a nice structure for investigating a few concepts in depth. In my etic coding, I wanted to measure the degree of knowledge construction achieved among the learners as well as the level of peer teaching that occurred during the discussion. This was straightforward using the four levels of knowledge construction proposed by Garrison, Anderson, and Archer (2001): triggering event, exploration of ideas, integration of ideas, and resolution. Using this continuum, I was able to code each speech act, count how many fell into each category, and see how often learners were engaging at each level.
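To see what that counting step looks like in practice, here is a minimal sketch in Python. The speech-act codes in it are invented for illustration (my actual data set had 134 acts); the category labels are the four phases from Garrison, Anderson, and Archer (2001), and the script simply tallies how many acts land at each level.

```python
from collections import Counter

# The four phases of cognitive presence from Garrison, Anderson, and Archer (2001).
PHASES = ["triggering event", "exploration", "integration", "resolution"]

# Hypothetical example: one code assigned to each speech act (not my real data).
coded_speech_acts = [
    "triggering event",
    "exploration",
    "exploration",
    "integration",
    "exploration",
    "resolution",
]

counts = Counter(coded_speech_acts)
total = len(coded_speech_acts)

# Report the raw count and the share of speech acts at each phase.
for phase in PHASES:
    n = counts[phase]
    print(f"{phase}: {n} acts ({n / total:.0%})")
```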

I still have one concern. It is not clear to me how valuable it is to analyze only the text portion of a discussion that took place through both text and audible speech. This analysis is missing all of the instructor’s contributions and a few student contributions, since students occasionally asked questions or commented using their mics rather than the chat box. It is possible that certain types of contributions were expressed more through the mic than through the chat box, in which case the analysis I just conducted would overlook much of what was shared. So here I am, still in more of an integration-of-ideas stage than a resolution phase of my knowledge construction about CMDA.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking and computer conferencing: A model and tool to assess cognitive presence. American Journal of Distance Education, 15(1), 7–23.

Analyzing text

I analyzed my first text using the computer-mediated discourse analysis (CMDA) method. We started the analysis in class using text about the use of Second Life as a learning environment. The group created questions and found codes associated with the first respondent’s text. As homework, we were to code the transcripts of the next three individuals.

This was easier than I thought it would be. I’m very curious how my codes compare with those of my classmates. There were a few codes I may have interpreted more broadly than others would have. For example, one code created in class was “logistics”. It was unclear to me whether I should apply the logistics code only when the individual talked directly about the logistics of the environment, or whether I should also code any reported or implied behavior that would involve logistical measures. I opted to mark both the direct references to logistics and the logistical moves made by the respondent with the “logistics” code.

I can see the need for inter-rater reliability when coding data. It would be quite unlikely for two people to code the same passage in exactly the same way. If several people are involved and must come to a consensus through inter-rater discussion, the analysis has a greater chance of being repeatable.
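If I do get to compare codes with a classmate, a simple agreement check would be a natural starting point. The sketch below computes percent agreement and Cohen’s kappa for two coders; neither statistic comes from my course materials, so treat this as one common way to quantify inter-rater reliability rather than the prescribed one, and note that the rater codes are made up.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of units the two coders labelled identically."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance (Cohen's kappa)."""
    n = len(codes_a)
    observed = percent_agreement(codes_a, codes_b)
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Chance agreement: probability both coders assign the same label at random.
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(codes_a) | set(codes_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two raters for the same six speech acts.
rater_1 = ["logistics", "exploration", "logistics", "integration", "exploration", "resolution"]
rater_2 = ["logistics", "exploration", "integration", "integration", "exploration", "resolution"]

print(percent_agreement(rater_1, rater_2))  # about 0.83
print(cohens_kappa(rater_1, rater_2))       # about 0.78
```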

As part of my class readings, I read a paper by Paulus (2007) in which the researcher used the CMDA method to analyze reflection papers and text-based communications in an online course. I noted some differences in the way Paulus approached the analysis process. One key difference was that Paulus first “unitized” each message into functional moves, meaning that the purpose(s) of each sentence was identified. Once each message was broken down into units with a single function or purpose, the researcher continued with the coding process. Perhaps unitization was only necessary because of the research questions being asked. The process helped answer the first research question: “Which communication mode(s) do experienced distance learners choose as they collaborate on project-based tasks?” (Paulus, 2007, p. 1326).

The second research question again drove the coding process. After the messages were unitized, each message was identified as conceptual or nonconceptual in nature. This helped to answer the second research question: “What do they talk about in each mode?” I’ll have to ask my professor to what degree the research questions should guide the codes.
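To pin down for myself what unitizing might look like in practice, here is a rough sketch. The message text, move labels, and conceptual tags are my own invention, not examples from Paulus (2007); the point is simply that each message gets split into functional moves, and each move carries its own codes.

```python
from dataclasses import dataclass

@dataclass
class FunctionalMove:
    """One unit of a message: a stretch of text serving a single purpose."""
    text: str
    purpose: str      # e.g. "greeting", "task planning", "content discussion"
    conceptual: bool  # True if the move is about course concepts

# Hypothetical unitization of a single (invented) chat message.
message = [
    FunctionalMove("Hi everyone,", "greeting", conceptual=False),
    FunctionalMove("I can post the draft tonight.", "task planning", conceptual=False),
    FunctionalMove("I think cognitive presence explains the drop-off we saw.",
                   "content discussion", conceptual=True),
]

# An RQ2-style question: how much of this message is conceptual talk?
conceptual_share = sum(m.conceptual for m in message) / len(message)
print(f"{conceptual_share:.0%} of the moves are conceptual")
```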

So, while this first analysis helped me answer some questions and gave me some confidence in the procedure, it also raised several new ones.

Paulus, T. M. (2007). CMC modes for learning tasks at a distance. Journal of Computer-Mediated Communication, 12(4), 1322–1345. doi:10.1111/j.1083-6101.2007.00375.x

My Personal View of Research

My general worldview is that of a contextualist. I see the world as having ultimate truths and a single, objective reality. People, however, do not see this objective reality through clear glass. What they see of the world and how they interact with it cannot be cleanly transferred into their understanding. Instead, their perceptions of the world are interpretations of their interaction with that objective reality, based on and intertwined with their own experiences and assumptions. Additionally, people’s emotions and feelings can further cloud their pursuit of ultimate truths and objective reality.

So, these are the conditions a researcher has to deal with. Researchers seek to find ultimate truths, but the researcher’s primary tool, his/her mind, is fundamentally flawed (no offense – we’re all in the same boat). The mind cannot collect or process data without first filtering it, to a degree, through the researcher’s web of experiences, assumptions, emotions, and feelings. From the start, any data the researcher is able to uncover are sent tumbling through this mental filter system and spit out as findings: interpretations meant to approximately reflect reality. While there may be exercises that can train the mind to view the world more objectively, the researcher’s mental filtering will always have some impact on the findings.

If all researchers have this filter system, then it is logical that a team of researchers with a range of experiences and perspectives could come to a more thorough interpretation of the data by sharing and evaluating their perspectives and interpretations. This collaboration could also broaden the perspective of each individual researcher, hopefully reducing the influence of any one person’s limited scope. Thus, research teams may be helpful in approaching an understanding of ultimate truths.

Likewise, data collection methods should probably be diverse. Collecting the same data through different means can help to triangulate the results and point to a more accurate interpretation of reality. The mixed methods research strategy has been used in this way. Mixed methods researchers collect quantitative as well as qualitative data and base their findings on the combined results.

We discussed the question, “What is research?” in class. While we didn’t boil the responses down to a single definition of research, we did collect a variety of perspectives. If I were to conduct research with this group, it would probably be important to hammer out our definitions of research so that we could design projects that reflect the methods the group agrees are appropriate. Having a well-defined, defensible worldview is important in research because it can help others understand why and how the researchers came to their interpretations and conclusions.