To begin with, articles can use completely different data – from interviews to Facebook posts.
Moreover, articles (as is the case with two of these, the fab lab and information overload pieces) can combine several different kinds of data. At the same time, the
authors' research can also use different methods of analyzing their
data, regardless of what the sources are. This was especially
apparent to me in the article on fabbing and the article on information overload. Both articles use
interviews, but the different ways in which they manipulate these data
lead to different kinds of conclusions and results. Both how one
acquires data and what one does with it impact the final results, and
the intersections between the two can dramatically change the
research.
Tuesday, 26 February 2013
Different ways of doing research
Research Ethics and Generalizability
On February 21, I attended the Research Ethics
in the Social Sciences & Humanities workshop, presented by Dr. Dean Sharpe of
the University of
Toronto (U of T) Office
of Research Ethics. In this workshop, I learned that
in the U.S. legislation governing research ethics, the word
"generalizable" is included in the definition of the type of research
that must be reviewed by ethics professionals. In the Canadian guidelines (Canada
has no legislation on research ethics), research does not have to be
generalizable. The inclusion of the word “generalizable” in the U.S.
legislative definition must have deep implications for the types of research
that are required to be given an ethics review in the U.S., as compared to the
research projects that require a review in Canada. However, I am not sure what
those implications are. For example, does the absence of the need for
generalizability in the Canadian definition mean that there is more
non-generalizable research conducted in Canada
than in the U.S.?
Monday, 25 February 2013
Hang around, observe, and record [your] observations
I enjoyed the opening illustration used in the Shaffir article of the two lost men in the woods: the one who's flustered because he's been lost for three days, and the other who's made peace with the idea of being lost because he's been lost for so long. I can identify with the poor soul who's been lost for three days, and find comfort in the fact that it's not about finding a way out, but making peace with the situation as it is.
The article goes on to state that "any attempt to codify the process - much less to force it into the rigid protocols of 'hard science' - is to miss the point." Focusing on the process is futile, since it's not the end but the means. The research must instead be focused on the end result, where the sum of the observations eventually creates a story or unfolds a cultural lesson that can only be taught through the process of hanging out.
Ethnographic research is useful in that there are sometimes huge disconnects between what people say they do versus what they actually do, and also, many of us aren't able to fully articulate our needs or wants. The whole "known knowns, known unknowns, unknown unknowns, unknown knowns" idea comes into play. There are things that become second nature to us, and we don't think about them until they're pointed out. The downside to observational research is that it's time consuming and therefore very costly, which means that sample sizes remain relatively small. There's also the well-known Hawthorne effect: observation will inevitably alter the behavior of those being watched.
Wednesday, 13 February 2013
Test Your Awareness
Following up on the video we watched today in class about the basketball passing and the gorilla, here is the video that I mentioned from Transport for London:
http://www.youtube.com/watch?v=Ahg6qcgoay4
Tuesday, 12 February 2013
"Real life in real time is lots of noise and not much signal." (Luker, p. 101)
...and thus we sample. We sample because there is no way we could gather all
of the possible bits of information that would illuminate our research interest.
We sample because others, including federal institutions and large organizations, have gone before us and paid the big bucks necessary to produce random probability surveys. We sample because it's what we do: we do it when we
read a book, watch a movie, attend a class or taste something new. We sample to
gain familiarity.
In terms of research, Luker notes that sampling allows us to get a better handle on the "subset of facts, observations, people ... we will pay attention to." (p. 101)
Luker goes on to outline why we sample, and concludes that it's not about numbers, but about observable phenomena that we 'just can't put [our] finger on [as] yet' (p. 103). In thinking about my research question so far, I've thought about the surveys and interviews I'll need to conduct in order to collect the data necessary to add value, and which I've sort of indirectly assumed could serve as population samples. I'm not sure how the two intersect, or whether they are in fact the same thing: sampling versus interviews versus surveys. It goes back to the point of "real life in real time" being lots of noise and not much signal. Mass quantities of information, much of which intersects, are constantly washing over us, forcing us to decide what's worth paying attention to.
-Mandi Arlain
References: Luker, K. (2008). Salsa dancing into the social sciences: Research in an age of info-glut. Cambridge, MA: Harvard University Press.
Sampling and Its Drawbacks
In Chapter 6 of Luker's (2008) Salsa Dancing into the Social Sciences, a research method called sampling, in which random subjects are observed and compared to each other, is discussed. Luker states that researchers use sampling as a form of data collection "because there is no way [they] can gather all of the possible bits of information that would illuminate research question[s]" (p. 101). Personally, I find that this method carries a great danger of producing narrow conclusions. If only a few random subjects are observed, one cannot see the big picture of how they relate to the whole population of subjects. Those selected for sampling may not represent what is widespread or accepted as the norm in the population. While I agree with Luker that sampling saves time and energy for researchers (p. 101), those factors should not compromise the quality of research one must do to answer a research question accurately.
Luker goes on to say that sampling is not meant to represent the larger population, but rather the larger phenomenon (p. 103). In this case, researchers would have to pick and choose their subjects in order to ensure that they are relevant to the cause at hand. This, however, could certainly result in the manipulation of research conclusions, because if researchers have control over whom they study, they could promote their own agenda by getting the results they are looking for.
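To make the worry about small samples concrete, here is a toy simulation of my own (not from Luker; the population size and percentages are invented for illustration): if 30% of a hypothetical population holds some view, repeated small random samples will estimate that proportion with noticeable spread.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 10,000 people; 1 = holds some view (30% do)
population = [1] * 3000 + [0] * 7000
random.shuffle(population)

# Draw five small random samples of 20 people each, and estimate
# the proportion holding the view from each sample
estimates = [sum(random.sample(population, 20)) / 20 for _ in range(5)]

print(estimates)  # the estimates scatter around the true value of 0.30
```

Increasing the sample size narrows that spread, which is the trade-off behind Luker's point: sampling buys time and energy at the cost of some risk of an unrepresentative picture.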
Luker, K. (2008). Salsa dancing into the social sciences: Research in an age of info-glut. Cambridge, MA: Harvard University Press.
Operationalizing Variables
Both the Luker and the Knight readings for this week have left me thinking about choosing specific terms. Luker's example about how women define rape showed just how much word choice is key to the answers a researcher will receive (pp. 120-121). In Luker's example, the researcher and the respondents had very different definitions of the word "rape." That, in itself, is a rich area for research. I also wonder, though, if researchers ever do studies where they change the terms but intend to search for the same thing. For instance, I might conduct two otherwise identical studies, one of which asks if you have ever plagiarized and one of which asks if you have ever cheated on an assignment. Would the results be different (this isn't the best example but it's what I came up with - imagine another case where the results likely would be different)? If so, what does that reveal? I'd be curious to read a study like that.
Luker, K. (2008). Salsa dancing into the social sciences: Research in an age of info-glut. Cambridge, MA: Harvard University Press.
Reusing qualitative data
Last week’s Hammersley (2010)
reading got me thinking about some issues relating to archiving qualitative
social science data. In a study I read regarding data sharing across different
academic disciplines, social science disciplines lagged behind almost every other
discipline grouping except medicine (Tenopir et al., 2011). Although the study
was fairly limited in its scope and the discrepancies in data sharing likely
had to do with confidentiality issues involving human participants, it got me
thinking about some of the difficulties that might be involved with submitting
qualitative data to open data projects.
One of the
major justifications for open data is that it allows for the reuse and
reinterpretation of shared data. Hammersley (2010) describes the construction
process involved with transcribing interviews. He explains that a number of
decisions are made which are dependent on the cultural and cognitive
understandings of the transcriber (2010, p. 560). Kuula (2010) explains that a major concern for qualitative researchers submitting their data for public consumption is a fear of misinterpretation (p. 14). Moreover, Kuula points out that there is a tendency among original researchers to feel that only they can fully understand and interpret qualitative data, because of the co-constructed nature of interviews (Kuula, 2010, p. 14). Because interview transcriptions can be so selective in what they include (Hammersley, 2010), I wonder if it is possible to make all of the decisions and interpretations of the transcriber transparent so that qualitative data can be truly reused.
Hammersley, M. (2010). Reproducing or constructing? Some questions about transcription in social research. Qualitative Research, 10(5), 553-569.
Kuula, A. (2010). Methodological and ethical dilemmas of archiving qualitative data. IASSIST Quarterly, 34(3), 12-17.
Tenopir, C., Allard, S., Douglass, K., Aydinoglu, A. U., Wu, L., Read, E., & Frame, M. (2011). Data sharing by scientists: Practices and perceptions. PLoS ONE, 6(6), e21101.
Wednesday, 6 February 2013
Knight’s take on Face-to-Face Research
Chapter 3 in Small-Scale Research covers
different methods of face-to-face research. Knight specifically highlights fixed-response
questions, observation, interviews, focus groups, memory work, experiments, etc.
A point he seems to reiterate throughout is how much the researcher’s
participation in face-to-face methods can affect the data, or responses he/she
gets. Not only can a researcher influence his/her respondents through the way the
questioning method is framed (i.e., the types of questions), but also through the
sort of mood that the researcher gives off, and the specific way he/she acts
during this interaction. This was particularly fascinating for me because I
never thought that the attitude of the researcher had anything to do with the
results, but it made me think back to when I took an introductory psychology
course in undergrad and we were all required to partake in six hours of these
sort of tests outside of class for, I guess, students who were doing their
master's or PhDs. Coming from the perspective of the respondent, that sort of
extra time devoted to answering someone else’s questions can seem a bit taxing,
and I remember that the students running the experiments were always very
appreciative and encouraging about it all. Some of them even gave out trinkets
like lion postcards for our participation. When I think about it, the idea of
six hours of being a respondent was far drearier than the actual process. I see
now how big an impact the researcher/experimenter’s overall mood can have on
the results. If you are pleasant and can easily put the respondent at ease and
somehow create an environment where he/she doesn’t feel like your questions are
unnecessarily eating up his/her time, then your results will, to some degree,
reflect that. Knight really highlights the importance of interpersonal skills
in the control of face-to-face research, and how rewarding or detrimental it
can be for your results.
Knight, P.T. (2002). Small-Scale Research. Thousand Oaks, CA: Sage Publications.
Tuesday, 5 February 2013
Focus Groups vs. Textual Analysis
In reading about focus
groups for this week, I've been thinking about how my intended
research method – textual analysis – relates to in-person
qualitative research. Textual analysis is obviously different from
in-person research in that it involves analyzing written texts rather
than people's speech. But as Lunt and Livingstone point out, the two
are not that different. Analyzing focus group research involves
making sense of written texts too, something that Lunt and
Livingstone point out can be almost identical to research in
humanities fields like literature (p. 94). Whether the researchers
use the kind of critical analysis common to the study of literature
or the method of systematically coding the transcripts for different
types of content, they are engaged in what seems the same thing as
textual analysis.
As a result, the only
differences are those that emerge from how the text is generated –
in a focus group the researcher generates the text under specific
circumstances, whereas usually in textual analysis the texts already
exist and are chosen by the researcher. So what concerns emerge?
Lunt and Livingstone talk at length about communication – the
interactions between people in the group. The group dynamics of a
focus group are certainly one of its unique benefits and not
something that can be replicated in textual analysis. I do think it
makes sense to talk about communication in terms of texts, though.
Different texts can be seen as part of the same broader discussion or
discourse. They may not speak directly to each other (although
sometimes they do), but they're still involved in communication. In
studying texts, we do miss out on the specific interactions and
direct responses of a focus group, but we can gain a broader sense of
how topics are discussed.
References:
Lunt, P. and Livingstone, S. (1996). Rethinking the focus group in media and communications research. Journal of Communication, 46(2), 79-98.
Monday, 4 February 2013
The benefits and pitfalls of the focus group
Lunt and Livingstone's article starts off by stating some of the benefits of focus groups, which include getting people to engage on a level where you know not only what they mean but also 'how they understand'. That position is followed by Höijer's point that “the obvious and well-documented effect of group pressure raises too many problems to permit taking the group discussion as a valid basis for research.” Thinking of my own experience with focus groups, I can agree to some extent with both positions.
Although random sampling wouldn't - or, better yet, hasn't - worked well for me in terms of engaging, sitting with a group of people I know and feel safe with, and who perhaps know and feel safe with me, has allowed for results that are honest and useful. Conversely, it is also true that ad hoc group placements are uncomfortable for most, and the group dynamic, as has happened with me on several occasions, can encourage an unequal and perhaps not very forthcoming exchange of ideas and suggestions.
Are focus groups useful? We know they are. Are they also problematic because of the pressures raised through group dynamics? Also true! I guess it boils down to what is being researched and/or observed, and what outcomes are being sought.
-Mandissa A
References
Lunt, P. and Livingstone, S. (1996). Rethinking the focus group in media and communications research. Journal of Communication, 46(2), 79-98.