Friday, July 26, 2013

Types of Field Notes


Groenewald (2004) describes the 4 types of field notes he used in his phenomenological research:

• Observational notes (ON) — notes of 'what happened' that the researcher deems important enough to record. Bailey (1996) emphasises the use of all the senses in making observations.
• Theoretical notes (TN) — 'attempts to derive meaning' as the researcher thinks or reflects on experiences.
• Methodological notes (MN) — 'reminders, instructions or critique' to oneself on the process.
• Analytical memos (AM) — end-of-field-day summaries or progress reviews.

Sources cited:
Groenewald, T. (2004). A phenomenological research design illustrated. International Journal of Qualitative Methods, 3(1). Article 4.

Sunday, July 14, 2013

Types of Validity in Research

Validity is important to any type of research. Types include (but are NOT limited to!):

  • External validity - the degree to which the research can be generalized to other settings. There are 2 types: population validity, and ecological validity
  • Internal validity - the degree to which the research accurately demonstrates cause and effect (i.e., rules out alternative explanations for the observed results)
  • Criterion validity - the degree to which the research results agree with an external criterion, or with other research measuring the same construct. An example of this might be whether GPA scores correlate with SAT scores.
  • Content validity - the degree to which the research measures accurately the "content" associated with the construct studied. So, for instance, if a literacy test only looks at grammar, it is missing many different parts of the construct called "literacy" since literacy requires much more than just grammar!
  • Construct validity - the degree to which the "construct" measures what it's supposed to measure. For instance, does IQ actually measure intelligence as it is supposed to, or something else, such as test-taking proficiency or cultural knowledge? An example of a problem with construct validity in IQ testing is that tests are specific to a country/jurisdiction. IQ tests in the US use imperial measurements (inches, feet, gallons), so a person in Canada might score poorly on such questions not because they cannot convert measurements, but because they are familiar with metric (not imperial) units.
  • Face validity - similar to content validity, face validity has to do with whether the research appears to measure what it is supposed to measure, in the judgment of observers. A famous example would be the "tests" used during the Salem Witch Trials: they might have seemed valid to some at the time, but we know now that they were seriously flawed.
  • Predictive validity - the degree to which the research will predict certain types of results if similar research is done in the future. For example, are SAT scores predictive of academic performance in college? 

Sample size determination for quantitative research

Continuous data is data where the responses fall on a continuum (e.g., Likert-type scales), unlike categorical data (e.g., gender, occupation, etc.).

This article by James E. Bartlett, Joe W. Kotrlik and Chadwick C. Higgins describes the process for sample size determination. They also offer a useful table that compares continuous and categorical data calculations.

In the article the authors invite use of this table *if* the margin of error is appropriate for a researcher's study. If the researcher selects a different margin of error, the sample sizes need to be re-calculated.
A final note to researchers: the degree to which you can generalize is based on the sampling METHOD (not just the sample size!), so be sure to familiarize yourself with sampling methods as well.
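The sample sizes in the article's table are built on Cochran's sample-size formulas. A minimal sketch of those calculations in Python (the function names, parameter names and example values below are mine, not the authors'):

```python
# Sketch of Cochran's sample-size formulas, which Bartlett, Kotrlik and
# Higgins build their table on. Names and example values are my own.

def cochran_continuous(t, sd, margin, population=None):
    """Sample size for continuous data (e.g., a Likert-type scale).
    t      : t-value for the chosen alpha level (e.g., 1.96 for alpha = .05)
    sd     : estimated standard deviation of the scale
    margin : acceptable margin of error, in scale points
    """
    n0 = (t ** 2) * (sd ** 2) / margin ** 2
    if population and n0 / population > 0.05:  # finite-population correction
        n0 = n0 / (1 + n0 / population)
    return round(n0)

def cochran_categorical(t, p, margin, population=None):
    """Sample size for categorical data (e.g., yes/no responses).
    p : estimated proportion (0.5 is the most conservative choice)
    """
    n0 = (t ** 2) * p * (1 - p) / margin ** 2
    if population and n0 / population > 0.05:  # finite-population correction
        n0 = n0 / (1 + n0 / population)
    return round(n0)

# 7-point scale: sd estimated as 7/6, margin of error 3% of the 7 points
print(cochran_continuous(1.96, 7 / 6, 7 * 0.03))
# yes/no item: p = 0.5, 5% margin, population of 1679
print(cochran_categorical(1.96, 0.5, 0.05, population=1679))
```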

There are also a few rules of thumb you might consider, especially if the population size is not known (Hill, R. (1998). What sample size is "enough" in internet survey research? Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 6(3/4), 1-10). These include:

  • Generally speaking, it's difficult to justify fewer than 10 cases, or more than 500
  • In simple matched-pairs experimental designs, 10 cases can suffice, but more complicated experimental designs OR correlational research should have at least 30 cases. When these are broken down into categories (e.g., male/female), multiply the minimum number of cases by the number of categories
  • For multiple regressions, samples should be at least 10 times the number of variables (so 5 variables means you should have at least 50 cases)
  • For purely descriptive research, the sample should be 10% of the population
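The rules of thumb above can be summarized in a small helper. This is my own sketch of Hill's (1998) guidelines, not anything from the article; the design labels are hypothetical names I chose:

```python
# Hypothetical helper summarizing Hill's (1998) rules of thumb.
# The thresholds come from the list above; the function itself
# (and the design labels) are my own sketch.

def minimum_sample(design, n_variables=None, n_categories=1, population=None):
    if design == "matched_pairs":
        n = 10                       # simple matched-pairs design
    elif design in ("experimental", "correlational"):
        n = 30 * n_categories        # at least 30, times the number of categories
    elif design == "multiple_regression":
        n = 10 * n_variables         # at least 10 cases per variable
    elif design == "descriptive":
        n = round(0.10 * population) # 10% of a known population
    else:
        raise ValueError(f"unknown design: {design}")
    # it's hard to justify fewer than 10 cases, or more than 500
    return max(10, min(n, 500))

print(minimum_sample("multiple_regression", n_variables=5))  # 5 variables -> 50
print(minimum_sample("experimental", n_categories=2))        # e.g., male/female -> 60
```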

To check the sufficiency of your data when the population is not known, you can perform a "split-half analysis", in which you divide the data in two and see whether both halves generate the same conclusions.
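A minimal sketch of such a split-half check, using randomly generated stand-in data (a real check would use your actual responses, and whatever "same conclusions" means for your analysis):

```python
# Sketch of a split-half sufficiency check: shuffle the responses,
# divide them in two, and compare whether both halves point to the
# same conclusion (here, similar means). Data are made up.
import random
from statistics import mean

random.seed(42)
responses = [random.randint(1, 5) for _ in range(200)]  # e.g., a 5-point scale

shuffled = random.sample(responses, len(responses))
half_a, half_b = shuffled[:100], shuffled[100:]

print(f"half A mean = {mean(half_a):.2f}, half B mean = {mean(half_b):.2f}")
# If the two halves diverge noticeably, the data may not yet be sufficient.
```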