Research Ethics Overview
The ethical dimensions of scientific research should not be overlooked; considering them is vital to any research project. Important concerns in the research process include why research must be ethical, general ethical theories, ethical principles, specific ethical problems, and ethics in data analysis and reporting.
Since most mass media research involves human beings, researchers must not violate the rights of participants. Ethical research is the right thing to do.
Three general types of ethical theories have evolved:
1) rule-based or deontological theories (based on categorical imperatives: principles that define appropriate action in any and all situations; the Golden Rule; moral duty),
2) balancing or teleological theories (based on the notion of utilitarianism: the good that may come from an action is weighed against the possible harm; the end may justify the means), and
3) relativistic theories (no 'absolute' right or wrong way of behaving; ethical decisions are determined by cultural norms; everything is relative).
There are four relevant ethical principles:
1) autonomy or self-determination,
2) nonmaleficence,
3) beneficence, and
4) justice.
Autonomy suggests that researchers should respect the rights, values, and decisions of other people. As a way to guarantee this principle, researchers in mass media use informed consent.
Nonmaleficence means that it is wrong to intentionally inflict harm on others, while beneficence stipulates that a researcher should remove any existing harm and provide benefits to others.
Justice is related to the equal rights of participants, suggesting that participants should be treated equally and all benefits should be shared with all who are qualified.
Ethical issues to consider in the research process include:
1) voluntary participation,
2) informed consent,
3) concealment,
4) deception, and
5) protection of privacy.
Research participation should be a voluntary process, and informed consent provides participants with the information they need to make that choice. The researcher should warn participants of any possible discomfort or unpleasantness in the research process and obtain a signed consent form from them.
Concealment involves the withholding of certain information from the participants. Deception is intentionally providing false information. Although there are arguments concerning the pros and cons of both practices, these two techniques should not be used indiscriminately.
Researchers can use two ways to protect the privacy of participants: a promise of anonymity or a promise of confidentiality. Researchers also have a moral and ethical obligation in data analysis and reporting: questionnaire responses and experimental observations should not be fabricated, altered, or discarded.
Online research raises special ethical problems. Passive analysis of online content generally raises fewer ethical issues than does active research where the investigator tries to gather information directly from online users.
Sunday, February 8, 2015
Elements of Research
Some of the important elements of research include --
concept,
construct,
variables,
measurement,
scales,
reliability, and
validity.
To conduct effective research, a researcher needs to have a clear understanding of these elements.
A concept is a term that expresses an abstract idea formed by generalizing from particulars and summarizing related observations. Researchers can simplify research by using concepts that help them formulate general and inclusive terms.
A construct is a combination of concepts. Variables are used to describe the phenomena and events that can be measured in the empirical world. Independent variables are varied by the researcher, whereas dependent variables are what the researcher wants to find out about. Researchers can observe phenomena or events by making a clear statement of what is to be observed, called an operational definition.
Measurement is the assignment of numerals to persons, objects, or characteristics. In this chapter, four levels of measurement are described. The nominal level simply assigns numerals to objects without mathematical significance. The ordinal level ranks objects according to a certain order, such as from smallest to largest. A scale is at the interval level when the intervals between adjacent points are equal. The ratio level, the highest level of measurement, has all the properties of interval scales plus a true zero point.
Measurement of some variables requires scales. This chapter describes Thurstone scales, Guttman scales, Likert scales, and semantic differential scales. Likert scales and semantic differential scales are the most commonly used scales in mass media research.
A measurement must be both reliable and valid to be useful in any research procedure. A measure is reliable if it consistently gives the same answer. Reliability consists of three components: stability, internal consistency, and equivalency. To assess the stability of a measurement, researchers can use the test-retest method with the correlation coefficient. The split-half technique and the cross-test reliability method can be used to examine the internal consistency and equivalency components of reliability. Intercoder reliability is used in the case of content analysis.
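The test-retest idea can be sketched in a few lines of Python. The scores below are hypothetical; the point is that a high correlation between two administrations of the same measure suggests stability over time.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# The same 8 respondents measured twice, two weeks apart (made-up scores)
test1 = [12, 15, 9, 20, 18, 11, 14, 17]
test2 = [13, 14, 10, 19, 18, 12, 15, 16]

r = pearson_r(test1, test2)
print(f"test-retest reliability: r = {r:.2f}")  # close to 1.0 = stable
```

In practice you would use a statistics package rather than hand-rolling the coefficient, but the logic is the same: stable measures produce strongly correlated scores across administrations.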
A valid measure measures what it is supposed to measure. Four major types of validity are: face validity (does it measure what it says it measures?), predictive validity (if SAT scores are valid indicators of college success, then those who do well on the exam should do better in college than those who don't), concurrent validity (do our results agree with other readily available results collected at the same time?), and construct validity (are our constructs valid?).
Reliability and validity are related. Reliability is a necessary condition to establish validity, but it is not a sufficient condition. A measurement can be reliable even if it is not valid. It is important to remember that a measurement must be both reliable and valid to be used in the research.
Ways of Knowing
There are a variety of ways to investigate a research question or hypothesis. In 1986, Kerlinger, using definitions provided nearly a century ago by C. S. Peirce, presented four approaches to finding answers, or “methods of knowing”: tenacity, intuition, authority, and science. As Wimmer & Dominick (2003) state:
The scientific method approaches learning as a series of small steps. That is, one study or one source provides only an indication of what may or may not be true; the “truth” is found through a series of objective analyses. This means that the scientific method is self-correcting in that changes in thought or theory are appropriate when errors in previous research are uncovered.
For example, in 1984 Barry Marshall, a medical resident in Perth, Australia, identified a bacterium (Helicobacter pylori or H. pylori) as the cause of stomach ulcers (not an increase in stomach acid due to stress or anxiety). After several years, hundreds of independent studies proved that Marshall was correct, and in 1996, the Food and Drug Administration (FDA) approved a combination of drugs to fight ulcers: an antacid and an antibiotic.
In this class, we adopt the scientific method way of learning.
A user of the method of tenacity follows the logic that something is true because it has always been true. An example is the storeowner who says, “I don’t advertise because my parents did not believe in advertising.” The idea is that nothing changes: what was good, bad, or successful before will continue to be so in the future.
In the method of intuition, or the a priori approach, a person assumes that something is true because it is “self-evident” or “stands to reason.” Some creative people in advertising agencies resist efforts to test their advertising methods because they believe they know what will attract customers. To these people, scientific research is a waste of time.
The method of authority promotes a belief in something because a trusted source, such as a parent, a news correspondent, or a teacher, says it is true. The emphasis is on the source, not on the methods the source may have used to gain the information.
Levels of Measurement
In scientific research, we examine the relationships between variables. Each of these variables, in quantitative research, can be measured. Specifically, there are levels of measurement--
1) Nominal-- A nominal (sometimes called categorical) variable is one that is typically measured in categories. It is the weakest form of measurement, but sometimes the only one available. For example, if we were to identify a person as being either "male" or "female," we don't have a lot of wiggle room-- you tend to be one or the other (biologically speaking and, yes, I'm fully aware that some people are born with sex organs of both males and females). Historically, many demographic variables are nominal. These include variables such as gender, race, religion, etc.
2) Ordinal-- An ordinal variable is one where a numeric value is assigned to the variables, but we can't assume an equal distance between the points. For example, if I ask you to list your top three favorite ice cream flavors, you might say that mint chocolate chip ranks first, french vanilla ranks second, and double-dutch chocolate ranks third. Well, based on this ranking, I know that mint chocolate chip is your favorite, but I can't tell how much MORE you like it compared to the others. You may like it 10x more than your second favorite, or it may be almost a tie between the two. I have no way of knowing. Ordinal variables indicate rank, but not distance between points.
Another way of thinking of it is to look at the results of a horse race. Sometimes a horse finishes first by 20 lengths, sometimes only by a nose. In either case, the horse still wins; the ordinal ranking records the order of finish, but not the distance between the horses.
3) Interval-- An interval level of measurement provides a rank order AND equal distance between the points, but has no real "zero." The Fahrenheit scale of temperature is an everyday example. The difference between 70 and 80 is the same as the difference between 20 and 30 (10 degrees). The zero on the scale, however, is not a TRUE zero, since the measurement does not indicate a lack of the concept, but, rather, just one point on the continuum.
4) Ratio-- A ratio level of measurement provides rank order, equal distance AND has a true zero-- which indicates none of the concept. For example, if I'm measuring "TV viewing" in hours for the past day and I observe that you did not watch even one moment of TV during that time, then I can record "0."
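The four levels stack properties on top of one another, and that stacking can be restated compactly in code. This is just the definitions above in table form; the "twice as much" check illustrates why a true zero matters.

```python
# Which measurement properties each level has (a restatement of the
# definitions above, not a library API).
LEVELS = {
    "nominal":  {"order": False, "equal_intervals": False, "true_zero": False},
    "ordinal":  {"order": True,  "equal_intervals": False, "true_zero": False},
    "interval": {"order": True,  "equal_intervals": True,  "true_zero": False},
    "ratio":    {"order": True,  "equal_intervals": True,  "true_zero": True},
}

def can_say_twice_as_much(level):
    """Ratio statements like 'twice as much' require a true zero."""
    return LEVELS[level]["true_zero"]

print(can_say_twice_as_much("ratio"))     # hours of TV watched: True
print(can_say_twice_as_much("interval"))  # Fahrenheit temperature: False
```

This is why 4 hours of TV really is twice 2 hours, but 80 degrees Fahrenheit is not "twice as hot" as 40 degrees.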
Here are some sample variables. Indicate, based on how each variable is being measured, the level of measurement:
1. What is your gender?
1. Female
2. Male
3. Other: __________
2. What color hair does the cartoon character have?
1. Blonde
2. Brown
3. Black
4. Red
5. Other: ____________
3. Indicate your level of agreement with the following statement:
College is too expensive.
5-Strongly Agree 4- Agree 3- Neutral 2-Disagree 1-Strongly Disagree
4. Please indicate your total income for the year 2009: _________________
5. Please indicate your favorite electronic media from favorite (5) to least favorite (1):
a. Internet
b. Television
c. Music
d. Video games
e. Movies
Post your answers here.
Jack
Two big concepts-- Reliability and Validity
Without question, in order to understand effective social science research (or any kind of scientific research), you have to understand the concepts of "validity" and "reliability."
To put it simply, "validity" refers to whether a measure actually measures what you purport to be measuring.
For example, if you create a concept called "television use" and then decide to measure it by asking people how many TVs they own, that MIGHT be an indicator of how much TV they watch, but you definitely have some validity problems, right? Why? (post your thoughts as a comment to this entry).
It would probably be better to measure the concept by asking people how many hours of TV per day they watch, on average (or better still, you might have them go hour by hour thinking only of yesterday and to report what they watched during the day).
I'm sure you can see that using a measurement like this is better than just asking how many TVs someone owns...
Reliability, meanwhile, simply refers to how often you can repeat the measurement and get the same result.
Let's take an example and put the two concepts together--
Suppose you have a digital bathroom scale and you step up on it and weigh yourself and it reads "145 lbs."
Now let's suppose you repeat that process ten times in a row and you get results like this--
1. 145
2. 144
3. 144.5
4. 145
5. 145.1
6. 144.5
7. 144.8
8. 145.2
9. 145
10. 144
Acting reasonably, we should see these results and say "this scale has come pretty close to giving me the same reading 10 times in a row, so I conclude it's reliable." If so, its "reliability" is strong and is not in question.
What we do not know, however, is if the scale is RIGHT. What if it's wrong by, say, 10 lbs. and you REALLY weigh closer to 155 lbs.?
The scale's readings are reliable, but we can't say for sure if the scale is valid.
To confirm that the scale really is measuring pounds, we might "test" it by weighing other items whose weights we already know. For example, a 10 lb. bag of potatoes, a weight (from a weight room) of 25 lbs., a 50 lb. bag of rock salt, the official rod of steel used as the standard to determine a "pound" and so on.
Now, if we weigh all those items and each time the scale gives us readings that are really close to what we should expect, we can conclude that the scale is indeed valid.
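The scale example can be put into a few lines of Python. The ten readings are the ones listed above; the known-weight measurements are hypothetical numbers chosen to show a scale that is reliable but not valid.

```python
# Reliability: the ten repeated readings from above cluster tightly.
readings = [145, 144, 144.5, 145, 145.1, 144.5, 144.8, 145.2, 145, 144]

mean = sum(readings) / len(readings)
spread = max(readings) - min(readings)
print(f"mean reading: {mean:.2f} lbs, spread: {spread:.2f} lbs")

# Validity: weigh objects whose true weights we already know and look
# at the error. These "measured" values are made up for illustration.
known = {"potatoes": 10, "gym weight": 25, "rock salt": 50}
measured = {"potatoes": 20.1, "gym weight": 35.0, "rock salt": 59.9}

bias = sum(measured[k] - known[k] for k in known) / len(known)
print(f"average bias: {bias:.1f} lbs")
# A tight spread but a consistent ~10 lb error: reliable, yet not valid.
```

A small spread across repeats is the reliability story; a small average error against known weights would be the validity story. Here the scale passes the first test and fails the second.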
Get it?
Let's practice. Below are several sample ways of measuring certain media concepts. Indicate to me, as comments, whether you think the measurements are valid (valid or NOT valid) and why or why not. In each case, one measurement is more valid than the other. Identify the valid measurement and indicate why the other is NOT valid.
Concept Measurement
media use
A) how much time you spend reading a daily newspaper
B) combination of how much time you spend on all media
exposure to TV violence
A) how many hours of TV you watch each day
B) how many hours of TV you watch each day broken up by TYPE of tv show watched (coded for content)
internet use
A) how many times a day you check Facebook
B) how many hours per day you spend online via computer or other handheld device
You may be thinking to yourself-- "this is pretty obvious stuff," but you'd be surprised at the number of reports professionals put together that suffer from problems of validity.
Understanding IV and DV
Ok, so in a nutshell, here's how quantitative research works (we'll talk about qualitative later)--
First, you come up with an interesting question that you'd like to answer. In other words, you come up with concepts to see if they're related. Maybe you're interested in the relationship between TV viewing and obesity in kids; the portrayal of models of fashion magazines and real-life body image among females (or males); video games and reflexes; video games and violence; sexual content on TV and real-life sexual behavior; rap/rock lyrics and attitudes toward women; and so on. The possibilities are, literally, endless.
Then, using your own observations, thoughts, opinions, etc., you decide which way you believe the relationship goes. You come up with a declarative statement like "kids who watch a lot of TV get fat."
You believe this to be true for whatever reason. This is known as your "theoretical rationale." Why do you think it might be true? As long as it makes sense (face validity), you're probably on to something.
For example, you might say-- "well, kids who watch a lot of TV are spending time watching TV INSTEAD of running around and playing outside, so they're probably not getting a lot of exercise, so they're not burning as many calories. Also, it seems likely that kids watching TV are more likely to mindlessly snack than are kids playing kickball or some other activity. So, it seems to me that kids watching TV burn fewer calories and consume more calories, so it makes sense that this might lead to more childhood obesity."
Makes sense to me.
Every premise has a theoretical rationale.
We need to refine it, however, and form a hypothesis. A hypothesis is simply a declarative, testable statement that examines the relationship between variables.
Variables are either independent (IV) or dependent (DV).
An IV is the presumed cause: the variable the researcher varies or uses to predict. A DV is the presumed effect: the outcome that changes in response. We measure the DV to see how it responds to the IV.
For example, if I developed a hypothesis on the tv-obesity topic, I might come up with something like this:
H1: The more TV a child watches, the more likely the child is to be overweight.
In this one simple hypothesis, we have three concepts that we need to identify. What do we mean by "child," "TV watching," and "overweight?"
The conceptual definition is the dictionary definition of the concept, and how we plan on "measuring" that concept is called the "operational" definition.
For example, TV watching is defined as the number of hours, on average, someone watches TV per day (conceptual definition). To measure this, we had children circle the TV shows they watched "yesterday" from a grid provided to them (operational definition).
Get it? You'd have to do this for each concept.
In terms of variables, in our first hypothesis, the IV is "TV watching" and the DV is "weight." We are suggesting, at least in this hypothesis, that an individual's "weight" will change based on how much TV he/she watches. Since we're suggesting that TV viewing can "influence" weight, weight is the DV.
To make it easier for us, most scholars follow the form of putting your IV first and your DV second in any hypothesis.
Then, once we've got this all figured out, we need to figure out how the heck we would "test" this hypothesis. The "test" is the statistical method used to figure out if the relationship between the variables is significant.
In this case, the IV is "ratio" and the DV is also "ratio." When we have two ratio variables, we always use "correlation" as the statistical test.
When the IV is "nominal" and the DV is "nominal," we use chi-square.
When the IV is "nominal" and the DV is "interval/ratio," we use t-test.
When the IV is "interval/ratio" and the DV is "interval/ratio," we use correlation.
(In this class, we won't discuss what to use if the IV is "interval/ratio" and the DV is "nominal" -- logistic regression).
We'll talk more about the tests later in the course, but it's a good idea to know which test is used in which circumstance...
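The pairings above boil down to a simple lookup, which can be written as a small function. This is just the rules from this post restated in code, with "interval" and "ratio" grouped because they use the same tests here.

```python
def choose_test(iv_level, dv_level):
    """Return the statistical test for an IV/DV level pairing,
    following the rules given in this post."""
    metric = {"interval", "ratio"}
    if iv_level == "nominal" and dv_level == "nominal":
        return "chi-square"
    if iv_level == "nominal" and dv_level in metric:
        return "t-test"
    if iv_level in metric and dv_level in metric:
        return "correlation"
    # metric IV, nominal DV -- not covered in this class
    return "logistic regression"

# Our H1: TV hours (ratio) predicting weight (ratio)
print(choose_test("ratio", "ratio"))  # prints "correlation"
```

So for H1, with a ratio IV and a ratio DV, the function lands on correlation, just as the post says.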
Obviously, you don't need to know what these statistical tests are yet, but we'll get to them next week.
Per usual, let me know if you have any questions, you crazy quant-heads.
Peace,
Jack
Overview of MMRM
Greetings--
Below is a brief overview of quantitative research. In this class, we'll cover three major quantitative methodologies-- 1) surveys, 2) experiments, and 3) content analysis, and the three major qualitative methods-- 1) interviews, 2) focus groups, 3) participant-observation.
When it comes to social science, there are two approaches-- qualitative and quantitative. While both approaches use the scientific method, a qualitative researcher believes that humans and their behaviors are far too "messy" and complex to quantify and that it is silly to even try. Instead, qualitative researchers believe that any understanding of the human condition is best ascertained by talking to people and searching for identifiable patterns among their comments.
A quantitative researcher, by comparison, believes that if a concept exists, then it can be measured, and, conversely, if it can't be measured, then it doesn't exist.
For example, if we were to take the concept of "romantic love," an abstract concept to be sure, a quantitative researcher would say that if "love" is real and exists, then it can be measured. He/she might begin by identifying indicators of romantic love and they might include items like--
amount of time spent with the other person
level of affection
level of emotional connection
level of desire to spend time with the other person
physiological responses when with the other person
willingness to do things for the other person
level of communication with the other person (talking in person, snapchat, via cell, texting, email, IMing, facebook, skype, etc.)
sexual relationship
level of commitment
and so on...
The point is this- a quantitative researcher might be able to say "If I discover that a person has a high level of affection for someone, feels emotionally connected, wants to spend a lot of time with that person, experiences physiological symptoms when in the presence of the other person (heart rate increase, general feelings of happiness, goose bumps, etc.), wants to do things for that person, spends a lot of time communicating with him/her, has a healthy sexual relationship with him/her, and is committed for the long haul, and so on, then that person is 'in love.'"
Get it?
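As a rough sketch of that logic, a quantitative researcher might combine the indicators above into a simple additive index. The item scores below are made up (each scored 1 to 5), and the 80% cutoff is an arbitrary threshold for illustration, not an established one.

```python
# Hypothetical responses on the indicators listed above, each 1-5
responses = {
    "time_spent": 5, "affection": 4, "emotional_connection": 5,
    "desire_to_be_together": 4, "physiological_response": 3,
    "willingness_to_help": 5, "communication": 4,
    "sexual_relationship": 4, "commitment": 5,
}

score = sum(responses.values())
max_score = 5 * len(responses)
print(f"love index: {score}/{max_score}")

# Arbitrary cutoff for illustration only
if score / max_score >= 0.8:
    print("classified as 'in love'")
```

The abstract concept ("romantic love") has been turned into a number a researcher can compare across people, which is exactly the quantitative move described above.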
If there's one thing I want you to get out of this class, it's this-- quantitative research is practical and is a wonderful tool we have to answer some of life's most interesting questions.
So let's begin by taking a quick look at an overview of the quantitative approach:
Quantitative research overview
Quantitative research usually is designed to produce estimates of the prevalence of knowledge, attitudes, opinions, behaviors, and other characteristics of a defined population group, whether the U.S. population as a whole or some subgroup. Most quantitative research attempts to do this by using various approaches to randomly selecting a representative sample of the audience of interest, such as random household surveys, random digit-dialing telephone methodologies, or random selection of names from voter lists or other lists believed to be inclusive of all members of the audience (for example, membership lists, motor vehicle registrations, school enrollment lists).
These and other approaches can involve complex decisions about the comprehensiveness of the data from which to sample, whether the sampling should be "stratified" (a process in which attempts are made to represent different segments of the overall population of interest, such as by gender, age group, or household income), whether adequate numbers of people can be sampled from which to reliably make population estimates, as well as specific ways in which "random selection" is actually accomplished.
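A minimal sketch of what stratified random sampling looks like in practice, assuming a population list of (person, stratum) records-- the two age strata and the sample sizes here are made up:

```python
# Stratified random sampling sketch: draw an equal random sample
# from each stratum. Strata ("18-34" / "35+") and sizes are invented.
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical sampling frame: 1,000 people split into two age strata.
population = [(i, "18-34" if i % 2 == 0 else "35+") for i in range(1000)]

def stratified_sample(pop, per_stratum):
    """Randomly draw per_stratum members from each stratum."""
    by_stratum = {}
    for person, stratum in pop:
        by_stratum.setdefault(stratum, []).append(person)
    sample = []
    for members in by_stratum.values():
        sample.extend(random.sample(members, per_stratum))
    return sample

sample = stratified_sample(population, per_stratum=50)
print(len(sample))  # 100 respondents: 50 from each of the two strata
```

Real designs often sample strata proportionally to their share of the population, or oversample small subgroups of special interest; equal allocation is just the simplest case.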
After a representative sample is selected, surveys are then administered to the sample via one or more modalities, including in-person, telephone, or mail, and, more recently, the Internet. Each of these methods presents technical challenges, among them ensuring that all members of the sample have equal access to the chosen modality so as to not introduce a sampling bias. Another major challenge is to achieve high response rates, so that the results are not potentially biased by unknown characteristics of either the "responders" or "nonresponders."
Because of the nature of this research, and its use of relatively high numbers of respondents, surveys often use a limited number of questions that are presented in a consistent way to all respondents. Approaches to questionnaire development and phrasing can vary in rigor from internal professional review and discussion to formal cognitive laboratory testing with potential respondents.
To facilitate data entry and analysis, most questions will be structured in a "closed-ended" format, which limits the respondents to making a choice between two or more predetermined alternative responses. "Open-ended" formats also can be used in this research; however, if the responses do not fall within a limited number of categories or themes that can be discerned by software or by the researcher's analysis, they can limit the value and generalizability of the findings.
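One reason closed-ended items are so analysis-friendly is that tallying them is trivial. A tiny sketch, using a hypothetical agree/neutral/disagree item with invented responses:

```python
# Tallying closed-ended responses to a hypothetical Likert-style item.
# The response data are invented for illustration.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]

counts = Counter(responses)
for choice, n in counts.most_common():
    print(f"{choice}: {n} ({n / len(responses):.0%})")
```

Open-ended responses, by contrast, must first be coded into categories like these before any such tally is possible.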
Finally, the other defining feature of quantitative research is the statistical analysis of the responses and reporting of these data in summary form. Statistical analysis can range from simple frequencies, percentages, and cross-tabulations for selected subgroups (such as age group, income, race/ethnicity) to complex analyses that may try to explain how various characteristics of the population or subgroups relate to one another (for example, what characteristics explain differences between people who engage in a healthy behavior versus those who do not).
The results of these analyses are then used to draw conclusions about the prevalence of the measured characteristics in the larger population of interest. These results are often expressed using confidence intervals or p-values to gauge the level of certainty that the reported results may in fact reflect the population as a whole.
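As a back-of-the-envelope illustration of how a confidence interval expresses that certainty, here is a 95% interval for a sample proportion using the common normal approximation. The sample size and count are made up:

```python
# 95% confidence interval for a sample proportion (normal approximation).
# n and successes are hypothetical numbers for illustration.
import math

n = 400          # hypothetical sample size
successes = 220  # e.g., respondents reporting a given behavior

p = successes / n                  # sample proportion (0.55)
se = math.sqrt(p * (1 - p) / n)    # standard error of the proportion
margin = 1.96 * se                 # z is about 1.96 for a 95% interval

low, high = p - margin, p + margin
print(f"{p:.2f} +/- {margin:.3f}  ->  ({low:.3f}, {high:.3f})")
```

Roughly: "55% of the sample reported the behavior, and we're 95% confident the true population figure is within about 5 percentage points of that." Larger samples shrink the margin, which is why sample size decisions matter so much upstream.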
Benefits of quantitative research
The most important benefit of well-designed and well-implemented quantitative research is that it can give planners of communications programs fairly reliable information about the prevalence of certain characteristics among their audience. Quantitative research that is conducted on a periodic basis also can track the effects of the program on targeted knowledge, attitudes, and/or behavioral change objectives. Quantitative methods also can be used to determine if the results of qualitative research are valid for the larger population.
Limitations of quantitative research
Researchers must consider several limitations of quantitative research before making a decision to conduct audience research using this family of approaches. The more important limitations are:
* Such approaches usually are resource intensive and can take several weeks to many months to design, implement, and analyze, thus extending the time needed to incorporate audience-based research into program planning. One option to address this limitation is to add questions to ongoing omnibus marketing and opinion-sampling surveys conducted by commercial entities. These results can be turned around much more quickly. How these firms construct their samples, what modalities they use for interviewing, and how they achieve high response rates all need to be explored, however, to assure the quality of the information provided to the planners.
* Quantitative research also requires skills in sampling design issues, sampling methodologies, survey design, statistical techniques, and how they are all applied in a communications research context. The extent to which these skills are used in planning and carrying out a quantitative study determines both the quality of the data and their generalizability to the total population.
* The structure of most surveys limits the number of questions that can be asked, the variety of responses that respondents can provide, the time each respondent has to answer questions (15–20 minutes is what many surveys aim for to minimize respondent burden and to maximize full completion of the survey), and any type of interactive process with or among respondents. Thus the data are limited in the amount and richness of audience information that can be fed into the program and message design processes.
In this class, we'll cover three major quantitative methodologies-- 1) surveys, 2) experiments, and 3) content analysis-- and three major qualitative methods-- 1) interviews, 2) focus groups, and 3) participant observation.