Powell Chapter 9 
Coding is the conversion of data or responses to a numerical format so that they can be tabulated or tallied.  This expedites analysis, especially when computers are used.
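As a minimal sketch of what coding looks like in practice (the response values and code scheme here are invented, not from Powell), textual survey responses can be mapped to numbers for tallying:

```python
# Minimal sketch: coding textual survey responses as numbers so they
# can be tabulated. The Likert labels and codes are hypothetical.

LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "neutral", "agree", "strongly agree", "disagree"]

# Convert each textual response to its numeric code for tabulation.
coded = [LIKERT_CODES[r] for r in responses]
print(coded)  # [4, 3, 4, 5, 2]
```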
Descriptive statistics characterize how frequently data of certain values appear, support pictorial representations of the data, identify what is "typical" in a data set, and show the extent of variation.  Inferential statistics, on the other hand, help researchers estimate future circumstances or generalize from random samples.
Wildemuth Chapters 29-37 
Ch. 29
Content analysis is a research method for analyzing the information contained in fixed media such as paper, digital media, video, audio, and film.  It is a systematic form of analysis that follows the scientific method.
Manifest content is content that is clearly (unambiguously) present in the message or medium.  Latent content cannot be directly observed in the message or medium; it is conceptual in nature and must be inferred.
Ch. 30
Conventional qualitative content analysis uses coding categories derived directly and inductively from the raw data of a study.
Directed content analysis begins coding with a theory or with relevant research findings.  It is most often used to validate or extend an existing theory or conceptual framework.
Summative content analysis starts with counting words or other manifest content, then extends the analysis to latent meanings and themes.  This approach is often used to explore word usage inductively, as sketched below.
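As a rough illustration of the counting step (the documents and search terms below are invented), the manifest portion of a summative analysis could begin like this:

```python
# Rough sketch of the manifest (counting) step of summative content
# analysis: tallying occurrences of chosen terms across documents.
# The texts and terms are invented for illustration; interpreting the
# latent meaning behind the counts is left to the researcher.

from collections import Counter

documents = [
    "The library offers reference help and reference training.",
    "Patrons asked for help at the reference desk.",
]

terms = {"reference", "help"}

counts = Counter()
for doc in documents:
    for word in doc.lower().replace(".", "").split():
        if word in terms:
            counts[word] += 1

print(counts)  # Counter({'reference': 3, 'help': 2})
```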
Ch. 31
In discourse analysis, the term coherence means that the outcome(s) of your analysis should make the universe of discourse you have examined clearer and more focused.
The fruitfulness of discourse analysis findings is a measure of how much insight those findings provide that could aid in the analysis of new discourse.
Ch. 32
Analytic induction is a form of inductive reasoning used to evaluate and analyze qualitative data.  It is a method for deriving a theory or hypothesis directly from the data of a study.
One of analytic induction's drawbacks is that it has difficulty accounting for continuous variables; it works much better with binary or otherwise well-compartmentalized variables.
Ch. 33
Measuring central tendency involves identifying the typical value or values that embody the overall distribution of values for a given variable.  This can be expressed as the mean, median, or mode.
Dispersion is the measure of how (and how far) a variable's values spread out from the central tendency.  The measure used to express dispersion is informed by the measure chosen for the central tendency (for example, the standard deviation pairs with the mean, while the range or interquartile range pairs with the median).
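A short sketch using Python's standard statistics module (the data values are hypothetical) shows the three measures of central tendency alongside two common measures of dispersion:

```python
# Central tendency and dispersion with the standard library.
# The data (hypothetical checkouts per patron) are invented.

import statistics

checkouts = [4, 7, 7, 9, 13]

print(statistics.mean(checkouts))    # 8
print(statistics.median(checkouts))  # 7
print(statistics.mode(checkouts))    # 7

# Dispersion: the range pairs naturally with the median,
# the standard deviation with the mean.
print(max(checkouts) - min(checkouts))  # range: 9
print(statistics.stdev(checkouts))      # sample standard deviation: ~3.32
```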
Ch. 34
The frequency distribution of data is a representation of how many times a particular category of a variable occurs.
By putting the frequency distribution of two variables in a table with the categories of one variable displayed as rows and the categories of the other variable displayed as columns, the relationship between the two variables can be observed and described.  This is called a cross-tabulation table (also a two-way table, contingency table, or bivariate table).
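As an illustration (the variables, categories, and records are invented), a frequency distribution and a simple cross-tabulation can be built with the standard library alone:

```python
# Frequency distribution of one variable, then a cross-tabulation of
# two variables. All categories and records are hypothetical.

from collections import Counter

# Each record: (patron status, preferred format)
records = [
    ("student", "ebook"), ("student", "print"), ("faculty", "print"),
    ("student", "ebook"), ("faculty", "ebook"), ("faculty", "print"),
]

# Frequency distribution of a single variable.
status_freq = Counter(status for status, _ in records)
print(status_freq)  # Counter({'student': 3, 'faculty': 3})

# Cross-tabulation: statuses as rows, formats as columns.
crosstab = Counter(records)
for status in ("student", "faculty"):
    row = [crosstab[(status, fmt)] for fmt in ("ebook", "print")]
    print(status, row)
# student [2, 1]
# faculty [1, 2]
```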
Ch. 35
Sequential event analysis is the analysis of the individual steps of a process, or events, over time.  It can be based on the transitions between events or states, or on optimal matching algorithms, which compare complete sequences of events.
It is important to remember that examining state transitions tells a researcher little about the sequence as a whole, while optimal matching looks at whole sequences rather than at individual transitions.
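A minimal sketch of the transition-based approach (the event sequence, modeled loosely on search-session actions, is invented) tallies how often each event is followed by each other event:

```python
# First-order transitions: count each adjacent pair of events.
# The event sequence is hypothetical.

from collections import Counter

events = ["query", "view", "query", "view", "save", "query"]

transitions = Counter(zip(events, events[1:]))
for (src, dst), n in transitions.items():
    print(f"{src} -> {dst}: {n}")
# query -> view: 2
# view -> query: 1
# view -> save: 1
# save -> query: 1
```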
Ch. 36
Correlation is an analysis method that examines the relationship between two variables.  The relationship's direction can be positive (as one variable increases, the other tends to increase) or negative (as one increases, the other tends to decrease).  Note that correlation alone does not establish that one variable causes the change in the other.
The strength of a correlation ranges from 0 (no correlation) to 1 (perfect correlation) in absolute value; the coefficient itself runs from -1 to +1, with its sign giving the direction.  Strength measures how closely the two variables vary together, not how much one influences the other.
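As a sketch (the data are invented, and statistics.correlation requires Python 3.10 or later), a Pearson correlation coefficient makes both the direction and the strength visible:

```python
# Pearson correlation: the sign gives the direction of the
# relationship, the absolute value its strength. Data are invented.

import statistics

hours_open = [40, 50, 60, 70, 80]
visits = [200, 260, 290, 360, 400]

r = statistics.correlation(hours_open, visits)
print(round(r, 3))  # ~0.994: a strong positive association
```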
Ch. 37
In research studies, the researcher wants to explore effects that arise out of certain circumstances.  These can be due to an actual effect (true causation) or to chance (which may look like true causation but is not).  Chance effects are also called sampling error.
If the data have equal variances and normal distributions, a t test can be used to calculate the probability that observed differences are due to chance.  The t statistic is calculated by dividing the difference between two means by the standard error (where the standard error of a mean is the sample's standard deviation divided by the square root of the sample size).
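Written out (as I read the chapter's description; the one-sample form matches the standard-error definition above, and the two-sample form combines both samples' errors), the t statistic is:

```latex
% One-sample form: comparing a sample mean \bar{x} (std. dev. s, size n)
% to a hypothesized mean \mu
t = \frac{\bar{x} - \mu}{s / \sqrt{n}}

% Two-sample form: the standard error combines both samples
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2 / n_1 + s_2^2 / n_2}}
```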
References
Connaway, L., & Powell, R. (2010). Basic research methods for librarians (5th ed.). Santa Barbara, CA: Libraries Unlimited.
Wildemuth, B. (2009). Applications of social research methods to questions in information and library science. Westport, CT: ABC-CLIO.