The main activity of this week was drafting a proposal for some dissertation support funding. There is a required section in this proposal called "Analysis" where you are supposed to discuss how you plan to go about analyzing your data - not in general terms (e.g., "I'm going to fit such-and-such model" or "I'm going to develop a set of codes and apply them"), but in a way that shows exactly how you are going to use the results of the analysis to answer your research questions. I'm still working on this section, so this will be a little thinking out loud. For now, I'm just going to concentrate on how I'm going to analyze the data coming out of the assessment, rather than the interview data with scientists.
I will have two primary sources of data: transcripts of think-alouds with high school students while they complete each item, and the set of completed assessments from ~200 students.
My first question is whether we can develop items with high reliability and high validity - I throw these words around a lot (why, of COURSE we want high reliability and validity), but there are many kinds of reliability and validity. So I think step 1 is to decide which types of reliability and validity matter most for this project. *note to self - think about this* But I think the basic answer in this proposal will be that I will run a basic psychometric analysis (looking for items with good discrimination and reasonable internal reliability), test for unidimensionality (via EFA, for example), and then use unidimensional and multidimensional IRT analyses - I will be looking to see that my items span the range of student ability. I wouldn't want to see all the items clustered at the bottom or at the top of the ability distribution.
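To make that first psychometric pass concrete for the proposal, here is a rough sketch in Python of the kind of screening I have in mind - just classical item statistics before moving to proper EFA/IRT software. The file name, column layout, and 0/1 scoring are assumptions for illustration, not part of the actual plan:

```python
import numpy as np
import pandas as pd

# Hypothetical item-response matrix: one row per student, one column per item,
# scored 0/1. File name and layout are assumed for this sketch.
responses = pd.read_csv("assessment_scores.csv")

# 1. Item discrimination: corrected item-total correlation for each item
def item_discrimination(df: pd.DataFrame) -> pd.Series:
    total = df.sum(axis=1)
    return pd.Series(
        {col: df[col].corr(total - df[col]) for col in df.columns},
        name="corrected_item_total_r",
    )

# 2. Internal consistency: Cronbach's alpha
def cronbach_alpha(df: pd.DataFrame) -> float:
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# 3. Quick unidimensionality check: eigenvalues of the inter-item correlation
#    matrix. A first eigenvalue that dominates the rest is consistent with one
#    main dimension; the real EFA and IRT models would follow in dedicated software.
eigenvalues = np.linalg.eigvalsh(responses.corr().values)[::-1]

print(item_discrimination(responses))
print("alpha =", round(cronbach_alpha(responses), 3))
print("eigenvalues:", np.round(eigenvalues, 2))
```

The actual IRT step - fitting unidimensional and multidimensional models and checking (e.g., with a Wright map) that item difficulties span the ability distribution - would happen in specialized software; this is only the screening pass.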
My second question is about the relationship between content knowledge and argumentation. Lately, on another project, we seem to be leaning toward a more specific question: does the ability to argue at a certain level depend on having certain content knowledge? I'm not sure I'm ready to get that specific. If I do, I have been reading about the structured constructs model (SCM, under development by Mark Wilson's group), which allows one to test dependencies within a data set. Perhaps I'm not quite ready to make these kinds of hypotheses yet - perhaps better questions are: what types of content knowledge do students seem to be bringing to the argumentation task (e.g., specific ecology knowledge, more general epistemic knowledge)? To what extent does the quality of their arguments and critiques rely on that specific knowledge? And (for the future) is there a way to test the dependency?
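Even before anything like an SCM, a crude first look at the dependency question could just be cross-tabulating coded argumentation levels against coded content-knowledge levels and checking for association. The sketch below is illustrative only - the level codes and the toy data are made up, and a chi-square on a small crosstab is nowhere near a structured constructs analysis - but it shows the shape of the question:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded data: one row per student response, with ordinal codes
# for argumentation quality and content knowledge (codes are assumptions).
coded = pd.DataFrame({
    "argumentation_level": [0, 1, 1, 2, 2, 3, 1, 2, 0, 3],
    "content_knowledge":   [0, 0, 1, 1, 2, 2, 1, 2, 0, 2],
})

# Cross-tab: do the higher argumentation levels show up only among students
# who also show higher content-knowledge codes?
table = pd.crosstab(coded["argumentation_level"], coded["content_knowledge"])
chi2, p, dof, expected = chi2_contingency(table)

print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With real data I would want the ordinal structure of the codes taken seriously (which is exactly what the SCM is for); this is just a way to eyeball whether the dependency hypothesis is even worth formalizing.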
Tasks for next week: DSG: Analysis, Statement of Qualifications, finish budget
Complete Us