Common use of Step E Clause in Contracts


Step E. The fifth and last step, step E, consists of the statistical analysis of the qualitative and quantitative content analyses of steps C and D. Each of these five main steps in our analysis workflow comprises several sub-steps. For a detailed view of our analysis workflow, please see Figure 3.a and Figure 3.b as well as the complete, scalable versions of the workflow representation in the annex. Having roughly described our analysis workflow, we now describe the individual steps in detail in the following sections.

3.3.3 Specific theoretical background of open science

This step (step A) deals with the specific theoretical background analysis and the identification of important categories of open science. It consists of two independent parts: 1. qualitative analysis of standard texts on open science and 2. quantitative text mining of standard texts on open science. These two parts serve the purpose of identifying the main categories of open science as objectively as possible. In the literature analysis part, we systematically research and read a selection of standard texts on open science and excerpt standard definitions. From these excerpts, we create a list of the main principles, topics, concepts or categories of open science. In the text mining part, we algorithmically scan a standard text corpus and extract a list of categories via automatic topic modelling (Wikipedia 2020c). Both independently generated lists of six main categories of open science are then merged together where possible or recombined via abstraction into new categories. The result is a single list of 18 categories. This list is used in the creation of the coding book. Please see Figure 3, A.1 and A.2.

3.3.4 Pre-processing the given CAM documents

This step (step B) deals with the preparation, sorting and pre-processing of the given CAM documents.
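As a rough sketch of how the two independently generated category lists of step A might be merged, consider the following; all category names and the abstraction map are invented placeholders, not the project's actual lists:

```python
# Hypothetical sketch of step A's list merging: categories from the
# literature analysis and from topic modelling are unified where they
# overlap, or mapped to a more abstract category where one is defined.

def merge_category_lists(list_a, list_b, abstraction_map):
    """Merge two category lists, mapping known variants to a shared abstraction."""
    merged = []
    for category in list_a + list_b:
        canonical = abstraction_map.get(category.lower(), category.lower())
        if canonical not in merged:          # keep order, drop duplicates
            merged.append(canonical)
    return merged

# Invented example inputs: one list from the literature excerpts,
# one from automatic topic modelling.
literature_list = ["Open Access", "Open Data", "Reproducibility"]
topic_model_list = ["open access", "data sharing", "transparency"]
abstraction_map = {"data sharing": "open data"}   # recombination via abstraction

print(merge_category_lists(literature_list, topic_model_list, abstraction_map))
```

Here the invented entry "data sharing" is folded into the more abstract "open data", while overlapping entries from the two lists are merged by simple normalisation.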
The purpose of this step is to systematically run through the primary raw text data (the CAMs) and decide whether a given document will be used or whether there are bad redundancies. The result is a clean set of secondary text data (the CAMs text corpus) that can be used for the next steps in our analysis. Please see Figure 3, B.

3.3.5 Qualitative content analysis

This step (step C) in our content analysis workflow deals with the qualitative content analysis, including the creation of the coding book. The goal of this step is to arrive at an inter-subjective category frequency table. This step consists of two independent parts: 1. creation of the coding book and 2. a run-through with two independent coders (persons). In the first part, we use the list of categories identified in our previous specific theoretical background analysis to create the coding book (see Table 12 in the annex). The coding book is a table comprising four columns (parts): the first column consists of all categories that we have obtained from the previously created list of important categories; the second column contains a definition for each category, determined by its canonical dictionary definition (▇▇▇▇▇▇.▇▇▇ and Oxford University Press (OUP) 2019); the third column contains rules for each category, where each rule specifies the conditions under which a text passage falls under the corresponding category; and the fourth column contains anchor examples for each category found in the text corpus. Please see Figure 3, B.1. In the second part, we run through the CAMs text corpus with two independent coders and the help of the previously created coding book. A coder is a person who systematically goes through the text corpus with the help of the coding book, copies all references found according to the rules of the coding book, and enters them, each with a corresponding reference, into a table of their own. Please see Figure 3, B.2.
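The four-column coding book and a single coder's run-through could be sketched along the following lines; the categories, definitions, rules and the `code_corpus` helper are hypothetical illustrations, not the actual coding book of Table 12:

```python
import re

# Invented miniature coding book: category -> definition, rule, anchor example.
coding_book = {
    "open access": {
        "definition": "Free online availability of research outputs.",
        "rule": r"\bopen access\b",            # condition for a passage to count
        "anchor_example": "results will be published open access",
    },
    "open data": {
        "definition": "Research data made publicly available for reuse.",
        "rule": r"\bopen data\b|\bdata sharing\b",
        "anchor_example": "all data sharing follows FAIR principles",
    },
}

def code_corpus(corpus, coding_book):
    """One coder's pass: record each matching passage with its document reference."""
    table = []                                 # the coder's own table
    for doc_id, text in corpus.items():
        for passage in text.split("."):        # naive sentence segmentation
            for category, entry in coding_book.items():
                if re.search(entry["rule"], passage, flags=re.IGNORECASE):
                    table.append((category, doc_id, passage.strip()))
    return table

corpus = {"CAM-1": "Results are published open access. Data sharing is encouraged."}
for row in code_corpus(corpus, coding_book):
    print(row)
```

Each row of the resulting table pairs a category with the document reference and the matched passage, mirroring how a human coder copies found references into a table of their own.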
We will discuss this method in more detail later in the main section: Qualitative content analysis.

3.3.6 Quantitative content analysis

This step (step D) deals with the quantitative content analysis, including the creation of a category model via synonyms. The goal of this step is to arrive at an objective category frequency table. This step is non-trivial in nature because it is very hard to determine the a priori base frequencies of category occurrence. We tackle this challenge with a rather simplistic but effective category model. We build our model by utilising the assumption of hypernyms and synonyms, which is also a silent background assumption in qualitative content analysis (see section: General theoretical background). For each category, we determine a list of synonyms with the help of a standard dictionary (▇▇▇▇▇▇.▇▇▇ and Oxford University Press (OUP) 2019). We then consult a comprehensive word frequency list (Word frequency data 2019) to determine the base frequency of each synonym. From these frequencies, we can calculate the probabilities and expected values for each synonym. We model each category probability by the combined probabilities of the corresponding synonyms. Finally, we search for all synonyms of each category in the CAMs text corpus and count their occurrence frequencies. This procedure allows us to specify the a priori expected frequency of each category, against which we can test the observed category frequencies of each coder. Please see Figure 4, D. We will discuss this method in more detail later in the main section: 3.5 Quantitative content analysis.

3.3.7 Statistical analysis

This step (step E) deals with the statistical analysis of the results from the qualitative and quantitative content analysis, i.e., the CAM corpus category frequency tables A and B (see Figures 3 and 4). We use the classic ▇▇▇▇▇▇'▇ exact test (Wikipedia 2019) to calculate the significance level for each coder and the category frequencies.
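Assuming invented synonym lists and word counts, the synonym-based category model of step D and the exact-test comparison of observed against a priori expected frequencies in step E might be sketched as follows; `exact_test_2x2` is a textbook two-sided Fisher-type exact test on a 2x2 table, written here purely for illustration and not taken from the project:

```python
from math import comb

# --- Step D sketch: category probabilities from synonym base frequencies ---
word_counts = {"open": 500, "free": 300, "accessible": 50}   # invented counts
list_total = 1_000_000       # hypothetical token total of the word frequency list

def category_probability(synonyms):
    """Model a category's a priori probability as the combined (summed)
    base probabilities of its synonyms."""
    return sum(word_counts.get(word, 0) / list_total for word in synonyms)

def expected_hits(synonyms, corpus_tokens):
    """A priori expected number of category hits in a corpus of the given size."""
    return category_probability(synonyms) * corpus_tokens

# --- Step E sketch: two-sided exact test on a 2x2 contingency table ---
def exact_test_2x2(a, b, c, d):
    """Two-sided exact p-value for the table [[a, b], [c, d]], e.g. observed
    vs. expected category hits against non-hits (hypergeometric construction)."""
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)            # tables with the same margins

    def prob(x):                               # hypergeometric probability
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Two-sided: sum probabilities of all tables at most as likely as observed.
    return sum(p for x in range(lo, hi + 1) if (p := prob(x)) <= p_obs + 1e-12)

synonyms = ["open", "free", "accessible"]
expected = expected_hits(synonyms, corpus_tokens=20_000)   # a priori expectation
observed = 8                                               # invented coder count
print(expected, exact_test_2x2(observed, 20_000 - observed,
                               round(expected), 20_000 - round(expected)))
```

A small p-value would indicate that a coder's observed frequency for a category deviates significantly from the frequency expected from the synonyms' base rates.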
We use Krippendorff’s alpha (▇▇▇▇▇▇▇▇▇▇▇▇ 2011) and the ▇▇▇▇▇▇▇ rank correlation coefficient (▇▇▇▇ 2007; Wikipedia 2020b) to calculate the inter-coder reliability. The statistical results can then be interpreted, and this interpretation can be used as a basis for drawing conclusions with respect to the project. Please see Figure 4, D.

4 Research question and differentiation of the hypothesis

The central question of the current deliverable arises directly from the main aim of the OSCAR project itself: is it possible to integrate or harmonise statements of commitment to some of the major open science principles into some CAMs commonly used in the European AAT research landscape? Arguably, it is possible to integrate open science into the European AAT research landscape in general; at least, we are not aware of any compelling argument that would prove it impossible. This bona fide possibility leads directly to the follow-up question: how exactly can open science be integrated into the AAT CAMs? To answer this question, it is necessary to analyse the contents of representative CAMs with regard to open science. The instrumental normative objective conditional in the context of this question is as follows: I: If open science is implicitly relevant in some CAMs, then a strategy for integrating open science into these CAMs should be pursued that exploits the respective fact of an existing or non-existing conceptual framework for open science. The existence or non-existence of a conceptual framework for open science places different demands on the upcoming strategic steps of the OSCAR project. Therefore, it is important to answer this question in advance of any further action or decision. To see whether the above implication is true, we first need to define the antecedent, viz. what it is for open science to be implicitly relevant in the CAMs in the first place. Only then can we decide whether the antecedent of I is true.
Only then, in turn, can we take appropriate, i.e., informed, actions regarding the overarching goal of the OSCAR project. The following conditional working definition of the relevance (the antecedent of I) of open science in the given CAMs is sufficient for the purposes of the current analysis:

Appears in 1 contract

Sources: Grant Agreement