Common use of Participant Selection Clause in Contracts

Participant Selection. In the case of school directors (i.e., school principals), the entire population was tested (i.e., each director was asked to fill out the questionnaires). If the director was not available, the assistant director or another high-ranking school official was tested. For teachers, a non-probabilistic sampling strategy was used: two second-grade teachers were tested at each school. If a second-grade teacher did not want to participate in the study, other teachers from grades 1 through 6 were asked to participate. Finally, children were randomly selected from all second-grade classrooms in all 400 schools (simple randomization). Six children were tested at each school.

[…] minute), and advanced reading skills (fluency and reading comprehension). These subcomponents are standards used internationally in many standardized tests and in USAID reading programs, which often use EGRAs (Early Grade Reading Assessments) to measure reading.

For working memory, the Word and Pseudoword Repetition Protocol (▇▇▇▇▇▇▇▇, ▇▇▇▇▇▇▇, ▇▇▇▇▇ & ▇▇▇▇▇▇▇, 2002) was used. This test entailed the immediate repetition of isolated words and pseudowords, and it provides a simple, broad assessment of children's ability to hold phonological information for a very short period of time.

The oral comprehension task consisted of an enumerator reading aloud 3 stories and then asking children 5 literal questions about each story. The stories were written to avoid gender bias, portraying both girls and boys and avoiding stereotyped gender roles within the story line. Stories had fewer than 70 words, in keeping with well-known recommended guidelines (USAID, 2009).

To measure phonological awareness, we adapted the guidelines' recommended task, which comprises two components. The first component consisted of asking children to detect the initial sound of a word.
The second component consisted of asking children to attend to the initial sound of 3 words and then to identify the word whose initial sound did not match the other 2. The letter knowledge task was created by choosing letters in random order. Letters were presented in upper and lower case, and children were asked to say the name of each presented letter. We chose to accept the letter name, as opposed to the letter sound, because the current curriculum in the Dominican Republic does not teach children to explicitly identify letter sounds. The literature suggests that, because of Spanish's transparent orthography, letter name is a good predictor of reading because it places the letter sound in syllabic context (▇▇▇ & ▇▇▇▇▇▇▇▇, 2012).

For decoding skills, we created a list of words drawn from the second-grade textbooks currently in use in the Dominican Republic by the MoE. The selection criteria for words were frequency and the number of syllables, which was limited to two. The list of pseudowords was automatically generated using Wuggy, a free computerized multilingual pseudoword generator (available for download at ▇▇▇.▇▇▇▇▇.▇▇/▇▇▇▇▇▇▇▇-▇▇▇▇/▇▇▇▇▇). First, we set the desired output language to Spanish and used the list of words from the word-decoding test as the seed to generate grammatically similar pseudowords. Second, we limited the number of syllables to 2, matching the word-decoding test. A list of 250 pseudowords was initially generated, from which 50 were randomly selected to assemble the final list.

For fluency, children were presented with three stories with the same characteristics as those in the oral comprehension task. Children were allowed to read for one minute, which kept the task consistent with the words-per-minute task. We calculated the final fluency score by counting all words read within the time frame and subtracting reading errors. Finally, we tested reading comprehension by presenting children with three stories to read.
After the children read the stories, the enumerator asked them 5 literal questions per story.

Cronbach's alpha, an internal consistency coefficient, was used as the reliability criterion, while factor analysis of the instruments' underlying constructs or subscales was the standard for construct validity. Only instruments with reported reliability above the acceptable level (.70; ▇▇▇▇▇▇ & ▇▇▇▇▇▇▇, 2003) and adequate construct validity were selected. Original subscales and alpha scores are reported below (except for adapted instruments), and they are also reported in the results section based on our sample. In all, the battery comprised the following components¹ (described in order of appearance in the battery's final print version):

School climate. Using the 38-item Delaware School Climate Scale – Teacher/Staff (DSCS-T/S), developed by ▇▇▇▇ and colleagues (2014), we assessed 7 components of school climate: teacher-student relations, student-student relations, teacher-home communication, respect for diversity, school safety, fairness of rules, and clarity of expectations. Cronbach's alpha for the 7 subscales ranged from .84 to .95. The Delaware Positive, Punitive, and SEL Techniques Scale was also included as part of this battery to assess teachers' perceptions of the reinforcement techniques used in the school. This scale comprises 13 items divided into 3 subscales: positive techniques, punitive techniques, and socio-emotional learning techniques. Cronbach's alphas for these subscales were .85, .77, and .92, respectively.

¹ An operational definition for these components was submitted as part of the First Quarterly Report.
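The simple randomization used to select six children per school can be sketched as follows. This is a minimal illustration, not the study's actual code; the roster structure, function name, and fixed seed are all assumptions added for the example.

```python
import random

def select_children(rosters, per_school=6, seed=42):
    """Simple randomization: draw `per_school` children uniformly at
    random from the pooled second-grade rosters of each school.

    `rosters` maps a school id to a list of child ids (all second-grade
    classrooms in the school combined). Structure is illustrative only.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {school: rng.sample(children, per_school)
            for school, children in rosters.items()}

# Example with two hypothetical schools of ten eligible children each.
rosters = {"school_A": list(range(10)), "school_B": list(range(10, 20))}
selected = select_children(rosters)
```

Because `random.sample` draws without replacement, no child can be selected twice within a school, which matches the "six children per school" design.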
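Two of the scoring steps above are simple enough to express directly: the 250-to-50 random subsampling of Wuggy-generated pseudowords, and the fluency score (words read in the one-minute window minus reading errors). The sketch below assumes hypothetical function names; the report does not specify how ties or negative scores were handled, so the subtraction is shown exactly as described.

```python
import random

def sample_pseudowords(generated, k=50, seed=1):
    """Randomly select k pseudowords from the generated pool,
    mirroring the 250 -> 50 selection described in the text."""
    return random.Random(seed).sample(generated, k)

def fluency_score(words_read, errors):
    """Final fluency score: all words read within the one-minute
    time frame minus reading errors, as described above."""
    return words_read - errors
```

For example, a child who reads 45 words with 3 errors scores 42 correct words per minute.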
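The reliability criterion used throughout (Cronbach's alpha) can be computed from respondents' item scores as alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). The plain-Python sketch below uses population variances and illustrative data; it is a generic implementation of the coefficient, not the study's own analysis code.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for `scores`, a list of rows where each row
    holds one respondent's item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))
    """
    k = len(scores[0])   # number of items in the scale
    def var(xs):         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly correlated items yield alpha = 1.0, while unrelated items drive alpha toward 0; the .70 threshold cited above sits between these extremes as the conventional floor for acceptable internal consistency.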


Sources: Baseline Report