A recent Institute of Medicine report noted that as of 1990, when the most recent comprehensive assessment of youth programs was undertaken, there were approximately 17,000 active youth-serving organizations in the United States alone.1 Community-based youth programs have a long history of providing needed supports and services to youth. These programs have had a positive impact on the lives of youth, helping to reduce alcohol, tobacco, and drug use,2 to prevent violence and reduce juvenile detentions,2 and to prevent adolescent pregnancies.3 Traditionally, the focus of many youth programs has been to prevent various problem behaviors and negative outcomes. As it became recognized that problem behaviors cluster in certain groups of youth,4 programs were designed to provide services to youth classified as "at risk."
Early research on resiliency highlighted that the presence of caring adult relationships, decision-making skills, and other characteristics in a young person's life could offset negative outcomes even among youth considered to be at high risk.5 As noted in articles elsewhere in this supplement, in the 1990s, youth advocates began calling for a paradigm shift away from solely preventing problem outcomes and toward fully preparing all youth to lead productive lives and achieve their full potential.6,7 Awareness of positive youth development strategies has been heightened in recent years by a growing body of literature supporting the impact of positive youth development (whether framed as developmental assets or protective factors) on the well-being of youth.8,9
Promoting youth development has become a central tenet of many youth advocacy organizations including the Search Institute, National Collaboration for Youth, Academy for Educational Development, and Public/Private Ventures. Although these and other organizations provide valuable resources to community-based organizations in planning and implementing youth development programs, evaluating such programs remains challenging. Few instruments exist to measure positive youth development outcomes, and those that do are mainly lengthy, detailed, community-level surveys.1 Community-based programs that may have neither the financial resources nor the intention to implement large-scale evaluations need valid and reliable youth development measures to help them determine the impact of their programs on youth.1
Recognizing this need, two major youth-funding organizations in Monroe County, New York (the United Way of Greater Rochester and the Rochester-Monroe County Youth Bureau) convened a project team composed of researchers, funders, and program leaders representing local youth-serving organizations to identify a valid tool that could help local community-based programs measure positive youth development outcomes. Program leaders identified the need for a brief instrument that would be easy to use and administer, applicable to a variety of youth-serving programs, and useful for assessing program impact on the development of youth participants. Finding no existing tools that met their needs, the project team enlisted help from researchers at the University of Rochester to develop such an instrument.
To enhance the likelihood of use by the participating community-based programs, the project team began by narrowing a list of 54 candidate youth development outcomes to those that would allow program leaders to demonstrate their effectiveness to funders and improve the quality of their services, and that could plausibly be affected by a community-based youth program. The four outcomes identified through this consensus-building process were (1) caring adult relationships, (2) basic social skills, (3) decision making, and (4) constructive use of leisure time. Survey items were either written or adapted from various sources to address these four outcomes. The resulting instrument was pilot tested using cognitive interviews with adolescents to establish face validity. In a larger field test in 2001-2002 to establish construct reliability, factor analysis identified six principal components measuring elements of caring adult relationships, basic social skills, and decision making. Caring adult relationships included two underlying constructs, staff relationships and program effectiveness, and basic social skills factored into constructs measuring self-control, empathy, and communication. Further details about the development and psychometric properties of this instrument have been reported elsewhere.10 The final instrument was named READY, the Rochester Evaluation of Asset Development for Youth, by the youth-serving agency participants in the pilot instrument development. In 2002, READY was made available for routine use by youth-serving organizations that had participated in its development.
The purpose of this article is to describe the implementation and dissemination of the READY tool among these community programs. We review the lessons learned during READY dissemination and implementation by community-based youth-serving agencies in Rochester, New York, in 2002-2003, and report combined benchmark data on youth development outcomes based on the administration of READY by these community-based programs.
Methods
Youth development outcome measure
The READY tool is a 40-item self-report questionnaire that assesses key demographics, program participation and connectedness, and four common youth development outcomes. To measure participation, youth are asked to report their length of involvement in the program, the number of days per week they spend in program activities, and their perceived intensity of involvement. Participation intensity is classified as high if youth report that they "attend and participate in most program activities," medium if they report they "attend but don't always participate in program activities," and low if they report that they "don't attend program regularly." Participant connectedness to program is classified as high when youth report feeling connected to most people, medium when they report feeling connected with some people, and low when they report that they do not feel connected to people at the program. Constructive use of leisure time is measured by asking participants to report the number of days per week they participate in (1) music, theater, or other arts, (2) sports, (3) clubs or other organizations, (4) religious youth groups, (5) religious services, and (6) reading for fun. To assess the remaining three youth development outcomes, we created summary scores for each of the six underlying constructs (staff relationships, program effectiveness, self-control, empathy, communication, and decision making). We developed scores by averaging participant responses on the Likert-like items that made up each construct and rescaling the average to yield a simple normalized score between 0 and 100.
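Because the scoring procedure is described only in prose, a minimal sketch of the per-participant computation may be helpful. The 4-point response scale, the missing-value handling, and the function name below are illustrative assumptions rather than details confirmed in the article:

```python
# A minimal sketch of the summary-score calculation described above.
# The article specifies averaging Likert-like items and yielding a
# normalized 0-100 score; the 4-point response coding here is an
# assumption for illustration only.

def construct_score(item_responses, scale_min=1, scale_max=4):
    """Average one participant's item responses for a construct and
    rescale the mean to a 0-100 score."""
    answered = [r for r in item_responses if r is not None]  # skip unanswered items
    if not answered:
        return None
    mean = sum(answered) / len(answered)
    return 100 * (mean - scale_min) / (scale_max - scale_min)

# Example: one participant's responses to hypothetical empathy items.
print(construct_score([3, 4, 2, 4]))  # -> 75.0
```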
Dissemination
Using Microsoft Office programs, we created a computerized Tool Kit that contained both a customizable version of the READY instrument and data analysis software. The customizable instrument allowed organizations to enter their program names and staff titles into the survey questions, thus creating a survey instrument tailored to their program. The analysis software was designed to allow organizations to manage data entry and easily generate summary score reports without external assistance or database manipulation expertise. Program staff entered survey data into a Microsoft Excel workbook that was preprogrammed to calculate means and/or frequencies for each survey item and to generate a summary score report containing overall program scores, ranging from 0 to 100, for each of the six underlying constructs.
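The actual Tool Kit was a preprogrammed Excel workbook; the sketch below, in Python, only illustrates the equivalent report computation. The construct names follow the article, while the record layout and function name are hypothetical:

```python
# Hypothetical illustration of the program-level summary score report:
# average each participant's 0-100 construct scores into overall
# program scores, skipping missing values.
from statistics import mean

CONSTRUCTS = ["staff_relationships", "program_effectiveness",
              "self_control", "empathy", "communication", "decision_making"]

def summary_report(participants):
    """Return one 0-100 program score per construct."""
    report = {}
    for c in CONSTRUCTS:
        scores = [p[c] for p in participants if p.get(c) is not None]
        report[c] = round(mean(scores), 1) if scores else None
    return report

# Example with two hypothetical participant records:
youth = [
    {"staff_relationships": 90.0, "program_effectiveness": 85.0,
     "self_control": 66.7, "empathy": 75.0, "communication": 80.0,
     "decision_making": 70.0},
    {"staff_relationships": 80.0, "program_effectiveness": 75.0,
     "self_control": 60.0, "empathy": 50.0, "communication": 70.0,
     "decision_making": None},  # unanswered construct is skipped
]
print(summary_report(youth))
```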
Between May and September 2002, we conducted group training sessions to familiarize organizations with program evaluation and survey administration procedures and to train staff to use the CD-ROM-based Tool Kit. Agencies were asked to send both program leaders who had decision-making authority over program and evaluation activities and staff who would be involved with administering the tool, entering data, and/or generating reports. Eleven of the original youth-serving agencies that had participated in the development of READY took part in these sessions. Agencies included large, local chapters of national organizations such as the scouts, Boys & Girls Club, and the YMCA, as well as both small and large local neighborhood-based community organizations and recreation centers.
We also held individual technical assistance (TA) meetings with agency leaders at each of the 11 participating sites. Agency leaders were asked to identify all of their youth-serving programs, the goals and objectives of each, the number and ages of participants, and the structure of the program (eg, drop-in vs structured). They were also asked to describe their plans for using the outcome data that would be generated through the use of the READY tool. On the basis of the information they provided, each agency was assisted with developing an appropriate sampling and administration plan. This included identifying which programs or groups of participants were to be surveyed, when the surveys were to be administered, and which staff members would be responsible for administration, data entry, and report generation.
Program feedback and data analyses
In the fall of 2003, 1 year after the READY tool had been disseminated for use, the funders convened a meeting of program leaders and facilitated a discussion of their experiences with the tool and of what, if anything, they had done with the data generated from the score reports. Key themes that emerged from this discussion are reviewed in the "Results" section below. Each organization was also asked to share raw data from the surveys that it had administered over the preceding year. Nine of the 11 participating agencies (listed in Box 1) contributed data, which we then aggregated to produce a community-level youth development outcomes benchmark report for youth participating in these programs.
Using the aggregate data, we examined descriptive statistics and then used one-way analysis of variance to examine the relationship between each construct score and participants' reported length of involvement (number of years in program), frequency of participation (number of days per week in program), level of intensity of participation, and connectedness to program. Any significant differences in group means (P < .05) were further examined using the Scheffé test for post hoc analysis. Data analyses were conducted using SPSS, Version 11.5. This study was considered exempt by the University of Rochester Research Subjects Review Board.
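For readers who want to reproduce this analysis outside SPSS, a minimal sketch follows. The authors used SPSS, Version 11.5; this Python/SciPy version applies the same one-way ANOVA followed by Scheffé pairwise comparisons (implemented by hand from the standard formula), using hypothetical data:

```python
# Hedged sketch of the analysis pipeline: one-way ANOVA across groups,
# then Scheffe pairwise comparisons if the omnibus test is significant.
from itertools import combinations
from scipy import stats

def anova_with_scheffe(groups, alpha=0.05):
    """Run a one-way ANOVA; if significant, compare all group pairs
    using the Scheffe criterion."""
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
    if p_value >= alpha:
        return
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    # Pooled within-group (error) mean square.
    sse = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    mse = sse / (n_total - k)
    # Scheffe critical value: (k - 1) times the F critical value.
    crit = (k - 1) * stats.f.ppf(1 - alpha, k - 1, n_total - k)
    for i, j in combinations(range(k), 2):
        t = (means[i] - means[j]) ** 2 / (
            mse * (1 / len(groups[i]) + 1 / len(groups[j])))
        flag = "significant" if t > crit else "ns"
        print(f"group {i} vs {j}: diff = {means[i] - means[j]:+.1f} ({flag})")

# Example: empathy scores by length of involvement (hypothetical data).
anova_with_scheffe([[60, 65, 58, 62], [68, 72, 70, 66], [75, 80, 78, 74]])
```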
Results
Lessons learned during dissemination
Overall, all agencies were able to implement and manage their own use of the READY instrument as a program evaluation tool. Most used the TA provided by University of Rochester staff. We found that TA needs varied with the expertise of agency staff in conducting evaluations. Nearly all agencies required some technical support to initially navigate the analysis program section of the Tool Kit. Some organizations also required additional TA to design and implement their sampling and administration plans. The need for TA, whether related to the computer-based Tool Kit or to the overall evaluation process, increased with the amount of time that elapsed between staff attendance at training sessions and the program's subsequent administration of the survey.
Program leader feedback
Nearly all program representatives reported that they found the tool easy to use. Many of the agencies reported that their leadership had reviewed their score reports with various other staff members. A few agencies reported that the data had prompted them to examine program curricula and activities more closely to identify whether their programs included components intentionally designed to impact the measured outcomes. Some of the larger agencies surveyed youth in multiple program activities or at multiple sites, and reported comparing score reports across program units, using the data to identify whether programming differences might have accounted for any observed score variations. Program leaders voiced uncertainty about what their scores meant and about their ability to distinguish a "good" score from a "bad" one. However, the majority acknowledged that data from their first year would be considered baseline, and that they would need to use the tool over several years to understand longitudinal results for their program. Many agencies were also interested in reviewing community-wide aggregated data. Some program leaders also expressed a desire to share results and work collaboratively with funders and with similar programs in the community to identify best practices and share strategies to improve program quality.
Aggregate community-based program outcomes
Demographics
We received READY survey data on 1,070 youth at least 10 years of age who participated in 15 different programs at one of nine agencies. The average age of participants was 13 years (SD ±2 years), and 53 percent of participants were female. The sample was ethnically diverse: 53 percent reported that they were Black or African American, 25 percent White, 9 percent multiracial, and 6 percent Hispanic/Latino. More than half (53%) reported that their family had enough money for necessities and could buy special things, 42 percent reported that their family had just enough money for necessities, and 5 percent reported that their family did not have enough money to buy necessities.
Participation and connectedness to program
A third of participants reported they had been involved in a program for less than 1 year, 47 percent reported participating between 1 and 5 years, and 20 percent reported participating for more than 5 years. Twenty-seven percent of youth reported participating in program activities 5 to 7 days per week, 34 percent reported 2 to 4 days per week, 34 percent reported participating once a week, and 5 percent reported 0 days a week. The majority (71%) reported attending and participating in most program activities, 21 percent reported attending but not always participating in activities, and 8 percent reported not attending program regularly. Most youth felt connected to people at their programs: 49 percent reported connection to most people, 46 percent reported feeling connected to some, and 6 percent reported that they did not feel connected to people at the program.
Youth development outcomes
Most youth reported participating in a leisure time activity one or more times per week (Table 1). Positive youth development construct scores for the combined community program data and for varying levels of self-reported participation and connectedness are shown in Table 2. Participants' length of involvement in a program was significantly related to each of the three constructs measuring basic social skills: mean scores for self-control, empathy, and communication generally increased with length of involvement in the program. Youth who participated at least once a week in program activities had significantly higher caring adult relationship scores (staff relationships and program effectiveness) than youth who did not participate weekly. Intensity of participation was significantly related to all six construct scores; youth who reported attending and participating more actively had higher scores than those who reported less-active participation. Participants who reported higher levels of connectedness also had higher staff relationship scores and higher basic social skills scores (self-control, empathy, and communication).
Discussion
The READY tool appears feasible for use by a variety of small and large, local neighborhood-based and nationally affiliated youth-serving organizations. In its first year of use by community-based programs, all participating organizations were able to successfully implement and administer the tool using their own staff resources. In addition to initial training, almost all organizations needed some ongoing TA in planning their evaluation or interpreting their READY data. A recent survey of nonprofit organizations by the Urban Institute found that outcome evaluation in general is relatively new for many organizations and that few nonprofits have had training or TA in outcome measurement.11 These authors also reported that few of the 36 organizations they surveyed used any systematic data collection or sampling strategies.11 Similarly, we found that many of the agencies we worked with needed TA in creating their overall evaluation plan. For example, larger organizations with multiple programs required assistance with selecting samples so that they could administer READY without excessive staff resources while still obtaining representative data they could use to improve program services. Although nearly all agencies required some technical support to use the analysis program, this section of the Tool Kit has since been redesigned and simplified as READY, Version 2. The simplified version has been disseminated and is being used by several dozen programs in four additional geographic areas. Whether this reduces the assistance required for initial learning, and whether Rochester area programs are able to use READY independently in subsequent years, are questions currently under evaluation and will be addressed in future reports.
Preliminary data from our development of the READY tool,10 as well as other reports,1,12 have demonstrated that participation in community-based programs is indeed associated with better developmental outcomes for youth. These community-wide data confirm the positive effects of program participation. Youth who participated more frequently in programs had higher scores on constructs measuring the outcome of caring adult relationships. This suggests that youth need at least weekly contact with programs in order to develop meaningful relationships and to feel valued and supported. We also found that participants' perceived intensity of participation was associated with positive youth development outcomes. Higher scores were observed on all six constructs among youth who perceived that they participated more regularly and more actively in program activities than among those who did not. These findings further highlight the need to attend to the multiple dimensions that constitute participation, beyond attendance checklists alone, when evaluating program effects.13
In our initial validation of the READY instrument, connectedness to program was related to staff relationships, self-control, empathy, communication, and decision making.10 In this study, connectedness was not significantly related to the decision-making score; however, it remained significantly associated with staff relationships, self-control, empathy, and communication. Connectedness is clearly a crucial element in improving youth development outcomes. The Institute of Medicine and the National Research Council's recommendations for key features of positive developmental settings for youth include physical and psychological safety, appropriate structure, supportive relationships, opportunities to belong, positive social norms, support for efficacy and mattering, opportunities for skill building, and integration of family, school, and community efforts.1 Programs with which youth do not feel connected may need to consider whether the program context, individual factors, or both have resulted in the lack of engagement or connection by youth.
Our study has limitations. First, the data we have reported are based on organizations that volunteered to use the READY tool as a program evaluation measure. We collected qualitative feedback from programs on their use of the tool through informal discussions designed to identify key themes. These programs may not be fully representative of youth-serving organizations in our area, and the qualitative feedback may have been biased by social desirability. Generalizability to other geographic areas and to other types of programs may be similarly limited. The United Way of Greater Rochester and the Rochester-Monroe County Youth Bureau commissioned a recent independent survey of program directors and direct service staff in agencies that were using the READY tool. This study confirmed our initial findings of the tool's utility: 8 of 10 program directors reported that the READY tool provided helpful information about youth in their programs, and 7 of 10 reported that they used the information collected to change how they provide services to youth in their programs.14
Community-based programs represent one link in promoting positive youth development. As noted by various authors in other articles in this supplement, experts recognize that it takes a coordinated approach involving families, communities, schools, and community programs to influence developmental outcomes among youth. Another limitation of this study was our inability to measure and control for other non-program-related influences on youth development outcomes, such as family, community, and school. Although there is clearly an association between program participation and positive youth outcomes, further study using additional measures and longitudinal designs is needed to demonstrate independent effects. For some youth, community-based programs provide supports and services, including opportunities to form healthy relationships with peers and caring adults and a place in which to develop skills that can help them become productive adults. The development and use of valid indicator measures that community programs can use to evaluate their influence on the developmental outcomes of youth is crucial to improving the quality of services and supports that youth receive.
This study has shown that READY is a promising tool that community-based programs could use to examine program-attributable developmental outcomes for youth and to improve program quality. Unlike program quality assessment tools that rely on observations made by adults, READY provides programs with feedback directly from their youth participants. Although further work is needed to refine the measurements, examine construct reliability, and establish the predictive validity of the tool, READY is a promising option for measuring positive outcomes, rather than simply assessing the absence of negative outcomes,15 in promoting the development of youth.
REFERENCES