Given the diversity and context-specificity of innovation systems approaches, in March 2007 the World Bank organized a workshop in which about 80 experts (representing donor agencies, development and related agencies, academia, and the World Bank) took stock of recent experiences with innovation systems in agriculture and reconsidered strategies for their future development. This paper summarizes the workshop findings and uses them to develop and discuss key issues in applying the innovation systems concept. The workshop’s recommendations, including next steps for the wider application of the innovation systems concept, are also presented.
The multi-stakeholder partnership network typified by the FARA-led Integrated Agricultural Research for Development (IAR4D) approach of the SSA Challenge Program is an innovation platform (IP): a group of stakeholders bound together by their individual interests in a shared commodity or outcome. The results from such innovation platforms are largely shaped by the strength of the network. In this paper, similarities within and across platforms are assessed using a simple matching procedure. Results indicate consistency in the conduct of innovation platform activities.
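The abstract above names a simple matching procedure for comparing platforms but does not reproduce it. The following is a minimal sketch of one common reading of that term, the simple matching coefficient computed over binary activity indicators; the platform names, activities, and data are hypothetical and are not drawn from the paper.

```python
# Minimal sketch of a simple matching coefficient, assuming each innovation
# platform is described by binary indicators for whether it carried out a
# given activity (1 = conducted, 0 = not conducted). Names and values are
# illustrative only.

def simple_matching(a, b):
    """Share of positions where two equal-length binary vectors agree."""
    if len(a) != len(b):
        raise ValueError("activity vectors must have the same length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

# Hypothetical activity profiles for three platforms.
platforms = {
    "platform_A": [1, 1, 0, 1, 0],
    "platform_B": [1, 1, 0, 0, 0],
    "platform_C": [0, 1, 1, 1, 1],
}

# Pairwise similarity across platforms.
names = list(platforms)
for i, p in enumerate(names):
    for q in names[i + 1:]:
        score = simple_matching(platforms[p], platforms[q])
        print(f"{p} vs {q}: {score:.2f}")
```

Higher coefficients indicate platforms whose sets of conducted activities largely coincide, which is one way the consistency reported in the paper could be quantified.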
The aim of this report is to provide a detailed review of documented social learning processes for climate change and natural resource management as described in the peer-reviewed literature. Particular focus is on identifying (1) lessons and principles, (2) tools and approaches, (3) evaluation of social learning, and (4) concrete examples of impacts to which social learning has contributed.
This paper discusses a range of approaches and benchmarks that can guide the future design of value chain impact evaluations. Twenty studies were reviewed to understand the status and direction of value chain impact evaluations. A majority of the studies focus on evaluating the impact of only a few interventions at several levels within the value chains. Few impact evaluations are based on well-constructed, well-conceived comparison groups; most rely on propensity score matching to construct counterfactual groups and estimate treatment effects.
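For readers unfamiliar with the method named above, the following is a minimal, hypothetical sketch of the generic propensity score matching workflow: estimate each unit's probability of participating in an intervention from observed covariates, pair each participant with the nearest non-participant on that score, and compare matched outcomes. It is not the specification used in any of the reviewed studies; the data, covariates, and effect size are simulated.

```python
# Sketch of propensity score matching (PSM) on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical survey: covariates X, treatment indicator t, outcome y.
n = 500
X = rng.normal(size=(n, 3))                      # e.g. farm size, education, distance to market
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # participation correlated with covariates
y = 2.0 * t + X[:, 0] + rng.normal(size=n)       # outcome with a true effect of 2.0

# Step 1: estimate propensity scores P(t = 1 | X) with logistic regression.
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Step 2: nearest-neighbour matching on the propensity score (with replacement).
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT) from matched pairs.
att = (y[treated] - y[matches]).mean()
print(f"Estimated ATT: {att:.2f}")
```

The estimate should recover something close to the simulated effect of 2.0; in practice the credibility of PSM rests on whether the observed covariates capture everything that drives both participation and outcomes, which is exactly the concern the paper raises about weakly constructed comparison groups.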
The UNDP Capacity Assessment Methodology User's Guide gives UNDP and other development practitioners a detailed step-by-step guide to conducting a capacity assessment using the UNDP Capacity Assessment Methodology, which consists of the UNDP Capacity Assessment Framework, a three-step process, and supporting tools.
This set of guidance notes is designed to support practitioners and evaluators in conducting retrospective evaluations of a capacity development intervention or portfolio in order to assess and document results. Users will enhance their understanding of the capacity development process and of what works and what does not in promoting change, and will be better placed to inform future programs. The standard monitoring and evaluation (M&E) approach for assessing capacity development results has not been sufficient.
Innovation systems can be defined in a variety of ways: they can be national, regional, sectoral, or technological. They all involve the creation, diffusion, and use of knowledge. Systems consist of components, relationships among these, and their characteristics or attributes. The focus of this paper is on the analytical and methodological issues arising from various system concepts. There are three issues that stand out as problematic. First, what is the appropriate level of analysis for the purpose at hand?
This review of literature on evaluation methods focuses specifically on approaches and methodologies in evaluation which are relevant for evaluating initiatives in extension or rural advisory services. The context and scope of the review are discussed, followed by sections addressing the purposes, users and uses of evaluation, evaluation standards and criteria, approaches, rigour and attribution.
This report reviews the evidence of impact of capacity strengthening on agricultural research for development (AR4D) in developing countries. The study was commissioned by DFID as part of the documentation process of the project Strengthening Capacity for Agricultural Research for Development in Africa (SCARDA).
This paper argues that impact assessment research has not made more of a difference because the measurement of economic impact has poor diagnostic power. In particular, it fails to provide research managers with critical institutional lessons concerning ways of improving research and innovation as a process. The paper's contention is that the linear input-output assumptions of economic assessment need to be complemented by an analytical framework that recognizes systems of reflexive, learning interactions and their location in, and relationship with, their institutional context.