The Australian Centre for Evaluation (ACE) was established to help put evaluation evidence at the heart of policy design and decision-making.
We seek to improve the volume, quality, and use of evaluation evidence to support better policy and programs that improve the lives of Australians.
The e-Assessment Association has three major goals: to provide professional support and facilitate debate and discussion for people involved in this field of expertise; to create and communicate the positive contributions that technology makes to all forms of assessment; and to develop statements of good practice for suppliers and consumers of e-Assessment technologies.
Fundar is an organization dedicated to the study, research, and design of public policies, with a focus on the development of a sustainable and inclusive Argentina.
Lumi is a desktop application that lets you create, edit, view, and share interactive content using dozens of different content types. It is free and open source.
There are many excellent guides and toolkits online that introduce you to Monitoring and Evaluation (M&E). While they are all useful, most assume that you already have a pretty good grasp of M&E. This toolkit assumes that you may be starting from scratch or that you really need a refresher.
The Centre for Research in Assessment and Digital Learning (CRADLE) investigates improvements in higher education assessment in the context of a rapidly expanding digital environment.
The project aimed to confront a fundamental issue for every Higher Education (HE) course/programme leader: how to design an effective, efficient, inclusive and sustainable assessment strategy which delivers the key course/programme outcomes.
This is a curated list of our technical postings, intended as a one-stop shop for your technical reading. I’ve focused here on our posts on methodological issues in impact evaluation; we also have a whole lot of posts on how to conduct surveys and measure certain concepts.
Team members work alongside agency collaborators to apply behavioral insights, make concrete recommendations for how to improve government, and evaluate impact using administrative data. We also work with agencies to interpret and apply what we’ve learned together, share leading practices and resources, and build the skills of civil servants to continue this work.
Resource bank from the Banco de Desarrollo de América Latina: guides, working papers, case studies, and program evaluation reports from Latin America.
Developing a monitoring and evaluation framework helps clarify which pieces of information to collect to evidence your story of change.
It is good practice to include people who will be collecting the data when you develop your framework. You could also involve beneficiaries, volunteers, trustees, partner organisations or funders.
Ideally, write your framework before your project starts so you can make sure you are collecting appropriate data from the beginning.
Process evaluations aim to explain how complex interventions work. They are especially useful for interventions that include several interacting components operating in different ways, as well as for interventions that address complex problems or seek to generate multiple outcomes.
This Shiny App provides a user-friendly interface for conducting item analysis based on Classical Test Theory (CTT). Item analysis examines responses to individual test items in order to assess the quality of the items and of the test as a whole. It is valuable for improving items or eliminating poorly written or ambiguous items. The app provides several statistics used in item analysis, including reliability estimates, item difficulty, and item discrimination (i.e., item-total correlation).
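For readers who want to see what these CTT statistics look like in practice, here is a minimal sketch in Python (not the Shiny App's own R code), assuming a dichotomously scored 0/1 response matrix with examinees as rows and items as columns; the function and variable names are illustrative only.

```python
import numpy as np

def item_analysis(responses):
    """Basic CTT item analysis for a 0/1 scored response matrix
    (rows = examinees, columns = items)."""
    x = np.asarray(responses, dtype=float)
    n_items = x.shape[1]
    total = x.sum(axis=1)

    # Reliability estimate (Cronbach's alpha):
    # k/(k-1) * (1 - sum of item variances / variance of total scores)
    item_var = x.var(axis=0, ddof=1)
    alpha = n_items / (n_items - 1) * (1 - item_var.sum() / total.var(ddof=1))

    # Item difficulty: proportion of examinees answering each item correctly
    difficulty = x.mean(axis=0)

    # Item discrimination: corrected item-total correlation, i.e. each item
    # correlated with the total score of the remaining items
    discrimination = np.array([
        np.corrcoef(x[:, j], total - x[:, j])[0, 1] for j in range(n_items)
    ])

    return {"alpha": alpha, "difficulty": difficulty, "discrimination": discrimination}

# Example: 5 examinees, 4 items (hypothetical data)
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
])
print(item_analysis(scores))
```

Low difficulty values flag hard items, and low or negative discrimination values flag items that may be poorly written or ambiguous, which is the kind of diagnosis the app is designed to support.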