Multimedia: Blog Posts, Videos, Podcasts, and More
As part of our evaluation work, Innovation Network often conducts webinars and other online presentations. Whenever possible, we try to make those recordings available to the broader evaluation and nonprofit field.
What is graphic recording? Graphic recording is the practice of organizing information visually using words, symbols, and pictures. It is often done in real time, paced with the information generated in a panel session, a meeting, a focus group, and similar settings. In this post, Kat Athanasides shares tips for getting started with graphic recording.
Want to launch an evaluation, but not sure where to start? Wondering how much you should budget for it, or who should be involved? Although 90% of nonprofits are engaged in evaluation, it continues to be one of the most mysterious and misunderstood functions of nonprofit management.
In this evaluation 101-level webinar, Johanna Morariu and Ann Emery introduced key evaluation concepts, approaches, and methods. They also explained the four stages of the evaluation lifecycle (planning, data collection, analysis, and action).
Many organizations are using advocacy strategies to meet their missions. Just like any other work that foundations and nonprofits engage in, advocacy needs to be continually assessed, tweaked, and strengthened through a process of evaluation and learning. In this webinar for the National Committee for Responsive Philanthropy, Johanna Morariu and Will Fenn shared the nine steps of advocacy evaluation. The webinar is based on Innovation Network's report titled Pathfinder: A Practical Guide to Advocacy Evaluation.
Ann Emery shares Hall of Fame-worthy examples of data visualization in the evaluation field. She describes how evaluators are creating innovative, practical, and interactive visualizations; thinking beyond standard reporting formats; and capitalizing on visual thinking skills.
Johanna Morariu describes treemaps, a data visualization technique that is still relatively new, especially to evaluators. The technique was created in the 1990s by Dr. Ben Shneiderman to map computer hard drive usage. Treemaps are useful for visualizing hierarchical (tree-structured) data: area proportionally illustrates differences in values, such as how many program participants fall into each nested category. She also shares resources for making your own treemaps.
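To make the proportional-area idea concrete, here is a minimal sketch of how a treemap allocates rectangle area in proportion to values, using the simple "slice-and-dice" layout (one of the original treemap layouts, which splits a rectangle into strips). The participant counts below are hypothetical; real treemap tools add nesting, alternating split directions, and labeling.

```python
def slice_and_dice(values, x, y, w, h, horizontal=True):
    """Split the rectangle (x, y, w, h) into strips whose areas are
    proportional to the given values. Returns (x, y, w, h) tuples."""
    total = sum(values)
    rects = []
    offset = 0.0
    for v in values:
        frac = v / total
        if horizontal:
            # Slice the width; every strip keeps the full height.
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:
            # Slice the height; every strip keeps the full width.
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Hypothetical example: participants in four program categories,
# laid out in a 100 x 50 rectangle (total area 5000).
participants = [40, 25, 20, 15]
layout = slice_and_dice(participants, 0, 0, 100, 50)
for count, (rx, ry, rw, rh) in zip(participants, layout):
    print(f"{count} participants -> area {rw * rh:.0f}")
```

Because each strip's width (or height) is scaled by its share of the total, the areas come out exactly proportional to the values, which is the core property treemaps rely on.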
Johanna Morariu and Ann Emery discussed how nonprofit evaluation is progressing as a discipline with impact, highlighting findings from Innovation Network's State of Evaluation project about nonprofit evaluation practices and capacity. They highlighted five areas where funders can support grantee evaluation capacity: by investing in technology, tools, and resources; by supporting data coaching and training; by supporting both internal and external evaluation staff; by engaging nonprofits in conversations with their peers; and by engaging nonprofits in conversations with their funders.
Johanna Morariu and Ann Emery share three tactics for tackling the Dusty Shelf Report in evaluation: captivating readers with visuals, choosing the design that's right for the reader, and strengthening readers' data visualization literacy.
Although nonprofits often have lots of data and a desire to use it, spreadsheet skills are in short supply. In this guest blog post, Ann Emery shares spreadsheet tips to help nonprofit staff analyze their own evaluation data in Excel.
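To illustrate the kind of analysis the post covers (the sort of tally a spreadsheet user might build with COUNTIF or a pivot table), here is a brief sketch in Python. The survey responses are hypothetical, not taken from the post.

```python
from collections import Counter

# Hypothetical survey responses an evaluator might have in a spreadsheet column.
responses = ["Agree", "Strongly agree", "Agree", "Disagree", "Agree"]

# Tally each response and report counts with percentages,
# as a COUNTIF formula or pivot table would.
counts = Counter(responses)
total = len(responses)
for answer, count in counts.most_common():
    print(f"{answer}: {count} ({count / total:.0%})")
```

The same frequency table is often the first step in analyzing closed-ended evaluation questions, whatever tool is used to produce it.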
In 2010, Innovation Network set out to answer a question previously unaddressed in the evaluation field (what is the state of nonprofit evaluation practice and capacity?) and initiated the first iteration of the State of Evaluation project. In 2012, we launched the second installment. This blog post shares highlights from that research.
Johanna Morariu and Ann Emery were guest participants on a recent Story By Numbers podcast, where they provided a user-friendly introduction to data visualization. They introduce the listener to the field and discuss how to choose the right kind of chart for different types of data.
As an evaluation consultant, Johanna Morariu witnesses firsthand the trepidation and uncertainty many people experience when making decisions about evaluation. In this guest blog post, she shared six pieces of advice: 1) Rethink what evaluation can be, 2) Don't get hung up on frameworks and jargon, 3) Start small and grow, 4) Collect less data, 5) Get comfortable with quantitative and qualitative data, and 6) Define the evaluation purpose and scope before selecting tech tools for evaluation.
Are you intrigued by infographics and how they could improve your communication strategy? Are you interested in what it takes for an organization to systematically use data? Or are you maybe even drowning in data and looking for someone to throw you a life-saving suggestion for software and other tools? Johanna Morariu, Beth Kanter, and Brian Kennedy presented a panel on data and information visualization at the 2012 Nonprofit Tech Conference. This video is a recording of the panel.
Ever wonder what the rest of us can learn from the evaluation approaches of large foundations? In this webinar Johanna Morariu and Ehren Reed provide an overview of The Packard Foundation’s The Standards, an example of a comprehensive strategy and evaluation handbook. The presenters also share four lessons that can be learned from the evaluation approaches of the Bill & Melinda Gates Foundation, The Annie E. Casey Foundation, The Robert Wood Johnson Foundation, and The William and Flora Hewlett Foundation.
In this webinar Johanna Morariu and Ehren Reed discuss four levels of evaluation: grant-level, portfolio-level, foundation-level, and issue-level. The presentation addresses the pros and cons of these four levels of evaluation, and when one level may be more appropriate than another. Also included are considerations for right-sizing your evaluation approach—design, data collection, analysis, and reporting differences between grant and portfolio evaluations.
Johanna Morariu shares two extremely useful network analysis and mapping tools: Gephi and NodeXL. She describes how she uses NodeXL for collecting, organizing, and analyzing network data and Gephi for attractively presenting sociograms or network maps.
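As a small illustration of the kind of metric these tools compute from network data, here is a sketch of degree centrality (how connected each node is relative to the rest of the network), calculated from an edge list. The collaboration network below is hypothetical; NodeXL and Gephi compute this and many richer metrics from imported data.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Return each node's degree divided by (n - 1), where n is the
    number of nodes, so scores fall between 0 and 1."""
    adjacency = defaultdict(set)
    for a, b in edges:
        # Treat edges as undirected: record the tie in both directions.
        adjacency[a].add(b)
        adjacency[b].add(a)
    n = len(adjacency)
    return {node: len(neighbors) / (n - 1)
            for node, neighbors in adjacency.items()}

# Hypothetical collaboration network among five organizations.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]
for node, score in sorted(degree_centrality(edges).items()):
    print(node, round(score, 2))
```

In a sociogram, a high-centrality node like "A" here would typically be drawn larger or more central, which is exactly the kind of visual emphasis Gephi makes easy.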
Are you tired of the same old text and bar charts? In this webinar, Johanna Morariu and Veena Pankaj explored the more visual side of evaluation, sharing approaches and examples of how to incorporate data and information visualization throughout the four stages of the evaluation life cycle: planning, data collection, analysis and reporting, and action and improvement. Data and information visualization supports good evaluation practice: it can increase stakeholder participation in evaluation and build evaluation capacity.
Johanna Morariu describes how she explains the merit and appropriateness of qualitative designs when helping individuals and organizations design an evaluation approach or when presenting qualitative findings.
Myia Welsh and Johanna Morariu outline six lessons gleaned from their evaluation capacity building experiences with nonprofits, highlighting Innovation Network's white paper titled "Evaluation Capacity Building: Funder Initiatives to Strengthen Grantee Evaluation Capacity and Practice."
Veena Pankaj and Myia Welsh described Innovation Network's participatory approach to evaluation, highlighting how stakeholders can be involved in the analysis and interpretation of data. They also shared tips from Innovation Network's white paper titled "Participatory Analysis: Expanding Stakeholder Involvement in Evaluation."
Johanna Morariu described how Innovation Network worked with the Post Carbon Institute to develop an organizational evaluation framework. We developed and fine-tuned several assessment areas for think tank evaluation and advocacy evaluation using interview data collected from key informants. Then, in consultation with key staff from the Post Carbon Institute, we created an organizational theory of change. The theory contains information about the organization's mission; audiences; strategies; focusing events, crises, and windows of opportunity; desired shifts; and impact.
What are nonprofits really doing to evaluate their work? How are they really using evaluation results? These are the questions we sought to answer in our State of Evaluation project. This blog post summarized key findings from State of Evaluation 2010, the first nationwide project that systematically and repeatedly collects data from U.S. nonprofits about their evaluation practices.
In the past few years, efforts to use common measures to assess and compare nonprofit performance have multiplied. Interest in comparing nonprofit performance is in a dramatic upswing, and new and different sets of common measures seem to emerge frequently: some have been developed for niche fields, while others seek to compare across the entire sector. As evaluators, we should be aware of these efforts and their possible implications. This blog post explored a number of questions about nonprofit rating systems and common measures: Is it possible to develop meaningful common measures for a field as diverse as the nonprofit sector? What can we learn from the experiences of fairly well-known, sector-wide approaches such as Charity Navigator and GreatNonprofits? And considering what we know about existing approaches, what is the effect on traditional program evaluation?