Last updated on 2024-12-24
Step 5: Define measurable criteria to meet the evaluation objectives
In this step you will look at your objectives, as outlined in the Evaluation Plan, and define measurable criteria to assess them. Depending on the context of the digital experience you want to evaluate, you can deploy a framework to examine different aspects of the experience, such as:
- The Generic Learning Outcomes (GLOs) framework, which focuses on learning outcomes in the broad sense: knowledge and understanding; skills; attitudes and values; enjoyment, inspiration and creativity; and activity, behaviour and progression. This question bank can also help in framing questions around the GLOs of the experience.
- The Generic Social Outcomes (GSOs) framework, which focuses on strengthening public life, building stronger communities, and enhancing well-being.
In the context of User Experience (UX), you might also want to evaluate certain aspects of the experience of the digital application, such as:
- Utility: Does the user find the system’s functions practical and appropriate for their needs?
- Usability: Does the user find it straightforward and efficient to accomplish tasks with the system?
- Aesthetics: Does the user find the system visually appealing? Is it pleasant to interact with?
- Identification: Can the user relate to the product? Do they feel good about themselves when using it?
- Stimulation: Does the system spark the user’s creativity or provide exciting experiences?
- Value: Is the system significant to the user? What value does it hold from their perspective?
Please note that many UX aspects overlap with the GLOs; when choosing which ones to include in your evaluation, be mindful not to repeat questions that are identical or might yield similar responses. One aspect not covered thoroughly by the GLOs is usability, an essential element of UX. Usability refers to the ease with which users can interact with a product or system to achieve their goals effectively, efficiently, and with satisfaction. It assesses how user-friendly and intuitive a product is, focusing on factors such as ease of learning, efficiency of use, memorability, error prevention and recovery, and user satisfaction. There are different ways to conduct usability testing, but it typically involves a set of tasks which users have to accomplish while facilitators observe, listen and take notes.
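Observations from a usability test can be turned into simple quantitative measures. The sketch below is a minimal, hypothetical example: the record structure and task names are illustrative assumptions, not part of any particular toolkit, but it shows how facilitators' notes (task completion, time on task, error counts) can be summarised into comparable numbers.

```python
# Hypothetical usability-test results: one record per participant per task.
# Field and task names are illustrative assumptions.
results = [
    {"participant": "P1", "task": "find_audio_guide", "completed": True, "seconds": 42, "errors": 0},
    {"participant": "P2", "task": "find_audio_guide", "completed": True, "seconds": 75, "errors": 2},
    {"participant": "P3", "task": "find_audio_guide", "completed": False, "seconds": 120, "errors": 4},
]

def task_metrics(records):
    """Summarise effectiveness (completion rate), efficiency (mean time
    on task for successful attempts) and total error count."""
    completed = [r for r in records if r["completed"]]
    completion_rate = len(completed) / len(records)
    mean_time = (sum(r["seconds"] for r in completed) / len(completed)
                 if completed else None)
    total_errors = sum(r["errors"] for r in records)
    return {"completion_rate": completion_rate,
            "mean_time_on_task": mean_time,
            "total_errors": total_errors}

print(task_metrics(results))
```

Tracking the same metrics across design iterations makes it possible to show whether usability is actually improving, rather than relying on impressions alone.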
In addition, log data are a useful source to take into account when evaluating a digital application or system: these records are generated automatically to track transactions, changes, and performance metrics. Log data include page views, click-through rates, bounce rate, session duration, error rate, scroll depth and more.
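Several of the metrics listed above can be derived directly from raw page-view records. The sketch below is a simplified illustration: the log format and grouping rules are assumptions for the example, and real analytics platforms define these metrics in their own ways.

```python
from datetime import datetime

# Hypothetical page-view log: (session_id, timestamp, page).
# The format is an assumption for this example.
log = [
    ("s1", datetime(2024, 6, 1, 10, 0, 0), "/home"),
    ("s1", datetime(2024, 6, 1, 10, 2, 30), "/collection"),
    ("s2", datetime(2024, 6, 1, 11, 0, 0), "/home"),  # single-page visit
    ("s3", datetime(2024, 6, 1, 12, 0, 0), "/home"),
    ("s3", datetime(2024, 6, 1, 12, 4, 0), "/visit"),
]

def session_metrics(entries):
    """Compute total page views, bounce rate (share of single-page
    sessions) and mean session duration in seconds."""
    sessions = {}
    for sid, ts, _page in entries:
        sessions.setdefault(sid, []).append(ts)
    bounces = sum(1 for times in sessions.values() if len(times) == 1)
    durations = [(max(t) - min(t)).total_seconds()
                 for t in sessions.values() if len(t) > 1]
    return {
        "page_views": len(entries),
        "bounce_rate": bounces / len(sessions),
        "mean_session_duration": sum(durations) / len(durations) if durations else 0.0,
    }

print(session_metrics(log))
```

Even a small script like this can help relate log data back to your objectives, for example by checking whether session duration increases after an accessibility improvement.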
More information on usability testing and the way/s to conduct it can be found here:
- Moran, K. (2019). Usability Testing 101. Nielsen Norman Group. Retrieved from: https://www.nngroup.com/articles/usability-testing-101/
Looking at the example objectives proposed in the Evaluation Plan (previous step), these could be measured as follows:
- ‘to enhance the interpretation of our collection/s for visually impaired audiences with the use of technology’: this could be measured in terms of knowledge and understanding; skills; attitudes and values; enjoyment, inspiration and creativity; activity, behaviour and progression, as presented in the GLOs.
- ‘to enable independent exploration of our collection with the use of technology for visually impaired audiences’: this can be measured in terms of attitudes and values and/or activity, behaviour and progression, as presented in the GLOs. It can also be measured through usability testing, where users attempt to accomplish tasks while facilitators observe, listen and take notes.
- ‘to increase the number of visually impaired users who visit our museum’: this can be measured through ticket sales over a given period of time, organised visits from visually impaired groups, and so on.
Now, look at the objectives in the Evaluation Plan you have developed and try to define how to measure them. You can add your notes on the Evaluation Plan next to the objectives or use a separate document/sheet of paper to note them down.
More information on frameworks/criteria, data collection methods, data analysis and presentation as well as general guidance can be found here:
- Diamond, J., Horn, M., & Uttal, D. H. (2016). Practical evaluation guide: Tools for museums and other informal educational settings (3rd ed.). Rowman & Littlefield Publishers.
- Share Museums East (2020). Evaluation Toolkit for Museums. Retrieved from: https://www.sharemuseumseast.org.uk/wp-content/uploads/2020/05/SHARE_Evaluation_Toolkit_FINAL_WEB.pdf.