Digital health tools, platforms, and artificial intelligence– or machine learning–based clinical decision support systems are increasingly part of health delivery approaches, with an ever-greater degree of system interaction.
Critical to the successful deployment of these tools is their functional integration into existing clinical routines and workflows. This depends on system interoperability and on intuitive and safe user interface design.
It is extremely important that research and effort be directed toward minimizing emergent workflow stress and ensuring purposeful design for integration.
Usability of tools in practice is as important as algorithm quality.
Regulatory and health technology assessment frameworks recognize the importance of these factors to a certain extent, but their focus remains mainly on the individual product rather than on emergent system and workflow effects.
The measurement of performance and user experience has so far been carried out in ad hoc, nonstandardized ways by individual actors using their own evaluation approaches.
This paper proposes a standard framework for system-level, holistic evaluation that is built into interacting digital systems, enabling systematic and standardized system-wide, multiproduct postmarket surveillance and technology assessment.
Such a system could be made available to developers through regulatory or assessment bodies as an application programming interface and could be a requirement for digital tool certification, just as interoperability is. This would enable health systems and tool developers to collect system-level data directly from real device use cases, enabling the controlled and safe delivery of systematic quality assessment or improvement studies suitable for the complexity and interconnectedness of clinical workflows using developing digital health technologies.
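As a rough illustration of what reporting against such a surveillance API might look like, the sketch below aggregates per-tool usage events into a system-wide summary. All names here (`SurveillanceEvent`, `summarize_events`, the event types) are hypothetical, invented for illustration; no real regulatory interface or certification scheme is implied.

```python
# Hypothetical sketch of system-level postmarket surveillance data collection.
# Names and event types are illustrative only, not a real regulatory API.
from dataclasses import dataclass
from collections import Counter

@dataclass
class SurveillanceEvent:
    """One real-world use record reported by a certified digital tool."""
    tool_id: str          # certified product identifier (hypothetical)
    event_type: str       # e.g. "recommendation_shown", "override", "error"
    clinician_role: str   # workflow context, e.g. "nurse", "physician"

def summarize_events(events):
    """Aggregate per-tool event counts for system-wide postmarket review."""
    summary = {}
    for e in events:
        summary.setdefault(e.tool_id, Counter())[e.event_type] += 1
    return summary

# Example: events from two interacting tools in one clinical workflow
events = [
    SurveillanceEvent("cds-001", "recommendation_shown", "physician"),
    SurveillanceEvent("cds-001", "override", "physician"),
    SurveillanceEvent("triage-002", "error", "nurse"),
]
print(summarize_events(events))
```

The key design point, under these assumptions, is that each tool reports events in a shared schema, so a regulator or health system can analyze interactions across products rather than evaluating each product in isolation.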
Read the entire paper at https://www.jmir.org/2023/1/e50158
In recent conversations with FIND and the WHO, we have discussed the framework that needs to be defined and put in place for assessing the effectiveness of AI tools used in medicine at the provider level. The basic reasoning behind these conversations has been to ensure that trust in a particular AI technology used in healthcare is first established and then maintained through regular monitoring, rather than simply buying into far-fetched stories. This paper and several others on the topic will be highlighted here, and on Plus91's Medium blog we will put several frameworks together.