AI-based voice diagnostics analyse spoken audio; they are used in health to monitor disease and in education to guide learning. However, current AI that diagnoses on the basis of the human voice has two shortcomings. First, it faces a capability gap: it lacks broad applicability. Second, it faces a responsibility gap: it puts users at risk because it fails to prioritise privacy. The goal of this project is to develop responsible AI for voice diagnostics that bridges both gaps. We make two key scientific contributions to the state of the art in AI: first, we use the principles and practices of data-centric AI to extend the capabilities of voice-based AI; second, we develop cutting-edge information minimisation to achieve responsible AI that addresses privacy risks. Whereas previous work has emphasised algorithms over data when seeking to improve performance, and has pursued information-greedy approaches at odds with privacy, the proposed project will advance the state of the art in health, with respect to diagnosing and managing neurodegenerative disease (Alzheimer’s and Parkinson’s), and in education, with respect to reading and pronunciation skills. It advances both offline diagnosis based on pre-recorded speech data and online diagnosis, including monitoring and guidance, via spoken interaction, i.e., as a voicebot. The project consists of six interrelated PhD projects that decisively enhance voice-based AI to serve a more diverse group of people with a wider range of needs for diagnosis and interaction.
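
To make the idea of information minimisation concrete, the following is a minimal sketch, not the project's actual method: raw audio (which can identify the speaker) is reduced on-device to a handful of aggregate acoustic descriptors, and only those summaries are retained. The function name and the feature set (frame energy and zero-crossing rate) are hypothetical illustrations chosen for simplicity.

```python
import math
from statistics import mean, pstdev

def minimise_voice_features(waveform, sr=16000):
    """Reduce raw audio samples to a few aggregate descriptors.

    Illustrative information minimisation: only coarse summary
    statistics leave this function; the identifiable raw waveform
    is discarded. The feature choice is a hypothetical example.
    """
    frame_len = int(0.025 * sr)   # 25 ms analysis frames
    hop = int(0.010 * sr)         # 10 ms hop between frames
    energies, zcrs = [], []
    for start in range(0, len(waveform) - frame_len + 1, hop):
        frame = waveform[start:start + frame_len]
        # mean squared amplitude of the frame
        energies.append(mean(s * s for s in frame))
        # zero-crossing rate: fraction of adjacent sample pairs changing sign
        crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
        zcrs.append(crossings / (len(frame) - 1))
    # only aggregates are returned; per-sample data never leaves this scope
    return {
        "energy_mean": mean(energies),
        "energy_std": pstdev(energies),
        "zcr_mean": mean(zcrs),
    }

# Example: one second of a synthetic 440 Hz tone standing in for speech
sr = 16000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
features = minimise_voice_features(tone, sr)
```

A downstream diagnostic model would then see only `features`, never the recording itself, which is the privacy trade-off the information-minimisation approach targets.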