Abstract
The Internet is increasingly used as a medium for gathering and exchanging health information. Healthcare professionals and organizations need to consider accessibility barriers that may exist within their patient-oriented Web applications. One approach to making the Web more accessible to users with lower health literacy may be to supplement textual content with audio annotation generated by text-to-speech engines, in effect creating a virtual surrogate reader. One challenge is that, with numerous text-to-speech engines on the market, objective measures of quality are difficult to obtain. To facilitate comparisons among text-to-speech engines, we developed an open-source Web application that measures user reaction times, subjective quality ratings, and task-completion accuracy across audio files produced by different text-to-speech engines. We successfully built and piloted this Web application; subjective quality ratings differed significantly across three text-to-speech engines priced at different levels. However, no significant differences in reaction times or accuracy were found among these engines. Future avenues of research include exploring more complex tasks, usability issues related to implementing text-to-speech features, and applied health promotion and education opportunities among vulnerable populations.