2023 Symposium Posters

Are Familiar Voices More Trustworthy? (The Model Audio)


Principal Investigator:
Independent Researcher

Project Members
Sabila Nawshin (Snawshin@iu.edu)
Smart voice assistants can easily build models tailored to each individual user from the voice data available to them, and major companies are already considering this. While personalized models would improve voice-recognition performance, they also carry the disturbing potential for building personalized speech-synthesis models from the voice features the assistant extracts. Because a smart voice assistant sits inside a user's home, it can gather voice data not only from the user but also from the friends and family members the user interacts with daily. This data could be used to subtly manipulate the user in various ways, one of which is customizing the assistant's voice to quietly incorporate voice features of the user or of the people close to the user (friends and family members). Since Amazon's smart voice assistant Alexa allows skills developed outside of Amazon to be invoked by the assistant, users could be manipulated by third-party companies exploiting this opportunity with malicious intent.

In our work, we aim to find out how people are affected by synthesized voices personalized toward them through the subtle addition of voice characteristics of people familiar to them. As a first step toward that goal, we personalize the synthetic voice with voice characteristics of celebrities familiar to participants and measure how this affects the believability or trustworthiness of the content presented. The project is in its initial stages, and we would welcome feedback from the community.
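The abstract does not specify how familiar voice characteristics would be injected, but one common mechanism in multi-speaker text-to-speech systems is interpolating speaker embeddings. The sketch below is a minimal, hypothetical illustration of that idea: the function name, the toy 4-dimensional vectors, and the `alpha` mixing parameter are all assumptions for illustration, not part of the project described above.

```python
import numpy as np

def blend_speaker_embeddings(base, target, alpha=0.2):
    """Linearly interpolate a base TTS speaker embedding toward a
    target (familiar) speaker's embedding. alpha controls how subtle
    the injected characteristics are (0 = unchanged, 1 = full target).
    Hypothetical sketch; real systems use high-dimensional embeddings
    learned by a speaker encoder."""
    base = np.asarray(base, dtype=float)
    target = np.asarray(target, dtype=float)
    return (1.0 - alpha) * base + alpha * target

# Toy vectors standing in for real learned speaker embeddings.
assistant_voice = np.array([0.1, 0.5, -0.3, 0.7])
familiar_voice = np.array([0.9, -0.2, 0.4, 0.1])

# A small alpha keeps the change below conscious notice, which is
# exactly the subtle manipulation the abstract warns about.
subtle_mix = blend_speaker_embeddings(assistant_voice, familiar_voice, alpha=0.2)
```

A study like the one proposed could then vary `alpha` across conditions and measure whether trust ratings rise as the synthesized voice drifts toward the familiar speaker.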