At Tech Advisors, I had the opportunity to work with a client who was introducing an Augmentative and Alternative Communication (AAC) app for individuals with motor impairments. One example that stood out was the Voiceitt communication app. The client had staff members with atypical speech patterns, and we helped integrate the app so they could train it to recognize their unique pronunciation. The moment a staff member used their trained phrases and the app vocalized them in a clear synthesized voice, you could see the shift in confidence. It gave them a way to participate in meetings and everyday interactions without feeling left out.

When supporting this project, I focused on practical considerations. Accuracy in voice recognition was critical, especially in environments with background noise, so we tested the app across different scenarios, from quiet offices to busy common areas, to make sure it performed consistently. We also made sure users had alternative input options, such as text and keyboard navigation, so they were never restricted to a single mode of communication. The training process itself became a positive reinforcement tool for speech therapy, improving consistency and giving users a sense of progress.

For anyone working on inclusive technology, I recommend starting with direct feedback from the people who will rely on the tool. We learned early on that assumptions from the design side rarely matched real user needs. Involving users at every step uncovered challenges such as the app timing out too quickly when a person needed extra moments to form words; small adjustments like extending response times made a big difference. Solving for one specific challenge, like atypical speech recognition, can often extend benefits to a much wider group, a lesson I continue to carry forward in my work.
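The response-time adjustment mentioned above can be sketched in a platform-agnostic way. This is an illustrative sketch, not Voiceitt's actual implementation; the class name, parameters, and default values are all assumptions. The idea is simply that partial speech input should push the silence deadline out rather than letting a fixed timeout cut the user off mid-phrase.

```python
class AdaptiveListener:
    """Input listener whose silence deadline extends while the user is
    still forming a phrase, instead of cutting off at a fixed time.
    All names and defaults here are hypothetical."""

    def __init__(self, base_timeout=3.0, extension=2.0, max_timeout=15.0):
        self.base_timeout = base_timeout   # seconds of silence before giving up
        self.extension = extension         # extra time granted on partial input
        self.max_timeout = max_timeout     # hard upper bound on waiting
        self.deadline = base_timeout

    def on_partial_input(self, elapsed):
        # Grant more time measured from the moment speech was detected,
        # but never push the deadline past the hard cap.
        self.deadline = min(max(self.deadline, elapsed + self.extension),
                            self.max_timeout)

    def timed_out(self, elapsed):
        return elapsed >= self.deadline


listener = AdaptiveListener()
listener.on_partial_input(elapsed=2.5)  # user started a word near the old 3 s cutoff
listener.timed_out(3.0)                 # False: deadline moved out to 4.5 s
```

The cap matters: without `max_timeout`, a noisy environment that keeps triggering partial detections could leave the listener waiting indefinitely.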
I worked on a project where we integrated speech-to-text technology into a web platform for users with hearing impairments. The goal was to make live webinars and instructional videos fully accessible. I focused on accuracy, latency, and context sensitivity, ensuring the system captured technical terms correctly and displayed real-time captions without significant delay. We also added customization options, letting users adjust font size, color contrast, and playback speed. Testing with actual users across a range of hearing abilities surfaced gaps we hadn't anticipated, such as differentiating multiple speakers and handling background noise. Considering both the technology and the user experience from the start made the platform more inclusive, empowering users to participate fully rather than feeling limited by accessibility barriers.
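Two of the points above, per-user display customization and speaker differentiation, can be illustrated with a small sketch. Everything here (`CaptionPrefs`, `format_segment`, the specific style fields and colors) is a hypothetical stand-in, not the platform's actual code; the point is that each caption segment carries a speaker label and is rendered through the user's own preferences rather than a fixed style.

```python
from dataclasses import dataclass


@dataclass
class CaptionPrefs:
    """Per-user display settings (illustrative fields and defaults)."""
    font_size_px: int = 18
    high_contrast: bool = False


def format_segment(speaker: str, text: str, prefs: CaptionPrefs) -> dict:
    """Return a render-ready caption: label the speaker so overlapping
    voices stay distinguishable, and attach the user's display settings."""
    return {
        "text": f"[{speaker}] {text}",
        "style": {
            "fontSize": f"{prefs.font_size_px}px",
            "color": "#FFFFFF" if prefs.high_contrast else "#DDDDDD",
            "background": "#000000" if prefs.high_contrast else "rgba(0,0,0,0.6)",
        },
    }


seg = format_segment("Presenter", "Keep latency low", CaptionPrefs(high_contrast=True))
```

Keeping preferences separate from the caption stream means the same live transcript can be rendered differently for every viewer without reprocessing the audio.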
We integrated voice navigation into a retail platform to support users with limited mobility and visual impairments. Instead of relying solely on screen readers, which often required complex keyboard shortcuts, we built a speech interface that allowed customers to search products, add items to carts, and complete checkout through simple voice commands. The design emphasized natural phrasing rather than rigid command structures, so users could interact with the system as if they were speaking to a store associate.

The main considerations were clarity, error recovery, and privacy. Voice prompts were structured in short, unambiguous steps to reduce cognitive load, and the system confirmed key actions to avoid accidental purchases. We also ensured data security by limiting voice data storage and giving users the option to mute or pause listening features. This approach not only made the platform more inclusive but also demonstrated that accessibility improvements can simultaneously enhance convenience for all users, not just those with disabilities.
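The confirmation step for key actions can be sketched as a simple gate: low-stakes commands run immediately, while anything that spends money requires an explicit spoken confirmation, so a misheard phrase cannot complete a purchase on its own. The command names and response strings below are illustrative assumptions, not the platform's real interface.

```python
# Hypothetical set of commands that must never run on a single utterance.
HIGH_STAKES = {"checkout", "place order"}


def handle_command(command: str, confirmed: bool = False) -> str:
    """Execute low-stakes commands directly; gate high-stakes ones
    behind an explicit confirmation from the user."""
    if command in HIGH_STAKES and not confirmed:
        return f"You asked to {command}. Say 'yes' to confirm or 'cancel' to stop."
    return f"Executing: {command}"


handle_command("search shoes")              # runs immediately
handle_command("checkout")                  # asks for confirmation first
handle_command("checkout", confirmed=True)  # runs once the user says 'yes'
```

Echoing the command back in the confirmation prompt also doubles as error recovery: if the recognizer misheard, the user learns what the system thinks it heard before anything happens.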