As someone deeply passionate about creating inclusive digital workplaces, and as a long-term user of assistive technology, I’ve witnessed firsthand how technology can both empower and exclude. The last decade has seen remarkable advances, but as we look ahead, I’m convinced the next 10 years will be an even bigger leap, especially when it comes to assistive technologies powered by AI.
Great experiences should not be reserved for consumer apps; employees want and need great tools too. And with emerging technologies like multimodal interfaces, intelligent agents, and ambient computing, that vision can become a reality.
Multimodal interfaces: Technology that speaks, listens, and understands
Imagine a workday where you switch effortlessly between voice, touch, gesture, and even eye or head movement to interact with your tools. That’s the promise of multimodal interfaces, and they’re already reshaping how we communicate and collaborate.
The basic two-senses principle of accessibility is a perfect match for multimodal interfaces: every piece of information must be available through at least two senses. If you can see it on a screen, you should also be able to hear it via a screen reader or read-aloud feature, or touch it via braille or haptic feedback.
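The two-senses principle can be expressed as a simple invariant in code. The sketch below is a hypothetical illustration, not from any real framework: the `Channel`, `OutputTarget`, and `notify` names are assumptions, and the idea is simply that a notification pipeline can refuse to deliver information unless at least two distinct sensory channels will carry it.

```typescript
// A minimal sketch of the two-senses principle: information must
// reach the user through at least two distinct sensory channels.
// All names here (Channel, OutputTarget, notify) are hypothetical.
type Channel = "visual" | "auditory" | "tactile";

interface OutputTarget {
  channel: Channel;
  deliver(message: string): void;
}

// Deliver a message to every target, but first enforce that the
// targets span at least two distinct channels (the two-senses rule).
function notify(message: string, targets: OutputTarget[]): void {
  const channels = new Set(targets.map((t) => t.channel));
  if (channels.size < 2) {
    throw new Error(
      "Two-senses principle violated: at least 2 distinct channels required"
    );
  }
  targets.forEach((t) => t.deliver(message));
}
```

In a real application the visual target might update the DOM, the auditory one might use speech synthesis or an ARIA live region, and the tactile one might trigger haptic feedback; the point of the guard is that no single channel is ever the only path to the information.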