Earlier today Apple released macOS Big Sur to the public. The update brings the biggest visual changes to the Mac since the introduction of OS X, with more translucency, new icons, Control Center, and more.
While the visual changes are getting the most attention, an underrated feature has to do with FaceTime video. Millions around the world are using their Macs and FaceTime to work and learn remotely as a result of COVID-19. So much so that Apple’s new MacBook lineup includes updated image signal processing, which Apple says will result in sharper images from the still-720p built-in webcam.
On macOS Big Sur, and as it turns out on iOS 14 and iPadOS 14 as well, FaceTime can now automatically detect when a participant is using sign language and will make them the prominent speaker in group calls.
This is a great feature for group calls where certain members may be deaf or hard of hearing and rely on a sign language interpreter. It is enabled by default on macOS Big Sur, iOS 14, and iPadOS 14. Apple is vague on how exactly it works, but it’s likely using on-device machine learning to detect common signs such as “Thank you” or “Hello,” which triggers the system to make that person the prominent speaker.
Apple originally introduced group FaceTime calls in iOS 12; during a call, the tile of whoever is currently speaking is enlarged to indicate they have the floor. With the new feature on Big Sur and iOS/iPadOS 14, those tiles will now also be enlarged for a sign language interpreter.
Apple has been one of the few major tech companies to take accessibility seriously. With every OS update, it includes new accessibility features that ensure its devices, services, and software can be used by as many people as possible.