Google Explores New Way to Wake Gemini AI Without Voice Commands
Google appears to be moving away from traditional methods of summoning its AI assistant, Gemini, such as hotwords or button presses. According to a report from Android Headlines, citing a newly filed patent, the company is developing a system that automatically activates Gemini when the phone detects that it is close to the user’s face.
The technology reportedly leverages the phone’s existing capacitive touchscreen sensors, which normally detect finger touches by monitoring changes in an electric field. Google’s approach uses the same sensors to recognize nearby objects, such as a face or a hand, without physical contact. When the phone senses a distinctive pattern in the electric field consistent with a face being close, Gemini could wake automatically, eliminating the need for a hotword or a dedicated button.
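To illustrate the general idea of waking an assistant when something approaches the screen, here is a minimal Kotlin sketch for Android. It is not Google’s patented method: the patent reportedly reads raw capacitive field patterns from the touchscreen, which public Android APIs do not expose, so the standard proximity sensor is used here purely as a stand-in, and the FaceProximityWaker class and onWake callback are hypothetical names chosen for the example.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hypothetical helper: fires a callback when the device reports an object
// very close to the screen. Stand-in for the patent's capacitive-field
// detection, which is not available through public Android APIs.
class FaceProximityWaker(
    context: Context,
    private val onWake: () -> Unit // hypothetical callback, e.g. launch the assistant
) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val proximitySensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)

    fun start() {
        proximitySensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val sensor = proximitySensor ?: return
        // Proximity sensors typically report distance in centimeters; a reading
        // below the sensor's maximum range means something (e.g. a face) is near.
        if (event.values[0] < sensor.maximumRange) {
            onWake()
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```

In practice, the approach described in the patent would presumably add pattern recognition on top of raw sensor data to distinguish a face from, say, a pocket or a hand, rather than triggering on any nearby object as this simplified sketch does.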
Currently, many smartphones offer AI activation through a customizable button or a voice command such as “Hey Google.” These methods, however, can be less effective in noisy environments or when the user is wearing a mask. The new system is designed to make summoning Gemini faster and more convenient, and because it relies on low-power sensors, it is expected to have minimal impact on battery life. Google also aims to make the feature more accurate over time as it learns to detect the user more reliably.
At this stage, the feature remains in the patent phase, and there is no official word on when or if it will arrive on Android devices.