Google Patent Applications

DUAL BAND WIRELESS COMMUNICATIONS FOR MULTIPLE CONCURRENT AUDIO STREAMS

Published: March 20, 2025
Application Number: 20250097623
Various arrangements for performing wireless device-to-device communication are presented. An audio output device, such as an earbud or pair of earbuds, can establish a connection with an audio source via a first Bluetooth interface that communicates using a Bluetooth communication protocol on a 2.4 GHz Bluetooth frequency band. The audio output device can negotiate that Bluetooth frequency-shifted communication, such as on a 5 or 6 GHz frequency band, is available for use with the audio…
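
As a rough illustration only, here is a minimal Python sketch of how an earbud might choose a stream band after such a negotiation; the `SourceCapabilities` structure and the band names are hypothetical stand-ins, not the over-the-air protocol in the filing.

```python
from dataclasses import dataclass
from enum import Enum


class Band(Enum):
    BT_2_4_GHZ = "2.4 GHz"
    HIGH_5_GHZ = "5 GHz"
    HIGH_6_GHZ = "6 GHz"


@dataclass
class SourceCapabilities:
    """Capabilities the audio source advertises during negotiation (assumed)."""
    supports_high_band: bool
    available_bands: tuple


def negotiate_stream_band(caps: SourceCapabilities,
                          preferred: Band = Band.HIGH_6_GHZ) -> Band:
    """Pick the band for the audio stream; the control link stays on 2.4 GHz."""
    if caps.supports_high_band:
        if preferred in caps.available_bands:
            return preferred
        for band in (Band.HIGH_6_GHZ, Band.HIGH_5_GHZ):
            if band in caps.available_bands:
                return band
    # No usable high band: fall back to standard Bluetooth on 2.4 GHz.
    return Band.BT_2_4_GHZ


if __name__ == "__main__":
    caps = SourceCapabilities(supports_high_band=True,
                              available_bands=(Band.HIGH_5_GHZ,))
    print(negotiate_stream_band(caps))  # Band.HIGH_5_GHZ
```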

Housing Assemblies

Published: March 20, 2025
Application Number: 20250097331
Techniques and apparatuses are described that implement housing assemblies for computing devices. In aspects, a housing assembly includes an elongated side-frame element comprising a first metal and a cast internal frame comprising a second, different, metal. The melting point of the first metal is higher than the melting point of the second metal. The elongated side-frame element may include at least one elongated slot disposed on an inner surface of the elongated side-frame element,…

Integrated Second Factor Authentication

Published: March 20, 2025
Application Number: 20250097218
Techniques and apparatuses are described that enable integrated second factor authentication. These techniques and apparatuses provide the improved security of a "something you have" factor without the accompanying inconvenience or risk of loss. To do so, a secure physical entity is integrated within a computing device. While this provides the "something you have" without requiring the user to carry a separate object, that factor also must not be accessible remotely. To prevent…
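
A toy Python sketch of the core idea of a second factor that is integrated yet unreachable remotely; the class, its methods, and the presence flag are illustrative assumptions, not the filing's design.

```python
import hashlib
import hmac
import os


class IntegratedSecureElement:
    """Toy stand-in for a secure physical entity embedded in the device.

    The key never leaves the element, and a challenge is only answered when
    the request is local and physical presence was just confirmed.
    """

    def __init__(self):
        self._key = os.urandom(32)         # provisioned once, never exported
        self._presence_confirmed = False   # e.g. set by a button press

    def confirm_physical_presence(self):
        self._presence_confirmed = True

    def answer_challenge(self, challenge: bytes, request_is_local: bool) -> bytes:
        if not request_is_local:
            raise PermissionError("remote access to the second factor is refused")
        if not self._presence_confirmed:
            raise PermissionError("physical presence not confirmed")
        self._presence_confirmed = False   # one-shot: require presence per use
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


if __name__ == "__main__":
    element = IntegratedSecureElement()
    element.confirm_physical_presence()
    print(len(element.answer_challenge(b"login-nonce", request_is_local=True)))  # 32
```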

USING NON-PARALLEL VOICE CONVERSION FOR SPEECH CONVERSION MODELS

Published: March 20, 2025
Application Number: 20250095639
A method includes receiving a set of training utterances each including a non-synthetic speech representation of a corresponding utterance, and for each training utterance, generating a corresponding synthetic speech representation by using a voice conversion model. The non-synthetic speech representation and the synthetic speech representation form a corresponding training utterance pair. At each of a plurality of output steps for each training utterance pair, the method also includes…
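
A minimal sketch, assuming a hypothetical `voice_conversion_model` callable, of how each real utterance could be paired with its voice-converted synthetic counterpart to form training pairs.

```python
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class TrainingPair:
    non_synthetic: np.ndarray  # features of the original (real) utterance
    synthetic: np.ndarray      # voice-converted counterpart of the same utterance


def build_training_pairs(utterances: List[np.ndarray],
                         voice_conversion_model: Callable[[np.ndarray], np.ndarray]
                         ) -> List[TrainingPair]:
    """Pair each real utterance with a synthetic version produced from it."""
    return [TrainingPair(non_synthetic=u, synthetic=voice_conversion_model(u))
            for u in utterances]


if __name__ == "__main__":
    # Stand-in for a real voice conversion model: perturb the features slightly.
    fake_vc = lambda feats: feats + 0.01 * np.random.randn(*feats.shape)
    utts = [np.random.randn(100, 80) for _ in range(4)]  # 100 frames x 80 mel bins
    pairs = build_training_pairs(utts, fake_vc)
    print(len(pairs), pairs[0].synthetic.shape)  # 4 (100, 80)
```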

Respiration Rate Sensing

Published: March 13, 2025
Application Number: 20250082300
Techniques and apparatuses are described that perform respiration rate sensing. Provided according to one or more preferred embodiments is a hearable, such as an earbud, that is capable of performing a novel physiological monitoring process termed herein audioplethysmography, an active acoustic method capable of sensing subtle physiologically-related changes observable at a user's outer and middle ear. Instead of relying on other auxiliary sensors, such as optical or electrical sensors,…
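
The abstract describes sensing slow, physiologically related changes acoustically; below is a hedged numpy sketch of one plausible back end, estimating breaths per minute from the envelope of a reflected probe tone. The sampling rate, frequency band, and simulated signal are assumptions for illustration.

```python
import numpy as np


def estimate_respiration_rate(echo_envelope: np.ndarray, fs: float) -> float:
    """Estimate breaths per minute from the slow modulation of an in-ear echo."""
    env = echo_envelope - echo_envelope.mean()
    spectrum = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    # Typical adult respiration falls roughly between 0.1 Hz and 0.5 Hz.
    band = (freqs >= 0.1) & (freqs <= 0.5)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0


if __name__ == "__main__":
    fs = 20.0                                   # envelope sample rate (Hz)
    t = np.arange(0, 60, 1.0 / fs)              # one minute of data
    envelope = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t)  # 15 breaths/min
    print(round(estimate_respiration_rate(envelope, fs), 1))  # 15.0
```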

LOCAL AUTOMATION ENGINE IN DISTRIBUTED ENVIRONMENT

Published: March 13, 2025
Application Number: 20250088412
Local execution of smart device mesh automations with cloud-based failover is described herein. Embodiments operate in the context of network-connected devices in a smart device mesh with a local automation system, where all devices communicate with a cloud-based automation system and at least some also communicate with the local automation system. For each automation routine, a determination is made whether to claim it for local execution or to have the cloud execute it automatically when triggered.…
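
A small Python sketch of the claim-then-failover idea: a routine is claimed for local execution only if the local engine can reach every device it uses, and execution falls back to the cloud if the local attempt fails. `run_local` and `run_cloud` are hypothetical callables, not APIs from the filing.

```python
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Routine:
    name: str
    device_ids: List[str]


def claim_for_local(routine: Routine, locally_reachable: Set[str]) -> bool:
    """Claim a routine for local execution only if the local engine can reach
    every device the routine touches."""
    return all(dev in locally_reachable for dev in routine.device_ids)


def run_routine(routine: Routine, claimed_local: bool,
                run_local: Callable[[Routine], None],
                run_cloud: Callable[[Routine], None]) -> str:
    """Execute locally when claimed; fall back to the cloud if that fails."""
    if claimed_local:
        try:
            run_local(routine)
            return "local"
        except Exception:
            pass  # local engine unreachable or errored; use the cloud instead
    run_cloud(routine)
    return "cloud"


if __name__ == "__main__":
    routine = Routine("evening lights", ["lamp-1", "lamp-2"])
    claimed = claim_for_local(routine, locally_reachable={"lamp-1", "lamp-2"})
    print(run_routine(routine, claimed,
                      run_local=lambda r: None,
                      run_cloud=lambda r: None))  # local
```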

Grid-Based Enrollment for Face Authentication

Published: March 13, 2025
Application Number: 20250087020
This document describes techniques and systems that enable grid-based enrollment for face authentication. The techniques and systems include overlaying a three-dimensional (3D) tracking window over a preview image of the user's face displayed via a display device. The 3D tracking window includes a plurality of segments that persist on the display and correspond to an approximate direction that the user's face is facing. Based on the tracking, segments are highlighted to indicate the approximate…
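
A rough sketch of the geometry behind such a grid: mapping an estimated head pose to one of a fixed set of segments and tracking which segments have been captured. The 3x3 layout and the ±30 degree pose range are assumptions, not values from the filing.

```python
GRID_ROWS, GRID_COLS = 3, 3   # assumed segment layout
MAX_ANGLE_DEG = 30.0          # assumed head-pose range covered by the grid


def segment_for_pose(yaw_deg: float, pitch_deg: float) -> tuple:
    """Map an estimated head pose to the (row, col) grid segment it points at."""
    def to_index(angle: float, n: int) -> int:
        clipped = max(-MAX_ANGLE_DEG, min(MAX_ANGLE_DEG, angle))
        frac = (clipped + MAX_ANGLE_DEG) / (2 * MAX_ANGLE_DEG)
        return min(n - 1, int(frac * n))
    return to_index(pitch_deg, GRID_ROWS), to_index(yaw_deg, GRID_COLS)


def enrollment_progress(captured_segments: set) -> float:
    """Fraction of grid segments for which an enrollment image has been captured."""
    return len(captured_segments) / (GRID_ROWS * GRID_COLS)


if __name__ == "__main__":
    captured = set()
    for yaw, pitch in [(-25, -25), (0, 0), (25, 25), (0, 25)]:
        captured.add(segment_for_pose(yaw, pitch))
    print(f"{enrollment_progress(captured):.2f}")  # 0.44
```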

CO-PLANAR WAVEGUIDE FLUX QUBITS

Published: March 13, 2025
Application Number: 20250086489
A qubit device includes an elongated thin film uninterrupted by Josephson junctions, a quantum device in electrical contact with a proximal end of the elongated thin film, and a ground plane that is co-planar with the elongated thin film and is in electrical contact with a distal end of the elongated thin film, in which the thin film, the quantum device, and the ground plane comprise a material that is superconducting at a designed operating temperature.

PRIVACY-PRESERVING DATA PROCESSING FOR CONTENT DISTRIBUTION

Published: March 13, 2025
Application Number: 20250086300
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for distributing digital contents to client devices are described. For each of a plurality of client devices, the system receives a digital component request, identifies one or more user attributes of a user based on the digital component request, and sends the identified user attributes to the client device. The system obtains, from a shared storage of each client device, accumulated user…
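
A simplified Python sketch of the client-side flow the abstract hints at: attributes identified per request are accumulated in on-device shared storage and used to pick a digital component locally. The request and inventory shapes here are illustrative assumptions, not the system's actual interfaces.

```python
from collections import Counter
from typing import List, Optional


class ClientSharedStorage:
    """Stands in for on-device shared storage that accumulates user attributes
    across requests so selection can happen locally."""

    def __init__(self):
        self._attributes = Counter()

    def accumulate(self, attributes: List[str]) -> None:
        self._attributes.update(attributes)

    def top_attributes(self, k: int = 3) -> List[str]:
        return [attr for attr, _ in self._attributes.most_common(k)]


def handle_request(request: dict, storage: ClientSharedStorage,
                   inventory: List[dict]) -> Optional[str]:
    """Fold the attributes identified for this request into shared storage,
    then pick the first digital component whose topics overlap them."""
    storage.accumulate(request["attrs"])
    interests = set(storage.top_attributes())
    for component in inventory:
        if interests & set(component["topics"]):
            return component["id"]
    return None


if __name__ == "__main__":
    storage = ClientSharedStorage()
    inventory = [{"id": "cmp-1", "topics": ["travel"]},
                 {"id": "cmp-2", "topics": ["cooking"]}]
    print(handle_request({"attrs": ["cooking", "baking"]}, storage, inventory))  # cmp-2
```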

ELECTRONIC DEVICE AND METHOD FOR ACTIVITY DETECTION

Published: March 13, 2025
Application Number: 20250085740
Features described herein generally relate to an electronic device and a method for activity detection. Particularly, an electronic device can be detected as being in a docked mode and/or a tablet mode. In the docked mode, activity can be detected based on a first detector. In the tablet mode, activity can be detected based on a second detector. The activity can be classified as corresponding to an activity type and a display screen of the electronic device can be updated based on the…
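
A minimal dispatch sketch: the device mode selects which (hypothetical) detector runs, and the classified activity type drives a display update. The activity labels and display states are assumptions for illustration.

```python
from enum import Enum
from typing import Callable


class Mode(Enum):
    DOCKED = "docked"
    TABLET = "tablet"


def detect_activity(mode: Mode,
                    docked_detector: Callable[[], str],
                    tablet_detector: Callable[[], str]) -> str:
    """Run whichever detector matches the device's current mode."""
    detector = docked_detector if mode is Mode.DOCKED else tablet_detector
    return detector()


def display_update_for(activity: str) -> str:
    """Map the classified activity type onto a display-screen update."""
    return {"approaching": "wake_screen",
            "reading": "keep_screen_on",
            "absent": "dim_screen"}.get(activity, "no_change")


if __name__ == "__main__":
    activity = detect_activity(Mode.DOCKED,
                               docked_detector=lambda: "approaching",
                               tablet_detector=lambda: "reading")
    print(display_update_for(activity))  # wake_screen
```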

Audioplethysmography Calibration

Published: March 13, 2025
Application Number: 20250082210
Techniques and apparatuses are described that perform audioplethysmography calibration. Provided according to one or more preferred embodiments is a hearable, such as an earbud, that is capable of performing a novel physiological monitoring process termed herein audioplethysmography, an active acoustic method capable of sensing subtle physiologically-related changes observable at a user's outer and middle ear. The hearable can utilize audioplethysmography to monitor a user's biometrics,…
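
One plausible calibration step, sketched with an assumed `measure_snr_db` callable: sweep candidate probe tones and keep the frequency whose echo is strongest for this particular ear and earbud fit. The frequency range is illustrative.

```python
from typing import Callable, List, Tuple

import numpy as np


def pick_probe_frequency(candidates_hz: List[int],
                         measure_snr_db: Callable[[int], float]) -> Tuple[int, float]:
    """Return the probe tone whose echo gives the highest SNR in this ear."""
    snrs = [measure_snr_db(f) for f in candidates_hz]
    best = int(np.argmax(snrs))
    return candidates_hz[best], snrs[best]


if __name__ == "__main__":
    # Simulated ear/earbud response that happens to peak near 34 kHz.
    simulated_snr = lambda f: 20.0 - abs(f - 34_000) / 1_000.0
    candidates = list(range(30_000, 41_000, 1_000))
    print(pick_probe_frequency(candidates, simulated_snr))  # (34000, 20.0)
```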

Refining a Search Using Physiological Information

Published: March 6, 2025
Application Number: 20250077508
This document describes techniques and devices for a radar recognition-aided search. Through use of a radar-based recognition system, gestures made by, and physiological information about, persons can be determined. In the case of physiological information, the techniques can use this information to refine a search. For example, if a person requests a search for a coffee shop, the techniques may refine the search to coffee shops in the direction that the person is walking. In the case of…
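
A toy sketch of the search-refinement step only: if the recognition system reports a walking heading, the query is narrowed to that direction. The heading value is assumed to come from the radar system elsewhere.

```python
from typing import Optional


def heading_to_compass(heading_deg: float) -> str:
    """Quantize a walking heading (degrees clockwise from north) to a compass word."""
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    return names[int(((heading_deg % 360) + 22.5) // 45) % 8]


def refine_query(query: str, walking_heading_deg: Optional[float]) -> str:
    """Narrow the query to the direction the person is walking, if known."""
    if walking_heading_deg is None:
        return query
    return f"{query} to the {heading_to_compass(walking_heading_deg)}"


if __name__ == "__main__":
    print(refine_query("coffee shop", 85.0))  # coffee shop to the east
    print(refine_query("coffee shop", None))  # coffee shop
```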

Integrated Vapor Chamber for Electronic Devices

Published: March 6, 2025
Application Number: 20250081397
This document describes a vapor chamber within an electronic device. In aspects, an electronic device includes a middle frame that provides mechanical support for the electronic device, a middle plate affixed to the middle frame to define an inner layer of a chassis, and a vapor chamber disposed inside the middle plate. The vapor chamber includes a first region proximate to a heat source and a second region opposite the first region. A coolant is evaporated in a first mode at the first…

TARGET SPEAKER KEYWORD SPOTTING

Published: March 6, 2025
Application Number: 20250078840
A method includes receiving audio data corresponding to an utterance spoken by a particular user and captured in streaming audio by a user device. The method also includes performing speaker identification on the audio data to identify an identity of the particular user that spoke the utterance. The method also includes obtaining a keyword detection model personalized for the particular user based on the identity of the particular user that spoke the utterance. The keyword detection…
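
A hedged sketch of the two-stage idea: identify the speaker from a voice embedding, then load the keyword-detection model personalized for that speaker. Cosine-similarity matching and the 0.7 threshold are assumptions, not details from the filing.

```python
from typing import Dict, Optional

import numpy as np


def identify_speaker(utterance_embedding: np.ndarray,
                     enrolled: Dict[str, np.ndarray],
                     threshold: float = 0.7) -> Optional[str]:
    """Return the enrolled speaker whose voice embedding is closest by cosine
    similarity, or None if no enrolled speaker clears the threshold."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_id, best_score = None, threshold
    for speaker_id, reference in enrolled.items():
        score = cosine(utterance_embedding, reference)
        if score > best_score:
            best_id, best_score = speaker_id, score
    return best_id


def keyword_model_for(speaker_id: Optional[str], model_store: Dict[str, object]):
    """Fetch the keyword-detection model personalized for the identified speaker."""
    return model_store.get(speaker_id) if speaker_id else None
```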

Adapter Finetuning with Teacher Pseudo-Labeling for Tail Languages in Streaming Multilingual ASR

Published: March 6, 2025
Application Number: 20250078830
A method includes receiving a sequence of acoustic frames characterizing a spoken utterance in a particular native language. The method also includes generating a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames by a causal encoder that includes an initial stack of multi-head attention layers. The method also includes generating a second higher order feature representation for a corresponding first higher order feature…
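
Setting the filing's cascaded-encoder details aside, here is a generic numpy sketch of a bottleneck adapter of the kind typically fine-tuned per tail language while the large multilingual encoder stays frozen; all dimensions are illustrative.

```python
import numpy as np


class Adapter:
    """Bottleneck adapter inserted after a frozen encoder layer; only these
    weights would be fine-tuned for a given tail language."""

    def __init__(self, d_model: int = 512, bottleneck: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, (d_model, bottleneck))
        self.w_up = rng.normal(0.0, 0.02, (bottleneck, d_model))

    def __call__(self, hidden: np.ndarray) -> np.ndarray:
        # Down-project, ReLU, up-project, then add the residual so the frozen
        # encoder's behaviour is preserved when the adapter outputs are small.
        return hidden + np.maximum(hidden @ self.w_down, 0.0) @ self.w_up


if __name__ == "__main__":
    encoder_frames = np.random.randn(100, 512)  # (time, d_model) from frozen encoder
    print(Adapter()(encoder_frames).shape)      # (100, 512)
```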

QUANTIZATION AND SPARSITY AWARE FINE-TUNING FOR SPEECH RECOGNITION WITH UNIVERSAL SPEECH MODELS

Published: March 6, 2025
Application Number: 20250078815
A method includes obtaining a plurality of training samples that each include a respective speech utterance and a respective textual utterance representing a transcription of the respective speech utterance. The method also includes fine-tuning, using quantization and sparsity aware training with native integer operations, a pre-trained automatic speech recognition (ASR) model on the plurality of training samples. Here, the pre-trained ASR model includes a plurality of weights and the…
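
A small numpy sketch of what quantization- and sparsity-aware training does in the forward pass of one linear layer: weights are fake-quantized to int8 levels and masked by magnitude pruning before the matmul. The scale, threshold, and 50% sparsity are assumptions for illustration.

```python
import numpy as np


def fake_quantize_int8(w: np.ndarray) -> np.ndarray:
    """Quantize-dequantize weights to int8 levels so training sees the same
    precision the deployed model will use."""
    scale = max(float(np.max(np.abs(w))), 1e-8) / 127.0
    return np.clip(np.round(w / scale), -127, 127) * scale


def magnitude_mask(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return np.ones_like(w)
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return (np.abs(w) > threshold).astype(w.dtype)


def aware_forward(w: np.ndarray, x: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Forward pass of one linear layer with quantization and sparsity applied."""
    w_effective = fake_quantize_int8(w) * magnitude_mask(w, sparsity)
    return x @ w_effective


if __name__ == "__main__":
    w = np.random.randn(512, 256)
    x = np.random.randn(8, 512)
    print(aware_forward(w, x).shape)  # (8, 256)
```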

Zero-Shot Task Expansion of ASR Models Using Task Vectors

Published: March 6, 2025
Application Number: 20250078813
A method includes training, using an un-supervised learning technique, an auxiliary ASR model based on a first set of un-transcribed source task speech utterances to determine a first task vector, training, using the un-supervised learning technique, the auxiliary ASR model based on a second set of un-transcribed speech utterances to determine a second task vector, and training, using the un-supervised learning technique, the auxiliary ASR model based on un-transcribed target task speech…
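
A sketch of the general task-arithmetic recipe the title suggests: a task vector is the fine-tuned weights minus the base weights, and a weighted sum of such vectors is added back to the base model. This is the standard recipe, not necessarily the patent's exact training procedure.

```python
from typing import Dict, List

import numpy as np

Weights = Dict[str, np.ndarray]


def task_vector(base: Weights, finetuned: Weights) -> Weights:
    """A task vector is the per-parameter difference between a fine-tuned
    model and the base model it started from."""
    return {name: finetuned[name] - base[name] for name in base}


def apply_task_vectors(base: Weights, vectors: List[Weights],
                       alphas: List[float]) -> Weights:
    """Add a weighted combination of task vectors back onto the base model."""
    combined = {name: tensor.copy() for name, tensor in base.items()}
    for vector, alpha in zip(vectors, alphas):
        for name in combined:
            combined[name] += alpha * vector[name]
    return combined


if __name__ == "__main__":
    base = {"layer.w": np.zeros((2, 2))}
    source_a = {"layer.w": np.ones((2, 2))}
    source_b = {"layer.w": 2 * np.ones((2, 2))}
    expanded = apply_task_vectors(
        base,
        vectors=[task_vector(base, source_a), task_vector(base, source_b)],
        alphas=[0.5, 0.25])
    print(expanded["layer.w"])  # every entry equals 1.0
```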

Two-Level Text-To-Speech Systems Using Synthetic Training Data

Published: March 6, 2025
Application Number: 20250078808
A method includes obtaining training data including a plurality of training audio signals and corresponding transcripts. Each training audio signal is spoken by a target speaker in a first accent/dialect. For each training audio signal of the training data, the method includes generating a training synthesized speech representation spoken by the target speaker in a second accent/dialect different than the first accent/dialect and training a text-to-speech (TTS) system based on the…

Injecting Text in Self-Supervised Speech Pre-training

Published: March 6, 2025
Application Number: 20250078807
A method includes receiving training data that includes unspoken text utterances and un-transcribed non-synthetic speech utterances. Each unspoken text utterance is not paired with any corresponding spoken utterance of non-synthetic speech. Each un-transcribed non-synthetic speech utterance is not paired with a corresponding transcription. The method also includes generating a corresponding synthetic speech representation for each unspoken textual utterance of the received training data…

Scaling Multilingual Speech Synthesis with Zero Supervision of Found Data

Published: March 6, 2025
Application Number: 20250078805
A method includes receiving training data that includes a plurality of sets of training utterances each associated with a respective language. Each training utterance includes a corresponding reference speech representation paired with a corresponding input text sequence. For each training utterance, the method includes generating a corresponding encoded textual representation for the corresponding input text sequence, generating a corresponding speech encoding for the corresponding…