Dolby Laboratories Patent Applications

ELECTRO-OPTICAL TRANSFER FUNCTION CONVERSION AND SIGNAL LEGALIZATION

Granted: September 15, 2022
Application Number: 20220295020
A device includes an electronic processor configured to define a first set of sample pixels from a set of sample pixels determined from received video data according to a first electro-optical transfer function (EOTF) in a first color representation of a first color space; convert the first set of sample pixels to a second EOTF via a mapping function, producing a second set of sample pixels according to the second EOTF; convert the first and second set of sample pixels from the first…
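
The heart of the abstract is the mapping function that carries sample pixels from one EOTF to another by passing through linear light. The sketch below illustrates that idea for one hypothetical pairing (a 2.4 power-law EOTF converted to SMPTE ST 2084/PQ); the PQ constants are standard, but the choice of source EOTF, the peak-luminance assumption, and the function names are illustrative rather than the patented method.

```python
import numpy as np

# PQ (SMPTE ST 2084) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def gamma_to_linear(v, gamma=2.4, peak_nits=100.0):
    """Decode a power-law-coded sample to absolute linear light (nits)."""
    return peak_nits * np.power(np.clip(v, 0.0, 1.0), gamma)

def linear_to_pq(nits):
    """Encode absolute linear light (nits) with the PQ inverse EOTF."""
    y = np.clip(nits / 10000.0, 0.0, 1.0)        # normalise to PQ's 10,000-nit range
    num = C1 + C2 * np.power(y, M1)
    den = 1.0 + C3 * np.power(y, M1)
    return np.power(num / den, M2)

def map_samples(first_set, gamma=2.4, peak_nits=100.0):
    """Hypothetical mapping function: first EOTF -> linear light -> second EOTF."""
    return linear_to_pq(gamma_to_linear(first_set, gamma, peak_nits))

# Example: a handful of sample pixels coded under the first (gamma) EOTF.
second_set = map_samples(np.array([0.0, 0.25, 0.5, 1.0]))
```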

LOW-LATENCY, LOW-FREQUENCY EFFECTS CODEC

Granted: September 15, 2022
Application Number: 20220293112
In some implementations, a method of encoding a low-frequency effect (LFE) channel comprises: receiving a time-domain LFE channel signal; filtering, using a low-pass filter, the time-domain LFE channel signal; converting the filtered time-domain LFE channel signal into a frequency-domain representation of the LFE channel signal that includes a number of coefficients representing a frequency spectrum of the LFE channel signal; arranging the coefficients into a number of subband groups…
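
As a rough illustration of the pipeline in that abstract, the sketch below low-pass filters a time-domain LFE signal, takes a real FFT as one possible frequency-domain representation (the excerpt does not specify the transform), and arranges the resulting coefficients into equal-sized subband groups. The filter length, cutoff, and group size are arbitrary choices for the example.

```python
import numpy as np

def encode_lfe_sketch(lfe, fs=48000, cutoff_hz=120.0, group_size=4):
    # Simple windowed-sinc FIR low-pass filter (length and cutoff are illustrative).
    taps = 129
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * n) * np.hamming(taps)
    h /= h.sum()
    filtered = np.convolve(lfe, h, mode="same")

    # One possible frequency-domain representation: coefficients of a real FFT.
    coeffs = np.fft.rfft(filtered)

    # Arrange the coefficients into a number of equal-sized subband groups.
    usable = (len(coeffs) // group_size) * group_size
    subband_groups = coeffs[:usable].reshape(-1, group_size)
    return subband_groups

groups = encode_lfe_sketch(np.random.default_rng(0).standard_normal(4800))
```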

3D EYEWEAR ADAPTED FOR FACIAL GEOMETRY

Granted: September 15, 2022
Application Number: 20220291520
Three-dimensional (3D) glasses suited for wearers with varying facial geometries are disclosed. A particular embodiment includes a frame adapted to position spectrally filtering lenses at a particular distance from the eyes of the wearer. In a more particular embodiment, the 3D glasses include a means for adjusting the distance between the lenses and the eyes of the wearer. In another particular embodiment, the lenses include positive runout.

RENDERING AUDIO OBJECTS WITH MULTIPLE TYPES OF RENDERERS

Granted: September 8, 2022
Application Number: 20220286800
An apparatus and method of rendering audio objects with multiple types of renderers. The weighting between the selected renderers depends upon the position information in each audio object. As each type of renderer has a different output coverage, the combination of their weighted outputs results in the audio being perceived at the position according to the position information.
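
One way to picture the weighting is a power-preserving crossfade between two renderer types, driven by a coordinate in the object's position metadata. The choice of elevation as the driver and the cosine law used below are assumptions for this sketch, not the patented rule.

```python
import numpy as np

def renderer_weights(position):
    """Weight two hypothetical renderer types from an object's position metadata.

    position: (x, y, z) with z in [0, 1]; z = 1 means fully overhead.
    Returns (w_bed, w_overhead) with w_bed**2 + w_overhead**2 == 1 (power preserving).
    """
    _, _, z = position
    theta = 0.5 * np.pi * np.clip(z, 0.0, 1.0)
    return np.cos(theta), np.sin(theta)

def render(object_audio, position, bed_renderer, overhead_renderer):
    """Combine the weighted outputs of both renderer types for one audio object."""
    w_bed, w_top = renderer_weights(position)
    return w_bed * bed_renderer(object_audio) + w_top * overhead_renderer(object_audio)

# Example with trivial stand-in renderers (each would normally produce its own speaker feeds).
out = render(np.ones(8), (0.5, 0.5, 0.3), lambda x: x, lambda x: x)
```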

FRAME CONVERSION FOR ADAPTIVE STREAMING ALIGNMENT

Granted: September 8, 2022
Application Number: 20220286730
Methods for generating an AV bitstream (e.g., an MPEG-2 transport stream or bitstream segment having adaptive streaming format) such that the AV bitstream includes at least one video I-frame synchronized with at least one audio I-frame, e.g., including by re-authoring at least one video or audio frame (as a re-authored I-frame or a re-authored P-frame). Typically, a segment of content of the AV bitstream which includes the re-authored frame starts with an I-frame and includes at least…
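
A minimal sketch of the alignment idea: walk the video frames, and wherever an adaptive-streaming segment should start, re-author the video frame at that instant as an I-frame if it is not one already so the segment begins on an I-frame. The frame records, segment duration, and the re-authored flag are hypothetical; actual re-authoring would involve re-encoding the frame itself.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pts: float        # presentation time in seconds
    ftype: str        # "I", "P", or "B"
    reauthored: bool = False

def align_segments(video: list[Frame], segment_duration: float) -> list[Frame]:
    """Mark the video frame at each segment boundary for re-authoring as an I-frame."""
    next_boundary = 0.0
    for frame in video:
        if frame.pts >= next_boundary:
            if frame.ftype != "I":
                frame.ftype, frame.reauthored = "I", True
            next_boundary += segment_duration
    return video

frames = [Frame(i / 30.0, "I" if i == 0 else "P") for i in range(90)]
aligned = align_segments(frames, segment_duration=1.0)   # frames at 0 s, 1 s, 2 s are I-frames
```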

VIDEO CODING USING REFERENCE PICTURE RESAMPLING SUPPORTING REGION OF INTEREST

Granted: September 8, 2022
Application Number: 20220286667
Methods, systems, and bitstream syntax are described for canvas-size, single-layer or multi-layer, scalable decoding with support for regions of interest (ROI), using a decoder supporting reference picture resampling. Offset parameters for a region of interest in a current picture and offset parameters for an ROI in a reference picture are taken into consideration when computing the scaling factors used to apply reference picture resampling. Syntax elements for supporting ROI regions under…
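
The scaling-factor computation can be illustrated with the kind of arithmetic reference picture resampling uses: the effective width and height of each picture are reduced by that picture's ROI offsets, and the ratio between reference and current gives the resampling factor. The offset names and the fixed-point shift below are illustrative, not the exact bitstream syntax.

```python
def roi_scaling_factors(ref_size, ref_offsets, cur_size, cur_offsets, shift=14):
    """Compute horizontal/vertical scaling factors between ROI windows.

    ref_size, cur_size: (width, height) of the reference and current pictures.
    *_offsets: (left, right, top, bottom) ROI offsets for each picture.
    Returns fixed-point factors (1 << shift corresponds to a 1:1 ratio).
    """
    ref_w = ref_size[0] - ref_offsets[0] - ref_offsets[1]
    ref_h = ref_size[1] - ref_offsets[2] - ref_offsets[3]
    cur_w = cur_size[0] - cur_offsets[0] - cur_offsets[1]
    cur_h = cur_size[1] - cur_offsets[2] - cur_offsets[3]
    hor = ((ref_w << shift) + cur_w // 2) // cur_w    # rounded fixed-point division
    ver = ((ref_h << shift) + cur_h // 2) // cur_h
    return hor, ver

# Example: the reference ROI is twice the size of the current ROI -> factor of 2.0.
h, v = roi_scaling_factors((3840, 2160), (0, 0, 0, 0), (1920, 1080), (0, 0, 0, 0))
# h == 2 << 14 and v == 2 << 14
```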

ENCODING AND DECODING IVAS BITSTREAMS

Granted: September 8, 2022
Application Number: 20220284910
Encoding/decoding an immersive voice and audio services (IVAS) bitstream comprises: encoding/decoding a coding mode indicator in a common header (CH) section of an IVAS bitstream, encoding/decoding a mode header or tool header in the tool header (TH) section of the bitstream, the TH section following the CH section, encoding/decoding a metadata payload in a metadata payload (MDP) section of the bitstream, the MDP section following the CH section, encoding/decoding an enhanced voice…
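
The section ordering described above (common header, then tool header, then metadata payload, then the coded audio) can be sketched as a small parser. All field widths and names here are hypothetical, chosen only to show the CH -> TH -> MDP layering; the real IVAS syntax is defined by the codec specification.

```python
class BitReader:
    """Minimal MSB-first bit reader over a bytes object."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0
    def read(self, nbits: int) -> int:
        value = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            value = (value << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return value

def parse_ivas_sketch(payload: bytes) -> dict:
    r = BitReader(payload)
    frame = {}
    # Common header (CH): a coding-mode indicator (field width is an assumption).
    frame["coding_mode"] = r.read(3)
    # Tool header (TH) section, which follows the CH section.
    frame["tool_header"] = r.read(8)
    # Metadata payload (MDP) section: length-prefixed bytes (layout is an assumption).
    md_len = r.read(8)
    frame["metadata"] = [r.read(8) for _ in range(md_len)]
    # Whatever remains would be the coded audio (e.g., an enhanced voice payload).
    frame["audio_offset_bits"] = r.pos
    return frame

parsed = parse_ivas_sketch(bytes(8))   # all-zero example frame, just to exercise the parser
```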

SIGNAL RESHAPING AND CODING FOR HDR AND WIDE COLOR GAMUT SIGNALS

Granted: May 12, 2022
Application Number: 20220150548
In a method to improve the coding efficiency of high-dynamic range (HDR) images, a decoder parses sequence parameter set (SPS) data from an input coded bitstream to detect that an HDR extension syntax structure is present in the parsed SPS data. It extracts from the HDR extension syntax structure post-processing information that includes one or more of a color space enabled flag, a color enhancement enabled flag, an adaptive reshaping enabled flag, a dynamic range conversion flag, a…
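
The parsing step amounts to reading a set of one-bit enable flags from the extension once its presence is detected. The reader below mirrors the flag names listed in the abstract, but the field order and widths are assumptions for illustration only.

```python
def parse_hdr_extension_flags(bits: str) -> dict:
    """Read the post-processing enable flags named in the abstract.

    `bits` is a string of '0'/'1' characters standing in for the parsed
    SPS HDR-extension payload; the ordering here is illustrative.
    """
    names = [
        "color_space_enabled_flag",
        "color_enhancement_enabled_flag",
        "adaptive_reshaping_enabled_flag",
        "dynamic_range_conversion_flag",
    ]
    return {name: bits[i] == "1" for i, name in enumerate(names)}

flags = parse_hdr_extension_flags("1011")
# -> color space, adaptive reshaping, and dynamic range conversion enabled; color enhancement off
```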

METHODS AND DEVICES FOR CONTROLLING AUDIO PARAMETERS

Granted: April 28, 2022
Application Number: 20220129235
A method of controlling headphones having external microphone signal pass-through functionality may involve controlling a display to present a geometric shape on the display and receiving an indication of digit motion from a sensor system associated with the display. The sensor system may include a touch sensor system or a gesture sensor system. The indication may be an indication of a direction of digit motion relative to the display. The method may involve controlling the display to…
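
A toy illustration of the control idea: a direction of digit motion reported by the touch or gesture sensor system adjusts the external-microphone pass-through level, and the display would then be updated to reflect the new state. The direction-to-action mapping and the step size are assumptions for the example, not the patented gesture vocabulary.

```python
def handle_digit_motion(direction: str, passthrough_level: float, step: float = 0.1) -> float:
    """Map a digit-motion direction relative to the display to a pass-through change.

    'up'/'down' are assumed to raise/lower the pass-through level for illustration.
    """
    if direction == "up":
        passthrough_level += step
    elif direction == "down":
        passthrough_level -= step
    return min(1.0, max(0.0, passthrough_level))

level = 0.5
for gesture in ["up", "up", "down"]:     # indications received from the sensor system
    level = handle_digit_motion(gesture, level)
# level ends up back at roughly 0.6
```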

SHARING PHYSICAL WRITING SURFACES IN VIDEOCONFERENCING

Granted: April 21, 2022
Application Number: 20220124128
An apparatus and method relating to use of a physical writing surface (132) during a videoconference or presentation. Snapshots of a whiteboard (132) are identified by applying a difference measure to the video data (e.g., as a way of comparing frames at different times). Audio captured by a microphone may be processed to generate textual data, wherein a portion of the textual data is associated with each snapshot. The writing surface may be identified (enrolled) using gestures. Image…
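
The snapshot selection can be illustrated with a simple difference measure: compare frames at different times and record a snapshot when the change since the previous snapshot exceeds a threshold. Mean absolute pixel difference and the threshold value are stand-ins; the abstract only requires some difference measure.

```python
import numpy as np

def select_snapshots(frames, threshold=0.05):
    """Return indices of frames to keep as whiteboard snapshots.

    frames: list of 2-D arrays (grayscale views of the writing surface) with
    values in [0, 1]. A frame becomes a snapshot when its mean absolute
    difference from the previous snapshot exceeds `threshold`.
    """
    snapshots = [0]                          # always keep the first view of the board
    reference = frames[0].astype(float)
    for i, frame in enumerate(frames[1:], start=1):
        diff = np.mean(np.abs(frame.astype(float) - reference))
        if diff > threshold:
            snapshots.append(i)
            reference = frame.astype(float)
    return snapshots

board = [np.zeros((120, 160)) for _ in range(10)]
for view in board[6:]:
    view[40:80, 50:90] = 1.0                 # writing appears midway through the clip
print(select_snapshots(board))               # -> [0, 6]
```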

METHOD AND APPARATUS FOR SCREEN RELATED ADAPTATION OF A HIGHER-ORDER AMBISONICS AUDIO SIGNAL

Granted: April 14, 2022
Application Number: 20220116727
A method for generating loudspeaker signals associated with a target screen size is disclosed. The method includes receiving a bit stream containing encoded higher order ambisonics signals, the encoded higher order ambisonics signals describing a sound field associated with a production screen size. The method further includes decoding the encoded higher order ambisonics signals to obtain a first set of decoded higher order ambisonics signals representing dominant components of the sound…
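
The screen-related adaptation amounts to warping the directions of the dominant sound-field components so that content authored for the production screen size lands on the reproduction screen. A crude sketch follows, assuming azimuth and elevation angles are simply rescaled by the ratio of the two screens' half-angles; the actual method applies a space-warping function to the decoded higher order ambisonics components.

```python
def warp_direction(azimuth_deg, elevation_deg,
                   production_half_width_deg, target_half_width_deg,
                   production_half_height_deg, target_half_height_deg):
    """Rescale a dominant component's direction from production to target screen.

    Linear rescaling of the angles is a simplification used only for illustration.
    """
    az = azimuth_deg * (target_half_width_deg / production_half_width_deg)
    el = elevation_deg * (target_half_height_deg / production_half_height_deg)
    return az, el

# A component at the right edge of a 29-degree-half-width production screen is
# mapped to the right edge of a 58-degree-half-width target screen.
print(warp_direction(29.0, 8.0, 29.0, 58.0, 17.0, 34.0))   # -> (58.0, 16.0)
```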

VOLUME LEVELER CONTROLLER AND CONTROLLING METHOD

Granted: April 14, 2022
Application Number: 20220116006
A volume leveler controller and controlling method are disclosed. In one embodiment, a volume leveler controller includes an audio content classifier for identifying the content type of an audio signal in real time; and an adjusting unit for adjusting a volume leveler in a continuous manner based on the content type as identified. The adjusting unit may be configured to positively correlate the dynamic gain of the volume leveler with informative content types of the audio signal, and…
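
The adjustment rule can be pictured as a gain that rises with the classifier's confidence that the content is informative and falls with the confidence in interfering content types, then is smoothed so the leveler changes in a continuous manner. The particular linear combination and smoothing constant below are assumptions for the sketch.

```python
def leveler_dynamic_gain(informative_conf, interfering_conf,
                         prev_gain=1.0, base_gain=1.0, strength=0.5, smoothing=0.9):
    """Adjust the volume leveler's dynamic gain from audio-content classification.

    informative_conf / interfering_conf: classifier confidences in [0, 1].
    The gain is positively correlated with informative content, negatively
    correlated with interfering content, and smoothed over time.
    """
    target = base_gain + strength * (informative_conf - interfering_conf)
    return smoothing * prev_gain + (1.0 - smoothing) * target

gain = 1.0
for speech_conf, noise_conf in [(0.9, 0.1), (0.8, 0.1), (0.1, 0.9)]:
    gain = leveler_dynamic_gain(speech_conf, noise_conf, prev_gain=gain)
```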

METHOD AND APPARATUS FOR DECODING A BITSTREAM INCLUDING ENCODED HIGHER ORDER AMBISONICS REPRESENTATIONS

Granted: April 14, 2022
Application Number: 20220115027
Higher Order Ambisonics represents three-dimensional sound independent of a specific loudspeaker set-up. However, transmission of an HOA representation results in a very high bit rate. Therefore compression with a fixed number of channels is used, in which directional and ambient signal components are processed differently. For coding, portions of the original HOA representation are predicted from the directional signal components. This prediction provides side information which is…

ACOUSTIC ENVIRONMENT SIMULATION

Granted: April 14, 2022
Application Number: 20220115025
Encoding/decoding an audio signal having one or more audio components, wherein each audio component is associated with a spatial location. A first audio signal presentation (z) of the audio components, a first set of transform parameters (w(f)), and signal level data (β2) are encoded and transmitted to the decoder. The decoder uses the first set of transform parameters (w(f)) to form a reconstructed simulation input signal intended for an acoustic environment simulation, and applies a…
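
In the decoder, the transform parameters are applied per frequency band to the first presentation to obtain the input signal for the acoustic environment simulation (for example, a reverberation stage). The sketch below assumes a banded stereo presentation z and a per-band 2-to-1 transform w(f); the dimensions and the downstream simulator are placeholders.

```python
import numpy as np

def simulation_input(z_bands, w_bands):
    """Form the reconstructed acoustic-environment-simulation input signal.

    z_bands: array of shape (bands, 2, samples) - first stereo presentation per band.
    w_bands: array of shape (bands, 2)          - transform parameters w(f) per band.
    Returns an array of shape (bands, samples): one simulation-input signal per band.
    """
    return np.einsum("bc,bcs->bs", w_bands, z_bands)

bands, samples = 4, 64
z = np.random.default_rng(0).standard_normal((bands, 2, samples))
w = np.full((bands, 2), 0.5)            # illustrative w(f): a mid signal per band
sim_in = simulation_input(z, w)         # would then feed a reverberator / FDN
```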

DISPLAY MANAGEMENT WITH AMBIENT LIGHT COMPENSATION

Granted: April 14, 2022
Application Number: 20220114928
A display apparatus, a display management module and a method for ambient light compensation are described. The display management module is configured to receive an input video signal comprising a sequence of video frames and to determine whether a current video frame of the sequence of video frames immediately follows a scene change. The display management module is further configured to adjust ambient light compensation applied to the input signal in dependence on the signal…
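
A minimal sketch of the scene-change-aware behavior: the compensation normally slews toward its target so adjustments are not visible, but in a frame that immediately follows a scene change it is allowed to jump, since the cut masks the discontinuity. The slew rate and the compensation units are assumptions.

```python
def update_compensation(current, target, follows_scene_change, max_step=0.02):
    """Advance the ambient-light compensation applied to the current frame.

    current / target: compensation strengths in arbitrary units.
    follows_scene_change: True when the current frame immediately follows a scene change.
    """
    if follows_scene_change:
        return target                        # a cut hides an abrupt change
    step = max(-max_step, min(max_step, target - current))
    return current + step                    # otherwise move gradually

comp = 0.10
for target, cut in [(0.30, False), (0.30, False), (0.30, True)]:
    comp = update_compensation(comp, target, cut)
# comp: 0.12 -> 0.14 -> 0.30
```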

CODING AND DECODING OF INTERLEAVED IMAGE DATA

Granted: April 7, 2022
Application Number: 20220109874
Sampled data is packaged in checkerboard format for encoding and decoding. The sampled data may be quincunx sampled multi-image video data (e.g., 3D video or a multi-program stream), and the data may also be divided into sub-images of each image which are then multiplexed, or interleaved, in frames of a video stream to be encoded and then decoded using a standardized video encoder. A system for viewing may utilize a standard video decoder and a formatting device that de-interleaves the…
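
The checkerboard packaging can be shown directly with two quincunx-sampled images multiplexed into one frame: even positions take pixels from the left image and odd positions from the right, and the decoder-side formatting device de-interleaves them with the same pattern. This is only the packing step, not the standardized encode/decode in between.

```python
import numpy as np

def interleave_checkerboard(left, right):
    """Multiplex two equal-sized images into one checkerboard-packed frame."""
    assert left.shape == right.shape
    frame = np.empty_like(left)
    rows, cols = np.indices(left.shape[:2])
    mask = (rows + cols) % 2 == 0            # checkerboard (quincunx) pattern
    frame[mask] = left[mask]
    frame[~mask] = right[~mask]
    return frame

def deinterleave_checkerboard(frame):
    """Recover the two quincunx-sampled sub-images (half the samples of each view)."""
    rows, cols = np.indices(frame.shape[:2])
    mask = (rows + cols) % 2 == 0
    return frame * mask, frame * ~mask

left_view, right_view = np.ones((4, 4)), np.zeros((4, 4))
packed = interleave_checkerboard(left_view, right_view)
```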

METHOD AND APPARATUS FOR COMPRESSING AND DECOMPRESSING A HIGHER ORDER AMBISONICS SIGNAL REPRESENTATION

Granted: March 31, 2022
Application Number: 20220103960
A method and apparatus for decompressing a Higher Order Ambisonics (HOA) signal representation is disclosed. The apparatus includes an input interface that receives an encoded directional signal and an encoded ambient signal and an audio decoder that perceptually decodes the encoded directional signal and encoded ambient signal to produce a decoded directional signal and a decoded ambient signal, respectively. The apparatus further includes an extractor for obtaining side information…
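
Once the directional and ambient signals are decoded, the HOA representation is recomposed by re-encoding each directional signal at the direction carried in the side information and adding the ambient part. The sketch below does this for first-order ambisonics in the ACN/SN3D convention; the actual apparatus operates at higher orders and with richer side information.

```python
import numpy as np

def foa_encoding_vector(azimuth_rad, elevation_rad):
    """First-order ambisonic (ACN/SN3D) encoding coefficients for one direction."""
    ca, sa = np.cos(azimuth_rad), np.sin(azimuth_rad)
    ce, se = np.cos(elevation_rad), np.sin(elevation_rad)
    return np.array([1.0, sa * ce, se, ca * ce])      # W, Y, Z, X

def recompose_hoa(directional_signals, directions, ambient_hoa):
    """Rebuild an HOA signal from decoded directional signals, side info, and ambience.

    directional_signals: (num_dir, samples); directions: list of (az, el) in radians;
    ambient_hoa: (4, samples) decoded ambient HOA component.
    """
    hoa = ambient_hoa.copy()
    for sig, (az, el) in zip(directional_signals, directions):
        hoa += np.outer(foa_encoding_vector(az, el), sig)
    return hoa

dir_sigs = np.random.default_rng(0).standard_normal((2, 128))
hoa = recompose_hoa(dir_sigs, [(0.0, 0.0), (np.pi / 2, 0.0)], np.zeros((4, 128)))
```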

REVERBERATION GENERATION FOR HEADPHONE VIRTUALIZATION

Granted: March 31, 2022
Application Number: 20220103959
The present disclosure relates to reverberation generation for headphone virtualization. A method of generating one or more components of a binaural room impulse response (BRIR) for headphone virtualization is described. In the method, directionally-controlled reflections are generated, wherein directionally-controlled reflections impart a desired perceptual cue to an audio input signal corresponding to a sound source location. Then at least the generated reflections are combined to…
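
One way to sketch the combination step: synthesize a handful of directionally-controlled early reflections as delayed, attenuated copies whose left/right gains depend on their direction of arrival, and sum them with a direct path into a two-channel BRIR. The delays, gains, and the simple sine-law panning below stand in for actual HRTF-based directional control.

```python
import numpy as np

def brir_sketch(fs=48000, length=4800,
                reflections=((0.004, 0.6, -45.0), (0.007, 0.4, 60.0))):
    """Build a toy 2-channel BRIR from a direct path plus directionally-controlled reflections.

    reflections: (delay_seconds, gain, azimuth_degrees) per reflection. Broadband
    sine-law panning replaces real HRTF filtering, purely for illustration.
    """
    brir = np.zeros((2, length))
    brir[:, 0] = 1.0                                   # direct path at time zero
    for delay, gain, azimuth in reflections:
        n = int(round(delay * fs))
        pan = np.deg2rad(azimuth)                      # -90 = full left, +90 = full right
        brir[0, n] += gain * np.cos((pan + np.pi / 2) / 2)
        brir[1, n] += gain * np.sin((pan + np.pi / 2) / 2)
    return brir

brir = brir_sketch()    # convolve each ear's channel with the source signal to virtualize it
```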

ATTENUATING WAVEFRONT DETERMINATION FOR NOISE REDUCTION

Granted: March 31, 2022
Application Number: 20220101807
A system and method comprise a light source; a spatial light modulator including a substantially transparent material layer and a phase modulation layer; an imaging device configured to receive light from the light source as reflected by the spatial light modulator, and to generate image data; and a controller. The controller provides a phase-drive signal to the spatial light modulator and determines an attenuating wavefront of the substantially transparent material layer based on…

3D PROJECTION SYSTEM USING LASER LIGHT SOURCES

Granted: March 24, 2022
Application Number: 20220091432
Laser or narrow-band light sources (e.g., red, green, and blue) are utilized to form left (e.g., R1, G1, B1) and right (e.g., R2, G2, B2) images of a 3D projection. Off-axis viewing of the projections, which has the potential to cause crosstalk and/or loss of energy/brightness in any channel or color, is eliminated (or reduced to only highly oblique viewing angles) via the combined use of any of guard bands between light bands of adjacent channels, curvature of viewing filters, and…