Disney Patent Applications

KERNEL-PREDICTING CONVOLUTIONAL NEURAL NETWORKS FOR DENOISING

Published: October 11, 2018
Application Number: 20180293711
Supervised machine learning using a convolutional neural network (CNN) is applied to denoising images rendered by MC path tracing. The input image data may include pixel color and its variance, as well as a set of auxiliary buffers that encode scene information (e.g., surface normal, albedo, depth, and their corresponding variances). In some embodiments, a CNN directly predicts the final denoised pixel value as a highly non-linear combination of the input features. In some other…
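
The kernel-predicting variant can be sketched as a reconstruction step that applies a per-pixel filter kernel to the noisy input. In the application a CNN predicts those kernels; in this minimal pure-Python sketch they are supplied directly, and the function name and data layout are illustrative, not from the application.

```python
def apply_predicted_kernels(image, kernels, k=1):
    """Reconstruct each pixel as a normalized weighted sum of its (2k+1)^2
    neighborhood, using a per-pixel kernel such as a kernel-predicting CNN
    would output. Borders are handled by clamping coordinates."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            weights = kernels[y][x]          # (2k+1) x (2k+1) kernel for this pixel
            total, norm = 0.0, 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    wgt = weights[dy + k][dx + k]
                    total += wgt * image[ny][nx]
                    norm += wgt
            out[y][x] = total / norm         # normalization keeps energy stable
    return out
```

Because the kernel weights are normalized per pixel, a constant input region is reproduced exactly regardless of the predicted weights, which is one reason kernel prediction tends to be more stable than direct color prediction.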

GRAPH BASED CONTENT BROWSING AND DISCOVERY

Published: October 4, 2018
Application Number: 20180285478
Systems and methods for using graph databases to make digital content recommendations are described. A graph database may be associated with tagged digital content. The graph database may include a node for each content tag and edges identifying a relationship between nodes. When a user accesses or searches a digital content item, the graph database may be traversed to identify and present related content recommendations to the user based on the traversed nodes. Node graph traversal may…
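
The traversal step described above can be sketched as a breadth-first walk over a tag graph, assuming the graph is a plain adjacency dict; the function name `recommend` and the `max_depth` cutoff are illustrative, not from the application.

```python
from collections import deque

def recommend(graph, start_tags, max_depth=2):
    """Traverse a tag graph breadth-first from the tags of the accessed
    content item, collecting related tags within max_depth hops; these
    tags would then map back to recommendable content items."""
    seen = set(start_tags)
    frontier = deque((t, 0) for t in start_tags)
    related = []
    while frontier:
        tag, depth = frontier.popleft()
        if depth == max_depth:
            continue                      # stop expanding past the hop limit
        for nbr in graph.get(tag, ()):
            if nbr not in seen:
                seen.add(nbr)
                related.append(nbr)       # nearer tags are appended first
                frontier.append((nbr, depth + 1))
    return related
```

Breadth-first order means results are naturally ranked by graph distance from the item the user accessed.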

DESIGNING EFFECTIVE INTER-PIXEL INFORMATION FLOW FOR NATURAL IMAGE MATTING

Published: August 9, 2018
Application Number: 20180225827
Embodiments can provide a strategy for controlling information flow both from known opacity regions to unknown regions and within the unknown region itself. This strategy is formulated through the use and refinement of various affinity definitions. As a result of this strategy, a final linear system can be obtained, which can be solved in closed form. One embodiment pertains to identifying opacity information flows. The opacity information flow may include one or more of flows…
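
To illustrate the flavor of the linear system: the sketch below solves a tiny 1-D analogue iteratively (Jacobi relaxation) rather than in closed form as the application does, with known pixels acting as boundary conditions and a caller-supplied affinity function weighting the flow between neighbors. All names and the 1-D setting are illustrative assumptions.

```python
def propagate_alpha(alpha, known, affinity, iters=200):
    """Solve a 1-D matting system by relaxation: each unknown pixel takes
    the affinity-weighted average of its neighbors, while known pixels
    keep their opacity. The fixed point is the linear-system solution."""
    a = list(alpha)
    n = len(a)
    for _ in range(iters):
        nxt = list(a)
        for i in range(n):
            if known[i]:
                continue                      # boundary condition: keep alpha
            num = den = 0.0
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    w = affinity(i, j)        # inter-pixel information flow weight
                    num += w * a[j]
                    den += w
            nxt[i] = num / den
        a = nxt
    return a
```

With uniform affinities this converges to a linear opacity ramp between the known foreground and background pixels, the simplest possible matte.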

LUMINANCE COMFORT PREDICTION AND ADJUSTMENT

Published: August 2, 2018
Application Number: 20180218709
One or more levels of maladaptation are calculated relative to frames of media content having abrupt jumps from periods of low illumination to bright illumination when visual acuity may be lost and/or discomfort may be experienced. These levels of maladaptation may be correlated with subjectively determined levels of perceived luminance discomfort. Based upon the levels of perceived luminance discomfort that can be derived from the levels of maladaptation, the media content may be…

SYSTEMS AND METHODS FOR DIFFERENTIAL MEDIA DISTRIBUTION

Published: July 12, 2018
Application Number: 20180199081
Systems for electronic media distribution include a differential versioning server configured to receive a first media file including a first set of data with a first set of attributes and a second media file including a second set of data with a second set of attributes, generate a first differential data file as a function of differences between the first media file and the second media file, and generate a first differential metadata file including an encoding data set configured to…
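
A heavily simplified sketch of the differential-data idea, assuming two equal-length byte payloads: only the positions whose bytes differ are stored, and applying the deltas to the first file reproduces the second. Real differential encoding would be run-based and carry metadata; the names here are illustrative.

```python
def make_diff(base, target):
    """Differential data file for two equal-length media payloads:
    a list of (index, new_byte) deltas where target departs from base."""
    return [(i, t) for i, (b, t) in enumerate(zip(base, target)) if b != t]

def apply_diff(base, deltas):
    """Reconstruct the second media file from the first plus the deltas."""
    out = bytearray(base)
    for i, v in deltas:
        out[i] = v
    return bytes(out)
```

Distributing only the deltas (plus a small metadata file describing how to apply them) is what lets one master file serve many versions.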

SIMULATION EXPERIENCE WITH PHYSICAL OBJECTS

Published: July 12, 2018
Application Number: 20180196523
A simulation adapter can be attached to a real-world physical object and adapted to communicate one or more characteristics and/or simulation events associated with the real-world physical object to a simulation device. The simulation device is adapted to generate a simulation experience based upon the one or more characteristics of the real-world physical object and/or the simulation events associated with the real-world physical object.

SALIENCY-WEIGHTED VIDEO QUALITY ASSESSMENT

Published: June 7, 2018
Application Number: 20180158184
Systems and methods are disclosed for weighting the image quality prediction of any visual-attention-agnostic quality metric with a saliency map. By accounting for the salient regions of an image or video frame, the disclosed systems and methods may dramatically improve the precision of the visual-attention-agnostic quality metric during image or video quality assessment. In one implementation, a method of saliency-weighted video quality assessment includes: determining a per-pixel image…
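
The weighting step can be sketched as saliency-weighted pooling of a per-pixel error map from any attention-agnostic metric; the function name and the simple weighted-mean pooling are illustrative assumptions.

```python
def saliency_weighted_score(error_map, saliency_map):
    """Pool a per-pixel error map into one quality score, weighting each
    pixel's error by its saliency so that errors in regions viewers
    attend to dominate the result."""
    num = sum(e * s
              for row_e, row_s in zip(error_map, saliency_map)
              for e, s in zip(row_e, row_s))
    den = sum(s for row in saliency_map for s in row)
    return num / den
```

With a uniform saliency map this reduces to the plain (attention-agnostic) mean error, so the weighting strictly generalizes the base metric.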

AUGMENTED REALITY CAMERA FRUSTUM

Published: June 7, 2018
Application Number: 20180157047
An apparatus such as a head-mounted display (HMD) may have a camera for capturing a visual scene for presentation via the HMD. A user of the apparatus may be operating a second, physical camera for capturing video or still images within the visual scene. The HMD may generate an augmented reality (AR) experience by presenting an AR view frustum representative of the actual view frustum of the physical camera. The field of view of the user viewing the captured visual scene via the AR…

OBJECT RECONSTRUCTION FROM DENSE LIGHT FIELDS VIA DEPTH FROM GRADIENTS

Published: May 17, 2018
Application Number: 20180137674
The present disclosure relates to techniques for reconstructing an object in three dimensions that is captured in a set of two-dimensional images. The object is reconstructed in three dimensions by computing depth values for edges of the object in the set of two-dimensional images. The set of two-dimensional images may be samples of a light field surrounding the object. The depth values may be computed by exploiting local gradient information in the set of two-dimensional images. After…
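
The "depth from gradients" relation can be illustrated with a 1-D optical-flow-style sketch between two adjacent light-field views: under brightness constancy, disparity ≈ -(view difference) / (spatial gradient), which is only well-conditioned where the gradient is strong, i.e., at edges. The scanline setting and function name are illustrative assumptions.

```python
def edge_disparity(view_a, view_b):
    """Per-pixel disparity between two adjacent light-field views from
    local gradients (1-D Lucas-Kanade step). Returns None where the
    spatial gradient is too weak to give a reliable estimate."""
    disp = []
    for x in range(1, len(view_a) - 1):
        ix = (view_a[x + 1] - view_a[x - 1]) / 2.0   # central spatial gradient
        it = view_b[x] - view_a[x]                   # inter-view difference
        disp.append(-it / ix if abs(ix) > 1e-6 else None)
    return disp
```

In a dense light field the view spacing is tiny, so this linearized gradient relation holds well and edge depths can be recovered without explicit correspondence search.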

OBJECT RECONSTRUCTION FROM DENSE LIGHT FIELDS VIA DEPTH FROM GRADIENTS

Published: May 17, 2018
Application Number: 20180139436
The present disclosure relates to techniques for reconstructing an object in three dimensions that is captured in a set of two-dimensional images. The object is reconstructed in three dimensions by computing depth values for edges of the object in the set of two-dimensional images. The set of two-dimensional images may be samples of a light field surrounding the object. The depth values may be computed by exploiting local gradient information in the set of two-dimensional images. After…

PIPELINE FOR HIGH DYNAMIC RANGE VIDEO CODING BASED ON LUMINANCE INDEPENDENT CHROMATICITY PREPROCESSING

Published: May 10, 2018
Application Number: 20180131841
The disclosure describes a high dynamic range video coding pipeline that may reduce color artifacts and improve compression efficiency. The disclosed pipeline separates the luminance component from the chrominance components of an input signal (e.g., an RGB source video) and applies a scaling of the chrominance components before encoding, thereby reducing perceivable color artifacts while maintaining luminance quality.
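
A minimal sketch of the separation step, assuming Rec. 709 luma weights and a simple chromaticity (ratio) representation; the actual pipeline's color spaces and scaling law are not specified in this excerpt, so everything below is illustrative.

```python
def preprocess(rgb, chroma_scale=0.5):
    """Split an RGB sample into luminance (Rec. 709 weights) and
    luminance-independent chromaticity ratios, then scale only the
    chromaticity prior to encoding. Luminance is left untouched."""
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b        # luminance component
    s = r + g + b
    cx, cz = (r / s, b / s) if s else (0.0, 0.0)    # chromaticity, independent of brightness
    return y, chroma_scale * cx, chroma_scale * cz
```

Because the chromaticity ratios are independent of overall brightness, scaling them reshapes the chroma signal for the encoder without altering the luminance channel, which is the property the pipeline exploits to cut color artifacts while preserving luminance quality.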

NOISE REDUCTION ON G-BUFFERS FOR MONTE CARLO FILTERING

Published: May 10, 2018
Application Number: 20180130248
Techniques for selectively removing Monte Carlo (MC) noise from a geometric buffer (G-buffer). Embodiments identify the G-buffer for rendering an image of a three-dimensional scene from a viewpoint. Embodiments determine, for each of a plurality of pixels in the image being rendered, respective world position information based on the three-dimensional scene and a position and orientation of the viewpoint. A pre-filtering operation is then performed to selectively remove the MC noise from…
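
The pre-filtering step can be sketched as a cross-bilateral filter guided by the per-pixel world positions, here on a 1-D strip of a single G-buffer channel; the function name, Gaussian weighting, and parameter values are illustrative assumptions.

```python
import math

def prefilter_gbuffer(values, world_pos, radius=1, sigma=1.0):
    """Cross-bilateral pre-filter on a 1-D strip of a G-buffer channel:
    each entry is averaged with neighbors weighted by world-position
    proximity, removing MC noise without blurring geometric edges."""
    out = []
    for i in range(len(values)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(values), i + radius + 1)):
            d = world_pos[i] - world_pos[j]
            w = math.exp(-(d * d) / (2.0 * sigma * sigma))  # near in world space => high weight
            num += w * values[j]
            den += w
        out.append(num / den)
    return out
```

At a depth discontinuity the world-position gap drives the weight toward zero, so noise is averaged away on flat surfaces while silhouettes stay sharp.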

METHODS AND SYSTEMS OF ENRICHING BLENDSHAPE RIGS WITH PHYSICAL SIMULATION

Published: May 10, 2018
Application Number: 20180130245
Methods, systems, and computer-readable memory are provided for determining time-varying anatomical and physiological tissue characteristics of an animation rig. For example, shape and material properties are defined for a plurality of sample configurations of the animation rig. The shape and material properties are associated with the plurality of sample configurations. An animation of the animation rig is obtained, and one or more configurations of the animation rig are determined for…

RECORDING HIGH FIDELITY DIGITAL IMMERSIVE EXPERIENCES THROUGH OFF-DEVICE COMPUTATION

Published: May 3, 2018
Application Number: 20180124370
Systems and methods are described for recording high fidelity augmented reality or virtual reality experiences through off-device computation. In one implementation, an augmented reality system renders an augmented reality object overlaid over a real-world environment; captures video and audio data of the real-world environment during rendering of the augmented reality object; and stores augmented reality object data associated with the rendered augmented reality object. The augmented…

INTERACTIVE VIDEO GAME METHOD AND SYSTEM

Published: April 19, 2018
Application Number: 20180104597
One particular implementation of the present invention may take the form of a method and apparatus for providing various movements as input to a video game. The method and apparatus may detect the body movements of a video game player and interpret those movements as inputs to the video game. The video game may then compare the movements of the user to expected movements to determine if the correct movement was performed by the user. The video game may also display a video game…
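
The comparison step can be sketched as a per-joint distance check between the player's detected pose and the expected movement pose; the joint representation, tolerance value, and function name are illustrative assumptions, not from the application.

```python
import math

def movement_matches(observed, expected, tolerance=0.15):
    """The move counts as performed correctly when every tracked joint of
    the player's pose lies within `tolerance` of the expected pose.
    Joints are (x, y) positions in a normalized coordinate frame."""
    return all(math.dist(o, e) <= tolerance
               for o, e in zip(observed, expected))
```

A real system would compare trajectories over time rather than single poses, but the per-joint threshold is the basic primitive either way.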

REAL TIME SURFACE AUGMENTATION USING PROJECTED LIGHT

Published: April 12, 2018
Application Number: 20180101987
A method of augmenting a target object with projected light is disclosed. The method includes determining a blend of component attributes to define visual characteristics of the target object, modifying an input image based, at least in part, on an image of the target object, wherein the modified input image defines an augmented visual characteristic of the target object, determining a present location of one or more landmarks on the target object based, at least in part, on the image of…

DISTRIBUTION CHANNEL USING AUDIO/VISUAL RECOGNITION

Published: April 12, 2018
Application Number: 20180101896
Systems and methods are provided for a platform that offers virtual storefronts to consumers. Environmental elements are associated with specific consumer services on a computer server. A user in the environment takes audio or visual recordings of an environmental element and uploads the recordings to the server. The server determines the appropriate consumer service associated with the recorded environmental element and provides the user with a reference to the service.…

DEEP-LEARNING MOTION PRIORS FOR FULL-BODY PERFORMANCE CAPTURE IN REAL-TIME

Published: April 5, 2018
Application Number: 20180096259
Training data from multiple types of sensors and captured in previous capture sessions can be fused within a physics-based tracking framework to train motion priors using different deep learning techniques, such as convolutional neural networks (CNN) and Recurrent Temporal Restricted Boltzmann Machines (RTRBMs). In embodiments employing one or more CNNs, two streams of filters can be used. In those embodiments, one stream of the filters can be used to learn the temporal information and…

REAL-TIME HIGH-QUALITY FACIAL PERFORMANCE CAPTURE

Published: April 5, 2018
Application Number: 20180096511
A method of transferring a facial expression from a subject to a computer-generated character and a system and non-transitory computer-readable medium for the same. The method can include receiving an input image depicting a face of a subject; matching a first facial model to the input image; generating a displacement map representative of finer-scale details not present in the first facial model using a regression function that estimates the shape of the finer-scale details. The…

POINT CLOUD NOISE AND OUTLIER REMOVAL FOR IMAGE-BASED 3D RECONSTRUCTION

Published: April 5, 2018
Application Number: 20180096463
Enhanced removing of noise and outliers from one or more point sets generated by image-based 3D reconstruction techniques is provided. In accordance with the disclosure, input images and corresponding depth maps can be used to remove pixels that are geometrically and/or photometrically inconsistent with the colored surface implied by the input images. This allows standard surface reconstruction methods (such as Poisson surface reconstruction) to perform less smoothing and thus achieve…
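
The photometric-consistency check can be sketched as follows, assuming each reconstructed point carries a color and a set of colors observed for it across the input images; the data layout, tolerance, and view-count threshold are illustrative assumptions.

```python
def filter_points(points, observations, color_tol=0.1, min_views=2):
    """Keep a reconstructed point only if at least `min_views` input
    images observe a color consistent with the point's own color;
    noisy or outlier points fail the check and are removed."""
    kept = []
    for pt in points:
        colors = observations.get(pt["id"], [])
        consistent = sum(1 for c in colors
                         if abs(c - pt["color"]) <= color_tol)
        if consistent >= min_views:
            kept.append(pt)
    return kept
```

Removing such points before surface reconstruction means methods like Poisson reconstruction need less smoothing to suppress artifacts, so more genuine detail survives.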