Lucasfilm Patent Applications

DETERMINING CONTROL VALUES OF AN ANIMATION MODEL USING PERFORMANCE CAPTURE

Granted: May 25, 2017
Application Number: 20170148201
Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input…
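
As an illustration of the general idea rather than the claimed method, the following minimal Python/NumPy sketch solves for rig control values that best reproduce captured feature positions; `evaluate_rig` is a hypothetical function standing in for the animation model.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_control_values(observed_points, evaluate_rig, n_controls):
    """Find control values whose predicted feature positions best match
    points observed in a captured pose.

    observed_points : (N, 3) array of captured feature positions
    evaluate_rig    : hypothetical callable mapping a control vector to
                      an (N, 3) array of positions predicted by the model
    """
    def residuals(controls):
        return (evaluate_rig(controls) - observed_points).ravel()

    x0 = np.zeros(n_controls)              # start from the rig's rest pose
    result = least_squares(residuals, x0)  # nonlinear least-squares fit
    return result.x                        # control values for this pose
```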

FLIGHT PATH CORRECTION IN VIRTUAL SCENES

Granted: March 23, 2017
Application Number: 20170084072
A method includes receiving a first motion path for an object, where an orientation of the object is not aligned with the first motion path for the object for at least a portion of the first motion path. The method also includes receiving a first motion path for a virtual camera and determining a speed of the object along the first motion path for the object. The method additionally includes calculating a second motion path for the object based on the speed of the object along the first…
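
One plausible reading of the speed-based recalculation, sketched here under the assumption that a corrected path already exists and only needs to be retimed so the object keeps the per-frame speeds it had on the first path:

```python
import numpy as np

def retime_path(corrected_path, original_path):
    """Resample a corrected motion path so the object covers the same
    per-frame distances as on the original path (an assumption-laden
    sketch, not the filed procedure).

    corrected_path, original_path : (N, 3) arrays of per-frame positions
    """
    # Distance travelled each frame on the original path.
    speeds = np.linalg.norm(np.diff(original_path, axis=0), axis=1)
    target = np.concatenate([[0.0], np.cumsum(speeds)])

    # Cumulative arc length along the corrected path.
    seg = np.linalg.norm(np.diff(corrected_path, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    target = np.clip(target, 0.0, arc[-1])

    # Positions on the corrected path at the required arc lengths.
    return np.stack([np.interp(target, arc, corrected_path[:, k])
                     for k in range(3)], axis=1)
```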

ANIMATION MOTION CAPTURE USING THREE-DIMENSIONAL SCANNER DATA

Granted: February 16, 2017
Application Number: 20170046865
Systems and techniques are provided for performing animation motion capture of objects within an environment. For example, a method may include obtaining input data including a three-dimensional point cloud of the environment. The three-dimensional point cloud is generated using a three-dimensional laser scanner including multiple laser emitters and multiple laser receivers. The method may further include obtaining an animation model for an object within the environment. The animation…
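
Fitting a model to scanner data typically involves rigid alignment against the point cloud; the sketch below shows one such step (a Kabsch/Procrustes fit given correspondences), offered as background rather than the filed technique.

```python
import numpy as np

def rigid_fit(model_pts, scan_pts):
    """Best rigid transform aligning corresponding model points to
    laser-scan points, so that scan ≈ model @ R.T + t.

    model_pts, scan_pts : (N, 3) arrays of corresponding points
    """
    mc, sc = model_pts.mean(axis=0), scan_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scan_pts - sc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, sc - R @ mc
```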

FACILITATE USER MANIPULATION OF A VIRTUAL REALITY ENVIRONMENT VIEW USING A COMPUTING DEVICE WITH TOUCH SENSITIVE SURFACE

Granted: September 29, 2016
Application Number: 20160283081
A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated…

FACILITATE USER MANIPULATION OF A VIRTUAL REALITY ENVIRONMENT

Granted: September 29, 2016
Application Number: 20160284136
A system and method facilitating a user to manipulate a virtual reality (VR) environment are disclosed. The user may provide an input via a touch sensitive surface of a computing device associated with the user to bind a virtual object in the VR environment to the computing device. The user may then move and/or rotate the computing device to cause the bound virtual object to move and/or rotate in the VR environment accordingly. In some examples, the bound virtual object may cast a ray…
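
The bind-and-mirror behaviour can be pictured as applying the device's pose change since binding to the bound object; the class below is an illustrative sketch with invented names, not code from the filing.

```python
import numpy as np

class BoundObject:
    """Mirror a handheld device's motion onto a bound virtual object."""

    def __init__(self, object_rotation, object_position):
        self.R0_obj = object_rotation   # 3x3 rotation of the object at bind time
        self.p0_obj = object_position   # object position at bind time
        self.R0_dev = None
        self.p0_dev = None

    def bind(self, device_rotation, device_position):
        """Record the device pose at the moment of binding."""
        self.R0_dev = device_rotation
        self.p0_dev = device_position

    def update(self, device_rotation, device_position):
        """Return the object's new pose after the device moves/rotates."""
        dR = device_rotation @ self.R0_dev.T        # device rotation since binding
        dp = device_position - self.p0_dev          # device translation since binding
        return dR @ self.R0_obj, self.p0_obj + dp   # applied to the bound object
```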

SWITCHING MODES OF A MEDIA CONTENT ITEM

Granted: August 4, 2016
Application Number: 20160227262
Systems and techniques are provided for switching between different modes of a media content item. A media content item may include a movie that has different modes, such as a cinematic mode and an interactive mode. For example, a movie may be presented in a cinematic mode that does not allow certain user interactions with the movie. The movie may be switched to an interactive mode at any point during the movie, allowing a viewer to interact with various aspects of the movie. The movie…

EFFICIENT LENS RE-DISTORTION

Granted: June 23, 2016
Application Number: 20160180501
Methods and systems efficiently apply known distortion, such as that of a camera and lens, to source image data to produce data of an output image with the distortion. In an embodiment, an output image field is segmented into regions so that on each segment the distortion function is approximately linear, and segmentation data is stored in a quadtree. The distortion function is applied to the segmented image field to produce a segmented rendered distortion image (SRDI) and a corresponding…
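
The quadtree segmentation can be sketched as recursive subdivision until the distortion over each square is close to its linear (corner-interpolated) estimate; `distort(x, y)` is a placeholder for the known camera/lens distortion function, and the tolerance test is an assumption.

```python
import numpy as np

def build_quadtree(distort, x0, y0, size, tol=0.25, depth=0, max_depth=8):
    """Split an output-image square until the distortion function is
    approximately linear on each leaf; returns a flat list of leaves
    (x0, y0, size)."""
    corners = [(x0, y0), (x0 + size, y0), (x0, y0 + size), (x0 + size, y0 + size)]
    cd = np.array([distort(x, y) for (x, y) in corners])

    # Bilinear (corner-averaged) estimate of the distortion at the centre
    # versus the true value there.
    est_center = cd.mean(axis=0)
    true_center = np.array(distort(x0 + size / 2.0, y0 + size / 2.0))

    if np.linalg.norm(true_center - est_center) <= tol or depth >= max_depth:
        return [(x0, y0, size)]                # linear enough: keep this leaf

    half = size / 2.0                          # otherwise split into four children
    leaves = []
    for cx, cy in [(x0, y0), (x0 + half, y0), (x0, y0 + half), (x0 + half, y0 + half)]:
        leaves += build_quadtree(distort, cx, cy, half, tol, depth + 1, max_depth)
    return leaves
```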

DEEP IMAGE IDENTIFIERS

Granted: March 31, 2016
Application Number: 20160093112
A method may include receiving a plurality of objects from a 3-D virtual scene. The plurality of objects may be arranged in a hierarchy. The method may also include generating a plurality of identifiers for the plurality of objects. The plurality of identifiers may include a first identifier for a first object in the plurality of objects, and the first identifier may be generated based on a position of the first object in the hierarchy. The method may additionally include performing a…
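
One simple way to picture an identifier derived from hierarchy position is a hash of the object's scene path, so the same object resolves to the same ID across renders; the scheme below is illustrative, not the one in the filing.

```python
import hashlib

def hierarchy_id(scene_path):
    """32-bit identifier derived from an object's path in the hierarchy,
    e.g. "/scene/character/arm_L/hand"."""
    digest = hashlib.sha1(scene_path.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big")

# The same path always yields the same identifier.
ids = {path: hierarchy_id(path)
       for path in ["/scene/character", "/scene/character/arm_L", "/scene/prop/crate"]}
```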

STYLING OF COMPUTER GRAPHICS HAIR THROUGH VOLUMETRIC FLOW DYNAMICS

Granted: March 17, 2016
Application Number: 20160078675
Methods are disclosed for the computer generation of data for images that include hair, fur, or other strand-like material. A volume for the hair is specified, having a plurality of surfaces. A fluid flow simulation is performed within the volume, with a first surface of the volume being a source area through which fluid is simulated to enter the volume, and a second surface being an exit surface through which fluid is simulated as exiting the volume. The fluid flow simulation may be…
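
Once a velocity field has been simulated inside the hair volume, strands can be grown by integrating streamlines through it; the grid-sampled field and unit-speed Euler steps below are simplifying assumptions used only to illustrate the flow-guided idea.

```python
import numpy as np

def trace_strand(velocity, seed, step=0.5, n_steps=200):
    """Grow one strand by following the simulated flow from its root.

    velocity : (X, Y, Z, 3) array of fluid velocities on a voxel grid
    seed     : 3-vector, strand root on the source surface
    """
    pts = [np.asarray(seed, dtype=float)]
    hi = np.array(velocity.shape[:3]) - 1
    for _ in range(n_steps):
        p = pts[-1]
        idx = np.clip(np.round(p).astype(int), 0, hi)   # nearest voxel sample
        v = velocity[idx[0], idx[1], idx[2]]
        norm = np.linalg.norm(v)
        if norm < 1e-6:                                 # strand left the flow
            break
        pts.append(p + step * v / norm)                 # unit-speed Euler step
    return np.array(pts)
```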

VISUAL AND PHYSICAL MOTION SENSING FOR THREE-DIMENSIONAL MOTION CAPTURE

Granted: January 14, 2016
Application Number: 20160012598
A system includes a visual data collector for collecting visual information from an image of one or more features of an object. The system also includes a physical data collector for collecting sensor information provided by one or more sensors attached to the object. The system also includes a computer system that includes a motion data combiner for combining the visual information and the sensor information. The motion data combiner is configured to determine the position of a…
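
The combination of visual and sensor information can be pictured as dead-reckoning from the sensors between frames and correcting toward the camera-derived position whenever features are visible; the class below is a hedged sketch with an assumed blend weight, not the filed combiner.

```python
import numpy as np

class MotionDataCombiner:
    """Fuse sensor-derived motion increments with visual position fixes."""

    def __init__(self, initial_position, visual_weight=0.7):
        self.position = np.asarray(initial_position, dtype=float)
        self.visual_weight = visual_weight     # trust placed in the visual estimate

    def update(self, sensor_delta, visual_position=None):
        # Carry the estimate forward using the attached sensors.
        self.position = self.position + np.asarray(sensor_delta, dtype=float)
        # When image features were tracked this frame, blend toward them.
        if visual_position is not None:
            w = self.visual_weight
            self.position = w * np.asarray(visual_position, dtype=float) \
                            + (1.0 - w) * self.position
        return self.position
```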

IMMERSION PHOTOGRAPHY WITH DYNAMIC MATTE SCREEN

Granted: December 3, 2015
Application Number: 20150348326
A method may include displaying, on one or more display devices in a virtual-reality environment, a visual representation of a 3-D virtual scene from the perspective of a subject location in the virtual-reality environment. The method may also include displaying, on the one or more display devices, a chroma-key background with the visual representation. The method may further include recording, using a camera, an image of the subject in the virtual-reality environment against the…

REAL-TIME CONTENT IMMERSION SYSTEM

Granted: December 3, 2015
Application Number: 20150350628
A method may include presenting a scene from linear content on one or more display devices in an immersive environment, and receiving, from a user within the immersive environment, input to change an aspect of the scene. The method may also include accessing 3-D virtual scene information previously used to render the scene, and changing the 3-D virtual scene information according to the changed aspect of the scene. The method may additionally include rendering the 3-D virtual scene to…

DEEP IMAGE DATA COMPRESSION

Granted: November 5, 2015
Application Number: 20150317765
A method of compressing a deep image representation may include receiving a deep image, where the deep image may include multiple pixels, and where each pixel in the deep image may include multiple samples. The method may also include compressing the deep image by combining samples in each pixel that are associated with the same primitives. This process may be repeated on a pixel-by-pixel basis. Some embodiments may use primitive IDs to match pixels to primitives through the rendering…
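
The per-pixel merge can be sketched as grouping a pixel's samples by primitive ID and compositing each group into a single sample; field names below are illustrative and the "over" blend is an assumption about how samples are combined.

```python
from collections import defaultdict

def compress_pixel(samples):
    """Merge deep samples that share a primitive ID into one sample each.

    samples : list of dicts with keys 'prim_id', 'depth', 'color', 'alpha'
    """
    groups = defaultdict(list)
    for s in samples:
        groups[s["prim_id"]].append(s)

    merged = []
    for prim_id, group in groups.items():
        group.sort(key=lambda s: s["depth"])           # composite front to back
        color, alpha = [0.0, 0.0, 0.0], 0.0
        for s in group:
            w = s["alpha"] * (1.0 - alpha)
            color = [c + w * sc for c, sc in zip(color, s["color"])]
            alpha += w
        merged.append({"prim_id": prim_id, "depth": group[0]["depth"],
                       "color": tuple(color), "alpha": alpha})
    return merged                                      # repeated pixel by pixel
```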

MOTION-CONTROLLED BODY CAPTURE AND RECONSTRUCTION

Granted: October 15, 2015
Application Number: 20150294492
A method of generating unrecorded camera views may include receiving a plurality of 2-D video sequences of a subject in a real 3-D space, where each 2-D video sequence may depict the subject from a different perspective. The method may also include generating a 3-D representation of the subject in a virtual 3-D space, where a geometry and texture of the 3-D representation may be generated based on the 2-D video sequences, and the motion of the 3-D representation in the virtual 3-D space…

AUTOMATED CAMERA CALIBRATION METHODS AND SYSTEMS

Granted: October 8, 2015
Application Number: 20150288951
Methods and systems are disclosed for calibrating a camera using a calibration target apparatus that contains at least one fiducial marking on a planar surface. The set of all planar markings on the apparatus are distinguishable. Parameters of the camera are inferred from at least one image of the calibration target apparatus. In some embodiments, pixel coordinates of identified fiducial markings in an image are used with geometric knowledge of the apparatus to calculate camera…
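
As background, the standard OpenCV calibration call performs the same kind of inference, recovering intrinsics from marker pixel coordinates and the target's known geometry; this uses OpenCV's API as an illustration rather than the method of the filing.

```python
import numpy as np
import cv2

def calibrate_from_fiducials(object_points, image_points, image_size):
    """Recover camera intrinsics from identified fiducial markings.

    object_points : list of (N, 3) float32 arrays, known marker positions
                    on the target (from its geometry), one per image
    image_points  : list of (N, 2) float32 arrays, pixel coordinates of
                    the same markers found in each image
    image_size    : (width, height) of the images
    """
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return camera_matrix, dist_coeffs, rms   # intrinsics, distortion, reprojection error
```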

CALIBRATION TARGET FOR VIDEO PROCESSING

Granted: October 8, 2015
Application Number: 20150288956
An apparatus is disclosed which may serve as a target for calibrating a camera. The apparatus comprises one or more planar surfaces. The apparatus includes at least one fiducial marking on a planar surface. The set of all planar markings on the apparatus are distinguishable.

POST-RENDER MOTION BLUR

Granted: August 20, 2015
Application Number: 20150235407
A method of applying a post-render motion blur to an object may include receiving a first image of the object. The first image need not be motion blurred, and the first image may include a first pixel and rendered color information for the first pixel. The method may also include receiving a second image of the object. The second image may be motion blurred, and the second image may include a second pixel and a location of the second pixel before the second image was motion blurred.…
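
A minimal sketch of the pixel bookkeeping the abstract hints at: each pixel of the motion-blurred image knows where it sat before the blur, so its colour can be gathered from the sharp render at that location. The array layout is an assumption.

```python
import numpy as np

def transfer_color(sharp_rgb, preblur_xy):
    """Fetch rendered colours for blurred pixels from their pre-blur positions.

    sharp_rgb  : (H, W, 3) rendered, un-blurred image
    preblur_xy : (H2, W2, 2) per-pixel (x, y) location before the blur
    """
    xs = np.clip(preblur_xy[..., 0].astype(int), 0, sharp_rgb.shape[1] - 1)
    ys = np.clip(preblur_xy[..., 1].astype(int), 0, sharp_rgb.shape[0] - 1)
    return sharp_rgb[ys, xs]   # gather colours from the pre-blur positions
```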

DYNAMIC LIGHTING CAPTURE AND RECONSTRUCTION

Granted: July 30, 2015
Application Number: 20150215623
Systems and techniques for dynamically capturing and reconstructing lighting are provided. The systems and techniques may be based on a stream of images capturing the lighting within an environment as a scene is shot. Reconstructed lighting data may be used to illuminate a character in a computer-generated environment as the scene is shot. For example, a method may include receiving a stream of images representing lighting of a physical environment. The method may further include…
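
As a deliberately simple stand-in for the reconstruction step, the sketch below reduces one captured lighting frame (assumed to be a latitude-longitude environment image) to a single dominant directional light with a solid-angle-weighted average.

```python
import numpy as np

def dominant_light(latlong_rgb):
    """Estimate one dominant light direction and colour from an
    environment image whose rows span latitude 0..pi and whose columns
    span longitude 0..2*pi."""
    h, w, _ = latlong_rgb.shape
    theta = (np.arange(h) + 0.5) / h * np.pi            # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi        # azimuth per column
    sin_t = np.sin(theta)[:, None]                      # solid-angle weight

    # Unit direction vector for every pixel.
    dirs = np.stack([sin_t * np.cos(phi)[None, :],
                     sin_t * np.sin(phi)[None, :],
                     np.cos(theta)[:, None] * np.ones((1, w))], axis=-1)

    weight = latlong_rgb.mean(axis=-1) * sin_t          # brightness x area
    d = (dirs * weight[..., None]).sum(axis=(0, 1))
    d /= np.linalg.norm(d) + 1e-12
    color = (latlong_rgb * (weight / (weight.sum() + 1e-12))[..., None]).sum(axis=(0, 1))
    return d, color
```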

CONTROLLING A VIRTUAL CAMERA

Granted: May 14, 2015
Application Number: 20150130801
Among other aspects, one computer-implemented method includes: receiving at least one command in a computer system from a handheld device; positioning a virtual camera and controlling a virtual scene according to the command; and in response to the command, generating an output to the handheld device for displaying a view of the virtual scene as controlled on a display of the handheld device, the view captured by the virtual camera as positioned.

REAL-TIME PERFORMANCE CAPTURE WITH ON-THE-FLY CORRECTIVES

Granted: March 26, 2015
Application Number: 20150084950
Techniques for facial performance capture using an adaptive model are provided herein. For example, a computer-implemented method may include obtaining a three-dimensional scan of a subject and generating a customized digital model including a set of blendshapes using the three-dimensional scan, each of one or more blendshapes of the set of blendshapes representing at least a portion of a characteristic of the subject. The method may further include receiving input data of the subject,…
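
A conventional way to use such a blendshape set, shown here only as a hedged sketch (the filing's adaptive, on-the-fly correctives are not reproduced): solve non-negative least squares for the weights that make the model match a captured frame.

```python
import numpy as np
from scipy.optimize import nnls

def solve_blendshape_weights(neutral, blendshapes, target):
    """Fit non-negative blendshape weights to a captured frame.

    neutral     : (V, 3) neutral-pose vertices of the digital model
    blendshapes : (B, V, 3) blendshape target vertices
    target      : (V, 3) captured/tracked vertices for the current frame
    """
    B = blendshapes.shape[0]
    basis = (blendshapes - neutral[None]).reshape(B, -1).T   # (3V, B) deltas
    rhs = (target - neutral).reshape(-1)                     # offsets to explain
    weights, _ = nnls(basis, rhs)                            # non-negative fit
    return weights
```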