Lucasfilm Patent Applications

SYSTEM AND METHOD FOR MUSIC AND EFFECTS SOUND MIX CREATION IN AUDIO SOUNDTRACK VERSIONING

Published: December 17, 2020
Application Number: 20200394999
Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data…
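
As a rough illustration of the content-feature-extraction step described above, the sketch below computes two generic per-frame descriptors (RMS energy and spectral flatness) that a downstream model could use to tell dialogue-heavy frames from music-and-effects material. The function name, frame sizes, and choice of features are illustrative assumptions, not the implementation described in the application.

    import numpy as np

    def extract_content_features(samples, sample_rate, frame_len=2048, hop=512):
        """Per-frame content features from a mono sound mix (illustrative only).

        Returns (time_s, rms_energy, spectral_flatness) tuples; low flatness with
        moderate energy is one crude hint that a frame is dialogue-like.
        """
        window = np.hanning(frame_len)
        features = []
        for start in range(0, len(samples) - frame_len, hop):
            frame = samples[start:start + frame_len]
            rms = float(np.sqrt(np.mean(frame ** 2)))
            spectrum = np.abs(np.fft.rfft(frame * window)) + 1e-12
            flatness = float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
            features.append((start / sample_rate, rms, flatness))
        return features

    # Example with one second of synthetic audio standing in for a sound mix.
    sr = 48_000
    t = np.linspace(0.0, 1.0, sr, endpoint=False)
    mix = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.05 * np.random.randn(sr)
    print(extract_content_features(mix, sr)[:3])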

CAMERA SYSTEMS FOR MOTION CAPTURE

Published: September 10, 2020
Application Number: 20200288050
Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance…
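
Since the two camera systems observe the same performance, per-marker detections from each view can be combined into 3-D marker positions once the systems are calibrated. The sketch below is a standard linear (DLT) triangulation, offered as a generic stand-in rather than the specific method of the application; the projection matrices and pixel coordinates are assumed inputs.

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Recover a marker's 3-D position from its pixel coordinates in two
        calibrated views (3x4 projection matrices P1, P2) via a linear DLT solve.
        """
        (u1, v1), (u2, v2) = uv1, uv2
        A = np.vstack([u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0],
                       v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                           # homogeneous -> Euclidean

    # Quick check with a synthetic marker and two horizontally offset cameras.
    K = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
    X = np.array([0.05, -0.02, 1.5, 1.0])
    pix = lambda P: (P @ X)[:2] / (P @ X)[2]
    print(triangulate(P1, P2, pix(P1), pix(P2)))      # roughly [0.05, -0.02, 1.5]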

FACIAL PERFORMANCE CAPTURE IN AN UNCONTROLLED ENVIRONMENT

Published: September 10, 2020
Application Number: 20200286301
A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate…
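
The "solving" step can be read as an optimization: find model parameter values whose projection through the estimated camera intrinsics best matches what is seen in the plate. The sketch below assumes a simple linear blendshape model and fits its weights to 2-D landmark positions with a nonlinear least-squares solve; the blendshape basis, landmark inputs, and pinhole projection are assumptions for illustration, not the application's actual solver.

    import numpy as np
    from scipy.optimize import least_squares

    def project(points_3d, intrinsics):
        """Pinhole projection of Nx3 camera-space points with a 3x3 intrinsic matrix."""
        proj = (intrinsics @ points_3d.T).T
        return proj[:, :2] / proj[:, 2:3]

    def solve_expression(neutral, blendshapes, intrinsics, observed_2d):
        """Fit blendshape weights so the projected model matches 2-D landmarks.

        neutral:     Nx3 neutral-pose landmark positions (camera space)
        blendshapes: KxNx3 per-shape offsets from the neutral pose
        observed_2d: Nx2 landmark positions detected in the plate
        """
        def residual(weights):
            shaped = neutral + np.tensordot(weights, blendshapes, axes=1)
            return (project(shaped, intrinsics) - observed_2d).ravel()

        fit = least_squares(residual, x0=np.full(len(blendshapes), 0.5),
                            bounds=(0.0, 1.0))        # keep weights in a plausible range
        return fit.x

    # Synthetic round trip: recover known weights from their own projection.
    rng = np.random.default_rng(0)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    neutral = rng.uniform([-0.1, -0.1, 0.9], [0.1, 0.1, 1.1], size=(20, 3))
    shapes = rng.normal(scale=0.01, size=(4, 20, 3))
    true_w = np.array([0.3, 0.0, 0.7, 0.1])
    target = project(neutral + np.tensordot(true_w, shapes, axes=1), K)
    print(solve_expression(neutral, shapes, K, target).round(2))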

ON-SET FACIAL PERFORMANCE CAPTURE AND TRANSFER TO A THREE-DIMENSIONAL COMPUTER-GENERATED MODEL

Published: September 10, 2020
Application Number: 20200286284
A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on…

CREATING SHADOWS IN MIXED REALITY

Published: August 20, 2020
Application Number: 20200265638
Implementations of the disclosure are directed to generating shadows in the physical world that correspond to virtual objects displayed on mixed reality (MR) displays. In some implementations, a method includes: synchronously presenting a version of a scene on each of an MR display system and a projector display system, where during presentation: the MR display system displays a virtual object overlaid over a view of a physical environment; and a projector of the projector display system creates a shadow…
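
One way to picture the projector's role: if the projector normally floods the physical scene with light, it can create a real shadow by masking out exactly those pixels whose rays the virtual object would block. The sketch below builds such a mask for a virtual sphere using a pinhole projector model; the sphere stand-in, the pinhole assumption, and the parameter names are illustrative and not taken from the application.

    import numpy as np

    def shadow_mask(proj_intrinsics, sphere_center, sphere_radius,
                    width=1280, height=720):
        """Projector frame that is white except where a virtual sphere would block
        the projector's light, so the cast shadow lines up with the virtual object
        shown on the MR display. The projector sits at the origin, looking down +Z.
        """
        fx, fy = proj_intrinsics[0, 0], proj_intrinsics[1, 1]
        cx, cy = proj_intrinsics[0, 2], proj_intrinsics[1, 2]
        ys, xs = np.mgrid[0:height, 0:width]
        # Unit ray direction through each projector pixel.
        dirs = np.dstack([(xs - cx) / fx, (ys - cy) / fy, np.ones((height, width))])
        dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
        # A pixel is shadowed if its ray passes within sphere_radius of the center.
        t = dirs @ sphere_center                       # closest-approach distance along the ray
        dist = np.linalg.norm(dirs * t[..., None] - sphere_center, axis=2)
        frame = np.ones((height, width))
        frame[(dist < sphere_radius) & (t > 0)] = 0.0  # 0 = keep dark (shadow)
        return frame

    mask = shadow_mask(np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]]),
                       sphere_center=np.array([0.0, 0.2, 3.0]), sphere_radius=0.5)
    print(mask.shape, mask.min(), mask.mean().round(3))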

FACILITATE USER MANIPULATION OF A VIRTUAL REALITY ENVIRONMENT VIEW USING A COMPUTING DEVICE WITH A TOUCH SENSITIVE SURFACE

Published: August 6, 2020
Application Number: 20200249765
A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated…
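
A minimal sketch of how finger gestures on the touch sensitive surface might be mapped onto the VR view is shown below: a one-finger drag orbits the view and a two-finger pinch changes its zoom. The ViewState fields, the gesture mapping, and the gain constants are assumptions for illustration, not the specific interactions claimed in the application.

    from dataclasses import dataclass
    import math

    @dataclass
    class ViewState:
        """Yaw/pitch/zoom of the user's view into the VR environment (illustrative)."""
        yaw: float = 0.0
        pitch: float = 0.0
        zoom: float = 1.0

    def apply_gesture(view, touches_prev, touches_now):
        """Map touch-surface gestures onto the VR view.

        touches_prev/now: lists of (x, y) touch points in the previous/current frame.
        A one-finger drag orbits the view; a two-finger pinch changes zoom.
        """
        if len(touches_now) == 1 and len(touches_prev) == 1:
            dx = touches_now[0][0] - touches_prev[0][0]
            dy = touches_now[0][1] - touches_prev[0][1]
            view.yaw += dx * 0.005                               # drag-to-orbit gain
            view.pitch = max(-1.5, min(1.5, view.pitch + dy * 0.005))
        elif len(touches_now) == 2 and len(touches_prev) == 2:
            spread = lambda t: math.dist(t[0], t[1])
            if spread(touches_prev) > 0:
                view.zoom *= spread(touches_now) / spread(touches_prev)
        return view

    view = apply_gesture(ViewState(), [(100, 200)], [(140, 200)])    # one-finger drag
    print(view.yaw, view.zoom)                                       # 0.2 1.0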

IMMERSIVE CONTENT PRODUCTION SYSTEM WITH MULTIPLE TARGETS

Published: May 7, 2020
Application Number: 20200145644
An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area that includes multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. Where the images of the virtual…
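
One simple way such camera-motion-induced lighting artifacts might be flagged is to watch for brightness jumps on the performer that have no counterpart in the displayed virtual environment. The sketch below does only that detection half, comparing each frame's region-of-interest brightness against a short running average; the threshold, window length, and region-of-interest input are illustrative assumptions rather than the system's actual detection method.

    import numpy as np

    def detect_lighting_artifacts(frames, roi, threshold=0.08):
        """Flag frames whose region-of-interest brightness jumps relative to a
        short running average, one crude signature of lighting artifacts in
        plates captured inside an LED volume.

        frames: (num_frames, H, W) luminance images in [0, 1]
        roi:    (y0, y1, x0, x1) region covering the performer
        """
        y0, y1, x0, x1 = roi
        levels = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
        flags = []
        for i in range(1, len(levels)):
            baseline = levels[max(0, i - 5):i].mean()   # short running average
            if abs(levels[i] - baseline) > threshold:
                flags.append(i)                         # candidate artifact frame
        return flags

    # Example: a synthetic brightness dip at frame 30.
    frames = np.full((60, 90, 160), 0.5)
    frames[30] *= 0.8
    print(detect_lighting_artifacts(frames, roi=(20, 70, 40, 120)))   # [30]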

IMMERSIVE CONTENT PRODUCTION SYSTEM

Published: May 7, 2020
Application Number: 20200143592
An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area that includes multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. Where the images of the virtual…

SYSTEMS AND METHODS FOR MOTION CAPTURE

Published: April 25, 2019
Application Number: 20190122374
Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance…

CAMERA SYSTEMS FOR MOTION CAPTURE

Published: April 25, 2019
Application Number: 20190124244
Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance…

VIRTUAL-SCENE CONTROL DEVICE

Published: July 6, 2017
Application Number: 20170195527
A handheld device includes: an input control configured to control and modify a virtual scene including a virtual camera; and a display that shows a representation of the controlled and modified virtual scene generated by the virtual camera. A system includes: a computer system configured to execute program instructions for generating a virtual scene including a virtual camera; and a handheld device configured to communicate with the computer system for controlling and modifying the…

MULTI-CHANNEL TRACKING PATTERN

Published: June 22, 2017
Application Number: 20170178382
A multi-channel tracking pattern is provided along with techniques and systems for performing motion capture using the multi-channel tracking pattern. The multi-channel tracking pattern includes a plurality of shapes having different colors on different portions of the pattern. The portions with the unique shapes and colors allow a motion capture system to track motion of an object bearing the pattern across a plurality of video frames.
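
The tracking idea lends itself to a very small sketch: because each portion of the pattern carries a unique shape-and-color combination, a tracker can isolate the pixels matching one portion's color in every frame and follow their centroid. The color threshold and centroid approach below are a simplified stand-in for the described system; the function and parameter names are illustrative.

    import numpy as np

    def track_color_patch(frames, target_rgb, tolerance=30):
        """Track the centroid of a uniquely colored patch across video frames.

        frames:     iterable of (H, W, 3) uint8 RGB images
        target_rgb: the patch color that is unique within the tracking pattern
        Returns one (x, y) centroid per frame, or None when the patch is not found.
        """
        target = np.asarray(target_rgb, dtype=float)
        centroids = []
        for frame in frames:
            dist = np.linalg.norm(frame.astype(float) - target, axis=2)
            ys, xs = np.nonzero(dist < tolerance)       # pixels matching the patch color
            if len(xs) == 0:
                centroids.append(None)
            else:
                centroids.append((float(xs.mean()), float(ys.mean())))
        return centroids

    frame = np.zeros((10, 10, 3), dtype=np.uint8)
    frame[2:5, 6:9] = (255, 0, 0)                               # a red patch of the pattern
    print(track_color_patch([frame], target_rgb=(255, 0, 0)))   # [(7.0, 3.0)]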

DETERMINING CONTROL VALUES OF AN ANIMATION MODEL USING PERFORMANCE CAPTURE

Published: May 25, 2017
Application Number: 20170148201
Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input…
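
If the rig is approximated as linear around the current pose, determining the control values reduces to a least-squares solve from the captured point positions. The sketch below assumes such a linearized basis (one offset field per control); the array shapes and names are assumptions for illustration rather than the application's solver.

    import numpy as np

    def solve_control_values(basis, neutral, captured):
        """Solve for animation-rig control values that best reproduce a captured pose.

        basis:    (num_controls, num_points, 3) point offsets per unit control value
        neutral:  (num_points, 3) rest pose of the animation model
        captured: (num_points, 3) points measured from the subject's performance
        """
        A = basis.reshape(len(basis), -1).T          # (3*num_points, num_controls)
        b = (captured - neutral).ravel()
        controls, *_ = np.linalg.lstsq(A, b, rcond=None)
        return controls

    rng = np.random.default_rng(1)
    basis = rng.normal(size=(3, 50, 3))              # 3 controls, 50 model points
    neutral = rng.normal(size=(50, 3))
    captured = neutral + np.tensordot([0.2, -0.4, 0.9], basis, axes=1)
    print(solve_control_values(basis, neutral, captured).round(2))   # [ 0.2 -0.4  0.9]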

FLIGHT PATH CORRECTION IN VIRTUAL SCENES

Published: March 23, 2017
Application Number: 20170084072
A method includes receiving a first motion path for an object, where an orientation of the object is not aligned with the first motion path for the object for at least a portion of the first motion path. The method also includes receiving a first motion path for a virtual camera and determining a speed of the object along the first motion path for the object. The method additionally includes calculating a second motion path for the object based on the speed of the object along the first…
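
One way to read "calculating a second motion path based on the speed of the object along the first" is as a reparameterization problem: measure the speed profile on the original path, then sample the corrected geometry so that profile is preserved. The sketch below does exactly that with arc-length interpolation; the function names and the assumption of densely sampled, non-repeating path points are illustrative.

    import numpy as np

    def speed_profile(path, frame_rate):
        """Per-frame speed of an object along a sampled motion path (N x 3)."""
        return np.linalg.norm(np.diff(path, axis=0), axis=1) * frame_rate

    def resample_with_speed(new_path, speeds, frame_rate):
        """Resample a corrected path so the object keeps its original speed profile.

        new_path: densely sampled corrected geometry (M x 3, no repeated samples)
        speeds:   per-frame speeds measured on the original path
        """
        seg = np.linalg.norm(np.diff(new_path, axis=0), axis=1)
        arc = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
        target = np.concatenate([[0.0], np.cumsum(speeds / frame_rate)])
        target = np.clip(target, 0.0, arc[-1])                   # stay on the new path
        return np.column_stack([np.interp(target, arc, new_path[:, k]) for k in range(3)])

    # Example: keep the speed of a straight original path while flying a curved one.
    original = np.column_stack([np.linspace(0, 10, 25), np.zeros(25), np.zeros(25)])
    curved = np.column_stack([np.linspace(0, 10, 200),
                              np.sin(np.linspace(0, np.pi, 200)), np.zeros(200)])
    corrected = resample_with_speed(curved, speed_profile(original, 24.0), 24.0)
    print(corrected.shape)                                        # (25, 3)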

ANIMATION MOTION CAPTURE USING THREE-DIMENSIONAL SCANNER DATA

Published: February 16, 2017
Application Number: 20170046865
Systems and techniques are provided for performing animation motion capture of objects within an environment. For example, a method may include obtaining input data including a three-dimensional point cloud of the environment. The three-dimensional point cloud is generated using a three-dimensional laser scanner including multiple laser emitters and multiple laser receivers. The method may further include obtaining an animation model for an object within the environment. The animation…
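
Fitting an animation model to a scanned point cloud usually starts with registering the model's points against the cloud. The sketch below is a bare-bones iterative-closest-point style alignment (nearest-neighbour matching plus a rigid Kabsch fit), offered as a generic stand-in rather than the application's method; it assumes small point sets so a brute-force distance matrix is acceptable.

    import numpy as np

    def rigid_align(model_pts, cloud, iterations=10):
        """Roughly register an object's model points to a scanned point cloud.

        Alternates nearest-neighbour matching with a rigid (Kabsch) fit and
        returns the rotation R and translation t applied to the model points.
        """
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            moved = model_pts @ R.T + t
            # Nearest scan point for every model point (brute force for clarity).
            d = np.linalg.norm(moved[:, None, :] - cloud[None, :, :], axis=2)
            matched = cloud[d.argmin(axis=1)]
            # Kabsch: best rotation/translation mapping the model onto its matches.
            mu_m, mu_c = model_pts.mean(axis=0), matched.mean(axis=0)
            H = (model_pts - mu_m).T @ (matched - mu_c)
            U, _, Vt = np.linalg.svd(H)
            S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ S @ U.T
            t = mu_c - R @ mu_m
        return R, t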

FACILITATE USER MANIPULATION OF A VIRTUAL REALITY ENVIRONMENT

Published: September 29, 2016
Application Number: 20160284136
A system and method for facilitating user manipulation of a virtual reality (VR) environment are disclosed. The user may provide an input via a touch sensitive surface of a computing device associated with the user to bind a virtual object in the VR environment to the computing device. The user may then move and/or rotate the computing device to cause the bound virtual object to move and/or rotate in the VR environment accordingly. In some examples, the bound virtual object may cast a ray…
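
The bind-then-mirror behaviour can be sketched as preserving the relative transform between the device and the virtual object at bind time, then reapplying it as the device moves. The class below is a hypothetical illustration (poses given as a position vector plus a 3x3 rotation matrix), not an API from the disclosure.

    import numpy as np

    class BoundObject:
        """Mirror a handheld device's motion onto a virtual object once bound.

        The device-to-object offset captured at bind time is preserved, so moving
        or rotating the device moves and rotates the object with it.
        """
        def __init__(self, object_position, object_rotation):
            self.position = np.asarray(object_position, dtype=float)
            self.rotation = np.asarray(object_rotation, dtype=float)
            self._bind = None

        def bind(self, device_position, device_rotation):
            # Remember the object's transform relative to the device.
            inv = device_rotation.T
            self._bind = (inv @ (self.position - device_position), inv @ self.rotation)

        def update(self, device_position, device_rotation):
            if self._bind is None:
                return
            rel_pos, rel_rot = self._bind
            self.position = device_position + device_rotation @ rel_pos
            self.rotation = device_rotation @ rel_rot

    obj = BoundObject([0.0, 1.0, 0.0], np.eye(3))
    obj.bind(np.zeros(3), np.eye(3))
    obj.update(np.array([0.5, 0.0, 0.0]), np.eye(3))    # device moved 0.5 to the right
    print(obj.position)                                 # [0.5 1.  0. ]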

FACILITATE USER MANIPULATION OF A VIRTUAL REALITY ENVIRONMENT VIEW USING A COMPUTING DEVICE WITH TOUCH SENSITIVE SURFACE

Published: September 29, 2016
Application Number: 20160283081
A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated…

SWITCHING MODES OF A MEDIA CONTENT ITEM

Published: August 4, 2016
Application Number: 20160227262
Systems and techniques are provided for switching between different modes of a media content item. A media content item may include a movie that has different modes, such as a cinematic mode and an interactive mode. For example, a movie may be presented in a cinematic mode that does not allow certain user interactions with the movie. The movie may be switched to an interactive mode during any point of the movie, allowing a viewer to interact with various aspects of the movie. The movie…
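
A toy model of the mode switch is enough to show the idea: playback position is shared state, and the mode can flip between cinematic and interactive at any point without losing that position. The class and method names below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class MediaSession:
        """A movie that can flip between cinematic and interactive modes."""
        position_s: float = 0.0
        mode: str = "cinematic"

        def advance(self, dt):
            self.position_s += dt        # playback continues in either mode

        def interact(self, action):
            # Certain user interactions are only honoured in interactive mode.
            return self.mode == "interactive"

        def switch_mode(self):
            # The switch keeps the current position so the story resumes seamlessly.
            self.mode = "interactive" if self.mode == "cinematic" else "cinematic"
            return self.mode

    session = MediaSession()
    session.advance(600.0)                              # ten minutes into the movie
    print(session.interact("inspect_prop"))             # False in cinematic mode
    print(session.switch_mode(), session.position_s)    # interactive 600.0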

EFFICIENT LENS RE-DISTORTION

Published: June 23, 2016
Application Number: 20160180501
Methods and systems efficiently apply a known distortion, such as that of a camera and lens, to source image data to produce an output image exhibiting that distortion. In an embodiment, an output image field is segmented into regions so that on each segment the distortion function is approximately linear, and segmentation data is stored in a quadtree. The distortion function is applied to the segmented image field to produce a segmented rendered distortion image (SRDI) and a corresponding…
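
The adaptive segmentation is the heart of the approach: keep splitting the output image field until, on each cell, interpolating the distortion from the cell's corners is accurate enough to treat it as linear. The sketch below builds such a quadtree of leaf cells; the specific linearity test (corner interpolation versus the true values at five sample points), the tolerance, and the sample radial distortion are assumptions for illustration.

    import numpy as np

    def subdivide(distort, x0, y0, size, tol=0.25, min_size=8):
        """Adaptively split the output image field into a quadtree whose leaf cells
        are small enough that the distortion function is approximately linear on
        each (corner interpolation matches the true values within `tol` pixels).
        Returns leaf cells as (x0, y0, size) tuples.
        """
        c00, c10 = np.asarray(distort(x0, y0)), np.asarray(distort(x0 + size, y0))
        c01, c11 = np.asarray(distort(x0, y0 + size)), np.asarray(distort(x0 + size, y0 + size))
        # Compare true distortion against corner interpolation at five test points.
        tests = [((c00 + c10 + c01 + c11) / 4, (x0 + size / 2, y0 + size / 2)),
                 ((c00 + c10) / 2, (x0 + size / 2, y0)),
                 ((c01 + c11) / 2, (x0 + size / 2, y0 + size)),
                 ((c00 + c01) / 2, (x0, y0 + size / 2)),
                 ((c10 + c11) / 2, (x0 + size, y0 + size / 2))]
        error = max(np.linalg.norm(np.asarray(distort(*p)) - est) for est, p in tests)
        if size <= min_size or error < tol:
            return [(x0, y0, size)]
        half = size / 2
        return [leaf
                for dx in (0.0, half) for dy in (0.0, half)
                for leaf in subdivide(distort, x0 + dx, y0 + dy, half, tol, min_size)]

    # Example: a simple radial distortion around the image center.
    def radial(x, y, cx=512.0, cy=512.0, k=2e-7):
        dx, dy = x - cx, y - cy
        factor = 1 + k * (dx * dx + dy * dy)
        return (cx + dx * factor, cy + dy * factor)

    print(len(subdivide(radial, 0.0, 0.0, 1024.0)))   # number of leaf cells in the segmentation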

DEEP IMAGE IDENTIFIERS

Published: March 31, 2016
Application Number: 20160093112
A method may include receiving a plurality of objects from a 3-D virtual scene. The plurality of objects may be arranged in a hierarchy. The method may also include generating a plurality of identifiers for the plurality of objects. The plurality of identifiers may include a first identifier for a first object in the plurality of objects, and the first identifier may be generated based on the position of the first object in the hierarchy. The method may additionally include performing a…
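
A small sketch of hierarchy-derived identifiers: walk the scene hierarchy and derive each object's identifier from its path through that hierarchy, here by hashing the path. The dict-based node layout and the choice of hash are illustrative assumptions, not the method's actual identifier scheme.

    import hashlib

    def assign_identifiers(node, path=""):
        """Walk a 3-D scene hierarchy and give every object an identifier derived
        from its position in the hierarchy (a short hash of its path). Nodes are
        dicts of the form {"name": str, "children": [...]} for illustration.
        """
        node_path = f"{path}/{node['name']}"
        ids = {node_path: hashlib.sha1(node_path.encode()).hexdigest()[:8]}
        for child in node.get("children", []):
            ids.update(assign_identifiers(child, node_path))
        return ids

    scene = {"name": "set", "children": [
        {"name": "droid", "children": [{"name": "head"}, {"name": "torso"}]},
        {"name": "speeder"},
    ]}
    for path, ident in assign_identifiers(scene).items():
        print(ident, path)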