Disney Patent Applications

AUTOMATED STORYBOARDING BASED ON NATURAL LANGUAGE PROCESSING AND 2D/3D PRE-VISUALIZATION

Published: April 11, 2019
Application Number: 20190107927
Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more elements in a script, and various visual representations of the one or more elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information…
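
For illustration only (the element categories, regex patterns, and function below are assumptions, not taken from the application): a parser of this kind might pull scene headings and character cues out of screenplay-formatted text along these lines:

    import re

    # Hypothetical screenplay parser: extracts scene headings and character cues
    # from conventional screenplay formatting. Purely illustrative of the kind of
    # "element identification" the abstract describes.
    SCENE_HEADING = re.compile(r"^(INT\.|EXT\.)\s+(.+?)\s*-\s*(DAY|NIGHT)$")
    CHARACTER_CUE = re.compile(r"^[A-Z][A-Z ']+$")  # all-caps line introducing dialogue

    def parse_script(lines):
        elements = {"scenes": [], "characters": set()}
        for line in lines:
            line = line.strip()
            heading = SCENE_HEADING.match(line)
            if heading:
                elements["scenes"].append({"location": heading.group(2),
                                           "time": heading.group(3)})
            elif CHARACTER_CUE.match(line) and len(line.split()) <= 3:
                elements["characters"].add(line)
        return elements

    script = ["EXT. CASTLE COURTYARD - DAY", "MIRA", "We leave at dawn."]
    print(parse_script(script))
    # -> {'scenes': [{'location': 'CASTLE COURTYARD', 'time': 'DAY'}], 'characters': {'MIRA'}}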

CREATION OF NON-LINEARLY CONNECTED TRANSMEDIA CONTENT DATA

Published: March 28, 2019
Application Number: 20190098370
The invention relates to systems and methods for manipulating non-linearly connected transmedia content, in particular for creating, processing and/or managing non-linearly connected transmedia content and for tracking content creation and attributing transmedia content to one or more creators. Specifically, the invention involves creating a transmedia content data item by a first user and storing the transmedia content data item in a data store, along with a record indicating an…
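
As a minimal sketch of "storing the content item along with a record" of its creator (the data store, field names, and helper below are invented for illustration, not drawn from the application):

    import uuid
    from datetime import datetime, timezone

    # Illustrative in-memory "data store"; a real system would use a database.
    DATA_STORE = {}

    def create_transmedia_item(creator_id, media_type, payload_ref):
        """Store a content item together with a record attributing it to its creator."""
        item_id = str(uuid.uuid4())
        DATA_STORE[item_id] = {
            "media_type": media_type,        # e.g. "video", "audio", "text"
            "payload_ref": payload_ref,      # pointer to the actual media asset
            "attribution": {
                "creator_id": creator_id,
                "created_at": datetime.now(timezone.utc).isoformat(),
            },
        }
        return item_id

    item = create_transmedia_item("user-42", "video", "assets/clip-001.mp4")
    print(DATA_STORE[item]["attribution"]["creator_id"])  # -> user-42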

LIGHT FIELD BASED PROJECTOR CALIBRATION METHOD AND SYSTEM

Published: March 28, 2019
Application Number: 20190098270
The present disclosure relates to a method for calibrating a projector. In one example, the method includes receiving by a processing element light field data corresponding to a calibration image projected by a projector and captured by a light field capturing device, and modeling by a processing element one or more intrinsic properties of the projector using the light field data and the calibration image. The calibration image may be projected by the projector directly into the light…

SYSTEM FOR OPTIMIZED EMBEDDING AND ANALYSIS OF BRANDED CONTENT

Published: March 28, 2019
Application Number: 20190096094
The present disclosure relates to an apparatus, system and method for processing transmedia content data. More specifically, the disclosure provides for identifying and inserting one item of media content within another item of media content, e.g. inserting a video within a video, such that the first item of media content appears as part of the second item. The invention involves analysing a first visual media item to identify one or more spatial locations to insert the second visual…
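
The abstract does not say how the spatial locations are chosen; one naive way to illustrate the idea (the block size, threshold, and flat-region heuristic are all assumptions) is to flag low-texture blocks of a frame as candidate spots for the inserted item:

    import numpy as np

    def candidate_regions(frame, block=32, max_std=5.0):
        """Naive illustration: treat low-texture blocks of a grayscale frame as
        candidate locations for inserting a second media item."""
        h, w = frame.shape
        spots = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = frame[y:y + block, x:x + block]
                if patch.std() < max_std:     # flat area, e.g. a wall or sky
                    spots.append((x, y, block, block))
        return spots

    frame = np.full((128, 128), 120, dtype=np.uint8)             # synthetic flat frame
    frame[32:96, 32:96] = np.random.randint(0, 255, (64, 64))    # textured centre
    print(len(candidate_regions(frame)))                         # number of flat blocks found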

MANIPULATION OF NON-LINEARLY CONNECTED TRANSMEDIA CONTENT DATA

Published: March 28, 2019
Application Number: 20190095446
The invention relates to systems and methods for navigating, outputting and displaying non-linearly connected groups of transmedia content. Specifically, the invention involves retrieving, from a database, an ordered group of transmedia content data objects comprising a plurality of transmedia content data objects and linking data, whereby each element of the linking data defines a directional link from one of the transmedia content data objects to another of the transmedia content data…
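
The "linking data" described here amounts to a directed graph over content items; a sketch of retrieving and walking such a structure (the object layout and traversal order are assumptions, not taken from the application):

    # Illustrative: transmedia items as nodes, linking data as directed edges.
    items = {"A": "intro video", "B": "comic panel", "C": "audio epilogue"}
    linking_data = [("A", "B"), ("B", "C")]   # each tuple is a directional link

    def walk(start, links):
        """Follow directional links from `start`, yielding item IDs in link order."""
        outgoing = {}
        for src, dst in links:
            outgoing.setdefault(src, []).append(dst)
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            yield node
            stack.extend(reversed(outgoing.get(node, [])))  # visit successors next

    print([items[n] for n in walk("A", linking_data)])
    # -> ['intro video', 'comic panel', 'audio epilogue']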

MANIPULATION OF NON-LINEARLY CONNECTED TRANSMEDIA CONTENT DATA

Published: March 28, 2019
Application Number: 20190095435
The present invention relates to systems and methods for manipulating non-linearly connected transmedia content, in particular for creating, processing and managing non-linearly connected transmedia content. The disclosure includes content creation modes and version control systems for non-linearly connected transmedia content data, in which individual transmedia content data items are connected to each other via directional linking data.

AUGMENTED REALITY TRAVEL ROUTE PLANNING

Published: March 14, 2019
Application Number: 20190077504
An apparatus such as a head-mounted display (HMD) may have a camera for capturing a visual scene for presentation via the HMD. A user of the apparatus may specify a pre-planned travel route for a vehicle within the visual scene via an augmented reality (AR) experience generated by the HMD. The pre-planned travel route may be overlaid on the visual scene in the AR experience so that the user can account for real-time environmental conditions determined through the AR experience. The…

COLLABORATIVE MULTI-MODAL MIXED-REALITY SYSTEM AND METHODS LEVERAGING RECONFIGURABLE TANGIBLE USER INTERFACES FOR THE PRODUCTION OF IMMERSIVE, CINEMATIC, AND INTERACTIVE CONTENT

Published: February 28, 2019
Application Number: 20190066387
The disclosure is directed to collaborative multi-modal mixed-reality systems and methods leveraging reconfigurable tangible user interfaces for the production of immersive, cinematic, and interactive content. As described herein, content may be created either as a mixture of live-action media and computer-generated media combined in context, or entirely from computer-generated media. The technology described herein may leverage physical and software-based…

DRONES GENERATING VARIOUS AIR FLOW EFFECTS AROUND A VIRTUAL REALITY OR AUGMENTED REALITY USER

Published: February 28, 2019
Application Number: 20190066359
Systems and methods described herein are directed to enhancing a virtual reality (VR) or augmented reality (AR) experience by using one or more unmanned vehicles to generate effects around a user of a head-mounted display (HMD). The generated effects may be synchronized with VR/AR content presented to the user of the HMD. Particular systems and methods described herein are directed to enhancing a VR/AR experience by using one or more unmanned aerial vehicles (UAV) to generate air flow…

ADAPTIVE VR/AR VIEWING BASED ON A USER'S EYE CONDITION PROFILE

Published: February 21, 2019
Application Number: 20190056780
Techniques described herein are directed to adaptive virtual reality and augmented reality viewing based on a user's eye condition data. In a first implementation, a software application renders video content based on the user's eye condition data by mapping the user's eye condition data to video rendering parameters. The video content rendered based on the user's eye condition data may be made available to a virtual reality/augmented reality player and played using a head mounted…
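
A toy illustration of "mapping the user's eye condition data to video rendering parameters"; the condition names and parameter values below are invented for the example:

    # Hypothetical mapping from an eye-condition profile to rendering parameters.
    CONDITION_PRESETS = {
        "low_acuity":        {"text_scale": 1.5, "contrast_boost": 1.2},
        "color_deficiency":  {"palette": "deuteranopia_safe"},
        "light_sensitivity": {"max_luminance_nits": 80},
    }

    def rendering_parameters(eye_profile):
        """Start from defaults and fold in a preset for each condition in the profile."""
        params = {"text_scale": 1.0, "contrast_boost": 1.0,
                  "palette": "default", "max_luminance_nits": 200}
        for condition in eye_profile.get("conditions", []):
            params.update(CONDITION_PRESETS.get(condition, {}))
        return params

    profile = {"conditions": ["low_acuity", "light_sensitivity"]}
    print(rendering_parameters(profile))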

INTRINSIC COLOR CAMERA

Published: January 31, 2019
Application Number: 20190037124
Systems and methods described herein are directed to capturing intrinsic color images of subjects. A camera may be equipped with a light source that is coaxial to the camera's image sensor and configured to emit a pulse of light of short duration. During image capture of a subject, the camera light source may emit the pulse of light through the lens barrel of the camera and stop emission of light before the reflected light from the light source returns. Thereafter, the camera lens…
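
The timing constraint described (stop emitting before the reflected pulse returns) is bounded by the round-trip time of light; a back-of-the-envelope check for a few assumed subject distances:

    # Round-trip travel time of light for an assumed subject distance: per the
    # abstract, the emitted pulse must end within this interval.
    SPEED_OF_LIGHT = 299_792_458.0  # m/s

    def max_pulse_duration(subject_distance_m):
        """Upper bound on pulse length: the light's round trip to the subject."""
        return 2.0 * subject_distance_m / SPEED_OF_LIGHT

    for d in (1.0, 3.0, 10.0):
        print(f"{d:>5.1f} m subject -> pulse must end within "
              f"{max_pulse_duration(d) * 1e9:.2f} ns")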

SYSTEMS AND METHODS FOR PREDICTIVE DELIVERY OF HIGH BIT-RATE CONTENT FOR PLAYBACK

Published: January 31, 2019
Application Number: 20190036823
The present disclosure provides for systems and methods for predictive delivery of high bit-rate content. The disclosed systems and methods provide an adaptive-bit-rate streaming (ABS) system with more robust information, thereby allowing more intelligent pre-caching of the media content. By providing greater information to the ABS system, the disclosed systems are able to foresee higher bit-rate segments that require greater attention, allowing the system to use such information to…
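
As a rough illustration of "foreseeing higher bit-rate segments" and pre-caching them, a player might scan the upcoming segment list and prefetch segments whose bit rate exceeds the predicted bandwidth; the manifest layout and threshold rule below are assumptions:

    # Illustrative pre-caching decision for adaptive-bit-rate streaming.
    upcoming_segments = [
        {"index": 10, "bitrate_kbps": 3_000},
        {"index": 11, "bitrate_kbps": 12_000},   # demanding scene
        {"index": 12, "bitrate_kbps": 3_500},
        {"index": 13, "bitrate_kbps": 15_000},   # demanding scene
    ]

    def segments_to_precache(segments, predicted_bandwidth_kbps):
        """Flag segments that cannot be downloaded in real time at the predicted bandwidth."""
        return [s["index"] for s in segments
                if s["bitrate_kbps"] > predicted_bandwidth_kbps]

    print(segments_to_precache(upcoming_segments, predicted_bandwidth_kbps=8_000))
    # -> [11, 13]: fetch these early, while cheaper segments are playing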

METHODS AND SYSTEMS FOR DIGITAL FILE DISTRIBUTION TO THEATERS

Published: January 17, 2019
Application Number: 20190020911
Systems and methods for efficiently distributing digital cinema files from a server to a theater are disclosed. The systems and methods may comply with DCI requirements. A server updates an original composition playlist, updates a track file associated with the updated composition playlist, and distributes the updated composition playlist and updated track file to one or more theaters over a communications network. The theater is configured to store the updated track file at a storage…

MENU NAVIGATION MODE FOR MEDIA DISCS

Published: January 17, 2019
Application Number: 20190019536
Systems and methods are provided for reordering and/or bypassing certain informational content or menus that are conventionally presented prior to playback of media content stored on physical media discs. Upon initial use of a physical media disc, certain information content or menus may be presented to a user or viewer, for example, piracy warnings, language selection menus, etc. However, upon subsequent use of the physical media disc, such informational content or menus may be…
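
A toy sketch of the "subsequent use" behaviour: remember, per disc, that the pre-roll content has already been shown and bypass it next time. The disc IDs and player state file are invented for illustration:

    import json, os

    STATE_FILE = "seen_discs.json"   # hypothetical persistent player state

    def on_disc_inserted(disc_id):
        """Play warnings and menus only on a disc's first use; skip them afterwards."""
        state = {}
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                state = json.load(f)
        if disc_id not in state:
            print("Playing piracy warning, language menu, trailers ...")
        else:
            print("Skipping pre-roll; jumping to the main feature.")
        state[disc_id] = True
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    on_disc_inserted("disc-0001")   # first insertion: full pre-roll
    on_disc_inserted("disc-0001")   # subsequent insertion: bypassed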

SYSTEMS AND METHODS FOR VIDEO CLIP CREATION, CURATION, AND INTERACTION

Published: January 3, 2019
Application Number: 20190005981
Disclosed are systems and methods for user interaction with and curation of digital media content, such that users are able to specify a particular clip of video content, and utilize the clip in a desired way. The disclosed systems and methods allow users to view video content, select video clips within the video content, save video clips into a collection of video clips, and curate the collection of video clips. The disclosed systems and methods also allow users to view bookmarks…
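
A minimal sketch of the clip abstraction itself (a reference into the source video plus in/out points) and a collection that can be curated; the class and field names are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Clip:
        """A user-specified clip: a source video plus in/out timestamps in seconds."""
        video_id: str
        start: float
        end: float
        label: str = ""

    @dataclass
    class ClipCollection:
        """A curated, ordered collection of saved clips."""
        name: str
        clips: list = field(default_factory=list)

        def add(self, clip: Clip):
            if clip.end <= clip.start:
                raise ValueError("clip end must come after its start")
            self.clips.append(clip)

    favorites = ClipCollection("favorites")
    favorites.add(Clip("movie-123", start=125.0, end=147.5, label="best line"))
    print(len(favorites.clips))  # -> 1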

PHYSICAL NAVIGATION GUIDED VIA STORY-BASED AUGMENTED AND/OR MIXED REALITY EXPERIENCES

Published: November 15, 2018
Application Number: 20180328751
Systems and methods for providing physical navigation guidance via augmented reality (AR) environments are provided. An AR experience is generated based on navigation information including a starting location and a destination location. The AR experience comprises physical navigation guidance from the starting location to the destination location using virtual elements associated with the starting location, the destination location, or both. For example, virtual…

SYSTEMS AND METHODS FOR DIFFERENTIAL MEDIA DISTRIBUTION

Published: November 8, 2018
Application Number: 20180324474
Systems for electronic media distribution include a differential versioning server configured to receive a first media file including a first set of data with a first set of attributes and a second media file including a second set of data with a second set of attributes, generate a first differential data file as a function of differences between the first media file and the second media file, and generate a first differential metadata file including an encoding data set configured to…
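
One way to read "a differential data file as a function of differences between the first and second media file" is a binary diff plus metadata needed to validate reconstruction; a crude byte-level illustration (not the encoding the application describes):

    import hashlib

    def make_differential(first: bytes, second: bytes):
        """Record (offset, replacement) runs where the files differ, plus metadata."""
        diffs, i = [], 0
        while i < len(second):
            if i >= len(first) or first[i] != second[i]:
                start = i
                while i < len(second) and (i >= len(first) or first[i] != second[i]):
                    i += 1
                diffs.append((start, second[start:i]))
            else:
                i += 1
        metadata = {"target_length": len(second),
                    "target_sha256": hashlib.sha256(second).hexdigest()}
        return diffs, metadata

    def apply_differential(first: bytes, diffs, metadata):
        """Rebuild the second file from the first file plus the differential data."""
        out = bytearray(first[:metadata["target_length"]].ljust(metadata["target_length"], b"\0"))
        for offset, chunk in diffs:
            out[offset:offset + len(chunk)] = chunk
        assert hashlib.sha256(out).hexdigest() == metadata["target_sha256"]
        return bytes(out)

    v1 = b"title card v1 -- scene A -- credits"
    v2 = b"title card v2 -- scene A -- credits (extended)"
    diffs, meta = make_differential(v1, v2)
    print(apply_differential(v1, diffs, meta) == v2)  # -> True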

3D MODEL CONSTRUCTION FROM 2D ASSETS

Published: November 8, 2018
Application Number: 20180322695
Features of the surface of an object of interest captured in a two-dimensional (2D) image are identified and marked for use in point matching to align multiple 2D images and generate a point cloud representative of the surface of the object in a photogrammetry process. The features, which represent actual surface features of the object, may have their local contrast enhanced to facilitate their identification. Reflections on the surface of the object are suppressed by correlating such…

POINT CLOUD NOISE AND OUTLIER REMOVAL FOR IMAGE-BASED 3D RECONSTRUCTION

Published: November 1, 2018
Application Number: 20180315168
Enhanced removing of noise and outliers from one or more point sets generated by image-based 3D reconstruction techniques is provided. In accordance with the disclosure, input images and corresponding depth maps can be used to remove pixels that are geometrically and/or photometrically inconsistent with the colored surface implied by the input images. This allows standard surface reconstruction methods (such as Poisson surface reconstruction) to perform less smoothing and thus achieve…
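
A compact sketch of the geometric-consistency idea: project each candidate point into the input views and keep it only if its depth agrees with enough of the corresponding depth maps. The pinhole camera model, tolerance, and voting rule are assumptions, not the disclosed method:

    import numpy as np

    def consistent_points(points, cameras, depth_maps, tol=0.02, min_views=2):
        """Keep 3D points whose projected depth matches at least `min_views` depth maps.
        `cameras` holds (K, R, t) per view; purely illustrative pinhole model."""
        keep = []
        for p in points:
            votes = 0
            for (K, R, t), depth in zip(cameras, depth_maps):
                cam_p = R @ p + t                        # world -> camera coordinates
                if cam_p[2] <= 0:
                    continue                             # behind the camera
                uvw = K @ cam_p
                u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
                if 0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]:
                    if abs(depth[v, u] - cam_p[2]) < tol * cam_p[2]:
                        votes += 1                       # geometrically consistent in this view
            if votes >= min_views:
                keep.append(p)
        return np.array(keep)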

VIRTUAL REALITY EXPERIENCE SCRIPTWRITING

Published: October 18, 2018
Application Number: 20180300958
Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in a script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR…