Abstract: Systems and methods create and distribute addressable virtual content with interactivity. The virtual content may depict a live event and may be customized for each individual user based on dynamic characteristics (e.g., habits, preferences, etc.) of the user that are captured during user interaction with the virtual content. The virtual content is generated with low latency between the actual event and the live content, allowing the user to interactively participate in actions related to the live event. The virtual content may represent a studio with multiple display screens that each show different live content (of the same or different live events), and may also include graphic displays with related data such as statistics corresponding to the live event, the athletes at the event, and so on. The content of the display screens and graphics may be automatically selected based on the dynamic characteristics of the user.
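The automatic selection described above can be illustrated with a minimal sketch. All names here (`Screen`, `select_screen_content`, the tag sets) are hypothetical illustrations, not terms from the abstract; the sketch assumes the user's dynamic characteristics are reduced to a set of preference tags and each screen feed is scored by how many tags it matches.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Screen:
    """One display screen in the virtual studio (hypothetical model)."""
    event: str
    tags: frozenset  # descriptive tags for the feed shown on this screen

def select_screen_content(screens, user_prefs):
    """Pick the screen feed whose tags best overlap the user's dynamic
    preferences, using a simple count-of-matches score (an assumption;
    a real system could use any learned relevance model)."""
    return max(screens, key=lambda s: len(s.tags & user_prefs))

screens = [
    Screen("game-1", frozenset({"basketball", "stats"})),
    Screen("game-2", frozenset({"soccer", "highlights"})),
]
# preferences captured from prior interaction with the virtual content
prefs = frozenset({"soccer", "stats", "highlights"})
best = select_screen_content(screens, prefs)  # game-2 matches two tags
```

The same scoring could drive the studio's graphic displays, ranking candidate statistics panels by overlap with the captured preferences.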
Abstract: Systems and processes generate a viewing experience by determining location data and movement data of (a) at least one object and (b) at least one participant within an event area. A three-dimensional model of the event area, the participant, and the object is determined based upon the location data and the movement data. A viewpoint of a spectator defines an origin, relative to the three-dimensional model, and a direction of the viewing experience. The viewing experience is generated for the viewpoint at least in part from the three-dimensional model to include one or more of augmented reality, mixed reality, extended reality, and virtual reality.
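The viewpoint described above, an origin plus a direction relative to the three-dimensional model, can be sketched as a coordinate transform. The function names and the yaw-only parameterization below are assumptions for illustration; a full renderer would use complete camera matrices, but the core step of expressing model-space positions relative to the spectator's origin and view direction is the same.

```python
import math

def look_at_direction(yaw_deg):
    """Unit view direction in the horizontal plane for a given yaw angle
    (a simplified, hypothetical parameterization of the spectator's
    viewpoint direction)."""
    r = math.radians(yaw_deg)
    return (math.cos(r), math.sin(r), 0.0)

def to_view_space(point, origin, forward):
    """Express a point of the three-dimensional model (e.g., a tracked
    participant or object) relative to the spectator's viewpoint,
    returning (depth along forward, lateral offset, height)."""
    dx = [p - o for p, o in zip(point, origin)]
    fx, fy, _ = forward
    # right vector perpendicular to forward in the horizontal plane
    rx, ry = fy, -fx
    depth = dx[0] * fx + dx[1] * fy
    lateral = dx[0] * rx + dx[1] * ry
    return (depth, lateral, dx[2])

# Example: a participant at (10, 0, 2) in the event-area model, viewed by
# a spectator at the model origin looking along +x (yaw 0).
view = to_view_space((10.0, 0.0, 2.0), (0.0, 0.0, 0.0), look_at_direction(0.0))
```

Repeating this transform per frame as the location and movement data update is what lets the generated experience track the live event from the spectator's chosen viewpoint, whether rendered as AR, MR, XR, or VR.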