Patents by Inventor Eric Zavesky

Eric Zavesky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11695986
    Abstract: Aspects of the subject disclosure may include, for example, receiving, for a selected channel, a first video; processing the first video for rendering on a display being viewed by a user; selecting from among a plurality of channels a subset of channels for which to pre-fetch data, the selecting being according to predictions that each channel of the subset of channels is more likely to be requested by the user than each channel of the plurality of channels that is not part of the subset; prioritizing the subset of channels such that a first channel of the subset of channels has a priority over a second channel of the subset of channels, the first channel being given the priority based upon a prediction that the first channel is more likely to be requested by the user than the second channel; pre-fetching, for the first channel, first data of a first type and second data of a second type; and pre-fetching, for the second channel, third data of the first type without pre-fetching any data of the second type.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: July 4, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Alexander Ruditsky, Eric Zavesky
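The pre-fetch prioritization this abstract describes can be sketched roughly as follows. This is an illustration only, not the patented implementation: the function name, channel names, and the two data "types" are hypothetical, and the likelihood scores stand in for whatever prediction model the system actually uses.

```python
def plan_prefetch(likelihood_by_channel, subset_size=2):
    """Return a pre-fetch plan: rank channels by predicted request
    likelihood, keep a subset, and fetch both data types only for the
    highest-priority channel; lower-priority channels get one type."""
    ranked = sorted(likelihood_by_channel,
                    key=likelihood_by_channel.get, reverse=True)
    subset = ranked[:subset_size]
    plan = {}
    for i, channel in enumerate(subset):
        # The most likely next channel is pre-fetched more aggressively.
        plan[channel] = ["type1", "type2"] if i == 0 else ["type1"]
    return plan

plan = plan_prefetch({"news": 0.6, "sports": 0.3, "movies": 0.1})
# plan == {"news": ["type1", "type2"], "sports": ["type1"]}
```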
  • Publication number: 20230209003
    Abstract: In one example, a method performed by a processing system including at least one processor includes identifying a background for a scene of video content, generating a three-dimensional model and visual effects for an object appearing in the background for the scene of video content, displaying a three-dimensional simulation of the background for the scene of video content, including the three-dimensional model and visual effects for the object, modifying the three-dimensional simulation of the background for the scene of video content based on user feedback, capturing video footage of a live action subject appearing together with the background for the scene of video content, where the live action subject appearing together with the background for the scene of video content creates the scene of video content, and saving the scene of video content.
    Type: Application
    Filed: December 28, 2021
    Publication date: June 29, 2023
    Inventors: Eric Zavesky, Tan Xu, Zhengyi Zhou
  • Publication number: 20230206259
    Abstract: Methods, systems, and apparatuses may provide for the auto-determination of partial usage of a physical environment and the use of derived intelligence to take various actions. This may allow for partial maintenance of the physical environment based on a single use or use over time.
    Type: Application
    Filed: March 6, 2023
    Publication date: June 29, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Robert Koch, Nikhil Marathe, James Pratt, Ari Craine, Eric Zavesky, Timothy Innes, Nigel Bradley
  • Publication number: 20230206146
    Abstract: A method for gaze-based workflow adaptation includes identifying a workflow in which a user is engaged, where the workflow comprises a plurality of tasks that collectively achieves a desired goal, creating a trigger for a given task of the plurality of tasks, wherein the trigger specifies an action to be automatically taken in response to a gaze of the user meeting a defined criterion, monitoring a progress of the workflow, monitoring the gaze of the user, and sending a signal to a remote device in response to the gaze of the user meeting the defined criterion, wherein the signal instructs the remote device to take the action.
    Type: Application
    Filed: February 27, 2023
    Publication date: June 29, 2023
    Inventors: Richard Palazzo, Lee Begeja, David Crawford Gibbon, Zhu Liu, Tan Xu, Eric Zavesky
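The trigger mechanism this abstract describes can be sketched as a lookup from (task, gaze target) to an action, producing a signal for a remote device when the criterion is met. This is a minimal hypothetical sketch; the trigger keys, action names, and signal shape are invented for illustration.

```python
def check_gaze_trigger(task, gaze_target, triggers):
    """triggers maps (task, gaze_target) -> action. Returns the signal
    to send to the remote device, or None if no criterion is met."""
    action = triggers.get((task, gaze_target))
    if action is None:
        return None
    # The signal instructs the remote device to take the action.
    return {"device": "remote", "action": action}

triggers = {("assemble", "part_bin"): "highlight_next_part"}
signal = check_gaze_trigger("assemble", "part_bin", triggers)
# signal == {"device": "remote", "action": "highlight_next_part"}
```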
  • Publication number: 20230206282
    Abstract: Concepts and technologies disclosed herein are directed to tiered immersive experiences for bimodal avatar groups. According to one aspect disclosed herein, a virtual assistant (“VA”) can be executed by a user device. The VA can obtain a preference for an immersive experience. The VA can generate a search request directed to an immersive experience marketplace. The search request can include the preference and a tier desired for the immersive experience. The user device can send the search request to the immersive experience marketplace, and in response, the user device can receive a search result that identifies at least one match for the immersive experience based, at least in part, upon the preference and the tier.
    Type: Application
    Filed: December 29, 2021
    Publication date: June 29, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Rashmi Palamadai, Eric Zavesky, Nigel Bradley
  • Publication number: 20230206096
    Abstract: A processing system including at least one processor may obtain description information of a first machine learning model, obtain a set of interpretation criteria for the first machine learning model, and generate, via a second machine learning model, an explanation text providing an interpretation of the first machine learning model in accordance with the set of interpretation criteria and the description information of the first machine learning model.
    Type: Application
    Filed: December 27, 2021
    Publication date: June 29, 2023
    Inventors: Jean-Francois Paiement, Eric Zavesky, Zhengyi Zhou, David Gibbon
  • Patent number: 11689782
    Abstract: Methods, computer-readable media, and devices for tracking an accessing of a media content via a watermark embedded by a network node are disclosed. For example, a processing system including at least one processor may receive, from a first network node, a first copy of a watermark that is embedded by the first network node in a media content. The processing system may further receive a notification comprising a second copy of the watermark and an identification of a first endpoint device, the notification associated with an accessing of the media content by the first endpoint device, and record the accessing of the media content by the first endpoint device.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: June 27, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Nigel Bradley, Timothy Innes, James Pratt, Eric Zavesky
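The tracking flow this abstract describes can be sketched as a registry keyed by watermark: the embedding node registers its copy, and a later access notification is matched against the registry and recorded. All names here are hypothetical, and the sketch omits the network transport entirely.

```python
access_log = []  # recorded accesses: (content_id, endpoint_device)

def register_watermark(registry, watermark, content_id):
    """The first network node reports the watermark it embedded."""
    registry[watermark] = content_id

def record_access(registry, watermark, endpoint_device):
    """Match the watermark copy from an access notification against
    the registered copy; record the access if they correspond."""
    content_id = registry.get(watermark)
    if content_id is None:
        return False
    access_log.append((content_id, endpoint_device))
    return True
```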
  • Publication number: 20230196771
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a plurality of video volumes of an environment from a plurality of cameras, wherein at least two individual video volumes of the plurality of video volumes depict the environment from different viewpoints, generating a panoptic video feed of the environment from the plurality of video volumes, detecting an event of interest occurring in the panoptic video feed, and isolating a video volume of the event of interest to produce a video excerpt.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 22, 2023
    Inventor: Eric Zavesky
  • Patent number: 11675419
    Abstract: A method includes obtaining a set of components that, when collectively rendered, presents an immersive experience, extracting a narrative from the set of components, learning a plurality of details of the immersive experience that exhibit variance, based on an analysis of the set of components and an analysis of the narrative, presenting a device of a creator of the immersive experience with an identification of the plurality of the details, receiving from the device of the creator, an input, wherein the input defines a variant for a default segment of one component of the set of components, and wherein the variant presents an altered form of at least one detail of the plurality of details that is presented in the default segment, and storing the set of components, the variant, and information indicating how and when to present the variant to a user device in place of the default segment.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: June 13, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Richard Palazzo, Eric Zavesky, Tan Xu, Jean-Francois Paiement
  • Publication number: 20230179942
    Abstract: The technology described herein is generally directed towards the use of spatial audio to provide directional cues to assist a user in looking in a desired direction in a user perceived environment, which can be a real-world or a virtual environment. Location data for a user in three-dimensional space can be obtained. A direction of view of the user within the user-perceived environment is determined. Spatial audio can be output that is perceived as coming from a position within the user-perceived environment, such as to provide directional cues directed to changing the direction of view of the user to a different view. The spatial audio can provide prompts to the user to adjust his or her vision towards the desired direction, for example. The user's location and direction of view can be mapped to audio that is relevant to the user's direction of view when at that location.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
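The directional cue this abstract describes can be illustrated with a small angle calculation: given the user's current view direction and the desired direction (in degrees), place the audio prompt on the side the user should turn toward. This is a simplified 2-D sketch with a hypothetical 10-degree "close enough" tolerance, not the patented method.

```python
def audio_cue(view_deg, target_deg):
    """Return 'left', 'right', or 'ahead' for the perceived position of
    the spatial audio prompt, based on the signed angular difference."""
    # Normalize the difference into (-180, 180] degrees.
    delta = (target_deg - view_deg + 180) % 360 - 180
    if abs(delta) < 10:
        return "ahead"
    return "right" if delta > 0 else "left"

audio_cue(0, 90)   # -> "right": the target is 90 degrees clockwise
audio_cue(0, 270)  # -> "left": shorter turn is counterclockwise
```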
  • Publication number: 20230179673
    Abstract: The disclosed technology is directed towards determining that information of interest, corresponding to a meaningful event, is available to be captured and saved, and capturing the information. When an event is determined to satisfy a defined meaningful event likelihood criterion, sensor data (which can include media data), time data and location data are collected and associated with the meaningful event, e.g., in a data store. A presentation/package is generated from the various data, and maintained for subsequent access, e.g., for sending to a recipient. The presentation can include annotation data. The presentation can be conditionally marked for future presentation if and when certain condition data is satisfied. The presentation can be associated with at least one conditional gift.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230177155
    Abstract: Aspects of the subject disclosure may include, for example, a device having a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations including capturing images generated by a fiducial invoked on a user device; determining a context of the fiducial; detecting an anomaly in the images based on the context; and responsive to detecting the anomaly, providing a notification of the anomaly. Other embodiments are disclosed.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Kostikey Mustakas, Eric Zavesky, James Pratt
  • Publication number: 20230179816
    Abstract: The disclosed technology is directed towards inserting user-personalized or other user-related supplementary media content into primary media content being presented to the user. The personalized media content can be inserted into available insertion slots associated with the primary media content. The inserted content is based on the context of the primary media, e.g., a location or theme of a movie scene. For example, upon obtaining primary media content that is video, supplementary media content related to a group of frames of the primary media content can be determined. Supplementary media content is combined with the primary media content at a presentation position associated with the group of frames to output modified media content. For a video, for example, the supplementary content can be inserted between scenes, overlaid onto a scene, or presented proximate a scene.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230177258
    Abstract: The disclosed technology is directed towards determining a sub-content element within electronically presented media content based on where a user is currently gazing within the content presentation. In the example of an electronic book, the sub-content elements have been previously mapped to their respective pages and coordinates on each page. As a more particular example, a user can be gazing at a certain paragraph on a displayed page of an electronic book, and that information can be detected and used to associate an annotation with that paragraph for personal output and/or sharing with others. A user can input the annotation data in various ways, including verbally for speech recognition or for audio replay.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
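The page mapping this abstract describes can be sketched as a hit test: each paragraph has a pre-mapped bounding box on the page, and the gaze point selects the paragraph to annotate. The data shapes and names are hypothetical, chosen only to make the idea concrete.

```python
def annotate_at_gaze(page_map, gaze_xy, note, annotations):
    """page_map maps paragraph id -> (x0, y0, x1, y1) bounding box on
    the displayed page. Attach the note to the paragraph under the
    gaze point and return its id, or None if the gaze misses all."""
    x, y = gaze_xy
    for para, (x0, y0, x1, y1) in page_map.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            annotations.setdefault(para, []).append(note)
            return para
    return None

page = {"p1": (0, 0, 100, 50), "p2": (0, 60, 100, 110)}
notes = {}
annotate_at_gaze(page, (10, 70), "interesting", notes)
# notes == {"p2": ["interesting"]}
```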
  • Publication number: 20230179814
    Abstract: The technology described herein is generally directed towards allowing a user to switch among media streams of live events, including generally concurrent events, and among different media streams that are available for the same event. The media streams can be virtual reality streams, video streams and audio streams, and can be from different viewing and/or listening perspectives and quality. The user can see (and hear) previews of a stream, and see metadata associated with each available stream, such as viewer rating, cost to use, video quality data, and a count of current viewers. Via a social media service, a user can see whether any friends are also viewing. An event that is not streamed in virtual reality can be viewed via a virtual reality device by presenting the event via a virtual element, such as a virtual television set, within the virtual environment.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Publication number: 20230172534
    Abstract: The technology described herein is generally directed towards collecting digital twin datasets from individual users to represent their individual physical, emotional, chemical and/or environmental conditions. A user's digital twin is matched to one or more other digital twins with similar physical, emotional, chemical, and/or environmental conditions. The matched digital twins can share data and learnings via a virtual anonymous relationship. Multiple digital twins that represent users with similar conditions may be found and treated collectively as a group; a user via his or her digital twin can poll the group to receive and process responses from each respondent. A digital twin can be a therapist or a researcher who emulates a patient and uses the emulated digital twin as a proxy to monitor and process the results of other digital twins.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
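The matching step this abstract describes can be sketched as a nearest-neighbor search over condition vectors: a user's digital twin is paired with other twins whose conditions fall within a similarity threshold. The vector encoding and distance threshold are hypothetical simplifications of whatever matching the system actually performs.

```python
def match_twins(user_conditions, others, max_distance=1.0):
    """others maps twin id -> condition vector (same length as
    user_conditions). Return ids of twins whose Euclidean distance
    from the user's conditions is within max_distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [tid for tid, cond in others.items()
            if dist(user_conditions, cond) <= max_distance]

matches = match_twins((1.0, 2.0), {"a": (1.1, 2.1), "b": (5.0, 5.0)})
# matches == ["a"]: twin "a" has similar conditions, "b" does not
```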
  • Publication number: 20230177416
    Abstract: Disclosed is managing (creating and maintaining) attendance/participation data describing remote attendance of attendees of an event, or a replay of the event. The event can be a virtual reality event. When a remote user attends an event, subsequent viewers of the event replay can see a digital representation (e.g., an avatar) of the remote user within the replay as having attended the event in-person. Subsequent replays include digital representations of the remote user and the user(s) that viewed previous replay(s) to emulate their attendance. Users can manage their own attendance data, including to obtain proof of attendance. A user can delete his or her presence at an event, such that replays after the deletion do not include that user's representation. A user can go anonymous with respect to an event, such that any replays after the anonymity choice include only a generic representation of the attendee.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230179840
    Abstract: The disclosed technology is directed towards collecting data, including multimedia content, for creating a multimedia presentation associated with an event being experienced by a user. Location data of a user's mobile device can be used to determine availability of sensor(s) within a proximity of the mobile device, along with one or more multimedia sensors of the mobile device. The user can select a source group of the available sensors to obtain data from them, including multimedia content such as camera video and microphone audio feeds. The sensor data received from the selected sensor source group is used, optionally along with previously recorded multimedia content, to create and output the multimedia presentation associated with the event. The user can add annotations including user input and the previously recorded multimedia content for inclusion in the multimedia presentation.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Patent number: 11670099
    Abstract: A method for validating objects appearing in volumetric video presentations includes obtaining a volumetric video presentation depicting a scene, wherein the volumetric video presentation is associated with a metadata file containing identifying information for the scene, identifying user-generated content that depicts the scene, by matching metadata associated with the user-generated content to the metadata file associated with the volumetric video presentation, comparing a first object appearing in the volumetric video presentation to a corresponding second object appearing in the user-generated content, assigning a score to the first object based on the comparing, wherein the score indicates a probability that the first object has not been manipulated, and altering the volumetric video presentation to filter the first object from the volumetric video presentation when the score falls below a threshold.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: June 6, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Zhu Liu, Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou
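The filtering step this abstract describes can be illustrated with a simple threshold test: each object carries a score indicating the probability it has not been manipulated, and objects scoring below the threshold are filtered from the presentation. The scores and threshold here are hypothetical; the abstract's comparison against user-generated content is assumed to have produced them.

```python
def filter_objects(scores, threshold=0.5):
    """scores maps object id -> probability the object is
    unmanipulated. Return the ids to keep in the volumetric video
    presentation; the rest would be filtered out."""
    return [obj for obj, score in scores.items() if score >= threshold]

kept = filter_objects({"statue": 0.9, "billboard": 0.2})
# kept == ["statue"]: the low-scoring billboard is filtered out
```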
  • Patent number: 11671575
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: June 6, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo