Patents by Inventor Richard Palazzo

Richard Palazzo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230224452
    Abstract: A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 13, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, David Gibbon, Wen-Ling Hsu, Jianxiong Dong, Richard Palazzo
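The abstract above describes annotating voxels with contextual data and tracking actionable positions. As a rough illustration only (not the patented implementation), a minimal container for that pairing might look like the following; all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedVolumetricVideo:
    """Toy container pairing voxel annotations with actionable positions."""
    annotations: dict = field(default_factory=dict)      # voxel (x, y, z) -> contextual data
    actionable_positions: list = field(default_factory=list)

    def annotate_voxels(self, voxels, contextual_data):
        # Attach the same contextual data to every voxel in the region.
        for v in voxels:
            self.annotations[v] = contextual_data

    def add_actionable_position(self, position):
        self.actionable_positions.append(position)

video = AnnotatedVolumetricVideo()
video.annotate_voxels([(0, 0, 0), (0, 0, 1)], {"label": "stage", "info": "performer bio"})
video.add_actionable_position((0, 0, 0))
```

A playback system could then look up `video.annotations` for whichever actionable position the viewer occupies.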
  • Patent number: 11700407
    Abstract: The disclosed technology is directed towards inserting user-personalized or other user-related supplementary media content into primary media content being presented to the user. The personalized media content can be inserted into available insertion slots associated with the primary media content. The inserted content is based on the context of the primary media, e.g., a location or theme of a movie scene. For example, upon obtaining primary media content that is video, supplementary media content related to a group of frames of the primary media content can be determined. Supplementary media content is combined with the primary media content at a presentation position associated with the group of frames to output modified media content. For a video, for example, the supplementary content can be inserted between scenes, overlaid onto a scene, or presented proximate a scene.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: July 11, 2023
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230206146
    Abstract: A method for gaze-based workflow adaptation includes identifying a workflow in which a user is engaged, where the workflow comprises a plurality of tasks that collectively achieves a desired goal, creating a trigger for a given task of the plurality of tasks, wherein the trigger specifies an action to be automatically taken in response to a gaze of the user meeting a defined criterion, monitoring a progress of the workflow, monitoring the gaze of the user, and sending a signal to a remote device in response to the gaze of the user meeting the defined criterion, wherein the signal instructs the remote device to take the action.
    Type: Application
    Filed: February 27, 2023
    Publication date: June 29, 2023
    Inventors: Richard Palazzo, Lee Begeja, David Crawford Gibbon, Zhu Liu, Tan Xu, Eric Zavesky
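The gaze-based trigger described in this abstract — an action fired when the user's gaze meets a defined criterion — can be sketched with a dwell-time criterion. This is an illustrative assumption, not the claimed method; `GazeTrigger` and the sample coordinates are hypothetical:

```python
class GazeTrigger:
    """Fires an action when gaze dwells inside a screen region long enough."""

    def __init__(self, region, min_dwell_samples, action):
        self.region = region          # (x0, y0, x1, y1) in screen coordinates
        self.min_dwell = min_dwell_samples
        self.action = action          # callable standing in for the signal to the remote device
        self._dwell = 0

    def observe(self, gaze_xy):
        x, y = gaze_xy
        x0, y0, x1, y1 = self.region
        if x0 <= x <= x1 and y0 <= y <= y1:
            self._dwell += 1
            if self._dwell >= self.min_dwell:
                self.action()         # criterion met: send the signal
                self._dwell = 0
        else:
            self._dwell = 0           # gaze left the region: reset the dwell count

signals = []
trigger = GazeTrigger((100, 100, 200, 200), 3, lambda: signals.append("advance_step"))
for sample in [(150, 150), (150, 160), (160, 150), (500, 500)]:
    trigger.observe(sample)
# three consecutive in-region samples fire the trigger once
```

A workflow monitor would create one such trigger per task and route the fired action to the remote device.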
  • Patent number: 11675419
    Abstract: A method includes obtaining a set of components that, when collectively rendered, presents an immersive experience, extracting a narrative from the set of components, learning a plurality of details of the immersive experience that exhibit variance, based on an analysis of the set of components and an analysis of the narrative, presenting a device of a creator of the immersive experience with an identification of the plurality of the details, receiving from the device of the creator, an input, wherein the input defines a variant for a default segment of one component of the set of components, and wherein the variant presents an altered form of at least one detail of the plurality of details that is presented in the default segment, and storing the set of components, the variant, and information indicating how and when to present the variant to a user device in place of the default segment.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: June 13, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Richard Palazzo, Eric Zavesky, Tan Xu, Jean-Francois Paiement
  • Publication number: 20230179814
    Abstract: The technology described herein is generally directed towards allowing a user to switch among media streams of live events, including generally concurrent events, and among different media streams that are available for the same event. The media streams can be virtual reality streams, video streams and audio streams, and can be from different viewing and/or listening perspectives and quality. The user can see (and hear) previews of a stream, and see metadata associated with each available stream, such as viewer rating, cost to use, video quality data, and a count of current viewers. Via a social media service, a user can see whether any friends are also viewing. An event that is not streamed in virtual reality can be viewed via a virtual reality device by presenting the event via a virtual element, such as a virtual television set, within the virtual environment.

    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Publication number: 20230179942
    Abstract: The technology described herein is generally directed towards the use of spatial audio to provide directional cues to assist a user in looking in a desired direction in a user perceived environment, which can be a real-world or a virtual environment. Location data for a user in three-dimensional space can be obtained. A direction of view of the user within the user-perceived environment is determined. Spatial audio can be output that is perceived as coming from a position within the user-perceived environment, such as to provide directional cues directed to changing the direction of view of the user to a different view. The spatial audio can provide prompts to the user to adjust his or her vision towards the desired direction, for example. The user's location and direction of view can be mapped to audio that is relevant to the user's direction of view when at that location.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
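The spatial-audio cue in this abstract steers the user's view by making sound appear to come from the desired direction. One common way to approximate that on stereo output is equal-power panning; the sketch below is an assumption for illustration, not the disclosed technique:

```python
import math

def stereo_pan_for_cue(user_heading_deg, target_bearing_deg):
    """Return (left_gain, right_gain) so a cue is perceived toward the target.

    Equal-power panning: a 0-degree difference centers the cue; +90 (target to
    the user's right) pans fully right; -90 pans fully left.
    """
    # Signed angular difference, wrapped into [-180, 180)
    diff = (target_bearing_deg - user_heading_deg + 180) % 360 - 180
    diff = max(-90.0, min(90.0, diff))              # clamp to the frontal arc
    theta = (diff + 90.0) / 180.0 * (math.pi / 2)   # map [-90, 90] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = stereo_pan_for_cue(user_heading_deg=0, target_bearing_deg=90)
# target directly to the right: left gain near 0, right gain near 1
```

Equal-power (cosine/sine) gains keep perceived loudness roughly constant as the cue sweeps across the stereo field, which suits a continuously updated directional prompt.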
  • Publication number: 20230179840
    Abstract: The disclosed technology is directed towards collecting data, including multimedia content, for creating a multimedia presentation associated with an event being experienced by a user. Location data of a user's mobile device can be used to determine availability of sensor(s) within a proximity of the mobile device, along with one or more multimedia sensors of the mobile device. The user can select a source group of the available sensors to obtain data from them, including multimedia content such as camera video and microphone audio feeds. The sensor data received from the selected sensor source group is used, optionally along with previously recorded multimedia content, to create and output the multimedia presentation associated with the event. The user can add annotations including user input and the previously recorded multimedia content for inclusion in the multimedia presentation.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Publication number: 20230179816
    Abstract: The disclosed technology is directed towards inserting user-personalized or other user-related supplementary media content into primary media content being presented to the user. The personalized media content can be inserted into available insertion slots associated with the primary media content. The inserted content is based on the context of the primary media, e.g., a location or theme of a movie scene. For example, upon obtaining primary media content that is video, supplementary media content related to a group of frames of the primary media content can be determined. Supplementary media content is combined with the primary media content at a presentation position associated with the group of frames to output modified media content. For a video, for example, the supplementary content can be inserted between scenes, overlaid onto a scene, or presented proximate a scene.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230177416
    Abstract: Disclosed is managing (creating and maintaining) attendance/participation data describing remote attendance of attendees of an event, or a replay of the event. The event can be a virtual reality event. When a remote user attends an event, subsequent viewers of the event replay can see a digital representation (e.g., an avatar) of the remote user within the replay as having attended the event in person. Subsequent replays include digital representations of the remote user and the user(s) that viewed previous replay(s) to emulate their attendance. Users can manage their own attendance data, including to obtain proof of attendance. A user can delete his or her presence at an event, such that replays after the deletion do not include that user's representation. A user can go anonymous with respect to an event, such that any replays after the anonymity choice include only a generic representation of the attendee.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Publication number: 20230177258
    Abstract: The disclosed technology is directed towards determining a sub-content element within electronically presented media content based on where a user is currently gazing within the content presentation. The sub-content elements of an electronic book, for example, have been previously mapped to their respective pages and coordinates on each page. As a more particular example, a user can be gazing at a certain paragraph on a displayed page of an electronic book, and that information can be detected and used to associate an annotation with that paragraph for personal output and/or sharing with others. A user can input the annotation data in various ways, including verbally for speech recognition or for audio replay.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
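The mapping this abstract relies on — gaze coordinates resolved against pre-mapped element bounding boxes on a page — can be sketched as a simple hit test. All names and coordinates below are hypothetical illustrations, not the disclosed design:

```python
def sub_content_at_gaze(page_map, gaze_xy):
    """Return the id of the mapped sub-content element under the gaze point.

    page_map: list of (element_id, (x0, y0, x1, y1)) bounding boxes for one page.
    """
    x, y = gaze_xy
    for element_id, (x0, y0, x1, y1) in page_map:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return element_id
    return None   # gaze fell in whitespace or margins

# Two paragraphs mapped on a displayed page, then an annotation attached
annotations = {}
page_map = [("para-1", (50, 80, 550, 180)), ("para-2", (50, 200, 550, 320))]
target = sub_content_at_gaze(page_map, (120, 250))   # gaze lands in para-2
if target:
    annotations.setdefault(target, []).append("Note for this paragraph")
```

In practice the annotation text could come from speech recognition, per the abstract, rather than a literal string.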
  • Publication number: 20230172534
    Abstract: The technology described herein is generally directed towards collecting digital twin datasets from individual users to represent their individual physical, emotional, chemical and/or environmental conditions. A user's digital twin is matched to one or more other digital twins with similar physical, emotional, chemical, and/or environmental conditions. The matched digital twins can share data and learnings via a virtual anonymous relationship. Multiple digital twins that represent users with similar conditions may be found and treated collectively as a group; a user, via his or her digital twin, can poll the group to receive and process responses from each respondent. A digital twin can also belong to a therapist or a researcher who emulates a patient and uses the emulated digital twin as a proxy to monitor and process the results of other digital twins.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Richard Palazzo, Robert Koch
  • Publication number: 20230179673
    Abstract: The disclosed technology is directed towards determining that information of interest, corresponding to a meaningful event, is available to be captured and saved, and capturing the information. When an event is determined to satisfy a defined meaningful event likelihood criterion, sensor data (which can include media data), time data and location data are collected and associated with the meaningful event, e.g., in a data store. A presentation/package is generated from the various data, and maintained for subsequent access, e.g., for sending to a recipient. The presentation can include annotation data. The presentation can be conditionally marked for future presentation if and when certain condition data is satisfied. The presentation can be associated with at least one conditional gift.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Richard Palazzo, Brian M. Novack, Rashmi Palamadai, Tan Xu, Eric Zavesky, Ari Craine, Robert Koch
  • Patent number: 11671575
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: June 6, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo
  • Publication number: 20230120772
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first group of volumetric content, generating first metadata for the first group of volumetric content, and storing the first group of volumetric content. Further embodiments include obtaining a second group of volumetric content, generating second metadata for the second group of volumetric content, and storing the second group of volumetric content.
    Type: Application
    Filed: October 14, 2021
    Publication date: April 20, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, David Crawford Gibbon, Tan Xu, Wen-Ling Hsu, Richard Palazzo
  • Publication number: 20230070050
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a first item of media content from a user, where the first item of media content depicts a subject, acquiring a second item of media content, where the second item of media content depicts the subject, compositing the first item of media content and the second item of media content to create, within a metaverse of immersive content, an item of immersive content that depicts the subject, presenting the item of immersive content on a device operated by the user, and adapting the presenting of the item of immersive content in response to a choice made by the user.
    Type: Application
    Filed: September 9, 2021
    Publication date: March 9, 2023
    Inventors: Eric Zavesky, Louis Alexander, David Gibbon, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab, Subhabrata Majumdar, Richard Palazzo
  • Patent number: 11595636
    Abstract: A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: February 28, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, David Gibbon, Wen-Ling Hsu, Jianxiong Dong, Richard Palazzo
  • Patent number: 11593725
    Abstract: A method for gaze-based workflow adaptation includes identifying a workflow in which a user is engaged, where the workflow comprises a plurality of tasks that collectively achieves a desired goal, creating a trigger for a given task of the plurality of tasks, wherein the trigger specifies an action to be automatically taken in response to a gaze of the user meeting a defined criterion, monitoring a progress of the workflow, monitoring the gaze of the user, and sending a signal to a remote device in response to the gaze of the user meeting the defined criterion, wherein the signal instructs the remote device to take the action.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: February 28, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Richard Palazzo, Lee Begeja, David Crawford Gibbon, Zhu Liu, Tan Xu, Eric Zavesky
  • Publication number: 20230059361
    Abstract: In one example, a method performed by a processing system including at least one processor includes rendering an extended reality environment including a first object associated with a first media franchise, identifying a second object to replace the first object in the extended reality environment, wherein the second object is associated with at least one of: the first media franchise or a second media franchise different from the first media franchise, and rendering the second object in the extended reality media in place of the first object.
    Type: Application
    Filed: August 21, 2021
    Publication date: February 23, 2023
    Inventors: Eric Zavesky, Richard Palazzo, Wen-Ling Hsu, Tan Xu, Mohammed Abdel-Wahab
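The replacement step in this abstract — swapping one franchise object for another while keeping its place in the rendered scene — can be sketched as a scene-graph update that preserves the original transform. The scene representation and field names here are hypothetical:

```python
def replace_franchise_object(scene, object_id, replacement):
    """Swap a rendered object for another, preserving its position in the scene.

    scene: dict of object_id -> {"franchise", "model", "position"}
    replacement: new {"franchise", "model"} attributes for the same slot
    """
    old = scene[object_id]
    # Keep the spatial placement so the swap is seamless to the viewer.
    scene[object_id] = {**replacement, "position": old["position"]}
    return scene

scene = {"sword-1": {"franchise": "franchise-A", "model": "sword_a", "position": (1, 0, 2)}}
replace_franchise_object(scene, "sword-1", {"franchise": "franchise-B", "model": "sword_b"})
```

A real renderer would also carry over scale, orientation, and animation state; only position is shown to keep the sketch short.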
  • Publication number: 20230057722
    Abstract: A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
    Type: Application
    Filed: August 23, 2021
    Publication date: February 23, 2023
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, David Gibbon, Wen-Ling Hsu, Jianxiong Dong, Richard Palazzo
  • Publication number: 20220335476
    Abstract: Aspects of the subject disclosure may include, for example, obtaining contextual information relating to a user, where the contextual information comprises location data that identifies a location of the user, identifying media content that relates to the contextual information and to profile data associated with the user, deriving, from the media content, personalized media content based on the profile data associated with the user, causing a target device to provide an immersion environment that includes the personalized media content, detecting user interaction data relating to the immersion environment, and performing an action relating to the personalized media content based on the detecting the user interaction data. Other embodiments are disclosed.
    Type: Application
    Filed: April 20, 2021
    Publication date: October 20, 2022
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Mohammed Abdel-Wahab, Tan Xu, Eric Zavesky, Louis Alexander, Richard Palazzo