Patents by Inventor Terrel Lecesne

Terrel Lecesne is a named inventor on the patent filings listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230318999
    Abstract: In one example, a method includes detecting a new message sent by a first user to a second user, wherein at least the second user is engaged in an extended reality environment, determining whether the new message should be delivered to the second user while the second user is engaged in the extended reality environment, and sending the new message to the second user when it is determined that the new message should be delivered to the second user while the second user is engaged in the extended reality environment, wherein the sending is performed according to a routing strategy that is based on at least one of: a context of the second user or a preference of the second user.
    Type: Application
    Filed: April 4, 2022
    Publication date: October 5, 2023
    Inventors: Terrel Lecesne, Jason Decuir, Eric Zavesky, James Pratt
  • Publication number: 20230168737
    Abstract: A processing system having at least one processor may detect a first object in a first video of a first user and detect a second object in a second video of a second user, where the first video and the second video are part of a visual communication session between the first user and the second user. The processing system may further detect a first action in the first video relative to the first object, detect a second action in the second video relative to the second object, detect a difference between the first action and the second action, and provide a notification indicative of the difference.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 1, 2023
    Inventors: James Pratt, Jason Decuir, Terrel Lecesne, Eric Zavesky
  • Publication number: 20230079099
    Abstract: An example method performed by a processing system includes receiving a request from a first user to render an extended reality environment, wherein the request includes a definition of a first policy that governs user behavior within the extended reality environment, rendering the extended reality environment by presenting content contributed by at least one user in the extended reality environment, monitoring the extended reality environment to ensure that the rendering results in a compliance of the extended reality environment with the first policy, detecting that a portion of the content contributed by at least one other user of the extended reality environment results in the extended reality environment failing to comply with the first policy, and modifying a presentation of the portion of content in the extended reality environment in response to the detecting, wherein the modifying results in the compliance of the extended reality environment with the first policy.
    Type: Application
    Filed: November 7, 2022
    Publication date: March 16, 2023
    Inventors: John Oetting, Eric Zavesky, James Pratt, Jason Decuir, Terrel Lecesne
  • Publication number: 20230056578
    Abstract: An example method performed by a processing system includes retrieving a digital model of a media element from a database storing a plurality of media elements, wherein the media element is to be inserted into a scene of an audiovisual media, rendering the media element in the scene of the audiovisual media, based on the digital model of the media element and on metadata associated with the digital model to produce a rendered media element, wherein the metadata describes a characteristic of the media element and a limit on the characteristic, and inserting the rendered media element into the scene of the audiovisual media.
    Type: Application
    Filed: October 10, 2022
    Publication date: February 23, 2023
    Inventors: John Oetting, Eric Zavesky, James Pratt, Jason Decuir, Terrel Lecesne
  • Publication number: 20230052265
    Abstract: A processing system having at least one processor may detect a first object in a first video of a first user and detect a second object in a second video of a second user, where the first video and the second video are part of a visual communication session between the first user and the second user. The processing system may further detect a first action in the first video relative to the first object, detect a second action in the second video relative to the second object, detect a difference between the first action and the second action, and provide a notification indicative of the difference.
    Type: Application
    Filed: August 16, 2021
    Publication date: February 16, 2023
    Inventors: James Pratt, Jason Decuir, Terrel Lecesne, Eric Zavesky
  • Publication number: 20230040884
    Abstract: In one example, a method includes presenting an extended reality (XR) experience to a plurality of user devices, wherein the presenting includes presenting a time control along with an XR stream, receiving a first signal from a first user device of the plurality of user devices via the time control, wherein the first signal indicates that a user of the first user device wishes to shift the XR stream to a first point in time that is different from a time that is currently being rendered in the XR stream, and presenting a personal XR environment to the first user device in response to the first signal, wherein the personal XR environment presents the first point in time in the XR stream to the first user device without changing a time point of the XR stream that is currently being presented to other user devices of the plurality of user devices.
    Type: Application
    Filed: October 24, 2022
    Publication date: February 9, 2023
    Inventors: Eric Zavesky, John Oetting, James Pratt, Terrel Lecesne, Jason Decuir
  • Patent number: 11567572
    Abstract: A processing system having at least one processor may detect a first object in a first video of a first user and detect a second object in a second video of a second user, where the first video and the second video are part of a visual communication session between the first user and the second user. The processing system may further detect a first action in the first video relative to the first object, detect a second action in the second video relative to the second object, detect a difference between the first action and the second action, and provide a notification indicative of the difference.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: January 31, 2023
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: James Pratt, Jason Decuir, Terrel Lecesne, Eric Zavesky
  • Publication number: 20220368770
    Abstract: An example method includes obtaining a profile of a user using a user device to present an immersive experience, wherein the profile specifies a user reaction indicator and a value that represents a baseline for the user reaction indicator, presenting the immersive experience on the user device, receiving user state data including a current value of the user reaction indicator, determining that the current value of the user reaction indicator deviates from the value that represents the baseline by more than a threshold, in response to the determining, selecting, from among a plurality of variants for a segment of the immersive experience, a variant that is expected to cause the current value of the user reaction indicator to adjust to a value that does not deviate from the value that represents the baseline by more than the threshold, and presenting the variant that is selected on the user device.
    Type: Application
    Filed: August 1, 2022
    Publication date: November 17, 2022
    Inventors: Terrel Lecesne, John Oetting, Eric Zavesky, James H. Pratt, Jason Decuir
  • Patent number: 11494951
    Abstract: An example method performed by a processing system includes receiving a request from a first user to render an extended reality environment, wherein the request includes a definition of a first policy that governs user behavior within the extended reality environment, rendering the extended reality environment by presenting content contributed by at least one user in the extended reality environment, monitoring the extended reality environment to ensure that the rendering results in a compliance of the extended reality environment with the first policy, detecting that a portion of the content contributed by at least one other user of the extended reality environment results in the extended reality environment failing to comply with the first policy, and modifying a presentation of the portion of content in the extended reality environment in response to the detecting, wherein the modifying results in the compliance of the extended reality environment with the first policy.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: November 8, 2022
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: John Oetting, Eric Zavesky, James Pratt, Jason Decuir, Terrel Lecesne
  • Publication number: 20220345622
    Abstract: Concepts and technologies are disclosed herein for a content capture application and/or service. In one example embodiment, an application/service can determine a project definition for a media project that can include a video that is to be captured via a user device having a camera. The application/service can determine a device capability of the user device including a video capability and a networking capability. The application/service can determine, based on the project definition and the device capability, a capture suggestion. The capture suggestion can include instructions relating to content to be included in the video. The device can provide the capture suggestion to the user device, obtain media content based on the video, assemble the media project including the media content, and output the media project to a recipient.
    Type: Application
    Filed: July 11, 2022
    Publication date: October 27, 2022
    Applicants: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: Eric Zavesky, Lee Begeja, David Crawford Gibbon, Jean-Francois Paiement, Tan Xu, Terrel Lecesne
  • Patent number: 11481983
    Abstract: In one example, a method includes presenting an extended reality (XR) experience to a plurality of user devices, wherein the presenting includes presenting a time control along with an XR stream, receiving a first signal from a first user device of the plurality of user devices via the time control, wherein the first signal indicates that a user of the first user device wishes to shift the XR stream to a first point in time that is different from a time that is currently being rendered in the XR stream, and presenting a personal XR environment to the first user device in response to the first signal, wherein the personal XR environment presents the first point in time in the XR stream to the first user device without changing a time point of the XR stream that is currently being presented to other user devices of the plurality of user devices.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: October 25, 2022
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: Eric Zavesky, John Oetting, James Pratt, Terrel Lecesne, Jason Decuir
  • Patent number: 11470404
    Abstract: An example method performed by a processing system includes retrieving a digital model of a media element from a database storing a plurality of media elements, wherein the media element is to be inserted into a scene of an audiovisual media, rendering the media element in the scene of the audiovisual media, based on the digital model of the media element and on metadata associated with the digital model to produce a rendered media element, wherein the metadata describes a characteristic of the media element and a limit on the characteristic, and inserting the rendered media element into the scene of the audiovisual media.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: October 11, 2022
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: John Oetting, Eric Zavesky, James Pratt, Jason Decuir, Terrel Lecesne
  • Patent number: 11405484
    Abstract: An example method includes obtaining a profile of a user using a user device to present an immersive experience, wherein the profile specifies a user reaction indicator and a value that represents a baseline for the user reaction indicator, presenting the immersive experience on the user device, receiving user state data including a current value of the user reaction indicator, determining that the current value of the user reaction indicator deviates from the value that represents the baseline by more than a threshold, in response to the determining, selecting, from among a plurality of variants for a segment of the immersive experience, a variant that is expected to cause the current value of the user reaction indicator to adjust to a value that does not deviate from the value that represents the baseline by more than the threshold, and presenting the variant that is selected on the user device.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 2, 2022
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: Terrel Lecesne, John Oetting, Eric Zavesky, James H. Pratt, Jason Decuir
  • Patent number: 11394875
    Abstract: Concepts and technologies are disclosed herein for a content capture application and/or service. In one example embodiment, an application/service can determine a project definition for a media project that can include a video that is to be captured via a user device having a camera. The application/service can determine a device capability of the user device including a video capability and a networking capability. The application/service can determine, based on the project definition and the device capability, a capture suggestion. The capture suggestion can include instructions relating to content to be included in the video. The device can provide the capture suggestion to the user device, obtain media content based on the video, assemble the media project including the media content, and output the media project to a recipient.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: July 19, 2022
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: Eric Zavesky, Lee Begeja, David Crawford Gibbon, Jean-Francois Paiement, Tan Xu, Terrel Lecesne
  • Publication number: 20220214745
    Abstract: Aspects of the subject disclosure may include, for example, assembling an extended reality (XR) immersive experience for a user, determining a user experience level for the user, adjusting an immersion detail level for the XR immersive experience according to the user experience level, and communicating data defining the XR immersive experience to XR equipment of the user. Other embodiments are disclosed.
    Type: Application
    Filed: March 22, 2022
    Publication date: July 7, 2022
    Applicants: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: Eric Zavesky, James Pratt, John Oetting, Jason Decuir, Terrel Lecesne
  • Publication number: 20220174357
    Abstract: An example method includes presenting a remote broadcast event to a first set of user endpoint devices including at least one host device and a second set of user endpoint devices including a plurality of audience devices, monitoring reactions of audience members of the remote broadcast event, based on streams of data received from the second set of user endpoint devices, and displaying, to the first set of user endpoint devices, a measure of the reactions of the audience members, based on the monitoring.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: Eric Zavesky, John Oetting, Terrel Lecesne, James H. Pratt, Jason Decuir
  • Publication number: 20220174131
    Abstract: An example method includes obtaining a profile of a user using a user device to present an immersive experience, wherein the profile specifies a user reaction indicator and a value that represents a baseline for the user reaction indicator, presenting the immersive experience on the user device, receiving user state data including a current value of the user reaction indicator, determining that the current value of the user reaction indicator deviates from the value that represents the baseline by more than a threshold, in response to the determining, selecting, from among a plurality of variants for a segment of the immersive experience, a variant that is expected to cause the current value of the user reaction indicator to adjust to a value that does not deviate from the value that represents the baseline by more than the threshold, and presenting the variant that is selected on the user device.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: Terrel Lecesne, John Oetting, Eric Zavesky, James H. Pratt, Jason Decuir
  • Publication number: 20220172415
    Abstract: In one example, a method includes presenting a virtual event to a plurality of user endpoint devices associated with a plurality of participants, receiving, while the virtual event is in progress, a first signal from a first user endpoint device, wherein the first signal indicates that a first participant wishes to join the virtual event, admitting the first participant to the virtual event, and selecting, from among a plurality of candidate locations, a first location in the virtual event in which to place the first participant.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 2, 2022
    Inventors: John Oetting, Eric Zavesky, Terrel Lecesne, James H. Pratt, Jason Decuir
  • Publication number: 20220174358
    Abstract: An example method includes detecting a request from a first user endpoint device to play back an extended reality media, detecting a moderation rule associated with a user of the first user endpoint device, presenting the extended reality media on the first user endpoint device, while monitoring for content within the extended reality media that triggers the moderation rule, determining, in response to detecting content in the extended reality media that triggers the moderation rule, a modification to be made to the extended reality media that would prevent the extended reality media from triggering the moderation rule, and presenting the modification on the user endpoint device, simultaneously with the extended reality media, to render a modified extended reality media.
    Type: Application
    Filed: November 27, 2020
    Publication date: June 2, 2022
    Inventors: Eric Zavesky, Terrel Lecesne, James Pratt, Jason Decuir
  • Publication number: 20220165037
    Abstract: In one example, a method includes presenting an extended reality (XR) experience to a plurality of user devices, wherein the presenting includes presenting a time control along with an XR stream, receiving a first signal from a first user device of the plurality of user devices via the time control, wherein the first signal indicates that a user of the first user device wishes to shift the XR stream to a first point in time that is different from a time that is currently being rendered in the XR stream, and presenting a personal XR environment to the first user device in response to the first signal, wherein the personal XR environment presents the first point in time in the XR stream to the first user device without changing a time point of the XR stream that is currently being presented to other user devices of the plurality of user devices.
    Type: Application
    Filed: November 25, 2020
    Publication date: May 26, 2022
    Inventors: Eric Zavesky, John Oetting, James Pratt, Terrel Lecesne, Jason Decuir
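Several of the filings above (publications 20230168737 and 20230052265, and patent 11567572) share an abstract describing a pipeline that detects an object and an action in each of two users' videos, compares the two actions, and issues a notification when they differ. Below is a purely illustrative sketch of the final comparison-and-notification step, assuming upstream detection has already produced per-user (object, action) labels; the `Observation` type, function name, and example labels are hypothetical and are not taken from the patents.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """What was detected in one user's video: an object and an action on it."""
    user: str
    obj: str      # e.g. "guitar"
    action: str   # e.g. "barre chord"

def compare_actions(a: Observation, b: Observation) -> Optional[str]:
    """Return a notification describing how the two users' actions differ,
    or None when there is no shared object or the actions match."""
    if a.obj != b.obj or a.action == b.action:
        return None
    return (f"Difference detected on '{a.obj}': "
            f"{a.user} performed '{a.action}' while {b.user} performed '{b.action}'.")

# Example: an instructor and a student manipulating the same object differently.
instructor = Observation("instructor", "guitar", "barre chord")
student = Observation("student", "guitar", "open chord")
print(compare_actions(instructor, student))
```

In a real visual communication session, the detection steps would be performed by video-analysis models; this sketch only shows how a difference, once detected, could be turned into a notification.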
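Patent 11405484 and its related publications describe selecting, from among a plurality of variants of an immersive-experience segment, the variant expected to bring a user-reaction indicator back within a threshold of its baseline. The following minimal sketch assumes the indicator is a single numeric value and that each candidate variant carries a predicted post-presentation value; the function name and the prediction mapping are assumptions for illustration, not the patented method.

```python
from typing import Dict, Optional

def select_variant(current: float, baseline: float, threshold: float,
                   variants: Dict[str, float]) -> Optional[str]:
    """Pick the variant whose predicted indicator value falls back within
    `threshold` of `baseline`; return None if the user is already in range
    or no variant is expected to restore the indicator."""
    if abs(current - baseline) <= threshold:
        return None  # no deviation beyond the threshold; nothing to adjust
    # Keep only variants predicted to land within the threshold of baseline.
    in_range = {name: v for name, v in variants.items()
                if abs(v - baseline) <= threshold}
    if not in_range:
        return None
    # Among those, choose the variant predicted to land closest to baseline.
    return min(in_range, key=lambda name: abs(in_range[name] - baseline))

# Example: a heart-rate-like indicator drifting above the user's baseline.
print(select_variant(current=95, baseline=70, threshold=10,
                     variants={"calm_scene": 75, "action_scene": 98}))  # → calm_scene
```

Choosing the in-range variant closest to the baseline is one plausible tie-breaking rule; the abstract itself only requires that the selected variant be expected to restore the indicator to within the threshold.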