Patents by Inventor Lee Begeja

Lee Begeja has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220150573
    Abstract: Aspects of the subject disclosure may include, for example, providing media content to a communication device. The communication device provides a playback of a presentation of the media content. Further embodiments can include receiving an indication from the communication device that indicates a pause in the presentation of the media content, and determining a plurality of attributes associated with the pause. Additional embodiments can include providing instructions according to the plurality of attributes associated with the pause to the communication device. Other embodiments are disclosed.
    Type: Application
    Filed: November 11, 2020
    Publication date: May 12, 2022
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Eric Zavesky, Lee Begeja, Tan Xu, Paul Triantafyllou, Jean-Francois Paiement
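    A minimal Python sketch of the pause-handling flow this abstract describes; the event fields, pause attributes, and resulting instructions are illustrative assumptions, not the claimed implementation.

      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class PauseEvent:                          # hypothetical report sent by the device
          content_id: str
          position_s: float                      # playback position (seconds) when paused
          paused_at: datetime
          recent_pause_count: int                # pauses so far in the current session

      def instructions_for_pause(event: PauseEvent) -> dict:
          """Determine attributes of the pause and map them to playback instructions."""
          attributes = {
              "late_in_content": event.position_s > 1800,     # more than 30 minutes in
              "frequent_pauser": event.recent_pause_count >= 3,
              "late_night": event.paused_at.hour >= 23 or event.paused_at.hour < 5,
          }
          if attributes["late_night"]:
              action = "offer_resume_reminder"   # suggest finishing another day
          elif attributes["frequent_pauser"]:
              action = "offer_recap"             # play a short recap before resuming
          else:
              action = "hold_frame"              # simply keep the paused frame on screen
          return {"attributes": attributes, "action": action}

      if __name__ == "__main__":
          print(instructions_for_pause(PauseEvent("movie-42", 2100.0, datetime(2022, 5, 12, 23, 40), 1)))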
  • Patent number: 11328224
    Abstract: A method, computer-readable medium, and apparatus for modeling data of a service for providing a policy are disclosed. For example, a method may include a processor for generating a first policy for a first service by a first policy model using machine learning for processing first data of the first service, determining whether the first policy is to be applied to a second service, applying the first policy to the second service when the first policy is deemed to be applicable to the second service, wherein the applying the first policy provides the first policy to a second policy model using machine learning for processing second data of the second service, generating a second policy for the second service, and implementing the second policy in the second service, wherein the first service and the second service are provided by a single service provider.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: May 10, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Raghuraman Gopalan, Lee Begeja, David Crawford Gibbon, Zhu Liu, Yadong Mu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
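    A minimal Python sketch of the policy transfer this abstract describes, with simple threshold statistics standing in for the machine-learning policy models; the function names and the 80% applicability test are assumptions for illustration.

      from statistics import mean
      from typing import Optional

      def learn_policy(usage_samples: list, seed_threshold: Optional[float] = None) -> dict:
          """Toy 'policy model': derive a usage threshold from one service's data,
          optionally seeded by a policy transferred from another service."""
          learned = mean(usage_samples) * 1.5
          if seed_threshold is not None:
              # Blend the transferred policy with what this service's own data suggests.
              learned = 0.5 * seed_threshold + 0.5 * learned
          return {"usage_threshold": round(learned, 2)}

      def policy_applies(policy: dict, other_service_samples: list) -> bool:
          """Crude applicability test: the transferred threshold should still admit
          most of the other service's normal traffic."""
          within = sum(s <= policy["usage_threshold"] for s in other_service_samples)
          return within / len(other_service_samples) >= 0.8

      if __name__ == "__main__":
          service1_data = [10.0, 12.0, 9.5, 11.0]        # first service's observed usage
          service2_data = [14.0, 13.5, 15.0, 12.5]       # second service's observed usage
          policy1 = learn_policy(service1_data)
          if policy_applies(policy1, service2_data):
              policy2 = learn_policy(service2_data, seed_threshold=policy1["usage_threshold"])
              print("implementing on the second service:", policy2)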
  • Publication number: 20220138801
    Abstract: Aspects of the subject disclosure may include, for example, a method that includes determining, by a processing system including a processor, that a viewer is viewing primary content for a first time; capturing, by the processing system, an emotive response of the viewer as the primary content is viewed during the first time; determining that the viewer is viewing the primary content at a subsequent time, wherein the subsequent time is later than the first time; detecting the emotive response of the viewer as the primary content is viewed during the subsequent time; retrieving external media responsive to detecting the emotive response; and displaying the external media. Other embodiments are disclosed.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Lee Begeja, David Crawford Gibbon, Mohammed Abdel-Wahab, Jianxiong Dong
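    A short Python sketch of the repeat-viewing behavior described above; the emotion labels, scene keys, and external-media catalog are hypothetical.

      # Hypothetical mapping from a detected emotion to external media worth surfacing.
      EXTERNAL_MEDIA = {
          "laughter": "outtakes_reel",
          "sadness": "behind_the_scenes_interview",
      }

      def media_for_rewatch(first_viewing: dict, scene: str, current_emotion: str):
          """On a subsequent viewing, compare the viewer's current emotive response for
          a scene with the response captured the first time, and pick external media to
          display when the same reaction is detected again."""
          if first_viewing.get(scene) == current_emotion:
              return EXTERNAL_MEDIA.get(current_emotion)
          return None

      if __name__ == "__main__":
          first_viewing = {"scene_12": "laughter", "scene_30": "sadness"}
          print(media_for_rewatch(first_viewing, "scene_12", "laughter"))   # -> outtakes_reel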
  • Patent number: 11321119
    Abstract: Task delegation and cooperation for automated assistants is presented. A method comprises receiving, at a centralized support center that is in contact with a plurality of automated assistants including a first automated assistant and a second automated assistant, a request to perform a task on behalf of an individual, formulating, at the centralized support center, the task as a plurality of sub-tasks including a first sub-task and a second sub-task, delegating, at the centralized support center, the first sub-task to the first automated assistant, based on a determination at the centralized support center that the first automated assistant is capable of performing the first sub-task, and delegating, at the centralized support center, the second sub-task to the second automated assistant, based on a determination at the centralized support center that the second automated assistant is capable of performing the second sub-task.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: May 3, 2022
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Raghuraman Gopalan, Lee Begeja, David Crawford Gibbon, Eric Zavesky
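    A minimal Python sketch of the delegation scheme in this abstract: a centralized coordinator splits a request into sub-tasks and hands each to an assistant that advertises the matching capability. The class names, capability strings, and semicolon-based task decomposition are assumptions, not the claimed method.

      from dataclasses import dataclass, field

      @dataclass
      class Assistant:                           # hypothetical automated-assistant registration
          name: str
          capabilities: set

          def perform(self, sub_task: str) -> str:
              return f"{self.name} completed '{sub_task}'"

      @dataclass
      class SupportCenter:
          assistants: list = field(default_factory=list)

          def formulate(self, request: str) -> list:
              # Placeholder decomposition: the request arrives as semicolon-separated steps.
              return [step.strip() for step in request.split(";")]

          def delegate(self, request: str) -> list:
              results = []
              for sub_task in self.formulate(request):
                  capable = next((a for a in self.assistants if sub_task in a.capabilities), None)
                  results.append(capable.perform(sub_task) if capable
                                 else f"no assistant can perform '{sub_task}'")
              return results

      if __name__ == "__main__":
          center = SupportCenter([
              Assistant("home_assistant", {"set thermostat"}),
              Assistant("calendar_assistant", {"book meeting room"}),
          ])
          print(center.delegate("book meeting room; set thermostat"))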
  • Patent number: 11290766
    Abstract: In one example, a method performed by a processing system in a telecommunications network includes acquiring live footage of an event, acquiring sensor data related to the event, wherein the sensor data is collected by a sensor positioned in a location at which the event occurs, extracting an analytical statistic related to a target participating in the event, wherein the extracting is based on content analysis of the live footage and the sensor data, filtering data relating to the target based on the analytical statistic to identify content of interest in the data, wherein the data comprises the live footage, the sensor data, and data relating to historical events that are similar to the event, and generating computer-generated content to present the content of interest, wherein when the computer-generated content is synchronized with the live footage on an immersive display, an augmented reality media is produced.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: March 29, 2022
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: David Crawford Gibbon, Zhu Liu, Lee Begeja, Behzad Shahraray, Eric Zavesky
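    A minimal Python sketch of the statistic-extraction and filtering steps described above, using sprint speed from positional sensor samples as the analytical statistic; the sample format, the 8 m/s threshold, and the overlay descriptors are illustrative assumptions.

      from dataclasses import dataclass

      @dataclass
      class SensorSample:              # hypothetical positional sample for one target
          t: float                     # seconds from the start of the live footage
          x: float
          y: float

      def speeds(samples):
          """Analytical statistic: instantaneous speed between consecutive samples."""
          out = []
          for a, b in zip(samples, samples[1:]):
              dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
              out.append((b.t, dist / (b.t - a.t)))
          return out

      def overlays_of_interest(samples, threshold=8.0):
          """Filter to the moments worth augmenting and emit overlay descriptors keyed
          by footage timestamp so they can be synchronized on an immersive display."""
          return [{"t": t, "label": f"sprint {v:.1f} m/s"}
                  for t, v in speeds(samples) if v >= threshold]

      if __name__ == "__main__":
          track = [SensorSample(0.0, 0, 0), SensorSample(1.0, 5, 0), SensorSample(2.0, 14, 0)]
          print(overlays_of_interest(track))    # -> one overlay for the 9.0 m/s burst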
  • Publication number: 20220088493
    Abstract: Aspects of the subject disclosure may include, for example, obtaining portions of video content from a video game from video game server(s) associated with a video game provider, selecting a first portion of video content from the portions of the video content, and providing the first portion to device(s) associated with viewer(s). Each device presents the first portion of the video content. Further embodiments include obtaining popularity information from the device(s) according to feedback based on presenting the first portion of the video content to the device(s), determining that the popularity information satisfies a popularity threshold associated with the video content, determining a subject matter corresponding to the first portion of the video content, and identifying a second portion of the video content from the video game to be recorded according to the subject matter. Other embodiments are disclosed.
    Type: Application
    Filed: November 30, 2021
    Publication date: March 24, 2022
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Jean-Francois Paiement, Lee Begeja, Jianxiong Dong, Tan Xu, Eric Zavesky
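    A minimal Python sketch of the popularity feedback loop described in this abstract; the feedback format, the 60% popularity threshold, and the tag-based notion of subject matter are assumptions for illustration.

      from collections import Counter

      def popularity(feedback):
          """Share of viewer devices that reacted positively to the streamed portion."""
          return sum(f["liked"] for f in feedback) / len(feedback) if feedback else 0.0

      def next_portion_to_record(feedback, catalog, threshold=0.6):
          """If the first portion is popular enough, pick another portion of the video
          game whose tagged subject matter matches what viewers responded to."""
          if popularity(feedback) < threshold:
              return None
          tags = Counter(tag for f in feedback if f["liked"] for tag in f["tags"])
          if not tags:
              return None
          subject = tags.most_common(1)[0][0]
          return next((p for p in catalog if subject in p["tags"]), None)

      if __name__ == "__main__":
          feedback = [{"liked": True, "tags": ["boss_fight"]},
                      {"liked": True, "tags": ["boss_fight", "speedrun"]},
                      {"liked": False, "tags": []}]
          catalog = [{"id": "clip-7", "tags": ["boss_fight"]},
                     {"id": "clip-9", "tags": ["exploration"]}]
          print(next_portion_to_record(feedback, catalog))    # -> clip-7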
  • Patent number: 11272264
    Abstract: Disclosed herein are systems, methods, and computer-readable media for temporally adaptive media playback. The method for adaptive media playback includes estimating or determining an amount of time between a first event and a second event, selecting media content to fill the estimated amount of time between the first event and the second event, and playing the selected media content possibly at a reasonably different speed to fit the time interval. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event or to modify the selected media content to adjust to an updated estimated amount of time. Another embodiment bases selected media content on a user or group profile.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: March 8, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
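    A minimal Python sketch of the fill-and-pace idea in this abstract: clips are chosen to roughly fill the estimated gap between two events, then a single speed factor makes them fit. The 15% speed tolerance and the greedy longest-first selection are assumptions, not the claimed algorithm.

      def select_and_pace(clips, gap_seconds, max_speed_shift=0.15):
          """Pick clips (longest first) that roughly fill the gap between two events,
          then compute the playback-speed factor that makes them fit, capped so the
          change in speed stays within a tolerable range."""
          chosen, total = [], 0.0
          for clip in sorted(clips, key=lambda c: c["duration"], reverse=True):
              if total + clip["duration"] <= gap_seconds * (1 + max_speed_shift):
                  chosen.append(clip["id"])
                  total += clip["duration"]
          speed = total / gap_seconds if gap_seconds else 1.0
          speed = max(1.0 - max_speed_shift, min(1.0 + max_speed_shift, speed))
          return {"clips": chosen, "speed_factor": round(speed, 3)}

      if __name__ == "__main__":
          # e.g. an estimated 10-minute drive between the current location and a destination
          library = [{"id": "news", "duration": 420.0},
                     {"id": "weather", "duration": 150.0},
                     {"id": "traffic", "duration": 90.0}]
          print(select_and_pace(library, gap_seconds=600.0))   # -> all three clips at 1.1x speed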
  • Patent number: 11265387
    Abstract: In one example, a device, computer-readable medium, and method for synchronizing multiple user devices in an immersive media environment using time-of-flight (ToF) light patterns are provided. In one example, a method performed by a processing system of a first device includes detecting a fiducial that is present in a light pattern projected by a second device, wherein the second device is present in a same surrounding environment as the first device, determining an identity of the second device based on an appearance of the fiducial, identifying an expected orientation of the fiducial within the light pattern, based on the identity of the second device, and determining a location of the second device relative to the first device, based on an observed orientation of the fiducial relative to the expected orientation of the fiducial.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: March 1, 2022
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Eric Zavesky, Lee Begeja, David Crawford Gibbon
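    A minimal Python sketch of the fiducial-based localization described above; the fiducial registry, device names, and bearing calculation are illustrative assumptions rather than the claimed method.

      # Hypothetical registry: fiducial appearance -> (device identity, expected orientation in degrees)
      FIDUCIAL_REGISTRY = {
          "three_dots_triangle": ("headset-A", 0.0),
          "two_bars_cross":      ("projector-B", 90.0),
      }

      def locate_projecting_device(fiducial_shape, observed_orientation_deg):
          """Identify which device projected the observed light pattern and estimate its
          position relative to the observer from how far the fiducial is rotated away
          from its expected orientation."""
          if fiducial_shape not in FIDUCIAL_REGISTRY:
              return None
          device_id, expected = FIDUCIAL_REGISTRY[fiducial_shape]
          rotation = (observed_orientation_deg - expected + 180) % 360 - 180   # wrap to [-180, 180)
          return {"device": device_id, "relative_bearing_deg": rotation}

      if __name__ == "__main__":
          # The pattern known to belong to projector-B appears rotated 45 degrees.
          print(locate_projecting_device("two_bars_cross", observed_orientation_deg=135.0))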
  • Patent number: 11252638
    Abstract: Devices, computer-readable media, and methods are disclosed for assigning a service to a vehicle-based mobile node. For example, a processor deployed in a communication network may receive a request for the service from a mobile endpoint device, determine a route of the mobile endpoint device, determine a route of the vehicle-based mobile node, and assign the service to the vehicle-based mobile node when the route of the mobile endpoint device and the route of the vehicle-based mobile node coincide.
    Type: Grant
    Filed: September 2, 2019
    Date of Patent: February 15, 2022
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Eric Zavesky, Zhu Liu, Lee Begeja, Yadong Mu, David Crawford Gibbon, Bernard S. Renger, Raghuraman Gopalan, Behzad Shahraray
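    A minimal Python sketch of the route-coincidence check described in this abstract; the waypoint lists, the three-segment overlap rule, and the fleet names are assumptions for illustration.

      def routes_coincide(endpoint_route, vehicle_route, min_shared=3):
          """True when the two routes share enough road segments for the vehicle-based
          node to serve the endpoint along the way."""
          endpoint_segments = set(zip(endpoint_route, endpoint_route[1:]))
          vehicle_segments = set(zip(vehicle_route, vehicle_route[1:]))
          return len(endpoint_segments & vehicle_segments) >= min_shared

      def assign_service(request, endpoint_route, vehicles):
          """Assign the requested service to the first vehicle-based mobile node whose
          planned route coincides with the endpoint's route."""
          for vehicle_id, route in vehicles.items():
              if routes_coincide(endpoint_route, route):
                  return {"service": request, "assigned_node": vehicle_id}
          return None

      if __name__ == "__main__":
          endpoint = ["A", "B", "C", "D", "E"]            # waypoints of the mobile endpoint device
          fleet = {"bus-12": ["X", "B", "C", "D", "E"], "van-3": ["A", "Q", "R"]}
          print(assign_service("video-cache", endpoint, fleet))   # -> assigned to bus-12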
  • Publication number: 20220043838
    Abstract: Methods, computer-readable media, and devices are disclosed for providing a notification of an anomaly in a media content that is associated with an event type. For example, a method may include a processing system including at least one processor for detecting a first anomaly from a first media content, generating a first anomaly signature for the first anomaly, obtaining a notification of a first event, the notification including an event type, time information, and location information of the first event, correlating the first anomaly to the notification of the first event, and labeling the first anomaly signature with the event type. The processing system may further detect a second anomaly from a second media content that matches the first anomaly signature and transmit a notification of a second event of the event type when it is detected that the second anomaly matches the first anomaly signature.
    Type: Application
    Filed: October 25, 2021
    Publication date: February 10, 2022
    Inventors: Eric Zavesky, Lee Begeja, Raghuraman Gopalan, Bernard S. Renger, Behzad Shahraray, Zhu Liu
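    A minimal Python sketch of the signature-labeling and matching flow described above; the signature fields, the location-based correlation, and the 20% magnitude tolerance are illustrative assumptions.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class AnomalySignature:              # hypothetical compact description of an anomaly
          kind: str                        # e.g. "audio_spike"
          magnitude: float
          location: str
          label: Optional[str] = None      # event type learned by correlation

      def correlate_and_label(signature, notification):
          """Label the anomaly signature with the reported event type when the event
          happened where the anomaly was observed."""
          if notification["location"] == signature.location:
              signature.label = notification["event_type"]
          return signature

      def matches(known, candidate, tolerance=0.2):
          """A second anomaly matches a labeled signature when it is the same kind of
          anomaly and of comparable magnitude."""
          return (known.label is not None and known.kind == candidate.kind
                  and abs(known.magnitude - candidate.magnitude) <= tolerance * known.magnitude)

      if __name__ == "__main__":
          first = AnomalySignature("audio_spike", 40.0, "Main St & 3rd Ave")
          first = correlate_and_label(first, {"event_type": "collision", "location": "Main St & 3rd Ave"})
          second = AnomalySignature("audio_spike", 43.0, "Main St & 5th Ave")
          if matches(first, second):
              print(f"notify: possible {first.label} detected in the second media content")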
  • Publication number: 20220030284
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first group of video content streams of an event, determining a first point of view of a plurality of audience members of the event, and selecting a first portion of the first group of video content streams of the event according to the first point of view of the plurality of audience members. Further aspects can include aggregating the first portion of the first group of video content streams resulting in first aggregated video content, generating first augmented reality content from the first aggregated video content according to the first point of view, and providing the first augmented reality content to a communication device. The communication device can present the first augmented reality content. Other embodiments are disclosed.
    Type: Application
    Filed: October 5, 2021
    Publication date: January 27, 2022
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Jean-Francois Paiement, Lee Begeja, David Crawford Gibbon, Eric Zavesky
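    A minimal Python sketch of the point-of-view aggregation described in this abstract; the stream metadata, the majority-vote choice of point of view, and the overlay description are assumptions for illustration.

      from collections import Counter

      def dominant_point_of_view(streams):
          """Pick the point of view shared by the most audience members."""
          return Counter(s["point_of_view"] for s in streams).most_common(1)[0][0]

      def build_ar_content(streams):
          """Select the streams captured from the dominant point of view, aggregate
          them, and describe the augmented reality content to send to a device."""
          pov = dominant_point_of_view(streams)
          selected = [s["stream_id"] for s in streams if s["point_of_view"] == pov]
          return {"point_of_view": pov, "aggregated_streams": selected,
                  "overlay": f"composite view from {len(selected)} cameras at {pov}"}

      if __name__ == "__main__":
          audience = [{"stream_id": "s1", "point_of_view": "north_stand"},
                      {"stream_id": "s2", "point_of_view": "north_stand"},
                      {"stream_id": "s3", "point_of_view": "south_stand"}]
          print(build_ar_content(audience))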
  • Publication number: 20220020061
    Abstract: Aspects of the subject disclosure may include, for example, obtaining first data associated with a first presentation of a first portion of a first video, obtaining second data, wherein the second data is obtained: from a creator of the first video, by performing an analysis on the first portion of the first video, or a combination thereof, identifying an emotional trajectory of a first user during the first presentation in accordance with the first data and the second data, selecting a first creative in accordance with the emotional trajectory, and transmitting the first creative to a first communication device. Other embodiments are disclosed.
    Type: Application
    Filed: July 14, 2020
    Publication date: January 20, 2022
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Samuel Seljan, Ashutosh Sanzgiri, David Crawford Gibbon, Lee Begeja
  • Patent number: 11213758
    Abstract: Aspects of the subject disclosure may include, for example, obtaining portions of video content from a video game from video game server(s) associated with a video game provider, selecting a first portion of video content from the portions of the video content, and providing the first portion to device(s) associated with viewer(s). Each device presents the first portion of the video content. Further embodiments include obtaining popularity information from the device(s) according to feedback based on presenting the first portion of the video content to the device(s), determining that the popularity information satisfies a popularity threshold associated with the video content, determining a subject matter corresponding to the first portion of the video content, and identifying a second portion of the video content from the video game to be recorded according to the subject matter. Other embodiments are disclosed.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: January 4, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Jean-Francois Paiement, Lee Begeja, Jianxiong Dong, Tan Xu, Eric Zavesky
  • Publication number: 20210385515
    Abstract: Aspects of the subject disclosure may include, for example, providing video streams of video content to displays, where each video stream includes a portion of the video content, determining that a viewer reaction to a first portion of the video content in a first video stream satisfies a viewer reaction threshold when the first video stream is presented on a first display, and determining that a sub-portion of the first portion of the video content caused the viewer reaction to satisfy the viewer reaction threshold in response to analyzing the first portion of the video content. Further embodiments include generating a second video stream of the video content, where the second video stream comprises the sub-portion without a remainder of the first portion, and providing the second video stream to a second display. Other embodiments are disclosed.
    Type: Application
    Filed: August 26, 2021
    Publication date: December 9, 2021
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Lee Begeja, Behzad Shahraray, Bernard S. Renger
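    A minimal Python sketch of the reaction-driven re-cutting described above; the per-second reaction scores, the 0.7 threshold, and the fixed window around the peak are illustrative assumptions.

      def find_highlight(reaction_scores, threshold, window=2):
          """Once the reaction to the full portion satisfies the threshold, locate the
          sub-portion (a window of seconds around the strongest reaction) that caused it."""
          peak = max(reaction_scores)
          if peak < threshold:
              return None
          centre = reaction_scores.index(peak)
          return {"start_s": max(0, centre - window),
                  "end_s": min(len(reaction_scores), centre + window + 1)}

      def second_stream(portion_id, highlight):
          """Describe a new video stream carrying only the identified sub-portion."""
          return {"source_portion": portion_id, "clip": highlight, "drop_remainder": True}

      if __name__ == "__main__":
          scores = [0.1, 0.1, 0.2, 0.1, 0.9, 0.8, 0.2, 0.1, 0.1, 0.1]   # e.g. smiles per second
          highlight = find_highlight(scores, threshold=0.7)
          if highlight:
              print(second_stream("stream-1-portion-A", highlight))     # seconds 2 through 6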
  • Publication number: 20210383615
    Abstract: Aspects of the subject disclosure may include, for example, scanning, by an augmented reality system, a local environment which includes an object. The scanning includes identifying one or more characteristics of the object such as its size or shape. The subject disclosure may further include providing, to a display device of a local user in the local environment, local image information of the object from a viewing perspective of the local user and providing, to a display device of a remote user in a remote environment, remote image information of the object from a viewing perspective of the remote user.
    Type: Application
    Filed: August 26, 2021
    Publication date: December 9, 2021
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Mohammed Abdel-Wahab, Lee Begeja, Eric Zavesky, Tan Xu
  • Publication number: 20210373742
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for synthesizing a virtual window. The method includes receiving an environment feed, selecting video elements of the environment feed, displaying the selected video elements on a virtual window in a window casing, selecting non-video elements of the environment feed, and outputting the selected non-video elements coordinated with the displayed video elements. Environment feeds can include synthetic and natural elements. The method can further toggle the virtual window between displaying the selected elements and being transparent. The method can track user motion and adapt the displayed selected elements on the virtual window based on the tracked user motion. The method can further detect a user in close proximity to the virtual window, receive an interaction from the detected user, and adapt the displayed selected elements on the virtual window based on the received interaction.
    Type: Application
    Filed: July 12, 2021
    Publication date: December 2, 2021
    Inventors: Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Publication number: 20210368155
    Abstract: A processing system having at least one processor may obtain at least a first source video from a first endpoint device and a second source video from a second endpoint device, where each of the first source video and the second source video is a two-dimensional video, determine that the first source video and the second source video share at least one feature that is the same for both the first source video and the second source video, and generate a volumetric video from the first source video and the second source video, where the volumetric video comprises a photogrammetric combination of the first source video and the second source video.
    Type: Application
    Filed: August 9, 2021
    Publication date: November 25, 2021
    Inventors: Lee Begeja, Eric Zavesky, Behzad Shahraray, Zhu Liu
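    A minimal Python sketch of the feature-sharing check that gates the volumetric combination described above; the landmark tags are stand-ins for detected visual features, and the photogrammetric combination itself is only described, not performed.

      def shared_features(features_a, features_b):
          """Features (e.g. recognized landmarks) detected in both 2D source videos."""
          return features_a & features_b

      def plan_volumetric_combination(source_a, source_b):
          """Decide whether the two 2D source videos can be photogrammetrically combined
          into a volumetric video, and record which common features would anchor it."""
          common = shared_features(source_a["features"], source_b["features"])
          if not common:
              return None
          return {"inputs": [source_a["id"], source_b["id"]],
                  "anchor_features": sorted(common),
                  "output": "volumetric_video"}

      if __name__ == "__main__":
          cam1 = {"id": "phone-1", "features": {"fountain", "clock_tower"}}
          cam2 = {"id": "phone-2", "features": {"clock_tower", "bus_stop"}}
          print(plan_volumetric_combination(cam1, cam2))   # anchored on the shared clock_tower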
  • Publication number: 20210350552
    Abstract: One example of a method includes receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints, identifying a target that is present in the scene, wherein the target is identified based on a determination of a likelihood of being of interest to a viewer of the scene, determining a trajectory of the target through the plurality of video streams, wherein the determining is based in part on an automated visual analysis of the plurality of video streams, rendering a volumetric video traversal that follows the target through the scene, wherein the rendering comprises compositing the plurality of video streams, receiving viewer feedback regarding the volumetric video traversal, and adjusting the rendering in response to the viewer feedback.
    Type: Application
    Filed: July 26, 2021
    Publication date: November 11, 2021
    Inventors: David Crawford Gibbon, Tan Xu, Lee Begeja, Bernard S. Renger, Behzad Shahraray, Raghuraman Gopalan, Eric Zavesky
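    A minimal Python sketch of the trajectory-following and feedback-adjustment steps described above; the per-stream detection confidences and the feedback-as-bonus adjustment are illustrative assumptions, not the claimed rendering pipeline.

      def plan_traversal(detections, bonus=None):
          """Follow the identified target through time by choosing, at each timestamp,
          the stream whose detection of the target is most confident, optionally
          biased by viewer feedback."""
          bonus = bonus or {}
          traversal = []
          for t in sorted(detections):
              stream, conf = max(detections[t].items(),
                                 key=lambda kv: kv[1] + bonus.get(kv[0], 0.0))
              traversal.append((t, stream, round(conf, 2)))
          return traversal

      if __name__ == "__main__":
          # Detection confidence for the target, per timestamp and per camera stream.
          detections = {0: {"cam_left": 0.9, "cam_wide": 0.6},
                        1: {"cam_left": 0.5, "cam_wide": 0.7},
                        2: {"cam_left": 0.4, "cam_wide": 0.8}}
          print(plan_traversal(detections))
          # Viewer feedback such as "prefer the wide view" becomes a confidence bonus
          # and the traversal is re-planned accordingly.
          print(plan_traversal(detections, bonus={"cam_wide": 0.3}))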
  • Patent number: 11166050
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first group of video content streams of an event, determining a first point of view of a plurality of audience members of the event, and selecting a first portion of the first group of video content streams of the event according to the first point of view of the plurality of audience members. Further aspects can include aggregating the first portion of the first group of video content streams resulting in first aggregated video content, generating first augmented reality content from the first aggregated video content according to the first point of view, and providing the first augmented reality content to a communication device. The communication device can present the first augmented reality content. Other embodiments are disclosed.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 2, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Jean-Francois Paiement, Lee Begeja, David Crawford Gibbon, Eric Zavesky
  • Publication number: 20210331072
    Abstract: Aspects of the subject disclosure may include, for example, obtaining portions of video content from a video game from video game server(s) associated with a video game provider, selecting a first portion of video content from the portions of the video content, and providing the first portion to device(s) associated with viewer(s). Each device presents the first portion of the video content. Further embodiments include obtaining popularity information from the device(s) according to feedback based on presenting the first portion of the video content to the device(s), determining that the popularity information satisfies a popularity threshold associated with the video content, determining a subject matter corresponding to the first portion of the video content, and identifying a second portion of the video content from the video game to be recorded according to the subject matter. Other embodiments are disclosed.
    Type: Application
    Filed: April 22, 2020
    Publication date: October 28, 2021
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David Crawford Gibbon, Jean-Francois Paiement, Lee Begeja, Jianxiong Dong, Tan Xu, Eric Zavesky