Patents by Inventor Paul Triantafyllou
Paul Triantafyllou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11137761
Abstract: Methods, computer-readable media, and devices are disclosed for improving an object model based upon measurements of physical properties of an object via an unmanned vehicle using adversarial examples. For example, a method may include a processing system capturing measurements of physical properties of an object via at least one unmanned vehicle, updating an object model for the object to include the measurements of the physical properties of the object, where the object model is associated with a feature space, and generating an example from the feature space, where the example comprises an adversarial example. The processing system may further apply the object model to the example to generate a prediction, capture additional measurements of the physical properties of the object via the at least one unmanned vehicle when the prediction fails to identify that the example is an adversarial example, and update the object model to include the additional measurements.
Type: Grant
Filed: November 20, 2017
Date of Patent: October 5, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Raghuraman Gopalan, Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Paul Triantafyllou
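The loop in this abstract can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the "object model" is reduced to a 1-D mean-distance classifier, and `capture_fn`, `labels`, and `tol` are hypothetical stand-ins for the unmanned vehicle's sensing pipeline, the example generator's ground truth, and the model's decision boundary.

```python
def refine_object_model(measurements, examples, labels, capture_fn, tol=2.0):
    """Toy version of the abstract's loop: for each example generated from
    the feature space, the model predicts whether it is adversarial; when
    the prediction misses a truly adversarial example, additional physical
    measurements are captured and folded into the model."""
    model = list(measurements)
    for ex, is_adv in zip(examples, labels):
        mean = sum(model) / len(model)
        predicted_adv = abs(ex - mean) > tol   # model's prediction
        if is_adv and not predicted_adv:
            # Prediction failed to identify the adversarial example:
            # capture additional measurements near it and update the model.
            model.extend(capture_fn(ex))
    return model
```

The update only fires on misses, so the model grows exactly where its feature-space coverage was shown to be weakest.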
-
Publication number: 20210241576
Abstract: Aspects of the subject disclosure may include, for example, comparing an input received from a peripheral device associated with an execution of a gaming application with a threshold value, wherein the threshold value is based on a first identification of a first user, a second identification of the peripheral device, and a third identification of stimuli presented as part of the execution of the gaming application. Responsive to the comparing, a determination may be made that the input exceeds the threshold value. Responsive to the determination, a validation request may be transmitted to a user device of the first user. Other embodiments are disclosed.
Type: Application
Filed: April 22, 2021
Publication date: August 5, 2021
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou, Bernard S. Renger
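A minimal sketch of the comparison this abstract describes, assuming a lookup table keyed on the three identifications; the table contents, the meaning of the input value, and `send_validation` are all hypothetical.

```python
def check_input(user, device, stimulus, value, thresholds, send_validation):
    """Compare a peripheral input against a threshold keyed on the user,
    the peripheral device, and the presented stimulus; an input that
    exceeds the threshold triggers a validation request to the user."""
    threshold = thresholds[(user, device, stimulus)]
    if value > threshold:
        send_validation(user)   # validation request to the user's device
        return True
    return False
```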
-
Publication number: 20210224517
Abstract: A method for validating objects appearing in volumetric video presentations includes obtaining a volumetric video presentation depicting a scene, wherein the volumetric video presentation is associated with a metadata file containing identifying information for the scene, identifying user-generated content that depicts the scene by matching metadata associated with the user-generated content to the metadata file associated with the volumetric video presentation, comparing a first object appearing in the volumetric video presentation to a corresponding second object appearing in the user-generated content, assigning a score to the first object based on the comparing, wherein the score indicates a probability that the first object has not been manipulated, and altering the volumetric video presentation to filter the first object from the volumetric video presentation when the score falls below a threshold.
Type: Application
Filed: April 5, 2021
Publication date: July 22, 2021
Inventors: Zhu Liu, Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou
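The score-and-filter step reads as a simple thresholding rule. A sketch under stated assumptions: object features are plain lists, and the "probability of not having been manipulated" is approximated by naive feature agreement between the volumetric rendering and the user-generated counterpart.

```python
def not_manipulated_score(vv_features, ugc_features):
    """Toy score in [0, 1]: fraction of features on which the volumetric
    object and its user-generated counterpart agree."""
    if not vv_features and not ugc_features:
        return 1.0
    matches = sum(1 for a, b in zip(vv_features, ugc_features) if a == b)
    return matches / max(len(vv_features), len(ugc_features))

def filter_presentation(scene_objects, ugc_objects, threshold=0.5):
    """Keep only objects whose score meets the threshold; objects scoring
    below it are filtered from the presentation, per the abstract."""
    kept = []
    for name, features in scene_objects.items():
        score = not_manipulated_score(features, ugc_objects.get(name, []))
        if score >= threshold:
            kept.append(name)
    return kept
```

An object with no matching user-generated evidence scores 0 here, so it is filtered by default; a real system would presumably treat missing corroboration differently.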
-
Patent number: 11062678
Abstract: In one example, a method includes monitoring conditions in a real environment in which a user is present, wherein the monitoring is performed by collecting data about the conditions from a plurality of sensors located in the real environment, transmitting the data about the conditions to an extended reality device that is present in the real environment, where the extended reality device is configured to render a virtual environment, interpolating between the real environment and the virtual environment, based at least in part on the conditions, to determine an actual extended reality environment that is being presented to the user, and sending a signal to a device that is located in the real environment, based on the interpolating, wherein the signal instructs the device to take an action that modifies at least one of the conditions in the real environment.
Type: Grant
Filed: December 27, 2018
Date of Patent: July 13, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Zhu Liu, David Crawford Gibbon, Behzad Shahraray, Paul Triantafyllou, Tan Xu
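The interpolate-then-actuate idea can be illustrated with scalar sensor readings. This is a sketch, not the patented method: conditions are single numbers, the blend is linear, and the `targets` mapping from condition keys to device names is hypothetical.

```python
def interpolate(real, virtual, weight):
    """Linearly blend real-environment readings with the virtual
    environment's settings; weight = 1.0 means fully virtual."""
    return {k: (1 - weight) * real[k] + weight * virtual[k] for k in real}

def actuation_signals(real, virtual, weight, targets):
    """From the interpolated XR environment being presented, signal
    real-world devices to move the real conditions toward it."""
    presented = interpolate(real, virtual, weight)
    signals = {}
    for key, device in targets.items():
        if presented[key] < real[key]:
            signals[device] = "decrease"
        elif presented[key] > real[key]:
            signals[device] = "increase"
    return signals
```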
-
Publication number: 20210166008
Abstract: A processing system having at least one processor may establish a communication session between a first communication system of a first user and a second communication system of a second user, the communication session including first visual content, the first visual content including a first visual representation of the first user, and detect a first action of the first visual representation in the first visual content in accordance with a first action detection model. The processing system may modify, in response to detecting the first action, the first visual content in accordance with a first configuration setting of the first user for the communication session, which may include modifying the first action of the first visual representation of the first user in the first visual content. In addition, the processing system may transmit the first visual content that is modified to the second communication system of the second user.
Type: Application
Filed: February 15, 2021
Publication date: June 3, 2021
Inventors: Eric Zavesky, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Lee Begeja, Paul Triantafyllou
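The detect-modify-transmit pipeline per frame can be sketched as below. All names are hypothetical: `detect` stands in for the action detection model and `user_config` maps detected actions to the user's configured modifications.

```python
def moderate_frames(frames, detect, user_config):
    """Run action detection on each frame of the sender's visual
    representation; when a detected action matches the user's
    configuration, apply the configured modification before the
    frame is transmitted to the other party."""
    out = []
    for frame in frames:
        action = detect(frame)
        if action in user_config:
            frame = user_config[action](frame)   # e.g. blur or replace
        out.append(frame)
    return out
```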
-
Patent number: 11017631
Abstract: Aspects of the subject disclosure may include, for example, comparing an input received from a peripheral device associated with an execution of a gaming application with a threshold value, wherein the threshold value is based on a first identification of a first user, a second identification of the peripheral device, and a third identification of stimuli presented as part of the execution of the gaming application. Responsive to the comparing, a determination may be made that the input exceeds the threshold value. Responsive to the determination, a validation request may be transmitted to a user device of the first user. Other embodiments are disclosed.
Type: Grant
Filed: February 28, 2019
Date of Patent: May 25, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou, Bernard S. Renger
-
Publication number: 20210142369
Abstract: Aspects of the subject disclosure may include, for example, a method including obtaining an advertisement package, wherein the advertisement package defines an interactive extended reality advertisement and includes a plurality of optional features; obtaining information about a user, their equipment, and their environment; creating an interactive extended reality advertisement by choosing a selected feature of the plurality of optional features according to the user information; and presenting the interactive extended reality advertisement to the user equipment. Other embodiments are disclosed.
Type: Application
Filed: January 21, 2021
Publication date: May 13, 2021
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Paul Triantafyllou, Tan Xu, Zhu Liu
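The feature-selection step amounts to picking one option per slot based on user information. A sketch under stated assumptions: the package is a plain dict, and matching on a `device` field is a hypothetical stand-in for whatever user/equipment/environment criteria the real system applies.

```python
def assemble_advertisement(package, user_info):
    """Choose a selected feature from each slot's optional features
    according to the user information, falling back to the first
    option when nothing matches the user's equipment."""
    chosen = {}
    for slot, options in package["optional_features"].items():
        chosen[slot] = next(
            (o for o in options if o.get("device") == user_info.get("device")),
            options[0],
        )
    return {"base": package["base"], "features": chosen}
```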
-
Patent number: 10970519
Abstract: A method for validating objects appearing in volumetric video presentations includes obtaining a volumetric video presentation depicting a scene, wherein the volumetric video presentation is associated with a metadata file containing identifying information for the scene, identifying user-generated content that depicts the scene by matching metadata associated with the user-generated content to the metadata file associated with the volumetric video presentation, comparing a first object appearing in the volumetric video presentation to a corresponding second object appearing in the user-generated content, assigning a score to the first object based on the comparing, wherein the score indicates a probability that the first object has not been manipulated, and altering the volumetric video presentation to filter the first object from the volumetric video presentation when the score falls below a threshold.
Type: Grant
Filed: April 16, 2019
Date of Patent: April 6, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Zhu Liu, Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou
-
Publication number: 20210082472
Abstract: Methods, computer-readable media, and apparatuses for composing a video in accordance with a user goal and an audience preference are described. For example, a processing system having at least one processor may obtain a plurality of video clips of a user, determine at least one goal of the user for a production of a video from the plurality of video clips, determine at least one audience preference of an audience, and compose the video comprising at least one video clip of the plurality of video clips of the user in accordance with the at least one goal of the user and the at least one audience preference. The processing system may then upload the video to a network-based publishing platform.
Type: Application
Filed: November 30, 2020
Publication date: March 18, 2021
Inventors: Tan Xu, Behzad Shahraray, Eric Zavesky, Lee Begeja, Paul Triantafyllou, Zhu Liu, Bernard S. Renger
-
Patent number: 10929894
Abstract: Aspects of the subject disclosure may include, for example, a method including obtaining an advertisement package, wherein the advertisement package defines an interactive extended reality advertisement and includes a plurality of optional features; obtaining information about a user, their equipment, and their environment; creating an interactive extended reality advertisement by choosing a selected feature of the plurality of optional features according to the user information; and presenting the interactive extended reality advertisement to the user equipment. Other embodiments are disclosed.
Type: Grant
Filed: August 10, 2018
Date of Patent: February 23, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Paul Triantafyllou, Tan Xu, Zhu Liu
-
Patent number: 10922534
Abstract: A processing system having at least one processor may establish a communication session between a first communication system of a first user and a second communication system of a second user, the communication session including first visual content, the first visual content including a first visual representation of the first user, and detect a first action of the first visual representation in the first visual content in accordance with a first action detection model. The processing system may modify, in response to detecting the first action, the first visual content in accordance with a first configuration setting of the first user for the communication session, which may include modifying the first action of the first visual representation of the first user in the first visual content. In addition, the processing system may transmit the first visual content that is modified to the second communication system of the second user.
Type: Grant
Filed: October 26, 2018
Date of Patent: February 16, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Lee Begeja, Paul Triantafyllou
-
Patent number: 10885942
Abstract: Methods, computer-readable media, and apparatuses for composing a video in accordance with a user goal and an audience preference are described. For example, a processing system having at least one processor may obtain a plurality of video clips of a user, determine at least one goal of the user for a production of a video from the plurality of video clips, determine at least one audience preference of an audience, and compose the video comprising at least one video clip of the plurality of video clips of the user in accordance with the at least one goal of the user and the at least one audience preference. The processing system may then upload the video to a network-based publishing platform.
Type: Grant
Filed: September 18, 2018
Date of Patent: January 5, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Tan Xu, Behzad Shahraray, Eric Zavesky, Lee Begeja, Paul Triantafyllou, Zhu Liu, Bernard S. Renger
-
Patent number: 10832590
Abstract: In one example, the present disclosure describes a device, computer-readable medium, and method for monitoring a user's food intake. For instance, in one example, a user's food intake is monitored based on data collected from a sensor. The user's current nutrient consumption is estimated based on the monitoring. A recommendation is presented to the user based on the estimating, where the recommendation is designed to help the user achieve a target nutrient consumption.
Type: Grant
Filed: September 13, 2017
Date of Patent: November 10, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Raghuraman Gopalan, Eric Zavesky, Bernard S. Renger, Zhu Liu, Behzad Shahraray, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou, Tan Xu
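The estimate-then-recommend flow can be sketched as a tally against targets. Assumptions here are illustrative only: sensed intake arrives as (item, grams) events, and the per-gram nutrient table stands in for whatever the sensor pipeline actually recognizes.

```python
def recommend(intake_events, nutrient_db, targets):
    """Estimate current nutrient consumption from sensed food items,
    then recommend the remaining amount of each nutrient still needed
    to reach the user's targets."""
    consumed = {}
    for item, grams in intake_events:
        for nutrient, per_gram in nutrient_db[item].items():
            consumed[nutrient] = consumed.get(nutrient, 0.0) + per_gram * grams
    return {
        n: round(goal - consumed.get(n, 0.0), 2)
        for n, goal in targets.items()
        if consumed.get(n, 0.0) < goal
    }
```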
-
Publication number: 20200334447
Abstract: A method for validating objects appearing in volumetric video presentations includes obtaining a volumetric video presentation depicting a scene, wherein the volumetric video presentation is associated with a metadata file containing identifying information for the scene, identifying user-generated content that depicts the scene by matching metadata associated with the user-generated content to the metadata file associated with the volumetric video presentation, comparing a first object appearing in the volumetric video presentation to a corresponding second object appearing in the user-generated content, assigning a score to the first object based on the comparing, wherein the score indicates a probability that the first object has not been manipulated, and altering the volumetric video presentation to filter the first object from the volumetric video presentation when the score falls below a threshold.
Type: Application
Filed: April 16, 2019
Publication date: October 22, 2020
Inventors: Zhu Liu, Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou
-
Patent number: 10797960
Abstract: The concepts and technologies disclosed herein are directed, in part, to a system that can monitor traffic traversing a virtualized network that includes a plurality of virtual network functions (“VNFs”) that provide, at least in part, a service. The system can capture an event from the traffic. The event can involve at least one VNF and can negatively affect at least one operational aspect of the virtualized network in providing the service. The system can create a snapshot that represents a network state of the virtualized network during the event. The system can create, based upon the snapshot, a shadow network. The shadow network can include a network emulation of the network state of the virtualized network during the event. The system can determine, from the shadow network, at least one modification to at least a portion of the virtualized network that would at least mitigate negative effects of the event.
Type: Grant
Filed: December 22, 2017
Date of Patent: October 6, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, Ryan Cullinane, Russell Fischer, Mary Keefe Hirsekorn, Jeffrey Stein, Paul Triantafyllou
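The shadow-network search reduces to replaying a captured state under candidate modifications. A minimal sketch, assuming `emulate` is a caller-supplied function that returns an impact metric for the snapshot under a given change (`None` meaning no change); candidate names are hypothetical.

```python
def find_mitigation(snapshot, candidate_changes, emulate):
    """Replay the captured network state in an emulation under each
    candidate modification, returning the first one whose impact is
    lower than the unmodified baseline (i.e., that mitigates the event)."""
    baseline = emulate(snapshot, None)
    for change in candidate_changes:
        if emulate(snapshot, change) < baseline:
            return change
    return None
```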
-
Publication number: 20200279455
Abstract: Aspects of the subject disclosure may include, for example, comparing an input received from a peripheral device associated with an execution of a gaming application with a threshold value, wherein the threshold value is based on a first identification of a first user, a second identification of the peripheral device, and a third identification of stimuli presented as part of the execution of the gaming application. Responsive to the comparing, a determination may be made that the input exceeds the threshold value. Responsive to the determination, a validation request may be transmitted to a user device of the first user. Other embodiments are disclosed.
Type: Application
Filed: February 28, 2019
Publication date: September 3, 2020
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Eric Zavesky, David Crawford Gibbon, Lee Begeja, Paul Triantafyllou, Bernard S. Renger
-
Patent number: 10726745
Abstract: In one example, the present disclosure describes a device, computer-readable medium, and method for performing autonomous multi-pass data acquisition using unmanned aerial vehicles. For instance, in one example, a method includes obtaining a first set of sensor data collected by a fleet of unmanned aerial vehicles comprising at least one unmanned aerial vehicle, wherein the first set of sensor data depicts a target area at a first granularity, constructing a three-dimensional map of hierarchical unit representations of the first set of sensor data, sending a signal to the fleet of unmanned aerial vehicles to obtain a second set of sensor data at a second granularity that is finer than the first granularity, based at least in part on an examination of the three-dimensional map, and aggregating the second set of sensor data to form a high-resolution composite of the target area.
Type: Grant
Filed: June 13, 2017
Date of Patent: July 28, 2020
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Eric Zavesky, Lee Begeja, David Crawford Gibbon, Paul Triantafyllou
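The coarse-to-fine flow can be sketched as a two-pass scan. Illustrative assumptions: map cells carry a `variance` field standing in for whatever examination of the 3-D map flags unresolved detail, and `fine_scan` stands in for the fleet's second, finer pass.

```python
def multipass_acquisition(coarse_map, detail_threshold, fine_scan):
    """Examine the coarse map's cells, request a finer pass over cells
    whose variance suggests unresolved detail, and aggregate both
    passes into a single composite of the target area."""
    composite = dict(coarse_map)
    for cell, reading in coarse_map.items():
        if reading["variance"] > detail_threshold:
            composite[cell] = fine_scan(cell)   # finer-granularity data
    return composite
```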
-
Publication number: 20200211506
Abstract: In one example, a method includes monitoring conditions in a real environment in which a user is present, wherein the monitoring is performed by collecting data about the conditions from a plurality of sensors located in the real environment, transmitting the data about the conditions to an extended reality device that is present in the real environment, where the extended reality device is configured to render a virtual environment, interpolating between the real environment and the virtual environment, based at least in part on the conditions, to determine an actual extended reality environment that is being presented to the user, and sending a signal to a device that is located in the real environment, based on the interpolating, wherein the signal instructs the device to take an action that modifies at least one of the conditions in the real environment.
Type: Application
Filed: December 27, 2018
Publication date: July 2, 2020
Inventors: Eric Zavesky, Zhu Liu, David Crawford Gibbon, Behzad Shahraray, Paul Triantafyllou, Tan Xu
-
Publication number: 20200134298
Abstract: A processing system having at least one processor may establish a communication session between a first communication system of a first user and a second communication system of a second user, the communication session including first visual content, the first visual content including a first visual representation of the first user, and detect a first action of the first visual representation in the first visual content in accordance with a first action detection model. The processing system may modify, in response to detecting the first action, the first visual content in accordance with a first configuration setting of the first user for the communication session, which may include modifying the first action of the first visual representation of the first user in the first visual content. In addition, the processing system may transmit the first visual content that is modified to the second communication system of the second user.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Inventors: Eric Zavesky, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Lee Begeja, Paul Triantafyllou
-
Publication number: 20200090701
Abstract: Methods, computer-readable media, and apparatuses for composing a video in accordance with a user goal and an audience preference are described. For example, a processing system having at least one processor may obtain a plurality of video clips of a user, determine at least one goal of the user for a production of a video from the plurality of video clips, determine at least one audience preference of an audience, and compose the video comprising at least one video clip of the plurality of video clips of the user in accordance with the at least one goal of the user and the at least one audience preference. The processing system may then upload the video to a network-based publishing platform.
Type: Application
Filed: September 18, 2018
Publication date: March 19, 2020
Inventors: Tan Xu, Behzad Shahraray, Eric Zavesky, Lee Begeja, Paul Triantafyllou, Zhu Liu, Bernard S. Renger