Patents by Inventor Matthew Quinlan
Matthew Quinlan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220245655
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Application
Filed: April 25, 2022
Publication date: August 4, 2022
Applicant: Deep Labs Inc.
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
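The claimed pipeline (detect and extract facial and indicator data, transform to a representative form, determine mood from learned indicators, then weight mood, profile data, and indicators into a context that selects a user experience) can be sketched as below. All function names, the weighting scheme, and the mood/experience labels are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed sensor-data analysis steps.
def analyze_frame(frame: dict, learned_mood_indicators: dict) -> dict:
    """Run the claimed analysis over one unit of video data."""
    facial_data = frame.get("faces", [])          # detect + extract facial data
    indicator_data = frame.get("indicators", [])  # detect + extract indicator data

    # Transform the extracted facial data into representative facial data
    representative = [f.lower() for f in facial_data]

    # Determine a mood by associating learned mood indicators with it
    mood = next((learned_mood_indicators[f] for f in representative
                 if f in learned_mood_indicators), "neutral")

    # Weight the mood, a profile-data subset, and the indicator data
    # into a context score (weights are purely illustrative)
    profile_subset = frame.get("profile", {})
    context_score = (0.5 * (mood == "happy")
                     + 0.3 * len(profile_subset)
                     + 0.2 * len(indicator_data))

    # Map the context to a user experience to communicate to a device
    experience = "offer_discount" if context_score > 0.5 else "default"
    return {"mood": mood, "experience": experience}

frame = {"faces": ["Smile"], "indicators": ["loyalty_card"],
         "profile": {"member": True}}
result = analyze_frame(frame, {"smile": "happy"})
print(result["mood"], result["experience"])  # happy offer_discount
```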
-
Patent number: 11341515
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Grant
Filed: August 26, 2019
Date of Patent: May 24, 2022
Assignee: Deep Labs Inc.
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
-
Publication number: 20190385177
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Application
Filed: August 26, 2019
Publication date: December 19, 2019
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
-
Patent number: 10395262
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Grant
Filed: November 13, 2017
Date of Patent: August 27, 2019
Assignee: Deep Labs Inc.
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
-
Publication number: 20180082314
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Application
Filed: November 13, 2017
Publication date: March 22, 2018
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
-
Patent number: 9818126
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Grant
Filed: April 20, 2016
Date of Patent: November 14, 2017
Assignee: Deep Labs Inc.
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
-
Publication number: 20170308909
Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
Type: Application
Filed: April 20, 2016
Publication date: October 26, 2017
Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
-
Publication number: 20150348019
Abstract: Payment devices have physical characteristics, and these physical characteristics may change in predictable ways. Further, backgrounds may also have physical characteristics which may change in predictable ways. By examining digital images for physical characteristics, some of which cannot even be seen, a better decision on whether a transaction is fraudulent may be made.
Type: Application
Filed: May 30, 2015
Publication date: December 3, 2015
Inventors: Patrick Faith, Theodore Harris, Matthew Quinlan, Nick Giannaris, Phaneendra Gullapalli, Justin Bartee, Scott Edington
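The core idea above — that a payment device's characteristics change in predictable ways, so large deviations from the prediction suggest fraud — can be sketched as a deviation score. The feature names and threshold below are assumptions for illustration, not the patent's method.

```python
# Illustrative sketch: compare measured physical characteristics of a
# payment device (and its background) against predicted values.
def fraud_score(measured: dict, predicted: dict) -> float:
    """Sum of relative deviations between measured and predicted
    characteristics; larger means less like a genuine device."""
    score = 0.0
    for feature, expected in predicted.items():
        observed = measured.get(feature, 0.0)
        if expected:
            score += abs(observed - expected) / abs(expected)
    return score

# E.g. a hologram's reflectance fades predictably with card age; a
# large deviation suggests the image may not show a genuine card.
predicted = {"hologram_reflectance": 0.60, "edge_wear": 0.20}
measured = {"hologram_reflectance": 0.15, "edge_wear": 0.22}
is_suspicious = fraud_score(measured, predicted) > 0.5
print(is_suspicious)  # True
```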
-
Publication number: 20150347932
Abstract: Systems, methods, and platforms can be configured to provide services and devices that power, control, and authenticate 3-D printed objects, such as through an adaptive control module for unique 3-D printer products. Secure processing of product specifications can also be performed to help maintain the anonymity of confidential user information used in the manufacture of products.
Type: Application
Filed: February 12, 2015
Publication date: December 3, 2015
Inventors: Theodore Harris, Matthew Quinlan, Scott Edington, Patrick Faith
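One way the "secure processing of product specifications" described above could work is to replace confidential user fields with one-way digests before the specification reaches the printing service. This is a minimal sketch under that assumption; the field names and hashing scheme are hypothetical, not drawn from the patent.

```python
import hashlib

def anonymize_spec(spec: dict, confidential_fields: set) -> dict:
    """Replace confidential values with one-way digests so a printing
    service can correlate jobs without learning the underlying data."""
    out = {}
    for key, value in spec.items():
        if key in confidential_fields:
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

spec = {"geometry": "bracket_v2.stl", "material": "PLA",
        "customer_name": "Alice", "shipping_address": "1 Main St"}
safe = anonymize_spec(spec, {"customer_name", "shipping_address"})
print(safe["geometry"], len(safe["customer_name"]))  # bracket_v2.stl 16
```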
-
Publication number: 20040015409
Abstract: A computer system for providing a centralised register of transport provider permanent booking agreements between a plurality of transport providers and a plurality of forwarders is disclosed. The booking agreements relate to capacity on routes between stations in a transport system. The computer system includes a processing unit, an interface unit for communication with said processing unit, and a memory unit. The computer system is configured to receive one or more transport provider allotment templates from a plurality of transport providers. Each allotment template comprises template data representing a permanent booking agreement between a transport provider and a forwarder. The template data comprises data representative of one or more route leg instances and data representative of an agreement capacity value for at least one of said one or more route leg instances. The computer system is configured to store a record of said allotment templates in the memory unit.
Type: Application
Filed: January 7, 2003
Publication date: January 22, 2004
Inventors: Andrew Chittenden, Petros Andreas Demetriades, Todd Howard Morgan, Simon Patterson, Matthew Quinlan, David Ravech, Demetrios Zoppos
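The data model described above — allotment templates holding route leg instances, each with an agreed capacity value, stored in a central register — can be sketched as below. The class and field names are assumptions for illustration; the patent does not specify them.

```python
from dataclasses import dataclass, field

@dataclass
class RouteLeg:
    origin: str
    destination: str
    capacity: int  # agreed capacity value for this route leg instance

@dataclass
class AllotmentTemplate:
    transport_provider: str
    forwarder: str
    legs: list = field(default_factory=list)

class Register:
    """Centralised store of permanent booking agreements."""
    def __init__(self):
        self.templates = []

    def receive(self, template: AllotmentTemplate):
        self.templates.append(template)  # store a record in memory

    def capacity_on(self, origin: str, destination: str) -> int:
        """Total agreed capacity on a leg across all stored templates."""
        return sum(leg.capacity
                   for t in self.templates for leg in t.legs
                   if leg.origin == origin and leg.destination == destination)

reg = Register()
reg.receive(AllotmentTemplate("AirCo", "FastForwarders",
                              [RouteLeg("LHR", "JFK", 40)]))
print(reg.capacity_on("LHR", "JFK"))  # 40
```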