Patents by Inventor Matthew Quinlan

Matthew Quinlan has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO). Brief, hypothetical implementation sketches of the techniques described in these abstracts follow the listing.

  • Publication number: 20220245655
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Application
    Filed: April 25, 2022
    Publication date: August 4, 2022
    Applicant: Deep Labs Inc.
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Patent number: 11341515
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: May 24, 2022
    Assignee: Deep Labs Inc.
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Publication number: 20190385177
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Application
    Filed: August 26, 2019
    Publication date: December 19, 2019
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Patent number: 10395262
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: August 27, 2019
    Assignee: Deep Labs Inc.
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Publication number: 20180082314
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Application
    Filed: November 13, 2017
    Publication date: March 22, 2018
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Patent number: 9818126
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: November 14, 2017
    Assignee: Deep Labs Inc.
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Publication number: 20170308909
    Abstract: Sensor data analysis may include obtaining video data, detecting facial data within the video data, extracting the facial data from the video data, detecting indicator data within the video data, extracting the indicator data from the video data, transforming the extracted facial data into representative facial data, and determining a mood of the person by associating learned mood indicators derived from other detected facial data with the representative facial data. The analysis may include determining that the representative facial data is associated with a complex profile, and determining a context regarding the person within the environment by weighting and processing the determined mood, at least one subset of data representing information about the person of the complex profile, and the indicator data. The analysis may include determining a user experience for the person, and communicating the determined user experience to a device associated with the person.
    Type: Application
    Filed: April 20, 2016
    Publication date: October 26, 2017
    Inventors: Patrick Faith, Matthew Quinlan, Scott Edington
  • Publication number: 20150348019
    Abstract: Payment devices have physical characteristics, and these physical characteristics may change in predictable ways. Backgrounds may also have physical characteristics that change in predictable ways. By examining digital images for these physical characteristics, some of which cannot even be seen, a better decision may be made about whether a transaction is fraudulent.
    Type: Application
    Filed: May 30, 2015
    Publication date: December 3, 2015
    Inventors: Patrick Faith, Theodore Harris, Matthew Quinlan, Nick Giannaris, Phaneendra Gullapalli, Justin Bartee, Scott Edington
  • Publication number: 20150347932
    Abstract: Systems, methods, and platforms can be configured to provide services and devices that power, control, and authenticate 3-D printed objects, such as through an adaptive control module for unique 3-D printer products. Secure processing of product specifications can also be performed to help maintain the anonymity of confidential user information used in the manufacture of products.
    Type: Application
    Filed: February 12, 2015
    Publication date: December 3, 2015
    Inventors: Theodore Harris, Matthew Quinlan, Scott Edington, Patrick Faith
  • Publication number: 20040015409
    Abstract: A computer system for providing a centralised register of transport provider permanent booking agreements between a plurality of transport providers and a plurality of forwarders is disclosed. The booking agreements relate to capacity on routes between stations in a transport system. The computer system includes a processing unit, an interface unit for communication with said processing unit, and a memory unit. The computer system is configured to receive one or more transport provider allotment templates from a plurality of transport providers. Each allotment template comprises template data representing a permanent booking agreement between a transport provider and a forwarder. The template data comprises data representative of one or more route leg instances and data representative of an agreement capacity value for at least one of said one or more route leg instances. The computer system is configured to store a record of said allotment templates in the memory unit.
    Type: Application
    Filed: January 7, 2003
    Publication date: January 22, 2004
    Inventors: Andrew Chittenden, Petros Andreas Demetriades, Todd Howard Morgan, Simon Patterson, Matthew Quinlan, David Ravech, Demetrios Zoppos
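
The abstracts for publication 20220245655 and the related filings 11341515, 20190385177, 10395262, 20180082314, 9818126, and 20170308909 describe a single pipeline: facial data and indicator data are extracted from video, the facial data is transformed into a representative form, a mood is determined by matching against learned mood indicators, and the mood is weighted together with complex-profile data and indicator data to select a user experience. The Python sketch below is a minimal, hypothetical illustration of that flow; it assumes the facial data has already been reduced to a small feature vector, and every name, weight, and threshold is a placeholder rather than the patented method.

    from dataclasses import dataclass

    # Toy "learned mood indicators": one reference feature vector per mood,
    # assumed to have been derived from other detected facial data.
    LEARNED_MOOD_INDICATORS = {
        "calm":       [0.1, 0.2, 0.1],
        "frustrated": [0.8, 0.7, 0.9],
    }

    @dataclass
    class ContextResult:
        mood: str
        context_score: float
        user_experience: str

    def _similarity(a, b):
        # Negative squared distance as a stand-in similarity measure.
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    def analyze(representative_facial_data, indicator_score, profile_weight=None):
        """Combine mood, complex-profile data, and indicator data into a context."""
        # Determine a mood by matching the representative facial data against
        # the learned mood indicators.
        mood = max(LEARNED_MOOD_INDICATORS,
                   key=lambda m: _similarity(representative_facial_data,
                                             LEARNED_MOOD_INDICATORS[m]))

        # Weight the mood, a subset of complex-profile data (when present), and
        # the indicator data into a single context score; the weights are
        # arbitrary placeholders.
        mood_score = 1.0 if mood == "frustrated" else 0.2
        if profile_weight is not None:
            context_score = 0.5 * mood_score + 0.3 * profile_weight + 0.2 * indicator_score
        else:
            context_score = 0.8 * mood_score + 0.2 * indicator_score

        # Map the context onto a user experience that would be communicated to a
        # device associated with the person.
        experience = "offer_assistance" if context_score > 0.6 else "no_action"
        return ContextResult(mood, context_score, experience)

    if __name__ == "__main__":
        # The feature vector stands in for facial data that has been detected,
        # extracted from video, and transformed into a representation.
        print(analyze([0.7, 0.8, 0.85], indicator_score=0.4, profile_weight=0.6))

In practice the representative facial data would come from the face detection, extraction, and transformation steps applied to the video, and the learned mood indicators would be derived from other detected facial data, as the abstract describes.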
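
Publication 20150348019 rests on the observation that a payment device's physical characteristics, and those of its background, change in predictable ways, so an image of a card that does not show the wear its age implies is suspect. The toy sketch below compares measured characteristics against an assumed wear model; the feature names, model, and threshold are illustrative assumptions, not the filed method.

    def expected_wear(age_months):
        # Toy model: edge wear and print fading accumulate predictably with age.
        return {
            "edge_wear":  min(1.0, 0.02 * age_months),
            "print_fade": min(1.0, 0.015 * age_months),
        }

    def fraud_score(measured, age_months):
        """Total deviation between measured and expected physical characteristics."""
        expected = expected_wear(age_months)
        return sum(abs(measured[key] - expected[key]) for key in expected)

    if __name__ == "__main__":
        # Characteristics that would be extracted from the submitted image of the
        # card; hard-coded here for illustration. A pristine-looking card tied to
        # a three-year-old account is the suspicious case.
        measured = {"edge_wear": 0.0, "print_fade": 0.0}
        score = fraud_score(measured, age_months=36)
        print("flag for review" if score > 0.5 else "accept")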
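
Publication 20150347932 pairs authentication of 3-D printed objects with secure handling of product specifications that keeps confidential user information anonymous. The sketch below shows one hypothetical way to realize both pieces, using a hash of the specification as an authentication fingerprint and a simple field filter for anonymization; neither mechanism is taken from the filing.

    import hashlib
    import json

    def spec_fingerprint(spec):
        """Stable hash of a product specification, used to authenticate prints."""
        canonical = json.dumps(spec, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    def anonymize(spec):
        """Remove confidential user fields before the spec reaches manufacturing."""
        confidential = {"customer_name", "shipping_address", "payment_token"}
        return {k: v for k, v in spec.items() if k not in confidential}

    if __name__ == "__main__":
        # Example order with illustrative field names and values.
        order = {
            "model_id": "bracket-v2",
            "material": "PLA",
            "layer_height_mm": 0.2,
            "customer_name": "Jane Example",
            "payment_token": "tok_123",
        }
        printable = anonymize(order)
        registered = spec_fingerprint(printable)
        # Later, a finished object's embedded fingerprint can be checked
        # against the registered one to authenticate it.
        print(registered == spec_fingerprint(printable))  # True

In this sketch the fingerprint of the anonymized specification is what gets registered, so the later authentication check never needs the confidential fields.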
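
Publication 20040015409 describes a centralised register of allotment templates, each recording a permanent booking agreement between a transport provider and a forwarder with an agreement capacity value per route leg. Below is a sketch of one possible data model and capacity query, with assumed class and field names.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class RouteLeg:
        origin: str
        destination: str

    @dataclass
    class AllotmentTemplate:
        provider: str
        forwarder: str
        capacities: dict  # RouteLeg -> agreed capacity for that leg

    @dataclass
    class AllotmentRegister:
        templates: list = field(default_factory=list)

        def add(self, template):
            self.templates.append(template)

        def capacity_for(self, forwarder, leg):
            """Total capacity agreed for a forwarder on a given route leg."""
            return sum(template.capacities.get(leg, 0)
                       for template in self.templates
                       if template.forwarder == forwarder)

    if __name__ == "__main__":
        register = AllotmentRegister()
        leg = RouteLeg("LHR", "JFK")
        register.add(AllotmentTemplate("CarrierA", "ForwarderX", {leg: 500}))
        print(register.capacity_for("ForwarderX", leg))  # 500

A full register would also track the stations, routes, and validity periods named in the filing; this sketch keeps only the provider, forwarder, route leg, and capacity relationship.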