Patents by Inventor Adam Turkelson

Adam Turkelson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11756291
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on an input location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between an input location of an input and a location of the object within the image.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: September 12, 2023
    Assignee: Slyce Acquisition Inc.
    Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
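    The second technique in the abstract above, selecting an object by combining a feature-space distance with a pixel-space distance to a tap location, could be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function name, the dictionary fields, and the weights `alpha`/`beta` are all assumptions.

    ```python
    import math

    def select_object(image_vec, tap_xy, candidates, alpha=0.5, beta=0.5):
        """Pick the candidate minimizing a weighted sum of a feature-space
        distance (image vector vs. object vector) and a pixel-space
        distance (tap location vs. object location in the image)."""
        def euclid(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        best, best_score = None, float("inf")
        for obj in candidates:
            score = (alpha * euclid(image_vec, obj["feature_vec"])
                     + beta * euclid(tap_xy, obj["location"]))
            if score < best_score:
                best, best_score = obj, score
        return best
    ```

    In practice the feature vectors would come from a trained recognition model; here plain Euclidean distance stands in for whatever metric the models actually use.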
  • Patent number: 10977520
    Abstract: Provided is a process that includes: determining that a training set lacks an image of an object with a given pose, context, or camera; composing, based on the determination, a video capture task; obtaining a candidate video; selecting a subset of frames of the candidate video as representative; determining that a given frame among the subset depicts the object from the given pose, context, or camera; and augmenting the training set with the given frame.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: April 13, 2021
    Assignee: Slyce Acquisition Inc.
    Inventors: Adam Turkelson, Kyle Martin, Christopher Birmingham, Sethu Hareesh Kolluru
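    The process in the abstract above, detecting a coverage gap in a training set and filling it with frames from a captured video, could be sketched as follows. The data layout (dictionaries keyed by `pose` and `context`) is an assumption for illustration only.

    ```python
    from itertools import product

    def missing_coverage(training_meta, poses, contexts):
        """Return (pose, context) pairs not represented in the training set."""
        have = {(m["pose"], m["context"]) for m in training_meta}
        return [pc for pc in product(poses, contexts) if pc not in have]

    def augment(training_meta, candidate_frames, gaps):
        """Add candidate video frames that fill a coverage gap, taking one
        representative frame per missing (pose, context) pair."""
        gaps = set(gaps)
        for frame in candidate_frames:
            key = (frame["pose"], frame["context"])
            if key in gaps:
                training_meta.append(frame)
                gaps.discard(key)
        return training_meta
    ```

    The patent also describes composing a video capture task from the detected gap and selecting a representative subset of frames; those steps are omitted here for brevity.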
  • Publication number: 20210004589
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on an input location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between an input location of an input and a location of the object within the image.
    Type: Application
    Filed: August 17, 2020
    Publication date: January 7, 2021
    Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
  • Patent number: 10755128
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on an input location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between an input location of an input and a location of the object within the image.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: August 25, 2020
    Assignee: Slyce Acquisition Inc.
    Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
  • Publication number: 20200210768
    Abstract: Provided is a process that includes: determining that a training set lacks an image of an object with a given pose, context, or camera; composing, based on the determination, a video capture task; obtaining a candidate video; selecting a subset of frames of the candidate video as representative; determining that a given frame among the subset depicts the object from the given pose, context, or camera; and augmenting the training set with the given frame.
    Type: Application
    Filed: December 18, 2019
    Publication date: July 2, 2020
    Inventors: Adam Turkelson, Kyle Martin, Christopher Birmingham, Sethu Hareesh Kolluru
  • Publication number: 20200193206
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on an input location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between an input location of an input and a location of the object within the image.
    Type: Application
    Filed: December 18, 2019
    Publication date: June 18, 2020
    Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
  • Publication number: 20200193552
    Abstract: Provided is a process that includes training a computer-vision object recognition model with a training data set including images depicting objects, each image being labeled with an object identifier of the corresponding object; obtaining a new image; determining a similarity between the new image and an image from the training data set with the trained computer-vision object recognition model; and causing the object identifier of the object to be stored in association with the new image, visual features extracted from the new image, or both.
    Type: Application
    Filed: December 18, 2019
    Publication date: June 18, 2020
    Inventors: Adam Turkelson, Sethu Hareesh Kolluru
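    The label-propagation step in the abstract above, storing a known object identifier against a new image when it is similar enough to a training image, could be sketched as follows. Cosine similarity over feature vectors and the 0.9 threshold are illustrative choices, not details taken from the publication.

    ```python
    import math

    def cosine_sim(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def propagate_label(new_vec, labeled, threshold=0.9):
        """Return the object identifier of the most similar labeled training
        image, or None if nothing clears the similarity threshold."""
        best_id, best = None, threshold
        for vec, obj_id in labeled:
            s = cosine_sim(new_vec, vec)
            if s >= best:
                best_id, best = obj_id, s
        return best_id
    ```

    The returned identifier would then be stored alongside the new image and its extracted features, as the abstract describes.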
  • Patent number: 8503797
Abstract: An automatic document classification system is described that uses lexical and physical features to assign a class cᵢ ∈ C = {c₁, c₂, …, cᵢ} to a document d. The primary lexical features are the result of a feature selection method known as Orthogonal Centroid Feature Selection (OCFS). Additional information may be gathered on character type frequencies (digits, letters, and symbols) within d. Physical information is assembled through image analysis to yield physical attributes such as document dimensionality, text alignment, and color distribution. The resulting lexical and physical information is combined into an input vector X and is used to train a supervised neural network to perform the classification.
    Type: Grant
    Filed: September 5, 2008
    Date of Patent: August 6, 2013
    Assignee: The Neat Company, Inc.
    Inventors: Adam Turkelson, Huanfeng Ma
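    The feature construction in the abstract above, concatenating lexical term counts, character-type frequencies, and physical attributes into one input vector X, could be sketched as follows. The tokenization, the single aspect-ratio physical feature, and the function name are illustrative assumptions; the patent's lexical features come from OCFS, which is not reproduced here.

    ```python
    def build_input_vector(text, width, height, ocfs_terms):
        """Combine lexical features (counts of OCFS-selected terms),
        character-type frequencies, and a physical attribute into one
        input vector for a supervised classifier."""
        tokens = [t.strip(".,:;$") for t in text.lower().split()]
        lexical = [tokens.count(term) for term in ocfs_terms]

        n = max(len(text), 1)
        char_types = [
            sum(c.isdigit() for c in text) / n,                          # digits
            sum(c.isalpha() for c in text) / n,                          # letters
            sum(not c.isalnum() and not c.isspace() for c in text) / n,  # symbols
        ]

        physical = [width / max(height, 1)]  # e.g. page aspect ratio
        return lexical + char_types + physical
    ```

    The resulting vector would be fed to a supervised neural network; a real system would add the other physical attributes the abstract mentions, such as text alignment and color distribution.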
  • Patent number: 8218890
    Abstract: The boundaries of a scanned digital document are determined by identifying the largest connected component in the received digital document and assigning the boundaries of the largest connected component as the boundaries of the received digital document or by using a row by row and column by column analysis of the received digital document to identify horizontal and vertical bands in the digital image having pixels with a value opposite to the value of pixels of a background of the received digital document and assigning the horizontal and vertical bands to be the boundaries of the received digital document. These processes may be performed in series or parallel by a processor associated with a scanner that creates the digital document.
    Type: Grant
    Filed: January 22, 2009
    Date of Patent: July 10, 2012
    Assignee: The Neat Company
    Inventors: Adam Turkelson, Ravi Dwivedula
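    The row-by-row and column-by-column analysis in the abstract above, finding the bands of pixels whose value differs from the background and taking them as the document boundaries, could be sketched for a binarized image as follows. Representing the scan as a list of pixel rows with background value 0 is an assumption for illustration.

    ```python
    def document_bounds(image, background=0):
        """Find the bounding box of a scanned document by scanning each row
        and each column for pixels that differ from the background value.
        Returns (top, left, bottom, right), or None for an empty scan."""
        rows = [r for r, row in enumerate(image)
                if any(p != background for p in row)]
        cols = [c for c in range(len(image[0]))
                if any(row[c] != background for row in image)]
        if not rows or not cols:
            return None
        return (rows[0], cols[0], rows[-1], cols[-1])
    ```

    The patent's alternative approach, taking the largest connected component as the document, and the option of running both analyses in series or in parallel, are not shown here.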
  • Publication number: 20090185752
    Abstract: The boundaries of a scanned digital document are determined by identifying the largest connected component in the received digital document and assigning the boundaries of the largest connected component as the boundaries of the received digital document or by using a row by row and column by column analysis of the received digital document to identify horizontal and vertical bands in the digital image having pixels with a value opposite to the value of pixels of a background of the received digital document and assigning the horizontal and vertical bands to be the boundaries of the received digital document. These processes may be performed in series or parallel by a processor associated with a scanner that creates the digital document.
    Type: Application
    Filed: January 22, 2009
    Publication date: July 23, 2009
    Applicant: DIGITAL BUSINESS PROCESSES, INC.
    Inventors: Ravi Dwivedula, Adam Turkelson
  • Publication number: 20090067729
Abstract: An automatic document classification system is described that uses lexical and physical features to assign a class cᵢ ∈ C = {c₁, c₂, …, cᵢ} to a document d. The primary lexical features are the result of a feature selection method known as Orthogonal Centroid Feature Selection (OCFS). Additional information may be gathered on character type frequencies (digits, letters, and symbols) within d. Physical information is assembled through image analysis to yield physical attributes such as document dimensionality, text alignment, and color distribution. The resulting lexical and physical information is combined into an input vector X and is used to train a supervised neural network to perform the classification.
    Type: Application
    Filed: September 5, 2008
    Publication date: March 12, 2009
    Applicant: Digital Business Processes, Inc.
    Inventors: Adam Turkelson, Huanfeng Ma