Patents by Inventor Adam Turkelson
Adam Turkelson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11756291
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on the location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between the input location and a location of the object within the image.
Type: Grant
Filed: August 17, 2020
Date of Patent: September 12, 2023
Assignee: Slyce Acquisition Inc.
Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
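The second technique in this abstract scores each depicted object by two distances: one in feature space and one in pixel space. A minimal sketch, assuming candidates arrive as dicts with hypothetical `feature_vec` and `center_xy` fields and that the two distances are blended linearly (the abstract does not specify how they are combined):

```python
import math

def select_object(image_vec, tap_xy, candidates, alpha=0.5):
    # Pick the candidate object with the lowest combined distance score.
    # The linear weighting `alpha` is an illustrative assumption.
    best, best_score = None, float("inf")
    for obj in candidates:
        # Distance in feature space between the image's feature vector
        # and the candidate object's feature vector.
        d_feat = math.dist(image_vec, obj["feature_vec"])
        # Distance in pixel space between the input location and the
        # object's location within the image.
        d_pix = math.dist(tap_xy, obj["center_xy"])
        score = alpha * d_feat + (1 - alpha) * d_pix
        if score < best_score:
            best, best_score = obj, score
    return best
```

A tap near an object whose features also match the image would therefore win over a distant or visually dissimilar one.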
-
Patent number: 10977520
Abstract: Provided is a process that includes: determining that a training set lacks an image of an object with a given pose, context, or camera; composing, based on the determination, a video capture task; obtaining a candidate video; selecting a subset of frames of the candidate video as representative; determining that a given frame among the subset depicts the object from the given pose, context, or camera; and augmenting the training set with the given frame.
Type: Grant
Filed: December 18, 2019
Date of Patent: April 13, 2021
Assignee: Slyce Acquisition Inc.
Inventors: Adam Turkelson, Kyle Martin, Christopher Birmingham, Sethu Hareesh Kolluru
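The steps enumerated in this abstract can be sketched as a pipeline. The helper interfaces below (`get_video`, `pick_frames`, `depicts`) are hypothetical stand-ins for the video-capture, frame-selection, and detection stages, not the patented components:

```python
def augment_training_set(training_set, needed, get_video, pick_frames, depicts):
    # `needed` lists the pose/context/camera gaps found in the training set.
    for spec in needed:
        # Compose a video capture task from the determined gap.
        task = {"object": spec["object"], "pose": spec["pose"]}
        # Obtain a candidate video for the task.
        video = get_video(task)
        # Select a representative subset of frames, then keep only frames
        # that depict the object from the required pose/context/camera.
        for frame in pick_frames(video):
            if depicts(frame, spec):
                training_set.append((frame, spec["object"]))
    return training_set
```

In practice the stubbed stages would be a crowdsourcing task, a keyframe selector, and a detector, respectively.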
-
Publication number: 20210004589
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on the location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between the input location and a location of the object within the image.
Type: Application
Filed: August 17, 2020
Publication date: January 7, 2021
Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
-
Patent number: 10755128
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on the location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between the input location and a location of the object within the image.
Type: Grant
Filed: December 18, 2019
Date of Patent: August 25, 2020
Assignee: Slyce Acquisition Inc.
Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
-
Publication number: 20200210768
Abstract: Provided is a process that includes: determining that a training set lacks an image of an object with a given pose, context, or camera; composing, based on the determination, a video capture task; obtaining a candidate video; selecting a subset of frames of the candidate video as representative; determining that a given frame among the subset depicts the object from the given pose, context, or camera; and augmenting the training set with the given frame.
Type: Application
Filed: December 18, 2019
Publication date: July 2, 2020
Inventors: Adam Turkelson, Kyle Martin, Christopher Birmingham, Sethu Hareesh Kolluru
-
Publication number: 20200193206
Abstract: Provided is a technique for determining a context of an image and an object depicted by the image based on the context. A trained context classification model may determine a context of an image, and a trained object recognition model may determine an object depicted by the image based on the image and the context. Also provided is a technique for determining an object depicted within an image based on the location of an input detected by a display screen. An object depicted within an image may be detected based on a distance in feature space between an image feature vector of the image and a feature vector of the object, and a distance in pixel space between the input location and a location of the object within the image.
Type: Application
Filed: December 18, 2019
Publication date: June 18, 2020
Inventors: Adam Turkelson, Kyle Martin, Sethu Hareesh Kolluru
-
Publication number: 20200193552
Abstract: Provided is a process that includes training a computer-vision object recognition model with a training data set including images depicting objects, each image being labeled with an object identifier of the corresponding object; obtaining a new image; determining a similarity between the new image and an image from the training data set with the trained computer-vision object recognition model; and causing the object identifier of the object to be stored in association with the new image, visual features extracted from the new image, or both.
Type: Application
Filed: December 18, 2019
Publication date: June 18, 2020
Inventors: Adam Turkelson, Sethu Hareesh Kolluru
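The similarity step this abstract describes can be sketched as nearest-neighbor matching over feature vectors produced by the trained model. The cosine metric, the acceptance threshold, and the `(vector, object_id)` record layout are illustrative assumptions:

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def label_new_image(new_vec, training, threshold=0.9):
    # Assign the object identifier of the most similar training image to
    # the new image; reject matches below the (assumed) threshold.
    best_id, best_sim = None, -1.0
    for vec, object_id in training:
        sim = cosine_sim(new_vec, vec)
        if sim > best_sim:
            best_id, best_sim = object_id, sim
    return best_id if best_sim >= threshold else None
```

The returned identifier would then be stored alongside the new image and its extracted features, growing the labeled set without manual annotation.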
-
Patent number: 8503797
Abstract: An automatic document classification system is described that uses lexical and physical features to assign a class c_i ∈ C = {c_1, c_2, . . . , c_i} to a document d. The primary lexical features are the result of a feature selection method known as Orthogonal Centroid Feature Selection (OCFS). Additional information may be gathered on character-type frequencies (digits, letters, and symbols) within d. Physical information is assembled through image analysis to yield physical attributes such as document dimensionality, text alignment, and color distribution. The resulting lexical and physical information is combined into an input vector X and is used to train a supervised neural network to perform the classification.
Type: Grant
Filed: September 5, 2008
Date of Patent: August 6, 2013
Assignee: The Neat Company, Inc.
Inventors: Adam Turkelson, Huanfeng Ma
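The assembly of the input vector X from lexical and physical features can be sketched as below. The OCFS term selection itself is omitted (its output is taken as a given term list), and the concrete `image_stats` fields are illustrative assumptions, not the patented feature set:

```python
def build_input_vector(text, image_stats, ocfs_terms):
    # Lexical features: counts of the OCFS-selected terms in the document.
    tokens = text.lower().split()
    lexical = [float(tokens.count(t)) for t in ocfs_terms]
    # Character-type frequencies: digits, letters, and symbols within d.
    n = max(len(text), 1)
    char_freqs = [
        sum(c.isdigit() for c in text) / n,
        sum(c.isalpha() for c in text) / n,
        sum(not c.isalnum() and not c.isspace() for c in text) / n,
    ]
    # Physical features from image analysis (assumed layout): document
    # dimensionality, text alignment, and color distribution summary.
    physical = [image_stats["width"], image_stats["height"],
                image_stats["alignment"], image_stats["mean_color"]]
    # Combined input vector X for the supervised neural network.
    return lexical + char_freqs + physical
```

The resulting fixed-length vector is what a supervised classifier would be trained on, one vector per labeled document.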
-
Patent number: 8218890
Abstract: The boundaries of a scanned digital document are determined in one of two ways: by identifying the largest connected component in the received digital document and assigning its boundaries as the boundaries of the document; or by performing a row-by-row and column-by-column analysis of the document to identify horizontal and vertical bands in the digital image whose pixels have a value opposite to that of the background pixels, and assigning those bands as the boundaries of the document. These processes may be performed in series or in parallel by a processor associated with the scanner that creates the digital document.
Type: Grant
Filed: January 22, 2009
Date of Patent: July 10, 2012
Assignee: The Neat Company
Inventors: Adam Turkelson, Ravi Dwivedula
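The first of the two strategies, finding the largest connected component and taking its extent as the document boundary, can be sketched with a breadth-first search over a binarized image. The grid-of-ints representation and 4-connectivity are illustrative assumptions:

```python
from collections import deque

def largest_component_bounds(grid):
    # Return the bounding box (rmin, cmin, rmax, cmax) of the largest
    # 4-connected component of foreground (1) pixels in a binary grid.
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    best_box, best_size = None, 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                # BFS to collect this component and track its extent.
                q = deque([(r, c)])
                seen[r][c] = True
                size, rmin, rmax, cmin, cmax = 0, r, r, c, c
                while q:
                    y, x = q.popleft()
                    size += 1
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if size > best_size:
                    best_size, best_box = size, (rmin, cmin, rmax, cmax)
    return best_box
```

Small stray components (speckle noise outside the page) are ignored because only the largest component's bounding box is kept.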
-
Publication number: 20090185752
Abstract: The boundaries of a scanned digital document are determined in one of two ways: by identifying the largest connected component in the received digital document and assigning its boundaries as the boundaries of the document; or by performing a row-by-row and column-by-column analysis of the document to identify horizontal and vertical bands in the digital image whose pixels have a value opposite to that of the background pixels, and assigning those bands as the boundaries of the document. These processes may be performed in series or in parallel by a processor associated with the scanner that creates the digital document.
Type: Application
Filed: January 22, 2009
Publication date: July 23, 2009
Applicant: DIGITAL BUSINESS PROCESSES, INC.
Inventors: Ravi Dwivedula, Adam Turkelson
-
Publication number: 20090067729
Abstract: An automatic document classification system is described that uses lexical and physical features to assign a class c_i ∈ C = {c_1, c_2, . . . , c_i} to a document d. The primary lexical features are the result of a feature selection method known as Orthogonal Centroid Feature Selection (OCFS). Additional information may be gathered on character-type frequencies (digits, letters, and symbols) within d. Physical information is assembled through image analysis to yield physical attributes such as document dimensionality, text alignment, and color distribution. The resulting lexical and physical information is combined into an input vector X and is used to train a supervised neural network to perform the classification.
Type: Application
Filed: September 5, 2008
Publication date: March 12, 2009
Applicant: Digital Business Processes, Inc.
Inventors: Adam Turkelson, Huanfeng Ma