Patents by Inventor Adam Wiggen Kraft

Adam Wiggen Kraft has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10924676
    Abstract: Visual effects for elements of interest can be displayed within a live camera view in real time, or substantially in real time, using a processing pipeline that does not display an acquired image until it has been updated with the effects. In various embodiments, software-based approaches, such as fast convolution algorithms, and/or hardware-based approaches, such as using a graphics processing unit (GPU), can be used to reduce the time between acquiring an image and displaying the image with various visual effects. These visual effects can include automatically highlighting elements, augmenting the color, style, and/or size of elements, casting a shadow on elements, erasing elements, substituting elements, or shaking and jumbling elements, among other effects.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: February 16, 2021
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Colin Jon Taylor
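
    The live-view pipeline this abstract describes can be illustrated with a minimal sketch, assuming OpenCV for capture and drawing; the fixed region of interest and the highlight-by-blur effect are placeholders for whatever detector and effect an actual implementation would use.

    ```python
    import cv2

    def render_with_effects(frame, roi):
        """Blur the background and outline the element of interest before display."""
        x, y, w, h = roi
        out = cv2.GaussianBlur(frame, (0, 0), sigmaX=5)      # fast software-side convolution
        out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]      # keep the element of interest sharp
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
        return out

    def live_view():
        cap = cv2.VideoCapture(0)
        roi = (100, 100, 200, 150)   # placeholder; a detector would supply this per frame
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # The frame is only shown after the effects have been applied.
            cv2.imshow("live view", render_with_effects(frame, roi))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()
    ```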
  • Patent number: 10607362
    Abstract: Disclosed is a method and system for processing images from an aerial imaging device. The method includes receiving a first image of a geographical area having a first resolution. The method transmits the first image to a machine learning model to identify an area of interest containing an object of interest. The method receives a second image of the geographical area having a second resolution higher than the first resolution. The method transmits the second image to the machine learning model to determine a likelihood that the area of interest contains the object of interest. The method trains the machine learning model to filter out features corresponding to the area of interest in images having the first resolution if the likelihood is below a threshold. The method transmits a visual representation of the object of interest to a user device if the likelihood exceeds the threshold.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: March 31, 2020
    Assignee: ORBITAL INSIGHT, INC.
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Alexander Bogdanov Avtanski, Daniel Michael Sammons, Jasper Lin, Jason D. Lohn
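
    A minimal sketch of the coarse-to-fine filtering loop described in the abstract above, assuming scikit-learn-style classifiers with predict_proba; coarse_model, fine_model, and fetch_high_res are illustrative names, not the patented implementation.

    ```python
    def filter_candidates(coarse_model, fine_model, low_res_tiles, fetch_high_res, threshold=0.5):
        """Score low-resolution tiles, confirm promising ones at high resolution, and
        collect low-resolution false positives for retraining the coarse model."""
        detections, retrain_negatives = [], []
        for tile_id, low_tile in low_res_tiles:
            # Step 1: the coarse model flags candidate areas of interest in the low-res image.
            if coarse_model.predict_proba(low_tile.reshape(1, -1))[0, 1] < threshold:
                continue
            # Step 2: fetch the same area at higher resolution and re-score it.
            high_tile = fetch_high_res(tile_id)
            p = fine_model.predict_proba(high_tile.reshape(1, -1))[0, 1]
            if p >= threshold:
                detections.append((tile_id, p))               # report to the user device
            else:
                retrain_negatives.append((tile_id, low_tile)) # teach the coarse model to filter this out
        return detections, retrain_negatives
    ```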
  • Patent number: 10467674
    Abstract: Various embodiments enable a customer to quickly search for additional information (e.g., product variations, sizes, price, and availability) related to a specific product. For example, the customer can request additional information about a specific product by submitting an image of the specific product from a computing device. In one embodiment, the location of the customer can be determined based on the image submitted by the customer. Product features can be extracted from the image using various image processing and text recognition algorithms and then used to match products that are within view of the customer. Search results with additional information about the specific product can be provided to the computing device for presentation to the customer.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: November 5, 2019
    Assignee: A9.com, Inc.
    Inventor: Adam Wiggen Kraft
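
    A rough sketch of the image-to-product-search idea, assuming pytesseract for text extraction; the catalog schema and the location filter are illustrative assumptions, not the patented matching pipeline.

    ```python
    import pytesseract
    from PIL import Image

    def search_product(image_path, catalog, customer_location=None):
        """Match text extracted from a submitted product image against a catalog,
        optionally restricted to products near the customer's location."""
        text = pytesseract.image_to_string(Image.open(image_path))
        tokens = {t.lower() for t in text.split() if len(t) > 2}
        results = []
        for product in catalog:                   # catalog: list of dicts (illustrative schema)
            if customer_location and product.get("store") != customer_location:
                continue
            score = len(tokens & set(product.get("keywords", [])))
            if score:
                results.append((score, product))
        results.sort(key=lambda r: r[0], reverse=True)
        return [product for _, product in results]
    ```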
  • Publication number: 20190220696
    Abstract: A method comprises accessing a plurality of images of a substantially same geographical area, each image of the plurality of images captured at a separate time from each other using a separate imaging sensor, each imaging sensor capturing information corresponding to a different spectral band. The method further comprises identifying, for the images, a set of blobs, each blob comprising a plurality of adjacent pixels, wherein each image of the plurality of images includes a blob of the set of blobs, and wherein each blob in the set of blobs has a location in each image that differs from a location of other blobs in the set of blobs. The method further comprises generating a score indicating a likelihood that the set of blobs corresponds to a moving object, and storing an indication that the set of blobs corresponds to the moving object responsive to the generated score exceeding a threshold.
    Type: Application
    Filed: March 22, 2019
    Publication date: July 18, 2019
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Michael Alan Baxter, Steven Jeffrey Bickerton
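
    The blob-and-score step can be sketched with OpenCV connected components. The displacement-consistency score below is an assumption standing in for whatever scoring the claimed method actually uses; inputs are assumed to be 8-bit single-band images.

    ```python
    import cv2
    import numpy as np

    def blob_centroids(band_image, min_area=4):
        """Centroids of bright pixel blobs in one spectral band (8-bit image)."""
        _, mask = cv2.threshold(band_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
        return [tuple(centroids[i]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

    def moving_object_score(bands, capture_times, max_speed=50.0):
        """Score how consistently one blob per band moves between capture times."""
        tracks = [blob_centroids(b) for b in bands]
        if any(not t for t in tracks):
            return 0.0
        pts = np.array([t[0] for t in tracks])               # greedy: first blob per band
        dts = np.diff(np.asarray(capture_times, dtype=float))
        speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) / dts
        if np.any(speeds > max_speed):
            return 0.0
        return float(1.0 / (1.0 + np.std(speeds)))           # steadier motion scores higher
    ```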
  • Publication number: 20190180464
    Abstract: Disclosed is a method and system for processing images from an aerial imaging device. The method includes receiving a first image of a geographical area having a first resolution. The method transmits the first image to a machine learning model to identify an area of interest containing an object of interest. The method receives a second image of the geographical area having a second resolution higher than the first resolution. The method transmits the second image to the machine learning model to determine a likelihood that the area of interest contains the object of interest. The method trains the machine learning model to filter out features corresponding to the area of interest in images having the first resolution if the likelihood is below a threshold. The method transmits a visual representation of the object of interest to a user device if the likelihood exceeds the threshold.
    Type: Application
    Filed: February 15, 2019
    Publication date: June 13, 2019
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Alexander Bogdanov Avtanski, Daniel Michael Sammons, Jasper Lin, Jason D. Lohn
  • Patent number: 10255523
    Abstract: Disclosed is a method and system for processing images from an aerial imaging device. A moving vehicle analysis system receives images from an aerial imaging device. The system may perform edge analysis in the images to identify pairs of edges corresponding to a road. The system may identify pixel blobs in the images including adjacent pixels matching each other based on a pixel attribute. The system uses a machine learning model for generating an output identifying moving vehicles in the images. The system determines a count of the moving vehicles captured by the images, where each moving vehicle is associated with corresponding pixel blobs.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: April 9, 2019
    Assignee: ORBITAL INSIGHT, INC.
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Michael Alan Baxter, Steven Jeffrey Bickerton
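
    A simplified sketch of the moving-vehicle counting pipeline, assuming two consecutive aerial frames; the Canny/frame-difference steps and the vehicle_classifier callable are crude stand-ins for the edge analysis and machine learning model described in the abstract.

    ```python
    import cv2
    import numpy as np

    def count_moving_vehicles(frame_a, frame_b, vehicle_classifier, min_blob_area=6):
        """Count blobs that move between two aerial frames and that the
        (illustrative) classifier accepts as vehicles."""
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        # Crude proxy for the road edge analysis: only keep motion near strong edges.
        road_mask = cv2.dilate(cv2.Canny(gray_a, 50, 150), np.ones((15, 15), np.uint8))
        diff = cv2.absdiff(gray_a, gray_b)                    # motion shows up as differences
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        motion = cv2.bitwise_and(motion, road_mask)
        n, _, stats, _ = cv2.connectedComponentsWithStats(motion)
        count = 0
        for i in range(1, n):                                 # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] < min_blob_area:
                continue
            x, y, w, h = stats[i, :4]
            if vehicle_classifier(frame_b[y:y + h, x:x + w]): # hypothetical callable
                count += 1
        return count
    ```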
  • Patent number: 10217236
    Abstract: Disclosed is a method and system for processing images from an aerial imaging device. The method includes receiving a first image of a geographical area having a first resolution. The method transmits the first image to a machine learning model to identify an area of interest containing an object of interest. The method receives a second image of the geographical area having a second resolution higher than the first resolution. The method transmits the second image to the machine learning model to determine a likelihood that the area of interest contains the object of interest. The method trains the machine learning model to filter out features corresponding to the area of interest in images having the first resolution if the likelihood is below a threshold. The method transmits a visual representation of the object of interest to a user device if the likelihood exceeds the threshold.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: February 26, 2019
    Assignee: ORBITAL INSIGHT, INC.
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Alexander Bogdanov Avtanski, Daniel Michael Sammons, Jasper Lin, Jason D. Lohn
  • Publication number: 20180232900
    Abstract: Disclosed is a method and system for processing images from an aerial imaging device. The method includes receiving a first image of a geographical area having a first resolution. The method transmits the first image to a machine learning model to identify an area of interest containing an object of interest. The method receives a second image of the geographical area having a second resolution higher than the first resolution. The method transmits the second image to the machine learning model to determine a likelihood that the area of interest contains the object of interest. The method trains the machine learning model to filter out features corresponding to the area of interest in images having the first resolution if the likelihood is below a threshold. The method transmits a visual representation of the object of interest to a user device if the likelihood exceeds the threshold.
    Type: Application
    Filed: April 12, 2018
    Publication date: August 16, 2018
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Alexander Bogdanov Avtanski, Daniel Michael Sammons, Jasper Lin, Jason D. Lohn
  • Patent number: 10038839
    Abstract: Various approaches provide for detecting and recognizing text to enable a user to perform various functions or tasks. For example, a user could point a camera at an object with text, in order to capture an image of that object. The camera can be integrated with a portable computing device that is capable of taking the image and processing the image (or providing the image for processing) to recognize, identify, and/or isolate the text in order to send the image of the object as well as recognized text to an application, function, or system, such as an electronic marketplace.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: July 31, 2018
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Kathy Wing Lam Ma, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
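
    A minimal sketch of capturing text and forwarding both the image and the recognized text to a downstream system, assuming pytesseract and the requests library; the endpoint URL and form fields are placeholders, not a real marketplace API.

    ```python
    import pytesseract
    import requests
    from PIL import Image

    def recognize_and_send(image_path, endpoint="https://marketplace.example/search"):
        """Recognize text in a captured image and post the image plus the text
        to a downstream application (the endpoint is a placeholder)."""
        text = pytesseract.image_to_string(Image.open(image_path)).strip()
        with open(image_path, "rb") as fh:
            response = requests.post(endpoint,
                                     data={"recognized_text": text},
                                     files={"image": fh})
        return response.status_code, text
    ```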
  • Publication number: 20180167559
    Abstract: Visual effects for elements of interest can be displayed within a live camera view in real time, or substantially in real time, using a processing pipeline that does not display an acquired image until it has been updated with the effects. In various embodiments, software-based approaches, such as fast convolution algorithms, and/or hardware-based approaches, such as using a graphics processing unit (GPU), can be used to reduce the time between acquiring an image and displaying the image with various visual effects. These visual effects can include automatically highlighting elements, augmenting the color, style, and/or size of elements, casting a shadow on elements, erasing elements, substituting elements, or shaking and jumbling elements, among other effects.
    Type: Application
    Filed: February 7, 2018
    Publication date: June 14, 2018
    Inventors: Adam Wiggen Kraft, Colin Jon Taylor
  • Patent number: 9934526
    Abstract: Various embodiments enable a process to automatically attempt to select, from an image frame, the words most relevant to products available for purchase from an electronic marketplace. For example, an image frame containing text can be obtained and analyzed with an optical character recognition engine. The recognized words can then be preprocessed using various filtering and scoring techniques to narrow down a volume of text to a few relevant query terms. These query terms can then be sent to a search engine associated with the electronic marketplace to return relevant products to a user.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: April 3, 2018
    Assignee: A9.com, Inc.
    Inventors: Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Adam Wiggen Kraft, Sunil Ramesh
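
    The filter-and-score step can be illustrated with a short sketch; the stopword list and the frequency/length ranking are simple assumptions, not the scoring techniques claimed in the patent.

    ```python
    import re
    from collections import Counter

    STOPWORDS = {"the", "and", "for", "with", "from", "this", "that", "are", "was"}

    def select_query_terms(ocr_words, max_terms=3):
        """Filter short words, stopwords, and non-alphabetic tokens, then rank the
        rest by frequency and length to pick a few query terms."""
        cleaned = [w.lower() for w in ocr_words
                   if re.fullmatch(r"[A-Za-z]{3,}", w) and w.lower() not in STOPWORDS]
        counts = Counter(cleaned)
        ranked = sorted(counts, key=lambda w: (counts[w], len(w)), reverse=True)
        return ranked[:max_terms]

    # select_query_terms("Acme Turbo Blender 5000 stainless steel blender".split())
    # might return ['blender', 'stainless', 'turbo'].
    ```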
  • Patent number: 9922052
    Abstract: Various embodiments provide a user with a capability to customize multiple image data stores, where each data store can be used to provide content tailored to different users having different interests, settings, or notification demands. For example, users can submit images and modify processing parameters to tune an image matching system to their, or their customer's, individual desires. Accordingly, content can be delivered to a computing device in response to a query image sent by the computing device to a matching system containing the customized image data stores. The delivered content can be related to, or derived from, an image in a respective data store that matches the provided query image.
    Type: Grant
    Filed: April 26, 2013
    Date of Patent: March 20, 2018
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Himanshu Arora, Max Delgadillo, Jr., Sunil Ramesh, Atul Kumar
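
    A per-customer image data store with a tunable matching parameter might look roughly like the sketch below, assuming OpenCV ORB descriptors; the class layout and threshold are illustrative, not the patented system.

    ```python
    import cv2

    class CustomerImageStore:
        """One image data store per customer: ORB descriptors plus associated content."""

        def __init__(self, match_threshold=40):
            self.orb = cv2.ORB_create()
            self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            self.entries = []                        # list of (descriptors, content)
            self.match_threshold = match_threshold   # customer-tunable processing parameter

        @staticmethod
        def _gray(image):
            return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image

        def add_image(self, image, content):
            _, desc = self.orb.detectAndCompute(self._gray(image), None)
            if desc is not None:
                self.entries.append((desc, content))

        def query(self, image):
            _, q = self.orb.detectAndCompute(self._gray(image), None)
            if q is None:
                return None
            best_content, best_count = None, 0
            for desc, content in self.entries:
                good = [m for m in self.matcher.match(q, desc) if m.distance < 40]
                if len(good) > best_count:
                    best_count, best_content = len(good), content
            return best_content if best_count >= self.match_threshold else None
    ```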
  • Patent number: 9912874
    Abstract: Visual effects for elements of interest can be displayed within a live camera view in real time, or substantially in real time, using a processing pipeline that does not display an acquired image until it has been updated with the effects. In various embodiments, software-based approaches, such as fast convolution algorithms, and/or hardware-based approaches, such as using a graphics processing unit (GPU), can be used to reduce the time between acquiring an image and displaying the image with various visual effects. These visual effects can include automatically highlighting elements, augmenting the color, style, and/or size of elements, casting a shadow on elements, erasing elements, substituting elements, or shaking and jumbling elements, among other effects.
    Type: Grant
    Filed: January 11, 2016
    Date of Patent: March 6, 2018
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Colin Jon Taylor
  • Patent number: 9870633
    Abstract: Various embodiments enable a computing device to perform tasks such as highlighting words in an augmented reality view that are important to a user. For example, word lists can be generated and the user, by pointing a camera of a computing device at a volume of text, can cause words from the word list within the volume of text to be highlighted in a live field of view of the camera displayed thereon. Accordingly, users can quickly identify textual information that is meaningful to them in an augmented reality view, aiding them in sifting through real-world text.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: January 16, 2018
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Sunil Ramesh, Colin Jon Taylor, David Creighton Mott
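
    A sketch of highlighting word-list matches in a single camera frame, assuming pytesseract word-level bounding boxes and OpenCV drawing; in a live augmented reality view this would run on each frame of the camera feed.

    ```python
    import cv2
    import pytesseract

    def highlight_words(frame, word_list):
        """Draw boxes around any words from the user's word list found in the frame."""
        wanted = {w.lower() for w in word_list}
        data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
        out = frame.copy()
        for i, word in enumerate(data["text"]):
            if word.strip().lower() in wanted:
                x, y = data["left"][i], data["top"][i]
                w, h = data["width"][i], data["height"][i]
                cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 255), 2)
        return out
    ```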
  • Publication number: 20170272648
    Abstract: Various approaches provide for detecting and recognizing text to enable a user to perform various functions or tasks. For example, a user could point a camera at an object with text, in order to capture an image of that object. The camera can be integrated with a portable computing device that is capable of taking the image and processing the image (or providing the image for processing) to recognize, identify, and/or isolate the text in order to send the image of the object as well as recognized text to an application, function, or system, such as an electronic marketplace.
    Type: Application
    Filed: June 1, 2017
    Publication date: September 21, 2017
    Inventors: Adam Wiggen Kraft, Kathy Wing Lam Ma, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
  • Patent number: 9736361
    Abstract: Various approaches provide for detecting and recognizing text to enable a user to perform various functions or tasks. For example, a user could point a camera at an object with text, in order to capture an image of that object. The camera can be integrated with a portable computing device that is capable of taking the image and processing the image (or providing the image for processing) to recognize, identify, and/or isolate the text in order to send the image of the object as well as recognized text to an application, function, or system, such as an electronic marketplace.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: August 15, 2017
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Kathy Wing Lam Ma, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
  • Patent number: 9721156
    Abstract: Various embodiments describe systems and methods that enable a computing device of a user to capture an image of a gift card, or other such monetary device containing a code, with a camera, or otherwise receive an image of that gift card. The computing device can be configured to recognize codes, such as digit claim codes, of the gift card by using one or more image processing, computer vision, and/or machine learning algorithms. After successful detection and verification of a claim code, money or funds deposited in, or otherwise available from, an account associated with the gift card can be utilized, for example applied to a purchase or deposited into the user's account. In many instances, a user interface (UI) can be provided on the computing device for the user to capture an image of a gift card and redeem the funds from the corresponding card.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: August 1, 2017
    Assignee: A9.com, Inc.
    Inventor: Adam Wiggen Kraft
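
    A rough sketch of the claim-code recognition flow, assuming pytesseract; the code-format regex and the redeem_endpoint callable are hypothetical, since the actual claim-code structure and redemption service are not specified here.

    ```python
    import re
    import pytesseract
    from PIL import Image

    # Illustrative code format; real claim codes may differ.
    CLAIM_CODE_PATTERN = re.compile(r"\b[A-Z0-9]{4}-[A-Z0-9]{6}-[A-Z0-9]{4}\b")

    def extract_claim_code(image_path):
        """OCR a gift card photo and look for something shaped like a claim code."""
        text = pytesseract.image_to_string(Image.open(image_path)).upper()
        match = CLAIM_CODE_PATTERN.search(text)
        return match.group(0) if match else None

    def redeem(image_path, account, redeem_endpoint):
        """Verify the detected code with a (hypothetical) redemption service
        before crediting the user's account."""
        code = extract_claim_code(image_path)
        if code is None:
            return False
        return redeem_endpoint(code, account)    # stand-in for the verification step
    ```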
  • Publication number: 20170140245
    Abstract: Disclosed is a method and system for processing images from an aerial imaging device. A moving vehicle analysis system receives images from an aerial imaging device. The system may perform edge analysis in the images to identify pairs of edges corresponding to a road. The system may identify pixel blobs in the images including adjacent pixels matching each other based on a pixel attribute. The system uses a machine learning model for generating an output identifying moving vehicles in the images. The system determines a count of the moving vehicles captured by the images, where each moving vehicle is associated with corresponding pixel blobs.
    Type: Application
    Filed: November 14, 2016
    Publication date: May 18, 2017
    Inventors: Adam Wiggen Kraft, Boris Aleksandrovich Babenko, Michael Alan Baxter, Steven Jeffrey Bickerton
  • Publication number: 20170103560
    Abstract: Various embodiments enable a computing device to perform tasks such as highlighting words in an augmented reality view that are important to a user. For example, word lists can be generated and the user, by pointing a camera of a computing device at a volume of text, can cause words from the word list within the volume of text to be highlighted in a live field of view of the camera displayed thereon. Accordingly, users can quickly identify textual information that is meaningful to them in an augmented reality view, aiding them in sifting through real-world text.
    Type: Application
    Filed: December 21, 2016
    Publication date: April 13, 2017
    Inventors: Adam Wiggen Kraft, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Sunil Ramesh, Colin Jon Taylor, David Creighton Mott
  • Patent number: 9582913
    Abstract: Various embodiments enable a computing device to perform tasks such as highlighting words in an augmented reality view that are important to a user. For example, word lists can be generated and the user, by pointing a camera of a computing device at a volume of text, can cause words from the word list within the volume of text to be highlighted in a live field of view of the camera displayed thereon. Accordingly, users can quickly identify textual information that is meaningful to them in an augmented reality view, aiding them in sifting through real-world text.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: February 28, 2017
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Sunil Ramesh, Colin Jon Taylor, David Creighton Mott