Patents Assigned to A9.com
  • Patent number: 10043109
    Abstract: A set of training images is obtained by analyzing text associated with various images to identify images likely demonstrating a visual attribute. Localization can be used to extract patches corresponding to these attributes, which can then have features or feature vectors determined to train, for example, a convolutional neural network. A query image can be received and analyzed using the trained network to determine a set of items whose images demonstrate visual similarity to the query image at least with respect to the attribute of interest. The similarity can be output from the network or determined using distances in attribute space. Content for at least a determined number of highest ranked, or most similar, items can then be provided in response to the query image.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: August 7, 2018
    Assignee: A9.COM, INC.
    Inventors: Ming Du, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Maya Kabkab, Aishwarya Natesh, Colin Jon Taylor
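A minimal sketch of the final ranking step described in the abstract for patent 10043109 above: ranking catalog items by their distance to a query in an attribute embedding space. It assumes feature vectors have already been produced by a trained model (the patent describes training a convolutional neural network on localized patches); the cosine-distance choice and all names are illustrative, not taken from the patent.

```python
import numpy as np

def rank_by_attribute_distance(query_vec, item_vecs, top_k=5):
    """Return indices of the top_k items closest to the query in attribute space."""
    query = query_vec / np.linalg.norm(query_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    distances = 1.0 - items @ query  # cosine distance
    return np.argsort(distances)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    catalog = rng.normal(size=(100, 128))              # 100 items, 128-dim vectors
    query = catalog[42] + 0.05 * rng.normal(size=128)  # noisy copy of item 42
    print(rank_by_attribute_distance(query, catalog))  # item 42 should rank first
```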
  • Patent number: 10037614
    Abstract: Approaches provide for minimizing variations in the height of a camera of a computing device when estimating the distance to objects represented in image data captured by the camera. For example, a front-facing camera of a computing device can be used to capture a live camera view of a user. An application can analyze the image data to locate features of the user's face for purposes of aligning the user with the computing device. As the position and/or orientation of the device changes with respect to the user, the image data can be analyzed to detect whether the location of a representation of a feature of the user aligns with an alignment element. Once the feature is aligned with the alignment element, a rear-facing camera (or other camera) can capture second image data of an object.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: July 31, 2018
    Assignee: A9.COM, INC.
    Inventors: Eran Borenstein, Arunkumar Devadoss, Zur Nehushtan
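A minimal sketch of the alignment check implied by the abstract for patent 10037614 above: the detected location of a facial feature in the front-camera image is compared with the on-screen alignment element, and capture with the rear-facing camera proceeds once the two are within a tolerance. The tolerance and names are assumptions, not values from the patent.

```python
def is_aligned(feature_xy, element_xy, tolerance_px=15):
    """True when the detected feature lies within tolerance of the alignment element."""
    fx, fy = feature_xy
    ex, ey = element_xy
    return abs(fx - ex) <= tolerance_px and abs(fy - ey) <= tolerance_px

print(is_aligned((322, 238), (320, 240)))  # True: aligned, so the rear camera may capture
```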
  • Patent number: 10032072
    Abstract: Approaches provide for identifying text represented in image data as well as determining a location or region of the image data that includes that text. For example, a camera of a computing device can be used to capture a live camera view of one or more items. The live camera view can be presented to the user on a display screen of the computing device. An application executing on the computing device, or at least in communication with the computing device, can analyze the image data of the live camera view to identify text represented in the image data as well as determine locations or regions of the image that include the representations.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: July 24, 2018
    Assignee: A9.com, Inc.
    Inventors: Son Dinh Tran, R. Manmatha
  • Patent number: 10032286
    Abstract: Systems and methods track one or more points between images. A point for tracking may be selected, at least in part, on a determination of how discriminable the point is relative to other points in a region containing the point. A point of an image being tracked may be located in another image by matching a patch containing the point with another patch of the other image. A search for a matching patch may be focused in a region that is determined based at least in part on an estimate of movement of the point between images. Points may be tracked across multiple images. If an ability to track one or more points is lost, information about the points being tracked may be used to relocate the points in another image.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: July 24, 2018
    Assignee: A9.com, Inc.
    Inventors: Bryan E. Feldman, Nalin Pradeep Senthamil, Arnab Sanat Kumar Dhua, Gurumurthy D. Ramkumar
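A minimal sketch of the patch-matching step described for patent 10032286 above: a small patch around a tracked point is searched for in the next frame, inside a window centered on the motion-predicted location. Plain sum-of-squared-differences is used here for simplicity; the patent does not prescribe this particular similarity measure, and all names are illustrative.

```python
import numpy as np

def match_patch(prev_frame, next_frame, point, patch=7, search=15, motion=(0, 0)):
    """Locate `point` (y, x) from prev_frame in next_frame; assumes the point is interior."""
    r = patch // 2
    y, x = point
    template = prev_frame[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    cy, cx = y + motion[0], x + motion[1]   # center the search on the predicted location
    best, best_pos = np.inf, (cy, cx)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = cy + dy, cx + dx
            cand = next_frame[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(float)
            if cand.shape != template.shape:
                continue                     # candidate window ran off the image border
            ssd = np.sum((cand - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (yy, xx)
    return best_pos
```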
  • Patent number: 10026229
    Abstract: An auxiliary device can be used to display a fiducial that contains information useful in determining the physical size of the fiducial as displayed on the auxiliary device. A primary device can capture image data including a representation of the fiducial. The scale and orientation of the fiducial can be determined, such that a graphical overlay can be generated of an item of interest that corresponds to that scale and orientation. The overlay can then be displayed along with the captured image data, in order to provide an augmented reality experience wherein the image displayed on the primary device represents a scale-appropriate view of the item in a location of interest corresponding to the location of the auxiliary device. As the primary device is moved and the viewpoint of the camera changes, changes in relative scale and orientation to the fiducial are determined and the overlay is updated accordingly.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: July 17, 2018
    Assignee: A9.com, Inc.
    Inventors: Ismet Zeki Yalniz, Rahul Bhotika, Song Cao, Michael Patrick Cutter, Colin Jon Taylor, Mark Scott Waldo, Chun-Kai Wang, Daniya Zamalieva
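A minimal sketch of the scale computation behind the overlay in patent 10026229 above: the fiducial's physical size (communicated by the auxiliary device) and its measured size in the captured frame give a pixels-per-unit factor, which sizes the item overlay. Orientation handling (e.g., a full homography) is omitted, and all names and values are illustrative.

```python
def overlay_width_px(fiducial_width_cm, fiducial_width_px, item_width_cm):
    """Width in pixels at which to render the item so it appears true to scale."""
    pixels_per_cm = fiducial_width_px / fiducial_width_cm
    return item_width_cm * pixels_per_cm

# Example: a 10 cm fiducial spans 200 px, so a 60 cm-wide item should be
# drawn roughly 1200 px wide in the same image plane.
print(overlay_width_px(10.0, 200.0, 60.0))
```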
  • Patent number: 10013624
    Abstract: Various embodiments enable the identification of semi-structured text entities in an image. The identification of the text entities is a relatively simple problem when the text is stored in a computer and free of errors, but much more challenging if the source is the output of an optical character recognition (OCR) engine from a natural scene image. Accordingly, output from an OCR engine is analyzed to isolate a character string indicative of a text entity. Each character of the string is then assigned to a character class to produce a character class string and the text entity of the string is identified based in part on a pattern of the character class string.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: July 3, 2018
    Assignee: A9.com, Inc.
    Inventors: Douglas Ryan Gray, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
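A minimal sketch of the character-class idea in patent 10013624 above: each character of an OCR'd string is mapped to a coarse class (digit, alphabetic, space, punctuation), and the resulting class string is matched against an entity pattern. The classes and the phone-number pattern below are invented for illustration; the patent's actual classes and patterns may differ.

```python
import re

def to_class_string(text):
    """Map each character to D (digit), A (alpha), S (space), or P (punctuation)."""
    out = []
    for ch in text:
        if ch.isdigit():
            out.append("D")
        elif ch.isalpha():
            out.append("A")
        elif ch.isspace():
            out.append("S")
        else:
            out.append("P")
    return "".join(out)

# Hypothetical pattern for a North American phone number, e.g. "(555) 123-4567"
PHONE_PATTERN = re.compile(r"P?DDDP?[SP]?DDDP?DDDD")

def looks_like_phone_number(ocr_string):
    return bool(PHONE_PATTERN.fullmatch(to_class_string(ocr_string)))

print(looks_like_phone_number("(555) 123-4567"))  # True
print(looks_like_phone_number("Main Street 42"))  # False
```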
  • Patent number: 10013633
    Abstract: Various approaches enable a user to capture image information (e.g., still images or video) about an object of interest such as the sole of a shoe or other piece of footwear (e.g., a sandal) and receive information about items that are determined to match the footwear based at least in part on the image information. For example, an image analysis service or other similar service can analyze the images to determine a type of shoe included within the images based at least in part on patterns or other distinguishing features of the sole of the shoe. The image analysis service can aggregate the results and can provide information about the results as a set of matches or results to be displayed to a user in response to a visual search query. The information can include, for example, descriptions, contact information, availability, location data, pricing information, and other such information.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: July 3, 2018
    Assignee: A9.COM, INC.
    Inventors: Raghavan Manmatha, Wei-Hong Chuang
  • Patent number: 10013398
    Abstract: A reusable distributed computing framework may be established in which contributors of computing resources may participate by using a web browser to visit a web page that incorporates a distributed computing participation component. A distributed computing job provider may submit distributed computing jobs to a web-based distributed computing service. A distributed computing job may include browser-executable code in accordance with a particular distributed computing programmatic interface and data to be processed by the browser-executable code. The web-based distributed computing service may assign independently processable portions of the job data to browsers visiting a donor page for processing with the job code. Results returned by the donor browsers may be indexed and made available in real-time, as may a status of the distributed computing job such as with respect to processing the job data.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: July 3, 2018
    Assignee: A9.COM, INC.
    Inventor: Matthew W. Amacker
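A minimal, server-side sketch of the job-splitting idea in patent 10013398 above: job data is divided into independently processable portions, each portion is handed to whichever donor browser checks in, and results and status are tracked as they return. This is a toy model; the class and field names are invented, and the browser-side execution of the job code is not shown.

```python
from collections import deque

class DistributedJob:
    """Toy model of a web-based distributed computing job."""

    def __init__(self, job_code_url, data, portion_size=10):
        self.job_code_url = job_code_url          # URL of browser-executable job code
        self.portions = [data[i:i + portion_size]
                         for i in range(0, len(data), portion_size)]
        self.unassigned = deque(range(len(self.portions)))
        self.results = {}                         # portion index -> returned result

    def assign_portion(self):
        """Hand the next unprocessed portion to a visiting donor browser."""
        if not self.unassigned:
            return None
        idx = self.unassigned.popleft()
        return {"portion_id": idx, "code": self.job_code_url, "data": self.portions[idx]}

    def record_result(self, portion_id, result):
        self.results[portion_id] = result

    def status(self):
        """Real-time view of job progress."""
        return {"total": len(self.portions), "completed": len(self.results)}
```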
  • Patent number: 10013630
    Abstract: Various embodiments provide methods and systems for detecting one or more segments of an image that are related to a particular object in the image (e.g., a logo or trademark) and extracting at least one feature point, each of which is represented by one feature point descriptor, based at least upon a contour curvature of the one or more segments. The at least one feature point descriptor can be converted into one or more codewords to generate a codeword database. A discriminative codebook can then be generated based upon the codeword database and utilized to detect objects and/or features in a query image.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: July 3, 2018
    Assignee: A9.com, Inc.
    Inventor: William Brendel
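A minimal sketch of the codeword step in patent 10013630 above: local descriptors are clustered, each cluster center acts as a codeword, and descriptors are encoded by their nearest codeword. Standard k-means stands in here for whatever discriminative codebook construction the patent uses, and the contour-curvature descriptors themselves are replaced by random vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, n_codewords=64, seed=0):
    """Cluster descriptors into a codebook of visual words."""
    return KMeans(n_clusters=n_codewords, random_state=seed, n_init=10).fit(descriptors)

def encode(descriptors, codebook):
    """Map each descriptor to the index of its nearest codeword."""
    return codebook.predict(descriptors)

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 32))      # stand-in for contour-curvature descriptors
codebook = build_codebook(train, n_codewords=16)
print(encode(train[:5], codebook))      # five codeword indices
```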
  • Patent number: 10007680
    Abstract: Systems and approaches for searching a content collection corresponding to query content are provided. In particular, false positive match rates between the query content and the content collection may be reduced with a minimum content region test and/or a minimum features per scale test. For example, by correlating content descriptors of a content piece in the content collection with query descriptors of the query content, the content piece can be determined to match the query content when a particular region of the content piece and/or a particular region of a query descriptor have a proportionate size meeting or exceeding a specified minimum. Alternatively, or in addition, the false positive match rate between query content and a content piece can be reduced by comparing content descriptors and query descriptors of features at a plurality of scales. A content piece can be determined to match the query content according to descriptor proportion quotas for the plurality of scales.
    Type: Grant
    Filed: January 26, 2015
    Date of Patent: June 26, 2018
    Assignee: A9.COM, INC.
    Inventors: Arnab Sanat Kumar Dhua, Sunil Ramesh, Max Delgadillo, Raghavan Manmatha
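A minimal sketch of the two false-positive filters named in patent 10007680 above: a candidate match is kept only if the matched descriptors cover a large enough region of the content piece, and only if each feature scale contributes its quota of the matched descriptors. The thresholds and argument shapes are illustrative assumptions.

```python
def passes_min_region(matched_box, content_box, min_fraction=0.3):
    """Matched region must cover at least min_fraction of the content piece's area."""
    mx0, my0, mx1, my1 = matched_box
    cx0, cy0, cx1, cy1 = content_box
    matched_area = max(0, mx1 - mx0) * max(0, my1 - my0)
    content_area = max(1, (cx1 - cx0) * (cy1 - cy0))
    return matched_area / content_area >= min_fraction

def passes_scale_quota(matches_per_scale, total_matches, min_share=0.1):
    """Every scale must contribute at least min_share of all matched descriptors."""
    return all(count / total_matches >= min_share
               for count in matches_per_scale.values())

print(passes_min_region((10, 10, 90, 90), (0, 0, 100, 100)))       # True
print(passes_scale_quota({1: 40, 2: 8, 3: 2}, total_matches=50))   # False: scale 3 is under quota
```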
  • Patent number: 10008039
    Abstract: Various approaches discussed herein enable providing a virtual reality experience of trying on clothes by augmenting an image of an article of clothing so that it appears to be worn by a particular person who is represented in a separate image. The image of the person wearing a special article of clothing containing a number of gridlines is analyzed along with an image of the special article of clothing as it appears unworn. The analysis includes calculating differences between the images to determine the change in the position of the gridlines, which is then used to generate body shape data. The body shape data is used to augment an image of a prospective article of clothing, and the modified image is then combined with the image of the person and displayed.
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: June 26, 2018
    Assignee: A9.COM, INC.
    Inventors: Adam Moshe Neustein, William Brendel, Kaolin Imago Fire, Mark Jay Nitzberg, Sunil Ramesh, Mark Scott Waldo
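A minimal sketch of the gridline-difference computation in patent 10008039 above: given corresponding gridline intersections located in the unworn and worn garment images, the per-point displacement field stands in for the "body shape data" that drives the warp of a different garment. Point detection and the warp itself are not shown, and the coordinates are made up.

```python
import numpy as np

def gridline_displacements(unworn_points, worn_points):
    """Displacement vectors for corresponding gridline intersections."""
    return np.asarray(worn_points, dtype=float) - np.asarray(unworn_points, dtype=float)

unworn = [(10, 10), (20, 10), (10, 20)]
worn = [(11, 12), (23, 11), (10, 24)]
print(gridline_displacements(unworn, worn))  # how far each intersection moved on the body
```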
  • Patent number: 9996901
    Abstract: Embodiments provide systems and methods for generating a street map that includes a position identifier that identifies a location on the street map. The method and system may also generate and display a plurality of images representative of the location of the position identifier. A user may interact with a position identifier or one of several scroll icons to view images of other locations on the street map and/or to obtain driving directions between two locations.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: June 12, 2018
    Assignee: A9.com, Inc.
    Inventors: Jonathan A. Gold, Timothy Caro-Bruce, Huy T. Ha, John Alan Hjelmstad, Christopher Aaron Volkert
  • Patent number: 9990665
    Abstract: Searching for items, such as apparel items, can be performed using a set of category-specific outlines or contours from which a user can select. The outlines enable a user to quickly specify a relevant category, and provide guidance as to how to orient the camera in order to enable an item to be identified in an image without the need for an expensive object identification and segmentation process. The outline can specify a “swatch” region, indicating where the user should position a view of a pattern, texture, or color of the item in which the user is interested. The category selection and swatch region data can be used to determine matching items. If the user wants a different set of search results, the user can select a different outline, causing a new query to be executed with updated category information and swatch data to obtain new search results.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: June 5, 2018
    Assignee: A9.com, Inc.
    Inventor: Arnab Sanat Kumar Dhua
  • Patent number: 9990557
    Abstract: The accuracy of an image matching process can be improved by determining relevant swatch regions of the images, where those regions contain representative patterns of the items of interest represented in those images. Various processes examine a set of visual cues to determine at least one candidate object region, and then collate these regions to determine one or more representative swatch images. For apparel items, this can include locating regions such as an upper body region, torso region, clothing region, foreground region, and the like. Processes such as regression analysis or probability mapping can be used on the collated region data (along with confidence and/or probability values) to determine the appropriate swatch regions.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: June 5, 2018
    Assignee: A9.com, Inc.
    Inventors: Ming Du, Arnab Sanat Kumar Dhua, Michael Patrick Cutter
  • Patent number: 9984728
    Abstract: Various embodiments identify differences between frame sequences of a video. For example, to determine a difference between two versions of a video, a fingerprint of each frame of the two versions is generated. From the fingerprints, a run-length encoded representation of each version is generated. The fingerprints which appear only once (i.e., unique fingerprints) in the entire video are identified from each version and compared to identify matching unique fingerprints across versions. The matching unique fingerprints are sorted and filtered to determine split points, which are used to align the two versions of the video. Accordingly, each version is segmented into smaller frame sequences using the split points. Once segmented, the individual frames of each segment are aligned across versions using a dynamic programming algorithm. After aligning the segments at a frame level, the segments are reassembled to generate a global alignment output.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: May 29, 2018
    Assignee: A9.COM, INC.
    Inventors: Ismet Zeki Yalniz, Adam Carlson, Douglas Ryan Gray, Colin Jon Taylor
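A minimal sketch of the unique-fingerprint step from patent 9984728 above: fingerprints that occur exactly once in each version are intersected and their positions paired, producing the candidate split points used to segment and align the two versions. Frame fingerprints are treated as opaque hashable values here; fingerprint generation and the later dynamic-programming alignment are not shown.

```python
from collections import Counter

def unique_fingerprint_matches(fps_a, fps_b):
    """Return (index_in_a, index_in_b) pairs for fingerprints unique to both versions."""
    unique_a = {fp for fp, n in Counter(fps_a).items() if n == 1}
    unique_b = {fp for fp, n in Counter(fps_b).items() if n == 1}
    shared = unique_a & unique_b
    pos_a = {fp: i for i, fp in enumerate(fps_a) if fp in shared}
    pos_b = {fp: i for i, fp in enumerate(fps_b) if fp in shared}
    return sorted((pos_a[fp], pos_b[fp]) for fp in shared)

print(unique_fingerprint_matches(list("abxcdy"), list("abzcd")))
# [(0, 0), (1, 1), (3, 3), (4, 4)] -> candidate split points for alignment
```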
  • Patent number: 9984472
    Abstract: An image processing system receives a sequence of frames including a current input frame and a next input frame (the next input frame is captured subsequent in time with respect to capturing of the current input frame). The image processing system stores a previously outputted output frame. The previously outputted output frame is derived from previously processed input frames in the sequence. The image processing system modifies the current input frame based on detected first motion and second motion. The first motion is detected based on an analysis of the current input frame with respect to the next input frame. The second motion is detected based on an analysis of the current input frame with respect to the previously outputted output frame. According to one configuration, the image processing system implements multi-sized analyzer windows to more precisely detect the first motion and second motion.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: May 29, 2018
    Assignee: A9.COM, INC.
    Inventors: Daniel B. Grunberg, Anantharanga Prithviraj, Douglas M. Chin, Peter D. Besen
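A minimal sketch of detecting motion between two frames, as the abstract for patent 9984472 above describes against both the next input frame and the previously outputted frame: block-wise mean absolute differences are thresholded into a motion mask. The fixed block size and threshold are simplifications of the multi-sized analyzer windows mentioned in the abstract.

```python
import numpy as np

def motion_mask(frame_a, frame_b, block=8, threshold=12.0):
    """Boolean grid marking blocks whose mean absolute difference exceeds threshold."""
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    h = (diff.shape[0] // block) * block
    w = (diff.shape[1] // block) * block
    blocks = diff[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)) > threshold
```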
  • Patent number: 9965895
    Abstract: Approaches are described for enabling a user to create an accurate perspective rendering of a source (e.g., a scene, object, subject, point of interest, etc.) on a drawing surface. For example, various approaches enable superimposition of the source being viewed upon a drawing surface upon which a user is drawing. In this way, the user can view both the source and drawing surface simultaneously. This allows the user to duplicate key points of the source on the drawing surface by viewing a display of a device, thus aiding in the accurate rendering of perspective.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: May 8, 2018
    Assignee: A9.com, Inc.
    Inventor: Douglas Ryan Gray
  • Patent number: 9965641
    Abstract: A method, apparatus and computer program product for policy-based access control in association with a sorted, distributed key-value data store in which keys comprise an n-tuple structure that includes cell-level access control. In this approach, an information security policy is used to create a set of pluggable policies. A pluggable policy may be used during data ingest time, when data is being ingested into the data store, and a pluggable policy may be used during query time, when a query to the data store is received for processing against data stored therein. Generally, a pluggable policy associates one or more user-centric attributes (or some function thereof), to a particular data-centric label. By using pluggable policies, preferably at both ingest time and query time, the data store is enhanced to provide a seamless and secure policy-based access control mechanism in association with the cell-level access control enabled by the data store.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: May 8, 2018
    Assignee: A9.com, Inc.
    Inventors: Michael R. Allen, John W. Vines, Adam P. Fuchs
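A minimal sketch of a "pluggable policy" in the sense of patent 9965641 above: a function maps user-centric attributes to the set of data-centric labels that user may read, and the same policy is consulted when filtering cells at query time (and, symmetrically, when labeling data at ingest). Attribute and label names are invented for illustration.

```python
def clearance_policy(user_attrs):
    """Map user attributes to the cell labels the user is allowed to access."""
    labels = {"public"}
    if user_attrs.get("employee"):
        labels.add("internal")
    if user_attrs.get("clearance") == "secret":
        labels.add("secret")
    return labels

def query(cells, user_attrs, policy=clearance_policy):
    """Return only the cell values whose label the policy grants to this user."""
    allowed = policy(user_attrs)
    return [value for value, label in cells if label in allowed]

cells = [("alpha", "public"), ("beta", "internal"), ("gamma", "secret")]
print(query(cells, {"employee": True}))                          # ['alpha', 'beta']
print(query(cells, {"employee": True, "clearance": "secret"}))   # all three values
```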
  • Patent number: 9940745
    Abstract: The density of images to display can be increased, and distractions reduced, through intelligent cropping or manipulation of at least some of the images. For objects such as dresses represented in the images, the density can be increased by cropping away regions of background outside the object region(s). Locating regions representing the face and legs of the wearer can enable cropping of the top and/or bottom of the image in order to cause the dress to occupy the majority of the area of the image, and can provide for a level of consistency of the sizes of the objects across the images, regardless of the sources of the images. Representative colors of the objects can also be selected to adjust the background color, in order to provide for easy distinction between the images while not providing contrasting or unappealing colors that take away from the aesthetics of the objects.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: April 10, 2018
    Assignee: A9.COM, INC.
    Inventor: Arnab Sanat Kumar Dhua
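A minimal sketch of the background-cropping step from patent 9940745 above: pixels that differ from a near-uniform background color are bounded and the image is cropped to that box plus a small margin. The face/leg localization and background-color selection described in the abstract are not reproduced; the tolerance and margin values are assumptions.

```python
import numpy as np

def crop_to_object(image, background_color=(255, 255, 255), tol=20, margin=4):
    """image: HxWx3 uint8 array; returns the image cropped around non-background pixels."""
    diff = np.abs(image.astype(int) - np.array(background_color)).sum(axis=2)
    ys, xs = np.where(diff > tol)
    if ys.size == 0:
        return image  # nothing but background; leave the image untouched
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]
```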
  • Patent number: 9934526
    Abstract: Various embodiments enable a process to automatically attempt to select, from an image frame, the most relevant words associated with products available for purchase from an electronic marketplace. For example, an image frame containing text can be obtained and analyzed with an optical character recognition (OCR) engine. The recognized words can then be preprocessed using various filtering and scoring techniques to narrow a large volume of text down to a few relevant query terms. These query terms can then be sent to a search engine associated with the electronic marketplace to return relevant products to a user.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: April 3, 2018
    Assignee: A9.com, Inc.
    Inventors: Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Adam Wiggen Kraft, Sunil Ramesh
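A minimal sketch of the filtering-and-scoring idea in patent 9934526 above: OCR tokens are filtered (short, non-alphabetic, and stop-word tokens dropped), the remainder scored, and the top few kept as query terms for the marketplace search engine. The scoring here (frequency, then length) and the stop-word list are illustrative stand-ins for whatever the patented process actually uses.

```python
from collections import Counter

STOP_WORDS = {"the", "and", "for", "with", "from", "this", "that"}

def select_query_terms(ocr_words, max_terms=3):
    """Reduce raw OCR output to a few query terms."""
    tokens = [w.lower() for w in ocr_words
              if w.isalpha() and len(w) > 2 and w.lower() not in STOP_WORDS]
    counts = Counter(tokens)
    ranked = sorted(counts, key=lambda w: (counts[w], len(w)), reverse=True)
    return ranked[:max_terms]

print(select_query_terms(["The", "Acme", "Coffee", "Grinder", "12", "oz", "Coffee"]))
# ['coffee', 'grinder', 'acme']
```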