With Pattern Recognition Or Classification Patents (Class 382/170)
  • Patent number: 11052910
    Abstract: A peripheral vehicle is detected from an image captured by a camera, and the tire grounded portion of the peripheral vehicle is specified. Whether the color of a peripheral region of the specified tire grounded portion is white is determined. If the color is white, it is determined that the road condition of a traffic lane where the peripheral vehicle is traveling is snow.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: July 6, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Shigehiro Honda, Hiroshi Miura
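    Illustrative sketch: a minimal version of the whiteness test described in the abstract, assuming an upstream detector already supplies the tire grounded-portion bounding box; the margin and thresholds below are assumptions, not values from the patent.
      import numpy as np

      def road_looks_snowy(image_bgr, tire_box, margin=8,
                           min_brightness=180, max_chroma=25, white_ratio=0.6):
          """Return True if the region around a tire contact patch is mostly white.

          image_bgr : HxWx3 uint8 array from the camera.
          tire_box  : (x0, y0, x1, y1) of the detected tire grounded portion.
          """
          x0, y0, x1, y1 = tire_box
          h, w = image_bgr.shape[:2]
          # Peripheral region: a band around and just below the contact patch.
          rx0, ry0 = max(0, x0 - margin), max(0, y0 - margin)
          rx1, ry1 = min(w, x1 + margin), min(h, y1 + margin)
          region = image_bgr[ry0:ry1, rx0:rx1].astype(np.int16)
          if region.size == 0:
              return False
          brightness = region.mean(axis=2)                  # per-pixel intensity
          chroma = region.max(axis=2) - region.min(axis=2)  # near zero for white/grey pixels
          white = (brightness >= min_brightness) & (chroma <= max_chroma)
          return white.mean() >= white_ratio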
  • Patent number: 11050709
    Abstract: A system is configured to perform operations that include determining an exception event corresponding to a transmission of a plurality of network packets over an electronic network. The electronic network may cause network address translation to be performed on the plurality of network packets. The operations may also include identifying, based on a log of the plurality of network packets, a first network packet associated with the exception event and calculating, based on a payload portion of the first network packet, a packet signature corresponding to the first network packet. The operations may further include determining, based on a comparison between a first data structure and a second data structure using the packet signature, original source address information that corresponds to the first network packet prior to the network address translation being performed on the first network packet.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: June 29, 2021
    Assignee: PayPal, Inc.
    Inventor: Shlomi Boutnaru
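    Illustrative sketch: one way to realize the payload-based signature lookup, assuming hypothetical pre-NAT and post-NAT packet logs stored as lists of dicts; the log format and the use of SHA-256 are assumptions rather than details from the patent.
      import hashlib

      def packet_signature(payload: bytes) -> str:
          """Signature over the payload only, so it survives NAT header rewriting."""
          return hashlib.sha256(payload).hexdigest()

      def original_source(exception_packet, pre_nat_log, post_nat_log):
          """Recover the pre-NAT source address of the packet tied to an exception event.

          pre_nat_log / post_nat_log: lists of dicts with 'payload' and 'src' keys,
          captured before and after address translation (hypothetical log format).
          """
          sig = packet_signature(exception_packet["payload"])
          # Confirm the packet really appears on the translated side ...
          if not any(packet_signature(p["payload"]) == sig for p in post_nat_log):
              return None
          # ... then map the same signature back to the untranslated record.
          for p in pre_nat_log:
              if packet_signature(p["payload"]) == sig:
                  return p["src"]
          return None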
  • Patent number: 11017260
    Abstract: A text region positioning method and device, and a computer readable storage medium, which relate to the field of image processing. The text region positioning method includes acquiring a variance graph on the basis of an original image; acquiring an edge image of the variance graph; if a difference value among distances between edge points of opposing positions in two adjacent edge lines in the edge image is within a preset distance difference range, then the region between the two adjacent edge lines is determined as a text region.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: May 25, 2021
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., BEIJING JINGDONG CENTURY TRADING CO., LTD.
    Inventors: Yongliang Wang, Qingze Wang, Biaolong Chen
  • Patent number: 11010642
    Abstract: Systems and techniques for providing concurrent image and corresponding multi-channel auxiliary data generation for a generative model are presented. In one example, a system generates synthetic multi-channel data associated with a synthetic version of imaging data. The system also predicts multi-channel imaging data and the synthetic multi-channel data with a first predicted class set or a second predicted class set. Furthermore, the system employs the first predicted class set or the second predicted class set for the synthetic multi-channel data to train a generative adversarial network model.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: May 18, 2021
    Assignee: General Electric Company
    Inventors: Ravi Soni, Gopal B. Avinash, Min Zhang
  • Patent number: 11004293
    Abstract: A method for testing a valuable document includes illuminating the valuable document line by line such that a first group of lines is illuminated with light of a first wavelength and a second group of lines is illuminated with light of a second wavelength, and detecting reflection light that is reflected from the lines and/or transmission light that passes through the lines. First data are representative of the reflection light and/or transmission light assigned to the lines of the first group, and second data are representative of the reflection light and/or transmission light assigned to the lines of the second group. The method further includes processing the first data such that a first image generated from the first data has a first resolution, processing the second data such that a further image generated from the second data has a second resolution, and comparing the first and further images with first and further reference images.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: May 11, 2021
    Assignee: CI Tech Sensors AG
    Inventors: Reto Schletti, Moritz Julen, Thomas Boos
  • Patent number: 10977080
    Abstract: Embodiments of the invention are directed to classifying requests associated with personal data at or before a point of entry to a trusted computing network. The invention provides for determining whether a request associated with personal data requires classification (for example, whether the request is impacted by regulations or other requirements necessitating classification/categorization). The determination may be based on what entity is requesting the data, the origin of the request, whose data is being requested, the type of action associated with the request and/or the data elements associated with the request. In addition, once the request has been determined to require classification, the specific classification is determined and assigned to the request. The classification may be determined based on the rules associated with the regulation or other requirement(s) necessitating the classification/categorization.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: April 13, 2021
    Assignee: BANK OF AMERICA CORPORATION
    Inventors: Richard C. Clow, II, Joseph Benjamin Castinado
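    Illustrative sketch: a rule-driven classification of personal-data requests in the spirit of the abstract; the request fields, rule predicates, and labels are hypothetical.
      # Hypothetical regulation rules: each maps a predicate over the request to a label.
      RULES = [
          (lambda r: "ssn" in r["data_elements"], "restricted-pii"),
          (lambda r: r["subject_region"] == "EU", "gdpr-personal-data"),
          (lambda r: r["action"] == "delete", "erasure-request"),
      ]

      def needs_classification(request):
          """A request touching personal data elements or regulated subjects must be classified."""
          return bool(request["data_elements"]) or request["subject_region"] in {"EU", "CA"}

      def classify_request(request):
          if not needs_classification(request):
              return None
          for predicate, label in RULES:
              if predicate(request):
                  return label
          return "general-personal-data"

      # Example:
      req = {"requester": "analytics", "origin": "external", "subject_region": "EU",
             "action": "read", "data_elements": ["email"]}
      print(classify_request(req))   # -> "gdpr-personal-data"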
  • Patent number: 10970860
    Abstract: A character-tracking system is provided. The system includes a plurality of cameras, a first computing server, a second computing server, and a third computing server. The cameras are configured to capture scene images of a scene with different shooting ranges. The first computing server performs body tracking on a body region in the scene image to generate character data. The third computing server obtains a body region block from each scene image according to the character data for facial recognition to obtain a user identity. The first computing server further performs person re-identification on different body regions to link the body regions and their person tags that belong to the same user. The first computing server further represents the linked body regions and their person tags with a corresponding user identity.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: April 6, 2021
    Assignee: WISTRON CORP.
    Inventors: Po-Shen Lin, Shih-Wei Wang, Yi-Yun Hsieh
  • Patent number: 10915736
    Abstract: An image processing method includes receiving an image frame, detecting a face region of a user in the image frame, aligning a plurality of preset feature points in a plurality of feature portions included in the face region, performing a first check on a result of the aligning based on a first region corresponding to a combination of the feature portions, performing a second check on the result of the aligning based on a second region corresponding to an individual feature portion of the feature portions, redetecting a face region based on a determination of a failure in passing at least one of the first check or the second check, and outputting information on the face region based on a determination of a success in passing the first check and the second check.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: February 9, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dongwoo Kang, Jingu Heo, Byong Min Kang
  • Patent number: 10909380
    Abstract: A method and an apparatus for video recognition and training, an electronic device, and a storage medium are provided. The method includes: extracting features of a first key frame in a video; performing fusion on the features of the first key frame and fusion features of a second key frame in the video to obtain fusion features of the first key frame, where a detection sequence of the second key frame in the video precedes that of the first key frame; and performing detection on the first key frame according to the fusion features of the first key frame to obtain an object detection result of the first key frame. Through iterative multi-frame feature fusion, information contained in shared features of these key frames in the video can be enhanced, thereby improving frame recognition accuracy and video recognition efficiency.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: February 2, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Tangcongrui He, Hongwei Qin
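    Illustrative sketch: iterative key-frame feature fusion as outlined above, using a simple weighted average as the (assumed) fusion operator; detector stands in for any detection head.
      import numpy as np

      def fuse(current_feat, prev_fused, alpha=0.6):
          """Weighted fusion of the current key frame's features with the running fusion."""
          if prev_fused is None:          # first key frame: nothing to fuse yet
              return current_feat
          return alpha * current_feat + (1.0 - alpha) * prev_fused

      def detect_on_key_frames(key_frame_features, detector):
          """key_frame_features: iterable of feature maps in detection order."""
          fused = None
          results = []
          for feat in key_frame_features:
              fused = fuse(feat, fused)          # propagate information from earlier key frames
              results.append(detector(fused))    # detect on the enriched features
          return results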
  • Patent number: 10885365
    Abstract: A method and an apparatus for detecting an object keypoint, an electronic device, a computer readable storage medium, and a computer program include: obtaining a respective feature map of at least one local regional proposal box of an image to be detected, the at least one local regional proposal box corresponding to at least one target object; and separately performing target object keypoint detection on a corresponding local regional proposal box of the image to be detected according to the feature map of the at least one local regional proposal box.
    Type: Grant
    Filed: May 27, 2019
    Date of Patent: January 5, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Zhiwei Fang, Junjie Yan
  • Patent number: 10863727
    Abstract: The present invention pertains to a method for automatic sea lice monitoring in fish aquaculture, the method comprising submerging a camera (4) in a sea pen (300) comprising fish, using the camera to make an image of at least one of said fish, analysing the image to differentiate between individual sea lice present on the fish and the fish itself and assessing the number of sea lice present on the fish, wherein the camera is attached to a device (1, 10, 100) for guiding the salmon along an imaging track (5), the camera being directed to the track.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: December 15, 2020
    Assignee: Intervet Inc.
    Inventors: Peter Jans, Evert Gijtenbeek
  • Patent number: 10855882
    Abstract: An image processing apparatus processes input image data and generates output image data. The image processing apparatus includes a feature-value calculating unit, an output-gradation-number setting unit, and an image processor. The feature-value calculating unit calculates a feature value from the input image data. The output-gradation-number setting unit sets, as the number of output gradations, any one gradation number of candidates for the number of output gradations based on the feature value calculated by the feature-value calculating unit. The image processor processes the input image data and generates the output image data having the number of output gradations which is set by the output-gradation-number setting unit.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: December 1, 2020
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Daisaku Imaizumi, Teruhiko Matsuoka, Masahiko Takashima, Yasushi Adachi
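    Illustrative sketch: selecting the number of output gradations from a feature value and requantizing accordingly; the candidate set, the use of standard deviation as the feature, and the thresholds are assumptions.
      import numpy as np

      GRADATION_CANDIDATES = (2, 4, 16, 256)      # hypothetical candidate set

      def feature_value(gray):
          """Use local contrast (standard deviation) as the image feature."""
          return float(gray.std())

      def choose_gradations(value, thresholds=(8.0, 20.0, 45.0)):
          """Map the feature value onto one of the candidate gradation numbers."""
          for t, n in zip(thresholds, GRADATION_CANDIDATES):
              if value < t:
                  return n
          return GRADATION_CANDIDATES[-1]

      def requantize(gray, levels):
          """Reduce an 8-bit image to the selected number of output gradations."""
          step = 256 // levels
          return (gray // step) * step + step // 2

      gray = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
      out = requantize(gray, choose_gradations(feature_value(gray)))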
  • Patent number: 10853695
    Abstract: A method, a computer readable medium, and a system for cell annotation are disclosed. The method includes receiving at least one new cell image for cell detection; extracting cell features from the at least one new cell image; comparing the extracted cell features to a matrix of cell features of each class to predict a closest class, wherein the matrix of cell features has been generated from at least initial training data comprising at least one cell image; detecting cell pixels from the extracted cell features of the at least one new cell image using the predicted closest class to generate a likelihood map; extracting individual cells from the at least one cell image by segmenting the individual cells from the likelihood map; and performing a machine annotation on the extracted individual cells from the at least one new cell image to identify cells, non-cell pixels, and/or cell boundaries.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: December 1, 2020
    Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.
    Inventors: Yongmian Zhang, Jingwen Zhu
  • Patent number: 10853635
    Abstract: Disclosed is an automated tap and detection system (ATDS) for monitoring insects on vegetation. The ATDS can automatically agitate vegetation and collect deposits released from the vegetation in response to the agitation. Images of the deposits can be collected and analyzed using machine learning models to determine a type of deposit (e.g., insect) collected and a number of deposits collected.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: December 1, 2020
    Assignee: UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INCORPORATED
    Inventors: Ioannis Ampatzidis, Philip Anzolut Stansly, Victor Henrique Meirelles Partel
  • Patent number: 10824881
    Abstract: A device for object recognition of an input image includes: a patch selector configured to subdivide the input image into a plurality of zones and to define a plurality of patches for the zones; a voting maps generator configured to generate a set of voting maps for each zone and for each patch, and to binarize the generated set of voting maps; a voting maps combinator configured to combine the binarized set of voting maps; and a supposition generator configured to generate and refine a supposition from the combined, binarized set of voting maps.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: November 3, 2020
    Assignees: Conti Temic microelectronic GmbH, Continental Teves AG & Co. OHG
    Inventors: Ann-Katrin Fattal, Michelle Karg, Christian Scharfenberger, Stefan Hegemann, Stefan Lueke, Chen Zhang
  • Patent number: 10803615
    Abstract: An object recognition processing apparatus includes: an image obtainment unit that obtains an image; a template matching unit that obtains a recognition result including a plurality of candidates for the object to be recognized by carrying out a template matching process on the image; a candidate exclusion processing unit that excludes a candidate that meets a predetermined condition by generating, for each of the plurality of candidates, a binary image of the object to be recognized, and finding a degree of overlap of each candidate using the binary image; and a recognition result output unit that outputs a candidate that remains without being excluded as a recognition result.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: October 13, 2020
    Assignee: OMRON Corporation
    Inventor: Yoshinori Konishi
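    Illustrative sketch: overlap-based exclusion of template-matching candidates using binary masks, in the spirit of the abstract; using boxes as masks, score ordering, and the 0.5 overlap limit are assumptions.
      import numpy as np

      def candidate_mask(shape, box):
          """Binary image of the recognized object for one candidate (here: its box region)."""
          m = np.zeros(shape, dtype=bool)
          x0, y0, x1, y1 = box
          m[y0:y1, x0:x1] = True
          return m

      def exclude_overlapping(candidates, image_shape, max_overlap=0.5):
          """candidates: list of dicts with 'box' and 'score'; keep best-scoring, drop heavy overlaps."""
          kept, kept_masks = [], []
          for cand in sorted(candidates, key=lambda c: c["score"], reverse=True):
              mask = candidate_mask(image_shape, cand["box"])
              area = mask.sum()
              overlaps = [np.logical_and(mask, km).sum() / area for km in kept_masks] if area else [1.0]
              if not overlaps or max(overlaps) <= max_overlap:
                  kept.append(cand)
                  kept_masks.append(mask)
          return kept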
  • Patent number: 10796161
    Abstract: A system comprises a digital camera to capture images of a horticultural area, a device processor and a non-transitory computer readable medium storing instructions executable by the device processor to capture, using the digital camera, a first digital image of a horticultural area containing an insect trap, isolate a portion of the first digital image using the trap detection parameters, the portion of the first digital image corresponding to the insect trap, perform automated particle detection on the portion of the first digital image according to the insect detection parameters to identify regions of pixels in the portion of the first digital image that have the insect recognition color and that pass filter criteria, determine a cardinality of insects on the first object based on a number of identified regions of pixels, store the cardinality of insects in association with the first digital image and provide the cardinality of insects for display in a graphical user interface.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: October 6, 2020
    Assignee: ILLUMITEX, INC.
    Inventors: Jeffrey Elliott Bisberg, Paul Marchau Gray, Sirichi Bobby Srisan, Charles Alicea
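    Illustrative sketch: counting insect-coloured connected regions on an already-isolated trap image; the target colour, tolerance, and area filter criteria are hypothetical parameters.
      import numpy as np
      from scipy import ndimage

      def count_insects(trap_rgb, target_rgb=(40, 30, 20), tol=40, min_area=6, max_area=500):
          """Count dark, insect-coloured blobs on the isolated trap portion of the image.

          target_rgb / tol : assumed 'insect recognition colour' and tolerance.
          min_area/max_area: filter criteria rejecting specks and smears.
          """
          diff = np.abs(trap_rgb.astype(np.int16) - np.array(target_rgb, dtype=np.int16))
          mask = diff.max(axis=2) <= tol                   # pixels close to the insect colour
          labels, n = ndimage.label(mask)                  # connected regions of such pixels
          sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          return int(((sizes >= min_area) & (sizes <= max_area)).sum())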
  • Patent number: 10789261
    Abstract: A system represents data as visual distributed data frames (VDDFs) that comprise a dataset, metadata describing the data, and metadata describing visualization of the dataset. A VDDF may be extracted from charts displayed in markup language documents. A VDDF may be generated from different data sources including big data analysis systems. A VDDF workspace allows interaction with multiple VDDF objects extracted from multiple data sources and stored locally within the storage of the device. The VDDF workspace allows the user to interact with the VDDF objects, for example, by inspecting the metadata, modifying the data, adding new columns, changing the visualization, joining data from multiple charts, and sharing the VDDF objects with other documents. The processing of data of a VDDF is performed locally within a computing device, for example, in a client device.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: September 29, 2020
    Assignee: ARIMO, LLC
    Inventors: Christopher Nguyen, Anh H. Trinh, Bao Nguyen, Selene Chew
  • Patent number: 10776657
    Abstract: A template creation apparatus includes a three-dimensional data acquisition unit that acquires three-dimensional data of an object that is a recognition target, a normal vector calculation unit that calculates a normal vector at a feature point of an object viewed from a predetermined viewpoint that is set for the object, a normal vector quantization unit that quantizes a normal vector by mapping the normal vector to a reference region on a plane orthogonal to an axis that passes through the viewpoint, so as to acquire a quantized normal direction feature amount, the reference region including a central reference region corresponding to the vicinity of the axis and a reference region in the periphery of the central reference region, a template creation unit that creates a template for each viewpoint based on the quantized normal direction feature amount, and a template information output unit that outputs the template.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: September 15, 2020
    Assignee: OMRON CORPORATION
    Inventor: Yoshinori Konishi
  • Patent number: 10762630
    Abstract: A system and method are provided to automatically categorize biological and medical images. The new system and method can incorporate a machine learning classifier in which novel ideas are provided to guide the classifier to focus on regions of interest (ROI) within medical images for categorizing or classifying the images. The system and method can ignore regions when misleading structures exist. The detection and classification of one or more features of interest within a discriminative region of interest within an image are rendered invariant to differences in translation, orientation and/or scaling of the one or more features of interest within the medical image(s). The system and method allow a processor to more quickly, efficiently and accurately process and categorize medical images.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: September 1, 2020
    Assignee: OXFORD UNIVERSITY INNOVATION LIMITED
    Inventors: Mohammad Yaqub, J. Alison Noble, Aris Papageorghiou
  • Patent number: 10754599
    Abstract: Systems, methods, and devices are configured to print and reuse customized sample sets while printing documents. They include receiving instructions to retrieve an electronic document and processing the electronic document in a sample print mode. The electronic document is arranged in a plurality of pages, with each page containing readable information. They further include identifying a subset of pages of the electronic document to print in a first phase of the sample print mode and printing the subset of pages on a physical readable medium in the sample print mode. They additionally include displaying a message on a user interface after completion of the first phase of the sample print mode and printing remaining pages of the electronic document on a physical readable medium in response to user input based on the displayed message.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: August 25, 2020
    Assignee: Xerox Corporation
    Inventors: Muralidaran Krishnasamy, Sainarayanan Gopalakrishnan, Narayan Kesavan, Sathish Kumar Annamalai Thangaraj
  • Patent number: 10740889
    Abstract: The present invention provides a method for detection of in-panel mura based on Hough transform and Gaussian function fitting, including: S1. acquiring an original gray-scale image; S2. acquiring a binarized image according to the gray-scale image; S3. performing an edge detection via Hough transform to crop edges of the gray-scale image; and S4. performing histogram statistics on the cropped gray-scale image, fitting the histogram to a Gaussian function, and detecting an in-panel mura result according to the fitting parameters. The present invention is able to determine images of the display region via Hough transform in order to acquire the region for in-panel mura detection, and also to evaluate the severity of in-panel mura via parameters acquired by Gaussian function fitting, and thus to quickly detect in-panel mura.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: August 11, 2020
    Assignee: HUIZHOU CHINA STAR OPTOELECTRONICS TECHNOLOGY CO., LTD.
    Inventor: Yanxue Wang
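    Illustrative sketch: the histogram-plus-Gaussian-fit stage, assuming the display region has already been cropped via the Hough-transform edge detection; the bin count and the severity measure returned are illustrative.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, a, mu, sigma):
          return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

      def mura_severity(cropped_gray):
          """Fit the grey-level histogram of the cropped panel region to a Gaussian.
          A wide sigma or a poor fit suggests luminance non-uniformity (mura)."""
          hist, edges = np.histogram(cropped_gray, bins=64, range=(0, 256))
          centers = (edges[:-1] + edges[1:]) / 2.0
          p0 = (hist.max(), float(cropped_gray.mean()), float(cropped_gray.std()) + 1e-3)
          (a, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0, maxfev=5000)
          fit_error = np.mean((gaussian(centers, a, mu, sigma) - hist) ** 2) / (hist.max() ** 2 + 1e-9)
          return {"mu": float(mu), "sigma": float(abs(sigma)), "fit_error": float(fit_error)}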
  • Patent number: 10740653
    Abstract: This learning data generation device (10) is provided with: an identification unit (11) which identifies a subject included in a first captured image, and generates an identification result in which information indicating the type and existence of the identified subject or the motion of the identified subject is associated with the first captured image; and a generation unit (12) which generates learning data on the basis of the identification result and a second captured image, which is associated with the first captured image but is different in type from the first captured image.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: August 11, 2020
    Assignee: NEC CORPORATION
    Inventor: Soma Shiraishi
  • Patent number: 10740657
    Abstract: An image processing device and a method performed by the device are provided. The image processing device includes processors forming a partitioning unit, processors forming a classification unit, and memories shared by the partitioning unit and the classification unit. The partitioning unit obtains an image from a first memory, partitions the image into one or more areas, each area including one or more objects to be classified, and saves information of the partitioning of the image in a second memory. The classification unit obtains the image from the first memory, obtains the information of the partitioning of the image from the second memory, and classifies the objects in each area of the image to obtain a classification result of the image. The image processing device may further include a scheduling unit for controlling and optimizing the performance of the partitioning unit and the classification unit.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: August 11, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Yao, Tao Wang, Yu Wang
  • Patent number: 10729117
    Abstract: The present invention relates to a pest monitoring method based on machine vision. The method includes the following steps: arranging a pest trap at a place where pests gather, and setting an image acquisition device in front of the pest trap to acquire an image; identifying a pest in the acquired image, and obtaining a number of pests; extracting multiple suspicious pest images from a region of each identified pest in the image, and determining identification accuracy of each suspicious pest image, if the number of pests is greater than or equal to a preset threshold for the number of pests; and calculating a predicted level of pest damage based on the number of pests and the identification accuracy of each suspicious pest image. The present invention acquires a pest image automatically through the image acquisition device in front of the pest trap.
    Type: Grant
    Filed: December 25, 2017
    Date of Patent: August 4, 2020
    Assignee: ZHONGKAI UNIVERSITY OF AGRICULTURE AND ENGINEER
    Inventors: Yu Tang, Shaoming Luo, Zhenyu Zhong, Huan Lei, Chaojun Hou, Jiajun Zhuang, Weifeng Huang, Zaili Chen, Jintian Lin, Lixue Zhu
  • Patent number: 10733699
    Abstract: A face replacement system for replacing a target face with a source face can include a facial landmark determination model having a cascade multichannel convolutional neural network (CMC-CNN) to process both the target and the source face. A face warping module is able to warp the source face using determined facial landmarks that match the determined facial landmarks of the target face, and a face selection module is able to select a facial region of interest in the source face. An image blending module is used to blend the target face with the selected source region of interest.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 4, 2020
    Assignee: DEEP NORTH, INC.
    Inventors: Jinjun Wang, Qiqi Hou
  • Patent number: 10713493
    Abstract: This disclosure includes technologies for video recognition in general. The disclosed system can automatically detect various types of actions in a video, including reportable actions that cause shrinkage in a practical application for loss prevention in the retail industry. The temporal evolution of spatio-temporal features in the video are used for action recognition. Such features may be learned via a 4D convolutional operation, which is adapted to model low-level features based on a residual 4D block. Further, appropriate responses may be invoked if a reportable action is recognized.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: July 14, 2020
    Assignee: SHENZHEN MALONG TECHNOLOGIES CO., LTD.
    Inventors: Weilin Huang, Shiwen Zhang, Sheng Guo, Limin Wang, Matthew Robert Scott
  • Patent number: 10650274
    Abstract: A method and a clustering system for image clustering, and a computer-readable storage medium are provided. The method includes: extracting a GIST feature of a first image and a GIST feature of a second image; obtaining an image fingerprint of the first image based on the GIST feature of the first image and in conjunction with an LSH algorithm, and obtaining an image fingerprint of the second image based on the GIST feature of the second image and in conjunction with the LSH algorithm; calculating a similarity between the first and second images based on the image fingerprints of the first and second images; and classifying the first image and the second image as a same category of image in a case that the similarity between the first image and the second image is larger than a predetermined similarity threshold.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: May 12, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Pipei Huang
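    Illustrative sketch: a random-hyperplane LSH fingerprint over a GIST-style descriptor and a bit-agreement similarity, as one plausible reading of the abstract; the feature dimension, bit count, and threshold are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      N_BITS, FEAT_DIM = 64, 512                     # hypothetical sizes
      PLANES = rng.standard_normal((N_BITS, FEAT_DIM))

      def lsh_fingerprint(gist_feature):
          """Project a GIST-style descriptor onto random hyperplanes; the sign pattern is the fingerprint."""
          return (PLANES @ gist_feature > 0).astype(np.uint8)

      def similarity(fp_a, fp_b):
          """Fraction of matching bits between two fingerprints."""
          return float((fp_a == fp_b).mean())

      def same_cluster(feat_a, feat_b, threshold=0.9):
          return similarity(lsh_fingerprint(feat_a), lsh_fingerprint(feat_b)) > threshold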
  • Patent number: 10636139
    Abstract: An image processing apparatus and method thereof are provided. The image processing apparatus stores at least a reference image and performs the following operations: (a) receiving an image, (b) determining a plurality of representative keypoints for the image, such as determining the representative keypoints by a density restriction based method, (c) finding out that a matched area in the image corresponds to a first reference image according to the representative keypoints, (d) determining that a matched number between the representative keypoints and a plurality of reference keypoints of the first reference image is less than a threshold, and (e) storing the matched area in the image processing apparatus as a second reference image.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: April 28, 2020
    Assignee: HTC CORPORATION
    Inventor: Shu-Jhen Fan Jiang
  • Patent number: 10599701
    Abstract: In accordance with an example embodiment, large scale category classification based on sequence semantic embedding and parallel learning is described. In one example, one or more closest matches are identified by comparison between (i) a publication semantic vector that corresponds to at least part of the publication, the publication semantic vector based on a first machine-learned model that projects the at least part of the publication into a semantic vector space, and (ii) a plurality of category vectors corresponding to respective categories from a plurality of categories.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: March 24, 2020
    Assignee: EBAY INC.
    Inventor: Mingkuan Liu
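    Illustrative sketch: ranking categories by cosine similarity between a publication's semantic vector and the category vectors, assuming both sets of vectors come from previously trained embedding models.
      import numpy as np

      def closest_categories(publication_vec, category_vecs, category_names, k=3):
          """Return the k categories whose vectors are closest (by cosine similarity)
          to the publication's semantic vector.

          publication_vec: (D,) array; category_vecs: (N, D) array; category_names: list of N names.
          """
          p = publication_vec / (np.linalg.norm(publication_vec) + 1e-12)
          c = category_vecs / (np.linalg.norm(category_vecs, axis=1, keepdims=True) + 1e-12)
          scores = c @ p
          order = np.argsort(-scores)[:k]
          return [(category_names[i], float(scores[i])) for i in order]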
  • Patent number: 10599941
    Abstract: A processing system is configured to access an image of a system under analysis including a plurality of features of interest. The processing system determines a first angular position estimate of a first feature of interest in the image and determines a second angular position estimate of a second feature of interest in the image. The processing system determines a final angular position estimate for the system under analysis based on a relationship between the first angular position estimate and the second angular position estimate. The final angular position estimate is output.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: March 24, 2020
    Assignee: HAMILTON SUNDSTRAND CORPORATION
    Inventor: Michael C. Harke
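    Illustrative sketch: combining two angular position estimates with a weighted circular mean, which is one simple choice for the "relationship" between the estimates; the weights are assumptions.
      import math

      def combine_angles(theta1, theta2, w1=0.5, w2=0.5):
          """Fuse two angular position estimates (radians) with a weighted circular mean,
          so that e.g. 350 deg and 10 deg combine to 0 deg rather than 180 deg."""
          x = w1 * math.cos(theta1) + w2 * math.cos(theta2)
          y = w1 * math.sin(theta1) + w2 * math.sin(theta2)
          return math.atan2(y, x) % (2 * math.pi)

      # Example: estimates near the wraparound point.
      print(math.degrees(combine_angles(math.radians(350), math.radians(10))))  # ~0.0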
  • Patent number: 10581902
    Abstract: A method, non-transitory computer readable medium, security management apparatus, and network traffic management system that monitors received HTTP requests associated with a source IP address to obtain data for one or more signals. A value for one or more bins corresponding to one or more of the signals for individual behavioral histograms and a global behavioral histogram is updated based on the signal data. The individual behavioral histograms each correspond to one of the source IP addresses. A determination is made when a DDoS attack condition is detected. When the determining indicates that the DDoS attack condition is detected, an attack pattern is identified in the global behavioral histogram and a mitigation action is initiated for one of the source IP addresses based on a correlation of one of the individual behavioral histograms, which corresponds to the one of the source IP addresses, to the attack pattern.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: March 3, 2020
    Assignee: F5 Networks, Inc.
    Inventors: Vadim Krishtal, Peter Finkelshtein, Oran Baruch
  • Patent number: 10565711
    Abstract: The following relates generally to image segmentation. In one aspect, an image is received and preprocessed. The image may then be classified as segmentable if it is ready for segmentation; if not, it may be classified as not segmentable. Multiple, parallel segmentation processes may be performed on the image. The result of each segmentation process may be marked as a potential success (PS) or a potential failure (PF). The results of the individual segmentation processes may be evaluated in stages. An overall failure may be declared if a percentage of the segmentation processes marked as PF reaches a predetermined threshold.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: February 18, 2020
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Pingkun Yan, Christopher Stephen Hall, Kongkuo Lu
  • Patent number: 10558696
    Abstract: In accordance with an example embodiment, large scale category classification based on sequence semantic embedding and parallel learning is described. In one example, one or more closest matches are identified by comparison between (i) a publication semantic vector that corresponds to at least part of the publication, the publication semantic vector based on a first machine-learned model that projects the at least part of the publication into a semantic vector space, and (ii) a plurality of category vectors corresponding to respective categories from a plurality of categories.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: February 11, 2020
    Assignee: EBAY INC.
    Inventor: Mingkuan Liu
  • Patent number: 10553091
    Abstract: Methods, apparatuses, and computer-readable media are provided for splitting one or more merged blobs for one or more video frames. For example, a merged blob detected for a current video frame is identified. The merged blob includes pixels of at least a portion of at least two foreground objects in the current video frame. The merged blob is associated with a first blob tracker and a second blob tracker. A shape of the first blob tracker can be adjusted. For instance, adjusting the shape of the first blob tracker can include shifting at least one boundary of a bounding region of the first blob tracker based on the shape of the merged blob. The merged blob can be split into a first blob and a second blob, with the first blob being associated with the adjusted bounding region of the first blob tracker and the second blob being associated with a bounding region of the second blob tracker. The first blob and the second blob can then be output for object tracking for the current video frame.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: February 4, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Ying Chen, Ning Bi, Yang Zhou
  • Patent number: 10521673
    Abstract: A system that detects the number of persons waiting in a queue includes an image acquisition unit that acquires a captured image, an analysis unit that detects a person from the captured image, and a decision unit that, when a region in which no person is detected in the captured image has an area greater than or equal to a predetermined area, sets the number of persons detected from the captured image as the number of persons waiting in the queue.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: December 31, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kanako Takeda
  • Patent number: 10496894
    Abstract: System and method for text localization in images are disclosed. In an embodiment, a line and graphic eliminated image is received. Further, horizontal projection is performed on rows of the image to obtain a first flag vector, the flag vector indicating whether there is text in each row. Furthermore, a number of run-lengths of consecutive 1's and 0's is computed in the first flag vector. Moreover, text lines are extracted in the image based on the computed number of run-lengths of consecutive 1's and 0's in the first flag vector. Also, vertical projection is performed on the text lines to obtain a second flag vector for the text lines. Further, a number of run-lengths of consecutive 1's and 0's is computed in the second flag vectors. Furthermore, text is localized in the image based on the computed number of run-lengths of consecutive 1's and 0's in the second flag vectors.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: December 3, 2019
    Assignee: Tata Consultancy Services Limited
    Inventors: Santosh Kumar Jami, Srinivasa Rao Chalamala, Krishna Rao Kakkirala, Balakrishna Gudla, Arun Kumar Jindal, Bala Mallikarjunarao Garlapati, Sachin Premsukh Lodha, Ajeet Kumar Singh, Vijayanand Mahadeo Banahatti
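    Illustrative sketch: the projection-and-run-length scheme described above, applied to a binarized image with lines and graphics assumed already removed; the minimum run length is an assumed filter.
      import numpy as np

      def flag_vector(binary, axis):
          """1 where a row (axis=1) or column (axis=0) contains any foreground pixels."""
          return (binary.sum(axis=axis) > 0).astype(np.uint8)

      def runs(flags, value=1, min_len=3):
          """(start, end) index pairs of consecutive runs of `value`, dropping very short runs."""
          out, start = [], None
          for i, f in enumerate(flags):
              if f == value and start is None:
                  start = i
              elif f != value and start is not None:
                  if i - start >= min_len:
                      out.append((start, i))
                  start = None
          if start is not None and len(flags) - start >= min_len:
              out.append((start, len(flags)))
          return out

      def localize_text(binary):
          """binary: 2-D array with 1 = text pixel. Returns (row0, col0, row1, col1) boxes."""
          boxes = []
          for r0, r1 in runs(flag_vector(binary, axis=1)):          # text lines from row projection
              line = binary[r0:r1, :]
              for c0, c1 in runs(flag_vector(line, axis=0)):        # text blocks from column projection
                  boxes.append((r0, c0, r1, c1))
          return boxes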
  • Patent number: 10496691
    Abstract: Implementations provide an improved system for presenting search results based on entity associations of the search items. An example method includes generating first-level clusters of items responsive to a query, each cluster representing an entity in a knowledge base and including items mapped to the entity, merging the first-level clusters based on entity ontology relationships, applying hierarchical clustering to the merged clusters, producing final clusters, and initiating display of the items according to the final clusters. Another example method includes generating first-level clusters from items responsive to a query, each cluster representing an entity in a knowledge base and including items mapped to the entity, producing final clusters by merging the first-level clusters based on an entity ontology and an embedding space that is generated from an embedding model that uses the mapping, and initiating display of the items responsive to the query according to the final clusters.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: December 3, 2019
    Assignee: GOOGLE LLC
    Inventors: Jilin Chen, Peng Dai, Lichan Hong, Tianjiao Zhang, Huazhong Ning, Ed Huai-Hsin Chi
  • Patent number: 10489589
    Abstract: In one respect, there is provided a system for training a neural network adapted for classifying one or more scripts. The system may include at least one processor and at least one memory. The memory may include program code that provides operations when executed by the at least one processor. The operations may include: reducing a dimensionality of a plurality of features representative of a file set; determining, based at least on a reduced dimensional representation of the file set, a distance between a file and the file set; and determining, based at least on the distance between the file and the file set, a classification for the file. Related methods and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: November 26, 2019
    Assignee: Cylance Inc.
    Inventors: Michael Wojnowicz, Matthew Wolff, Aditya Kapoor
  • Patent number: 10462107
    Abstract: A computer-implemented system and method for analyzing data quality is provided. Attributes each associated with one or more elements are maintained. A request from a user is received for determining data quality of at least one attribute based on an interest vector having a listing of the elements of that attribute and a selection of elements of interest. Each element is encrypted. A condensed vector having the same listing of elements as the interest vector is populated with occurrence frequencies for each of the listed elements. The elements of the condensed vector are encrypted by computing an encrypted product of each element in the condensed vector and the corresponding element of the interest vector. An aggregate is determined based on the encrypted products of each element of the interest vector and the corresponding element of the condensed vector. The aggregate is provided as results of the data quality.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: October 29, 2019
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Julien Freudiger, Shantanu Rane, Alejandro E. Brito, Ersin Uzun
  • Patent number: 10440390
    Abstract: This application discloses methods and apparatuses for intra prediction mode selection, one method comprising: acquiring a gradient amplitude and a gradient angle of each element in a prediction block; analyzing statistics on the gradient angles and generating a gradient angle histogram of the prediction block; if the texture smoothness of the prediction block is greater than a first pre-determined threshold value, determining that the prediction block is a first-type prediction block, and setting a prediction mode of the first-type prediction block to comprise a direct current prediction mode and a planar prediction mode; and if the texture smoothness of the prediction block is less than a second pre-determined threshold value, determining that the prediction block is a second-type prediction block, and setting an angular prediction mode of the second-type prediction block to comprise an angular prediction mode corresponding to the maximum peak value in the gradient angle histogram.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: October 8, 2019
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Cao Shen, Haohui Xu, Chang Zhou, Kaiyan Chu, Guibin Lu
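    Illustrative sketch: building a magnitude-weighted gradient-angle histogram for a prediction block and choosing candidate intra modes from it; the smoothness measure, thresholds, bin count, and mode labels are assumptions.
      import numpy as np

      def gradient_angle_histogram(block, n_bins=33):
          """Per-element gradient magnitude/angle (central differences), then a
          magnitude-weighted angle histogram for the prediction block."""
          gy, gx = np.gradient(block.astype(np.float64))
          mag = np.hypot(gx, gy)
          ang = np.arctan2(gy, gx)                       # -pi..pi
          hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi), weights=mag)
          return hist, mag

      def select_modes(block, smooth_thresh=2.0, texture_thresh=8.0):
          hist, mag = gradient_angle_histogram(block)
          mean_grad = float(mag.mean())                  # low mean gradient = smooth (first-type) block
          if mean_grad < smooth_thresh:
              return ["DC", "Planar"]
          if mean_grad > texture_thresh:                 # textured (second-type) block
              return [f"Angular-{int(np.argmax(hist))}"] # dominant angle bin
          return ["DC", "Planar", f"Angular-{int(np.argmax(hist))}"]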
  • Patent number: 10402639
    Abstract: Techniques are disclosed to identify a form document in an image using a digital fingerprint of the form document. To do so, the image is evaluated to detect features of the image. For each feature, a pixel is plotted in a second image. The second image is the digital fingerprint of the form. To identify the form corresponding to the digital fingerprint, the digital fingerprint may be compared to digital fingerprints of known forms.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: September 3, 2019
    Assignee: INTUIT, INC.
    Inventors: Richard J. Becker, Greg Knoblauch, Pavlo Malynin, Anju Eappen
  • Patent number: 10398354
    Abstract: A system is provided for assessing the head injury criterion (HIC) by analyzing signals received from sensors of a sensor array to calculate an impact force on an impact area and determine the linear acceleration based on the impact force and the mass of the wearer's head. HIC values are calculated within a variable-duration sliding window across a time period within which an impact occurs, based on the integral of linear acceleration for the time period of the window, to determine a maximal HIC which is converted to a biofeedback signal.
    Type: Grant
    Filed: July 9, 2014
    Date of Patent: September 3, 2019
    Assignee: Swinburne University of Technology
    Inventors: Franz Konstantin Fuss, Yehuda Weizman, Batdelger Doljin
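    Illustrative sketch: the standard HIC sliding-window computation implied by the abstract, over a sampled, non-negative resultant acceleration trace; the 15 ms window cap is an assumption.
      import numpy as np

      def hic(accel_g, dt, max_window_s=0.015):
          """Head Injury Criterion over a variable-duration sliding window.

          accel_g : resultant linear head acceleration samples, in g (non-negative).
          dt      : sample spacing in seconds.
          HIC = max over (t1, t2) of (t2 - t1) * [ (1/(t2 - t1)) * integral of a dt ] ** 2.5
          """
          n = len(accel_g)
          cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))   # running integral of a(t)
          max_w = max(1, int(round(max_window_s / dt)))
          best = 0.0
          for i in range(n):
              for j in range(i + 1, min(n, i + max_w) + 1):
                  duration = (j - i) * dt
                  avg_a = (cum[j] - cum[i]) / duration
                  if avg_a <= 0:
                      continue
                  best = max(best, duration * avg_a ** 2.5)
          return best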
  • Patent number: 10387482
    Abstract: The overall architecture and details of a scalable video fingerprinting and identification system that is robust with respect to many classes of video distortions is described. In this system, a fingerprint for a piece of multimedia content is composed of a number of compact signatures, along with traversal hash signatures and associated metadata. Numerical descriptors are generated for features found in a multimedia clip, signatures are generated from these descriptors, and a reference signature database is constructed from these signatures. Query signatures are also generated for a query multimedia clip. These query signatures are searched against the reference database using a fast similarity search procedure, to produce a candidate list of matching signatures. This candidate list is further analyzed to find the most likely reference matches. Signature correlation is performed between the likely reference matches and the query clip to improve detection accuracy.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: August 20, 2019
    Assignee: Gracenote, Inc.
    Inventors: Prashant Ramanathan, Mihailo M. Stojancic
  • Patent number: 10375947
    Abstract: One example insect sensing system includes a light emitter configured to emit light; a structured light generator positioned to receive the emitted light and configured to generate structured light from the emitted light; a plurality of light sensors arranged in a line, each of the light sensors oriented to receive at least a portion of the structured light and output a sensor signal indicating an amount of light received by the respective light sensor; a processing device configured to: obtain the sensor signals from each of the light sensors, and determine a presence of an insect based on a received sensor signal from at least one light sensor, the sensor signal indicating a reduced amount of received light by the at least one light sensor. Another example insect sensing system includes a camera comprising an image sensor and a lens having an aperture of f/2.8 or wider; and a processor configured to obtain an image from the camera and detect an insect in the image.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: August 13, 2019
    Assignee: VERILY LIFE SCIENCES LLC
    Inventors: Jianyi Liu, Peter Massaro
  • Patent number: 10380461
    Abstract: Approaches introduce a pre-processing and post-processing framework to a neural network-based approach to identify items represented in an image. For example, a classifier that is trained on several categories can be provided. An image that includes a representation of an item of interest is obtained. Rotated versions of the image are generated and each of a subset of the rotated images is analyzed to determine a probability that a respective image includes an instance of a particular category. The probabilities can be used to determine a probability distribution of output category data, and the data can be analyzed to select an image of the rotated versions of the image. Thereafter, a categorization tree can be utilized, whereby the category of the item of interest represented in the image can be determined. The determined category can be provided to an item retrieval algorithm to determine primary content for the item of interest.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: August 13, 2019
    Assignee: A9.COM, INC.
    Inventors: Avinash Aghoram Ravichandran, Matias Omar Gregorio Benitez, Rahul Bhotika, Scott Daniel Helmer, Anshul Kumar Jain, Junxiong Jia, Rakesh Madhavan Nambiar, Oleg Rybakov
  • Patent number: 10382373
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment a server computer system receives a content message from a first content source, and analyzes the content message to determine one or more quality scores and one or more content values associated with the content message. The server computer system analyzes the content message with a plurality of content collections of the database to identify a match between at least one of the one or more content values and a topic associated with at least a first content collection of the one or more content collections and automatically adds the content message to the first content collection based at least in part on the match. In various embodiments, different content values, image processing operations, and content selection operations are used to curate content collections.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: August 13, 2019
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
  • Patent number: 10373073
    Abstract: A computer implemented method of automatically creating a classification function trained with augmented representation of features extracted from a plurality of sample media objects using one or more hardware processors for executing a code. The code comprises code instructions for extracting a plurality of features from a plurality of sample media objects, generating a plurality of feature samples for each of the plurality of features by augmenting the plurality of features, training a classification function with the plurality of feature samples and outputting the classification function for classifying one or more new media objects.
    Type: Grant
    Filed: January 11, 2016
    Date of Patent: August 6, 2019
    Assignee: International Business Machines Corporation
    Inventor: Pavel Kisilev
  • Patent number: 10354143
    Abstract: A method (100) is provided for comparing a first video shot (Vs1) comprising a first set of first images (I1(s)) with a second video shot (Vs2) comprising a second set of second images (I2(t)), at least one of the first and second sets comprising at least two images.
    Type: Grant
    Filed: October 13, 2014
    Date of Patent: July 16, 2019
    Assignee: TELECOM ITALIA S.p.A.
    Inventors: Skjalg Lepsoy, Massimo Balestri, Gianluca Francini
  • Patent number: 10346684
    Abstract: Various embodiments provide a method for computing color descriptors of product images. For example, a number of fine color representatives can be determined to describe color variation in an image as a histogram by assigning a saturation value and a brightness value to a plurality of color hues. For each pixel of the image, the closest color among a defined fine color representative set is computed. In this example, each of the pixels is assigned a color ID corresponding to its closest matching fine color representative and at least one family color ID corresponding to one or more pure color families. In this example, a histogram of the color representatives and a histogram for the color families are computed. A single color vector descriptor for the image is then determined by combining the family histogram with the color representative histogram.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: July 9, 2019
    Assignee: A9.COM, INC.
    Inventors: Arnab Sanat Kumar Dhua, Himanshu Arora, Sunil Ramesh
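    Illustrative sketch: a fine-representative palette with fixed saturation and brightness, per-pixel nearest-representative assignment, and concatenated fine/family histograms; the palette size, family grouping, and plain HSV distance are assumptions.
      import numpy as np

      # Hypothetical palette: 24 fine hue representatives, each tied to one of 6 colour families.
      N_FINE, N_FAMILIES = 24, 6
      HUES = np.linspace(0, 179, N_FINE)                         # OpenCV-style hue range
      FINE_REPS = np.stack([HUES,
                            np.full(N_FINE, 200.0),              # assigned saturation
                            np.full(N_FINE, 200.0)], axis=1)     # assigned brightness
      FAMILY_OF = np.arange(N_FINE) // (N_FINE // N_FAMILIES)    # fine representative -> colour family

      def color_descriptor(hsv_pixels):
          """hsv_pixels: (N, 3) float array of H, S, V values for an image's pixels.
          Returns the concatenated fine-colour and family histograms, L1-normalised."""
          d = np.linalg.norm(hsv_pixels[:, None, :] - FINE_REPS[None, :, :], axis=2)
          fine_ids = d.argmin(axis=1)                             # closest representative per pixel
          fine_hist = np.bincount(fine_ids, minlength=N_FINE).astype(np.float64)
          family_hist = np.bincount(FAMILY_OF[fine_ids], minlength=N_FAMILIES).astype(np.float64)
          desc = np.concatenate([fine_hist, family_hist])
          return desc / (desc.sum() + 1e-12)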