Patents Issued on April 28, 2020
  • Patent number: 10635907
    Abstract: Techniques are described for enhanced interactions for a security and automation system using a doorbell camera. One method includes detecting, by the doorbell camera, that a person is located within a distance threshold to an entry of a structure based on received sensor data; identifying, by the doorbell camera, a suggested security action for the security and automation system to perform based on the detecting; transmitting the suggested action to the security and automation system based on the identifying; and broadcasting, via the doorbell camera, a message to the person located within the distance threshold in response to the transmitting.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: April 28, 2020
    Assignee: Vivint, Inc.
    Inventors: Michael D. Child, Jeremy B. Warren, Alen L. Peacock, Michelle Zundel
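The claimed flow above reduces to a gate-then-act pattern: check the distance threshold, suggest an action, broadcast a message. A minimal sketch (the function name, default threshold, action string, and message are all invented for illustration, not taken from the patent):

```python
def suggest_action(person_distance_m: float, threshold_m: float = 3.0):
    """Return a (suggested_action, broadcast_message) pair, or None if
    the detected person is outside the distance threshold."""
    if person_distance_m > threshold_m:
        return None
    # The security and automation system would choose among its
    # configured actions here; this one is a placeholder.
    action = "record_and_notify"
    message = "You are being recorded. How can we help you?"
    return action, message
```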
  • Patent number: 10635908
    Abstract: Provided are an image processing system, an image processing method, and a program capable of detecting a group with high irregularity. An image processing system is provided with: a group detector that detects a group based on an input image captured by an image capturing device at a first time; a repeating group analyzer that determines whether a detected group has been previously detected; and an alert module that reports when the detected group has been determined by the repeating group analyzer to have been previously detected.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: April 28, 2020
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Yusuke Takahashi, Hiroyoshi Miyano
  • Patent number: 10635909
    Abstract: A vehicular structure from motion (SfM) system can include an input to receive a sequence of image frames acquired from a camera on a vehicle and an SIMD processor to process 2D feature point input data extracted from the image frames so as to compute 3D points. For a given 3D point, the SfM system can calculate partial AᵀA and partial Aᵀb matrices outside of an iterative triangulation loop, reducing computational complexity inside the loop. Multiple tracks can be processed together to take full advantage of SIMD instruction parallelism.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: April 28, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Deepak Kumar Poddar, Shyam Jagannathan, Soyeb Nagori, Pramod Kumar Swami
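The optimization the abstract describes (accumulating the normal-equation terms AᵀA and Aᵀb once per observation, outside any iterative refinement loop) applies to standard linear (DLT) triangulation. A pure-Python sketch, not TI's implementation; the `solve3` helper and the `(u, v, P)` data layout are assumptions:

```python
def solve3(M, y):
    # Gaussian elimination with partial pivoting for a 3x3 system M x = y.
    A = [row[:] + [yi] for row, yi in zip(M, y)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for k in range(c, 4):
                A[r][k] -= f * A[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][k] * x[k] for k in range(r + 1, 3))) / A[r][r]
    return x

def triangulate(observations):
    """DLT triangulation of one 3D point from (u, v, P) observations,
    where P is a 3x4 projection matrix. The partial AᵀA (3x3) and
    Aᵀb (3) terms are accumulated per observation, outside any
    iterative refinement loop."""
    ATA = [[0.0] * 3 for _ in range(3)]
    ATb = [0.0] * 3
    for u, v, P in observations:
        for coord, row in ((u, 0), (v, 1)):
            # From coord * (P[2]·X) = P[row]·X with X = (x, y, z, 1).
            a = [coord * P[2][j] - P[row][j] for j in range(3)]
            b = P[row][3] - coord * P[2][3]
            for i in range(3):
                ATb[i] += a[i] * b
                for j in range(3):
                    ATA[i][j] += a[i] * a[j]
    return solve3(ATA, ATb)
```

Because each observation only adds into the accumulators, many tracks can be processed in lockstep, which is what makes the SIMD batching in the patent attractive.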
  • Patent number: 10635910
    Abstract: This malfunction diagnosis apparatus detects a line on a road surface by distinguishing the line from an asphalt surface other than the line. The malfunction diagnosis apparatus sets a first area corresponding to the line and a second area corresponding to the asphalt surface using a first image captured while the line is detected. The malfunction diagnosis apparatus calculates a first brightness in the first area and a second brightness in the second area using a second image captured while the vehicle is cruising and the line is not detected. Further, the malfunction diagnosis apparatus diagnoses the malfunction based on at least one of the first brightness and the second brightness.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: April 28, 2020
    Assignee: TRANSTRON INC
    Inventors: Yoshiaki Hishinuma, Naoyuki Urushizaki, Hirokazu Sugita, Kazuhiko Kobayashi
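The diagnosis step is essentially a contrast check between the two sampled regions. A toy sketch under invented names and an invented contrast threshold (the patent does not specify the brightness comparison this concretely):

```python
def mean_brightness(image, region):
    """Mean pixel value inside region = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = region
    vals = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def diagnose(image, line_region, asphalt_region, min_contrast=30.0):
    """Flag a possible camera malfunction when the expected
    line/asphalt brightness contrast has collapsed."""
    first = mean_brightness(image, line_region)    # first brightness
    second = mean_brightness(image, asphalt_region)  # second brightness
    return (first - second) < min_contrast
```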
  • Patent number: 10635911
    Abstract: In an apparatus for recognizing a travel lane of a vehicle, a deviation determiner is configured to, if two or more shape change points are extracted by a shape change point extractor and the second derivative value of curvature of an extracted boundary line at at least one of the two or more shape change points is inverted in sign, determine whether or not the extracted boundary line and an estimated boundary line estimated from travel lane parameters estimated by a travel lane parameter estimator are deviating from each other beyond a predetermined allowable range. A driving aid is configured to, if it is determined that the extracted boundary line and the estimated boundary line are deviating from each other beyond the predetermined allowable range, perform control upon deviation to prevent occurrence of undesirable situations that may be caused by deviation between the extracted boundary line and the estimated boundary line.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: April 28, 2020
    Assignees: DENSO CORPORATION, SOKEN, INC.
    Inventors: Shunya Kumano, Naoki Nitanda, Akihiro Watanabe
  • Patent number: 10635912
    Abstract: The disclosure relates to methods, systems, and apparatuses for virtual sensor data generation and more particularly relates to generation of virtual sensor data for training and testing models or algorithms to detect objects or obstacles. A method for generating virtual sensor data includes simulating, using one or more processors, a three-dimensional (3D) environment comprising one or more virtual objects. The method includes generating, using one or more processors, virtual sensor data for a plurality of positions of one or more sensors within the 3D environment. The method includes determining, using one or more processors, virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth comprises a dimension or parameter of the one or more virtual objects. The method includes storing and associating the virtual sensor data and the virtual ground truth using one or more processors.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: April 28, 2020
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Ashley Elizabeth Micks, Venkatapathi Raju Nallapa, Harpreetsingh Banvait, Scott Vincent Myers
  • Patent number: 10635913
    Abstract: A path planning method applied to a navigation device includes acquiring a two-dimensional depth map, transforming the two-dimensional depth map into a gray distribution map via statistics of pixel values on the two-dimensional depth map, computing a space matching map by arranging pixel counts on the gray distribution map, and computing a weighting value for each angle range of the space matching map in accordance with the distance from the location of a pixel count to a reference point of the space matching map. The weighting value represents the existential probability of an obstacle within the said angle range and a probable distance between the navigation device and the obstacle.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: April 28, 2020
    Assignee: MEDIATEK INC.
    Inventors: Kai-Min Yang, Chih-Kai Chang, Tsu-Ming Liu
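One plausible reading of the per-angle weighting (the names, the column-group binning, and the depth-to-weight mapping below are guesses for illustration, not the patent's space matching map) is nearest-obstacle closeness per angular sector:

```python
def angle_weights(depth_map, num_bins=4, max_depth=255):
    """For each angular sector (approximated here by groups of image
    columns), return a weight in [0, 1]: larger means a nearer
    obstacle in that sector."""
    h, w = len(depth_map), len(depth_map[0])
    per_bin = w // num_bins
    weights = []
    for b in range(num_bins):
        cols = range(b * per_bin, (b + 1) * per_bin)
        nearest = min(depth_map[r][c] for r in range(h) for c in cols)
        weights.append(1.0 - nearest / max_depth)  # closer -> larger
    return weights
```

A planner would then steer toward the sector with the smallest weight.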
  • Patent number: 10635914
    Abstract: A system for testing a camera for a vision system for a vehicle includes providing a camera having a field of view and providing and disposing a test structure in the field of view of the camera and between the camera and a target. The test structure includes at least one optic and is capable of directing a principal axis of the at least one optic toward multiple respective regions of the target. The camera views the target via the at least one optic. Image data is captured with the camera. The captured image data is representative of images of the target including the multiple regions of the target as captured when the at least one optic has its principal axis directed toward the respective regions of the target. The image data captured by the camera is processed to determine the focus at each of the multiple regions of the target.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: April 28, 2020
    Assignee: MAGNA ELECTRONICS INC.
    Inventors: Matthew C. Sesti, David F. Olson, Robert A. Devota, John R. Garcia, Donald W. Mersino
  • Patent number: 10635915
    Abstract: A method for giving a warning on a blind spot of a vehicle based on V2V communication is provided. The method includes steps of: (a) if a rear video of a first vehicle is acquired from a rear camera, a first blind-spot warning device transmitting the rear video to a blind-spot monitor, to determine whether nearby vehicles are in the rear video using a CNN, and output first blind-spot monitoring information indicating whether the nearby vehicles are in a blind spot; and (b) if second blind-spot monitoring information, indicating whether a second vehicle is in the blind spot, is acquired from a second blind-spot warning device of the second vehicle over the V2V communication, the first blind-spot warning device warning that one of the second vehicle and the nearby vehicles is in the blind spot by referring to the first and the second blind-spot monitoring information.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10635916
    Abstract: Systems and methods for determining vehicle crowdedness are provided. A method can include obtaining, by a computing system, real-time location data including information identifying a real-time location corresponding to a transit station for each of a plurality of user computing devices; determining, by the computing system using a vehicle crowdedness model, a vehicle crowdedness for one or more transit vehicles departing from one or more transit stations based at least in part on the real-time location data; and communicating, by the computing system, data indicative of the vehicle crowdedness for the one or more transit vehicles to a particular user computing device. The data indicative of the vehicle crowdedness can include information for displaying the vehicle crowdedness for the one or more transit vehicles on the particular user computing device.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: April 28, 2020
    Assignee: Google LLC
    Inventors: Cayden Meyer, Reuben Kan
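A crude stand-in for the vehicle crowdedness model (assuming each device reporting a real-time location at a station boards the next departing vehicle; the capacity figure and all names are invented):

```python
from collections import Counter

def estimate_crowdedness(device_stations, capacity_per_vehicle=50):
    """Map each transit station to a crowdedness value in [0, 1],
    derived from how many devices report that station's location."""
    counts = Counter(device_stations)
    return {station: min(1.0, n / capacity_per_vehicle)
            for station, n in counts.items()}
```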
  • Patent number: 10635917
    Abstract: A method for detecting a vehicle occupancy by using passenger keypoints based on analyzing an interior image of a vehicle is provided. The method includes steps of: (a) if the interior image is acquired, a vehicle occupancy detecting device (i) inputting the interior image into a feature extractor network, to generate feature tensors by applying a convolution operation to the interior image, (ii) inputting the feature tensors into a keypoint heatmap & part affinity field (PAF) extractor, to generate keypoint heatmaps and PAFs, (iii) inputting the keypoint heatmaps and the PAFs into a keypoint detecting device, to extract keypoints from the keypoint heatmaps, and (iv) grouping the keypoints based on the PAFs, to detect keypoints per passenger; and (b) inputting the keypoints into a seat occupation matcher, to match the passengers with seats by referring to the inputted keypoints and preset ROIs for the seats and to detect the vehicle occupancy.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 28, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10635918
    Abstract: A method for managing a smart database which stores facial images for face recognition is provided. The method includes steps of: a managing device (a) counting specific facial images corresponding to a specific person in the smart database, where new facial images are continuously stored, and determining whether a first counted value, representing a count of the specific facial images, satisfies a first set value; and (b) if the first counted value satisfies the first set value, inputting the specific facial images into a neural aggregation network, to generate quality scores of the specific facial images by aggregation of the specific facial images, and, if a second counted value, representing a count of the specific quality scores counted from the highest among the quality scores, satisfies a second set value, deleting the part of the specific facial images corresponding to the uncounted quality scores from the smart database.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: April 28, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
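The counting-and-pruning policy can be sketched generically: once a person accumulates enough images, keep only the highest-quality ones. Here `score_fn` stands in for the neural aggregation network's quality scores, and both set values are illustrative:

```python
def prune_person_images(images, first_set_value=5, second_set_value=3,
                        score_fn=None):
    """If a person's image count reaches first_set_value, keep only the
    second_set_value highest-scoring images; otherwise keep them all."""
    if len(images) < first_set_value:
        return images
    scored = sorted(images, key=score_fn, reverse=True)
    return scored[:second_set_value]
```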
  • Patent number: 10635919
    Abstract: The purpose of the present invention is to make it easier to detect that a partially occluded subject is the subject to be detected, regardless of which portion is occluded. Provided is an information processing device (110), comprising: a computation unit (111) which computes local scores for each of a plurality of positions contained in an image of a prescribed scope, said scores indicating the likelihood of an object to be detected being present; and a change unit (112) which changes the scores for those positions, among the plurality of positions, which are included in a prescribed region determined according to the plurality of scores computed for said plurality of positions, such that the likelihood of the object to be detected being present increases.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: April 28, 2020
    Assignee: NEC CORPORATION
    Inventor: Kenta Araki
  • Patent number: 10635920
    Abstract: An information processing apparatus includes a display unit that displays an image on a display device, a specifying unit that specifies a selection region selected in the displayed image, and a determining unit that determines a detection parameter for use in object detection processing based on a feature of an object within the specified selection region.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: April 28, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hideyuki Ikegami
  • Patent number: 10635921
    Abstract: A container for providing an enclosure for a food item includes a plurality of grading marks and a docking station to dock an electronic device. Further, the system includes a processor configured to take one or more pictures of the food item (104) using the electronic device (110), transmit the one or more pictures to a cloud (202), receive recommended recipes for the food item (104), and display the recommended recipes.
    Type: Grant
    Filed: August 14, 2015
    Date of Patent: April 28, 2020
    Assignee: KENWOOD LIMITED
    Inventors: Paul Palmer, Gilman Grundy
  • Patent number: 10635922
    Abstract: A terminal for measuring at least one dimension of an object includes at least one imaging subsystem and an actuator. The at least one imaging subsystem includes an imaging optics assembly operable to focus an image onto an image sensor array. The imaging optics assembly has an optical axis. The actuator is operably connected to the at least one imaging subsystem for moving an angle of the optical axis relative to the terminal. The terminal is adapted to obtain first image data of the object and is operable to determine at least one of a height, a width, and a depth dimension of the object based on effecting the actuator to change the angle of the optical axis relative to the terminal to align the object in second image data with the object in the first image data, the second image data being different from the first image data.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: April 28, 2020
    Assignee: Hand Held Products, Inc.
    Inventors: Edward C. Bremer, Matthew Pankow
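The angle-change measurement is ordinary trigonometry once the range to the object is known. A simplified sketch (assumes a level terminal and a known distance, and collapses the patent's two-image alignment procedure into two given optical-axis angles):

```python
import math

def object_height(distance_m, angle_bottom_deg, angle_top_deg):
    """Height of an object from the two optical-axis elevation angles
    that align its bottom and top edges, at a known range."""
    t_bottom = math.tan(math.radians(angle_bottom_deg))
    t_top = math.tan(math.radians(angle_top_deg))
    return distance_m * (t_top - t_bottom)
```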
  • Patent number: 10635923
    Abstract: An image processing apparatus includes a detector, an estimator and a determiner. The detector detects a candidate region of a captured image captured by a camera, the candidate region serving as a candidate for a water drop region affected by a water drop on the lens of the camera, based on an edge strength of each pixel in the captured image. The estimator estimates, based on the candidate region, a circle that includes the candidate region. The determiner determines whether or not the candidate region is part of the water drop region based on the edge strength of some of the pixels in the circle.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: April 28, 2020
    Assignee: FUJITSU TEN LIMITED
    Inventors: Yasushi Tani, Takashi Kono, Junji Hashimoto, Tomohide Kasame, Teruhiko Kamibayashi, Hiroki Murasumi
  • Patent number: 10635924
    Abstract: Systems and methods for image classification include receiving imaging data of in-vivo or excised tissue of a patient during a surgical procedure. Local image features are extracted from the imaging data. A vocabulary histogram for the imaging data is computed based on the extracted local image features. A classification of the in-vivo or excised tissue of the patient in the imaging data is determined based on the vocabulary histogram using a trained classifier, which is trained based on a set of sample images with confirmed tissue types.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: April 28, 2020
    Inventors: Ali Kamen, Shanhui Sun, Terrence Chen, Tommaso Mansi, Alexander Michael Gigler, Patra Charalampaki, Maximillian Fleischer, Dorin Comaniciu
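The vocabulary-histogram step is the classic bag-of-visual-words encoding: assign each local descriptor to its nearest visual word and normalize the counts. A minimal sketch with toy 2-D descriptors (a real system would use high-dimensional descriptors and a learned vocabulary):

```python
def vocabulary_histogram(local_features, vocabulary):
    """Normalized histogram of nearest-visual-word assignments.
    Features and vocabulary words are plain numeric tuples here."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hist = [0] * len(vocabulary)
    for f in local_features:
        word = min(range(len(vocabulary)),
                   key=lambda i: dist2(f, vocabulary[i]))
        hist[word] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

The resulting fixed-length vector is what the trained classifier consumes, regardless of how many local features an image yields.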
  • Patent number: 10635925
    Abstract: A system and method of video surveillance, namely, for processing graphic and other video information to combine the display of video images received from video cameras with data presented on a map. The method includes receiving an image from the video camera, defining a static object and the coordinates of its location on a frame of the image, and defining a mobile object and the coordinates of its location on an image frame; then setting a graphic symbol of the static object on the map, calibrating the video camera, defining at least four virtual segments on the map and on the frame of the received image, transforming the coordinates of the static object from the coordinate system of the frame to the coordinate system of the map, displaying a combined image on a display, and consecutively adjusting the transparency of the combined image.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: April 28, 2020
    Assignee: OOO ITV Group
    Inventor: Murat K. Altuev
  • Patent number: 10635926
    Abstract: An image analyzing apparatus reprojects an input image in a plurality of different directions to divide the input image into a plurality of partial images, extracts a feature amount from each of the partial images, and calculates a degree of importance of the input image by position from the extracted feature amount in accordance with a predetermined regression model.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 28, 2020
    Assignee: RICOH COMPANY, LTD.
    Inventor: Takayuki Hara
  • Patent number: 10635927
    Abstract: Performing semantic segmentation of an image can include processing the image using a plurality of convolutional layers to generate one or more feature maps, providing at least one of the one or more feature maps to multiple segmentation branches, and generating segmentations of the image based on the multiple segmentation branches, including providing feedback to, or generating feedback from, at least one of the multiple segmentation branches in performing segmentation in another of the segmentation branches.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: April 28, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yi-Ting Chen, Athmanarayanan Lakshmi Narayanan
  • Patent number: 10635928
    Abstract: Systems and methods for identifying an object and presenting additional information about the identified object are provided. The techniques of the present invention can allow the user to specify modes to help with identifying objects. Furthermore, the additional information can be provided with different levels of detail depending on user selection. Apparatus for presenting a user with a log of the identified objects is also provided. The user can customize the log by, for example, creating a multi-media album.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: April 28, 2020
    Assignee: APPLE INC.
    Inventor: Michael Nathaniel Rosenblatt
  • Patent number: 10635929
    Abstract: The present invention belongs to the field of machine vision. A saliency-based method for extracting a road target from a night vision infrared image is disclosed. The method combines saliencies of the time domain and the frequency domain, global contrast and local contrast, and low-level features and high-level features, and energy radiation is also considered to be a saliency factor; thus, the object of processing is an infrared image and not the usual natural image. The extraction of a salient region is performed on the raw infrared image on the basis of energy radiation, the obtained extraction result of the salient region is more accurate and thorough, and the contour of a target in the salient region is clearer.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: April 28, 2020
    Assignee: JIANGSU UNIVERSITY
    Inventors: Yingfeng Cai, Lei Dai, Hai Wang, Xiaoqiang Sun, Long Chen, Haobin Jiang, Xiaobo Chen, Youguo He
  • Patent number: 10635930
    Abstract: For patient positioning for scanning, a current pose of a patient is compared to a desired pose. The desired pose may be based on a protocol or a pose of the same patient in a previous examination. Any differences in pose, such as arm position, leg position, head orientation, and/or torso orientation (e.g., laying on side, back, or stomach), are communicated. By changing the current pose of the patient to be more similar to the desired pose, a more consistent and/or registerable dataset may be acquired by scanning the patient.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: April 28, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Bernhard Geiger, Atilla Peter Kiraly
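Comparing a current pose to a desired pose and communicating the differences can be sketched as a per-part tolerance check (the part names, the angle encoding, and the tolerance are invented; the patent describes richer pose differences such as torso orientation):

```python
def pose_differences(current, desired, tolerance_deg=10.0):
    """Return the body parts whose current angle deviates from the
    desired pose by more than the tolerance, so they can be
    communicated to the patient or technologist."""
    return [part for part, angle in desired.items()
            if abs(current.get(part, 0.0) - angle) > tolerance_deg]
```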
  • Patent number: 10635931
    Abstract: The present technology relates to an information processing device, an information processing method, and a program that are designed to enable easy generation of a path for successively displaying characteristic images. The information processing device includes a setting unit that sets a path for connecting characteristic portions in at least one image by referring to metadata including at least information about feature points detected from the image. The setting unit sets the path by determining a regression curve, using the feature points. In a case where the feature points include a feature point at a distance equal to or longer than a predetermined threshold value from the regression curve, the setting unit redetermines the regression curve after removing a feature point detected from an image including a feature point having a low score among the feature points. The present technology can be applied to information processing devices that process still images and moving images.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: April 28, 2020
    Assignee: Sony Corporation
    Inventors: Kazuhiro Shimauchi, Seijiro Inaba, Nobuho Ikeda, Hiroshi Ikeda, Shuichi Asajima, Yuki Ono
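The redetermination step above — fit a regression curve, check feature-point distances, drop a low-score point, refit — can be illustrated with a degree-1 curve. The patent does not fix the curve's form or the scoring; everything here is a simplification with points given as (x, y, score):

```python
def fit_line(points):
    """Least-squares line y = a*x + b through (x, y, score) points."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return a, (sy - a * sx) / n

def robust_path(points, threshold=1.0):
    """Refit after dropping the lowest-score point whenever some point
    lies at a distance >= threshold from the current regression line."""
    pts = list(points)
    a, b = fit_line(pts)
    while any(abs(y - (a * x + b)) >= threshold for x, y, _ in pts) and len(pts) > 2:
        pts.remove(min(pts, key=lambda p: p[2]))  # lowest-score point
        a, b = fit_line(pts)
    return a, b
```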
  • Patent number: 10635932
    Abstract: Embodiments of the present disclosure relate to systems, techniques, methods, and computer-readable mediums for one or more database systems configured to perform image identification of an image captured using a remote mobile device, and display of identity information associated with the captured image on the remote mobile device, in communication with the database system(s). A system obtains an image captured using a camera on a remote mobile device and performs image analysis to identify the captured image using reference images in one or more databases. The system can present the results for display in an interactive user interface on the remote mobile device.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: April 28, 2020
    Assignee: PALANTIR TECHNOLOGIES INC.
    Inventors: Aakash Goenka, Arzav Jain, Alexander Taheri, Daniel Isaza, Jack Zhu, William Manson, Vehbi Deger Turan, Stanislaw Jastrzebski
  • Patent number: 10635933
    Abstract: Methods and apparatus are disclosed for vision-based determination of trailer presence. An example vehicle includes a camera to capture a plurality of frames. The example vehicle also includes a controller to calculate feature descriptors for a set of features identified in a first frame, compute respective match magnitudes between the feature descriptors of the first frame and those of a second frame, calculate respective feature scores for each feature of the set of features, and determine if a trailer is present by comparing a feature score of a feature to a threshold.
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: April 28, 2020
    Assignee: Ford Global Technologies, LLC
    Inventors: Robert Bell, Brian Grewe
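The match-magnitude/feature-score test can be sketched with unit-norm descriptors, where the match magnitude is a dot product: a feature that keeps matching strongly across frames while the scene behind the vehicle changes suggests something rigidly attached. The descriptor form and the threshold are assumptions:

```python
def trailer_present(desc_frame1, desc_frame2, score_threshold=0.8):
    """Return True if any first-frame descriptor has a best match
    magnitude in the second frame at or above the threshold.
    Descriptors are unit-length tuples; match magnitude = dot product."""
    def match_mag(d1, d2):
        return sum(a * b for a, b in zip(d1, d2))
    scores = [max(match_mag(d1, d2) for d2 in desc_frame2)
              for d1 in desc_frame1]
    return any(s >= score_threshold for s in scores)
```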
  • Patent number: 10635934
    Abstract: A method of recognizing image content, comprises applying to the image a neural network which comprises an input layer for receiving the image, a plurality of hidden layers for processing the image, and an output layer for generating output pertaining to an estimated image content based on outputs of the hidden layers. The method further comprises applying to an output of at least one of the hidden layers a neural network branch, which is independent of the neural network and which has an output layer for generating output pertaining to an estimated error level of the estimate. A combined output indicative of the estimated image content and the estimated error level is generated.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: April 28, 2020
    Assignee: Ramot at Tel-Aviv University Ltd.
    Inventors: Lior Wolf, Noam Mor
  • Patent number: 10635935
    Abstract: A method and computing device, for generating training image data for a machine learning-based object recognition system is described. The method comprises receiving generic image data of an object type, receiving recorded image data related to the object type, and modifying the generic image data with respect to at least one imaging-related parameter. The method further comprises determining a degree of similarity between the modified generic image data and the recorded image data, and, when the determined degree of similarity fulfills a similarity condition, storing the modified generic image data as generated training image data of the object type. Further described are a computing device, a computer program product, a system and a motor vehicle.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: April 28, 2020
    Assignee: Elektrobit Automotive GmbH
    Inventors: Tiberiu Cocias, Cosmin Ginerica, Sorin Grigorescu, Gigel Macesanu, Bogdan Transnea
  • Patent number: 10635936
    Abstract: A method includes receiving a first set of sensor data including data representing an object or an event in a monitored environment, receiving a second set of sensor data representing a corresponding time period as a time period represented by the first set of sensor data, inputting to a tutor classifier data representing the first set of data and including data representing the object or the event, generating a classification of the object or event in the tutor classifier, receiving the second set of sensor data at an apprentice classifier training process, receiving the classification generated in the tutor classifier at the apprentice classifier training process, and training the apprentice classifier in the apprentice classifier training process using the second set of sensor data as input and using the classification received from the tutor classifier as a ground-truth for the classification of the second set of sensor data.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: April 28, 2020
    Assignee: AXIS AB
    Inventors: Joacim Tullberg, Viktor Andersson
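The tutor/apprentice scheme is a distillation-style loop: the tutor's classification of the first sensor set serves as ground truth for training the apprentice on the time-aligned second set. A toy sketch with scalar "sensor data" and a nearest-centroid learner standing in for the apprentice (all names invented):

```python
def train_apprentice(first_set, second_set, tutor):
    """Label the first sensor set with the tutor classifier, then fit
    a nearest-centroid apprentice on the corresponding second-set
    samples using those labels as ground truth."""
    labels = [tutor(x) for x in first_set]
    groups = {}
    for x, label in zip(second_set, labels):
        groups.setdefault(label, []).append(x)
    model = {label: sum(v) / len(v) for label, v in groups.items()}
    def apprentice(x):
        return min(model, key=lambda label: abs(x - model[label]))
    return apprentice
```

The point of the two sensor sets is that the apprentice can learn from a cheaper or different modality than the one the tutor needs.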
  • Patent number: 10635938
    Abstract: A method for training a main CNN by using a virtual image and a style-transformed real image is provided. The method includes steps of: (a) a learning device acquiring first training images; and (b) the learning device performing a process of instructing the main CNN to generate first estimated autonomous driving source information, instructing the main CNN to generate first main losses and perform backpropagation by using the first main losses, to thereby learn parameters of the main CNN, and a process of instructing a supporting CNN to generate second training images, instructing the main CNN to generate second estimated autonomous driving source information, instructing the main CNN to generate second main losses and perform backpropagation by using the second main losses, to thereby learn parameters of the main CNN.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10635939
    Abstract: An exemplary system, method, and computer-accessible medium can include, for example, receiving an original dataset(s), receiving a synthetic dataset(s), training a model(s) using the original dataset(s) and the synthetic dataset(s), and evaluating the synthetic dataset(s) based on the training of the model(s). The model(s) can include a first model and a second model, and the first model can be trained using the original dataset(s) and the second model can be trained using the synthetic dataset(s). The synthetic dataset(s) can be evaluated by comparing first results from the training of the first model to second results from the training of the second model.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: April 28, 2020
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Mark Watson, Fardin Abdi Taghi Abad, Anh Truong, Kenneth Taylor, Reza Farivar, Jeremy Goodsitt, Austin Walters, Vincent Pham
  • Patent number: 10635940
    Abstract: The present disclosure relates to systems and methods for training image recognition models. The system may include a processor in communication with a client device, and a storage medium storing instructions that, when executed, cause the processor to perform operations including storing, in a database, a plurality of images depicting vehicles; determining, for a first subset of the images, that additional images of the vehicles depicted in the first subset are desired; determining, for a second subset of the images, that additional images of the vehicles depicted in the second subset are not desired; identifying, within the first subset, images suitable for training an image recognition model; assigning a classification to the suitable images; determining, based on the classification, whether a threshold number of images exist in the database; and initiating an update of the image recognition model, based on the determination of whether the threshold number of images exists.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: April 28, 2020
    Assignee: Capital One Services, LLC
    Inventors: Chi-san Ho, Micah Price, Sunil Subrahmanyam Vasisht, Aamer Charania
  • Patent number: 10635941
    Abstract: A method for on-device continual learning of a neural network which analyzes input data is provided for smartphones, drones, vessels, or military purposes. The method includes steps of: a learning device, (a) uniform-sampling new data to have a first volume, instructing a boosting network to convert a k-dimension random vector into a k-dimension modified vector, instructing an original data generator network to repeat outputting synthetic previous data of a second volume corresponding to the k-dimension modified vector and previous data having been used for learning, and generating a batch for a current-learning; and (b) instructing the neural network to generate output information corresponding to the batch. The method can be used for preventing catastrophic forgetting and an invasion of privacy, and for optimizing resources such as storage and sampling processes for training images. Further, the method can be performed through learning for generative adversarial networks (GANs).
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
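    The batch construction in step (a) can be sketched as follows: new data is uniform-sampled to a first volume, while a generator network (stood in for here by a stub function) synthesizes "previous" data of a second volume from k-dimension random vectors, and the two parts are concatenated into one batch. All names and volumes are illustrative assumptions, not the patented implementation.

    ```python
    # Illustrative replay-batch construction for continual learning.
    import random

    def make_replay_batch(new_data, first_volume, generator, second_volume, k=8):
        new_part = random.sample(new_data, first_volume)          # uniform sampling
        z = [[random.gauss(0, 1) for _ in range(k)] for _ in range(second_volume)]
        replay_part = [generator(v) for v in z]                   # synthetic previous data
        return new_part + replay_part

    # Stub generator: maps a k-dim vector to a fake "previous" sample.
    stub_gen = lambda z: {"synthetic": True, "z": z}
    batch = make_replay_batch(list(range(50)), 10, stub_gen, 6)
    print(len(batch))  # 16
    ```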
  • Patent number: 10635942
    Abstract: Provided are a method and apparatus for identifying a product. The method includes: acquiring an image of the product; performing multilevel detection on the image to determine a label region of the product, where the image region corresponding to a previous level of detection is larger than the image region corresponding to a following level of detection; and identifying information in the label region to determine information of the product. A product can be identified automatically by acquiring the image of the product and performing multilevel detection on the image, thereby improving efficiency, enabling a large number of products to be handled, and reducing cost.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: April 28, 2020
    Assignee: Shenzhen Malong Technologies Co., Ltd.
    Inventors: Matthew Robert Scott, Dinglong Huang, Kai Fu
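    The multilevel detection can be sketched schematically: each level searches within the region found by the previous level, so regions shrink monotonically until the label region is isolated. The detectors below are stub functions returning (x, y, w, h) boxes; the real detectors in the patent are not specified here.

    ```python
    # Schematic multilevel detection: each level narrows the region.
    def multilevel_detect(image_region, detectors):
        region = image_region
        for detect in detectors:
            region = detect(region)       # each level searches inside the last
        return region

    # Stub levels: whole image -> product box -> label area within it.
    level1 = lambda r: (r[0] + 10, r[1] + 10, r[2] - 20, r[3] - 20)
    level2 = lambda r: (r[0] + 5, r[1] + 5, 40, 20)
    label_region = multilevel_detect((0, 0, 200, 100), [level1, level2])
    print(label_region)  # (15, 15, 40, 20)
    ```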
  • Patent number: 10635943
    Abstract: Methods and systems are provided for reducing noise in medical images with deep neural networks. In one embodiment, a method for training a neural network comprises transforming each of a plurality of initial image data sets not acquired by a medical imaging modality into a target image data set, wherein each target image data set is in a format specific to the medical imaging modality, corrupting each target image data set to generate a corrupted image data set, and training the neural network to map each corrupted image data set to the corresponding target image data set. In this way, the high-resolution of digital non-medical photographs or images can be leveraged for the enhancement or correction of medical images, and the trained neural network can be used to reduce noise and image artifacts in medical images acquired by the medical imaging modality.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: April 28, 2020
    Assignee: General Electric Company
    Inventors: Robert Marc Lebel, Dawei Gui, Graeme Colin McKinnon
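    The training-pair construction can be sketched minimally: target images are corrupted (here with additive Gaussian noise, a common but assumed corruption model) and the network would then be trained to map each corrupted image back to its clean target. The modality-specific transform step is omitted.

    ```python
    # Minimal sketch of corrupted/target training-pair generation.
    import numpy as np

    def corrupt(target, noise_sigma=0.1, rng=None):
        rng = rng or np.random.default_rng(0)
        return target + rng.normal(0.0, noise_sigma, size=target.shape)

    def make_training_pairs(targets, noise_sigma=0.1):
        rng = np.random.default_rng(0)
        return [(corrupt(t, noise_sigma, rng), t) for t in targets]

    targets = [np.zeros((4, 4)), np.ones((4, 4))]
    pairs = make_training_pairs(targets)
    corrupted, clean = pairs[0]
    print(corrupted.shape)  # (4, 4)
    ```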
  • Patent number: 10635944
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an object representation neural network. One of the methods includes obtaining training sets of images, each training set comprising: (i) a before image of a before scene of the environment, (ii) an after image of an after scene of the environment after the robot has removed a particular object, and (iii) an object image of the particular object, and training the object representation neural network on the batch of training data, comprising determining an update to the object representation parameters that encourages the vector embedding of the particular object in each training set to be closer to a difference between (i) the vector embedding of the after scene in the training set and (ii) the vector embedding of the before scene in the training set.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: April 28, 2020
    Assignee: Google LLC
    Inventors: Eric Victor Jang, Sergey Vladimir Levine, Coline Manon Devin
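    The training signal in the abstract can be sketched as a squared-distance loss: the embedding of the removed object is pushed toward the difference between the after-scene and before-scene embeddings. Plain NumPy vectors stand in for the network outputs; everything else about the network is omitted.

    ```python
    # Sketch of the object-embedding consistency loss.
    import numpy as np

    def object_embedding_loss(e_before, e_after, e_object):
        """Squared distance between the object embedding and the
        scene-difference embedding."""
        diff = e_after - e_before
        return float(np.sum((e_object - diff) ** 2))

    e_before = np.array([1.0, 2.0, 3.0])
    e_after = np.array([0.5, 1.0, 2.0])
    e_object = e_after - e_before  # a perfectly consistent object embedding
    print(object_embedding_loss(e_before, e_after, e_object))  # 0.0
    ```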
  • Patent number: 10635945
    Abstract: Automated evaluation and extraction of information from piping and instrumentation diagrams (P&IDs). Aspects of the systems and methods utilize machine learning and image processing techniques to extract relevant information, such as tag names, tag numbers, and symbols, and their positions, from P&IDs. Further aspects feed errors back to a machine learning system to update its learning and improve operation of the systems and methods.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: April 28, 2020
    Assignee: Schneider Electric Systems USA, Inc.
    Inventors: Bhaskar Sinha, Ashish Patil, Amitabha Bhattacharyya, Venkatesh Jagannath, Sameer Kondejkar
  • Patent number: 10635946
    Abstract: The present application provides an eyeglass positioning method. The method includes: acquiring a real-time image shot by a shooting apparatus, and extracting a real-time face image from the real-time image using a face recognition algorithm; recognizing whether the real-time face image includes eyeglasses using a predetermined first classifier, and outputting a recognition result; and positioning the eyeglasses in the real-time face image using a predetermined second classifier and outputting a positioning result when the recognition result is that the real-time face image includes the eyeglasses. The present application also provides an electronic apparatus and a computer readable storage medium. The present application adopts two classifiers to detect images in eyeglass regions in the face images, thereby enhancing precision and accuracy of eyeglass detection.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: April 28, 2020
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventor: Lei Dai
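    The two-stage flow can be sketched with stub classifiers: the first classifier answers whether the real-time face image contains eyeglasses, and the second runs only when it does, returning a position. Both classifiers below are illustrative stand-ins, not the patented classifiers.

    ```python
    # Two-classifier pipeline: recognition first, positioning second.
    def position_eyeglasses(face_image, has_glasses_clf, locate_clf):
        if not has_glasses_clf(face_image):
            return {"recognition": "no eyeglasses"}
        return {"recognition": "eyeglasses", "position": locate_clf(face_image)}

    # Stubs: a dict flag stands in for the image; a fixed box for the locator.
    has_glasses = lambda img: img.get("glasses", False)
    locate = lambda img: (30, 40, 60, 20)          # stub bounding box
    print(position_eyeglasses({"glasses": True}, has_glasses, locate))
    ```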
  • Patent number: 10635947
    Abstract: A computer trains a classification model. (A) An estimation vector is computed for each observation vector using a weight value, a mean vector, and a covariance matrix. The estimation vector includes a probability value for each class of a plurality of classes for each observation vector that indicates a likelihood that each observation vector is associated with each class. A subset of the plurality of observation vectors has a predefined class assignment. (B) The weight value is updated using the computed estimation vector. (C) The mean vector for each class is updated using the computed estimation vector. (D) The covariance matrix for each class is updated using the computed estimation vector. (E) A convergence parameter value is computed. (F) A classification model is trained by repeating (A) to (E) until the computed convergence parameter value indicates the mean vector for each class of the plurality of classes is converged.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: April 28, 2020
    Assignee: SAS Institute Inc.
    Inventors: Xu Chen, Yingjian Wang, Saratendu Sethi
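    Steps (A) through (F) of the abstract follow the shape of semi-supervised expectation-maximization for a Gaussian mixture, which can be condensed as below: responsibilities (the estimation vectors) are computed from the current weights, means, and covariances, those parameters are updated, and the cycle repeats until the means converge. Observations with a predefined class assignment keep a fixed responsibility. This is a standard EM sketch, not the patented algorithm itself.

    ```python
    # Condensed semi-supervised EM sketch mirroring steps (A)-(F).
    import numpy as np

    def gaussian_pdf(x, mean, cov):
        d = x.shape[0]
        diff = x - mean
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

    def em_step(X, labels, weights, means, covs):
        n, k = X.shape[0], len(weights)
        resp = np.zeros((n, k))                               # step (A)
        for i in range(n):
            if labels[i] is not None:        # predefined class assignment
                resp[i, labels[i]] = 1.0
            else:
                p = np.array([weights[j] * gaussian_pdf(X[i], means[j], covs[j])
                              for j in range(k)])
                resp[i] = p / p.sum()
        nk = resp.sum(axis=0)
        weights = nk / n                                      # step (B)
        means = [(resp[:, j] @ X) / nk[j] for j in range(k)]  # step (C)
        covs = [sum(resp[i, j] * np.outer(X[i] - means[j], X[i] - means[j])
                    for i in range(n)) / nk[j] + 1e-6 * np.eye(X.shape[1])
                for j in range(k)]                            # step (D)
        return weights, means, covs

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
    labels = [0] + [None] * 19 + [1] + [None] * 19   # small labeled subset
    w, m, c = np.array([0.5, 0.5]), [X[0], X[20]], [np.eye(2), np.eye(2)]
    for _ in range(10):                               # steps (E)-(F): iterate
        w, m, c = em_step(X, labels, w, m, c)
    print(np.round(m[0], 1), np.round(m[1], 1))
    ```

    Convergence checking is reduced here to a fixed iteration count; the patent's convergence parameter would replace the `range(10)` loop.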
  • Patent number: 10635948
    Abstract: A method for finding one or more candidate digital images being likely candidates for depicting a specific object comprising: receiving an object digital image depicting the specific object; determining, using a classification subnet of a convolutional neural network, a class for the specific object depicted in the object digital image; selecting, based on the determined class for the specific object depicted in the object digital image, a feature vector generating subnet from a plurality of feature vector generating subnets; determining, by the selected feature vector generating subnet, a feature vector of the specific object depicted in the object digital image; locating one or more candidate digital images being likely candidates for depicting the specific object depicted in the object digital image by comparing the determined feature vector and feature vectors registered in a database, wherein each registered feature vector is associated with a digital image.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: April 28, 2020
    Assignee: Axis AB
    Inventors: Niclas Danielsson, Simon Molin, Markus Skans, Jakob Grundström
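    The lookup flow can be sketched as a toy: a class is determined for the query object, a class-specific feature extractor is selected, and candidates are found by comparing the query feature vector against registered vectors. The extractors here are stand-in projections and all names are assumptions; the patent's subnets are full CNNs.

    ```python
    # Toy class-conditional feature search against a registered database.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def find_candidates(query_vec, database, top_k=2):
        """database: list of (image_id, feature_vector) pairs."""
        scored = sorted(database, key=lambda e: cosine(query_vec, e[1]),
                        reverse=True)
        return [image_id for image_id, _ in scored[:top_k]]

    # Per-class feature-extractor subnets, stubbed as fixed projections.
    extractors = {"car": lambda x: x[:3], "face": lambda x: x[3:]}

    raw = np.array([1.0, 0.0, 0.0, 0.2, 0.9, 0.1])
    cls = "car"                                  # from the classification subnet
    query = extractors[cls](raw)
    db = [("img_a", np.array([0.9, 0.1, 0.0])),
          ("img_b", np.array([0.0, 1.0, 0.0]))]
    print(find_candidates(query, db, top_k=1))   # ['img_a']
    ```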
  • Patent number: 10635949
    Abstract: A system and method enable semantic comparisons to be made between word images and concepts. Training word images and their concept labels are used to learn parameters of a neural network for embedding word images and concepts in a semantic subspace in which comparisons can be made between word images and concepts without the need for transcribing the text content of the word image. The training of the neural network aims to minimize a ranking loss over the training set, where non-relevant concepts for an image that are ranked more highly than relevant ones penalize the ranking loss.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: April 28, 2020
    Assignee: XEROX CORPORATION
    Inventors: Albert Gordo Soldevila, Jon Almazán Almazán, Naila Murray, Florent C. Perronnin
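    A hinge-style ranking loss in the spirit of the abstract can be sketched as follows: for a word-image embedding, any non-relevant concept scored above (or within a margin of) a relevant one contributes to the loss. Plain vectors stand in for the learned embeddings; the margin and concept names are assumptions.

    ```python
    # Illustrative margin-based ranking loss over concept scores.
    import numpy as np

    def ranking_loss(image_emb, concept_embs, relevant, margin=0.1):
        scores = {c: float(image_emb @ v) for c, v in concept_embs.items()}
        loss = 0.0
        for pos in relevant:
            for neg in concept_embs:
                if neg in relevant:
                    continue
                # penalize non-relevant concepts ranked above relevant ones
                loss += max(0.0, margin + scores[neg] - scores[pos])
        return loss

    img = np.array([1.0, 0.0])
    concepts = {"dog": np.array([0.9, 0.1]),    # relevant, scores high
                "car": np.array([0.0, 1.0])}    # non-relevant, scores low
    print(ranking_loss(img, concepts, relevant={"dog"}))  # 0.0
    ```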
  • Patent number: 10635950
    Abstract: A surveillance system is provided that includes a device configured to capture a video sequence, formed from a set of unlabeled testing video frames, of a target area. The surveillance system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, at least one object in the target area. A display device displays the recognized objects.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: April 28, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Patent number: 10635951
    Abstract: A computer-implemented method includes obtaining a trained convolutional neural network comprising one or more convolutional layers, each of the one or more convolutional layers comprising a plurality of filters with known filter parameters; pre-computing a reusable factor for each of the one or more convolutional layers based on the known filter parameters of the trained convolutional neural network; receiving input data to the trained convolutional neural network; computing an output of each of the one or more convolutional layers using a Winograd convolutional operator based on the pre-computed reusable factor and the input data; and determining output data of the trained convolutional network based on the output of each of the one or more convolutional layers.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: April 28, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Yongchao Liu, Qiyin Huang, Guozhen Pan, Sizhong Li, Jianguo Xu, Haitao Zhang, Lin Wang
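    The idea of the reusable factor can be shown with the standard 1-D Winograd F(2,3) transform: the filter-side factor G·g depends only on the known filter weights, so it can be precomputed once and reused across all input tiles. This is the textbook minimal-filtering form, not the patent's 2-D, multi-layer machinery.

    ```python
    # Minimal 1-D Winograd F(2,3): two outputs of a 3-tap correlation.
    import numpy as np

    G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]])
    Bt = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]])
    At = np.array([[1, 1, 1, 0], [0, 1, -1, -1]])

    def precompute_filter_factor(g):
        return G @ g                  # reusable: depends only on the filter

    def winograd_f23(d_tile, u):
        """Two outputs of a 3-tap correlation over a 4-sample input tile."""
        v = Bt @ d_tile               # input transform
        return At @ (u * v)           # elementwise product + output transform

    g = np.array([1.0, 2.0, 3.0])
    u = precompute_filter_factor(g)   # computed once, outside the loop
    y = winograd_f23(np.array([1.0, 2.0, 3.0, 4.0]), u)
    print(y)                          # [14. 20.] == direct correlation
    ```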
  • Patent number: 10635952
    Abstract: As disclosed, f-scores can be generated for apparel items. Training images are identified, where each training image is associated with a corresponding set of tags including information about a plurality of attributes. A first convolutional neural network (CNN) is trained based on the plurality of training images and a first attribute. The first CNN is iteratively refined by, for each respective attribute, removing a set of neurons from the first CNN and retraining the first CNN based on the training images and the respective attribute. Upon determining that the first CNN has been trained based on each of the attributes, one or more CNNs are generated based on the first CNN. An image is received, where the image depicts an apparel item. The image is processed using the one or more CNNs, and an f-score for the apparel item is determined based on the output.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: April 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Mohit Sewak, Karthik P. Hariharan, Irina Fedulova
  • Patent number: 10635953
    Abstract: A card feed-out device may include a card housing; a feed-out claw; a claw feed mechanism; a gate member; and a gate moving mechanism. A front opening is formed in a lower end of a front surface of the card housing. A lower opening is formed in a front end of a lower surface portion of the card housing. The gate member may include a front surface and a bottom surface. The gate may be formed between a lower end surface of the front surface and a top surface of the bottom surface. During standby, the gate member is at a retracted position. When the first card is fed out, the gate member moves until at least a part of the bottom surface passes through an upper end of the lower opening, and a lower surface of the gate is disposed above the top surface of the housing bottom surface.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: April 28, 2020
    Assignee: NIDEC SANKYO CORPORATION
    Inventor: Keiji Ohta
  • Patent number: 10635954
    Abstract: A printer includes a plurality of dot clock signal generators, each of which generates dot clock signals for operating ejectors in a printhead in a color station of the printer. The dot clock signal generators are connected in a chain that corresponds to the process direction of the color stations in the printer. Each dot clock signal generator is configured to count dot clock signals of a preceding dot clock signal generator in the chain or a reference clock generator to determine when the dot clock signal generator turns on and to count encoder signals corresponding to movement of a substrate through the printer to determine when to generate dot clock signals for a color station.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: April 28, 2020
    Assignee: Xerox Corporation
    Inventors: Patricia J. Donaldson, Michael B. Monahan
  • Patent number: 10635955
    Abstract: An image forming method uses a memory, forms an image on a print medium using color materials of L colors, and includes: setting N groups associated for respective N objects including a plurality of scanning lines and reproduced with the color materials of the L colors; calculating a relative address that stores tone data representing a tone of one color material among the color materials of the L colors based on coordinates of a plurality of pixels forming the plurality of scanning lines to calculate L addresses separated by shifting the relative address using a predetermined offset address; and transmitting and receiving L tone data representing tones of respective L color materials for reproducing colors of the plurality of pixels forming the scanning lines by using the L addresses to/from the memory via L channels among M (M is an integer larger than N) communication channels.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: April 28, 2020
    Assignee: Kyocera Document Solutions Inc.
    Inventors: Masayoshi Nakamura, Dongpei Su
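    The address arithmetic in the abstract can be sketched directly: a relative address derived from pixel coordinates is shifted by a fixed offset per color material to yield L separate addresses. The concrete address and offset values below are illustrative assumptions.

    ```python
    # L tone-data addresses from one relative address plus fixed offsets.
    def color_addresses(relative_address, offset_address, L):
        return [relative_address + i * offset_address for i in range(L)]

    # Usage: L = 4 color materials, one address per material.
    print(color_addresses(0x1000, 0x0400, 4))  # [4096, 5120, 6144, 7168]
    ```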
  • Patent number: 10635956
    Abstract: A system for foreign material accountability includes a kiosk having a touch-enabled display screen that is back-lit and automatically adjusts brightness based on the ambient environment. The kiosk further includes a user input device that includes a pin pad; one or more sensors selected from a group consisting of temperature sensors, RFID sensors, IR sensors, optical sensors, iris sensors, and one or more cameras; a processor; a data bus coupled to the processor; and a computer-usable medium embodying computer code operating on the kiosk. The computer code includes programmed instructions executable by the processor to control accessibility of items into a restricted area based on data received from at least one of the display screen operated by a user, the user input device, and the one or more sensors.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: April 28, 2020
    Assignee: Access Solutions, LLC
    Inventors: Kelvin D. Mann, Eric Bergstrom, David Hansen, Nikolas Tripp, Nathan Smith, Stephen Lauser, Matthew Montgomery
  • Patent number: 10635957
    Abstract: The present invention provides systems and methods capable of collecting and analyzing a multi-field two-dimensional code. A system in accordance with one embodiment of the present invention comprises at least one mobile terminal, a communication network, and at least one server, wherein the at least one mobile terminal and the at least one server are both coupled to the communication network. The mobile terminal comprises a collecting module for collecting the multi-field two-dimensional code in an optical manner; a decoding module for decoding the collected multi-field two-dimensional code; and an identifying module for identifying the multiple fields by applying a predetermined rule. The server comprises a memory and an analyzing module for analyzing the multi-field two-dimensional code. Other embodiments are also described.
    Type: Grant
    Filed: December 18, 2007
    Date of Patent: April 28, 2020
    Assignee: Gmedia Corporation
    Inventors: Wei Shen, Kaijun Cao, Hongqiang Zhang
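    The identifying step can be sketched as a toy: after the two-dimensional code has been decoded to a payload string, a predetermined rule (here, a simple delimiter convention) splits it into named fields. The field names and delimiter are illustrative assumptions, not the patent's rule.

    ```python
    # Toy field identification for a decoded multi-field payload.
    def identify_fields(payload, rule=("|", ["vendor", "product", "batch"])):
        delimiter, names = rule
        parts = payload.split(delimiter)
        if len(parts) != len(names):
            raise ValueError("payload does not match the predetermined rule")
        return dict(zip(names, parts))

    fields = identify_fields("ACME|X-100|B42")
    print(fields)  # {'vendor': 'ACME', 'product': 'X-100', 'batch': 'B42'}
    ```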