Patents Examined by Shervin K. Nakhjavan
-
Patent number: 10902562
Abstract: A system for filtering an input image dataset is disclosed. The system receives a plurality of sequences of input signal values, wherein each sequence corresponds to a different attribute and each input signal value is associated with an attribute and a sampling point i. At least one processor is configured to compute an output signal value corresponding to a sampling point and an attribute. For a particular sampling point i and for each of a plurality of different attributes of the set of attributes, a weight is associated with an input signal value based on a similarity between signal values, for a plurality of the sampling points j excluding the sampling point i. The system then computes a weighted sum based on the input signal values and weights. The attribute is a location or a frequency.
Type: Grant
Filed: May 15, 2017
Date of Patent: January 26, 2021
Assignee: Stichting Katholieke Universiteit
Inventors: Ewoud Joris Smit, Wolfgang Mathias Prokop
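The core operation — a weighted sum over other sampling points j, with weights derived from signal similarity — resembles non-local filtering. A minimal 1D NumPy sketch of that idea (an illustration, not the patented system; the Gaussian similarity kernel and the `sigma` parameter are assumptions):

```python
import numpy as np

def similarity_weighted_filter(signal, sigma=0.5):
    """For each sampling point i, weight every other sample j by the
    similarity of their values, then output the normalized weighted sum."""
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        j = np.arange(n) != i                          # all sampling points j, excluding i
        diff = signal[j] - signal[i]
        w = np.exp(-(diff ** 2) / (2 * sigma ** 2))    # similarity-based weights
        out[i] = np.sum(w * signal[j]) / np.sum(w)     # normalized weighted sum
    return out

noisy = np.array([1.0, 1.1, 0.9, 5.0, 1.0, 1.05])
filtered = similarity_weighted_filter(noisy)
print(filtered)
```

Because the weights depend on value similarity rather than position alone, the outlier at index 3 is pulled toward the consensus of the similar samples.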
-
Patent number: 10902609
Abstract: A method for image processing includes a graphics processing unit (GPU) of a mobile device obtaining a first set of image data having a first pixel size and a first color format. The first set of image data is generated by an image sensor of the mobile device. The method further includes the GPU resampling the first set of image data to generate a second set of image data having a second pixel size, and reformatting the second set of image data to generate a third set of image data having a second color format. The third set of image data is used for tracking an object by the mobile device.
Type: Grant
Filed: December 5, 2018
Date of Patent: January 26, 2021
Assignee: SZ DJI OSMO TECHNOLOGY CO., LTD.
Inventors: Yan Wang, Bo Zang, Chenyu Xiang, Dicong Qiu
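The resample-then-reformat pipeline can be sketched in NumPy (an illustrative stand-in for the GPU implementation; block-averaging and BT.601 luma weights are assumptions, not details from the patent):

```python
import numpy as np

def resample_and_reformat(rgb, factor=2):
    """Downsample an HxWx3 RGB frame by block-averaging (the resampling
    step), then convert to single-channel luma (the color-format step)."""
    h, w, _ = rgb.shape
    h2, w2 = h // factor, w // factor
    # Resample: average each factor x factor block.
    small = rgb[:h2 * factor, :w2 * factor] \
        .reshape(h2, factor, w2, factor, 3).mean(axis=(1, 3))
    # Reformat: BT.601 luma weights, RGB -> grayscale.
    return small @ np.array([0.299, 0.587, 0.114])

frame = np.random.rand(8, 8, 3)           # stand-in for sensor image data
gray_small = resample_and_reformat(frame)
print(gray_small.shape)  # (4, 4)
```

The smaller single-channel output is the kind of reduced representation a tracker would consume.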
-
Patent number: 10890576
Abstract: There is provided an image processing device including an input receiver for inputting an image obtained by photographing a specimen subjected to staining, and a hardware processor. The hardware processor extracts a stained region from the image as a cell region, and extracts, as a candidate region, a region that is surrounded by the cell region but is not stained. The hardware processor further extracts a feature amount of the candidate region, determines on the basis of the feature amount whether or not the candidate region is a cell region, and corrects a candidate region determined to be a cell region so that it is treated as part of the cell region.
Type: Grant
Filed: May 17, 2017
Date of Patent: January 12, 2021
Assignee: Konica Minolta, Inc.
Inventor: Yusuke Mimura
-
Patent number: 10884156
Abstract: The present disclosure provides an image processing method, device, and computer-readable storage medium in the field of image processing technology. The method includes: acquiring a first undersampled image to be processed; and reconstructing the first undersampled image into a corresponding first original image according to a mapping relationship between an undersampled image and a normally sampled original image, wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and a normally sampled second original image corresponding to the second undersampled image as training samples.
Type: Grant
Filed: December 26, 2018
Date of Patent: January 5, 2021
Inventors: Qi Wang, Bicheng Liu, Guangming Xu
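As a toy illustration of learning a mapping from undersampled to normally sampled data, a linear least-squares model can stand in for the abstract's machine-learning model (the 1D random-walk signals and every-other-sample undersampling scheme are assumptions made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample(x):
    """Keep every other sample along the last axis; zero-fill the rest."""
    y = np.zeros_like(x)
    y[..., ::2] = x[..., ::2]
    return y

# "Normally sampled" training signals: smooth 1D random walks.
train_full = np.cumsum(rng.standard_normal((200, 16)), axis=1)
train_under = undersample(train_full)

# Learn a linear mapping W (undersampled -> original) by least squares;
# this stands in for the trained machine-learning model.
W, *_ = np.linalg.lstsq(train_under, train_full, rcond=None)

# Reconstruct a new undersampled signal with the learned mapping.
test_full = np.cumsum(rng.standard_normal(16))
recon = undersample(test_full) @ W
print(recon.shape)  # (16,)
```

Because neighboring samples of the training signals are correlated, the learned mapping effectively interpolates the missing samples, giving a lower error than leaving them zero.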
-
Patent number: 10885403
Abstract: Visibility of a license plate and color reproducibility of a vehicle body are improved in a monitoring camera. A vehicle body area detection unit detects a vehicle body area of a vehicle from an image signal. A license plate area detection unit detects a license plate area of the vehicle from the image signal. A vehicle body area image processing unit performs processing of the image signal corresponding to the detected vehicle body area. A license plate area image processing unit performs, on the image signal corresponding to the detected license plate area, processing different from the processing applied to the vehicle body area. A synthesis unit synthesizes the processed image signal corresponding to the vehicle body area and the processed image signal corresponding to the license plate area.
Type: Grant
Filed: January 11, 2019
Date of Patent: January 5, 2021
Assignee: Sony Semiconductor Solutions Corporation
Inventor: Kazuhiro Hoshino
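The per-region processing and synthesis steps can be sketched as mask-based compositing (illustrative only; the contrast-stretch and gamma operations are assumed placeholders for the actual plate and body processing):

```python
import numpy as np

def monitor_frame(frame, plate_mask):
    """Process the plate region for visibility (contrast stretch) and the
    body region for color reproduction (gamma tweak), then synthesize the
    two processed signals using the detected plate mask."""
    plate = np.clip((frame - 0.3) / 0.4, 0.0, 1.0)   # stretch mid-tones for legibility
    body = frame ** 0.8                              # gentle gamma for color rendering
    return np.where(plate_mask[..., None], plate, body)

frame = np.random.rand(4, 6, 3)        # stand-in image signal, values in [0, 1]
mask = np.zeros((4, 6), dtype=bool)
mask[1:3, 2:5] = True                  # detected license-plate area
out = monitor_frame(frame, mask)
print(out.shape)  # (4, 6, 3)
```

`np.where` on the mask plays the role of the synthesis unit, stitching the two differently processed signals into one frame.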
-
Patent number: 10885330
Abstract: An image processing apparatus capable of easily generating a two-dimensional panoramic image at high speed from a plurality of three-dimensional images includes an acquisition unit configured to acquire a generation condition of a first en-face image generated from a first three-dimensional image of a target eye, a first generation unit configured to generate a second en-face image from a second three-dimensional image of the target eye by applying the generation condition acquired by the acquisition unit to the second three-dimensional image, and a second generation unit configured to generate a combined image by combining the first en-face image with the second en-face image.
Type: Grant
Filed: August 31, 2018
Date of Patent: January 5, 2021
Assignee: CANON KABUSHIKI KAISHA
Inventors: Daisuke Kawase, Hiroki Uchida, Osamu Sagano
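Applying one generation condition to two volumes and combining the resulting en-face images might look like this in NumPy (a sketch, assuming the condition is a depth range plus a projection operator, which the abstract does not specify):

```python
import numpy as np

def en_face(volume, z_range, reducer=np.mean):
    """Generate a 2D en-face image from a 3D volume by projecting the
    slab volume[z0:z1] along depth; (z_range, reducer) stands in for
    the 'generation condition'."""
    z0, z1 = z_range
    return reducer(volume[z0:z1], axis=0)

cond = ((10, 20), np.mean)              # condition acquired from the first image
vol_a = np.random.rand(32, 64, 64)      # first 3D image of the target eye
vol_b = np.random.rand(32, 64, 64)      # second 3D image of the target eye
face_a = en_face(vol_a, *cond)
face_b = en_face(vol_b, *cond)          # same condition applied to the second volume
panorama = np.hstack([face_a, face_b])  # combined (panoramic) image
print(panorama.shape)  # (64, 128)
```

Reusing the acquired condition keeps the two en-face images consistent, which is what makes the combined image coherent.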
-
Patent number: 10878284
Abstract: A method for training an image model performs, in each round of training and with respect to each sample image: inputting an image obtained by cropping the sample image with an object extraction component obtained through a previous round of training, as a scale-adjusted sample image, into the image model, wherein the object extraction component is used for extracting concerned objects in sample images at respective scales; inputting a feature of the scale-adjusted sample image into a local classifier in the image model, performing category prediction with respect to feature points in the feature to obtain a local prediction result, and updating the object extraction component based on the local prediction result; performing object-level category prediction for the scale-adjusted sample image based on the feature and the updated object extraction component; and training the image model based on a category prediction result of the scale-adjusted sample image.
Type: Grant
Filed: March 12, 2019
Date of Patent: December 29, 2020
Assignee: FUJITSU LIMITED
Inventors: Wei Shen, Rujie Liu
-
Patent number: 10871425
Abstract: The subject disclosure presents systems and methods for improved meso-dissection of biological specimens and tissue slides, including importing one or more reference slides with annotations, using inter-marker registration algorithms to automatically map the annotations to an image of a milling slide, and dissecting the annotated tissue from the selected regions in the milling slide for analysis, while concurrently tracking the data and analysis using unique identifiers such as bar codes.
Type: Grant
Filed: July 28, 2017
Date of Patent: December 22, 2020
Assignee: ROCHE MOLECULAR SYSTEMS INC.
Inventors: Michael Barnes, Christophe Chefd'hotel, Srinivas Chukka, Mohammad Qadri
-
Patent number: 10867385
Abstract: Embodiments disclose a method and system for segmenting medical images. In certain embodiments, the system comprises a database configured to store a plurality of medical images acquired by an image acquisition device. The plurality of images includes at least one first medical image of an object and a second medical image of the object, each first medical image associated with a first structure label map. The system further comprises a processor configured to register the at least one first medical image to the second medical image, determine a classifier model using the registered first medical image and the corresponding first structure label map, and determine a second structure label map associated with the second medical image using the classifier model.
Type: Grant
Filed: November 27, 2018
Date of Patent: December 15, 2020
Assignee: Elekta, Inc.
Inventors: Lyndon Stanley Hibbard, Xiao Han
-
Patent number: 10867436
Abstract: There is provided a method of training a neural network for reconstruction of a 3D point cloud from 2D image(s), comprising: extracting point clouds, each represented by an ordered list of coordinates, from 3D anatomical images depicting a target anatomical structure; selecting one of the plurality of point clouds as a template; non-rigidly registering the template with each of the point clouds to compute a respective warped template having the shape of the respective point cloud and retaining the coordinate order of the template, wherein the warped templates are consistent in terms of coordinate order; receiving 2D anatomical images depicting the target anatomical structure depicted in corresponding 3D anatomical images; and training a neural network, according to a training dataset of the warped templates and corresponding 2D images, for mapping 2D anatomical image(s) into a 3D point cloud.
Type: Grant
Filed: April 18, 2019
Date of Patent: December 15, 2020
Assignee: Zebra Medical Vision Ltd.
Inventor: Amit Oved
-
Patent number: 10860901
Abstract: A detection system includes an image processing apparatus configured to discriminate, using a discriminator, whether or not a detection target is contained in an object, and an information processing apparatus configured to provide the discriminator to the image processing apparatus. The information processing apparatus includes an evaluation unit configured to evaluate, for each attribute, the discrimination precision of the discriminator before and after additional training, using evaluation data associated with each of a plurality of attributes of an object, when the discriminator is additionally trained using training data, and an output unit configured to output the discrimination precision for each attribute.
Type: Grant
Filed: September 11, 2018
Date of Patent: December 8, 2020
Assignee: OMRON Corporation
Inventor: Toshinori Tamai
-
Patent number: 10852838
Abstract: Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the set of images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
Type: Grant
Filed: November 26, 2018
Date of Patent: December 1, 2020
Assignee: MAGIC LEAP, INC.
Inventors: Gary R. Bradski, Samuel A. Miller, Rony Abovitz
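One plausible reading of the normalization step — zero-mean, unit-scale normalization of the extracted map points — can be sketched as follows (an assumption; the abstract does not define the normalization):

```python
import numpy as np

def normalize_points(points):
    """Translate a set of 3D map points to zero mean, then scale so the
    average distance from the origin is 1 (a common preconditioning step
    before pose estimation or mapping)."""
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).mean()
    return centered / scale

pts = np.random.rand(100, 3) * 5 + 2   # stand-in map points
norm = normalize_points(pts)
print(norm.mean(axis=0))               # approximately [0, 0, 0]
```

Normalizing like this makes downstream geometric estimates insensitive to where the points happen to sit in the world frame.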
-
Patent number: 10853675
Abstract: Embodiments of the present application disclose driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles. The driving state monitoring method includes: performing driver state detection on a driver image; and performing at least one of outputting a driving state monitoring result of a driver or performing intelligent driving control based on a result of the driver state detection. The embodiments of the present application can implement real-time monitoring of the driving state of a driver, so as to take corresponding measures in time when the driving state of the driver is poor, to ensure safe driving and avoid road traffic accidents.
Type: Grant
Filed: October 31, 2018
Date of Patent: December 1, 2020
Assignee: Beijing SenseTime Technology Development Co., Ltd.
Inventors: Fei Wang, Chen Qian
-
Patent number: 10832096
Abstract: A method can include learning a common embedding space and a set of parameters for each one of a plurality of sets of mixture models, wherein one mixture model is associated with one class of objects within a set of object categories. The method can also include adding new mixture models to the set of mixture models to support novel categories, based on a set of example embedding vectors computed for each one of the novel categories. Additionally, the method includes detecting in images a plurality of boxes with associated labels and corresponding confidence scores, wherein the boxes correspond to image regions comprising objects of both known categories and novel categories. Furthermore, the method includes, given a query image, executing an instruction based on the common embedding space and the set of mixture models, the instruction comprising identifying objects from both known and novel categories in the query image.
Type: Grant
Filed: January 7, 2019
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Leonid Karlinsky, Eliyahu Schwartz, Joseph Shtok, Mattias Marder, Sivan Harary
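The idea of adding novel categories from a few example embedding vectors can be sketched with a nearest-center classifier (a single center per class standing in for the per-class mixture models; the class names and vectors are illustrative):

```python
import numpy as np

class EmbeddingClassifier:
    """Toy stand-in for per-class mixture models in a common embedding
    space: each class is represented by the mean of its example
    embeddings, and novel classes are added without retraining."""
    def __init__(self):
        self.centers = {}

    def add_class(self, name, example_embeddings):
        # A new class costs only the mean of its example vectors.
        self.centers[name] = np.mean(example_embeddings, axis=0)

    def classify(self, embedding):
        # Score each class by (negative) distance to its center.
        scores = {c: -np.linalg.norm(embedding - mu)
                  for c, mu in self.centers.items()}
        return max(scores, key=scores.get)

clf = EmbeddingClassifier()
clf.add_class("cat", np.array([[1.0, 0.0], [0.9, 0.1]]))   # known category
clf.add_class("unicycle", np.array([[0.0, 1.0]]))          # novel category, one example
print(clf.classify(np.array([0.1, 0.9])))  # unicycle
```

A real system would use learned mixture components and calibrated confidences, but the key property — novel categories supported purely from example embeddings — is the same.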
-
Patent number: 10825180
Abstract: The present disclosure relates to a method for training a classifier. The method includes: acquiring an original image; determining a candidate target by segmenting the original image based on at least two segmentation models; determining a universal set of features by extracting features from the candidate target; determining a reference subset of features by selecting features from the universal set of features; and determining a classifier by performing classifier training based on the reference subset of features.
Type: Grant
Filed: July 28, 2017
Date of Patent: November 3, 2020
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Xiangjun Chen, Jiyong Wang
-
Patent number: 10825564
Abstract: A method and system may use computer vision techniques and machine learning analysis to automatically identify a user's biometric characteristics. A user's client computing device may capture a video of the user. Feature data and movement data may be extracted from the video and applied to statistical models for determining several biometric characteristics. The determined biometric characteristic values may be used to identify individual health scores, and the individual health scores may be combined to generate an overall health score and longevity metric. An indication of the user's biometric characteristics, which may include the overall health score and longevity metric, may be displayed on the user's client computing device.
Type: Grant
Filed: December 11, 2017
Date of Patent: November 3, 2020
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Dingchao Zhang, Michael Bernico, Peter Laube, Utku Pamuksuz, Jeffrey S. Myers, Marigona Bokshi-Drotar, Edward W. Breitweiser
-
Patent number: 10825191
Abstract: An assessment device generates, based on a captured image captured by an image capturing device mounted on a moving object, shape information on subjects included in the captured image. The assessment device acquires, by referring to a storage, shape information on a static object associated with the image capturing location. Based on the generated shape information and the acquired shape information, the assessment device identifies a dynamic object that is moving from among the subjects included in the captured image and conducts an assessment related to the dynamic object based on its location in the captured image.
Type: Grant
Filed: February 12, 2019
Date of Patent: November 3, 2020
Assignee: FUJITSU LIMITED
Inventors: Masahiro Kataoka, Junya Kato, Takuya Kozaki
-
Patent number: 10818028
Abstract: A computing system is configured to train an object classifier. Monocular image data and ground-truth data are received for a scene. Geometric context is determined, including a three-dimensional camera position relative to a fixed plane. Regions of interest (RoIs) and a set of potential occluders are identified within the image data. For each potential occluder, an occlusion zone is projected onto the fixed plane in three dimensions. A set of occluded RoIs on the fixed plane is generated for each occlusion zone. Each occluded RoI is projected back to the image data in two dimensions. The classifier is trained by minimizing a loss function generated by inputting information regarding the RoIs and the occluded RoIs into the classifier, and by minimizing location errors of each RoI and each occluded RoI on the fixed plane based on the ground-truth data. The trained classifier is then output for object detection.
Type: Grant
Filed: December 17, 2018
Date of Patent: October 27, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishani Chakraborty, Gang Hua
-
Patent number: 10817706
Abstract: Systems and methods are described that include receiving an indication of a scanned document from a scanning device, waiting a delay period in response to receiving the indication of the scanned document, and instructing an image capture device to capture an image of a document holder after waiting the delay period. In this manner, the described systems and methods may increase facial recognition throughput by waiting the delay period after receiving the indication of the scanned document, allowing sufficient time for the document holder to look up from the scanning device to the image capture device.
Type: Grant
Filed: June 5, 2018
Date of Patent: October 27, 2020
Assignee: Universal City Studios LLC
Inventors: Andrew Alexander Alvin, Preston Tyler Jordan
-
Patent number: 10813563
Abstract: Fluorescence-based tracking of a light-emitting marker in a bodily fluid stream is conducted by: providing a light-emitting marker into a fluid stream; establishing field-of-view monitoring by placement of a sensor, such as a high-speed camera, at a region of interest; recording image data of light emitted by the marker at the region of interest; determining time characteristics of the light output of the marker traversing the field of view; and calculating flow characteristics based on the time characteristics. Furthermore, a velocity vector map may be generated using a cross-correlation technique, leading- and falling-edge considerations, subtraction, and/or thresholding.
Type: Grant
Filed: January 5, 2018
Date of Patent: October 27, 2020
Assignee: SCINOVIA CORP.
Inventors: James Bradley Sund, Sr., David S. Cohen
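The cross-correlation technique for extracting a transit time (and hence a flow velocity) from two intensity traces can be sketched as follows (illustrative only; the sensor spacing, sampling rate, and Gaussian pulse shapes are assumptions):

```python
import numpy as np

def transit_velocity(upstream, downstream, dt, distance):
    """Estimate flow velocity from two fluorescence intensity traces
    recorded a known distance apart: the cross-correlation peak gives
    the marker's transit time between the two points."""
    a = upstream - upstream.mean()
    b = downstream - downstream.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)   # delay in samples
    return distance / (lag * dt)

t = np.arange(100)
pulse = np.exp(-0.5 * ((t - 30) / 3.0) ** 2)     # marker passes point A at sample 30
delayed = np.exp(-0.5 * ((t - 45) / 3.0) ** 2)   # ...and point B 15 samples later
v = transit_velocity(pulse, delayed, dt=0.01, distance=0.003)  # 3 mm apart, 100 Hz
print(f"{v:.3f} m/s")  # 0.020 m/s
```

Repeating this over many small windows of the image, rather than two point traces, is what would build up a velocity vector map.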