Patent Applications Published on July 25, 2019
-
Publication number: 20190228219
Abstract: Methods of recognizing motions of an object in a video clip or an image sequence are disclosed. A plurality of frames are selected out of a video clip or an image sequence of interest. A text category is associated with each frame by applying an image classification technique with a trained deep-learning model for a set of categories containing various poses of an object within each frame. A “super-character” is formed by embedding respective text categories of the frames as corresponding ideograms in a 2-D symbol having multiple ideograms contained therein. Particular motion of the object is recognized by obtaining the meaning of the “super-character” with image classification of the 2-D symbol via a trained convolutional neural network model for various motions of the object derived from specific sequential combinations of text categories. Ideograms may contain imagery data instead of text categories, e.g., detailed images or reduced-size images.
Type: Application
Filed: April 4, 2019
Publication date: July 25, 2019
Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
-
Publication number: 20190228220
Abstract: An apparatus of the invention determines whether or not new scanned image data is similar to past scanned image data based on character string areas and a table area extracted from the new scanned image data, specifies a character string area used to obtain information set to the past scanned image data determined to be similar, detects a target area as a processing target out of the character string areas extracted from the new scanned image data based on the specified character string area, the table included in the past scanned image data determined to be similar, and the table included in the new scanned image data, performs character recognition processing on the detected target area, and sets information to the new scanned image data by using a character obtained as a result of the character recognition processing.
Type: Application
Filed: January 15, 2019
Publication date: July 25, 2019
Inventor: Yoshitaka Matsumoto
-
Publication number: 20190228221
Abstract: A method for separating out a defect image from a thermogram sequence based on a weighted naive Bayesian classifier and dynamic multi-objective optimization. The method extracts features from the selected TTRs and classifies them into K categories based on their feature vectors through a weighted naive Bayesian classifier, which exploits the physical meaning contained in each TTR, makes the classification of TTRs more rational, and improves the accuracy of defect-image separation. Meanwhile, the multi-objective function not only fully considers the similarities between the RTTR and the other TTRs in the same category, but also considers the dissimilarities between the RTTR and the TTRs in other categories; the selected RTTR is therefore more representative, which guarantees the accuracy of describing the defect outline.
Type: Application
Filed: March 29, 2019
Publication date: July 25, 2019
Applicant: UNIVERSITY OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA
Inventors: Chun YIN, Yuhua CHENG, Ting XUE, Xuegang HUANG, Haonan ZHANG, Kai CHEN, Anhua SHI
-
Publication number: 20190228222
Abstract: Systems and methods are provided for assessing whether mobile deposit processing engines meet specified standards for mobile deposit of financial documents. A mobile deposit processing engine (MDE) is evaluated to determine if it can perform technical capabilities for improving the quality of and extracting content from an image of a financial document. A verification process then begins, where the MDE performs the image quality enhancements and text extraction steps on sets of images from a test deck. The results of the processing of the test deck are then evaluated by comparing confidence levels with thresholds to determine if each set of images should be accepted or rejected. Further analysis determines whether any of the sets of images were falsely accepted or rejected in error. An overall error rate is then compared with minimum accuracy criteria, and if the criteria are met, the MDE meets the standard for mobile deposit.
Type: Application
Filed: January 28, 2019
Publication date: July 25, 2019
Inventors: Grigori Nepomniachtchi, Mike Strange
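The accept/reject logic this abstract describes can be illustrated with a small sketch. The confidence threshold, the error-rate criterion, and the function names below are assumptions for illustration, not details from the patent:

```python
# Hypothetical sketch of the test-deck evaluation: accept an image set when
# its confidence meets a threshold, count false accepts/rejects against the
# test deck's known labels, then compare the error rate with a criterion.
CONFIDENCE_THRESHOLD = 0.8   # assumed accept/reject cutoff
MAX_ERROR_RATE = 0.05        # assumed minimum-accuracy criterion

def evaluate_image_set(confidence, ground_truth_valid):
    """Return (accepted, was_error) for one set of images."""
    accepted = confidence >= CONFIDENCE_THRESHOLD
    false_accept = accepted and not ground_truth_valid
    false_reject = (not accepted) and ground_truth_valid
    return accepted, false_accept or false_reject

def meets_standard(test_deck):
    """test_deck: list of (confidence, ground_truth_valid) pairs."""
    errors = sum(1 for conf, valid in test_deck
                 if evaluate_image_set(conf, valid)[1])
    return errors / len(test_deck) <= MAX_ERROR_RATE
```

An engine passing this sketch would, under the assumed criterion, mis-handle at most 5% of the test deck.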
-
Publication number: 20190228223
Abstract: A method and system for generating a map identifying the size and location of anomalous crop health patterns of a geographic area. Predictive crop health forecasting based on historical crop health images generates expected crop health images. Statistical parametric mapping is used to model differences in the expected crop health images and current crop health images to generate a statistical parametric map. Regions of anomalous crop health based on the modeled differences are identified in the statistical parametric map. The number of the identified anomalous crop health regions and the size of each of the identified anomalous crop health regions are determined. The statistical significance of the size and number of the anomalous crop health regions relative to the expected crop health is quantified. A map of anomalous crop health patterns delineates the anomalous crop health regions and the statistical significance of the size and number of anomalous crop health regions.
Type: Application
Filed: January 25, 2018
Publication date: July 25, 2019
Inventors: Sean A. McKenna, Beat Buesser, Seshu Tirupathi
-
Publication number: 20190228224
Abstract: In embodiments, obtaining a plurality of image sets associated with a geographical region and a time period, wherein each image set of the plurality of image sets comprises multi-spectral and time series images that depict a respective particular portion of the geographical region during the time period, and predicting one or more crop types growing in each of particular locations within the particular portion of the geographical region associated with an image set of the plurality of image sets. Determining a crop type classification for each of the particular locations based on the predicted one or more crop types for the respective particular locations, and generating a crop indicative image comprising at least one image of the multi-spectral and time series images of the image set overlaid with indications of the crop type classification determined for the respective particular locations.
Type: Application
Filed: December 12, 2018
Publication date: July 25, 2019
Applicant: X Development LLC
Inventors: Cheng-en Guo, Jie Yang, Elliott Grant
-
Publication number: 20190228225
Abstract: In embodiments, obtaining a plurality of image sets associated with a geographical region and a time period, wherein each image set of the plurality of image sets comprises multi-spectral and time series images that depict a respective particular portion of the geographical region during the time period, and predicting presence of a crop at particular locations within the particular portion of the geographical region associated with an image set of the plurality of image sets. Determining crop boundary locations within the particular portion of the geographical region based on the predicted presence of the crop at the particular locations, and generating a crop indicative image comprising at least one image of the multi-spectral and time series images of the image set overlaid with indication of crop areas, wherein the crop areas are defined by the determined crop boundary locations.
Type: Application
Filed: December 12, 2018
Publication date: July 25, 2019
Applicant: X Development LLC
Inventors: Cheng-en Guo, Jie Yang, Elliott Grant
-
Publication number: 20190228226
Abstract: An automated warehouse includes a plurality of transfer destinations at which articles can be placed, and a transporter that moves between the plurality of transfer destinations and transfers an article to a transfer destination. The automated warehouse includes: a moving-side image capturer that is provided on the transporter and captures images of the transfer destination and a part or all of the operation of transferring the article to the transfer destination performed by the transporter; and a fixed-side image capturer that is provided at a predetermined position in the automated warehouse and captures images, from a direction different from that of the moving-side image capturer, of the transporter and a part or all of the operation of transferring the article to the transfer destination performed by the transporter.
Type: Application
Filed: August 16, 2017
Publication date: July 25, 2019
Inventor: Naruto Adachi
-
Publication number: 20190228227
Abstract: A method and an apparatus for extracting a user attribute, and an electronic device, include: receiving image data sent by a second terminal; extracting user attribute information based on the image data; and determining a target service object corresponding to the user attribute information. Current biological images of the user are obtained in real time, which is simple and convenient and ensures the authenticity of the user attribute information; the target service object is determined by means of the user attribute information, which better matches the current demands of the user.
Type: Application
Filed: December 26, 2017
Publication date: July 25, 2019
Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
Inventors: Fan ZHANG, Binxu PENG, Kaijia CHEN
-
Publication number: 20190228228
Abstract: A drive recorder records video captured by a camera mounted on the vehicle in association with a time of day, extracts from the recording medium the video of a predetermined time period including the time of day when an abnormal event is detected, and transmits a first file including the extracted video and information on the predetermined time period. An information terminal device acquires driving condition information of the vehicle and records it in association with the time of day. When receiving the first file, the information terminal device extracts driving condition information for the same time period as the predetermined time period included in the first file, creates a second file including the extracted driving condition information and the video included in the first file, and either records it as an erasure-prohibited object or transmits it to an external device.
Type: Application
Filed: July 19, 2017
Publication date: July 25, 2019
Applicant: NEC Corporation
Inventor: Hidenori TSUKAHARA
-
Publication number: 20190228229
Abstract: Audio content may be captured during capture of spherical video content. An audio event within the audio content may indicate an occurrence of a highlight event based on sound(s) originating from audio source(s) captured within an audio event extent within the spherical video content at an audio event moment. Temporal type of the audio event providing guidance with respect to relative temporality of the highlight event with respect to the audio event and spatial type of the audio event providing guidance with respect to relative spatiality of the highlight event with respect to the audio event may be determined. A highlight event moment of the highlight event may be identified based on the audio event moment and temporal type of the audio event. A highlight event extent of the highlight event may be identified based on the audio event extent and the spatial type of the audio event.
Type: Application
Filed: March 29, 2019
Publication date: July 25, 2019
Inventor: Ingrid A. Cotoros
-
Publication number: 20190228230
Abstract: An information processing apparatus that provides information about a virtual viewpoint image includes: a generation unit configured to generate scene information including type information and time information, the type information indicating a type of an event occurring in an image-capturing region in which an image is captured by a plurality of cameras, the time information indicating a time when the event has occurred; and a provision unit configured to provide an output destination of material data with the scene information generated by the generation unit, the material data being generated from a plurality of captured images obtained by the plurality of cameras capturing images of the image-capturing region from different directions, the material data being used to generate the virtual viewpoint image depending on a position and an orientation of a virtual viewpoint.
Type: Application
Filed: January 15, 2019
Publication date: July 25, 2019
Inventor: Kazufumi Onuma
-
Publication number: 20190228231
Abstract: Systems and methods for segmenting video. A segmentation application executing on a computing device receives a video including video frames. The segmentation application calculates, using a predictive model trained to evaluate quality of video frames, a first aesthetic score for a first video frame and a second aesthetic score for a second video frame. The segmentation application determines that the first aesthetic score and the second aesthetic score differ by a quality threshold and that a number of frames between the first video frame and the second video frame exceeds a duration threshold. The segmentation application creates a video segment by merging a subset of video frames ranging from the first video frame to a segment-end frame preceding the second video frame.
Type: Application
Filed: January 25, 2018
Publication date: July 25, 2019
Inventors: Sagar Tandon, Abhishek Shah
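The two-threshold segmentation rule described above can be sketched as follows. The threshold values and the function name are illustrative assumptions; the per-frame aesthetic scores would come from the trained predictive model:

```python
# Illustrative sketch: start a new segment where two frames' aesthetic
# scores differ by at least a quality threshold AND enough frames lie
# between them (duration threshold). Thresholds are assumed values.
def segment_boundaries(scores, quality_threshold=0.3, duration_threshold=30):
    """scores: per-frame aesthetic scores. Returns indices of segment starts."""
    boundaries = [0]
    last = 0
    for i in range(1, len(scores)):
        if (abs(scores[i] - scores[last]) >= quality_threshold
                and i - last > duration_threshold):
            boundaries.append(i)   # frames last..i-1 form one segment
            last = i
    return boundaries
```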
-
Publication number: 20190228232
Abstract: An automatically calibrated vehicle-tracking system and methods of use thereof. The automatically calibrated vehicle-tracking system has an input interface for receiving an image stream from a tracking camera and vehicle license plate data indicative of valid license plate detections from a license plate camera; a general purpose processor; and a computer-readable memory comprising calibration program code for calibrating the vehicle-tracking system, the calibration program code comprising: a tracking module to generate a plurality of calibration tracks, a pairing module to identify, for each of the plurality of calibration tracks, an association between a valid license plate detection and the calibration track, and a calibration module to set a threshold for a track parameter.
Type: Application
Filed: September 12, 2017
Publication date: July 25, 2019
Inventors: Myriam LÉCART, Jonathan LAVOIE
-
Publication number: 20190228233
Abstract: Video tracking systems and methods include a peripheral master tracking process integrated with one or more tunnel tracking processes. The video tracking systems and methods utilize video data to detect and/or track separately several stationary or moving objects in a manner of tunnel vision. The video tracking system includes a master peripheral tracker for monitoring a scene and detecting an object, and a first tunnel tracker initiated by the master peripheral tracker, wherein the first tunnel tracker is dedicated to track one detected object.
Type: Application
Filed: November 5, 2018
Publication date: July 25, 2019
Applicant: IntuVision Inc.
Inventors: Sadiye Zeyno Guler, Jason Adam Silverstein, Matthew Kevin Farrow, Ian Harris Pushee
-
Publication number: 20190228234
Abstract: A supervisor can ascertain an occurrence of a congestion state in advance by performing an appropriate notification prior to reaching the congestion state. There is provided an area setting unit that sets at least two determination areas on the captured image in response to an input operation of a user, a person sensor that senses a person existing within the determination area from the captured image, a congestion degree calculator that calculates a congestion degree for each of the determination areas based on a sensing result by the person sensor, a state sensor that compares the congestion degree calculated by the congestion degree calculator with predetermined reference values for each of the determination areas and senses a plurality of states including a congestion state and a potential congestion state, and an output unit that outputs information for performing a notification action based on the sensing result by the state sensor.
Type: Application
Filed: August 28, 2017
Publication date: July 25, 2019
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventor: Hiroki TESHIMA
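A minimal sketch of the comparison against reference values described above, assuming a simple person count as the congestion degree and two hypothetical reference values:

```python
# Assumed reference values for classifying each determination area's state.
POTENTIAL_THRESHOLD = 5    # "potential congestion" reference value
CONGESTION_THRESHOLD = 10  # "congestion" reference value

def sense_state(person_count):
    """Classify one determination area from its congestion degree."""
    if person_count >= CONGESTION_THRESHOLD:
        return "congestion"
    if person_count >= POTENTIAL_THRESHOLD:
        return "potential congestion"
    return "normal"

def sense_areas(counts):
    """counts: mapping of area name -> person count for that area."""
    return {area: sense_state(n) for area, n in counts.items()}
```

A notification action would then be triggered for any area whose state is not "normal", letting the supervisor act before full congestion is reached.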
-
Publication number: 20190228235
Abstract: A system includes an object identification module, a tailgate position module, and a user interface device (UID) control module. The object identification module is configured to identify at least one of a bumper of a vehicle and a tailgate of the vehicle in an image captured by a camera mounted to the tailgate. The tailgate position module is configured to determine that the tailgate is closed when the bumper is identified in the image, and determine that the tailgate is open when at least one of: the tailgate is identified in the image; and the bumper is not identified in the image. The UID control module is configured to adjust operation of a user interface device based on whether the tailgate is open or closed.
Type: Application
Filed: January 25, 2018
Publication date: July 25, 2019
Applicant: GM Global Technology Operations LLC
Inventors: Mohannad MURAD, Bryan W. Fowler, Princess Len Carlos
-
Publication number: 20190228236
Abstract: A method for processing sensor data in a number of controllers in a controller complex. The controllers are connected to at least one sensor via at least one communication bus, wherein the sensor data of the at least one sensor are processed by at least two different controllers in stages. At least one processing stage is concordant in the two controllers or is equivalent to the other stage at least in so far as the results of the processing are converted into one another by a conversion. Provision is made for a preprocessing unit to which the sensor data of the at least one sensor are supplied, wherein the processing of the sensor data in the at least one concordant processing stage is performed in the preprocessing unit, and the processed sensor data are forwarded to the at least two different controllers for individual further processing.
Type: Application
Filed: January 22, 2019
Publication date: July 25, 2019
Inventors: Peter SCHLICHT, Stephan SCHOLZ
-
Publication number: 20190228237
Abstract: A detection of a boundary in an environment. For this purpose, information from an occupancy grid is used, the occupancy grid providing information about the probability of occupancy in the environment. Upon detecting a starting transition point between a free and an occupied grid cell in the occupancy grid, a region-of-interest window surrounding the starting transition point is analyzed to identify further transition points. The identified transition points are combined into one or more polygon chains. After the analysis of the boundary is performed within the region of interest, successive regions of interest may be analyzed. For this purpose, the successive regions of interest are determined based on the transition points and/or the boundary information of the current region-of-interest window.
Type: Application
Filed: January 4, 2019
Publication date: July 25, 2019
Inventors: Pothuraju Chavali, Naveen Onkarappa, Gerrit Wischer
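The free/occupied transition-point detection described above might look roughly like this for a single grid row; the 0.5 occupancy cutoff and the row-wise scan are assumptions for illustration:

```python
# Rough sketch: find indices in one occupancy-grid row where a free cell
# borders an occupied cell. The 0.5 probability cutoff is an assumed value.
def transition_points(row, occupied=0.5):
    """row: occupancy probabilities for one grid row.
    Returns indices i where cell i and cell i+1 differ in free/occupied state."""
    points = []
    for i in range(len(row) - 1):
        a, b = row[i] >= occupied, row[i + 1] >= occupied
        if a != b:
            points.append(i)
    return points
```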
-
Publication number: 20190228238
Abstract: An ECU is applied to a vehicle system that is provided with lateral sensors which acquire distance information expressing a distance to an object that is located at a position on a lateral side of a vehicle. When distance information on the object is acquired by the lateral sensors, a judgement is made by the ECU as to whether or not the object is a predetermined moving object that moves relative to the vehicle. The ECU determines that the object is a target to be subjected to contact avoidance processing for avoiding contact with the object, based on a result of judging whether or not the object for which the distance information is acquired is a moving object.
Type: Application
Filed: July 11, 2017
Publication date: July 25, 2019
Applicant: DENSO CORPORATION
Inventors: Taketo HARADA, Mitsuyasu MATSUURA, Yosuke MIYAMOTO
-
Publication number: 20190228239
Abstract: A target detection method, system and controller. The method comprises: receiving first scan data transmitted by a radar, the first scan data being obtained after the radar performs a first type of scanning on a first target region; receiving image data transmitted by a digital image device, the image data being obtained after the digital image device images a second target region, an overlapping region existing between the second target region and the first target region; according to the first scan data, finding image information corresponding to obstacle targets from the image data so as to identify the types of the target obstacles; when it is determined according to the types of the target obstacles that an obstacle target that needs to be avoided exists, controlling the radar to perform a second type of scanning on the obstacle target that needs to be avoided and tracking same, the precision of the second type of scanning being greater than the precision of the first type of scanning.
Type: Application
Filed: August 23, 2016
Publication date: July 25, 2019
Applicant: SUTENG INNOVATION TECHNOLOGY CO., LTD.
Inventors: Bin WANG, Yingying ZHANG
-
Publication number: 20190228240
Abstract: A method for detecting garage parking spaces for vehicles (1) in an area surrounding a vehicle (1) using a parking assistance system (2), wherein the vehicle (1) has at least one environment sensor (8, 10, 11, 12, 13, 14, 15), is designed such that garage parking spaces in a garage (20) can be reliably detected. This is achieved by providing a method comprising the steps of: receiving sensor data from the at least one environment sensor (8, 10, 11, 12, 13, 14, 15) using the parking assistance system (2); transmitting the sensor data to an on-board computer unit (6); creating a digital environment map of the surrounding area from the sensor data; detecting a parking-space-like subregion (21) of the surrounding area in the environment map; and classifying the parking-space-like subregion (21) as a garage parking space by means of deep-learning models.
Type: Application
Filed: January 24, 2019
Publication date: July 25, 2019
Applicant: VALEO Schalter und Sensoren GmbH
Inventors: Evangelos Stamatopoulos, Gabriel Schoenung
-
Publication number: 20190228241
Abstract: A method of operating an in-vehicle camera includes providing a database of geographic locations of points of interest. It is detected that the vehicle has arrived at one of the geographic locations of the points of interest. In response to the detecting step, capturing of images with the camera is automatically begun.
Type: Application
Filed: March 29, 2019
Publication date: July 25, 2019
Inventor: HAKAN KOSTEPEN
-
Publication number: 20190228242
Abstract: Vehicle systems and methods for determining a target position that a user is gesturing towards are disclosed. In one embodiment, a vehicle includes a user detection system configured to output a gesture signal in response to a hand of the user performing at least one gesture to indicate a target position, a user gaze monitoring system configured to output an eye location signal, one or more processors, and one or more non-transitory memory modules communicatively coupled to the one or more processors. The memory modules store machine-readable instructions that, when executed, cause the one or more processors to determine a point located on the hand of the user based at least in part on the gesture signal from the user detection system. The processors are also caused to determine an actual eye position of the user based on the eye location signal from the user gaze monitoring system.
Type: Application
Filed: January 23, 2018
Publication date: July 25, 2019
Applicant: Toyota Research Institute, Inc.
Inventors: Masaaki Yamaoka, Yuki Horiuchi
-
Publication number: 20190228243
Abstract: Vehicle systems and methods for determining a target position are disclosed. A vehicle includes a user detection system configured to output a gesture signal in response to a hand of a user performing at least one gesture to indicate a final target position. The vehicle also includes a user gaze monitoring system configured to output an eye location signal that indicates an actual eye position of the user. The vehicle also includes one or more processors and one or more non-transitory memory modules communicatively coupled to the processors. The processors store machine-readable instructions that, when executed, cause the one or more processors to determine a first point and a second point located on the hand of the user based at least in part on the gesture signal from the user detection system. The first point and the second point define a pointing axis of the hand of the user.
Type: Application
Filed: January 23, 2018
Publication date: July 25, 2019
Applicant: Toyota Research Institute, Inc.
Inventors: Masaaki Yamaoka, Yuki Horiuchi
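The pointing-axis geometry implied by this abstract can be sketched with basic vector math; extending the axis to a flat z = 0 plane is an illustrative assumption, not the patent's method of locating the final target position:

```python
# Sketch: two points on the hand define a pointing axis; extending that
# axis until it crosses an assumed z = 0 ground plane yields a candidate
# target position. Plane choice and function names are assumptions.
def pointing_axis(p1, p2):
    """Return (origin, unit direction) of the axis through p1 and p2."""
    d = [b - a for a, b in zip(p1, p2)]
    norm = sum(c * c for c in d) ** 0.5
    return p1, [c / norm for c in d]

def target_on_ground(p1, p2):
    """Extend the axis from p1 through p2 to the assumed z = 0 plane."""
    origin, d = pointing_axis(p1, p2)
    t = -origin[2] / d[2]           # parameter where z reaches 0
    return [o + t * c for o, c in zip(origin, d)]
```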
-
Publication number: 20190228244
Abstract: A system and method. The system may include a monitor implemented as a virtual window, two cameras, and a switch. The switch may be configured to: when a passenger is in a first position, feed video from a first camera to the monitor; and when the passenger is in a second position, feed video from the second camera to the monitor.
Type: Application
Filed: March 29, 2019
Publication date: July 25, 2019
Inventor: R. Klaus Brauer
-
Publication number: 20190228245
Abstract: A system and method. The system may include a monitor implemented as a virtual window, a camera, an additional camera, a processor, and a switch. The processor may be configured to: receive video from the additional camera; manipulate the video from the additional camera based on a position of the passenger to provide an additional camera manipulated video stream; and output the additional camera manipulated video stream. The switch may be configured to: when the passenger is in a first position, feed video from the camera to the monitor; and when the passenger is in a second position, feed the additional camera manipulated video stream to the monitor.
Type: Application
Filed: March 29, 2019
Publication date: July 25, 2019
Inventor: R. Klaus Brauer
-
Publication number: 20190228246
Abstract: A vehicle is configured to receive, from a passenger, a pickup request including an approximate location of the passenger, to scan for the passenger after arriving at the approximate location of the passenger, to determine whether the passenger has been identified by comparing passenger attribute information to results of the scan, and to transmit an approximate location of the vehicle and vehicle identification information to the passenger when the passenger has not been identified or is not accessible for pickup. The passenger is picked up by the vehicle when the passenger has been identified and is accessible for pickup. In addition, the passenger may also use a portable electronic device to identify the vehicle based on the received approximate location of the vehicle and vehicle identification information.
Type: Application
Filed: January 25, 2018
Publication date: July 25, 2019
Inventors: Lei Yang, Hai Yu, Qijie Xu, Fatih Porikli
-
Publication number: 20190228247
Abstract: A biometric biochemical analysis system includes a user interface module to provide instructions for collecting and handling biochemical sampling and processing related to biometric data gathering as well as capturing biometric data using digital data capturing devices. The user interface module and display are integrated with analysis and communications portions of the biometric biochemical analysis system to provide a portable system for multi-portion data collecting, storage, verification, and analysis.
Type: Application
Filed: December 27, 2018
Publication date: July 25, 2019
Inventors: Robert A. Schueren, David King, Chungsoo Charles Park, Stevan B. Jovanovich
-
Publication number: 20190228248
Abstract: A processor-implemented liveness test method includes: obtaining a color image including an object and an infrared (IR) image including the object; performing a first liveness test using the color image; performing a second liveness test using the IR image; and determining a liveness of the object based on a result of the first liveness test and a result of the second liveness test.
Type: Application
Filed: December 11, 2018
Publication date: July 25, 2019
Applicant: Samsung Electronics Co., Ltd.
Inventors: Jaejoon HAN, Youngjun KWAK, ByungIn YOO, Changkyu CHOI
-
Publication number: 20190228249
Abstract: The purpose of the present invention is, when a portion of a subject to be detected is occluded, to make it easier to detect that the occluded subject is the subject to be detected, regardless of which portion is occluded. Provided is an information processing device (110) comprising: a computation unit (111) which computes local scores for each of a plurality of positions contained in an image of a prescribed scope, the scores indicating the likelihood that an object to be detected is present; and a change unit (112) which changes the scores for those positions, among the plurality of positions, that are included in a prescribed region determined according to the plurality of scores computed for the plurality of positions, such that the likelihood that the object to be detected is present increases.
Type: Application
Filed: April 2, 2019
Publication date: July 25, 2019
Applicant: NEC CORPORATION
Inventor: Kenta Araki
-
Publication number: 20190228250
Abstract: A system for organizing materials is disclosed. The system has a material organization module, comprising computer-executable code stored in non-volatile memory, a processor, an object recognition imaging device, and a user interface. The material organization module, the processor, the object recognition imaging device, and the user interface are configured to use the object recognition imaging device to determine spatial data and image data, use the image data to display an actual image of a container including a plurality of compartments on the user interface, and display one or more computer-generated edible material images that are superimposed, based on the spatial data, on at least one of the plurality of compartments of the actual image of the container.
Type: Application
Filed: January 19, 2018
Publication date: July 25, 2019
Inventor: Timothy R. Fitzpatrick
-
Publication number: 20190228251
Abstract: Systems and methods for aligning digital image datasets to a computer model of a structure. The system receives a plurality of reference images from an input image dataset and identifies common ground control points (“GCPs”) in the reference images. The system then calculates virtual three-dimensional (“3D”) coordinates of the measured GCPs. Next, the system calculates and projects two-dimensional (“2D”) image coordinates of the virtual 3D coordinates into all of the images. Finally, using the projected 2D image coordinates, the system performs spatial resection of all of the images in order to rapidly align all of the images.
Type: Application
Filed: January 25, 2019
Publication date: July 25, 2019
Applicant: Geomni, Inc.
Inventors: Ángel Guijarro Meléndez, Javier Del Río Fernández, Antonio Godino Cobo
-
Publication number: 20190228252
Abstract: An image data retrieving method and an image data retrieving device are provided. The image data retrieving method includes: receiving an image including a plurality of data from a communication interface; obtaining a plurality of regions of interest from the image, wherein each of the regions of interest is a data image including at least one of the data; dividing the regions of interest into a plurality of groups, wherein at least one of the data included in the regions of interest of each of the groups has a same type; combining the regions of interest of each of the groups into a to-be-recognized image; and performing an optical character recognition to the to-be-recognized image corresponding to each of the groups respectively to obtain the data corresponding to the regions of interest of each of the groups.
Type: Application
Filed: April 18, 2018
Publication date: July 25, 2019
Applicant: Wistron Corporation
Inventors: Ying-Hao Peng, Zih-Yang Huang
-
Publication number: 20190228253Abstract: Systems and methods are described for enabling a client device to request video streams with different bit depth remappings for different viewing conditions. In an embodiment, information indicating the availability of additional remapped profiles is sent in a manifest file. Alternative bit-depth remappings may be optimized for different regions of interest in the image or video content, or for different viewing conditions, such as different display technologies and different ambient illumination. Some embodiments based on the DASH protocol perform multiple depth mappings at the encoder and also perform ABR-encoding for distribution. The manifest file contains information indicating additional remapping profiles. The remapping profiles are associated with different transformation functions used to convert from a higher bit-depth to a lower bit-depth.Type: ApplicationFiled: May 5, 2017Publication date: July 25, 2019Inventors: Kumar Ramaswamy, Jeffrey Allen Cooper
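A bit-depth remapping profile of the kind the manifest above advertises can be sketched as a transformation function applied during the high-to-low bit-depth conversion. The profile names and the gamma value here are hypothetical stand-ins for whatever transformation a real remapping profile would specify:

```python
import numpy as np

def remap_bit_depth(samples, profile="linear", src_bits=10, dst_bits=8):
    """Convert higher-bit-depth samples to a lower bit depth using a
    profile-specific transformation function (hypothetical profile names)."""
    x = samples / (2 ** src_bits - 1)        # normalise to [0, 1]
    if profile == "gamma":                   # lift shadows, e.g. for dim viewing
        x = x ** (1 / 2.2)
    return np.round(x * (2 ** dst_bits - 1)).astype(np.uint8)

frame = np.array([0, 511, 1023])             # 10-bit samples
print(remap_bit_depth(frame, "linear"))      # [  0 127 255]
```

A client under different ambient illumination would simply request the stream variant whose profile suits its display, which is the point of signalling the profiles in the manifest.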
-
Publication number: 20190228254Abstract: An extraneous-matter detecting apparatus according to an embodiment includes a first extraction unit, a second extraction unit, and a detection unit. The first extraction unit extracts a first pixel group of first pixels included in a captured image captured by an image capturing device. Each of the first pixels has a luminance gradient directed outward from a predetermined center region. The second extraction unit extracts a second pixel group of second pixels included in the captured image. Each of the second pixels has a luminance gradient directed inward toward the predetermined center region. The detection unit combines the first pixel group, extracted by the first extraction unit, and the second pixel group, extracted by the second extraction unit, with each other so as to detect an extraneous matter adhered to the image capturing device.Type: ApplicationFiled: December 12, 2018Publication date: July 25, 2019Applicant: DENSO TEN LimitedInventors: Nobunori ASAYAMA, Daisuke YAMAMOTO, Nobuhisa IKEDA, Takashi KONO
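The two extraction units above can be sketched by classifying each pixel's luminance gradient against the radial direction from the centre region: a positive radial component means the gradient points outward, a negative one means it points inward. The toy image below is a hypothetical stand-in for a captured frame:

```python
import numpy as np

def split_by_radial_gradient(image, center):
    """Split pixels into outward- and inward-gradient groups relative to a
    centre point, a toy stand-in for the two extraction units."""
    gy, gx = np.gradient(image.astype(float))   # luminance gradient per pixel
    ys, xs = np.indices(image.shape)
    ry, rx = ys - center[0], xs - center[1]     # radial direction per pixel
    dot = gx * rx + gy * ry                     # sign of the radial component
    return dot > 0, dot < 0                     # (outward, inward) masks

# Toy image whose intensity grows with distance from the centre,
# so every non-centre gradient points outward.
img = np.fromfunction(lambda y, x: (y - 4) ** 2 + (x - 4) ** 2, (9, 9))
outward, inward = split_by_radial_gradient(img, (4, 4))
print(outward.sum() > inward.sum())  # True
```

A detector along the lines of the abstract would then combine the two masks to find the ring-like signature of a droplet or smudge on the lens.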
-
Publication number: 20190228255Abstract: The current document is directed to methods and systems for monitoring a dental patient's progress during a course of treatment. A three-dimensional model of the expected positions of the patient's teeth can be projected, in time, from a three-dimensional model of the patient's teeth prepared prior to beginning the treatment. A digital camera is used to take one or more two-dimensional photographs of the patient's teeth, which are input to a monitoring system. The monitoring system determines virtual-camera parameters for each two-dimensional input image with respect to the time-projected three-dimensional model, uses the determined virtual-camera parameters to generate two-dimensional images from the three-dimensional model, and then compares each input photograph to the corresponding generated two-dimensional image in order to determine how closely the three-dimensional arrangement of the patient's teeth corresponds to the time-projected three-dimensional arrangement.Type: ApplicationFiled: March 29, 2019Publication date: July 25, 2019Inventors: Artem Borovinskih, Mitra Derakhshan, Carina Koppers, Eric Meyer, Ekaterina Tolstaya, Yury Brailov
-
Publication number: 20190228256Abstract: A learning automaton can be trained to merge data from input data streams, optionally with different data rates, into a single output data stream. The learning automaton can learn over time from the input data streams. The input data streams can be low-pass filtered to suppress data having frequencies greater than a time-varying cutoff frequency. Initially, the cutoff frequency can be relatively low, so that the effective data rates of the input data streams are all equal. This can ensure that initially, high data-rate data does not overwhelm low data-rate data. As the learning automaton learns, an entropy of the learning automaton changes more slowly, and the cutoff frequency is increased over time. When the entropy of the learning automaton has stabilized, the training is completed, and the cutoff frequency can be large enough to pass all the input data streams, unfiltered, to the learning automaton.Type: ApplicationFiled: January 18, 2019Publication date: July 25, 2019Inventors: Marcus Alton Teter, Natalie Rae Plotkin, Scott Allen Imhoff, Walter Parish Gililland, JR., Austin Jay Jorgensen
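The time-varying low-pass filtering described above can be sketched with a simple first-order smoother whose cutoff coefficient rises over training: early on, fast (high-rate) variation is heavily smoothed; by the end the stream passes nearly unfiltered. The filter form and cutoff schedule are illustrative assumptions, not the patent's mechanism:

```python
import numpy as np

def train_filtered(stream, n_steps):
    """Low-pass an input stream with a cutoff that rises over training,
    so high-rate variation is suppressed early and passed through later."""
    out, state = [], float(stream[0])
    for step in range(n_steps):
        # Rising cutoff coefficient in (0, 1]; 1.0 means "pass unfiltered".
        cutoff = min(1.0, 0.1 + 0.9 * step / (n_steps - 1))
        state = state + cutoff * (stream[step % len(stream)] - state)
        out.append(state)
    return np.array(out)

samples = np.array([0.0, 1.0] * 50)           # fast-alternating input
y = train_filtered(samples, 100)
print(abs(y[1] - y[0]) < abs(y[-1] - y[-2]))  # early steps are smoother: True
```

In the patent the schedule would be driven by the learning automaton's entropy stabilising, rather than by a fixed step count as assumed here.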
-
Publication number: 20190228257Abstract: A method includes obtaining a first image of a patient procured during an X-ray, analyzing the first image for one or more unmasked anomalies, shifting the first image to provide a shifted image, obtaining a residual image comprising a combination of the first image and the shifted image, and analyzing the residual image for one or more masked anomalies, where the one or more masked anomalies include anomalies that went undetected in the analysis of the first image for the one or more unmasked anomalies due to a presence of one or more masking features in the first image.Type: ApplicationFiled: September 22, 2017Publication date: July 25, 2019Inventor: Homayoun Karimabadi
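The shift-and-subtract step above can be sketched directly: subtracting a shifted copy cancels slowly varying structure (the masking features) while an anomaly survives in the residual. The shift direction and toy frame below are illustrative assumptions:

```python
import numpy as np

def residual_image(image, shift=1):
    """Subtract a horizontally shifted copy from the image; smooth masking
    features largely cancel, leaving anomalies visible in the residual."""
    shifted = np.roll(image, shift, axis=1)
    return image.astype(int) - shifted.astype(int)

# Toy frame: uniform background (a masking feature) plus one anomaly pixel.
frame = np.full((5, 5), 100)
frame[2, 2] = 180                    # hidden anomaly
res = residual_image(frame)
print(np.count_nonzero(res))         # → 2: the anomaly and its shifted ghost
```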
-
Publication number: 20190228258Abstract: Methods and apparatus are disclosed for vision-based determining of trailer presence. An example vehicle includes a camera to capture a plurality of frames. The example vehicle also includes a controller to calculate feature descriptors for a set of features identified in a first frame, compute respective match magnitudes between the feature descriptors of the first frame and for a second frame, calculate respective feature scores for each feature of the set of features, and determine if a trailer is present by comparing a feature score of a feature to a threshold.Type: ApplicationFiled: January 23, 2018Publication date: July 25, 2019Inventors: Robert Bell, Brian Grewe
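The match-magnitude and feature-score steps above can be sketched with cosine similarity between per-feature descriptors from two frames; a trailer tends to yield features that persist with strong matches. The descriptors, threshold, and scoring rule below are hypothetical simplifications:

```python
import numpy as np

def trailer_present(desc_frame1, desc_frame2, threshold=0.9):
    """Score each feature by how well its descriptor matches between two
    frames; report a trailer if any score exceeds the threshold."""
    num = np.sum(desc_frame1 * desc_frame2, axis=1)
    den = (np.linalg.norm(desc_frame1, axis=1)
           * np.linalg.norm(desc_frame2, axis=1))
    scores = num / den                         # cosine similarity per feature
    return bool(np.any(scores > threshold)), scores

# Toy descriptors: the first feature persists across frames, the second drifts.
d1 = np.array([[1.0, 0.0], [0.0, 1.0]])
d2 = np.array([[0.99, 0.05], [1.0, 0.0]])
present, scores = trailer_present(d1, d2)
print(present)  # True
```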
-
Publication number: 20190228259Abstract: In one embodiment, a plurality of patches of an image are processed using a first-pass of a first deep-learning model to generate object-level information for each of the patches. Each patch includes one or more pixels of the image. Using a second-pass of the first deep-learning model, a respective object proposal is generated for each of the plurality of patches of the image. The second-pass takes as input the first-pass output, and the generated respective object proposals comprise pixel-level information for each of the patches. Using a second deep-learning model, a respective score is computed for each object proposal. The second deep-learning model takes as input the first-pass output, and the object score includes a likelihood that the respective patch of the object proposal contains an entire object.Type: ApplicationFiled: March 29, 2019Publication date: July 25, 2019Inventors: Pedro Henrique Oliveira Pinheiro, Ronan Stéfan Collobert, Piotr Dollar
-
Publication number: 20190228260Abstract: A method for determining a quantity of interest related to the density of organic tissue starts with a digital representation of a histological image of the tissue. The digital representation is converted to a binary image, to discriminate pixels that represent tissue of interest in the image. A box filter is applied to values of the pixels of interest to obtain a tissue density value for each pixel of interest. A quantity of interest is computed, based upon the tissue density values for the pixels of interest. A tangible representation of the computed quantity of interest, such as a numerical value, a graph, or a color representation, is displayed or otherwise presented via an interface.Type: ApplicationFiled: January 18, 2019Publication date: July 25, 2019Applicant: BiocellviaInventors: Jean-Claude Gilhodes, Yvon Julé, Tomi Florent
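The binarise-then-box-filter pipeline above can be sketched as follows: threshold the image to mark tissue pixels, average a box neighbourhood around each one, and summarise the per-pixel densities into a single quantity of interest. The threshold, box size, and mean-density summary are illustrative choices, not the patent's parameters:

```python
import numpy as np

def tissue_density(image, threshold, box=3):
    """Binarise a histology image and box-filter it so each tissue pixel
    carries the local tissue fraction; summarise with the mean density."""
    binary = (image > threshold).astype(float)
    pad = box // 2
    padded = np.pad(binary, pad)
    density = np.zeros_like(binary)
    for dy in range(box):                   # box filter via shifted sums
        for dx in range(box):
            density += padded[dy:dy + binary.shape[0],
                              dx:dx + binary.shape[1]]
    density /= box * box
    return density[binary.astype(bool)].mean()   # quantity of interest

img = np.zeros((5, 5))
img[1:4, 1:4] = 200                          # a 3x3 patch of tissue
print(round(tissue_density(img, 100), 2))    # → 0.6
```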
-
Publication number: 20190228261Abstract: Systems and methods for generating prebuilt machine learning framework objects comprising sets of prebuilt machine learning components and one or more data mapping requirements. The components are associated with a respective machine learning service. One or more datasets are obtained. A user-specified context for creating a particular machine learning application is obtained. A particular prebuilt object is selected based on the datasets and the context. One or more candidate data mappings are identified based on the data mapping requirements and the datasets. A particular data mapping is selected. A particular set of prebuilt components is selected from the plurality of prebuilt components. The particular machine learning application is generated from the particular prebuilt object based on the particular data mapping and the particular set of prebuilt components, the particular machine learning application comprising an executable application. The machine learning application is deployed.Type: ApplicationFiled: December 12, 2018Publication date: July 25, 2019Applicant: WeR.AI, Inc.Inventor: Man Chan
-
Publication number: 20190228262Abstract: Techniques are disclosed herein for collecting annotation data via a gamified user interface in a vehicle control system. According to an embodiment disclosed herein, the vehicle control system detects a trigger to initiate an annotation prompt associated with an object classified from an image. The vehicle control system presents, via a user interface, the annotation prompt. The vehicle control system receives, via the user interface, user input indicative of a response to the annotation prompt by a user and updates a confidence score associated with the classified object as a function of one or more metrics associated with the user.Type: ApplicationFiled: March 30, 2019Publication date: July 25, 2019Inventors: Domingo C. Gonzalez, Ignacio J. Alvarez, Mehrnaz Khodam Hazrati, Christopher Lopez-Araiza
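The confidence-score update above can be sketched as blending the classifier's score toward the user's answer, weighted by a per-user reliability metric. The blending rule and the reliability weighting are hypothetical; the patent only says the update is a function of metrics associated with the user:

```python
def update_confidence(score, agrees, user_accuracy):
    """Blend the classifier's confidence with a user's annotation response,
    weighting the response by a per-user reliability metric (hypothetical)."""
    target = 1.0 if agrees else 0.0
    return (1 - user_accuracy) * score + user_accuracy * target

# A 0.6-confidence detection confirmed by a 0.9-reliability user.
print(round(update_confidence(0.6, True, 0.9), 2))  # → 0.96
```

A low-reliability user's answer moves the score only slightly, which is one way the "metrics associated with the user" could guard the confidence update against noisy gamified responses.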
-
Publication number: 20190228263Abstract: A method of detecting an object in a real scene using a computer includes specifying a 3D model corresponding to the object. The method further includes acquiring, from a capture camera, an image frame of a reference object captured from a first view angle. The method further includes generating a 2D synthetic image by rendering the 3D model in a second view angle that is different from the first view angle. The method further includes generating training data using (i) the image frame, (ii) the 2D synthetic image, (iii) the run-time camera parameter, and (iv) the capture camera parameter. The method further includes storing the generated training data in one or more memories.Type: ApplicationFiled: January 19, 2018Publication date: July 25, 2019Applicant: SEIKO EPSON CORPORATIONInventors: Hiu Lok SZETO, Syed Alimul HUDA
-
METHOD AND APPARATUS FOR TRAINING NEURAL NETWORK MODEL USED FOR IMAGE PROCESSING, AND STORAGE MEDIUM
Publication number: 20190228264Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames through a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the step of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.Type: ApplicationFiled: April 2, 2019Publication date: July 25, 2019Applicant: Tencent Technology (Shenzhen) Company LimitedInventors: Haozhi HUANG, Hao WANG, Wenhan LUO, Lin MA, Peng YANG, Wenhao JIANG, Xiaolong ZHU, Wei LIU
-
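The time-loss computation above can be sketched with nearest-pixel warping: move the early frame's output along the optical flow to form the expected-intermediate image, then take the mean squared error against the later frame's output. Integer flow and MSE are simplifying assumptions; a real implementation would interpolate sub-pixel flow and may mask occluded regions:

```python
import numpy as np

def temporal_loss(early_out, later_out, flow):
    """Warp the earlier frame's output along (integer) optical flow to get
    the expected image, then take MSE against the later frame's output."""
    h, w = early_out.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - flow[..., 1].astype(int), 0, h - 1)
    src_x = np.clip(xs - flow[..., 0].astype(int), 0, w - 1)
    expected = early_out[src_y, src_x]            # expected-intermediate image
    return float(np.mean((later_out - expected) ** 2))

early = np.zeros((4, 4)); early[1, 1] = 1.0
later = np.zeros((4, 4)); later[1, 2] = 1.0       # content moved right by 1
flow = np.zeros((4, 4, 2)); flow[..., 0] = 1.0    # uniform rightward flow
print(temporal_loss(early, later, flow))          # → 0.0: temporally consistent
```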
Publication number: 20190228265Abstract: A system, method and computer program product are provided. An input signal for classification and a set of pre-classified signals are received, each comprising a vector representation of an object having a plurality of vector elements. A sparse vector comprising a plurality of sparse vector coefficients is determined. Each sparse vector coefficient corresponds to a signal in the set of pre-classified signals and represents the likelihood of a match between the object represented in the input signal and the object represented in the corresponding signal. A largest sparse vector coefficient is compared with a predetermined threshold. If the largest sparse vector coefficient is less than the predetermined threshold, the corresponding signal is removed from the set of pre-classified signals. The determining and comparing are repeated using the input signal and the reduced set of pre-classified signals.Type: ApplicationFiled: April 3, 2019Publication date: July 25, 2019Inventors: Cecilia J. Aas, Raymond S. Glover
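The iterate-and-prune loop above can be sketched as follows, using an ordinary least-squares solve as a stand-in for the sparse-coding step (a real system would use a sparsity-promoting solver); dictionary columns play the role of pre-classified signals, and the threshold value is hypothetical:

```python
import numpy as np

def iterative_sparse_classify(signal, dictionary, threshold=0.8):
    """Repeatedly solve for coefficients over the pre-classified set and
    drop the best candidate while it stays below the match threshold."""
    labels = list(range(dictionary.shape[1]))
    D = dictionary.copy()
    while D.shape[1] > 0:
        coeffs, *_ = np.linalg.lstsq(D, signal, rcond=None)
        best = int(np.argmax(np.abs(coeffs)))
        if abs(coeffs[best]) >= threshold:
            return labels[best]                 # confident match found
        D = np.delete(D, best, axis=1)          # remove weak match, retry
        labels.pop(best)
    return None                                 # no confident match remains

# Hypothetical dictionary: columns are pre-classified reference signals.
D = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
x = np.array([0.95, 0.1, 0.0])
print(iterative_sparse_classify(x, D))  # → 0: matches the first reference
```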
-
Publication number: 20190228266Abstract: A method of detecting failure of an object tracking network with a failure detection network includes receiving an activation from an intermediate layer of the object tracking network and classifying the activation as a failure or success. The method also includes determining whether to initiate a recovery mode of the object tracking network or to remain in a tracking mode of the object tracking network, based on the classifying.Type: ApplicationFiled: January 22, 2018Publication date: July 25, 2019Inventors: Amirhossein HABIBIAN, Cornelis Gerardus Maria SNOEK
-
Publication number: 20190228267Abstract: Computer vision systems and methods for machine learning using image hallucinations are provided. The system generates image hallucinations that are subsequently used to train a deep neural network to match image patches. In this scenario, the synthesized changes serve in the learning of feature-embedding that captures what a patch of an image might look like from a different vantage point. In addition, a curricular learning framework is provided which is used to automatically train the neural network to progressively learn more invariant representations.Type: ApplicationFiled: January 23, 2019Publication date: July 25, 2019Applicant: Insurance Services Office, Inc.Inventors: Maneesh Kumar Singh, Hani Altwaijry, Serge Belongie
-
Publication number: 20190228268Abstract: An artificial neural network system for image classification, including multiple independent individual convolutional neural networks (CNNs) connected in multiple stages, each CNN configured to process an input image to calculate a pixelwise classification. The output of an earlier stage CNN, which is a class score image having identical height and width as its input image and a depth of N representing the probabilities of each pixel of the input image belonging to each of N classes, is input into the next stage CNN as input image. When training the network system, the first stage CNN is trained using first training images and corresponding label data; then second training images are forward propagated by the trained first stage CNN to generate corresponding class score images, which are used along with label data corresponding to the second training images to train the second stage CNN.Type: ApplicationFiled: August 9, 2017Publication date: July 25, 2019Applicant: KONICA MINOLTA LABORATORY U.S.A., INC.Inventors: Yongmian ZHANG, Jingwen ZHU