Patent Applications Published on January 9, 2020
-
Publication number: 20200012837
Abstract: An optical identification module including a sensor and a collimator is provided. The sensor has a plurality of sensing regions. The collimator is disposed on the sensing regions, and the collimator includes a transparent substrate and a first light shielding layer. The first light shielding layer is disposed on a first surface of the transparent substrate. The first light shielding layer includes a plurality of first openings. A ratio of a thickness of the first light shielding layer to a width of each of the first openings is greater than 1.
Type: Application
Filed: March 25, 2019
Publication date: January 9, 2020
Applicant: Guangzhou Tyrafos Semiconductor Technologies Co., LTD
Inventors: Chun-Yu Lee, Hsu-Wen Fu
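The claimed thickness-to-width ratio directly sets how steeply light may enter each opening: a ray passes only if it lies within ±atan(width/thickness) of the surface normal. A minimal sketch of that geometry (the dimensions are hypothetical, not taken from the filing):

```python
import math

def collimator_acceptance(thickness_um: float, opening_width_um: float):
    """Aspect ratio and half acceptance angle of one collimator opening.

    A ray can traverse an opening only if it is within +/- atan(w/t) of
    the surface normal, so a ratio t/w > 1 limits stray light to less
    than +/- 45 degrees (illustrative geometry, not from the patent text).
    """
    ratio = thickness_um / opening_width_um
    half_angle_deg = math.degrees(math.atan(opening_width_um / thickness_um))
    return ratio, half_angle_deg

ratio, half_angle = collimator_acceptance(thickness_um=200.0, opening_width_um=50.0)
print(ratio, round(half_angle, 1))  # ratio 4.0, half-angle ~14.0 degrees
```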
-
Publication number: 20200012838
Abstract: A method and system for automatic chromosome classification are disclosed. The system, alternatively referred to as a Residual Convolutional Recurrent Attention Neural Network (Res-CRANN), utilizes the band-sequence property of chromosome bands for chromosome classification. The Res-CRANN is an end-to-end trainable system, in which a sequence of feature vectors is extracted from the feature maps produced by the convolutional layers of a Residual Neural Network (ResNet), wherein the feature vectors correspond to visual features representing chromosome bands in a chromosome image. The sequence of feature vectors is fed into a Recurrent Neural Network (RNN) augmented with an attention mechanism. The RNN learns the sequence of feature vectors, and the attention module concentrates on a plurality of regions of interest (ROIs) of the sequence of feature vectors, wherein the ROIs are specific to a class label of chromosomes.
Type: Application
Filed: January 11, 2019
Publication date: January 9, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Monika SHARMA, Swati JINDAL, Lovekesh VIG
-
Publication number: 20200012839
Abstract: A contact image sensor having an illumination source; a first SBG array device; a transmission grating; a second SBG array device; a waveguiding layer including a multiplicity of waveguide cores separated by cladding material; an upper clad layer; and a platen. The sensor further includes: an input element for coupling light from the illumination source into the first SBG array; a coupling element for coupling light out of the cores into output optical paths coupled to a detector having at least one photosensitive element.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Applicant: DigiLens Inc.
Inventors: Milan Momcilo Popovich, Jonathan David Waldern
-
Publication number: 20200012840
Abstract: According to one embodiment of the present invention, a method for coding an iris pattern divides an iris area into a plurality of sectors on the basis of the assumption that a user's pupil and iris are not circular, and then codes the iris pattern included in each sector. According to the present invention, the error occurrence frequency can be minimized compared with a case in which an iris pattern is coded on the basis of the assumption that a pupil and an iris are circular.
Type: Application
Filed: August 14, 2017
Publication date: January 9, 2020
Inventor: Min Ho KIM
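The sector-wise idea can be illustrated with a tiny angular-binning helper: each iris sample point is assigned to one of N equal angular sectors around the pupil centre, so each sector's pattern can be coded independently and a non-circular boundary only perturbs its own sector. The sector count and point-to-sector keying below are hypothetical choices, not taken from the filing:

```python
import math

def sector_index(x, y, cx, cy, n_sectors=8):
    """Map an iris sample point (x, y) to one of n_sectors equal angular
    sectors around the pupil centre (cx, cy). Coding each sector's
    pattern separately means a non-circular pupil/iris boundary only
    distorts the sectors it crosses. (Illustrative sketch; the patent
    does not specify n_sectors or this exact keying.)"""
    angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))

# points east, north and west of the centre fall in sectors 0, 2 and 4 of 8
print(sector_index(10, 0, 0, 0))   # 0
print(sector_index(0, 10, 0, 0))   # 2
print(sector_index(-10, 0, 0, 0))  # 4
```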
-
Publication number: 20200012841
Abstract: In a method of using a polarimeter for improved mapping and perception of objects on the ground, the polarimeter records raw image data to obtain polarized images of an area. The raw image data is processed to form processed images. The processed images are enhanced, and objects are detected and tracked. The polarimeter may be in a vehicle on the ground or in the air.
Type: Application
Filed: August 27, 2019
Publication date: January 9, 2020
Applicant: Polaris Sensor Technologies, Inc.
Inventors: Todd M. Aycock, David B. Chenault, John S. Harchanko
-
Publication number: 20200012842
Abstract: Aerial vehicles that are equipped with one or more imaging devices may detect obstacles that are small in size, or obstacles that feature colors or textures that are consistent with colors or textures of a landing area, using pairs of images captured by the imaging devices. Disparities between pixels corresponding to points of the landing area that appear within each of a pair of the images may be determined and used to generate a reconstruction of the landing area and a difference image. If either the reconstruction or the difference image indicates the presence of one or more obstacles, a landing operation at the landing area may be aborted or an alternate landing area for the aerial vehicle may be identified accordingly.
Type: Application
Filed: August 30, 2019
Publication date: January 9, 2020
Inventor: Andreas Klaus
-
Publication number: 20200012843
Abstract: Systems and methods acquire and/or generate multiple different images of the same biometric identity, identify specific instances of biometric features in each of the different images, and merge the identified specific instances of biometric features into a data record that provides a digital representation of the biometric identity. Examples of biometric identities include fingerprints, handprints, palm prints, and thumbprints. In one embodiment, a counter is associated with each specific instance of a biometric feature found in the multiple images. Specific instances of biometric features found most frequently have high counts and are indicative of true identifications; those with low counts are indicative of false identifications. A threshold distinguishes between true and false identifications. Those specific instances with counts below the threshold are excluded when the digital representation of the biometric identity is generated.
Type: Application
Filed: September 18, 2019
Publication date: January 9, 2020
Inventors: Taras P. RIOPKA, Pranab MOHANTY, Limin MA
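The count-and-threshold step described above is easy to sketch: tally how many of the captured images contain each candidate feature and keep only those seen often enough. Keying features by an exact (x, y, type) tuple is a simplification of real minutia matching, which must tolerate small positional differences:

```python
from collections import Counter

def merge_features(feature_sets, threshold):
    """Merge biometric features found in several images of one identity.

    A feature seen in many images is likely a true minutia; one seen
    rarely is likely a false identification and is excluded. Features
    are keyed here by a quantised (x, y, type) tuple, a simplification
    of real fingerprint-feature matching."""
    counts = Counter(f for feats in feature_sets for f in set(feats))
    return sorted(f for f, c in counts.items() if c >= threshold)

scans = [
    [(10, 20, "ridge_end"), (40, 55, "bifurcation")],
    [(10, 20, "ridge_end"), (40, 55, "bifurcation"), (90, 12, "ridge_end")],
    [(10, 20, "ridge_end"), (40, 55, "bifurcation")],
]
# keeps the two features seen in >= 2 scans, drops the one-off (90, 12, ...)
print(merge_features(scans, threshold=2))
```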
-
Publication number: 20200012844
Abstract: A method of recognizing a fingerprint includes generating, by using a fingerprint sensor, an input fingerprint image that is to be used when a fingerprint verification mode is executed; obtaining, from a memory, a registered fingerprint image that is generated from a finger image captured by a camera and stored in the memory prior to the generating the input fingerprint image; determining, from among partial regions of a registered fingerprint image that is obtained, a partial region of the registered fingerprint image, which is superimposed on the input fingerprint image, as a registered superimposed image; converting the registered superimposed image such that a first histogram of the registered superimposed image corresponds to a second histogram of the input fingerprint image; and determining, by comparing the registered superimposed image, which is converted, with the input fingerprint image, whether the input fingerprint image is verified.
Type: Application
Filed: March 12, 2019
Publication date: January 9, 2020
Applicant: Samsung Electronics Co., Ltd.
Inventors: Hyunjoon KIM, Jingu Heo
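The "convert so that the first histogram corresponds to the second histogram" step is commonly realised by CDF-based histogram matching; a minimal sketch on small grayscale pixel lists (the patent may use a different mapping):

```python
def match_histogram(src, ref, levels=256):
    """Remap the pixel values of `src` so that its histogram
    approximates that of `ref` (classic CDF-based histogram matching).
    Each grey level of src is mapped to the ref grey level whose
    cumulative frequency is closest."""
    def cdf(img):
        hist = [0] * levels
        for v in img:
            hist[v] += 1
        total, acc, out = len(img), 0, []
        for h in hist:
            acc += h
            out.append(acc / total)
        return out

    c_src, c_ref = cdf(src), cdf(ref)
    # for each grey level, find the ref level with the nearest CDF value
    lut = [min(range(levels), key=lambda g: abs(c_ref[g] - c)) for c in c_src]
    return [lut[v] for v in src]

dark = [10, 10, 20, 30]          # dark "registered superimposed image"
bright = [200, 210, 220, 230]    # brighter "input fingerprint image"
print(match_histogram(dark, bright))  # [210, 210, 220, 230]
```

After the remap, the two images share comparable brightness statistics, so the final comparison step is not thrown off by exposure differences between camera and sensor captures.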
-
Publication number: 20200012845
Abstract: Systems and methods for performing quantitative histopathology analysis for determining tissue potency are disclosed. According to some embodiments, a method for training a tissue classifier is provided. According to the method, training the tissue classifier includes generating feature fingerprints of detected nuclei within slide images in a control library and clustering the slide images based on their corresponding feature fingerprints. According to some embodiments, a method for utilizing the trained tissue classifier is provided. According to the method, the trained tissue classifier determines whether tissue in an unknown slide image corresponds to slide images clustered during the training of the tissue classifier.
Type: Application
Filed: July 2, 2019
Publication date: January 9, 2020
Applicant: Enzyvant Therapeutics, Inc.
Inventors: Alex TRACY, Kristin MARKS, Michael Thomas JOHNSON, Thomas Stephen VILLANI
-
Publication number: 20200012846
Abstract: A non-transitory computer readable medium embodies instructions that cause one or more processors to perform a method. The method includes: (A) receiving a selection of a 3D model stored in one or more memories, the 3D model corresponding to an object, and (B) setting a camera parameter set for a camera for use in detecting a pose of the object in a real scene. The method also includes (C) receiving a selection of data representing a view range, (D) generating at least one 2D synthetic image based on the camera parameter set by rendering the 3D model in the view range, (E) generating training data using the at least one 2D synthetic image to train an object detection algorithm, and (F) storing the generated training data in one or more memories.
Type: Application
Filed: September 17, 2019
Publication date: January 9, 2020
Applicant: SEIKO EPSON CORPORATION
Inventors: Ivo MORAVEC, Jie WANG, Syed Alimul HUDA
-
Publication number: 20200012847
Abstract: An object detection apparatus includes: a camera configured to capture an image of an object; one or more sensor devices, each of which is configured to detect an environmental change; and a processor configured to (a) execute a determining process that includes, when any one of the one or more sensor devices detects an environmental change, detecting a search starting point of the object based on at least one of a time corresponding to the detection and detection information from the sensor device, and (b) execute an entry registering process that includes registering an entry with reference information when the object is detected, the entry including at least one of the time and the detection information and a direction in which the object is detected, wherein the determining process is configured to determine the direction toward which the camera is to be turned based on the reference information.
Type: Application
Filed: September 17, 2019
Publication date: January 9, 2020
Applicant: Fujitsu Limited
Inventors: Akihito Yoshii, Toshikazu Kanaoka, Toru Kamiwada
-
Publication number: 20200012848
Abstract: A monitoring system includes an imaging unit configured to capture a depth image including a distance to a monitoring target person in a vehicle cabin, an estimation unit configured to estimate a three-dimensional human body model of the monitoring target person from the depth image captured by the imaging unit, and a monitoring unit configured to detect a get-off motion by monitoring a motion of a monitoring target portion of the human body model approaching a monitoring coordinate associated with a door manipulation in the cabin, on the basis of the human body model estimated by the estimation unit.
Type: Application
Filed: June 26, 2019
Publication date: January 9, 2020
Applicant: Yazaki Corporation
Inventor: Jun GOTO
-
Publication number: 20200012849
Abstract: A pedestrian retrieval method and apparatus that belong to the video surveillance field include extracting first feature data, second feature data, and third feature data of a target pedestrian image, where the target pedestrian image is an image of a to-be-retrieved pedestrian, and the first feature data, the second feature data, and the third feature data respectively include a plurality of pieces of body multidimensional feature data, a plurality of pieces of upper-body multidimensional feature data, and a plurality of pieces of lower-body multidimensional feature data of the target pedestrian image; screening stored multidimensional feature data based on the first feature data, the second feature data, and the third feature data to obtain a target feature data set; and outputting a pedestrian retrieval result using the target feature data set.
Type: Application
Filed: September 19, 2019
Publication date: January 9, 2020
Inventors: Wei Zhang, Maolin Chen, Bo Bai, Xianbo Mou
-
Publication number: 20200012850
Abstract: A real-time end-to-end system for capturing ink strokes written with ordinary pen and paper using a commodity video camera is described. Compared to traditional camera-based approaches, which typically separate out the pen tip localization and pen up/down motion detection, described is a unified approach that integrates these two steps using a deep neural network. Furthermore, the described system does not require manual initialization to locate the pen tip. A preliminary evaluation demonstrates the effectiveness of the described system on handwriting recognition for English and Japanese phrases.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Inventors: Chelhwon Kim, Patrick Chiu, Hideto Oda
-
Publication number: 20200012851
Abstract: Systems and associated methods relate to classification of documents according to their spectral frequency signatures using a deep neural network (DNN) and other forms of spectral analysis. In an illustrative example, a DNN may be trained using a set of predetermined patterns. A trained DNN may, during runtime, receive documents as inputs, where each document has been converted into a spectral format according to a two-dimensional (2D) Fourier transform. Some exemplary methods may extract periodicity/frequency information from the documents based on the spectral signature of each document. A clustering algorithm may be used in clustering/classification of documents, as well as in searching for documents similar to one or more target documents. A variety of implementations may save significant time for users in organizing, searching, and identifying documents in the areas of mergers and acquisitions, litigation, e-discovery, due diligence, governance, and investigatory activities, for example.
Type: Application
Filed: May 29, 2019
Publication date: January 9, 2020
Inventors: Brent G. Stanley, Joseph Vance Haynes
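The periodicity extraction can be illustrated in one dimension: project the page to an ink-per-scanline profile, take its DFT, and read off the dominant frequency, which for text corresponds to the line spacing. This is a stdlib stand-in for the patent's 2D Fourier transform, using a naive O(n²) DFT for clarity:

```python
import cmath

def dominant_frequency(profile):
    """Return the nonzero DFT frequency bin with the largest magnitude.

    `profile` is the amount of "ink" per scanline; regularly spaced
    text lines put a sharp peak at the line-spacing frequency, which
    can serve as one component of a document's spectral signature
    (a 1-D stand-in for the patent's 2-D transform)."""
    n = len(profile)
    mags = []
    for k in range(n // 2 + 1):
        s = sum(profile[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return max(range(1, len(mags)), key=lambda k: mags[k])

# 16 scanlines with ink on every 4th line -> dominant frequency 16/4 = 4
profile = [1 if t % 4 == 0 else 0 for t in range(16)]
print(dominant_frequency(profile))  # 4
```

Documents with similar layouts produce similar peak patterns, which is what makes the spectral signature usable as a clustering feature.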
-
Publication number: 20200012852
Abstract: One variation of a method for deploying sensors within an agricultural facility includes: accessing scan data of a set of modules deployed within the agricultural facility; extracting characteristics of plants occupying the set of modules from the scan data; selecting a first subset of target modules from the set of modules, each target module in the set of target modules containing a group of plants exhibiting characteristics representative of plants occupying modules neighboring the target module; for each target module, scheduling a robotic manipulator within the agricultural facility to remove a particular plant from a particular plant slot in the target module and load the particular plant slot with a sensor pod from a population of sensor pods deployed in the agricultural facility; and monitoring environmental conditions at target modules in the first subset of target modules based on sensor data recorded by the first population of sensor pods.
Type: Application
Filed: July 5, 2019
Publication date: January 9, 2020
Inventors: Winnie Ding, Jonathan Binney, Brandon Ace Alexander, Nicole Bergelin
-
Publication number: 20200012853
Abstract: A computer-implemented method for crop type identification using satellite observation and weather data. The method includes extracting current and historical data from pixels of satellite images of a target region, generating temporal sequences of vegetation indices, based on the weather data, converting each timestamp of the temporal sequences into a modified temporal variable correlating with actual crop growth, training a classifier using a set of historical temporal sequences of vegetation indices with respect to the modified temporal variable as training features and corresponding historically known crop types as training labels, identifying a crop type for each pixel location within the satellite images using the trained classifier and the historical temporal sequences of vegetation indices with respect to the modified temporal variable for a current crop season, and estimating a crop acreage value by aggregating identified pixels associated with the crop type.
Type: Application
Filed: September 16, 2019
Publication date: January 9, 2020
Inventors: Marcus O. Freitag, Hendrik F. Hamann, Levente Klein, Siyuan Lu
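One plausible reading of the "modified temporal variable correlating with actual crop growth" is a thermal-time axis such as growing degree days (GDD), a standard agronomic quantity derived from daily weather; the patent does not name GDD, so treat this as an illustrative assumption:

```python
def growing_degree_days(daily_temps, base=10.0, cap=30.0):
    """Cumulative growing degree days (GDD) after each day.

    GDD is a weather-derived 'thermal time' that tracks crop growth
    better than calendar time; re-indexing a vegetation-index series
    by GDD instead of date is one way to realise the patent's
    'modified temporal variable' (an assumption, not the patent's
    stated method). daily_temps holds (t_min, t_max) pairs in Celsius;
    base and cap are crop-dependent thresholds."""
    total, out = 0.0, []
    for t_min, t_max in daily_temps:
        mean = (min(t_max, cap) + max(t_min, base)) / 2
        total += max(0.0, mean - base)
        out.append(total)
    return out

temps = [(8, 22), (12, 28), (15, 35)]
print(growing_degree_days(temps))  # [6.0, 16.0, 28.5]
```

Two fields planted a week apart then line up on the same GDD axis, which is exactly the alignment property the classifier training step needs.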
-
Publication number: 20200012854
Abstract: A processing method that is performed by one or more processors is provided. The processing method includes determining a target video frame in a currently captured video; determining an object area in the target video frame based on a box selection model; determining a category of a target object in the object area based on a classification model used to classify an object in the object area; obtaining augmented reality scene information associated with the category of the target object; and performing augmented reality processing on the object area in the target video frame and the augmented reality scene information, to obtain an augmented reality scene.
Type: Application
Filed: September 17, 2019
Publication date: January 9, 2020
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
Inventors: Dan Qing FU, Hao XU, Cheng Quan LIU, Cheng Zhuo ZOU, Ting LU, Xiao Ming XIANG
-
Publication number: 20200012855
Abstract: An operation assistance apparatus 10 includes: an image data acquisition unit 11 configured to acquire image data according to equipment; an inquiry receiving unit 12 configured to receive an inquiry about an operation of equipment; an equipment identifying unit 13 configured to identify equipment included in an image, based on the acquired image data; a matched information extraction unit 14 configured to extract information that matches the received inquiry from information according to the identified equipment; and an information display control unit 15 configured to display the extracted information on a screen.
Type: Application
Filed: February 5, 2018
Publication date: January 9, 2020
Applicant: NEC CORPORATION
Inventors: Kazuki MIURA, Shuhei MIYAKE, Keisuke NAKAMURA, Shouta ITOU, Ryohei NAKAYAMA, Ryoko HORITA
-
Publication number: 20200012856
Abstract: Handheld optical devices (HODs), such as binoculars, spotting scopes and riflescopes, that have an integrated camera are disclosed, wherein the camera, via Bluetooth and/or Wi-Fi, sends an image to a mobile phone, which then processes the image with a third-party computer application for real-time identification of the object being viewed.
Type: Application
Filed: July 3, 2019
Publication date: January 9, 2020
Inventor: Philip E. Buchsbaum
-
Publication number: 20200012857
Abstract: Embodiments of the present disclosure disclose a method and apparatus for augmenting reality. A specific embodiment of the method includes: acquiring outline data of a plurality of building blocks satisfying a preset selection condition, the outline data being used to describe an outline of a building block in three-dimensional space; generating reference information based on projected line segments of the plurality of building blocks; determining, based on the reference information, a target building block in the plurality of building blocks and a superimposed region in an image acquired by the terminal, labeling information of the target building block being superimposed on the superimposed region; and superimposing the labeling information of the target building block on the determined superimposed region, to obtain an augmented reality image.
Type: Application
Filed: July 8, 2019
Publication date: January 9, 2020
Inventor: Zhilei Jiang
-
Publication number: 20200012858
Abstract: Aspects of the disclosure provide methods and apparatuses for processing an augmented reality (AR) scenario. In some examples, an apparatus includes processing circuitry. The processing circuitry obtains first feature point information in a first video frame according to a target marker image. The processing circuitry tracks, according to an optical flow tracking algorithm, a first feature point corresponding to the first feature point information. The processing circuitry determines second feature point information in a second video frame according to the tracked first feature point. The processing circuitry constructs a homography matrix between the second video frame and the target marker image according to the second feature point information and a first source feature point of the target marker image. The processing circuitry performs a first AR processing on the second video frame according to the homography matrix.
Type: Application
Filed: September 11, 2019
Publication date: January 9, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventor: Xiaoming XIANG
-
Publication number: 20200012859
Abstract: In accordance with various aspects of the present disclosure, methods and systems for fire detection are disclosed. In some embodiments, a method for fire detection includes: acquiring data related to a monitored area, wherein the data comprises image data related to the monitored area; determining, based on the acquired data, whether a first mode or a second mode is to be executed for fire detection, wherein the first mode comprises a first smoke detection, and the second mode comprises a first flame detection; and executing, by a hardware processor, at least one of the first mode or the second mode based on a result of the determination.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Applicant: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventors: Jia ZHENG, Jianguo TIAN, Huadong PAN
-
Publication number: 20200012860
Abstract: Example implementations described herein are directed to systems and methods for non-invasive data extraction from digital displays. In an example implementation, a method includes receiving one or more video frames from a video capture device capturing an external display, where the external display is independent of the video capture device; determining one or more locations within the external display comprising time varying data of the external display; and for each identified location of the time varying data: determining a data type; applying one or more rules based on the data type; and determining an accuracy of the time varying data within the one or more frames based on the rules.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Inventors: Joydeep ACHARYA, Satoshi KATSUNUMA, Sudhanshu GAUR
-
Publication number: 20200012861
Abstract: A media system generally includes a memory device that stores an event datastore that stores a plurality of event records, each event record corresponding to a respective event and event metadata describing at least one feature of the event. The media system (a) receives a request to generate an aggregated clip comprised of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of the one or more events; (d) generates the aggregated clip based on the respective media segments that depict the one or more events; and (e) transmits the aggregated clip to a user device.
Type: Application
Filed: September 17, 2019
Publication date: January 9, 2020
Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
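Steps (b) through (d) reduce to: score each event's metadata, keep the highest-scoring events, and stitch their segments back in time order. A minimal sketch; the interest weighting and metadata fields below are hypothetical, since the patent leaves scoring to the implementation:

```python
def build_aggregated_clip(events, max_segments):
    """Pick the most interesting events and return their media
    segments in chronological order. The interest function is a
    simple weighted score over event metadata (hypothetical
    weighting, not specified by the patent)."""
    def interest(event):
        return event["importance"] * 2 + event["crowd_noise"]

    top = sorted(events, key=interest, reverse=True)[:max_segments]
    top.sort(key=lambda e: e["timestamp"])   # restore playback order
    return [e["segment"] for e in top]

events = [
    {"segment": "goal.mp4", "timestamp": 55, "importance": 9, "crowd_noise": 8},
    {"segment": "throw_in.mp4", "timestamp": 12, "importance": 1, "crowd_noise": 2},
    {"segment": "save.mp4", "timestamp": 30, "importance": 7, "crowd_noise": 6},
]
print(build_aggregated_clip(events, max_segments=2))  # ['save.mp4', 'goal.mp4']
```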
-
Publication number: 20200012862
Abstract: A metadata generation system utilizes machine learning techniques to accurately describe content of videos based on multi-model predictions. In some embodiments, multiple feature sets are extracted from a video, including feature sets showing correlations between additional features of the video. The feature sets are provided to a learnable pooling layer with multiple modeling techniques, which generates, for each of the feature sets, a multi-model content prediction. In some cases, the multi-model predictions are consolidated into a combined prediction. Keywords describing the content of the video are determined based on the multi-model predictions (or combined prediction). An augmented video is generated with metadata that is based on the keywords.
Type: Application
Filed: July 5, 2018
Publication date: January 9, 2020
Inventors: Saayan Mitra, Viswanathan Swaminathan, Somdeb Sarkhel, Julio Alvarez Martinez, JR.
-
Publication number: 20200012863
Abstract: A concept for video data stream extraction is presented that is more efficient: for example, it is able to deal more efficiently with video content of a type unknown to the recipient, where videos of different types differ, for instance, in view-port-to-picture-plane projection, or it lessens the complexity of the extraction process. Further, a concept is described by which a juxtaposition of different versions of a video scene, the versions differing in scene resolution, may be provided to a recipient more efficiently.
Type: Application
Filed: September 19, 2019
Publication date: January 9, 2020
Inventors: Robert SKUPIN, Cornelius HELLGE, Benjamin BROSS, Thomas SCHIERL, Yago SÁNCHEZ DE LA FUENTE, Karsten SUEHRING, Thomas WIEGAND
-
Publication number: 20200012864
Abstract: An apparatus for video summarization using semantic information is described herein. The apparatus includes a controller, a scoring mechanism, and a summarizer. The controller is to segment an incoming video stream into a plurality of activity segments, wherein each frame is associated with an activity. The scoring mechanism is to calculate a score for each frame of each activity, wherein the score is based on a plurality of objects in each frame. The summarizer is to summarize the activity segments based on the score for each frame.
Type: Application
Filed: March 11, 2019
Publication date: January 9, 2020
Inventors: Myung Hwangbo, Krishna Kumar Singh, Teahyung Lee, Omesh Tickoo
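The controller/scoring/summarizer pipeline can be sketched end to end: score each frame from its detected objects, score a segment by its best frame, and keep the top segments. Using a plain object count as the frame score is a simplification of the semantic scoring described above, which would also weigh what the objects are:

```python
def summarize(segments, top_k=1):
    """Score each frame by its number of detected objects, score an
    activity segment by its best frame, and keep the top_k segments.
    `segments` is a list of (activity_name, frames) where each frame
    is a list of detected-object labels. The count-based frame score
    is an illustrative stand-in for semantic scoring."""
    scored = []
    for name, frames in segments:
        frame_scores = [len(objs) for objs in frames]
        scored.append((max(frame_scores), name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

segments = [
    ("walking", [["person"], ["person", "dog"]]),
    ("cooking", [["person", "pan", "stove"], ["pan"]]),
]
print(summarize(segments, top_k=1))  # ['cooking']
```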
-
Publication number: 20200012865
Abstract: A method of tracking a position of a target object in a video sequence includes identifying the target object in a reference frame. A generic mapping is applied to the target object being tracked. The generic mapping is generated by learning possible appearance variations of a generic object. The method also includes tracking the position of the target object in subsequent frames of the video sequence by determining whether an output of the generic mapping of the target object matches an output of the generic mapping of a candidate object.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventors: Ran TAO, Efstratios GAVVES, Arnold Wilhelmus Maria SMEULDERS
-
Publication number: 20200012866
Abstract: An input video sequence from a camera is filtered by a process that comprises detecting temporal tracks of moving image parts from the input video sequence and assigning activity scores to temporal segments of the tracks, using respective predefined track-dependent activity score functions for a plurality of different activity types. Based on this, event scores are computed as a function of time. This computation is controlled by a definition of a temporal sequence of activity types or compound activity types for an event type. Successive intermediate scores are computed, each as a function of time, for a respective activity type or compound activity type in the temporal sequence.
Type: Application
Filed: December 15, 2017
Publication date: January 9, 2020
Inventors: Gerardus Johannes BURGHOUTS, Victor Leonard WESTERWOUDT, Klamer SCHUTTE
-
Publication number: 20200012867
Abstract: A method of determining the boundary of a driveable space in a scene around a vehicle comprises: capturing a first colour image of the scene; computing a set of histograms of oriented gradients, for instance using a HOG algorithm, each histogram corresponding to a cell in a set of cells; assigning an entropy value to each cell by computing the entropy of the histogram for the cell; dividing the image into bins in a way that corresponds to a rectangular grid in the real world; and calculating an overall entropy value for each bin from the entropy values for the cells in the bin, together with an overall colour characteristic value for each bin. The entropy value and colour characteristic value for each bin are fed into a classifier that is configured to classify regions of the image corresponding to each bin as regions that are likely, or not likely, to be driveable space, from which the boundary is derived.
Type: Application
Filed: December 15, 2017
Publication date: January 9, 2020
Inventors: Martin John Thompson, Adam John Heenan, Oliver Payne, Stephen Crumpler
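The per-cell entropy step above is just the Shannon entropy of an orientation histogram; a short sketch showing why it discriminates road surface from structure (the example histograms are illustrative, not from the filing):

```python
import math

def histogram_entropy(hist):
    """Shannon entropy (in bits) of one cell's orientation histogram.

    A flat road surface gives weak gradients spread evenly across
    orientations (high entropy); kerbs, vehicles and lane markings
    give a peaked histogram (low entropy), which is why entropy is a
    useful driveable-space feature in the method above."""
    total = sum(hist)
    if total == 0:
        return 0.0
    return -sum((h / total) * math.log2(h / total) for h in hist if h)

uniform = [4, 4, 4, 4, 4, 4, 4, 4]   # texture-less tarmac cell
peaked = [29, 1, 0, 0, 0, 0, 1, 1]   # strong edge in one direction
print(round(histogram_entropy(uniform), 2))  # 3.0
print(round(histogram_entropy(peaked), 2))   # 0.6
```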
-
Publication number: 20200012868
Abstract: A device and method for recognizing an object included in an input image are provided. The device for recognizing the object included in the input image includes a memory in which at least one program is stored; a camera configured to capture an environment around the device; and at least one processor configured to execute the at least one program to recognize an object included in an input image, wherein the at least one program includes instructions to: obtain the input image by controlling the camera; obtain information about the environment around the device that obtains the input image; determine, based on the information about the environment, a standard for using a plurality of feature value sets in a combined way, the plurality of feature value sets being used to recognize the object in the input image; and recognize the object included in the input image, by using the plurality of feature value sets based on the determined standard for using the plurality of feature value sets in the combined way.
Type: Application
Filed: January 17, 2018
Publication date: January 9, 2020
Inventors: Hyun-seok HONG, Sahng-gyu PARK, Seung-hoon HAN, Bo-seok MOON
-
Publication number: 20200012869
Abstract: An obstacle avoidance reminding method includes: performing ground detection based on acquired image data to acquire ground information of a road; performing passability detection based on the acquired ground information, and determining a traffic state of the road; if it is determined that the road is impassable, performing road condition detection for the road to acquire a first detection result, and performing obstacle detection for the road to acquire a second detection result; and determining obstacle avoidance reminding information based on the first detection result and the second detection result.
Type: Application
Filed: July 8, 2019
Publication date: January 9, 2020
Inventors: Ye LI, Yimin LIN, Shiguo LIAN
-
Publication number: 20200012870
Abstract: A method for detecting false positives relating to a traffic light based on a video stream of images captured by a camera on board a motor vehicle, the traffic light being configured so as to switch between a plurality of states, each state being characterized by at least one colored zone representative of a signal. The method includes filtering the images, using a predetermined list of filtering criteria relating to the states of the traffic light, based on a predetermined history of the color of the pixels of the colored zone representing a luminous object detected in the video stream of images, so as to detect false positives.
Type: Application
Filed: February 1, 2018
Publication date: January 9, 2020
Inventors: Thibault Caron, Sophie Rony
-
Publication number: 20200012871
Abstract: Disclosed are an apparatus and a method for detecting a fallen object, which adjust a passenger's seat so that the fallen object in the vehicle can easily be picked up. A fallen object detecting apparatus according to one embodiment of the present disclosure is an apparatus for detecting a fallen object in a vehicle which includes a camera configured to generate at least one image of an inside of the vehicle; an image identifier configured to identify at least one passenger and at least one object from the image, and to determine a location of the fallen object in response to a falling of the object in the vehicle; and a controller configured to provide the determined location of the fallen object via at least one component located in the vehicle and adjust a seat of the passenger based on the location of the fallen object and a condition of the passenger.
Type: Application
Filed: September 18, 2019
Publication date: January 9, 2020
Applicant: LG ELECTRONICS INC.
Inventors: Chul Hee LEE, Hyun Kyu KIM, Ki Bong SONG, Sang Kyeong JEONG, Jun Young JUNG
-
Publication number: 20200012872
Abstract: A device (10) for determining a state of attentiveness of a driver of a vehicle (1) is disclosed. The device includes an image capture unit (11) onboard said vehicle (1), said image capture unit (11) being suitable for capturing at least one image of a detection area (D) located in said vehicle (1), and an image processing unit (15) suitable for receiving said captured image and programmed to determine the state of attentiveness of the driver (4), according to the detection of the presence of a distracting object in one of the hands of the driver (4), which hand is located in the detection area (D).
Type: Application
Filed: February 23, 2018
Publication date: January 9, 2020
Applicant: VALEO COMFORT AND DRIVING ASSISTANCE
Inventor: Frédéric Autran
-
Publication number: 20200012873
Abstract: Disclosed is a driving guide method for a vehicle. The driving guide method includes: acquiring predicted driving information of a vehicle that is driving manually; acquiring gaze information of a user of the vehicle; identifying at least one recognition pattern information that is acquired based on history information corresponding to the predicted driving information; identifying recognition pattern information corresponding to the gaze information from among the at least one recognition pattern information; and displaying information on a region of interest that is determined based on the recognition pattern information. One or more of an autonomous vehicle or a crime predicting apparatus of the present disclosure may be linked to an Artificial Intelligence (AI) module, an Unmanned Aerial Vehicle (UAV), a robot, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a 5G service-related device, etc.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Applicant: LG ELECTRONICS INC.
Inventor: Soryoung KIM
-
Publication number: 20200012874
Abstract: An electronic device is disclosed. The electronic device includes a wireless module configured to emit a first radar signal and receive a second radar signal, which is the first radar signal reflected by a user; a gravity sensor configured to sense a status of the electronic device to generate a sensing result; and a control unit coupled to the wireless module and the gravity sensor, and configured to control the wireless module to emit the first radar signal when the sensing result conforms to an emitting condition and determine a physiological status of the user according to the second radar signal received by the wireless module.
Type: Application
Filed: October 11, 2018
Publication date: January 9, 2020
Inventors: Chih-Teng Shen, Cheng-Wei Chang
-
Publication number: 20200012875
Abstract: An imaging system (e.g., hyperspectral imaging system) receives an indication to compare a first object and a second object (e.g., two anatomical structures or organs in a medical environment). The imaging system accesses a classification vector for the first object and the second object, the classification vector having been extracted by separating a plurality of collected reflectance values for the first object from a plurality of collected reflectance values for the second object. A set of optimal illumination intensities for one or more spectral illumination sources of the imaging system is determined based on the extracted classification vector. The first and second objects are illuminated with the determined illumination intensities. A high-contrast image of the first and second objects is provided for display, such that the two objects can be readily distinguished in the image. The intensity of pixels in the image is determined by the illumination intensities.
Type: Application
Filed: September 18, 2019
Publication date: January 9, 2020
Inventors: Eden Rephaeli, Vidya Ganapati, Daniele Piponi, Thomas Teisseyre
-
Publication number: 20200012876
Abstract: This application provides a text detection method, including: obtaining, by a computer device, an image; inputting the image into a neural network, and outputting a target feature matrix; inputting the target feature matrix into a fully connected layer, the fully connected layer mapping each element of the target feature matrix to a predicted subregion corresponding to the image according to a preset anchor; and obtaining text feature information of the predicted subregion, connecting the predicted subregion into a corresponding predicted text line according to the text feature information of the predicted subregion by using a text clustering algorithm, and determining a text area corresponding to the image.
Type: Application
Filed: September 16, 2019
Publication date: January 9, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventor: Ming LIU
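The clustering step that connects anchor-based subregions into text lines could look roughly like the following (a hypothetical sketch; the gap and vertical-offset thresholds are assumptions, not from the patent):

```python
def connect_subregions(boxes, max_gap=20, max_dy=5):
    """Greedily connect subregion top-left corners (x, y), scanned left
    to right, into text lines: a box joins a line if it is close enough
    horizontally and vertically to the line's last box."""
    lines = []
    for x, y in sorted(boxes):
        for line in lines:
            lx, ly = line[-1]
            if abs(y - ly) <= max_dy and 0 <= x - lx <= max_gap:
                line.append((x, y))
                break
        else:
            lines.append([(x, y)])  # start a new text line
    return lines
```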
-
Publication number: 20200012877
Abstract: There is provided with an information processing apparatus. From an image capturing apparatus which can move, a captured image of an object is obtained. The position of the image capturing apparatus is derived using the captured image and a three-dimensional map. The three-dimensional map is corrected using information indicating a reliability of: information indicating the three-dimensional position of the feature included in a predefined area in the three-dimensional map held by the holding unit; and information indicating a three-dimensional position of another feature of the object, the information obtained based on the captured image from an expanded area of the predefined area in the three-dimensional map held by the holding unit.
Type: Application
Filed: June 25, 2019
Publication date: January 9, 2020
Inventors: Daisuke Kotake, Akihiro Katayama, Makoto Tomioka, Nozomu Kasuya, Takayuki Yamada, Masahiro Suzuki, Masakazu Fujiki
-
Publication number: 20200012878
Abstract: Provided is an image recognition system that can easily perform image recognition on a side face of an item. An image recognition system according to one example embodiment of the present invention includes: a placement stage used for placing an item below an image capture device arranged to capture images in a downward direction; a support structure configured to support the item at a predetermined angle relative to a top face of the placement stage; and an image recognition apparatus that identifies the item by performing image recognition on an image of the item acquired by the image capture device.
Type: Application
Filed: February 8, 2018
Publication date: January 9, 2020
Applicant: NEC CORPORATION
Inventors: Ryoma IIO, Kota IWAMOTO, Hideo YOKOI, Ryo TAKATA, Kazuya KOYAMA
-
Publication number: 20200012879
Abstract: A text region positioning method and device, and a computer readable storage medium, which relate to the field of image processing. The text region positioning method includes acquiring a variance graph on the basis of an original image; acquiring an edge image of the variance graph; and, if the difference among the distances between opposing edge points of two adjacent edge lines in the edge image is within a preset distance difference range, determining the region between the two adjacent edge lines as a text region.
Type: Application
Filed: December 29, 2017
Publication date: January 9, 2020
Applicants: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., BEIJING JINGDONG CENTURY TRADING CO., LTD.
Inventors: Yongliang WANG, Qingze WANG, Biaolong CHEN
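The edge-line distance test reduces to a simple geometric check, sketched here under assumed inputs (each edge line given as a list of coordinates of its points at opposing positions; the spread threshold is illustrative, not from the patent):

```python
def is_text_region(edge_line_a, edge_line_b, max_spread=3):
    """Two adjacent edge lines bound a text region if the point-to-point
    distances between them are nearly uniform, i.e. their spread stays
    within the preset distance-difference range."""
    dists = [abs(a - b) for a, b in zip(edge_line_a, edge_line_b)]
    return max(dists) - min(dists) <= max_spread
```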
-
Publication number: 20200012880
Abstract: A method and an apparatus for recognizing a descriptive attribute of an appearance feature include obtaining a location feature of an appearance feature of a target image to determine a location of a part of an object in a preset object model indicated by the appearance feature, where the location feature of the appearance feature indicates the location of the part of the object in the preset object model indicated by the appearance feature, recognizing a target region based on the location feature, where the target region includes the part of the object, performing feature analysis on the target region, recognizing a descriptive attribute of the appearance feature of the object, and determining the location feature of the appearance feature having a local attribute.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventors: Chunfeng Yao, Bailan Feng, Defeng Li
-
Publication number: 20200012881
Abstract: A real time video analytic processor that uses a trained convolutional neural network that embodies algorithms and processing architectures that process a wide variety of sensor images in a fashion that emulates how the human visual path processes and interprets image content. Spatial, temporal, and color content of images are analyzed and the salient features of the images determined. These salient features are compared to the salient features of objects of user interest in order to detect, track, classify, and characterize the activities of the objects. Objects or activities of interest are annotated in the image streams and alerts of critical events are provided to the user. Instantiation of the cognitive processing can be accomplished on multi-FPGA and multi-GPU processing hardware.
Type: Application
Filed: July 20, 2018
Publication date: January 9, 2020
Applicant: Irvine Sensors Corporation
Inventors: James Justice, David Ludwig, Virgilio Villacorta, Omar Asadi, Fredrik Knutson, Eric Weaver, Mannchuoy Yam
-
Publication number: 20200012882
Abstract: A method for 2D feature tracking by cascaded machine learning and visual tracking comprises: applying a machine learning technique (MLT) that accepts as a first MLT input first and second 2D images, the MLT operating on the images to provide initial estimates of a start point for a feature in the first image and a displacement of the feature in the second image relative to the first image; applying a visual tracking technique (VT) that accepts as a first VT input the initial estimates of the start point and the displacement, and that accepts as a second VT input the two 2D images, processing the first and second inputs to provide refined estimates of the start point and the displacement; and displaying the refined estimates in an output image.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Applicant: Sony Corporation
Inventors: Ko-Kai Albert Huang, Ming-Chang Liu
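The coarse-then-refine cascade can be illustrated on 1-D "images" (a hypothetical toy stand-in; the peak-matching MLT surrogate and the local-search VT surrogate are assumptions, not the patent's actual techniques):

```python
def ml_initial_estimate(sig1, sig2):
    """Stand-in for the MLT stage: coarse start point and displacement
    taken from the peak positions of two 1-D signals."""
    start = sig1.index(max(sig1))
    return start, sig2.index(max(sig2)) - start

def visual_refine(start, disp, sig1, sig2, radius=2):
    """Stand-in for the VT stage: local search around the coarse
    displacement for the best intensity match."""
    candidates = range(disp - radius, disp + radius + 1)
    best = min(candidates,
               key=lambda d: abs(sig2[(start + d) % len(sig2)] - sig1[start]))
    return start, best
```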
-
Publication number: 20200012883
Abstract: A surveillance method using multi-dimensional sensor data for use in a surveillance system is provided. The surveillance system includes a plurality of sensors installed within a scene, and the plurality of sensors are classified into a plurality of types. The surveillance method includes the steps of: obtaining each type of sensor data from the scene using the sensors; performing a local-object process on each type of sensor data to generate local-object-feature information for each type; performing a global-object process according to the local-object-feature information of each type to generate global-object-feature information; and performing a global-object recognition process on the global-object-feature information to generate a global-recognition result.
Type: Application
Filed: November 23, 2018
Publication date: January 9, 2020
Inventors: Fang-Wen KUO, Chih-Ming CHEN
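The local-to-global fusion step could be as simple as concatenating per-type feature vectors, sketched below (a hypothetical minimal sketch; deterministic ordering by type name is an assumption, not from the patent):

```python
def fuse_local_features(per_type):
    """Fuse each sensor type's local-object-feature vector into a single
    global-object-feature vector, ordered by type name for determinism."""
    return [v for sensor_type in sorted(per_type) for v in per_type[sensor_type]]
```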
-
Publication number: 20200012884
Abstract: Systems and techniques for classification based on annotation information are presented. In one example, a system trains a convolutional neural network based on training data and a plurality of images. The training data is associated with a plurality of patients from at least one imaging device. The plurality of images is associated with a plurality of masks from a plurality of objects. The system also generates a first loss function based on the plurality of masks, a second loss function based on a plurality of image level labels associated with the plurality of images, and a third loss function based on the first loss function and the second loss function, where the third loss function is iteratively back propagated to tune parameters of the convolutional neural network. The system also predicts a classification label for an input image based on the convolutional neural network.
Type: Application
Filed: August 3, 2018
Publication date: January 9, 2020
Inventors: Qian Zhao, Min Zhang, Gopal Avinash
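One common way to build a third loss from two others is a weighted combination (a hypothetical sketch; the convex-combination form and the `alpha` weight are assumptions, the patent only states that the third loss is based on the first two):

```python
def combined_loss(mask_loss, label_loss, alpha=0.5):
    """Third loss as a weighted combination of the mask (segmentation)
    loss and the image-level label (classification) loss; this scalar is
    what would be back-propagated each training iteration."""
    return alpha * mask_loss + (1.0 - alpha) * label_loss
```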
-
Publication number: 20200012885
Abstract: A method for image processing includes: acquiring features of multiple images of a target object and a standard feature of the target object; and determining trusted images of the target object from the multiple images of the target object according to similarities between the features of the multiple images of the target object and the standard feature thereof, wherein similarities between features of the trusted images of the target object and the standard feature of the target object meet a preset similarity requirement. The image processing method may be applied to application scenarios such as image comparison, identity recognition, target object search, and similar target object determination.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventors: Nan JIANG, Mingyu Guo
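A trusted-image filter of this kind reduces to thresholding a similarity measure (a hypothetical sketch; cosine similarity and the 0.9 threshold are assumptions, the patent only requires a preset similarity requirement):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def trusted_images(features, standard, threshold=0.9):
    """Keep the indices of images whose feature vectors are similar
    enough to the target object's standard feature."""
    return [i for i, f in enumerate(features) if cosine(f, standard) >= threshold]
```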
-
Publication number: 20200012886
Abstract: Systems and methods for clustering data are disclosed. For example, a system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving data from a client device and generating preliminary clustered data based on the received data, using a plurality of embedding network layers. The operations may include generating a data map based on the preliminary clustered data using a meta-clustering model. The operations may include determining a number of clusters based on the data map using the meta-clustering model and generating final clustered data based on the number of clusters using the meta-clustering model. The operations may include transmitting the final clustered data to the client device.
Type: Application
Filed: July 3, 2019
Publication date: January 9, 2020
Applicant: CAPITAL ONE SERVICES, LLC
Inventors: Austin WALTERS, Jeremy GOODSITT, Anh TRUONG, Reza FARIVAR
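Once the meta-clustering model has fixed the number of clusters, the final clustering step could be as plain as k-means (a hypothetical 1-D stand-in; the initialization scheme and iteration count are assumptions, not the patented model):

```python
def final_clusters(points, k, iters=10):
    """Plain 1-D k-means as a stand-in for producing the final clustered
    data after the cluster count k has been determined."""
    # Seed centers by sampling the sorted points at regular intervals.
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest center.
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)
```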