Patent Applications Published on January 17, 2019
  • Publication number: 20190019011
    Abstract: Identifying real objects in an area of interest for use in identifying virtual content a user is authorized to view using an augmented reality device. Particular methods and systems determine a set of real objects that are near a first position of a first augmented reality device, determine, from the set of real objects, a first subset of real objects that are associated with virtual content that the first augmented reality device is permitted to display, and for each real object in the first subset of real objects, transmit virtual content associated with that real object to the first augmented reality device.
    Type: Application
    Filed: June 5, 2018
    Publication date: January 17, 2019
    Inventors: David ROSS, Alexander F. HERN
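The proximity-and-permission filtering flow described in this abstract can be sketched as follows; the object record shapes, the tag-based permission model, and the planar distance check are illustrative assumptions, not details from the application:

```python
import math

def content_for_device(device_pos, device_perms, objects, radius):
    """Return virtual content for permitted real objects near the device."""
    payload = []
    for obj in objects:
        dx = obj["pos"][0] - device_pos[0]
        dy = obj["pos"][1] - device_pos[1]
        if math.hypot(dx, dy) > radius:
            continue  # not in the area of interest around the device
        if obj["perm_tag"] not in device_perms:
            continue  # device not authorized to display this object's content
        payload.append(obj["content"])  # would be transmitted to the AR device
    return payload
```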
  • Publication number: 20190019012
    Abstract: Systems, methods, and non-transitory computer readable media can identify a user associated with a device based at least in part on analysis of a subset of media content items on the device. A relationship between the user and one or more other users depicted in the media content items can be determined. A recommendation relating to sending at least one media content item on the device to at least one of the one or more other users can be generated based on the determined relationship.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 17, 2019
    Inventors: Xun Wilson Huang, Jun Sun, Zhiyang Wang, Wenjie Lin, Jieqi Yu, Farhan Khan
  • Publication number: 20190019013
    Abstract: The present invention contributes to decreasing false recognition due to the lighting environment. A facial recognition apparatus comprises a photographing parameter input unit that receives a photographing parameter(s); a lighting information estimation unit that estimates lighting information based on the photographing parameter(s); and a recognition accuracy control unit that controls a recognition accuracy parameter(s) based on the lighting information.
    Type: Application
    Filed: August 21, 2018
    Publication date: January 17, 2019
    Applicant: NEC Corporation
    Inventor: Yuusuke TOMITA
  • Publication number: 20190019014
    Abstract: A computing system includes a processing system with at least one processing unit. The processing system is configured to execute a face alignment method upon receiving image data with a facial image. The processing system is configured to apply a neural network to the facial image. The neural network is configured to provide a final estimate of parameter data for the facial image based on the image data and an initial estimate of the parameter data. The neural network includes at least one visualization layer, which is configured to generate a feature map based on a current estimate of the parameter data. The parameter data includes head pose data and face shape data.
    Type: Application
    Filed: July 13, 2017
    Publication date: January 17, 2019
    Inventors: Mao Ye, Liu Ren, Amin Jourabloo
  • Publication number: 20190019015
    Abstract: Disclosed are a method and a related product for recognizing a live face. The method is applied to a terminal device including a front light emitting source, a front camera and an Application Processor (AP). The method includes the following actions. Upon reception of a face image collection instruction, the camera collects a first face image of a face when the source is in a first state. The camera collects a second face image of the face when the source is in a second state. The AP determines whether a difference between an eyeball area proportion in the first face image and an eyeball area proportion in the second face image is larger than a preset threshold; and if so, the AP determines that the collected face images are images of a live face.
    Type: Application
    Filed: July 5, 2018
    Publication date: January 17, 2019
    Applicant: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Cheng Tang, Xueyong Zhang, Yibao Zhou, Haitao Zhou
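The core comparison in this abstract reduces to a threshold test on the two measured eyeball-area proportions. A minimal sketch, assuming the proportions have already been extracted from the two images (the segmentation step is out of scope) and with an illustrative threshold value:

```python
def is_live_face(prop_first_state, prop_second_state, threshold=0.05):
    """A live eye reacts to the change in the light source's state, so the
    eyeball-area proportion shifts between the two images; a printed photo
    or screen replay stays nearly constant."""
    return abs(prop_first_state - prop_second_state) > threshold
```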
  • Publication number: 20190019016
    Abstract: An apparatus detects an object in a frame image of a moving image captured by each image capturing apparatus in an area, acquires an image feature of the detected object, tracks the detected object in one moving image, acquires an image feature of an object to be searched from the detected object, searches for an object having a similarity equal to or higher than a predetermined threshold from the acquired image features and identifies the object, acquires an image feature of the tracked object and corresponding to the identified object from the acquired image feature, and searches for an object having a similarity equal to or higher than the predetermined threshold from the acquired image features and identifies the object.
    Type: Application
    Filed: July 9, 2018
    Publication date: January 17, 2019
    Inventors: Kazuyo Ikeda, Hirotaka Shiiyama
  • Publication number: 20190019017
    Abstract: Techniques for distinguishing objects (e.g., an individual or an individual pushing a shopping cart) are disclosed. An object is detected in images of a scene. A height map is generated from the images, and the object is represented as height values in the height map. Based on height properties associated with another object, it is determined whether the other object is associated with the object. If so determined, the objects are classified separately.
    Type: Application
    Filed: September 6, 2018
    Publication date: January 17, 2019
    Inventors: Zhiqian WANG, Edward A. Marcheselli, Gary Dispensa, Thomas D. Stemen, William C. Kastilahn
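The height-property test in this abstract can be sketched as partitioning a detected blob's height-map cells into two populations, e.g. a person (tall cells) pushing a shopping cart (low cells). The per-cell representation and the height thresholds are assumptions for illustration only:

```python
def split_by_height(cells, person_min=1.4, cart_max=1.2):
    """Partition (x, y, height_m) cells into person-like and cart-like groups."""
    person = [c for c in cells if c[2] >= person_min]
    cart = [c for c in cells if c[2] <= cart_max]
    # Classify the objects separately only when both height populations exist.
    separate = bool(person) and bool(cart)
    return separate, person, cart
```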
  • Publication number: 20190019018
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Application
    Filed: September 20, 2018
    Publication date: January 17, 2019
    Applicant: King Fahd University of Petroleum and Minerals
    Inventors: SABRI A. MAHMOUD, Ala Addin Sidig
  • Publication number: 20190019019
    Abstract: A people stream analysis apparatus includes an image information capturer that captures an external appearance image of a person, a person recognizer that recognizes the person from the external appearance image, a store inferrer that identifies from the external appearance image a possession carried by the person, and infers from the identified possession a store from which the possession has been obtained, a memory that stores, in an associated form, person information indicating the recognized person, store information indicating the inferred store, and time information indicating time at which the external appearance image has been captured, and an arrival store order determiner that determines an order of stores that the person has visited, based on a change in a time sequence of listing of stores indicated by the store information stored on the memory.
    Type: Application
    Filed: July 3, 2018
    Publication date: January 17, 2019
    Inventors: YURI NISHIKAWA, JUN OZAWA
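The arrival-store-order step above amounts to ordering stores by their first appearance in the time-sorted records. A sketch, assuming the memory yields (time, store) pairs for one recognized person; the record shape is illustrative:

```python
def store_visit_order(records):
    """Order stores by first appearance in the time-sorted (time, store) records."""
    order = []
    for _, store in sorted(records):
        if store not in order:  # first sighting of a possession from this store
            order.append(store)
    return order
```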
  • Publication number: 20190019020
    Abstract: Systems, methods and computer program products for image recognition in which instructions are executable by a processor to dynamically generate simulated documents and corresponding images, which are then used to train a fully convolutional neural network. A plurality of document components are provided, and the processor selects subsets of the document components. The document components in each subset are used to dynamically generate a corresponding simulated document and a simulated document image. The convolutional neural network processes the simulated document image to produce a recognition output. Information corresponding to the document components from which the image was generated is used as an expected output. The recognition output and expected output are compared, and weights of the convolutional neural network are adjusted based on the differences between them.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 17, 2019
    Inventors: Arnaud Gilles Flament, Christopher Dale Lund, Guillaume Bernard Serge Koch, Denis Eric Goupil
  • Publication number: 20190019021
    Abstract: The present disclosure relates to simulating the capture of images. In some embodiments, a document and a camera are simulated using a three-dimensional modeling engine. In certain embodiments, a plurality of images are captured of the simulated document from a perspective of the simulated camera, each of the plurality of images being captured under a different set of simulated circumstances within the three-dimensional modeling engine. In some embodiments, a model is trained based at least on the plurality of images which determines at least a first technique for adjusting a set of parameters in a separate image to prepare the separate image for optical character recognition (OCR).
    Type: Application
    Filed: July 13, 2017
    Publication date: January 17, 2019
    Inventors: Kimia HASSANZADEH, Richard J. BECKER, Cole MACKENZIE, Greg COULOMBE
  • Publication number: 20190019022
    Abstract: Methods and systems for incorporating physical documents into a document review workflow involving electronic documents. One or more embodiments detect a presence of a physical document within a field of view of an AR device and map the physical document to an existing electronic document based on visual features of the physical document. Additionally, one or more embodiments determine at least one difference between the physical document and the electronic document and create, for the physical document and the electronic document, a shared state mapping including the difference(s). One or more embodiments then apply the difference to the physical document or the electronic document by displaying the difference(s) in an AR layer within the field of view of the AR device or storing the difference(s) in the electronic document.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Vikas Marda, Roshni Sheikh, Kartik Sachan, Aman Singhal
  • Publication number: 20190019023
    Abstract: A gaze-tracking system for a head-mounted display apparatus includes a first set of illuminators for emitting light to illuminate a user's eye; at least one photo sensor for sensing reflections of the light from the user's eye; at least one actuator for moving at least one of: (i) the first set of illuminators, (ii) the at least one photo sensor; and a processor coupled with the first set of illuminators, the at least one photo sensor and the at least one actuator. The processor is configured to collect and process sensor data from the at least one photo sensor to detect a gaze direction of the user, and to control the at least one actuator to adjust, based upon the detected gaze direction, a position of the at least one of: (i) the first set of illuminators, (ii) the at least one photo sensor.
    Type: Application
    Filed: July 13, 2017
    Publication date: January 17, 2019
    Inventors: Urho Konttori, Klaus Melakari, Thiyagarajan Manihatty Bojan, Ville Miettinen
  • Publication number: 20190019024
    Abstract: A method for iris recognition performed by related products includes the following. A mobile terminal collects a target black-and-white iris image through an iris camera and collects a target color iris image through a front camera, when an iris collecting instruction is received. The target color iris image is displayed in an iris recognition area of a display screen, where the target color iris image is configured to indicate to a user that the mobile terminal is performing iris recognition. The target black-and-white iris image is processed for iris recognition.
    Type: Application
    Filed: July 2, 2018
    Publication date: January 17, 2019
    Inventors: Yibao Zhou, Xueyong Zhang, Cheng Tang, Haitao Zhou
  • Publication number: 20190019025
    Abstract: A mobile information terminal includes an emitted-light polarizing filter having a transmission axis in a first direction, a received-light polarizing filter having a transmission axis in a second direction, an infrared light source emitting near infrared light through the emitted-light polarizing filter, and an image pickup section receiving reflected light generated when the near infrared light is reflected off an object, through the received-light polarizing filter. The second direction has such an angle determined with respect to the first direction that the received-light polarizing filter blocks at least part of light having a polarization property in the reflected light.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Inventors: SHINOBU YAMAZAKI, TAKASHI NAKANO, YUKIO TAMAI, DAISUKE HONDA
  • Publication number: 20190019026
    Abstract: A method of detecting fraud during identification by iris recognition, the method comprising the following steps: capturing an image of each eye of a person for identification (50), namely a first image (61) and a second image (71); extracting a first set of first characteristics from the first image (61); extracting a second set of second characteristics from the second image (71); evaluating a correlation coefficient between the first and second characteristics; and as a function of the value of the correlation coefficient, signaling an attempt at fraud or continuing with identification by eye recognition. An identification terminal arranged to perform the method.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Inventors: Hervé JAROSZ, Emine KRICHEN, Jean-Noël BRAUN
  • Publication number: 20190019027
    Abstract: Disclosed are a method and mobile terminal for processing an image and storage medium. The method includes collecting color information of a target object; determining at least one of multiple pre-stored compensation parameters as at least one compensation parameter corresponding to the color information; and during iris collection, compensating an iris based on the at least one target compensation parameter to obtain a color iris image.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 17, 2019
    Applicant: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Haitao ZHOU, Yibao ZHOU, Cheng TANG, Xueyong ZHANG
  • Publication number: 20190019028
    Abstract: In one aspect, a device includes at least one processor, a camera, at least one sensor, a display, and storage. The storage bears instructions executable by the at least one processor to receive input from the at least one sensor and determine, based on the input, whether a user is making physical contact with the device. Based on a determination that the user is not making physical contact with the device, the instructions are executable by the at least one processor to execute scrolling of content based on input from the camera, with the content presented on the display. Based on a determination that the user is making physical contact with the device, the instructions are executable by the at least one processor to decline to execute scrolling of content based on input from the camera.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 17, 2019
    Inventors: John Carl Mese, Russell Speight VanBlon, Nathan J. Peterson
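The gating logic claimed above is a simple conditional; in this sketch the sensor and camera inputs are abstracted to plain values, which is an assumption for illustration:

```python
def should_camera_scroll(touch_sensor_active, camera_scroll_request):
    """Camera-driven scrolling runs only while the user is NOT touching the device."""
    if touch_sensor_active:
        return False  # decline camera-based scrolling during physical contact
    return camera_scroll_request
```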
  • Publication number: 20190019029
    Abstract: An image monitoring system includes: recording means for recording an image captured by a camera via a network; control means for controlling the system so as to display the present image captured by the camera or a past image recorded on the recording means on display means; and moving-object detecting means for detecting a moving object from the image captured by the camera; wherein the moving-object detecting means includes resolution conversion means for generating an image with a resolution lower than the resolution of the image captured by the camera, positional-information output means for detecting a moving object from the image generated by the resolution conversion means and outputting positional information on the detected moving object, and information merging means for merging the positional information of the moving object with the image captured by the camera on the basis of the positional information of the moving object output by the positional-information output means.
    Type: Application
    Filed: September 18, 2018
    Publication date: January 17, 2019
    Inventors: Masaki Demizu, Miki Murakami
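The merging step above requires mapping positional information found in the low-resolution image back into the full-resolution camera image's coordinates. A minimal sketch, assuming an (x, y, w, h) box format and a uniform downscale factor:

```python
def to_full_res(box_lowres, scale):
    """Map an (x, y, w, h) detection box from the low-res image back to
    camera-image coordinates so it can be merged with the captured frame."""
    x, y, w, h = box_lowres
    return (x * scale, y * scale, w * scale, h * scale)
```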
  • Publication number: 20190019030
    Abstract: A method and system detect and localize multiple instances of an object by first acquiring a frame of a three-dimensional (3D) scene with a sensor, and extracting features from the frame. The features are matched according to appearance similarity and triplets are formed among matching features. Based on 3D locations of the corresponding points in the matching triplets, a geometric transformation is computed. Matching triplets are clustered according to the computed geometric transformations. Since the set of features coming from two different object instances should have a single geometric transform, the output of clustering provides the features and poses of each object instance in the image.
    Type: Application
    Filed: October 23, 2017
    Publication date: January 17, 2019
    Applicants: Mitsubishi Electric Research Laboratories, Inc., Mitsubishi Electric Corporation
    Inventors: Esra Cansizoglu, Wim Abbeloos, Sergio Salvatore Caccamo, Yuichi Taguchi, Yukiyasu Domae
  • Publication number: 20190019031
    Abstract: Provided is a dynamic object detecting technique, and more specifically, a system and method for determining a state of a motion of a camera on the basis of a local motion estimated on the basis of a video captured by a dynamic camera and a result of analyzing a global motion, flexibly updating a background model according to the state of the motion of the camera, and flexibly detecting a dynamic object according to the state of the motion of the camera.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ki Min YUN, Yong Jin KWON, Jin Young MOON, Sung Chan OH, Jong Youl PARK, Jeun Woo LEE
  • Publication number: 20190019032
    Abstract: A computer-implemented method includes: detecting, by a virtual wearable computing device, a hazardous condition based on monitoring a proximity of a user wearing the virtual wearable computing device to a physical obstruction; and alerting, by the virtual wearable computing device, the user regarding the detection of the hazardous condition.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: James E. Bostick, John M. Ganci, JR., Martin G. Keen, Sarbajit K. Rakshit
  • Publication number: 20190019033
    Abstract: An apparatus for generating olfactory information related to multimedia content may comprise a processor. The processor may receive multimedia content, extract an odor image or an odor sound included in the multimedia content, and generate representative data related to the odor image or the odor sound by describing information on the extracted odor image or odor sound in a data format sharable by a media thing.
    Type: Application
    Filed: November 27, 2017
    Publication date: January 17, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Sung June CHANG, Hae Ryong LEE, Jun Seok PARK, Joon Hak BANG, Jong Woo CHOI, Sang Yun KIM, Hyung Gi BYUN, Jang Sik CHOI
  • Publication number: 20190019034
    Abstract: A marker tracking system configured to detect light patterns (e.g., infrared light patterns) generated by one or more markers is described. A given marker is configured with a code which identifies the marker in a motion tracking camera field of view. Motion tracking camera(s) record the emitted infrared light and are configured to directly, or in conjunction with an associated computing device, computationally distinguish a given marker with high accuracy and efficiently.
    Type: Application
    Filed: September 20, 2018
    Publication date: January 17, 2019
    Inventors: Andrew C. Beall, Matthias Pusch
  • Publication number: 20190019035
    Abstract: A method of operating a mobile terminal includes obtaining at least one image and determining event information that is to be associated with the obtained at least one image. The method also includes storing, in computer memory, the obtained at least one image and information that associates the obtained at least one image with the event information. The method additionally includes detecting an event on the mobile terminal, and determining that the detected event corresponds to the event information. The method further includes displaying, on a display of the mobile terminal and based on the determination that the detected event corresponds to the event information, a first image among the at least one image that has been stored and associated with the event information.
    Type: Application
    Filed: September 4, 2018
    Publication date: January 17, 2019
    Inventors: Juhyun LEE, Byoungzoo JEONG, Suyoung LEE, Eugene MYUNG, Nayeoung KIM
  • Publication number: 20190019036
    Abstract: An electronic device which trains a video classification model based on a neural network and classifies a video based on the trained video classification model, and an operating method thereof, are provided. The electronic device includes a memory and a processor functionally coupled with the memory, and the processor is configured to acquire label information corresponding to a video, generate a representative frame representing the video based on a plurality of frames included in the video, extract a feature corresponding to the video by iteratively inputting the representative frame to a video classification model, and train the video classification model based on the extracted feature.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 17, 2019
    Inventor: Jaehyeon YOO
  • Publication number: 20190019037
    Abstract: Systems and methods for improving video understanding tasks based on higher-order object interactions (HOIs) between object features are provided. A plurality of frames of a video are obtained. A coarse-grained feature representation is generated by generating an image feature for each of a plurality of timesteps respectively corresponding to each of the frames and performing attention based on the image features. A fine-grained feature representation is generated by generating an object feature for each of the plurality of timesteps and generating the HOIs between the object features. The coarse-grained and the fine-grained feature representations are concatenated to generate a concatenated feature representation.
    Type: Application
    Filed: May 14, 2018
    Publication date: January 17, 2019
    Inventors: Asim Kadav, Chih-Yao Ma, Iain Melvin, Hans Peter Graf
  • Publication number: 20190019038
    Abstract: Hazardous or dangerous conditions may be monitored. A mode may be set to a state indicative of the condition being present. It may then be determined that the hazardous or dangerous condition has eased. An indication of the hazardous or dangerous condition easing may be output in response to the determination. Such an indication may be output as synthesized speech.
    Type: Application
    Filed: September 17, 2018
    Publication date: January 17, 2019
    Applicant: Google LLC
    Inventors: David Sloo, Nicholas Unger Webb, Matthew Lee Rogers, Anthony Michael Fadell, Jeffery Theodore Lee, Sophie Le Guen, Andrew W. Goldenson
  • Publication number: 20190019039
    Abstract: A surveillance system may comprise a control device and at least one robotic device. The control device is associated with a user and configured to request to connect to the at least one robotic device and, in response to being connected, communicate a characteristic of the user to the at least one robotic device. The at least one robotic device comprises a platform to carry the user and may be configured to: in response to the request of the control device, verify the identity of the control device of the user; in response to the identity of the control device of the user being verified, connect to the control device; define parameters of the at least one robotic device based on the characteristic of the user; and adjust the parameters of the at least one robotic device according to a riding pattern of the user.
    Type: Application
    Filed: June 26, 2018
    Publication date: January 17, 2019
    Inventors: YI LI, SONG CAO
  • Publication number: 20190019040
    Abstract: A method of displaying data in a data visualization computing system is described. Various methods of displaying the data are described, including using a timeline on which the data is aggregated by time period, wherein the timeline consists of a plurality of time period sizes, the currently selected time period covering the smallest time period, the period furthest on the timeline from the currently selected period covering the largest time period, and the timeline consisting of at least one time period of each size. Also described are improved methods of data selection and display.
    Type: Application
    Filed: July 23, 2018
    Publication date: January 17, 2019
    Inventor: Andrew John CARDNO
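The variable-granularity timeline above (smallest period at the current selection, largest at the far ends) can be sketched as a size schedule. Doubling with distance is an illustrative choice, not a detail from the application:

```python
def timeline_period_sizes(n_periods, selected_index, base_size=1):
    """Size of each timeline period: smallest at the selected index,
    growing (here, doubling) with distance from it."""
    return [base_size * 2 ** abs(i - selected_index) for i in range(n_periods)]
```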
  • Publication number: 20190019041
    Abstract: Disclosed is a method for detecting a vehicle in a driving assisting system. The method includes: obtaining an image to be detected, and determining the positions of lane lines in the image to be detected; determining a valid area in the image to be detected, according to the positions of the lane lines, and the velocity of the present vehicle; and determining a detected vehicle in the valid area according to T preset weak classifiers, and thresholds corresponding to the respective weak classifiers, wherein T is a positive integer.
    Type: Application
    Filed: March 15, 2018
    Publication date: January 17, 2019
    Inventors: Hongli Ding, Yu Gu, Yifei Zhang, Kai Zhao, Ying Zhang
  • Publication number: 20190019042
    Abstract: An adherent detecting method for, by means of at least one computer, detecting a target adherent adhering to a translucent body that separates an imaging element and a photographing target from each other includes acquiring a photographed image that is generated by photographing via the translucent body with the imaging element. Next, the photographed image is inputted as input data into a recognition model for recognizing the presence or absence of an adherent to the translucent body in an image taken via the translucent body. Then, the presence or absence of the target adherent in the photographed image is detected by acquiring information outputted from the recognition model and indicating the presence or absence of an adherent in the photographed image.
    Type: Application
    Filed: May 16, 2018
    Publication date: January 17, 2019
    Inventors: TORU TANIGAWA, YUKIE SHODA, SEIYA IMOMOTO
  • Publication number: 20190019043
    Abstract: A vehicular structure from motion (SfM) system can store a number of image frames acquired from a vehicle-mounted camera in a frame stack according to a frame stack update logic. The SfM system can detect feature points, generate flow tracks, and compute depth values based on the image frames, the depth values to aid control of the vehicle. The frame stack update logic can select a frame to discard from the stack when a new frame is added to the stack, and can be changed from a first in, first out (FIFO) logic to last in, first out (LIFO) logic upon a determination that the vehicle is stationary. An optical flow tracks logic can also be modified based on the determination. The determination can be made based on a dual threshold comparison to ensure robust SfM system performance.
    Type: Application
    Filed: September 4, 2018
    Publication date: January 17, 2019
    Inventors: Prashanth Ramanathpu Viswanath, Soyeb Nagori, Manu Mathew
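The frame-stack update logic above can be sketched as follows: evict the oldest frame (FIFO) while the vehicle moves, but evict the newest (LIFO) once a dual-threshold check decides the vehicle is stationary, preserving the older baseline frames. The threshold values and the speed signal are assumptions for illustration:

```python
from collections import deque

class SfMFrameStack:
    def __init__(self, capacity, stop_thresh=0.5, go_thresh=1.5):
        self.capacity = capacity
        self.frames = deque()
        self.stationary = False
        self.stop_thresh = stop_thresh  # enter stationary below this speed
        self.go_thresh = go_thresh      # leave stationary above this speed

    def update_motion_state(self, speed):
        # Dual thresholds add hysteresis so the state does not flicker
        # when the speed hovers near a single cutoff.
        if speed < self.stop_thresh:
            self.stationary = True
        elif speed > self.go_thresh:
            self.stationary = False

    def add_frame(self, frame):
        if len(self.frames) == self.capacity:
            if self.stationary:
                self.frames.pop()      # LIFO: discard the newest frame
            else:
                self.frames.popleft()  # FIFO: discard the oldest frame
        self.frames.append(frame)
```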
  • Publication number: 20190019044
    Abstract: An image processing apparatus includes one or more processors; and a memory, the memory storing instructions, which when executed by the one or more processors, cause the one or more processors to generate vertical direction distribution data indicating a frequency distribution of distance values with respect to a vertical direction of a range image, from the range image having distance values according to distance of a road surface in a plurality of captured images captured by a plurality of imaging parts; estimate a plurality of road surfaces, based on the vertical direction distribution data; and determine a desired road surface, based on the estimated plurality of road surfaces.
    Type: Application
    Filed: September 11, 2018
    Publication date: January 17, 2019
    Inventor: Naoki MOTOHASHI
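The vertical-direction distribution step above is essentially a per-row frequency distribution of distance/disparity values (a "v-disparity" style representation, where flat road pixels trace a straight line in row-vs-disparity space). A minimal sketch over a 2D list of integer disparities; the data layout is an illustrative assumption:

```python
def dominant_disparity_per_row(range_image):
    """Most frequent disparity value in each row of the range image."""
    profile = []
    for row in range_image:
        counts = {}
        for d in row:
            counts[d] = counts.get(d, 0) + 1
        profile.append(max(counts, key=counts.get))
    return profile
```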
  • Publication number: 20190019045
    Abstract: An object recognition apparatus includes an image acquisition unit that acquires a captured image of a photographic subject, and a recognition processing unit that recognizes the photographic subject in the acquired image using a recognition dictionary. The recognition processing unit detects a target in the acquired image using a target recognition dictionary, detects a wheel at a lower part of the detected target using a wheel recognition dictionary, and reflects a result of the detection of the wheel in a result of the detection of the target. Thus, the object recognition apparatus can accurately detect a target such as a person and another vehicle.
    Type: Application
    Filed: September 18, 2018
    Publication date: January 17, 2019
    Inventor: Takuya OGURA
  • Publication number: 20190019046
    Abstract: A method of living body detection performed with a terminal device includes the following operations. A first image for a target object is obtained via a camera at a first focal length, and a second image for the target object is obtained via the camera at a second focal length. A difference image of the first image and the second image is determined. Whether the target object is a living body is determined according to the difference image.
    Type: Application
    Filed: July 2, 2018
    Publication date: January 17, 2019
    Inventors: Haitao Zhou, Yibao Zhou, Cheng Tang, Xueyong Zhang
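The difference-image test above rests on the idea that a flat spoof (photo or screen) defocuses almost uniformly between the two focal lengths, while a real 3D face changes unevenly, leaving more structure in the difference image. In this sketch the images are plain 2D lists and the variance threshold is an assumption for illustration:

```python
def difference_image(img_a, img_b):
    """Per-pixel absolute difference of two equal-sized grayscale images."""
    return [[abs(a - b) for a, b in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

def is_living_body(img_a, img_b, var_threshold=10.0):
    """A near-uniform difference image (low variance) suggests a flat spoof."""
    diff = [v for row in difference_image(img_a, img_b) for v in row]
    mean = sum(diff) / len(diff)
    variance = sum((v - mean) ** 2 for v in diff) / len(diff)
    return variance > var_threshold
```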
  • Publication number: 20190019047
    Abstract: Disclosed are an iris-based living-body detection method, a mobile terminal and a storage medium. According to the method, in response to detection of the mobile terminal being lifted, the touch display screen is lit and controlled to display preset guide content, which is used to guide the eyes watching the touch display screen to move; the iris recognition apparatus is notified to perform iris acquisition for a target object associated with the eyes, to obtain a plurality of iris images; and it is determined from the plurality of iris images whether the target object is a living body.
    Type: Application
    Filed: July 5, 2018
    Publication date: January 17, 2019
    Applicant: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Haitao Zhou, Yibao Zhou, Cheng Tang, Xueyong Zhang
  • Publication number: 20190019048
    Abstract: Provided are a method and a device for guiding fingerprint recognition. The method includes the following steps: determining a current working state of a smart terminal (S1); determining whether a fingerprint input prompt is needed (S2); if so, displaying a fingerprint input guiding icon within an effective fingerprint detection area of the display (S3); and initiating a fingerprint collection function and collecting fingerprint information within the effective fingerprint detection area (S4). The guiding icon quickly leads the user to the fingerprint input location, improving the efficiency of fingerprint recognition detection and thereby the user experience.
    Type: Application
    Filed: September 17, 2018
    Publication date: January 17, 2019
    Inventors: GENGCHUN DENG, ZHIXIN ZHONG
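Steps S1 to S4 of this abstract can be sketched as a simple control flow. The state names and the rule for when a prompt is needed are assumptions made for illustration.

```python
# Minimal sketch of steps S1-S4: decide whether to prompt, show the guide
# icon, start collection, and collect when the touch lands in the area.

def guide_fingerprint(working_state, finger_pos, icon_area):
    """working_state: e.g. 'locked' or 'unlocked' (S1, assumed vocabulary);
    finger_pos: (x, y) touch point or None;
    icon_area: (x1, y1, x2, y2) effective detection area on the display.
    Returns the ordered list of actions the terminal would take."""
    actions = []
    needs_prompt = working_state == 'locked'        # S2 (assumed rule)
    if needs_prompt:
        actions.append('show_guide_icon')           # S3
        actions.append('start_fingerprint_collection')
        x1, y1, x2, y2 = icon_area
        if finger_pos and x1 <= finger_pos[0] <= x2 and y1 <= finger_pos[1] <= y2:
            actions.append('collect_fingerprint')   # S4
    return actions
```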
  • Publication number: 20190019049
    Abstract: This character/graphics recognition device of the present disclosure obtains information by recognizing a character or graphic affixed to an object in a predetermined space. The character/graphics recognition device includes a controller; an imaging unit for capturing an image of a predetermined imaging area including the object; an illumination unit including multiple illumination lamps that emit light from different positions to illuminate the predetermined space; and a recognition unit for obtaining the information by recognizing the character or graphic in the image captured by the imaging unit and outputting recognition result information including the obtained information. The controller applies a lighting pattern, a combination of on and off states of the illumination lamps, to the illumination unit and controls the timing at which the imaging unit captures the image.
    Type: Application
    Filed: September 19, 2018
    Publication date: January 17, 2019
    Inventors: SAKI TAKAKURA, MARIKO TAKENOUCHI
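The lighting-pattern control loop described above can be sketched as follows. The capture callback is a hypothetical stand-in for the imaging unit; the patent does not say that every on/off combination is used, so enumerating them all is an illustrative choice.

```python
# Illustrative sketch: drive multiple illumination lamps through on/off
# patterns and capture one image per pattern, synchronized by a controller.
from itertools import product

def capture_under_patterns(num_lamps, capture):
    """capture(pattern) -> image; pattern is a tuple of booleans, one per
    lamp. Returns a dict mapping each lighting pattern to its image."""
    images = {}
    for pattern in product((False, True), repeat=num_lamps):
        # The controller would switch each lamp here, then trigger capture.
        images[pattern] = capture(pattern)
    return images
```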
  • Publication number: 20190019050
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium. In one aspect, a system includes initial neural network layers configured to receive an input image and process it to generate a plurality of first feature maps that characterize the input image; a location generating convolutional neural network layer configured to perform a convolution on the plurality of first feature maps to generate data defining a respective location of each of a predetermined number of bounding boxes in the input image, wherein each bounding box identifies a respective first region of the input image; and a confidence score generating convolutional neural network layer configured to perform a convolution on the plurality of first feature maps to generate a confidence score for each of the predetermined number of bounding boxes in the input image.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Dominik Roblek, Christian Szegedy, Jacek Slawosz Jurewicz
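The two output heads this abstract describes can be sketched framework-free as a pair of 1x1 convolutions applied at one feature-map location: one emitting box coordinates, the other a confidence score. The weights and the single-location feature vector are illustrative assumptions, not the trained network.

```python
# Toy sketch of a location head and a confidence head, each a 1x1
# convolution over the channel vector at one feature-map location.

def conv1x1(feature, weights, bias):
    """feature: channel values at one location; weights: one row per
    output channel; bias: one value per output channel."""
    return [sum(w * f for w, f in zip(row, feature)) + b
            for row, b in zip(weights, bias)]

def detection_heads(feature, loc_w, loc_b, conf_w, conf_b):
    box = conv1x1(feature, loc_w, loc_b)          # location head: x, y, w, h
    score = conv1x1(feature, conf_w, conf_b)[0]   # confidence head
    return box, score
```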
  • Publication number: 20190019051
    Abstract: A first transmitter transmits a transfer request requesting transfer of imaging of a tracked object and first position information on a first unmanned mobile apparatus to a second unmanned mobile apparatus. A second transmitter transmits feature information related to an appearance of the tracked object and second position information on the tracked object to the second unmanned mobile apparatus after the first transmitter transmits the transfer request and the first position information. A receiver receives a transfer completion notification from the second unmanned mobile apparatus after the second transmitter transmits the feature information and the second position information.
    Type: Application
    Filed: September 18, 2018
    Publication date: January 17, 2019
    Inventors: Atsushi SAITO, Hiroyuki NAKAJIMA, Kazuki MANNAMI, Shimpei KAMAYA, Yasuma SUZUKI, Makoto INADA
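The handoff sequence in this abstract can be sketched as a three-message exchange. The message names and payload fields are assumptions made for illustration; the abstract only fixes the ordering.

```python
# Hedged sketch: transfer request plus first position, then target
# features plus second position, then a completion notification back.

def handoff_tracking(send, receive, own_pos, target_pos, features):
    """send(msg) delivers a message to the second unmanned mobile
    apparatus; receive() returns its reply. Returns True on completion."""
    send({'type': 'transfer_request', 'first_position': own_pos})
    send({'type': 'target_info',
          'features': features,
          'second_position': target_pos})
    reply = receive()
    return reply.get('type') == 'transfer_complete'
```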
  • Publication number: 20190019052
    Abstract: Text region detection techniques and systems for digital images using image tag filtering are described. These techniques and systems support numerous advantages over conventional techniques through use of image tags to filter text region candidates. A computing device, for instance, may first generate text region candidates through use of a variety of different techniques, such as text line detection. The computing device then assigns image tags to the text region candidates. The assigned image tags are then used by the computing device to filter the text region candidates based on whether image tags assigned to respective candidates are indicative of text.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: I-Ming Pao, Jue Wang, Ke Ma, Zhe Lin
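The filtering step the abstract describes can be sketched as below: keep only the text-region candidates whose assigned image tags suggest text. The tag vocabulary is a hypothetical example, not the patent's actual tag set.

```python
# Illustrative sketch: filter text-region candidates by their image tags.

TEXT_TAGS = {'text', 'word', 'letter', 'font', 'handwriting'}  # assumed

def filter_text_regions(candidates):
    """candidates: list of {'box': ..., 'tags': set of tag strings}.
    Keeps candidates with at least one text-indicative tag."""
    return [c for c in candidates if c['tags'] & TEXT_TAGS]
```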
  • Publication number: 20190019053
    Abstract: The present disclosure provides an image recognition method and apparatus, a device and a non-volatile computer storage medium. In embodiments of the present disclosure, a to-be-recognized image of a designated space is obtained and segmented into at least one area image. Image matching is then performed for each area image to obtain a corresponding reference image, and each area image is recognized according to the image information of its reference image to obtain the article information of that area image. This approach requires no manual participation, is simple to operate, and achieves a high rate of correctness, thereby improving recognition efficiency and reliability.
    Type: Application
    Filed: May 23, 2016
    Publication date: January 17, 2019
    Applicant: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Chen ZHAO, Haoyuan GAO, Ji LIANG
  • Publication number: 20190019054
    Abstract: In a contact information identification system, a user selects content from the display; the system scans the content and its images with optical character recognition to produce text; the system parses the text and groups it according to a factor selected from the group consisting of proximity of words, line or section, and matching key words; the system matches each group to a data field; the data associated with each field is presented in a list configured for correction by the user; and the user may delete fields that do not form part of, or are not relevant to, the contact information.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 17, 2019
    Inventor: Eugene Waxman
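The group-to-field matching step can be sketched with simple patterns. The regular expressions and field names are illustrative assumptions, not the patented matching rules.

```python
# Minimal sketch: match OCR'd text groups to contact-info fields, to be
# presented in a list the user can correct or delete.
import re

FIELD_PATTERNS = {                                  # assumed patterns
    'email': re.compile(r'[\w.+-]+@[\w-]+\.[\w.]+'),
    'phone': re.compile(r'\+?\d[\d\s().-]{6,}\d'),
}

def match_fields(groups):
    """groups: list of text strings; returns {field: text}, first match
    per field wins."""
    fields = {}
    for text in groups:
        for name, pattern in FIELD_PATTERNS.items():
            if name not in fields and pattern.search(text):
                fields[name] = text
    return fields
```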
  • Publication number: 20190019055
    Abstract: In an optical character recognition system, a word segmentation method comprises: acquiring a sample image comprising a word spacing marker or a non-word spacing marker; processing the sample image with a convolutional neural network to obtain a first eigenvector corresponding to the sample image and a word spacing probability value and/or a non-word spacing probability value corresponding to the first eigenvector; acquiring a to-be-tested image and processing it with the convolutional neural network to obtain a second eigenvector corresponding to the to-be-tested image and a word spacing probability value or a non-word spacing probability value corresponding to the second eigenvector; and performing word segmentation on the to-be-tested image using the obtained word spacing probability value or non-word spacing probability value.
    Type: Application
    Filed: February 16, 2017
    Publication date: January 17, 2019
    Inventors: WENMENG ZHOU, MENGLI CHENG, XUDONG MAO, XING CHU
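The final segmentation step can be sketched as thresholding per-column word-spacing probabilities and cutting the line at the spaces. In the abstract those probabilities come from a convolutional neural network; here they are given directly, and the threshold is an illustrative assumption.

```python
# Hedged sketch: cut a text line into word spans wherever the
# word-spacing probability exceeds a threshold.

def segment_words(spacing_probs, threshold=0.5):
    """spacing_probs: word-spacing probability per image column.
    Returns (start, end) column spans of the detected words."""
    spans, start = [], None
    for i, p in enumerate(spacing_probs):
        is_space = p > threshold
        if not is_space and start is None:
            start = i                       # word begins
        elif is_space and start is not None:
            spans.append((start, i))        # word ends at a space
            start = None
    if start is not None:
        spans.append((start, len(spacing_probs)))
    return spans
```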
  • Publication number: 20190019056
    Abstract: In general, certain embodiments of the present disclosure provide methods and systems for object detection by a neural network comprising a convolution-nonlinearity step and a recurrent step. In a training mode, a dataset is passed into the neural network, and the neural network is trained to accurately output a box size and a center location of an object of interest. The box size corresponds to the smallest possible bounding box around the object of interest and the center location corresponds to the location of the center of the bounding box. In an inference mode, an image that is not part of the dataset is passed into the neural network. The neural network automatically identifies an object of interest and draws a box around the identified object of interest. The box drawn around the identified object of interest corresponds to the smallest possible bounding box around the object of interest.
    Type: Application
    Filed: September 10, 2018
    Publication date: January 17, 2019
    Applicant: Pilot AI Labs, Inc.
    Inventors: Brian Pierce, Elliot English, Ankit Kumar, Jonathan Su
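The output representation this abstract describes, a centre location plus a box size for the smallest bounding box, can be sketched with two small helpers. These illustrate the geometry only; the neural network that predicts the values is not reproduced here.

```python
# Illustrative sketch: convert between the (centre, size) box output and
# corner coordinates, and compute the tightest box around an object.

def center_size_to_corners(cx, cy, w, h):
    """(cx, cy, w, h) -> (x1, y1, x2, y2)."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def smallest_bounding_box(points):
    """Centre and size of the smallest axis-aligned box enclosing the
    given (x, y) points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) + w / 2, min(ys) + h / 2, w, h)
```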
  • Publication number: 20190019057
    Abstract: Exemplary embodiments are generally directed to systems and methods of object identification. Exemplary embodiments can scan, by an optical reader, a machine-readable identifier associated with an original object. Exemplary embodiments can capture an image of the original object at a first orientation using an image capture device. Exemplary embodiments can transmit the machine-readable identifier and the image of the original object to an image database to store an association between the image of the original object and the machine-readable identifier. Exemplary embodiments can receive a subsequent object having a subsequent machine-readable identifier that is unavailable or incapable of being scanned. Exemplary embodiments can capture an image of the subsequent object with the image capture device.
    Type: Application
    Filed: September 18, 2018
    Publication date: January 17, 2019
    Inventors: Christopher Soames Johnson, Jimmie Russell Clark, Michael Lawerance Payne
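The association the abstract stores and later queries can be sketched as a small in-memory store. The image-matching callback is a hypothetical stand-in for whatever comparison the system uses.

```python
# Minimal sketch: store an original object's image under its
# machine-readable identifier, then identify a subsequent object by image
# when its identifier cannot be scanned.

class ObjectImageDB:
    def __init__(self):
        self._by_id = {}

    def register(self, identifier, image):
        """Associate a scanned identifier with a captured image."""
        self._by_id[identifier] = image

    def identify(self, image, matches):
        """matches(stored, image) -> bool; return the identifier whose
        stored image matches, or None."""
        for identifier, stored in self._by_id.items():
            if matches(stored, image):
                return identifier
        return None
```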
  • Publication number: 20190019058
    Abstract: The present invention utilizes computer vision technologies to identify potentially malicious URLs and executable files in a computing device. In one embodiment, a Siamese convolutional neural network is trained to identify the relative similarity between image versions of two strings of text. After the training process, a list of strings that are likely to be utilized in malicious attacks is provided (e.g., legitimate URLs for popular websites). When a new string is received, it is converted to an image and then compared against the images of the strings in the list. The relative similarity is determined, and if the similarity rating falls below a predetermined threshold, an alert is generated indicating that the string is potentially malicious.
    Type: Application
    Filed: July 13, 2017
    Publication date: January 17, 2019
    Inventors: Jonathan Woodbridge, Anjum Ahuja, Daniel Grant
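The alerting logic this abstract describes can be sketched as below. The character-level distance is a crude stand-in for the trained Siamese network's visual distance, and the threshold is an illustrative assumption.

```python
# Hedged sketch: flag a new string that is visually close to, but not
# identical with, a known-legitimate URL.

def char_distance(a, b):
    """Crude stand-in for the Siamese network's visual distance."""
    diff = sum(c1 != c2 for c1, c2 in zip(a, b)) + abs(len(a) - len(b))
    return diff / max(len(a), len(b))

def flag_if_lookalike(candidate, legit_urls, distance=char_distance,
                      threshold=0.2):
    """Return the legitimate URL the candidate imitates, or None."""
    for legit in legit_urls:
        if candidate == legit:
            return None          # exact match: genuinely legitimate
        if distance(candidate, legit) < threshold:
            return legit         # visually similar: likely spoof
    return None
```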
  • Publication number: 20190019059
    Abstract: The present disclosure provides a method and apparatus for identifying a target by obtaining mapping information between target images, generating philtrum model information for the target images, and determining a target included in a target image based on the mapping information and the philtrum model information.
    Type: Application
    Filed: August 23, 2017
    Publication date: January 17, 2019
    Inventor: Min Jeong LEE
  • Publication number: 20190019060
    Abstract: A system described herein may allow for the enhanced identification of candidate images that are similar to a reference image. For example, a primary focus and one or more secondary foci may be identified in the reference image (where a “focus” corresponds to a visual feature of the reference image). Characteristics (e.g., size, shape, color, etc.) of the secondary foci may be identified. Positional relationships of the secondary foci to the primary focus may also be identified. Candidate images may be scored based on whether they include the primary focus and one or more secondary foci, as well as whether the secondary foci match the characteristics and/or positional relationships of corresponding secondary foci in the reference image.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 17, 2019
    Inventor: Qiao Yu
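The scoring idea in this abstract can be sketched as below: a candidate earns points for containing the primary focus and for each secondary focus that matches the reference's characteristics and positional relationship. The score weights and the offset-based position encoding are assumptions made for illustration.

```python
# Illustrative sketch: score a candidate image against a reference by its
# primary focus and the positions of its secondary foci.

def score_candidate(candidate, reference, w_primary=2.0, w_secondary=1.0):
    """candidate / reference: {'primary': label,
    'secondary': {label: (dx, dy) offset from the primary focus}}."""
    score = 0.0
    if candidate['primary'] == reference['primary']:
        score += w_primary
    for name, ref_offset in reference['secondary'].items():
        if candidate['secondary'].get(name) == ref_offset:
            score += w_secondary
    return score
```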