Patent Applications Published on January 25, 2018
-
Publication number: 20180025205
Abstract: A fingerprint identification module includes a cover plate, a fingerprint identification sensor, at least one light source, a plurality of fibers, and a display device. The cover plate has an inner surface and an outer surface opposite to the inner surface. The fingerprint identification sensor and the at least one light source are located under the inner surface, and the at least one light source is located adjacent to the fingerprint identification sensor. The fibers are arranged in an array and are located between the cover plate and the fingerprint identification sensor. Each of the fibers has a light incident surface facing the inner surface and inclined relative to the inner surface. An optical axis of each of the fibers is perpendicular to the inner surface of the cover plate. The display device is located between the cover plate and the plurality of fibers.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Applicant: Gingy Technology Inc.
Inventors: Jen-Chieh Wu, Chuck Chung
-
Publication number: 20180025206
Abstract: Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Applicant: UT-Battelle, LLC
Inventors: Hector J. Santos-Villalobos, Chris Bensing Boehnen, David S. Bolme
-
Publication number: 20180025207
Abstract: Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Applicant: UT-Battelle, LLC
Inventors: Hector J. Santos-Villalobos, Chris Bensing Boehnen, David S. Bolme
-
Publication number: 20180025208
Abstract: An electronic device able to identify fingerprints ultrasonically includes a substrate, a fingerprint identification structure, and an adhesive layer. The fingerprint identification structure includes a thin film transistor (TFT) substrate and a flexible printed circuit (FPC). The FPC includes a first portion and a second portion. The first portion is located on a surface of the TFT substrate facing away from the substrate. The second portion is extended from an end of the first portion to be electrically connected to a surface of the TFT substrate facing the substrate. The second portion is separated from the adhesive layer. A space is defined between the second portion and the substrate. The adhesive layer is susceptible to deformation and decomposition from environmental conditions.
Type: Application
Filed: January 7, 2017
Publication date: January 25, 2018
Inventor: JUAN WANG
-
Publication number: 20180025209
Abstract: A fingerprint identification apparatus includes a substrate, a second electrode layer, and a first electrode layer. The first electrode layer includes parallel first electrodes, and at least parts of the first electrodes have openings or dents. The second electrode layer includes parallel second electrodes and the second electrodes cross with the first electrodes on the substrate, where the openings or the dents are defined at the cross points from projected view. The second electrode is applied with transmitting signal and the corresponding electric field lines are received by the first electrode. The electric field lines detouring the edges of the first electrodes, or detouring the openings (or the dents), have induction with the finger close to or touching the first electrodes. The number of the effective electric field lines and the effective mutual capacitance changes can be increased to enhance the fingerprint sensing accuracy.
Type: Application
Filed: July 7, 2017
Publication date: January 25, 2018
Inventors: Hsiang-Yu LEE, Shang CHIN, Ping-Tsun LIN
-
Publication number: 20180025210
Abstract: The methods, devices, and systems may allow a practitioner to obtain information regarding a biological sample, including analytical data, a medical diagnosis, and/or a prognosis or predictive analysis. The methods, devices, and systems may provide a grade or level of development for identified diseases. In addition, the methods, devices, and systems may generate a confidence value for the predictive classifications generated, which may, for example, be generated in a format to show such confidence value or other feature in a graphical representation (e.g., a color code). Further, the methods, devices, and systems may aid in the identification and discovery of new classes and tissue sub-types.
Type: Application
Filed: September 18, 2017
Publication date: January 25, 2018
Inventors: Stanley H. REMISZEWSKI, Clay M. THOMPSON, Max DIEM, Aysegul ERGIN
-
Publication number: 20180025211
Abstract: A processor of a cell tracking correction apparatus is configured to perform processes comprising: estimating a position of at least one cell in images acquired by time-lapse photography, and tracking the position of the cell; generating nearby area images of a nearby area including the cell from the images of photography time points of the time-lapse photography, based on the tracked position of the cell at each of the photography time points of the time-lapse photography; displaying the nearby area images on a display; accepting, via a user interface, an input of a correction amount for correcting the position of the cell with respect to one of the nearby area images displayed on the display unit; and correcting the tracked position of the cell corresponding to the nearby area image, in accordance with the correction amount.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Applicant: OLYMPUS CORPORATION
Inventor: Hideya ARAGAKI
-
Publication number: 20180025212
Abstract: Apparatus, methods, and computer-readable media are provided for segmentation, processing (e.g., preprocessing and/or postprocessing), and/or feature extraction from tissue images such as, for example, images of nuclei and/or cytoplasm. Tissue images processed by various embodiments described herein may be generated by Hematoxylin and Eosin (H&E) staining, immunofluorescence (IF) detection, immunohistochemistry (IHC), similar and/or related staining processes, and/or other processes. Predictive features described herein may be provided for use in, for example, one or more predictive models for treating, diagnosing, and/or predicting the occurrence (e.g., recurrence) of one or more medical conditions such as, for example, cancer or other types of disease.
Type: Application
Filed: July 26, 2017
Publication date: January 25, 2018
Applicant: Fundação D. Anna de Sommer Champalimaud e Dr. Carlos Montez Champalimaud, dba Champalimaud Fnd.
Inventors: Peter Ajemba, Richard Scott, Janakiramanan Ramachandran, Jack Zeineh, Michael Donovan, Gerardo Fernandez, Qiuhua Liu, Faisal Khan
-
Publication number: 20180025213
Abstract: A traffic enforcement system and corresponding method are provided. The traffic enforcement system includes a camera configured to capture an input image of one or more subjects in a motor vehicle. The traffic enforcement system further includes a memory storing a deep learning model configured to perform multi-task learning for a pair of tasks including a liveness detection task and a face recognition task on one or more subjects in a motor vehicle depicted in the input image. The traffic enforcement system also includes a processor configured to apply the deep learning model to the input image to recognize an identity of the one or more subjects in the motor vehicle and a liveness of the one or more subjects. The liveness detection task is configured to evaluate a plurality of different distractor modalities corresponding to different physical spoofing materials to prevent face spoofing for the face recognition task.
Type: Application
Filed: June 29, 2017
Publication date: January 25, 2018
Inventors: Manmohan Chandraker, Xiang Yu, Eric Lau, Elsa Wong
-
Publication number: 20180025214
Abstract: The disclosure relates to a face recognition method. The face recognition method includes: providing a face recognition system, the face recognition system includes a database module, a camera module, and a feature point compare module, wherein the database module stores a plurality of data-photos of a plurality of users; turning on the face recognition system to a searching motion, and searching person faces by the camera module to get a target person face of a target person; and, turning on the face recognition system to a recognition motion to judge whether the target person is one user.
Type: Application
Filed: March 27, 2017
Publication date: January 25, 2018
Inventors: TIEN-PING LIU, I-HAO CHUNG, KUEI-KANG WU, HORNG-JUING LEE
-
Publication number: 20180025215
Abstract: A method and system for managing and sharing images in an anonymised and live manner such that a third party is able to search for the most recent images that meet the search criteria without being provided with any images uploaded by the proprietor in which a person is present. The system processes images uploaded to a database, marking them as public or private based on the content of the image, and provides an information retrieval system enabling a third party to search through the images marked as public, wherein the third party is presented with the images most recently captured that fulfil the search criteria.
Type: Application
Filed: September 30, 2015
Publication date: January 25, 2018
Inventors: Yaacoub YOUSEF, Izlam SHALAAN, Tom MARTIN
-
Publication number: 20180025216
Abstract: The present invention is to provide a system, a method, and a program for identifying a person depicted in a portrait. The system for identifying a person depicted in a portrait receives input of portrait data of the portrait, stores different individual data depending on a plurality of persons, extracts a feature point of the received portrait by image analysis, checks the extracted feature point against the individual data, identifies a person depicted in the portrait from the check result, and displays the identification result together with a probability corresponding to an identical level.
Type: Application
Filed: June 13, 2017
Publication date: January 25, 2018
Inventor: Shunji SUGAYA
-
Publication number: 20180025217
Abstract: A face recognition system and corresponding method are provided. The face recognition system includes a camera configured to capture an input image of a subject purported to be a person. The face recognition system further includes a memory storing a deep learning model configured to perform multi-task learning for a pair of tasks including a liveness detection task and a face recognition task. The face recognition system also includes a processor configured to apply the deep learning model to the input image to recognize an identity of the subject in the input image and a liveness of the subject. The liveness detection task is configured to evaluate a plurality of different distractor modalities corresponding to different physical spoofing materials to prevent face spoofing for the face recognition task.
Type: Application
Filed: June 29, 2017
Publication date: January 25, 2018
Inventors: Manmohan Chandraker, Xiang Yu, Eric Lau, Elsa Wong
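The multi-task arrangement in this abstract (shared by the related publications 20180025213, 20180025242, and 20180025243) pairs a shared feature extractor with two heads, one for identity and one for liveness. Below is a minimal sketch of that idea in PyTorch; the layer sizes, class counts, and equal loss weighting are illustrative assumptions, not the architecture claimed in the application.

```python
# Hedged sketch: shared backbone with an identity head (face recognition)
# and a liveness head (real face vs. spoof). Dimensions are assumptions.
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    def __init__(self, num_identities: int, embed_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        self.identity_head = nn.Linear(embed_dim, num_identities)  # who it is
        self.liveness_head = nn.Linear(embed_dim, 1)                # live vs. spoofed

    def forward(self, images: torch.Tensor):
        features = self.backbone(images)
        return self.identity_head(features), self.liveness_head(features)

# Joint training objective: cross-entropy for identity, BCE for liveness.
model = MultiTaskFaceNet(num_identities=1000)
images = torch.randn(8, 3, 112, 112)
id_labels = torch.randint(0, 1000, (8,))
live_labels = torch.randint(0, 2, (8, 1)).float()
id_logits, live_logits = model(images)
loss = (nn.functional.cross_entropy(id_logits, id_labels)
        + nn.functional.binary_cross_entropy_with_logits(live_logits, live_labels))
loss.backward()
```

Training both heads against the same backbone is what makes this "multi-task"; the liveness supervision can draw on examples of different spoofing materials (prints, screens, masks) as the abstract describes.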
-
Publication number: 20180025218
Abstract: Techniques for identifying a person in a target image are described. According to one of the techniques, identifying a person in a target image involves displaying, within a graphical user interface, an image that depicts one or more faces. One or more faces are automatically detected within the image. A user provides input that selects a face of the one or more faces to be a currently-selected face. A set of images are selected from a collection of images, where the set of images includes images that closely match the currently-selected face. Concurrently with display of the currently-selected face, each image in the set of images is displayed. Within the graphical user interface, a control is provided. The control enables a user to select a target image from the set of images. In response to detecting that the user has selected a target image using the control, the currently-selected face is associated with a person to which the target image corresponds.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: Eran STEINBERG, Peter CORCORAN, Yury PRILUTSKY, Petronel BIGIOI, Mihai CIUC, Stefanita CIUREL, Constantin VERTAN
-
Publication number: 20180025219
Abstract: A system according to various exemplary embodiments includes a processor and a user interface coupled to the processor, the user interface comprising an input device and a display screen. The system further comprises a sensor component coupled to the processor, the sensor component comprising a location sensor for determining location information associated with the system, and memory coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: retrieving sensor information from the sensor component, the sensor information including location information from the location sensor; retrieving, from the memory, avatar information associated with a user of the system; generating a customized graphic based on the sensor information and the user's avatar information; and presenting the customized graphic within an electronic message via the display screen of the user interface.
Type: Application
Filed: October 11, 2016
Publication date: January 25, 2018
Inventors: Dorian Franklin Baldwin, Jacob Blackstock, Shahan Panth, David James Kennedy
-
Publication number: 20180025220
Abstract: Methods, systems, and devices are disclosed for image acquisition and distribution of individuals at large events. In one aspect, a method for providing an image of attendees at an event includes operating one or more image capturing devices to record images of attendees of an event situated at locations in an event venue, processing the images to form a processed image, and distributing the processed image to the individual. The processing includes mapping the locations to a grid including coordinates corresponding to predetermined positions associated with the event venue, defining an image space containing an individual at a particular location in the event venue based on the coordinates, and forming the processed image based on the image space.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: William Dickinson, Daniel Magy, Marco Correia
-
Publication number: 20180025221
Abstract: Embodiments of the invention provide a method, system and computer program product for video sentiment analysis in video messaging. In an embodiment of the invention, a method for video sentiment analysis in video messaging includes receiving different video contributions to a thread in a social system executing in memory of a computer and sensing from a plurality of the video contributions a contributor sentiment. Thereafter, a sentiment value for the different video contributions is computed and a sentiment value for a selected one of the video contributions is displayed in a user interface to the thread for an end user contributing a new video contribution to the thread.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Liam Harpur, Erik H. Katzen, Sumit Patel, John Rice
-
Publication number: 20180025222
Abstract: The present disclosure relates to optical character recognition using captured video. According to one embodiment, using a first image in a stream of images depicting a document, the device extracts text data in a portion of the document depicted in the first image and determines a first confidence level regarding an accuracy of the extracted text data. If the first confidence level satisfies a threshold value, the device saves the extracted text data as recognized content of the source document. Otherwise, the device extracts the text data from the portion of the document as depicted in one or more second images in the stream and determines a second confidence level for the text data extracted from each second image until identifying one of the second images where the second confidence level associated with the text data extracted from the identified second image satisfies the threshold value.
Type: Application
Filed: July 25, 2016
Publication date: January 25, 2018
Inventors: Vijay YELLAPRAGADA, Peijun CHIANG, Sreeneel K. MADDIKA
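The confidence-gated loop described in this abstract can be summarized in a few lines. The sketch below takes the OCR routine as a caller-supplied callable returning (text, confidence); that callable and the 0.9 threshold are assumptions for illustration, not the API used in the application.

```python
# Hedged sketch: keep extracting text from successive frames of a video stream
# until one extraction meets the confidence threshold; otherwise fall back to the
# best extraction seen. `recognize_text` is supplied by the caller.
def extract_until_confident(frames, region, recognize_text, threshold=0.9):
    best_text, best_conf = None, 0.0
    for frame in frames:
        text, conf = recognize_text(frame, region)
        if conf >= threshold:
            return text                 # accept as recognized content of the document
        if conf > best_conf:            # remember the best candidate so far
            best_text, best_conf = text, conf
    return best_text                    # no frame met the threshold; return best effort

# Example with a toy recognizer that improves over successive frames.
fake_results = iter([("lnvoice 2016", 0.6), ("Invoice 2016", 0.95)])
ocr = lambda frame, region: next(fake_results)
print(extract_until_confident([0, 1], region=None, recognize_text=ocr))
```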
-
Publication number: 20180025223
Abstract: Methods and systems are provided for defining and determining a formal and verifiable mobile document image quality and usability (MDIQU) standard, or Standard for short. The Standard ensures that a mobile image can be used in an appropriate mobile document processing application, for example an application for mobile check deposit. In order to quantify the usability, the Standard establishes 5 quality and usability grades. A mobile image capture device can capture images. A mobile device can receive information associated with one or more image quality assurance (IQA) criteria; evaluating the images to select an image satisfying an image quality criteria based on the received information; and in response to the image satisfying the image quality score, sending the selected image to determine a set of image quality assurance (IQA) scores.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: Grigori Nepomniachtchi, Mike Strange
-
Publication number: 20180025224
Abstract: A system and method for identifying unclaimed electronic documents among at least one electronic document, each electronic document including at least partially unstructured data. The method includes: analyzing each electronic document to determine at least one transaction parameter of the electronic document; creating a template for each electronic document, wherein each template is a structured dataset including the at least one transaction parameter determined for the electronic document; and determining whether each electronic document is unclaimed, wherein determining whether an electronic document is unclaimed further comprises comparing at least a portion of the template created for the electronic document to identifying data of a plurality of previous reclaims.
Type: Application
Filed: August 4, 2017
Publication date: January 25, 2018
Applicant: Vatbox, Ltd.
Inventors: Noam GUZMAN, Isaac SAFT
-
Publication number: 20180025225
Abstract: A system and method for generating consolidated data based on electronic documents. The method includes analyzing a first electronic document to determine at least one transaction parameter, the first electronic document indicating a transaction including at least one expense, wherein the first electronic document includes at least partially unstructured data; creating a template for the first electronic document, wherein the template is a structured dataset including the determined at least one transaction parameter; retrieving, based on the template, a second electronic document, wherein the second electronic document indicates evidence of the transaction; determining at least one deductible expense of the at least one expense based on at least one deduction rule, the template, and the second electronic document; and generating consolidation metadata based on the determined at least one deductible expense.
Type: Application
Filed: August 4, 2017
Publication date: January 25, 2018
Applicant: Vatbox, Ltd.
Inventors: Noam GUZMAN, Isaac SAFT
-
Publication number: 20180025226
Abstract: Smart eyeglasses with iris recognition device comprise at least one glasses frame, a glasses arm connected to a side of the glasses frame, and an iris recognition device installed on the glasses frame and having a recognition unit facing an inner side of the glasses frame. A light source device installed on the glasses frame projects a light on an inner side of an outer boundary of an iris of an eyeball, and not in contact with the outer boundary of the iris. The iris is sampled more easily and clearly with luminosity compensation of the light source device for enhancing the accuracy and sampling speed.
Type: Application
Filed: July 22, 2016
Publication date: January 25, 2018
Inventor: Yung-Hui Li
-
Publication number: 20180025227
Abstract: Methods and apparatus to measure brand exposure in media streams are disclosed. Disclosed example apparatus include a scene detector to compare a signature of a detected scene of a media presentation with a library of signatures to identify a first reference scene, and a scene classifier to classify the detected scene as a scene of changed interest when a first region of interest in the detected scene does not include a first reference brand identifier included in a corresponding region of interest in the first reference scene. Disclosed example apparatus further include a graphical user interface to present the detected scene, prompt for selection of an area of the first region of interest in the detected scene, and compare the selected area to a library of reference brand identifiers to identify a second reference brand identifier included in the first region of interest in the detected scene.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: Kevin Keqiang Deng, Barry Greenberg
-
Publication number: 20180025228
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, the video frame having associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Application
Filed: October 2, 2017
Publication date: January 25, 2018
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
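As a rough illustration of the per-frame scoring and aggregation calibration described here, the sketch below scores each frame's features with an entity classifier, averages the scores, and passes the aggregate through a calibration function to produce a probability of existence. The mean aggregation and logistic calibration are assumptions for the sketch, not the publication's specific functions.

```python
# Hedged sketch: per-frame classifier scores aggregated and calibrated into a
# probability that an entity exists in the video. Calibration parameters are
# illustrative assumptions.
import math

def entity_probability(frame_features, classifier, calib_a=1.0, calib_b=0.0):
    scores = [classifier(f) for f in frame_features]          # score each frame
    aggregate = sum(scores) / max(len(scores), 1)              # simple mean aggregation
    return 1.0 / (1.0 + math.exp(-(calib_a * aggregate + calib_b)))  # calibrate to [0, 1]

# Toy linear classifier over 3-dimensional frame features.
weights = [0.5, -0.2, 1.0]
clf = lambda feat: sum(w * x for w, x in zip(weights, feat))
print(entity_probability([[1.0, 0.3, 0.7], [0.8, 0.1, 0.9]], clf))
```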
-
Publication number: 20180025229
Abstract: A method, an apparatus, and a storage medium are provided for image outputting. The method may include: acquiring data frames collected by a target camera; acquiring a target image based on the data frames; and controlling a target terminal to output an alert message which at least includes the target image. When an emergency situation occurs in a user's home, the user can be notified of the emergency situation at once, thereby increasing the effective utilization of the smart camera device.
Type: Application
Filed: May 9, 2017
Publication date: January 25, 2018
Applicant: Beijing Xiaomi Mobile Software Co., Ltd.
Inventors: Yi DING, Deguo MENG, Enxing HOU
-
Publication number: 20180025230
Abstract: A computer system processes a video stream to detect a start of a first motion event candidate in the video stream, and in response to detecting the start of the first motion event candidate in the video stream, initiates event recognition processing on a first video segment associated with the start of the first motion event candidate. Initiating the event recognition processing further includes: determining a motion track of a first object identified in the first video segment; generating a representative motion vector for the first motion event candidate based on the motion track of the first object; and sending the representative motion vector for the first motion event candidate to an event categorizer, where the event categorizer assigns a respective motion event category to the first motion event candidate based on the representative motion vector of the first motion event candidate.
Type: Application
Filed: August 7, 2015
Publication date: January 25, 2018
Inventors: Jason N. Laska, Gregory R. Nelson, Greg Duffy
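A compact way to picture this pipeline: track the object through the segment, reduce the track to a single representative motion vector, and hand that vector to a categorizer. In the sketch below, the net displacement of the track stands in for the representative vector and a trivial rule-based categorizer stands in for the event categorizer; both are assumptions made only to illustrate the flow.

```python
# Hedged sketch: reduce a motion track to a representative vector, then categorize.
# Net displacement and the rule-based categories are illustrative stand-ins.
def representative_motion_vector(track):
    """track: list of (x, y) centroid positions of the object over the segment."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return (x1 - x0, y1 - y0)

def categorize(vector, min_motion=10.0):
    dx, dy = vector
    if (dx * dx + dy * dy) ** 0.5 < min_motion:
        return "stationary"
    return "left-to-right" if dx > 0 else "right-to-left"

track = [(10, 50), (18, 52), (35, 55), (60, 57)]
vec = representative_motion_vector(track)
print(vec, categorize(vec))   # (50, 7) left-to-right
```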
-
Publication number: 20180025231
Abstract: A system and method for providing surveillance data are provided. The system includes: a pattern learner configured to learn a time-based data pattern by analyzing at least one of image data of one or more images and sound data of sound obtained from a surveillance zone at a predetermined time or time period, and to generate an event model based on the time-based data pattern; and an event detector configured to detect at least one event by comparing the event model with a time-based data pattern of at least one of first image data of one or more first images and first sound data of first sound obtained from the surveillance zone.
Type: Application
Filed: January 5, 2017
Publication date: January 25, 2018
Applicant: Hanwha Techwin Co., Ltd.
Inventors: Seung In NOH, Jeong Eun LIM, Seoung Seon JEON
-
Publication number: 20180025232
Abstract: A method comprising: associating a message with one or more presentation criterion and a physical location in a scene; automatically processing recorded first sensor data from the scene to recognize automatically satisfaction of the one or more presentation criterion; and in response to recognition of satisfaction of the one or more presentation criterion entering a presentation state to enable: automatic presentation of the message into the scene at the physical location.
Type: Application
Filed: January 20, 2016
Publication date: January 25, 2018
Inventors: Antti ERONEN, Jussi LEPPÄNEN
-
Publication number: 20180025233
Abstract: A positional information acquirer acquires positional information for every person from a video, an attribute information acquirer acquires attribute information for every person from the video, and an activity information acquirer restricts activity information to an attribute designated by a user based on the attribute information and the positional information, and acquires the activity information of which the attribute is restricted. An activity map generator generates an activity map of which an attribute is restricted based on the activity information, and a video output outputs a video acquired by superimposing the activity map. A controller determines appropriateness indicating whether or not the video output from the imager is appropriate, enables a function of outputting the activity map of which the attribute is restricted, and disables the function of outputting the activity map of which the attribute is restricted where the video output from the imager does not have the appropriateness.
Type: Application
Filed: March 3, 2016
Publication date: January 25, 2018
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventor: Kazuhiko IWAI
-
Publication number: 20180025234
Abstract: A method for determining lane information includes receiving perception data from at least two sensors, the at least two sensors including a rear facing camera of a vehicle. The method includes determining, based on the perception data, a number of lanes on a roadway within a field of view captured by the perception data using a neural network. The method includes providing an indication of the number of lanes to an automated driving system or driving assistance system.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Scott Vincent Myers, Alexandru Mihai Gurghian, Ashley Elizabeth Micks, Alexandro Walsh
-
Publication number: 20180025235
Abstract: Systems and methods are provided for crowdsourcing road surface information collection. In one implementation, a method of collecting road surface information for a road segment may include receiving at least one image representative of a portion of the road segment, identifying in the at least one image at least one road surface feature along the portion of the road segment, determining a plurality of locations associated with the road surface feature according to a local coordinate system of the vehicle, and transmitting the determined plurality of locations from the vehicle to a server. The determined locations may be configured to enable determination by the server of a line representation of the road surface feature extending along the road segment.
Type: Application
Filed: July 21, 2017
Publication date: January 25, 2018
Inventor: Ofer Fridman
-
Publication number: 20180025236
Abstract: To detect a distant object that may become an obstacle to a traveling destination of a moving vehicle or the like more accurately than conventional approaches, there is provided a detecting device, a program used in the detecting device, and a detecting method using the detecting device, where the detecting device includes: an acquisition section for acquiring two or more images captured in two or more imaging devices provided at different heights; and a detection section for detecting a rising portion of an identical object toward the imaging devices based on a difference between the lengths of the identical object in the height direction in the two or more images.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: Ching-Yung Lin, Masayuki Murata, Shuichi Shimizu
-
Publication number: 20180025237
Abstract: A vehicular control system includes a camera having an exterior field of view at least rearward of the vehicle and operable to capture image data. A trailer is attached to the vehicle and image data captured by the camera includes image data captured when the vehicle is maneuvered with the trailer at an angle relative to the vehicle. The vehicular control system determines a trailer angle of the trailer and is operable to determine a path of the trailer responsive at least to a steering angle of the vehicle and the determined trailer angle of the trailer. The vehicular control system determines an object present exterior of the vehicle and the vehicular control system distinguishes a drivable surface from a prohibited space, and the vehicular control system plans a driving path for the vehicle that neither impacts the object nor violates the prohibited space.
Type: Application
Filed: October 2, 2017
Publication date: January 25, 2018
Inventors: Sebastian Pliefke, Paul Jarmola, Thomas Wierich, Steven V. Byrne, Yuesheng Lu
-
Publication number: 20180025238
Abstract: A method automatically detects parking zones in at least one residential street, wherein a) a computing unit is provided; b) information in the form of panoramic images of the at least one residential street is inputted from an external data store; c) information in the form of street data of the at least one residential street is inputted from a map database; d) an internal database is generated, which persists the panoramic images; e) the inputted panoramic images are analyzed for the presence of vehicles; f) the inputted panoramic images are analyzed for the presence of street signs and traffic signs; g) from the analyses of the presence of vehicles and the presence of street signs and traffic signs for at least one selected residential street, expected existing no-parking/no-stopping zones are determined; h) a data set that contains the detected information regarding identified vehicles, street signs, and the markings of no-parking/no-stopping zones and parking zones is generated, and i) the information c
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: Heidrun BELZNER, Daniel KOTZOR, Vladimir HALTAKOV, Andrej MAYA
-
Publication number: 20180025239
Abstract: A method and an image processing apparatus for image-based object feature description are provided. In the method, an object of interest in an input image is detected and a centroid and a direction angle of the object of interest are calculated. Next, a contour of the object of interest is recognized and a distance and a relative angle of each pixel on the contour to the centroid are calculated, in which the relative angle of each pixel is calibrated by using the direction angle. Then, a 360-degree range centered on the centroid is equally divided into multiple angle intervals and the pixels on the contour are separated into multiple groups according to a range covered by each angle interval. Afterwards, a maximum among the distances of the pixels in each group is obtained and used as a feature value of the group. Finally, the feature values of the groups are normalized and collected to form a feature vector that serves as a feature descriptor of the object of interest.
Type: Application
Filed: December 27, 2016
Publication date: January 25, 2018
Applicant: TAMKANG UNIVERSITY
Inventors: Chi-Yi Tsai, Hsien-Chen Liao
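The descriptor in this abstract reduces an object contour to a fixed-length, rotation-compensated vector. The numpy sketch below follows the stated steps (centroid-relative distances and angles, calibration by the direction angle, angular binning, per-bin maximum distance, normalization); the bin count and the L2 normalization are assumptions made for the sketch.

```python
# Hedged sketch of the contour-based descriptor: distances and angles of contour
# pixels about the centroid, angles calibrated by the object's direction angle,
# grouped into angle intervals, max distance per interval, then normalized.
import numpy as np

def contour_descriptor(contour_xy, centroid, direction_angle_rad, num_bins=36):
    pts = np.asarray(contour_xy, dtype=float) - np.asarray(centroid, dtype=float)
    dists = np.hypot(pts[:, 0], pts[:, 1])
    angles = np.arctan2(pts[:, 1], pts[:, 0]) - direction_angle_rad  # calibrate angles
    angles = np.mod(angles, 2 * np.pi)                               # wrap to [0, 2*pi)
    bins = (angles / (2 * np.pi / num_bins)).astype(int) % num_bins  # angle interval index
    feature = np.zeros(num_bins)
    for b in range(num_bins):
        in_bin = dists[bins == b]
        if in_bin.size:
            feature[b] = in_bin.max()          # max distance within each angle interval
    norm = np.linalg.norm(feature)
    return feature / norm if norm > 0 else feature

# Toy example: a square contour centered at the origin, direction angle 0.
square = [(1, 1), (1, -1), (-1, -1), (-1, 1), (1, 0), (0, 1), (-1, 0), (0, -1)]
print(contour_descriptor(square, (0, 0), 0.0, num_bins=8))
```

Because the angles are measured relative to the object's direction angle, the resulting vector is (approximately) invariant to in-plane rotation, which is the point of the calibration step.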
-
Publication number: 20180025240
Abstract: A system and method are disclosed relating to a computer system that estimates the status of the driver of a vehicle. The system comprises the following steps: acquisition of an image from a depth sensor containing depth and optionally an IR intensity image and an RGB color image; identification of pixels that belong to the driver's head; creation of a 3D model of the head including an intensity model and a variability estimate for depth, grayscale and color information; estimation of the principal head pose and the neutral facial expression; estimation of the current relative head pose with respect to the principal head pose; identification of pixels that do not match the neutral face model with respect to depth, grayscale or color information or any combination thereof; clustering of the pixels with identified deviations; classification of spatial and temporal patterns to identify driver status and distraction events.
Type: Application
Filed: July 21, 2016
Publication date: January 25, 2018
Inventors: Sascha Klement, Sebastian Low, Dennis Munster
-
Publication number: 20180025241
Abstract: The present invention relates to a novel system and method for person identification and personality assessment based on electroencephalography (EEG) signals. More particularly, this invention relates to a novel method of EEG recording and processing to map the inherent and unique properties of the brain in the form of a highly specific brain signature to be used as a means for person identification and personality assessment.
Type: Application
Filed: January 18, 2016
Publication date: January 25, 2018
Inventors: DR. PUNEET AGARWAL, SIDDHARTH PANWAR
-
Publication number: 20180025242
Abstract: A facility access control system and corresponding method are provided. The facility access control system includes a camera configured to capture an input image of a subject attempting to enter or exit a restricted facility. The facility access control system further includes a memory storing a deep learning model configured to perform multi-task learning for a pair of tasks including a liveness detection task and a face recognition task. The facility access control system also includes a processor configured to apply the deep learning model to the input image to recognize an identity of the subject in the input image regarding being authorized for access to the facility and a liveness of the subject. The liveness detection task is configured to evaluate a plurality of different distracter modalities corresponding to different physical spoofing materials to prevent face spoofing for the face recognition task.
Type: Application
Filed: June 29, 2017
Publication date: January 25, 2018
Inventors: Manmohan Chandraker, Xiang Yu, Eric Lau, Elsa Wong
-
Publication number: 20180025243
Abstract: A login access control system is provided. The login access control system includes a camera configured to capture an input image of a subject purported to be a person and attempting to login to a system to access secure data. The login access control system further includes a memory storing a deep learning model configured to perform multi-task learning for a pair of tasks including a liveness detection task and a face recognition task. The login access control system also includes a processor configured to apply the deep learning model to the input image to recognize an identity of the subject in the input image regarding being authorized for access to the secure data and a liveness of the subject. The liveness detection task is configured to evaluate a plurality of different distractor modalities corresponding to different physical spoofing materials to prevent face spoofing for the face recognition task.
Type: Application
Filed: June 29, 2017
Publication date: January 25, 2018
Inventors: Manmohan Chandraker, Xiang Yu, Eric Lau, Elsa Wong
-
Publication number: 20180025244
Abstract: Exemplary embodiments are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module. The illumination sources are configured to illuminate at least a portion of a face of a subject. The camera is configured to capture one or more images of the subject during illumination of the face of the subject. The analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing.
Type: Application
Filed: July 27, 2017
Publication date: January 25, 2018
Applicant: Princeton Identity, Inc.
Inventors: Andrew James Bohl, David Alan Ackerman, Christopher Robert Martin
-
Publication number: 20180025245
Abstract: The present disclosure relates to a gaze based error recognition detection system that is intended to predict intention of the user to correct user drawn sketch misrecognitions through a multimodal computer based intelligent user interface. The present disclosure more particularly relates to a gaze based error recognition system comprising at least one computer, an eye tracker to capture natural eye gaze behavior during sketch based interaction, an interaction surface and a sketch based interface providing interpretation of user drawn sketches.
Type: Application
Filed: September 1, 2017
Publication date: January 25, 2018
Applicant: KOC Universitesi
Inventors: Tevfik Metin Sezgin, Ozem Kalay
-
Publication number: 20180025246
Abstract: A system and process of nearsighted (myopia) camera object detection involves detecting the objects through edge detection and outlining or thickening them with a heavy border. Thickening may include making the object bold in the case of text characters. The bold characters are then much more apparent and heavier weighted than the background. Thresholding operations are then applied (usually multiple times) to the grayscale image to remove all but the darkest foreground objects in the background resulting in a nearsighted (myopic) image. Additional processes may be applied to the nearsighted image, such as morphological closing, contour tracing and bounding of the objects or characters. The bound objects or characters can then be averaged to provide repositioning feedback for the camera user. Processed images can then be captured and subjected to OCR to extract relevant information from the image.
Type: Application
Filed: October 3, 2017
Publication date: January 25, 2018
Inventor: Scott E. Barton
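The processing chain described here (edge detection, thickening, thresholding, morphological closing, contour tracing, bounding) maps naturally onto standard OpenCV primitives. The sketch below is an approximation under that assumption; the thresholds, kernel sizes, and the single thresholding pass are illustrative choices, not the parameters of the disclosed process.

```python
# Hedged sketch of the "nearsighted" pipeline using standard OpenCV operations.
# Threshold values and kernel sizes are illustrative assumptions.
import cv2
import numpy as np

def myopic_boxes(gray):
    edges = cv2.Canny(gray, 50, 150)                              # detect object edges
    kernel = np.ones((3, 3), np.uint8)
    thick = cv2.dilate(edges, kernel, iterations=2)               # thicken / embolden edges
    _, dark = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)  # keep darkest foreground
    myopic = cv2.bitwise_and(thick, dark)                         # suppress light background
    closed = cv2.morphologyEx(myopic, cv2.MORPH_CLOSE, kernel)    # close small gaps
    # [-2] picks the contour list under both OpenCV 3 and 4 return conventions.
    contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return [cv2.boundingRect(c) for c in contours]                # (x, y, w, h) per object

gray = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)
if gray is not None:
    boxes = myopic_boxes(gray)
    # The box positions could then be averaged to give repositioning feedback
    # before capturing a frame for OCR.
    print(len(boxes), "candidate objects")
```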
-
Publication number: 20180025247
Abstract: Upon input of conditions of a scale, the height of a ceiling, and an algorithm in an operation area through an operation by a user and dragging of a detection position object with a pointer by the user to set a detection position in a layout guide, an information processing apparatus displays a mounting position guide indicating a position where a camera is capable of being arranged on the layout guide in accordance with the algorithm that is selected.
Type: Application
Filed: July 17, 2017
Publication date: January 25, 2018
Inventor: Akihiro Kohno
-
Publication number: 20180025248
Abstract: A method of generating handwriting information about handwriting of a user includes determining a first writing focus and a second writing focus; sequentially shooting a first local writing area, which is within a predetermined range from the first writing focus, and a second local writing area, which is within a predetermined range from the second writing focus; obtaining first handwriting from the first local writing area and second handwriting from the second local writing area; combining the first handwriting with the second handwriting; and generating the handwriting information, based on a result of the combining.
Type: Application
Filed: February 11, 2016
Publication date: January 25, 2018
Inventors: Yuxian SHAN, Chuanxia LIU, Youxin CHEN
-
Publication number: 20180025249
Abstract: A method detects an object in an image. The method extracts a first feature vector from a first region of an image using a first subnetwork and determines a second region of the image by processing the first feature vector with a second subnetwork. The method also extracts a second feature vector from the second region of the image using the first subnetwork and detects the object using a third subnetwork on a basis of the first feature vector and the second feature vector to produce a bounding region surrounding the object and a class of the object. The first subnetwork, the second subnetwork, and the third subnetwork form a neural network. Also, a size of the first region differs from a size of the second region.
Type: Application
Filed: July 25, 2016
Publication date: January 25, 2018
Applicant: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Ming-Yu Liu, Oncel Tuzel, Amir massoud Farahmand, Kota Hara
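The three-subnetwork structure described here (a shared feature extractor, a subnetwork that proposes a second region from the first feature vector, and a detector that fuses both feature vectors into a class and bounding region) can be sketched in PyTorch as below. The region parameterization, the stand-in cropping function, and all layer sizes are assumptions for illustration, not the claimed design.

```python
# Hedged sketch of the three-subnetwork detector. subnet1 extracts features from a
# region, subnet2 predicts a second region from those features, and subnet3 fuses
# both feature vectors into class logits and a bounding box. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoGlanceDetector(nn.Module):
    def __init__(self, feat_dim=128, num_classes=20):
        super().__init__()
        self.subnet1 = nn.Sequential(              # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.subnet2 = nn.Linear(feat_dim, 4)       # second region as (cx, cy, w, h) in [0, 1]
        self.subnet3 = nn.Linear(2 * feat_dim, num_classes + 4)  # class logits + box

    def forward(self, region1, image, cropper):
        f1 = self.subnet1(region1)                  # feature of the first region
        box2 = torch.sigmoid(self.subnet2(f1))      # proposed second region
        region2 = cropper(image, box2)              # crop a differently sized region
        f2 = self.subnet1(region2)                  # feature of the second region
        out = self.subnet3(torch.cat([f1, f2], dim=1))
        return out[:, :-4], out[:, -4:], box2       # class logits, box, proposed region

def toy_cropper(image, box):
    # Stand-in for a real (e.g., roi_align based) crop; here we simply downsample
    # so the second "region" has a different size than the first.
    return F.interpolate(image, scale_factor=0.5)

model = TwoGlanceDetector()
image = torch.randn(2, 3, 128, 128)
region1 = image[:, :, 32:96, 32:96]                 # first region: a 64x64 crop
class_logits, box, proposed = model(region1, image, toy_cropper)
print(class_logits.shape, box.shape, proposed.shape)
```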
-
Publication number: 20180025250
Abstract: An image processing apparatus is configured to extract an object region from an image. The image processing apparatus includes: a setting unit configured to set a plurality of reference points in the image; an obtaining unit configured to obtain a contour of the object region corresponding to each of the plurality of reference points as an initial extraction result based on a characteristic of the object region; and an extraction unit configured to extract the object region from the image based on an integration result obtained by integrating values of pixels in a plurality of initial extraction results.
Type: Application
Filed: July 14, 2017
Publication date: January 25, 2018
Inventors: Bin Chen, Daisuke Furukawa
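One common way to realize "integrating values of pixels in a plurality of initial extraction results" is a per-pixel vote across the candidate masks followed by a majority threshold; the sketch below illustrates that reading, which is an assumption about the integration step rather than the specific method claimed.

```python
# Hedged sketch: combine several initial extraction results (binary masks obtained
# from different reference points) by per-pixel voting. Majority voting is an
# illustrative choice for the integration step.
import numpy as np

def integrate_extractions(masks, min_fraction=0.5):
    stack = np.stack([m.astype(bool) for m in masks])   # (n_candidates, H, W)
    votes = stack.mean(axis=0)                           # fraction of candidates per pixel
    return votes >= min_fraction                         # integrated object region

m1 = np.array([[0, 1, 1], [0, 1, 0]])
m2 = np.array([[0, 1, 1], [1, 1, 0]])
m3 = np.array([[0, 0, 1], [0, 1, 0]])
print(integrate_extractions([m1, m2, m3]).astype(int))
```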
-
Publication number: 20180025251
Abstract: The present disclosure is directed toward systems and methods to quickly and accurately identify boundaries of a displayed document in a live camera image feed, and provide a document boundary indicator within the live camera image feed. For example, systems and methods described herein utilize different display document detection processes in parallel to generate and provide a document boundary indicator that accurately corresponds with a displayed document within a live camera image feed. Thus, a user of the mobile computing device can easily see whether the document identification system has correctly identified the displayed document within the camera viewfinder feed.
Type: Application
Filed: July 24, 2017
Publication date: January 25, 2018
Inventors: Nils Peter Welinder, Peter N. Belhumeur, Ying Xiong, Jongmin Baek, Simon Kozlov, Thomas Berg, David J. Kriegman
-
Publication number: 20180025252
Abstract: A template creation device includes an acquisition unit configured to acquire a plurality of templates from a plurality of images of different poses of a single object, or a plurality of images for a plurality of objects; a clustering unit configured to divide the plurality of templates into a plurality of groups on the basis of a similarity score; and an integration unit configured to combine the templates in a group into an integrated template, and to create a new template set from the plurality of integrated templates corresponding to each group in the plurality of groups.
Type: Application
Filed: September 22, 2017
Publication date: January 25, 2018
Applicant: OMRON Corporation
Inventor: Yoshinori KONISHI
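The grouping-and-merging flow in this abstract can be pictured with a simple greedy similarity clustering followed by per-group averaging. Cosine similarity, the greedy grouping, and element-wise averaging as the integration step are assumptions made for the sketch, not necessarily the publication's scoring or integration method.

```python
# Hedged sketch: divide templates into groups by a similarity score, then integrate
# each group into one template. Cosine similarity, greedy grouping, and mean
# integration are illustrative assumptions.
import numpy as np

def cluster_and_integrate(templates, sim_threshold=0.9):
    groups = []                                    # each group: list of template indices
    for i, t in enumerate(templates):
        for group in groups:
            rep = templates[group[0]]              # compare against the group's first member
            sim = np.dot(t.ravel(), rep.ravel()) / (
                np.linalg.norm(t) * np.linalg.norm(rep) + 1e-12)
            if sim >= sim_threshold:               # similar enough: join this group
                group.append(i)
                break
        else:
            groups.append([i])                     # otherwise start a new group
    # Integrate each group into a single template (here: element-wise mean).
    return [np.mean([templates[i] for i in g], axis=0) for g in groups]

templates = [np.random.rand(8, 8) for _ in range(6)]
integrated = cluster_and_integrate(templates)
print(len(templates), "templates ->", len(integrated), "integrated templates")
```

The payoff of the integration step is a smaller template set, which reduces the number of matches needed at detection time.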
-
Publication number: 20180025253
Abstract: A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.
Type: Application
Filed: August 11, 2017
Publication date: January 25, 2018
Inventors: Peer-Timo Bremer, Hyojin Kim, Jayaraman J. Thiagarajan
-
Publication number: 20180025254
Abstract: A method and non-transitory computer-readable medium capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. For each of a plurality of different sampling locations in the image, based upon the feature of the bulk grain at the sampling location, a determination is made regarding a classification score for the presence of a classification of material at the sampling location. A quality of the bulk grain of the image is determined based upon an aggregation of the classification scores for the presence of the classification of material at the sampling locations.
Type: Application
Filed: October 2, 2017
Publication date: January 25, 2018
Inventors: Carl Knox Wellington, Aaron J. Bruns, Victor S. Sierra, James J. Phelan, John M. Hageman, Cristian Dima, Hanke Boesch, Herman Herman, Zachary Abraham Pezzementi, Cason Robert Male, Joan Campoy, Carlos Vallespi-gonzalez
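The scoring flow here (a feature per sampling location, a classification score per location, then an aggregate quality value) can be pictured with the short sketch below. The patch-mean feature, the toy scorer, and mean aggregation are assumptions for illustration only.

```python
# Hedged sketch: score many sampling locations of a bulk-grain image for the
# presence of a material class and aggregate the scores into a quality estimate.
# The feature extractor, scorer, and mean aggregation are illustrative assumptions.
import numpy as np

def grain_quality(image, patch=16, scorer=None):
    scorer = scorer or (lambda feat: float(feat > 0.5))    # toy classification score
    scores = []
    for y in range(0, image.shape[0] - patch + 1, patch):
        for x in range(0, image.shape[1] - patch + 1, patch):
            feat = image[y:y + patch, x:x + patch].mean()  # feature at this location
            scores.append(scorer(feat))                    # presence score for the class
    return 1.0 - float(np.mean(scores))                    # fraction of locations free of it

image = np.random.rand(128, 128)
print(round(grain_quality(image), 3))
```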