Patents Issued on September 24, 2019
-
Patent number: 10423819
Abstract: Disclosed herein are methods for analyzing cell kinematics in a nucleated cell culture from a time-series sequence of multiple fluorescence microscopic images of the nucleated cell culture. The method includes the steps of: (a) identifying every cell nucleus in each fluorescence microscopic image; (b) identifying every cell cluster using the cell nuclei identified in step (a); and (c) tracking the cells and/or cell clusters using the cell nuclei and cell clusters identified for the fluorescence microscopic images in steps (a) and (b), respectively.
Type: Grant
Filed: October 31, 2017
Date of Patent: September 24, 2019
Assignee: Chung Yuan Christian University
Inventors: Yuan-Hsiang Chang, Hideo Yokota, Kuniya Abe, Ming-Dar Tsai
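The tracking step (c) can be illustrated with a minimal frame-to-frame linker for nucleus centroids. The greedy nearest-neighbour rule and the `max_dist` cutoff below are assumptions for illustration only; the patent does not specify this particular matching method.

```python
import math

def track_nuclei(prev_centroids, curr_centroids, max_dist=20.0):
    """Greedily link each nucleus centroid in the previous frame to its
    nearest unclaimed centroid in the current frame (hypothetical sketch)."""
    links = {}
    claimed = set()
    for i, (px, py) in enumerate(prev_centroids):
        best_j, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_centroids):
            if j in claimed:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links[i] = best_j       # nucleus i continues as nucleus best_j
            claimed.add(best_j)
    return links
```

Nuclei with no match within `max_dist` are simply dropped, which is where a real implementation would handle cell division or cells leaving the field of view.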
-
Patent number: 10423820
Abstract: The subject matter of the present disclosure generally relates to techniques for image analysis. In certain embodiments, various morphological or intensity-based features as well as different thresholding approaches may be used to segment the subpopulation of interest and classify objects in the images.
Type: Grant
Filed: September 13, 2017
Date of Patent: September 24, 2019
Assignee: GENERAL ELECTRIC COMPANY
Inventors: Alberto Santamaria-Pang, Qing Li, Yunxia Sui, Dmitry Vladimirovich Dylov, Christopher James Sevinsky, Michael E. Marino, Michael J. Gerdes, Daniel Eugene Meyer, Fiona Ginty, Anup Sood
-
Patent number: 10423821
Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for automated profile image generation based on scheduled video conferences. A profile image generation system generates, based on image data captured during a first video conference, a first facial feature data set for a first identified face identified from the image data. The first facial feature data set includes numeric values representing the first identified face. The profile image generation system calculates, based on the first facial feature data set and historic facial feature data sets generated from image data captured during previous video conferences, a first value indicating a likelihood that the first identified face is a first meeting participant that participated in the first video conference. The profile image generation system determines that the first value meets or exceeds a threshold value, and in response, determines that the first identified face is the first meeting participant.
Type: Grant
Filed: October 25, 2017
Date of Patent: September 24, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Florian Raudies, Yi Zhen, Ajith Muralidharan, Yiming Ma
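The likelihood-versus-threshold step can be sketched with numeric feature vectors. Cosine similarity, averaging over historic sets, and the 0.8 threshold are illustrative assumptions; the patent does not disclose the actual scoring function.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two numeric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_meeting_participant(face_vec, historic_vecs, threshold=0.8):
    """Average the similarity of the identified face against historic
    feature sets; the face is accepted as the participant when the
    value meets or exceeds the threshold."""
    score = sum(cosine_similarity(face_vec, h) for h in historic_vecs) / len(historic_vecs)
    return score >= threshold, score
```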
-
Patent number: 10423822
Abstract: Systems and methods for overlaying video segments of actions of audience members with video segments of an event performer are described. A computer implemented method includes: identifying, by a computer device, an event performer in video content; identifying, by the computer device, an audience member in the video content that has a social network relationship to the event performer; correlating, by the computer device, an action of the event performer in the video content to an action of the audience member in the video content; and generating, by the computer device, a composite image comprising an image of the action of the event performer and an image of the action of the audience member.
Type: Grant
Filed: March 15, 2017
Date of Patent: September 24, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit
-
Patent number: 10423823
Abstract: A system and method for identifying a subject based upon ear recognition using a convolutional neural network (CNN) and handcrafted features, wherein an ear in an image is cropped using ground truth annotations and landmark detection is performed to obtain the information required to normalize pose and scale variations. The normalized images are then described by different feature extractors and matched through distance metrics. Finally, scores are fused and a subject identification decision is made.
Type: Grant
Filed: December 17, 2018
Date of Patent: September 24, 2019
Assignee: University of South Florida
Inventors: Sudeep Sarkar, Mauricio Pamplona Segundo, Earnest Eugene Hansley
-
Patent number: 10423824
Abstract: A body information analysis apparatus (1) and method of analyzing hand skin by using same are provided. The method includes activating an image fetching module (12) of the body information analysis apparatus (1) to record an external image (61); activating a processing unit (10) of the body information analysis apparatus (1) to recognize one of a plurality of hand images (81, 82, 83, 84, 85, 86, 87, 88) in the external image (61); recognizing an image in one of the hand images (81, 82, 83, 84, 85, 86, 87, 88) corresponding to a defect; marking one of the hand images (81, 82, 83, 84, 85, 86, 87, 88) based on a location of the image having a defect; and activating a display module (111) of the body information analysis apparatus (1) to show the marked one of the hand images (81, 82, 83, 84, 85, 86, 87, 88).
Type: Grant
Filed: January 3, 2018
Date of Patent: September 24, 2019
Assignee: CAL-COMP BIG DATA, INC.
Inventors: Shyh-Yong Shen, Min-Chang Chi, Hui-Teng Lin, Ching-Wei Wang
-
Patent number: 10423825
Abstract: A retrieval device includes: a printer configured to print a document onto paper together with an identifier image, the identifier image representing an identifier of the document; a storage configured to correlate electronic data for the document with the identifier to store the electronic data; an image taking device configured to photograph the identifier image in the paper; an input device configured to receive an input of a keyword; a search portion configured to search for the keyword in the document by using the electronic data corresponding to the identifier represented in the identifier image photographed; and a display device configured to display a result of the search made by the search portion.
Type: Grant
Filed: June 21, 2016
Date of Patent: September 24, 2019
Assignee: KONICA MINOLTA, INC.
Inventor: Kaitaku Ozawa
-
Patent number: 10423826
Abstract: Systems and methods are provided for processing an image of a financial payment document captured using a mobile device and classifying the type of payment document in order to extract the content therein. These methods may be implemented on a mobile device or a central server, and can be used to identify content on the payment document and determine whether the payment document is ready to be processed by a business or financial institution. The system can identify the type of payment document by identifying features on the payment document and performing a series of steps to determine probabilities that the payment document belongs to a specific document type. The identification steps are arranged starting with the fastest step in order to attempt to quickly determine the payment document type without requiring lengthy, extensive analysis.
Type: Grant
Filed: March 22, 2016
Date of Patent: September 24, 2019
Assignee: Mitek Systems, Inc.
Inventors: Grigori Nepomniachtchi, Vitali Kliatskine, Nikolay Kotovich
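The fastest-first arrangement is an early-exit cascade: run cheap identification steps first and stop as soon as one yields a confident probability. The check names and the 0.9 confidence threshold below are hypothetical; only the ordering idea comes from the abstract.

```python
def classify_document(image, checks, confidence_threshold=0.9):
    """Run identification steps from fastest to slowest, returning as soon
    as one step produces a confident document-type probability.
    `checks` is a list of (name, fn) pairs ordered by increasing cost;
    each fn maps the image to a dict of {doc_type: probability}."""
    for name, fn in checks:
        probs = fn(image)
        doc_type, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= confidence_threshold:
            return doc_type, name   # confident early exit
    return "unknown", None          # no step was confident enough
```

A cheap aspect-ratio test might resolve most checks immediately, leaving a slower OCR-based step for ambiguous documents only.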
-
Patent number: 10423827
Abstract: A method and system for analyzing text in an image. Classification and localization information is identified for the image at a word and character level. A detailed profile is generated that includes attributes of the words and characters identified in the image. One or more objects representing a predicted source of the text are identified in the image. In one embodiment, neural networks are employed to determine localization information and classification information associated with the identified object of interest (e.g., a text string, a character, or a text source).
Type: Grant
Filed: July 5, 2017
Date of Patent: September 24, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Jonathan Wu, Meng Wang, Wei Xia, Ranju Das
-
Patent number: 10423828
Abstract: Techniques for determining reading order in a document. A current labeled text run (R1), RIGHT text run (R2) and DOWN text run (R3) are generated. The R1 labeled text run is processed by a first LSTM, the R2 labeled text run is processed by a second LSTM, and the R3 labeled text run is processed by a third LSTM, wherein each of the LSTMs generates a respective internal representation (R1′, R2′ and R3′). Deep learning tools other than LSTMs can be used, as will be appreciated. The respective internal representations R1′, R2′ and R3′ are concatenated or otherwise combined into a vector or tensor representation and provided to a classifier network that generates a predicted label for a next text run as RIGHT, DOWN or EOS in the reading order of the document.
Type: Grant
Filed: December 15, 2017
Date of Patent: September 24, 2019
Assignee: Adobe Inc.
Inventors: Shagun Sodhani, Kartikay Garg, Balaji Krishnamurthy
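The final step, combining the three internal representations and classifying the next run, reduces to concatenation followed by a scoring layer. The linear scorer below stands in for the trained classifier network; its weights and the one-dimensional representations are purely illustrative.

```python
def predict_next_label(r1_vec, r2_vec, r3_vec, weights,
                       labels=("RIGHT", "DOWN", "EOS")):
    """Concatenate the three internal representations (R1', R2', R3') and
    score each candidate label with a linear layer, returning the argmax.
    `weights` has one row of coefficients per label (hypothetical stand-in
    for the trained classifier network)."""
    combined = list(r1_vec) + list(r2_vec) + list(r3_vec)
    scores = [sum(w * x for w, x in zip(w_row, combined)) for w_row in weights]
    return labels[scores.index(max(scores))]
```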
-
Patent number: 10423829
Abstract: A signal observation device includes: an observation unit that observes a volume of a target signal by using compressed sensing; a filter having a plurality of elements that are arranged in a matrix and that are capable of individually restricting the volume of the target signal to be transmitted to the observation unit; and a control unit that causes the observation unit to observe the volume of the target signal transmitted via the filter by using first control for controlling the elements of the filter on the basis of a first observation matrix in which values of matrix elements are predetermined and that causes the observation unit to observe the volume of the target signal transmitted via the filter using second control for controlling the elements of the filter on the basis of a second observation matrix in which values of matrix elements are based on random numbers.
Type: Grant
Filed: September 21, 2016
Date of Patent: September 24, 2019
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventor: Hiroshi Amano
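In compressed-sensing terms, each observation is y = Φx: the filter elements form a row of the observation matrix Φ and weight the signal before it reaches the single observation unit. The Bernoulli 0/1 random matrix below is an assumption; the patent only states that the second matrix's elements are based on random numbers.

```python
import random

def observe(signal, observation_matrix):
    """y = Phi x: each measurement is the filter-weighted sum of the
    target signal, one measurement per matrix row."""
    return [sum(phi * x for phi, x in zip(row, signal))
            for row in observation_matrix]

def random_observation_matrix(num_measurements, signal_length, seed=0):
    """Second-control case: matrix elements drawn from random numbers
    (Bernoulli 0/1 here, an illustrative choice)."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(signal_length)]
            for _ in range(num_measurements)]
```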
-
Patent number: 10423830
Abstract: Techniques related to eye contact correction to provide a virtual user gaze aligned with a camera while the user views a display are discussed. Such techniques may include encoding an eye region of a source image using a pretrained neural network to generate compressed features, applying a pretrained classifier to the features to determine a motion vector field for the eye region, and warping and inserting the eye region into the source image to generate an eye contact corrected image.
Type: Grant
Filed: April 22, 2016
Date of Patent: September 24, 2019
Assignee: Intel Corporation
Inventors: Edmond Chalom, Or Shimshi
-
Patent number: 10423831
Abstract: A camera captures an image of a structural bearing, such as a hanger bearing or a rocker bearing. Additionally, an instrument detects a temperature. A computing system determines, based on the temperature, an expected angle of the bearing relative to a base line. The computing system also determines an actual angle of the bearing relative to the base line. The computing system superimposes a first line on the image, the first line indicating the expected angle. Furthermore, the computing system superimposes a second line on the image, the second line indicating the actual angle.
Type: Grant
Filed: September 15, 2017
Date of Patent: September 24, 2019
Assignee: Honeywell International Inc.
Inventor: Robert E. De Mers
-
Patent number: 10423832
Abstract: The described positional awareness techniques, employing visual-inertial sensory data gathering and analysis hardware with reference to specific example implementations, implement improvements in the use of sensors, techniques, and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy.
Type: Grant
Filed: April 24, 2018
Date of Patent: September 24, 2019
Assignee: Trifo, Inc.
Inventors: Zhe Zhang, Grace Tsai, Shaoshan Liu
-
Patent number: 10423833
Abstract: A computer system, method, and computer readable product are provided for setting a personal status using augmented reality. In various embodiments, an augmented reality computing device captures an image of a physical scene, which includes a person. The computing device then identifies the person, and accesses a personal status for that person. The computing device generates and displays an augmented reality image that displays the personal status in proximity to the person in the scene.
Type: Grant
Filed: June 7, 2018
Date of Patent: September 24, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jeremy A. Greenberger, Zachary M. Greenberger, Jeffrey A. Kusnitz
-
Patent number: 10423834
Abstract: A network system, such as a transport management system, uses augmented reality (AR) to identify an approaching vehicle. Responsive to receiving a trip request, a trip management module matches the rider with an available driver and instructs a trip monitoring module to monitor the location of the driver's vehicle as it travels to the pickup location. When the driver's vehicle is within a threshold distance of the pickup location, an AR control module instructs the rider client device to begin a live video stream and instructs an image recognition module to monitor the video stream for the driver's vehicle. Responsive to the driver's vehicle entering the field of view of the camera on the rider client device, the AR control module selects computer-generated AR elements and instructs the rider client device to visually augment the video stream to identify the driver's vehicle as it approaches the pickup location.
Type: Grant
Filed: June 27, 2018
Date of Patent: September 24, 2019
Assignee: Uber Technologies, Inc.
Inventors: John Badalamenti, Joshua Inch, Christopher Michael Sanchez, Theodore Russell Sumers
-
Patent number: 10423835
Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
Type: Grant
Filed: December 19, 2018
Date of Patent: September 24, 2019
Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
-
Patent number: 10423836
Abstract: An information providing system 10 includes an acquiring unit 210 that acquires an image captured by an imaging unit (a camera 100-1, a camera 100-2, and a camera 100-3) via a network, an analyzing unit 220 that analyzes the image to generate at least situation information indicating whether a captured person carries a rain gear, and display control units 240 and 320 that display the situation information on a user terminal.
Type: Grant
Filed: June 16, 2016
Date of Patent: September 24, 2019
Assignee: OPTIM CORPORATION
Inventor: Shunji Sugaya
-
Patent number: 10423837
Abstract: An embodiment of a wearable computer apparatus includes a first portable unit for data gathering and communicating feedback and a second portable unit for processing at least the data gathered by the first unit.
Type: Grant
Filed: December 31, 2017
Date of Patent: September 24, 2019
Inventor: Masoud Vaziri
-
Patent number: 10423838
Abstract: In a method for determining the spatial extent of a free queue, proceeding from position information, firstly a monitoring region comprising the free queue is subdivided into a plurality of positions. Proceeding from the position information, objects assigned to the free queue are identified and tracked. A current position of at least a portion of the tracked objects is periodically stored. An average speed of at least a portion of the objects is determined, wherein the average speed of an object is determined on the basis of a plurality of the stored positions of the respective object. Finally, a first map is created, which records, in relation to the positions in the monitoring region, an occurrence density of objects at the corresponding positions. Objects having an average speed outside a predefined range are not taken into account when creating the first map.
Type: Grant
Filed: September 11, 2014
Date of Patent: September 24, 2019
Assignee: XOVIS AG
Inventors: Cyrill Gyger, Markus Herrli Anderegg, David Studer
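The average-speed computation and the speed-filtered occurrence-density map can be sketched directly from the abstract. The grid discretization and the example speed range are assumptions; the patent only requires that objects outside a predefined speed range be excluded from the map.

```python
import math

def average_speed(positions, dt=1.0):
    """Mean displacement per time step over an object's stored positions."""
    if len(positions) < 2:
        return 0.0
    total = sum(math.hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(positions, positions[1:]))
    return total / ((len(positions) - 1) * dt)

def occurrence_density_map(tracks, grid_w, grid_h, speed_range=(0.0, 2.0)):
    """Count object occurrences per grid cell of the monitoring region,
    skipping objects whose average speed falls outside the range
    (fast pass-through traffic is not part of the free queue)."""
    lo, hi = speed_range
    density = [[0] * grid_w for _ in range(grid_h)]
    for positions in tracks:
        if not lo <= average_speed(positions) <= hi:
            continue
        for x, y in positions:
            density[int(y)][int(x)] += 1
    return density
```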
-
Patent number: 10423839
Abstract: A vehicular traffic monitoring system which is capable of providing a complete monitoring system in an assembly capable of being mounted in a plane above or beside a roadway in order to monitor substantially all factors of interest with respect to approaching and receding vehicular traffic along the roadway below.
Type: Grant
Filed: November 16, 2016
Date of Patent: September 24, 2019
Assignees: Laser Technology, Inc., Kama-Tech (HK) Limited
Inventors: Neil Thomas Heeke, Philip John Lack, Eric André Miller
-
Patent number: 10423840
Abstract: A post-processing method for detecting lanes to plan the drive path of an autonomous vehicle by using a segmentation score map and a clustering map is provided. The method includes steps of: a computing device acquiring the segmentation score map and the clustering map from a CNN; instructing a post-processing module to detect lane elements including pixels forming the lanes referring to the segmentation score map and generate seed information referring to the lane elements, the segmentation score map, and the clustering map; instructing the post-processing module to generate base models referring to the seed information and generate lane anchors referring to the base models; instructing the post-processing module to generate lane blobs referring to the lane anchors; and instructing the post-processing module to detect lane candidates referring to the lane blobs and generate a lane model by line-fitting operations on the lane candidates.
Type: Grant
Filed: January 31, 2019
Date of Patent: September 24, 2019
Assignee: StradVision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
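The final line-fitting operation on lane candidates can be illustrated with an ordinary least-squares fit. Fitting x as a function of y is a common convention for near-vertical lanes in image coordinates; the patent does not specify the fitting model (it could equally be a polynomial), so this first-degree fit is an illustrative assumption.

```python
def fit_lane_line(points):
    """Least-squares fit of x = a*y + b through lane-candidate pixel
    coordinates (x as a function of y suits near-vertical lanes)."""
    n = len(points)
    sy = sum(y for _, y in points)
    sx = sum(x for x, _ in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    denom = n * syy - sy * sy        # zero only if all y values coincide
    a = (n * sxy - sy * sx) / denom
    b = (sx - a * sy) / n
    return a, b
```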
-
Patent number: 10423841
Abstract: An abnormality detection device includes a space recognition success determination unit which determines whether an outside space recognition device is successful in space recognition from information which contains space recognition information and environment information, an environment dependence recognition failure classifying unit which determines and classifies whether a failure of the space recognition corresponds to any one of a failure type previously stored in an environment dependence recognition failure type storage unit with respect to the space recognition information determined as failing in the space recognition, and an abnormality detection unit which uses the space recognition information determined as not corresponding to any failure type by the environment dependence recognition failure classifying unit to detect an abnormality of the outside space recognition device in the space recognition information determined as failing in the space recognition.
Type: Grant
Filed: October 26, 2017
Date of Patent: September 24, 2019
Assignee: HITACHI, LTD.
Inventors: Takehisa Nishida, Mariko Okude, Masayoshi Ishikawa, Kazuo Muto, Atsushi Katou
-
Patent number: 10423842
Abstract: A vision system of a vehicle includes at least one camera disposed at a vehicle and having a field of view exterior of the vehicle, and an image processor operable to process image data captured by the camera. Responsive to image processing of captured image data, the image processor determines objects present in the field of view of the camera. The vision system processes additional frames of captured image data to enhance determination of objects of interest. The vision system initially detects an object present in the field of view of the camera and conducts hypotheses filtering and hypotheses merging and, responsive to the hypotheses merging, the system determines that the detected object is an object of interest or determines that the detected object is not an object of interest.
Type: Grant
Filed: February 4, 2019
Date of Patent: September 24, 2019
Assignee: MAGNA ELECTRONICS INC.
Inventors: Nikhil Gupta, Liang Zhang
-
Patent number: 10423843
Abstract: A vision system for a vehicle includes a camera disposed at the vehicle and having a field of view exterior of the vehicle. The camera captures image data. A control includes an image processor operable to process image data captured by the camera. The control, responsive at least in part to putative detection of a traffic sign via image processing by the image processor of image data captured by the camera, enhances resolution of captured image data based at least in part on known traffic sign images to generate upscaled image data. The control compares captured image data to upscaled image data to determine and/or classify and/or identify the putatively detected traffic sign.
Type: Grant
Filed: February 23, 2018
Date of Patent: September 24, 2019
Assignee: MAGNA ELECTRONICS INC.
Inventors: Michael Biemer, Ruediger Boegel
-
Patent number: 10423844
Abstract: The disclosure includes embodiments for providing augmented reality ("AR") vehicular assistance for drivers who are color blind. A method according to some embodiments includes identifying an illuminated light in a driving environment of the vehicle based on sensor data recorded by a sensor. The method includes determining a color of the illuminated light based on the sensor data. The method includes determining if the color is of a specific type. The method includes determining a vehicular action to be taken responsive to the color being of the specific type. The method includes displaying an AR overlay using the AR viewing device that visually depicts a word which describes the vehicular action to be taken.
Type: Grant
Filed: September 27, 2017
Date of Patent: September 24, 2019
Inventors: Siyuan Dai, Nikos Arechiga, Chung-Wei Lin, Shinichi Shiraishi
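The color-to-action-word step is essentially a lookup gated by the "specific type" test. The particular color/word pairs below are hypothetical examples; the patent abstract does not enumerate them.

```python
# Hypothetical mapping from detected light color to the action word
# rendered in the AR overlay (not taken from the patent itself).
ACTION_WORDS = {
    "red": "STOP",
    "yellow": "SLOW",
    "green": "GO",
}

def overlay_word(detected_color, specific_types=("red", "yellow", "green")):
    """Return the action word to display when the detected color is of a
    specific type, or None when no overlay is needed."""
    color = detected_color.lower()
    if color not in specific_types:
        return None
    return ACTION_WORDS.get(color)
```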
-
Patent number: 10423845
Abstract: A method of operating a remote view system with privacy protection and a remote view system with privacy protection. In one embodiment, the method includes receiving a request from a remote device for one or more images of a vehicle interior, receiving one or more images of the vehicle interior, determining whether a privacy key is located within the vehicle interior, determining whether one or more occupants are located within the vehicle interior, retrieving the privacy settings of the vehicle interior stored in memory, responsive to determining that the privacy key and the one or more occupants are located within the vehicle interior, generating one or more privacy images based on the one or more images and the privacy settings of the vehicle interior, and controlling a transceiver to transmit the one or more privacy images to the remote device via an antenna.
Type: Grant
Filed: April 8, 2016
Date of Patent: September 24, 2019
Assignee: Robert Bosch GmbH
Inventors: James Stephen Miller, Patrick Graf, Robert Jones, Bernhard Hilliger
-
Patent number: 10423846
Abstract: A method for identifying a driver change in a motor vehicle with the aid of an interior camera for monitoring the driver, which is characterized in that a driver change is detected when the head of the driver is not detected in the viewing range of the camera. In addition, a corresponding device and a computer program and a machine-readable memory medium are provided.
Type: Grant
Filed: July 27, 2017
Date of Patent: September 24, 2019
Assignee: Robert Bosch GmbH
Inventor: Felix Wulf
-
Patent number: 10423847
Abstract: Systems, methods, and devices for predicting driver intent and future movements of human-driven vehicles are disclosed herein. A computer implemented method includes receiving an image of a proximal vehicle in a region near a vehicle. The method includes determining a region of the image that contains a driver of the proximal vehicle, wherein determining the region comprises determining based on a location of one or more windows of the proximal vehicle. The method includes processing image data only in the region of the image that contains the driver of the proximal vehicle to detect a driver's body language.
Type: Grant
Filed: September 25, 2017
Date of Patent: September 24, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Ashley Elizabeth Micks, Harpreetsingh Banvait, Jinesh J Jain, Brielle Reiff
-
Patent number: 10423848
Abstract: A method, a system, and a computer-readable recording medium for long-distance person identification are provided. The method is applicable to a system having an image capturing device and a depth sensor and includes the following steps. An image of a user is captured by using the image capturing device to generate a user image, and depth information of a user is detected by using a depth sensor to generate user depth information. Soft biometric features of the user are obtained according to the user image and the user depth information, where the soft biometric features include silhouette information and human body features. A soft biometric feature similarity of the user is calculated based on the soft biometric features by using registered information of registered users so as to output a person identification result accordingly.
Type: Grant
Filed: August 7, 2017
Date of Patent: September 24, 2019
Assignee: Wistron Corporation
Inventors: You-Jyun Syu, Ching-An Cho, Ming-Che Ho
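The similarity calculation against registered users can be sketched with soft biometric features as numeric measurements. The relative-difference similarity, equal feature weighting, and the 0.9 acceptance threshold are all illustrative assumptions; the patent does not disclose its similarity formula.

```python
def soft_biometric_similarity(query, registered, weights=None):
    """Weighted-average similarity between two feature dicts (e.g. body
    measurements derived from silhouette and depth information)."""
    weights = weights or {k: 1.0 for k in query}
    num = den = 0.0
    for key, w in weights.items():
        q, r = query[key], registered[key]
        sim = 1.0 - abs(q - r) / max(abs(q), abs(r), 1e-9)
        num += w * sim
        den += w
    return num / den

def identify(query, registry, min_similarity=0.9):
    """Return the best-matching registered user, or None when even the
    best match falls below the threshold."""
    best = max(registry,
               key=lambda name: soft_biometric_similarity(query, registry[name]))
    if soft_biometric_similarity(query, registry[best]) >= min_similarity:
        return best
    return None
```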
-
Patent number: 10423849
Abstract: The arrangement comprises a filter region (10) filtering electromagnetic radiation and a shielding component (20) inhibiting propagation of electromagnetic radiation. The filter region comprises a central filter region (11) and a separate peripheral filter region (13). The shielding component comprises an aperture (21). The aperture is arranged above the central filter region. The central filter region and the peripheral filter region are optimized for different angles of incidence (?, ?) and provided for measurements by individual sensor regions (18, 19).
Type: Grant
Filed: May 3, 2017
Date of Patent: September 24, 2019
Assignee: ams AG
Inventors: David Mehrl, George Kelly
-
Patent number: 10423850
Abstract: In an embodiment, a computer-implemented method of detecting infected objects from large field-of-view images is disclosed. The method comprises receiving, by a processor, a digital image capturing multiple objects; generating, by the processor, a plurality of scaled images from the digital image respectively corresponding to a plurality of scales; and computing a group of feature matrices for the digital image. The method further comprises, for each of the plurality of scaled images, selecting a list of candidate regions from the scaled image, each likely to capture a single object; and for each of the list of candidate regions, performing the following steps: mapping the candidate region back to the digital image to obtain a mapped region; identifying a corresponding portion from each of the group of feature matrices based on the mapping; and determining whether the candidate region is likely to capture the single object infected with a disease based on the group of corresponding portions.
Type: Grant
Filed: October 5, 2017
Date of Patent: September 24, 2019
Assignee: The Climate Corporation
Inventors: Yaqi Chen, Wei Guan
-
Patent number: 10423851
Abstract: Speed and accuracy of character recognition can be improved by isolating text orientation during an early stage of processing an image containing a mixture of horizontal and vertical text. Vertical and horizontal line bounding boxes are defined from characters in the image. In a section of the image containing horizontal text, vertical line bounding boxes may tend to be larger and/or spaced close together due to misalignment of characters. For the same reason, horizontal line bounding boxes may tend to be larger and/or spaced close together in a section of the image containing vertical text. Such variations in size and/or spacing may be used to identify a division between the horizontal and vertical text. A subsequent character recognition process may take advantage of a known division to conserve computing resources.
Type: Grant
Filed: February 28, 2018
Date of Patent: September 24, 2019
Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.
Inventor: Charles David Tallman
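The size cue can be reduced to a simple comparison: the reading direction whose line bounding boxes come out tighter is the likely orientation of a section, since boxes drawn against the grain are inflated by character misalignment. Comparing mean box areas is an illustrative simplification of the size/spacing statistics the patent describes.

```python
def likely_orientation(vert_box_areas, horiz_box_areas):
    """Classify a section of the image as horizontal or vertical text.
    In horizontal text, vertical line bounding boxes are inflated by
    character misalignment (and vice versa), so the orientation with
    the smaller boxes is the likely reading direction."""
    mean_v = sum(vert_box_areas) / len(vert_box_areas)
    mean_h = sum(horiz_box_areas) / len(horiz_box_areas)
    return "horizontal" if mean_v > mean_h else "vertical"
```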
-
Patent number: 10423852
Abstract: In an intelligent character recognition (ICR) method for recognizing hand-written text images using a long-short term memory (LSTM) recurrent neural network (RNN), text images are segmented into text line images, and the text line images are pre-processed to normalize the line height and to equalize the word spacings in each text line. Both training images used to train the RNN network and test images containing text to be recognized by the trained RNN network are pre-processed to have identical heights and identical word spacings between words. This method improves character recognition accuracy.
Type: Grant
Filed: March 20, 2018
Date of Patent: September 24, 2019
Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.
Inventor: Saman Sarraf
-
Patent number: 10423853
Abstract: An information-processing apparatus includes: a first acquisition unit configured to acquire gradation characteristic information which relates to a gradation characteristic; a second acquisition unit configured to acquire axial characteristic information which relates to an axial characteristic including a distribution, on an axis, of graduations corresponding to brightness-related values related to brightness of input image data; and a generation unit configured to generate, based on the input image data, the gradation characteristic information, and the axial characteristic information, distribution information indicating a distribution of the brightness-related value of the input image data using the axis according to the axial characteristic information.
Type: Grant
Filed: February 8, 2017
Date of Patent: September 24, 2019
Assignee: Canon Kabushiki Kaisha
Inventor: Muneki Ando
-
Patent number: 10423854
Abstract: In an image processing apparatus, a controller is configured to perform: acquiring target image data representing a target image including a plurality of pixels; determining a plurality of first candidate character pixels from among the plurality of pixels, determination of the plurality of first candidate character pixels being made for each of the plurality of pixels; setting a plurality of object regions in the target image; determining a plurality of second candidate character pixels from among the plurality of pixels, determination of the plurality of second candidate character pixels being made for each of the plurality of object regions according to a first determination condition; and identifying a character pixel from among the plurality of pixels, the character pixel being included in both the plurality of first candidate character pixels and the plurality of second candidate character pixels.
Type: Grant
Filed: September 20, 2018
Date of Patent: September 24, 2019
Assignee: Brother Kogyo Kabushiki Kaisha
Inventor: Ryuji Yamada
-
Patent number: 10423855
Abstract: In some examples, a system includes a color cluster learning engine and a color recognition engine. The color cluster learning engine may be configured to obtain a set of training images, process the training images to obtain clusters of pixel colors for the training images, identify learned color clusters from the clusters of pixel colors obtained from the training images, and label the learned color clusters with color indicators. The color recognition engine may be configured to receive an input image for color identification, process the input image to obtain a particular cluster of pixel colors that covers the highest number of pixels in the input image, match the particular cluster to a particular learned color cluster labeled with a particular color indicator, and identify a color of the input image as specified by the particular color indicator.
Type: Grant
Filed: March 9, 2017
Date of Patent: September 24, 2019
Assignee: ENTIT SOFTWARE LLC
Inventors: Pashmina Cameron, Timothy Woods
-
Patent number: 10423856
Abstract: A system and methodologies for neuromorphic vision simulate conventional analog NM system functionality and generate digital NM image data that facilitate improved object detection, classification, and tracking.
Type: Grant
Filed: February 20, 2019
Date of Patent: September 24, 2019
Assignees: Volkswagen AG, Audi AG, Porsche AG
Inventors: Edmund Dawes Zink, Douglas Allen Hauger, Lutz Junge, Luis Marcial Hernandez Gonzalez, Jerramy L. Gipson, Anh Vu, Martin Hempel, Nikhil J. George
-
Patent number: 10423857
Abstract: An imaging apparatus of an embodiment of the present invention includes an imaging means (an imaging unit), and a CPU that is configured to perform orientation detection (an orientation detecting unit) to detect an orientation of the imaging apparatus, perform subject detection (a detecting unit) to detect a subject in a detection area (a detection area in a live view image) in an image captured by the imaging means (the live view image), and perform position change (a position changing unit) to change the position of the detection area in the image according to the detected orientation of the imaging apparatus.
Type: Grant
Filed: December 22, 2015
Date of Patent: September 24, 2019
Assignee: CASIO COMPUTER CO., LTD.
Inventors: Tsutomu Shimono, Masayuki Endo
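The position-changing unit can be illustrated by rotating the detection-area rectangle when the device orientation changes. A minimal sketch under the assumption that the area is given in landscape coordinates and only a 90° clockwise portrait rotation is handled; the patent covers repositioning generally.

```python
def reposition_detection_area(area, orientation, frame_w, frame_h):
    """area: (x, y, w, h) top-left rectangle in landscape coordinates.
    Returns the area expressed in the detected orientation's frame."""
    x, y, w, h = area
    if orientation == "landscape":
        return (x, y, w, h)
    if orientation == "portrait":
        # 90-degree clockwise rotation of a W x H frame into an H x W frame:
        # the rectangle's new left edge comes from its old distance to the bottom.
        return (frame_h - y - h, x, h, w)
    raise ValueError(f"unknown orientation: {orientation}")

print(reposition_detection_area((10, 20, 30, 40), "portrait", 640, 480))
```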
-
Patent number: 10423858
Abstract: An example radial histogram matching system may generate a target radial histogram by identifying pixels in an input digital image that are determined to be on, traversing each of the pixels in the input digital image one time to determine how many of the pixels that are turned on are contained in each sector of a circle, and assigning to each element of the target radial histogram the number of on pixels in that element. The system may also compare the target radial histogram to each of an initial and a sequence of rotated radial histograms to determine a match score for each of the comparisons, and identify an offset rotation between a baseline pattern digital image and the input digital image based on the match scores.
Type: Grant
Filed: July 21, 2014
Date of Patent: September 24, 2019
Assignee: ENT. SERVICES DEVELOPMENT CORPORATION LP
Inventor: Joseph Miller
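A minimal sketch of that matching scheme: one pass over the on pixels bins them into angular sectors around the image centre, and the circular shift of the baseline histogram that maximizes a match score gives the rotation offset. The sector count and the dot-product score are assumptions, not taken from the patent.

```python
import math
import numpy as np

def radial_histogram(img, sectors=36):
    """Bin the on pixels of a binary image by angle around the centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    hist = np.zeros(sectors, dtype=int)
    ys, xs = np.nonzero(img)                 # single traversal of on pixels
    for y, x in zip(ys, xs):
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * sectors) % sectors] += 1
    return hist

def rotation_offset(baseline_hist, target_hist, sectors=36):
    """Return the rotation (in degrees) whose shifted baseline best matches."""
    scores = [np.dot(np.roll(baseline_hist, s), target_hist)
              for s in range(sectors)]
    return int(np.argmax(scores)) * 360 // sectors

base = np.zeros((5, 5), dtype=int); base[2, 4] = 1   # one on pixel, right of centre
targ = np.zeros((5, 5), dtype=int); targ[0, 2] = 1   # same pixel, rotated
print(rotation_offset(radial_histogram(base), radial_histogram(targ)))  # 270
```

Comparing shifted histograms instead of rotating the image itself is what makes the offset search cheap: each candidate rotation costs one vector product rather than an image resampling.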
-
Patent number: 10423859
Abstract: Data from one or more sensors is input to a workflow and fragmented to produce HyperFragments. The HyperFragments of input data are processed by a plurality of Distributed Experts, who make decisions about what is included in the HyperFragments or add details relating to elements included therein, producing tagged HyperFragments, which are maintained as tuples in a Semantic Database. Algorithms are applied to process the HyperFragments to create an event definition corresponding to a specific activity. Based on related activity included in historical data and on ground truth data, the event definition is refined to produce a more accurate event definition. The resulting refined event definition can then be used with the current input data to more accurately detect when the specific activity is being carried out.
Type: Grant
Filed: August 23, 2018
Date of Patent: September 24, 2019
Assignee: Orions Digital Systems, Inc.
Inventor: Nils B. Lahr
-
Patent number: 10423860
Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements such as KPI by using an image concatenation and a target object merging network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing an image-manipulating network to generate n manipulated images; instructing an RPN to generate first to n-th object proposals respectively in the manipulated images, and instructing an FC layer to generate first to n-th object detection information; and instructing the target object merging network to merge the object proposals and merge the object detection information. In this method, the object proposals can be generated by using lidar. The method can be useful for multi-camera, SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
Type: Grant
Filed: January 22, 2019
Date of Patent: September 24, 2019
Assignee: StradVision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
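The merging step — combining first to n-th object proposals from the n manipulated images into one set — can be sketched generically with intersection-over-union. This is a greedy stand-in for the patented target object merging network; the IoU threshold and averaging rule are assumptions.

```python
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_proposals(proposal_lists, thresh=0.5):
    """Greedily merge overlapping boxes coming from the n manipulated images;
    sufficiently overlapping boxes are averaged into one."""
    merged = []
    for boxes in proposal_lists:
        for box in boxes:
            for i, m in enumerate(merged):
                if iou(box, m) >= thresh:
                    merged[i] = tuple((p + q) / 2 for p, q in zip(box, m))
                    break
            else:
                merged.append(box)
    return merged

print(merge_proposals([[(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]]))
```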
-
Patent number: 10423861
Abstract: The technology disclosed relates to constructing a convolutional neural network-based classifier for variant classification. In particular, it relates to training a convolutional neural network-based classifier on training data using a backpropagation-based gradient update technique that progressively matches outputs of the convolutional neural network-based classifier with corresponding ground truth labels. The convolutional neural network-based classifier comprises groups of residual blocks, each group of residual blocks is parameterized by a number of convolution filters in the residual blocks, a convolution window size of the residual blocks, and an atrous convolution rate of the residual blocks; the size of the convolution window varies between groups of residual blocks, and the atrous convolution rate varies between groups of residual blocks. The training data includes benign training examples and pathogenic training examples of translated sequence pairs generated from benign variants and pathogenic variants.
Type: Grant
Filed: October 15, 2018
Date of Patent: September 24, 2019
Assignee: Illumina, Inc.
Inventors: Hong Gao, Kai-How Farh, Laksshman Sundaram, Jeremy Francis McRae
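The two key ingredients named in the abstract — atrous (dilated) convolution and residual blocks — can be shown in one dimension with plain numpy. This is an illustration of the general technique, not of the patented architecture; filter values, rates, and the cropped identity shortcut are assumptions.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D convolution whose filter taps are spaced `rate` samples apart,
    widening the receptive field without adding parameters."""
    k = len(w)
    span = (k - 1) * rate
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

def residual_block(x, w, rate):
    """y = identity + conv(x); the identity is cropped to the conv output size."""
    y = atrous_conv1d(x, w, rate)
    span = (len(w) - 1) * rate
    return x[span // 2 : len(x) - (span - span // 2)] + y

x = np.arange(10, dtype=float)
print(atrous_conv1d(x, np.array([1.0, -1.0]), rate=2))  # each entry is x[i] - x[i+2]
```

Varying `rate` across groups of blocks, as the abstract describes, lets deeper groups see progressively longer stretches of the input sequence.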
-
Patent number: 10423862
Abstract: A system for processing a text capture operation is described. The system receives text captured from a rendered document in the text capture operation. The system also receives supplemental information distinct from the captured text. The system determines an action to perform in response to the text capture operation based upon both the captured text and the supplemental information.
Type: Grant
Filed: September 13, 2018
Date of Patent: September 24, 2019
Assignee: Google LLC
Inventors: Martin T. King, Dale L. Grover, Clifford A. Kushler, James Q. Stafford-Fraser
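The decision described above — choosing an action from both the captured text and the supplemental context — amounts to a dispatch over the pair. A hypothetical sketch; the rules and the `app` context key are invented for illustration.

```python
def determine_action(captured_text, supplemental):
    """Pick an action using both the captured text and supplemental
    information (e.g. which application the capture came from)."""
    if "@" in captured_text and supplemental.get("app") == "mail":
        return "compose_email"        # same text in a mail app means email
    if captured_text.startswith("http"):
        return "open_url"
    return "web_search"               # fallback when no rule applies

print(determine_action("alice@example.com", {"app": "mail"}))  # compose_email
```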
-
Patent number: 10423863
Abstract: Examples are disclosed herein that relate to entity tracking. One example provides a computing device comprising a logic processor and a storage device holding instructions executable by the logic processor to receive image data of an environment including a person, process the image data using a face detection algorithm to produce a first face detection output at a first frequency, determine an identity of the person based on the first face detection output, and process the image data using another algorithm that uses less computational resources of the computing device than the face detection algorithm. The instructions are further executable to track the person within the environment based on the tracking output, and perform one or more of updating the other algorithm using a second face detection output, and updating the face detection algorithm using the tracking output.
Type: Grant
Filed: June 28, 2017
Date of Patent: September 24, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Haithem Albadawi, Zongyi Liu
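The core idea — an expensive face detector at a low frequency, a cheap tracker on every frame — can be sketched as a two-rate loop. The detector and tracker here are caller-supplied stand-ins, and `detect_every` is an assumed cadence.

```python
def process_stream(frames, detect, track, detect_every=5):
    """Run the heavy detector every `detect_every` frames to (re)establish
    position and identity; run the cheap tracker on all other frames."""
    position, identity = None, None
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            position, identity = detect(frame)   # heavy, low frequency
        else:
            position = track(frame, position)    # light, every frame
        yield i, position, identity

# Toy stand-ins for the two algorithms:
detections = []
def fake_detect(frame):
    detections.append(frame)
    return ((0, 0), "person-1")
def fake_track(frame, position):
    return position                               # tracker just holds position

results = list(process_stream(range(7), fake_detect, fake_track))
print(detections)   # the detector ran only on frames 0 and 5
```

The abstract's mutual-update step (tracker output correcting the detector and vice versa) would slot into the two branches of the loop.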
-
Patent number: 10423864
Abstract: A plurality of marks (11) equidistantly provided on both side edge parts (1a) of a long medium (1), a plurality of first indicator holes (12) equidistantly given on at least one of the side edge parts (1a), and a plurality of second indicator holes (13) given on at least one of the side edge parts (1a) on a straight line different from a row of the first indicator holes (12) at spacings shorter than spacings of the first indicator holes (12) are provided, and the second indicator holes (13) are each provided to a side of a trailing-end mark (11b), and each gradually comes closer to a leading-end mark (11a) as the long medium (1) runs toward a trailing end.
Type: Grant
Filed: September 17, 2018
Date of Patent: September 24, 2019
Assignee: Max, Co. Ltd
Inventors: Takayuki Ehara, Jun Okazaki, Hiroaki Suto, Hirohisa Usami, Hiroyuki Fukumoto
-
Patent number: 10423865
Abstract: A system and method for paper jam prediction includes a processor, memory and a network interface. Ongoing paper jam data is received from an identified, networked multifunction peripheral. Service call data for the multifunction peripheral indicative of prior service calls is stored in the memory. A sampling window of the paper jam data prior to a service call date is defined and a point in the sampling window when no symptoms of a forthcoming paper jam were present is determined so as to define a prediction window. A relationship between paper jam data in the prediction window of the sampling window and paper jam data outside the prediction window in the sampling window is determined and incoming paper jam data is monitored relative to the relationship data. A paper jam warning is generated when monitored incoming paper jam data indicates a forthcoming paper jam on the multifunction peripheral.
Type: Grant
Filed: March 6, 2018
Date of Patent: September 24, 2019
Assignees: Kabushiki Kaisha Toshiba, Toshiba TEC Kabushiki Kaisha
Inventors: Michael Yeung, Manju Sreekumar, Milong Sabandith, Methee Phoboonme, Louis Ormond
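The windowing in the abstract can be made concrete: take a sampling window of jam events before the service call, then start the prediction window just after the most recent symptom-free stretch inside it. The window lengths and the "quiet stretch" criterion below are illustrative assumptions, not values from the patent.

```python
from datetime import date, timedelta

def prediction_window(jam_dates, service_call, sampling_days=30, quiet_days=7):
    """Return (start, end) of the prediction window inside the sampling
    window: scanning forward, the window restarts after every stretch of
    at least `quiet_days` with no jams (no symptoms present)."""
    start = service_call - timedelta(days=sampling_days)
    jams = sorted(d for d in jam_dates if start <= d < service_call)
    window_start = start
    prev = start
    for d in jams:
        if (d - prev).days >= quiet_days:
            window_start = prev + timedelta(days=quiet_days)
        prev = d
    return window_start, service_call

jams = [date(2019, 9, d) for d in (20, 21, 22, 23)]   # burst just before the call
print(prediction_window(jams, date(2019, 9, 24)))
```

Jam counts inside versus outside that window then give the "relationship" the system monitors incoming data against.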
-
Patent number: 10423866
Abstract: A method for managing a data center that includes racks arranged in aisles, includes guiding an operator, by a mobile terminal, to a desired device of a rack. The guiding step includes: indicating, on a screen of the mobile terminal, a route to follow to arrive near the device; once the operator is near the device, reading, by a reading application of the mobile terminal, an electronic marker pattern placed on a first rack facing the operator to determine if the operator is facing the rack including the desired device; if not, repeating the reading operation on the rack directly adjacent to the first rack; once the rack is identified, reading, by the reading application of the mobile terminal, an optical marker pattern placed on the rack so as to obtain a height reference and thus locate the desired device; and acting upon the desired device using the mobile terminal.
Type: Grant
Filed: March 20, 2015
Date of Patent: September 24, 2019
Assignee: BULL SAS
Inventors: Christophe Guionneau, Matthieu Isoard, Xavier Plattard
-
Patent number: 10423867
Abstract: The present invention is generally directed towards a card and package assembly and methods of making the same. Card and package assemblies in accordance with some embodiments of the present invention may include a package, a data card, the data card packaged at least in part within the package, and an activation indicia, the activation indicia comprising a first portion printed on the package and a second portion printed on the data card. Methods of packaging a data card in accordance with some embodiments of the present invention may include steps of manufacturing or otherwise obtaining a data card, manufacturing or otherwise obtaining a package, determining an activation indicia, packaging the data card at least in part within the package, and printing the activation indicia in part on the data card and in part on the package.
Type: Grant
Filed: December 20, 2016
Date of Patent: September 24, 2019
Assignee: e2interactive, Inc.
Inventors: Merrill Brooks Smith, Chandilyn Smith
-
Patent number: 10423868
Abstract: Removing an embedded barcode in an image. A barcode-embedding area in an image is acquired. Pixels in the barcode-embedding area can be changed from a RGB color space to a color space with a luminance component. The luminance values of the pixels can be determined to be on a dark side or a light side. The luminance values of the pixels on the dark side or the light side can be mapped to luminance values falling into the whole range of a luminance interval. The pixels in the barcode-embedding area can be changed from the color space with the luminance component back to the RGB color space.
Type: Grant
Filed: January 26, 2017
Date of Patent: September 24, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Wenzhe Shi, Xiaoyu Li, Ziying Xin, Li Zhang, Wenya Zhou
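The round trip described above — RGB to a luminance-bearing space, stretch the dark-side luminance across the full range, and back — can be sketched with the BT.601 luma weights. The dark-side threshold, the linear stretch, and the simple chroma proxies are assumptions for illustration; the patent does not specify them here.

```python
import numpy as np

def remove_dark_barcode(region, threshold=128):
    """region: H x W x 3 float RGB barcode-embedding area. Stretch the
    dark-side luminance to the full [0, 255] range, leaving light pixels
    untouched, then convert back to RGB."""
    r, g, b = region[..., 0], region[..., 1], region[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luminance component
    cr = r - y                               # chroma proxies, preserved as-is
    cb = b - y
    dark = y < threshold                     # pixels on the dark side
    if dark.any():
        lo, hi = y[dark].min(), y[dark].max()
        if hi > lo:
            y = np.where(dark, (y - lo) / (hi - lo) * 255.0, y)
    r2 = y + cr                              # invert the forward transform
    b2 = y + cb
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 255)

region = np.array([[[40.0, 40, 40], [100.0, 100, 100]]])
print(remove_dark_barcode(region))   # dark values pushed to 0 and 255
```

Stretching only the dark-side values wipes out the low-amplitude modulation the barcode was embedded in, while the untouched chroma keeps the area's color cast.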