Feature Extraction Patents (Class 382/190)
  • Patent number: 10762172
    Abstract: A method and apparatus for tracking a medication to be administered by a user. The method includes the steps of determining the identity of a medication to be administered by a user, identifying one or more characteristics associated with the medication that are to be used to continue to track the medication, the one or more characteristics including less than a total number of characteristics associated with the medication to be administered, and tracking the medication to be administered in accordance with the identified one or more characteristics through one or more future video images.
    Type: Grant
    Filed: October 5, 2010
    Date of Patent: September 1, 2020
    Assignee: Ai Cure Technologies LLC
    Inventors: Adam Hanina, Gordon Kessler, Lei Guan
  • Patent number: 10762606
    Abstract: An image processing apparatus includes: an acquisition unit configured to acquire a plurality of images each capturing an identical target and having a different attribute; a derivation unit configured to derive features from the plurality of images, using a first neural network; an integration unit configured to integrate the features derived from the plurality of images; and a generation unit configured to generate a higher quality image than the plurality of images from the feature integrated by the integration unit, using a second neural network.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: September 1, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Shunta Tate, Masakazu Matsugu, Yasuhiro Komori, Yusuke Mitarai
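    As a rough structural sketch of the derive–integrate–generate pipeline this abstract describes (not Canon's implementation), the Python snippet below stands in the two neural networks with placeholder functions and uses simple averaging as the integration step; `first_network`, `second_network`, and `fuse` are hypothetical names.

```python
import numpy as np

# Stand-ins for the two neural networks (the real ones are learned models).
def first_network(image: np.ndarray) -> np.ndarray:
    """Derive a feature map from one input image (placeholder: identity features)."""
    return image.astype(np.float32)

def second_network(integrated: np.ndarray) -> np.ndarray:
    """Generate an output image from the integrated feature (placeholder: clip and cast)."""
    return np.clip(integrated, 0, 255).astype(np.uint8)

def fuse(images):
    """Derive features per image, integrate them (here: averaging), and decode a
    single higher-quality image from the integrated feature."""
    features = [first_network(img) for img in images]   # derivation unit
    integrated = np.mean(features, axis=0)              # integration unit
    return second_network(integrated)                   # generation unit

# Example: several noisy captures of the same target average toward a cleaner image.
rng = np.random.default_rng(0)
clean = np.full((8, 8), 128, dtype=np.uint8)
noisy = [clean + rng.normal(0, 20, clean.shape) for _ in range(16)]
print(np.abs(fuse(noisy).astype(int) - 128).mean())  # small residual noise
```

    Averaging many captures of an identical target illustrates why integrating features from several images with different attributes can yield a higher-quality output than any single input.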
  • Patent number: 10762395
    Abstract: An image processing apparatus includes a processor and a memory. The processor executes a program stored in the memory to perform operations including: obtaining a plurality of image groups each containing a plurality of images classified based on contents of images; selecting an image from each of the plurality of image groups based on an evaluation result obtained by evaluating the plurality of images; and generating one image from the plurality of selected images.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: September 1, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Masaru Onozawa
  • Patent number: 10753734
    Abstract: A device, method and system for utilizing an optical array generator, confocal measurement/depth of focus techniques to generate dynamic patterns in a camera for projection onto the surface of an object for three-dimensional (3D) measurement. Projected light patterns are used to generate optical features on the surface of an object to be measured and optical 3D measuring methods which operate according to triangulation, confocal and depth of focus principles are used to measure the object.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 25, 2020
    Assignee: DENTSPLY SIRONA Inc.
    Inventors: Michael Tewes, Markus Berner
  • Patent number: 10754074
    Abstract: A holographic display apparatus for providing an expanded viewing window includes a spatial filter configured to separate a plurality of holographic images generated by the hologram pattern displayed on the spatial light modulator from a plurality of lattice spots generated by a physical structure of the spatial light modulator. The spatial filter includes a plurality of color filters or a plurality of dichroic mirrors separating a first color image, a second color image, and a third color image from a first color lattice spot, a second color lattice spot, and a third color lattice spot.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: August 25, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Juwon Seo, Wontaek Seo, Geeyoung Sung, Changkun Lee, Hongseok Lee, Jaeseung Chung
  • Patent number: 10748397
    Abstract: A monitoring system includes a camera, a storage section, and a control unit. The camera photographs an object. The storage section stores a first photographed image captured by the camera and a second photographed image captured by the camera at the same photographing place as the first photographed image, after the first photographed image was captured. The control unit includes a processor and, through execution of a control program by the processor, functions as an obstacle determination section and a control section. The obstacle determination section calculates a difference between the first photographed image and the second photographed image and determines, based on the difference, whether or not an obstacle is present in the photographing range of the camera. The control section performs, upon determination by the obstacle determination section that an obstacle is present, processing of reporting the result of the determination.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: August 18, 2020
    Assignee: KYOCERA Document Solutions Inc.
    Inventors: Fumiya Sakashita, Yoichi Hiranuma, Shoichi Sakaguchi, Shohei Fujiwara
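    The obstacle determination described here amounts to differencing two frames taken at the same place and checking how much changed. The sketch below is a minimal NumPy illustration of that idea, not KYOCERA's implementation; `pixel_thresh` and `area_ratio` are assumed values.

```python
import numpy as np

def obstacle_present(first_img: np.ndarray, second_img: np.ndarray,
                     pixel_thresh: int = 30, area_ratio: float = 0.05) -> bool:
    """Compare two grayscale frames taken at the same place and report whether
    enough pixels changed to suggest an obstacle entered the camera's view."""
    diff = np.abs(first_img.astype(np.int16) - second_img.astype(np.int16))
    changed = diff > pixel_thresh          # per-pixel change mask
    return changed.mean() > area_ratio     # fraction of changed pixels

# Example: a dark patch appearing in the second frame triggers the report.
base = np.full((100, 100), 200, dtype=np.uint8)
blocked = base.copy()
blocked[20:60, 20:60] = 10
print(obstacle_present(base, blocked))  # True
```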
  • Patent number: 10748166
    Abstract: A method for mining a churn factor causing user churn for a network application includes: calculating, according to a data universe of churned users, a proportion of a quantity of churned users under each user operation scenario where user churn occurs for a network application in a total quantity of the churned users, and determining multiple user operation scenarios corresponding to multiple proportions sequentially placed in foremost positions in a list of all calculated proportions of user operation scenarios ranked in a descending order; determining churn factors of the multiple user operation scenarios; determining, according to the proportions of the churned users under the multiple user operation scenarios in all the churned users, influence weight values of the churn factors; and determining, when the influence weight value of a churn factor is greater than or equal to the threshold, that the churn factor is a major churn factor.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: August 18, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jie Dong, Xiao Wang
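    A simplified reading of the claimed flow: compute per-scenario churn proportions, rank them, keep the top scenarios, and report factors whose influence weight clears a threshold. The sketch below uses made-up scenario counts and treats the proportion itself as the influence weight, which is an assumption.

```python
# Hypothetical churned-user counts per operation scenario (not real data).
churned_by_scenario = {"login": 500, "payment": 300, "level_10": 150, "chat": 50}
total_churned = sum(churned_by_scenario.values())

# Proportion of churned users per scenario, ranked in descending order.
proportions = {s: n / total_churned for s, n in churned_by_scenario.items()}
top_scenarios = sorted(proportions, key=proportions.get, reverse=True)[:3]

# Treat the proportion as the influence weight of the scenario's churn factor
# and keep only factors whose weight is at or above a chosen threshold.
THRESHOLD = 0.2
major_factors = [s for s in top_scenarios if proportions[s] >= THRESHOLD]
print(major_factors)  # ['login', 'payment']
```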
  • Patent number: 10740923
    Abstract: A non-transitory computer-readable recording medium has recorded thereon a computer program for face direction estimation that causes a computer to execute a process including: generating, for each presumed face direction, a face direction converted image by converting the direction of the face represented on an input image into a prescribed direction; generating, for each presumed face direction, a reversed face image by reversing the face represented on the face direction converted image; converting the direction of the face represented on the reversed face image to be the presumed face direction; calculating, for each presumed face direction, an evaluation value that represents the degree of difference between the face represented on the reversed face image and the face represented on the input image, based on the conversion result; and specifying, based on the evaluation value, the direction of the face represented on the input image.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: August 11, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Osafumi Nakayama
  • Patent number: 10741214
    Abstract: An image capture apparatus includes an image acquisition unit, a feature-amount-calculation unit, a score compensation unit, and an image selection unit. The image acquisition unit acquires a plurality of images. The image acquisition unit acquires information relating to image capture timing of the plurality of images. The feature-amount-calculation unit and the score compensation unit evaluate the plurality of images based on the information relating to the image capture timing. The image selection unit selects a predetermined number of images from the plurality of images based on an evaluation result by the feature-amount-calculation unit and the score compensation unit.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: August 11, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Masaru Onozawa
  • Patent number: 10740907
    Abstract: In a case in which the size of a moving body changes depending on the position of the moving body, in order to accurately track the moving body without effort by the user regardless of the change in size, the present disclosure estimates a size of the moving body in each of the images of the frames, updates template data based on the estimated size of the moving body, compares a candidate region that is a candidate of a region where the moving body is present in each of the images of the frames with the template data, and determines whether or not the moving body is positioned in the candidate region based on the comparison result.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: August 11, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Junko Ueda
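    One plausible way to realize the compare-with-template step is classic template matching with a template that is rescaled whenever the estimated size of the moving body changes. The OpenCV sketch below illustrates that reading; `track_step`, the matching method, and `score_thresh` are assumptions, not the patented algorithm.

```python
import cv2
import numpy as np

def track_step(frame: np.ndarray, template: np.ndarray,
               estimated_size: tuple, score_thresh: float = 0.6):
    """One tracking step: rescale the template to the estimated target size,
    find the best-matching candidate region, and accept it only if the
    similarity clears a threshold."""
    template = cv2.resize(template, estimated_size)           # update template data
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    if best_score < score_thresh:                             # moving body not in candidate
        return None, template
    return best_loc, template                                 # top-left corner of the match
```

    A caller would invoke track_step once per frame, feeding the latest size estimate and the updated template back in for the next frame.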
  • Patent number: 10740921
    Abstract: A method for estimating the absolute size dimensions of a test object based on image data of the test object, namely a face or part of a person. The method includes receiving image data of the test object, determining a first model of the test object based on the received image data, and aligning and scaling the first model to a first average model that includes an average of a plurality of first models of reference objects being faces or parts of faces of reference persons. The first models of the reference objects are of a same type as the first model of the test object.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: August 11, 2020
    Assignee: Koninklijke Philips N.V.
    Inventors: Dmitry Nikolayevich Znamenskiy, Ruud Vlutters
  • Patent number: 10733480
    Abstract: A computing device and method are described in a digital medium environment for custom auto tagging of multiple objects. The computing device includes an object detection network and multiple image classification networks. An image is received at the object detection network and includes multiple visual objects. First feature maps are applied to the image at the object detection network and generate object regions associated with the visual objects. The object regions are assigned to the multiple image classification networks, and each image classification network is assigned to a particular object region. Second feature maps are applied to each object region at each image classification network, and each image classification network outputs one or more classes associated with the visual object corresponding to each object region.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: August 4, 2020
    Assignee: Adobe Inc.
    Inventors: Jayant Kumar, Zhe Lin, Vipulkumar C. Dalal
  • Patent number: 10733713
    Abstract: An apparatus includes a first circuit, a second circuit and a third circuit. The first circuit may be configured to set a flag where a current value in a current line of an image is a maximum value in a first window in the current line. The second circuit may be configured to reset the flag based on one or more previous lines of the image where the current value is not a largest value in a second window around the current value. The third circuit may be configured to generate an output value as (i) the current value if the flag is set and (ii) a predetermined value if the flag is reset.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: August 4, 2020
    Assignee: Ambarella International LP
    Inventor: Manish K. Singh
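    A software analogue of the three-circuit flag logic can be written directly from the abstract: set a flag for line-wise maxima, reset it when a larger value appears in a two-dimensional window over the previous lines, and output either the value or a predetermined fill value. The NumPy sketch below follows that reading with assumed window sizes.

```python
import numpy as np

def suppress_non_maxima(image: np.ndarray, line_win: int = 3,
                        area_win: int = 3, fill_value: int = 0) -> np.ndarray:
    """Keep a pixel only if it is the maximum of a horizontal window on its own
    line and no larger value exists in a small 2-D window over previous lines;
    otherwise output fill_value."""
    h, w = image.shape
    out = np.full_like(image, fill_value)
    r_line, r_area = line_win // 2, area_win // 2
    for y in range(h):
        for x in range(w):
            v = image[y, x]
            # First circuit: flag if v is the maximum of the window on this line.
            line_slice = image[y, max(0, x - r_line):x + r_line + 1]
            flag = v >= line_slice.max()
            # Second circuit: reset the flag if a larger value exists in the
            # window covering the previous lines around this position.
            if flag and y > 0:
                prev = image[max(0, y - r_area):y, max(0, x - r_area):x + r_area + 1]
                if prev.size and prev.max() > v:
                    flag = False
            # Third circuit: output the value if the flag is set, else the fill value.
            if flag:
                out[y, x] = v
    return out
```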
  • Patent number: 10733705
    Abstract: An information processing device, a learning processing method, a learning device and an object recognition device capable of improving learning processing accuracy are provided.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: August 4, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yosuke Sakamoto, Umiaki Matsubara
  • Patent number: 10713818
    Abstract: Methods and systems, including computer programs encoded on computer storage media, for compressing data items with a variable compression rate. A system includes an encoder sub-network configured to receive a system input image and to generate an encoded representation of the system input image, the encoder sub-network including a first stack of neural network layers including one or more LSTM neural network layers and one or more non-LSTM neural network layers, the first stack configured to, at each of a plurality of time steps, receive an input image for the time step that is derived from the system input image and generate a corresponding first stack output, and a binarizing neural network layer configured to receive a first stack output as input and generate a corresponding binarized output.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: George Dan Toderici, Sean O'Malley, Rahul Sukthankar, Sung Jin Hwang, Damien Vincent, Nicholas Johnston, David Charles Minnen, Joel Shor, Michele Covell
  • Patent number: 10706269
    Abstract: One or more images including a user's face are captured, and at least one of these images is displayed to the user. These image(s) are used by a face-recognition algorithm to identify or recognize the face in the image(s). The face-recognition algorithm recognizes various features of the face and displays an indication of at least one of those features while performing the face-recognition algorithm. These indications of features can be, for example, dots displayed on the captured image. Additionally, an indication of progress of the face-recognition algorithm is displayed near the user's face. This indication of progress of the face-recognition algorithm can be, for example, a square or other geometric shape in which at least a portion of the user's face is located.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: July 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brigitte Evelyn Eder, Aaron Naoyoshi Sheung Yan Woo, Remi Wesley Ogundokun, Benjamin Alan Goosman
  • Patent number: 10706314
    Abstract: The present disclosure provides an image recognition method and apparatus, a device and a non-volatile computer storage medium. In embodiments of the present disclosure, it is feasible to obtain the to-be-recognized image of the designated space, then perform image segmentation processing on the to-be-recognized image to obtain at least one area image of the designated space, and then perform image matching processing on each area image in said at least one area image to obtain a reference image corresponding to said each area image, so that it is possible to perform recognition processing on said each area image according to image information of the reference image corresponding to said each area image to obtain article information of said each area image. Doing so does not require manual participation, involves simple operations, and achieves a high rate of correctness, thereby improving recognition efficiency and reliability.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: July 7, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Chen Zhao, Haoyuan Gao, Ji Liang
  • Patent number: 10706516
    Abstract: A system and method for comparing and searching for digital data, such as images, using histograms includes receiving a source image, receiving a comparison image, generating a first histogram for the source image and generating a second histogram for the comparison image. The source image may be received from a network device, such as a computer or camera, and the comparison image may be one of a plurality of stored images in a database. The histograms may correspond to an image characteristic, including a color histogram corresponding to the distribution of the intensity of a corresponding color among image pixels in the source image. Each of the first and second histograms is normalized and a similarity score is calculated between the two histograms. The similarity score represents a similarity measure between the two histograms, calculated from a subset of bins, which are independently selected for each image.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: July 7, 2020
    Assignee: FLIR Systems, Inc.
    Inventor: Stefan Schulte
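    The core of the method, normalized histograms compared over an independently selected subset of bins, translates into a few lines of NumPy. The sketch below uses histogram intersection over the union of each image's top-k bins as the similarity score; the actual metric and bin-selection rule in the patent may differ.

```python
import numpy as np

def top_bins_histogram(image_channel: np.ndarray, bins: int = 32, k: int = 8):
    """Normalized intensity histogram plus the indices of its k largest bins."""
    hist, _ = np.histogram(image_channel, bins=bins, range=(0, 256))
    hist = hist / hist.sum()
    return hist, set(np.argsort(hist)[-k:])

def similarity(src_img: np.ndarray, cmp_img: np.ndarray) -> float:
    """Histogram-intersection score restricted to the union of each image's
    independently selected top bins."""
    h1, b1 = top_bins_histogram(src_img)
    h2, b2 = top_bins_histogram(cmp_img)
    idx = sorted(b1 | b2)
    return float(np.minimum(h1[idx], h2[idx]).sum())

rng = np.random.default_rng(0)
src = rng.integers(0, 256, (64, 64))       # stands in for the source image
cmp_img = rng.integers(0, 256, (64, 64))   # stands in for one stored comparison image
print(round(similarity(src, cmp_img), 3))
```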
  • Patent number: 10699428
    Abstract: Embodiments generally relate to a machine-implemented method of automatically adjusting the range of a depth data recording executed by at least one processing device. The method comprises determining, by the at least one processing device, at least one position of a subject to be recorded; determining, by the at least one processing device, at least one spatial range based on the position of the subject; receiving depth information; and constructing, by the at least one processing device, a depth data recording based on the received depth information limited by the at least one spatial range.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: June 30, 2020
    Assignee: BLINXEL PTY LTD
    Inventors: Glen Siver, David Gregory Jones
  • Patent number: 10701303
    Abstract: Certain embodiments involve generating and providing spatial audio using a predictive model. For example, a system generates, using a predictive model, a visual representation of visual content providable to a user device by encoding the visual content into the visual representation that indicates a visual element in the visual content. The system generates, using the predictive model, an audio representation of audio associated with the visual content by encoding the audio into the audio representation that indicates an audio element in the audio. The system also generates, using the predictive model, spatial audio based at least in part on the audio element and associates the spatial audio with the visual element. The system can also augment the visual content using the spatial audio by at least associating the spatial audio with the visual content.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, Pedro Morgado, Timothy Langlois
  • Patent number: 10699184
    Abstract: In one embodiment, a system retrieves a first feature vector for an image. The image is inputted into a first deep-learning model, which is a first-version model, and the first feature vector may be output from a processing layer of the first deep-learning model for the image. The first feature vector is converted using a feature-vector conversion model to obtain a second feature vector for the image. The feature-vector conversion model is trained to convert first-version feature vectors to second-version feature vectors. The second feature vector is associated with a second deep-learning model, and the second deep-learning model is a second-version model. The second-version model is an updated version of the first-version model. A plurality of predictions for the image may be generated using the second feature vector and the second deep-learning model.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: June 30, 2020
    Assignee: Facebook, Inc.
    Inventor: Balmanohar Paluri
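    One simple form such a feature-vector conversion model could take is a linear map fitted on paired first-version and second-version features; the sketch below fits it by least squares on synthetic embeddings. The linear form is an assumption for illustration only.

```python
import numpy as np

# Paired features for the same images from the old and the new model
# (synthetic stand-ins for real embeddings).
rng = np.random.default_rng(0)
v1_feats = rng.normal(size=(1000, 128))     # first-version feature vectors
true_map = rng.normal(size=(128, 256))
v2_feats = v1_feats @ true_map              # second-version feature vectors

# Fit the conversion model; here a plain linear map solved by least squares.
W, *_ = np.linalg.lstsq(v1_feats, v2_feats, rcond=None)

# Convert a stored first-version vector instead of re-running the new model.
converted = v1_feats[0] @ W
print(np.allclose(converted, v2_feats[0], atol=1e-6))  # True
```

    The payoff of such a conversion model is that stored first-version vectors can feed the second-version predictor without recomputing features from the raw images.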
  • Patent number: 10691937
    Abstract: This disclosure relates to a method and system for determining structural blocks of a document. The method may include extracting text lines from the document, generating a feature vector for each text line by determining feature values for a set of features in each text line, and determining at least one dominant feature from among the set of features and at least one corresponding dominance factor, for each structural class, based on the feature vector for each text line. The method may further include deriving a set of rules for classification of the text lines into respective structural classes and determining a structural block tag for each text line based on the set of rules. Each of the set of rules corresponds to one of the structural classes and is based on the at least one dominant feature and the at least one corresponding dominance factor for that class.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: June 23, 2020
    Assignee: Wipro Limited
    Inventors: Raghavendra Hosabettu, Sneha Subhaschandra Banakar
  • Patent number: 10685435
    Abstract: A drawing data generating method according to an embodiment is a method for generating drawing data input to a drawing apparatus that draws a plurality of figure patterns on an object using a charged particle beam. The method includes generating the drawing data in accordance with a data format that not only defines a plurality of pieces of figure information, but also sequentially defines dose information of each figure before or after the plurality of pieces of figure information. The dose information of each of the second and succeeding figures is converted to a representation based on the dose information of any preceding figure, and a data length of the dose information is made variable for each figure. For example, the dose information of each of the second and succeeding figures is converted to a difference representation between a dose of the figure and a dose of the preceding figure, and a data length of the difference representation is changed in accordance with the magnitude of a difference value.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 16, 2020
    Assignee: NuFlare Technology, Inc.
    Inventors: Shigehiro Hara, Kenichi Yasui, Noriaki Nakayamada
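    The dose-encoding idea, store each subsequent figure's dose as a difference from the preceding figure and shrink the record when the difference is small, can be illustrated with a toy byte layout (the real drawing-data format is not reproduced here).

```python
def encode_doses(doses):
    """Delta-encode doses: the first dose is stored as-is, each later dose as a
    signed difference from the previous one, using 1 byte when it fits and 2
    bytes otherwise (an illustrative layout, not the actual data format)."""
    out = bytearray()
    out += doses[0].to_bytes(2, "big")                              # absolute first dose
    for prev, cur in zip(doses, doses[1:]):
        diff = cur - prev
        if -128 <= diff <= 127:
            out += b"\x01" + diff.to_bytes(1, "big", signed=True)   # short record
        else:
            out += b"\x02" + diff.to_bytes(2, "big", signed=True)   # long record
    return bytes(out)

print(encode_doses([1000, 1003, 1001, 1500]).hex())
```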
  • Patent number: 10682089
    Abstract: The present technology relates to an information processing apparatus, an information processing method, and a program capable of checking a plurality of types of analysis results regarding one skin condition item intuitively and easily. The information processing apparatus according to one aspect of the present technology includes an acquisition unit configured to obtain information representing a plurality of types of analysis results on skin conditions obtained by analyzing an image of a skin at a same position, and a presentation unit configured to simultaneously display, on a same image, a plurality of types of visualized information obtained from visual representation of the plurality of types of analysis results. The present technology is applicable to a mobile terminal used together with a skin measurement instrument that photographs a skin image.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: June 16, 2020
    Assignee: Sony Corporation
    Inventors: Natsuki Kimura, Yusuke Nakamura, Akiko Shimizu
  • Patent number: 10685408
    Abstract: A device includes an image data receiving component, a vegetation index generation component, a crop data receiving component, a masking component and a multivariate regression component. The image data receiving component receives image data of a geographic region. The vegetation index generation component generates an array of vegetation indices based on the received image data, and includes a plurality of vegetation index generating components, each operable to generate a respective individual vegetation index based on the received image data. The crop data receiving component receives crop data associated with the geographic region. The masking component generates a masked vegetation index based on the array of vegetation indices and the received crop data. The multivariate regression component generates a crop parameter based on the masked vegetation index.
    Type: Grant
    Filed: September 5, 2015
    Date of Patent: June 16, 2020
    Assignee: OmniEarth, Inc.
    Inventors: David Murr, Shadrian Strong, Kristin Lavigne, Lars P Dyrud, Jonathan T Fentzke
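    A compact NumPy sketch of the pipeline: compute one vegetation index (NDVI) from synthetic red and near-infrared bands, mask it with crop data, and fit a multivariate regression for a crop parameter. The band layout, the quadratic regressors, and the toy target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
red, nir = rng.random((2, 64, 64))           # synthetic red / near-infrared bands
crop_mask = rng.random((64, 64)) > 0.5       # crop data: which pixels are cropland

# One entry of the vegetation-index array: NDVI.
ndvi = (nir - red) / (nir + red + 1e-9)
masked_ndvi = ndvi[crop_mask]                # masked vegetation index

# Multivariate regression of a crop parameter (e.g. yield) on index terms.
X = np.column_stack([masked_ndvi, masked_ndvi ** 2, np.ones_like(masked_ndvi)])
y = 3.0 * masked_ndvi + 0.5 + rng.normal(0, 0.01, masked_ndvi.shape)  # toy target
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)                                 # approximately [3.0, 0.0, 0.5]
```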
  • Patent number: 10685459
    Abstract: The present disclosure describes one or more embodiments of a selective raster image transformation system that quickly and efficiently generates enhanced digital images by selectively transforming edges in raster images to vector drawing segments. In particular, the selective raster image transformation system efficiently utilizes a content-aware, selective approach to identify, display, and transform selected edges of a raster image to a vector drawing segment based on sparse user interactions. In addition, the selective raster image transformation system employs a prioritized pixel line stepping algorithm to generate and provide pixel lines for selective edges of a raster image in real time, even on portable client devices.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: June 16, 2020
    Assignee: ADOBE INC.
    Inventor: John Peterson
  • Patent number: 10679099
    Abstract: An autonomous vehicle vision system for estimating a category of a detected object in an object pose unknown to the system includes a neural network to apply a mapping process to a region of interest in an image including the detected object in the object pose to obtain a point in a 3D manifold space. The system includes an object detector to estimate the category of the detected object in the object pose in the region of interest based on a relationship between the point representing the detected object in the object pose and a plurality of separate object clusters in the 3D manifold space. The system further includes a planner to select an improved route based on a predicted behavior of the category of the detected object in the object pose. The system also includes a controller to control operation of an autonomous vehicle according to the improved route.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: June 9, 2020
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Wadim Kehl, German Ros Sanchez
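    Once the region of interest has been mapped to a point in the manifold space, estimating the category reduces to comparing that point with per-category clusters. The sketch below uses nearest-centroid assignment over hypothetical 3-D centroids; the actual relationship used in the patent may be more involved.

```python
import numpy as np

# Hypothetical cluster centroids in the 3-D manifold space, one per category.
clusters = {
    "pedestrian": np.array([0.9, 0.1, 0.0]),
    "cyclist":    np.array([0.1, 0.8, 0.2]),
    "vehicle":    np.array([0.0, 0.1, 0.9]),
}

def estimate_category(embedded_point: np.ndarray) -> str:
    """Assign the detected object (already mapped to a 3-D point by the neural
    network) to the category whose cluster centroid is nearest."""
    return min(clusters, key=lambda c: np.linalg.norm(embedded_point - clusters[c]))

print(estimate_category(np.array([0.2, 0.7, 0.3])))  # 'cyclist'
```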
  • Patent number: 10679476
    Abstract: A motion detector camera is configured to differentiate between an object or objects passing through the field of view of the camera and an object or objects approaching the sensor. In one example, motion is detected with a camera configured to identify and selectively filter specific directions of motion. The motion detector compares pixel changes, corresponding to the basic area or size of the object, from frame to frame, and detects the amount of increase or decrease in change. An increasing change in the area or size of an object indicates that the object is approaching, and a decreasing change in the area or size of an object indicates that the object is receding.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: June 9, 2020
    Assignee: The Chamberlain Group, Inc.
    Inventors: Eric C. Bretschneider, James J. Fitzgibbon
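    The approaching/receding distinction follows from the sign of the frame-to-frame change in the object's area. A minimal sketch, with an assumed trend threshold:

```python
def classify_motion(areas, min_trend: float = 0.05) -> str:
    """Label an object track as approaching, receding, or passing through based
    on the relative change in its pixel area across consecutive frames."""
    changes = [(b - a) / a for a, b in zip(areas, areas[1:])]
    avg = sum(changes) / len(changes)
    if avg > min_trend:
        return "approaching"   # area growing: object moving toward the sensor
    if avg < -min_trend:
        return "receding"      # area shrinking: object moving away
    return "passing"           # roughly constant area: crossing the field of view

print(classify_motion([100, 120, 150, 190]))  # approaching
print(classify_motion([150, 148, 152, 149]))  # passing
```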
  • Patent number: 10663974
    Abstract: Provided are an object recognition device, an autonomous driving system including the same, and an object recognition method using the object recognition device. The object recognition device includes an object frame information generation unit, a frame analysis unit, an object priority calculator, a frame complexity calculator, and a mode control unit. The object frame information generation unit generates object frame information based on a mode control signal. The frame analysis unit generates object tracking information based on the object frame information. The object priority calculator generates priority information based on the object tracking information. The frame complexity calculator generates a frame complexity based on the object tracking information. The mode control unit generates a mode control signal for adjusting an object recognition range and a calculation amount of the object frame information generation unit based on the priority information, the frame complexity, and the resource occupation state.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: May 26, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jung Hee Suk, Yi-Gyeong Kim, Chun-Gi Lyuh, Young-Deuk Jeon, Min-Hyung Cho
  • Patent number: 10664686
    Abstract: A method for analyzing face information in an electronic device is provided. The method includes detecting at least one face region from an image that is being captured by a camera module, zooming in the at least one detected face region, and analyzing the at least one detected and zoomed in face region according to at least one analysis item.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: May 26, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Joo-Young Son, Jin-Ho Kim, Woo-Sung Kang, Yun-Jung Kim, Hong-Il Kim, Jae-Won Son, Won-Suk Chang, In-Ho Choi, Dae-Young Hyun, Tae-Hwa Hong
  • Patent number: 10657410
    Abstract: Organ tissue properties of a patient are automatically compared with organ tissue properties of a healthy subject group. A population norm for the organ tissue properties is determined by: selecting at least two different tissue properties of the organ; determining for each tissue property previously selected and for each subject of said group a quantitative tissue property map; for each subject of the group, calculating a joint histogram from all the quantitative tissue property maps obtained for said subject; and determining an averaged joint histogram from all subjects of the healthy group, thus defining the population norm. A comparison is automatically performed of the averaged joint histogram with a patient joint histogram obtained for the organ tissue properties of the patient, by calculating a statistical deviation of values of a patient joint histogram relative to values of the averaged joint histogram, and mapping the statistical deviation to the patient organ.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: May 19, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Tom Hilbert, Tobias Kober, Gian Franco Piredda
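    A rough NumPy rendering of the population-norm idea: build a 2-D joint histogram of two tissue-property maps per subject, average over the healthy group, and express the patient's joint histogram as a per-cell z-score against that norm. The synthetic maps and the z-score form of the statistical deviation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
BINS = (20, 20)
RANGE = [[0, 1], [0, 1]]

def joint_hist(prop_a, prop_b):
    """Normalized joint histogram of two quantitative tissue-property maps."""
    h, _, _ = np.histogram2d(prop_a.ravel(), prop_b.ravel(), bins=BINS, range=RANGE)
    return h / h.sum()

# Averaged joint histogram over a healthy subject group (synthetic property maps).
healthy = [joint_hist(rng.random((64, 64)), rng.random((64, 64))) for _ in range(30)]
norm_mean = np.mean(healthy, axis=0)
norm_std = np.std(healthy, axis=0) + 1e-9

# Deviation of one patient's joint histogram from the population norm.
patient = joint_hist(rng.random((64, 64)) ** 2, rng.random((64, 64)))
deviation = (patient - norm_mean) / norm_std     # z-score per histogram cell
print(float(np.abs(deviation).max()))
```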
  • Patent number: 10658006
    Abstract: The aim is to make it possible to select images for generating a moving image even when the individual playback times of the images targeted for selection differ from each other. The image acquisition unit acquires a plurality of images. The feature amount calculation unit evaluates the plurality of images acquired. The moving image playback time setting unit sets a total playback time of data composed of the plurality of images. The image playback time setting unit sets individual playback times for each of the plurality of images. The image selection unit selects a predetermined number of images according to the total playback time from the plurality of images, based on (i) evaluation results of the plurality of images which have been evaluated, (ii) the individual playback times which have been set, and (iii) the total playback time which has been set.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: May 19, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Masaru Onozawa
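    A greedy reading of the selection step: rank images by their evaluation score and keep adding them while their individual playback times still fit within the set total playback time. The sketch below is that simplification, not CASIO's selection logic.

```python
def select_images(candidates, total_time: float):
    """Greedily pick the best-scored images whose individual playback times fit
    within the total playback time set for the generated moving image.
    `candidates` is a list of (name, score, playback_seconds) tuples."""
    chosen, remaining = [], total_time
    for name, score, seconds in sorted(candidates, key=lambda c: c[1], reverse=True):
        if seconds <= remaining:
            chosen.append(name)
            remaining -= seconds
    return chosen

clips = [("a.jpg", 0.9, 3.0), ("b.jpg", 0.7, 5.0), ("c.jpg", 0.8, 2.0), ("d.jpg", 0.4, 1.0)]
print(select_images(clips, total_time=6.0))  # ['a.jpg', 'c.jpg', 'd.jpg']
```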
  • Patent number: 10650232
    Abstract: Approaches to produce verification are described which may be suitably employed in conjunction with a low-resolution image scanner. Features, such as lines, corners, text, texture, edges and the like, are extracted and compared with features for an identified item, or evaluated to determine whether the features collectively are indicative of a produce item or a non-produce item.
    Type: Grant
    Filed: August 26, 2013
    Date of Patent: May 12, 2020
    Assignee: NCR Corporation
    Inventors: Chao He, Meng Yu, Richard C. Quipanes, Em Parac, Kristoffer Dominic Amora
  • Patent number: 10643043
    Abstract: A management system includes a camera that shoots a code in which item information related to a target item has been encoded, a reader that detects the code from an input image obtained by shooting the code and reads the item information from the code, a camera controller that acquires an item image representing the target item from the camera after the code is detected, and a storage that stores the item image acquired by the camera controller, associating the item image with the item information.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: May 5, 2020
    Assignee: KONICA MINOLTA, INC.
    Inventors: Kentaro Aoki, Akinori Tadokoro
  • Patent number: 10643095
    Abstract: First coordinate transformation information between an entire image and a first captured image is calculated by a feature point comparing process. Second coordinate transformation information between the first captured image and a second captured image is calculated by a feature point tracing process, the second captured image being a captured image at a timing when the first coordinate transformation information is calculated. Third coordinate transformation information between an immediately previous captured image and a third captured image is calculated by a feature point tracing process. A data input area in the entire image is mapped on the third captured image based on the first to the third coordinate transformation information pieces. Updates of the first and the second coordinate transformation information pieces may be suppressed where a change amount exceeds a predetermined threshold.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: May 5, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Ryo Kishimoto
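    Treating the three pieces of coordinate transformation information as 3x3 homographies, mapping the data input area onto the current capture is just matrix composition. The translation-only matrices below are stand-ins for real estimated transformations.

```python
import numpy as np

def map_point(H: np.ndarray, point) -> np.ndarray:
    """Apply a 3x3 homography to a 2-D point via homogeneous coordinates."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

# Assumed 3x3 homographies standing in for the three coordinate transformations:
H1 = np.array([[1, 0, 10], [0, 1, 5], [0, 0, 1.0]])   # entire image -> first captured image
H2 = np.array([[1, 0, -2], [0, 1, 3], [0, 0, 1.0]])   # first capture -> second captured image
H3 = np.array([[1, 0, 4], [0, 1, -1], [0, 0, 1.0]])   # previous capture -> third captured image

# Chaining the transformations maps a data-input-area corner from the entire
# image directly onto the third (current) captured image.
H_total = H3 @ H2 @ H1
print(map_point(H_total, (100, 50)))   # [112.  57.]
```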
  • Patent number: 10643101
    Abstract: Disclosed examples include image processing methods and systems to process image data, including computing a plurality of scaled images according to input image data for a current image frame, computing feature vectors for locations of the individual scaled images, classifying the feature vectors to determine sets of detection windows, and grouping detection windows to identify objects in the current frame, where the grouping includes determining first clusters of the detection windows using non-maxima suppression grouping processing, determining positions and scores of second clusters using mean shift clustering according to the first clusters, and determining final clusters representing identified objects in the current image frame using non-maxima suppression grouping of the second clusters.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: May 5, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Manu Mathew, Soyeb Noormohammed Nagori, Shyam Jagannathan
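    The first grouping stage is ordinary non-maxima suppression over scored detection windows; the sketch below shows that stage only (the subsequent mean-shift refinement of cluster positions and scores is omitted).

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Non-maxima suppression over detection windows given as [x1, y1, x2, y2];
    returns indices of the kept, cluster-representative boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]   # drop windows grouped with the top box
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # [0, 2]
```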
  • Patent number: 10635948
    Abstract: A method for finding one or more candidate digital images being likely candidates for depicting a specific object comprising: receiving an object digital image depicting the specific object; determining, using a classification subnet of a convolutional neural network, a class for the specific object depicted in the object digital image; selecting, based on the determined class for the specific object depicted in the object digital image, a feature vector generating subnet from a plurality of feature vector generating subnets; determining, by the selected feature vector generating subnet, a feature vector of the specific object depicted in the object digital image; locating one or more candidate digital images being likely candidates for depicting the specific object depicted in the object digital image by comparing the determined feature vector and feature vectors registered in a database, wherein each registered feature vector is associated with a digital image.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: April 28, 2020
    Assignee: Axis AB
    Inventors: Niclas Danielsson, Simon Molin, Markus Skans, Jakob Grundström
  • Patent number: 10635892
    Abstract: Aspects of the disclosure provide a method for display control. The method includes capturing, by a camera in a terminal device that is in use by a user, a first face image and a second face image of the user, extracting a first face fiducial of a characteristic point on a face of the user from the first face image and a second face fiducial of the characteristic point on the face of the user from the second face image, determining a face location offset value based on the first face fiducial and the second face fiducial, determining, based on the face location offset value, a display location offset value of content to be displayed on a display screen of the terminal device, and performing a display control of the content on the display screen according to the display location offset value.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: April 28, 2020
    Assignees: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHONGQING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Junwei Ge, Qingling Wang, Zhangqun Fan, Weiwei Zhang, Yuyan Cui
  • Patent number: 10628296
    Abstract: Techniques are disclosed for dynamic memory allocation in a machine learning anomaly detection system. According to one embodiment of the disclosure, one or more variable-sized chunks of memory is allocated from a device memory for a memory pool. An application allocates at least one of the chunks of memory from the memory pool for processing a plurality of input data streams in real-time. A request to allocate memory from the memory pool for input data is received. Upon determining that one of the chunks is available in the memory pool to store the input data, the chunk is allocated from the memory pool in response to the request.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 21, 2020
    Assignee: OMNI AI, INC.
    Inventors: Lon W. Risinger, Kishor Adinath Saitwal
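    The allocation path described in the abstract can be mimicked with a small pool object: pre-allocate variable-sized chunks once, then serve requests from the free list and return None when nothing fits. This toy version uses host bytearrays where the patent targets device memory.

```python
class ChunkPool:
    """Toy memory pool: pre-allocate variable-sized chunks once, then serve
    allocation requests from the pool instead of the device allocator."""

    def __init__(self, chunk_sizes):
        # Each chunk is modeled as a (size, buffer) pair; free holds the free indices.
        self.chunks = [(size, bytearray(size)) for size in chunk_sizes]
        self.free = set(range(len(self.chunks)))

    def allocate(self, nbytes: int):
        """Return the index of a free chunk large enough for the request, or None
        if no suitable chunk is available (caller may then fall back or wait)."""
        candidates = [i for i in self.free if self.chunks[i][0] >= nbytes]
        if not candidates:
            return None
        best = min(candidates, key=lambda i: self.chunks[i][0])   # best fit
        self.free.remove(best)
        return best

    def release(self, index: int):
        self.free.add(index)

pool = ChunkPool([4096, 16384, 65536])
idx = pool.allocate(10000)          # gets the 16384-byte chunk
print(idx, pool.allocate(100000))   # 1 None
```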
  • Patent number: 10621676
    Abstract: A system and method for extracting document images from images featuring multiple documents are presented. The method includes receiving a multiple-document image including a plurality of document images, wherein each document image is associated with a document; extracting a plurality of visual identifiers from the multiple-document image, wherein each visual identifier is associated with one of the plurality of document images; analyzing the plurality of visual identifiers to identify each document image; determining, based on the analysis, an image area of each document image; and extracting each document image based on its image area.
    Type: Grant
    Filed: February 2, 2016
    Date of Patent: April 14, 2020
    Assignee: VatBox, Ltd.
    Inventors: Isaac Saft, Noam Guzman
  • Patent number: 10614170
    Abstract: A method of translating a first language-based speech signal into a second language is provided. The method includes receiving the first language-based speech signal, converting the first language-based speech signal into a first language-based text including non-verbal information, by performing voice recognition on the first language-based speech signal, and translating the first language-based text into the second language, based on the non-verbal information.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: April 7, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang-ha Kim, Eun-kyoung Kim, Ji-sang Yu, Jong-youb Ryu, Chi-youn Park, Jin-sik Lee, Jae-won Lee
  • Patent number: 10614690
    Abstract: Provided is a management system in which required capacity of a storage device can be reduced even when the number of events that occur increases and in which the required capacity can be clearly understood.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: April 7, 2020
    Inventor: Hiroshi Aoyama
  • Patent number: 10616471
    Abstract: Techniques are described for automated analysis and filtering of image data. Image data is analyzed to identify regions of interest (ROIs) within the image content. The image data also may have depth estimates applied to content therein. One or more of the ROIs may be designated to possess a base depth, representing a depth of image content against which depths of other content may be compared. Moreover, the depth of the image content within a spatial area of an ROI may be set to be a consistent value, regardless of depth estimates that may have been assigned from other sources. Thereafter, other elements of image content may be assigned content adjustment values in gradients based on their relative depth in image content as compared to the base depth and, optionally, based on their spatial distance from the designated ROI. Image content may be adjusted based on the content adjustment values.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: April 7, 2020
    Assignee: APPLE INC.
    Inventors: Claus Molgaard, Paul M. Hubel, Ziv Attar, Ilana Volfin
  • Patent number: 10607145
    Abstract: Methods, systems, and computer program products for detection of an arbitrarily-shaped source of an abnormal event via use of a hierarchical reconstruction method are provided herein. A computer-implemented method includes detecting an abnormal event based on analysis of sensor data, wherein said analysis of the sensor data comprises comparing the sensor data to a user-defined threshold; generating a query based on the detected abnormal event; processing the query against one or more given data repositories; executing an inverse model using an output generated in relation to said processing to identify a source of the detected abnormal event, wherein the source comprises an arbitrary shape; and outputting the identified source of the detected abnormal event.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventors: Youngdeok Hwang, Jayant R. Kalagnanam, Xiao Liu, Kyong Min Yeo
  • Patent number: 10606556
    Abstract: A method implemented in a data processing system includes receiving a plurality of text strings. A plurality of rules are applied to the text strings. If a condition specified in one of the rules exists in a given text string, one or more attributes are associated to that text string as metadata. One or more of the text strings are selected, using the metadata, as a potential title for the content. A final title is prepared based on the potential title, and the content is published online under the final title.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: March 31, 2020
    Assignee: Leaf Group Ltd.
    Inventors: David M. Yehaskel, Henrik M. Kjallbring
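    A minimal sketch of the rule pipeline, with invented rules and attributes: each rule that matches a text string attaches an attribute as metadata, and the string with the most attributes becomes the potential title.

```python
# Illustrative rules only: each rule is a condition plus the attribute it attaches.
RULES = [
    (lambda s: len(s.split()) <= 12, "concise"),
    (lambda s: s[:1].isupper(),      "capitalized"),
    (lambda s: "?" not in s,         "declarative"),
]

def pick_title(text_strings):
    """Attach attributes to each string as metadata, then pick the string with
    the most attributes as the potential title and lightly clean it."""
    tagged = []
    for s in text_strings:
        metadata = [attr for condition, attr in RULES if condition(s)]
        tagged.append((s, metadata))
    potential = max(tagged, key=lambda t: len(t[1]))
    return potential[0].strip().rstrip(".")          # "final title" preparation step

strings = ["how do I fix this?", "Ten Ways to Improve Your Garden.", "random notes"]
print(pick_title(strings))  # Ten Ways to Improve Your Garden
```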
  • Patent number: 10595072
    Abstract: A wearable apparatus is provided for identifying a person in an environment of a user of the wearable apparatus based on non-facial information. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from the environment of the user, and a processing device programmed to analyze a first image of the plurality of images to determine that a face appears in the first image. The processing device also analyzes a second image of the plurality of images to identify an item of non-facial information appearing in the second image that was captured within a time period including a time when the first image is captured. The processing device also determines identification information of a person associated with the face based on the item of non-facial information.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: March 17, 2020
    Assignee: OrCam Technologies Ltd.
    Inventors: Yonatan Wexler, Amnon Shashua
  • Patent number: 10586104
    Abstract: The system and method of the present disclosure provide a linguistic approach to image processing. Prior art focused on extracting well-defined single objects occupying a large portion of an image area; however, there was no focus on higher-level semantics or the distribution of object categories within the image. In contrast to imagery from handheld devices, remotely sensed data contains numerous objects because of its relatively large coverage, and the distribution over objects is critical to analyzing such large coverage. Accordingly, in the present disclosure, a generative statistical model is defined wherein an aerial image is modelled as a collection of one or more themes and each theme is modelled as a collection of object categories. The model automatically adapts to the scale of the aerial image and appropriately identifies the themes, which may be used for applications including land use monitoring, infrastructure management, and the like.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: March 10, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Shailesh Shankar Deshpande, Balamuralidhar Purushothaman
  • Patent number: 10586335
    Abstract: Techniques are provided for segmentation of a hand from a forearm in an image frame. A methodology implementing the techniques according to an embodiment includes estimating a wrist line within an image shape that includes a forearm and a hand. The wrist line estimation is based on a search for a minimum width region of the shape that is surrounded by adjacent regions of greater width on each side of the minimum width region. The method also includes determining a forearm segment, and a hand segment that is separated from the forearm segment by the wrist line. The method further includes labeling the forearm segment and the hand segment. The labeling is based on a connected component analysis of the forearm segment and the hand segment. The method further includes removing the labeled forearm segment from the image frame to generate the image segmentation of the hand.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventors: Asaf Bar Zvi, Kfir Viente
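    The wrist-line search, find the narrowest image line of the hand-plus-forearm shape that is flanked by wider lines, is easy to express over a binary mask. The NumPy sketch below uses a synthetic silhouette and a simplified neighbor test in place of the full method.

```python
import numpy as np

def find_wrist_row(mask: np.ndarray) -> int:
    """Estimate the wrist line of a binary hand-plus-forearm mask (rows are image
    lines) as the narrowest row that has wider regions on both sides of it."""
    widths = mask.sum(axis=1)
    rows = np.flatnonzero(widths > 0)
    best, best_width = -1, np.inf
    for r in rows[1:-1]:
        if widths[r] < best_width and widths[r - 1] > widths[r] < widths[r + 1]:
            best, best_width = r, widths[r]
    return best

# Synthetic silhouette: wide forearm, narrow wrist at row 6, wider hand below.
mask = np.zeros((12, 20), dtype=np.uint8)
mask[0:6, 6:14] = 1      # forearm segment, width 8
mask[6, 8:12] = 1        # wrist, width 4
mask[7:12, 5:15] = 1     # hand segment, width 10
wrist = find_wrist_row(mask)
hand_segment = mask.copy()
hand_segment[:wrist + 1] = 0   # remove the labeled forearm segment and wrist line
print(wrist)  # 6
```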
  • Patent number: 10572718
    Abstract: A foundation analysis method adopted by a body information analysis apparatus (1) includes following steps: performing positioning for each part of a face after the face is recognized by an image recognition module (12) of the apparatus (1); obtaining positions of at least a left eye (53), a right eye (54), and a nose (52) after positioning; determining a position of a left foundation (81) according to the left eye (53) and the nose (52); determining another position of a right foundation (82) according to the right eye (54) and the nose (52); analyzing average color values of the two foundations (81,82); comparing two average color values of the two foundations (81,82) with default color values or comparing the two average color values with each other; displaying a comparison result at a display module (111) of the apparatus (1); and, re-executing above steps before an assisting function is terminated.
    Type: Grant
    Filed: January 14, 2018
    Date of Patent: February 25, 2020
    Assignee: CAL-COMP BIG DATA, INC.
    Inventors: Shyh-Yong Shen, Min-Chang Chi, Eric Budiman Gosno
  • Patent number: 10565432
    Abstract: In an approach to establishing personal identity using multiple sub-optimal images, a method includes receiving a set of sub-optimal input images, identifying a first and a second user feature in the set of sub-optimal input images, and determining a confidence score of the user features by comparison to user profile images. The method additionally includes determining a combined confidence score of the first user feature and the second user feature and determining whether features match a user by: (i) determining whether the combined confidence score is higher than a pre-determined threshold for the combined confidence score, (ii) determining whether the confidence score of the first user feature is higher than a pre-determined threshold for the user features, and (iii) determining whether the confidence score of the second user feature is higher than the pre-determined threshold for the user features.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: February 18, 2020
    Assignee: International Business Machines Corporation
    Inventors: Yuk L. Chan, Deepti M. Naphade, Tin Hang To
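    The three-part decision reduces to a pair of per-feature thresholds plus a combined-score threshold. In the sketch below the combination rule (a plain sum) and both threshold values are assumptions.

```python
def features_match_user(conf_first: float, conf_second: float,
                        combined_thresh: float = 1.5,
                        feature_thresh: float = 0.6) -> bool:
    """Decide that the sub-optimal images match the user only when (i) the
    combined score and (ii, iii) each individual feature score clear their thresholds."""
    combined = conf_first + conf_second
    return (combined >= combined_thresh
            and conf_first >= feature_thresh
            and conf_second >= feature_thresh)

print(features_match_user(0.9, 0.7))   # True
print(features_match_user(0.95, 0.5))  # False: second feature below its threshold
```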