Local Or Regional Features Patents (Class 382/195)
  • Patent number: 11012650
    Abstract: An image processing apparatus comprises a first calculation unit configured to calculate, for an input image, a first range that is within a depth of field from a focus position; a second calculation unit configured to calculate, for the input image, a second range that is within a depth of field from the focus position corresponding to an aperture value virtually changed by a predetermined amount from an aperture value employed to capture the input image; and a display control unit configured to display an area in the first range or the second range on a display apparatus depending on an image capturing mode employed to capture the input image, such that the area in the first range or the second range is distinguishable from an area other than the area in the first range or the second range.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: May 18, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kenji Onuki
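    A minimal sketch of the two depth-of-field ranges described in the entry above, using the standard thin-lens hyperfocal approximation (the patent does not specify a formula); the focal length, aperture values, circle of confusion, and focus distance are illustrative assumptions.
```python
# Sketch of computing two depth-of-field ranges, as in the first/second
# calculation units above. Uses the standard hyperfocal approximation; the
# patent does not prescribe these formulas.

def dof_range(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Return (near, far) limits of acceptable sharpness in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= hyperfocal:
        far = float("inf")
    else:
        far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
    return near, far

if __name__ == "__main__":
    focal, focus = 50.0, 2000.0                  # 50 mm lens focused at 2 m
    captured_aperture = 2.8
    virtual_aperture = captured_aperture + 1.2   # aperture virtually changed by a fixed amount
    first_range = dof_range(focal, captured_aperture, focus)
    second_range = dof_range(focal, virtual_aperture, focus)
    print("first range (capture aperture): %.0f-%.0f mm" % first_range)
    print("second range (virtual aperture): %.0f-%.0f mm" % second_range)
```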
  • Patent number: 11003951
    Abstract: An image processing apparatus is provided. The image processing apparatus includes a memory configured to store at least one instruction; and a processor configured to read the at least one instruction and configured to, according to the at least one instruction: apply a learning network model to an input image frame and acquire information on an area of interest; and acquire an output image frame by retargeting the input image frame based on the acquired information on the area of interest. The learning network model is a model that is trained to acquire the information on the area of interest in the input image frame.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: May 11, 2021
    Assignees: SAMSUNG ELECTRONICS CO., LTD., Sogang University Research & Business Development Foundation
    Inventors: Hyungjun Lim, Suk-Ju Kang, Seung Joon Lee, Youngsu Moon, Siyeong Lee, Sung In Cho
  • Patent number: 10993524
    Abstract: A method and system for monitoring the location of an oral care device (102) within the mouth of a user includes: an oral care device (102) having a light source (104); an optical sensor (112) configured to receive light emitted by the light source and to receive light reflected from a user using the oral care device; a computing device (110) adapted to receive and process signals generated by the optical sensor, and programmed to: detect at least one facial feature of the user from the received light reflected from the user; estimate the location of the user's teeth according to the detected facial feature; determine a location of the light source according to the received emitted light; compare the location of the light source to the location of the teeth to estimate the position of the oral care device with respect to the teeth of the user.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: May 4, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Vincentius Paulus Buil, Lucas Jacobus Franciscus Geurts, Frederik Jan De Bruijn, Karl Catharina Van Bree
  • Patent number: 10997463
    Abstract: In implementations of recognizing text in images, text recognition systems are trained using noisy images that have nuisance factors applied, and corresponding clean images (e.g., without nuisance factors). Clean images serve as supervision at both feature and pixel levels, so that text recognition systems are trained to be feature invariant (e.g., by requiring features extracted from a noisy image to match features extracted from a clean image), and feature complete (e.g., by requiring that features extracted from a noisy image be sufficient to generate a clean image). Accordingly, text recognition systems generalize to text not included in training images, and are robust to nuisance factors. Furthermore, since clean images are provided as supervision at feature and pixel levels, training requires fewer training images than text recognition systems that are not trained with a supervisory clean image, thus saving time and resources.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin, Yang Liu
  • Patent number: 10993688
    Abstract: Method of data processing for Computed Tomography from a spectral image data set of an imaged zone, an anatomical image data set of the imaged zone, and an anatomical model, comprising: a. Assigning an initial set of values to a regularization scheme, b. Performing a noise reduction scheme on the spectral image data set, wherein said noise reduction scheme is a function of the regularization scheme, of the spectral image data set and of the anatomical image data set, in order to obtain a processed spectral image data set, c. Performing a segmentation of structures of interest using the anatomical data set, the processed spectral image data set, and the anatomical model, in order to obtain a segmentation result, d. Updating the regularization scheme based on the segmentation result, e.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: May 4, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Raz Carmi, Liran Goshen, Mordechay Pinchas Freiman
  • Patent number: 10999499
    Abstract: Embodiments disclosed herein provide systems, methods, and computer readable media for replacing a video background in real-time. The video comprises a plurality of image frames. In a particular embodiment, a method provides generating a range image from a subject image frame of the plurality of image frames. The range image indicates pixel distances from a plenoptic camera that captured the plurality of image frames. The method further provides identifying background pixels that represent a background portion of the subject image frame based on the range image and replacing the background pixels with replacement background pixels in the subject image frame.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: May 4, 2021
    Assignee: Avaya, Inc.
    Inventors: John F. Buford, Mehmet C. Balasaygun
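    A minimal sketch of the depth-thresholded background replacement described in the entry above, assuming the range image from the plenoptic camera is already available as a per-pixel distance map; the distance threshold and array shapes are illustrative.
```python
import numpy as np

# Sketch of background replacement driven by a range image, as in the entry
# above. Pixels farther than a subject distance are treated as background.

def replace_background(frame, range_image, replacement, max_subject_distance=1.5):
    """Replace pixels farther than max_subject_distance with replacement pixels."""
    background_mask = range_image > max_subject_distance          # H x W boolean mask
    output = frame.copy()
    output[background_mask] = replacement[background_mask]        # swap background pixels
    return output

# Tiny synthetic example: a 4x4 RGB frame, subject in the top-left corner.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
range_image = np.full((4, 4), 3.0); range_image[:2, :2] = 1.0     # near pixels = subject
replacement = np.full((4, 4, 3), 255, dtype=np.uint8)             # white virtual background
print(replace_background(frame, range_image, replacement)[..., 0])
```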
  • Patent number: 10984536
    Abstract: The present invention discloses a method operable on a digital electronic device comprising an ISP and an image sensor, for initiating a motion diagnostic process on digital images captured by the sensor. The method operable on a digital electronic device may also be designed to comprise additional steps such as reading a digital image stored in a memory unit, wherein the ISP is configured with computerized instructions comprising instructions to identify regions in the digital image, operating a hot region detection process to detect hot regions among the identified regions, creating a hot region map representing the hot regions detected among the identified regions, and then allowing the requester access to the captured hot region map via communicating with the memory unit, wherein said access allows the requester to perform a motion detection on the digital image. In some cases, the ISP or the digital electronic device may conduct a motion detection on the detected hot regions.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: April 20, 2021
    Assignee: EMZA VISUAL SENSE LTD
    Inventors: Zeev Smilansky, Tal Hendel, Tomer Kimhi
  • Patent number: 10979628
    Abstract: An image processing apparatus includes processing circuitry to: detect a region of a face of a person included in a moving image signal input at a certain frame rate as a specific region; set a region other than the specific region as a processing region; apply processing to the processing region to output an output image signal having information volume less than information volume of the input moving image signal; and encode the output image signal. The processing applied to the processing region includes at least one of low-pass filtering to selectively filter a spatial frequency of the processing region, contrast reduction processing to selectively reduce contrast of the processing region, and frame rate reduction processing to selectively reduce the frame rate of the processing region.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: April 13, 2021
    Assignee: RICOH COMPANY, LTD.
    Inventor: Masaki Nose
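    A minimal sketch of the selective low-pass step described in the entry above, assuming the detected face region is already available as a rectangle; SciPy's Gaussian filter stands in for whatever low-pass filter the apparatus actually uses.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: blur everything outside the face rectangle so the processing region
# carries less information before encoding. Face detection itself is assumed
# and replaced here by a hard-coded rectangle.

def blur_outside_region(gray_frame, face_box, sigma=3.0):
    top, left, bottom, right = face_box
    output = gaussian_filter(gray_frame.astype(np.float32), sigma=sigma)   # low-pass whole frame
    output[top:bottom, left:right] = gray_frame[top:bottom, left:right]    # keep the face sharp
    return output

frame = np.random.randint(0, 256, (120, 160)).astype(np.uint8)
print(blur_outside_region(frame, face_box=(30, 40, 90, 120)).shape)
```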
  • Patent number: 10977482
    Abstract: An object attribution analyzing method, applied to an object attribution analyzing device, includes dividing a plurality of continuous frames into a current frame and several previous frames, utilizing face detection to track and compute a first attribution predicted value of an object within the current frame, utilizing the face detection to acquire a feature parameter of the object within the current frame for setting a first weighting, acquiring a second attribution predicted value of the object within the several previous frames, setting a second weighting in accordance with the first weighting, and generating a first induction attribution predicted value of the object within the plurality of continuous frames via the first attribution predicted value weighted by the first weighting and the second attribution predicted value weighted by the second weighting.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: April 13, 2021
    Assignee: VIVOTEK INC.
    Inventors: Kuan-Yu Lin, Chun-Yi Wu, Sheng-Yuan Chen
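    A minimal sketch of the weighted fusion described in the entry above, assuming the first weighting is derived from a face-detection confidence score; the attribute (age) and the weighting rule are illustrative assumptions.
```python
# Sketch: the current frame's attribute prediction and the prediction from
# previous frames are blended with complementary weights, where the first
# weight comes from a feature parameter (here, detection confidence).

def fuse_attribute(current_pred, previous_pred, detection_confidence):
    first_weight = max(0.0, min(1.0, detection_confidence))   # weight for the current frame
    second_weight = 1.0 - first_weight                         # second weighting set from the first
    return first_weight * current_pred + second_weight * previous_pred

# e.g. a predicted age: a confident detection in the current frame dominates
print(fuse_attribute(current_pred=34.0, previous_pred=30.0, detection_confidence=0.8))
```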
  • Patent number: 10970578
    Abstract: A system receives images capturing a non-planar surface from different respective angles, where each of the images captures a section of an area on the non-planar surface, the area including textual or graphical information. The system stitches together the images to obtain a stitched image that captures an entirety of the area on the non-planar surface. The system processes the stitched image to extract the textual or graphical information from the area on the non-planar surface. The system then associates the extracted textual or graphical information with one or more information fields in a report template, and generates a report by populating the one or more information fields of the report template with corresponding associated information within the extracted textual or graphical information.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: April 6, 2021
    Assignee: JOHNSON CONTROLS FIRE PROTECTION LP
    Inventors: Liam Moore, Ana Vinogradova, Volodymyr Korolyov
  • Patent number: 10959787
    Abstract: A medical manipulator system having a controller configured to receive an image showing a state of a treating portion of a manipulator from an endoscope, determine a magnitude of an external force exerted on the manipulator based on the state of the treating portion of the manipulator shown in the image, generate an auxiliary image in which a display form thereof is changed with a change in the magnitude of the external force exerted on the manipulator determined, determine a display position of the auxiliary image in the image on the basis of the magnitude of the external force exerted on the manipulator determined, and generate a composited image signal of a composited image in which the auxiliary image is superimposed on the image on the basis of the display position determined.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: March 30, 2021
    Assignee: OLYMPUS CORPORATION
    Inventors: Takumi Isoda, Mitsuaki Hasegawa
  • Patent number: 10963302
    Abstract: A spatially programmed logic circuit (SPLC) array system performs spatial compilation of programs for use in the SPLCs to produce standardized compiled blocks representing predetermined portions of an SPLC. The blocks may be freely relocated in an SPLC after compilation by editing of the compiled file. Inter-block communication circuitry allows joining of blocks within an SPLC or across SPLCs to allow scalability and accommodation of different programs with efficient utilization of an SPLC for multiple programs, again without recompilation.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: March 30, 2021
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Jing Li, Yue Zha
  • Patent number: 10963735
    Abstract: Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: March 30, 2021
    Assignee: Digimarc Corporation
    Inventor: Geoffrey B. Rhoads
  • Patent number: 10956714
    Abstract: A method and apparatus for detecting a living body, an electronic device and a storage medium include: performing target object detection on a first image captured by a first image sensor in a binocular camera apparatus to obtain a first target region, and performing the target object detection on a second image captured by a second image sensor in the binocular camera apparatus to obtain a second target region; obtaining key point depth information of a target object according to the first target region and the second target region; and determining, based on the key point depth information of the target object, whether the target object is a living body.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 23, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Rui Zhang, Kai Yang, Tianpeng Bao, Liwei Wu
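    A minimal sketch of the key-point depth step described in the entry above, assuming a rectified binocular rig so that depth follows from disparity (Z = f·B/d); matching the key points between the two target regions is assumed to be done already.
```python
# Sketch: per-keypoint depth from binocular disparity. A flat, printed face
# yields nearly constant key-point depth, while a real face does not.

def keypoint_depths(left_keypoints, right_keypoints, focal_px, baseline_m):
    """left/right_keypoints: lists of (x, y) pixel coordinates of matched points."""
    depths = []
    for (xl, _), (xr, _) in zip(left_keypoints, right_keypoints):
        disparity = xl - xr
        depths.append(focal_px * baseline_m / disparity if disparity > 0 else float("inf"))
    return depths

depths = keypoint_depths([(410, 200), (455, 205), (432, 260)],
                         [(380, 200), (424, 205), (404, 260)],
                         focal_px=800.0, baseline_m=0.06)
print(depths, "depth spread:", max(depths) - min(depths))
```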
  • Patent number: 10936863
    Abstract: Systems and methods for automatic information retrieval from imaged documents. Deep network architectures retrieve information from imaged documents using a neuronal visual-linguistic mechanism including a geometrically trained neuronal network. An expense management platform uses the neuronal visual-linguistic mechanism to determine geometric-semantic information of the imaged document.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: March 2, 2021
    Assignee: WAY2VAT LTD.
    Inventors: Amos Simantov, Roy Shilkrot, Nimrod Morag, Rinon Gal
  • Patent number: 10929294
    Abstract: In an embedding caching system, embeddings generated from previous problems are re-used to improve performance on future problems. A data structure stores problems and their corresponding embeddings. When computing future embeddings, this data structure can be queried to determine whether an embedding has already been computed for a problem with the same structure. If it has, the embedding can be retrieved from the data structure, saving the time and computational expense of generating a new embedding. In one variation, the query is not based on exact matches. If a new problem is similar in structure to previous problems, those embeddings may be used to accelerate the generating of an embedding for the new problem, even if they cannot be used directly to embed the new problem.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: February 23, 2021
    Assignee: QC Ware Corp.
    Inventors: James W. Brahm, David A. B. Hyde, Peter McMahon
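    A minimal sketch of the exact-match embedding cache described in the entry above, assuming a problem's structure can be reduced to a hashable canonical key (a sorted edge list here); the similarity-based reuse the abstract also mentions is not shown.
```python
# Sketch: a data structure that stores problems and their embeddings so an
# expensive embedding computation is paid only once per problem structure.

class EmbeddingCache:
    def __init__(self, compute_embedding):
        self._compute = compute_embedding
        self._store = {}

    @staticmethod
    def _key(problem_edges):
        # canonical, hashable representation of the problem structure
        return tuple(sorted(tuple(sorted(e)) for e in problem_edges))

    def embedding_for(self, problem_edges):
        key = self._key(problem_edges)
        if key not in self._store:                 # miss: pay the embedding cost once
            self._store[key] = self._compute(problem_edges)
        return self._store[key]

cache = EmbeddingCache(lambda edges: {"embedded": len(edges)})  # stand-in for a real embedder
print(cache.embedding_for([(0, 1), (1, 2)]))
print(cache.embedding_for([(1, 2), (1, 0)]))       # same structure -> served from the cache
```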
  • Patent number: 10917648
    Abstract: An image processing apparatus sets a region of interest for an image frame of a moving image captured by a capturing unit, based on an operation by a user, detects a moving object region in the image frame, determines whether at least part of the detected moving object region is contained in the region of interest or not, and, in a case where at least part of the detected moving object region is determined to be contained in the region of interest, performs encoding such that the entire region of interest becomes higher in image quality than an outside of the region of interest.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: February 9, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Keiko Yonezawa
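    A minimal sketch of the encoding decision described in the entry above, modelling "higher image quality" as a lower quantization parameter; the rectangle representation and QP values are illustrative assumptions.
```python
# Sketch: if any part of the detected moving-object rectangle lies inside the
# user-defined region of interest, encode the entire ROI at higher quality.

def rectangles_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def roi_quality(roi, moving_object_box, qp_high_quality=22, qp_normal=34):
    return qp_high_quality if rectangles_overlap(roi, moving_object_box) else qp_normal

print(roi_quality(roi=(100, 100, 300, 300), moving_object_box=(280, 250, 400, 380)))
```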
  • Patent number: 10915988
    Abstract: An image stitching method capable of effectively decreasing the number of sampling points is applied to a first monitoring image and a second monitoring image overlapped with each other and generated by a monitoring camera device. The image stitching method includes acquiring a first sampling point and a second sampling point respectively on the first monitoring image and the second monitoring image, detecting a first specific parameter of the first monitoring image and a second specific parameter of the second monitoring image, determining whether the first sampling point is matched with the second sampling point according to the first specific parameter and the second specific parameter, and deciding whether to stitch the first monitoring image and the second monitoring image by the first sampling point and the second sampling point according to a determination result.
    Type: Grant
    Filed: June 9, 2019
    Date of Patent: February 9, 2021
    Assignee: VIVOTEK INC.
    Inventor: Chao-Tan Huang
  • Patent number: 10902278
    Abstract: In an image processing system, an image processing apparatus configured to recognize images of documents is connected through a network to terminal apparatuses each including an input unit and a display unit. The image processing apparatus includes: a recognition unit configured to perform character recognition processing on an image; a confidential information detecting unit configured to detect confidential information from a result of the character recognition processing; and a manipulation unit configured to generate, based on the confidential information in the image, a first manipulated image obtained by fragmenting the confidential information. Each of the terminal apparatuses includes: a display unit configured to display the first manipulated image; and an input unit configured to input corrected data for the first manipulated image.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: January 26, 2021
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA SOLUTIONS CORPORATION
    Inventors: Soichiro Ono, Tomohisa Suzuki, Akio Furuhata, Atsuhiro Yoshida
  • Patent number: 10895911
    Abstract: An image operation method and system for obtaining an eye's gazing direction are provided. The method and system employ multiple extraction stages for extracting eye-tracking features. An eye frame is divided into sub-frames, which are then sequentially temporarily stored in a storage unit. Launch features of the sub-frames are sequentially extracted by a first feature extraction stage, where the data of a former sub-frame is extracted before the data of a latter sub-frame needs to be stored. Next, the remaining feature extraction stages apply a superposition operation on the launch features to obtain terminal features, which are then computed to obtain the eye's gazing direction.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: January 19, 2021
    Assignee: National Taiwan University
    Inventors: Shao-Yi Chien, Yu-Sheng Lin, Po-Jung Chiu
  • Patent number: 10891789
    Abstract: The present invention provides a method to produce a 3D model of a person or an object from just one or several images. The method uses a neural network that is trained on pairs of 3D models of human heads and their frontal images and then, given an image, infers a 3D model.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: January 12, 2021
    Assignee: ITSEEZ3D, INC.
    Inventor: Ilya Lysenkov
  • Patent number: 10878529
    Abstract: An apparatus comprises processing circuitry configured to receive first image data; receive second medical image data; and apply a transformation regressor to perform a registration process to obtain a predicted displacement that is representative of a transformation between the first image data and the second image data; wherein the transformation regressor is trained in combination with a discriminator in an adversarial fashion by repeatedly alternating a transformation regressor training process in which the transformation regressor is trained to predict displacements, and a discriminator training process in which the discriminator is trained to distinguish between predetermined displacements and displacements predicted by the transformation regressor.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: December 29, 2020
    Assignee: Canon Medical Systems Corporation
    Inventors: James Sloan, Owen Anderson, Keith Goatman
  • Patent number: 10872260
    Abstract: A collation device includes: a storage that stores a search target image showing a search target; an acquisition unit that acquires a captured image generated by an imaging device; and a controller that determines whether the search target is included in the captured image. The controller searches out a collation area in the captured image, the collation area including a subject similar to at least a part of the search target, and selects, depending on the collation area, at least one of a whole of the search target image and a part of the search target image, and collates the selected at least one with the captured image so as to determine whether the search target is included in the captured image.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: December 22, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Khang Nguyen, Jubin Johnson, Sugiri Pranata Lim, Sheng Mei Shen, Shunsuke Yasugi
  • Patent number: 10861191
    Abstract: An apparatus for calibrating a driver monitoring camera may include: a camera configured to capture an image of a driver's face; a control unit configured to receive the captured image from the camera, and detect the face to determine a face position; and a display unit configured to display the determination result of the face position by the control unit.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: December 8, 2020
    Assignee: HYUNDAI MOBIS CO., LTD.
    Inventor: Kyu Dae Ban
  • Patent number: 10861129
    Abstract: Methods, systems, and articles of manufacture to improve image recognition searching are disclosed. In some embodiments, a first document image of a known object is used to generate one or more other document images of the same object by applying one or more techniques for synthetically generating images. The synthetically generated images correspond to different variations in conditions under which a potential query image might be captured. Extracted features from an initial image of a known object and features extracted from the one or more synthetically generated images are stored, along with their locations, as part of a common model of the known object. In other embodiments, image recognition search effectiveness is improved by transforming the location of features of multiple images of a same known object into a common coordinate system. This can enhance the accuracy of certain aspects of existing image search/recognition techniques including, for example, geometric verification.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: December 8, 2020
    Assignee: Nant Holdings IP, LLC
    Inventors: Bing Song, Liwen Lin
  • Patent number: 10848709
    Abstract: An image data processing method includes receiving, from an image sensor, frame image data of a frame at a first resolution, reducing a resolution of the frame image data to a second resolution, performing image recognition on the frame image data to determine one or more regions of interest (ROI) and a priority level of each of the one or more ROIs, and extracting portions of the frame image data corresponding to the one or more ROIs. The method further includes modifying a resolution of the portions of the frame image data corresponding to the one or more ROIs based on the priority level of the ROIs, and combining the resolution-modified portions of the frame image data corresponding to the one or more ROIs with the frame image data at the second resolution to generate output frame image data.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: November 24, 2020
    Assignee: EAGLESENS SYSTEMS CORPORATION
    Inventors: Guangbin Zhang, Weihua Xiong
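    A minimal sketch of the ROI pipeline described in the entry above using nearest-neighbour resampling; the per-priority scale factors are illustrative assumptions, and the final combination of the ROI crops with the reduced frame is left to the caller.
```python
import numpy as np

# Sketch: the whole frame is reduced to a second, lower resolution, while each
# ROI crop keeps a resolution chosen by its priority level. The patent then
# combines the crops with the reduced frame into one output frame.

PRIORITY_FACTORS = {1: 1, 2: 2, 3: 4}   # priority 1 keeps full resolution (assumption)

def subsample(image, factor):
    return image[::factor, ::factor]

def process_frame(frame, rois, frame_factor=4):
    reduced = subsample(frame, frame_factor)            # frame at the second resolution
    roi_crops = []
    for top, left, bottom, right, priority in rois:
        crop = subsample(frame[top:bottom, left:right], PRIORITY_FACTORS[priority])
        roi_crops.append(((top, left), crop))           # resolution-modified ROI portion
    return reduced, roi_crops                           # combined downstream into the output frame

frame = np.arange(256 * 256, dtype=np.uint16).reshape(256, 256)
reduced, crops = process_frame(frame, rois=[(32, 32, 96, 96, 1), (128, 128, 224, 224, 3)])
print(reduced.shape, [c.shape for _, c in crops])
```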
  • Patent number: 10841586
    Abstract: A technique for processing video includes receiving a pixel array, such as a block or layer of video content, as well as a mask that distinguishes masked, “don't-care” pixels in the pixel array from unmasked, “care” pixels. The technique encodes the pixel array by taking into consideration the care pixels only, without regard for the don't-care pixels. An encoder operating in this manner can produce a simplified encoding of the pixel array, which represents the care pixels to any desired level of precision, without regard for errors in the don't-care pixels, which are irrelevant to reconstruction. Further embodiments apply a polynomial transform in place of a frequency transform for encoding partially-masked video content, and/or video content meeting other suitable criteria.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: November 17, 2020
    Assignee: LogMeIn, Inc.
    Inventor: Steffen Schulze
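    A minimal sketch of care-pixel-only encoding in the spirit of the polynomial-transform variant described in the entry above: a low-order 2D polynomial is least-squares fitted to the unmasked pixels, and the masked pixels contribute nothing to the fit; the basis and degree are illustrative assumptions.
```python
import numpy as np

# Sketch: fit a 2D polynomial to the "care" pixels only and ignore residual
# error at masked "don't-care" pixels entirely.

def fit_care_pixels(block, care_mask, degree=2):
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 2D polynomial basis evaluated at every pixel position
    basis = np.stack([xs**i * ys**j for i in range(degree + 1)
                      for j in range(degree + 1 - i)], axis=-1).astype(np.float64)
    a = basis[care_mask]                 # design-matrix rows only for care pixels
    b = block[care_mask].astype(np.float64)
    coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
    return coeffs, basis @ coeffs        # compact coefficients and full reconstruction

block = np.random.randint(0, 256, (8, 8))
care = np.ones((8, 8), dtype=bool); care[:, 4:] = False   # right half is don't-care
coeffs, recon = fit_care_pixels(block, care)
print(coeffs.shape, np.abs(recon - block)[care].mean())   # error measured on care pixels only
```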
  • Patent number: 10839577
    Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage mediums are disclosed for generating AR self-portraits or "AR selfies." In an embodiment, a method comprises: capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: November 17, 2020
    Assignee: Apple Inc.
    Inventors: Toshihiro Horie, Kevin O'Neil, Zehang Sun, Xiaohuan Corina Wang, Joe Weil, Omid Khalili, Stuart Mark Pomerantz, Marc Robins, Eric Beale, Nathalie Castel, Jean-Michel Berthoud, Brian Walsh, Andy Harding, Greg Dudey
  • Patent number: 10839540
    Abstract: Disclosed herein are an apparatus and a method for generating an intermediate view image of a stereoscopic image. The apparatus may include a feature point detector detecting contours in left and right view images and detecting feature points in the contours, a corresponding point detector detecting corresponding points corresponding to the feature points of the left and right view images, and a composer generating an intermediate view image based on disparity information between the feature points and the corresponding points.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: November 17, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Sang Won Ghyme
  • Patent number: 10817704
    Abstract: At least one example embodiment discloses a method of extracting a feature from an input image. The method may include detecting landmarks from the input image, detecting physical characteristics between the landmarks based on the landmarks, determining a target area of the input image from which at least one feature is to be extracted and an order of extracting the feature from the target area based on the physical characteristics and extracting the feature based on the determining.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: October 27, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sungjoo Suh, Seungju Han, Jaejoon Han
  • Patent number: 10803597
    Abstract: A circuitry of an image processing device divides a first image into a plurality of regions, extracts a feature point from each of the regions, tracks the feature point among a plurality of images to detect a motion vector, estimates a notable target of the first image, calculates the priority level of setting of a tracking feature point for each of the regions for tracking motion of the notable target, and sets the tracking feature point to any of the regions based on the priority level.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: October 13, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yu Narita
  • Patent number: 10803664
    Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between such tracking sub-systems, provides a solution by merging multiple tracking approaches into a single tracking system. This system is able to track objects with six degrees of freedom (6DoF) and with 3DoF by combining and transitioning between multiple tracking systems based on the availability of tracking indicia tracked by the tracking systems. Thus, as the indicia tracked by any one tracking system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: October 13, 2020
    Assignee: Snap Inc.
    Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
  • Patent number: 10788430
    Abstract: A surface inspection apparatus includes: an inspection pattern forming unit that forms inspection patterns; a projection unit that projects the inspection patterns onto an inspection target object; a captured image acquiring unit that acquires captured images of the inspection target object; an edge extraction image creating unit that extracts edges from captured images, and creates edge extraction images; a correction coefficient setting unit that sets a correction coefficient for correcting intensities of edges in the edge extraction image; an intensity correcting unit that corrects the intensities of the edges; a corrected edge extraction image creating unit that creates corrected edge extraction images; an integrated image creating unit that creates a single integrated image by integrating the brightness values at the same position of the inspection target object; and a determination unit that determines the presence or absence of unevenness on a surface of the inspection target object.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: September 29, 2020
    Assignee: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Jin Nozawa, Yasuyuki Kuno, Yukio Ichikawa, Masataka Toda, Munehiro Takayama
  • Patent number: 10776939
    Abstract: Embodiments described herein provide various examples of an automatic obstacle avoidance system for unmanned vehicles using embedded stereo vision techniques. In one aspect, an unmanned aerial vehicle (UAV) capable of performing autonomous obstacle detection and avoidance is disclosed. This UAV includes: a stereo vision camera set coupled to the one or more processors and the memory to capture a sequence of stereo images; and a stereo vision module configured to: receive a pair of stereo images captured by a pair of stereo vision cameras; perform a border cropping operation on the pair of stereo images to obtain a pair of cropped stereo images; perform a subsampling operation on the pair of cropped stereo images to obtain a pair of subsampled stereo images; and perform a dense stereo matching operation on the pair of subsampled stereo images to generate a dense three-dimensional (3D) point map of a space corresponding to the pair of stereo images.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: September 15, 2020
    Assignee: AltumView Systems Inc.
    Inventors: Rui Ma, Chao Shen, Yu Gao, Ye Lu, Minghua Chen, Jie Liang, Jianbing Wu
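    A minimal sketch of the border-cropping and subsampling steps described in the entry above; the dense stereo matcher itself is out of scope and represented only by the returned pair, and the border width and step are illustrative.
```python
import numpy as np

# Sketch: crop a fixed border off both stereo images and subsample them before
# dense matching, so the matching cost scales with the reduced resolution.

def crop_border(image, border):
    return image[border:-border, border:-border]

def subsample(image, step=2):
    return image[::step, ::step]

def prepare_stereo_pair(left, right, border=16, step=2):
    left_small = subsample(crop_border(left, border), step)
    right_small = subsample(crop_border(right, border), step)
    return left_small, right_small   # fed to a dense stereo matcher downstream

left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
small_left, small_right = prepare_stereo_pair(left, right)
print(small_left.shape)   # (224, 304): far fewer pixels to match densely
```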
  • Patent number: 10769474
    Abstract: Embodiments relate to a keypoint detection circuit for identifying keypoints in captured image frames. The keypoint detection circuit generates an image pyramid based upon a received image frame, and determines multiple sets of keypoints for each octave of the pyramid using different levels of blur. In some embodiments, the keypoint detection circuit includes multiple branches, each branch made up of one or more circuits for determining a different set of keypoints from the image, or for determining a subsampled image for a subsequent octave of the pyramid. By determining multiple sets of keypoints for each of a plurality of pyramid octaves, a larger, more varied set of keypoints can be obtained and used for object detection and matching between images.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: September 8, 2020
    Assignee: Apple Inc.
    Inventors: David R. Pope, Cecile Foret, Jung Kim
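    A minimal sketch of one pyramid octave as described in the entry above: several blur levels yield separate keypoint sets, and a subsampled image feeds the next octave; local maxima of a Gaussian-blurred image stand in for the circuit's actual keypoint score.
```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Sketch: within one octave, keypoints are detected at several blur levels
# (one "branch" per level) and a subsampled image is passed to the next octave.

def keypoints_at_blur(image, sigma, threshold=170):
    blurred = gaussian_filter(image.astype(np.float32), sigma)
    local_max = blurred == maximum_filter(blurred, size=3)
    return np.argwhere(local_max & (blurred > threshold))

def octave_keypoints(image, sigmas=(1.0, 1.6, 2.2)):
    keypoint_sets = [keypoints_at_blur(image, s) for s in sigmas]   # one set per branch
    next_octave = image[::2, ::2]                                    # subsampled input
    return keypoint_sets, next_octave

image = np.random.randint(0, 256, (128, 128)).astype(np.float32)
sets, next_img = octave_keypoints(image)
print([len(s) for s in sets], next_img.shape)
```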
  • Patent number: 10769485
    Abstract: A framebuffer-less system of convolutional neural network (CNN) includes a region of interest (ROI) unit that extracts features, according to which a region of interest in an input image frame is generated; a convolutional neural network (CNN) unit that processes the region of interest of the input image frame to detect an object; and a tracking unit that compares the features extracted at different times, according to which the CNN unit selectively processes the input image frame.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: September 8, 2020
    Assignee: Himax Technologies Limited
    Inventor: Der-Wei Yang
  • Patent number: 10769411
    Abstract: Techniques are provided for selecting a three-dimensional model. An input image including an object can be obtained, and a pose of the object in the input image can be determined. One or more candidate three-dimensional models representing one or more objects in the determined pose can be obtained. From the one or more candidate three-dimensional models, a candidate three-dimensional model can be determined to represent the object in the input image.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: September 8, 2020
    Assignee: QUALCOMM TECHNOLOGIES, INC.
    Inventors: Alexander Grabner, Peter Michael Roth, Vincent Lepetit
  • Patent number: 10768303
    Abstract: The invention relates to a method for identifying individual trees in airborne lidar data and a corresponding computer program product. The method comprises: a. obtaining lidar data points of a group of one or more trees; b. defining voxels in a regular 3D grid on the basis of the data points; c. applying an image segmentation algorithm to obtain at least one segment; and, if at least two segments are obtained: d. finding the root voxel and branch voxels of a first segment and a second neighbouring segment; and e. merging the first and second segments if the distance between the first and second root voxels is less than a first threshold, the distance between the first root voxel and the closest second branch voxel is less than a second threshold, and the distance between the first branch voxels and the second branch voxels is less than a third threshold.
    Type: Grant
    Filed: August 24, 2016
    Date of Patent: September 8, 2020
    Assignees: YaDo Holding B.V.
    Inventors: Biao Xiong, Dong Yang
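    A minimal sketch of the three-threshold merge test described in the entry above, assuming each segment is summarised by its root voxel and branch voxel coordinates; the threshold values are placeholders.
```python
import numpy as np

# Sketch: two neighbouring segments are merged when root-to-root,
# root-to-nearest-branch, and branch-to-branch distances are all below
# their respective thresholds.

def nearest_distance(points_a, points_b):
    diffs = points_a[:, None, :] - points_b[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).min()

def should_merge(root_a, branches_a, root_b, branches_b, t1=2.0, t2=3.0, t3=1.5):
    root_dist = np.linalg.norm(root_a - root_b)
    root_to_branch = nearest_distance(root_a[None, :], branches_b)
    branch_to_branch = nearest_distance(branches_a, branches_b)
    return root_dist < t1 and root_to_branch < t2 and branch_to_branch < t3

root_a = np.array([0.0, 0.0, 0.0]); branches_a = np.array([[0.5, 0.0, 2.0], [0.0, 0.6, 3.0]])
root_b = np.array([1.2, 0.4, 0.0]); branches_b = np.array([[1.0, 0.5, 2.0], [1.1, 0.2, 3.1]])
print(should_merge(root_a, branches_a, root_b, branches_b))
```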
  • Patent number: 10762680
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating deterministic enhanced digital images based on parallel determinations of pixel group offsets arranged in pixel waves. For example, the disclosed systems can utilize a parallel wave analysis to propagate through pixel groups in a pixel wave of a target region within a digital image to determine matching patch offsets for the pixel groups. The disclosed systems can further utilize the matching patch offsets to generate a deterministic enhanced digital image by filling or replacing pixels of the target region with matching pixels indicated by the matching patch offsets.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: September 1, 2020
    Assignee: ADOBE INC.
    Inventors: Sohrab Amirghodsi, Connelly Barnes, Eric L. Palmer
  • Patent number: 10762608
    Abstract: Embodiments of the present disclosure relate to a sky editing system and related processes for sky editing. The sky editing system includes a composition detector to determine the composition of a target image. A sky search engine in the sky editing system is configured to find a reference image with similar composition with the target image. Subsequently, a sky editor replaces content of the sky in the target image with content of the sky in the reference image. As such, the sky editing system transforms the target image into a new image with a preferred sky background.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: September 1, 2020
    Assignee: ADOBE INC.
    Inventors: Xiaohui Shen, Yi-Hsuan Tsai, Kalyan K. Sunkavalli, Zhe Lin
  • Patent number: 10751129
    Abstract: Described are various embodiments of a computerized method and system for automatically developing a facial remediation protocol for a user based on an input facial image of the user, wherein the facial remediation design or plan comprises a combination of one or more of a surgical plan to shift or change the size of various facial anatomical features, a makeup plan to change the apparent size or apparent position of various facial anatomical features, and other techniques, such as hair design, to reduce the actual and perceived difference between the subject's face and a standard face, for example.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: August 25, 2020
    Inventor: John Gordon Robertson
  • Patent number: 10752172
    Abstract: A computer-implemented method for controlling a vehicle interface system in vehicle includes receiving images of a driver from an image capture device and determining a population group based on the images. The method includes identifying a human perception condition of the driver that is characteristic of the population group. The human perception condition limits an ability of the driver to perceive a driving situation of a vehicle operation and a stimulus output that conveys information to the driver in the vehicle about the driving situation of the vehicle operation. The stimulus output is controlled by the vehicle interface system. The method includes modifying the stimulus output into an optimal stimulus output that can be perceived by the driver with the human perception condition, and controlling the vehicle interface system to provide the optimal stimulus output to the driver during the vehicle operation.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: August 25, 2020
    Assignee: Honda Motor Co., Ltd.
    Inventor: Laith Daman
  • Patent number: 10748579
    Abstract: Facial expressions depicted in image data are edited based on variations of facial expressions depicted across a plurality of frames in other image data. The facial expression of a target subject, depicted in a first image data set, is edited based on the facial expression of a preview subject depicted in a second image data set. The target subject's facial expression is automatically edited based on variations in the preview subject's facial expression. A camera device captures video image data of the preview subject. The camera provides a live image data feed to a face-editing engine. The engine edits the target subject's face based on the varying face of the preview subject. In real time, for each frame of the image data feed, a user interface simultaneously displays both the varying face of the preview subject and the edited face of the target subject.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: August 18, 2020
    Assignee: Adobe Inc.
    Inventors: Gagan Singhal, Manik Singhal
  • Patent number: 10750115
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for capturing video on a mobile device. In an example mobile device, an activation action puts the device in a state in which it captures a loop of video of predefined length and uses that as a prefix to a video recorded in response to a record video action.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 18, 2020
    Assignee: Twitter, Inc.
    Inventors: Richard Plom, Reed Martin
  • Patent number: 10739774
    Abstract: According to one aspect, keyframe based autonomous vehicle operation may include collecting vehicle state information and collecting environment state information. A size of an object within the environment, a distance between the object and the autonomous vehicle, and a lane structure of the environment through which the autonomous vehicle is travelling may be determined. A matching keyframe model may be selected based on the size of the object, the distance from the object to the autonomous vehicle, the lane structure of the environment, and the vehicle state information. Suggested limits for a driving parameter associated with autonomous vehicle operation may be generated based on the selected keyframe model. The autonomous vehicle may be commanded to operate autonomously according to the suggested limits for the driving parameter.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: August 11, 2020
    Assignee: Honda Motor Co., Ltd.
    Inventors: Priyam Parashar, Kikuo Fujimura, Alireza Nakhaei Sarvedani, Akansel Cosgun
  • Patent number: 10733720
    Abstract: Disclosed embodiments relate to a method and an apparatus for testing accuracy of a high-precision map. In some embodiments, the method includes: reverting the high-precision map to a road network map; acquiring a 3D point cloud road image labeled with an actual coordinate of a map element; fitting the 3D point cloud road image into the road network map to obtain a road network map with the fitted 3D point cloud road image; calculating a differential between the actual coordinate of the 3D point cloud road image in the road network map with the fitted 3D point cloud road image and a map coordinate in the road network map; and determining the high-precision map as being accurate in response to the differential being less than or equal to a preset threshold.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: August 4, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventor: Fenghui Han
  • Patent number: 10726272
    Abstract: Systems and methods of generating video summaries are presented herein. Information defining a video may be obtained. The video may include a set of frame images. Parameter values for parameters of individual frame images of the video may be determined. Interest weights for the frame images may be determined. An interest curve for the video that characterizes the video by interest weights as a function of progress through the set of frame images may be generated. One or more curve attributes of the interest curve may be identified and one or more interest curve values of the interest curve that correspond to individual curve attributes may be determined. Interest curve values of the interest curve may be compared to threshold curve values. A subset of frame images of the video to include within a video summary of the video may be identified based on the comparison.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: July 28, 2020
    Assignee: GoPro, Inc.
    Inventors: Jonathan Wills, Daniel Tse, Desmond Chik, Brian Schunck
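    A minimal sketch of the interest-curve selection described in the entry above, assuming per-frame parameter values (sharpness, motion, face count) are combined with fixed weights into interest weights; the parameters, weights, and threshold are illustrative assumptions.
```python
import numpy as np

# Sketch: per-frame parameters are collapsed into interest weights, the weights
# over frame index form an interest curve, and frames where the curve exceeds
# a threshold value are kept for the summary.

def interest_curve(sharpness, motion, face_count, weights=(0.4, 0.4, 0.2)):
    params = np.stack([sharpness, motion, face_count], axis=0).astype(np.float64)
    params /= params.max(axis=1, keepdims=True) + 1e-9      # normalize each parameter
    return (np.array(weights)[:, None] * params).sum(axis=0)

def summary_frames(curve, threshold=0.6):
    return np.nonzero(curve >= threshold)[0]                 # indices of frames to keep

curve = interest_curve(sharpness=np.array([0.2, 0.9, 0.8, 0.3]),
                       motion=np.array([0.1, 0.7, 0.9, 0.2]),
                       face_count=np.array([0, 2, 1, 0]))
print(curve.round(2), summary_frames(curve))
```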
  • Patent number: 10713495
    Abstract: Techniques are disclosed for identifying a video using a video signature generated using image features derived from a portion of the video. In some examples, a method may include determining image features derived from a portion of a video, determining a video frame sequence of the video, and generating the video signature of the video based on the image features and the video frame sequence. The method may further include deriving a curve for the video based on the image features and the video frame sequence, and comparing the derived curve with one or more curves corresponding to respective one or more reference videos.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: July 14, 2020
    Assignee: Adobe Inc.
    Inventors: Kevin Gary Smith, William Brandon George
  • Patent number: 10713477
    Abstract: A determination result is easily obtained even in expression determination on a face image that is not a front view. A robot includes a camera, a face detector, a face angle estimator, and an expression determiner. The camera acquires image data. The face detector detects a face of a person from the image data acquired by the camera. The face angle estimator estimates an angle of the face detected by the face detector. The expression determiner determines an expression of the face based on the angle estimated by the face angle estimator.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: July 14, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventors: Kouichi Nakagome, Keisuke Shimada
  • Patent number: 10702991
    Abstract: An apparatus, robot, method, and recording medium are provided, wherein when it is determined that a speech of an adult includes a warning word, whether the adult is angry or scolding is determined based on a physical feature value of the speech of the adult. When it is determined that the adult is angry, at least any of the following processes is performed: (a) a process of causing a loudspeaker to output a first sound, (b) a process of causing an apparatus to perform a first operation, and (c) a process of causing a display to perform a first display.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: July 7, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Michiko Sasagawa, Ryouta Miyazaki