Patents Examined by Justin P. Misleh
-
Patent number: 10282637
Abstract: A method for characteristic extraction includes dividing an image into a plurality of blocks, each block including a plurality of cells. The method also includes performing sparse signal decomposition for each cell to obtain a sparse vector corresponding to each cell. The method further includes extracting Histogram of Oriented Gradient (HOG) characteristics of the image according to the sparse vectors.
Type: Grant
Filed: November 22, 2016
Date of Patent: May 7, 2019
Assignee: Xiaomi Inc.
Inventors: Fei Long, Zhijun Chen, Tao Zhang
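The claimed pipeline (cells, per-cell sparse vectors, HOG features) can be sketched roughly in NumPy. This is an illustrative reconstruction, not the patented method: the toy dictionary `D`, the top-k truncation standing in for sparse signal decomposition, and the 8-pixel cell size are all assumptions.

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Gradient-orientation histogram for one cell (the HOG building block)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())         # magnitude-weighted voting
    return hist

def sparse_code(cell, dictionary, k=3):
    """Crude stand-in for sparse signal decomposition: least-squares
    coefficients with all but the k largest magnitudes zeroed."""
    coef, *_ = np.linalg.lstsq(dictionary, cell.ravel().astype(float), rcond=None)
    keep = np.argsort(np.abs(coef))[-k:]
    sparse = np.zeros_like(coef)
    sparse[keep] = coef[keep]
    return sparse

def hog_features(image, cell=8, n_bins=9):
    """Tile the image into cells and concatenate the per-cell histograms."""
    h, w = image.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            feats.append(cell_hog(image[y:y + cell, x:x + cell], n_bins))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32))
f = hog_features(img)                                  # 16 cells x 9 bins
D = rng.normal(size=(64, 10))                          # toy 10-atom dictionary for 8x8 cells
s = sparse_code(img[:8, :8], D)
```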
-
Patent number: 10277809
Abstract: An imaging device includes a subject detecting unit configured to detect a subject image from a captured image captured by an imaging element; a motion vector acquiring unit configured to acquire a motion vector of the subject image detected by the subject detecting unit, by comparing image signals in different frames acquired by the imaging element; a comparison unit configured to acquire a comparison result, by comparing a position of the subject image detected by the subject detecting unit with a predetermined position; and a moving unit configured to reduce movement of the position of the subject image in the imaging range, by moving the subject image on the basis of the motion vector and the comparison result.
Type: Grant
Filed: October 20, 2016
Date of Patent: April 30, 2019
Assignee: Canon Kabushiki Kaisha
Inventor: Nobushige Wakamatsu
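A toy version of the motion-vector step (comparing image signals in different frames) is exhaustive block matching by sum of absolute differences. The block size and search radius are illustrative choices, not values from the patent.

```python
import numpy as np

def motion_vector(prev, curr, top, left, size=8, search=4):
    """Estimate the (dy, dx) motion of one block between two frames by
    exhaustive SAD search over a small window."""
    block = prev[top:top + size, left:left + size].astype(int)
    best, best_v = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue                                # candidate falls off the frame
            sad = np.abs(curr[y:y + size, x:x + size].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_v = sad, (dy, dx)
    return best_v

frame0 = np.zeros((32, 32), dtype=np.uint8)
frame0[8:16, 8:16] = 255                                # bright "subject"
frame1 = np.roll(np.roll(frame0, 2, axis=0), 3, axis=1) # subject moved by (+2, +3)
v = motion_vector(frame0, frame1, 8, 8)
```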
-
Patent number: 10275683
Abstract: Presented herein are techniques for assignment of an identity to a group of captured images. A plurality of captured images that each include an image of at least one person are obtained. For each of the plurality of captured images, relational metrics indicating a relationship between the image of the person in a respective captured image and the images of the persons in each of the remaining plurality of captured images are calculated. Based on the relational metrics, a clustering process is performed to generate one or more clusters from the plurality of captured images. Each of the one or more clusters is associated with an identity of an identity database. The one or more clusters may each be associated with an existing identity of the identity database or an additional identity that is not yet present in the identity database.
Type: Grant
Filed: January 19, 2017
Date of Patent: April 30, 2019
Assignee: Cisco Technology, Inc.
Inventors: Xiaoqing Zhu, Rob Liston, John G. Apostolopoulos, Wai-tian Tan
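One simple reading of "relational metrics plus clustering" can be sketched as cosine similarity over face embeddings followed by threshold-based connected components (union-find). The embedding vectors, the cosine metric, and the 0.8 threshold are assumptions for illustration; the patent does not commit to a particular metric or clustering algorithm.

```python
import numpy as np

def cluster_by_similarity(embeddings, threshold=0.8):
    """Group images whose pairwise cosine similarity (the 'relational
    metric' here) exceeds a threshold, via union-find connected components."""
    n = len(embeddings)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]              # path halving
            i = parent[i]
        return i

    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:
                parent[find(i)] = find(j)              # union the two clusters
    roots = [find(i) for i in range(n)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]

faces = np.array([[1.0, 0.0],                          # two near-duplicates + one outlier
                  [0.99, 0.05],
                  [0.0, 1.0]])
groups = cluster_by_similarity(faces)
```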
-
Patent number: 10275896
Abstract: A device for medical imaging of coronary vessels includes a medical imaging device configured to extract a first vessel map from computed tomography angiography data covering at least one reference cardiac phase, and a plurality of second vessel maps from three-dimensional rotational angiography data including at least the reference phase; to generate a plurality of warped versions of the first vessel map, each aligned with one of the second vessel maps; and to merge the plurality of warped first vessel maps with corresponding ones of the second vessel maps at different cardiac phases, in order to generate a plurality of merged vessel maps of the coronary vessels in the plurality of cardiac cycles.
Type: Grant
Filed: March 16, 2015
Date of Patent: April 30, 2019
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Vincent Maurice André Auvray, Raoul Florent, Pierre Henri Lelong
-
Patent number: 10270984
Abstract: A video recording device is described having an orientation sensor. The recording device rotates video data received from an image sensor according to signals received from the orientation sensor. The rotation occurs before the video data is compressed according to a video codec and stored on a tangible storage device. By rotating the video data before compression, the need for intensive, post-capture video rotation on the compressed video file is eliminated.
Type: Grant
Filed: December 14, 2017
Date of Patent: April 23, 2019
Assignee: BBY SOLUTIONS, INC.
Inventors: Farhad Nourbakhsh, Steven Brown
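The core idea, rotating raw frames to upright before they reach the encoder, is easy to sketch. A sensor reporting only quarter-turn orientations (0, 90, 180, 270 degrees) is an assumption of this sketch.

```python
import numpy as np

def rotate_for_orientation(frame, orientation_deg):
    """Rotate a raw frame to upright before encoding, so the compressed
    stream never needs post-capture rotation."""
    if orientation_deg % 90 != 0:
        raise ValueError("sensor is assumed to report quarter-turn orientations")
    return np.rot90(frame, k=(orientation_deg // 90) % 4)

landscape = np.arange(12).reshape(3, 4)                # a 3x4 raw frame
upright = rotate_for_orientation(landscape, 90)        # becomes 4x3 (portrait)
unchanged = rotate_for_orientation(landscape, 0)
```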
-
Patent number: 10268913
Abstract: A generative adversarial network (GAN) system includes a generator sub-network configured to examine one or more images of actual damage to equipment. The generator sub-network also is configured to create one or more images of potential damage based on the one or more images of actual damage that were examined. The GAN system also includes a discriminator sub-network configured to examine the one or more images of potential damage to determine whether the one or more images of potential damage represent progression of the actual damage to the equipment.
Type: Grant
Filed: April 3, 2017
Date of Patent: April 23, 2019
Assignee: General Electric Company
Inventors: Ser Nam Lim, Arpit Jain, David Diwinsky, Sravanthi Bondugula, Yen-Liang Lin, Xiao Bian
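The generator/discriminator split can be sketched as two tiny NumPy networks (forward pass only; the adversarial training loop and anything specific to damage imagery are omitted). The latent and image dimensions and the single-linear-layer architectures are assumptions of this sketch, not the patented sub-networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    """Map a noise vector to a flattened synthetic image
    (one linear layer with tanh; a real GAN would use a deep network)."""
    return np.tanh(z @ w)

def discriminator(x, w):
    """Score an image as real-vs-generated (sigmoid of a linear layer)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

latent_dim, img_dim = 16, 64
g_w = rng.normal(0, 0.1, (latent_dim, img_dim))        # generator weights
d_w = rng.normal(0, 0.1, (img_dim, 1))                 # discriminator weights

z = rng.normal(size=(5, latent_dim))                   # 5 noise samples
fake_images = generator(z, g_w)                        # 5 synthetic "damage" images
scores = discriminator(fake_images, d_w)               # realness scores in (0, 1)
```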
-
Patent number: 10262212
Abstract: In one aspect, an example method includes (i) determining, by a computing system at a first time, a first height of a bumper of a leading vehicle relative to a reference height; (ii) determining, by the computing system at a second time that is later than the first time, a second height of the bumper relative to the reference height; (iii) making, by the computing system, a determination that the determined first height and the determined second height lack a threshold extent of similarity based on an established tolerance level; and (iv) responsive at least to making the determination that the determined first height and the determined second height lack the threshold extent of similarity based on the established tolerance level, causing, by the computing system, the light source to operate.
Type: Grant
Filed: October 3, 2017
Date of Patent: April 16, 2019
Assignee: CSAA Insurance Services, Inc.
Inventor: Steven Donald Eiden
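The claimed decision logic reduces to a tolerance comparison between two measurements. A minimal sketch, with a 0.05 m tolerance chosen purely for illustration:

```python
def lacks_similarity(h1, h2, tolerance=0.05):
    """True when two bumper-height measurements (same reference height)
    differ by more than the established tolerance level."""
    return abs(h1 - h2) > tolerance

def maybe_operate_light(h1, h2, tolerance=0.05):
    """Operate the light source only when the heights lack the threshold
    extent of similarity, per the claimed responsive step."""
    return "light on" if lacks_similarity(h1, h2, tolerance) else "no action"
```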
-
Patent number: 10255480
Abstract: A system includes a processor configured to generate a registered first 3D point cloud based on a first 3D point cloud and a second 3D point cloud. The processor is configured to generate a registered second 3D point cloud based on the first 3D point cloud and a third 3D point cloud. The processor is configured to generate a combined 3D point cloud based on the registered first 3D point cloud and the registered second 3D point cloud. The processor is configured to compare the combined 3D point cloud with a mesh model of the object. The processor is configured to generate, based on the comparison, output data indicating differences between the object as represented by the combined 3D point cloud and the object as represented by the mesh model. The system includes a display configured to display a graphical display of the differences.
Type: Grant
Filed: May 15, 2017
Date of Patent: April 9, 2019
Assignee: THE BOEING COMPANY
Inventors: Ryan Uhlenbrock, Deepak Khosla, Yang Chen, Kevin R. Martin
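The combine-and-compare step can be sketched with NumPy, using nearest-vertex distance as the difference measure. This is a simplification: the patent compares against a mesh model, while this sketch uses the model's vertices as a stand-in, and the registration itself (here, pre-registered inputs) is assumed done.

```python
import numpy as np

def nearest_distances(cloud, model_pts):
    """For each combined-cloud point, distance to the nearest model vertex,
    a crude as-built vs. as-designed difference measure."""
    d = np.linalg.norm(cloud[:, None, :] - model_pts[None, :, :], axis=2)
    return d.min(axis=1)

model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
registered_a = model + 0.01                            # first registered scan, slight offset
registered_b = model - 0.01                            # second registered scan
combined = np.vstack([registered_a, registered_b])     # the combined 3D point cloud
diffs = nearest_distances(combined, model)             # per-point deviation from the model
```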
-
Patent number: 10248872
Abstract: A method for estimating time to collision (TTC) of a detected object in a computer vision system is provided that includes determining a three dimensional (3D) position of a camera in the computer vision system, determining a 3D position of the detected object based on a 2D position of the detected object in an image captured by the camera and an estimated ground plane corresponding to the image, computing a relative 3D position of the camera, a velocity of the relative 3D position, and an acceleration of the relative 3D position based on the 3D position of the camera and the 3D position of the detected object, wherein the relative 3D position of the camera is relative to the 3D position of the detected object, and computing the TTC of the detected object based on the relative 3D position, the velocity, and the acceleration.Type: Grant
Filed: October 19, 2016
Date of Patent: April 2, 2019
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Prashanth Ramanathpur Viswanath, Deepak Kumar Poddar, Soyed Nagori, Manu Mathew
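Once relative position, velocity, and acceleration are known, TTC under a constant-acceleration assumption is the smallest positive root of distance + velocity·t + ½·acceleration·t² = 0. The patent derives the three inputs from camera pose and a ground-plane estimate; here they are simply given, and the constant-acceleration model is an assumption of this sketch.

```python
import math

def time_to_collision(distance, velocity, acceleration):
    """Solve distance + velocity*t + 0.5*acceleration*t**2 = 0 for the
    smallest positive t. Negative velocity means the gap is closing.
    Returns math.inf when the object is never reached."""
    if abs(acceleration) < 1e-12:                      # constant-velocity case
        if velocity >= 0:
            return math.inf                            # gap not closing
        return -distance / velocity
    disc = velocity ** 2 - 2 * acceleration * distance
    if disc < 0:
        return math.inf                                # quadratic never reaches zero
    roots = [(-velocity - math.sqrt(disc)) / acceleration,
             (-velocity + math.sqrt(disc)) / acceleration]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf
```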
-
Patent number: 10248874
Abstract: Systems, methods, and devices for detecting brake lights are disclosed herein. A system includes a mode component, a vehicle region component, and a classification component. The mode component is configured to select a night mode or day mode based on a pixel brightness in an image frame. The vehicle region component is configured to detect a region corresponding to a vehicle based on data from a range sensor when in the night mode or based on camera image data when in the day mode. The classification component is configured to classify a brake light of the vehicle as on or off based on image data in the region corresponding to the vehicle.
Type: Grant
Filed: November 22, 2016
Date of Patent: April 2, 2019
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Maryam Moosaei, Guy Hotson, Parsa Mahmoudieh, Vidya Nariyambut Murali
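The mode component's day/night switch can be sketched as a mean-brightness threshold; the threshold value 60 is illustrative, not from the patent.

```python
import numpy as np

def select_mode(frame, threshold=60):
    """Pick 'day' or 'night' from mean pixel brightness, mirroring the
    mode component."""
    return "day" if frame.mean() >= threshold else "night"

def region_data_source(frame, threshold=60):
    """Night mode falls back to the range sensor for vehicle-region
    detection; day mode uses the camera image."""
    return "camera" if select_mode(frame, threshold) == "day" else "range_sensor"

dark = np.full((4, 4), 10, dtype=np.uint8)             # dim night frame
bright = np.full((4, 4), 200, dtype=np.uint8)          # daylight frame
```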
-
Patent number: 10249670
Abstract: The present disclosure relates to a solid-state imaging device that can reduce crosstalk interference, and to an electronic apparatus. In the upper chip, two sets of VSLs (vertical signal lines) and control lines are stacked in this order from the bottom. That is, in the stacked solid-state imaging device, the control lines are laid out in the uppermost layer of the upper chip. In this structure, the influence of a lower chip on the two sets of VSLs can be shielded by the control lines. The present disclosure can be applied to CMOS solid-state imaging devices to be used in electronic apparatuses, such as a camera apparatus.
Type: Grant
Filed: January 9, 2015
Date of Patent: April 2, 2019
Assignee: Sony Corporation
Inventor: Hiroaki Seko
-
Patent number: 10230888
Abstract: An apparatus includes an initializer configured to adjust one or more settings of a camera prior to initialization of the camera. The one or more settings are adjusted based on an indication of motion detected using at least one measurement performed by a sensor device. The apparatus further includes a processing device configured to execute a camera application to initialize the camera after adjustment of the one or more settings.
Type: Grant
Filed: July 31, 2015
Date of Patent: March 12, 2019
Assignee: QUALCOMM Incorporated
Inventors: Ying Chen Lou, Ruben Velarde, Sanket Krishnamurthy Sagar, Hengzhou Ding, Liang Liang, Leung Chun Chan
-
Patent number: 10217204
Abstract: A method of evaluating an image quality for an imaging system, and the imaging system, are provided. The method in some examples includes: acquiring an image to be evaluated which is generated by the imaging system; extracting a number of sub-images from the image; obtaining a coefficient vector indicating a degree of sparsity by applying a sparse decomposition on the sub-images based on a pre-set redundant sparse representation dictionary; and performing a linear transformation on the coefficient vector so as to obtain an evaluation value for the image quality. The sparse dictionary is learned using only a few high-quality perspective images, and the image quality is then evaluated based on the degree of sparsity of the image obtained using the sparse dictionary. A convenient and rapid no-reference image quality evaluation is achieved.
Type: Grant
Filed: January 19, 2017
Date of Patent: February 26, 2019
Assignee: Nuctech Company Limited
Inventors: Zhiqiang Chen, Yuanjing Li, Li Zhang, Ziran Zhao, Yaohong Liu, Jianping Gu, Zhiming Wang
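The final two steps, summarising the coefficient vector's sparsity and mapping it linearly to a quality value, can be sketched as below. The top-3 energy fraction as the sparsity measure and the weight/bias values are placeholder assumptions; the patent's actual linear transformation operates on its own learned representation.

```python
import numpy as np

def sparsity_score(coef):
    """Degree-of-sparsity measure: fraction of total coefficient energy
    carried by the three largest coefficients. Images that code compactly
    over a good dictionary score near 1."""
    mags = np.sort(np.abs(coef))[::-1]
    total = mags.sum()
    return mags[:3].sum() / total if total > 0 else 0.0

def quality_value(coef, weight=100.0, bias=0.0):
    """Linear transformation of the sparsity summary into a scalar
    evaluation value (weight and bias are illustrative)."""
    return weight * sparsity_score(coef) + bias

sparse_coef = np.array([5.0, 0.1, 0.05, 0, 0, 0, 0, 0])  # compact code: sharp image
dense_coef = np.ones(8)                                   # spread-out code: degraded image
```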
-
Patent number: 10218889
Abstract: Systems and methods for transmitting and receiving image data captured by an imager array including a plurality of focal planes are described. One embodiment of the invention includes capturing image data using a plurality of active focal planes in a camera module, where an image is formed on each active focal plane by a separate lens stack, generating lines of image data by interleaving the image data captured by the plurality of active focal planes, and transmitting the lines of image data and the additional data.
Type: Grant
Filed: January 8, 2018
Date of Patent: February 26, 2019
Assignee: FotoNation Limited
Inventor: Andrew Kenneth John McMahon
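One plausible reading of "generating lines of image data by interleaving" is round-robin row interleaving across focal planes, sketched below. Equal line counts per plane are assumed, and the additional data the patent packs with each line is omitted.

```python
import numpy as np

def interleave_lines(planes):
    """Interleave rows from several focal planes into one transmission
    stream: line 0 of plane 0, line 0 of plane 1, ..., line 1 of plane 0, ..."""
    h = planes[0].shape[0]
    assert all(p.shape[0] == h for p in planes), "equal line counts assumed"
    return np.vstack([p[row] for row in range(h) for p in planes])

plane_a = np.zeros((2, 4), dtype=np.uint8)             # focal plane A: all 0s
plane_b = np.ones((2, 4), dtype=np.uint8)              # focal plane B: all 1s
stream = interleave_lines([plane_a, plane_b])          # A0, B0, A1, B1
```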
-
Patent number: 10210178
Abstract: A machine learning image processing system performs natural language processing (NLP) and auto-tagging for an image matching process. The system facilitates an interactive process, e.g., through a mobile application, to obtain an image and supplemental user input from a user to execute an image search. The supplemental user input may be provided from a user as speech or text, and NLP is performed on the supplemental user input to determine user intent and additional search attributes for the image search. Using the user intent and the additional search attributes, the system performs image matching on stored images that are tagged with attributes through an auto-tagging process.
Type: Grant
Filed: April 3, 2017
Date of Patent: February 19, 2019
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Christian Souche, Junmin Yang, Alexandre Naressi
-
Patent number: 10194068
Abstract: A digital camera includes a communicator that communicates with another digital camera, a receiver that receives camera information from another digital camera, and an imaging device that forms a digital image of an object. The digital camera further includes a processor that creates digital image data from the digital image based upon the camera information.
Type: Grant
Filed: September 28, 2016
Date of Patent: January 29, 2019
Assignee: NIKON CORPORATION
Inventor: Akira Ohmura
-
Patent number: 10178305
Abstract: Provided is a recommendation apparatus including a determination unit configured to determine an application to be recommended to an imaging apparatus, based on information on an image selected in accordance with an operation mode of the imaging apparatus.
Type: Grant
Filed: April 3, 2013
Date of Patent: January 8, 2019
Assignee: SONY CORPORATION
Inventor: Takanori Minamino
-
Patent number: 10176388
Abstract: Systems and methods for segmenting an image using a convolutional neural network are described herein. A convolutional neural network (CNN) comprises an encoder-decoder architecture, and may comprise one or more Long Short-Term Memory (LSTM) layers between the encoder and decoder layers. The LSTM layers provide temporal information in addition to the spatial information of the encoder-decoder layers. A subset of a sequence of images is input into the encoder layer of the CNN, and a corresponding sequence of segmented images is output from the decoder layer. In some embodiments, the one or more LSTM layers may be combined in such a way that the CNN is predictive, providing predicted output of segmented images. Though the CNN provides multiple outputs, the CNN may be trained from single images or by generation of noisy ground truth datasets. Segmenting may be performed for object segmentation or free space segmentation.
Type: Grant
Filed: January 19, 2017
Date of Patent: January 8, 2019
Assignee: Zoox, Inc.
Inventors: Mahsa Ghafarianzadeh, James William Vaisey Philbin
-
Patent number: 10178293
Abstract: A method, a computer program product, and a computer system for controlling a camera using a voice command and image recognition. One or more processors on the camera capture the voice command, which is from a user of the camera and declares a subject of interest. The one or more processors process the voice command and set the subject of interest. The one or more processors receive a camera image from an imaging system of the camera. The one or more processors identify the subject of interest in the camera image. The one or more processors set one or more camera parameters that are appropriate to the subject of interest.
Type: Grant
Filed: June 22, 2016
Date of Patent: January 8, 2019
Assignee: International Business Machines Corporation
Inventors: Deborah J. Butts, Adrian P. Kyte, Timothy A. Moran, John D. Taylor
-
Patent number: 10178360
Abstract: A digital imaging device includes: a monochromatic sensor including a plurality of photosensitive elements distributed in an array, the plurality of photosensitive elements configured to convert light falling on the monochromatic sensor into electronic signals; and a plurality of filters, each filter configured to be moved into a position in front of the monochromatic sensor, wherein each filter, when moved into the position in front of the monochromatic sensor, covers a substantial portion of the monochromatic sensor. Key words include imaging sensor and layered filter.
Type: Grant
Filed: August 3, 2015
Date of Patent: January 8, 2019
Assignees: SONY CORPORATION, SONY PICTURES ENTERTAINMENT INC.
Inventor: Kazunori Tanaka
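The principle, one monochrome exposure per movable filter, with colour recovered by stacking the exposures, can be simulated as follows. The idealised filters (each passing exactly one channel) and the R/G/B filter set are assumptions of this sketch; the patent does not limit the filters to primary colours.

```python
import numpy as np

def capture_through_filters(scene_rgb, filter_order=("r", "g", "b")):
    """Simulate a monochromatic sensor behind a moving filter: one
    single-channel exposure per filter position, stacked back into a
    colour image."""
    channel = {"r": 0, "g": 1, "b": 2}
    exposures = [scene_rgb[..., channel[f]] for f in filter_order]  # sequential captures
    return np.stack(exposures, axis=-1)

scene = np.zeros((2, 2, 3), dtype=np.uint8)
scene[..., 0] = 200                                     # a pure red scene
color = capture_through_filters(scene)                  # reconstructed colour image
```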