Patents Examined by Obafemi O Sosanya
  • Patent number: 10469862
    Abstract: A method for coding image information, according to the present invention, comprises the steps of: binarizing, according to different techniques, index values of forward prediction, backward prediction, and bidirectional prediction, depending on whether bidirectional prediction is applied when inter-predicting a current block; and entropy coding the binarized codeword, wherein whether to apply bidirectional prediction when inter-predicting the current block can be determined on the basis of the size of the current block. As a result, a method for binarizing the inter-prediction direction of a prediction unit having a specific size, and an apparatus using the same, are provided.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: November 5, 2019
    Assignee: LG Electronics Inc.
    Inventors: Jungsun Kim, Joonyoung Park, Chulkeun Kim, Hendry Hendry, Byeongmoon Jeon
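    Illustrative sketch (not part of the patent text): the abstract above switches between binarization tables for the inter-prediction direction index depending on whether bidirectional prediction is allowed for the current block size. A minimal Python sketch of that idea follows; the codewords and the 8x8 size threshold are assumptions, not values from the patent, and the subsequent entropy-coding step is omitted.

      # Prediction-direction indices: 0 = forward (L0), 1 = backward (L1), 2 = bidirectional.
      CODEWORDS_WITH_BI = {0: "00", 1: "01", 2: "1"}   # bidirectional prediction allowed
      CODEWORDS_NO_BI = {0: "0", 1: "1"}               # bidirectional prediction disallowed

      def bi_prediction_allowed(block_width: int, block_height: int) -> bool:
          """Hypothetical size rule: disallow bidirectional prediction for blocks smaller than 8x8."""
          return block_width * block_height >= 64

      def binarize_inter_dir(pred_dir: int, block_width: int, block_height: int) -> str:
          """Binarize the prediction-direction index, choosing the codeword table by block size."""
          if bi_prediction_allowed(block_width, block_height):
              return CODEWORDS_WITH_BI[pred_dir]
          return CODEWORDS_NO_BI[pred_dir]  # index 2 is invalid here by construction

      if __name__ == "__main__":
          print(binarize_inter_dir(2, 16, 16))  # "1": bidirectional allowed for 16x16
          print(binarize_inter_dir(1, 4, 8))    # "1": shorter uni-directional codewords for 4x8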
  • Patent number: 10469757
    Abstract: There is provided a control device including an image display unit configured to acquire, from a flying body, an image captured by an imaging device provided in the flying body and to display the image, and a flight instruction generation unit configured to generate a flight instruction for the flying body based on content of an operation performed with respect to the image captured by the imaging device and displayed by the image display unit.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: November 5, 2019
    Assignee: Sony Corporation
    Inventors: Kohtaro Sabe, Yasunori Kawanami, Kenta Kawamoto, Tsutomu Sawada, Satoru Shimizu, Peter Duerr, Yuki Yamamoto
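    Illustrative sketch (not part of the patent text): the control device above turns an operation performed on the displayed camera image into a flight instruction for the flying body. A small Python sketch of one such mapping, assuming a hypothetical convention in which the horizontal tap offset steers yaw and the vertical tap position sets forward speed.

      from dataclasses import dataclass

      @dataclass
      class FlightInstruction:
          """Hypothetical instruction: yaw angle in degrees and forward speed in m/s."""
          yaw_deg: float
          forward_speed: float

      def instruction_from_tap(tap_x: int, tap_y: int, image_width: int, image_height: int) -> FlightInstruction:
          """Map a tap on the displayed image to a flight instruction (assumed convention:
          offset from the horizontal centre -> yaw, tap height -> forward speed)."""
          dx = (tap_x - image_width / 2) / (image_width / 2)   # -1 (left) .. 1 (right)
          dy = 1.0 - tap_y / image_height                      # 0 (bottom) .. 1 (top)
          return FlightInstruction(yaw_deg=dx * 45.0, forward_speed=dy * 2.0)

      if __name__ == "__main__":
          print(instruction_from_tap(800, 100, 1280, 720))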
  • Patent number: 10469837
    Abstract: An apparatus for a volumetric display includes an imaging system having object and image planes, a source image generation apparatus for forming a source image in the object plane based on received image data, and a control apparatus for supplying image data to the source image generation apparatus. The source image generation apparatus includes a light source. The image data comprises a series of two-dimensional images. The imaging system includes a diffractive optical element and the control apparatus is arranged to generate at least one control signal to vary a wavelength of light emitted by the light source and/or a focal length associated with the diffractive optical element to vary the location of the image plane within a volume of real space in synchronism with the formation of the series of two-dimensional images in the object plane to construct a volumetric three-dimensional image based on the received image data.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: November 5, 2019
    Assignee: Javid Khan
    Inventor: Javid Khan
  • Patent number: 10467457
    Abstract: A system and method for capturing images used in a facial recognition system are provided. The system and method are configured to generate a sequential set of control signals configured to cause a camera to capture a corresponding sequential set of probe images of a face of an individual. Each control signal in the sequential set of control signals is configured to cause the camera to capture a probe image of the sequential set of probe images using a different exposure time. The different exposure times associated with the sequential set of control signals together form a predetermined profile configured to enable capture of a usable image from the sequential set of probe images despite variation in at least one of an amount of light and a skin tone of the face of the individual.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: November 5, 2019
    Assignee: NEC Corporation of America
    Inventors: Krishna Ranganath, Alvin Yap, Suresh Subramanian
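    Illustrative sketch (not part of the patent text): the capture system above issues a sequential set of control signals whose exposure times follow a predetermined profile. The Python sketch below generates such a sequence assuming a geometric, one-stop-apart profile; the profile shape and values are illustrative, not those of the patent.

      from typing import List, NamedTuple

      class ControlSignal(NamedTuple):
          frame_index: int
          exposure_time_ms: float

      def exposure_profile(base_ms: float = 1.0, stops: int = 5) -> List[float]:
          """Hypothetical profile: exposure times one photographic stop apart, centred on base_ms
          (for base_ms=1.0 and stops=5: 0.25, 0.5, 1, 2, 4 ms)."""
          half = stops // 2
          return [base_ms * (2.0 ** i) for i in range(-half, stops - half)]

      def build_capture_sequence(base_ms: float = 1.0, stops: int = 5) -> List[ControlSignal]:
          """One control signal per probe image, each with a different exposure time."""
          return [ControlSignal(i, t) for i, t in enumerate(exposure_profile(base_ms, stops))]

      if __name__ == "__main__":
          for signal in build_capture_sequence():
              print(signal)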
  • Patent number: 10455217
    Abstract: An electronic apparatus and a method of generating a depth map are provided. The electronic apparatus includes a projection device, a first camera, a second camera, and a processing device. The projection device projects patterned invisible light onto a scene. The first camera captures visible light information to generate a first image. The second camera captures visible light information and invisible light information to generate a second image. The processing device receives the first image and the second image. A first depth information unit in the processing device compares the visible light images of the first image and the second image to obtain first depth information, and a second depth information unit in the processing device identifies the invisible light image in the second image to obtain second depth information. The processing device selectively generates a depth map of the scene from the first depth information and the second depth information.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: October 22, 2019
    Assignee: Altek Semiconductor Corp.
    Inventors: Hong-Long Chou, Shou-Te Wei, Yuan-Lin Liao, Yu-Chih Wang, Kai-Yu Tseng
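    Illustrative sketch (not part of the patent text): the apparatus above derives one depth map from the visible-light stereo pair and another from the invisible-light pattern, then combines them selectively. A minimal Python sketch of the combination step, assuming the simple rule of preferring pattern depth wherever it is valid.

      import numpy as np

      def fuse_depth_maps(stereo_depth: np.ndarray, pattern_depth: np.ndarray) -> np.ndarray:
          """Selectively combine two depth maps of the same scene. Assumed rule: use the
          structured-light (invisible-light) depth where it is valid (> 0), otherwise fall
          back to the stereo (visible-light) depth; 0 marks samples with no decoded pattern."""
          assert stereo_depth.shape == pattern_depth.shape
          return np.where(pattern_depth > 0, pattern_depth, stereo_depth)

      if __name__ == "__main__":
          stereo = np.array([[1.20, 1.30], [1.40, 1.50]])
          pattern = np.array([[0.00, 1.25], [1.35, 0.00]])  # 0.00 = no pattern decoded
          print(fuse_depth_maps(stereo, pattern))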
  • Patent number: 10435154
    Abstract: In one embodiment, the disclosure provides a method of event auditing. The method includes receiving sensor data at a base station from a sensor of an unmanned aerial vehicle ("UAV"), transmitting a control signal to the UAV based on the sensor data, communicatively coupling the base station with an external evidence repository, formatting a portion of the sensor data, and transmitting the formatted sensor data to the external evidence repository. In some embodiments, the base station is mounted to an anchor vehicle. In some embodiments, the UAV is communicatively coupled with the base station via a tether. In some embodiments, formatting the sensor data includes formatting a second portion of the sensor data to generate formatted sensor data based, at least in part, on an identity of the external evidence repository.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: October 8, 2019
    Assignee: RSQ-Systems SPRL
    Inventors: Mathieu Buyse, Jean Marc Coulon, Mike Blavier
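    Illustrative sketch (not part of the patent text): the auditing method above formats sensor data based, at least in part, on the identity of the external evidence repository. A small Python sketch of that dispatch; the repository identities and output formats are hypothetical.

      import json
      from typing import Callable, Dict

      def format_as_json(sensor_data: Dict) -> bytes:
          """Hypothetical format for a repository that ingests JSON records."""
          return json.dumps(sensor_data, sort_keys=True).encode("utf-8")

      def format_as_csv(sensor_data: Dict) -> bytes:
          """Hypothetical format for a repository that ingests flat CSV rows."""
          keys = sorted(sensor_data)
          return (",".join(keys) + "\n" + ",".join(str(sensor_data[k]) for k in keys)).encode("utf-8")

      # Formatter chosen according to the identity of the external evidence repository.
      FORMATTERS: Dict[str, Callable[[Dict], bytes]] = {
          "evidence-locker": format_as_json,
          "insurance-archive": format_as_csv,
      }

      def format_for_repository(sensor_data: Dict, repository_id: str) -> bytes:
          return FORMATTERS[repository_id](sensor_data)

      if __name__ == "__main__":
          reading = {"timestamp": 1571000000, "lat": 50.85, "lon": 4.35, "altitude_m": 42.0}
          print(format_for_repository(reading, "evidence-locker"))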
  • Patent number: 10432955
    Abstract: A method of decoding a bitstream comprising decoding the bitstream into color values and metadata items indicating information about adaptive post-processing operations performed by a decoder, performing high dynamic range (HDR) adaptation operations on the color values based on the metadata items, and performing fixed post-processing operations to reconstruct an HDR video from the color values, wherein the HDR adaptation operations convert color values into a format expected by the fixed post-processing operations.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: October 1, 2019
    Assignee: ARRIS Enterprises LLC
    Inventors: Zhouye Gu, Koohyar Minoo
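    Illustrative sketch (not part of the patent text): the decoding method above places metadata-driven HDR adaptation between the decoder output and a fixed post-processing chain, so that the adaptive step delivers exactly the format the fixed step expects. A Python sketch of that pipeline shape; the transfer functions and metadata fields are placeholders, not those of the patent.

      from typing import Dict
      import numpy as np

      def hdr_adaptation(colors: np.ndarray, metadata: Dict[str, float]) -> np.ndarray:
          """Adaptive stage: convert decoded color values into the format expected by the
          fixed post-processing (hypothetical example: apply a signalled scale and offset)."""
          return colors * metadata.get("scale", 1.0) + metadata.get("offset", 0.0)

      def fixed_post_processing(colors: np.ndarray) -> np.ndarray:
          """Fixed stage: reconstruct HDR values (placeholder power-law inverse transfer function)."""
          return np.clip(colors, 0.0, 1.0) ** 2.4

      def reconstruct_hdr(decoded_colors: np.ndarray, metadata: Dict[str, float]) -> np.ndarray:
          """Chain described in the abstract: adaptive HDR adaptation, then fixed post-processing."""
          return fixed_post_processing(hdr_adaptation(decoded_colors, metadata))

      if __name__ == "__main__":
          frame = np.random.rand(4, 4)
          print(reconstruct_hdr(frame, {"scale": 0.9, "offset": 0.05}))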
  • Patent number: 10413803
    Abstract: A method of displaying a video sequence of a scene captured using a video capture device, the video sequence having a limited field of view of the scene. A plurality of objects positioned in the scene outside limits of the field of view of the captured video sequence is determined. A representation of at least one of the objects is generated, a characteristic of the generated representation being determined from an object impact measure defining, at least in part, a confidence that the at least one object will enter the field of view. The generated object representation is displayed together with, and proximate to, a display of the captured video sequence.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: September 17, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Berty Jacques Alain Bhuruth, Andrew Peter Downing, Belinda Margaret Yee
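    Illustrative sketch (not part of the patent text): the display method above varies a characteristic of each off-screen object's representation according to an object impact measure, i.e. the confidence that the object will enter the field of view. The Python sketch below computes one plausible such measure from position and velocity and maps it to opacity; the formula is an assumption made for illustration.

      import math
      from dataclasses import dataclass

      @dataclass
      class TrackedObject:
          x: float
          y: float
          vx: float
          vy: float

      def impact_measure(obj: TrackedObject, fov_center=(0.0, 0.0), horizon_s: float = 5.0) -> float:
          """Hypothetical measure in [0, 1]: how directly the object's velocity points at the
          field-of-view centre, discounted by the time it would take to get there."""
          to_fov = (fov_center[0] - obj.x, fov_center[1] - obj.y)
          dist = math.hypot(*to_fov)
          speed = math.hypot(obj.vx, obj.vy)
          if speed == 0 or dist == 0:
              return 0.0
          alignment = max(0.0, (obj.vx * to_fov[0] + obj.vy * to_fov[1]) / (speed * dist))
          return alignment * max(0.0, 1.0 - (dist / speed) / horizon_s)

      def representation_opacity(obj: TrackedObject) -> float:
          """Map the impact measure to a display characteristic of the representation (opacity)."""
          return impact_measure(obj)

      if __name__ == "__main__":
          print(representation_opacity(TrackedObject(x=-10.0, y=0.0, vx=3.0, vy=0.0)))  # ~0.33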
  • Patent number: 10412388
    Abstract: A computer-implemented method includes receiving an encoded video frame, decompressing the received encoded video frame, extracting a first quantization parameter (QP) from the decompressed video frame, and acquiring a delta QP based on the first QP. The method also includes acquiring a second QP based on the delta QP and the first QP, compressing the decompressed video frame based on the second QP, and providing the compressed video frame. The first QP corresponds to quantization settings originally used for compressing the encoded video frame. And the second QP corresponds to quantization settings for compressing the decompressed video frame.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: September 10, 2019
    Assignee: CITRIX SYSTEMS, INC.
    Inventors: Miguel Melnyk, Andrew Penner, Jeremy Tidemann
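    Illustrative sketch (not part of the patent text): the transcoding method above derives a second QP from the first QP (the one originally used to compress the frame) plus a delta QP, then recompresses with it. The Python sketch below shows that QP arithmetic with a hypothetical delta rule and the 0-51 QP range used by H.264/HEVC-style encoders; the patent's actual delta derivation is not reproduced.

      def acquire_delta_qp(first_qp: int) -> int:
          """Hypothetical rule: re-encode a little coarser, and slightly more so when the
          source was encoded at high quality (low QP)."""
          return 6 if first_qp < 26 else 3

      def acquire_second_qp(first_qp: int) -> int:
          """Second QP = first QP + delta QP, clamped to the 0..51 range."""
          return max(0, min(51, first_qp + acquire_delta_qp(first_qp)))

      if __name__ == "__main__":
          for qp in (18, 30, 50):
              print(qp, "->", acquire_second_qp(qp))  # 18 -> 24, 30 -> 33, 50 -> 51 (clamped)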
  • Patent number: 10397549
    Abstract: A method and system are described. The method includes capturing a set of images from a 2×2 array of cameras, each camera of the array of cameras having an overlapping field of view (FOV) with an adjacent camera of the array of cameras. The method further includes synchronously capturing a supplemental image from a fifth camera, the fifth camera having an at least partially overlapping FOV with every camera of the array of cameras. Supplemental information is extracted by comparing the supplemental image with the set of four images. Portions of the set of images are stitched based in part on the supplemental information to produce a combined stitched image, the combined stitched image having a higher resolution than each image of the set of images.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: August 27, 2019
    Assignee: GoPro, Inc.
    Inventors: Adeel Abbas, David A. Newman, Timothy Macmillan
  • Patent number: 10397616
    Abstract: Cross-plane filtering may be used to restore blurred edges and/or textures in one or both chroma planes using information from a corresponding luma plane. Adaptive cross-plane filters may be implemented. Cross-plane filter coefficients may be quantized and/or signaled such that overhead in a bitstream minimizes performance degradation. Cross-plane filtering may be applied to select regions of a video image (e.g., to edge areas). Cross-plane filters may be implemented in single-layer video coding systems and/or multi-layer video coding systems.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: August 27, 2019
    Assignee: VID SCALE, Inc.
    Inventors: Jie Dong, Yuwen He, Yan Ye
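    Illustrative sketch (not part of the patent text): cross-plane filtering as described above adds high-frequency detail derived from the luma plane back into a blurred chroma plane. The toy Python sketch below uses a small fixed high-pass kernel and assumes the chroma plane has already been upsampled to luma resolution; real cross-plane filter coefficients are derived and signalled in the bitstream, which is not shown.

      import numpy as np
      from scipy.ndimage import convolve

      # Hypothetical 3x3 high-pass kernel standing in for trained/signalled filter coefficients.
      HIGH_PASS = np.array([[ 0, -1,  0],
                            [-1,  4, -1],
                            [ 0, -1,  0]], dtype=float)

      def cross_plane_filter(luma: np.ndarray, chroma: np.ndarray, gain: float = 0.25) -> np.ndarray:
          """Enhance a chroma plane with scaled high-frequency detail extracted from the
          co-located luma plane (chroma assumed already upsampled to luma resolution)."""
          assert luma.shape == chroma.shape
          luma_detail = convolve(luma.astype(float), HIGH_PASS, mode="nearest")
          return np.clip(chroma + gain * luma_detail, 0, 255)

      if __name__ == "__main__":
          luma = np.random.randint(0, 256, (8, 8))
          chroma = np.full((8, 8), 128.0)
          print(cross_plane_filter(luma, chroma))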
  • Patent number: 10395125
    Abstract: An object detection and classification system includes at least one image sensor mounted on a vehicle and configured to capture an image of a portion of the environment surrounding the vehicle. The image may be stored and analyzed to detect and classify objects visible in the captured image. Keypoints are extracted from the image and evaluated according to a feature function. A new descriptor function depending on the distance in complex space between a query point and the keypoints in the image and on the feature value of the keypoints may be sampled to produce a sample value. The sample value may trigger a signal to the operator of the vehicle to respond to the object if the sample value classifies the object as satisfying a potential hazard condition.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: August 27, 2019
    Assignee: SMR PATENTS S.A.R.L.
    Inventor: Firas Mualla
  • Patent number: 10386300
    Abstract: Methods are provided to identify spatially and spectrally multiplexed probes in a biological environment. Such probes are identified by the ordering and color of fluorophores of the probes. The devices and methods provided facilitate determination of the locations and colors of such fluorophores, such that a probe can be identified. In some embodiments, probes are identified by applying light from a target environment to a spatial light modulator that can be used to control the direction and magnitude of chromatic dispersion of the detected light; multiple images of the target, corresponding to multiple different spatial light modulator settings, can be deconvolved and used to determine the colors and locations of fluorophores. In some embodiments, light from a region of the target can be simultaneously imaged spatially and spectrally. Correlations between the spatial and spectral images over time can be used to determine the color of fluorophores in the target.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: August 20, 2019
    Assignee: Verily Life Sciences LLC
    Inventors: Cheng-Hsun Wu, Victor Marcel Acosta, Ian Peikon, Paul Lebel, Jerrod Joseph Schwartz
  • Patent number: 10389937
    Abstract: An information processing device, an information processing method, and a program are provided that make it possible to share a space while maintaining freedom of the line of sight. An information processing device according to the present disclosure includes a control unit configured to perform control such that a display image is displayed in a display region viewed by a user, the display image being generated based on image information generated through imaging by an imaging device mounted on a moving object moving in a space, imaging-device posture information that indicates the posture of the imaging device, and user view information that is obtained from a user manipulation device operated by the user and specifies a region the user desires to view.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: August 20, 2019
    Assignee: SONY CORPORATION
    Inventors: Junichi Rekimoto, Shunichi Kasahara
  • Patent number: 10380431
    Abstract: Embodiments of a method and system described herein enable capture of video data streams from multiple, different video data source devices and the processing of the video data streams. The video data streams are merged such that the various data protocols can all be processed by the same worker processors, which are typically distributed across different types of operating systems.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: August 13, 2019
    Assignee: Placemeter LLC
    Inventors: Alexandre Winter, Ugo Jardonnet, Tuan Hue Thi, Niklaus Haldimann
  • Patent number: 10375417
    Abstract: The present invention relates to an advantageous scheme for boundary strength derivation and decision processing related to deblocking filtering. More particularly, the present invention improves known schemes for deciding whether to apply deblocking and for selecting appropriate deblocking filters, so as to reduce the number of calculation cycles and the required memory space.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 6, 2019
    Assignee: Sun Patent Trust
    Inventors: Thomas Wedi, Anand Kotra, Matthias Narroschke, Semih Esenlik
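    Illustrative sketch (not part of the patent text): the entry above concerns deriving a boundary strength value that drives the deblocking decision and filter selection. For context, the Python sketch below shows a simplified HEVC-style boundary-strength rule (conditions abbreviated; this is not the patent's specific derivation).

      from dataclasses import dataclass
      from typing import Tuple

      @dataclass
      class BlockInfo:
          intra: bool                  # block coded in intra mode
          has_nonzero_coeffs: bool     # block has non-zero transform coefficients
          mv: Tuple[int, int]          # motion vector in quarter-sample units

      def boundary_strength(p: BlockInfo, q: BlockInfo) -> int:
          """Simplified rule: 2 if either side is intra, 1 if coefficients are present or the
          motion differs by at least one integer sample, 0 otherwise (no filtering)."""
          if p.intra or q.intra:
              return 2
          if p.has_nonzero_coeffs or q.has_nonzero_coeffs:
              return 1
          if abs(p.mv[0] - q.mv[0]) >= 4 or abs(p.mv[1] - q.mv[1]) >= 4:
              return 1
          return 0

      if __name__ == "__main__":
          p = BlockInfo(intra=False, has_nonzero_coeffs=False, mv=(0, 0))
          q = BlockInfo(intra=False, has_nonzero_coeffs=False, mv=(8, 0))
          print(boundary_strength(p, q))  # 1 -- motion differs by two integer samples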
  • Patent number: 10368094
    Abstract: A method for luma-based chroma intra-prediction in a video encoder or a video decoder is provided that includes filtering reconstructed neighboring samples of a reconstructed down sampled luma block, computing parameters α and β of a linear model using the filtered, reconstructed neighboring samples of the reconstructed down sampled luma block and reconstructed neighboring samples of a corresponding chroma block, wherein the linear model is PredC[x,y] = α·RecL′[x,y] + β, wherein x and y are sample coordinates, PredC is predicted chroma samples, and RecL′ is samples of the reconstructed down sampled luma block, and computing samples of a predicted chroma block from corresponding samples of the reconstructed down sampled luma block using the linear model and the parameters.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: July 30, 2019
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventor: Madhukar Budagavi
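    Illustrative sketch (not part of the patent text): the prediction above uses the linear model PredC[x,y] = α·RecL′[x,y] + β, with α and β derived from reconstructed neighboring luma and chroma samples. The Python sketch below derives the parameters by a least-squares fit over those neighbors and applies the model; this matches the general luma-based (LM) prediction idea but uses floating-point arithmetic rather than a codec's integer approximation.

      import numpy as np

      def derive_linear_model(luma_neighbors: np.ndarray, chroma_neighbors: np.ndarray):
          """Least-squares fit of chroma = alpha * luma + beta over the reconstructed
          neighboring samples of the down sampled luma block and the chroma block."""
          luma = luma_neighbors.astype(float).ravel()
          chroma = chroma_neighbors.astype(float).ravel()
          n = luma.size
          denom = n * np.sum(luma * luma) - np.sum(luma) ** 2
          if denom == 0:
              return 0.0, float(chroma.mean())
          alpha = (n * np.sum(luma * chroma) - np.sum(luma) * np.sum(chroma)) / denom
          beta = (np.sum(chroma) - alpha * np.sum(luma)) / n
          return float(alpha), float(beta)

      def predict_chroma(rec_luma_downsampled: np.ndarray, alpha: float, beta: float) -> np.ndarray:
          """PredC[x, y] = alpha * RecL'[x, y] + beta."""
          return alpha * rec_luma_downsampled.astype(float) + beta

      if __name__ == "__main__":
          alpha, beta = derive_linear_model(np.array([60, 80, 100, 120]), np.array([110, 120, 130, 140]))
          print(alpha, beta)                                        # 0.5 80.0
          print(predict_chroma(np.array([[70, 90]]), alpha, beta))  # [[115. 125.]]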
  • Patent number: 10362276
    Abstract: A transmission apparatus that transmits an image to be distributed to a reception apparatus includes a holding unit configured to hold a plurality of settings that include the resolution of a captured image and that are used for generating the image to be distributed; a reception unit configured to receive, from the reception apparatus, specification information for specifying one of the plurality of held settings in relation to superimposition of a mask image, and superimposition information indicating a position at which the mask image is superimposed upon the image to be distributed generated in accordance with the setting specified by the specification information; and a setting unit configured to set a position at which the mask image is superimposed upon the captured image on the basis of the specified setting and the superimposition information received by the reception unit.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: July 23, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takahiro Iwasaki
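    Illustrative sketch (not part of the patent text): the setting unit above maps a mask position given relative to the distributed image (generated under the specified setting) onto the captured image. A minimal Python sketch of that coordinate mapping, with hypothetical field names and resolutions.

      from dataclasses import dataclass
      from typing import Tuple

      @dataclass
      class StreamSetting:
          width: int
          height: int

      def mask_position_on_capture(mask_x: int, mask_y: int,
                                   distributed: StreamSetting,
                                   capture: StreamSetting) -> Tuple[int, int]:
          """Map the superimposition position given for the distributed image (the setting
          specified by the reception apparatus) onto the full-resolution captured image."""
          scale_x = capture.width / distributed.width
          scale_y = capture.height / distributed.height
          return round(mask_x * scale_x), round(mask_y * scale_y)

      if __name__ == "__main__":
          print(mask_position_on_capture(100, 50, StreamSetting(640, 360), StreamSetting(1920, 1080)))  # (300, 150)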
  • Patent number: 10354371
    Abstract: A system for locating the position of a component includes an image capture device configured to capture an image of a component, a working implement mounted in fixed relation to the image capture device, a positioning system configured to adjust a position of the image capture device and the working implement in relation to the component, and an image processing module in communication with the image capture device, the image processing module being configured to receive the image from the image capture device and to identify at least one feature of the component. The positioning system is configured to adjust the position of the image capture device based on a location of the identified feature within the image to align the image capture device with the identified feature, and to align the working implement with the identified feature based upon an offset between the image capture device and the working implement.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: July 16, 2019
    Assignee: General Electric Company
    Inventors: Ronald Francis Konopacki, Allan Gunn Ferry, Matthew David Allen
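    Illustrative sketch (not part of the patent text): the positioning system above first centres the camera on the identified feature and then shifts by the known camera-to-implement offset to align the working implement with the same feature. A short Python sketch of those two moves; the pixel scale and offset values are hypothetical.

      from typing import Tuple

      def camera_correction_mm(feature_px: Tuple[int, int],
                               image_size_px: Tuple[int, int],
                               mm_per_pixel: float) -> Tuple[float, float]:
          """Move needed to centre the camera on the feature, from its offset to the image centre."""
          dx_px = feature_px[0] - image_size_px[0] / 2
          dy_px = feature_px[1] - image_size_px[1] / 2
          return dx_px * mm_per_pixel, dy_px * mm_per_pixel

      def implement_correction_mm(camera_move: Tuple[float, float],
                                  camera_to_implement_offset_mm: Tuple[float, float]) -> Tuple[float, float]:
          """Additional move placing the working implement (fixed offset from the camera) over the feature."""
          return (camera_move[0] + camera_to_implement_offset_mm[0],
                  camera_move[1] + camera_to_implement_offset_mm[1])

      if __name__ == "__main__":
          cam_move = camera_correction_mm((700, 500), (1280, 960), mm_per_pixel=0.05)
          print(cam_move)                                         # (3.0, 1.0)
          print(implement_correction_mm(cam_move, (120.0, 0.0)))  # (123.0, 1.0)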
  • Patent number: 10356312
    Abstract: A camera device, a video auto-tagging method, and a non-transitory computer readable medium thereof are provided. The camera device comprises a processor, a camera, and a sensor module. The camera is configured to capture a video. The sensor module is configured to generate distinctive sensing information after sensing a distinctive motion event of a user. The processor is configured to create a timing tag for the video according to the distinctive sensing information.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: July 16, 2019
    Assignee: HTC CORPORATION
    Inventors: Yuan-Mao Tsui, Yuan-Kang Wang, Wen-Chien Liu
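    Illustrative sketch (not part of the patent text): the camera device above creates a timing tag whenever the sensor module reports a distinctive motion event. A minimal Python sketch of that flow, assuming an acceleration-magnitude threshold as the hypothetical event detector.

      import math
      from typing import List, Tuple

      def is_distinctive_motion(accel: Tuple[float, float, float], threshold_g: float = 2.0) -> bool:
          """Hypothetical detector: the sample is distinctive when its acceleration magnitude
          (in units of g) exceeds a threshold."""
          return math.sqrt(sum(a * a for a in accel)) > threshold_g

      def tag_video(samples: List[Tuple[float, Tuple[float, float, float]]]) -> List[float]:
          """Return timing tags (timestamps in seconds) for samples in which the sensor module
          sensed a distinctive motion event during recording."""
          return [t for t, accel in samples if is_distinctive_motion(accel)]

      if __name__ == "__main__":
          stream = [(0.1, (0.0, 0.0, 1.0)),   # at rest
                    (2.4, (1.5, 2.3, 0.9)),   # shake/jump -> tag
                    (5.0, (0.1, 0.0, 1.0))]
          print(tag_video(stream))            # [2.4]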