Abstract: There is provided an optical navigation device including an image sensor and a processing unit. The image sensor outputs successive image frames. The processing unit calculates a contamination level and a motion signal based on filtered image frames, and determines whether to update a fixed pattern noise (FPN) stored in a frame buffer according to a level of FPN subtraction, the calculated contamination level and the calculated motion signal to optimize the update of the fixed pattern noise.
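The update decision described above can be sketched as a simple predicate. This is a hedged illustration only: the abstract does not disclose concrete thresholds, so `contamination_max`, `motion_max`, and `level_min` are illustrative assumptions, as is the function name.

```python
def should_update_fpn(contamination, motion, subtraction_level,
                      contamination_max=0.2, motion_max=1.0, level_min=0.9):
    """Refresh the stored fixed pattern noise only when the scene is clean
    and still, and the current FPN subtraction is no longer effective.
    All thresholds are illustrative assumptions, not from the patent."""
    return (contamination < contamination_max
            and motion < motion_max
            and subtraction_level < level_min)
```

The point of gating on both contamination and motion is that an FPN estimate captured from a moving or contaminated scene would bake scene content into the noise pattern.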
Abstract: An image coding method includes: deriving a candidate for a motion vector of a current block from a co-located motion vector; adding the candidate to a list; selecting the motion vector of the current block from the list; and coding the current block, wherein the deriving includes: deriving the candidate by a first derivation scheme when it is determined that each of a current reference picture and a co-located reference picture is a long-term reference picture; and deriving the candidate by a second derivation scheme when it is determined that each of the current reference picture and the co-located reference picture is a short-term reference picture.
Type:
Grant
Filed:
March 10, 2020
Date of Patent:
September 7, 2021
Assignee:
SUN PATENT TRUST
Inventors:
Viktor Wahadaniah, Chong Soon Lim, Sue Mon Thet Naing, Hai Wei Sun, Takahiro Nishi, Hisao Sasai, Youji Shibahara, Kyoko Tanikawa, Toshiyasu Sugio, Kengo Terada, Toru Matsunobu
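The two derivation schemes in the abstract above can be sketched as follows. This is a simplified illustration of long-term vs. short-term temporal candidate derivation: the short-term scheme scales by picture-order-count (POC) distance, the long-term scheme takes the co-located vector as-is. The function name and the float (rather than fixed-point, clipped) scaling are assumptions.

```python
def derive_tmvp_candidate(colocated_mv, cur_poc, cur_ref_poc,
                          col_poc, col_ref_poc,
                          cur_ref_is_long_term, col_ref_is_long_term):
    """Derive a temporal MV candidate from a co-located MV (sketch)."""
    if cur_ref_is_long_term != col_ref_is_long_term:
        return None  # mixed long/short-term: candidate unavailable
    if cur_ref_is_long_term:
        # First scheme: both long-term -> no POC scaling, use MV directly
        return colocated_mv
    # Second scheme: both short-term -> scale by POC distance ratio
    tb = cur_poc - cur_ref_poc   # current picture to its reference
    td = col_poc - col_ref_poc   # co-located picture to its reference
    scale = tb / td
    return (colocated_mv[0] * scale, colocated_mv[1] * scale)
```

POC scaling is meaningless for long-term references (their temporal distance does not track POC), which is why the two cases need distinct schemes.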
Abstract: A technique of generating a stereoscopic panorama image includes panning a portable camera device and acquiring multiple at least partially overlapping image frames of portions of a scene. The method involves registering the image frames, including determining displacements of the imaging device between acquisitions of image frames. Multiple panorama images are generated by joining image frames of the scene according to spatial relationships, and stereoscopic counterpart relationships between the multiple panorama images are determined. The multiple panorama images are processed based on the stereoscopic counterpart relationships to form a stereoscopic panorama image.
Type:
Grant
Filed:
September 14, 2018
Date of Patent:
September 7, 2021
Assignee:
FotoNation Limited
Inventors:
Petronel Bigioi, George Susanu, Igor Barcovschi, Piotr Stec, Larry Murray, Alexandru Drimbarean, Peter Corcoran
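The joining step in the abstract above can be illustrated on a toy 1-D case: given registered frames and the per-frame displacement recovered during registration, each frame contributes only its non-overlapping tail to the panorama. Frame representation and function name are illustrative assumptions; real stitching works on 2-D images with blending.

```python
def join_frames(frames, displacements):
    """Join overlapping 1-D 'frames' (lists of pixels) into a panorama
    strip. displacements[i] is how far the device moved between frame i
    and frame i+1, i.e. how many new pixels frame i+1 contributes."""
    panorama = list(frames[0])
    for frame, dx in zip(frames[1:], displacements):
        overlap = len(frame) - dx
        panorama.extend(frame[overlap:])  # append only the new part
    return panorama
```

A stereoscopic pair would be built by running this join twice, once over left-side strips and once over right-side strips of the same registered frames.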
Abstract: According to the present invention, there is provided a method of encoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.
Type:
Grant
Filed:
April 24, 2020
Date of Patent:
September 7, 2021
Assignee:
University-Industry Cooperation Group of Kyung Hee University
Inventors:
Gwang Hoon Park, Min Seong Lee, Young Su Heo, Yoon Jin Lee
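The per-sub-prediction-block derivation in the abstract above can be sketched as a grid walk: each sub block inherits the motion of its co-located reference sub block when that motion exists. The fallback to a block-level default for holes, and the function name, are assumptions for illustration.

```python
def derive_subblock_motion(ref_sub_mvs, fallback_mv):
    """For each sub prediction block, inherit the co-located reference
    sub block's motion vector if present; otherwise use a block-level
    fallback (an assumed default, not specified in the abstract)."""
    return [[mv if mv is not None else fallback_mv for mv in row]
            for row in ref_sub_mvs]
```

Deriving motion per sub block rather than per block lets the prediction follow depth discontinuities inside the current block, which matters for 3D (texture-plus-depth) coding.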
Abstract: Systems and methods are provided for covertly monitoring an environment. In addition, solutions are provided for utilizing a camera to covertly monitor an environment by capturing images of a subject without the subject's awareness. In accordance with some embodiments, a system for capturing images is provided that comprises a device, a first cable, and a second cable. The device may comprise a housing having a top and a bottom, and a circuit board disposed within the housing. The first cable may be connected to the circuit board and may extend through the top of the housing. The second cable may be connected to the circuit board and may extend through the bottom of the housing.
Abstract: The disclosure provides methods and content consumption devices that enable a scene, for example a 360° scene, that is larger (i.e. has more pixels in at least one dimension) than a display format of the content consumption device to be displayed. Constituent scene views are received individually by the content consumption device, for example as broadcasts, and are combined, for example stitched together, at the content consumption device to output a part of the scene that fits in the display format. The part of the scene (and hence the required constituent streams) to be displayed is determined by a signal, for example a navigational input from a user, enabling the user to navigate in the scene.
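The view-selection step above can be sketched for the simple case of equal-width views tiled across the scene (no 360° wraparound): the navigation signal gives a viewport position, and the device computes which constituent views it must receive and stitch. Names and the tiling model are illustrative assumptions.

```python
def views_for_viewport(view_width, viewport_x, viewport_width):
    """Return the indices of the constituent scene views (tiles of
    `view_width` pixels) needed to cover a viewport that starts at
    pixel `viewport_x` and is `viewport_width` pixels wide."""
    first = viewport_x // view_width
    last = (viewport_x + viewport_width - 1) // view_width
    return list(range(first, last + 1))
```

Receiving only the views this returns, instead of the whole scene, is what keeps the bandwidth and decode cost within the device's display format.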
Abstract: Provided is a method of encoding an image, the method including: determining a subjective quality of the image when the image is compressed; determining at least one degree of compression that changes the subjective quality and is from among degrees of compression indicating how much the image is compressed; and encoding the image by compressing a residual signal of the image, based on compression information according to the determined degree of compression, wherein the subjective quality is determined for each frame by using a Deep Neural Network (DNN). Provided are an image decoding method and an image decoding apparatus for performing the image decoding method for decoding an image by using information encoded according to an image encoding method.
Type:
Grant
Filed:
February 6, 2018
Date of Patent:
August 17, 2021
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Sun-young Jeon, Jae-hwan Kim, Young-o Park, Jeong-hoon Park, Jong-seok Lee, Kwang-pyo Choi
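The step of finding degrees of compression at which the subjective quality changes, described in the abstract above, can be sketched as a scan over candidate compression levels. Here `predict_quality` is a stand-in for the DNN; the name and the integer quality scale are assumptions.

```python
def find_quality_breakpoints(predict_quality, compression_levels):
    """Return the compression degrees at which the predicted subjective
    quality changes. `predict_quality` stands in for the per-frame DNN
    the abstract describes."""
    breakpoints = []
    prev = predict_quality(compression_levels[0])
    for level in compression_levels[1:]:
        q = predict_quality(level)
        if q != prev:
            breakpoints.append(level)
            prev = q
    return breakpoints
```

The encoder can then pick the largest degree of compression below a breakpoint, compressing as hard as possible without a perceptible quality drop.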
Abstract: A visualization system includes an optical recording unit configured to capture optical signals characterizing at least one partial region of an object, a 3D reconstruction unit configured to ascertain spatial data sets, which describe the partial region of the object, based on the captured optical signals, a hologram computational unit configured to ascertain control data for producing a holographic presentation based on the spatial data sets of the partial region of the object, and a visualization unit configured to visualize a holographic presentation of the at least one partial region of the object for a user of the visualization system based on the control data. In addition, a suitable method for producing holographic presentations from optical signals is provided.
Abstract: Systems and methods for improving determination of encoded image data using a video encoding pipeline, which includes a first transcode engine that entropy encodes a first portion of a bin stream to determine a first bit stream including first encoded image data that indicates a first coding group row and that determines first characteristic data corresponding to the first bit stream to facilitate communicating a combined bit stream; and a second transcode engine that entropy encodes a second portion of the bin stream to determine a second bit stream including second encoded image data that indicates a second coding group row while the first transcode engine entropy encodes the first portion of the bin stream and that determines second characteristic data corresponding to the second bit stream to facilitate communicating the combined bit stream, which includes the first bit stream and the second bit stream, to a decoding device.
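The pipeline above can be sketched as two engines each encoding their share of the coding-group rows, with per-stream lengths serving as the characteristic data the combiner needs. This is a sequential sketch of what the patent describes as concurrent hardware engines; the row split, names, and use of lengths as characteristic data are assumptions.

```python
def transcode_parallel(rows, encode_row):
    """Two 'transcode engines' each entropy-encode a portion of the bin
    stream's coding group rows (here run one after the other; in the
    pipeline they run concurrently). The per-stream lengths act as
    characteristic data so the combined bit stream can be split again
    at the decoding device."""
    half = len(rows) // 2
    first = b"".join(encode_row(r) for r in rows[:half])
    second = b"".join(encode_row(r) for r in rows[half:])
    characteristics = (len(first), len(second))
    return first + second, characteristics
```

Because entropy-coded rows have variable length, the decoder cannot locate the second stream inside the combined stream without such characteristic data.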
Abstract: The present disclosure relates to a method for embedding key information in an image, the method comprising reserving a range of DMZ values, in a predetermined range of 2^N values used for storing useful data in the image, the reserved range being used for storing key information associated with at least one coordinate in the image, with N>0 and DMZ≪2^N.
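The reservation scheme above can be sketched as follows: the top `dmz` values of the 2^N-value range are withheld from useful data, so any pixel holding a value in that range is unambiguously key information. Clamping strategy, names, and defaults are illustrative assumptions.

```python
def embed_key(pixels, key_values, n_bits=8, dmz=4):
    """Reserve the top `dmz` values of the 2**n_bits range for key
    information: clamp useful data below the reserved range, then write
    key values (offset into the range) at the given coordinates."""
    max_data = 2**n_bits - dmz                  # data must stay below this
    out = [min(p, max_data - 1) for p in pixels]
    for coord, k in key_values.items():         # coord -> key value in [0, dmz)
        out[coord] = max_data + k
    return out
```

Keeping DMZ much smaller than 2^N (as the abstract requires) means the clamp barely perturbs the useful data while still leaving values no ordinary pixel can take.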
Abstract: A controller obtains a distance index associated with first and second cameras and a camera distance between the first and second cameras. The controller receives a first and second image captured by the first and second cameras, respectively. The first and second images include a first and second object image of an object, respectively. The controller calculates a first object distance between the first camera and the object using the distance index, the camera distance, the first object image, and the second object image. The controller calculates an object real size of the object using the first object distance, the second object image, the camera distance, and the distance index. The controller determines whether policy criteria are satisfied by the first object distance and/or the object real size. When satisfied, the controller performs an action associated with the criteria.
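The two calculations above can be sketched with a pinhole-stereo model. The abstract leaves the distance index abstract; treating it as a focal-length-like constant in pixels is an assumption, as are the function and parameter names.

```python
def object_distance_and_size(distance_index, camera_distance,
                             x1_px, x2_px, object_px_width):
    """Stereo sketch: distance from disparity, then real size from
    similar triangles. `distance_index` is treated as a focal-length-like
    constant in pixels (an assumption; the patent leaves it abstract)."""
    disparity = abs(x1_px - x2_px)              # object shift between images
    distance = distance_index * camera_distance / disparity
    real_size = object_px_width * distance / distance_index
    return distance, real_size
```

A policy check then reduces to comparing the returned distance and size against thresholds, e.g. act only when a large object is within some range.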
Abstract: Systems and methods for performing domain adaptation include collecting a labeled source image having a view of an object. Viewpoints of the object in the source image are synthesized to generate view augmented source images. Photometrics of each of the viewpoints of the object are adjusted to generate lighting and view augmented source images. Features are extracted from each of the lighting and view augmented source images with a first feature extractor and from captured images captured by an image capture device with a second feature extractor. The extracted features are classified using domain adaptation with domain adversarial learning between extracted features of the captured images and extracted features of the lighting and view augmented source images. Labeled target images are displayed corresponding to each of the captured images including labels corresponding to classifications of the extracted features of the captured images.
Abstract: A first image is generated by imaging a first three-dimensional virtual space including a predetermined object by a first virtual camera. In addition, a map object formed by a three-dimensional model corresponding to the first three-dimensional virtual space is generated, and an indicator object indicating the position of a predetermined object is placed on the map object. Then, a second image is generated by imaging the map object by a second virtual camera. At this time, the second image is generated such that, regarding the indicator object placed on the map object, the display manners of a part hidden by the map object and a part not hidden by the map object as seen from the second virtual camera are different from each other.
Abstract: Systems and methods provide for an automated system for analyzing damage and processing claims associated with an insured item, such as a vehicle. An enhanced claims processing server may analyze damage associated with the insured item using cameras and lasers for determining the extent and severity of the damage. To aid in this determination, the server may also interface with various internal and external databases storing reference images of undamaged items and cost estimate information for repairing previously analyzed damages of similar items. Further still, the server may generate a payment for compensating a claimant for repair of the insured item.
Abstract: An image decoding method performed by a decoding apparatus, includes deriving a history-based motion vector prediction (HMVP) buffer for a current block based on a history, and deriving motion information of the current block based on an HMVP candidate included in the HMVP buffer, thereby increasing inter prediction efficiency.
Type:
Grant
Filed:
February 18, 2020
Date of Patent:
June 1, 2021
Assignee:
LG ELECTRONICS INC.
Inventors:
Naeri Park, Seunghwan Kim, Junghak Nam, Jaehyun Lim, Hyeongmoon Jang
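The history-based buffer in the abstract above can be sketched as a pruned FIFO: after a block is coded, its motion information is appended, any identical older entry is removed, and the oldest entry is evicted when the buffer is full. The buffer size of 5 and the function name are illustrative assumptions.

```python
def update_hmvp_buffer(buffer, new_mv, max_size=5):
    """History-based MVP buffer update (sketch): prune duplicates,
    append the newest motion information, evict the oldest when full."""
    buffer = [mv for mv in buffer if mv != new_mv]  # remove identical entry
    buffer.append(new_mv)                           # newest goes to the back
    if len(buffer) > max_size:
        buffer.pop(0)                               # FIFO eviction
    return buffer
```

Pruning duplicates keeps the candidate list diverse, which is where the inter prediction efficiency gain the abstract mentions comes from.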
Abstract: A projector projects an uncoded pattern of uncoded spots onto an object, which is imaged by a first camera and a second camera, 3D coordinates of the spots on the object being determined by a processor based on triangulation, the processor further determining correspondence among the projected and imaged spots based at least in part on a nearness of intersection of lines drawn from the projector and image spots through their respective perspective centers.
Type:
Grant
Filed:
October 16, 2017
Date of Patent:
June 1, 2021
Assignee:
FARO TECHNOLOGIES, INC.
Inventors:
Rolf Heidemann, Mark Brenner, Simon Raab
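The "nearness of intersection" test in the abstract above reduces to the shortest distance between two 3-D lines: a projector ray and a camera ray that truly correspond should nearly intersect at the spot on the object. The helper names are assumptions; the geometry is standard.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def line_line_distance(p1, d1, p2, d2):
    """Shortest distance between two 3-D lines given as point + direction;
    small values indicate a projector ray and an imaged-spot ray that
    nearly intersect, i.e. a likely correspondence."""
    n = cross(d1, d2)
    norm_n = math.sqrt(dot(n, n))
    w = tuple(b - a for a, b in zip(p1, p2))
    if norm_n < 1e-12:                       # parallel lines
        c = cross(w, d1)
        return math.sqrt(dot(c, c)) / math.sqrt(dot(d1, d1))
    return abs(dot(w, n)) / norm_n
```

Scoring every projector-spot / image-spot pairing with this distance and keeping the mutually nearest pairs is one plausible way to realize the correspondence step before triangulation.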
Abstract: A camera system is installed on the front end of a vehicle, either on the left front, the right front, or both sides. The camera is linked via wired or wireless connection to an onboard computer and a navigation display that is located within the passenger compartment of the vehicle. The driver reviews a visual description on the display of any oncoming traffic in the form of motor vehicles, pedestrians, cyclists, animals and the like on the navigation display via a single screen, split screen or alternating screens. The camera system can include a speed sensor that detects when the vehicle reaches a threshold speed to activate or de-activate the camera. Alternatively, the computer can activate the system when a turn signal is activated, and de-activate the system when the turn signal is no longer activated. This camera system can be retrofitted into older vehicles.
Type:
Grant
Filed:
November 26, 2019
Date of Patent:
May 4, 2021
Assignee:
Klear-View Camera, LLC
Inventors:
Steven R. Petrillo, Robert Michael Roeger
Abstract: Methods and devices for coding point cloud data using a planar coding mode. The planar coding mode may be signaled using a planar mode flag to signal that a volume is planar. A planar volume has all of its occupied child nodes on one side of a plane bisecting the volume. A planar position flag may signal which side of the volume is occupied. Planarity may be determined and signaled with respect to a horizontal plane, vertical plane, or otherwise. Occupancy bits may be inferred as a result of planar coding mode signaling.
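The planarity test above can be sketched for an octree volume with children indexed 0-7: the volume is planar with respect to the horizontal bisecting plane when all occupied children share the same vertical-position bit. Taking bit 2 as the z bit is an indexing assumption for illustration.

```python
def planar_info(occupied_children):
    """For an octree volume with child indices 0-7, decide whether all
    occupied children lie on one side of the horizontal bisecting plane.
    Bit 2 (value & 4) is taken here as the vertical (z) position bit,
    an assumed indexing convention. Returns (planar_flag, plane_position)."""
    z_bits = {(c >> 2) & 1 for c in occupied_children}
    if len(z_bits) == 1:
        return True, z_bits.pop()   # planar; which side is occupied
    return False, None
```

When the planar flag and position flag are coded, the four occupancy bits of the empty side need not be coded at all, which is the inference the abstract's last sentence refers to.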
Abstract: According to the present invention, there is provided a method of encoding a three-dimensional (3D) image, the method comprising: determining a prediction mode for a current block as an inter prediction mode; determining whether a reference block corresponding to the current block in a reference picture has motion information; when the reference block has the motion information, deriving motion information on the current block for each sub prediction block in the current block; and deriving a prediction sample for the current block based on the motion information on the current block.
Type:
Grant
Filed:
April 24, 2020
Date of Patent:
April 20, 2021
Assignee:
University-Industry Cooperation Group of Kyung Hee University
Inventors:
Gwang Hoon Park, Min Seong Lee, Young Su Heo, Yoon Jin Lee