Abstract: A method, computer program, and computer system are provided for decoding point cloud data. Data corresponding to a point cloud is received. A number of contexts associated with the received data is reduced based on occupancy data corresponding to one or more parent nodes and one or more child nodes within the received data. The data corresponding to the point cloud is decoded based on the reduced number of contexts.
Abstract: An electronic device displays a representation of a field of view of a camera that includes a view of a three-dimensional space. The representation of the field of view is updated over time based on changes to current visual data detected by the camera. Movement of the electronic device moves the field of view of the camera in a first direction. While detecting the movement, the electronic device: updates the representation of the field of view in accordance with the movement; identifies one or more elements in the representation of the field of view that extend along the first direction; and, based at least in part on the identification of the one or more elements, displays, in the representation of the field of view, a guide that extends in the first direction and that corresponds to one of the identified elements.
Type:
Grant
Filed:
May 20, 2022
Date of Patent:
November 7, 2023
Assignee:
APPLE INC.
Inventors:
Allison W. Dryer, Grant R. Paul, Stephen O. Lemay, Lisa K. Forssell
Abstract: An encoding method is provided for encoding a picture to generate a coded stream. The encoding method includes: generating a first prediction image of a current block included in a current picture by referring to a first region included in a reference picture different from the current picture; operating a bi-directional optical flow process to generate a second prediction image based on the first prediction image by referring to a second region included in the first region, and not operating the bi-directional optical flow process by referring to a third region not included in the first region; and encoding the current block based on the second prediction image.
Type:
Grant
Filed:
August 16, 2021
Date of Patent:
November 7, 2023
Assignee:
Panasonic Intellectual Property Corporation of America
Abstract: The present invention relates to a method of reading out information from a data carrier and to a data carrier utilizing the concept of structured-illumination microscopy or saturated structured-illumination microscopy.
Abstract: A system for generating a 3D model of a surgical site includes a 3D endoscope and a computing device coupled to the 3D endoscope. The 3D endoscope includes a scanner for scanning a surface of a surgical site and a camera source for generating images of the surgical site. A 3D model of the surgical site, including objects therein, is generated using scan data and image data. The 3D model is updated by detecting a change in the surgical site, isolating a region of the surgical site where the change is detected, generating second scan data by scanning the surface of the isolated region, and updating the 3D model using the second scan data of the surface of the isolated region.
Abstract: Several methods, systems, and computer program products for quantization of video content are disclosed. In an embodiment, the method includes determining by a processing module, motion information associated with a block of video data of the video content. A degree of randomness associated with the block of video data is determined by the processing module based on the motion information. A value of a quantization parameter (QP) associated with the block of video data is modulated by a quantization module based on the determined degree of randomness.
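The abstract above outlines a pipeline of determining motion information, deriving a degree of randomness, and modulating QP accordingly. A minimal sketch of that pipeline, assuming motion-vector variance as the randomness measure and a linear QP adjustment (the abstract specifies neither):

```python
import statistics

def degree_of_randomness(motion_vectors):
    """Estimate the randomness of a block's motion as the variance of its
    motion-vector components (one possible metric; assumed here)."""
    xs = [mv[0] for mv in motion_vectors]
    ys = [mv[1] for mv in motion_vectors]
    return statistics.pvariance(xs) + statistics.pvariance(ys)

def modulate_qp(base_qp, motion_vectors, gain=0.5, qp_min=0, qp_max=51):
    """Raise QP for highly random motion, where coarser quantization is
    less visible, and clamp to a valid QP range (0-51 as in H.264/HEVC)."""
    randomness = degree_of_randomness(motion_vectors)
    qp = base_qp + int(gain * randomness)
    return max(qp_min, min(qp_max, qp))
```

The `gain` factor and the variance-based metric are illustrative assumptions; a real encoder would tune both against rate-distortion behavior.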
Abstract: An encoder includes: circuitry; and memory coupled to the circuitry. In operation, the circuitry: derives a base motion vector to be used in predicting a current block to be encoded; derives a first motion vector different from the base motion vector; derives a motion vector difference based on a difference between the base motion vector and the first motion vector; determines whether the motion vector difference is greater than a threshold; modifies the first motion vector when the motion vector difference is determined to be greater than the threshold, and does not modify the first motion vector when the motion vector difference is determined not to be greater than the threshold; and encodes the current block using the first motion vector modified or the first motion vector not modified.
Type:
Grant
Filed:
November 19, 2021
Date of Patent:
October 10, 2023
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors:
Jing Ya Li, Chong Soon Lim, Ru Ling Liao, Hai Wei Sun, Han Boon Teo, Kiyofumi Abe, Tadamasa Toma, Takahiro Nishi
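The abstract above describes deriving a motion vector difference, comparing it to a threshold, and modifying the first motion vector only when the difference exceeds the threshold. A minimal sketch of that thresholding step, assuming the modification clamps the first vector toward the base vector (the abstract only says "modifies"):

```python
def threshold_motion_vector(base_mv, first_mv, threshold):
    """Derive the difference between the base and first motion vectors;
    if it exceeds the threshold, clamp each component so the difference
    stays within the threshold, otherwise return the vector unchanged."""
    diff = (first_mv[0] - base_mv[0], first_mv[1] - base_mv[1])
    magnitude = max(abs(diff[0]), abs(diff[1]))
    if magnitude > threshold:
        clamp = lambda d: max(-threshold, min(threshold, d))
        return (base_mv[0] + clamp(diff[0]), base_mv[1] + clamp(diff[1]))
    return first_mv
```

The component-wise clamp and the max-norm magnitude are assumptions for illustration; the encoder would then encode the block with whichever vector this step produces.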
Abstract: Disclosed are an apparatus and a process for producing, and viewing through the internet, high-resolution images of the commonly viewed exterior surfaces of a vehicle, while maintaining the same background view for multiple images of the vehicle. The background and the imaging device are revolved around a vehicle which is maintained in fixed position between the background and the imaging device. There can be two or more opposed imaging devices and two or more opposed displays. The vehicle does not need to be rotated or moved during the imaging.
Abstract: A visual positioning system for mobile devices includes at least one infrared camera, at least one infrared illumination source, and a processor that coordinates operation of the at least one camera and illumination sources. A flood infrared (IR) illumination source illuminates environmental objects for localization of the mobile device during a first camera exposure window, and a structured IR illumination source illuminates environmental objects for detection and mapping of obstacles during a second camera exposure window. A visual SLAM map is constructed with images obtained from a first camera, with a single map being useable for positioning and navigation across a variety of environmental lighting conditions.
Type:
Grant
Filed:
March 24, 2022
Date of Patent:
September 26, 2023
Assignee:
Labrador Systems, Inc.
Inventors:
Michael Dooley, Nikolai Romanov, James Philip Case
Abstract: Provided herein are microscope surveillance systems and methods. In particular, provided herein are modular, multi-functional microscope surveillance systems and methods suitable for use in incubators and other environments.
Type:
Grant
Filed:
July 9, 2019
Date of Patent:
September 19, 2023
Assignee:
The Regents of the University of Michigan
Inventors:
Alexis Donneys, Alexis Baker, Steven R. Buchman
Abstract: An image capture device may include a touchscreen display, which may be used to receive user input. The image capture device may determine whether it is under water based on analysis of visual content captured by the image capture device. Responsive to determination that it is under water, the image capture device may change operation with respect to the touchscreen display.
Abstract: A camera control system is provided for controlling operation of a camera mounted on a vehicle. The system includes a processor and a memory communicably coupled to the processor. The memory stores a camera control module configured to compare a recognition confidence level associated with a feature to a predetermined threshold and, responsive to the recognition confidence level being below the predetermined threshold and using location information associated with the feature, control operation of the camera to capture at least one image of the feature during movement of the vehicle.
Type:
Grant
Filed:
October 13, 2021
Date of Patent:
September 19, 2023
Assignees:
DENSO International America, Inc., DENSO Corporation
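The camera control abstract above hinges on one decision: when a feature's recognition confidence falls below a threshold, use the feature's location to trigger additional captures as the vehicle moves past it. A minimal sketch of that decision, where the distance check and range are assumptions added for illustration:

```python
def control_camera(recognition_confidence, threshold, feature_location,
                   vehicle_location, capture_fn, in_range_m=30.0):
    """Trigger an extra capture of a low-confidence feature once the
    vehicle is within range of its known location. Returns True if a
    capture was commanded."""
    if recognition_confidence >= threshold:
        return False  # feature already recognized well enough
    dx = feature_location[0] - vehicle_location[0]
    dy = feature_location[1] - vehicle_location[1]
    if (dx * dx + dy * dy) ** 0.5 <= in_range_m:
        capture_fn(feature_location)  # e.g. aim/zoom and capture the image
        return True
    return False
```

`capture_fn` stands in for whatever camera command interface the system exposes; it is a hypothetical callback, not an API named in the abstract.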
Abstract: The disclosure extends to methods, systems, and computer program products for widening dynamic range within an image in a light deficient environment.
Abstract: A method of video processing includes performing a conversion between a video and a bitstream of the video, wherein the bitstream includes one or more output layer sets (OLSs) each including one or more video layers, wherein the bitstream conforms to a format rule. The format rule specifies whether or how a first syntax element indicating whether a first syntax structure descriptive of parameters of a hypothetical reference decoder (HRD) used for the conversion is included in a video parameter set (VPS) of the bitstream.
Abstract: A system comprises one or more processors and one or more storage devices. The system is configured to receive first image data of a first image of an object from a first image sensor and receive second image data of a fluorescence image of the object from a second image sensor. Further, the system is configured to process the first image data and the second image data and generate combined image data of an output image of the object in a linear color space based on the processed first image data and the processed second image data. Additionally, the system is configured to convert the combined image data to a non-linear color space representation of the output image.
Abstract: Endoscopic image analysis, endoscopic procedure analysis, and/or component control systems, methods and techniques are disclosed that can analyze images of an endoscopic system and/or affect an endoscopic system to enhance operation, user and patient experience, and usability of image data and other case data.
Abstract: A video encoding apparatus for subjecting a video image to motion compensated prediction coding comprises: an acquisition module to acquire available blocks having motion vectors from encoded blocks adjacent to a to-be-encoded block, and the number of the available blocks; a selection module to select one selection block from the available blocks; a selection information encoder to encode selection information specifying the selection block using a code table corresponding to the number of available blocks; and an image encoder to subject the to-be-encoded block to motion compensated prediction coding using a motion vector of the selection block.
Abstract: An object is to improve coding efficiency. Included are a PU level search unit configured to search for a motion vector for each prediction block by using a matching process, and a sub-block level search unit configured to search for a motion vector of each of the sub-blocks in the prediction block, wherein the precision of a local search by the PU level search unit is lower than the precision of a local search by the sub-block level search unit.
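The two-level search above can be sketched as a coarse grid search at the prediction-block (PU) level followed by a finer refinement per sub-block starting from the PU vector. The step sizes and cost-function interface below are illustrative assumptions, not details from the abstract:

```python
def best_match(cost_fn, center, search_range, step):
    """Local search: evaluate candidate motion vectors on a grid around
    `center` with the given step, returning the lowest-cost vector."""
    best, best_cost = center, cost_fn(center)
    offsets = range(-search_range, search_range + 1, step)
    for dx in offsets:
        for dy in offsets:
            cand = (center[0] + dx, center[1] + dy)
            cost = cost_fn(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

def two_stage_search(pu_cost_fn, sub_block_cost_fns):
    """Coarse (lower-precision) PU-level search, then a finer
    (higher-precision) search for each sub-block seeded by the PU vector."""
    pu_mv = best_match(pu_cost_fn, (0, 0), search_range=8, step=2)  # coarse
    return [best_match(fn, pu_mv, search_range=2, step=1)           # fine
            for fn in sub_block_cost_fns]
```

`cost_fn` stands in for a matching cost such as SAD between the current block and the candidate reference position.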
Abstract: A speed estimation system includes: a detection module configured to determine bounding boxes of an object moving on a surface in images, respectively, captured using a camera; a solver module configured to, based on the bounding boxes, determine a homography of the surface by solving an optimization problem, where the solver module is not trained; and a speed module configured to, using the homography, determine a speed that the object is moving on the surface.
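Given the homography the solver module recovers, the speed module's job reduces to mapping a reference point of each bounding box to ground-plane coordinates and averaging the frame-to-frame displacement. A minimal sketch, assuming the bottom-center of the box as the ground contact point and a callable standing in for the solved homography:

```python
def estimate_speed(bounding_boxes, homography_fn, fps):
    """Map the bottom-center of each per-frame bounding box (x0, y0, x1, y1)
    to ground-plane metres via homography_fn(u, v) -> (x, y), then average
    the frame-to-frame displacement to get speed in metres per second."""
    points = []
    for (x0, y0, x1, y1) in bounding_boxes:
        u, v = (x0 + x1) / 2.0, y1          # bottom-center of the box
        points.append(homography_fn(u, v))  # pixel -> ground plane
    dists = [((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
             for a, b in zip(points, points[1:])]
    return fps * sum(dists) / len(dists)
```

`homography_fn` is a hypothetical stand-in for applying the recovered 3x3 homography matrix; the bottom-center convention is an assumption, chosen because that point lies on the surface for a grounded object.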
Abstract: A method of generating a color image using a monochromatic image sensor. The method includes sequentially illuminating a surface in a plurality of colors, one color at a time. The monochromatic image sensor captures a plurality of image frames of the surface based on the plurality of colors. The plurality of image frames are identified, and at least one feature in the target of the plurality of image frames is highlighted. Color intensities of the plurality of image frames are normalized. A color intensity map of the target for each of the plurality of image frames is generated. A correlation score is determined by comparing each color intensity map of the plurality of image frames. The color image is generated based on the correlation score.
Type:
Grant
Filed:
August 27, 2021
Date of Patent:
July 18, 2023
Assignee:
Boston Scientific Scimed, Inc.
Inventors:
Kirsten Viering, George Wilfred Duval, Louis J. Barbato
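The color-image abstract above ends by composing normalized monochrome frames, one per illumination color, into a color output. A minimal sketch of that final composition step, assuming one frame per red, green, and blue illumination, intensities in [0, 255], and per-frame normalization to a common peak (the abstract's normalization scheme is not specified):

```python
def compose_color_image(red_frame, green_frame, blue_frame):
    """Stack three normalized monochrome frames (lists of rows of
    intensities) captured under red, green, and blue illumination into
    a single RGB image of (r, g, b) tuples."""
    def normalize(frame):
        peak = max(max(row) for row in frame) or 1  # avoid divide-by-zero
        return [[v * 255 // peak for v in row] for row in frame]
    r, g, b = (normalize(f) for f in (red_frame, green_frame, blue_frame))
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
            for i in range(len(r))]
```

The claimed method additionally registers the frames via a correlation score over color-intensity maps before composing; this sketch shows only the composition, assuming the frames are already aligned.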