Abstract: A target reflector search device. This device comprises an emitting unit for emitting an emission fan, a motorized device for moving the emission fan over a spatial region, a receiving unit for receiving reflected portions of the emission fan within a fan-shaped acquisition region, and a locating unit for determining a location of the reflection. An optoelectronic detector of the receiving unit is formed as a position-resolving optoelectronic detector having a linear arrangement of a plurality of pixels, each formed as an SPAD array, and the receiving unit comprises an optical system having an imaging fixed-focus optical unit, wherein the optical system and the optoelectronic detector are arranged and configured such that portions of the optical radiation reflected from a point in the acquisition region are expanded on the sensitivity surface of the optoelectronic detector such that blurry imaging takes place.
Type:
Grant
Filed:
May 6, 2020
Date of Patent:
August 22, 2023
Assignee:
HEXAGON TECHNOLOGY CENTER GMBH
Inventors:
Jürg Hinderling, Simon Bestler, Peter Kipfer, Andreas Walser, Markus Geser
Abstract: A method and system for temporal frequency analysis for identification of unmanned aircraft systems. The method includes obtaining a sequence of video image frames and providing a pixel from an output frame of the video; generating a fluctuating pixel value vector; examining the fluctuating pixel value vector over a period of time; obtaining the frequency information present in the pixel fluctuations; summing the frequency coefficients for the vectorized pixel values from the fluctuating pixel value vector; obtaining an image representing a two-dimensional space based on the summed center frequency coefficients; generating a series of still frames equal to a summation of the center frequency coefficients for pixel variations; and combining the temporal information into spatial locations in a matrix to provide a single image containing the spatial and temporal information present in the sequence of video image frames.
Type:
Grant
Filed:
March 12, 2021
Date of Patent:
August 15, 2023
Assignee:
National Technology & Engineering Solutions of Sandia, LLC
Inventors:
Bryana Lynn Woo, Gabriel Carlisle Birch, Jaclynn Javonna Stubbs, Camron G. Kouhestani
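The pipeline in the abstract above, in essence, takes a temporal frequency transform of each pixel's value over the frame sequence and collapses the summed coefficients back into a single spatial image. A minimal sketch of that idea, using NumPy and a per-pixel FFT (the function name, synthetic frames, and the choice to drop the DC term are illustrative assumptions, not the patented method):

```python
import numpy as np

def temporal_frequency_image(frames):
    """Collapse a (T, H, W) stack of grayscale frames into one (H, W) image
    whose value at each pixel is the summed magnitude of that pixel's
    non-DC temporal frequency coefficients."""
    stack = np.asarray(frames, dtype=np.float64)   # shape (T, H, W)
    spectrum = np.fft.rfft(stack, axis=0)          # per-pixel temporal FFT
    # Drop the DC term so static background contributes nothing, then
    # sum coefficient magnitudes into a single spatial image.
    return np.abs(spectrum[1:]).sum(axis=0)

# Synthetic example: one pixel flickers sinusoidally (e.g. a rotor blade),
# the rest of the scene is static.
t = np.arange(64)
frames = np.zeros((64, 8, 8))
frames[:, 3, 3] = np.sin(2 * np.pi * 8 * t / 64)   # 8-cycle flicker
image = temporal_frequency_image(frames)
```

Static pixels produce zero in the output, while the flickering pixel stands out, which is the sense in which the single image "contains the spatial and temporal information" of the sequence.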
Abstract: A method for video processing is provided. The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or four chroma samples and/or corresponding luma samples; and performing the conversion based on the determining.
Type:
Grant
Filed:
January 29, 2021
Date of Patent:
August 15, 2023
Assignees:
BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
Inventors:
Kai Zhang, Li Zhang, Hongbin Liu, Jizheng Xu, Yue Wang
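A cross-component linear model predicts a chroma sample as alpha * luma + beta, with the two parameters fitted from a handful of neighbouring sample pairs. Below is a floating-point sketch of the two-point max/min fit over two or four sample pairs (real codec specifications use integer arithmetic and lookup tables, and the function name is an illustrative assumption):

```python
def cclm_params(luma, chroma):
    """Fit chroma ~= alpha * luma + beta from 2 or 4 neighbouring sample
    pairs, using a max/min-luma two-point fit (floating-point sketch)."""
    pairs = sorted(zip(luma, chroma))              # order by luma value
    if len(pairs) == 4:
        # Average the two smallest-luma and two largest-luma pairs.
        lo = ((pairs[0][0] + pairs[1][0]) / 2, (pairs[0][1] + pairs[1][1]) / 2)
        hi = ((pairs[2][0] + pairs[3][0]) / 2, (pairs[2][1] + pairs[3][1]) / 2)
    else:                                          # two samples: use directly
        lo, hi = pairs[0], pairs[1]
    if hi[0] == lo[0]:                             # flat luma: constant model
        return 0.0, lo[1]
    alpha = (hi[1] - lo[1]) / (hi[0] - lo[0])
    beta = lo[1] - alpha * lo[0]
    return alpha, beta

# Four sample pairs where chroma is exactly half of luma.
alpha, beta = cclm_params([100, 140, 120, 160], [50, 70, 60, 80])
```

Averaging pairs at each end (the four-sample case) makes the fit more robust to a single noisy neighbour than taking the raw extremes.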
Abstract: The present invention relates to a method and device for sharing a candidate list. A method of generating a merging candidate list for a predictive block may include: producing, on the basis of a coding block including a predictive block on which a parallel merging process is performed, at least one of a spatial merging candidate and a temporal merging candidate of the predictive block; and generating a single merging candidate list for the coding block on the basis of the produced merging candidate. Thus, it is possible to increase processing speeds for coding and decoding by performing inter-picture prediction in parallel on a plurality of predictive blocks.
Type:
Grant
Filed:
November 15, 2021
Date of Patent:
July 25, 2023
Assignees:
Electronics and Telecommunications Research Institute, UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY
Inventors:
Hui Yong Kim, Gwang Hoon Park, Kyung Yong Kim, Sang Min Kim, Sung Chang Lim, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim
Abstract: Determining GPS coordinates of one or more image point positions in at least two images using a processor configured by program instructions. Receiving position information for some of the positions from which an image capture device captured an image. Determining geometry by triangulating various registration objects in the images. Determining GPS coordinates of the image point positions in at least one of the images. Saving the GPS coordinates to memory. This system and method may be used to determine GPS coordinates of objects in an image.
Abstract: Based on viewing tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to the streaming client device before the first time point and rendered with the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device to be rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode remaining non-target view portions in the second video image.
Abstract: Disclosed is an image data encoding/decoding method and apparatus. A method for decoding a 360-degree image comprises the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; combining the generated prediction image with a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format.
Abstract: Methods and systems for operating a moving platform to locate a known target at an area associated with the target are disclosed. In an example method to locate the target at the area, a first moving platform, configured with a first type of sensor, is caused to move to the area. An attempt is made to locate, via the first moving platform and the first type of sensor, the target at the area. Based on the attempt, a second moving platform, configured with a second type of sensor, is caused to move to the area. The target is located via the second moving platform and the second type of sensor.
Abstract: A method for encoding a multi-view frame in a video encoder is provided that includes computing a depth quality sensitivity measure for a multi-view coding block in the multi-view frame, computing a depth-based perceptual quantization scale for a 2D coding block of the multi-view coding block, wherein the depth-based perceptual quantization scale is based on the depth quality sensitive measure and a base quantization scale for the 2D frame including the 2D coding block, and encoding the 2D coding block using the depth-based perceptual quantization scale.
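The abstract above scales a base quantization step per 2D coding block by a depth quality sensitivity measure. The exact mapping is not given in the abstract, so the function below is only one plausible illustration (name, formula, and `strength` parameter are all assumptions): blocks whose perceived depth is more sensitive to error get a finer quantizer.

```python
def depth_based_qscale(base_qscale, depth_sensitivity, strength=0.5):
    """Scale a base quantization step for one 2D coding block by its
    depth quality sensitivity in [0, 1] (illustrative mapping only:
    higher sensitivity yields a smaller, i.e. finer, quantization step)."""
    assert 0.0 <= depth_sensitivity <= 1.0
    return base_qscale / (1.0 + strength * depth_sensitivity)

# A highly depth-sensitive block is quantized more finely than an
# insensitive one at the same base scale.
fine = depth_based_qscale(16.0, 1.0)
coarse = depth_based_qscale(16.0, 0.0)
```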
Abstract: “Semi-supervised” machine learning relies on less human input than a supervised algorithm to train a machine learning algorithm to perform named entity recognition (NER). Starting with a known entity value or known pattern value for a specific entity type, phrases in a training data corpus are identified that include the known entity value. Context-value patterns are generated to match selected phrases that include the known entity value. One or more context-value patterns may be validated based on human input. The validated patterns identify additional entity values. A subset of the additional entity values may also be validated based on human input. Occurrences of validated entity values may be labeled in the training corpus. Sample phrases from the labeled training dataset may be extracted to form a reduced-size training set for a supervised machine learning model, which may be further used in production to label data for any named entity recognition application.
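One round of the bootstrapping described above can be sketched in a few lines: find phrases containing a seed entity value, turn each occurrence's context into a pattern, then apply the patterns to harvest new candidate values. This is a toy regex-based sketch under assumed inputs; the human-validation steps the abstract describes are omitted:

```python
import re

def bootstrap_entity_values(corpus, seed_value):
    """One bootstrap round: derive context-value patterns from sentences
    containing the seed entity value, then apply them across the corpus
    to collect new candidate entity values (validation omitted)."""
    patterns = set()
    for sentence in corpus:
        if seed_value in sentence:
            # Replace the known value with a capture group to form a
            # reusable context-value pattern.
            patterns.add(
                re.escape(sentence).replace(re.escape(seed_value), r"(\w+)")
            )
    candidates = set()
    for pattern in patterns:
        for sentence in corpus:
            m = re.search(pattern, sentence)
            if m:
                candidates.add(m.group(1))
    return candidates - {seed_value}

corpus = ["flights to Paris today", "flights to Boston today", "the weather is nice"]
new_values = bootstrap_entity_values(corpus, "Paris")
```

The pattern learned from the "Paris" sentence ("flights to (\w+) today") matches the "Boston" sentence, so "Boston" is harvested as a new candidate value for the same entity type.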
Abstract: Dual cameras that simultaneously capture RGB and IR images of a scene can be used to remove glare from the RGB image, transformed to a YUV image, by substituting a glare region in the luminance component of the YUV image with the pixel values in a corresponding region of the IR image. Further, color information in the glare region may be adjusted by averaging over or extrapolating from the color information in the surrounding region.
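The core substitution step in the abstract above, replacing the glare region of the YUV image's luminance channel with co-registered IR pixel values, can be sketched with NumPy as follows (the function name, toy images, and precomputed glare mask are assumptions; the chroma averaging/extrapolation the abstract also mentions is omitted):

```python
import numpy as np

def remove_glare_luma(yuv, ir, glare_mask):
    """Replace the luminance (Y) channel inside a boolean glare mask with
    the co-registered IR image values, leaving U and V untouched."""
    out = yuv.copy()
    out[..., 0] = np.where(glare_mask, ir, yuv[..., 0])
    return out

# 4x4 toy images: glare saturates the luma in the top-left 2x2 corner.
yuv = np.full((4, 4, 3), 128.0)
yuv[:2, :2, 0] = 255.0                   # blown-out glare region
ir = np.full((4, 4), 100.0)              # the IR camera sees past the glare
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
clean = remove_glare_luma(yuv, ir, mask)
```

Substituting only the Y channel preserves the original chrominance outside the mask, which is why the abstract treats color recovery inside the glare region as a separate adjustment step.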
Abstract: Disclosed are methods and apparatuses for decoding an image. A method includes receiving a bitstream obtained by encoding the image; dividing a first coding block into a plurality of second coding blocks; generating a prediction block of a second coding block based on syntax information obtained from the bitstream; and reconstructing the second coding block based on the prediction block and a residual block of the second coding block, the residual block being obtained by performing a dequantization and an inverse-transform on quantized transform coefficients from the bitstream. The first coding block has a recursive division structure. The first coding block is divided based on at least one of a quad tree division, a binary tree division or a triple tree division.
Abstract: Driving an emitter to emit pulses of electromagnetic radiation according to a jitter specification in a hyperspectral, fluorescence, and laser mapping imaging system is described. A system includes an emitter for emitting pulses of electromagnetic radiation and an image sensor comprising a pixel array for sensing reflected electromagnetic radiation. The system includes a driver for driving emissions by the emitter according to a jitter specification. The system is such that at least a portion of the pulses of electromagnetic radiation emitted by the emitter comprises one or more of a hyperspectral emission, a fluorescence emission, and/or a laser mapping pattern.
Abstract: Driving an emitter to emit pulses of electromagnetic radiation according to a jitter specification in a laser mapping imaging system is described. A system includes an emitter for emitting pulses of electromagnetic radiation and an image sensor comprising a pixel array for sensing reflected electromagnetic radiation. The system includes a driver for driving emissions by the emitter according to a jitter specification. The system is such that at least a portion of the pulses of electromagnetic radiation emitted by the emitter comprises a laser mapping pattern.
Abstract: Driving an emitter to emit pulses of electromagnetic radiation according to a jitter specification in a fluorescence imaging system is described. A system includes an emitter for emitting pulses of electromagnetic radiation and an image sensor comprising a pixel array for sensing reflected electromagnetic radiation. The system includes a driver for driving emissions by the emitter according to a jitter specification. The system is such that at least a portion of the pulses of electromagnetic radiation emitted by the emitter comprises electromagnetic radiation having a wavelength from about 770 nm to about 790 nm.
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for characterizing activity in a recurrent artificial neural network. In one aspect, a method for identifying decision moments in a recurrent artificial neural network includes determining a complexity of patterns of activity in the recurrent artificial neural network, wherein the activity is responsive to input into the recurrent artificial neural network, determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input, and identifying the decision moment based on the timing of the activity that has the distinguishable complexity.
Type:
Grant
Filed:
June 11, 2018
Date of Patent:
May 30, 2023
Assignee:
INAIT SA
Inventors:
Henry Markram, Ran Levi, Kathryn Pamela Hess Bellwald
Abstract: A camera device for shooting at a 360-degree angle and displaying images, and a control system, are provided. The camera device includes a supporting platform, a supporting spindle, a supporting base, a rotating shooting bracket, and a video display device. The control system includes at least one wireless communication module configured to receive video data or control commands sent by an external control terminal, a storage module configured to store preset video data and to receive and store the video data transmitted from the wireless communication module, and a control module configured to be in communication connection with the wireless communication module and the storage module. The control module includes an image processing unit and a display processing unit. The image processing unit obtains the video data stored in the storage module and performs analysis processing on the video data to obtain processed video data.
Abstract: Systems, methods, and non-transitory, machine-readable media to facilitate adaptive processing and content control are disclosed. Content composites may be created and configured according to a computational model that may include a hierarchical ordering of the content composites using a hierarchical data structure. The configured content composites may be presented with a graphical user interface of an endpoint device. Metrics of interactions with interface elements corresponding to the configured content composites may be determined using a processing device that monitors inputs. The computational model may be automatically trained using the metrics of interactions to create an adapted computational model. Adapted content composites may be created and configured according to the adapted computational model that may include a second hierarchical ordering using a second hierarchical data structure. The adapted content composites may be presented with the graphical user interface.
Type:
Grant
Filed:
July 8, 2021
Date of Patent:
May 23, 2023
Assignee:
Pluralsight, LLC
Inventors:
Krishna James Kannan, Nathan R. Walkingshaw, Gilbert Gomez Lee
Abstract: Identifying a person committing a crime in video data captured by a security device. An information request message identifying the video data and a need for information about the video data is received. When the video data is determined suitable for sharing within a geographic network, an access control of the video data is set to allow a client device registered with the geographic network to display the video data. A display control value of the video data is set to direct display of a label with the video data to indicate the need for the information. A rating of usefulness of the information received from the client device is determined and a first value is added to an account associated with the client device based at least in part upon the rating.
Abstract: A system and method for moving and aligning tandem axle lock pins on a semi-trailer, having a viewer and a remote receiver that are placeable in communication with each other. The viewer is attachable to a trailer so that a camera on the viewer can be positioned such that the trailer slide rail and tandem axle slider frame of the trailer are within the field of vision. The remote receiver may be a smartphone which can access data from the camera so that the user can, from a remote location such as a cab of a tractor, determine the alignment and orientation of the trailer slide rail and the tandem axle slider frame.
Type:
Grant
Filed:
April 15, 2021
Date of Patent:
May 16, 2023
Assignee:
Vanco Products, LLC
Inventors:
John Vanden Bos, Nathan Colvin, Matthew Colvin, Steve Colvin