Abstract: A system for making gradual adjustments to planograms may include at least one processor. The processor may be programmed to receive a first image of a shelf; analyze the first image to determine a first placement of products on the shelf; determine a planned first adjustment to the first placement of products; and provide first information configured to cause the planned first adjustment. The processor may then receive a second image of the shelf captured after the first information was provided; analyze the second image to determine a second placement of products on the shelf; determine a planned second adjustment to the second placement of products; and provide second information configured to cause the planned second adjustment to the determined second placement of products.
Abstract: An imaging device that reduces discrepancies between a bird's-eye image and actually measured distances includes imaging cameras mounted on a ship to capture peripheral images of the ship, and combines the peripheral images captured by the imaging cameras to create the bird's-eye image as a composite image. The imaging device includes an auxiliary camera adjacent to at least one of the imaging cameras, and a distance calculator that calculates a distance in a lateral direction using the auxiliary camera and the at least one of the imaging cameras adjacent to the auxiliary camera.
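The lateral-distance calculation from the adjacent auxiliary/imaging camera pair can be sketched as classic stereo triangulation. A rectified pinhole-stereo model is assumed here for illustration; the abstract does not disclose the patent's actual formula.

```python
def lateral_distance(focal_px, baseline_m, disparity_px):
    """Distance from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: spacing between the
    auxiliary and imaging cameras in metres; disparity_px: horizontal
    shift of the same scene point between the two views, in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

A point seen with 50 px of disparity by cameras 0.1 m apart at a 1000 px focal length lies 2 m away laterally.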
Abstract: A 3D facial reconstruction system includes a main color range camera, a plurality of auxiliary color cameras, a processor and a memory. The main color range camera is arranged at a front angle of a reference user to capture a main color image and a main depth map of the reference user. The plurality of auxiliary color cameras are arranged at a plurality of side angles of the reference user to capture a plurality of auxiliary color images of the reference user. The processor executes instructions stored in the memory to generate a 3D front angle image according to the main color image and the main depth map, generate 3D side angle images according to the 3D front angle image and the plurality of auxiliary color images, and train an artificial neural network model according to a training image, the 3D front angle image and 3D side angle images.
Abstract: An image pickup apparatus that is capable of reliably preventing hunting at low cost and of switching between the day mode and the night mode at an optimal timing. An image sensor outputs an image signal depending on an optical image formed through an image pickup optical system. A mode setting unit sets a photographing mode for photographing using the image sensor from among a day mode and a night mode in which sensitivity for a wavelength range corresponding to infrared light is higher than that in the day mode. An obtaining unit obtains ratio information about the ratios of the infrared light and visible light based on the image signal in the night mode. A condition setting unit sets a determination condition that is used for switching the photographing mode from the night mode to the day mode based on the ratio information.
Abstract: Systems and methods for interactive site resource management are disclosed. In certain embodiments, the system and method collect and access local geospatially-referenced landscape and site survey data in the field, facilitated by selectable content views; the gathered landscape and site survey data is synthesized with baseline geospatially-referenced landscape and site survey data. In one aspect, the synthesized geospatially-referenced landscape and site survey data may be used by a project lead or authority agent to create a landscape and/or site report, an authorized development plan, a master landscape and/or site survey, and/or as-built development plans. In certain embodiments, the synthesized landscape and site survey data is presented as an augmented reality display.
Type:
Grant
Filed:
September 14, 2020
Date of Patent:
April 19, 2022
Assignee:
S&ND IP, LLC
Inventors:
Nathaniel David Boyless, Dante Lee Knapp
Abstract: Encoding an image using a non-encoding region of the image, a block-based encoding region of the image, and a pixel-based encoding region of the image.
Type:
Grant
Filed:
December 30, 2014
Date of Patent:
April 19, 2022
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Dae-sung Cho, Chan-yul Kim, Jeong-hoon Park, Pil-kyu Park, Kwang-pyo Choi, Dai-woong Choi, Woong-il Choi
Abstract: A method of measuring a banding artefact in an image includes generating a gradient profile from the image, where the gradient profile includes respective gradient magnitudes of pixels of the image; generating, using the gradient profile, a candidate banding pixel (CBP) map, where each location of the CBP map indicates whether the gradient magnitude of the corresponding pixel of the image is greater than a first threshold and smaller than a second threshold; generating, using the CBP map, a banding edge map (BEM), where the BEM includes connected banding edges of the image; generating, using the BEM, a banding visibility map (BVM), where the BVM includes a respective banding metric for at least some pixels of the image; and generating a banding index of the image using the BVM.
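The CBP-map step above (gradients small enough to be banding rather than true edges, yet non-zero) can be sketched as follows. The gradient operator and the threshold values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def candidate_banding_pixels(image, t_low=0.5, t_high=12.0):
    """Mark pixels whose gradient magnitude lies strictly between two
    thresholds: candidates for banding (gentle, stair-step gradients).
    Threshold values here are placeholders for illustration."""
    img = np.asarray(image, dtype=np.float64)
    # Central-difference gradient profile of the image.
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)
    # True at CBP locations, False elsewhere.
    return (grad_mag > t_low) & (grad_mag < t_high)
```

On a smooth horizontal ramp every interior pixel has a small, uniform gradient, so the whole ramp is flagged as candidate banding.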
Abstract: A sensor sphere may measure light and/or other properties of an environment from a multitude of angles and/or perspectives. A rendering system may obtain measurements that the sensor sphere generates as a result of measuring light in the environment from a common position and the multitude of angles and/or perspectives. The rendering system may receive image(s) that capture the environment from a first perspective, may determine a first set of the measurements that measure the lighting from the first perspective, may determine a second set of the measurements that measure the lighting from a different second perspective, may illuminate the environment from the second perspective by adjusting the lighting of the environment according to a difference between the second set of measurements and the first set of measurements, and may render the environment from the second perspective with the adjusted lighting.
Abstract: A first value of a first data element (311) in a first set of data elements (310) is obtained, the first set of data elements being based on a first time sample of a signal. A second value of a second data element (321) in a second set of data elements (320) is obtained, the second set of data elements being based on a second, later time sample of the signal. A measure of similarity is derived between the first value and the second value. Based on the derived measure, a quantisation parameter useable in performing quantisation on data based on the first time sample of the signal is determined. Output data is generated using the quantisation parameter.
Type:
Grant
Filed:
September 21, 2018
Date of Patent:
March 1, 2022
Assignee:
V-NOVA INTERNATIONAL LIMITED
Inventors:
Ivan Makeev, Balázs Keszthelyi, Robert Ettinger, Michele Sanna, Stergios Poularakis
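The similarity-driven quantisation described in the abstract above can be sketched roughly as below. The similarity measure and the mapping from similarity to quantisation step are assumptions for illustration; the patent does not specify them in the abstract.

```python
import numpy as np

def choose_quantisation_step(first_sample, second_sample,
                             base_step=8.0, max_step=32.0):
    """Derive a similarity measure between co-located data elements of
    two time samples of a signal, then map it to a quantisation step:
    the more similar the samples, the coarser the quantisation of the
    first sample (hypothetical mapping)."""
    a = np.asarray(first_sample, dtype=np.float64)
    b = np.asarray(second_sample, dtype=np.float64)
    # Similarity in (0, 1]: 1 when identical, falling with mean abs diff.
    similarity = 1.0 / (1.0 + np.mean(np.abs(a - b)))
    return base_step + similarity * (max_step - base_step)

def quantise(data, step):
    """Uniform quantisation of the first time sample's data using the
    derived quantisation parameter."""
    return np.round(np.asarray(data, dtype=np.float64) / step) * step
```

Identical samples yield the maximum step; dissimilar samples fall back toward the base step, preserving the changing detail.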
Abstract: An in-loop deblocking filter apparatus (120) for processing a current row or column of samples into a filtered row or column of samples. The current row or column of samples comprises a plurality of samples from a first sample block and a horizontally or vertically neighboring second sample block of a reconstructed picture of a video stream. The samples of the current row or column of samples have sample values pN−1, . . . , p0, q0, . . . , qN−1, wherein N is an even integer greater than 2. If a first condition or a second condition is satisfied, the current row or column is processed by: determining a filtered sample value q0′ by applying a (2N−1)-tap filter to the sample values pN−2, . . . , p0, q0, . . . , qN−1 of the current row or column; and/or determining a filtered sample value p0′ by applying a (2N−1)-tap filter to the sample values pN−1, . . . , p0, q0, . . . , qN−2 of the current row or column.
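The tap windows in the abstract above can be sketched as follows. Uniform averaging taps stand in for the codec's actual filter coefficients, which the abstract does not give.

```python
import numpy as np

def deblock_row(row):
    """row holds the 2N samples [p_{N-1}, ..., p_0, q_0, ..., q_{N-1}]
    across a block boundary, with N even and greater than 2.
    q0' is computed from the (2N-1)-tap window p_{N-2}..q_{N-1} and
    p0' from the window p_{N-1}..q_{N-2} (placeholder uniform taps)."""
    row = np.asarray(row, dtype=np.float64)
    n = row.size // 2
    assert row.size == 2 * n and n > 2 and n % 2 == 0
    taps = np.full(2 * n - 1, 1.0 / (2 * n - 1))
    q0_filtered = float(np.dot(taps, row[1:]))   # drop p_{N-1}
    p0_filtered = float(np.dot(taps, row[:-1]))  # drop q_{N-1}
    out = row.copy()
    out[n] = q0_filtered      # q_0 sits at index N
    out[n - 1] = p0_filtered  # p_0 sits at index N-1
    return out
```

For N = 4, a hard step between blocks (1,1,1,1 | 9,9,9,9) is softened only at the two boundary samples p0 and q0, the rest of the row being untouched.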
Abstract: The present disclosure relates to encoding and decoding video employing texture coding. In particular, a texture region is identified within a video picture and a texture patch is determined for said region. Moreover, a set of parameters specifies luminance within the texture region (1001), by fitting the texture region samples to a two-dimensional polynomial function of the patch determined according to the set of parameters (1040); and/or motion within the texture region, by fitting motion estimated between the texture region of the video picture and an adjacent picture to a two-dimensional polynomial. The texture patch and the set of parameters are then included in a bitstream, which is output by the encoder and provided in this way to the decoder, which reconstructs the texture based on the patch and the function applied to the patch.
Abstract: An image decoding method according to the present invention includes: a step for determining a reference sample line of a current block; a step for determining whether candidate intra-prediction modes identical to the intra-prediction mode of the current block exist; a step for deriving the intra-prediction mode of the current block on the basis of the determination; and a step for performing intra-prediction on the current block on the basis of the reference sample line and the intra-prediction mode. Here, at least one of the candidate intra-prediction modes may be derived by adding or subtracting an offset to or from the maximum value among the intra-prediction mode of a neighboring block above the current block and the intra-prediction mode of a neighboring block to the left of the current block.
Type:
Grant
Filed:
April 1, 2021
Date of Patent:
February 15, 2022
Assignee:
GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
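The candidate-derivation step described in the abstract above can be sketched as follows. The offset value, the mode-numbering range, and the wrap-around handling are assumptions modelled loosely on common intra-mode schemes, not details given in the abstract.

```python
def candidate_intra_modes(above_mode, left_mode, offset=1, num_modes=67):
    """Build a candidate intra-prediction mode list: the two neighbour
    modes, plus the maximum of the two shifted by +/- an offset.
    Angular modes are assumed to occupy 2..num_modes-1 and to wrap."""
    candidates = []
    for m in (above_mode, left_mode):
        if m not in candidates:
            candidates.append(m)
    max_mode = max(above_mode, left_mode)
    for derived in (max_mode - offset, max_mode + offset):
        # Keep derived candidates inside the angular range 2..num_modes-1.
        wrapped = 2 + (derived - 2) % (num_modes - 2)
        if wrapped not in candidates:
            candidates.append(wrapped)
    return candidates
```

With above mode 18 and left mode 50, the list becomes [18, 50, 49, 51]: the maximum (50) contributes its two offset neighbours.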
Abstract: Compensating for delay in a Pan-Tilt-Zoom (PTZ) camera system is disclosed. Client-side view transformation is carried out to emulate a future Field Of View (FOV) of the camera so that the impact of latency is reduced.
Abstract: A material that is expressed irregularly in an observation area is observed effectively. An observing apparatus includes a first observing unit performing time-lapse shooting of a predetermined observation area, a first discriminating unit discriminating whether or not a first material is expressed in the observation area based on an image obtained by the first observing unit, and a second observing unit starting time-lapse shooting of a part where the first material is expressed, at the timing when the first material is expressed in the observation area, in which a shooting frequency of the time-lapse shooting by the second observing unit is higher than a shooting frequency of the time-lapse shooting by the first observing unit.
Abstract: The invention relates to an apparatus for decoding an encoded texture block of a texture image, the decoding apparatus comprising: a partitioner (510) adapted to determine a partitioning mask (332) for the encoded texture block (312′) based on depth information (322) associated with the encoded texture block, wherein the partitioning mask (332) is adapted to define a plurality of partitions (P1, P2) and to associate a texture block element of the encoded texture block with a partition of the plurality of partitions of the encoded texture block; and a decoder (720) adapted to decode the partitions of the plurality of partitions of the encoded texture block based on the partitioning mask.
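The depth-based partitioning mask above can be sketched as a simple thresholding of the co-located depth block. The mean-depth threshold is an assumption for illustration; the abstract does not state how the partitioner derives the mask.

```python
import numpy as np

def partitioning_mask(depth_block):
    """Split a block's elements into two partitions (P1/P2) by comparing
    each co-located depth value with the block's mean depth: 0 marks P1
    (near), 1 marks P2 (far). The rule is hypothetical."""
    d = np.asarray(depth_block, dtype=np.float64)
    return (d >= d.mean()).astype(np.uint8)
```

A block whose top rows are near and bottom rows are far yields a clean horizontal two-partition mask, which the decoder can then use to decode each partition separately.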
Abstract: An example device for decoding video data includes a video decoder configured to decode one or more syntax elements at a region-tree level of a region-tree of a tree data structure for a coding tree block (CTB) of video data, the region-tree having one or more region-tree nodes including region-tree leaf and non-leaf nodes, each of the region-tree non-leaf nodes having at least four child region-tree nodes, decode one or more syntax elements at a prediction-tree level for each of the region-tree leaf nodes of one or more prediction trees of the tree data structure for the CTB, the prediction trees each having one or more prediction-tree leaf and non-leaf nodes, each of the prediction-tree non-leaf nodes having at least two child prediction-tree nodes, each of the prediction leaf nodes defining respective coding units (CUs), and decode video data for each of the CUs.
Type:
Grant
Filed:
March 20, 2017
Date of Patent:
January 11, 2022
Assignee:
QUALCOMM Incorporated
Inventors:
Xiang Li, Jianle Chen, Li Zhang, Xin Zhao, Hsiao-Chiang Chuang, Feng Zou, Marta Karczewicz
Abstract: An in-loop deblocking filter apparatus (120) for processing a current row or column of samples into a filtered row or column of samples. The current row or column of samples comprises a plurality of samples from a first sample block and a horizontally or vertically neighboring second sample block of a reconstructed picture of a video stream. The samples of the current row or column of samples have sample values pN−1, . . . , p0, q0, . . . , qN−1, wherein N is an even integer greater than 2. If a first condition or a second condition is satisfied, the current row or column is processed by: determining a filtered sample value q0′ by applying a (2N−1)-tap filter to the sample values pN−2, . . . , p0, q0, . . . , qN−1 of the current row or column; and/or determining a filtered sample value p0′ by applying a (2N−1)-tap filter to the sample values pN−1, . . . , p0, q0, . . . , qN−2 of the current row or column.
Abstract: A video coder may be configured to determine to use a decoder side motion vector refinement process, including bi-lateral template matching, based on whether or not weights used for bi-predicted prediction are equal or not. In one example, decoder side motion vector refinement may be disabled when weights used for bi-predicted prediction are not equal.
Type:
Grant
Filed:
February 27, 2020
Date of Patent:
November 30, 2021
Assignee:
QUALCOMM Incorporated
Inventors:
Hongtao Wang, Wei-Jung Chien, Marta Karczewicz, Han Huang
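The gating condition in the abstract above is simple enough to state directly in code: refinement runs only when the two bi-prediction weights are equal. The function names and the callback shape are illustrative.

```python
def dmvr_enabled(weight_l0, weight_l1):
    """Gate from the abstract: decoder-side motion vector refinement
    (bilateral template matching) is used only when the weights applied
    to the two bi-prediction references are equal."""
    return weight_l0 == weight_l1

def refine_if_enabled(mv_pair, weight_l0, weight_l1, refine):
    """Apply a caller-supplied refinement step only when the gate allows
    it; with unequal weights the signalled vectors pass through."""
    if dmvr_enabled(weight_l0, weight_l1):
        return refine(mv_pair)
    return mv_pair
```

The pass-through branch is the point of the claim: unequal weights make the bilateral template a biased matching target, so refinement is skipped rather than risked.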
Abstract: An image processing device is described. The circuitry of the image processing device obtains an image that is generated on the basis of incident light and a transfer function related to a conversion between the incident light and the image, and determines a cost function for prediction mode selection according to the transfer function. The cost function calculates a cost value based on a first parameter corresponding to a prediction residual code amount and a second parameter corresponding to a prediction mode code amount. The cost function is determined so as to favor increasing the prediction residual code amount or decreasing the prediction mode code amount as the dynamic range of the transfer function increases. The circuitry determines a prediction mode for coding a coding unit of the image according to the determined cost function, and encodes the coding unit according to the determined prediction mode.
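The transfer-function-dependent cost described above can be sketched as a weighted sum in which the weight on mode-signalling bits grows with dynamic range, so high-dynamic-range content favors modes that spend bits on the residual instead. The linear weighting rule is an assumption; the abstract states only the direction of the bias.

```python
def prediction_mode_cost(residual_bits, mode_bits, dynamic_range,
                         base_lambda=1.0):
    """Cost = residual code amount + lambda * mode code amount, with
    lambda scaled by the transfer function's dynamic range
    (hypothetical scaling)."""
    lam = base_lambda * dynamic_range
    return residual_bits + lam * mode_bits

def select_prediction_mode(candidates, dynamic_range):
    """Pick the (mode, residual_bits, mode_bits) candidate with the
    lowest cost under the sketched cost function."""
    return min(candidates,
               key=lambda c: prediction_mode_cost(c[1], c[2],
                                                  dynamic_range))[0]
```

With two candidates, one residual-heavy and one mode-signalling-heavy, the winner flips as the dynamic range grows, matching the bias the abstract describes.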