Patents Examined by Talha M Nawaz
  • Patent number: 11889065
    Abstract: A method of decoding an image can include constructing an advanced motion vector prediction (AMVP) candidate list using the available motion vector candidates among a left motion vector candidate, an above motion vector candidate and a temporal motion vector candidate; selecting a motion vector predictor among the motion vector candidates of the AMVP candidate list using an AMVP index and generating a motion vector using the motion vector predictor and a differential motion vector; generating a prediction block using the motion vector and a reference picture index; inversely quantizing a quantized block using a quantization parameter to generate a transformed block and inversely transforming the transformed block to generate a residual block; and generating a reconstructed block using the prediction block and the residual block, in which the quantization parameter is generated per quantization unit, and a minimum size of the quantization unit is adjusted per picture.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: January 30, 2024
    Assignee: GENSQUARE LLC
    Inventors: Soo Mi Oh, Moonock Yang
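    A minimal Python sketch (not the patent's actual decoder) of the AMVP flow the abstract above describes: assemble a candidate list from the available left, above and temporal candidates, pick a predictor by index, and add the signalled differential motion vector. The function names, the two-candidate limit and the zero-vector padding are assumptions made only for illustration.
```python
def build_amvp_list(left_mv, above_mv, temporal_mv, max_candidates=2):
    """Collect available spatial/temporal candidates in order, dropping duplicates."""
    candidates = []
    for mv in (left_mv, above_mv, temporal_mv):
        if mv is not None and mv not in candidates:
            candidates.append(mv)
        if len(candidates) == max_candidates:
            break
    while len(candidates) < max_candidates:
        candidates.append((0, 0))  # pad with zero motion vectors if too few are available
    return candidates

def reconstruct_mv(amvp_index, mvd, left_mv=None, above_mv=None, temporal_mv=None):
    """Motion vector = selected predictor + signalled differential motion vector."""
    mvp = build_amvp_list(left_mv, above_mv, temporal_mv)[amvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Left neighbour unavailable, so the predictor comes from the above candidate.
print(reconstruct_mv(0, (1, -2), left_mv=None, above_mv=(4, 4), temporal_mv=(8, 0)))  # (5, 2)
```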
  • Patent number: 11889110
    Abstract: Methods, apparatuses, and non-transitory computer-readable storage mediums are provided for decoding a video signal. The method includes obtaining a first reference picture I associated with a video block, obtaining control point motion vectors (CPMVs) of an affine coding block based on the video block, obtaining prediction samples I(i, j) of the affine coding block, deriving PROF prediction sample refinements of the affine coding block based on the PROF, receiving an LIC flag that indicates whether the LIC is applied to the affine coding block, deriving, when the LIC is applied, an LIC weight and offset based on neighboring reconstructed samples of the affine coding block and their corresponding reference samples in the first reference picture, and obtaining final prediction samples of the affine coding block based on the PROF prediction sample refinements and the LIC weight and offset.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: January 30, 2024
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Xiaoyu Xiu, Yi-Wen Chen, Xianglin Wang, Shuiming Ye, Tsung-Chuan Ma, Hong-Jheng Jhu
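    The abstract for patent 11889110 combines PROF sample refinements with an LIC weight and offset derived from neighboring reconstructed and reference samples. The sketch below shows one plausible way to do that with a simple least-squares fit; the regression form, the variable names and the order of applying PROF before LIC are assumptions, not the patent's specification.
```python
def derive_lic_params(neighbor_recon, neighbor_ref):
    """Fit recon ~= w * ref + o over the neighbouring samples (ordinary least squares)."""
    n = len(neighbor_ref)
    sum_x = sum(neighbor_ref)
    sum_y = sum(neighbor_recon)
    sum_xx = sum(x * x for x in neighbor_ref)
    sum_xy = sum(x * y for x, y in zip(neighbor_ref, neighbor_recon))
    denom = n * sum_xx - sum_x * sum_x
    if denom == 0:
        return 1.0, 0.0  # fall back to identity weight and zero offset
    w = (n * sum_xy - sum_x * sum_y) / denom
    o = (sum_y - w * sum_x) / n
    return w, o

def final_prediction(pred_sample, prof_refinement, lic_flag, w=1.0, o=0.0):
    """Apply the PROF refinement, then the LIC linear model when the LIC flag is set."""
    refined = pred_sample + prof_refinement
    return w * refined + o if lic_flag else refined

w, o = derive_lic_params(neighbor_recon=[52, 60, 71, 80], neighbor_ref=[50, 58, 70, 78])
print(final_prediction(64, 1.5, lic_flag=True, w=w, o=o))
```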
  • Patent number: 11875478
    Abstract: Systems and methods are disclosed for dynamically smoothing images based on network conditions to adjust the bitrate needed to transmit the images. Content in the images is smoothed to reduce the quantity of bits needed to encode each image. Filtering the images modifies regions containing content with a high frequency of pixel variation, reducing that frequency so the pixel colors in the region appear “smoothed” or homogeneous. For example, a region of an image showing a grassy lawn has a high frequency of variation from pixel to pixel resulting from the fine detail of separate blades of grass that may be similar in color, but not homogeneous. Encoding the region as a single shade of green (or multi-pixel regions of different shades of green) enables a viewer to recognize it as a grassy lawn while greatly reducing the number of bits needed to represent the region.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Diksha Garg, Keshava Prasad, Vinayak Jayaram Pore, Hassane Samir Azar
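    As a rough illustration of the bitrate-driven smoothing described above for patent 11875478, the sketch below flattens small blocks whose pixel variance is high, but only when the available bandwidth falls below a budget. The block size, the variance threshold and the bandwidth heuristic are invented for this example.
```python
def smooth_for_bitrate(image, available_kbps, budget_kbps=2000, block=2, var_threshold=50.0):
    """image: 2D list of luma values; returns a copy smoothed only when bandwidth is scarce."""
    if available_kbps >= budget_kbps:
        return [row[:] for row in image]  # enough bandwidth: leave the detail intact
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [image[y][x] for y in range(by, min(by + block, h))
                                for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var > var_threshold:  # high-frequency detail: flatten the block to its mean
                for y in range(by, min(by + block, h)):
                    for x in range(bx, min(bx + block, w)):
                        out[y][x] = mean
    return out
```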
  • Patent number: 11849098
    Abstract: A method for controlling the video display of a virtual 3D object or scene on a 2D display device is provided. A virtual video camera, controlled by a virtual-video-camera state variable consisting of camera control and location parameters, generates the 2D video of the object or scene. A target virtual camera state, representing an optimal view of a given surface point, is generated for each model surface point. A 2D coordinate of the image display is received from a user, either by looking at a point or by selecting it with a mouse click. A corresponding 3D designated object point on the surface of the object is calculated from the received 2D display coordinate. The virtual camera is controlled to move its view toward the 3D designated object point with dynamics that allow the user to easily follow the motion of the designated object point while watching the video.
    Type: Grant
    Filed: May 16, 2023
    Date of Patent: December 19, 2023
    Assignee: Eyegaze Inc.
    Inventors: Dixon Cleveland, Preethi Vaidyanathan
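    The sketch below illustrates the "move the view toward the designated point" behaviour described above for patent 11849098, using simple exponential smoothing of the camera aim point; the dynamics model, the time constant and the names are assumptions, not the patent's control law.
```python
def step_camera_aim(current_aim, designated_point, dt, time_constant=0.5):
    """Move the camera's aim point a fraction of the way toward the designated 3D point."""
    alpha = min(1.0, dt / time_constant)  # slower time constant => easier to follow visually
    return tuple(c + alpha * (d - c) for c, d in zip(current_aim, designated_point))

aim = (0.0, 0.0, 0.0)
target = (1.0, 2.0, 0.5)  # 3D point found by unprojecting the selected 2D display coordinate
for _ in range(10):
    aim = step_camera_aim(aim, target, dt=0.033)
print(aim)  # converges gradually toward the designated object point
```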
  • Patent number: 11843773
    Abstract: A video decoding method according to an embodiment of the present invention may include determining a type of filter to be applied to a first-layer picture to which a second-layer picture, as a decoding target, refers; determining a filtering target of the first-layer picture to which the filter is applied; filtering the filtering target based on the type of the filter; and adding the filtered first-layer picture to a second-layer reference picture list. Accordingly, the video decoding method and an apparatus using the same may reduce the prediction error in an upper layer and enhance encoding efficiency.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: December 12, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jin Ho Lee, Jung Won Kang, Ha Hyun Lee, Jin Soo Choi, Jin Woong Kim
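    A toy Python sketch of the inter-layer reference flow described above for patent 11843773: choose a filter type for the first-layer picture, filter it, and append the result to the second-layer reference picture list. The nearest-neighbour upsampling filter and the two filter types shown are illustrative assumptions.
```python
def upsample_nearest(picture, scale=2):
    """Toy upsampling filter: nearest-neighbour scaling of a 2D sample array."""
    return [[row[x // scale] for x in range(len(row) * scale)]
            for row in picture for _ in range(scale)]

def add_interlayer_reference(first_layer_pic, reference_list, filter_type="upsample"):
    """Filter the first-layer picture and append it to the second-layer reference list."""
    if filter_type == "upsample":
        filtered = upsample_nearest(first_layer_pic)
    else:  # e.g. "none": reuse the base-layer samples directly
        filtered = [row[:] for row in first_layer_pic]
    reference_list.append(filtered)
    return reference_list

refs = add_interlayer_reference([[1, 2], [3, 4]], reference_list=[])
print(refs[0])  # the upsampled base-layer picture, now usable as a second-layer reference
```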
  • Patent number: 11838618
    Abstract: Disclosed herein is an image monitoring system including: a camera connected to a network; display means for displaying an image captured by the camera; and display control means for controlling display such that, in displaying images by the display means, an image is displayed in a window having a predetermined layout. The display control means presets an allocation database containing a correlation between the window having a predetermined layout and a camera identification code and, when the camera is connected to the network, automatically sets a correlation between the camera identification code in the allocation database and the camera, thereby controlling image display into the window on the basis of the allocation database.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: December 5, 2023
    Assignee: Sony Group Corporation
    Inventor: Satoshi Ishii
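    The allocation-database idea in the abstract above for patent 11838618 can be pictured as a preset mapping from camera identification codes to display windows, consulted automatically when a camera joins the network. The dictionary layout and the identifiers below are assumptions for illustration.
```python
ALLOCATION_DB = {
    "CAM-ENTRANCE": {"window": 1, "layout": "2x2"},
    "CAM-LOBBY":    {"window": 2, "layout": "2x2"},
}

active_windows = {}

def on_camera_connected(camera_id):
    """When a camera appears on the network, route its image to the preset window."""
    entry = ALLOCATION_DB.get(camera_id)
    if entry is None:
        return None  # unknown camera: no preset window in the allocation database
    active_windows[entry["window"]] = camera_id
    return entry

print(on_camera_connected("CAM-LOBBY"))  # -> {'window': 2, 'layout': '2x2'}
```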
  • Patent number: 11818327
    Abstract: A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming and other light field display applications are provided, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depth as the distance between a given layer and the display surface increases. Data layers are sampled using an effective resolution function to determine a suitable sampling rate and rendered using hybrid rendering, such as perspective and oblique rendering, to encode the light field corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: November 14, 2023
    Assignee: Avalon Holographics Inc.
    Inventors: Matthew Hamilton, Chuck Rumbolt, Donovan Benoit, Matthew Troke, Robert Lockyer, Thomas Butyn
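    As a hedged sketch of the layered scene decomposition described above for patent 11818327, the snippet below bins scene elements into depth layers whose extent grows with distance from the display surface; the geometric spacing rule is an assumption chosen only to convey the idea of layers of increasing depth.
```python
def layer_boundaries(near, far, num_layers):
    """Geometrically spaced boundaries: thin layers near the display, thick ones far away."""
    ratio = (far / near) ** (1.0 / num_layers)
    return [near * ratio ** i for i in range(num_layers + 1)]

def assign_layer(depth, boundaries):
    """Return the index of the depth layer containing this depth value."""
    for i in range(len(boundaries) - 1):
        if boundaries[i] <= depth < boundaries[i + 1]:
            return i
    return len(boundaries) - 2  # clamp to the farthest layer

bounds = layer_boundaries(near=0.1, far=100.0, num_layers=8)
print([assign_layer(d, bounds) for d in (0.15, 1.0, 10.0, 90.0)])
```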
  • Patent number: 11818378
    Abstract: An image encoding/decoding method of the present invention constructs a merge candidate list of a current block, derives motion information of the current block on the basis of the merge candidate list and a merge candidate index, and performs inter prediction on the current block on the basis of the derived motion information. Encoding/decoding efficiency can be improved by adaptively determining the plurality of merge candidates in the merge candidate list on the basis of the position or size of a merge estimation region (MER) to which the current block belongs.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: November 14, 2023
    Assignee: DIGITALINSIGHTS INC.
    Inventor: Yong Jo Ahn
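    The merge-estimation-region behaviour the abstract above for patent 11818378 relies on can be sketched as a simple grid test: a spatial neighbour is usable as a merge candidate only if it lies outside the MER containing the current block. The square-MER simplification below is an assumption.
```python
def same_mer(pos_a, pos_b, mer_size):
    """Two positions share an MER when they fall in the same mer_size x mer_size grid cell."""
    return (pos_a[0] // mer_size == pos_b[0] // mer_size and
            pos_a[1] // mer_size == pos_b[1] // mer_size)

def filter_merge_candidates(current_pos, neighbour_positions, mer_size):
    """Keep only neighbours outside the current block's MER."""
    return [p for p in neighbour_positions if not same_mer(current_pos, p, mer_size)]

# Block at (68, 36) with a 64x64 MER: the neighbour at (65, 33) sits inside the same MER
# and is dropped, while (63, 36) lies in the MER to the left and survives.
print(filter_merge_candidates((68, 36), [(65, 33), (63, 36)], mer_size=64))
```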
  • Patent number: 11815688
    Abstract: A display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity may be selected using an array of shutters that selectively regulate the entry of image light into an eye. Each opened shutter in the array provides a different intra-pupil image, and the locations of the open shutters provide the desired amount of parallax disparity between the images. In some other embodiments, the images may be formed by an emissive micro-display.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: November 14, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Michael Anthony Klug
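    A loose sketch of the shutter-selection idea described above for patent 11815688: open the pair of shutter apertures whose separation across the pupil best matches the desired parallax disparity. The shutter positions, the units and the pairing rule are illustrative assumptions; none of the optics is modelled.
```python
def choose_shutter_pair(shutter_positions_mm, desired_disparity_mm):
    """Pick the pair of shutter apertures whose separation best matches the target disparity."""
    best_pair, best_err = None, float("inf")
    for i, a in enumerate(shutter_positions_mm):
        for b in shutter_positions_mm[i + 1:]:
            err = abs(abs(b - a) - desired_disparity_mm)
            if err < best_err:
                best_pair, best_err = (a, b), err
    return best_pair

# Shutters spaced across the pupil; a larger requested disparity opens a wider pair.
print(choose_shutter_pair([-1.5, -0.5, 0.5, 1.5], desired_disparity_mm=2.0))
```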
  • Patent number: 11816260
    Abstract: A virtual reality/augmented reality (VR/AR) wearable assembly is described herein. The VR/AR wearable assembly includes a wearable frame adapted to be worn over a patient's eyes, a pair of photosensor oculography (PSOG) assemblies mounted to the wearable frame with each PSOG assembly including a display and an eye tracking assembly, and a processor coupled to each PSOG assembly. The processor is configured to execute an algorithm to continuously calibrate the eye tracking assembly including the steps of rendering a sequence of images on a corresponding display and calibrating a corresponding eye tracking assembly by establishing a display coordinate system associated with the corresponding display, determining predicted fixation locations and estimated gaze position locations within the display coordinate system, and determining an offset gaze position value by mapping the estimated gaze position location to the predicted fixation location within the display coordinate system.
    Type: Grant
    Filed: April 4, 2023
    Date of Patent: November 14, 2023
    Assignee: Inseye Inc.
    Inventors: Piotr Krukowski, Michal Meina, Piotr Redmerski
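    The continuous-calibration step described above for patent 11816260 can be sketched as measuring the offset between the estimated gaze position and the predicted fixation location in the display coordinate system, then applying it as a correction to later samples. The names and coordinate values below are illustrative assumptions.
```python
def gaze_offset(predicted_fixation, estimated_gaze):
    """Offset = predicted fixation location minus estimated gaze position (display coords)."""
    return (predicted_fixation[0] - estimated_gaze[0],
            predicted_fixation[1] - estimated_gaze[1])

def corrected_gaze(raw_gaze, offset):
    """Shift a raw gaze estimate by the most recently measured calibration offset."""
    return (raw_gaze[0] + offset[0], raw_gaze[1] + offset[1])

offset = gaze_offset(predicted_fixation=(960, 540), estimated_gaze=(948, 551))
print(corrected_gaze((500, 300), offset))  # subsequent samples shifted by the measured offset
```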
  • Patent number: 11803248
    Abstract: The present disclosure provides a gesture operation method, apparatus, device and medium. The method includes: obtaining depth information of a user's hand; determining space coordinates of a virtual hand corresponding to the hand in a virtual space based on the depth information; binding trackballs to the virtual hand based on the space coordinates, which includes binding a palm ball to a palm position of the virtual hand and binding at least one fingertip ball to at least one fingertip position of the virtual hand, a volume of the palm ball being greater than that of the at least one fingertip ball; and performing a corresponding operation in the virtual space based on a straight-line distance between the at least one fingertip ball and the palm ball.
    Type: Grant
    Filed: December 23, 2022
    Date of Patent: October 31, 2023
    Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
    Inventors: Wenhao Cheng, Huipu Wang
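    A minimal sketch of the trackball distance test described above for patent 11803248: a palm ball and fingertip balls are bound to positions derived from the hand's depth information, and an operation fires when a fingertip ball comes within touching distance of the palm ball. The radii and threshold below are assumptions.
```python
import math

def detect_grab(palm_center, fingertip_centers, palm_radius, fingertip_radius):
    """Report which fingertips are 'pressed' against the palm ball (spheres overlapping)."""
    touch_dist = palm_radius + fingertip_radius
    return [i for i, tip in enumerate(fingertip_centers)
            if math.dist(palm_center, tip) <= touch_dist]  # straight-line distance

# The curled index fingertip triggers the grab; the extended thumb does not.
print(detect_grab((0, 0, 0), [(0.03, 0.01, 0.0), (0.15, 0.05, 0.0)],
                  palm_radius=0.04, fingertip_radius=0.01))
```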
  • Patent number: 11803237
    Abstract: Disclosed herein is the use of photosensor-oculography together with a video camera to implement a novel form of eye tracking, which can be important for many applications, such as augmented reality (AR) on smartglasses. In one embodiment, an eye tracking system includes a photosensor-oculography device (PSOG) that emits light and takes measurements of reflections of the light from an eye of a user, and a camera that captures images of the eye. A computer calculates values indicative of eye movement velocity (EMV) based on the measurements of the reflections obtained by the PSOG. These values are then used to determine how data is read from the camera, which can save power in some cases: the computer reads data from the camera at a higher bitrate when the values indicative of the EMV are below a threshold than when they are above the threshold.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: October 31, 2023
    Assignee: Facense Ltd.
    Inventors: Arie Tzvieli, Ari M Frank, Gil Thieberger
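    The power-saving rule described above for patent 11803237 can be sketched as: estimate the eye movement velocity (EMV) from the PSOG reflection measurements, then read the camera at the higher bitrate only while the EMV is below a threshold. The velocity proxy and the bitrate values below are assumptions.
```python
def eye_movement_velocity(psog_samples, dt):
    """Crude EMV proxy: mean absolute change of the reflection measurements per second."""
    diffs = [abs(b - a) for a, b in zip(psog_samples, psog_samples[1:])]
    return (sum(diffs) / len(diffs)) / dt if diffs else 0.0

def camera_bitrate(emv, threshold, high_kbps=8000, low_kbps=1000):
    """Read the camera at the higher bitrate when the eye is steady (EMV below threshold)."""
    return high_kbps if emv < threshold else low_kbps

emv = eye_movement_velocity([0.51, 0.52, 0.50, 0.53], dt=0.001)
print(emv, camera_bitrate(emv, threshold=50.0))
```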
  • Patent number: 11797259
    Abstract: An imaging unit includes a plurality of photoelectric conversion elements, a processing unit, and a display unit. The processing unit processes a signal transmitted from the imaging unit. The display unit displays an image based on the signal transmitted from the processing unit. The imaging unit acquires first image information at a first time. The processing unit generates first prediction image information at a second time later than the first time based on the first image information. Moreover, the display unit displays an image based on the first prediction image information.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: October 24, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yosuke Nishide, Takeshi Ichikawa, Akira Okita, Toshiki Tsuboi, Nao Nakatsuji, Hiroshi Yoshioka
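    A loose sketch of generating a prediction image for a later time, as the abstract above for patent 11797259 describes. Here the prediction simply shifts the first image by an estimated motion vector scaled to the time gap; the motion estimation itself is outside the illustration, and the shift model is an assumption.
```python
def predict_image(first_image, motion_px_per_s, t1, t2, fill=0):
    """Shift a 2D sample array by motion * (t2 - t1); uncovered samples get a fill value."""
    dy = round(motion_px_per_s[0] * (t2 - t1))
    dx = round(motion_px_per_s[1] * (t2 - t1))
    h, w = len(first_image), len(first_image[0])
    return [[first_image[y - dy][x - dx]
             if 0 <= y - dy < h and 0 <= x - dx < w else fill
             for x in range(w)]
            for y in range(h)]

frame = [[0, 0, 9], [0, 0, 0], [0, 0, 0]]  # image information acquired at the first time
print(predict_image(frame, motion_px_per_s=(100, -100), t1=0.00, t2=0.01))
```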
  • Patent number: 11800082
    Abstract: A virtual 3D display for a motor vehicle includes a substrate and a flexible display positioned on the substrate. The flexible display has two foldable sections and a main section. The main section provides a shared viewing area, and the two foldable sections provide first and second independent viewing areas, respectively.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: October 24, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Ke Liu, Joseph F. Szczerba
  • Patent number: 11785202
    Abstract: Provided are VR image processing method and apparatus. The method includes: rendering left-eye and right-eye viewpoint regions based on left-eye and right-eye view angles respectively, to obtain left-eye and right-eye viewpoint images; determining a candidate region based on positions of the left-eye and right-eye view angles, and selecting a point in the candidate region as a peripheral image view angle; rendering left-eye and right-eye viewpoint peripheral regions based on the peripheral image view angle, to obtain a same viewpoint peripheral image; splicing the viewpoint peripheral image with the left-eye viewpoint image and with the right-eye viewpoint image to obtain a left-eye complete image and a right-eye complete image; and reducing, when a displacement of a left-eye viewpoint or a right-eye viewpoint within a preset time period is less than a preset displacement, an area of a corresponding viewpoint region and increasing an area of a corresponding viewpoint peripheral region.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: October 10, 2023
    Assignee: GOERTEK INC.
    Inventors: Xiangjun Zhang, Bin Jiang, Xiaoyu Chi
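    The region-resizing rule at the end of the abstract above for patent 11785202 can be sketched as: when the viewpoint's displacement over the preset period is below the preset displacement, shrink the per-eye viewpoint region and grow the shared peripheral region. The displacement metric, the step size and the area bound below are assumptions.
```python
def adjust_regions(displacement, preset_displacement, viewpoint_area, peripheral_area,
                   step=0.05, min_viewpoint_area=0.2):
    """Return updated (viewpoint_area, peripheral_area) fractions of the rendered frame."""
    if displacement < preset_displacement and viewpoint_area - step >= min_viewpoint_area:
        viewpoint_area -= step   # viewpoint is nearly still: render less at full quality
        peripheral_area += step  # hand the saved area to the shared peripheral image
    return viewpoint_area, peripheral_area

print(adjust_regions(displacement=0.8, preset_displacement=2.0,
                     viewpoint_area=0.5, peripheral_area=0.5))
```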
  • Patent number: 11785211
    Abstract: Technology for improving coding efficiency by performing a block split suitable for picture coding and decoding is provided.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: October 10, 2023
    Assignee: JVCKENWOOD Corporation
    Inventors: Hideki Takehara, Hiroya Nakamura, Satoru Sakazume, Shigeru Fukushima, Toru Kumakura, Hiroyuki Kurashige
  • Patent number: 11778160
    Abstract: Examples are disclosed that relate to calibration data reflecting a determined alignment of sensors on a wearable display device. One example provides a wearable display device comprising a frame, a first sensor and a second sensor, one or more displays, a logic system, and a storage system. The storage system comprises calibration data related to a determined alignment of the sensors with the frame in a bent configuration, and instructions executable by the logic system. The instructions are executable to obtain first sensor data and second sensor data respectively from the first and second sensors, determine a distance from the wearable display device to a feature based at least upon the first and second sensor data using the calibration data, obtain a stereo image to display based upon the distance from the wearable display device to the feature, and output the stereo image via the displays.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: October 3, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yinan Wu, Navid Poulad, Dapeng Liu, Trevor Grant Boswell, Rayna DeMaster-Smith, Roy Joseph Riccomini, Michael Edward Samples, Yuenkeen Cheong
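    A hedged sketch of how stored calibration data could feed the distance determination described above for patent 11778160: a bias reflecting the bent-frame alignment is removed from the measured disparity before a pinhole-stereo triangulation. The pinhole model and the bias term are assumptions, not the device's actual calibration scheme.
```python
def distance_to_feature(disparity_px, focal_px, baseline_m, disparity_bias_px=0.0):
    """Pinhole stereo: Z = f * B / d, with a calibration bias removed from the disparity."""
    corrected = disparity_px - disparity_bias_px  # compensates the bent-frame misalignment
    if corrected <= 0:
        raise ValueError("disparity must remain positive after calibration correction")
    return focal_px * baseline_m / corrected

# With a 1.5 px bias from the stored calibration, the feature is roughly 1.9 m away.
print(round(distance_to_feature(disparity_px=40.0, focal_px=730.0,
                                baseline_m=0.10, disparity_bias_px=1.5), 2))
```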
  • Patent number: 11758112
    Abstract: The aim is to make it possible to easily set virtual viewpoint information relating to playback of a virtual viewpoint image. A key frame is generated in which parameters representing a position of a virtual viewpoint and an orientation of the virtual viewpoint are associated with a time in a period during which image capturing is performed by a plurality of imaging devices. Then, a playback direction of a plurality of key frames is determined. Then, based on the plurality of key frames and the playback direction, virtual viewpoint information representing the transition of the virtual viewpoint is generated.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: September 12, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yoshihiko Minato
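    The key-frame workflow described above for patent 11758112 can be sketched as ordering the key frames according to the playback direction and interpolating the virtual viewpoint between consecutive key frames. Linear interpolation and the key-frame structure below are illustrative assumptions.
```python
def lerp(a, b, t):
    """Linear interpolation between two equal-length tuples."""
    return tuple(x + t * (y - x) for x, y in zip(a, b))

def viewpoint_path(keyframes, steps_per_segment, reverse=False):
    """keyframes: list of (position, orientation) tuples; returns the viewpoint transition."""
    ordered = list(reversed(keyframes)) if reverse else list(keyframes)
    path = []
    for (p0, o0), (p1, o1) in zip(ordered, ordered[1:]):
        for s in range(steps_per_segment):
            t = s / steps_per_segment
            path.append((lerp(p0, p1, t), lerp(o0, o1, t)))
    path.append(ordered[-1])  # end exactly on the last key frame
    return path

keys = [((0, 0, 0), (0, 0, 0)), ((10, 0, 5), (0, 90, 0))]
print(len(viewpoint_path(keys, steps_per_segment=4, reverse=True)))  # 5 interpolated states
```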
  • Patent number: 11758114
    Abstract: A virtual 3D display for a motor vehicle includes a substrate and a flexible display positioned on the substrate. The flexible display has two foldable sections and a main section. The main section provides a shared viewing area, and the two foldable sections provide first and second independent viewing areas, respectively.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: September 12, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Ke Liu, Joseph F. Szczerba
  • Patent number: 11758169
    Abstract: A method for video encoding includes determining whether a part of a current block is outside a current picture that is being encoded, and determining whether one of a binary split, a ternary split, or a quaternary split is allowed for the current block in response to the part of the current block being outside the current picture. The method also includes, in response to none of the binary split, the ternary split, and the quaternary split being allowed, determining whether a partition from an implicit binary split is across a virtual pipeline data unit boundary, and applying the implicit binary split to the current block in response to the partition from the implicit binary split not being across the virtual pipeline data unit boundary.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: September 12, 2023
    Assignee: Tencent America LLC
    Inventors: Guichun Li, Xiang Li, Shan Liu
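    A heavily simplified sketch of the decision flow in the abstract above for patent 11758169: check whether the block extends beyond the current picture, whether any explicit split is allowed, and whether an implicit binary split would cross a virtual pipeline data unit (VPDU) boundary. The boundary test and the parameters are assumptions; the actual partitioning rules are far richer.
```python
def crosses_vpdu_boundary(x, y, width, height, vpdu=64):
    """True when the block spans more than one VPDU grid cell in either direction."""
    return (x // vpdu != (x + width - 1) // vpdu) or (y // vpdu != (y + height - 1) // vpdu)

def choose_split(x, y, width, height, pic_w, pic_h, allowed_splits):
    """Pick a split for a block that may stick out of the picture being encoded."""
    outside = (x + width > pic_w) or (y + height > pic_h)
    if not outside:
        return "no_split_needed"
    for split in ("binary", "ternary", "quaternary"):
        if split in allowed_splits:
            return split  # an explicit split handles the overshoot
    # None allowed: fall back to an implicit binary split, but only if its partition
    # does not straddle a VPDU boundary (approximated here by testing the first half-block).
    if not crosses_vpdu_boundary(x, y, width // 2, height, vpdu=64):
        return "implicit_binary"
    return "defer"

print(choose_split(x=96, y=0, width=64, height=64, pic_w=120, pic_h=64, allowed_splits=set()))
```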