Patents Examined by Tat C Chio
  • Patent number: 11108972
    Abstract: A method and system are provided for the creation, management, and distribution of two-dimensional video content that appears to a viewer to have a third dimension of depth. The system includes a camera array with multiple cameras at different positions, which coordinates the off-center rotating motion of apertures that are part of the diaphragms in the camera lens systems. The resulting images from each camera in the array are stitched together to create a larger content field, of which only a portion is displayed at any one time.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: August 31, 2021
    Assignee: HSNi, LLC
    Inventor: John McDevitt
  • Patent number: 11109070
    Abstract: An example device includes a memory configured to store video data and one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to determine whether a maximum number of merge candidates for a slice of the video data is equal to a first value. The one or more processors are configured to infer a value of a first syntax element to be equal to a second value based at least in part on the maximum number of merge candidates for the slice being equal to the first value, the first syntax element being indicative of a difference between the maximum number of merge candidates and a maximum number of merge candidates of a non-rectangular coding mode. The one or more processors are also configured to decode the slice based on the maximum number of merge candidates and the first syntax element.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: August 31, 2021
    Assignee: Qualcomm Incorporated
    Inventors: Yao-Jen Chang, Vadim Seregin, Muhammed Zeyd Coban, Adarsh Krishnan Ramasubramonian, Marta Karczewicz
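The decoder-side inference step the Qualcomm abstract describes can be sketched as a simple conditional rule: when the slice's maximum number of merge candidates equals a trigger value, the syntax element is not parsed from the bitstream but inferred. The function name, trigger value, and inferred value below are illustrative placeholders, not the values defined in the patent or in any video coding specification.

```python
def parse_or_infer_syntax_element(parsed_value, max_merge_cand,
                                  trigger_value=1, inferred_value=0):
    """Return the value of a syntax element, inferring it instead of
    using the parsed value when the slice's maximum number of merge
    candidates equals a trigger value (all names/values illustrative)."""
    if max_merge_cand == trigger_value:
        # Inference path: the element is not read from the bitstream.
        return inferred_value
    # Otherwise the normally parsed value is used.
    return parsed_value
```

In a real decoder the inference would also skip the bitstream read entirely; here the parsed value is simply ignored for brevity.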
  • Patent number: 11102484
    Abstract: A video coding mechanism is disclosed. The mechanism includes selecting a split mechanism to split a coding unit (CU) into sub-CUs for application of one or more transform units (TUs), the selection of the split mechanism based on comparing a CU width to a max TU width and comparing a CU height to a max TU height. The selected split mechanism is applied to the CU to obtain sub-CUs. A residual of one of the sub-CUs is determined. The residual includes a difference between sample values for the sub-CU and prediction samples for the sub-CU. The TUs are applied to transform the residual of the CU based on results of the selected split mechanism. A transformed residual for the CU is encoded into a bitstream.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: August 24, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jiali Fu, Yin Zhao, Shan Gao, Huanbang Chen, Haitao Yang, Jianle Chen
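The split selection in the Huawei abstract compares the CU width to the maximum TU width and the CU height to the maximum TU height. A minimal sketch of such a rule is shown below: it recursively halves whichever dimension exceeds the maximum until every sub-CU fits. This halving strategy is an illustrative simplification, not the patented mechanism itself.

```python
def split_cu_for_transform(cu_w, cu_h, max_tu_w, max_tu_h):
    """Recursively split a CU into sub-CUs no larger than the maximum
    transform-unit size, comparing width to max TU width and height to
    max TU height (illustrative sketch). Returns (width, height) pairs."""
    if cu_w <= max_tu_w and cu_h <= max_tu_h:
        return [(cu_w, cu_h)]
    sub_cus = []
    if cu_w > max_tu_w:
        # Vertical split: halve the width.
        for half in (cu_w // 2, cu_w - cu_w // 2):
            sub_cus += split_cu_for_transform(half, cu_h, max_tu_w, max_tu_h)
    else:
        # Horizontal split: halve the height.
        for half in (cu_h // 2, cu_h - cu_h // 2):
            sub_cus += split_cu_for_transform(cu_w, half, max_tu_w, max_tu_h)
    return sub_cus
```

For example, a 128x128 CU with a 64x64 maximum TU size yields four 64x64 sub-CUs.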
  • Patent number: 11083537
    Abstract: A stereoscopic camera with fluorescence visualization is disclosed. An example stereoscopic camera includes a visible light source, a near-infrared light source, and a near-ultraviolet light source. The stereoscopic camera also includes a light filter assembly having left and right filter magazines positioned respectively along left and right optical paths and configured to selectively enable certain wavelengths of light to pass through. Each of the left and right filter magazines includes an infrared cut filter, a near-ultraviolet cut filter, and a near-infrared bandpass filter. A controller of the camera is configured to provide for a visible light mode, an indocyanine green (“ICG”) fluorescence mode, and a 5-aminolevulinic acid (“ALA”) fluorescence mode by synchronizing the activation of the light sources with the selection of the filters. A processor of the camera combines image data from the different modes to enable fluorescence emission light to be superimposed on visible light stereoscopic images.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: August 10, 2021
    Assignee: Alcon Inc.
    Inventors: Ashok Burton Tripathi, Maximiliano Ramirez Luna, George Polchin, Thomas Riederer, Alan Fridman
  • Patent number: 11079584
    Abstract: A method for use in optical imaging, a system for use in optical imaging, and an optical system are provided. The optical system includes a controller arranged to determine an intermediate position of a sample upon detection of the completion of a first movement of the sample, and to derive an optimal position associated with the intermediate position; and a manipulator arranged to move the sample from the intermediate position to the optimal position with a second movement; wherein the sample is arranged to be observed using an optical instrument in the optimal position.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: August 3, 2021
    Assignee: City University of Hong Kong
    Inventors: Yajing Shen, Haojian Lu
  • Patent number: 11074470
    Abstract: A computer-implemented system and method for automatically improving the gathering of data include processing first input data acquired by a data gathering device to generate a first prediction and a corresponding confidence score. If the confidence score is below a threshold, a set of at least one action is generated and applied to the data gathering device, such as changing the acquisition settings of the device. Second input data, acquired using the data gathering device with the action applied, is further received and processed to generate a second prediction and corresponding confidence score. The set of at least one action to be applied to the device is further modified based on the difference between the first and second confidence scores. The generation of the at least one action, such as changing the acquisition settings, can be based on an acquisition-settings adjustment model, and the modification of the action can include updating the model by machine learning (e.g., reinforcement learning).
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: July 27, 2021
    Assignee: LA SOCIÉTÉ DATAPERFORMERS INC.
    Inventors: Mehdi Merai, Ali Elawad
  • Patent number: 11068712
    Abstract: Apparatuses, methods, and systems are presented for sensing scene-based occurrences. Such an apparatus may comprise a vision sensor system comprising a first processing unit and dedicated computer vision (CV) computation hardware configured to receive sensor data from at least one sensor array comprising a plurality of sensor pixels and capable of computing one or more CV features using readings from neighboring sensor pixels. The vision sensor system may be configured to send an event to be received by a second processing unit in response to processing of the one or more computed CV features by the first processing unit. The event may indicate possible presence of one or more irises within a scene.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: July 20, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Evgeni Gousev, Alok Govil, Jacek Maitan, Venkat Rangan, Edwin Chongwoo Park, Jeffery Henckels
  • Patent number: 11070772
    Abstract: An endoscope image-capturing device includes: a first case, the inside of which is sealed; an image sensor arranged inside the first case; an electro-optic conversion element arranged outside the first case and configured to convert an image signal output from the image sensor into an optical signal; and a sealing member sealing the electro-optic conversion element.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 20, 2021
    Assignee: SONY OLYMPUS MEDICAL SOLUTIONS INC.
    Inventor: Kei Tomatsu
  • Patent number: 11070833
    Abstract: Encoding video data comprises receiving an image sequence comprising first and second input image frames, adding an overlay, thereby generating first and second generated image frames, and encoding a video stream containing output image frames with and without overlay. The first input image frame is encoded as an intra-frame to form a first output image frame. The second input image frame is encoded as an inter-frame with reference to the first output image frame to form a second output image frame. The generated image frames are encoded as inter-frames with reference to the first and second output image frames to form first and second overlaid output image frames. A first part of the second generated image frame is encoded with reference to the first overlaid output image frame, and a second part of the second generated image frame is encoded with reference to the second output image frame.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: July 20, 2021
    Assignee: Axis AB
    Inventor: Alexander Toresson
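The frame reference structure the Axis abstract describes can be written out as a small table of encoded frames and their reference pictures. The frame names below are illustrative: an intra first frame, an inter-coded second frame, and two overlaid frames, where the second overlaid frame references the first overlaid frame for one part and the second non-overlaid output frame for the other part.

```python
def overlay_reference_structure():
    """Encode type and reference pictures for each output frame in the
    scheme the abstract describes (frame names are illustrative)."""
    return {
        'I1': {'type': 'intra', 'refs': []},         # 1st input frame
        'P2': {'type': 'inter', 'refs': ['I1']},     # 2nd input frame
        'O1': {'type': 'inter', 'refs': ['I1']},     # 1st frame with overlay
        # 2nd frame with overlay: part 1 references O1, part 2 references P2.
        'O2': {'type': 'inter', 'refs': ['O1', 'P2']},
    }
```

Referencing the already-overlaid frame for the overlay region avoids re-encoding the overlay, while the rest of the frame tracks the non-overlaid stream.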
  • Patent number: 11064178
    Abstract: A monocular visual odometry system includes a stacked architecture. The stacked architecture receives camera data from a monocular camera and generates a depth map. Additionally, the system includes a deep virtual stereo odometry module that receives the camera data from the monocular camera and the depth map from the stacked architecture. The calculation module initializes a keyframe of the camera data using the depth map and determines a photometric error based on a set of observation points extracted from the keyframe and a set of reference points extracted from the camera data. The calculation module determines a virtual stereo photometric term using the depth map. The calculation module also optimizes a total energy function that includes the photometric error and the virtual stereo photometric term. Using the total energy function, the calculation module generates a positional parameter of the system and provides the positional parameter to an autonomous system.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: July 13, 2021
    Assignee: Artisense Corporation
    Inventors: Nan Yang, Rui Wang
  • Patent number: 11058513
    Abstract: A stereoscopic visualization camera and platform are disclosed. An example stereoscopic visualization camera includes a main objective assembly and left and right lens sets defining respective parallel left and right optical paths for light that is received by the main objective assembly from a target surgical site. Each of the left and right lens sets includes a front lens, first and second zoom lenses configured to be movable along the optical path, and a lens barrel configured to receive the light from the second zoom lens. The example stereoscopic visualization camera also includes left and right image sensors configured to convert the light after passing through the lens barrel into image data that is indicative of the received light. The example stereoscopic visualization camera further includes a processor configured to convert the image data into stereoscopic video signals or video data for display on a display monitor.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: July 13, 2021
    Assignee: Alcon, Inc.
    Inventors: Maximiliano Ramirez Luna, Michael Weissman, Thomas Paul Riederer, George Charles Polchin, Ashok Burton Tripathi
  • Patent number: 11054637
    Abstract: A compact adaptive optics system for long-range horizontal-path imaging that improves degraded images. The system uses a filter that corresponds to the three colors in a typical color detector element, one or more optic elements, a deformable mirror, and a detector. Focus errors due to turbulence in the image recorded by the detector element show up as image shifts in the three distinct color images. The shifts, and statistics of the shifts, between these simultaneous images are used to create control signals for the deformable mirror, resulting in a compact adaptive optics system for horizontal paths without the need for a point source located at the distant scene being imaged. Analysis of the relative pixel shifts in various regions of the image provides third-order statistics revealing tip/tilt and additional Zernike modes that are used to control the deformable mirror without the need for a guide star/point source.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: July 6, 2021
    Assignee: Mission Support and Test Services, LLC
    Inventors: Mary D. O'Neill, David Terry
  • Patent number: 11050934
    Abstract: A method for displaying video images includes providing a plurality of cameras and an ECU at the vehicle. The cameras are in communication with one another via a vehicle network and the ECU is in communication with the cameras via respective data lines. During a driving maneuver of the vehicle, one of the cameras is designated as and functions as a master camera and other cameras are designated as and function as slave cameras. During the driving maneuver, automatic control of exposure, gain and white balance parameters of the designated master camera is enabled, and the exposure, gain and white balance parameters of the designated master camera are communicated to the designated slave cameras via the vehicle network. A composite image is displayed that provides bird's eye view video images derived from video image data captured by at least the designated master camera and the designated slave cameras.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: June 29, 2021
    Assignee: MAGNA ELECTRONICS INC.
    Inventors: Yuesheng Lu, Richard D. Shriner
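The master-to-slave parameter broadcast in the Magna abstract can be sketched as copying the master camera's auto-controlled parameters to each slave. The `Camera` class and its field names are illustrative stand-ins, not the patented implementation or any real vehicle-network API.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Minimal stand-in for a vehicle camera (illustrative only)."""
    name: str
    exposure: float = 1.0
    gain: float = 1.0
    white_balance: float = 5000.0  # color temperature in kelvin

def sync_from_master(master, slaves):
    """Copy the master camera's auto-controlled exposure, gain, and
    white-balance parameters to each slave camera, mimicking the
    broadcast over the vehicle network described in the abstract."""
    for cam in slaves:
        cam.exposure = master.exposure
        cam.gain = master.gain
        cam.white_balance = master.white_balance
```

Synchronizing these parameters keeps brightness and color consistent across the cameras, so the stitched bird's eye view composite has no visible seams between camera regions.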
  • Patent number: 11039117
    Abstract: A dual lens imaging module suitable for an electronic device is provided. The dual lens imaging module includes a first lens, a second lens, and a moving module. The moving module is connected to the second lens and is adapted to move or tilt the second lens, wherein the first lens is a lens having an autofocus function, and a working distance of the dual lens imaging module is adapted to be changed according to a spacing of the first lens and the second lens.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: June 15, 2021
    Assignee: HTC Corporation
    Inventors: Yu-Han Chen, Chung-Hsiang Chang, Hong-Kai Huang
  • Patent number: 11032549
    Abstract: A method includes receiving transform coefficients corresponding to a scaled video input signal, the scaled video input signal including a plurality of spatial layers that include a base layer. The method also includes determining a spatial rate factor based on a sample of frames from the scaled video input signal. The spatial rate factor defines a factor for bit rate allocation at each spatial layer of an encoded bit stream formed from the scaled video input signal. The spatial rate factor is represented by a difference between a rate of bits per transform coefficient of the base layer and an average rate of bits per transform coefficient. The method also includes reducing a distortion for the plurality of spatial layers by allocating a bit rate to each spatial layer based on the spatial rate factor and the sample of frames.
    Type: Grant
    Filed: June 23, 2019
    Date of Patent: June 8, 2021
    Assignee: Google LLC
    Inventors: Michael Horowitz, Rasmus Brandt
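One plausible simplified reading of the per-layer bit allocation in the Google abstract is a single scaling factor applied layer by layer above the base layer. The function below is an illustrative sketch under that assumption, not the patented rate-allocation rule, which derives the factor from bits per transform coefficient.

```python
def allocate_layer_bitrates(total_bitrate, num_layers, spatial_rate_factor):
    """Split a total bitrate across spatial layers using one scaling
    factor (illustrative). Layer 0 is the base layer; each higher layer
    receives spatial_rate_factor times the rate of the layer below it."""
    weights = [spatial_rate_factor ** i for i in range(num_layers)]
    total = sum(weights)
    return [total_bitrate * w / total for w in weights]
```

For example, with three layers and a factor of 2.0, a 700 kbps budget splits into 100, 200, and 400 kbps.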
  • Patent number: 11032535
    Abstract: A method includes generating a set of three-dimensional (3D) virtual reality videos by stitching together image frames of one or more environments captured by a camera array. The method further includes generating graphical data for displaying a virtual reality user interface that includes (1) selectable icons that correspond to the set of 3D virtual reality videos and (2) an object. The method further includes receiving, from a peripheral device, a selection of the object by a user of the peripheral device in the virtual reality user interface and an indication of a desire to move the object to be positioned in front of a first 3D virtual reality video from the set of virtual reality videos. The method further includes providing a first 3D preview of the first 3D virtual reality video within the object.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: June 8, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Rory Lutter, Andrew Walkingshaw
  • Patent number: 11032475
    Abstract: Disclosed is a method to passively measure and calculate the strength of turbulence via the index of refraction structure constant Cn2 from video imagery gathered by an imaging device, such as a video camera. Processing may occur with any type of computing device utilizing a processor executing machine-executable code stored in memory. This method significantly simplifies instrumentation requirements, reduces cost, and provides rapid data output. This method combines an angle-of-arrival methodology, which provides scale factors, with a new spatial/temporal frequency-domain method. As part of the development process, video imagery from high-speed cameras was collected and analyzed. The data was decimated to video rates such that statistics could be computed and used to confirm that this passive method accurately characterizes the atmospheric turbulence.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: June 8, 2021
    Assignee: Mission Support and Test Services, LLC
    Inventors: Mary Morabito O'Neill, David Terry
  • Patent number: 11032536
    Abstract: A method includes generating a three-dimensional (3D) virtual reality video by stitching together image frames of an environment captured by a camera array. The method further includes generating graphical data for displaying a virtual reality user interface that includes the 3D virtual reality video. The method further includes determining, based on movement of a peripheral device, that a user moves a hand to be located in front of the 3D virtual reality video in the user interface and grabs and moves the 3D virtual reality video from a first location to inside an object. The method further includes displaying the object with a preview of the 3D virtual reality video inside the object.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: June 8, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Rory Lutter, Andrew Walkingshaw
  • Patent number: 11025959
    Abstract: A method includes receiving head-tracking data that describe one or more positions of people while the people are viewing a three-dimensional video. The method further includes generating a probabilistic model of the one or more positions of the people based on the head-tracking data, wherein the probabilistic model identifies a probability of a viewer looking in a particular direction as a function of time. The method further includes generating video segments from the three-dimensional video. The method further includes, for each of the video segments: determining a directional encoding format that projects latitudes and longitudes of locations of a surface of a sphere onto locations on a plane, determining a cost function that identifies a region of interest on the plane based on the probabilistic model, and generating optimal segment parameters that minimize a sum over position for the region of interest.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: June 1, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Andrew Walkingshaw, Arthur van Hoff, Daniel Kopeinigg
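A minimal stand-in for the probabilistic model in the Verizon abstract is a time-binned histogram of viewing directions built from head-tracking samples; normalizing each time bin gives a probability of looking in a particular direction as a function of time. The binning scheme below (fixed yaw bins, fixed time windows) is an assumption for illustration, not the patented model.

```python
from collections import Counter

def direction_probabilities(samples, yaw_bins=8, time_window=1.0):
    """Build a time-binned direction histogram from head-tracking
    samples given as (timestamp_seconds, yaw_degrees) pairs, then
    normalize each time bin into probabilities (illustrative).
    Returns {time_bin: {yaw_bin: probability}}."""
    counts = {}
    for t, yaw in samples:
        t_bin = int(t // time_window)
        y_bin = int((yaw % 360.0) / (360.0 / yaw_bins))
        counts.setdefault(t_bin, Counter())[y_bin] += 1
    return {t_bin: {y: n / sum(c.values()) for y, n in c.items()}
            for t_bin, c in counts.items()}
```

Such a model concentrates encoding quality where viewers actually look, which is what the cost function over the region of interest exploits.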
  • Patent number: 11025911
    Abstract: An encoding method for encoding an image using an inter-picture prediction includes determining a prediction block on which the inter-picture prediction is to be performed, partitioning the prediction block into a plurality of transform blocks by a partitioning method that partitions the prediction block, so that boundaries of the plurality of transform blocks are symmetrical with respect to a horizontal line passing a center of the prediction block and are symmetrical with respect to a vertical line passing the center of the prediction block, the plurality of transform blocks being rectangular, and determining, for each of the plurality of transform blocks, an orthogonal transformation type used for each of a vertical direction and a horizontal direction of a given transform block of the plurality of transform blocks based on a positional relation between the given transform block and the center of the prediction block.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: June 1, 2021
    Assignee: Socionext Inc.
    Inventor: Yoichi Azukizawa
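The symmetric partitioning and position-dependent transform selection in the Socionext abstract can be sketched as follows: the prediction block is split into four rectangular transform blocks symmetric about its center, and each block's vertical and horizontal transform types are chosen from its position relative to that center. The particular DST/DCT assignment shown is only a plausible example of a position-dependent rule, not the one claimed.

```python
def partition_symmetric(pred_w, pred_h):
    """Partition a prediction block into four rectangular transform
    blocks whose boundaries are symmetric with respect to the
    horizontal and vertical lines through the block's center
    (illustrative sketch). Returns (x, y, w, h) tuples."""
    half_w, half_h = pred_w // 2, pred_h // 2
    return [(x, y, half_w, half_h)
            for y in (0, half_h) for x in (0, half_w)]

def transform_types(block, pred_w, pred_h):
    """Pick a (vertical, horizontal) transform-type pair for one
    transform block from its position relative to the center of the
    prediction block; the DST/DCT choice is an illustrative example
    of a position-dependent rule."""
    x, y, w, h = block
    horizontal = 'DST' if x + w <= pred_w // 2 else 'DCT'
    vertical = 'DST' if y + h <= pred_h // 2 else 'DCT'
    return vertical, horizontal
```

Position-dependent transform choice exploits the fact that prediction residuals tend to grow toward the block boundaries farthest from the predictor.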