Stereoscopic Patents (Class 348/42)
  • Patent number: 12382080
    Abstract: A method for coding views simultaneously representing a 3D scene from different positions or different view angles, implemented by a coding device. The method includes, for a depth component of at least one view: partitioning the depth component into at least one block; obtaining depth information of the at least one block from texture data of a texture component of at least one of the views; obtaining at least one depth estimation parameter from the depth information; and coding the at least one depth estimation parameter, the depth information of the at least one block not being coded.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: August 5, 2025
    Assignee: ORANGE
    Inventors: Félix Henry, Patrick Garus, Gordon Clare
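The idea of recovering block depth from texture data, so that only estimation parameters need coding, can be sketched with simple sum-of-absolute-differences block matching between two texture views. This is an illustrative stand-in for the patent's estimator; the function name, views, and search range are assumptions, not from the patent.

```python
import numpy as np

def block_disparity(left_row, right_row, x, block, search_range):
    """Best horizontal shift of a block, by sum of absolute differences."""
    ref = left_row[x:x + block]
    best, best_cost = 0, float("inf")
    for d in range(search_range + 1):
        if x - d < 0:
            break  # candidate block would fall outside the second view
        cost = np.abs(ref - right_row[x - d:x - d + block]).sum()
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# Toy texture rows from two views of the same scene, shifted by 2 pixels.
left = np.array([0., 0., 5., 9., 5., 0., 0., 0.])
right = np.array([5., 9., 5., 0., 0., 0., 0., 0.])
d = block_disparity(left, right, x=2, block=3, search_range=4)
```

Because the decoder can rerun the same matching, an encoder following this scheme would transmit only parameters such as the search range, not the per-block depth values.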
  • Patent number: 12373096
    Abstract: A gesture-based text entry user interface for an Augmented Reality (AR) device is provided. The AR system detects a start text entry gesture made by a user of the AR system, generates a virtual keyboard user interface including a virtual keyboard having a plurality of virtual keys, and provides the virtual keyboard user interface to the user. The AR system determines, using one or more cameras, the user's selection of one or more of the virtual keys and generates entered text data based on the selected virtual keys. The AR system provides the entered text data to the user using a display of the AR system.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: July 29, 2025
    Assignee: Snap Inc.
    Inventors: Sharon Moll, Dawei Zhang
  • Patent number: 12361575
    Abstract: Systems and methods are provided for depth estimation from monocular images using a depth model that takes sparse range sensor data and range sensor uncertainty as inputs. According to some embodiments, the methods and systems comprise receiving an image captured by an image sensor, where the image represents a scene of an environment. The methods and systems also comprise deriving a point cloud representative of the scene of the environment from range sensor data, and deriving range sensor uncertainty from the range sensor data. A depth map can then be derived for the image based on the point cloud and the range sensor uncertainty as one or more inputs into a depth model.
    Type: Grant
    Filed: June 7, 2022
    Date of Patent: July 15, 2025
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Vitor Guizilini, Jie Li, Charles Christopher Ochoa
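One plausible way to feed sparse range data plus uncertainty into a depth network, as the abstract describes, is to rasterize both into extra input channels alongside the image. This is a sketch under that assumption; the channel layout and single-channel image are illustrative, not Toyota's actual architecture.

```python
import numpy as np

def build_depth_input(image, sparse_points, shape):
    """Stack image, sparse depth, and uncertainty into model input channels.

    sparse_points: iterable of (row, col, depth, uncertainty) from the
    range sensor; pixels with no return stay zero in both channels.
    """
    depth_ch = np.zeros(shape)
    uncert_ch = np.zeros(shape)
    for r, c, z, u in sparse_points:
        depth_ch[r, c] = z
        uncert_ch[r, c] = u
    return np.stack([image, depth_ch, uncert_ch])

img = np.ones((4, 4))  # stand-in for a grayscale camera image
inp = build_depth_input(img, [(1, 2, 7.5, 0.2)], (4, 4))
```

With this layout, the network can learn to trust sparse hints less where the uncertainty channel is high.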
  • Patent number: 12354293
    Abstract: A calibration method for a distance measurement device that is mounted inside a moving body, images outside of the moving body without intervention of a transparent body, and calculates a distance to an object by using disparity between images captured from at least two points of view, the method including: a first process of capturing a first-image-for-calibration and a second-image-for-calibration at different distances between a first-object-for-calibration and the distance measurement device without intervention of the transparent body and calculating correction information for converting disparity information calculated from image information of each of the first-image-for-calibration and the second-image-for-calibration into distance information; and a second process of capturing a third-image-for-calibration from a second-object-for-calibration that is located at least at one distance via the transparent body and modifying the correction information calculated in the first process on the basis of image
    Type: Grant
    Filed: June 13, 2023
    Date of Patent: July 8, 2025
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Takashi Shiozawa
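The two-process calibration the abstract describes can be sketched with a simple model in which disparity d relates to distance Z as d = a/Z + b: two captures at known distances without the transparent body fix (a, b), and one capture through it adjusts the offset. The model and all numbers are illustrative assumptions, not Canon's method.

```python
def fit_disparity_model(samples):
    """First process: fit d = a/Z + b from two (distance, disparity) captures."""
    (z1, d1), (z2, d2) = samples
    a = (d1 - d2) / (1.0 / z1 - 1.0 / z2)
    b = d1 - a / z1
    return a, b

def disparity_to_distance(d, a, b):
    return a / (d - b)

def refine_through_glass(a, b, z_known, d_measured):
    """Second process: a capture through the transparent body shifts the
    disparity offset; adjust b so the known distance is reproduced."""
    return a, d_measured - a / z_known

# First process: targets at 2 m and 5 m imaged without the windshield.
a, b = fit_disparity_model([(2.0, 60.0), (5.0, 30.0)])
# Second process: a target at 3 m seen through the glass reads d = 46.0.
a, b = refine_through_glass(a, b, 3.0, 46.0)
```

After refinement, disparities measured through the glass convert to correct distances without redoing the full two-distance calibration.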
  • Patent number: 12355931
    Abstract: A system for generating three-dimensional (3D) images from captured images of a target when executing digital magnification. A controller executes a digital magnification on the first image of the target captured by the first image sensor and on the second image captured by the second image sensor of the target. The controller crops the first image and the second image to overlap a first portion of the target captured by the first image sensor with a second portion of the target captured by the second image sensor. The controller adjusts the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target. The displayed cropped first image and the cropped second image display the 3D image at the digital magnification to the user.
    Type: Grant
    Filed: October 24, 2023
    Date of Patent: July 8, 2025
    Assignee: UNIFY MEDICAL, INC.
    Inventors: Yang Liu, Maziyar Askari Karchegani
  • Patent number: 12348701
    Abstract: A display device includes a display panel that includes a plurality of sub-pixels in a display area, an optical member attached to the display panel and including stereoscopic lenses, and a display driver that receives information on relative positions of the sub-pixels for each stereoscopic lens of the optical member from an optical member bonding apparatus, and corrects image data based on the relative positions of the sub-pixels so that 3D images are displayed in a display area of the display panel.
    Type: Grant
    Filed: March 16, 2023
    Date of Patent: July 1, 2025
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Byeong Hee Won, Beom Shik Kim, Jeong Woo Park
  • Patent number: 12347134
    Abstract: A server for pose estimation of a person and an operating method of the server are provided. The operating method includes obtaining an original image including a person, generating a plurality of input images by rotating the original image, obtaining first pose estimation results respectively corresponding to the plurality of input images, by inputting the plurality of input images to a pose estimation model, applying weights to the first pose estimation results respectively corresponding to the plurality of input images, and obtaining a second pose estimation result, based on the first pose estimation results to which the weights are applied, wherein the first pose estimation results and the second pose estimation result each include data indicating main body parts of the person.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: July 1, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yongsung Kim, Hyunsoo Choi, Daehyun Ban, Dongwan Lee, Juyoung Lee
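The rotation-ensemble scheme in the abstract (rotate the input, estimate a pose per rotation, then fuse with weights) can be sketched as below. `pose_model` is a hypothetical stand-in for the real estimator, and the fusion is a plain weighted average of keypoints mapped back to the original frame, which the patent does not necessarily specify.

```python
import numpy as np

def rotate_points(points, angle_deg, center):
    """Rotate 2D keypoints about a center by angle_deg."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (points - center) @ rot.T + center

def ensemble_pose(image_center, angles, weights, pose_model):
    estimates = []
    for angle in angles:
        # First results: keypoints predicted on the rotated input,
        # mapped back into the original image frame.
        kp = pose_model(angle)
        estimates.append(rotate_points(kp, -angle, image_center))
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    # Second result: weighted combination of the per-rotation estimates.
    return np.tensordot(w, np.stack(estimates), axes=1)

center = np.zeros(2)
true_kp = np.array([[1.0, 0.0], [0.0, 2.0]])  # toy "main body part" points

def perfect_model(angle):
    # A model that predicts exactly right in every rotated frame.
    return rotate_points(true_kp, angle, center)

fused = ensemble_pose(center, angles=[0, 90, 180],
                      weights=[1, 2, 1], pose_model=perfect_model)
```

With a real estimator, the weights would let rotations where the model is more reliable dominate the fused result.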
  • Patent number: 12299897
    Abstract: A virtual reality system that includes a head-mounted display device and several tracking devices is disclosed. Each tracking device includes a camera and a processor. The camera obtains a picture of a human body at a current time point. The processor is configured to: obtain a current predicted 3D pose and a confidence for the current time point according to the picture; determine a previous valid value according to a previous predicted 3D pose and a previous final optimized pose; determine a current valid value according to the previous valid value, the confidence, and the current predicted 3D pose; and output the current predicted 3D pose and the confidence to a main tracking device of the tracking devices according to the current valid value, so as to generate a current final optimized pose.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: May 13, 2025
    Assignee: HTC Corporation
    Inventors: Kuan-Hsun Wang, Jun-Rei Wu
  • Patent number: 12299983
    Abstract: In one example, a method performed by a processing system including at least one processor includes acquiring a plurality of video volumes of an environment from a plurality of cameras, wherein at least two individual video volumes of the plurality of video volumes depict the environment from different viewpoints, generating a panoptic video feed of the environment from the plurality of video volumes, detecting an event of interest occurring in the panoptic video feed, and isolating a video volume of the event of interest to produce a video excerpt.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: May 13, 2025
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Eric Zavesky
  • Patent number: 12293454
    Abstract: A system and method for: receiving colour images, depth images and viewpoint information; dividing the 3D space occupied by the real-world environment into 3D grid(s) of voxels; creating 3D data structure(s) comprising nodes, each node representing a corresponding voxel; dividing each colour image and depth image into colour tiles and depth tiles, respectively; mapping a colour tile to the voxel(s) whose colour information is captured in that tile; storing, in the node representing the voxel(s), viewpoint information indicative of the viewpoint from which the colour and depth images are captured, along with either the colour tile that captures the colour information of the voxel(s) and the corresponding depth tile that captures its depth information, or reference information uniquely identifying the colour tile and corresponding depth tile; and utilising the 3D data structure(s) for training neural network(s), wherein the input of the neural network(s) comprises a 3D position of a point and the output comprises the colour and opacity of that point.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: May 6, 2025
    Assignee: Varjo Technologies Oy
    Inventors: Kimmo Roimela, Mikko Strandborg
  • Patent number: 12284437
    Abstract: A video recording device includes: a first imaging unit (in-camera) arranged on the same face as a display unit of the device's casing; a second imaging unit (out-camera) arranged on a face different from that of the display unit; and an audio input unit that inputs a command voice giving an instruction for recording a video signal. A control unit sets the delay time between input of the command voice and the start of the recording process differently according to which imaging unit is enabled, setting a longer delay when the first imaging unit is enabled than when the second imaging unit is enabled.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: April 22, 2025
    Assignee: MAXELL, LTD.
    Inventors: Kazuhiko Yoshizawa, Hirohito Kuriyama
  • Patent number: 12284381
    Abstract: In the present disclosure, a method of decoding a video signal and a device therefor are disclosed. Specifically, a method of decoding an image based on an inter prediction mode includes deriving a motion vector of an available spatial neighboring block around a current block; deriving a collocated block of the current block based on the motion vector of the spatial neighboring block; deriving a motion vector in a sub-block unit in the current block based on a motion vector of the collocated block; and generating a prediction block of the current block using the motion vector derived in the sub-block unit, wherein the collocated block may be specified by the motion vector of the spatial neighboring block in one pre-defined reference picture.
    Type: Grant
    Filed: February 8, 2024
    Date of Patent: April 22, 2025
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Hyeongmoon Jang, Junghak Nam, Naeri Park
  • Patent number: 12260572
    Abstract: A method includes determining, based on an image having an initial viewpoint, a depth image, and determining a foreground visibility map including visibility values that are inversely proportional to a depth gradient of the depth image. The method also includes determining, based on the depth image, a background disocclusion mask indicating a likelihood that pixel of the image will be disoccluded by a viewpoint adjustment. The method additionally includes generating, based on the image, the depth image, and the background disocclusion mask, an inpainted image and an inpainted depth image. The method further includes generating, based on the depth image and the inpainted depth image, respectively, a first three-dimensional (3D) representation of the image and a second 3D representation of the inpainted image, and generating a modified image having an adjusted viewpoint by combining the first and second 3D representation based on the foreground visibility map.
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: March 25, 2025
    Assignee: Google LLC
    Inventors: Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Dominik Kaeser, Brian L. Curless, David Salesin, William T. Freeman, Michael Krainin, Ce Liu
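The foreground visibility map the abstract defines, with visibility values inversely proportional to the depth gradient, can be sketched in a few lines. The exact falloff function is an assumption; any monotone inverse of gradient magnitude fits the description.

```python
import numpy as np

def foreground_visibility(depth, alpha=1.0):
    """Visibility inversely proportional to local depth gradient magnitude:
    flat regions stay fully visible, sharp depth edges (foreground
    silhouettes) get low visibility for compositing."""
    gy, gx = np.gradient(depth.astype(float))
    grad_mag = np.hypot(gx, gy)
    return 1.0 / (1.0 + alpha * grad_mag)

# Two flat depth regions with a sharp edge between columns 1 and 2.
depth = np.array([[1.0, 1.0, 5.0, 5.0],
                  [1.0, 1.0, 5.0, 5.0]])
vis = foreground_visibility(depth)
```

In the pipeline described, this map then weights how the original 3D representation and the inpainted one are blended when the viewpoint shifts.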
  • Patent number: 12262020
    Abstract: Data that is predicted across pictures in a video sequence is managed by separating the data into multiple data types. Instead of keeping all data associated with a decoded picture, such as picture sample values and motion vector data, data associated with a decoded picture is split by data type to enable storing only a subset of all data associated with a decoded picture.
    Type: Grant
    Filed: October 31, 2023
    Date of Patent: March 25, 2025
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Rickard Sjöberg, Martin Pettersson
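The split-by-type storage idea can be sketched with per-type stores keyed by picture: motion vector data can be discarded for a picture once no longer referenced, while its sample data is retained. The two type names and the class shape are illustrative, not the patent's terminology.

```python
class DecodedPictureBuffer:
    """Toy decoded-picture store that keeps each data type separately."""

    def __init__(self):
        self.store = {"samples": {}, "motion": {}}

    def add(self, pic_id, samples, motion):
        self.store["samples"][pic_id] = samples
        self.store["motion"][pic_id] = motion

    def drop_type(self, pic_id, data_type):
        """Keep only a subset of a picture's data by discarding one type."""
        self.store[data_type].pop(pic_id, None)

dpb = DecodedPictureBuffer()
dpb.add(0, samples="pixels-0", motion="mvs-0")
dpb.drop_type(0, "motion")  # samples for picture 0 survive on their own
```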
  • Patent number: 12253689
    Abstract: The present invention relates to a multi-perspective photography camera device for capturing perspective images of a macroscopic 3D scene, comprising a hollow casing (10a-10b, 11, 13) housing: a lens array (8) to be placed at a conjugate plane of an aperture diaphragm (2) of a photographic objective (OB), between the photographic objective (OB) and a photosensitive pixel array sensor (9), to simultaneously receive and transmit to the sensor (9) light representative of a plurality of different perspectives of the macroscopic 3D scene (S), one perspective per array lens; a field diaphragm (5) to be placed at the plane where the image provided by said photographic objective (OB) is to be formed; and a converging lens (6) having a focal length fR, placed between the field diaphragm (5) and the lens array (8) at a distance equal to fR from the field diaphragm (5).
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: March 18, 2025
    Assignee: UNIVERSITAT DE VALÈNCIA
    Inventors: Manuel Martínez Corral, Genaro Saavedra Tortosa, Gabriele Scrofani, Emilio Sánchez Ortiga
  • Patent number: 12250331
    Abstract: A camera module includes: a fixing member; a base member slidably coupled to the fixing member; a driving unit coupled to the base member and including a main gear; a gear structure rotatably coupled to the base member to engage with the main gear; a first camera connected to the gear structure to be rotated; and a second camera connected to the gear structure to be rotated in a direction opposite to the first camera. A first position of the base member when the main gear is rotated in a first rotation direction is different from a second position of the base member when the main gear is rotated in a second rotation direction that is opposite to the first rotation direction.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: March 11, 2025
    Assignee: Samsung Electro-Mechanics Co., Ltd.
    Inventors: Jae Woo Jun, Yun Kyoung Choi
  • Patent number: 12243162
    Abstract: Methods of determining the depth of a scene and associated systems are disclosed herein. In some embodiments, a method can include augmenting depth data of a scene captured with a depth sensor with depth data from one or more images of the scene. For example, the method can include capturing image data of the scene with a plurality of cameras. The method can further include generating a point cloud representative of the scene based on the depth data from the depth sensor and identifying a missing region of the point cloud, such as a region occluded from the view of the depth sensor. The method can then include generating depth data for the missing region based on the image data. Finally, the depth data for the missing region can be merged with the depth data from the depth sensor to generate a merged point cloud representative of the scene.
    Type: Grant
    Filed: May 4, 2023
    Date of Patent: March 4, 2025
    Assignee: PROPRIO INC.
    Inventors: Thomas Ivan Nonn, David Julio Colmenares, James Andrew Youngquist, Adam Gabriel Jones
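The merge step the abstract walks through (identify the region the depth sensor missed, generate depth there from the camera images, merge) can be sketched in depth-map form. Treating non-finite sensor values as the missing region is an assumption for illustration; the image-derived depths here are placeholders for multi-camera stereo estimates.

```python
import numpy as np

def missing_mask(depth_from_sensor):
    """Missing region: pixels the depth sensor returned no value for."""
    return ~np.isfinite(depth_from_sensor)

def merge_depth(depth_from_sensor, depth_from_images):
    """Fill sensor holes with image-derived depth; keep sensor depth elsewhere."""
    merged = depth_from_sensor.copy()
    hole = missing_mask(depth_from_sensor)
    merged[hole] = depth_from_images[hole]
    return merged

sensor = np.array([[1.0, np.nan],
                   [1.2, 1.1]])           # NaN: occluded from the depth sensor
stereo = np.array([[1.0, 1.05],
                   [1.2, 1.1]])           # hypothetical image-derived depth
merged = merge_depth(sensor, stereo)
```

Back-projecting `merged` through the camera intrinsics would yield the merged point cloud the abstract describes.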
  • Patent number: 12244828
    Abstract: Described herein is a computer implemented method. The method includes accessing input image data defining a plurality of input pixels and processing the input image data to generate output image data. The output image data defines a plurality of output pixels, each corresponding to an input pixel. At least one output pixel is generated by a sampling process that includes: selecting a working pixel from the plurality of input pixels; selecting a set of sample pixels for the working pixel, wherein each sample pixel is an input pixel that is selected as a sample pixel based on whether the input pixel is positioned within a depth-adjusted sampling area, the depth-adjusted sampling area for a particular input pixel being determined based on a depth separation between that particular input pixel and the working pixel; and generating an output pixel corresponding to the working pixel based on the set of sample pixels.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: March 4, 2025
    Assignee: CANVA PTY LTD
    Inventor: Bhautik Jitendra Joshi
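The depth-adjusted sampling area described above can be sketched as a blur whose sampling radius shrinks with the depth separation from the working pixel, so blur does not bleed across depth edges. The radius falloff and parameter values are illustrative assumptions.

```python
import numpy as np

def depth_adjusted_blur(image, depth, base_radius=2.0, depth_scale=4.0):
    """For each working pixel, average only sample pixels inside a radius
    reduced by their depth separation from the working pixel."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    for y in range(h):
        for x in range(w):
            # Depth-adjusted sampling area: per-candidate radius shrinks
            # as the depth separation from the working pixel grows.
            radius = base_radius / (1.0 + depth_scale * np.abs(depth - depth[y, x]))
            inside = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
            out[y, x] = image[inside].mean()
    return out

# A bright region and a dark region sitting at different depths.
image = np.array([[0.0, 0.0, 10.0, 10.0]] * 4)
depth = np.array([[0.0, 0.0, 1.0, 1.0]] * 4)
blurred = depth_adjusted_blur(image, depth)
```

With a large depth separation across the edge, each side is averaged only with itself, leaving the boundary crisp.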
  • Patent number: 12235081
    Abstract: A laser device for aircraft defense according to an embodiment of the present invention may include: a laser oscillator that outputs a laser beam; a laser beam irradiation area generator for generating a laser beam irradiation area in the air on the basis of the output laser beam; and a controller that controls the generator to produce a laser beam irradiation plane having an energy density equal to or greater than a preset threshold in the laser beam irradiation area, the irradiation area being a three-dimensional space extending from the laser device to the irradiation plane, in which an aircraft located in the area is hit with the laser beam.
    Type: Grant
    Filed: November 5, 2024
    Date of Patent: February 25, 2025
    Assignee: DUWON PHOTONICS CO., LTD.
    Inventors: Yong Won Park, Hee Won Shin
  • Patent number: 12231772
    Abstract: A dual-aperture zoom digital camera operable in both still and video modes. The camera includes Wide and Tele imaging sections with respective lens/sensor combinations and image signal processors and a camera controller operatively coupled to the Wide and Tele imaging sections. The Wide and Tele imaging sections provide respective image data. The controller is configured to output, in a zoom-in operation between a lower zoom factor (ZF) value and a higher ZF value, a zoom video output image that includes only Wide image data or only Tele image data, depending on whether a no-switching criterion is fulfilled or not.
    Type: Grant
    Filed: May 27, 2024
    Date of Patent: February 18, 2025
    Assignee: Corephotonics Ltd.
    Inventors: Noy Cohen, Oded Gigushinski, Nadav Geva, Gal Shabtay, Ester Ashkenazi, Ruthy Katz, Ephraim Goldenberg
  • Patent number: 12231833
    Abstract: A sensor module comprises a master sensor unit for sensing a first environmental parameter, a slave sensor unit for sensing a second environmental parameter, a common substrate on which the master sensor unit and the slave sensor unit are mounted, and a digital bus interface for communication between the master sensor unit and the slave sensor unit. The master sensor unit comprises a non-volatile memory for storing calibration data and configuration data of the master sensor unit and the slave sensor unit. The master sensor unit is embodied as a first chip, and the slave sensor unit is embodied as a second chip. Such a sensor module is compact, robust and versatile.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: February 18, 2025
    Assignee: SENSIRION AG
    Inventors: Ralph Steiner Vanha, Samuel Fuhrer, Marcel Pluess, Ulrich Bartsch
  • Patent number: 12231751
    Abstract: A modular omni-directional sensor array (MOSA) enclosure is provided for visual surveillance. The MOSA enclosure is disposable on an elevated position and includes a lower equipment module and an upper optical module. The equipment module contains electrical power and control electronics and is disposed on the elevated position from underneath. The optical module contains a plurality of cameras viewing radially outward. The optical module is disposed onto the equipment module from above.
    Type: Grant
    Filed: February 14, 2023
    Date of Patent: February 18, 2025
    Assignee: United States of America, represented by the Secretary of the Navy
    Inventors: Timothy Li-Ming Peng, Eric Wayne Stacy, David Bone Clark, III
  • Patent number: 12229978
    Abstract: A system includes an electronic display, a computer processor, one or more memory units, and a module stored in the one or more memory units. The module is configured to access a source image stored in the one or more memory units and determine depth data for each pixel of a plurality of pixels of the source image. The module is further configured to map, using the plurality of pixels and the determined depth data for each of the plurality of pixels, the source image to a four-dimensional light field. The module is further configured to send instructions to the electronic display to display the mapped four-dimensional light field.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: February 18, 2025
    Assignee: FYR, Inc.
    Inventors: Kyle Martin Ringgenberg, Mark Andrew Lamkin, Jordan David Lamkin, Bryan Eugene Walter, Jared Scott Knutzon
  • Patent number: 12225372
    Abstract: An audio visualization method applied to an electronic device that has a screen and a silent mode is provided. The audio visualization method includes: determining whether the electronic device executes an audio/video program or not, where when the audio/video program is executed, an audio effect signal is generated; converting the audio effect signal into a two-channel signal and a multi-channel signal when the electronic device executes the audio/video program, where a channel count of the two-channel signal is less than that of the multi-channel signal; generating sound data according to the multi-channel signal when the electronic device is set to the silent mode; and generating an icon according to the sound data and presenting the icon on the screen. The disclosure also provides an audio visualization system.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: February 11, 2025
    Assignee: ASUSTEK COMPUTER INC.
    Inventor: Jung-Cheng Liu
  • Patent number: 12223621
    Abstract: The present disclosure provides a virtual viewpoint synthesis method, including: pre-processing a depth image with zero parallax corresponding to an original image to obtain a processed depth image; generating virtual viewpoint images corresponding to a plurality of virtual viewpoints respectively according to the processed depth image and the original image; and filling holes in the virtual viewpoint image to generate a plurality of filled virtual viewpoint images. The present disclosure further provides an electronic apparatus and a computer-readable medium.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: February 11, 2025
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yan Sun, Minglei Chu, Tiankuo Shi, Yanhui Xi, Zhihua Ji, Yifan Hou, Chenxi Zhao, Shuo Zhang, Xiangjun Peng, Xiaomang Zhang, Wei Sun
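The synthesize-then-fill pipeline above can be sketched in one dimension: shift each pixel by a disparity derived from its depth (with a z-buffer so nearer points win), then fill the disoccluded holes. Nearest-neighbor filling stands in for the patent's hole-filling step, which it does not specify here.

```python
import numpy as np

def render_virtual_view(row, depth, baseline):
    """Warp one image row to a virtual viewpoint using per-pixel depth."""
    out = np.full_like(row, np.nan, dtype=float)
    z_out = np.full_like(row, np.inf, dtype=float)
    for x in range(len(row)):
        shift = int(round(baseline / depth[x]))   # nearer pixels shift more
        xv = x + shift
        if 0 <= xv < len(row) and depth[x] < z_out[xv]:  # z-buffer test
            out[xv] = row[x]
            z_out[xv] = depth[x]
    return out  # NaNs mark disocclusion holes

def fill_holes(row):
    """Fill each hole from the nearest valid pixel (illustrative choice)."""
    filled = row.copy()
    for x in np.where(np.isnan(filled))[0]:
        valid = np.where(~np.isnan(filled))[0]
        filled[x] = filled[valid[np.argmin(np.abs(valid - x))]]
    return filled

row = np.array([1.0, 2.0, 3.0, 4.0])
depth = np.ones(4)                      # a flat scene, for a checkable result
virtual = fill_holes(render_virtual_view(row, depth, baseline=1.0))
```

Repeating the warp for several baselines yields the plurality of filled virtual viewpoint images the abstract describes.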
  • Patent number: 12212731
    Abstract: Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: January 28, 2025
    Assignee: Warner Bros. Entertainment Inc.
    Inventors: Christopher E. Nolan, Bradley T. Collar, Michael D. Smith
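One standard relationship between the stereo parameters named above can serve as a worked example: screen parallax of a point at distance Z for a camera pair with interaxial t and convergence distance C. The formula and symbols are the textbook shifted-sensor stereo model, offered as an illustration of the kind of relationship the abstract refers to, not as the patent's equations.

```python
def screen_parallax(z, interaxial, convergence, focal):
    """Parallax is zero at the convergence distance, negative (in front of
    the screen) for nearer points, positive (behind) for farther points."""
    return focal * interaxial * (1.0 / convergence - 1.0 / z)

# A point at the convergence distance sits exactly on the screen plane:
p_on_screen = screen_parallax(3.0, interaxial=0.065, convergence=3.0, focal=35.0)
# A farther point gets behind-screen parallax:
p_behind = screen_parallax(6.0, interaxial=0.065, convergence=3.0, focal=35.0)
```

A conversion tool like the one described can thus steer perceived depth by adjusting interaxial and convergence and re-rendering.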
  • Patent number: 12206673
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first user profile associated with a first user, the first user profile comprising a first privacy rule; obtaining a second user profile associated with a second user, the second user profile comprising a second privacy rule; determining which of the first privacy rule or the second privacy rule is more restrictive; setting for a first extended reality (XR) communication session a third privacy rule, the third privacy rule being set to the first privacy rule in a first case that the first privacy rule has been determined to be more restrictive than the second privacy rule and the third privacy rule being set to the second privacy rule in a second case that the second privacy rule has been determined to be more restrictive than the first privacy rule; creating the first XR communication session, the first XR communication session comprising one or more environments, the one or more environments supporting the first user and the second user
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: January 21, 2025
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Rashmi Palamadai
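The rule-selection step above reduces to picking the more restrictive of the two users' privacy rules. Modeling restrictiveness as an ordered level is an assumption for illustration; the patent does not specify a scale.

```python
# Hypothetical restrictiveness ordering (higher = more restrictive).
PRIVACY_LEVELS = {"public": 0, "contacts-only": 1, "private": 2}

def session_privacy_rule(rule_a, rule_b):
    """Third rule for the XR session: the more restrictive of the two."""
    return max(rule_a, rule_b, key=PRIVACY_LEVELS.__getitem__)

rule = session_privacy_rule("public", "private")
```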
  • Patent number: 12182930
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Grant
    Filed: February 23, 2023
    Date of Patent: December 31, 2024
    Assignee: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Patent number: 12184825
    Abstract: A method and apparatus for generating additional information used to reconstruct an additional image through steps of: generating information for movement compensation based on the original right image of the stereoscopic image and the previous frame of the right image; and generating first additional information for reconstructing the right image to a high resolution based on the original right image and the information for movement compensation are provided.
    Type: Grant
    Filed: November 28, 2022
    Date of Patent: December 31, 2024
    Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, KOOKMIN UNIVERSITY INDUSTRY ACADEMY COOPERATION FOUNDATION
    Inventors: Sung-Hoon Kim, Seongwon Jung, Dong Wook Kang, Kyeong Hoon Jung, Insu Son, Seungjun Lee
  • Patent number: 12165320
    Abstract: A mammographic imaging device for analysis and detection of possible inhomogeneities in breast tissue of a patient using laser light in the near-infrared in diffuse reflectance geometry, includes a patient horizontal support comprising at least one transparent window in its cross-section and a measuring mechanism below the at least one transparent window and carried by said support. The measuring mechanism includes a laser-light producing mechanism for producing laser light beams in the near-infrared, a directing mechanism for directing said laser light beams towards the at least one transparent window, at least one wavelength filter, a light-sensing and imaging mechanism for sensing light and producing images and a controlling, processing and image-normalizing unit for controlling, processing and normalizing images.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: December 10, 2024
    Assignee: CONSEJO NACIONAL DE INVESTIGACIONES CIENTÍFICAS Y TÉCNICAS (CONICET)
    Inventors: Pamela A. Pardini, Héctor A. García, María V. Waks Serra, Nicolás A. Carbone, Daniela I. Iriarte, Guido R. Baez, Hector O. Di Rocco, Juan A. Pomarico
  • Patent number: 12164694
    Abstract: The technology disclosed relates to manipulating a virtual object. In particular, it relates to detecting a hand in a three-dimensional (3D) sensory space and generating a predictive model of the hand, and using the predictive model to track motion of the hand. The predictive model includes positions of calculation points of fingers, thumb and palm of the hand. The technology disclosed relates to dynamically selecting at least one manipulation point proximate to a virtual object based on the motion tracked by the predictive model and positions of one or more of the calculation points, and manipulating the virtual object by interaction between at least some of the calculation points of the predictive model and the dynamically selected manipulation point.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: December 10, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Raffi Bedikian, Adrian Gasinski, Maxwell Sills, Hua Yang, Gabriel Hare
  • Patent number: 12154185
    Abstract: The disclosure relates to a system and method for verifying robot data that is used by a safety system monitoring a workspace shared by a human and robot. One or more sensors monitoring the workspace are arranged to obtain a three-dimensional view of the workspace. Raw data from each of the sensors is acquired and analyzed to determine the positioning and spatial relationship between the human and robot as both move throughout the workspace. This captured data is compared to the positional data obtained from the robot to assess whether discrepancies exist between the data sets. If the information from the sensors does not sufficiently match the data from the robot, then a signal from the system may be sent to deactivate the robot and prevent potential injury to the human.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: November 26, 2024
    Assignee: Datalogic IP Tech S.r.l.
    Inventors: Gildo Andreoni, Matteo Selvatici, Mohammad Arrfou
  • Patent number: 12155809
    Abstract: A network processing system obtains a viewport of a client device for volumetric video, and a two-dimensional (2D) subframe of a frame of the volumetric video associated with the viewport is obtained. Viewports may be obtained from the client device or be predicted. 2D subframes and reduced-resolution versions of frames can be transmitted to the client device. A client device may request volumetric video from the network processing system and provides a viewport to the network processing system. The client device may obtain from the network processing system reduced-resolution versions of volumetric video frames and 2D subframes in accordance with the viewport. The client device may determine whether a current viewport matches the viewport associated with the obtained 2D subframe and provides a display based on either that subframe (upon a match) or a 2D perspective of the reduced-resolution frame associated with the current viewport (if no match).
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: November 26, 2024
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Bo Han, Jackson Jarrell Pair, Tan Xu
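The client-side decision in the abstract above (subframe on a viewport match, reduced-resolution fallback otherwise) can be sketched as follows; the (yaw, pitch) viewport encoding and the 5-degree matching threshold are illustrative assumptions.

```python
def viewports_match(current, requested, tol_deg=5.0):
    """Viewports as (yaw, pitch) angles in degrees; the 5-degree
    matching threshold is an assumed value."""
    return all(abs(c - r) <= tol_deg for c, r in zip(current, requested))

def frame_to_display(current_vp, subframe_vp, subframe, reduced_frame):
    # Upon a match, display the high-quality 2D subframe; otherwise
    # fall back to a 2D perspective of the reduced-resolution frame
    # rendered for the current viewport.
    if viewports_match(current_vp, subframe_vp):
        return subframe
    return ("perspective", reduced_frame, current_vp)

print(frame_to_display((0.0, 0.0), (2.0, 1.0), "subframe", "reduced"))  # subframe
```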
  • Patent number: 12149675
    Abstract: Systems and techniques are provided for camera synchronization. An example method can include determining, for each camera of a plurality of cameras, a common point within an exposure time corresponding to a frame being requested from each camera. The method can include determining, based on the common point determined for each camera of the plurality of cameras, a respective synchronization error of each camera from the plurality of cameras. The method can include adjusting, based on the respective synchronization error, a duration of the frame at one or more cameras from the plurality of cameras, wherein the adjusted duration of the frame aligns the common point within the exposure time at each camera for the frame.
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: November 19, 2024
    Assignee: QUALCOMM Incorporated
    Inventor: Cullum James Baldwin
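The synchronization logic above can be sketched numerically. Choosing mid-exposure as the common point and microseconds as units are assumptions for illustration; the claim only requires some common point within the exposure time.

```python
def mid_exposure(start_us, exposure_us):
    """Common point within the exposure window (here: mid-exposure)."""
    return start_us + exposure_us / 2.0

def sync_errors(cameras, reference_idx=0):
    """Per-camera offset of the common point from a reference camera.
    `cameras` holds (frame_start_us, exposure_us) tuples."""
    ref = mid_exposure(*cameras[reference_idx])
    return [mid_exposure(s, e) - ref for s, e in cameras]

def adjusted_frame_durations(cameras, nominal_duration_us):
    # Shorten (or lengthen) one frame by each camera's error so that
    # the next frame's common point lines up across all cameras.
    return [nominal_duration_us - err for err in sync_errors(cameras)]

cams = [(0.0, 10_000.0), (1_500.0, 10_000.0)]
print(adjusted_frame_durations(cams, 33_333.0))  # [33333.0, 31833.0]
```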
  • Patent number: 12149824
    Abstract: An ophthalmic apparatus is provided with an image capturing optical system and an image capturing apparatus, and optically examines an eye to be examined by performing focus detection using image capturing surface phase difference AF. A focus detection unit calculates data for a focus detection precision map from a defocus amount based on the phase difference detection. In a case in which the focus detection precision of a first subject is less than an established precision, a region determining unit determines a circle or concentric circle region that passes through the first subject, and determines a region that includes a second subject on the focus detection precision map included on the circle or inside of the concentric circle region, and that also has a focus detection precision that is higher than a predetermined precision. The focus detection unit then performs focus detection on the second subject.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: November 19, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Takashi Hanasaka
  • Patent number: 12138113
    Abstract: The present invention proposes an apparatus (120) and method for detecting bone fracture of a subject on basis of ultrasound images. The apparatus (120) comprises a first fracture detector (122) and a second fracture detector (124). The first fracture detector (122) is configured to receive a first ultrasound image of a region of the subject, to identify a bone in the first ultrasound image, to identify at least one focus area within the region on basis of the identified bone, to generate focus area information indicating position of the at least one focus area, and to instruct an acquisition of a second ultrasound image of the region acquired based on the generated focus area information. The second fracture detector (124) is configured to receive the second ultrasound image, and to detect bone fracture on the basis of the second ultrasound image.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: November 12, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Jing Ping Xu, Balasundar Iyyavu Raju, Anthony M. Gades
  • Patent number: 12131489
    Abstract: A surveillance system is provided. The surveillance system is configured for (i) detecting and tracking persons locally for each camera input video stream using the common area anchor boxes and assigning each detected one of the persons a local track id, (ii) associating a same person in overlapping camera views with a global track id, and collecting associated track boxes as the same person moves in different camera views over time using a priority queue and the local track id and the global track id, (iii) performing track data collection to derive a spatial transformation through matched track box spatial features of a same person over time for scene coverage, and (iv) learning a multi-camera tracker given visual features from matched track boxes of distinct people across cameras based on the derived spatial transformation.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: October 29, 2024
    Assignee: NEC Corporation
    Inventors: Farley Lai, Asim Kadav, Likitha Lakshminarayanan
  • Patent number: 12118784
    Abstract: Methods and systems for image processing for detection of devices are disclosed. Image data can be received. The image data can be filtered to provide output image data. The output image data can be classified.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: October 15, 2024
    Assignee: RPS Group, Inc.
    Inventors: Dave Houghton, Patrick Rath
  • Patent number: 12120286
    Abstract: Methods and devices for manipulating an image are described. The method comprises receiving image data, the image data including a first image obtained from a first camera and a second image obtained from a second camera, the first camera and the second camera being oriented in a common direction; identifying one or more boundaries of an object in the image data by analyzing the first image and the second image; and displaying a manipulated image based on the image data, wherein the manipulated image includes manipulation of at least a portion of the first image based on boundaries of the object.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: October 15, 2024
    Assignee: BlackBerry Limited
    Inventor: Steven Henry Fyke
  • Patent number: 12117284
    Abstract: Provided are a method and apparatus for measuring a geometric parameter of an object, and a terminal. The method includes: establishing a three-dimensional coordinate system based on a real environment, in accordance with a first depth image of the real environment photographed by a camera component; obtaining pose data of a terminal, obtaining a second depth image and a two-dimensional image of an object to be measured, and displaying the two-dimensional image on a display interface of the terminal; in response to a measurement point selection instruction, determining a coordinate in the three-dimensional coordinate system of a measurement point based on the pose data, the second depth image and the two-dimensional image, and determining a geometric parameter of the object to be measured based on the coordinate; and displaying, in the two-dimensional image, the geometric parameter of the object to be measured.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: October 15, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP. LTD.
    Inventor: Jian Deng
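Once measurement-point coordinates exist in the scene's 3D coordinate system, the final step reduces to ordinary geometry. A minimal sketch, using Euclidean distance as the example geometric parameter (the patent covers geometric parameters generally):

```python
import math

def measure_geometric_parameter(p1, p2):
    """Geometric parameter (here: Euclidean distance in metres) between
    two measurement points in the scene's 3D coordinate system."""
    return math.dist(p1, p2)

print(measure_geometric_parameter((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
```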
  • Patent number: 12111524
    Abstract: A display module, a display apparatus, and a vehicle are provided. The display module includes a backlight component, and a display component located at a side of the backlight component facing toward a light-emitting direction of the display module. The backlight component includes a first light guide structure, a light regulating structure, and a polymer liquid crystal film. The first polarizer is located at a side of the polymer liquid crystal film facing away from the backlight component. The polymer liquid crystal film is located at a side of the light regulating structure facing away from the first light guide structure and includes a polymer liquid crystal layer, and an electrode layer located at each of at least one side of the polymer liquid crystal layer. The display module has a sharing mode in which the electrode layer is not energized and an anti-peeping mode in which the electrode layer is energized.
    Type: Grant
    Filed: October 10, 2023
    Date of Patent: October 8, 2024
    Assignee: Shanghai Tianma Micro-Electronics Co., Ltd.
    Inventors: Longcai Xin, Zhiyuan Zhang, Fan Tian
  • Patent number: 12113948
    Abstract: Some examples of the disclosure are directed to systems and methods for managing locations of users in a spatial group within a communication session based on the display of shared content in a three-dimensional environment. In some examples, a first electronic device and a second electronic device are in communication within a communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device. In some examples, in response to detecting an input corresponding to a request to display shared content in the three-dimensional environment, if the shared content is a first type of content, the first electronic device positions the avatar a first distance away from the viewpoint, and if the shared content is a second type of content, the first electronic device positions the avatar a second distance away from the viewpoint.
    Type: Grant
    Filed: January 24, 2024
    Date of Patent: October 8, 2024
    Assignee: Apple Inc.
    Inventors: Connor A. Smith, Willem Mattelaer, Joseph P. Cerra, Kevin Lee
  • Patent number: 12112495
    Abstract: Provided are a depth data filtering method and apparatus, an electronic device, and a readable storage medium. The method includes: obtaining, for each pixel, a depth difference value between two consecutive frames of depth maps; marking an area formed by pixels whose depth difference value is smaller than a predetermined absolute depth deviation as a first environment change area; marking an area formed by pixels whose depth difference value is greater than or equal to the predetermined absolute depth deviation as a second environment change area; and respectively filtering the first environment change area and the second environment change area.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: October 8, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Jian Kang
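The marking step above (partitioning pixels by whether the inter-frame depth difference falls below the predetermined absolute deviation) can be sketched as follows; the row-major list representation and coordinate output are illustrative choices.

```python
def classify_change_areas(prev_depth, curr_depth, abs_deviation):
    """Split pixels into the two environment-change areas by the
    per-pixel depth difference between consecutive depth maps.
    Depth maps are row-major lists of floats."""
    first, second = [], []  # (row, col) pixel coordinates
    for r, (prow, crow) in enumerate(zip(prev_depth, curr_depth)):
        for c, (p, d) in enumerate(zip(prow, crow)):
            (first if abs(d - p) < abs_deviation else second).append((r, c))
    return first, second

first, second = classify_change_areas([[1.0, 1.0]], [[1.05, 2.0]], 0.5)
print(first, second)  # [(0, 0)] [(0, 1)]
```

Each area can then be filtered with a strategy suited to it (for example, stronger temporal smoothing on the slowly changing first area).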
  • Patent number: 12101160
    Abstract: A device, method, and non-transitory computer readable medium for two-dimensional blind single-input multiple-output channel identification for image restoration. The method includes receiving, by a receiver having independent channels, a two-dimensional image data matrix, then transforming the received two-dimensional image data matrix into a one-dimensional image vector. Channel parameters can then be estimated using the one-dimensional image vector. The method can then construct a restored image using the estimated channel parameters and the two-dimensional image data matrix.
    Type: Grant
    Filed: April 5, 2023
    Date of Patent: September 24, 2024
    Assignee: KING FAHD UNIVERSITY OF PETROLEUM AND MINERALS
    Inventors: Abdulmajid Lawal, Karim Abed-Meraim, Qadri Mayyala, Naveed Iqbal, Azzedine Zerguine
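The matrix-to-vector transformation in the abstract above is, in signal-processing convention, the vec() operation. A minimal sketch, assuming column stacking (the patent only states that the 2D matrix is transformed to a 1D vector):

```python
def vectorize(image_matrix):
    """Column-stack a 2D image matrix (row-major list of lists) into a
    1D vector, reading each column top to bottom, left to right."""
    rows, cols = len(image_matrix), len(image_matrix[0])
    return [image_matrix[r][c] for c in range(cols) for r in range(rows)]

print(vectorize([[1, 2], [3, 4]]))  # [1, 3, 2, 4]
```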
  • Patent number: 12079914
    Abstract: Techniques are disclosed for providing improved pose tracking of a subject using a 2D camera and generating a 3D image that recreates the pose of the subject. A 3D skeleton map is estimated from a 2D skeleton map of the subject using, for example, a neural network. A template 3D skeleton map is accessed or generated having bone segments that have lengths set using, for instance, anthropometry statistics based on a given height of the template 3D skeleton map. An improved 3D skeleton map is then produced by at least retargeting one or more of the plurality of bone segments of the estimated 3D skeleton map to more closely match the corresponding template bone segments of the template 3D skeleton map. The improved 3D skeleton map can then be animated in various ways (e.g., using various skins or graphics) to track corresponding movements of the subject.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: September 3, 2024
    Assignee: INTEL CORPORATION
    Inventors: Shandong Wang, Yangyuxuan Kang, Anbang Yao, Ming Lu, Yurong Chen
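The retargeting step above scales each estimated bone segment to its template length while preserving direction. A per-segment sketch (a full skeleton would apply this parent-to-child down the kinematic tree; the forearm length is an example value, not from the patent):

```python
import math

def retarget_bone(parent, child, template_length):
    """Move `child` along the parent-to-child direction so the bone
    segment matches the template length; points are (x, y, z) tuples."""
    dx, dy, dz = (c - p for c, p in zip(child, parent))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0.0:
        return child  # degenerate segment: leave unchanged
    s = template_length / length
    return (parent[0] + dx * s, parent[1] + dy * s, parent[2] + dz * s)

# A 2-unit estimated forearm retargeted to a 0.26 m template length.
print(retarget_bone((0, 0, 0), (2, 0, 0), 0.26))  # (0.26, 0.0, 0.0)
```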
  • Patent number: 12073581
    Abstract: An apparatus comprising an interface, a light projector and a processor. The interface may be configured to receive pixel data. The light projector may be configured to generate a structured light pattern. The processor may be configured to process the pixel data arranged as video frames comprising the structured light pattern, perform computer vision operations to detect a size of a face area of the video frames, determine a scale ratio in response to the size of the face area, extract the structured light pattern from the video frames, generate a downscaled structured light image and generate a depth map in response to the downscaled structured light image and a downscaled reference image. A downscale operation may be performed in response to the scale ratio to generate the downscaled structured light image. The scale ratio may enable the generation of the downscaled structured light image with sufficient depth pixels.
    Type: Grant
    Filed: September 28, 2023
    Date of Patent: August 27, 2024
    Assignee: Ambarella International LP
    Inventors: Jian Tang, Tao Liu, Jingyang Qiu
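The scale-ratio idea in the abstract above (downscale as far as the detected face area allows while keeping enough depth pixels) can be sketched as follows; the target pixel count and the nearest-neighbour downscale are illustrative assumptions.

```python
def scale_ratio(face_area_px, target_face_area_px=10_000):
    """Ratio chosen so the downscaled image keeps roughly the target
    number of depth pixels over the face; the target is an assumed
    constant, and the ratio is capped so we never upscale."""
    return min((target_face_area_px / face_area_px) ** 0.5, 1.0)

def downscale(image, ratio):
    """Nearest-neighbour downscale of a row-major image by `ratio`."""
    step = max(1, round(1 / ratio))
    return [row[::step] for row in image[::step]]

print(scale_ratio(40_000))  # 0.5
```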
  • Patent number: 12054162
    Abstract: The invention relates to a method for determining a parameter indicative of a road capability of a road segment (16) supporting a vehicle (10). The vehicle (10) comprises a sensor (18) adapted to generate an information package on the basis of signals (20, 22) reflected from the surface of a portion of the road segment (16) in front of the vehicle (10), as seen in an intended direction of travel of the vehicle (10). The vehicle (10) further comprises a plurality of ground engaging members (12, 14).
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: August 6, 2024
    Assignee: VOLVO TRUCK CORPORATION
    Inventors: Leo Laine, Mats Jonasson
  • Patent number: 12051224
    Abstract: Provided are systems and methods for camera alignment using pre-distorted targets. Some methods described include selecting a configuration of shapes, and determining targets by pre-distorting the shapes according to the inverse of the distortion function of the lens system to be aligned. Images of the pre-distorted targets are then compared to the original configuration of shapes to perform camera alignment. Alignment is thus accomplished in a simpler and more accurate manner. Systems and computer program products are also provided.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: July 30, 2024
    Assignee: Motional AD LLC
    Inventors: Nijumudheen Muhassin, Yew Kwang Low, Jayesh Dwivedi
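The pre-distortion step above amounts to mapping each target point through the inverse of the lens distortion function. A toy sketch with a made-up radial model (the real inverse would come from the calibrated lens; everything here is illustrative):

```python
def pre_distort(points, inverse_distortion):
    """Apply the inverse of the lens distortion function to each 2D
    point of a target shape, so that imaging the target through the
    lens reproduces the original (undistorted) shape."""
    return [inverse_distortion(x, y) for x, y in points]

# Toy radial model: the lens scales points by (1 + k * r^2); dividing
# by the same factor is an approximate inverse, adequate here purely
# for illustration.
K = 0.1
def inverse_radial(x, y):
    f = 1.0 + K * (x * x + y * y)
    return (x / f, y / f)

square = [(1.0, 1.0), (1.0, -1.0), (-1.0, -1.0), (-1.0, 1.0)]
print(pre_distort(square, inverse_radial))
```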
  • Patent number: 12041360
    Abstract: Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
    Type: Grant
    Filed: September 5, 2023
    Date of Patent: July 16, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Amandeep S. Jabbi, Robert H. Mullis, Jacques Duparre, Shane Ching-Feng Hu
  • Patent number: 12028507
    Abstract: Augmented reality systems provide graphics over views from a mobile device for both in-venue and remote viewing of a sporting or other event. A server system can provide a transformation between the coordinate system of a mobile device (mobile phone, tablet computer, head mounted display) and a real world coordinate system. Requested graphics for the event are displayed over a view of an event. In a tabletop presentation, video of the event can be displayed with augmented reality graphics overlays at a remote location.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: July 2, 2024
    Assignee: Quintar, Inc.
    Inventors: Sankar Jayaram, Wayne O. Cochran, John Harrison, Timothy P. Heidmann, Thomas Sahara, John Buddy Scott