Three-dimension Patents (Class 345/419)
  • Patent number: 11995768
    Abstract: A VR snapshot broadcasting distribution system, a distribution server, a control method and a program for the distribution server, and a data structure for VR snapshot data are disclosed, whereby a 3D space is kept in 3D form while its data is made compact. An example distribution server includes: a shooting instruction input unit that receives a VR snapshot shooting instruction from a distributor or a viewer; an exterior appearance data constructor configured to construct, based on the VR snapshot shooting instruction, exterior appearance data of the entire 3-dimensional space including the avatar; a VR snapshot data distribution unit configured to distribute, as VR snapshot data, the exterior appearance data of the entire 3-dimensional space including the avatar constructed by the exterior appearance data constructor; and a display unit configured to display the VR snapshot.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: May 28, 2024
    Assignee: Dwango Co., Ltd.
    Inventor: Shinnosuke Iwaki
  • Patent number: 11997507
    Abstract: A line-of-sight determination method includes: a point cloud data acquisition step of acquiring point cloud data including a first position indicating a position of a first wireless station and a plurality of second positions indicating positions on a structure serving as a candidate in which a second wireless station opposing the first wireless station is to be installed; and a line-of-sight determination step of determining whether or not there is a line of sight between the first position and at least one of the second positions, and determining whether or not there is a line of sight between the first wireless station and the structure based on the determination result.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: May 28, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuki Wai, Tatsuhiko Iwakuni, Hideyuki Tsuboi, Daisei Uchida, Hideki Toshinaga, Kazuto Goto, Naoki Kita
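Below is a minimal sketch of the kind of line-of-sight test the abstract describes, assuming the point cloud is an N×3 NumPy array and that a sight line counts as blocked when any cloud point lies within a clearance radius of the segment between the two positions; the function names and the clearance threshold are illustrative, not taken from the patent.

```python
import numpy as np

def has_line_of_sight(first_pos, second_pos, cloud, clearance=0.5):
    """Return True if no cloud point lies within `clearance` metres of the
    segment from first_pos (first wireless station) to second_pos
    (candidate mounting point on the structure)."""
    a = np.asarray(first_pos, dtype=float)
    b = np.asarray(second_pos, dtype=float)
    p = np.asarray(cloud, dtype=float)          # (N, 3) point cloud
    ab = b - a
    denom = np.dot(ab, ab)
    if denom == 0.0:
        return True
    # Parameter of the closest point on the segment for every cloud point.
    t = np.clip((p - a) @ ab / denom, 0.0, 1.0)
    closest = a + t[:, None] * ab               # (N, 3) closest points on the segment
    dist = np.linalg.norm(p - closest, axis=1)
    return bool(np.all(dist > clearance))

def any_candidate_visible(first_pos, candidate_positions, cloud):
    """Patent-style decision: the structure is judged visible from the first
    station if at least one candidate position on it has a clear sight line."""
    return any(has_line_of_sight(first_pos, c, cloud) for c in candidate_positions)
```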
  • Patent number: 11995885
    Abstract: A spatial indexing system receives a video that is a sequence of frames depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial location at which each of the images was captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each of the images at its corresponding location within the model.
    Type: Grant
    Filed: March 22, 2023
    Date of Patent: May 28, 2024
    Assignee: OPEN SPACE LABS, INC.
    Inventors: Michael Ben Fleischman, Philip DeCamp, Jeevan James Kalanithi
  • Patent number: 11997271
    Abstract: An encoder partitions an image into blocks using a set of block partition modes. The set of block partition modes includes a first block partition mode for partitioning a first block, and a second block partition mode for partitioning a second block, which is one of the blocks obtained after the first block is partitioned. When the number of partitions of the first block partition mode is three, the second block is the center block among the blocks obtained after partitioning the first block, and the partition direction of the second block partition mode is the same as the partition direction of the first block partition mode, the second block partition mode indicates that the number of partitions is only three. A parameter for identifying the second block partition mode includes a first flag indicating a horizontal or vertical partition direction, and does not include a second flag indicating the number of partitions.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: May 28, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Tadamasa Toma, Takahiro Nishi, Kiyofumi Abe, Ryuichi Kanoh, Chong Soon Lim, Sughosh Pavan Shashidhar, Ru Ling Liao, Hai Wei Sun, Han Boon Teo, Jing Ya Li
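The following rough sketch illustrates the signalling rule described above, where the split-count flag for the centre block of a three-way split is omitted because its value is implied; the flag encoding and the `bits` list are hypothetical stand-ins, not the codec's actual bitstream syntax.

```python
def write_second_partition_mode(bits, first_mode, second_block_is_center, second_mode):
    """Append the syntax elements for the second block's partition mode.

    first_mode / second_mode: dicts like {"parts": 2 or 3, "dir": "hor" or "ver"}.
    bits: a plain list collecting flag values (hypothetical bitstream writer).
    """
    # First flag: horizontal or vertical partition direction (always signalled).
    bits.append(1 if second_mode["dir"] == "ver" else 0)

    implied_three_way = (
        first_mode["parts"] == 3
        and second_block_is_center
        and second_mode["dir"] == first_mode["dir"]
    )
    if implied_three_way:
        # The number of partitions can only be three here, so the second
        # flag (split count) is not written at all.
        assert second_mode["parts"] == 3
    else:
        # Otherwise a second flag distinguishes a binary from a ternary split.
        bits.append(1 if second_mode["parts"] == 3 else 0)
    return bits
```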
  • Patent number: 11995762
    Abstract: A virtual reality (VR) system includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides them to the GPU. The GPU receives the subset of the points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset. The selecting of the subset of points is performed at a first frames-per-second (FPS) rate and the rendering is performed at a second FPS rate that is faster than the first FPS rate.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: May 28, 2024
    Assignee: FARO Technologies, Inc.
    Inventors: Manuel Caputo, Louis Bergmann
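A simplified sketch of the CPU-side selection step follows, approximating the field of view as a cone around the viewer's gaze direction; real occlusion culling is considerably more involved, and the cone test, names, and rate comment are illustrative assumptions only.

```python
import numpy as np

def select_visible_subset(points, viewer_pos, view_dir, fov_deg=110.0, max_dist=50.0):
    """CPU-side step: pick the cloud points inside a viewing cone so that only
    this subset is handed to the GPU for rendering (the cone test stands in
    for the patent's occlusion culling)."""
    p = np.asarray(points, dtype=float) - np.asarray(viewer_pos, dtype=float)
    d = np.asarray(view_dir, dtype=float)
    d = d / np.linalg.norm(d)
    dist = np.linalg.norm(p, axis=1)
    # Cosine of the angle between each point direction and the gaze direction.
    cos_angle = (p @ d) / np.where(dist == 0.0, 1.0, dist)
    inside = (dist < max_dist) & (cos_angle > np.cos(np.radians(fov_deg / 2.0)))
    return np.asarray(points)[inside]

# Usage idea: the selection loop could run at e.g. 10 FPS while the GPU keeps
# re-rendering the most recently received subset at 90 FPS for the VR viewer.
```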
  • Patent number: 11995905
    Abstract: An object recognition method related to the field of artificial intelligence comprises: collecting an object to be recognized (S101); screening and recognizing the full text information corresponding to the object using a target text detection model corresponding to the object, so as to obtain point-of-interest text information (S102); and recognizing the point-of-interest text information according to a preset text recognition model (S103). Because the target text detection model obtains the point-of-interest text information by screening and recognizing the full text information, recognizing all of the text information, as in the prior art, is avoided, which saves recognition time and improves recognition efficiency.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: May 28, 2024
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Bohao Feng, Xiaoshuai Zhang
  • Patent number: 11995226
    Abstract: A tracking system includes a processor, a controller, two or more light sources, and a dynamic vision sensor (DVS). The light sources are of known configuration with respect to each other and the controller, and turn on and off in a predetermined sequence. The DVS includes an array of light-sensitive elements of known configuration. The DVS outputs signals corresponding to events at corresponding light-sensitive elements in the array in response to changes in light from the light sources. The signals indicate times of the events and locations of the corresponding light-sensitive elements. The processor determines an association between each event and one or more of the light sources and, from that association, determines an occlusion of one or more of the light sources. The processor estimates a location of an object using the determined occlusion, the known light source configuration, and the locations of the corresponding light-sensitive elements in the array.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: May 28, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Xiaoyong Ye, Yuichiro Nakamura
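The toy sketch below shows one way the event-to-source association could work, assuming each light source is strobed in its own fixed time slot so an event's timestamp identifies its source, and that silence in a source's slot marks it as occluded; the slot length and event format are assumptions for illustration.

```python
def associate_events(events, num_sources, slot_us=1000):
    """events: list of (timestamp_us, x, y) DVS events.
    The sources turn on one after another in fixed slots, so the slot index
    modulo the number of sources tells us which source caused an event."""
    per_source = {i: [] for i in range(num_sources)}
    for t, x, y in events:
        source_id = (t // slot_us) % num_sources
        per_source[source_id].append((x, y))
    return per_source

def occluded_sources(per_source):
    """A source that produced no events during its slot is treated as occluded
    by the tracked object; the set of occluded sources, together with the known
    source layout, constrains where the object can be."""
    return {sid for sid, evts in per_source.items() if not evts}

# Example: three sources, the middle one blocked by the tracked object.
events = [(100, 10, 12), (2100, 30, 12)]          # slots 0 and 2 fire, slot 1 is silent
groups = associate_events(events, num_sources=3)
print(occluded_sources(groups))                    # {1}
```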
  • Patent number: 11995782
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. The request may include one or more sets of feature descriptors extracted from an image of the physical world around the device. Those features may be posed relative to a coordinate frame used by the local device. The localization service may identify one or more stored maps with a matching set of features. Based on a transformation required to align the features from the device with the matching set of features, the localization service may compute and return to the device a transformation to relate its local coordinate frame to a coordinate frame of the stored map.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: May 28, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Ali Shahrokni, Daniel Olshansky, Xuan Zhao, Rafael Domingos Torres, Joel David Holder, Keng-Sheng Lin, Ashwin Swaminathan, Anush Mohan
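A minimal sketch of the final alignment step, assuming the localization service has already matched device features to stored-map features and only needs the rigid transform between the two coordinate frames; the Kabsch/Procrustes fit below is a standard technique used here as a stand-in for whatever solver the patent actually employs.

```python
import numpy as np

def estimate_rigid_transform(device_pts, map_pts):
    """Return (R, t) such that R @ device_pt + t approximately equals map_pt
    for matched 3D feature positions, i.e. the transform relating the device's
    local coordinate frame to the stored map's coordinate frame."""
    P = np.asarray(device_pts, dtype=float)   # (N, 3) in the device frame
    Q = np.asarray(map_pts, dtype=float)      # (N, 3) in the map frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```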
  • Patent number: 11989388
    Abstract: A page element display method and an electronic device (100 or 1500) are provided. The method includes: The electronic device (100 or 1500) displays a first interface, where the first interface includes a first page element (1401); the electronic device (100 or 1500) detects a first operation performed by a user on the first page element (1402); the electronic device (100 or 1500) adjusts a size of the first page element in response to the first operation (1403); and the electronic device (100 or 1500) automatically displays a second interface after adjusting the size of the first page element.
    Type: Grant
    Filed: April 20, 2023
    Date of Patent: May 21, 2024
    Assignee: HONOR DEVICE CO., LTD.
    Inventors: Feng Dong, Jiawei Weng
  • Patent number: 11986728
    Abstract: A game processing method executed by an information processing device configured to perform transmission/reception of signals to/from a display part configured to be attachable to a head of a player to display a virtual image superimposed on an image in real space, the method includes causing the display part to display an image of a virtual object, adjusting at least one of a position or an orientation of the virtual object relative to the image in real space, based on an input of the player, causing the display part to display an image of a virtual game character so as to be arranged coinciding with the adjusted at least one of the position or the orientation of the virtual object, and hiding the image of the virtual object when the image of the game character is displayed.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: May 21, 2024
    Assignee: KOEI TECMO GAMES CO., LTD.
    Inventors: Hitoshi Kadota, Yusuke Ishihara
  • Patent number: 11989830
    Abstract: The disclosure is directed to a method for generating a three-dimensional (3D) volume including a treatment target, the method including receiving a plurality of two-dimensional (2D) input images of a patient, determining a metal artifact in each of the plurality of 2D input images, removing the metal artifacts from the plurality of 2D input images based on the determination, and replacing the metal artifacts with alternative pixel data to generate a plurality of filtered 2D images. A 3D volume is generated from the plurality of filtered 2D images. The plurality of 2D input images include the treatment target.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: May 21, 2024
    Assignee: Covidien LP
    Inventors: Guy Alexandroni, Ron Barak, Ariel Birenbaum, Nicolas J. Merlet
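A crude sketch of the per-image filtering step, assuming metal appears as the brightest pixels in each 2D projection and that a local median is an acceptable stand-in for the patent's "alternative pixel data"; the threshold and filter size are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_metal_artifacts(image, threshold=0.9):
    """Detect metal as pixels at or above `threshold` (image scaled to [0, 1])
    and replace them with a neighbourhood median, yielding one filtered 2D
    image of the kind later fed to 3D volume reconstruction."""
    img = np.asarray(image, dtype=float)
    metal_mask = img >= threshold
    smoothed = median_filter(img, size=7)      # stand-in "alternative pixel data"
    filtered = np.where(metal_mask, smoothed, img)
    return filtered, metal_mask

def filter_stack(images, threshold=0.9):
    """Apply the artifact removal to every 2D input image of the stack."""
    return [remove_metal_artifacts(im, threshold)[0] for im in images]
```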
  • Patent number: 11988521
    Abstract: The present invention performs appropriate display on a display of a navigation system. A navigation system that displays an image on a display and presents the image to an occupant of a vehicle acquires data for generating drawing elements from at least one information source and stores the data in a temporary storage for each of the drawing elements included in the image, selects, from among the normally available data stored in the temporary storage, data based on the information source determined in accordance with a predetermined setting, and causes a drawing processor to draw an image combining the drawing elements based on the selected data.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: May 21, 2024
    Assignee: FAURECIA CLARION ELECTRONICS CO., LTD.
    Inventors: Katsuhiko Kageyama, Arata Hayashi, Naoya Baba
  • Patent number: 11989833
    Abstract: The present teaching relates to method, system, medium, and implementations for fusing a 3D virtual model with a 2D image associated with an organ of a patient. A key-pose is determined as an approximate position and orientation of a medical instrument with respect to the patient's organ. Based on the key-pose, an overlay is generated on a 2D image of the patient's organ, acquired by the medical instrument, by projecting the 3D virtual model on to the 2D image. A pair of feature points includes a 2D feature point from the 2D image and a corresponding 3D feature point from the 3D virtual model. The 3D coordinate of the 3D feature point is determined based on the 2D coordinate of the 2D feature point. The depth of the 3D coordinate is on a line of sight of the 2D feature point and is determined so that the projection of the 3D virtual model from the depth creates an overlay approximately matching the organ observed in the 2D image.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: May 21, 2024
    Assignee: EDDA TECHNOLOGY, INC.
    Inventors: Xiaonan Zang, Guo-Qing Wei, Cheng-Chung Liang, Li Fan, Xiaolan Zeng, Jianzhong Qian
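The small sketch below shows how the 3D coordinate of a feature point can be parameterised by a depth along the 2D point's line of sight, assuming a simple pinhole camera with intrinsic matrix K; the search over depths that best matches the overlay is only hinted at in a comment, and the helper names are ours, not the patent's.

```python
import numpy as np

def backproject(uv, depth, K):
    """3D point, in the camera frame, lying at `depth` along the line of
    sight through pixel (u, v) of the 2D image."""
    u, v = uv
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction with z = 1
    return depth * ray

def project(X, K):
    """Pinhole projection of a camera-frame 3D point back into the image."""
    x = K @ X
    return x[:2] / x[2]

# Choosing the depth: slide the 3D feature point along its line of sight and
# keep the depth whose projected model overlay best matches the observed organ.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
X = backproject((350.0, 260.0), depth=120.0, K=K)
assert np.allclose(project(X, K), (350.0, 260.0))
```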
  • Patent number: 11988833
    Abstract: A system for deformation or bending correction in an Augmented Reality (AR) system. Sensors are positioned in a frame of a head-worn AR system to sense forces or pressure acting on the frame by temple pieces attached to the frame. The sensed forces or pressure are used in conjunction with a model of the frame to determine a corrected model of the frame. The corrected model is used to correct video data captured by the AR system and to correct a video virtual overlay that is provided to a user wearing the head-worn AR system.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: May 21, 2024
    Assignee: Snap Inc.
    Inventors: Matthias Kalkgruber, Tiago Miguel Pereira Torres, Weston Welge, Ramzi Zahreddine
  • Patent number: 11989810
    Abstract: A digital human interactive platform can determine a contextual response to a user input and generate a digital human. The digital human can convey the contextual response to the user in real time. The digital human can be configured to convey the contextual response with a predetermined behavior corresponding to the contextual response.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: May 21, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Simon Gibbs, Anil Unnikrishnan, Link Huang
  • Patent number: 11989766
    Abstract: Systems and methods for color coordination for scanned products are provided. A kiosk has an input device, a display device, and an optical code reader and has access to a product/color database. The kiosk is configured to receive an identification code scanned by the optical code reader, determine a product sample associated with the identification code, determine at least one coordinating color for the flooring product based on the product/color database, display a simulated environment including a sample room having at least one selectable surface, display on the display device the at least one coordinating color, receive with the input device a selected color, receive with the input device a selected surface from the sample room, and display on the display device the simulated environment showing the sample room with the product sample and with the selected color on the selected surface of the sample room.
    Type: Grant
    Filed: January 20, 2023
    Date of Patent: May 21, 2024
    Assignee: Behr Process Corporation
    Inventors: Damien Reynolds, Douglas Milsom, Cecelia Wren
  • Patent number: 11989819
    Abstract: A computer-implemented method and a corresponding apparatus are provided for the provision of a two-dimensional visualization image having a plurality of visualization pixels for the visualization of a three-dimensional object represented by volume data for a user. Context information for the visualization is obtained by the evaluation of natural language and is taken into account in the visualization. The natural language can be in the form of electronic documents, which are assigned or can be assigned to the visualization process. In addition, the natural language can be in the form of a speech input of a user, during or after the visualization.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: May 21, 2024
    Assignee: Siemens Healthineers AG
    Inventors: Fernando Vega, Stefan Thesen, Sebastian Budz, Robert Schneider, Sebastian Krueger, Alexander Brost, Volker Schaller, Bjoern Nolte
  • Patent number: 11989834
    Abstract: Devices and techniques are generally described for three dimensional room modeling. In various examples, 3D mesh data representing a room may be received. Plane data comprising a plurality of planes may be received. Each plane of the plurality of planes may represent a planar surface detected in the room. In some cases, a first plurality of wall candidates for a 3D model of the room may be determined based at least in part on the plane data. A second plurality of wall candidates for the 3D model of the room may be determined by modifying the first plurality of wall candidates based on a comparison of the first plurality of wall candidates to the 3D mesh data. The 3D model of the room may be generated based at least in part on the second plurality of wall candidates.
    Type: Grant
    Filed: June 3, 2022
    Date of Patent: May 21, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Mukul Agarwal, Asfand Yar Khan, Tomas F. Yago Vicente, Divyansh Agarwal, Karl Hillesland, Kevin May, Yu Lou, Chun-Kai Wang
  • Patent number: 11983365
    Abstract: An electronic device includes a capacitive sense array and is coupled to a stylus including a conductive tip and a conductive wire module. The conductive tip and wire module of the stylus are driven by a tip drive signal and a wire drive signal simultaneously. The tip and wire drive signals have a single stylus drive frequency. While the conductive tip and wire module are driven by the tip and wire drive signals, the electronic device scans the capacitive sense array to obtain a plurality of capacitive sense signals from a plurality of sense electrodes of the capacitive sense array. The electronic device generates a composite image of the capacitive sense array based on the plurality of capacitive sense signals and processes the composite image to determine one or more orientation parameters (e.g., a tilt angle) of the stylus with respect to a surface of the capacitive sense array.
    Type: Grant
    Filed: December 14, 2022
    Date of Patent: May 14, 2024
    Assignee: PARADE TECHNOLOGIES, LTD
    Inventors: Yi Ling, Pete Vavaroutsos, Roel Coppoolse
  • Patent number: 11985360
    Abstract: Live event production and distribution networks, systems, apparatuses and methods related thereto are described herein. The described innovations may be used not only to present live events to audiences, but to do so in a way that provides audience energy and feedback to the performer(s) (e.g., a band) in a manner akin to that which they receive during a traditional live performance, thereby energizing and motivating the performers to give the best live performance they can, even in the absence of a co-located live audience. Some or all of the audience members may be represented by a visual surrogate displayed on an audience feedback screen set up to be viewable by the performers. The screen may be sized to fill a curtain window of stage on which the performers are performing, and the performers may optionally interact with one or more people in the audience during the live performance.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: May 14, 2024
    Inventor: Zoltan Bathory
  • Patent number: 11983810
    Abstract: A method is performed at an electronic device with one or more processors and a non-transitory memory. The method includes obtaining hair curve data that represents a plurality of hair strands. Each of the plurality of hair strands includes a respective plurality of hair points. The method includes projecting the plurality of hair strands to a hair mesh that is associated with a virtual agent. The method includes rendering a first subset of the plurality of hair strands in order to generate a hair texture based on a corresponding portion of the projection. The method includes rendering the hair texture in association with the virtual agent in order to generate a display render. In some implementations, the method includes changing the number of hair strands rendered during a particular rendering cycle, enabling dynamic generation of hair textures across rendering cycles.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: May 14, 2024
    Assignee: APPLE INC.
    Inventor: Mariano Merchante
  • Patent number: 11983818
    Abstract: A measurement system includes a processor. Based on information measured by a camera that takes an image of a measurement target and an auxiliary object arranged on the measurement target, the processor acquires first point cloud data representing a three-dimensional geometry of the measurement target including the auxiliary object. Based on the first point cloud data and second point cloud data that is known and that represents a three-dimensional geometry of the measurement target, the processor eliminates point cloud data of the auxiliary object from the first point cloud data. The processor compares the first point cloud data, from which the point cloud data of the auxiliary object has been eliminated, with the second point cloud data. The processor displays information relating to a result of comparison on a display device.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: May 14, 2024
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yu Takeda, Hiroaki Nakamura
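A compact sketch of the two geometric steps in the abstract, assuming both clouds are small enough for a brute-force nearest-neighbour search in NumPy; the tolerance value and function names are illustrative.

```python
import numpy as np

def nearest_distances(src, ref):
    """For each point in `src`, the distance to its nearest neighbour in `ref`."""
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    diff = src[:, None, :] - ref[None, :, :]          # (Ns, Nr, 3)
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

def drop_auxiliary_points(measured, known_target, aux_tol=0.05):
    """Measured points far from the known target geometry are assumed to belong
    to the auxiliary object (markers, fixtures) and are eliminated."""
    measured = np.asarray(measured, dtype=float)
    d = nearest_distances(measured, known_target)
    return measured[d <= aux_tol]

def compare_to_reference(measured, known_target, aux_tol=0.05):
    """Deviation of the cleaned measurement from the reference geometry,
    e.g. for colour-coded display of the comparison result."""
    cleaned = drop_auxiliary_points(measured, known_target, aux_tol)
    return nearest_distances(cleaned, known_target)
```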
  • Patent number: 11983841
    Abstract: An information processing apparatus that generates an image to be displayed, based on shape data indicating a shape of an object and surface characteristics data indicating surface characteristics of the object includes an acquisition unit configured to obtain a plurality of pieces of the shape data with different resolutions and the surface characteristics data, a setting unit configured to set a display area including at least part of the object, and a generation unit configured to generate the image to be displayed, based on one of the plurality of pieces of shape data according to the display area and the surface characteristics data.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: May 14, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takayuki Tanaka
  • Patent number: 11978153
    Abstract: A method for determining a visible angle of a target object, an electronic device, and a storage medium are provided. The method includes: acquiring first point of interest POI data of the target object, second POI data of an occluder, and a position of an observation point; determining a first tangent line and a second tangent line of the target object, passing through the position of the observation point, according to the first POI data and the position of the observation point; determining a third tangent line and a fourth tangent line of the occluder, passing through the position of the observation point, according to the second POI data and the position of the observation point; and determining a target visible angle of the target object relative to the occluder according to the first tangent line, the second tangent line, the third tangent line, and the fourth tangent line.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: May 7, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventor: Lingguang Wang
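A simplified worked example of the tangent-line construction follows, reducing each POI footprint to a bounding circle (centre plus radius) so the two tangent directions from the observation point span an angular interval; real footprints are polygons and the interval clipping below ignores wrap-around across ±180°, so treat it purely as an illustration.

```python
import math

def angular_interval(center, radius, obs):
    """Angular interval [lo, hi] (radians) spanned by the two tangent lines
    from the observation point to a circle approximating the POI footprint."""
    dx, dy = center[0] - obs[0], center[1] - obs[1]
    dist = math.hypot(dx, dy)
    mid = math.atan2(dy, dx)
    half = math.asin(min(1.0, radius / dist))
    return mid - half, mid + half

def visible_angle(target, occluder, obs):
    """Angle of the target still visible after subtracting the part of its
    interval hidden behind the occluder's interval (no wrap-around handling)."""
    t_lo, t_hi = angular_interval(target["center"], target["radius"], obs)
    o_lo, o_hi = angular_interval(occluder["center"], occluder["radius"], obs)
    overlap = max(0.0, min(t_hi, o_hi) - max(t_lo, o_lo))
    return (t_hi - t_lo) - overlap

obs = (0.0, 0.0)
target = {"center": (10.0, 0.0), "radius": 3.0}
occluder = {"center": (5.0, 1.0), "radius": 1.0}
print(math.degrees(visible_angle(target, occluder, obs)))   # roughly half the target remains visible
```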
  • Patent number: 11978227
    Abstract: A system for determining a length of an object includes a rod, an imaging device couplable to the rod, a laser rangefinder, one or more sensors, and a processor. The processor can receive image data, the orientation of the rod relative to the surface, and the distance to the surface from the imaging device and can determine the length of the rod. A system for surveying trees within a plot includes an imaging device, one or more sensors for measuring at least one of a pitch, a roll, or a compass bearing of the imaging device, and one or more processors configured to receive images of the plot including trees within the plot, associate one or more sensor measurements with the images, store the images with their associated measurements, and generate a survey of trees in the plot using at least the images and the one or more sensor measurements.
    Type: Grant
    Filed: August 19, 2021
    Date of Patent: May 7, 2024
    Assignee: FOREST CARBON WORKS, PBC
    Inventor: Kyle Andrew Holland
  • Patent number: 11974881
    Abstract: A system and method for providing an anatomic orientation indicator with a patient-specific model of an anatomical structure of interest extracted from a three-dimensional (3D) ultrasound volume is provided. The method includes extracting the anatomical structure of interest from the 3D volume and generating a patient-specific model of the anatomical structure of interest. The method includes generating an anatomic orientation indicator including at least one mocked patient anatomy model of an anatomical structure adjacent the anatomical structure of interest at a position and orientation relative the patient-specific model. The method includes displaying the anatomic orientation indicator with the patient-specific model at a same first point of view. The method includes receiving an instruction to change a point of view of the patient-specific model to a second point of view and updating the displaying of the anatomic orientation indicator with the patient-specific model to the second point of view.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: May 7, 2024
    Assignee: GE Precision Healthcare LLC
    Inventor: Federico Veronesi
  • Patent number: 11978254
    Abstract: Systems and methods for video presentation and analytics for live sporting events are disclosed. At least two cameras are used for tracking objects during a live sporting event and generate video feeds to a server processor. The server processor is operable to match the video feeds and create a 3D model of the world based on the video feeds from the at least two cameras. 2D graphics are created from different perspectives based on the 3D model. Statistical data and analytical data related to object movement are produced and displayed on the 2D graphics. The present invention also provides a standard file format for object movement in space over a timeline across multiple sports.
    Type: Grant
    Filed: March 22, 2023
    Date of Patent: May 7, 2024
    Assignee: SPORTSMEDIA TECHNOLOGY CORPORATION
    Inventor: Gerard J. Hall
  • Patent number: 11978151
    Abstract: Aspects presented herein relate to methods and devices for graphics processing including an apparatus, e.g., a GPU. The apparatus may obtain an indication of a BVH structure including a plurality of nodes, wherein the BVH structure is associated with geometry data for a plurality of primitives in a scene, wherein each of the plurality of nodes is associated with one or more primitives, where a first level BVH includes a set of first nodes and a second level BVH includes a set of second nodes. The apparatus may also allocate information for a plurality of second nodes in the set of second nodes to at least one first node in the set of first nodes. Further, the apparatus may store the allocated information for the plurality of second nodes in the set of second nodes in the at least one first node in the set of first nodes.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: May 7, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Adimulam Ramesh Babu, Srihari Babu Alla, Avinash Seetharamaiah, Jonnala Gadda Nagendra Kumar
  • Patent number: 11978259
    Abstract: Systems and methods for operating a mobile platform. The methods comprise, by a computing device: obtaining a LiDAR point cloud; using the LiDAR point cloud to generate a track for a given object in accordance with a particle filter algorithm by generating states of a given object over time (each state has a score indicating a likelihood that a cuboid would be created given an acceleration value and an angular velocity value); using the track to train a machine learning algorithm to detect and classify objects based on sensor data; and/or causing the machine learning algorithm to be used for controlling movement of the mobile platform.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: May 7, 2024
    Assignee: Ford Global Technologies, LLC
    Inventor: Kevin James Player
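Below is a stripped-down sketch of the particle-filter idea: each particle carries an acceleration and yaw-rate hypothesis, is propagated one time step, and is scored by how well its predicted position explains the LiDAR points; the motion model, the scoring stub, and all names are assumptions, not the patent's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(states, dt=0.1):
    """states columns: x, y, heading, speed, accel, yaw_rate."""
    x, y, h, v, a, w = states.T
    nh = h + w * dt
    nv = v + a * dt
    nx = x + nv * np.cos(nh) * dt
    ny = y + nv * np.sin(nh) * dt
    return np.column_stack([nx, ny, nh, nv, a, w])

def score(states, lidar_points):
    """Stand-in likelihood: how close the LiDAR centroid is to each particle's
    predicted position (a real system would score the fitted cuboid)."""
    centroid = lidar_points[:, :2].mean(axis=0)
    d = np.linalg.norm(states[:, :2] - centroid, axis=1)
    return np.exp(-0.5 * (d / 0.5) ** 2)

def step(states, lidar_points):
    """One particle-filter update: propagate, weight, resample."""
    states = propagate(states)
    w = score(states, lidar_points) + 1e-12
    w = w / w.sum()
    idx = rng.choice(len(states), size=len(states), p=w)
    return states[idx]
```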
  • Patent number: 11979545
    Abstract: An information processing apparatus according to the present technology includes an image obtaining unit and a display control unit. The image obtaining unit obtains a plurality of first divided images obtained by dividing a first image showing a first location along a second direction substantially perpendicular to a first direction, and a plurality of second divided images obtained by dividing a second image showing a second location along the second direction. The display control unit arranges and simultaneously displays the plurality of first divided images and the plurality of second divided images along the first direction on a display device of a user at a third location.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: May 7, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Mari Saito, Kenji Sugihara
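A tiny sketch of the dividing and interleaving described above, assuming both source images are NumPy arrays of the same size, the first direction is horizontal, and the division is into equal vertical strips; the strip count is arbitrary.

```python
import numpy as np

def interleave_locations(first_image, second_image, num_strips=8):
    """Divide each image into vertical strips (cuts run along the second,
    perpendicular direction) and lay the strips out alternately along the
    first, horizontal direction: A0, B0, A1, B1, ..."""
    a_strips = np.array_split(first_image, num_strips, axis=1)
    b_strips = np.array_split(second_image, num_strips, axis=1)
    interleaved = [s for pair in zip(a_strips, b_strips) for s in pair]
    return np.concatenate(interleaved, axis=1)

# Usage: a viewer at a third location sees both remote locations at once.
a = np.zeros((4, 16), dtype=np.uint8)        # image of the first location
b = np.full((4, 16), 255, dtype=np.uint8)    # image of the second location
print(interleave_locations(a, b, num_strips=4).shape)   # (4, 32)
```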
  • Patent number: 11978177
    Abstract: A method and system of image processing of omnidirectional images with a viewpoint shift.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: May 7, 2024
    Assignee: Intel Corporation
    Inventors: Radka Tezaur, Niloufar Pourian
  • Patent number: 11972622
    Abstract: A method for updating a coordinate of an annotated point in a digital image due to camera movement is performed by an image processing device, which obtains a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative the scene. The current digital image is associated with at least one annotated point. Each at least one annotated point has a respective coordinate in the current digital image. The method comprises identifying an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The method comprises updating the coordinate of each at least one annotated point in accordance with the identified amount of movement and a camera homography.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: April 30, 2024
    Assignee: Axis AB
    Inventors: Jiandan Chen, Haiyan Xie
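A short sketch of the final update step, assuming the homography H between the previous and current image has already been estimated from the position-indicative information; applying it to an annotated point is a homogeneous multiply followed by de-homogenisation.

```python
import numpy as np

def update_annotation(point_xy, H):
    """Map an annotated point from the previous image into the current image
    using the 3x3 homography H estimated for the camera movement."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = H @ p
    return q[0] / q[2], q[1] / q[2]

# Example: a pure 20-pixel horizontal pan expressed as a homography.
H = np.array([[1.0, 0.0, -20.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(update_annotation((100.0, 50.0), H))   # (80.0, 50.0)
```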
  • Patent number: 11973991
    Abstract: A processor may initiate a recording. The processor may segment the recording into one or more segments. The processor may determine, based on the identification of a primary object in a first segment of the recording, a first bit rate for the first segment of the first recording. The processor may preload one or more subsequent segments that include the primary object at the first bit rate. The processor may preload each of the one or more subsequent segments with a secondary object at a second bit rate. The second bit rate may be lower than the first bit rate. The processor may display the recording to the user.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: April 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Balamurugaramanathan Sivaramalingam, Sathya Santhar, Samuel Mathew Jawaharlal, Sarbajit K. Rakshit
  • Patent number: 11969651
    Abstract: An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment. The augmented reality system generates a first 3D map of the environment around the client device based on the captured image data. The server receives image data captured by a second client device in the environment and generates a second 3D map of the environment. The server links the first and second 3D maps together in a singular 3D map. The singular 3D map may be a graphical representation of the real world using nodes that represent 3D maps generated by image data captured at client devices and edges that represent transformations between the nodes.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: April 30, 2024
    Assignee: NIANTIC, INC.
    Inventors: Anvith Ekkati, Purna Sowmya Munukutla, Dharini Krishna, Peter James Turner, Gandeevan Raghuraman, Si ying Diana Hu
  • Patent number: 11972536
    Abstract: The present invention relates to a method for tracking progress of the construction of objects, in particular walls comprised in a building based on 3D digital representation. Building Information Modeling (BIM) may provide a digital representation of the physical and functional characteristics of a place, such as a building comprising walls and other objects.
    Type: Grant
    Filed: November 3, 2023
    Date of Patent: April 30, 2024
    Assignee: DALUX APS
    Inventor: Anders Rong
  • Patent number: 11972061
    Abstract: Provided is an input apparatus including an acquisition circuit that acquires a captured image of a user, a detection circuit that detects a first hand of the user from the captured image acquired by the acquisition circuit, and a display circuit that displays, when a second hand different from the first hand is detected during tracking of the first hand detected by the detection circuit, notification information corresponding to a distance between the first hand and the second hand on a display screen.
    Type: Grant
    Filed: April 21, 2023
    Date of Patent: April 30, 2024
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Masahiko Takashima, Teruhiko Matsuoka, Tomoya Ishikura
  • Patent number: 11966051
    Abstract: The device (3) includes a display panel (6), a shutter panel (7), and a controller (8). The display panel (6) includes subpixels for displaying a parallax image including a first image and a second image having parallax between the images. The shutter panel (7) is configured to define a traveling direction of image light representing the parallax image from the display panel (6). The controller (8) is configured to change, in a certain time cycle, areas on the shutter panel in a light transmissive state to transmit the image light with at least a certain transmittance and areas in a light attenuating state to transmit the image light with a transmittance lower than the transmittance in the light transmissive state, and is configured to change the subpixels to display the first image and the second image based on positions of the areas in the light transmissive state.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: April 23, 2024
    Assignee: KYOCERA Corporation
    Inventor: Kaoru Kusafuka
  • Patent number: 11967014
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiving systems and, in some implementations can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: April 23, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Patent number: 11966999
    Abstract: An electronic apparatus performs a method of real time simulation of physical visual effect on one or more Graphics Processing Units (GPUs). The method includes a plurality of time steps. Each of the time steps includes: building up a mapping between particles and background grid blocks; sorting the particles to a level of granularity; transferring momenta and masses of the particles to grid nodes on the background grid blocks to compute forces on the grid nodes; updating velocities and resolving collisions from the computed forces on the grid nodes; and applying the updated velocities back to the particles from the grid nodes and advecting the particles. In some embodiments, the frequency of building up and sorting is reduced compared with the frequency of transferring, updating, and applying in the plurality of time steps.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: April 23, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Yun Fei, Ming Gao
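A heavily reduced sketch of one such time step, using nearest-node particle-in-cell transfers on a 1D grid so that the transfer, grid update, and gather structure stays visible; the patent's GPU implementation, block sorting, and collision handling are not reproduced, and the reduced rebuild frequency is noted only in a comment.

```python
import numpy as np

def time_step(x, v, m, dx=0.1, dt=1e-3, gravity=-9.8, num_nodes=64):
    """One simplified PIC step on a 1D grid: scatter momentum/mass to grid
    nodes, update grid velocities with gravity, gather back and advect."""
    nodes = np.clip(np.round(x / dx).astype(int), 0, num_nodes - 1)
    grid_m = np.bincount(nodes, weights=m, minlength=num_nodes)
    grid_p = np.bincount(nodes, weights=m * v, minlength=num_nodes)
    grid_v = np.where(grid_m > 0, grid_p / np.where(grid_m > 0, grid_m, 1.0), 0.0)
    grid_v += gravity * dt                   # grid-side force/velocity update
    grid_v[0] = max(grid_v[0], 0.0)          # crude floor collision at node 0
    v_new = grid_v[nodes]                    # gather velocities back (PIC)
    x_new = x + v_new * dt                   # advect particles
    return x_new, v_new

# The mapping/sorting of particles into background grid blocks (the `nodes`
# array here) can be rebuilt every few steps rather than every step, which is
# the frequency reduction the abstract refers to.
```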
  • Patent number: 11964400
    Abstract: A method for controlling a robot to pick up an object in various positions. The method includes: defining a plurality of reference points on the object; mapping a first camera image of the object in a known position onto a first descriptor image; identifying the descriptors of the reference points from the first descriptor image; mapping a second camera image of the object in an unknown position onto a second descriptor image; searching the identified descriptors of the reference points in the second descriptor image; ascertaining the positions of the reference points in the three-dimensional space in the unknown position from the found positions; and ascertaining a pickup pose of the object for the unknown position from the ascertained positions of the reference points.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 23, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andras Gabor Kupcsik, Marco Todescato, Markus Spies, Nicolai Waniek, Philipp Christian Schillinger, Mathias Buerger
  • Patent number: 11967022
    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using a parametric mathematical modeling. A variety of synthetic 3D road surfaces may be generated by modeling a 3D road surface using varied parameters to simulate changes in road direction and lateral surface slope. In an example embodiment, a synthetic 3D road surface may be created by modeling a longitudinal 3D curve and expanding the longitudinal 3D curve to a 3D surface, and the resulting synthetic 3D surface may be sampled to form a synthetic ground truth projection image (e.g., a 2D height map). To generate corresponding input training data, a known pattern that represents which pixels may remain unobserved during 3D structure estimation may be generated and applied to a ground truth projection image to simulate a corresponding sparse projection image.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
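One way such synthetic training pairs might be produced is sketched below, assuming the longitudinal curve is a sine along the heading direction, the lateral slope is constant, and sparsity is simulated by zeroing a random subset of pixels; the parameter ranges and mask model are placeholders, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_road_height_map(width=64, length=128, curve_amp=0.5,
                              curve_period=40.0, lateral_slope=0.02):
    """Ground-truth 2D height map: a longitudinal 3D curve (sine along the
    driving direction) expanded laterally with a constant cross slope."""
    s = np.arange(length)[:, None]                   # longitudinal samples
    t = np.arange(width)[None, :] - width / 2.0      # lateral offset
    height = curve_amp * np.sin(2 * np.pi * s / curve_period) + lateral_slope * t
    return height.astype(np.float32)

def sparsify(height_map, keep_fraction=0.15):
    """Simulated sparse input: keep only the pixels a real 3D structure
    estimator would observe, zeroing the rest (mask drawn at random here)."""
    mask = rng.random(height_map.shape) < keep_fraction
    return np.where(mask, height_map, 0.0), mask

gt = synthetic_road_height_map()
sparse_input, observed = sparsify(gt)     # training pair: (sparse_input, gt)
```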
  • Patent number: 11967020
    Abstract: A distributed, cross reality system efficiently and accurately compares location information that includes image frames. Each of the frames may be represented as a numeric descriptor that enables identification of frames with similar content. The resolution of the descriptors may vary for different computing devices in the distributed system based on degree of ambiguity in image comparisons and/or computing resources for the device. A descriptor computed for a cloud-based component operating on maps of large areas that can result in ambiguous identification of multiple image frames may use high resolution descriptors. High resolution descriptors reduce computationally intensive disambiguation processing. A portable device, which is more likely to operate on smaller maps and less likely to have the computational resources to compute a high resolution descriptor, may use a lower resolution descriptor.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: April 23, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Elad Joseph, Gal Braun, Ali Shahrokni
  • Patent number: 11961250
    Abstract: A light-field image generation system including a shape information acquisition server that acquires shape information indicating a three-dimensional shape of an object, and an image generation server that is provided with a shape reconstruction unit that reconstructs the three-dimensional shape of the object as a virtual three-dimensional shape in a virtual space based on the shape information and a light-field image generation unit that generates a light-field image of the virtual three-dimensional shape at a predetermined viewing point in the virtual space.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: April 16, 2024
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventor: Tetsuro Morimoto
  • Patent number: 11961176
    Abstract: Disclosed approaches provide for interactions of secondary rays of light transport paths in a virtual environment to share lighting contributions when determining lighting conditions for a light transport path. Interactions may be shared based on similarities in characteristics (e.g., hit locations), which may define a region in which interactions may share lighting condition data. The region may correspond to a texel of a texture map and lighting contribution data for interactions may be accumulated to the texel spatially and/or temporally, then used to compute composite lighting contribution data that estimates radiance at an interaction. Approaches are also provided for reprojecting lighting contributions of interactions to pixels to share lighting contribution data from secondary bounces of light transport paths while avoiding potential over blurring.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventor: Jacopo Pantaleoni
  • Patent number: 11961428
    Abstract: Provided is a method including: obtaining a plurality of images corresponding to a plurality of views; identifying at least one view region overlapping with a sub-pixel from among a plurality of view regions corresponding to the plurality of views; identifying a data value corresponding to the sub-pixel for each of at least one image corresponding to the at least one view region; determining an application degree of the data value for each of the at least one image, based on a level of overlap between the sub-pixel and the at least one view region, and determining an output value of the sub-pixel based on a data value adjusted according to the determined application degree; and outputting an image based on output values respectively determined for a plurality of sub-pixels including the sub-pixel.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Kangwon Jeon
  • Patent number: 11960653
    Abstract: Systems and methods herein describe a multi-modal interaction system. The multi-modal interaction system receives a selection of an augmented reality (AR) experience within an application on a computer device, displays a set of AR objects associated with the AR experience on a graphical user interface (GUI) of the computer device, displays textual cues associated with the set of augmented reality objects on the GUI, receives a hand gesture and a voice command, modifies a subset of augmented reality objects of the set of augmented reality objects based on the hand gesture and the voice command, and displays the modified subset of augmented reality objects on the GUI.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: April 16, 2024
    Assignee: Snap Inc.
    Inventors: Jonathan Solichin, Xinyao Wang
  • Patent number: 11960009
    Abstract: Techniques for determining an object contour are discussed. Depth data associated with an object may be received. The depth data, such as lidar data, can be projected onto a two-dimensional plane. A first convex hull may be determined based on the projected lidar data. The first convex hull may include a plurality of boundary edges. A longest boundary edge, having a first endpoint and a second endpoint, can be determined. An angle can be determined based on the first endpoint, the second endpoint, and an interior point in the interior of the first convex hull. The longest boundary edge may be replaced with a first segment based on the first endpoint and the interior point, and a second segment based on the interior point and the second endpoint. An updated convex hull can be determined based on the first segment and the second segment.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: April 16, 2024
    Assignee: ZOOX, INC.
    Inventors: Yuanyuan Chen, Zeng Wang
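A rough sketch of one refinement pass, with the simplifying assumptions that the "interior point" is the non-hull point closest to the longest boundary edge and that the angle test is omitted; SciPy's ConvexHull supplies the initial hull over the projected 2D points.

```python
import numpy as np
from scipy.spatial import ConvexHull

def refine_contour(points_2d):
    """One refinement pass: build the convex hull of the projected LiDAR
    points, find the longest boundary edge, pick an interior point near that
    edge, and replace the edge with two segments through it, giving a tighter
    (possibly concave) object contour. Returns the contour as vertex indices."""
    pts = np.asarray(points_2d, dtype=float)
    hull = ConvexHull(pts)
    contour = list(hull.vertices)                     # counter-clockwise order

    # Longest boundary edge (pair of consecutive contour vertices).
    edges = [(contour[i], contour[(i + 1) % len(contour)]) for i in range(len(contour))]
    lengths = [np.linalg.norm(pts[a] - pts[b]) for a, b in edges]
    k = int(np.argmax(lengths))
    a, b = edges[k]

    # Interior point: the closest non-hull point to the midpoint of that edge.
    interior = [i for i in range(len(pts)) if i not in hull.vertices]
    if not interior:
        return contour
    mid = (pts[a] + pts[b]) / 2.0
    c = min(interior, key=lambda i: np.linalg.norm(pts[i] - mid))

    # Replace edge (a, b) with segments (a, c) and (c, b).
    contour.insert(contour.index(a) + 1, c)
    return contour
```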
  • Patent number: 11957978
    Abstract: The present disclosure describes approaches to camera re-localization that improve the speed and accuracy with which pose estimates are generated by fusing output of a computer vision algorithm with data from a prior model of a geographic area in which a user is located. For each candidate pose estimate output by the algorithm, a game server maps the estimate to a position on the prior model (e.g., a specific cell on a heatmap-style histogram) and retrieves a probability corresponding to the mapped position. A data fusion module fuses, for each candidate pose estimate, a confidence score generated by the computer vision algorithm with the location probability from the prior model to generate an updated confidence score. If an updated confidence score meets or exceeds a score threshold, a re-localization module initiates a location-based application (e.g., a parallel reality game) based on the associated candidate pose estimate.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: April 16, 2024
    Assignee: NIANTIC, INC.
    Inventors: Ben Benfold, Victor Adrian Prisacariu
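A bare-bones sketch of the fusion rule, assuming the prior model is a 2D histogram over map cells and that multiplying the vision confidence by the cell probability, then thresholding, is an acceptable stand-in for whatever fusion the patent actually claims; all names and values are illustrative.

```python
def cell_of(pose, cell_size=10.0):
    """Map a candidate pose estimate (x, y, heading) to a heatmap cell index."""
    x, y, _ = pose
    return int(x // cell_size), int(y // cell_size)

def fuse_scores(candidates, prior_histogram, threshold=0.4):
    """candidates: list of (pose, cv_confidence) from the computer-vision
    re-localizer. prior_histogram: dict {cell: probability} from the prior
    model of the area. Returns the poses whose fused score clears the
    threshold and could therefore start the location-based application."""
    accepted = []
    for pose, conf in candidates:
        prior = prior_histogram.get(cell_of(pose), 0.0)
        fused = conf * prior              # stand-in fusion rule
        if fused >= threshold:
            accepted.append((pose, fused))
    return accepted

prior = {(3, 7): 0.9, (3, 8): 0.1}
candidates = [((35.0, 72.0, 1.2), 0.8), ((35.0, 81.0, 1.1), 0.9)]
print(fuse_scores(candidates, prior))     # only the first candidate survives
```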
  • Patent number: 11951395
    Abstract: This application relates to a method for displaying a marker element in a virtual scene performed at a terminal. The method includes: receiving a marking request from a user of the terminal, wherein the user controls a current virtual object rendered in a display interface of the virtual scene; in response to the marking request, determining a target virtual item in the display interface of the virtual scene; obtaining graphic data of a marker element when a distance between the target virtual item and the current virtual object in the virtual scene is within a predefined distance, the marker element being a graphic element used for indicating a location of the target virtual item in the virtual scene; and rendering the marker element according to the graphic data at a designated location adjacent the target virtual item in the display interface of the virtual scene.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: April 9, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Yourui Fan
  • Patent number: 11954813
    Abstract: A three-dimensional scene constructing method, apparatus and system, and a storage medium. The three-dimensional scene constructing method includes: acquiring point cloud data of a key object and a background object in a target scene, wherein the point cloud data of the key object comprises three-dimensional information and corresponding feature information, and the point cloud data of the background object at least comprises three-dimensional information; establishing a feature database of the target scene, wherein the feature database at least comprises a key object feature library for recording three-dimensional information and feature information of the key object; performing registration and fusion on the point cloud data of the key object and the point cloud data of the background object, so as to obtain a three-dimensional model of the target scene; and when updating the three-dimensional model, reconstructing the three-dimensional model in a regional manner according to the feature database.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: April 9, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Youxue Wang, Xiaohui Ma, Kai Geng, Mengjun Hou, Qian Ha