Three-dimension Patents (Class 345/419)
  • Patent number: 11615617
    Abstract: Systems and methods for video presentation and analytics for live sporting events are disclosed. At least two cameras are used for tracking objects during a live sporting event and generate video feeds to a server processor. The server processor is operable to match the video feeds and create a 3D model of the world based on the video feeds from the at least two cameras. 2D graphics are created from different perspectives based on the 3D model. Statistical data and analytical data related to object movement are produced and displayed on the 2D graphics. The present invention also provides a standard file format for object movement in space over a timeline across multiple sports.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: March 28, 2023
    Assignee: SPORTSMEDIA TECHNOLOGY CORPORATION
    Inventor: Gerard J. Hall
  • Patent number: 11613082
    Abstract: A method for producing a support structure of a 3D model for 3D printing is provided. A method for producing a support according to an embodiment of the present invention comprises the steps of: dividing a surface constituting a 3D model into multiple surface patches; classifying respective divided surface patches according to geometric characteristics; and producing supports corresponding to the classified characteristics with regard to respective surface patches. Accordingly, during metal laminate manufacturing, the output stability may be improved while reducing the support producing process time. In addition, the surfaces may be expressed by different colors according to the result of geometric characteristic classification, and the supports may also be expressed by different colors according to the type, thereby playing the role of guide lines such that the user can recognize the shape of the surfaces and the type of supports to be installed on the corresponding surfaces.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: March 28, 2023
    Assignee: Korea Electronics Technology Institute
    Inventors: Hwa Seon Shin, Hye In Lee, Sung Hwan Chun, Ji Min Jang, Sung Hun Park
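    A common geometric test behind classifying surface patches for support generation is the overhang angle between a patch normal and the build direction. The short Python sketch below illustrates that general idea only and is not taken from the patent; the 45-degree threshold, the facet-normal layout, and the function name are illustrative assumptions.
      import numpy as np

      # Hedged sketch: flag downward-facing facets whose overhang angle exceeds a
      # threshold as candidates for supports. Threshold and data layout are assumed.
      def classify_facets_for_support(facet_normals, build_dir=(0.0, 0.0, 1.0), overhang_deg=45.0):
          """Return a boolean mask: True where a facet likely needs a support."""
          normals = np.asarray(facet_normals, dtype=float)
          normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
          down = -np.asarray(build_dir, dtype=float)
          cos_down = normals @ down                      # alignment with the downward direction
          return cos_down > np.cos(np.radians(overhang_deg))

      # A horizontal downward-facing facet needs support; an upward-facing one does not.
      print(classify_facets_for_support([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]]))  # [ True False]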
  • Patent number: 11611811
    Abstract: The present disclosure provides a video processing method, applied to an unmanned aerial vehicle (UAV) equipped with a camera device for capturing videos. The video processing method includes in response to the UAV moving in accordance with a flight trajectory, controlling the camera device of the UAV to obtain a first video segment when reaching a first photography point; in response to reaching a second photography point as the UAV continues moving, controlling the camera device of the UAV to capture environmental images to obtain a panoramic image, and generating a second video segment based on the panoramic image; and generating a target video based on the first video segment and the second video segment.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: March 21, 2023
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Wei Zhang, Ang Liu
  • Patent number: 11610368
    Abstract: A method and apparatus are provided for tessellating patches of surfaces in a tile based three dimensional computer graphics rendering system. For each tile in an image a per tile list of primitive indices is derived for tessellated primitives which make up a patch. Hidden surface removal is then performed on the patch and any domain points which remain after hidden surface removal are derived. The primitives are then shaded for display.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: March 21, 2023
    Assignee: Imagination Technologies Limited
    Inventor: John William Howson
  • Patent number: 11610349
    Abstract: A method for rendering a computer image includes, for each pixel of a plurality of N×M pixels forming a tile, determining a plurality of masks for the pixel, wherein N and M denote integers larger than 1, and wherein each mask identifies a respective subset of the pixels that are equidistant from the pixel and located at a respective distance from the pixel. The method further includes: determining an active mask for the tile, the active mask identifying active pixels of the pixels, each of the active pixels being determined as having color information; based on the active mask, identifying an empty pixel of the pixels, the empty pixel lacking color information; and determining at least a first nearest active pixel that is nearest to the empty pixel. The determining includes comparing the active mask with at least one mask of the masks for the empty pixel.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: March 21, 2023
    Assignee: DREAMWORKS ANIMATION LLC
    Inventor: Toshiaki Kato
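    To make the mask-based lookup concrete, the sketch below scans outward from an empty pixel one distance ring at a time and intersects each ring with the active mask until a colored pixel is found. It only illustrates the idea; the Chebyshev distance metric, the 8x8 tile, and the function name are assumptions rather than the patented mask construction.
      import numpy as np

      def nearest_active(active, py, px):
          """Find a nearest active (colored) pixel to the empty pixel (py, px) in a tile."""
          h, w = active.shape
          ys, xs = np.mgrid[0:h, 0:w]
          dist = np.maximum(np.abs(ys - py), np.abs(xs - px))   # distance of every pixel from (py, px)
          for d in range(1, max(h, w)):
              ring = (dist == d)                 # mask of pixels equidistant from the empty pixel
              hits = ring & active               # compare the ring mask with the active mask
              if hits.any():
                  return tuple(int(v) for v in np.argwhere(hits)[0])
          return None                            # no active pixel anywhere in the tile

      tile_active = np.zeros((8, 8), dtype=bool)
      tile_active[2, 5] = True                   # the only pixel with color information
      print(nearest_active(tile_active, py=6, px=1))  # (2, 5)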
  • Patent number: 11611755
    Abstract: A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming and other light field display applications is provided applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depths as the distance between a given layer and the plane of the display increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed, (layered) core representation of the multi-dimensional scene data is produced at predictable rates, reconstructed and merged at the light field display in real-time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: March 21, 2023
    Assignee: Avalon Holographics Inc.
    Inventors: Matthew Hamilton, Chuck Rumbolt, Donovan Benoit, Matthew Troke, Robert Lockyer
  • Patent number: 11610361
    Abstract: A method and intersection testing module are provided in a ray tracing system for determining whether a ray intersects a 3D axis-aligned box. The box represents a volume defined by a front-facing plane and a back-facing plane for each of the dimensions of the three-dimensional axis-aligned box. Scaled ray components are determined, wherein a third scaled ray component equals 1. A scaled minimum culling distance and a scaled maximum culling distance are determined. Determined cross-multiplication values are used to identify which of the front-facing planes intersects the ray furthest along the ray and identify which of the back-facing planes intersects the ray least far along the ray. It is determined whether the ray intersects the identified front-facing plane of the box at a position that is no further along the ray than the position at which the ray intersects the identified back-facing plane.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: March 21, 2023
    Assignee: Imagination Technologies Limited
    Inventor: Rostam King
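    The test described above builds on the familiar slab method for ray/axis-aligned-box intersection: the entry distance is the furthest intersection with a front-facing plane and the exit distance is the nearest intersection with a back-facing plane. The Python sketch below shows that baseline test only, with simplified handling of zero direction components; it does not reproduce the patented scaled-component and cross-multiplication formulation.
      def ray_intersects_aabb(origin, inv_dir, box_min, box_max, t_min=0.0, t_max=float("inf")):
          """Slab test: inv_dir holds reciprocals of the ray direction components."""
          for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
              t1 = (lo - o) * inv_d                # distance to one plane of the slab
              t2 = (hi - o) * inv_d                # distance to the opposite plane
              near, far = min(t1, t2), max(t1, t2)
              t_min = max(t_min, near)             # furthest front-facing plane so far
              t_max = min(t_max, far)              # nearest back-facing plane so far
          return t_min <= t_max                    # hit only if entry is not beyond exit

      # Ray from the origin along +X against a box spanning [1, 2] on every axis: hit.
      print(ray_intersects_aabb((0.0, 1.5, 1.5), (1.0, 1e9, 1e9), (1, 1, 1), (2, 2, 2)))  # True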
  • Patent number: 11610667
    Abstract: A method for the automated annotation of radiology findings includes: receiving a set of inputs, determining a set of outputs based on the set of inputs, assigning labels to the set of inputs, and annotating the set of inputs based on the labels. Additionally, the method can include any or all of: presenting annotated inputs to a user, comparing multiple sets of inputs, transmitting a set of outputs to a radiologist report, or any other suitable processes.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: March 21, 2023
    Assignee: RAD AI, INC.
    Inventors: Doktor Gurson, Jeffrey Chang, Jeffrey Snell, Eric Purdy, Brandon Duderstadt, Deeptanshu Jha
  • Patent number: 11604574
    Abstract: According to various embodiments, an electronic device may comprise: a first camera arranged on a first surface of a housing of the electronic device; a second camera arranged apart from the first camera on the first surface; a display; and a processor set to process at least a portion of a first inputted image by applying a first image effect and display same on the display, on the basis of a first object area for the first inputted image obtained by using phase difference information of the first inputted image from among the first inputted image obtained from the first camera or a second inputted image obtained from the second camera, and to process at least a portion of the first inputted image by applying a second image effect and displaying same on the display, on the basis of a second object area for the first inputted image obtained by using time difference information between the first inputted image and the second inputted image.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: March 14, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Woo-Yong Lee, Hye-Jin Kang, Jae-Yun Song, Min-Sheok Choi, Ki-Huk Lee, Cheol-Ho Cheong
  • Patent number: 11605208
    Abstract: A system and method for creating, managing, and displaying a limited edition, serialized 3D digital collectible, and rarity classifications of the collectibles and packs in which they are distributed. The 3D digital collectible may include at least one digital media file and associated data. A digital media file may relate to a visual representation of an event during an entertainment experience, such as a video highlight or related images, and the data may be data associated with the event, experience, and/or the digital media file.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: March 14, 2023
    Assignee: Dapper Labs, Inc.
    Inventors: Courtney McNeil, Denise Cascelli Schwenck Bismarque
  • Patent number: 11604557
    Abstract: A field user interface that displays 3D objects, receives a selection of an object by the user, and uses a comparison between sizes of objects and thresholds to perform the selection, in order that the selected objects are consistent with the intent of the user.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: March 14, 2023
    Assignee: DASSAULT SYSTEMES
    Inventors: Guillaume Dayde, Christophe Delfino
  • Patent number: 11602887
    Abstract: An apparatus and a method of additive manufacturing is provided. The apparatus includes a light source configured to emit light between 0 and 500 nm in wavelength. At least one vessel is provided that includes a chamber and a transparent base. The chamber contains a volume of liquid print material. The transparent base being made of Fluorinated ethylene propylene (FEP) or Polydimethylsiloxane (PDMS) through which the relevant wavelength can be received into the chamber to cure a portion of the volume of print material through at least one mask, the at least one mask being made of paper, polymer, glass, metal, composites, or laminated substrates, the at least one mask defining a series of patterns associated with layers of a three-dimensional (3D) object, the at least one mask being position-able between the light source and the transparent base, via a mechanism, wherein the at least one mask defines the pattern of the light that is received through the transparent base and into the print material.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: March 14, 2023
    Assignee: CALT DYNAMICS LIMITED
    Inventors: Ross Lawless, Irene Villafane, Warren Katz
  • Patent number: 11603646
    Abstract: Techniques for generating earthmoving flow vectors for assisting control of a construction machine are disclosed. A design elevation map of an earthmoving site may be obtained. The design elevation map may include a plurality of design elevation points of the earthmoving site. An actual elevation map of the earthmoving site may be obtained. The actual elevation map may include a plurality of actual elevation points of the earthmoving site. A dual-layer input graph may be formed based on the design elevation map and the actual elevation map. The dual-layer input graph may include a plurality of nodes related through a plurality of connections. A flow graph may be generated by solving the dual-layer input graph. The flow graph may include a set of flow vectors indicating movement of the earth within the earthmoving site.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: March 14, 2023
    Assignee: Caterpillar Trimble Control Technologies LLC
    Inventors: Nathan Jones, Joseph Corbett-Davies
  • Patent number: 11604516
    Abstract: A method includes displaying, on a touchscreen, a video comprising a video frame and determining, based on a saliency map of the video frame, a region of interest in the video frame. The method also includes detecting a touch on a region of the touchscreen while the video frame is displayed and generating a haptic response in response to determining that the region of the touchscreen overlaps with the region of interest.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: March 14, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan M. Goldberg, Daniel L. Baker, Jackson A. Rogow
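    The core decision in the abstract is a region-overlap test: threshold the saliency map into a region of interest, model the touch as a small neighborhood on the screen, and fire the haptic response only when the two intersect. The sketch below illustrates that check; the threshold, touch radius, and array shapes are illustrative assumptions, not values from the patent.
      import numpy as np

      def touch_in_region_of_interest(saliency, touch_yx, radius=2, thresh=0.5):
          roi = saliency >= thresh                             # region of interest from the saliency map
          ys, xs = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
          touched = (ys - touch_yx[0]) ** 2 + (xs - touch_yx[1]) ** 2 <= radius ** 2
          return bool((roi & touched).any())                   # overlap => generate haptic response

      saliency_map = np.zeros((4, 6))
      saliency_map[1, 4] = 0.9                                 # salient spot in the video frame
      if touch_in_region_of_interest(saliency_map, touch_yx=(1, 3)):
          print("trigger haptic response")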
  • Patent number: 11600049
    Abstract: Techniques for estimating a perimeter of a room environment at least partially enclosed by a set of adjoining walls using posed images are disclosed. A set of images and a set of poses are obtained. A depth map is generated based on the set of images and the set of poses. A set of wall segmentation maps are generated based on the set of images, each of the set of wall segmentation maps indicating a target region of a corresponding image that contains the set of adjoining walls. A point cloud is generated based on the depth map and the set of wall segmentation maps, the point cloud including a plurality of points that are sampled along portions of the depth map that align with the target region. The perimeter of the environment along the set of adjoining walls is estimated based on the point cloud.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: March 7, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Zhao Chen, Ameya Pramod Phalak, Vijay Badrinarayanan
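    The point-cloud step in the abstract amounts to back-projecting only the depth pixels that fall inside the wall-segmentation mask, using the camera intrinsics and pose. The sketch below shows that sampling step under a simple pinhole model; the intrinsics, pose, and input shapes are illustrative assumptions, and the final perimeter fit is omitted.
      import numpy as np

      def wall_points_from_depth(depth, wall_mask, K, cam_to_world):
          """Back-project wall-masked depth pixels into world-space 3D points."""
          v, u = np.nonzero(wall_mask)                  # pixel coordinates inside the wall region
          z = depth[v, u]
          x = (u - K[0, 2]) * z / K[0, 0]               # pinhole back-projection
          y = (v - K[1, 2]) * z / K[1, 1]
          pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
          return (cam_to_world @ pts_cam.T).T[:, :3]    # points expressed in world coordinates

      K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
      depth = np.full((480, 640), 3.0)                  # a flat wall 3 m away, for illustration
      mask = np.zeros((480, 640), dtype=bool)
      mask[100:110, 200:210] = True                     # wall-segmentation target region
      print(wall_points_from_depth(depth, mask, K, np.eye(4)).shape)  # (100, 3)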
  • Patent number: 11600027
    Abstract: Certain example embodiments relate to an electronic device, including a user interface, and processing resources including at least one processor and a memory. The memory stores a program executable by the processing resources to simulate a view of an image through at least one viewer-selected product that is virtually interposed between a viewer using the electronic device and the image by performing functionality including: acquiring the image; facilitating viewer selection of the at least one product in connection with the user interface; retrieving display properties associated with the at least one viewer-selected product; generating, for each said viewer-selected product, a filter to be applied to the acquired image based on retrieved display properties; and generating, for display via the electronic device, an output image corresponding to the generated filter(s) being applied to the acquired image. The electronic device in certain example embodiments may be a smartphone, tablet, and/or the like.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: March 7, 2023
    Assignee: GUARDIAN GLASS, LLC
    Inventors: Alexander Sobolev, Vijayen S. Veerasamy
  • Patent number: 11601675
    Abstract: Disclosed herein is a method for transmitting point cloud data, including encoding point cloud data, and/or transmitting a bitstream containing the point cloud data and signaling information about the point cloud data. Disclosed herein is a method for receiving point cloud data, including receiving a bitstream containing point cloud data, and/or decoding the point cloud data in the bitstream.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: March 7, 2023
    Assignee: LG ELECTRONICS INC.
    Inventors: Yousun Park, Sejin Oh, Hyejung Hur
  • Patent number: 11600216
    Abstract: According to an embodiment of the present disclosure, a display device may include a display configured to display an image, a user input interface configured to receive a remote control input from a remote control, and a controller configured to receive a user input, obtain a type of the received user input, and change a display property according to the obtained type of the user input, wherein the type of the user input is one of a remote control input type or a touch input.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: March 7, 2023
    Assignee: LG ELECTRONICS INC.
    Inventors: Hyeseung Lee, Donghee Lee
  • Patent number: 11600050
    Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then further iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate a location of the user-input keypoints. The resulting 6D pose estimate is used to assist in replacing or superimposing the physical 3D object with digital or virtual content in an augmented reality (AR) frame.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: March 7, 2023
    Assignee: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
  • Patent number: 11593533
    Abstract: A design application is configured to visualize and explore large-scale generative design datasets. The design application includes a graphical user interface (GUI) engine that generates a design explorer, a composite explorer, and a tradeoff explorer. The design explorer displays a visualization of a multitude of design options included in a design space. The design explorer allows a user to filter the design space based on input parameters that influence a generative design process as well as various design characteristics associated with the different design options. The composite explorer displays a fully interactive composite of multiple different design options. The composite explorer exposes various tools that allow the user to filter the design space via interactions with the composite. The tradeoff explorer displays a tradeoff space based on different rankings of design options. The different rankings potentially correspond to competing design characteristics specified by different designers.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: February 28, 2023
    Assignee: AUTODESK, INC.
    Inventors: Tovi Grossman, Erin Bradner, George Fitzmaurice, Ali Baradaran Hashemi, Michael Glueck, Justin Frank Matejka
  • Patent number: 11593932
    Abstract: Methods and systems for processing medical images. One method includes, in response to startup of an application using an algorithm, creating a server process supporting a programming language associated with the algorithm and loading a plurality of deep learning models used by the algorithm into a memory of the server process to create in-memory models. The method also includes processing a first set of one or more medical images with the server process using the algorithm and at least one model selected from the in-memory models, maintaining the in-memory models in the memory of the server process after processing the first set of one or more medical images, and, in response to a request to process a second set of one or more medical images, processing the second set of one or more medical images using the algorithm and at least one of the in-memory models.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: February 28, 2023
    Assignee: Merative US L.P.
    Inventors: Hans Harald Zachmann, Simona Rabinovici-Cohen, Shaked Brody
  • Patent number: 11592667
    Abstract: A display system includes: a transparent display; a dimming panel located behind the transparent display and capable of adjusting transmissivity; and a processor, wherein, when the processor detects an object located behind the dimming panel, the processor displaying an image on the transparent display, setting the transmissivity of a region of the dimming panel located in front of the object higher than the transmissivity of a region of the dimming panel located behind a region on which the image is displayed, and making different a degree of an increase of the transmissivity in accordance with combination of the image and the object.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: February 28, 2023
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Nanami Fujiwara, Yuichi Hasegawa, Takeshi Kawaguchi
  • Patent number: 11593921
    Abstract: Disclosed is a system to add photorealistic detail and motion to an image based on a first material property associated with a first set of data points of an incomplete first object, and a second material property associated with a second set of data points of an incomplete second object in the image. The system may generate first artificial data points amongst the first set of data points that completes a first arrangement defined for the first material property, and may generate second artificial data points amongst the second set of data points that completes a second arrangement defined for the second material property. The system may then output an enhanced image of the completed first object based on the first set of data points and the first artificial data points, and of the completed second object based on the second set of data points and the second artificial data points.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: February 28, 2023
    Assignee: Illuscio, Inc.
    Inventor: Robert Monaghan
  • Patent number: 11593959
    Abstract: Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent different non-positional characteristics that are associated with a particular point in the scene from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: February 28, 2023
    Assignee: Illuscio, Inc.
    Inventors: Robert Monaghan, Mark Weingartner
  • Patent number: 11586349
    Abstract: A viewpoint control unit 204 detects a user operation on a display surface for displaying a virtual-viewpoint video (S801) and controls at least one of the position and the orientation of a virtual viewpoint concerning generation of the virtual-viewpoint video in accordance with the user operation (S805, S808, S812, S814).
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: February 21, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yuji Kato
  • Patent number: 11585672
    Abstract: Systems, methods, and non-transitory computer readable media configured to provide three-dimensional representations of routes. Location information for a planned movement may be obtained. The location information may include tridimensional information of a location. Route information for the planned movement may be obtained. The route information may define a route of one or more entities within the location. A three-dimensional view of the route within the location may be determined based on the location information and the route information. An interface through which the three-dimensional view of the route within the location is accessible may be provided.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: February 21, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Richard Dickson, Mason Cooper, Quentin Le Pape
  • Patent number: 11587283
    Abstract: An image processing apparatus acquires information about a three-dimensional display apparatus, acquires a plurality of images based on imaging by a plurality of imaging apparatuses that images a target area from different directions, sets a position of and a direction from a virtual viewpoint in a space associated with the target area, sets positions of and directions from virtual viewpoints of a number corresponding to a configuration of the three-dimensional display apparatus, with the position of and the direction from the virtual viewpoint as a reference, and generates display image data for the three-dimensional display apparatus, based on the plurality of images and the set positions of and the set directions from the virtual viewpoints.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: February 21, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Taku Ogasawara
  • Patent number: 11585654
    Abstract: Embodiments of the disclosure are drawn to projecting light on a surface and analyzing the scattered light to obtain spatial information of the surface and generate a three dimensional model of the surface. The three dimensional model may then be analyzed to calculate one or more surface characteristics, such as roughness. The surface characteristics may then be analyzed to provide a result, such as a diagnosis or a product recommendation. In some examples, a mobile device is used to analyze the surface.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: February 21, 2023
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Zahra Hosseinimakarem, Jonathan D. Harms, Alyssa N. Scarbrough, Dmitry Vengertsev, Yi Hu
  • Patent number: 11584012
    Abstract: A method, apparatus, and computer-readable storage media for robotic programming are disclosed. To improve upon or even solve the dilemma that teach-in techniques cannot work for all kinds of objects and offline programming requires complicated simulation of a robot and objects, a solution is provided to use a virtual item marked by a marker during programming of the robot and display the virtual item to a user. As such, even very large items can be used and also replaced easily during programming, which makes the programming procedures go smoothly and efficiently.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: February 21, 2023
    Assignee: Siemens Aktiengesellschaft
    Inventors: Carlos Morra, Axel Rottmann
  • Patent number: 11587248
    Abstract: The present teaching relates to method, system, medium, and implementation of determining depth information in autonomous driving. Stereo images are first obtained from multiple stereo pairs selected from at least two stereo pairs. The at least two stereo pairs have stereo cameras installed with the same baseline and in the same vertical plane. Left images from the multiple stereo pairs are fused to generate a fused left image and right images from the multiple stereo pairs are fused to generate a fused right image. Disparity is then estimated based on the fused left and right images and depth information can be computed based on the stereo images and the disparity.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: February 21, 2023
    Assignee: PlusAI, Inc.
    Inventors: Anurag Ganguli, Timothy Patrick Daly, Jr., Hao Zheng, David Wanqian Liu
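    Once disparity has been estimated from the fused left and right images, depth follows from the standard stereo relation depth = focal_length x baseline / disparity. The sketch below shows that final conversion only; the focal length and baseline values are illustrative assumptions.
      import numpy as np

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Convert a disparity map in pixels to metric depth."""
          d = np.asarray(disparity_px, dtype=float)
          depth = np.full_like(d, np.inf)               # zero disparity corresponds to a point at infinity
          valid = d > 0
          depth[valid] = focal_px * baseline_m / d[valid]
          return depth

      # 10 px of disparity with a 1000 px focal length and a 0.5 m baseline is 50 m away.
      print(depth_from_disparity([10.0, 0.0], focal_px=1000.0, baseline_m=0.5))  # [50. inf]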
  • Patent number: 11588897
    Abstract: Methods, systems, apparatuses, and computer-readable media are provided for simulating user interactions with shared content. In one implementation, the computer-readable medium includes instructions to cause a processor to establish a communication channel for sharing content and user interactions; transmit to at least one second wearable extended reality appliance, first data, representing an object associated with first wearable extended reality appliance, enabling a virtual representation of the object to be displayed through the at least one second wearable extended reality appliance; receive image data from an image sensor associated with the first wearable extended reality appliance; detect in the image data at least one user interaction including a human hand pointing to a specific portion of the object; and transmit to the at least one second wearable extended reality appliance second data indicating an area of the specific portion of the object.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: February 21, 2023
    Assignee: MULTINARITY LTD
    Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev
  • Patent number: 11589182
    Abstract: A method of presenting audio comprises: identifying a first ear listener position and a second ear listener position in a mixed reality environment; identifying a first virtual sound source in the mixed reality environment; identifying a first object in the mixed reality environment; determining a first audio signal in the mixed reality environment, wherein the first audio signal originates at the first virtual sound source and intersects the first ear listener position; determining a second audio signal in the mixed reality environment, wherein the second audio signal originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position; determining a third audio signal based on the second audio signal and the first object; presenting, to a first ear of a user, the first audio signal; and presenting, to a second ear of the user, the third audio signal.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 21, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Anastasia Andreyevna Tajik
  • Patent number: 11583770
    Abstract: The disclosed systems and methods provide for generating real-time egress plans for users in a building, based on the users' current locations. As the users' current locations change, egress plans associated with the users can be dynamically modified in real-time. The egress plans can also be generated, modified, and/or trained based on inputted information about the user. The disclosed technology can include a mobile application for presenting, in a centralized interface, information about user-specific egress plans, training the user for different emergency scenarios, improving or changing features in the building to improve safety, and user profiles. The mobile application can include training simulation games to help prepare the users to safely egress during an emergency. The disclosed technology can also predict building component and structure emergency risk levels. The disclosed technology can also designate zones in the building based on possible egress routes.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: February 21, 2023
    Assignee: LGHorizon, LLC
    Inventors: Bill Delmonico, Joseph Schmitt
  • Patent number: 11579711
    Abstract: A hand-held controller and a positional reference device for determining the position and orientation of the hand-held controller within a three-dimensional volume relative to the location of the positional reference device. An input/output subsystem in conjunction with processing and memory subsystems can receive reference image data captured by a beacon sensing device combined with inertial measurement information from inertial measurement units within the hand-held controller. The position and orientation of the hand-held controller can be computed based on the linear distance between a pair of beacons on the positional reference device and the reference image data and the inertial measurement information.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: February 14, 2023
    Assignee: Marbl Limited
    Inventor: Matthew G. Fonken
  • Patent number: 11582555
    Abstract: There is provided a smart audio system including multiple audio devices and a central server. The central server confirms a model of every audio device and a position thereof in an operation area in a scan mode. The central server confirms a user position or a user state to accordingly control output power of a speaker of each of the multiple audio devices in an operation mode.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: February 14, 2023
    Assignee: PIXART IMAGING INC.
    Inventors: Ming Shun Manson Fei, Yi-Hsien Ko
  • Patent number: 11580691
    Abstract: A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: February 14, 2023
    Assignee: PACKSIZE LLC
    Inventors: Giulio Marin, Abbas Rafii, Carlo Dal Mutto, Kinh Tieu, Giridhar Murali, Alvise Memo
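    The intuition behind the minimal color vector is that specular highlights only add light on top of the diffuse term, so the per-channel minimum over all image regions that observe a patch approximates its diffuse component. The sketch below shows that reduction only; the sample colors are illustrative and this is not the patent's full BRDF construction.
      import numpy as np

      def diffuse_estimate(per_view_colors):
          """per_view_colors: (num_views, 3) RGB observations mapped to one planar patch."""
          return np.asarray(per_view_colors).min(axis=0)   # minimal color vector ~ diffuse component

      views = np.array([[0.80, 0.35, 0.30],                # view containing a specular highlight
                        [0.42, 0.20, 0.18],                # mostly diffuse view
                        [0.45, 0.22, 0.19]])
      print(diffuse_estimate(views))                       # [0.42 0.2  0.18]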
  • Patent number: 11576631
    Abstract: A method for forming a virtual 3D mathematical model of a dental system, including receiving DICOM files representing the dental system; identifying number and location of voxels of tissues of the dental system; combining the voxels of the tissues into voxels of organs of the dental system; combining the organs into the virtual 3D mathematical model of the dental system, wherein the virtual 3D mathematical model supports linear, non-linear and volumetric measurements of the dental system; and presenting the virtual 3D mathematical model to a user. The DICOM files can be cone beam or multispiral computed tomography, MRT, PET and/or ultrasonography. The tissues include enamel, dentin, pulp, cartilage, periodontium, and/or jaw bone. The organs include teeth, gums, temporomandibular joint and/or jaw. A size of the voxels is typically between 40 μm and 200 μm.
    Type: Grant
    Filed: February 15, 2020
    Date of Patent: February 14, 2023
    Assignee: Medlab Media Group SL
    Inventors: Marcos Rubio Rubio, Clara Soler Pellicer, Evgeny Solovykh, Alexander Obrubov, Svetlana Polyakova
  • Patent number: 11580667
    Abstract: A method for characterizing a pose estimation system includes: receiving, from a pose estimation system, first poses of an arrangement of objects in a first scene; receiving, from the pose estimation system, second poses of the arrangement of objects in a second scene, the second scene being a rigid transformation of the arrangement of objects of the first scene with respect to the pose estimation system; computing a coarse scene transformation between the first scene and the second scene; matching corresponding poses between the first poses and the second poses; computing a refined scene transformation between the first scene and the second scene based on coarse scene transformation, the first poses, and the second poses; transforming the first poses based on the refined scene transformation to compute transformed first poses; and computing an average rotation error and an average translation error of the pose estimation system based on differences between the transformed first poses and the second poses.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 14, 2023
    Assignee: Intrinsic Innovation LLC
    Inventors: Agastya Kalra, Achuta Kadambi, Kartik Venkataraman
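    The two metrics named at the end of the abstract are conventional: after the first poses are aligned by the refined scene transformation, each pose pair contributes a geodesic rotation error and a Euclidean translation error, and the results are averaged. The sketch below computes those averages under an assumed pose format of a 3x3 rotation matrix plus a 3-vector translation.
      import numpy as np

      def average_pose_errors(rots_a, trans_a, rots_b, trans_b):
          rot_errs, trans_errs = [], []
          for Ra, ta, Rb, tb in zip(rots_a, trans_a, rots_b, trans_b):
              R_rel = Ra.T @ Rb                                       # relative rotation between the pair
              cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
              rot_errs.append(np.degrees(np.arccos(cos_angle)))       # geodesic rotation error in degrees
              trans_errs.append(np.linalg.norm(ta - tb))              # Euclidean translation error
          return float(np.mean(rot_errs)), float(np.mean(trans_errs))

      # Identical rotations, translations 0.1 apart: average errors are roughly (0.0, 0.1).
      print(average_pose_errors([np.eye(3)], [np.zeros(3)], [np.eye(3)], [np.array([0.0, 0.0, 0.1])]))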
  • Patent number: 11579744
    Abstract: The embodiments described herein provide technologies and techniques for using available data (from a variety of data sources) to provide an integrated and virtual reality experience.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: February 14, 2023
    Assignee: NAVITAIRE LLC
    Inventor: Justin Steven Wilde
  • Patent number: 11580709
    Abstract: An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: February 14, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Javier Antonio Busto, Jonathan Brodsky
  • Patent number: 11575825
    Abstract: A control apparatus that accesses first cameras capturing a first subject of a first area, and second cameras capturing a second subject of a second area, detects a viewing direction of a spectator group in the first subject on the basis of image data of the first subject captured by any one of the first cameras, identifies a focus area in the second area that is focused on by the spectator group on the basis of the viewing direction of the spectator group, identifies a focus subject, focused on by the spectator group, that is present in the focus area on the basis of image data of the second subject captured by each of the second cameras, determines a specific second camera to be a transmission source of image data from among the second cameras on the basis of the focus subject, and transmits image data from the specific second camera.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: February 7, 2023
    Assignee: NIKON CORPORATION
    Inventors: Yosuke Otsubo, Satoshi Takahashi, Yuuya Takayama, Kazuhiro Abe, Hideo Hoshuyama, Marie Shoda, Sho Somiya, Tetsuya Koike, Naoya Otani
  • Patent number: 11574437
    Abstract: This application discloses a shadow rendering method and apparatus, a computer device, and a storage medium, the method including: obtaining at least one rendering structure in a virtual scene according to an illumination direction in the virtual scene; obtaining model coordinates of a plurality of pixels according to a current viewing angle associated with the virtual scene and depth information of the plurality of pixels; sampling at least one shadow map according to the model coordinates of the plurality of pixels to obtain a plurality of sampling points corresponding to the plurality of pixels; and rendering the plurality of sampling points in the virtual scene to obtain at least one shadow associated with the at least one virtual object.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: February 7, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Cangjian Hou
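    The pipeline in the abstract builds on the classic shadow-map comparison: transform a point into the light's clip space, sample the stored depth, and mark the point shadowed if it lies further from the light than the stored value. The sketch below shows that baseline test; the light matrix, depth bias, and map contents are illustrative assumptions, not the patented per-pixel sampling scheme.
      import numpy as np

      def in_shadow(world_pos, light_view_proj, shadow_map, bias=1e-3):
          p = light_view_proj @ np.append(world_pos, 1.0)
          p = p[:3] / p[3]                                       # light-space coordinates in [-1, 1]
          u = int((p[0] * 0.5 + 0.5) * (shadow_map.shape[1] - 1))
          v = int((p[1] * 0.5 + 0.5) * (shadow_map.shape[0] - 1))
          return p[2] - bias > shadow_map[v, u]                  # further than the nearest occluder?

      shadow_map = np.full((64, 64), 0.3)                        # an occluder near the light everywhere
      print(in_shadow(np.array([0.0, 0.0, 0.8]), np.eye(4), shadow_map))  # True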
  • Patent number: 11574447
    Abstract: A method for capturing real-world information into a virtual environment is provided. The method is suitable for a head-mounted device (HMD) located in a physical environment, and includes the following operations: providing the virtual environment, wherein a real-world content within the virtual environment is captured from a part of the physical environment corresponding to a perspective of the HMD; tracking a feature point which is located within the physical environment and is moved by a user, so as to define a selected plane of the real-world content by projecting a moving track of the feature point onto the real-world content; capturing image information corresponding to the selected plane; and generating a virtual object having an appearance rendered according to the image information, in which the virtual object is adjustable in size.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: February 7, 2023
    Assignee: HTC Corporation
    Inventors: Sheng-Cherng Lin, Chien-Hsin Liu, Shih-Lung Lin
  • Patent number: 11574453
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: February 7, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 11572653
    Abstract: An electronic device having a display on one side of the device and one or more image sensors on the opposite side of the device captures image data using the one or more image sensors. The display displays a portion of the captured image data on the display and a first graphical object overlaid on the displayed portion of the captured image data. A user gesture is detected using the one or more image sensors. In accordance with a determination that the detected user gesture meets a set of criteria, a position of the first graphical object is updated based on the user gesture or the first graphical object is replaced with a second graphical object. In accordance with a determination that the detected user gesture does not meet the set of criteria, the display of the first graphical object is maintained.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: February 7, 2023
    Assignee: ZYETRIC AUGMENTED REALITY LIMITED
    Inventor: Pak Kit Lam
  • Patent number: 11574487
    Abstract: There is provided a computerized system and method of generating a unique identification associated with a gemstone, usable for unique identification of the gemstone. The method comprises: obtaining one or more images of the gemstone, the one or more images captured at one or more viewing angles relative to the gemstone and to a light pattern, thus giving rise to a representative group of images; processing the representative group of images to generate a set of rotation-invariant values informative of rotational cross-correlation relationship characterizing the images in the representative group; and using the generated set of rotation-invariant values to generate a unique identification associated with the gemstone. The unique identification associated with the gemstone can be further compared with an independently generated unique identification associated with the gemstone in question, or with a class-indicative unique identification.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: February 7, 2023
    Assignee: SARINE COLOR TECHNOLOGIES LTD.
    Inventors: Yiftah Navot, Omri Spirman, Ofek Shilon, Abraham Kerner, Uzi Levami
  • Patent number: 11568605
    Abstract: A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. The request may include one or more sets of feature descriptors extracted from an image of the physical world around the device. Those features may be posed relative to a coordinate frame used by the local device. The localization service may identify one or more stored maps with a matching set of features. Based on a transformation required to align the features from the device with the matching set of features, the localization service may compute and return to the device a transformation to relate its local coordinate frame to a coordinate frame of the stored map.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: January 31, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Ali Shahrokni, Daniel Olshansky, Xuan Zhao, Rafael Domingos Torres, Joel David Holder, Keng-Sheng Lin, Ashwin Swaminathan, Anush Mohan
  • Patent number: 11570404
    Abstract: A method for predicting behavior changes of a participant of a virtual three dimensional (3D) video conference, the method may include determining, for each part of multiple parts of the virtual 3D video conference, and by a first computerized unit, (a) a participant behavioral predictor to be applied by a second computerized unit during the part of the virtual 3D video conference, (b) one or more prediction inaccuracies related to the applying of the participant behavioral predictor during the part of the virtual 3D video conference, and (c) whether to generate and transmit to the second computerized unit prediction inaccuracy metadata that is indicative of at least one prediction inaccuracy that affects a representation of the participant within a virtual 3D video conference environment presented to another participant of the virtual 3D video conference during the part of the virtual 3D video conference; and generating and transmitting to the second computerized unit the prediction inaccuracy metadata, when
    Type: Grant
    Filed: June 20, 2021
    Date of Patent: January 31, 2023
    Assignee: TRUE MEETING INC.
    Inventors: Michael Rabinovich, Yuval Gronau, Ran Oz
  • Patent number: 11567632
    Abstract: The present disclosure generally relates to exploring a geographic region that is displayed in computer user interfaces. In some embodiments, a method includes at an electronic device with a display and one or more input devices, displaying a map of a geographic region on the display and detecting a first user input to select a starting location on the map. After detecting the first user input, the method includes detecting a second user input to select a first direction of navigation from the starting location. In response to detecting the second user input, the method includes determining a path on the map that traverses in the first direction of navigation and connects the starting location to an ending location, and providing audio that includes traversal information about traversing along the path in the geographic region in the first direction of navigation and from the starting location to the ending location.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: January 31, 2023
    Assignee: Apple Inc.
    Inventors: Christopher B. Fleizach, Michael A. Troute, Reginald D. Hudson, Aaron M. Everitt, Conor M. Hughes
  • Patent number: 11567336
    Abstract: A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive, or register, image information outputted by the HMD. For example, the HMD-to-eye alignment varies for different users and may change over time (e.g., as a given user moves around or as the HMD slips or otherwise becomes displaced). The wearable device may determine a relative position or alignment between the HMD and the user's eyes by determining whether features of the eye are at certain vertical positions relative to the HMD. Based on the relative positions, the wearable device may determine if it is properly fitted to the user, provide feedback on the quality of the fit to the user, and take actions to reduce or minimize effects of any misalignment.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: January 31, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Yan Xu, Jordan Alexander Cazamias, Rose Mei Peng