Patents Examined by Michelle Chin
  • Patent number: 12020393
    Abstract: A system and method include performing digital block-out of one or more digital preparation teeth.
    Type: Grant
    Filed: January 20, 2023
    Date of Patent: June 25, 2024
    Assignee: James R. Glidewell Dental Ceramics, Inc.
    Inventors: Sergey Nikolskiy, Fedor Cheinokov
  • Patent number: 11989819
    Abstract: A computer-implemented method and a corresponding apparatus are provided for the provision of a two-dimensional visualization image having a plurality of visualization pixels for the visualization of a three-dimensional object represented by volume data for a user. Context information for the visualization is obtained by the evaluation of natural language and is taken into account in the visualization. The natural language can be in the form of electronic documents, which are assigned or can be assigned to the visualization process. In addition, the natural language can be in the form of a speech input of a user, during or after the visualization.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: May 21, 2024
    Assignee: Siemens Healthineers AG
    Inventors: Fernando Vega, Stefan Thesen, Sebastian Budz, Robert Schneider, Sebastian Krueger, Alexander Brost, Volker Schaller, Bjoern Nolte
  • Patent number: 11989798
    Abstract: A video processing system includes a system processor circuit and a video processor circuit. The system processor circuit includes a graphic buffer and an open media acceleration layer. The graphic buffer is configured to store video data from a camera. The open media acceleration layer is configured to extract at least one data parameter associated with the video data. The video processor circuit is configured to receive the at least one data parameter, receive the video data from the graphic buffer according to the at least one data parameter, encode the video data according to the at least one data parameter to generate encoded data, and transmit the encoded data to the system processor circuit.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: May 21, 2024
    Assignee: Realtek Semiconductor Corporation
    Inventor: Ji Ma
  • Patent number: 11989846
    Abstract: A method for training a real-time model for animating an avatar for a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute, and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: May 21, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
  • Patent number: 11988832
    Abstract: In one aspect, a first device includes at least one processor and storage accessible to the at least one processor. The storage includes instructions executable by the at least one processor to execute a first application (app) to generate a first virtual display for rendering in a first canvas as part of a three-dimensional (3D) simulation. The instructions are also executable to execute a second app to generate a second virtual display for rendering in a second canvas as part of the 3D simulation. The instructions are then executable to concurrently render the first and second canvases at a headset as part of the 3D simulation.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: May 21, 2024
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Kuldeep Singh, Raju Kandaswamy, Andrew Hansen, Mayan Shay May-Raz, Cole Heiner
  • Patent number: 11983827
    Abstract: A computer-implemented method includes obtaining an image of an area of a property from an augmented reality device, identifying the area of the property based on the image obtained from the augmented reality device, determining that the area of the property corresponds to an event at the property or a configuration of a monitoring system of the property, and providing, in response to determining that the area of the property corresponds to the event or the configuration, information that represents the event or the configuration and that is configured to be displayed on the augmented reality device.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: May 14, 2024
    Assignee: ObjectVideo Labs, LLC
    Inventor: Donald Madden
  • Patent number: 11983816
    Abstract: The present disclosure relates to a system for controlling an interactive virtual environment.
    Type: Grant
    Filed: June 16, 2023
    Date of Patent: May 14, 2024
    Assignee: Build a Rocket Boy Games Ltd.
    Inventor: Leslie Peter Benzies
  • Patent number: 11972520
    Abstract: A method of rendering, in a rendering space, a scene formed by primitives in a graphics processing system. A geometry processing phase includes the step of storing fragment shading rate data representing a first fragment shading rate value and associating data identifying a primitive with the fragment shading rate data. A rendering phase includes the steps of: retrieving the stored fragment shading rate data and associated data identifying the primitive; obtaining an attachment specifying one or more attachment fragment shading rate values for the rendering space; processing the primitive to derive primitive fragments to be shaded; and, for each primitive fragment, combining the first fragment shading rate value for the primitive from which the primitive fragment is derived with an attachment fragment shading rate value from the attachment to produce a resolved combined fragment shading rate value for the respective fragment.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: April 30, 2024
    Assignee: Imagination Technologies Limited
    Inventor: Enrique de Lucas
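The combining step in the abstract above can be illustrated with a short sketch. The choice of combiner (`max`, clamped to a maximum rate of 4×4) is an assumption for illustration; the patent covers resolving a per-primitive rate against an attachment rate in general.

```python
def combine_fsr(primitive_rate, attachment_rate, combiner=max, max_rate=(4, 4)):
    """Resolve a primitive's fragment shading rate against the attachment's
    rate for the same region of the rendering space. Rates are (width, height)
    pixel counts covered by one shading sample; the max combiner and the
    (4, 4) clamp are illustrative assumptions, not taken from the patent."""
    return (min(combiner(primitive_rate[0], attachment_rate[0]), max_rate[0]),
            min(combiner(primitive_rate[1], attachment_rate[1]), max_rate[1]))
```

For example, a primitive requesting 1×2 shading combined with a 2×1 attachment rate resolves to 2×2 under a `max` combiner.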
  • Patent number: 11963828
    Abstract: Disclosed herein are visualization systems, methods, devices and database configurations related to the real-time depiction, in 2D and 3D on monitor panels as well as via 3D holographic visualization, of the internal workings of patient surgery, such as patient intervention site posture as well as the positioning, in some cases real-time positioning, of an object foreign to the patient.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: April 23, 2024
    Assignee: Xenco Medical, LLC
    Inventor: Jason Haider
  • Patent number: 11961184
    Abstract: A system and method for 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method also includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
  • Patent number: 11961185
    Abstract: A method for generating a more accurate mesh that represents a 3D printed part based on a model includes slicing the model into layers and identifying an infill-wall boundary and an exterior-interior boundary of each layer of the model. Layers of the model may be identified as critical by iterative comparison with adjacent layers. An interior voxel mesh may be constructed based on common two-dimensional reference grids imposed on the critical layers. The interior voxel mesh may be augmented to an augmented mesh and then extended to a protomesh. The protomesh may be extruded to construct the final mesh, which may be analyzed by finite element analysis. The part may be 3D printed based on the layers output by the slicing operation.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: April 16, 2024
    Assignee: Markforged, Inc.
    Inventor: Jeffrey Lee Selden
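The "identified as critical by iterative comparison with adjacent layers" step above can be sketched as follows. Treating a layer as critical when its outline differs from either neighbor is one plausible reading; the patent's exact criterion is not stated in the abstract.

```python
def critical_layers(layer_outlines):
    """Flag layers whose sliced outline differs from an adjacent layer.
    layer_outlines: a list of hashable outline descriptions (e.g. tuples of
    boundary points), one per slice. The difference test (exact inequality)
    is an illustrative assumption."""
    flags = []
    last = len(layer_outlines) - 1
    for i, outline in enumerate(layer_outlines):
        differs_prev = i > 0 and layer_outlines[i - 1] != outline
        differs_next = i < last and layer_outlines[i + 1] != outline
        flags.append(differs_prev or differs_next)
    return flags
```

A run of identical layers is then represented by its bounding critical layers, keeping the interior voxel mesh coarse where geometry does not change.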
  • Patent number: 11950850
    Abstract: A virtual surgical robot built from kinematic data is rendered to a display. A user input is received to effect a movement or a configuration of the virtual surgical robot. The kinematic data is modified based on evaluation of the movement or the configuration of the virtual surgical robot.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: April 9, 2024
    Assignee: Verb Surgical Inc.
    Inventors: Bernhard Fuerst, Eric Johnson, Pablo Garcia Kilroy
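Building a renderable robot from kinematic data, as in the abstract above, reduces at its simplest to forward kinematics. The planar serial-chain model below is a toy stand-in, not the patent's representation.

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Compute joint positions of a planar serial-link arm from kinematic
    data (link lengths in arbitrary units, relative joint angles in radians).
    Returns the chain of (x, y) joint positions starting at the base, which
    a renderer could then draw as the virtual robot's configuration."""
    x = y = theta = 0.0
    points = [(x, y)]
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle          # accumulate relative joint rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points
```

Evaluating a user-requested configuration then amounts to recomputing these positions and checking them against workspace constraints before committing the modified kinematic data.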
  • Patent number: 11955028
    Abstract: An environment-transforming system transforms a user's environmental information in ways that allow the user to better perceive her environment. For example, the system transforms some aspects of a visual scene into a sound space for a visually impaired user. Once the user learns to interpret the sound space, she avoids walking into an object because a sound cue tells her that the object is in her near environment and roughly where it is. Various types of transformations are possible depending upon the user's needs and preferences. As all people are different, each user has a personalized profile that directs the transformation process. More generally, the environment to be transformed may include aspects of artificial or enhanced reality. The transformed environmental information can be presented to the user in a number of ways, such as in haptic feedback, as directional audio, through oral stimulation, or through modified visual display.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: April 9, 2024
    Assignee: United Services Automobile Association (USAA)
    Inventors: Justin Royell Nash, Ivan Ortiz, Austin Ray Keeton, Subhalakshmi Selvam, Fang Yuan Gonzalez, Huihui Wu, Salvador Adrian Bretado
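One possible visual-to-audio transformation of the kind described above maps an object's position relative to the user into a stereo pan and a distance-based volume. The mapping below is a toy illustration under assumed units (meters, radians), not the patented transformation.

```python
import math

def object_to_audio_cue(listener_xy, listener_heading, obj_xy, max_range=5.0):
    """Map a nearby object's position to an audio cue: pan in [-1, 1]
    (left of the listener to right) and volume in [0, 1] falling off
    linearly with distance. The linear falloff and 5 m range are
    illustrative assumptions."""
    dx = obj_xy[0] - listener_xy[0]
    dy = obj_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - listener_heading  # angle relative to gaze
    pan = math.sin(bearing)
    volume = max(0.0, 1.0 - distance / max_range)
    return pan, volume
```

A per-user profile could then rescale or remap these cues, consistent with the abstract's point that each user has a personalized transformation profile.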
  • Patent number: 11941744
    Abstract: Various methods are provided for generating motion vectors in the context of 3D computer-generated images. An example method includes generating, for each pixel of one or more objects to be rendered in a current frame, a 1-phase motion vector (MV1) and a 0-phase motion vector (MV0), each MV1 and MV0 having an associated depth value, to thereby form an MV1 texture and an MV0 texture; converting the MV1 texture to a set of MV1 blocks and converting the MV0 texture to a set of MV0 blocks; and outputting the set of MV1 blocks and the set of MV0 blocks for image processing.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: March 26, 2024
    Assignee: PIXELWORKS SEMICONDUCTOR TECHNOLOGY (SHANGHAI) CO. LTD.
    Inventors: Hongmin Zhang, Miao Sima, Zongming Han, Gongxian Liu, Junhua Chen, Guohua Cheng, Baochen Liu, Neil Woodall, Yue Ma, Huili Han
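The texture-to-blocks conversion described above can be sketched as a depth-aware reduction: each block takes the motion vector of its nearest (smallest-depth) pixel. That selection rule is an assumption for illustration; the abstract does not specify how a block's vector is chosen.

```python
def mv_texture_to_blocks(mv_texture, block_size):
    """Reduce a per-pixel motion-vector texture to per-block vectors.
    mv_texture: 2D list of (mvx, mvy, depth) tuples, one per pixel.
    Each block keeps the vector of its closest pixel (smallest depth),
    an illustrative reduction rule."""
    h, w = len(mv_texture), len(mv_texture[0])
    blocks = []
    for by in range(0, h, block_size):
        row = []
        for bx in range(0, w, block_size):
            pixels = [mv_texture[y][x]
                      for y in range(by, min(by + block_size, h))
                      for x in range(bx, min(bx + block_size, w))]
            row.append(min(pixels, key=lambda p: p[2]))
        blocks.append(row)
    return blocks
```

The same reduction would be run once on the MV1 texture and once on the MV0 texture before handing both block sets to the image-processing stage.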
  • Patent number: 11935323
    Abstract: The invention discloses an imaging device using feature compensation. Such a device comprises a vision detecting module, a voice detecting module, a partial-feature database and a three-dimensional (3D) imaging module. The vision detecting module may be configured to generate 3D mesh information of at least one object based on an imaging signal of the object. The voice detecting module may be configured to detect at least one specific voice of the object and generate corresponding identification (ID) information. The partial-feature database may be configured to store compensating mesh information of the partial features of the object. The 3D imaging module may be configured to combine the 3D mesh information of the object with the compensating mesh information, and thereby form a 3D image. Correspondingly, the invention also discloses an imaging method using feature compensation.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: March 19, 2024
    Assignee: AVERMEDIA TECHNOLOGIES, INC.
    Inventor: Chao-Tung Hu
  • Patent number: 11935173
    Abstract: Provided are a method and device for providing interactive virtual reality content capable of increasing user immersion by naturally connecting an idle image to a branched image. The method includes providing an idle image including options, wherein an actor in the idle image performs a standby operation, while the actor performs the standby operation, receiving a user selection for an option, providing a connection image, and providing a corresponding branched image according to the selection of the user, wherein a portion of the actor in the connection image is processed by computer graphics, and the actor performs a connection operation so that a first posture of the actor at a time point at which the selection is received is smoothly connected to a second posture of the actor at a start time point of the branched image.
    Type: Grant
    Filed: January 4, 2023
    Date of Patent: March 19, 2024
    Assignee: VISION VR INC.
    Inventors: Dong Kyu Kim, Won-Il Kim
  • Patent number: 11922607
    Abstract: Disclosed is an electronic device including a memory and a processor electrically connected with the memory. The memory stores instructions that, when executed, cause the processor to control the electronic device to: obtain information about a maximum value of brightness of image content based on metadata of the image content, to perform tone mapping on at least one or more frames corresponding to a preview image of the image content based on the information about the maximum value of the brightness, and to output the preview image based on the at least one or more frames on which the tone mapping is performed, on a display device.
    Type: Grant
    Filed: July 25, 2021
    Date of Patent: March 5, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sanguk Park, Bongsoo Jung, Changho Kim, Jungwon Lee, Donghyun Yeom
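The metadata-driven tone mapping above can be illustrated with a small sketch. Using an extended-Reinhard-style curve whose white point is the content's metadata maximum is an assumption; the abstract only says the mapping is based on the maximum brightness value.

```python
def tone_map_nits(luminance, content_max_nits, display_max_nits=100.0):
    """Map an HDR pixel luminance (nits) into a display's range using the
    content's metadata maximum as the white point. The extended-Reinhard
    curve and the 100-nit display default are illustrative assumptions."""
    lw = content_max_nits / display_max_nits   # white point, display-relative
    x = luminance / display_max_nits
    y = x * (1.0 + x / (lw * lw)) / (1.0 + x)  # maps x == lw exactly to 1.0
    return min(y, 1.0) * display_max_nits
```

Applying this per pixel to the frames backing the preview image would keep the metadata maximum exactly at the display's peak while compressing mid-tones smoothly.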
  • Patent number: 11922098
    Abstract: A system for modeling a roof of a structure comprising a first database, a second database and a processor in communication with the first database and the second database. The processor selects one or more images and the respective metadata thereof from the first database based on a received geospatial region of interest. The processor generates two-dimensional line segment geometries in pixel space based on two-dimensional outputs generated by a neural network in pixel space of at least one roof structure present in the selected one or more images. The processor classifies the generated two-dimensional line segment geometries into at least one contour graph based on three-dimensional data received from the second database and generates a three-dimensional representation of the at least one roof structure based on the at least one contour graph and the received three-dimensional data.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: March 5, 2024
    Assignee: Insurance Services Office, Inc.
    Inventors: Bryce Zachary Porter, Ryan Mark Justus
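Grouping detected line segments into contour graphs, as in the abstract above, can be sketched as endpoint chaining. This greedy, exact-endpoint version is a simplification: real pixel-space segments would need tolerance-based endpoint snapping, and this sketch only extends each chain forward from its tail.

```python
from collections import defaultdict

def chain_segments(segments):
    """Link 2D line segments that share endpoints into ordered point chains.
    segments: list of ((x1, y1), (x2, y2)) tuples with exact-match endpoints.
    Returns a list of chains, each a list of points."""
    adj = defaultdict(set)              # endpoint -> indices of touching segments
    for i, (a, b) in enumerate(segments):
        adj[a].add(i)
        adj[b].add(i)
    used, chains = set(), []
    for i, (a, b) in enumerate(segments):
        if i in used:
            continue
        used.add(i)
        chain = [a, b]
        while True:                     # extend from the chain's tail
            tail = chain[-1]
            nxt = next((j for j in sorted(adj[tail]) if j not in used), None)
            if nxt is None:
                break
            used.add(nxt)
            p, q = segments[nxt]
            chain.append(q if p == tail else p)
        chains.append(chain)
    return chains
```

Each resulting chain is a candidate contour; the 3D data from the second database would then assign heights to its vertices.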
  • Patent number: 11915377
    Abstract: Extended reality (XR) software application programs establish remote collaboration sessions in which a host device and one or more remote devices can interact. When initiating a remote collaboration session, an XR application in a host device determines a collaboration area. The collaboration area corresponds to a portion of a real-world environment that is shared by the host device with the one or more remote devices. In some embodiments, the collaboration area can be determined automatically and/or based on user input. The XR application causes sensors associated with the host device to scan the collaboration area. Then, the XR application transmits, to the one or more remote devices, a three-dimensional representation of the collaboration area for rendering in one or more remote XR environments.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: February 27, 2024
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Caelin Thomas Jackson-King, Stanislav Yazhenskikh, Jim Jiaming Zhu
  • Patent number: 11915375
    Abstract: In particular embodiments, a computing system may divide at least a portion of a physical space surrounding a user into a plurality of three-dimensional (3D) regions, wherein each of the 3D regions is associated with an area of a plurality of areas in a plane. The system may generate estimated locations of features of objects in the portion of the physical space. Based on the estimated locations, the system may determine an occupancy state of each of the plurality of 3D regions. Then based on the occupancy states of the plurality of 3D regions, the system may determine that one or more of the plurality of areas have respective airspaces that are likely unoccupied by objects.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: February 27, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexandru-Eugen Ichim, Matthew James Alderman, Alexander Sorkine Hornung, Manuel Werlberger, Gaurav Chaurasia, Jan Oberländer
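The occupancy test above can be reduced to a minimal sketch: bin estimated feature points into floor-plane cells and report the candidate areas whose airspace column contains no point. The uniform grid and binary occupancy are simplifications of the abstract's per-region occupancy states.

```python
def unoccupied_areas(candidate_areas, feature_points, cell=1.0):
    """Return the candidate floor areas (ix, iy) whose airspace is likely
    free of objects. feature_points are estimated (x, y, z) locations of
    object features; any point in the column above an area marks that
    area's airspace as occupied. The grid cell size is an assumption."""
    occupied_columns = {(int(x // cell), int(y // cell))
                        for (x, y, z) in feature_points}
    return [a for a in candidate_areas if a not in occupied_columns]
```

A system could then propose the returned areas as safe placement or play spaces, since their airspaces are likely unoccupied by objects.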