Patents Examined by Maurice L McDowell, Jr.
  • Patent number: 11961183
    Abstract: A system may provide for searching terrain data of real-world locations based on input representing a terrain for a game world. The system may receive terrain inquiry data including height data for terrain of a game world, generate an inquiry descriptor based on the terrain inquiry data at least in part by applying a plurality of filters to the terrain inquiry data, the inquiry descriptor including a plurality of inquiry descriptor values corresponding to the plurality of filters, and determine, based on the inquiry descriptor and respective sample descriptors of one or more terrain samples corresponding to terrain of real-world locations, one or more matching terrain samples.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: April 16, 2024
    Assignee: Electronic Arts Inc.
    Inventor: Daniel Ricão Canelhas
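A minimal sketch of the descriptor-matching idea this abstract describes. The three "filters" (mean, variance, roughness) and the Euclidean distance are illustrative assumptions; the patent does not specify which filters or distance measure are used.

```python
def mean_f(h):
    return sum(h) / len(h)

def var_f(h):
    m = sum(h) / len(h)
    return sum((x - m) ** 2 for x in h) / len(h)

def rough_f(h):
    # average absolute height change between adjacent samples
    return sum(abs(a - b) for a, b in zip(h, h[1:])) / (len(h) - 1)

FILTERS = [mean_f, var_f, rough_f]

def descriptor(height_data):
    """One descriptor value per filter, as the abstract describes."""
    return [f(height_data) for f in FILTERS]

def best_match(inquiry_heights, samples):
    """Return the real-world sample whose descriptor is closest to the inquiry's."""
    q = descriptor(inquiry_heights)
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(q, descriptor(s["heights"])))
    return min(samples, key=dist)

samples = [
    {"name": "flat plain",  "heights": [10, 10, 11, 10, 10]},
    {"name": "steep ridge", "heights": [0, 25, 50, 25, 0]},
]
match = best_match([5, 5, 6, 5, 5], samples)
```

A flat inquiry terrain matches the flat real-world sample because its mean, variance, and roughness values lie closest in descriptor space.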
  • Patent number: 11960934
    Abstract: A method and system for computing one or more outputs of a neural network having a plurality of layers is provided. The method and system can include determining a plurality of sub-computations from total computations of the neural network to execute in parallel, wherein the computations to execute in parallel involve computations from multiple layers. The method and system can also include avoiding repeating overlapped computations and/or multiple memory reads and writes during execution.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: April 16, 2024
    Assignee: NEURALMAGIC, INC.
    Inventors: Alexander Matveev, Nir Shavit
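A toy sketch of the cross-layer execution idea in this abstract: each output element is computed depth-first through both layers, and a cache prevents the overlapped layer-1 work from being repeated. The 3-tap neighborhood sums stand in for real network layers and are purely illustrative.

```python
def run(inputs):
    l1_cache = {}     # avoids repeating overlapped computations
    evals = [0]       # counts layer-1 evaluations, for inspection

    def layer1(i):
        if i not in l1_cache:
            evals[0] += 1
            lo, hi = max(0, i - 1), min(len(inputs), i + 2)
            l1_cache[i] = sum(inputs[lo:hi])
        return l1_cache[i]

    # Depth-first across layers: each layer-2 output pulls the
    # layer-1 values it needs on demand.
    out = []
    for i in range(len(inputs)):
        lo, hi = max(0, i - 1), min(len(inputs), i + 2)
        out.append(sum(layer1(j) for j in range(lo, hi)))
    return out, evals[0]

out, evals = run([1, 2, 3, 4])
```

Although each layer-1 value is needed by up to three layer-2 outputs, the cache ensures it is computed exactly once.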
  • Patent number: 11948226
    Abstract: A computer-implemented method for clinical workspace simulation includes capturing a real-world environment by an imaging device of an augmented reality headset and generating a composite view by rendering a first virtual object relative to a surgical table in the real-world environment. The captured real-world environment and the rendered first virtual object are combined in the composite view, which is displayed on a display of the augmented reality headset worn by a user.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: April 2, 2024
    Assignee: COVIDIEN LP
    Inventors: Max L. Balter, Michael A. Eiden, William J. Peine, Unnas W. Hussain, Justin R. Chen
  • Patent number: 11948259
    Abstract: Embodiments of the present invention provide a system for processing and integrating real-time environment instances into virtual reality live streams. The system is configured for determining that a user is accessing a virtual environment, capturing a real-time environment instance associated with the virtual environment via one or more capturing devices, creating a neutral environment template based on processing the real-time environment instance, embedding one or more preferential objects associated with the user into the neutral environment template to generate a preferred environment template, and instantaneously integrating the preferred environment template into a virtual reality live stream associated with the virtual environment in real time.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: April 2, 2024
    Inventors: Suryanarayan Parthasarathi Chakravarthi, Pritika Bhatia, Harshit Bhatt, Saisrikanth Chitty, Neha Jain, Mithun Kumar, Madhumitha Swaminathan Rangarajan
  • Patent number: 11937770
    Abstract: A surgical system is disclosed comprising a processor and a memory storing instructions executable by the processor to receive imaging data from a surgical visualization system, identify a critical structure within a patient based on the imaging data received from the surgical visualization system, transmit a margin signal based on the identified critical structure, measure a size of an anatomical structure of the patient based on the imaging data, identify information about a surgical instrument of a plurality of surgical instruments, customize an operational parameter of a surgical instrument of the plurality of surgical instruments, and provide electrosurgical energy to electrosurgical instruments of the plurality of surgical instruments. A display is configured to display a digital representation of the critical structure. The display is configured to display a resection margin around the critical structure. The resection margin is generated based on the margin signal.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: March 26, 2024
    Assignee: Cilag GmbH International
    Inventors: Frederick E. Shelton, IV, Jason L. Harris, Daniel J. Mumaw, Kevin M. Fiebig
  • Patent number: 11928784
    Abstract: Examples of the disclosure describe systems and methods for sharing perspective views of virtual content. In an example method, a virtual object is presented, via a display, to a first user. A first perspective view of the virtual object is determined, wherein the first perspective view is based on a position of the virtual object and a position of the first user. The virtual object is presented, via a display, to a second user, wherein the virtual object is presented to the second user according to the first perspective view. A second perspective view of the virtual object is determined, wherein the second perspective view is based on an input from the first user. The virtual object is presented, via a display, to the second user, wherein presenting the virtual object to the second user comprises presenting a transition from the first perspective view to the second perspective view.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: March 12, 2024
    Assignee: Magic Leap, Inc.
    Inventor: Marc Alan McCall
  • Patent number: 11928404
    Abstract: Methods and systems of simulating a fluid are provided. An outflow of the fluid is determined. An inflow of the fluid is determined. A simulated fluid is determined using the inflow and the outflow.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: March 12, 2024
    Assignee: LEVEL EX, INC.
    Inventors: Sam Glassenberg, Matthew Yeager
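A minimal sketch of the inflow/outflow bookkeeping this abstract describes: per cell, the next fluid amount is the current amount plus inflow minus outflow. The one-dimensional grid and the flow rates are illustrative assumptions, not the patented method.

```python
def step(fluid, inflow, outflow):
    """Advance the simulated fluid one step from per-cell flows."""
    return [max(0.0, f + i - o) for f, i, o in zip(fluid, inflow, outflow)]

fluid = [1.0, 2.0, 0.5]
fluid = step(fluid, inflow=[0.2, 0.0, 0.3], outflow=[0.5, 1.0, 0.0])
```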
  • Patent number: 11928834
    Abstract: Presented herein are systems and methods for performing three-dimensional measurements of a surgical space using two-dimensional endoscopic images. According to an aspect, video data taken from an endoscopic imaging device can be used to generate a three-dimensional model of the surgical space represented by the video data. In one or more examples, two-dimensional images from the video data can be used to generate a three-dimensional model of the surgical space. In one or more examples, the one or more two-dimensional images of the surgical space can include a fiducial marker as part of the image. Using both the depth information and a size reference provided by the fiducial marker, the systems and methods herein can generate a three-dimensional model of the surgical space. The generated three-dimensional model can then be used to perform a variety of three-dimensional measurements in a surgical cavity in an accurate and efficient manner.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: March 12, 2024
    Assignee: Stryker Corporation
    Inventors: Cole Kincaid Hunter, Brian Fouts, Sanskruti Maske
  • Patent number: 11922578
    Abstract: A method for adjusting point cloud density, an electronic device, and a storage medium are provided. In the method, an initial point cloud map and a distance determination threshold of a robot are obtained. A plurality of target regions in the initial point cloud map are determined, and an environmental complexity value of each target region is calculated. The initial point cloud map is divided into submaps, and a point cloud density coefficient of each submap is determined. The initial point cloud map is adjusted according to the point cloud density coefficients, and a target point cloud map is obtained. By utilizing such a method, the efficiency and accuracy of point cloud density adjustment can be improved.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: March 5, 2024
    Inventors: Jung-Yi Lin, Chieh Lee
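A sketch of the per-submap adjustment step: each submap keeps a fraction of its points given by its density coefficient (derived, per the abstract, from environmental complexity). The coefficients and random downsampling used here are illustrative assumptions.

```python
import random

def adjust_density(submaps, coefficients, rng):
    """Downsample each submap to a point count set by its coefficient."""
    adjusted = []
    for points, coeff in zip(submaps, coefficients):
        keep = max(1, int(len(points) * coeff))
        adjusted.append(rng.sample(points, keep))
    return adjusted

rng = random.Random(0)
submaps = [[(i, i) for i in range(100)], [(i, -i) for i in range(100)]]
out = adjust_density(submaps, coefficients=[1.0, 0.2], rng=rng)
```

A simple submap (low complexity) gets a small coefficient and is thinned aggressively, while a complex submap keeps most of its points.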
  • Patent number: 11922569
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating realistic full-scene point clouds. One of the methods includes obtaining an initial scene point cloud characterizing an initial scene in an environment; obtaining, for each of one or more objects, an object point cloud that characterizes the object; and processing a first input comprising the initial scene point cloud and the one or more object point clouds using a first neural network that is configured to process the first input to generate a final scene point cloud that characterizes a transformed scene that has the one or more objects added to the initial scene.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: March 5, 2024
    Assignee: Waymo LLC
    Inventors: Yin Zhou, Dragomir Anguelov, Zhangjie Cao
  • Patent number: 11911116
    Abstract: A system includes a console assembly, a trocar assembly operably coupled to the console assembly, a camera assembly operably coupled to the console assembly having a stereoscopic camera assembly, and at least one rotational positional sensor configured to detect rotation of the stereoscopic camera assembly about at least one of a pitch axis or a yaw axis. The console assembly includes a first actuator and a first actuator pulley operably coupled to the first actuator. The trocar assembly includes a trocar having an inner and outer diameter, and a seal sub-assembly comprising at least one seal, the seal sub-assembly operably coupled to the trocar. The camera assembly includes a camera support tube having a distal and a proximal end, the stereoscopic camera operably coupled to the distal end of the support tube, and a first and second camera module having a first and second optical axis.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: February 27, 2024
    Assignee: Vicarious Surgical Inc.
    Inventors: Eric Kline, Sammy Khalifa, Marshall Wentworth, Eric Van Albert
  • Patent number: 11883948
    Abstract: A virtual robot image presentation method and an apparatus are provided to improve virtual robot utilization and user experience. In this method, an electronic device generates a first virtual robot image and presents the first virtual robot image. The first virtual robot image is determined by the electronic device based on scene information. The scene information includes at least one piece of information in first information and second information, where the first information is used to represent a current time attribute, and the second information is used to represent a type of an application currently running in the electronic device. According to the foregoing method, virtual robot images in a human-machine interaction process can be richer and more vivid, improving user experience and thereby improving a user's utilization of the virtual robot.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: January 30, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Simon Ekstrand, Fredrik Andreasson, Johan Larsby, Sha Qian, Le Du, Xueyan Huang
  • Patent number: 11887237
    Abstract: A system to dynamically generate and cause display of composite user identifiers is described. Embodiments of the present disclosure relate to systems for: receiving an identification of a user profile from a client device; retrieving user identifiers associated with the user profile and a user profile associated with the client device; selecting an animation script from among a plurality of animation scripts, the animation script including a set of graphical elements; generating a composite user identifier based on the user identifiers associated with the user profile and the user profile associated with the client device; and causing display of a presentation of the composite user identifier at the client device, wherein the presentation is based on the animation script.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: January 30, 2024
    Assignee: Snap Inc.
    Inventors: Celia Nicole Mourkogiannis, Jeremy Voss
  • Patent number: 11875563
    Abstract: Systems and methods for presenting an augmented reality view are disclosed. Embodiments include a system with a database for personalizing an augmented reality view of a physical environment using at least one of a location of a physical environment or a location of a user. The system may further include a hardware device in communication with the database, the hardware device including a renderer configured to render the augmented reality view for display and a controller configured to determine a scope of the augmented reality view and authenticate the augmented reality view. The hardware device may include a processor configured to receive the augmented reality view of the physical environment, and present, via a display, augmented reality content to the user while the user is present in the physical environment, based on the determined scope of the augmented reality view.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: January 16, 2024
    Assignee: Capital One Services, LLC
    Inventors: Jason Richard Hoover, Micah Price, Sunil Subrahmanyam Vasisht, Qiaochu Tang, Geoffrey Dagley, Stephen Michael Wylie
  • Patent number: 11875012
    Abstract: The technology disclosed relates to positioning and revealing a control interface in a virtual or augmented reality, and includes causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality. Input is received that is interpreted as user interaction with an interface projectile. User interaction includes selecting and throwing the interface projectile in a first direction. An animation of the interface projectile is displayed along a trajectory in the first direction to a place where it lands. A blooming of the control interface from the interface projectile at the place where it lands is displayed.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: January 16, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventor: Nicholas James Benson
  • Patent number: 11861759
    Abstract: Embodiments are generally directed to memory prefetching in multiple GPU environment. An embodiment of an apparatus includes multiple processors including a host processor and multiple graphics processing units (GPUs) to process data, each of the GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the GPUs is to prefetch data from the memory to the cache of the GPU; and wherein the prefetcher of a GPU is prohibited from prefetching from a page that is not owned by the GPU or by the host processor.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: January 2, 2024
    Inventors: Joydeep Ray, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Nicolas Galoppo von Borries, Varghese George, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Mike Macpherson, Subramaniam Maiyuran
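A sketch of the ownership rule in this abstract: a GPU's prefetcher may only prefetch from pages owned by that GPU or by the host processor. The page table, cache, and memory structures here are simplified illustrations, not the patented hardware design.

```python
def prefetch(gpu_id, page, page_owner, cache, memory):
    """Prefetch a page into a GPU's cache, honoring the ownership rule."""
    owner = page_owner.get(page)
    if owner != gpu_id and owner != "host":
        return False            # prohibited: page owned by another GPU
    cache[page] = memory[page]  # allowed: bring the page into the cache
    return True

memory = {0: "a", 1: "b", 2: "c"}
page_owner = {0: "gpu0", 1: "host", 2: "gpu1"}
cache = {}
results = [prefetch("gpu0", p, page_owner, cache, memory) for p in (0, 1, 2)]
```

gpu0 may prefetch its own page (0) and the host-owned page (1), but the prefetch of gpu1's page (2) is refused.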
  • Patent number: 11861777
    Abstract: Computer animation involving monocular pose prediction is disclosed. A plurality of candidate pose sequences of a three-dimensional model of an animation character is generated such that each candidate pose of each sequence has a segmentation map that matches a segmentation map of a corresponding character derived from a corresponding frame of a video. A distance between candidate poses at each time step is maximized. An optimum pose sequence is determined and used to generate a corresponding sequence of frames of animation.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: January 2, 2024
    Inventors: Sergey Bashkirov, Michael Taylor
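A minimal sketch of the pose-selection idea: score each candidate pose sequence by how well its segmentation maps match the video's, and pick the best. Flattening segmentation maps into binary lists and scoring by pixel agreement are illustrative simplifications.

```python
def overlap(a, b):
    """Fraction of pixels where two binary segmentation maps agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_sequence(candidates, video_maps):
    """Index of the candidate pose sequence best matching the video."""
    def score(seq):
        return sum(overlap(m, v) for m, v in zip(seq, video_maps))
    return max(range(len(candidates)), key=lambda i: score(candidates[i]))

video_maps = [[1, 1, 0, 0], [0, 1, 1, 0]]
candidates = [
    [[1, 0, 0, 0], [0, 0, 1, 0]],  # partial match per frame
    [[1, 1, 0, 0], [0, 1, 1, 0]],  # exact match per frame
]
best = best_sequence(candidates, video_maps)
```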
  • Patent number: 11854147
    Abstract: Augmented reality guidance for guiding a user through an environment using an eyewear device is described. The eyewear device includes a display system and a position detection system. A user is guided through an environment by monitoring a current position of the eyewear device within the environment, identifying marker positions within a threshold of the current position, the marker positions defined with respect to the environment and associated with guidance markers, registering the marker positions, generating an overlay image including the guidance markers, and presenting the overlay image on a display of the eyewear device.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: December 26, 2023
    Assignee: Snap Inc.
    Inventors: Shin Hwun Kang, Dmytro Kucher, Dmytro Hovorov, Ilteris Canberk
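A sketch of the marker-identification step: keep only guidance markers whose positions fall within a distance threshold of the device's current position. The 2-D positions and threshold value are illustrative assumptions.

```python
import math

def markers_in_range(current, markers, threshold):
    """Names of markers within `threshold` distance of the current position."""
    return [
        name
        for name, pos in markers.items()
        if math.dist(current, pos) <= threshold
    ]

markers = {"exit": (1.0, 0.0), "stairs": (4.0, 3.0), "cafe": (10.0, 10.0)}
nearby = markers_in_range(current=(0.0, 0.0), markers=markers, threshold=5.0)
```

Only the nearby markers would then be registered and rendered into the overlay image.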
  • Patent number: 11854211
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: December 26, 2023
    Assignee: Microsoft Technology Licensing, LLC.
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
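A sketch of the tag-preprocessing step this abstract describes: when an anomaly alert (e.g. an occlusion alert) is raised for an object in a training image, that object's tag data is located and modified before training. Marking the tag as ignorable is one illustrative choice of modification; the patent covers modifying tag data generally.

```python
def preprocess(tags, alerts):
    """Modify tag data for objects that have an anomaly alert."""
    out = []
    for tag in tags:
        tag = dict(tag)                         # leave originals untouched
        tag["ignore"] = tag["object_id"] in alerts
        out.append(tag)
    return out

tags = [{"object_id": 1, "box": (0, 0, 4, 4)},
        {"object_id": 2, "box": (5, 5, 9, 9)}]
processed = preprocess(tags, alerts={2})        # object 2: occlusion alert
```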
  • Patent number: 11830113
    Abstract: A set of streams of time series data is mapped to a corresponding set of drawing coordinates of an image, wherein a first stream of time series data in the set of streams of time series data comprises data of a first executing application component, wherein a first drawing coordinate in the set of drawing coordinates represents a first state of the first executing application component at a first time. The set of drawing coordinates is drawn, wherein a line between the first drawing coordinate and a second drawing coordinate represents a dependency between the first executing application component and a second executing application component.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: November 28, 2023
    Inventors: Albert Alexander Chung, Alexander Dalton Chung
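A minimal sketch of the mapping this abstract describes: each stream's latest state becomes a drawing coordinate, and a dependency between two application components becomes a line between their coordinates. The layout (stream index as x, latest value as y) is an illustrative assumption.

```python
def to_coordinates(streams):
    """Map each time-series stream to one (x, y) drawing coordinate."""
    return {name: (x, series[-1])
            for x, (name, series) in enumerate(streams.items())}

def dependency_lines(coords, dependencies):
    """One line (pair of coordinates) per component dependency."""
    return [(coords[a], coords[b]) for a, b in dependencies]

streams = {"web": [3, 5, 4], "db": [1, 2, 2], "cache": [7, 6, 8]}
coords = to_coordinates(streams)
lines = dependency_lines(coords, [("web", "db"), ("web", "cache")])
```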