Patents Examined by Robert J Craddock
  • Patent number: 11688118
    Abstract: A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: June 27, 2023
    Assignee: PFAQUTRUMA RESEARCH LLC
    Inventor: Brian Mark Shuster
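    The inactivity scheme above amounts to a timer plus an idle animation that never exactly repeats. A minimal Python sketch of that idea, assuming a hypothetical render_idle_pose callback and an arbitrary 30-second threshold (neither is specified by the patent):

```python
import math

IDLE_THRESHOLD_S = 30.0  # assumed inactivity period before the idle scheme starts

def idle_offset(t: float) -> float:
    """Non-repeating idle parameter: sum of sinusoids with incommensurate periods."""
    return 0.5 * math.sin(0.37 * t) + 0.3 * math.sin(1.13 * t + 1.7) + 0.2 * math.sin(2.71 * t)

def update_avatar(last_input_time: float, now: float, render_idle_pose) -> None:
    """Switch the avatar's graphical representation once the client has been idle."""
    idle_for = now - last_input_time
    if idle_for < IDLE_THRESHOLD_S:
        return  # client is active; the normal representation is left untouched
    # Drive the idle animation from elapsed idle time; the mix of incommensurate
    # frequencies keeps the motion from cycling repetitively.
    render_idle_pose(idle_offset(idle_for - IDLE_THRESHOLD_S))
```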
  • Patent number: 11682175
    Abstract: The present disclosure relates to systems that capture a combination of image data and environmental data of the environment. The system uses the environmental data to create a detailed virtual scan of the environment. Computer-generated models and images (“assets”) are inserted into the detailed virtual environment from the scan. These assets are scaled and placed within the virtual environment at specific locations and with specific orientations. The scaled and positioned asset is then composited with the real-time video signal, allowing a user to view the asset in real time on a display.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: June 20, 2023
    Assignee: FD IP & LICENSING LLC
    Inventors: Brandon Fayette, Gene Reddick
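    A minimal Python sketch of the scale-place-composite step described above, using a rigid transform for placement and straight alpha compositing; the renderer that produces the RGBA asset layer is assumed and not part of the patent text:

```python
import numpy as np

def place_asset(points: np.ndarray, scale: float, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Scale, rotate, and translate asset vertices (N x 3) into the scanned environment."""
    return (scale * points) @ R.T + t

def composite(frame: np.ndarray, asset_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite a rendered RGBA asset layer over a video frame (H x W x 3, uint8)."""
    alpha = asset_rgba[..., 3:4].astype(np.float32) / 255.0
    out = alpha * asset_rgba[..., :3] + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(np.uint8)
```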
  • Patent number: 11680914
    Abstract: Systems and methods are provided for generating a 3D structure estimation of at least one target from a set of 2D cryo-electron microscope particle images. The method includes: receiving the set of 2D particle images of the target from a cryo-electron microscope; splitting the set of particle images into at least a first half-set and a second half-set; iteratively performing: determining local resolution estimation and local filtering on at least a first half-map associated with the first half-set and a second half-map associated with the second half-set; aligning 2D particles from each of the half-sets using at least one region of the associated half-map; for each of the half-maps, generating an updated half-map using the aligned 2D particles from the associated half-set; and generating a resultant 3D map using all the half-maps.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: June 20, 2023
    Assignee: THE GOVERNING COUNCIL OF THE UNIVERSITY OF TORONTO
    Inventors: Ali Punjani, David Fleet, Haowei Zhang
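    The abstract describes an iterative half-set refinement loop. The sketch below shows only that control flow; the cryo-EM primitives are passed in as a hypothetical `ops` object because the patent does not specify their implementations:

```python
def refine(particles, ops, n_iters: int = 5):
    """Iterative half-set refinement, structure only.

    `ops` is a hypothetical object supplying the cryo-EM primitives
    (initial_map, local_resolution, local_filter, align, backproject, combine).
    """
    half_sets = (particles[0::2], particles[1::2])                  # split into two half-sets
    half_maps = [ops.initial_map(h) for h in half_sets]             # one half-map per half-set
    for _ in range(n_iters):
        updated = []
        for parts, hmap in zip(half_sets, half_maps):
            res = ops.local_resolution(half_maps[0], half_maps[1])  # local resolution estimate
            filtered = ops.local_filter(hmap, res)                  # locally filter this half-map
            aligned = ops.align(parts, filtered)                    # align 2D particles to map regions
            updated.append(ops.backproject(aligned))                # updated half-map from its half-set
        half_maps = updated
    return ops.combine(half_maps[0], half_maps[1])                  # resultant 3D map from all half-maps
```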
  • Patent number: 11682271
    Abstract: An automated teller machine (ATM) diagnostic and repair system includes an image capture device, a display, a processor, and a memory. The image capture device is configured to capture at least one of images or videos. The memory includes instructions stored thereon that, when executed by the processor, cause the processor to receive diagnostic data from an ATM. The instructions, when executed by the processor, further cause the processor to capture at least one of an image or a video of the ATM using the image capture device. The instructions, when executed by the processor, further cause the processor to receive a selection of a particular component of the ATM from a user and to provide at least one of an augmented image or an augmented video of the ATM including a modified view of the particular component.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: June 20, 2023
    Assignee: WELLS FARGO BANK, N.A.
    Inventor: Alicia Y. Moore
  • Patent number: 11676353
    Abstract: Systems and methods configured to facilitate animation are disclosed. Exemplary implementations may: obtain a first scene definition; receive second entity information; integrate the second entity information into the first scene definition such that a second scene definition is generated; for each of the entities of the entity information, execute a simulation of the virtual reality scene from the second scene definition for at least a portion of the scene duration; for each of the entities of the entity information, analyze the second scene definition for deviancy between the given entity and the second motion capture information; for each of the entities of the entity information, indicate, based on the analysis for deviancy, the given entity as deviant; and for each of the entities of the entity information, re-integrate the given entity into the second scene definition.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: June 13, 2023
    Assignee: Mindshow Inc.
    Inventors: Jeffrey Scott Dixon, William Stuart Farquhar
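    One plausible reading of the deviancy check is a trajectory-error threshold against the motion-capture track, as sketched below; the threshold and the scene-definition layout are assumptions, not taken from the patent:

```python
import numpy as np

DEVIANCY_THRESHOLD = 0.05  # assumed maximum mean positional error, in scene units

def is_deviant(simulated: np.ndarray, mocap: np.ndarray) -> bool:
    """Compare a simulated trajectory (T x 3) against its motion-capture track (T x 3)."""
    error = np.linalg.norm(simulated - mocap, axis=1).mean()
    return error > DEVIANCY_THRESHOLD

def reintegrate(scene: dict, entity_id: str, mocap: np.ndarray) -> None:
    """Replace a deviant entity's track in the scene definition with the captured one."""
    scene["entities"][entity_id]["trajectory"] = mocap
```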
  • Patent number: 11676355
    Abstract: A method of merging distant virtual spaces is disclosed. Data describing an environment surrounding an MR merging device is received. A first slice plane is generated, positioned, and displayed within the environment. A connection is established with a second MR merging device in a second environment. Data describing inbound content from the second MR merging device is received. Content data is sent from the MR merging device to the second MR merging device. The inbound content data is processed and displayed on the first slice plane.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity IPR ApS
    Inventor: Gregory Lionel Xavier Jean Palmaro
  • Patent number: 11663798
    Abstract: The present disclosure describes an image processing system and method for manipulating two-dimensional (2D) images of three-dimensional (3D) objects of a predetermined class (e.g., human faces). A 2D input image of a 3D object of the predetermined class is manipulated by manipulating physical properties of the 3D object, such as a 3D shape of the 3D input object, an albedo of the 3D input object, a pose of the 3D input object, and lighting illuminating the 3D input object. The physical properties are extracted from the 2D input image using a neural network that is trained to reconstruct the 2D input image. The 2D input image is reconstructed by disentangling the physical properties from pixels of the 2D input image using multiple subnetworks. The disentangled physical properties produced by the multiple subnetworks are combined into a 2D output image using a differentiable renderer.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: May 30, 2023
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Tim Marks, Safa Medin, Anoop Cherian, Ye Wang
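    A toy PyTorch sketch of the multi-subnetwork disentanglement described above: a shared backbone feeds separate heads for shape, albedo, pose, and lighting, and a differentiable renderer (passed in, since the patent does not name one) recombines them for reconstruction. Layer sizes are illustrative only:

```python
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Toy version of the multi-subnetwork split: one head per physical property."""

    def __init__(self, img_dim: int = 64 * 64 * 3, code_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.shape_head = nn.Linear(256, code_dim)    # 3D shape code
        self.albedo_head = nn.Linear(256, code_dim)   # albedo code
        self.pose_head = nn.Linear(256, 6)            # rotation + translation
        self.light_head = nn.Linear(256, 9)           # e.g. spherical-harmonic lighting

    def forward(self, image: torch.Tensor, renderer):
        h = self.backbone(image.flatten(1))
        shape, albedo = self.shape_head(h), self.albedo_head(h)
        pose, light = self.pose_head(h), self.light_head(h)
        # A differentiable renderer recombines the disentangled properties into a
        # 2D image, so the whole pipeline can be trained to reconstruct the input.
        return renderer(shape, albedo, pose, light)
```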
  • Patent number: 11663791
    Abstract: An example method includes identifying a need and an emotional state of a user who is participating in an interaction with a support application, retrieving a set of preferences for the user, selecting a set of features for an avatar to be presented to the user, wherein the set of features is selected based at least on the emotional state of the user and the set of preferences, selecting a stored workflow based on the need of the user, and rendering the avatar to exhibit the set of features and to present the stored workflow as part of the interaction.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: May 30, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: James Pratt, Yupeng Jia, Eric Zavesky
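    A minimal sketch of the selection logic: avatar features keyed on emotional state, overridden by stored user preferences, and a workflow keyed on the stated need. The lookup tables and keys are hypothetical:

```python
# Hypothetical lookup tables; the real system presumably learns or configures these.
FEATURES_BY_EMOTION = {
    "frustrated": {"tone": "calm", "pace": "slow", "expression": "empathetic"},
    "neutral":    {"tone": "friendly", "pace": "normal", "expression": "smiling"},
}
WORKFLOWS_BY_NEED = {"billing_dispute": "wf_billing_01", "outage": "wf_outage_03"}

def configure_avatar(emotion: str, need: str, preferences: dict) -> dict:
    """Pick avatar features from emotion plus preferences, and a stored workflow from need."""
    features = dict(FEATURES_BY_EMOTION.get(emotion, FEATURES_BY_EMOTION["neutral"]))
    features.update(preferences)            # user preferences override the defaults
    workflow = WORKFLOWS_BY_NEED.get(need)  # stored workflow chosen from the stated need
    return {"features": features, "workflow": workflow}
```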
  • Patent number: 11663701
    Abstract: This disclosure presents a method and computer program product to denoise a ray traced scene. An apparatus for processing a ray traced scene is also disclosed. In one example, the method includes: (1) generating filtered scene data by filtering modified scene data from original scene data utilizing a spatial filter, and (2) providing a denoised ray traced scene by adjusting the filtered scene data utilizing a temporal filter. The modified and adjusted scene data can be sent to a rendering processor or system to complete rendering to generate a final scene.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: May 30, 2023
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Jacopo Pantaleoni
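    A minimal Python sketch of the two-stage idea, spatial filtering followed by a temporal filter, here a Gaussian blur and an exponential moving average over the previous frame; the actual filters used by the patent are not specified in the abstract:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(current: np.ndarray, previous: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Spatial filter followed by temporal accumulation (an exponential moving average).

    `current` and `previous` are H x W x 3 float arrays; `alpha` weights the new frame.
    """
    spatially_filtered = gaussian_filter(current, sigma=(1.5, 1.5, 0))  # spatial pass, per channel
    return alpha * spatially_filtered + (1.0 - alpha) * previous        # temporal pass
```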
  • Patent number: 11657583
    Abstract: The present technology includes calculating the 3-D RF propagation pattern in a space for at least one Wi-Fi access point and displaying a visualization of the RF propagation pattern in augmented reality (AR). The augmented reality view of the space can be created by capturing at least one image of the space and displaying the at least one image on a display with the visualization of the Wi-Fi access point's RF propagation pattern overlaid on it. The disclosed technology can further calculate the RF propagation properties and render a visualization of the RF propagation patterns in a 3D space by utilizing hardware on a user device. The AR display is useful for visualizing, in person, aspects of a Wi-Fi network and its coverage, and can be used in troubleshooting, maintenance, and simulations of equipment variations.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: May 23, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Salvatore Valenza, Taha Hajar, Samer Salam, Mathieu Bastien Monney
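    The simplest possible version of the RF-propagation step is free-space path loss evaluated on a 3D grid around the access point, as sketched below; a product implementation would presumably model walls, antennas, and multipath as well:

```python
import numpy as np

def rssi_grid(ap_xyz, tx_power_dbm=20.0, freq_hz=5.0e9, extent=10.0, step=0.5):
    """Free-space received power (dBm) on a 3D grid around one Wi-Fi access point."""
    axis = np.arange(-extent, extent + step, step)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    d = np.sqrt((x - ap_xyz[0])**2 + (y - ap_xyz[1])**2 + (z - ap_xyz[2])**2)
    d = np.maximum(d, 0.1)                                   # avoid log(0) at the AP itself
    fspl_db = 20 * np.log10(d) + 20 * np.log10(freq_hz) - 147.55  # free-space path loss
    return tx_power_dbm - fspl_db                            # values to colour the AR overlay
```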
  • Patent number: 11640693
    Abstract: Methods, systems and apparatuses may provide for technology that determines the size of a graphics primitive, renders pixels associated with the graphics primitive on a per tile basis if the size exceeds a threshold, and renders the pixels associated with the graphics primitive in a mesh order if the size does not exceed the threshold. In one example, the technology discards state data associated with the graphics primitive in response to a completion of rendering the pixels associated with the graphics primitive in the mesh order.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: May 2, 2023
    Assignee: Intel Corporation
    Inventors: Justin DeCell, Saurabh Sharma, Subramaniam Maiyuran, Raghavendra Miyar, Jorge Garcia Pabon
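    The size-threshold dispatch reduces to a simple branch, sketched below against a hypothetical renderer backend exposing tile-order and mesh-order paths:

```python
def render_primitive(primitive, threshold_px: float, renderer) -> None:
    """Pick the rasterization order from the primitive's screen-space size.

    `renderer` is a hypothetical backend with tile-order and mesh-order paths.
    """
    if primitive.screen_area_px > threshold_px:
        renderer.render_tile_order(primitive)      # large primitive: walk it tile by tile
    else:
        renderer.render_mesh_order(primitive)      # small primitive: keep mesh submission order
        renderer.discard_state(primitive)          # state data no longer needed once rendering completes
```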
  • Patent number: 11636234
    Abstract: The disclosure notably relates to a computer-implemented method for generating a 3D model representing a building. The method comprises providing a 2D floor plan representing a layout of the building. The method also comprises determining a semantic segmentation of the 2D floor plan. The method also comprises determining the 3D model based on the semantic segmentation. Such a method provides an improved solution for processing a 2D floor plan.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: April 25, 2023
    Assignee: DASSAULT SYSTEMES
    Inventors: Asma Rejeb Sfar, Louis Dupont de Dinechin, Malika Boulkenafed
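    A minimal sketch of the segmentation-to-geometry step: wall-labelled pixels of the floor plan are extruded to a fixed height. The label ids, cell size, and wall height are assumptions; a real pipeline would also merge cells into segments and cut door and window openings:

```python
import numpy as np

WALL, DOOR, WINDOW = 1, 2, 3   # assumed label ids from the semantic segmentation

def extrude(labels: np.ndarray, wall_height: float = 2.7, cell: float = 0.05) -> np.ndarray:
    """Turn a per-pixel label map (H x W) into wall boxes: one (x, y, w, d, h) row per wall cell."""
    ys, xs = np.nonzero(labels == WALL)
    boxes = np.stack([xs * cell, ys * cell,
                      np.full(xs.shape, cell), np.full(xs.shape, cell),
                      np.full(xs.shape, wall_height)], axis=1)
    return boxes
```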
  • Patent number: 11636662
    Abstract: Methods and systems are disclosed for performing operations for applying augmented reality elements to a fashion item. The operations include receiving an image that includes a depiction of a person wearing a fashion item. The operations include generating a segmentation of the fashion item worn by the person depicted in the image. The operations include extracting a portion of the image corresponding to the segmentation of the fashion item and estimating an angle of each pixel in the portion of the image relative to a camera used to capture the image. The operations include applying one or more augmented reality elements to the fashion item in the image based on the estimated angle of each pixel in the portion of the image relative to the camera used to capture the image.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: April 25, 2023
    Assignee: Snap Inc.
    Inventors: Itamar Berger, Gal Dudovitch, Gal Sasson, Ma'ayan Shuvi, Matan Zohar
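    One plausible reading of "angle of each pixel relative to the camera" is the angle between each masked pixel's viewing ray and the optical axis, computed from the camera intrinsics as sketched below:

```python
import numpy as np

def pixel_angles(mask: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Angle (radians) between each masked pixel's viewing ray and the camera's optical axis.

    `mask` is the H x W boolean segmentation of the fashion item; intrinsics are in pixels.
    """
    ys, xs = np.nonzero(mask)
    rx, ry = (xs - cx) / fx, (ys - cy) / fy          # normalized ray components
    angles = np.arctan(np.sqrt(rx**2 + ry**2))       # angle from the optical axis (z = 1)
    out = np.zeros(mask.shape, dtype=np.float32)
    out[ys, xs] = angles                             # used to modulate the AR elements
    return out
```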
  • Patent number: 11620791
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the one or more text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the one or more text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at the position in the 3D space captured within the camera feed.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: April 4, 2023
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
  • Patent number: 11615506
    Abstract: A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting a size of a dynamic over-rendered area based on a combination of the angular velocity, the most recent pose, the previous warp poses, and the previous over-rendered areas.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: March 28, 2023
    Assignee: Snap Inc.
    Inventors: Bernhard Jung, Edward Lee Kim-Koon
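    A minimal sketch of how the listed signals could be combined into a margin size; the gains, smoothing, and clamp are illustrative, since the abstract only names the inputs:

```python
def overrender_margin(angular_velocity_dps: float,
                      pose_error_deg: float,
                      previous_margins_px: list,
                      base_px: int = 32,
                      gain: float = 1.5) -> int:
    """Grow the extra rendered border with head speed and recent warp error, shrink otherwise."""
    history = sum(previous_margins_px) / len(previous_margins_px) if previous_margins_px else base_px
    target = base_px + gain * angular_velocity_dps + 4.0 * pose_error_deg
    return int(min(max(0.5 * history + 0.5 * target, base_px), 256))  # smooth and clamp
```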
  • Patent number: 11615503
    Abstract: In one example embodiment, an information processing apparatus causes a display device to display a first image from images associated with an observation target object. The images include the first image and a second image which corresponds to an annotation mark. In this embodiment, the information processing apparatus also causes the display device to display the annotation mark corresponding to the second image. In this embodiment, the displayed annotation mark overlaps the first image.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: March 28, 2023
    Assignee: Sony Corporation
    Inventors: Masashi Kimoto, Shigeatsu Yoshioka
  • Patent number: 11601490
    Abstract: The disclosure is directed to systems and methods for local rendering of 3D models, which are then accessed by remote computers. The advantage of the system is that the extensive hardware needed for rendering complex 3D models is centralized and can be accessed by smaller remote computers without any special hardware or software installation. The system also provides enhanced security, as model data can be restricted to a limited number of servers instead of being stored on individual computers.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: March 7, 2023
    Assignee: AVEVA Software, LLC
    Inventors: David Matthew Stevenson, Paul Antony Burton, Mira Witczak
  • Patent number: 11593999
    Abstract: A method for guiding installation of smart-home devices may include capturing, by a camera of a mobile computing device, a view of an installation location for a smart-home device; determining, by the mobile computing device, an instruction for installing the smart-home device at the location; and displaying, by a display of the mobile computing device, the view of the installation location for the smart-home device with the instruction for installing the smart-home device.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: February 28, 2023
    Assignee: Google LLC
    Inventors: Adam Mittleman, Jason Chamberlain, Jacobi Grillo, Daniel Biran, Mark Kraz, Lauren Chanen, Daniel Foran, David Fichou, William Dong, Bao-Tram Phan Nguyen, Brian Silverstein, Yash Modi, Alex Finlayson, Dongeek Shin
  • Patent number: 11587266
    Abstract: Techniques are disclosed to add augmented reality to a sub-view of a high resolution central video feed. In various embodiments, a central video feed is received from a first camera on a first recurring basis and time-stamped position information is received from a tracking system on a second recurring basis. The central video feed is calibrated against a spatial region encompassed by the central video feed. The received time-stamped position information and a determined plurality of tiles associated with at least one frame of the central video feed are used to define a first sub-view of the central video feed. The first sub-view and a homography defining placement of augmented reality elements on the at least one frame of the central video feed are provided as output to a device configured to use the first sub-view and the homography to display the first sub-view.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: February 21, 2023
    Assignee: Tempus Ex Machina, Inc.
    Inventors: Erik Schwartz, Michael Naquin, Christopher Brown, Steve Xing, Pawel Czarnecki, Charles D. Ebersol, Anne Gerhart
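    Placing augmented-reality elements on the sub-view comes down to applying the homography to tracked positions, as in the sketch below (plain homogeneous-coordinate math, no library assumed):

```python
import numpy as np

def place_overlay(points_field: np.ndarray, homography: np.ndarray) -> np.ndarray:
    """Map augmented-reality anchor points from field coordinates into the sub-view.

    `points_field` is N x 2 (tracked positions); `homography` is the 3 x 3 matrix that
    relates the calibrated spatial region to pixels of the frame/sub-view.
    """
    ones = np.ones((points_field.shape[0], 1))
    projected = np.hstack([points_field, ones]) @ homography.T   # to homogeneous image coords
    return projected[:, :2] / projected[:, 2:3]                  # back to pixel coordinates
```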
  • Patent number: 11580698
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the one or more text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the one or more text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at the position in the 3D space captured within the camera feed.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: February 14, 2023
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang