Patents Examined by Maurice L McDowell, Jr.
  • Patent number: 11347905
    Abstract: Methods and systems for performing softbody tissue simulation are described. A two-dimensional (2D) vertex displacement grid, represented as a 2D texture of a softbody mesh, can be determined. The 2D texture can comprise pinned positions of vector displacements relative to base positions. The surface of a three-dimensional (3D) object can be displaced by adding the vector displacements stored in the 2D texture in order to perform softbody tissue simulation. The pinning can comprise sliding, and sliding objects can be represented as signed distance functions (SDFs).
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: May 31, 2022
    Assignee: Level Ex, Inc.
    Inventors: Sam Glassenberg, Matthew Yaeger
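The displacement-texture idea in this abstract can be sketched briefly. The array shapes, the flat base patch, and the sphere SDF below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

# Sketch: a 2D grid ("texture") stores per-vertex displacement vectors
# relative to base (rest) positions; the deformed surface is obtained by
# adding the stored displacements to the base positions.
def displace_surface(base_positions, displacement_texture):
    """Both arguments are (H, W, 3) arrays of 3D positions/vectors."""
    return base_positions + displacement_texture

# A sliding object represented as a signed distance function (SDF):
# negative inside, zero on the surface, positive outside.
def sphere_sdf(points, center, radius):
    return np.linalg.norm(points - center, axis=-1) - radius

base = np.zeros((4, 4, 3))      # flat 4x4 patch of base positions
tex = np.zeros((4, 4, 3))
tex[..., 2] = 0.1               # pin every vertex 0.1 units along z
deformed = displace_surface(base, tex)
```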
  • Patent number: 11341689
    Abstract: One or more computer processors create a user-event localization model for an identified remote audience member in a plurality of identified remote audience members for an event. The one or more computer processors generate a virtual audience member based on the identified remote audience member utilizing a trained generative adversarial network and one or more user preferences. The one or more computer processors present the generated virtual audience member in a location associated with the event. The one or more computer processors dynamically adjust a presented virtual audience member responsive to one or more event occurrences utilizing the created user-event localization model.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 24, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K Baughman, Sai Krishna Reddy Gudimetla, Stephen C Hammer, Jeffrey D. Amsterdam, Sherif A. Goma
  • Patent number: 11335177
    Abstract: An example device includes a monitoring engine. The monitoring engine is to measure a characteristic of a wireless signal related to a path of the wireless signal. The wireless signal includes data from a remote source. The device includes an analysis engine to determine that an object is within a proximity threshold of a user based on the characteristic of the wireless signal. The device includes an indication engine to indicate to the user that the object is within the proximity threshold of the user.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: May 17, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Alexander Wayne Clark, Michael Bartha
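One way the path-dependent measurement described above could feed a proximity decision is a simple signal-strength heuristic. The RSSI framing, the threshold, and the names are assumptions, not HP's actual analysis engine:

```python
# Illustrative heuristic only: a drop in received signal strength (RSSI) of a
# wireless signal carrying data from a remote source can suggest that an
# object has entered the signal path near the user.
def object_within_proximity(baseline_dbm, measured_dbm, drop_threshold_db=6.0):
    return (baseline_dbm - measured_dbm) >= drop_threshold_db
```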
  • Patent number: 11335008
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
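The tag-preprocessing step above can be sketched as follows. The dictionary layout and the down-weighting choice are assumptions for illustration, not Microsoft's actual modification:

```python
# Tags for objects flagged by an anomaly alert (occlusion, proximity, motion)
# are modified before training so the multi-object tracker is not fit to
# unreliable ground truth.
def preprocess_tags(original_simulated_data, anomaly_alerts):
    preprocessed = {}
    for obj_id, tag in original_simulated_data.items():
        tag = dict(tag)                  # copy; leave the original data intact
        if obj_id in anomaly_alerts:
            tag["weight"] = 0.0          # assumed modification of the tag data
        preprocessed[obj_id] = tag
    return preprocessed
```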
  • Patent number: 11328182
    Abstract: A three-dimensional (3D) map inconsistency detection machine includes an input transformation layer connected to a neural network. The input transformation layer is configured to 1) receive a test 3D map including 3D map data modeling a physical entity, 2) transform the 3D map data into a set of 2D images collectively corresponding to volumes of view frustums of a plurality of virtual camera views of the physical entity modeled by the test 3D map, and 3) output the set of 2D images to the neural network. The neural network is configured to output an inconsistency value indicating a degree to which the test 3D map includes inconsistencies based on analysis of the set of 2D images collectively corresponding to the volumes of the view frustums of the plurality of virtual camera views.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: May 10, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lukas Gruber, Christoph Vogel, Ondrej Miksik, Marc Andre Leon Pollefeys
  • Patent number: 11321880
    Abstract: [Object] There is provided a mechanism that makes it possible to efficiently specify an important region of a moving image that includes dynamic content. [Solution] An information processor includes a control unit that recognizes a motion of an operator with respect to an operation target in a moving image and specifies an important region of the operation target in the moving image on the basis of an operation position of the operator.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 3, 2022
    Assignee: SONY CORPORATION
    Inventor: Shogo Takanashi
  • Patent number: 11315296
    Abstract: A digital map is displayed via a user interface in a map viewport. The digital map includes various features representing respective entities in a geographic area, each of the features being displayed at a same level of magnification. Geolocated points of interest are determined within the geographic area, and a focal point of the map viewport is determined. For each of the indicators, the size of the indicator is varied in accordance with the distance between the geographic location corresponding to the indicator and the geographic location corresponding to the focal point of the map viewport. The indicators are then displayed on the digital map.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: April 26, 2022
    Assignee: GOOGLE LLC
    Inventors: Scott Mongrain, Bailiang Zhou
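The distance-dependent sizing described above can be sketched in a few lines. The pixel bounds, the planar distance, and the exponential falloff are illustrative assumptions, not Google's actual sizing curve:

```python
import math

# Indicators near the viewport's focal point render larger; size decays
# smoothly toward a minimum as the distance to the focal point grows.
def indicator_size_px(poi, focal, min_px=8.0, max_px=24.0, scale=500.0):
    d = math.hypot(poi[0] - focal[0], poi[1] - focal[1])
    return min_px + (max_px - min_px) * math.exp(-d / scale)
```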
  • Patent number: 11302082
    Abstract: Provided herein are exemplary embodiments directed to a method for creating digital media, including placing the digital media in a computer graphics environment, the computer graphics environment further comprising visually perceptible elements appearing as real objects placed in a real world setting, and viewing the digital media when at the real world setting. Various exemplary systems include an augmented reality and virtual reality server connected to a network, and a client device connected to the network, the client device having an augmented reality and virtual reality application. Further exemplary systems include a body or motion sensor connected to the client device and/or an augmented reality and virtual reality interface connected to the client device.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: April 12, 2022
    Assignee: tagSpace Pty Ltd
    Inventor: Paul Simon Martin
  • Patent number: 11302077
    Abstract: Augmented reality guidance for guiding a user through an environment using an eyewear device. The eyewear device includes a display system and a position detection system. A user is guided through an environment by monitoring a current position of the eyewear device within the environment, identifying marker positions within a threshold of the current position, the marker positions defined with respect to the environment and associated with guidance markers, registering the marker positions, generating an overlay image including the guidance markers, and presenting the overlay image on a display of the eyewear device.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 12, 2022
    Assignee: Snap Inc.
    Inventors: Shin Hwun Kang, Dmytro Kucher, Dmytro Hovorov, Ilteris Canberk
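The marker-identification step above reduces to a distance query against registered marker positions. The plain Euclidean test and the data layout are assumptions, not Snap's implementation:

```python
import math

# From the device's current position, keep only the guidance markers
# registered within a distance threshold of that position.
def markers_in_range(current_pos, marker_positions, threshold):
    return [name for name, pos in marker_positions.items()
            if math.dist(current_pos, pos) <= threshold]
```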
  • Patent number: 11295517
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating realistic full-scene point clouds. One of the methods includes obtaining an initial scene point cloud characterizing an initial scene in an environment; obtaining, for each of one or more objects, an object point cloud that characterizes the object; and processing a first input comprising the initial scene point cloud and the one or more object point clouds using a first neural network that is configured to process the first input to generate a final scene point cloud that characterizes a transformed scene that has the one or more objects added to the initial scene.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: April 5, 2022
    Assignee: Waymo LLC
    Inventors: Yin Zhou, Dragomir Anguelov, Zhangjie Cao
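Assembling the first input described above (the initial scene point cloud plus one point cloud per object) can be sketched as a labeled concatenation. The source-label column is an assumption; the patented neural network that produces the final scene is not reproduced here:

```python
import numpy as np

# Stack the scene points (label 0) with each object's points (labels 1..K)
# into a single (N, 4) array of xyz + source label.
def assemble_input(scene_points, object_point_clouds):
    parts = [np.hstack([scene_points, np.zeros((len(scene_points), 1))])]
    for i, obj in enumerate(object_point_clouds, start=1):
        parts.append(np.hstack([obj, np.full((len(obj), 1), float(i))]))
    return np.vstack(parts)
```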
  • Patent number: 11294697
    Abstract: A computing device may include a memory and a processor cooperating with the memory to generate data to correct errors in transmission of packets to a client device, based upon a ratio of a first bandwidth, in which to transfer content of a buffer, to a second bandwidth, in which to transfer the generated data, with the packets transferring the content and the generated data to the client device via a channel. The processor may further adjust the ratio based upon a parameter of the channel, and send the content of the buffer and the generated data via packets and through the channel to the client device based on the adjusted ratio.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: April 5, 2022
    Assignee: CITRIX SYSTEMS, INC.
    Inventor: Georgy Momchilov
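A bandwidth split adjusted by a channel parameter, as described above, can be sketched as follows. The loss-rate mapping and the bounds are illustrative assumptions, not Citrix's algorithm:

```python
# Divide total channel bandwidth between buffer content and error-correction
# data; more packet loss shifts bandwidth toward the correction data.
def split_bandwidth(total_bw, loss_rate, min_fec=0.05, max_fec=0.5):
    fec_fraction = min(max_fec, max(min_fec, 2.0 * loss_rate))
    content_bw = total_bw * (1.0 - fec_fraction)
    return content_bw, total_bw - content_bw
```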
  • Patent number: 11297229
    Abstract: A method of acquiring images includes moving an acquisition device, which includes at least one camera, into a plurality of acquisition locations, and acquiring at each acquisition location at least one image of a scene with the camera. Each acquisition location is chosen in such a manner that the scenes viewed by the camera at two consecutive acquisition locations, and the corresponding images, overlap at least partially, and that an areal density of pixels assigned to at least one element of the corresponding scene, which is represented in the corresponding image by a high-resolution portion, is greater than 50% or greater than 80% of a target areal density, the areal density of pixels being defined as a ratio of the area of the element projected in a plane perpendicular to an optical axis of the camera over a quantity of pixels of the high-resolution portion.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: April 5, 2022
    Assignee: SOLETANCHE FREYSSINET
    Inventors: Guy Perazio, Jose Peral, Serge Valcke, Luc Chambaud
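The acceptance test stated in the abstract is a direct ratio check; transcribing its own definition (projected element area over the pixel count of the high-resolution portion, compared against a fraction of a target density) gives. Units and example values here are illustrative:

```python
# Compute the areal density of pixels exactly as defined in the abstract and
# test it against the stated fraction (50% or 80%) of the target density.
def acquisition_ok(projected_area, pixel_count, target_density, fraction=0.5):
    density = projected_area / pixel_count
    return density >= fraction * target_density
```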
  • Patent number: 11295483
    Abstract: Systems, computer program products, and methods are described herein for immersive deep learning in a virtual reality environment. The present invention is configured to electronically receive, via the extended reality platform, an image of a financial resource; electronically receive, via the extended reality platform, a first user input selecting a machine learning model type; electronically receive, via the extended reality platform, a second user input selecting one or more interaction options; initiate a machine learning model on the image; extract, using the machine learning model, one or more features associated with the image; generate, using the saliency map generator, a saliency map for the image by superimposing the one or more features on the image; and transmit control signals configured to cause the computing device associated with the user to display, via the extended reality platform, the saliency map associated with the image.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: April 5, 2022
    Assignee: BANK OF AMERICA CORPORATION
    Inventor: Madhusudhanan Krishnamoorthy
  • Patent number: 11270476
    Abstract: Disclosed is a computer-implemented method for providing photorealistic changes for a digital image. The method includes receiving a digital image of a dressable model, receiving digital cutout garment textures that are indexed according to an outfitting layering order and aligned with body shape and pose of the dressable model, receiving binary silhouettes of the digital cutout garment textures, generating a garment layer index mask by compositing the binary silhouettes of the digital cutout garment textures indexed according to the outfitting layering order, receiving a composite image obtained by overlaying the digital cutout garment textures according to the indexed outfitting layering order on the digital image of the dressable model, inputting the composite image and the garment layer index mask into a machine learning system for providing photorealistic changes, and receiving from the machine learning system a digital file including photorealistic changes for application to the composite image.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: March 8, 2022
    Assignee: METAIL LIMITED
    Inventors: Yu Chen, Jim Downing, Tom Adeyoola, Sukrit Shankar
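The garment layer index mask described above composites binary silhouettes in layering order. The index convention (1-based, 0 for background) is an assumption for illustration:

```python
import numpy as np

# Composite binary silhouettes, ordered from innermost to outermost garment,
# so each pixel records the index of the outermost garment covering it
# (0 = no garment).
def garment_layer_index_mask(silhouettes):
    mask = np.zeros(silhouettes[0].shape, dtype=np.int32)
    for layer_index, silhouette in enumerate(silhouettes, start=1):
        mask[silhouette.astype(bool)] = layer_index
    return mask
```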
  • Patent number: 11263796
    Abstract: Computer animation involving monocular pose prediction is disclosed. A plurality of candidate pose sequences of a three-dimensional model of an animation character is generated such that each candidate pose of each sequence has a segmentation map that matches a segmentation map of a corresponding character derived from a corresponding frame of a video. A distance between candidate poses at each time step is maximized. An optimum pose sequence is determined and used to generate a corresponding sequence of frames of animation.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: March 1, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Sergey Bashkirov, Michael Taylor
  • Patent number: 11263788
    Abstract: An apparatus and method for superimposing a teeth data image onto a face image is provided. The apparatus for superimposing the teeth data image onto the face image includes a scanner for generating a teeth data image by scanning the teeth of a patient; a camera for obtaining a side face image by photographing a side face of the patient; and a control unit for extracting a feature point of the side face image, extracting a reference line of the side face based on the feature point, adjusting a size of the teeth data image according to a length of the reference line of the side face, and superimposing the teeth data image onto the side face image.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: March 1, 2022
    Inventor: Tae Weon Kim
  • Patent number: 11263814
    Abstract: This application discloses methods and apparatus for rendering a virtual channel in a multi-world virtual scene. The method includes generating a virtual scene for displaying a virtual channel, a first world, and a second world, the virtual channel being used for a render camera to move between the first and second worlds; receiving a control request for triggering the render camera to move in the virtual scene; and identifying a location of the render camera in response to the control request to obtain a movement trajectory of the render camera during movement in the virtual scene. The method further includes detecting a world state according to the movement trajectory; determining whether the world state is an intermediate state; in response to determining the world state is the intermediate state, generating a room for accommodating the render camera; and displaying the room in the virtual scene.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: March 1, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Huifu Jiang, Feng Xue, Nan Liu, Yifan Guo, Yaning Liu
  • Patent number: 11263827
    Abstract: Methods, systems, and devices for annotating three-dimensional displays are described herein. One method includes displaying, by a computing device, a particular view of a 3D model of a facility, the 3D model including a plurality of objects, each object associated with a respective annotation, determining a context associated with the 3D model, and displaying a subset of the plurality of annotations associated with a respective subset of the plurality of objects based on the context.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: March 1, 2022
    Assignee: Honeywell International Inc.
    Inventors: Henry Chen, Tom Plocher, Jian Geng Du, Liana M. Kiff
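Displaying only the subset of annotations relevant to the determined context, as described above, can be sketched as a filter. The field names and the set-membership test are assumptions:

```python
# Keep only the annotations whose associated objects are tagged as relevant
# to the current context of the 3D model view.
def annotations_for_context(objects, context):
    return [obj["annotation"] for obj in objects if context in obj["contexts"]]
```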
  • Patent number: 11232533
    Abstract: Embodiments are generally directed to memory prefetching in a multiple-GPU environment. An embodiment of an apparatus includes multiple processors including a host processor and multiple graphics processing units (GPUs) to process data, each of the GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the GPUs is to prefetch data from the memory to the cache of the GPU; and wherein the prefetcher of a GPU is prohibited from prefetching from a page that is not owned by the GPU or by the host processor.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: January 25, 2022
    Assignee: INTEL CORPORATION
    Inventors: Joydeep Ray, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Nicolas Galoppo von Borries, Varghese George, Altug Koker, Elmoustapha Ould-Ahmed-Vall, Mike Macpherson, Subramaniam Maiyuran
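The ownership rule in this abstract reduces to a simple predicate. The identifier scheme below is an assumption for illustration:

```python
# A GPU's prefetcher may fetch from a page only when the page is owned by
# that GPU or by the host processor.
def may_prefetch(page_owner, requesting_gpu, host_id="host"):
    return page_owner in (requesting_gpu, host_id)
```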
  • Patent number: 11210848
    Abstract: There is provided a method for unsupervised training of a machine learning model, comprising: receiving 3D images depicting a respective object, for each respective 3D image: dividing the 3D image into 3D patches, computing a first 2D image corresponding to a first orientation of the respective object, computing a second 2D image corresponding to a second orientation, automatically labelling pairs of 2D patches from the first and second 2D images with a patch measure indicative of likelihood of a certain 3D patch of the 3D image corresponding to a certain pair of 2D patches, training the ML model using a training dataset including the labelled patch pairs, for receiving patches extracted from first and second 2D images captured by an imaging sensor at the first and second orientations, and outputting an indication of likelihood of a visual finding in a 3D region of the object corresponding to the 2D patches.
    Type: Grant
    Filed: June 14, 2020
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Vadim Ratner, Yoel Shoshan, Yishai Shimoni