Patents Examined by Maurice L McDowell, Jr.
  • Patent number: 11455383
    Abstract: A system and method are disclosed for identifying a user based on the classification of user movement data. An identity verification system receives a sequence of motion data characterizing movements performed by a target user. The sequence of motion data is received as a point cloud of the motion data. The point cloud is input to a machine-learned model trained on manually labeled point clusters of a training set of motion data that each represent a movement. The machine-learned model identifies a movement represented by the point cloud of the motion data and assigns a label describing the movement to the point cloud. The system generates a labeled representation of the sequence of motion data comprising the label identifying a portion of the sequence of motion data corresponding to the identified movement.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: September 27, 2022
    Assignee: TruU, Inc.
    Inventors: Lucas Allen Budman, Amitabh Agrawal, Andrew Weber Spott
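The labeling step this abstract describes can be sketched as a function that windows the motion sequence, classifies each window's point cloud, and records which span of the sequence each label covers. The window size and the toy height-based classifier below are stand-ins for the trained model, which the abstract does not detail:

```python
# Hypothetical sketch: a classifier maps a point cloud of motion samples to a
# movement label, and the output pairs each label with the span it covers.

def label_motion_sequence(samples, classify, window=4):
    """Split `samples` into fixed windows, classify each window's point cloud,
    and return (label, start_index, end_index) tuples."""
    labeled = []
    for start in range(0, len(samples), window):
        cloud = samples[start:start + window]
        if not cloud:
            break
        label = classify(cloud)
        labeled.append((label, start, start + len(cloud) - 1))
    return labeled

# Stand-in for the machine-learned model: classify by net vertical motion.
def toy_classifier(cloud):
    dz = cloud[-1][2] - cloud[0][2]
    return "raise" if dz > 0 else "lower"

samples = [(0, 0, 0.0), (0, 0, 0.2), (0, 0, 0.5), (0, 0, 0.9),
           (0, 0, 0.9), (0, 0, 0.5), (0, 0, 0.2), (0, 0, 0.0)]
print(label_motion_sequence(samples, toy_classifier))
# [('raise', 0, 3), ('lower', 4, 7)]
```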
  • Patent number: 11449363
    Abstract: A method and system for computing one or more outputs of a neural network having a plurality of layers is provided. The method and system can include determining a plurality of sub-computations from the total computations of the neural network to execute in parallel, wherein the computations to execute in parallel involve computations from multiple layers. The method and system can also include avoiding repeating overlapped computations and/or multiple memory reads and writes during execution.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: September 20, 2022
    Assignee: Neuralmagic Inc.
    Inventors: Alexander Matveev, Nir Shavit
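One way to picture the overlap-avoidance idea (this is an illustration, not Neuralmagic's implementation) is a layered computation where adjacent outputs share sub-computations across layers, and a cache ensures each (layer, position) value is computed once:

```python
from functools import lru_cache

# Toy model: each layer averages a 2-wide window of the layer below, so
# neighboring top-level outputs overlap heavily. Caching skips repeated work.

INPUT = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
CALLS = []  # record which sub-computations actually run

@lru_cache(maxsize=None)
def value(layer, pos):
    """Output of `layer` at `pos`, computed recursively through lower layers."""
    CALLS.append((layer, pos))
    if layer == 0:
        return INPUT[pos]
    return 0.5 * (value(layer - 1, pos) + value(layer - 1, pos + 1))

# Two adjacent top-layer outputs share the sub-tree rooted at value(1, 1):
out = [value(2, 0), value(2, 1)]
print(out)         # [2.0, 3.0]
print(len(CALLS))  # 9 distinct sub-computations instead of 14 without caching
```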
  • Patent number: 11430218
    Abstract: A bird's eye view feature map, augmented with semantic information, can be used to detect an object in an environment. A point cloud data set augmented with the semantic information that is associated with identities of classes of objects can be obtained. Features can be extracted from the point cloud data set. Based on the features, an initial bird's eye view feature map can be produced. Because operations performed on the point cloud data set to extract the features or to produce the initial bird's eye view feature map can have an effect of diminishing an ability to distinguish the semantic information in the initial bird's eye view feature map, the initial bird's eye view feature map can be augmented with the semantic information to produce an augmented bird's eye view feature map. Based on the augmented bird's eye view feature map, the object in the environment can be detected.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: August 30, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Jie Li, Rares A. Ambrus, Vitor Guizilini, Adrien David Gaidon, Jia-En Pan
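A minimal sketch of the augmentation idea above (grid size, features, and class encoding are all invented for illustration): bin labeled points into a bird's-eye-view grid, then carry per-class counts alongside the height feature so the semantic information survives aggregation:

```python
# Hedged sketch: build a BEV grid from labeled points (max height per cell),
# then attach per-class counts so the semantics are not lost.

def bev_with_semantics(points, grid=2, num_classes=2):
    """points: (x, y, z, class_id) with 0 <= x, y < grid.
    Returns per-cell [max_height, class_0_count, class_1_count, ...]."""
    cells = [[[0.0] + [0] * num_classes for _ in range(grid)] for _ in range(grid)]
    for x, y, z, cls in points:
        cell = cells[y][x]
        cell[0] = max(cell[0], z)   # height feature
        cell[1 + cls] += 1          # semantic augmentation
    return cells

pts = [(0, 0, 1.5, 0), (0, 0, 0.4, 1), (1, 1, 2.0, 1)]
print(bev_with_semantics(pts))
# [[[1.5, 1, 1], [0.0, 0, 0]], [[0.0, 0, 0], [2.0, 0, 1]]]
```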
  • Patent number: 11423583
    Abstract: The exemplary embodiments disclose a method, a computer program product, and a computer system for mitigating the risks associated with handling items. The exemplary embodiments may include collecting data relating to one or more items, extracting one or more features from the collected data, determining one or more hazards based on the extracted one or more features and one or more models, and displaying the one or more hazards within an augmented reality device worn by a user.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: August 23, 2022
    Assignee: International Business Machines Corporation
    Inventors: Shikhar Kwatra, Sarbajit K. Rakshit, Adam Lee Griffin, Spencer Thomas Reynolds
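The hazard-determination step can be illustrated with simple threshold models over extracted features; the feature names, thresholds, and messages below are invented and only stand in for the models the abstract refers to:

```python
# Toy sketch: features extracted from item data pass through threshold rules
# that yield hazard messages an AR overlay could display.

def detect_hazards(features, rules):
    """features: name -> value; rules: name -> (threshold, hazard message)."""
    return [msg for name, (thresh, msg) in rules.items()
            if features.get(name, 0) > thresh]

rules = {"weight_kg": (20, "heavy: lift with care"),
         "surface_temp_c": (60, "hot surface")}
print(detect_hazards({"weight_kg": 25, "surface_temp_c": 30}, rules))
# ['heavy: lift with care']
```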
  • Patent number: 11403838
    Abstract: An image processing method is disclosed. The image processing method may include inputting a first image and a third image to a pre-trained style transfer network model, the third image being a composited image formed by the first image and a second image; extracting content features of the third image and style features of the second image, normalizing the content features of the third image based on the style features of the second image to obtain target image features, and generating a target image based on the target image features and outputting the target image by using the pre-trained style transfer network model.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: August 2, 2022
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Dan Zhu, Hanwen Liu, Pablo Navarrete Michelini, Lijie Zhang
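The normalization step described (content features normalized to match style-feature statistics) resembles adaptive instance normalization. A minimal 1D sketch, assuming simple mean/variance statistics rather than the patent's full network:

```python
import math

# AdaIN-style sketch: rescale content features to the style features'
# mean and standard deviation.

def adain(content, style, eps=1e-5):
    def stats(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return mean, math.sqrt(var + eps)

    c_mean, c_std = stats(content)
    s_mean, s_std = stats(style)
    return [s_std * (x - c_mean) / c_std + s_mean for x in content]

out = adain([0.0, 2.0], [10.0, 12.0])
print([round(x, 3) for x in out])  # [10.0, 12.0]: content now has style's stats
```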
  • Patent number: 11403812
    Abstract: A three-dimensional (3D) object reconstruction method, a computer apparatus, and a storage medium are provided. A terminal acquires a scanning image sequence of a target object, with the scanning image sequence including at least one frame of at least one scanning image including depth information, uses a neural network algorithm to acquire a predicted semantic label of each scanning image based on the at least one scanning image in the scanning image sequence, and then reconstructs a 3D model of the target object according to the at least one predicted semantic label and the at least one scanning image in the scanning image sequence. In one aspect, the reconstructing comprises: mapping, according to the label distribution corresponding to each voxel of a 3D preset model and each label of the at least one predicted semantic label, each scanning image to a corresponding position of the 3D preset model, to obtain the 3D model.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: August 2, 2022
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Ruizhen Hu, Hui Huang
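The per-voxel label distribution can be pictured as a voting scheme: each scan's predicted label votes into the voxels it maps to, and each voxel takes its most frequent label. This is a hypothetical simplification of the mapping step:

```python
from collections import Counter

# Sketch of label fusion: voxels accumulate a distribution of predicted
# labels across scans, then each keeps its most frequent label.

def fuse_labels(observations):
    """observations: iterable of (voxel_id, predicted_label) from all scans.
    Returns voxel_id -> most frequent label."""
    dist = {}
    for voxel, label in observations:
        dist.setdefault(voxel, Counter())[label] += 1
    return {voxel: counts.most_common(1)[0][0] for voxel, counts in dist.items()}

obs = [(7, "chair"), (7, "chair"), (7, "table"), (9, "table")]
print(fuse_labels(obs))  # {7: 'chair', 9: 'table'}
```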
  • Patent number: 11389252
    Abstract: A marker for image-guided surgery, consisting of a base having a base axis and connecting to a clamp, and an alignment target. The alignment target includes a target region having an alignment pattern formed thereon, and a socket connected to the target region and configured to fit rotatably to the base, whereby the alignment target is rotatable about the base axis. The alignment target also includes an optical indicator for the socket indicating an angle of orientation of the alignment target about the base axis.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: July 19, 2022
    Assignee: AUGMEDICS LTD.
    Inventors: Tomer Gera, Nissan Elimelech, Nitzan Krasney, Stuart Wolf
  • Patent number: 11386603
    Abstract: A method comprises: accessing animation graphics files and a mask graphics file; generating first binary sequences corresponding to the animation graphics files, and generating a second binary sequence corresponding to the mask graphics file; and outputting the first binary sequences and the second binary sequence to hardware controlling an array of electrical components.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: July 12, 2022
    Assignee: Illumina, Inc.
    Inventors: Brian Sinofsky, Kirkpatrick Norton, Soham Sheth
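The conversion step can be sketched as flattening each frame, and the mask, into bit sequences that per-element hardware could consume. The frame layout and bit order here are assumptions, not the patent's format:

```python
# Speculative sketch: flatten animation frames and a mask (2D 0/1 grids)
# into flat bit sequences for an array of electrical components.

def frames_to_bits(frames):
    """frames: list of 2D 0/1 pixel grids -> list of flat bit sequences."""
    return [[bit for row in frame for bit in row] for frame in frames]

animation = [
    [[1, 0], [0, 1]],   # frame 1
    [[0, 1], [1, 0]],   # frame 2
]
mask = [[1, 1], [0, 0]]

first = frames_to_bits(animation)
second = frames_to_bits([mask])[0]
print(first)   # [[1, 0, 0, 1], [0, 1, 1, 0]]
print(second)  # [1, 1, 0, 0]
```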
  • Patent number: 11379948
    Abstract: A computer implemented method for warping virtual content includes receiving rendered virtual content data, the rendered virtual content data including a far depth. The method also includes receiving movement data indicating a user movement in a direction orthogonal to an optical axis. The method further includes generating warped rendered virtual content data based on the rendered virtual content data, the far depth, and the movement data.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: July 5, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Reza Nourai, Robert Blake Taylor, Michael Harold Liebenow, Gilles Cadet
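The role of the far depth can be illustrated with basic parallax geometry: for a lateral movement `dx`, a point at depth `z` shifts on screen by roughly `focal * dx / z`, so content at the far depth barely moves. The numbers and the linear model are illustrative only:

```python
# Loose sketch of depth-based reprojection for lateral head movement.

def parallax_shift(dx, depth, focal=1.0):
    """Approximate screen-space shift for lateral movement dx at given depth."""
    return focal * dx / depth

def warp_points(points, dx, focal=1.0):
    """points: (x_screen, depth) -> warped x_screen values."""
    return [x - parallax_shift(dx, z, focal) for x, z in points]

near_far = [(0.5, 1.0), (0.5, 100.0)]   # same pixel, near vs. far depth
print(warp_points(near_far, dx=0.1))
# roughly [0.4, 0.499]: near content moves 100x more than far content
```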
  • Patent number: 11379105
    Abstract: In a method for displaying a three dimensional interface on a device, a scene is displayed on a display of the device and a three dimensional user interface control with three dimensional effects is displayed on the display of the device, the three dimensional effects based on a virtual light source, a virtual camera, and a virtual depth of a three dimensional object relative to the scene. A change in the position of the device relative to the virtual light source and the virtual camera is detected. The three dimensional effects are dynamically changed based on the change in position of the device relative to the virtual light source and the virtual camera. Orientation of the virtual camera is dynamically changed to change the display of the scene and the display of the three dimensional user interface control to a new perspective based on the change in position of the device relative to the virtual light source and the virtual camera.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: July 5, 2022
    Assignee: Embarcadero Technologies, Inc.
    Inventors: Michael L. Swindell, John R. Thomas
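One such effect can be pictured as a shadow offset on a UI control recomputed from the device's tilt relative to a fixed virtual light. The 2D geometry below is a stand-in for the full 3D shading the abstract describes; all angles and sizes are made up:

```python
import math

# Sketch: a drop-shadow offset that changes as the device tilts relative
# to a fixed virtual light source.

def shadow_offset(device_tilt_deg, light_angle_deg=45.0, depth=4.0):
    """Offset (pixels) of a control's drop shadow given device tilt."""
    angle = math.radians(light_angle_deg - device_tilt_deg)
    return (round(depth * math.cos(angle), 2), round(depth * math.sin(angle), 2))

print(shadow_offset(0.0))   # (2.83, 2.83): light at 45 degrees
print(shadow_offset(45.0))  # (4.0, 0.0): device tilted toward the light
```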
  • Patent number: 11380038
    Abstract: To enable users to create animations in a virtual space, an animation production method comprising: placing a first object, a second object, and a virtual camera in a virtual space; controlling an action of the first object in response to an operation from the first user; controlling an action of the second object in response to an operation from the second user; and controlling the camera, in response to an operation from the first user or the second user, to shoot the first and second objects.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: July 5, 2022
    Assignee: AniCast RM Inc.
    Inventors: Yoshihito Kondoh, Masato Murohashi
  • Patent number: 11373358
    Abstract: Ray tracing hardware accelerators supporting motion blur and moving/deforming geometry are disclosed. For example, dynamic objects in an acceleration data structure are encoded with temporal and spatial information. The hardware includes circuitry that tests ray intersections against moving/deforming geometry by applying such temporal and spatial information. Such circuitry accelerates the visibility sampling of moving geometry, including rigid body motion and object deformation, and its associated moving bounding volumes to a performance similar to that of the visibility sampling of static geometry.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: June 28, 2022
    Assignee: NVIDIA Corporation
    Inventors: Gregory Muthler, John Burgess
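A software illustration of the principle (not NVIDIA's hardware logic): store a moving axis-aligned box as its extents at t=0 and t=1, interpolate the box to the ray's timestamp, then run a standard slab test:

```python
# Sketch: time-interpolated bounding box intersection for motion blur.

def lerp(a, b, t):
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def hits_moving_box(origin, direction, t, box_t0, box_t1):
    """box_t0/box_t1: (lo, hi) corner pairs at times 0 and 1."""
    lo = lerp(box_t0[0], box_t1[0], t)
    hi = lerp(box_t0[1], box_t1[1], t)
    t_near, t_far = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if d == 0.0:
            if not (l <= o <= h):
                return False
            continue
        t0, t1 = sorted(((l - o) / d, (h - o) / d))
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far

# Unit box sliding from x in [0, 1] to x in [4, 5]; ray fires along +y at x = 0.5.
box0 = ([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
box1 = ([4.0, 0.0, 0.0], [5.0, 1.0, 1.0])
ray_o, ray_d = [0.5, -1.0, 0.5], [0.0, 1.0, 0.0]
print(hits_moving_box(ray_o, ray_d, 0.0, box0, box1))  # True: box still at x in [0, 1]
print(hits_moving_box(ray_o, ray_d, 1.0, box0, box1))  # False: box has moved away
```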
  • Patent number: 11367263
    Abstract: Devices and techniques are generally described for image-guided three dimensional (3D) modeling. In various examples, a first two-dimensional (2D) image representing an object may be received. A first three-dimensional (3D) model corresponding to the first 2D image may be determined from among a plurality of 3D models. A first selection of a first portion of the first 2D image may be received. A second selection of a second portion of the first 3D model corresponding to the portion of the first 2D image may be received. At least one transformation of the first 3D model may be determined based at least in part on differences between a geometric feature of the first portion of the first 2D image and a geometric feature of the second portion of the first 3D model. A modified 3D model may be generated by applying the at least one transformation to the first 3D model.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: June 21, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Frederic Laurent Pascal Devernay, Thomas Lund Dideriksen
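A hypothetical sketch of deriving a transformation from the paired selections: the ratio of a measured feature (here, an edge length) in the 2D image to the corresponding feature in the 3D model gives a scale factor to apply. The patent covers more general transformations; this shows only uniform scaling:

```python
# Sketch: derive a scale from corresponding geometric features and apply it.

def scale_from_features(image_len, model_len):
    return image_len / model_len

def apply_scale(vertices, s):
    return [(x * s, y * s, z * s) for x, y, z in vertices]

model = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 2.0, 0.0)]
s = scale_from_features(image_len=3.0, model_len=2.0)   # selected edge lengths
print(apply_scale(model, s))
# [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 3.0, 0.0)]
```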
  • Patent number: 11347905
    Abstract: Methods and systems for performing softbody tissue simulation are described. A two-dimensional (2D) vertex displacement grid, represented as a 2D texture of a softbody mesh, can be determined. The 2D texture can comprise pinned positions of vector displacements relative to base positions. The surface of a three-dimensional (3D) object can be displaced by adding the vector displacements stored in the 2D texture in order to perform softbody tissue simulation. The pinning can comprise sliding, and sliding objects can be represented as signed distance functions (SDFs).
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: May 31, 2022
    Assignee: Level Ex, Inc.
    Inventors: Sam Glassenberg, Matthew Yaeger
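The displacement-texture idea can be sketched directly: each surface vertex carries a (u, v) coordinate into a 2D grid of displacement vectors, and the deformed position is the base position plus the stored displacement. The grid here is tiny and hand-filled; sliding and SDFs are omitted:

```python
# Sketch: displace a 3D surface by vectors stored in a 2D grid ("texture").

def displace_surface(vertices, uvs, disp_grid):
    """vertices: base (x, y, z); uvs: (row, col) into disp_grid of (dx, dy, dz)."""
    out = []
    for (x, y, z), (r, c) in zip(vertices, uvs):
        dx, dy, dz = disp_grid[r][c]
        out.append((x + dx, y + dy, z + dz))
    return out

grid = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.1)],
        [(0.0, 0.1, 0.0), (0.0, 0.0, 0.0)]]
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
uvs = [(0, 1), (1, 0)]
print(displace_surface(verts, uvs, grid))
# [(0.0, 0.0, 0.1), (1.0, 0.1, 0.0)]
```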
  • Patent number: 11341689
    Abstract: One or more computer processors create a user-event localization model for an identified remote audience member in a plurality of identified remote audience members for an event. The one or more computer processors generate a virtual audience member based on the identified remote audience member utilizing a trained generative adversarial network and one or more user preferences. The one or more computer processors present the generated virtual audience member in a location associated with the event. The one or more computer processors dynamically adjust a presented virtual audience member responsive to one or more event occurrences utilizing the created user-event localization model.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 24, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K Baughman, Sai Krishna Reddy Gudimetla, Stephen C Hammer, Jeffrey D. Amsterdam, Sherif A. Goma
  • Patent number: 11335177
    Abstract: An example device includes a monitoring engine. The monitoring engine is to measure a characteristic of a wireless signal related to a path of the wireless signal. The wireless signal includes data from a remote source. The device includes an analysis engine to determine that an object is within a proximity threshold of a user based on the characteristic of the wireless signal. The device includes an indication engine to indicate to the user that the object is within the proximity threshold of the user.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: May 17, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Alexander Wayne Clark, Michael Bartha
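A simplified sketch of the signal-based proximity check: a drop in a measured signal characteristic (here, received strength) beyond a threshold is taken to mean an object has entered the signal path near the user. The threshold and units are invented for illustration:

```python
# Sketch: flag an object in the wireless signal's path when the measured
# characteristic changes sharply relative to a baseline.

def object_nearby(baseline_rssi, current_rssi, drop_threshold=10.0):
    """True when signal strength (dBm) drops by at least drop_threshold."""
    return (baseline_rssi - current_rssi) >= drop_threshold

print(object_nearby(-40.0, -55.0))  # True: 15 dBm drop suggests an obstruction
print(object_nearby(-40.0, -43.0))  # False: small fluctuation
```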
  • Patent number: 11335008
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
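The preprocessing step can be sketched as locating and modifying the tag data of any object carrying an anomaly alert before training. The field names and the down-weighting choice below are assumptions, not the patent's scheme:

```python
# Sketch: when a training frame carries an anomaly alert (e.g. occlusion)
# for an object, flag and down-weight that object's simulated tag.

def preprocess_tags(tags, alerts):
    """tags: object_id -> tag dict; alerts: object_id -> alert type."""
    out = {}
    for obj_id, tag in tags.items():
        tag = dict(tag)
        alert = alerts.get(obj_id)
        if alert is not None:
            tag["alert"] = alert
            tag["weight"] = 0.5   # trust occluded/ambiguous tags less
        out[obj_id] = tag
    return out

tags = {1: {"box": (10, 10, 40, 40), "weight": 1.0},
        2: {"box": (50, 50, 80, 80), "weight": 1.0}}
processed = preprocess_tags(tags, {1: "occlusion"})
print(processed[1])  # {'box': (10, 10, 40, 40), 'weight': 0.5, 'alert': 'occlusion'}
print(processed[2])  # {'box': (50, 50, 80, 80), 'weight': 1.0}: untouched
```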
  • Patent number: 11328182
    Abstract: A three-dimensional (3D) map inconsistency detection machine includes an input transformation layer connected to a neural network. The input transformation layer is configured to 1) receive a test 3D map including 3D map data modeling a physical entity, 2) transform the 3D map data into a set of 2D images collectively corresponding to volumes of view frustums of a plurality of virtual camera views of the physical entity modeled by the test 3D map, and 3) output the set of 2D images to the neural network. The neural network is configured to output an inconsistency value indicating a degree to which the test 3D map includes inconsistencies based on analysis of the set of 2D images collectively corresponding to the volumes of the view frustums of the plurality of virtual camera views.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: May 10, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lukas Gruber, Christoph Vogel, Ondrej Miksik, Marc Andre Leon Pollefeys
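The input transformation layer's job can be pictured as projecting the map's 3D points into several virtual camera views, yielding one 2D image per view for the downstream network. The orthographic axis-drop projection below is a coarse stand-in for real view-frustum rendering:

```python
# Coarse sketch: project 3D map points into multiple virtual views by
# orthographic projection (dropping one axis per "camera").

def project_views(points, views):
    """views: list of axis indices dropped per camera (orthographic)."""
    images = []
    for drop_axis in views:
        images.append([tuple(v for i, v in enumerate(p) if i != drop_axis)
                       for p in points])
    return images

pts = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
print(project_views(pts, views=[0, 2]))
# [[(2.0, 3.0), (5.0, 6.0)], [(1.0, 2.0), (4.0, 5.0)]]
```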
  • Patent number: 11321880
    Abstract: [Object] There is provided a mechanism that makes it possible to efficiently specify an important region of a moving image including dynamic content. [Solution] An information processor including a control unit that recognizes a motion of an operator with respect to an operation target in a moving image and specifies an important region of the operation target in the moving image on the basis of an operation position of the operator.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 3, 2022
    Assignee: SONY CORPORATION
    Inventor: Shogo Takanashi
  • Patent number: 11315296
    Abstract: A digital map is displayed via a user interface in a map viewport. The digital map includes various features representing respective entities in a geographic area, each of the features being displayed at the same level of magnification. Geolocated points of interest within the geographic area are determined, along with a focal point of the map viewport. For each of the indicators corresponding to the points of interest, the size of the indicator is varied in accordance with the distance between the geographic location corresponding to the indicator and the geographic location corresponding to the focal point of the map viewport. The indicators are then displayed on the digital map.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: April 26, 2022
    Assignee: GOOGLE LLC
    Inventors: Scott Mongrain, Bailiang Zhou
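The sizing rule can be sketched as a function of distance from the focal point, clamped between a maximum and minimum size. The linear falloff and the specific bounds are illustrative choices, not taken from the patent:

```python
# Sketch: indicator size shrinks with distance from the viewport's focal point.

def indicator_size(distance, max_size=24.0, min_size=8.0, falloff=2.0):
    """Pixel size for a point-of-interest indicator at `distance` (km) from focus."""
    return max(min_size, max_size - falloff * distance)

for d in (0.0, 5.0, 20.0):
    print(d, indicator_size(d))
# 0.0 24.0 / 5.0 14.0 / 20.0 8.0 (clamped at min_size)
```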