Patents Examined by Maurice L McDowell, Jr.
  • Patent number: 11494996
    Abstract: A method, a structure, and a computer system for tangible mixed reality. Exemplary embodiments may include identifying one or more real objects within an environment and identifying one or more interactions between the one or more real objects, one or more reactive objects, and one or more animations therebetween. The exemplary embodiments may further include generating one or more learning activities for a user of a mixed reality headset that include at least one of the one or more interactions, and deploying the one or more learning activities to the mixed reality headset.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: November 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Shikhar Kwatra, Pritesh Patel, Vijay Ekambaram, Prasenjit Dey
  • Patent number: 11481096
    Abstract: This disclosure pertains to systems, methods, and computer readable medium for mapping particular user interactions, e.g., gestures, to the input parameters of various image processing routines, e.g., image filters, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the software developer. Such techniques may handle the processing of both “relative” gestures, i.e., those gestures having values dependent on how much an input to the device has changed relative to a previous value of the input, and “absolute” gestures, i.e., those gestures having values dependent only on the instant value of the input to the device. Additionally, inputs to the device beyond user-input gestures may be utilized as input parameters to one or more image processing routines. For example, the device's orientation, acceleration, and/or position in three-dimensional space may be used as inputs to particular image processing routines.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: October 25, 2022
    Assignee: Apple Inc.
    Inventors: David Hayward, Chendi Zhang, Alexandre Naaman, Richard R. Dellinger, Giridhar S. Murthy
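As an illustration of the relative/absolute gesture distinction this abstract draws (a minimal sketch, not the patented implementation; all function and parameter names here are hypothetical), an absolute mapping depends only on the current input value, while a relative mapping nudges the parameter by the change since the previous input:

```python
def map_absolute(value, in_min, in_max, out_min, out_max):
    """Absolute gesture: parameter depends only on the instant input value."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + max(0.0, min(1.0, t)) * (out_max - out_min)

def map_relative(current_param, value, prev_value, sensitivity, out_min, out_max):
    """Relative gesture: parameter depends on the change since the last input."""
    new_param = current_param + (value - prev_value) * sensitivity
    return max(out_min, min(out_max, new_param))

# Example: a pinch distance drives a filter "radius" absolutely, while a
# drag delta nudges "brightness" relatively.
radius = map_absolute(value=120, in_min=0, in_max=300, out_min=0.0, out_max=25.0)
brightness = map_relative(current_param=0.5, value=42, prev_value=40,
                          sensitivity=0.01, out_min=0.0, out_max=1.0)
```

The same pattern extends naturally to non-gesture inputs such as device orientation: they simply become additional input values fed through one of these mappings.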
  • Patent number: 11478301
    Abstract: An example system is disclosed for generating a model of a tubular anatomical structure. The system includes an anatomical measurement wire (“AMW”), a tracking system and a computing device. The AMW is configured to be navigated through the anatomical structure of a patient, and the AMW includes at least one sensor. The tracking system is configured to provide tracking data representing multiple positions of the sensor in a spatial coordinate system. The computing device is configured to generate a data point cloud based on the tracking data, generate a parametric model corresponding to at least a portion of the anatomical structure based on the data point cloud and store the parametric model in non-transitory memory.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: October 25, 2022
    Assignee: CENTERLINE BIOMEDICAL, INC.
    Inventors: Karl J. West, Vikash R. Goel
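One simple way to go from a tracked-sensor point cloud to a parametric model, sketched here as an illustration only (the patent does not specify this fitting method), is to fit a polynomial centerline per coordinate against an arc-length parameter:

```python
import numpy as np

def fit_parametric_centerline(points, degree=3):
    """Fit a polynomial model x(t), y(t), z(t) to ordered sensor positions.

    `points` is an (N, 3) array of tracked sensor positions; t is an
    arc-length parameterisation normalized to [0, 1]. Returns one
    coefficient array per axis plus the parameter values.
    """
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    return [np.polyfit(t, points[:, k], degree) for k in range(3)], t

def evaluate(coeffs, t):
    """Evaluate the fitted centerline at parameter values t."""
    return np.stack([np.polyval(c, t) for c in coeffs], axis=-1)

# A synthetic point cloud along a gently curved path.
ts = np.linspace(0.0, 1.0, 50)
cloud = np.stack([ts, ts ** 2, 0.1 * ts], axis=1)
coeffs, t = fit_parametric_centerline(cloud)
model_pts = evaluate(coeffs, t)
```

A real system would likely use splines and incorporate sensor noise handling; the polynomial fit above just shows the point-cloud-to-parametric-model step in its smallest form.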
  • Patent number: 11475497
    Abstract: A system and method for the aesthetic design of a modular assemblage, comprising means for providing a client graphic user interface for receiving an input for defining parameters of the modular assemblage, and for presenting an image of the defined modular assemblage; communicating a code to a server representing the defined parameters; at the server, in dependence on the communicated code, defining a set of graphic elements corresponding to the defined modular assemblage; communicating the graphic elements from the server to the client; and displaying, at the client, the graphic elements received from the server to represent the defined modular assemblage.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: October 18, 2022
    Assignee: Florelle, Inc.
    Inventors: Kenneth Banschick, Andrei Gurulev
  • Patent number: 11475312
    Abstract: A processor-implemented method including implementing a deep neural network (DNN) model using input data, generating, by implementing the DNN model, first output data from the DNN model, changing the DNN model, generating, by implementing the changed DNN model using the input data, second output data of the changed DNN model, and determining result data by combining the first output data and the second output data.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: October 18, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Youngjun Kwak, Youngsung Kim, Byung In Yoo, Yong-Il Lee, Hana Lee, Sangil Jung
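The run/change/run-again/combine loop this abstract describes can be sketched in a few lines. This is a toy illustration, not the patented method: the tiny two-layer network and the particular way of "changing" the model (scaling the hidden weights) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """A minimal two-layer network: ReLU hidden layer, linear output."""
    h = np.maximum(0.0, x @ weights["w1"])
    return h @ weights["w2"]

def change_model(weights, scale=0.95):
    """One illustrative way to 'change' the model: shrink the hidden weights."""
    return {"w1": weights["w1"] * scale, "w2": weights["w2"]}

x = rng.normal(size=(4, 8))  # a batch of input data
weights = {"w1": rng.normal(size=(8, 16)), "w2": rng.normal(size=(16, 3))}

first_output = forward(weights, x)                  # original DNN model
second_output = forward(change_model(weights), x)   # changed DNN model
result = 0.5 * (first_output + second_output)       # combine the two outputs
```

Averaging the outputs of slightly different model variants is a common way to trade extra inference passes for more robust predictions.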
  • Patent number: 11468630
    Abstract: The disclosure provides a cloud-based renderer and methods of rendering a scene on a computing system using a combination of raytracing and rasterization. In one example, a method of rendering a scene includes: (1) generating at least one raytracing acceleration structure from scene data of the scene, (2) selecting raytracing and rasterization algorithms for rendering the scene based on the scene data, and (3) rendering the scene utilizing a combination of the raytracing algorithms and the rasterization algorithms, wherein the rasterization algorithms utilize primitive cluster data from the raytracing acceleration structures.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: October 11, 2022
    Assignee: NVIDIA Corporation
    Inventors: Christoph Kubisch, Ziyad Hakura, Manuel Kraemer
  • Patent number: 11461968
    Abstract: A computer-implemented method and system for modeling an outer surface, such as skin. The method includes, under the control of one or more computer systems configured with executable instructions, defining a plurality of microstructures to be displayed in microstructure locations on a geometric model of a character or inanimate object, and generating a volumetric mesh including the plurality of microstructures. The volumetric mesh is configured to be applied to the geometric model as an outer surface (e.g., skin) covering the geometric model.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: October 4, 2022
    Assignee: Unity Technologies SF
    Inventors: Emiliano Padovani, Artur Vill
  • Patent number: 11455383
    Abstract: A system and method are disclosed for identifying a user based on the classification of user movement data. An identity verification system receives a sequence of motion data characterizing movements performed by a target user. The sequence of motion data is received as a point cloud of the motion data. The point cloud is input to a machine-learned model trained based on manually labeled point clusters of a training set of motion data that each represent a movement. The machine-learned model identifies a movement represented by the point cloud of the motion data and assigns a label describing the movement to the point cloud. The system generates a labeled representation of the sequence of motion data comprising the label identifying a portion of the sequence of motion data corresponding to the identified movement.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: September 27, 2022
    Assignee: TruU, Inc.
    Inventors: Lucas Allen Budman, Amitabh Agrawal, Andrew Weber Spott
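The abstract's pipeline (manually labeled point clusters as training data, then label assignment for a new point cloud) can be illustrated with the simplest possible classifier. This nearest-centroid sketch is an assumption for illustration; the patent's machine-learned model is not specified here.

```python
import numpy as np

def centroid_labels(training_points, training_labels):
    """Compute one centroid per manually labeled cluster of motion points."""
    labels = sorted(set(training_labels))
    return {lab: np.mean([p for p, l in zip(training_points, training_labels)
                          if l == lab], axis=0)
            for lab in labels}

def classify_movement(point_cloud, centroids):
    """Assign the label whose cluster centroid is nearest to the cloud's mean."""
    mean = np.mean(point_cloud, axis=0)
    return min(centroids, key=lambda lab: np.linalg.norm(centroids[lab] - mean))

# Hypothetical 2D motion features with two labeled movements.
train_pts = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
train_labs = ["sitting", "sitting", "walking", "walking"]
cents = centroid_labels(train_pts, train_labs)
label = classify_movement(np.array([[4.8, 5.2], [5.2, 4.8]]), cents)
```

The labeled output (movement name plus the span of motion data it covers) is what downstream identity verification would consume.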
  • Patent number: 11449363
    Abstract: A method and system for computing one or more outputs of a neural network having a plurality of layers is provided. The method and system can include determining a plurality of sub-computations from total computations of the neural network to execute in parallel, wherein the computations to execute in parallel involve computations from multiple layers. The method and system can also include avoiding repeating overlapped computations and/or multiple memory reads and writes during execution.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: September 20, 2022
    Assignee: Neuralmagic Inc.
    Inventors: Alexander Matveev, Nir Shavit
  • Patent number: 11430218
    Abstract: A bird's eye view feature map, augmented with semantic information, can be used to detect an object in an environment. A point cloud data set augmented with the semantic information that is associated with identities of classes of objects can be obtained. Features can be extracted from the point cloud data set. Based on the features, an initial bird's eye view feature map can be produced. Because operations performed on the point cloud data set to extract the features or to produce the initial bird's eye view feature map can have an effect of diminishing an ability to distinguish the semantic information in the initial bird's eye view feature map, the initial bird's eye view feature map can be augmented with the semantic information to produce an augmented bird's eye view feature map. Based on the augmented bird's eye view feature map, the object in the environment can be detected.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: August 30, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Jie Li, Rares A. Ambrus, Vitor Guizilini, Adrien David Gaidon, Jia-En Pan
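The key observation in this abstract (feature extraction can wash out class identity, so the semantic channels are re-injected into the bird's eye view map) can be sketched with a toy rasterizer. The grid size, class indices, and channel layout below are assumptions for illustration only:

```python
import numpy as np

def augmented_bev(points, class_ids, num_classes, grid=8, extent=8.0):
    """Build a bird's eye view map: channel 0 is occupancy, and the remaining
    channels re-inject per-class semantic counts so class identity survives."""
    bev = np.zeros((1 + num_classes, grid, grid))
    cell = extent / grid
    for (x, y, _z), c in zip(points, class_ids):
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < grid and 0 <= j < grid:
            bev[0, i, j] = 1.0          # plain occupancy feature
            bev[1 + c, i, j] += 1.0     # semantic augmentation
    return bev

pts = [(0.5, 0.5, 0.0), (4.5, 4.5, 0.2), (4.6, 4.4, 0.1)]
classes = [0, 2, 2]  # e.g. 0 = road, 2 = vehicle (hypothetical class ids)
bev = augmented_bev(pts, classes, num_classes=3)
```

A detector consuming `bev` then sees both geometry (channel 0) and the class evidence that plain feature extraction might have diminished.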
  • Patent number: 11423583
    Abstract: The exemplary embodiments disclose a method, a computer program product, and a computer system for mitigating the risks associated with handling items. The exemplary embodiments may include collecting data relating to one or more items, extracting one or more features from the collected data, determining one or more hazards based on the extracted one or more features and one or more models, and displaying the one or more hazards within an augmented reality device worn by a user.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: August 23, 2022
    Assignee: International Business Machines Corporation
    Inventors: Shikhar Kwatra, Sarbajit K. Rakshit, Adam Lee Griffin, Spencer Thomas Reynolds
  • Patent number: 11403838
    Abstract: An image processing method is disclosed. The image processing method may include inputting a first image and a third image to a pre-trained style transfer network model, the third image being a composited image formed by the first image and a second image; extracting content features of the third image and style features of the second image; normalizing the content features of the third image based on the style features of the second image to obtain target image features; and generating and outputting a target image based on the target image features by using the pre-trained style transfer network model.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: August 2, 2022
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Dan Zhu, Hanwen Liu, Pablo Navarrete Michelini, Lijie Zhang
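The "normalizing the content features based on the style features" step reads like an adaptive instance normalization: match the content features' per-channel statistics to those of the style features. The sketch below is an assumption about that step, not the patent's exact network:

```python
import numpy as np

def normalize_to_style(content_feat, style_feat, eps=1e-5):
    """Re-normalize content features to the per-channel mean and standard
    deviation of the style features (an AdaIN-style operation)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return (content_feat - c_mean) / (c_std + eps) * s_std + s_mean

rng = np.random.default_rng(1)
content = rng.normal(0.0, 1.0, size=(4, 16, 16))  # features of the composited image
style = rng.normal(3.0, 2.0, size=(4, 16, 16))    # features of the style image
target = normalize_to_style(content, style)
```

After this operation the target features carry the content's spatial structure but the style's channel statistics, which is what a decoder then turns into the output image.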
  • Patent number: 11403812
    Abstract: A three-dimensional (3D) object reconstruction method, a computer apparatus and a storage medium are disclosed. A terminal acquires a scanning image sequence of a target object, with the scanning image sequence including at least one frame of at least one scanning image including depth information, uses a neural network algorithm to acquire a predicted semantic label of each scanning image based on the at least one scanning image in the scanning image sequence, and then reconstructs a 3D model of the target object according to the at least one predicted semantic label and the at least one scanning image in the scanning image sequence. In one aspect, the reconstructing comprises: mapping, according to label distribution corresponding to each voxel of a 3D preset model and each label of the at least one predicted semantic label, each scanning image to a corresponding position of the 3D preset model, to obtain the 3D model.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: August 2, 2022
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Ruizhen Hu, Hui Huang
  • Patent number: 11389252
    Abstract: A marker for image guided surgery, consisting of a base, having a base axis, connecting to a clamp; and an alignment target. The alignment target includes a target region having an alignment pattern formed thereon, and a socket connected to the target region and configured to fit rotatably to the base, whereby the alignment target is rotatable about the base axis. The alignment target also includes an optical indicator for the socket indicating an angle of orientation of the alignment target about the base axis.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: July 19, 2022
    Assignee: AUGMEDICS LTD.
    Inventors: Tomer Gera, Nissan Elimelech, Nitzan Krasney, Stuart Wolf
  • Patent number: 11386603
    Abstract: A method comprises: accessing animation graphics files and a mask graphics file; generating first binary sequences corresponding to the animation graphics files, and generating a second binary sequence corresponding to the mask graphics file; and outputting the first binary sequences and the second binary sequence to hardware controlling an array of electrical components.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: July 12, 2022
    Assignee: Illumina, Inc.
    Inventors: Brian Sinofsky, Kirkpatrick Norton, Soham Sheth
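The conversion from graphics files to binary sequences driving an array of electrical components can be illustrated with a threshold rasterizer. The frame sizes, threshold, and on/off convention below are hypothetical; the patent does not specify them here:

```python
def frame_to_bits(frame, threshold=128):
    """Flatten one grayscale frame into a binary sequence: 1 where the pixel
    is at or above the threshold (component on), 0 otherwise (off)."""
    return [1 if px >= threshold else 0 for row in frame for px in row]

# Two hypothetical 2x3 animation frames and one mask frame.
animation_frames = [
    [[255, 0, 255], [0, 255, 0]],
    [[0, 255, 0], [255, 0, 255]],
]
mask_frame = [[255, 255, 255], [0, 0, 0]]

first_sequences = [frame_to_bits(f) for f in animation_frames]   # per animation file
second_sequence = frame_to_bits(mask_frame)                      # for the mask file
```

The resulting bit sequences are exactly the kind of payload that hardware controlling an array of on/off elements would consume.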
  • Patent number: 11380038
    Abstract: To enable users to create animations in a virtual space, an animation production method comprises: placing first and second objects and a virtual camera in a virtual space; controlling an action of the first object in response to an operation from a first user; controlling an action of the second object in response to an operation from a second user; and controlling the camera in response to an operation from the first or second user to shoot the first and second objects.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: July 5, 2022
    Assignee: AniCast RM Inc.
    Inventors: Yoshihito Kondoh, Masato Murohashi
  • Patent number: 11379105
    Abstract: In a method for displaying a three dimensional interface on a device, a scene is displayed on a display of the device and a three dimensional user interface control with three dimensional effects is displayed on the display of the device, the three dimensional effects based on a virtual light source, a virtual camera, and a virtual depth of a three dimensional object relative to the scene. A change in the position of the device relative to the virtual light source and the virtual camera is detected. The three dimensional effects are dynamically changed based on the change in position of the device relative to the virtual light source and the virtual camera. Orientation of the virtual camera is dynamically changed to change the display of the scene and the display of the three dimensional user interface control to a new perspective based on the change in position of the device relative to the virtual light source and the virtual camera.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: July 5, 2022
    Assignee: Embarcadero Technologies, Inc.
    Inventors: Michael L. Swindell, John R. Thomas
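The core of the described three dimensional effect (shading a UI control against a virtual light source, and re-shading as the device moves relative to it) reduces to a diffuse lighting term. This is a minimal sketch of that idea, not the patented rendering pipeline:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert_intensity(surface_normal, light_dir):
    """Diffuse shading term: the cosine between the surface normal and the
    direction toward the virtual light source, clamped at zero."""
    n, l = normalize(surface_normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# Device held flat: the control's face points straight at the virtual light.
facing = lambert_intensity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
# Device tilted: the light now arrives at 45 degrees, so the highlight dims.
tilted = lambert_intensity((0.0, 0.0, 1.0), (1.0, 0.0, 1.0))
```

Re-evaluating this term as the device's reported orientation changes is what makes the control's highlights and shadows appear to respond to real-world movement.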
  • Patent number: 11379948
    Abstract: A computer implemented method for warping virtual content includes receiving rendered virtual content data, the rendered virtual content data including a far depth. The method also includes receiving movement data indicating a user movement in a direction orthogonal to an optical axis. The method further includes generating warped rendered virtual content data based on the rendered virtual content data, the far depth, and the movement data.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: July 5, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Reza Nourai, Robert Blake Taylor, Michael Harold Liebenow, Gilles Cadet
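A positional warp for sideways (orthogonal-to-optical-axis) movement is, at its simplest, a depth-dependent parallax shift. The sketch below assumes a single representative depth for the frame (here the far depth, as the abstract suggests) and a hypothetical pinhole focal length; it is an illustration, not Magic Leap's warp:

```python
def parallax_shift(user_translation, focal_length, depth):
    """Screen-space shift (pixels) induced by a sideways movement for
    content at the given depth: closer content shifts more."""
    return user_translation * focal_length / depth

def warp_point(point_xy, user_translation, focal_length, far_depth):
    """Warp a rendered 2D point using the frame's far depth as the single
    representative depth (a simplifying assumption of this sketch)."""
    dx = parallax_shift(user_translation, focal_length, far_depth)
    return (point_xy[0] - dx, point_xy[1])

shift_near = parallax_shift(user_translation=0.02, focal_length=1000.0, depth=1.0)
shift_far = parallax_shift(user_translation=0.02, focal_length=1000.0, depth=10.0)
warped = warp_point((400.0, 300.0), 0.02, 1000.0, far_depth=10.0)
```

The near/far comparison shows why depth matters: the same head movement shifts nearby content ten times farther on screen than content ten times deeper.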
  • Patent number: 11373358
    Abstract: Ray tracing hardware accelerators supporting motion blur and moving/deforming geometry are disclosed. For example, dynamic objects in an acceleration data structure are encoded with temporal and spatial information. The hardware includes circuitry that tests ray intersections against moving/deforming geometry by applying such temporal and spatial information. Such circuitry accelerates the visibility sampling of moving geometry, including rigid body motion and object deformation, and its associated moving bounding volumes to a performance similar to that of the visibility sampling of static geometry.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: June 28, 2022
    Assignee: NVIDIA Corporation
    Inventors: Gregory Muthler, John Burgess
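The essence of testing rays against moving bounding volumes is to interpolate the bounding box to the ray's time sample before running the usual intersection test. This software sketch (linear box interpolation plus a standard slab test) only illustrates the idea the hardware accelerates:

```python
def lerp_box(box_t0, box_t1, t):
    """Linearly interpolate a moving axis-aligned bounding box to time t.

    Boxes are (xmin, ymin, zmin, xmax, ymax, zmax) tuples.
    """
    return tuple(a + (b - a) * t for a, b in zip(box_t0, box_t1))

def ray_hits_box(origin, direction, box):
    """Standard slab test: does the ray intersect the axis-aligned box?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        lo, hi = box[axis], box[axis + 3]
        if abs(direction[axis]) < 1e-12:
            if not lo <= origin[axis] <= hi:
                return False
            continue
        t0 = (lo - origin[axis]) / direction[axis]
        t1 = (hi - origin[axis]) / direction[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

# A unit box sliding from x in [0,1] at t=0 to x in [4,5] at t=1.
box0 = (0.0, 0.0, 0.0, 1.0, 1.0, 1.0)
box1 = (4.0, 0.0, 0.0, 5.0, 1.0, 1.0)
ray_origin, ray_dir = (2.25, 0.5, -1.0), (0.0, 0.0, 1.0)

hit_early = ray_hits_box(ray_origin, ray_dir, lerp_box(box0, box1, 0.0))
hit_mid = ray_hits_box(ray_origin, ray_dir, lerp_box(box0, box1, 0.5))
```

The same ray misses the geometry at its starting time but hits it mid-motion, which is exactly the time-dependent visibility that motion-blur sampling needs.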
  • Patent number: 11367263
    Abstract: Devices and techniques are generally described for image-guided three dimensional (3D) modeling. In various examples, a first two-dimensional (2D) image representing an object may be received. A first three-dimensional (3D) model corresponding to the first 2D image may be determined from among a plurality of 3D models. A first selection of a first portion of the first 2D image may be received. A second selection of a second portion of the first 3D model corresponding to the portion of the first 2D image may be received. At least one transformation of the first 3D model may be determined based at least in part on differences between a geometric feature of the first portion of the first 2D image and a geometric feature of the second portion of the first 3D model. A modified 3D model may be generated by applying the at least one transformation to the first 3D model.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: June 21, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Frederic Laurent Pascal Devernay, Thomas Lund Dideriksen
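The transformation-from-corresponding-features step can be shown in its smallest form: derive a scale factor from the lengths of the selected 2D and 3D features and apply it to the model. The feature lengths and the restriction to uniform scaling are assumptions for this sketch; the patent covers transformations more generally:

```python
def estimate_scale(image_feature_len, model_feature_len):
    """Scale factor that makes the selected 3D feature match the 2D feature."""
    return image_feature_len / model_feature_len

def apply_scale(vertices, s):
    """Apply a uniform scale to every vertex of the 3D model."""
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]

# Hypothetical selections: an edge measuring 3.0 units in the 2D image and
# the corresponding edge on the 3D model measuring 2.0 units.
scale = estimate_scale(3.0, 2.0)
modified = apply_scale([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], scale)
```

A production system would solve for rotation and translation as well (e.g. a full similarity transform from several correspondences), but the scale-only case captures the modify-model-to-match-image idea.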