Patents Examined by Michael J Cobb
  • Patent number: 11154379
    Abstract: A method for guiding the position of a dental drill for implant treatment of a patient, the method comprising: acquiring a volume image of patient anatomy; superimposing an image of a planned drill hole on a display of the acquired volume image according to observer instructions to form an implant plan; displaying at least a portion of the implant plan in stereoscopic form on a head-mounted device worn by an observer and tracking patient position so that the displayed portion of the implant plan is registered to the patient anatomy that lies in the observer's field of view; and highlighting the location of the planned drill hole on the head-mounted device display.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: October 26, 2021
    Assignee: TROPHY
    Inventors: Sylvie Bothorel, Philippe Maillet
  • Patent number: 11151787
    Abstract: A device for generating a three-dimensional model, including: an acquisition unit configured to acquire a first mask image indicating a structure area, which is a stationary object within each image captured from a plurality of viewpoints, and a second mask image indicating a foreground area, which is a moving object within each image captured from the plurality of viewpoints; a combination unit configured to generate a third mask image that integrates the structure area and the foreground area within the image captured from the plurality of viewpoints by combining the acquired first mask image and second mask image; and a generation unit configured to generate a three-dimensional model including the structure and the foreground by a visual volume intersection method using the third mask image.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: October 19, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Keisuke Morisawa, Kiwamu Kobayashi
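    Sketch: a minimal Python illustration of the mask-combination and volume-intersection (visual hull) steps described in the abstract above; the boolean-OR combination rule, the voxel sampling, and the project() helper are assumptions for illustration, not details taken from the patent.

      import numpy as np

      def combine_masks(structure_mask, foreground_mask):
          # Third mask: union of the stationary-structure area and the moving-foreground area.
          return np.logical_or(structure_mask, foreground_mask)

      def carve_visual_hull(voxels, masks, project):
          # voxels: (N, 3) candidate 3D points; masks: boolean combined masks, one per viewpoint;
          # project(view_index, points) -> (N, 2) pixel coordinates (hypothetical camera helper).
          keep = np.ones(len(voxels), dtype=bool)
          for i, mask in enumerate(masks):
              uv = project(i, voxels).astype(int)
              u, v = uv[:, 0], uv[:, 1]
              inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
              keep &= inside & mask[np.clip(v, 0, mask.shape[0] - 1),
                                    np.clip(u, 0, mask.shape[1] - 1)]
          return voxels[keep]   # points whose projections fall inside every combined mask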
  • Patent number: 11132760
    Abstract: Methods, systems, and devices for graphic processing are described. The methods, systems, and devices may include or be associated with identifying a graphics instruction, determining that the graphics instruction is alias enabled for the device, partitioning an alias lookup table into one or more slots, allocating a slot of the alias lookup table based on the partitioning and determining that the graphics instruction is alias enabled, generating an alias instruction based on allocating the slot of the alias lookup table and determining that the graphics instruction is alias enabled, and processing the alias instruction.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: September 28, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Yun Du, Andrew Evan Gruber, Chihong Zhang, Gang Zhong, Jian Jiang, Fei Wei, Minjie Huang, Zilin Ying, Yang Xia, Jing Han, Chun Yu, Eric Demers
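    Sketch: a minimal Python illustration of partitioning an alias lookup table into slots and rewriting an alias-enabled instruction against an allocated slot; the table size, slot size, instruction dictionary format, and allocation policy are all assumptions for illustration.

      class AliasLookupTable:
          def __init__(self, num_entries=256, slot_size=16):
              # Partition the table into fixed-size slots (an assumed policy).
              self.slots = [range(i, i + slot_size) for i in range(0, num_entries, slot_size)]
              self.free = list(range(len(self.slots)))

          def allocate_slot(self):
              # Hand out the next free slot index; a real driver would also track lifetimes.
              return self.free.pop(0) if self.free else None

      def make_alias_instruction(instr, table):
          # Only rewrite instructions that are flagged as alias enabled.
          if not instr.get("alias_enabled"):
              return instr
          slot = table.allocate_slot()
          if slot is None:
              return instr                      # no free slot: keep the original instruction
          return {**instr, "op": "alias_" + instr["op"], "slot": slot}

      table = AliasLookupTable()
      print(make_alias_instruction({"op": "load", "alias_enabled": True}, table))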
  • Patent number: 11127129
    Abstract: A system and method for identifying hazardous site conditions. The method includes training a classifier using a labeled training set, wherein the classifier is trained to classify site conditions based on features extracted from enhanced floor plans, wherein the labeled training set includes a plurality of training features and a plurality of training labels, wherein the plurality of training features are extracted from a plurality of training enhanced floor plans, wherein the plurality of training labels are a plurality of hazardous site condition identification labels; and identifying at least one hazardous site condition of a test enhanced floor plan by applying the classifier to a plurality of test features extracted from the test enhanced floor plan, wherein the classifier outputs at least one site condition classification when applied to the plurality of test features, wherein the at least one hazardous site condition is identified based on the at least one site condition classification.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: September 21, 2021
    Assignee: The Joan and Irwin Jacobs Technion-Cornell Institute
    Inventors: Ardalan Khosrowpour, John Mollis
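    Sketch: a minimal Python illustration of the train-then-classify flow described in the abstract above, using scikit-learn; the feature extraction, the RandomForest model choice, and the label encoding are assumptions for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def extract_features(enhanced_floor_plan):
          # Hypothetical stand-in: a real system would derive geometric/semantic features
          # (openings, edge proximity, equipment locations) from the enhanced floor plan.
          return np.asarray(enhanced_floor_plan, dtype=float).ravel()

      def train_classifier(training_plans, training_labels):
          X = np.stack([extract_features(p) for p in training_plans])
          clf = RandomForestClassifier(n_estimators=100, random_state=0)
          clf.fit(X, training_labels)          # labels identify hazardous site conditions
          return clf

      def identify_hazards(clf, test_plan):
          return clf.predict(extract_features(test_plan).reshape(1, -1))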
  • Patent number: 11106833
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: presenting first data on a first area of a display, wherein the first data is non-sensitive data; presenting second data on a second area of the display, wherein the second data is sensitive data; wherein the first data is displayed to feature a first viewing angle, and wherein the second data is displayed to feature a second viewing angle, wherein the second viewing angle is narrower than the first viewing angle so that the range of viewing angles from the display at which displayed data is visible is larger for the first data than for the second data.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: August 31, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mohamed Zouhaier Ramadhane, Joseph Saab, Fernando Ramos Zuliani, Mauricio Monroy Andrade
  • Patent number: 11087519
    Abstract: This application provides a facial animation implementation method performed at a computing device. The method includes: capturing a facial image; extracting facial feature points in the facial image; comparing the facial feature points with standard feature points, to obtain a first deformation coefficient corresponding to a geometrical feature; extracting a local region according to the facial feature points for processing, to obtain a second deformation coefficient corresponding to an appearance feature; and driving a three-dimensional virtual object according to the first deformation coefficient and the second deformation coefficient to perform a corresponding expression.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: August 10, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jingcong Chen, Xinliang Wang, Bin Li
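    Sketch: a minimal Python illustration of turning landmark displacements into deformation coefficients and using them to drive a blendshape-style 3D face; the comparison rule against the standard feature points and the coefficient-to-blendshape mapping are assumptions for illustration.

      import numpy as np

      def geometric_coefficients(feature_pts, standard_pts):
          # First coefficient set: normalized displacement of detected landmarks
          # from the neutral ("standard") landmarks.
          return (feature_pts - standard_pts) / (np.abs(standard_pts).max() + 1e-8)

      def drive_blendshapes(neutral_mesh, blendshapes, coeff_geometry, coeff_appearance):
          # neutral_mesh: (V, 3); blendshapes: (K, V, 3) per-expression vertex offsets.
          weights = np.concatenate([coeff_geometry.ravel(), coeff_appearance.ravel()])
          weights = np.clip(np.resize(weights, len(blendshapes)), 0.0, 1.0)
          return neutral_mesh + np.tensordot(weights, blendshapes, axes=1)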
  • Patent number: 11069150
    Abstract: The present invention is directed to generating an image based on first image data of a first person having a first pose, a first body shape, and first clothing, and second image data of a second person having a second pose, a second body shape, and second clothing. The generated image represents a transfer of appearance (e.g., clothing) from the second person to the first person. This is achieved by modeling 3D human pose and body shape for each person via a triangle mesh, fitting the 3D pose and body shape models to the images of the persons, transferring appearance using barycentric methods for commonly visible vertices, and learning to color the remaining vertices using deep image synthesis techniques.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: July 20, 2021
    Inventors: Cristian Sminchisescu, Mihai Zanfir, Alin-Ionut Popa, Andrei Zanfir, Elisabeta Marinoiu
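    Sketch: a minimal Python illustration of the barycentric-style appearance transfer for commonly visible vertices; the nearest-pixel sampling stand-in, the visibility masks, and the UV inputs are assumptions for illustration, and the learned synthesis for the remaining vertices is only marked, not implemented.

      import numpy as np

      def sample_nearest(image, uv):
          # Nearest-pixel sampling stand-in for barycentric interpolation of source colors.
          h, w = image.shape[:2]
          x = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
          y = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
          return image[y, x]

      def transfer_appearance(src_image, src_uv, visible_src, visible_dst):
          # Copy colors only for vertices visible in both fitted meshes; the rest
          # would be filled in by a learned image-synthesis model.
          colors = np.zeros((len(src_uv), 3), dtype=src_image.dtype)
          common = visible_src & visible_dst
          colors[common] = sample_nearest(src_image, src_uv[common])
          return colors, ~common     # ~common marks vertices left to the network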
  • Patent number: 11062511
    Abstract: Images that are captured by an image capture device on a ground-disturbing work machine, and that are taken from different perspectives, are received. A machine-learned image identification model identifies items of interest in the images. A three-dimensional representation is generated based on the set of images. The three-dimensional representation identifies a depth at which the recognized items lie beneath the surface of the soil being excavated. A map request is received which identifies a location and depth for which image data is to be provided. A three-dimensional representation of the location and depth is provided in response to the request.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: July 13, 2021
    Assignee: Deere & Company
    Inventor: Scott S. Hendron
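    Sketch: a minimal Python illustration of a map structure that records recognized subsurface items with a depth and answers a location-and-depth request; the record layout, search radius, and depth tolerance are assumptions for illustration.

      from dataclasses import dataclass

      @dataclass
      class BuriedItem:
          kind: str        # e.g. "pipe" or "cable", as labeled by the image model
          x: float
          y: float
          depth_m: float   # depth beneath the excavated soil surface

      class SubsurfaceMap:
          def __init__(self):
              self.items = []

          def add(self, item):
              self.items.append(item)

          def query(self, x, y, depth_m, radius=2.0, depth_tol=0.5):
              # Return items near the requested location and depth.
              return [it for it in self.items
                      if (it.x - x) ** 2 + (it.y - y) ** 2 <= radius ** 2
                      and abs(it.depth_m - depth_m) <= depth_tol]

      m = SubsurfaceMap()
      m.add(BuriedItem("pipe", 10.0, 4.0, 1.2))
      print(m.query(9.0, 4.5, 1.0))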
  • Patent number: 11057606
    Abstract: A method and a system for information display are proposed. The system includes a light transmissive display, at least one first information extraction device, at least one second information extraction device, and a processing device, where the processing device is connected to the display, the first information extraction device, and the second information extraction device. The first information extraction device is configured to obtain position information of a user. The second information extraction device is configured to obtain position information of a target. The processing device is configured to perform coordinate transformation on the position information of the user and the position information of the target to generate fused information between the user and the target, and to display related information of the target on the display according to the fused information.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: July 6, 2021
    Assignees: Industrial Technology Research Institute, Intellectual Property Innovation Corporation
    Inventors: Wei-Lin Hsu, Tzu-Yi Yu, Heng-Yin Chen
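    Sketch: a minimal Python illustration of the fusion step: transform the user's eye position and the target position into the display's coordinate frame, then intersect the eye-to-target ray with the display plane to place the related information; the frame convention and the display plane at z = 0 are assumptions for illustration.

      import numpy as np

      def to_display_frame(point_world, R, t):
          # R (3x3) and t (3,) map world coordinates into the display frame.
          return R @ point_world + t

      def fused_display_point(eye_world, target_world, R, t):
          eye = to_display_frame(eye_world, R, t)
          tgt = to_display_frame(target_world, R, t)
          # Display plane assumed at z = 0 in the display frame; intersect the ray with it.
          s = eye[2] / (eye[2] - tgt[2])
          hit = eye + s * (tgt - eye)
          return hit[:2]       # 2D point where the related information is drawn

      R, t = np.eye(3), np.zeros(3)
      print(fused_display_point(np.array([0.0, 0.0, 0.5]), np.array([0.2, 0.1, -2.0]), R, t))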
  • Patent number: 11049308
    Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data, the method comprising training a conditional variational autoencoder having an encoder and a decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: June 29, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
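    Sketch: a minimal PyTorch-style illustration of one training step of a conditional variational autoencoder that maps an audio descriptor plus a latent sample to facial positions; the layer sizes, Gaussian reparameterization, and loss weighting are assumptions for illustration.

      import torch
      import torch.nn as nn

      class CVAE(nn.Module):
          def __init__(self, face_dim=150, audio_dim=64, latent_dim=16):
              super().__init__()
              self.enc = nn.Linear(face_dim + audio_dim, 2 * latent_dim)   # -> mu, logvar
              self.dec = nn.Sequential(nn.Linear(latent_dim + audio_dim, 256),
                                       nn.ReLU(), nn.Linear(256, face_dim))

          def forward(self, face, audio):
              mu, logvar = self.enc(torch.cat([face, audio], dim=-1)).chunk(2, dim=-1)
              z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)      # sample latent vector
              return self.dec(torch.cat([z, audio], dim=-1)), mu, logvar

      def training_step(model, opt, face, audio, kl_weight=1e-3):
          out, mu, logvar = model(face, audio)
          recon = ((out - face) ** 2).mean()                # compare output with the descriptor
          kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
          loss = recon + kl_weight * kl
          opt.zero_grad(); loss.backward(); opt.step()      # update model parameters
          return loss.item()

      model = CVAE()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)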
  • Patent number: 11024068
    Abstract: The present disclosure relates to an apparatus and method for displaying information, a program, and a communication system, which enable the provision of an apparatus that makes use of a display device with excellent flexibility. An information display apparatus includes a display unit including a time information presenting section for presenting at least time information and a band section to be worn on an arm, and a display control unit for changing a display of the display unit. The present disclosure can be applied to, for example, such an information display apparatus.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: June 1, 2021
    Inventors: Masakazu Mitsugi, Yuki Sugiue, Hiroshi Saeki, Machiko Takematsu, Masaaki Yamamoto, Yoichi Ito, Kenji Itoh
  • Patent number: 11010966
    Abstract: A system and method for creating enhanced floor plans. The method includes converting visual multimedia content into a plurality of frames, wherein the visual multimedia content shows a site, wherein each frame is a two-dimensional (2D) image showing a portion of the site; generating, based on the plurality of frames, a sparse three-dimensional (3D) model of the site, wherein the sparse 3D model includes a point cloud; geo-localizing the sparse 3D model with respect to a site layout model by identifying a plurality of matching features of the site layout model with respect to the sparse 3D model; and creating an enhanced floor plan based on the geo-localization, wherein the enhanced floor plan includes a plurality of floor plan points of the site layout model associated with respective portions of the sparse 3D model.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: May 18, 2021
    Assignee: The Joan and Irwin Jacobs Technion-Cornell Institute
    Inventor: Ardalan Khosrowpour
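    Sketch: a minimal Python illustration of the geo-localization step, treated here as fitting a 2D similarity transform (scale, rotation, translation) between matched sparse-model features and floor-plan points; modeling the alignment as a similarity transform is an assumption for illustration.

      import numpy as np

      def fit_similarity_2d(src, dst):
          # src, dst: (N, 2) matched points (sparse-model XY -> floor-plan XY).
          src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
          cov = dst_c.T @ src_c / len(src)
          U, S, Vt = np.linalg.svd(cov)
          D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflection
          R = U @ D @ Vt
          scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
          t = dst.mean(0) - scale * R @ src.mean(0)
          return scale, R, t

      def geo_localize(points_xy, scale, R, t):
          # Map sparse-model points into floor-plan coordinates.
          return scale * (R @ points_xy.T).T + t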
  • Patent number: 11010970
    Abstract: Systems and methods according to various embodiments enable a user to view three-dimensional representations of data objects (“nodes”) within a 3D environment from a first person perspective. The system may be configured to allow the user to interact with the nodes by moving a virtual camera through the 3D environment. The nodes may have one or more attributes that may correspond, respectively, to particular static or dynamic values within the data object's data fields. The attributes may include physical aspects of the nodes, such as color, size, or shape. The system may group related data objects within the 3D environment into clusters that are demarked using one or more cluster designators, which may be in the form of a dome or similar feature that encompasses the related data objects. The system may enable multiple users to access the 3D environment simultaneously, or to record their interactions with the 3D environment.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: May 18, 2021
    Assignee: SPLUNK INC.
    Inventors: Roy Arsan, Alexander Raitz, Clark Allan, Cary Glen Noel
  • Patent number: 10991164
    Abstract: Spatial information that describes spatial locations of visual objects in a three-dimensional (3D) image space, as represented in one or more multi-view unlayered images, is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: April 27, 2021
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Ajit Ninan, Neil Mammen, Tyrome Y. Brown
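    Sketch: a minimal Python illustration of partitioning visual objects into a cinema image layer and device image layers based on their spatial location; the depth threshold, the number of device layers, and the object record are assumptions for illustration.

      from dataclasses import dataclass

      @dataclass
      class VisualObject:
          name: str
          depth_m: float    # distance in front of the cinema display plane

      def layer_objects(objects, cinema_threshold_m=1.0, device_layers=2):
          # Objects at or near the screen stay in the single cinema layer; objects that
          # pop out toward the viewer are spread across the device image layers by depth.
          cinema = [o for o in objects if o.depth_m <= cinema_threshold_m]
          near = sorted((o for o in objects if o.depth_m > cinema_threshold_m),
                        key=lambda o: o.depth_m)
          devices = [[] for _ in range(device_layers)]
          for i, o in enumerate(near):
              devices[i * device_layers // max(len(near), 1)].append(o)
          return cinema, devices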
  • Patent number: 10984501
    Abstract: File exploration is facilitated by enabling zoom with respect to a thumbnail as a function of an identified point of interest. More particularly, a scaled thumbnail of the same size as the original thumbnail can be presented as a function of an identified point of interest. Furthermore, navigation is enabled to allow, among other things, panning with respect to a scaled thumbnail.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: April 20, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph Milan, Wei Huang, John Hancock, Patrick Baumgartner, Drew Voegele
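    Sketch: a minimal Python illustration of producing a same-size scaled thumbnail centered on an identified point of interest by cropping around the point and resampling; the nearest-neighbor resampling and the clamping at image borders are assumptions for illustration.

      import numpy as np

      def zoom_thumbnail(thumb, poi_xy, zoom=2.0):
          # thumb: (H, W, C) array; poi_xy: (x, y) point of interest in pixels.
          h, w = thumb.shape[:2]
          ch, cw = int(h / zoom), int(w / zoom)            # size of the cropped window
          x0 = int(np.clip(poi_xy[0] - cw // 2, 0, w - cw))
          y0 = int(np.clip(poi_xy[1] - ch // 2, 0, h - ch))
          crop = thumb[y0:y0 + ch, x0:x0 + cw]
          # Nearest-neighbor upscale back to the original thumbnail size.
          yi = np.arange(h) * ch // h
          xi = np.arange(w) * cw // w
          return crop[yi][:, xi]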
  • Patent number: 10982970
    Abstract: The present disclosure relates to a method (300) for displaying, in an aircraft, a perspective view of the surroundings of the aircraft (100). The method comprises accessing (310) surrounding information from a database. The surrounding information is photo-based and three-dimensional. The method further comprises processing (320) the accessed surrounding information so that a perspective view of the surroundings of the aircraft is provided. The perspective view of the surroundings correlates to the position of the aircraft and is photo-based with spatially correlated photo-based textures. The method further comprises transmitting (360) the provided perspective view of the surroundings of the aircraft to a displaying unit so that it can be displayed in the aircraft. The present disclosure also relates to a system, an aircraft, a use, a computer program, and a computer program product.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: April 20, 2021
    Assignee: SAAB AB
    Inventors: Nigel Pippard, Robert Alexander Bennett, Jonas Dehlin, Adam Nilsson
  • Patent number: 10977867
    Abstract: Embodiments include techniques for an augmented reality-based aircraft cargo monitoring and control system. The embodiments include a controlling device in communication with a master control panel (MCP), wherein the controlling device includes a capturing module that is configured to receive an image and an object identification module that is configured to detect an identifier of an object from the image. The controlling device also includes a tracking module that is configured to track movement of the object, a rendering module that is configured to overlay an indicator over the image, and a microprojector that is configured to project the image on a display, wherein the display is configured to display the mode and status of the object.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: April 13, 2021
    Assignee: GOODRICH CORPORATION
    Inventor: Rameshkumar Balasubramanian
  • Patent number: 10930045
    Abstract: Digital ink is generated to represent a visual component, such as a letter, number, character, and/or other symbol. The digital ink is generated by obtaining multiple different curves that combine to generate the visual component. These different curves can have various different characteristics (e.g., different thicknesses) to provide the desired visual component. The combined curves are converted into a set of primitives that make up the parts of the combined curves, and the set of primitives are converted into a digital ink format. Data describing the set of primitives in digital ink format can be stored and subsequently used to display the visual component as digital ink.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert C. Houser, Pavel Yurevich, Peter Hammerquist, David Abzarian, Xiao Tu, Silvano Bonacina
  • Patent number: 10916063
    Abstract: A system and method that allows a user to view objects in a three-dimensional environment, where one or more of the objects have a data display (e.g., a data billboard, etc.) that shows data about the object. To enhance user experience and to provide relevant contextual data as the user navigates through the three-dimensional environment, the system calculates a location for the user and a location for each object and determines if a relationship between the user frame of reference and each object location satisfies a first criterion. If the first criterion is satisfied, the system is configured to move the data display to the bottom of a viewing area of the three-dimensional environment (e.g. docking the data display to the bottom of the viewing area, etc.). The system may also arrange the data displays in the same order as the objects are perceived by the user in the three-dimensional environment.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: February 9, 2021
    Assignee: SPLUNK INC.
    Inventors: Roy Arsan, Alexander Raitz, Clark Allan, Cary Glen Noel
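    Sketch: a minimal Python illustration of a docking decision: if the relationship between the user's frame of reference and an object (here, distance plus whether the object is within the field of view) satisfies a criterion, the object's data display is docked and the docked displays are ordered by bearing; the specific criterion and thresholds are assumptions for illustration.

      import math

      def should_dock(user_pos, user_heading_rad, obj_pos, max_dist=25.0, fov_rad=math.pi / 2):
          # Assumed criterion: object is close enough and inside the user's field of view.
          dx, dy = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
          dist = math.hypot(dx, dy)
          bearing = math.atan2(dy, dx) - user_heading_rad
          bearing = (bearing + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
          return dist <= max_dist and abs(bearing) <= fov_rad / 2, bearing

      def docked_billboards(user_pos, user_heading_rad, objects):
          docked = []
          for name, pos in objects.items():
              ok, bearing = should_dock(user_pos, user_heading_rad, pos)
              if ok:
                  docked.append((bearing, name))
          # Order the docked data displays by bearing so they mirror the on-screen order.
          return [name for _, name in sorted(docked)]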
  • Patent number: 10909763
    Abstract: A user interface enables a user to calibrate the position of a three dimensional model with a real-world environment represented by that model. Using a device's sensor, the device's location and orientation is determined. A video image of the device's environment is displayed on the device's display. The device overlays a representation of an object from a virtual reality model on the video image. The position of the overlaid representation is determined based on the device's location and orientation. In response to user input, the device adjusts a position of the overlaid representation relative to the video image.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: February 2, 2021
    Assignee: Apple Inc.
    Inventors: Christopher G. Nicholas, Lukas M. Marti, Rudolph van der Merwe, John Kassebaum
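    Sketch: a minimal Python illustration of overlaying a model object at a screen position derived from the device's location and orientation, then applying a user-controlled calibration offset; the pinhole projection and the pixel-space offset are assumptions for illustration.

      import numpy as np

      class OverlayCalibrator:
          def __init__(self, focal_px=1000.0, center=(640.0, 360.0)):
              self.f, self.c = focal_px, np.array(center)
              self.offset_px = np.array([0.0, 0.0])      # user-adjustable correction

          def project(self, obj_world, device_pos, device_R):
              # Transform the model object into the camera frame, then pinhole-project it.
              p = device_R.T @ (np.asarray(obj_world) - np.asarray(device_pos))
              uv = self.f * p[:2] / p[2] + self.c
              return uv + self.offset_px                 # overlay position on the video image

          def nudge(self, dx_px, dy_px):
              # Called from the UI while the user drags the overlay into alignment.
              self.offset_px += np.array([dx_px, dy_px])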