Patents Examined by Hai Tao Sun
  • Patent number: 11967031
    Abstract: Biological digital imaging systems and methods are disclosed herein for analyzing pixel data of one or more digital images depicting absorbent articles or portions of absorbent articles. A digital image comprising pixel data depicting an absorbent article or a portion of an absorbent article is obtained. An imaging application (app) analyzes the digital image to detect a biological feature depicted within the pixel data of the digital image of the absorbent article or the portion of the absorbent article. The imaging app generates an individual-specific biological prediction value corresponding to at least one of: (a) the absorbent article; (b) the portion of the absorbent article; or (c) an individual associated with the absorbent article or portion of the absorbent article. The individual-specific biological prediction value is based on the biological feature depicted within the pixel data of the digital image of the absorbent article or the portion of the absorbent article.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: April 23, 2024
    Assignee: The Procter & Gamble Company
    Inventors: Jennifer Joan Gustin, Amirhossein Tavanaei, Kelly Anderson, Donald C. Roe, Roland Engel, Latisha Salaam Zayid
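The abstract above does not disclose a specific model, so the following is only a rough Python sketch of the described flow: analyze pixel data of an absorbent-article image for a feature and map it to a prediction value. The "redness" feature and the linear mapping are assumptions made purely for illustration.

```python
# Illustrative sketch only: the patent does not disclose a specific model.
# A hypothetical "redness" statistic stands in for the detected biological
# feature, and a clipped linear mapping stands in for the prediction.
import numpy as np

def biological_prediction(image_rgb: np.ndarray) -> float:
    """Map pixel data of an absorbent-article image to a value in [0, 1]."""
    image = image_rgb.astype(np.float64) / 255.0
    red, green, blue = image[..., 0], image[..., 1], image[..., 2]
    # Stand-in biological feature: mean dominance of the red channel.
    feature = float(np.mean(red - 0.5 * (green + blue)))
    # Stand-in individual-specific prediction value.
    return float(np.clip(0.5 + 2.0 * feature, 0.0, 1.0))

# Example: a synthetic 64x64 RGB image.
print(biological_prediction(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)))
```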
  • Patent number: 11961248
    Abstract: An encoder is disclosed that uses hyperspectral data to produce a unified three-dimensional (“3D”) scan that incorporates depth for various points, surfaces, and features within a scene. The encoder may scan a particular point of the scene using frequencies from different electromagnetic spectrum bands, may determine spectral properties of the particular point based on returns measured across a first set of bands, may measure a distance of the particular point using frequencies of another band that does not interfere with the spectral properties at each of the first set of bands, and may encode the spectral properties and the distance of the particular point in a single hyperspectral dataset. The spectral signature encoded within the dataset may be used to classify the particular point or generate a point cloud or other visualization that accurately represents the spectral properties and distances of the scanned points.
    Type: Grant
    Filed: May 30, 2023
    Date of Patent: April 16, 2024
    Assignee: Illuscio, Inc.
    Inventor: Robert Monaghan
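As a rough illustration of the encoding the abstract describes, the sketch below keeps per-band spectral returns next to a depth measured in a separate, non-interfering band, one record per scanned point. The record layout and band names are assumptions, not Illuscio's actual format.

```python
# Minimal sketch, not Illuscio's format: one record per scanned point that
# stores spectral returns alongside a depth from a separate ranging band.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class HyperspectralPoint:
    position: Tuple[float, float, float]   # x, y, z derived from the ranging band
    distance: float                        # measured range to the point
    spectral: Dict[str, float]             # band name -> measured return

point = HyperspectralPoint(
    position=(1.2, 0.4, 3.7),
    distance=3.9,
    spectral={"visible_red": 0.61, "near_ir": 0.83, "short_wave_ir": 0.42},
)
# A point cloud is a list of such records: a classifier keys off the spectral
# signature, while a renderer keys off position and distance.
```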
  • Patent number: 11947111
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: April 2, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Patent number: 11948224
    Abstract: One embodiment provides an apparatus comprising a memory stack including multiple memory dies and a parallel processor including a plurality of multiprocessors. Each multiprocessor has a single instruction, multiple thread (SIMT) architecture, the parallel processor coupled to the memory stack via one or more memory interfaces. At least one multiprocessor comprises a multiply-accumulate circuit to perform multiply-accumulate operations on matrix data in a stage of a neural network implementation to produce a result matrix comprising a plurality of matrix data elements at a first precision, precision tracking logic to evaluate metrics associated with the matrix data elements and indicate if an optimization is to be performed for representing data at a second stage of the neural network implementation, and a numerical transform unit to dynamically perform a numerical transform operation on the matrix data elements based on the indication to produce transformed matrix data elements at a second precision.
    Type: Grant
    Filed: November 1, 2022
    Date of Patent: April 2, 2024
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Sara S. Baghsorkhi, Anbang Yao, Kevin Nealis, Xiaoming Chen, Altug Koker, Abhishek R. Appu, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Ben J. Ashbaugh, Barath Lakshmanan, Liwei Ma, Joydeep Ray, Ping T. Tang, Michael S. Strickland
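A hedged software sketch of the idea (not Intel's hardware): perform the multiply-accumulate at a first precision, evaluate a metric over the result matrix, and transform the results to a narrower format for the next stage only when the metric allows it. The float32/float16 pair and the range-based metric are assumptions.

```python
# Software illustration of precision tracking plus a numerical transform:
# compute at a first precision, check dynamic range, narrow if it fits.
import numpy as np

def matmul_with_precision_tracking(a: np.ndarray, b: np.ndarray):
    result = a.astype(np.float32) @ b.astype(np.float32)   # first precision
    # Metric: does the observed range fit in the narrower format?
    max_abs = float(np.max(np.abs(result)))
    if max_abs < np.finfo(np.float16).max:
        # Numerical transform to the second precision for the next stage.
        return result.astype(np.float16), np.float16
    return result, np.float32

out, dtype_used = matmul_with_precision_tracking(np.random.randn(8, 16),
                                                 np.random.randn(16, 4))
print(dtype_used)
```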
  • Patent number: 11922581
    Abstract: Augmented reality (AR) systems and methods involve an interactive head-mounted device (HMD), an external display, and a medical image computer, which is in communication with the HMD and the external display. The external display displays one or more planes of a medical image or a 3D model provided by the medical image computer. A user wearing the HMD may manipulate a medical image or 3D model displayed on the external display by focusing the user's gaze on a control object and/or a portion of a medical image or 3D model displayed on a display of the interactive HMD.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: March 5, 2024
    Assignee: Covidien LP
    Inventor: John W. Komp
  • Patent number: 11915341
    Abstract: In implementations of repeat object blending, a computing device implements a repeat object blending system, which is implemented to receive a digital image depicting a first object and a second object, where the first object is depicted as multiple instances of a repeated base object, and the second object is depicted as multiple instances of a visually different repeated base object. The repeat object blending system can identify visual characteristics of the first object and the second object. The repeat object blending system can then generate an intermediate object by blending one or more of the visual characteristics of the first object and one or more of the visual characteristics of the second object. The resulting intermediate object is a visual representation of the repeated base object blended with the visually different repeated base object.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: February 27, 2024
    Assignee: Adobe Inc.
    Inventor: Gaurav Jain
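A minimal sketch of the blending step, assuming the "visual characteristics" reduce to a color and a size; the patent targets repeat objects in vector artwork, so this is illustrative only.

```python
# Illustrative sketch: the intermediate object is a linear blend of the two
# base objects' characteristics (here just color and size).
from dataclasses import dataclass

@dataclass
class BaseObject:
    color: tuple   # (r, g, b) in 0..255
    size: float

def blend(a: BaseObject, b: BaseObject, t: float = 0.5) -> BaseObject:
    color = tuple(round((1 - t) * ca + t * cb) for ca, cb in zip(a.color, b.color))
    return BaseObject(color=color, size=(1 - t) * a.size + t * b.size)

intermediate = blend(BaseObject((255, 0, 0), 10.0), BaseObject((0, 0, 255), 20.0))
print(intermediate)   # BaseObject(color=(128, 0, 128), size=15.0)
```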
  • Patent number: 11900897
    Abstract: A system and method are provided to generate blended video and graphics using a blending domain. The system converts video from a first domain to a blending domain. The system converts graphics from a second domain to the blending domain and blends the video and graphics in the blending domain to generate a blended output.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: February 13, 2024
    Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
    Inventors: David Chaohua Wu, Richard Hayden Wyman
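As an illustration, the sketch below assumes the blending domain is linear light and that both video and graphics arrive gamma-encoded; the patent itself does not fix these domains.

```python
# Assumed domains for illustration: gamma-encoded inputs, linear-light blending.
import numpy as np

GAMMA = 2.2

def to_linear(x):    # convert from a gamma-encoded domain into the blending domain
    return np.power(x, GAMMA)

def from_linear(x):  # convert the blended output back to a display domain
    return np.power(x, 1.0 / GAMMA)

def blend(video, graphics, alpha):
    v_lin, g_lin = to_linear(video), to_linear(graphics)
    blended = alpha * g_lin + (1.0 - alpha) * v_lin   # blend in the blending domain
    return from_linear(blended)

print(blend(np.array([0.5]), np.array([0.9]), alpha=0.25))
```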
  • Patent number: 11900023
    Abstract: An Agent supportable device for determining a direction of an item of interest. The Agent supportable device includes an antenna configured to communicate with the reference transceiver associated with the item of interest. A receiver in the Agent supportable device is in logical communication with the antenna. A transmitter is also in logical communication with the antenna. A digital storage contains software executable upon demand via a processor in logical communication with the digital storage. The processor is operative via execution of the software to cause the apparatus to display a user interface on a visual display that includes an arrow indicating a direction of the item of interest in relation to the apparatus and a distance between the apparatus and the item of interest.
    Type: Grant
    Filed: July 4, 2023
    Date of Patent: February 13, 2024
    Assignee: Middle Chart, LLC
    Inventors: Michael Wodrich, Michael S. Santarone, Randall Pugh, Jason E. Duff
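A sketch of the arrow computation the user interface would show, assuming flat 2D coordinates for the device and the item rather than the patent's transceiver-based ranging.

```python
# Given device position/heading and item position, compute the arrow angle
# (relative to the device heading) and the distance to display.
import math

def arrow_to_item(device_xy, device_heading_deg, item_xy):
    dx, dy = item_xy[0] - device_xy[0], item_xy[1] - device_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))              # 0 deg = +y axis
    relative = (bearing - device_heading_deg + 360) % 360   # angle for the arrow
    return relative, distance

angle, dist = arrow_to_item((0.0, 0.0), 90.0, (3.0, 4.0))
print(f"arrow at {angle:.1f} deg, {dist:.1f} m away")
```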
  • Patent number: 11893701
    Abstract: A preferred method for dynamically displaying virtual and augmented reality scenes can include determining input parameters, calculating virtual photometric parameters, and rendering a VAR scene with a set of simulated photometric parameters.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: February 6, 2024
    Assignee: Dropbox, Inc.
    Inventors: Terrence Edward McArdle, Benjamin Zeis Newhouse
  • Patent number: 11869192
    Abstract: According to some embodiments, a system and method are provided comprising a vegetation module to receive image data from an image source; a memory for storing program instructions; a vegetation processor, coupled to the memory, and in communication with the vegetation module, and operative to execute program instructions to: receive image data; estimate a vegetation segmentation mask; generate at least one of a 3D point cloud and a 2.5D Digital Surface Model based on the received image data; estimate a relief surface using a digital terrain model; generate a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and at least one of the 3D point cloud and the 2.5D Digital Surface Model.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: January 9, 2024
    Assignee: General Electric Company
    Inventors: Mohammed Yousefhussien, Arpit Jain, James Vradenburg Miller, Achalesh Kumar, Walter V Dixon, III
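A hedged sketch of the final step of this pipeline under common remote-sensing conventions (not necessarily the patented formulation): subtract the digital terrain model from the digital surface model and keep only cells flagged by the vegetation segmentation mask.

```python
# Vegetation-masked surface: remove the relief (DTM) from the DSM and zero
# out non-vegetation cells.
import numpy as np

def vegetation_masked_surface(dsm: np.ndarray, dtm: np.ndarray, veg_mask: np.ndarray):
    """dsm/dtm: elevation rasters; veg_mask: boolean vegetation segmentation mask."""
    height_above_terrain = dsm - dtm           # surface with relief removed
    return np.where(veg_mask, height_above_terrain, 0.0)

dsm = np.array([[12.0, 15.0], [10.0, 11.0]])
dtm = np.array([[10.0, 10.0], [10.0, 10.0]])
veg = np.array([[True, True], [False, True]])
print(vegetation_masked_surface(dsm, dtm, veg))
```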
  • Patent number: 11869195
    Abstract: A target object controlling method, apparatus, electronic device, and storage medium. The method includes: in response to a movement control operation triggered for a target object in a real scene image, determining a control direction corresponding to the movement control operation; obtaining a photographing direction of the real scene image; and controlling the target object to move in the real scene image according to the control direction and the photographing direction. The method addresses the prior-art problem that a direction deviation arises in controlling the target object's movement when the photographing direction of the real scene image changes, and also improves the operability of the target object in the real scene image, giving the user a better manipulation experience.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: January 9, 2024
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventor: Jiayi Zhang
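As a minimal illustration of the compensation idea, the sketch below reduces the photographing direction to a single yaw angle (an assumption made for clarity) and rotates the user's control direction by it, so "forward" on the screen stays "forward" in the scene when the camera turns.

```python
# Combine the user's control direction with the camera's photographing yaw.
def world_move_direction(control_dir_deg: float, camera_yaw_deg: float) -> float:
    return (control_dir_deg + camera_yaw_deg) % 360

# User pushes "up" (0 deg) while the camera has turned 90 deg to the right:
print(world_move_direction(0, 90))   # the object still moves away from the camera
```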
  • Patent number: 11854150
    Abstract: One embodiment is directed to a user display device comprising a housing frame mountable on the head of the user, a lens mountable on the housing frame and a projection subsystem coupled to the housing frame to determine a location of appearance of a display object in a field of view of the user based at least in part on at least one of a detection of a head movement of the user and a prediction of a head movement of the user, and to project the display object to the user based on the determined location of appearance of the display object.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: December 26, 2023
    Assignee: MAGIC LEAP, INC.
    Inventors: Brian T. Schowengerdt, Samuel A. Miller
  • Patent number: 11810236
    Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the computer animation model within the frames of the video image, where the computer animation model of the image modification data comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, and motion states are generated based on the associated motions and harmonics. A probability value is then assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: November 7, 2023
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
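An illustrative sketch of sampling a motion state from assigned probabilities, as the abstract describes; the state names and weights are invented for the example.

```python
# Motion states are (pattern, speed harmonic) pairs with assigned probabilities,
# sampled to drive the pseudorandom animation.
import random

motion_states = {
    ("sway", "slow"): 0.5,
    ("sway", "fast"): 0.2,
    ("bounce", "slow"): 0.2,
    ("bounce", "fast"): 0.1,
}

def next_motion_state(rng: random.Random):
    states, weights = zip(*motion_states.items())
    return rng.choices(states, weights=weights, k=1)[0]

rng = random.Random(42)   # seedable, so the pseudorandom animation is reproducible
print([next_motion_state(rng) for _ in range(3)])
```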
  • Patent number: 11790212
    Abstract: Quantization-aware neural architecture search (“QNAS”) can be utilized to learn optimal hyperparameters for configuring an artificial neural network (“ANN”) that quantizes activation values and/or weights. The hyperparameters can include model topology parameters, quantization parameters, and hardware architecture parameters. Model topology parameters specify the structure and connectivity of an ANN. Quantization parameters can define a quantization configuration for an ANN such as, for example, a bit width for a mantissa for storing activation values or weights generated by the layers of an ANN. The activation values and weights can be represented using a quantized-precision floating-point format, such as a block floating-point format (“BFP”) having a mantissa that has fewer bits than a mantissa in a normal-precision floating-point representation and a shared exponent.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: October 17, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Kalin Ovtcharov, Eric S. Chung, Vahideh Akhlaghi, Ritchie Zhao
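A sketch of the block floating-point (BFP) format referenced in the abstract, not Microsoft's implementation: a block of values shares one exponent while each value keeps a short signed mantissa.

```python
# Coarse BFP quantization: shared exponent per block, short signed mantissas.
import numpy as np

def to_bfp(block: np.ndarray, mantissa_bits: int = 4):
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-12)))
    scale = 2.0 ** (shared_exp - mantissa_bits + 1)        # value of one mantissa step
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(block / scale), lo, hi).astype(np.int32)
    return shared_exp, mantissas

def from_bfp(shared_exp: int, mantissas: np.ndarray, mantissa_bits: int = 4):
    return mantissas * 2.0 ** (shared_exp - mantissa_bits + 1)

exp, m = to_bfp(np.array([0.7, -1.3, 0.02, 3.9]))
print(from_bfp(exp, m))   # coarse approximation of the original block
```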
  • Patent number: 11790749
    Abstract: Hazard-resultant effects to land and buildings are predicted based on various inputs. Hazards may include any appropriate type of hazard (e.g., flood, wildfire, climate-related hazards, or the like). Inputs may include the likelihood that a specific type of hazard may occur for various scenarios, terrestrial boundaries, property boundaries, census geographies, or the like. Relationships between the inputs are determined and used to quantify parameters pertaining to a specific type of hazard. For example, the depth of flood water may be predicted for a particular terrestrial boundary, a city or town, or a building, for specific climate scenarios. A risk likelihood of the quantified parameter may be determined for a particular period of time and environment. For example, flooding to a building may be determined, broken down by depth threshold and year of annual risk for specific climate scenarios. Economic loss also may be predicted.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: October 17, 2023
    Assignee: 1ST STREET FOUNDATION, INC.
    Inventors: Matthew Eby, Edward Kearns, Michael Amodeo, Jeremy Porter, Neil Freeman, Steven McAlpine
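As a worked example of one quantified risk output the abstract mentions, the sketch below converts an annual exceedance probability for a given flood depth into the chance of at least one exceedance over a period of years; the formula is a standard actuarial assumption, not taken from the patent.

```python
# P(at least one exceedance in `years`), assuming independent years.
def cumulative_risk(annual_probability: float, years: int) -> float:
    return 1.0 - (1.0 - annual_probability) ** years

# A "1-in-100-year" flood depth at a building, over a 30-year horizon:
print(round(cumulative_risk(0.01, 30), 3))   # ~0.26
```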
  • Patent number: 11783408
    Abstract: In one aspect, a computerized method of computer vision based dynamic universal fashion ontology fashion rating and recommendations includes the step of receiving one or more user-uploaded digital images. The method includes the step of implementing an image classifier on the one or more user-uploaded digital images, to classify a set of user-uploaded fashion content of the one or more user-uploaded digital images. The method includes the step of receiving a set of fashion rules input by a domain expert. The set of rules determines a set of apparel to match with the set of user-uploaded fashion content, generating a dynamic universal fashion ontology with the image classifier and a text classifier. The dynamic universal fashion ontology comprises an ontology of a set of mutually exclusive attribute classes. The method includes the step of using the dynamic universal fashion ontology to train a specified machine learning based fashion classification.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: October 10, 2023
    Inventors: Rajesh Kumar Saligrama Ananthanarayana, Sridhar Manthani
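An illustrative sketch of the recommendation flow with the image classifier stubbed out and the expert fashion rules reduced to a dictionary; all names and values are hypothetical.

```python
# Expert rules map a classified attribute to apparel that matches it (example values).
fashion_rules = {
    "floral_dress": ["denim jacket", "white sneakers"],
    "formal_suit": ["oxford shoes", "leather belt"],
}

def classify_upload(image_path: str) -> str:
    """Stand-in for the image classifier over a user-uploaded digital image."""
    return "floral_dress"       # a real system would run a trained model here

def recommend(image_path: str):
    label = classify_upload(image_path)
    return fashion_rules.get(label, [])

print(recommend("user_upload.jpg"))   # ['denim jacket', 'white sneakers']
```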
  • Patent number: 11776206
    Abstract: An extended reality system and extended reality method for digital twins, in which the digital twin can be a virtual asset of a real asset. The real asset can be a real object. For example, an initiation of an event in relation to the real asset causes the extended reality method to generate one or more predicted virtual states which are predicted to achieve the event in the virtual asset. The event can be initiated through the real asset and through the virtual asset. The extended reality method can receive one or more further real states of the real asset which achieve the event. The extended reality method can generate a reality 3D map in an extended reality application which concurrently displays, in the 3D space, the virtual asset in the one or more predicted virtual states and the real asset in the one or more further real states.
    Type: Grant
    Filed: December 23, 2022
    Date of Patent: October 3, 2023
    Assignee: AWE Company Limited
    Inventors: Neetika Gupta, Srinivas Krishna, Laura Thomas, Daniel Chantal Mills, Naimul Mefraz Khan
  • Patent number: 11769279
    Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: September 26, 2023
    Assignee: Adobe Inc.
    Inventors: Giorgio Gori, Tamy Boubekeur, Radomir Mech, Nathan Aaron Carr, Matheus Abrantes Gadelha, Duygu Ceylan Aksit
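A minimal sketch of representing a handle set as a signed distance field, with spheres standing in for the handle primitives (an assumption; the patented approach supports richer handle shapes).

```python
# Signed distance from query points to the union of sphere "handles":
# negative inside a handle, positive outside.
import numpy as np

def sdf_spheres(points: np.ndarray, centers: np.ndarray, radii: np.ndarray):
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1) - radii[None, :]
    return d.min(axis=1)

centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.5, 0.3])
queries = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(sdf_spheres(queries, centers, radii))   # [-0.5  0.7]
```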
  • Patent number: 11756178
    Abstract: Disclosed is a system and associated methods for generating a composite image from scans or images that are aligned using invisible fiducials. The invisible fiducial is a transparent substance or a projected specific wavelength that is applied to and changes reflectivity of a surface at the specific wavelength without interfering with a capture of positions or visible color characteristics across the surface. The system performs first and second capture of a scene with the surface, and detects a position of the invisible fiducial in each capture based on values measured across the specific wavelength that satisfy a threshold associated with the invisible fiducial. The system aligns the first capture with the second capture based on the detected positions of the invisible fiducial, and generates a composite image by merging or combining the positions or visible color characteristics from the aligned captures.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: September 12, 2023
    Assignee: Illuscio, Inc.
    Inventors: Mark Weingartner, Robert Monaghan
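A hedged sketch of the alignment step: find where the fiducial's wavelength exceeds a threshold in each capture and compute the shift that brings the two detected positions together. Single-band 2D arrays stand in for the scans.

```python
# Detect the invisible fiducial as the centroid of above-threshold pixels in the
# fiducial's band, then derive the offset that aligns the second capture.
import numpy as np

def fiducial_position(band: np.ndarray, threshold: float):
    ys, xs = np.nonzero(band >= threshold)
    return np.array([ys.mean(), xs.mean()])

def alignment_offset(band_a: np.ndarray, band_b: np.ndarray, threshold: float):
    return fiducial_position(band_a, threshold) - fiducial_position(band_b, threshold)

a = np.zeros((10, 10)); a[2, 3] = 1.0
b = np.zeros((10, 10)); b[5, 7] = 1.0
print(alignment_offset(a, b, threshold=0.5))   # shift to apply to capture B: [-3. -4.]
```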
  • Patent number: 11741572
    Abstract: The present invention discloses a method and system for directed transfer of cross-domain data based on high-resolution remote sensing images. In the method of the present invention, first, an objective loss function which combines an image translation loss and a model adaptive loss of an image translation network model is established, thus overcoming the technical shortcoming that an existing data translation technique fails to take a specific task into full consideration and ignores a negative impact of data translation on the specific task. Further, a trained image translation network model is fine-tuned based on sample data, so that the image translation network model keeps translation towards the effect desired by the target model, thus avoiding over-interpretation or over-simplification during directed transfer of cross-domain data and improving accuracy of directed transfer of the cross-domain data based on the high-resolution remote sensing images.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: August 29, 2023
    Assignee: Zhejiang University
    Inventors: Jianwei Yin, Ge Su, Yongheng Shang, Zhengwei Shen
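A sketch of the combined objective the abstract describes, with placeholder loss values and an assumed weighting factor; the patent's exact formulation is not reproduced here.

```python
# Objective = image translation loss + weighted task-aware (model adaptive) term.
def objective_loss(translation_loss: float, model_adaptive_loss: float,
                   weight: float = 0.5) -> float:
    return translation_loss + weight * model_adaptive_loss

# Translation quality alone would favor candidate A, but the downstream model's
# performance (the adaptive term) tips the combined objective toward candidate B.
print(objective_loss(0.30, 0.80))   # candidate A: 0.70
print(objective_loss(0.35, 0.40))   # candidate B: 0.55
```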