Patents Examined by Sarah Lhymn
  • Patent number: 11158024
    Abstract: Systems and methods are disclosed for the rendering of contents communicated between devices. A source device processes a video sequence and transmits it to a target device together with metadata including rendering directives. At the target device, the received video sequence is rendered according to the rendering directives. Rendering is affected by events detected by the target device at the time of rendering or by the target device's information. Transparency masks, generated by the source device, are transmitted in an alpha channel to the target device, and are used for blending the video sequence with a secondary content.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: October 26, 2021
    Assignee: Apple Inc.
    Inventors: John S. Bushell, Mohammad A. Shah, Sundararaman V. Shiva, Alexandre R. Moha, Nicholas V. Scapel
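The blending step this abstract describes is conventional alpha compositing: the transparency mask carried in the alpha channel weights the video pixel against the secondary content. A minimal per-pixel sketch in Python (function name and pixel layout are illustrative, not from the patent):

```python
def blend(video_px, secondary_px, alpha):
    """Alpha-blend one video pixel over secondary content.

    alpha comes from the transparency mask carried in the alpha
    channel (0.0 = fully transparent, 1.0 = fully opaque).
    """
    return tuple(alpha * v + (1.0 - alpha) * s
                 for v, s in zip(video_px, secondary_px))

# A half-transparent white pixel over black yields mid grey.
print(blend((255, 255, 255), (0, 0, 0), 0.5))  # (127.5, 127.5, 127.5)
```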
  • Patent number: 11156834
    Abstract: Aspects of the present disclosure relate to optical systems with ergonomic presentation of content for use in head-worn computing systems. A method for controlling a head-worn computer when viewing virtual images, including image content, that encourages an ergonomic head position to reduce neck pain, includes determining an angle of the head-worn computer relative to horizontal, determining an angle of a line of sight to the center of the virtual image as presented to a user's eye, determining a deviation between the determined angle of the line of sight and a predetermined ergonomic angle, and shifting the image content of the virtual image vertically as displayed to the user's eye so that a portion of the image content is not viewable, wherein the amount of shifting is in reverse correspondence to the magnitude of the determined deviation.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: October 26, 2021
    Assignee: Mentor Acquisition One, LLC
    Inventors: Nima L. Shams, John N. Border, John D. Haddick
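The "reverse correspondence" in this abstract means the applied vertical shift shrinks as the deviation from the ergonomic angle grows. A sketch under assumed parameters (the linear mapping and the 30-degree full-scale range are illustrative, not from the patent):

```python
def vertical_shift(line_of_sight_deg, ergonomic_deg, max_shift_px):
    """Shift image content vertically in reverse correspondence to the
    deviation between the line-of-sight angle and the predetermined
    ergonomic angle: the larger the deviation, the smaller the shift."""
    deviation = abs(line_of_sight_deg - ergonomic_deg)
    scale = max(0.0, 1.0 - deviation / 30.0)  # clamp at zero shift
    return max_shift_px * scale

print(vertical_shift(-10.0, -10.0, 100))  # no deviation -> full shift: 100.0
print(vertical_shift(5.0, -10.0, 100))    # 15 deg deviation -> 50.0
```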
  • Patent number: 11159386
    Abstract: Systems and methods provide for enriching flow data to analyze network security, availability, and compliance. A network analytics system can capture flow data and metadata from network elements. The network analytics system can enrich the flow data by in-line association of the flow data and metadata. The network analytics system can generate multiple planes with each plane representing a dimension of enriched flow data. The network analytics system can generate nodes for the planes with each node representing a unique value or set of values for the dimensions represented by planes. The network analytics system can generate edges for the nodes of the planes with each edge representing a flow between endpoints corresponding to the nodes. The network analytics system can update the planes in response to an interaction with the planes or in response to a query.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: October 26, 2021
    Inventors: Matthew Lawson Finn, II, Alok Lalit Wadhwa, Navindra Yadav, Jerry Xin Ye, Supreeth Rao, Prasannakumar Jobigenahally Malleshaiah, Tapan Shrikrishna Patwardhan, Umamaheswaran Arumugam, Aiyesha Ma, Darshan Shrinath Purandare
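The plane/node/edge construction described above can be sketched with plain dictionaries: one plane per dimension of the enriched flow data, nodes for unique endpoint values, and edges for observed source-destination pairs (the flow-record layout here is an assumption, not the patent's format):

```python
def build_planes(flows, dimensions):
    """Group enriched flow records into one plane per dimension: each
    plane's nodes are the unique values of that dimension seen at the
    endpoints, and its edges are the (source, destination) value pairs
    observed in the flows."""
    planes = {d: {"nodes": set(), "edges": set()} for d in dimensions}
    for flow in flows:
        for d in dimensions:
            src, dst = flow["src"][d], flow["dst"][d]
            planes[d]["nodes"].update((src, dst))
            planes[d]["edges"].add((src, dst))
    return planes

flows = [{"src": {"host": "a", "port": 80}, "dst": {"host": "b", "port": 443}}]
planes = build_planes(flows, ["host", "port"])
print(sorted(planes["host"]["nodes"]))  # ['a', 'b']
```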
  • Patent number: 11150724
    Abstract: A method, computer system, and computer program product for determining an engagement level of an individual is provided. The present invention may include capturing a plurality of image data depicting a relative location of a user. The present invention may also include identifying an individual within the captured image data. The present invention may further include gathering a plurality of engagement level indicator data associated with the identified individual. The present invention may also include calculating a current engagement level of the identified individual using the plurality of gathered engagement level indicator data.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Paul R. Bastide, Matthew E. Broomhall, Robert E. Loredo, Sathyanarayanan Srinivasan
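One plausible way to calculate a single engagement level from a plurality of indicator data is a normalized weighted sum; the indicator names and weights below are illustrative assumptions, not taken from the patent:

```python
def engagement_level(indicators, weights):
    """Combine gathered engagement-level indicator data (each scored
    0..1) into a single weighted engagement score."""
    total = sum(weights.values())
    return sum(weights[k] * indicators[k] for k in weights) / total

score = engagement_level(
    {"gaze_on_screen": 0.9, "posture": 0.6, "interaction_rate": 0.3},
    {"gaze_on_screen": 0.5, "posture": 0.3, "interaction_rate": 0.2},
)
print(round(score, 2))  # 0.69
```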
  • Patent number: 11151767
    Abstract: A removal model is trained to predict secondary dynamics associated with an individual enacting a performance. For a given sequence of frames that includes an individual enacting a performance and secondary dynamics, a retargeting application identifies a set of rigid points that correspond to skeletal regions of the individual and a set of non-rigid points that correspond to non-skeletal regions of the individual. For each frame in the sequence of frames, the application applies the removal model that takes as inputs a velocity history of a non-rigid point and a velocity history of the rigid points in a temporal window around the frame, and outputs a delta vector for the non-rigid point indicating a displacement for reducing secondary dynamics in the frame. In addition, a trained synthesis model can be applied to determine a delta vector for every non-rigid point indicating displacements for adding new secondary dynamics.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: October 19, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Gaspard Zoss, Eftychios Sifakis, Dominik Thabo Beeler, Derek Edward Bradley
  • Patent number: 11146777
    Abstract: A method for efficiently populating a display is provided. The method can include identifying a point at which a world ray intersects a capture surface defined by capture points of a scene, identifying a capture point closest to the identified point, generating a motion vector based on the motion vectors for each of two directly adjacent capture points, identifying a vector in the generated motion vector at a location at which the world ray intersects an image surface, and providing a pixel value from the image data of the capture point, the pixel value corresponding to a location in the image surface at which a vector of the generated motion vector points to the location at which the world ray intersects the image surface within a threshold distance or after a specified number of iterations.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: October 12, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf A. Michail, Yang You, Michael G. Boulton
  • Patent number: 11138805
    Abstract: The invention relates to quantitative quality assurance in a mixed reality environment. In some embodiments, the invention includes using mixed reality sensors embedded in a mixed reality device to detect body positional movements of a user and using an indirect measuring device to determine a target location for the current state of the target equipment and a current subtask of a predefined workflow. The invention further includes using a direct measuring device associated with the target location to detect a user interaction by the user at the target location, determining a confidence value based on the user movements, the current subtask, and the user interaction, and displaying confirmation of the user interaction on a mixed reality display of the user.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: October 5, 2021
    Assignee: The Government of the United States of America, as represented by the Secretary of the Navy
    Inventors: Christopher James Angelopoulos, Larry Clay Greunke
  • Patent number: 11132973
    Abstract: A method, system and computer-usable medium are disclosed for capturing an image rendered by a target application. One general aspect includes a computer-implemented method for capturing an image, the method including: intercepting API calls made by a target application to a graphics display driver, where the API calls made to the graphics display driver by the target application are made using a graphics rendering API library; and using the intercepted API calls to construct a copy of a frame buffer of the image, where the copy of the frame buffer is constructed independent of the graphics display driver. Certain embodiments may include corresponding stand-alone and/or network computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform one or more of these actions.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: September 28, 2021
    Assignee: Forcepoint, LLC
    Inventor: Benjamin Tyler
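The interception idea above can be illustrated in miniature: wrap each graphics-API call so every invocation is also logged, and reconstruct the frame-buffer copy from the log alone, independent of the display driver. Python monkey-patching stands in for native API hooking here, and the fake API class is purely illustrative:

```python
class FakeGraphicsAPI:
    """Stand-in for a graphics rendering API library (illustrative)."""
    def draw_rect(self, x, y, w, h, color):
        pass  # the real driver would rasterize here

def intercept(api, method_name, call_log):
    """Wrap an API call so every invocation is recorded; a copy of the
    frame buffer can later be reconstructed from the log alone."""
    original = getattr(api, method_name)
    def wrapper(*args, **kwargs):
        call_log.append((method_name, args, kwargs))
        return original(*args, **kwargs)
    setattr(api, method_name, wrapper)

api, log = FakeGraphicsAPI(), []
intercept(api, "draw_rect", log)
api.draw_rect(0, 0, 16, 16, color="red")
print(log)  # [('draw_rect', (0, 0, 16, 16), {'color': 'red'})]
```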
  • Patent number: 11113853
    Abstract: Systems and methods that enable blending and aggregating multiple related datasets to a blended data model (BDM), manipulation of the resulting data model, and the representation of multiple parameters in a single visualization. The BDM and each visualization can be iteratively manipulated, in real-time, using a user-directed question-answer-question response so patterns can be revealed that are not obvious.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: September 7, 2021
    Assignee: AQUMIN, LLC
    Inventors: Denis R. Papp, Michael J. Zeitlin
  • Patent number: 11113893
    Abstract: The present embodiments relate to display of glints associated with real-world objects in an environment displayed on an extended reality (XR) device. The glint can include a virtual object associated with a real-world object, such as an indication of a social interaction associated with a real-world object, a content item tagged to an object, etc. The system as described herein can present glints on a display of an XR device based on a distance between the XR device and a location associated with the glint. Responsive to selection of a glint in the environment, additional information can be presented relating to the glint or another action can be taken, such as to open an application. In some instances, a glint can include a series of search results relating to a corresponding real-world object to provide additional information relating to the real-world object.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: September 7, 2021
    Inventors: Jing Ma, Gerrit Hendrik Hofmeester, John Jacob Blakeley, Camila Cortes De Almeida e De Vincenzo, Gagneet Singh Mac, Jenna Velez, Cody Char, Annika Rodrigues, Michael Luo
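The distance-based presentation rule in this abstract reduces to a simple filter: show a glint only when the XR device is within some range of the location associated with it. A minimal sketch (field names and the threshold are illustrative assumptions):

```python
import math

def visible_glints(device_pos, glints, max_distance):
    """Present a glint only when the XR device is within max_distance
    of the real-world location associated with that glint."""
    return [g["label"] for g in glints
            if math.dist(device_pos, g["pos"]) <= max_distance]

glints = [{"label": "note", "pos": (1.0, 0.0, 0.0)},
          {"label": "review", "pos": (40.0, 0.0, 0.0)}]
print(visible_glints((0.0, 0.0, 0.0), glints, 10.0))  # ['note']
```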
  • Patent number: 11107195
    Abstract: An immersive content production system may capture a plurality of images of a physical object in a performance area using a taking camera. The system may determine an orientation and a velocity of the taking camera with respect to the physical object in the performance area. A user may select a first amount of motion blur exhibited by the images of the physical object based on a desired motion effect. The system may determine a correction to apply to a virtual object based at least in part on the orientation and the velocity of the taking camera and the desired motion blur effect. The system may also detect the distance from the taking camera to a physical object and from the taking camera to the virtual display. The system may use these distances to generate a corrected circle of confusion for the virtual images on the display.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: August 31, 2021
    Inventors: Roger Cordes, Lutz Latta
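The circle of confusion mentioned above has a standard thin-lens form, which the corrected value would build on; the sketch below is that textbook optics formula, not the patent's exact correction method:

```python
def circle_of_confusion(focal_len, f_number, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter (all lengths in the same
    unit). aperture = focal_len / f_number; a subject away from the
    focus distance blurs into a disc of this diameter on the sensor."""
    aperture = focal_len / f_number
    return (aperture * focal_len * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_len)))

# 50 mm f/2 lens focused at 2 m, subject at 5 m (units: metres -> mm).
print(round(circle_of_confusion(0.05, 2.0, 2.0, 5.0) * 1000, 3))  # 0.385
```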
  • Patent number: 11106723
    Abstract: An image display system is configured such that an image display device and an image processing device are connected to each other through a network. The image display device is provided with: an instruction information generation unit for generating instruction information pertaining to image-processing to be performed on an image input signal; an image signal transmission unit for transmitting the instruction information to the image processing device; a corrected signal reception unit for receiving a corrected image input signal obtained through image-processing performed by the image processing device on the basis of the instruction information; and a display signal output unit for outputting an image output signal based on the corrected image input signal, to an object where an image is to be displayed. The image processing device receives the image input signal through the network and performs image-processing on the image input signal according to the instruction information.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: August 31, 2021
    Assignee: MAXELL, LTD.
    Inventors: Keisuke Inata, Yuusuke Yatabe, Nobuaki Kabuto, Nobuhiro Fukuda, Mitsuo Nakajima
  • Patent number: 11107290
    Abstract: A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: August 31, 2021
    Inventors: Christopher A. Peri, Yingen Xiong, Lu Luo
  • Patent number: 11107254
    Abstract: A calligraphy-painting device, a calligraphy-painting apparatus, and an auxiliary method for calligraphy-painting are disclosed. The calligraphy-painting device includes: a display portion, configured to display preset calligraphy-painting information; an image acquiring portion, configured to acquire an image; and a control unit, in communication connection with the display portion, and configured to control the display portion to display the preset calligraphy-painting information, wherein the image is processed to obtain a calligraphy-painting region, and the preset calligraphy-painting information is virtually displayed in the calligraphy-painting region.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: August 31, 2021
    Inventor: Xinxin Mu
  • Patent number: 11100712
    Abstract: A method includes: receiving, in a first device, a relative description file for physical markers that are positioned at locations, the relative description file defining relative positions for each of the physical markers with regard to at least another one of the physical markers; initially localizing a position of the first device among the physical markers by visually capturing any first physical marker of the physical markers using an image sensor of the first device; and recognizing a second physical marker of the physical markers and a location of the second physical marker without a line of sight, the second physical marker recognized using the relative description file.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: August 24, 2021
    Assignee: Google LLC
    Inventors: Brett Barros, Xavier Benavides Palos
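The localization idea above is essentially vector addition: once any first marker is visually captured, every other marker's world position follows from the relative description file with no line of sight needed. A sketch that simplifies the pairwise relative positions to offsets from the captured marker (names and data layout are assumptions):

```python
def locate_second_marker(first_marker_world_pos, relative_description, marker_id):
    """World position of a second marker, derived from the relative
    description file after visually capturing any first marker."""
    dx, dy, dz = relative_description[marker_id]
    x, y, z = first_marker_world_pos
    return (x + dx, y + dy, z + dz)

rel = {"marker_2": (3.0, 0.0, -1.5)}
print(locate_second_marker((10.0, 0.0, 2.0), rel, "marker_2"))  # (13.0, 0.0, 0.5)
```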
  • Patent number: 11100710
    Abstract: The disclosure notably relates to a computer-implemented method for extracting a feature tree from a mesh. The method includes providing a mesh, computing a geometric and adjacency graph of the provided mesh, wherein each node of the graph represents one region of the mesh and comprises a primitive type and parameters of the region, and each connection between two nodes is an intersection between the respective surfaces of the regions represented by the two connected nodes. The method also includes instantiating, for each node of the graph, a surface based on the identified primitive type and parameters of the region.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: August 24, 2021
    Inventors: Guillaume Randon, Serban Alexandru State, Fernando Manuel Sanchez Bermudez
  • Patent number: 11099396
    Abstract: A method implemented by an extended reality (XR) display device includes rendering a current image frame received from an external electronic device associated with the XR display device. The current image frame is associated with a current pose of the XR display device. The method further includes receiving an updated image frame from the external electronic device, calculating an updated pose based on one or more characteristics of the updated image frame, and determining whether the updated pose is within a pose range with respect to the current pose. The method thus further includes re-rendering, on one or more displays of the XR display device, a previous image frame based on whether the current pose is determined to be within the pose range.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: August 24, 2021
    Inventor: Christopher A. Peri
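The pose-range test at the heart of this abstract can be sketched as a distance-plus-rotation check; the pose layout (position plus yaw) and the thresholds below are illustrative assumptions, and the re-rendering decision would follow from the returned flag:

```python
import math

def pose_within_range(current_pose, updated_pose, max_dist, max_yaw_deg):
    """True when the updated pose stays within the allowed range of the
    current pose: positional offset and yaw difference both bounded."""
    (x0, y0, z0, yaw0), (x1, y1, z1, yaw1) = current_pose, updated_pose
    dist = math.dist((x0, y0, z0), (x1, y1, z1))
    dyaw = abs((yaw1 - yaw0 + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180]
    return dist <= max_dist and dyaw <= max_yaw_deg

print(pose_within_range((0, 0, 0, 0.0), (0.1, 0, 0, 5.0), 0.5, 10.0))  # True
print(pose_within_range((0, 0, 0, 0.0), (2.0, 0, 0, 5.0), 0.5, 10.0))  # False
```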
  • Patent number: 11094109
    Abstract: A data processing device includes a camera configured to capture successive images, each image being captured during a respective image capture period such that respective different portions of the captured image are captured at respective different capture times within the image capture period; a detector to detect, from images captured by the camera, information indicating a relative location of a remote marker with respect to the data processing device, and to associate a timestamp with a detected relative location indicating a time at which the relative location was detected; and a motion sensor to detect motion of the data processing device and to control operation of the detector in response to the detected motion.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: August 17, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sharwin Winesh Raghoebardajal
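The per-portion capture times in this abstract match rolling-shutter behaviour: different image rows are read out at different times within the capture period, so a marker's detection timestamp depends on where in the frame it appears. A sketch assuming a linear readout model (parameters are illustrative):

```python
def row_capture_time(frame_start, readout_time, row, num_rows):
    """Timestamp for a marker detected at a given row of a
    rolling-shutter frame, interpolated across the sensor readout."""
    return frame_start + readout_time * (row / num_rows)

# A 16 ms readout: a marker seen halfway down the frame is stamped 8 ms in.
print(row_capture_time(0.0, 0.016, 540, 1080))  # 0.008
```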
  • Patent number: 11087553
    Abstract: An end-user system in accordance with the present disclosure includes a communication device configured to communicate with a server, a display screen, one or more processors, and at least one memory storing instructions which, when executed by the processor(s), cause the end-user system to access a physical world geographical location from a user, access two-dimensional physical world map data for a region surrounding the physical world geographical location, render for display on the display screen a three-dimensional mirrored world portion based on the two-dimensional physical world map data and render an avatar at a mirrored world location corresponding to the physical world geographical location, access geotagged social media posts which have geotags in the region and which the user is permitted to view, and render the geotagged social media posts as three-dimensional objects in the mirrored world portion.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: August 10, 2021
    Assignee: University of Maryland, College Park
    Inventors: Amitabh Varshney, Ruofei Du
  • Patent number: 11087521
    Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: August 10, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu