Patents Examined by Sarah Lhymn
  • Patent number: 12175580
    Abstract: The disclosed technology relates to detection of attention exhibited by a second user to a first user's avatar in a virtual reality environment. A first user's avatar activity and the second user's activity are evaluated to determine the second user's (e.g., via another avatar) attention level directed towards the first user's avatar. Information can be sent to notify the first user that the second user's attention level has satisfied attention-level criterion data. For example, if the second user is viewing and/or is within audio range of the first user for too long a time or too frequently, then the first user can be notified and/or other action taken. Remedial action is available for negative attention, such as blocking the second user from experiencing the first user in the environment. Detection of attention for positive purposes is also available, such as rewarding an influencer whose avatar is receiving significant attention from followers.
    Type: Grant
    Filed: August 23, 2022
    Date of Patent: December 24, 2024
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Nigel Bradley, Eric Zavesky, James Pratt, Ari Craine, Robert Koch
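A minimal sketch of the attention-level criterion described above, assuming a simple viewing-duration threshold (the class name, threshold, and reset behavior are illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass

# Hypothetical tracker: accumulate how long a second user's avatar has
# been viewing the first user's avatar, and flag when a duration-based
# attention-level criterion is satisfied.

@dataclass
class AttentionTracker:
    max_view_seconds: float = 30.0   # attention-level criterion (assumed)
    _accumulated: float = 0.0

    def observe(self, is_viewing: bool, dt: float) -> bool:
        """Advance the tracker by dt seconds; return True when the
        criterion is satisfied and the first user should be notified."""
        if is_viewing:
            self._accumulated += dt
        else:
            self._accumulated = 0.0  # reset when attention breaks
        return self._accumulated >= self.max_view_seconds
```

A real system would presumably track frequency as well as duration, per the "too frequently" clause in the abstract.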
  • Patent number: 12164109
    Abstract: Methods and systems are disclosed for performing operations for displaying virtual content on a contact lens. The operations comprise causing the contact lens to operate in a first display mode to allow unobstructed light associated with a real-world environment to be received by a pupil of a user; detecting a condition associated with display of virtual content; selecting between a second display mode and a third display mode as a new display mode in which to operate the contact lens, the second display mode obstructing a portion of the light associated with a real-world environment received by the pupil with the virtual content, the third display mode obstructing all of the light associated with a real-world environment received by the pupil with the virtual content; and transitioning the contact lens from operating in the first display mode to operating in the new display mode in response to detecting the condition.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: December 10, 2024
    Assignee: Snap Inc.
    Inventors: Russell Douglas Patton, Jonathan M. Rodriguez, II
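The three display modes and the condition-driven transition can be sketched as a small state machine; the mode names and trigger conditions here are illustrative assumptions:

```python
from enum import Enum, auto

# Illustrative states for the three contact-lens display modes in the
# abstract: unobstructed pass-through, partial obstruction (AR overlay),
# and full obstruction (immersive display).

class DisplayMode(Enum):
    TRANSPARENT = auto()   # first mode: unobstructed real-world light
    AR_OVERLAY = auto()    # second mode: obstruct a portion of the light
    VR_FULL = auto()       # third mode: obstruct all of the light

def next_mode(condition: str) -> DisplayMode:
    """Select the new display mode for a detected condition
    (the condition vocabulary is invented for this sketch)."""
    return DisplayMode.VR_FULL if condition == "immersive" else DisplayMode.AR_OVERLAY
```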
  • Patent number: 12148102
    Abstract: A method for augmenting a real-world scene viewed through augmented reality (AR) glasses includes determining that an overlay should be processed for a real-world person being viewed via the AR glasses, with the determining using artificial intelligence (AI) to identify a trigger scenario. The AI is configured to process a video stream of images captured of the real-world person using a camera of the AR glasses to identify the trigger scenario, which is associated with an intensity level exhibited by the real-world person. The method also includes identifying the overlay to replace a portion of the real-world person in the video stream of images, and generating an augmented video stream that includes the video stream composited with the overlay. The augmented video stream being presented via the AR glasses is configured to adjust the intensity level exhibited by the real-world person when viewed via the AR glasses.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: November 19, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Jorge Arroyo Palacios
  • Patent number: 12148093
    Abstract: Rendering systems that can use combinations of rasterization rendering processes and ray tracing rendering processes are disclosed. In some implementations, these systems perform a rasterization pass to identify visible surfaces of pixels in an image. Some implementations may begin shading processes for visible surfaces, before the geometry is entirely processed, in which rays are emitted. Rays can be culled at various points during processing, based on determining whether the surface from which the ray was emitted is still visible. Rendering systems may implement rendering effects as disclosed.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: November 19, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Jens Fursund, Luke T. Peterson
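The visibility-based ray culling described above can be sketched as follows; the data layout (a pixel-keyed visibility map from the rasterization pass) is an assumption for illustration:

```python
# Rays emitted during shading are dropped if the surface that emitted
# them is no longer the visible surface for the pixel that spawned them.

def cull_rays(rays, visible_surface):
    """Keep only rays whose originating surface is still visible.

    rays: list of (pixel, surface_id, direction) tuples
    visible_surface: dict mapping pixel -> surface_id, as produced by
                     the rasterization visibility pass
    """
    return [r for r in rays if visible_surface.get(r[0]) == r[1]]
```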
  • Patent number: 12118641
    Abstract: A method for graphics processing. The method includes rendering graphics for an application using a plurality of graphics processing units (GPUs), using the plurality of GPUs in collaboration to render an image frame including a plurality of pieces of geometry. The method includes, during a pre-pass phase of rendering, generating information at the GPUs regarding the plurality of pieces of geometry and their relation to a plurality of screen regions. The method includes assigning the plurality of screen regions to the plurality of GPUs based on the information for purposes of rendering the plurality of pieces of geometry in a subsequent phase of rendering.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: October 15, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Mark E. Cerny, Tobias Berghoff, David Simpson
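One way the region-to-GPU assignment could use the pre-pass information is a greedy load balance over per-region geometry cost; this heuristic is an illustrative assumption, not the patent's method:

```python
# Greedy bin packing: after the pre-pass estimates how much geometry
# touches each screen region, hand regions to GPUs so that the total
# per-GPU workload stays roughly even.

def assign_regions(region_costs: dict, num_gpus: int) -> dict:
    """Map each screen region to a GPU index, balancing total cost."""
    loads = [0] * num_gpus
    assignment = {}
    # Place the most expensive regions first.
    for region, cost in sorted(region_costs.items(), key=lambda kv: -kv[1]):
        gpu = loads.index(min(loads))  # least-loaded GPU so far
        assignment[region] = gpu
        loads[gpu] += cost
    return assignment
```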
  • Patent number: 12112427
    Abstract: Images of a scene are received. The images represent viewpoints corresponding to the scene. A pixel map of the scene is computed based on the plurality of images. Multi-plane image (MPI) layers from the pixel map are extracted in real-time. The MPI layers are aggregated. The scene is rendered from a novel viewpoint based on the aggregated MPI layers.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: October 8, 2024
    Assignee: SNAP INC.
    Inventors: Numair Khalil Ullah Khan, Gurunandan Krishnan Gorumkonda, Shree K. Nayar, Yicheng Wu
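Aggregating MPI layers is conventionally done with front-to-back alpha compositing; the sketch below uses per-pixel scalars in place of full images to show the accumulation:

```python
# Composite multi-plane image (MPI) layers with the "over" operator,
# accumulating color weighted by the transmittance of the layers in
# front. Scalars stand in for per-pixel color planes.

def composite_mpi(layers):
    """layers: list of (color, alpha) pairs ordered front to back.
    Returns the composited color."""
    color, transmittance = 0.0, 1.0
    for c, a in layers:
        color += transmittance * a * c
        transmittance *= (1.0 - a)
    return color
```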
  • Patent number: 12112394
    Abstract: A method including rendering graphics for an application using graphics processing units (GPUs). Responsibility for rendering of geometry is divided between GPUs based on screen regions, each GPU having a corresponding division of the responsibility which is known. A plurality of pieces of geometry of an image frame is assigned to the GPUs for geometry testing. A first GPU state configuring one or more shaders to perform the geometry testing is set. Geometry testing is performed at GPUs on the plurality of pieces of geometry to generate information regarding each piece of geometry and its relation to each of the plurality of screen regions. A second GPU state configuring the one or more shaders to perform rendering is set. The information generated for each of the plurality of pieces of geometry is used when rendering the plurality of pieces of geometry at the GPUs.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: October 8, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Mark E. Cerny, Florian Strauss, Tobias Berghoff
  • Patent number: 12100111
    Abstract: A room manager can generate mappings for a real-world room that support a shared XR environment. For example, the real-world room can include real-world objects and surfaces, such as a table(s), chair(s), wall(s), door(s), window(s), etc. The room manager can generate XR object definitions based on information received about the real-world room, object(s), and surface(s). For example, the room manager can implement a flow that guides a user equipped with an XR system to provide information for the XR object definitions, such as real-world surfaces that map to the XR object(s), borders (e.g., measured using a component of the XR system), such as borders on real-world surfaces, semantic information (e.g., number of seat assignments at an XR table, size of XR objects, etc.), and other suitable information. Implementations generate previews of the shared XR environment, such as a local preview and a remote preview.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: September 24, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Björn Wanbo, Michael James LeBeau, William Arthur Hugh Steptoe, Jonathan Mallinson, Steven James Wilson, Vasanth Kumar Rajendran, Vasyl Baran
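The XR object definitions described above might be represented as a small record type; every field name below is an invented placeholder, not the patent's schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one XR object definition: a real-world surface
# mapped to an XR object, with a measured border and semantic
# information such as the number of seat assignments.

@dataclass
class XRObjectDefinition:
    surface: str                                   # e.g. "table_top"
    border: list                                   # measured (x, y) border points
    semantics: dict = field(default_factory=dict)  # e.g. {"seats": 4}

table = XRObjectDefinition("table_top",
                           [(0, 0), (2, 0), (2, 1), (0, 1)],
                           {"seats": 4})
```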
  • Patent number: 12094008
    Abstract: A method and system for generating a three-dimensional representation of a vehicle to assess damage to the vehicle. A mobile device may capture multispectral scans of a vehicle from each a plurality of cameras configured to scan the vehicle at a different wavelength of the electromagnetic spectrum. A virtual model of the vehicle may be generated from the multispectral scan of the vehicle, such that anomalous conditions or errors in individual wavelength data are omitted from model generation. A representation of the virtual model may be presented to the user via the display of the mobile device. The virtual model of the vehicle may further be analyzed to assess damage to the vehicle.
    Type: Grant
    Filed: June 26, 2023
    Date of Patent: September 17, 2024
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventor: Nathan C Summers
  • Patent number: 12086934
    Abstract: Systems and methods for assessing trailer utilization are disclosed herein. The method generates a trailer interior map and captures an image of the trailer interior. The map includes first voxels associated with the trailer interior and the image includes a plurality of three-dimensional (3D) image data points. The method generates a 3D map of an object based on a set of 3D points indicative of respective 3D image data points corresponding to respective first voxels and determines whether the object is non-conforming. The method determines at least one of second voxels associated with unusable space proximate to a non-conforming object, third voxels associated with the non-conforming object, and fourth voxels associated with a conforming object. The method determines an occupied portion of the trailer based on the first voxels, third voxels, and fourth voxels and trailer utilization based on the occupied portion of the trailer, the first voxels, and the second voxels.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: September 10, 2024
    Assignee: Zebra Technologies Corporation
    Inventors: Yuri Astvatsaturov, Seth David Silk, Justin F. Barish
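The abstract names which voxel classes feed each computation but not the exact formula, so the arithmetic below is one plausible reading stated as an assumption: occupied portion is object voxels over interior voxels, and utilization additionally counts unusable space as consumed capacity.

```python
# first: trailer-interior voxels; second: unusable-space voxels;
# third: non-conforming-object voxels; fourth: conforming-object voxels.

def occupied_portion(first, third, fourth):
    return (third + fourth) / first

def trailer_utilization(first, second, third, fourth):
    # Unusable space next to a non-conforming object is treated as
    # effectively consumed (an assumption of this sketch).
    return occupied_portion(first, third, fourth) + second / first
```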
  • Patent number: 12079902
    Abstract: Systems and methods are provided that include a processor executing a program to process an initial image through a first diffusion stage to generate a final first stage image, wherein the first diffusion stage includes using a diffusion model, a gradient estimator model smaller than the diffusion model, and a text-image match gradient calculator. The processor further executes the program to process the final first stage image through a second diffusion stage to generate a final second stage image. The second diffusion stage includes, for a second predetermined number of iterations, inputting the final first stage image through the diffusion model, back-propagating the image through the text-image match gradient calculator to calculate a second stage gradient against the input text, and updating the final first stage image by applying the second stage gradient to the final first stage image.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: September 3, 2024
    Assignee: LEMON INC.
    Inventors: Qing Yan, Bingchen Liu, Yizhe Zhu, Xiao Yang
  • Patent number: 12079931
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: September 3, 2024
    Assignee: SNAP INC.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv
  • Patent number: 12079944
    Abstract: A mobile device comprises a display, an image capture device that generates image data of a face of a viewer of the display, and a processing device. The processing device receives the image data and sends the image data to a computing device. The computing device processes the image data to identify a position of a dental arch in the image data; determines a treatment outcome for the dental arch; generates a post-treatment image of the dental arch that shows the treatment outcome; generates updated image data comprising a superimposition of the post-treatment image of the dental arch over the received image data; and sends the updated image data to the mobile device. The mobile device outputs the updated image data to the display, wherein the post-treatment image of the dental arch is superimposed over the dental arch in the received image data such that the post-treatment image is visible in the display rather than a true depiction of the dental arch.
    Type: Grant
    Filed: May 30, 2023
    Date of Patent: September 3, 2024
    Assignee: Align Technology, Inc.
    Inventors: Pavel Pokotilov, Anton Lapshin, Evgeniy Malashkin, Sergei Ozerov, Yury Slynko, Andrey Sergeevich Nekrasov, Leonid Vyacheslavovich Grechishnikov, Anna Orlova, Yingjie Li, Phillip Thomas Harris, Maurice K. Carrier
  • Patent number: 12062121
    Abstract: A method to generate a digital avatar simulating a human for behavioral empathy and understanding is described. The method includes training a neural network to generate the digital avatar to simulate the human according to a vector of human attributes and to generate responses to interactions with a user based on the vector of human attributes. The method also includes the user interacting with the digital avatar, in which the digital avatar initially has a background and attributes similar to the user's. The method further includes modifying, over time, the attributes of the digital avatar to represent a target background and a target set of attributes different from the user's.
    Type: Grant
    Filed: October 2, 2021
    Date of Patent: August 13, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Nikos Arechiga, Matthew Len Lee, Charlene C. Wu, Shabnam Hakimi
  • Patent number: 12056896
    Abstract: A scale and pose estimation method for a camera system is disclosed. Camera data for a scene acquired by the camera system is received. A scale prior parameter characterizing scale of the camera system is received. A cost of a cost function is calculated for a similarity transformation. The cost of the cost function is influenced at least by the scale prior parameter. Based at least on the cost function being less than a threshold cost, an estimated scale and pose of the camera system is output based on the similarity transformation.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: August 6, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Victor M. Fragoso Rojas, Mei Chen, Gabriel Takacs
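A hedged sketch of a cost function with a scale prior, in the spirit of the abstract: a similarity-transform residual is augmented with a penalty on deviation from the prior scale, and the estimate is accepted when the total cost falls below a threshold. The specific penalty form (squared log-ratio) and its weight are assumptions of this sketch:

```python
import math

def cost_with_scale_prior(residual_cost, scale, scale_prior, weight=1.0):
    """Total cost = geometric residual + weighted scale-prior penalty.
    A log-ratio penalty is symmetric in over/under-scaling."""
    return residual_cost + weight * math.log(scale / scale_prior) ** 2

def accept_estimate(residual_cost, scale, scale_prior, threshold):
    """Accept the similarity transformation when cost < threshold."""
    return cost_with_scale_prior(residual_cost, scale, scale_prior) < threshold
```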
  • Patent number: 12046034
    Abstract: A piecewise progressive continuous calibration method with context coherence is utilized to improve display of virtual content. When a set of frames are rendered to depict a virtual image, the VAR system may identify a location of the virtual content in the frames. The system may convolve a test pattern at the location of the virtual content to generate a calibration frame. The calibration frame is inserted within the set of frames in a manner that is imperceptible to the user.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: July 23, 2024
    Assignee: Magic Leap, Inc.
    Inventor: Aydin Arpa
  • Patent number: 12045945
    Abstract: A control device includes circuitry configured to: generate, based on respective pieces of imaging data obtained by a plurality of imaging devices of a moving body, a three-dimensional image indicating a space including both the moving body and surroundings of the moving body; and cause a display device to display the generated three-dimensional image. The circuitry is capable of rotating the space in the three-dimensional image. When it is predicted that a boundary region of the respective pieces of imaging data in the three-dimensional image will be present in a specific region at the time of stop of the rotation, the circuitry changes the boundary region.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: July 23, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Tatsuro Fujiwara, Yasushi Shoda, Jumpei Morita
  • Patent number: 12033266
    Abstract: A computer-executable method for generating a side-by-side three-dimensional (3D) image includes the steps of creating a 3D mesh and estimating depth information of the raw image. The method further includes the steps of updating the left mesh area and the right mesh area of the 3D mesh based on the estimated depth information of the raw image and projecting each of the mesh vertices of the left mesh area onto a coordinate system of the side-by-side 3D image based on a left eye position, and projecting each of the mesh vertices of the right mesh area onto the coordinate system of the side-by-side 3D image based on a right eye position. The method further obtains the side-by-side 3D image by coloring the left mesh area and the right mesh area projected onto the coordinate system of the side-by-side 3D image based on the raw image.
    Type: Grant
    Filed: November 16, 2022
    Date of Patent: July 9, 2024
    Assignee: ACER INCORPORATED
    Inventors: Sergio Cantero Clares, Wen-Cheng Hsu, Shih-Hao Lin, Chih-Haw Tan
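The per-eye projection step can be illustrated with a toy pinhole model: each mesh vertex is projected separately for the left and right eye positions to fill the two halves of the side-by-side image. The focal length and eye separation below are illustrative values, not from the patent:

```python
# Project a depth-displaced vertex once per eye; the horizontal
# disparity between the two results is what produces the 3D effect.

def project(vertex, eye_x, focal=1.0):
    """Project a 3D vertex (x, y, z) for an eye at (eye_x, 0, 0)."""
    x, y, z = vertex
    return (focal * (x - eye_x) / z, focal * y / z)

left = project((0.0, 0.0, 2.0), eye_x=-0.03)   # left-eye image coordinate
right = project((0.0, 0.0, 2.0), eye_x=+0.03)  # right-eye image coordinate
```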
  • Patent number: 12033285
    Abstract: A virtual window configuration method includes the following steps. A processor generates a virtual window. A depth detection sensor generates depth information based on an image. The processor analyzes the depth information to generate a depth matrix. The processor finds a depth configuration block in the image using the depth matrix. A feature point detection sensor generates feature point information for the image. The processor analyzes the feature point information to generate a feature point matrix. The processor finds a feature point configuration block in the image using the feature point matrix. The processor moves the virtual window to the depth configuration block or the feature point configuration block.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: July 9, 2024
    Assignee: Wistron Corp.
    Inventors: Wei-Chou Chen, Ming-Fong Yeh, Yu-Chi Chang, Lee-Chun Ko
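An illustrative version of finding a "depth configuration block": scan candidate windows in the depth matrix and pick the one with the lowest depth variance, on the assumption that a flat region is a plausible place to pin a virtual window. The flatness criterion is this sketch's, not necessarily the patent's:

```python
# Exhaustively score every block-sized window of the depth matrix by
# depth variance and return the flattest one.

def flattest_block(depth_matrix, block):
    """Return (row, col) of the block x block window with minimum variance."""
    rows, cols = len(depth_matrix), len(depth_matrix[0])
    best, best_var = None, float("inf")
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            vals = [depth_matrix[r + i][c + j]
                    for i in range(block) for j in range(block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var < best_var:
                best, best_var = (r, c), var
    return best
```

A feature-point configuration block could be found the same way, scoring windows by feature-point density instead of depth variance.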
  • Patent number: 12033262
    Abstract: A computing system may provide functionality for controlling an animated model to perform actions and to perform transitions therebetween. The system may determine, from among a plurality of edges from a first node of a control graph to respective other nodes of the control graph, a selected edge from the first control node to a selected node. The system may then determine controls for an animated model in a simulation based at least in part on the selected edge, control data associated with the selected node, a current simulation state of the simulation, and a machine learned algorithm, determine an updated simulation state of the simulation based at least in part on the controls for the animated model, and adapt one or more parameters of the machine learned algorithm based at least in part on the updated simulation state and a desired simulation state.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: July 9, 2024
    Assignee: Electronic Arts Inc.
    Inventors: Zhaoming Xie, Wolfram Sebastian Starke, Harold Henry Chaput
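The edge-selection step above can be reduced to a toy sketch: score every outgoing edge of the current control node against a desired state and follow the best one. The scoring function here is a plain lambda standing in for the machine-learned component the abstract describes:

```python
# graph maps each control node to its outgoing (edge, target) pairs;
# the selected edge is the one whose target node scores highest.

def select_edge(graph, node, score):
    """Return the (edge, target) pair with the highest-scoring target."""
    return max(graph[node], key=lambda et: score(et[1]))

graph = {"idle": [("start_walk", "walk"), ("start_run", "run")]}
edge, target = select_edge(graph, "idle",
                           score=lambda n: 1.0 if n == "run" else 0.0)
```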