Patents Examined by Martin Mushambo
-
Patent number: 11972526
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
Type: Grant
Filed: September 29, 2023
Date of Patent: April 30, 2024
Assignee: Apple Inc.
Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
-
Patent number: 11966667
Abstract: An intelligent simulation system for a jacket towing system includes a distributed collaborative simulation subsystem configured to provide a communication interface for each subsystem, a comprehensive management and evaluation subsystem configured to generate and issue a simulation subject, an operation control simulation subsystem configured to generate operation instructions, a motion simulation subsystem configured to receive parameters of the subject and the operation instructions, simulate the motion states of the jacket, a tugboat and a towrope in real time, and transmit the simulated motion states to the visual simulation subsystem, and a visual simulation subsystem configured to perform a three-dimensional display for the real-time simulation of the motion states of the jacket, the tugboat and the towrope.
Type: Grant
Filed: May 9, 2023
Date of Patent: April 23, 2024
Assignee: HARBIN ENGINEERING UNIVERSITY
Inventors: Yingfei Zan, Lihao Yuan, Duanfeng Han, Hui Jia
-
Patent number: 11966701
Abstract: In one embodiment, a method includes rendering a first output image comprising one or more augmented-reality (AR) objects for displays of an AR rendering device of an AR system associated with a first user. The method further includes accessing sensor signals associated with the first user. The one or more sensor signals may be captured by sensors of the AR system. The method further includes detecting a change in a context of the first user with respect to a real-world environment based on the sensor signals. The method further includes rendering a second output image comprising the AR objects for the displays of the AR rendering device. One or more of the AR objects may be adapted based on the detected change in the context of the first user.
Type: Grant
Filed: August 2, 2021
Date of Patent: April 23, 2024
Assignee: Meta Platforms, Inc.
Inventors: Yiming Pu, Christopher E Balmes, Gabrielle Catherine Moskey, John Jacob Blakeley, Amy Lawson Bearman, Alireza Dirafzoon, Matthew Dan Feiszli, Ganesh Venkatesh, Babak Damavandi, Jiwen Ren, Chengyuan Yan, Guangqiang Dong
-
Patent number: 11961212
Abstract: A display device according to an embodiment may include a controller and a display unit. The controller may perform tone mapping for adjusting luminance of input image data, and the display unit may display an image according to output image data whose luminance is adjusted by the tone mapping. The controller may generate a base mapping curve for an entire region from the input image data, extract information for each local region of the entire region, and generate a local mapping curve reflecting the information for each local region with respect to each local region to perform tone mapping.
Type: Grant
Filed: January 20, 2022
Date of Patent: April 16, 2024
Assignee: LG ELECTRONICS INC.
Inventors: Hyomin Kim, Kyuri Kim, Hyun Jung, Chanho Lee
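The two-level curve scheme this abstract describes, a global base curve refined by per-region local curves, can be sketched in a few lines of Python. This is an illustrative toy, not LG's patented method: the power-law `base_curve`, the `strength` parameter, and the per-region gain rule are all assumptions standing in for curves a real controller would derive from image statistics.

```python
def base_curve(luma, gamma=0.6):
    """Global tone-mapping curve: a simple power-law lift of normalized
    luminance (stand-in for a curve derived from the whole image)."""
    return luma ** gamma

def local_curve(luma, region_mean, strength=0.5):
    """Local curve: bias the globally mapped value by the region's mean
    luminance, lifting dark regions and compressing bright ones."""
    mapped = base_curve(luma)
    local_gain = 1.0 + strength * (0.5 - region_mean)
    return min(1.0, mapped * local_gain)

def tone_map(image, region_means):
    """image: 2D list of luminances in [0, 1]; region_means: same shape,
    giving the mean luminance of each pixel's local region."""
    return [[local_curve(p, m) for p, m in zip(row, mean_row)]
            for row, mean_row in zip(image, region_means)]
```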
-
Patent number: 11948274
Abstract: A method performed by a computer is disclosed. The method comprises receiving color data for input pixels of an input image and an input set of features used to render the input image of a three-dimensional animation environment, wherein the input pixels are of a first resolution. The computer may then load into memory a generator of a generative adversarial network including a neural network used to scale the input image, the neural network trained using training data comprising color data of training input images and training output images and a training set of the features used to render the training input images. After the generator is loaded into memory, the computer may generate an output image having a second resolution that is different than the first resolution by passing the color data and the input set of features through the generator.
Type: Grant
Filed: January 5, 2022
Date of Patent: April 2, 2024
Assignee: PIXAR
Inventors: Vaibhav Vavilala, Mark Meyer
-
Patent number: 11948257
Abstract: Systems and methods for generating an AR image are described herein. A physical camera is used to capture a video of a physical object in front of a physical background. The system then accesses data defining a virtual environment and selects a first position of a virtual camera in the virtual environment. While capturing the video, the system displays captured video of the physical object, such that the physical background is replaced with a view of the virtual environment from the first position of the virtual camera. In response to detecting a movement of the physical camera, the system selects a second position of the virtual camera in the virtual environment based on the detected movement. The system then displays the captured video of the physical object, wherein the view of the physical background is replaced with a view of the virtual environment from the second position of the virtual camera.
Type: Grant
Filed: May 9, 2022
Date of Patent: April 2, 2024
Assignee: Rovi Guides, Inc.
Inventor: Warren Keith Edwards
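The camera-following step above amounts to applying the physical camera's detected displacement to the virtual camera. A minimal sketch, with a hypothetical `VirtualCameraTracker` class; a real system would also track orientation and use the patent's own position-selection logic:

```python
class VirtualCameraTracker:
    """Keep a virtual camera in sync with a physical camera's motion
    (translation only; orientation is omitted in this sketch)."""
    def __init__(self, virtual_pos, physical_pos, scale=1.0):
        self.virtual_pos = list(virtual_pos)
        self.last_physical = list(physical_pos)
        self.scale = scale  # physical-to-virtual unit conversion (assumed)

    def on_physical_move(self, new_physical_pos):
        # Apply the physical displacement, scaled, to the virtual camera.
        delta = [n - o for n, o in zip(new_physical_pos, self.last_physical)]
        self.virtual_pos = [v + self.scale * d
                            for v, d in zip(self.virtual_pos, delta)]
        self.last_physical = list(new_physical_pos)
        return tuple(self.virtual_pos)
```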
-
Patent number: 11948339
Abstract: According to an example method, a system receives first data representing a polygon mesh. The polygon mesh includes a plurality of interconnected vertices forming a plurality of triangles. The system generates second data representing the polygon mesh. Generating the second data includes traversing the vertices of the polygon mesh according to a traversal order, partitioning the plurality of triangles into a set of ordered triangle groups in accordance with the traversal order, and encoding, in the second data, the set of ordered triangle groups. The system outputs the second data. A position of each of the vertices in the traversal order is determined based on (i) a number of previously encoded triangles that are incident to that vertex, and/or (ii) a sum of one or more angles formed by the previously encoded triangles that are incident to that vertex.
Type: Grant
Filed: June 6, 2022
Date of Patent: April 2, 2024
Assignee: Apple Inc.
Inventor: Khaled Mammou
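One way to realize a traversal order driven by criterion (i) above is to score each candidate vertex by how many already-encoded triangles touch it. This sketch is a simplified stand-in for the patented coder and omits the angle-sum criterion (ii); the `incident` map and the scoring rule are assumptions:

```python
def next_vertex(candidates, encoded_triangles, incident):
    """Pick the next vertex in the traversal: prefer the vertex incident to
    the most previously encoded triangles (criterion (i) of the abstract).
    incident: maps each vertex to the triangle ids touching it (assumed)."""
    def score(v):
        return sum(1 for t in incident.get(v, []) if t in encoded_triangles)
    return max(candidates, key=score)
```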
-
Patent number: 11943276
Abstract: A method, computer system, and a computer program product for optimizing web conferencing is provided. The present invention may include receiving data for an organization, the organization being comprised of a plurality of participants. The present invention may include receiving a scheduled web conference. The present invention may include determining a network bandwidth threshold for each of the plurality of participants of the scheduled web conference based on at least the data received for the organization and data associated with the scheduled web conference. The present invention may include monitoring a network bandwidth of the scheduled web conference. The present invention may include determining whether to transmit a line art drawing for one or more participants based on the network bandwidth of the scheduled web conference.
Type: Grant
Filed: March 23, 2022
Date of Patent: March 26, 2024
Assignee: International Business Machines Corporation
Inventors: Ilse M. Breedvelt-Schouten, Jana H. Jenkins, John A. Lyons, Jeffrey A. Kusnitz
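The final decision step, degrading a participant's video to a line-art drawing when measured bandwidth falls below the per-participant threshold, reduces to a comparison. The hysteresis band here is an assumption added to avoid flapping near the threshold, not something the abstract specifies:

```python
def should_send_line_art(measured_bandwidth, threshold, hysteresis=0.1):
    """Return True when bandwidth is low enough that a line-art drawing
    should replace the participant's video; the hysteresis margin (assumed)
    keeps the decision stable when bandwidth hovers near the threshold."""
    return measured_bandwidth < threshold * (1.0 - hysteresis)
```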
-
Patent number: 11935196
Abstract: Techniques are described for using computing devices to perform automated operations related to providing visual information of multiple types in an integrated manner about a building or other defined area. The techniques may include generating and presenting a GUI (graphical user interface) on a client device that includes a computer model of the building's interior with one or more first types of information (e.g., in a first pane of the GUI), and simultaneously presenting other types of related information about the building interior (e.g., in additional separate GUI pane(s)) that is coordinated with the first type(s) of information being currently displayed. The computer model may be a 3D (three-dimensional) or 2.5D representation generated after the house is built and showing the actual house's interior (e.g., walls, furniture, etc.), and may be displayed to a user of a client computing device in a displayed GUI with various user-selectable controls.
Type: Grant
Filed: June 10, 2023
Date of Patent: March 19, 2024
Assignee: MFTB Holdco, Inc.
Inventors: Yuguang Li, Ivaylo Boyadzhiev, Romualdo Impas
-
Patent number: 11935290
Abstract: Systems and methods of servicing engines are provided. An exemplary method of servicing an engine includes receiving, by one or more computing devices, information corresponding to one or more components of the engine; determining, by the one or more computing devices, a location of the one or more components of the engine with respect to a location of an augmented reality device; and presenting, in a current field of view display of the augmented reality device, at least a portion of the information corresponding to the one or more components of the engine, wherein the portion of the information includes a rendering of the one or more components, instructions regarding operations to be performed on the one or more components, directional arrows or contextual information associated with the one or more components, or any combination thereof.
Type: Grant
Filed: October 29, 2020
Date of Patent: March 19, 2024
Assignees: Oliver Crispin Robotics Limited, General Electric Company
Inventors: Andrew Crispin Graham, David Scott Diwinsky, Julian Matthew Foxall
-
Patent number: 11935183
Abstract: A method for incorporating a real object at varying depths within a rendered three-dimensional architectural design space can include capturing data from a real environment, wherein the real environment comprises at least one real object within a physical architectural space. The method can also comprise extracting the at least one real object from the captured data from the real environment. Further, the method can include providing a rendered three-dimensional architectural design space comprising at least one virtual architectural component. The method can also include projecting the captured data from the real environment on a first plane within the rendered three-dimensional architectural design space and projecting the extracted at least one real object on at least one additional plane within the rendered three-dimensional architectural design space, such that the rendered at least one real object is properly occluded within the rendered three-dimensional architectural design space.
Type: Grant
Filed: January 5, 2021
Date of Patent: March 19, 2024
Assignees: DIRTT ENVIRONMENTAL SOLUTIONS LTD., ARMSTRONG WORLD INDUSTRIES, INC.
Inventors: Robert William Blodgett, Joseph S. Howell
-
Patent number: 11925860
Abstract: This application discloses techniques for generating and querying projective hash maps. More specifically, projective hash maps can be used for spatial hashing of data related to N-dimensional points. Each point is projected onto a projection surface to convert the three-dimensional (3D) coordinates for the point to two-dimensional (2D) coordinates associated with the projection surface. Hash values based on the 2D coordinates are then used as an index to store data in the projective hash map. Utilizing the 2D coordinates rather than the 3D coordinates allows for more efficient searches to be performed to locate points in the 3D space. In particular, projective hash maps can be utilized by graphics applications for generating images, and the improved efficiency can, for example, enable a game streaming application on a server to render images transmitted to a user device via a network at faster frame rates.
Type: Grant
Filed: June 9, 2021
Date of Patent: March 12, 2024
Assignee: NVIDIA Corporation
Inventors: Marco Salvi, Jacopo Pantaleoni, Aaron Eliot Lefohn, Christopher Ryan Wyman, Pascal Gautron
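The project-then-hash idea can be illustrated with a pinhole projection onto a plane and a dictionary keyed by quantized 2D cells. This is a toy sketch, not NVIDIA's implementation: the projection surface, cell size, and the `ProjectiveHashMap` API are all assumptions.

```python
import math

def project_to_plane(point, plane_z=1.0):
    """Perspective-project a 3D point (camera at origin, looking down +z)
    onto the plane z = plane_z, yielding 2D coordinates."""
    x, y, z = point
    return (x * plane_z / z, y * plane_z / z)

def cell_key(uv, cell_size):
    """Quantize projected 2D coordinates into a grid cell used as hash key."""
    u, v = uv
    return (math.floor(u / cell_size), math.floor(v / cell_size))

class ProjectiveHashMap:
    """Minimal spatial hash over projected 2D coordinates (illustrative)."""
    def __init__(self, cell_size=0.1):
        self.cell_size = cell_size
        self.cells = {}

    def insert(self, point, payload):
        key = cell_key(project_to_plane(point), self.cell_size)
        self.cells.setdefault(key, []).append((point, payload))

    def query(self, point):
        """Return stored entries whose projection falls in the same cell."""
        key = cell_key(project_to_plane(point), self.cell_size)
        return self.cells.get(key, [])
```

Because the key is 2D, a lookup only inspects one cell instead of searching the full 3D space, which is the efficiency argument the abstract makes.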
-
Patent number: 11928787
Abstract: Systems, apparatuses and methods may provide for technology that estimates poses of a plurality of input images, reconstructs a proxy three-dimensional (3D) geometry based on the estimated poses and the plurality of input images, detects a user selection of a virtual viewpoint, encodes, via a first neural network, the plurality of input images with feature maps, warps the feature maps of the encoded plurality of input images based on the virtual viewpoint and the proxy 3D geometry, and blends, via a second neural network, the warped feature maps into a single image, wherein the first neural network is a deep convolutional network and the second neural network is a recurrent convolutional network.
Type: Grant
Filed: September 22, 2020
Date of Patent: March 12, 2024
Assignee: Intel Corporation
Inventors: Gernot Riegler, Vladlen Koltun
-
Patent number: 11928780
Abstract: In one implementation, a method of enriching a three-dimensional scene model with a three-dimensional object model based on a semantic label is performed at a device including one or more processors and non-transitory memory. The method includes obtaining a three-dimensional scene model of a physical environment including a plurality of points, wherein each of the plurality of points is associated with a set of coordinates in a three-dimensional space, wherein a subset of the plurality of points is associated with a particular cluster identifier and a particular semantic label. The method includes retrieving a three-dimensional object model based on the particular semantic label, the three-dimensional object model including at least a plurality of points. The method includes updating the three-dimensional scene model by replacing the subset of the plurality of points with the three-dimensional object model.
Type: Grant
Filed: July 20, 2022
Date of Patent: March 12, 2024
Assignee: APPLE INC.
Inventor: Payal Jotwani
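The final update step, swapping a labeled cluster of scene points for a retrieved object model, reduces to filtering by cluster identifier and appending the model's points. A minimal sketch; the alignment and scaling of the model that a real system would need are omitted:

```python
def enrich_scene(points, cluster_ids, target_cluster, object_model_points):
    """Replace the scene points belonging to `target_cluster` with the points
    of a retrieved object model (no alignment/scaling in this sketch)."""
    kept = [p for p, c in zip(points, cluster_ids) if c != target_cluster]
    return kept + list(object_model_points)
```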
-
Patent number: 11908080
Abstract: The various embodiments described herein include methods, devices, and systems for generating object meshes. In some embodiments, a method includes obtaining a trained classifier, and an input observation of a 3D object. The method further includes generating a three-pole signed distance field from the input observation using the trained classifier. The method also includes generating an output mesh of the 3D object from the three-pole signed distance field; and generating a display of the 3D object from the output mesh.
Type: Grant
Filed: April 4, 2022
Date of Patent: February 20, 2024
Assignee: TENCENT AMERICA LLC
Inventors: Weikai Chen, Weiyang Li, Bo Yang
-
Patent number: 11908098
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a combined 3D representation of a user based on an alignment based on a 3D reference point. For example, a process may include obtaining a predetermined three-dimensional (3D) representation that is associated with a 3D reference point defined relative to a skeletal representation of the user. The process may further include obtaining a sequence of frame-specific 3D representations corresponding to multiple instants in a period of time, each of the frame-specific 3D representations representing a second portion of the user at a respective instant of the multiple instants in the period of time. The process may further include generating combined 3D representations of the user generated by combining the predetermined 3D representation with a respective frame-specific 3D representation based on an alignment which is based on the 3D reference point.
Type: Grant
Filed: September 20, 2023
Date of Patent: February 20, 2024
Assignee: Apple Inc.
Inventor: Michael S. Hutchinson
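The alignment step can be illustrated by translating the frame-specific points so that their 3D reference point coincides with the predetermined representation's reference point before combining. A toy sketch under that assumption; a real alignment would also handle rotation and scale:

```python
def align_and_combine(predetermined, frame_points, ref_pre, ref_frame):
    """Translate frame-specific points so their reference point coincides
    with the predetermined representation's reference point, then combine.
    (Translation-only alignment; rotation/scale omitted in this sketch.)"""
    offset = tuple(a - b for a, b in zip(ref_pre, ref_frame))
    shifted = [tuple(p + o for p, o in zip(pt, offset)) for pt in frame_points]
    return list(predetermined) + shifted
```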
-
Patent number: 11900535
Abstract: The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives LIDAR data generated from a LIDAR camera; measures a plurality of dimensions of a landscape based upon processor analysis of the LIDAR data; builds a 3D model of the landscape based upon the measured plurality of dimensions, the 3D model including: (i) a structure, and (ii) a vegetation; and displays a representation of the 3D model.
Type: Grant
Filed: April 26, 2021
Date of Patent: February 13, 2024
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Nicholas Carmelo Marotta, Laura Kennedy, J D Johnson Willingham
-
Patent number: 11893698
Abstract: An electronic device according to various embodiments of the disclosure includes: a communication module comprising communication circuitry and a processor operatively connected to the communication module. The processor may be communicatively connected to an augmented reality (AR) device through the communication module, and be configured to receive image information obtained by a camera of the AR device from the AR device, to detect an object based on the received image information, to acquire virtual information corresponding to the object, to control the communication module to transmit the virtual information to the AR device, to determine, based on the received image information, whether the object is out of a viewing range of the AR device, and to change a transfer interval of the virtual information for the AR device based on the determination.
Type: Grant
Filed: December 23, 2021
Date of Patent: February 6, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seungbum Lee, Seungseok Hong, Donghyun Yeom
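The last two steps, deciding whether the object is out of the viewing range and adjusting the transfer interval accordingly, can be sketched with a field-of-view cone test. The half-angle and the `fast`/`slow` interval values are assumptions, not figures from the patent:

```python
import math

def in_viewing_range(obj_dir, view_dir, half_fov_deg=45.0):
    """Return True if the object direction lies within the device's field of
    view (both directions given as unit vectors; half-angle is assumed)."""
    dot = sum(a * b for a, b in zip(obj_dir, view_dir))
    return dot >= math.cos(math.radians(half_fov_deg))

def transfer_interval(obj_dir, view_dir, fast=0.1, slow=1.0):
    """Shorten the virtual-information transfer interval while the object is
    visible; lengthen it once the object leaves the viewing range."""
    return fast if in_viewing_range(obj_dir, view_dir) else slow
```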
-
Patent number: 11886767
Abstract: The disclosed system receives a request from a user to interact with an agent of a wireless telecommunication network including a 5G wireless telecommunication network or higher generation wireless telecommunication network. The system determines whether the user is associated with a first AR/VR device including a camera configured to capture an object proximate to the first AR/VR device and a display configured to show a virtual object, which is not part of a surrounding associated with the first AR/VR device. Upon determining that the user is associated with the first AR/VR device, the system creates a high-bandwidth communication channel over the wireless telecommunication network between the first AR/VR device and a second AR/VR device and a virtual room enabling the user and the agent to share visual information over the high-bandwidth communication channel.
Type: Grant
Filed: June 17, 2022
Date of Patent: January 30, 2024
Assignee: T-Mobile USA, Inc.
Inventor: Phi Nguyen
-
Patent number: 11887260
Abstract: Aspects of the present disclosure involve a system for presenting augmented reality (AR) items. The system performs operations including receiving a video that includes a depiction of a real-world environment and generating a 3D model of the real-world environment based on the video. The operations include determining, based on the 3D model of the real-world environment, that an AR item has been placed in the video at a particular 3D position and identifying a portion of the 3D model corresponding to the real-world environment currently being displayed on a screen. The operations include determining that the 3D position of the AR item is excluded from the portion of the 3D model currently being displayed on the screen and in response, displaying an indicator that identifies the 3D position of the AR item in the 3D model relative to the portion of the 3D model currently being displayed on a screen.
Type: Grant
Filed: December 30, 2021
Date of Patent: January 30, 2024
Assignee: Snap Inc.
Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Gal Sasson
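The on-screen test and direction indicator can be illustrated with an axis-aligned bounds check against the currently visible portion of the model. This simplifies the patented approach, which reasons about an arbitrary displayed portion of the 3D model rather than a box:

```python
def offscreen_indicator(item_pos, visible_min, visible_max):
    """If the AR item lies outside the axis-aligned bounds of the currently
    visible portion of the 3D model, return a per-axis direction hint toward
    it (-1/0/+1); return None when the item is on screen."""
    direction = []
    onscreen = True
    for p, lo, hi in zip(item_pos, visible_min, visible_max):
        if p < lo:
            direction.append(-1)
            onscreen = False
        elif p > hi:
            direction.append(1)
            onscreen = False
        else:
            direction.append(0)
    return None if onscreen else tuple(direction)
```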