Patents Examined by Hilina K Demeter
  • Patent number: 11321440
    Abstract: A head mounted display (HMD) including an image display unit, a camera, a storage unit configured to store information about an image of an object and character information in association with the image of the object, an image detection unit configured to detect an image of an object from a captured image of the camera, a character string generating unit configured to retrieve, from the storage unit, character information associated with the image of the object detected by the image detection unit, and to arrange a character or character string represented by the retrieved character information in detection order of the object to generate a character string, and an input controller configured to input the character string generated by the character string generating unit to an input area arranged in a user interface.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: May 3, 2022
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Takashi Tomizawa
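A minimal Python sketch of the character-string generation described in the 11321440 abstract above. All names (Detection, CharacterStringInput, the card labels) are hypothetical illustrations, not taken from the patent: detected objects are looked up in the stored table and their characters are concatenated in detection order before being sent to the input area.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    object_id: str   # label of the object recognised in the camera frame
    order: int       # position in the detection sequence

@dataclass
class CharacterStringInput:
    # Storage unit: object identifier -> associated character information.
    char_table: dict = field(default_factory=dict)

    def generate_string(self, detections):
        """Arrange the retrieved characters in detection order to build the string."""
        ordered = sorted(detections, key=lambda d: d.order)
        return "".join(self.char_table.get(d.object_id, "") for d in ordered)

# Usage: three objects detected in sequence produce the string "cat",
# which would then be placed into the input area of the user interface.
hmd_input = CharacterStringInput(char_table={"card_c": "c", "card_a": "a", "card_t": "t"})
detections = [Detection("card_a", 2), Detection("card_c", 1), Detection("card_t", 3)]
print(hmd_input.generate_string(detections))   # -> "cat"
```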
  • Patent number: 11315295
    Abstract: Perception of the relationship between a comfort level and environmental data is facilitated, and appropriate management of air-conditioning equipment is enabled.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: April 26, 2022
    Assignee: Mitsubishi Electric Corporation
    Inventors: Yoshihiro Ohta, Natsumi Tamura, Kenji Sato, Satoko Tomita, Kazuyuki Nagahiro, Kazuo Tomisawa, Takayoshi Iida, Hiroyuki Yasuda, Yoshinori Nakajima
  • Patent number: 11308657
    Abstract: Systems and methods configured to train an autoencoder are disclosed. A training data set is generated comprising images of different faces. A first autoencoder configuration is generated, comprising a first encoder and a first decoder. The first autoencoder configuration is trained using dataset images, wherein weights associated with the first encoder and weights associated with the first decoder are modified. A second autoencoder configuration is generated comprising the first encoder and a second decoder. The second decoder is trained using a plurality of images of a first target face. The first encoder weights are substantially maintained, and weights associated with the second decoder are modified. An autoencoder comprising the trained first encoder and the trained second decoder is used to generate an output from a source image of a first face having a facial expression, where the facial expression of the first face from the source image is applied to the first target face.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: April 19, 2022
    Assignee: Neon Evolution Inc.
    Inventors: Cody Gustave Berlin, Carl Davis Bogan, III, Kenneth Michael Lande, Anders Øland, Davide Toniolo, Alessia Bertugli, Dario Bertazioli, Brian Sung Lee
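A toy PyTorch sketch of the two-stage training described in the 11308657 abstract. The fully connected networks, sizes, and random tensors are placeholders (real models would be convolutional autoencoders): the shared encoder and first decoder are trained on many faces, then the encoder is frozen while a second decoder is trained only on the target face, and the encoder/second-decoder pair performs the swap.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the encoder/decoder networks.
def make_encoder():
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 128))

def make_decoder():
    return nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Unflatten(1, (3, 64, 64)))

encoder, decoder_a = make_encoder(), make_decoder()

# Stage 1: train encoder + first decoder on many different faces (reconstruction loss).
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder_a.parameters()), lr=1e-4)
faces = torch.rand(8, 3, 64, 64)                      # placeholder dataset batch
loss = nn.functional.mse_loss(decoder_a(encoder(faces)), faces)
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: keep the trained encoder weights (substantially) fixed and train a
# second decoder only on images of the target face.
decoder_b = make_decoder()
for p in encoder.parameters():
    p.requires_grad_(False)
opt_b = torch.optim.Adam(decoder_b.parameters(), lr=1e-4)
target_faces = torch.rand(8, 3, 64, 64)               # placeholder target-face batch
loss_b = nn.functional.mse_loss(decoder_b(encoder(target_faces)), target_faces)
opt_b.zero_grad(); loss_b.backward(); opt_b.step()

# Inference: encode a source face and decode with decoder_b, so the source
# expression is reproduced with the target identity.
with torch.no_grad():
    swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
```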
  • Patent number: 11308682
    Abstract: A method comprising the steps of generating a first representation and a second representation, where the first representation represents a first view of a computer-generated scene obtained from a first virtual camera and the second representation represents a second view of the computer-generated scene obtained from a second virtual camera. Each of the first and second representations comprises a plurality of rays which intersect with objects of the scene. A relationship is determined between a ray of the first representation and a ray of the second representation, and the rays are grouped based on the relationship to form a group of substantially similar rays. One or more of the groups of substantially similar rays are processed substantially simultaneously to produce a first and a second rendered view of the computer-generated scene. The first and the second rendered views are output to one or more display devices.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: April 19, 2022
    Assignees: Apical Limited, Arm Limited
    Inventors: Daren Croxford, Mathieu Jean Joseph Robart
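A rough Python sketch of the ray-grouping idea in the 11308682 abstract, under the assumption that "substantially similar" means rays whose quantised directions coincide; the quantisation scheme and bin count are illustrative, not from the patent.

```python
import math

def ray_key(origin, direction, angle_bins=512):
    """Quantise a ray direction so that nearly parallel rays share a key."""
    yaw = math.atan2(direction[1], direction[0])
    pitch = math.asin(max(-1.0, min(1.0, direction[2])))
    return (round(yaw / (2 * math.pi) * angle_bins), round(pitch / math.pi * angle_bins))

def group_similar_rays(left_rays, right_rays):
    """Group rays from the two views whose quantised directions match."""
    groups = {}
    for view, rays in (("left", left_rays), ("right", right_rays)):
        for origin, direction in rays:
            groups.setdefault(ray_key(origin, direction), []).append((view, origin, direction))
    return groups

# Each group can then be traced/shaded once and the result shared between both
# rendered views, instead of intersecting near-identical rays twice.
left = [((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
right = [((0.06, 0.0, 0.0), (0.001, 1.0, 0.0))]
print(group_similar_rays(left, right))
```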
  • Patent number: 11276215
    Abstract: An audio system in a local area providing an audio signal to a headset of a remote user is presented herein. The audio system identifies sounds from a human sound source in the local area, based in part on sounds detected within the local area. The audio system generates an audio signal for presentation to a remote user within a virtual representation of the local area based in part on a location of the remote user within the virtual representation of the local area relative to a virtual representation of the human sound source within the virtual representation of the local area. The audio system provides the audio signal to a headset of the remote user, wherein the headset presents the audio signal as part of the virtual representation of the local area to the remote user.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: March 15, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Nadav Grossinger, Robert Hasbun
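A simplified Python sketch of the relative-position audio rendering described in the 11276215 abstract, assuming a 2D layout and a basic inverse-distance/pan model; a real system would likely use head-related transfer functions rather than simple panning, and all values here are invented.

```python
import math

def spatialize(samples, source_pos, listener_pos, listener_forward):
    """Rough inverse-distance attenuation plus left/right panning from the
    listener's position and facing relative to the virtual sound source."""
    dx, dy = source_pos[0] - listener_pos[0], source_pos[1] - listener_pos[1]
    distance = max(1.0, math.hypot(dx, dy))
    gain = 1.0 / distance
    # Signed angle of the source relative to the listener's forward direction.
    rel = math.atan2(listener_forward[1], listener_forward[0]) - math.atan2(dy, dx)
    pan = math.sin(rel)                     # -1 = far left, +1 = far right
    left = [s * gain * (1.0 - pan) * 0.5 for s in samples]
    right = [s * gain * (1.0 + pan) * 0.5 for s in samples]
    return left, right

# A speaker detected 2 m to the right of a remote listener who is facing +y ends
# up mostly in the right channel of the headset feed.
left, right = spatialize([0.5, 0.25], source_pos=(2.0, 0.0),
                         listener_pos=(0.0, 0.0), listener_forward=(0.0, 1.0))
print(left, right)
```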
  • Patent number: 11275436
    Abstract: Interface-based modeling and design of three dimensional spaces using two dimensional representations are provided herein. An example method includes converting a three dimensional space into a two dimensional space using a map projection schema, where the two dimensional space is bounded by ergonomic limits of a human, and the two dimensional space is provided as an ergonomic user interface, receiving an anchor position within the ergonomic user interface that defines a placement of an asset relative to the three dimensional space when the two dimensional space is re-converted back to a three dimensional space, and re-converting the two dimensional space back into the three dimensional space for display along with the asset, within an optical display system.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: March 15, 2022
    Assignee: RPX CORPORATION
    Inventor: Sterling Crispin
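A small Python sketch of the 11275436 idea: a viewing direction is projected into a 2D space bounded by ergonomic yaw/pitch limits, an anchor is placed in that 2D space, and the anchor is re-converted back to a 3D position. The projection and the limit values are assumptions, not the patent's map projection schema.

```python
import math

# Assumed ergonomic limits (degrees) of comfortable head rotation, bounding the 2D space.
YAW_LIMIT, PITCH_LIMIT = 100.0, 60.0

def to_2d(yaw_deg, pitch_deg):
    """Project a viewing direction into a normalised [0,1]^2 ergonomic UI coordinate."""
    u = (yaw_deg + YAW_LIMIT) / (2 * YAW_LIMIT)
    v = (pitch_deg + PITCH_LIMIT) / (2 * PITCH_LIMIT)
    return u, v

def to_3d(u, v, radius=1.0):
    """Re-convert a 2D anchor position back to a point in the 3D space around the user."""
    yaw = math.radians(u * 2 * YAW_LIMIT - YAW_LIMIT)
    pitch = math.radians(v * 2 * PITCH_LIMIT - PITCH_LIMIT)
    return (radius * math.cos(pitch) * math.sin(yaw),
            radius * math.sin(pitch),
            radius * math.cos(pitch) * math.cos(yaw))

# An asset anchored at the centre-right of the ergonomic UI maps back to a 3D
# position to the user's right at eye height.
anchor_u, anchor_v = 0.9, 0.5
print(to_3d(anchor_u, anchor_v))
```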
  • Patent number: 11275434
    Abstract: An information processing apparatus supplies, to an image display apparatus including an image capturing unit configured to capture an image of a real space, and a display unit configured to display an image generated using the image captured by the image capturing unit, an image generated using the image captured by the image capturing unit. The information processing apparatus includes a generation unit configured to generate an image depicting a specific object at a position at which the specific object is estimated to be present after a predetermined time from a time when the image display apparatus starts to move in the captured image of the real space including the specific object, and a control unit configured to shift a position at which the image generated by the generation unit is displayed on the display unit based on a change in a position and/or an orientation of the image display apparatus.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: March 15, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kazuki Takemoto
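A minimal Python sketch of the prediction-plus-shift scheme in the 11275434 abstract: the object is drawn where it is estimated to be after a latency interval, and the finished image is shifted according to the pose change of the display apparatus. The linear prediction, the yaw-only shift, and all constants are illustrative assumptions; the shift sign depends on screen and axis conventions.

```python
def predict_position(position, velocity, latency_s):
    """Estimate where the tracked object will be after the render/display latency."""
    return tuple(p + v * latency_s for p, v in zip(position, velocity))

def display_shift(render_yaw_deg, current_yaw_deg, pixels_per_degree=20.0):
    """Pixel offset compensating for head rotation since rendering started
    (sign convention depends on the display setup)."""
    return (current_yaw_deg - render_yaw_deg) * pixels_per_degree

# An object moving at 0.5 m/s is drawn where it is expected to be 50 ms later; the
# finished image is then shifted to account for 2 degrees of head rotation that
# happened while rendering.
predicted = predict_position((1.0, 0.0, 2.0), (0.5, 0.0, 0.0), latency_s=0.05)
offset_px = display_shift(render_yaw_deg=10.0, current_yaw_deg=12.0)
print(predicted, offset_px)
```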
  • Patent number: 11263715
    Abstract: Generating a risk and constraint labeled context map of an operational space is provided. The risk and constraint labeled context map of the operational space corresponding to a user of a cognitive suit is generated to drive the cognitive suit contextually using three-dimension reconstruction, virtual reality, and semi-supervised learning. Labeled risks and constraints in the risk and constraint labeled context map are associated with cognitive suit actuation events to deploy a set of mitigation strategies to address the labeled risks and constraints. An apparatus embedded in the cognitive suit is actuated to deploy the set of mitigation strategies in response to sensing a labeled risk or labeled constraint proximate to the user along a trajectory of the user in the operational space.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: March 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Vijay Kumar Ananthapur Bache, Vijay Ekambaram, Srikanth K. Murali, Padmanabha Venkatagiri Shesadri
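A compact Python sketch of the proximity check implied by the 11263715 abstract: labeled risks in the context map carry an associated mitigation, and a mitigation is deployed when the risk lies close to the user's trajectory. The data layout, labels, and distance threshold are hypothetical.

```python
from dataclasses import dataclass
import math

@dataclass
class LabeledRisk:
    label: str
    position: tuple          # position in the operational-space context map
    mitigation: str          # actuation event / mitigation strategy for this label

def risks_along_trajectory(context_map, trajectory, radius=1.0):
    """Return mitigations to deploy for risks proximate to the user's trajectory."""
    actions = []
    for risk in context_map:
        if any(math.dist(point, risk.position) <= radius for point in trajectory):
            actions.append((risk.label, risk.mitigation))
    return actions

context_map = [LabeledRisk("hot_surface", (2.0, 1.0), "stiffen_sleeve"),
               LabeledRisk("low_clearance", (5.0, 5.0), "vibrate_shoulder")]
trajectory = [(0.0, 0.0), (1.5, 1.0), (3.0, 2.0)]
print(risks_along_trajectory(context_map, trajectory))   # -> [("hot_surface", "stiffen_sleeve")]
```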
  • Patent number: 11257297
    Abstract: A system for manufacturing a customized product includes at least one processor programmed and/or configured to: display an image of a first product having first dimensions on a user interface of a computing device of a user; receive an augmented reality or virtual reality (AR/VR) request; in response to receiving the AR/VR request, capture image data from an image capturing device of the computing device and display the image data on the computing device; overlay the image of the first product over a portion of the image data captured by the image capturing device; and resize the overlaying image of the first product based on user input from a computing device of the user, such that second dimensions are associated with the first product.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: February 22, 2022
    Assignee: Baru Inc.
    Inventor: Augustine K. Go
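A tiny Python sketch of the resize step in the 11257297 abstract: the user's resize of the AR overlay yields a scale factor, from which the second dimensions of the product are derived. The dimension names and values are made up for illustration.

```python
def resized_dimensions(first_dimensions, scale_factor):
    """Derive the second dimensions from the user's resize of the AR overlay."""
    return {name: round(value * scale_factor, 2) for name, value in first_dimensions.items()}

# The catalogue product is shown at its first dimensions; after the user drags the
# AR overlay to 80% of its original on-screen size, the customised ("second")
# dimensions follow from that scale factor.
first = {"width_cm": 120.0, "height_cm": 75.0, "depth_cm": 45.0}
print(resized_dimensions(first, scale_factor=0.8))   # -> {'width_cm': 96.0, ...}
```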
  • Patent number: 11249553
    Abstract: A system for the generation and management of tactile sensation includes a computing subsystem. A method for the generation and management of tactile sensation includes receiving a set of inputs and processing the set of inputs. Additionally or alternatively, the method can include: communicating tactile commands to a tactile interface system; operating the tactile interface system based on the tactile commands; and/or performing any other suitable processes.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: February 15, 2022
    Assignee: Emerge Now Inc.
    Inventors: James D. Hamilton, Naveen Anand Gunalan, Adam Elhadad, Nathan E. Brummel, Dustin Delmer, Stephen Hodgson
  • Patent number: 11244497
    Abstract: A content visualizing device and method are provided that may adjust content based on a distance to an object so as to maintain a projection plane and prevent an overlap with the object in front.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: February 8, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yang Ho Cho, Dong Kyung Nam
  • Patent number: 11238649
    Abstract: This invention presents a method and a system that use rendering facets to conduct hybrid geometric modeling for three-dimensional product design, wherein a geometry operation comprises the steps of: mapping rendering facets to operating facets, creating intersection lines, splitting each triangle through which an intersection line passes, sectioning geometries, regrouping facets to form new geometric objects, and mapping each new geometric object to rendering facets. To record the modeling process, the method has the steps of: allocating a Constructive Hybrid Geometry object, making up each Operating Geometry including a geometry object and operational parameters, adding an Operating Geometry to the object, conducting operations with the facets, and updating operational results. The system is flexible and able to create fine and varied geometric models with primary geometric objects, extended geometric objects, and surface patches.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: February 1, 2022
    Assignee: Nature Simulation Systems Inc.
    Inventor: Shangwen Cao
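A Python sketch of the process-recording part of the 11238649 abstract: a Constructive Hybrid Geometry object accumulates Operating Geometry entries (a geometry object plus operational parameters) and can replay them to update results. The class layout is an assumption; the facet-level steps (intersection lines, triangle splitting, sectioning, regrouping) are only referenced in comments.

```python
from dataclasses import dataclass, field

@dataclass
class OperatingGeometry:
    geometry: object          # primary geometric object, extended object, or surface patch
    operation: str            # e.g. "base", "subtract", "union"
    parameters: dict          # operational parameters (transforms, tolerances, ...)

@dataclass
class ConstructiveHybridGeometry:
    """Records the modeling process as an ordered list of operations."""
    operations: list = field(default_factory=list)

    def add(self, geometry, operation, **parameters):
        self.operations.append(OperatingGeometry(geometry, operation, parameters))

    def replay(self, apply_fn):
        """Re-run the recorded operations (e.g. after editing parameters) to update
        the operational results and, ultimately, the rendering facets."""
        result = None
        for op in self.operations:
            result = apply_fn(result, op)
        return result

# Hypothetical usage: the apply function would map rendering facets to operating
# facets, create intersection lines, split crossed triangles, section and regroup
# facets, then map the new geometric objects back to rendering facets.
model = ConstructiveHybridGeometry()
model.add("box", "base")
model.add("cylinder", "subtract", axis=(0, 0, 1), radius=2.0)
print(model.operations)
```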
  • Patent number: 11238836
    Abstract: Methods and systems for depth-based foveated rendering in the display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. Some embodiments include determining a fixation point of a user's eyes. Location information associated with a first virtual object to be presented to the user via a display device is obtained. A resolution-modifying parameter of the first virtual object is obtained. A particular resolution at which to render the first virtual object is identified based on the location information and the resolution-modifying parameter of the first virtual object. The particular resolution is based on a resolution distribution specifying resolutions for corresponding distances from the fixation point. The first virtual object rendered at the identified resolution is presented to the user via the display system.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: February 1, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Vaibhav Mathur, Lionel Ernest Edwin, Xiaoyang Zhang, Bjorn Nicolaas Servatius Vlaskamp
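A small Python sketch of the resolution selection in the 11238836 abstract: a resolution distribution maps the angular distance from the fixation point to a render-resolution scale, which is then adjusted by the object's resolution-modifying parameter. The linear falloff and the constants are illustrative assumptions.

```python
def render_resolution(distance_from_fixation_deg, resolution_modifier=1.0,
                      full_res=1.0, min_res=0.25, falloff_deg=30.0):
    """Pick a render-resolution scale from a simple distribution over the angular
    distance of the virtual object from the user's fixation point."""
    t = min(1.0, distance_from_fixation_deg / falloff_deg)
    base = full_res - (full_res - min_res) * t          # linear falloff away from fixation
    return max(min_res, min(full_res, base * resolution_modifier))

# An object 5 degrees from fixation renders near full resolution; one 25 degrees
# away with a resolution-lowering parameter gets a much coarser scale.
print(render_resolution(5.0))                            # ~0.88
print(render_resolution(25.0, resolution_modifier=0.8))  # ~0.30
```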
  • Patent number: 11217011
    Abstract: In one embodiment, a method includes accessing a digital map of a real-world region, where the digital map includes one or more three-dimensional meshes corresponding to one or more three-dimensional objects within the real-world region, receiving an object query including an identifier for an anchor in the digital map, positional information relative to the anchor, and information associated with a directional vector, determining a position within the digital map based on the identifier for the anchor and the positional information relative to the anchor, determining a three-dimensional mesh in the digital map that intersects with a projection of the directional vector from the determined position within the digital map, identifying metadata associated with the three-dimensional mesh, and sending the metadata to the second computing device.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: January 4, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Mingfei Yan, Yajie Yan, Richard Andrew Newcombe, Yuheng Ren
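A simplified Python sketch of the object query in the 11217011 abstract: the anchor identifier and positional offset give a query position, a directional vector is projected from it, the first intersected mesh is found, and its metadata is returned. Bounding-sphere ray marching stands in for a real ray/triangle-mesh intersection, and all names and values are hypothetical.

```python
import math

def resolve_query(digital_map, anchor_id, offset, direction, max_range=50.0, step=0.25):
    """Walk a ray from (anchor position + offset) and return the metadata of the
    first mesh whose bounding sphere the ray enters."""
    origin = tuple(a + o for a, o in zip(digital_map["anchors"][anchor_id], offset))
    norm = math.sqrt(sum(d * d for d in direction))
    direction = tuple(d / norm for d in direction)
    t = 0.0
    while t < max_range:
        point = tuple(o + d * t for o, d in zip(origin, direction))
        for mesh in digital_map["meshes"]:
            if math.dist(point, mesh["center"]) <= mesh["radius"]:
                return mesh["metadata"]
        t += step
    return None

digital_map = {
    "anchors": {"lobby_anchor": (0.0, 0.0, 0.0)},
    "meshes": [{"center": (0.0, 5.0, 0.0), "radius": 1.0,
                "metadata": {"name": "reception desk"}}],
}
# Query: 1 m in front of the anchor, pointing along +y, hits the desk mesh.
print(resolve_query(digital_map, "lobby_anchor", offset=(0.0, 1.0, 0.0),
                    direction=(0.0, 1.0, 0.0)))
```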
  • Patent number: 11199898
    Abstract: In an embodiment, a processing system provides an augmented reality object for display by a head-mounted device (HMD) worn by a user. The processing system provides an augmented reality graphic for display by the HMD on a plane and overlaid on the augmented reality object. The processing system determines a gaze direction of the user using sensor data captured by a sensor of the HMD. Responsive to determining that the gaze direction intersects with the augmented reality graphic on the plane and remains intersecting for at least a period of time, the processing system determines a position of intersection between the gaze direction and the augmented reality graphic on the plane. The processing system provides a modified version of the augmented reality object for display by the HMD according to the position of intersection during the period of time.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: December 14, 2021
    Assignee: SentiAR, Inc.
    Inventors: Walter Blume, Michael K. Southworth, Jennifer N. Avari Silva, Jonathan R. Silva
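A minimal Python sketch of the gaze interaction in the 11199898 abstract: the gaze ray is intersected with the plane of the augmented reality graphic, and the interaction fires only after the gaze has stayed on the graphic for a dwell period, using the intersection position to modify the object. The plane representation and dwell threshold are assumptions.

```python
def gaze_plane_intersection(eye_origin, gaze_dir, plane_point, plane_normal):
    """Return the intersection point of the gaze ray with the graphic's plane, or None."""
    denom = sum(g * n for g, n in zip(gaze_dir, plane_normal))
    if abs(denom) < 1e-6:
        return None                                   # gaze parallel to the plane
    t = sum((p - e) * n for p, e, n in zip(plane_point, eye_origin, plane_normal)) / denom
    if t < 0:
        return None                                   # plane is behind the user
    return tuple(e + g * t for e, g in zip(eye_origin, gaze_dir))

def dwell_selected(hit_times, dwell_s=1.0):
    """The interaction fires only if the gaze stays on the graphic for at least dwell_s."""
    return bool(hit_times) and (hit_times[-1] - hit_times[0]) >= dwell_s

# Gaze from the origin straight ahead intersects a graphic plane 2 m away; after the
# gaze remains on it for 1.2 s, the object would be modified at the hit position.
hit = gaze_plane_intersection((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, -1))
print(hit, dwell_selected([0.0, 0.4, 0.8, 1.2]))
```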
  • Patent number: 11189100
    Abstract: A device may receive, from a user device, a request to activate an extended reality experience. The device may obtain access network information relating to a set of access networks available to the user device and user device information relating to the user device. Based on the access network information and the user device information, the device may determine an access network to use for the extended reality experience. The device may determine, based on the access network, a first portion of the extended reality experience to execute locally or a second portion of the extended reality experience to execute remotely.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: November 30, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Rohit Shirish Saraf, John A. Turato, Stephane Chaysinh
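A rough Python sketch of the decision logic in the 11189100 abstract: an access network is chosen from the available set, and the extended reality workload is split between local and remote execution based on the network and the device's capabilities. The thresholds and the bandwidth/latency/GPU fields are invented for illustration.

```python
def plan_xr_session(access_networks, device):
    """Pick an access network and decide which portion of the XR experience runs
    locally versus remotely, based on network and device information."""
    # Prefer the lowest-latency network that still meets a bandwidth floor.
    usable = [n for n in access_networks if n["bandwidth_mbps"] >= 25]
    network = min(usable or access_networks, key=lambda n: n["latency_ms"])
    # Low-latency network + weak device -> render remotely; otherwise render locally.
    if network["latency_ms"] <= 20 and device["gpu_score"] < 50:
        split = {"local": ["pose tracking", "display"], "remote": ["rendering"]}
    else:
        split = {"local": ["pose tracking", "display", "rendering"], "remote": []}
    return network["name"], split

networks = [{"name": "5G", "latency_ms": 12, "bandwidth_mbps": 300},
            {"name": "LTE", "latency_ms": 45, "bandwidth_mbps": 40}]
print(plan_xr_session(networks, device={"gpu_score": 30}))
```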
  • Patent number: 11176724
    Abstract: Speech-driven facial animation is useful for a variety of applications such as telepresence, chatbots, etc. The necessary attributes of a realistic face animation are: (1) audio-visual synchronization, (2) identity preservation of the target individual, (3) plausible mouth movements, and (4) presence of natural eye blinks. Existing methods mostly address audio-visual lip synchronization and synthesis of natural facial gestures for overall video realism; however, they are not sufficiently accurate. The present disclosure provides a system and method that learn the motion of facial landmarks as an intermediate step before generating texture. Person-independent facial landmarks are generated from audio for invariance to different voices, accents, etc. Eye blinks are imposed on the facial landmarks, and the person-independent landmarks are retargeted to person-specific landmarks to preserve identity-related facial structure.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 16, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Sanjana Sinha, Sandika Biswas, Brojeshwar Bhowmick
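A small Python sketch of two steps from the 11176724 abstract: person-independent landmark motion is retargeted onto a person-specific neutral face (preserving identity-related structure), and an eye blink is imposed on the eyelid landmarks. The 2D landmark layout and blink model are simplifications, not the patented method.

```python
def retarget_landmarks(generic_landmarks, generic_neutral, person_neutral):
    """Transfer the motion (offset from neutral) of person-independent landmarks onto
    a person-specific neutral face so identity-related structure is preserved."""
    return [(px + gx - nx, py + gy - ny)
            for (gx, gy), (nx, ny), (px, py)
            in zip(generic_landmarks, generic_neutral, person_neutral)]

def impose_blink(landmarks, eyelid_indices, closure):
    """Close each (upper, lower) eyelid landmark pair by the given amount
    (0 = open, 1 = fully closed)."""
    out = list(landmarks)
    for upper, lower in eyelid_indices:
        ux, uy = out[upper]
        lx, ly = out[lower]
        mid_y = (uy + ly) / 2.0
        out[upper] = (ux, uy + (mid_y - uy) * closure)
        out[lower] = (lx, ly + (mid_y - ly) * closure)
    return out

# Audio-driven generic landmarks (chin slightly dropped) retargeted to a specific
# face, then a blink is imposed on the eyelid landmark pair (0, 1).
generic = [(0.0, 0.40), (0.0, 0.46), (0.0, 0.05)]      # upper lid, lower lid, chin
neutral = [(0.0, 0.40), (0.0, 0.46), (0.0, 0.00)]
person = [(0.0, 0.42), (0.0, 0.50), (0.0, 0.02)]
print(impose_blink(retarget_landmarks(generic, neutral, person), [(0, 1)], closure=1.0))
```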
  • Patent number: 11170740
    Abstract: A technique for selecting locations of tear lines when displaying visual content. The technique includes receiving coordinates for one or more portions of a display where a tear is permitted and determining if a frame transition is to occur while rendered content is being scanned out for display within the one or more portions of the display where tear is permitted. If the frame transition is to occur while the scanline for the display is in the one or more portions of the display where tear is permitted, then the technique further includes allowing the frame transition to occur. If the frame transition is to occur while the scanline for the display is not in the one or more portions of the display where tear is permitted, then the technique further includes delaying the frame transition until at least when the scanline for the display is in the one or more portions of the display where tear is permitted.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: November 9, 2021
    Assignee: NVIDIA Corporation
    Inventors: Radhika Ranjan Soni, Gaurav Singh
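A tiny Python sketch of the tear-line policy in the 11170740 abstract: a frame transition is allowed immediately if the scanout position lies inside a region where tearing is permitted, and is otherwise delayed. The scanline ranges are hypothetical.

```python
def handle_frame_transition(scanline, permitted_ranges):
    """Allow the frame transition (buffer flip) only while the scanout position is in
    a region where a tear is permitted; otherwise delay it."""
    for start, end in permitted_ranges:
        if start <= scanline <= end:
            return "flip_now"
    return "delay_until_permitted_region"

# Tearing is allowed only near the top and bottom of the panel; a transition
# arriving mid-frame is deferred until the scanline re-enters a permitted region.
permitted = [(0, 120), (960, 1080)]
print(handle_frame_transition(100, permitted))   # -> flip_now
print(handle_frame_transition(540, permitted))   # -> delay_until_permitted_region
```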
  • Patent number: 11158128
    Abstract: A system and method may provide for spatial and semantic auto-completion of an augmented or mixed reality environment. The system may detect physical objects in a physical environment based on analysis of image frames captured by an image sensor of a computing device. The system may detect spaces in the physical environment that are occupied by the detected physical objects, and may detect spaces that are unoccupied in the physical environment. Based on the identification of the detected physical objects, the system may gain a semantic understanding of the physical environment, and may determine suggested objects for placement in the physical environment based on the semantic understanding. The system may place virtual representations of the suggested objects in a mixed reality scene of the physical environment for user consideration.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: October 26, 2021
    Assignee: GOOGLE LLC
    Inventors: Roza Chojnacka, Meltem Oktem, Rajan Patel, Uday Idnani, Xiyang Luo
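An illustrative Python sketch of the auto-completion idea in the 11158128 abstract: detected object labels give a semantic picture of the room, suggested objects are chosen from that picture, and each suggestion is paired with an unoccupied space that can hold it. The rule table, footprints, and space representation are assumptions, not the patented models.

```python
# Suggestions conditioned on a semantic understanding of what is already in the room.
SUGGESTION_RULES = {
    frozenset({"desk", "chair"}): ["desk lamp", "monitor"],
    frozenset({"sofa"}): ["coffee table", "floor lamp"],
}

def suggest_objects(detected_labels, unoccupied_spaces):
    """Pick suggested objects for the detected scene and pair each with an
    unoccupied space large enough to hold it (footprints in square metres)."""
    footprints = {"desk lamp": 0.05, "monitor": 0.15, "coffee table": 0.8, "floor lamp": 0.1}
    suggestions = []
    for required, items in SUGGESTION_RULES.items():
        if required <= set(detected_labels):
            suggestions.extend(items)
    placements = []
    spaces = sorted(unoccupied_spaces, key=lambda s: s["area_m2"])
    for item in suggestions:
        for space in spaces:
            if space["area_m2"] >= footprints[item] and not space.get("used"):
                space["used"] = True
                placements.append((item, space["name"]))
                break
    return placements

# A scene containing a desk and chair gets a lamp and a monitor placed into the
# free surfaces of the mixed reality scene.
print(suggest_objects(["desk", "chair", "plant"],
                      [{"name": "desk_surface", "area_m2": 0.6},
                       {"name": "corner_floor", "area_m2": 1.5}]))
```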
  • Patent number: 11145118
    Abstract: Techniques for extraction of body parameters, dimensions and shape of a customer are presented herein. A three-dimensional model descriptive of a garment, a corresponding calibration factor, and reference garment shapes can be accessed. A garment shape corresponding to the three-dimensional model can be selected from the reference garment shapes based on a comparison of the three-dimensional model with the reference garment shapes. A reference feature from a plurality of reference features of the selected garment shape may be associated with a model feature of the three-dimensional model. A measurement of the reference feature may be calculated based on the association and the calibration factor. The calculated measurement can be stored in a body profile associated with a user. An avatar can be generated for the user based on the body profile and be used to show or indicate fit of a garment, as well as make fit and size recommendations.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: October 12, 2021
    Assignee: eBay Inc.
    Inventors: Jonathan Su, Mihir Naware, Jatin Chhugani
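A minimal Python sketch of the measurement step in the 11145118 abstract: model-space feature sizes are associated with reference features and scaled by the garment's calibration factor into real-world measurements stored in a body profile. Feature names and the calibration value are invented for illustration.

```python
def measure_reference_features(model_features, feature_pairs, calibration_factor):
    """Convert model-space feature sizes into real-world measurements using the
    garment's calibration factor, and collect them in a body profile."""
    body_profile = {}
    for reference_name, model_name in feature_pairs.items():
        body_profile[reference_name] = round(model_features[model_name] * calibration_factor, 1)
    return body_profile

# The garment model's chest and waist feature lengths (in model units) become
# centimetre measurements once scaled by the calibration factor; the resulting
# body profile can drive an avatar and size recommendations.
model_features = {"chest_loop": 24.0, "waist_loop": 20.5}
pairs = {"chest_cm": "chest_loop", "waist_cm": "waist_loop"}
print(measure_reference_features(model_features, pairs, calibration_factor=4.2))
# -> {'chest_cm': 100.8, 'waist_cm': 86.1}
```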