Patents Examined by Motilewa A. Good-Johnson
  • Patent number: 11417054
    Abstract: In one embodiment, a method includes displaying, for one or more displays of a virtual reality (VR) display device, a first output image comprising a passthrough view of a real-world environment. The method includes identifying, using one or more images captured by one or more cameras of the VR display device, a real-world object in the real-world environment. The method includes receiving a user input indicating a first dimension corresponding to the real-world object. The method includes automatically determining, based on the first dimension, a second dimension and a third dimension corresponding to the real-world object. The method includes rendering, for the one or more displays of the VR display device, a second output image of a VR environment. The VR environment includes a mixed-reality (MR) object that corresponds to the real-world object. The MR object is defined by the determined first, second, and third dimensions.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: August 16, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher Richard Tanner, Amir Mesguich Havilio, Michelle Pujals, Gioacchino Noris, Alessia Marra, Nicholas Wallen
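The dimension-inference step above can be sketched in a few lines. This is an illustrative assumption, not the patented method: it supposes the second and third dimensions are recovered by scaling the pixel extents of the detected object's bounding box so the first axis matches the user-supplied dimension. All names are hypothetical.

```python
# Hypothetical sketch: infer the remaining dimensions of a detected object
# from one user-supplied dimension, using the pixel extents of its bounding
# box. The proportional-scaling heuristic is an assumption for illustration.

def infer_dimensions(known_dim_m: float,
                     bbox_extents_px: tuple[float, float, float]) -> tuple[float, float, float]:
    """Scale the bounding-box pixel extents so the first axis matches the
    user-supplied real-world dimension (in meters)."""
    first_px, second_px, third_px = bbox_extents_px
    if first_px <= 0:
        raise ValueError("bounding box extent must be positive")
    meters_per_px = known_dim_m / first_px
    return (known_dim_m, second_px * meters_per_px, third_px * meters_per_px)
```

For example, a desk whose bounding box spans (200, 100, 150) pixels, with the user indicating a 1.2 m width, would yield roughly (1.2, 0.6, 0.9) meters.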
  • Patent number: 11409364
    Abstract: Disclosed herein are a system and a method for controlling a virtual reality based on a physical object. In one aspect, a shape of a hand of a user corresponding to a surface or a structure of a physical object is detected. In one aspect, according to the detected shape of the hand, an interactive feature for the surface or the structure of the physical object is generated in a virtual reality or augmented reality application. In one aspect, a user interaction with the interactive feature is detected. In one aspect, an action of the virtual reality or augmented reality application is initiated in response to detecting the user interaction with the interactive feature.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: August 9, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Qian Zhou, Kenrick Cheng-Kuo Kin
  • Patent number: 11392642
    Abstract: An image processing method includes: obtaining audio data corresponding to a reality scene image acquired in real time; dynamically determining attribute information of a virtual object according to the audio data, the attribute information indicating a visual state of the virtual object; identifying a target object from the reality scene image; determining, according to the target object, a fusion location in the reality scene image for the virtual object determined according to the attribute information; and fusing the virtual object determined according to the attribute information into the reality scene image according to the fusion location, the virtual object presenting different visual states that correspond to different attribute information dynamically determined according to the audio data.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: July 19, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jingjin Zhou, Pei Cheng, Bin Fu, Yu Gao
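The audio-to-attribute step above can be illustrated with a minimal stand-in: mapping a frame's RMS loudness to a scale factor for the virtual object. The mapping, range, and function name are assumptions for the sketch, not the claimed method.

```python
# Illustrative sketch only: derive a virtual object's visual attribute (here,
# a scale factor) from an audio frame's RMS loudness.
import math

def scale_from_audio(samples: list[float],
                     min_scale: float = 0.5, max_scale: float = 2.0) -> float:
    """Map the RMS amplitude of normalized samples (-1..1) to a scale factor."""
    if not samples:
        return min_scale
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    rms = min(rms, 1.0)  # clamp; normalized audio should not exceed 1.0
    return min_scale + (max_scale - min_scale) * rms
```

Silence maps to the minimum scale; a full-scale tone maps to the maximum, so the object visibly pulses with the audio.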
  • Patent number: 11380021
    Abstract: Disclosed herein is an image processing apparatus including a hand recognizing section configured to recognize a state of a hand of a user, an item image superimposing section configured to superimpose images of items as selection targets onto the fingers of the hand to which the items are assigned, on either an image of the hand being displayed or an image representing the hand being displayed, and a selecting operation detecting section configured to detect that one of the items is selected on the basis of a hand motion performed on the images of the items, before performing processing corresponding to the detected selection.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: July 5, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Masashi Nakata
  • Patent number: 11367365
    Abstract: A content presentation system includes: a training content distribution function 431 that presents the training content to the trainee; a work content delivery function 331 that presents a work procedure based on augmented reality; a user operation detection unit 120 that detects a three-dimensional operation of the site worker; a work record analysis function 338 that determines success or failure of the work based on the evaluation reference information; and a content creation/updating function 342 that updates the training content based on the determination result. When a predetermined body motion of the worker is detected, the work record analysis function determines whether the work has failed based on the measurement information of the model worker up to the body motion. When the work is determined to have failed, the content creation/updating function updates the training content so as to suppress the factor of the failure.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: June 21, 2022
    Assignee: Hitachi Systems, Ltd.
    Inventors: Shintaro Tsuchiya, Takayuki Fujiwara, Kentarou Oonishi, Katsuro Kikuchi, Yoshihito Narita
  • Patent number: 11360545
    Abstract: There is provided an information processing device to alleviate motion sickness caused when a video is watched, the information processing device including: a control unit configured to control an operation of a sensation presenting unit that presents a mechanical stimulation or an electric stimulation to a head or a neck of a first user on the basis of information regarding a change in a presentation video presented to the first user via a display unit.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: June 14, 2022
    Assignee: SONY CORPORATION
    Inventor: Kei Takahashi
  • Patent number: 11354867
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable presenting environments comprising visual representations of multiple applications. In one implementation, a method includes presenting a view of an environment at an electronic device on a display of the electronic device. The environment comprises visual representations corresponding to a plurality of applications. A first application among the plurality of applications is designated as an elevated application. The elevated application is provided with access to a control parameter configured to modify an ambience of the environment. Other applications of the plurality of applications are restricted from accessing the control parameter while the first application is designated as the elevated application.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: June 7, 2022
    Assignee: Apple Inc.
    Inventors: Aaron M. Burns, Alexis H. Palangie, Nathan Gitter, Pol Pla I. Conesa
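The elevated-application gating described above amounts to an access check on the control parameter. A minimal sketch, with assumed class and method names (the patent describes the behavior, not this API):

```python
# Hedged sketch: only the application currently designated as elevated may
# write the environment's ambience control parameter.

class EnvironmentControl:
    def __init__(self) -> None:
        self.elevated_app: str | None = None
        self.ambience: float = 1.0  # e.g., a lighting level

    def designate_elevated(self, app_id: str) -> None:
        self.elevated_app = app_id

    def set_ambience(self, app_id: str, value: float) -> bool:
        """Only the elevated application may modify the ambience."""
        if app_id != self.elevated_app:
            return False  # restricted: request rejected
        self.ambience = value
        return True
```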
  • Patent number: 11354896
    Abstract: A display device includes a display, a camera that captures images of an outside scene, a pointer recognizing section, a target recognizing section, and a display control section configured to display, on the display, a target related image, which is an image related to a target recognized by the target recognizing section. The display control section includes a related display section configured to display, on the display, related information related to the target related image when a movement of a pointer within a range overlapping the target related image is recognized by the pointer recognizing section, and a display processing section configured to change a display state of the target related image according to whether the target related image is present within an imaging range of the camera.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: June 7, 2022
    Inventors: Yuya Maruyama, Hideki Tanaka
  • Patent number: 11341423
    Abstract: To more suitably select an artificial intelligence, an information processing apparatus according to one embodiment of the present invention includes a user information acquisition unit and an AI selection unit. The user information acquisition unit acquires information relevant to a scene as user-related information. The AI selection unit selects an artificial intelligence from a plurality of artificial intelligences on the basis of the user-related information.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: May 24, 2022
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Kazunori Kita
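The AI-selection step above can be sketched with a toy scoring rule. The keyword-overlap scoring, the candidate tags, and the function name are invented for illustration; the patent only claims selection based on user-related information.

```python
# Minimal sketch, assumed names: choose one artificial intelligence from
# several by scoring each candidate's capability tags against the
# user-related information for the current scene.

def select_ai(user_info: set[str], candidates: dict[str, set[str]]) -> str:
    """Return the candidate AI whose tags best overlap the user-related
    information (ties resolve to the first-listed candidate)."""
    return max(candidates, key=lambda name: len(candidates[name] & user_info))
```

For example, with a "cooking" AI tagged {"kitchen", "recipe"} and a "fitness" AI tagged {"gym", "run"}, scene information containing "kitchen" selects the cooking AI.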
  • Patent number: 11335068
    Abstract: In one embodiment, a method includes generating a visual interaction tool that moves and extends in a three-dimensional artificial-reality (AR) environment according to hand and arm movements of a user. It may be detected that the visual interaction tool intersects a predefined region associated with a first virtual item of a first type in the AR environment. The visual interaction tool may attach to the first virtual item. A first operating mode for the visual interaction tool may be selected based on the first type of the first virtual item. The first operating mode may be selected from multiple operating modes for the visual interaction tool. A first input from the user may be received while the visual interaction tool is attached to the first virtual item. First operations with the first virtual item may be performed according to the first operating mode and the first input.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: May 17, 2022
    Assignee: Facebook Technologies, LLC
    Inventor: Martin Schubert
  • Patent number: 11334747
    Abstract: An augmented reality (AR) device and a method of predicting a pose in the AR device are provided. In the AR device, inertial measurement unit (IMU) values corresponding to the movement of the AR device are obtained at an IMU rate, intermediate 6-degrees-of-freedom (6D) poses of the AR device are estimated based on the IMU values and images around the AR device via a visual-inertial simultaneous localization and mapping (VI-SLAM) module, and a pose prediction model for predicting relative 6D poses of the AR device is generated by training a deep neural network.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: May 17, 2022
    Assignees: SAMSUNG ELECTRONICS CO., LTD., Korea University Research and Business Foundation
    Inventors: Yuntae Kim, Sanghoon Sull, Geeyoung Sung, Hongseok Lee, Myungjae Jeon
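The interface such a pose predictor exposes can be illustrated with a deliberately simple stand-in. This is not the patented VI-SLAM plus deep-network pipeline; it is a constant-velocity extrapolation over 6D poses (x, y, z, roll, pitch, yaw), with assumed names, shown only to make the prediction step concrete.

```python
# Stand-in for a learned pose predictor: given the two most recent 6D poses,
# extrapolate the next pose under a constant-velocity assumption.

def predict_next_pose(prev: tuple[float, ...],
                      curr: tuple[float, ...]) -> tuple[float, ...]:
    """Linearly extrapolate each of the six pose components."""
    return tuple(c + (c - p) for p, c in zip(prev, curr))
```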
  • Patent number: 11320896
    Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to provide consistently accurate controller tracking.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: May 3, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
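The hand-plus-grip step above is a transform composition: the controller pose is the hand pose composed with the relative grip transform. A minimal sketch using 4x4 homogeneous matrices as plain lists (the representation and names are assumptions):

```python
# Hedged illustration: controller pose = hand pose composed with a fixed
# hand-to-controller "grip" transform, as 4x4 homogeneous matrices.

Matrix = list[list[float]]

def matmul(a: Matrix, b: Matrix) -> Matrix:
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def controller_pose(hand_pose: Matrix, grip: Matrix) -> Matrix:
    """Compose the estimated hand pose with the relative grip transform."""
    return matmul(hand_pose, grip)
```

The IMU-based update in the abstract would then refine this first estimate; that fusion step is omitted here.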
  • Patent number: 11315308
    Abstract: A method for representing virtual information in a view of a real environment is provided that includes: providing a system setup including at least one display device, wherein the system setup is adapted for blending in virtual information on the display device in at least part of the view, determining a position and orientation of a viewing point relative to at least one component of the real environment, providing a geometry model of the real environment, providing at least one item of virtual information and a position of the at least one item of virtual information, determining whether the position of the item of virtual information is inside a 2D or 3D geometrical shape that delimits a built-in real object, determining a criterion which is indicative of whether the built-in real object is at least partially visible or non-visible in the view of the real environment, and blending in the at least one item of virtual information on the display device in at least part of the view of the real environment.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 26, 2022
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Peter Meier, Stefan Misslinger
  • Patent number: 11302047
    Abstract: In various embodiments, a storyboarding application generates a storyboard for a media title. In operation, the storyboarding application determines a categorization for a first portion of the media title. The storyboarding application then determines a first media item based on at least one of the categorization or a caption associated with the first portion of the media title. Subsequently, the storyboarding application modifies the first media item based on at least one of the categorization or a character associated with the caption to generate a second media item. The storyboarding application then generates a sequence of media items for the storyboard that includes the second media item. Advantageously, because the storyboarding application can automatically generate media items for storyboards based on categorizations and/or captions, the storyboarding application can reduce both the manual effort and time required to generate storyboards relative to prior art techniques.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: April 12, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Erika Varis Doggett, Mark Arana, Michael Goslin
  • Patent number: 11295494
    Abstract: Image modification styles learned from a limited set of modified images are described. A learned style system receives a selection of one or more modified images serving as a basis for a modification style. For each modified image, this system creates a modification memory, which includes a representation of the image content and modification parameters describing modification of this content to produce the modified image. These modification memories are packaged into style data, used to apply the modification style to input images. When applying a style, the system generates an image representation of an input image and determines measures of similarity between the input image's representation and representations of each modification memory in the style data. The system determines parameters for applying the modification style based, in part, on these similarity measures. The system modifies the input image according to the determined parameters to produce a styled image.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: April 5, 2022
    Assignee: Adobe Inc.
    Inventor: Gregg Darryl Wilensky
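The similarity-weighted parameter step above can be sketched concretely. The feature-vector representation, cosine similarity, and the simple weighted average are assumptions for illustration; the patent describes determining parameters "based, in part, on these similarity measures" without fixing this formula.

```python
# Minimal sketch (assumed representation): each modification memory pairs an
# image-feature vector with modification parameters; styling a new image
# blends the stored parameters, weighted by similarity to the input.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two feature vectors (0.0 for a zero vector)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def blend_parameters(input_feat: list[float],
                     memories: list[tuple[list[float], dict[str, float]]]) -> dict[str, float]:
    """Weight each memory's parameters by its similarity to the input;
    assumes a non-empty memory list with identical parameter keys."""
    weights = [max(cosine(input_feat, feat), 0.0) for feat, _ in memories]
    total = sum(weights) or 1.0
    keys = memories[0][1].keys()
    return {k: sum(w * params[k] for w, (_, params) in zip(weights, memories)) / total
            for k in keys}
```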
  • Patent number: 11284824
    Abstract: A system for assigning a social attribute class to a human subject in a predefined closed environment includes an image capturing component, a pose detection component configured to perform pose detection and tracking of a human subject in real-time, an action detection component configured to detect an action of the human subject, an activity detection component configured to relate a sequence of actions to detect an activity of the human subject, and a social attribute classification component. The social attribute classification component is configured to determine an average speed (s) of the human subject as a first social attribute, an interaction time of the human subject (T_int) as a second social attribute, an analysis time (T_anal) as a third social attribute, and automatically assigns a social attribute class to the human subject based on the values of the first, second and third social attributes.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: March 29, 2022
    Assignee: Everseen Limited
    Inventors: Dan Crisfalusi, Alan O'Herlihy, Joe Allen, Dan Pescaru, Cosmin Cernăzanu-Glăvan, Alexandru Arion
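The three-attribute classification above can be illustrated with a toy threshold rule. The class names and cutoffs are invented for the sketch and are not taken from the patent:

```python
# Illustrative only: assign a social attribute class from average speed (m/s),
# interaction time (s), and analysis time (s) using simple thresholds.

def classify(avg_speed: float, t_interact: float, t_analyze: float) -> str:
    """Toy rule: long analysis plus long interaction suggests a deliberate
    subject; high speed with little interaction suggests a passer-by."""
    if t_analyze > 10.0 and t_interact > 5.0:
        return "deliberate"
    if avg_speed > 1.5 and t_interact < 1.0:
        return "passer-by"
    return "casual"
```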
  • Patent number: 11263803
    Abstract: The present disclosure provides a method, apparatus and device for rendering virtual reality scenes. The method includes: obtaining a virtual reality scene and determining whether the virtual reality scene is in a rendering idle state; if the virtual reality scene is in the rendering idle state, performing image rendering on the virtual reality scene to generate a display image and store a correspondence between the display image and a display area; and obtaining a target area to be displayed of the virtual reality scene, calling a target display image corresponding to the target area according to the correspondence and displaying the target display image.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: March 1, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE Technology Group Co., Ltd.
    Inventors: Yadong Ding, Jian Sun, Ziqiang Guo, Lin Lin, Feng Zi, Bingxin Liu, Jiyang Shao, Yakun Wang, Binhua Sun
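The idle-state rendering flow above is essentially a pre-populated cache keyed by display area. A minimal sketch under assumed names (the patent describes the correspondence store, not this API):

```python
# Sketch: pre-render display images for scene areas while the renderer is
# idle, store the area-to-image correspondence, and serve the cached image
# when that area is requested; fall back to on-demand rendering on a miss.

class IdleRenderCache:
    def __init__(self) -> None:
        self._cache: dict[str, bytes] = {}

    def prerender(self, area: str, render_fn) -> None:
        """Called only when the scene is in a rendering-idle state."""
        self._cache[area] = render_fn(area)

    def display(self, area: str, render_fn) -> bytes:
        """Return the cached image if present; otherwise render on demand."""
        if area not in self._cache:
            self._cache[area] = render_fn(area)
        return self._cache[area]
```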
  • Patent number: 11257280
    Abstract: Elements in an artificial reality environment (e.g., objects or volumes) can be assigned different ray casting rules. In response to detecting a corresponding trigger, such as the user entering the volume or interacting with the object, the ray casting rules associated with that element can be implemented. Implementing the ray casting rules can control aspects of the ray such as the ray's shape, size, effects of the ray, where a ray originates, whether the ray is directed along a particular plane, or how rays are controlled. In some cases, an artificial reality system can cast multiple rays at the same time, which are controlled by the same feature of a user. Using priority rules (e.g., weighting factors, hierarchies, filters, etc.), the artificial reality system can determine which ray is primary, allowing the user to use the primary ray to interact with elements.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: February 22, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Owen Pedrotti, Gayan Ediriweera, Brandon Furtwangler
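The primary-ray selection above can be sketched with a weight-based priority rule. The per-ray weight field is an assumed representation standing in for the patent's broader "weighting factors, hierarchies, filters":

```python
# Hedged sketch: when several rays are cast at once, pick the primary ray by
# the highest priority weight; ties resolve to the first-listed ray.

def primary_ray(rays: list[dict]) -> dict:
    """Each ray dict carries an "id" and a priority "weight"."""
    return max(rays, key=lambda r: r["weight"])
```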
  • Patent number: 11250630
    Abstract: Immersive, dynamic storytelling functionality is described. The stories may include elements (e.g., characters, settings, duration, etc.) based on information provided, including in real time, by the user or presentation environment, and may be presented by projecting visual or audio story elements into the space surrounding the user. For example, as a child tells a story about a jungle, the room is filled with images of foliage. Animals that live in the jungle may be suggested as characters. Stories may be contextually tailored based on information about the user, environment, storytelling conditions, or other context. For example, story duration and excitement level may be influenced by the time of day, such that a story presented ten minutes before bedtime is an appropriate duration and excitement level. In some cases, objects of the presentation environment are incorporated into the story, such as a character projected as though entering through an actual doorway.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: February 15, 2022
    Assignee: HALLMARK CARDS, INCORPORATED
    Inventors: Randy S. Knipp, Kevin M. Brooks, Stephen Richard Eikos, Jason Blake Penrod, Jeffrey Alan Jones, Tim P. Patch, Timothy J. Lien
  • Patent number: 11250604
    Abstract: In one embodiment, a method of presenting a computer-generated reality (CGR) file includes receiving a user input to present a CGR scene including one or more CGR objects, wherein the CGR scene is associated with a first anchor and a second anchor. The method includes capturing an image of a physical environment and determining that the image of the physical environment lacks a portion corresponding to the first anchor. The method includes detecting a portion of the image of the physical environment corresponding to the second anchor. The method includes, in response to determining that the image of the physical environment lacks a portion corresponding to the first anchor and detecting a portion of the image of the physical environment corresponding to the second anchor, displaying the CGR scene at a location of the display corresponding to the second anchor.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: February 15, 2022
    Assignee: Apple Inc.
    Inventors: Tyler Casella, David Lui, Norman Nuo Wang, Xiao Jin Yu