Augmented Reality (real-time) Patents (Class 345/633)
  • Patent number: 11720896
    Abstract: Computer-implemented methods, apparatuses, and computer program products are disclosed for proximate financial transactions. An example method includes receiving a request for participation in a proximate financial transaction by a first user device associated with a first user and a first user profile where the proximate financial transaction is associated with a transfer of physical currency notes. The method further includes determining at least a second user device associated with a second user and second user profile proximate the first user for participation in the proximate financial transaction. The method also includes causing presentation of identification data of the second user via the first user device and detecting the transfer of physical currency notes between the first user and the second user. The method further includes effectuating an electronic financial transaction between a first user account of the first user and a second user account of the second user.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: August 8, 2023
    Assignee: Wells Fargo Bank, N.A.
    Inventor: Chris Theodore Kalaboukis
  • Patent number: 11721039
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame. (A minimal sketch of this decomposition follows this entry.)
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Jianing Wei, Matthias Grundmann
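A rough way to picture the decomposition described in patent 11721039 above: per frame, combine a device-rotation estimate (e.g., integrated from gyroscope samples) with a translation estimate of the anchor region in camera space to get a pose at which to render the virtual content. This is a minimal sketch; the function and variable names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def compose_anchor_pose(rotation_cam_to_world: np.ndarray,
                        anchor_translation_cam: np.ndarray) -> np.ndarray:
    """Build a 4x4 pose for rendering virtual content at a tracked anchor.

    rotation_cam_to_world: 3x3 rotation of the capture device (e.g., integrated
        from gyroscope samples), expressing camera axes in world coordinates.
    anchor_translation_cam: 3-vector, anchor position relative to the camera
        (e.g., from tracking the anchor region's apparent motion and scale).
    """
    pose = np.eye(4)
    pose[:3, :3] = rotation_cam_to_world
    pose[:3, 3] = rotation_cam_to_world @ anchor_translation_cam
    return pose

# Per-frame usage: update rotation and anchor translation, then render.
rotation = np.eye(3)                           # placeholder device rotation
anchor_in_camera = np.array([0.0, 0.0, 1.5])   # anchor 1.5 m in front of the camera
print(compose_anchor_pose(rotation, anchor_in_camera))
```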
  • Patent number: 11721078
    Abstract: A technology for reducing a processing delay when a plurality of mixed reality spaces are shared between terminals is provided. An information processing system includes a terminal configured to display an image representing a mixed reality space obtained by superimposing a virtual space in a real space, and a server apparatus configured to communicate with the terminal. The server apparatus manages, for each of a plurality of real spaces, information for identifying the real space and anchor information for defining a superimposition position of the virtual space in the real space and, in a case where an acquisition request for anchor information corresponding to a first real space is received from the terminal, transmits response information including anchor information corresponding to the first real space and anchor information corresponding to a second real space adjacent to the first real space to the terminal.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: August 8, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kazuya Matsuo, Koya Mori, Hiroyuki Tanaka
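A minimal sketch of the server-side behavior described in patent 11721078 above: when a terminal requests anchor information for one real space, the response also bundles anchors for adjacent spaces, so the terminal avoids a second round trip when the user crosses a boundary. The data layout and adjacency map below are assumptions for illustration only.

```python
# Anchor info per real space: id -> data defining where the virtual space is
# superimposed. Adjacency lists model which real spaces border which.
ANCHORS = {
    "room-a": {"anchor": (0.0, 0.0, 0.0)},
    "room-b": {"anchor": (5.0, 0.0, 0.0)},
    "room-c": {"anchor": (0.0, 5.0, 0.0)},
}
ADJACENT = {"room-a": ["room-b", "room-c"], "room-b": ["room-a"], "room-c": ["room-a"]}

def handle_anchor_request(space_id: str) -> dict:
    """Return anchor info for the requested space plus its adjacent spaces."""
    response = {space_id: ANCHORS[space_id]}
    for neighbor in ADJACENT.get(space_id, []):
        response[neighbor] = ANCHORS[neighbor]
    return response

print(handle_anchor_request("room-a"))  # includes room-b and room-c anchors
```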
  • Patent number: 11719936
    Abstract: A head-mounted display may include a display system and an optical system that are supported by a housing. The optical system may be a catadioptric optical system having one or more lens elements. In one example, the optical system includes a single lens element and a retarder that is coated on a curved surface of the lens element. The retarder may be coated on an aspheric concave surface of the lens element. In another example the retarder may be coated on an aspheric convex surface of the lens element. One or more components of the optical system may be formed using a direct printing technique. This may allow for one or more adhesive layers and one or more hard coatings to be omitted from the optical system. A lens element may be directly printed on the display system to improve alignment between the optical system and the display system.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: August 8, 2023
    Assignee: Apple Inc.
    Inventors: Ran He, Zuoqian Wang, Brent J. Bollman, Francois R. Jacob, Guanjun Tan, John N. Border, Serhan O. Isikman, Wei-Liang Hsu, Di Liu
  • Patent number: 11721080
    Abstract: An augmented reality system to generate and cause display of a presentation of a space at a first client device, receive one or more selections of points within the presentation of the space at the first client device, and render graphical elements at the one or more points within the presentation of the space at the first client device. The augmented reality system is further configured to receive a display request to display the space at a second client device, and in response, may render a second presentation of the space at the second client device, wherein the second presentation of the space includes the graphical elements at the one or more points.
    Type: Grant
    Filed: April 26, 2022
    Date of Patent: August 8, 2023
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, Qi Pan, Isac Andreas Müller Sandvik
  • Patent number: 11721084
    Abstract: An electronic device includes a communication interface and a processor, wherein the processor receives a first image predistorted and rendered at a first time, from an external device through the communication interface, calculates a pixel shift for each pixel of the received first image at a second time that is different from the first time, generates a second image by reprojecting the first image based on the calculated pixel shift, and transmits the generated second image to the external device through the communication interface.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: August 8, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Namseop Kwon, Myungjae Jeon, Hongseok Lee
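The reprojection step in patent 11721084 above can be pictured as follows: given a frame rendered at a first time and a per-pixel shift computed for a second time, resample the image so each output pixel pulls from its shifted source location. This numpy sketch assumes a simple integer shift field; the actual device presumably works on predistorted images with fractional shifts.

```python
import numpy as np

def reproject(image: np.ndarray, shift: np.ndarray) -> np.ndarray:
    """Reproject an HxW image using an HxWx2 per-pixel (dy, dx) shift field."""
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(ys - shift[..., 0].astype(int), 0, h - 1)
    src_x = np.clip(xs - shift[..., 1].astype(int), 0, w - 1)
    return image[src_y, src_x]

frame = np.random.rand(4, 6)
shift_field = np.ones((4, 6, 2))   # shift everything by one pixel down and right
print(reproject(frame, shift_field))
```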
  • Patent number: 11720171
    Abstract: In some embodiments, an electronic device navigates between user interfaces based at least on detecting a gaze of the user. In some embodiments, an electronic device enhances interactions with control elements of user interfaces. In some embodiments, an electronic device scrolls representations of categories and subcategories in a coordinated manner. In some embodiments, an electronic device navigates back from user interfaces having different levels of immersion in different ways.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: August 8, 2023
    Assignee: Apple Inc.
    Inventors: Israel Pastrana Vicente, Jay Moon, Jesse Chand, Jonathan R. Dascola, Jeffrey M. Faulkner, Pol Pla I Conesa, Dorian D. Dargan
  • Patent number: 11721074
    Abstract: A display control method includes performing plane detection on a physical environment within a front field of view (FOV) of an augmented reality (AR) device, determining a first object in the physical environment, determining a display position of a second object based at least on the first object and a current FOV of the AR device, and displaying the second object at the display position. The display position includes a display height and a display distance.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: August 8, 2023
    Assignee: LENOVO (BEIJING) LIMITED
    Inventors: Yingling Luo, Bin Cui
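One way to read the method in patent 11721074 above: choose a display height and distance for the second (virtual) object from the detected first object and the current field of view, clamping the distance so the object stays fully visible. The geometry below is an assumed simplification, not the patent's actual computation.

```python
import math

def display_position(first_object_height: float,
                     fov_vertical_deg: float,
                     preferred_distance: float) -> tuple[float, float]:
    """Return (display_height, display_distance) for the virtual object.

    Keeps the object at the first object's height, but pushes it no closer than
    the distance at which that height still fits in the current vertical FOV.
    """
    half_fov = math.radians(fov_vertical_deg / 2.0)
    min_distance = first_object_height / (2.0 * math.tan(half_fov))
    display_distance = max(preferred_distance, min_distance)
    return first_object_height, display_distance

print(display_position(first_object_height=0.8, fov_vertical_deg=40.0,
                       preferred_distance=1.0))
```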
  • Patent number: 11714281
    Abstract: Disclosed herein is an image generation device including: a space construction section that constructs a virtual world to be displayed on a head-mounted display; an image generation section that generates, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual world in a field of view corresponding to a point of view of the player, and causes the head-mounted display to display the generated display image; and a visitor information acquisition section that acquires information regarding presence of a visitor without a head-mounted display in a space where the player is able to move. While the visitor is present, the space construction section displays an object indicating the presence of the visitor at a corresponding position in the virtual world.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: August 1, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Junichi Muramoto
  • Patent number: 11710039
    Abstract: Described are system, method, and computer-program product embodiments for developing an object detection model. The object detection model may detect a physical object in an image of a real world environment. A system can automatically generate a plurality of synthetic images. The synthetic images can be generated by randomly selecting parameters of the environmental features, camera intrinsics, and a target object. The system may automatically annotate the synthetic images to identify the target object. In some embodiments, the annotations can include information about the target object determined at the time the synthetic images are generated. The object detection model can be trained to detect the physical object using the annotated synthetic images. The trained object detection model can be validated and tested using at least one image of a real world environment. The image(s) of the real world environment may or may not include the physical object.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: July 25, 2023
    Assignee: PricewaterhouseCoopers LLP
    Inventors: Timothy Marco, Joseph Voyles, Kyungha Lim, Kevin Paul, Vasudeva Sankaranarayanan
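The pipeline in patent 11710039 above amounts to domain randomization: sample environment, camera-intrinsic, and target-object parameters at random, render a synthetic image, and record the target's bounding box as an annotation at generation time. A sketch under those assumptions; the renderer call is a hypothetical placeholder, not a real API.

```python
import random

def sample_scene_parameters() -> dict:
    """Randomly select environment, camera-intrinsic, and object parameters."""
    return {
        "lighting": random.uniform(100, 1000),          # lux
        "background": random.choice(["office", "warehouse", "outdoor"]),
        "focal_length": random.uniform(20, 55),          # mm
        "object_pose": [random.uniform(-1, 1) for _ in range(6)],
    }

def generate_annotated_sample(render_fn) -> dict:
    """Render one synthetic image and annotate it with the known target box."""
    params = sample_scene_parameters()
    image, target_bbox = render_fn(params)   # ground truth is known at render time
    return {"image": image, "bbox": target_bbox, "params": params}

# Stand-in renderer so the sketch runs without a graphics engine.
fake_render = lambda p: ("<image bytes>", (10, 20, 64, 64))
print(generate_annotated_sample(fake_render)["bbox"])
```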
  • Patent number: 11710111
    Abstract: Methods and systems are disclosed for collecting and releasing virtual objects between disparate augmented reality environments. One method comprises receiving user selection to collect an object in a first environment displayed on a user device at a first time. A search request may then be transmitted to a content server from which a virtual object corresponding to the collected object is received in response. The virtual object may be stored in a user library associated with a user identifier of the user. When a second environment is displayed on the user device, the virtual object may be added to the second environment in response to receiving user selection to add the virtual object and determining that the user is associated with the user identifier.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: July 25, 2023
    Assignee: Worldpay Limited
    Inventors: Kevin Gordon, Charlotte Spender
  • Patent number: 11710279
    Abstract: A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: July 25, 2023
    Assignee: RPX Corporation
    Inventor: Brian Mullins
  • Patent number: 11709541
    Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting first content representing a virtual reality setting on a display of an electronic device. Using an input device of the electronic device, input is received representing a request to present a view corresponding to a physical setting in which the electronic device is located. In accordance with receiving the input, the first content is simultaneously presented on the display with second content representing the view corresponding to the physical setting obtained using an image sensor of the electronic device.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: July 25, 2023
    Assignee: Apple Inc.
    Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
  • Patent number: 11703947
    Abstract: A method for rendering computer graphics based on saccade detection is provided. One embodiment of the method includes rendering a computer simulated scene for display to a user, detecting an onset of a saccade that causes saccadic masking in an eye movement of the user viewing the computer simulated scene, and reducing a computing resource used for rendering frames of the computer simulated scene during at least a portion of a duration of the saccade. Systems perform similar steps, and non-transitory computer readable storage mediums each storing one or more computer programs are also provided.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: July 18, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Dominic Saul Mallinson
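A toy version of the idea in patent 11703947 above: estimate angular eye velocity from gaze samples, flag a saccade when it crosses a threshold (during which saccadic masking suppresses perception), and reduce render quality for the frames inside that window. The threshold and quality scaling are assumptions.

```python
SACCADE_VELOCITY_DEG_PER_S = 100.0   # assumed onset threshold

def angular_velocity(prev_gaze_deg, curr_gaze_deg, dt_s):
    """Approximate angular eye speed between two gaze samples (degrees/second)."""
    dx = curr_gaze_deg[0] - prev_gaze_deg[0]
    dy = curr_gaze_deg[1] - prev_gaze_deg[1]
    return (dx * dx + dy * dy) ** 0.5 / dt_s

def render_scale(prev_gaze_deg, curr_gaze_deg, dt_s):
    """Return a resolution scale: reduced while a saccade is in progress."""
    if angular_velocity(prev_gaze_deg, curr_gaze_deg, dt_s) > SACCADE_VELOCITY_DEG_PER_S:
        return 0.5   # cheaper rendering during saccadic masking
    return 1.0

print(render_scale((0.0, 0.0), (2.0, 0.0), dt_s=0.008))  # ~250 deg/s -> 0.5
```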
  • Patent number: 11703923
    Abstract: A wearable display device is disclosed and includes a main body and a driving module. The main body includes a frame, two temple arms and at least one monitor. The two temple arms are respectively connected with two ends of the frame, and the monitor is disposed on the frame. The driving module is disposed within the frame and includes a microprocessor, an optical display module and a heat dissipation component. The optical display module is electrically coupled with the microprocessor and configured for displaying an optical image on the at least one monitor. The heat dissipation component includes a heat dissipation base and two heat pipes. The two heat pipes are disposed on the heat dissipation base adjacent to the microprocessor. When the heat generated by the microprocessor is conducted to the heat dissipation base, the two heat pipes perform heat exchange with the heat dissipation base.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: July 18, 2023
    Assignee: MICRO JET TECHNOLOGY CO., LTD.
    Inventors: Hao-Jan Mou, Ta-Wei Hsueh, Yu-Tzu Chen, Shou-Cheng Cheng, Chi-Feng Huang, Yung-Lung Han, Tsung-I Lin, Wei-Ming Lee, Chin-Wen Hsieh
  • Patent number: 11699271
    Abstract: Example systems, devices, media, and methods are described for presenting a virtual experience using the display of an eyewear device in augmented reality. A content delivery application implements and controls the detecting of beacons broadcast from beacon transmitters deployed at fixed locations and determining the current eyewear location based on the detected beacons. The method includes retrieving content and presenting a virtual experience based on the retrieved content, the beacon data, and a user profile. The virtual experience includes playing audio messages, presenting text on the display, playing video segments on the display, and combinations thereof. In addition to wireless detection of beacons, the method includes scanning and decoding a beacon activation code positioned near the beacon transmitter to access a beacon.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: July 11, 2023
    Assignee: Snap Inc.
    Inventors: Ashwani Arya, Alex Feinman
  • Patent number: 11700286
    Abstract: Embodiments described herein enable a teleconference among host-side user(s) at a host site and remotely located client-side user(s), wherein a host device is located at the host site, and wherein each of the client-side user(s) uses a respective client device to participate in the teleconference. A host site audio-visual feed is received from the host device. A client data feed is received from the client device of each client-side user. Orientation information for the respective client-side user using a client device is also received. Each client device is provided with the host site audio-visual feed or a modified version thereof. The host device is provided with, for each client device, the client data feed and the orientation information of the client-side user that is using the client device. This enables host-side and client-side users to display visual representations of one another with their respective orientations.
    Type: Grant
    Filed: January 18, 2022
    Date of Patent: July 11, 2023
    Assignee: Avatour Technologies, Inc.
    Inventors: Devon Copley, Prasad Balasubramanian
  • Patent number: 11691073
    Abstract: A system includes a mobile device having one or more cameras to take images; a sensor detecting reflected light from one or more lasers and a diffuser to detect object range or dimension; code for motion tracking, environmental understanding by detecting planes in an environment, and estimating light and dimensions of the surrounding based on the one or more lasers; code to estimate a three-dimensional (3D) volume of an object from multiple perspectives and from projected laser beams to measure size or scale and determine locations of points on the object's surface in a plane or a slice using time-of-flight, wherein positions and cross-sections for different slices are correlated to construct a 3D model of the object, including object position and shape; the device receiving user request to select a content from one or more augmented, virtual, or extended reality contents and rendering a reality view of the environment.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: July 4, 2023
    Inventor: Naveen Kumar
  • Patent number: 11692331
    Abstract: A work vehicle including: a plurality of control valves configured to selectively convey hydraulic fluid to a hydraulic machine, a plurality of controllers connected to said control valves, a terminal including a display screen displaying graphics associated with said valves, and a setting button configured to selectively modify a connection between the control valves and the controllers and to modify the graphics based on changes to the connection.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: July 4, 2023
    Inventor: Jean-Louis Burnier
  • Patent number: 11695908
    Abstract: There is provided an information processing apparatus, information processing method, and recording medium that each allow a user to recognize a border of a virtual space without breaking the world view of the virtual space. The information processing apparatus includes a control unit that tracks a motion of a user to present an image of a virtual space to the user, and performs distance control to increase a distance between a viewpoint of the user and a border region in the virtual space while an operation of the user coming closer toward the border region is being inputted. The border region is fixed at a specific position in the virtual space.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: July 4, 2023
    Assignee: SONY CORPORATION
    Inventors: Kei Takahashi, Tsuyoshi Ishikawa, Ryouhei Yasuda
  • Patent number: 11694409
    Abstract: A split-architecture for rendering and warping world-locked AR elements, such as graphics in a navigation application, for display on augmented reality (AR) glasses is disclosed. The split-architecture can help to alleviate a resource burden on the AR glasses by performing the more complex processes associated with the rendering and warping on a computing device, while performing the less complex processes associated with the rendering and warping on the AR glasses. The AR glasses and the computing device are coupled via wireless communication, and the disclosed systems and methods address the large and variable latencies associated with the wireless communication that could otherwise make splitting these processes impractical.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: July 4, 2023
    Assignee: GOOGLE LLC
    Inventors: Sazzadur Rahman, Konstantine Nicholas John Tsotsos, João Manuel de Castro Afonso, José Carlos Maranhão Pascoal, Ivo Duarte, Nuno Cruces
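The split described in patent 11694409 above can be sketched as: the companion computing device does the expensive render of a world-locked element using a predicted head pose, and the glasses apply a cheap late-stage warp using the freshest pose to hide wireless latency. The 2D shift warp below is a deliberately crude stand-in for the real correction, with assumed names throughout.

```python
import numpy as np

def render_on_phone(element: np.ndarray, predicted_yaw_deg: float) -> tuple:
    """Expensive step (computing device): render for a predicted head yaw."""
    return element, predicted_yaw_deg

def warp_on_glasses(rendered: np.ndarray, rendered_yaw_deg: float,
                    latest_yaw_deg: float, px_per_deg: float = 10.0) -> np.ndarray:
    """Cheap step (AR glasses): shift the rendered image by the yaw error."""
    shift_px = int(round((latest_yaw_deg - rendered_yaw_deg) * px_per_deg))
    return np.roll(rendered, -shift_px, axis=1)

frame, yaw_used = render_on_phone(np.arange(12).reshape(3, 4), predicted_yaw_deg=10.0)
print(warp_on_glasses(frame, yaw_used, latest_yaw_deg=10.2))
```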
  • Patent number: 11693090
    Abstract: This document describes “Multi-domain Neighborhood Embedding and Weighting” (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data involving multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both static geometric and dynamic feature domains. A geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to obtain outstanding performance with sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: July 4, 2023
    Assignee: Aptiv Technologies Limited
    Inventors: Yang Zheng, Izzat H. Izzat
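The weighting idea in the MNEW abstract above, reduced to a single neighborhood: combine geometric distance, feature similarity, and local sparsity into per-neighbor weights that are then reapplied to the point features. The exact combination function below is an assumption; the patent only states that these quantities are transformed into adaptive weighting factors.

```python
import numpy as np

def neighborhood_weights(center_xyz, neighbor_xyz, center_feat, neighbor_feat):
    """Compute adaptive weights for one point's neighbors.

    Mixes geometric distance, feature similarity, and local sparsity
    (mean neighbor distance) into softmax-normalized weights.
    """
    geo_dist = np.linalg.norm(neighbor_xyz - center_xyz, axis=1)
    feat_sim = neighbor_feat @ center_feat / (
        np.linalg.norm(neighbor_feat, axis=1) * np.linalg.norm(center_feat) + 1e-8)
    sparsity = geo_dist.mean()                       # larger => sparser neighborhood
    score = feat_sim - geo_dist / (sparsity + 1e-8)  # assumed combination
    weights = np.exp(score - score.max())
    return weights / weights.sum()

center = np.zeros(3)
neighbors = np.random.rand(8, 3)
print(neighborhood_weights(center, neighbors, np.ones(4), np.random.rand(8, 4)))
```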
  • Patent number: 11681488
    Abstract: The system provides multiple locations with specialized video projector/camera pairs connected by a communication network and real-time video processing services to facilitate distributed collaboration in a shared workspace. Each location will have local objects unique to that location, and all locations will receive a combined video composite stream bringing all remote local objects into a shared collaborative space. The system according to the present invention overcomes the effect of video echo, also referred to as infinite images, by compositing image data from other workstations that relates only to real content on the working surface of the workstation and not to projected content.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: June 20, 2023
    Assignee: INTERNATIONAL DATACASTING CORP.
    Inventors: Luke Kennedy, Rodney Allan
  • Patent number: 11678138
    Abstract: Described is a location-based interaction system. The system includes a server having a memory storing user data and a user computing device coupled to the server. The server may be programmed to receive and process a signal that user computing devices coupled to external devices have accessed the system and are within a predetermined proximity of each other. The system operates to send alert signals to the external devices, wherein the alert signals may be visual, audio or haptic. The system allows users to communicate through the use of external devices and/or through a communication interface.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: June 13, 2023
    Inventor: Charles Isgar
  • Patent number: 11677928
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11677833
    Abstract: Methods, non-transitory computer readable media, and collaborative computing apparatus that establish a collaborative session for visualizing and interacting with a three-dimensional object in a collaborative augmented reality environment between two or more of a plurality of computing devices. Position and orientation information of each of the two or more of the plurality of computing devices is obtained. An interaction instruction with respect to the three-dimensional object from a first of the two or more of the plurality of computing devices is received. Instructions for adjusting visualization of the three-dimensional object on each of the other of the two or more of the plurality of computing devices are determined and provided based on the received interaction instruction and the obtained position and orientation information of each of the two or more of the plurality of computing devices.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: June 13, 2023
    Assignee: KAON INTERACTIVE
    Inventors: Gavin Finn, Joshua Smith, Anatoly Dedkov
  • Patent number: 11677923
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11676345
    Abstract: A device is fitted with a camera and an extended reality (XR) software application program executing on a processor. Via the XR software application program, a technique is performed for automating adaptive workflows in the XR environment. In the technique, the XR software application program determines an identifier of an asset in the XR environment. The XR software application program sends to a data intake and query system a request associated with a playbook having one or more execution tasks associated with the asset. The XR software application program receives the playbook and generates an XR object associated with an execution task in the playbook. The XR software application program causes the XR object to be displayed at a location in the XR environment corresponding to a determined location, relative to the asset, of a portion of the asset with which the execution task is associated.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: June 13, 2023
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Jesse Chor, Seunghee Han, Jamie Kuppel, Sammy Lee, Stanislav Yazhenskikh, Jim Jiaming Zhu
  • Patent number: 11670060
    Abstract: Methods, systems, and storage media for auto-generating an artificial reality environment based on access to personal user content are disclosed. Exemplary implementations may: receive consent from a user to access user content on a user device, the user content comprising digital media; generate a user profile based at least in part on the user content; determine user preferences based at least in part on the user profile; generate an artificial reality environment based at least in part on the user preferences; and share the artificial reality environment with contacts of the user.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: June 6, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Tiffany Madruga, Allison Fu, Meaghan Fitzgerald, Geeti Arora, Rachel Cross
  • Patent number: 11670056
    Abstract: Methods, systems, and computer program products are described for obtaining, from a first tracking system, an initial three-dimensional (3D) position of an electronic device in relation to image features captured by a camera of the electronic device and obtaining, from a second tracking system, an orientation associated with the electronic device. Responsive to detecting a movement of the electronic device, the techniques include obtaining, from the second tracking system, an updated orientation associated with the detected movement of the electronic device; generating and providing a query to the first tracking system, the query corresponding to at least a portion of the image features and including the updated orientation and the initial 3D position of the electronic device; generating, for a sampled number of received position changes, an updated 3D position for the electronic device; and generating a 6-DoF pose using the updated 3D positions and the updated orientation for the electronic device.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: June 6, 2023
    Assignee: Google LLC
    Inventors: Aveek Purohit, Kan Huang
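Patent 11670056 above combines a (slower) image-based position tracker with a (faster) orientation tracker. A minimal fusion sketch: keep the most recent 3D position from the first tracker, take orientation updates from the second, and emit a 6-DoF pose whenever needed. Structure and names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PoseFuser:
    """Fuses a visual 3D position estimate with an IMU-style orientation."""
    position: tuple = (0.0, 0.0, 0.0)          # from first (image-based) tracker
    orientation: tuple = (1.0, 0.0, 0.0, 0.0)  # quaternion from second tracker

    def on_position(self, xyz):
        self.position = xyz

    def on_orientation(self, quat):
        self.orientation = quat

    def pose_6dof(self):
        return {"position": self.position, "orientation": self.orientation}

fuser = PoseFuser()
fuser.on_orientation((0.92, 0.0, 0.38, 0.0))   # device rotated; position unchanged
fuser.on_position((0.1, 0.0, -0.3))            # new visual fix arrives later
print(fuser.pose_6dof())
```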
  • Patent number: 11668934
    Abstract: Disclosed is an electronic device. In the electronic device according to the present disclosure, a central axis of a viewing angle based on an eye of a user and the central axis of the viewing angle based on a lens optical axis of a camera match each other. An electronic device according to the present disclosure may be associated with an artificial intelligence module, robot, augmented reality (AR) device, virtual reality (VR) device, and device related to 5G services.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: June 6, 2023
    Assignee: LG ELECTRONICS INC.
    Inventors: Dongyoung Lee, Seungyong Shin
  • Patent number: 11668571
    Abstract: A method for simultaneous localization and mapping (SLAM) employs dual event-based cameras. Event streams from the cameras are processed by an image processing system to stereoscopically detect surface points in an environment, dynamically compute pose of a camera as it moves, and concurrently update a map of the environment. A gradient descent based optimization may be utilized to update the pose for each event or for each small batch of events.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: June 6, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sebastien Derhy, Lior Zamir, Nathan Henri Levy
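For the per-event pose refinement in patent 11668571 above, the essence is: for each event (or small batch of events), nudge the pose to reduce a reprojection-style error against the current map by gradient descent. The 2D toy below only shows the update rule under a simplified translation-only model, not real event-camera geometry.

```python
import numpy as np

def refine_pose(pose_xy, events, map_points, lr=0.05, iters=50):
    """Gradient-descent pose update; the pose is a 2D translation for simplicity.

    Each event is assumed to correspond to a map point; the residual is the
    mismatch between the event location and the map point shifted by the pose.
    """
    pose = np.asarray(pose_xy, dtype=float)
    for _ in range(iters):
        residual = events - (map_points + pose)      # (N, 2) errors
        grad = -2.0 * residual.mean(axis=0)          # gradient of mean squared error
        pose -= lr * grad
    return pose

true_shift = np.array([0.3, -0.1])
pts = np.random.rand(50, 2)
evts = pts + true_shift
print(refine_pose([0.0, 0.0], evts, pts))            # converges toward [0.3, -0.1]
```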
  • Patent number: 11670057
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: June 6, 2023
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
  • Patent number: 11663784
    Abstract: Systems and methods for creating content in an augmented reality (AR) environment. The content is created by a combination of at least a handheld device and an AR device, and involves initiation and establishment of a session between the handheld device and the AR device, command and data generation at the handheld device, transmission of the command and data from the handheld device to the AR device, processing of the command and data at the AR device to build context from the command and/or data and to create content from the context, saving and sharing of the created content, retrieval of the created content, and optionally modification of the retrieved content.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: May 30, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Varun Nigam, Veenu Pandey
  • Patent number: 11662593
    Abstract: One or more embodiments of the present disclosure may include: a transparent member; a housing coupled to the transparent member in a rotatable manner via a hinge portion, such that the housing is foldable in a designated direction with respect to the transparent member; a projector at least partially disposed in the housing; and an optical transferring member configured to guide light emitted from the projector to the transparent member when the housing is unfolded with respect to the transparent member in an unfolded state.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: May 30, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yunguk Lee, Yoonseok Kang
  • Patent number: 11662829
    Abstract: Techniques for modifying a garment based on gestures are presented herein. An access module can access a first set of sensor data from a first sensor, and a second set of sensor data from a second sensor. A garment simulation module can generate a three-dimensional (3D) garment model of a garment available for sale draped on an avatar based on the first set of sensor data and the second set of sensor data. A display module can cause a presentation, on a display of a device, of the 3D garment model draped on the avatar. Additionally, the garment simulation module can determine a modification gesture associated with the 3D garment model draped on the avatar based on the first set of sensor data and the second set of sensor data. Furthermore, the garment simulation module can modify the 3D garment model based on the determined modification gesture.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: May 30, 2023
    Assignee: eBay Inc.
    Inventors: Kyle Smith Rose, Pooja Sapra, Vivienne Melody Blue, Chuck Barnum, Giridhar Singam, Chris Miller, Rachel Maxine Minenno, James Stephen Perrine
  • Patent number: 11665373
    Abstract: A live event virtual experience is provided. Emotions of virtual spectators to a current situation occurring in a live event at a physical venue are determined from received data regarding reactions of the virtual spectators to the live event using bias detection. A historical sound clip that matches the emotions of the virtual spectators to the current situation occurring in the live event is retrieved. The historical sound clip that matches the emotions of the virtual spectators to the current situation occurring in the live event is input into a machine learning model that performs inverse bias mitigation to amplify bias and applies in-process adversarial fairness debiasing. The historical sound clip after performing the inverse bias mitigation to amplify the bias and applying the in-process adversarial fairness debiasing is converted to a standardized historical sound segment length. A sound representation is generated from the standardized historical sound segment length.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Ugo Ivan Orellana Gonzalez, Aaron K. Baughman, Todd Russell Whitman, Stephen C. Hammer
  • Patent number: 11662590
    Abstract: An eye tracker having a first waveguide for propagating illumination light along a first waveguide path and propagating image light reflected from at least one surface of an eye along a second waveguide path. At least one grating lamina for deflecting the illumination light out of the first waveguide path towards the eye and deflecting the image light into the second waveguide path towards a detector is disposed adjacent an optical surface of the waveguide.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: May 30, 2023
    Assignee: DigiLens Inc.
    Inventors: Milan Momcilo Popovich, Jonathan David Waldern, Alastair John Grant
  • Patent number: 11663781
    Abstract: A computer-implemented method, a computer system and a computer program product enhance user interaction with a virtual or augmented environment. The method includes obtaining a user profile. The user profile includes a constraint of a user. The method also includes identifying a plurality of objects in the augmented reality environment. In addition, the method includes determining an intent of the user with respect to navigating the augmented reality environment. The method further includes calculating a probability of interaction between the user and an identified object based on the intent. Lastly, the method includes generating an enhancement threshold for each of the identified objects based on the probability of interaction and the constraint of the user.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: May 30, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lincoln A Alexander, Laura Janet Rodriguez, Robert E. Loredo, Jaclyn Wakin, Hemant Kumar Sivaswamy
  • Patent number: 11663751
    Abstract: Information may be provided for review by an augmented reality (AR) user. A system may store a plurality of scenes viewed by the AR user, and for each scene information identifying (i) any pairs of real-life objects and AR augmentations presented in the scene and (ii) any pairs in the scene with which the user interacted. The system may also select from the stored plurality of scenes a first subset of scenes such that each pair with which the user interacted is presented in at least one scene in the first subset of scenes. Responsive to an instruction from the user to review historical AR information, the system may present to the user the selected first subset of scenes. The first subset may be selected such that each pair with which the user interacted is presented in at least one scene in which the user interacted with that pair.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: May 30, 2023
    Assignee: InterDigital VC Holdings, Inc.
    Inventor: Mona Singh
  • Patent number: 11662589
    Abstract: An eyewear device with flexible frame for Augmented Reality (AR) is disclosed. At least two sensors and a display are mounted on the flexible frame. When in use, the real time geometry of the eyewear device may change from factory calibrated geometry, resulting in low quality AR rendering. A modeling module is provided to model the real time geometry of the eyewear device on the fly using sensor information of the at least two sensors. The modeled real time geometry is then provided to a rendering module to accurately display the AR to the user.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: May 30, 2023
    Assignee: Snap Inc.
    Inventors: Clemens Birklbauer, Georg Halmetschlager-Funek, Jeroen Hol, Matthias Kalkgruber, Daniel Wagner
  • Patent number: 11663787
    Abstract: Methods, systems, and media for enhancing one or more publications by receiving live video captured by a user, the live video comprising video of a publication, the publication comprising copyrighted content; identifying at least one first trigger in the live video, identifying one or more first three-dimensional, interactive media associated with the at least one first trigger and pertaining to the copyrighted content, and presenting to the user the first three-dimensional, interactive media; and identifying at least one second trigger in the first three-dimensional, interactive media, identifying one or more second three-dimensional, interactive media associated with the at least one second trigger and pertaining to the copyrighted content, and presenting to the user the second three-dimensional, interactive media to progressively deepen and enrich the engagement with the copyrighted content of the publication.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: May 30, 2023
    Assignee: A BIG CHUNK OF MUD LLC
    Inventor: J. Michelle Haines
  • Patent number: 11663993
    Abstract: A display system includes a host and a display. The host executes a first application and a program. The program sets a first display parameter corresponding to the first application. The display receives a signal provided by the host. The signal includes a desktop image. The first application is operated in a first window on the desktop image. The program outputs the first display parameter to the display. The display applies the first display parameter to the first window and displays it, and displays the area of the desktop image outside the first window with a preset display parameter.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: May 30, 2023
    Assignee: Qisda Corporation
    Inventors: Yu-Fu Fan, Yi-Ming Huang, Yu-Chun Lin
  • Patent number: 11663795
    Abstract: A streaming-based VR multi-split system and method are provided. The system includes a control system, a manipulation terminal and at least one experience terminal. The control system includes a streaming media server, a streaming coding and decoding interaction module and a VR platform. The streaming decoding and interaction processing module is configured to receive a video stream sent by the streaming coding and decoding interaction module, collect interaction data at one side of the manipulation terminal in real time, and transmit the interaction data to the streaming coding and decoding interaction module. The streaming coding and decoding interaction module sends a video picture code of the VR platform to the manipulation terminal, and pushes the video stream corresponding to the interaction data to the streaming media server. A presentation picture corresponding to an operation of the experience terminal is acquired by the streaming media server.
    Type: Grant
    Filed: August 4, 2022
    Date of Patent: May 30, 2023
    Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
    Inventor: Ruisheng Zhang
  • Patent number: 11656762
    Abstract: The discussion relates to virtual keyboard engagement. One example can define key volumes relating to keys of a virtual keyboard and detect finger movement of a user through individual key volumes. The example can detect parameter changes associated with detected finger movement through individual key volumes and build potential key sequences from detected parameter changes.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: May 23, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christopher M. Becker, Nazeeh A. Eldirghami, Kevin W. Barnes, Julia Schwarz, Eric Carter
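The virtual-keyboard engagement described in patent 11656762 above can be sketched as: define a 3D volume per key, record which volumes the tracked fingertip passes through, and accumulate those hits into candidate key sequences. The volume representation below (axis-aligned boxes) and the names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class KeyVolume:
    key: str
    lo: tuple  # (x, y, z) min corner
    hi: tuple  # (x, y, z) max corner

    def contains(self, p) -> bool:
        return all(self.lo[i] <= p[i] <= self.hi[i] for i in range(3))

def key_sequence(fingertip_path, volumes):
    """Collect keys whose volumes the fingertip path passes through, in order."""
    sequence = []
    for point in fingertip_path:
        for vol in volumes:
            if vol.contains(point) and (not sequence or sequence[-1] != vol.key):
                sequence.append(vol.key)
    return sequence

keys = [KeyVolume("h", (0, 0, 0), (1, 1, 1)), KeyVolume("i", (1, 0, 0), (2, 1, 1))]
path = [(0.5, 0.5, 0.5), (1.5, 0.5, 0.5)]
print(key_sequence(path, keys))   # ['h', 'i']
```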
  • Patent number: 11657506
    Abstract: A method of robot autonomous navigation includes capturing an image of the environment, segmenting the captured image to identify one or more foreground objects and one or more background objects, determining a match between one or more of the foreground objects to one or more predefined image files, estimating an object pose for the one or more foreground objects by implementing an iterative estimation loop, determining a robot pose estimate by applying a robot-centric environmental model to the object pose estimate by implementing an iterative refinement loop, associating semantic labels to the matched foreground object, compiling a semantic map containing the semantic labels and segmented object image pose, and providing localization information to the robot based on the semantic map and the robot pose estimate. A system and a non-transitory computer-readable medium are also disclosed.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: May 23, 2023
    Assignee: General Electric Company
    Inventors: Huan Tan, Isabella Heukensfeldt Jansen, Gyeong Woo Cheon, Li Zhang
  • Patent number: 11657576
    Abstract: Embodiments of the present disclosure relate generally to generating, conducting, and reporting digital surveys utilizing augmented reality devices and/or virtual reality devices. In particular, in one or more embodiments, the disclosed systems and methods assist administrators in generating digital surveys utilizing interactive virtual environments via a virtual reality device and/or augmented reality elements via an augmented reality device. Similarly, the disclosed systems and methods can provide digital surveys via augmented reality devices and/or virtual reality devices, for instance, by monitoring user interactions via the augmented reality and/or virtual reality devices and providing digital surveys based on the monitored user interactions. Furthermore, the disclosed systems and methods can present survey results and allow administrators to interact with survey results utilizing augmented reality devices and/or virtual reality devices.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: May 23, 2023
    Assignee: Qualtrics, LLC
    Inventors: Blake Andrew Tierney, Milind Kopikare, Larry Dean Cheesman
  • Patent number: 11648478
    Abstract: According to one aspect, there is provided a virtual reality control system including: a sensor detecting a light signal; a display displaying an image to a user; at least one controller controlling the display; and an input device transmitting an input signal input from the user to the controller, wherein the controller is computing position data of the user by using data based on the light signal and computing virtual position data based on the position data of the user, wherein a plurality of areas is displayed on the display based on the virtual position data, wherein the plurality of areas includes an accessible area, where a character based on the virtual position data can move to, and an inaccessible area, where the character cannot move to, wherein an accessible mark is provided in the accessible area which is located within a reference distance from the character.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: May 16, 2023
    Assignee: SKONEC ENTERTAINMENT CO., LTD.
    Inventors: Who Jung Lee, Jae Young Kim, Sang Woo Han, Jong Hyun Yuk
  • Patent number: 11652870
    Abstract: Systems and methods of delegating media capturing functionality from one device to another are presented. A first device configured with an object recognition engine captures a media representation of an environment and identifies an object within that environment. Then based on matched object traits from a database, the engine selects a delegation rules set, and delegates certain media capturing functionality to a second device according to the selected delegation rules set.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: May 16, 2023
    Assignee: NANT HOLDINGS IP, LLC
    Inventor: Patrick Soon-Shiong
  • Patent number: 11651529
    Abstract: The present disclosure discloses an image processing method, apparatus, electronic device, and computer-readable storage medium. The image processing method includes identifying a first object in a first video frame image and a second object located within the first object; in accordance with a position of the first object in the first video frame image, overlaying a third object as a foreground image on the first video frame image to obtain a second video frame image, wherein the third object overlays the first object in the second video frame image; and, in accordance with a position of the second object in the first video frame image, overlaying the second object as a foreground image on the third object of the second video frame image to obtain a third video frame image. (A mask-based compositing sketch follows this entry.)
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: May 16, 2023
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Jingjing Zhuge, Guangyao Ni, Hui Yang
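The layering in patent 11651529 above is essentially a compositing order: draw the third object over the first object's region, then re-draw the second object (which sits inside the first object) on top so it remains visible. A minimal mask-based sketch, with numpy arrays standing in for video frames and assumed masks:

```python
import numpy as np

def composite(frame, first_mask, third_layer, second_mask):
    """Overlay third_layer where the first object is, then restore the second
    object (a region inside the first object) on top of the overlay."""
    out = frame.copy()
    out[first_mask] = third_layer[first_mask]   # third object hides the first
    out[second_mask] = frame[second_mask]       # second object drawn back on top
    return out

frame = np.zeros((4, 4))
first = np.zeros((4, 4), dtype=bool);  first[1:3, 1:3] = True   # first object region
second = np.zeros((4, 4), dtype=bool); second[2, 2] = True      # second object region
third = np.full((4, 4), 9.0)                                    # overlay content
print(composite(frame, first, third, second))
```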