Augmented Reality (real-time) Patents (Class 345/633)
-
Patent number: 11650354
Abstract: Disclosed are systems and methods to render data from a 3D environment. The methods and systems of this disclosure utilize inverse ray tracing from a viewing volume to capture energy data from a 3D environment in a single rendering pass, thereby collecting data more efficiently and accurately.
Type: Grant
Filed: January 14, 2019
Date of Patent: May 16, 2023
Assignee: Light Field Lab, Inc.
Inventor: Jonathan Sean Karafin
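The single-pass idea in this abstract can be sketched as a ray march: one inverse ray per sample point on the viewing volume, accumulating whatever energy the ray crosses. This is a minimal illustration, not the patented method; the toy voxel field and all names are hypothetical.

```python
# Hypothetical 3D energy field: integer voxel coords -> energy value.
ENERGY = {(2, 0, 0): 1.0, (4, 0, 0): 0.5}

def trace_ray(origin, direction, steps=8, step_size=1.0):
    """March one inverse ray from the viewing volume into the scene,
    accumulating energy from every voxel it crosses."""
    total = 0.0
    for i in range(1, steps + 1):
        p = tuple(int(round(o + d * i * step_size))
                  for o, d in zip(origin, direction))
        total += ENERGY.get(p, 0.0)
    return total

def render_viewing_volume(samples):
    """Single rendering pass: one inverse ray per (origin, direction) sample."""
    return [trace_ray(origin, direction) for origin, direction in samples]
```

Because every sample is resolved in the same loop, the whole volume is captured in one pass rather than one pass per view.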
-
Patent number: 11645561
Abstract: Provided are systems, methods, and media for handling dialogs based on user behavior data. An example method includes receiving an input paragraph having one or more factual sentences, in which each of the one or more factual sentences includes one or more words; receiving an input question comprising one or more words; performing word-level gaze prediction on the input paragraph to identify one or more predicted gaze attributes for the input paragraph; extracting an answer to the input question based, at least in part, on the input paragraph, the input question, and the one or more predicted gaze attributes of the input paragraph; and transmitting the extracted answer.
Type: Grant
Filed: March 18, 2019
Date of Patent: May 9, 2023
Assignee: International Business Machines Corporation
Inventors: Abhijit Mishra, Enara C Vijil, Seema Nagar, Kuntal Dey
-
Patent number: 11645028
Abstract: A system and method for visualizing multiple datasets in a virtual 3-dimensional interactive environment. Multiple datasets may be related and virtually cast as 3-dimensional type structures. User interfaces, such as game controllers or headsets, may be used to present the dataset from differing perspectives including the appearance of moving through the data. Certain embodiments provide for mirror image views that allow for presentation of higher order datasets. Other embodiments provide for animation or motion indicia to show how the data is changing and the results on the display. The datasets may represent physical areas or virtual areas as well as demographic, sensor, and financial information.
Type: Grant
Filed: July 6, 2022
Date of Patent: May 9, 2023
Assignee: BadVR
Inventors: Jad Meouchy, Suzanne Borders
-
Patent number: 11647292
Abstract: An image adjustment system includes a camera, an image adjustment device, an image display device, and a controller. The image display device displays a captured image adjusted by the image adjustment device. The image adjustment device includes an image generator and an image processor. The image generator generates a spherical surface image. The image processor acquires the spherical surface image from the image generator to display the spherical surface image on the image display device on the basis of instruction information output from the controller. The image processor rotates the spherical surface image on the basis of the instruction information. The image processor adjusts a right-eye image or a left-eye image of the captured image displayed on the image display device in accordance with a rotation of the spherical surface image.
Type: Grant
Filed: June 29, 2021
Date of Patent: May 9, 2023
Assignee: JVCKENWOOD CORPORATION
Inventor: Takashi Himukashi
-
Patent number: 11646000
Abstract: An image display system includes a display unit displaying an image, a projection unit projecting in a target space a virtual image corresponding to the image with an output light of the display unit, a body unit provided thereto the display unit and the projection unit, and an image producing unit including a first correction unit and a second correction unit. The first correction unit performs a first correction processing of correcting, based on a first orientation signal indicative of a first orientation change of the body unit, a display position of the virtual image in the target space. The second correction unit performs a second correction processing of correcting, based on a second orientation signal indicative of a second orientation change of the body unit which is faster than the first orientation change, the display position of the virtual image in the target space.
Type: Grant
Filed: March 18, 2022
Date of Patent: May 9, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Masanaga Tsuji, Toshiya Mori, Ken'ichi Kasazumi, Yoshiteru Mino, Tadashi Shibata, Nobuyuki Nakano, Akira Tanaka, Shohei Hayashi
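The two-stage correction described here (one term for the slow orientation change, a second for the fast one) can be sketched as two additive display-position offsets. All names and the pixels-per-degree gain are illustrative assumptions, not values from the patent.

```python
def corrected_position(base_px, slow_tilt_deg, fast_tilt_deg, px_per_deg=10.0):
    """Shift the virtual image opposite to the body unit's orientation change.
    The first correction compensates the slow orientation signal, the second
    the faster one, so the image stays anchored in the target space."""
    first_correction = slow_tilt_deg * px_per_deg    # first correction processing
    second_correction = fast_tilt_deg * px_per_deg   # second correction processing
    return base_px - first_correction - second_correction
```

Splitting the two corrections lets the fast path run at a higher update rate than the slow path, which is the apparent motivation for two units.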
-
Patent number: 11645034
Abstract: Systems and methods for adjusting display of virtual content when a wearable display device detects a trigger event to change display of content that is relative to a physical or virtual surface. The method includes displaying content relative to a first surface, detecting a trigger that is one of motion or content driven, and adjusting display of the content to a user-centric virtual surface in the user's field of view.
Type: Grant
Filed: May 19, 2021
Date of Patent: May 9, 2023
Assignee: Magic Leap, Inc.
Inventor: Genevieve Mak
-
Patent number: 11645809
Abstract: Systems and methods for user selection of a virtual object in a virtual scene. A user input may be received via a user input device. The user input may be an attempt to select a virtual object from a plurality of virtual objects rendered in a virtual scene on a display of a display system. A position and orientation of the user input device may be determined in response to the first user input. A probability that the user input selects each virtual object may be calculated via a probability model. Based on the position and orientation of the user input device, a ray-cast procedure and a sphere-cast procedure may be performed to determine the virtual object being selected. The probability of selection may also be considered in determining the virtual object. A virtual beam may be rendered from the user input device to the virtual object.
Type: Grant
Filed: March 2, 2021
Date of Patent: May 9, 2023
Assignee: zSpace, Inc.
Inventors: Jonathan J. Hosenpud, Clifford S. Champion, David A. Chavez, Kevin S. Yamada, Alexandre R. Lelievre
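A minimal sketch of combining a cast with a selection probability, in the spirit of this abstract: a sphere-cast widens the ray into a corridor, and a prior probability breaks ties among the hits. The geometry helper, the fixed sphere radius, and the probability table are all illustrative assumptions.

```python
def ray_sphere_hit(ray_origin, ray_dir, center, radius):
    """True if a unit-length ray passes within `radius` of the object's centre
    (a sphere-cast: a ray-cast with tolerance)."""
    oc = [c - o for c, o in zip(center, ray_origin)]
    t = sum(a * b for a, b in zip(oc, ray_dir))   # projection of centre onto ray
    if t < 0:
        return False                              # object is behind the device
    closest = [o + d * t for o, d in zip(ray_origin, ray_dir)]
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius ** 2

def select_object(ray_origin, ray_dir, objects, probs, sphere_radius=0.5):
    """Pick the candidate the cast hits; break ties with a prior selection
    probability (a stand-in for the patent's probability model)."""
    hits = [name for name, center in objects.items()
            if ray_sphere_hit(ray_origin, ray_dir, center, sphere_radius)]
    if not hits:
        return None
    return max(hits, key=lambda name: probs.get(name, 0.0))
```

With two overlapping candidates in front of the device, the one the model considers more likely wins the tie.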
-
Patent number: 11644278
Abstract: A device can include one or more sensors configured to output sensor data, and a trigger detection module configured to receive the sensor data from the one or more sensors and to determine whether a trigger event has occurred. The trigger detection module can be configured to output a trigger detection signal when the trigger event is detected. The trigger detection signal can be configured to be used by an augmented reality or virtual reality system to cause an augmented reality or virtual reality event.
Type: Grant
Filed: December 29, 2020
Date of Patent: May 9, 2023
Assignee: University of Central Florida Research Foundation, Inc.
Inventors: Dean Reed, Devyn Dodge, Troyle Thomas
-
Patent number: 11641402
Abstract: A system providing connectivity management is provided. The system comprises: a content management server configured to manage connectivity for a network; one or more central controllers configured to collect connectivity information for at least a portion of the network for use by the content management server; and at least one outlet having one or more ports for receiving one or more plugs, wherein connectivity information is communicated between the outlet and the central controller through one or more wireless communication interfaces.
Type: Grant
Filed: May 10, 2022
Date of Patent: May 2, 2023
Assignee: CommScope Technologies LLC
Inventors: Joseph C. Coffey, Joseph Polland, Jason Bautista
-
Patent number: 11640679
Abstract: An augmented reality (AR) or virtual reality (VR) calibration method including the steps of: (a) providing a computing device for displaying a base image of a surrounding environment; (b) obtaining location coordinates of the computing device; (c) initiating an executable application program for processing location data and generating an overlay image over the base image; (d) generating a virtual asset container and at least one digital object corresponding to the computing device; (e) determining a first location of the computing device at a local position within the asset container; (f) moving the computing device to a second location that is a determined distance in a direction from the first location; and (g) calculating a local rotation angle relative to a positive axis of the asset container and a rotation angle relative to a positive axis of a real-world coordinate system to determine an orientation difference.
Type: Grant
Filed: December 6, 2021
Date of Patent: May 2, 2023
Assignee: ARUTILITY, LLC
Inventor: Joseph Steven Eastman
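The final calibration step, comparing the displacement's angle in each coordinate system, reduces to two `atan2` headings and their difference. This is a 2D sketch under that reading of the abstract; the function names are hypothetical.

```python
import math

def rotation_offset_deg(first_xy, second_xy, first_local_xy, second_local_xy):
    """Heading of the device's displacement relative to the positive x-axis of
    each frame; the difference is the orientation offset between the virtual
    asset container and the real-world coordinate system."""
    def heading(a, b):
        return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
    world_angle = heading(first_xy, second_xy)            # real-world frame
    local_angle = heading(first_local_xy, second_local_xy)  # asset container frame
    return (world_angle - local_angle) % 360.0
```

Once this offset is known, the overlay can be rotated so virtual content stays registered as the device moves.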
-
Patent number: 11640572
Abstract: A method to optimize learning based upon ocular information of a subject includes providing a video camera for recording a close-up view of a subject's eye. A first electronic display shows a plurality of educational subject matter to the subject. A second electronic display shows an output to an instructor. Changes in ocular signals of the subject are processed through the use of optimized algorithms. A cognitive state model determines a low to a high cognitive load experienced by the subject. The cognitive state model is evaluated based on the changes in the ocular signals for determining a probability of the low to the high cognitive load experienced by the subject. The probability of the low to the high cognitive load experienced by the subject is displayed to the instructor.
Type: Grant
Filed: December 18, 2020
Date of Patent: May 2, 2023
Assignee: Senseye, Inc.
Inventors: David Zakariaie, Kathryn McNeil, Alexander Rowe, Joseph Brown, Patricia Herrmann, Jared Bowden, Taumer Anabtawi, Andrew R. Sommerlot, Seth Weisberg, Veronica Choi
-
Patent number: 11636658
Abstract: Three-dimensional occlusion can be used when generating AR display overlays. Depth information can be used to delete portions of an AR element, based on intervening objects between a viewer and the AR element. In cases where the depth information does not impart a complete picture of the intervening objects, additional image processing and object detection systems and techniques can be used to further improve the precision of the occlusion.
Type: Grant
Filed: May 4, 2022
Date of Patent: April 25, 2023
Assignee: Google LLC
Inventors: Yi-Hsuan Tsai, Chen-Ping Yu, Myvictor Tran
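The core of depth-based occlusion is a per-pixel z-test: wherever the measured scene depth is closer to the viewer than the AR element, the AR pixel is deleted. A minimal 1D sketch (flat pixel lists stand in for image buffers; all names are illustrative):

```python
def occlude(ar_depth, ar_pixels, scene_depth, hole=None):
    """Delete AR pixels wherever a real object lies in front of the
    AR element: keep the pixel only if the AR depth is nearer than
    the measured scene depth at the same location."""
    return [px if ar_depth[i] < scene_depth[i] else hole
            for i, px in enumerate(ar_pixels)]
```

The abstract's fallback (object detection when depth is incomplete) would amount to filling in `scene_depth` where the sensor returned nothing.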
-
Patent number: 11636568
Abstract: A split hierarchy graphics processor system including a master node executing a virtual reality (VR) application responsive to input from a client device received over a network to generate primitives for objects in a VR environment. The graphics processor system includes render nodes performing rendering based on the primitives for views into the VR environment taken from a location in the VR environment, the views corresponding to a grid map of the VR environment. Each of the render nodes renders, encodes and streams a corresponding sequence of frames of a corresponding view to the client device. The processor system includes an asset library storing input geometries for the objects used for building the VR environment, wherein the objects in the asset library are accessible by the master node and the render nodes.
Type: Grant
Filed: January 21, 2022
Date of Patent: April 25, 2023
Assignee: Sony Interactive Entertainment LLC
Inventor: Torgeir Hagland
-
Patent number: 11635623
Abstract: The computational scaling challenges of holographic displays are mitigated by techniques for generating holograms that introduce foveation into a wave front recording planes approach to hologram generation. Spatial hashing is applied to organize the points or polygons of a display object into keys and values.
Type: Grant
Filed: July 23, 2020
Date of Patent: April 25, 2023
Assignee: Nvidia Corp.
Inventors: Jui-Hsien Wang, Ward Lopes, Rachel Anastasia Brown, Peter Shirley
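The spatial-hashing step mentioned here, organizing a display object's points into keys and values, can be sketched as quantizing each point to a grid cell and bucketing by cell. The cell size and function name are illustrative assumptions.

```python
def hash_points(points, cell=1.0):
    """Bucket a display object's points into spatial-hash cells:
    the key is the quantized cell coordinate, the value is the
    list of member points."""
    table = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        table.setdefault(key, []).append(p)
    return table
```

Grouping points this way lets later stages process one cell (e.g. one wave front recording plane region) at a time instead of scanning every point.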
-
Patent number: 11636661
Abstract: A method of generating an augmented reality lens comprises: causing to display a list of lens categories on a display screen of a client device; receiving a user choice from the displayed list; causing to prepopulate a lens features display on the display screen based on the user choice, wherein each lens feature comprises image transformation data configured to modify or overlay video or image data; receiving a user selection of a lens feature from the prepopulated lens features display; receiving a trigger selection that activates the lens feature to complete the lens; and saving the completed lens to a memory of a computer device.
Type: Grant
Filed: August 3, 2021
Date of Patent: April 25, 2023
Assignee: Snap Inc.
Inventors: Oleksandr Chepizhenko, Jean Luo, Bogdan Maksymchuk, Vincent Sung, Ashley Michelle Wayne
-
Patent number: 11635625
Abstract: Included are a first display element configured to display a first virtual image; a second display element configured to display a second virtual image; a combining optical member configured to combine first imaging light and second imaging light; a light-guiding optical system configured to guide light that passed through the combining optical member; and a correction optical system provided between the first display element and the combining optical member and configured to correct an aberration in accordance with a positional difference between the first display element and the second display element.
Type: Grant
Filed: March 23, 2021
Date of Patent: April 25, 2023
Assignee: SEIKO EPSON CORPORATION
Inventor: Osamu Yokoyama
-
Patent number: 11636629
Abstract: Example embodiments provide a method, performed by an edge data network, of rendering an object, and a method of displaying an object rendered on a device. The edge data network may generate a second metadata set corresponding to a predicted position and direction of a device based on a first metadata set, and render a first object corresponding to the second metadata set. Furthermore, the edge data network may receive, from the device, a third metadata set corresponding to the current position and direction of the device, obtain a rendered object corresponding to the current position and direction of the device based on the second metadata set and the third metadata set, and transmit the obtained rendered object to the device.
Type: Grant
Filed: February 18, 2021
Date of Patent: April 25, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Younghyun Joo, Younggi Kim, Sangeun Seo, Jaewook Jung, Hyoyoung Cho
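One plausible reading of the predicted-versus-reported pose comparison is a reuse decision: the edge network renders speculatively for the predicted pose, then keeps or discards that result once the device reports its actual pose. The tolerance, names, and reuse policy below are assumptions for illustration only.

```python
def pose_error(predicted, actual):
    """Euclidean distance between the predicted and reported device positions."""
    return sum((a - p) ** 2 for p, a in zip(predicted, actual)) ** 0.5

def choose_frame(predicted_pose, actual_pose, prerendered, rerender, tol=0.1):
    """Reuse the speculatively rendered object if the prediction (second
    metadata set) was close enough to the device's report (third metadata
    set); otherwise render again for the actual pose."""
    if pose_error(predicted_pose, actual_pose) <= tol:
        return prerendered
    return rerender(actual_pose)
```

Speculative rendering like this hides one network round trip whenever the motion prediction holds.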
-
Patent number: 11633659
Abstract: The disclosure relates to a system for evaluating movement of a body of a user. The system may include a video display, one or more digital cameras, and a processor. The processor may control the one or more cameras to generate images of at least the part of the body over a period of time. The processor may estimate a position of a plurality of joints of the body. The processor may receive a selection of a tracked pose, and determine, from the plurality of joints, a set of joints associated with the tracked pose. The processor may generate at least one joint vector connecting joints in the set of joints, and assign, based on changes in the joint vector over the period of time, a form score to a performance of the tracked pose. The processor may then generate a user interface that depicts the form score.
Type: Grant
Filed: June 29, 2021
Date of Patent: April 25, 2023
Assignee: MirrorAR LLC
Inventors: Hemant Virkar, Leah R. Kaplan, Stephen Furlani, Jacob Borgman, Anil Bhave
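A joint vector and a form score derived from it can be sketched in 2D: the angle at a joint comes from the two vectors meeting there, and the score measures how close that angle stays to a target over the tracking period. The scoring formula and tolerance are illustrative assumptions, not the patent's method.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the joint vectors b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def form_score(observed_angles, target_angle, tolerance=45.0):
    """Score 0-100 from how closely the tracked joint angle followed the
    target over the period; an error of `tolerance` degrees scores zero."""
    errs = [min(abs(a - target_angle), tolerance) for a in observed_angles]
    return 100.0 * (1.0 - sum(errs) / (tolerance * len(errs)))
```

For a squat, for example, the tracked pose might be the knee angle sampled once per frame, scored against the physiotherapist's target.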
-
Patent number: 11628374
Abstract: A virtual puppeteering system includes a portable device including a camera, a display, a hardware processor, and a system memory storing an object animation software code. The hardware processor is configured to execute the object animation software code to, using the camera, generate an image in response to receiving an activation input, using the display, display the image, and receive a selection input selecting an object shown in the image. The hardware processor is further configured to execute the object animation software code to determine a distance separating the selected object from the portable device, receive an animation input, identify, based on the selected object and the received animation input, a movement for animating the selected object, generate an animation of the selected object using the determined distance and the identified movement, and render the animation of the selected object.
Type: Grant
Filed: June 30, 2020
Date of Patent: April 18, 2023
Assignees: Disney Enterprises, Inc., ETH Zürich
Inventors: Raphael Anderegg, Loic Ciccone, Robert W. Sumner
-
Patent number: 11631222
Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
Type: Grant
Filed: May 17, 2021
Date of Patent: April 18, 2023
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
-
Patent number: 11624926
Abstract: According to examples, a display system may include a disparity sensing detector, collection optics, and a first lens assembly. The first lens assembly may include a first projector to output a first display light associated with a first image and a first waveguide for propagating the first display light to the collection optics, in which the collection optics is to direct the first display light to the disparity sensing detector. The display system may also include a second lens assembly including a second projector to output a second display light associated with a second image and a second waveguide for propagating the second display light to the collection optics, in which the collection optics is to direct the second display light to the disparity sensing detector.
Type: Grant
Filed: January 5, 2022
Date of Patent: April 11, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Wai Sze Tiffany Lam, Yang Yang, Dominic Meiser, Wanli Chi
-
Patent number: 11625899
Abstract: A server includes processing circuitry configured to receive one or more images, the one or more images including one or more representations of people. Additionally, the processing circuitry is configured to apply a neural network to the one or more images, wherein the neural network classifies at least one aesthetic component of each image of the one or more images, an aesthetic component score being generated for each image in the one or more images. Further, the processing circuitry is configured to generate a user eyewear equipment profile for a user, the user being matched to a persona from a personae database, each persona in the personae database being linked to one or more persona eyewear equipment profiles, the one or more persona eyewear equipment profiles being based on the aesthetic component score, and select eyewear equipment for the user based on the generated user eyewear equipment profile.
Type: Grant
Filed: November 18, 2019
Date of Patent: April 11, 2023
Assignee: Essilor International
Inventors: Julien Andoche, Estelle Netter
-
Patent number: 11625858
Abstract: A rear-facing camera captures a live-action video image while a front-facing camera captures an image of a distributor. An avatar controller controls an avatar based on the image of the distributor captured by the front-facing camera. A synthesizer arranges the avatar in a predetermined position of a real space coordinate system and synthesizes the avatar with the live-action video image. The face of the distributor captured by the front-facing camera is tracked and reflected on the avatar.
Type: Grant
Filed: November 19, 2021
Date of Patent: April 11, 2023
Assignee: Dwango Co., Ltd.
Inventors: Nobuo Kawakami, Shinnosuke Iwaki, Takashi Kojima, Toshihiro Shimizu, Hiroaki Saito
-
Patent number: 11627303
Abstract: A head-mounted display system with video see-through (VST) is described. The system and method process video images captured by at least two forward-facing video cameras mounted to the headset to produce generated images whose viewpoints correspond to the viewpoint the user would have if the user were not wearing the display system. By generating VST images which have viewpoints corresponding to the user's viewpoint, errors in sizing, distances and positions of objects in the VST images are prevented.
Type: Grant
Filed: July 9, 2020
Date of Patent: April 11, 2023
Assignee: INTERAPTIX INC.
Inventor: Dae Hyun Lee
-
Patent number: 11625090
Abstract: Techniques are disclosed for performing localization of a handheld device with respect to a wearable device. At least one sensor mounted to the handheld device, such as an inertial measurement unit (IMU), may obtain handheld data indicative of movement of the handheld device with respect to the world. An imaging device mounted to either the handheld device or the wearable device may capture a fiducial image containing a number of fiducials affixed to the other device. The number of fiducials contained in the image are determined. Based on the number of fiducials, at least one of a position and an orientation of the handheld device with respect to the wearable device are updated based on the image and the handheld data in accordance with a first operating state, a second operating state, or a third operating state.
Type: Grant
Filed: October 8, 2021
Date of Patent: April 11, 2023
Assignee: Magic Leap, Inc.
Inventors: Zachary C. Nienstedt, Samuel A. Miller, Barak Freedman, Lionel Ernest Edwin, Eric C. Browy, William Hudson Welch, Ron Liraz Lidji
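The fiducial-count branching can be sketched as a three-state fusion rule: no fiducials means dead-reckoning on IMU data alone, a few fiducials allow a partial (position-only) update, and enough fiducials allow a full pose update. The count thresholds and the exact split between states are assumptions, not the patent's specification.

```python
def fuse_pose(num_fiducials, imu_pose, image_pose):
    """Select the operating state from the number of fiducials seen in
    the image, and fuse the IMU-derived and image-derived poses
    (each a (position, orientation) pair) accordingly."""
    position, orientation = imu_pose
    img_position, img_orientation = image_pose
    if num_fiducials == 0:
        return position, orientation        # state 1: IMU data only
    if num_fiducials < 3:
        return img_position, orientation    # state 2: update position only
    return img_position, img_orientation    # state 3: full pose update
```

Degrading gracefully like this keeps the handheld controller tracked even when most fiducials fall outside the camera's view.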
-
Patent number: 11625097
Abstract: Technologies are disclosed herein for controlling a head-mountable heads-up display system comprising a heads-up display unit and a hand cover. The hand cover includes a plurality of input elements located on appendages thereof that are configured to cause the hand cover to transmit input signals to the heads-up display unit. The heads-up display unit is configured to display virtual image content within a field of view of a user. As a result of receiving a user input, the heads-up display unit may display virtual image content based on a user input. The heads-up display unit updates the virtual image content as a result of receiving an input signal corresponding to an interaction between a pair of input elements of the hand cover. The heads-up display system may be useable in connection with a system of an outdoor recreational area to obtain information regarding the outdoor recreational area.
Type: Grant
Filed: October 13, 2021
Date of Patent: April 11, 2023
Assignee: Dish Network L.L.C.
Inventor: Seth Byerley
-
Patent number: 11627297
Abstract: A captured scene of a live action scene, captured while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising the stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
Type: Grant
Filed: December 10, 2021
Date of Patent: April 11, 2023
Assignee: Unity Technologies SF
Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
-
Patent number: 11627092
Abstract: The technologies described herein are generally directed to modeling radio wave propagation in a fifth generation (5G) network or other next generation networks. For example, a method described herein can include, for a network application, identifying, by a system comprising a processor, a characteristic value of a performance characteristic associated with an uplink connection enabled via a network of a user equipment to application server equipment hosting the network application. The method can further include, based on the characteristic value and a criterion, selecting, by the system, a first packet size for the uplink connection. The method can further include communicating, by the system, to the user equipment, the first packet size for use with the uplink connection.
Type: Grant
Filed: November 30, 2020
Date of Patent: April 11, 2023
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Rajarajan Sivaraj, Kittipat Apicharttrisorn, Bharath Balasubramanian, Rittwik Jana, Subhabrata Sen, Dhruv Gupta, Jin Wang
-
Patent number: 11626087
Abstract: A control device of a head-mounted device is provided. The head-mounted device includes an image capturing device configured to capture an environment around a wearer and a display device configured to display an image to the wearer. The control device includes a first acquisition unit configured to acquire a first image captured by the image capturing device, a second acquisition unit configured to acquire a second image used to lead a mental state of the wearer to a target mental state, and a composition unit configured to composite the first image and the second image, thereby generating a third image to be displayed on the display device.
Type: Grant
Filed: March 25, 2021
Date of Patent: April 11, 2023
Assignee: Canon Kabushiki Kaisha
Inventors: Osamu Nomura, Hiroshi Hosokawa
-
Patent number: 11626088
Abstract: A method and system for generating attention pointers, including: displaying, in a display of a mobile device, objects within and outside a field of view (FOV) of a user, wherein the objects outside the FOV are real objects; monitoring, by a processor of the mobile device, for a change in an object within or outside the FOV; in response to a change, generating by the processor one or more attention pointers within the FOV of the user for directing user attention to the change in the object, which is either inside or outside the FOV; and displaying, by the processor, on a virtual screen within the FOV of the user, the one or more attention pointers, wherein the one or more attention pointers are dynamically configured to interact with the user in response to detections based on a movement of the user or the object within or outside the FOV of the user.
Type: Grant
Filed: September 15, 2021
Date of Patent: April 11, 2023
Assignee: Honeywell International Inc.
Inventors: David Chrapek, Dominik Kadlcek, Michal Kosik, Sergij Cernicko, Marketa Szydlowska, Katerina Chmelarova
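Deciding when an attention pointer is needed, and which way it should point, reduces to comparing the object's bearing with the user's heading against the half-width of the FOV. A minimal 2D sketch with hypothetical names and a hypothetical 90-degree FOV:

```python
def attention_pointer(user_heading_deg, object_bearing_deg, half_fov_deg=45.0):
    """Return None when the object lies inside the user's FOV; otherwise
    return the signed turn angle (negative = turn left) used to orient an
    attention pointer at the edge of the virtual screen."""
    # wrap the bearing difference into (-180, 180]
    delta = (object_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    if abs(delta) <= half_fov_deg:
        return None        # object is visible; no pointer needed
    return delta
```

Re-evaluating this each frame as the user or object moves gives the dynamic behavior the abstract describes: the pointer appears, swings, and disappears as the object crosses the FOV boundary.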
-
Patent number: 11620796
Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
Type: Grant
Filed: March 1, 2021
Date of Patent: April 4, 2023
Assignee: International Business Machines Corporation
Inventors: Joseph Shtok, Leonid Karlinsky, Adi Raz Goldfarb, Oded Dubovsky
-
Patent number: 11620780
Abstract: Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of a face of the first user texture mapped onto a three-dimensional body simulation that follows actual physical movement of the first user. The three-dimensional body simulation of the first user is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device both associated with the first user. The three-dimensional representation of the face of the first user is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
Type: Grant
Filed: November 18, 2020
Date of Patent: April 4, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin S. Lee, Kenneth Mitchell Jakubzak, Mathew J. Lamb, Alton Kwok
-
Patent number: 11620855
Abstract: A method, computer system, and a computer program product for memory mapping is provided. The present invention may include identifying an augmented reality device and at least one Internet of Things (IoT) device which observes at least one biometric parameter. The present invention may include defining at least one user attention pattern based on the at least one biometric parameter. The present invention may include predicting an attentiveness of a user based on the at least one attention pattern. The present invention may include recording data from the augmented reality device, based on the attentiveness of the user dropping below a certain point. The present invention may include storing the recorded data.
Type: Grant
Filed: September 3, 2020
Date of Patent: April 4, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Shikhar Kwatra, Jeremy R. Fox, Sarbajit K. Rakshit, John D. Wilson
-
Patent number: 11615596
Abstract: A computer system, while displaying a view of a computer-generated environment, detects movement of a physical object, and in response: in accordance with a determination that a user is within a threshold distance of a first portion of the physical object and that the physical object meets preset criteria, the computer system changes an appearance of virtual content displayed at a position corresponding to a current location of the physical object's first portion, without changing an appearance of virtual content displayed at a position corresponding to the physical object's second portion; and in accordance with a determination that the user is within the threshold distance and that the physical object does not meet the preset criteria, the computer system changes an appearance of virtual content displayed at a position corresponding to a current location of the physical object's first portion.
Type: Grant
Filed: September 23, 2021
Date of Patent: March 28, 2023
Assignee: APPLE INC.
Inventors: Jeffrey M. Faulkner, Stephen O. Lemay, William A. Sorrentino, III, Jonathan Ive, Kristi E. S. Bauerly
-
Patent number: 11615767
Abstract: Even when an object to be presented is moving, the object can be presented as display information in a more favorable manner.
Type: Grant
Filed: October 8, 2019
Date of Patent: March 28, 2023
Assignee: Sony Group Corporation
Inventor: Mitsuru Nishibe
-
Patent number: 11612358
Abstract: A method for providing a thermal feedback includes executing a virtual reality application providing a virtual space that includes a virtual area to which an area temperature attribute is assigned, and a virtual object to which an object temperature attribute is assigned. An area event that reflects that a player character enters the virtual area is detected. A feedback device is controlled to output thermal feedback associated to the area temperature attribute when the area event is detected, the feedback device outputting the thermal feedback using a thermoelectric element performing a thermoelectric operation. An object event reflecting that the player character is influenced by the virtual object is detected. The feedback device is controlled to override the thermal feedback associated to the area temperature attribute and output thermal feedback associated to the object temperature when the object event is detected while the player character is in the virtual area.
Type: Grant
Filed: August 5, 2021
Date of Patent: March 28, 2023
Assignee: TEGWAY CO., LTD.
Inventors: Kyoungsoo Yi, Ockkyun Oh, Jong Ok Ko
-
Patent number: 11609431
Abstract: The invention relates to a head-mounted display (HMD) (1), comprising a housing (10) having an interior (11); at least one optical lens (15, 17) having a focal plane (5) located at the focal point (16, 18), which focal plane is arranged within the housing (10); at least one first polarizing filter (21) having a first polarization direction and one second polarizing filter (22) having a second polarization direction; and at least one LCD unit (25), wherein the at least one LCD unit (25) is arranged in the area of the focal plane (5) between the first polarizing filter (21) and the second polarizing filter (22); the first polarizing filter (21) and/or the second polarizing filter (22) is or are arranged at a distance from the focal plane (5). The invention further relates to an amusement device (2), in particular an open-air amusement ride, a roller coaster or a carousel, comprising an HMD (1).
Type: Grant
Filed: August 30, 2019
Date of Patent: March 21, 2023
Assignee: VR COASTER GMBH & CO. KG
Inventor: Thomas Faul
-
Patent number: 11609675
Abstract: A system and method may include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, identifying relationships between a plurality of scene elements in the AR environment, and obtaining a set of UI layout patterns for arranging the plurality of scene elements in the AR environment according to one or more relationships between the plurality of scene elements. The system and method may identify, for the at least one scene element, at least one relationship that corresponds to at least one UI layout pattern, generate a modified UI layout pattern for the at least one scene element using different relationships than the identified at least one relationship, and trigger display of the AR content associated with the information and the at least one scene element using the modified UI layout pattern.
Type: Grant
Filed: December 2, 2019
Date of Patent: March 21, 2023
Assignee: Google LLC
Inventors: David Joseph Murphy, Ariel Sachter-Zeltzer, Caroline Hermans
-
Patent number: 11609976
Abstract: Provided are a method and system for managing an image based on interaction between a face image and a messenger account. An image management method may include: storing, in a database, a plurality of face images and a plurality of messenger accounts in association with each other; receiving a target image; recognizing a face image from the received target image; searching the database for a first face image that matches the recognized face image, among the stored plurality of face images, and identifying a first messenger account corresponding to the first face image, among the stored plurality of messenger accounts; and displaying information of the first messenger account in association with the first face image, in the target image.
Type: Grant
Filed: November 7, 2019
Date of Patent: March 21, 2023
Assignee: LINE Plus Corporation
Inventors: Yuri Jo, Hee Jin Park
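The lookup flow in the abstract of patent 11609976 (match a recognized face against stored faces, then resolve the associated messenger account) can be sketched as follows. The data layout and the `match_fn` matcher are illustrative assumptions; a real system would use a proper face-recognition model.

```python
def identify_messenger_account(recognized_face, database, match_fn):
    """Illustrative sketch: find the stored face image that matches the
    recognized face and return it with its associated messenger account.

    `database` maps stored face images to messenger accounts; `match_fn`
    stands in for a real face matcher (both are hypothetical names)."""
    for stored_face, account in database.items():
        if match_fn(recognized_face, stored_face):
            return stored_face, account
    # no stored face matched the recognized face
    return None, None
```

The returned account information would then be displayed in the target image, in association with the matched face.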
-
Patent number: 11610348
Abstract: An augmented reality (AR) diagnostic tool embodied as a software application on a portable device employs AR infrastructure to enable a user to locate a failed/malfunctioning node of a cluster and, with minimal interaction, diagnose causes and provide recommendations to repair the node. The portable device may be a computer embodied as visualization technology and configured to execute the software application. Once installed, the AR diagnostic (ARD) tool is ready for use by the user, e.g., a customer service technician, to locate and repair one or more failed cluster nodes. In response to a failure/malfunction, the cluster node sends diagnostic and configuration information (i.e., failure/malfunction information) of the failed node to an analytics service. The failure information informs the technician of the cluster failure. The technician may then activate the ARD tool and AR infrastructure to locate and repair the failed node.
Type: Grant
Filed: July 21, 2021
Date of Patent: March 21, 2023
Assignee: NetApp, Inc.
Inventor: Michael Keith Nunez
-
Patent number: 11601580
Abstract: An integrated computational interface device includes a housing, at least one image sensor, and a foldable protective cover. The housing has a key region and a non-key region, and a keyboard associated with the key region. The foldable protective cover incorporates the at least one image sensor. The protective cover is configured to be manipulated into a plurality of folding configurations, including a first folding configuration, wherein the protective cover is configured to encase the key region and at least a portion of the non-key region, and a second folding configuration, wherein the protective cover is configured to stand in a manner that causes an optical axis of the at least one image sensor to generally face a user of the integrated computational interface device while the user types on the keyboard.
Type: Grant
Filed: March 30, 2022
Date of Patent: March 7, 2023
Assignee: MULTINARITY LTD
Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev, Amit Knaani
-
Patent number: 11599333
Abstract: A method for performing voice dictation with an earpiece worn by a user includes receiving as input to the earpiece voice sound information from the user at one or more microphones of the earpiece, receiving as input to the earpiece user control information from one or more sensors within the earpiece independent from the one or more microphones of the earpiece, inserting a machine-generated transcription of the voice sound information from the user into a user input area associated with an application executing on a computing device, and manipulating the application executing on the computing device based on the user control information.
Type: Grant
Filed: March 3, 2021
Date of Patent: March 7, 2023
Assignee: BRAGI GMBH
Inventors: Peter Vincent Boesen, Luigi Belverato, Martin Steiner
-
Patent number: 11592871
Abstract: An integrated computational interface device may include a portable housing having a key region and a non-key region; a keyboard associated with the key region of the housing; and a holder associated with the non-key region of the housing. The holder may be configured for selective engagement with and disengagement from the wearable extended reality appliance, such that when the wearable extended reality appliance is selectively engaged with the housing via the holder, the wearable extended reality appliance is transportable with the housing.
Type: Grant
Filed: April 1, 2022
Date of Patent: February 28, 2023
Assignee: Multinarity LTD
Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev, Amit Knaani, Doron Assayas Terre
-
Patent number: 11594335
Abstract: There is a need to accurately and dynamically evaluate an individual's risk associated with the transmission or contraction of a disease. This need can be addressed, for example, by generation of a real-time or near real-time predicted disease score for an associated user. In one example, a method includes receiving a video stream data object depicting a visual representation of a target user; processing the video stream data object to generate a protective covering indication with respect to the target user; processing the video stream data object to generate a spatial proximity determination score with respect to the target user; processing the protective covering indication and spatial proximity determination score to generate a predicted disease score associated with the target user; and providing an augmented reality video stream data object configured to depict the visual representation of the target user and the predicted disease score.
Type: Grant
Filed: March 10, 2021
Date of Patent: February 28, 2023
Assignee: Optum, Inc.
Inventors: Geo Min, Kassi Elana Dibert, Tiffany K. Nguyen, Zachary B. Rosen, Samuel Landon Larsen
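The scoring step in the abstract of patent 11594335 (combine a protective-covering indication and a spatial-proximity score into one predicted disease score) could look like the sketch below. The linear combination and the weight values are assumptions for illustration; the abstract does not specify the scoring model.

```python
def predicted_disease_score(protective_covering, proximity_score,
                            covering_weight=0.6, proximity_weight=0.4):
    """Illustrative sketch: combine a protective-covering indication
    (True = covering detected on the target user) and a spatial-proximity
    score in [0, 1] (1 = very close) into a single risk score in [0, 1].

    The weighted-sum model and both weights are hypothetical."""
    covering_risk = 0.0 if protective_covering else 1.0
    return covering_weight * covering_risk + proximity_weight * proximity_score
```

The resulting score would then be rendered next to the target user in the augmented reality video stream.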
-
Image generation apparatus and image generation method using frequency lower than display frame rate
Patent number: 11592668
Abstract: Methods and apparatus provide for generating an image by way of acquiring information relating to at least one of a position and a rotation of a camera operated by a user. The image is generated for display on a display unit viewed by the user. A rate at which the image is generated is at a first frequency, which is lower than a second frequency corresponding to a frame rate of the display unit.
Type: Grant
Filed: April 22, 2021
Date of Patent: February 28, 2023
Assignees: Sony Interactive Entertainment Inc., Sony Interactive Entertainment Europe Ltd.
Inventors: Tomohiro Oto, Simon Mark Benson, Ian Henry Bickerstaff
-
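A consequence of the scheme in patent 11592668 (image generation at a lower frequency than the display frame rate) is that some display refreshes must reuse the last generated image, e.g. by reprojecting it for the camera's latest pose. The accounting can be sketched as follows; the function name and return format are illustrative.

```python
def frames_to_render(display_fps, render_fps, duration_s):
    """Illustrative sketch: over a given duration, count how many display
    refreshes get a newly generated image versus a reused (reprojected) one,
    when images are generated at render_fps < display_fps."""
    display_frames = int(display_fps * duration_s)  # total display refreshes
    rendered = int(render_fps * duration_s)         # newly generated images
    reused = display_frames - rendered              # refreshes reusing an image
    return rendered, reused
```

At 120 Hz display refresh with 60 Hz image generation, every other refresh reuses the previous image.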
Patent number: 11594075
Abstract: An eye tracking device for tracking an eye is described. The eye tracking device comprises: a first diffractive optical element, DOE, arranged in front of the eye, and an image module, wherein the image module is configured to capture an image of the eye via the first DOE. The first DOE is adapted to direct a first portion of incident light reflected from the eye towards the image module. The eye tracking device is characterized in that the first DOE is configured to provide a lens effect.
Type: Grant
Filed: March 30, 2020
Date of Patent: February 28, 2023
Assignee: TOBII AB
Inventor: Daniel Tornéus
-
Patent number: 11595580
Abstract: The present disclosure provides systems and methods that use and/or generate image files according to a novel microvideo image format. For example, a microvideo can be a file that contains both a still image and a brief video. The microvideo can include multiple tracks, such as, for example, a separate video track, audio track, and/or one or more metadata tracks. As one example track, the microvideo can include a motion data track that stores motion data that can be used (e.g., at file runtime) to stabilize the video frames. A microvideo generation system included in an image capture device can determine a trimming of the video on-the-fly as the image capture device captures the microvideo.
Type: Grant
Filed: May 23, 2022
Date of Patent: February 28, 2023
Assignee: GOOGLE LLC
Inventors: Wei Hong, Marius Renn, Radford Ray Juang
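The multi-track structure described in the abstract of patent 11595580 (a still image plus video, audio, and metadata tracks, with on-the-fly trimming) can be modeled as a toy container. All field and method names are illustrative; the real format is container-specific and not described by the abstract.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Microvideo:
    """Illustrative toy model of a still-plus-video file with multiple
    tracks, including per-frame metadata such as motion data used to
    stabilize the video frames."""
    still_image: bytes
    video_track: List[bytes]
    audio_track: List[bytes]
    metadata_tracks: Dict[str, list] = field(default_factory=dict)

    def trim(self, start: int, end: int) -> "Microvideo":
        # trim the video portion, keeping per-frame metadata (e.g. the
        # motion data track) aligned with the frames that are kept
        return Microvideo(
            still_image=self.still_image,
            video_track=self.video_track[start:end],
            audio_track=self.audio_track[start:end],
            metadata_tracks={k: v[start:end]
                             for k, v in self.metadata_tracks.items()},
        )
```

A capture pipeline could call `trim` as frames arrive to decide the video's final extent on the fly.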
-
Patent number: 11587535
Abstract: Aspects of the subject disclosure may include, for example, detecting a user operating a cross reality headset, receiving a first location of the cross reality headset, providing image content and audio content to the cross reality headset, and receiving a second location of the cross reality headset. Further embodiments can include determining the cross reality headset is stationary based on the first location and the second location resulting in a determination, and providing instructions to the cross reality headset to adjust the positioning of the image content on a display of the cross reality headset and to provide the audio content according to the determination resulting in an adjustment of the positioning of the image content. The cross reality headset presents the image content according to the adjustment on the display. The cross reality headset provides the audio content. Other embodiments are disclosed.
Type: Grant
Filed: October 28, 2021
Date of Patent: February 21, 2023
Assignee: AT&T Intellectual Property I, L.P.
Inventor: Kunwar Handa
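The stationary determination in the abstract of patent 11587535 is based on two sampled headset locations. A minimal sketch, assuming 3D coordinates in metres and a small movement tolerance (the tolerance value is an assumption; the abstract does not give one):

```python
import math


def is_stationary(first_location, second_location, tolerance=0.05):
    """Illustrative sketch: the headset is considered stationary when the
    Euclidean distance between the two sampled (x, y, z) locations is
    within a small tolerance (hypothetical value, in metres)."""
    return math.dist(first_location, second_location) <= tolerance
```

The resulting determination would drive the instructions to adjust image positioning and audio on the headset.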
-
Patent number: 11587200
Abstract: A method, apparatus and computer program product enable multiple timeline support in playback of omnidirectional media content with overlay. The method, apparatus and computer program product receive a visual overlay configured to be rendered as multi-layer visual content with an omnidirectional media content file (30). The omnidirectional media content file is associated with a first presentation timeline. The visual overlay is associated with a second presentation timeline. The method, apparatus and computer program product construct an overlay behavior definition file associated with the visual overlay (32). The overlay behavior definition file indicates a behavior of the second presentation timeline with respect to the first presentation timeline in an instance that a pre-defined user interaction switch occurs during a playback of the omnidirectional media content file.
Type: Grant
Filed: September 20, 2019
Date of Patent: February 21, 2023
Assignee: Nokia Technologies Oy
Inventors: Sujeet Shyamsundar Mate, Igor Curcio, Miska Hannuksela, Emre Aksu, Kashyap Kammachi Sreedhar
-
Patent number: 11586286
Abstract: A system receives, from an augmented reality device, a first image of a web application, where the first image shows a first element of the web application. The system receives eye tracking information that indicates eye movements of a user as the user is looking at different elements of the web application. The system determines that the user is looking at a first location coordinate on the first image of the web application. The system determines that the first element is located at the first location coordinate. The system identifies first element attributes associated with the first element. The system generates an augmented reality message comprising the first element attributes. The system generates an augmented reality display in which the augmented reality message is presented as a virtual object. The system transmits the augmented reality display to the augmented reality device.
Type: Grant
Filed: May 18, 2022
Date of Patent: February 21, 2023
Assignee: Bank of America Corporation
Inventor: Shailendra Singh
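The step in patent 11586286 of determining which element sits at a gazed-at location coordinate is essentially a hit test against element bounding boxes. A minimal sketch, with an assumed data layout (element ids mapped to `(x, y, width, height)` boxes in image pixels):

```python
def element_at(gaze_xy, elements):
    """Illustrative sketch: return the id of the first element whose
    bounding box contains the gaze coordinate, or None if no element
    is at that location. The box format is a hypothetical layout."""
    gx, gy = gaze_xy
    for element_id, (x, y, w, h) in elements.items():
        if x <= gx < x + w and y <= gy < y + h:
            return element_id
    return None
```

The matched element's attributes would then be packaged into the augmented reality message presented as a virtual object.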