Augmented Reality (real-time) Patents (Class 345/633)
  • Patent number: 10809532
    Abstract: A display method and a display system are provided. The method includes acquiring a collection of first images by a first time-interval using a first process, performed by a plurality of acquisition devices acquiring surroundings thereof; acquiring one or more second images using a second process different from the first process; processing each first image in the collection of the first images and the one or more second images using a predetermined process to form a collection of third images; and displaying the collection of the third images by a second time-interval. The predetermined process includes identifying overlapping areas between the first images and the second images and identifying a display priority relationship among the overlapping areas. The third images include a first area, the first area is at least a portion of the overlapping areas, and the first area displays at least a portion of the first images.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: October 20, 2020
    Assignee: LENOVO (BEIJING) CO., LTD.
    Inventor: Peng Wu
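    A minimal Python sketch of the overlap-and-priority step described in the abstract above, assuming the images are same-size NumPy arrays with boolean validity masks; the function and parameter names are illustrative, not taken from the patent:
      import numpy as np

      def compose_third_image(first_img, first_mask, second_img, second_mask,
                              first_has_priority=True):
          # Build the composite ("third") image: outside the overlap, copy
          # whichever source is present; inside the overlap, copy the source
          # with the higher display priority (the "first area" of the claim).
          out = np.zeros_like(first_img)
          overlap = first_mask & second_mask
          only_first = first_mask & ~second_mask
          only_second = second_mask & ~first_mask
          out[only_first] = first_img[only_first]
          out[only_second] = second_img[only_second]
          winner = first_img if first_has_priority else second_img
          out[overlap] = winner[overlap]
          return out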
  • Patent number: 10810796
    Abstract: Systems and methods according to various embodiments enable a user to view three-dimensional representations of data objects (“nodes”) within a 3D environment from a first person perspective. The system may be configured to allow the user to interact with the nodes by moving a virtual camera through the 3D environment. The nodes may have one or more attributes that may correspond, respectively, to particular static or dynamic values within the data object's data fields. The attributes may include physical aspects of the nodes, such as color, size, or shape. The system may group related data objects within the 3D environment into clusters that are demarked using one or more cluster designators, which may be in the form of a dome or similar feature that encompasses the related data objects. The system may enable multiple users to access the 3D environment simultaneously, or to record their interactions with the 3D environment.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: October 20, 2020
    Assignee: SPLUNK INC.
    Inventors: Roy Arsan, Alexander Raitz, Clark Allan, Cary Glen Noel
  • Patent number: 10810777
    Abstract: A GPU receives an image comprising an array of pixels. The image depicts features in a field of an object on a background. The features and the background contrast with the object field, and at least a portion of the object is at the center of the image. In parallel for each particular pixel of a first plurality of the pixels, the GPU sets the color value of the particular pixel to the lightest color value of a second plurality of the pixels substantially along a line outward from the particular pixel toward an edge of the image. The line can be defined by the particular pixel and the image center.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: October 20, 2020
    Assignee: Google LLC
    Inventor: John Day-Richter
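    A naive Python/NumPy sketch of the per-pixel rule in the abstract above: each pixel takes the lightest value found along the ray from the image center through that pixel out to the image edge. The patent describes a parallel GPU implementation; this CPU loop and the grayscale input are simplifying assumptions for illustration:
      import numpy as np

      def lighten_along_rays(gray):
          # gray: 2-D array of intensity values; a higher value is "lighter".
          h, w = gray.shape
          cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
          out = np.empty_like(gray)
          for y in range(h):
              for x in range(w):
                  dy, dx = y - cy, x - cx
                  n = max(abs(dy), abs(dx))
                  if n == 0:                      # center pixel: ray has zero length
                      out[y, x] = gray[y, x]
                      continue
                  dy, dx = dy / n, dx / n         # step outward roughly one pixel at a time
                  best, py, px = gray[y, x], float(y), float(x)
                  while 0 <= py < h and 0 <= px < w:
                      best = max(best, gray[int(py), int(px)])
                      py += dy
                      px += dx
                  out[y, x] = best
          return out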
  • Patent number: 10810797
    Abstract: Systems and methods for generating and animating virtual representations to a wearer of a HMD device are disclosed. A virtual representation associated with a real-world object is retrieved based on received input data. The retrieved virtual representation is rendered for display to the wearer. Sensor data tracking one or more of the real-world object and the wearer is also received. The rendered virtual representation can be further animated based on the sensor data.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: October 20, 2020
    Assignee: OTOY, INC.
    Inventors: Julian Michael Urbach, Nicolas Lazareff, Clay Sparks
  • Patent number: 10809081
    Abstract: Techniques for assisting a passenger to identify a vehicle and for assisting a vehicle to identify a passenger are discussed herein. Also discussed herein are techniques for capturing data via sensors on a vehicle or user device and for presenting such data in various formats. For example, in the context of a ride hailing service using autonomous vehicles, the techniques discussed herein can be used to identify a passenger of the autonomous vehicle at the start of a trip, and can be used to assist a passenger to identify an autonomous vehicle that has been dispatched for that particular passenger. Additionally, data captured by sensors of the vehicle and/or by sensors of a user device can be used to initiate a ride, determine a pickup location, orient a user within an environment, and/or provide visualizations or augmented reality elements to provide information and/or enrich a user experience.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: October 20, 2020
    Assignee: Zoox, Inc.
    Inventors: Timothy David Kentley-Klay, Duncan John Curtis, Donovan Anton Bass, Michael Moshe Kohen, Auver Cedric Austria
  • Patent number: 10809530
    Abstract: An information processing apparatus including circuitry that acquires information associated with a user situation, determines a display mode based on the information associated with the user situation, and enables an operation unit to receive a user input based on a target displaying in the determined display mode.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: October 20, 2020
    Assignee: SONY CORPORATION
    Inventors: Jun Kimura, Tsubasa Tsukahara, Ryo Fukazawa, Tomohisa Tanaka
  • Patent number: 10807529
    Abstract: An image processing apparatus for a vehicle includes an additional-image generation portion generating an additional image to be added to a captured image, and an output image generation portion generating an output image including the captured image and the additional image. The additional image includes a first marking image indicating a first line that is spaced from an end portion of the vehicle in the vehicle width direction by at least the vehicle width of the vehicle, the first line extending along the vehicle front-rear direction.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: October 20, 2020
    Assignee: AISIN SEIKI KABUSHIKI KAISHA
    Inventor: Kazuya Watanabe
  • Patent number: 10805544
    Abstract: An image pickup apparatus is capable of appropriately switching which display device is used for display. A first display device is located at a rear face of a body of the apparatus and is rotatable upward around a first rotation axis. A second display device has an eyepiece and is movable between a housing position and a use position. A first sensor detects whether the eyepiece is in the use position. A second sensor detects an object approaching the eyepiece. A controller controls the first and second display devices based on detection results of the first and second sensors. The controller enables predetermined control set in association with the detection result of the second sensor when the first sensor detects that the eyepiece is in the use position and disables the predetermined control when the first sensor detects that the eyepiece is not in the use position.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: October 13, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yoichi Osada
  • Patent number: 10803652
    Abstract: There are provided an image generating apparatus, an image generating method, and a program for generating an image that allows each of the users sharing a virtual space to see what the other users are looking at. A virtual space managing section (126) arranges a fixation point object indicative of a fixation point at a position away from a first viewpoint in a first visual line direction passing through the first viewpoint arranged in a virtual space, the first visual line direction corresponding to the attitude of a first head-mounted display. A frame image generating section (128) generates an image indicating how things look from a second viewpoint in a second visual line direction corresponding to the attitude of a second head-mounted display, the second viewpoint being arranged in the virtual space in which the fixation point object is arranged.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: October 13, 2020
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Akio Ohba, Taeko Sanada
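    A small Python sketch of the placement described above: the fixation-point object is put a fixed distance along the first user's gaze ray, and the second head-mounted display simply renders the scene (including that marker) from its own viewpoint. The 3.0 m distance and the names are illustrative assumptions:
      import numpy as np

      def fixation_point(first_viewpoint, first_gaze_dir, distance=3.0):
          # Position of the fixation-point object: viewpoint + distance * unit gaze direction.
          d = np.asarray(first_gaze_dir, dtype=float)
          d = d / np.linalg.norm(d)
          return np.asarray(first_viewpoint, dtype=float) + distance * d

      marker_pos = fixation_point([0.0, 1.6, 0.0], [0.0, -0.1, -1.0])
      # The second head-mounted display then renders its own view of the shared
      # virtual space with a marker object placed at marker_pos.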
  • Patent number: 10802787
    Abstract: A method and system for integrating audience participation content into virtual reality (VR) content presented by a head mounted display (HMD) of an HMD user is provided. The method includes providing a VR scene to the HMD of an HMD user and receiving indications from one or more spectator devices of one or more spectators that correspond to requests for audience participation content for participating in the VR scene. The method and system are capable of sending audience participation content to the spectator devices for enabling spectator interactivity with the audience participation content via respective spectator devices. The method and system provided also enable receiving spectator input, which may then be integrated into the VR scene of the HMD.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: October 13, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn T. Black, Michael G. Taylor, Todd Tokubo
  • Patent number: 10802577
    Abstract: One example provides, on a computing device comprising a display, a method of initiating and conducting voice communication with a contact. The method comprises displaying a user interface on the display, receiving a user input of a position signal for the user interface, and determining that the position signal satisfies a selection condition for a contact based on a location of the position signal on the user interface and a position of a proxy view of the contact on the user interface. The method further comprises, in response to determining that the position signal satisfies the selection condition, selecting the contact for communication, receiving voice input, and responsive to receiving the voice input while the contact is selected for communication, opening a voice communication channel with the contact and sending the voice input to the contact via the voice communication channel.
    Type: Grant
    Filed: June 4, 2015
    Date of Patent: October 13, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anatolie Gavriliuc, Dan Osborn, Stephen Heijster
  • Patent number: 10796473
    Abstract: In one embodiment, a method includes retrieving a video stream that was recorded while a first artificial-reality effect was being displayed on the video stream, where each frame of the video stream comprises a real-world scene without the first artificial-reality effect, retrieving an artificial-reality state information stream corresponding to the video stream, where the artificial-reality state information stream comprises state information associated with the first artificial-reality effect, retrieving one or more contextual data streams corresponding to the video stream, where the first artificial-reality effect displayed on the video stream was rendered based on at least a portion of the one or more contextual data streams, rendering a second artificial-reality effect based on at least a portion of the artificial-reality state information stream and a portion of the one or more contextual data streams, and displaying the second artificial-reality effect on the video stream.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: October 6, 2020
    Assignee: Facebook, Inc.
    Inventors: Jonathan Lim, Yuanshuo Lu, Mohmmad Feisal Saleh Amir Rasras
  • Patent number: 10796493
    Abstract: Disclosed herein are an apparatus and method for calibrating an augmented-reality image. The apparatus includes a camera unit for capturing an image and measuring 3D information pertaining to the image, an augmented-reality image calibration unit for generating an augmented-reality image using the image and the 3D information and for calibrating the augmented-reality image, and a display unit for displaying the augmented-reality image.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: October 6, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jae-Hean Kim, Bon-Ki Koo, Chang-Joon Park
  • Patent number: 10788895
    Abstract: A measurement system and a measurement method using the same are provided. Firstly, a measurement system is provided. The measurement system comprises a film, a sensor and a movement information calculator, wherein the film has a patterned structure layer, and the sensor is electrically isolated from and selectively in contact with the film. Then, the sensor directly contacts the patterned structure layer and generates a sensing signal during relative movement process between the sensor and the film. Then, the movement information calculator obtains at least one of a relative movement amount and a relative movement speed during the relative movement process according to the sensing signal.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: September 29, 2020
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventor: Hsuan-Yu Lin
  • Patent number: 10787076
    Abstract: A vehicle display device includes a display unit having a pointer, a dial plate as a background of the pointer, and a frame surrounding the dial plate, at least one of the pointer, the dial plate, and the frame being a physical element, and a virtual image display device that displays a virtual image overlapping with the physical element, in which the virtual image display device changes at least one of a color and a design of the virtual image overlapping with the physical element.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: September 29, 2020
    Assignee: YAZAKI CORPORATION
    Inventors: Kenzo Yamamoto, Gosei Sato
  • Patent number: 10789781
    Abstract: Systems and methods for providing an interactive augmented experience using prerecorded video include: creating a scene model based on an image of a physical environment; generating a fantasy object; integrating a position of the fantasy object onto the scene model; determining a state of the fantasy object; selecting, using a type of meta data, one or more frames of a pre-recorded video of the physical environment associated with a desired physical camera, such that each of the frames is associated with a frame number and acquired with a physical camera; synchronizing a virtual camera with the desired physical camera; and projecting, using a first video player or a second video player, the one or more frames onto the scene model to position the scene model relative to the fantasy object, such that the projecting alternates between the first video player and the second video player.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: September 29, 2020
    Assignee: Playdeo Limited
    Inventors: Jack Schulze, Timo Arnall, Nicholas Ludlam
  • Patent number: 10788888
    Abstract: A method of facilitating capturing visual information at a first location (1) for display at a second location (2) is disclosed. At the first location, a virtual reality device (11) can be configured to render visual information representing a virtual environment, and at least one capturing device (12) can be configured to capture information representing a real environment comprising the virtual reality device (11). At the second location, at least one monitor (21) can be configured to render the information captured by the at least one capturing device (12). The method comprises determining an orientation of at least one capturing device (12) relative to the virtual reality device (11), and providing in the virtual environment a visual indication of said orientation.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: September 29, 2020
    Assignees: KONINKLIJKE KPN N.V., NEDERLANDSE ORGANISATIE VOOR TOEGEPAST-NATUURWETENSCHAPPELIJK ONDERZOEK TNO
    Inventors: Martin Prins, Hans Maarten Stokking, Emmanuel Thomas, Omar Aziz Niamut, Mattijs Oskar Van Deventer
  • Patent number: 10789782
    Abstract: A near-eye display (NED) has an orientation detection device and a display block. The orientation detection device collects orientation data that describe an orientation of the NED. The display block has a display assembly, a focusing assembly, and a controller. The controller determines an orientation vector of the NED based in part on the orientation data and computes an angular difference between the orientation vector of the NED and a gravity vector. After comparing the angular difference to a threshold value, the controller generates multifocal instructions that adjust the optical element to display an augmented scene at the selected image plane corresponding to the multifocal instructions.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: September 29, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Lu Lu, Ji Luo, Nicholas Daniel Trail, Kevin James MacKenzie, Pasi Saarikko, Andrew John Ouderkirk, Scott Charles McEldowney
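    A Python sketch of the comparison described above: the angle between the headset orientation vector and gravity is computed and checked against a threshold to choose an image plane. The 30-degree threshold, the gravity convention, and the plane labels are assumptions for illustration only:
      import numpy as np

      def select_image_plane(orientation_vec, gravity_vec=(0.0, -1.0, 0.0),
                             threshold_deg=30.0):
          o = np.asarray(orientation_vec, float)
          o = o / np.linalg.norm(o)
          g = np.asarray(gravity_vec, float)
          g = g / np.linalg.norm(g)
          angle = np.degrees(np.arccos(np.clip(np.dot(o, g), -1.0, 1.0)))
          # A small angle to gravity (looking downward, e.g. at a handheld task)
          # selects a near image plane; otherwise a far plane is used.
          return "near_plane" if angle < threshold_deg else "far_plane"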
  • Patent number: 10789952
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: September 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
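    A Python sketch of the fusion step described above, assuming the two auxiliary inputs are a gaze sensor and a gesture sensor and that per-object likelihoods are simply multiplied; the fusion rule and all names are illustrative, not taken from the patent:
      def pick_target(objects, gaze_probs, gesture_probs):
          # objects: ids of objects in the user's field of view.
          # gaze_probs / gesture_probs: dicts mapping id -> likelihood that the
          # corresponding auxiliary input is directed at that object.
          scores = {obj: gaze_probs.get(obj, 0.0) * gesture_probs.get(obj, 0.0)
                    for obj in objects}
          return max(scores, key=scores.get)

      target = pick_target(["lamp", "door", "tv"],
                           {"lamp": 0.7, "door": 0.2, "tv": 0.1},
                           {"lamp": 0.5, "door": 0.4, "tv": 0.1})
      # -> "lamp"; the voice command is then executed on this target object.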
  • Patent number: 10789459
    Abstract: An information processing apparatus is used in contact with a user, and includes a detection unit and a control unit. The detection unit detects motion of the user, the motion being performed to operate an object as an operation target that is present in a real space from a facing position, and the motion being performed in the real space without contacting the object. The control unit instructs the object to execute an operation corresponding to the detected motion of the user.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: September 29, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Kengo Tokuchi
  • Patent number: 10783711
    Abstract: An intelligent recommendation system and method for suggesting that users perform various tasks in different reality systems, to help users maximize their productivity and achieve better satisfaction. On recognizing that a user is inefficient at or unable to perform a task well in real reality (RR), the system suggests that the user try doing a similar task in a virtual reality (VR) or augmented reality (AR) environment and effects a physical switch to that environment for the user to practice and improve their skill on the task. Further, on recognizing the user's emotions (e.g., a sad or bad mood), the system suggests that the user do certain things in the VR or AR environment to improve their mood. The system and method continuously suggest performing tasks in VR or AR as needed, based on the user's task efficiency score in real reality and any improvement observed in RR after doing the task in VR.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lin Sun, Liam S. Harpur, Matthew E. Broomhall, Paul R. Bastide
  • Patent number: 10785471
    Abstract: A system for content upsampling comprises a console and a head-mounted display (HMD). The console can select content for presentation and provide the content at a first frame rate. The HMD outputs fast calibration data comprising one or more intermediate estimated positions of a reference point on the HMD. The HMD estimates future positions of the HMD using the fast calibration data, and generates synthetic frames using frames from the content and the future positions of the HMD. The HMD then upsamples the content from the first frame rate to a second frame rate using the synthetic frames to generate augmented content, wherein the second frame rate is faster than the first frame rate. The HMD presents the augmented content at the second frame rate via an electronic display.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: September 22, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Warren Andrew Hunt, Devin Boyer, Nathan Bialke, William Howe-Lott, Hung Huu Nguyen
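    A Python sketch of the upsampling idea described above: between source frames, synthetic frames are inserted that have been reprojected to predicted future head poses, roughly doubling the frame rate. The reproject callable is a placeholder for the actual warp step, and the 2x factor is an assumption:
      def upsample(frames, predicted_poses, reproject):
          # frames: content frames at the first (lower) frame rate.
          # predicted_poses: one predicted HMD pose per synthetic frame.
          # reproject(frame, pose): placeholder that warps a frame to a pose.
          out = []
          for i, frame in enumerate(frames):
              out.append(frame)                                     # original frame
              if i < len(predicted_poses):
                  out.append(reproject(frame, predicted_poses[i]))  # synthetic frame
          return out                      # presented at the second, faster rate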
  • Patent number: 10782774
    Abstract: A mechanism is described for facilitating dynamic rendering of non-visual marker-based augmented reality experiences on computing devices according to one embodiment. A method of embodiments, as described herein, includes detecting non-visual data. The non-visual data may be captured via one or more capturing/sensing components of a computing device. The method may further include mapping the non-visual data with one or more augmented reality items to generate a first augmented reality experience, wherein the non-visual data is converted into one or more non-visual markers based on the one or more augmented reality items and one or more contexts, and rendering the first augmented reality experience based on the one or more non-visual markers.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: September 22, 2020
    Assignee: INTEL CORPORATION
    Inventor: Glen J. Anderson
  • Patent number: 10785621
    Abstract: This disclosure relates to systems and methods to provide an interactive space based on vehicle-to-vehicle communications. A vehicle may store experience information and/or other information. The experience information may define virtual content to be presented to a user residing in the vehicle to create an interactive space. The virtual content may be associated with an experience location in a real-world environment. Responsive to a vehicle location of the vehicle being at or near the experience location, the user may be presented with views of the virtual content. The user may interact with the virtual content causing an update of the experience information. Upon detection of presence of a second vehicle, the vehicle may communicate the updated experience information to the second vehicle.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: September 22, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Corey D. Drake, Hunter Gibson, Jason Yeung, Michael P. Goslin
  • Patent number: 10777011
    Abstract: An electronic apparatus, a wireless communication system, a wireless communication method, and a computer-readable storage medium are provided. The electronic apparatus includes a processing circuit configured to: determine a current audiovisual angle of a user; compare the current audiovisual angle of the user with an expected audiovisual angle, and generate indication information for directing the user to the expected audiovisual angle, and provide the indication information to the user. The indication information directs the user to the expected audiovisual angle by using a direct direction indication and an indirect direction indication. With the electronic apparatus, the wireless communication system, the wireless communication method, and the computer-readable storage medium, the user can obtain a better visual feeling, and thus the user experience can be improved.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: September 15, 2020
    Assignee: SONY CORPORATION
    Inventors: Kai Xiao, Qi Zheng, Zhongsheng Hong, Jia Han
  • Patent number: 10777009
    Abstract: The present invention relates to a system and method of placing augmented reality renderings within the context of physical space imagery, creating an immersive augmented reality experience. The method comprises the steps of imaging a physical space, by way of a camera-enabled mobile device, to create physical space imagery; initiating data communication with a remote agent; selecting, by way of the remote agent, at least one augmented reality rendering; and communicating the augmented reality rendering, for inclusion within the physical space imagery, forming an immersive augmented reality experience viewable and usable by a consumer on the camera-enabled mobile device.
    Type: Grant
    Filed: February 18, 2018
    Date of Patent: September 15, 2020
    Assignee: CN2, Inc.
    Inventors: Margaret A. Martin, Alex S. Hill, Harrison D. Leach
  • Patent number: 10777010
    Abstract: An environment map, such as a cube map, can be dynamically generated for a scene using image data captured by a device executing an augmented reality (AR) application. The current orientation of the device, along with field of view information for the camera, can be used to determine a portion of the environment map that corresponds to the captured image data. The image data can be used to fill in that portion, and subsequent image data captured for other orientations used to fill the remaining portions. The generated environment map can then be used to render AR content to be displayed with a live view of the scene. This can include determining the illumination and/or reflections effects for the AR content. The rendered AR content can be displayed as an overlay with respect to a live view of the scene.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: September 15, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Pratik Patel, Sidharth Moudgil, Richard Schritter
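    A rough Python sketch of filling part of an environment map from a captured frame using the device orientation and camera field of view, as described above. An equirectangular map and yaw-only orientation are simplifying assumptions (the abstract mentions cube maps), and the resampling is deliberately crude:
      import numpy as np

      def fill_env_map(env_map, frame, yaw_deg, h_fov_deg=60.0):
          # env_map: (H, W, 3) equirectangular panorama; frame: (h, w, 3) capture.
          H, W, _ = env_map.shape
          center = int(((yaw_deg % 360.0) / 360.0) * W)
          half = max(1, int((h_fov_deg / 360.0) * W / 2))
          cols = [(center + c) % W for c in range(-half, half)]
          patch = np.resize(frame, (H, len(cols), 3))   # crude stand-in for resampling
          env_map[:, cols, :] = patch                   # fill the covered portion
          return env_map
      # Repeating this as the device rotates gradually completes the map, which can
      # then drive illumination and reflection effects for the rendered AR content.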
  • Patent number: 10771350
    Abstract: A system for enabling an augmented reality application includes, in a mobile device of a user, displaying a real-world object and allowing the user to interact with the real-world object by drawing a virtual outline of the object and building up the current configuration or its virtual twin. A computer processor retrieves a list of possible component objects of the real-world object. The user interacts with a visual depiction of the scene and moves candidate component objects to the virtual outline of the real-world object. The configuration of the real-world object is saved, allowing the AR application to retrieve the verified configuration and produce augmented overlays on top of the visual depiction of the real-world object. The configuration is verified by the user using spatial relationships of the component objects associated with the real-world object. The verified object configuration may be tracked in the AR application without needing a recognition procedure.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: September 8, 2020
    Assignee: Siemens Aktiengesellschaft
    Inventor: Mareike Kritzler
  • Patent number: 10766483
    Abstract: A negator module of a predictive motion system determines initial parameters for a passenger profile using a virtual reality system of an autonomous vehicle. The negator module receives upcoming driving conditions from an autonomous navigation system of the autonomous vehicle during a ride in which the passenger resides in a seat of the autonomous vehicle and uses the virtual reality system. Using a cognitive model, the negator module predicts a cognitive state of the passenger based on the passenger profile and the upcoming driving conditions. The negator module determines commands for actuators coupled to the seat and commands for the virtual reality system that match the predicted cognitive state of the passenger. The negator module sends the commands to the actuators and the virtual reality system to be executed.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: September 8, 2020
    Assignee: International Business Machines Corporation
    Inventors: Mauro Marzorati, Shikhar Kwatra, Jeremy R. Fox, Sarbajit K. Rakshit
  • Patent number: 10768951
    Abstract: Aspects of the disclosure relate to providing augmented reality user interfaces and controlling automated systems based on user activity information and pre-staging information. A computing platform may receive, from a client user device, a trip start notification indicating that a user of the client user device is initiating a trip to an enterprise center. In response to receiving the trip start notification, the computing platform may generate a pre-staging augmented reality user interface for a client augmented reality device linked to the client user device. Thereafter, the computing platform may receive pre-staging information identifying one or more events to be performed at the enterprise center when the user of the client user device arrives at the enterprise center. The computing platform may generate one or more pre-staging commands based on the pre-staging information and may send these commands to one or more systems associated with the enterprise center.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: September 8, 2020
    Assignee: Bank of America Corporation
    Inventor: Matthew E. Carroll
  • Patent number: 10766498
    Abstract: A method, a device, and a computer-readable storage medium with instructions for controlling a display of an augmented reality display device for a transportation vehicle. The presence of a driving situation is detected, in which a warning is displayed to a driver of the transportation vehicle; the augmented reality display device generates a virtual object for display; the virtual object visualizes potential imminent events, actions or dangerous situations; the virtual object has moving graphical elements that simulate a movement of the virtual object; and the augmented reality display device outputs the generated virtual object for display.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: September 8, 2020
    Assignee: VOLKSWAGEN AKTIENGESELLSCHAFT
    Inventors: Andro Kleen, Daniel Morales Fernández, Adrian Benjamin Haeske, Adrian Haar, Vitalij Sadovitch
  • Patent number: 10771772
    Abstract: According to the present invention, in an image display device that displays two images in parallel, a peak current for light-source driving, and the continuation of such a peak current, are suppressed without reducing the luminance of the display image. An image display device includes a light source unit, two panel units, and a timing control unit. A period of one frame of an image displayed by each panel unit has a standby period in which the panel unit is not illuminated by the light source unit for image generation and a lighting period in which the panel unit is illuminated by the light source unit. The timing control unit controls an operation timing such that the periods of one frame of the images displayed by the respective panel units overlap, and the frame start times of the periods are shifted from each other by a predetermined delay time.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 8, 2020
    Assignee: HITACHI-LG DATA STORAGE, INC.
    Inventors: Ryuji Ukai, Yoshiho Seo
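    A tiny Python sketch of the timing relationship described above: both panels share the same frame period, but the second panel's frame start is delayed so the two lighting windows do not coincide, which keeps the light-source current peaks apart. All numbers are illustrative:
      def lighting_windows(frame_ms=16.7, standby_ms=10.0, delay_ms=8.35, frames=3):
          # Each frame period = standby period (no illumination) + lighting period.
          windows = []
          for n in range(frames):
              start_a = n * frame_ms                 # panel A frame start
              start_b = start_a + delay_ms           # panel B frame start, shifted
              windows.append({
                  "panel_a_on": (start_a + standby_ms, start_a + frame_ms),
                  "panel_b_on": (start_b + standby_ms, start_b + frame_ms),
              })
          return windows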
  • Patent number: 10771343
    Abstract: A system, method, and computer-implemented method for automatically and robustly standardizing electronic equipment identifiers with service profile identifiers in a domain of a unified computing system to better facilitate identification of the physical equipment. A particular domain in a domain list may be accessed, and a service profile identifier list of service profile identifiers and an equipment identifier list of equipment identifiers for the particular domain may be accessed. If a particular equipment identifier assigned to a particular piece of equipment (e.g., a blade server) does not include the particular service profile identifier assigned to that equipment, then the particular equipment identifier is changed to include the particular service profile identifier to facilitate identification of the physical equipment. Each service profile identifier in the particular domain is checked, and the process is repeated for any additional domains.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: September 8, 2020
    Assignee: Mastercard International Incorporated
    Inventor: Chase A. Aleshire
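    A Python sketch of the standardization pass described above: every domain's equipment identifiers are checked against the assigned service profile identifiers, and any identifier that does not embed its profile identifier is rewritten. The data layout and the renaming rule are illustrative assumptions:
      def standardize(domains):
          # domains: list of dicts such as
          #   {"name": "dc-1", "assignments": [(service_profile_id, equipment_id), ...]}
          for domain in domains:
              fixed = []
              for sp_id, eq_id in domain["assignments"]:
                  if sp_id not in eq_id:
                      eq_id = f"{eq_id}-{sp_id}"   # embed the service profile identifier
                  fixed.append((sp_id, eq_id))
              domain["assignments"] = fixed
          return domains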
  • Patent number: 10762714
    Abstract: A virtual reality system includes a platform, a headset, a mount, and a control unit. The headset includes a motion-sensing unit and a display unit configured to display a video of a virtual environment. The mount is positioned on the platform and configured to releasably engage the headset. While the headset is engaged with the mount, the headset is positioned in a first position. While the headset is disengaged from the mount, the headset is positioned in a second position. The control unit is connected to the headset and configured to receive first data representing the first position and associate the first position with a predetermined first perspective of the virtual environment. The control unit is also configured to receive second data representing the second position, determine a second perspective of the virtual environment corresponding to the second position, and provide video of the virtual environment from the second perspective.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 1, 2020
    Assignee: DREAMWORKS ANIMATION LLC
    Inventors: Brad Kenneth Herman, St. John Colón
  • Patent number: 10762652
    Abstract: A head-mounted device (HMD) is configured to perform depth detection in conjunction with movement tracking. The HMD includes a stereo camera pair comprising a first camera and a second camera, both of which are mounted on the HMD. The fields of view for both of the cameras overlap to form an overlapping field of view. These cameras are configured to detect both visible light and infrared (IR) light. The HMD also includes an IR dot-pattern illuminator that is configured to emit an IR dot-pattern illumination. The HMD uses the IR dot-pattern illumination to determine an object's depth. The HMD also includes one or more flood IR light illuminators that emit a flood of IR light. The HMD uses the flood of IR light to track at least its own movements, and sometimes even hand movements, in various environments, even low light environments.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: September 1, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Francis Reidy
  • Patent number: 10762678
    Abstract: A device may generate, based on receiving feeder content, a structured format of the feeder content. The device may generate, based on the structured format of the feeder content, one or more semantic mappings for the feeder content. The device may generate, based on the one or more semantic mappings for the feeder content, an electronic storyboard of the feeder content. The device may generate an extended reality rendered content feed based on the electronic storyboard of the feeder content. The device may provide the extended reality rendered content feed to an extended reality device.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: September 1, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Sunjeet Gupta, Rahul Mantri, Asheesh Gupta, Inderjit Singh, Sudhir Kudva Karkale, Shridhar Rajgopalan
  • Patent number: 10761786
    Abstract: A communication apparatus is provided which includes a control unit configured to perform control so that an execution frequency of predetermined communication in a state in which a connection using a communication method other than a Neighbor Awareness Network is established is lower than an execution frequency of the predetermined communication in a state in which a connection using the communication method other than the Neighbor Awareness Network is not established.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: September 1, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Atsushi Shimazaki
  • Patent number: 10758824
    Abstract: A virtual reality system comprising a headset, a primary battery, a secondary battery (which provides backup power), and an output device which displays a 3-D virtual world (VW) based on the orientation and position of the headset. Swapping of the primary battery with a physical battery is viewed in the VW using virtual counterparts of the primary battery and the secondary battery in an output device in the headset. A user physically swaps the batteries while “seeing” the swapping taking place in the VW. The locations at which the user sees (through the headset) the primary battery and secondary battery correspond to their locations in the real world relative to the headset, so that while the user physically feels (with his/her hands) the primary battery and the secondary battery, the locations of the primary battery and the secondary battery in the VW correspond to what the user feels, so that the swapping feels “real.”
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: September 1, 2020
    Assignee: Out of Sight Vision Systems LLC
    Inventor: Jon Muskin
  • Patent number: 10755486
    Abstract: To prevent virtual content from occluding physical objects in an augmented reality (AR) system, the embodiments herein describe generating a pass-through texture using a pre-generated model of a physical object. In one embodiment, the AR system determines the location of the physical object as well as its orientation. With this information, the system rotates and re-sizes the pre-generated model of the object to match its location and orientation in the real-world. The system then generates the pass-through texture from the pre-generated model and inserts the texture into the virtual content. When the virtual content is displayed, the pass-through texture permits light from the real-world view to pass through substantially unaffected by the virtual content. In this manner, the physical object (which aligns with the location of the pass-through texture in the virtual content) can be seen by the user—i.e., is not occluded by the virtual content.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: August 25, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Elliott H. Baumbach, Jonathan R D Hsu
  • Patent number: 10755434
    Abstract: A method includes acquiring, from a camera, an image data sequence of a real object in a real scene and performing a first template-matching on an image frame in the image data sequence using intensity-related data sets stored in one or more memories to generate response maps. The intensity-related data sets represent an intensity distribution of a reference object from respective viewpoints. The reference object corresponds to the real object. A candidate region of interest is determined for the real object in the image frame based on the response maps, and second template-matching is performed on the candidate region of interest using shape-related feature data sets stored in one or more memories to derive a pose of the real object. The shape-related feature data sets represent edge information of the reference object from the respective viewpoints.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: August 25, 2020
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Dibyendu Mukherjee, Irina Kezele, Mikhail Brusnitsyn
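    A Python/OpenCV sketch of the two-stage matching described above: coarse intensity template matching produces response maps and a candidate region of interest, then edge-based matching inside that region picks the best viewpoint. OpenCV calls are used for brevity; the thresholds, data layouts, and scoring rule are assumptions:
      import cv2
      import numpy as np

      def coarse_then_fine(frame_gray, intensity_templates, edge_templates):
          # Stage 1: one response map per viewpoint template; keep the best hit.
          best_score, best_loc, best_view = -1.0, None, None
          for view, tmpl in intensity_templates.items():
              resp = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
              _, score, _, loc = cv2.minMaxLoc(resp)
              if score > best_score:
                  best_score, best_loc, best_view = score, loc, view
          # Stage 2: edge matching restricted to the candidate region of interest.
          th, tw = intensity_templates[best_view].shape
          x, y = best_loc
          roi_edges = cv2.Canny(frame_gray[y:y + th, x:x + tw], 50, 150)
          scores = {}
          for view, edge_tmpl in edge_templates.items():
              e = cv2.resize(edge_tmpl, (tw, th))
              scores[view] = int(np.logical_and(roi_edges > 0, e > 0).sum())
          return best_loc, max(scores, key=scores.get)   # location + best viewpoint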
  • Patent number: 10755342
    Abstract: Examples of a multisource augmented reality model are defined. In an example, the system receives a query from a user. The system obtains representative data corresponding to an environment associated with the query and identifies at least one context therein. The system obtains product parameter data and identifies a parameter set therein to process the query. The system implements an artificial intelligence component to sort the product parameter data, the representative data, and the context for identifying pertinent data domains associated with the query. The system may establish a product augmented reality model corresponding to the product by performing a first cognitive learning operation on a domain from the updated pertinent data domains and the identified parameter set. The system may generate a list of related products for guided selling, facilitating a shopping decision of the user. The system may generate an augmented reality result for the user.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: August 25, 2020
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Sailatha Karthikeyan, Marin Grace Mercylawrence, Prasanna Srinivasa Rao
  • Patent number: 10755456
    Abstract: A method and an apparatus for displaying information of multiple objects are provided. The method includes the following steps: capturing an image within a sight of a user viewing a transparent display; identifying multiple objects in the image to generate multiple identification frames capable of covering the objects, respectively; generating an auxiliary bounding box capable of covering the identification frames; defining an information display area according to non-overlapping areas of the auxiliary bounding box and the identification frames; and group displaying object information of the objects in the information display area on the transparent display.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: August 25, 2020
    Assignees: Industrial Technology Research Institute, Intellectual Property Innovation Corporation
    Inventors: Jian-Lung Chen, Chih-Chia Chang, Yu-Hsin Lin
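    A Python sketch of the layout step described above: the per-object identification frames are covered by one auxiliary bounding box, and the parts of that box not occupied by any identification frame become the information display area. Boxes are (x0, y0, x1, y1) pixel tuples; the mask-based formulation is an illustrative choice, not the patent's method:
      import numpy as np

      def info_display_mask(id_frames, resolution=(1080, 1920)):
          h, w = resolution
          x0 = min(b[0] for b in id_frames); y0 = min(b[1] for b in id_frames)
          x1 = max(b[2] for b in id_frames); y1 = max(b[3] for b in id_frames)
          mask = np.zeros((h, w), dtype=bool)
          mask[y0:y1, x0:x1] = True                  # auxiliary bounding box
          for bx0, by0, bx1, by1 in id_frames:
              mask[by0:by1, bx0:bx1] = False         # cut out each identification frame
          return mask   # True where grouped object information can be drawn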
  • Patent number: 10755472
    Abstract: Methods and apparatuses are provided for transmitting information about an omni-directional image based on user motion information by a server. Motion parameters are received from an apparatus worn by a user for displaying an omni-directional image. User motion information is generated based on the received motion parameters. First packing information corresponding to a user position is generated based on the user motion information. Second packing information corresponding to a position in close proximity to the user position is generated based on the user motion information. Third packing information is generated based on the first packing information and the second packing information. At least one of the first packing information, the second packing information, and the third packing information is transmitted to the apparatus.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: August 25, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ji-Hwan Woo, Eric Yip, Byeong-Doo Choi, Dong-Wook Kwon
  • Patent number: 10755061
    Abstract: The subject technology receives image data including a representation of a physical item. The subject technology analyzes the image data to recognize an object corresponding to an identification indicator of the physical item. The subject technology determines whether the identification indicator of the physical item includes a representation of a barcode. The subject technology extracts verification metadata from the representation of the barcode. The subject technology sends the verification metadata to determine verification information associated with the verification metadata. The subject technology receives the verification information. The subject technology sends the verification information and the verification metadata to determine provenance information associated with the physical item. The subject technology receives, from the second server, the provenance information associated with the physical item.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: August 25, 2020
    Assignee: Snap Inc.
    Inventor: Andrés Monroy-Hernández
  • Patent number: 10748443
    Abstract: A method includes receiving first data defining first user actions associated with a first augmented reality/virtual reality (AR/VR) space. The method also includes translating the first user actions into first commands associated with first visual objects in the first AR/VR space. The method further includes aggregating the first commands into at least one first record and transmitting the at least one first record. The method also includes receiving at least one second record containing second commands associated with second visual objects in a second AR/VR space. The method further includes translating the second commands into second user actions. In addition, the method includes creating or causing a user device to create a replica of the second AR/VR space based on the second user actions.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: August 18, 2020
    Assignee: Honeywell International Inc.
    Inventors: Ramesh Babu Koniki, Subhan Basha Dudekula, Arif Shuail Ahamed, Manas Dutta, Mark Phillips
  • Patent number: 10748342
    Abstract: In a system and method providing for interaction with virtual objects, and interaction between virtual objects, in an augmented reality, or mixed reality, or virtual reality, environment, detected conditions may trigger animated responses from virtual objects placed in a view of a physical environment. The detected conditions may include the detection of a user within a set threshold distance or proximity, the detection of another virtual object within a set threshold placement distance or proximity, the detection of particular environmental conditions in the view of the physical environment, and other such factors. Specific behavioral animations of the virtual objects may be triggered in response to detection of specific virtual objects and/or other conditions.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: August 18, 2020
    Assignee: GOOGLE LLC
    Inventors: Alan Joyce, Douglas Muir, Mark Dochtermann, Bryan Woods, Tarik Abdel-Gawad
  • Patent number: 10747312
    Abstract: An electronic device may have a display and a camera. Control circuitry in the device can gather information on a user's point of gaze using a gaze tracking system and other sensors, can gather information on the real-world image such as information on content, motion, and other image attributes by analyzing the real-world image, can gather user vision information such as user acuity, contrast sensitivity, field of view, and geometrical distortions, can gather user input such as user preferences and user mode selection commands, and can gather other input. Based on the point-of-gaze information and/or other gathered information, the control circuitry can display the real-world image and supplemental information on the display. The supplemental information can include augmentations such as icons, text labels, and other computer-generated text and graphics overlaid on the real world image and can include enhanced image content such as magnified portions of the real-world image.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: August 18, 2020
    Assignee: Apple Inc.
    Inventors: Ramin Samadani, Christina G. Gambacorta, Elijah H. Kleeman, Nicolas P. Bonnier
  • Patent number: 10748347
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: August 18, 2020
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
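    A Python sketch of the local/global switch described above: the target is followed with a local template while it stays inside the boundary area; once it leaves, a whole-frame (global) template tracks camera movement until the target re-enters and local tracking resumes. The tracker objects are placeholders, not a real API:
      def track(frames, boundary, local_tracker, global_tracker):
          mode = "local"
          for frame in frames:
              if mode == "local":
                  pos = local_tracker.update(frame)        # target template match
                  if pos is None or not boundary.contains(pos):
                      mode = "global"                      # target left the boundary area
              else:
                  global_tracker.update(frame)             # track overall frame movement
                  pos = global_tracker.estimate_target()
                  if pos is not None and boundary.contains(pos):
                      mode = "local"                       # target is back; resume local
              yield mode, pos                              # AR sticker is drawn when pos is known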
  • Patent number: 10740987
    Abstract: A method, apparatus, and system for visualizing nonconformance data for a physical object. An augmented reality application in a portable computing device plots, in a defined coordinate cube, points corresponding to nonconformance locations on the physical object. The augmented reality application determines a sub-set of the points plotted that correspond to a region of the physical object visible in an image of the region of the physical object acquired by the portable computing device at a position of the portable computing device, where the sub-set of the points exclude nonconformance locations occluded from view by a physical object structure of the physical object in the image. The augmented reality application displays the nonconformance data for the sub-set of the points visible in the image in association with a sub-set of the nonconformance locations for the physical object in the image displayed on a display system in the portable computing device.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: August 11, 2020
    Assignee: The Boeing Company
    Inventors: Jeremiah Kent Scott, Robert Stephen Kanematsu Baker, Bryan James Beretta, Michael Louis Bernardoni
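    A Python sketch of the visibility selection described above: of the plotted nonconformance points, only those inside the camera frustum and not hidden by the object's own structure are kept for display. The camera and mesh objects are placeholders; a real implementation would ray-cast against a model of the physical object:
      def visible_points(points, camera, occluder_mesh):
          visible = []
          for p in points:                        # p: (x, y, z) in the defined coordinate cube
              if not camera.in_frustum(p):
                  continue                        # outside the acquired image
              if occluder_mesh.ray_hits_before(camera.position, p):
                  continue                        # occluded by the physical object structure
              visible.append(p)
          return visible                          # overlay nonconformance data for these points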
  • Patent number: 10740615
    Abstract: A network system, such as a transport management system, generates a mutual augmented reality (AR) experience for a user and a provider associated with a service. Responsive to receiving a service request, a service management module matches the user with an available provider and monitors the location of the user and provider client devices as the user and provider travel to the pickup location. When the devices are within a threshold distance of each other, an image recognition module monitors live video streams on the devices for the vehicle and the user. Responsive to the vehicle and user entering the field of view of the devices, an AR control module selects computer-generated AR elements and instructs the devices to visually augment the video streams to identify the user and provider to each other and to allow the user and provider to communicate and share data with each other.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: August 11, 2020
    Assignee: Uber Technologies, Inc.
    Inventors: Aaron Matthew Rogan, Wes Leung, Nicolas Garcia Belmonte, Ramik Sadana