Patents Examined by Ryan R Yang
  • Patent number: 11778318
    Abstract: A system and method for operating a depth sensor. A configuration operation can be performed by storing a first sequence of operation steps, which define a first depth sensing mode of operation, and a second sequence of operation steps, which define a second depth sensing mode of operation, in a memory. In response to a first request for depth measurements according to the first depth sensing mode of operation, the depth sensor can be operated in the first mode of operation by causing it to execute the first sequence of operation steps. In response to a second request for depth measurements according to the second depth sensing mode of operation, and without performing an additional configuration operation, the depth sensor can be operated in the second mode of operation by causing it to execute the second sequence of operation steps.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: October 3, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Brian Keith Smith, Koon Keong Shee, Gregory Michael Link
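To make the configure-once, switch-without-reconfiguring idea above concrete, here is a minimal Python sketch; the `DepthSensor` and `OperationStep` classes and the mode names are hypothetical illustrations, not Magic Leap's API.

```python
# Minimal sketch: the depth sensor is configured once with two stored step
# sequences; later requests select a mode by name without reconfiguring.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class OperationStep:
    """One low-level sensor instruction (hypothetical)."""
    name: str
    action: Callable[[], None]


class DepthSensor:
    def __init__(self) -> None:
        self._modes: Dict[str, List[OperationStep]] = {}

    def configure(self, modes: Dict[str, List[OperationStep]]) -> None:
        """Single configuration operation: store every mode's step sequence."""
        self._modes = dict(modes)

    def measure(self, mode: str) -> None:
        """Serve a depth request by executing the stored steps for that mode."""
        for step in self._modes[mode]:
            step.action()


sensor = DepthSensor()
sensor.configure({
    "short_range": [OperationStep("pulse_short", lambda: print("short pulse"))],
    "long_range": [OperationStep("pulse_long", lambda: print("long pulse"))],
})
sensor.measure("short_range")  # first request
sensor.measure("long_range")   # second request, no further configuration needed
```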
  • Patent number: 11775074
    Abstract: Systems, apparatuses, interfaces, and methods for implementing the same, including a mobile device having a camera system, where the systems, apparatuses, interfaces, and methods capture an image and embed the image into a background image selected from a group of background images that they generate based on a location, surroundings, and environment.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: October 3, 2023
    Assignee: Quantum Interface, LLC
    Inventor: Jonathan Josephson
  • Patent number: 11762623
    Abstract: A method of viewing image data of local content is disclosed. An augmented reality view is created by storing a first device coordinate frame (DCF), moving a first registration marker to select a first feature point (FP1) and a second feature point (FP2) on at least one real-world object viewable by the user through a display. A uniform coordinate system (UCS) alignment module stores locations of the registration marker when selecting the FP1 and the FP2, determines a user coordinate frame (UCF) based on the locations of the first registration marker when selecting the FP1 and the FP2, transforms the DCF to the UCF, and displays image data of local content received from a first data source with a projector through the display to the user based on the transformation from the first DCF to the first UCF.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: September 19, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Marc Alan McCall
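As a rough illustration of the frame construction in 11762623 (a sketch, not Magic Leap's implementation): two registered feature points fix the origin and one axis of the user coordinate frame, and an assumed world "up" direction resolves the remaining rotation, giving a 4x4 transform between the DCF and UCF.

```python
# Sketch: build a user coordinate frame (UCF) from two registered feature
# points and express it as a 4x4 rigid transform relative to the device
# coordinate frame (DCF). Two points fix an origin and one axis; the world
# "up" direction assumed here fixes the remaining rotation.
import numpy as np


def ucf_from_feature_points(fp1, fp2, up=(0.0, 0.0, 1.0)):
    fp1, fp2, up = (np.asarray(v, float) for v in (fp1, fp2, up))
    x = fp2 - fp1
    x /= np.linalg.norm(x)                  # x-axis: from FP1 toward FP2
    z = up - np.dot(up, x) * x              # z-axis: "up" made orthogonal to x
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                      # y-axis completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z  # columns: UCF axes expressed in the DCF
    T[:3, 3] = fp1                          # origin: FP1
    return T                                # maps UCF coordinates into the DCF


# Local content authored in the UCF can then be positioned in the DCF for display.
T_dcf_from_ucf = ucf_from_feature_points([0.2, 0.0, 1.5], [1.2, 0.1, 1.5])
point_in_ucf = np.array([0.1, 0.0, 0.0, 1.0])
print((T_dcf_from_ucf @ point_in_ucf)[:3])
```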
  • Patent number: 11754841
    Abstract: An eyepiece waveguide for an augmented reality display system. The eyepiece waveguide can include an input coupling grating (ICG) region. The ICG region can couple an input beam into the substrate of the eyepiece waveguide as a guided beam. A first combined pupil expander-extractor (CPE) grating region can be formed on or in a surface of the substrate. The first CPE grating region can receive the guided beam, create a first plurality of diffracted beams at a plurality of distributed locations, and out-couple a first plurality of output beams. The eyepiece waveguide can also include a second CPE grating region formed on or in the opposite surface of the substrate. The second CPE grating region can receive the guided beam, create a second plurality of diffracted beams at a plurality of distributed locations, and out-couple a second plurality of output beams.
    Type: Grant
    Filed: January 14, 2022
    Date of Patent: September 12, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Samarth Bhargava, Victor Kai Liu, Kevin Messer
  • Patent number: 11704874
    Abstract: Exemplary systems and methods for creating spatial contents in a mixed reality environment are disclosed. In an example, a location associated with a first user in a coordinate space is determined at a first time. A persistent virtual content is generated and associated with the first user's location. A location of a second user at a second time in the coordinate space is determined. The persistent virtual content is presented to the second user via a display at a location in the coordinate space corresponding to the first user's associated location.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: July 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Tushar Arora, Scott Kramarich
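A minimal sketch of the persistence idea in 11704874, assuming a simple in-memory store keyed by location; the class, method, and field names are illustrative only.

```python
# Minimal sketch of persistent spatial content shared across users in one
# coordinate space; names are illustrative, not the patented system's API.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Location = Tuple[float, float, float]


@dataclass
class SpatialContentStore:
    contents: Dict[Location, List[str]] = field(default_factory=dict)

    def place(self, user_location: Location, payload: str) -> None:
        """First user: persist content at the location associated with that user."""
        self.contents.setdefault(user_location, []).append(payload)

    def visible_to(self, user_location: Location, radius: float = 2.0) -> List[str]:
        """Second user, later in time: retrieve content anchored near their location."""
        def dist2(a: Location, b: Location) -> float:
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return [p for loc, items in self.contents.items()
                if dist2(loc, user_location) <= radius ** 2 for p in items]


store = SpatialContentStore()
store.place((1.0, 0.0, 3.0), "virtual note")   # first user at time t1
print(store.visible_to((1.5, 0.0, 2.6)))        # second user at time t2
```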
  • Patent number: 11670080
    Abstract: An augmented-reality system classifies a subject observed in video data obtained from a first area. A goal of a user of an augmented reality device is determined based on video data obtained from a second area. A correlation between the goal of the user and the classification of the subject is determined. An augmented reality display is generated to include a visual indicia of the subject, the visual indicia generated to represent the correlation between the goal and the classification.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: June 6, 2023
    Assignee: Vulcan, Inc.
    Inventors: Richard Earl Simpkinson, Omer Rosenbaum, Rusty Allen Gerard, Keith Rosema
  • Patent number: 11670081
    Abstract: In one example, a method performed by a processing system including at least one processor includes identifying an environment surrounding a user of an augmented reality display, identifying a relative location of the user within the environment, determining a field of view of the augmented reality display, identifying a room within the field of view, querying a data source for current information about the room, and modifying the augmented reality display to present the current information about the room.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: June 6, 2023
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Walter Cooper Chastain, Barrett Kreiner, James Pratt, Adrianne Luu, Robert Moton, Jr., Robert Koch, Ari Craine
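The pipeline in 11670081 (locate the user, identify the room within the field of view, query a data source, update the overlay) can be sketched in a few lines; the 2D bearing test, room data, and function names below are assumptions for illustration.

```python
# Illustrative sketch of the flow in the abstract above (not AT&T's code):
# locate the user, test which room center falls inside the display's field
# of view, query a (here, in-memory) data source, and build the overlay text.
import math

rooms = {                       # hypothetical data source: room -> ((x, y), info)
    "Conference A": ((4.0, 1.0), "Occupied until 3 pm"),
    "Lab 2":        ((0.0, 5.0), "Air quality: good"),
}


def room_in_view(user_xy, heading_rad, fov_rad):
    ux, uy = user_xy
    for name, ((rx, ry), info) in rooms.items():
        bearing = math.atan2(ry - uy, rx - ux)
        delta = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
        if abs(delta) <= fov_rad / 2:        # room center lies inside the FOV
            return name, info
    return None, None


name, info = room_in_view(user_xy=(0.0, 0.0), heading_rad=0.0, fov_rad=math.radians(60))
if name:
    print(f"Overlay: {name} - {info}")       # modify the AR display with current info
```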
  • Patent number: 11662700
    Abstract: A simulation system comprising: a plurality of physical components, each corresponding to one of a plurality of physical component types; an attachment panel comprising an arrangement of attachment locations, such that one or more of the physical components are attachable to the attachment panel; a display system configured to provide a visualisation on or proximate one or both of the attachment panel and the plurality of physical components; a capture device configured to capture image data of a current status of the attachment panel and the plurality of physical components; and a controller, when at least one physical component is coupled to the attachment panel.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: May 30, 2023
    Inventor: Gregory Quinn
  • Patent number: 11663736
    Abstract: A method for creating a marker-based shared augmented reality (AR) session starts with a first device and a second device initializing the shared AR session. The first device displays a marker on its display. The second device detects the marker using a camera included in the second device and captures an image of the marker using the camera. The second device determines a transformation between the first device and the second device using the image of the marker. A common coordinate frame is then determined using the transformation, the shared AR session is generated using the common coordinate frame, and the shared AR session is caused to be displayed by the first device and by the second device. Other embodiments are described herein.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: May 30, 2023
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan, Matan Zohar
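A sketch of the coordinate-frame arithmetic implied by 11663736, assuming the second device has already estimated the displayed marker's pose from its captured image (for example with a PnP solver); the hard-coded poses and variable names are illustrative only.

```python
# Sketch: once the second device has the marker's pose in its own camera
# frame, the transform between the two devices, and hence a common
# coordinate frame anchored at the marker, follows by composition.
import numpy as np


def invert_rigid(T):
    """Invert a 4x4 rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti


# Pose of the marker in device 1's frame: known, since device 1 renders it
# on its own screen (here: 10 cm in front of the device origin).
T_marker_in_dev1 = np.eye(4)
T_marker_in_dev1[:3, 3] = [0.0, 0.0, 0.10]

# Pose of the same marker in device 2's camera frame: in practice estimated
# from the captured image; hard-coded for the sketch.
T_marker_in_dev2 = np.eye(4)
T_marker_in_dev2[:3, 3] = [0.05, 0.0, 0.60]

# Transformation between the devices, and a shared frame anchored at the marker.
T_dev1_in_dev2 = T_marker_in_dev2 @ invert_rigid(T_marker_in_dev1)
T_common_in_dev1 = T_marker_in_dev1          # both devices agree: marker = origin
T_common_in_dev2 = T_marker_in_dev2
print(T_dev1_in_dev2[:3, 3])                 # device 1's origin seen from device 2
```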
  • Patent number: 11636641
    Abstract: Disclosed is an electronic device including a camera, a display, a sensor, and a processor, wherein the processor is configured to acquire one or more images including an external object through the camera, identify a position of the external object relative to the electronic device through at least one of the camera and the sensor, the position of the external object including a distance between the external object and the electronic device, determine whether the distance between the external object and the electronic device is within a threshold distance range, display the avatar corresponding to the external object based on the identified position of the external object through the display, if the distance between the external object and the electronic device is within the threshold distance range, wherein a size of the avatar is determined based on the distance between the external object and the electronic device, and display a specified avatar image through the display, if the distance between the external object and the electronic device is not within the threshold distance range.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: April 25, 2023
    Inventors: Minsheok Choi, Jaeyun Song, Wooyong Lee, Hyejin Kang, Junho An, Hyoungjin Yoo, Gyuhee Han, Jiyoon Park, Jungeun Lee
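A toy sketch of the distance-gated avatar behavior in 11636641; the threshold range, sizing rule, and function name are invented for illustration.

```python
# Toy sketch of the distance-gated avatar logic (constants are illustrative).
def render_avatar(distance_m, threshold_range=(0.3, 2.5), base_size_px=400):
    near, far = threshold_range
    if near <= distance_m <= far:
        # Within the threshold range: show the avatar for the external object,
        # sized according to how far away the object is.
        size = int(base_size_px / distance_m)
        return f"avatar sized {size}px for object at {distance_m:.1f} m"
    # Outside the range: fall back to a specified (fixed) avatar image.
    return "specified fallback avatar image"


for d in (0.2, 1.0, 3.0):
    print(render_avatar(d))
```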
  • Patent number: 11636644
    Abstract: This specification describes an apparatus, method and computer program relating to virtual reality, particularly augmented reality (AR) or mixed reality (MR). The method may comprise providing, based on a position associated with a display means, different first and second sets of virtual content for overlaid display at the display means at a first time and determining that the first set of virtual content is prioritized over the second set of virtual content. Based on the determination, the method may comprise prioritizing display of the first set of virtual content over the second set of virtual content at the display means, and enabling display of the second set of virtual content at a second, subsequent, time.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: April 25, 2023
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Lasse Juhani Laaksonen, Jussi Artturi Leppänen, Arto Juhani Lehtiniemi
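One way to picture the prioritization in 11636644 is a simple priority queue that shows the higher-priority content set first and defers the other set to a later time; this is an illustrative structure, not Nokia's implementation, and the content names are invented.

```python
# Sketch: prioritize one set of virtual content over another at the display
# and defer the lower-priority set to a second, subsequent time.
import heapq
import itertools

counter = itertools.count()
queue = []  # (priority, tiebreak, content) - lower number = higher priority

for priority, content in [(1, "navigation arrows"), (2, "advertisement banner")]:
    heapq.heappush(queue, (priority, next(counter), content))

# First time step: display only the highest-priority set.
_, _, shown_now = heapq.heappop(queue)
print("t1 overlay:", shown_now)

# Second, subsequent time step: the deferred set becomes displayable.
if queue:
    _, _, shown_later = heapq.heappop(queue)
    print("t2 overlay:", shown_later)
```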
  • Patent number: 11631097
    Abstract: A method of displaying a symbol representative of changes in price during a time period, the method including receiving a first intra-time open price, a first intra-time high price, a first intra-time low price, and a first intra-time close price corresponding to a first intra-time period in the time period, receiving a second intra-time open price, a second intra-time high price, a second intra-time low price, and a second intra-time close price corresponding to a second intra-time period in the time period wherein the second intra-time period occurred after the first intra-time period, determining whether the second intra-time high price is higher than the first intra-time high price and whether the second intra-time low price is higher than the first intra-time low price, in response to the determining that the second intra-time high price is higher than the first intra-time high price and that the second intra-time low price is higher than the first intra-time low price, incrementing a higher-high-higher-low count.
    Type: Grant
    Filed: October 2, 2021
    Date of Patent: April 18, 2023
    Inventor: Eric Schneider
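The counting step described in 11631097 reduces to comparing consecutive intra-time OHLC records; the sketch below tallies higher-high-higher-low and lower-high-lower-low transitions, with field and counter names chosen for illustration.

```python
# Sketch: walk consecutive intra-time OHLC records and tally the transitions
# that would feed a symbol summarizing the whole time period.
from collections import Counter
from typing import List, NamedTuple


class OHLC(NamedTuple):
    open: float
    high: float
    low: float
    close: float


def tally_transitions(periods: List[OHLC]) -> Counter:
    counts = Counter()
    for prev, cur in zip(periods, periods[1:]):
        if cur.high > prev.high and cur.low > prev.low:
            counts["higher_high_higher_low"] += 1
        elif cur.high < prev.high and cur.low < prev.low:
            counts["lower_high_lower_low"] += 1
        else:
            counts["inside_or_outside"] += 1
    return counts


bars = [OHLC(10, 11, 9, 10.5), OHLC(10.5, 12, 10, 11.5), OHLC(11.5, 11.8, 9.5, 10)]
print(tally_transitions(bars))
```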
  • Patent number: 11615562
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for removing an anchor point from a Bezier spline while preserving the shape of the Bezier spline. For example, the disclosed systems can replace adjacent input segments of an initial Bezier spline that are connected at an anchor point with a new contiguous segment that does not include an anchor point and that spans the portion of the spline covered by the adjacent segments. The disclosed systems can utilize an objective function to determine tangent vectors that indicate locations of control points for generating the new segment to replace the adjacent segments. In addition, the disclosed systems can generate a modified Bezier spline that includes the new segment in place of the adjacent segments of the initial Bezier spline.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: March 28, 2023
    Assignee: Adobe Inc.
    Inventors: Ankit Phogat, Vineet Batra, Daniel Kaufman
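A sketch of one standard way to realize the idea in 11615562: sample the two adjacent cubic Bezier segments, keep the outer endpoints fixed, and solve a linear least-squares problem for the two inner control points of a single replacement segment. The objective here is plain point-wise least squares with uniform parameters, not necessarily Adobe's exact objective function.

```python
# Sketch: remove an interior anchor by fitting one cubic Bezier to points
# sampled from the two adjacent segments, keeping the outer endpoints fixed.
import numpy as np


def cubic_bezier(P, t):
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])


def merge_segments(seg_a, seg_b, samples=50):
    """seg_a, seg_b: (4, 2) control points sharing an anchor (seg_a[3] == seg_b[0])."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = np.vstack([cubic_bezier(np.asarray(seg_a, float), ts),
                     cubic_bezier(np.asarray(seg_b, float), ts)])
    u = np.linspace(0.0, 1.0, len(pts))              # parameterization of the merged span
    P0, P3 = np.asarray(seg_a[0], float), np.asarray(seg_b[3], float)
    A = np.column_stack([3 * (1 - u) ** 2 * u, 3 * (1 - u) * u ** 2])
    b = pts - ((1 - u) ** 3)[:, None] * P0 - (u ** 3)[:, None] * P3
    (P1, P2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([P0, P1, P2, P3])                # new contiguous segment, no inner anchor


seg_a = [(0, 0), (1, 2), (2, 3), (3, 3)]
seg_b = [(3, 3), (4, 3), (5, 2), (6, 0)]
print(merge_segments(seg_a, seg_b))
```

A chord-length parameterization or tangent constraints at the kept endpoints would track the original shape more closely; the uniform parameters above just keep the sketch short.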
  • Patent number: 11610115
    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: March 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Amlan Kar, Aayush Prakash, Ming-Yu Liu, David Jesus Acuna Marrero, Antonio Torralba Barriuso, Sanja Fidler
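A toy example of sampling a scene graph from a probabilistic grammar, the starting point of the pipeline in 11610115; the rules, attribute ranges, and seed are invented, and in the patented approach the generative model adjusts such distributions using the downstream model's performance on real validation data.

```python
# Toy sketch: sample a scene graph from a probabilistic grammar. The
# distributions below stand in for the parameters a generative model would
# push toward real-world object-attribute statistics.
import random

random.seed(0)


def sample_scene_graph():
    scene = {"type": "road_scene", "children": []}
    for _ in range(random.randint(1, 4)):            # grammar rule: road -> cars
        car = {
            "type": "car",
            "position_m": round(random.uniform(-3.0, 3.0), 2),
            "color": random.choice(["red", "white", "black"]),
        }
        scene["children"].append(car)
    return scene


# Each sampled graph would be rendered into a labeled training image for the
# downstream model; validation accuracy on real data then guides how the
# sampling distributions are adjusted.
print(sample_scene_graph())
```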
  • Patent number: 11604278
    Abstract: A three-dimensional distance measurement device includes a light emitting unit that irradiates a subject with light; a light receiving unit that detects reflected light from the subject; a distance calculation unit that calculates a three-dimensional distance to the subject on the basis of a transmission time of the detected reflected light; an image processing unit that generates a distance image of the subject on the basis of the calculated distance data; and a distance mode selection processing unit that selects a predetermined distance mode from a plurality of distance modes having different measurable distance ranges and sets a driving condition of the light emitting unit. By selecting a first distance mode in a first frame and selecting a second distance mode in a second frame, and by combining the distance data acquired in the respective frames, three-dimensional distance data of a frame to be output is generated.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: March 14, 2023
    Assignee: HITACHI-LG DATA STORAGE, INC.
    Inventor: Katsuhiko Izumi
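The frame-combining step in 11604278 can be pictured as a per-pixel selection between the two distance modes; the ranges, validity rule, and array values below are assumptions, not Hitachi-LG's parameters.

```python
# Sketch: prefer distances from the near-range mode where valid, and fill the
# remaining pixels from the far-range frame to form one output frame.
import numpy as np

near_frame = np.array([[0.8, 1.2], [0.0, 0.9]])   # metres; 0.0 = out of range / invalid
far_frame = np.array([[0.9, 1.3], [6.5, 1.0]])    # metres; coarser but longer range

valid_near = near_frame > 0.0
combined = np.where(valid_near, near_frame, far_frame)
print(combined)   # 3D distance data for the single frame to be output
```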
  • Patent number: 11596301
    Abstract: The invention relates to a device for the determination and analysis of the motor skill and the oculomotor skill of a person (100), with a headset comprising at least the following components: a display unit (9) for displaying an image to the eyes of a person (100) when the headset is mounted on the head of the person (100); an optical sensor system (3, 4, 6) for estimating the position and shape of an object in three-dimensional space and for estimating the position of the headset in three-dimensional space, wherein the optical sensor system (3, 4, 6) is arranged and designed for the detection and registration of the hands and fingers of the person (100); and an eye-tracking module (8) that is configured to determine a point of gaze of the person (100) wearing the device. The invention furthermore relates to various methods for using the device.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: March 7, 2023
    Assignee: MOTOGNOSIS UG
    Inventors: Bastian Kayser, Karen Otte, Sebastian Mansow-Model, Alexander Brandt
  • Patent number: 11598636
    Abstract: A head-mounted display device provides more convenient functions and is usable in, e.g., surveying. An eyeglass display device includes a display unit and an operation content receiving unit. The display unit is configured to be placed on the head of a user and to be viewed by the user. The operation content receiving unit receives content of operation performed by the user. The display unit displays an image that shows a location relationship between positioning information of a target positioned by a location measuring device by using laser light and a predetermined placement planned location of the target. Multiple coordinate systems are prepared for a coordinate system of the displayed image. The operation content receiving unit receives designation of one coordinate system from among the multiple coordinate systems by the user.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 7, 2023
    Assignee: Topcon Corporation
    Inventor: Mitsutaka Kagata
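A toy sketch of the coordinate-system designation in 11598636: the guidance image can be computed in whichever of the prepared coordinate systems the user selects; the two example systems and the 30-degree rotation are assumptions, not Topcon's definitions.

```python
# Sketch: convert the measured target position and the planned placement
# location into the user-designated coordinate system before display.
import math


def to_display_coords(point_xy, system):
    x, y = point_xy
    if system == "site_grid":
        return (x, y)                          # use the site coordinates directly
    if system == "rotated_grid":
        theta = math.radians(30.0)             # assumed rotation of the second system
        return (x * math.cos(theta) + y * math.sin(theta),
                -x * math.sin(theta) + y * math.cos(theta))
    raise ValueError(f"unknown coordinate system: {system}")


measured_target = (12.5, 3.2)                  # positioned by the location measuring device
planned_location = (12.0, 3.0)
system = "site_grid"                           # designation received from the user
m = to_display_coords(measured_target, system)
p = to_display_coords(planned_location, system)
print(f"move target by ({p[0] - m[0]:.2f}, {p[1] - m[1]:.2f}) in the {system} frame")
```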
  • Patent number: 11601993
    Abstract: A wireless communication device may locate a proximate object in an environment, such as an electronic device or a resource. During this communication technique, the wireless communication device may receive a transmission that includes an identifier associated with the object. The wireless communication device may determine a range and/or a direction of the object from the wireless communication device. For example, the wireless communication device may determine the range and/or the direction, at least in part, using wireless ranging. Next, the wireless communication device may present output information that indicates the range and/or the direction. In particular, the wireless communication device may display a map of a proximate area with an indicator representative of the object shown on the map. Alternatively, the wireless communication device may display an image of the proximate area with the indicator representative of the object on the image.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: March 7, 2023
    Assignee: Apple Inc.
    Inventors: James H. Foster, Duncan R. Kerr
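For the ranging half of 11601993, a back-of-the-envelope sketch of two-way ranging shows how a range estimate could be derived before the indicator is drawn on the map; the timing values and reply delay are invented for the example.

```python
# Sketch: distance from round-trip time minus the responder's fixed reply delay.
C = 299_792_458.0                      # speed of light, m/s


def two_way_range(t_round_s, t_reply_s):
    """Two-way-ranging distance estimate."""
    return C * (t_round_s - t_reply_s) / 2.0


t_round = 150e-9 + 2 * (4.2 / C)       # responder delay plus ~4.2 m of travel each way
print(f"estimated range: {two_way_range(t_round, 150e-9):.2f} m")
```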
  • Patent number: 11588897
    Abstract: Methods, systems, apparatuses, and computer-readable media are provided for simulating user interactions with shared content. In one implementation, the computer-readable medium includes instructions to cause a processor to establish a communication channel for sharing content and user interactions; transmit to at least one second wearable extended reality appliance, first data, representing an object associated with first wearable extended reality appliance, enabling a virtual representation of the object to be displayed through the at least one second wearable extended reality appliance; receive image data from an image sensor associated with the first wearable extended reality appliance; detect in the image data at least one user interaction including a human hand pointing to a specific portion of the object; and transmit to the at least one second wearable extended reality appliance second data indicating an area of the specific portion of the object.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: February 21, 2023
    Assignee: MULTINARITY LTD
    Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev
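A rough sketch of turning a detected pointing interaction into the "second data indicating an area" mentioned in 11588897; the fingertip coordinates, 3x3 grid, and bounding-box payload are assumptions, not Multinarity's protocol.

```python
# Sketch: map a detected fingertip pixel to a grid cell of the shared object's
# bounding box, producing the area payload sent to the other appliance.
def pointed_area(fingertip_xy, object_bbox, grid=(3, 3)):
    (x0, y0, x1, y1), (fx, fy) = object_bbox, fingertip_xy
    u = min(max((fx - x0) / (x1 - x0), 0.0), 0.999)   # normalized position in the box
    v = min(max((fy - y0) / (y1 - y0), 0.0), 0.999)
    col, row = int(u * grid[0]), int(v * grid[1])
    return {"object_bbox": object_bbox, "cell": (row, col)}   # "second data" payload


# Fingertip detected at pixel (420, 310); the shared object occupies this box.
print(pointed_area((420, 310), (300, 200, 600, 500)))
```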
  • Patent number: 11580657
    Abstract: A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on analysis of a data stream comprising images of a second user that is associated with a second device. The server then receives biometric data of the second user from the second device. The biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: February 14, 2023
    Assignee: SNAP INC.
    Inventors: Piers George Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan
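A sketch of one way biometric data can yield a distance estimate as in 11580657: if the second device reports the second user's interpupillary distance, the pinhole camera model converts its apparent size in the first device's image into a range. The choice of biometric, the focal length, and the numbers are assumptions for illustration.

```python
# Sketch: pinhole camera model, Z = focal_length * real_size / image_size.
def distance_from_ipd(ipd_metres, ipd_pixels, focal_length_pixels):
    """Distance to a face whose real interpupillary distance is known."""
    return focal_length_pixels * ipd_metres / ipd_pixels


# Second device reports an IPD of 63 mm; the first device measures 42 px
# between the eyes in its image stream with a 1400 px focal length.
print(f"{distance_from_ipd(0.063, 42.0, 1400.0):.2f} m")   # ≈ 2.10 m
```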