Patents Examined by Jeffery A. Brier
  • Patent number: 11393141
    Abstract: A data-processing system identifies entities and relationships between entities that are recited in a set of documents. By identifying differently named entities that share similar sets of relationships, the system can recognize entities that are named differently but are in fact identical across the set of documents. The resulting entity and relationship map may be displayed to an end-user for intelligence analysis. In some examples, the end-user may make corrections to the relationship map, which the system can then use to improve the inferences it produces.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: July 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Benjamin Hsu, David Mordechai Sloan
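A minimal Python sketch of the coreference step described in the abstract above, assuming entities are represented by sets of (relation, counterpart) pairs and merged when their Jaccard similarity clears a threshold; the representation, the `merge_coreferent` helper, and the 0.8 threshold are illustrative assumptions, not details from the patent.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap of two relationship sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_coreferent(entities: dict, threshold: float = 0.8) -> list:
    """Group differently named entities whose relationship sets largely coincide.

    `entities` maps an entity name to the set of (relation, other entity) pairs
    extracted from the documents; the threshold is an assumed parameter.
    """
    groups = {name: {name} for name in entities}
    for a, b in combinations(entities, 2):
        if jaccard(entities[a], entities[b]) >= threshold:
            merged = groups[a] | groups[b]
            for name in merged:
                groups[name] = merged
    # Deduplicate the resulting groups.
    return [set(g) for g in {frozenset(g) for g in groups.values()}]

relations = {
    "IBM":    {("headquartered_in", "Armonk"), ("industry", "computing")},
    "I.B.M.": {("headquartered_in", "Armonk"), ("industry", "computing")},
    "Acme":   {("headquartered_in", "Springfield")},
}
print(merge_coreferent(relations))   # two groups: {'IBM', 'I.B.M.'} and {'Acme'}
```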
  • Patent number: 11392197
    Abstract: An image rendering method, device, and system, a storage medium, and an image display method are provided. The image rendering method includes: acquiring an image to be displayed; according to a gaze point of human eyes on a display screen, obtaining a gaze point position on the image to be displayed; determining, according to the gaze point position, a first sampling area and a second sampling area of the image to be displayed; performing first resolution sampling on the first sampling area to obtain a first display area; performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, a resolution of the second sampling area being greater than that of the second display area; and splicing the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: July 19, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Xuefeng Wang, Yukun Sun, Jinghua Miao, Lili Chen, Hao Zhang, Bin Zhao, Lixin Wang, Xi Li, Jianwen Suo, Wenyu Li, Jinbao Peng, Qingwen Fan, Yuanjie Lu, Chenru Wang, Yali Liu, Jiankang Sun
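A small sketch of the two-area sampling scheme the abstract above describes, assuming NumPy image arrays; `foveated_render`, the 64-pixel fovea radius, and the 4x downsampling factor are illustrative assumptions rather than parameters from the patent.

```python
import numpy as np

def foveated_render(frame: np.ndarray, gaze_xy: tuple,
                    fovea_radius: int = 64, downsample: int = 4) -> np.ndarray:
    """Approximate the two-area sampling in the abstract: keep a full-resolution
    window around the gaze point, sample the rest of the frame at a lower
    resolution, then splice the two into one output image."""
    h, w = frame.shape[:2]
    gx, gy = gaze_xy
    x0, x1 = max(0, gx - fovea_radius), min(w, gx + fovea_radius)
    y0, y1 = max(0, gy - fovea_radius), min(h, gy + fovea_radius)

    # Second display area: whole frame at reduced resolution, then upscaled
    # by pixel repetition so it can be spliced with the fovea crop.
    periphery = frame[::downsample, ::downsample]
    out = np.repeat(np.repeat(periphery, downsample, axis=0),
                    downsample, axis=1)[:h, :w]

    # First display area: full-resolution pixels around the gaze point.
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
output = foveated_render(frame, gaze_xy=(320, 240))
```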
  • Patent number: 11386529
    Abstract: A method for displaying a three dimensional ("3D") image includes rendering a frame of 3D image data. The method also includes analyzing the frame of 3D image data to generate best known depth data. The method further includes using the best known depth data to segment the 3D image data into near and far frames of two dimensional ("2D") image data corresponding to near and far depths respectively. Moreover, the method includes displaying near and far 2D image frames corresponding to the near and far frames of 2D image data at near and far depths to a user respectively.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: July 12, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Robert Blake Taylor
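A compact illustration, under assumptions, of the segmentation step in the abstract above: given a rendered RGB frame and its depth data, pixels are split into a near and a far 2D frame around a single depth cut. The `split_by_depth` helper and the single-threshold rule are simplifications for illustration.

```python
import numpy as np

def split_by_depth(rgb: np.ndarray, depth: np.ndarray, cut: float):
    """Segment a rendered frame into near and far 2D frames using a depth map;
    the single cut value (in the depth map's units) and the zeroed background
    are illustrative choices."""
    near_mask = depth <= cut                       # pixels closer than the cut
    near = np.where(near_mask[..., None], rgb, 0)  # near 2D frame
    far = np.where(near_mask[..., None], 0, rgb)   # far 2D frame
    return near, far

rgb = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
depth = np.random.rand(240, 320) * 5.0             # depth in metres, say
near_frame, far_frame = split_by_depth(rgb, depth, cut=1.5)
```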
  • Patent number: 11386629
    Abstract: An augmented reality viewing system is described. A local coordinate frame of local content is transformed to a world coordinate frame, then to a head coordinate frame, and then to a camera coordinate frame that includes all pupil positions of an eye. One or more users may interact in separate sessions with a viewing system. If a canonical map is available, that earlier map is downloaded onto a user's viewing device. The viewing device then generates a further map and localizes it to the canonical map.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: July 12, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Jeremy Dwayne Miranda, Rafael Domingos Torres, Daniel Olshansky, Anush Mohan, Robert Blake Taylor, Samuel A. Miller, Jehangir Tajik, Ashwin Swaminathan, Lomesh Agarwal, Ali Shahrokni, Prateek Singhal, Joel David Holder, Xuan Zhao, Siddharth Choudhary, Helder Toshiro Suzuki, Hirai Honar Barot, Eran Guendelman, Michael Harold Liebenow, Christian Ivan Robert Moore
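A brief sketch of the coordinate-frame chain named in the abstract above (local to world to head to camera), using 4x4 homogeneous transforms; the poses and the `rigid_transform` helper are made-up placeholders, not values from the patent.

```python
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Illustrative poses (identity rotations, invented offsets) standing in for the
# local->world, world->head and head->camera transforms named in the abstract.
local_to_world = rigid_transform(np.eye(3), np.array([2.0, 0.0, -1.0]))
world_to_head  = rigid_transform(np.eye(3), np.array([0.0, -1.6, 0.0]))
head_to_camera = rigid_transform(np.eye(3), np.array([0.0, 0.0, -0.05]))

# Composing the chain expresses local content directly in the camera frame.
local_to_camera = head_to_camera @ world_to_head @ local_to_world

point_local = np.array([0.1, 0.2, 0.3, 1.0])   # homogeneous coordinates
point_camera = local_to_camera @ point_local
print(point_camera[:3])
```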
  • Patent number: 11380030
    Abstract: A drawing management apparatus manages various types of drawings of a plant and includes a processor and a communication interface. The processor compares a new version and an old version of a first type of drawing and, via the communication interface, notifies a user or terminal apparatus handling a second type of drawing, different from the first, of information related to the resulting difference in the first type of drawing.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: July 5, 2022
    Assignee: YOKOGAWA ELECTRIC CORPORATION
    Inventors: Takahiro Kambe, Tatenobu Seki, Nobuaki Ema, Masato Annen
  • Patent number: 11380075
    Abstract: An electronic apparatus capable of playing back a VR video image saves, in association with a first VR video image, a first reference direction that serves as the reference for determining which range of a first frame image of the first VR video image is displayed on the screen at the start of playback; selects a second VR video image, different from the first, from among a plurality of VR video images; and performs control such that, when the second VR video image starts to be played back, the range of its second frame image that is displayed first on the screen is determined based on the same first reference direction.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: July 5, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazuki Hamada, Seiji Ogawa
  • Patent number: 11379950
    Abstract: Systems and methods for placing content are described. More specifically, content is received at a computing device, and depth information corresponding to the device's external environment is obtained. An indication to associate the content with a location based on the depth information is received from a user. Information for at least one plane associated with the depth information at the location is obtained, and at least a portion of the content is warped to match the at least one plane based on the depth information.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: July 5, 2022
    Assignee: LEMON INC.
    Inventors: Frank Hamilton, Hwankyoo Shawn Kim, Zhixiong Lu, Qingyang Lv, WeiShan Yu, Ben Ma
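One plausible, hedged reading of the "warp to match the plane" step in the abstract above, using an OpenCV homography; `warp_to_plane` and the hard-coded plane corners are illustrative assumptions, since in the described method the plane would come from the depth information.

```python
import cv2
import numpy as np

def warp_to_plane(content: np.ndarray, plane_corners_px: np.ndarray,
                  canvas_shape: tuple) -> np.ndarray:
    """Warp flat content so its corners land on the four projected corners of a
    detected plane. `plane_corners_px` is a 4x2 array in screen pixels."""
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(plane_corners_px)
    H = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography
    return cv2.warpPerspective(content, H, (canvas_shape[1], canvas_shape[0]))

content = np.full((200, 300, 3), 255, dtype=np.uint8)   # placeholder content
corners = np.array([[120, 80], [420, 120], [400, 300], [100, 260]])
warped = warp_to_plane(content, corners, canvas_shape=(480, 640))
```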
  • Patent number: 11379508
    Abstract: Machine data reflecting operation of a monitored system is ingested and made available for search by a data intake and query system (DIQS). Ingested data includes log data entries produced by an application that represent low-level instances of user interface or interaction events. Inference processing generates a new collection of data instances that each identifies a higher-level task performed by a user in a sequence of the low-level events without regard to any explicit task affiliation data component of the low-level instances. Information for the task may include a measure of confidence that each low-level event of the sequence is properly associated with the task. Tasks of the new collection may be advantageously visualized and included in downstream processing.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: July 5, 2022
    Assignee: Splunk Inc.
    Inventors: Sara Alspaugh, Adam Jamison Oliner
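A toy sketch, under assumptions, of inferring higher-level tasks from low-level interaction events as the abstract above describes: events are grouped by idle gaps and labeled against assumed task signatures, with a per-event confidence attached. The `TASK_SIGNATURES` taxonomy, the 30-second gap, and the confidence values are invented for illustration and are not the patent's actual inference method.

```python
from dataclasses import dataclass

# Assumed toy taxonomy: each high-level task is described by the low-level
# UI event types it typically contains.
TASK_SIGNATURES = {
    "run_search":  {"focus_searchbar", "type_query", "click_search"},
    "edit_report": {"open_report", "edit_field", "click_save"},
}

@dataclass
class TaskInstance:
    task: str
    events: list
    event_confidence: list  # per-event measure that it belongs to the task

def infer_tasks(events: list, gap_seconds: float = 30.0) -> list:
    """Split a time-ordered event log into sequences separated by idle gaps,
    then label each sequence with the best-matching task signature."""
    sequences, current = [], []
    for ev in events:
        if current and ev["t"] - current[-1]["t"] > gap_seconds:
            sequences.append(current)
            current = []
        current.append(ev)
    if current:
        sequences.append(current)

    instances = []
    for seq in sequences:
        types = {ev["type"] for ev in seq}
        task = max(TASK_SIGNATURES, key=lambda t: len(types & TASK_SIGNATURES[t]))
        conf = [1.0 if ev["type"] in TASK_SIGNATURES[task] else 0.3 for ev in seq]
        instances.append(TaskInstance(task, seq, conf))
    return instances

log = [
    {"t": 0.0,  "type": "focus_searchbar"},
    {"t": 1.2,  "type": "type_query"},
    {"t": 2.0,  "type": "click_search"},
    {"t": 90.0, "type": "open_report"},
    {"t": 95.0, "type": "edit_field"},
]
for inst in infer_tasks(log):
    print(inst.task, inst.event_confidence)
```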
  • Patent number: 11354898
    Abstract: A contextual filter system is configured to perform operations that include: capturing an image frame at a client device, wherein the image frame includes a depiction of an object; identifying an object category of the object based on the depiction of the object within the image frame; accessing media content associated with the object category within a media repository; generating a presentation of the media content; and causing display of the presentation of the media content within the image frame at the client device.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: June 7, 2022
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Celia Nicole Mourkogiannis, Travis Chen, Kevin Dechau Tang, Kaveh Anvaripour
  • Patent number: 11354868
    Abstract: Systems and methods for providing remote assistance via augmented reality (AR) by utilizing a field service device and a remote expert device are disclosed herein. An example method includes a field service device generating a non-AR video feed and sending the non-AR video feed to a remote expert device. A user of the remote expert device then annotates the non-AR video feed by drawing a scribble pattern onto it. The remote expert device sends the scribble pattern to the field service device as an array of coordinates. The field service device maps the received scribble pattern to a plane to create an AR scribble pattern and then creates an AR video feed based on the AR scribble pattern.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: June 7, 2022
    Assignee: Zebra Technologies Corporation
    Inventors: Thomas William Judd, Kevin B. Mayginnes
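A minimal sketch of the "map the scribble pattern to a plane" step in the abstract above, assuming a pinhole camera and a known plane in the camera frame; the intrinsics, the plane, and the `scribble_to_plane` helper are illustrative assumptions.

```python
import numpy as np

def scribble_to_plane(points_px: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float,
                      plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
    """Lift 2D scribble coordinates onto a 3D plane by intersecting camera rays
    with the plane; everything is expressed in the camera frame."""
    out = []
    n = plane_normal / np.linalg.norm(plane_normal)
    for u, v in points_px:
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # camera-frame ray
        t = np.dot(plane_point, n) / np.dot(ray, n)           # ray-plane intersection
        out.append(t * ray)
    return np.array(out)

scribble = np.array([[310, 200], [330, 215], [355, 230]])     # pixel coordinates
points_3d = scribble_to_plane(scribble, fx=600, fy=600, cx=320, cy=240,
                              plane_point=np.array([0.0, 0.0, 1.5]),
                              plane_normal=np.array([0.0, 0.0, -1.0]))
print(points_3d)
```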
  • Patent number: 11354837
    Abstract: The current invention relates to a computer-implemented method for creating an overlay map, preferably a heat map, comprising: receiving at a server a user request for creation of the overlay map; loading vector data at the server comprising location data, the location data comprising at least one attribute; converting the vector data to image data composed of pixels according to an index scale; applying the converted image data to a color ramp; and creating the overlay map based on the converted image data and the color ramp; wherein said vector data is organized according to a plurality of layers, the location data comprising at least one attribute for each layer.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: June 7, 2022
    Assignee: KBC Groep NV
    Inventors: Martina Chlebcova, Viera Kovalova, Barak Chizi, Jeroen D'haen
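A small single-layer sketch of the rasterize-and-color-ramp pipeline in the abstract above, assuming NumPy and Matplotlib; the grid size, the "hot" ramp, and the coordinates are illustrative, and the patent's multi-layer handling is omitted.

```python
import numpy as np
from matplotlib import cm

def heat_overlay(lon: np.ndarray, lat: np.ndarray, weight: np.ndarray,
                 bins: int = 256) -> np.ndarray:
    """Rasterise weighted point locations into a grid, normalise the values to
    an index scale, and apply a colour ramp to produce an RGBA overlay image."""
    # Convert the vector data to image data: bin the attribute onto a grid.
    grid, _, _ = np.histogram2d(lat, lon, bins=bins, weights=weight)
    # Index scale: normalise grid values to 0..1 before applying the ramp.
    norm = grid / grid.max() if grid.max() > 0 else grid
    # Colour ramp (Matplotlib's 'hot' ramp used as a stand-in).
    rgba = cm.hot(norm)
    return (rgba * 255).astype(np.uint8)

lon = np.random.uniform(4.3, 4.5, 1000)     # illustrative coordinates
lat = np.random.uniform(50.8, 50.9, 1000)
weight = np.random.rand(1000)
overlay = heat_overlay(lon, lat, weight)
print(overlay.shape)                        # (256, 256, 4)
```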
  • Patent number: 11348296
    Abstract: Systems and methods are provided for presenting a timeline representing a culture media protocol for a culture medium. Providing the timeline can include: receiving the culture media protocol for the culture media; generating the timeline on a user interface based on the culture media protocol; monitoring time on the timeline; receiving one or more culture media images related to the culture media protocol; associating each of the one or more culture media images with a position on the timeline that correlates to the time at which the culture media image was captured; and generating a selectable marker for each culture media image associated with the timeline, the selectable marker being aligned with the position on the timeline that correlates to the time at which the culture media image was captured.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: May 31, 2022
    Assignee: BECTON, DICKINSON AND COMPANY
    Inventors: Strett Roger Nicolson, Keri Lynne Jones Aman, Mark Sakowski, Paul Fieni, Mark Larsen, Amy Alcott Llanso
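A tiny sketch of positioning selectable markers on the timeline, as described in the abstract above, by mapping each image's capture time to a fraction of the protocol duration; the pixel width, durations, and `marker_positions` helper are assumed values for illustration.

```python
from datetime import datetime, timedelta

def marker_positions(protocol_start: datetime, protocol_hours: float,
                     capture_times: list, timeline_px: int = 800) -> list:
    """Place a marker for each culture-media image at the pixel on the timeline
    matching its capture time (clamped to the timeline's extent)."""
    total = timedelta(hours=protocol_hours)
    positions = []
    for t in capture_times:
        frac = (t - protocol_start) / total        # 0.0 .. 1.0 along the timeline
        positions.append(int(round(min(max(frac, 0.0), 1.0) * timeline_px)))
    return positions

start = datetime(2022, 1, 10, 8, 0)
captures = [start + timedelta(hours=h) for h in (6, 18, 30)]
print(marker_positions(start, protocol_hours=48, capture_times=captures))
# [100, 300, 500] for a 48 h protocol on an 800 px timeline
```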
  • Patent number: 11348321
    Abstract: A method provides an augmented view of real-world scenery and of occluded subsurface infrastructure. An image is taken by a camera with an image sensor, together with image reference information comprising a camera position and a camera orientation. From three dimensional information of the subsurface infrastructure, a two dimensional projection onto the image sensor is made using the reference information. The projection position of an anchor element of the subsurface infrastructure that is visible in the at least one image is compared with the image position of the anchor element. A difference between the image position and the projection position is compensated for, so that the two dimensional projection derived from the three dimensional information of the subsurface infrastructure is matched to and overlaid on the at least one image, thereby providing an improved augmented view.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: May 31, 2022
    Assignee: HEXAGON TECHNOLOGY CENTER GMBH
    Inventor: Bernhard Metzler
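A condensed sketch of the projection-and-compensation idea in the abstract above: subsurface 3D points are projected with a pinhole model using the camera reference information, and the overlay is shifted by the offset between the anchor element's projected and detected image positions. The intrinsics, pose, pipe coordinates, and detected anchor position are all illustrative assumptions.

```python
import numpy as np

def project(points_3d: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project 3D points onto the image sensor using the camera position and
    orientation (pinhole model)."""
    cam = (R @ points_3d.T).T + t                 # world -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

# Illustrative intrinsics and pose; in the method these come with the image.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])

pipe = np.array([[0.5, 0.2, 5.0], [1.5, 0.2, 5.0]])      # subsurface model (m)
projected = project(pipe, K, R, t)

# Anchor-based compensation: shift the projection so the projected anchor
# element coincides with where the anchor is actually seen in the image.
anchor_world = np.array([[1.0, 0.0, 5.0]])
anchor_projected = project(anchor_world, K, R, t)[0]
anchor_in_image = np.array([488.0, 236.0])               # detected image position
offset = anchor_in_image - anchor_projected
overlay_points = projected + offset
print(overlay_points)
```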
  • Patent number: 11335071
    Abstract: A moving-object position and orientation acquisition section acquires a position and an orientation of a moving object detected by a tracker provided in the moving object to be moved by a user. An AR region determination section determines, as an augmented reality region, a region that corresponds to a partial space occupied by the moving object in a real world and viewed from a viewpoint of the user. The moving object is configured to be moved by the user. An AR generation section generates an augmented reality image in the augmented reality region in a shot image of the real world.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: May 17, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Yoshinori Ohashi, Masaomi Nishidate, Norihiro Nagai
  • Patent number: 11328507
    Abstract: A sensing system includes a detecting device used to detect a position of a target and a controller. For display on a display device or projection by a projection apparatus, the controller creates an augmented-reality image that shows: at least one of a setting related to detection of the target using the detecting device, a setting of a moving apparatus, and a setting of a robot that performs work on the target; a position of the target as recognized by the controller; a result of the detection of the target; a work plan of the moving apparatus; a work plan of the robot; a determination result of the controller; and a parameter related to the target.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: May 10, 2022
    Assignee: FANUC CORPORATION
    Inventor: Masafumi Ooba
  • Patent number: 11328465
    Abstract: Systems and methods for augmented reality (AR) safe visualization for use with a near-to-eye (NTE) display system worn by a user are provided. The system includes: a processor programmed with an AR program and a task database storing task data; and a camera mounted to the NTE display system that provides video input. The processor receives the video input and coordinates video image processing of the video input to identify therein a user's hand and an object. The processor receives an intended task from the user and retrieves associated task data based thereon. The processor processes the task data with the intended task to render a visualized item, such as a job card. The processor determines when the visualized item is in front of the hand while the user is performing the task and removes the visualized item in response.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: May 10, 2022
    Assignee: Honeywell International Inc.
    Inventors: Ondrej Pokorny, Michal Kosik, Marketa Szydlowska
  • Patent number: 11328493
    Abstract: An augmented reality screen system includes an augmented reality device and a host. The augmented reality device is configured to capture a physical mark through a camera. The host is configured to receive the physical mark, determine position information and rotation information of the physical mark, and fetch a virtual image from a storage device through a processor of the host. The processor transmits an adjusted virtual image to the augmented reality device according to the position information and the rotation information, and the augmented reality device projects the adjusted virtual image onto a display of the augmented reality device. The adjusted virtual image becomes a virtual extended screen, and the virtual extended screen and the physical mark are simultaneously displayed on the display of the augmented reality device.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: May 10, 2022
    Assignee: ACER INCORPORATED
    Inventors: Huei-Ping Tzeng, Chao-Kuang Yang, Wen-Cheng Hsu, Chih-Wen Huang, Chih-Haw Tan
  • Patent number: 11328692
    Abstract: A head-mounted situational awareness system and method of operation provide headgear with a retinal display and multiple sensory-related electrical components. A microphone array and a motion sensor are integrated into the headgear. The microphone array detects incoming sound to generate an audio signal. The motion sensor detects the position and orientation of the headgear relative to the audio source to generate a position signal. A processor utilizes speech-to-text software to translate the sound to text for display on the retinal display, and utilizes a position algorithm and triangulation functions to generate a position graphic of the audio source. Noise cancelling software reduces background noise in the sound. A remote subsystem, or command center, exchanges the audio signal and position signal with the headgear to receive an audio picture of the area and to generate actionable information that displays in real time on the retinal display of the headgear.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: May 10, 2022
    Inventors: Alexandra Cartier, Gary England
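A very reduced sketch related to the position estimation in the abstract above: the bearing of an audio source from a single microphone pair, derived from the time-difference of arrival under a far-field assumption. The real system triangulates with the full array and the motion sensor; the spacing and timing values here are invented.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s

def bearing_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Estimate the angle of an audio source, in degrees off the axis through
    two microphones, from the time-difference of arrival between them."""
    # Path difference implied by the arrival-time difference.
    delta = SPEED_OF_SOUND * tdoa_s
    # Clamp to the physically possible range before taking the arccosine.
    cos_theta = np.clip(delta / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

# A source arriving 0.2 ms earlier at one microphone of a 15 cm pair.
print(bearing_from_tdoa(2.0e-4, 0.15))   # roughly 63 degrees off-axis
```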
  • Patent number: 11327630
    Abstract: Devices, methods, systems, and media are described for selecting virtual objects for user interaction in an extended reality environment. Distant virtual objects are brought closer to the user within a virtual 3D space to situate the selected virtual object in virtual proximity to the user's hand for direct manipulation. A virtual object is selected by the user based on movements of the user's hand and/or head that are correlated or associated with an intent to select a specific virtual object within the virtual 3D space. As the user's hand moves in a way that is consistent with this intent, the virtual object is brought closer to the user's hand within the virtual 3D space. To predict the user's intent, hand and head trajectory data may be compared to a library of kinematic trajectory templates to identify a best-matched trajectory template.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: May 10, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Taslim Arefin Khan, Szu Wen Fan, Changqing Zou, Wei Li
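A simplified sketch of matching hand-trajectory data against a library of kinematic trajectory templates, as mentioned in the abstract above; plain resampling plus mean point-wise distance stands in for whatever matching the patent actually uses, and the templates are invented.

```python
import numpy as np

def match_template(hand_trajectory: np.ndarray, templates: dict) -> str:
    """Compare a recent hand trajectory (N x 3 positions) against a library of
    trajectory templates and return the best match: the template with the
    smallest mean point-wise distance after resampling to a common length."""
    def resample(traj: np.ndarray, n: int = 20) -> np.ndarray:
        idx = np.linspace(0, len(traj) - 1, n)
        return np.stack([np.interp(idx, np.arange(len(traj)), traj[:, d])
                         for d in range(traj.shape[1])], axis=1)

    query = resample(hand_trajectory)
    scores = {name: np.mean(np.linalg.norm(query - resample(t), axis=1))
              for name, t in templates.items()}
    return min(scores, key=scores.get)

templates = {
    "reach_left_object":  np.array([[0, 0, 0], [-0.1, 0.05, 0.2], [-0.2, 0.1, 0.4]]),
    "reach_right_object": np.array([[0, 0, 0], [0.1, 0.05, 0.2], [0.2, 0.1, 0.4]]),
}
observed = np.array([[0, 0, 0], [0.08, 0.04, 0.18], [0.19, 0.09, 0.41]])
print(match_template(observed, templates))   # "reach_right_object"
```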
  • Patent number: 11317999
    Abstract: A processing device receives, from an image capture device associated with an augmented reality (AR) display, a plurality of images of a face of a patient. The processing device selects a subset of the plurality of images that meet one or more image selection criteria. The selection comprises determining, from the plurality of images, a first image that represents a first position extreme for the face; determining, from the plurality of images, a second image that represents a second position extreme of the face; selecting the first image; and selecting the second image. The processing device further generates a model of a jaw of the patient based at least in part on the subset of the plurality of images that have been selected.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: May 3, 2022
    Assignee: Align Technology, Inc.
    Inventors: Avi Kopelman, Adi Levin, Eric Paul Meyer, Elad Zeiri, Amir Ashkenazi, Ron Ganot, Sergei Ozerov, Inna Karapetyan
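A minimal sketch of selecting the two images that represent the position extremes of the face, as in the abstract above, assuming each candidate image carries a hypothetical jaw-opening measurement; the scoring field and `select_extremes` helper are illustrative assumptions, since the abstract does not specify the selection criteria.

```python
def select_extremes(images: list) -> tuple:
    """Pick the two images representing the position extremes of the face,
    scored here by an assumed per-image jaw-opening measurement."""
    ordered = sorted(images, key=lambda im: im["jaw_opening_mm"])
    return ordered[0], ordered[-1]    # most closed, most open

captures = [
    {"frame": "img_001.png", "jaw_opening_mm": 2.0},
    {"frame": "img_014.png", "jaw_opening_mm": 31.5},
    {"frame": "img_027.png", "jaw_opening_mm": 12.3},
]
closed_extreme, open_extreme = select_extremes(captures)
print(closed_extreme["frame"], open_extreme["frame"])   # img_001.png img_014.png
```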