Patents Examined by Todd Buttram
  • Patent number: 11550842
    Abstract: A method and system for providing a data analysis in the form of a customized geographic visualization on a graphical user interface (GUI) on a remote client computing device using only a web browser on the remote client device. The system receives a user's selected data analysis to be performed by the system for display on the remote client device. The system verifies the data access permissions of the user to render a data analysis solution customized to that particular user, and automatically prevents that user from gaining access to data analysis solutions to which that user's access is prohibited. The system is configured to respond to the user's data analysis request, perform the necessary computations on the server side on the fly, and send a dataset interpretable by the client device's web browser for display on the client device or on a device associated with the client device.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: January 10, 2023
    Assignee: Blue Cross and Blue Shield Association
    Inventors: Teresa Nguyen Clark, Michael Steven Weinberg, Carlos Ricardo Villarreal, Nathania Hau, Jelani Akil McLean, Abigail Berube, Trent Tyrone Haywood
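    Illustrative sketch: a minimal Python stand-in for the flow this abstract describes (verify permissions, compute server-side, return a browser-interpretable dataset). The analysis names, permission table, and JSON payload shape are assumptions for illustration, not taken from the patent.

```python
# Minimal sketch of the permission-gated, server-side analysis flow.
# All names (ANALYSES, USER_PERMISSIONS, handle_request) are illustrative.
import json

ANALYSES = {
    "claims_by_state": lambda: {"CA": 1200, "TX": 950},    # stand-in computation
    "members_by_state": lambda: {"CA": 80000, "TX": 61000},
}

USER_PERMISSIONS = {
    "analyst01": {"claims_by_state"},  # this user may not run members_by_state
}

def handle_request(user_id: str, analysis_id: str) -> str:
    """Verify access, run the computation server-side, and return a JSON
    payload a browser can render (e.g., to drive a choropleth map)."""
    if analysis_id not in USER_PERMISSIONS.get(user_id, set()):
        return json.dumps({"error": "access denied"})       # prohibited analyses are blocked
    result = ANALYSES[analysis_id]()                         # computed on the fly, server side
    return json.dumps({"analysis": analysis_id, "values": result})

if __name__ == "__main__":
    print(handle_request("analyst01", "claims_by_state"))    # permitted
    print(handle_request("analyst01", "members_by_state"))   # denied
```
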
  • Patent number: 11545117
    Abstract: A foldable mobile terminal apparatus and control method are provided. The apparatus includes a first section, a second section coupled to the first section and movable between a folded state and an unfolded state including a plurality of partially folded states, a flexible display coupled to the first and second sections, a first sensor, and at least one processor to, while the mobile terminal apparatus is partially folded, identify one of the partially folded states of the mobile terminal apparatus, and control the display to display information corresponding to a compass direction based on magnetic-related information obtained by the first sensor and according to the identified partially folded state. The electronic device can prevent distortion of the first sensor caused by the display by calibrating the geomagnetic value of the sensor based on an angle between a first surface of the first section and a third surface of the second section.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: January 3, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jinik Kim, Namjoon Park, Jeongmin Park
  • Patent number: 11544884
    Abstract: A messaging system performs virtual clothing try-on. A method of virtual clothing try-on may include accessing a target garment image and a person image of a person wearing a source garment, and processing the person image to generate a source garment mask and a person mask. The method may further include processing the source garment mask, the person mask, the target garment image, and a target garment mask to generate a warping that indicates how the target garment image is to be warped. The method may further include processing the target garment image to warp the target garment in accordance with the warping to generate a warped target garment image, processing the warped target garment image to blend it with the person image to generate a person with a blended target garment image, and processing the person with the blended target garment image to fill in holes and generate an output image.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: January 3, 2023
    Assignee: Snap Inc.
    Inventors: Ma'ayan Shuvi, Avihay Assouline, Itamar Berger
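    Illustrative sketch: a toy Python version of the mask, warp, blend, and fill pipeline described in this abstract, assuming NumPy arrays for images, a fixed pixel shift in place of the learned warping, and a naive hole fill; all function names are illustrative, not the patent's implementation.

```python
# Highly simplified try-on pipeline: mask -> warp -> blend -> fill holes.
# Real systems use learned warping and inpainting; this is a stand-in.
import numpy as np

def garment_mask(image: np.ndarray) -> np.ndarray:
    """Toy segmentation: treat non-black pixels as garment."""
    return (image.sum(axis=-1) > 0).astype(np.float32)

def warp(garment: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Stand-in for the learned warping: shift the garment into place."""
    return np.roll(garment, shift=(dy, dx), axis=(0, 1))

def try_on(person: np.ndarray, target_garment: np.ndarray, dy=0, dx=0) -> np.ndarray:
    warped = warp(target_garment, dy, dx)
    mask = garment_mask(warped)[..., None]           # where the warped garment covers
    blended = mask * warped + (1.0 - mask) * person  # composite garment over person
    holes = (blended.sum(axis=-1) == 0)[..., None]   # any remaining empty pixels
    return np.where(holes, person, blended)          # fill holes from the person image

if __name__ == "__main__":
    person = np.ones((64, 64, 3), dtype=np.float32) * 0.5
    garment = np.zeros((64, 64, 3), dtype=np.float32)
    garment[20:40, 16:48] = [0.8, 0.1, 0.1]          # a red "shirt"
    out = try_on(person, garment, dy=2, dx=0)
    print(out.shape, out.min(), out.max())
```
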
  • Patent number: 11538226
    Abstract: An information processing device includes a vegetation analysis section configured to analyze a vegetation state of a monitoring area on the basis of detection information acquired from a detection unit and indicating a status of the monitoring area, a restricted area determination section configured to define a restricted area, where entry is restricted, in the monitoring area on the basis of the vegetation state, and a guidance information providing section configured to provide a terminal device with guidance information indicating restrictions on entry into the restricted area.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: December 27, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Masaya Honji, Makoto Yamamura, Tsukasa Sugino, Takayuki Kawai
  • Patent number: 11537854
    Abstract: Disclosed herein are related to a system and a method for providing an artificial reality. In one aspect, a system includes a shared physical memory and a first processor having access to the shared physical memory. In one aspect, the first processor performs, during a first time period, a first rendering process to generate a first image frame of a first view of an artificial reality. In one aspect, the first processor performs, during a second time period, a second rendering process to generate a second image frame of a second view of the artificial reality. In one aspect, the system includes a second processor including a neural network and having access to the shared physical memory. In one aspect, the second processor performs, during a third time period overlapping a portion of the second time period, an image enhancing process on the first image frame.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: December 27, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Behnam Bastani, Haomiao Jiang
  • Patent number: 11532114
    Abstract: A method and system for transforming simple user input into customizable animated images for use in text-messaging applications.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: December 20, 2022
    Assignee: EMONSTER INC
    Inventor: Enrique Bonansea
  • Patent number: 11532109
    Abstract: An image of a subterranean formation and log properties of the formation are provided in a single graphical representation. Logged formation property values are coded into graphic representations of images of the formation in order to provide a graphical representation that allows the user to visually perceive the formation images and the logged formation properties simultaneously. A method may include receiving an image of a formation, the image including image values based on the formation, and also receiving a log property of the formation, the log property including log property values based on the formation. The log property values of the formation are correlated to corresponding locations in the image. A transfer function with the image values and the correlated log property values as inputs is determined. Based on the transfer function, a joint graphical representation of the image and the log property is rendered.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: December 20, 2022
    Assignee: HALLIBURTON ENERGY SERVICES, INC.
    Inventors: Yangqiu Hu, Naum Derzhi, Jonas Toelke
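    Illustrative sketch: one possible transfer function of the kind this abstract describes, assuming the formation image drives brightness and the correlated log property drives the red/blue balance; the mapping and the synthetic data are assumptions for illustration only.

```python
# Joint rendering sketch: a transfer function taking formation image values and
# correlated log-property values and producing a single color-coded RGB array.
import numpy as np

def transfer_function(image_vals: np.ndarray, log_vals: np.ndarray) -> np.ndarray:
    """Return an RGB array encoding image values as brightness and log values as hue."""
    img = (image_vals - image_vals.min()) / (np.ptp(image_vals) + 1e-9)
    log = (log_vals - log_vals.min()) / (np.ptp(log_vals) + 1e-9)
    rgb = np.empty(image_vals.shape + (3,))
    rgb[..., 0] = img * log           # red grows with the log property
    rgb[..., 1] = img * 0.2           # small green floor scaled by brightness
    rgb[..., 2] = img * (1.0 - log)   # blue grows as the log property falls
    return rgb

if __name__ == "__main__":
    depth = np.linspace(1000.0, 1100.0, 200)
    image_vals = np.tile(np.sin(depth / 5.0), (32, 1)).T      # 200 x 32 synthetic "image"
    log_curve = np.interp(depth, [1000, 1100], [0.1, 0.9])    # log curve correlated to depth
    log_vals = np.tile(log_curve, (32, 1)).T                  # broadcast to image locations
    print(transfer_function(image_vals, log_vals).shape)      # (200, 32, 3)
```
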
  • Patent number: 11527045
    Abstract: Systems and methods are provided for the generation of augmented reality (AR) content that provides a shared AR experience involving multiple users. Shared AR experiences can improve the communication and collaboration between multiple simultaneous users. According to an embodiment, AR content is generated for a first user in a shared AR experience. The AR content includes at least one of a render of a model, a virtual representation of a second user in the shared AR experience, a virtual representation of a user interaction in the shared AR experience, and spatial audio content. Modifications to a shared AR experience are also provided. These modifications may be initiated based on instructions from one user and be reflected in the AR content generated for multiple users.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: December 13, 2022
    Assignee: SHOPIFY INC.
    Inventors: Juho Mikko Haapoja, Byron Leonel Delgado, Stephan Leroux, Daniel Beauchamp
  • Patent number: 11526964
    Abstract: An apparatus to facilitate deep learning based selection of samples for adaptive supersampling is disclosed. The apparatus includes one or more processing elements to: receive training data comprising input tiles and corresponding supersampling values for the input tiles, wherein each input tile comprises a plurality of pixels, and train, based on the training data, a machine learning model to identify a level of supersampling for a rendered tile of pixels.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: December 13, 2022
    Assignee: INTEL CORPORATION
    Inventors: Daniel Pohl, Carl Marshall, Selvakumar Panneer
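    Illustrative sketch: the training setup from this abstract reduced to a toy Python model that maps a single hand-crafted tile feature to a supersampling level. A real implementation would train a neural network on the (tile, supersampling value) pairs, so everything below is a simplified stand-in with illustrative names.

```python
# Toy "adaptive supersampling" model: learn per-level mean edge energy from
# labeled tiles, then predict a level for a new tile by nearest class mean.
import numpy as np

def tile_feature(tile: np.ndarray) -> float:
    """Edge energy of a tile: high-contrast tiles usually need more samples."""
    gx = np.abs(np.diff(tile, axis=1)).mean()
    gy = np.abs(np.diff(tile, axis=0)).mean()
    return float(gx + gy)

def train(tiles, levels):
    """Return per-level mean feature: the 'trained' model."""
    feats = np.array([tile_feature(t) for t in tiles])
    return {lvl: feats[np.array(levels) == lvl].mean() for lvl in set(levels)}

def predict(model, tile):
    f = tile_feature(tile)
    return min(model, key=lambda lvl: abs(model[lvl] - f))  # nearest class mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = [np.full((8, 8), 0.5) for _ in range(10)]         # smooth tiles labeled 1x
    noisy = [rng.random((8, 8)) for _ in range(10)]          # high-frequency tiles labeled 4x
    model = train(flat + noisy, [1] * 10 + [4] * 10)
    print(predict(model, np.full((8, 8), 0.3)), predict(model, rng.random((8, 8))))
```
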
  • Patent number: 11513753
    Abstract: The present invention provides a data processing method and an electronic terminal. The electronic terminal obtains target data that includes at least one data item, converts the target data into a data image by using a data visualization technology, and then sets the data image as wallpaper, where the data image includes at least one graphic element, and the graphic element is in a one-to-one correspondence with the data item. The target data is user data, and may include operation event information of operating the electronic terminal by a user, or information that is associated with a user account and that is based on at least one network platform, so as to automatically generate the wallpaper, show the user data to the user by using the wallpaper, and improve user experience.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: November 29, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Huaqi Hao
  • Patent number: 11514642
    Abstract: A method using a two-dimensional (2D) image representation of three-dimensional (3D) geometric objects in a machine learning framework has been developed. The method includes generating a single 2D geometry image corresponding to a 3D object model, and providing the single geometry image as input to a shape analysis task to enable shape analysis of the 3D object model based only on information encoded in the single 2D geometry image in the machine learning framework.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: November 29, 2022
    Assignee: Purdue Research Foundation
    Inventors: Ayan Tuhinendu Sinha, Karthik Ramani
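    Illustrative sketch: the core idea of a geometry image, shown for a unit sphere in Python. Each pixel of a regular 2D grid stores an (x, y, z) surface point, so ordinary 2D machine-learning tooling can consume the shape; the sphere parameterization is an assumed stand-in for the patent's mesh-to-image construction.

```python
# Encode a 3D surface as a 2D grid of (x, y, z) samples (a "geometry image").
import numpy as np

def geometry_image(resolution: int = 64) -> np.ndarray:
    """Return a (resolution, resolution, 3) array sampling a unit sphere."""
    u = np.linspace(0.0, 2.0 * np.pi, resolution)   # longitude
    v = np.linspace(0.0, np.pi, resolution)         # latitude
    uu, vv = np.meshgrid(u, v)
    return np.stack([np.cos(uu) * np.sin(vv),
                     np.sin(uu) * np.sin(vv),
                     np.cos(vv)], axis=-1)

if __name__ == "__main__":
    gim = geometry_image(64)
    # The 3-channel image can now be passed to any 2D CNN for shape analysis.
    print(gim.shape, np.linalg.norm(gim, axis=-1).round(3).max())  # (64, 64, 3) 1.0
```
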
  • Patent number: 11514650
    Abstract: An electronic apparatus is provided. The electronic apparatus includes a display, a camera configured to capture an area to the rear of the electronic apparatus, opposite the front on which the display displays an image, and a processor configured to render a virtual object based on the image captured by the camera, based on a user body being detected from the captured image, estimate a plurality of joint coordinates with respect to the detected user body using a pre-trained learning model, generate an augmented reality image using the estimated plurality of joint coordinates, the rendered virtual object, and the captured image, and control the display to display the generated augmented reality image, wherein the processor is configured to identify whether the user body touches the virtual object based on the estimated plurality of joint coordinates, and change a transmittance of the virtual object based on the touch being identified.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: November 29, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yongsung Kim, Daehyun Ban, Dongwan Lee, Hongpyo Lee, Lei Zhang
  • Patent number: 11508130
    Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user including enhancement graphics positioned at the feature points within the environment.
    Type: Grant
    Filed: June 13, 2020
    Date of Patent: November 22, 2022
    Assignee: Snap Inc.
    Inventors: Ilteris Canberk, Sumant Hanumante, Dhritiman Sagar, Stanislav Minakov
  • Patent number: 11501503
    Abstract: The present invention contemplates a method of producing a walkabout reality for a user through an augmented reality engine. The augmented reality engine retrieves data associated with user behavioral characteristics and identifies user behavioral characteristics from user patterns of behavior in at least one of a third-party virtual environment and a current physical environment. The augmented reality engine further analyzes the current physical environment to determine one or more customizable elements of the current physical environment and determines a predicted visual preference of the user. The augmented reality engine identifies one or more visual elements associated with the predicted visual preference of the user and renders a virtualized current physical environment within a threshold distance of the user by superimposing at least one of the one or more visual elements associated with the third-party virtual environment onto the one or more determined features and associated feature characteristics.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: November 15, 2022
    Assignee: Wormhole Labs, Inc.
    Inventors: Curtis Hutten, Robert D. Fish
  • Patent number: 11495018
    Abstract: In certain embodiments, item relocation may be facilitated via augmented reality cues and location-based confirmation. In some embodiments, in response to a detection of a first pattern in a live video stream obtained at a client device, a first location associated with the client device may be obtained, and an augmented reality presentation of a visual directional cue may be presented on a user interface of the client device such that the visual directional cue is overlaid on the live video stream. The visual directional cue may include visual directions from the first location to a destination location. In response to an indication that the item has been relocated to the destination location, a determination may be made as to whether the client device is within a threshold distance from the destination location. A confirmation may be generated in response to the client device being within the threshold distance.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: November 8, 2022
    Assignee: STAMPS.COM INC.
    Inventor: Charles Atkinson
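    Illustrative sketch: the location-based confirmation step from this abstract in Python, using the standard haversine great-circle distance and an assumed 15-meter threshold; the coordinates and function names are illustrative only.

```python
# Confirm an item relocation only if the device is within a threshold distance
# of the destination location reported for the relocation.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def confirm_relocation(device, destination, threshold_m=15.0) -> bool:
    """Generate a confirmation only if the device is near the destination."""
    return haversine_m(*device, *destination) <= threshold_m

if __name__ == "__main__":
    dest = (34.0522, -118.2437)
    print(confirm_relocation((34.05225, -118.24372), dest))  # True: a few meters away
    print(confirm_relocation((34.0600, -118.2437), dest))    # False: too far
```
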
  • Patent number: 11481979
    Abstract: Systems and methods are described for mobile and augmented reality-based depth and thermal fusion scan imaging. Some embodiments of the present technology use sophisticated techniques to fuse information from thermal and depth imaging channels to achieve synergistic effects for object recognition and personal identification. Hence, the techniques used in various embodiments provide a much better solution for first responders, disaster relief agents, search and rescue teams, and law enforcement officials to gather more detailed forensic data. Some embodiments provide a series of unique features, including small size, wearable devices, and the ability to feed fused depth and thermal streams into AR glasses. In addition, some embodiments use a two-layer architecture that performs device-local fusion and relies on a cloud-based platform for integration of data from multiple devices and for cross-scene analysis and reconstruction.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: October 25, 2022
    Assignee: The Regents of the University of Colorado, a body corporate
    Inventors: Min-Hyung Choi, Shane Transue
  • Patent number: 11475607
    Abstract: Embodiments of the disclosure provide methods, apparatus and computer programs for generating a radio coverage map. A method comprises: obtaining image data of a geographical area, the image data comprising: a representation of the environment in the geographical area; and an indication of one or more transmission point locations corresponding to the locations of one or more transmission points in a wireless communications network; and applying a generative model to the image data, to generate a radio coverage map of the geographical area.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: October 18, 2022
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Jaeseong Jeong, Martin Isaksson, Yu Wang
  • Patent number: 11468540
    Abstract: Disclosed are a method and a device for image processing. The image processing device may include a processor and a controller. The processor may include an artificial intelligence (AI) image processing model trained in image processing through learning, and an arithmetic logic unit (ALU) configured to perform a computation involved in image processing using the AI image processing model. According to the present disclosure, image processing using a deep neural network (DNN) is possible in an edge device.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: October 11, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Jaehyun An, Jiwon Lee, Dongkyu Lee, Aram Kim, Jingyeong Kim
  • Patent number: 11468785
    Abstract: A system and method for multi-stage brain-computer interface training using neural networks that reliably and predictably maps a user's thoughts to particular movements or actions in a computer-generated environment. The system comprises two stages: a pre-training stage, wherein specific exercises are generated on screen and brain activity is mapped to the exercises using a neural network as the user attempts to complete them, and an in-use stage, wherein an initial mapping profile is loaded, brain activity is mapped to in-use interactions using a neural network, and those in-use mappings are compared to a library of stored mappings using a neural network to select a more accurate mapping for use in a given situation.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: October 11, 2022
    Assignee: TREV LABS, LLC
    Inventor: Abby D. Levenberg
  • Patent number: 11461991
    Abstract: Disclosed herein are systems and methods for developing a database of controllable objects in an environment. For example, a method uses a mobile device having a camera to capture images of objects in an environment. For each object, the method includes, in response to receiving a user selection of the object, training a machine-learning model to recognize the object. The method includes receiving a command associated with the object, receiving a plurality of images of the object, and training the machine-learning model to recognize the object based on the plurality of images. The method further includes transmitting the trained model and the command to a wearable electronic device, causing the wearable electronic device to save the trained machine-learning model to a data store and to associate the command with the trained machine-learning model.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: October 4, 2022
    Assignee: Imagine Technologies, Inc.
    Inventors: Ian Davies Troisi, Justin Henry Deegan, Connor Liam McFadden, Nicholas Albert Silenzi