Patents Examined by Vu Le
  • Patent number: 12387837
    Abstract: In some embodiment examples, a method of processing ophthalmic data includes preparing a data set acquired by applying optical scanning of a two-dimensional pattern to a subject's eye. The two-dimensional pattern of the optical scanning includes a series of cycles that intersect each other. The method further includes generating position history data based on the data set. The position history data represents a temporal change in a position of the subject's eye.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: August 12, 2025
    Assignees: UNIVERSITY OF TSUKUBA, TOPCON CORPORATION
    Inventors: Yoshiaki Yasuno, Shuichi Makita, Tatsuo Yamaguchi, Shinnosuke Azuma
  • Patent number: 12387451
    Abstract: This application provides an image obtaining method and apparatus. The image obtaining method according to this application includes: obtaining first original image data, where the first original image data is captured by an image sensor based on an initial visible light exposure parameter and luminous intensity of an infrared illuminator; obtaining a luminance of a visible light image based on the first original image data; adjusting the visible light exposure parameter based on a first difference, where the first difference is a difference between the luminance of the visible light image and preset target luminance of the visible light image; obtaining a luminance of an infrared image based on the first original image data; adjusting the luminous intensity of the infrared illuminator based on a second difference, where the second difference is a difference between the luminance of the infrared image and preset target luminance of the infrared image.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: August 12, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jiaojiao Tu, Jing Lan, Yibo Su
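    The dual feedback loop in the abstract above can be sketched as two proportional controllers, one per luminance difference. The gains and target luminances here are hypothetical illustration values, not taken from the patent:

    ```python
    def adjust_controls(vis_lum, ir_lum, exposure, ir_intensity,
                        target_vis=118.0, target_ir=110.0, gain=0.01):
        """Proportional sketch: nudge the visible-light exposure toward the
        visible-image target luminance (first difference) and the infrared
        illuminator intensity toward the IR-image target (second difference)."""
        exposure = exposure * (1.0 - gain * (vis_lum - target_vis))
        ir_intensity = ir_intensity * (1.0 - gain * (ir_lum - target_ir))
        return exposure, ir_intensity
    ```

    A visible image brighter than its target shortens the exposure; an infrared image darker than its target drives the illuminator harder, each loop acting on its own difference as the abstract describes.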
  • Patent number: 12383811
    Abstract: A display apparatus may include: a camera; a display; a memory storing one or more instructions; and at least one processor configured to execute the one or more instructions stored in the memory to: analyze an image captured by the camera and detect a free space included in the image; determine an exercise motion performable in the free space, based on the detected free space; and control the display to output exercise content based on the determined exercise motion.
    Type: Grant
    Filed: September 9, 2022
    Date of Patent: August 12, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hyungrae Kim
  • Patent number: 12380692
    Abstract: Systems and techniques for abandoned object detection are described herein. A fence is established about a person and an object is detected within the fence. An entry is created in an object-person relationship data structure to establish a relationship between the person and the object within the fence. Then, the position of the object is monitored until an indication that the fence is terminated is received. If it is detected that the object is outside the fence during the monitoring, the person is alerted.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: August 5, 2025
    Assignee: Intel Corporation
    Inventors: Charmaine Rui Qin Chan, Chia Chuan Wu, Marcos E. Carranza, Ignacio Javier Alvarez Martinez, Wei Seng Yeap, Tung Lun Loo
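    The fence-and-monitor loop above reduces to a simple geometric check. A minimal sketch, assuming a circular fence and a sequence of tracked object positions (both assumptions; the patent does not specify the fence geometry):

    ```python
    import math

    def inside_fence(pos, center, radius):
        """True if a 2D position lies within the circular fence."""
        return math.dist(pos, center) <= radius

    def monitor(object_positions, fence_center, fence_radius):
        """Return the index of the first sample at which the tracked object
        leaves the fence (the point where the person would be alerted),
        or None if it never leaves before the fence is terminated."""
        for t, pos in enumerate(object_positions):
            if not inside_fence(pos, fence_center, fence_radius):
                return t
        return None
    ```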
  • Patent number: 12380679
    Abstract: Systems and methods for machine learning are described. The systems and methods include receiving target training data including a training image and ground truth label data for the training image, generating source network features for the training image using a source network trained on source training data, generating target network features for the training image using a target network, generating at least one attention map for training the target network based on the source network features and the target network features using a guided attention transfer network, and updating parameters of the target network based on the attention map and the ground truth label data.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: August 5, 2025
    Assignee: ADOBE INC.
    Inventors: Divya Kothandaraman, Sumit Shekhar, Abhilasha Sancheti, Manoj Ghuhan Arivazhagan, Tripti Shukla
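    A common way to realize the attention maps described above is to collapse feature tensors into spatial maps and penalize their disagreement; the activation-based formulation below is a standard attention-transfer sketch, not necessarily the patent's exact construction:

    ```python
    import numpy as np

    def attention_map(features):
        """Collapse a (C, H, W) feature tensor into a normalized (H, W)
        spatial attention map by summing squared activations over channels."""
        amap = (features ** 2).sum(axis=0)
        return amap / (np.linalg.norm(amap) + 1e-8)

    def attention_transfer_loss(source_feats, target_feats):
        """Squared L2 distance between source- and target-network attention
        maps; minimizing it guides the target network toward the regions
        the source network attends to."""
        diff = attention_map(source_feats) - attention_map(target_feats)
        return float(np.sum(diff ** 2))
    ```

    In training, this term would be added to the ground-truth label loss when updating the target network's parameters.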
  • Patent number: 12380725
    Abstract: Provided is a system that enables inexpensive and accurate identification of the seating position of each user in a free address office without incurring additional equipment costs. The system identifies a user who has entered an office by performing a face verification operation which involves comparing a face image of an entering person who is entering the office acquired from an image captured by a first entrance camera, with the face image of each registered user for matching, and identifies the seating position of the user in the office by performing a person verification operation which involves comparing a first person image acquired from an image captured by a second entrance camera, with a second person image acquired from an image captured by an in-area camera to thereby associate a person who has entered the office with a corresponding person who is seated in the office.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: August 5, 2025
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Sonoko Hirasawa, Takeshi Fujimatsu
  • Patent number: 12380695
    Abstract: A system includes one or more processors; and one or more non-transitory, computer-readable media including instructions that, when executed by the one or more processors, cause the system to: receive a machine data set; process the machine data set with a trained deep learning model to generate predicted variety profile index values; and cause a visualization to be displayed.
    Type: Grant
    Filed: October 6, 2023
    Date of Patent: August 5, 2025
    Assignee: ADVANCED AGRILYTICS HOLDINGS, LLC
    Inventors: William Kess Berg, Jon J. Fridgen, Jonathan Michael Bokmeyer, Andrew James Woodyard
  • Patent number: 12373961
    Abstract: Systems and methods for automatically registering a first input medical image and a second input medical image are provided. The first input medical image in a first modality and the second input medical image in a second modality are received. One or more objects of interest are segmented from the first input medical image to generate a first segmentation map and one or more objects of interest are segmented from the second input medical image to generate a second segmentation map. A first point cloud is extracted from the first segmentation map and a second point cloud is extracted from the second segmentation map. A transformation for aligning the first point cloud and the second point cloud is determined to register the first input medical image and the second input medical image. The transformation is output.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: July 29, 2025
    Assignee: Siemens Healthineers AG
    Inventors: Sureerat Reaungamornrat, Mamadou Diallo, Ali Kamen
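    Once both segmentation maps have been reduced to point clouds, determining the aligning transformation is a classic rigid-registration step. A minimal sketch using the Kabsch least-squares solution for corresponding points (the patent does not state which alignment algorithm is used):

    ```python
    import numpy as np

    def register_point_clouds(src, dst):
        """Least-squares rigid transform (R, t) mapping corresponding points
        src -> dst: center both clouds, take the SVD of the cross-covariance,
        and correct for reflections. Returns R (3x3) and t (3,)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
        return R, t
    ```

    Because the points come from modality-independent segmentation maps rather than raw intensities, the same alignment machinery works across the two imaging modalities.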
  • Patent number: 12373960
    Abstract: In various examples, systems and methods of the present disclosure detect and/or track objects in an environment using projection images generated from LiDAR. For example, a machine learning model—such as a deep neural network (DNN)—may be used to compute a motion mask indicative of motion corresponding to points representing objects in an environment. Various input channels may be provided as input to the machine learning model to compute a motion mask. One or more comparison images may be generated based on comparing depth values projected from a current range image to a coordinate space of a previous range image to depth values of the previous range image. The machine learning model may use the one or more projection images, the one or more comparison images, and/or the one or more range images to compute a motion mask and/or a motion vector output representation.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: July 29, 2025
    Assignee: NVIDIA Corporation
    Inventors: Jens Christian Bo Joergensen, Ollin Boer Bohan, Joachim Pehserl, Nikolai Smolyanskiy
  • Patent number: 12361703
    Abstract: An object detection arrangement having a controller configured to: a) receive a plurality of image data streams; b) perform feature extraction on each of the received plurality of images, providing a plurality of feature data streams; and c) perform a common feature extraction based on the plurality of feature data streams, providing a common feature data stream for object detection.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: July 15, 2025
    Assignee: Telefonaktiebolaget LM Ericsson (Publ)
    Inventors: Ashkan Kalantari, Héctor Caltenco, Saeed Bastani, Yun Li
  • Patent number: 12361094
    Abstract: A training data generation device generates training data usable in machine learning. A learned model using the training data generated by the training data generation device is used in an inspection device for determining whether an inspection target is a normal product by inputting an image capturing the inspection target into the learned model. The training data generation device includes: a determination-target image extraction unit that extracts, from an input image, one or more determination-target images containing a determination target that satisfies a predetermined condition; a sorting unit that associates, on the basis of sorting the inspection target captured in the determination-target image, each of the determination-target images and a result of the sorting with each other; and a training data memory unit that stores training data in which each of the determination-target images and a result of the sorting are associated with each other.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: July 15, 2025
    Assignee: SYSTEM SQUARE INC.
    Inventor: Sachihiro Nakagawa
  • Patent number: 12354345
    Abstract: A computing system includes processors and computer-readable media having stored instructions that, when executed, cause the system to receive a machine data set, retrieve one or more spatial data files, process the spatial data files and the machine data set using a regression machine learning model to generate predicted values, determine a plurality of environment-specific varietal responses for each agricultural field, generate a multi-genetics planting recommendation and a map layer showing respective predicted variety profile index values for each agricultural field, and display the multi-genetics planting recommendation and the map layer via a graphical user interface.
    Type: Grant
    Filed: March 20, 2024
    Date of Patent: July 8, 2025
    Assignee: ADVANCED AGRILYTICS HOLDINGS, LLC
    Inventors: William Kess Berg, Jon J. Fridgen, Jonathan Michael Bokmeyer, Andrew James Woodyard
  • Patent number: 12354284
    Abstract: There is provided a method, apparatus and system for adapting a machine learning model for optical flow prediction. A machine learning model can be trained or adapted based on compressed video data, using motion vector information extracted from the compressed video data as ground-truth information for use in adapting the model to a motion vector prediction task. The model so adapted can accordingly be adapted for the similar task of optical flow prediction. Thus, the model can be adapted at test time to image data which is taken from an appropriate distribution. A meta-learning process can be performed prior to such model adaptation to potentially improve the model's performance.
    Type: Grant
    Filed: November 11, 2021
    Date of Patent: July 8, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wentao Liu, Seyed Mehdi Ayyoubzadeh, Yuanhao Yu, Irina Kezele, Yang Wang, Xiaolin Wu, Jin Tang
  • Patent number: 12354280
    Abstract: In one embodiment, a method includes identifying, in each image of a stereoscopic pair of images of a scene at a particular time, every pixel as either a static pixel corresponding to a portion of a scene that does not have local motion at that time or a dynamic pixel corresponding to a portion of a scene that has local motion at that time. For each static pixel, the method includes comparing each of a plurality of depth calculations for the pixel, and when the depth calculations differ by at least a threshold amount, then re-labeling that pixel as a dynamic pixel. For each dynamic pixel, the method includes comparing a geometric 3D calculation for the pixel with a temporal 3D calculation for that pixel, and when the geometric 3D calculation and the temporal 3D calculation are within a threshold amount, then re-labeling the pixel as a static pixel.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: July 8, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher Peri
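    The two relabeling rules above can be sketched per pixel; the scalar depth/3D values and the shared threshold below are simplifications for illustration:

    ```python
    def relabel_pixels(labels, depth_a, depth_b, geometric_3d, temporal_3d,
                       thresh=0.5):
        """Consistency-check sketch: a 'static' pixel whose two depth
        calculations disagree by at least the threshold becomes 'dynamic';
        a 'dynamic' pixel whose geometric and temporal 3D calculations
        agree within the threshold becomes 'static'."""
        out = []
        for lbl, da, db, g, t in zip(labels, depth_a, depth_b,
                                     geometric_3d, temporal_3d):
            if lbl == "static" and abs(da - db) >= thresh:
                out.append("dynamic")
            elif lbl == "dynamic" and abs(g - t) < thresh:
                out.append("static")
            else:
                out.append(lbl)
        return out
    ```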
  • Patent number: 12354388
    Abstract: According to an embodiment of the specification, disclosed is an electronic device that obtains an image by using a camera, identifies an object-of-interest among a plurality of objects included in the image, determines a selected segmentation model among a plurality of segmentation models based on a size of the object-of-interest, and applies the determined segmentation model to a region of interest (ROI) of the image containing the object-of-interest.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: July 8, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jongbum Choi, Youngjo Kim, Hyunhee Park, Hyungju Chun, Changsu Han, Jonghun Won
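    Selecting a segmentation model by object size is essentially a lookup over size bands. The band boundaries and model names below are hypothetical; the abstract does not give thresholds:

    ```python
    def select_segmentation_model(roi_area, models=None):
        """Pick a segmentation model for the ROI based on the pixel area of
        the object-of-interest. `models` is an ordered list of
        (max_area, model_name) bands; the first band that fits wins."""
        if models is None:
            models = [(32 * 32, "lightweight"),
                      (128 * 128, "standard"),
                      (float("inf"), "high_detail")]
        for max_area, name in models:
            if roi_area <= max_area:
                return name
    ```

    Running only the chosen model on the ROI, rather than a single heavy model on the whole frame, is what makes the size-dependent selection worthwhile.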
  • Patent number: 12346409
    Abstract: In one implementation, a method of presenting content is performed by a device including an image sensor, a display one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, an image of a physical environment. The method includes classifying, based on the image of the physical environment, the physical environment as a particular environment type of a plurality of environment types. The method includes obtaining content based on the particular environment type. The method includes displaying, on the display, a representation of the content in association with the physical environment.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: July 1, 2025
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
  • Patent number: 12347113
    Abstract: A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
    Type: Grant
    Filed: October 16, 2023
    Date of Patent: July 1, 2025
    Assignee: Deere & Company
    Inventors: Christopher Grant Padwick, William Louis Patzoldt, Benjamin Kahn Cline, Olgert Denas, Sonali Subhash Tanna
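    Downstream of the plant identification model, actuating the treatment mechanism amounts to filtering detections by plant group. A minimal sketch with hypothetical group names:

    ```python
    def treatment_targets(identified, target_groups):
        """From (plant_group, location) detections produced by a plant
        identification model, keep the locations whose group should be
        treated (e.g. spray weeds, skip the crop)."""
        return [loc for group, loc in identified if group in target_groups]
    ```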
  • Patent number: 12340563
    Abstract: Embodiments are disclosed for correlating video sequences and audio sequences by a media recommendation system using a trained encoder network.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: June 24, 2025
    Assignee: Adobe Inc.
    Inventors: Justin Salamon, Bryan Russell, Didac Suris Coll-Vinent
  • Patent number: 12333748
    Abstract: A method for generating a depth map of a region of a surface of a workpiece includes receiving a stack of images. The images image the region of the surface of the workpiece with defined focal plane positions that are different in a depth direction and a focal plane position is assigned to each. Image points of the images are respectively assigned to a corresponding object point on the surface. The method includes determining a focus value of each image point of each image. The method includes fitting a function along the depth direction to the focus values of those image points that are assigned to the same object point. The method includes determining a depth value of each object point on the surface in the depth direction based on an extremum of the fitted function. The method includes generating the depth map based on the determined depth values.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: June 17, 2025
    Assignees: Carl Zeiss Industrielle Messtechnik GmbH, Carl Zeiss Microscopy GmbH
    Inventors: Tomas Aidukas, Sören Schmidt, Daniel Plohmann
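    The per-object-point fit above is the standard depth-from-focus refinement: fit a curve to the focus values along the depth axis and take its extremum. A sketch using a quadratic fit (the patent does not fix the function class):

    ```python
    import numpy as np

    def depth_from_focus(z_positions, focus_values):
        """Fit a parabola to the focus values of one object point across the
        focal-plane positions of the image stack and return the z of its
        extremum, i.e. the estimated depth of that point."""
        a, b, _ = np.polyfit(z_positions, focus_values, 2)
        return -b / (2 * a)   # vertex of a*z^2 + b*z + c
    ```

    Repeating this for every object point yields the depth map, with sub-plane accuracy because the extremum can fall between sampled focal positions.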
  • Patent number: 12334221
    Abstract: An automated and quantitative facial weakness screening framework that utilizes a Bi-LSTM network to model the temporal dynamics among the shape and appearance features. The technique helps paramedics or other users identify facial weakness in the field or, more importantly, whenever expertise in neurology is not available, whether for emergency patient triage (e.g., pre-hospital stroke care) or chronic disease management (e.g., Bell's palsy rehabilitation screening), leading to increased coverage and earlier treatment. The technique provides visualizable and interpretable results to increase its transparency and interpretability, and offers inexpensive solutions that non-neurologists can use in underserved areas to more readily identify neurological deficits such as facial weakness in the field or other environments.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: June 17, 2025
    Assignee: University of Virginia Patent Foundation
    Inventors: Gustavo Rohde, Andrew M. Southerland, Yan Zhuang, Mark McDonald, Omar Uribe, Chad M. Aldridge, Mohamed Abul Hassan