Patents Examined by C. L.
  • Patent number: 12280368
    Abstract: A connected ecosystem for a laboratory environment comprises an electronic lab notebook, an instrumented biosafety cabinet, and one or more sensing vessels containing cell cultures. The electronic lab notebook interfaces with the instrumented biosafety cabinet to provide instructions, guidance, and monitoring of a user during the set up of the experimental protocol and to receive commands from the user via one of several input modalities. After the experimental protocol has been set up in the instrumented biosafety cabinet, cell cultures may be moved to an incubator where the connected ecosystem may provide automatic monitoring of the cultures. The automatic monitoring is provided by sensors integrated into cell culture vessels and supplemented by images of the cell cultures captured by a camera. The user may be informed of deviations from expected results detected based on the automatic monitoring.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: April 22, 2025
    Assignee: CORNING INCORPORATED
    Inventors: John Wilfred Coddaire, Maryanne De Chambeau, James Thomas Eickmann, Paula Mary Flaherty, Anthony Glenn Frutos, Vasiliy Nikolaevich Goral, Angela Langer Julien, Marshall Jay Kosovsky, Brent Ravaughn Lanterman, Gregory Roger Martin, Christie Leigh McCarthy, John Forrest Roush, John Shyu, Tora Ann-Beatrice Eline Sirkka, Allison Jean Tanner, Kimberly Ann Titus, Todd Michael Upton, Timothy James Wood
  • Patent number: 12266113
    Abstract: A device automatically segments an image into different regions and automatically adjusts perceived exposure levels or other characteristics associated with each of the different regions, to produce pictures that exceed expectations for the type of optics and camera equipment being used and in some cases, the pictures even resemble other high-quality photography created using professional equipment and photo editing software. A machine-learned model is trained to automatically segment an image into distinct regions. The model outputs one or more masks that define the distinct regions. The mask(s) are refined using a guided filter or other technique to ensure that edges of the mask(s) conform to edges of objects depicted in the image. By applying the mask(s) to the image, the device can individually adjust respective characteristics of each of the different regions to produce a higher-quality picture of a scene.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: April 1, 2025
    Assignee: Google LLC
    Inventors: Orly Liba, Florian Kainz, Longqi Cai, Yael Pritch Knaan
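The per-region adjustment described in this abstract can be illustrated with a minimal sketch: a soft segmentation mask blends a separate exposure gain for each region. This is a generic illustration, not the patented method; the function name and gain values are hypothetical.

```python
import numpy as np

def adjust_regions(image, mask, fg_gain=1.3, bg_gain=0.8):
    """Blend per-region exposure gains using a soft segmentation mask.

    image: float array in [0, 1], shape (H, W)
    mask:  float array in [0, 1], 1.0 inside the segmented region
    """
    adjusted = image * (mask * fg_gain + (1.0 - mask) * bg_gain)
    return np.clip(adjusted, 0.0, 1.0)

image = np.full((4, 4), 0.5)
mask = np.zeros((4, 4))
mask[:2] = 1.0   # top half is the "subject" region
out = adjust_regions(image, mask)
```

Because the mask is soft (values between 0 and 1 near edges), the gains blend smoothly across region boundaries rather than producing a hard seam.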
  • Patent number: 12225291
    Abstract: Disclosed herein are system, method, and computer program product embodiments for online sensor motion compensation. For example, the method includes: applying a random mechanical excitation to a support structure, wherein a plurality of image capture devices and a plurality of sets of strain gauges are coupled to the support structure; measuring, with each set of strain gauges of the plurality of sets of strain gauges, simultaneous to the application of the random mechanical excitation, a strain; capturing, with each image capture device of the plurality of image capture devices, simultaneous to the application of the random mechanical excitation, a series of images of a calibration target; and generating, based on the strain and the series of images, a mapping between the strain and a displacement between the plurality of image capture devices.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: February 11, 2025
    Assignee: Ford Global Technologies, LLC
    Inventor: Casey Sennott
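The calibration step in this abstract amounts to learning a regression from strain-gauge readings to camera displacement. A minimal sketch, using synthetic data and a plain least-squares fit (the patent's actual mapping may be more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: strain-gauge readings recorded during a
# random mechanical excitation (n_samples x n_gauges) and the
# camera-measured relative displacements they correspond to.
strain = rng.normal(size=(200, 6))
true_map = rng.normal(size=(6, 2))
displacement = strain @ true_map + 0.01 * rng.normal(size=(200, 2))

# Fit the strain-to-displacement mapping by least squares.
W, *_ = np.linalg.lstsq(strain, displacement, rcond=None)

# At run time, strain readings alone predict the inter-camera displacement.
predicted = strain @ W
```

Once the mapping `W` is calibrated, the system no longer needs the calibration target: strain measurements are enough to compensate sensor motion online.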
  • Patent number: 12217440
    Abstract: A computer implemented method of decoding a signal. The method includes receiving a signal (which may be an electromagnetic signal), sampling the received signal to generate an input waveform having magnitude and phase components, applying a transform operation to the input waveform to generate a first decoded signal, and outputting the first decoded signal. The transform operation includes pre-processing the input waveform to generate a mirrored inverted waveform and applying a continuous wavelet transform to the mirrored inverted waveform to generate the first decoded signal. This allows inversion of the frequency and temporal resolution of the continuous wavelet transform, thereby enabling improved temporal and frequency decoding of a signal. The method is particularly suitable for signal filters and filtering units.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: February 4, 2025
    Assignee: The Secretary of State for Defence
    Inventor: Paul Mason
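The pre-processing step this abstract describes (mirroring and inverting the sampled waveform before the wavelet transform) can be sketched generically. The Ricker wavelet and the convolution-based transform below are standard textbook choices, not the specific transform of the patent:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, a classic CWT kernel."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt_mirrored_inverted(signal, widths):
    """Time-mirror and amplitude-invert the input waveform, then apply a
    plain continuous wavelet transform at each requested width."""
    pre = -signal[::-1]          # mirrored, inverted waveform
    rows = [np.convolve(pre, ricker(min(10 * w, len(pre)), w), mode="same")
            for w in widths]
    return np.vstack(rows)

t = np.linspace(0, 1, 256)
sig = np.sin(2 * np.pi * 8 * t)
coeffs = cwt_mirrored_inverted(sig, widths=[1, 2, 4, 8])
```

The output is the usual scale-by-time coefficient matrix; per the abstract, the mirroring/inversion is what swaps the transform's frequency and temporal resolution behaviour.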
  • Patent number: 12211216
    Abstract: Disclosed are apparatuses, systems, and techniques that may perform efficient deployment of machine learning for detection and classification of moving objects in streams of images. A set of machine learning models with different input sizes may be used for parallel processing of various regions of interest in multiple streams of images. Both the machine learning models as well as the inputs into these models may be selected dynamically based on a size of the regions of interest.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: January 28, 2025
    Assignee: NVIDIA Corporation
    Inventor: Tushar Khinvasara
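The dynamic selection described here (choosing a model whose input size fits the region of interest) can be sketched as a simple lookup. The size pool below is hypothetical:

```python
# Map a region of interest to the smallest model input that contains it,
# so small objects don't pay the inference cost of a large-input network.
MODEL_INPUT_SIZES = [64, 128, 256, 512]   # hypothetical model pool

def select_model_input(roi_w, roi_h):
    """Return the smallest pooled input size that fits the ROI."""
    longest = max(roi_w, roi_h)
    for size in MODEL_INPUT_SIZES:
        if longest <= size:
            return size
    return MODEL_INPUT_SIZES[-1]
```

In a multi-stream deployment, each ROI from each stream would be routed to the matching model instance, letting differently sized networks run in parallel.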
  • Patent number: 12205383
    Abstract: A method of recognizing target objects in images obtains a detection image of a target object. A template image is generated according to the target object. The detection image is compared with the template image to obtain a comparison result. Candidate regions of the target object are determined in the detection image according to the comparison result. At least one target region of the target object is obtained from the candidate regions. The method enables rapid detection of target objects in images.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: January 21, 2025
    Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Cheng-Feng Wang, Hui-Xian Yang, Li-Che Lin
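The template comparison step in this abstract is commonly realized with normalized cross-correlation. A brute-force sketch (the patent's comparison may differ; `match_template` and its threshold are illustrative):

```python
import numpy as np

def match_template(image, template, threshold=0.9):
    """Slide the template over the image; return top-left positions of
    candidate regions whose normalized cross-correlation clears threshold."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    candidates = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            if denom > 0 and (p * t).sum() / denom >= threshold:
                candidates.append((y, x))
    return candidates

img = np.zeros((8, 8))
img[2:4, 3:5] = 1.0                       # the "target object"
candidates = match_template(img, img[1:5, 2:6].copy())
```

Candidate regions found this way would then be filtered down to the final target region(s), as the abstract describes.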
  • Patent number: 12198414
    Abstract: Examples described herein provide a method that includes performing, by a processing device, using a neural network, pattern recognition on an image to recognize a feature in the image. The method further includes performing, by the processing device, upscaling of the image to increase a resolution of the image while maintaining the feature to generate an upscaled image.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: January 14, 2025
    Assignee: FARO Technologies, Inc.
    Inventors: Michael Müller, Georgios Balatzis
  • Patent number: 12175678
    Abstract: An image processing apparatus, including a memory configured to store one or more instructions; and at least one processor configured to execute the one or more instructions to: based on a first image and a probability model, optimize an estimated pixel value and estimated gradient values of each pixel of an original image corresponding to the first image, obtain an estimated original image based on the optimized estimated pixel value of each pixel of the original image, obtain a decontour map based on the optimized estimated pixel value and the estimated gradient values of each pixel of the original image, and generate a second image by combining the first image with the estimated original image based on the decontour map.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: December 24, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hanul Shin, Soomin Kang, Jaeyeon Park, Youngchan Song, Iljun Ahn, Tammy Lee
  • Patent number: 12175770
    Abstract: A lane extraction method uses projection transformation of a 3D point cloud map, reducing the amount of computation required to extract lane coordinates by performing deep learning and lane extraction in a two-dimensional (2D) domain, so that lane information is obtained in real time. In addition, black-and-white brightness, the most important information for lane extraction in an image, is substituted by the reflection intensity of a light detection and ranging (LiDAR) sensor, so that a deep learning model capable of accurately extracting a lane is provided. Reliability and competitiveness are therefore enhanced in the fields of autonomous driving, road recognition, lane recognition, and HD road maps for autonomous driving, as well as similar and related fields, and more particularly in road recognition and autonomous driving using LiDAR.
    Type: Grant
    Filed: June 7, 2022
    Date of Patent: December 24, 2024
    Assignee: MOBILTECH
    Inventors: Jae Seung Kim, Yeon Soo Park
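The projection transformation this abstract relies on can be illustrated with a bird's-eye-view rasterization that stores LiDAR reflection intensity where an image would store brightness. A minimal sketch with hypothetical ranges and resolution:

```python
import numpy as np

def project_to_bev(points, intensity, x_range=(0, 40), y_range=(-20, 20), res=0.5):
    """Project LiDAR (x, y) coordinates onto a 2D bird's-eye-view grid,
    storing reflection intensity in place of camera brightness."""
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / res).astype(int)
    ys = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    bev[ys[keep], xs[keep]] = intensity[keep]
    return bev

pts = np.array([[10.0, 0.0, 0.2], [100.0, 0.0, 0.1]])   # second point out of range
inten = np.array([0.9, 0.5])
bev = project_to_bev(pts, inten)
```

The resulting 2D intensity image is what a conventional image-domain lane-detection network can consume, which is the source of the computational savings the abstract claims.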
  • Patent number: 12165295
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Grant
    Filed: May 4, 2022
    Date of Patent: December 10, 2024
    Assignee: Adobe Inc.
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Patent number: 12159368
    Abstract: An electronic device for automated image-based auditing of equipment cabinets may include a memory storing an imaging computer program; and a computer processor. The imaging computer program, when executed by the computer processor, may cause the computer processor to perform the following: receive a plurality of images from an image capture device on a carriage, wherein the image capture device is configured to traverse an equipment cabinet and capture the plurality of images of equipment in the equipment cabinet; generate a single image by stitching the plurality of images together; receive data from a sensor on the carriage, wherein the sensor is configured to capture data from the equipment in the equipment cabinet; associate the data with a location in the equipment cabinet; compare the single image and the data to an expected image and expected data; and output a result of the comparison.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: December 3, 2024
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Jacob Cox, Robert S. Newnam, Zhen Du
  • Patent number: 12159436
    Abstract: Provided in the embodiments of the present application are a transform method, inverse transform method, encoder, decoder and storage medium.
    Type: Grant
    Filed: March 18, 2022
    Date of Patent: December 3, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Wei Zhang, Fuzheng Yang, Shuai Wan, Yanzhuo Ma, Junyan Huo, Na Dai, Zexing Sun, Lihui Yang
  • Patent number: 12154680
    Abstract: This application relates to an endoscopic image display method, apparatus, computer device, and storage medium, and relates to the field of machine learning technologies. The method includes acquiring an endoscopic image; locating a target region image in the endoscopic image, the target region image being a partial image comprising a target region; inputting the target region image into a coding network to obtain a semantic feature of the target region image, the coding network being a part of an image classification network, and the image classification network being a machine learning network obtained through training with first training images; matching the semantic feature of the target region image against semantic features of image samples to obtain a matching result, the matching result indicating a target image sample that matches the target region image; and displaying the endoscopic image and the matching result in an endoscopic image display interface.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: November 26, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Junwen Qiu, Zhongqian Sun, Xinghui Fu, Hong Shang, Han Zheng
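The matching step in this abstract compares a semantic feature vector against a bank of sample features. Cosine similarity is a common choice for such retrieval; a minimal sketch (the patent does not specify this particular metric):

```python
import numpy as np

def match_feature(query, bank):
    """Return the index of the image sample whose semantic feature is most
    similar (by cosine similarity) to the query feature."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return int(np.argmax(b @ q))

# Hypothetical 2-D semantic features for three stored image samples.
bank = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.7, 0.7]])
best = match_feature(np.array([0.9, 0.1]), bank)
```

The index returned would identify the target image sample whose associated information is then shown alongside the endoscopic image.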
  • Patent number: 12148197
    Abstract: A computer-implemented method, computer program product and computer system (100) for detecting plant diseases. The system stores a convolutional neural network (120) trained with a multi-crop dataset. The convolutional neural network (120) has an extended topology comprising an image branch (121) based on a classification convolutional neural network for classifying the input images according to plant disease specific features, a crop identification branch (122) for adding plant species information, and a branch integrator for integrating the plant species information with each input image. The plant species information (20) specifies the crop on the respective input image (10). The system receives a test input comprising an image (10) of a particular crop (1) showing one or more particular plant disease symptoms, and further receives a respective crop identifier (20) associated with the test input via an interface (110).
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: November 19, 2024
    Assignee: BASF SE
    Inventors: Artzai Picon Ruiz, Matthias Nachtmann, Maximilian Seitz, Patrick Mohnke, Ramon Navarra-Mestre, Alexander Johannes, Till Eggers, Amaia Maria Ortiz Barredo, Aitor Alvarez-Gila, Jone Echazarra Huguet
  • Patent number: 12148211
    Abstract: The present technology relates to an image processing apparatus, a 3D model generation method, and a program capable of reducing failed image capturing in multi-view image capturing for 3D model generation. The image processing apparatus includes a 3D region calculation unit that generates a 3D region of image capturing ranges generated from a plurality of multi-view images, and a determination unit that determines a situation in which an image capturing device captures a subject on the basis of a region image obtained by projecting the 3D region onto a specific viewpoint and a subject image from the image capturing device corresponding to the specific viewpoint. The present technology can be applied to, for example, an image processing apparatus for 3D model generation.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: November 19, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Hiroaki Takahashi, Tetsuya Fukuyasu
  • Patent number: 12125274
    Abstract: This reduces the labor and time required to generate training data for a training model. An identification information assignment apparatus includes an acquirer configured to acquire a plurality of pieces of image data, an assigner configured to assign identification information to image data selected from the plurality of pieces of image data by using a trained learning model, and an updater configured to update the trained model using the image data to which the identification information is assigned, wherein the assigner assigns identification information to the rest of the image data acquired by the acquirer using the updated model.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: October 22, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Tomoaki Itoh, Hidehiko Shin
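The label-then-update loop this abstract describes is essentially pseudo-labeling. A toy sketch using a nearest-centroid classifier as a stand-in for the learning model (the feature vectors, class count, and classifier are all illustrative assumptions):

```python
import numpy as np

def nearest_centroid_labels(features, centroids):
    """Assign each feature vector the label of its nearest class centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Seed: a few manually labelled feature vectors per class.
labelled = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([labelled[labels == k].mean(axis=0) for k in (0, 1)])

# Step 1: the current model labels a selected batch of image features.
batch = np.array([[0.2, 0.1], [4.8, 5.2]])
batch_labels = nearest_centroid_labels(batch, centroids)

# Step 2: update the model with the newly labelled data.
all_feats = np.vstack([labelled, batch])
all_labels = np.concatenate([labels, batch_labels])
centroids = np.array([all_feats[all_labels == k].mean(axis=0) for k in (0, 1)])

# Step 3: the updated model labels the rest of the data automatically.
rest = np.array([[0.05, 0.2], [5.3, 4.7]])
rest_labels = nearest_centroid_labels(rest, centroids)
```

The human only supplies the seed labels; each round of automatic assignment both extends the dataset and refines the model used for the next round.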
  • Patent number: 12100212
    Abstract: Methods, systems and computer readable medium for object detection coverage estimation are provided. The system for object detection coverage estimation includes a camera and a processing means. The processing means is coupled to the camera to receive image data acquired by the camera, the image data including a detected object. The processing means is configured to determine a spatial coverage of the detected object based on detected object metadata associated with the detected object in the image data received from the camera.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: September 24, 2024
    Assignee: NEC CORPORATION
    Inventors: Hui Lam Ong, Satoshi Yamazaki, Wen Zhang
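Estimating spatial coverage from detected-object metadata, as this abstract describes, can be sketched as the union area of detection bounding boxes over the frame. A rasterized sketch (the box format and grid approach are illustrative assumptions):

```python
import numpy as np

def coverage_fraction(boxes, frame_w, frame_h):
    """Fraction of the frame covered by the union of detected-object
    bounding boxes taken from detection metadata.

    boxes: iterable of (x, y, w, h) in pixel coordinates.
    """
    grid = np.zeros((frame_h, frame_w), dtype=bool)
    for x, y, w, h in boxes:
        grid[y:y + h, x:x + w] = True
    return grid.mean()

# Two overlapping 50x50 detections in a 100x100 frame.
frac = coverage_fraction([(0, 0, 50, 50), (25, 25, 50, 50)], 100, 100)
```

Rasterizing handles overlapping boxes automatically; an analytic union computation would avoid the grid at the cost of more bookkeeping.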
  • Patent number: 12094211
    Abstract: A surveillance region identifying method is used to analyze a region feature of a surveillance region covered by a surveillance apparatus. The surveillance region identifying method includes analyzing all track information within a series of images acquired by the surveillance apparatus to acquire an appearing point and a disappearing point of each track, utilizing cluster analysis to define a main appearing point cluster of the appearing points, computing enter vectors of all appearing points inside the main appearing point cluster, and analyzing vector angles of all enter vectors of the main appearing point cluster to define an entrance of the surveillance region.
    Type: Grant
    Filed: November 25, 2021
    Date of Patent: September 17, 2024
    Assignee: VIVOTEK INC.
    Inventors: Cheng-Chieh Liu, Shaw-Pin Chen
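The clustering and enter-vector steps from this abstract can be sketched minimally. The crude density cluster and the angle convention below are illustrative stand-ins for whatever cluster analysis the patent actually uses:

```python
import numpy as np

def main_cluster(points, radius=5.0):
    """Crude density cluster: the point with the most neighbours within
    `radius`, together with those neighbours."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    counts = (d <= radius).sum(axis=1)
    centre = counts.argmax()
    return pts[d[centre] <= radius]

def enter_angles(tracks):
    """For each track ((appear_xy), (next_xy)), the enter vector's angle
    in degrees."""
    return [float(np.degrees(np.arctan2(y1 - y0, x1 - x0)))
            for (x0, y0), (x1, y1) in tracks]

# Three appearing points near an entrance, one outlier elsewhere.
appear = [(0, 0), (1, 1), (0, 2), (50, 50)]
cluster = main_cluster(appear)
angles = enter_angles([((0, 0), (1, 1))])
```

A tight spread of enter-vector angles within the main cluster is what would mark that cluster as the entrance of the surveillance region.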
  • Patent number: 12094213
    Abstract: A control device for a vehicle for determining perceptual load of a visual and dynamic driving scene, the control device being configured to: receive an image sequence representing the driving scene, extract a set of scene features from the image sequence, the set of scene features representing static and/or dynamic information of the driving scene, calculate a time-aggregated representation of the image sequence based on the extracted set of scene features, calculate an attention map of the driving scene by attentional pooling of the time-aggregated representation of the image sequence, and determine the perceptual load of the driving scene based on the attention map. The invention further relates to a corresponding method.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: September 17, 2024
    Assignees: TOYOTA MOTOR EUROPE, MINDVISIONLABS LIMITED
    Inventors: Jonas Ambeck-Madsen, Gabriel Othmezouri, Daniel Olmeda Reino, Nilli Lavie, Luke Palmer, Petar Palasek
  • Patent number: 12056845
    Abstract: A farming machine identifies and treats a plant as the farming machine travels through a field. The farming machine includes an array of tiled image sensors for capturing images of the field. A control system identifies an active region in the captured images and generates a tiled image that includes the active region. The control system applies image processing functions to identify the plant in the tiled image and actuates a treatment mechanism to treat the identified plant. The control system causes the array of image sensors to capture the image, identifies the plant, and actuates the treatment mechanism in real time as the farming machine travels through the field.
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: August 6, 2024
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: John William Peake, Rajesh Radhakrishnan