Patents Examined by Ping Y Hsieh
  • Patent number: 11699237
    Abstract: Techniques are disclosed for implementing a neural network that outputs embeddings. Furthermore, techniques are disclosed for using sensor data to train a neural network to learn such embeddings. In some examples, the neural network may be trained to learn embeddings for instance segmentation of an object based on an embedding for a bounding box associated with the object being trained to match pixel embeddings for pixels associated with the object. The embeddings may be used for object identification, object matching, object classification, and/or object tracking in various examples.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: July 11, 2023
    Assignee: Zoox, Inc.
    Inventor: Sarah Tariq
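The abstract of patent 11699237 describes training a bounding-box embedding to match the pixel embeddings of the object it encloses. The following is a minimal, purely illustrative sketch of one way such a pull/push loss could look; the tensor shapes, margin value, and loss form are assumptions, not the patent's actual method.

```python
# Hedged sketch (not the patented implementation): a toy loss that pulls the
# pixel embeddings belonging to an object toward the embedding of its bounding
# box, so pixels can later be grouped into instances by embedding distance.
import torch

def box_pixel_embedding_loss(pixel_emb, box_emb, instance_mask, margin=1.0):
    """pixel_emb: (H, W, D) per-pixel embeddings
       box_emb:   (D,) embedding predicted for the object's bounding box
       instance_mask: (H, W) bool mask of pixels belonging to the object"""
    obj_pixels = pixel_emb[instance_mask]      # (N, D) embeddings inside the object
    bg_pixels = pixel_emb[~instance_mask]      # (M, D) embeddings outside the object
    # Pull: object pixels should match the box embedding.
    pull = ((obj_pixels - box_emb) ** 2).sum(dim=1).mean()
    # Push: background pixels should stay at least `margin` away (hinge loss).
    dist_bg = ((bg_pixels - box_emb) ** 2).sum(dim=1).sqrt()
    push = torch.clamp(margin - dist_bg, min=0).mean()
    return pull + push

# Tiny usage example with random tensors standing in for network outputs.
H, W, D = 32, 32, 8
pixel_emb = torch.randn(H, W, D, requires_grad=True)
box_emb = torch.randn(D, requires_grad=True)
mask = torch.zeros(H, W, dtype=torch.bool)
mask[8:20, 10:24] = True
loss = box_pixel_embedding_loss(pixel_emb, box_emb, mask)
loss.backward()
print(float(loss))
```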
  • Patent number: 11699288
    Abstract: Automated detection of features and/or parameters within an ocean environment using image data. In an embodiment, captured image data is received from ocean-facing camera(s) that are positioned to capture a region of an ocean environment. Feature(s) are identified within the captured image data, and parameter(s) are measured based on the identified feature(s). Then, when a request for data is received from a user system, the requested data is generated based on the parameter(s) and sent to the user system.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: July 11, 2023
    Assignee: SURFLINE\WAVETRAK, INC.
    Inventor: Benjamin Freeston
  • Patent number: 11699277
    Abstract: A segmentation neural network is extended to provide classification at the segment level. An input image of a document is received and processed, utilizing a segmentation neural network, to detect pixels having a signature feature type. A signature heatmap of the input image can be generated based on the pixels in the input image having the signature feature type. The segmentation neural network is then extended to further process the signature heatmap by morphing it to include noise surrounding an object of interest, creating a signature region that need not have a defined shape or size. The morphed heatmap acts as a mask so that each signature region or object in the input image can be detected as a segment. Based on this segment-level detection, the input image is classified. The classification result can be provided as feedback to a machine learning framework to refine training.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: July 11, 2023
    Assignee: OPEN TEXT SA ULC
    Inventor: Sreelatha Reddy Samala
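Patent 11699277's abstract describes turning a per-pixel signature heatmap into segment-level detections via a morphed mask. The sketch below illustrates that general idea with a threshold, a morphological dilation, and connected-component labeling; the threshold, dilation count, and use of scipy are assumptions, not the patented network's behaviour.

```python
# Illustrative sketch only: threshold a signature heatmap, dilate the mask so
# each region absorbs the noisy strokes around it, and label connected regions
# as signature segments.
import numpy as np
from scipy import ndimage

def signature_segments(heatmap, threshold=0.5, dilate_iters=3):
    """heatmap: (H, W) float array of per-pixel signature probabilities."""
    mask = heatmap > threshold                       # pixels flagged as "signature"
    mask = ndimage.binary_dilation(mask, iterations=dilate_iters)  # morph outward
    labels, n = ndimage.label(mask)                  # connected regions = candidate segments
    boxes = ndimage.find_objects(labels)             # slice-based bounding boxes per segment
    return labels, boxes

# Usage with a synthetic heatmap containing two blobs.
hm = np.zeros((100, 200))
hm[20:30, 30:80] = 0.9
hm[60:70, 120:170] = 0.8
labels, boxes = signature_segments(hm)
print(len(boxes), "signature segments found")   # -> 2
```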
  • Patent number: 11694307
    Abstract: An image enhancement system and method based on a generative adversarial network (GAN) model. The image enhancement system includes an acquiring unit, a training unit and an enhancement unit. The acquiring unit is configured to acquire a first image of a driving environment captured by a camera of a first vehicle and a second image of the driving environment captured by a camera of a second vehicle. The training unit is configured to train a GAN by using the first image to obtain an image enhancement model. The enhancement unit is configured to enhance the second image by inputting the second image into the image enhancement model.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: July 4, 2023
    Inventor: Huajie Zeng
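Patent 11694307 describes training a GAN on images from one vehicle and using the resulting generator to enhance images from another. Below is a minimal structural sketch of that flow; the toy generator/discriminator architectures and the elided training loop are assumptions for illustration, not the patented model.

```python
# Minimal structural sketch: a generator trained adversarially on images from
# vehicle A is later used to enhance an image captured by vehicle B.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy fully convolutional generator standing in for the enhancement model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy discriminator that scores whether an image looks like vehicle A's data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
first_image = torch.rand(1, 3, 64, 64)    # captured by the first vehicle (training data)
second_image = torch.rand(1, 3, 64, 64)   # captured by the second vehicle (to be enhanced)

# ... adversarial training of gen/disc on batches of first-vehicle images would go here ...

with torch.no_grad():
    enhanced = gen(second_image)          # enhancement unit: run B's image through the model
print(enhanced.shape)
```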
  • Patent number: 11694354
    Abstract: Apparatuses, systems, methods, and medium are disclosed for precise geospatial structure geometry extraction from multi-view imagery, including a non-transitory computer readable medium storing computer executable code that when executed by a processor cause the processor to: receive an image of a structure having an outline, the image having pixels with first pixel values depicting the structure and second pixel values outside of the structure depicting a background of a geographic area surrounding the structure, and image metadata including first geolocation data; and generate a synthetic shape image of the structure from the image using a machine learning algorithm, the synthetic shape image including pixels having pixel values forming a synthetic shape of the outline, the synthetic shape image having second geolocation data derived from the first geolocation data.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: July 4, 2023
    Assignee: Pictometry International Corp.
    Inventor: Shadrian Strong
  • Patent number: 11695587
    Abstract: Devices, systems, and methods of wirelessly controlling appliances and electronic devices, such as ceiling fans, air conditioners, garage doors, or the like. A receive-only garage door system is wirelessly controlled by a proprietary remote control unit. A cloning unit is able to clone or duplicate the proprietary wireless signal, and to replay it or re-generate it in response to a triggering command that a user submitted via a smartphone or tablet, thereby enabling such a garage door system to be controlled via mobile electronic devices. The cloning unit utilizes recording of the wireless signal payload and carrier frequency; wireless signal analysis; image analysis of the appliance or of the remote control unit; queries to a remote server to obtain properties of the proprietary wireless signal; or other techniques of signal analysis or duplication.
    Type: Grant
    Filed: September 26, 2021
    Date of Patent: July 4, 2023
    Assignee: OLIBRA LLC
    Inventor: Zohar Shinar
  • Patent number: 11694317
    Abstract: Machine vision devices may be configured to automatically connect to a remote management server (e.g., a “cloud”-based management server), and may offload and/or communicate images and analyses to the remote management server via wired or wireless communications. The machine vision devices may further communicate with the management server, user computing devices, and/or human machine interface devices, e.g., to provide remote access to the machine vision device, provide real-time information from the machine vision device, receive configurations/updates, provide interactive graphical user interfaces, and/or the like.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: July 4, 2023
    Assignee: Samsara Inc.
    Inventors: Anubhav Jain, John Bicket, Yu Kang Chen, Arthur Pohsiang Huang, Adam Eric Funkenbusch, Sanjit Zubin Biswas, Benjamin Arthur Calderon, Andrew William Deagon, William Waldman, Noah Paul Gonzales, Ruben Vardanyan, Somasundara Pandian, Ye-Sheng Kuo, Siri Amrit Ramos
  • Patent number: 11682212
    Abstract: A computer vision system is provided that includes an image generation device configured to capture consecutive two dimensional (2D) images of a scene, a first memory configured to store the consecutive 2D images, a second memory configured to store a growing window of consecutive rows of a reference image and a growing window of consecutive rows of a current image, wherein the reference image and the current image are a pair of consecutive 2D images stored in the first memory, a third memory configured to store a sliding window of pixels fetched from the growing window of the reference image, wherein the pixels in the sliding window are stored in tiles, and a dense optical flow engine (DOFE) configured to determine a dense optical flow map for the pair of consecutive 2D images, wherein the DOFE uses the sliding window as a search window for pixel correspondence searches.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: June 20, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Hetul Sanghvi, Mihir Narendra Mody, Niraj Nandan, Anish Reghunath, Michael Peter Lachmayr
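Patent 11682212's abstract describes a dense optical flow engine that searches a sliding window of the reference frame for pixel correspondences. The sketch below shows the underlying block-matching idea in plain Python; the search radius, block size, and SAD cost are assumptions, and nothing here reflects the patented memory layout or hardware engine.

```python
# Simplified sketch of the idea only: for each pixel of the current frame, search
# a small window of the reference frame for the best-matching block, yielding a
# dense (dy, dx) flow vector per pixel.
import numpy as np

def dense_block_matching(ref, cur, search=4, block=3):
    """ref, cur: (H, W) grayscale frames. Returns (H, W, 2) integer flow (dy, dx)."""
    H, W = cur.shape
    r = block // 2
    flow = np.zeros((H, W, 2), dtype=np.int32)
    pad = search + r
    ref_p = np.pad(ref, pad, mode='edge')
    cur_p = np.pad(cur, r, mode='edge')
    for y in range(H):
        for x in range(W):
            patch = cur_p[y:y + block, x:x + block]
            best, best_dyx = np.inf, (0, 0)
            # Search window in the reference frame around the same location.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = ref_p[y + pad - r + dy: y + pad - r + dy + block,
                                 x + pad - r + dx: x + pad - r + dx + block]
                    cost = np.abs(patch - cand).sum()   # sum of absolute differences
                    if cost < best:
                        best, best_dyx = cost, (dy, dx)
            flow[y, x] = best_dyx
    return flow

# Usage: a small shifted image pair.
ref = np.random.rand(16, 16)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))
print(dense_block_matching(ref, cur)[8, 8])   # flow at an interior pixel
```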
  • Patent number: 11674883
    Abstract: The present invention relates to correcting errors in instruments, operation, and other sources, using intelligent monitoring structures, machine learning, and other techniques.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: June 13, 2023
    Assignee: Essenlix Corporation
    Inventors: Stephen Y. Chou, Wei Ding, Wu Chou, Jun Tian, Yuecheng Zhang, Mingquan Wu, Xing Li
  • Patent number: 11670088
    Abstract: A plurality of temporally successive vehicle sensor images are received as input to a variational autoencoder neural network that outputs an averaged semantic birds-eye view image that includes respective pixels determined by averaging semantic class values of corresponding pixels in respective images in the plurality of temporally successive vehicle sensor images. From a plurality of topological nodes that each specify respective real-world locations, a topological node closest to the vehicle, and a three degree-of-freedom pose for the vehicle relative to the topological node closest to the vehicle, is determined based on the averaged semantic birds-eye view image. A real-world three degree-of-freedom pose for the vehicle is determined by combining the three degree-of-freedom pose for the vehicle relative to the topological node and the real-world location of the topological node closest to the vehicle.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: June 6, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Mokshith Voodarla, Shubham Shrivastava, Punarjay Chakravarty
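Patent 11670088's abstract describes averaging semantic birds-eye-view images and composing the vehicle's three degree-of-freedom pose relative to a topological node with that node's real-world location. The sketch below shows only the standard SE(2) pose composition and a toy averaged BEV image; the node and relative poses are made-up values, and the autoencoder itself is not modeled.

```python
# Illustrative sketch (assumed math, not the patented pipeline): compose the
# vehicle's 3-DoF pose relative to the nearest topological node with that node's
# real-world pose to recover a global (x, y, yaw) for the vehicle.
import numpy as np

def se2_matrix(x, y, yaw):
    """Homogeneous 2D rigid transform for a 3-DoF pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def compose_global_pose(node_pose, relative_pose):
    """node_pose, relative_pose: (x, y, yaw). Returns the vehicle's global pose."""
    T = se2_matrix(*node_pose) @ se2_matrix(*relative_pose)
    yaw = np.arctan2(T[1, 0], T[0, 0])
    return T[0, 2], T[1, 2], yaw

# Toy stand-in for the averaged semantic BEV image: mean of per-frame class maps.
frames = np.random.randint(0, 5, size=(4, 64, 64))   # 4 temporally successive BEV class maps
averaged_bev = frames.mean(axis=0)                    # input from which the poses would be inferred

node_pose = (120.0, 45.0, np.deg2rad(90))             # real-world pose of the closest node
relative_pose = (2.5, -0.3, np.deg2rad(5))            # vehicle pose relative to that node
print(compose_global_pose(node_pose, relative_pose))
```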
  • Patent number: 11659998
    Abstract: Techniques for automatic measurement using structured lights are provided. A computer system uses structured lights to acquire skin information, the skin information being associated with at least one affected area of the skin. The computer system determines parameters related to the at least one affected area of the skin based, at least in part, on the skin information and generates an assessment for the at least one affected area of the skin.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Yasmeen Mourice George, Lenin Mehedy, Rahil Garnavi
  • Patent number: 11657514
    Abstract: An object of the present invention is to extract an area of a foreground object with high accuracy. The present invention is an image processing apparatus including: a target image acquisition unit configured to acquire a target image that is a target of extraction of a foreground area; a reference image acquisition unit configured to acquire a plurality of reference images including an image whose viewpoint is different from that of the target image; a conversion unit configured to convert a plurality of reference images acquired by the reference image acquisition unit based on a viewpoint corresponding to the target image; and an extraction unit configured to extract a foreground area of the target image by using data relating to a degree of coincidence of a plurality of reference images converted by the conversion unit.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 23, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kina Itakura
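Patent 11657514's abstract describes warping reference images from other viewpoints into the target viewpoint and extracting the foreground from their degree of coincidence. The sketch below uses assumed homographies and a variance threshold as a stand-in for that coincidence measure; it is a rough illustration only, not the patented method.

```python
# Rough sketch of the idea: warp several reference views into the target
# viewpoint, then mark pixels where the warped references disagree (low
# coincidence) as foreground.
import numpy as np
import cv2

def foreground_from_coincidence(target, references, homographies, var_thresh=200.0):
    """target: (H, W) grayscale image; references: list of (H, W) images from
       other viewpoints; homographies: list of 3x3 arrays mapping each reference
       into the target viewpoint."""
    H, W = target.shape
    warped = [cv2.warpPerspective(ref, Hm, (W, H)) for ref, Hm in zip(references, homographies)]
    stack = np.stack(warped).astype(np.float32)
    # High variance across warped references = low coincidence = likely foreground.
    coincidence = stack.var(axis=0)
    return coincidence > var_thresh

# Usage with dummy data and identity homographies.
H, W = 120, 160
refs = [np.full((H, W), 100, np.uint8) for _ in range(3)]
refs[0][40:80, 60:100] = 220          # an object visible in only one reference view
target = np.full((H, W), 100, np.uint8)
mask = foreground_from_coincidence(target, refs, [np.eye(3)] * 3)
print(mask.sum(), "foreground pixels")
```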
  • Patent number: 11645754
    Abstract: Systems and methods for determining modified fractional flow reserve values of vascular lesions are provided. Patient physiologic data, including coronary vascular information, is measured. According to the physiologic data, a coronary vascular model is generated. Lesions of interest within the coronary vascular system of the patient are identified for modified fractional flow reserve value determination. The coronary vascular model is modified to generate modified blood flow information for determining the modified fractional flow reserve value.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: May 9, 2023
    Assignee: Medtronic Vascular, Inc.
    Inventor: Paul Coates
  • Patent number: 11633260
    Abstract: Systems and methods for generating a 3D model of a dentition are disclosed herein. A method includes generating a first 3D model of a dentition including a plurality of model teeth. The method includes receiving at least one digital representation comprising patient teeth. The method includes determining a virtual camera parameter. The virtual camera parameter corresponds with a camera used to capture the digital representation. The method includes comparing, based on the virtual camera parameter, a first position of a model tooth with a position of a corresponding patient tooth of the digital representation. The method includes moving the model tooth from the first position to a second position. The second position is based on the position of the corresponding patient tooth. The method includes generating a second 3D model comprising the model tooth of the first 3D model in the second position.
    Type: Grant
    Filed: June 3, 2022
    Date of Patent: April 25, 2023
    Assignee: SDC U.S. SmilePay SPV
    Inventors: Ryan Amelon, Jared Lafer, Ramsey Jones, Tim Wucher, Jordan Katzman, Martin Chobanyan
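Patent 11633260's abstract describes projecting a model tooth with a virtual camera parameter, comparing its position to the corresponding tooth in a patient photo, and moving the model tooth accordingly. The sketch below uses an assumed pinhole camera and a simple back-projected update rule; both are illustrative stand-ins, not the patented comparison or registration procedure.

```python
# Purely illustrative sketch: project a model tooth's centroid with a virtual
# camera, compare it to the tooth's position in the patient photo, and nudge the
# model tooth so its projection moves toward the observed position.
import numpy as np

K = np.array([[800.0, 0, 320],     # assumed virtual-camera intrinsics
              [0, 800.0, 240],
              [0, 0, 1]])

def project(point_cam):
    """Pinhole projection of a 3D point already in camera coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

def nudge_tooth(model_tooth_cam, observed_px, step=1.0):
    """Move the tooth (in camera coords) so its projection approaches observed_px."""
    z = model_tooth_cam[2]
    err_px = observed_px - project(model_tooth_cam)          # 2D error in the image
    # Back-project the pixel error to a metric offset at the tooth's depth.
    dxy = err_px * z / np.array([K[0, 0], K[1, 1]])
    return model_tooth_cam + step * np.array([dxy[0], dxy[1], 0.0])

tooth = np.array([0.01, -0.005, 0.12])        # model tooth centroid, camera frame (metres)
observed = np.array([350.0, 230.0])           # tooth centroid detected in the photo (pixels)
moved = nudge_tooth(tooth, observed)
print(project(moved))                         # projection now sits at the observed pixel
```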
  • Patent number: 11636581
    Abstract: The present invention relates to the determination of damage to portions of a vehicle. More particularly, the present invention relates to determining whether each part of a vehicle should be classified as damaged or undamaged and optionally the severity of the damage to each part of the damaged vehicle. Aspects and/or embodiments seek to provide a computer-implemented method for determining damage states of each part of a damaged vehicle, indicating whether each part of the vehicle is damaged or undamaged and optionally the severity of the damage to each part of the damaged vehicle, using images of the damage to the vehicle and trained models to assess the damage indicated in the images of the damaged vehicle.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: April 25, 2023
    Assignee: TRACTABLE LIMITED
    Inventors: Razvan Ranca, Marcel Horstmann, Bjorn Mattsson, Janto Oellrich, Yih Kai Teh, Ken Chatfield, Franziska Kirschner, Rusen Aktas, Laurent Decamp, Mathieu Ayel, Julia Peyre, Shaun Trill, Crystal Van Oosterom
  • Patent number: 11631186
    Abstract: Systems and methods for image recognition are provided. A style-transfer neural network is trained for each real image to obtain a trained style-transfer neural network. The texture or style features of the real images are transferred, via the trained style-transfer neural network, to a target image to generate styled images which are used for training an image-recognition machine learning model (e.g., a neural network). In some cases, the real images are clustered and representative style images are selected from the clusters.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: April 18, 2023
    Assignee: 3M Innovative Properties Company
    Inventors: Muhammad J. Afridi, Elisa J. Collins, Jonathan D. Gandrud, James W. Howard, Arash Sangari, James B. Snyder
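Patent 11631186's abstract describes transferring the style of real images onto a target image to synthesize training data. The sketch below shows a generic Gram-matrix style loss with a toy feature extractor; the extractor, learning rate, and optimization loop are assumptions and do not represent 3M's trained style-transfer network.

```python
# Hedged sketch (a generic Gram-matrix style loss): the style of a real image is
# captured as feature Gram matrices, and a target image is optimised so its Gram
# matrices match, producing a styled image for training-set augmentation.
import torch
import torch.nn as nn

features = nn.Sequential(                    # toy feature extractor standing in
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
for p in features.parameters():              # freeze the stand-in feature extractor
    p.requires_grad_(False)

def gram(feat):
    """Gram matrix of (1, C, H, W) features: channel-to-channel correlations."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

real_image = torch.rand(1, 3, 64, 64)        # style source (e.g., a representative real image)
styled = torch.rand(1, 3, 64, 64, requires_grad=True)   # target image being styled
opt = torch.optim.Adam([styled], lr=0.05)

target_gram = gram(features(real_image)).detach()
for _ in range(100):                         # optimise the target toward the real style
    opt.zero_grad()
    loss = ((gram(features(styled)) - target_gram) ** 2).sum()
    loss.backward()
    opt.step()
# `styled` could then be added to the training set for the image-recognition model.
```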
  • Patent number: 11625914
    Abstract: Systems, devices, media and methods are presented for identifying key terrain areas in a geographic location by accessing an image comprising a view of the geographic location comprising a plurality of terrain areas and for each terrain area, assigning a first score corresponding to positive or negative key terrain determinations using a general rule, assigning a second score corresponding to positive or negative key terrain determinations using an override rule, assigning a third score corresponding to positive or negative key terrain determinations using a user-defined rule and generating an aggregate mask to assign the key terrain area with an aggregate score. Based on the aggregate score, key terrain areas are identified and displayed on a computing device.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: April 11, 2023
    Assignee: Raytheon Company
    Inventor: Sarah A. Crate
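Patent 11625914's abstract describes scoring each terrain area under a general rule, an override rule, and a user-defined rule, then combining them in an aggregate mask. The sketch below illustrates that scoring-and-aggregation pattern; the specific rules, thresholds, and equal weighting are invented for illustration only.

```python
# Hedged sketch of the scoring idea: each terrain area receives a score from a
# general rule, an override rule, and a user-defined rule, and an aggregate mask
# combines them to flag key terrain.
import numpy as np

elevation = np.random.rand(8, 8) * 500          # toy per-area terrain attributes
near_road = np.random.rand(8, 8) > 0.7

general_score  = np.where(elevation > 300, 1, -1)        # general rule: high ground is key
override_score = np.where(near_road, 1, 0)               # override rule: road access
user_score     = np.where(elevation > 450, -1, 0)        # user-defined rule: too exposed

# Aggregate mask: sum the per-rule scores; positive aggregate = key terrain.
aggregate = general_score + override_score + user_score
key_terrain = aggregate > 0
print(key_terrain.astype(int))
```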
  • Patent number: 11620832
    Abstract: This disclosure relates to systems and methods of obtaining accurate motion and orientation estimates for a vehicle traveling at high speed based on images of a road surface. A purpose of these systems and methods is to provide a supplementary or alternative means of locating a vehicle on a map, particularly in cases where other locationing approaches (e.g., GPS) are unreliable or unavailable.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: April 4, 2023
    Inventors: Hendrik J. Volkerink, Ajay Khoche
  • Patent number: 11615630
    Abstract: A method of generating a context-rich parking event of a target vehicle taken by a patrol vehicle including obtaining a plate read event identifying an identifier of the target vehicle; initiating a collection of a first context image of a first view of the target vehicle; obtaining of geolocation information; obtaining temporal information; verifying if at least one condition is met by calculating if at least one of: a temporal constraint threshold is reached by using the temporal information; and a position constraint threshold is reached by using the geolocation information; initiating a collection by the patrol vehicle of a second context image of a second view of the target vehicle; and causing an association between the second context image and the plate read event to generate the context-rich parking event.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: March 28, 2023
    Inventors: Guy Brousseau, Chris Yigit
  • Patent number: 11615519
    Abstract: A method and apparatus for identifying a concrete crack includes: obtaining a crack video, and manually annotating a video image frame by using a label; predicting a future frame and label for the annotated frame by using a spatial displacement convolutional block, propagating the future frame and label, to obtain a synthetic sample, and preprocessing the synthetic sample, to form a crack database; modifying input and output ports of data of a deep learning model for video semantic image segmentation and a parameter, to enable the deep learning model to accept video input, and establishing a concrete crack detection model based on the video output; using a convolutional layer in a trained deep learning model as an initial weight of the concrete crack detection model for migration; inputting the crack database into a migrated concrete crack detection model, and training the concrete crack detection model for crack data.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: March 28, 2023
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yonggang Shen, Zhenwei Yu
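Patent 11615519's abstract describes propagating a manually annotated crack label to predicted future frames to build synthetic training samples. The sketch below warps a label mask with a per-pixel displacement field; the synthetic constant displacement stands in for the output of the patent's spatial displacement convolutional block, which is not modeled here.

```python
# Illustrative sketch only: propagate an annotated crack label to a future frame
# by warping it with a per-pixel displacement field, yielding a synthetic
# labelled sample for the crack database.
import numpy as np

def warp_label(label, disp):
    """label: (H, W) binary crack mask; disp: (H, W, 2) displacement (dy, dx)
       giving, for each future-frame pixel, where the scene content moved from."""
    H, W = label.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys - disp[..., 0], 0, H - 1).astype(int)
    src_x = np.clip(xs - disp[..., 1], 0, W - 1).astype(int)
    return label[src_y, src_x]

label = np.zeros((60, 80), dtype=np.uint8)
label[20:22, 10:70] = 1                         # annotated crack in the current frame
disp = np.zeros((60, 80, 2))
disp[..., 1] = 3                                # assume the scene shifts 3 px to the right
future_label = warp_label(label, disp)          # synthetic label for the predicted frame
print(future_label[20:22, 13:73].sum())         # crack mask now appears shifted by 3 px
```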