Patents Examined by Xiao Liu
  • Patent number: 11756317
    Abstract: Systems and methods for processing point cloud data are disclosed. The methods include receiving a 3D image including point cloud data, displaying a 2D image of the 3D image, and generating a 2D bounding box that envelops an object of interest in the 2D image. The methods further include generating a projected image frame comprising a projected plurality of points by projecting a plurality of points in a first direction. The methods may then include displaying an image frame that includes the 2D image and the 2D bounding box superimposed by the projected image frame, receiving a user input that includes an identification of a set of points in the projected plurality of points that correspond to the object of interest, identifying a label for the object of interest, and storing the set of points that correspond to the object of interest in association with the label.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: September 12, 2023
    Assignee: ARGO AI, LLC
    Inventors: Wei-Liang Lai, Henry Randall Burke-Sweet, William Tyler Krampe
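The labeling workflow this abstract describes (project the cloud to 2D, box the object, keep the points that fall inside, store them with a label) could be sketched roughly as follows; the top-down projection, box format, and "car" label are invented for illustration and are not from the patent:

```python
import numpy as np

def project_points(points, axis=2):
    """Project 3D points to 2D by dropping one coordinate
    (axis=2 projects along z, i.e. a top-down view)."""
    keep = [i for i in range(3) if i != axis]
    return points[:, keep]

def points_in_box(points_2d, box):
    """Mask of projected points inside a 2D box (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    return ((points_2d[:, 0] >= xmin) & (points_2d[:, 0] <= xmax) &
            (points_2d[:, 1] >= ymin) & (points_2d[:, 1] <= ymax))

# Toy cloud: two points on the object of interest, one elsewhere.
cloud = np.array([[1.0, 1.0, 0.5],
                  [1.2, 0.8, 0.3],
                  [9.0, 9.0, 0.1]])
proj = project_points(cloud)                      # projected image frame (x, y)
mask = points_in_box(proj, (0.0, 0.0, 2.0, 2.0))  # user-identified set of points
labeled = {"label": "car", "points": cloud[mask]} # stored in association with label
```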
  • Patent number: 11741598
    Abstract: A computing apparatus for aiding visualization of lesions in a medical image includes a communicator and a processor. The processor is configured to receive a user input for selecting a single point in the medical image to modify a lesion mask representing a lesion area in the medical image, determine a modified lesion mask corresponding to the received user input from among a plurality of pre-generated candidate lesion masks, and provide the modified lesion mask with the medical image.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: August 29, 2023
    Assignee: VUNO, INC.
    Inventor: Gwangbeen Park
  • Patent number: 11734918
    Abstract: An object model learning method includes: in an object identification model forming a convolutional neural network and a warp structure warping a feature map extracted in the convolutional neural network to a different coordinate system, preparing, in the warp structure, a warp parameter for relating a position in the different coordinate system to a position in the coordinate system before the warp; and learning the warp parameter so that a captured image in which an object is captured can be input to the object identification model and a viewpoint conversion map in which the object is identified in the different coordinate system can be output.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 22, 2023
    Assignee: DENSO CORPORATION
    Inventors: Kunihiko Chiba, Yusuke Sekikawa, Koichiro Suzuki
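A minimal sketch of the warp structure's lookup semantics, assuming nearest-neighbor sampling on a tiny 4×4 feature map; the patent learns its warp parameters, whereas everything here (shapes, the remapped cell) is illustrative:

```python
import numpy as np

def warp_feature_map(fmap, warp_param):
    """Warp a feature map into a different coordinate system: warp_param[i, j]
    holds the (row, col) in the source coordinate system that target cell
    (i, j) reads from, applied with nearest-neighbor sampling."""
    h, w = warp_param.shape[:2]
    out = np.zeros((h, w), dtype=fmap.dtype)
    for i in range(h):
        for j in range(w):
            r, c = warp_param[i, j]
            out[i, j] = fmap[int(r), int(c)]
    return out

fmap = np.arange(16.0).reshape(4, 4)  # feature map in the camera coordinate system
# Identity warp except one remapped cell, standing in for learned warp parameters.
warp = np.stack(np.meshgrid(np.arange(4), np.arange(4), indexing="ij"), axis=-1)
warp[0, 0] = (3, 3)
viewpoint_map = warp_feature_map(fmap, warp)
```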
  • Patent number: 11727255
    Abstract: Systems and methods for edge assisted real-time object detection for mobile augmented reality are provided. The system employs a low latency offloading process, decouples the rendering pipeline from the offloading pipeline, and uses a fast object tracking method to maintain detection accuracy. The system can operate on a mobile device, such as an AR device, and dynamically offloads computationally-intensive object detection functions to an edge cloud device using an adaptive offloading process. The system also includes dynamic RoI encoding and motion vector-based object tracking processes that operate in a tracking and rendering pipeline executing on the AR device.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: August 15, 2023
    Assignee: Rutgers, The State University of New Jersey
    Inventors: Marco Gruteser, Luyang Liu, Hongyu Li
  • Patent number: 11727272
    Abstract: According to an aspect of an embodiment, operations may comprise receiving a point cloud representing a region. The operations may also comprise identifying a cluster of points in the point cloud having a higher intensity than points outside the cluster of points. The operations may also comprise determining a bounding box around the cluster of points. The operations may also comprise identifying a traffic sign within the bounding box. The operations may also comprise projecting the bounding box to coordinates of an image of the region captured by a camera. The operations may also comprise employing a deep learning model to classify a traffic sign type of the traffic sign in a portion of the image within the projected bounding box. The operations may also comprise storing information regarding the traffic sign and the traffic sign type in a high definition (HD) map of the region.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: August 15, 2023
    Assignee: NVIDIA CORPORATION
    Inventors: Derek Thomas Miller, Yu Zhang, Lin Yang
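The intensity-clustering and projection steps of this abstract could be sketched as below; the threshold, the toy intrinsics matrix, and the point values are all invented, and real sign detection would use proper clustering rather than a single threshold:

```python
import numpy as np

def high_intensity_cluster(points, intensity, thresh):
    """Keep points whose lidar intensity exceeds a threshold; retroreflective
    sign faces return much stronger intensity than their surroundings."""
    return points[intensity > thresh]

def project_to_image(pt, K):
    """Project a 3D point in the camera frame to pixel coordinates using
    camera intrinsics K."""
    uvw = K @ pt
    return uvw[:2] / uvw[2]

points = np.array([[5.0, 1.0, 2.0], [5.1, 1.1, 2.1], [20.0, 0.0, 0.0]])
intensity = np.array([250.0, 240.0, 30.0])
cluster = high_intensity_cluster(points, intensity, thresh=200.0)
lo, hi = cluster.min(axis=0), cluster.max(axis=0)  # bounding box corners
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
center_px = project_to_image((lo + hi) / 2.0, K)   # box center in image coords
```

The classified sign type and projected box would then be written into the HD map entry for the region.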
  • Patent number: 11722742
    Abstract: A method and a graphic user interface for displaying digital media objects and dynamically calculating their quality indicators. A first digital media object is displayed on a first computing device. A first user provides one of predefined inputs corresponding either to a positive or a negative response to the first digital media object. The quality indicator of the first digital media object is increased if the response is positive and decreased if the response is negative. The amount of increase or decrease is calculated based on a coefficient value associated with the first user. Subsequent responses to the first digital media object from other users impact the quality indicator of the first digital media object and also impact the coefficient value of the first user. The updated coefficient value of the first user is then used to calculate the impact of the first user's subsequent responses to other digital media objects.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: August 8, 2023
    Assignee: Alchephi LLC
    Inventors: Deva Finger, Sawyer Keith Waugh
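The coefficient-weighted update the abstract describes could be sketched as two small functions; the additive rule, the 0.1 step, and the floor at zero are assumptions for illustration, not the patent's actual formulas:

```python
def update_quality(quality, positive, coeff):
    """Adjust a media object's quality indicator: a positive response adds
    the responding user's coefficient, a negative response subtracts it."""
    return quality + coeff if positive else quality - coeff

def update_coefficient(coeff, agreed, step=0.1):
    """Nudge a user's coefficient up when later users agree with that user's
    earlier response and down when they disagree."""
    return max(0.0, coeff + step if agreed else coeff - step)

quality = update_quality(10.0, positive=True, coeff=1.5)  # 10.0 -> 11.5
coeff = update_coefficient(1.5, agreed=True)              # 1.5 -> 1.6
```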
  • Patent number: 11720782
    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: August 8, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chenchi Luo, Gyeongmin Choe, Yingmao Li, Zeeshan Nadir, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
  • Patent number: 11710302
    Abstract: A computer implemented method of performing single pass optical character recognition (OCR) including at least one fully convolutional neural network (FCN) engine including at least one processor and at least one memory, the at least one memory including instructions that, when executed by the at least one processor, cause the FCN engine to perform a plurality of steps. The steps include preprocessing an input image, extracting image features from the input image, determining at least one optical character recognition feature, building word boxes using the at least one optical character recognition feature, determining each character within each word box based on character predictions, and transmitting for display each word box including its predicted corresponding characters.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: July 25, 2023
    Assignee: TRICENTIS GMBH
    Inventors: David Colwell, Michael Keeley
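One of the steps above, building word boxes from per-character detections, can be illustrated with a simple gap-based grouping; the (x0, x1) box format and the gap threshold are assumptions, not the patent's method:

```python
def build_word_boxes(char_boxes, gap=5):
    """Group per-character boxes (x0, x1) on one text line into word boxes:
    a new word starts wherever the horizontal gap exceeds a threshold."""
    words, current = [], [char_boxes[0]]
    for box in char_boxes[1:]:
        if box[0] - current[-1][1] > gap:
            words.append((current[0][0], current[-1][1]))
            current = [box]
        else:
            current.append(box)
    words.append((current[0][0], current[-1][1]))
    return words

chars = [(0, 8), (10, 18), (30, 38), (40, 48)]  # four character boxes
word_boxes = build_word_boxes(chars)            # -> [(0, 18), (30, 48)]
```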
  • Patent number: 11688176
    Abstract: An apparatus includes: a first camera configured to view an environment outside a vehicle; and a processing unit configured to receive images generated at different respective times by the first camera; wherein the processing unit is configured to identify objects in front of the vehicle based on the respective images generated at the different respective times, determine a distribution of the identified objects, and determine a region of interest based on the distribution of the identified objects in the respective images generated at the different respective times.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: June 27, 2023
    Assignee: Nauto, Inc.
    Inventors: Benjamin Alpert, Alexander Dion Wu
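A toy version of deriving a region of interest from the distribution of detected objects across frames; the mean-plus-spread box and the margin value are invented stand-ins for whatever statistic the patent actually uses:

```python
import numpy as np

def region_of_interest(centers, margin=10.0):
    """Derive a region of interest from the distribution of object centers
    accumulated over frames captured at different times: a box around the
    mean center, padded by one standard deviation plus a margin."""
    centers = np.asarray(centers, dtype=float)
    mean, std = centers.mean(axis=0), centers.std(axis=0)
    return mean - std - margin, mean + std + margin

# Centers of objects identified in images generated at different times.
centers = [(100, 200), (110, 205), (105, 195), (108, 198)]
lo, hi = region_of_interest(centers)
```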
  • Patent number: 11669730
    Abstract: A recognition apparatus and a training method are provided. The recognition apparatus includes a memory configured to store a neural network including a previous layer of neurons, and a current layer of neurons that are activated based on first synaptic signals and second synaptic signals, the first synaptic signals being input from the previous layer, and the second synaptic signals being input from the current layer. The recognition apparatus further includes a processor configured to generate a recognition result based on the neural network. An activation neuron among the neurons of the current layer generates a first synaptic signal to excite or inhibit neurons of a next layer, and generates a second synaptic signal to inhibit neurons other than the activation neuron in the current layer.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: June 6, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Jun Haeng Lee
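The two-signal scheme in this abstract (feed-forward excitation plus in-layer inhibition from the most active neuron) could be sketched as one layer step; the winner-take-all form and the inhibition strength are illustrative simplifications:

```python
import numpy as np

def layer_step(x, W, inhibition=0.5):
    """Feed-forward drive from the previous layer, then lateral inhibition:
    the most active neuron (the activation neuron) suppresses every other
    neuron in its own layer while passing its own activation onward."""
    a = W @ x                   # first synaptic signals (from previous layer)
    winner = int(np.argmax(a))  # activation neuron
    mask = np.ones_like(a)
    mask[winner] = 0.0          # the winner does not inhibit itself
    return a - inhibition * a[winner] * mask, winner

rng = np.random.default_rng(1)
x = rng.normal(size=4)          # previous-layer activations
W = rng.normal(size=(5, 4))
a, winner = layer_step(x, W)    # second synaptic signals applied in-layer
```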
  • Patent number: 11669096
    Abstract: In an environment in which a plurality of second pedestrians moves along predetermined movement patterns, a plurality of movement routes along which a first pedestrian moves toward a destination point are recognized. Data, in which an environmental image indicating a visual environment in front of a virtual robot when the virtual robot moves along each of the movement routes and a moving direction command indicating a moving direction of the virtual robot are combined, is generated as learning data. In the environmental image, colors corresponding to the kinds of moving objects are applied to at least a portion of the moving object image regions indicating pedestrians (moving objects) present around the robot. Model parameters of a CNN (action model) are learned using the learning data, and a moving velocity command for the robot is determined using the learned CNN.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: June 6, 2023
    Assignee: Honda Motor Co., Ltd.
    Inventor: Yuji Hasegawa
  • Patent number: 11662061
    Abstract: A safety system for use with a power tool that is usable by a user, the power tool including a base and a moving component that is movable relative to the base, includes a sensor assembly and a controller. The sensor assembly monitors a predetermined danger zone that is adjacent to the moving component of the power tool. The sensor assembly is configured to generate data relating to the predetermined danger zone. The controller receives the data from the sensor assembly and analyzes the data from the sensor assembly to determine if at least a portion of a hand of the user is present within the predetermined danger zone. The safety system can further include a wearable component including infrared only reflective material that is coupled to the hand of the user. The controller analyzes the data from the sensor assembly to determine if the wearable component is present within the predetermined danger zone.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: May 30, 2023
    Assignee: LAGUNA TOOLS, INC.
    Inventors: David Stoppenbrink, Torben Helshoj, Stephen Stoppenbrink
  • Patent number: 11657635
    Abstract: A distribution of a plurality of predictions generated by a deep neural network using sensor data is calculated, and the deep neural network includes a plurality of neurons. At least one of a measurement or a classification corresponding to an object is determined based on the distribution. The deep neural network generates each prediction of the plurality of predictions with a different number of neurons.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: May 23, 2023
    Assignee: Ford Global Technologies, LLC
    Inventor: Gurjeet Singh
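Generating each prediction with a different number of neurons resembles Monte Carlo dropout; the sketch below uses random neuron masking as a stand-in, which is an assumption and not necessarily the patent's mechanism, to show how a distribution over predictions arises:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_with_random_neurons(x, W, drop_p=0.3):
    """One prediction with a random subset of hidden neurons disabled, so
    each forward pass effectively uses a different number of neurons."""
    mask = rng.random(W.shape[0]) > drop_p
    hidden = np.maximum(W @ x, 0.0) * mask
    return hidden.sum()

x = np.array([1.0, 2.0])
W = rng.normal(size=(8, 2))
preds = np.array([predict_with_random_neurons(x, W) for _ in range(100)])
mean, spread = preds.mean(), preds.std()  # distribution over the predictions
```

A measurement could be read off the mean of this distribution and its reliability off the spread.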
  • Patent number: 11651191
    Abstract: A method, apparatus, and computer program product are provided for improved neural network implementations using a repeated convolution-based attention module. Example embodiments implement a repeated convolution-based attention module that utilizes multiple iterations of a repeated convolutional application layer and subsequent augmentations to generate an attention module output. Example methods may include augmenting an attention input data object based on a previous iteration convolutional output to produce a current iteration input parameter, inputting that parameter to the repeated convolutional application layer to generate a current iteration convolutional output, repeating for multiple iterations, and augmenting the attention input data object based on the final convolutional output to produce an attention module output.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: May 16, 2023
    Assignee: Here Global B.V.
    Inventors: Amritpal Singh Gill, Nicholas Dronen, Shubhabrata Roy, Raghavendran Balu
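The iterate-and-augment loop in the abstract could be sketched in 1D; the additive (residual-style) augmentation, the averaging kernel, and the iteration count are assumptions for illustration:

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1D convolution standing in for the repeated
    convolutional application layer."""
    pad = len(kernel) // 2
    return np.convolve(np.pad(x, pad, mode="edge"), kernel, mode="valid")[:len(x)]

def repeated_attention(x, kernel, iters=3):
    """Each iteration augments the attention input with the previous
    iteration's convolutional output and feeds the result back through the
    same layer; the final output augments the original input once more."""
    out = np.zeros_like(x)
    for _ in range(iters):
        out = conv1d(x + out, kernel)  # current iteration input parameter
    return x + out                     # attention module output

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
att = repeated_attention(x, kernel=np.array([0.25, 0.5, 0.25]))
```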
  • Patent number: 11645779
    Abstract: An apparatus includes an interface and a processor. The interface may be configured to receive pixel data of an area external to a vehicle. The processor may be configured to generate video frames from the pixel data, perform computer vision operations on the video frames to detect objects in the video frames and determine characteristics of the objects, analyze the characteristics of the objects to determine elevation characteristics of a driving surface with respect to the vehicle, perform a comparison of the elevation characteristics to clearance data of the vehicle and determine an approach angle for the vehicle in response to the comparison. The approach angle may be determined to prevent an impact between the vehicle and the driving surface. The approach angle may be presented to a vehicle system.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: May 9, 2023
    Assignee: Ambarella International LP
    Inventor: Anthony Pertsel
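The final comparison step, checking the elevation of the surface ahead against the vehicle's clearance, reduces to simple trigonometry; the rise/run numbers and the 15-degree clearance angle below are invented for the sketch:

```python
import math

def approach_angle(rise, run, clearance_angle_deg):
    """Slope angle of the driving surface ahead, compared against the
    vehicle's clearance angle to flag a possible impact."""
    angle = math.degrees(math.atan2(rise, run))
    return angle, angle <= clearance_angle_deg

# Surface rises 0.3 m over 2.0 m of forward travel; clearance angle 15 deg.
angle, safe = approach_angle(rise=0.3, run=2.0, clearance_angle_deg=15.0)
```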
  • Patent number: 11640634
    Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: May 2, 2023
    Inventors: Kumar Ayush, Ayush Chopra, Patel Utkarsh Govind, Balaji Krishnamurthy, Anirudh Singhal
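The score-combination-and-selection step could be sketched as a weighted sum over candidates; the numeric stand-ins for catalog items, the toy scorers, and the equal weighting are all assumptions, not the patent's learned models:

```python
def pick_item(bundle, candidates, type_context_score, style_score, alpha=0.5):
    """Combine a type-and-context compatibility score with a style score
    into one unified score and pick the best candidate for the bundle."""
    def unified(item):
        return (alpha * type_context_score(bundle, item)
                + (1 - alpha) * style_score(bundle, item))
    return max(candidates, key=unified)

# Toy scorers over numeric stand-ins for catalog items.
bundle = [1.0, 2.0]
candidates = [1.5, 5.0, 2.2]
tc = lambda b, i: -abs(i - sum(b) / len(b))   # closeness to the bundle mean
st = lambda b, i: -abs(i - b[-1])             # closeness to the last item
best = pick_item(bundle, candidates, tc, st)  # fills in the blank
```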
  • Patent number: 11625554
    Abstract: A training method, system, and computer program product include training a neural network including using two-sided ReLU as a non-linear function or norm-pooling as a non-linear function and increasing a confidence gap.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: April 11, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Haifeng Qian, Mark Wegman
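Both non-linearities named in the abstract have short, standard forms; the group size of 2 for norm-pooling is an illustrative choice:

```python
import numpy as np

def two_sided_relu(x):
    """Two-sided ReLU: keep the positive and negative parts as separate
    channels, so no pre-activation sign information is discarded."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

def norm_pool(x, k=2):
    """Norm-pooling: replace each group of k activations with its L2 norm."""
    return np.sqrt((np.asarray(x).reshape(-1, k) ** 2).sum(axis=1))

tsr = two_sided_relu(np.array([-2.0, 0.5, 3.0]))  # -> [0, 0.5, 3, 2, 0, 0]
pooled = norm_pool([3.0, 4.0, 0.0, 1.0])          # -> [5, 1]
```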
  • Patent number: 11615508
    Abstract: A method for automatic selection of display settings for a medical image is provided. The method includes receiving a medical image, mapping the medical image to an appearance classification cell of an appearance classification matrix using a trained deep neural network, selecting a first window width (WW) and a first window center (WC) for the medical image based on the appearance classification and a target appearance classification, adjusting the first WW and the first WC based on user preferences to produce a second WW and a second WC, and displaying the medical image with the second WW and the second WC via a display device.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: March 28, 2023
    Assignee: GE Precision Healthcare LLC
    Inventors: German Guillermo Vera-Gonzalez, Najib Akram, Ping Xue, Fengchao Zhang, Gireesha Chinthamani Rao, Justin Wanek
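The WW/WC display settings apply the standard linear windowing used in radiology viewers; the abstract does not give the formula, so the sketch below shows the conventional mapping with invented example values:

```python
import numpy as np

def apply_window(image, ww, wc):
    """Map raw intensities to display range [0, 1] with a window width (WW)
    and window center (WC): values below WC - WW/2 clip to black, values
    above WC + WW/2 clip to white, and the window maps linearly between."""
    lo = wc - ww / 2.0
    return np.clip((image - lo) / ww, 0.0, 1.0)

ct = np.array([-1000.0, 40.0, 80.0, 1000.0])  # Hounsfield-like raw values
display = apply_window(ct, ww=80.0, wc=40.0)  # -> [0.0, 0.5, 1.0, 1.0]
```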
  • Patent number: 11615603
    Abstract: The embodiments herein provide a method and system that analyze pixel vectors by transforming each pixel vector into a two-dimensional spectral shape space and then performing convolution over the image of the graph thus formed. The disclosed method and system convert the pixel vector into an image and provide a DCNN architecture built to process this 2D visual representation of the pixel vectors, learning spectral shape features and classifying the pixels. The DCNN thus learns edges, arcs, arc segments, and other shape features of the spectrum. The disclosed method thereby converts a spectral signature into a shape, which is then decomposed using hierarchical features learned at the different convolution layers of the disclosed DCNN.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: March 28, 2023
    Assignee: Tata Consultancy Services Limited
    Inventors: Shailesh Shankar Deshpande, Rohit Thakur, Balamuralidhar Purushothaman
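The spectrum-to-shape conversion could be sketched as rasterizing the 1D signature into a binary image; the image height, the one-pixel-per-band encoding, and the sample values are assumptions for illustration:

```python
import numpy as np

def spectrum_to_image(spectrum, height=32):
    """Rasterize a 1D spectral signature into a 2D binary image: each band
    becomes a column with one pixel set at the row given by its normalized
    value, so the spectrum's shape becomes drawable edges and arcs."""
    s = np.asarray(spectrum, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-9)
    img = np.zeros((height, len(s)), dtype=np.uint8)
    rows = ((height - 1) * (1.0 - s)).astype(int)  # high values near the top
    img[rows, np.arange(len(s))] = 1
    return img

spectrum = [0.1, 0.4, 0.9, 0.6, 0.2]  # one pixel vector (5 spectral bands)
img = spectrum_to_image(spectrum)     # 2D input for the DCNN
```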
  • Patent number: 11610314
    Abstract: Systems and methods for panoptic segmentation of an image of a scene, comprising: receiving a synthetic data set as simulation data set in a simulation domain, the simulation data set comprising a plurality of synthetic data objects; disentangling the synthetic data objects by class for a plurality of object classes; training each class of the plurality of classes separately by applying a Generative Adversarial Network (GAN) to each class from the data set in the simulation domain to create a generated instance for each class; combining the generated instances for each class with labels for the objects in each class to obtain a fake instance of an object; fusing the fake instances to create a fused image; and applying a GAN to the fused image and a corresponding real data set in a real-world domain to obtain an updated data set. The process can be repeated across multiple iterations.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: March 21, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kuan-Hui Lee, Jie Li, Adrien David Gaidon