Patents Examined by Ping Y Hsieh
  • Patent number: 11979941
    Abstract: Embodiments of this application relate to the terminal field and disclose a data transmission method and a terminal, to increase the speed of data transmission between terminals and ensure stability during the transmission. The method includes: a terminal establishes both a wireless connection and a USB connection to another terminal; the terminal then displays a first interface that includes at least one piece of candidate data; a user selects the data to be transmitted from the candidate data by performing a first input on the first interface; and after the user performs a second input on the first interface, the terminal sends the selected data to the other terminal over both the wireless connection and the USB connection.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: May 7, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Yang, Zhongbiao Wu, Xiaodong Tian, Jian Chen, Wanjun Wei, Shuang Zhu
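A minimal sketch of the dual-link transfer described in patent 11979941 above: a payload is split into chunks and alternated across two independent channels standing in for the wireless and USB connections. The send_wireless/send_usb callables and the chunk size are illustrative assumptions, not details from the patent.

```python
# Minimal sketch: split a payload across two links (stand-ins for the
# wireless and USB connections) to increase aggregate throughput.
# The send_wireless/send_usb callables and CHUNK size are illustrative
# assumptions, not details taken from the patent.
from typing import Callable, List

CHUNK = 64 * 1024  # 64 KiB chunks (assumed)

def split_and_send(payload: bytes,
                   send_wireless: Callable[[bytes], None],
                   send_usb: Callable[[bytes], None]) -> None:
    """Interleave fixed-size chunks over the two connections."""
    links = [send_wireless, send_usb]
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        links[(i // CHUNK) % len(links)](chunk)  # alternate between the two links

if __name__ == "__main__":
    sent: List[str] = []
    split_and_send(b"x" * (5 * CHUNK),
                   lambda c: sent.append(f"wifi:{len(c)}"),
                   lambda c: sent.append(f"usb:{len(c)}"))
    print(sent)  # chunks alternate: wifi, usb, wifi, usb, wifi
```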
  • Patent number: 11967179
    Abstract: A system and method for performing facial recognition is described. In some implementations, the system and method identify points of a three-dimensional scan that are associated with occlusions of a target subject's face, such as eyeglasses, and remove the identified points from the three-dimensional scan.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: April 23, 2024
    Assignee: Aeva, Inc.
    Inventors: Raghavender Reddy Jillela, Trina D. Russ
  • Patent number: 11967137
    Abstract: According to one embodiment, a method, computer system, and computer program product for object detection. The embodiment may include receiving an annotated image dataset comprising rectangles which surround objects to be detected and labels which specify a class to which an object belongs. The embodiment may include calculating areas of high and low probability of rectangle distribution for each class of objects within images of the dataset. The embodiment may include applying a correction factor to confidence values of object prediction results, obtained during validation of a trained object detection (OD) model, depending on a class label and a rectangle location of an object prediction result and calculating an accuracy of the trained OD model. The embodiment may include increasing the correction factor and re-calculating the accuracy of the trained OD model with every increase. The embodiment may include selecting an optimal correction factor which yields a highest accuracy.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: April 23, 2024
    Assignee: International Business Machines Corporation
    Inventors: Hiroki Kawasaki, Shingo Nagai
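The correction-factor sweep in patent 11967137 above can be illustrated with a short sketch that boosts the confidences of predictions falling in their class's high-probability area, re-computes validation accuracy for each factor, and keeps the best one. The data layout, the in_high_prob_region test, and the accuracy metric are simplified assumptions.

```python
# Sketch of the correction-factor sweep described above.  Each prediction is a
# dict with (label, center, confidence, is_correct); in_high_prob_region is a
# hypothetical per-class test derived from the annotated rectangle distribution.
import numpy as np

def accuracy(preds, threshold=0.5):
    """Fraction of correct predictions among those kept after thresholding."""
    kept = [p for p in preds if p["confidence"] >= threshold]
    return sum(p["is_correct"] for p in kept) / max(len(kept), 1)

def apply_correction(preds, factor, in_high_prob_region):
    """Boost confidence when the box lies in its class's high-probability area."""
    out = []
    for p in preds:
        boost = factor if in_high_prob_region(p["label"], p["center"]) else 0.0
        out.append({**p, "confidence": min(1.0, p["confidence"] + boost)})
    return out

def select_optimal_factor(preds, in_high_prob_region,
                          factors=np.arange(0.0, 0.31, 0.05)):
    """Increase the correction factor step by step and keep the most accurate one."""
    best = max(factors,
               key=lambda f: accuracy(apply_correction(preds, f, in_high_prob_region)))
    return float(best)
```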
  • Patent number: 11961243
    Abstract: A geometric approach may be used to detect objects on a road surface. A set of points within a region of interest is captured and tracked between a first frame and a second frame to determine the difference in the points' locations across the two frames. The first frame may be aligned with the second frame, and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a value above a first threshold may be combined, and the third pixels of the one or more subsets may be scored and associated with disparity values for each pixel. A bounding shape may be generated based on the scoring.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
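A rough NumPy-only sketch of the frame-differencing step in patent 11961243 above: with alignment assumed already done, the absolute pixel difference serves as the disparity image, above-threshold pixels are combined into a bounding box, and the box is scored from the disparity values. The threshold and the mean-disparity score are illustrative choices, not values from the patent.

```python
# Sketch of the frame-differencing step described above, using NumPy only.
import numpy as np

def detect_obstacle(frame0: np.ndarray, frame1: np.ndarray, threshold: float = 25.0):
    """Return (bounding box, score) for above-threshold disparity, or None."""
    disparity = np.abs(frame1.astype(np.float32) - frame0.astype(np.float32))
    mask = disparity > threshold                      # "third pixels" above threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())   # combined bounding shape
    score = float(disparity[mask].mean())             # score from disparity values
    return bbox, score
```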
  • Patent number: 11951622
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Paul Wohlhart, Stephen James, Mrinal Kalakrishnan, Konstantinos Bousmalis
  • Patent number: 11948378
    Abstract: Systems and methods for dynamically generating a predicted similarity score for a pair of input sequences. A predicted similarity score for the pair of input sequences is determined based at least in part on at least one of: a token-level similarity probability score, a target region match indication, a fuzzy match score, a character-level match score, one or more similarity ratio occurrence indicators, and a harmonic mean of the fuzzy match score and the token-level similarity probability score.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: April 2, 2024
    Assignee: UnitedHealth Group Incorporated
    Inventors: Subhodeep Dey, Brad Booher, Edward Sverdlin, Reshma S. Ombase, Raghvendra Kumar Yadav
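A minimal sketch of combining the similarity signals listed in patent 11948378 above, using difflib ratios as stand-ins for the fuzzy and character-level scores and token Jaccard as a stand-in for the token-level similarity probability; the equal weighting of the signals is an assumption.

```python
# Sketch of combining several sequence-similarity signals into one score.
# difflib's ratio stands in for the fuzzy and character-level scores; token
# Jaccard stands in for the token-level similarity probability; the equal
# weighting is an illustrative assumption.
from difflib import SequenceMatcher

def token_similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def predicted_similarity(a: str, b: str) -> float:
    token = token_similarity(a, b)
    fuzzy = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    char = SequenceMatcher(None, a.replace(" ", ""), b.replace(" ", "")).ratio()
    harmonic = 2 * fuzzy * token / (fuzzy + token) if (fuzzy + token) else 0.0
    return (token + fuzzy + char + harmonic) / 4.0

print(round(predicted_similarity("acute myocardial infarction",
                                 "myocardial infarction, acute"), 3))
```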
  • Patent number: 11948273
    Abstract: An image processing method includes generating a second image by applying noise to a first image, as a first process, and generating a third image by upsampling the second image through a neural network, as a second process. The second image is generated by converting the first image from its first bit depth to a higher second bit depth, applying the noise at the second bit depth, and then restoring the image to the first bit depth.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: April 2, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yoshinori Kimura
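The higher-bit-depth noise step of patent 11948273 above, sketched for an 8-bit input that is promoted to 16 bits, perturbed with Gaussian noise, and requantized back to 8 bits before the upsampling network; the noise level is an assumption.

```python
# Sketch of applying noise at a higher bit depth and restoring the original
# depth, as described above.  The Gaussian noise level is an assumption.
import numpy as np

def add_noise_high_depth(img_u8: np.ndarray, sigma: float = 300.0) -> np.ndarray:
    img_u16 = img_u8.astype(np.uint16) * 257                      # promote 8-bit -> 16-bit
    noisy = img_u16.astype(np.float64) + np.random.normal(0.0, sigma, img_u16.shape)
    noisy = np.clip(noisy, 0, 65535)
    return (noisy / 257.0).round().astype(np.uint8)               # restore to 8-bit

img = (np.random.rand(4, 4) * 255).astype(np.uint8)
print(add_noise_high_depth(img))
```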
  • Patent number: 11941892
    Abstract: A method for providing data for creating a digital map. The method includes: detecting sensor data of the surroundings during a measuring run of a physical system, preferably a vehicle, the surroundings sensor data capturing the surroundings in an at least partially overlapping manner, first surroundings sensor data including three-dimensional information and second surroundings sensor data including two-dimensional information; extracting, with the aid of a first neural network situated in the physical system, at least one defined object from the first and second surroundings sensor data into first extracted data; and extracting, with the aid of a second neural network situated in the physical system, characteristic features including descriptors from the first extracted data into second extracted data, the descriptors being provided for a defined alignment of the second extracted data in a map creation process.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: March 26, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Tayyab Naseer, Piyapat Saranrittichai, Carsten Hasberg
  • Patent number: 11934953
    Abstract: An image detection apparatus includes: a display outputting an image; a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to: detect, by using a neural network, an additional information area in a first image output on the display; obtain style information of the additional information area from the additional information area; and detect, in a second image output on the display, an additional information area having style information different from the style information by using a model that has learned an additional information area having new style information generated based on the style information.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: March 19, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Youngchun Ahn
  • Patent number: 11935224
    Abstract: The disclosure is directed to, among other things, systems and methods for troubleshooting equipment installations using machine learning. Particularly, the systems and methods described herein may be used to validate an installation of one or more devices (which may be referred to as “customer premises equipment (CPE)” herein as well) at a given location, such as a customer's home or a commercial establishment. As one non-limiting example, the one or more devices may be associated with a fiber optical network, and may include a modem and/or an optical network terminal (ONT). However, the one or more devices may include any other types of devices associated with any other types of networks as well.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: March 19, 2024
    Assignee: Cox Communications, Inc.
    Inventors: Monte Fowler, Lalit Bhatia, Jagan Arumugham
  • Patent number: 11922615
    Abstract: An information processing device is provided. A defect detecting unit detects a defect of an object in an input image. An extracting unit extracts a feature amount pertaining to a partial image of the defect from the input image, on the basis of a result of detecting the defect. An attribute determining unit determines an attribute of the defect using the feature amount pertaining to the partial image of the defect.
    Type: Grant
    Filed: February 9, 2022
    Date of Patent: March 5, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventors: Atsushi Nogami, Yusuke Mitarai
  • Patent number: 11922619
    Abstract: A context-based inspection system is disclosed. The system may include an optical imaging sub-system. The system may further include one or more controllers communicatively coupled to the optical imaging sub-system. The one or more controllers may be configured to: receive one or more reference images; receive one or more test images of a sample; generate one or more probabilistic context maps during inspection runtime using an unsupervised classifier; provide the generated one or more probabilistic context maps to a supervised classifier during the inspection runtime; and apply the supervised classifier to the received one or more test images to identify one or more defects of interest (DOIs) on the sample.
    Type: Grant
    Filed: March 29, 2023
    Date of Patent: March 5, 2024
    Assignee: KLA Corporation
    Inventors: Brian Duffy, Bradley Ries, Laurent Karsenti, Kuljit S. Virk, Asaf J. Elron, Ruslan Berdichevsky, Oriel Ben Shmuel, Shlomi Fenster, Yakir Gorski, Oren Dovrat, Ron Dekel, Emanuel Garbin, Sasha Smekhov
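A loose sketch of pairing an unsupervised context map with a supervised classifier, as in patent 11922619 above. Raw pixel intensities stand in for the features, and GaussianMixture / RandomForest are illustrative model choices, not the classifiers used by the patent.

```python
# Sketch of pairing an unsupervised probabilistic "context map" with a
# supervised defect classifier, as described above.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

def context_map(reference: np.ndarray, n_contexts: int = 3) -> np.ndarray:
    """Per-pixel probabilistic context map from an unsupervised model."""
    feats = reference.reshape(-1, 1).astype(np.float64)
    gm = GaussianMixture(n_components=n_contexts, random_state=0).fit(feats)
    return gm.predict_proba(feats)                    # shape: (n_pixels, n_contexts)

def train_supervised(test_img: np.ndarray, ctx: np.ndarray, labels: np.ndarray):
    """Supervised classifier over pixel intensity plus context probabilities."""
    X = np.hstack([test_img.reshape(-1, 1), ctx])
    return RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels.ravel())
```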
  • Patent number: 11922605
    Abstract: A method includes receiving, at a conference endpoint, video captured using a wide angle lens. The method further includes selecting a view region in a frame of the video. The method further includes selectively applying, based on a size of the view region, deformation correction or distortion correction to the view region to generate a corrected video frame. The method further includes transmitting the corrected video frame to a remote endpoint.
    Type: Grant
    Filed: November 23, 2018
    Date of Patent: March 5, 2024
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Tianran Wang, Hailin Song, Wenxue He
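The size-based selection step of patent 11922605 above, sketched as a simple branch on the view region's share of the frame; the 50% area threshold and the pass-through correction functions are placeholders, not the patented remapping.

```python
# Sketch of choosing between two corrections based on view-region size, per the
# abstract above.  The 50% threshold and identity "corrections" are placeholders.
import numpy as np

def deformation_correction(view: np.ndarray) -> np.ndarray:  # placeholder
    return view

def distortion_correction(view: np.ndarray) -> np.ndarray:   # placeholder
    return view

def correct_view(frame: np.ndarray, region, area_ratio_threshold: float = 0.5) -> np.ndarray:
    x0, y0, x1, y1 = region
    region_area = (x1 - x0) * (y1 - y0)
    frame_area = frame.shape[0] * frame.shape[1]
    view = frame[y0:y1, x0:x1]
    if region_area / frame_area >= area_ratio_threshold:
        return deformation_correction(view)   # large region: correct deformation
    return distortion_correction(view)        # small region: correct lens distortion
```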
  • Patent number: 11922617
    Abstract: The present application provides a method and system for defect detection. The method includes: acquiring a two-dimensional (2D) picture of an object to be detected; inputting the acquired 2D picture to a trained defect segmentation model to obtain a segmented 2D defect mask, where the defect segmentation model is trained based on a multi-level feature extraction instance segmentation network with intersection over union (IoU) thresholds being increased level by level, and the 2D defect mask includes information about a defect type, a defect size, and a defect location of a segmented defect region; and determining the segmented 2D defect mask based on a predefined defect rule to output a defect detection result.
    Type: Grant
    Filed: May 12, 2023
    Date of Patent: March 5, 2024
    Assignee: CONTEMPORARY AMPEREX TECHNOLOGY CO., LIMITED
    Inventors: Annan Shu, Chao Yuan, Lili Han
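A small sketch of the final "predefined defect rule" step of patent 11922617 above: each segmented defect's type and size are checked against a rule table to produce the detection result. The rule values and defect fields are assumptions.

```python
# Sketch of applying a predefined defect rule to segmented defects, as in the
# final step above.  The rule table and defect fields are assumptions.
RULES = {"scratch": {"max_area_px": 200}, "dent": {"max_area_px": 50}}

def judge_defects(defects):
    """Each defect: {'type': str, 'area_px': int, 'location': (x, y)}."""
    failures = [d for d in defects
                if d["area_px"] > RULES.get(d["type"], {"max_area_px": 0})["max_area_px"]]
    return {"ok": not failures, "failures": failures}

print(judge_defects([{"type": "scratch", "area_px": 120, "location": (10, 20)},
                     {"type": "dent", "area_px": 80, "location": (40, 5)}]))
```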
  • Patent number: 11915500
    Abstract: A system uses a neural network based model to perform scene text recognition. The system achieves high accuracy of prediction of text from scenes based on a neural network architecture that uses a double attention mechanism. The neural network based model includes a convolutional neural network component that outputs a set of visual features and an attention extractor neural network component that determines attention scores based on the visual features. The visual features and the attention scores are combined to generate mixed features that are provided as input to a character recognizer component that determines a second attention score and recognizes the characters based on the second attention score. The system trains the neural network based model by adjusting the neural network parameters to minimize a multi-class gradient harmonizing mechanism (GHM) loss. The multi-class GHM loss varies based on the level of difficulty of each training sample.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: February 27, 2024
    Assignee: Salesforce, Inc.
    Inventors: Pan Zhou, Peng Tang, Ran Xu, Chu Hong Hoi
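A minimal PyTorch-style sketch, for patent 11915500 above, of mixing CNN visual features with spatial attention scores to produce the mixed features fed to the character recognizer; the channel sizes and single attention head are illustrative, and this is not the patented double-attention architecture.

```python
# Minimal PyTorch sketch of mixing visual features with attention scores,
# loosely following the abstract above.  Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionMixer(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.backbone = nn.Conv2d(3, channels, kernel_size=3, padding=1)  # stand-in CNN
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)                 # attention scores

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)                        # (B, C, H, W) visual features
        scores = self.attn(feats)                            # (B, 1, H, W)
        b, c, h, w = feats.shape
        weights = torch.softmax(scores.view(b, 1, h * w), dim=-1)
        mixed = (feats.view(b, c, h * w) * weights).sum(-1)  # (B, C) mixed features
        return mixed

print(AttentionMixer()(torch.randn(2, 3, 32, 128)).shape)  # torch.Size([2, 64])
```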
  • Patent number: 11906660
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: February 20, 2024
    Assignee: NVIDIA Corporation
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
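A sketch of camera-to-LiDAR label propagation in the spirit of patent 11906660 above: LiDAR points are projected into the camera image with assumed calibration matrices and inherit the per-pixel class labels. K (intrinsics), T (LiDAR-to-camera extrinsics), and the label map are assumed inputs, not details from the patent.

```python
# Sketch of propagating image-domain labels to LiDAR points.
import numpy as np

def propagate_labels(points_xyz: np.ndarray, label_map: np.ndarray,
                     K: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Return one class label per LiDAR point (-1 if it falls outside the image)."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T @ homo.T).T[:, :3]                      # LiDAR frame -> camera frame
    labels = np.full(len(points_xyz), -1, dtype=int)
    in_front = cam[:, 2] > 0                         # keep points in front of the camera
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)
    h, w = label_map.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_map[uv[valid, 1], uv[valid, 0]]   # copy image labels to points
    return labels
```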
  • Patent number: 11907338
    Abstract: Techniques are provided herein for retrieving images that correspond to a target subject matter within a target context. Although useful in a number of applications, the techniques provided herein are particularly useful in contextual product association and visualization. A method is provided to apply product images to a neural network. The neural network is configured to classify the products in the images. The images are associated with a context representing the combination of classified products in the images. These techniques leverage both seller-provided images of products and user-generated content, which potentially includes hundreds or thousands of images of the same or similar products as the seller-provided images. A graphical user interface is configured to permit a user to select the context of interest in which to visualize the products.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: February 20, 2024
    Assignee: Adobe Inc.
    Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma
  • Patent number: 11908044
    Abstract: Methods and systems are provided for increasing a quality of computed tomography (CT) images reconstructed from high helical pitch scans. In one embodiment, the current disclosure provides for a method comprising generating a first computed tomography (CT) image from projection data acquired at a high helical pitch; using a trained multidimensional statistical regression model to generate a second CT image from the first CT image, the multidimensional statistical regression model trained with a plurality of target CT images reconstructed from projection data acquired at a lower helical pitch; and performing an iterative correction of the second CT image to generate a final CT image.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: February 20, 2024
    Assignees: GE PRECISION HEALTHCARE LLC, WISCONSIN ALUMNI RESEARCH FOUNDATION
    Inventors: Guang-Hong Chen, Jiang Hsieh
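The final iterative-correction step of patent 11908044 above, sketched as a gradient-style update that nudges the network-refined image toward consistency with the measured projections; the projection operators and step size are placeholders, not the patented reconstruction.

```python
# Sketch of the iterative correction step above as a simple data-consistency
# update.  `forward_project` and `back_project` are placeholder operators.
import numpy as np

def iterative_correction(second_ct, projections, forward_project, back_project,
                         n_iters: int = 10, step: float = 0.1):
    x = second_ct.astype(np.float64).copy()
    for _ in range(n_iters):
        residual = forward_project(x) - projections   # mismatch in projection domain
        x -= step * back_project(residual)            # push image toward consistency
    return x
```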
  • Patent number: 11908120
    Abstract: A fault detection method and system for tunnel dome lights based on an improved localization loss function. The method includes: constructing a dataset of tunnel dome light detection images; acquiring a you only look once (YOLO) v5s neural network based on the improved localization loss function; training the YOLO v5s neural network according to the dataset to obtain a trained YOLO v5s neural network; acquiring a to-be-detected tunnel dome light image; detecting, with the trained YOLO v5s neural network, the to-be-detected tunnel dome light image to obtain position coordinates of luminous dome lights; and determining, according to the position coordinates of the luminous dome lights, whether a fault occurs in the tunnel dome lights. The present disclosure can accurately localize the tunnel dome lights and label the positions, and can detect whether the tunnel dome lights work normally according to a relative positional relationship between the labeled dome lights.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 20, 2024
    Assignee: EAST CHINA JIAOTONG UNIVERSITY
    Inventors: Gang Yang, Cailing Tang, Lizhen Dai, Zhipeng Yang, Yuyu Weng, Hui Yang, Rongxiu Lu, Fangping Xu, Shuo Xu
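The fault decision of patent 11908120 above, sketched as sorting detected dome-light box centers along the tunnel axis and flagging any gap larger than the expected spacing; the expected spacing and tolerance are assumptions, not values from the patent.

```python
# Sketch of the fault decision above: flag gaps between neighboring dome lights
# that exceed the expected spacing.  Spacing and tolerance are assumptions.
def find_faults(boxes, expected_spacing: float, tolerance: float = 0.5):
    """boxes: list of (x1, y1, x2, y2) from the YOLO v5s detector."""
    centers = sorted((x1 + x2) / 2.0 for x1, _, x2, _ in boxes)
    limit = expected_spacing * (1.0 + tolerance)
    return [(a, b) for a, b in zip(centers, centers[1:]) if (b - a) > limit]

print(find_faults([(0, 0, 10, 10), (100, 0, 110, 10), (320, 0, 330, 10)],
                  expected_spacing=100))   # [(105.0, 325.0)] -> missing light in between
```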
  • Patent number: 11908036
    Abstract: The technology described herein is directed to a cross-domain training framework that iteratively trains a domain adaptive refinement agent to refine low quality real-world image acquisition data, e.g., depth maps, when accompanied by corresponding conditional data from other modalities, such as the underlying images or video from which the image acquisition data is computed. The cross-domain training framework includes a shared cross-domain encoder and two conditional decoder branch networks, e.g., a synthetic conditional depth prediction branch network and a real conditional depth prediction branch network. The shared cross-domain encoder converts synthetic and real-world image acquisition data into synthetic and real compact feature representations, respectively.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: February 20, 2024
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, Jianming Zhang, Dingzeyu Li, Zekun Hao
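A minimal PyTorch-style sketch of the architecture outlined in patent 11908036 above: a shared encoder over the depth input, a conditional branch over the accompanying RGB frame, and two decoder branches for the synthetic and real domains. Layer sizes and the feature concatenation are assumptions, not the patented network.

```python
# Sketch of a shared cross-domain encoder with two conditional decoder branches.
import torch
import torch.nn as nn

class CrossDomainRefiner(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.encoder = nn.Conv2d(1, ch, 3, padding=1)          # shared depth encoder
        self.cond = nn.Conv2d(3, ch, 3, padding=1)              # conditional RGB branch
        self.dec_synth = nn.Conv2d(2 * ch, 1, 3, padding=1)     # synthetic decoder branch
        self.dec_real = nn.Conv2d(2 * ch, 1, 3, padding=1)      # real decoder branch

    def forward(self, depth, rgb, domain: str):
        feats = torch.cat([self.encoder(depth), self.cond(rgb)], dim=1)
        return self.dec_synth(feats) if domain == "synthetic" else self.dec_real(feats)

model = CrossDomainRefiner()
print(model(torch.randn(1, 1, 64, 64), torch.randn(1, 3, 64, 64), "real").shape)
```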