Patents Examined by Leon Flores
-
Patent number: 11436449
Abstract: An image processing method can include: acquiring an image; determining a feature map of the image based on an image tag classification model, wherein the image tag classification model comprises a plurality of classification tasks; and determining tags corresponding to the feature map based on the classification tasks, wherein each of the tags comprises a probability value.
Type: Grant
Filed: September 28, 2020
Date of Patent: September 6, 2022
Assignee: BEIJING DAJIA INTERNET INFORMATION TECH. CO., LTD.
Inventors: Zhiwei Zhang, Fan Yang
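A minimal sketch of the multi-task tagging idea the abstract describes: a shared feature vector is fed to several per-tag classifiers, each emitting a probability. The linear-plus-sigmoid heads and all names here are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_tags(feature, heads):
    """Run each classification head on a shared feature vector.

    feature: (d,) vector pooled from the image's feature map.
    heads:   dict mapping tag name -> (weights (d,), bias) for a
             hypothetical per-tag binary classifier.
    Returns a dict mapping each tag to a probability value.
    """
    return {tag: float(sigmoid(feature @ w + b)) for tag, (w, b) in heads.items()}

rng = np.random.default_rng(0)
feature = rng.standard_normal(8)
heads = {
    "outdoor": (rng.standard_normal(8), 0.0),
    "portrait": (rng.standard_normal(8), 0.0),
}
tags = predict_tags(feature, heads)  # e.g. {"outdoor": 0.xx, "portrait": 0.yy}
```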
-
Patent number: 11430203
Abstract: A computer-implemented method for registering low dimensional images with a high dimensional image includes receiving a high dimensional image of a region of interest and simulating synthetic low dimensional images of the region of interest, from a number of poses of a virtual low dimensional imaging device, from the high dimensional image. The method determines positions of landmarks within the low dimensional images by applying a first learning algorithm to the low dimensional images and back projecting the positions of the determined landmarks into the high dimensional image space, to thereby obtain the positions of the landmarks in the high dimensional image. The positions of landmarks within low dimensional images acquired from an imaging device are determined by applying the first or a second learning algorithm to the low dimensional images. The low dimensional images are registered with the high dimensional image based on the positions of the landmarks.
Type: Grant
Filed: October 2, 2020
Date of Patent: August 30, 2022
Assignee: MAXER Endoscopy GmbH
Inventors: Nassir Navab, Matthias Grimm, Javier Esteban, Wojciech Konrad Karcz
-
Patent number: 11430219
Abstract: Systems and methods predict a performance metric for a video and identify key portions of the video that contribute to the performance metric, which can be used to edit the video to improve the ultimate viewer response to the video. An initial performance metric is computed for an initial video (e.g., using a neural network). A perturbed video is generated by perturbing a video portion of the initial video. A modified performance metric is computed for the perturbed video. Based on a difference between the initial and modified performance metrics, the system determines that the video portion contributed to a predicted viewer response to the initial video. An indication of the video portion that contributed to the predicted viewer response is provided as output, which can be used to edit the video to improve the predicted viewer response.
Type: Grant
Filed: November 19, 2020
Date of Patent: August 30, 2022
Assignee: Adobe Inc.
Inventors: Somdeb Sarkhel, Viswanathan Swaminathan, Stefano Petrangeli, Md Maminur Islam
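The perturb-and-compare attribution loop the abstract outlines can be sketched as follows. The metric function and perturbation operator here are toy stand-ins for the patent's neural predictor, used only to show how the per-segment contribution falls out of the metric difference.

```python
import numpy as np

def segment_contributions(video_segments, metric_fn, perturb_fn):
    """Attribute a predicted performance metric to individual video segments.

    For each segment, replace it with a perturbed version, recompute the
    metric, and record the drop relative to the original prediction.
    metric_fn and perturb_fn are hypothetical stand-ins for the patent's
    neural predictor and perturbation operator.
    """
    base = metric_fn(video_segments)
    contributions = []
    for i in range(len(video_segments)):
        perturbed = list(video_segments)
        perturbed[i] = perturb_fn(perturbed[i])
        contributions.append(base - metric_fn(perturbed))
    return contributions

# Toy stand-ins: the "metric" is the mean brightness across segments,
# and the perturbation blanks a segment out.
segments = [np.full(4, 1.0), np.full(4, 3.0)]
metric = lambda segs: float(np.mean([s.mean() for s in segs]))
contrib = segment_contributions(segments, metric, lambda s: np.zeros_like(s))
# contrib == [0.5, 1.5]: blanking the brighter segment hurts the metric more.
```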
-
Patent number: 11429809
Abstract: The present disclosure discloses an image processing method and a related device thereof. The method includes: acquiring an image to be processed; and performing a feature extraction process on the image to be processed using a target neural network so as to obtain target feature data of the image to be processed, wherein parameters of the target neural network are time average values of parameters of a first neural network, which is obtained from training under supervision by a training image set and an average network, and parameters of the average network are time average values of parameters of a second neural network, which is obtained from training under supervision by the training image set and the target neural network. A corresponding device is also disclosed. Feature data of the image to be processed are obtained via the feature extraction process performed on it.
Type: Grant
Filed: October 22, 2020
Date of Patent: August 30, 2022
Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
Inventors: Yixiao Ge, Dapeng Chen, Hongsheng Li
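The "time average values of parameters" described above amount to an exponential moving average (mean-teacher style) of a trained network's weights. A minimal sketch of that update, with a hypothetical decay factor:

```python
import numpy as np

def ema_update(avg_params, new_params, decay=0.99):
    """Update time-averaged parameters, mean-teacher style.

    The averaged network's parameters track a running exponential moving
    average of the trained network's parameters rather than being trained
    directly by gradient descent. decay is an assumed smoothing factor,
    not a value from the patent.
    """
    return {k: decay * avg_params[k] + (1.0 - decay) * new_params[k]
            for k in avg_params}

avg = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(10):
    avg = ema_update(avg, student, decay=0.5)
# After n steps toward a fixed target, avg approaches it as 1 - decay**n.
```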
-
Patent number: 11429822
Abstract: Disclosed is a fabric identifying system including a fabric identifying apparatus for identifying the type of a fabric of clothing, and a server. The fabric identifying apparatus includes an image camera for obtaining image information on a fabric structure of clothing, and a fabric identifier for identifying the type of the fabric based on the fabric structure of the image information. The server includes an artificial intelligence model learner for generating a fabric type identifying engine by learning the fabric structure of the image information of the received clothing through a deep neural network, and the server is configured to transmit the learned fabric type identifying engine to the fabric identifying apparatus. According to the present disclosure, it is possible to identify the type of the fabric of the clothing by using artificial intelligence (AI), AI-based screen recognition technology, and a 5G network.
Type: Grant
Filed: October 11, 2019
Date of Patent: August 30, 2022
Assignee: LG Electronics Inc.
Inventor: Yeon Kyung Chae
-
Patent number: 11423666
Abstract: A method of detecting a target object includes: extracting, through a neural network, a feature of a reference frame and a feature of a frame under detection; inputting each of at least two feature groups from at least two network layers of the neural network into a detector so as to obtain a corresponding detection result group output from the detector, wherein each feature group includes features of the reference frame and of the frame under detection, and each detection result group includes a classification result and a regression result with respect to each of a plurality of candidate boxes for a feature group; and acquiring a bounding box for the target object in the frame under detection according to the at least two detection result groups.
Type: Grant
Filed: November 17, 2020
Date of Patent: August 23, 2022
Assignee: Beijing Sensetime Technology Development Co., Ltd.
Inventors: Bo Li, Wei Wu, Fangyi Zhang
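The final step, fusing several detection result groups into one bounding box, can be sketched with a simple rule: take the candidate box with the highest classification score across all groups. This fusion rule and the data layout are assumptions; the patent leaves the exact combination open.

```python
import numpy as np

def select_bounding_box(result_groups):
    """Pick a final bounding box from several detection result groups.

    Each group pairs per-candidate-box classification scores with regressed
    boxes (one group per network layer/scale). A hypothetical fusion rule:
    keep the single box with the highest classification score overall.
    """
    best_score, best_box = -np.inf, None
    for scores, boxes in result_groups:
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best_score, best_box = float(scores[i]), boxes[i]
    return best_box, best_score

groups = [
    (np.array([0.2, 0.7]), np.array([[0, 0, 10, 10], [5, 5, 20, 20]])),
    (np.array([0.9]),      np.array([[4, 4, 18, 19]])),
]
box, score = select_bounding_box(groups)  # the 0.9-scoring box wins
```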
-
Patent number: 11410035
Abstract: Disclosed is a real-time object detection method deployed on a platform with limited computing resources, belonging to the fields of deep learning and image processing. In the present invention, the YOLO-v3-tiny neural network is improved: Tinier-YOLO retains the front five convolutional layers and pooling layers of YOLO-v3-tiny and makes predictions at two different scales. Fire modules from SqueezeNet, 1×1 bottleneck layers, and dense connections are introduced, achieving a smaller, faster, and more lightweight network that can run in real time on an embedded AI platform. The model size of Tinier-YOLO in the present invention is only 7.9 MB, which is only ¼ of the 34.9 MB of YOLO-v3-tiny, and ? of YOLO-v2-tiny. The reduction in the model size of Tinier-YOLO does not affect its real-time performance or accuracy. Real-time performance of Tinier-YOLO in the present invention is 21.8% higher than that of YOLO-v3-tiny and 70.8% higher than that of YOLO-v2-tiny.
Type: Grant
Filed: May 28, 2020
Date of Patent: August 9, 2022
Assignee: Jiangnan University
Inventors: Wei Fang, Peiming Ren, Lin Wang, Jun Sun, Xiaojun Wu
-
Patent number: 11410348
Abstract: An imaging method. The method comprises the following steps: determining a target by identifying target-related position information or characteristic information (S101); implementing a two-dimensional scan of the target to collect image data of the target in a three-dimensional space (S102); processing, during the scanning and on a real-time basis, the image data and relevant spatial information to obtain a plurality of image contents of the target, and displaying the image contents on a real-time basis (S103); and arranging the plurality of image contents in an incremental sequence to form an image of the target (S104). The imaging method prevents collection of unusable image information, shortens image data collection time, and increases the speed of the imaging process. The application further provides an imaging device.
Type: Grant
Filed: April 26, 2016
Date of Patent: August 9, 2022
Assignee: Telefield Medical Imaging Limited
Inventor: Yongping Zheng
-
Patent number: 11409989
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for model co-occurrence object detection. One of the methods includes: accessing, for a training image, first data that indicates a detected bounding box for a first object depicted in the training image and a predicted type label; accessing, for the training image, ground truth data for one or more ground truth objects; determining, using the first data and the ground truth data, that i) the detected bounding box represents an object that is not a ground truth object represented by the ground truth data or ii) the predicted type label for the first object does not match a ground truth label for the first object identified by the ground truth data; determining a penalty to adjust the model using a distance between the detected bounding box and the labeled bounding box; and training the model using the penalty.
Type: Grant
Filed: September 30, 2020
Date of Patent: August 9, 2022
Assignee: ObjectVideo Labs, LLC
Inventors: Sima Taheri, Gang Qian, Sung Chun Lee, Sravanthi Bondugula, Allison Beach
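A sketch of the distance-based penalty step: the abstract only says the penalty uses the distance between the detected and labeled boxes, so the center-distance metric and linear scaling below are assumptions for illustration.

```python
import numpy as np

def box_distance_penalty(detected_box, labeled_box, scale=0.1):
    """Penalty for a mismatched detection, growing with the distance
    between the detected box and the matched labeled (ground truth) box.

    Boxes are (x1, y1, x2, y2); distance is measured between box centers.
    The linear scaling and center-distance choice are assumptions -- the
    patent only states that the penalty uses a box distance.
    """
    det = np.asarray(detected_box, dtype=float)
    lab = np.asarray(labeled_box, dtype=float)
    det_center = (det[:2] + det[2:]) / 2.0
    lab_center = (lab[:2] + lab[2:]) / 2.0
    return scale * float(np.linalg.norm(det_center - lab_center))

p = box_distance_penalty((0, 0, 10, 10), (6, 8, 16, 18))
# centers (5, 5) vs (11, 13) -> distance 10 -> penalty 1.0
```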
-
Patent number: 11393124
Abstract: There is provided an information processing apparatus, an information processing method, and a program that are capable of easily predicting the posture of an object. An information processing apparatus according to an aspect of the present technology specifies, on the basis of learned data used in specifying corresponding points, obtained by performing learning using data of a predetermined portion that has symmetry with respect to other portions of an entire model that represents an object as a recognition target, second points on the model included in an input scene that correspond to first points on the model, as the corresponding points, and predicts the posture of the model included in the scene on the basis of the corresponding points. The present technology is applicable to an apparatus for controlling a projection system to project images according to projection mapping.
Type: Grant
Filed: February 20, 2019
Date of Patent: July 19, 2022
Assignee: SONY CORPORATION
Inventor: Gaku Narita
-
Patent number: 11386288
Abstract: A movement state recognition multitask DNN model training section 46 trains parameters of a DNN model based on an image data time series and a sensor data time series, and based on first annotation data, second annotation data, and third annotation data generated for the image data time series and the sensor data time series. Training is performed such that a movement state recognized by the DNN model, when input with the image data time series and the sensor data time series, matches the movement states indicated by the first annotation data, the second annotation data, and the third annotation data. This enables information to be efficiently extracted and combined from both video data and sensor data, and also enables movement state recognition to be implemented with high precision for a data set including data that does not fall in any movement state class.
Type: Grant
Filed: April 26, 2019
Date of Patent: July 12, 2022
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shuhei Yamamoto, Hiroyuki Toda
-
Patent number: 11386662
Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
Type: Grant
Filed: May 28, 2020
Date of Patent: July 12, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishani Chakraborty, Yi-Ling Chen, Lu Yuan
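The frame-type dispatch above is essentially a scheduling decision: run expensive detection on some frames and cheap per-track single-object tracking on the rest. A sketch, assuming a simple every-Nth-frame schedule (the patent does not specify how frame types are chosen):

```python
def process_stream(frames, detect_every=5):
    """Dispatch frames by type: detection on every Nth frame, tracking
    on the rest.

    detect_every and the returned labels are hypothetical; the patent
    only says a frame type is determined per frame and the pipeline
    branches accordingly.
    """
    schedule = []
    for i, _frame in enumerate(frames):
        if i % detect_every == 0:
            # detect proposals, assign proposal-track associations,
            # update track statuses
            schedule.append("detect")
        else:
            # advance each existing track by one frame
            schedule.append("track")
    return schedule

schedule = process_stream(range(7), detect_every=3)
# ['detect', 'track', 'track', 'detect', 'track', 'track', 'detect']
```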
-
Patent number: 11380090
Abstract: A method for object recognition at an interactive information system (IIS) includes capturing, using an imaging device of the IIS, a first image of a first representative object which represents a first one or more object disposed about the IIS; analyzing, by a computer processor of the IIS and based on a category model, the first image to determine a first representative category of the first one or more object; retrieving, by the computer processor and based on the first representative category, a first representative object model of a plurality of object models that are stored on a remote server; and analyzing, by the computer processor and based on the first representative object model, the first image to determine a first representative inventory identifier of the first representative object, which represents a first one or more inventory identifier corresponding to the first one or more object respectively.
Type: Grant
Filed: September 10, 2020
Date of Patent: July 5, 2022
Assignee: Flytech Technology Co., Ltd.
Inventors: Tung-Ying Lee, Yi-Heng Tseng, Tzu-Wei Huang, Che-Wei Lin
-
Patent number: 11379697
Abstract: An FPGA device receives an input matrix. A first convolutional kernel is determined by performing exclusive NOR operations between the input matrix and a first weight vector. A first binary kernel is determined based on the first convolutional kernel. A first layer feature map is determined by convolving the input matrix using the first binary kernel. A second convolutional kernel is determined by performing exclusive NOR operations between the first layer feature map and a second weight vector. A pooled kernel is determined based on the second convolutional kernel. A second binary kernel is determined based on the pooled kernel. A second layer feature map is determined by convolving the first layer feature map using the second binary kernel. A probability is determined that the input matrix is associated with a predetermined class of images. If the probability is greater than a threshold, classification results are provided.
Type: Grant
Filed: May 20, 2020
Date of Patent: July 5, 2022
Assignee: Bank of America Corporation
Inventor: Madhusudhanan Krishnamoorthy
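The key trick behind exclusive-NOR convolution: for inputs in {-1, +1}, XNOR followed by a popcount is equivalent to an elementwise product followed by a sum, which is what FPGA binary networks exploit. A sketch of one binary convolution layer, computed via the equivalent multiply-accumulate form for clarity (valid padding, stride 1; not the patented circuit):

```python
import numpy as np

def xnor_conv2d(binary_image, binary_kernel):
    """2-D 'convolution' via the XNOR-popcount equivalence.

    Inputs hold values in {-1, +1}. For such values, XNOR+popcount equals
    an elementwise product followed by a sum, computed directly here.
    """
    ih, iw = binary_image.shape
    kh, kw = binary_kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = binary_image[r:r + kh, c:c + kw]
            out[r, c] = np.sum(patch * binary_kernel)  # == scaled popcount(XNOR)
    return out

img = np.array([[1, -1, 1],
                [-1, 1, -1],
                [1, -1, 1]])
kern = np.array([[1, -1],
                 [-1, 1]])
fm = xnor_conv2d(img, kern)
# fm == [[4, -4], [-4, 4]]: +4 where the patch matches the kernel exactly,
# -4 where every element disagrees.
```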
-
Patent number: 11379970
Abstract: A method for training a deep learning model of a patterning process. The method includes: obtaining (i) training data including an input image of at least a part of a substrate having a plurality of features and including a truth image, (ii) a set of classes, each class corresponding to a feature of the plurality of features of the substrate within the input image, and (iii) a deep learning model configured to receive the training data and the set of classes; generating a predicted image by modeling and/or simulation with the deep learning model using the input image; assigning a class of the set of classes to a feature within the predicted image based on matching of the feature with a corresponding feature within the truth image; and generating, by modeling and/or simulation, a trained deep learning model by iteratively assigning weights using a loss function.
Type: Grant
Filed: February 15, 2019
Date of Patent: July 5, 2022
Assignee: ASML Netherlands B.V.
Inventors: Adrianus Cornelis Matheus Koopman, Scott Anderson Middlebrooks, Antoine Gaston Marie Kiers, Mark John Maslow
-
Patent number: 11373426
Abstract: A method for detecting key points of a skeleton, an apparatus, an electronic device, and a storage medium are provided. The method is implemented as follows. An original image is acquired; the original image includes a plurality of skeleton key points. Based on a pre-trained stacked hourglass network structure, skeleton key point identification is performed on the original image to obtain heat maps of the plurality of key points. The stacked hourglass network structure includes at least one hourglass network, which is configured to perform deep-layer feature learning on feature maps of the plurality of key points based on weight values corresponding to the feature maps.
Type: Grant
Filed: October 30, 2020
Date of Patent: June 28, 2022
Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
Inventors: Jili Gu, Lei Zhang, Wen Zheng
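Once the network emits per-joint heat maps, each key point location is conventionally decoded as the argmax of its map. A sketch of that decoding step only (the stacked hourglass network itself is not reproduced; shapes and names are illustrative):

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Decode (row, col) skeleton key point locations from per-joint heat maps.

    heatmaps: array of shape (num_joints, H, W). Each joint's location is
    taken as the argmax of its heat map -- the usual decoding for
    hourglass-style networks.
    """
    coords = []
    for hm in heatmaps:
        r, c = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((int(r), int(c)))
    return coords

hms = np.zeros((2, 4, 4))
hms[0, 1, 2] = 1.0   # joint 0 peaks at row 1, col 2
hms[1, 3, 0] = 1.0   # joint 1 peaks at row 3, col 0
pts = keypoints_from_heatmaps(hms)  # [(1, 2), (3, 0)]
```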
-
Patent number: 11373309
Abstract: A method of facilitating image analysis in pathology involves receiving a sample image representing a sample for analysis, the sample image including sample image elements; causing one or more functions to be applied to the sample image to determine a plurality of property specific confidence related scores, each associated with a sample image element and a respective sample property and representing a level of confidence that the associated element represents the associated sample property; sorting a set of elements based at least in part on the confidence related scores; producing signals for causing one or more of the set of elements to be displayed to a user in an order based on the sorting; for each of the one or more elements displayed, receiving user input; and causing the user input to be used to update the one or more functions. Other methods, systems, and computer-readable media are disclosed.
Type: Grant
Filed: October 8, 2020
Date of Patent: June 28, 2022
Assignee: Aiforia Technologies Oyj
Inventors: Juha Reunanen, Liisa-Maija Keinänen, Tuomas Ropponen
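The sort-then-review loop above can be sketched as a confidence-ordered queue. Showing least-confident elements first is one plausible reading of "displayed ... in an order based on the sorting" (the patent does not fix the direction); all names are illustrative.

```python
def review_order(elements, scores, ascending=True):
    """Order sample image elements for user review by confidence score.

    With ascending=True, the least-confident elements come first, which
    focuses the reviewer's corrections where the model is most unsure.
    This direction is an assumption, not specified by the patent.
    """
    paired = sorted(zip(scores, elements), key=lambda p: p[0],
                    reverse=not ascending)
    return [e for _, e in paired]

order = review_order(["cell_a", "cell_b", "cell_c"], [0.9, 0.2, 0.5])
# ["cell_b", "cell_c", "cell_a"] -- least confident first
```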
-
Patent number: 11366988
Abstract: This disclosure relates to a method and system for dynamically annotating data or validating annotated data. The method may include receiving input data comprising a plurality of input data points. The method may further include one of: a) generating a plurality of annotations for each of the plurality of input data points using at least one of a state-label mapping model and a comparative artificial neural network (ANN) model, or b) receiving the plurality of annotations for each of the plurality of input data points from an external device or from a user, and validating the plurality of annotations using at least one of the state-label mapping model and the comparative ANN model.
Type: Grant
Filed: July 11, 2019
Date of Patent: June 21, 2022
Assignee: Wipro Limited
Inventors: Ghulam Mohiuddin Khan, Deepanker Singh, Sethuraman Ulaganathan
-
Patent number: 11347974
Abstract: A method for automating performance evaluation of a test object detection system includes providing at least one frame of image data to the test object detection system, processing the image data via an image processor of the test object detection system, and receiving, from the test object detection system, a list of objects detected by the test object detection system in the at least one frame of image data. The frame of image data is provided to a validation object detection system, and a list of objects detected by the validation object detection system is received from the validation object detection system. The list of objects detected by the test object detection system is compared to the list of objects detected by the validation object detection system and discrepancies are determined between the lists. The determined discrepancies between the lists of objects detected are reported.
Type: Grant
Filed: August 20, 2020
Date of Patent: May 31, 2022
Assignee: MAGNA ELECTRONICS INC.
Inventors: Sai Sunil Charugundla Gangadhar, Navdeep Singh
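The list comparison step can be sketched as a set difference over object identities. Representing objects as (label, id) tuples is a simplifying assumption; a real system would first match detections between the two lists, e.g. by bounding-box overlap, before comparing labels.

```python
def compare_detections(test_objects, validation_objects):
    """Report discrepancies between two detectors' object lists for a frame.

    Objects are hypothetical (label, id) tuples; matching by identity
    stands in for the box-level matching a real pipeline would perform.
    """
    test_set, val_set = set(test_objects), set(validation_objects)
    return {
        "missed_by_test": sorted(val_set - test_set),
        "extra_in_test": sorted(test_set - val_set),
    }

report = compare_detections(
    [("car", 1), ("person", 2)],
    [("car", 1), ("person", 2), ("bicycle", 3)],
)
# report shows the bicycle missed by the test system, and nothing extra
```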
-
Patent number: 11348349
Abstract: A training data increment method, an electronic apparatus and a computer-readable medium are provided. The training data increment method is adapted for the electronic apparatus and includes the following steps. A training data set is obtained, wherein the training data set includes a first image and a second image. An incremental image is generated based on the first image and the second image. A deep learning model is trained based on the incremental image.
Type: Grant
Filed: August 18, 2020
Date of Patent: May 31, 2022
Assignee: Wistron Corporation
Inventors: Zhe-Yu Lin, Chih-Yi Chien, Kuan-I Chung
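One simple way to realize "an incremental image generated based on the first image and the second image" is a pixelwise blend in the style of mixup-type augmentation. The blending formula and mixing weight are assumptions for illustration, not necessarily the patented generation step.

```python
import numpy as np

def incremental_image(first, second, alpha=0.5):
    """Generate an incremental training image from two source images.

    A pixelwise convex combination (mixup-style). alpha is a hypothetical
    mixing weight; the patent does not specify how the images are combined.
    """
    return alpha * first + (1.0 - alpha) * second

a = np.full((2, 2), 0.0)   # stand-in for the first image
b = np.full((2, 2), 1.0)   # stand-in for the second image
mix = incremental_image(a, b, alpha=0.25)  # every pixel becomes 0.75
```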