Patents Examined by Xiao Liu
-
Patent number: 11551060
Abstract: The disclosed computer-implemented method may include generating a three-dimensional (3D) feature map for a digital image using a fully convolutional network (FCN). The 3D feature map may be configured to identify features of the digital image and identify an image region for each identified feature. The method may also include generating a region composition graph that includes the identified features and image regions. The region composition graph may be configured to model mutual dependencies between features of the 3D feature map. The method may further include performing a graph convolution on the region composition graph to determine a feature aesthetic value for each node according to the weightings in the node's weighted connecting segments, and calculating a weighted average for each node's feature aesthetic value to provide a combined level of aesthetic appeal for the digital image. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: November 7, 2019
Date of Patent: January 10, 2023
Assignee: Netflix, Inc.
Inventors: Dong Liu, Nagendra Kamath, Rohit Puri, Subhabrata Bhattacharya
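As an illustrative sketch only (not the patented FCN/graph-convolution pipeline), the final aggregation step the abstract describes — a weighted average of per-node aesthetic values — can be written as follows; the region scores and weights are invented for the example.

```python
# Aggregate per-region aesthetic scores into one image-level score via a
# weighted average, as in the abstract's final step. Values are hypothetical.

def combined_aesthetic_score(nodes):
    """nodes: list of (aesthetic_value, weight) pairs, one per graph node.

    Returns the weight-normalized average aesthetic value.
    """
    total_weight = sum(w for _, w in nodes)
    if total_weight == 0:
        return 0.0
    return sum(v * w for v, w in nodes) / total_weight

# Example: three image regions with hypothetical scores and edge-derived weights.
regions = [(0.8, 2.0), (0.5, 1.0), (0.9, 1.0)]
score = combined_aesthetic_score(regions)
```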
-
Patent number: 11550998
Abstract: There is provided a method and apparatus for generating a competition commentary based on artificial intelligence, and a storage medium. The method comprises: obtaining commentators' commentary texts and structured data of historical competitions; generating a commentating model according to the obtained information; and, during live broadcast of a competition, determining a corresponding commentary text according to the commentating model with respect to the structured data obtained each time.
Type: Grant
Filed: June 6, 2018
Date of Patent: January 10, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Jianqing Cui, Yingchao Shi, Hao Tian, Shiqi Zhao, Qiaoqiao She
-
Patent number: 11544935
Abstract: A system and method for risk object identification via causal inference that includes receiving at least one image of a driving scene of an ego vehicle and analyzing the at least one image to detect and track dynamic objects within the driving scene of the ego vehicle. The system and method also include implementing a mask to remove each of the dynamic objects captured within the at least one image. The system and method further include analyzing a level of change associated with a driving behavior with respect to a removal of each of the dynamic objects. At least one dynamic object is identified as a risk object that has a highest level of influence with respect to the driving behavior.
Type: Grant
Filed: June 30, 2020
Date of Patent: January 3, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yi-Ting Chen, Chengxi Li
-
Patent number: 11538252
Abstract: In step S11, a color image CIMG is acquired. In step S12, a distance image DIMG is acquired. In step S13, the color image CIMG is projected onto the distance image DIMG. An alignment of the color image CIMG and the distance image DIMG is performed prior to the projection of the color image CIMG. In step S14, it is determined whether or not a basic condition is satisfied. In step S15, it is determined whether or not a special condition is satisfied. If the judgement result of step S14 or step S15 is positive, then in step S16 a first data point and a second data point on the distance image DIMG are associated. If both of the judgement results of steps S14 and S15 are negative, data points are not associated in step S17.
Type: Grant
Filed: August 14, 2020
Date of Patent: December 27, 2022
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Hiroshi Sakamoto, Ryosuke Fukatani, Hideyuki Matsui, Junya Ueno, Kazuki Tamura
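The abstract's control flow from step S14 onward can be sketched as a minimal decision function — this is only an illustration of the branching logic, with the two condition checks left as placeholder booleans.

```python
# Associate a color-image data point with a distance-image data point only when
# the basic condition (S14) or the special condition (S15) holds; otherwise do
# not associate (S17). Point names and conditions are placeholders.

def associate(point_a, point_b, basic_ok, special_ok):
    """Return the associated pair if either condition is satisfied, else None."""
    if basic_ok or special_ok:
        return (point_a, point_b)   # step S16: associate first and second points
    return None                     # step S17: do not associate

pair = associate("p1", "p2", basic_ok=False, special_ok=True)
```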
-
Patent number: 11538170
Abstract: Methods and systems are provided for optimal segmentation of an image based on multiple segmentations. In particular, multiple segmentation methods can be combined by taking into account previous segmentations. For instance, an optimal segmentation can be generated by iteratively integrating a previous segmentation (e.g., using an image segmentation method) with a current segmentation (e.g., using the same or different image segmentation method). To allow for optimal segmentation of an image based on multiple segmentations, one or more neural networks can be used. For instance, a convolutional RNN can be used to maintain information related to one or more previous segmentations when transitioning from one segmentation method to the next. The convolutional RNN can combine the previous segmentation(s) with the current segmentation without requiring any information about the image segmentation method(s) used to generate the segmentations.
Type: Grant
Filed: April 3, 2020
Date of Patent: December 27, 2022
Assignee: ADOBE INC.
Inventors: Brian Lynn Price, Scott David Cohen, Henghui Ding
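As a much-simplified stand-in for the convolutional RNN the abstract describes, the iterative integration idea — blend each new segmentation into a running estimate without knowing which method produced it — can be sketched with a fixed blending weight; the masks and weight here are invented.

```python
# Iteratively blend each new segmentation into the running combined estimate.
# This fixed-weight average is only a sketch of "iterative integration"; the
# patented approach uses a learned convolutional RNN instead.

def integrate_segmentations(segmentations, alpha=0.5):
    """segmentations: list of per-pixel foreground probabilities (flat lists).

    Blends each successive segmentation into the running estimate.
    """
    combined = segmentations[0]
    for seg in segmentations[1:]:
        combined = [alpha * prev + (1 - alpha) * cur
                    for prev, cur in zip(combined, seg)]
    return combined

seg_a = [0.9, 0.1, 0.8]   # output of a hypothetical first method
seg_b = [0.7, 0.3, 0.6]   # output of a hypothetical second method
fused = integrate_segmentations([seg_a, seg_b])
```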
-
Patent number: 11538231
Abstract: In various examples, sensor data may be adjusted to represent a virtual field of view different from an actual field of view of the sensor, and the sensor data—with or without virtual adjustment—may be applied to a stereographic projection algorithm to generate a projected image. The projected image may then be applied to a machine learning model—such as a deep neural network (DNN)—to detect and/or classify features or objects represented therein.
Type: Grant
Filed: April 6, 2020
Date of Patent: December 27, 2022
Assignee: NVIDIA Corporation
Inventor: Karsten Patzwaldt
-
Patent number: 11538262
Abstract: Multiple field of view (FOV) systems are disclosed herein. An example system includes a bioptic barcode reader having a target imaging region. The bioptic barcode reader includes at least one imager having a first FOV and a second FOV and is configured to capture an image of a target object from each FOV. The example system includes one or more processors configured to receive the images and a trained object recognition model stored in memory communicatively coupled to the one or more processors. The memory includes instructions that, when executed, cause the one or more processors to analyze the images to identify at least a portion of a barcode and one or more features associated with the target object. The instructions further cause the one or more processors to determine a target object identification probability and to determine whether a predicted product identifies the target object.
Type: Grant
Filed: March 23, 2020
Date of Patent: December 27, 2022
Assignee: Zebra Technologies Corporation
Inventors: Edward Barkan, Mark Drzymala, Darran Michael Handshaw
-
Patent number: 11531110
Abstract: In one embodiment, a method for solution inference using neural networks in LiDAR localization includes constructing a cost volume in a solution space for a predicted pose of an autonomous driving vehicle (ADV), the cost volume including a number of sub volumes, each sub volume representing a matching cost between a keypoint from an online point cloud and a corresponding keypoint on a pre-built point cloud map. The method further includes regularizing the cost volume using convolutional neural networks (CNNs) to refine the matching costs; and inferring, from the regularized cost volume, an optimal offset of the predicted pose. The optimal offset can be used to determine a location of the ADV.
Type: Grant
Filed: January 30, 2019
Date of Patent: December 20, 2022
Assignees: BAIDU USA LLC, BAIDU.COM TIMES TECHNOLOGY (BEIJING) CO. LTD.
Inventors: Weixin Lu, Yao Zhou, Guowei Wan, Shenhua Hou, Shiyu Song
-
Patent number: 11532400
Abstract: A system, method, and computer readable media are provided for obtaining a first set of skin data from an image capture system including at least one ultraviolet (UV) image of a user's skin; performing a correction on the skin data using a second set of skin data associated with the user; and quantifying a plurality of skin parameters of the user's skin based on the first skin data, including quantifying a bacterial load. The bacterial load is quantified by applying a brightness filter to isolate portions of the at least one UV image containing fluorescence, applying a dust filter, identifying portions of the at least one UV image that contain fluorescence due to bacteria, and determining a quantity of bacterial load in the user's skin. A machine learning model is used to determine an output associated with a normal skin state of the user and a current skin state of the user.
Type: Grant
Filed: December 6, 2019
Date of Patent: December 20, 2022
Assignee: X Development LLC
Inventors: Anupama Thubagere Jagadeesh, Brian Lance Hie
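The brightness-filter step of the quantification can be illustrated with a minimal sketch: count the fraction of UV-image pixels whose brightness clears a threshold as a fluorescence proxy. The pixel values and threshold are invented, and the patented dust filter and bacteria identification steps are omitted.

```python
# Report the fraction of UV pixels passing a brightness threshold as a crude
# bacterial-load proxy. Threshold and pixel values are hypothetical; this is
# not the patented pipeline, which also applies a dust filter.

def bacterial_load_fraction(uv_pixels, brightness_threshold=200):
    """uv_pixels: flat list of 0-255 brightness values from the UV image."""
    if not uv_pixels:
        return 0.0
    fluorescent = sum(1 for p in uv_pixels if p >= brightness_threshold)
    return fluorescent / len(uv_pixels)

pixels = [10, 250, 230, 40]          # two of four pixels fluoresce
load = bacterial_load_fraction(pixels)
```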
-
Patent number: 11527057
Abstract: A license plate recognition system includes an image capturing module, a license plate detection module, a segment extraction module, a character classification module, and a character recognition module. The image capturing module is for capturing an image. The license plate detection module is for receiving the image and identifying a license plate in the image. The segment extraction module is for extracting a sequence of character segments on the license plate. The character classification module is for computing a probability of each possible character in each character segment. The character recognition module is for identifying permissible characters for each character segment according to a syntax of the sequence of character segments, and identifying a character having a highest probability among the permissible characters as a selected character for the character segment.
Type: Grant
Filed: September 30, 2020
Date of Patent: December 13, 2022
Assignee: REALTEK SINGAPORE PRIVATE LIMITED
Inventors: Tien Ping Chua, Chen-Feng Kuo, Ruchi Mangesh Dhamnaskar, Zhengyu Li, Sin Yi Heung
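The recognition module's selection rule — restrict each segment's candidates to those permitted by the plate syntax, then take the highest-probability permissible character — can be sketched directly; the probabilities and plate format below are invented.

```python
# For each character segment, keep only characters allowed by the syntax at
# that position, then pick the highest-probability survivor. The hypothetical
# syntax here is letter, letter, digit.

def recognize(segment_probs, permissible):
    """segment_probs: list of dicts mapping character -> probability.
    permissible: list of allowed-character strings, one per segment position.

    Returns the recognized string.
    """
    out = []
    for probs, allowed in zip(segment_probs, permissible):
        best = max((c for c in probs if c in allowed), key=lambda c: probs[c])
        out.append(best)
    return "".join(out)

probs = [{"A": 0.6, "4": 0.4}, {"B": 0.3, "8": 0.7}, {"1": 0.9, "I": 0.1}]
syntax = ["ABCDEFGHIJKLMNOPQRSTUVWXYZ", "ABCDEFGHIJKLMNOPQRSTUVWXYZ", "0123456789"]
plate = recognize(probs, syntax)   # "8" is excluded at position 2 by the syntax
```

Note how the syntax constraint overrides the raw classifier at position 2: "8" has the higher probability, but only "B" is permissible there.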
-
Patent number: 11507615
Abstract: A method and apparatus for image searching based on artificial intelligence (AI) are provided. The method includes obtaining first feature information by extracting features from an image based on a first neural network, obtaining second feature information corresponding to a target area of a query image by processing the first feature information based on a second neural network and at least two filters having different sizes, and identifying an image corresponding to the query image according to the second feature information.
Type: Grant
Filed: January 29, 2020
Date of Patent: November 22, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Zhonghua Luo, Jiahui Yuan, Wei Wen, Zuozhou Pan, Yuanyang Xue
-
Patent number: 11501452
Abstract: Techniques for detecting motion of a vehicle are disclosed. Optical flow techniques are applied to the entirety of the received images from an optical sensor mounted to a vehicle. Motion detection techniques are then imposed on the optical flow output to remove image portions that correspond to objects moving independent from the vehicle and determine the extent, if any, of movement by the vehicle from the remaining image portions. Motion detection can be performed via a machine learning classifier. In some aspects, motion can be detected by extracting the depth of received images in addition to optical flow. In additional or alternative aspects, the optical flow and/or motion detection techniques can be implemented by at least one artificial neural network.
Type: Grant
Filed: August 10, 2020
Date of Patent: November 15, 2022
Assignee: Honeywell International Inc.
Inventors: Vegnesh Jayaraman, Sai Krishnan Chandrasekar, Andrew Stewart, Vijay Venkataraman
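A hedged sketch of the decision logic described in the abstract: given a dense flow field and a mask of independently moving objects, ignore the masked pixels and declare ego-motion when the remaining mean flow magnitude exceeds a threshold. The flow values, mask, and threshold are invented; the actual system may use a learned classifier instead of a fixed threshold.

```python
# Decide whether the vehicle is moving from optical flow, after removing image
# portions that correspond to independently moving objects. All values are
# hypothetical; a simple threshold stands in for the classifier.

def vehicle_is_moving(flow, independent_mask, threshold=0.5):
    """flow: list of (dx, dy) per pixel; independent_mask: True where the pixel
    belongs to an independently moving object. Returns True if the mean flow
    magnitude over the remaining pixels exceeds the threshold."""
    kept = [(dx, dy) for (dx, dy), masked in zip(flow, independent_mask)
            if not masked]
    if not kept:
        return False
    mean_mag = sum((dx * dx + dy * dy) ** 0.5 for dx, dy in kept) / len(kept)
    return mean_mag > threshold

flow = [(1.0, 0.0), (0.9, 0.1), (5.0, 5.0)]   # last pixel: a passing object
mask = [False, False, True]                    # mask out the passing object
moving = vehicle_is_moving(flow, mask)
```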
-
Patent number: 11488382
Abstract: Various user-presence/absence recognition techniques based on deep learning are provided. More specifically, these user-presence/absence recognition techniques include building/training a CNN-based image recognition model including a user-presence/absence classifier based on training images collected from the user-seating area of a surgeon console under various clinically-relevant conditions/cases. The trained user-presence/absence classifier can then be used during teleoperation/surgical procedures to monitor/track users in the user-seating area of the surgeon console, and continuously classify the real-time video images of the user-seating area as either a user-presence state or a user-absence state. In some embodiments, the user-presence/absence classifier can be used to detect a user-switching event at the surgeon console when a second user is detected to have entered the user-seating area after a first user is detected to have exited the user-seating area.
Type: Grant
Filed: September 10, 2020
Date of Patent: November 1, 2022
Assignee: VERB SURGICAL INC.
Inventor: Meysam Torabi
-
Patent number: 11475681
Abstract: The present application discloses an image processing method, apparatus, electronic device and computer readable storage medium. The image processing method comprises detecting a text region in an image to be processed, and recognizing the text region to obtain a text recognition result. In this application, text recognition in the image to be processed is realized, the recognition manner for text in the image is simplified, and the recognition effect for the text is improved.
Type: Grant
Filed: December 3, 2019
Date of Patent: October 18, 2022
Inventors: Xiaobing Wang, Yingying Jiang, Xiangyu Zhu, Hao Guo, Yi Yu, Pingjun Li, Zhenbo Luo
-
Patent number: 11458987
Abstract: A system and method for predicting driving actions based on intent-aware driving models that include receiving at least one image of a driving scene of an ego vehicle. The system and method also include analyzing the at least one image to detect and track dynamic objects located within the driving scene and to detect and identify driving scene characteristics associated with the driving scene, and processing an ego-thing graph associated with the dynamic objects and an ego-stuff graph associated with the driving scene characteristics. The system and method further include predicting a driver stimulus action based on a fusion of representations of the ego-thing graph and the ego-stuff graph, and a driver intention action based on an intention representation associated with driving intentions of a driver of the ego vehicle.
Type: Grant
Filed: August 28, 2020
Date of Patent: October 4, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Chengxi Li, Yi-Ting Chen
-
Patent number: 11455788
Abstract: A method and apparatus for positioning a description statement in an image includes: analyzing a to-be-analyzed description statement and a to-be-analyzed image to obtain a plurality of statement attention weights of the to-be-analyzed description statement and a plurality of image attention weights of the to-be-analyzed image; obtaining a plurality of first matching scores based on the plurality of statement attention weights and a subject feature, a location feature and a relationship feature of the to-be-analyzed image; obtaining a second matching score between the to-be-analyzed description statement and the to-be-analyzed image based on the plurality of first matching scores and the plurality of image attention weights; and determining a positioning result of the to-be-analyzed description statement in the to-be-analyzed image based on the second matching score.
Type: Grant
Filed: March 24, 2020
Date of Patent: September 27, 2022
Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Xihui Liu, Jing Shao, Zihao Wang, Hongsheng Li, Xiaogang Wang
-
Patent number: 11443442
Abstract: A method, apparatus and computer program product are provided to localize data from at least one of two or more data sets based upon the registration of synthetic images representative of the two or more data sets. In the context of a method, first and second synthetic images are created from first and second data sets, respectively. In creating the first and second synthetic images, representations of one or more features from the first and second data sets are rasterized. The method also determines a transformation based upon a phase correlation between the first and second synthetic images. The transformation provides for improved localization of the data from at least one of the first or second data sets. The method also generates a report that defines the transformation.
Type: Grant
Filed: January 28, 2020
Date of Patent: September 13, 2022
Assignee: HERE GLOBAL B.V.
Inventors: Andrew Philip Lewis, Stacey Matthew Mott
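The phase-correlation step that produces the transformation can be illustrated in one dimension: normalize the cross-power spectrum of the two signals and locate the peak of its inverse FFT to recover the integer shift. The impulse "rasterized feature" signals below are synthetic stand-ins for the patented synthetic images.

```python
# 1-D phase correlation: the peak of the inverse FFT of the normalized
# cross-power spectrum gives the translation between two signals. The input
# signals here are toy stand-ins for rasterized feature images.
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking signal b onto signal a."""
    A = np.fft.fft(a)
    B = np.fft.fft(b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    correlation = np.fft.ifft(cross_power).real
    shift = int(np.argmax(correlation))
    if shift > len(a) // 2:                      # wrap large values to negatives
        shift -= len(a)
    return shift

a = np.zeros(16); a[3] = 1.0     # feature at position 3
b = np.zeros(16); b[6] = 1.0     # same feature shifted by +3
shift = phase_correlation_shift(b, a)
```

Real implementations (e.g. for 2-D map registration) extend the same idea with a 2-D FFT and subpixel peak interpolation.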
-
Patent number: 11443522
Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest confidence height prior and a highest confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest confidence orientation prior.
Type: Grant
Filed: December 2, 2019
Date of Patent: September 13, 2022
Assignee: Qualcomm Incorporated
Inventors: Bence Major, Daniel Hendricus Franciscus Fontijne, Ravi Teja Sukhavasi, Amin Ansari
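The prior-selection step can be sketched as follows: each feature-map pixel carries sets of width and height priors with confidences, and the detection output takes the highest-confidence member of each set independently. All prior values and confidences below are hypothetical.

```python
# Pick the detected box dimensions from separate width-prior and height-prior
# sets by taking the highest-confidence prior in each set. Values are invented.

def best_box(width_priors, height_priors):
    """Each argument: list of (prior_value, confidence) pairs.

    Returns (width, height) from the highest-confidence priors.
    """
    w = max(width_priors, key=lambda p: p[1])[0]
    h = max(height_priors, key=lambda p: p[1])[0]
    return w, h

widths = [(1.5, 0.2), (4.0, 0.7)]    # e.g. car-like vs truck-like widths
heights = [(1.4, 0.6), (3.0, 0.4)]
box = best_box(widths, heights)
```

Selecting width and height independently is what lets two small prior sets cover many aspect ratios without enumerating every (width, height) combination.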
-
Patent number: 11436743
Abstract: Systems, methods, and other embodiments described herein relate to semi-supervised training of a depth model using a neural camera model that is independent of a camera type. In one embodiment, a method includes acquiring training data including at least a pair of training images and depth data associated with the training images. The method includes training the depth model using the training data to generate a self-supervised loss from the pair of training images and a supervised loss from the depth data. Training the depth model includes learning the camera type by generating, using a ray surface model, a ray surface that approximates an image character of the training images as produced by a camera having the camera type. The method includes providing the depth model to infer depths from monocular images in a device.
Type: Grant
Filed: June 19, 2020
Date of Patent: September 6, 2022
Assignee: Toyota Research Institute, Inc.
Inventors: Vitor Guizilini, Igor Vasiljevic, Rares A. Ambrus, Sudeep Pillai, Adrien David Gaidon
-
Patent number: 11430202
Abstract: Optical character recognition (OCR) based systems and methods for extracting and automatically evaluating contextual and identification information and associated metadata from an image utilizing enhanced image processing techniques and image segmentation. A unique, comprehensive integration with an account provider system and other third party systems may be utilized to automate the execution of an action associated with an online account. The system may evaluate text extracted from a captured image utilizing machine learning processing to classify an image type for the captured image, and select an optical character recognition model based on the classified image type. The system may compare a data value extracted from the recognized text for a particular data type with an associated online account data value for the particular data type to evaluate whether to automatically execute an action associated with the online account linked to the image based on the data value comparison.
Type: Grant
Filed: June 29, 2020
Date of Patent: August 30, 2022
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Ian Whitestone, Brian Chun-Lai So, Sourabh Mittal
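The final comparison step can be sketched minimally: trigger the account action only when the OCR-extracted value for a data type matches the stored account value for that same type. The field names and values below are invented for illustration.

```python
# Compare an OCR-extracted field against the account's stored field for the
# same data type; the action executes only on a match. Names are hypothetical.

def should_execute(extracted, account, data_type):
    """extracted, account: dicts mapping data-type name -> value.

    Returns True if both hold the same non-missing value for data_type.
    """
    value = extracted.get(data_type)
    return value is not None and value == account.get(data_type)

extracted_fields = {"account_number": "12345678"}
account_record = {"account_number": "12345678", "balance": "50.00"}
execute = should_execute(extracted_fields, account_record, "account_number")
```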