Patents Examined by Leon Flores
-
Patent number: 11599748
Abstract: In some embodiments, a method can include capturing images of produce. The method can further include generating simulated images of produce based on the images of produce. The method can further include associating each image of produce from the images of produce and each simulated image of produce from the simulated images of produce with a category indicator, an organic type indicator, and a bag type indicator, to generate a training set. The method can further include training a machine learning model using the training set such that when the machine learning model is executed, the machine learning model receives an image and generates a predicted category indicator of the image, a predicted organic type indicator of the image, and a predicted bag type indicator of the image.
Type: Grant
Filed: December 18, 2020
Date of Patent: March 7, 2023
Assignee: Tiliter Pty Ltd.
Inventors: Christopher Bradley Rodney Sampson, Sufyan Asghar, Khai Van Do, Rui Dong
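The training-set construction this abstract describes, pairing real and simulated produce images with three indicators each, could be sketched as follows. All names (`Sample`, `build_training_set`, the example label values) are illustrative assumptions, not drawn from the patent:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    image: object   # pixel data or a handle to it (placeholder)
    category: int   # e.g. 0 = apple, 1 = banana
    organic: int    # 0 = conventional, 1 = organic
    bag: int        # 0 = unbagged, 1 = bagged

def build_training_set(real_images, simulated_images, label_fn):
    """Associate every real and simulated image with the three indicators."""
    samples = []
    for img in list(real_images) + list(simulated_images):
        category, organic, bag = label_fn(img)
        samples.append(Sample(img, category, organic, bag))
    return samples

# Usage with stub data standing in for captured and simulated images:
labels = {"real_apple": (0, 1, 0), "sim_banana": (1, 0, 1)}
train = build_training_set(["real_apple"], ["sim_banana"], lambda i: labels[i])
```

A multi-output classifier trained on such samples would then predict all three indicators from a single input image.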
-
Patent number: 11580838
Abstract: System and method for detection of pre-drowning and drowning events based on underwater images are disclosed.
Type: Grant
Filed: July 18, 2022
Date of Patent: February 14, 2023
Assignee: MAIGUARD AI DETECTION SYSTEMS LTD.
Inventors: Ilan Ben Gigi, Himant Gupta
-
Patent number: 11568682
Abstract: Techniques are provided for recognition of activity in a sequence of video image frames that include depth information. A methodology embodying the techniques includes segmenting each of the received image frames into multiple windows and generating spatio-temporal image cells from groupings of windows from a selected sub-sequence of the frames. The method also includes calculating a four-dimensional (4D) optical flow vector for each of the pixels of each of the image cells and calculating a three-dimensional (3D) angular representation from each of the optical flow vectors. The method further includes generating a classification feature for each of the image cells based on a histogram of the 3D angular representations of the pixels in that image cell. The classification features are then provided to a recognition classifier configured to recognize the type of activity depicted in the video sequence, based on the generated classification features.
Type: Grant
Filed: December 1, 2020
Date of Patent: January 31, 2023
Assignee: INTEL CORPORATION
Inventors: Shaopeng Tang, Anbang Yao, Yurong Chen
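The histogram-of-angles feature in this abstract could be sketched as below, simplified to 2D flow vectors and 1D angles (the patent works with 4D flow and 3D angular representations); the function name and bin count are assumptions:

```python
import math

def angle_histogram(flow_vectors, n_bins=8):
    """Bin 2D optical-flow vectors by direction into an n_bins histogram."""
    hist = [0] * n_bins
    for dx, dy in flow_vectors:
        theta = math.atan2(dy, dx) % (2 * math.pi)   # angle in [0, 2*pi)
        hist[min(int(theta / (2 * math.pi) * n_bins), n_bins - 1)] += 1
    return hist

# One spatio-temporal cell's per-pixel flow vectors -> one classification feature
feature = angle_histogram([(1, 0), (0, 1), (-1, 0), (1, 0)])
```

Each cell's histogram would then be concatenated into the feature set fed to the recognition classifier.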
-
Patent number: 11556744
Abstract: Aspects of the disclosure relate to training a labeling model to automatically generate labels for objects detected in a vehicle's environment. In this regard, one or more computing devices may receive sensor data corresponding to a series of frames perceived by the vehicle, each frame being captured at a different time point during a trip of the vehicle. The computing devices may also receive bounding boxes generated by a first labeling model for objects detected in the series of frames. The computing devices may receive user inputs including an adjustment to at least one of the bounding boxes, where the adjustment corrects a displacement of the at least one of the bounding boxes caused by a sensing inaccuracy. The computing devices may train a second labeling model using the sensor data, the bounding boxes, and the adjustment to increase accuracy of the second labeling model when automatically generating bounding boxes.
Type: Grant
Filed: December 9, 2020
Date of Patent: January 17, 2023
Assignee: Waymo LLC
Inventors: Aditya Joshi, Ingrid Fiedler, Lo Po Tsui
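The label-preparation step this abstract outlines, folding human corrections into the first model's auto-generated boxes before retraining, could look roughly like this; the function and the `(x, y, w, h)` box format are illustrative assumptions:

```python
def apply_adjustments(auto_boxes, adjustments):
    """Return training labels for the second model: the human-adjusted box
    where a correction exists, otherwise the first model's box."""
    corrected = dict(auto_boxes)   # frame_id -> (x, y, w, h) from the first model
    corrected.update(adjustments)  # human corrections take precedence
    return corrected

auto = {0: (10, 10, 4, 4), 1: (20, 20, 4, 4)}
# the user shifts frame 1's box to undo a sensing-induced displacement
fixed = apply_adjustments(auto, {1: (21, 19, 4, 4)})
```

The second labeling model would then be trained on the sensor data paired with `fixed` rather than the raw first-model output.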
-
Patent number: 11537882
Abstract: A machine learning apparatus according to an embodiment includes a feature extractor configured to extract features from an object region of an image, a label processor configured to create sentence label embeddings from a sentence label corresponding to the object region, a first training data creator to extract first sub-features from a plurality of first sub-regions created by partitioning the object region, add the sentence label embeddings to the extracted first sub-features, and add the first sub-features added with the sentence label embeddings to the features of the object region, a second training data creator to extract a plurality of second sub-regions along a bounding surface of the object region, create an attention matrix from the second sub-regions, and create a training data by applying the attention matrix to the features of the object region, and a trainer to train an object detection model using the training data.
Type: Grant
Filed: October 28, 2019
Date of Patent: December 27, 2022
Assignee: SAMSUNG SDS CO., LTD.
Inventors: Ji-Hoon Kim, Young-Joon Choi, Jong-Won Choi, Byoung-Jip Kim, Seong-Won Bak
-
Patent number: 11538250
Abstract: A system and method for self-learning operation of gate paddles is disclosed. Opening and closing of the gate paddles requires timing and other settings to avoid injury and fare evasion. Self-learning allows a machine learning model to adapt to new data dynamically. The new data captured at a fare gate improves the machine learning model, which can be shared with other similar fare gates within a transit system so that learning disseminates.
Type: Grant
Filed: January 19, 2021
Date of Patent: December 27, 2022
Assignee: Cubic Corporation
Inventors: Gavin R. Smith, Steffen Reymann, Jonathan Packham
-
Patent number: 11532158
Abstract: Preferred embodiments described herein relate to a pipeline framework that allows for customized analytic processes to be performed on multiple streams of videos. An analytic takes data as input and performs a set of operations that transform it into information. The methods and systems disclosed herein include a framework that allows users (1) to annotate and create variable datasets, (2) to train computer vision algorithms to create custom models to accomplish specific tasks, (3) to pipeline video data through various computer vision modules for preprocessing, pattern recognition, and statistical analytics to create custom analytics, and (4) to perform analysis using a scalable architecture that allows for running analytic pipelines on multiple streams of videos.
Type: Grant
Filed: October 15, 2020
Date of Patent: December 20, 2022
Assignee: UNIVERSITY OF HOUSTON SYSTEM
Inventors: Shishir K. Shah, Pranav Mantini
-
Patent number: 11532171
Abstract: A method and apparatus for determining spatial characteristics of three-dimensional objects is described. In an exemplary embodiment, the device receives a point cloud representation of a three-dimensional surface structure of a plurality of objects. The device may further generate a set of bins to represent the three-dimensional surface structure based on the point cloud representation, each bin corresponding to a spatial occupancy related to the point cloud representation, each bin including a respective type indicating a spatial relationship of the surface structures and a corresponding spatial occupancy of the bin. In addition, the device may encode the set of bins using a convolutional neural network. The device may further determine a classification for the spatial characteristic of the surface structures based on the convolutional neural network with the encoded set of bins.
Type: Grant
Filed: October 2, 2020
Date of Patent: December 20, 2022
Assignee: ANSYS, INC.
Inventors: Rishikesh Ranade, Jay Pathak
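The binning step this abstract describes could be sketched as a simple occupancy grid: points are quantized into voxel bins whose occupancy a CNN would then encode. The cell size and the occupancy-count representation are assumptions, not the patent's scheme:

```python
def bin_point_cloud(points, cell=1.0):
    """Map each (x, y, z) point to a voxel index; return occupancy counts."""
    bins = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        bins[key] = bins.get(key, 0) + 1
    return bins

# Three surface points; the first two fall into the same voxel
occ = bin_point_cloud([(0.2, 0.1, 0.9), (0.8, 0.5, 0.3), (1.5, 0.0, 0.0)])
```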
-
Patent number: 11531885
Abstract: Systems, device and techniques are disclosed for training data generation for visual search model training. A catalog including catalog entries which may include images of an item and data about the item may be received. Labels may be applied to the images of the items based on the data about the items. The images of the items may be sorted into clusters using cluster analysis on the labels. Each cluster may include labels as categories of the cluster. Additional images may be received based on searching for the categories. Generative adversarial network (GAN) training data sets may be generated from the images of the items, the additional images, and the categories. GANs may be trained with the GAN training data sets. The GANs may generate images including images of generated items, which may be replaced with images of items from the catalog entries to create feature model training images.
Type: Grant
Filed: October 21, 2019
Date of Patent: December 20, 2022
Assignee: Salesforce, Inc.
Inventors: Michael Sollami, Yang Zhang
-
Patent number: 11531842
Abstract: A method for image reconstruction and domain transfer through an invertible depth network is described. The method includes training a first invertible depth network model using a first image dataset corresponding to a first geographic region to estimate a first depth map. The method also includes retraining the first invertible depth network model using a second image dataset corresponding to a second geographic region to estimate a second depth map. The method further includes reconstructing, by the first invertible depth network model, a third image dataset based on the second depth map. The method also includes training a second invertible depth network model using the third image dataset corresponding to the first geographic region and the second geographic region to estimate a third depth map.
Type: Grant
Filed: May 20, 2020
Date of Patent: December 20, 2022
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Vitor Guizilini, Adrien David Gaidon
-
Patent number: 11531317
Abstract: Intelligence guided system and method for fruits and vegetables processing includes a conveyor for carrying produce, various image acquiring and processing hardware and software, water and air jets for cutting and controlling the position and orientation of the produce, and networking hardware and software, operating in synchronism in an efficient manner to attain speed and accuracy of the produce cutting and high-yield, low-waste produce processing. The 2nd generation strawberry decalyxing system (AVID2) uniquely utilizes a convolutional neural network (AVIDnet) supporting a discrimination network decision, specifically, on whether a strawberry is to be cut or rejected, and computing a multi-point cutline curvature to be cut along by a rapid robotic cutting tool.
Type: Grant
Filed: May 29, 2020
Date of Patent: December 20, 2022
Assignee: University of Maryland, College Park
Inventors: Yang Tao, Dongyi Wang, Robert Vinson, Xuemei Cheng, Maxwell Holmes, Gary E. Seibel
-
Patent number: 11526963
Abstract: Noise is adequately reduced irrespective of the content of an input image. In an embodiment of the present invention, an image processing apparatus that executes noise reduction processing of an image includes: a first estimation unit that estimates noise contained in the image; a second estimation unit that estimates an original image, which is the image from which the noise is removed; a noise reduction unit that performs the noise reduction processing on each of partial areas of the image by using the first estimation unit or the second estimation unit depending on the contents of the partial areas; and an integration unit that integrates the partial areas on which the noise reduction processing is performed.
Type: Grant
Filed: June 1, 2020
Date of Patent: December 13, 2022
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yoshinari Higaki
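The per-region dispatch this abstract describes could be sketched as below: each partial area is denoised with one of two estimators depending on its content, then the results are reassembled. The flatness predicate and the stub estimators are illustrative assumptions, not the patent's selection rule:

```python
def denoise(patches, estimate_noise, estimate_original, is_flat):
    """Denoise each partial area with the estimator suited to its content,
    then integrate (here: concatenate) the processed areas."""
    out = []
    for p in patches:
        # noise-estimation path for flat areas, clean-image path for textured ones
        out.append(estimate_noise(p) if is_flat(p) else estimate_original(p))
    return out

# Stub usage: patches are scalars; the stubs return the denoised value directly
result = denoise([1, 5],
                 estimate_noise=lambda p: 0,     # stub: flat patch -> noise removed
                 estimate_original=lambda p: p,  # stub: textured patch -> predicted clean
                 is_flat=lambda p: p < 3)
```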
-
Patent number: 11526750
Abstract: An automated predictive analytics system disclosed herein provides a novel technique for industry classification. Leveraging a specific API to construct a database of companies labeled with the industries to which they belong, the automated predictive analytics system trains a deep neural network to predict the industries of novel companies. The automated predictive analytics system examines the capacity of the model to predict six-digit NAICS codes, as well as the ability of the model architecture to adapt to other industry segmentation schemas. Additionally, the automated predictive analytics system investigates the ability of the model to generalize despite the presence of noise in the labels in the training set. Finally, the automated predictive analytics system explores the possibility of increasing predictive precision by thresholding based on the confidence scores that the model outputs along with its predictions.
Type: Grant
Filed: October 29, 2019
Date of Patent: December 13, 2022
Assignee: ZOOMINFO APOLLO LLC
Inventors: Hua Gao, Amit Rai, Yi Jin, Rakesh Gowda, Joseph James Kardwell
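The confidence-thresholding idea in the final sentence could be sketched as follows: a predicted NAICS code is kept only when the model's confidence clears a threshold, trading coverage for precision. The function name, threshold value, and example codes are illustrative:

```python
def threshold_predictions(preds, tau=0.8):
    """preds: list of (naics_code, confidence) pairs from the model.
    Return only the codes whose confidence meets the threshold tau."""
    return [code for code, conf in preds if conf >= tau]

# A high-confidence and a low-confidence prediction; only the first survives
accepted = threshold_predictions([("541511", 0.92), ("722511", 0.55)])
```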
-
Patent number: 11526723
Abstract: A pixel feature vector extraction system for extracting multi-scale features contains a cellular neural networks (CNN) based integrated circuit (IC) for extracting pixel feature vector out of input imagery data by performing convolution operations using pre-trained filter coefficients of ordered convolutional layers in a deep learning model. The ordered convolutional layers are organized in a number of groups with each group followed by a pooling layer. Each group is configured for a different size of feature map. Pixel feature vector contains a combination of feature maps from at least two groups, for example, concatenation of the feature maps. The first group of the at least two groups contains the largest size of the feature maps amongst all of the at least two groups. Feature maps of the remaining of the at least two groups are modified to match the size of the feature map of the first group.
Type: Grant
Filed: July 9, 2019
Date of Patent: December 13, 2022
Assignee: GYRFALCON TECHNOLOGY INC.
Inventors: Lin Yang, Baohua Sun
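The multi-scale combination this abstract describes could be sketched as below: a smaller group's feature map is resized to match the first (largest) group's map, then the two are concatenated per pixel. Nearest-neighbour upsampling and single-channel maps are simplifying assumptions; the patent does not specify this resizing method:

```python
def upsample(fmap, factor):
    """Nearest-neighbour upsample a 2D feature map by an integer factor."""
    return [[v for v in row for _ in range(factor)]
            for row in fmap for _ in range(factor)]

def concat_maps(large, small):
    """Resize `small` to `large`'s size, then concatenate channel-wise."""
    f = len(large) // len(small)
    up = upsample(small, f)
    # per-pixel channel concatenation, represented here as tuples
    return [[(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(large, up)]

# 2x2 map from the first group, 1x1 map from a deeper group
merged = concat_maps([[1, 2], [3, 4]], [[9]])
```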
-
Patent number: 11521412
Abstract: A method and system may use machine learning analysis of audio data to automatically identify a user's biometric characteristics. A user's client computing device may capture audio of the user. Feature data may be extracted from the audio and applied to statistical models for determining several biometric characteristics. The determined biometric characteristic values may be used to identify individual health scores and the individual health scores may be combined to generate an overall health score and longevity metric. An indication of the user's biometric characteristics which may include the overall health score and longevity metric may be displayed on the user's client computing device.
Type: Grant
Filed: November 2, 2020
Date of Patent: December 6, 2022
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Dingchao Zhang, Michael Bernico, Peter Laube, Utku Pamuksuz, Jeffrey S. Myers, Marigona Bokshi-Drotar, Edward W. Breitweiser
-
Patent number: 11521379
Abstract: A method for flood disaster monitoring and disaster analysis based on vision transformer is provided. It includes: step (1), constructing a bi-temporal image change detection model based on vision transformer; step (2), selecting bi-temporal remote sensing images to make flood disaster labels; and step (3), performing flood monitoring and disaster analysis according to the bi-temporal image change detection model constructed in step (1). By combining the bi-temporal image change detection model, based on an advanced vision transformer in deep learning, with radar data, which is unaffected by time and weather and has strong penetration ability, data from when floods occur can be obtained and recognition accuracy is improved.
Type: Grant
Filed: July 4, 2022
Date of Patent: December 6, 2022
Assignees: NANJING UNIVERSITY OF INFORMATION SCI. & TECH., NATIONAL CLIMATE CENTER
Inventors: Guojie Wang, Buda Su, Yanjun Wang, Tong Jiang, Aiqing Feng, Lijuan Miao, Mingyue Lu, Zhen Dong
-
Patent number: 11513205
Abstract: A system associated with predicting authentication of a device user based on a joint features representation related to an echo-signature associated with a device is disclosed. The system performs operations that include emitting acoustic signals in response to a request for processing of a profile associated with the device. The system receives a set of echo acoustic signals that are tailored based on reflection of the acoustic signals from unique contours of one or more depth portions associated with the user relative to a discrete epoch. One or more region segments associated with the echo acoustic signals are extracted in order to train a classification model. A classification model is generated based on the one or more region segments as extracted. A joint features representation based on the classification model is generated. A vector-based classification model is used in the prediction of the joint features representation.
Type: Grant
Filed: October 29, 2018
Date of Patent: November 29, 2022
Assignee: THE RESEARCH FOUNDATION FOR THE STATE UNIVERSITY OF NEW YORK
Inventors: Bing Zhou, Fan Ye
-
Patent number: 11514661
Abstract: A method for pattern recognition may be provided, comprising: receiving data; processing the data with a trained convolutional neural network so as to recognize a pattern in the data, wherein the convolutional neural network comprises at least: an input layer, at least one convolutional layer, at least one batch normalization layer, at least one activation function layer, and an output layer; and wherein processing the data with a trained convolutional neural network so as to recognize a pattern in the data comprises: processing values outputted by a batch normalization layer so that the histogram of the processed values is flatter than the histogram of the values, and outputting the processed values to an activation function layer. A corresponding apparatus and system for pattern recognition, as well as a computer readable medium, a method for implementing a convolutional neural network and a convolutional neural network are also provided.
Type: Grant
Filed: August 21, 2017
Date of Patent: November 29, 2022
Assignee: Nokia Technologies Oy
Inventor: Jiale Cao
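One simple way to make a histogram of values flatter, as this abstract describes for batch-normalization outputs, is a rank transform (a form of histogram equalization). This is an illustrative stand-in; the patent's exact flattening mapping may differ:

```python
def flatten_histogram(values):
    """Map each value to its normalized rank, yielding a near-uniform
    (flat) histogram while preserving the ordering of the values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    for rank, i in enumerate(order):
        out[i] = rank / (len(values) - 1)   # spread uniformly over [0, 1]
    return out

# Heavily skewed batch-norm outputs become evenly spread ranks
flat = flatten_histogram([0.1, 5.0, 0.2, 0.15])
```

The flattened values would then be passed on to the activation function layer.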
-
Patent number: 11514714
Abstract: A method includes generating a first representative vector based on a first set of vectors, wherein the first representative vector is associated with the first set of vectors in a collection of representative vectors, and the first set of vectors comprises a set of vector values within a latent space. The method further includes generating a second representative vector based on a second set of vectors, wherein the second representative vector is associated with the second set of vectors in the collection of representative vectors. The method further includes determining a latent space distance based on the first and second sets of vectors. The method further includes determining whether the latent space distance satisfies a threshold. In response to a determination that the latent space distance satisfies the threshold, the method further includes associating a combined representative vector with the first set of vectors and the second set of vectors and removing the first and second representative vectors from the collection of representative vectors.
Type: Grant
Filed: April 8, 2022
Date of Patent: November 29, 2022
Assignee: Verkada Inc.
Inventors: Kiumars Soltani, Yuewei Wang, Kabir Chhabra, Jose M. Giron Nanne, Yunchao Gong
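The merge step this abstract outlines could be sketched as follows: when two representative vectors fall within a latent-space distance threshold, they are removed and replaced by one combined representative. Euclidean distance and mean-combining are assumptions, not the patent's choices:

```python
import math

def maybe_merge(collection, a, b, threshold):
    """collection: dict mapping name -> representative vector.
    Merge entries a and b into one combined representative if their
    latent-space distance satisfies the threshold."""
    va, vb = collection[a], collection[b]
    if math.dist(va, vb) <= threshold:
        combined = [(x + y) / 2 for x, y in zip(va, vb)]  # element-wise mean
        del collection[a], collection[b]
        collection[a + "+" + b] = combined
    return collection

# Two close representatives (distance 0.5) collapse into one
reps = maybe_merge({"r1": [0.0, 0.0], "r2": [0.3, 0.4]}, "r1", "r2", 1.0)
```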
-
Patent number: 11508185
Abstract: A method for collecting facial recognition data includes: locating a first face area from an Nth image frame; extracting a first facial feature defined with S factors; acquiring a second facial feature extracted from a second face area shown in an (N−1)th image frame at a corresponding position; determining whether the first face area is relevant to the second face area, and assigning to the first face area a tracing code; determining whether to store the first facial feature according to a similarity level of the first facial feature to existent data; storing and inputting the first facial feature into a neural network to generate an adjusted feature defined with T factors if the similarity level of the first facial feature to the existent data is not lower than a preset level, wherein T is not smaller than S; acquiring adjusted data generated by inputting the existent data into the neural network; determining whether the person is a registered one according to a similarity level of the adjusted feature to a
Type: Grant
Filed: November 25, 2020
Date of Patent: November 22, 2022
Assignee: QNAP SYSTEMS, INC.
Inventors: Chun-Yen Chen, Chan-Cheng Liu, Ting-An Lin