Patents by Inventor Amirhossein HABIBIAN
Amirhossein HABIBIAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11842540
Abstract: Systems and techniques are provided for performing holistic video understanding. For example, a process can include obtaining a first video and determining, using a machine learning model decision engine, a first machine learning model from a set of machine learning models to use for processing at least a portion of the first video. The first machine learning model can be determined based on one or more characteristics of at least the portion of the first video. The process can include processing at least the portion of the first video using the first machine learning model.
Type: Grant
Filed: March 31, 2021
Date of Patent: December 12, 2023
Assignee: QUALCOMM Incorporated
Inventors: Haitam Ben Yahia, Amir Ghodrati, Mihir Jain, Amirhossein Habibian
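The core idea, a decision engine that routes a video to one of several models based on cheap clip characteristics, can be sketched as follows. This is an illustrative toy, not the patented method: the "models" are stub functions, and the motion statistic and threshold are assumptions.

```python
# Toy "models": stub functions standing in for full networks.
def heavy_3d_model(clip):   # e.g. for high-motion clips
    return "action"

def light_2d_model(clip):   # e.g. for mostly static clips
    return "scene"

def mean_abs_frame_diff(clip):
    """Cheap clip characteristic: mean absolute difference between consecutive frames."""
    diffs = [sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
             for prev, cur in zip(clip, clip[1:])]
    return sum(diffs) / len(diffs)

def decision_engine(clip, threshold=10.0):
    """Pick a model from the set based on a characteristic of the clip (here, motion)."""
    return heavy_3d_model if mean_abs_frame_diff(clip) > threshold else light_2d_model

static_clip = [[100, 100, 100], [101, 100, 100], [100, 100, 101]]
moving_clip = [[0, 0, 0], [50, 60, 40], [120, 110, 130]]
```

Under these assumptions the static clip is routed to the cheap model and the high-motion clip to the expensive one; the real decision engine is itself learned rather than a fixed threshold.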
-
Publication number: 20230336754
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for compressing video content using deep generative models. One example method generally includes receiving video content for compression. The received video content is generally encoded into a latent code space through an encoder, which may be implemented by a first artificial neural network. A compressed version of the encoded video content is generally generated through a trained probabilistic model, which may be implemented by a second artificial neural network, and output for transmission.
Type: Application
Filed: June 19, 2023
Publication date: October 19, 2023
Inventors: Amirhossein HABIBIAN, Ties Jehan VAN ROZENDAAL, Taco Sebastiaan COHEN
-
Patent number: 11729406
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for compressing video content using deep generative models. One example method generally includes receiving video content for compression. The received video content is generally encoded into a latent code space through an encoder, which may be implemented by a first artificial neural network. A compressed version of the encoded video content is generally generated through a trained probabilistic model, which may be implemented by a second artificial neural network, and output for transmission.
Type: Grant
Filed: March 21, 2020
Date of Patent: August 15, 2023
Assignee: QUALCOMM Incorporated
Inventors: Amirhossein Habibian, Ties Jehan Van Rozendaal, Taco Sebastiaan Cohen
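The pipeline described here, encoder to latent code space, then a trained probabilistic model feeding an entropy coder, can be sketched with toy stand-ins. Everything below is illustrative: the quantizer plays the encoder, a fixed probability table plays the trained model, and the Shannon code length stands in for the actual entropy coder's output size.

```python
import math

def encode(values, step=10):
    """Toy encoder: quantize each value into a discrete latent symbol."""
    return [round(v / step) for v in values]

# Stand-in for a trained probabilistic model over latent symbols:
# the probability table an entropy coder would consume.
probs = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}

def ideal_code_length_bits(symbols, probs):
    """Shannon-optimal length an entropy coder approaches: -sum log2 p(s)."""
    return sum(-math.log2(probs[s]) for s in symbols)

latent = encode([2, 4, 11, 19, 28])          # quantized latent symbols
bits = ideal_code_length_bits(latent, probs)  # size of the compressed version
```

The point of learning the probability model with a second network is that better probability estimates directly shrink this code length; symbols the model predicts well cost fewer bits.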
-
Publication number: 20230154169
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for processing video content using an artificial neural network. An example method generally includes receiving a video data stream including at least a first frame and a second frame. First features are extracted from the first frame using a teacher neural network. A difference between the first frame and the second frame is determined. Second features are extracted from at least the difference between the first frame and the second frame using a student neural network. A feature map for the second frame is generated based on a summation of the first features and the second features. An inference is generated for at least the second frame of the video data stream based on the generated feature map for the second frame.
Type: Application
Filed: November 10, 2022
Publication date: May 18, 2023
Inventors: Amirhossein HABIBIAN, Davide ABATI, Haitam BEN YAHIA
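A minimal sketch of the teacher/student summation, with linear stub extractors in place of the real networks (all functions are illustrative assumptions):

```python
def teacher_features(frame):
    """Stand-in for the large teacher network: run once, on the first frame."""
    return [2.0 * x for x in frame]

def student_features(delta):
    """Stand-in for the small student network, run only on the frame difference."""
    return [2.0 * x for x in delta]

def features_for_second_frame(frame1, frame2):
    diff = [b - a for a, b in zip(frame1, frame2)]  # difference between the frames
    f1 = teacher_features(frame1)                   # first features (teacher)
    f2 = student_features(diff)                     # second features (student)
    return [a + b for a, b in zip(f1, f2)]          # summed feature map for frame 2

f = features_for_second_frame([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])
```

With a linear extractor the summation exactly recovers the teacher's features on the second frame, which is the intuition behind training a cheap student to handle only the change between frames.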
-
Publication number: 20230154157
Abstract: A processor-implemented method of video processing includes receiving, via an artificial neural network (ANN), a video including a first frame and a second frame. A saliency map is generated based on the first frame of the video. The second frame of the video is sampled based on the saliency map. A first portion of the second frame is sampled at a first resolution and a second portion of the second frame is sampled at a second resolution. The first resolution is different from the second resolution. A resampled second frame is generated based on the sampling of the second frame. The resampled second frame is processed to determine an inference associated with the video.
Type: Application
Filed: October 25, 2022
Publication date: May 18, 2023
Inventors: Babak EHTESHAMI BEJNORDI, Amir GHODRATI, Fatih Murat PORIKLI, Amirhossein HABIBIAN
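The mixed-resolution sampling can be illustrated in one dimension. This is a toy, assumed version of the idea: salient pixels keep full resolution, the rest are subsampled, and the threshold is arbitrary.

```python
def downsample(row, factor):
    """Keep every `factor`-th pixel: a crude lower-resolution sampling."""
    return row[::factor]

def resample_frame(frame_row, saliency_row, threshold=0.5):
    """Sample the salient portion at full resolution, the rest at half resolution."""
    salient = [p for p, s in zip(frame_row, saliency_row) if s >= threshold]
    non_salient = [p for p, s in zip(frame_row, saliency_row) if s < threshold]
    return salient + downsample(non_salient, 2)

row      = [10, 20, 30, 40, 50, 60]
saliency = [0.9, 0.8, 0.1, 0.2, 0.1, 0.3]   # saliency from the previous frame
resampled = resample_frame(row, saliency)
```

The resampled frame is smaller than the original, so the downstream inference network does less work while the salient region keeps its detail.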
-
Publication number: 20230090941
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for processing a video stream using a machine learning model. An example method generally includes generating a first group of tokens from a first frame of the video stream and a second group of tokens from a second frame of the video stream. A first set of tokens associated with features to be reused from the first frame and a second set of tokens associated with features to be computed from the second frame are identified based on a comparison of tokens from the first group of tokens to corresponding tokens in the second group of tokens. A feature output is generated for portions of the second frame corresponding to the second set of tokens. Features associated with the first set of tokens are combined with the generated feature output into a representation of the second frame.
Type: Application
Filed: September 20, 2022
Publication date: March 23, 2023
Inventors: Yawei LI, Bert MOONS, Tijmen Pieter Frederik BLANKEVOORT, Amirhossein HABIBIAN, Babak EHTESHAMI BEJNORDI
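A minimal sketch of the token-comparison step, with an assumed L1 distance and threshold (the real method compares learned tokens inside a transformer; every name here is illustrative):

```python
def token_distance(a, b):
    """L1 distance between corresponding tokens of the two frames."""
    return sum(abs(x - y) for x, y in zip(a, b))

def process_frame(tokens_prev, feats_prev, tokens_cur, compute_feature, tau=1.0):
    """Reuse cached features for tokens that barely changed; recompute the rest."""
    out, recomputed = [], 0
    for t_prev, f_prev, t_cur in zip(tokens_prev, feats_prev, tokens_cur):
        if token_distance(t_prev, t_cur) <= tau:
            out.append(f_prev)                   # first set: reuse from frame 1
        else:
            out.append(compute_feature(t_cur))   # second set: compute for frame 2
            recomputed += 1
    return out, recomputed

expensive = lambda tok: [x * 10 for x in tok]    # stand-in for the real layer
tokens1 = [[1, 1], [5, 5], [9, 9]]
feats1  = [expensive(t) for t in tokens1]
tokens2 = [[1, 1], [7, 5], [9, 9]]               # only the middle token changed
feats2, n = process_frame(tokens1, feats1, tokens2, expensive)
```

Only the changed token is recomputed; the combined output still covers the whole second frame.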
-
Patent number: 11600007
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for predicting subject motion using probabilistic models. One example method generally includes receiving training data comprising a set of subject pose trees. The set of subject pose trees comprises a plurality of subsets of subject pose trees associated with an image in a sequence of images, and each subject pose tree in the subset indicates a location along an axis of the image at which each of a plurality of joints of a subject is located. The received training data may be processed in a convolutional neural network to generate a trained probabilistic model for predicting joint distribution and subject motion based on density estimation. The trained probabilistic model may be deployed to a computer vision system and configured to generate a probability distribution for the location of each joint along the axis.
Type: Grant
Filed: February 25, 2021
Date of Patent: March 7, 2023
Assignee: Qualcomm Incorporated
Inventors: Mohammad Sadegh Ali Akbarian, Amirhossein Habibian, Koen Erik Adriaan Van De Sande
-
Publication number: 20220360794
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for compressing video content using deep generative models. One example method generally includes receiving video content for compression. The received video content is generally encoded into a latent code space through an auto-encoder, which may be implemented by a first artificial neural network. A compressed version of the encoded video content is generally generated through a trained probabilistic model, which may be implemented by a second artificial neural network, and output for transmission.
Type: Application
Filed: July 11, 2022
Publication date: November 10, 2022
Inventors: Amirhossein HABIBIAN, Taco Sebastiaan COHEN
-
Publication number: 20220318553
Abstract: Systems and techniques are provided for performing holistic video understanding. For example, a process can include obtaining a first video and determining, using a machine learning model decision engine, a first machine learning model from a set of machine learning models to use for processing at least a portion of the first video. The first machine learning model can be determined based on one or more characteristics of at least the portion of the first video. The process can include processing at least the portion of the first video using the first machine learning model.
Type: Application
Filed: March 31, 2021
Publication date: October 6, 2022
Inventors: Haitam BEN YAHIA, Amir GHODRATI, Mihir JAIN, Amirhossein HABIBIAN
-
Publication number: 20220301311
Abstract: A processor-implemented method for processing a video includes receiving the video as an input at an artificial neural network (ANN). The video includes a sequence of frames. A set of features of a current frame of the video and a prior frame of the video are extracted. The set of features includes a set of support features for a set of pixels of the prior frame to be aligned with a set of reference features of the current frame. A similarity between a support feature for each pixel in the set of pixels of the set of support features of the prior frame and a corresponding reference feature of the current frame is computed. An attention map is generated based on the similarity. An output including a reconstruction of the current frame is generated based on the attention map.
Type: Application
Filed: March 16, 2022
Publication date: September 22, 2022
Inventors: Davide ABATI, Amirhossein HABIBIAN, Amir GHODRATI
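The similarity-to-attention step can be sketched for a single reference feature. This is an assumed scalar toy of the mechanism: a negative absolute difference plays the similarity score, and a softmax over the support features produces one row of the attention map.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(reference, supports):
    """Weight prior-frame support features by their similarity to the reference."""
    sims = [-abs(reference - s) for s in supports]  # higher = more similar
    weights = softmax(sims)                         # one row of the attention map
    value = sum(w * s for w, s in zip(weights, supports))
    return value, weights

value, attention_row = attend(1.0, [1.0, 5.0, 9.0])
```

The attention row sums to one and concentrates on the most similar support feature, so the reconstruction draws mostly on the prior-frame pixel that best matches the current one.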
-
Patent number: 11388416
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for compressing video content using deep generative models. One example method generally includes receiving video content for compression. The received video content is generally encoded into a latent code space through an auto-encoder, which may be implemented by a first artificial neural network. A compressed version of the encoded video content is generally generated through a trained probabilistic model, which may be implemented by a second artificial neural network, and output for transmission.
Type: Grant
Filed: March 21, 2019
Date of Patent: July 12, 2022
Assignee: Qualcomm Incorporated
Inventors: Amirhossein Habibian, Taco Sebastiaan Cohen
-
Publication number: 20220159278
Abstract: A method for video processing via an artificial neural network includes receiving a video stream as an input at the artificial neural network. A residual is computed based on a difference between a first feature of a current frame of the video stream and a second feature of a previous frame of the video stream. One or more portions of the current frame of the video stream are processed based on the residual. Additionally, processing is skipped for one or more portions of the current frame of the video stream based on the residual.
Type: Application
Filed: November 16, 2021
Publication date: May 19, 2022
Inventors: Amirhossein HABIBIAN, Davide ABATI, Babak EHTESHAMI BEJNORDI
-
Publication number: 20220157045
Abstract: Certain aspects of the present disclosure provide techniques for processing with an auto-exiting machine learning model architecture, including processing input data in a first portion of a classification model to generate first intermediate activation data; providing the first intermediate activation data to a first gate; making a determination by the first gate whether or not to exit processing by the classification model; and generating a classification result from one of a plurality of classifiers of the classification model.
Type: Application
Filed: November 15, 2021
Publication date: May 19, 2022
Inventors: Babak EHTESHAMI BEJNORDI, Amirhossein HABIBIAN, Fatih Murat PORIKLI, Amir GHODRATI
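The gated early-exit flow can be sketched with scalar stand-ins. Everything here is illustrative: the two "portions" of the model are trivial functions, and the gate is a fixed confidence threshold rather than a learned one.

```python
def stage1(x):
    """First portion of the classification model; its output is the intermediate activation."""
    return x * 2

def gate(activation, confidence_threshold=10):
    """First gate: decide whether the intermediate activation is confident enough to exit."""
    return abs(activation) >= confidence_threshold

def early_classifier(activation):
    return "positive" if activation > 0 else "negative"

def stage2(activation):
    """Remaining, more expensive portion of the model."""
    return activation * 3

def full_classifier(activation):
    return "positive" if activation > 0 else "negative"

def classify(x):
    a1 = stage1(x)
    if gate(a1):                          # exit early on easy inputs
        return early_classifier(a1), "early"
    return full_classifier(stage2(a1)), "late"

easy = classify(20)
hard = classify(1)
```

Easy inputs exit at the first classifier and never pay for the rest of the network; hard inputs fall through to the full classifier.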
-
Patent number: 11308350
Abstract: An artificial neural network for learning to track a target across a sequence of frames includes a representation network configured to extract a target region representation from a first frame and a search region representation from a subsequent frame. The artificial neural network also includes a cross-correlation layer configured to convolve the extracted target region representation with the extracted search region representation to determine a cross-correlation map. The artificial neural network further includes a loss layer configured to compare the cross-correlation map with a ground truth cross-correlation map to determine a loss value and to back propagate the loss value into the artificial neural network to update filter weights of the artificial neural network.
Type: Grant
Filed: September 18, 2017
Date of Patent: April 19, 2022
Assignee: QUALCOMM Incorporated
Inventors: Amirhossein Habibian, Cornelis Gerardus Maria Snoek
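The cross-correlation layer itself is easy to demonstrate in one dimension: slide the target representation over the search representation and take dot products. The 1-D vectors below are illustrative stand-ins for the learned feature maps.

```python
def cross_correlate(search, template):
    """Convolve the target template over the search region (valid positions only)."""
    k = len(template)
    return [sum(s * t for s, t in zip(search[i:i + k], template))
            for i in range(len(search) - k + 1)]

template = [1, 2, 1]                   # target region representation (first frame)
search   = [0, 0, 1, 2, 1, 0, 0]       # search region representation (next frame)
corr_map = cross_correlate(search, template)
best = corr_map.index(max(corr_map))   # peak of the cross-correlation map
```

The peak of the map marks where the target reappears in the search region; training compares this map against a ground-truth map and backpropagates the loss into the representation network.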
-
Publication number: 20220058452
Abstract: Systems, methods, and non-transitory media are provided for providing spatiotemporal recycling networks (e.g., for video segmentation). For example, a method can include obtaining video data including a current frame and one or more reference frames. The method can include determining, based on a comparison of the current frame and the one or more reference frames, a difference between the current frame and the one or more reference frames. Based on the difference being below a threshold, the method can include performing semantic segmentation of the current frame using a first neural network. The semantic segmentation can be performed based on higher-spatial-resolution features extracted from the current frame by the first neural network and lower-resolution features extracted from the one or more reference frames by a second neural network. The first neural network has a smaller structure and/or a lower processing cost than the second neural network.
Type: Application
Filed: August 23, 2021
Publication date: February 24, 2022
Inventors: Yizhe ZHANG, Amirhossein HABIBIAN, Fatih Murat PORIKLI
-
Publication number: 20210183073
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for predicting subject motion using probabilistic models. One example method generally includes receiving training data comprising a set of subject pose trees. The set of subject pose trees comprises a plurality of subsets of subject pose trees associated with an image in a sequence of images, and each subject pose tree in the subset indicates a location along an axis of the image at which each of a plurality of joints of a subject is located. The received training data may be processed in a convolutional neural network to generate a trained probabilistic model for predicting joint distribution and subject motion based on density estimation. The trained probabilistic model may be deployed to a computer vision system and configured to generate a probability distribution for the location of each joint along the axis.
Type: Application
Filed: February 25, 2021
Publication date: June 17, 2021
Inventors: Mohammad Sadegh ALI AKBARIAN, Amirhossein HABIBIAN, Koen Erik Adriaan VAN DE SANDE
-
Patent number: 10964033
Abstract: A visual tracker may track an object by identifying the object in a frame; the visual tracker may identify the object in the frame within a search region. The search region may be provided by a motion modeling system that independently models the motion of the object and the motion of the camera. For example, an object motion model of the motion modeling system may first model the motion of the object, assuming the camera is not in motion, in order to identify the expected position of the object. A camera motion model of the motion modeling system may then update the expected position of the object, obtained from the object motion model, based on the motion of the camera.
Type: Grant
Filed: August 7, 2018
Date of Patent: March 30, 2021
Assignee: Qualcomm Incorporated
Inventors: Amirhossein Habibian, Daniel Hendricus Franciscus Dijkman, Antonio Leonardo Rodriguez Lopez, Yue Hei Ng, Koen Erik Adriaan Van De Sande, Cornelis Gerardus Maria Snoek
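The two-stage motion modeling can be sketched with a constant-velocity object model followed by a camera-motion correction. Both models here are assumed simplifications; the patent does not specify these particular formulas.

```python
def object_motion_model(track):
    """Constant-velocity prediction in frame coordinates, camera assumed still."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

def camera_motion_model(position, camera_shift):
    """Shift the expected position to compensate for the measured camera motion."""
    dx, dy = camera_shift
    return (position[0] - dx, position[1] - dy)

track = [(10, 10), (14, 12)]                   # object positions in the last two frames
expected = object_motion_model(track)          # expected position, camera ignored
search_center = camera_motion_model(expected, (3, -2))  # corrected for camera pan
```

The corrected position becomes the center of the search region handed to the visual tracker; modeling the two motions separately keeps each model simple.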
-
Patent number: 10937173
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for predicting subject motion using probabilistic models. One example method generally includes receiving training data comprising a set of subject pose trees. The set of subject pose trees comprises a plurality of subsets of subject pose trees associated with an image in a sequence of images, and each subject pose tree in the subset indicates a location along an axis of the image at which each of a plurality of joints of a subject is located. The received training data may be processed in a convolutional neural network to generate a trained probabilistic model for predicting joint distribution and subject motion based on density estimation. The trained probabilistic model may be deployed to a computer vision system and configured to generate a probability distribution for the location of each joint along the axis.
Type: Grant
Filed: November 15, 2018
Date of Patent: March 2, 2021
Assignee: Qualcomm Incorporated
Inventors: Mohammad Sadegh Ali Akbarian, Amirhossein Habibian, Koen Erik Adriaan Van De Sande
-
Patent number: 10841549
Abstract: The present disclosure relates to methods and devices for enhancing the quality of video. An example method disclosed herein includes estimating an optical flow between a first noisy frame and a second noisy frame, the second noisy frame following the first noisy frame. The example method also includes warping a first enhanced frame to align with the second noisy frame, the warping being based on the estimation of the optical flow between the first noisy frame and the second noisy frame, the first enhanced frame being an enhanced frame of the first noisy frame. The example method also includes generating a second enhanced frame based on the warped first enhanced frame and the second noisy frame, and outputting the second enhanced frame.
Type: Grant
Filed: March 19, 2020
Date of Patent: November 17, 2020
Assignee: QUALCOMM Incorporated
Inventors: Reza Pourreza Shahri, Amirhossein Habibian, Taco Sebastiaan Cohen
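The warp-then-fuse step can be sketched in one dimension with integer flow. This is an assumed toy: real optical flow is sub-pixel and 2-D, and the fusion would be learned rather than a fixed average.

```python
def warp(frame, flow):
    """Warp a frame by per-pixel integer flow (1-D for simplicity, edges clamped)."""
    n = len(frame)
    return [frame[min(max(i + flow[i], 0), n - 1)] for i in range(n)]

def enhance(prev_enhanced, cur_noisy, flow, alpha=0.5):
    """Fuse the flow-aligned previous enhanced frame with the new noisy frame."""
    aligned = warp(prev_enhanced, flow)
    return [alpha * a + (1 - alpha) * b for a, b in zip(aligned, cur_noisy)]

prev_enhanced = [10.0, 20.0, 30.0, 40.0]  # enhanced version of the first noisy frame
flow = [1, 1, 1, 0]                       # scene shifted left by one pixel
cur_noisy = [21.0, 29.0, 41.0, 39.0]
out = enhance(prev_enhanced, cur_noisy, flow)
```

Because warping aligns the clean previous frame with the new noisy one, averaging them suppresses noise without blurring moving content.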
-
Publication number: 20200304802
Abstract: Certain aspects of the present disclosure are directed to methods and apparatus for compressing video content using deep generative models. One example method generally includes receiving video content for compression. The received video content is generally encoded into a latent code space through an encoder, which may be implemented by a first artificial neural network. A compressed version of the encoded video content is generally generated through a trained probabilistic model, which may be implemented by a second artificial neural network, and output for transmission.
Type: Application
Filed: March 21, 2020
Publication date: September 24, 2020
Inventors: Amirhossein HABIBIAN, Ties Jehan VAN ROZENDAAL, Taco Sebastiaan COHEN