Patents by Inventor Balmanohar Paluri

Balmanohar Paluri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11450006
    Abstract: In one embodiment, a method includes detecting objects in an image. The method includes accessing a mask for each object. The method includes receiving an input in relation to the image. The input corresponds to an input region and an input type. The method includes identifying a region of the image corresponding to the input region of the input. The identified region of the image includes one or more of the masks. The method includes providing feedback regarding the one or more objects in the identified region of the image based on the input type.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: September 20, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Vincent Charles Cheung, Connie Yeewei Ho, Balmanohar Paluri
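The claimed method above can be illustrated with a minimal sketch. The data model here is hypothetical (the patent does not specify it): each object's mask is a set of pixel coordinates, and the input region is likewise a set of pixels, so region/mask overlap reduces to set intersection.

```python
# Minimal sketch (hypothetical data model): each detected object has a
# binary mask stored as a set of (x, y) pixel coordinates. Given an input
# region and an input type (e.g. "tap" or "hover"), report which objects
# fall inside the region.

def objects_in_region(masks, region):
    """Return labels of objects whose mask overlaps the input region.

    masks  -- dict mapping object label -> set of (x, y) pixels
    region -- set of (x, y) pixels covered by the input
    """
    return [label for label, mask in masks.items() if mask & region]

def feedback(masks, region, input_type):
    hits = objects_in_region(masks, region)
    if not hits:
        return "no objects here"
    verb = {"tap": "selected", "hover": "highlighting"}.get(input_type, "found")
    return f"{verb}: {', '.join(sorted(hits))}"

masks = {
    "dog": {(1, 1), (1, 2), (2, 1)},
    "ball": {(5, 5), (5, 6)},
}
print(feedback(masks, {(1, 2), (2, 2)}, "tap"))  # selected: dog
```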
  • Patent number: 11019503
    Abstract: In one embodiment, a method includes accessing a point cloud comprising a plurality of point-cloud points, each point-cloud point corresponding to a location on a surface of an object located in a region in a three-dimensional space, identifying, from the point cloud, a plurality of point clusters, each point cluster comprising a plurality of point-cloud points located within a grid segment on a two-dimensional grid derived from the three-dimensional space, selecting, for each point cluster, a set of point-cloud points from the plurality of point-cloud points in the point cluster, the set of point-cloud points being selected based on a predetermined threshold number of point-cloud points associated with an acceptable reduction in an error detection rate, and determining, for each point cluster, a structure classification based on the selected set of point-cloud points from the point cluster.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: May 25, 2021
    Assignee: Facebook, Inc.
    Inventors: Guan Pang, Jing Huang, Balmanohar Paluri, Brian Christopher Karrer, Ismail Onur Filiz, Birce Tezel, Nicolas Emilio Stier Moses, Vishakan Ponnampalam, Timothy Eric Danford
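The clustering-and-downsampling pipeline in the abstract above can be sketched as follows. The cell size, threshold, and random subsampling are illustrative assumptions; the structure classifier is left as a placeholder.

```python
import random

# Sketch under assumptions: points are (x, y, z) tuples; clusters are
# formed by which cell of a 2-D grid (x, y only) each point falls into,
# and each cluster is downsampled to a predetermined threshold before
# classification.

def cluster_by_grid(points, cell_size):
    clusters = {}
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        clusters.setdefault(key, []).append(p)
    return clusters

def downsample(cluster, threshold, seed=0):
    """Select at most `threshold` points from the cluster."""
    if len(cluster) <= threshold:
        return list(cluster)
    rng = random.Random(seed)
    return rng.sample(cluster, threshold)

points = [(0.1, 0.2, z / 10) for z in range(50)] + [(9.5, 9.5, 0.0)]
clusters = cluster_by_grid(points, cell_size=1.0)
sampled = {k: downsample(v, threshold=10) for k, v in clusters.items()}
print(len(clusters), sorted(len(v) for v in sampled.values()))  # 2 [1, 10]
```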
  • Patent number: 10931854
    Abstract: An online system maintains connections among users of that system and allows them to share media information with one another. If multiple socially connected users are viewing the same event and are located in the vicinity of one another, a social camera application executing on each client device of the socially connected users allows these users to capture media information of that event, and a higher quality media content of the event can be generated from the multiple captures of the event. For example, a target user begins a social camera experience and invites other socially connected users in the vicinity to join that experience. These users upload their captures of the event to the online system, which are combined to create a social camera media item of the event with better quality than any of the individual captures taken by a user within the group.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: February 23, 2021
    Assignee: Facebook, Inc.
    Inventor: Balmanohar Paluri
  • Publication number: 20210004963
    Abstract: In one embodiment, a method includes detecting objects in an image. The method includes accessing a mask for each object. The method includes receiving an input in relation to the image. The input corresponds to an input region and an input type. The method includes identifying a region of the image corresponding to the input region of the input. The identified region of the image includes one or more of the masks. The method includes providing feedback regarding the one or more objects in the identified region of the image based on the input type.
    Type: Application
    Filed: September 24, 2020
    Publication date: January 7, 2021
    Inventors: Vincent Charles Cheung, Connie Yeewei Ho, Balmanohar Paluri
  • Patent number: 10825181
    Abstract: In one embodiment, a method includes detecting one or more objects in an image, generating at least one mask for each of the detected objects, wherein each of the masks is defined by a perimeter, classifying the detected objects, receiving gesture input in relation to the image, determining whether one or more locations associated with the gesture input correlate with any of the masks, and providing feedback regarding the image in response to the gesture input. Each of the masks may include data identifying the corresponding detected object, and the perimeter of each mask may correspond to a perimeter of the corresponding detected object. The perimeter of the corresponding detected object may separate the detected object from one or more portions of the image that are distinct from the detected object.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: November 3, 2020
    Assignee: Facebook, Inc.
    Inventors: Vincent Charles Cheung, Connie Yeewei Ho, Balmanohar Paluri
  • Patent number: 10748247
    Abstract: A system trains a machine learning model to generate a high-resolution depth image. During a training phase, the system generates an accurate three dimensional reconstruction of a training scene such that the machine learning model is iteratively trained to minimize an error between the higher resolution depth image and the depth information in the accurate three dimensional reconstruction. During a real-time phase, the system applies the trained machine learning model to images captured from a scene of interest and generates a higher resolution depth image with higher accuracy. Thus, the higher resolution depth image can be subsequently used to solve computer vision problems.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: August 18, 2020
    Assignee: Facebook, Inc.
    Inventor: Balmanohar Paluri
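The training loop described above can be reduced to a toy example. Everything here is a deliberate simplification: the "model" is a single learned gain applied to an upsampled low-resolution depth profile, trained iteratively to minimize squared error against ground-truth depth taken from the 3-D reconstruction.

```python
# Toy sketch of the training phase (all names hypothetical): iteratively
# minimize the error between the model's high-resolution depth estimate
# and depth from the accurate 3-D reconstruction.

def upsample_nearest(depth, factor):
    """Nearest-neighbor upsampling of a 1-D depth profile."""
    return [d for d in depth for _ in range(factor)]

def train_gain(low_res, ground_truth, factor, lr=0.01, steps=200):
    gain = 1.0
    coarse = upsample_nearest(low_res, factor)
    for _ in range(steps):
        # gradient of sum((gain * c - g)^2) with respect to gain
        grad = sum(2 * (gain * c - g) * c for c, g in zip(coarse, ground_truth))
        gain -= lr * grad / len(coarse)
    return gain

low_res = [1.0, 2.0]
ground_truth = [2.0, 2.0, 4.0, 4.0]  # depths from the reconstruction
gain = train_gain(low_res, ground_truth, factor=2)
print(round(gain, 2))  # 2.0
```

A real system would learn a deep network rather than one scalar, but the loop structure (predict, measure error against the reconstruction, update) is the same.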
  • Patent number: 10706350
    Abstract: In one embodiment, a method includes, by a computing device, receiving a plurality of inputs for a convolution layer of a convolutional neural network, the convolution layer having one or more input channels and one or more output channels, wherein the inputs are received via the input channels, generating, by convolving the inputs with one or more two-dimensional filters, a plurality of intermediate values, and generating, by convolving the intermediate values with one or more one-dimensional filters, a plurality of outputs, wherein the one-dimensional filters receive the intermediate values from the two-dimensional filters via intermediate channels. The method may provide the outputs to a subsequent layer of the convolutional neural network via the output channels. Each of the two dimensions of the two-dimensional filter may correspond to a spatial dimension, and the one dimension of the one-dimensional filter may correspond to a temporal dimension.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: July 7, 2020
    Assignee: Facebook, Inc.
    Inventors: Du Le Hong Tran, Benjamin Ray, Balmanohar Paluri
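The factorization claimed above (2-D spatial filters feeding 1-D temporal filters through intermediate channels) can be sketched for a single channel. Padding, striding, multiple filters, and the learned weights are all omitted; "valid" convolution is assumed.

```python
# Sketch of the (2+1)D factorization: a 2-D spatial convolution is applied
# per frame to produce intermediate values, then a 1-D temporal convolution
# is applied across the intermediate frames.

def conv2d(frame, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(frame), len(frame[0])
    return [[sum(kernel[i][j] * frame[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(w - kw + 1)]
            for y in range(h - kh + 1)]

def conv_temporal(frames, kernel):
    kt = len(kernel)
    h, w = len(frames[0]), len(frames[0][0])
    return [[[sum(kernel[dt] * frames[t + dt][y][x] for dt in range(kt))
              for x in range(w)] for y in range(h)]
            for t in range(len(frames) - kt + 1)]

def conv2plus1d(clip, spatial_kernel, temporal_kernel):
    intermediate = [conv2d(frame, spatial_kernel) for frame in clip]  # 2-D stage
    return conv_temporal(intermediate, temporal_kernel)               # 1-D stage

clip = [[[t] * 4 for _ in range(4)] for t in range(3)]  # 3 frames of 4x4
box = [[1, 1], [1, 1]]     # 2x2 spatial summing filter
diff = [-1, 0, 1]          # temporal difference filter
out = conv2plus1d(clip, box, diff)
print(len(out), out[0][0][0])  # 1 8
```

The spatial stage turns each constant frame t into 3x3 values of 4t; the temporal difference over frames 0, 1, 2 then yields 8 everywhere, reflecting constant motion.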
  • Patent number: 10699184
    Abstract: In one embodiment, a system retrieves a first feature vector for an image. The image is inputted into a first deep-learning model, which is a first-version model, and the first feature vector may be output from a processing layer of the first deep-learning model for the image. The first feature vector is converted using a feature-vector conversion model to obtain a second feature vector for the image. The feature-vector conversion model is trained to convert first-version feature vectors to second-version feature vectors. The second feature vector is associated with a second deep-learning model, and the second deep-learning model is a second-version model. The second-version model is an updated version of the first-version model. A plurality of predictions for the image may be generated using the second feature vector and the second deep-learning model.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: June 30, 2020
    Assignee: Facebook, Inc.
    Inventor: Balmanohar Paluri
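The conversion model above can be illustrated with a toy version. The assumption here is that paired embeddings of the same images from the old (v1) and new (v2) models are available for training; the conversion model is reduced to a per-dimension linear fit, whereas a real system would learn a richer mapping.

```python
# Sketch (hypothetical training data): fit a per-dimension linear map that
# converts first-version feature vectors into second-version ones.

def fit_dimension(xs, ys):
    """Least-squares fit y = a*x + b for one embedding dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return a, my - a * mx

def fit_conversion(v1_vectors, v2_vectors):
    dims = len(v1_vectors[0])
    return [fit_dimension([v[d] for v in v1_vectors],
                          [v[d] for v in v2_vectors]) for d in range(dims)]

def convert(params, v1):
    return [a * x + b for (a, b), x in zip(params, v1)]

# toy paired embeddings: v2 = 2*v1 + 1 in dim 0, v2 = -v1 in dim 1
v1 = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
v2 = [[1.0, -1.0], [3.0, -2.0], [5.0, -3.0]]
params = fit_conversion(v1, v2)
print(convert(params, [3.0, 0.0]))  # [7.0, 0.0]
```

Once fitted, old v1 embeddings can be mapped into the v2 space and fed to the updated model without re-running the old model's inputs through the new network.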
  • Patent number: 10645142
    Abstract: In one embodiment, a method includes receiving a query from a first user for videos; identifying videos matching the query; retrieving, for each identified video, a set of keyframes that are associated with one or more concepts; calculating, for each keyframe of each identified video, a keyframe-score based on a prevalence of the concepts associated with the keyframe, determined with reference to the concepts associated with each other keyframe in the set of retrieved keyframes for the identified video; and sending, to the first user, a search-results interface including search results corresponding to one or more of the identified videos, each search result comprising keyframes for the corresponding identified video having keyframe-scores greater than a threshold keyframe-score.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: May 5, 2020
    Assignee: Facebook, Inc.
    Inventors: Dirk John Stoop, Adam Eugene Bussing, Oliver Scholz, Balmanohar Paluri
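The keyframe-scoring step above can be sketched as follows. The exact scoring function is not specified in the abstract, so this uses an assumed one: a keyframe's score is the summed prevalence of its concepts across all of that video's keyframes, and only keyframes above a threshold are surfaced.

```python
from collections import Counter

# Sketch (hypothetical scoring function): score each keyframe by how
# prevalent its concepts are among the video's other keyframes.

def score_keyframes(keyframes):
    """keyframes: list of (frame_id, set_of_concepts)."""
    prevalence = Counter(c for _, concepts in keyframes for c in concepts)
    total = len(keyframes)
    return {fid: sum(prevalence[c] / total for c in concepts)
            for fid, concepts in keyframes}

keyframes = [
    ("kf1", {"beach", "sunset"}),
    ("kf2", {"beach"}),
    ("kf3", {"car"}),
]
scores = score_keyframes(keyframes)
selected = [fid for fid, s in scores.items() if s > 0.5]
print(sorted(selected))  # ['kf1', 'kf2']
```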
  • Publication number: 20200029225
    Abstract: In one embodiment, a method includes accessing a point cloud comprising a plurality of point-cloud points, each point-cloud point corresponding to a location on a surface of an object located in a region in a three-dimensional space, identifying, from the point cloud, a plurality of point clusters, each point cluster comprising a plurality of point-cloud points located within a grid segment on a two-dimensional grid derived from the three-dimensional space, selecting, for each point cluster, a set of point-cloud points from the plurality of point-cloud points in the point cluster, the set of point-cloud points being selected based on a predetermined threshold number of point-cloud points associated with an acceptable reduction in an error detection rate, and determining, for each point cluster, a structure classification based on the selected set of point-cloud points from the point cluster.
    Type: Application
    Filed: September 30, 2019
    Publication date: January 23, 2020
    Inventors: Guan Pang, Jing Huang, Balmanohar Paluri, Brian Christopher Karrer, Ismail Onur Filiz, Birce Tezel, Nicolas Emilio Stier Moses, Vishakan Ponnampalam, Timothy Eric Danford
  • Patent number: 10536860
    Abstract: In one embodiment, a method includes accessing a point cloud comprising several points, wherein each point corresponds to a location on a surface of an object located in three-dimensional space; determining whether each point in the point cloud is part of a linear structure, a planar structure, or a volumetric structure; identifying a plurality of point clusters, wherein each point cluster comprises one or more points that are located within a grid segment on a two-dimensional grid derived from the three-dimensional space; determining, for each point cluster, whether the point cluster represents a vertical-linear structure or a portion of a vertical-linear structure; identifying one or more point-cluster pairs, wherein each point-cluster pair includes two point clusters corresponding to one or more vertical-linear structures within a threshold distance in the three-dimensional space; and determining, for each point-cluster pair, whether a line-of-sight exists between each point-cluster in the point-cluster pair.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: January 14, 2020
    Assignee: Facebook, Inc.
    Inventors: Guan Pang, Jing Huang, Balmanohar Paluri, Brian Karrer, Ismail Onur Filiz, Birce Tezel, Nicolas Emilio Stier Moses, Vishakan Ponnampalam, Timothy Eric Danford
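The pairing step in the abstract above can be sketched simply. The representation is assumed (each vertical-linear cluster, e.g. a pole, reduced to its ground-plane centroid), and the line-of-sight check is stubbed out, since the occlusion test over the full point cloud is beyond a short example.

```python
import math

# Sketch (hypothetical structure model): pair vertical-linear point
# clusters whose centroids lie within a threshold distance, then filter
# by a line-of-sight check (a placeholder here).

def pairs_within(clusters, threshold):
    """clusters: dict name -> (x, y) centroid on the 2-D grid."""
    names = sorted(clusters)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if math.dist(clusters[a], clusters[b]) <= threshold]

def has_line_of_sight(a, b):
    return True  # placeholder for the occlusion test over the point cloud

poles = {"p1": (0.0, 0.0), "p2": (3.0, 4.0), "p3": (40.0, 0.0)}
links = [(a, b) for a, b in pairs_within(poles, threshold=10.0)
         if has_line_of_sight(a, b)]
print(links)  # [('p1', 'p2')]
```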
  • Publication number: 20190272642
    Abstract: In one embodiment, a method includes detecting one or more objects in an image, generating at least one mask for each of the detected objects, wherein each of the masks is defined by a perimeter, classifying the detected objects, receiving gesture input in relation to the image, determining whether one or more locations associated with the gesture input correlate with any of the masks, and providing feedback regarding the image in response to the gesture input. Each of the masks may include data identifying the corresponding detected object, and the perimeter of each mask may correspond to a perimeter of the corresponding detected object. The perimeter of the corresponding detected object may separate the detected object from one or more portions of the image that are distinct from the detected object.
    Type: Application
    Filed: March 29, 2019
    Publication date: September 5, 2019
    Inventors: Vincent Charles Cheung, Connie Yeewei Ho, Balmanohar Paluri
  • Patent number: 10402703
    Abstract: In one embodiment, a method includes identifying a shared visual concept in visual-media items based on shared visual features in images of the visual-media items; extracting, for each of the visual-media items, n-grams from communications associated with the visual-media item; generating, in a d-dimensional space, an embedding for each of the visual-media items at a location based on the visual concepts included in the visual-media item; generating, in the d-dimensional space, an embedding for each of the extracted n-grams at a location based on a frequency of occurrence of the n-gram in the communications associated with the visual-media items; and associating, with the shared visual concept, the extracted n-grams that have embeddings within a threshold area of the embeddings for the identified visual-media items.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: September 3, 2019
    Assignee: Facebook, Inc.
    Inventors: Dirk John Stoop, Balmanohar Paluri
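The final association step above can be sketched once embeddings exist. The embeddings here are made up for illustration; the point is only the thresholded nearest-neighbor test in the shared d-dimensional space.

```python
import math

# Sketch (hypothetical embeddings): visual-media items sharing a visual
# concept and extracted n-grams live in the same d-dimensional space;
# n-grams whose embeddings fall within a threshold distance of any of the
# concept's media embeddings are associated with that concept.

def associate_ngrams(media_embeddings, ngram_embeddings, threshold):
    associated = []
    for ngram, e in ngram_embeddings.items():
        if any(math.dist(e, m) <= threshold for m in media_embeddings):
            associated.append(ngram)
    return associated

media = [(0.0, 0.0), (1.0, 0.0)]  # items showing the same visual concept
ngrams = {"golden gate": (0.5, 0.1), "recipe": (9.0, 9.0)}
print(associate_ngrams(media, ngrams, threshold=1.0))  # ['golden gate']
```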
  • Publication number: 20190197667
    Abstract: A system trains a machine learning model to generate a high-resolution depth image. During a training phase, the system generates an accurate three dimensional reconstruction of a training scene such that the machine learning model is iteratively trained to minimize an error between the higher resolution depth image and the depth information in the accurate three dimensional reconstruction. During a real-time phase, the system applies the trained machine learning model to images captured from a scene of interest and generates a higher resolution depth image with higher accuracy. Thus, the higher resolution depth image can be subsequently used to solve computer vision problems.
    Type: Application
    Filed: December 26, 2017
    Publication date: June 27, 2019
    Inventor: Balmanohar Paluri
  • Publication number: 20190132492
    Abstract: An online system maintains connections among users of that system and allows them to share media information with one another. If multiple socially connected users are viewing the same event and are located in the vicinity of one another, a social camera application executing on each client device of the socially connected users allows these users to capture media information of that event, and a higher quality media content of the event can be generated from the multiple captures of the event. For example, a target user begins a social camera experience and invites other socially connected users in the vicinity to join that experience. These users upload their captures of the event to the online system, which are combined to create a social camera media item of the event with better quality than any of the individual captures taken by a user within the group.
    Type: Application
    Filed: October 16, 2018
    Publication date: May 2, 2019
    Inventor: Balmanohar Paluri
  • Patent number: 10249044
    Abstract: In one embodiment, a method includes detecting one or more objects in an image, generating at least one mask for each of the detected objects, wherein each of the masks is defined by a perimeter, classifying the detected objects, receiving gesture input in relation to the image, determining whether one or more locations associated with the gesture input correlate with any of the masks, and providing feedback regarding the image in response to the gesture input. Each of the masks may include data identifying the corresponding detected object, and the perimeter of each mask may correspond to a perimeter of the corresponding detected object. The perimeter of the corresponding detected object may separate the detected object from one or more portions of the image that are distinct from the detected object.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: April 2, 2019
    Assignee: Facebook, Inc.
    Inventors: Vincent Charles Cheung, Connie Yeewei Ho, Balmanohar Paluri
  • Publication number: 20190005332
    Abstract: In one embodiment, a method includes accessing a video-content object, determining a first feature vector representing the video-content object using a first recognition module of a first type based on an object in the video-content object, and determining a second feature vector representing the video-content object using a second recognition module of a second type based on the first feature vector. The first type is different from the second type. The method also includes determining a context of the video-content object based on the second feature vector.
    Type: Application
    Filed: August 27, 2018
    Publication date: January 3, 2019
    Inventors: Balmanohar Paluri, Benoit F. Dumoulin, Merlyn Deng, Reena Philip, Dario Garcia Garcia
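The staged pipeline above can be reduced to a toy chain of modules. Both modules are invented for illustration, and the second feature vector and the final context decision are collapsed into one step here.

```python
# Sketch (hypothetical modules): a first-type module maps detected objects
# to a feature vector; a second-type module consumes that vector to
# determine the context of the video content.

def object_module(objects):
    """First-type module: bag-of-objects feature vector."""
    vocab = ["person", "ball", "cake"]
    return [objects.count(w) for w in vocab]

def context_module(feature_vector):
    """Second-type module: consumes the first vector, emits a context."""
    person, ball, cake = feature_vector
    if ball and person:
        return "sports"
    if cake and person:
        return "birthday"
    return "unknown"

print(context_module(object_module(["person", "person", "ball"])))  # sports
```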
  • Patent number: 10140545
    Abstract: The techniques introduced here include a system and method for transcoding multimedia content based on the results of content analysis. The determination of specific transcoding parameters, used for transcoding multimedia content, can be performed by utilizing the results of content analysis of the multimedia content. One of the results of the content analysis is the determination of image type of any images included in the multimedia content. The content analysis uses one or more of several techniques, including analyzing content metadata, examining colors of contiguous pixels in the content, using histogram analysis, using compression distortion analysis, analyzing image edges, or examining user provided inputs. Transcoding the multimedia content can include adapting the content to the delivery, display, processing, and storage constraints of user computing devices.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: November 27, 2018
    Assignee: Facebook, Inc.
    Inventors: Apostolos Lerios, Dirk Stoop, Ryan Mack, Lubomir Bourdev, Balmanohar Paluri
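One of the analyses named above, examining colors in the content, can be sketched as a heuristic for choosing transcoding parameters. The distinct-color cutoff and the parameter choices are illustrative assumptions, not values from the patent.

```python
# Sketch (hypothetical heuristics): guess whether an image is a synthetic
# graphic or a photograph from its distinct-color count, then pick
# transcoding parameters accordingly.

def classify_image(pixels):
    """Few distinct colors suggests a graphic; many suggest a photo."""
    return "graphic" if len(set(pixels)) <= 16 else "photo"

def transcode_params(pixels):
    kind = classify_image(pixels)
    if kind == "graphic":
        return {"format": "png", "lossless": True}
    return {"format": "jpeg", "quality": 80}

logo = [(255, 0, 0)] * 90 + [(255, 255, 255)] * 10
photo = [(i, i // 2, 255 - i) for i in range(100)]
print(transcode_params(logo)["format"], transcode_params(photo)["format"])  # png jpeg
```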
  • Publication number: 20180332480
    Abstract: In one embodiment, a method includes accessing a point cloud comprising several points, wherein each point corresponds to a location on a surface of an object located in three-dimensional space; determining whether each point in the point cloud is part of a linear structure, a planar structure, or a volumetric structure; identifying a plurality of point clusters, wherein each point cluster comprises one or more points that are located within a grid segment on a two-dimensional grid derived from the three-dimensional space; determining, for each point cluster, whether the point cluster represents a vertical-linear structure or a portion of a vertical-linear structure; identifying one or more point-cluster pairs, wherein each point-cluster pair includes two point clusters corresponding to one or more vertical-linear structures within a threshold distance in the three-dimensional space; and determining, for each point-cluster pair, whether a line-of-sight exists between each point-cluster in the point-cluster pair.
    Type: Application
    Filed: May 10, 2017
    Publication date: November 15, 2018
    Inventors: Guan Pang, Jing Huang, Balmanohar Paluri, Brian Karrer, Ismail Onur Filiz, Birce Tezel, Nicolas Emilio Stier Moses, Vishakan Ponnampalam, Timothy Eric Danford
  • Publication number: 20180285700
    Abstract: In one embodiment, a method includes identifying a shared visual concept in visual-media items based on shared visual features in images of the visual-media items; extracting, for each of the visual-media items, n-grams from communications associated with the visual-media item; generating, in a d-dimensional space, an embedding for each of the visual-media items at a location based on the visual concepts included in the visual-media item; generating, in the d-dimensional space, an embedding for each of the extracted n-grams at a location based on a frequency of occurrence of the n-gram in the communications associated with the visual-media items; and associating, with the shared visual concept, the extracted n-grams that have embeddings within a threshold area of the embeddings for the identified visual-media items.
    Type: Application
    Filed: June 4, 2018
    Publication date: October 4, 2018
    Inventors: Dirk John Stoop, Balmanohar Paluri