Patents by Inventor Shilpa Jois Rao
Shilpa Jois Rao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11924481
Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: March 20, 2023
Date of Patent: March 5, 2024
Assignee: Netflix, Inc.
Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
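The abstract does not disclose the comparison algorithm itself. As a rough sketch of steps (2) and (3), a longest-common-subsequence diff over hypothetical per-frame fingerprints identifies the common temporal runs between two cuts and the edits that separate them:

```python
from difflib import SequenceMatcher

def media_diff(frames_a, frames_b):
    """Find common temporal subsequences between two fingerprint
    sequences and the set of edits that separates them."""
    matcher = SequenceMatcher(a=frames_a, b=frames_b, autojunk=False)
    # Matching blocks are the "common temporal subsequences".
    common = [(m.a, m.b, m.size)
              for m in matcher.get_matching_blocks() if m.size]
    # Non-equal opcodes are the "set of edits" between the two objects.
    edits = [op for op in matcher.get_opcodes() if op[0] != "equal"]
    return common, edits

# Hypothetical per-frame fingerprints for two cuts of the same title
cut_a = ["f1", "f2", "f3", "f4", "f5"]
cut_b = ["f1", "f2", "fX", "f4", "f5"]
common, edits = media_diff(cut_a, cut_b)
# common -> [(0, 0, 2), (3, 3, 2)]; edits -> [("replace", 2, 3, 2, 3)]
```

A downstream workflow (step 4) could, for example, use the replaced run to decide which subtitle or dubbing assets need re-timing between the two cuts.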
-
Publication number: 20230412760
Abstract: The disclosure describes systems and methods for automatically generating sound event subtitles for digital videos. For example, the systems and methods described herein can automatically generate subtitles for sound events within a digital video soundtrack that includes sounds other than speech. Additionally, the systems and methods described herein can automatically generate sound event subtitles as part of an automatic and comprehensive approach that generates subtitles for all sounds within a soundtrack of a digital video, thereby avoiding the need for any manual inputs as part of the subtitling process.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Inventors: Yadong Wang, Shilpa Jois Rao
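As a loose illustration of the output stage only (not the patented detection pipeline), classified non-speech events with timestamps can be rendered as bracketed subtitle cues:

```python
def sound_event_subtitles(events):
    """Turn classified non-speech sound events, given as
    (start_seconds, end_seconds, label) tuples, into bracketed
    subtitle cues in the style of '[door slams]'."""
    return [(start, end, f"[{label}]") for start, end, label in events]

# Hypothetical detector output for one scene
cues = sound_event_subtitles([(12.0, 13.5, "door slams")])
# -> [(12.0, 13.5, "[door slams]")]
```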
-
Publication number: 20230409897
Abstract: The disclosed computer-implemented method may include accessing an audio stream with heterogeneous audio content; dividing the audio stream into a plurality of frames; generating a plurality of spectrogram patches, each spectrogram patch within the plurality of spectrogram patches being derived from a frame within the plurality of frames; and providing each spectrogram patch within the plurality of spectrogram patches as input to a convolutional neural network classifier and receiving, as output, a classification of music within a corresponding frame from within the plurality of frames. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Inventors: Yadong Wang, Jeff Kitchener, Shilpa Jois Rao
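A minimal sketch of the framing and spectrogram-patch steps, using a naive DFT in place of the FFT and mel filtering a production system would use; the frame length, hop size, and the CNN classifier itself are assumptions or omitted:

```python
import math

def frame_audio(samples, frame_len, hop):
    """Divide an audio stream into overlapping fixed-length frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def magnitude_spectrum(frame):
    """Naive DFT magnitudes for one frame, standing in for one
    spectrogram patch (real systems use an FFT and mel filtering)."""
    n = len(frame)
    return [abs(sum(frame[t] * complex(math.cos(2 * math.pi * k * t / n),
                                       -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2 + 1)]

# A sine at 4 cycles per 64-sample frame; each patch would then be fed
# to a CNN classifier (not shown) that labels the frame music/non-music.
samples = [math.sin(2 * math.pi * 4 * t / 64) for t in range(128)]
patches = [magnitude_spectrum(f)
           for f in frame_audio(samples, frame_len=64, hop=32)]
```

For a pure tone at 4 cycles per frame, each patch peaks at DFT bin 4, which is the kind of structure the classifier learns to separate from speech and effects.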
-
Publication number: 20230232055
Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
-
Patent number: 11682059
Abstract: Visual diagram searching techniques are described herein. A visual diagram service enables users to efficiently search for data for item parts even in cases where the name of the item part is unknown. In one or more examples, search query input to locate item parts of an item is received via a user interface displayed by at least one computing device. A visual diagram of the item is displayed in the user interface. The visual diagram includes selectable portions mapped to respective item parts depicted in the corresponding selectable portion of the visual diagram. A user selection of one of the selectable portions of the visual diagram of the item is received via the user interface. In response to the user selection, search result data corresponding to the respective item part depicted in the selected selectable portion of the visual diagram of the item is displayed.
Type: Grant
Filed: February 2, 2021
Date of Patent: June 20, 2023
Assignee: eBay Inc.
Inventors: Shilpa Jois Rao, Seyed-Mahdi Pedramrazi, Shaumik Chandra Mondal, Subramanian Sri Sankaran, Bryan Ephraim Freeland, Rita Marion Bosch, James L. Grubbs, Jr., Dong Chen
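The mapping from selectable diagram portions to item parts can be pictured as a hit test over part regions; the part names and rectangles below are hypothetical, not from the patent:

```python
def part_at(diagram_map, x, y):
    """Map a click at (x, y) to the item part whose selectable
    region contains it. diagram_map maps part names to bounding
    boxes given as (x0, y0, x1, y1)."""
    for part, (x0, y0, x1, y1) in diagram_map.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return part
    return None  # click landed outside every selectable portion

# Hypothetical regions for a bicycle parts diagram
regions = {"chain": (10, 80, 60, 100), "saddle": (40, 10, 70, 25)}
selected = part_at(regions, 50, 90)  # -> "chain"
```

The selected part name would then drive the search query, which is how the service finds parts whose names the user does not know.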
-
Patent number: 11659214
Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: April 30, 2021
Date of Patent: May 23, 2023
Assignee: Netflix, Inc.
Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
-
Patent number: 11430485
Abstract: The disclosed computer-implemented method may include accessing an audio track that is associated with a video recording, identifying a section of the accessed audio track having a specific audio characteristic, reducing a volume level of the audio track in the identified section, accessing an audio segment that includes a synthesized voice and inserting the accessed audio segment into the identified section of the audio track, where the inserted segment has a higher volume level than the reduced volume level of the audio track in the identified section. The synthesized voice description can be used to provide additional information to a visually impaired viewer without interrupting the audio track that is associated with the video recording, typically by inserting the synthesized voice description into a segment of the audio track in which there is no dialog. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: January 20, 2020
Date of Patent: August 30, 2022
Assignee: Netflix, Inc.
Inventors: Yadong Wang, Murthy Parthasarathi, Andrew Swan, Raja Ranjan Senapati, Shilpa Jois Rao, Anjali Chablani, Kyle Tacke
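A minimal sketch of the ducking-and-insertion step on raw samples. The gain values are assumptions; the patent specifies only that the inserted segment is louder than the ducked track:

```python
def insert_description(track, segment, start, duck_gain=0.3, voice_gain=1.0):
    """Reduce (duck) the track's volume over the identified section
    and mix in a synthesized-voice segment at a higher level.
    track and segment are lists of float samples; the ducked section
    is exactly the span the segment occupies."""
    out = list(track)
    for i, voice_sample in enumerate(segment):
        j = start + i
        if j < len(out):
            out[j] = out[j] * duck_gain + voice_sample * voice_gain
    return out

track = [1.0] * 8            # constant-level original audio
voice = [0.5, 0.5]           # hypothetical synthesized description
mixed = insert_description(track, voice, start=3)
# mixed[3] == 1.0 * 0.3 + 0.5 * 1.0 == 0.8; other samples unchanged
```

A real implementation would also fade the duck in and out over a few milliseconds to avoid audible clicks at the section boundaries.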
-
Publication number: 20220115030
Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: December 17, 2021
Publication date: April 14, 2022
Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
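The "processing" step that pairs an impulse response with the second (dry) sample is convolution, sketched here directly; a production system would use FFT-based convolution and an impulse response selected by the environment-profile model, and the toy values below are assumptions:

```python
def convolve(signal, impulse_response):
    """Apply a room's acoustic impulse response to a dry audio sample
    by direct convolution, so the dry sample sounds as if it were
    recorded in that environment."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# Toy impulse response: direct sound plus a quieter echo 2 samples later
ir = [1.0, 0.0, 0.4]
dry = [1.0, 0.5]
wet = convolve(dry, ir)  # -> [1.0, 0.5, 0.4, 0.2]
```

This is how, for example, a re-recorded dubbing line can be matched to the reverberation of the scene before being inserted into the audio track.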
-
Patent number: 11238888
Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: December 31, 2019
Date of Patent: February 1, 2022
Assignee: Netflix, Inc.
Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
-
Publication number: 20220021911
Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: April 30, 2021
Publication date: January 20, 2022
Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
-
Publication number: 20210407510
Abstract: The disclosed computer-implemented method includes analyzing, by a speech detection system, a media file to detect lip movement of a speaker who is visually rendered in media content of the media file. The method additionally includes identifying, by the speech detection system, audio content within the media file, and improving accuracy of a temporal correlation of the speech detection system. The method may involve correlating the lip movement of the speaker with the audio content, and determining, based on the correlation between the lip movement of the speaker and the audio content, that the audio content comprises speech from the speaker. The method may further involve recording, based on the determination that the audio content comprises speech from the speaker, the temporal correlation between the speech and the lip movement of the speaker as metadata of the media file. Various other methods, systems, and computer-readable media are disclosed.
Type: Application
Filed: June 24, 2020
Publication date: December 30, 2021
Inventors: Yadong Wang, Shilpa Jois Rao
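One way to picture the correlation step: slide per-frame lip-movement scores against per-frame audio energy and pick the best-matching lag. This is a sketch under assumed inputs; the publication does not disclose its actual correlation method:

```python
def best_lag(lip_activity, audio_energy, max_lag=5):
    """Return the lag (in frames) at which per-frame lip-movement
    scores best correlate with per-frame audio energy. A strong peak
    suggests the audio is speech from the on-screen speaker, and the
    lag itself is the temporal offset to record as metadata."""
    def corr(lag):
        pairs = [(lip_activity[i], audio_energy[i + lag])
                 for i in range(len(lip_activity))
                 if 0 <= i + lag < len(audio_energy)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

# Hypothetical binary activity scores: audio trails lips by one frame
lip = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
audio = [0, 0, 0, 1, 1, 0, 0, 1, 1, 0]
lag = best_lag(lip, audio)  # -> 1
```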
-
Publication number: 20210390949
Abstract: The disclosed computer-implemented method may include training a machine-learning algorithm to use look-ahead to improve effectiveness of identifying visemes corresponding to audio signals by, for one or more audio segments in a set of training audio signals, evaluating an audio segment, where the audio segment includes at least a portion of a phoneme, and a subsequent segment that includes contextual audio that comes after the audio segment and potentially contains context about a viseme that maps to the phoneme. The method may also include using the trained machine-learning algorithm to identify one or more probable visemes corresponding to speech in a target audio signal. Additionally, the method may include recording, as metadata of the target audio signal, where a probable viseme occurs within the target audio signal. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: June 16, 2020
Publication date: December 16, 2021
Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi
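The look-ahead idea can be sketched as a training-data layout in which each audio segment is paired with the contextual segment(s) that follow it; the model that consumes these pairs is omitted, and the window size is an assumption:

```python
def lookahead_windows(segments, lookahead=1):
    """Pair each audio segment with the subsequent segment(s) so a
    viseme model sees contextual audio that comes after the phoneme
    it is classifying. The final segment has no look-ahead context."""
    return [(segments[i], segments[i + 1:i + 1 + lookahead])
            for i in range(len(segments))]

windows = lookahead_windows(["s1", "s2", "s3"])
# -> [("s1", ["s2"]), ("s2", ["s3"]), ("s3", [])]
```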
-
Publication number: 20210201931
Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: December 31, 2019
Publication date: July 1, 2021
Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
-
Publication number: 20210158419
Abstract: Visual diagram searching techniques are described herein. A visual diagram service enables users to efficiently search for data for item parts even in cases where the name of the item part is unknown. In one or more examples, search query input to locate item parts of an item is received via a user interface displayed by at least one computing device. A visual diagram of the item is displayed in the user interface. The visual diagram includes selectable portions mapped to respective item parts depicted in the corresponding selectable portion of the visual diagram. A user selection of one of the selectable portions of the visual diagram of the item is received via the user interface. In response to the user selection, search result data corresponding to the respective item part depicted in the selected selectable portion of the visual diagram of the item is displayed.
Type: Application
Filed: February 2, 2021
Publication date: May 27, 2021
Applicant: eBay Inc.
Inventors: Shilpa Jois Rao, Seyed-Mahdi Pedramrazi, Shaumik Chandra Mondal, Subramanian Sri Sankaran, Bryan Ephraim Freeland, Rita Marion Bosch, James L. Grubbs, Jr., Dong Chen
-
Publication number: 20210151082
Abstract: The disclosed computer-implemented method may include accessing an audio track that is associated with a video recording, identifying a section of the accessed audio track having a specific audio characteristic, reducing a volume level of the audio track in the identified section, accessing an audio segment that includes a synthesized voice and inserting the accessed audio segment into the identified section of the audio track, where the inserted segment has a higher volume level than the reduced volume level of the audio track in the identified section. The synthesized voice description can be used to provide additional information to a visually impaired viewer without interrupting the audio track that is associated with the video recording, typically by inserting the synthesized voice description into a segment of the audio track in which there is no dialog. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: January 20, 2020
Publication date: May 20, 2021
Inventors: Yadong Wang, Murthy Parthasarathi, Andrew Swan, Raja Ranjan Senapati, Shilpa Jois Rao, Anjali Chablani, Kyle Tacke
-
Patent number: 10949906
Abstract: Visual diagram searching techniques are described herein. A visual diagram service enables users to efficiently search for data for item parts even in cases where the name of the item part is unknown. In one or more examples, search query input to locate item parts of an item is received via a user interface displayed by at least one computing device. A visual diagram of the item is displayed in the user interface. The visual diagram includes selectable portions mapped to respective item parts depicted in the corresponding selectable portion of the visual diagram. A user selection of one of the selectable portions of the visual diagram of the item is received via the user interface. In response to the user selection, search result data corresponding to the respective item part depicted in the selected selectable portion of the visual diagram of the item is displayed.
Type: Grant
Filed: April 23, 2018
Date of Patent: March 16, 2021
Assignee: eBay Inc.
Inventors: Shilpa Jois Rao, Seyed-Mahdi Pedramrazi, Shaumik Chandra Mondal, Subramanian Sri Sankaran, Bryan Ephraim Freeland, Rita Marion Bosch, James L. Grubbs, Jr., Dong Chen
-
Publication number: 20190325499
Abstract: Visual diagram searching techniques are described herein. A visual diagram service enables users to efficiently search for data for item parts even in cases where the name of the item part is unknown. In one or more examples, search query input to locate item parts of an item is received via a user interface displayed by at least one computing device. A visual diagram of the item is displayed in the user interface. The visual diagram includes selectable portions mapped to respective item parts depicted in the corresponding selectable portion of the visual diagram. A user selection of one of the selectable portions of the visual diagram of the item is received via the user interface. In response to the user selection, search result data corresponding to the respective item part depicted in the selected selectable portion of the visual diagram of the item is displayed.
Type: Application
Filed: April 23, 2018
Publication date: October 24, 2019
Applicant: eBay Inc.
Inventors: Shilpa Jois Rao, Seyed-Mahdi Pedramrazi, Shaumik Chandra Mondal, Subramanian Sri Sankaran, Bryan Ephraim Freeland, Rita Marion Bosch, James L. Grubbs, Jr., Dong Chen