Patents by Inventor Janki Y. Vora
Janki Y. Vora has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11189301
Abstract: Utterances spoken or sung by a first person can be received in real time. The detected utterances can be compared to at least a stored sample of utterances spoken or sung by the first person. Based on the comparing, audio of the utterances spoken or sung by the first person can be isolated from background noise. A volume of the utterances spoken or sung by the first person relative to the background noise can be determined. A key indicator that indicates the volume of the detected utterances spoken or sung by the first person relative to the background noise can be generated. Based on the key indicator, information indicating the volume of the detected utterances spoken or sung by the first person relative to the background noise can be communicated.
Type: Grant
Filed: July 16, 2019
Date of Patent: November 30, 2021
Assignee: International Business Machines Corporation
Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
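The scheme this abstract describes — separate a known speaker's voice from background noise by comparison against a stored voice sample, then report the speaker's relative volume as a key indicator — could be sketched roughly as follows. Every function name, the correlation-based similarity stand-in, and the 0.5 threshold are illustrative assumptions, not details from the patent:

```python
import numpy as np

def correlate_to_sample(frame, sample):
    """Crude similarity between an audio frame and the speaker's stored
    sample: normalized cross-correlation (an illustrative stand-in for
    real speaker identification)."""
    a = (frame - frame.mean()) / (frame.std() + 1e-9)
    b = (sample - sample.mean()) / (sample.std() + 1e-9)
    return float(np.max(np.correlate(a, b, mode="valid")) / len(b))

def volume_key_indicator(frames, sample, threshold=0.5):
    """Split frames into 'speaker' and 'background' by similarity to the
    stored sample, then report the speaker's volume relative to the
    background noise in decibels."""
    speech, noise = [], []
    for frame in frames:
        (speech if correlate_to_sample(frame, sample) >= threshold
         else noise).append(frame)

    def rms(xs):
        return np.sqrt(np.mean(np.concatenate(xs) ** 2)) if xs else 1e-9

    ratio_db = 20 * np.log10(rms(speech) / rms(noise))
    return {"relative_volume_db": round(ratio_db, 1),
            "speech_frames": len(speech),
            "noise_frames": len(noise)}
```

A real system would use proper speaker-verification features rather than raw waveform correlation; the point here is only the abstract's pipeline of isolate, measure relative volume, emit indicator.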
-
Patent number: 11089072
Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
Type: Grant
Filed: March 10, 2020
Date of Patent: August 10, 2021
Assignee: International Business Machines Corporation
Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
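The first embodiment — replacing already-streamed segments with shortened summaries for a late joiner until they catch up — can be modeled with a simple segment plan. This is a deliberate simplification (it treats "caught up" as having consumed everything up to the live viewer's current segment index), and all names are illustrative, not from the patent:

```python
def catchup_plan(full_durations, summary_durations, live_index):
    """For a late-joining viewer, mark each segment the live viewer has
    already passed as 'summary' (the shortened replacement segment) and
    the rest as 'full'. Returns the per-segment plan plus the total
    seconds saved by watching summaries instead of full segments."""
    plan = ["summary" if i < live_index else "full"
            for i in range(len(full_durations))]
    saved = sum(full_durations[i] - summary_durations[i]
                for i in range(live_index))
    return plan, saved
```

For example, a viewer joining two 60-second segments behind, with 15-second summaries available, would receive two summary segments and save 90 seconds before rejoining the full stream.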
-
Publication number: 20200213375
Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
Type: Application
Filed: March 10, 2020
Publication date: July 2, 2020
Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
-
Patent number: 10609107
Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
Type: Grant
Filed: December 18, 2017
Date of Patent: March 31, 2020
Assignee: International Business Machines Corporation
Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
-
Publication number: 20200035113
Abstract: An instructional support symbiont can be executed on a client device, which concurrently executes at least one application. The instructional support symbiont provides computer-based learning content within a presentation overlay for the application. During an application session, user interaction with an application window for the application can be detected. User compliance with discrete stages of a tutorial responsive to the user interactions can be determined. Content provided in the presentation overlay per specific stages of the tutorial can be updated. Presentation characteristics of the presentation overlay can be adjusted to ensure the presentation overlay is proximate to positions on the common desktop environment as determined from the user interactions.
Type: Application
Filed: October 1, 2019
Publication date: January 30, 2020
Inventors: Edwin J. Bruce, Tong C. Dougharty, Tassanee K. Supakkul, Janki Y. Vora
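The overlay behavior described here — advance through tutorial stages as user interactions are observed, update the overlay text per stage, and keep the overlay anchored near where the user is working — amounts to a small state machine. A minimal sketch, with all event names and the stage data shape being illustrative assumptions:

```python
class TutorialOverlay:
    """Minimal model of the instructional overlay: tracks which tutorial
    stage the user has completed from observed interaction events and
    reports the overlay text and anchor position to render."""

    def __init__(self, stages):
        # stages: list of (expected_event, instruction_text) pairs;
        # matching the expected event completes that stage.
        self.stages = stages
        self.current = 0

    def on_interaction(self, event, position):
        # Advance only when the interaction matches the current stage's goal.
        if (self.current < len(self.stages)
                and event == self.stages[self.current][0]):
            self.current += 1
        text = (self.stages[self.current][1]
                if self.current < len(self.stages)
                else "Tutorial complete")
        # Keep the overlay proximate to where the user last interacted.
        return {"text": text, "anchor": position}
```

Unrelated interactions leave the stage unchanged, so the overlay keeps showing the current instruction until the user actually complies with it.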
-
Publication number: 20190378536
Abstract: Utterances spoken or sung by a first person can be received in real time. The detected utterances can be compared to at least a stored sample of utterances spoken or sung by the first person. Based on the comparing, audio of the utterances spoken or sung by the first person can be isolated from background noise. A volume of the utterances spoken or sung by the first person relative to the background noise can be determined. A key indicator that indicates the volume of the detected utterances spoken or sung by the first person relative to the background noise can be generated. Based on the key indicator, information indicating the volume of the detected utterances spoken or sung by the first person relative to the background noise can be communicated.
Type: Application
Filed: July 16, 2019
Publication date: December 12, 2019
Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
-
Patent number: 10438501
Abstract: Instructional content is visually presented within a graphical user interface overlay on a display for an application window also presented on the display. Interactive events between a user and the application are dynamically detected. Responsive to the interactive events, state-specific substantive instructions are determined given a current state of the application as determined from the interactive events. The instructional content is dynamically modified to continuously present the state-specific substantive instructions that correspond with the detected interactive events.
Type: Grant
Filed: August 29, 2016
Date of Patent: October 8, 2019
Assignee: International Business Machines Corporation
Inventors: Edwin J. Bruce, Tong C. Dougharty, Tassanee K. Supakkul, Janki Y. Vora
-
Patent number: 10395671
Abstract: Utterances spoken or sung by a first person can be received, in real time, from a mobile communication device. A location of the mobile communication device can be determined to be in an area designated as a quiet zone. A key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Based, at least in part, on the key indicator, a determination can be made that the first person is speaking or singing too loudly in the area designated as the quiet zone. Responsive to determining that the first person is speaking or singing too loudly in the area designated as the quiet zone, feedback indicating that the first person is speaking or singing too loudly in the area designated as the quiet zone can be communicated to the mobile communication device.
Type: Grant
Filed: October 1, 2017
Date of Patent: August 27, 2019
Assignee: International Business Machines Corporation
Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
-
Patent number: 10388096
Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors retrieve a user profile of a user who is requesting a seat in a venue, where the user profile describes a personal interest of the user. The processor(s) identify another person who shares the personal interest of the user, where the other person is currently seated at a first seat at the venue. The processor(s) identify an unoccupied second seat in proximity to the first seat.
Type: Grant
Filed: September 14, 2018
Date of Patent: August 20, 2019
Assignee: International Business Machines Corporation
Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
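The matching described here — find a seated person who shares an interest with the requester, then pick a vacant seat near that person — reduces to a small search over seat data. The data shapes (interest sets per seat, an adjacency map) are hypothetical conveniences, not structures named in the patent:

```python
def suggest_seat(user_interests, seated, vacancies, adjacency):
    """Find a seated person sharing a personal interest with the
    requesting user, then return an unoccupied seat in proximity to
    that person's seat, or None if no match exists.
    seated:    {seat_id: set of that occupant's interests}
    vacancies: set of currently unoccupied seat ids
    adjacency: {seat_id: [nearby seat ids]}"""
    for seat_id, interests in seated.items():
        if user_interests & interests:  # shared personal interest
            for neighbor in adjacency.get(seat_id, []):
                if neighbor in vacancies:
                    return neighbor
    return None
```

If no occupant shares an interest, or no nearby seat is free, the function reports no suggestion rather than seating the user arbitrarily.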
-
Publication number: 20190012862
Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors retrieve a user profile of a user who is requesting a seat in a venue, where the user profile describes a personal interest of the user. The processor(s) identify another person who shares the personal interest of the user, where the other person is currently seated at a first seat at the venue. The processor(s) identify an unoccupied second seat in proximity to the first seat.
Type: Application
Filed: September 14, 2018
Publication date: January 10, 2019
Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
-
Patent number: 10140796
Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors receive a request for a seat at a venue from a user. The processor(s) retrieve a user profile of the user and a seat profile of the seat, and then match features in the user profile to features in the seat profile. The processor(s), in response to the features in the user profile matching the features in the seat profile, store the user profile and the seat profile in a seat control storage device that is solely dedicated to the seat. The processor(s) then direct the user to the seat that is identified in the seat control storage device, where the user is identified by the user profile in the seat control storage device, and where the seat is identified by the seat profile in the seat control storage device.
Type: Grant
Filed: June 24, 2016
Date of Patent: November 27, 2018
Assignee: International Business Machines Corporation
Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
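The flow in this abstract — match user-profile features to seat-profile features, record both profiles in storage dedicated to that seat, and direct the user there — could be sketched like this. The subset-based matching rule and every name are illustrative assumptions; the patent does not specify how features are compared:

```python
def assign_seat(user_profile, seat_profiles, seat_storage):
    """Match the user's requested features against each seat's features.
    On a match (here, hypothetically: the seat offers every feature the
    user requests), store both profiles in that seat's dedicated storage
    and return the seat id the user should be directed to."""
    for seat_id, seat_profile in seat_profiles.items():
        if user_profile["features"] <= seat_profile["features"]:
            # Record user and seat profiles in storage dedicated to the seat.
            seat_storage[seat_id] = {"user": user_profile,
                                     "seat": seat_profile}
            return seat_id
    return None  # no seat satisfies the requested features
```

Here `seat_storage` is a plain dict standing in for the per-seat "seat control storage device" the abstract describes.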
-
Patent number: 10116716
Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
Type: Grant
Filed: November 1, 2016
Date of Patent: October 30, 2018
Assignee: International Business Machines Corporation
Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
-
Publication number: 20180124142
Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
Type: Application
Filed: November 1, 2016
Publication date: May 3, 2018
Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
-
Publication number: 20180124144
Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
Type: Application
Filed: December 18, 2017
Publication date: May 3, 2018
Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
-
Publication number: 20180025742
Abstract: Utterances spoken or sung by a first person can be received, in real time, from a mobile communication device. A location of the mobile communication device can be determined to be in an area designated as a quiet zone. A key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Based, at least in part, on the key indicator, a determination can be made that the first person is speaking or singing too loudly in the area designated as the quiet zone. Responsive to determining that the first person is speaking or singing too loudly in the area designated as the quiet zone, feedback indicating that the first person is speaking or singing too loudly in the area designated as the quiet zone can be communicated to the mobile communication device.
Type: Application
Filed: October 1, 2017
Publication date: January 25, 2018
Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
-
Publication number: 20170372551
Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors receive a request for a seat at a venue from a user. The processor(s) retrieve a user profile of the user and a seat profile of the seat, and then match features in the user profile to features in the seat profile. The processor(s), in response to the features in the user profile matching the features in the seat profile, store the user profile and the seat profile in a seat control storage device that is solely dedicated to the seat. The processor(s) then direct the user to the seat that is identified in the seat control storage device, where the user is identified by the user profile in the seat control storage device, and where the seat is identified by the seat profile in the seat control storage device.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
-
Patent number: 9836382
Abstract: A method for the cognitive debugging of a managed system includes first receiving an event in an event management system. Thereafter, a context for the event is extracted therefrom and the context is mapped to both one or more components of a managed computing system and also one or more corresponding debug mode commands for each of the components. Consequently, a debug mode is enabled in each of the components and the corresponding debug mode commands are issued for each of the components so as to provoke a generation of one or more log entries. The generated log entries then are matched to a pre-stored log entry amongst a multiplicity of pre-stored log entries and at least one problem resolution document stored in connection with the matched pre-stored log entry is transmitted to an operator of the event management system.
Type: Grant
Filed: February 17, 2016
Date of Patent: December 5, 2017
Assignee: International Business Machines Corporation
Inventors: Mandeep Chana, William King, Tomyo G. Maeshiro, Mathews Thomas, Janki Y. Vora
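The pipeline in this abstract — extract the event's context, map it to components and their debug-mode commands, issue the commands to collect log entries, then match those entries against pre-stored ones to retrieve resolution documents — can be sketched end to end. All data shapes and the string-based matching are illustrative assumptions; real command issuance and log collection are stubbed out:

```python
def cognitive_debug(event, component_map, debug_commands, known_logs):
    """Walk the abstract's pipeline and return the problem-resolution
    documents to send to the operator.
    component_map:  {context: [component, ...]}
    debug_commands: {component: [debug-mode command, ...]}
    known_logs:     {pre-stored log entry: resolution document id}"""
    context = event["context"]
    components = component_map.get(context, [])
    logs = []
    for comp in components:
        for cmd in debug_commands.get(comp, []):
            # Stand-in for enabling debug mode on the component and
            # issuing the command; a real system would capture the
            # component's actual log output here.
            logs.append(f"{comp}:{cmd}")
    # Match generated entries against pre-stored entries and collect
    # the resolution documents stored with the matches.
    return [known_logs[entry] for entry in logs if entry in known_logs]
```

An event with no mapped components, or logs matching no pre-stored entry, simply yields no resolution documents.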
-
Patent number: 9779761
Abstract: Arrangements described herein relate to receiving, in real time, utterances spoken or sung by a first person when the utterances are spoken or sung, and comparing, in real time, the detected utterances spoken or sung by the first person to at least a stored sample of utterances spoken or sung by the first person. Based, at least in part, on the comparing of the detected utterances spoken or sung by the first person to at least the stored sample of utterances spoken or sung by the first person, a key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Feedback indicating the at least one characteristic of the detected utterances spoken or sung by the first person can be communicated to the first person or a second person.
Type: Grant
Filed: April 21, 2016
Date of Patent: October 3, 2017
Assignee: International Business Machines Corporation
Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
-
Publication number: 20170235660
Abstract: A method for the cognitive debugging of a managed system includes first receiving an event in an event management system. Thereafter, a context for the event is extracted therefrom and the context is mapped to both one or more components of a managed computing system and also one or more corresponding debug mode commands for each of the components. Consequently, a debug mode is enabled in each of the components and the corresponding debug mode commands are issued for each of the components so as to provoke a generation of one or more log entries. The generated log entries then are matched to a pre-stored log entry amongst a multiplicity of pre-stored log entries and at least one problem resolution document stored in connection with the matched pre-stored log entry is transmitted to an operator of the event management system.
Type: Application
Filed: February 17, 2016
Publication date: August 17, 2017
Inventors: Mandeep Chana, William King, Tomyo G. Maeshiro, Mathews Thomas, Janki Y. Vora
-
Publication number: 20170208533
Abstract: Controlling safety features for mobile device interactions in restricted areas includes identification, by a processor, of an operator of a vehicle based on at least one of subscriber identity module (SIM) card detection and image recognition. A physical location of mobile devices within the vehicle relative to one another is determined based on SIM card locations of the mobile devices. A physical location of an operator mobile device located in proximity of an operator position in the vehicle is determined based on imaging. The physical location of the operator mobile device is communicated to a cell tower. The processor detects whether the operator mobile device is in use and communicatively interacting with another mobile device. It is determined if the geographical location is in a predefined restricted area for using the operator mobile device, and if so, the operator mobile device communication is controlled.
Type: Application
Filed: January 18, 2016
Publication date: July 20, 2017
Inventors: Charla L. Stracener, Mathews Thomas, Janki Y. Vora, Jeffery R. Washburn
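The final decision this abstract describes — restrict the operator's device only when the vehicle is inside a predefined restricted area and the operator device is actively interacting with another device — combines a location test with an activity test. A minimal sketch; the point-in-box geofence and all field names are illustrative assumptions (real operator identification via SIM detection or imaging is out of scope here):

```python
def should_restrict(operator_device, devices, restricted_areas):
    """Return True when operator-device communication should be
    controlled: the device's location lies inside a predefined
    restricted area AND it is in use, communicatively interacting
    with another mobile device.
    restricted_areas: list of (lat_min, lat_max, lon_min, lon_max)."""
    lat, lon = operator_device["location"]
    in_restricted_area = any(
        lat0 <= lat <= lat1 and lon0 <= lon <= lon1
        for (lat0, lat1, lon0, lon1) in restricted_areas)
    interacting = any(
        d["in_use"] and d["peer"] == operator_device["id"]
        for d in devices)
    return in_restricted_area and interacting
```

Either condition failing — outside all restricted areas, or no active interaction — leaves the device unrestricted.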