Patents by Inventor Janki Y. Vora

Janki Y. Vora has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11189301
    Abstract: Utterances spoken or sung by a first person can be received in real time. The detected utterances can be compared to at least a stored sample of utterances spoken or sung by the first person. Based on the comparing, audio of the utterances spoken or sung by the first person can be isolated from a background noise. A volume of the utterances spoken or sung by the first person relative to the background noise can be determined. A key indicator that indicates the volume of the detected utterances spoken or sung by the first person relative to the background noise can be generated. Based on the key indicator, information indicating the volume of the detected utterances spoken or sung by the first person relative to the background noise can be communicated.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: November 30, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
  • Patent number: 11089072
    Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: August 10, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
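The first embodiment above substitutes shortened summary segments for the lagging viewer until that viewer has caught up. A minimal sketch, with segment ids and the summary mapping as illustrative assumptions:

```python
def plan_catch_up(segments, summaries, lead_index):
    """Build a delivery plan for a second viewer who starts late.

    segments: ordered list of segment ids for the full video.
    summaries: dict mapping a segment id to its shortened-summary id.
    lead_index: index of the segment currently playing for viewer one.

    Segments the first viewer has already passed are replaced with
    their shortened summaries (when one exists); from the catch-up
    point onward the original segments are streamed unchanged.
    """
    plan = []
    for i, seg in enumerate(segments):
        if i < lead_index and seg in summaries:
            plan.append(summaries[seg])  # shortened replacement segment
        else:
            plan.append(seg)             # original segment
    return plan
```

For example, `plan_catch_up(["s1", "s2", "s3"], {"s1": "s1_sum", "s2": "s2_sum"}, 2)` yields the two summaries followed by the original third segment.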
  • Publication number: 20200213375
    Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
    Type: Application
    Filed: March 10, 2020
    Publication date: July 2, 2020
    Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
  • Patent number: 10609107
    Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: March 31, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
  • Publication number: 20200035113
    Abstract: An instructional support symbiont can be executed on a client device, which concurrently executes at least one application. The instructional support symbiont provides computer-based learning content within a presentation overlay for the application. During an application session, user interaction with an application window for the application can be detected. User compliance with discrete stages of a tutorial responsive to the user interactions can be determined. Content provided in the presentation overlay per specific stages of the tutorial can be updated. Presentation characteristics of the presentation overlay can be adjusted to ensure the presentation overlay is proximate to positions on the common desktop environment as determined from the user interactions.
    Type: Application
    Filed: October 1, 2019
    Publication date: January 30, 2020
    Inventors: Edwin J. Bruce, Tong C. Dougharty, Tassanee K. Supakkul, Janki Y. Vora
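The instructional-support abstract above tracks user compliance with discrete tutorial stages and updates overlay content and position per stage. A minimal sketch, assuming a simple (expected action, instruction text) stage model that is not taken from the patent:

```python
class TutorialOverlay:
    """Sketch of an instructional overlay that advances through tutorial
    stages as user interactions with the application are detected."""

    def __init__(self, stages):
        # stages: list of (expected_action, instruction_text) pairs;
        # instruction_text is shown while waiting for expected_action.
        self.stages = stages
        self.current = 0

    def on_interaction(self, action, position):
        """Handle a detected interaction with the application window.
        Returns the overlay text plus where to draw it (near the
        interaction, per the abstract's positioning requirement)."""
        if self.current < len(self.stages) and action == self.stages[self.current][0]:
            self.current += 1  # user complied with the current stage
        if self.current >= len(self.stages):
            return ("Tutorial complete", position)
        return (self.stages[self.current][1], position)
```

Non-matching interactions leave the stage (and hence the instruction text) unchanged, which is one simple reading of "compliance with discrete stages".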
  • Publication number: 20190378536
    Abstract: Utterances spoken or sung by a first person can be received in real time. The detected utterances can be compared to at least a stored sample of utterances spoken or sung by the first person. Based on the comparing, audio of the utterances spoken or sung by the first person can be isolated from a background noise. A volume of the utterances spoken or sung by the first person relative to the background noise can be determined. A key indicator that indicates the volume of the detected utterances spoken or sung by the first person relative to the background noise can be generated. Based on the key indicator, information indicating the volume of the detected utterances spoken or sung by the first person relative to the background noise can be communicated.
    Type: Application
    Filed: July 16, 2019
    Publication date: December 12, 2019
    Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
  • Patent number: 10438501
    Abstract: Instructional content is visually presented within a graphical user interface overlay on a display for an application window also presented on the display. Interactive events between a user and the application are dynamically detected. Responsive to the interactive events, state-specific substantive instructions are determined given a current state of the application as determined from the interactive events. The instructional content is dynamically modified to continuously present the state-specific substantive instructions that correspond with the detected interactive events.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: October 8, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Edwin J. Bruce, Tong C. Dougharty, Tassanee K. Supakkul, Janki Y. Vora
  • Patent number: 10395671
    Abstract: Utterances spoken or sung by a first person can be received, in real time, from a mobile communication device. A location of the mobile communication device can be determined to be in an area designated as a quiet zone. A key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Based, at least in part, on the key indicator, a determination can be made that the first person is speaking or singing too loudly in the area designated as the quiet zone. Responsive to determining that the first person is speaking or singing too loudly in the area designated as the quiet zone, feedback indicating that the first person is speaking or singing too loudly in the area designated as the quiet zone can be communicated to the mobile communication device.
    Type: Grant
    Filed: October 1, 2017
    Date of Patent: August 27, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
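The quiet-zone abstract above resolves the device's location to a designated area and, when the speaker is too loud there, communicates feedback to the device. A minimal sketch of that decision, with the zone ids and the dB limit as illustrative assumptions:

```python
QUIET_ZONES = {"library", "hospital_ward"}  # illustrative quiet-zone ids

def quiet_zone_feedback(zone_id, utterance_db, limit_db=45.0):
    """Decide whether to send 'too loud' feedback to a mobile device.

    zone_id: the area the device's location resolves to (the location
             lookup itself is assumed to happen upstream).
    utterance_db: measured loudness of the detected utterances.
    limit_db: illustrative quiet-zone threshold, not from the patent.
    Returns a feedback message, or None when no feedback is needed.
    """
    if zone_id in QUIET_ZONES and utterance_db > limit_db:
        return f"You are speaking too loudly for this quiet zone ({zone_id})."
    return None
```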
  • Patent number: 10388096
    Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors retrieve a user profile of a user that is requesting a seat in a venue, where the user profile describes a personal interest of the user. The processor(s) identify another person that shares the personal interest of the user, where the other person is currently seated at a first seat at the venue. The processor(s) identify an unoccupied second seat in proximity to the first seat.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: August 20, 2019
    Assignee: International Business Machines Corporation
    Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
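The abstract above finds a seated person who shares an interest with the requesting user, then an unoccupied seat in proximity to that person. A minimal sketch, where the profile, occupancy, and proximity data shapes are illustrative assumptions:

```python
def seat_near_shared_interest(user_interests, seated, unoccupied, neighbors):
    """Find an unoccupied seat near someone who shares an interest
    with the requesting user.

    user_interests: set of the requesting user's interests.
    seated: dict of seat id -> interests of the person in that seat.
    unoccupied: set of currently empty seat ids.
    neighbors: dict of seat id -> seat ids in proximity to it.
    """
    for seat_id, interests in seated.items():
        if user_interests & interests:  # another person shares an interest
            for candidate in neighbors.get(seat_id, []):
                if candidate in unoccupied:
                    return candidate  # second seat near the first seat
    return None
```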
  • Publication number: 20190012862
    Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors retrieve a user profile of a user that is requesting a seat in a venue, where the user profile describes a personal interest of the user. The processor(s) identify another person that shares the personal interest of the user, where the other person is currently seated at a first seat at the venue. The processor(s) identify an unoccupied second seat in proximity to the first seat.
    Type: Application
    Filed: September 14, 2018
    Publication date: January 10, 2019
    Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
  • Patent number: 10140796
    Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors receive a request for a seat at a venue from a user. The processor(s) retrieve a user profile of the user and a seat profile of the seat, and then match features in the user profile to features in the seat profile. The processor(s), in response to the features in the user profile matching the features in the seat profile, store the user profile and the seat profile in a seat control storage device that is solely dedicated to the seat. The processor(s) then direct the user to the seat that is identified in the seat control storage device, where the user is identified by the user profile in the seat control storage device, and where the seat is identified by the seat profile in the seat control storage device.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: November 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
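The abstract above matches features in the user's profile against features in each seat's profile before directing the user to a seat. A minimal sketch of the matching step, modeling profiles as feature sets (an illustrative simplification; the patent does not specify the data model):

```python
def match_seat(user_profile, seat_profiles):
    """Pick the seat whose profile shares the most features with the
    requesting user's profile.

    user_profile: set of the user's profile features.
    seat_profiles: dict of seat id -> set of that seat's features.
    Returns the best-matching seat id, or None when nothing matches.
    """
    best_seat, best_overlap = None, 0
    for seat_id, features in seat_profiles.items():
        overlap = len(user_profile & features)
        if overlap > best_overlap:  # strict '>' keeps the first seat on ties
            best_seat, best_overlap = seat_id, overlap
    return best_seat
```

The storing of matched profiles in a per-seat "seat control storage device" is the patent's distinctive step and is left out of this sketch.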
  • Patent number: 10116716
    Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: October 30, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
  • Publication number: 20180124142
    Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
    Type: Application
    Filed: November 1, 2016
    Publication date: May 3, 2018
    Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
  • Publication number: 20180124144
    Abstract: A content delivery system may receive and aggregate video content from one or more content sources. In a first embodiment, the content delivery system may start streaming a video to a first viewer on a first device and then receive a request for a catch-up version to be streamed to a second viewer viewing a second device. The content delivery system may send replacement segments of the video that are shortened summaries to the second device until the second viewer has caught up to the first viewer on the first device. In a second embodiment, the content delivery system may detect two or more viewers and customize video content for both viewers. In a third embodiment, the content delivery system, in real time, may customize a segment of a video (possibly using a “green screen” or overlaying a second video over the original video segment) based on characteristics of the viewer and then stream the customized video segment to the viewer.
    Type: Application
    Filed: December 18, 2017
    Publication date: May 3, 2018
    Inventors: Jason A. Gonzalez, Eric L. Gose, Mathews Thomas, Janki Y. Vora
  • Publication number: 20180025742
    Abstract: Utterances spoken or sung by a first person can be received, in real time, from a mobile communication device. A location of the mobile communication device can be determined to be in an area designated as a quiet zone. A key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Based, at least in part, on the key indicator, a determination can be made that the first person is speaking or singing too loudly in the area designated as the quiet zone. Responsive to determining that the first person is speaking or singing too loudly in the area designated as the quiet zone, feedback indicating that the first person is speaking or singing too loudly in the area designated as the quiet zone can be communicated to the mobile communication device.
    Type: Application
    Filed: October 1, 2017
    Publication date: January 25, 2018
    Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
  • Publication number: 20170372551
    Abstract: A method, system, and/or computer program product improve a function of a computer used to make a seat in a venue available to a user. One or more processors receive a request for a seat at a venue from a user. The processor(s) retrieve a user profile of the user and a seat profile of the seat, and then match features in the user profile to features in the seat profile. The processor(s), in response to the features in the user profile matching the features in the seat profile, store the user profile and the seat profile in a seat control storage device that is solely dedicated to the seat. The processor(s) then direct the user to the seat that is identified in the seat control storage device, where the user is identified by the user profile in the seat control storage device, and where the seat is identified by the seat profile in the seat control storage device.
    Type: Application
    Filed: June 24, 2016
    Publication date: December 28, 2017
    Inventors: Edwin J. Bruce, Tassanee K. Supakkul, Janki Y. Vora
  • Patent number: 9836382
    Abstract: A method for the cognitive debugging of a managed system includes first receiving an event in an event management system. Thereafter, a context for the event is extracted therefrom and the context is mapped to both one or more components of a managed computing system and also one or more corresponding debug mode commands for each of the components. Consequently, a debug mode is enabled in each of the components and the corresponding debug mode commands are issued for each of the components so as to provoke a generation of one or more log entries. The generated log entries then are matched to a pre-stored log entry amongst a multiplicity of pre-stored log entries and at least one problem resolution document stored in connection with the matched pre-stored log entry is transmitted to an operator of the event management system.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: December 5, 2017
    Assignee: International Business Machines Corporation
    Inventors: Mandeep Chana, William King, Tomyo G. Maeshiro, Mathews Thomas, Janki Y. Vora
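The cognitive-debugging abstract above extracts a context from an event, maps it to components and their debug-mode commands, collects the provoked log entries, and matches them against pre-stored entries that carry resolution documents. A minimal sketch of that flow; all data shapes and the `run_debug` callable are illustrative assumptions:

```python
def cognitive_debug(event, component_map, known_logs, run_debug):
    """Sketch of the flow in the abstract.

    event: dict carrying the extracted context under 'context'.
    component_map: context -> list of (component, debug_command).
    known_logs: pre-stored log entry -> problem-resolution document.
    run_debug: callable(component, command) -> list of log entries
               (stands in for enabling debug mode on the component
               and issuing its debug-mode command).
    """
    context = event.get("context")  # context extracted from the event
    resolutions = []
    for component, command in component_map.get(context, []):
        for entry in run_debug(component, command):
            if entry in known_logs:  # match against pre-stored entries
                resolutions.append(known_logs[entry])
    return resolutions  # documents to transmit to the operator
```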
  • Patent number: 9779761
    Abstract: Arrangements described herein relate to receiving, in real time, utterances spoken or sung by a first person when the utterances are spoken or sung and comparing, in real time, the detected utterances spoken or sung by the first person to at least a stored sample of utterances spoken or sung by the first person. Based, at least in part, on the comparing the detected utterances spoken or sung by the first person to at least the stored sample of utterances spoken or sung by the first person, a key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Feedback indicating the at least one characteristic of the detected utterances spoken or sung by the first person can be communicated to the first person or a second person.
    Type: Grant
    Filed: April 21, 2016
    Date of Patent: October 3, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
  • Publication number: 20170235660
    Abstract: A method for the cognitive debugging of a managed system includes first receiving an event in an event management system. Thereafter, a context for the event is extracted therefrom and the context is mapped to both one or more components of a managed computing system and also one or more corresponding debug mode commands for each of the components. Consequently, a debug mode is enabled in each of the components and the corresponding debug mode commands are issued for each of the components so as to provoke a generation of one or more log entries. The generated log entries then are matched to a pre-stored log entry amongst a multiplicity of pre-stored log entries and at least one problem resolution document stored in connection with the matched pre-stored log entry is transmitted to an operator of the event management system.
    Type: Application
    Filed: February 17, 2016
    Publication date: August 17, 2017
    Inventors: Mandeep Chana, William King, Tomyo G. Maeshiro, Mathews Thomas, Janki Y. Vora
  • Publication number: 20170208533
    Abstract: Controlling safety features for mobile device interactions in restricted areas includes identification, by a processor, of an operator of a vehicle based on at least one of subscriber identity module (SIM) card detection and image recognition. A physical location of mobile devices within the vehicle relative to one another is determined based on SIM card locations of the mobile devices. A physical location of an operator mobile device located in proximity of an operator position in the vehicle is determined based on imaging. The physical location of the operator mobile device is communicated to a cell tower. The processor detects whether the operator mobile device is in use and communicatively interacting with another mobile device. It is determined whether the vehicle's geographical location is in a predefined restricted area for using the operator mobile device and, if so, communication on the operator mobile device is controlled.
    Type: Application
    Filed: January 18, 2016
    Publication date: July 20, 2017
    Inventors: Charla L. Stracener, Mathews Thomas, Janki Y. Vora, Jeffery R. Washburn
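The final step of the abstract above restricts communication only when the device is the operator's, it is in use, and the vehicle is inside a predefined restricted area. A minimal sketch of that decision; the area ids and role labels are illustrative assumptions, and the SIM-card and imaging detection steps that produce these inputs are out of scope:

```python
RESTRICTED_AREAS = {"school_zone", "construction"}  # illustrative area ids

def should_restrict(device_role, area_id, device_in_use):
    """Return True when communication on the device should be
    controlled: the device belongs to the vehicle operator, it is
    actively in use, and the vehicle's geographical location falls
    inside a predefined restricted area."""
    return device_role == "operator" and device_in_use and area_id in RESTRICTED_AREAS
```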