Patents by Inventor David M. Lubensky

David M. Lubensky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190187958
    Abstract: A workflow extraction method, system, and computer program product include analyzing, for each of the design screens, a relatability of one design screen to a previously analyzed design screen in the database, generating a tag that represents a workflow, and creating a database linking the tag to a sequence of design screens from a transition graph that details how to move from one of the design screens to another.
    Type: Application
    Filed: December 28, 2018
    Publication date: June 20, 2019
    Inventors: Kyungmin Lee, David M. Lubensky, Marco Pistoia, Stephen Wood
  • Publication number: 20190175016
    Abstract: An approach is disclosed that receives, at a wearable sensing element worn by a user, sensor data that pertains to the user's physiological functions. Physiological states pertaining to the user are calculated from the received sensor data, with the physiological states including both physical states and mental states. The calculated physiological states are matched to environmental action states, and environmental actions are responsively performed to change a physical environment of the user.
    Type: Application
    Filed: December 13, 2018
    Publication date: June 13, 2019
    Inventors: Anni R. Coden, Hani T. Jamjoom, David M. Lubensky, Justin Gregory Manweiler, Katherine Vogt, Justin Weisz
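    A toy Python sketch of the mapping this abstract describes; the thresholds in classify_state and the ACTIONS table are invented for illustration, since the abstract does not specify concrete sensor values or environmental actions.

    # Illustrative thresholds and action mappings; all values are assumptions.
    def classify_state(heart_rate_bpm, skin_temp_c):
        physical = "active" if heart_rate_bpm > 100 else "resting"
        mental = "stressed" if heart_rate_bpm > 100 and skin_temp_c > 37.0 else "calm"
        return physical, mental

    ACTIONS = {
        ("active", "stressed"): {"thermostat_c": 20, "lighting": "dim"},
        ("active", "calm"): {"thermostat_c": 21, "lighting": "normal"},
        ("resting", "stressed"): {"thermostat_c": 22, "lighting": "warm"},
        ("resting", "calm"): {"thermostat_c": 22, "lighting": "normal"},
    }

    # Classify the user's physiological state from wearable readings, then
    # look up the environmental actions matched to that state.
    state = classify_state(heart_rate_bpm=112, skin_temp_c=37.4)
    print(state, ACTIONS[state])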
  • Publication number: 20190138269
    Abstract: Techniques for optimizing training data within a voice user interface (VUI) of an application under development are disclosed. A VUI feedback module synthesizes human speech of a training phrase. This phrase is played through a speaker and simultaneously captured by a microphone. A speech-to-text framework converts the synthesized training phrase into text (a textualized training phrase). The VUI feedback module compares the textualized training phrase to the actual training phrase and generates a speech training data structure that identifies similarities or dissimilarities between the textualized training phrase and the actual training phrase. This data structure may be utilized by an application developer computing system to identify training data that is most vulnerable to misinterpretation when a user interacts with the VUI. The VUI may subsequently be adjusted to account for the vulnerabilities to improve operations or user experience of the VUI.
    Type: Application
    Filed: November 9, 2017
    Publication date: May 9, 2019
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marcus D. Roy, Justin Weisz
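    A minimal Python sketch of the round-trip check this abstract describes, assuming hypothetical stand-ins (MISRECOGNITIONS, synthesize_and_transcribe, the 0.9 similarity threshold) for the real text-to-speech and speech-to-text components; only the difflib comparison is a concrete library call.

    import difflib

    # Simulated misrecognitions standing in for the TTS -> speaker ->
    # microphone -> STT round trip described in the abstract.
    MISRECOGNITIONS = {"wire": "why are", "two": "to"}

    def synthesize_and_transcribe(phrase):
        words = [MISRECOGNITIONS.get(w, w) for w in phrase.lower().split()]
        return " ".join(words)

    def score_training_phrases(phrases, threshold=0.9):
        results = []
        for phrase in phrases:
            textualized = synthesize_and_transcribe(phrase)
            similarity = difflib.SequenceMatcher(None, phrase.lower(), textualized).ratio()
            results.append({
                "phrase": phrase,
                "textualized": textualized,
                "similarity": round(similarity, 2),
                "vulnerable": similarity < threshold,  # likely to be misinterpreted
            })
        return results

    for row in score_training_phrases(["wire two hundred dollars", "check my balance"]):
        print(row)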
  • Publication number: 20190138270
    Abstract: Techniques for optimizing training data within a voice user interface (VUI) of an application under development are disclosed. A VUI feedback module synthesizes human speech of a training phrase. This phrase is played through a speaker and simultaneously captured by a microphone. A speech-to-text framework converts the synthesized training phrase into text (a textualized training phrase). The VUI feedback module compares the textualized training phrase to the actual training phrase and generates a speech training data structure that identifies similarities or dissimilarities between the textualized training phrase and the actual training phrase. This data structure may be utilized by an application developer computing system to identify training data that is most vulnerable to misinterpretation when a user interacts with the VUI. The VUI may subsequently be adjusted to account for the vulnerabilities to improve operations or user experience of the VUI.
    Type: Application
    Filed: November 9, 2017
    Publication date: May 9, 2019
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marcus D. Roy, Justin Weisz
  • Publication number: 20190121618
    Abstract: Techniques are disclosed for identifying which graphical user interface (GUI) screens of an application that is under development would benefit from a voice user interface (VUI). A GUI screen parser analyzes the GUI screens of the application to determine the GUI objects within them. The parser assigns a speechability score to each analyzed GUI screen. Those GUI screens that have a higher speechability score than a predetermined speechability threshold are indicated as GUI screens that would benefit from the addition of a VUI (e.g., the user experience in interacting with those GUI screens would improve, the number of GUI screens displayed would be reduced, or the like).
    Type: Application
    Filed: October 23, 2017
    Publication date: April 25, 2019
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marco Pistoia, Nitendra Rajput, Justin Weisz
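    A toy Python sketch of a speechability score, assuming made-up per-object WEIGHTS and a made-up SPEECHABILITY_THRESHOLD; the abstract does not define how the score is computed, so this is only one plausible reading.

    # Hypothetical per-object weights: screens dominated by discrete choices
    # (buttons, list items) are assumed to benefit more from a voice
    # interface than image-heavy screens.
    WEIGHTS = {"button": 1.0, "list_item": 0.8, "text_field": 0.5, "image": 0.0}
    SPEECHABILITY_THRESHOLD = 2.5

    def speechability(gui_objects):
        return sum(WEIGHTS.get(obj, 0.0) for obj in gui_objects)

    screens = {
        "settings": ["button", "button", "list_item", "list_item", "text_field"],
        "photo_editor": ["image", "image", "button"],
    }

    # Flag the screens whose score exceeds the predetermined threshold.
    for name, objects in screens.items():
        score = speechability(objects)
        print(name, score, "add VUI" if score > SPEECHABILITY_THRESHOLD else "keep GUI only")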
  • Publication number: 20190121609
    Abstract: Techniques are disclosed for generating a voice user interface (VUI) modality within an application that includes graphical user interface (GUI) screens. A GUI screen parser analyzes the GUI screens to determine the various navigational GUI screen paths that are associated with edge objects within multiple GUI screens. Some edge objects are identified as select objects or prompt objects. A natural language processing system generates a select object synonym data structure and a prompt object data structure that may be utilized by a VUI generator to generate VUI data structures that give the application VUI modality.
    Type: Application
    Filed: October 23, 2017
    Publication date: April 25, 2019
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marco Pistoia, Nitendra Rajput, Justin Weisz
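    A toy Python sketch, under assumed data structures, of turning edge objects and an NLP-supplied synonym table into VUI entries; the edges and synonyms dictionaries and the build_vui helper are illustrative names, not the patented implementation.

    # Edge objects move the user between screens. Select objects offer a
    # fixed choice, prompt objects ask for free-form input; the synonym
    # table would come from a natural language processing system.
    edges = [
        {"screen": "home", "object": "Transfer", "kind": "select", "target": "transfer"},
        {"screen": "transfer", "object": "Amount", "kind": "prompt", "target": "confirm"},
    ]
    synonyms = {"Transfer": ["send money", "move funds"], "Amount": ["how much"]}

    def build_vui(edges, synonyms):
        vui = []
        for e in edges:
            entry = {"screen": e["screen"], "target": e["target"], "kind": e["kind"],
                     "utterances": [e["object"].lower()] + synonyms.get(e["object"], [])}
            if e["kind"] == "prompt":
                entry["prompt"] = f"Please say the {e['object'].lower()}."
            vui.append(entry)
        return vui

    # VUI data structures that give the application a voice modality.
    for entry in build_vui(edges, synonyms):
        print(entry)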
  • Publication number: 20190121619
    Abstract: Techniques are disclosed for identifying which graphical user interface (GUI) screens of an application that is under development would benefit from a voice user interface (VUI). A GUI screen parser analyzes the GUI screens of the application to determine the GUI objects within them. The parser assigns a speechability score to each analyzed GUI screen. Those GUI screens that have a higher speechability score than a predetermined speechability threshold are indicated as GUI screens that would benefit from the addition of a VUI (e.g., the user experience in interacting with those GUI screens would improve, the number of GUI screens displayed would be reduced, or the like).
    Type: Application
    Filed: October 23, 2017
    Publication date: April 25, 2019
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marco Pistoia, Nitendra Rajput, Justin Weisz
  • Publication number: 20190121608
    Abstract: Techniques are disclosed for generating a voice user interface (VUI) modality within an application that includes graphical user interface (GUI) screens. A GUI screen parser analyzes the GUI screens to determine the various navigational GUI screen paths that are associated with edge objects within multiple GUI screens. Some edge objects are identified as select objects or prompt objects. A natural language processing system generates a select object synonym data structure and a prompt object data structure that may be utilized by a VUI generator to generate VUI data structures that give the application VUI modality.
    Type: Application
    Filed: October 23, 2017
    Publication date: April 25, 2019
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marco Pistoia, Nitendra Rajput, Justin Weisz
  • Patent number: 10268457
    Abstract: Techniques are disclosed for identifying which graphical user interface (GUI) screens of an application that is under development would benefit from a voice user interface (VUI). A GUI screen parser analyzes the GUI screens of the application to determine the GUI objects within them. The parser assigns a speechability score to each analyzed GUI screen. Those GUI screens that have a higher speechability score than a predetermined speechability threshold are indicated as GUI screens that would benefit from the addition of a VUI (e.g., the user experience in interacting with those GUI screens would improve, the number of GUI screens displayed would be reduced, or the like).
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: April 23, 2019
    Assignee: International Business Machines Corporation
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marco Pistoia, Nitendra Rajput, Justin Weisz
  • Patent number: 10268458
    Abstract: Techniques are disclosed for identifying which graphical user interface (GUI) screens of an application that is under development would benefit from a voice user interface (VUI). A GUI screen parser analyzes the GUI screens of the application to determine the GUI objects within them. The parser assigns a speechability score to each analyzed GUI screen. Those GUI screens that have a higher speechability score than a predetermined speechability threshold are indicated as GUI screens that would benefit from the addition of a VUI (e.g., the user experience in interacting with those GUI screens would improve, the number of GUI screens displayed would be reduced, or the like).
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: April 23, 2019
    Assignee: International Business Machines Corporation
    Inventors: Blaine H. Dolph, David M. Lubensky, Mal Pattiarachi, Marco Pistoia, Nitendra Rajput, Justin Weisz
  • Patent number: 10248385
    Abstract: A mobile application workflow extraction method, system, and computer program product include extracting functional elements from a design file to create a database of design screens, generating a flow graph of the design screens and the functional elements in the design file, creating a transition graph that details how to move from each of the design screens to another, and analyzing, for each of the design screens, a relatability of each design screen to a previously analyzed design screen in the database and generating a tag that represents a workflow.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: April 2, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kyungmin Lee, David M. Lubensky, Marco Pistoia, Stephen Wood
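    A toy Python sketch of a transition graph and of recovering the screen sequence behind a workflow tag; the transitions dictionary and the path_between helper are hypothetical, with breadth-first search standing in for whatever traversal the patented method uses.

    from collections import deque

    # Hypothetical transition graph: each design screen maps to the screens
    # reachable from it via a UI action (all names are illustrative).
    transitions = {
        "login": {"submit": "home"},
        "home": {"tap_cart": "cart", "tap_search": "search"},
        "search": {"select_item": "item_detail"},
        "item_detail": {"add_to_cart": "cart"},
        "cart": {"checkout": "payment"},
        "payment": {},
    }

    def path_between(graph, start, goal):
        """Breadth-first search over the transition graph: returns the
        sequence of (action, screen) steps needed to move from one design
        screen to another, or None if the goal screen is unreachable."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            screen, steps = queue.popleft()
            if screen == goal:
                return steps
            for action, nxt in graph[screen].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [(action, nxt)]))
        return None

    # A workflow tag could then label the recovered screen sequence.
    print({"workflow": "purchase", "steps": path_between(transitions, "login", "payment")})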
  • Patent number: 9962093
    Abstract: A method, system, and computer product for detecting an oral temperature for a human include capturing a thermal image for a human using a camera having a thermal image sensor, detecting a face region of the human from the thermal image, detecting a mouth region on the face region, detecting an open mouth region on the mouth region, detecting a degree of mouth openness on the detected open mouth region, determining that the degree of mouth openness meets a predetermined criterion, and detecting the oral temperature, responsive to determining that the degree of mouth openness meets the predetermined criterion.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: May 8, 2018
    Assignee: International Business Machines Corporation
    Inventors: Blaine H. Dolph, Jui-Hsin Lai, Ching-Yung Lin, David M. Lubensky
  • Publication number: 20180114285
    Abstract: Systems and methods are disclosed for storing in a first database a user personal profile, storing in a second database per-restaurant profiles for a plurality of restaurants, enabling the user to connect to a cognitive computer, enabling the user to interact with the cognitive computer for generating a personalized recipe based on user culinary selections and the user profile in the first database, the personalized recipe including a first list of ingredients, determining by the cognitive computer whether there are one or more first type candidate restaurants for preparing the personalized recipe based on the per-restaurant profiles in the second database, a first type candidate restaurant being determined to be able to prepare the personalized recipe with the first list of ingredients, receiving a selection of a restaurant from the first type candidate restaurants, and contracting out the preparation of the personalized recipe to the selected restaurant.
    Type: Application
    Filed: October 24, 2016
    Publication date: April 26, 2018
    Inventors: Anni R. Coden, Hani T. Jamjoom, David M. Lubensky, Justin G. Manweiler, Katherine Vogt, Justin D. Weisz
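    A toy Python sketch of the first-type candidate test, assuming the recipe and per-restaurant profiles reduce to ingredient sets; the names and data are invented for illustration.

    # A personalized recipe and per-restaurant ingredient profiles.
    recipe = {"name": "spicy tofu bowl", "ingredients": {"tofu", "rice", "chili", "scallion"}}

    restaurants = {
        "Green Leaf": {"tofu", "rice", "chili", "scallion", "ginger"},
        "Pasta Corner": {"pasta", "tomato", "basil"},
        "Wok House": {"tofu", "rice", "scallion"},
    }

    # First-type candidates: restaurants able to prepare the recipe with the
    # full first list of ingredients.
    candidates = [name for name, pantry in restaurants.items()
                  if recipe["ingredients"] <= pantry]
    print("candidate restaurants:", candidates)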
  • Publication number: 20180085006
    Abstract: A method, system, and computer product for detecting an oral temperature for a human include capturing a thermal image for a human using a camera having a thermal image sensor, detecting a face region of the human from the thermal image, detecting a mouth region on the face region, detecting an open mouth region on the mouth region, detecting a degree of mouth openness on the detected open mouth region, determining that the degree of mouth openness meets a predetermined criterion, and detecting the oral temperature, responsive to determining that the degree of mouth openness meets the predetermined criterion.
    Type: Application
    Filed: March 22, 2017
    Publication date: March 29, 2018
    Inventors: Blaine H. Dolph, Jui-Hsin Lai, Ching-Yung Lin, David M. Lubensky
  • Publication number: 20170316320
    Abstract: A database comprises historical information of a user's response to previous notifications. The database is accessed to determine a time at which to provide a (new) notification to the user, utilizing at least: a) current user activity status (e.g., determined from measurement information collected from one or more personal devices and/or user calendar events); b) time/day; and c) context information about the notification (e.g., geo-location, indoors/outdoors) including notification type (e.g., calendar entry, email, IM). The user gets the notification via a portable device at the determined time. A machine learning model can select the determined time by discriminating features of the previous notifications that the user immediately attended to versus those that were deferred and/or ignored. Content of the notification can also be altered in view of such discriminating features so as to increase the likelihood that the user will immediately attend to the provided notification.
    Type: Application
    Filed: April 27, 2016
    Publication date: November 2, 2017
    Inventors: Hani Jamjoom, David M. Lubensky, Justin G. Manweiler, Katherine Vogt, Justin D. Weisz
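    A toy Python sketch of the machine learning step, using scikit-learn logistic regression over invented features (hour, weekend flag, activity status, notification type) and a synthetic attend/ignore history; the patent does not name a specific model, so this is only an assumed stand-in.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy history of previous notifications: [hour, is_weekend, user_active,
    # is_calendar_type]; label 1 = attended immediately, 0 = deferred/ignored.
    X = np.array([
        [9, 0, 1, 1], [13, 0, 1, 0], [22, 0, 0, 0], [8, 1, 0, 1],
        [10, 0, 1, 1], [23, 1, 0, 0], [14, 0, 1, 0], [21, 0, 0, 1],
    ])
    y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

    model = LogisticRegression().fit(X, y)

    # Score a few candidate delivery times for a new calendar notification and
    # pick the one most likely to be attended to immediately.
    candidates = np.array([[9, 0, 1, 1], [12, 0, 1, 1], [22, 0, 0, 1]])
    probs = model.predict_proba(candidates)[:, 1]
    best = candidates[np.argmax(probs)]
    print("deliver at hour", best[0], "with attend probability", round(float(probs.max()), 2))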
  • Patent number: 9792576
    Abstract: Controlling drones and vehicles in package delivery, in one aspect, may include routing a delivery vehicle loaded with packages to a dropoff location based on executing on a hardware processor a spatial clustering of package destinations. A set of drones may be dispatched. A drone-to-package assignment is determined for the drones and the packages in the delivery vehicle. The drone is controlled to travel from the vehicle's dropoff location to transport the assigned package to a destination point and return to the dropoff location to meet the vehicle. The delivery vehicle may be alerted to speed up or slow down to meet the drone at the return location, for example, without the delivery vehicle having to stop and wait at the dropoff location while the drone is making its delivery.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: October 17, 2017
    Assignee: International Business Machines Corporation
    Inventors: Hani T. Jamjoom, David M. Lubensky, Justin G. Manweiler, Justin D. Weisz
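    A toy Python sketch of the two steps the abstract names, spatial clustering of package destinations and drone-to-package assignment, using scikit-learn KMeans for the clustering and a simple greedy load-balancing assignment; the two-drone assumption and all coordinates are invented.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    destinations = rng.uniform(0, 10, size=(12, 2))   # package destination coordinates

    # Spatial clustering of package destinations; each cluster centre is used
    # as the truck's dropoff location for that batch of packages.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(destinations)

    for c, dropoff in enumerate(kmeans.cluster_centers_):
        packages = destinations[kmeans.labels_ == c]
        # Round-trip flight distance for each package from this dropoff; the
        # longest trip bounds how far ahead the truck can drive (or how much it
        # should slow down) before meeting the returning drones.
        round_trips = 2 * np.linalg.norm(packages - dropoff, axis=1)
        # Greedy drone-to-package assignment: longest trips first, each to the
        # drone with the least accumulated flight distance (2 drones assumed).
        drone_load = [0.0, 0.0]
        assignment = []
        for pkg in np.argsort(round_trips)[::-1]:
            d = int(np.argmin(drone_load))
            drone_load[d] += round_trips[pkg]
            assignment.append((d, int(pkg)))
        print(f"dropoff {np.round(dropoff, 1)}: drone-to-package assignment {assignment}")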
  • Patent number: 9693695
    Abstract: A method, system, and computer product for detecting an oral temperature for a human include capturing a thermal image for a human using a camera, detecting a face region of the human from the thermal image, detecting a mouth region on the face region, comparing a temperature value on the mouth region to a reference temperature value on a first other face region, detecting an open mouth region on the mouth region based on a comparison result of the temperature value on the mouth region to the reference temperature value of the first other face region, determining whether a mouth of the human is open enough for an oral temperature to be detected, and computing the oral temperature based on temperature values on the mouth region and at least one other face region, responsive to the determination of the mouth being open enough for the oral temperature to be detected.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: July 4, 2017
    Assignee: International Business Machines Corporation
    Inventors: Blaine H. Dolph, Jui-Hsin Lai, Ching-Yung Lin, David M. Lubensky
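    A toy numpy sketch of the comparison logic in this abstract, with a synthetic thermal frame and hand-placed forehead and mouth regions standing in for real detectors; the +1.0 degree margin and the 20% openness criterion are assumptions.

    import numpy as np

    # Synthetic 64x64 "thermal image" in degrees C; in practice the face and
    # mouth regions would come from detectors run on a thermal camera frame.
    frame = np.full((64, 64), 33.0)
    frame[20:28, 24:40] = 36.8         # hypothetical open-mouth pixels (warmer cavity)
    forehead = frame[5:12, 24:40]      # reference face region
    mouth = frame[18:30, 22:42]        # detected mouth region

    reference_temp = forehead.mean()
    open_mask = mouth > reference_temp + 1.0   # pixels warmer than the reference region
    openness = open_mask.mean()                # degree of mouth openness

    if openness > 0.2:                         # assumed openness criterion
        oral_temp = mouth[open_mask].mean()
        print(f"mouth open ({openness:.0%}), estimated oral temperature {oral_temp:.1f} C")
    else:
        print("mouth not open enough for an oral reading")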
  • Patent number: 9679247
    Abstract: A method of building a soft linkage between a plurality of graphs includes initializing a correspondence between type-1 and type-2 objects in the plurality of graphs, and reducing a cost function by alternately updating the type-1 correspondence and updating the type-2 correspondence.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: June 13, 2017
    Assignee: International Business Machines Corporation
    Inventors: Danai Koutra, David M. Lubensky, Hanghang Tong
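    A toy numpy sketch of the alternating-update idea: a soft row (type-1) correspondence P and column (type-2) correspondence Q are nudged in turn to reduce a Frobenius-norm cost between two small bipartite graphs. The cost function, step size, and clipping are illustrative choices, not the method as claimed.

    import numpy as np

    rng = np.random.default_rng(1)

    # Two toy bipartite graphs over type-1 objects (rows) and type-2 objects
    # (columns); the second graph is the first with rows and columns permuted.
    A1 = rng.integers(0, 2, size=(5, 4)).astype(float)
    A2 = A1[rng.permutation(5)][:, rng.permutation(4)]

    P = np.full((5, 5), 1 / 5)   # soft type-1 correspondence, uniform start
    Q = np.full((4, 4), 1 / 4)   # soft type-2 correspondence, uniform start
    step = 0.005

    def cost(P, Q):
        return float(np.linalg.norm(A1 - P @ A2 @ Q.T) ** 2)

    print("initial cost:", round(cost(P, Q), 3))
    # Alternately nudge P, then Q, a small step down the gradient of the cost,
    # keeping correspondence weights in [0, 1].
    for _ in range(2000):
        R = A1 - P @ A2 @ Q.T
        P = np.clip(P + step * 2 * R @ Q @ A2.T, 0.0, 1.0)
        R = A1 - P @ A2 @ Q.T
        Q = np.clip(Q + step * 2 * R.T @ P @ A2, 0.0, 1.0)
    print("final cost:", round(cost(P, Q), 3))
    print("best match for type-1 object 0:", int(P[0].argmax()))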
  • Publication number: 20170155614
    Abstract: Embodiments include methods, systems, and computer program products for providing information that enables two or more people who know each other, and who are running separately yet relatively near each other at the same instant in time, to come together and thereafter run together. Aspects include determining a movement state and a location of a first user, and determining a movement state and a location of at least one other user. Aspects also include, based on the location of the first user and the location of the at least one other user being within a predetermined distance from one another, determining a route for each of the first user and the at least one other user to travel to come together at a single geographic location.
    Type: Application
    Filed: December 1, 2015
    Publication date: June 1, 2017
    Inventors: David M. Lubensky, Justin D. Weisz
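    A toy Python sketch of the proximity check and meeting-point choice; the coordinates, the 800 m separation limit, and the flat-earth distance approximation are all assumptions.

    import math

    def distance_m(p, q):
        """Approximate ground distance in metres between two (lat, lon) points,
        adequate for the short separations considered here."""
        dlat = (q[0] - p[0]) * 111_320
        dlon = (q[1] - p[1]) * 111_320 * math.cos(math.radians((p[0] + q[0]) / 2))
        return math.hypot(dlat, dlon)

    runner_a = (40.7505, -73.9934)   # hypothetical positions of two runners
    runner_b = (40.7531, -73.9912)
    MAX_SEPARATION_M = 800           # assumed "predetermined distance"

    # If the runners are close enough, route both to a single geographic
    # location (here simply the midpoint of their positions).
    if distance_m(runner_a, runner_b) <= MAX_SEPARATION_M:
        meeting_point = ((runner_a[0] + runner_b[0]) / 2, (runner_a[1] + runner_b[1]) / 2)
        print("route both runners to", tuple(round(c, 4) for c in meeting_point))
    else:
        print("runners too far apart to suggest a meet-up")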
  • Publication number: 20170091693
    Abstract: A method for improving team performance by refining team structure includes selecting a team of interest comprising a plurality of individuals, visualizing the team of interest using a graph depicting each individual's skills relevant to a task, refining the team of interest based on the visualization, and displaying the refined team of interest. Refining the team of interest may comprise shrinking the team of interest by removing a member. The method may further comprise calculating a shrinkage score for each member of the team of interest, wherein a shrinkage score is representative of the negative effects of removing that member from the team. The method may additionally include removing the team member with the smallest shrinkage score from the team of interest. A computer program product and computer system corresponding to the method are also disclosed.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 30, 2017
    Inventors: Nan Cao, Ching-Yung Lin, David M. Lubensky, Hanghang Tong
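    A toy Python sketch of a shrinkage score, read here as the loss in task skill coverage when a member is removed; the team data and the coverage definition are invented for illustration.

    # Each member's skills, and the skills the task of interest requires.
    team = {
        "ana": {"nlp", "python"},
        "ben": {"python", "devops"},
        "chen": {"nlp", "visualization", "python"},
    }
    task_skills = {"nlp", "python", "devops", "visualization"}

    def coverage(members):
        covered = set().union(*(team[m] for m in members)) if members else set()
        return len(covered & task_skills) / len(task_skills)

    full = coverage(team)
    # Shrinkage score: how much task coverage is lost if this member is removed
    # (a stand-in for the "negative effects" measure in the abstract).
    scores = {m: full - coverage(set(team) - {m}) for m in team}
    to_remove = min(scores, key=scores.get)
    print("shrinkage scores:", scores)
    print("remove", to_remove, "to shrink the team with the least impact")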