Patents by Inventor Laurent Denoue

Laurent Denoue has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190392057
    Abstract: Example implementations described herein are directed to detection of text and image differences between versions of documents (in particular, slide presentations), and generating an animation to indicate the differences between versions. Such example implementations can be implemented as an application layer on document platforms that otherwise do not have any features to indicate differences between document versions. Further, such implementations are also extendable to messaging applications for collaborative document editing.
    Type: Application
    Filed: June 26, 2018
    Publication date: December 26, 2019
    Inventors: Laurent DENOUE, Scott CARTER
  • Patent number: 10482777
    Abstract: Online educational videos are often difficult to navigate. Furthermore, most video interfaces do not lend themselves to note-taking. The described system detects and reuses boundaries that tend to occur in these types of videos. In particular, many educational videos are organized around distinct breaks that correspond to slide changes, scroll events, or a combination of both. The described algorithms can detect these structural changes in the video content. From these events the system can generate navigable overviews to help users search for specific content. Furthermore, these boundary events can help the system automatically associate rich media annotations with manually-defined bookmarks. Finally, when manual or automatically recovered spoken transcripts are available, the spoken text can be combined with the temporal segmentation implied by detected events for video indexing and retrieval. This text can also be used to seed a set of text annotations for user selection or be combined with user text input.
    Type: Grant
    Filed: July 10, 2013
    Date of Patent: November 19, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Scott Carter, Matthew L. Cooper, Laurent Denoue
  • Publication number: 20190319900
    Abstract: Example implementations are directed to a method of controlling contributions to a communication stream. An example implementation includes detecting a request from a user to add a data item to a communication stream of a channel; analyzing the data item in view of the communication stream to determine a relevancy score for the data item; and providing a control interface for the request based on the relevancy score of the data item. For example, the control interface can include an audience report, a notification, a previous post link, an alternative channel recommendation, a private message invitation, or a proceed to post command.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Inventors: Jennifer Marlow, Scott Carter, Laurent Denoue
  • Publication number: 20190318150
    Abstract: An augmented reality (AR) environment synchronization method, includes, for each of a plurality of devices associated with respective users in the AR environment, receiving plane information, and generating object/unique identifier information; across the devices, coordinating the plane information; performing context-aware matching of the object/unique identifier information across the devices to generate a match between respective objects sensed by the devices; and providing synchronization control to the devices, to permit an annotation of the matched object to be locked to a landmark and the plane of one of the devices with respect to others of the devices.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventors: David A. SHAMMA, Laurent DENOUE, Matthew L. COOPER
  • Publication number: 20190258311
    Abstract: In a telepresence scenario with remote users discussing a document or a slide, it can be difficult to follow which parts of the document are being discussed. One way to address this problem is to provide feedback by showing where the user's hand is pointing on the document, which also enables more expressive gestural communication than a simple remote cursor. An important practical problem is how to transmit this remote feedback efficiently along with high-resolution document images. This is not possible with standard videoconferencing systems, which have insufficient resolution. We propose a method based on using hand skeletons to provide the feedback. The skeleton can be captured using a depth camera or a webcam (with a deep network algorithm), and the small data can be transmitted at a high frame rate (without a video codec).
    Type: Application
    Filed: February 21, 2018
    Publication date: August 22, 2019
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Chelhwon Kim, Patrick Chiu, Joseph Andrew Alkuino de la Pena, Laurent Denoue, Jun Shingu
  • Publication number: 20190243889
    Abstract: A method of converting a document from a first structure to a second structure, includes extracting data of the document to associate a field and a label in the first structure to generate a field/label association, receiving operator input indicative of associating a field/label association with one or more other field/label associations to generate a grouping, and based on the operator input and a spatial arrangement of the first structure, providing the grouping in the second structure as a natural conversational unit.
    Type: Application
    Filed: February 2, 2018
    Publication date: August 8, 2019
    Inventors: Scott Carter, Laurent Denoue, Matthew L. Cooper
  • Patent number: 10298907
    Abstract: A method of sharing documents is provided. The method includes capturing first image data associated with a document, detecting content of the document based on the captured first image data, capturing second image data associated with an object controlled by a user moved relative to the document, determining a relative position between the document and the object, combining a portion of the second image data with the first image data based on the determined relative position to generate a combined image signal that is displayed, and emphasizing a portion of the content in the displayed combined image signal, based on the relative position.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: May 21, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Patrick Chiu, Sven Kratz, Shingu Jun, Laurent Denoue
  • Publication number: 20190036853
    Abstract: Example implementations described herein are directed to systems and methods for providing documents in the chat of a chat application. Example implementations can involve detecting, in a chat of a chat application, an indication to edit a document; inserting a fragment of the document into the chat of the chat application, the fragment configured to be editable within the chat of the chat application; and modifying the document based on input provided to the fragment of the document in the chat of the chat application.
    Type: Application
    Filed: July 31, 2017
    Publication date: January 31, 2019
    Inventors: Laurent DENOUE, Scott CARTER, Jennifer MARLOW, Matthew L. COOPER
  • Publication number: 20180359530
    Abstract: Example implementations are directed to methods and systems for curating messages from viewers to identify a question associated with a recorded video that includes video data, where the question is extracted from a queue of the video data; analyzing the video data to determine one or more answer segments for the question that satisfy a confidence score based on a location of the question in the recorded video; and generating an answer summary for the question with links to each of the one or more answer segments, where the links are ranked based on the confidence score.
    Type: Application
    Filed: June 9, 2017
    Publication date: December 13, 2018
    Inventors: Jennifer Marlow, Laurent Denoue, Matthew L. Cooper, Scott Carter, Daniel Avrahami
  • Publication number: 20180307316
    Abstract: A method at a computer system includes obtaining an electronic document comprising document elements, and injecting into the document in association with one of the document elements one or more hotspot attributes, the hotspot attributes defining attributes of a hotspot that is displayable in conjunction with the document element when the document is displayed, the hotspot attributes being associated with predefined physical gestures and respective document actions; such that the hotspot, when displayed as part of a displayed document, indicates that a viewer of the displayed document can interact with the displayed document using the predefined physical gestures (i) performed at a position that overlaps a displayed version of the document in a field of view of a camera system and (ii) captured by the camera system, wherein a physical gesture results in a respective document action being performed on the displayed document.
    Type: Application
    Filed: April 20, 2017
    Publication date: October 25, 2018
    Inventors: Patrick Chiu, Joseph Andrew Alkuino de la Peña, Laurent Denoue, Chelhwon Kim
  • Publication number: 20180300309
    Abstract: Example implementations described herein are directed to a system for inserting document links or document fragments in messaging applications. For input provided to a messaging application, example implementations can parse the input to determine document parameters, determine previously linked documents in messages of the messaging application corresponding to the document parameters; and embed at least one of a selected document fragment or document link from the determined previously linked documents.
    Type: Application
    Filed: April 18, 2017
    Publication date: October 18, 2018
    Inventors: Laurent DENOUE, Scott CARTER, Jennifer MARLOW, Matthew L. COOPER
  • Patent number: 10091258
    Abstract: The various embodiments described herein include methods and systems for providing electronic feedback. In one aspect, software includes instructions which, when executed by a computing system, cause the computing system to: (1) enable a user of the computing system to participate in an electronic conference with one or more remote participants, the electronic conference including an outgoing communications stream for the user; (2) receive feedback from a remote client device used by a particular participant of the one or more remote participants to participate in the electronic conference, the feedback corresponding to a quality of the user's outgoing communications stream at the remote client device; and (3) adjust one or more attributes of the electronic conference based on the received feedback.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 2, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Scott Carter, Laurent Denoue, Matthew Cooper
  • Publication number: 20180121394
    Abstract: A system that automatically embeds interactive document snippets inside a chat conversation stream. Specifically, described are techniques to automatically crop meaningful areas on document pages, based on users' actions and the underlying content, and embed them inside the chat window. Embedded snippets are easy to view because smart cropping gives viewers enough context. Snippets are playable inside the chat window, so users can view a snippet without having to open the corresponding document. Importantly, viewers can reply inline to a document snippet, also without having to open the original document page. Like traditional text messages, snippets are appended to the conversation chat window, allowing co-workers to see what was added. When users choose to focus on the document itself (as opposed to working inside the chat window only), the system automatically shows all relevant document snippets as well as chat messages, helping the person quickly see what conversations happened around this part of the document.
    Type: Application
    Filed: October 31, 2016
    Publication date: May 3, 2018
    Inventors: Laurent Denoue, Scott Carter, Matthew L. Cooper, Jennifer Marlow
  • Publication number: 20180032145
    Abstract: Systems and methods detect simple user gestures to enable selection of portions of segmented content, such as text, displayed on a display. Gestures may include finger (such as thumb) flicks or swipes as well as flicks of the handheld device itself. The finger used does not occlude the selected text, allowing users to easily see what the selection is at any time during the content selection process. In addition, the swipe or flick gestures can be performed by a non-dominant finger such as a thumb, allowing users to hold the device and make the selection using only one hand. After making the initial selection of a target portion of the content, to extend the selection, for example to the right, the user simply swipes or flicks the finger over the touchscreen to the right. The user could also flick the entire device in a move gesture with one hand.
    Type: Application
    Filed: October 9, 2017
    Publication date: February 1, 2018
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Laurent Denoue, Scott Carter
  • Patent number: 9883144
    Abstract: Example implementations provide a representation of a remote user in a video-mediated meeting when the user's webcam feed is not available or not used, such as when they are attending a meeting via a wearable device without a camera, or are on-the-go and prefer not to display their webcam feed for privacy or bandwidth-related reasons. In such cases, the system will infer when the user is active in the meeting and allow users to select to display an animated set of keyframes (from past meetings or representing computer-based activity) as a proxy for the user representation. Example implementations may facilitate a richer representation of a meeting participant (as opposed to a static picture or no information) and may lead to enhanced social dynamics within the meeting.
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: January 30, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Jennifer Marlow, Scott Carter, Laurent Denoue, Matthew L. Cooper
  • Publication number: 20180027032
    Abstract: The various embodiments described herein include methods and systems for providing electronic feedback. In one aspect, software includes instructions which, when executed by a computing system, cause the computing system to: (1) enable a user of the computing system to participate in an electronic conference with one or more remote participants, the electronic conference including an outgoing communications stream for the user; (2) receive feedback from a remote client device used by a particular participant of the one or more remote participants to participate in the electronic conference, the feedback corresponding to a quality of the user's outgoing communications stream at the remote client device; and (3) adjust one or more attributes of the electronic conference based on the received feedback.
    Type: Application
    Filed: September 29, 2017
    Publication date: January 25, 2018
    Inventors: Scott Carter, Laurent Denoue, Matthew Cooper
  • Patent number: 9875222
    Abstract: Embodiments of the present invention enable the extraction, classification, storage, and supplementation of presentation video. A media system receives a video signal carrying presentation video. The media system processes the video signal and generates images for slides of the presentation. The media system then extracts text from the images and uses the text and other characteristics to classify the images and store them in a database. Additionally, the system enables viewers of the presentation to provide feedback on the presentation, which can be used to supplement the presentation.
    Type: Grant
    Filed: June 10, 2009
    Date of Patent: January 23, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Laurent Denoue, Jonathan J. Trevor, David M. Hilbert, John E. Adcock
  • Publication number: 20170371496
    Abstract: Example implementations described herein are directed to systems and methods for representing meeting content. Such implementations may involve processing an online presentation for one or more media segments, extracting information from the one or more media segments indicative of one or more relationships between one or more participants of the online presentation and generating an interface for the online presentation, the interface indicative of the one or more relationships between the one or more participants of the online presentation. Through such example implementations, online presentations can be indexed and an interface can be generated for the online presentation that allows for content of the presentation to be searchable.
    Type: Application
    Filed: June 22, 2016
    Publication date: December 28, 2017
    Inventors: Laurent Denoue, Andreas Girgensohn, Scott Carter, Jennifer Marlow, Matthew L. Cooper
  • Publication number: 20170332044
    Abstract: Example implementations provide a representation of a remote user in a video-mediated meeting when the user's webcam feed is not available or not used, such as when they are attending a meeting via a wearable device without a camera, or are on-the-go and prefer not to display their webcam feed for privacy or bandwidth-related reasons. In such cases, the system will infer when the user is active in the meeting and allow users to select to display an animated set of keyframes (from past meetings or representing computer-based activity) as a proxy for the user representation. Example implementations may facilitate a richer representation of a meeting participant (as opposed to a static picture or no information) and may lead to enhanced social dynamics within the meeting.
    Type: Application
    Filed: May 12, 2016
    Publication date: November 16, 2017
    Inventors: Jennifer Marlow, Scott Carter, Laurent Denoue, Matthew L. Cooper
  • Publication number: 20170310920
    Abstract: A method of sharing documents is provided. The method includes capturing first image data associated with a document, detecting content of the document based on the captured first image data, capturing second image data associated with an object controlled by a user moved relative to the document, determining a relative position between the document and the object, combining a portion of the second image data with the first image data based on the determined relative position to generate a combined image signal that is displayed, and emphasizing a portion of the content in the displayed combined image signal, based on the relative position.
    Type: Application
    Filed: April 20, 2016
    Publication date: October 26, 2017
    Inventors: Patrick CHIU, Sven KRATZ, Shingu JUN, Laurent DENOUE
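Several of the abstracts above describe algorithmic ideas concretely enough to illustrate with short sketches. For patent 10482777, which detects slide-change and scroll boundaries in educational videos, one simple way to flag such breaks is to mark frames where a large fraction of pixels change at once. This is only a minimal sketch under assumed thresholds, not the patented algorithm; the function names and both thresholds are invented for illustration.

```python
def frame_diff_ratio(prev, curr):
    """Fraction of pixels whose intensity changed by more than a threshold."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > 30)
    return changed / len(prev)

def detect_boundaries(frames, ratio_threshold=0.4):
    """Indices where consecutive frames differ enough to suggest a slide change.

    `frames` is a list of flat grayscale pixel lists; a real system would
    operate on decoded video frames and likely smooth over noisy events.
    """
    return [i for i in range(1, len(frames))
            if frame_diff_ratio(frames[i - 1], frames[i]) > ratio_threshold]
```

The detected indices can then anchor the navigable overviews and bookmark associations the abstract describes.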
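For publication 20190319900, which gates a post behind a relevancy score against the channel's communication stream, a toy sketch is to score token overlap between the draft and recent messages and map the score to one of the control-interface options named in the abstract. The Jaccard measure, the thresholds, and all names here are assumptions standing in for whatever relevancy model the filing actually intends.

```python
def tokenize(text):
    """Crude tokenizer: lowercase words longer than two characters."""
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 2}

def relevancy_score(draft, recent_messages):
    """Jaccard overlap between a draft post and recent channel messages."""
    draft_tokens = tokenize(draft)
    if not draft_tokens or not recent_messages:
        return 0.0
    stream_tokens = set()
    for msg in recent_messages:
        stream_tokens |= tokenize(msg)
    overlap = draft_tokens & stream_tokens
    return len(overlap) / len(draft_tokens | stream_tokens)

def control_action(score, hi=0.3, lo=0.1):
    """Map a relevancy score to one of the abstract's control options."""
    if score >= hi:
        return "proceed-to-post"
    if score >= lo:
        return "show-previous-post-link"
    return "recommend-alternative-channel"
```

An on-topic draft sails through, while an off-topic one triggers the alternative-channel recommendation.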
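Publication 20190258311 argues that hand-skeleton keypoints are small enough to stream at high frame rates without a video codec. A back-of-the-envelope sketch below packs an assumed 21-joint hand model (a common layout, but not stated in the abstract) as 32-bit floats; the byte math in the comments shows why the payload is tiny compared to high-resolution document video.

```python
import struct

NUM_JOINTS = 21  # assumed hand model: wrist plus four joints per finger

def pack_skeleton(joints):
    """Pack (x, y, z) joint coordinates as little-endian 32-bit floats."""
    assert len(joints) == NUM_JOINTS
    return struct.pack(f"<{NUM_JOINTS * 3}f", *(c for j in joints for c in j))

def unpack_skeleton(payload):
    """Restore the list of (x, y, z) tuples from a packed payload."""
    flat = struct.unpack(f"<{NUM_JOINTS * 3}f", payload)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

joints = [(0.1 * i, 0.2 * i, 0.0) for i in range(NUM_JOINTS)]
payload = pack_skeleton(joints)
# 21 joints x 3 coords x 4 bytes = 252 bytes per frame; at 60 fps that is
# roughly 15 KB/s, versus megabytes per second for uncompressed video.
```

Because each frame is a fixed few hundred bytes, the feedback can ride alongside a static high-resolution document image over an ordinary data channel.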