Abstract: Methods and systems for determining a displacement of a peripheral device are provided. In one example, a peripheral device comprises: an image sensor, and a hardware processor configured to: control the image sensor to capture a first image of a surface when the peripheral device is at a first location on the surface, the first image comprising a feature of the first location of the surface; execute a trained machine learning model using data derived from the first image to estimate a displacement of the feature between the first image and a reference image captured at a second location of the surface; and determine a displacement of the peripheral device based on the estimated displacement of the feature.
Type:
Grant
Filed:
August 3, 2018
Date of Patent:
June 22, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Nicolas Chauvin, François Morier, Helmut Grabner
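The abstract above describes a trained machine learning model that estimates how a surface feature shifted between two images; the model itself is not disclosed. As a hedged, classical stand-in, displacement between two captures can be illustrated with a brute-force sum-of-squared-differences search over candidate shifts (one-dimensional here for brevity; the function name and cyclic-shift assumption are illustrative, not from the patent):

```python
def estimate_shift(ref, img, max_shift=8):
    """Estimate the cyclic shift s such that img[i] ~= ref[(i - s) % n].

    A classical brute-force stand-in for the patent's trained model:
    try every candidate shift and keep the one that minimizes the sum
    of squared differences between the shifted reference and the image.
    """
    n = len(ref)
    best_s, best_ssd = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        ssd = sum((ref[(i - s) % n] - img[i]) ** 2 for i in range(n))
        if ssd < best_ssd:
            best_s, best_ssd = s, ssd
    return best_s
```

The recovered shift of the feature would then be scaled by the sensor's counts-per-unit resolution to yield the peripheral's physical displacement.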
Abstract: Embodiments of the disclosure provided herein can be used to improve the control, selection, and transmission of data to a remote video conferencing environment by use of a plurality of wired or wirelessly connected electronic devices. In one example, the transmission of data from a local environment can be improved by switching the source of visual inputs (e.g., cameras or the display of an electronic device, such as a laptop) and/or audio inputs (e.g., microphones) to the one or more most appropriate visual and audio sources available within the local environment. The most appropriate visual and audio sources are those that provide the participants in the remote environment with the most relevant data, giving the remote users the best understanding of the current activities in the local environment.
Type:
Grant
Filed:
August 16, 2019
Date of Patent:
June 15, 2021
Assignee:
LOGITECH EUROPE S.A.
Inventors:
Andreas Franz William Atkins, Joseph Yao-Hua Chu, Henry Levak, Kevin Mclintock
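The source-switching logic in the abstract above selects the "most appropriate" visual and audio inputs. A minimal sketch of that selection, assuming a hypothetical `score` callable that rates each source's relevance (e.g., from speaker proximity or detected motion):

```python
def select_sources(video_sources, audio_sources, score):
    """Pick the visual and the audio input with the highest relevance.

    `score` is an assumed callable mapping a source to a numeric
    relevance value; the patent does not specify the scoring method.
    """
    return max(video_sources, key=score), max(audio_sources, key=score)
```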
Abstract: An AR/VR input device includes one or more processors, an inertial measurement unit (IMU), and a plurality of sensors configured to detect emissions received from a plurality of remote emitters. The processor(s) can be configured to: determine a time-of-flight (TOF) of the detected emissions; determine a first estimate of a position and orientation of the input device based on the TOF of a subset of the detected emissions and the particular locations of each of the plurality of sensors on the input device that are detecting the detected emissions; determine a second estimate of the position and orientation of the input device based on the measured acceleration and velocity from the IMU; and continuously update a calculated position and orientation of the input device within the AR/VR environment in real time based on a Bayesian estimation (e.g., an extended Kalman filter) that utilizes the first estimate and the second estimate.
Type:
Grant
Filed:
October 17, 2018
Date of Patent:
May 4, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Andreas Connellan, Arash Salarian, Fergal Corcoran, Jacques Chassot, Jerry Ahern, Laleh Makarem, Mario Gutierrez, Maxim Vlasov, Olivier Guédat, Padraig Murphy, Richard Perring
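The core of the Bayesian estimation named in the abstract above is combining the TOF-derived estimate with the IMU-derived estimate according to their uncertainties. Reduced to one scalar dimension (an illustrative simplification of the full extended Kalman filter), the measurement update is a precision-weighted average:

```python
def fuse_estimates(tof_est, tof_var, imu_est, imu_var):
    """Precision-weighted fusion of two scalar position estimates.

    Each estimate is weighted by the inverse of its variance; this is
    the one-dimensional measurement-update idea underlying a Kalman
    filter, not the patent's full 6-DOF implementation.
    """
    gain = tof_var / (tof_var + imu_var)
    fused = tof_est + gain * (imu_est - tof_est)
    fused_var = tof_var * imu_var / (tof_var + imu_var)
    return fused, fused_var
```

Note that the fused variance is always smaller than either input variance, which is why running the update continuously tightens the tracked pose in real time.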
Abstract: A method is disclosed for synchronizing operating modes across a plurality of peripheral devices. The method includes each of the plurality of peripheral devices transmitting requests to change a rate of power consumption in response to a set of criteria for each peripheral device. A host computing device determines if one or more peripheral devices should change power modes and transmits corresponding commands to synchronize the plurality of peripheral devices into a single operating mode. Each peripheral device may have a unique power mode for a given operating mode and each peripheral device includes one or more common features, such as a lighting display, that is synchronized across all peripheral devices for a given operating mode.
Abstract: A system comprising a host device configured to request a haptic effect from a peripheral device, the peripheral device configured to perform operations including: receiving a request from the host device to generate a haptic effect at a specified intensity; determining an operating range of a motor configured to generate the haptic effect on the peripheral device, where the operating range defines a maximum force that the motor can generate in a linear region of operation, and the operating range changes based on a temperature of the motor; scaling the specified intensity of the haptic effect based on the determined operating range of the motor; and controlling the operation of the motor to generate the haptic effect at the scaled specified intensity, where the scaling is performed by the peripheral device.
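The intensity scaling in the haptics abstract above can be illustrated with a linear thermal-derating model. The derating constants and the linear form are assumptions; the abstract states only that the motor's linear operating range shrinks as its temperature rises:

```python
def scale_haptic_intensity(requested, motor_temp_c,
                           nominal_max_force=1.0,
                           derate_per_deg=0.005, ref_temp_c=25.0):
    """Scale a requested haptic intensity (0..1) into the motor's
    current linear operating range.

    The operating range is derated linearly above a reference
    temperature (an illustrative model, not the patent's), and the
    requested intensity is clamped to [0, 1] before scaling.
    """
    derate = max(0.0, 1.0 - derate_per_deg * max(0.0, motor_temp_c - ref_temp_c))
    max_force = nominal_max_force * derate
    return min(max(requested, 0.0), 1.0) * max_force
```

Because the scaling runs on the peripheral, the host can keep requesting a fixed intensity while the device silently adapts to thermal conditions.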
Abstract: The present disclosure generally provides for advanced single camera video conferencing systems, and methods related thereto. The advanced single camera video conferencing system features a hybrid optical/digital camera, herein a camera device, having a controller that is configured to execute one or more of the methods set forth herein. In one embodiment, a method includes optically framing a first portion of a video conferencing environment to provide an actual field-of-view, digitally framing a second portion of the video conferencing environment to provide an apparent field-of-view that is encompassed within the actual field-of-view, generating a video stream of the apparent field-of-view, surveying the actual field-of-view to generate survey data, and detecting changes in the survey data over time. The method may be performed using a single camera device having a single image sensor.
Type:
Grant
Filed:
March 30, 2020
Date of Patent:
April 6, 2021
Assignee:
LOGITECH EUROPE S.A.
Inventors:
Oleg Ostap, Henry Levak, John Scott Skeehan
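The framing constraint in the abstract above, that the digitally framed apparent field-of-view must be encompassed within the optically framed actual field-of-view, reduces to a rectangle-containment check. The `(x, y, width, height)` representation is an assumed convention; the abstract does not fix one:

```python
def within(actual_fov, apparent_fov):
    """Check that a digitally framed apparent field-of-view rectangle
    is fully encompassed within the actual (optical) field-of-view.

    Rectangles are (x, y, width, height) in sensor pixels, an assumed
    coordinate convention for illustration.
    """
    ax, ay, aw, ah = actual_fov
    px, py, pw, ph = apparent_fov
    return ax <= px and ay <= py and px + pw <= ax + aw and py + ph <= ay + ah
```

A controller surveying the actual field-of-view could re-run this check whenever the survey data changes, re-framing digitally before resorting to moving the optics.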
Abstract: The present disclosure describes a system and methods for integrated multistreaming of media with graphical overlays. At least one method includes: a multistream service and graphical overlays hosted by a server infrastructure; a user configuring both the multistream service and the graphical overlays on the server infrastructure; a user playing video games on a computer, using broadcasting software to authenticate with the server infrastructure; the broadcasting software capturing video of the user's computer session; the software retrieving the user's custom graphical overlay from the server infrastructure and encoding the video signal and graphical overlay; the software using the same aforementioned authentication to upload the encoded video to a multistream service; and the multistream service streaming the user's encoded video simultaneously to multiple streaming services.
Type:
Grant
Filed:
February 20, 2018
Date of Patent:
March 30, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Murtaza Hussain, Salman Budhwani, Ali Moiz
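The multistreaming flow above, capture, composite the overlay, encode once, then fan out to many services, can be sketched per frame as below. All four callables are hypothetical placeholders for the broadcasting software, overlay compositor, encoder, and service endpoints:

```python
def stream_one_frame(capture_frame, apply_overlay, encode, endpoints):
    """One-frame sketch of the integrated multistream pipeline:
    capture -> composite the custom graphical overlay -> encode once
    -> deliver the same encoded packet to every streaming service.
    """
    frame = capture_frame()
    composited = apply_overlay(frame)
    packet = encode(composited)                 # encode a single time...
    return [send(packet) for send in endpoints] # ...send to many services
```

Encoding once before the fan-out is the efficiency point of multistreaming: each additional destination costs only upload bandwidth, not another encode.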
Abstract: The present disclosure generally provides for advanced single camera video conferencing systems, and methods related thereto. The advanced single camera video conferencing system features a hybrid optical/digital camera, herein a camera device, having a controller that is configured to execute one or more of the methods set forth herein. In one embodiment, a method includes optically framing a first portion of a video conferencing environment to provide an actual field-of-view, digitally framing a second portion of the video conferencing environment to provide an apparent field-of-view that is encompassed within the actual field-of-view, generating a video stream of the apparent field-of-view, surveying the actual field-of-view to generate survey data, and detecting changes in the survey data over time. The method may be performed using a single camera device having a single image sensor.
Type:
Grant
Filed:
March 30, 2020
Date of Patent:
March 30, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Oleg Ostap, Henry Levak, John Scott Skeehan
Abstract: Described herein are methods and systems for connecting, via a cable, a USB host and a USB device over distances equal to or greater than 50 meters. The methods and systems include having the host and the device each send a pilot signal over the cable and each detect that the received pilot signal is valid. After confirming the validity of the pilot signals, the host begins standard USB protocols with the device. The systems and methods also allow for the insertion of a Power over Ethernet device into the cable to provide power to a remote USB device. In some embodiments, only the D+ and D− lines are used, allowing multiple independent USB connections over the cable.
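The pilot handshake in the abstract above amounts to a small per-endpoint state machine: send a pilot, wait until the received pilot validates, then hand off to standard USB protocol traffic. The state names and transitions below are assumptions for illustration:

```python
def next_state(state, pilot_valid):
    """Minimal per-endpoint state machine for the pilot handshake
    (states and transitions are assumed, not from the patent):
    SEND_PILOT -> WAIT_PILOT -> USB_ACTIVE once the received pilot
    signal is detected as valid.
    """
    if state == "SEND_PILOT":
        return "WAIT_PILOT"
    if state == "WAIT_PILOT":
        return "USB_ACTIVE" if pilot_valid else "WAIT_PILOT"
    return state  # USB_ACTIVE is terminal in this sketch
```

Because both ends run the same machine, standard USB enumeration begins only after each side has independently confirmed the long cable is carrying a valid signal.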
Abstract: In some embodiments, a video doorbell system includes a video doorbell device on an exterior surface of a structure and a chime kit within an interior of the structure. A transformer can be coupled in series via electrical conductors with the video doorbell device and the chime kit. The chime kit can include an energy storage device that is charged via the electrical conductors. When a user activates a button on the video doorbell device, power control circuitry within the video doorbell device can transmit a signal on the electrical conductors. Button detection circuitry within the chime kit can detect the signal and respond by transferring power from the energy storage device to a chime. While the chime is activated, the transformer can continuously supply the video doorbell device with power.
Type:
Grant
Filed:
June 10, 2019
Date of Patent:
March 23, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Aron Rosenberg, John M. Long, Nick Stoughton
Abstract: The present disclosure generally provides for advanced single camera video conferencing systems, and methods related thereto. The advanced single camera video conferencing system features a hybrid optical/digital camera, herein a camera device, having a controller that is configured to execute one or more of the methods set forth herein. In one embodiment, a method includes optically framing a first portion of a video conferencing environment to provide an actual field-of-view, digitally framing a second portion of the video conferencing environment to provide an apparent field-of-view that is encompassed within the actual field-of-view, generating a video stream of the apparent field-of-view, surveying the actual field-of-view to generate survey data, and detecting changes in the survey data over time. The method may be performed using a single camera device having a single image sensor.
Type:
Grant
Filed:
March 30, 2020
Date of Patent:
March 16, 2021
Assignee:
LOGITECH EUROPE S.A.
Inventors:
Oleg Ostap, Henry Levak, John Scott Skeehan
Abstract: Methods and systems for providing a mixed reality (MR) interaction are provided. In one example, a method comprises: capturing, at a first time and using a camera of a head-mounted display (HMD) of a user, a first image of a physical interaction of the user with a physical object; measuring a movement of the HMD with respect to the physical object between the first time and a second time; processing the first image based on the measurement of the movement of the HMD to generate a second image; generating, based on the second image, a composite image of a virtual interaction involving the user; and displaying, via the HMD and based on the composite image, the virtual interaction in place of the physical interaction to the user at the second time.
Type:
Grant
Filed:
February 4, 2019
Date of Patent:
March 9, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Mario Gutierrez, Thomas Rouvinez, Sidney Bovet, Helmut Grabner, Mathieu Meisser
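The image processing step in the mixed-reality abstract above, generating a second image from the first based on the HMD's measured movement, can be illustrated in its simplest form as a motion-compensated translation of a feature point. This pure 2-D shift is a deliberate simplification; the patent's processing would account for the full change in camera pose:

```python
def compensate_translation(pixel_xy, hmd_motion_xy):
    """Re-project a feature point from the first image into the frame
    at the second capture time by subtracting the HMD's measured
    in-plane motion (a pure-translation sketch, not full pose warp).
    """
    x, y = pixel_xy
    dx, dy = hmd_motion_xy
    return (x - dx, y - dy)
```

Compensating for HMD motion before compositing is what keeps the rendered virtual interaction registered with the user's physical interaction despite head movement between the two times.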
Abstract: Some embodiments include a method comprising receiving gaze data from an image sensor that indicates where a user is looking, determining a location that the user is directing their gaze on a display based on the gaze data, receiving a confirmation input from an input device, and generating and effectuating an input command based on the location on the display that the user is directing their gaze when the confirmation input is received. When a bystander is in a field-of-view of the image sensor, the method may further include limiting the input command to be generated and effectuated based solely on the location on the display that the user is directing their gaze and the confirmation input and actively excluding detected bystander gaze data from the generation of the input command.
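The gaze-plus-confirmation flow above, including the active exclusion of bystander gaze, can be sketched as follows. The record shape (`{"id": ..., "xy": (x, y)}`) and the `"select"` action name are illustrative assumptions:

```python
def gaze_command(samples, confirm_pressed, bystander_ids):
    """Generate an input command from gaze data plus a confirmation
    input, mirroring the abstract: only the primary user's gaze is
    used, and samples attributed to detected bystanders are actively
    excluded from command generation.
    """
    user = [s for s in samples if s["id"] not in bystander_ids]
    if not confirm_pressed or not user:
        return None  # no command without both a user gaze fix and a confirm
    x, y = user[-1]["xy"]
    return {"action": "select", "x": x, "y": y}
```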
Abstract: In certain embodiments, a sensing and tracking system detects objects, such as user input devices or peripherals, and user interactions with them. A representation of the objects and user interactions is then injected into the virtual reality environment. The representation can be an actual reality, augmented reality, or virtual representation, or any combination thereof. For example, an actual keyboard can be injected, but with the keys pressed being enlarged and lighted.
Type:
Grant
Filed:
February 6, 2018
Date of Patent:
February 23, 2021
Assignee:
Logitech Europe S.A.
Inventors:
Stephen Harvey, Denis O'Keeffe, Andreas Connellan, Damien O'Sullivan, Aidan Kehoe, Noirin Curran, Thomas Rouvinez, Mario Gutierrez, Olivier Riviere, Remy Zimmermann, Mathieu Meisser, Dennin Onorio, Ciaran Trotman, Pierce O'Bradaigh, Marcel Twohig, Padraig Murphy, Jerry Ahern