REMOTE INTERACTION DEVICE WITH TRACKING OF REMOTE MOVEMENT INPUT
Systems, devices, and methods are provided for remote interaction with a subject in an environment. The device has audio-visual recording and transmitting functionality to provide an operator at a remote location with an audio-visual feed of the environment near the device. The device also has a light emission component which the operator controls and which projects light onto a surface in the environment in the vicinity of the device. The systems, devices, and methods provide operators with the ability to control the positions of the light emission by tracking movement at a remote device at the remote location.
The present application is a continuation of U.S. patent application Ser. No. 15/352,270, filed Nov. 15, 2016, which claims the benefit of priority to U.S. Provisional Application No. 62/257,436, filed Nov. 19, 2015, both of which are hereby incorporated by reference in their entireties for all purposes.
TECHNICAL FIELD
The subject matter described herein relates generally to a remote interaction device, and more particularly to a device at a first location which allows a user at a second, remote location to control a laser pointer at the first location with movement input.
BACKGROUND OF THE INVENTION
Presently, pet owners generally interact with their pets only when they are in the same general location, such as a home. Many pet owners are required to leave their pets alone and unsupervised for numerous hours every day when the pet owner goes to work, runs errands, or leaves town on trips or vacations. Some pets become bored, lethargic, or sedentary when left alone. This can lead to numerous health problems including obesity and depression. Alternatively, some pets become ornery and mischievous when left alone. This can lead to property damage, barking which irritates neighbors, and in extreme cases injury or death of the pet may occur.
One attempted solution to a lack of interaction and stimulation for pets has been to hire pet sitters who may take care of pets while the pet owner is away. Pet sitters often charge an hourly fee and may do little more than feed the pet before leaving. In some cases the pet owner may never know that the pet sitter did not interact with the pet for more than a few minutes. Even in the case of a pet sitter who plays with the pet, the pet owner does not receive the direct benefit of interacting with the pet personally.
Other attempted solutions have included leaving televisions or radios on for the pet while the pet owner is away, attempting to use automatically controlled toys, electroshock punishment for misbehaving, and passive surveillance systems which provide one-directional monitoring of the pet. Each of these passive and active systems has its own drawbacks ranging from being inefficient to inhumane.
Accordingly, to overcome the above and other problems, a remote interaction device for interacting with pets was proposed in U.S. patent application Ser. No. 14/186,793, Pub. No. US 2014/0233906 A1, to Neskin et al., the entire content and disclosure of which are herein incorporated by reference. The remote interaction device includes, among other components, a photonic emission device and photonic emission aiming device. The photonic emission device is generally a laser which can be controlled by a user at a remote location by issuing commands on a connected device. Accordingly, it would be desirable for the user to be able to control the laser using swipe and tap input at the connected device.
SUMMARY OF THE INVENTION
The present invention is directed to a remote interaction device, and more particularly to a device at a first location which allows a user at a second, remote location to control a laser pointer at the first location with movement input. The movement input may include taps and swipes on a touch screen, movement of a computer mouse, key presses on a keyboard, and so on.
In accordance with an example embodiment of the present invention, a remote interaction device is provided. The device generally has audio-visual recording and transmitting functionality to provide an operator at a remote location with an audio-visual feed of the environment near the device. The device also has a light emission component which the operator controls and which projects light onto a surface in the environment in the vicinity of the device. The systems, devices, and methods provide operators with the ability to control the positions of the light emission by tracking movement at a remote device at the remote location.
In some embodiments, a method of facilitating remote interaction is provided. The method includes recording visual data at a first location and transmitting the recorded visual data to a second location, receiving the visual data, that was recorded at the first location, at the second location and displaying the visual data on a local device at the second location, and tracking movement input positions on a display of the local device to control the aim of a photonic emission device at the first location.
In some embodiments, a system for facilitating remote interaction is provided. The system includes an interaction device located at a first location, a local device located at a second location, the local device connected to the interaction device over a network, and wherein the local device tracks movement input positions on a display of the local device to control the aim of a photonic emission device at the interaction device.
Other systems, devices, methods, features and advantages of the subject matter described herein will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, devices, methods, features and advantages be included within this description, be within the scope of the subject matter described herein, and be protected by the accompanying claims. In no way should the features of the example embodiments be construed as limiting the appended claims, absent express recitation of those features in the claims.
The details of the subject matter set forth herein, both as to its structure and operation, may be apparent by study of the accompanying figures, in which like reference numerals refer to like parts. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the subject matter. Moreover, all illustrations are intended to convey concepts, where relative sizes, shapes and other detailed attributes may be illustrated schematically rather than literally or precisely.
Before the present subject matter is described in detail, it is to be understood that this disclosure is not limited to the particular embodiments described, as such may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting, since the scope of the present disclosure will be limited only by the appended claims.
As used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
In the following description and in the figures, like elements are identified with like reference numerals. The use of “e.g.,” “etc.,” and “or” indicates non-exclusive alternatives without limitation, unless otherwise noted. The use of “including” or “includes” means “including, but not limited to,” or “includes, but not limited to,” unless otherwise noted.
As used herein, the term “and/or” placed between a first entity and a second entity means one of (1) the first entity, (2) the second entity, and (3) the first entity and the second entity. Multiple entities listed with “and/or” should be construed in the same manner, i.e., “one or more” of the entities so conjoined. Other entities may optionally be present other than the entities specifically identified by the “and/or” clause, whether related or unrelated to those entities specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including entities other than B); in another embodiment, to B only (optionally including entities other than A); in yet another embodiment, to both A and B (optionally including other entities). These entities may refer to elements, actions, structures, steps, operations, values, and the like.
The publications discussed herein are provided solely for their disclosure prior to the filing date of the present application. Nothing herein is to be construed as an admission that the present disclosure is not entitled to antedate such publication by virtue of prior disclosure. Further, the dates of publication provided may be different from the actual publication dates which may need to be independently confirmed.
It should be noted that all features, elements, components, functions, and steps described with respect to any embodiment provided herein are intended to be freely combinable and substitutable with those from any other embodiment. If a certain feature, element, component, function, or step is described with respect to only one embodiment, then it should be understood that that feature, element, component, function, or step can be used with every other embodiment described herein unless explicitly stated otherwise. This paragraph therefore serves as antecedent basis and written support for the introduction of claims, at any time, that combine features, elements, components, functions, and steps from different embodiments, or that substitute features, elements, components, functions, and steps from one embodiment with those of another, even if the following description does not explicitly state, in a particular instance, that such combinations or substitutions are possible. It is explicitly acknowledged that express recitation of every possible combination and substitution is overly burdensome, especially given that the permissibility of each and every such combination and substitution will be readily recognized by those of ordinary skill in the art.
Turning now to the drawings, in the exemplary embodiment shown in the figures, a remote interaction device 100 is located at a first location, such as a home, together with a pet 111. A connected device 101, operated by an operator 114 at a second, remote location, communicates with the remote interaction device 100 over a network 116, allowing the operator 114 to interact with the pet 111 from the remote location.
The remote interaction device 100 in the example embodiment may be made up of various modules and components that facilitate the operator 114's interaction with the pet 111. In some embodiments, the remote interaction device 100 may connect to the network 116 using a wireless connection module 102. The wireless connection module 102 and other modules and components of the remote interaction device 100 may receive power from a power module 103. In some embodiments, the power module 103 may receive power via a universal serial bus (USB) interface, although in other embodiments other interfaces may be used. The CPU module 104 is a central processing unit (CPU) which generally controls all systems and processes in the remote interaction device 100.
A microphone 108 and a camera module 107 provide for audio and visual data capture at the location of the remote interaction device 100 and may transmit the captured data to allow the operator 114, using the connected device 101, to view and hear what is going on at the location of the remote interaction device 100. In some embodiments, the remote interaction device 100 may record and store the captured data in a data storage (not shown). A laser positioning module 105 controls the aiming of the laser beam 110. The casing 112 provides a protective housing for all components and modules of the remote interaction device 100. In some exemplary operations, the laser beam 110 and the speaker 106 may allow the operator 114 to interact with the location of the remote interaction device 100, thus providing visual stimulation and audio stimulation, respectively, for the pet 111.
It should be noted that each of the components shown in the figures and described herein is exemplary, and in other embodiments components may be combined, separated, omitted, or supplemented without departing from the scope of this disclosure.
The camera module 107 in some embodiments may be a video recording device with a wide angle lens which allows for a video recording of the environment in front of camera module 107. In some embodiments, the camera module 107 may be a CMOS sensor and a camera lens as well as a printed circuit board (PCB). In other embodiments, the camera module 107 may use other digital video recording devices or other appropriate video recording devices. A wide angle lens may be used in some embodiments to allow for video recording of the environment without the need to move the camera to follow particular subjects or specific locations within the field of view of the camera module 107. In other embodiments, other appropriate lenses may be used.
In some embodiments, the camera module 107 may capture high definition (HD) video although in other embodiments, lower definition video may be captured.
The camera module 107 in some embodiments may have focusing capabilities which allow for focusing based on the distance of a subject from the camera module 107. In some embodiments, the focusing capability may be performed automatically by internal processing of a camera processor which is operable to process visual data signals from the camera module 107. In some embodiments, focusing may be performed manually by a user at a remote location by engaging an appropriate command on the connected device 101.
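By way of a non-limiting illustration only, the following Python sketch shows one way a manual focus command issued at the connected device 101 might be handled; the command format, the CameraModule class, and the normalized focus range are hypothetical and are not part of the disclosure above.

```python
# Hypothetical sketch only: handling a focus command received from the
# connected device 101. The message format, class names, and the normalized
# focus range are illustrative assumptions, not part of the disclosure.

class CameraModule:
    """Stand-in for the camera module 107 with a simple focus control."""

    def __init__(self) -> None:
        self.focus_position = 0.5  # normalized: 0.0 (near) to 1.0 (far)

    def set_focus(self, position: float) -> None:
        # Clamp to the valid range before driving the (hypothetical) focus motor.
        self.focus_position = max(0.0, min(1.0, position))


def handle_camera_command(camera: CameraModule, command: dict) -> None:
    """Dispatch a focus-related command dictionary received over the network."""
    if command.get("type") == "set_focus":
        camera.set_focus(float(command["value"]))
    elif command.get("type") == "auto_focus":
        # In an autofocus embodiment the camera processor would pick the
        # position from the scene; here we simply return to a default.
        camera.set_focus(0.5)


camera = CameraModule()
handle_camera_command(camera, {"type": "set_focus", "value": 0.8})
```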
In some embodiments, additional components may be provided in the camera module 107 such as camera aiming devices, alternate and/or changeable filters, and others which allow a user to view different areas of the room by positioning the direction of the camera and viewing through different filters. In some embodiments, automatic motion-capture components may be used in order to direct the camera to capture movement in the environment such as movement of the pet 111.
The laser positioning module 105 in some embodiments may be made of a laser pointer module 136 which may be a laser pointer that emits light through optical amplification. Light emitted by the laser pointer module 136 may be directed to a specific location in the environment such as on a surface. Typical surfaces may be floors, furniture, walls, or other suitable surfaces. Many animals become interested in light such as lasers projected on surfaces. These animals will follow the light and try to catch it or capture it, providing entertainment for the animal. In some embodiments, the laser pointer module 136 uses a laser which is safe for use around humans and animals.
The pan and tilt platform 138 in some embodiments may be a platform to which the laser pointer module 136 is mounted. The pan and tilt platform 138 provides the mechanical support that controls the physical location at which the laser pointer module 136 points the laser beam 110. In some embodiments, electromagnets may be used to control the panning and tilting of the pan and tilt platform 138. The pan and tilt platform 138 will be described further herein and is also referred to as laser positioning device 600.
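As a non-limiting illustration of how coordinates received from the connected device 101 might be translated into pan and tilt commands for the pan and tilt platform 138, consider the following sketch; the angle ranges and the simple linear mapping are illustrative assumptions rather than values from the disclosure.

```python
# Hypothetical sketch only: mapping relative video-stream coordinates
# (0..1 in each axis) to pan and tilt angles for the pan and tilt
# platform 138. The angle ranges and the linear mapping are assumptions.

PAN_RANGE_DEG = (-60.0, 60.0)   # assumed horizontal sweep of the platform
TILT_RANGE_DEG = (-10.0, 45.0)  # assumed vertical sweep of the platform


def relative_to_pan_tilt(x_rel: float, y_rel: float) -> tuple[float, float]:
    """Linearly interpolate normalized coordinates into platform angles."""
    x_rel = max(0.0, min(1.0, x_rel))
    y_rel = max(0.0, min(1.0, y_rel))
    pan = PAN_RANGE_DEG[0] + x_rel * (PAN_RANGE_DEG[1] - PAN_RANGE_DEG[0])
    tilt = TILT_RANGE_DEG[0] + y_rel * (TILT_RANGE_DEG[1] - TILT_RANGE_DEG[0])
    return pan, tilt


# Example: a tap near the center-right of the displayed video stream.
print(relative_to_pan_tilt(0.75, 0.5))  # (30.0, 17.5)
```

In practice such a mapping would also depend on where the device is placed in the room, which is one reason the coordinates may be adjusted by calibration data before being used, as discussed below.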
The microphone 108 in some embodiments may be a microphone which is operable to receive audio input signals from the environment such as barking from a dog, meowing from a cat, or other input signals. In some embodiments, the microphone 108 may be coupled to a processor which is operable to recognize when a sound is made in the environment. In some embodiments, this may trigger processes within the remote interaction device 100 such as notifying the operator 114 via or at the connected device 101 that noise is being made near the remote interaction device 100, beginning visual recording using the camera module 107, or other processes.
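A minimal sketch of such sound-triggered processing, assuming 16-bit PCM audio frames and an arbitrary RMS threshold, might look like the following; the threshold value and the notify and start_recording callbacks are hypothetical.

```python
# Hypothetical sketch only: detect sound near the device by computing the RMS
# level of each 16-bit PCM audio frame from the microphone 108 and, when a
# threshold is crossed, notify the operator and start recording. The threshold
# and the notify/start_recording callbacks are illustrative assumptions.
import math
import struct

SOUND_THRESHOLD = 1500.0  # assumed RMS threshold for signed 16-bit samples


def rms_level(frame: bytes) -> float:
    """Root-mean-square level of a frame of signed 16-bit little-endian PCM."""
    samples = struct.unpack("<%dh" % (len(frame) // 2), frame)
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def on_audio_frame(frame: bytes, notify, start_recording) -> None:
    """Trigger the notification and recording processes when sound is detected."""
    if rms_level(frame) > SOUND_THRESHOLD:
        notify("Noise detected near the remote interaction device")
        start_recording()


# Example with a short synthetic frame of four loud samples.
frame = struct.pack("<4h", 2000, -2000, 1800, -1800)
on_audio_frame(frame, print, lambda: print("recording started"))
```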
The RGB LED notifier 118 in some embodiments may be a light emitting diode (LED) which indicates the status of the remote interaction device 100. In some embodiments, status indications may include power, standby, transmit/receive, charging, or other suitable status indications. The RGB LED notifier 118 may indicate different device statuses in some embodiments by flashing, constant color display, alternating color display, or other suitable display methods. The RGB LED notifier 118 in some embodiments may be a single RGB LED. In other embodiments, the RGB LED notifier 118 may include multiple RGB LEDs in various configurations.
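One possible, purely illustrative mapping of device status to LED color and blink behavior is sketched below; the particular statuses, colors, and intervals are assumptions, not part of the disclosure.

```python
# Hypothetical sketch only: one possible mapping from device status to an RGB
# color and blink interval for the RGB LED notifier 118. The statuses, colors,
# and intervals are illustrative assumptions.

STATUS_PATTERNS = {
    "power_on": {"rgb": (0, 255, 0), "blink_interval_s": None},         # solid green
    "standby": {"rgb": (0, 0, 255), "blink_interval_s": None},          # solid blue
    "transmit_receive": {"rgb": (0, 0, 255), "blink_interval_s": 0.5},  # blinking blue
    "charging": {"rgb": (255, 165, 0), "blink_interval_s": 1.0},        # blinking orange
}


def led_pattern(status: str) -> dict:
    """Return the LED pattern for a status, defaulting to solid red when unknown."""
    return STATUS_PATTERNS.get(status, {"rgb": (255, 0, 0), "blink_interval_s": None})
```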
The speaker 106 in some embodiments may be a speaker device which outputs audio signals into the environment near the remote interaction device 100. The speaker 106 in some embodiments may be operable to output audio signals such as a human voice, music, or other sounds received from the operator 114 via the connected device 101 over a wireless network connected to the Internet 116 and processed by an audio processor so as to communicate with the pet 111 near the remote interaction device 100. In some embodiments, multiple speakers may be used.
Turning to the operation of the remote interaction device 100, in some exemplary operations, when the operator 114 first powers on the remote interaction device 100, the operator 114 must configure the device to communicate with a wireless network connected to the Internet 116. This may be called first-time mode. In the first-time mode, the operator 114 may provide data about the network name and password, if required. After completion of the first-time mode process, the remote interaction device 100 may operate in normal operation mode. The operator 114 need not be the owner of the remote interaction device 100; the operator 114 may be any person who wishes to interact with the remote interaction device 100. User interfaces on the connected device 101 provide a way for the operator 114 to interact with and control the remote interaction device 100.
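A simplified, non-limiting sketch of choosing between first-time mode and normal operation mode is shown below; the storage location, the JSON format, and the receive_credentials callback are hypothetical details assumed for illustration.

```python
# Hypothetical sketch only: choosing between first-time mode and normal
# operation mode based on whether network credentials have been stored. The
# storage path, JSON format, and receive_credentials callback are assumptions.
import json
import os

CONFIG_PATH = "/var/lib/remote_interaction/network.json"  # assumed location


def load_or_configure(receive_credentials) -> dict:
    """Return stored network credentials, running first-time mode when none exist."""
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            return json.load(f)  # normal operation mode
    # First-time mode: obtain e.g. {"ssid": ..., "password": ...} from the operator.
    credentials = receive_credentials()
    os.makedirs(os.path.dirname(CONFIG_PATH), exist_ok=True)
    with open(CONFIG_PATH, "w") as f:
        json.dump(credentials, f)
    return credentials
```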
Turning to the tracking of movement input, in some embodiments the connected device 101 displays the video stream received from the remote interaction device 100 and tracks the positions of movement input, such as taps and swipes of the operator 114's finger, on its touch screen display. As mentioned above, the tracked movement input positions may be automatically transformed at the connected device 101 into coordinates of the displayed video stream, for example relative coordinates within the video stream. In some embodiments, the relative coordinates may further be adjusted at the connected device 101 by selected calibration data, and the adjusted relative coordinates may then be transmitted to the remote interaction device 100, which uses them to control the aim of the laser positioning module 105 and thus the position of the laser beam 110 in the environment.
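A non-limiting sketch of this processing, as it might run on the connected device 101, is shown below; the simple offset-and-scale form of the calibration data and the send transport are illustrative assumptions rather than details of the disclosure.

```python
# Hypothetical sketch only: processing on the connected device 101 that tracks
# a touch position on the displayed video stream, transforms it to relative
# coordinates, adjusts it by selected calibration data, and transmits the
# result. The offset-and-scale calibration form and the send() transport are
# illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Calibration:
    """Assumed form of the selected calibration data: per-axis offset and scale."""
    x_offset: float = 0.0
    y_offset: float = 0.0
    x_scale: float = 1.0
    y_scale: float = 1.0


def touch_to_relative(x_px: float, y_px: float, view_w: int, view_h: int) -> tuple[float, float]:
    """Transform a touch position in pixels into relative video-stream coordinates in [0, 1]."""
    return x_px / view_w, y_px / view_h


def apply_calibration(x_rel: float, y_rel: float, cal: Calibration) -> tuple[float, float]:
    """Adjust relative coordinates by the selected calibration data."""
    return x_rel * cal.x_scale + cal.x_offset, y_rel * cal.y_scale + cal.y_offset


def on_touch(x_px, y_px, view_w, view_h, cal: Calibration, send) -> None:
    """Handle one tracked movement input position and transmit the adjusted coordinates."""
    x_rel, y_rel = touch_to_relative(x_px, y_px, view_w, view_h)
    x_adj, y_adj = apply_calibration(x_rel, y_rel, cal)
    send({"type": "aim", "x": x_adj, "y": y_adj})  # transmitted to the first location


# Example: a tap at pixel (540, 360) in a 1080x720 view with default calibration.
on_touch(540, 360, 1080, 720, Calibration(), print)  # {'type': 'aim', 'x': 0.5, 'y': 0.5}
```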
It should be noted that although the above example shows the connected device 101 with a touch screen for tracking the positions of a finger, other tracking sources may be used when the connected device 101 does not have a touch screen as discussed herein.
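For example, the same relative-coordinate pipeline could be fed from a mouse rather than a touch screen, as in the following illustrative sketch; the event objects and field names are assumptions made for the example.

```python
# Hypothetical sketch only: the same relative-coordinate pipeline fed from a
# mouse when the connected device has no touch screen. The event objects and
# field names are illustrative assumptions.

def position_from_touch(event, view_w: int, view_h: int) -> tuple[float, float]:
    """Touch events are assumed to report positions within the video view."""
    return event.x / view_w, event.y / view_h


def position_from_mouse(event, view_x: int, view_y: int,
                        view_w: int, view_h: int) -> tuple[float, float]:
    """Mouse events are assumed to report window coordinates; subtract the view origin."""
    return (event.x - view_x) / view_w, (event.y - view_y) / view_h
```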
While embodiments of the present invention have been shown and described, various modifications may be made without departing from the spirit and scope of the present invention, and all such modifications and equivalents are intended to be covered.
In many instances entities are described herein as being coupled to other entities. It should be understood that the terms “coupled” and “connected” (or any of their forms) are used interchangeably herein and, in both cases, are generic to the direct coupling of two entities (without any non-negligible (e.g., parasitic) intervening entities) and the indirect coupling of two entities (with one or more non-negligible intervening entities). Where entities are shown as being directly coupled together, or described as coupled together without description of any intervening entity, it should be understood that those entities can be indirectly coupled together as well unless the context clearly dictates otherwise.
While the embodiments are susceptible to various modifications and alternative forms, specific examples thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that these embodiments are not to be limited to the particular form disclosed, but to the contrary, these embodiments are to cover all modifications, equivalents, and alternatives falling within the spirit of the disclosure. Furthermore, any features, functions, steps, or elements of the embodiments may be recited in or added to the claims, as well as negative limitations that define the inventive scope of the claims by features, functions, steps, or elements that are not within that scope.
Claims
1. A method of facilitating remote interaction comprising:
- recording visual data at a first location and transmitting the recorded visual data to a second location;
- receiving the visual data, that was recorded at the first location, at the second location and displaying the visual data on a local device at the second location; and
- tracking movement input positions on a display of the local device to control the aim of a photonic emission device at the first location.
2. The method of claim 1, wherein the movement input positions are automatically transformed, at the local device at the second location, into video stream coordinates.
3. The method of claim 1, wherein the movement input positions are transformed, at the local device at the second location, to relative coordinates.
4. The method of claim 3, wherein the relative coordinates are further adjusted, at the local device at the second location, by selected calibration data.
5. The method of claim 4, wherein the adjusted relative coordinates are transmitted to the first location.
6. The method of claim 1, wherein the photonic emission device is a laser positioning device.
7. The method of claim 1, wherein the movement input positions include touch and swipe movement on a touch screen display.
8. A system for facilitating remote interaction comprising:
- an interaction device located at a first location;
- a local device located at a second location, the local device connected to the interaction device over a network; and
- wherein the local device tracks movement input positions on a display of the local device to control the aim of a photonic emission device at the interaction device.
9. The system of claim 8, wherein the photonic emission device is a laser positioning device.
10. The system of claim 8, wherein the movement input positions are automatically transformed, at the local device at the second location, into video stream coordinates.
11. The system of claim 8, wherein the movement input positions are transformed, at the local device at the second location, to relative coordinates.
12. The system of claim 11, wherein the relative coordinates are further adjusted, at the local device at the second location, by selected calibration data.
13. The system of claim 12, wherein the adjusted relative coordinates are transmitted to the first location.
14. The system of claim 8, wherein the movement input positions include touch and swipe movement on a touch screen display.
Type: Application
Filed: Sep 5, 2018
Publication Date: May 9, 2019
Inventors: Olexsandr Neskin (Dnipropetrovsk), Iaroslav Azhniuk (Kyiv), Andrii Kulbaba (Kyiv)
Application Number: 16/122,776