ELECTRONIC DEVICE HAVING PIVOTABLY CONNECTED SIDES WITH SELECTABLE DISPLAYS
An electronic device has an imaging device (such as a still camera or video camera) and is capable of displaying a viewfinder on one side or multiple sides of the device. The device may determine the side or sides on which to display the viewfinder based on factors such as user input, object proximity, grip detection, accelerometer data, and gyroscope data. In one implementation, the device has multiple imaging devices and can select which imaging device to use to capture an image based on the above factors as well.
This application is a continuation of U.S. patent application Ser. No. 16/044,652, filed Jul. 25, 2018, now allowed, which is a continuation of U.S. application Ser. No. 15/651,974, filed Jul. 17, 2017, now U.S. Pat. No. 10,057,497, issued Aug. 21, 2018, which is a continuation of U.S. application Ser. No. 14/219,160, filed Mar. 19, 2014, now U.S. Pat. No. 9,712,749, issued Jul. 18, 2017, which claims the benefit of U.S. Provisional Patent Application 61/945,595, filed Feb. 27, 2014, the contents of each of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to multi-sided electronic devices.
BACKGROUND
As the smartphone market matures, manufacturers are increasingly looking for ways to differentiate their products from those of their competitors. One area of distinction is display size. For many consumers, having the largest display possible is a key consideration when selecting a smartphone. There are practical limits, however, to how large a display a smartphone can have in a traditional form factor. At some point, the size of the display will exceed the size of typical stowage compartments (e.g., pants pockets or purses). In addition, the overall bulk of the phone will make it difficult to hold with a single hand while making a phone call.
While the appended claims set forth the features of the present techniques with particularity, these techniques may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:
The disclosure is generally directed to an electronic device (“device”) having multiple sides. In some embodiments, at least two of the sides are pivotable with respect to one another. The device may have a display that wraps from one side to the other. In some embodiments, the device has multiple display drivers and each display driver is responsible for driving a different pixel region. The device may enable and disable one or more of the drivers based on an angle between the sides of the device. Two or more of the pixel regions may overlap, with one or more drivers being capable of driving the pixels of the overlapping region. The pixels or pixel regions that are enabled may be selected based on the angle between the sides of the device.
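The angle-dependent driver selection described above can be sketched in pseudocode form. The following is an illustrative sketch only and is not part of the disclosure; the driver names, threshold values, and region assignments are hypothetical:

```python
def drivers_to_enable(angle_degrees: float) -> set:
    """Return the set of display drivers to enable for a given fold angle.

    Thresholds and driver-to-region assignments are hypothetical examples
    of the angle-based selection described in the disclosure.
    """
    if angle_degrees < 30:
        # Device nearly closed: drive only the outward-facing pixel region.
        return {"driver_1"}
    if angle_degrees > 150:
        # Device nearly flat: both drivers active, covering the full
        # wrap-around display, including any overlapping pixel region.
        return {"driver_1", "driver_2"}
    # Intermediate angle: drive only the region facing the user.
    return {"driver_2"}
```

A real implementation would also arbitrate ownership of pixels in an overlapping region so that only one driver writes to them at a time.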
In an embodiment, the device has an imaging device (such as a still camera or video camera) and is capable of displaying a viewfinder on one side or multiple sides of the device. The device may determine the side or sides on which to display the viewfinder based on factors such as user input, object proximity, grip detection, accelerometer data, and gyroscope data. In one embodiment, the device has multiple imaging devices and can select which imaging device to use to capture an image based on the above factors as well.
According to an embodiment, the device has multiple gesture sensors (such as infrared sensors) and can interpret gestures based on the movement detected by the gesture sensors. The device may interpret data from each of the gesture sensors as separate gestures, or may interpret the data from two or more of the sensors as one single gesture. The device may select which interpretation to use based on an angle between two or more sides of the device.
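The choice between separate-gesture and single-gesture interpretation can be illustrated as follows. This sketch is not from the disclosure; the flatness threshold and data representation (lists of sample points per sensor) are assumptions:

```python
FLAT_THRESHOLD = 150  # hypothetical angle, in degrees, above which the two
                      # sides are treated as one continuous gesture surface

def interpret_gestures(angle_degrees, sensor1_samples, sensor2_samples):
    """Combine or separate gesture-sensor data based on the fold angle.

    When the sides are nearly flat, the two sensors' samples are merged
    and interpreted as one gesture; otherwise each sensor's samples are
    interpreted as an independent gesture.
    """
    if angle_degrees >= FLAT_THRESHOLD:
        return [("combined", sensor1_samples + sensor2_samples)]
    return [("side1", sensor1_samples), ("side2", sensor2_samples)]
```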
Turning to
The device 100 can be manipulated into a number of possible positions.
Turning to
In an embodiment, the locations of the pixels that are enabled on the display 108 vary according to the mode of the device 100. This can be seen in
Turning to
The processor 1410 retrieves instructions and data from the memory 1420 and, using the instructions and data, carries out the methods described herein. The processor 1410 provides outgoing data to, or receives incoming data from the network communication module 1440.
The device 100 further includes sensors 1452. Among the sensors 1452 are a motion sensor 1454 (e.g., an accelerometer or gyroscope), a flip angle sensor 1456, a first gesture sensor 1458, a second gesture sensor 1460, and a proximity sensor 1461. The motion sensor 1454 senses one or more of the motion and orientation of the device 100, generates data regarding the motion and orientation (whichever is sensed), and provides the data to the processor 1410. The flip angle sensor 1456 senses the angle between the first side 102 and the second side 104 of the device 100, generates data regarding the angle, and provides the data to the processor 1410. The processor 1410 can determine the position (e.g., first position, second position, or intermediate position) or mode (e.g., tablet mode, phone mode, desktop mode, or dual-user mode) based on one or more of motion data from the motion sensor 1454, orientation data from the motion sensor 1454, and angle data from the flip angle sensor 1456. The processor 1410 may use various criteria for mapping the angle data to the various positions and modes, such as whether the angle is above, is below, or meets a particular threshold value (e.g., first threshold value, second threshold value, etc.), or whether the angle falls into a particular range (e.g., first range, second range, etc.). The processor 1410 may use multiple threshold values or a single threshold value. The angle ranges may be contiguous with one another (e.g., a first range may be contiguous with a second range) or not.
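The range-based mapping from angle data to modes can be sketched as follows. This is an illustrative sketch only; the specific ranges and their assignment to modes are assumptions, not values from the disclosure:

```python
# Hypothetical contiguous angle ranges mapped to the modes named in the
# disclosure (phone, desktop, tablet, dual-user). The assignments are
# illustrative; the disclosure does not specify numeric thresholds.
MODE_RANGES = [
    (0, 45, "phone"),        # nearly closed
    (45, 135, "desktop"),    # partially open
    (135, 225, "tablet"),    # opened roughly flat
    (225, 360, "dual-user"), # folded back, a display on each outer side
]

def device_mode(angle_degrees: float) -> str:
    """Map a flip-angle reading to a device mode via range lookup."""
    for low, high, mode in MODE_RANGES:
        if low <= angle_degrees < high:
            return mode
    return "dual-user"  # fully wrapped back (angle of 360 degrees)
```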
The proximity sensor 1461 senses proximity of objects, generates data regarding the proximity, and provides the data to the processor 1410. The processor 1410 may then interpret the data to determine whether, for example, a person's head is close by or whether the device 100 is being gripped. The first gesture sensor 1458 and the second gesture sensor 1460 sense movement of objects that are outside of the device 100. The gesture sensors 1458 and 1460 generate data regarding the movement and provide the data to the processor 1410. The first gesture sensor 1458 and the second gesture sensor 1460 may each be implemented as an Electromagnetic Radiation ("EMR") sensor, such as an infrared ("IR") sensor.
In some embodiments, the device 100 includes a first display driver 1462 and a second display driver 1464, either or both of which may drive the display 108 in a manner that will be discussed below in more detail. The processor 1410 or the graphics processor 1412 sends video frames to one or both of the first display driver 1462 and the second display driver 1464, which in turn display images on the display 108. In some embodiments, the display drivers 1462 and 1464 include memory in which to buffer the video frames. The display drivers 1462 and 1464 may be implemented as a single hardware component or as separate hardware components.
According to some embodiments, the device 100 includes EMR emitters 1470 through 1484. Each of the EMR emitters may be implemented as IR Light Emitting Diodes (“LEDs”). In such embodiments, the first and second gesture sensors 1458 and 1460 detect EMR emitted from the EMR emitters and reflected off of an object, such as a person's hand.
Each of the elements of
Turning to
Referring to
Referring to
Finally, referring to
Turning to
In
In
In
Turning to
In
In
In
As previously noted, the first display driver 1462 and the second display driver 1464 of the device 100 may drive different pixel regions at different frame rates. Turning to
According to an embodiment, when the device 100 is in the "phone mode," it can use one of the imaging devices 1405 and 1407 to capture images. The user may be allowed to select, via a user interface toggle, the pixel region (e.g., that of the first side 102 or that of the second side 104) of the display 108 on which to show the viewfinder. Alternatively, the processor 1410 may intelligently select the pixel region for the viewfinder based on object proximity (e.g., a person's face or head, as detected by one of the imaging devices 1405 and 1407), grip detection (which may be implemented using capacitive sensors on the device 100), or readings of the motion sensor 1454. If the viewfinder is initiated and the processor 1410 detects a "tight rotation" of the device via the motion sensor 1454 (indicating that the user flipped the device around), the processor 1410 may switch the viewfinder to another pixel region.
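The selection logic above can be sketched as a priority scheme. This sketch is illustrative only; the priority ordering, the side labels, and the parameter names are assumptions, not from the disclosure:

```python
def select_viewfinder_region(user_choice=None, face_side=None,
                             grip_side=None, rotated=False,
                             current="side1"):
    """Pick the pixel region for the viewfinder.

    Hypothetical priority: an explicit user toggle wins; a detected
    "tight rotation" flips the viewfinder to the other side; otherwise
    face proximity, then grip detection, then the current region.
    """
    if user_choice is not None:
        return user_choice
    if rotated:
        # User flipped the device around; switch pixel regions.
        return "side2" if current == "side1" else "side1"
    if face_side is not None:
        return face_side
    if grip_side is not None:
        return grip_side
    return current
```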
Turning to
If the processor 1410 selects the first imaging device 1405, then it may display the viewfinder in the first pixel region 1802 for a front-facing picture (
In short, the device 100 is able to use the same imaging device for both self-portrait and scenery image captures, and the same display device (the display 108) for the viewfinder in both applications. The device 100 accomplishes this by changing the pixel region.
According to an embodiment, when the device 100 is in phone mode, the device 100 may use a single imaging device (either the imaging device 1405 or the imaging device 1407) and display multiple viewfinders. Turning to
Referring back to
In an embodiment, the device 100 can use both of the imaging devices 1405 and 1407 to initiate a “panoramic sweep.” To execute the sweep, the user slowly opens the device 100 (from the second position) or closes the device 100 (from the first position) and the processor 1410 uses the angle data and image data from both imaging devices 1405 and 1407 to stitch together a wide panoramic sweep of the landscape.
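One way to use the angle annotations during a panoramic sweep is to order frames by fold angle and require adjacent frames to overlap before stitching. The following sketch is illustrative only; the frame representation and the field-of-view overlap criterion are assumptions:

```python
def order_frames_for_stitch(frames):
    """Order captured frames for stitching.

    frames: list of (fold_angle_degrees, camera_id, frame_id) tuples,
    captured by both imaging devices as the device opens or closes.
    Sorting by fold angle interleaves the two cameras' captures into
    one sweep across the scene.
    """
    return sorted(frames, key=lambda frame: frame[0])

def frames_overlap(angle_a, angle_b, fov_degrees=60):
    """Hypothetical overlap test: two captures overlap if their fold
    angles differ by less than the camera's field of view."""
    return abs(angle_a - angle_b) < fov_degrees
```

In practice the stitcher would also match image features (e.g., a common 3D element, as recited in the claims) rather than rely on angle data alone.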
Referring to
Turning to
If the device 100 is in the second position, such as shown in
Referring back to
According to an embodiment, the processor 1410 activates or deactivates one or more of the EMR emitters based on the angle θ between the first side 102 and the second side 104 of the device 100. For example, if the device 100 is in the second position, shown in
Turning to
In view of the many possible embodiments to which the principles of the present discussion may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the claims. Therefore, the techniques as described herein contemplate all such embodiments as may come within the scope of the following claims and equivalents thereof.
Claims
1. (canceled)
2. A computer-implemented method comprising:
- generating, by a foldable smart phone that includes two or more cameras, images by each of the two or more cameras, each image being annotated with fold angle information;
- determining, by the foldable smart phone, that a particular image generated by one of the two or more cameras of the foldable smart phone at least partially overlaps a particular image generated by another of the two or more cameras of the foldable smart phone, then generating a panoramic image based at least on some of the images of the two or more cameras and the fold angle information; and
- providing, by the foldable smart phone, the panoramic image for output.
3. The method of claim 2, wherein each camera is associated with a respective pivotable side of the foldable smart phone.
4. The method of claim 2, wherein the images are generated as the foldable smart phone is closed.
5. The method of claim 2, wherein the images are generated as the foldable smart phone is opened.
6. The method of claim 2, wherein the images of each camera are generated independently of each other.
7. The method of claim 2, comprising:
- generating a first camera panoramic image based on the images generated by a first camera of the two or more cameras; and
- generating a second camera panoramic image based on the images generated by a second camera of the two or more cameras,
- wherein generating the panoramic image comprises generating a stitched-together panoramic image based on the first camera panoramic image and the second camera panoramic image.
8. The method of claim 2, wherein determining that a particular image generated by one of the two or more cameras of the foldable smart phone at least partially overlaps a particular image generated by another of the two or more cameras of the foldable smart phone comprises determining that a portion of the particular image generated by the one of the two or more cameras includes a 3D element that is also included in a portion of the particular image generated by the another of the two or more cameras.
9. A non-transitory computer readable storage medium storing instructions executable by a data processing apparatus and upon such execution cause the data processing apparatus to perform operations comprising:
- generating, by a foldable smart phone that includes two or more cameras, images by each of the two or more cameras, each image being annotated with fold angle information;
- determining, by the foldable smart phone, that a particular image generated by one of the two or more cameras of the foldable smart phone at least partially overlaps a particular image generated by another of the two or more cameras of the foldable smart phone, then generating a panoramic image based at least on some of the images of the two or more cameras and the fold angle information; and
- providing, by the foldable smart phone, the panoramic image for output.
10. The medium of claim 9, wherein each camera is associated with a respective pivotable side of the foldable smart phone.
11. The medium of claim 9, wherein the images are generated as the foldable smart phone is closed.
12. The medium of claim 9, wherein the images are generated as the foldable smart phone is opened.
13. The medium of claim 9, wherein the images of each camera are generated independently of each other.
14. The medium of claim 9, wherein the operations comprise:
- generating a first camera panoramic image based on the images generated by a first camera of the two or more cameras; and
- generating a second camera panoramic image based on the images generated by a second camera of the two or more cameras,
- wherein generating the panoramic image comprises generating a stitched-together panoramic image based on the first camera panoramic image and the second camera panoramic image.
15. The medium of claim 9, wherein determining that a particular image generated by one of the two or more cameras of the foldable smart phone at least partially overlaps a particular image generated by another of the two or more cameras of the foldable smart phone comprises determining that a portion of the particular image generated by the one of the two or more cameras includes a 3D element that is also included in a portion of the particular image generated by the another of the two or more cameras.
16. A foldable smart phone that includes two or more cameras, the foldable smart phone comprising:
- one or more processing devices; and
- one or more storage devices storing instructions that are executable by the one or more processing devices to perform operations comprising:
- generating images by each of the two or more cameras, each image being annotated with fold angle information;
- determining that a particular image generated by one of the two or more cameras of the foldable smart phone at least partially overlaps a particular image generated by another of the two or more cameras of the foldable smart phone, then generating a panoramic image based at least on some of the images of the two or more cameras and the fold angle information; and
- providing the panoramic image for output.
17. The system of claim 16, wherein each camera is associated with a respective pivotable side of the foldable smart phone.
18. The system of claim 16, wherein the images are generated as the foldable smart phone is closed.
19. The system of claim 16, wherein the images are generated as the foldable smart phone is opened.
20. The system of claim 16, wherein the images of each camera are generated independently of each other.
21. The method of claim 2, comprising:
- generating a first camera panoramic image based on the images generated by a first camera of the two or more cameras; and
- generating a second camera panoramic image based on the images generated by a second camera of the two or more cameras,
- wherein generating the panoramic image comprises generating a stitched-together panoramic image based on the first camera panoramic image and the second camera panoramic image.
Type: Application
Filed: Dec 3, 2019
Publication Date: Apr 23, 2020
Inventors: Michael J. Lombardi (Lake Zurich, IL), John Gorsica (Round Lake, IL), Amber M. Pierce (Evanston, IL)
Application Number: 16/701,862