HANDHELD WRITING IMPLEMENT FORM FACTOR MOBILE DEVICE
Presented here is a handheld writing implement form factor for a mobile device. The shape of the device can correspond to the shape of a whiteboard marker. Due to the small size, the device does not have a physical keyboard or an onscreen keyboard and instead relies on environmental cues, gestural input, voice input, and touch input to interpret user instructions. For example, when the device determines from environmental cues that it is resting on a tabletop, a touch input consisting of a single press from the user is interpreted as an instruction to scroll the display. In other embodiments, the device can be used as a handheld writing implement, such as a computer stylus, or to control the device by handwriting in the air. Other uses of the device are disclosed such as making a payment, communicating with other appliances enabled for electronic communication, recording images with a gesture, etc.
This application claims priority to the U.S. provisional patent application Ser. No. 62/528,357, filed Jul. 3, 2017; and to the U.S. provisional patent application Ser. No. 62/553,077, filed Aug. 31, 2017, both of which are incorporated herein in their entirety by this reference.
TECHNICAL FIELD
The present application is related to mobile devices and, more specifically, to methods and systems used to operate a mobile device having a handheld writing implement form factor.
BACKGROUND
The form factor of a mobile phone is its size, shape, and style, as well as the layout and position of its major components. There are three major form factors—bar phones, flip phones, and sliders—as well as sub-categories of these forms and some atypical forms. The most common form factor in mobile phones today is a slate or touchscreen phone, which is a subset of the bar form. The bar form, like a tablet computer, has few physical buttons, instead relying upon a touchscreen and an onscreen keyboard. Since the mid-2010s, almost all smartphones come in a “touchscreen” slate form.
SUMMARY
Presented here is a handheld writing implement form factor for a mobile device such as a mobile phone. The shape of the mobile device can roughly correspond to the shape of a whiteboard marker. Due to the small size, the mobile device does not necessarily have a physical keyboard or an onscreen keyboard, and instead relies on environmental cues, gestural input, voice input, and touch input to interpret user instructions. Environmental cues can include presence of other devices enabled for electronic communication, ambient light, ambient sound, etc. For example, when the mobile device determines from environmental cues that it is resting on a tabletop, a touch input consisting of a single press from the user is interpreted as an instruction to scroll the display. In other examples, the mobile device can be used as a handheld writing implement, such as a computer stylus, or to control the mobile device by handwriting on a non-electronic surface, such as air, paper, a desk, etc. Other uses of the handheld writing implement form factor phone are disclosed, such as making a payment, communicating with other appliances enabled for electronic communication, recording images with a gesture, etc.
The size of the device is designed to fit a user's hand, fit inside the user's pocket, etc. The size of the device is substantially the same as the size of a handheld writing implement, such as a whiteboard marker. Handheld writing implements include various pens, pencils, crayons, markers, highlighters, etc., and exclude typewriters and keyboards. The device can take on a shape similar to a whiteboard marker, an elongated prism, a wand, a stick, etc. The device has a length, a width, and a height. The length 110 is substantially larger than the width 120 and the height 130, while the width 120 is substantially similar to the height 130. For example, the length 110 can be at least 2.5 times as long as the width 120 and the height 130, the width 120 can be at most two times as long as the height 130, and the width 120 can be at least 0.5 times as long as the height 130. In one embodiment, the length 110, the width 120, and the height 130 can be measured by defining a bounding box surrounding the device 100 and measuring the bounding box in the three dimensions. In this embodiment, the length 110 is the length of the bounding box, the width 120 is the width of the bounding box, and the height 130 is the height of the bounding box.
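The proportion constraints above can be expressed as a simple predicate over the bounding-box dimensions. The sketch below is illustrative only; the function name and the example dimensions are assumptions, while the ratio thresholds (length at least 2.5 times each cross-section dimension, and a width-to-height ratio between 0.5 and 2, i.e., substantially similar) follow the example ratios above:

```python
def fits_marker_form_factor(length, width, height):
    """Return True when the bounding-box dimensions satisfy the example
    proportions: the length dominates both other dimensions, and the
    width and height are substantially similar (within a factor of two)."""
    length_dominates = length >= 2.5 * width and length >= 2.5 * height
    similar_cross_section = 0.5 * height <= width <= 2.0 * height
    return length_dominates and similar_cross_section

# A whiteboard-marker-like bounding box (dimensions in mm) passes:
print(fits_marker_form_factor(120, 18, 18))  # True
# A slate-phone-like bounding box does not:
print(fits_marker_form_factor(150, 75, 8))   # False
```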
The handheld device 100 contains a display 140 and a camera 180. The display 140 has a shape including the length 110. The camera 180 can be placed on the same side of the handheld device 100 as the display 140. Given the small size of the device, the device 100 can be without a physical keyboard or onscreen keyboard. Instead, the device 100 can receive user input through voice commands, gestures, touch, environmental cues, etc.
A touch sensor can be integrated with the non-display surface 150. The touch sensor can be a layer of touch sensors 160 placed on the outside of the non-display surface 150, or the touch sensor can be a layer of touch sensors 170 placed on the inside of the non-display surface 150. The touch sensor layer can also be integrated into the non-display surface 150. The touch sensor can receive a user input performed on the non-display surface 150. The touch sensor can be resistive, surface acoustic wave, capacitive, infrared, infrared acrylic projection, optical imaging, dispersive signal technology, acoustic pulse recognition, etc. The touch sensor can receive an input from the user such as a single press, double press, a slide, a predetermined pattern, etc. Based on the received touch, a processor associated with the device 100 can interpret the touch as an instruction to scroll the display 140. Interpretation of the touch can be dependent on an environmental cue, as described in this application.
The surfaces 210, 220 can have the same dimensions but different orientations. In other words, the two surfaces 210, 220 are symmetric about the axis 250. When the two surfaces 210, 220 have the same dimensions, regardless of which surface 210, 220 is resting on a support surface 240, the angle 280 between the display 140 and the support surface 240 is the same.
The surfaces 310, 320 can have different dimensions and different orientations. In other words, the two surfaces 310, 320 are asymmetric about the axis 350. When the two surfaces 310, 320 have different dimensions, the angle 380, 390 between the display 140 and the support surface 340 varies depending on which surface 310, 320 is resting on the support surface 340.
The lenses 405, 415, 425, 435, 460 can be placed around the perimeter 440 of the handheld device 100. A perimeter of a device, as used in this application, refers to a narrow surface ribbon surrounding a closed curve, where both the narrow surface ribbon and the closed curve are confined to the external surface of the device.
The non-display surface including the sides 400, 410, 420, 430 can contain an optically transparent material such as glass, optically transparent plastic, etc. The lenses 405, 415, 425, 435 can be placed on top of the non-display surface or flush with the non-display surface. When the non-display surface is optically transparent, the lenses 405, 415, 425, 435 can be placed beneath the non-display surface including the sides 400, 410, 420, 430. In addition, the optically transparent non-display surface can act as an additional lens for the cameras of the device 100.
For example, when the handheld device 100 is not moving, i.e., velocity and acceleration of the handheld device 100 are zero, and ambient light surrounding the handheld device 100 is moderate to high, the processor can determine that the handheld device 100 is resting on a surface in a position for the user to view the display 140 of the device 100. The processor can configure the display 140 into two parts 500, 510. Part 500 shows date and time, while part 510 shows an output of an application (such as Twitter) running on the device 100.
In another example, when ambient light surrounding the handheld device 100 is low, and the handheld device 100 is moving, the processor can determine that the handheld device 100 is inside a pocket or a purse. Consequently, the user cannot view the screen, and the handheld device 100 turns off the display 140 to preserve battery life.
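The two examples above can be sketched as a single decision rule mapping environment cues and motion to a display state. The threshold values, state names, and function name below are illustrative assumptions, not values disclosed here:

```python
def configure_display(ambient_light_lux, speed, acceleration,
                      light_threshold=50.0, motion_epsilon=0.05):
    """Map environment cues and motion to a display state, following the
    two examples above. All thresholds are placeholder assumptions."""
    at_rest = speed < motion_epsilon and abs(acceleration) < motion_epsilon
    if at_rest and ambient_light_lux >= light_threshold:
        # Resting in view of the user: split the display into two parts.
        return {"power": "on", "parts": ["date_time", "application_output"]}
    if ambient_light_lux < light_threshold and not at_rest:
        # Likely inside a pocket or purse: turn the display off.
        return {"power": "off", "parts": []}
    # Otherwise fall back to a single-pane display.
    return {"power": "on", "parts": ["application_output"]}
```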
For example, when the handheld device 100 is pointed (gestural input) at an appliance (environment cue), a processor associated with the handheld device 100 can interpret the gesture as an instruction to auto-pair the handheld device 100 with the appliance at which the device 100 is pointed. To determine that the device 100 is pointing, the processor can detect a deceleration followed by a threshold period of time during which the device 100 is oriented in substantially the same direction. To determine that the device 100 is pointed at the appliance, the appliance being the environment cue, the processor can utilize a camera disposed on the backside of the device 100 to identify the object in the center of the camera's field of view. When that object is the appliance, the processor can proceed to pair the device 100 and the appliance.
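The stable-orientation part of this pointing test can be sketched as follows. The sample format, hold period, and angular tolerance are illustrative assumptions (a real implementation would fuse full 3-D orientation rather than a single heading angle):

```python
def is_pointing(orientation_samples, hold_seconds=1.0, tolerance_deg=5.0):
    """Return True when the device has been oriented in substantially the
    same direction for at least `hold_seconds`. `orientation_samples` is a
    chronological list of (timestamp_seconds, heading_degrees) readings."""
    if not orientation_samples:
        return False
    t_end, heading_end = orientation_samples[-1]
    for t, heading in reversed(orientation_samples):
        if abs(heading - heading_end) > tolerance_deg:
            return False  # orientation wandered inside the hold window
        if t_end - t >= hold_seconds:
            return True   # stable for the full threshold period
    return False          # not enough history recorded yet
```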
The group picture consists of multiple selfie pictures stitched together into a single photo. In addition, because there are two or more cameras 1120, 1130 associated with each handheld device 1100, 1110, the multiple pictures generated by the multiple cameras 1120, 1130 can be used to distinguish the foreground from the background in the multiple selfies. As a result, the background can be blurred, while the foreground can be sharpened to produce a more pleasing group picture. The group picture can be saved to some or all of the multiple handheld devices 1100, 1110 that have recorded a selfie, or the group picture can be saved to all the handheld devices 1100, 1110 within a predetermined range.
The displays 1220, 1230 associated with the handheld devices 1200, 1210 display how the gestural input and the environment cue are interpreted. The information exchanged can be information contained in a business card associated with the devices 1200, 1210, money, etc.
The user can choose to create a full panorama 1450, or create a regular image 1460. If the user chooses to create the regular image 1460, the user can select the front 1430 or the back 1440 image by dragging the front 1430 or the back 1440 image toward the center of the display 140. The user can orient the selected image 1470 using user inputs 1405 such as dragging the selected image 1470 up and down, or left and right. Once the area of interest 1480 in the selected image 1470 has been designated, the user can unwrap the distorted image 1480 to obtain the regular image 1460. The user can transmit the image to various social networking sites, a cloud, etc., as shown on the display 1490.
If the user chooses to create the full panorama 1450, the user can stitch the front 1430 and the back 1440 image using user input 1415 such as pinching to create the full panorama 1450. The user can use user inputs 1425 such as dragging the full panorama 1450 left and right, and up and down to view various parts of the full panorama 1450.
In step 1930, the device 100 receives the selection of a card to be used in the payment. In step 1940, the device 100 authenticates the user using a fingerprint and/or voice. The display 140 of the device 100 can display information regarding the card used and the authentication method used. Step 1930 can be performed before or after step 1940. In addition to, or instead of, the voice and fingerprint authentication, the device 100 can authenticate the user using a PIN, as shown in step 1950, and/or using a pattern, as shown in step 1960. Various authentication methods can be used alone or can be combined. The payment can be communicated to an external device using near field communication (NFC), Bluetooth, the Internet, ultrasound, mesh networks, etc.
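The steps above can be sketched as a simplified payment routine. The function name, the acceptance policy (at least one successful authentication method), and the transport list are illustrative assumptions, since the description allows the methods to be used alone or combined:

```python
def process_payment(card, auth_results, transports=("nfc", "bluetooth")):
    """Sketch of card selection, authentication by one or more methods,
    then transmission over the first available transport. `auth_results`
    maps a method name (e.g. "fingerprint", "voice", "pin", "pattern")
    to its boolean outcome."""
    if card is None:
        return ("rejected", "no card selected")
    passed = [name for name, ok in auth_results.items() if ok]
    if not passed:
        return ("rejected", "authentication failed")
    return ("paid", transports[0])

print(process_payment("card_1", {"fingerprint": True, "voice": False}))
# ('paid', 'nfc')
```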
In step 2210, the processor determines an environment cue proximate to the handheld device and a physical property of the handheld device. The environment cue can include an object the handheld device is pointing at, proximity of other electronic communication enabled devices, an ambient light surrounding the handheld device, an ambient sound surrounding the handheld device, etc. The physical property can include at least one of a position of the handheld device, an orientation of the handheld device, an acceleration of the handheld device, a velocity of the handheld device, etc. The position of the handheld device can be measured relative to the user.
In step 2220, the processor interprets the user input based on the environment cue proximate to the handheld device and the physical property of the handheld device. The user input can be a touch input, a voice input, or a gesture input. Touch input can include a press, a double press, a slide, a double slide, a pattern, etc. For example, when the amount of ambient light is above a predefined threshold, the velocity and the acceleration of the handheld device are substantially zero, and the user input is a touch input, the processor interprets the user input as an instruction to scroll the display of the handheld device.
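This interpretation rule can be sketched as follows; the threshold values, the input and action names, and the function name are illustrative assumptions:

```python
def interpret_input(user_input, ambient_light_lux, speed, acceleration,
                    light_threshold=50.0, motion_epsilon=0.05):
    """Interpret a user input from the environment cue (ambient light)
    and the physical property (velocity and acceleration), as in the
    scroll example above."""
    at_rest = speed < motion_epsilon and abs(acceleration) < motion_epsilon
    if (user_input == "touch" and at_rest
            and ambient_light_lux > light_threshold):
        return "scroll_display"
    return "unhandled"
```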
The processor can perform an action indicated by the interpreted user input, such as scrolling the display, taking a picture, inputting text into the handheld device using handwriting, sending an email, making a payment, etc., as described in this application. When the handheld device has multiple cameras around the perimeter of the device, the handheld device can record multiple images through the multiple normal lenses of the cameras. The lenses can be placed on top of the non-display surface of the handheld device, flush with the non-display surface, or beneath the non-display surface. Based on the multiple images, the processor can create a single 360° image.
In addition to determining the physical property of the device, the processor can determine an environment cue proximate to the handheld device. Based on the environment cue proximate to the handheld device and the physical property of the handheld device, the processor can interpret the user input received by the handheld device as the instruction to the handheld device to perform the action. The environment cue can be detection of smiling faces, presence of other devices enabled for electronic communication, ambient light, ambient sound, etc.
The environment cue can include a presence of an electronic communication enabled device. The physical property can include a substantially still device pointed at the electronic communication enabled device. Based on the environment cue and the physical property, the processor transmits the user input received by the handheld device as a communication to the electronic communication enabled device. For example, the electronic communication enabled device can be a household appliance such as a lamp, coffee maker, thermostat, fridge, light switch, etc. When the user moves the handheld device upward, the processor sends an instruction to the lamp to increase the brightness; conversely, when the user moves the handheld device downward, the processor sends an instruction to the lamp to decrease the brightness.
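The lamp example can be sketched as a mapping from vertical motion to an appliance command; the function name and command vocabulary are illustrative assumptions:

```python
def appliance_command(appliance, vertical_motion):
    """Translate a vertical gesture aimed at a paired appliance into a
    command, following the lamp example: upward motion increases the
    brightness, downward motion decreases it."""
    if appliance == "lamp":
        if vertical_motion > 0:
            return "increase_brightness"
        if vertical_motion < 0:
            return "decrease_brightness"
    return "noop"
```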
The environment cue can include a presence of the electronic communication enabled device configured to receive a payment. The motion of the handheld device includes a substantially still handheld device. The processor receives a voice input specifying an amount of money to transfer between the handheld device and the electronic communication enabled device. Based on the environment cue and the physical property, the processor interprets the user input received by the handheld device as an instruction to transmit the amount associated with the payment between the handheld device and the electronic communication enabled device. The user input can include at least one of a squeeze of the handheld device, an orientation of the handheld device, a voice authentication by the handheld device, or a fingerprint authentication by the handheld device.
The environment cue can include a presence of an electronic communication enabled device. The processor can detect a physical contact between the electronic communication enabled device and the handheld device. The processor can interpret the physical contact between the electronic communication enabled device and the handheld device as an instruction to transmit information between the electronic communication enabled device and the handheld device. The information can include at least one of a file, a business card, or a payment.
The environment cue can include a presence of a smiling face. The motion of the handheld device can include a substantially still handheld device. The processor can detect as a user input an orientation of the handheld device at a predetermined angle and interpret the orientation of the handheld device as an instruction to record an image of the smiling face using a camera associated with the handheld device.
The environment cue can include a presence of a smiling face and a presence of an electronic communication enabled device. The motion of the handheld device can include a substantially still handheld device. The processor can detect an orientation of the handheld device at a predetermined angle and interpret the orientation of the handheld device as an instruction to record an image of the smiling face using a camera associated with the handheld device. The processor can combine the image of the smiling face with an image associated with the electronic communication enabled device and transmit the combined image between the handheld device and the electronic communication enabled device. The image associated with the electronic communication enabled device can be spatially offset from the image recorded by the handheld device. Using the two offset images, the processor can create a stereoscopic image, or perform image processing effects such as blurring the background in the combined image, or in either image individually.
The motion of the handheld device can include a change in a velocity of the handheld device. The processor can receive a handwriting motion of the handheld device and interpret the handwriting motion as an instruction to input handwritten text into the handheld device. The handwritten text can be stored as an image on the handheld device. The processor can perform optical character recognition on the handwritten text to obtain ASCII text, and/or the processor can transform the ASCII text into a voice recording. The processor can send the handwritten text in any of the above-described forms to another device using text, email, voice message, etc.
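Recovering a handwriting trace from the device's motion can be approximated by twice integrating accelerometer samples. The sketch below is a deliberately simplified dead-reckoning model, with the sample format as an assumption; a practical implementation would also need gravity compensation and drift correction before any optical character recognition step:

```python
def integrate_trace(accel_samples, dt=0.01):
    """Dead-reckon a 2-D handwriting trace from (ax, ay) accelerometer
    samples taken every `dt` seconds, starting at rest at the origin."""
    vx = vy = x = y = 0.0
    points = [(0.0, 0.0)]
    for ax, ay in accel_samples:
        vx += ax * dt   # integrate acceleration into velocity
        vy += ay * dt
        x += vx * dt    # integrate velocity into position
        y += vy * dt
        points.append((x, y))
    return points
```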
The motion of the handheld device can include a change in a velocity of the handheld device. The processor can receive a selection motion of the handheld device. The selection motion can include a substantially circular portion, such as the gesture 900.
The property of the handheld device can include a change in the velocity of the handheld device and an image shown on a display of the handheld device. The processor can receive a shake of the handheld device, such as the shake 820.
This disclosure contemplates the computer system 2400 taking any suitable physical form. As an example and not by way of limitation, computer system 2400 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. Where appropriate, computer system 2400 may include one or more computer systems 2400; be unitary or distributed; span multiple locations; span multiple machines; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 2400 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 2400 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 2400 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
The processor may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or a Motorola PowerPC microprocessor. One of skill in the relevant art will recognize that the terms “machine-readable (storage) medium” or “computer-readable (storage) medium” include any type of device that is accessible by the processor.
The memory is coupled to the processor by, for example, a bus. The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed.
The bus also couples the processor to the non-volatile memory and drive unit. The non-volatile memory is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software in the computer 2400. The non-volatile storage can be local, remote, or distributed. The non-volatile memory is optional because systems can be created with all applicable data available in memory. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor.
Software is typically stored in the non-volatile memory and/or the drive unit. Indeed, storing an entire large program in memory may not even be possible. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and, for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and a local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.
The bus also couples the processor to the network interface device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system 2400. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., “direct PC”), or other interfaces for coupling a computer system to other computer systems. The interface can include one or more input and/or output devices. The I/O devices can include, by way of example but not limitation, a physical keyboard or an onscreen keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other input and/or output devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), a liquid crystal display (LCD), or some other applicable known or convenient display device.
In operation, the computer system 2400 can be controlled by operating system software that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux™ operating system and its associated file management system. The file management system is typically stored in the non-volatile memory and/or drive unit and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile memory and/or drive unit.
Some portions of the detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or “generating” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods of some embodiments. The required structure for a variety of these systems will appear from the description below. In addition, the techniques are not described with reference to any particular programming language, and various embodiments may thus be implemented using a variety of programming languages.
In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a laptop computer, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone, a Blackberry, a processor, a telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
While the machine-readable medium or machine-readable storage medium is shown in an exemplary embodiment to be a single medium, the terms “machine-readable medium” and “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The terms “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies or modules of the presently disclosed technique and innovation.
In general, the routines executed to implement the embodiments of the disclosure, may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processing units or processors in a computer, cause the computer to perform operations to execute elements involving the various aspects of the disclosure.
Moreover, while embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
Further examples of machine-readable storage media, machine-readable media, or computer-readable (storage) media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks, (DVDs), etc.), among others, and transmission type media such as digital and analog communication links.
In some circumstances, operation of a memory device, such as a change in state from a binary one to a binary zero or vice-versa, for example, may comprise a transformation, such as a physical transformation. With particular types of memory devices, such a physical transformation may comprise a physical transformation of an article to a different state or thing. For example, but without limitation, for some types of memory devices, a change in state may involve an accumulation and storage of charge or a release of stored charge. Likewise, in other memory devices, a change of state may comprise a physical change or transformation in magnetic orientation or a physical change or transformation in molecular structure, such as from crystalline to amorphous or vice versa. The foregoing is not intended to be an exhaustive list in which a change in state for a binary one to a binary zero or vice-versa in a memory device may comprise a transformation, such as a physical transformation. Rather, the foregoing is intended as illustrative examples.
A storage medium typically may be non-transitory or comprise a non-transitory device. In this context, a non-transitory storage medium may include a device that is tangible, meaning that the device has a concrete physical form, although the device may change its physical state. Thus, for example, non-transitory refers to a device remaining tangible despite this change in state.
Remarks
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the embodiments, which is set forth in the following claims.
Claims
1. A handheld device comprising:
- an elongated member extending along an axis, the elongated member having a girth substantially corresponding to that of a handheld writing implement;
- a display extending along more than half a length of the axis of the handheld device at a surface thereof; and
- a touch sensor layer integrated with at least a portion of the surface of the handheld device not occupied by the display, the touch sensor layer having a touch sensor surface outside the surface of the handheld device, the touch sensor surface to receive a user input to the handheld device.
2. The handheld device of claim 1, comprising a processor to determine a physical property of the handheld device and an environment cue surrounding the handheld device, to interpret the user input based on the physical property of the handheld device and the environment cue, and to perform an action based on the interpreted user input.
3. The handheld device of claim 2, the physical property of the handheld device comprising at least one of a position of the handheld device, an orientation of the handheld device, an acceleration of the handheld device, or a velocity of the handheld device.
4. The handheld device of claim 2, the environment cue comprising at least one of surrounding electronic communication enabled devices, ambient sound, ambient light, or smiling faces.
5. The handheld device of claim 1, comprising:
- a support member associated with the portion of the surface of the handheld device not occupied by the display and making the display visible to a user while the handheld device is resting on the support member.
6. A handheld device comprising:
- an elongated member extending along an axis, the elongated member having a girth substantially corresponding to that of a handheld writing implement;
- a display extending along more than half a length of the axis of the handheld device at a surface of the handheld device; and
- a touch sensor layer integrated with at least a portion of the surface of the handheld device not occupied by the display, the touch sensor layer having a touch sensor surface inside the surface of the handheld device.
7. The handheld device of claim 6, comprising:
- a non-display surface disposed on the handheld device, the non-display surface and the display forming a chassis of the handheld device; and
- the touch sensor layer to receive a user input performed on the non-display surface.
8. The handheld device of claim 7, the user input comprising an instruction to scroll the display.
9. The handheld device of claim 7, the touch sensor layer disposed outside the non-display surface.
10. The handheld device of claim 6, comprising the handheld device without a physical keyboard or an onscreen keyboard.
11. The handheld device of claim 6, a chassis of the handheld device comprising a substantially faceted surface.
12. The handheld device of claim 6, a chassis of the handheld device extending to a rounded end.
13. The handheld device of claim 6, a chassis of the handheld device extending to a substantially flat end.
14. The handheld device of claim 6, a size of the handheld device corresponding to a size of a whiteboard marker.
15. The handheld device of claim 6, comprising a processor to determine a physical property of the handheld device and to interpret a user input based on the physical property of the handheld device.
16. The handheld device of claim 15, the physical property of the handheld device comprising at least one of a position of the handheld device, an orientation of the handheld device, a velocity of the handheld device, or an acceleration of the handheld device.
17. The handheld device of claim 6, comprising a processor to determine an environment cue surrounding the handheld device and to interpret a user input based on the environment cue surrounding the handheld device.
18. The handheld device of claim 17, the environment cue surrounding the handheld device comprising at least one of an ambient light, an ambient sound, other electronic communication enabled devices, or a focal point of a camera associated with the handheld device.
19. The handheld device of claim 6, comprising a processor to determine a physical property of the handheld device and an environment cue surrounding the handheld device, to interpret a user input based on the physical property of the handheld device and the environment cue, and to perform an action based on the interpreted user input.
20. The handheld device of claim 19, the action comprising taking a picture, recording a message, sending an email, making a payment, or recording a handwritten text.
21. The handheld device of claim 6, comprising:
- a plurality of normal lenses disposed around a perimeter of the handheld device and beneath a non-display surface, the plurality of normal lenses to record a plurality of images; and
- a processor to create a 360° image from the plurality of images.
22. The handheld device of claim 21, the non-display surface comprising an optically transparent material.
23. The handheld device of claim 6, comprising:
- a support member associated with a chassis of the handheld device and making the display visible to a user while the handheld device is resting on the support member.
24. The handheld device of claim 23, the support member comprising a plurality of substantially planar surfaces meeting at an angle, a substantially planar surface in the plurality of substantially planar surfaces operable to support the handheld device.
25. A method comprising:
- providing a handheld device comprising an elongated member extending along an axis, the elongated member having a girth substantially corresponding to that of a handheld writing implement;
- providing a display extending along more than half a length of the axis of the handheld device at a surface of the handheld device; and
- providing a touch sensor layer integrated with at least a portion of the surface of the handheld device not occupied by the display, the touch sensor layer having a touch sensor surface inside the surface of the handheld device.
26. The method of claim 25, comprising:
- providing a non-display surface disposed on the handheld device, the non-display surface and the display forming a chassis of the handheld device; and
- providing the touch sensor layer to receive a user input performed on the non-display surface.
27. The method of claim 25, said providing the handheld device comprising providing the handheld device without a physical keyboard or an onscreen keyboard.
28. The method of claim 25, comprising:
- determining a physical property of the handheld device, the physical property of the handheld device comprising at least one of a position of the handheld device, an orientation of the handheld device, a velocity of the handheld device, or an acceleration of the handheld device; and
- interpreting a user input based on the physical property of the handheld device.
29. The method of claim 25, comprising:
- determining an environment cue surrounding the handheld device, the environment cue surrounding the handheld device comprising at least one of an ambient light, an ambient sound, other electronic communication enabled devices, or a focal point of a camera associated with the handheld device; and
- interpreting a user input based on the environment cue surrounding the handheld device.
30. The method of claim 25, comprising:
- providing a support member associated with a chassis of the handheld device and making the display visible to a user while the handheld device is resting on the support member.
Type: Application
Filed: Sep 18, 2017
Publication Date: Jan 3, 2019
Inventors: Siqi Li (Menlo Park, CA), David John Evans, V (Palo Alto, CA)
Application Number: 15/707,813