Systems and Methods for Power Optimization in VLC Positioning
Embodiments of systems and methods for power optimization in VLC positioning are disclosed. In one embodiment, a method of power optimization in visible light communication (VLC) positioning of a mobile device comprises receiving, by a transceiver, positioning assistance data of a venue, where the positioning assistance data includes identifiers and positions of light fixtures in the venue, decoding, by a VLC signal decoder, one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers, determining, by a controller, a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue, and controlling, by the controller, the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
This application claims benefit of U.S. provisional application No. 62/401,101, “SYSTEMS AND METHODS TO MINIMIZE ACTUATOR POWER LEAKAGE” filed Sep. 28, 2016. The aforementioned United States patent application is hereby incorporated by reference in its entirety.
FIELD

The present disclosure relates to the field of positioning of mobile devices. In particular, the present disclosure relates to systems and methods for power optimization in visible light communication (VLC) positioning.
BACKGROUND

Recently, interest in radio over fiber technologies complementary to Radio Frequency (RF) technologies has increased due to the exhaustion of RF band frequencies, potential crosstalk between several wireless communication technologies, increased demand for communication security, and the advent of an ultra-high speed ubiquitous communication environment based on various wireless technologies. Consequently, visible light communication employing visible light LEDs has been developed to complement RF technologies.
In conventional visible light communication positioning applications, a camera of a mobile device is kept on while the user of the mobile device is in motion. Such conventional applications would require the image acquisition unit of the camera to continuously make adjustments based on the new scenes in the field of view of the camera, and cause the processing unit of the mobile device to continuously process the newly acquired image frames, such as making decisions regarding auto focus, auto exposure, and auto white balance. These operations may unnecessarily consume battery power and computing bandwidth of the mobile device. Therefore, there is a need for systems and methods for power optimization in visible light communication positioning.
SUMMARY

Embodiments of systems and methods for power optimization in visible light communication positioning are disclosed. In one embodiment, a method of power optimization in visible light communication (VLC) positioning of a mobile device comprises receiving, by a transceiver, positioning assistance data of a venue, where the positioning assistance data includes identifiers and positions of light fixtures in the venue, decoding, by a VLC signal decoder, one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers, determining, by a controller, a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue, and controlling, by the controller, the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
In another embodiment, a mobile device comprises a camera configured to receive visible light communication signals, a memory configured to store the visible light communication signals, a transceiver configured to receive positioning assistance data of a venue, where the positioning assistance data includes identifiers and positions of light fixtures in the venue, a VLC signal decoder configured to decode one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers, and a controller configured to determine a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue, and control the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
According to aspects of the present disclosure, systems and techniques are disclosed for minimizing power usage in cameras that have a voltage regulator shared between at least one camera component and a display. The camera of a phone has an actuator module which moves the lens between the macro and infinity positions. In some embodiments, the actuator includes a voice coil motor (VCM). A spring (or other biasing mechanism) may be coupled to the lens. In some lens/sensor arrangements, the infinity position is the lens position where the lens is moved to be nearest to the image sensor, and the macro position is the lens position where the lens is moved to be farthest from the image sensor. Typically, when the camera is powered down, a 3A algorithm (auto-focus, auto-exposure, auto-white balance) moves the lens to a default position, for example, the infinity position, using the actuator to physically move the lens. In most low-end and mid-tier phones, multimedia subsystems will share voltage regulators due to space restrictions of the system-on-chip (SOC) and to lower the cost of the power management integrated circuit (PMIC).
In some imaging systems, for example cell phones, a low-dropout voltage regulator (LDO) is shared between a display and a camera, and power is consumed by the camera sensor/components even after the camera is turned off if the display is still ON. Some components of the phone, like the actuator, have power on/off control only through a voltage regulator. That is, the actuator has no way to power down other than powering down the LDO.
As a result, in some imaging systems, the actuator does not turn off and continues to consume power (at least at a low level) even when camera software turns the camera off, because the LDO cannot be powered down while it is shared and needed by the display. This results in continuous power consumption by the camera even when the camera is off. The power consumption becomes negligible only if the camera and the display are both placed in an OFF or SUSPEND state, so that the voltage regulator can be powered down.
According to aspects of the present disclosure, an imaging device may include a camera system having an image sensor having sensing elements arranged in an imaging plane, a lens having at least one optical element, the lens and image sensor arranged in an optical path configured to propagate light through the lens and to the image sensor, and an actuator coupled to the lens, the actuator operative to move the lens to a plurality of focus positions each being a different distance from the image sensor. The imaging device also may include a display, a voltage regulator electrically connected to provide power to the display and to the camera system, a memory circuit configured to store information representing an actuator control value that corresponds to a low power focus position, the low power focus position being the lens position where the actuator uses the least amount of power, and a processor coupled to the memory circuit, the actuator and the display, the processor configured to control the actuator to move the lens to the low power focus position when the camera system is placed in an OFF state.
Such imaging devices can include other features, as described herein. For example, the imaging device may be configured such that in the OFF state, camera imaging functionality is disabled. In the OFF state, the voltage regulator supplies power to the camera system and to the display, and camera imaging functionality is disabled. In some embodiments, in the OFF state, the voltage regulator supplies power to the actuator and to the display, and the image sensor functionality is in an OFF state such that no image data is generated by the image sensor. In some embodiments, the voltage regulator is a low-dropout voltage regulator. In some embodiments, the low power focus position includes a predetermined value. In some embodiments, the low power focus position stored in the memory circuit is selected based on a type of camera. In some embodiments, the low power focus position is selected based on a type of actuator. In some embodiments, the imaging device further includes actuator control information stored in the memory circuit and used by the processor to control the actuator to move the lens to the low power focus position. In some embodiments, the memory circuit comprises two or more memory components. In some embodiments, the lens is positioned on one side of the image sensor, an optical axis of the lens is aligned perpendicular with the image sensor, and the actuator operates to move the lens in a direction substantially perpendicular to the imaging plane, towards and away from the image sensor.
According to aspects of the present disclosure, a method of operating a mobile imaging device may include supplying power from a voltage regulator to a display of an imaging device and supplying power from a voltage regulator to a camera system of the imaging device. The camera system may include an image sensor having sensing elements arranged in an imaging plane, a lens having at least one optical element, the lens and image sensor arranged in an optical path configured to propagate light through the lens and to the image sensor, and an actuator coupled to the lens, the actuator operative to move the lens to a plurality of focus positions each being a different distance from the image sensor. The method further comprises receiving a control signal indicating to place the camera system in an OFF state, retrieving from a memory circuit an actuator control value that corresponds to a low power focus position, the low power focus position being the lens position where the actuator uses the least amount of power, and controlling, with a processor, the actuator to move the lens to the low power focus position, wherein the voltage regulator supplies power to the display and to the camera system when the camera system is in the OFF state.
Such methods can include other features, as described herein. For example, in some embodiments, when in the OFF state, camera imaging functionality is disabled. In some embodiments, in the OFF state, the voltage regulator supplies power to the camera system and to the display, and camera imaging functionality is disabled. In some embodiments, in the OFF state, the voltage regulator supplies power to the actuator and to the display, and the image sensor functionality is in an OFF state such that no image data is generated by the image sensor. In some embodiments, the voltage regulator is a low-dropout voltage regulator. In some embodiments, the low power focus position includes a predetermined value. In some embodiments, the low power focus position stored in the memory circuit is selected based on a type of camera. In some embodiments, the low power focus position is selected based on a type of actuator. In some embodiments, the method uses actuator control information stored in the memory circuit, which is used by the processor to control the actuator to move the lens to the low power focus position. In some embodiments, the memory circuit comprises two or more memory components.
According to aspects of the present disclosure, a non-transitory computer readable medium may include instructions that when executed cause a processor to perform a method for reducing power consumption of an imaging device, the method including supplying power from a voltage regulator to a display of the imaging device, and supplying power from a voltage regulator to a camera system of the imaging device. The camera system may include an image sensor having sensing elements arranged in an imaging plane, a lens having at least one optical element, the lens and image sensor arranged in an optical path configured to propagate light through the lens and to the image sensor, and an actuator coupled to the lens, the actuator operative to move the lens to a plurality of focus positions each being a different distance from the image sensor. The method may further include receiving a control signal indicating to place the camera system in an OFF state, retrieving from a memory circuit an actuator control value that corresponds to a low power focus position, the low power focus position being the lens position where the actuator uses the least amount of power, and controlling, with a processor, the actuator to move the lens to the low power focus position, wherein the voltage regulator supplies power to the display and to the camera system when the camera system is in the OFF state.
The aforementioned features and advantages of the disclosure, as well as additional features and advantages thereof, will be more clearly understandable after reading detailed descriptions of embodiments of the disclosure in conjunction with the non-limiting and non-exhaustive aspects of the following drawings. The drawings are shown for illustration purposes and they are not drawn to scale.
Embodiments of systems and methods for power optimization in visible light communication positioning are disclosed. The following descriptions are presented to enable any person skilled in the art to make and use the disclosure. Descriptions of specific embodiments and applications are provided only as examples. Various modifications and combinations of the examples described herein will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other examples and applications without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples described and shown, but is to be accorded the scope consistent with the principles and features disclosed herein. The word “exemplary” or “example” is used herein to mean “serving as an example, instance, or illustration.” Any aspect or embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other aspects or embodiments.
According to aspects of the present disclosure, visible light communication is a method of communication using modulation of a light intensity emitted by a light fixture, such as a light emitting diode (LED) luminary device. Visible light is light having a wavelength in a range that is visible to the human eye. The wavelength of visible light is in the range of 380 to 780 nm. Since humans cannot perceive on-off cycles of an LED luminary device above a certain number of cycles per second, for example 150 Hz, LEDs may use Pulse Width Modulation (PWM) to increase their lifespan and save energy.
According to aspects of the present disclosure, light fixtures, such as LEDs 184a through 184f, may broadcast positioning signals by modulating their light output level over time in the visible light communication mode. LED light output can be modulated at relatively high frequencies. Using modulation frequencies in the kHz range ensures that the VLC signals will not cause any light flicker that could be perceived by the human eye, while at the same time allowing for sufficiently high data rates for positioning. The VLC signals can be designed in such a way that the energy efficiency of the light fixture is not compromised; this is achieved, for example, by using binary modulation. This type of modulation can be produced by highly efficient boost converters that are an integral component of PWM-dimmable LED drivers. In addition to being efficient and compatible with existing driver hardware, the VLC signal can also conform to the standard methods of dimming based on PWM and variation of the maximum current.
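As a rough sketch of the binary modulation idea described above, the following Python example builds an on-off-keyed, Manchester-encoded waveform that carries a fixture identifier at a kHz-range chip rate. The framing, identifier width, and rates are illustrative assumptions rather than the disclosed signal design.

```python
import numpy as np

def vlc_ook_waveform(fixture_id: int, id_bits: int = 16,
                     samples_per_chip: int = 10) -> np.ndarray:
    """Build a binary ON/OFF waveform carrying a fixture identifier.

    Modulating at a few kHz keeps the switching far above the ~150 Hz
    rate that humans can perceive, so the light appears steady while
    still carrying data for positioning.
    """
    # Illustrative framing: a fixed preamble followed by the ID bits.
    preamble = [1, 0, 1, 0, 1, 1]
    payload = [(fixture_id >> i) & 1 for i in reversed(range(id_bits))]
    bits = preamble + payload
    # Manchester-style encoding keeps the average light output (and thus
    # perceived brightness and energy efficiency) constant per bit.
    chips = []
    for b in bits:
        chips += [1, 0] if b else [0, 1]
    return np.repeat(np.array(chips, dtype=float), samples_per_chip)

waveform = vlc_ook_waveform(0x2A5C)
print(f"{waveform.size} samples, mean duty cycle = {waveform.mean():.2f}")
```

The constant 50% duty cycle is the point of the Manchester encoding: the identifier changes the pattern of transitions, not the average light level.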
The VLC signal transmitted by each light fixture conveys a unique identification which differentiates that light fixture from other light fixtures in the venue. The assistance data that contains a map of locations of the light fixtures and their identifiers may be created at the time the system is installed, and may be stored on a remote server. To determine its position, a mobile device may download the assistance data and may reference it every time the mobile device decodes light fixture identification from a VLC signal.
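The assistance data could take many forms; a minimal sketch, assuming a simple mapping from fixture identifier to venue coordinates (the field names are hypothetical), might look like this:

```python
# Hypothetical assistance data downloaded from a remote server: each
# fixture identifier maps to its (x, y, z) position in venue coordinates.
ASSISTANCE_DATA = {
    "venue_id": "warehouse-17",          # illustrative field names
    "fixtures": {
        "0x2A5C": {"x": 3.0, "y": 4.5, "z": 5.2},
        "0x2A5D": {"x": 6.0, "y": 4.5, "z": 5.2},
        "0x2A5E": {"x": 9.0, "y": 4.5, "z": 5.2},
    },
}

def fixture_position(decoded_id: str):
    """Look up a decoded fixture identifier in the assistance data."""
    entry = ASSISTANCE_DATA["fixtures"].get(decoded_id)
    if entry is None:
        return None  # unknown fixture; assistance data may be stale
    return (entry["x"], entry["y"], entry["z"])

print(fixture_position("0x2A5D"))  # -> (6.0, 4.5, 5.2)
```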
The identification may be either stored internally in the driver or may be supplied from an external system, such as a Bluetooth wireless mesh network. A light fixture may periodically switch to transmitting a new identification in order to prevent unauthorized use of the positioning infrastructure.
According to aspects of the present disclosure, the mobile device 182 may use a light fixture's identification as well as angle of arrival of light from the light fixture to estimate its position. Based on the mobile device's orientation and motion, the light fixtures visible at any position may remain substantially the same for a period of time. Thus, when the same light fixtures are available in the FOV, mobile device 182 can be configured to estimate its position by using angle of arrival of light and previously decoded identifiers of the light fixtures in the environment 30. Since angle of arrival estimation may take less time than full identity detection, image sensors can be configured to be enabled or disabled periodically. The disabling of the image sensors and the intermittent decoding of VLC messages can result in significant power savings over conventional solutions. In some implementations, to determine the FOV of the image sensor, the controller may first identify a subset of the pixel array where the source of the VLC signal is visible. Note that this region may have a larger signal-to-noise ratio than the rest of the image. The controller may identify this region by identifying pixels that are brighter than the others. For example, it may set a threshold T, such that if a pixel's luminance value is greater than T, the pixel may then be considered to be part of the FOV. The threshold T may, for instance, be the 50% luminance value of the image.
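A minimal sketch of this thresholding step, assuming an 8-bit luminance frame held in a NumPy array and reading the 50% threshold as a fraction of the frame's peak luminance, might look as follows:

```python
import numpy as np

def vlc_source_region(luma_frame: np.ndarray, fraction: float = 0.5):
    """Identify pixels likely belonging to the VLC light source.

    Pixels brighter than the threshold T (here, a fraction of the
    frame's maximum luminance) are treated as part of the source's
    region, where the VLC signal has a higher SNR than the rest of
    the image.
    """
    T = fraction * luma_frame.max()
    mask = luma_frame > T
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, mask
    # Bounding box of the bright region, usable as a crop for decoding.
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return bbox, mask

frame = np.random.randint(0, 60, (480, 640)).astype(np.uint8)
frame[100:140, 300:360] = 230        # synthetic bright fixture
print(vlc_source_region(frame)[0])   # -> (300, 100, 359, 139)
```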
According to aspects of the present disclosure, a controller/processor of the mobile device 182 can be configured to decode the light fixture identifiers for neighboring light fixtures or visible light fixtures in the environment. In one exemplary approach, the controller can be configured to decode light fixtures in the FOV of the mobile device. Using the decoded identifiers of the light fixtures and assistance data of the environment 280, the controller can estimate the position of the mobile device and determine the relative position of neighboring light fixtures. As the position/orientation of the mobile device changes, the controller may continue to use light fixture identifiers as well as angle of arrival to estimate the position of the mobile device.
In some embodiments, when the motion of a mobile device is low, that is, the mobile device is moving slowly or is stationary, and the same light fixtures remain visible, the mobile device may enter a reduced duty cycle state, where the image sensors may be switched off for a period of time and turned on only for a programmable duration. According to aspects of the present disclosure, in the reduced duty cycle state, the controller may skip decoding the light fixture identification and may instead measure the light pixels from the particular light fixture. In some implementations, light fixtures in the FOV of the image sensors can be detected based on just one or two full image sensor frames. Based on the last FOV information for a particular light fixture and the mobile device's orientation and motion information, the controller may predict the upcoming FOV of the particular light fixture in the environment. In addition, the controller may be configured to compute the likelihood between the predicted FOV and the observed/measured FOV. If the likelihood is high, then the controller may determine that the previously decoded identification of the light fixtures can be used for positioning.
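One way to sketch the skip-decoding decision described above is to score the agreement between the predicted and observed FOV regions; the intersection-over-union metric and the 0.7 threshold below are illustrative assumptions, not the disclosed likelihood computation.

```python
def region_overlap(predicted, observed) -> float:
    """Intersection-over-union of two (x0, y0, x1, y1) bounding boxes,
    used as a crude likelihood that the same fixture is still in view."""
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])
    ix0 = max(predicted[0], observed[0])
    iy0 = max(predicted[1], observed[1])
    ix1 = min(predicted[2], observed[2])
    iy1 = min(predicted[3], observed[3])
    if ix1 <= ix0 or iy1 <= iy0:
        return 0.0
    inter = (ix1 - ix0) * (iy1 - iy0)
    return inter / (area(predicted) + area(observed) - inter)

def can_skip_decoding(predicted, observed, threshold: float = 0.7) -> bool:
    """If the predicted and measured FOV agree well, reuse the previously
    decoded fixture identifier instead of re-running full VLC decoding."""
    return region_overlap(predicted, observed) >= threshold

print(can_skip_decoding((300, 100, 360, 140), (305, 104, 366, 143)))  # True
```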
In some embodiments, the controller may also examine the similarities between two image frames to determine the validity of previously decoded identifiers of the light fixtures. The position of the mobile device may be calculated using measurements of the angle of arrival of each light fixture's signal. Based on this angle computation and light fixture position information, the distance between the mobile device and the light fixtures in the FOV of the mobile device can be computed. Using the triangulation method, the mobile device's precise position may then be calculated based on the distances between the mobile device and the light fixtures in the FOV of the mobile device.
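A minimal sketch of this angle-of-arrival positioning step, assuming the device knows each fixture's position from the assistance data and has already derived a unit direction vector toward each fixture, could look as follows. Each ray is extended from the fixture down to the device height, and the candidate positions are averaged rather than solved by a full least-squares triangulation.

```python
import numpy as np

def estimate_position(fixtures, aoa_dirs, device_z: float = 0.0):
    """Estimate the device position from angle-of-arrival measurements.

    fixtures: known (x, y, z) fixture positions from the assistance data.
    aoa_dirs: unit vectors from the device toward each fixture, derived
              from pixel-level analysis of the image sensor.

    For each fixture, the ray from the device along the AoA direction is
    extended until it reaches the known fixture height, yielding one
    candidate device position; candidates are then averaged.
    """
    candidates = []
    for (fx, fy, fz), d in zip(fixtures, aoa_dirs):
        t = (fz - device_z) / d[2]            # distance along the ray
        candidates.append(np.array([fx, fy, fz]) - t * np.asarray(d))
    return np.mean(candidates, axis=0)

fixtures = [(3.0, 4.5, 5.2), (6.0, 4.5, 5.2)]
# Assumed AoA unit vectors for a device at roughly (4.0, 4.5, 1.5).
dirs = [np.array([-1.0, 0.0, 3.7]), np.array([2.0, 0.0, 3.7])]
dirs = [d / np.linalg.norm(d) for d in dirs]
print(estimate_position(fixtures, dirs, device_z=1.5))  # -> [4.0, 4.5, 1.5]
```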
The mobile device 300 may include a transceiver 310, motion sensors 311, and a camera (also referred to as image sensor) 307. The transceiver 310 is coupled to one or more antennas 320. The transceiver 310 provides a means for communicating with various other apparatus over a transmission medium. The transceiver 310 receives a signal from the one or more antennas 320, extracts information from the received signal, and provides the extracted information to the mobile device 300. In addition, the transceiver 310 receives information from the mobile device 300, and based on the received information, generates a signal to be applied to the one or more antennas 320.
The camera 307 provides a means for capturing VLC signal frames. The camera 307 captures a VLC signal frame from a light source, extracts information from the captured VLC signal frame, and provides the extracted information to the mobile device 300.
The motion sensors 311 may include, but are not limited to, an accelerometer, a gyroscope, and a magnetometer configured to detect motions and rotations of the mobile device. The accelerometer may perform better in detecting linear movements, the gyroscope may perform better in detecting rotations, and the magnetometer may perform better in detecting orientations of the mobile device. A combination of two or more such sensors may be used to detect movement, rotation, and orientation of the mobile device according to aspects of the present disclosure.
According to embodiments of the present disclosure, an accelerometer is a device that measures the acceleration of the mobile device. It measures the acceleration associated with the weight experienced by a test mass that resides in the frame of reference of the accelerometer. For example, an accelerometer measures a value even if it is stationary, because masses have weight even though there is no change of velocity. The accelerometer measures weight per unit of mass, a quantity also known as gravitational force or g-force. In other words, by measuring weight, an accelerometer measures the acceleration of the free-fall reference frame (inertial reference frame) relative to itself. In one approach, a multi-axis accelerometer can be used to detect the magnitude and direction of the proper acceleration (or g-force) as a vector quantity. In addition, the multi-axis accelerometer can be used to sense orientation as the direction of weight changes, coordinate acceleration as it produces g-force or a change in g-force, vibration, and shock. In another approach, a micro-machined accelerometer can be used to detect position, movement, and orientation of the mobile device.
According to embodiments of the present disclosure, a gyroscope is used to measure rotation and orientation of the mobile device, based on the principles of conservation of angular momentum. The accelerometer or magnetometer can be used to establish an initial reference for the gyroscope. After the initial reference is established, the gyroscope can be more accurate than the accelerometer or magnetometer in detecting rotation of the mobile device because it is less impacted by vibrations or by the electromagnetic fields generated by electrical appliances around the mobile device. A mechanical gyroscope can be a spinning wheel or disk whose axle is free to take any orientation. This orientation changes much less in response to a given external torque than it would without the large angular momentum associated with the gyroscope's high rate of spin. Since external torque is minimized by mounting the device in gimbals, its orientation remains nearly fixed, regardless of any motion of the platform on which it is mounted. In other approaches, gyroscopes based on other operating principles may also be used, such as electronic, microchip-packaged micro-electromechanical systems (MEMS) gyroscope devices, solid state ring lasers, fiber optic gyroscopes, and quantum gyroscopes.
According to embodiments of the present disclosure, a magnetometer can be used to measure orientations by detecting the strength or direction of magnetic fields around the mobile device. Various types of magnetometers may be used. For example, a scalar magnetometer measures the total strength of the magnetic field it is subjected to, and a vector magnetometer measures the component of the magnetic field in a particular direction, relative to the spatial orientation of the mobile device. In another approach, a solid-state Hall-effect magnetometer can be used. The Hall-effect magnetometer produces a voltage proportional to the applied magnetic field, and it can be configured to sense polarity.
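As an illustration of how two such sensors might be combined, the sketch below blends integrated gyroscope rate with magnetometer heading using a simple complementary filter; the blend coefficient and sample period are illustrative assumptions, and angle wraparound handling is omitted for brevity.

```python
def fuse_yaw(prev_yaw_deg: float, gyro_z_dps: float, dt_s: float,
             mag_yaw_deg: float, alpha: float = 0.98) -> float:
    """Complementary filter combining gyroscope and magnetometer yaw.

    The gyroscope integrates rotation rate smoothly but drifts over
    time; the magnetometer gives an absolute (but noisy) heading.
    Blending the two keeps the short-term smoothness of the gyroscope
    while the magnetometer slowly corrects the drift.
    """
    gyro_yaw = prev_yaw_deg + gyro_z_dps * dt_s   # dead-reckoned heading
    fused = alpha * gyro_yaw + (1.0 - alpha) * mag_yaw_deg
    return fused % 360.0

yaw = 90.0
for _ in range(10):                # ten 20 ms sensor samples
    yaw = fuse_yaw(yaw, gyro_z_dps=5.0, dt_s=0.02, mag_yaw_deg=91.0)
print(f"fused yaw: {yaw:.2f} deg")
```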
The mobile device 300 includes a controller (also referred to as a processor) 304 coupled to a computer-readable medium/memory 306, which can include, in some implementations, a non-transitory computer readable medium storing instructions for execution by one or more processors, such as controller/processor 304. The controller/processor 304 is responsible for general processing, including the execution of software stored on the computer-readable medium/memory 306. The software, when executed by the controller 304, causes the mobile device 300 to perform the various functions described herein.
According to aspects of the present disclosure, the camera or image sensor in the mobile device can be configured to extract a time domain VLC signal from a sequence of image frames that capture a given light fixture. The received VLC signal can be demodulated and decoded by the mobile device to produce a unique identification for a light fixture. Furthermore, the camera or image sensor can in parallel extract VLC signals from images containing multiple light fixtures that are visible in the field of view of the image sensor. In this manner, the mobile device may use multiple independent sources of information to confirm and refine its position.
Each pixel in an image sensor accumulates light energy coming from a narrow range of physical directions, so by performing pixel-level analysis the mobile device can precisely determine the angle of arrival of light from one or more light fixtures. This direction of angle of light enables the mobile device to compute its position relative to a light fixture to within a few centimeters.
By combining the position relative to a light fixture with the information about the location of that light fixture as determined based on the decoded identification coming from the positioning signal, the mobile device can determine its global position in the venue within accuracy of centimeters.
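A minimal sketch of the pixel-level angle-of-arrival computation, assuming an ideal pinhole camera model with a known focal length in pixels (lens distortion and device orientation are ignored here), might look as follows:

```python
import numpy as np

def pixel_to_aoa(px: float, py: float, cx: float, cy: float,
                 focal_px: float) -> np.ndarray:
    """Convert the centroid pixel of a detected fixture into a unit
    angle-of-arrival vector, assuming a pinhole camera model with
    principal point (cx, cy) and focal length in pixels.

    The returned vector points from the camera toward the fixture in
    the camera frame (z along the optical axis); rotating it by the
    device orientation yields the direction in venue coordinates.
    """
    d = np.array([px - cx, py - cy, focal_px])
    return d / np.linalg.norm(d)

# Fixture imaged 100 px right of center on a sensor with f = 1000 px:
v = pixel_to_aoa(740.0, 360.0, 640.0, 360.0, 1000.0)
print(v, "-> angle off axis:", np.degrees(np.arccos(v[2])), "deg")  # ~5.7 deg
```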
According to aspects of the present disclosure, the actuator movement is controlled via a digital to analog converter (DAC) register using an inter-IC (I2C) controller. Since the infinity/macro DAC values may not be the same across all sensors due to module manufacturing errors, the infinity and macro DAC values are stored in EEPROM calibration data for each sensor. Infinity is generally the position where the sensor consumes less power than at the macro position. Because each sensor's infinity position differs due to module manufacturing errors, different sensors consume different amounts of power at the infinity position.
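A rough sketch of this control path is shown below. The I2C address, register layout, 10-bit DAC width, and calibration codes are purely illustrative; real VCM driver ICs and EEPROM maps differ, and the smbus2 usage is just one plausible way to issue the write on a Linux-based platform.

```python
# A minimal sketch of moving the lens via the actuator's DAC over I2C.
# Address, register, and calibration values below are hypothetical.
from smbus2 import SMBus

ACTUATOR_I2C_ADDR = 0x0C        # hypothetical VCM driver address
EEPROM_INFINITY_DAC = 120       # per-module calibrated infinity code
EEPROM_MACRO_DAC = 820          # per-module calibrated macro code

def set_lens_dac(bus: SMBus, dac_code: int) -> None:
    """Write a 10-bit DAC code to the actuator to set the lens position."""
    dac_code = max(0, min(1023, dac_code))
    hi, lo = (dac_code >> 8) & 0x03, dac_code & 0xFF
    bus.write_i2c_block_data(ACTUATOR_I2C_ADDR, 0x00, [hi, lo])

def park_lens_low_power(bus: SMBus) -> None:
    """Move the lens to the calibrated infinity code, where the VCM
    coil current (and therefore actuator power draw) is lowest."""
    set_lens_dac(bus, EEPROM_INFINITY_DAC)

with SMBus(1) as bus:           # I2C bus number is platform-specific
    park_lens_low_power(bus)
```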
To minimize power consumption, the system can be configured to place the actuator in a lowest power mode (for example close to or at infinity) when the LDO is shared with others and the camera is turned off even though the display is still on (such that the LDO shared by the actuator and display is still activated).
The process of putting the actuator into the low power mode can be determined with a module integrator, which can be configured to find the low power state of the actuator; this state may be close to the EEPROM-saved value for infinity. In other words, when the camera is turned off but the LDO is still activated (in situations when it is shared by the display), the actuator places the lens in a position that is predetermined to be the lowest power consumption position for the lens. Information about this position is stored in memory and may be based on the specific configuration of the sensor/actuator. When this process is implemented, a significant power consumption improvement of 2 to 4 percent may be observed (i.e., 2 to 4 percent less power is consumed for the same set of operations).
In the example shown, the mobile device 100 may also include a power supply 128 and a display 125. The power supply 128 may be coupled to the camera system 124 and supplies power to the actuator 118. The power supply 128 may also be coupled to the display 125 and provides power to the display 125. The display may be any type of an electronic display having, for example, a plurality of LCD, LED, or OLED display elements. The power supply may be any type of power supply component that supplies power to both the display 125 and the camera system 124, and may be controlled such that if one of either the display 125 or the camera system 124 is activated (needs power), power is also available and is supplied to the other component. In some embodiments the power supply can be a voltage regulator, for example a low dropout voltage regulator (LDO).
The table 400 illustrates one example of how information of low power focus positions may be ordered (or related) for an embodiment where a plurality of low power lens positions are stored, each for a different set or type of camera components, in an imaging device. In this example, table 400 includes a first column 465 that includes information representing different components that may be incorporated into the camera system 124. Table 400 also includes a second column 470 that includes information representing a low power lens position that corresponds to each of the component sets in the first column 465. For example, if the camera system has component set 3, the lens position for minimal power usage is LENS POSITION C. Accordingly, a processor can be configured to retrieve the stored information of a low power lens position corresponding to whatever camera component set is included on the imaging device 200 from such a table of stored information, and use the retrieved information to place the lens in the position that uses the least amount of power when the camera is not being used but power is being supplied to the camera system 124 (for example, to the actuator 118). The table 400 illustrates an example of how information contained therein can be ordered, and an example of what type of information may be stored, according to some embodiments. In various embodiments, information may be stored in a lookup table, a list, or any type of relational arrangement such that a particular low power focus position can be retrieved from memory and used to control an actuator to move a lens to a position that uses the least amount of power for that particular actuator.
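An illustrative counterpart to table 400, using a Python dictionary as the relational arrangement and hypothetical component-set names and DAC codes, might look like this:

```python
# Each camera component set maps to the lens position (here, a DAC code)
# that minimizes actuator power for that hardware combination.
LOW_POWER_LENS_POSITIONS = {
    "component_set_1": 110,   # "LENS POSITION A" (hypothetical codes)
    "component_set_2": 135,   # "LENS POSITION B"
    "component_set_3": 98,    # "LENS POSITION C"
}

def low_power_position(component_set: str, default: int = 0) -> int:
    """Retrieve the stored low power lens position for the camera
    components actually present on this imaging device."""
    return LOW_POWER_LENS_POSITIONS.get(component_set, default)

print(low_power_position("component_set_3"))   # -> 98
```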
The camera system 124 includes an image sensor 107, an actuator 118, and a lens 112. The image sensor 107 includes sensing elements arranged in an imaging plane. The lens 112 includes at least one optical element 213 through which light propagates in a path through the lens to the image sensor 107. Light propagates to the lens 112 from a scene through aperture 123, which allows light to pass through the housing of the imaging device 200 and enter the camera system 124. Light passing through the aperture 123 is refracted by the lens 112 as it propagates through the lens 112 to the image sensor 107. In some embodiments, the actuator 118 is a voice coil motor (VCM). Other embodiments can include other types of actuators. The actuator 118 is coupled to the lens 112 and is configured to move the lens 112 to a plurality of lens positions each at a different distance from the image sensor 107. For example, the image processor 120 can control the actuator 118 to move the lens 112 to a desired position for focusing or for an optical zoom operation, or to place the lens 112 in a determined position when the camera system 124 is deactivated such that the least amount (or a minimal amount) of power is consumed by the actuator 118. The imaging device 200 also includes a display 125, a device processor 250, and a power supply 128. The power supply 128 is coupled to, and supplies power to, the display 125 and/or the camera system 124, including the actuator 118 via power line 217 for example. In some embodiments, the power supply 128 supplies power to other components of imaging device 200 as well. Because the power supply 128 supplies power to both the display 125 and the camera system 124, powering down (turning off) the power supply 128 will affect the power received by both the camera system 124 (and actuator 118) and the display 125.
Imaging device 200 may be a cell phone, digital camera, tablet computer, personal digital assistant, or the like. Some embodiments of imaging device 200 can be incorporated into a vehicle-based imaging system, for example an unmanned aerial vehicle. There are many portable computing devices in which an imaging system such as is described herein would provide advantages. Imaging device 200 may also be a stationary computing device or any device in which such an imaging system would be advantageous. A plurality of applications may be available on imaging device 200. These applications may include traditional photographic and video applications, panoramic image generation, stereoscopic imaging such as three-dimensional images and/or three-dimensional video, three-dimensional modeling, and three-dimensional object and/or landscape mapping, to name a few.
The image processor 120 may be configured to perform various processing operations on received image data. Examples of image processing operations include cropping, scaling (e.g., to a different resolution), image format conversion, image filtering (e.g., spatial image filtering), lens artifact or defect correction, stereoscopic matching, depth map generation, etc. In some embodiments, the image processor 120 can be a chip in a three-dimensional wafer stack including the image sensor 107 of the camera system 124, for example a RICA processor. In such embodiments the working memory 126 and memory 230 can be incorporated as hardware or software of the image processor 120. In some embodiments, image processor 120 may be a general purpose processing unit or a processor specially designed for imaging applications. Image processor 120 may, in some embodiments, comprise a plurality of processors. Certain embodiments may have a processor dedicated to each image captured and transmitted to the image processor 120. In some embodiments, image processor 120 may be one or more dedicated image signal processors (ISPs).
As shown, the image processor 120 is connected to a memory 230 (any “memory” described herein is also referred to herein as a “memory circuit” indicating that the memory may be hardware or media that is configured to store information) and a working memory 126. In the illustrated embodiment, the memory 230 stores capture control module 235, actuator control module 240, and operating system 245. These modules include instructions that configure the image processor 120 to perform various image processing and device management tasks. Working memory 126 may be used by image processor 120 to store a working set of processor instructions contained in the modules of memory 230. Alternatively, working memory 126 may also be used by image processor 120 to store dynamic data created during the operation of imaging device 200. As discussed above, in some embodiments the working memory 126 and memory 230 can be incorporated as hardware or software of the image processor 120. In some embodiments, the functionality of the image processor 120 and the device processor 250 are combined to be performed by the same processor.
As mentioned above, the image processor 120 is configured by several modules stored in the memory 230. The capture control module 235 may include instructions that configure the image processor 120 to adjust the capture parameters (for example exposure time, focus position, and the like) of the image sensor 107 and optionally the camera system 124. Capture control module 235 may further include instructions that control the overall image capture functions of the imaging device 200. For example, capture control module 235 may include instructions that call subroutines to configure the image processor 120.
Actuator control module 240 may comprise instructions that configure the image processor 120 to control the actuator 118. For example, the actuator control module 240 may configure the image processor 120 (or another processor) to determine whether the camera system 124 is ON or OFF, determine whether the display 125 is ON or OFF, and provide certain control actions for the actuator 118 depending on the activation states of the display 125 and the camera system 124. The actuator control module 240 may also configure the image processor 120 (or another processor) to retrieve information from memory 230 or working memory 126 and use the retrieved information to control the actuator 118. For example, information indicating a low power focus position of the lens 112 may be stored in a memory 230 or 126 of the imaging device 200. The low power focus position is a position of the lens 112 such that, when the actuator 118 moves the lens to that position, the actuator 118 consumes the lowest amount of power. This position is not necessarily at an infinity position of the lens. The particular low power focus position can depend on the components of the camera system 124. For example, a particular low power focus position may depend on the type of actuator being used in the camera system 124, and even the particular make and/or model of the actuator 118.
If the image processor 120 determines that the display 125 is in an active state and the camera system 124 is in an inactive state (for example, an OFF state), the image processor 120 can operate to retrieve the low power focus position information from memory 230 or 126, and control the actuator 118 to move the lens 112 to a position that corresponds to the low power lens position. Examples of the various "states" of the imaging device 200 are described herein.
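A minimal sketch of this decision logic, with hypothetical actuator and memory interfaces standing in for the real driver and memory 230/126, might look as follows:

```python
from dataclasses import dataclass

@dataclass
class PowerState:
    display_on: bool
    camera_on: bool

def on_camera_state_change(state: PowerState, actuator, memory) -> None:
    """Sketch of the actuator control decision described above.

    If the display still needs the shared voltage regulator while the
    camera is off, the regulator cannot be powered down, so the lens is
    parked at its stored low power focus position instead.
    """
    if state.display_on and not state.camera_on:
        dac = memory["low_power_focus_dac"]   # retrieved from memory 230/126
        actuator.move_to(dac)                  # park lens; minimal coil current
    elif not state.display_on and not state.camera_on:
        actuator.power_down()                  # regulator may be switched off

class FakeActuator:                            # stand-in for the real driver
    def move_to(self, dac): print(f"lens parked at DAC {dac}")
    def power_down(self): print("actuator powered down")

on_camera_state_change(PowerState(display_on=True, camera_on=False),
                       FakeActuator(), {"low_power_focus_dac": 110})
```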
Operating system module 245 configures the image processor 120 to manage the working memory 126 and the processing resources of imaging device 200. For example, operating system module 245 may include device drivers to manage hardware resources such as the image sensor 107, the power supply 128, and storage 210. Therefore, in some embodiments, instructions contained in the image processing modules discussed above may not interact with these hardware resources directly, but instead interact through standard subroutines or application programming interfaces (APIs) located in operating system component 245. Instructions within operating system 245 may then interact directly with these hardware components. Operating system module 245 may further configure the image processor 120 to share information with device processor 250.
Device processor 250 may be configured to control the display 125 to display captured images, or a preview of a captured image, to a user. The display 125 may also be configured to provide a view finder displaying a preview image for use prior to capturing an image, or may be configured to display a captured image stored in memory or recently captured by the user. The display 125 may comprise an LCD or LED screen and may implement touch sensitive technologies, for example providing a user interface for controlling device functions.
Device processor 250 or image processor 120 may write data to storage module 210, for example data representing captured images and/or depth information. While storage module 210 is represented graphically as a traditional disk device, those with skill in the art would understand that the storage module 210 may be configured as any storage media device. For example, the storage module 210 may include a disk drive, such as a hard disk drive, optical disk drive or magneto-optical disk drive, or a solid state memory such as a FLASH memory, RAM, ROM, and/or EEPROM. The storage module 210 can also include multiple memory units, and any one of the memory units may be configured to be within the imaging device 200, or may be external to the imaging device 200. For example, the storage module 210 may include a ROM memory containing system program instructions stored within the imaging device 200. The storage module 210 may also include memory cards or high speed memories configured to store captured images which may be removable from the imaging device 200. Though not illustrated, imaging device 200 may include one or more ports or devices for establishing wired or wireless communications with a network or with another device.
At block 515, the method 500 further includes receiving a control signal indicating to place the camera system in an OFF state. Even though the camera is in the OFF state, power may still be supplied to the actuator because the camera system and the display share the power supply, and the power supply cannot be de-activated while the display is being used, even if the camera system is de-activated (or OFF). At block 520, the method 500 includes retrieving from a memory circuit an actuator control value, for example, information that corresponds to (or represents) a lens position that is a low power focus position, where the low power focus position is a lens position where the actuator uses the least amount of power. The low power focus position can be dependent on the particular components used in the camera system (for example, the actuator). Accordingly, in some embodiments, information corresponding to a plurality of low power lens positions, each for different types of components of the camera system, may be stored in memory and retrieved when needed. An example of an embodiment of such ordered information is illustrated in the table 400 described above.
Such methods can include other features, as described herein. For example, in some embodiments, when in the OFF state, camera imaging functionality is disabled. In some embodiments, in the OFF state, the voltage regulator supplies power to the camera system and to the display, and camera imaging functionality is disabled. In some embodiments, in the OFF state, the voltage regulator supplies power to the actuator and to the display, and the image sensor functionality is in an OFF state such that no image data is generated by the image sensor. In some embodiments, the voltage regulator is a low-dropout voltage regulator. In some embodiments, the low power focus position includes a predetermined value. In some embodiments, the low power focus position stored in the memory circuit is selected based on a type of camera. In some embodiments, the low power focus position is selected based on a type of actuator. In some embodiments, the method uses actuator control information stored in the memory circuit, which is used by the processor to control the actuator to move the lens to the low power focus position. In some embodiments, the memory circuit comprises two or more memory components.
According to aspects of the present disclosure, the method performed in block 702 may further include the method performed in block 704. In block 704, the method of monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device may include, for each light fixture in the field of view of the mobile device, measuring light pixels of each light fixture for at least one image sensor frame, and determining the field of view for each light fixture based on the light pixels measured for the at least one image sensor frame.
According to aspects of the present disclosure, the mobile device may be configured to perform one or more of the methods described in block 712, block 714, block 716, block 718, or block 720, alone or in combination. For example, in one implementation, the mobile device may be configured to perform the methods in blocks 712, 716, and 720. In another implementation, the mobile device may be configured to perform the methods in blocks 712, 714, and 718. In yet another implementation, the mobile device may be configured to perform the methods in all the blocks from 712 to 720.
According to aspects of the present disclosure, camera 801 includes a lens 850, a shutter 852, a photo detector array 854 (which is an image sensor), a shutter control module 856, a photo detector readout module 858, an auto exposure lock activation module 860, and an interface module 862. The shutter control module 856, photo detector readout module 858, and interface module 862 are communicatively coupled together via bus 864. In some embodiments, camera 801 may further include the auto exposure lock activation module 860. Shutter control module 856, photo detector readout module 858, and auto exposure lock activation module 860 may be configured to receive control messages from controller/processor 802 via bus 809, interface module 862, and bus 864. Photo detector readout module 858 may communicate readout information of photo detector array 854 to controller/processor 802 via bus 864, interface module 862, and bus 809. Thus, the image sensor, such as the photo detector array 854, can be communicatively coupled to the controller/processor 802 via photo detector readout module 858, bus 864, interface module 862, and bus 809.
Shutter control module 856 may control the shutter 852 to expose different rows of the image sensor to input light at different times, for example, under the direction of controller/processor 802. Photo detector readout module 858 may output information to the processor, for example, pixel values corresponding to the pixels of the image sensor.
Input module 806 may include a wireless radio receiver module 810 and a wired and/or optical receiver interface module 814. Output module 808 may include a wireless radio transmitter module 812 and a wired and/or optical transmitter interface module 816. Wireless radio receiver module 810, such as a radio receiver supporting OFDM and/or CDMA, may receive input signals via receive antenna 818. Wireless radio transmitter module 812, such as a radio transmitter supporting OFDM and/or CDMA, may transmit output signals via transmit antenna 820. In some embodiments, the same antenna can be used for transmitting and receiving signals. Wired and/or optical receiver interface module 814 may be communicatively coupled to the Internet and/or other network nodes, for example via a backhaul, and receives input signals. Wired and/or optical transmitter interface module 816 may be communicatively coupled to the Internet and/or other network nodes, for example via a backhaul, and may transmit output signals.
In various embodiments, controller/processor 802 can be configured to: sum pixel values in each row of pixel values corresponding to a first region of an image sensor to generate a first array of pixel value sums, at least some of the pixel value sums representing energy recovered from different portions of the VLC light signal, the different portions being output at different times and with different intensities; and perform a first demodulation operation on the first array of pixel value sums to recover information communicated by the VLC signal.
In some embodiments, controller/processor 802 can be further configured to: identify, as a first region of the image sensor, a first subset of pixel sensor elements in a sensor where the VLC signal is visible during a first frame time. In some such embodiments, controller/processor 802 can be further configured to identify, a second region of the image sensor corresponding to a second subset of pixel sensor elements in the sensor where the VLC signal may be visible during a second frame time, the first and second regions being different. In some embodiments, controller/processor 802 is further configured to: sum pixel values in each row of pixel values corresponding to the second region of the image sensor to generate a second array of pixel value sums, at least some of the pixel value sums in the second array representing energy recovered from different portions of the VLC light signal, the different portions being output at different times and with different intensities; and perform a second demodulation operation on the second array of pixel value sums to recover information communicated by the VLC signal; and where the first demodulation operation produces a first symbol value and the second demodulation produces a second symbol value.
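A minimal sketch of the row-summing and demodulation steps, assuming a rolling-shutter row sample rate and a small set of candidate tone frequencies (both illustrative), might look as follows. Because rows are exposed at different times, summing each row inside the bright region yields an array of temporally sequential light-energy measurements; removing the DC (ambient) component and applying a DFT then identifies the transmitted tone.

```python
import numpy as np

def rowsum_demodulate(frame: np.ndarray, region, fs_rows_hz: float,
                      candidate_hz=(1000.0, 2000.0, 4000.0)) -> float:
    """Recover a VLC tone from a single rolling-shutter frame.

    frame:      2D luminance frame (rows are sequential in time).
    region:     (x0, y0, x1, y1) bright region containing the fixture.
    fs_rows_hz: row readout rate, i.e., the effective sample rate.
    """
    x0, y0, x1, y1 = region
    row_sums = frame[y0:y1, x0:x1].sum(axis=1).astype(float)
    row_sums -= row_sums.mean()               # remove the DC (ambient) level
    spectrum = np.abs(np.fft.rfft(row_sums))
    freqs = np.fft.rfftfreq(row_sums.size, d=1.0 / fs_rows_hz)
    # Pick the candidate tone with the most energy near its DFT bin.
    energies = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_hz]
    return candidate_hz[int(np.argmax(energies))]

# Synthetic frame: 480 rows read out at 30 kHz carrying a 2 kHz square tone.
t = np.arange(480) / 30000.0
frame = (128 + 60 * np.sign(np.sin(2 * np.pi * 2000.0 * t)))[:, None]
frame = np.tile(frame, (1, 640))
print(rowsum_demodulate(frame, (200, 0, 400, 480), 30000.0))  # -> 2000.0
```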
In various embodiments, the recovered information includes a first symbol value, and different information is recovered from the VLC signal over a period of time. In some embodiments, the array of pixel value sums represents an array of temporally sequential light signal energy measurements made over a period of time.
In various embodiments, the portion of the VLC signal corresponding to a first symbol from which the first symbol value is produced has a duration equal to or less than the duration of a frame captured by the image sensor.
In some embodiments, controller/processor 802 is configured to identify a frequency from among a plurality of alternative frequencies, as part of being configured to perform a demodulation operation. In some embodiments, the transmitted VLC signal includes pure tones or square waves corresponding to tone frequencies equal to or greater than 150 Hz, and the lowest frequency component of the VLC signal may be 150 Hz or larger. In some embodiments, the transmitted VLC signal is a digitally modulated signal with binary amplitude (ON or OFF). In some such embodiments, the transmitted VLC signal is a digitally modulated signal with binary ON-OFF signals whose frequency content is at least 150 Hz.
In some embodiments, controller/processor 802 is configured to perform one of: an OFDM demodulation, CDMA demodulation, Pulse Position Modulation (PPM) demodulation, or ON-OFF keying demodulation to recover modulated symbols, as part of being configured to perform a demodulation operation.
In some embodiments, the image sensor may be a part of a camera that supports an auto exposure lock which, when enabled, disables automatic exposure, and controller/processor 802 is further configured to: activate the auto exposure lock; and capture the pixel values using a fixed exposure time setting.
In some embodiments, controller/processor 802 is further configured to detect a beginning of a codeword including a predetermined number of symbols. In some such embodiments, controller/processor 802 is configured to: detect a predetermined VLC synchronization signal having a duration equal to or less than the duration of a frame; and interpret the VLC synchronization signal as an identification of the beginning of the codeword, as part of being configured to detect a beginning of a codeword.
One of the benefits of the disclosed positioning method is that VLC-based positioning does not suffer from the uncertainty associated with the measurement models used by other positioning methods. For example, RF-signal-strength-based approaches may suffer from unpredictable multipath signal propagation. On the other hand, VLC-based positioning uses line-of-sight paths whose direction can be more precisely determined using the image sensor.
Another benefit of the disclosed positioning method is that, in addition to providing the position of the device in the horizontal plane, the disclosed positioning method may also provide the position of the mobile device in the vertical dimension (the Z-axis). This is a benefit of using the angle of arrival of light, which is a three dimensional vector. The ability to obtain accurate height estimates can enable new applications such as autonomous navigation and operation of drones and forklifts in warehouses and on manufacturing floors.
Another benefit of using directionality of the light vectors is that the mobile device can determine its orientation in the horizontal (X-Y) plane, which is also referred to as the mobile device's azimuth or yaw angle. This information can be used to inform a user which way the user is holding the mobile device relative to other items in the venue. By contrast, a global positioning system (GPS) receiver determines the heading from a time sequence of position estimates, which may require the user to move in a certain direction before it can determine which way the user is going. With the disclosed approach, the orientation/heading is determined as soon as the first position is computed.
Moreover, the disclosed positioning method may have low latency and a high update rate. Typical indoor positioning systems that use RF signals may require many measurements to be taken over time and space in order to get a position fix. With the disclosed approach, the time to first fix can be on the order of 0.1 second and the update rate can be as high as 30 Hz. This ensures a responsive and lively user experience and can even satisfy many of the challenging drone/robot navigation applications.
Furthermore, the disclosed positioning method has better scalability than conventional positioning methods. Conventional positioning methods that require two-way communication between a mobile device and infrastructure typically do not scale well as the number of mobile users and infrastructure access points increases. This is because each mobile-to-infrastructure communication creates interference for other mobile devices as they attempt to position themselves. In the case of RTT-based positioning in Wi-Fi frequency bands, the interference may also cause a drop in total WLAN throughput. On the other hand, the disclosed VLC-based positioning can be inherently scalable because it employs one-way communication. As a result, the performance of the disclosed positioning approach does not degrade regardless of how many users and transmitters are simultaneously active in the venue.
Another benefit of the disclosed method of visible light communication positioning is that VLC enables communication through widely available bandwidth without regulation. In addition, since users can observe the location and direction at which light corresponding to a VLC communication travels, information regarding coverage can be accurately ascertained. VLC can also offer reliable security and low power consumption. In light of these and other advantages, VLC can be applied in locations where the use of RF communications is prohibited, such as hospitals or airplanes, and can also provide additional information services through electronic display boards.
The methodologies described herein may be implemented by various means depending upon the particular application. For example, such methodologies may be implemented in hardware, firmware, and/or software. In a hardware implementation, for example, a processing unit may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other units designed to perform the functions described herein, or combinations thereof.
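For example, a firmware/software implementation of the motion-gated power control might follow the sketch below. The threshold values, the hysteresis between the first and second thresholds, and the simulated motion samples are assumptions of this example; the disclosure leaves these implementation-defined.

```python
from dataclasses import dataclass

# Illustrative placeholder values; the disclosure does not fix them.
MOTION_HIGH = 0.5  # first threshold: enter the reduced power mode
MOTION_LOW = 0.1   # second threshold: return to the normal power mode

@dataclass
class PowerController:
    reduced_power: bool = False

    def step(self, motion: float) -> None:
        """Gate camera, video front end, and decoder work on motion."""
        if not self.reduced_power and motion > MOTION_HIGH:
            # Hold the lenses at a fixed focal length, stop generating and
            # transferring 3A statistics, and decode frames intermittently.
            self.reduced_power = True
        elif self.reduced_power and motion < MOTION_LOW:
            # Resume auto focus, auto exposure, auto white balance, and
            # full-rate VLC decoding.
            self.reduced_power = False

ctrl = PowerController()
for m in (0.8, 0.6, 0.3, 0.05):   # simulated motion samples
    ctrl.step(m)
    print(m, ctrl.reduced_power)  # True, True, True, False
```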
Some portions of the detailed description included herein are presented in terms of algorithms or symbolic representations of operations on binary digital signals stored within a memory of a specific apparatus or special purpose computing device or platform. In the context of this particular specification, the term “specific apparatus” or the like includes a general purpose computer once it is programmed to perform particular operations pursuant to instructions from program software. Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations or similar signal processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals, or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels. Unless specifically stated otherwise, as apparent from the discussion herein, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer, special purpose computing apparatus or a similar special purpose electronic computing device. In the context of this specification, therefore, a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device.
Wireless communication techniques described herein may be in connection with various wireless communications networks such as a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms “network” and “system” may be used interchangeably herein. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, or any combination of the above networks, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000 and Wideband-CDMA (W-CDMA), to name just a few radio technologies. Here, cdma2000 may include technologies implemented according to IS-95, IS-2000, and IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named “3rd Generation Partnership Project” (3GPP). Cdma2000 is described in documents from a consortium named “3rd Generation Partnership Project 2” (3GPP2). 3GPP and 3GPP2 documents are publicly available. 4G Long Term Evolution (LTE) communications networks may also be implemented in accordance with claimed subject matter, in an aspect. A WLAN may comprise an Institute of Electrical and Electronics Engineers (IEEE) 802.11x network, and a WPAN may comprise a Bluetooth® network or an IEEE 802.15x network, for example. Wireless communication implementations described herein may also be used in connection with any combination of WWAN, WLAN or WPAN.
In another aspect, as previously mentioned, a wireless transmitter or access point may comprise a femtocell, utilized to extend cellular telephone service into a business or home. In such an implementation, one or more mobile devices may communicate with a femtocell via a CDMA cellular communication protocol, for example, and the femtocell may provide the mobile device access to a larger cellular telecommunication network by way of another broadband network such as the Internet.
Techniques described herein may be used with a GPS that includes any one of several global navigation satellite systems (GNSS) and/or combinations of GNSS. Furthermore, such techniques may be used with positioning systems that utilize terrestrial transmitters acting as “pseudolites”, or a combination of satellite vehicles (SVs) and such terrestrial transmitters. Terrestrial transmitters may, for example, include ground-based transmitters that broadcast a pseudo noise (PN) code or other ranging code (for example, similar to a GPS or CDMA cellular signal). Such a transmitter may be assigned a unique PN code so as to permit identification by a remote receiver. Terrestrial transmitters may be useful, for example, to augment a GPS in situations where GPS signals from an orbiting SV might be unavailable, such as in tunnels, mines, buildings, urban canyons or other enclosed areas. Another implementation of pseudolites is known as radio-beacons. The term “SV”, as used herein, is intended to include terrestrial transmitters acting as pseudolites, equivalents of pseudolites, and possibly others. The terms “GPS signals” and/or “SV signals”, as used herein, are intended to include GPS-like signals from terrestrial transmitters, including terrestrial transmitters acting as pseudolites or equivalents of pseudolites.
The terms “and” and “or” as used herein may include a variety of meanings that will depend at least in part upon the context in which they are used. Typically, “or,” if used to associate a list such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B, or C, here used in the exclusive sense. Reference throughout this specification to “one example” or “an example” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example of claimed subject matter. Thus, the appearances of the phrase “in one example” or “an example” in various places throughout this specification are not necessarily all referring to the same example. Furthermore, the particular features, structures, or characteristics may be combined in one or more examples. Examples described herein may include machines, devices, engines, or apparatuses that operate using digital signals. Such signals may comprise electronic signals, optical signals, electromagnetic signals, or any form of energy that provides information between locations.
While there has been illustrated and described what are presently considered to be example features, it will be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from claimed subject matter. Additionally, many modifications may be made to adapt a particular situation to the teachings of claimed subject matter without departing from the central concept described herein. Therefore, it is intended that claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter may also include all aspects falling within the scope of the appended claims, and equivalents thereof.
Claims
1. A method of power optimization in visible light communication (VLC) positioning of a mobile device comprising:
- receiving, by a transceiver, positioning assistance data of a venue, wherein the positioning assistance data includes identifiers and positions of light fixtures in the venue;
- decoding, by a VLC signal decoder, one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers;
- determining, by a controller, a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue; and
- controlling, by the controller, the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
2. The method of claim 1, further comprising:
- in the reduced power mode,
- monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and
- determining a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
3. The method of claim 2, wherein monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device comprises:
- for each light fixture in the field of view of the mobile device, measuring light pixels of each light fixture for at least one image sensor frame; and determining the field of view for each light fixture based on the light pixels measured for the at least one image sensor frame.
4. The method of claim 1, wherein controlling the mobile device to operate in a reduced power mode comprises:
- controlling an actuator of a camera of the mobile device to position one or more lenses of the camera at a fixed focal length during a period while the motion of the mobile device is above a first threshold.
5. The method of claim 1, wherein controlling the mobile device to operate in a reduced power mode further comprises:
- controlling a video front end engine of the mobile device to stop generating statistics data during a period while the motion of the mobile device is above a first threshold.
6. The method of claim 1, wherein controlling the mobile device to operate in a reduced power mode further comprises:
- controlling a video front end engine of the mobile device to stop transferring statistics data to a memory during a period while the motion of the mobile device is above a first threshold.
7. The method of claim 1, wherein controlling the mobile device to operate in a reduced power mode further comprises:
- controlling an image processing engine of the mobile device to stop processing data in support of auto focus, auto white balance, and auto exposure during a period while the motion of the mobile device is above a first threshold.
8. The method of claim 1, wherein controlling the mobile device to operate in a reduced power mode further comprises:
- controlling the VLC signal decoder of the mobile device to intermittently decode incoming video frames during a period while the motion of the mobile device is above a first threshold.
9. The method of claim 1, further comprising:
- detecting that the motion of the mobile device with respect to the one or more light fixtures is below a second threshold; and
- controlling the mobile device to operate in a normal power mode based on the motion of the mobile device with respect to the one or more light fixtures being below the second threshold.
10. A mobile device, comprising:
- a camera configured to receive visible light communication signals;
- a memory configured to store the visible light communication signals;
- a transceiver configured to receive positioning assistance data of a venue, wherein the positioning assistance data includes identifiers and positions of light fixtures in the venue;
- a VLC signal decoder configured to decode one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers;
- a controller configured to:
- determine a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue; and
- control the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
11. The mobile device of claim 10, wherein the controller is further configured to:
- in the reduced power mode,
- monitor angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and
- determine a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
12. The mobile device of claim 11, wherein the controller is further configured to:
- for each light fixture in the field of view of the mobile device, measure light pixels of each light fixture for at least one image sensor frame; and determine the field of view for each light fixture based on the light pixels measured for the at least one image sensor frame.
13. The mobile device of claim 10, wherein the controller is further configured to:
- control an actuator of the camera of the mobile device to position one or more lenses of the camera at a fixed focal length during a period while the motion of the mobile device is above a first threshold.
14. The mobile device of claim 10, wherein the controller is further configured to:
- control a video front end engine of the mobile device to stop generating statistics data during a period while the motion of the mobile device is above a first threshold.
15. The mobile device of claim 10, wherein the controller is further configured to:
- control a video front end engine of the mobile device to stop transferring statistics data to the memory during a period while the motion of the mobile device is above a first threshold.
16. The mobile device of claim 10, wherein the controller is further configured to:
- control an image processing engine of the mobile device to stop processing data in support of auto focus, auto white balance, and auto exposure during a period while the motion of the mobile device is above a first threshold.
17. The mobile device of claim 10, wherein the controller is further configured to:
- control the VLC signal decoder of the mobile device to intermittently decode incoming video frames during a period while the motion of the mobile device is above a first threshold.
18. The mobile device of claim 10, wherein the controller is further configured to:
- detect that the motion of the mobile device with respect to the one or more light fixtures is below a second threshold; and
- control the mobile device to operate in a normal power mode based on the motion of the mobile device with respect to the one or more light fixtures being below the second threshold.
19. A mobile device, comprising:
- means for receiving positioning assistance data of a venue, wherein the positioning assistance data includes identifiers and positions of light fixtures in the venue;
- means for decoding one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers;
- means for determining a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue; and
- means for controlling the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
20. The mobile device of claim 19, further comprising:
- in the reduced power mode,
- means for monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and
- means for determining a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
21. The mobile device of claim 20, wherein monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device comprises:
- for each light fixture in the field of view of the mobile device, means for measuring light pixels of each light fixture for at least one image sensor frame; and means for determining the field of view for each light fixture based on the light pixels measured for the at least one image sensor frame.
22. The mobile device of claim 19, wherein the means for controlling the mobile device to operate in a reduced power mode comprises:
- means for controlling an actuator of a camera of the mobile device to position one or more lenses of the camera at a fixed focal length during a period while the motion of the mobile device is above a first threshold.
23. The mobile device of claim 19, wherein the means for controlling the mobile device to operate in a reduced power mode further comprises:
- means for controlling a video front end engine of the mobile device to stop generating statistics data during a period while the motion of the mobile device is above a first threshold.
24. The mobile device of claim 19, wherein the means for controlling the mobile device to operate in a reduced power mode further comprises:
- means for controlling a video front end engine of the mobile device to stop transferring statistics data to a memory during a period while the motion of the mobile device is above a first threshold.
25. The mobile device of claim 19, wherein the means for controlling the mobile device to operate in a reduced power mode further comprises:
- means for controlling an image processing engine of the mobile device to stop processing data in support of auto focus, auto white balance, and auto exposure during a period while the motion of the mobile device is above a first threshold.
26. The mobile device of claim 19, wherein the means for controlling the mobile device to operate in a reduced power mode further comprises:
- means for controlling a VLC signal decoder of the mobile device to intermittently decode incoming video frames during a period while the motion of the mobile device is above a first threshold.
27. The mobile device of claim 19, further comprising:
- means for detecting that the motion of the mobile device with respect to the one or more light fixtures is below a second threshold; and
- means for controlling the mobile device to operate in a normal power mode based on the motion of the mobile device with respect to the one or more light fixtures being below the second threshold.
28. A non-transitory medium storing instructions for execution by one or more processors of a mobile device, the instructions comprising:
- instructions for receiving, by a transceiver, positioning assistance data of a venue, wherein the positioning assistance data includes identifiers and positions of light fixtures in the venue;
- instructions for decoding, by a VLC signal decoder, one or more light fixtures within a field of view of the mobile device to obtain corresponding light fixture identifiers;
- instructions for determining, by a controller, a motion of the mobile device with respect to the one or more light fixtures based on the light fixture identifiers and the positioning assistance data of the venue; and
- instructions for controlling, by the controller, the mobile device to operate in a reduced power mode based on the motion of the mobile device with respect to the one or more light fixtures.
29. The non-transitory medium of claim 28, further comprising:
- in the reduced power mode,
- instructions for monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device; and
- instructions for determining a position of the mobile device using decoded light fixture identifiers and the angles of arrival of light from the one or more light fixtures in the field of view of the mobile device.
30. The non-transitory medium of claim 29, wherein the instructions for monitoring angles of arrival of light from the one or more light fixtures in the field of view of the mobile device comprise:
- for each light fixture in the field of view of the mobile device, instructions for measuring light pixels of each light fixture for at least one image sensor frame; and instructions for determining the field of view for each light fixture based on the light pixels measured for the at least one image sensor frame.
Type: Application
Filed: Aug 7, 2017
Publication Date: Mar 29, 2018
Inventors: Gaurav Gagrani (Hyderabad), Ajay Kumar Dhiman (Hyderabad)
Application Number: 15/670,323