Abstract: An electronic device according to the present invention includes at least one memory and at least one processor which function as: an acquisition unit configured to acquire first line-of-sight information and second line-of-sight information generated by mutually different statistical methods as line-of-sight information relating to a line of sight of a user looking at a display surface; and a processing unit configured to perform first processing on the basis of the first line-of-sight information and second processing, different from the first processing, on the basis of the second line-of-sight information.
Abstract: An electronic device and method are provided. The electronic device includes a camera, a touchscreen display, and a processor configured to display, on the touchscreen display, a screen including at least one target object image corresponding to at least one subject obtained by the camera and at least one graphical object configured to capture the at least one subject in response to receiving an input, identify that the at least one target object image is resized according to a movement of at least one of the electronic device or the at least one subject, and in response to identifying the resizing of the at least one target object image, change a property of the screen by resizing the at least one graphical object.
Abstract: An imaging apparatus includes a unit pixel including a pixel electrode, a charge accumulation region electrically connected to the pixel electrode, and a signal detection circuit electrically connected to the charge accumulation region; a counter electrode facing the pixel electrode; a photoelectric conversion layer disposed between the electrodes; and a voltage supply circuit configured to selectively apply any one of first, second, and third voltages between the electrodes. The photoelectric conversion layer exhibits first and second wavelength sensitivity characteristics in a wavelength range when the voltage supply circuit applies the first and second voltages between the electrodes, respectively, and becomes insensitive to light in the wavelength range when the voltage supply circuit applies the third voltage between the electrodes.
Abstract: Embodiments of the disclosure relate generally to automatic focus selection in a multi-imager environment. Embodiments include methods, systems, and apparatuses configured for capturing, by a first image sensor, a first image of a visual object with a background associated with a subject. A characteristic difference between the visual object in the first image and the background in the first image is determined, and a distance determination mode is selected based on the determined characteristic difference. Accordingly, a distance to the subject is calculated based on the selected distance determination mode, and a second image sensor is controlled to capture a second image of the subject based on the calculated distance to the subject.
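The mode-selection step in the abstract above can be sketched as a simple threshold on the object/background contrast. This is an illustrative stand-in only: the function names, the brightness-difference metric, the threshold value, and the two mode labels are all assumptions, not details from the patent.

```python
def select_distance_mode(object_brightness, background_brightness, threshold=0.2):
    """Pick a distance-determination mode from the characteristic difference
    (here, simply brightness contrast) between the visual object and its
    background in the first image."""
    diff = abs(object_brightness - background_brightness)
    # High contrast: the visual object is clearly separable from the
    # background, so a mode that measures its image position can be used.
    # Low contrast: fall back to an alternative mode (hypothetical name).
    return "pattern_offset" if diff >= threshold else "fallback_autofocus"
```

A second image sensor would then be focused using the distance computed under whichever mode was selected.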
Abstract: A portable information device including a body; a position information portion that is provided in the body and that outputs position information; an image sensor that is provided in the body; a display system that is provided in the body and that displays, on a local field in a screen field, relevant information based on the position information such that the displayed relevant information is overlapped on a real-time image; an input portion that is provided on the body and by use of which a user operates an icon, the icon being displayed on the local field such that the displayed icon is overlapped on the real-time image; and an imaging button that is provided on the body, separately from the input portion, and that is operated by the user to capture a subject image via the image sensor.
Abstract: An image sensor configured to resolve intensity and polarization has multiple pixels, each having a single microlens adapted to focus light on a central photodiode surrounded by at least first, second, third, and fourth peripheral photodiodes, where a first polarizer at a first angle is disposed upon the first peripheral photodiode, a second polarizer at a second angle is disposed upon the second peripheral photodiode, a third polarizer at a third angle is disposed upon the third peripheral photodiode, and a fourth polarizer at a fourth angle is disposed upon the fourth peripheral photodiode, the first, second, third, and fourth angles being different. In embodiments, 4 or 8 peripheral photodiodes are provided, and in an embodiment the polarizers are parts of an octagonal polarizer.
Abstract: An image capturing apparatus capable of displaying a live view image high in visibility on a high-luminance side. An image capturing section converts light from an object to image signals. An image processor performs image processing on image data formed by the image signals. An operation section receives an instruction for setting a live view mode for realizing a live view function. When an OVF simulation mode is set, which is different from a recording live view mode for displaying image data subjected to the image processing on the image display section based on the user's photographing settings, photographing is performed under an exposure condition darker than a proper exposure, gradation conversion for compensating for the difference in exposure condition from the proper exposure is performed, and display luminance is controlled to be brighter than the display luminance in the recording live view mode.
Abstract: An image sensor includes a first photoelectric conversion unit that converts light incident through a first opening to an electric charge, a second photoelectric conversion unit that converts light incident through a second opening which is smaller than the first opening to an electric charge, and a signal output wiring that outputs a first signal generated by the electric charge converted by the first photoelectric conversion unit and a second signal generated by the electric charge converted by the second photoelectric conversion unit. The second photoelectric conversion unit is disposed between the second opening and the signal output wiring.
Abstract: While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
Type: Grant
Filed: April 8, 2022
Date of Patent: April 18, 2023
Assignee: APPLE INC.
Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
Abstract: An image compression method to compress image data generated through a pixel array includes categorizing image data corresponding to a plurality of sub-pixels arranged adjacent to each other and generating first color information for a first color pixel; determining a reference value as a criterion for compressing the image data based on pixel values of the sub-pixels; comparing the pixel values of the sub-pixels with the reference value; and outputting results of the comparisons and the reference value.
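The compression flow in the abstract above (derive a reference value from the sub-pixel values, compare each sub-pixel with it, and output the comparison results together with the reference) can be illustrated with a minimal sketch. Using the mean as the reference value and one comparison bit per sub-pixel is an assumption for illustration, not the patent's actual criterion.

```python
import numpy as np

def compress_block(subpixels):
    """Compress same-colour sub-pixel values arranged adjacent to each other:
    determine a reference value, compare each pixel value against it, and
    output the comparison results plus the reference."""
    subpixels = np.asarray(subpixels)
    reference = int(subpixels.mean())        # reference value as the criterion
    comparisons = subpixels >= reference     # one comparison result per sub-pixel
    return comparisons.astype(np.uint8).tolist(), reference
```

A decoder holding only the comparison bits and the reference could reconstruct a coarse approximation of the block, which is the usual trade-off in this style of lossy pixel-array compression.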
Abstract: An image capturing apparatus comprises an image sensing unit including an image sensor and configured to capture an image by photoelectrically converting an optical image formed on an imaging surface and output an image signal, and a setting unit configured to set shooting parameters to be used at a time of capturing the image by the image sensing unit. The setting unit sets different shooting parameters to be used at the time of capturing the image by the image sensing unit depending on whether or not it is possible to perform image processing on the image signal by an external image processing apparatus.
Abstract: The various embodiments illustrated herein disclose a method for operating an imaging device. The method includes activating a first image sensor at a first duty cycle within a first time period. The method further includes activating a second image sensor at a second duty cycle within the first time period. Additionally, the method includes modifying at least one of the first duty cycle or the second duty cycle based on at least a workflow associated with an operator of the imaging device.
Type: Grant
Filed: December 21, 2021
Date of Patent: March 21, 2023
Assignee: Hand Held Products, Inc.
Inventors: Patrick Anthony Giordano, David M. Wilz, Jeffrey Dean Harper, Ka Man Au, Benjamin Hejl
Abstract: A method of simultaneously displaying a first video feed, a second video feed, and a map overlay within a single window that is generated by a mobile application includes identifying, using the mobile application that is stored in a mobile device, the first video feed that is generated by a front-facing camera; identifying, using the mobile application, the second video feed that is generated by a rear-facing camera; generating, using the mobile application, the map overlay using location data of the mobile device; creating, using the mobile application and a GPU of the mobile device, the single window that includes the first video feed, the second video feed, and the map overlay; displaying, on a UI and using the mobile application, the single window; recording, using the mobile application, the displayed single window; and storing the recording of the displayed single window as a single video file.
Abstract: An image processing apparatus acquires a plurality of pieces of image data sequentially output from an imager and, in accordance with reception of an image capturing instruction to capture a still image, specifies, as image data to be processed, a plurality of pieces of image data in a period that includes the timing at which the still image is captured. Based on an action of a subject estimated using the image data to be processed, the image processing apparatus adds information that indicates the action of the subject to the data of the still image.
Abstract: A lighting module which provides adjustably controllable illumination of a camera field of view of a camera module includes an adjustable collimator which can be adjustably positioned such that the emitted light beam is adjustably directed to illuminate various regions of various camera fields of view. The collimator can be adjusted via an actuator which adjustably positions the collimator relative to static components of the lighting module, including the light emitter. The light beam can be directed to illuminate a selected limited region of a camera field of view, based on identification of a subject within the limited region. The light beam can be adjustably directed based on user interactions with a user interface, including adjusting the light beam according to user-commanded beam angle, intensity, and direction. The light beam can be adjustably directed to illuminate a region according to different fields of view of different camera modules.
Type: Grant
Filed: September 10, 2021
Date of Patent: February 21, 2023
Assignee: Apple Inc.
Inventors: Miodrag Scepanovic, Angelo M. Alaimo, Florian R. Fournier, Andreas G. Weber, Simon S. Lee
Abstract: A depth measuring apparatus includes a camera assembly configured to capture a plurality of images of a target at a plurality of distances from the target. The depth measuring apparatus further includes a controller configured to, for each of a plurality of regions within the plurality of images: determine corresponding gradient values within the plurality of images; determine a corresponding maximum gradient value from the corresponding gradient values; and determine, based on the corresponding maximum gradient value, a depth measurement for a region of the plurality of regions.
Type: Grant
Filed: June 7, 2021
Date of Patent: February 14, 2023
Assignee: Applied Materials, Inc.
Inventors: Ozkan Celik, Patricia A. Schulze, Gregory J. Freeman, Paul Z. Wirth, Tommaso Vercesi
Abstract: Methods and apparatus for capturing, communicating, and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at a high resolution but at a lower frame rate than is used for playback. Interpolation is applied to captured frames to generate interpolated frames. Captured frames, along with interpolated frame information, are communicated to the playback device. The combination of captured and interpolated frames corresponds to a second frame playback rate which is higher than the image capture rate. Cameras operate at a high image resolution but at a slower frame rate than the same cameras could achieve at a lower resolution. Interpolation is performed prior to delivery to the user device, with segments to be interpolated being selected based on motion and/or lens FOV information. A relatively small amount of interpolated frame data is communicated compared to captured frame data, for efficient bandwidth use.
Type: Grant
Filed: April 24, 2020
Date of Patent: February 14, 2023
Assignee: Nevermind Capital LLC
Inventors: Ramesh Panchagnula, David Cole, Alan Moss
Abstract: A signal processing apparatus that processes image data output from a photoelectric conversion unit including a light-receiving region and a light-blocking region. The apparatus includes a control data generation unit that outputs control data used to generate correction data for correcting the image data using a trained model generated through machine learning, and a signal processing unit that generates the correction data on the basis of light-blocked image data and the control data, the light-blocked image data being image data, among the image data, that is from the light-blocking region, and corrects light-received image data in accordance with the correction data without applying the trained model, the light-received image data being image data, among the image data, that is from the light-receiving region.
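The correction flow in the abstract above, using data from the light-blocking (optically black) region to correct the light-receiving region, can be illustrated with a plain statistical sketch. The per-row mean offset used here is a deliberate simplification standing in for the trained-model-generated correction data; the column count and function name are assumptions.

```python
import numpy as np

def correct_with_dark_region(frame, n_dark_cols=4):
    """Estimate a per-row offset from the light-blocked columns at the left
    edge of the sensor and subtract it from the light-receiving region."""
    dark = frame[:, :n_dark_cols].astype(float)
    offset = dark.mean(axis=1, keepdims=True)   # row-wise dark level
    # Correct the light-received image data with the derived correction data.
    corrected = frame[:, n_dark_cols:].astype(float) - offset
    return np.clip(corrected, 0, None)
```

The design point the abstract emphasizes is that the heavy model runs only to produce control data; the per-frame correction itself stays cheap, as in this sketch.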
Abstract: An image processing apparatus comprises an acquisition unit that acquires image data, an estimation unit that detects a predetermined subject from the image data and estimates posture information of the detected subject, and a determination unit that, in a case where a plurality of subjects are detected by the estimation unit, determines a main subject from the plurality of subjects using a feature vector of each of the subjects obtained from the posture information.
Abstract: A method for an unmanned aerial vehicle (UAV) includes: identifying a target object in a photographed image to track the target object; determining, based on a location of the target object in the photographed image, a location of the UAV, and an attitude of a gimbal of the UAV, a location of the target object to continuously record the location of the target object; and in response to a disappearance of the target object from the photographed image, controlling, according to the recorded location of the target object prior to the disappearance of the target object, the attitude of the gimbal such that a photographing device carried by the UAV through the gimbal continues to photograph in a direction from the photographing device to the location of the target object.