Abstract: Methods and systems for identifying watchers of an object of interest at an incident scene. One method includes receiving, with an electronic processor, an object identifier corresponding to the object of interest. The method includes determining a watcher status for the object of interest. The method includes generating a notification based on the watcher status and the object identifier. The method includes transmitting, with a communication interface communicatively coupled to the electronic processor, the notification to an available watcher at the incident scene.
Type:
Grant
Filed:
January 27, 2023
Date of Patent:
November 28, 2023
Assignee:
MOTOROLA SOLUTIONS, INC.
Inventors:
Stefan Koprowski, Piotr Bartczak, Mariusz Wawrowski
Abstract: A method and apparatus for simultaneously acquiring a super-resolution image and a high-speed widefield image are disclosed. The image acquisition method includes receiving a first image signal from an optical microscope, generating, by using the first image signal, a first plurality of entire images, distinguishing, based on movements of a plurality of objects included in the first plurality of entire images, a dynamic region with respect to the first plurality of entire images and a static region with respect to the first plurality of entire images, and controlling the optical microscope so as to respectively irradiate lights having different amplitudes onto the dynamic region and the static region.
Type:
Grant
Filed:
August 3, 2021
Date of Patent:
November 28, 2023
Assignee:
UNIST(ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
Abstract: A recording apparatus includes a control unit configured to, in response to input of additional information, set the input additional information as recording additional information to be recorded together with a moving-image file, and a recording control unit configured to record moving-image data in a recording medium as the moving-image file and to record an additional information file including the recording additional information set by the control unit in the recording medium, in association with the moving-image file. In a case where the additional information is input by an input unit during recording of the moving-image file in the recording medium, the control unit sets the input additional information as the recording additional information at the end of recording of the moving-image file, rather than in response to the input of the additional information by the input unit.
Abstract: A control circuit and a control method of an image sensor are provided. The image sensor generates sensed data. The control circuit includes a statistical circuit, an auto exposure (AE) circuit, and a calculation circuit. The statistical circuit generates luminance statistical data according to the sensed data. The AE circuit sets the image sensor and outputs AE data which includes a target luminance of the image sensor. The calculation circuit controls an operation mode of the image sensor according to the luminance statistical data and the AE data.
Abstract: A display controlling method for a broadcast receiving apparatus allowed to receive IP simultaneous broadcasting of a 4K broadcasting service via a communication transmission line, in which simultaneous broadcasting of the 4K broadcasting service and a 2K broadcasting service is executed by a broadcasting wave, is configured to: receive and demodulate the 4K broadcasting service, and display a broadcasting program of the 4K broadcasting service; and select and execute one display switching process of first and second display switching processes in accordance with priority order in a case where a defect occurs in reception, demodulation, or display of the 4K broadcasting service, the 2K broadcasting service being received and demodulated to display a broadcasting program of the 2K broadcasting service in the first display switching process, the IP simultaneous broadcasting being received and demodulated to display a program of the IP simultaneous broadcasting in the second display switching process.
Abstract: In some examples, an apparatus comprises an array of pixel cells, and processing circuits associated with blocks of pixel cells of the array of pixel cells and associated with first hierarchical power domains. The apparatus further includes banks of memory devices, each bank of memory devices being associated with a block of pixel cells, to store the quantization results of the associated block of pixel cells, the banks of memory devices further being associated with second hierarchical power domains. The apparatus further includes a processing circuits power state control circuit configured to control a power state of the processing circuits based on programming data targeted at each block of pixel cells and global processing circuits power state control signals, and a memory power state control circuit configured to control a power state of the banks of memory devices based on the programming data and global memory power state control signals.
Type:
Grant
Filed:
May 19, 2021
Date of Patent:
November 21, 2023
Assignee:
META PLATFORMS TECHNOLOGIES, LLC
Inventors:
Andrew Samuel Berkovich, Shlomo Alkalay, Hans Reyserhove
Abstract: An imaging device includes multiple pixels, a first analog-to-digital (AD) converter and a second AD converter that receive analog signals read from the pixels and output digital signals corresponding to the analog signals, a first frame memory, and an image processor. The analog signals include a reset signal representing a reset level and a pixel signal representing an image of a subject. The first frame memory temporarily stores a first digital signal that is one of the digital signals, corresponding to the reset signal, output from the first AD converter and the second AD converter. The image processor outputs a difference between the first digital signal stored in the first frame memory and a second digital signal that is another one of the digital signals, corresponding to the pixel signal of the pixels from which the reset signal is read, output from the first AD converter and the second AD converter.
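The difference the image processor computes is, in effect, correlated double sampling carried out in the digital domain: the digitized reset level held in the first frame memory is subtracted from the digitized pixel signal of the same pixels. A minimal sketch of that subtraction, assuming both digitized frames are simply available as arrays (array names and bit depth are illustrative, not from the abstract):

```python
import numpy as np

def digital_cds(reset_frame: np.ndarray, signal_frame: np.ndarray) -> np.ndarray:
    """Subtract the stored digitized reset level from the digitized pixel
    signal, removing the per-pixel reset-level offset."""
    # Work in a signed type so the difference cannot wrap around.
    diff = signal_frame.astype(np.int32) - reset_frame.astype(np.int32)
    return np.clip(diff, 0, None)

# Example with two illustrative 12-bit frames.
reset = np.random.randint(0, 200, size=(480, 640), dtype=np.uint16)
signal = reset + np.random.randint(0, 3800, size=(480, 640), dtype=np.uint16)
image = digital_cds(reset, signal)
```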
Abstract: An apparatus includes a capturing unit configured to capture an image of an object, an exposure control unit configured to control an exposure condition including an exposure time or an analog gain for each of a plurality of pixels or pixel groups on a surface of the capturing unit, a determination unit configured to determine one or more evaluation areas including an achromatic color area from the captured image, a calculation unit configured to calculate a first evaluation value for each of the plurality of pixels or pixel groups in the evaluation area and to calculate a second evaluation value based on the first evaluation value weighted based on the exposure condition for each of the plurality of pixels or pixel groups, and a correction unit configured to correct the image based on the second evaluation value.
Abstract: An image capturing apparatus includes an image pickup device configured to output image data, and at least one processor programmed to perform the operations of following units: a calculation unit configured to calculate an evaluation value used to determine whether to perform an image capturing operation for recording the image data; a setting unit configured to set a threshold value used to determine whether to perform an image capturing operation for recording the image data; a determination unit configured to make a determination as to whether to control execution of an image capturing operation using the evaluation value and the threshold value; and a storing unit configured to store image capturing history information obtained from execution of an image capturing operation based on the determination made by the determination unit, wherein the setting unit sets the threshold value based on the image capturing history information.
Abstract: A camera device includes an image sensor, an integrated processor, and an output interface. The camera device also includes a splitter which reads in raw image signals from the image sensor and then provides this data both to the integrated processor and to the output interface for transmission to an external processor. The integrated processor is configured to determine first object data by processing the raw image signals provided.
Abstract: Provided is a photoelectric conversion device including: at least one charge holding portion including a first semiconductor region of a first conductivity type and configured to hold signal charges based on incident light; and an avalanche photodiode including a second semiconductor region of the first conductivity type, in which the signal charges are transferred from the first semiconductor region to the second semiconductor region via a third semiconductor region of a second conductivity type that is different from the first conductivity type, a fourth semiconductor region of the first conductivity type, and a fifth semiconductor region of the second conductivity type in this order.
Abstract: An image analysis device according to the present invention includes: an image capturing unit that captures a subject; a light emitting unit that emits light to the subject; a sensor unit that senses an inclination of the image capturing unit relative to the subject; a control unit that causes the image capturing unit to capture an image of the subject while controlling light emission of the light emitting unit; and a determination unit that, based on a positional relationship between the image capturing unit and the light emitting unit and on the inclination, determines a measurement region spaced apart by a predetermined distance from a reflection region corresponding to a position in the image where light from the light emitting unit is regularly reflected at the subject.
Type:
Grant
Filed:
September 4, 2019
Date of Patent:
November 14, 2023
Assignee:
Shiseido Company, Ltd.
Inventors:
Yuji Masuda, Megumi Sekino, Hironobu Yoshikawa, David Christopher Berends, Michael Anthony Isnardi, Yuzheng Zhang
Abstract: In one embodiment, a computing system may receive sensor data from an image sensor having a pixel array including color pixel sensors and panchromatic pixel sensors in a first pattern. Each of the color pixel sensors is associated with one of several color channels. The computing system may generate, based on the sensor data, a filtered monochrome image including monochrome values corresponding to the pixel array of the image sensor. The computing system may generate a filtered color image having a second pattern of color channels. A first pixel of a particular color channel at a first pixel location in the filtered color image is determined based on the monochrome value corresponding to the first pixel location in the filtered monochrome image, the sensor data measured by a color pixel sensor at a second pixel location, and the monochrome value at the second pixel location in the filtered monochrome image.
Type:
Grant
Filed:
August 25, 2022
Date of Patent:
November 14, 2023
Assignee:
Meta Platforms Technologies, LLC
Inventors:
Alex Locher, Naveen Makineni, Oskar Linde, John Enders Robertson, Anthony Aslan Tenggoro
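The last sentence of the abstract above describes reconstructing a color value at one location from the monochrome value there, a nearby color sample, and the monochrome value at that sample's location. The sketch below shows one plausible way to combine those three quantities, a ratio-based (guided) estimate; the exact formula is not given in the abstract, so this is an assumption for illustration:

```python
import numpy as np

def guided_color_value(mono: np.ndarray, color_raw: np.ndarray,
                       p1: tuple, p2: tuple, eps: float = 1e-6) -> float:
    """Estimate the color value at p1 from the color sample at p2,
    using the filtered monochrome image as guidance.

    mono      -- filtered monochrome image (one value per pixel location)
    color_raw -- raw sensor data; only color-pixel locations are valid samples
    p1        -- pixel location where the color value is wanted
    p2        -- nearby location of a color pixel sensor of that channel
    """
    # Ratio-based guidance: assume the chroma/luma ratio varies slowly locally,
    # so the color sample at p2 is rescaled by the monochrome ratio mono[p1]/mono[p2].
    return float(color_raw[p2] * mono[p1] / (mono[p2] + eps))
```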
Abstract: An imaging apparatus includes a light emitter, a pixel section, and a signal processor which calculates distance information of a subject. The pixel section includes a photoelectric converter, first and second read-out gates, and a plurality of charge accumulators including a first charge accumulator and a second charge accumulator. The first read-out gate is activated in a first period and deactivated in a second period. The second read-out gate is activated in the first period and the second period. The signal processor calculates the distance information based on a total amount of signal charges accumulated in the charge accumulators in the first period and the second period and a difference between an amount of signal charges accumulated in the second charge accumulator in the first period and the second period and an amount of the signal charge accumulated in the first charge accumulator in the first period.
Type:
Grant
Filed:
August 11, 2021
Date of Patent:
November 7, 2023
Assignee:
NUVOTON TECHNOLOGY CORPORATION JAPAN
Inventors:
Keiichi Mori, Junichi Matsuo, Mitsuhiko Otani, Mayu Ogawa
Abstract: Disclosed in embodiments of the present application are a screen color gamut control method and apparatus, an electronic device, and a storage medium. The method comprises: displaying a control, the control being configured to control the screen to switch between different color gamut spaces; acquiring the state of the control, the state representing the color gamut space to which the screen is to be switched; and adjusting the color gamut space of the screen so that it is switched to the color gamut space corresponding to the state of the control. With this method, when the control is displayed, the color gamut space of the screen is adjusted by means of the control and switched to the space corresponding to the state of the control, so that users can enjoy different color experiences, thereby improving user experience.
Type:
Grant
Filed:
January 14, 2022
Date of Patent:
November 7, 2023
Assignee:
GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Abstract: Row-by-row pixel read-out is executed concurrently within respective clusters of pixels of a pixel array, alternating between descending and ascending progressions in the intra-cluster row readout sequence to reduce temporal skew between neighboring pixel rows in adjacent clusters.
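Read one way, the alternation means adjacent clusters step through their rows in opposite directions, so the rows on either side of a cluster boundary are read in nearby time slots. A small sketch of such a readout schedule under that assumption (cluster size and helper names are illustrative):

```python
def cluster_readout_order(num_rows: int, cluster_height: int) -> list:
    """Return (time_slot, row) pairs for concurrent per-cluster readout.

    Adjacent clusters use opposite row progressions (ascending vs. descending),
    so rows on either side of a cluster boundary are read in nearby time slots,
    reducing temporal skew at the boundaries.
    """
    schedule = []
    for cluster, start in enumerate(range(0, num_rows, cluster_height)):
        rows = list(range(start, min(start + cluster_height, num_rows)))
        if cluster % 2:              # odd clusters: descending progression
            rows.reverse()
        for slot, row in enumerate(rows):
            schedule.append((slot, row))
    return sorted(schedule)          # group by time slot for readability

# Example: 16 rows in clusters of 4. Boundary rows 3 and 4 share time slot 3,
# and boundary rows 7 and 8 share time slot 0.
print(cluster_readout_order(16, 4))
```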
Abstract: In a technique to assess the blurriness of an image, an image of a face is received, the image including a depiction of lips. A processing device determines a region of interest in the image, wherein the region of interest comprises an area inside of the lips. The processing device applies a focus operator to the pixels within the region of interest, and calculates a sharpness metric for the region of interest using an output of the focus operator. The processing device determines whether the sharpness metric satisfies a sharpness criterion, and one or more additional operations are performed responsive to determining that the sharpness metric satisfies the sharpness criterion.
Type:
Grant
Filed:
December 3, 2020
Date of Patent:
November 7, 2023
Assignee:
Align Technology, Inc.
Inventors:
Chao Shi, Yingjie Li, Chad Clayton Brown, Christopher E. Cramer
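The abstract above names a focus operator and a sharpness metric but does not fix them; a common concrete choice is the variance of a discrete Laplacian over the region of interest between the lips. The sketch below uses that choice purely as an assumption, with an illustrative threshold for the sharpness criterion:

```python
import numpy as np

def laplacian_sharpness(gray: np.ndarray, roi_mask: np.ndarray) -> float:
    """Variance-of-Laplacian sharpness metric over a region of interest.

    gray     -- 2D grayscale image as float
    roi_mask -- boolean mask, True inside the area between the lips
    """
    # 4-neighbour discrete Laplacian (the assumed focus operator).
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    inside = roi_mask[1:-1, 1:-1]
    return float(np.var(lap[inside]))

def is_sharp_enough(gray, roi_mask, threshold=100.0):
    # The sharpness criterion is modeled as a simple threshold (assumed).
    return laplacian_sharpness(gray, roi_mask) >= threshold
```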
Abstract: A teaching data correction device sets, for teaching data indicating an object area where an object of interest exists in a training image, a correction candidate area which is an area to be a correction candidate of the object area, the training image being used for learning. The teaching data correction device generates an output machine based on the correction candidate area, the output machine being learned to output, when an image is inputted thereto, an identification result or a regression result relating to the object of interest. Then, the teaching data correction device updates the teaching data with the correction candidate area based on an accuracy of the output machine, the accuracy being calculated based on the identification result or the regression result outputted by the output machine.
Type:
Grant
Filed:
March 14, 2019
Date of Patent:
October 24, 2023
Assignee:
NEC CORPORATION
Inventors:
Hideaki Sato, Soma Shiraishi, Yasunori Babazaki, Jun Piao
Abstract: A system for visibility in an atmospheric suit includes a camera arranged to view an environment outside the atmospheric suit and a display device to display an image obtained by the camera. An input device obtains input from a wearer of the atmospheric suit. A controller changes a feature of the camera based on the input. The image obtained by the camera and displayed by the display device is modified based on the feature.
Type:
Grant
Filed:
January 4, 2022
Date of Patent:
October 24, 2023
Assignee:
HAMILTON SUNDSTRAND CORPORATION
Inventors:
Ashley Rose Himmelmann, Jake Rohrig, Monica Torralba
Abstract: Provided is a robot device capable of reliably detecting the difference between dirt and a scratch on a lens of a camera and the difference between dirt and a scratch on a hand. The robot device detects a site where dirt or a scratch is present, using an image of the hand taken by the camera as a reference image. It then determines whether the detected dirt or scratch is due to the lens of the camera or the hand by moving the hand. On the assumption that dirt has been detected, the robot device performs cleaning work and then distinguishes dirt from a scratch depending on whether the dirt is removed.
Abstract: The present disclosure describes a method, apparatus, and storage medium for classifying a multimedia resource. The method includes: obtaining a multimedia resource; extracting a plurality of features of the multimedia resource; clustering the plurality of features to obtain at least one cluster set, and determining cluster description information of each cluster set, each cluster set comprising at least one feature of the multimedia resource; determining at least one piece of target feature description information of the multimedia resource based on the cluster description information of each cluster set, each piece of target feature description information representing an association between one piece of cluster description information and the remaining cluster description information; and classifying the multimedia resource based on the at least one piece of target feature description information of the multimedia resource, to obtain a classification result of the multimedia resource.
Type:
Grant
Filed:
September 28, 2020
Date of Patent:
October 24, 2023
Assignee:
Tencent Technology (Shenzhen) Company Limited
Inventors:
Yongyi Tang, Lin Ma, Wei Liu, Lianqiang Zhou
Abstract: Embodiments of the disclosure provided herein generally relate to methods and video system components that have integrated background differentiation capabilities that allow for background replacement and/or background modification. In some embodiments, undesired portions of video data generated in a video environment are separated from desired portions of the video data by taking advantage of the illumination and decay of the intensity of electromagnetic radiation, provided from an illuminator, over a distance. Due to the decay of intensity with distance, the electromagnetic radiation reflected from the undesired background has a lower intensity when received by the sensor than the electromagnetic radiation reflected from the desired foreground. The difference in the detected intensity at the one or more wavelengths can then be used to separate and/or modify the undesired background from the desired foreground for use in a video feed.
Type:
Grant
Filed:
February 24, 2021
Date of Patent:
October 24, 2023
Assignee:
Logitech Europe S.A.
Inventors:
Joseph Yao-Hua Chu, Jeffrey Phillip Fisher, Ting Chia Hsu
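Because illumination intensity decays with distance, the near foreground returns a stronger reflection at the illuminator's wavelength(s) than the far background. A minimal sketch of using that intensity difference for background replacement, with an assumed fixed threshold (how the threshold is chosen is not specified in the abstract):

```python
import numpy as np

def replace_background(frame: np.ndarray, ir_intensity: np.ndarray,
                       new_background: np.ndarray,
                       threshold: float) -> np.ndarray:
    """Composite the foreground of `frame` over `new_background`.

    frame          -- HxWx3 color video frame
    ir_intensity   -- HxW intensity detected at the illuminator wavelength
    new_background -- HxWx3 replacement background
    threshold      -- intensity above which a pixel is treated as foreground
    """
    foreground = ir_intensity > threshold   # nearer objects reflect more strongly
    mask = foreground[..., None]            # broadcast the mask over color channels
    return np.where(mask, frame, new_background)
```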
Abstract: Aspects of the present invention relate to methods and devices for receiving and presenting content. In an example method, content is received, the content to be delivered to a plurality of recipients. Each recipient of the plurality of recipients has a preference for a respective first physical location at which the content is to be presented on a display of a wearable head device associated with the recipient. A sender of the content has a preference for a second physical location at which the content is to be presented to the respective recipient of the plurality of recipients. A respective final physical location for the presentation of the content for the respective recipient of the plurality of recipients is identified. The respective final physical location is based on the respective recipient's preference and the sender's preference. It is determined whether the respective recipient is proximate to a zone associated with the respective final physical location.
Type:
Grant
Filed:
April 5, 2022
Date of Patent:
October 17, 2023
Assignee:
Mentor Acquisition One, LLC
Inventors:
Ralph F. Osterhout, John D. Haddick, Robert Michael Lohse, John N. Border, Nima L. Shams
Abstract: An image processing method, an electronic device, and a computer-readable storage medium are provided. The method includes: obtaining N images; determining a reference image among the N images, where the reference image is an image in which a target tilt-shift object has a sharpness greater than a preset threshold; obtaining tilt-shift parameters input by a user, where the tilt-shift parameters indicate an azimuth of a target focal plane and a tilt-shift area range; determining, based on the tilt-shift parameters, the to-be-composited image(s) intersecting the target focal plane; and performing, based on focal lengths of the to-be-composited image(s) and the reference image, image compositing on the N images to output a target tilt-shift image.
Abstract: An electronic device includes a device housing having a front side and a rear side, a first image capture device positioned on the front side, and a second image capture device positioned on the rear side. One or more processors of the electronic device cause, in response to user input received at a user interface requesting the second image capture device capture an image of an object, the first image capture device to capture another image of a user delivering the user input. The one or more processors then define an image orientation of the image of the object to be the same as another image orientation of the other image of the user.
Type:
Grant
Filed:
February 9, 2022
Date of Patent:
October 17, 2023
Assignee:
Motorola Mobility LLC
Inventors:
Rahul Bharat Desai, Amit Kumar Agrawal, Mauricio Dias Moises
Abstract: Provided are a method and device for image processing, a terminal device, and a storage medium. The method includes: a high-brightness region is determined based on brightness of pixels in a first image, the brightness of the pixels in the high-brightness region being higher than the brightness of the pixels around the high-brightness region; a diffraction region in the first image is determined based on the high-brightness region, the diffraction region being an image region around the high-brightness region; and brightness of the diffraction region is reduced to obtain a second image. With this method, after the brightness of the diffraction region is reduced, the overlapping image formed by diffraction is alleviated, and the image appears more realistic.
Type:
Grant
Filed:
March 25, 2021
Date of Patent:
October 10, 2023
Assignee:
Beijing Xiaomi Mobile Software Co., Ltd.
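A rough sketch of the pipeline outlined above: threshold the luminance to find the high-brightness region, take a ring around it as the diffraction region, and attenuate that ring. The threshold, ring radius, and attenuation factor are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def reduce_diffraction(brightness: np.ndarray, image: np.ndarray,
                       bright_thresh: float = 0.9,
                       ring_radius: int = 8,
                       attenuation: float = 0.6) -> np.ndarray:
    """Attenuate the diffraction region surrounding very bright pixels.

    brightness -- HxW luminance channel, normalized to [0, 1]
    image      -- first image; a corrected copy (second image) is returned
    """
    high = brightness > bright_thresh                        # high-brightness region
    around = ndimage.binary_dilation(high, iterations=ring_radius)
    diffraction = around & ~high                             # ring around that region
    out = image.astype(np.float32)
    out[diffraction] *= attenuation                          # reduce its brightness
    return out
```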
Abstract: Method and device for image frame selection are provided. A computing device can receive, from an image capture device, a plurality of frames including a capture frame. The computing device can determine a computer-selected frame of the plurality of frames. The computing device can receive, by way of a user interface, a selection of an option to view the capture frame. The computing device can, responsive to receiving the selection, provide, by way of the user interface, an animation between the capture frame and the computer-selected frame. The animation includes an interpolation of one or more frames captured between the capture frame and the computer-selected frame.
Type:
Grant
Filed:
September 24, 2018
Date of Patent:
October 10, 2023
Assignee:
Google LLC
Inventors:
Chorong Johnston, John Oberbeck, Mariia Sandrikova
Abstract: A camera mode to use for capturing an image or video is selected by estimating high dynamic range (HDR), motion, and light intensity with respect to a scene of the image or video to capture. An image capture device includes a HDR estimation unit to detect whether HDR is present in a scene of an image to capture, a motion estimation unit to determine whether motion is detected within the scene, and a light intensity estimation unit to determine whether a scene luminance for the scene meets a threshold. A mode selection unit selects a camera mode to use for capturing the image based on output of the HDR estimation unit, the motion estimation unit, and the light intensity estimation unit. An image sensor captures the image according to the selected camera mode.
Type:
Grant
Filed:
July 25, 2022
Date of Patent:
October 10, 2023
Assignee:
GoPro, Inc.
Inventors:
Thomas Nicolas Emmanuel Veit, Sylvain Leroy, Heng Zhang, Ingrid Cotoros
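A compact sketch of the mode-selection logic, with invented mode names and a simple priority ordering; the abstract says the selection depends on the three estimator outputs but does not give the decision table, so the rules below are assumptions:

```python
def select_camera_mode(hdr_detected: bool, motion_detected: bool,
                       luminance_meets_threshold: bool) -> str:
    """Pick a capture mode from the three estimator outputs.

    Mode names and priorities are illustrative, not taken from the patent.
    """
    if not luminance_meets_threshold:
        return "low_light"     # long-exposure / night mode for dim scenes
    if motion_detected:
        return "standard"      # avoid HDR ghosting when the scene moves
    if hdr_detected:
        return "hdr"           # bracketed exposures for wide dynamic range
    return "standard"
```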
Abstract: An electronic device includes an image sensor configured to capture a target to generate first image data, and a processor configured to perform directional interpolation on a first area of the first image data to generate first partial image data, perform upscaling on a second area of the first image data to generate second partial image data, and combine the first partial image data and the second partial image data to generate second image data.
Abstract: Systems, apparatuses, interfaces, and methods for implementing the same, including a mobile device having a camera system, where the systems, apparatuses, interfaces, and methods capture an image and embed the image into a background image selected from a group of background images generated by the systems, apparatuses, interfaces, and methods based on a location, surroundings, and an environment.
Abstract: A method of using an optical TOF module to determine distance to an object. The method includes acquiring signals in the TOF module indicative of distance to the object, using a first algorithm to provide an output indicative of the distance to the object based on the acquired signals, using at least one second different algorithm to provide an output indicative of the distance to the object based on the acquired signals, and combining the outputs of the first and at least one second algorithms to obtain an improved estimate of the distance to the object. In some implementations, each of the first and at least one second algorithms further provides an output representing a respective confidence level indicative of how accurately distance to the object has been extracted by the particular algorithm.
Type:
Grant
Filed:
November 30, 2018
Date of Patent:
October 3, 2023
Assignee:
ams International AG
Inventors:
Stephan Beer, Ioannis Tolios, David Stoppa, Qiang Zhang, Pablo Jesus Trujillo Serrano, Ian Kenneth Mills, Miguel Bruno Vaello Paños, Bryant Hansen, Mitchell Sterling Martin, Doug Nelson
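Since each algorithm also reports a confidence level, one natural combination rule, shown here only as a hedged sketch (the abstract does not state the exact rule), is a confidence-weighted average of the per-algorithm distances:

```python
def combine_distance_estimates(estimates):
    """Fuse (distance, confidence) pairs from several ToF algorithms.

    estimates -- iterable of (distance_m, confidence), confidence in [0, 1]
    Returns the confidence-weighted mean distance, or None if every
    algorithm reports zero confidence.
    """
    num, den = 0.0, 0.0
    for distance, confidence in estimates:
        num += confidence * distance
        den += confidence
    return num / den if den > 0.0 else None

# Example: a coarse algorithm and a more precise one disagree slightly.
print(combine_distance_estimates([(1.32, 0.4), (1.27, 0.9)]))
```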
Abstract: In exemplary embodiments, methods and systems are provided for a digital flashlight for a vehicle. In another exemplary embodiment, a system is provided that includes a display of a vehicle; and a processor coupled to the display of the vehicle and a camera of the vehicle and configured to at least facilitate: obtaining a camera image frame via the camera of the vehicle; determining a first region of the camera image frame having a first brightness level; determining a second region of the camera image frame having a second brightness level that is greater than the first brightness level; and providing instructions to the display of the vehicle for displaying the first region with increased brightness that is based on the second brightness level.
Type:
Grant
Filed:
April 7, 2022
Date of Patent:
October 3, 2023
Assignee:
GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors:
Yun Qian Miao, Charles A Massoll, Ephraim Chi Man Yuen
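A simplified sketch of the brightening step: derive a gain for the darker region from the brighter region's level, so the displayed dark region approaches the brighter region's brightness. The gain rule and the clipping are assumptions for illustration.

```python
import numpy as np

def digital_flashlight(frame: np.ndarray, dark_mask: np.ndarray,
                       bright_mask: np.ndarray) -> np.ndarray:
    """Brighten the dark region of a camera frame based on the bright region.

    frame       -- HxWx3 camera image, uint8
    dark_mask   -- boolean HxW mask of the first (darker) region
    bright_mask -- boolean HxW mask of the second (brighter) region
    """
    out = frame.astype(np.float32)
    dark_level = out[dark_mask].mean()
    bright_level = out[bright_mask].mean()
    gain = bright_level / max(dark_level, 1.0)   # avoid division by zero
    out[dark_mask] *= gain                       # lift the dark region
    return np.clip(out, 0, 255).astype(np.uint8)
```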
Abstract: There is provided a method for capturing a sequence of image frames in a thermal camera having a microbolometer detector comprising: capturing a first sequence and a second sequence of image frames with a shutter of the thermal camera being in a closed state and an open state, respectively. While capturing each of the first and the second sequence, an integration time of the microbolometer detector is switched between a plurality of integration times according to one or more repetitions of a temporal pattern of integration times. The method further comprises correcting image frames in the second sequence that are captured when the integration time is switched to a particular position within the temporal pattern of integration times using image frames in the first sequence that are captured when the integration time is switched to the same particular position within the temporal pattern of integration times.
Abstract: An image capturing apparatus comprises an interface configured to connect to an external apparatus, a transmitting unit configured to transmit image data to an external apparatus connected by the interface, a processing unit configured to encode the image data with a predetermined format, a determination unit configured to determine whether or not image data to be encoded by the processing unit includes decoding information for decoding the image data, and a control unit configured to switch, based on the determination result, between processing for transmitting image data encoded by the processing unit and the decoding information to the external apparatus and processing for transmitting an image file in which image data including the decoding information is encoded to the external apparatus.
Abstract: A method of simultaneous localization and mapping (SLAM) is provided to position a target object. Each of detected tracked objects in a surrounding environment of the target object is classified into a moving object or a static object based on data detected at different time points. The target object is then positioned without considering any of the tracked objects that are classified into a moving object.
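A minimal sketch of the classification step, assuming each tracked object has positions at two time points and that a speed threshold (illustrative) separates moving from static; only the static objects would then be used when positioning the target object:

```python
import math

def split_static_and_moving(tracks, dt, speed_threshold=0.2):
    """Classify tracked objects as static or moving from two observations.

    tracks -- dict of object_id -> ((x0, y0), (x1, y1)) positions at t and t+dt
    dt     -- time between the two detections, in seconds
    speed_threshold -- speed (m/s) above which an object counts as moving (assumed)
    """
    static, moving = [], []
    for obj_id, ((x0, y0), (x1, y1)) in tracks.items():
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        (moving if speed > speed_threshold else static).append(obj_id)
    return static, moving

# Only the objects in `static` would serve as landmarks when positioning
# the target object; the moving ones are ignored.
```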
Abstract: An image sensor integrated with a convolutional neural network computation circuit is provided. The image sensor includes: a pixel array including pixels divided into pixel groups, wherein each pixel converts a light signal into a PWM signal; a convolution computation circuit controlling a turn-on time of a corresponding weighted current according to the first PWM signal of each pixel, and accumulating the weighted currents into an integrated current; a comparison circuit converting the integrated current into a second PWM signal and comparing it with that of an adjacent pixel group to output a larger one; and a classification circuit quantizing the second PWM signal to a quantization value according to a weight of a node in a fully-connected layer corresponding to each pixel group, accumulating the quantization values of all pixel groups into a feature value, and comparing the feature value with a feature threshold to obtain a classification result.
Abstract: An image encoder configured to process a Bayer image generated by passing through a color filter of a Bayer pattern includes: a detector configured to read the Bayer image in units of blocks and search for, in the blocks, a target pixel to be compressed and a plurality of candidate pixels which are located adjacent to the target pixel; a flag generator configured to compare a first pixel value of the target pixel with second pixel values based on pixel values of the plurality of candidate pixels, identify a reference pixel based on a comparison result, and generate a flag indicating relative direction information between the target pixel and the reference pixel; and a compressor configured to encode information corresponding to a comparison method applied by the flag generator and the comparison result and output the encoded information as a bitstream together with the flag.
Abstract: An AR scenario-based gesture interaction method, a non-transitory computer-readable medium, and a wireless communication terminal. The method includes: collecting a RGB image, a depth image, and corresponding IMU data of a current frame; obtaining posture information and hand information of the current frame by processing the RGB image, the depth image, and the IMU data; obtaining three-dimensional point cloud information of a hand in a reference coordinate system by performing three-dimensional dense modeling based on the posture information and the hand information of the current frame; obtaining pose information of a virtual object in the reference coordinate system; and obtaining an occlusion relationship between the hand and the virtual object by rendering the hand and the virtual object based on the three-dimensional point cloud information of the hand, the pose information of the virtual object, and preset point cloud information of the virtual object in the reference coordinate system.
Type:
Grant
Filed:
November 11, 2021
Date of Patent:
September 19, 2023
Assignee:
GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Abstract: In an example embodiment, an item listing process is run in an item listing application. Upon reaching a specified point in the item listing process, a camera application on the user device is triggered (or the camera directly accessed by the item listing application) to enable a user to capture images using the camera, wherein the triggering includes providing an overlay informing the user as to an angle at which to capture images from the camera.
Abstract: The present application discloses an image recognition method, apparatus, device, and computer storage medium, which relate to the technical field of artificial intelligence and, in particular, to the technical field of image processing. The method includes: performing organ recognition on a human face image and marking positions of the five facial sense organs in the human face image, obtaining a marked human face image; inputting the marked human face image into a backbone network model and performing feature extraction, obtaining defect features of the marked human face image output by different convolutional neural network levels of the backbone network model; and fusing the defect features of different levels that are located in a same area of the human face image, obtaining a defect recognition result of the human face image.
Type:
Grant
Filed:
March 22, 2021
Date of Patent:
September 12, 2023
Inventors:
Zhizhi Guo, Yipeng Sun, Jingtuo Liu, Junyu Han
Abstract: An imaging element incorporates a reading portion that reads out captured image data at a first frame rate, a storage portion that stores the image data, a processing portion that processes the image data, and an output portion that outputs the processed image data at a second frame rate lower than the first frame rate. The reading portion reads out the image data of each of a plurality of frames in parallel. The storage portion stores, in parallel, each image data read out in parallel by the reading portion. The processing portion performs generation processing of generating output image data of one frame using the image data of each of the plurality of frames stored in the storage portion.
Abstract: The present disclosure provides a target tracking method and system, a readable storage medium, and a mobile platform. The method includes: obtaining a user's trigger operation on an operation button, and generating a trigger instruction based on the trigger operation to generate a candidate target box; displaying, based on the trigger operation, the candidate target box in a current frame of picture displayed on a display interface so as to correspond to a feature portion of a predetermined target; obtaining, based on the displayed candidate target box, a box selection operation performed by the user on the operation button, and generating a box selection instruction based on the box selection operation to generate a tracking target box, where the box selection instruction is used to determine that the candidate target box is a tracking target box; and tracking the target based on the tracking target box.
Abstract: The subject technology receives metadata corresponding to a respective media overlay, the metadata including information indicating that the respective media overlay is configured to be applied as an image processing operation during post-processing of image data during a post-capture stage. The subject technology selects the respective media overlay in response to the information indicating that the respective media overlay is configured to be applied as an image processing operation during post-processing of image data. The subject technology, based at least in part on a category indicator associated with the respective media overlay, populates a group of media overlays with at least the respective media overlay. The subject technology sends, to a client electronic device, second metadata including at least information related to the group of media overlays.
Type:
Grant
Filed:
December 31, 2019
Date of Patent:
September 5, 2023
Assignee:
Snap Inc.
Inventors:
Jean Luo, Oleksandr Grytsiuk, Celia Nicole Mourkogiannis, Ivan Golub
Abstract: A rotational image viewer for viewing an image on a mobile device having a display including a viewport. The image viewer monitors the orientation of the mobile device and enlarges/reduces the image responsive to the mobile device's orientation such that the viewport remains filled with at least a portion of the image (i.e., no letterboxing or pillarboxing within the viewport).
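Keeping the viewport filled while the device rotates reduces to a scale computation: the image must cover the axis-aligned extent of the viewport after rotation. A sketch of that "cover" fit, under the assumption that this is the scaling rule the viewer uses (names are illustrative):

```python
import math

def fill_scale(image_w: float, image_h: float,
               view_w: float, view_h: float, angle_deg: float) -> float:
    """Smallest scale factor for the image so the rotated viewport stays
    fully covered (no letterboxing or pillarboxing)."""
    theta = math.radians(angle_deg)
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    # Axis-aligned extent of the viewport after rotation by theta.
    needed_w = view_w * c + view_h * s
    needed_h = view_w * s + view_h * c
    return max(needed_w / image_w, needed_h / image_h)

# Example: a 1080x1920 image in a 1080x1920 viewport rotated 30 degrees
# needs roughly a 1.76x enlargement to keep the viewport filled.
print(fill_scale(1080, 1920, 1080, 1920, 30.0))
```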
Abstract: An image pickup apparatus that is capable of easily picking up an image that effectively blurs a background. The image pickup apparatus includes at least one processor and/or circuit configured to function as the following units. A generation unit generates simulation images that simulate defocus states of the background in cases where an object is picked up at different object distances, based on a preliminary pickup image generated by picking up the background without the object. A setting unit sets evaluation frames in the simulation images, respectively. A calculation unit calculates defocus-state evaluation values of the evaluation frames. A notification unit notifies of information about an object position at which the defocus state becomes optimal based on the defocus-state evaluation values.
Abstract: A sky monitoring system comprises an image sensor, a wide-angle lens, a microprocessor, and a memory unit. The sky monitoring system is configured to take pictures of a sky scene, to subdivide each picture of the sky scene into a group of patches and determine one luminance value for each patch, and to calculate an output based on the luminance values of the patches. The image sensor, the wide-angle lens, the microprocessor, and the memory unit are integrated into one single sky monitoring device, thus making the sky monitoring system an embedded system.
Type:
Grant
Filed:
August 5, 2018
Date of Patent:
September 5, 2023
Assignee:
ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL)
Inventors:
Yujie Wu, Jérôme Henri Kämpf, Jean-Louis Scartezzini
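A short sketch of the patch subdivision and per-patch luminance step, assuming the frame dimensions are exact multiples of the patch size and that the per-patch value is a mean (both assumptions; names are illustrative):

```python
import numpy as np

def patch_luminances(luminance: np.ndarray, patch: int) -> np.ndarray:
    """Subdivide a luminance frame into patch x patch blocks and return one
    luminance value (the block mean) per patch.

    luminance -- 2D array whose height and width are multiples of `patch`
    """
    h, w = luminance.shape
    blocks = luminance.reshape(h // patch, patch, w // patch, patch)
    return blocks.mean(axis=(1, 3))

# Example: a 480x640 sky image split into 16x16 patches gives a 30x40 grid of
# luminance values, from which the system's output can then be computed.
sky = np.random.rand(480, 640)
print(patch_luminances(sky, 16).shape)
```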
Abstract: This disclosure relates to managing a feature point map. The managing can include adding feature points to the feature point map using feature points captured by multiple devices and/or by a single device at different times. The resulting feature point map may be referred to as a global feature point map, storing feature points from multiple feature point maps. The global feature point map may be stored in a global feature point database accessible by a server so that the server may provide the global feature point map to devices requesting feature point maps for particular locations. In such examples, the global feature point map may allow a user to localize in locations without having to scan for feature points in the locations. In this manner, the user can localize in locations in which the user has never previously visited.
Abstract: An image sensor has a plurality of pixels arranged in a row direction and in a column direction. Each pixel comprises a color filter that has a portion with a low transmissivity and a portion with a high transmissivity, and a photoelectric conversion element that includes a first photoelectric conversion cell which receives light transmitting through the portion with the low transmissivity of the color filter, and a second photoelectric conversion cell which receives light transmitting through the portion with the high transmissivity of the color filter. The plurality of pixels are arranged such that positions of the portions with the low transmissivity for pixels of one color are identical among the plurality of pixels, and the portions with the low transmissivity are positioned adjacent to each other between adjacent pixels of different colors in the row direction only.
Abstract: Systems and methods for extracting ground plane information directly from monocular images using self-supervised depth networks are disclosed. Self-supervised depth networks are used to generate a three-dimensional reconstruction of observed structures. From this reconstruction the system may generate surface normals. The surface normals can be calculated directly from depth maps in a way that is much less computationally expensive, and more accurate, than surface normal extraction from standard LiDAR data. Surface normals facing substantially the same direction and facing upwards may be determined to reflect a ground plane.
Type:
Grant
Filed:
June 26, 2020
Date of Patent:
August 22, 2023
Assignee:
TOYOTA RESEARCH INSTITUTE, INC.
Inventors:
Vitor Guizilini, Rares A. Ambrus, Adrien David Gaidon
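A rough sketch of the normals-from-depth idea: back-project the depth map with (assumed) pinhole intrinsics, take finite differences along the image axes, cross them to get per-pixel surface normals, and keep pixels whose normals are roughly vertical as ground-plane candidates. The intrinsics, the up axis, and the angle threshold are all assumptions for illustration.

```python
import numpy as np

def ground_mask_from_depth(depth, fx, fy, cx, cy, up=(0.0, -1.0, 0.0),
                           cos_thresh=0.85):
    """Estimate a ground-plane mask from a depth map via surface normals."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to a 3D point in the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)

    # Finite differences along the image axes; their cross product gives an
    # (unnormalized) surface normal at every pixel.
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    normals = np.cross(du, dv)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-9

    # Keep normals roughly parallel (up to sign) to the assumed "up" direction.
    up = np.asarray(up) / np.linalg.norm(up)
    cos_up = np.abs(normals @ up)
    return cos_up > cos_thresh
```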
Abstract: Embodiments of the present invention relate to the field of communications technologies, and provide a method for controlling a screen of a mobile terminal, and an apparatus, to resolve a prior-art problem of relatively low accuracy of controlling a screen of a mobile terminal to be turned on or turned off. The method includes: obtaining, by a mobile terminal, a current motion parameter of the mobile terminal, and determining whether the motion parameter meets a pick-up parameter threshold or a put-down parameter threshold; when the motion parameter meets the pick-up parameter threshold, determining that the mobile terminal is picked up; obtaining a sight line parameter of a user; and when it is determined that a visual center of the user is on a screen of the mobile terminal and the screen is in an off state, switching the screen to an on state.
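A compact sketch of the decision logic described above, with invented parameter names; the put-down branch is only implied by the abstract and is included here as an assumption:

```python
def update_screen_state(motion, pickup_threshold, putdown_threshold,
                        gaze_on_screen: bool, screen_on: bool) -> bool:
    """Return the new screen state (True = on) for one sensing cycle.

    motion            -- scalar motion parameter from the terminal's sensors
    pickup_threshold  -- motion level indicating the terminal was picked up
    putdown_threshold -- motion level indicating the terminal was put down
    gaze_on_screen    -- True if the user's visual center is on the screen
    screen_on         -- current screen state
    """
    picked_up = motion >= pickup_threshold
    put_down = motion <= putdown_threshold
    if picked_up and gaze_on_screen and not screen_on:
        return True        # turn the screen on
    if put_down and screen_on:
        return False       # turn the screen off (assumed symmetric case)
    return screen_on
```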