Patent Applications Published on April 14, 2022
-
Publication number: 20220116536
Abstract: The various embodiments illustrated herein disclose a method for operating an imaging device. The method includes activating a first image sensor at a first duty cycle within a first time period. The method further includes activating a second image sensor at a second duty cycle within the first time period. Additionally, the method includes modifying at least one of the first duty cycle or the second duty cycle based on at least a workflow associated with an operator of the imaging device.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: Patrick Anthony GIORDANO, David M. WILZ, Jeffrey Dean HARPER, Ka Man AU, Benjamin HEJL
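The duty-cycle adjustment described in this abstract can be pictured with a small sketch. The Python snippet below is illustrative only and is not the applicants' implementation; the workflow names, the preset ratios, and the `choose_duty_cycles`/`activate_sensors` helpers are invented for the example.

```python
# Hypothetical sketch: adjust two image-sensor duty cycles from an operator workflow hint.
# The workflow names and duty-cycle ratios are illustrative assumptions, not values taken
# from the published application.

def choose_duty_cycles(workflow: str) -> tuple[float, float]:
    """Return (sensor_1_duty, sensor_2_duty) as fractions of the frame period."""
    presets = {
        "package_sorting": (0.7, 0.3),   # mostly near-field scanning
        "shelf_picking":   (0.4, 0.6),   # mostly far-field scanning
    }
    return presets.get(workflow, (0.5, 0.5))  # fall back to an even split

def activate_sensors(period_ms: float, workflow: str) -> dict:
    duty_1, duty_2 = choose_duty_cycles(workflow)
    return {
        "sensor_1_on_ms": period_ms * duty_1,
        "sensor_2_on_ms": period_ms * duty_2,
    }

print(activate_sensors(100.0, "package_sorting"))  # {'sensor_1_on_ms': 70.0, 'sensor_2_on_ms': 30.0}
```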
-
Publication number: 20220116537
Abstract: A camera module according to the present embodiment includes a light emitting part configured to output a light signal to an object, a filter configured to allow the light signal to pass therethrough, at least one lens disposed on the filter and configured to collect the light signal from the object, a sensor configured to generate an electric signal from the light signal collected by the lens, the sensor including a plurality of pixels arranged in an array form, and a tilting part configured to tilt the filter to repeatedly move an optical path of the light signal having passed through the filter according to a predetermined rule. The optical path of the light signal passing through the filter is moved in one direction among diagonal directions of the sensor with respect to an optical path corresponding to the filter being disposed parallel to the sensor.
Type: Application
Filed: January 6, 2020
Publication date: April 14, 2022
Applicant: LG INNOTEK CO., LTD.
Inventors: Chang Hyuck LEE, Young Kil SONG
-
Publication number: 20220116538
Abstract: An apparatus including: a system including a correction lens unit configured to move to correct an aberration; a driving device configured to move the correction lens unit; an image pickup element configured to pick up an image formed by the system; an obtaining device configured to obtain an aberration based on a picked up image, through use of a learned model obtained by learning an image and aberration data of the system; and a controller configured to control the driving device based on the aberration, to thereby correct the aberration.
Type: Application
Filed: September 30, 2021
Publication date: April 14, 2022
Inventors: Toshinori Yamazaki, Hyochoru Tanaka, Shu Ito, Toshihiro Okuda
-
Publication number: 20220116539
Abstract: A method for video stabilization may include obtaining a target frame of a video; dividing a plurality of pixels of the target frame into a plurality of pixel groups; determining a plurality of first feature points in the target frame; determining first location information of the plurality of first feature points in the target frame; determining second location information of the plurality of first feature points in a frame prior to the target frame in the video; obtaining a global homography matrix; determining an offset of each of the plurality of first feature points; determining a fitting result based on the first location information and the offsets; for each of the plurality of pixel groups, determining a correction matrix; and for each of the plurality of pixel groups, processing the pixels in the pixel group based on the global homography matrix and the correction matrix.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Applicant: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventor: Tingniao WANG
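For orientation, the skeleton of a feature-point-plus-global-homography stabilization step can be sketched with OpenCV. This is not the patented algorithm (the per-group correction matrices are omitted), just a minimal sketch of the common building blocks the abstract names.

```python
# Rough sketch of a feature-based stabilization step between two frames, loosely in the
# spirit of the abstract (feature points and a global homography). The per-pixel-group
# correction described in the application is not reproduced here.
import cv2
import numpy as np

def stabilize_step(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    # 1. Detect feature points in the current (target) frame.
    pts_curr = cv2.goodFeaturesToTrack(curr_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
    # 2. Track them into the previous frame to obtain their prior locations.
    pts_prev, status, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, pts_curr, None)
    good_curr = pts_curr[status.flatten() == 1].reshape(-1, 2)
    good_prev = pts_prev[status.flatten() == 1].reshape(-1, 2)
    # 3. Fit a global homography mapping current-frame locations to previous-frame locations.
    H, _ = cv2.findHomography(good_curr, good_prev, cv2.RANSAC, 3.0)
    # 4. Warp the target frame with the global model.
    h, w = curr_gray.shape
    return cv2.warpPerspective(curr_gray, H, (w, h))
```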
-
Publication number: 20220116540
Abstract: Embodiments of the present disclosure relate to image stabilization technology, and provide an image stabilization method and apparatus, a terminal, and a storage medium. The method is applied in a terminal equipped with at least two camera modules corresponding to different focal length ranges. The method includes: obtaining a first image outputted from a first camera module of the at least two camera modules and a second image outputted from a second camera module of the at least two camera modules; processing, in a zooming process, the first image and the second image in a predetermined processing scheme to obtain a target image, the predetermined processing scheme including an electronic image stabilization process and a zooming process, the zooming process being a process of switching from the first camera module to the second camera module; and displaying the target image on a viewfinder screen.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Applicant: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Yuhu Jia
-
Publication number: 20220116541
Abstract: An apparatus includes an imaging unit that captures an image, a zoom drive unit that controls an angle of view of the imaging unit, a pan drive unit that rotates the imaging unit in a pan direction, a tilt drive unit that rotates the imaging unit in a tilt direction, and a control unit that controls the pan drive unit and the tilt drive unit. The control unit controls accelerations or decelerations of the pan drive unit and the tilt drive unit based on the angle of view controlled by the zoom drive unit so that an acceleration time or a deceleration time of movement of a video shot by the imaging unit is made constant.
Type: Application
Filed: September 28, 2021
Publication date: April 14, 2022
Inventor: Satoshi Ashitani
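The underlying idea, keeping the apparent acceleration of the framed video constant across zoom positions, can be illustrated with a toy scaling rule. The reference values and the linear scaling below are assumptions for illustration, not the control law in the application.

```python
# Illustrative sketch (not the patented control law): scale the pan/tilt angular
# acceleration with the current angle of view so that the apparent acceleration time
# of the framed video stays roughly constant as the lens zooms.

def pan_acceleration(angle_of_view_deg: float,
                     reference_aov_deg: float = 60.0,
                     reference_accel_deg_s2: float = 120.0) -> float:
    """Return an angular acceleration (deg/s^2) for the pan axis.

    Assumed reference: at a 60-degree angle of view the pan axis accelerates at
    120 deg/s^2; the value is scaled linearly as the angle of view changes.
    """
    return reference_accel_deg_s2 * (angle_of_view_deg / reference_aov_deg)

for aov in (60.0, 30.0, 10.0):           # wide -> telephoto
    print(aov, pan_acceleration(aov))    # smaller angle of view -> gentler angular acceleration
```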
-
Publication number: 20220116542
Abstract: An electronic device according to an embodiment includes: an ultra-wide band (UWB) communication module including a plurality of antennas, and a processor operatively connected to the UWB communication module, wherein the processor is configured to: measure a first coordinate of a first external device and a second coordinate of a second external device generated based on signals received from the first external device and the second external device using the plurality of antennas, generate a first coordinate system based on the electronic device corresponding to the measured first coordinate and second coordinate, and regenerate a second coordinate system through reconfiguration of the first coordinate system based on the first external device.
Type: Application
Filed: October 14, 2021
Publication date: April 14, 2022
Inventors: Chulkwi KIM, Geonho YOON
-
Publication number: 20220116543
Abstract: The present disclosure provides a following shoot method, a gimbal control method, a photographing apparatus, a handheld gimbal and a photographing system. The following shoot method includes: determining whether in a following preparation mode; if in the following preparation mode, displaying an area identifier; determining whether a target occurring in an image is a photographing target based on the area identifier; if the target is the photographing target, detecting a control operation of a manual control member by a user; and controlling a following shoot mode to start or end based on the control operation.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Applicant: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Youwei Jiang, Junrong Zhu
-
Publication number: 20220116544
Abstract: Various embodiments disclosed herein include techniques for determining autofocus for a camera on a mobile device. In some instances, depth imaging is used to assist in determining a focus position for the camera through an autofocus process. For example, a determination of depth may be used to determine a focus position for the camera. In another example, the determination of depth may be used to assist another autofocus process.
Type: Application
Filed: September 24, 2021
Publication date: April 14, 2022
Applicant: Apple Inc.
Inventors: Mark N. Gamadia, Abhishek Dhanda, Gregory Guyomarc'h, Andrew D. Fernandez, Moshe Laifenfeld
-
Publication number: 20220116545
Abstract: An apparatus configured for image processing comprises one or more processors configured to determine data associated with a distance between an object and the apparatus and determine a plurality of lens positions of a camera lens based on the data associated with the distance between the object and the apparatus. The one or more processors are further configured to determine, for each one of the plurality of lens positions, a respective focus value to generate a plurality of focus values. To determine, for each one of the plurality of lens positions, the respective focus value, the one or more processors are configured to determine, for each one of the plurality of lens positions, phase difference information. The one or more processors are further configured to determine a final lens position based on the plurality of focus values.
Type: Application
Filed: March 16, 2021
Publication date: April 14, 2022
Inventors: Wen-Chun Feng, Hui Shan Kao, Hsuan-Ming Liu
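The overall flow, candidate lens positions derived from a distance estimate, each scored via phase-difference information, with the best-scoring position chosen, can be sketched as follows. The distance-to-position mapping and the scoring function are stand-ins for illustration, not the implementation described in the application.

```python
# Hypothetical sketch of the flow: derive candidate lens positions from a distance
# estimate, score each with a phase-difference-based focus value, and pick the best.

def candidate_lens_positions(distance_m: float, count: int = 5, span: int = 20) -> list[int]:
    nominal = int(1000 / max(distance_m, 0.1))          # toy distance-to-position mapping
    step = max(span // (count - 1), 1)
    return [nominal + (i - count // 2) * step for i in range(count)]

def focus_value_from_phase_difference(phase_diff: float) -> float:
    return 1.0 / (1.0 + abs(phase_diff))                # smaller phase difference -> better focus

def final_lens_position(distance_m: float, measure_phase_diff) -> int:
    positions = candidate_lens_positions(distance_m)
    scores = {p: focus_value_from_phase_difference(measure_phase_diff(p)) for p in positions}
    return max(scores, key=scores.get)

# Example with a fake phase-difference reading that is best near lens position 500:
print(final_lens_position(2.0, lambda p: (p - 500) / 100.0))   # -> 500
```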
-
Publication number: 20220116546
Abstract: An under-display camera is positioned underneath a display of a mobile device. The under-display camera captures an image using light passing through a portion of the display. The mobile device displays a display image on the display, the display image based on the image. The mobile device displays an indicator overlaid over the display image on an indicator area of the display that overlaps with the portion of the display. The indicator may identify the position of the camera. The mobile device can compensate for occlusion of the camera by continuing to display a previous display image if a more recently captured image includes an occlusion. The mobile device can give users alternate ways to select areas of the display image to avoid camera occlusion, for instance using hardware buttons and/or touchscreen interface elements.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Inventors: Bapineedu Chowdary GUMMADI, Soman Ganesh NIKHARA, Ravi Shankar KADAMBALA
-
Publication number: 20220116547
Abstract: Video routing may include integrated audio mixing, audio processing, or both. An audio subsystem that is integrated with a video router, for example, may receive and mix router input audio signals to provide mixed audio signals, and route the mixed audio signals to router outputs as router output audio signals. Regarding audio processing, input audio signals that include router input audio signals that are received by the video router may be routed as output audio signals. The output audio signals include router output audio signals to be output from the video router. Respective ones of the output audio signals are processed to provide respective processed output audio signals. The input audio signals for the routing also include the processed output audio signals.
Type: Application
Filed: October 8, 2020
Publication date: April 14, 2022
Inventors: Michael Pala, A. Matthew Zimmer, Donald Mark Sizemore, Yu Liu
-
Publication number: 20220116548
Abstract: A virtualized production switcher for media production is provided that includes a script database that stores predefined macros that each define a script for applying media production functions to media content, and a script optimizer that selects a subset of the predefined macros to be presented on a user interface as suggested scripts for each of a plurality of scenes of a media stream. Moreover, a program generator receives a user input via the user interface that selects one of the predefined macros and applies the corresponding script to a selected scene of the media stream for a media production by applying the at least one media production function to the selected scene based on an identified key-frame thereof. A script profiler identifies metadata related to the selected scene and updates the script database to store a correspondence between the selected scene and the selected predefined macro.
Type: Application
Filed: October 11, 2021
Publication date: April 14, 2022
Inventor: Ian David FLETCHER
-
Publication number: 20220116549
Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Alejandro SZTRAJMAN, Sunando SENGUPTA
-
Publication number: 20220116550
Abstract: Systems and methods for generating a bias lighting effect are provided. A computer-implemented method can include obtaining a video comprising a plurality of video frames. For each of one or more video frames of the plurality of video frames, the method can include sampling an edge portion of the video frame. The edge portion can include a portion of the video frame adjacent to an edge of the video frame. The method can further include generating a bias lighting effect for the video frame. Generating the bias lighting effect can include inverting the edge portion across the edge and blurring the edge portion. The method can further include displaying the video frame concurrently with the bias lighting effect for the video frame. The bias lighting effect can be displayed adjacent to the edge of the video frame.
Type: Application
Filed: May 1, 2019
Publication date: April 14, 2022
Inventors: Bryan Ku, Aileen Cheng, Rick Maria Frederikus Van Mook
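The sample-invert-blur sequence the abstract describes translates naturally into a few lines of array code. The sketch below is a minimal illustration under assumed parameters (strip width, blur sigma, edge convention), not the claimed implementation.

```python
# Minimal sketch of the described bias-lighting idea: take a strip along one edge of the
# frame, mirror it across that edge, and blur it before displaying it next to the frame.
# Strip width and blur size are arbitrary choices for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_lighting_strip(frame: np.ndarray, edge: str = "left", width: int = 32) -> np.ndarray:
    """frame is an H x W x 3 array; returns a blurred, mirrored strip for the given edge."""
    if edge == "left":
        strip = frame[:, :width][:, ::-1]      # sample the left edge, invert across it
    elif edge == "right":
        strip = frame[:, -width:][:, ::-1]
    elif edge == "top":
        strip = frame[:width, :][::-1, :]
    else:  # bottom
        strip = frame[-width:, :][::-1, :]
    # Blur spatially (not across color channels) to produce a soft glow.
    return gaussian_filter(strip.astype(float), sigma=(8, 8, 0))

frame = (np.random.rand(720, 1280, 3) * 255).astype(np.uint8)
glow = bias_lighting_strip(frame, "left")
print(glow.shape)  # (720, 32, 3), rendered adjacent to the frame's left edge
```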
-
Publication number: 20220116551
Abstract: An image providing system capable of quickly providing a plurality of different cropped images is disclosed. The image providing system acquires position information about a position of a user designated by the user. The image providing system determines a range of an image that was captured by an image capturing unit, the range corresponding to the position information as a cropping range, before the image capturing unit starts a series of image capturing in which the image capturing unit captures a plurality of images. After the series of image capturing is started, the system applies cropping that cuts out a part of a captured image based on the cropping range. During the series of image capturing, the system provides a cropped image to which the cropping has been applied in a way in which a user can obtain the cropped image.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Tomonobu Hiraishi, Koshi Tokunaga, Tomoki Kuroda, Junko Morikawa, Tokichi Minami
-
Publication number: 20220116552
Abstract: A computer system is configured to integrate virtual content with a live streaming video in substantially real time. In particular, the computer system is configured to receive a stream of camera-generated video from a hardware camera. The computer system is also configured to obtain virtual content from a multimedia file. The virtual content is then integrated into the stream of the camera-generated video to generate an integrated stream of video in substantially real time. Integrating the virtual content into the stream of the camera-generated video comprises integrating each frame of the virtual content into each frame of the stream of the camera-generated video in substantially real time.
Type: Application
Filed: October 6, 2021
Publication date: April 14, 2022
Inventor: Dror Benjamin
-
Publication number: 20220116553
Abstract: One example method for conducting a conference between conference participants includes obtaining a scene layout for the conference, the scene layout comprising a plurality of video areas that are each assigned to a respective display video stream from one of the conference participants, and the scene layout forming a common visual presentation for the conference; receiving video streams from one or more of the conference participants; and displaying the scene layout, wherein the video streams from the conference participants are displayed in their respective assigned video areas.
Type: Application
Filed: October 28, 2021
Publication date: April 14, 2022
Inventors: Lin Han, Wei Li
-
Publication number: 20220116554
Abstract: Method and apparatus for overlaying themed imagery onto real-world objects in a head-mounted display device (HMDD). A computing device receives, from an HMDD, depth data that identifies distances from the HMDD to surfaces of a plurality of objects in a user space. The computing device detects at least one object in the user space based at least in part on the depth data. The computing device determines a classification of video content being presented on a display system of the HMDD. The computing device selects, based on the classification, a particular image theme from a plurality of different image themes, the image theme comprising one or more image textures. The computing device sends, to the HMDD, at least one image texture for overlaying the at least one object during presentation of the at least one object on the display system of the HMDD in conjunction with the video content.
Type: Application
Filed: December 17, 2021
Publication date: April 14, 2022
Inventors: Dell Wolfensparger, Andrew Ip, Dhananjay Lal, Matthew Ringenberg
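The selection step, mapping a classification of the video content to an image theme whose textures are sent to the headset, amounts to a lookup. The class names, theme names, and texture file names below are invented purely to illustrate that mapping.

```python
# Hypothetical sketch of the theme-selection step: map the classification of the video
# being watched to an image theme whose textures are sent to the HMDD for overlaying
# detected objects. All names and file names here are illustrative assumptions.

IMAGE_THEMES = {
    "space_documentary": ["metal_panel.png", "starfield.png"],
    "nature_series":     ["moss.png", "bark.png"],
}

def select_theme_textures(content_class: str) -> list[str]:
    # Fall back to a neutral theme when the classification has no dedicated theme.
    return IMAGE_THEMES.get(content_class, ["plain_grey.png"])

print(select_theme_textures("space_documentary"))   # textures to overlay on detected objects
```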
-
Publication number: 20220116555
Abstract: Silicon-based photomultipliers (SiPMs) for reducing optical crosstalk effects in the SiPMs are provided. The SiPMs include macrocells. Each macrocell includes microcells, coupled in parallel, and a reading circuit coupled to an output of each macrocell. The microcells are arranged in the SiPM so that adjacent microcells belong to different macrocells. When a microcell performs a detection, the reading circuit of each macrocell having one or more microcells adjacent to the microcell that performed the detection is configured to disable its output signal during a predefined period of time. PET devices or systems and methods for reducing crosstalk effects are also provided.
Type: Application
Filed: November 22, 2021
Publication date: April 14, 2022
Inventors: David GASCÓN FORA, Sergio GÓMEZ FERNÁNDEZ, Joan MAURICIO FERRÉ
-
Publication number: 20220116556
Abstract: There is provided a method and system for pixel-wise imaging of a scene. The method including: receiving a pixel-wise pattern, the pixel-wise pattern including a masking value for each pixel in an array of pixels of an image sensor; producing an electronic signal at each pixel when such pixel is exposed to light received from the scene; and directing the electronic signal at each pixel to one or more collection nodes associated with such pixel based on the respective masking value, the one or more collection nodes each capable of integrating the received electronic signal.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Inventors: Roman GENOV, Kiriakos Neoklis KUTULAKOS, Navid SARHANGNEJAD, Nikola KATIC, Mian WEI
-
Publication number: 20220116557
Abstract: A pixel array for an image sensor includes: a first pixel including a floating diffusion node, and a first selection transistor configured to output a first pixel signal generated using a voltage of the floating diffusion node of the first pixel; a second pixel including a floating diffusion node, and a second selection transistor configured to output a second pixel signal generated using a voltage of the floating diffusion node of the second pixel; and a column line connected to the first and second selection transistors. The floating diffusion nodes of the first and second pixels may be configured to be electrically connected to each other, and the first selection transistor and the second selection transistor may be configured to be turned on so that the first pixel signal and the second pixel signal are output to the column line, in a low conversion gain mode.
Type: Application
Filed: October 6, 2021
Publication date: April 14, 2022
Inventors: Hongsuk LEE, Sanghyuck MOON, Jueun PARK, Jungbin YUN
-
Publication number: 20220116558
Abstract: An image sensor with noise-reduction circuitry includes a pixel array, many analog-to-digital converters, and many correlated dual sampling units. The pixel array includes rows and columns of pixel cells. Each of the pixel cells converts light into an analog electrical signal. Analog-to-digital converters convert the electric signals output from the pixel cells into digital signals. The correlated dual sampling units convert the digital signals and/or the electric signals into correlated dual sampling signals to denoise the output of the image sensor by subtraction from the analog content. An electronic device is also provided.
Type: Application
Filed: July 30, 2021
Publication date: April 14, 2022
Inventors: JEN-SHENG TSAI, TUNG-CHI TSAI
-
Publication number: 20220116559
Abstract: An image sensor includes image sensor cells, each configured to accumulate charge corresponding to light incident thereon, a first driver connected to a power supply node, where the first driver generates control signals for a first image sensor cell based on a voltage of the power supply node, where the first image sensor cell generates an image signal in response to the control signals, and the image signal is based on the accumulated charge of the first image sensor cell. The image sensor also includes an ADC generating a digital representation of the image signal, and a switching voltage generator selectively generating the voltage of the power supply node in response to an enable signal, where the enable signal causes the switching voltage generator to not generate the voltage of the power supply node while the image signal is generated.
Type: Application
Filed: October 8, 2020
Publication date: April 14, 2022
Inventors: Chao YANG, Matthew POWELL, Dazhi WEI
-
Publication number: 20220116560
Abstract: A light detection element, including a first light detection unit, a second light detection unit, and a driving transistor, is provided. The first light detection unit includes a first transistor and a first light sensing unit. The first transistor and the first light sensing unit are electrically connected. The second light detection unit and the first light detection unit are electrically connected. The second light detection unit includes a second light sensing unit and a second transistor. The second light sensing unit and the second transistor are electrically connected. The driving transistor has a gate terminal. The gate terminal is electrically connected to the first light sensing unit and the second light sensing unit. In a time interval, the first transistor is not turned on and the second transistor is turned on.
Type: Application
Filed: September 23, 2021
Publication date: April 14, 2022
Applicant: Innolux Corporation
Inventors: Chin-Lung Ting, Ming Chun Tseng, Ho-Tien Chen, Kung-Chen Kuo
-
Publication number: 20220116561
Abstract: Active focusing non-line-of-sight methods and systems for focusing light over or around an obstacle to an object, where light is focused using wavefront shaping based on feedback readings of light scattered by a two-dimensional scatterer such as a wall.
Type: Application
Filed: October 12, 2021
Publication date: April 14, 2022
Inventors: Jian Xu, Ruizhi Cao, Changhuei Yang
-
Publication number: 20220116562
Abstract: An image sensor includes image sensor cells, each configured to generate an image signal in response to control signals. The image sensor also includes an ADC to receive the image signals of the image sensor cells, and a first driver to generate one or more first control signals for a first image sensor cell, where the first driver includes a first negative supply terminal. The image sensor also includes a first multiplexor to selectively connect the first negative supply terminal of the first driver to one of a plurality of power supply nodes, and a second driver to generate one or more second control signals for a second image sensor cell, where the second driver includes a second negative supply terminal. The image sensor also includes a second multiplexor to selectively connect the second negative supply terminal of the second driver to one of the power supply nodes.
Type: Application
Filed: October 8, 2020
Publication date: April 14, 2022
Inventors: Chao YANG, Matthew POWELL, Dazhi WEI
-
Publication number: 20220116563
Abstract: An image sensor including a pixel of a first tap, a pixel of a second tap, an operational amplifier configured to perform an auto zeroing operation with a pixel signal of the pixel of the second tap applied, and perform an operation for comparison between a ramp voltage and a signal output from the pixel of the first tap, with a pixel signal of the pixel of the first tap applied, and a counter circuit configured to generate a digital code in response to an output of the operational amplifier.
Type: Application
Filed: April 8, 2021
Publication date: April 14, 2022
Inventors: Jeong Eun SONG, Yu Jin PARK, Sung Uk SEO, Min Seok SHIN
-
Publication number: 20220116564
Abstract: An A/D converter and an image sensor are disclosed. The image sensor includes: a pixel array including a plurality of pixels; a ramp signal generator configured to generate a ramp signal; and a comparison circuit configured to output a comparison result signal by comparing a pixel signal output by the pixel array with the ramp signal. The comparison circuit includes: a first comparator stage configured to output a first stage output signal according to a result of comparing the pixel signal with the ramp signal, to a first circuit node; a limiter including an n-type transistor having one end connected to the first circuit node and an opposite end to which power supply voltage is applied; and a second comparator stage configured to generate the comparison result signal by shaping the first stage output signal.
Type: Application
Filed: September 9, 2021
Publication date: April 14, 2022
Inventors: DAEHWA PAIK, Jaehong Kim, Jinwoo Kim, Seunghyun Lim, Sanghyun Cho
-
Publication number: 20220116565
Abstract: In some examples, a sensor apparatus comprises: a pixel cell configured to generate a voltage, the pixel cell including a photodiode configured to generate charge in response to incoming light, and a charge storage device to convert the charge to a voltage; an integrated circuit configured to: determine a first captured voltage converted by the charge storage device during a first time period; compare the first captured voltage to a threshold voltage value; and in response to determining that the first captured voltage meets or exceeds the threshold voltage value: determine first time data corresponding to the first time period; and prevent the charge storage device from further generating a charge; and an analog-to-digital converter (ADC) configured to generate a digital pixel value based on the first captured voltage, and a memory to store the digital pixel value and the first time data.
Type: Application
Filed: October 7, 2021
Publication date: April 14, 2022
Inventors: Tsung-Hsun TSAI, Song CHEN, Xinqiao LIU
-
Publication number: 20220116566
Abstract: An electronic device according to an embodiment of the disclosure includes: a memory, and a processor electrically connected to the memory, wherein the memory stores a high dynamic range image and dynamic metadata including dynamic tone mapping information corresponding to a plurality of frames included in the high dynamic range image, and wherein the memory stores instructions that, when executed, cause the processor to: control the electronic device to transmit a packet including data obtained by combining one frame among the plurality of frames and a part of the dynamic metadata corresponding to the one frame among the dynamic metadata to an external electronic device in response to an image request from the external electronic device.
Type: Application
Filed: December 17, 2021
Publication date: April 14, 2022
Inventor: Chansik PARK
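The core idea, pairing each HDR frame with only the slice of dynamic tone-mapping metadata that applies to it before transmission, can be sketched as a simple packet builder. The field names and the JSON-plus-length framing below are assumptions for illustration, not the actual format in the application.

```python
# Hypothetical packetization sketch: pair an HDR frame with the part of the dynamic
# tone-mapping metadata that corresponds to that frame before sending it to an
# external device. Field names and framing are illustrative assumptions.
import json
import struct

def build_packet(frame_index: int, frame_bytes: bytes, dynamic_metadata: list[dict]) -> bytes:
    meta_for_frame = dynamic_metadata[frame_index]          # metadata slice for this frame only
    header = json.dumps({"frame": frame_index, "tone_mapping": meta_for_frame}).encode("utf-8")
    # Simple length-prefixed framing: [header length][header][frame payload].
    return struct.pack(">I", len(header)) + header + frame_bytes

metadata = [{"max_luminance_nits": 1000, "knee_point": 0.6},
            {"max_luminance_nits": 800, "knee_point": 0.55}]
packet = build_packet(0, b"\x00" * 16, metadata)
print(len(packet))
```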
-
Publication number: 20220116567
Abstract: A video frame interpolation method and device, and a computer-readable storage medium are described.
Type: Application
Filed: April 30, 2020
Publication date: April 14, 2022
Inventors: Yunhua LU, Guannan CHEN, Ran DUAN, Lijie ZHANG, Hanwen LIU
-
Publication number: 20220116568
Abstract: This technology enables high quality audio reproduction on the reception side without supplying a transmission clock using a clock signal line from the reception side to the transmission side. The transmission apparatus receives encoded data capable of clock recovery from a reception apparatus (external device), generates an audio clock on the basis of a carrier clock recovered from the encoded data, and transmits audio data to the reception apparatus in synchronization with the audio clock. The reception apparatus transmits the encoded data capable of clock recovery to the external device in synchronization with the carrier clock generated on the basis of a self-generating audio clock, receives the audio data from the transmission apparatus (external device), and processes the audio data on the basis of the self-generating audio clock.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Applicant: SONY CORPORATION
Inventors: Kazuaki TOBA, Toshihisa HYAKUDAI
-
Publication number: 20220116569
Abstract: A surveillance system includes at least one image or video capture device and a controller configured to determine a change in location for the at least one image or video capture device from a first location to a second location. First image or video data is received from the at least one image or video capture device at the second location, and in response to the location change, a preconfigured neural network is obtained or weights for a neural network are obtained based at least in part on the received first image or video data. Second image or video data is received from the at least one image or video capture device at the second location and an inference operation is performed on the second image or video data by processing the second image or video data using the obtained weights or the obtained preconfigured neural network.
Type: Application
Filed: December 24, 2021
Publication date: April 14, 2022
Inventors: Shaomin Xiong, Toshiki Hirano, Haoyu Wu
-
Publication number: 20220116570
Abstract: An artificial intelligence entry management device for an entry and delivery system includes a camera, a microphone, a motion detector, a speaker, and a housing. The housing has an oval shape with a substantially open middle. The substantially open middle has a housing protrusion portion configured to house the camera, the microphone, the motion detector, and the speaker. The entry and delivery system may also include one or more robots that interface with the entry management device to monitor an area around an access point and to alert the user of activity. A robot may be an aerial robot that has a camera, a robot light, a speaker, a microphone and an actuator to enable picking and moving a package. Aerial robots may be configured around a perimeter of a building to monitor the building and may turn on a robot light when motion is detected.
Type: Application
Filed: September 28, 2021
Publication date: April 14, 2022
Inventor: Ronald Carter
-
Publication number: 20220116571
Abstract: An information processing device which controls one projector and another projector includes: a display unit which displays a first operation screen for managing the one projector and a second operation screen for managing the another projector; an input unit which accepts an operation on the first operation screen and the second operation screen; and a control unit which controls the one projector and the another projector, based on the operation accepted by the input unit. When the information processing device is switched from a first state where the first operation screen is operable to a second state where the second operation screen is operable, the control unit causes the second operation screen where an item corresponding to a predetermined item selected on the first operation screen in the first state is selected, to be displayed in the second state.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Applicant: SEIKO EPSON CORPORATION
Inventor: Toshiyuki SAKAI
-
Publication number: 20220116572
Abstract: This invention provides an improved display system and method that is created by adjusting the properties of one or more displays to obtain coarse control over display behavior, by using sensors to optimize display parameters. The display is further improved by constructing a display map by selectively driving the display and sensing the optical image created. Furthermore, the sensors are used to ensure that the resulting optimized display meets target quality measurements over time, potentially taking into account ambient conditions. The system reports on its status, and is able to predict when the system will no longer meet a quality target. The system and method are able to optimize a display system and keep it optimized over time. Individual displays within the display system can have operating points that are matched to each other. Corrections to the input image signal to deliver improved display system performance can be minimized, and therefore, the unwanted artifacts of those changes can be minimized.
Type: Application
Filed: October 25, 2021
Publication date: April 14, 2022
Inventors: Rajeev J. Surati, Ph.D., Samson J. Timoner, Ph.D., Kevin Amaratunga, Thomas F. Knight, Jr.
-
Publication number: 20220116573
Abstract: In an example, the present invention provides an optical engine apparatus. The apparatus has a laser diode device, the laser diode device characterized by a wavelength ranging from 300 to 2000 nm or any variations thereof. In an example, the apparatus has a lens coupled to an output of the laser diode device and a scanning mirror device operably coupled to the laser diode device. In an example, the apparatus has an un-patterned phosphor plate coupled to the scanning mirror and configured with the laser device; and a spatial image formed on a portion of the un-patterned phosphor plate configured by a modulation of the laser and movement of the scanning mirror device.
Type: Application
Filed: October 20, 2021
Publication date: April 14, 2022
Applicant: KYOCERA SLD Laser, Inc.
Inventors: Vlad Joseph Novotny, Paul Rudy
-
Publication number: 20220116574
Abstract: A travel-environment display apparatus includes a generating unit that repeatedly generates a travel-environment image of a vehicle, and a display device that updates display in accordance with the travel-environment images. The generating unit generates an image including a vehicle object corresponding to the vehicle as viewed from a rear viewpoint and a linear road-surface object extending distantly from the vehicle object to correspond to a road, lane, or lane boundary line along which the vehicle is traveling. The generating unit also moves the vehicle object between two lanes expressed by the linear road-surface object between the travel-environment images, and changes an orientation of the linear road-surface object such that a far area thereof from the viewpoint significantly moves toward an opposite side from a lane-changing direction of the vehicle, as compared with a near area, and then moves back, when the vehicle performs a lane change between the two lanes.
Type: Application
Filed: September 28, 2021
Publication date: April 14, 2022
Inventor: Ryosuke KAKIMARU
-
Publication number: 20220116575
Abstract: A stereo imaging system includes a pair of folded-parallel-light-channel (FPLC) units arranged to provide a virtual left side view and a virtual right side view of a scene. Each FPLC unit includes a fixed lens unit adapted to focus reflected light comprising an image of a scene to an image sensor and a light-redirecting unit comprising a reflector adapted to define a parallel image reflection path to the fixed lens unit via a collimated light beam.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Inventor: Kai Michael Cheng
-
Publication number: 20220116576
Abstract: An information processing apparatus according to the present technology includes an image obtaining unit and a display control unit. The image obtaining unit obtains a plurality of first divided images obtained by dividing a first image showing a first location along a second direction substantially perpendicular to a first direction, and a plurality of second divided images obtained by dividing a second image showing a second location along the second direction. The display control unit arranges and simultaneously displays the plurality of first divided images and the plurality of second divided images along the first direction on a display device of a user at a third location.
Type: Application
Filed: December 11, 2019
Publication date: April 14, 2022
Inventors: MARI SAITO, KENJI SUGIHARA
-
Publication number: 20220116577
Abstract: A processing system having at least one processor may obtain a two-dimensional source video, select a volumetric video associated with at least one feature of the source video from a library of volumetric videos, identify a first object in the source video, and determine a location of the first object within a space of the volumetric video. The processing system may further obtain a three-dimensional object model of the first object, texture map the first object to the three-dimensional object model of the first object to generate an enhanced three-dimensional object model of the first object, and modify the volumetric video to include the enhanced three-dimensional object model of the first object in the location of the first object within the space of the volumetric video.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Eric Zavesky, Zhu Liu, David Crawford Gibbon, Behzad Shahraray, Tan Xu
-
Publication number: 20220116578
Abstract: Provided are a method and an apparatus for streaming a multi-view 360 degree video, and a method for streaming a 360 degree video according to an embodiment of the present disclosure includes: encoding a multi-view video to a bitstream of a base layer and a bitstream of a tile layer constituted by at least one tile; selecting a tile included in a user view video in the encoded bitstream of the tile layer by using user view information received from a 360 degree video rendering apparatus, and video information of the multi-view video; extracting tile data included in the selected user view video from the encoded bitstream of the tile layer, and generating a tile bitstream corresponding to the extracted tile data; and transmitting the encoded bitstream of the base layer and the generated tile bitstream to the 360 degree video rendering apparatus.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Applicant: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
Inventors: Jong Beom JEONG, Soon Bin LEE, Eun Seok RYU
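The tile-selection step, keeping only the tiles that overlap the user's current viewport, can be illustrated with a simplified geometric check. The equirectangular tiling grid, angle conventions, and overlap test below are assumptions for illustration, not the selection logic defined in the application.

```python
# Simplified sketch of viewport-dependent tile selection: given the user's viewing
# direction and field of view, keep only the tiles whose yaw/pitch extent overlaps
# the viewport. Grid layout and angle conventions are illustrative assumptions.

def tiles_in_viewport(yaw_deg: float, pitch_deg: float,
                      h_fov_deg: float = 90.0, v_fov_deg: float = 90.0,
                      cols: int = 8, rows: int = 4) -> list[tuple[int, int]]:
    """Return (col, row) indices of equirectangular tiles overlapping the viewport."""
    selected = []
    tile_w, tile_h = 360.0 / cols, 180.0 / rows
    for c in range(cols):
        for r in range(rows):
            tile_yaw = -180.0 + (c + 0.5) * tile_w                 # tile centre in degrees
            tile_pitch = 90.0 - (r + 0.5) * tile_h
            d_yaw = (tile_yaw - yaw_deg + 180.0) % 360.0 - 180.0   # wrap-around difference
            d_pitch = tile_pitch - pitch_deg
            if abs(d_yaw) <= (h_fov_deg + tile_w) / 2 and abs(d_pitch) <= (v_fov_deg + tile_h) / 2:
                selected.append((c, r))
    return selected

print(tiles_in_viewport(yaw_deg=0.0, pitch_deg=0.0))   # tiles around the front of the sphere
```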
-
Publication number: 20220116579
Abstract: A three-dimensional model distribution method includes: distributing a first model, which is a three-dimensional model of a target space in a target time period, in a first distribution mode; and distributing a second model, which is a three-dimensional model of the target space in the target time period and makes a smaller change per unit time than the first model, in a second distribution mode different from the first distribution mode.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: Toshiyasu SUGIO, Toru MATSUNOBU, Satoshi YOSHIKAWA, Tatsuya KOYAMA, Yoichi SUGINO
-
Publication number: 20220116580
Abstract: A 3D track assessment apparatus and method are disclosed for identifying and assessing features of a railway track bed based on 3D elevation data gathered from the railway track bed.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Applicant: TETRA TECH, INC.
Inventor: Darel Mesher
-
Publication number: 20220116581
Abstract: An information processing apparatus includes a detection unit that detects a three-dimensional position and a posture of an object in an instruction three-dimensional region having an enlarged or reduced relationship with an observation three-dimensional region in which a virtual viewpoint and a virtual visual line are defined, a derivation unit that derives the viewpoint and the visual line corresponding to detection results of the detection unit depending on positional relationship information indicating a relative positional relationship between the observation three-dimensional region and the instruction three-dimensional region, and an acquisition unit that acquires a virtual viewpoint image showing a subject in a case in which the subject is observed with the viewpoint and the visual line derived by the derivation unit, the virtual viewpoint image being based on a plurality of images obtained by imaging an imaging region included in the observation three-dimensional region by a plurality of imaging apparatuses.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: Masahiko MIYATA, Takashi AOKI, Kazunori TAMURA, Fuminori IRIE
-
Publication number: 20220116582
Abstract: A display control device includes a first acquisition unit that acquires first viewpoint position information, and a first control unit that performs a control of displaying a first viewpoint video selected from among a plurality of viewpoint videos generated based on images obtained by imaging an imaging region from a plurality of viewpoint positions on a first display unit, in which the first control unit performs a control of displaying first specific information for specifying a first viewpoint position in the first viewpoint video in a case in which the first viewpoint position indicated by the acquired first viewpoint position information is included in the first viewpoint video and performs a control of changing a display size of the first specific information depending on an angle of view of the first viewpoint video displayed on the first display unit.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: Fuminori IRIE, Takashi AOKI, Kazunori TAMURA, Masahiko MIYATA
-
Publication number: 20220116583
Abstract: A method and system for synchronization of image data is provided. A plurality of image-capture devices is controlled to acquire a plurality of video clips of at least one object. Each video clip of the plurality of video clips is acquired by a corresponding image-capture device of the plurality of image-capture devices in a moving state. From the plurality of image-capture devices, a set of sensor data is acquired. Such data corresponds to the plurality of image-capture devices and is associated with a movement of the plurality of image-capture devices. Thereafter, relative offsets between the set of sensor data are determined by using cross-correlation, and matching frames in each of the plurality of video clips are further determined based on the relative offsets.
Type: Application
Filed: June 15, 2021
Publication date: April 14, 2022
Inventors: NIKOLAOS GEORGIS, JAMES KUCH, KIYOHARU SASSA
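The cross-correlation step, finding the lag at which two motion-sensor traces best align, is straightforward to sketch with NumPy. This is a generic illustration under the assumption of uniformly sampled, per-frame sensor traces, not the specific procedure in the application.

```python
# Sketch of the offset-estimation idea: cross-correlate motion-sensor traces (e.g.
# per-frame gyroscope magnitude) from two cameras and take the lag that maximizes
# the correlation as their relative frame offset. Uniform sampling is assumed.
import numpy as np

def relative_offset(sensor_a: np.ndarray, sensor_b: np.ndarray) -> int:
    a = (sensor_a - sensor_a.mean()) / (sensor_a.std() + 1e-9)
    b = (sensor_b - sensor_b.mean()) / (sensor_b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")          # lags from -(len(b)-1) to +(len(a)-1)
    return int(np.argmax(corr) - (len(b) - 1))      # positive: sensor_a lags sensor_b

rng = np.random.default_rng(0)
motion = rng.normal(size=300)
delayed = np.roll(motion, 7)                        # pretend the second camera's trace is 7 samples late
print(relative_offset(delayed, motion))             # ~7: first trace lags the second by 7 samples
```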
-
Publication number: 20220116584
Abstract: Systems, methods, and computer-readable media are disclosed for improved camera color calibration. An example method may involve capturing a first wavelength emitted by a first type of traffic light. The example method may also involve determining, based on the first wavelength, a first color value associated with the wavelength emitted by the first type of traffic light. The example method may also involve capturing, by a first camera, a first image, video, or real-time feed of a first portion of a test target, the first portion of the test target including a first light color that is based on the first color value. The example method may also involve determining, based on the first image, video, or real-time feed of the first portion of a test target, a second color value output by the camera. The example method may also involve determining, based on a comparison between the first color value and the second color value, that a difference exists between the first color value and the second color value.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Applicant: Argo AI, LLC
Inventors: Christopher N. St. John, Koji L. Gardiner, Ravi Babu Basavaraj, Bowei Zhang
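The final comparison step, checking whether the color value the camera reports differs from the expected value, can be pictured as a simple distance test. The reference color, the Euclidean RGB metric, and the tolerance below are made-up choices for illustration, not the comparison defined in the application.

```python
# Toy sketch of the comparison step: given the color value expected for a traffic-light
# wavelength and the value the camera reports for the test target, decide whether the
# difference exceeds a calibration tolerance. All numbers here are illustrative.
import math

def color_difference(expected_rgb, measured_rgb) -> float:
    return math.dist(expected_rgb, measured_rgb)   # simple Euclidean distance in RGB

expected = (255, 48, 32)          # assumed target color for a red traffic light
measured = (243, 61, 40)          # value reported by the camera under test
delta = color_difference(expected, measured)
print(delta, "needs correction" if delta > 10.0 else "within tolerance")
```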
-
Publication number: 20220116585
Abstract: Disclosed is an apparatus for testing a camera module, and the apparatus for testing the camera module according to the disclosure includes a socket section configured to settle the camera module thereon; a movable unit-pattern chart lens section comprising a housing, a light source unit provided inside the housing and emitting light toward the camera module, and a chart disposed below the light source unit inside the housing and formed with a unit pattern; a first actuator configured to actuate the movable unit-pattern chart lens section; a second actuator configured to actuate the socket section; and a test image capturer configured to obtain a test image from images captured while actuating the movable unit-pattern chart lens section or the socket section based on actuation of the first actuator or the second actuator.
Type: Application
Filed: July 29, 2021
Publication date: April 14, 2022
Applicant: ISMEDIA CO., LTD.
Inventors: Byoung Dae LEE, Hyunseok KIM, Chanyoung PARK, MinSeog CHOI