Color Aspects (epo) Patents (Class 348/E13.019)
-
Patent number: 11915421
Abstract: Implementations are described herein for auditing performance of large-scale tasks. In various implementations, one or more ground-level vision sensors may capture a first set of one or more images that depict an agricultural plot prior to an agricultural task being performed in the agricultural plot, and a second set of one or more images that depict the agricultural plot subsequent to the agricultural task being performed. The first and second sets of images may be processed in situ using edge computing device(s) based on a machine learning model to generate respective pluralities of pre-task and post-task inferences about the agricultural plot. Auditing the performance of the agricultural task may include comparing the pre-task inferences to the post-task inferences to generate operational metric(s) about the performance of the agricultural task in the agricultural plot. The operational metric(s) may be presented at one or more output devices.
Type: Grant
Filed: September 7, 2021
Date of Patent: February 27, 2024
Assignee: MINERAL EARTH SCIENCES LLC
Inventors: Zhiqiang Yuan, Elliott Grant
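The pre-task/post-task comparison step described in this abstract can be sketched in a few lines. This is a minimal illustration only: the dictionary-of-counts inference format, the function name, and the removal-rate metric are all assumptions, not details from the patent.

```python
# Hypothetical sketch of comparing pre-task and post-task inferences to
# produce an operational metric; the inference format is an assumption.

def operational_metric(pre_task_inferences, post_task_inferences):
    """Compare per-plot inference counts before and after a task.

    Each argument maps a detected class (e.g. "weed") to a count
    inferred from the captured images of the agricultural plot.
    """
    metrics = {}
    for cls, pre_count in pre_task_inferences.items():
        post_count = post_task_inferences.get(cls, 0)
        if pre_count == 0:
            metrics[cls] = 0.0
        else:
            # Fraction of pre-task detections no longer present post-task.
            metrics[cls] = (pre_count - post_count) / pre_count
    return metrics

# e.g. a weeding task after which 30 of 120 detected weeds remain
print(operational_metric({"weed": 120}, {"weed": 30}))  # {'weed': 0.75}
```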
-
Patent number: 11904246
Abstract: Methods and systems are provided for facilitating intra-game communication in video game environments featuring first-person or third-person perspectives by generating an on-screen graphic that includes the communication and a pointer toward the location of another user within the video game environment.
Type: Grant
Filed: October 14, 2020
Date of Patent: February 20, 2024
Assignee: Rovi Guides, Inc.
Inventor: Curtis Sullivan
-
Patent number: 11882267
Abstract: A spatial direction of a wearable device that represents an actual viewing direction of the wearable device is determined. The spatial direction of the wearable device is used to select, from a multi-view image comprising single-view images, a set of single-view images. A display image is caused to be rendered on a device display of the wearable device. The display image represents a single-view image as viewed from the actual viewing direction of the wearable device. The display image is constructed based on the spatial direction of the wearable device and the set of single-view images.
Type: Grant
Filed: April 10, 2018
Date of Patent: January 23, 2024
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen
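The view-selection step above can be illustrated with a toy sketch. The yaw-only (single-angle) representation of the device's spatial direction and the function name are simplifying assumptions; the patent's multi-view selection is not limited to one rotation axis.

```python
import math

# Illustrative sketch: pick the stored single-view image whose capture
# direction is closest to the wearable device's current viewing direction.
# Representing directions as yaw angles in degrees is an assumption.

def select_view(device_yaw_deg, view_yaws_deg):
    """Return the index of the view closest to the device direction."""
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)  # wrap-around distance on the circle
    return min(range(len(view_yaws_deg)),
               key=lambda i: angular_distance(device_yaw_deg, view_yaws_deg[i]))

# Views captured every 45 degrees; device looking toward 100 degrees.
print(select_view(100.0, [0, 45, 90, 135, 180, 225, 270, 315]))  # 2
```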
-
Patent number: 11538134
Abstract: An information generation method includes obtaining a first image projected on a projection target by a projector, a second image taken from a first imaging position, and a third image taken from a second imaging position; identifying a first correspondence relationship between a first coordinate system and a second coordinate system based on the first and second images; identifying a second correspondence relationship between the first coordinate system and a third coordinate system based on the first and third images; having the user designate a display area in the second coordinate system; and generating transformation information for transforming the designated display area into a display area in the third coordinate system based on the first correspondence relationship and the second correspondence relationship.
Type: Grant
Filed: September 10, 2021
Date of Patent: December 27, 2022
Assignee: SEIKO EPSON CORPORATION
Inventor: Ippei Kurota
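The transformation chain described above (second coordinate system to third, via the shared first/projector coordinate system) can be sketched by modeling each correspondence relationship as a planar homography. That modeling choice, and all names below, are assumptions for illustration; the patent does not fix the form of the correspondence.

```python
import numpy as np

# Sketch: H1 maps projector coordinates to the first camera image,
# H2 maps projector coordinates to the second camera image. The
# second-to-third-system transform is then H2 composed with inv(H1).

def transform_area(points_cam1, H1, H2):
    """Map points designated in the first camera's coordinate system
    into the second camera's coordinate system."""
    H = H2 @ np.linalg.inv(H1)              # first camera -> second camera
    pts = np.asarray(points_cam1, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (H @ homog.T).T
    return mapped[:, :2] / mapped[:, 2:3]   # back to inhomogeneous pixels

# Toy example: the first camera sees projector space scaled by 2,
# the second camera sees it translated by (10, 5).
H1 = np.diag([2.0, 2.0, 1.0])
H2 = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 5.0], [0.0, 0.0, 1.0]])
print(transform_area([[100.0, 40.0]], H1, H2))  # [[60. 25.]]
```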
-
Patent number: 11510397
Abstract: A management apparatus capable of capturing the position of a specific individual satisfying a predetermined condition is provided. A management apparatus according to an embodiment of the present technology includes a control unit. The control unit extracts, on the basis of first information that is generated by a sensor device worn by an individual and is related to the individual's living body, a specific individual satisfying a predetermined condition, and generates, on the basis of position information related to the position of the specific individual, search information for causing a mobile object to move to the position of the specific individual.
Type: Grant
Filed: June 4, 2020
Date of Patent: November 29, 2022
Assignee: Sony Corporation
Inventors: Masakazu Yajima, Hideo Niikura
-
Patent number: 11436787
Abstract: An image rendering method for a computer product coupled to a display apparatus may include rendering an entire display region of the display apparatus with a first rendering mode to generate a first rendering mode sample image, determining a target region in the entire display region, rendering the target region with a second rendering mode to generate a second rendering mode sample image, and transmitting data of the first rendering mode sample image and the second rendering mode sample image. The second rendering mode comprises at least one image rendering feature whose value is higher than that of the first rendering mode.
Type: Grant
Filed: September 6, 2018
Date of Patent: September 6, 2022
Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Xuefeng Wang, Yukun Sun, Bin Zhao, Lixin Wang, Xi Li, Jianwen Suo, Wenyu Li, Qingwen Fan, Jinbao Peng, Yuanjie Lu, Yali Liu, Chenru Wang, Jiankang Sun, Hao Zhang, Lili Chen, Jinghua Miao
-
Publication number: 20140028794
Abstract: Generally, this disclosure provides methods and systems for real-time video communication with three-dimensional perception image rendering through generated parallax effects based on identification, segmentation and tracking of foreground and background layers of an image. The system may include an image segmentation module configured to segment a current local video frame into a local foreground layer and a local background layer and to generate a local foreground mask based on an estimated boundary between the local foreground layer and the local background layer; a face tracking module configured to track a position of a local user's face; a background layer estimation module configured to estimate a remote background layer; and an image rendering module configured to render a 3D perception image based on the estimated remote background layer, the current remote video frame and the remote foreground mask.
Type: Application
Filed: July 30, 2012
Publication date: January 30, 2014
Inventors: Yi Wu, Wei Sun, Michael M. Chu, Ermal Dreshaj, Philip Muse, Lucas B. Ainsworth, Garth Shoemaker, Igor V. Kozintsev
-
Publication number: 20130201291
Abstract: Head pose tracking technique embodiments are presented that use a group of sensors configured to be disposed on a user's head. This group of sensors includes a depth sensor apparatus used to identify the three-dimensional locations of features within a scene, and at least one other type of sensor. Data output by each sensor in the group is periodically input, and each time the data is input it is used to compute a transformation matrix relative to the head pose location and orientation established when the first sensor data was input. This transformation matrix is then applied to that previously determined head pose location and orientation to identify the current head pose location and orientation.
Type: Application
Filed: February 8, 2012
Publication date: August 8, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Zicheng Liu, Zhengyou Zhang, Zhenning Li
-
Patent number: 8462196
Abstract: Provided are methods and apparatuses for generating a stereoscopic image format and reconstructing stereoscopic images from the stereoscopic image format. The method of generating a stereoscopic image format for compression or transmission of stereoscopic images includes receiving a base view image and an additional view image; determining block pixel information for the stereoscopic image format for each block position, using first block pixel information of the base view image and second block pixel information of the additional view image, based on blocks obtained by dividing the base view image and the additional view image; and disposing the determined block pixel information in each block position, thereby generating a combined image including pixel information of the base view image and pixel information of the additional view image.
Type: Grant
Filed: November 30, 2007
Date of Patent: June 11, 2013
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yong-tae Kim, Jae-seung Kim, Moon-seok Jang
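The block-wise combination described in this abstract can be sketched with a toy example. The checkerboard block layout below is an illustrative assumption; the claim only requires some per-block-position choice between the two views' pixel information.

```python
import numpy as np

# Toy sketch: build a combined image by choosing, at each block position,
# pixel information from either the base view or the additional view.

def combine_views(base, additional, block=8):
    """Interleave base-view and additional-view blocks checkerboard-style."""
    assert base.shape == additional.shape
    out = base.copy()
    h, w = base.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # Odd checkerboard positions take the additional view's block.
            if ((by // block) + (bx // block)) % 2 == 1:
                out[by:by + block, bx:bx + block] = \
                    additional[by:by + block, bx:bx + block]
    return out

base = np.zeros((16, 16), dtype=np.uint8)       # base view: all 0
extra = np.full((16, 16), 255, dtype=np.uint8)  # additional view: all 255
combined = combine_views(base, extra)
print(combined[0, 0], combined[0, 8])  # 0 255
```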
-
Publication number: 20130002827
Abstract: An apparatus and method are provided for capturing a light field geometry using a multi-view camera. The apparatus may refine the light field geometry, which varies depending on light, within images acquired from a plurality of cameras with different viewpoints, and may restore a three-dimensional (3D) image.
Type: Application
Filed: May 30, 2012
Publication date: January 3, 2013
Applicant: Samsung Electronics Co., LTD.
Inventors: Seung Kyu Lee, Do Kyoon Kim, Hyun Jung Shim
-
Publication number: 20120293635
Abstract: A three-dimensional pose of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector of the pose of the head relative to a reference pose are determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided along with corresponding depth data to an Extended Kalman filter with states including a rotation matrix and a translation vector associated with the reference pose for the head, as well as a current orientation and a current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.
Type: Application
Filed: April 25, 2012
Publication date: November 22, 2012
Applicant: QUALCOMM Incorporated
Inventors: Piyush Sharma, Ashwin Swaminathan, Ramin Rezaiifar, Qi Xue
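The core quantity in this abstract, a rotation matrix and translation vector relating the current head pose to a reference pose, can be illustrated without the filtering machinery. The patent feeds 3D feature points into an Extended Kalman filter; the sketch below deliberately substitutes a simpler single-frame method, the Kabsch algorithm, to recover R and t from one set of 3D point correspondences. It is not the patented method, only a demonstration of the pose representation.

```python
import numpy as np

# Recover R, t with cur ≈ R @ ref + t from 3D feature-point
# correspondences (Nx3 arrays) using the Kabsch algorithm.

def rigid_pose(ref_pts, cur_pts):
    ref = np.asarray(ref_pts, float)
    cur = np.asarray(cur_pts, float)
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Synthetic head features rotated 90 degrees about the vertical (y)
# axis relative to the reference pose, then shifted slightly.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
t_true = np.array([0.02, 0.0, -0.01])
ref = np.array([[0.1, 0.0, 0.5], [-0.1, 0.05, 0.5],
                [0.0, -0.1, 0.45], [0.05, 0.1, 0.6]])
cur = ref @ R_true.T + t_true
R, t = rigid_pose(ref, cur)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```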
-
Publication number: 20120281074
Abstract: A three-dimensional image processing method and a three-dimensional image processing circuit using the method are provided. The method is configured for processing N source images, where N is a natural number larger than or equal to two. Each of the source images corresponds to a visual angle, and each of the source images comprises image data with three primary colors. The image data of each of the source images are arranged in an array according to a predetermined color sequence. In the method, six parameters are first provided, wherein each of the six parameters is configured for defining a basic data-arrangement variation. Then, the image data with the three primary colors of the N source images are obtained according to the six parameters, so as to form a three-dimensional image.
Type: Application
Filed: January 11, 2012
Publication date: November 8, 2012
Applicant: AU OPTRONICS CORP.
Inventors: Shen-Sian SYU, Hung-Wei Tseng, Chun-Huai Li
-
Publication number: 20120249744
Abstract: An imaging module includes a matrix of detector elements formed on a single semiconductor substrate and configured to output electrical signals in response to optical radiation that is incident on the detector elements. A filter layer is disposed over the detector elements and includes multiple filter zones overlying different, respective, convex regions of the matrix and having different, respective passbands.
Type: Application
Filed: April 3, 2012
Publication date: October 4, 2012
Applicant: PRIMESENSE LTD.
Inventors: Benny Pesach, Erez Sali, Alexander Shpunt
-
Publication number: 20120133740
Abstract: A method for analysis of an object dyed with fluorescent coloring agents. Separately fluorescing visible molecules or nanoparticles are periodically formed in different parts of the object, and a laser excites their oscillation sufficiently for recording the non-overlapping images of the molecules or nanoparticles and for bleaching already-recorded fluorescent molecules. Tens of thousands of pictures of recorded individual molecule or nanoparticle images, in the form of stains having a diameter on the order of a fluorescent light wavelength multiplied by the microscope magnification, are processed by a computer to find the coordinates of the stain centers and to build the object image from millions of calculated stain center coordinates corresponding to the coordinates of the individual fluorescent molecules or nanoparticles. Two-dimensional and three-dimensional images are provided for proteins, nucleic acids and lipids with different coloring agents.
Type: Application
Filed: February 6, 2012
Publication date: May 31, 2012
Applicant: Stereonic International, Inc.
Inventors: Andrey Alexeevich KLIMOV, Dmitry Andreevich KLIMOV, Evgeniy Andreevich KLIMOV, Tatiana Vitalyevna KLIMOVA