Patents Examined by Yong Kim
-
Patent number: 11019353
Abstract: A method of decoding JVET video includes receiving a bitstream and calculating a final planar prediction in planar mode to predict pixel values for a current coding block. The final planar prediction may rely on using unequal weights applied to each of a horizontal and vertical predictor, where such predictors may be generated by interpolating neighboring pixels for each predicted pixel within a coding block. The computation may be made more accurate by deriving a value for a bottom right neighboring pixel.
Type: Grant
Filed: December 20, 2019
Date of Patent: May 25, 2021
Assignee: ARRIS Enterprises LLC
Inventors: Krit Panusopone, Yue Yu, Seungwook Hong, Limin Wang
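As a rough illustration of the kind of unequal-weight planar prediction this abstract describes (not the claimed method; the weighting rule, the bottom-right derivation, and the helper names are illustrative assumptions), a minimal Python sketch:

```python
import numpy as np

def planar_predict_unequal(top, left, top_right, bottom_left, w, h):
    """Sketch of planar prediction with unequal weights (illustrative only).

    top:  array of w reconstructed pixels above the block
    left: array of h reconstructed pixels to the left of the block
    top_right, bottom_left: corner neighbours used to derive a bottom-right
    value, as the abstract suggests.
    """
    # Derive a bottom-right neighbour instead of reusing the corners directly.
    bottom_right = (top_right + bottom_left + 1) // 2

    pred = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            # Horizontal predictor: interpolate between the left neighbour
            # and a right-column value derived from top_right/bottom_right.
            right = top_right + (y + 1) * (bottom_right - top_right) // h
            hor = ((w - 1 - x) * left[y] + (x + 1) * right) // w
            # Vertical predictor: interpolate between the top neighbour
            # and a bottom-row value derived from bottom_left/bottom_right.
            bottom = bottom_left + (x + 1) * (bottom_right - bottom_left) // w
            ver = ((h - 1 - y) * top[x] + (y + 1) * bottom) // h
            # Unequal, position-dependent weights instead of an equal split.
            w_h, w_v = y + 1, x + 1          # hypothetical weighting rule
            pred[y, x] = (w_h * hor + w_v * ver) // (w_h + w_v)
    return pred
```

With w_h == w_v this collapses to a plain average of the two predictors; the unequal weights simply bias each pixel toward whichever predictor its position suggests is more reliable.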
-
Patent number: 11019332
Abstract: A video coder selects a set of wide-angle intra prediction directions based on a size of a luma block of a picture having a YUV 4:2:2 chroma sampling format. Additionally, the video coder determines an intra prediction direction for the luma block. The intra prediction direction for the luma block is in the set of wide-angle intra prediction directions. The video coder also determines an intra prediction direction for a chroma block. The luma block is collocated in the picture with the chroma block. The chroma block has a different width/height ratio than the luma block. The intra prediction direction for the chroma block is guaranteed to be the same as the intra prediction direction for the luma block. The video coder uses the intra prediction directions for the luma and chroma blocks to generate prediction blocks for the luma and chroma blocks, respectively.
Type: Grant
Filed: March 25, 2020
Date of Patent: May 25, 2021
Assignee: Qualcomm Incorporated
Inventors: Hongtao Wang, Han Huang, Yu Han, Geert Van der Auwera, Wei-Jung Chien, Marta Karczewicz
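A loose sketch of how a wide-angle mode set might be chosen from a block's aspect ratio, in the spirit of the selection step above; the mode numbering, the counts, and the mapping rule are assumptions, not the design claimed here:

```python
def wide_angle_replacements(width, height):
    """Pick which conventional angular modes get remapped to wide angles,
    based purely on the width/height ratio (illustrative rule only)."""
    if width == height:
        return []                                 # square blocks: no wide angles
    ratio = max(width, height) // min(width, height)
    extra = {2: 6, 4: 10, 8: 12, 16: 14}.get(ratio, 14)
    if width > height:
        # Wide blocks: near-diagonal low modes are replaced by modes beyond 66.
        return list(range(2, 2 + extra))
    # Tall blocks: near-diagonal high modes are replaced by modes below 2.
    return list(range(67 - extra, 67))
```

Because a 4:2:2 chroma block has a different width/height ratio than its collocated luma block, applying such a rule independently to each block would yield different wide-angle sets, which is presumably why the abstract emphasizes keeping the chroma direction tied to the luma direction.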
-
Patent number: 10992933
Abstract: Provided is a video decoding method including obtaining a bitstream including residual data about a residual block of a current block, determining a plurality of prediction directions with respect to the current block, determining a plurality of reference samples included in a neighboring region of the current block in a current image, by using the plurality of prediction directions that are determined, generating a prediction block of the current block by using the plurality of reference samples, obtaining a residual block of the current block based on the residual data about the residual block of the current block, and reconstructing the current block by using the prediction block of the current block and the residual block of the current block.
Type: Grant
Filed: February 16, 2017
Date of Patent: April 27, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: In-kwon Choi, Min-woo Park, Bo-ra Jin
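The reconstruction loop the abstract walks through can be pictured with a short sketch; how the several directional predictions are combined is not stated in the abstract, so the averaging step and the predict_fn callback are assumptions:

```python
import numpy as np

def reconstruct_block(residual, ref_samples, directions, predict_fn):
    """Blend predictions from several intra directions, add the decoded
    residual, and clip to the sample range (8-bit assumed)."""
    preds = [predict_fn(ref_samples, d) for d in directions]   # one block per direction
    prediction = np.rint(np.mean(preds, axis=0)).astype(np.int32)
    return np.clip(prediction + residual, 0, 255).astype(np.uint8)
```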
-
Patent number: 10988083
Abstract: A visual field support image generation device for generating a visual field support image of a vehicle has a camera that captures an image from the vehicle and a processing section. The processing section converts the image captured by the camera to generate the visual field support image. The image is converted by compressing the captured image so that a compression ratio of the captured image in a horizontal direction becomes higher than a compression ratio of the captured image in a vertical direction by using a depth vanishing point included in the captured image as the center.
Type: Grant
Filed: March 24, 2020
Date of Patent: April 27, 2021
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Masayoshi Michiguchi, Yoshimasa Okabe
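A minimal sketch of an anisotropic warp centered on a depth vanishing point, along the lines the abstract describes; the use of OpenCV's remap and the particular scale factors (kx < ky, so horizontal compression is stronger) are assumptions:

```python
import numpy as np
import cv2

def compress_about_vanishing_point(img, vp, kx=0.5, ky=0.8):
    """Scale the image toward a depth vanishing point vp=(x, y), compressing
    more in the horizontal direction (kx < ky). Parameters are illustrative."""
    h, w = img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Each output pixel samples a source pixel further from the vanishing
    # point, so the result appears compressed toward vp.
    map_x = vp[0] + (xs - vp[0]) / kx
    map_y = vp[1] + (ys - vp[1]) / ky
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```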
-
Patent number: 10972737
Abstract: A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications are provided, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depths as the distance between a given layer and the plane of the display increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, reconstructed, and merged at the light field display in real time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.
Type: Grant
Filed: August 15, 2019
Date of Patent: April 6, 2021
Inventors: Matthew Hamilton, Chuck Rumbolt, Donovan Benoit, Matthew Troke, Robert Lockyer
-
Patent number: 10972742
Abstract: The present invention is related to video coding and decoding, in particular HEVC RExt, which defines a palette coding mode dedicated to the coding of screen contents. In improved palette coding modes according to the invention, when building a palette, each time a new pixel is added to the class a palette entry defines, the palette entry is modified to take the mean value of the pixels belonging to that class. In other improved palette coding modes, a built palette is post-processed to substitute a palette entry with a close entry of a palette predictor PRED. In yet other embodiments, palette coding modes having different threshold values to drive the building of their respective palettes are successively tested to use the best one in terms of a rate-distortion criterion.
Type: Grant
Filed: December 18, 2014
Date of Patent: April 6, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Christophe Gisquet, Patrice Onno, Guillaume Laroche
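The running-mean palette update mentioned here ("modified to take the mean value of the pixels belonging to that class") can be sketched as below; the matching rule and the threshold value are assumptions:

```python
def add_pixel_to_palette(palette, counts, pixel, threshold=32):
    """Assign a pixel to an existing palette class (or open a new one) and
    keep each entry equal to the mean of the pixels in its class.
    palette: list of (R, G, B) entry means; counts: pixels per entry."""
    for i, entry in enumerate(palette):
        if max(abs(c - p) for c, p in zip(entry, pixel)) <= threshold:
            n = counts[i]
            # Update the entry to the running mean of its class.
            palette[i] = tuple((c * n + p) // (n + 1) for c, p in zip(entry, pixel))
            counts[i] = n + 1
            return i
    palette.append(tuple(pixel))
    counts.append(1)
    return len(palette) - 1
```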
-
Patent number: 10970850
Abstract: A method and device for recognizing a motion of an object, the method including receiving event signals from a vision sensor configured to sense the motion; storing, in an event map, first time information indicating a time at which intensity of light corresponding to the event signals changes; generating an image based on second time information corresponding to a predetermined time range among the first time information; and recognizing the motion of the object based on the image.
Type: Grant
Filed: July 8, 2019
Date of Patent: April 6, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kyoobin Lee, Keun Joo Park, Kyoungseok Pyun, Eric Hyunsurk Ryu
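A small sketch of the "image from a time window of the event map" step; the timestamp encoding and the brightness mapping are assumptions:

```python
import numpy as np

def image_from_event_map(event_map, t_now, window):
    """Build a grayscale image from an event map of last-change timestamps,
    keeping only events inside a recent time window.
    event_map: 2-D array of timestamps (0 where no event has occurred)."""
    recent = (event_map >= t_now - window) & (event_map <= t_now)
    # Newer events map to brighter pixels; out-of-window pixels stay black.
    age = np.where(recent, t_now - event_map, window)
    return ((1.0 - age / window) * 255).astype(np.uint8)
```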
-
Patent number: 10958891
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. After the MVIDMR of the object is generated, a tag can be placed at a location on the object in the MVIDMR. The locations of the tag in the frames of the MVIDMR can vary from frame to frame as the view of the object changes. When the tag is selected, media content can be output which shows details of the object at the location where the tag is placed. In one embodiment, the object can be a car and tags can be used to link to media content showing details of the car at the locations where the tags are placed.
Type: Grant
Filed: June 25, 2019
Date of Patent: March 23, 2021
Assignee: Fyusion, Inc.
Inventors: Radu Bogdan Rusu, Dave Morrison, Keith Martin, Stephen David Miller, Pantelis Kalogiros, Mike Penz, Martin Markus Hubert Wawro, Bojana Dumeljic, Jai Chaudhry, Luke Parham, Julius Santiago, Stefan Johannes Josef Holzer
-
Patent number: 10945000
Abstract: There is provided a file generation apparatus and a file generation method, as well as a reproduction apparatus and a reproduction method, by which a file in which quality information of a depth-related image at least including a depth image is efficiently stored can be managed. An MPD file generation unit generates an MPD file. The MPD file is a management file that manages a file in which quality information representative of the quality of a plurality of depth-related images including at least a depth image is disposed in a form divided into a plurality of tracks or subsamples, and the management file describes a correspondence relationship between the respective tracks and an associationID that specifies the depth-related images. The present disclosure can be applied when a segment file and an MPD file of a video content are distributed, for example, in a method that complies with MPEG-DASH.
Type: Grant
Filed: February 8, 2017
Date of Patent: March 9, 2021
Assignee: SONY CORPORATION
Inventors: Mitsuru Katsumata, Mitsuhiro Hirabayashi, Toshiya Hamada
-
Patent number: 10944963
Abstract: A method and apparatus for decoding JVET video, including receiving a bitstream, and parsing said bitstream to identify a syntax element indicating an intra direction mode to use for generating at least one predictor. The intra direction mode is a first intra direction mode in a plurality of intra direction modes that includes at least one weighted intra direction mode that corresponds to a non-weighted intra direction mode. The syntax element may identify whether to use a non-weighted or weighted intra direction mode to generate the at least one predictor. Thus, the coding unit may be coded in accordance with the at least one generated predictor associated with the selected intra direction mode.
Type: Grant
Filed: May 25, 2017
Date of Patent: March 9, 2021
Assignee: ARRIS Enterprises LLC
Inventors: Yue Yu, Krit Panusopone, Limin Wang
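The signalling described here could look roughly like the sketch below; the reader API, the bit widths, and the element order are assumptions, not actual JVET syntax:

```python
def select_intra_mode(reader, num_modes=67):
    """Parse a base intra direction plus a one-bit flag indicating whether
    the weighted variant of that direction is used (illustrative only)."""
    base_mode = reader.read_bits(7) % num_modes      # base (non-weighted) direction
    use_weighted = reader.read_bits(1) == 1           # weighted counterpart flag
    return base_mode, use_weighted
```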
-
Patent number: 10935367
Abstract: Various embodiments provide for a method for calibrating a dimensioner. An example method includes receiving two or more previously captured images of a common field of view of the dimensioner, and identifying at least one static object in the common field of view. The method further includes determining one or more reference dimensions of the at least one static object. Thereafter, the method includes detecting an event on the dimensioner, and when an event is detected, determining one or more updated dimensions of the at least one static object. The method includes comparing the one or more updated dimensions to the one or more reference dimensions to determine whether the one or more updated dimensions satisfy a predefined dimension error range. When the one or more updated dimensions fail to satisfy the predefined dimension error range, the method includes modifying one or more parameters associated with the dimensioner.
Type: Grant
Filed: January 21, 2020
Date of Patent: March 2, 2021
Assignee: Hand Held Products, Inc.
Inventor: Erik Van Horn
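The comparison step (do the updated dimensions still satisfy a predefined dimension error range?) reduces to a few lines; the relative-tolerance form of the error range is an assumption:

```python
def check_calibration(reference_dims, updated_dims, tolerance=0.02):
    """Return False when any updated dimension of the static object drifts
    outside the allowed error range relative to its reference dimension."""
    for ref, upd in zip(reference_dims, updated_dims):
        if abs(upd - ref) > tolerance * ref:
            return False   # outside the predefined dimension error range
    return True
```

When the check fails after a detected event, the calibration routine would go on to modify the dimensioner's parameters, as the abstract describes.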
-
Patent number: 10931899
Abstract: Systems and methods for computationally simulating and optimizing shearography systems are provided. The systems and methods for simulation and optimization avoid ray tracing and, instead, implement a phase screen approach to image computation. The systems and methods include physics-based surface texture and surface motion simulations, the application of phase screens to computer-generated simulations of shearographic remote sensing, and the inclusion of de-polarization due to multiple scattering and birefringence at a surface being imaged. The systems and methods further greatly optimize diffraction computations by treating an optical transfer function (OTF) of an arbitrary aperture as a sum of separable OTFs.
Type: Grant
Filed: December 17, 2019
Date of Patent: February 23, 2021
Assignee: BAE Systems Information and Electronic Systems Integration Inc.
Inventor: Michael J. DeWeert
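Treating a 2-D OTF as a sum of separable terms is, in generic numerical terms, a low-rank decomposition; the SVD-based sketch below illustrates the idea but is not the patented algorithm:

```python
import numpy as np

def separable_otf_terms(otf, rank):
    """Approximate a 2-D OTF as a sum of separable (outer-product) terms
    via SVD, so each term can be applied as two 1-D operations."""
    u, s, vt = np.linalg.svd(otf)
    terms = [(np.sqrt(s[k]) * u[:, k], np.sqrt(s[k]) * vt[k, :]) for k in range(rank)]
    return terms  # otf ≈ sum of np.outer(col, row) over the returned terms
```

Each separable term can then be evaluated as two 1-D operations instead of one full 2-D operation, which is the usual source of savings for separable filters.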
-
Patent number: 10922810
Abstract: An automated visual inspection system for detecting the presence of particulate matter includes an empty, flexible container, a light source, a detector, and an image processor. The light source is configured to transmit light through the container towards the detector, and the detector is configured to receive the light and generate image data. The image processor is configured to analyze the image data, determine whether the empty, flexible container is defective, and generate a rejection signal if the empty, flexible container is defective.
Type: Grant
Filed: August 24, 2018
Date of Patent: February 16, 2021
Assignees: BAXTER INTERNATIONAL INC., BAXTER HEALTHCARE SA
Inventors: Frank Dudzik, William Hurst, Neal Zupec
-
Patent number: 10904570
Abstract: A video encoding method is provided, which includes steps of acquiring a synchronized multi-view video; generating spatial layout information of the synchronized multi-view video; encoding the synchronized multi-view video; and signaling the spatial layout information corresponding to the encoded multi-view video.
Type: Grant
Filed: July 4, 2017
Date of Patent: January 26, 2021
Assignee: KAONMEDIA CO., LTD.
Inventors: Jeong Yun Lim, Hoa Sub Lim
-
Patent number: 10904537
Abstract: Provided is a method of decoding an image, the method including: determining at least one coding unit for splitting an image, based on block shape information of a current coding unit; determining at least one transformation unit, based on a shape of the current coding unit included in the at least one coding unit; and decoding the image by performing inverse transformation based on the at least one transformation unit, wherein the block shape information indicates whether the current coding unit is a square shape or a non-square shape. Also, provided is an encoding method corresponding to the decoding method. In addition, provided is an encoding apparatus or decoding apparatus capable of performing the encoding or decoding method.
Type: Grant
Filed: October 10, 2016
Date of Patent: January 26, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jung-hye Min, Min-woo Park, Bo-ra Jin, Chan-yul Kim
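A toy illustration of shape-driven splitting: square units quad-split, non-square units binary-split along their longer side. The split rule itself is an assumption for illustration; the abstract only says the block shape information distinguishes square from non-square units:

```python
def split_coding_unit(x, y, width, height, min_size=4):
    """Recursively split a coding unit; square blocks quad-split,
    non-square blocks binary-split along the longer side."""
    if width <= min_size and height <= min_size:
        return [(x, y, width, height)]
    units = []
    if width == height:                              # square: quad split
        hw, hh = width // 2, height // 2
        for dy in (0, hh):
            for dx in (0, hw):
                units += split_coding_unit(x + dx, y + dy, hw, hh, min_size)
    elif width > height:                             # wide: left and right halves
        hw = width // 2
        units += split_coding_unit(x, y, hw, height, min_size)
        units += split_coding_unit(x + hw, y, hw, height, min_size)
    else:                                            # tall: top and bottom halves
        hh = height // 2
        units += split_coding_unit(x, y, width, hh, min_size)
        units += split_coding_unit(x, y + hh, width, hh, min_size)
    return units
```

For example, split_coding_unit(0, 0, 16, 8) first halves the wide block into two 8x8 units, each of which quad-splits, so the call returns eight 4x4 leaves.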
-
Patent number: 10887573
Abstract: Described here are systems, devices, and methods for converting a two-dimensional video sequence into first and second video sequences for display at first and second display areas of a single display. In some embodiments, a two-dimensional video image sequence is received at a mobile device or wearable headset device. The two-dimensional video image sequence may be split into first and second video image sequences such that a first video image sequence is output to the first display area and a second video image sequence different from the first video image sequence is output to the second display area. The first and second video image sequences may be created from the two-dimensional video image sequence.
Type: Grant
Filed: July 24, 2019
Date of Patent: January 5, 2021
Inventor: Erin Sanaa Kenrick
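One plausible, deliberately simple way to derive the two sequences is to halve each frame; the abstract does not commit to this particular mapping, so treat it as a sketch:

```python
def split_video_sequence(frames):
    """Derive two per-display-area sequences from one 2-D sequence by
    splitting each frame into left and right halves (illustrative only)."""
    first_seq, second_seq = [], []
    for frame in frames:                  # frame: H x W x C numpy array
        w = frame.shape[1] // 2
        first_seq.append(frame[:, :w])    # first display area
        second_seq.append(frame[:, w:])   # second display area
    return first_seq, second_seq
```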
-
Patent number: 10878675
Abstract: Dual-camera audio/video (A/V) recording and communication devices in accordance with various embodiments of the present disclosure are provided. In one embodiment, a dual-camera A/V recording and communication device comprises a first camera, a second camera configured to capture image data of a drop-off zone, a communication module, and a processing module, the processing module comprising: a processor, and a zone monitoring application, wherein the zone monitoring application configures the processor to: determine when a parcel has been placed in the drop-off zone using the image data of the drop-off zone captured by the second camera, determine when the parcel has been removed from the drop-off zone using the image data of the drop-off zone captured by the second camera, and transmit a notification to a user's client device, using the communication module, when the parcel has been placed in the drop-off zone.
Type: Grant
Filed: September 21, 2017
Date of Patent: December 29, 2020
Assignee: Amazon Technologies, Inc.
Inventors: James Siminoff, Stephen Grant Russell
-
Patent number: 10878594
Abstract: Embodiments relate to a head-mounted display including an eye tracking system. The eye tracking system includes a source assembly, a camera, and a controller. In some embodiments, the source assembly includes a plurality of sources that are positioned to illuminate at least a peripheral area of a cornea of an eye. In some embodiments, the sources are masked to be a particular shape. The peripheral region is a location on the eye where the cornea transitions to the sclera. In some embodiments, the camera can detect a polarization of the reflected light, and uses polarization to disambiguate possible reflection locations. Similarly, time of flight may also be used to disambiguate potential reflection locations. The controller uses information from the detector to track positions of the user's eyes.
Type: Grant
Filed: August 6, 2019
Date of Patent: December 29, 2020
Assignee: Facebook Technologies, LLC
Inventors: Robert Dale Cavin, Alexander Jobe Fix, Andrew John Ouderkirk
-
Patent number: 10866318
Abstract: A platform-based observation system that is in communication with a substrate. The system is configured to identify a condition in, on, or within the substrate. The system has components selected from the group consisting of: inputs, processing, and outputs. The inputs may include a visual scanning sensor, an infrared scanning sensor, at least one GPS receiver, and a means for image collection. Processing includes the processing of measurements and image collection data to define conditions and organizing them according to file formatting associated with geographic systems. The outputs include recording the conditions and outputting the conditions on a monitor.
Type: Grant
Filed: October 11, 2018
Date of Patent: December 15, 2020
Assignee: GSE TECHNOLOGIES, LLC
Inventors: Glen Raymond Simula, Gary Bryan Howard
-
Patent number: 10856007
Abstract: Techniques are described related to output and removal of decoded pictures from a decoded picture buffer (DPB). The example techniques may remove a decoded picture from the DPB prior to coding a current picture. For instance, the example techniques may remove the decoded picture if that decoded picture is not identified in the reference picture set of the current picture.
Type: Grant
Filed: December 3, 2019
Date of Patent: December 1, 2020
Assignee: Velos Media, LLC
Inventors: Ye-Kui Wang, Ying Chen
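The removal rule reads almost directly as a filter over the DPB; the "still needed for output" condition in the sketch is an extra assumption, since the abstract only mentions the reference picture set:

```python
def remove_unreferenced_pictures(dpb, reference_picture_set, needed_for_output=()):
    """Drop decoded pictures that are not in the current picture's reference
    picture set (and, as an assumption here, not still awaiting output)."""
    return [pic for pic in dpb
            if pic in reference_picture_set or pic in needed_for_output]
```

Applied just before coding the current picture, for example dpb = remove_unreferenced_pictures(dpb, current_rps), this mirrors the "remove prior to coding" ordering the abstract emphasizes.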