Patents Examined by Richard A. Hansell, Jr.
-
Patent number: 10912357
Abstract: An intelligent shading umbrella includes a processor, a base assembly, a stem assembly coupled to the base assembly, and a central support assembly coupled to the stem assembly, the central support assembly including one or more arm support assemblies. The intelligent shading umbrella further comprises one or more blades coupled to the arm support assemblies, and an infrared receiver that receives signals from a remote device and communicates signals to control movement of the intelligent shading umbrella.
Type: Grant
Filed: August 2, 2016
Date of Patent: February 9, 2021
Assignee: Shadecraft, LLC
Inventor: Armen Sevada Gharabegian
-
Patent number: 10904532
Abstract: A method of entropy coding data samples includes: for each entropy coding group, determining a native prefix value indicative of the bit size of the suffixes in the group; for each entropy coding group, evaluating at least one prefix coding condition for the group in the current block and the corresponding group in a previous block of sample values; in response to determining that the at least one prefix coding condition is met, applying differential prefix coding to code a differential prefix value for the group in the current block to generate a prefix for the group in the current block; and in response to determining that the at least one prefix coding condition is not met, applying direct prefix coding to code the native prefix value for the group in the current block to generate the prefix for the group in the current block.
Type: Grant
Filed: June 26, 2019
Date of Patent: January 26, 2021
Inventor: Vijayaraghavan Thirumalai
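The prefix-selection logic described in the abstract above can be sketched as follows. This is an illustrative sketch only: the specific prefix coding condition (a small prefix delta between co-located groups) and the threshold are assumptions, not taken from the patent.

```python
def native_prefix(suffixes):
    """Bit size needed to represent every suffix value in the group."""
    return max(s.bit_length() for s in suffixes) if suffixes else 0

def code_group_prefix(curr_suffixes, prev_prefix, threshold=1):
    """Choose differential vs. direct prefix coding for one group.

    Returns (mode, coded_value, native_prefix_of_group). The condition
    used here -- the prefix changes only slightly between co-located
    groups -- is an assumed example of a prefix coding condition.
    """
    curr_prefix = native_prefix(curr_suffixes)
    delta = curr_prefix - prev_prefix
    if abs(delta) <= threshold:
        # Condition met: code the (small) difference instead.
        return ("differential", delta, curr_prefix)
    # Condition not met: code the native prefix value directly.
    return ("direct", curr_prefix, curr_prefix)
```

For example, a group of suffixes {5, 3, 7} needs 3 bits, so if the co-located group in the previous block also used a 3-bit prefix, the delta of 0 is coded differentially.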
-
Patent number: 10889958
Abstract: A display system for displaying image data of an environment of a machine includes an imaging device. The imaging device generates image data of the environment of the machine, stores the image data in an uncompressed form, compresses the image data, and generates signals indicative of the compressed image data. The display system further includes a display screen communicably coupled to the imaging device; the display screen receives the signals indicative of the compressed image data from the imaging device and displays the compressed image data. The imaging device and the display screen identify whether a region of interest exists in the image data, and the display screen displays the image data corresponding to the region of interest in the uncompressed form.
Type: Grant
Filed: June 6, 2017
Date of Patent: January 12, 2021
Assignee: Caterpillar Inc.
Inventors: Tod A. Oblak, Lawrence A. Mianzo, Jeffrey T. Stringer
-
Patent number: 10883922
Abstract: A system is described that can detect, track, and analyze a bubble of a secondary substance contained within a primary substance along a part of a fluid line. For example, the system can detect the presence of the bubble within the primary substance along the part of the fluid line, which can include assigning a digital signature to the bubble. In addition, the system can track the movement of the bubble in order to ensure that the bubble is accounted for only once as it passes through the part of the fluid line. Furthermore, the system can analyze the bubble, such as determining its direction of travel, speed of travel, volume, and size.
Type: Grant
Filed: September 17, 2019
Date of Patent: January 5, 2021
Assignee: CareFusion 303, Inc.
Inventors: Mark Bloom, Cathal Oscolai, David Brown
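The count-once tracking idea in the abstract above can be sketched with a signature-keyed record: a bubble is counted only the first time its signature appears, and later sightings update its kinematics instead. The signature scheme and the sensor interface are assumptions for illustration.

```python
def track_bubbles(detections, seen):
    """Count each bubble signature once; later sightings update kinematics.

    detections: list of (signature, position_mm, timestamp_s) tuples from
    the sensor (hypothetical format). seen: dict keyed by signature,
    persisted across calls. Returns signatures newly counted this call.
    """
    newly_counted = []
    for sig, pos, t in detections:
        if sig not in seen:
            # First sighting: count the bubble exactly once.
            seen[sig] = {"pos": pos, "t": t, "speed": None}
            newly_counted.append(sig)
        else:
            # Repeat sighting: derive speed, do not count again.
            rec = seen[sig]
            dt = t - rec["t"]
            if dt > 0:
                rec["speed"] = (pos - rec["pos"]) / dt
            rec["pos"], rec["t"] = pos, t
    return newly_counted
```

The sign of the derived speed doubles as the direction of travel along the fluid line.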
-
Patent number: 10871825
Abstract: Various aspects of the subject technology relate to prediction of eye movements of a user of a head-mountable display device, along with predictive foveated display systems and methods and predictive variable-focus display systems and methods that use the predicted eye movements. Predicting eye movements may include predicting a future gaze location and/or a future vergence plane for the user's eyes based on the current motion of one or both of the user's eyes. The predicted gaze location may be used to pre-render a foveated display image frame with a high-resolution region at the predicted gaze location. The predicted vergence plane may be used to modify an image plane of a display assembly to mitigate or avoid a vergence/accommodation conflict for the user.
Type: Grant
Filed: December 4, 2019
Date of Patent: December 22, 2020
Assignee: Facebook Technologies, LLC
Inventors: Sebastian Sztuk, Javier San Agustin Lopez, Steven Paul Lansel
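A minimal way to picture "predicting a future gaze location from current eye motion" is linear extrapolation of the gaze point. This sketch assumes a constant-velocity model and a fixed look-ahead interval; the actual prediction model in the patent is not specified in the abstract.

```python
def predict_gaze(gaze_now, gaze_prev, dt, lookahead):
    """Linearly extrapolate the gaze point to a future instant.

    gaze_now, gaze_prev: (x, y) screen coordinates at the two most
    recent samples; dt: seconds between them; lookahead: seconds ahead
    to predict. Returns the predicted (x, y).
    """
    vx = (gaze_now[0] - gaze_prev[0]) / dt
    vy = (gaze_now[1] - gaze_prev[1]) / dt
    return (gaze_now[0] + vx * lookahead, gaze_now[1] + vy * lookahead)
```

In a foveated pipeline, the returned point would seed the high-resolution region of the next pre-rendered frame, hiding the render latency behind the eye movement itself.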
-
Patent number: 10869063
Abstract: A deblocking filtering control involves deciding whether to apply deblocking filtering to sample values in a sample block in a picture and in a neighboring sample block in the picture based on i) a first magnitude modification of sample prediction values in a first prediction block in a reference picture for the sample block and ii) a second magnitude modification of sample prediction values in a second prediction block in the reference picture for the neighboring sample block. The sample block and the neighboring sample block are separated in the picture by a block boundary. This decision to apply deblocking filtering based on magnitude modifications reduces blocking artefacts that may otherwise arise in certain pictures of a video sequence.
Type: Grant
Filed: January 10, 2017
Date of Patent: December 15, 2020
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Ruoyang Yu, Kenneth Andersson, Jonatan Samuelsson, Per Wennersten
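The decision rule above can be sketched as comparing how strongly the prediction samples on each side of the block boundary were modified in magnitude. The abstract does not say how the magnitude modification is measured, so the mean-absolute-change metric and the threshold here are assumptions.

```python
def deblock_decision(ref_pred_p, mod_pred_p, ref_pred_q, mod_pred_q, thresh=2.0):
    """Decide whether to deblock the boundary between blocks P and Q.

    ref_pred_*: original prediction sample values in the reference
    picture's prediction block; mod_pred_*: the same samples after
    modification. Filtering is applied when the combined magnitude
    modification suggests a visible discontinuity (assumed criterion).
    """
    mag_p = sum(abs(m - r) for m, r in zip(mod_pred_p, ref_pred_p)) / len(ref_pred_p)
    mag_q = sum(abs(m - r) for m, r in zip(mod_pred_q, ref_pred_q)) / len(ref_pred_q)
    return (mag_p + mag_q) > thresh
```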
-
Patent number: 10860864
Abstract: According to one configuration, an example surveillance system includes a sensor device, analyzer hardware, and controller hardware. During operation, the sensor device scans a monitored location and generates scan data. In one embodiment, the scan data (such as distance-based data) indicates (defines) surface textures of one or more objects present at the monitored location (such as a location of interest) based on distance measurements. The analyzer hardware analyzes the scan data to detect changes in the surface textures. The controller hardware: i) generates a communication based on the detected surface textures, and ii) transmits the communication to a remote station.
Type: Grant
Filed: January 16, 2019
Date of Patent: December 8, 2020
Assignee: Charter Communications Operating, LLC
Inventor: Karim Ghessassi
-
Patent number: 10863184
Abstract: A method for determining intra-prediction modes for prediction units (PUs) of a largest coding unit (LCU) is provided that includes determining an intra-prediction mode for each child PU of a PU, and selecting an intra-prediction mode for the PU based on the intra-prediction modes determined for the child PUs.
Type: Grant
Filed: August 4, 2013
Date of Patent: December 8, 2020
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventor: Hyung Joon Kim
-
Patent number: 10855906
Abstract: The present invention relates to a vehicle camera device comprising: a lens unit that gives each of a plurality of horizontally divided image acquisition regions a different focal length; an image sensor including a plurality of pixel arrays formed to correspond to the plurality of image acquisition regions, the sensor converting the light introduced into each of the plurality of pixel arrays through the lens unit into an electrical signal; and a processor for generating an image on the basis of the electrical signal.
Type: Grant
Filed: May 8, 2017
Date of Patent: December 1, 2020
Assignee: LG Electronics Inc.
Inventor: Manhyung Lee
-
Patent number: 10848695
Abstract: A solid state imaging device as an embodiment has a first transfer unit that includes a first gate and transfers charges from a photoelectric conversion portion to a holding portion; a second transfer unit that includes a second gate and transfers charges from the holding portion to a floating diffusion portion; and a third transfer unit that includes a third gate and drains charges from the photoelectric conversion portion to a charge draining portion. The impurity concentration of a second conductivity type in at least a part of a region under the first gate of the first transfer unit is lower than the impurity concentration of the second conductivity type in a region under the second gate of the second transfer unit and in a region under the third gate of the third transfer unit.
Type: Grant
Filed: January 16, 2018
Date of Patent: November 24, 2020
Assignee: CANON KABUSHIKI KAISHA
Inventors: Takafumi Miki, Masahiro Kobayashi, Yusuke Onuki, Hiroshi Sekine
-
Patent number: 10843628
Abstract: An onboard display device includes: a display that includes a display region at the front passenger seat side of the vehicle-width-direction center position at the front face side of a vehicle cabin, and that is capable of displaying a picture in the display region; and a display controller that converts a second image captured by a second image capture section into a point-of-view-converted image viewed as if from a hypothetical point of view at the installation position of a first image capture section, stitches a first image captured by the first image capture section with the portion of the point-of-view-converted image that does not overlap with the first image to generate a composite image, and controls the display so as to display the composite image in the display region as the picture.
Type: Grant
Filed: February 5, 2018
Date of Patent: November 24, 2020
Assignee: Toyota Jidosha Kabushiki Kaisha
Inventors: Masashi Kawamoto, Takayuki Aoki
-
Patent number: 10839557
Abstract: Approaches herein provide for multi-camera calibration and its subsequent application in augmented reality applications. The approach obtains image data of a three-dimensional (3D) calibration object from different directions using at least one camera. Extrinsic parameters associated with the relative position of the 3D calibration object to the at least one camera are determined, as are intrinsic parameters associated with the at least one camera relative to the 3D calibration object facing in the different directions. Revised intrinsic parameters are generated from the intrinsic parameters; they provide a calibration output that may be applied to generate views of an item in an augmented reality application.
Type: Grant
Filed: April 3, 2018
Date of Patent: November 17, 2020
Assignee: A9.com, Inc.
Inventors: Himanshu Arora, Yifan Xing, Radek Grzeszczuk, Chun-Kai Wang, Paulo Ricardo dos Santos Mendonca, Arnab Sanat Kumar Dhua
-
Patent number: 10841599
Abstract: A method for encoding video data into a video bitstream using a video capture device having a brightness-range-limited output determines capture conditions for the capture device, including an ambient capture light level and a measured light level of captured video data. The method adjusts a brightness adaptation model using at least the measured light level and the ambient capture light level, the brightness adaptation model defining a temporally variable peak luminance for a viewer of video captured using the capture device. The method then determines a tone map such that, where the measured light level exceeds a determined maximum light level, the tone map is modified to reduce brightness; the maximum light level is determined using the brightness adaptation model. The captured video data is then encoded into the video bitstream using the determined tone map.
Type: Grant
Filed: July 25, 2016
Date of Patent: November 17, 2020
Assignee: Canon Kabushiki Kaisha
Inventor: Christopher James Rosewarne
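The two pieces described above — a temporally varying adaptation model and a tone map that reduces brightness when the measured level exceeds the model's maximum — can be sketched as below. The exponential adaptation time constant and the simple scaling curve are assumptions for the sketch, not values from the patent.

```python
import math

def adapted_peak_luminance(peak_prev, ambient_nits, measured_nits, dt, tau=5.0):
    """Move the viewer's adapted peak luminance toward the current light level.

    Assumed first-order (exponential) adaptation with time constant tau
    seconds; dt is the elapsed time since the last update.
    """
    target = max(ambient_nits, measured_nits)
    alpha = 1.0 - math.exp(-dt / tau)
    return peak_prev + alpha * (target - peak_prev)

def tone_map(sample_nits, measured_nits, max_nits):
    """Reduce brightness only where the measured level exceeds the
    maximum allowed by the adaptation model (assumed linear scaling)."""
    if measured_nits <= max_nits:
        return sample_nits
    return sample_nits * (max_nits / measured_nits)
```

Per frame, the encoder would update the adapted peak, derive the maximum light level from it, and apply the resulting tone map before encoding.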
-
Patent number: 10834393
Abstract: A method and apparatus for selecting an intra interpolation filter for multi-line intra prediction based on a reference line index for decoding a video sequence includes identifying a set of reference lines associated with a coding unit. A first type of interpolation filter is applied to reference samples included in a first reference line, of the set of reference lines, that is adjacent to the coding unit to generate a first set of prediction samples, based on the first reference line being associated with a first reference line index. A second type of interpolation filter is applied to reference samples included in a second reference line, of the set of reference lines, that is non-adjacent to the coding unit to generate a second set of prediction samples, based on the second reference line being associated with a second reference line index.
Type: Grant
Filed: November 28, 2018
Date of Patent: November 10, 2020
Assignee: TENCENT AMERICA LLC
Inventors: Liang Zhao, Xin Zhao, Xiang Li, Shan Liu
-
Patent number: 10825198
Abstract: Provided are a three-dimensional location calculating apparatus and a three-dimensional location calculation method that use photographic images, in which a plurality of photographic images are analyzed to calculate the three-dimensional location of a point that is commonly marked on the photographic images.
Type: Grant
Filed: June 20, 2019
Date of Patent: November 3, 2020
Assignee: CUPIX, INC.
Inventor: SeockHoon Bae
-
Patent number: 10821896
Abstract: Disclosed herein is a multi-camera driver assistance system. The system includes a plurality of cameras disposed at different positions of a vehicle to capture images of the vicinity of the vehicle; an image processing unit that generates a virtual view with respect to a predetermined projection surface based on the images; and a display device that displays the virtual view, wherein the predetermined projection surface includes slanted projection surfaces located at lateral sides of the vehicle.
Type: Grant
Filed: September 26, 2019
Date of Patent: November 3, 2020
Assignee: MANDO CORPORATION
Inventor: Jochen Abhau
-
Patent number: 10820015
Abstract: A method and apparatus for Intra prediction coding in multi-view video coding, three-dimensional video coding, or screen content video coding are disclosed. A first filtering-disable-flag associated with high-level video data is determined to indicate whether to disable at least one filter from a filter group. If the first filtering-disable-flag is asserted, one or more selected Intra prediction modes from an Intra prediction mode group are determined, and at least one filter from the filter group is skipped for the current block if the current Intra prediction mode of the current block belongs to the selected Intra prediction modes. The system may further determine a second filtering-disable-flag, associated with low-level video data corresponding to the current block level or a higher level than the current block level, to disable said at least one filter from the filter group for the low-level video data.
Type: Grant
Filed: December 31, 2014
Date of Patent: October 27, 2020
Assignee: HFI Innovation Inc.
Inventors: Xianguo Zhang, Jian-Liang Lin, Kai Zhang, Jicheng An, Han Huang, Yi-Wen Chen
-
Patent number: 10812810
Abstract: A method for video coding using a merge mode by a decoder or encoder. An embodiment of the method includes receiving a current block having a block size; setting a grid pattern based on the block size of the current block, wherein the grid pattern partitions a search region adjacent to the current block into search blocks whose size is determined according to the block size of the current block; and searching for one or more spatial merge candidates from candidate positions in the search blocks to construct a candidate list that includes the one or more spatial merge candidates.
Type: Grant
Filed: November 29, 2018
Date of Patent: October 20, 2020
Assignee: Tencent America LLC
Inventors: Jing Ye, Xiang Li, Shan Liu
-
Patent number: 10805629
Abstract: Regions for texture-based coding are identified using a spatial segmentation and a motion flow segmentation. For frames of a group of frames in a video sequence, a frame is segmented using a first classifier into at least one of a texture region or a non-texture region of an image in the frame. Then, the texture regions of the group of frames are segmented using a second classifier, which uses motion across the group of frames as input, into a texture coding region or a non-texture coding region. Each of the classifiers is generated using a machine-learning process. Blocks of the non-texture region and the non-texture coding region of the current frame are coded using a block-based coding technique, while blocks of the texture coding region are coded using a coding technique other than the block-based coding technique.
Type: Grant
Filed: February 17, 2018
Date of Patent: October 13, 2020
Assignee: GOOGLE LLC
Inventors: Yuxin Liu, Adrian Grange
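The two-stage pipeline in the abstract above — a spatial classifier first, then a motion-flow classifier gating which texture regions actually get texture coding — can be sketched schematically. The classifier interfaces, region format, and feature names here are placeholders, not the trained models from the patent.

```python
def classify_regions(frames, spatial_clf, motion_clf):
    """Two-stage region classification for texture-based coding.

    Stage 1: spatial_clf flags texture-like regions per frame.
    Stage 2: motion_clf, fed motion across the group of frames,
    decides which texture regions are stable enough to code with the
    texture (non-block-based) technique. Returns {region_id: mode}.
    """
    texture_regions = []
    for frame in frames:
        for region in frame["regions"]:
            if spatial_clf(region["features"]):  # texture vs. non-texture
                texture_regions.append(region)
    coding_plan = {}
    for region in texture_regions:
        stable = motion_clf(region["motion_flow"])
        coding_plan[region["id"]] = "texture" if stable else "block"
    return coding_plan
```

Regions the spatial classifier rejects never reach stage 2; like non-texture coding regions, they fall through to the block-based coder.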
-
Patent number: 10805649
Abstract: A method and device for blending multiple related frames into a single frame to reduce noise are disclosed. The method includes comparing an input frame to a corresponding reference frame to determine whether any object present in both frames moves in the input frame, and to determine the edge strengths of that object. Based on the comparison, the method determines which regions of the input frame to blend with corresponding regions of the reference frame, which regions not to blend, and which regions to partially blend.
Type: Grant
Filed: January 4, 2018
Date of Patent: October 13, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ibrahim E. Pekkucuksen, John Glotzbach, Hamid R. Sheikh, Rahul Rithe
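The per-region three-way decision described above (blend / partial blend / no blend) can be sketched with motion and edge strength as inputs. The thresholds and blend weights are illustrative assumptions; the abstract does not give the actual decision criteria.

```python
def blend_decision(motion, edge_strength, t_move=4.0, t_edge=10.0):
    """Pick a blend mode for one region from its motion and edge strength.

    Thresholds are assumed values: moving regions are excluded to avoid
    ghosting, and strong-edge regions are blended only partially.
    """
    if motion > t_move:
        return "no_blend"       # moving object: blending would ghost
    if edge_strength > t_edge:
        return "partial_blend"  # static but detailed: blend conservatively
    return "full_blend"         # flat, static region: average away noise

def blend_region(inp, ref, decision):
    """Weighted average of input and reference samples per decision."""
    w = {"no_blend": 0.0, "partial_blend": 0.25, "full_blend": 0.5}[decision]
    return [(1 - w) * a + w * b for a, b in zip(inp, ref)]
```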