Patents Examined by Randolph I Chu
-
Patent number: 11967118
Abstract: Systems and methods are described herein for implementing a hybrid codec to compress and decompress image data using both lossy and lossless compression. In one example encoding process, it may be determined whether a first block of pixels of a frame of image data contains an edge. A type of compression by which to encode the first block may be selected based on that determination. The first block may be compressed using the selected type of compression. At least one second value associated with the first block of pixels may be set to indicate at least one of the compressed value or the type of compression used to compress the first block.
Type: Grant
Filed: November 30, 2020
Date of Patent: April 23, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Russell Allen Brown, Paolo Maggi, Paolo Angelo Borelli, Paul Hinks, Mark John Keller
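A minimal sketch of the per-block decision described above. The neighbour-difference edge test, the threshold, and the quantisation step for the lossy path are all assumptions for illustration; the patent does not specify how the edge is detected or which codecs are used.

```python
def has_edge(block, threshold=32):
    """Return True if any horizontal or vertical neighbour difference
    exceeds the threshold (a simple stand-in for edge detection)."""
    h, w = len(block), len(block[0])
    for y in range(h):
        for x in range(w):
            if x + 1 < w and abs(block[y][x] - block[y][x + 1]) > threshold:
                return True
            if y + 1 < h and abs(block[y][x] - block[y + 1][x]) > threshold:
                return True
    return False

def encode_block(block):
    """Pick lossless coding for edge blocks and lossy (quantised) coding
    otherwise, tagging the result with the compression type used."""
    if has_edge(block):
        return {"type": "lossless", "data": [row[:] for row in block]}
    q = 8  # hypothetical quantisation step for the lossy path
    return {"type": "lossy", "data": [[(v // q) * q for v in row] for row in block]}
```

A flat block takes the lossy path; a block containing a sharp transition is kept losslessly, matching the selection logic in the abstract.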
-
Patent number: 11967066
Abstract: An image processing method of the present disclosure may include receiving a scanned image, and processing the received image through an octave convolution-based neural network to output a high-quality image and an edge image for the received image. The octave convolution-based neural network may include a plurality of octave encoder blocks and a plurality of octave decoder blocks. Each octave encoder block may include an octave convolutional layer, and may be configured to output a high-frequency feature map and a low-frequency feature map for the image.
Type: Grant
Filed: April 12, 2021
Date of Patent: April 23, 2024
Assignees: DAEGU GYEONGBUK INSTITUTE OF SCIENCE AND TECHNOLOGY, MARQUETTE UNIVERSITY
Inventors: Sang Hyun Park, Dong Kyu Won, Dong Hye Ye
-
Patent number: 11948289
Abstract: A device may receive frames of a video capturing an analog meter with a dial and a needle, may process the frames to identify a center, a radius, and a perimeter of the dial, and may determine calibrated values for the dial. The device may apply a model to one of the frames to create a base mask, may apply thresholding for a dynamic HSV bounding value, to the base mask and the frames, to create masked frames, and may identify contours for the masked frames. The device may identify a quantity of points for each of the contours, may estimate angles of the needle of the analog meter based on the quantity of points, and may average the estimated angles to determine an averaged needle angle. The device may determine a needle direction based on the averaged needle angle and may calculate a meter reading.
Type: Grant
Filed: July 19, 2021
Date of Patent: April 2, 2024
Assignee: Verizon Patent and Licensing Inc.
Inventors: Hitesh K. Patel, Benjamin D. Allen, Gaurav Goel
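The averaging and reading steps can be sketched as follows. The circular mean (so estimates near the 0/360 wrap-around average correctly) and the linear mapping from angle to calibrated value are assumptions about how the averaging and calibration might combine, not details taken from the patent:

```python
import math

def average_needle_angle(angles_deg):
    """Circular mean of per-contour needle-angle estimates."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

def meter_reading(angle_deg, min_angle, max_angle, min_val, max_val):
    """Linearly map the averaged needle angle onto the calibrated scale."""
    frac = (angle_deg - min_angle) / (max_angle - min_angle)
    return min_val + frac * (max_val - min_val)
```

For example, estimates of 350° and 10° average to 0° rather than the naive 180°, which is why a circular mean is preferable for dial needles.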
-
Patent number: 11941098
Abstract: An authentication device includes: a wearing position determination unit that determines a wearing position, the wearing position being a position at which a wearable article comprising a sensor is being worn on a body; and an authentication unit that performs authentication by using biometric information of the body, the biometric information being detected by the sensor at the wearing position.
Type: Grant
Filed: October 4, 2022
Date of Patent: March 26, 2024
Assignee: NEC CORPORATION
Inventor: Hiroshi Fukuda
-
Patent number: 11941784
Abstract: An image processing apparatus includes at least one memory and at least one processor that executes instructions stored in the memory to receive an input image based on image data, execute noise reduction processing on the image data, and output noise-reduced output data based on a result of the noise reduction processing, wherein the noise reduction processing calculates a value using reference pixels based on a first frequency range, and subtracts a value using pixels based on a second frequency range.
Type: Grant
Filed: March 10, 2021
Date of Patent: March 26, 2024
Assignee: Canon U.S.A., Inc.
Inventor: Mitsuhiro Ikuta
-
Patent number: 11941787
Abstract: Examples are provided relating to recovering depth data from noisy phase data of low-signal pixels. One example provides a computing system comprising a logic machine, and a storage machine holding instructions executable by the logic machine to process depth data by obtaining depth image data and active brightness image data for a plurality of pixels, the depth image data comprising phase data for a plurality of frequencies, and identifying low-signal pixels based at least on the active brightness image data. The instructions are further executable to apply a denoising filter to phase data of the low-signal pixels to obtain denoised phase data, while not applying the denoising filter to phase data of other pixels. The instructions are further executable to, after applying the denoising filter, perform phase unwrapping on the phase data for the plurality of frequencies to obtain a depth image, and output the depth image.
Type: Grant
Filed: August 23, 2021
Date of Patent: March 26, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sergio Ortiz Egea, Augustine Cha
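A toy version of the selective denoising step. The 3×3 mean filter and the single active-brightness threshold are assumptions for illustration; the patent does not specify which denoising filter or low-signal test is used:

```python
def denoise_low_signal(phase, active_brightness, ab_thresh=10.0):
    """Apply a 3x3 mean filter only where active brightness is low;
    all other pixels keep their original phase value."""
    h, w = len(phase), len(phase[0])
    out = [row[:] for row in phase]
    for y in range(h):
        for x in range(w):
            if active_brightness[y][x] >= ab_thresh:
                continue  # high-signal pixel: leave its phase untouched
            vals = [phase[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = sum(vals) / len(vals)
    return out
```

Filtering from the original `phase` array (not `out`) keeps the result independent of scan order; phase unwrapping would then run on the mixed denoised/original data, as the abstract describes.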
-
Patent number: 11935299
Abstract: In one embodiment, a first device identifies a region of interest in video in which a light source of a second device is present by: using a current frame of the video and a prior frame of the video to compute a difference frame, performing thresholding on the current frame to form a threshold frame, and performing pixel-wise conjunction operations between the difference frame and the threshold frame, to identify a centroid of the light source of the second device. The first device detects a message within the region of interest transmitted by the second device via its light source. The first device provides the message for review by a user.
Type: Grant
Filed: April 8, 2022
Date of Patent: March 19, 2024
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Samer Salam, Jad Al Aaraj
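The difference-plus-threshold conjunction can be sketched directly: a pixel survives only if it both changed since the prior frame and is bright in the current frame, and the centroid of the survivors locates the light source. The two threshold values here are illustrative assumptions:

```python
def find_light_centroid(prev, curr, diff_thresh=30, bright_thresh=200):
    """Pixel-wise AND of (a) where the frame changed and (b) where the
    current frame is bright, then the centroid of surviving pixels.
    Returns (x, y) or None if no pixel passes both tests."""
    xs, ys = [], []
    for y, (prow, crow) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if abs(c - p) > diff_thresh and c > bright_thresh:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The conjunction is what suppresses static bright objects (no change between frames) and moving dark objects (no brightness), leaving only the modulated light source.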
-
Patent number: 11935185
Abstract: An apparatus for content-based anti-aliasing is described herein. The apparatus comprises a detector, a corrector, and a downscaler. The detector is to detect potentially aliased content in an input image, wherein the potentially aliased content occurs in a downscaled version of the input image. The corrector is to apply a correction to a single component of the input image. The downscaler may downscale the corrected input image to an output image according to a scaling factor.
Type: Grant
Filed: March 18, 2020
Date of Patent: March 19, 2024
Assignee: INTEL CORPORATION
Inventors: Vijay Sundaram, Yi-Jen Chiu
-
Patent number: 11928796
Abstract: Encoding can involve correcting chroma components representing the chroma of an input image according to a first component representing a mapping of the luminance component of said input image used for reducing or increasing the dynamic range of said luminance component, and a reconstructed component representing an inverse mapping of said first component. At least one correction factor according to said at least one scaled chroma components can also be obtained and transmitted. Decoding can involve scaled chroma components being obtained by multiplying chroma components of an image by at least one corrected chroma correction function depending on said at least one correction factor. Components of a reconstructed image can then be derived as a function of said scaled chroma components and a corrected matrix that depends on an inverse of a theoretical color space matrix conversion and said at least one correction factor.
Type: Grant
Filed: November 30, 2018
Date of Patent: March 12, 2024
Assignee: InterDigital Patent Holdings, Inc.
Inventors: Marie-Jean Colaitis, David Touze, Nicolas Caramelli
-
Patent number: 11915099
Abstract: An information processing method includes: acquiring a first object detection result obtained by use of an object detection model to which sensing data from a first sensor is input, and a second object detection result obtained by use of a second sensor; determining a degree of agreement between the first object detection result and the second object detection result in a specific region in a sensing space of the first sensor and the second sensor; and selecting the sensing data as learning data for the object detection model, according to the degree of agreement obtained in the determining.
Type: Grant
Filed: July 18, 2019
Date of Patent: February 27, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventor: Takuya Yamaguchi
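One common way to score the "degree of agreement" between two detection results is intersection-over-union of their bounding boxes. Both the IoU metric and the selection policy below (keeping low-agreement frames as hard training examples) are assumptions; the abstract only says selection happens according to the degree of agreement:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def select_as_learning_data(model_box, reference_box, agreement_thresh=0.5):
    """Hypothetical policy: keep the frame for training when the model's
    detection disagrees with the second sensor's result (a hard example)."""
    return iou(model_box, reference_box) < agreement_thresh
```

Frames where the model already agrees with the second sensor add little training signal, which is one rationale for mining disagreements.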
-
Patent number: 11908222
Abstract: The present application relates to an occluded pedestrian re-identification method, including the steps of: obtaining global features and local features of occluded pedestrians, and recombining the local features into a local feature map; obtaining a heat map of key-points of pedestrian images and a group of key-point confidences, and obtaining a group of pedestrian key-point features by using the local feature map and the heat map; obtaining a local feature group by using the global features to enhance each key-point feature in the group according to Conv; and obtaining an adjacency matrix of the key-points, then using the local feature group and the adjacency matrix as the input of a GCN to obtain the final features of the pedestrian key-points.
Type: Grant
Filed: October 17, 2023
Date of Patent: February 20, 2024
Assignee: Hangzhou Dianzi University
Inventors: Ming Jiang, Lingjie He, Min Zhang
-
Patent number: 11900687
Abstract: A method includes obtaining a fisheye image of a scene and identifying multiple regions of interest in the fisheye image. The method also includes applying one or more transformations to transform and rotate one or more of the regions of interest in the fisheye image to produce one or more transformed regions. The method further includes generating a collage image having at least one portion based on the fisheye image and one or more portions containing the one or more transformed regions. In addition, the method includes performing object detection to identify one or more objects captured in the collage image.
Type: Grant
Filed: June 16, 2022
Date of Patent: February 13, 2024
Assignee: Canoo Technologies Inc.
Inventors: Jongmoo Choi, David R. Arft
-
Patent number: 11900434
Abstract: A system is provided that includes a mobile user device that executes an application that determines a skintone of a user, and determines and transmits a recipe for generating a target foundation that is based on a combination of a plurality of separate foundation ingredients that are associated with the skintone of the user. The system includes a dispensing device configured to receive the transmitted recipe from the mobile user device and dispense each of the plurality of separate foundation ingredients onto a common dispensing surface such that when the dispensed amounts of each of the plurality of separate foundation ingredients are blended on the dispensing surface, the target foundation is achieved. The mobile user device is configured to determine the skintone of the user based on features in a detected face of the user in a self-taken image captured by a camera of the mobile user device.
Type: Grant
Filed: December 31, 2020
Date of Patent: February 13, 2024
Assignee: L'ORÉAL
Inventors: Gregoire Charraud, Guive Balooch, Aldina Suwanto
-
Patent number: 11893753
Abstract: A security camera with a motion detection function comprises: an image sensor configured to capture original images; a variation level computation circuit configured to compute image variation levels of the original images; a long term computation circuit configured to calculate a first average level of the image variation levels corresponding to M of the original images; a short term computation circuit configured to calculate a second average level of the image variation levels corresponding to N of the original images, wherein M>N; and a motion determining circuit configured to determine whether a motion of an object appears in a detection range of the image sensor according to a relation between the first average level and the second average level. Such a security camera avoids interference caused by noise or small objects, making its motion detection more accurate.
Type: Grant
Filed: July 24, 2020
Date of Patent: February 6, 2024
Assignee: PixArt Imaging Inc.
Inventors: Kevin Len-Li Lim, Joon Chok Lee, Zi Hao Tan, Ching Geak Chan, Keen-Hun Leong
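The long-term/short-term comparison can be sketched as follows. The specific trigger rule (short-term average exceeding a multiple of the long-term average) and the default M, N, and ratio values are assumptions; the abstract only requires M > N and "a relation between" the two averages:

```python
from collections import deque

class MotionDetector:
    """Compare a long-term average over M variation levels with a
    short-term average over the most recent N of them. A short-term
    average well above the long-term baseline suggests real motion
    rather than sensor noise or a small passing object."""

    def __init__(self, m=30, n=5, ratio=2.0):
        assert m > n
        self.levels = deque(maxlen=m)
        self.n = n
        self.ratio = ratio

    def update(self, variation_level):
        """Feed one per-frame variation level; return True when motion
        is detected (only once the M-frame buffer is full)."""
        self.levels.append(variation_level)
        if len(self.levels) < self.levels.maxlen:
            return False
        long_avg = sum(self.levels) / len(self.levels)
        short_avg = sum(list(self.levels)[-self.n:]) / self.n
        return short_avg > self.ratio * max(long_avg, 1e-9)
```

A single noisy frame barely moves the short-term average relative to the long-term baseline, which is how this scheme filters out noise and small objects.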
-
Patent number: 11893731
Abstract: Systems and methods described herein relate, among other things, to unmixing more than three stains while preserving the biological constraints of the biomarkers. Unlimited numbers of markers may be unmixed from a limited-channel image, such as an RGB image, without adding any mathematical complexity to the model. Known co-localization information of different biomarkers within the same tissue section enables defining fixed upper bounds for the number of stains at one pixel. A group sparsity model may be leveraged to explicitly model the fractions of stain contributions from the co-localized biomarkers into one group to yield a least squares solution within the group. A sparse solution may be obtained among the groups to ensure that only a small number of groups, with a total number of stains less than the upper bound, are activated.
Type: Grant
Filed: August 2, 2021
Date of Patent: February 6, 2024
Assignee: Ventana Medical Systems, Inc.
Inventors: Srinivas Chukka, Ting Chen
-
Patent number: 11885638
Abstract: According to one aspect of the invention, there is provided a method for generating a map for a robot, the method comprising the steps of: acquiring a raw map associated with a task of the robot; identifying pixels estimated to be a moving obstacle in the raw map, on the basis of at least one of colors of pixels specified in the raw map and sizes of areas associated with the pixels; and performing dilation and erosion operations on the pixels estimated to be the moving obstacle, and determining a polygon-based contour of the moving obstacle.
Type: Grant
Filed: December 28, 2020
Date of Patent: January 30, 2024
Assignee: Bear Robotics, Inc.
Inventors: Yeo Jin Jung, Seongjun Park, Jungju Oh
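Dilation followed by erosion (a morphological closing) fills small gaps in the estimated obstacle pixels before a contour is traced. A toy version on a binary mask, assuming a 4-neighbour structuring element (the patent does not specify one):

```python
OFFSETS = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))

def dilate(grid):
    """4-neighbour binary dilation: a cell becomes 1 if it or any
    in-bounds neighbour is 1."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(0 <= y + dy < h and 0 <= x + dx < w and grid[y + dy][x + dx]
                   for dy, dx in OFFSETS):
                out[y][x] = 1
    return out

def erode(grid):
    """4-neighbour binary erosion: a cell stays 1 only if it and all
    neighbours are 1 (out-of-bounds treated as 0)."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(0 <= y + dy < h and 0 <= x + dx < w and grid[y + dy][x + dx]
                   for dy, dx in OFFSETS):
                out[y][x] = 1
    return out
```

Closing (`erode(dilate(mask))`) merges nearby obstacle pixels into solid blobs while leaving isolated points unchanged, giving cleaner input for polygon contour extraction.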
-
Patent number: 11869274
Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount, determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount, and determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face, and displaying a final frame.
Type: Grant
Filed: March 29, 2022
Date of Patent: January 9, 2024
Assignee: Google LLC
Inventor: Dillon Cower
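The interpolation step can be sketched as a linear blend between keyframes by the derived interpolation amount. Representing a face resampling keyframe as a list of (x, y) landmarks is an assumption for illustration; the patent does not specify the keyframe representation:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b by amount t in [0, 1]."""
    return a + (b - a) * t

def interpolate_landmarks(keyframe_a, keyframe_b, amount):
    """Blend corresponding (x, y) landmarks from two face resampling
    keyframes by the derived interpolation amount."""
    return [(lerp(ax, bx, amount), lerp(ay, by, amount))
            for (ax, ay), (bx, by) in zip(keyframe_a, keyframe_b)]
```

With `amount = 0.0` the result matches the first keyframe and with `amount = 1.0` the second, so intermediate amounts yield smoothly interpolated face frames between the two.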
-
Patent number: 11861975
Abstract: A gaming system that receives a frame of image data captured by a camera at a gaming table, generates a set of images from portions of the frame of image data, and determines whether the set of images meets an input requirement of a neural network model. If the set of images does not meet the input requirement, the gaming system modifies, by an incremental amount, an image property of a subset from the set of images until the set of images meets the input requirement. When the set of images meets the input requirement, the gaming system transmits the set of images as a unit (e.g., as a composite of the set of images) to the neural network model for concurrent analysis.
Type: Grant
Filed: March 30, 2021
Date of Patent: January 2, 2024
Assignee: LNW Gaming, Inc.
Inventors: Bryan Kelly, Martin S. Lyons
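The incremental-adjustment loop can be sketched as follows. Using brightness as the adjusted image property and a minimum-brightness input requirement are hypothetical choices; the patent leaves both the property and the requirement abstract:

```python
def adjust_until_valid(images, min_brightness=0.5, step=0.1, max_iters=100):
    """Incrementally brighten only the images that fail a (hypothetical)
    minimum-brightness input requirement, leaving passing images alone,
    until the whole set satisfies the requirement."""
    for _ in range(max_iters):
        failing = [img for img in images if img["brightness"] < min_brightness]
        if not failing:
            return images  # whole set now meets the input requirement
        for img in failing:
            img["brightness"] += step  # modify only the failing subset
    raise RuntimeError("input requirement not met within iteration budget")
```

Only the failing subset is touched on each pass, mirroring the abstract's "modifies … an image property of a subset" until the set as a whole qualifies for transmission.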
-
Patent number: 11841921
Abstract: The present application provides a model training method and apparatus, and a prediction method and apparatus, relating to the fields of artificial intelligence, deep learning, image processing, and autonomous driving. The model training method includes: inputting a first sample image of sample images into a depth information prediction model, and acquiring depth information of the first sample image; acquiring inter-image posture information based on a second sample image of the sample images and the first sample image; acquiring a projection image corresponding to the first sample image, at least according to the inter-image posture information and the depth information; and acquiring a loss function that measures a similarity between the second sample image and the projection image, and training the depth information prediction model using the loss function.
Type: Grant
Filed: December 4, 2020
Date of Patent: December 12, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Xibin Song, Dingfu Zhou, Jin Fang, Liangjun Zhang
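The similarity between the second sample image and the projection image can be scored with a simple photometric loss. Mean absolute error is one common choice in self-supervised depth training, assumed here rather than taken from the patent:

```python
def photometric_l1_loss(target, projected):
    """Mean absolute difference between the second sample image and the
    projection of the first; a lower value means the predicted depth
    (and the inter-image pose) explain the second view well."""
    total, n = 0.0, 0
    for trow, prow in zip(target, projected):
        for t, p in zip(trow, prow):
            total += abs(t - p)
            n += 1
    return total / n
```

Minimizing this loss pushes the depth prediction model toward depths whose reprojection reproduces the second sample image, which is the training signal the abstract describes.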
-
Patent number: 11830214
Abstract: A method includes obtaining first pass-through image data characterized by a first pose, and obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold, and displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose, transforming the AR display marker to a position associated with the second pose in order to track the feature, and displaying the second pass-through image data while maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
Type: Grant
Filed: May 29, 2019
Date of Patent: November 28, 2023
Assignee: APPLE INC.
Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan