Patents Examined by Amir Alavi
-
Patent number: 12288411
Abstract: In example embodiments, techniques are provided that use two different ML models (a symbol association ML model and a link association ML model), one to extract associations between text labels and symbols and one to extract associations between text labels and links, in a schematic diagram (e.g., P&ID) in an image-only format. The two models may use different ML architectures. For example, the symbol association ML model may use a deep learning neural network architecture that receives for each possible text label and symbol pair both a context and a request, and produces a score indicating confidence the pair is associated. The link association ML model may use a gradient boosting tree architecture that receives for each possible text label and link pair a set of multiple features describing at least the geometric relationship between the possible text label and link pair and produces a score indicating confidence the pair is associated.
Type: Grant
Filed: October 6, 2022
Date of Patent: April 29, 2025
Assignee: Bentley Systems, Incorporated
Inventors: Marc-André Gardner, Simon Savary, Louis-Philippe Asselin
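
A minimal sketch (not the patented implementation) of the kind of geometric features a link-association model might receive for a (text label, link) pair. The feature set and the box/segment representations are assumptions; the patent only states that a gradient boosting tree architecture scores each pair from features describing at least their geometric relationship.

```python
import math

def link_pair_features(label_box, link_segment):
    """label_box: (xmin, ymin, xmax, ymax); link_segment: ((x1, y1), (x2, y2))."""
    cx = (label_box[0] + label_box[2]) / 2.0
    cy = (label_box[1] + label_box[3]) / 2.0
    (x1, y1), (x2, y2) = link_segment
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0    # link midpoint
    length = math.hypot(x2 - x1, y2 - y1)        # link length
    dist = math.hypot(cx - mx, cy - my)          # label-to-link distance
    angle = math.atan2(y2 - y1, x2 - x1)         # link orientation
    return [dist, dist / (length + 1e-6), angle, cx - mx, cy - my]

# scores = gbt_model.predict_proba([link_pair_features(b, s) for b, s in pairs])[:, 1]
# where `gbt_model` is a previously trained gradient-boosting classifier (hypothetical).
```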
-
Patent number: 12288567
Abstract: A neural network, a system using this neural network, and a method for training a neural network to output a description of the environment in the vicinity of at least one sound acquisition device on the basis of an audio signal acquired by the sound acquisition device, the method including: obtaining audio and image training signals of a scene showing an environment with objects generating sounds, obtaining a target description of the environment seen on the image training signal, inputting the audio training signal to the neural network so that the neural network outputs a training description of the environment, and comparing the target description of the environment with the training description of the environment.
Type: Grant
Filed: January 10, 2020
Date of Patent: April 29, 2025
Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, ETH ZÜRICH
Inventors: Wim Abbeloos, Arun Balajee Vasudevan, Dengxin Dai, Luc Van Gool
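
A hedged PyTorch sketch of the training step described in the abstract: an audio branch predicts an environment description and is supervised by a target description derived from the paired image. The architecture, loss, and tensor shapes below are illustrative assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class AudioToDescription(nn.Module):
    def __init__(self, n_mels=64, out_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(128, out_dim)      # predicted environment description

    def forward(self, audio_feats):              # (batch, n_mels, time)
        z = self.encoder(audio_feats).squeeze(-1)
        return self.head(z)

model = AudioToDescription()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(audio_feats, target_description):
    # target_description is obtained from the paired image training signal
    pred = model(audio_feats)                    # training description of the environment
    loss = loss_fn(pred, target_description)     # compare training vs. target description
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```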
-
Patent number: 12283081
Abstract: A stroboscopic device, comprising: a camera for acquiring video frames; a light source; and at least one hardware processor; and software that is configured to, when executed by the at least one hardware processor, generate synthetic video frames of an object, provide real video frames of the object and the synthetic video frames to a discriminator configured to distinguish the real video frames from the synthetic video frames, use feedback from the discriminator to generate further synthetic video frames, provide real video frames of the object and the further synthetic video frames to the discriminator, and repeat until a convergence point is achieved where the discriminator cannot reliably tell the real video frames from the synthetically generated video frames, generate a data set comprising synthetic video frames and real video frames of the object, use the data set to train a model, acquire a sequence of video frames via the camera, for a first and second frame in the seque
Type: Grant
Filed: July 10, 2024
Date of Patent: April 22, 2025
Assignee: IOT Technologies LLC
Inventors: Abraham Greenboim, Osama Mustafa
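
A simplified sketch of the adversarial loop described above: synthetic frames are refined until the discriminator can no longer reliably separate them from real frames, and the combined set is used as training data. The generator/discriminator interfaces and the near-chance accuracy threshold of 0.55 are assumptions for illustration.

```python
def train_until_converged(generator, discriminator, real_frames, max_rounds=1000):
    synthetic = []
    for _ in range(max_rounds):
        synthetic = generator.generate(len(real_frames))            # synthetic video frames
        accuracy = discriminator.train_and_score(real_frames, synthetic)
        generator.update(discriminator.feedback())                  # use discriminator feedback
        if accuracy < 0.55:                                         # near chance level: converged
            break
    # mixed real + synthetic data set used to train the downstream model
    return list(real_frames) + list(synthetic)
```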
-
Patent number: 12283105
Abstract: A rail area extraction method based on laser point cloud data is provided, including: preprocessing collected laser point cloud data; screening and clustering the laser point cloud data based on a fixed distance segmentation method, and representing laser point cloud data of reference objects by a main laser point cloud data cluster; projecting laser point cloud data of reference objects to a horizontal plane, fitting reference curves based on an improved differential evolution algorithm with a train left side reference curve as an upper boundary and a train right side reference curve as a lower boundary; selecting a target boundary line from upper and lower boundaries based on a laser point cloud data amount-density two-step decision method; calculating a rail area center line based on the target boundary line; and selecting a rail area boundary line extension method or rail area center line extension method to calculate the rail area.
Type: Grant
Filed: July 29, 2024
Date of Patent: April 22, 2025
Assignee: Suzhou TongRuiXing Technology Co., LTD
Inventors: Tuo Shen, Lanxin Xie, Tenghui Xue
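
A rough numpy sketch of the boundary-and-centerline step: after the left- and right-side reference points have been projected to the horizontal plane, curves are fitted and the rail-area center line is taken midway between them. Plain least-squares polynomial fitting stands in here for the improved differential-evolution fitting used in the patent.

```python
import numpy as np

def rail_centerline(left_pts, right_pts, degree=2, n_samples=100):
    """left_pts/right_pts: arrays of shape (N, 2) with (x, y) ground-plane coordinates."""
    left_fit = np.polyfit(left_pts[:, 0], left_pts[:, 1], degree)     # upper boundary curve
    right_fit = np.polyfit(right_pts[:, 0], right_pts[:, 1], degree)  # lower boundary curve
    x = np.linspace(min(left_pts[:, 0].min(), right_pts[:, 0].min()),
                    max(left_pts[:, 0].max(), right_pts[:, 0].max()), n_samples)
    y_center = (np.polyval(left_fit, x) + np.polyval(right_fit, x)) / 2.0
    return np.stack([x, y_center], axis=1)     # sampled rail area center line
```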
-
Patent number: 12277739
Abstract: A point-cloud decoding device 200 according to the present invention includes: a geometry information decoding unit 2010 configured to decode a flag that controls whether to apply “Implicit QtBt” or not; and an attribute-information decoding unit 2060 configured to decode a flag that controls whether to apply “scalable lifting” or not; wherein a restriction is set not to apply the “scalable lifting” when the “Implicit QtBt” is applied.
Type: Grant
Filed: September 30, 2022
Date of Patent: April 15, 2025
Assignee: KDDI CORPORATION
Inventors: Kyohei Unno, Kei Kawamura, Sei Naito
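
A minimal sketch of the bitstream restriction described above: when the geometry flag enabling "Implicit QtBt" is set, scalable lifting must not be applied. The flag names are illustrative, not the codec's actual syntax elements.

```python
def check_flag_restriction(implicit_qtbt_flag: bool, scalable_lifting_flag: bool) -> None:
    # restriction: scalable lifting is not applied when Implicit QtBt is applied
    if implicit_qtbt_flag and scalable_lifting_flag:
        raise ValueError("scalable lifting must not be applied when Implicit QtBt is applied")
```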
-
Patent number: 12276504
Abstract: Systems and methods for navigating intersections autonomously or semi-autonomously can include, but are not limited to including, accessing data related to the geography and traffic management features of the intersection, executing autonomous actions to navigate the intersection, and coordinating with one or more processors and/or operators executing remote actions, if necessary. Traffic management features can be identified by using various types of images such as oblique images.
Type: Grant
Filed: March 11, 2024
Date of Patent: April 15, 2025
Assignee: DEKA Products Limited Partnership
Inventors: Praneeta Mallela, Aaditya Ravindran, Benjamin V. Hersh, Boris Bidault
-
Patent number: 12277665
Abstract: An electronic device is provided. The electronic device includes a camera, a display, and a processor configured to obtain a first image including one or more external objects by using the camera, display to output a three-dimensional (3D) object generated based on attributes related to a face among the one or more external objects using the display, receive a selection of at least one graphic attribute from a plurality of graphic attributes which can be applied to the 3D object, generate a 3D avatar for the face based on the at least one graphic attribute, and generate a second image including at least one object reflecting a predetermined facial expression or motion using the 3D avatar.
Type: Grant
Filed: August 21, 2023
Date of Patent: April 15, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Wooyong Lee, Yonggyoo Kim, Byunghyun Min, Dongil Son, Chanhee Yoon, Kihuk Lee, Cheolho Cheong
-
Patent number: 12272131
Abstract: In one aspect, an example method includes a processor (1) applying a feature map network to an image to create a feature map comprising a grid of vectors characterizing at least one feature in the image and (2) applying a probability map network to the feature map to create a probability map assigning a probability to the at least one feature in the image, where the assigned probability corresponds to a likelihood that the at least one feature is an overlay. The method further includes the processor determining that the probability exceeds a threshold, and responsive to the processor determining that the probability exceeds the threshold, performing a processing action associated with the at least one feature.
Type: Grant
Filed: December 19, 2023
Date of Patent: April 8, 2025
Assignee: The Nielsen Company (US), LLC
Inventors: Wilson Harron, Irene Zhu
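
A hedged sketch of the decision step above: a probability map assigns each detected feature a likelihood of being an overlay, and features whose probability exceeds a threshold trigger a processing action. The two networks are generic callables here, and the threshold value and placeholder action are assumptions.

```python
import numpy as np

def detect_overlays(image, feature_map_net, probability_map_net, threshold=0.5):
    feature_map = feature_map_net(image)               # grid of feature vectors
    prob_map = probability_map_net(feature_map)        # per-feature overlay probability
    overlay_cells = np.argwhere(prob_map > threshold)  # features likely to be overlays
    for cell in overlay_cells:
        handle_overlay(image, cell)                    # processing action for that feature
    return overlay_cells

def handle_overlay(image, cell):
    pass  # placeholder for the processing action (e.g., masking the overlay region)
```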
-
Patent number: 12272032
Abstract: A method includes obtaining an input image that contains blur. The method also includes providing the input image to a trained machine learning model, where the trained machine learning model includes (i) a shallow feature extractor configured to extract one or more feature maps from the input image and (ii) a deep feature extractor configured to extract deep features from the one or more feature maps. The method further includes using the trained machine learning model to generate a sharpened output image. The trained machine learning model is trained using ground truth training images and input training images, where the input training images include versions of the ground truth training images with blur created using demosaic and noise filtering operations.
Type: Grant
Filed: August 18, 2022
Date of Patent: April 8, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Devendra K. Jangid, John Seokjun Lee, Hamid R. Sheikh
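
An illustrative PyTorch sketch of the two-stage extractor described above: a shallow convolutional stage produces feature maps, a deeper stage extracts features from them, and the sharpened image is reconstructed. Layer counts, widths, and the residual connection are assumptions, not the patented design.

```python
import torch
import torch.nn as nn

class Sharpener(nn.Module):
    def __init__(self, channels=3, width=32, depth=4):
        super().__init__()
        self.shallow = nn.Conv2d(channels, width, kernel_size=3, padding=1)
        self.deep = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(depth)])
        self.reconstruct = nn.Conv2d(width, channels, kernel_size=3, padding=1)

    def forward(self, blurred):
        shallow_feats = self.shallow(blurred)           # shallow feature maps
        deep_feats = self.deep(shallow_feats)           # deep features from the feature maps
        return blurred + self.reconstruct(deep_feats)   # sharpened output image
```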
-
Patent number: 12272107
Abstract: Encoding a three-dimensional point cloud. A set of points are obtained within the three-dimensional point cloud, a point within the set of points having a co-ordinate in three dimensions. The points are converted into a two-dimensional representation. For a point within the set of points, information describing the co-ordinate is represented as a location within the two-dimensional representation and a value at the location. The two-dimensional representation is encoded using a tier-based hierarchical coding format to output encoded data. The tier-based hierarchical coding format encodes the two-dimensional representation as layers, the layers representing echelons of data used to progressively reconstruct the signal at different levels of quality.
Type: Grant
Filed: February 11, 2021
Date of Patent: April 8, 2025
Assignee: V-NOVA INTERNATIONAL LIMITED
Inventor: Guido Meardi
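
A toy numpy sketch of the 2D mapping step: each 3D point is represented by a location in a 2D grid (here its quantised x/y) and a value at that location (here its normalised z coordinate). This particular mapping and the grid resolution are illustrative assumptions; the tier-based hierarchical encoder that follows is not shown.

```python
import numpy as np

def points_to_2d(points, grid_size=256):
    """points: array of shape (N, 3) with x, y, z; returns a grid_size x grid_size image."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins + 1e-9
    norm = (points - mins) / spans                                    # normalise to [0, 1]
    cols = np.minimum((norm[:, 0] * grid_size).astype(int), grid_size - 1)
    rows = np.minimum((norm[:, 1] * grid_size).astype(int), grid_size - 1)
    image = np.zeros((grid_size, grid_size), dtype=np.float32)
    image[rows, cols] = norm[:, 2]          # the value at the location encodes the z co-ordinate
    return image
```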
-
Patent number: 12272101
Abstract: A seed camera disposed at a first location is manually calibrated. A second camera, disposed at a second location, detects a physical marker based on predefined characteristics of the physical marker. The physical marker is located within an overlapping field of view between the seed camera and the second camera. The second camera is calibrated based on a combination of the physical location of the physical marker, the first location of the seed camera, the second location of the second camera, a first image of the physical marker generated with the seed camera, and a second image of the physical marker generated with the second camera.
Type: Grant
Filed: June 5, 2023
Date of Patent: April 8, 2025
Assignee: Nice North America LLC
Inventor: Chandan Gope
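
A hedged OpenCV sketch of the calibration hand-off: the marker's known corner positions, expressed in the coordinate frame established by the manually calibrated seed camera, are matched with their pixel locations in the second camera's image to recover the second camera's pose. Known intrinsics and the use of solvePnP are assumptions for illustration, not the vendor's actual procedure.

```python
import cv2
import numpy as np

def calibrate_second_camera(marker_world_pts, marker_image_pts, camera_matrix, dist_coeffs):
    """marker_world_pts: (N, 3) marker corners in the seed camera's world frame.
    marker_image_pts: (N, 2) the same corners detected in the second camera's image."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_world_pts, dtype=np.float32),
        np.asarray(marker_image_pts, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)    # rotation matrix of the second camera
    return rotation, tvec                # extrinsic calibration of the second camera
```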
-
Patent number: 12266083
Abstract: A method of filtering a target pixel in an image forms, for a kernel of pixels comprising the target pixel and its neighbouring pixels, a data model to model pixel values within the kernel; calculates a weight for each pixel of the kernel comprising: (i) a geometric term dependent on a difference in position between that pixel and the target pixel; and (ii) a data term dependent on a difference between a pixel value of that pixel and its predicted pixel value according to the data model; and uses the calculated weights to form a filtered pixel value for the target pixel, e.g. by updating the data model with a weighted regression analysis technique using the calculated weights for the pixels of the kernel, and evaluating the updated data model at the target pixel position so as to form the filtered pixel value for the target pixel.
Type: Grant
Filed: August 14, 2023
Date of Patent: April 1, 2025
Assignee: Imagination Technologies Limited
Inventor: Ruan Lakemond
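
A compact numpy sketch of the weighting scheme for a single target pixel: a local plane (the "data model") is fitted over the kernel, each kernel pixel gets a geometric weight and a data weight based on how far its value lies from the model's prediction, and the weighted re-fitted model is evaluated at the target position. The Gaussian weight forms and sigma values are assumptions for illustration.

```python
import numpy as np

def filter_pixel(kernel_vals, kernel_offsets, sigma_g=1.5, sigma_d=0.1):
    """kernel_vals: (K,) pixel values; kernel_offsets: (K, 2) positions relative to target."""
    # fit a plane v ~ a*dx + b*dy + c over the kernel (the data model)
    A = np.column_stack([kernel_offsets, np.ones(len(kernel_vals))])
    coeffs, *_ = np.linalg.lstsq(A, kernel_vals, rcond=None)
    predicted = A @ coeffs
    geometric = np.exp(-np.sum(kernel_offsets**2, axis=1) / (2 * sigma_g**2))  # geometric term
    data = np.exp(-((kernel_vals - predicted)**2) / (2 * sigma_d**2))          # data term
    weights = geometric * data
    # weighted re-fit of the data model, then evaluate it at the target position (0, 0)
    W = np.sqrt(weights)[:, None]
    coeffs, *_ = np.linalg.lstsq(A * W, kernel_vals * W[:, 0], rcond=None)
    return coeffs[2]          # model value at offset (0, 0) is the filtered pixel value
```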
-
Patent number: 12266120
Abstract: An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
Type: Grant
Filed: February 14, 2023
Date of Patent: April 1, 2025
Assignee: MIM SOFTWARE INC.
Inventor: Jonathan William Piper
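
A simplified sketch of the synchronization idea: once the anatomy's pose in the prior image data is related to the imaging device's pose at the agreed position, that relationship is kept up to date by composing it with the tracker's live pose. Representing poses as 4x4 homogeneous matrices is an assumption for illustration.

```python
import numpy as np

def fused_alignment(prior_pose, device_pose_at_capture, live_device_pose):
    """All arguments are 4x4 homogeneous transforms in the spatial tracker's frame."""
    # relationship established when the device reached the desired position/orientation
    relationship = np.linalg.inv(device_pose_at_capture) @ prior_pose
    # relationship maintained as the imaging device moves during the procedure
    return live_device_pose @ relationship
```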
-
Patent number: 12260473
Abstract: Methods and systems for generation and use of an accelerated tomographic reconstruction preconditioner (ATRP) for accelerated iterative tomographic reconstruction are disclosed. An example method for generating an ATRP for accelerated iterative tomographic reconstruction includes accessing data for a tomography investigation of a sample and determining a trajectory of the tomography investigation of the sample. At least one toy model sample depicting a feature characteristic of the sample is accessed and at least one candidate preconditioner is selected. A first performance of each of the at least one candidate preconditioner on the one or more toy samples is determined, and the candidate preconditioners are then updated to create updated candidate preconditioners. A second performance of each of the updated candidate preconditioners on the one or more toy samples is determined. An ATRP is then generated based on at least the first performance and the second performance.
Type: Grant
Filed: September 30, 2022
Date of Patent: March 25, 2025
Assignee: FEI Company
Inventors: Glenn Myers, Andrew Kingston, Adrian Sheppard, Shane Latham, Trond Varslot
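
A schematic sketch of the selection loop described above: candidate preconditioners are scored on small "toy" model samples, updated, scored again, and the two rounds of scores drive the final choice. The scoring, update, and selection rules are placeholders, not the patented method.

```python
def build_atrp(candidates, toy_models, evaluate, update):
    first_scores = {c: evaluate(c, toy_models) for c in candidates}            # first performance
    updated = {c: update(c, first_scores[c]) for c in candidates}              # updated candidates
    second_scores = {c: evaluate(updated[c], toy_models) for c in candidates}  # second performance
    # generate the ATRP based on at least the first and second performance
    best = max(candidates, key=lambda c: max(first_scores[c], second_scores[c]))
    return updated[best]
```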
-
Patent number: 12257064
Abstract: Various techniques pertain to a user interface that includes an instructional display portion comprising one or more textual or graphical instructions pertaining to preparing for a body scan for body care of a client and a scan initiation display portion that includes a scan initiation widget which, when interacted with by the client, invokes execution of a scan process and an analysis process for a respective body area of the client. The user interface further includes a scan result display portion including interactable textual or graphical information pertaining to a result of the body scan for the body care and an adjustment display portion having a plurality of adjustment widgets that transform at least a color index or value into a transformed color index or value in an L*A*B* color space or an Lch color space.
Type: Grant
Filed: September 9, 2022
Date of Patent: March 25, 2025
Assignee: Sephora USA, Inc.
Inventors: Prasad Tendulkar, Nelly Mensah, Anna Teresa Nicholas, Jillian Bertone, Lisa Zhu, Jennifer Stokes, Witono Liujanto, Aaron Oelckers, Marc McCotter, Harsha Haddar, Fiona Kavanagh, Amit Adiraju, Naroju Janardhan
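
A small sketch related to the adjustment widgets above: a colour value expressed in CIE L*a*b* can be re-expressed in LCh (lightness, chroma, hue) with the standard conversion, which is one plausible form of the "transformed color index or value". Whether the patented widgets use exactly this conversion is an assumption.

```python
import math

def lab_to_lch(L, a, b):
    chroma = math.hypot(a, b)                       # C = sqrt(a^2 + b^2)
    hue = math.degrees(math.atan2(b, a)) % 360.0    # h in degrees, wrapped to [0, 360)
    return L, chroma, hue
```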
-
Patent number: 12260570
Abstract: A model dataset is generated based on first image data. The model dataset and second image data map at least a common part of an examination region at a second detail level. The model dataset and the second image data are pre-aligned at a first detail level below the second detail level based on first features that are mapped at the first detail level in the model dataset and the second image data and/or an acquisition geometry of the second image data. The model dataset and the second image data are registered at the second detail level based on second features that are mapped at the second detail level in the model dataset and the second image data. The second class of features is mappable at the second detail level or above. The registered second image data and/or the registered model dataset is provided.
Type: Grant
Filed: September 22, 2022
Date of Patent: March 25, 2025
Assignee: Siemens Healthineers AG
Inventors: Alois Regensburger, Amilcar Alzaga, Birgi Tamersoy, Thomas Pheiffer, Ankur Kapoor
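
An illustrative sketch of the coarse-to-fine strategy described above: the model dataset and the second image data are first pre-aligned using coarse (first detail level) features, and that transform initialises the registration at the fine (second detail level) features. The feature extractors and the registration solver are generic placeholders, not the patented implementation.

```python
def register(model_dataset, image_data, coarse_features, fine_features, solve):
    # pre-alignment at the first (coarse) detail level, or from the acquisition geometry
    pre_transform = solve(coarse_features(model_dataset), coarse_features(image_data))
    # registration at the second (fine) detail level, initialised with the pre-alignment
    transform = solve(fine_features(model_dataset), fine_features(image_data),
                      init=pre_transform)
    return transform
```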
-
Patent number: 12249134
Abstract: The second multi-dimensional feature vectors 92a of sample image data 34a having instruction signals, which are converted by a feature converter 27, are read in (Step S10); two-dimensional graph data for model 36a is generated based on the read second multi-dimensional feature vectors 92a and stored (Step S12); and two-dimensional model graphs Og and Ng are generated based on the generated two-dimensional graph data for model 36a and displayed on the window 62 (Step S14). The second multi-dimensional feature vectors 92a are indicators appropriate for visualization of the trained state (individuality) of a trained model 35. Thus, it is possible to visually check and evaluate whether the trained model 35 is in an appropriately trained state (individuality) or not.
Type: Grant
Filed: April 1, 2021
Date of Patent: March 11, 2025
Assignee: ROXY CORP.
Inventors: Takayuki Ishiguro, Hitoshi Hoshino
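
A hedged numpy sketch of the visualization idea: multi-dimensional feature vectors of the sample images are projected to two dimensions so the trained model's state can be inspected as a 2D graph. A simple PCA via SVD is an assumed stand-in; the patent does not specify how the two-dimensional graph data is derived.

```python
import numpy as np

def to_2d_graph_data(feature_vectors):
    """feature_vectors: array of shape (N, D) of multi-dimensional feature vectors."""
    centered = feature_vectors - feature_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T     # (N, 2) points for the two-dimensional model graph
```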
-
Patent number: 12249112
Abstract: The present disclosure includes systems and methods for handwriting recognition. Handwriting data is received. Geometric data of text in the handwriting data is determined. Sub-characters of the text are determined. Sub-characters of the text are matched to a model. The most probable characters of the text are determined based on the matching.
Type: Grant
Filed: January 24, 2022
Date of Patent: March 11, 2025
Assignee: Read-Ink Corporation
Inventor: Thomas O Binford
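
A schematic sketch of the matching step above: each candidate character model scores the detected sub-characters, and the most probable characters are the highest scorers. The scoring interface is a placeholder; the patent's model is not reproduced here.

```python
def most_probable_characters(sub_characters, character_models, top_k=3):
    """character_models: mapping of character name -> model with a .score(sub_characters) method."""
    scores = {name: model.score(sub_characters) for name, model in character_models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```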
-
Patent number: 12243274
Abstract: System, device, and method for improved encoding and enhanced compression of images, videos, and media content. A method includes: (a) receiving a source image, and analyzing its content on a pixel-by-pixel basis, and classifying each pixel as either (I) a pixel associated with Photographic content, or (II) a pixel associated with Non-Photographic content; (c) generating a pixel-clusters map that indicates (i) clusters of pixels that were classified as Photographic content, and (ii) clusters of pixels that were classified as Non-Photographic content; (d) generating a composed image, by: (d1) applying a first encoding technique, particularly lossy encoding, to encode pixel-clusters that were classified as Photographic content; (d2) applying a second, different, encoding technique, particularly lossless encoding, to encode pixel-clusters that were classified as Non-Photographic content.
Type: Grant
Filed: December 4, 2022
Date of Patent: March 4, 2025
Assignee: CLOUDINARY LTD.
Inventors: Amnon Cohen-Tidhar, Jon Philippe D. Sneyers, Tal Lev-Ami
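
A toy sketch of the pixel-classification idea: blocks with many distinct intensities are treated as photographic content (to be lossy-encoded), while flat, few-colour regions are treated as non-photographic content such as text or graphics (to be lossless-encoded). The block-based heuristic and thresholds are purely illustrative; the patent does not disclose this exact rule.

```python
import numpy as np

def classify_pixels(gray_image, window=8, color_threshold=16):
    """gray_image: 2D uint8 array; returns a boolean map, True = photographic content."""
    h, w = gray_image.shape
    photo_map = np.zeros((h, w), dtype=bool)
    for r in range(0, h, window):
        for c in range(0, w, window):
            block = gray_image[r:r + window, c:c + window]
            # many distinct intensities in a block suggests photographic content
            photo_map[r:r + window, c:c + window] = len(np.unique(block)) > color_threshold
    return photo_map

# Clusters of True pixels would then go to the lossy encoder and clusters of False
# pixels to the lossless encoder before the two encodings are composed into one image.
```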
-
Patent number: 12236554
Abstract: Methods are provided that exhibit increased quality and compression factor for compressing images. The methods can include generating a set of coefficients indicative of image contents of a block of image pixels at a plurality of spatial frequencies. The set of coefficients is scaled to generate a first set of scaled coefficients. An assessment is performed for a plurality of quantization levels, which includes quantizing a subset of the first set of scaled coefficients according to respective quantization levels to generate a quantized subset of the first set of scaled coefficients and determining a post-quantization energy of the quantized subset of the first set of scaled coefficients. Based on the assessment of the plurality of quantization levels, a scaled and quantized version of the set of coefficients is generated. An encoded version of the image based on the scaled and quantized version of the set of coefficients is generated.
Type: Grant
Filed: October 14, 2019
Date of Patent: February 25, 2025
Assignee: Google LLC
Inventors: Jyrki Alakuijala, Luca Versari
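
A numpy sketch of the assessment step described above: each candidate quantization level is applied to a subset of the scaled coefficients and judged by the energy that survives quantization, and the retained level produces the scaled-and-quantized coefficients. The choice of the AC subset and the 95% energy-retention acceptance rule are assumptions for illustration.

```python
import numpy as np

def choose_quantization(scaled_coeffs, candidate_levels, keep_ratio=0.95):
    """scaled_coeffs: 1D array of scaled block coefficients, DC term first (assumed layout)."""
    subset = scaled_coeffs[1:]                                   # assess the AC coefficients
    original_energy = np.sum(subset.astype(np.float64) ** 2)
    best = candidate_levels[0]
    for q in sorted(candidate_levels, reverse=True):             # prefer coarser quantization
        quantized = np.round(subset / q) * q
        post_energy = np.sum(quantized ** 2)                     # post-quantization energy
        if original_energy == 0 or post_energy / original_energy >= keep_ratio:
            best = q
            break
    return np.round(scaled_coeffs / best).astype(np.int32)       # scaled & quantized coefficients
```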