Patents Examined by Matthew C. Bella
-
Patent number: 12086991
Abstract: Provided is a system and a method for image synthesis of dental anatomy transformation. In an aspect, there is provided a method including: receiving an input image, the input image including a mouth with teeth exposed; building an input segmentation map using the input image as input to an artificial neural network; transforming the input segmentation map into an input latent vector using a trained encoder; transforming the input latent vector to an output latent vector using a trained transformer machine learning model; transforming the output latent vector to an output segmentation map using a trained decoder; generating a simulated image using the output segmentation map as input to a generative adversarial network; and outputting the simulated image, the output segmentation map, or both.
Type: Grant
Filed: December 1, 2021
Date of Patent: September 10, 2024
Assignee: Tasty Tech Ltd.
Inventors: Gil Hagi, Balazs Keszthelyi
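The claimed pipeline is a chain of learned stages (segmentation map to latent vector, transformed latent vector, output map, then a GAN image). Below is a minimal PyTorch sketch of that flow; every module, shape, and name is a hypothetical placeholder, not the patented implementation.

```python
# Hypothetical sketch of the encode -> transform -> decode -> GAN flow
# described in the abstract; modules and shapes are illustrative only.
import torch
import torch.nn as nn

LATENT = 128
MAP_CH, H, W = 16, 64, 64  # segmentation map: 16 tooth/gum classes, 64x64

encoder = nn.Sequential(nn.Flatten(), nn.Linear(MAP_CH * H * W, LATENT))
transformer = nn.Sequential(nn.Linear(LATENT, LATENT), nn.ReLU(),
                            nn.Linear(LATENT, LATENT))  # stand-in for the trained transformer
decoder = nn.Sequential(nn.Linear(LATENT, MAP_CH * H * W),
                        nn.Unflatten(1, (MAP_CH, H, W)))
generator = nn.Conv2d(MAP_CH, 3, kernel_size=3, padding=1)  # GAN generator stand-in

def simulate(input_seg_map: torch.Tensor):
    """Map an input segmentation map to a simulated image and output map."""
    z_in = encoder(input_seg_map)      # input latent vector
    z_out = transformer(z_in)          # output latent vector
    out_map = decoder(z_out)           # output segmentation map
    simulated = generator(out_map)     # simulated image from the generator
    return simulated, out_map

img, seg = simulate(torch.randn(1, MAP_CH, H, W))
print(img.shape, seg.shape)  # torch.Size([1, 3, 64, 64]) torch.Size([1, 16, 64, 64])
```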
-
Patent number: 12080092
Abstract: Embodiments of a system and method for sorting and delivering articles in a processing facility based on image data are described. Image processing results such as rotation notation information may be included in or with an image to facilitate downstream processing such as when the routing information cannot be extracted from the image using an unattended system and the image is passed to an attended image processing system. The rotation notation information may be used to dynamically adjust the image before presenting the image via the attended image processing system.
Type: Grant
Filed: May 24, 2022
Date of Patent: September 3, 2024
Assignee: United States Postal Service
Inventor: Ryan J. Simpson
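As a rough illustration of how rotation notation carried with an image might be applied before an attended review step, here is a small sketch; the notation format (a multiple of 90 degrees) and the function name are assumptions.

```python
# Hypothetical use of per-image rotation notation to normalize orientation
# before an attended review step; the notation format is an assumption.
import numpy as np

def adjust_for_review(image: np.ndarray, rotation_notation_deg: int) -> np.ndarray:
    """Rotate the mail-piece image by the noted multiple of 90 degrees."""
    k = (rotation_notation_deg // 90) % 4   # counterclockwise quarter turns
    return np.rot90(image, k)

scan = np.zeros((200, 400, 3), dtype=np.uint8)            # landscape scan
upright = adjust_for_review(scan, rotation_notation_deg=90)
print(upright.shape)  # (400, 200, 3)
```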
-
Patent number: 12079309
Abstract: A data processing apparatus is provided that includes forecast circuitry for generating a forecast of an aspect of a system for a next future time and for one or more subsequent future times following the next future time. Measurement circuitry generates, at the next future time, a new measurement of the aspect of the system. Aggregation circuitry produces an aggregation of the forecast of the aspect of the system for the next future time and of the new measurement of the aspect of the system. The forecast circuitry revises the forecast of the aspect of the system for the one or more subsequent future times using the aggregation.
Type: Grant
Filed: December 22, 2021
Date of Patent: September 3, 2024
Assignee: Arm Limited
Inventor: Michael Bartling
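One simple way to picture the aggregate-then-revise loop is to blend the newest measurement with the forecast for the next time and propagate the resulting correction to later forecasts. The sketch below illustrates only that idea; the weighting scheme is an assumption, not Arm's circuitry.

```python
# Illustrative aggregation of a forecast with a new measurement, followed by
# revising the remaining forecasts; the weighting scheme is an assumption.
def revise_forecasts(forecasts: list, measurement: float,
                     weight: float = 0.5) -> list:
    """Blend the forecast for the next time with the new measurement, then
    shift the subsequent forecasts by the same correction."""
    aggregated = weight * measurement + (1.0 - weight) * forecasts[0]
    correction = aggregated - forecasts[0]
    return [aggregated] + [f + correction for f in forecasts[1:]]

print(revise_forecasts([10.0, 11.0, 12.0], measurement=14.0))
# [12.0, 13.0, 14.0]
```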
-
Patent number: 12073588
Abstract: A camera is positioned to obtain an image of an object. The image is input to a neural network that outputs a three-dimensional (3D) bounding box for the object relative to a pixel coordinate system and object parameters. Then a center of a bottom face of the 3D bounding box is determined in pixel coordinates. The bottom face of the 3D bounding box is located in a ground plane in the image. Based on calibration parameters for the camera that transform pixel coordinates into real-world coordinates, a) a distance from the center of the bottom face of the 3D bounding box to the camera relative to a real-world coordinate system and b) an angle between a line extending from the camera to the center of the bottom face of the 3D bounding box and an optical axis of the camera are determined. The calibration parameters include a camera height relative to the ground plane, a camera focal distance, and a camera tilt relative to the ground plane.
Type: Grant
Filed: September 24, 2021
Date of Patent: August 27, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Mostafa Parchami, Enrique Corona, Kunjan Singh, Gaurav Pandey
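The distance and angle computation can be pictured with basic pinhole-camera geometry: camera height, focal length, and tilt turn a pixel on the ground plane into a ray whose intersection with the ground gives range and bearing. The sketch below is a simplified illustration (zero roll, known principal point), not Ford's calibration procedure.

```python
# Simplified pinhole-camera geometry for the distance/angle computation the
# abstract describes; assumes zero roll and a known principal point.
import math

def ground_distance_and_angle(u, v, fx, fy, cx, cy, cam_height, cam_tilt_rad):
    """Return (distance along the ground, angle from the optical axis) for the
    pixel (u, v) at the center of the bounding box's bottom face."""
    # Angle of the viewing ray below the horizon: tilt plus the pixel offset.
    ray_below_horizon = cam_tilt_rad + math.atan2(v - cy, fy)
    distance = cam_height / math.tan(ray_below_horizon)
    # Horizontal angle between the ray and the camera's optical axis.
    angle = math.atan2(u - cx, fx)
    return distance, angle

d, a = ground_distance_and_angle(u=800, v=500, fx=1000, fy=1000, cx=640, cy=360,
                                 cam_height=1.5, cam_tilt_rad=math.radians(5))
print(round(d, 2), round(math.degrees(a), 2))  # roughly 6.51 m, 9.09 degrees
```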
-
Patent number: 12067086
Abstract: Apparatuses and methods of operating such apparatuses are disclosed. An apparatus comprises feature dataset input circuitry to receive a feature dataset comprising multiple feature data values indicative of a set of features, wherein each feature data value is represented by a set of bits. Class retrieval circuitry is responsive to reception of the feature dataset from the feature dataset input circuitry to retrieve from class indications storage a class indication for each feature data value received in the feature dataset, wherein class indications are predetermined and stored in the class indications storage for each permutation of the set of bits for each feature. Classification output circuitry is responsive to reception of class indications from the class retrieval circuitry to determine a classification in dependence on the class indications. A predicted class may thus be accurately generated from a simple apparatus.
Type: Grant
Filed: February 27, 2020
Date of Patent: August 20, 2024
Assignee: Arm Limited
Inventors: Emre Özer, Gavin Brown, Charles Edward Michael Reynolds, Jedrzej Kufel, John Philip Biggs
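A toy version of the lookup idea: because each feature value is a small set of bits, a class indication can be precomputed for every bit permutation, and classification reduces to table retrieval plus a combination rule. The majority vote and table contents below are assumptions made for illustration.

```python
# Toy sketch of classification by table lookup: every bit permutation of each
# feature value has a precomputed class indication; combining by majority vote
# is an assumption made for illustration.
from collections import Counter

BITS = 2  # each feature value is a 2-bit quantity -> 4 permutations per feature

# class_indications[feature_index][feature_value] -> class indication
class_indications = [
    [0, 0, 1, 1],   # feature 0: values 0,1 -> class 0; values 2,3 -> class 1
    [0, 1, 1, 1],   # feature 1
    [0, 0, 0, 1],   # feature 2
]

def classify(feature_values):
    retrieved = [class_indications[i][v] for i, v in enumerate(feature_values)]
    return Counter(retrieved).most_common(1)[0][0]

print(classify([3, 1, 0]))  # indications (1, 1, 0) -> class 1
```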
-
Patent number: 12067081
Abstract: A method and an apparatus for training a transferable vision transformer (TVT) for unsupervised domain adaptation (UDA) in heterogeneous devices are provided. The method includes that a heterogeneous device including one or more graphics processing units (GPUs) loads multiple patches into the TVT, which includes a transferability adaption module (TAM). Furthermore, a patch-level domain discriminator in the TAM assigns weights to the multiple patches and determines one or more transferable patches based on the weights. Moreover, the heterogeneous device generates a transferable attention output for an attention module in the TAM based on the one or more transferable patches.
Type: Grant
Filed: September 24, 2021
Date of Patent: August 20, 2024
Assignee: KWAI INC.
Inventors: Ning Xu, Jingjing Liu, Jinyu Yang
-
Patent number: 12067804
Abstract: Methods and systems are disclosed for performing operations comprising: receiving, by one or more processors, an image that includes a depiction of a face of a user; computing a real-world scale of the face of the user based on a selected subset of landmarks of the face of the user; obtaining an augmented reality graphical element comprising augmented reality eyewear; scaling the augmented reality graphical element based on the computed real-world scale of the face; and positioning the scaled augmented reality graphical element within the image on the face of the user.
Type: Grant
Filed: March 22, 2021
Date of Patent: August 20, 2024
Assignee: Snap Inc.
Inventors: Avihay Assouline, Itamar Berger, Jean Luo, Matan Zohar
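A minimal sketch of the scaling step, assuming the landmark subset is the two eye centers and a typical interpupillary distance of about 63 mm; the asset width and all names are illustrative assumptions, not Snap's implementation.

```python
# Minimal sketch of estimating real-world face scale from two eye landmarks and
# scaling an AR eyewear asset accordingly; the assumed average interpupillary
# distance (63 mm) and the asset width are illustrative assumptions.
import math

ASSUMED_IPD_MM = 63.0           # typical adult interpupillary distance
EYEWEAR_MODEL_WIDTH_MM = 140.0  # nominal real-world width of the eyewear asset

def scale_factor(left_eye_px, right_eye_px, eyewear_width_px):
    """Derive pixels-per-millimetre from the eye landmarks, then compute how
    much to resize the eyewear asset so it matches the face in the image."""
    ipd_px = math.dist(left_eye_px, right_eye_px)
    px_per_mm = ipd_px / ASSUMED_IPD_MM
    return (EYEWEAR_MODEL_WIDTH_MM * px_per_mm) / eyewear_width_px

print(round(scale_factor((420, 300), (520, 302), eyewear_width_px=200), 3))
```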
-
Patent number: 12064187
Abstract: A computer-implemented method and a system for computer guided surgery, which include a transposition of an action, planned in a virtual environment with respect to a virtual referential RP, to a physical action performed with a surgical tool in a real operating theatre environment for orthopedic surgery of a patient.
Type: Grant
Filed: April 24, 2020
Date of Patent: August 20, 2024
Assignee: GANYMED ROBOTICS
Inventors: Blaise Bleunven, Cyril Moulin, Sophie Cahen, Nicolas Loy Rodas, Michel Bonnin, Tarik Ait Si Selmi, Marion Decrouez
-
Patent number: 12062182
Abstract: A method and computing system for facilitating edge detection are presented. The computing system may include at least one processing circuit configured to receive image information representing a group of objects in a camera field of view, and to identify, from the image information, a plurality of candidate edges associated with the group of objects. The at least one processing circuit may further determine, when the plurality of candidate edges include a first candidate edge which is formed based on a border between a first image region and a second image region, whether the image information satisfies a defined darkness condition at the first candidate edge. The at least one processing circuit may further select a subset of the plurality of candidate edges to form a selected subset of candidate edges for representing the physical edges of the group of objects.
Type: Grant
Filed: May 27, 2021
Date of Patent: August 13, 2024
Assignee: MUJIN, INC.
Inventors: Jinze Yu, Jose Jeronimo Moreira Rodrigues
-
Patent number: 12056211
Abstract: A method for determining a target image to be labeled includes: obtaining an original image and an autoencoder (AE) set, the original image being an image that has not been labeled, the AE set including N AEs; obtaining an encoded image set corresponding to the original image by using the AE set, the encoded image set including N encoded images, each encoded image corresponding to one of the AEs; obtaining a segmentation result set corresponding to the encoded image set and the original image by using an image segmentation network, the image segmentation network including M image segmentation sub-networks, and the segmentation result set including [(N+1)*M] segmentation results; determining labeling uncertainty corresponding to the original image according to the segmentation result set; and determining whether the original image is a target image according to the labeling uncertainty.
Type: Grant
Filed: October 14, 2021
Date of Patent: August 6, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yifan Hu, Yuexiang Li, Yefeng Zheng
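Labeling uncertainty can be pictured as disagreement among the [(N+1)*M] segmentation results. The sketch below scores it as mean per-pixel variance and gates target-image selection with a threshold; both choices are assumptions made for illustration.

```python
# Sketch of scoring labeling uncertainty as disagreement across the
# [(N+1)*M] segmentation results; per-pixel variance is an assumption.
import numpy as np

def labeling_uncertainty(segmentations: np.ndarray) -> float:
    """segmentations: array of shape ((N+1)*M, H, W) holding binary masks."""
    per_pixel_var = segmentations.astype(float).var(axis=0)
    return float(per_pixel_var.mean())

def is_target_image(segmentations: np.ndarray, threshold: float = 0.05) -> bool:
    """Select the original image for labeling if the results disagree enough."""
    return labeling_uncertainty(segmentations) > threshold

rng = np.random.default_rng(0)
results = rng.integers(0, 2, size=(8, 64, 64))   # (N+1)*M = 8 mock results
print(round(labeling_uncertainty(results), 3), is_target_image(results))
```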
-
Patent number: 12056900
Abstract: Techniques are described for using computing devices to perform automated operations to generate mapping information via analysis of visual data of photos of a defined area, and for using the generated mapping information in further automated manners, including to display the generated mapping information via various types of visualizations in corresponding graphical user interfaces. In some situations, the defined area includes an interior of a multi-room building, and the generated information includes at least a partial floor plan and/or other modeled representation of the building. In addition, the generating may be performed without having measured depth information about distances from the photos' acquisition locations to walls or other objects in the surrounding building. The generated floor plan and/or other mapping-related information may be further used in various manners, including for controlling navigation of devices (e.g., autonomous vehicles), etc.
Type: Grant
Filed: August 27, 2021
Date of Patent: August 6, 2024
Assignee: MFTB Holdco, Inc.
Inventors: Manjunath Narayana, Ivaylo Boyadzhiev
-
Patent number: 12050663
Abstract: This disclosure describes an automated process for training an ML model used by a computer vision system in a point of sale (POS) system to recognize a new item. Instead of relying on a manual process performed by a data scientist, the automated process can use images of a new (i.e., unknown) item captured at one or more POS systems to then retrain the ML model to recognize the new item. That is, the images of the item are used to retrain the ML model and to test the accuracy of the updated ML model. If the updated ML model can confidently identify the new item, the updated ML model is then used by the computer vision system to identify items at the POS system.
Type: Grant
Filed: September 30, 2021
Date of Patent: July 30, 2024
Assignee: Toshiba Global Commerce Solutions Holdings Corporation
Inventors: Judith L. Atallah, J. Wacho Slaughter, Evgeny Shevtsov, Srija Ganguly
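A schematic of the retrain-and-gate loop described above, with stand-in callables; the accuracy threshold and all function names are hypothetical.

```python
# Sketch of the automated retrain-and-gate loop the abstract describes;
# train_model, evaluate, and the 0.95 confidence gate are hypothetical.
from typing import Callable, Sequence

def maybe_deploy_updated_model(current_model, new_item_images: Sequence,
                               train_model: Callable, evaluate: Callable,
                               min_accuracy: float = 0.95):
    """Retrain on POS-captured images of the unknown item; deploy the updated
    model only if it identifies the new item confidently enough."""
    candidate = train_model(current_model, new_item_images)
    accuracy = evaluate(candidate, new_item_images)
    return candidate if accuracy >= min_accuracy else current_model

# Usage with stand-in callables:
updated = maybe_deploy_updated_model(
    current_model="model-v1",
    new_item_images=["img1.png", "img2.png"],
    train_model=lambda m, imgs: "model-v2",
    evaluate=lambda m, imgs: 0.97,
)
print(updated)  # model-v2
```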
-
Patent number: 12033370
Abstract: According to an embodiment, a learning device includes one or more processors. The processors calculate a latent vector of each of a plurality of first target data, by using a parameter of a learning model configured to output a latent vector indicating a feature of a target data. The processors calculate, for each first target data, first probabilities that the first target data belongs to virtual classes, on an assumption that the plurality of first target data belong to virtual classes different from each other. The processors update the parameter such that both a first loss based on the first probabilities and a second loss become lower, the second loss being lower as the relation between each element class (to which the elements included in each of the plurality of first target data belong) and other element classes is lower.
Type: Grant
Filed: August 24, 2020
Date of Patent: July 9, 2024
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Yaling Tao, Kentaro Takagi, Kouta Nakata
-
Patent number: 12026921
Abstract: In one embodiment, a first device may receive, from a second device, a reference landmark map identifying locations of facial features of a user of the second device depicted in a reference image and a feature map, generated based on the reference image, representing an identity of the user. The first device may receive, from the second device, a current compressed landmark map based on a current image of the user and decompress the current compressed landmark map to generate a current landmark map. The first device may update the feature map based on a motion field generated using the reference landmark map and the current landmark map. The first device may generate scaling factors based on a normalization facial mask of pre-determined facial features of the user. The first device may generate an output image of the user by decoding the updated feature map using the scaling factors.
Type: Grant
Filed: April 6, 2021
Date of Patent: July 2, 2024
Assignee: Meta Platforms, Inc.
Inventors: Maxime Mohamad Oquab, Pierre Stock, Oran Gafni, Daniel Raynald David Haziza, Tao Xu, Peizhao Zhang, Onur Çelebi, Patrick Labatut, Thibault Michel Max Peyronel, Camille Couprie
-
Patent number: 12026195
Abstract: A plurality of on-road vehicles moving along streets of a city, while utilizing onboard cameras for autonomous driving functions, are bound to capture, over time, random images of various city objects such as structures and street signs. Images of a certain city object may be captured by many of the vehicles at various times while passing by that object, thereby resulting in a corpus of imagery data containing various random images and other representations of that certain object. Per each of at least some of the city objects appearing more than once in the corpus of imagery data, appearances are detected and linked, and then analyzed to uncover behaviors associated with that city object, thereby eventually, over time, revealing various city dynamics associated with many city objects.
Type: Grant
Filed: August 31, 2021
Date of Patent: July 2, 2024
Inventors: Gal Zuckerman, Moshe Salhov
-
Patent number: 12014551
Abstract: In the method, a predetermined first number of sensed positions (Pi) of the object (20) are provided, the positions being provided with respect to a vehicle coordinate system (K) of the vehicle (10). Depending on a predetermined second number of the provided positions, a second coordinate system (K′) is determined. The provided positions are transformed into the second coordinate system (K′) and a function is determined which approximates a course of the provided positions in the second coordinate system (K′). For a predetermined third number of sampling points of the function, at least one analytical property of the function is determined at the respective sampling point. The coordinates and the at least one analytical property of the respective sampling point are transformed into the vehicle coordinate system (K).
Type: Grant
Filed: October 18, 2019
Date of Patent: June 18, 2024
Assignee: Bayerische Motoren Werke Aktiengesellschaft
Inventors: Dominik Bauch, Josef Mehringer, Daniel Meissner, Marco Baumgartl, Michael Himmelsbach, Luca Trentinaglia
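An illustrative rendering of the claimed steps: derive a second frame K′ from the sensed positions (here simply aligned with the chord from the first to the last point), fit an approximating function in that frame, evaluate an analytical property (the slope) at sampling points, and map the results back to the vehicle frame K. The frame choice, polynomial degree, and sampling are assumptions.

```python
# Illustrative version of the claimed steps; the choice of second frame,
# polynomial degree, and sampling points are assumptions for the sketch.
import numpy as np

positions = np.array([[0.0, 0.0], [1.0, 0.3], [2.0, 0.4], [3.0, 0.3]])  # in frame K

# Second frame K': origin at the first point, x-axis along the chord.
origin = positions[0]
direction = positions[-1] - origin
angle = np.arctan2(direction[1], direction[0])
rot = np.array([[np.cos(angle), np.sin(angle)],
                [-np.sin(angle), np.cos(angle)]])
pts_kp = (positions - origin) @ rot.T            # positions expressed in K'

coeffs = np.polyfit(pts_kp[:, 0], pts_kp[:, 1], deg=2)  # approximating function
slope = np.polyder(coeffs)                               # analytical property

for x in np.linspace(pts_kp[0, 0], pts_kp[-1, 0], num=3):  # sampling points
    y = np.polyval(coeffs, x)
    point_k = rot.T @ np.array([x, y]) + origin             # back to vehicle frame K
    print(point_k.round(3), round(float(np.polyval(slope, x)), 3))
```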
-
Patent number: 11991940
Abstract: A method for detecting real lateral locations of target plants includes: recording an image of a ground area at a camera; detecting a target plant in the image; accessing a lateral pixel location of the target plant in the image; for each tool module in a set of tool modules arranged behind the camera and in contact with a plant bed: recording an extension distance of the tool module; and recording a lateral position of the tool module relative to the camera; estimating a depth profile of the plant bed proximal the target plant based on the extension distance and the lateral position of each tool module; estimating a lateral location of the target plant based on the lateral pixel location of the target plant and the depth profile of the plant bed surface proximal the target plant; and driving a tool module to a lateral position aligned with the lateral location of the target plant.
Type: Grant
Filed: October 19, 2020
Date of Patent: May 28, 2024
Assignee: FarmWise Labs, Inc.
Inventors: Arthur Flajolet, Eric Stahl-David, Sébastien Boyer, Thomas Palomares
-
Patent number: 11995150
Abstract: An information processing method implemented by a computer includes: obtaining a piece of first data, and a piece of second data not included in a training dataset for training an inferencer; calculating, using a piece of first relevant data obtained by inputting the first data to the inferencer trained by machine learning using the training dataset, a first contribution representing contributions of portions constituting the first data to a piece of first output data output by inputting the first data to the inferencer; calculating, using a piece of second relevant data obtained by inputting the second data to the inferencer, a second contribution representing contributions of portions constituting the second data to a piece of second output data output by inputting the second data to the inferencer; and determining whether to add the second data to the training dataset, according to the similarity between the first and second contributions.
Type: Grant
Filed: April 19, 2021
Date of Patent: May 28, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Sotaro Tsukizawa
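The add-or-skip decision can be illustrated with contribution vectors (for example, per-portion saliency scores) compared by cosine similarity, adding the second datum when its contribution pattern is sufficiently dissimilar; the similarity measure, the threshold, and the direction of the rule are all assumptions for this sketch.

```python
# Sketch of deciding whether to add new data to the training set based on the
# similarity of per-portion contributions; cosine similarity, the 0.8
# threshold, and the "add when dissimilar" rule are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_add_to_training_set(first_contribution: np.ndarray,
                               second_contribution: np.ndarray,
                               threshold: float = 0.8) -> bool:
    """Add the second (unseen) datum when its contribution pattern differs
    enough from that of data already represented in the training set."""
    return cosine_similarity(first_contribution, second_contribution) < threshold

first = np.array([0.7, 0.2, 0.1])    # contributions of portions of the first datum
second = np.array([0.1, 0.3, 0.6])   # contributions of portions of the second datum
print(should_add_to_training_set(first, second))  # True -> worth adding
```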
-
Patent number: 11994377
Abstract: Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby.
Type: Grant
Filed: September 2, 2020
Date of Patent: May 28, 2024
Assignee: Ultrahaptics IP Two Limited
Inventor: David S. Holz
-
Patent number: 11989956
Abstract: Systems and methods for object detection generate a feature pyramid corresponding to image data and rescale the feature pyramid to a scale corresponding to a median level of the feature pyramid, wherein the rescaled feature pyramid is a four-dimensional (4D) tensor. The 4D tensor is reshaped into a three-dimensional (3D) tensor having individual perspectives including scale features, spatial features, and task features corresponding to different dimensions of the 3D tensor. The 3D tensor is used with a plurality of attention layers to update a plurality of feature maps associated with the image data. Object detection is performed on the image data using the updated plurality of feature maps.
Type: Grant
Filed: April 5, 2021
Date of Patent: May 21, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiyang Dai, Yinpeng Chen, Bin Xiao, Dongdong Chen, Mengchen Liu, Lu Yuan, Lei Zhang
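A shape-level sketch of the rescale-and-reshape step: pyramid levels are resized to the median level's spatial size, stacked into a 4D tensor, and viewed as a 3D tensor whose axes loosely correspond to scale, spatial, and task (channel) features. Channel counts and the interpolation mode are assumptions, and the attention layers are omitted; this is not Microsoft's implementation.

```python
# Shape-level sketch of rescaling pyramid levels to the median level and
# reshaping the stacked 4D tensor into a 3D (scale, spatial, task) tensor;
# channel counts and interpolation mode are assumptions.
import torch
import torch.nn.functional as F

pyramid = [torch.randn(1, 128, s, s) for s in (64, 32, 16, 8, 4)]  # 5 pyramid levels
median_hw = tuple(pyramid[len(pyramid) // 2].shape[-2:])            # median level size (16 x 16)

rescaled = [F.interpolate(level, size=median_hw, mode="bilinear", align_corners=False)
            for level in pyramid]
tensor_4d = torch.cat(rescaled, dim=0)                       # (levels, channels, H, W)
levels, channels, h, w = tensor_4d.shape
tensor_3d = tensor_4d.view(levels, channels, h * w).permute(0, 2, 1)  # (scale, spatial, task)
print(tensor_4d.shape, tensor_3d.shape)
# torch.Size([5, 128, 16, 16]) torch.Size([5, 256, 128])
```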