Patents Examined by Nicolas James Boyajian
  • Patent number: 11494937
    Abstract: Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: November 8, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Bin Yang, Ming Liang
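    Example (Python, illustrative only): a minimal sketch of the general idea of joint multi-task training over fused sensor features, using a toy PyTorch network. The branch/head structure, channel counts, and loss weights are assumptions for illustration, not the patented architecture.
      import torch
      import torch.nn as nn

      class FusionMultiTaskNet(nn.Module):
          """Toy stand-in: two sensor branches, a shared trunk, and two task heads."""
          def __init__(self):
              super().__init__()
              self.camera_branch = nn.Conv2d(3, 16, 3, padding=1)   # image features
              self.lidar_branch = nn.Conv2d(1, 16, 3, padding=1)    # e.g. BEV lidar features
              self.shared = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
              self.detection_head = nn.Conv2d(32, 7, 1)             # e.g. box parameters per cell
              self.auxiliary_head = nn.Conv2d(32, 1, 1)             # e.g. an auxiliary dense task

          def forward(self, camera, lidar):
              fused = torch.cat([self.camera_branch(camera), self.lidar_branch(lidar)], dim=1)
              feats = self.shared(fused)
              return self.detection_head(feats), self.auxiliary_head(feats)

      net = FusionMultiTaskNet()
      det, aux = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
      # Training both heads together updates the shared features end to end.
      loss = det.abs().mean() + 0.5 * aux.abs().mean()
      loss.backward()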
  • Patent number: 11367283
    Abstract: An electronic device is disclosed. The electronic device includes a communicator configured to receive a video consisting of a plurality of frames, a processor configured to sense a frame having a predetermined object in the received video, extract information from the sensed frame, and generate metadata by using the extracted information, and a memory configured to store the generated metadata.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: June 21, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Young Chun Ahn
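    Example (Python, illustrative only): a sketch of the scan-and-record flow described in the abstract; detect_object() and the metadata fields are hypothetical placeholders, not the device's implementation.
      def detect_object(frame):
          # Hypothetical detector: return e.g. {"label": "person", "bbox": (x, y, w, h)}
          # when the predetermined object appears in the frame, otherwise None.
          ...

      def build_metadata(frames, fps=30.0):
          metadata = []
          for index, frame in enumerate(frames):
              hit = detect_object(frame)
              if hit is not None:
                  metadata.append({
                      "frame_index": index,
                      "timestamp_s": index / fps,   # position of the sensed frame in the video
                      "label": hit["label"],
                      "bbox": hit["bbox"],
                  })
          return metadata   # stored for later search or summarization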
  • Patent number: 11367269
    Abstract: An information processing system includes an image reading device and an information processing device. The image reading device reads a document to generate target image data. The information processing device processes the target image data. The information processing device includes a first conversion processing section, a second conversion processing section, and a selection section. The first conversion processing section is capable of converting image data to character code data. The second conversion processing section is capable of converting image data to character code data. The selection section selects conversion of the target image data to character code data by the first conversion processing section or the second conversion processing section.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: June 21, 2022
    Assignee: KYOCERA Document Solutions Inc.
    Inventors: Daisuke Yoshida, Tetsuji Yamaguchi, Yosuke Oka
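    Example (Python, illustrative only): a sketch of the "selection section" idea, routing target image data to one of two available conversion (OCR) engines. The selection criterion shown (job size against a threshold) is an invented example; the abstract does not specify it.
      class FirstConverter:
          def to_text(self, image_data):
              return "character code data from the first conversion processing section"

      class SecondConverter:
          def to_text(self, image_data):
              return "character code data from the second conversion processing section"

      def select_converter(page_count, threshold=10):
          # e.g. small jobs go to the first engine, large jobs to the second
          return FirstConverter() if page_count <= threshold else SecondConverter()

      converter = select_converter(page_count=3)
      print(converter.to_text(b"...target image data..."))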
  • Patent number: 11367266
    Abstract: Provided is an image recognition system that can easily perform image recognition on a side face of an item. An image recognition system according to one example embodiment of the present invention includes: a placement stage on which an item is placed below an image capture device arranged to capture images in a downward direction; a support structure configured to support the item at a predetermined angle relative to a top face of the placement stage; and an image recognition apparatus that identifies the item by performing image recognition on an image of the item acquired by the image capture device.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 21, 2022
    Assignee: NEC CORPORATION
    Inventors: Ryoma Iio, Kota Iwamoto, Hideo Yokoi, Ryo Takata, Kazuya Koyama
  • Patent number: 11361448
    Abstract: Provided is an image processing apparatus comprising: one or more memories that store a set of instructions; and one or more processors that execute the instructions to obtain a plurality of inputted images that are contiguously captured, perform determination of a first region based on the obtained inputted images, the first region being formed from pixels each having a change in pixel value below a predetermined threshold in a predetermined period, the determination being performed in each of a plurality of the continuous predetermined periods, determine a second region based on a plurality of the first regions determined in the plurality of the continuous predetermined periods, respectively, determine a third region by subjecting image data representing the determined second region to image processing, and update a background image based on the obtained inputted images and any of the determined third region.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: June 14, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kiwamu Kobayashi, Akihiro Matsushita
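    Example (Python, illustrative only): a numpy sketch of the staged region logic in the abstract (stable pixels per period, agreement across periods, cleanup, then a masked background update). Thresholds and the cleanup step are assumptions.
      import numpy as np

      def stable_region(frames, change_threshold=5):
          # First region: pixels whose value change over the period stays below the threshold.
          stack = np.stack(frames).astype(np.int16)
          return (stack.max(axis=0) - stack.min(axis=0)) < change_threshold

      def update_background(background, frame_periods, change_threshold=5):
          first_regions = [stable_region(p, change_threshold) for p in frame_periods]
          second_region = np.logical_and.reduce(first_regions)      # stable in every period
          # "Third region": a crude cleanup that drops vertically isolated pixels.
          third_region = second_region & np.roll(second_region, 1, axis=0) \
                                       & np.roll(second_region, -1, axis=0)
          latest = frame_periods[-1][-1]
          updated = background.copy()
          updated[third_region] = latest[third_region]
          return updated

      rng = np.random.default_rng(0)
      periods = [[rng.integers(0, 256, (48, 64), dtype=np.uint8) for _ in range(4)] for _ in range(3)]
      bg = update_background(np.zeros((48, 64), dtype=np.uint8), periods)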
  • Patent number: 11348248
    Abstract: A system includes one or more memory devices storing instructions, and one or more processors configured to execute the instructions to perform the steps of a method to automatically crop images. The system may convert a raw image into a grayscale image before applying an edge detection operator to the grayscale image to create an edge image. The system may then create a binary image based on the edge image, identify one or more contours in the binary image, and determine one or more contour bounding image areas surrounding the contour(s). Upon identifying contour bounding image area(s) having user-specified dimensional criteria, the system may determine a minimum bounded image area including those area(s), pad the minimum bounded image area, and crop the raw image based on the padded bounded area.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: May 31, 2022
    Assignee: CARMAX ENTERPRISE SERVICES, LLC
    Inventor: Omar Ahmed Ansari
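    Example (Python, illustrative only): a compact OpenCV sketch of the pipeline named in the abstract (grayscale, edges, binary, contours, bounded area, pad, crop). The size filter and padding amount are placeholder values rather than the patent's user-specified criteria.
      import cv2
      import numpy as np

      def auto_crop(raw_bgr, min_w=20, min_h=20, pad=10):
          gray = cv2.cvtColor(raw_bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 50, 150)                              # edge image
          _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY)   # binary image
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          boxes = [cv2.boundingRect(c) for c in contours]
          boxes = [(x, y, w, h) for x, y, w, h in boxes if w >= min_w and h >= min_h]
          if not boxes:
              return raw_bgr
          x0 = min(x for x, _, _, _ in boxes)
          y0 = min(y for _, y, _, _ in boxes)
          x1 = max(x + w for x, _, w, _ in boxes)
          y1 = max(y + h for _, y, _, h in boxes)
          h_img, w_img = raw_bgr.shape[:2]
          x0, y0 = max(0, x0 - pad), max(0, y0 - pad)                   # pad the bounded area
          x1, y1 = min(w_img, x1 + pad), min(h_img, y1 + pad)
          return raw_bgr[y0:y1, x0:x1]

      img = np.zeros((200, 300, 3), dtype=np.uint8)
      cv2.rectangle(img, (80, 60), (220, 150), (255, 255, 255), -1)
      print(auto_crop(img).shape)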
  • Patent number: 11341376
    Abstract: A method and device for recognizing an image, electronic equipment and a storage medium are provided. The method includes: acquiring an image to be recognized; determining a potential recognition region based on a target algorithm model; determining an up-sampled potential recognition region by up-sampling the potential recognition region; and determining a classification recognition result based on the up-sampled potential recognition region.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: May 24, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Shuifa Zhang, Yan Li, Sibo Wang, Chang Liu
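    Example (Python, illustrative only): a sketch of the flow in the abstract: propose a potential recognition region, up-sample it, then classify the up-sampled patch. The proposal heuristic and threshold classifier are stand-ins for the patent's target algorithm model.
      import numpy as np

      def propose_region(image):
          # Stand-in proposal: a 32x32 window around the brightest pixel.
          y, x = np.unravel_index(np.argmax(image), image.shape)
          y0, x0 = max(0, y - 16), max(0, x - 16)
          return image[y0:y0 + 32, x0:x0 + 32]

      def upsample(region, factor=2):
          # Nearest-neighbour up-sampling of the potential recognition region.
          return np.repeat(np.repeat(region, factor, axis=0), factor, axis=1)

      def classify(patch):
          # Stand-in classifier: a simple intensity threshold.
          return "bright object" if patch.mean() > 127 else "dark object"

      image = np.random.default_rng(1).integers(0, 256, (128, 128), dtype=np.uint8)
      print(classify(upsample(propose_region(image))))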
  • Patent number: 11341595
    Abstract: Various embodiments of the present invention relate to an electronic device capable of generating a virtual image by using a user input, and an operating method therefor.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: May 24, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-Bum Choi, Sangjin Lee, Seung Hye Chyung, Jonghoon Won, Haedong Lee
  • Patent number: 11341366
    Abstract: A cross-modality processing method is related to the field of natural language processing technologies. The method includes: obtaining a sample set, wherein the sample set includes a plurality of corpora and a plurality of images; generating a plurality of training samples according to the sample set, in which each training sample is a combination of at least one of the corpora and at least one of the images corresponding to that corpus; and adopting the training samples to train a semantic model, so that the semantic model learns semantic vectors containing combinations of the corpora and the images.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: May 24, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Guocheng Niu, Bolei He, Xinyan Xiao
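    Example (Python, illustrative only): a sketch of assembling cross-modal training samples by pairing each corpus text with its corresponding image. The pairing key is an assumption, and the semantic model itself is not shown.
      def build_training_samples(corpus_items, images):
          # corpus_items: [{"id": ..., "text": ...}]; images: {id: image}
          samples = []
          for item in corpus_items:
              image = images.get(item["id"])
              if image is not None:
                  samples.append({"text": item["text"], "image": image})
          return samples

      corpus_items = [{"id": 1, "text": "a red car parked on a street"}]
      images = {1: "image-1.jpg"}
      print(build_training_samples(corpus_items, images))
      # Each sample (a text plus its corresponding image) would then be fed to the
      # semantic model so it learns a joint semantic vector for the pair.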
  • Patent number: 11334754
    Abstract: An apparatus and a method for monitoring an object in a vehicle which recognize a state of an object from a different type of image through an artificial intelligence algorithm are disclosed. The in-vehicle object monitoring apparatus is an apparatus which monitors an in-vehicle object by using a first image and a second image, and comprises: an interface configured to receive a first image generated by a first camera in a vehicle and a second image generated by a second camera in the vehicle; and a processor configured to, when it is determined that there is an object in each of the first image and the second image, extract the first feature information about the object from the first image, extract the second feature information about the object from the second image, and recognize a state of the object, based on the first feature information and the second feature information.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: May 17, 2022
    Assignee: LG Electronics Inc.
    Inventors: Hyun Kyu Kim, Ki Bong Song
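    Example (Python, illustrative only): a sketch of extracting feature information from each camera's image and combining both to decide the object's state. The feature extractors, the second camera's image type, and the decision rule are placeholders, not the trained recognizer.
      import numpy as np

      def first_features(rgb_image):
          return np.array([rgb_image.mean(), rgb_image.std()])

      def second_features(ir_image):
          return np.array([ir_image.mean(), ir_image.std()])

      def recognize_state(rgb_image, ir_image):
          combined = np.concatenate([first_features(rgb_image), second_features(ir_image)])
          # Placeholder rule standing in for the learned state recognizer.
          return "object present and warm" if combined[2] > 100 else "object present"

      rng = np.random.default_rng(2)
      rgb = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
      ir = rng.integers(90, 200, (64, 64), dtype=np.uint8)
      print(recognize_state(rgb, ir))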
  • Patent number: 11335086
    Abstract: Embodiments herein disclose methods and devices for waste management using an artificial intelligence based waste object categorizing engine. The method includes acquiring at least one image and detecting at least one waste object from the at least one acquired image. Additionally, the method determines that the at least one detected waste object matches a pre-stored waste object and identifies a type of the detected waste object using the pre-stored waste object. Furthermore, the method includes displaying the type of the detected waste object based on the identification.
    Type: Grant
    Filed: March 21, 2020
    Date of Patent: May 17, 2022
    Inventors: Roy William Jenkins, Maithreya Chakravarthula, Wolfgang Decker, Cristopher Luce, Christoper Heney, Clifton Luce
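    Example (Python, illustrative only): a nearest-match sketch of the categorization step: compare a detected waste object's feature vector against pre-stored waste objects and report the closest type. The feature representation and threshold are invented for the example.
      import numpy as np

      PRE_STORED = {
          "plastic bottle": np.array([0.9, 0.1, 0.2]),
          "glass jar":      np.array([0.2, 0.8, 0.3]),
          "food waste":     np.array([0.1, 0.2, 0.9]),
      }

      def categorize(detected_features, threshold=0.5):
          best_type, best_dist = None, np.inf
          for waste_type, stored in PRE_STORED.items():
              dist = np.linalg.norm(detected_features - stored)
              if dist < best_dist:
                  best_type, best_dist = waste_type, dist
          return best_type if best_dist < threshold else "unknown"

      print(categorize(np.array([0.85, 0.15, 0.25])))   # -> plastic bottle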
  • Patent number: 11328429
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for detecting ground point cloud points. The method may include: determining a segmentation plane and a ground based on a point cloud collected by a lidar; segmenting the point cloud into a first sub point cloud and a second sub point cloud based on the segmentation plane; and determining the point cloud points whose distances from the ground are smaller than a first distance threshold in the first sub point cloud as ground point cloud points, and determining the point cloud points whose distances from the ground are smaller than a second distance threshold in the second sub point cloud as the ground point cloud points, where the first distance threshold is smaller than the second distance threshold.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: May 10, 2022
    Assignee: APOLLO INTELLIGENT DRIVING TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Xiang Liu, Shuang Zhang, Bin Gao, Xiaoxing Zhu
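    Example (Python, illustrative only): a direct numpy sketch of the two-threshold rule: each sub point cloud uses its own distance-to-ground cutoff, with the first threshold smaller than the second. Plane segmentation and ground fitting are assumed to have already produced the inputs; the threshold values are placeholders.
      import numpy as np

      def ground_points(in_first_sub_cloud, height_above_ground,
                        first_threshold=0.15, second_threshold=0.30):
          # in_first_sub_cloud: boolean mask from the segmentation-plane split.
          first_ground = in_first_sub_cloud & (height_above_ground < first_threshold)
          second_ground = ~in_first_sub_cloud & (height_above_ground < second_threshold)
          return first_ground | second_ground

      rng = np.random.default_rng(3)
      heights = rng.uniform(0.0, 1.0, 1000)            # distance of each point from the ground
      in_first = rng.random(1000) < 0.5                # which side of the segmentation plane
      mask = ground_points(in_first, heights)
      print(int(mask.sum()), "points labelled as ground")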
  • Patent number: 11302009
    Abstract: A method of generating landmark locations for an image crop comprises: processing the crop through an encoder-decoder to provide a plurality of N output maps of comparable spatial resolution to the crop, each output map corresponding to a respective landmark of an object appearing in the image crop; and processing an output map from the encoder through a plurality of feed-forward layers to provide a feature vector comprising N elements, each element including an (x,y) location for a respective landmark. Any landmark locations from the feature vector having an x or a y location outside a range for a respective row or column of the crop are selected for a final set of landmark locations, with the remaining landmark locations tending to be selected from the N (x,y) landmark locations from the plurality of N output maps.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventors: Ruxandra Vranceanu, Tudor Mihail Pop, Oana Parvan-Cernatescu, Sathish Mangapuram
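    Example (Python, illustrative only): a numpy sketch of the selection rule: keep the regressed (x, y) from the feature vector when it falls outside the crop (only the vector can extrapolate beyond the crop), otherwise take the peak of the corresponding output map. Shapes and crop size are illustrative.
      import numpy as np

      def select_landmarks(vector_xy, output_maps, crop_w, crop_h):
          final = []
          for (x, y), heatmap in zip(vector_xy, output_maps):
              inside = 0 <= x < crop_w and 0 <= y < crop_h
              if not inside:
                  final.append((x, y))                          # out-of-crop: trust the vector
              else:
                  row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
                  final.append((float(col), float(row)))        # heatmap peak as (x, y)
          return final

      rng = np.random.default_rng(4)
      crop_w = crop_h = 64
      vector_xy = [(10.0, 12.0), (-3.0, 20.0), (30.0, 70.0), (40.0, 40.0), (5.0, 5.0)]
      output_maps = rng.random((len(vector_xy), crop_h, crop_w))
      print(select_landmarks(vector_xy, output_maps, crop_w, crop_h))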
  • Patent number: 11295852
    Abstract: A method is for calculating a personalized patient model including an external surface model of a patient and an organ model of the patient. In an embodiment, the method includes acquiring metadata of the patient, the metadata being assigned to at least one metadata category; ascertaining, using the patient metadata acquired, the external surface model of the patient and an internal anatomical model of the patient, the internal anatomical model including a body cavity of the patient; ascertaining the organ model of the patient using the patient metadata acquired and using the internal anatomical model of the patient ascertained; and calculating the personalized patient model by combining the ascertained external surface model of the patient and the ascertained organ model of the patient.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: April 5, 2022
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Norbert Strobel, Xia Zhong
  • Patent number: 11290705
    Abstract: AR elements are occluded in video image frames. A depth map is determined for an image frame of a video received from a video capture device. An AR graphical element for overlaying over the image frame is received. An element distance for AR graphical elements relative to a position of a user of the video capture device (e.g., the geographic position of the video capture device) is also received. Based on the depth map for the image frame, a pixel distance is determined for each pixel in the image frame. The pixel distances of the pixels in the image frame are compared to the element distance, and in response to a pixel distance for a given pixel being less than the element distance, the pixel of the image frame is displayed rather than a corresponding pixel of the AR graphical element.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: March 29, 2022
    Assignee: Mapbox, Inc.
    Inventors: Aleksandr Buslaev, Henadzi Klimuk, Roman Kuznetsov
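    Example (Python, illustrative only): a numpy sketch of the per-pixel occlusion test: where the camera pixel's depth is less than the AR element's distance, keep the camera pixel; otherwise draw the AR element's pixel. All inputs here are synthetic.
      import numpy as np

      def composite(frame, depth_map, ar_layer, ar_mask, element_distance_m):
          # ar_mask marks where the AR graphical element would be drawn at all.
          occluded = depth_map < element_distance_m        # real scene is in front of the element
          draw_ar = ar_mask & ~occluded
          out = frame.copy()
          out[draw_ar] = ar_layer[draw_ar]
          return out

      h, w = 120, 160
      frame = np.full((h, w, 3), 80, dtype=np.uint8)
      depth_map = np.full((h, w), 10.0)
      depth_map[:, :80] = 2.0                              # a nearby object on the left half
      ar_layer = np.zeros((h, w, 3), dtype=np.uint8); ar_layer[..., 1] = 255
      ar_mask = np.zeros((h, w), dtype=bool); ar_mask[40:80, 60:100] = True
      result = composite(frame, depth_map, ar_layer, ar_mask, element_distance_m=5.0)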
  • Patent number: 11288533
    Abstract: A community mapping platform may receive an image that depicts a community layout of a community and may process, using a computer vision model, the image to identify a unit, of the community, that is depicted in the image (e.g., based on identifying a text string and/or a polygon in the image). The community mapping platform may determine sets of community geographical coordinates for a set of reference locations of the community and may map the sets of community geographical coordinates to corresponding reference pixel locations of the image. The community mapping platform may determine, using a geographical information system, unit geographical coordinates of the unit based on the reference pixel locations and may perform an action associated with the unit geographical coordinates.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: March 29, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Adithya Umakanth, Ramesh Babu Kare, Vijay Naykodi, Adithya vikram raj Garoju, Ashis Sarkar, Balagangadhara Thilak Adiboina
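    Example (Python, illustrative only): a least-squares sketch of the coordinate-mapping step: fit an affine transform from reference pixel locations to their known geographic coordinates, then apply it to a unit's pixel location. The plain affine fit stands in for whatever the platform's geographic information system actually uses; all coordinates are made up.
      import numpy as np

      def fit_pixel_to_geo(ref_pixels, ref_geo):
          # Solve [px, py, 1] @ A = [lat, lon] for A (3x2) in the least-squares sense.
          P = np.hstack([np.asarray(ref_pixels, float), np.ones((len(ref_pixels), 1))])
          A, *_ = np.linalg.lstsq(P, np.asarray(ref_geo, float), rcond=None)
          return A

      def pixel_to_geo(A, pixel):
          px, py = pixel
          return np.array([px, py, 1.0]) @ A

      ref_pixels = [(0, 0), (1000, 0), (0, 800), (1000, 800)]                       # reference pixel locations
      ref_geo = [(38.90, -77.04), (38.90, -77.03), (38.89, -77.04), (38.89, -77.03)]
      A = fit_pixel_to_geo(ref_pixels, ref_geo)
      print(pixel_to_geo(A, (500, 400)))                                            # unit's pixel location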
  • Patent number: 11276189
    Abstract: Disclosed are techniques for radar-aided single-image three-dimensional (3D) depth reconstruction. In an aspect, at least one processor of an on-board computer of an ego vehicle receives, from a radar sensor of the ego vehicle, at least one radar image of an environment of the ego vehicle, receives, from a camera sensor of the ego vehicle, at least one camera image of the environment of the ego vehicle, and generates, using a convolutional neural network (CNN), a depth image of the environment of the ego vehicle based on the at least one radar image and the at least one camera image.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Urs Niesen, Jayakrishnan Unnikrishnan
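    Example (Python, illustrative only): a small PyTorch sketch of the general idea: a CNN that takes a camera image and a radar image of the same scene and regresses a dense depth image. Channel counts, resolution, and the fusion-by-concatenation choice are assumptions, not the patented network.
      import torch
      import torch.nn as nn

      class RadarCameraDepthNet(nn.Module):
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),   # camera RGB + radar channel
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 1),                             # one depth value per pixel
              )

          def forward(self, camera, radar):
              return self.body(torch.cat([camera, radar], dim=1))

      net = RadarCameraDepthNet()
      depth = net(torch.randn(1, 3, 96, 160), torch.randn(1, 1, 96, 160))
      print(depth.shape)   # torch.Size([1, 1, 96, 160])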
  • Patent number: 11275943
    Abstract: Object disposal recommendations are generated by classifying an object within an image, based on object recognition and metadata of the image, and comparing the object and metadata to a corpus of object classification images. A current object state is determined relative to a disposal recommendation, based on disposal policies. Responsive to determining that disposal of the object in the current state is not recommended, one or more components of the object are identified, and a determination is made as to a recommendation for disposal of the one or more object components. If a disposal recommendation is found for the one or more components of the object, the processor generates information to disassemble and prepare the one or more components of the object for disposal and provides a disposal recommendation for the one or more components of the object.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Susan Jachin Christian, Pushpita Panigrahi, Michael Anson Lau, Romelia H. Flores
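    Example (Python, illustrative only): a rule-based stand-in for the policy check: if no disposal recommendation exists for the whole object in its current state, fall back to recommendations for its components. The policy table and component list are invented for the example.
      DISPOSAL_POLICIES = {
          "battery": "hazardous-waste drop-off",
          "plastic casing": "curbside recycling",
          "laptop": None,                        # no whole-object disposal recommended
      }
      COMPONENTS = {"laptop": ["battery", "plastic casing"]}

      def recommend_disposal(obj):
          whole = DISPOSAL_POLICIES.get(obj)
          if whole is not None:
              return {obj: whole}
          parts = COMPONENTS.get(obj, [])
          return {part: DISPOSAL_POLICIES.get(part, "no recommendation") for part in parts}

      print(recommend_disposal("laptop"))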
  • Patent number: 11270146
    Abstract: Aspects of the present invention provide a new text location technique, which can be applied to general handwriting detection at a variety of levels, including characters, words, and sentences. The inventive technique is efficient in training deep learning systems to locate text. The technique works for different languages, for text in different orientations, and for overlapping text. In one aspect, the technique's ability to separate overlapping text also makes the technique useful in application to overlapping objects. Embodiments take advantage of a so-called skyline appearance that text tends to have. Recognizing a skyline appearance for text can facilitate the proper identification of bounding boxes for the text. Even in the case of overlapping text, discernment of a skyline appearance for words can help with the proper identification of bounding boxes for each of the overlapping text words/phrases, thereby facilitating the separation of the text for purposes of recognition.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 8, 2022
    Assignee: KONICA MINOLTA BUSINESS SOLUTIONS U.S.A., INC.
    Inventor: Junchao Wei
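    Example (Python, illustrative only): a numpy sketch of the "skyline" notion: for a binarized text image, record the topmost ink pixel in each column, giving a profile whose shape can help separate and bound words. This only illustrates the concept, not the patented detector.
      import numpy as np

      def skyline_profile(binary_text):
          # binary_text: 2D boolean array, True where there is ink.
          has_ink = binary_text.any(axis=0)
          first_ink = np.argmax(binary_text, axis=0)        # topmost ink row per column
          return np.where(has_ink, first_ink, binary_text.shape[0])

      img = np.zeros((20, 30), dtype=bool)
      img[8:15, 2:10] = True                                # a short word
      img[4:15, 14:26] = True                               # a taller word
      print(skyline_profile(img))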
  • Patent number: 11270427
    Abstract: A system for establishing a junction trace of an assembly includes a surface model creating module, a processing module, and a material inspection module. The assembly includes a first part and a second part assembled with each other. The surface model creating module scans the first part and the second part to separately establish first surface model data and second surface model data. The processing module establishes assembled surface model data according to the first surface model data and the second surface model data, determines a junction region from the assembled surface model data, and determines inspection points mapped on the assembly according to the junction region. The material inspection module inspects materials of the assembly at the inspection points. The processing module establishes a junction trace of the first part and the second part in the assembly according to a material inspection result.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: March 8, 2022
    Assignee: Industrial Technology Research Institute
    Inventors: Bing-Cheng Hsu, Cheng-Kai Huang, Jan-Hao Chen, Chwen-Yi Yang, Yi-Ying Lin