Patents Examined by Fayyaz Alam
  • Patent number: 11210572
    Abstract: A method, apparatus and system for understanding visual content includes determining at least one region proposal for an image; attending at least one symbol of the proposed image region; attending a portion of the proposed image region using information regarding the attended symbol; extracting appearance features of the attended portion of the proposed image region; fusing the appearance features of the attended image region and features of the attended symbol; projecting the fused features into a semantic embedding space having been trained using fused attended appearance features and attended symbol features of images having known descriptive messages; computing a similarity measure between the projected, fused features and fused attended appearance features and attended symbol features embedded in the semantic embedding space having at least one associated descriptive message; and predicting a descriptive message for an image associated with the projected, fused features.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: December 28, 2021
    Assignee: SRI International
    Inventors: Ajay Divakaran, Karan Sikka, Karuna Ahuja, Anirban Roy
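The abstract of 11210572 describes fusing attended appearance and symbol features, projecting them into a trained semantic embedding space, and predicting a descriptive message by similarity. The sketch below illustrates only that fuse-project-retrieve step with random placeholder features, a random projection standing in for the learned mapping, and made-up example messages; none of the dimensions, the concatenation fusion, or the cosine similarity are claimed to be the patented implementation.

```python
# Hypothetical sketch of the fusion-and-retrieval step: fuse attended appearance
# and symbol features, project into a semantic embedding space, and predict the
# descriptive message of the nearest stored embedding. All names, dimensions,
# and the "learned" projection are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 128
appearance_feat = rng.normal(size=256)   # attended appearance features (assumed dim)
symbol_feat = rng.normal(size=64)        # attended symbol features (assumed dim)

# Fusion here is simple concatenation; the patent does not specify the operator.
fused = np.concatenate([appearance_feat, symbol_feat])

# Stand-in for a trained projection into the semantic embedding space.
W = rng.normal(size=(EMBED_DIM, fused.size)) / np.sqrt(fused.size)
query = W @ fused

# Reference embeddings with known descriptive messages (placeholders).
gallery = rng.normal(size=(3, EMBED_DIM))
messages = ["stop smoking ad", "charity appeal", "political poster"]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

scores = [cosine(query, g) for g in gallery]
print("predicted message:", messages[int(np.argmax(scores))])
```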
  • Patent number: 11210523
    Abstract: A scene aware dialog system includes an input interface to receive a sequence of video frames, contextual information, and a query, and a memory configured to store neural networks trained to generate a response to the input query by analyzing one or a combination of the input sequence of video frames and the input contextual information. The system further includes a processor configured to detect and classify objects in each video frame of the sequence of video frames; determine relationships among the classified objects in each video frame; extract features representing the classified objects and the determined relationships for each video frame to produce a sequence of feature vectors; and submit the sequence of feature vectors, the input query and the input contextual information to the neural network to generate a response to the input query.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: December 28, 2021
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Shijie Geng, Peng Gao, Anoop Cherian, Chiori Hori, Jonathan Le Roux
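The abstract of 11210523 describes turning per-frame object detections and their relationships into a sequence of feature vectors that is fed, with the query and context, to a trained network. The toy sketch below shows only that feature-assembly step; the mock detector, the center-offset "relationship", and the fixed vector size are assumptions, not the patented pipeline.

```python
# Illustrative sketch of per-frame feature assembly: detected objects and
# pairwise relationships are flattened into one fixed-size vector per frame,
# producing a sequence that would be submitted to the trained network together
# with the query and contextual information. Detection is mocked.
import numpy as np

def mock_detect(frame):
    """Stand-in detector returning (class_id, confidence, box) tuples."""
    return [(1, 0.9, (10, 10, 50, 50)), (3, 0.8, (60, 20, 90, 70))]

def relationship(box_a, box_b):
    """Toy spatial relationship: center offset between two boxes."""
    ca = ((box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2)
    cb = ((box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2)
    return (cb[0] - ca[0], cb[1] - ca[1])

def frame_features(frame, feat_dim=32):
    objs = mock_detect(frame)
    vec = []
    for cls, conf, box in objs:
        vec += [cls, conf, *box]
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            vec += relationship(objs[i][2], objs[j][2])
    out = np.zeros(feat_dim)
    out[:min(feat_dim, len(vec))] = vec[:feat_dim]   # pad/truncate to a fixed size
    return out

frames = [np.zeros((120, 160, 3)) for _ in range(4)]        # dummy video frames
sequence = np.stack([frame_features(f) for f in frames])    # (num_frames, feat_dim)
print(sequence.shape)  # this sequence plus query/context would go to the trained model
```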
  • Patent number: 11188738
    Abstract: A system associated with progressive spatial analysis of prodigious 3D data including complex structures is disclosed. The system receives minimum boundary information related to a first data object and a second data object, which are proximate neighbors. The system determines whether boundary data associated with the first data object is within an area delineated by the minimum boundary information of the first data object. A first geometric structure associated with the first data object is generated based on respective decompressed data. A structural skeleton is determined using the first geometric structure to identify respective skeleton vertices. A geometric representation is generated based on the skeleton vertices associated with the first geometric structure. The system determines whether boundary data associated with the second data object is within the area delineated by the minimum boundary information of the first data object.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: November 30, 2021
    Assignee: THE RESEARCH FOUNDATION FOR THE STATE UNIVERSITY OF NEW YORK
    Inventors: Yanhui Liang, Fusheng Wang, Hoang Vo
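The abstract of 11188738 relies on checking whether one object's boundary data falls within the area delineated by another object's minimum boundary information before any heavier geometric processing. The sketch below is a minimal, assumed version of that filter using axis-aligned bounding rectangles; the rectangle format and method names are not taken from the patent.

```python
# Minimal sketch of minimum-boundary filtering: before decompressing full
# geometry, check whether a candidate object's bounding box lies inside (or
# overlaps) the area delineated by its neighbor's minimum boundary information.
from dataclasses import dataclass

@dataclass
class MBR:
    """Axis-aligned minimum bounding rectangle (x_min, y_min, x_max, y_max)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, other: "MBR") -> bool:
        return (self.x_min <= other.x_min and self.y_min <= other.y_min and
                self.x_max >= other.x_max and self.y_max >= other.y_max)

    def intersects(self, other: "MBR") -> bool:
        return not (other.x_min > self.x_max or other.x_max < self.x_min or
                    other.y_min > self.y_max or other.y_max < self.y_min)

first_object_area = MBR(0, 0, 100, 100)      # area from the first object's boundary info
second_object_mbr = MBR(40, 40, 60, 60)      # proximate neighbor candidate

# Only neighbors passing this cheap test would proceed to decompression,
# skeleton extraction, and the finer geometric comparison.
print(first_object_area.contains(second_object_mbr))        # True
print(first_object_area.intersects(MBR(120, 0, 130, 10)))   # False
```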
  • Patent number: 11182595
    Abstract: Real-time modification of video images of humans allows the video to be modified so that the expression of a human subject may be changed. Customer service agents may have more successful interactions with customers if they display an appropriate facial expression, such as one conveying a particular emotional state. By determining an appropriate facial expression, and any deviation of a customer service agent's current expression from it, a modification to the video image of the customer service agent may be determined and applied. As a result, even when an agent does not have the facial expression best suited to successfully resolve the purpose of the interaction, the customer can be presented with the best-suited facial expression.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 23, 2021
    Assignee: Avaya Inc.
    Inventors: Valentine C. Matula, Pushkar Yashavant Deole, Sandesh Chopdekar, Sadashiv Vamanrao Deshmukh
  • Patent number: 11176457
    Abstract: A method of generating a 3D microstructure using a neural network includes configuring an initial 3D microstructure; obtaining a plurality of cross-sectional images by disassembling the initial 3D microstructure in at least one direction of the initial 3D microstructure; obtaining first output feature maps with respect to at least one layer that constitutes the neural network by inputting each of the cross-sectional images to the neural network; obtaining second output feature maps with respect to at least one layer by inputting a 2D original image to the neural network; generating a 3D gradient by applying a loss value to a back-propagation algorithm after calculating the loss value by comparing the first output feature maps with the second output feature maps; and generating a final 3D microstructure based on the 2D original image by applying the 3D gradient to the initial 3D microstructure.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: November 16, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seungwoo Seo, Kyongmin Min, Eunseog Cho
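The abstract of 11176457 compares feature maps of 2D cross-sections sliced from a 3D candidate against feature maps of a 2D exemplar, then back-propagates the loss into a 3D gradient. The sketch below covers only the slicing and forward comparison, with a toy block-mean "feature map" in place of the trained network; it is an assumption-laden illustration, not the claimed method.

```python
# Hedged sketch of the slicing-and-comparison loop: the 3D candidate is
# disassembled into 2D cross-sections along each direction, per-slice feature
# maps are compared with the 2D exemplar's feature maps, and the resulting loss
# would drive a gradient update of the volume (gradient step not shown).
import numpy as np

def feature_map(img, k=4):
    """Toy feature extractor: non-overlapping k x k block means."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

rng = np.random.default_rng(1)
volume = rng.random((32, 32, 32))       # initial 3D microstructure (random init)
exemplar = rng.random((32, 32))         # 2D original image

target_feats = feature_map(exemplar)

loss = 0.0
for axis in range(3):                   # disassemble along each direction
    for i in range(volume.shape[axis]):
        slice_2d = np.take(volume, i, axis=axis)
        loss += np.mean((feature_map(slice_2d) - target_feats) ** 2)

loss /= 3 * volume.shape[0]
print(f"slice-vs-exemplar feature loss: {loss:.4f}")
```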
  • Patent number: 11172753
    Abstract: A wearable pack assembly for a mobile device having a touchscreen and a camera. The wearable pack assembly includes a harness and a pack. The harness includes a base and a strap assembly configured to be worn by a wearer. The pack is removably coupled to the base and includes an extension panel and an envelope platform. The envelope platform includes an envelope with an inlet configured to receive the mobile device, and a cover having apertures that form windows on the outward and inward facing sides of the envelope to enable viewing and touching of the touchscreen and taking of pictures and video with the camera from within the windowed pocket through the windows. The envelope platform and the extension panel can be rotatably coupled at opposing edges to provide an extended position in which the touchscreen can be positioned at the wearer's eye level.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: November 16, 2021
    Assignee: UVU, LLC
    Inventor: Karl D. Picking
  • Patent number: 11176455
    Abstract: A learning data generation apparatus includes a memory and a processor configured to perform determination of a region of interest in each of a plurality of images related to a learning target for machine learning in accordance with a result of image matching between the plurality of images, apply an obscuring processing to a specific region other than the region of interest in each of the plurality of images, and generate learning data including the plurality of images to which the obscuring processing is applied.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: November 16, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Yusuke Hida
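The abstract of 11176455 determines a region of interest and obscures everything outside it before generating learning data. The sketch below shows one plausible obscuring transform (block pixelation outside a hand-picked box standing in for the image-matching result); the box coordinates, block size, and choice of pixelation are assumptions.

```python
# Minimal sketch of the obscuring step: the region of interest is kept as-is
# and the specific region other than the ROI is pixelated before the image is
# added to the machine-learning training set.
import numpy as np

def pixelate(img, block=8):
    """Coarse pixelation by block-wise averaging (a simple obscuring transform)."""
    h, w = img.shape[0] // block * block, img.shape[1] // block * block
    out = img.copy()
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    out[:h, :w] = np.repeat(np.repeat(blocks.mean(axis=(1, 3)), block, 0), block, 1)
    return out

def obscure_outside_roi(img, roi):
    """Obscure everything outside the ROI; keep the ROI untouched."""
    y0, y1, x0, x1 = roi
    obscured = pixelate(img)
    obscured[y0:y1, x0:x1] = img[y0:y1, x0:x1]
    return obscured

image = np.random.default_rng(2).random((128, 128))
roi = (32, 96, 40, 104)                      # (y0, y1, x0, x1) from image matching
learning_sample = obscure_outside_roi(image, roi)
print(learning_sample.shape)
```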
  • Patent number: 11176425
    Abstract: A system for detecting and describing keypoints in images is described. A camera is configured to capture an image including a plurality of pixels. A fully convolutional network is configured to jointly and concurrently: generate descriptors for each of the pixels, respectively; generate reliability scores for each of the pixels, respectively; and generate repeatability scores for each of the pixels, respectively. A scoring module is configured to generate scores for the pixels, respectively, based on the reliability scores and the repeatability scores of the pixels, respectively. A keypoint list module is configured to: select X of the pixels having the X highest scores, where X is an integer greater than 1; and generate a keypoint list including: locations of the selected X pixels; and the descriptors of the selected X pixels.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 16, 2021
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Jérome Revaud, Cesar De Souza, Martin Humenberger, Philippe Weinzaepfel
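The abstract of 11176425 combines per-pixel reliability and repeatability scores and keeps the X best-scoring pixels as a keypoint list with their descriptors. The sketch below uses random maps in place of the convolutional network's outputs and a simple product as the combined score; both choices are assumptions for illustration.

```python
# Illustrative sketch of scoring and top-X selection: per-pixel reliability and
# repeatability maps are combined into one score map, and the X highest-scoring
# pixels form the keypoint list (locations plus descriptors).
import numpy as np

rng = np.random.default_rng(3)
H, W, D, X = 64, 64, 16, 5

descriptors = rng.normal(size=(H, W, D))     # per-pixel descriptors
reliability = rng.random((H, W))             # per-pixel reliability scores
repeatability = rng.random((H, W))           # per-pixel repeatability scores

scores = reliability * repeatability         # one simple way to combine the two maps

flat_idx = np.argsort(scores, axis=None)[-X:][::-1]      # indices of the X highest scores
ys, xs = np.unravel_index(flat_idx, scores.shape)

keypoint_list = [
    {"location": (int(y), int(x)), "descriptor": descriptors[y, x]}
    for y, x in zip(ys, xs)
]
for kp in keypoint_list:
    print(kp["location"])
```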
  • Patent number: 11176407
    Abstract: A method and an image processing system for detecting an object in an image are described. A set of line segments are detected in the image. A subset of the line segments is identified based on a projection space orientation that defines a projection space. Each one of the line segments of the subset of line segments is projected into the projection space to obtain a set of projected line segments, where each projected line segment of the set of projected line segments is represented by a respective set of projection parameters. Based on the sets of projection parameters and a shape criterion that characterizes the object, a determination is performed in the projection space as to whether the image includes an instance of the object. In response to determining that the image includes the instance of the object, the instance of the object is output.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: November 16, 2021
    Assignee: Matrox Electronics Systems Ltd.
    Inventor: Maguelonne Héritier
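The abstract of 11176407 selects line segments by a projection-space orientation, reduces each to projection parameters, and applies a shape criterion in that space. The sketch below is a hedged toy version: segments near the chosen orientation are projected to an offset/extent/angle triple, and the shape criterion is simply "two parallel segments at an expected spacing". The thresholds and criterion are invented for illustration.

```python
# Hedged sketch of the projection step: filter segments by orientation, project
# each to simple projection parameters, and apply a toy shape criterion (a pair
# of parallel segments at a given spacing) in the projection space.
import math

def segment_params(seg, theta):
    """Project a segment ((x1, y1), (x2, y2)) for projection orientation theta."""
    (x1, y1), (x2, y2) = seg
    d = (math.cos(theta), math.sin(theta))          # projection direction
    n = (-d[1], d[0])                               # normal to it
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    offset = mx * n[0] + my * n[1]                  # position across the direction
    extent = abs((x2 - x1) * d[0] + (y2 - y1) * d[1])   # length along the direction
    angle = math.atan2(y2 - y1, x2 - x1)
    return offset, extent, angle

def detect_parallel_pair(segments, theta, spacing, tol=3.0):
    subset = [s for s in segments
              if abs(math.sin(segment_params(s, theta)[2] - theta)) < 0.1]
    params = [segment_params(s, theta) for s in subset]
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            if abs(abs(params[i][0] - params[j][0]) - spacing) < tol:
                return True
    return False

segments = [((0, 0), (100, 0)), ((0, 40), (100, 40)), ((50, 10), (60, 90))]
print(detect_parallel_pair(segments, theta=0.0, spacing=40))   # True
```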
  • Patent number: 11176387
    Abstract: A device and method for recognizing an object included in an input image are provided, the device for recognizing the object included in the input image includes a memory in which at least one program is stored; a camera configured to capture an environment around the device; and at least one processor configured to execute the at least one program to recognize an object included in an input image, wherein the at least one program includes instructions to: obtain the input image by controlling the camera; obtain information about the environment around the device that obtains the input image; determine, based on the information about the environment, a standard for using a plurality of feature value sets in a combined way, the plurality of feature value sets being used to recognize the object in the input image; and recognize the object included in the input image, by using the plurality of feature value sets based on the determined standard for using the plurality of feature value sets in the combined way.
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: November 16, 2021
    Inventors: Hyun-seok Hong, Sahng-gyu Park, Seung-hoon Han, Bo-seok Moon
  • Patent number: 11170226
    Abstract: A system for tracking objects in a temporal sequence of digital images is configured to: detect potential objects in the images, the detected potential objects being indicated as nodes, identify pairs of neighboring nodes, such that for each pair the nodes of said pair potentially represent an identical object and their spatial and/or temporal relationship with each other is within a predetermined range, connect each pair of neighboring nodes with a first type edge, identify at least one supplementary pair of distant nodes whose spatial and/or temporal relationship with each other exceeds the predetermined range, connect the pair of distant nodes with a supplementary second type edge, each of the first and second type edges being assigned a cost value, and determine a track of an object in the temporal sequence of digital images based on a set of connected first type edges and at least one second type edge.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: November 9, 2021
    Assignees: TOYOTA MOTOR EUROPE, MAX-PLANCK-GESELLSCHAFT ZUR FÖRDERUNG DER WISSENSCHAFTEN E.V.
    Inventors: Daniel Olmeda Reino, Bernt Schiele, Björn Andres, Mykhaylo Andriluka, Siyu Tang
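The abstract of 11170226 builds a graph over detections with first-type edges between spatio-temporally close nodes and supplementary second-type edges between distant nodes, each carrying a cost. The sketch below only constructs such a two-edge-type graph over toy detections; the thresholds, cost function, and the (omitted) track extraction are assumptions, not the claimed optimization.

```python
# Toy sketch of the graph construction: detections become nodes, pairs within a
# spatio-temporal range get a first-type edge, selected distant pairs get a
# supplementary second-type edge, and every edge carries a cost value.
import math

# Each node: (frame_index, x, y) for one detected potential object.
nodes = [(0, 10, 10), (1, 12, 11), (2, 30, 30), (5, 18, 14)]

NEIGHBOR_MAX_DT, NEIGHBOR_MAX_DIST = 2, 10.0

def cost(a, b):
    """Edge cost grows with spatial distance and temporal gap."""
    return math.dist(a[1:], b[1:]) + 2.0 * abs(a[0] - b[0])

first_type, second_type = [], []
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        a, b = nodes[i], nodes[j]
        dt, dist = abs(a[0] - b[0]), math.dist(a[1:], b[1:])
        if dt <= NEIGHBOR_MAX_DT and dist <= NEIGHBOR_MAX_DIST:
            first_type.append((i, j, cost(a, b)))
        elif dist <= 2 * NEIGHBOR_MAX_DIST:          # supplementary long-range link
            second_type.append((i, j, cost(a, b)))

print("first-type edges :", first_type)
print("second-type edges:", second_type)
# A track would then be a low-cost set of connected first-type edges, optionally
# bridged by a second-type edge across an occlusion or detection gap.
```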
  • Patent number: 11171716
    Abstract: Methods and systems are described for providing end-to-end beamforming. For example, end-to-end beamforming systems include end-to-end relays and ground networks to provide communications to user terminals located in user beam coverage areas. The ground segment can include geographically distributed access nodes and a central processing system. Return uplink signals, transmitted from the user terminals, have multipath induced by a plurality of receive/transmit signal paths in the end-to-end relay and are relayed to the ground network. The ground network, using beamformers, recovers user data streams transmitted by the user terminals from return downlink signals. The ground network, using beamformers, generates forward uplink signals from appropriately weighted combinations of user data streams that, after relay by the end-to-end relay, produce forward downlink signals that combine to form user beams.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: November 9, 2021
    Assignee: ViaSat, Inc.
    Inventors: Kenneth V. Buer, Mark J. Miller
  • Patent number: 11170299
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: November 9, 2021
    Assignee: NVIDIA CORPORATION
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
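The abstract of 11170299 mentions sampling depth values corresponding to the DNN's input resolution from a predicted depth map at the DNN's output resolution. The sketch below shows one generic way to do that resolution mapping (bilinear interpolation with NumPy); the patent's actual sampling algorithm may differ, and the depth map here is a random placeholder.

```python
# Minimal sketch of resolution sampling: depth values at the input resolution
# are sampled from a lower-resolution predicted depth map by bilinear
# interpolation over a regular grid.
import numpy as np

def sample_depth(depth_map, out_h, out_w):
    """Bilinearly sample an (h, w) depth map onto an (out_h, out_w) grid."""
    h, w = depth_map.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = depth_map[np.ix_(y0, x0)] * (1 - wx) + depth_map[np.ix_(y0, x1)] * wx
    bot = depth_map[np.ix_(y1, x0)] * (1 - wx) + depth_map[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

predicted = np.random.default_rng(4).random((60, 96))   # DNN output-resolution depth map
sampled = sample_depth(predicted, 120, 192)             # depths at the input resolution
print(sampled.shape)
```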
  • Patent number: 11170217
    Abstract: A method for predicting conditions associated with a coal stock pile is described. The method includes collecting aerial data for a site including one or more coal stock piles. Using the aerial data, the method includes performing localization of the site to identify boundaries of the coal stock piles and extracting multi-spectral features. The method also includes obtaining additional data associated with the coal stock piles from at least one data source and merging the aerial data with the additional data. Using the merged data and the extracted multi-spectral features, the method includes analyzing a status of the coal stock piles by a prediction module to predict at least one of an impending combustion event or a severe condition associated with the coal stock piles. In response to the predicted at least one impending combustion event or severe condition, the method includes implementing a response.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: November 9, 2021
    Assignee: Accenture Global Solutions Limited
    Inventors: Bhushan Gurmukhdas Jagyasi, Abhijeet Chowdhary, Ramaa Gopal Varma Vegesna, Nasiruddin Mohammad, Pallavi S. Gawade, Urvi Suresh Shah, Abhishek Kumar Jaiswal, Akash Manikrao Jadhav, Bolaka Mukherjee
  • Patent number: 11170253
    Abstract: An information processing apparatus includes a processor. The processor is configured to receive first image data, and generate, by processing corresponding to information represented in the first image data and corresponding to specific information other than information of a deletion target out of the information represented in the first image data, second image data not representing the information of the deletion target out of the information represented in the first image data but representing the information other than the information of the deletion target.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: November 9, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Hiroyoshi Uejo, Chizuko Sento, Naohiro Nukaya
  • Patent number: 11164317
    Abstract: An embodiment of mask quality prediction technology may include a memory to store a set of input images and a reference mask image associated with each input image of the set of input images, a processor communicatively coupled to the memory, and logic communicatively coupled to the processor and the memory to generate a set of two or more masks of different quality associated with each input image of the set of input images, and determine a quality score for each generated mask. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: November 2, 2021
    Assignee: Intel Corporation
    Inventors: Fahim Mohammad, Joseph Batz, Nathan Segerlind, Itay Benou, Tzachi Hershkovich
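The abstract of 11164317 generates masks of different quality for each input image and scores each against a reference mask. The sketch below illustrates that idea with two synthetic degradations and intersection-over-union as the quality score; both the degradations and the IoU metric are assumptions, not the claimed logic.

```python
# Hedged sketch of mask scoring: masks of varying quality are produced by
# degrading a reference mask, and each gets a quality score, here IoU against
# the reference mask image.
import numpy as np

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(5)
reference = np.zeros((64, 64), dtype=bool)
reference[16:48, 16:48] = True                  # reference mask for one input image

# Two generated masks of different quality: a slight shift and heavy noise.
shifted = np.roll(reference, 3, axis=1)
noisy = reference ^ (rng.random((64, 64)) < 0.2)

for name, mask in [("shifted", shifted), ("noisy", noisy)]:
    print(f"{name}: quality score = {iou(mask, reference):.3f}")
```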
  • Patent number: 11157772
    Abstract: Methods and systems for generating adversarial examples are disclosed. The method comprises accessing a set of inputs and generating an instance of a variational auto-encoder (VAE), the instance of the VAE encoding the set of inputs into latent representation elements associated with a latent space. The method further comprises applying a manifold learning routine on the instance of the VAE to establish a characterization of a manifold in the latent space and applying a perturbation routine to generate perturbed latent representation elements while constraining the perturbed latent representation elements to remain within the manifold. The method further comprises generating adversarial examples based on the perturbed latent representation elements and outputting the adversarial examples.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: October 26, 2021
    Assignee: ELEMENT AI INC.
    Inventor: Ousmane Dia
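The abstract of 11157772 perturbs latent representations while constraining them to remain within a learned manifold, then decodes adversarial examples. The sketch below is a heavily simplified stand-in: linear maps replace the VAE encoder/decoder, and a norm ball around the latent mean replaces the manifold characterization. Everything here is an assumption used only to make the constrained-perturbation idea concrete.

```python
# Illustrative sketch of latent-space perturbation with a manifold constraint:
# encode inputs, add a perturbation, project each perturbed latent back into a
# norm ball (manifold proxy), then decode adversarial examples.
import numpy as np

rng = np.random.default_rng(6)
D_IN, D_LAT = 32, 8

enc = rng.normal(size=(D_LAT, D_IN)) / np.sqrt(D_IN)   # stand-in encoder
dec = rng.normal(size=(D_IN, D_LAT)) / np.sqrt(D_LAT)  # stand-in decoder

inputs = rng.normal(size=(16, D_IN))
latents = inputs @ enc.T

center = latents.mean(axis=0)
radius = np.percentile(np.linalg.norm(latents - center, axis=1), 90)

def constrain_to_manifold(z):
    """Project a latent back into the norm ball used as a manifold proxy."""
    offset = z - center
    norm = np.linalg.norm(offset)
    return center + offset * min(1.0, radius / (norm + 1e-9))

perturbed = latents + 0.5 * rng.normal(size=latents.shape)
perturbed = np.array([constrain_to_manifold(z) for z in perturbed])

adversarial_examples = perturbed @ dec.T
print(adversarial_examples.shape)
```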
  • Patent number: 11151383
    Abstract: System and method for image processing are provided. A stream of images may be obtained, for example by capturing images using an image sensor. Points in time associated with an activity may be obtained. For each point in time, the stream of images may be analyzed to identify events related to the activity and preceding the point in time. Based on the identified events, an event detection rule configured to analyze images to detect at least one event may be obtained. Image data may be obtained, and the image data may be analyzed using the event detection rule to detect events matching selected criteria in the image data.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: October 19, 2021
    Assignee: Allegro Artificial Intelligence LTD
    Inventors: Moshe Guttmann, Nir Bar-Lev
  • Patent number: 11151377
    Abstract: A cloud detection method based on a Landsat 8 snow-containing image, including the following steps: Step 1, selecting any Landsat 8 image as a current image; Step 2, obtaining a cloud threshold for delineating a cloud range from the current image; and Step 3, removing false anomalies in the cloud range delineated by the cloud threshold from the current image so as to obtain a cloud image from which the false anomalies have been removed. The present disclosure can effectively solve the confusion between cloud and snow that arises in conventional cloud detection methods, and is applicable to regions of different latitudes, without being limited by the amount of cloud.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: October 19, 2021
    Assignee: CHANG'AN UNIVERSITY
    Inventors: Ling Han, Zhiheng Liu, Tingting Wu
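The abstract of 11151377 outlines a threshold-then-clean flow: a cloud threshold delineates a candidate cloud range, and false anomalies (such as bright snow) are removed from that range. The sketch below mirrors only that flow on random placeholder data; the single "band", both thresholds, and the snow test are illustrative inventions, not the patented criteria.

```python
# Toy sketch of the threshold-then-clean flow: delineate a candidate cloud
# range with a cloud threshold, then remove pixels that trigger a stand-in
# snow test as false anomalies.
import numpy as np

rng = np.random.default_rng(7)
reflectance = rng.random((100, 100))          # stand-in for a Landsat 8 band
texture = rng.random((100, 100))              # stand-in for a second discriminating feature

CLOUD_THRESHOLD = 0.8                          # Step 2: threshold delineating the cloud range
cloud_range = reflectance > CLOUD_THRESHOLD

# Step 3: remove false anomalies (e.g., bright snow) from the delineated range.
looks_like_snow = texture < 0.2
cloud_mask = cloud_range & ~looks_like_snow

print("candidate cloud pixels:", int(cloud_range.sum()))
print("after false-anomaly removal:", int(cloud_mask.sum()))
```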
  • Patent number: 11151389
    Abstract: An information processing apparatus of the present invention detects a queue (20) of objects from video data (12). Further, the information processing apparatus generates element information using a video frame (14) in which the queue (20) of objects is detected. The element information associates an object area (24), occupied in the video frame (14) by an object (22) included in the queue (20) of objects, with an attribute of the object (22). Furthermore, the information processing apparatus detects a change in the queue (20) of objects based on the element information and the object detection result for a video frame (14) generated after the video frame (14) from which the element information was generated. Then, the information processing apparatus generates new element information for the queue (20) of objects in which a change is detected, thereby updating the element information to be used later.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: October 19, 2021
    Assignee: NEC CORPORATION
    Inventor: Takuya Ogawa
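The abstract of 11151389 keeps, per queue element, an object area paired with an attribute, and compares later detections against that element information to decide when the queue has changed and must be regenerated. The sketch below is a minimal data-structure and change-check illustration; the IoU-based test and all field names are assumptions.

```python
# Minimal sketch of the element-information bookkeeping: each queue element
# stores the object area in the video frame and an attribute; a later detection
# result is compared against the stored areas to detect a change in the queue.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x0, y0, x1, y1) object area in the video frame

@dataclass
class QueueElement:
    area: Box
    attribute: str                # e.g., a color or appearance descriptor

def iou(a: Box, b: Box) -> float:
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def queue_changed(elements: List[QueueElement], detections: List[Box], thr=0.5) -> bool:
    """The queue is considered changed if counts differ or any element lost its match."""
    if len(elements) != len(detections):
        return True
    return any(max((iou(e.area, d) for d in detections), default=0.0) < thr
               for e in elements)

elements = [QueueElement((10, 10, 30, 60), "red"), QueueElement((35, 10, 55, 60), "blue")]
new_detections = [(11, 10, 31, 60), (60, 10, 80, 60)]   # second person moved forward
print(queue_changed(elements, new_detections))           # True -> regenerate element info
```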