Patents Issued on March 17, 2020
-
Patent number: 10592739
Abstract: A gaze-tracking system for use in a head-mounted display apparatus. The gaze-tracking system includes at least one illuminator for emitting light pulses; at least one first optical element comprising a plurality of micro-to-nano-sized components, shaped and arranged relative to each other in a manner that, when incident thereupon, a structure of the light pulses is modified to produce structured light, wherein the produced structured light is used to illuminate a user's eye; at least one camera for capturing an image of reflections of the structured light from the user's eye, wherein the image is representative of a form and a position of the reflections on an image plane of the at least one camera; and a processor configured to control the at least one illuminator and the at least one camera, and to process the captured image to detect a gaze direction of the user.
Type: Grant
Filed: February 1, 2018
Date of Patent: March 17, 2020
Assignee: VARJO TECHNOLOGIES OY
Inventors: Mikko Ollila, Klaus Melakari, Oiva Arvo Oskari Sahlsten
-
Patent number: 10592740
Abstract: [Object] To provide a control system, an information processing device, a control method, and a program capable of capturing a clear iris image having no reflected light of illumination without interfering with a user's field of view. [Solution] A control system including: an illumination section configured to irradiate any one of left and right eyes with light; an imaging section configured to image the other eye different from the one of the left and right eyes; and a control section configured to perform control to cause the imaging section to image the other eye while the illumination section is irradiating the one of the left and right eyes with light.
Type: Grant
Filed: August 18, 2015
Date of Patent: March 17, 2020
Assignee: SONY CORPORATION
Inventors: Takashi Abe, Takeo Tsukamoto, Shuichi Konami, Tomoyuki Ito
-
Patent number: 10592741
Abstract: Apparatus for identifying a person who wishes to receive information, where identifying information for each of a plurality of registered individuals is stored in a database, calls for capturing images of an individual requesting information, and determining whether this individual is the same as one of the registered individuals. The stored identifying information includes images of a unique, observable biologic identifier on a body portion of each registered individual. The specificity of the identification process is enhanced by storing registered examples of altered biological information in the database, by allowing the information provider to induce an alteration in a biologic identifier of a requesting person at the time of the request, and by comparing the altered requesting person information to stored information.
Type: Grant
Filed: April 16, 2018
Date of Patent: March 17, 2020
Inventor: Jeffrey A. Matos
-
Patent number: 10592742
Abstract: Described is a multiple-camera system and process for re-identifying an agent located in a materials handling facility based on anterior views of agents. An anterior view of a newly detected agent may be partitioned and color signatures generated for each partition. Likewise, stored anterior views of agents (candidate agents) that may potentially be the newly detected agent are partitioned and color signatures generated for each partition. Based on the color signatures, a similarity between the anterior view of the newly detected agent and the candidate agents is determined. The similarity may be used to either determine that the newly detected agent is one of the candidate agents or reduce the set of candidate agents that are considered during a manual review.
Type: Grant
Filed: September 28, 2015
Date of Patent: March 17, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Gang Hua, Gerald Guy Medioni
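The partition-and-compare step in the abstract above can be sketched in a few lines. The horizontal-strip partitioning, per-channel histogram features, and cosine similarity used here are illustrative assumptions; the patent does not specify the exact form of the color signature.

```python
def color_signature(image, strips=4, bins=8):
    """Partition an image (a list of rows of (r, g, b) pixels) into horizontal
    strips and build a per-strip, per-channel color histogram signature.
    The strip count and bin count are illustrative, not from the patent."""
    h = len(image)
    sig = []
    for i in range(strips):
        rows = image[i * h // strips:(i + 1) * h // strips]
        hist = [0.0] * (3 * bins)
        for row in rows:
            for pixel in row:
                for c, value in enumerate(pixel):
                    hist[c * bins + min(value * bins // 256, bins - 1)] += 1
        total = sum(hist) or 1.0
        sig.extend(v / total for v in hist)  # normalize each strip
    return sig

def similarity(a, b):
    """Cosine similarity between two signatures (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

A newly detected agent's signature would be compared against each candidate's stored signature, keeping the best match or pruning low-similarity candidates before manual review.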
-
Patent number: 10592743
Abstract: An automatic method of determining an image composition procedure that generates a new image visualization based on aggregations and variations of input images. A set of input images is received. Visual features are extracted from the input images. Context associated with input images is received. Based on the extracted visual features and the context associated with the input images, a composition procedure comprising a set of image operations to apply on the set of input images is learned. One or more image operations in the composition procedure are determined to present to a user. A difference visualization image associated with the input images may be generated by executing the one or more image operations.
Type: Grant
Filed: August 24, 2017
Date of Patent: March 17, 2020
Assignee: International Business Machines Corporation
Inventors: Paul Borrel, Alvaro B. Buoro, Ruberth A. A. Barros, Daniel Salles Chevitarese
-
Patent number: 10592744
Abstract: A system and method is provided for determining the location of a device based on images of objects captured by the device. In one aspect, an interior space includes a plurality of objects having discernable visual characteristics disposed throughout the space. The device captures an image containing one or more of the objects and identifies the portions of the image associated with the objects based on the visual characteristics. The visual appearance of the objects may also be used to determine the distance of the object to other objects or relative to a reference point. Based on the foregoing and the size and shape of the image portion occupied by the object, such as the height of an edge or its surface area, relative to another object or a reference, the device may calculate its location.
Type: Grant
Filed: February 2, 2018
Date of Patent: March 17, 2020
Assignee: Google LLC
Inventors: Ehud Rivlin, Brian McClendon, Jean-Yves Bouguet
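The size-to-distance cue the abstract relies on reduces, under a simple pinhole-camera assumption, to d = f·H/h. The focal length and marker height below are made-up example values, not figures from the patent.

```python
def distance_from_height(focal_px, real_height_m, pixel_height):
    """Pinhole-camera relation: an object of known real height that spans
    `pixel_height` pixels in the image lies roughly f * H / h metres away.
    (The patent also uses relative positions of objects; only the
    apparent-size cue is sketched here.)"""
    return focal_px * real_height_m / pixel_height

# Assumed example values: 800 px focal length, a 2 m tall object spanning 100 px.
d = distance_from_height(800, 2.0, 100)  # 16.0 m under these assumptions
```

With distances to two or more identified objects at known positions, the device's own location could then be recovered by trilateration.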
-
Patent number: 10592745
Abstract: A method for removing foreign matter from an agricultural product stream of a manufacturing process. The method includes conveying a product stream past an inspection station; scanning a region of the agricultural product stream as it passes the inspection station using at least one light source of a single or different wavelengths; generating hyperspectral images from the scanned region; determining a spectral fingerprint for the agricultural product stream from the hyperspectral images; comparing the spectral fingerprint obtained in step (c) to a spectral fingerprint database containing a plurality of fingerprints using a computer processor to determine whether foreign matter is present and, if present, generating a signal in response thereto; and removing a portion of the conveyed product stream in response to the signal. A system for detecting foreign matter within an agricultural product stream is also provided.
Type: Grant
Filed: December 28, 2017
Date of Patent: March 17, 2020
Assignee: Altria Client Services LLC
Inventors: Henry M. Dante, Samuel Timothy Henry, Seetharama C. Deevi
-
Patent number: 10592746
Abstract: Aspects of the subject disclosure may include, for example, observing a plurality of objects viewed through a smart lens, wherein the plurality of objects are in a frame of an image viewed by the smart lens, determining an identification for an object of the plurality of objects, assigning tag information for the object based on the identification, storing the tag information for the object and the frame in which the object was observed, receiving a recall request for the object, retrieving the tag information for the object and the frame responsive to the receiving the recall request, and displaying the tag information and the frame. Other embodiments are disclosed.
Type: Grant
Filed: July 16, 2018
Date of Patent: March 17, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventor: Roque Rios, III
-
Patent number: 10592747
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Grant
Filed: November 2, 2018
Date of Patent: March 17, 2020
Assignee: Fyusion, Inc.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
-
Patent number: 10592748
Abstract: An approach is provided for performing sorting of physical mail items using Augmented Reality (A/R) glasses. A/R glasses acquire an image of a physical mail item to be sorted and generate image data that represents the image. A unique value is generated for the image, for example, by processing the image data for the image using one or more hash functions to generate a hash value. The hash value is used to obtain sorting information for the mail item from a mail item manager. The A/R glasses use the sorting information to assist the user in sorting the mail item by displaying the name of a sort location for the mail item, visually distinguishing the sort location from other sort locations, displaying information about the mail item, providing “out of view” assistance, etc. The A/R glasses may allow the user to override the sort location specified for the mail item and override information is sent to the mail item manager.
Type: Grant
Filed: March 15, 2019
Date of Patent: March 17, 2020
Assignee: RICOH COMPANY, LTD.
Inventors: Steve Cousins, Nicole Blohm
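The hash-based lookup described above can be sketched as follows. SHA-256 and the in-memory table are stand-ins for the unspecified "one or more hash functions" and the mail item manager; the item data and sort fields are hypothetical.

```python
import hashlib

def image_key(image_bytes):
    """Derive a stable lookup key from raw image data. SHA-256 is an
    illustrative choice; the patent only says 'one or more hash functions'."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical mail item manager table: hash value -> sorting information.
sort_table = {
    image_key(b"mail-item-001"): {"bin": "A-12", "route": "North"},
}

def lookup_sort_info(image_bytes):
    """Return sorting information for a captured mail-item image, or None
    if the item is unknown to the manager."""
    return sort_table.get(image_key(image_bytes))
```

In the described system the A/R glasses would then render the returned sort location in the wearer's view rather than simply returning a record.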
-
Patent number: 10592749
Abstract: One example aspect of the present disclosure is directed to a method for analyzing at least one phase of an aircraft turn at an airport. The method includes receiving one or more video streams. The method includes processing the one or more video streams to identify one or more objects. Processing the one or more video streams includes extracting data associated with the one or more objects. The method includes tracking the one or more objects to determine an event based on the one or more objects and the data. The method includes storing the event in a database with an associated parameter. The method includes performing an analysis of the at least one phase of the aircraft turn based, at least in part, on the event and the associated parameter. The method includes providing a signal indicative of an issue with the event based on the analysis.
Type: Grant
Filed: September 6, 2017
Date of Patent: March 17, 2020
Assignee: General Electric Company
Inventors: John C. Coppock, Andrew Glen Rector
-
Patent number: 10592750
Abstract: A system and method is provided for using rules to perform a set of actions on video data when conditions are satisfied by the video data. The system receives rules to select a theme, portions of the video data and/or a type of output. For example, based on annotation data associated with the video data, the system may apply rules to select one or more themes, with each theme associated with a portion of the video data. In some examples, the system may apply rules to determine the portion of the video data associated with the theme. The system may apply rules to generate various types of output data associated with each of the selected themes; the types of output data may include a video summarization, individual video clips, individual video frames, a photo album including video frames selected from the video data or the like.
Type: Grant
Filed: December 21, 2015
Date of Patent: March 17, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Deepak Suresh Yavagal, Matthew Alan Townsend, Robert James Hanson
-
Patent number: 10592751
Abstract: A method of generating a summary of a media file that comprises a plurality of media segments is provided. The method includes calculating, by a neural network, respective importance scores for each of the media segments, based on content features associated with each of the media segments and a targeting approach, selecting a media segment from the media segments, based on the calculated importance scores, generating a caption for the selected media segment based on the content features associated with the selected media segment, and generating a summary of the media file based on the caption.
Type: Grant
Filed: February 3, 2017
Date of Patent: March 17, 2020
Assignee: FUJI XEROX CO., LTD.
Inventors: Bor-Chun Chen, Yin-Ying Chen, Francine Chen
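The score-then-select step can be sketched without the neural network itself: the scores and captions below are assumed inputs, and only the selection and summary assembly are shown.

```python
def select_segments(scores, k=1):
    """Return the indices of the k highest-scoring media segments,
    in descending score order. The scores themselves would come from
    the neural network, which is not modeled here."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def summarize(segments, scores, captions, k=2):
    """Assemble a text summary from the captions of the top-k segments.
    `captions[i]` is assumed to be a pre-generated caption for segment i."""
    return " ".join(captions[i] for i in select_segments(scores, k))
```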
-
Patent number: 10592752
Abstract: Aspects of the present disclosure aim to improve upon methods and systems for the incorporation of additional material into source video data. In particular, the method of the present disclosure may use a pre-existing corpus of source video data to produce, test and refine a prediction model for enabling the prediction of the characteristics of placement opportunities. The model may be created using video analysis techniques which obtain metadata regarding placement opportunities and also through the identification of categorical characteristics relating to the source video, which may be provided as metadata with the source video or obtained through image processing techniques described below. Using the model, the method and system may then be used to create a prediction of insertion zone characteristics for projects for which source video is not yet available, but for which information corresponding to the identified categorical characteristics is known.
Type: Grant
Filed: March 22, 2018
Date of Patent: March 17, 2020
Assignee: Mirriad Advertising PLC
Inventors: Tim Harris, Philip McLauchlan, David Ok
-
Patent number: 10592753
Abstract: The described implementations relate to managing depth cameras. One example can include a depth camera that includes an emitter for illuminating light on a scene and a sensor for sensing light reflected from the scene. The example can also include a resource-conserving camera control component configured to determine when the scene is static by comparing captures and/or frames of the scene from the sensor. The resource-conserving camera control component can operate the depth camera in resource constrained modes while the scene remains static.
Type: Grant
Filed: March 1, 2019
Date of Patent: March 17, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sergio Ortiz Egea, Onur C. Akkaya, Cyrus Bamji
-
Patent number: 10592754
Abstract: Disclosed are a shadow removing method for an image and an application. The shadow removing method comprises a shadow-free feature analysis process, a shadow-free transformation parameter acquisition process and a shadow-free feature imaging process. The application is for road surface detection, a detection method comprising: firstly, using a shadow-free feature extraction method to select a region of interest and extract a feature; next, performing image filtering, segmentation and road surface region selection; lastly, performing image morphology filtering and hole filling. The method may remove shadows in color images, thus serving as a pre-processing step applied in various machine vision fields, and the application in road surface detection may solve the problem of detecting a road surface in dark shadows. The method has the advantages of having low complexity, high processing speed and high accuracy.
Type: Grant
Filed: December 13, 2016
Date of Patent: March 17, 2020
Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
Inventors: Ge Li, Zhenqiang Ying
-
Patent number: 10592755
Abstract: An apparatus is provided for controlling a vehicle equipped with a radar device and an imaging device to detect an object around the vehicle. In the apparatus, an identity determiner is configured to, based on a first predicted time to collision with a first target and a second predicted time to collision with a second target, perform an identity determination as to whether or not the first target and the second target correspond to the same object. A scene determiner is configured to determine whether or not one of at least one specific scene where large calculation errors in the second predicted time may be generated is matched, depending on the calculation method corresponding to the second predicted time to collision. A determination aspect setter is configured to, based on the calculation method and a result of determination by the scene determiner, set an aspect of the identity determination.
Type: Grant
Filed: January 29, 2018
Date of Patent: March 17, 2020
Assignee: DENSO CORPORATION
Inventor: Ryo Takaki
-
Patent number: 10592756
Abstract: A method for detecting a parking area on at least one road section includes providing a usable width of the road section. The usable width represents a passable width of the road section between parking vehicles. The method further includes travelling on the road section using a detector vehicle and detecting lateral distances from objects with a detector device arranged in the detector vehicle. The method also includes comparing the detected lateral distances with the usable width, and detecting the parking area based on the comparison.
Type: Grant
Filed: July 26, 2016
Date of Patent: March 17, 2020
Assignee: Robert Bosch GmbH
Inventors: Philipp Mayer, Carlos Eduardo Cunha, Thorben Schick, Peter Christian Abeling
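The comparison of detected lateral distances against the usable width can be sketched as a simple threshold test; the minimum parking-spot depth used below is an assumed value, not one from the patent.

```python
def detect_parking(lateral_distances_m, usable_width_m, min_depth_m=1.8):
    """Flag measurement points where the lateral clearance exceeds the
    usable (passable) width by at least one parking-spot depth.
    The 1.8 m depth threshold is an illustrative assumption."""
    return [d >= usable_width_m + min_depth_m for d in lateral_distances_m]

# Example: usable width 3.5 m; readings of 5.3 m or more suggest parking space.
flags = detect_parking([3.0, 5.5, 6.0], 3.5)
```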
-
Patent number: 10592757
Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The cognitive state data from multiple sources is tagged. The cognitive state data from the multiple sources is aggregated. The cognitive state data is interpolated when collection is intermittent. The cognitive state analysis is interpolated when the cognitive state data is intermittent.
Type: Grant
Filed: February 1, 2018
Date of Patent: March 17, 2020
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 10592758
Abstract: An occupant monitoring device which is provided in a vehicle, and monitors one or more occupants riding in the vehicle. The occupant monitoring device includes: a recognizer that recognizes one or more occupants riding in the vehicle; a monitor that monitors the one or more occupants riding in the vehicle according to a result of recognition of the one or more occupants by the recognizer; and a start controller that individually starts or stops the recognizer and the monitor. The start controller starts the recognizer in a stopped state of the monitor.
Type: Grant
Filed: December 3, 2018
Date of Patent: March 17, 2020
Assignee: SUBARU CORPORATION
Inventors: Ryota Nakamura, Masayuki Marubashi, Keita Onishi
-
Patent number: 10592759
Abstract: An object recognition apparatus is disclosed. The present apparatus includes a storage unit for obtaining an initial image of a preset object and storing the initial image as a reference image; and a control unit for obtaining a first additional image of the preset object, determining whether the size of the first additional image relative to the initial image meets a first preset condition, and additionally storing the first additional image as a reference image if the first additional image meets the first preset condition.
Type: Grant
Filed: March 9, 2015
Date of Patent: March 17, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Byoung-hyun Kim, Sang-yoon Kim, Kyoung-jae Park, Ki-jun Jeong, Eun-heui Jo
-
Patent number: 10592761
Abstract: The present disclosure discloses an image processing method and device. The image processing method includes: dividing a detection image into a plurality of first subregions, dividing a template image into a plurality of second subregions, calculating a principal rotation direction of each first subregion with respect to the corresponding second subregion; and calculating a principal rotation direction of the detection image according to the principal rotation directions of the plurality of first subregions.
Type: Grant
Filed: March 8, 2018
Date of Patent: March 17, 2020
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Jinglin Yang
-
Patent number: 10592762
Abstract: Embodiments disclosed herein generally relate to a method, system, and computer readable medium for generating a thumbnail for a media file. A web client application server receives the media file having metadata associated therewith. The web client application server generates an interest point area. The interest point area includes one or more interest points in the media file. The web client application server aligns a thumbnail area with respect to the interest point area. The web client application server displays a portion of the media file in the thumbnail area. The portion of the media file that is displayed includes at least a portion of the interest point area.
Type: Grant
Filed: February 1, 2018
Date of Patent: March 17, 2020
Assignee: SMUGMUG, INC.
Inventors: David Parry, Aaron Meyers, Bobby Yang
-
Patent number: 10592763
Abstract: Devices and a method are provided for providing feedback to a user. In one implementation, the method comprises obtaining a plurality of images from an image sensor. The image sensor is configured to be positioned for movement with the user's head. The method further comprises monitoring the images, and determining whether relative motion occurs between a first portion of a scene captured in the plurality of images and other portions of the scene captured in the plurality of images. If the first portion of the scene moves less than at least one other portion of the scene, the method comprises obtaining contextual information from the first portion of the scene. The method further comprises providing the feedback to the user based on at least part of the contextual information.
Type: Grant
Filed: June 11, 2019
Date of Patent: March 17, 2020
Assignee: ORCAM TECHNOLOGIES LTD.
Inventors: Yonatan Wexler, Amnon Shashua
-
Patent number: 10592764
Abstract: Systems and methods for reconstructing a document from a series of document images. An example method comprises: receiving a plurality of image frames, wherein each image frame of the plurality of image frames contains at least a part of an image of an original document; identifying a plurality of visual features in the plurality of image frames; performing spatial alignment of the plurality of image frames based on matching the identified visual features; splitting each of the plurality of image frames into a plurality of image fragments; identifying one or more text-depicting image fragments among the plurality of image fragments; associating each identified text-depicting image fragment with an image frame in which that image fragment has an optimal value of a pre-defined quality metric among values of the quality metric for that image fragment in the plurality of image frames; and producing a reconstructed image frame by blending image fragments from the associated image frames.
Type: Grant
Filed: September 27, 2017
Date of Patent: March 17, 2020
Assignee: ABBYY Production LLC
Inventors: Vasily Loginov, Ivan Zagaynov, Irina Karatsapova
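The per-fragment quality-metric selection can be sketched with a crude sharpness stand-in. The patent does not name the actual metric, so the mean adjacent-pixel difference used here is purely for illustration.

```python
def sharpness(fragment):
    """Mean absolute difference between horizontally adjacent pixels in a
    grayscale fragment (list of rows of intensities). A crude stand-in
    for the patent's unspecified pre-defined quality metric."""
    diffs = [abs(a - b) for row in fragment for a, b in zip(row, row[1:])]
    return sum(diffs) / len(diffs)

def best_frame_per_fragment(frames_fragments):
    """frames_fragments[f][i] is fragment i as it appears in frame f.
    For each fragment index, return the frame where it scores highest;
    those winning copies would then be blended into the reconstruction."""
    n_frag = len(frames_fragments[0])
    return [max(range(len(frames_fragments)),
                key=lambda f: sharpness(frames_fragments[f][i]))
            for i in range(n_frag)]
```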
-
Patent number: 10592765
Abstract: Examples of various methods and systems are provided for information generation from images of a building. In one example, 2D building and/or building element information can be generated from 2D images of the building that are overlapping. 3D building and building element information can be generated from the 2D building and/or building element information. The 2D image information can be combined with 3D information about the building and/or building element to generate projective geometry information. Clustered 3D information can be generated by partitioning and grouping 3D data points. An information set associated with the building and/or at least one building element can then be generated.
Type: Grant
Filed: January 19, 2018
Date of Patent: March 17, 2020
Assignee: Pointivo, Inc.
Inventors: Habib Fathi, Miguel M. Serrano, Bradden John Gross, Daniel L. Ciprari
-
Patent number: 10592766
Abstract: An image processing apparatus includes a controller configured to execute: acquiring objective image data representing an objective image which includes a first character and a second character; analyzing first partial image data and specifying the first character in an image represented by the first partial image data; and generating processed image data representing a processed image which includes the first character and the second character by using the objective image data. The objective image data includes the first partial image data in a bitmap format which represents the image including the first character and second partial image data in a vector format which represents an image including the second character. The processed image data includes: first processed data representing an image including the first character; and second processed data representing an image including the second character.
Type: Grant
Filed: February 24, 2017
Date of Patent: March 17, 2020
Assignee: BROTHER KOGYO KABUSHIKI KAISHA
Inventor: Kazuhide Sawada
-
Patent number: 10592767
Abstract: Approaches for interpretable counting for visual question answering include a digital image processor, a language processor, a scorer, and a counter. The digital image processor identifies objects in an image, maps the identified objects into an embedding space, generates bounding boxes for each of the identified objects, and outputs the embedded objects paired with their bounding boxes. The language processor embeds a question into the embedding space. The scorer determines scores for the identified objects. Each respective score determines how well a corresponding one of the identified objects is responsive to the question. The counter determines a count of the objects in the digital image that are responsive to the question based on the scores. The count and a corresponding bounding box for each object included in the count are output. In some embodiments, the counter determines the count interactively based on interactions between counted and uncounted objects.
Type: Grant
Filed: January 29, 2018
Date of Patent: March 17, 2020
Assignee: salesforce.com, inc.
Inventors: Alexander Richard Trott, Caiming Xiong, Richard Socher
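The counting step downstream of the scorer can be sketched as a threshold over per-object scores. The threshold value and the non-interactive counting are simplifying assumptions: the patent also describes an interactive counter that considers relations between counted and uncounted objects.

```python
def count_responsive(scores, boxes, threshold=0.5):
    """Count objects whose relevance score to the question exceeds a
    threshold (assumed value), returning both the count and the bounding
    boxes of counted objects, so the count stays interpretable."""
    counted = [box for score, box in zip(scores, boxes) if score > threshold]
    return len(counted), counted
```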
-
Patent number: 10592768
Abstract: A Hough processor comprises a pre-processor and a Hough transformation unit. The pre-processor is configured to receive a plurality of samples respectively comprising an image, and to rotate or reflect the image of the respective sample. The Hough transformation unit is configured to collect a predetermined searched pattern in the plurality of samples on the basis of a plurality of versions. The Hough transformation unit comprises a characteristic being dependent on the searched pattern, which is adjustable according to the searched pattern.
Type: Grant
Filed: August 4, 2016
Date of Patent: March 17, 2020
Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors: Daniel Krenzer, Albrecht Hess, András Kátai
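The voting scheme underlying any Hough transformation unit can be sketched in software as the classic (rho, theta) accumulator for straight lines; the adjustable, pattern-dependent hardware design in the patent is far more general than this minimal example.

```python
import math

def hough_lines(points, n_theta=180, rho_step=1.0, max_rho=100.0):
    """Classic Hough voting: each edge point votes for every (rho, theta)
    line passing through it; peaks in the accumulator correspond to
    detected lines. Parameters here are illustrative defaults."""
    n_rho = int(2 * max_rho / rho_step) + 1
    acc = [[0] * n_theta for _ in range(n_rho)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + max_rho) / rho_step))
            if 0 <= r < n_rho:
                acc[r][t] += 1
    return acc
```

For example, four collinear points on the horizontal line y = 5 all vote into the same cell at theta = 90 degrees, rho = 5, producing an accumulator peak equal to the number of points.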
-
Patent number: 10592769
Abstract: Techniques are described for submitting a video clip as a query by a user. A process retrieves images and information associated with the images in response to the query. The process decomposes the video clip into a sequence of frames to extract the features in a frame and to quantize the extracted features into descriptive words. The process further tracks the extracted features as points in the frame, a first set of points corresponding to a second set of points in consecutive frames, to construct a sequence of points. Then the process identifies the points that satisfy criteria of being stable points and being centrally located in the frame to represent the video clip as a bag of descriptive words for searching for images and information related to the video clip.
Type: Grant
Filed: August 18, 2016
Date of Patent: March 17, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Linjun Yang, Xian-Sheng Hua, Yang Cai
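The quantize-features-into-descriptive-words step can be sketched as nearest-centroid assignment over a visual vocabulary. The Euclidean metric and the toy two-word vocabulary are assumptions for illustration; a real system would learn a much larger vocabulary from training data.

```python
def quantize(feature, vocabulary):
    """Map a feature vector to the index of its nearest 'visual word'
    in a pre-built vocabulary (Euclidean distance assumed)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocabulary)), key=lambda i: dist2(feature, vocabulary[i]))

def bag_of_words(features, vocabulary):
    """Represent a clip's tracked, stable features as visual-word counts,
    i.e. the 'bag of descriptive words' used for retrieval."""
    bag = {}
    for feature in features:
        word = quantize(feature, vocabulary)
        bag[word] = bag.get(word, 0) + 1
    return bag
```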
-
Patent number: 10592771
Abstract: This document describes systems, methods, devices, and other techniques for accessing a first video showing a first two-dimensional scene of an environment and captured by a first camera located in the environment having a first field of view; detecting one or more objects shown in the first video; analyzing the first video to determine one or more features of each of the detected objects shown in the first video; accessing a second video showing a second 2D scene of the environment and captured by a second camera located in the environment having a second field of view; detecting one or more objects shown in the second video; analyzing the second video to determine one or more features of each of the detected objects shown in the second video; and correlating one or more objects shown in the first video with one or more objects shown in the second video.
Type: Grant
Filed: December 30, 2016
Date of Patent: March 17, 2020
Assignee: Accenture Global Solutions Limited
Inventors: Anders Astrom, Vitalie Schiopu, Philippe Daniel
-
Patent number: 10592772
Abstract: An attachment tool includes an image capturing unit configured to capture an image of a random pattern provided on a comparison region of a part; and an identification result-outputting unit configured to output a part identification result obtained by comparing an image characteristic of the captured image of the random pattern with a previously stored image characteristic of a random pattern of a part.
Type: Grant
Filed: March 24, 2016
Date of Patent: March 17, 2020
Assignee: NEC CORPORATION
Inventors: Rui Ishiyama, Takayuki Abe, Kayato Sekiya
-
Patent number: 10592773
Abstract: A user captures images on a user computing device. The user signs in to an application, which transmits the user's images to an account management system, which recognizes objects within the images and assigns one or more object categories to the images and recognizes multiple images comprising objects in a common object category. After receiving user consent, the application groups the images on the user computing device according to object category. The user computing device captures an image of another object. The application transmits the image to the account management system, which detects objects within the image, identifies the object category, and saves the image to the corresponding object category group on the user computing device. After receiving user consent, the account management system finds information for each image in the object category group of images and transmits the information to the user computing device.
Type: Grant
Filed: January 11, 2018
Date of Patent: March 17, 2020
Assignee: Google LLC
Inventor: Maryam Tohidi
-
Patent number: 10592774
Abstract: A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.
Type: Grant
Filed: August 11, 2017
Date of Patent: March 17, 2020
Assignee: Lawrence Livermore National Security, LLC
Inventors: Peer-Timo Bremer, Hyojin Kim, Jayaraman J. Thiagarajan
-
Patent number: 10592775
Abstract: An image processing method includes steps of receiving an image sequence; when at least one object appears in the image sequence, analyzing a moving trajectory of each object; extracting at least one characteristic point from each moving trajectory; classifying the at least one characteristic point of each moving trajectory within a predetermined time period into at least one cluster; and storing at least one characteristic parameter of each cluster.
Type: Grant
Filed: September 4, 2017
Date of Patent: March 17, 2020
Assignee: VIVOTEK INC.
Inventors: Cheng-Chieh Liu, Chih-Yen Lin
-
Patent number: 10592776
Abstract: The present disclosure is directed towards methods and systems for determining multimodal image edits for a digital image. The systems and methods receive a digital image and analyze the digital image. The systems and methods further generate a feature vector of the digital image, wherein each value of the feature vector represents a respective feature of the digital image. Additionally, based on the feature vector and determined latent variables, the systems and methods generate a plurality of determined image edits for the digital image, which includes determining a plurality of sets of potential image attribute values and selecting a plurality of sets of determined image attribute values from the plurality of sets of potential image attribute values, wherein each set of determined image attribute values comprises a determined image edit of the plurality of image edits.
Type: Grant
Filed: February 8, 2017
Date of Patent: March 17, 2020
Assignee: ADOBE INC.
Inventors: Stephen DiVerdi, Matthew Douglas Hoffman, Ardavan Saeedi
-
Patent number: 10592777
Abstract: Systems and methods for generating a slate of ranked items are provided. In one example embodiment, a computer-implemented method includes inputting a sequence of candidate items into a machine-learned model, and obtaining, in response to inputting the sequence of candidate items into the machine-learned model, an output of the machine-learned model that includes a ranking of the candidate items that presents a diverse set of the candidate items at the top positions in the ranking, such that one or more highly relevant candidate items can be demoted in the ranking.
Type: Grant
Filed: May 20, 2019
Date of Patent: March 17, 2020
Assignee: Google LLC
Inventors: Ofer Pinhas Meshi, Irwan Bello, Sayali Satish Kulkarni, Sagar Jain
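The diversity-versus-relevance trade-off described above can be illustrated with a classical, non-learned analogue: greedy re-ranking in the style of the maximal-marginal-relevance heuristic. The patented approach uses a machine-learned model; the similarity function, relevance scores, and lambda weight below are all illustrative assumptions.

```python
# Greedy diversified re-ranking: each step picks the remaining item that
# best trades relevance against similarity to items already placed, so
# near-duplicates of a top item are demoted even when highly relevant.

def diversify(candidates, similarity, lam=0.5):
    """candidates: list of (item, relevance). Returns items ordered so
    that items similar to already-placed ones are pushed down."""
    remaining = list(candidates)
    slate = []
    while remaining:
        def score(pair):
            item, rel = pair
            max_sim = max((similarity(item, s) for s, _ in slate), default=0.0)
            return lam * rel - (1 - lam) * max_sim
        best = max(remaining, key=score)
        slate.append(best)
        remaining.remove(best)
    return [item for item, _ in slate]

# Toy example: items are tag tuples; similarity is Jaccard tag overlap.
def sim(a, b):
    return len(set(a) & set(b)) / len(set(a) | set(b))

cands = [(("news", "tech"), 0.9), (("news", "tech"), 0.85), (("sports",), 0.6)]
slate = diversify(cands, sim)
```

The second "news/tech" item, despite a relevance of 0.85, lands below the less relevant but novel "sports" item, which is the demotion behavior the abstract describes.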
-
Patent number: 10592778
Abstract: A method of object detection includes receiving a first image taken from a first perspective by a first camera and receiving a second image taken from a second perspective, different from the first perspective, by a second camera. Each pixel in the first image is offset relative to a corresponding pixel in the second image by a predetermined offset distance, resulting in offset first and second images. A particular pixel of the offset first image depicts a same object locus as a corresponding pixel in the offset second image only if the object locus is at an expected object-detection distance from the first and second cameras. The method includes recognizing that a target object is imaged by the particular pixel of the offset first image and the corresponding pixel of the offset second image.
Type: Grant
Filed: March 6, 2018
Date of Patent: March 17, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
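The fixed-disparity idea above can be sketched in a few lines: shift one image by a predetermined pixel offset, and a pixel then agrees with its counterpart only when the imaged surface lies at the expected distance. Images as lists of intensity rows, the offset value, and the match tolerance are assumptions for illustration.

```python
# Pixels whose left/right intensities agree after shifting the right
# image by `offset` columns correspond to surfaces at the expected
# object-detection distance; everything else misaligns and mismatches.

def detect_at_expected_distance(left, right, offset, tol=10):
    """Return (row, col) pixels of `left` whose intensity matches the
    pixel `offset` columns over in `right`, within tolerance `tol`."""
    hits = []
    for r, row in enumerate(left):
        for c in range(len(row)):
            rc = c + offset
            if 0 <= rc < len(right[r]):
                if abs(row[c] - right[r][rc]) <= tol:
                    hits.append((r, c))
    return hits

# An object at the expected distance appears `offset` pixels apart in
# the two views, so its pixel (and the background) line up:
left = [[0, 0, 200, 0]]
right = [[0, 0, 0, 200]]
hits = detect_at_expected_distance(left, right, offset=1)
```

An object nearer or farther than the expected distance would appear at a different disparity, fail the comparison, and be ignored, which is what restricts detection to the expected range.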
-
Patent number: 10592779
Abstract: Mechanisms are provided to implement a machine learning training model. The machine learning training model trains an image generator of a generative adversarial network (GAN) to generate medical images approximating actual medical images. The machine learning training model augments a set of training medical images to include one or more generated medical images generated by the image generator of the GAN. The machine learning training model trains a machine learning model based on the augmented set of training medical images to identify anomalies in medical images. The trained machine learning model is applied to new medical image inputs to classify the medical images as having an anomaly or not.
Type: Grant
Filed: December 21, 2017
Date of Patent: March 17, 2020
Assignee: International Business Machines Corporation
Inventors: Ali Madani, Mehdi Moradi, Tanveer F. Syeda-Mahmood
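The augmentation step above amounts to mixing generator outputs into the real training set before the downstream classifier is trained. The sketch below is conceptual: the generator is a stub that emits random "images", standing in for a trained GAN generator, and the provenance tags are an assumption added for clarity.

```python
# Conceptual sketch of GAN-based training-set augmentation: synthetic
# images from a (stubbed) generator are appended to the real images,
# producing the augmented set the anomaly classifier would train on.

import random

def augment_training_set(real_images, generator, n_synthetic):
    """Return real images plus `n_synthetic` generated ones, each
    tagged with its provenance."""
    synthetic = [generator() for _ in range(n_synthetic)]
    return ([(img, "real") for img in real_images]
            + [(img, "synthetic") for img in synthetic])

# Stub generator: 4-pixel random "images" in place of GAN output.
random.seed(0)
fake_generator = lambda: [random.random() for _ in range(4)]
real = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
augmented = augment_training_set(real, fake_generator, n_synthetic=3)
```

In the patented pipeline the value of this step comes from the generator approximating real medical images well enough that the classifier sees a larger, more varied training distribution.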
-
Patent number: 10592780
Abstract: In order for feature extractors to operate with sufficient accuracy, a high degree of training is required. In this situation, a neural network implementing the feature extractor may be trained by providing it with images having known correspondence. A 3D model of a city may be utilized in order to train a neural network for location detection. 3D models are sophisticated and allow manipulation of viewer perspective and ambient features such as day/night sky variations, weather variations, and occlusion placement. Various manipulations may be executed in order to generate vast numbers of image pairs having known correspondence despite having variations. These image pairs with known correspondence may be utilized to train the neural network to be able to generate feature maps from query images and identify correspondence between query image feature maps and reference feature maps. This training can be accomplished without requiring the capture of real images with known correspondence.
Type: Grant
Filed: March 30, 2018
Date of Patent: March 17, 2020
Assignee: WHITE RAVEN LTD.
Inventors: Roni Gurvich, Idan Ilan, Ofer Avni, Stav Yagev
-
Patent number: 10592781
Abstract: A method for allowing a computer to classify an input containing data. A list of categories is received. A sub-list of categories is selected, wherein the sub-list comprises those categories in the list that have corresponding distinct correlation scores above a predetermined value. Input data that tends to over-correlate to the classification system is received. A truncated snapshot is generated, the truncated snapshot comprising only attributes from the plurality of input attributes that have corresponding input categories that match categories in the sub-list of categories. The data is classified using the truncated snapshot and the classification system.
Type: Grant
Filed: July 18, 2014
Date of Patent: March 17, 2020
Assignee: The Boeing Company
Inventor: John Desmond Whelan
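The truncation step above can be sketched directly: keep only the input attributes whose category belongs to the sub-list of categories scoring above the cutoff. The correlation scores, cutoff value, and attribute names below are illustrative assumptions.

```python
# Minimal sketch of building a truncated snapshot: filter attributes to
# those whose category survives the correlation-score cutoff, so data
# that over-correlates through low-scoring categories is dropped before
# classification.

def truncate_snapshot(attributes, correlation_scores, cutoff):
    """attributes: {attribute_name: category}. Returns only the
    attributes whose category's correlation score exceeds `cutoff`."""
    sub_list = {cat for cat, score in correlation_scores.items()
                if score > cutoff}
    return {name: cat for name, cat in attributes.items()
            if cat in sub_list}

scores = {"engine": 0.9, "paint": 0.2, "avionics": 0.8}
attrs = {"rpm": "engine", "hue": "paint", "altimeter": "avionics"}
snapshot = truncate_snapshot(attrs, scores, cutoff=0.5)
```

The classifier then operates on `snapshot` rather than the full attribute set, which is the mechanism the abstract describes for suppressing over-correlation.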
-
Patent number: 10592782
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: obtaining from a user one or more data queries; identifying a product of interest in response to the one or more data queries; examining a plurality of product records to determine a set of related products that are related to the product of interest, wherein the examining includes performing image analysis to extract one or more product topic classifiers from product image data representing one or more products; and providing one or more outputs in response to the examining.
Type: Grant
Filed: January 22, 2018
Date of Patent: March 17, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Lisa Seacat Deluca, Jeremy A. Greenberger
-
Patent number: 10592783
Abstract: A feature extraction is performed on transaction data to obtain a user classification feature and a transaction classification feature. A first dimension feature is constructed based on the user classification feature and the transaction classification feature. A dimension reduction processing is performed on the first dimension feature to obtain a second dimension feature. A probability that the transaction data relates to a risky transaction is determined based on a decision classification of the second dimension feature, where the decision classification is based on a pre-trained deep forest network including a plurality of levels of decision tree forest sets.
Type: Grant
Filed: March 27, 2019
Date of Patent: March 17, 2020
Assignee: Alibaba Group Holding Limited
Inventors: Wenhao Zheng, Yalin Zhang, Longfei Li
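The multi-level deep-forest decision can be sketched by its core re-representation idea: each level's output probabilities are appended to the feature vector fed to the next level. The "forests" below are stub scoring functions, not trained models, and the cascade shape and final averaging are illustrative assumptions rather than the patented network.

```python
# Hedged sketch of a deep-forest-style cascade: every level scores the
# current feature vector, and its outputs are concatenated with the
# original features before the next level runs. The final level's mean
# score stands in for the risk probability.

def cascade_predict(features, levels):
    """levels: list of lists of scorers; each scorer maps a feature
    vector to a probability in [0, 1]. Returns the final level's mean."""
    vec = list(features)
    probs = []
    for level in levels:
        probs = [scorer(vec) for scorer in level]
        vec = list(features) + probs   # augment features with level outputs
    return sum(probs) / len(probs)

# Stub scorers in place of trained decision-tree forests:
level1 = [lambda v: min(1.0, sum(v) / 10), lambda v: min(1.0, max(v))]
level2 = [lambda v: sum(v) / len(v)]
risk = cascade_predict([0.5, 1.0, 2.0], [level1, level2])
```

In the patented system the scorers would be decision-tree forest sets, and the input vector would be the dimension-reduced combination of the user and transaction classification features.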
-
Patent number: 10592784
Abstract: A system and method to perform detection based on sensor fusion includes obtaining data from two or more sensors of different types. The method also includes extracting features from the data from the two or more sensors and processing the features to obtain a vector associated with each of the two or more sensors. The method further includes concatenating the two or more vectors obtained from the two or more sensors to obtain a fused vector, and performing the detection based on the fused vector.
Type: Grant
Filed: February 20, 2018
Date of Patent: March 17, 2020
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Shuqing Zeng, Wei Tong, Yasen Hu, Mohannad Murad, David R. Petrucci, Gregg R. Kittinger
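The fusion step above reduces to a simple operation: per-sensor features become vectors that are concatenated into one fused vector handed to the detector. The two-element feature extractor below (mean and peak of the raw readings) is a stand-in assumption; the patented method's extractors are unspecified here.

```python
# Sensor fusion by concatenation: each sensor's readings are turned
# into a feature vector, and the vectors are joined end to end into a
# single fused vector for downstream detection.

def extract_features(readings):
    """Toy per-sensor features: mean and peak of the raw readings."""
    return [sum(readings) / len(readings), max(readings)]

def fuse(sensor_data):
    """Concatenate each sensor's feature vector into one fused vector."""
    fused = []
    for readings in sensor_data:
        fused.extend(extract_features(readings))
    return fused

camera = [0.2, 0.4, 0.6]
radar = [1.0, 3.0]
fused = fuse([camera, radar])   # [mean_cam, max_cam, mean_radar, max_radar]
```

A detector trained on the fused vector can then exploit cross-sensor structure (e.g., a camera peak coinciding with a radar return) that neither sensor's vector exposes alone.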
-
Patent number: 10592785
Abstract: Methods, apparatus, and systems are provided for integrated driver expression recognition and vehicle interior environment classification to detect driver condition for safety. A method includes obtaining an image of a driver of a vehicle and an image of an interior environment of the vehicle. Using a machine learning method, the images are processed to classify a condition of the driver and of the interior environment of the vehicle. The machine learning method includes a general convolutional neural network (CNN) and a CNN with adaptive filters. The adaptive filters are determined based on the influence of the filters. The classification results are combined and compared with predetermined thresholds to determine whether a decision can be made based on existing information. Additional information is requested by self-motivated learning if a decision cannot be made, and safety is determined based on the combined classification results. A warning is provided to the driver based on the safety determination.
Type: Grant
Filed: July 12, 2017
Date of Patent: March 17, 2020
Assignee: Futurewei Technologies, Inc.
Inventors: Yingxuan Zhu, Lifeng Liu, Xiaotian Yin, Jun Zhang, Jian Li
-
Patent number: 10592786
Abstract: Methods and systems for generating an annotated dataset for training a deep tracking neural network, and training of the neural network using the annotated dataset. For each object in each frame of a dataset, one or more likelihood functions are calculated to correlate the feature score of the object with respective feature scores each associated with one or more previously assigned target identifiers (IDs) in a selected range of frames. A target ID is assigned to the object by assigning the previously assigned target ID associated with the highest calculated likelihood or by assigning a new target ID. Track management is performed according to a predefined track management scheme to assign a track type to the object. This is performed for all objects in all frames of the dataset. The resulting annotated dataset contains target IDs and track types assigned to all objects in all frames.
Type: Grant
Filed: August 14, 2017
Date of Patent: March 17, 2020
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Ehsan Taghavi
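The ID-assignment loop above can be sketched as follows. The Gaussian likelihood, the threshold for minting a new ID, and scalar feature scores are illustrative assumptions; the patent leaves the likelihood functions and feature scores general.

```python
# Sketch of the annotation loop: score each detected object against the
# feature scores of previously assigned target IDs, reuse the best ID
# when its likelihood clears a threshold, otherwise mint a new ID.

import math

def assign_ids(frames, threshold=0.1, sigma=1.0):
    """frames: list of lists of per-object feature scores (floats).
    Returns per-frame lists of assigned target IDs."""
    history = {}   # target_id -> most recent feature score
    next_id = 0
    annotated = []
    for objects in frames:
        ids = []
        for score in objects:
            best_id, best_like = None, threshold
            for tid, prev in history.items():
                # Gaussian likelihood of the score given a prior track.
                like = math.exp(-((score - prev) ** 2) / (2 * sigma ** 2))
                if like > best_like:
                    best_id, best_like = tid, like
            if best_id is None:        # no track is likely enough
                best_id = next_id
                next_id += 1
            history[best_id] = score
            ids.append(best_id)
        annotated.append(ids)
    return annotated

# Two frames, two objects each; similar scores keep the same target ID:
tracks = assign_ids([[0.0, 5.0], [0.1, 5.2]])
```

Track management (assigning track types such as tentative or confirmed) would run on top of these ID assignments; it is omitted here.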
-
Patent number: 10592787
Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and adversarial training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system adversarially trains a font recognition neural network by minimizing font classification loss while at the same time maximizing glyph classification loss. By employing an adversarially trained font classification neural network, the font recognition system can improve overall font recognition by removing the negative side effects of diverse glyph content.
Type: Grant
Filed: November 8, 2017
Date of Patent: March 17, 2020
Assignee: ADOBE INC.
Inventors: Yang Liu, Zhaowen Wang, Hailin Jin
-
Patent number: 10592788
Abstract: Described is a system for recognition of unseen and untrained patterns. A graph is generated based on visual features from input data, the input data including labeled instances and unseen instances. Semantic representations of the input data are assigned as graph signals based on the visual features. The semantic representations are aligned with visual representations of the input data using a regularization method applied directly in a spectral graph wavelets (SGW) domain. The semantic representations are then used to generate labels for the unseen instances. The unseen instances may represent unknown conditions for an autonomous vehicle.
Type: Grant
Filed: December 19, 2017
Date of Patent: March 17, 2020
Assignee: HRL Laboratories, LLC
Inventors: Shay Deutsch, Kyungnam Kim, Yuri Owechko
-
Patent number: 10592790
Abstract: In some examples, an imaging device may include a controller including processing circuitry to detect a first quantity of rows of pixels to be included as a first band of a contone image; process the pixels of each row of the first band in parallel raster order; detect a second quantity of rows of pixels to be included as a second band of the contone image; and process the pixels of each row of the second band in response to the completion of the pixels of the first band, where the rows of the second band are processed in parallel in serpentine order with respect to the first band.
Type: Grant
Filed: November 8, 2018
Date of Patent: March 17, 2020
Assignees: Hewlett-Packard Development Company, L.P., Purdue Research Foundation
Inventors: Yafei Mao, Jan Allebach, Lluis Abello Rosello, Joan Vidal Fortia, Robert Ulichney, Utpal Kumar Sarkar
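The band traversal described above can be sketched by recording pixel visit order: the first band's rows run in raster order (left to right) and the next band runs in serpentine order relative to it (right to left). The band height, the serial rather than parallel traversal, and "processing" as merely recording coordinates are simplifying assumptions.

```python
# Sketch of banded raster/serpentine traversal: alternate bands flip
# the column direction, so the second band is processed in serpentine
# order with respect to the first.

def traverse_bands(image, band_height):
    """Yield (row, col) coordinates in the order pixels would be
    processed: even-numbered bands left-to-right, odd right-to-left."""
    order = []
    for b in range(0, len(image), band_height):
        band = image[b:b + band_height]
        reverse = (b // band_height) % 2 == 1
        for r, row in enumerate(band, start=b):
            cols = range(len(row) - 1, -1, -1) if reverse else range(len(row))
            for c in cols:
                order.append((r, c))
    return order

image = [[1, 2], [3, 4], [5, 6], [7, 8]]   # 4 rows, bands of 2 rows
order = traverse_bands(image, band_height=2)
```

In the hardware described by the abstract, the rows within each band are processed in parallel; the sketch only shows the resulting column ordering between bands.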
-
Patent number: 10592791
Abstract: Provided are an image processing apparatus, an image processing method, and a storage medium that can contribute to the formation of electric circuits having different resistance values without causing an increase in cost. The image processing apparatus generates print data for printing an electrically conductive print image on a print medium by ejecting a metal particle-containing ink from an ejection device. The image processing apparatus has: a setting unit configured to set a printing condition based on a resistance value of the electrically conductive print image, so as to change at least one of an amount of ink droplets of the metal particle-containing ink contacting each other and a time to be taken for ejected metal particle-containing ink droplets to contact each other; and a generation unit configured to generate the print data based on image data for the electrically conductive print image and the printing condition.
Type: Grant
Filed: July 2, 2018
Date of Patent: March 17, 2020
Assignee: Canon Kabushiki Kaisha
Inventor: Minako Kato