Patents Issued on September 8, 2020
-
Patent number: 10769457
Abstract: Provided herein is a system and method that detects an airborne object and determines a driving action based on the airborne object. The system comprises one or more sensors; one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the system to perform detecting an airborne object within a detection radius of a vehicle. In response to detecting the airborne object, the system performs tracking the airborne object to obtain 3-D coordinate information of the airborne object at distinct times, determining a probability that the airborne object will collide with the one or more sensors based on the 3-D coordinate information, determining a driving action of a vehicle based on the determined probability, and performing the driving action.
Type: Grant
Filed: September 26, 2019
Date of Patent: September 8, 2020
Assignee: Pony AI Inc.
Inventors: Peter G. Diehl, Robert Dingli
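The collision-probability step above can be sketched in a few lines. The constant-velocity extrapolation, the collision radius, and the linear distance-to-probability mapping are all illustrative assumptions, not details from the patent:

```python
import math

def collision_probability(track, sensor_pos, radius=0.5):
    """Toy collision-probability estimate from 3-D track samples.

    track: list of (t, (x, y, z)) samples; sensor_pos: (x, y, z).
    Extrapolates constant velocity from the last two samples, finds the
    closest approach to the sensor, and maps that distance to [0, 1]
    against an assumed collision radius.
    """
    (t0, p0), (t1, p1) = track[-2], track[-1]
    dt = t1 - t0
    v = [(b - a) / dt for a, b in zip(p0, p1)]          # velocity estimate
    r = [s - p for s, p in zip(sensor_pos, p1)]          # sensor relative to object
    v2 = sum(c * c for c in v)
    if v2 == 0:
        d = math.dist(p1, sensor_pos)
    else:
        # time of closest approach along the extrapolated path (clamped to future)
        tau = max(0.0, sum(rc * vc for rc, vc in zip(r, v)) / v2)
        closest = [p + vc * tau for p, vc in zip(p1, v)]
        d = math.dist(closest, sensor_pos)
    return max(0.0, 1.0 - d / radius) if d < radius else 0.0

def driving_action(prob, threshold=0.5):
    """Map the probability to a (hypothetical) driving action."""
    return "evade" if prob >= threshold else "continue"
```

An object descending straight toward the sensor yields probability 1.0 and an "evade" action; one passing far to the side yields 0.0.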
-
Patent number: 10769458
Abstract: The method of the invention comprises: obtaining a sequence of at least two images, with different levels of illumination; extracting the region containing the sign in the image; calculating the luminance values of the signs; and obtaining the difference in luminance of the sign corresponding to the two levels of illumination. The value obtained is the luminance of the sign (11) corresponding to an illumination equal to the difference between the illuminations, or additional illumination. This result is based on the additive property of luminance, according to which the luminance of a sign is the sum of the luminance produced by each source of illumination. A basic illumination device (5), an additional illumination device (7), at least one camera for taking images, and image recording, positioning and synchronism systems are required to implement the method.
Type: Grant
Filed: April 4, 2018
Date of Patent: September 8, 2020
Assignee: DBI/CIDAUT Technologies, LLC
Inventors: Francisco Javier Burgoa Roman, Jose Antonio Guiterrez Mendez, Alberto Mansilla Gallo, Diego Ortiz de Lejarazu Machin, Alberto Senen Perales Garcia
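The additive-luminance computation reduces to a subtraction of region means. This sketch assumes images are given as 2-D lists of luminance values and the sign region is already known:

```python
def mean_luminance(image, region):
    """Mean luminance over a sign's pixels.

    image: 2-D list of luminance values; region: iterable of (row, col)
    coordinates of the pixels belonging to the sign.
    """
    vals = [image[r][c] for r, c in region]
    return sum(vals) / len(vals)

def additional_luminance(img_basic, img_both, region):
    """Luminance due to the additional source alone, by additivity:
    L_additional = L_(basic + additional) - L_basic."""
    return mean_luminance(img_both, region) - mean_luminance(img_basic, region)
```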
-
Patent number: 10769459
Abstract: A method and a system are provided for monitoring driving conditions. The method includes receiving video data comprising video frames from one or more sensors where the video frames may represent an interior or exterior of a vehicle, detecting and recognizing one or more features from the video data where each feature is associated with at least one driving condition, extracting the one or more features from the video data, developing intermediate features by associating and aggregating the extracted features among the extracted features, and developing a semantic meaning for the at least one driving condition by utilizing the intermediate features and the extracted one or more features.
Type: Grant
Filed: August 30, 2016
Date of Patent: September 8, 2020
Assignee: SRI International
Inventors: Amir Tamrakar, Gregory Ho, David Salter, Jihua Huang
-
Patent number: 10769460
Abstract: The driver condition detection system includes a driver monitor camera capturing a face of a driver of a vehicle and generating a facial image of the driver, and a driver condition detection part configured to detect a condition of the driver based on the facial image. If some of the driver's face parts are hidden in the facial image, the driver condition detection part is configured to detect the condition of the driver based on the face parts that are not hidden in the facial image. The face parts of the driver are the mouth, nose, right eye, and left eye of the driver.
Type: Grant
Filed: February 8, 2018
Date of Patent: September 8, 2020
Assignee: Toyota Jidosha Kabushiki Kaisha
Inventor: Takeshi Matsumura
-
Patent number: 10769461
Abstract: Distracted driver detection is provided. In various embodiments, a video frame is captured. The video frame is provided to a trained classifier. The presence of a predetermined action by a motor vehicle operator depicted therein is determined from the trained classifier. An alert is sent via a network indicating the presence of the predetermined action and at least one identifier associated with the motor vehicle operator.
Type: Grant
Filed: December 14, 2018
Date of Patent: September 8, 2020
Assignee: COM-IoT Technologies
Inventors: Ahmed Madkor, Youssra Elqattan, Abdarhman S. AbdElHamid
-
Patent number: 10769462
Abstract: An information processing apparatus includes a position specifying unit that specifies a position of each member of an assembly, a biometric information acquiring unit that acquires biometric information from members of which the number is smaller than the number of all members, an activeness degree specifying unit that specifies, from the biometric information acquired by the biometric information acquiring unit, an activeness degree of the member from which the biometric information is acquired by the biometric information acquiring unit, and specifies, from the activeness degree and the position specified by the position specifying unit, an activeness degree of a member other than the member from which the biometric information is acquired by the biometric information acquiring unit among the members, and a determination unit that determines a state of the assembly from the activeness degree specified by the activeness degree specifying unit.
Type: Grant
Filed: August 13, 2018
Date of Patent: September 8, 2020
Assignee: FUJI XEROX CO., LTD.
Inventor: Shoko Chiba
-
Patent number: 10769463
Abstract: Systems and methods of training vehicles to improve performance, reliability and autonomy of vehicles. Sensors capture a vehicle driver's eye movements, hand grip and contact area on steering wheel, distance of the accelerator and brake pedals from the foot, depression of these pedals by the foot, and ambient sounds. Data from these sensors is used as training data to improve autonomy of vehicles. Detection of events occurring outside a vehicle is made by comparing human sensor data with an associated road map to determine correlation. Signatures corresponding to human reactions and actions are extracted, which also incorporate data acquired using vehicle and outside environment sensors. By scoring driver responses to events outside the vehicle as well as during non-events, expert drivers are identified, whose responses are used as a part of vehicle training data.
Type: Grant
Filed: November 11, 2019
Date of Patent: September 8, 2020
Inventor: Ashok Krishnan
-
Patent number: 10769464
Abstract: Embodiments of the present disclosure relate to mobile terminal technologies; a facial recognition method and related products are provided. The method includes: acquiring a face image by a mobile terminal; extracting face feature information from the face image and matching the face feature information with a predetermined face feature template by a central processing unit (CPU) of the mobile terminal; and performing liveness detection according to the face image by a graphics processing unit of the mobile terminal when the CPU extracts the face feature information from the face image and matches the face feature information with the predetermined face feature template.
Type: Grant
Filed: August 21, 2018
Date of Patent: September 8, 2020
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Haitao Zhou, Lizhong Wang, Fangfang Hui
-
Patent number: 10769465
Abstract: A method and related terminal device for biometric recognition are provided. The method includes: detecting a target distance between a terminal device and a human face through a distance sensor; capturing an iris image through an iris camera and performing iris recognition based on the iris image, when the target distance falls within an iris recognition distance range; and capturing a human face image through a front camera and performing face recognition based on the human face image, when the target distance falls within a human face recognition distance range.
Type: Grant
Filed: July 12, 2018
Date of Patent: September 8, 2020
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Haiping Zhang, Yibao Zhou
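The distance-gated choice between iris and face recognition can be sketched as a simple range check. The range bounds below are invented placeholders, not values from the patent:

```python
def choose_biometric(distance_cm,
                     iris_range=(20.0, 35.0),
                     face_range=(35.0, 80.0)):
    """Pick the recognition modality from the sensed face distance.

    The default ranges are illustrative assumptions: close distances
    favor the iris camera, farther distances the front camera.
    """
    lo, hi = iris_range
    if lo <= distance_cm < hi:
        return "iris"
    lo, hi = face_range
    if lo <= distance_cm <= hi:
        return "face"
    return "none"  # outside both recognition distance ranges
```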
-
Patent number: 10769466
Abstract: An image of a region captured by an unmanned aerial vehicle flying at an altitude may be received. A computer vision algorithm may be executed with the image as an input to compute an overall confidence score associated with detecting one or more candidate objects in the image. Responsive to determining that the confidence score is below a predefined minimum threshold or above a predefined maximum threshold, the unmanned aerial vehicle may be controlled to change its altitude and recapture the image of the region at a new position. Responsive to determining that the overall confidence score is not below the predefined minimum threshold, information associated with the image may be stored on a storage device.
Type: Grant
Filed: February 20, 2018
Date of Patent: September 8, 2020
Assignee: International Business Machines Corporation
Inventors: Andrea Britto Mattos Lima, Carlos Henrique Cardonha, Matheus Palhares Viana, Maciel Zortea
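The threshold-driven control flow might look like the following. The threshold values and action names are assumptions for illustration:

```python
def next_action(score, min_thresh=0.4, max_thresh=0.95):
    """Decide whether to recapture at a new altitude or store the result.

    Mirrors the described control flow: a score outside the
    [min_thresh, max_thresh] band triggers an altitude change and
    recapture; otherwise the detection is stored. Thresholds and
    action labels are made-up examples.
    """
    if score < min_thresh:
        return "descend_and_recapture"   # too uncertain: get a closer look
    if score > max_thresh:
        return "ascend_and_recapture"    # implausibly confident: widen the view
    return "store"
```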
-
Patent number: 10769467
Abstract: According to one embodiment, an information processing apparatus includes an input receiver, a template selector, and a tracker. The input receiver receives an input operation of a user. The template selector specifies at least one template out of a plurality of templates that are related to a shape of an object based on the input operation received by the input receiver. The tracker tracks the object in an image including the object by using the at least one template specified by the template selector.
Type: Grant
Filed: February 12, 2018
Date of Patent: September 8, 2020
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Ryusuke Hirai, Yasunori Taguchi, Wataru Watanabe
-
Patent number: 10769468
Abstract: Provided is a technique for enhancing operability of a mobile apparatus. An information processing apparatus (2000) includes a first processing unit (2020), a second processing unit (2040), and a control unit (2060). The first processing unit (2020) generates information indicating an event detection position in accordance with a position on a surveillance image set in a first operation. The first operation is an operation with respect to the surveillance image displayed on a display screen. The second processing unit (2040) performs a display change process with respect to the surveillance image or a window including the surveillance image. The control unit (2060) causes any one of the first processing unit (2020) and the second processing unit (2040) to process the first operation on the basis of a second operation.
Type: Grant
Filed: December 18, 2018
Date of Patent: September 8, 2020
Assignee: NEC CORPORATION
Inventors: Kenichiro Ida, Hiroshi Kitajima, Hiroyoshi Miyano
-
Patent number: 10769469
Abstract: A method of prompting a failure or error, applicable to a terminal apparatus including a fingerprint recognizer having a sensor array, includes: obtaining electrical signals containing fingerprint information through the sensor array upon a fingerprint recognition being triggered; determining a number of sensors with abnormally-changing electrical signals in the sensor array; and outputting prompt information when the determined number of sensors exceeds a number threshold.
Type: Grant
Filed: December 26, 2018
Date of Patent: September 8, 2020
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
Inventor: Xuebin Huang
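The counting step can be sketched as follows. The deviation and count thresholds are made-up values, not figures from the patent:

```python
def should_prompt(signals, baseline, deviation=50, count_threshold=5):
    """Prompt a failure when too many sensors in the array report
    abnormally changed electrical signals.

    signals: per-sensor readings; a sensor is counted as abnormal when
    it deviates from the baseline by more than `deviation`. All numeric
    parameters are illustrative assumptions.
    """
    abnormal = sum(1 for s in signals if abs(s - baseline) > deviation)
    return abnormal > count_threshold
```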
-
Patent number: 10769470
Abstract: A method and a system for optimizing an image capturing boundary in a proposed image for enhancing user experience while capturing an image are provided. The method includes locating an image capturing boundary in a proposed image and computing a composition measure for the image capturing boundary. Further, the method includes identifying at least one missing portion in the image capturing boundary based on the composition measure. Further, the method includes providing an indication, associated with an image capturing device, to optimize the image capturing boundary based on the identified at least one missing portion. Furthermore, the method includes computing an optimal zoom level automatically in response to actions performed by the user and capturing the image by including the at least one missing portion.
Type: Grant
Filed: March 6, 2019
Date of Patent: September 8, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Anish Anil Patankar, Rishi Prajapati, Joy Bose
-
Patent number: 10769471
Abstract: A system for holding an image display apparatus (60) for displaying an image captured by means of an image capturing apparatus (20) comprises a movable holding apparatus (70) for an alterable hold of the image display apparatus (60), a controllable drive device (72) for moving the holding apparatus (70), comprising a control signal input (74) for receiving a control signal, and a controller (40) comprising a signal input (42) for receiving a signal that represents an orientation or a change in the orientation of the viewing direction (28) of the image capturing apparatus (20) in space or that facilitates a determination of the orientation or the change in the orientation of the viewing direction (28) of the image capturing apparatus (20), and comprising a control signal output (47), couplable to the control signal input (74) of the controllable drive device (72), for providing a control signal for controlling the controllable drive device (72).
Type: Grant
Filed: October 2, 2019
Date of Patent: September 8, 2020
Assignee: KARL STORZ SE & Co. KG
Inventors: Chunman Fan, Johannes Fallert, Yaokun Zhang, Thorsten Ahrens, Sebastian Wagner
-
Patent number: 10769472
Abstract: Disclosed herein is a method and system for counting a plurality of objects placed in a region. An image of the region is captured and partitioned into segments based on depth of the plurality of objects. Further, the shape of each of the plurality of objects in each object region of each segment is determined and validated based on comparison of the determined shape with predetermined shapes. Finally, the count of the plurality of objects of each shape is aggregated for determining the count of the plurality of objects in the region. In an embodiment, the present disclosure helps in automatically recognizing and counting the plurality of objects of multiple dimensions and multiple shapes, even when the image of the region includes a distorted/unfavorable background.
Type: Grant
Filed: March 28, 2018
Date of Patent: September 8, 2020
Assignee: Wipro Limited
Inventors: Gyanesh Dwivedi, Vinod Pathangay
-
Patent number: 10769473
Abstract: A plurality of recognition positions each recognized by a recognizer as a position of a target object on an input image are acquired. At least one representative position is obtained by performing clustering for the plurality of recognition positions. The representative position is edited in accordance with an editing instruction from a user for the representative position. The input image and the representative position are saved as learning data to be used for learning of the recognizer.
Type: Grant
Filed: August 10, 2018
Date of Patent: September 8, 2020
Assignee: Canon Kabushiki Kaisha
Inventor: Masaki Inaba
-
Patent number: 10769474
Abstract: Embodiments relate to a keypoint detection circuit for identifying keypoints in captured image frames. The keypoint detection circuit generates an image pyramid based upon a received image frame, and determines multiple sets of keypoints for each octave of the pyramid using different levels of blur. In some embodiments, the keypoint detection circuit includes multiple branches, each branch made up of one or more circuits for determining a different set of keypoints from the image, or for determining a subsampled image for a subsequent octave of the pyramid. By determining multiple sets of keypoints for each of a plurality of pyramid octaves, a larger, more varied set of keypoints can be obtained and used for object detection and matching between images.
Type: Grant
Filed: August 10, 2018
Date of Patent: September 8, 2020
Assignee: Apple Inc.
Inventors: David R. Pope, Cecile Foret, Jung Kim
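Octave generation by subsampling, the part of the pipeline that feeds each branch's keypoint detection, can be sketched as follows (a minimal version with 2x decimation; the actual circuit's filtering is not specified here):

```python
def subsample(img):
    """One pyramid octave down: keep every other pixel in each dimension."""
    return [row[::2] for row in img[::2]]

def build_pyramid(img, octaves=3):
    """Image pyramid as a list of images, halving resolution per octave.

    img is a 2-D list of pixel values; each subsequent entry is the
    subsampled image for the next octave.
    """
    pyr = [img]
    for _ in range(octaves - 1):
        pyr.append(subsample(pyr[-1]))
    return pyr
```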
-
Patent number: 10769475
Abstract: An electronic device includes a display, and a processor functionally connected with the display. The processor is configured to output content including one or more objects through the display, receive user input for specifying at least one point in the entire region of the content, determine a portion of an entire region with respect to the at least one point as a search region, obtain a saliency map associated with the content based on the search region, and determine a region of interest of the user based on the saliency map. Alternatively, the processor is configured to obtain an index map associated with the content by dividing the entire region of the content into similar regions according to a preset criterion and determine the region of interest of the user by overlapping the saliency map and the index map. It is possible to provide other embodiments.
Type: Grant
Filed: August 22, 2018
Date of Patent: September 8, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Moojung Kim, Daehee Kim, Daekyu Shin, Sungdae Cho
-
Patent number: 10769476
Abstract: Embodiments of the present disclosure provide a method and apparatus for detecting a license plate.
Type: Grant
Filed: March 8, 2017
Date of Patent: September 8, 2020
Assignee: Hangzhou Hikvision Digital Technology Co., Ltd.
Inventors: Shiliang Pu, Yi Niu, Zuozhou Pan, Binghua Luo
-
Method, apparatus, device and storage medium for extracting a cardiovisceral vessel from a CTA image
Patent number: 10769477
Abstract: Disclosed are a method, an apparatus, a device and a storage medium for extracting a cardiovisceral vessel from a CTA image, including: performing an erosion operation and a dilation operation on image data successively via a preset structural element to obtain a structure template, wherein the image data is a coronary angiography image after a downsampling processing, and the structure template is a structure excluding a pulmonary region; performing a layer-by-layer transformation on slice images of the structure template to acquire a first ascending aorta structure in the structure template, and acquiring an aorta center coordinate and an aorta radius in the last layer of slice image of the structure template; and establishing a binarized spherical structure according to the aorta center coordinate and the aorta radius, and synthesizing a second ascending aorta structure by combining the first ascending aorta structure with the structure template and the binarized spherical structure.
Type: Grant
Filed: July 25, 2018
Date of Patent: September 8, 2020
Assignee: SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY
Inventors: Shoujun Zhou, Baochang Zhang, Baolin Li, Cheng Wang, Pei Lu
-
Patent number: 10769478
Abstract: A convolutional neural network identification efficiency increasing method is applied to a related device. The convolutional neural network identification efficiency increasing method includes analyzing an input image to acquire foreground information, utilizing the foreground information to generate a foreground mask, and transforming the input image into an output image via the foreground mask. The output image is used as an input to the convolutional neural network identification for preferred object identification efficiency.
Type: Grant
Filed: November 21, 2018
Date of Patent: September 8, 2020
Assignee: VIVOTEK INC.
Inventors: Cheng-Chieh Liu, Chia-Po Wei, Fu-Min Wang, Chia-Wei Chi
-
Patent number: 10769479
Abstract: A recognition system includes: a sensor processing unit (SPU) that performs sensing to output a sensor value; a task-specific unit (TSU) including an object detection part that performs an object detection task based on the sensor value and a semantic segmentation part that performs a semantic segmentation task based on the sensor value; and a generic-feature extraction part (GEU) including a generic neural network disposed between the sensor processing unit and the task-specific unit, the generic neural network being configured to receive the sensor value as an input to extract a generic feature to be input in common into the object detection part and the semantic segmentation part.
Type: Grant
Filed: April 6, 2018
Date of Patent: September 8, 2020
Assignee: DENSO IT LABORATORY, INC.
Inventors: Ikuro Sato, Mitsuru Ambai, Hiroshi Doi
-
Patent number: 10769480
Abstract: An object detection method and a neural network system for object detection are disclosed. The object detection method acquires a current frame of a sequence of frames representing an image sequence, and extracts a feature map of the current frame. The extracted feature map is pooled with information of a pooled feature map of a previous frame to thereby obtain a pooled feature map of the current frame. An object is detected from the pooled feature map of the current frame. A dynamic vision sensor (DVS) may be utilized to provide the sequence of frames. Improved object detection accuracy may be realized, particularly when object movement speed is slow.
Type: Grant
Filed: August 27, 2018
Date of Patent: September 8, 2020
Assignees: SAMSUNG ELECTRONICS CO., LTD., BEIJING SAMSUNG TELECOM R&D CENTER
Inventors: Jia Li, Feng Shi, Weiheng Liu, Dongqing Zou, Qiang Wang, Hyunsurk Ryu, Keun Joo Park, Hyunku Lee
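One plausible reading of pooling a frame's features with the previous pooled map is a recursive blend such as an exponential moving average. The mixing weight and flat feature-vector representation are assumptions, not details from the patent:

```python
def pool_features(frames, alpha=0.6):
    """Pool each frame's features with the pooled map of the previous frame.

    frames: list of per-frame feature vectors (flat lists of floats).
    Returns the pooled feature vector for every frame, computed as
    pooled_t = alpha * feat_t + (1 - alpha) * pooled_{t-1}.
    alpha is an illustrative mixing weight.
    """
    pooled = None
    out = []
    for feat in frames:
        if pooled is None:
            pooled = list(feat)  # first frame: nothing to pool with yet
        else:
            pooled = [alpha * f + (1 - alpha) * p for f, p in zip(feat, pooled)]
        out.append(pooled)
    return out
```

Pooling this way lets slowly moving objects, which produce sparse DVS events per frame, accumulate evidence across frames before detection.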
-
Patent number: 10769481
Abstract: A system and method for extraction of design elements of a fashion product is provided. The system includes a memory having computer-readable instructions stored therein. The system further includes a processor configured to access a catalogue image of a fashion product. In addition, the processor is configured to segment the catalogue image of the fashion product to determine an article of interest of the fashion product. The processor is further configured to generate an outer contour of the article of interest using a contour tracing technique. Moreover, the processor is configured to analyze coordinates of the generated contour based upon convexity defects of the contour to identify one or more design points. Furthermore, the processor is configured to extract one or more design elements of the fashion product using the identified design points.
Type: Grant
Filed: May 15, 2018
Date of Patent: September 8, 2020
Assignee: MYNTRA DESIGN PRIVATE LIMITED
Inventor: Makkapati Vishnu Vardhan
-
Patent number: 10769482
Abstract: The present disclosure is directed to systems and methods for analyzing digital images to determine alphanumeric strings depicted in the digital images. An electronic device may generate a set of filtered images using a received digital image. The electronic device may also perform an optical character recognition (OCR) technique on the set of filtered images, and may filter out any of the set of filtered images according to a set of rules. The electronic device may further identify a set of common elements representative of the alphanumeric string depicted in the digital image, and determine a machine-encoded alphanumeric string based on the set of common elements.
Type: Grant
Filed: February 22, 2019
Date of Patent: September 8, 2020
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Joseph Antonetti, Abid Imran, Justin Loew, Calvin Moon, Gary Foreman
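Identifying common elements across the surviving filtered images can be sketched as a per-position majority vote over the OCR outputs. Equal-length candidate strings are assumed here for simplicity:

```python
from collections import Counter

def consensus_string(candidates):
    """Combine OCR outputs from several filtered images into one string.

    candidates: list of equal-length strings, one per filtered image.
    Each output character is the most common character at that position,
    a simple stand-in for the patent's "set of common elements".
    """
    result = []
    for chars in zip(*candidates):
        result.append(Counter(chars).most_common(1)[0][0])
    return "".join(result)
```

For example, three OCR passes reading "ABC1", "A8C1", "ABC1" yield "ABC1": the single "8" misread is outvoted at position 1.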
-
Patent number: 10769483
Abstract: A method is disclosed including: receiving raw image data corresponding to a series of raw images; processing the raw image data with an encoder to generate encoded data, where the encoder is characterized by an input/output transformation that substantially mimics the input/output transformation of one or more retinal cells of a vertebrate retina; and applying a first machine vision algorithm to data generated based at least in part on the encoded data.
Type: Grant
Filed: May 23, 2019
Date of Patent: September 8, 2020
Assignee: CORNELL UNIVERSITY
Inventors: Sheila Nirenberg, Illya Bomash
-
Patent number: 10769484
Abstract: Disclosed embodiments relate to a character detection method and apparatus. In some embodiments, the method includes: using an image including an annotated word as an input to a machine learning model; selecting, based on a predicted result of characters inside an annotation region of the annotated word and annotation information of the annotated word, characters for training the machine learning model from the predicted characters inside the annotation region; and training the machine learning model based on features of the selected characters. This implementation manner implements the full training of a machine learning model by using existing word-level annotated images, to obtain a machine learning model capable of detecting characters in images, thereby reducing the costs for the training of a machine learning model capable of detecting characters in images.
Type: Grant
Filed: January 30, 2017
Date of Patent: September 8, 2020
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd
Inventors: Chengquan Zhang, Han Hu, Yuxuan Luo, Junyu Han, Errui Ding
-
Patent number: 10769485
Abstract: A framebuffer-less system of convolutional neural network (CNN) includes a region of interest (ROI) unit that extracts features, according to which a region of interest in an input image frame is generated; a convolutional neural network (CNN) unit that processes the region of interest of the input image frame to detect an object; and a tracking unit that compares the features extracted at different times, according to which the CNN unit selectively processes the input image frame.
Type: Grant
Filed: June 19, 2018
Date of Patent: September 8, 2020
Assignee: Himax Technologies Limited
Inventor: Der-Wei Yang
-
Patent number: 10769486
Abstract: Provided is an image processing apparatus including an acquisition unit configured to acquire a multi-valued image and a binarization unit configured to generate a binary image obtained by binarizing the multi-valued image, and the stated image processing apparatus is configured such that the binarization unit detects a closed region within the multi-valued image, and binarizes the inside of the closed region based on luminance inside the closed region and luminance around the closed region.
Type: Grant
Filed: September 12, 2018
Date of Patent: September 8, 2020
Assignee: Seiko Epson Corporation
Inventor: Tomohiro Nakamura
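A minimal sketch of binarizing a closed region against its own and surrounding luminance, assuming the threshold is the midpoint of the two means (the patent does not specify this rule):

```python
def binarize_region(values, surround_mean):
    """Binarize pixel luminances inside a closed region.

    values: luminances of pixels inside the closed region;
    surround_mean: mean luminance around the region. The threshold is
    taken midway between the inside mean and the surrounding mean, an
    assumed rule for illustration.
    """
    inside_mean = sum(values) / len(values)
    thresh = (inside_mean + surround_mean) / 2
    return [1 if v >= thresh else 0 for v in values]
```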
-
Patent number: 10769487
Abstract: The present application relates to a method and a device for extracting information from a pie chart. The method comprises the following steps: detecting each element in a pie chart to be processed and position information thereof, wherein the elements comprise text elements and legend elements; performing text recognition on the detected text elements and legend elements to obtain text information corresponding to the text elements and legend texts included in the legend elements respectively; and obtaining sector information and legend information according to each detected element and position information thereof and the legend texts, and enabling the sector information to correspond to the legend information one by one, wherein the sector information comprises a sector color and a proportion of the sector in the pie chart, and the legend information comprises a legend color and a corresponding legend text thereof.
Type: Grant
Filed: April 17, 2018
Date of Patent: September 8, 2020
Assignee: ABC FINTECH CO, LTD.
Inventors: Zhou Yu, Yongzhi Yang, Zhanqiang Zhang
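The sector-proportion step reduces to counting pixels per sector color inside the pie. This sketch assumes the pie's pixels have already been labeled by color (how the labeling is done is not part of this example):

```python
from collections import Counter

def sector_proportions(pie_pixels):
    """Proportion of the pie occupied by each sector color.

    pie_pixels: iterable of color labels, one per pixel inside the pie
    area. Matching each color to its legend entry would then give the
    one-to-one sector/legend correspondence described above.
    """
    counts = Counter(pie_pixels)
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}
```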
-
Patent number: 10769488
Abstract: This disclosure describes a system for automatically identifying an item from among a variation of items of a same type. For example, an image may be processed and resulting item image information compared with stored item image information to determine a type of item represented in the image. If the matching stored item image information is part of a cluster, the item image information may then be compared with distinctive features associated with stored item image information of the cluster to determine the variation of the item represented in the received image.
Type: Grant
Filed: July 24, 2019
Date of Patent: September 8, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Sudarshan Narasimha Raghavan, Xiaofeng Ren, Michel Leonard Goldstein, Ohil K. Manyam
-
Patent number: 10769489
Abstract: An input image is received from a mobile device. A portion of the input image is determined to correspond to a test card and an image transformation is applied to that portion of the input image. The image transformation rectifies the portion of the input image. Based on the rectified image, a specific test for which the test card includes a result and the result of that test are both identified.
Type: Grant
Filed: June 7, 2019
Date of Patent: September 8, 2020
Assignee: Bio-Rad Laboratories (Israel), Inc.
Inventors: Guy Nahum, Nir Broer, Amit Oved, Rivka Monitz
-
Patent number: 10769490
Abstract: A method for image processing includes: acquiring features of multiple images of a target object and a standard feature of the target object; and determining trusted images of the target object from the multiple images of the target object according to similarities between the features of the multiple images of the target object and the standard feature thereof, wherein similarities between features of the trusted images of the target object and the standard feature of the target object meet a preset similarity requirement. The image processing method may be applied to application scenarios such as image comparison, identity recognition, target object search, and similar target object determination.
Type: Grant
Filed: January 30, 2020
Date of Patent: September 8, 2020
Assignee: Alibaba Group Holding Limited
Inventors: Nan Jiang, Mingyu Guo
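Selecting trusted images by feature similarity can be sketched with cosine similarity and an assumed threshold (the patent only requires that a "preset similarity requirement" be met; the metric and threshold here are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def trusted_images(features, standard, threshold=0.9):
    """Indices of images whose features are similar enough to the
    target object's standard feature."""
    return [i for i, f in enumerate(features)
            if cosine(f, standard) >= threshold]
```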
-
Patent number: 10769491
Abstract: Techniques are disclosed for identifying discriminative, fine-grained features of an object in an image. In one example, an input device receives an image. A machine learning system includes a model comprising a first set, a second set, and a third set of filters. The machine learning system applies the first set of filters to the received image to generate an intermediate representation of the received image. The machine learning system applies the second set of filters to the intermediate representation to generate part localization data identifying sub-parts of an object and one or more regions of the image in which the sub-parts are located. The machine learning system applies the third set of filters to the intermediate representation to generate classification data identifying a subordinate category to which the object belongs. The system uses the part localization and classification data to perform fine-grained classification of the object.
Type: Grant
Filed: August 31, 2018
Date of Patent: September 8, 2020
Assignee: SRI International
Inventors: Bogdan Calin Mihai Matei, Xiyang Dai, John Benjamin Southall, Nhon Hoc Trinh, Harpreet Sawhney
-
Patent number: 10769492
Abstract: The present disclosure relates to unsupervised visual attribute transfer through reconfigurable image translation. One aspect of the present disclosure provides a system for learning the transfer of visual attributes, including an encoder, converter and generator. The encoder encodes an original source image to generate a plurality of attribute values that specify the original source image, and encodes an original reference image to generate a plurality of attribute values that specify the original reference image. The converter replaces at least one attribute value of an attribute that is a target attribute of the attribute values of the original source image with at least one corresponding attribute value of the original reference image, to obtain a plurality of attribute values that specify a target image of interest. The generator generates a target image based on the attribute values of the target image of interest.
Type: Grant
Filed: July 31, 2018
Date of Patent: September 8, 2020
Assignee: SK TELECOM CO., LTD.
Inventors: Taeksoo Kim, Byoungjip Kim, Jiwon Kim, Moonsu Cha
-
Patent number: 10769493Abstract: The embodiments of the present invention provide training and construction methods and apparatus of a neural network for object detection, an object detection method and apparatus based on a neural network, and a neural network. The training method of the neural network for object detection comprises: inputting a training image including a training object to the neural network to obtain a predicted bounding box of the training object; acquiring a first loss function according to a ratio of the intersection area to the union area of the predicted bounding box and a true bounding box, the true bounding box being a bounding box of the training object marked in advance in the training image; and adjusting parameters of the neural network by utilizing at least the first loss function to train the neural network.Type: GrantFiled: July 26, 2017Date of Patent: September 8, 2020Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., MEGVII (BEIJING) TECHNOLOGY CO., LTD.Inventors: Jiahui Yu, Qi Yin
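The first loss function is a function of the ratio of intersection area to union area (IoU) of the predicted and true boxes. A common formulation is `1 - IoU` (the patent may use a different monotone function of the ratio, such as a negative log); a sketch with axis-aligned boxes given as `(x1, y1, x2, y2)`:

```python
def iou(box_a, box_b):
    # Intersection rectangle of two axis-aligned boxes (x1, y1, x2, y2).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred_box, true_box):
    # First loss function: zero when the boxes coincide,
    # maximal when they do not overlap at all.
    return 1.0 - iou(pred_box, true_box)
```

Because the loss depends on the box as a whole rather than on each coordinate independently, it penalizes poor overlap directly.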
-
Patent number: 10769494Abstract: Systems, methods, and non-transitory computer readable media configured to generate enhanced training information. Training information may be obtained. The training information may characterize behaviors of moving objects. The training information may be determined based on observations of the behaviors of the moving objects. Behavior information may be obtained. The behavior information may characterize a behavior of a given object. Enhanced training information may be generated by inserting the behavior information into the training information.Type: GrantFiled: April 10, 2018Date of Patent: September 8, 2020Assignee: Pony AI Inc.Inventors: Bo Xiao, Yiming Liu, Sinan Xiao, Xiang Yu, Tiancheng Lou, Jun Peng, Jie Hou, Zhuo Zhang, Hao Song
-
Patent number: 10769495Abstract: In implementations of collecting multimodal image editing requests (IERs), a user interface is generated that exposes an image pair including a first image and a second image including at least one edit to the first image. A user simultaneously speaks a voice command and performs a user gesture that describe an edit of the first image used to generate the second image. The user gesture and the voice command are simultaneously recorded and synchronized with timestamps. The voice command is played back, and the user transcribes their voice command based on the play back, creating an exact transcription of their voice command. Audio samples of the voice command with respective timestamps, coordinates of the user gesture with respective timestamps, and a transcription are packaged as a structured data object for use as training data to train a neural network to recognize multimodal IERs in an image editing application.Type: GrantFiled: August 1, 2018Date of Patent: September 8, 2020Assignee: Adobe Inc.Inventors: Trung Huu Bui, Zhe Lin, Walter Wei-Tuh Chang, Nham Van Le, Franck Dernoncourt
-
Patent number: 10769496Abstract: Disclosed herein are techniques for detecting logos in images or video. In one embodiment, a first logo detection model detects, from an image, candidate regions for determining logos in the image. A feature vector is then extracted from each candidate region and is compared with reference feature vectors stored in a database. The logo corresponding to the best matching reference feature vector is determined to be the logo in the candidate region if the best matching meets a certain criterion. In some embodiments, a second logo detection model trained using synthetic training images is used in combination with the first logo detection model to detect logos in a same image.Type: GrantFiled: October 25, 2018Date of Patent: September 8, 2020Assignee: Adobe Inc.Inventors: Brunno Fidel Maciel Attorre, Nicolas Huynh Thien
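The matching step (extract a feature vector per candidate region, compare it against the reference vectors in the database, and accept the best match only if it meets a criterion) can be sketched as follows. Cosine similarity and the 0.8 threshold are illustrative assumptions, not values from the patent:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_logo(candidate_vec, reference_db, threshold=0.8):
    # reference_db maps logo name -> reference feature vector.
    # Return the best-matching logo only if the match meets the criterion.
    best_name, best_sim = None, -1.0
    for name, ref_vec in reference_db.items():
        sim = cosine(candidate_vec, ref_vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```

Returning `None` below the threshold rejects candidate regions that resemble no stored logo closely enough.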
-
Patent number: 10769497Abstract: A learning device, comprising a reception circuit that receives, from an external device, requests indicating photographs the user likes; a machine learning processor that extracts, from within an image database, images that match the requests and that have received a given evaluation from a third party, performs machine learning using these extracted images, and outputs an inference model; and a transmission circuit that transmits the inference model that has been output from the machine learning processor to the external device.Type: GrantFiled: July 23, 2018Date of Patent: September 8, 2020Assignee: Olympus CorporationInventors: Kazuhiro Haneda, Hisashi Yoneyama, Atsushi Kohashi, Zhen Li, Dai Ito, Yoichi Yoshida, Kazuhiko Osa, Osamu Nonaka
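The extraction step (images matching the request that have also received a given third-party evaluation) amounts to a filtered query over the image database. A toy sketch, assuming each image record carries a tag set and a numeric rating; both representations are hypothetical:

```python
def select_training_images(image_db, request_tags, min_rating):
    # Keep images that match the request (all requested tags present)
    # and that have received at least the given third-party evaluation.
    return [img for img in image_db
            if request_tags.issubset(img["tags"]) and img["rating"] >= min_rating]
```

The resulting subset is what the machine learning processor would train on to produce the inference model.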
-
Patent number: 10769498Abstract: A computer-implemented method (40) reduces processing time of an application for visualizing image data, which application is one of a plurality of applications selectable by a user. Each of the plurality of applications includes a pre-processing algorithm for pre-processing the image data. The method predicts which one of the pre-processing algorithms is to be performed in response to a selection of an application by the user by extracting a feature vector from the image data, metadata, and/or additional data associated with the image data; supplying the feature vector as input to a machine-learned model; and receiving an algorithm identifier as output from the machine-learned model. The algorithm identifier identifies the pre-processing algorithm. Additionally, the algorithm identifier is used to select (42) the pre-processing algorithm. The image data is pre-processed (43) using the selected pre-processing algorithm.Type: GrantFiled: January 25, 2017Date of Patent: September 8, 2020Assignee: KONINKLIJKE PHILIPS N.V.Inventors: Eran Rubens, Bella Fadida-Specktor, Eliahu Zino, Menachem Melamed, Eran Meir, Netanel Weinberg, Maria Kaplinsky Nus
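The predict-then-dispatch flow can be sketched as below. The feature fields, the stand-in `model` callable, and the algorithm table are all illustrative assumptions; in the patent the model is machine-learned and the feature vector comes from the image data, metadata, and/or additional data:

```python
def predict_preprocessing(image_data, metadata, model, algorithms):
    # 1. Extract a feature vector from the image data and its metadata
    #    (these example features are hypothetical).
    feature_vector = [
        len(image_data),                        # data size
        metadata.get("slices", 0),              # number of slices
        1 if metadata.get("modality") == "CT" else 0,
    ]
    # 2. Supply the feature vector to the model, which returns an
    #    algorithm identifier.
    algorithm_id = model(feature_vector)
    # 3. Use the identifier to select and run the pre-processing algorithm.
    return algorithms[algorithm_id](image_data)
```

`model` here is any callable returning an algorithm identifier; in practice it would be the trained classifier, so the pre-processing can start before the user's selected application finishes loading.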
-
Patent number: 10769499Abstract: A method and apparatus for removing black eyepits and sunglasses from first actual-scenario data, comprising images containing faces acquired from an actual scenario, to obtain second actual-scenario data; counting the proportion of glasses-wearing faces in the second actual-scenario data; dividing original training data, composed of images containing faces, into glasses-wearing first training data and non-glasses-wearing second training data, where the proportion of glasses-wearing faces in the original training data is lower than the proportion in the second actual-scenario data; generating glasses-wearing third training data based on glasses data and the second training data; generating fourth training data, in which the proportion of glasses-wearing faces equals the proportion in the second actual-scenario data, based on the third training data and the original training data; and training a face recognition model based on the fourth training data.Type: GrantFiled: November 2, 2018Date of Patent: September 8, 2020Assignee: FUJITSU LIMITEDInventors: Meng Zhang, Rujie Liu, Jun Sun
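Generating fourth training data whose glasses-wearing proportion equals the actual-scenario proportion reduces to simple arithmetic: given the original counts, how many synthetic glasses-wearing images (third training data) must be added? A sketch; the exact balancing procedure is an assumption, the abstract only fixes the target proportion:

```python
import math

def synthetic_glasses_needed(n_glasses, n_no_glasses, target_ratio):
    # Solve (n_glasses + x) / (n_glasses + n_no_glasses + x) = target_ratio
    # for x, the number of synthetic glasses-wearing images to add.
    total = n_glasses + n_no_glasses
    x = (target_ratio * total - n_glasses) / (1 - target_ratio)
    return max(0, math.ceil(x))
```

For example, raising a 10% glasses-wearing proportion (100 of 1,000 images) to 50% requires 800 synthetic images; if the original proportion already meets the target, nothing is added.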
-
Patent number: 10769500Abstract: System and method for an active learning system in which a sensor obtains data from a scene including a set of images having objects. A memory stores active learning data including an object detector trained for detecting objects in images. A processor in communication with the memory is configured to detect a semantic class and a location of at least one object in an image selected from the set of images using the object detector, to produce a detection metric as a combination of an uncertainty of the object detector about the semantic class of the object in the image (classification) and an uncertainty of the object detector about the location of the object in the image (localization). An output interface or a display-type device, in communication with the processor, displays the image for human labeling when the detection metric is above a threshold.Type: GrantFiled: August 31, 2017Date of Patent: September 8, 2020Assignee: Mitsubishi Electric Research Laboratories, Inc.Inventors: Teng-Yok Lee, Chieh-Chi Kao, Pradeep Sen, Ming-Yu Liu
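The detection metric combines two uncertainties. A sketch using Shannon entropy for the classification part and a weighted sum as the combination; both are common active-learning choices, not formulas taken from the patent, whose abstract does not fix them:

```python
import math

def classification_uncertainty(class_probs):
    # Shannon entropy of the detector's class distribution:
    # zero for a confident prediction, maximal for a uniform one.
    return -sum(p * math.log(p) for p in class_probs if p > 0)

def detection_metric(class_probs, localization_uncertainty, alpha=0.5):
    # Weighted combination of classification and localization uncertainty
    # (the weighting scheme is an assumption).
    return alpha * classification_uncertainty(class_probs) + \
           (1 - alpha) * localization_uncertainty

def needs_human_label(class_probs, loc_unc, threshold=0.5):
    # Route the image to a human only when the detector is uncertain.
    return detection_metric(class_probs, loc_unc) > threshold
```

Images on which the detector is already confident in both class and location are skipped, which is what makes the labeling loop sample-efficient.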
-
Patent number: 10769501Abstract: The present disclosure relates to analysis of perturbed subjects using semantic embeddings. One example embodiment includes a method. The method includes applying a respective perturbation to each of a plurality of subjects in a controlled environment. The method also includes producing a respective visual representation for each of the perturbed subjects using at least one imaging modality. Further, the method includes obtaining, by a computing device for each of the respective visual representations, a corresponding semantic embedding associated with the respective visual representation. The semantic embedding associated with the respective visual representation is generated using a machine-learned, deep metric network model. In addition, the method includes classifying, by the computing device based on the corresponding semantic embedding, each of the visual representations into one or more groups.Type: GrantFiled: September 17, 2018Date of Patent: September 8, 2020Assignee: Google LLCInventors: Dale M. Ando, Marc Berndl, Lusann Yang, Michelle Dimon
-
Patent number: 10769502Abstract: Computer-implemented techniques for semantic image retrieval are disclosed. Digital images are classified into N number of categories based on their visual content. The classification provides a set of N-dimensional image vectors for the digital images. Each image vector contains up to N number of probability values for up to N number of corresponding categories. An N-dimensional image match vector is generated that projects an input keyword query into the vector space of the set of image vectors by computing the vector similarities between a word vector for the input query and a word vector for each of the N number of categories. Vector similarities between the image match vectors and the set of image vectors can be computed to determine images semantically relevant to the input query.Type: GrantFiled: April 8, 2019Date of Patent: September 8, 2020Assignee: Dropbox, Inc.Inventors: Thomas Berg, Peter Neil Belhumeur
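The match-vector construction and retrieval can be sketched directly from the abstract: the query's word vector is compared against each category's word vector to produce an N-dimensional match vector, which is then compared against the stored image vectors. Cosine similarity is an assumption here; the patent says only "vector similarities":

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_match_vector(query_word_vec, category_word_vecs):
    # Project the keyword query into the N-dimensional image-vector space:
    # one similarity score per category.
    return [cosine(query_word_vec, c) for c in category_word_vecs]

def rank_images(match_vector, image_vectors):
    # Score each image vector against the match vector; higher is more
    # semantically relevant to the query. Returns image indices, best first.
    scored = [(cosine(match_vector, iv), idx) for idx, iv in enumerate(image_vectors)]
    return [idx for _, idx in sorted(scored, reverse=True)]
```

Because the comparison happens in the category space, the query never has to match an image's tags exactly; it only has to be close to the right categories.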
-
Patent number: 10769503Abstract: A method of analyzing and organizing printed documents is performed at a computing system having one or more processors and memory. The method includes receiving one or more printed documents, each including one or more pages. The method includes processing each page of each printed document. The method includes scanning the respective page to obtain an image file. The method also includes determining a document class for the respective page by inputting the image file to one or more trained classifier models, and generating a semantic analyzer pipeline including at least an optical character recognition (OCR)-based semantic analyzer. The method also includes applying the OCR-based semantic analyzer to generate a preprocessed output page and to extract semantic information corresponding to the respective page. The method includes determining a digital organization for the respective printed document based on the extracted semantic information and the document class.Type: GrantFiled: April 25, 2019Date of Patent: September 8, 2020Assignee: Zorroa CorporationInventors: Juan Jose Buhler, David DeBry, Daniel Wexler
-
Patent number: 10769504Abstract: An expanding appliance includes a connect port, an image capturing module, an intelligent control module, an image transmitting module, and a result displaying module. The expanding appliance connects an image input device through the connect port, connects a display device through the result displaying module, and connects one or more image-applied function modules through the image transmitting module. The intelligent control module generates a demanding command according to a successfully-connected image-applied function module. The image capturing module controls the image input device to capture image data based on the demanding command, and quantizes samples of the image data as computation data. The image transmitting module provides the computation data to the image-applied function module for image identification and receives an identification result. Finally, the intelligent control module triggers the result displaying module to display the identification result on the display device.Type: GrantFiled: June 7, 2018Date of Patent: September 8, 2020Assignee: CHICONY POWER TECHNOLOGY CO., LTD.Inventors: Ting-Fu Hsu, Jewel Tsai
-
Patent number: 10769505Abstract: An optical sensor device includes a first light source, a second light source, a sensor, and control circuitry. The first light source has a plurality of peak wavelengths in a wavelength range of 400 nm to 780 nm. The second light source emits ultraviolet light. The control circuitry adjusts a light amount of the first light source based on an output of the sensor in a state the first light source is on and the second light source is off, adjusts a light amount of the second light source based on an output of the sensor in a state the second light source is on and the first light source is off, and acquires a correction value of data output by the sensor, based on an output of the sensor in a state each of the first and second light sources is on with the light amount adjusted.Type: GrantFiled: March 18, 2019Date of Patent: September 8, 2020Assignee: Ricoh Company, Ltd.Inventors: Satoshi Iwanami, Nobuyuki Satoh, Hirohito Murate
-
Patent number: 10769506Abstract: When a sum of a first gradation data for a first color and a second gradation data for a second color is greater than the maximum value of thresholds of a first threshold matrix, a generation unit generates a first overlapping gradation data and second overlapping gradation data by dividing a value obtained by subtracting the maximum value from the sum. Further, the generation unit generates a first quantization data based on a result of comparing the first overlapping gradation data with the second threshold or a result of comparing a difference between the first gradation data and the first overlapping gradation data with the first threshold, and generates a second quantization data based on a result of comparing the second overlapping gradation data with the first threshold or a result of comparing a difference between the second gradation data and the second overlapping gradation data with the second threshold.Type: GrantFiled: July 23, 2019Date of Patent: September 8, 2020Assignee: CANON KABUSHIKI KAISHAInventors: Tsukasa Doi, Hirokazu Tanaka, Hisashi Ishikawa, Shoei Moribe, Yusuke Yamamoto, Shigeo Kodama, Yuta Ikeshima
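The overlap computation can be illustrated for a single pixel. The even split of the excess is an assumption (the abstract says only that the excess value is divided), and the thresholds stand in for per-pixel values from the two threshold matrices:

```python
def split_overlap(g1, g2, t_max):
    # Excess of the two colours' summed gradation data over the maximum
    # threshold value of the first matrix; this excess is divided into the
    # first and second overlapping gradation data (even split assumed).
    overlap = max(0, g1 + g2 - t_max)
    o1 = overlap // 2
    return o1, overlap - o1

def quantize(g, overlap_g, own_threshold, other_threshold):
    # Per the abstract: a colour's quantization data is based on comparing
    # its overlapping part against the *other* colour's threshold, or the
    # non-overlapping remainder against its own threshold.
    return 1 if (overlap_g > other_threshold or (g - overlap_g) > own_threshold) else 0
```

Using the opposite matrix for the overlapping part decorrelates where the two inks land, which is the point of handling the excess separately.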