Patent Applications Published on April 23, 2020
-
Publication number: 20200125800
Abstract: A method, and a computer product encoding the method, for preparing a domain- or subdomain-specific glossary. The method includes using probabilities, word context, common terminology, and differing terminology to identify domain- and subdomain-specific language, and updating a related glossary accordingly.
Type: Application
Filed: October 17, 2019
Publication date: April 23, 2020
Inventors: Christopher J. Jeffs, Ian Beaver
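The abstract above leaves the probability mechanics unspecified; one common way to realize "using probabilities" for glossary building is to keep terms that are markedly more frequent in a domain corpus than in a general-language corpus. The sketch below is illustrative only: the function name, corpora, and thresholds are invented for the example, and this is not the patented method.

```python
from collections import Counter

def domain_terms(domain_tokens, general_tokens, min_ratio=3.0, min_count=2):
    """Rank candidate glossary terms by how much more frequent they are
    in the domain corpus than in a general-language corpus."""
    d, g = Counter(domain_tokens), Counter(general_tokens)
    d_total, g_total = sum(d.values()), sum(g.values())
    scores = {}
    for term, count in d.items():
        if count < min_count:
            continue
        p_domain = count / d_total
        # Add-one smoothing so terms absent from the general corpus score high
        p_general = (g[term] + 1) / (g_total + len(g))
        ratio = p_domain / p_general
        if ratio >= min_ratio:
            scores[term] = ratio
    return sorted(scores, key=scores.get, reverse=True)

domain = "the glossary stores subdomain terms the glossary maps terms".split()
general = "the cat sat on the mat and the dog ran".split()
print(domain_terms(domain, general))  # domain-heavy terms; "the" is filtered out
```

Common words like "the" occur in both corpora at similar rates, so their ratio stays near 1 and they are excluded, while corpus-specific terms survive.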
-
Publication number: 20200125801
Abstract: According to principles described herein, unsupervised statistical models, semi-supervised data models, and human-in-the-loop (HITL) methods are combined to create a text normalization system that is both robust and trainable with a minimum of human intervention.
Type: Application
Filed: October 18, 2019
Publication date: April 23, 2020
Inventor: Ian Beaver
-
Publication number: 20200125802
Abstract: A digital magazine server receives content items from various sources or information identifying content items maintained by various sources. Based on characteristics of the content items, the digital magazine server identifies themes of various content items. A theme of a content item identifies a primary topic or primary meaning of the content item. In various embodiments, the digital magazine server determines the theme of a content item based on words within the content item, accounting for meanings of words in the content item, parts of speech of each word, combinations of words in the content item, and syntax of words in the content item.
Type: Application
Filed: October 23, 2019
Publication date: April 23, 2020
Inventors: David E. Wigder, Ruth Zheng
-
Publication number: 20200125803
Abstract: A device receives document information associated with a document, and receives a request to identify insights in the document information. The device performs, based on the request, natural language processing on the document information to identify words, phrases, and sentences in the document information, and utilizes a first machine learning model with the words, the phrases, and the sentences to identify information indicating abstract insights, concrete insights, and non-insights in the document. The device utilizes a second machine learning model to match the abstract insights with particular concrete insights that are different than the concrete insights, and utilizes a third machine learning model to determine particular insights based on the non-insights. The device generates an insight document that includes the concrete insights, the abstract insights matched with the particular concrete insights, and the particular insights determined based on the non-insights.
Type: Application
Filed: November 25, 2019
Publication date: April 23, 2020
Inventor: Joni Bridget JEZEWSKI
-
Publication number: 20200125804
Abstract: A semantic vector generation device (100) obtains vectors of a plurality of words included in text data. The device extracts a word included in a given group and generates a vector corresponding to that group on the basis of the extracted word's vector among the obtained word vectors. The device then identifies, among the obtained word vectors, the vector of a word included in an explanation of a given sense of the extracted word, and generates a vector corresponding to that sense on the basis of the identified vector and the previously generated group vector.
Type: Application
Filed: December 22, 2019
Publication date: April 23, 2020
Applicant: FUJITSU LIMITED
Inventors: Satoshi Onoue, Masahiro Kataoka
-
Publication number: 20200125805
Abstract: Customer support, and other types of activities in which there is a dialogue between two humans, can generate large volumes of conversation records. Automated analysis of these records can provide information about high-level features of, for example, the workings of a customer service department. Analysis of these conversations between a customer and a customer-support agent may also allow identification of customer support activities that can be provided by virtual agents instead of actual human agents. The analysis may evaluate conversations in terms of complexity, duration, and sentiment of the participants. Additionally, the conversations may also be analyzed to identify the existence of selected concepts or keywords. Workflow characteristics, the extent to which the conversation represents a multi-step process intended to accomplish a task, may also be determined for the conversations.
Type: Application
Filed: December 20, 2019
Publication date: April 23, 2020
Inventor: Charles C. Wooters
-
Publication number: 20200125806
Abstract: A management device is connected to an apparatus and configured to manage the apparatus. The management device includes a multi-language display processing unit configured to, when an input unit receives a change request to change a language of messages to be displayed on a display unit, transmit standard language data to a translation device, and a translated data reception unit configured to acquire translated data translated into a language corresponding to a language environment of a mobile terminal on the translation device with reference to the standard language data. The multi-language display processing unit is configured to change a language of messages to be displayed on the display unit from a default language to a language corresponding to the language environment of the mobile terminal by using the translated data.
Type: Application
Filed: March 21, 2017
Publication date: April 23, 2020
Inventor: Hiroaki OBANA
-
Publication number: 20200125807
Abstract: A cognitive communication assistant receives a message transmitted over a communication network from a sender to a recipient. A sender's industry identified with the sender and a recipient's industry identified with the recipient are determined. One or more terms associated with the sender's industry are extracted from the message. A definition associated with the one or more terms is searched for in an on-line reference text. The message is updated based on the definition. The message is transmitted over the communication network to the recipient.
Type: Application
Filed: December 20, 2019
Publication date: April 23, 2020
Inventors: Tara Astigarraga, Itzhack Goldberg, Jose R. Mosqueda Mejia, Daniel J. Winarski
-
Publication number: 20200125808
Abstract: In accordance with an embodiment, a wireless tag reading device comprises an antenna configured to receive a response wave from a wireless tag; a reader configured to acquire identification information from the wireless tag from the response wave; a processor configured to determine a relative movement direction of the wireless tag and the antenna based on a phase of the response wave; and a storage section configured to store identification information for identifying the wireless tag if the movement direction is a first direction.
Type: Application
Filed: June 19, 2019
Publication date: April 23, 2020
Inventor: Jun Yaginuma
-
Publication number: 20200125809
Abstract: A commodity container includes a main body with a container space and a radio frequency reader with a communication range that covers an opening of the container space. The radio frequency reader outputs tag information based on a radio frequency signal from a wireless tag and outputs time variation information indicating a time variation of the radio frequency signal. A registration device is attached to the main body and includes a communication interface to receive the tag and time variation information, and a processor configured to determine a time variation in a positional relationship of the wireless tag and the radio frequency reader based on the time variation information and update a commodity registration list based on the determined time variation of the positional relationship.
Type: Application
Filed: September 19, 2019
Publication date: April 23, 2020
Inventors: Sunao TSUCHIDA, Jun YAGINUMA
-
Publication number: 20200125810
Abstract: A radiofrequency identification (RFID) reader device includes a radiofrequency device configured to transmit and receive electromagnetic radiation through an antenna array. An RFID control computing device is coupled to the radiofrequency device and includes a memory coupled to a processor, which is configured to execute programmed instructions stored in the memory to operate the radiofrequency device in a first mode to transmit a first radiofrequency beam to a scan area through the antenna array. A spatial location for RFID tags located within the scanned area is determined from a radar image. The radiofrequency device is operated in a second mode to transmit a second radiofrequency beam to at least one of the RFID tags, based on the determined spatial location of the RFID tags, to power an integrated circuit or sensor located on, and to communicate with, the at least one of the RFID tags.
Type: Application
Filed: October 17, 2019
Publication date: April 23, 2020
Inventor: Michael Gregory Pettus
-
Publication number: 20200125811
Abstract: Field-upgradable barcode readers. An example field-upgradable barcode reader is configured to be supported by a workstation and includes a first housing portion supporting a generally horizontal platter having a generally horizontal window and a second housing portion supporting a generally vertical window. The second housing portion includes a receptacle configured to alternatively receive one of a cover and a field-installable imaging assembly insert. The field-installable imaging assembly insert is configured to receive an image acquisition assembly.
Type: Application
Filed: October 22, 2018
Publication date: April 23, 2020
Inventors: Mark Drzymala, Edward Barkan, Darran Michael Handshaw
-
Publication number: 20200125812
Abstract: Systems and methods for providing additional processing capabilities related to machine-readable symbols. A data collection system (100) may include a scan engine (102), auxiliary image processor (104), auxiliary visualizer (106), and host system (108). The scan engine may output decoded information obtained from a representation of a machine-readable symbol captured by a two-dimensional image processor. The scan engine may also output a set of images related to the machine-readable symbol and an object associated with the machine-readable symbol, in which the set of images may form a streaming set of images or streaming video. The set of images may be used by the auxiliary image processor to obtain further information about the machine-readable symbol and/or associated object, such as optical character recognition (OCR) or digital watermark (DWM) information. The set of images may be stored and made accessible by the auxiliary visualizer. The host system may synchronize a display of the images and decoded data output by the scan engine.
Type: Application
Filed: June 7, 2017
Publication date: April 23, 2020
Inventors: Federico Canini, Simone Spolzino, Marco Bozzoli, Luca Perugini
-
Publication number: 20200125813
Abstract: The current system and method relate to a tracking and feedback system. More specifically, the system and method comprise a tracking apparatus attached to a tracked entity, reader nodes, a central management database including a software platform, and a feedback system incorporated within the tracking apparatus and the central management database.
Type: Application
Filed: October 17, 2019
Publication date: April 23, 2020
Inventors: Brennan Scott Flores, Charles Robert Miller
-
Publication number: 20200125814
Abstract: A transaction code identification method comprises scanning a transaction code, the transaction code comprising a two-dimensional code and a check code; parsing the two-dimensional code to obtain a two-dimensional code content contained in the two-dimensional code; obtaining the check code; and sending the two-dimensional code content and the check code to a server to cause the server to verify the transaction code based on the two-dimensional code content and the check code.
Type: Application
Filed: December 17, 2019
Publication date: April 23, 2020
Inventor: JIAN REN
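The abstract does not say how the check code is derived, so the sketch below simply illustrates the scan-parse-verify flow with an invented keyed-hash check code; the function names, secret, and content format are all assumptions, not the patented scheme.

```python
import hashlib

def make_check_code(content: str, secret: str = "server-secret") -> str:
    """Illustrative check code: first 8 hex chars of a keyed SHA-256 hash."""
    return hashlib.sha256((secret + content).encode()).hexdigest()[:8]

def server_verify(content: str, check_code: str) -> bool:
    """Server-side step: recompute the check code and compare."""
    return make_check_code(content) == check_code

# Client side: parse the 2D code content, obtain the check code, send both.
content = "pay:order-42:amount=10.00"
code = make_check_code(content)

print(server_verify(content, code))        # True
print(server_verify(content + "0", code))  # tampered content -> False
```

The point of the two-part code is that altering the parsed content without knowing how the check code is produced makes verification fail on the server.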
-
Publication number: 20200125815
Abstract: An ultrasonic fingerprint sensor system of the present disclosure may be provided with a flexible substrate. The ultrasonic fingerprint sensor system may include a film stack disposed on the flexible substrate that provides acceptable acoustic coupling for fingerprint sensing. The ultrasonic fingerprint sensor system includes a high acoustic impedance layer in an acoustic path of ultrasonic waves through a display. The high acoustic impedance layer can be electrically conductive or electrically nonconductive. In some implementations, the ultrasonic fingerprint sensor system includes an ultrasonic transceiver or an ultrasonic transmitter separate from an ultrasonic receiver.
Type: Application
Filed: July 29, 2019
Publication date: April 23, 2020
Inventors: Yipeng Lu, Hrishikesh Vijaykumar Panchawagh, Kostadin Dimitrov Djordjev, Jae Hyeong Seo, Nicholas Ian Buchan, Chin-Jen Tseng, Tsongming Kao
-
Publication number: 20200125816
Abstract: A fingerprint image is acquired, and it is determined whether the finger is still touching the sensing surface. When the finger no longer touches the sensing surface, the user is authenticated based on one or several fingerprint image(s) acquired so far. When the finger still touches the sensing surface, a quality measure is determined for the fingerprint image. When a quality of the fingerprint image fulfills a predefined quality criterion, the user is authenticated based on one or several fingerprint images acquired so far. When the quality of the fingerprint image fails to fulfill the predefined quality criterion, a subsequent fingerprint image is acquired.
Type: Application
Filed: April 19, 2018
Publication date: April 23, 2020
Applicant: FINGERPRINT CARDS AB
Inventors: Morten WITH PEDERSEN, Morten HANSEN
-
Publication number: 20200125817
Abstract: Provided are a deformable fingerprint recognition device and a fingerprint authentication method and an electronic apparatus using the deformable fingerprint recognition device. The deformable fingerprint recognition device may include a fingerprint sensor configured to be deformed in shape and a strain sensor provided on a surface of the fingerprint sensor to measure deformation distribution of the fingerprint sensor. The deformable fingerprint recognition device may recognize a fingerprint of a user by reflecting the deformation distribution of the fingerprint sensor measured by the strain sensor. The deformable fingerprint recognition device may include a plurality of first pixel regions to detect the fingerprint of the user, and the strain sensor may include a plurality of second pixel regions to measure the deformation distribution of the fingerprint sensor.
Type: Application
Filed: October 16, 2019
Publication date: April 23, 2020
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyunjoon KIM, Jingu HEO, Dongkyun KIM, Seokwhan CHUNG
-
Publication number: 20200125818
Abstract: Method and apparatus for segmenting a cellular image are disclosed. A specific embodiment of the method includes: acquiring a cellular image; enhancing the cellular image using a generative adversarial network to obtain an enhanced cellular image; and segmenting the enhanced cellular image using a hierarchical fully convolutional network for image segmentation to obtain cytoplasm and zona pellucida areas in the cellular image.
Type: Application
Filed: October 22, 2019
Publication date: April 23, 2020
Inventors: Yiu Leung CHAN, Mingpeng ZHAO, Han Hui LI, Tin Chiu LI
-
Publication number: 20200125819
Abstract: The system receives exemplary time-series sensor signals comprising ground truth versions of signals generated by a monitored system associated with a target use case and a synchronization objective, which specifies a desired tradeoff between synchronization compute cost and synchronization accuracy for the target use case. The system performance-tests multiple synchronization techniques by introducing randomized lag times into the exemplary time-series sensor signals to produce time-shifted time-series sensor signals, and then uses each of the multiple synchronization techniques to synchronize the time-shifted time-series sensor signals across a range of different numbers of time-series sensor signals, and a range of different numbers of observations for each time-series sensor signal. The system uses the synchronization objective to evaluate results of the performance-testing in terms of compute cost and synchronization accuracy.
Type: Application
Filed: October 23, 2018
Publication date: April 23, 2020
Applicant: Oracle International Corporation
Inventors: Kenny C. Gross, Guang C. Wang
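As a rough illustration of the performance-testing idea above (inject a randomized lag into a ground-truth signal, then check whether a synchronization technique recovers it), the sketch below uses plain cross-correlation as the candidate technique. All names and parameters are invented; the patent does not specify which techniques are tested.

```python
import numpy as np

def inject_lag(signal: np.ndarray, lag: int) -> np.ndarray:
    """Shift a signal right by `lag` samples, padding with its first value."""
    if lag == 0:
        return signal.copy()
    return np.concatenate([np.full(lag, signal[0]), signal[:-lag]])

def estimate_lag(reference: np.ndarray, shifted: np.ndarray, max_lag: int) -> int:
    """One candidate synchronization technique: pick the lag that maximizes
    the cross-correlation between the reference and the shifted signal."""
    best, best_score = 0, -np.inf
    for lag in range(max_lag + 1):
        score = np.dot(reference[: len(reference) - lag], shifted[lag:])
        if score > best_score:
            best, best_score = lag, score
    return best

rng = np.random.default_rng(7)
signal = rng.standard_normal(400)   # stand-in for a ground-truth sensor signal
true_lag = 23                        # the "randomized lag" of the abstract
shifted = inject_lag(signal, true_lag)
est = estimate_lag(signal, shifted, max_lag=50)
print(true_lag, est)
```

In a full harness one would time each technique as well, so that recovered-lag accuracy and compute cost can be traded off against the synchronization objective.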
-
Publication number: 20200125820
Abstract: A data recognition method includes: extracting a feature map from input data based on a feature extraction layer of a data recognition model; pooling component vectors from the feature map based on a pooling layer of the data recognition model; and generating an embedding vector by recombining the component vectors based on a combination layer of the data recognition model.
Type: Application
Filed: March 7, 2019
Publication date: April 23, 2020
Applicant: Samsung Electronics Co., Ltd.
Inventors: Insoo KIM, Kyuhong KIM, Chang Kyu CHOI
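The abstract's three stages (feature extraction, pooling of component vectors, recombination into an embedding) can be caricatured with fixed random weights, as below. This is only an illustrative data-flow sketch; the shapes, names, and use of mean pooling are assumptions, not the claimed model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layer weights; a real recognition model would learn these.
W_extract = rng.standard_normal((16, 8))      # feature extraction layer
W_combine = rng.standard_normal((4 * 8, 5))   # combination layer

def embed(x: np.ndarray) -> np.ndarray:
    """x: (timesteps, 16) input. Returns a 5-dim embedding vector."""
    feature_map = np.tanh(x @ W_extract)                 # (timesteps, 8)
    # Pooling layer: pool several component vectors from the feature map
    segments = np.array_split(feature_map, 4)            # 4 temporal segments
    components = [seg.mean(axis=0) for seg in segments]  # 4 vectors of dim 8
    # Combination layer: recombine the component vectors into one embedding
    recombined = np.concatenate(components)              # (32,)
    return recombined @ W_combine                        # (5,)

x = rng.standard_normal((20, 16))
print(embed(x).shape)   # (5,)
```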
-
Publication number: 20200125821
Abstract: A system and method for detecting targets using a camera comprising narrow-band filters in a compact pixel cluster arrangement. In some cases, the target is a chemical target. In some cases, the target is a naval target, and processing of the data provides extent, shape, direction, and other characteristics that can indicate the type of naval target even in dark or low-light conditions.
Type: Application
Filed: October 18, 2018
Publication date: April 23, 2020
Applicant: BAE SYSTEMS Information and Electronic Systems Integration Inc.
Inventors: Michael J. CHOINIERE, Kenneth DINNDORF
-
Publication number: 20200125822
Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time for which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may be generated for the point in time based on the mapping and other data described herein.
Type: Application
Filed: January 8, 2019
Publication date: April 23, 2020
Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma
-
Publication number: 20200125823
Abstract: Methods and systems for detecting objects from aerial imagery are disclosed. The method includes obtaining an image of an area, obtaining a plurality of regional aerial images from the image of the area, and classifying the plurality of regional aerial images as a first class or a second class by a classifier, wherein: the first class indicates a regional aerial image contains a target object, the second class indicates a regional aerial image does not contain a target object, and the classifier is trained by first and second training data, wherein the first training data include first training images containing target objects, and the second training data include second training images containing target objects obtained by adjusting at least one of brightness, contrast, color saturation, resolution, or a rotation angle of the first training images; and recognizing a target object in a regional aerial image in the first class.
Type: Application
Filed: December 20, 2019
Publication date: April 23, 2020
Applicant: GEOSAT Aerospace & Technology
Inventors: Cheng-Fang LO, Zih-Siou CHEN, Chang-Rong KO, Chun-Yi WU, Ya-Wen CHENG, Kuang-Yu CHEN, Hsiu-Hsien WEN, Te-Che LIN, Ting-Jung CHANG
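The second training data in the abstract above are derived from the first by photometric and geometric adjustments. A minimal sketch of one such augmentation step might look like the following; the function name and parameter values are illustrative, not the patented pipeline.

```python
import numpy as np

def augment(image: np.ndarray, brightness=20, contrast=1.2, k_rot=1) -> np.ndarray:
    """Derive a second training image from a first one by adjusting
    brightness and contrast and applying a rotation.
    `image` is an HxWx3 uint8 array; parameter values are illustrative."""
    out = image.astype(np.float32)
    out = (out - 128.0) * contrast + 128.0 + brightness  # contrast, then brightness
    out = np.clip(out, 0, 255).astype(np.uint8)
    return np.rot90(out, k=k_rot)                        # rotate by k * 90 degrees

img = np.full((4, 6, 3), 100, dtype=np.uint8)            # toy "aerial" patch
aug = augment(img)
print(aug.shape, int(aug[0, 0, 0]))   # (6, 4, 3) 114
```

Applying such transforms to images that already contain target objects yields extra positive examples without new labeling effort, which is the point of the second training set.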
-
Publication number: 20200125824
Abstract: The present invention relates to a method for extracting features of interest from a fingerprint represented by an input image, the method comprising the implementation, by data processing means (21) of a client equipment (2), of steps of: (a) estimation of at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation, by means of a convolutional neural network (CNN); (b) recalibration of said input image as a function of said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation; (c) processing said recalibrated image so as to extract said features of interest from the fingerprint represented by said input image.
Type: Application
Filed: October 21, 2019
Publication date: April 23, 2020
Inventors: Guy MABYALAHT, Laurent KAZDAGHLI
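Steps (a)-(b) above amount to estimating an angular deviation and rotating the image back to a reference orientation before feature extraction. The toy sketch below replaces the CNN with trying each candidate deviation and checking a corner marker, and restricts deviations to multiples of 90 degrees; the marker trick and every name here are illustrative assumptions, not the patented method.

```python
import numpy as np

def recalibrate(image: np.ndarray) -> np.ndarray:
    """Rotate the image so its brightest pixel (a stand-in orientation cue)
    sits top-left, the assumed reference orientation. Trying each candidate
    rotation mirrors step (a)'s 'candidate' angular deviations."""
    for k in range(4):
        candidate = np.rot90(image, k=k)
        if candidate[0, 0] == candidate.max():
            return candidate
    return image  # no candidate matched; leave the image as-is

ref = np.zeros((5, 5), dtype=np.uint8)
ref[0, 0] = 255                      # marker defines the reference orientation
rotated = np.rot90(ref, k=3)         # simulate a misoriented input fingerprint
fixed = recalibrate(rotated)
print(np.array_equal(fixed, ref))    # True
```

A real system would regress an arbitrary angle with a trained CNN and resample the image, but the recalibrate-then-extract ordering is the same.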
-
Publication number: 20200125825
Abstract: Techniques for providing an information image display method are described. In one example method, biometric information is received from a user at a client device, and it is determined whether the received biometric information matches one of a plurality of stored predetermined biometric information items. In response to determining a match, data indicating an interface of an application corresponding to the matched biometric information is retrieved. The data indicating the interface of the application is displayed on a display screen of the client device for a predetermined duration while the client device is in an unused mode. After the predetermined duration has elapsed, the data indicating the interface of the application is removed from the display screen while the client device remains in the unused mode.
Type: Application
Filed: December 18, 2019
Publication date: April 23, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Jing WANG, Wenxia TONG, Jie ZENG
-
Publication number: 20200125826
Abstract: An object classification system for classifying objects is described. The system comprises an imaging region adapted for irradiating an object of interest, an arrayed detector, and a mixing unit configured for mixing the irradiation stemming from the object of interest by reflecting or scattering the irradiation on average at least three times after its interaction with the object of interest and prior to said detection.
Type: Application
Filed: May 26, 2018
Publication date: April 23, 2020
Inventors: Peter BIENSTMAN, Alessio LUGNAN, Floris LAPORTE
-
Publication number: 20200125827
Abstract: A classifier receives a document and analyzes the document to determine one or more predicted roles of one or more signatories, each predicted role determined based on one or more signature elements in the content of the document executed by the one or more signatories. The classifier evaluates each of the one or more predicted roles in view of a plurality of expected signatory role characteristics of a plurality of categories of documents of a transaction to select a particular category associated with the document from among the plurality of categories. The classifier classifies the document within the transaction as a particular logical type identified by the particular category from among a plurality of logical types for the transaction.
Type: Application
Filed: October 22, 2018
Publication date: April 23, 2020
Inventors: ANDREW R. FREED, CORVILLE O. ALLEN
-
Publication number: 20200125828
Abstract: Disclosed herein is a signature verification apparatus including a first verification circuitry verifying a user's signature by comparing dynamic signature data, indicating a change of a user's writing state over time during signing of his or her name by the user, against reference data for dynamic signature; a second verification circuitry verifying the user's signature by comparing static signature data, indicating a writing path during signing of his or her name by the user, against reference data for static signature; and a data registration circuitry registering the reference data for dynamic signature and registering the reference data for static signature. In a case where the reference data for static signature has yet to be registered by the data registration circuitry, the second verification circuitry verifies the user's signature by regarding static signature data generated from the already registered reference data for dynamic signature as the reference data for static signature.
Type: Application
Filed: October 2, 2019
Publication date: April 23, 2020
Inventor: Nicholas Victor Mettyear
-
Publication number: 20200125829
Abstract: A method of identifying an unknown device via interrogation may comprise determining a communication address of an unknown device in communication with an information handling system, capturing a digital image of the unknown device, determining the unknown device belongs to a known class of devices via an object recognition algorithm to analyze the digital image, accessing a device class registry listing a plurality of candidate device identifications associated with the known class of devices, and identifying a stimulus action or actuation associated with a first of the plurality of candidate device identifications. The method may also comprise performing the stimulus action, receiving an indication via the communication address indicating the unknown device detected the stimulus action, and associating the communication address of the unknown device with the first of the plurality of candidate device identifications.
Type: Application
Filed: October 22, 2018
Publication date: April 23, 2020
Applicant: Dell Products, LP
Inventors: Tyler R. Cox, Spencer G. Bull, Ryan N. Comer, Shreya Gupta, Richard W. Schuckle
-
Publication number: 20200125830
Abstract: A method of recognizing an object includes comparing a three-dimensional point cloud of the object to a three-dimensional candidate from a dataset to determine a first confidence score, and comparing color metrics of a two-dimensional image of the object to a two-dimensional candidate from the dataset to determine a second confidence score. The point cloud includes a color appearance calibrated from a white balance image, and the color appearance of the object is compared with the three-dimensional candidate. The first or second confidence score is selected to determine which of the three-dimensional candidate or the two-dimensional candidate corresponds with the object.
Type: Application
Filed: April 27, 2017
Publication date: April 23, 2020
Applicant: Hewlett-Packard Development Company, L.P.
Inventors: Yang Lei, Jian Fan, Jerry Liu
-
Publication number: 20200125831
Abstract: The disclosure provides a focusing method, device, and computer apparatus for capturing a clear human face. The focusing method includes: acquiring first position information of a human face in a current frame of an image to be captured by performing face recognition on the image, after a camera finishes focusing; acquiring second position information of the human face in a next frame, before shooting the image; determining, based on the first and second position information, whether the position of the human face has changed; resetting an ROI of the human face when it has changed; and refocusing on the human face based on the ROI. The disclosure can track the human face in real time and trigger the camera to refocus after the human face deviates from a previous focusing position, thereby making the human face clear in the captured photograph.
Type: Application
Filed: June 11, 2018
Publication date: April 23, 2020
Inventors: Shijie ZHUO, Xiaopeng LI
-
Publication number: 20200125832
Abstract: The present disclosure provides a verification system. The verification system is formed with a trusted execution environment, the verification system includes a processor set, and the processor set is configured to: obtain an infrared image to be verified of a target object; determine, in the trusted execution environment, whether the infrared image to be verified matches a pre-stored infrared template; in response to determining that the infrared image to be verified matches the pre-stored infrared template, obtain a depth image to be verified of the target object; and determine, in the trusted execution environment, whether the depth image to be verified matches a pre-stored depth template.
Type: Application
Filed: November 13, 2019
Publication date: April 23, 2020
Inventors: Xueyong Zhang, Xiangnan Lyu
-
Publication number: 20200125833
Abstract: Provided are methods and apparatuses for positioning face feature points. The method includes: carrying out edge detection on a face image to obtain a face feature line image; and fusing the face image and the face feature line image to obtain position information of face feature points.
Type: Application
Filed: December 19, 2019
Publication date: April 23, 2020
Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Chen QIAN, Wenyan WU
-
Publication number: 20200125834
Abstract: An unlocking control method and a related product, the method may include: acquiring, by means of a proximity sensor of a mobile terminal, the distance between a face and the mobile terminal; determining a target biometrics module, the target biometrics module being any one biometrics module to be adjusted to a matching threshold value within a current biometrics apparatus; determining, according to the distance, a target matching threshold value corresponding to the target biometrics module, and adjusting a matching threshold value of the target biometrics module to the target matching threshold value; and performing unlocking control according to the target matching threshold value.
Type: Application
Filed: December 20, 2019
Publication date: April 23, 2020
Inventors: Yibao Zhou, Haiping Zhang
-
Publication number: 20200125835
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Application
Filed: September 27, 2019
Publication date: April 23, 2020
Applicant: Apple Inc.
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
-
Publication number: 20200125836
Abstract: Provided are a training method for a descreening system, a descreening method, and a related device, apparatus, and medium. The training method comprises: obtaining a first error function based on a halftone position and a preset label value; obtaining a second error function based on simulation images and non-halftone training samples; obtaining a third and a fourth error function according to a discrimination result; and obtaining a fifth error function from feature A and feature B. Based on the characteristics of generative adversarial networks, a sixth and a seventh error function are obtained from the second, third, fourth, and fifth error functions. An overall error function is then constructed from the errors generated during training, and the network parameters of each model in the descreening system are updated by backpropagation of that error function.
Type: Application
Filed: June 25, 2018
Publication date: April 23, 2020
Inventor: Lingling Xu
-
Publication number: 20200125837Abstract: A system and method for generating a facial representation. The method includes identifying, via at least one data source, at least one multimedia content element; generating at least one signature for at least a portion of each identified multimedia content element, wherein each generated signature represents at least one facial concept; analyzing the generated signatures to determine a cluster of signatures of facial concepts; and generating, based on the cluster of facial concept signatures, a facial representation.Type: ApplicationFiled: December 20, 2019Publication date: April 23, 2020Applicant: Cortica Ltd.Inventors: Igal Raichelgauz, Karina Odinaev, Yehoshua Y. Zeevi
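The abstract above clusters per-image signatures and derives a facial representation from the cluster. A minimal sketch, assuming signatures are plain feature vectors and the representation is the cluster centroid (both assumptions; the patent's signature scheme is not specified here):

```python
# Sketch: reducing many per-image "signatures" (here, plain feature
# vectors) to a single facial representation by averaging a cluster.
# The 2-D vectors and centroid step are illustrative assumptions.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# One cluster of signatures that all express the same facial concept.
signatures = [[0.9, 0.1], [1.1, 0.3], [1.0, 0.2]]
representation = centroid(signatures)
print(representation)
```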
-
Publication number: 20200125838Abstract: The disclosure relates to systems and methods for automating offender documentation with facial or other recognition technique. A method can include generating, by cameras situated about a correctional facility, pixel data including first pixels corresponding to a face of a first offender in the correctional facility and an environment around the first offender corresponding to a location within the field of view of the camera, and providing, to a server, the pixel data from the cameras through a network connection, and receiving, from the server, an alert that, based on facial recognition and location detection analysis performed on the pixel data by the server, indicates a monitoring rule associated with the first offender is violated, an offender identification associated with the first offender, a location of the first offender in the correctional facility, and an indication of the monitoring rule violated.Type: ApplicationFiled: April 26, 2019Publication date: April 23, 2020Inventors: Kenneth L. Dalley, JR., Frank Montemorano, Andrew Shaw
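The abstract above raises an alert when facial recognition plus location detection shows a monitoring rule being violated. A minimal sketch of the rule-check step, assuming rules are sets of prohibited zones per offender ID (the rule shape and IDs are illustrative):

```python
# Sketch: checking a location-based monitoring rule once facial
# recognition has mapped a face to an offender ID. The prohibited-zone
# rule shape and the offender ID are illustrative assumptions.

def check_rules(offender_id, location, rules):
    """Return an alert dict if the offender is in a prohibited zone, else None."""
    prohibited = rules.get(offender_id, set())
    if location in prohibited:
        return {"offender": offender_id, "location": location,
                "rule": "prohibited-zone"}
    return None

rules = {"A-113": {"yard", "kitchen"}}
print(check_rules("A-113", "kitchen", rules))       # alert raised
print(check_rules("A-113", "cell-block-3", rules))  # no violation -> None
```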
-
Publication number: 20200125839Abstract: A system for determining a quantitative accuracy of a test movement relative to a reference movement includes a display output device, a memory, and a processor operatively connected to the display output device and the memory. The memory stores motion capture data and programming instructions. The processor executes the programming instructions to determine a quantitative accuracy of the test movement relative to the reference movement. A method, executable by the processor, for determining the quantitative accuracy includes receiving, with the processor, motion capture data that includes the reference movement and the test movement. The motion data is split into individual movements, and the test movement is aligned with the reference movement. The processor computes a quantitative accuracy of the test movement relative to the reference movement, and generates, with the display output device, a visualization representative of the test movement. The computed accuracy is encoded into the visualization.Type: ApplicationFiled: October 22, 2018Publication date: April 23, 2020Inventors: Yen-Lin Chen, Lincan Zou, Liu Ren
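The abstract above aligns a test movement with a reference movement before scoring it. A minimal sketch, assuming dynamic time warping (DTW) as the alignment step and a simple cost-to-score mapping; the abstract only says the movements are aligned, not how:

```python
# Sketch: aligning a test movement to a reference movement and scoring
# accuracy. DTW and the 1/(1+cost) score are illustrative assumptions.

def dtw_distance(a, b):
    """DTW distance between two 1-D pose trajectories."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def accuracy(test, reference):
    """Map the alignment cost into a 0..1 accuracy score."""
    return 1.0 / (1.0 + dtw_distance(test, reference))

reference = [0.0, 1.0, 2.0, 1.0, 0.0]
test = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]  # same motion, slower at the peak
print(accuracy(test, reference))  # 1.0: DTW absorbs the timing difference
```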
-
Publication number: 20200125840Abstract: A computer-implemented method of managing hierarchically arranged elements is disclosed.Type: ApplicationFiled: July 15, 2019Publication date: April 23, 2020Inventors: KUNLING GENG, SRIDHAR GUNAPU
-
Publication number: 20200125841Abstract: Apparatus for the detection of print marks with a sensor arrangement having at least one contrast sensor which, to generate a cyclical sensor signal, is disposed above the area of printed material containing the print mark as it is passed below the contrast sensor, said apparatus also having a signal conditioning unit. The signal conditioning unit has at least one filter unit with a first filter for determining the first derivative of the sensor signal, and on the basis of an evaluation of at least the first derivative of the sensor signal the filter unit generates at least one output value which is representative of print marks.Type: ApplicationFiled: December 17, 2019Publication date: April 23, 2020Applicant: B&R INDUSTRIAL AUTOMATION GMBHInventor: Thomas ENZINGER
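The abstract above evaluates the first derivative of the contrast-sensor signal to find print marks. A minimal sketch, assuming a discrete first difference and a fixed threshold for the filter unit (both assumptions; the patent's filter is not specified here):

```python
# Sketch: detecting a print mark from the first derivative of a cyclic
# contrast-sensor signal. The discrete difference and the 0.5 threshold
# are illustrative assumptions for the filter unit.

def first_derivative(signal):
    """Discrete first difference of the sampled sensor signal."""
    return [b - a for a, b in zip(signal, signal[1:])]

def mark_edges(signal, threshold):
    """Indices where |d/dt| exceeds the threshold (mark edges)."""
    return [i for i, d in enumerate(first_derivative(signal)) if abs(d) > threshold]

# Bright substrate with one dark print mark spanning samples 3..5.
signal = [0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.9, 0.9]
print(mark_edges(signal, threshold=0.5))  # leading and trailing edges
```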
-
Publication number: 20200125842Abstract: Apparatuses, methods, and systems are presented for sensing scene-based occurrences. Such an apparatus may comprise a vision sensor system comprising a first processing unit and dedicated computer vision (CV) computation hardware configured to receive sensor data from at least one sensor array comprising a plurality of sensor pixels and capable of computing one or more CV features using readings from neighboring sensor pixels. The vision sensor system may be configured to send an event to be received by a second processing unit in response to processing of the one or more computed CV features by the first processing unit. The event may indicate possible presence of one or more irises within a scene.Type: ApplicationFiled: December 23, 2019Publication date: April 23, 2020Inventors: Evgeni GOUSEV, Alok GOVIL, Jacek MAITAN, Venkat RANGAN, Edwin Chongwoo PARK, Jeffery HENCKELS
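The abstract above computes CV features from readings of neighboring sensor pixels. A minimal sketch using a local binary pattern (LBP), one common feature of that kind; the 3x3 neighbourhood and bit ordering are assumptions, not the patent's hardware computation:

```python
# Sketch: a local binary pattern (LBP), a CV feature computed only from
# a pixel's neighbours. Neighbourhood size and bit order are assumptions.

def lbp(image, r, c):
    """8-bit LBP code for pixel (r, c) of a 2-D grayscale grid."""
    center = image[r][c]
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

image = [[10, 10, 10],
         [10, 50, 10],
         [90, 90, 90]]
print(lbp(image, 1, 1))  # only the bottom-row neighbours set their bits
```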
-
Publication number: 20200125843Abstract: Systems and methods for robust biometric applications using a detailed eye shape model are described. In one aspect, after receiving an eye image of an eye (e.g., from an eye-tracking camera on an augmented reality display device), an eye shape (e.g., upper or lower eyelids, an iris, or a pupil) of the eye in the eye image is calculated using cascaded shape regression methods. Eye features related to the estimated eye shape can then be determined and used in biometric applications, such as gaze estimation or biometric identification or authentication.Type: ApplicationFiled: December 18, 2019Publication date: April 23, 2020Inventors: Jixu Chen, Gholamreza Amayeh
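The abstract above estimates eye shape with cascaded shape regression: a running shape estimate is refined by a sequence of regressors, each adding a correction. A minimal sketch where the stage regressors are toy stand-ins (the real stages would be learned from annotated eye images):

```python
# Sketch: cascaded shape regression. Each stage maps the current shape
# estimate to a correction. The linear "regressors" here are toy
# stand-ins for learned stages, an illustrative assumption.

def cascade(initial_shape, regressors):
    """Apply each stage's correction to the running shape estimate."""
    shape = list(initial_shape)
    for regress in regressors:
        delta = regress(shape)
        shape = [s + d for s, d in zip(shape, delta)]
    return shape

# Toy stages that each close half the gap to a target landmark pair.
target = [4.0, 8.0]
half_step = lambda shape: [(t - s) * 0.5 for s, t in zip(shape, target)]

result = cascade([0.0, 0.0], [half_step, half_step, half_step])
print(result)  # converging toward the target eye landmarks
```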
-
Publication number: 20200125844Abstract: Systems and methods for identifying clouds and cloud shadows in satellite imagery are described herein. In an embodiment, a system receives a plurality of images of agronomic fields produced using one or more frequency bands. The system also receives corresponding data identifying cloud and cloud shadow locations in the images. The system trains a machine learning system to identify at least cloud locations using the images as inputs and at least data identifying pixels as cloud pixels or non-cloud pixels as outputs. When the system receives one or more particular images of a particular agronomic field produced using the one or more frequency bands, the system uses the one or more particular images as inputs into the machine learning system to identify a plurality of pixels in the one or more particular images as particular cloud locations.Type: ApplicationFiled: October 18, 2019Publication date: April 23, 2020Inventors: Ying She, Pramithus Khadka, Wei Guan, Xiaoyuan Yang, Demir Devecigil
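The abstract above trains a model that labels pixels as cloud or non-cloud from multi-band imagery. A minimal sketch, assuming a nearest-centroid classifier over two-band pixel intensities as a stand-in for the patent's machine learning system (classifier choice, bands, and values are illustrative):

```python
# Sketch: labelling pixels as cloud / non-cloud from band intensities.
# A nearest-centroid classifier stands in for the abstract's machine
# learning system; the two bands and sample values are assumptions.

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign vec the label of the closest centroid."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Bright, white pixels -> cloud; darker vegetation pixels -> non-cloud.
training = [([0.9, 0.9], "cloud"), ([0.8, 0.85], "cloud"),
            ([0.2, 0.4], "non-cloud"), ([0.25, 0.35], "non-cloud")]
model = train(training)
print(classify(model, [0.85, 0.9]))
```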
-
Publication number: 20200125845Abstract: Systems, methods, and non-transitory computer-readable media can determine a first label at a first position in a first image captured from a vehicle, the first label indicating that a first object is depicted in the first image at the first position, wherein the first image is a two-dimensional image. The first object is identified in a three-dimensional coordinate space representative of an environment of the vehicle based on the first position of the first label within the first image. A second label is automatically generated at a second position in a second image captured from the vehicle based on simultaneous localization and mapping (SLAM) information associated with the vehicle. The second label indicates that the first object is depicted in the second image at the second position.Type: ApplicationFiled: October 22, 2018Publication date: April 23, 2020Applicant: Lyft, Inc.Inventors: Wolfgang Hess, Clemens Marschner, Holger Rapp
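The abstract above carries a 2-D label from one frame to the next via the object's 3-D position and the vehicle's SLAM pose. A minimal sketch, assuming a pinhole camera, a known focal length, and pure forward translation between frames (all simplifying assumptions):

```python
# Sketch: propagating a 2-D label between frames via a 3-D point, as in
# SLAM-based auto-labelling. Pinhole model, focal length, and the
# pure-translation camera motion are illustrative assumptions.

def project(point3d, camera_pos, focal):
    """Project a 3-D world point into an axis-aligned camera at camera_pos."""
    x, y, z = (p - c for p, c in zip(point3d, camera_pos))
    return (focal * x / z, focal * y / z)

obj = (2.0, 0.0, 10.0)  # labelled object in world coordinates (metres)
label_frame1 = project(obj, (0.0, 0.0, 0.0), focal=500.0)
# SLAM reports the vehicle moved 5 m forward between frames.
label_frame2 = project(obj, (0.0, 0.0, 5.0), focal=500.0)
print(label_frame1, label_frame2)  # second label shifts outward as it nears
```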
-
Publication number: 20200125846Abstract: A method, apparatus, and system for augmenting a live view of a task location. A portable computing device is localized to an object. A visualization of a task location is displayed on the live view of the object for performing a task using a model of the object and a combined map of the object. The combined map is generated from scans of the object by portable computing devices at different viewpoints to the object.Type: ApplicationFiled: October 23, 2018Publication date: April 23, 2020Inventors: Brian Dale Laughlin, Melissa Margaret Skelton
-
Publication number: 20200125847Abstract: Systems and methods provided for presenting supplemental content in an augmented reality environment where an object within a field of view of an augmented reality device of a user is identified and processed to detect a reference related to a participant in an event. A user profile or user social network is searched to identify a message from the user about the participant. The message may be combined with the object in the augmented reality field of view.Type: ApplicationFiled: July 2, 2019Publication date: April 23, 2020Inventors: Adam Bates, Jesse F. Patterson, Mark K. Berner, Eric Dorsey, Jonathan A. Logan, David W. Chamberlin, Paul Stevens, Herbert A. Waterman
-
Publication number: 20200125848Abstract: The exemplified system and method facilitates process, grammar, and framework to perform analytics operations, and visualize the result of analytics operations using augmented reality. The exemplified system and method can be used, but is not limited to, for augmented reality presentations of physical objects as paper documents, digital or printed signage, posters, physical or digital displays, real-world objects, indoor and outdoor spaces, hardware device displays, vehicle dashboards, and other real-world scenes.Type: ApplicationFiled: August 22, 2019Publication date: April 23, 2020Inventors: Arnab Nandi, Codi Burley
-
Publication number: 20200125849Abstract: An apparatus for raising livestock includes one or more confinement pens together with alleyways for transfer of the animals from one location to another. The apparatus includes one or more cameras for obtaining images of all animals in the containment area. A processor is provided for analyzing the images, the processor being arranged to allocate an arbitrary identification to each animal and to track all animals continually to maintain the allocation. From this tracking various data related to individual animals or the animals as a group can be obtained to assess their characteristics and to provide an indication to the worker of the animal to be extracted. The processor can be arranged to detect by image analysis of the image a quantity of feed and/or water in a feeder and to obtain images of the farrowing pen including the sow confinement area and at least one piglet confinement area.Type: ApplicationFiled: September 4, 2019Publication date: April 23, 2020Inventors: Jacquelin Labrecque, Frank Gouineau, Pierre Savatte, Dimitri Estrade, Joel Rivest
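The abstract above assigns each animal an arbitrary identification and tracks it continually so the allocation is maintained. A minimal sketch of that matching step, assuming 2-D centroid detections and greedy nearest-neighbour matching (the real system works on images and is not limited to this scheme):

```python
# Sketch: keeping an arbitrary identity on each animal across frames by
# matching detections to the nearest previous position. Greedy matching
# and 2-D centroids are illustrative assumptions.

def track(prev, detections):
    """prev: {id: (x, y)}. Match each id to its nearest new detection."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    remaining = list(detections)
    updated = {}
    for ident, pos in prev.items():
        best = min(remaining, key=lambda d: dist(pos, d))
        updated[ident] = best
        remaining.remove(best)
    return updated

frame1 = {1: (0.0, 0.0), 2: (10.0, 10.0)}          # two animals, arbitrary IDs
frame2 = track(frame1, [(9.5, 10.5), (1.0, 0.5)])  # detections, next frame
print(frame2)  # each ID follows the animal that moved least
```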