Patent Applications Published on December 24, 2020
-
Publication number: 20200401786
Abstract: Techniques are described for determining the quality of a nerve graft by assessing quantitative structural characteristics of the nerve graft. Aspects of the techniques include obtaining an image identifying laminin-containing tissue in the nerve graft; creating a transformed image using a transformation function of an image processing application on the image; using an analysis function of the image processing application, analyzing the transformed image to identify one or more structures in accordance with one or more recognition criteria; and determining one or more structural characteristics of the nerve graft derived from a measurement of the one or more structures.
Type: Application
Filed: August 18, 2020
Publication date: December 24, 2020
Applicant: Axogen Corporation
Inventor: Curt DEISTER
-
Publication number: 20200401787
Abstract: A computer method for correlating depictions of colonies of microorganisms includes receiving an image of a substrate associated with a first time and showing a colony of microorganisms. A second image of the same substrate, associated with a second time, shows a candidate colony of microorganisms. A region of the second image that shows the candidate colony of microorganisms is located. The first region of the first image is compared to the second region of the second image. Based on the comparison of the images, the candidate colony of microorganisms is determined to be the same colony as the first colony of microorganisms. Systems for moving substrates having colonies of microorganisms and maintaining orientation of the substrates before and after movement are also described.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Applicant: Purdue Research Foundation
Inventors: Joseph Paul Robinson, Bartlomiej Rajwa, Valery Patsekin
-
Publication number: 20200401788
Abstract: An image classification model is configured to classify a subject viewed via a multi-camera apparatus. A depth classification model is configured to distinguish between a depth characteristic of a 3D subject and a depth characteristic of a realistic 2D rendering of a 3D subject, using depth information provided by viewing a subject via a multi-camera apparatus. A 2D rendering of a 3D subject is presented as an input to a multi-camera apparatus, the multi-camera apparatus operating the image classification model and the depth classification model. Using the multi-camera apparatus, a first depth information corresponding to the input is collected. The first depth information is provided to the depth classification model. Responsive to the depth classification model classifying the input as a 2D rendering, the 2D rendering is rejected as being the 3D subject.
Type: Application
Filed: June 18, 2019
Publication date: December 24, 2020
Applicant: International Business Machines Corporation
Inventors: David Okun, Justin Mccoy
-
Publication number: 20200401789
Abstract: A method, apparatus, and computer program stored on a non-volatile computer-readable storage medium for confirming a pill in the mouth of a user. The computer program causes a general-purpose computer to perform the steps of capturing one or more images of a user by an image capture device, confirming the position of the face of the user within the captured image by measuring a size of the face, and setting a predetermined portion of the face of the user to be a region of interest. An open mouth of the user is confirmed within the region of interest, and the open mouth of the user is classified as one of a mouth with a pill therein and a mouth without a pill therein.
Type: Application
Filed: September 4, 2020
Publication date: December 24, 2020
Inventors: Lei Guan, Adam Hanina
-
Publication number: 20200401790
Abstract: Embodiments of this application disclose a face image processing method and apparatus, and a storage medium. The method includes: obtaining a to-be-processed face image; receiving an operation instruction for deforming a target face portion of a face in the face image, and determining an operation type of deformation according to the operation instruction; determining deformation parameters of the deformation according to the operation type, and generating an adjuster according to the deformation parameters; obtaining an adjustment amplitude by which the adjuster performs dynamic adjustment on the target face portion, and displaying a change effect of the target face portion based on the dynamic adjustment in a display interface; and determining an adjustment parameter according to the adjustment amplitude, and obtaining the deformed face image according to the adjustment parameter.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Inventors: Kai HU, Xiaoqi Li, Yue YANG
-
Publication number: 20200401791
Abstract: In one embodiment, a method includes identifying a facial image from an image of a scene. The method then determines a context associated with the facial image based on a context model. The method then identifies a person from a database based on the context and a facial recognition model associated with the facial image.
Type: Application
Filed: March 6, 2020
Publication date: December 24, 2020
Inventors: Jiang Gao, Xinyao Wang, Gene Becker
-
Publication number: 20200401792
Abstract: Implementations include actions of receiving consumer-specific data and ID-specific data from an identification presented by a consumer to a vending machine, processing at least a portion of the ID-specific data to determine one or more of whether the identification is unexpired and whether the identification is authentic, and serving the consumer from the vending machine at least partially in response to determining that the identification is unexpired and that the identification is authentic and determining that the consumer is authentic relative to the identification.
Type: Application
Filed: June 22, 2020
Publication date: December 24, 2020
Inventors: Christopher J. McClellan, Jon C. Carder, Kevin D. McCann, Benjamin Scott Rogers, Christopher Bryan Paul Barr, Josef Salyer, Michael James Smith, David W. Nippa
-
APPARATUS AND METHODS FOR DETERMINING MULTI-SUBJECT PERFORMANCE METRICS IN A THREE-DIMENSIONAL SPACE
Publication number: 20200401793
Abstract: Apparatus and methods for extraction and calculation of multi-person performance metrics in a three-dimensional space. An example apparatus includes a detector to identify a first subject in a first image captured by a first image capture device based on a first set of two-dimensional kinematic keypoints in the first image, the two-dimensional kinematic keypoints corresponding to a joint of the first subject, the first image capture device associated with a first view of the first subject; a multi-view associator to verify the first subject using the first image and a second image captured by a second image capture device, the second image capture device associated with a second view of the first subject, the second view different than the first view; and a keypoint generator to generate three-dimensional keypoints for the first subject using the first set of two-dimensional kinematic keypoints.
Type: Application
Filed: June 26, 2020
Publication date: December 24, 2020
Inventors: Nelson Leung, Jonathan K. Lee, Bridget L. Williams, Sameer Sheorey, Amery Cong, Mehrnaz Khodam Hazrati, Mourad S. Souag, Adam Marek, Pawel Pieniazek, Bogna Bylicka, Jakub Powierza, Anna Banaszczyk-Fiszer
-
Publication number: 20200401794
Abstract: A nonverbal information generation apparatus includes a nonverbal information generation unit that generates time-information-stamped nonverbal information that corresponds to time-information-stamped text feature quantities on the basis of the time-information-stamped text feature quantities and a learned nonverbal information generation model. The time-information-stamped text feature quantities are configured to include feature quantities that have been extracted from text and time information representing times assigned to predetermined units of the text. The nonverbal information is information for controlling an expression unit that expresses behavior that corresponds to the text.
Type: Application
Filed: February 15, 2019
Publication date: December 24, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Ryo ISHII, Ryuichiro HIGASHINAKA, Taichi KATAYAMA, Junji TOMITA, Nozomi KOBAYASHI, Kyosuke NISHIDA
-
Publication number: 20200401795
Abstract: A computer-implemented method and system for neural network-based recognition of trade workers present on industrial sites is presented.
Type: Application
Filed: September 2, 2020
Publication date: December 24, 2020
Inventors: LAI HIM MATTHEW MAN, MOHAMMAD SOLTANI, AHMED ALY, WALID ALY
-
Publication number: 20200401796
Abstract: The invention concerns a method comprising: detecting strokes of digital ink input on a computing device in a free handwriting format; detecting a text block from said strokes; performing text recognition on each text line of said text block, including extracting text lines from the text block and generating model data that associate each stroke of the text block with a character, a word and a text line of the text block; and normalizing each text line from the free handwriting format into a structured format to comply with a document pattern. The normalization may comprise, for each text line: computing a transform function to transform said text line into the structured format; applying the transform function to the text line; and updating the model data of said text line based on the transform function.
Type: Application
Filed: December 16, 2019
Publication date: December 24, 2020
Inventor: Alain Chateigner
-
Publication number: 20200401797
Abstract: A system summary can be evaluated accurately with consideration given to the meaning of each unit. A unit division portion divides a document to be summarized and a system summary into units. For each reference summary, an oracle generation portion generates an oracle, which is a subset of units of the document to be summarized that satisfy a length requirement and maximize a score of an evaluation function for the subset of units with respect to the reference summary. An oracle unit score determination portion determines the score of each unit included in a set of the oracles based on the generated oracles. For each unit of the system summary, a system unit score determination portion obtains a corresponding unit from among the units included in the set of the oracles and determines the score of the unit of the system summary. An evaluation score determination portion determines the score of the system summary based on the scores of the units of the system summary.
Type: Application
Filed: February 14, 2019
Publication date: December 24, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tsutomu HIRAO, Masaaki NAGATA
-
Publication number: 20200401798
Abstract: Computer-implemented methods are provided for generating a data structure representing tabular information in a scanned image. Such a method can include storing image data representing a scanned image of a table, processing the image data to identify positions of characters and lines in the image, and mapping locations in the image of information cells, each containing a set of the characters, in dependence on said positions. The method can also include, for each cell, determining cell attribute values, dependent on the cell locations, for a predefined set of cell attributes, and supplying the attribute values as inputs to a machine-learning model trained to pre-classify cells as header cells or data cells in dependence on cell attribute values.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventors: Antonio Foncubierta Rodriguez, Maria Gabrani, Waleed Farrukh
-
Publication number: 20200401799
Abstract: A method may include acquiring one or more image texts from an image of a document, segmenting the image into one or more sub-images using the one or more image texts, determining, by applying a machine learning model, one or more experimental techniques of one or more experiments for the one or more sub-images, and adding, to a knowledge base, one or more mappings of the one or more sub-images to the one or more experiments.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Applicant: Scinapsis Analytics Inc. dba BenchSci
Inventors: Anshuman Sahoo, Thomas Kai Him Leung, David Qixiang Chen, Elvis Mboumien Wianda
-
Publication number: 20200401800
Abstract: A method for controlling an unmanned aerial vehicle (UAV) to track a monitored person is provided. The method includes directing the UAV toward a target location, the target location being based on past or present location information provided by a personal monitoring device attached to a monitored person, the location information representing the location of the personal monitoring device; assuming with the UAV a surveillance position relative to the target location; and determining that the monitored device is proximate to the target location by receiving signals from the personal monitoring device and/or observing the monitored person through a camera on the UAV.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Applicant: Satellite Tracking of People LLC
Inventor: David W. LeJeune, JR.
-
Publication number: 20200401801
Abstract: A monitor device includes a camera for capturing a motion image of a motion of a robot device and an acceleration sensor for detecting an operational status of the robot device. A storage control unit for a robot controller performs control for storing, in a storage unit, a motion image captured by the camera and attached with a time, and control for storing, in the storage unit, an acceleration acquired from an acceleration sensor and attached with a time. When the operational status of the robot device deviates from a reference operational status, an extraction unit extracts, from the storage unit, a motion image in a period preceding the deviation from the reference operational status and stores the extracted motion image in the storage unit.
Type: Application
Filed: June 10, 2020
Publication date: December 24, 2020
Inventor: Masafumi OOBA
-
Publication number: 20200401802
Abstract: A computer-implemented system and method provide for a tagging user (TU) device that determines a first location of the TU device and receives, in the first location, a selection of a real-world object from a TU who views the object through the TU device. The TU device receives, from a TU, tagging information to attach to the object, and captures descriptive attributes of the object. The descriptive attributes and the tagging information associated with the first location are stored in a tagged object database.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventors: Robert Huntington Grant, Zachary A. Silverstein, Vyacheslav Zheltonogov, Juan C. Lopez
-
Publication number: 20200401803
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
Type: Application
Filed: May 21, 2020
Publication date: December 24, 2020
Inventors: Stephen Gilbert, Eliot Winer, Jack Miller, Alex Renner, Nathan Sepich, Vinodh Sankaranthi, Chris Gallant, David Wehr, Rafael Radkowski, Cheng Song
-
Publication number: 20200401804
Abstract: Various implementations disclosed herein include devices, systems, and methods that use an object as a background for virtual content. Some implementations involve obtaining an image of a physical environment. A location of a surface of an object is detected based on the image. A virtual content location to display virtual content is determined, where the virtual content location corresponds to the location of the surface of the object. Then, a view of the physical environment and virtual content displayed at the virtual content location is provided.
Type: Application
Filed: May 22, 2020
Publication date: December 24, 2020
Inventors: Anselm Grundhoefer, Rahul Nair
-
Publication number: 20200401805
Abstract: In order to make it possible for the user to perceive the possibility of a collision with an object in the real world, an image processing apparatus comprises: a location estimation unit configured to, based on a video obtained by an image capturing unit for capturing a physical space, estimate a self-location of the image capturing unit in the physical space; a recognition unit configured to recognize a physical object existing within a certain distance from the self-location based on the video; an area decision unit configured to decide a predetermined area in the physical space in relation to the video; and a determination unit configured to determine whether or not a warning is given in accordance with whether or not a physical object recognized by the recognition unit is included in the predetermined area.
Type: Application
Filed: June 19, 2020
Publication date: December 24, 2020
Inventor: Takuya Kotoyori
-
Publication number: 20200401806
Abstract: An augmented reality displaying system for displaying a virtual object through compositing on an image taken of the real world, comprising: a camera for capturing an image of the real world; a location information acquiring portion for acquiring, as location information, the coordinates and orientation at the instant of imaging by the camera; an image analyzing portion for analyzing, as depth information, the relative distances to imaging subjects for individual pixels that structure the real world image that has been captured; a virtual display data generating portion for generating virtual display data, on real map information that includes geographical information in the real world, based on the location information acquired by the location information acquiring portion; and a compositing processing portion for displaying the virtual display data, generated by the virtual display data generating portion, superimposed on an image captured by the camera in accordance with the depth information.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Applicant: DWANGO Co., Ltd.
Inventors: Kouichi NAKAMURA, Kazuya ASANO
-
Publication number: 20200401807
Abstract: A refuse vehicle includes a body coupled to a chassis and defining a refuse compartment configured to store refuse, a refuse collection arm configured to engage and lift a refuse container to unload refuse, and an object detection system configured to provide object detection data relating to locations of objects relative to the refuse vehicle. An actuator controls movement of the refuse collection arm. A controller is configured to use the object detection data to determine if the refuse container is present within an aligned zone relative to the chassis, the aligned zone representing a range of locations in which the refuse collection arm is capable of engaging the refuse container. In response to a determination that the refuse container is within the aligned zone, the controller is configured to provide an indication to an operator that the refuse container is within the aligned zone.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Applicant: Oshkosh Corporation
Inventors: Grant Wildgrube, Zhenyi Wei, Cody D. Clifton
-
Publication number: 20200401808
Abstract: A method for recognizing a key time point in a video includes: obtaining at least one video segment by processing each image frame in the video by an image classification model; determining a target video segment in the at least one video segment based on a shot type; obtaining respective locations of a first object and a second object in an image frame of the target video segment by an image detection model; and, based on a distance between the location of the first object and the location of the second object in the image frame satisfying a preset condition, determining a time point of the image frame as the key time point of the video.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
Inventors: Tao Wu, Xu Yuan Xu, Guo Ping Gong
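The final step of this abstract — flagging a frame as a key time point when the distance between two detected objects satisfies a preset condition — can be sketched as follows. This is a hypothetical illustration, not the patented implementation; the function name, the Euclidean-distance condition, and the `fps` parameter are illustrative assumptions.

```python
def find_key_time_points(detections, threshold, fps=25.0):
    """Given per-frame ((x1, y1), (x2, y2)) locations of two objects,
    return timestamps (in seconds) of frames where the Euclidean
    distance between the objects falls below `threshold`."""
    key_points = []
    for frame_idx, ((x1, y1), (x2, y2)) in enumerate(detections):
        dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        if dist < threshold:  # the "preset condition" of the abstract
            key_points.append(frame_idx / fps)
    return key_points
```

In a sports video, for example, the two objects might be a ball and a goal, with close approach marking a candidate highlight.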
-
Publication number: 20200401809
Abstract: Data processing systems and methods are disclosed for combining video content with one or more augmentations to produce augmented video. Objects within video content may have associated bounding boxes that may each be associated with respective RGB values. Upon user selection of a pixel, the RGBA value of the pixel may be used to determine a bounding box associated with the RGBA value. The client may transmit an indicator of the determined bounding box to an augmentation system to request augmentation data for the object associated with the bounding box. The system then uses the indicator to determine the augmentation data and transmits the augmentation data to the client device.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su, Emil Dotchevski, Jason Kent Simon
-
Publication number: 20200401810
Abstract: A method for monitoring an object is provided. The method includes acquiring videos recorded by an imaging device of a vehicle and a position of the vehicle, storing the videos and the position into a database, and receiving a monitoring command from a terminal device, wherein the monitoring command comprises a dynamic monitoring command and/or a static monitoring command. The method can further obtain a search result by searching the database according to the dynamic monitoring command and/or the static monitoring command, and output the search result.
Type: Application
Filed: August 28, 2019
Publication date: December 24, 2020
Inventor: KUO-HUNG LIN
-
Publication number: 20200401811
Abstract: The present disclosure relates to systems and methods for target identification in a video. The method may include obtaining a video including a plurality of frames of video data. The method may further include sampling one or more frames from the plurality of frames, each pair of consecutive sampled frames being spaced apart by at least one frame of the plurality of frames of the video data. The method may further include identifying, from the one or more sampled frames, a reference frame of video data using an identification model. The method may still further include determining a start frame and an end frame including the target object.
Type: Application
Filed: September 3, 2020
Publication date: December 24, 2020
Applicant: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
Inventor: Guangda YU
-
Publication number: 20200401812
Abstract: This disclosure provides a method and a system for detecting and recognizing a target object in a real-time video. The method includes: determining whether a target object recognition result RX-1 of a previous frame of image of a current frame of image is the same as a target object recognition result RX-2 of a previous frame of image of the previous frame of image; performing target object position detection in the current frame of image by using a first-stage neural network to obtain a position range CX of a target object in the current frame of image when the two recognition results RX-1 and RX-2 are different, or determining a position range CX of a target object in the current frame of image according to a position range CX-1 of the target object in the previous frame of image when the two recognition results RX-1 and RX-2 are the same; and performing target object recognition in the current frame of image according to the position range CX by using a second-stage neural network.
Type: Application
Filed: September 4, 2020
Publication date: December 24, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Jun CHENG, Haibao SHANG, Feng LI, Haoyuan LI, Xiaoxiang ZUO
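The control flow described here — skip the first-stage position detector whenever the last two recognition results agree — can be sketched as a simple loop. This is a hypothetical illustration of the scheme, with the two neural networks stubbed out as callables; the function names are assumptions, not from the application.

```python
def track_and_recognize(frames, detect_position, recognize):
    """Two-stage scheme: when recognition results for the previous two
    frames agree, reuse the previous position range C_{X-1} instead of
    running the (more expensive) first-stage position detector."""
    results, positions = [], []
    for x, frame in enumerate(frames):
        stable = x >= 2 and results[x - 1] == results[x - 2]
        if stable:
            pos = positions[x - 1]          # C_X := C_{X-1}
        else:
            pos = detect_position(frame)    # first-stage network
        positions.append(pos)
        results.append(recognize(frame, pos))  # second-stage network
    return results
```

On a steady scene, the detector runs only for the first two frames; it re-engages whenever consecutive recognition results diverge.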
-
Publication number: 20200401813
Abstract: In a method for performing adaptive content classification of a video content item, frames of a video content item are analyzed at a sampling rate for a type of content, wherein the sampling rate dictates a frequency at which frames of the video content item are analyzed. Responsive to identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is increased. Responsive to not identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is decreased. It is determined whether the video content item includes the type of content based on the analysis of the frames.
Type: Application
Filed: June 18, 2020
Publication date: December 24, 2020
Applicant: Gfycat, Inc.
Inventors: Richard RABBAT, Ernestine FU
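The adaptive loop this abstract describes — sample densely after a hit, sparsely after a miss — can be sketched as follows. This is a hypothetical illustration, not the patented implementation; the stride-halving/doubling policy and all parameter names are assumptions, and the per-frame classifier is stubbed out as a predicate.

```python
def classify_video(frames, detect, initial_stride=8, min_stride=1, max_stride=64):
    """Scan `frames` at an adaptive stride. A higher sampling rate means
    a smaller stride between analyzed frames: halve the stride after a
    frame indicative of the content type, double it otherwise.
    Returns True if the content type was detected in any analyzed frame."""
    stride = initial_stride
    i = 0
    found = False
    while i < len(frames):
        if detect(frames[i]):
            found = True
            stride = max(min_stride, stride // 2)  # sample more densely
        else:
            stride = min(max_stride, stride * 2)   # sample more sparsely
        i += stride
    return found
```

The clamp between `min_stride` and `max_stride` keeps the scan from degenerating into analyzing every frame or skipping the rest of the video entirely.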
-
Publication number: 20200401814
Abstract: The present invention is directed to a system and method including a firearm detection device that operates silently to identify a firearm or a bullet stored in the barrel of the firearm. The system utilizes a camera as well as one or more characteristics of the individual carrying the firearm; once a firearm is scanned, the system may send an alert to the proper authorities. The system utilizes exemption tags to properly identify authorities or other entities that are not threats, and provides other utilities useful during an emergency situation.
Type: Application
Filed: September 2, 2020
Publication date: December 24, 2020
Inventor: Hugo Mauricio Salguero
-
Publication number: 20200401815
Abstract: A computer vision system includes a camera that captures a plurality of image frames in a target field. A user interface is coupled to the camera. The user interface is configured to perform accelerated parallel computations in real-time on the plurality of image frames acquired by the camera. The system provides identification of common movement pathways within a space.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventors: Mark Raymond Miller, Archana Ramachandran, Christopher C. Anderson
-
Publication number: 20200401816
Abstract: Operations may comprise obtaining a first point cloud from a map representing a region. The operations may also include obtaining a second point cloud from one or more sensors of a vehicle traveling through the region. In addition, the operations may include identifying one or more subsets of clusters of second points of the second point cloud. The operations may also include determining correspondences between first points of the first point cloud and cluster points of the one or more subsets of clusters of the second point cloud. Moreover, the operations may include identifying at least a cluster of the one or more subsets of clusters, the identified cluster having, with respect to first points of the first point cloud, a correspondence percentage that is less than a threshold value. The operations may also include adjusting the second point cloud based on the identified cluster.
Type: Application
Filed: June 24, 2020
Publication date: December 24, 2020
Inventor: Derik Schroeter
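The correspondence-percentage test in this abstract — flag a cluster of sensor points when too few of its points have a nearby map point — can be sketched with a brute-force nearest-neighbor check. This is a hypothetical illustration under assumed definitions (Euclidean distance within a `radius` counts as a correspondence); a real system would use a spatial index rather than the quadratic scan shown here.

```python
def low_correspondence_clusters(clusters, map_points, radius, threshold):
    """For each cluster (a list of (x, y, z) points), compute the fraction
    of its points that have a map point within `radius`, and return the
    clusters whose correspondence percentage is below `threshold` --
    e.g. dynamic objects absent from the map."""
    def has_match(p):
        px, py, pz = p
        return any(
            (px - mx) ** 2 + (py - my) ** 2 + (pz - mz) ** 2 <= radius ** 2
            for mx, my, mz in map_points
        )
    flagged = []
    for cluster in clusters:
        pct = sum(1 for p in cluster if has_match(p)) / len(cluster)
        if pct < threshold:
            flagged.append(cluster)
    return flagged
```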
-
Publication number: 20200401817
Abstract: According to one or more embodiments, operations may comprise obtaining a first point cloud. The operations also comprise performing segmentation of the first point cloud, the segmentation generating one or more clusters of points of the point cloud. The operations also comprise determining, for each respective cluster of the plurality of clusters, a respective geometric feature of a corresponding object that corresponds to the respective cluster. The operations also comprise obtaining a second point cloud. The operations also comprise assigning a plurality of weights, which comprises assigning a respective weight to each respective cluster based on the respective geometric feature that corresponds to the respective cluster. The operations also comprise aligning the first point cloud with the second point cloud based on the plurality of weights.
Type: Application
Filed: June 24, 2020
Publication date: December 24, 2020
Inventor: Derik Schroeter
-
Publication number: 20200401818
Abstract: A camera monitoring system is adapted for use in vehicles, and includes an image capturing means, a control unit, and at least one display device. The image capturing means is configured to capture an image from an external environment, and is associated with an exterior rear-view mirror of the vehicle. The unit is connected to the capturing means, and is configured to select an image region from the captured image. The image region is smaller than the captured image and is movable within the captured image. The device may be a touch screen, and the system may include a control surface in the touch screen configured to move via touches performed by a driver. The image region is displayed in the touch screen, or in another, additional, screen of the device. The system may further include a gesture detector for the detection of driver gestures associated with the image region.
Type: Application
Filed: June 24, 2020
Publication date: December 24, 2020
Inventors: Lluís Gibert Castroverde, Natalia Canosa Perez, Noelia Rodriguez Ibañez, Brenda Meza García, Jordi Tenas Martínez, Xavier Biosca Yuste, Daniel Guerra Fagundez
-
Publication number: 20200401819
Abstract: This disclosure relates to systems and methods of obtaining accurate motion and orientation estimates for a vehicle traveling at high speed based on images of a road surface. A purpose of these systems and methods is to provide a supplementary or alternative means of locating a vehicle on a map, particularly in cases where other locationing approaches (e.g., GPS) are unreliable or unavailable.
Type: Application
Filed: June 24, 2020
Publication date: December 24, 2020
Inventors: Erik Volkerink, Ajay Khoche
-
Publication number: 20200401820
Abstract: Methods, systems, and apparatus for a detection system. The detection system includes a memory configured to store image data and a first camera configured to capture first image data including a first set of objects in the environment when the vehicle is moving. The electronic control unit is configured to obtain, from the first camera, the first image data including the first set of objects, determine a motion of an object of the first set of objects based on the first image data, and determine that the motion of the object will be different than a baseline motion for the object. The electronic control unit is configured to record and capture, in the memory and using the camera, the first image data for a time period before and after the determination that the motion of the object will be different than the baseline motion.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Katsumi Nagata, Shojiro Takeuchi
-
Publication number: 20200401821
Abstract: An operating assistance method for a working device or for a vehicle. Object boxes for an object in a field of view of the working device are obtained at consecutive times. From object boxes of a given object, for images recorded in succession or direct succession, an instantaneous change of scale of an object box for the specific object, and an instantaneous lateral change of position of the object box for the specific object, are determined. An object box predicted in the future is determined from the current change of scale, and from the instantaneous lateral change in position, for an object box for a specific object. The position of the predicted object box and/or the ratio of a lateral extension of the predicted object box to a lateral extension of a covered field of view and/or of the recorded images are determined and evaluated.
Type: Application
Filed: April 11, 2019
Publication date: December 24, 2020
Inventors: Jens Roeder, Michael Kessler, Patrick Koegel, Steffen Brueggert
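The prediction step this abstract describes — extrapolating an object box forward from its instantaneous change of scale and lateral change of position — can be sketched as follows. This is a hypothetical linear/geometric extrapolation under an assumed (cx, cy, w, h) box representation; the application does not specify this exact update rule.

```python
def predict_box(prev_box, curr_box, steps=1):
    """Extrapolate an object box (center_x, center_y, width, height)
    `steps` frames into the future, repeating the instantaneous change
    of scale and the instantaneous lateral change of position observed
    between the previous and current boxes."""
    (pcx, pcy, pw, ph), (cx, cy, w, h) = prev_box, curr_box
    scale = w / pw if pw else 1.0      # instantaneous change of scale
    dx, dy = cx - pcx, cy - pcy        # instantaneous lateral change of position
    for _ in range(steps):
        cx, cy = cx + dx, cy + dy
        w, h = w * scale, h * scale
    return (cx, cy, w, h)
```

A growing predicted box whose lateral extension approaches that of the field of view would then indicate a closing object, as the evaluation step suggests.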
-
Publication number: 20200401822Abstract: Detection of three dimensional obstacles using a system mountable in a host vehicle including a camera connectible to a processor. Multiple image frames are captured in the field of view of the camera. In the image frames, an imaged feature is detected of an object in the environment of the vehicle. The image frames are partitioned locally around the imaged feature to produce imaged portions of the image frames including the imaged feature. The image frames are processed to compute a depth map locally around the detected imaged feature in the image portions. Responsive to the depth map, it is determined if the object is an obstacle to the motion of the vehicle.Type: ApplicationFiled: September 3, 2020Publication date: December 24, 2020Inventors: Oded BERBERIAN, Gideon STEIN
-
Publication number: 20200401823Abstract: According to an aspect of an embodiment, operations may comprise receiving a point cloud representing a region. The operations may also comprise identifying a cluster of points in the point cloud having a higher intensity than points outside the cluster of points. The operations may also comprise determining a bounding box around the cluster of points. The operations may also comprise identifying a traffic sign within the bounding box. The operations may also comprise projecting the bounding box to coordinates of an image of the region captured by a camera. The operations may also comprise employing a deep learning model to classify a traffic sign type of the traffic sign in a portion of the image within the projected bounding box. The operations may also comprise storing information regarding the traffic sign and the traffic sign type in a high definition (HD) map of the region.Type: ApplicationFiled: June 18, 2020Publication date: December 24, 2020Inventors: Derek Thomas Miller, Yu Zhang, Lin Yang
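The projection step in the abstract above, mapping a point-cloud bounding box into camera image coordinates, can be sketched with a plain pinhole model. This is an illustration only, not the disclosed system; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the corner coordinates are assumed values, and the points are assumed to already be in the camera frame.

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Project a 3D point in the camera frame onto the image plane
    using a simple pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    assert Z > 0, "point must be in front of the camera"
    return fx * X / Z + cx, fy * Y / Z + cy

def project_box(corners, fx, fy, cx, cy):
    """Project 3D bounding-box corners and return the enclosing 2D box
    (u_min, v_min, u_max, v_max) in pixel coordinates."""
    uv = [project_point(X, Y, Z, fx, fy, cx, cy) for X, Y, Z in corners]
    us = [u for u, _ in uv]
    vs = [v for _, v in uv]
    return min(us), min(vs), max(us), max(vs)

# Two opposite corners of a 1 m x 1 m sign bounding box, 10 m ahead:
box2d = project_box([(-0.5, -0.5, 10.0), (0.5, 0.5, 10.0)],
                    fx=1000, fy=1000, cx=640, cy=360)
```

The resulting 2D box is what a classifier could then be applied to, as the abstract describes for the traffic-sign crop.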
-
Publication number: 20200401824Abstract: Camera image information includes an image that is imaged by a camera installed on a vehicle. Lamp pattern information, which is information on a traffic signal having plural lamp parts, indicates a relative positional relationship between the plural lamp parts and an appearance of each lamp part when lighted. A system detects a subject traffic signal around the vehicle based on the camera image information to acquire traffic signal detection information that indicates at least an appearance of each of plural detected parts of the subject traffic signal. The system compares the traffic signal detection information with the lamp pattern information. The system recognizes a lighting state of the plural lamp parts that is consistent with the appearance of each of the plural detected parts of the subject traffic signal as a lighting state of the subject traffic signal.Type: ApplicationFiled: May 15, 2020Publication date: December 24, 2020Inventors: Yusuke Hayashi, Taichi Kawanai, Kentaro Ichikawa
-
Publication number: 20200401825Abstract: An object detection device includes: a camera ECU that measures the bearing of an object by detecting the object from an image captured by a camera and identifying the direction toward the object; a sonar that measures the distance to the object existing around the vehicle; and a position identification unit that identifies the position of the object by combining the measured bearing data and distance data on a grid map that is based on a polar coordinate system and identifying the grid cell where the object exists on the grid map.Type: ApplicationFiled: September 4, 2020Publication date: December 24, 2020Inventors: Makoto OHKADO, Naohiro FUJIWARA
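The fusion step above, combining a camera bearing with a sonar distance on a polar-coordinate grid map, can be sketched as a simple binning function. This is a hedged illustration, not the patented device: the grid resolutions (`bearing_res_deg`, `range_res_m`) are assumed values, and the real system would accumulate evidence per cell rather than return a single index.

```python
def polar_grid_cell(bearing_deg, distance_m,
                    bearing_res_deg=5.0, range_res_m=0.5):
    """Map a (bearing, distance) measurement to a cell index on a
    polar-coordinate grid: one axis bins bearing, the other bins range.
    The camera supplies the bearing; the sonar supplies the distance."""
    bearing_idx = int(bearing_deg // bearing_res_deg)
    range_idx = int(distance_m // range_res_m)
    return bearing_idx, range_idx

# A camera bearing of 12.3 degrees fused with a sonar range of 1.8 m:
cell = polar_grid_cell(12.3, 1.8)
```

Because the two sensors natively measure angle and range, a polar grid lets each measurement constrain exactly one grid axis, which is the apparent motivation for the polar layout in the abstract.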
-
Publication number: 20200401826Abstract: Determining that a motor vehicle driver is using a mobile device while driving a motor vehicle. Multiple images of a driver of a motor vehicle are captured through a side window of the motor vehicle. Positive images show a driver using a mobile device while driving a motor vehicle. Negative images show a driver not using a mobile device while driving a motor vehicle. Multiple training images are selected from both the positive images and the negative images. The selected training images and respective labels, indicating that the selected training images are positive images or negative images, are input to a machine (e.g., a Convolutional Neural Network (CNN)). The CNN is trained to classify that a test image, captured through a side window of a motor vehicle, shows a driver using a mobile device while driving the motor vehicle.Type: ApplicationFiled: June 21, 2020Publication date: December 24, 2020Inventors: Jonathan Devor, Moshe Mikhael Frolov, Igal Muchnik, Herbert Zlotogorski
-
Publication number: 20200401827Abstract: A method includes receiving a first image and a second image, wherein the first and second images represent first and second relative locations, respectively, of an image acquisition device with respect to a subject. The method also includes determining, using the first and second images, a total relative displacement of the subject with respect to the image acquisition device between a time of capture of the first image and a time of capture of the second image, and determining, based on sensor data associated with one or more sensors associated with the image acquisition device, a component of the total relative displacement associated with a motion of the image acquisition device. The method also includes determining, based on a difference between the total relative displacement and the component, that the subject is an alternative representation of a live person, and in response, preventing access to a secure system.Type: ApplicationFiled: June 21, 2019Publication date: December 24, 2020Inventors: Gregory Lee Storm, Reza R. Derakhshani
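The core arithmetic of the liveness check above can be sketched as a displacement decomposition. This is a hedged toy model under assumed conditions, not the patented method: displacements are represented as 2D pixel vectors, the device-motion component is assumed to come from the device's sensors, and the threshold `tol` is an invented parameter. The intuition is that a flat 2D rendering moves rigidly with the device, so subtracting the device-motion component leaves a near-zero residual, whereas a live 3D subject produces parallax.

```python
def residual_subject_motion(total_displacement, device_component):
    """Subtract the displacement attributable to the device's own motion
    (estimated from its sensors) from the total displacement observed
    between the two images; the remainder is apparent subject motion."""
    return tuple(t - d for t, d in zip(total_displacement, device_component))

def looks_like_2d_rendering(total_displacement, device_component, tol=0.01):
    """A flat rendering tracks the device exactly, so the residual stays
    near zero; a live 3D subject shows parallax above `tol`."""
    residual = residual_subject_motion(total_displacement, device_component)
    return all(abs(r) <= tol for r in residual)

# Observed displacement fully explained by device motion -> likely a 2D spoof:
spoof = looks_like_2d_rendering((4.0, -1.5), (4.0, -1.5))
```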
-
Publication number: 20200401828Abstract: An information processor including a sensor unit that recognizes an operating tool that comes into contact with a display unit, and a control unit that shows a guidance display for guiding an operation of the operating tool on the display unit.Type: ApplicationFiled: December 4, 2018Publication date: December 24, 2020Applicant: SONY CORPORATIONInventors: QiHong WANG, Masatomo KURATA, Hiroyuki SHIGEI, Sota MATSUZAWA, Naoko KOBAYASHI, Makoto SATO
-
Publication number: 20200401829Abstract: A picture processing method is provided for a computer device. The method includes obtaining a to-be-processed picture; extracting a text feature in the to-be-processed picture using a machine learning model; and determining text box proposals at arbitrary angles in the to-be-processed picture according to the text feature. Corresponding subtasks are performed by using processing units corresponding to substructures in the machine learning model, and at least part of the processing units comprise a field-programmable gate array (FPGA) unit. The method also includes performing rotation region of interest (RROI) pooling processing on each text box proposal, and projecting the text box proposal onto a feature graph of a fixed size, to obtain a text box feature graph corresponding to the text box proposal; and recognizing text in the text box feature graph, to obtain a text recognition result.Type: ApplicationFiled: September 2, 2020Publication date: December 24, 2020Inventor: Yao XIN
-
Publication number: 20200401830Abstract: A method for controlling an image acquisition component includes: when a terminal including a movable image acquisition component receives an activating instruction for the image acquisition component, data acquired by a Proximity Sensor (P-sensor) is obtained, the image acquisition component being capable of moving in and out of the terminal under driving of a driving component of the terminal; it is determined whether there is an obstacle in a preset range of the terminal based on the acquired data; and when there is an obstacle in the preset range of the terminal, execution of the activating instruction is forbidden, and the image acquisition component is kept at a first position in the terminal.Type: ApplicationFiled: November 16, 2019Publication date: December 24, 2020Applicant: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.Inventor: Nengjin ZHU
-
Publication number: 20200401831Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to make a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object, and identify in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.Type: ApplicationFiled: September 4, 2020Publication date: December 24, 2020Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
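The label-to-region lookup described above can be sketched in a few lines. This is only an illustration of the idea, not the disclosed program: the metadata schema (`"objects"`, `"tags"`, `"bbox"`) is an assumed format, and a real system would go on to refine the bounding box down to object pixels.

```python
def select_by_label(label, metadata):
    """Look up an object's bounding box by a label or tag in the image
    metadata, instead of requiring a coordinate- or pixel-based selection.
    Returns the bbox as (x, y, width, height), or None if no tag matches."""
    for obj in metadata["objects"]:
        if label in obj["tags"]:
            return obj["bbox"]
    return None

# Assumed metadata describing two tagged objects in an image:
metadata = {"objects": [
    {"tags": ["dog", "animal"], "bbox": (40, 60, 200, 150)},
    {"tags": ["ball"], "bbox": (300, 220, 50, 50)},
]}
bbox = select_by_label("dog", metadata)
```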
-
Publication number: 20200401832Abstract: A computer-implemented method and system for selecting one or more regions of interest (ROIs) in an image. The method comprises: identifying one or more objects of interest that have been segmented from the image; identifying predefined landmarks of the objects; determining reference morphometrics pertaining to the objects by performing morphometrics on the objects by reference to the landmarks; selecting one or more ROIs from the objects according to the reference morphometrics, comprising identifying the location of the ROIs relative to the reference morphometrics; and outputting the selected one or more ROIs.Type: ApplicationFiled: June 21, 2019Publication date: December 24, 2020Inventor: Yu PENG
-
Publication number: 20200401833Abstract: A system for detecting license plates is described. The system receives raw data comprising images of license plates. A base version of a ground truth is prepared based on the raw data, using a generic license plate detection (LPD). The system prepares input data for training a deep learning network. The deep learning network is trained with the prepared input data. A newly trained generic LPD is formed using data generated by the existing generic LPD.Type: ApplicationFiled: June 18, 2019Publication date: December 24, 2020Inventors: Ilya Popov, Krishna Khadloya, Sofiya Klyan
-
Publication number: 20200401834Abstract: The present invention discloses a system for automated vehicle license plate character segmentation and recognition comprising an imaging processor connected to at least one image grabber module or camera. The image grabber module captures images of the vehicles and forwards them to said connected imaging processor, and the imaging processor segments and recognizes the license plate character region, including regions with deformed license plate characters, in the captured vehicle images by involving binarization of maximally stable extremal regions (MSER) corresponding to the probable license plate region in the captured vehicle images.Type: ApplicationFiled: February 25, 2019Publication date: December 24, 2020Applicant: VIDEONETICS TECHNOLOGY PRIVATE LIMITEDInventors: Sudeb DAS, Apurba GORAI, Tinku ACHARYA
-
Publication number: 20200401835Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating semantic scene graphs for digital images using an external knowledgebase for feature refinement. For example, the disclosed system can determine object proposals and subgraph proposals for a digital image to indicate candidate relationships between objects in the digital image. The disclosed system can then extract relationships from an external knowledgebase for refining features of the object proposals and the subgraph proposals. Additionally, the disclosed system can generate a semantic scene graph for the digital image based on the refined features of the object/subgraph proposals. Furthermore, the disclosed system can update/train a semantic scene graph generation network based on the generated semantic scene graph. The disclosed system can also reconstruct the image using object labels based on the refined features to further update/train the semantic scene graph generation network.Type: ApplicationFiled: June 21, 2019Publication date: December 24, 2020Inventors: Handong Zhao, Zhe Lin, Sheng Li, Mingyang Ling, Jiuxiang Gu