Motion Or Velocity Measuring Patents (Class 382/107)
  • Patent number: 11175366
    Abstract: A method for acquiring magnetic resonance imaging data with respiratory motion compensation using one or more motion signals includes acquiring a plurality of gradient-delay-corrected radial readout views of a subject using a free-breathing multi-echo pulse sequence, and sampling a plurality of data points of the gradient-delay-corrected radial readout views to yield a self-gating signal. The self-gating signal is used to determine a plurality of respiratory motion states corresponding to the plurality of gradient-delay-corrected radial readout views. The respiratory motion states are used to correct respiratory motion bias in the gradient-delay-corrected radial readout views, thereby yielding gradient-delay-corrected and motion-compensated multi-echo data. One or more images are reconstructed using the gradient-delay-corrected and motion-compensated multi-echo data.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: November 16, 2021
    Assignees: Siemens Healthcare GmbH, The Regents of the University of California
    Inventors: Xiaodong Zhong, Holden H. Wu, Vibhas S. Deshpande, Tess Armstrong, Li Pan, Marcel Dominik Nickel, Stephan Kannengiesser
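A minimal illustrative sketch of the self-gating idea described in patent 11175366 above: derive a respiratory waveform from the k-space centre sample of each radial readout view and bin the views into motion states. This is not the patented method; the array shapes, the coil-combination step, and the 0.1-0.5 Hz respiratory band are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bin_views_by_respiration(center_samples, tr_seconds, n_states=4):
    """center_samples: complex array, shape (n_views, n_coils) - the k-space
    centre of every gradient-delay-corrected radial view (assumed layout)."""
    # Combine coils by taking the dominant principal component of the magnitudes.
    mag = np.abs(center_samples)
    mag = mag - mag.mean(axis=0)
    _, _, vt = np.linalg.svd(mag, full_matrices=False)
    sg = mag @ vt[0]                      # raw self-gating waveform, one value per view

    # Band-pass around typical respiratory frequencies (assumed 0.1-0.5 Hz).
    fs = 1.0 / tr_seconds
    b, a = butter(3, [0.1 / (fs / 2), 0.5 / (fs / 2)], btype="band")
    sg = filtfilt(b, a, sg)

    # Amplitude binning of the waveform into respiratory motion states.
    edges = np.quantile(sg, np.linspace(0, 1, n_states + 1))
    states = np.clip(np.digitize(sg, edges[1:-1]), 0, n_states - 1)
    return sg, states                     # motion-state index for every radial view
```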
  • Patent number: 11150750
    Abstract: An electronic pen main body unit of an electronic pen having a function of a fountain pen includes an ink writing unit in which a cartridge housing liquid ink is fitted to a rear end portion of a pen core, and a pen body is disposed so as to be superposed on the pen core in a direction orthogonal to a coupling direction of the pen core and the cartridge, and an interaction circuit having an electronic part which, in operation, exchanges a signal with a tablet. The interaction circuit is disposed on a side of the pen core opposite the pen body in the direction orthogonal to the coupling direction of the pen core and the cartridge in a state in which the interaction circuit recedes to the cartridge side from a writing end of the pen body in the coupling direction of the pen core and the cartridge.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: October 19, 2021
    Assignee: Wacom Co., Ltd.
    Inventors: Kohei Tanaka, Kenichi Ninomiya, Takenori Kaneda, Toshihiko Horie
  • Patent number: 11146661
    Abstract: An endpoint system including one or more computing devices receives user input associated with a first avatar in a shared virtual environment; calculates, based on the user input, motion for a portion of the first avatar, such as a hand; determines, based on the user input, a first gesture state for the first avatar; transmits first location change notifications and a representation of the first gesture state for the first avatar; receives second location change notifications and a representation of a second gesture state for a second avatar; detects a collision between the first avatar and the second avatar based on the first location change notifications and the second location change notifications; and identifies a collaborative gesture based on the detected collision, the first gesture state, and the second gesture state.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: October 12, 2021
    Assignee: Rec Room Inc.
    Inventors: Nicholas Fajt, Cameron Brown, Dan Kroymann, Omer Bilal Orhan, Johnathan Bevis, Joshua Wehrly
  • Patent number: 11141239
    Abstract: A reprocessing apparatus for cleaning and/or disinfecting a medical instrument including a fluid container for a reprocessing fluid and a reprocessing device. The reprocessing device includes: a reprocessing space in which the medical instrument is introduced for reprocessing; a fluid line for connection to at least one channel of the medical instrument, wherein the fluid line is configured to transport the reprocessing fluid to the at least one channel; a bubble introducing apparatus for introducing gas bubbles into the fluid line; and a gas bubble speed determining apparatus for determining a speed of the gas bubbles in the fluid line. The gas bubble speed determining apparatus includes a camera for capturing successive images of at least a portion of the gas bubbles in the fluid line.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: October 12, 2021
    Assignee: OLYMPUS WINTER & IBE GMBH
    Inventors: Niklas Erdmann, Sascha Eschborn, Antonia Weis
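A minimal sketch of the measurement principle in patent 11141239 above: the gas-bubble speed follows from the displacement of the bubble centroid between two successive camera frames divided by the frame interval. This is an illustration, not the patented apparatus; the threshold and the pixels-to-millimetre scale are placeholder parameters.

```python
import cv2
import numpy as np

def bubble_speed_mm_per_s(frame_a, frame_b, dt_s, mm_per_px, thresh=200):
    """frame_a, frame_b: greyscale images of the same section of the fluid line,
    captured dt_s seconds apart."""
    def centroid(frame):
        _, mask = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None                   # no bright bubble region found
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    c_a, c_b = centroid(frame_a), centroid(frame_b)
    if c_a is None or c_b is None:
        return None
    displacement_px = np.linalg.norm(c_b - c_a)
    return displacement_px * mm_per_px / dt_s
```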
  • Patent number: 11140329
    Abstract: An image processing apparatus and an image processing method include obtaining status information of a terminal device, obtaining photographing scene information of the terminal device, determining an image processing mode based on the status information and the photographing scene information, obtaining a to-be-displayed image, and processing the to-be-displayed image based on the image processing mode.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: October 5, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Dai, Biying Hu, Yining Huang
  • Patent number: 11127146
    Abstract: The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known solution for formatting multi-view images that reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information brought by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer and thus fails to render viewpoints that uncover multiple-layer dis-occlusions. The invention uses light field content, which offers disparities in every direction and enables a change of viewpoint in a plurality of directions distinct from the viewing direction of the considered image, enabling the rendering of viewpoints that may uncover multiple-layer dis-occlusions, as can occur with complex scenes viewed with a wide inter-axial distance.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: September 21, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Didier Doyen, Guillaume Boisson, Sylvain Thiebaud
  • Patent number: 11127181
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, multiple items of user data, related to the sensing results of a user from multiple data sources, are obtained. Multiple first emotion decisions are determined, respectively, based on each item of user data. Whether an emotion collision occurs among the first emotion decisions is determined; an emotion collision means that the corresponding emotion groups of the first emotion decisions do not match each other. A second emotion decision is determined from one or more emotion groups according to the result of the emotion-collision determination. The first or second emotion decision is related to one emotion group. A facial expression of an avatar is generated based on the second emotion decision. Accordingly, a proper facial expression of the avatar can be presented.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: September 21, 2021
    Assignee: XRSPACE CO., LTD.
    Inventors: Feng-Seng Chu, Peter Chou
  • Patent number: 11119587
    Abstract: An image sensing system control method, comprising: (a) predicting a first velocity of the image sensor; (b) calculating a first time duration between a first frame time and a first polling time after the first frame time, wherein the image sensor captures a first frame at the first frame time and receives a first polling from the control circuit at the first polling time; and (c) calculating a first predicted motion delta of the first time duration according to the first velocity and the first time duration.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 14, 2021
    Assignee: PixArt Imaging Inc.
    Inventor: Shang Chan Kong
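A minimal sketch of the arithmetic described in patent 11119587 above: the predicted motion delta over the interval between a frame time and the following polling time is simply the predicted velocity multiplied by that duration. Function and variable names are illustrative.

```python
def predicted_motion_delta(velocity_px_per_s, frame_time_s, polling_time_s):
    duration = polling_time_s - frame_time_s   # the "first time duration"
    return velocity_px_per_s * duration        # the "first predicted motion delta"

# e.g. a predicted velocity of 120 px/s and a 4 ms frame-to-poll gap
# give a predicted motion delta of 0.48 px.
print(predicted_motion_delta(120.0, 0.000, 0.004))
```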
  • Patent number: 11113526
    Abstract: A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from training of the neural network to determine correlations to identify detected objects in future images.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: September 7, 2021
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kevin Stone, Krishna Shankar, Michael Laskey
  • Patent number: 11110343
    Abstract: An example method includes obtaining, from one or more sensors of a computing device, data relating to a feature in an environment. The method then includes analyzing the data to identify one or more details of the feature in the environment and determining, based on a comparison of the data to a stored dataset in a database, that the details include a detail that the stored dataset lacks. The method includes providing one or more game elements for gameplay on an interface of the computing device based on the details including the detail that the stored dataset lacks.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: September 7, 2021
    Assignee: Niantic, Inc.
    Inventors: Ryan Michael Hickman, Soohyun Bae
  • Patent number: 11116027
    Abstract: Provided is a method of storing information on a face of a passenger in a vehicle in association with a terminal of the passenger, and an electronic apparatus therefor. In the present disclosure, at least one of an electronic apparatus, a vehicle, a vehicle terminal, and an autonomous vehicle may be connected or converged with an artificial intelligence (AI) module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device associated with a 5G service, and the like.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: September 7, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Poram Kim, Hyunchul Choi, Inyeop Jang, Salkmann Ji, Hyunsu Choi, Sungmin You, Taegil Cho
  • Patent number: 11096630
    Abstract: A method for generating a movement signal of a body part, of which at least a portion is undergoing a cardiac movement, includes providing a pilot tone signal acquired from the body part by a magnetic resonance receiver coil arrangement. A demixing matrix is calculated from a calibration portion of the pilot tone signal using an independent component analysis algorithm. The independent component corresponding to the cardiac movement is selected. The demixing matrix is applied to further portions of the pilot tone signal to obtain a movement signal representing the cardiac movement. An adaptive, stochastic, or model-based filter is applied to the signal representing the cardiac movement to obtain a filtered movement signal.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 24, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Peter Speier, Mario Bacher
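A minimal sketch of the processing chain described in patent 11096630 above, under assumptions: the pilot tone signal is a multi-channel real array, FastICA stands in for the independent component analysis, the cardiac component is picked by band power, and a plain low-pass filter stands in for the adaptive/stochastic/model-based filter. This is not Siemens' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def cardiac_movement_signal(pilot_tone, fs, calib_samples=2000, n_components=4):
    """pilot_tone: array of shape (n_samples, n_channels), one channel per
    receive coil (assumed layout). fs: sampling rate in Hz."""
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(pilot_tone[:calib_samples])          # demixing matrix from the calibration portion

    sources = ica.transform(pilot_tone)          # apply the demixing to the full signal

    # Select the component with the most power in an assumed cardiac band (0.6-3 Hz).
    spectra = np.abs(np.fft.rfft(sources, axis=0)) ** 2
    freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fs)
    band = (freqs > 0.6) & (freqs < 3.0)
    cardiac = sources[:, np.argmax(spectra[band].sum(axis=0))]

    # Simple low-pass as a stand-in for the adaptive or model-based filter.
    b, a = butter(3, 5.0 / (fs / 2), btype="low")
    return filtfilt(b, a, cardiac)
```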
  • Patent number: 11093762
    Abstract: A method for validation of an obstacle candidate identified within a sequence of image frames comprises the following steps: A. for a current image frame of the sequence of image frames, determining within the current image frame a region of interest representing the obstacle candidate, dividing the region of interest into sub-regions, and, for each sub-region, determining a Time-To-Contact (TTC) based on at least the current image frame and a preceding or succeeding image frame of the sequence of image frames; B. determining one or more classification features based on the TTCs of the sub-regions determined for the current image frame; and C. classifying the obstacle candidate based on the determined one or more classification features.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: August 17, 2021
    Assignee: Aptiv Technologies Limited
    Inventors: Jan Siegemund, Christian Nunn
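A minimal illustration of the time-to-contact reasoning in patent 11093762 above, not Aptiv's implementation: for a sub-region whose image size grows by a factor s over dt seconds, TTC is approximately dt / (s - 1), and simple statistics over the per-sub-region TTCs can serve as classification features. How the scale ratios are measured is outside this sketch.

```python
import numpy as np

def ttc_from_scale(scale_ratio, dt_s, eps=1e-6):
    return dt_s / max(scale_ratio - 1.0, eps)    # large TTC when the patch barely grows

def classification_features(subregion_scales, dt_s):
    """subregion_scales: per-sub-region scale ratios between the current frame
    and a preceding frame."""
    ttcs = np.array([ttc_from_scale(s, dt_s) for s in subregion_scales])
    # A real obstacle tends to produce consistent, finite TTCs across sub-regions,
    # so central tendency and spread are plausible classification features.
    return {"median_ttc": float(np.median(ttcs)),
            "ttc_spread": float(np.std(ttcs)),
            "frac_imminent": float(np.mean(ttcs < 2.0))}   # 2 s is an assumed bound

print(classification_features([1.05, 1.04, 1.06, 1.05], dt_s=0.05))
```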
  • Patent number: 11080517
    Abstract: Face anti-counterfeiting detection methods and systems, electronic devices, and computer storage media include: obtaining an image or video to be detected containing a face; extracting a feature of the image or video to be detected, and detecting whether the extracted feature contains counterfeited face clue information; and determining whether the face passes the face anti-counterfeiting detection according to a detection result.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: August 3, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Liwei Wu, Tianpeng Bao, Meng Yu, Yinghui Che, Chenxu Zhao
  • Patent number: 11062452
    Abstract: An image processing apparatus is configured to acquire first registration information being registration information between a first image of interest, which is the first frame image in the frame image pair, and the first reference image; to acquire second registration information being registration information between a second image of interest, which is the second frame image in the frame image pair, and the second reference image; to acquire reference registration information being registration information between the first reference image and the second reference image; and to acquire third registration information being registration information between the first image of interest and the second image of interest, based on the first registration information, the second registration information and the reference registration information.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: July 13, 2021
    Assignees: Canon Kabushiki Kaisha, Canon Medical Systems Corporation
    Inventors: Toru Tanaka, Ryo Ishikawa
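A minimal sketch of how the third registration in patent 11062452 above can be composed from the other three, assuming a 3x3 homography model and the mapping directions stated in the comment (the patent does not fix a particular transform model):

```python
import numpy as np

def third_registration(A, B, C):
    """A: first image of interest -> first reference image,
    B: second image of interest -> second reference image,
    C: first reference image -> second reference image (assumed directions).
    Returns the transform from the first to the second image of interest."""
    return np.linalg.inv(B) @ C @ A

# Sanity check with pure translations: A shifts by (2, 0), C by (0, 3), B by (1, 1),
# so the interest-to-interest transform should shift by (2+0-1, 0+3-1) = (1, 2).
T = lambda tx, ty: np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)
print(third_registration(T(2, 0), T(1, 1), T(0, 3)))
```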
  • Patent number: 11048914
    Abstract: Face anti-counterfeiting detection methods and systems, electronic devices, and computer storage media include: obtaining an image or video to be detected containing a face; extracting a feature of the image or video to be detected, and detecting whether the extracted feature contains counterfeited face clue information; and determining whether the face passes the face anti-counterfeiting detection according to a detection result.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: June 29, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Liwei Wu, Tianpeng Bao, Meng Yu, Yinghui Che, Chenxu Zhao
  • Patent number: 11043037
    Abstract: A method to determine the dimensions and distance of a number of objects in an environment includes providing a number of objects including a marking element; recording a visual image-dataset of at least one of the objects with a camera; and determining a parameter value from the image of a marking element in the image-dataset or from a measurement of an additional sensor at the location of the camera. The parameter value is a value that depends on the distance of the object to the camera. The method further includes calculating the relative distance between the object and the camera based on the parameter value and calculating dimensions of the object from at least a part of the image of the object in the image-dataset and the calculated distance. A related device, a related system and a related control unit for a virtual reality system are also disclosed.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: June 22, 2021
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Anton Ebert
  • Patent number: 11037194
    Abstract: A favorable merging or grouping of the simply connected regions into which the array of information samples is sub-divided is coded with a reduced amount of data. To this end, a predetermined relative locational relationship is defined, enabling the identification, for a predetermined simply connected region, of the simply connected regions within the plurality of simply connected regions which have the predetermined relative locational relationship to the predetermined simply connected region. Namely, if the number of such regions is zero, a merge indicator for the predetermined simply connected region may be absent within the data stream. In other embodiments, spatial sub-division is performed depending on a first subset of syntax elements, followed by combining spatially neighboring simply connected regions depending on a second subset of syntax elements, to obtain an intermediate sub-division.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: June 15, 2021
    Assignee: GE Video Compression, LLC
    Inventors: Philipp Helle, Simon Oudin, Martin Winken, Detlev Marpe, Thomas Wiegand
  • Patent number: 11035663
    Abstract: Systems and related methods are disclosed for characterizing physical phenomena. In an embodiment, the system includes a frame defining an active volume, a camera configured to capture an image of the active volume, and a controller coupled to the camera. In an embodiment, the controller is configured to: track an object within the active volume via the camera, analyze a motion of the object within the active volume, and output a visual depiction of the object and one or more vectors characterizing the motion of the object on a display.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: June 15, 2021
    Assignee: The Texas A&M University System
    Inventors: Ricardo Eusebi, Jeffrey Breitschopf, David Andrew Overton, Brian Muldoon, Sifu Luo, Zhikun Xing
  • Patent number: 11037525
    Abstract: A display system with high display quality in which display unevenness is reduced is provided. The display system includes a processing unit and a display portion. The processing unit generates second image data by using first image data. The display portion displays an image on the basis of the second image data. The processing unit includes three layers. The first image data is supplied to the first layer. The first image data contains a plurality of pieces of data. The plurality of pieces of data each correspond to any one of the plurality of pixels. The first layer generates first arithmetic data by making the number of data corresponding to one pixel larger than the number of the first image data by using the first image data. The second layer generates second arithmetic data by multiplying the first arithmetic data by a weight coefficient.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: June 15, 2021
    Inventors: Masataka Shiokawa, Natsuko Takase, Hideaki Okamoto, Kensuke Yoshizumi, Daiki Nakamura
  • Patent number: 11017242
    Abstract: A traffic monitoring system includes a first car moving on a first path; a camera having a field of vision including at least a portion of the first path; and a computing system. The computing system receives a plurality of images from the camera. The computing system has a processor. When instructed, the processor performs circling a perimeter of the first car on each of the images with a first rectangle; composing a first set of points, each point of the first set of points representing a center of the first rectangle; finding a first centroid using the first set of points, wherein the first centroid represents the first path; and calculating a speed of the first car using the first centroid.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: May 25, 2021
    Assignee: Unisys Corporation
    Inventors: Kelsey L Bruso, Dayln Limesand, James Combs
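A minimal sketch of the speed estimate implied by patent 11017242 above, simplified: take the centre of the rectangle that circles the car in each image and convert the frame-to-frame displacement of those centres into a speed. The frame rate and metres-per-pixel scale are illustrative parameters, not values from the patent.

```python
import numpy as np

def car_speed_m_per_s(rect_centers_px, fps, m_per_px):
    """rect_centers_px: list of (x, y) rectangle centres, one per frame,
    in capture order."""
    pts = np.asarray(rect_centers_px, dtype=float)
    step_px = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # per-frame displacement
    return float(np.mean(step_px) * m_per_px * fps)

# e.g. a car whose rectangle centre moves about 8 px per frame at 30 fps with a
# 0.05 m/px ground scale moves at roughly 12 m/s.
print(car_speed_m_per_s([(100, 50), (108, 50), (116, 51)], fps=30, m_per_px=0.05))
```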
  • Patent number: 11006042
    Abstract: An imaging device includes a plurality of image sensors, and circuitry configured to determine whether a difference between average brightness values of first image data and second image data, both captured by a same image sensor, is equal to or greater than a first threshold. The second image data is captured at a timing later than capture of the first image data. The circuitry performs one of a) output of image data captured by a rest of the plurality of image sensors excluding the one of the plurality of image sensors and b) composition of the image data captured by the rest of the plurality of image sensors, in response to a determination that the difference in average brightness value is equal to or greater than the first threshold and the average brightness value of the second image data is equal to or smaller than a second threshold.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: May 11, 2021
    Assignee: Ricoh Company, Ltd.
    Inventors: Koji Takatsu, Susumu Fujioka
  • Patent number: 11003914
    Abstract: A system for monitoring and recording and processing an activity includes one or more cameras for automatically recording video of the activity. A processor and memory associated and in communication with the camera is disposed near the location of the activity. The system may include AI logic configured to identify a user recorded within a video frame captured by the camera. The system may also detect and identify a user when the user is located within a predetermined area. The system may include a video processing engine configured to process images within the video frame to identify the user and may modify and format the video upon identifying the user and the activity. The system may include a communication module to communicate formatted video to a remote video processing system, which may further process the video and enable access to a mobile app of the user.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: May 11, 2021
    Inventor: Kevin R. Imes
  • Patent number: 10991262
    Abstract: A simulation mapping system and method for determining a plurality of performance metric values in relation to a training activity performed by a user in an interactive computer simulation, the interactive computer simulation simulating a virtual element comprising a plurality of dynamic subsystems. A processor module obtains dynamic data related to the virtual element being simulated in an interactive computer simulation station comprising a tangible instrument module. The dynamic data captures actions performed by the user on tangible instruments. The processor module constructs a dataset corresponding to the plurality of performance metric values from the dynamic data having a target time step by synchronizing dynamic data and by inferring, for at least one dynamic subsystem of the plurality of dynamic subsystems missing from the dynamic data, a new set of data into the dataset from dynamic data associated with one or more co-related dynamic subsystems.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: April 27, 2021
    Assignee: CAE Inc.
    Inventors: Jean-François Delisle, Antoine Dufour, Marc-André Proulx, Dac Toan Ho
  • Patent number: 10973581
    Abstract: Disclosed are systems and methods for obtaining a structured light reconstruction using a hybrid spatio-temporal pattern sequence projected on a surface. The method includes projecting a structured light pattern, such as a binary de Bruijn sequence, onto a 3D surface and acquiring an image set of at least a portion of this projected sequence with a camera system, and projecting a binary edge detection pattern onto the portion of the surface and acquiring an image set of the same portion of the projected binary pattern. The acquired image set of the binary pattern is processed to determine edge locations therein, and then employed to identify the locations of pattern edges within the acquired image set of the structured light pattern. The detected edges of the structured light pattern images are employed to decode the structured light pattern and calculate a disparity map, which is used to reconstruct the 3D surface.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: April 13, 2021
    Assignee: 7D SURGICAL INC.
    Inventors: Adrian Mariampillai, Kenneth Kuei-Ching Lee, Michael Leung, Peter Siegler, Beau Anthony Standish, Victor X. D. Yang
  • Patent number: 10977804
    Abstract: In accordance with an embodiment, a method of detecting moving objects via a moving camera includes receiving a sequence of images from the moving camera; determining optical flow data from the sequence of images; decomposing the optical flow data into global motion related motion vectors and local object related motion vectors; calculating global motion parameters from the global motion related motion vectors; calculating motion-compensated vectors from the local object related motion vectors and the calculated global motion parameters; compensating the local object related motion vectors using the calculated global motion parameters; and clustering the compensated local object related motion vectors to generate a list of detected moving objects.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: April 13, 2021
    Assignee: STMICROELECTRONICS S.R.L.
    Inventors: Giuseppe Spampinato, Salvatore Curti, Arcangelo Ranieri Bruna
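A minimal sketch of the kind of pipeline described in patent 10977804 above, under assumptions and not STMicroelectronics' implementation: dense optical flow, a robust global (camera) motion fit, subtraction of the global motion to obtain compensated local vectors, and clustering of the remaining movers. Parameter values and the choice of Farneback flow, a partial affine model, and DBSCAN are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_moving_objects(prev_gray, curr_gray, stride=8, motion_thresh=2.0):
    # Dense optical flow between consecutive greyscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow[ys.ravel(), xs.ravel()]

    # Global motion as a partial affine model, fit robustly so local movers are outliers.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return []
    global_dst = src @ M[:, :2].T + M[:, 2]

    # Compensated local vectors: what remains after removing the global motion.
    local = dst - global_dst
    moving = np.linalg.norm(local, axis=1) > motion_thresh
    if not np.any(moving):
        return []

    # Cluster the strongly moving samples into candidate moving objects.
    labels = DBSCAN(eps=2 * stride, min_samples=5).fit_predict(src[moving])
    return [src[moving][labels == k] for k in sorted(set(labels)) if k != -1]
```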
  • Patent number: 10970935
    Abstract: A person who is not using a hybrid reality (HR) system communicates with the HR system without using a network communications link using a body pose. Data is received from a sensor and an individual is detected in the sensor data. A first situation of at least one body part of the individual in 3D space is ascertained at a first time and a body pose is determined based on the first situation of the at least one body part. An action is decided on based on the body pose and the action is performed on an HR system worn by a user.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: April 6, 2021
    Inventors: Anthony Mark Jones, Jessica A. F. Jones, Bruce A. Young
  • Patent number: 10970855
    Abstract: Provided are embodiments for a computer-implemented method. The method includes receiving a sequence of image data, transforming objects in each frame of the sequence of the image data into direction vectors, and clustering the direction vectors based at least in part on features of the objects. The method also includes mapping the direction vectors for the objects in each frame into a position-orientation data structure, and performing tracking using the mapped direction vectors in the position-orientation data structure. Also provided are embodiments of a computer program product and a system for performing object tracking.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 6, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Umar Asif, Jianbin Tang, Subhrajit Roy
  • Patent number: 10952658
    Abstract: An information processing method includes, by a computer: acquiring biological information on a first person; acquiring an image obtained by imaging the first person in synchronization with acquisition timing of the biological information; identifying person identification information for identifying the first person based on the image; storing, in a storage unit, the identified person identification information, the acquired biological information, and the acquired image in association with one another; acquiring the person identification information on the first person selected by a second person different from the first person, and state information indicating a state of the first person selected by the second person; and extracting, from the storage unit, the image associated with the acquired person identification information and the biological information corresponding to the acquired state information.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: March 23, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Masaru Yamaoka, Mikiko Matsuo
  • Patent number: 10956751
    Abstract: The present invention provides an external apparatus connected to an imaging apparatus over a network, the imaging apparatus including an imaging unit which captures an image of a vessel being a subject, the external apparatus including an obtaining unit which obtains image data including the vessel captured by the imaging unit, a display unit which displays the image data, an analyzing unit which extracts vessel estimation information regarding an arbitrary vessel included in the image data based on the image data, a receiving unit which receives vessel information based on a wireless communication from the vessel, and a comparing unit which compares the vessel estimation information and the vessel information, wherein, in a case where the vessel estimation information and the vessel information are not matched, the display unit displays a warning in addition to the image data.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: March 23, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Koji Shinohe
  • Patent number: 10942619
    Abstract: A system for interactive reality activity augmentation includes a sensing unit, digital projectors and a server. The sensing unit illuminates activity surfaces and captures one of a series of real-time images and co-ordinate information of the activity surfaces. The server includes a memory and a processor. The processor receives and processes one of the series of received real-time images and the co-ordinate information to detect presence, tracks trajectories and calculates individual co-ordinates of physical activity objects, virtual objects and users along the activity surfaces to detect interaction information, and feeds the calculated co-ordinates and the interaction information to a scheduler through an application program interface to manipulate one or more contents running inside the scheduler in response to the co-ordinate information and the interaction information.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: March 9, 2021
    Assignee: TOUCHMAGIX MEDIA PVT. LTD.
    Inventor: Anup Tapadia
  • Patent number: 10944921
    Abstract: Systems and methods are described for replacing a background portion of an image. An illustrative method includes receiving a first image, identifying a background portion of the first image and a subject portion of the first image, identifying a geographic location corresponding to the background portion of the first image, identifying a landmark associated with the geographic location of the object, retrieving a second image depicting the landmark, and generating for display a third image comprising the subject portion of the first image placed over the second image.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 9, 2021
    Assignee: ROVI GUIDES, INC.
    Inventors: Deviprasad Punja, Aditya Rautray
  • Patent number: 10936883
    Abstract: A road region detection method is provided. The method includes: obtaining a first image captured by a camera at a first time point and a second image captured by the camera at a second time point (S101), converting the first and second images into a first top view and a second top view, respectively (S103), obtaining a movement vector matrix which substantially represents movement of a road region relative to the camera between the first and second time points (S105), and determining whether a candidate point belongs to the road region by determining whether a position change of the candidate point between the first and second top views conforms to the movement vector matrix. The accuracy and efficiency may be improved.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: March 2, 2021
    Assignee: Harman International Industries, Incorporated
    Inventors: Wenming Zheng, Haitian Zhu, Zongcai Ruan, Yankun Zhang
  • Patent number: 10911737
    Abstract: Disclosed herein are primary and auxiliary image capture devices for image processing and related methods. According to an aspect, a method may include using primary and auxiliary image capture devices to perform image processing. The method may include using the primary image capture device to capture a first image of a scene, the first image having a first quality characteristic. Further, the method may include using the auxiliary image capture device to capture a second image of the scene. The second image may have a second quality characteristic. The second quality characteristic may be of lower quality than the first quality characteristic. The method may also include adjusting at least one parameter of one of the captured images to create a plurality of adjusted images for one of approximating and matching the first quality characteristic. Further, the method may include utilizing the adjusted images for image processing.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: February 2, 2021
    Assignee: 3DMedia Corporation
    Inventors: Bahram Dahi, Tassos Markas, Michael McNamer, Jon Boyette
  • Patent number: 10898804
    Abstract: The present disclosure relates to an image processing device and an image processing method for achieving an easy change of a display mode of each object in live-action content. The image processing device includes an image generation section that changes a display mode of each of objects within a display image on the basis of segment information that indicates a position of a segment in which each of the objects is present, the position of the segment being a position in each of a plurality of layer images that are images generated on the basis of a plurality of captured images and are images classified into a plurality of layers in accordance with distances of the images from a predetermined visual point. For example, the present disclosure is applicable to a display device or the like.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: January 26, 2021
    Assignee: SONY CORPORATION
    Inventor: Tooru Masuda
  • Patent number: 10902636
    Abstract: A method is provided for assisting a first user, equipped with an augmented reality observation device associated with a first user reference frame, in locating a target. According to this method, a reference platform associated with a master reference frame is positioned on the terrain, the reference platform is observed by at least one camera worn by the first user, the geometry of the observed platform is compared with a numerical model of the same, and the orientation and location of the first user reference frame are deduced with respect to the master reference frame. It is then possible to display, on the augmented reality observation device, at least one virtual reticle locating the target.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: January 26, 2021
    Assignee: NEXTER SYSTEMS
    Inventor: Philippe Bruneau
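A minimal sketch of one way to realise the pose-from-reference-platform step in patent 10902636 above, under assumptions and not NEXTER's implementation: match observed 2D image points of the platform against its known 3D model with a perspective-n-point solver, then project the target's master-frame coordinates into the user's view to place the virtual reticle. An undistorted camera is assumed.

```python
import cv2
import numpy as np

def reticle_pixel(platform_3d_pts, platform_2d_pts, camera_matrix,
                  target_in_master_frame):
    """platform_3d_pts: Nx3 model points of the reference platform (master frame).
    platform_2d_pts: Nx2 corresponding detections in the user's camera image."""
    dist = np.zeros(5)                               # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(platform_3d_pts.astype(np.float32),
                                  platform_2d_pts.astype(np.float32),
                                  camera_matrix, dist)
    if not ok:
        return None
    # Project the target, known in the master frame, into the user's image.
    img_pts, _ = cv2.projectPoints(
        target_in_master_frame.reshape(1, 3).astype(np.float32),
        rvec, tvec, camera_matrix, dist)
    return img_pts.ravel()                           # (u, v) where the reticle is drawn
```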
  • Patent number: 10897621
    Abstract: A moving image encoding apparatus comprises a detection unit configured to detect motion information in units of blocks from a moving image; a determination unit configured to determine a region of interest in the moving image based on a first region determined through processing for detecting an object from an image, and the motion information; a control unit configured to perform control such that a quantized value of a block determined as being the region of interest is set to a value lower than a quantized value of a block determined as not being the region of interest; and an encoding unit configured to perform compression encoding on the moving image based on the quantized value set by the control unit.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: January 19, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Shogo Yamasaki
  • Patent number: 10891727
    Abstract: An automated airfield ground lighting inspection system and method is disclosed. An image acquisition means captures image streams of the airfield ground lighting system lights when moved across an airfield. A location sensor detects positional information for the image acquisition means when capturing the plurality of images comprising the image streams. An image processor coupled to the image acquisition means and the location sensor processes the image stream of a light of the airfield ground lighting system by: (a) associating characteristics of a plurality of points in an image with an item in the light to be checked, and using this association for extraction of the points; (b) verifying each extracted point; and (c) determining the state of the light of the image stream by processing the verified extracted points comprising an item to be checked.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: January 12, 2021
    Assignees: Airport Authority, D2V Limited
    Inventors: Tak Kit Lau, Kai Wun Lin, Pong Mau Ng, Kai To Wong
  • Patent number: 10891734
    Abstract: An information processing apparatus, an information processing method, and a cell analysis system are provided. The information processing apparatus includes a processor configured to: determine a frequency feature value based on motion data from an image of a cell, and control displaying information associated with the frequency feature value, wherein the frequency feature value includes a power spectral density for each time range and each frequency band, and wherein the information associated with the frequency feature value is displayed in association with the each time range and the each frequency band.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: January 12, 2021
    Assignee: Sony Corporation
    Inventors: Shiori Oshima, Kazuhiro Nakagawa, Eriko Matsui
  • Patent number: 10891864
    Abstract: Disclosed herein is an obstacle warning method for a vehicle, which includes detecting a first obstacle through a laser sensor, identifying a location of an adjacent vehicle, determining a blind spot of the adjacent vehicle due to the first obstacle based on the location of the adjacent vehicle, detecting a second obstacle involved in the blind spot through the laser sensor, and transmitting a danger message to the adjacent vehicle. A vehicle to which the disclosure is applied may be connected to any artificial intelligence (AI) module, a drone, an unmanned aerial vehicle, a robot, an augmented reality (AR) module, a virtual reality (VR) module, a 5th generation (5G) mobile communication device, and so on.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: January 12, 2021
    Assignee: LG Electronics Inc.
    Inventors: So-Young Kim, Jung Yong Lee, Sangkyeong Jeong
  • Patent number: 10885617
    Abstract: An image analysis system includes a plurality of cameras. The cameras are configured for taking images. The image analysis system further includes at least one server. The server includes a first obtaining module, a second obtaining module, a filter module, and a storage module. The first obtaining module is configured for obtaining the moving track of the target object. The second obtaining module is configured for obtaining the images taken by the cameras which the target object has passed according to the moving track. The filter module is configured for extracting images containing the target object from the obtained images according to pre-stored specific image features of the target object. The storage module is configured for storing the extracted images that contain the target object. An image analysis method and a server are also provided.
    Type: Grant
    Filed: May 6, 2018
    Date of Patent: January 5, 2021
    Assignee: Chiun Mai Communication Systems, Inc.
    Inventors: Kuang-Hui Wu, Huang-Mu Chen
  • Patent number: 10884382
    Abstract: A sensor and/or system controller may process an image multiple times at multiple resolutions to detect glare conditions. A glare condition threshold used to determine whether a glare condition exists may be based on the resolution of the image. When the resolution of the image is higher, the glare condition threshold may be higher. The sensor and/or system controller may organize one or more adjacent pixels having similar intensities into pixel groups. The pixel groups may vary in size and/or shape. The sensor and/or system controller may determine a representative group luminance for the pixel group (e.g., an average luminance of the pixels in the group). The sensor and/or system controller may determine a group glare condition threshold, which may be used to determine whether a glare condition exists for the group of pixels and/or may be based on the size of the group.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: January 5, 2021
    Assignee: Lutron Technology Company LLC
    Inventors: Craig Alan Casey, Brent Protzman
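A minimal illustration of the multi-resolution glare check described in patent 10884382 above, not Lutron's implementation: the same luminance image is evaluated at several resolutions, each pixel group is represented by its average luminance, and the glare threshold is raised at higher resolutions. The threshold values and the square-root scaling rule are assumptions.

```python
import numpy as np

def downsample(img, factor):
    """Block-average the image so each cell represents a pixel group."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def glare_detected(luminance, base_thresh=4000.0, factors=(8, 4, 2, 1)):
    """luminance: 2-D array of pixel luminance values (e.g. cd/m^2)."""
    for factor in factors:
        groups = downsample(luminance, factor)
        # Higher-resolution passes (smaller factor, smaller groups) use a higher
        # threshold, mirroring "when the resolution is higher, the threshold may be higher".
        thresh = base_thresh * (factors[0] / factor) ** 0.5
        if np.any(groups > thresh):
            return True, factor, float(groups.max())
    return False, None, float(luminance.max())
```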
  • Patent number: 10885778
    Abstract: A monitoring system includes a sensor system for capturing, at a capture instant, information relating to moving objects moving in a roadway infrastructure portion, and a control station comprising a display for displaying, at a display instant subsequent to the capture instant, a view of the roadway infrastructure portion on which an image of each moving object is visible. The monitoring system also includes at least one computer for deriving, from the captured information, a measured position and speed of each moving object at the capture instant, and another computer for deducing, from the measured position and speed, an estimated position of each moving object at the display instant. The display is configured to display in the view of the road infrastructure portion a virtual image of each moving object at its estimated position.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: January 5, 2021
    Assignee: Transdev Group
    Inventors: Cem Karaoguz, Jean-Christophe Smal, Kien-Cuong Nguyen, Alexis Beauvillain
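A minimal sketch of the extrapolation implied by patent 10885778 above (names and units are illustrative): the position shown at the display instant is the measured position advanced by the measured velocity over the capture-to-display latency.

```python
import numpy as np

def estimated_position(measured_pos, measured_vel, capture_instant, display_instant):
    """measured_pos, measured_vel: 2-D position (m) and velocity (m/s) at capture."""
    latency = display_instant - capture_instant
    return np.asarray(measured_pos) + np.asarray(measured_vel) * latency

# A vehicle measured at (10, 0) m moving at 15 m/s along x, displayed 200 ms
# later, is drawn at (13, 0) m.
print(estimated_position([10.0, 0.0], [15.0, 0.0], 0.0, 0.2))
```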
  • Patent number: 10881329
    Abstract: A motion display system includes: a detector that detects an acceleration of a body part of a test subject; an imager that generates a moving image by imaging a motion of the test subject; an identifier that is attached to the test subject to determine a position of the body part within the moving image; a superimposer that superimposes a motion trajectory of the body part generated based on the acceleration detected by the detector on a position of the body part determined based on the identifier within the moving image generated by the imager in synchronization with the moving image; and a display that displays the moving image on which the motion trajectory is superimposed by the superimposer.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: January 5, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yoshihiro Matsumura, Tomoharu Nakahara, Tomohiko Fujita
  • Patent number: 10876873
    Abstract: An optical flow sensing method includes: using an image sensor to capture images; using a first directional-invariant filter device upon at least one first block of the first image to process values of pixels of the at least one first block of the first image, to generate a first filtered block image; using the first directional-invariant filter device upon at least one first block of the second image to process values of pixels of the at least one first block of the second image, to generate a second filtered block image; comparing the filtered block images to calculate a correlation result; and estimating a motion vector according to a plurality of correlation results.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: December 29, 2020
    Assignee: PixArt Imaging Inc.
    Inventors: Hsin-Chia Chen, Sen-Huang Huang, Wei-Chung Wang, Chao-Chien Huang, Ting-Yang Chang, Chun-Wei Chen
  • Patent number: 10873749
    Abstract: A better rate distortion ratio is achieved by making interrelationships between coding parameters of different planes available for exploitation for the aim of redundancy reduction despite the additional overhead resulting from the need to signal the inter-plane prediction information to the decoder. In particular, the decision of whether or not to use inter-plane prediction may be made for a plurality of planes individually. Additionally or alternatively, the decision may be made on a block basis considering one secondary plane.
    Type: Grant
    Filed: October 11, 2012
    Date of Patent: December 22, 2020
    Assignee: GE Video Compression, LLC
    Inventors: Martin Winken, Heiner Kirchhoffer, Heiko Schwarz, Detlev Marpe, Thomas Wiegand
  • Patent number: 10872432
    Abstract: A disparity estimation device calculates, for each of first pixels of a first image and each of second pixels of a second image, a first census feature amount and a second census feature amount, calculates, for each of the first pixels, a first disparity value of the first pixel with integer accuracy, extracts, for each of the first pixels, reference pixels located in positions corresponding to the first disparity value and a near disparity value close to the first disparity value from the second pixels, calculates sub-pixel evaluation values based on the relationship between the pixel values of the first pixel and the neighboring pixel and the pixel values of each of the reference pixels and the neighboring pixel, and estimates a second disparity value of the first pixel with sub-pixel accuracy by equiangular fitting.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 22, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Takeo Azuma, Kunio Nobori, Satoshi Sato, Nobuhiko Wakai
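A much-reduced illustration of the ideas in patent 10872432 above, not Panasonic's pipeline: census-transform both images, find the integer disparity with the lowest Hamming cost for a single pixel, then refine it with the standard equiangular (V-shaped) fit delta = (C[d-1] - C[d+1]) / (2 * max(C[d+1] - C[d], C[d-1] - C[d])). Boundary handling and cost aggregation are omitted.

```python
import numpy as np

def census(img, y, x, r=3):
    """Census descriptor of the (2r+1)x(2r+1) window centred on (y, x):
    each neighbour is encoded as brighter-than-centre or not."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    return (patch > img[y, x]).ravel()

def subpixel_disparity(left, right, y, x, max_disp=64, r=3):
    """(y, x) must lie far enough from the image borders for the census window."""
    ref = census(left, y, x, r)
    n = min(max_disp, x - r + 1)                 # keep the right-image window in bounds
    costs = np.array([np.count_nonzero(ref != census(right, y, x - d, r))
                      for d in range(n)])        # Hamming cost per integer disparity
    d = int(np.argmin(costs))
    if d == 0 or d == n - 1:
        return float(d)                          # no neighbours for the sub-pixel fit
    c0, cm, cp = costs[d], costs[d - 1], costs[d + 1]
    denom = 2 * max(cp - c0, cm - c0)            # slope of the steeper arm of the "V"
    return float(d) if denom == 0 else d + (cm - cp) / denom
```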
  • Patent number: 10861158
    Abstract: The present application relates to a method for acquiring maximum principal strain or a maximum principal stress status of a vessel wall.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: December 8, 2020
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Hongjian Wang, Jieyan Ma, Yuan Ren
  • Patent number: 10860844
    Abstract: Techniques are provided for recognition of activity in a sequence of video image frames that include depth information. A methodology embodying the techniques includes segmenting each of the received image frames into a multiple windows and generating spatio-temporal image cells from groupings of windows from a selected sub-sequence of the frames. The method also includes calculating a four dimensional (4D) optical flow vector for each of the pixels of each of the image cells and calculating a three dimensional (3D) angular representation from each of the optical flow vectors. The method further includes generating a classification feature for each of the image cells based on a histogram of the 3D angular representations of the pixels in that image cell. The classification features are then provided to a recognition classifier configured to recognize the type of activity depicted in the video sequence, based on the generated classification features.
    Type: Grant
    Filed: June 2, 2016
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Shaopeng Tang, Anbang Yao, Yurong Chen
  • Patent number: 10861168
    Abstract: Methods, systems, and/or apparatuses are described for detecting relevant motion of objects of interest (e.g., persons and vehicles) in surveillance videos. As described herein, input data based on a plurality of captured images and/or video is received. The input data may then be pre-processed and used as an input into a convolution network that may, in some instances, have elements that perform both spatial-wise max pooling and temporal-wise max pooling. The convolution network may be used to generate a plurality of prediction results of relevant motion of the objects of interest.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: December 8, 2020
    Assignee: Comcast Cable Communications, LLC
    Inventors: Ruichi Yu, Hongcheng Wang
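A minimal sketch of a network with separate spatial-wise and temporal-wise max pooling stages, as mentioned in patent 10861168 above. This is an assumed toy architecture for illustration, not Comcast's model: spatial-wise pooling uses kernel (1, 2, 2) and temporal-wise pooling uses kernel (2, 1, 1).

```python
import torch
import torch.nn as nn

class RelevantMotionNet(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # spatial-wise max pooling
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),   # temporal-wise max pooling
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(32, 1))

    def forward(self, clip):                       # clip: (batch, C, T, H, W)
        return torch.sigmoid(self.head(self.features(clip)))

# e.g. a batch of two 8-frame 64x64 RGB clips -> two relevant-motion scores.
print(RelevantMotionNet()(torch.randn(2, 3, 8, 64, 64)).shape)
```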