Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11935224
    Abstract: The disclosure is directed to, among other things, systems and methods for troubleshooting equipment installations using machine learning. Particularly, the systems and methods described herein may be used to validate an installation of one or more devices (which may be referred to as “customer premises equipment (CPE)” herein as well) at a given location, such as a customer's home or a commercial establishment. As one non-limiting example, the one or more devices may be associated with a fiber optical network, and may include a modem and/or an optical network terminal (ONT). However, the one or more devices may include any other types of devices associated with any other types of networks as well.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: March 19, 2024
    Assignee: Cox Communications, Inc.
    Inventors: Monte Fowler, Lalit Bhatia, Jagan Arumugham
  • Patent number: 11935165
    Abstract: A method for proactively creating an image product includes capturing an image of an object in a first environment by a device, storing a library of personalized products each characterized by a product type, automatically recognizing the object in the image as having a product type associated with the library of personalized products, automatically creating a design for the personalized product of the product type using personalized content, automatically displaying the design of the personalized product of the product type incorporating the selected photo in the first environment on the device, and manufacturing a physical product based on the design of the personalized product.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: March 19, 2024
    Assignee: Shutterfly, LLC
    Inventors: Abhishek Kirankumar Sabbarwal, David Le, Ira Blas, Ryan Lee
  • Patent number: 11935199
    Abstract: A computer-implemented method includes receiving a two-dimensional image of a scene captured by a camera, recognizing one or more objects in the scene depicted in the two-dimensional image, and determining whether the one or more recognized objects have known real-world dimensions. The computer-implemented method further includes determining a depth of at least one recognized object having known real-world dimensions from the camera, and overlaying three-dimensional (3-D) augmented reality content over a display of the 2-D image of the scene, considering the depth of the at least one recognized object from the camera.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 19, 2024
    Assignee: GOOGLE LLC
    Inventors: Alexander James Faaborg, Shengzhi Wu
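A rough sense of the depth step this abstract describes: with an object of known real-world size, depth follows from the pinhole relation between physical size, pixel extent, and focal length. A minimal sketch under that assumption; the function and variable names are illustrative, not Google's implementation.

```python
def depth_from_known_size(focal_length_px: float,
                          real_height_m: float,
                          pixel_height: float) -> float:
    """Pinhole-camera depth estimate: Z = f * H / h.

    focal_length_px : focal length expressed in pixels
    real_height_m   : known real-world height of the recognized object (meters)
    pixel_height    : height of the object's bounding box in the image (pixels)
    """
    if pixel_height <= 0:
        raise ValueError("pixel_height must be positive")
    return focal_length_px * real_height_m / pixel_height


# Example: a 2.1 m tall doorway spanning 300 px with a 1400 px focal length
# sits roughly 9.8 m from the camera; AR content can then be placed at that depth.
print(depth_from_known_size(1400.0, 2.1, 300.0))
```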
  • Patent number: 11937018
    Abstract: A surveillance system having an interface to a camera network for video surveillance of a surveillance area. The camera network includes a plurality of cameras each for capturing a surveillance subarea. The cameras are designed to provide surveillance images of the surveillance subareas. The surveillance system also includes a surveillance device for re-identifying people in the surveillance images. The surveillance device includes a person detection module for detecting people and an object detection module for detecting objects. The surveillance device also includes an assignment module designed to assign at least one item of object information to a person, and an action detection module designed to detect an action of the person on one of the objects.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: March 19, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Gregor Blott, Jan Rexilius
  • Patent number: 11935258
    Abstract: A method for range detection is described. The method includes segmenting an image captured by a monocular camera of an ego vehicle into one or more segmentation blobs. The method includes focusing on pixels forming a selected segmentation blob of the one or more segmentation blobs. The method also includes determining a distance to the selected segmentation blob according to a focus function value of the monocular camera of the ego vehicle.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: March 19, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Alexander Russell Green
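The abstract above combines segmentation with a focus-function value to get range from a single camera. A minimal sketch, assuming a variance-of-Laplacian focus measure and a hypothetical focus-to-distance calibration table; the patent does not specify either choice.

```python
import numpy as np


def focus_measure(gray: np.ndarray, mask: np.ndarray) -> float:
    """Variance of a simple Laplacian, restricted to masked (blob) pixels."""
    lap = (-4 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return float(lap[mask].var())


def distance_from_focus(value: float, calibration: list) -> float:
    """Interpolate distance from a (focus_value, distance) calibration table."""
    values, distances = zip(*sorted(calibration))
    return float(np.interp(value, values, distances))


# Hypothetical use: blob mask from a segmentation network, calibration from bench tests.
gray = np.random.rand(480, 640)
mask = np.zeros_like(gray, dtype=bool)
mask[200:280, 300:380] = True
calib = [(0.01, 50.0), (0.05, 20.0), (0.20, 5.0)]
print(distance_from_focus(focus_measure(gray, mask), calib))
```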
  • Patent number: 11934571
    Abstract: A system, a head-mounted device, a computer program, a carrier, and a method for a head-mounted device comprising an eye tracking sensor, for updating an eye tracking model in relation to an eye are disclosed. First sensor data in relation to the eye are obtained by means of the eye tracking sensor. After obtaining the first sensor data, the eye tracking sensor is moved in relation to the eye. After moving the eye tracking sensor, second sensor data in relation to the eye are obtained by means of the eye tracking sensor. The eye tracking model in relation to the eye is then updated based on the first sensor data and the second sensor data.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: March 19, 2024
    Assignee: Tobii AB
    Inventors: Pravin Kumar Rana, Gerald Bianchi
  • Patent number: 11935173
    Abstract: Provided are a method and device for providing interactive virtual reality content capable of increasing user immersion by naturally connecting an idle image to a branched image. The method includes providing an idle image including options, wherein an actor in the idle image performs a standby operation; receiving, while the actor performs the standby operation, a user selection for an option; providing a connection image; and providing a corresponding branched image according to the selection of the user, wherein a portion of the actor in the connection image is processed by computer graphics, and the actor performs a connection operation so that a first posture of the actor at a time point at which the selection is received is smoothly connected to a second posture of the actor at a start time point of the branched image.
    Type: Grant
    Filed: January 4, 2023
    Date of Patent: March 19, 2024
    Assignee: VISION VR INC.
    Inventors: Dong Kyu Kim, Won-Il Kim
  • Patent number: 11934484
    Abstract: Systems and methods for facilitating computer-vision-based detection and identification of consumer products comprise a support surface for supporting thereon one or more consumer products to be detected and a plurality of digital cameras each arranged at a different height relative to the support surface and positioned such that a field of view of each of the digital cameras includes the consumer product(s) located on the support surface. Movement of the support surface and/or the digital cameras permits the digital cameras to capture images of the consumer product from different angles of view and up to a full 360 degrees of rotation around the at least one consumer product. A computing device uses at least the image data from such images to train a computer vision consumer product identification model to generate a reference image data model to be used for recognition of the at least one consumer product.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: March 19, 2024
    Assignee: Walmart Apollo, LLC
    Inventors: Soren A. Larson, Soheil Salehian-Dardashti, Elizabeth J. Barton, Victoria A. Moeller-Chan
  • Patent number: 11937019
    Abstract: Each of a plurality of co-located inspection camera modules captures raw images of objects passing in front of the co-located inspection camera modules, which form part of a quality assurance inspection system. The inspection camera modules each have a different image sensor or different lens focal properties and generate different feeds of raw images. The co-located inspection camera modules can be selectively switched amongst to activate the corresponding feed of raw images. The activated feed of raw images is provided to a consuming application or process for quality assurance analysis.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: March 19, 2024
    Assignee: Elementary Robotics, Inc.
    Inventors: Arye Barnehama, Dat Do, Daniel Pipe-Mazo
  • Patent number: 11934746
    Abstract: An information generation device generating a test case being a simulation model for reproducing a road traffic condition in an area on a road including a target point, the information generation device including: a first storage unit that stores moving-object information being information regarding a moving object existing in the area; a determination unit that determines whether or not an incident in which the moving object existing in the area shows a behavior that leads to occurrence of an accident has occurred, on the basis of the moving-object information; an extraction unit that extracts, as target information, moving-object information in a target period being a predetermined time period including a time point at which the incident occurred; and a generation unit that generates the test case upon occurrence of the incident on the basis of the target information.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: March 19, 2024
    Assignee: IHI CORPORATION
    Inventors: Yoshihisa Yamanouchi, Yosuke Seto, Minori Orita, Hiroki Saito, Takeharu Kato
  • Patent number: 11928771
    Abstract: An exemplary method of detecting a light source using an electronic device having a camera and a sensor may include: scanning a real environment using the camera to establish an environment map of the real environment; capturing, using the camera, a first image of a real light source from a first location in the real environment and a second image of the real light source from a second location in the real environment; tracking, using the sensor, a first position and a first orientation of the camera in the environment map while the first image is captured, and a second position and a second orientation of the camera in the environment map while the second image is captured; and computing a position of the real light source in the environment map based on the first position, the first orientation, the second position, and the second orientation.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: March 12, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Yi Xu, Shuxue Quan
  • Patent number: 11925429
    Abstract: A robotic surgical system includes a linkage, an input handle, and a processing unit. The linkage moveably supports a surgical tool relative to a base. The input handle is moveable in a plurality of directions. The processing unit is in communication with the input handle and is operatively associated with the linkage to move the surgical tool based on a scaled movement of the input handle. The scaling varies depending on whether the input handle is moved towards a center of a workspace or away from the center of the workspace. The workspace represents a movement range of the input handle.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: March 12, 2024
    Assignee: COVIDIEN LP
    Inventor: William Peine
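The direction-dependent scaling described above can be illustrated with a small vector check: compare the handle displacement against the direction to the workspace centre and pick a gain accordingly. The gain values below are made up for illustration, not Covidien's parameters.

```python
import numpy as np


def scaled_tool_delta(handle_pos, handle_delta, workspace_center,
                      gain_toward=0.5, gain_away=0.3):
    """Scale an input-handle displacement before applying it to the tool.

    The gain depends on whether the displacement points toward or away from
    the workspace centre (gain values here are illustrative only).
    """
    handle_pos = np.asarray(handle_pos, dtype=float)
    handle_delta = np.asarray(handle_delta, dtype=float)
    to_center = np.asarray(workspace_center, dtype=float) - handle_pos
    moving_toward = np.dot(handle_delta, to_center) > 0
    gain = gain_toward if moving_toward else gain_away
    return gain * handle_delta


# Handle near the workspace edge, nudged back toward the centre -> larger gain.
print(scaled_tool_delta([0.09, 0.0, 0.0], [-0.01, 0.0, 0.0], [0.0, 0.0, 0.0]))
```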
  • Patent number: 11929844
    Abstract: Various arrangements for using captured voice to generate a custom interface controller are presented. A vocal recording from a user may be captured in which a spoken command and multiple smart-home devices are indicated. One or more common functions that map to the multiple smart-home devices may be determined. A custom interface controller may be generated that controls the one or more common functions of each smart-home device of the multiple smart-home devices.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: March 12, 2024
    Assignee: Google LLC
    Inventors: Benjamin Brown, Da Huang, Christopher Conover, Lisa Williams, Henry Chung
  • Patent number: 11925492
    Abstract: A non-transitory computer-readable recording medium having stored an image processing program that causes a computer to execute a process, the process includes extracting a plurality of consecutive pixels corresponding to a first part or a second part of a body, from a pixel column in a predetermined direction of an image of the body, obtaining a statistical value of pixel values of the plurality of consecutive pixels, and identifying a part corresponding to the plurality of consecutive pixels, among the first part or the second part, based on the statistical value.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: March 12, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Yasutaka Moriwaki, Hiroaki Takebe, Nobuhiro Miyazaki, Takayuki Baba
  • Patent number: 11926337
    Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by a computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: March 12, 2024
    Assignee: UATC, LLC
    Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
  • Patent number: 11928286
    Abstract: A mobile device includes a housing, a network interface to communicate with an external network, a user interface comprising a panel with a display panel to display a menu image and a touch panel to receive a touch input, the panel including a first portion and a second portion with a transparent condition and being within the first portion, a sensing unit including two front cameras disposed in the second portion of the panel, and a control unit to control the sensing unit in a photographing mode and a sensing mode to perform a function of the mobile device.
    Type: Grant
    Filed: April 29, 2023
    Date of Patent: March 12, 2024
    Inventor: Seungman Kim
  • Patent number: 11929870
    Abstract: Monitoring systems and methods for use in security, safety, and business process applications utilizing a correlation engine are disclosed. Sensory data from one or more sensors are captured and analyzed to detect one or more events in the sensory data. The events are correlated by a correlation engine, optionally by weighing the events based on attributes of the sensors that were used to detect the primitive events. The events are then monitored for an occurrence of one or more correlations of interest, or one or more critical events of interest. Finally, one or more actions are triggered based on a detection of one or more correlations of interest, one or more anomalous events, or one or more critical events of interest. A hierarchical storage manager, having access to a hierarchy of two or more data storage devices, is provided to store data from the one or more sensors.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: March 12, 2024
    Assignee: SecureNet Solutions Group LLC
    Inventors: John J Donovan, Daniar Hussain
  • Patent number: 11928577
    Abstract: A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers are interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: March 12, 2024
    Assignee: Google LLC
    Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
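The connectivity pattern this abstract describes (parallel towers, with only a subset of layers exchanging activations across nodes) can be sketched in a single process. The PyTorch model below shows only that pattern, with arbitrary layer sizes and no distribution across machines.

```python
import torch
import torch.nn as nn


class TwoTowerCNN(nn.Module):
    """Two parallel convolutional towers; only the second stage is cross-connected."""

    def __init__(self):
        super().__init__()
        self.a1 = nn.Conv2d(3, 16, 3, padding=1)   # tower A, stage 1 (local only)
        self.b1 = nn.Conv2d(3, 16, 3, padding=1)   # tower B, stage 1 (local only)
        # Stage 2 sees the concatenated activations of both towers (cross-connected).
        self.a2 = nn.Conv2d(32, 32, 3, padding=1)
        self.b2 = nn.Conv2d(32, 32, 3, padding=1)
        self.head = nn.Linear(64, 10)

    def forward(self, x):
        fa, fb = torch.relu(self.a1(x)), torch.relu(self.b1(x))
        shared = torch.cat([fa, fb], dim=1)        # activations fed across towers
        ga, gb = torch.relu(self.a2(shared)), torch.relu(self.b2(shared))
        pooled = torch.cat([ga.mean(dim=(2, 3)), gb.mean(dim=(2, 3))], dim=1)
        return self.head(pooled)


print(TwoTowerCNN()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```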
  • Patent number: 11928766
    Abstract: The present disclosure is related to a method to generate user representative avatars that fit within a design paradigm. The method includes receiving depth information corresponding to multiple user features of the user, determining one or more feature landmarks for the user based on the depth information, utilizing the one or more feature landmarks to classify a first user feature relative to an avatar feature category, selecting a first avatar feature from the avatar feature category based on the classification of the first user feature, combining the first avatar feature within an avatar representation to generate a user avatar, and outputting the user avatar for display.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: March 12, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Dumene Comploi, Francisco E. Gonzalez
  • Patent number: 11928862
    Abstract: An approach is disclosed for visually identifying and/or pairing ride providers and passengers. The approach involves, for example, receiving location data indicating that a driver vehicle is within a proximity threshold of a passenger pickup location. The approach also involves initiating an activation of a camera of a passenger device to present live imagery on the passenger device. The approach further involves processing sensor data collected from one or more sensors of the passenger device to determine a rotation vector indicating a pointing direction of the passenger device. The approach also involves determining a new direction to point the passenger device to capture the driver vehicle in a field of view of the camera based on the rotation vector and the location data. The approach further involves providing output data for presenting a representation of the new direction in a user interface of the passenger device.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: March 12, 2024
    Assignee: HERE Global B.V.
    Inventors: Ron Livne, Silviu Zilberman
  • Patent number: 11924076
    Abstract: The present disclosure relates to methods and devices for wireless communication of an apparatus, e.g., a UE. In one aspect, the apparatus may determine whether a connection of a video call is interrupted, the video call including a plurality of decoded frames. The apparatus may also determine, if the connection of the video call is interrupted, whether one or more decoded frames of the plurality of decoded frames are suitable for artificial frame generation. The apparatus may also generate one or more artificial frames based on the one or more decoded frames and an audio feed from a transmitting device. Additionally, the apparatus may determine whether the one or more artificial frames are suitable for a facial model call. The apparatus may also establish a facial model call based on a combination of the one or more artificial frames and the audio feed from the transmitting device.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: March 5, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Ansh Abhay Balde, Venkata Phani Krishna Akella, Rajesh Polisetti, Hemanth Yerva, Sandeep Padubidri Ramamurthy
  • Patent number: 11922583
    Abstract: An interactive method for a movable platform, an interactive system, a movable platform and a storage medium including the interactive method. The interactive method may include projecting three-dimensional point cloud data collected by a sensor into image data collected by a camera for fusion processing to obtain a fused image; rendering the fused image to determine a three-dimensional visualization image of a surrounding environment where the movable platform is located; and outputting the three-dimensional visualization image of the surrounding environment where the movable platform is located on a display interface.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: March 5, 2024
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventor: Bin Xu
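Fusing LiDAR points with a camera image, as in this abstract, typically starts by transforming points into the camera frame and projecting them through the intrinsics. A minimal sketch of that projection with hypothetical calibration values; it is not DJI's rendering pipeline.

```python
import numpy as np


def project_points(points_xyz, K, R, t, image_shape):
    """Project Nx3 LiDAR points into pixel coordinates.

    K: 3x3 camera intrinsics, R/t: LiDAR-to-camera rotation and translation.
    Returns pixel coords and the depth of each point that lands inside the image.
    """
    cam = points_xyz @ R.T + t                 # into the camera frame
    in_front = cam[:, 2] > 0.1
    cam = cam[in_front]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective divide
    h, w = image_shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[ok], cam[ok, 2]


# Hypothetical calibration: identity rotation, camera offset 0.2 m along the optical axis.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
pts = np.random.uniform([-5, -2, 2], [5, 2, 30], size=(1000, 3))
uv, depth = project_points(pts, K, np.eye(3), np.array([0, 0, 0.2]), (480, 640))
print(uv.shape, depth.min())
```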
  • Patent number: 11922617
    Abstract: The present application provides a method and system for defect detection. The method includes: acquiring a two-dimensional (2D) picture of an object to be detected; inputting the acquired 2D picture to a trained defect segmentation model to obtain a segmented 2D defect mask, where the defect segmentation model is trained based on a multi-level feature extraction instance segmentation network with intersection over union (IoU) thresholds being increased level by level, and the 2D defect mask includes information about a defect type, a defect size, and a defect location of a segmented defect region; and determining the segmented 2D defect mask based on a predefined defect rule to output a defect detection result.
    Type: Grant
    Filed: May 12, 2023
    Date of Patent: March 5, 2024
    Assignee: CONTEMPORARY AMPEREX TECHNOLOGY CO., LIMITED
    Inventors: Annan Shu, Chao Yuan, Lili Han
  • Patent number: 11922671
    Abstract: Apparatus for processing image data associated with at least one input image, including a convolutional neural network (CNN)-based encoder configured to provide a plurality of hierarchical feature maps based on the image data, and a decoder configured to provide output data based on the plurality of feature maps, wherein the decoder includes a convolutional long short-term memory (Conv-LSTM) module configured to sequentially process at least some of the plurality of hierarchical feature maps.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: March 5, 2024
    Assignee: Nokia Technologies OY
    Inventor: Tinghuai Wang
  • Patent number: 11922710
    Abstract: A character recognition method includes the following operations: determining, by a processor, that an image of a character to be identified corresponds to a matching character among several registered characters according to several vector distances between a vector of the image of the character to be identified and several vectors of several registered character images of the registered characters, and storing a matching vector distance between the vector of the image of the character to be identified and a vector of the matching character; and storing, by the processor, data of the matching character according to the image of the character to be identified when the matching vector distance is less than a vector distance threshold.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: March 5, 2024
    Assignee: Realtek Semiconductor Corporation
    Inventors: Chien-Hao Chen, Chao-Hsun Yang, Shih-Tse Chen
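The matching logic in this abstract is essentially a nearest-neighbour search over registered character vectors with a distance threshold gating whether the result is stored. A minimal sketch using Euclidean distance and an arbitrary threshold; the embedding model that produces the vectors is out of scope.

```python
import numpy as np


def match_character(query_vec, registered, threshold=0.8):
    """Return (character, distance) for the closest registered vector,
    or (None, distance) when the best match is not close enough to store."""
    names = list(registered)
    vecs = np.stack([registered[n] for n in names])
    dists = np.linalg.norm(vecs - query_vec, axis=1)
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return names[best], float(dists[best])
    return None, float(dists[best])


registered = {"A": np.array([1.0, 0.0, 0.0]),
              "B": np.array([0.0, 1.0, 0.0])}
print(match_character(np.array([0.9, 0.1, 0.0]), registered))   # ('A', ...)
print(match_character(np.array([0.5, 0.5, 0.7]), registered))   # (None, ...)
```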
  • Patent number: 11922576
    Abstract: A system that measures the motion of a camera traveling over a living subject by reference to images taken from the camera, while measuring the subject's shape, pose and motion. The system can segment each image into pixels containing the subject and pixels not containing the subject, and ascribe each pixel containing the subject to a precise location on the subject's body. In one embodiment, a computer connected to a camera and an Inertial Measurement Unit (IMU) provides estimates of the camera's location, attitude and velocity by integrating the motion of the camera with respect to features in the environment and on the surface of the subject, corrected for the motion of the subject. The system corrects accumulated errors in the integration of camera motion by recognizing the subject's body shape in the collected data.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: March 5, 2024
    Assignee: Triangulate Labs, Inc.
    Inventor: William D. Hall
  • Patent number: 11919546
    Abstract: Systems and methods for operating a robotic system. The methods comprise: inferring, by a computing device, a first heading distribution for the object from a 3D point cloud; obtaining, by the computing device, a second heading distribution from a vector map; obtaining, by the computing device, a posterior distribution of a heading using the first and second heading distributions; defining, by the computing device, a cuboid on a 3D graph using the posterior distribution; and using the cuboid to facilitate driving-related operations of a robotic system.
    Type: Grant
    Filed: March 1, 2023
    Date of Patent: March 5, 2024
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Wulue Zhao, Kevin L. Wyffels, G. Peter K. Carr
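The fusion step here, a heading distribution inferred from the point cloud combined with a vector-map prior to form a posterior, can be illustrated with discretised heading bins. The distributions below are made up; only the Bayes-style combination is the point.

```python
import numpy as np

# Discretise heading into bins over [0, 2*pi).
bins = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)


def von_mises(mu, kappa):
    """Unnormalised von Mises density evaluated at each heading bin."""
    p = np.exp(kappa * np.cos(bins - mu))
    return p / p.sum()


# Likelihood inferred from the 3D point cloud and prior taken from the vector map.
cloud_likelihood = von_mises(mu=0.4, kappa=2.0)
map_prior = von_mises(mu=0.2, kappa=6.0)

# Posterior over heading: elementwise product, then renormalise (Bayes' rule on bins).
posterior = cloud_likelihood * map_prior
posterior /= posterior.sum()

map_estimate = bins[np.argmax(posterior)]
print(f"MAP heading: {map_estimate:.2f} rad")
```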
  • Patent number: 11921042
    Abstract: An achromatic 3D STED measuring optical process and optical method, based on a conical diffraction effect or an effect of propagation of light in uniaxial crystals, including a cascade of at least two uniaxial or conical diffraction crystals creating, from a laser source, all of the light propagating along substantially the same optical path, from the output of an optical bank to the objective of a microscope. A spatial position of at least one luminous nano-emitter, structured object or a continuous distribution in a sample is determined. Reconstruction of the sample and its spatial and/or temporal and/or spectral properties is treated as an inverse Bayesian problem leading to the definition of an a posteriori distribution, and a posteriori relationship combining, by virtue of the Bayes law, the probabilistic formulation of a noise model, and possible priors on a distribution of light created in the sample by projection.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: March 5, 2024
    Assignee: Bioaxial SAS
    Inventors: Gabriel Y Sirat, Lionel Moisan
  • Patent number: 11919462
    Abstract: A vehicle occupant monitoring apparatus includes two or more imaging modules and a controller. The imaging modules each include: a light-emitting device configured to emit and apply light to an occupant in a vehicle; and an imaging device configured to perform imaging of the occupant to obtain an imaging image. The controller is configured to cause the light-emitting device of each of the imaging modules to apply the light when the imaging device of corresponding one of the imaging modules performs the imaging. The controller is configured to cause, in a case where contact of the vehicle with another object is predicted or detected, the imaging device to perform the imaging to obtain the imaging image while causing the light-emitting devices of the imaging modules to apply the light together, and monitor the position of the occupant on the basis of the obtained imaging image.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: March 5, 2024
    Assignee: SUBARU CORPORATION
    Inventor: Ryota Nakamura
  • Patent number: 11915404
    Abstract: An on-board thermal track misalignment detection system and method therefor are presented. The system can use on-board locomotive sensors attached to an end-of-train device to detect, on the edge, signs and symptoms of thermal misalignments of the track. Once detected, an alert can be transmitted to prevent potential derailments. The system can also include forward-facing and rearward-facing imaging sensors (e.g., camera, LiDAR sensor, etc.). The system can wirelessly communicate (e.g., via radio) with a leading locomotive to ensure proper air pressure and location. The system can be powered by an on-board battery and/or air pressure device. Advantageously, the system can calculate whether any rail deviation is significant (e.g., via one or more threshold values). The system can also leverage image processing functionality (executed by one or more processors) to find the centerline and the distance between the tracks.
    Type: Grant
    Filed: March 17, 2023
    Date of Patent: February 27, 2024
    Assignee: BNSF Railway Company
    Inventors: Asim Ghanchi, Nicholas Dryer, Coleman Barkley, Michael Saied Saniei
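The deviation check this abstract mentions (centerline and track gauge compared against thresholds) can be sketched from two detected rail polylines. The nominal gauge and tolerances below are assumptions for illustration, not BNSF's operating limits.

```python
import numpy as np


def gauge_and_centerline(left_rail, right_rail):
    """left_rail/right_rail: Nx2 arrays of (x, y) points sampled at matching stations."""
    left, right = np.asarray(left_rail, float), np.asarray(right_rail, float)
    gauge = np.linalg.norm(right - left, axis=1)
    centerline = (left + right) / 2.0
    return gauge, centerline


def misalignment_alert(gauge, centerline, nominal_gauge=1.435,
                       gauge_tol=0.03, lateral_tol=0.05):
    """Flag a potential thermal misalignment if gauge or centerline deviates too much."""
    gauge_dev = np.abs(gauge - nominal_gauge).max()
    # Lateral deviation of the centerline from a straight fit along the track.
    fit = np.polyval(np.polyfit(centerline[:, 0], centerline[:, 1], 1), centerline[:, 0])
    lateral_dev = np.abs(centerline[:, 1] - fit).max()
    return gauge_dev > gauge_tol or lateral_dev > lateral_tol


x = np.linspace(0, 50, 100)
kink = 0.08 * np.exp(-((x - 25) ** 2) / 4.0)          # simulated sun-kink bulge
left = np.stack([x, 0.0 + kink], axis=1)
right = np.stack([x, 1.435 + kink], axis=1)
gauge, center = gauge_and_centerline(left, right)
print(misalignment_alert(gauge, center))               # True: lateral deviation too large
```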
  • Patent number: 11914792
    Abstract: The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: February 27, 2024
    Assignee: Ultrahaptics IP Two Limited
    Inventors: Kevin A. Horowitz, Matias Perez, Raffi Bedikian, David S. Holz, Gabriel A. Hare
  • Patent number: 11915486
    Abstract: A system includes one or more video capture devices and a processor coupled to each video capture device. Each processor is operable to direct its respective video capture device to obtain an image of a monitored area and process the image to identify objects of interest represented in the image. The processor is also operable to generate bounding perimeter virtual objects for the identified objects of interest, each bounding perimeter virtual object surrounding at least part of its respective object of interest. The processor is further operable to determine danger zones for the identified objects of interest based on the bounding perimeter virtual objects. The processor is further operable to determine at least one near-miss condition based at least in part on an actual or predicted overlap of danger zones for multiple objects of interest, and may optionally generate an alert at least partially in response to the near-miss condition.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: February 27, 2024
    Assignee: Ubicquia IQ LLC
    Inventors: Morné Neser, Samuel Leonard Holden, Sébastien Magnan
  • Patent number: 11915451
    Abstract: A method and a system for object detection and pose estimation within an input image. A 6-degree-of-freedom object detection and pose estimation is performed using a trained encoder-decoder convolutional artificial neural network including an encoder head, an ID mask decoder head, a first correspondence color channel decoder head and a second correspondence color channel decoder head. The ID mask decoder head creates an ID mask for identifying objects, and the color channel decoder heads are used to create a 2D-to-3D-correspondence map. For at least one object identified by the ID mask, a pose estimation based on the generated 2D-to-3D-correspondence map and on a pre-generated bijective association of points of the object with unique value combinations in the first and the second correspondence color channels is generated.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: February 27, 2024
    Assignee: Siemens Aktiengesellschaft
    Inventors: Ivan Shugurov, Andreas Hutter, Sergey Zakharov, Slobodan Ilic
  • Patent number: 11914850
    Abstract: A user profile picture generation method and an electronic device are provided. In a process in which a user searches for a profile picture among a plurality of thumbnails displayed in a user interface, when the user selects a thumbnail, the electronic device displays an original picture corresponding to the thumbnail and displays a crop box in the original picture, where the selection may be a tap operation on the thumbnail. The electronic device may generate a profile picture of the user based on the crop box. The crop box includes a human face region in the original picture, and a composition manner of the human face region in the crop box is the same as a composition manner of the human face region in the original picture.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: February 27, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Da Su, Yundie Zhang, Liang Hu, Siju Wu
  • Patent number: 11915450
    Abstract: Embodiments are generally directed to methods and apparatuses for determining a frontal body orientation. An embodiment of a method for determining a three-dimensional (3D) orientation of the frontal body of a player comprises: detecting each of a plurality of players in each of a plurality of frames captured by a plurality of cameras; for each of the plurality of cameras, tracking each of the plurality of players between continuous frames captured by the camera; and associating the plurality of frames captured by the plurality of cameras to generate the 3D orientation of each of the plurality of players.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: February 27, 2024
    Assignee: Intel Corporation
    Inventors: Yiwei He, Ming Lu, Haihua Lin, Liwei Liao, Jiansheng Chen, Xiaofeng Tong, Qiang Li, Wenlong Li
  • Patent number: 11915490
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for the generation and use of a surfel map with semantic labels. One of the methods includes receiving a surfel map that includes a plurality of surfels, wherein each surfel has associated data that includes one or more semantic labels; obtaining sensor data for one or more locations in the environment, the sensor data having been captured by one or more sensors of a first vehicle; determining one or more surfels corresponding to the one or more locations of the obtained sensor data; identifying one or more semantic labels for the one or more surfels corresponding to the one or more locations of the obtained sensor data; and performing, for each surfel corresponding to the one or more locations of the obtained sensor data, a label-specific detection process for the surfel.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: February 27, 2024
    Assignee: Waymo LLC
    Inventors: Dragomir Anguelov, Colin Andrew Braley, Christoph Sprunk
  • Patent number: 11911682
    Abstract: A method can include providing an object having a size smaller than a size of a known regulation object, projecting the object, via a delivery device, toward a trainee, and training the trainee to follow the object. A method can include determining a game parameter of a game trajectory of a sports object that was projected along the game trajectory in a real-time sports event, and based on the game parameters, adapting a delivery device to deliver a training object along a training trajectory that mimics at least a portion of the game trajectory, with the training object being smaller than the sports object.
    Type: Grant
    Filed: January 14, 2022
    Date of Patent: February 27, 2024
    Assignee: VXT SPORTS LLC
    Inventors: Preston Carpenter Cox, Robin Birdwell Cox
  • Patent number: 11915496
    Abstract: A body information acquisition device includes: a skeleton point detection unit configured to detect a skeleton point of a person included in a captured image; a body information acquisition unit configured to acquire body information of the person based on detection of the skeleton point; and an imaging state determination unit configured to determine whether an imaging state of the person reflected in the captured image corresponds to a specific imaging state specified based on a predetermined evaluation index value, in which the body information acquisition unit does not acquire the body information based on the detection of the skeleton point when the imaging state corresponds to the specific imaging state.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: February 27, 2024
    Assignee: AISIN CORPORATION
    Inventors: Godai Tanaka, Yoshiaki Tomatsu, Hirotaka Watanabe, Takahisa Hayakawa, Kazumichi Hara
  • Patent number: 11915278
    Abstract: An automatic system for detecting damage to a vehicle, comprising: (a) at least one camera for capturing “handover” and “return” images irrespective of the vehicle's movement or orientation relative to the camera; (b) a memory for storing images of basic-parts; (c) a first determination unit for, based on the basic-parts images, determining the location of one or more basic-parts within the “handover” and “return” images; (d) a second determination unit configured to, based on the determined locations of the basic parts within the images, determine the locations of “other parts” within the “handover” and “return” images, thereby to form “handover” and “return” part-images, respectively; (e) a transformation unit configured to separately transform each pair of “handover” and “return” part-images, respectively, to a same plane; (f) a comparison unit configured to separately compare pixel-by-pixel each transformed pair of “handover” and “return” part-images, to detect a difference above a predefined threshold.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: February 27, 2024
    Assignee: RAVIN AI LTD.
    Inventors: Eliron Ekstein, Roman Sandler, Alexander Kenis, Alexander Serebryiani
  • Patent number: 11913771
    Abstract: An information processing device is provided with a calculating unit and a determining unit. The calculating unit calculates, as a length calculation value, a length between parts set for measuring the length of the object, from an image of the object in a captured image in which the object to be measured has been imaged. In accordance with a pre-assigned selection rule, the determining unit selects, from among length calculation values calculated respectively from a plurality of captured images having different image capture time points within a set time range, the length calculation value when the object is in a basic attitude for length measurement, and an assumed length calculation value, and determines the measured value of the length of the object using the selected length calculation values.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: February 27, 2024
    Assignee: NEC CORPORATION
    Inventor: Takeharu Kitagawa
  • Patent number: 11915463
    Abstract: Disclosed herein is a system and method of identifying new products on a retail shelf using a feature extractor trained to extract features from images of products on the shelf and output identifying information regarding the product in the product image. The extracted features are compared to extracted features in a product library and a best-fit is obtained. A new product is identified if the distance between the features of the product on the shelf and the features of the best-fit product from the product library are above a predetermined threshold.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: February 27, 2024
    Assignee: Carnegie Mellon University
    Inventors: Marios Savvides, Chenchen Zhu, Fangyi Chen, Uzair Ahmed, Ran Tao
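The novelty rule in this abstract reduces to a nearest-neighbour lookup in feature space with a distance threshold. A minimal sketch with cosine distance, random placeholder embeddings, and an arbitrary threshold; the trained feature extractor itself is not modelled here.

```python
import numpy as np


def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def classify_shelf_item(item_embedding, library, novelty_threshold=0.35):
    """Return the best-matching library product, or 'NEW_PRODUCT' if nothing is close."""
    best_name, best_dist = None, float("inf")
    for name, emb in library.items():
        d = cosine_distance(item_embedding, emb)
        if d < best_dist:
            best_name, best_dist = name, d
    if best_dist > novelty_threshold:
        return "NEW_PRODUCT", best_dist
    return best_name, best_dist


rng = np.random.default_rng(0)
library = {"cereal_a": rng.normal(size=128), "soda_b": rng.normal(size=128)}
known = library["cereal_a"] + 0.05 * rng.normal(size=128)   # near-duplicate embedding
unknown = rng.normal(size=128)                              # unrelated embedding
print(classify_shelf_item(known, library))                  # ('cereal_a', small distance)
print(classify_shelf_item(unknown, library))                # ('NEW_PRODUCT', large distance)
```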
  • Patent number: 11911917
    Abstract: A mobile robot including a vision system, the vision system including a camera and an illumination system; the illumination system including a plurality of light sources arranged to provide a level of illumination to an area surrounding the mobile robot; and a control system for controlling the illumination system. The control system adjusts the level of illumination provided by the plurality of light sources based on an image captured by the camera; an exposure time of the camera at the time the image was captured; and robot rotation information.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: February 27, 2024
    Assignee: Dyson Technology Limited
    Inventors: David Finlay Wyatt, David Andrew Richards, Hossein Farid Ghassem Nia, Christopher Andrew Smith
  • Patent number: 11912199
    Abstract: A vehicular trailer hitching assist system includes a camera disposed at a rear portion of a vehicle and viewing at least rearward of the vehicle. During a reversing maneuver of the vehicle toward a trailer that is spaced from the vehicle at a distance from the vehicle, the camera views at least a portion of a front profile of the trailer. An electronic control unit (ECU) includes an image processor operable to process image data captured by the camera. The vehicular trailer hitching assist system, via image processing at the ECU of image data captured by the camera during the reversing maneuver of the vehicle toward the trailer, determines a plurality of landmarks corresponding to the front profile of the trailer. Based at least in part on the determined plurality of landmarks, the vehicular trailer hitching assist system determines location of a trailer coupler of the trailer.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: February 27, 2024
    Assignee: Magna Electronics Inc.
    Inventors: Akbar Assa, Shweta Suresh Daga, Brijendra Kumar Bharti, Guruprasad Mani Iyer Shankaranarayanan, Jyothi P. Gali, Anoop S. Mann, Alexander Velichko
  • Patent number: 11906629
    Abstract: A method for optical distance measurement, comprising a creation of at least one frame, including determining 3D information of at least one subregion of a measuring region. A time budget for creating the frame is split between a first phase for assessing at least one region of interest, and a second phase for determining 3D information from the at least one region of interest. During the first phase a plurality of measuring pulses is emitted by a transmitting unit, and reflected measuring pulses are received by a receiving unit, wherein 2D information of the measuring region is determined, wherein at least one region of interest is assessed from the 2D information. During the second phase a plurality of measuring pulses is emitted by a transmitting unit, and reflected measuring pulses are received by the receiving unit, wherein 3D information of the at least one region of interest is determined as part of the second phase.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: February 20, 2024
    Assignee: Microvision, Inc.
    Inventor: Ünsal Kabuk
  • Patent number: 11908242
    Abstract: Systems, methods, and computer readable media that store instructions for calculating signatures, utilizing signatures, and the like.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: February 20, 2024
    Assignee: CORTICA LTD.
    Inventor: Igal Raichelgauz
  • Patent number: 11908192
    Abstract: An electronic device is disclosed. The electronic device comprises a memory for storing content, and a processor for: acquiring a probability value for each of a plurality of objects included in each of a plurality of frames constituting the stored content; grouping the plurality of objects into at least one group according to a correlation value between the plurality of objects, the correlation value being obtained on the basis of the acquired probability value; counting, for each of a plurality of frames for each group, a case where the acquired probability value is equal to or greater than a preconfigured threshold value; and acquiring summary content on the basis of a result of the counting.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: February 20, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Donghyun Kim, Juhyoung Lee
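The grouping and counting in this abstract can be sketched with a small matrix of per-frame, per-object probabilities: objects whose scores correlate across frames are grouped, and frames where the group clears a threshold are counted toward the summary. All numbers and the correlation cut-off below are illustrative.

```python
import numpy as np

# Rows: frames, columns: per-object detection probabilities (illustrative values).
probs = np.array([
    [0.9, 0.8, 0.1],
    [0.8, 0.9, 0.2],
    [0.2, 0.1, 0.9],
    [0.1, 0.2, 0.8],
    [0.9, 0.9, 0.1],
])

corr = np.corrcoef(probs.T)              # object-to-object correlation across frames
group = [0]                              # greedily grow a group around object 0
group += [j for j in range(1, probs.shape[1]) if corr[0, j] > 0.7]

threshold = 0.7
# Count frames where every object in the group clears the probability threshold.
frame_hits = np.all(probs[:, group] >= threshold, axis=1)
summary_frames = np.flatnonzero(frame_hits)
print(group, summary_frames)             # e.g. [0, 1] and frames [0, 1, 4]
```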
  • Patent number: 11908222
    Abstract: The present application relates to an occluded pedestrian re-identification method, including steps of obtaining global features and local features of occluded pedestrians, and recombining the local features into a local feature map; obtaining a heat map of key-points of pedestrian images and a group of key-point confidences; obtaining a group of features of the pedestrian key-points by using the local feature map and the heat map; obtaining a local feature group by using the global features to enhance each key-point feature in the group of features of pedestrian key-points according to Conv; obtaining an adjacency matrix of key-points through the key-points; and using the local feature group and the adjacency matrix of key-points as the input of a GCN to obtain the final features of the pedestrian key-points.
    Type: Grant
    Filed: October 17, 2023
    Date of Patent: February 20, 2024
    Assignee: Hangzhou Dianzi University
    Inventors: Ming Jiang, Lingjie He, Min Zhang
  • Patent number: 11908156
    Abstract: Described herein is a detector for determining a position of at least one object. The detector includes at least one sensor element having a matrix of optical sensors, the optical sensors each having a light-sensitive area, wherein the sensor element is configured to determine a reflection image of the object. The detector also includes an evaluation device configured to select a reflection feature of the reflection image, and determine a distance estimate of the selected reflection feature of the reflection image by optimizing at least one blurring function fa, wherein the distance estimate is given by a longitudinal coordinate z and an error interval ±ε. The evaluation device is adapted to determine at least one displacement region in at least one reference image corresponding to the distance estimate, and to match the selected reflection feature with at least one reference feature within the displacement region.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: February 20, 2024
    Assignee: TRINAMIX GMBH
    Inventors: Michael Eberspach, Christian Lennartz, Robert Send, Patrick Schindler, Peter Schillen, Ingmar Bruder
  • Patent number: 11908203
    Abstract: LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. Improved techniques for processing the point cloud data that has been collected are provided. The improved techniques include mapping one or more point cloud data points into a depth map, the one or more point cloud data points being generated using one or more sensors; determining one or more mapped point cloud data points within a bounded area of the depth map, and detecting, using one or more processing units and for an environment surrounding a machine corresponding to the one or more sensors, a location of one or more entities based on the one or more mapped point cloud data points.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: February 20, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
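The depth-map step described here is often implemented as a spherical (range-image) projection: each point's azimuth and elevation select a pixel that stores the range. A minimal sketch with arbitrary resolution and field of view, plus a bounded-area query; it is not NVIDIA's implementation.

```python
import numpy as np


def points_to_depth_map(points, h=32, w=256, v_fov=(-0.4, 0.2)):
    """Project Nx3 points (x forward, y left, z up) into an h x w range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(rng, 1e-6))
    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = ((v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * h).astype(int)
    depth = np.full((h, w), np.inf)
    ok = (row >= 0) & (row < h)
    # Keep the nearest return per pixel.
    np.minimum.at(depth, (row[ok], col[ok]), rng[ok])
    return depth


points = np.random.uniform([-20, -20, -2], [20, 20, 1], size=(5000, 3))
depth = points_to_depth_map(points)
window = depth[10:20, 100:140]                       # bounded area of the depth map
print(np.isfinite(window).sum(), depth[np.isfinite(depth)].min())
```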
  • Patent number: 11908298
    Abstract: The present disclosure discloses a smoke detection system and a smoke detection method. The smoke detection system includes a camera, a storage unit, and a processor. The camera acquires a current image and a previous image. The storage unit stores a plurality of modules. The processor is coupled with the camera and executes the plurality of modules. The processor generates a difference image based on the current image and the previous image. The processor inputs the current image and the difference image to a semantic segmentation model so that the semantic segmentation model outputs a smoke confidence map. The smoke confidence map is generated based on whether a current environment is a dark environment or a bright environment. The processor analyzes the smoke confidence map to determine whether a smoke event occurs in the current image. Therefore, a reliable smoke detection function can be achieved.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: February 20, 2024
    Assignee: VIA TECHNOLOGIES, INC.
    Inventors: Jia-yo Hsu, Wei-Chung Cheng
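The pre-processing this abstract describes, a frame difference fed to a segmentation model whose confidence map is then thresholded, can be shown without the trained network. The segmentation_model function below is a stand-in placeholder, and the threshold and minimum area are arbitrary.

```python
import numpy as np


def difference_image(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
    """Absolute frame difference, the motion cue fed alongside the current frame."""
    return np.abs(current.astype(np.float32) - previous.astype(np.float32))


def segmentation_model(current: np.ndarray, diff: np.ndarray) -> np.ndarray:
    """Placeholder for the trained semantic-segmentation network: here we just
    treat high brightness plus high motion as 'smoke-like' to produce a map."""
    return np.clip(0.5 * current / 255.0 + 0.5 * diff / 255.0, 0.0, 1.0)


def smoke_event(current, previous, confidence_threshold=0.6, min_area=500):
    diff = difference_image(current, previous)
    confidence = segmentation_model(current.astype(np.float32), diff)
    smoke_pixels = int((confidence > confidence_threshold).sum())
    return smoke_pixels >= min_area, smoke_pixels


prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[50:120, 80:200] = 200                      # a bright, newly appeared region
print(smoke_event(curr, prev))                  # (True, pixel count)
```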