Target Tracking Or Detecting Patents (Class 382/103)
  • Patent number: 11947781
    Abstract: Embodiments of this application provide an interface generation method and a device, where the method is applied to a device having a development function, and may provide a method for automatically adjusting a layout of a visual element on a to-be-generated interface to quickly generate an interface. The method includes: The device obtains a visual element of a reference interface, and obtains configuration information of a display of a target terminal device (501). The device determines a visual focus of the visual element based on attribute information of the visual element (502). The device determines, based on the configuration information of the display, an interface layout template corresponding to the configuration information (503). Finally, the device adjusts, based on the visual focus and the interface layout template, a layout of the visual element on a to-be-generated interface, and generates an interface (504).
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 2, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Zhang Gao
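The layout pipeline in steps (501)-(504) — obtain elements, find a visual focus, select a template from the display configuration, then adjust the layout — can be illustrated with a toy sketch. This is not Huawei's method; the area-weighted focus and the aspect-ratio template rule are assumptions for illustration only.

```python
# Hypothetical sketch of steps 501-504: derive a visual focus from element
# attributes and choose a layout template from the display configuration.

def visual_focus(elements):
    """Area-weighted centroid of visual elements.

    Each element is (x, y, w, h); larger elements pull the focus harder.
    """
    total = sum(w * h for _, _, w, h in elements)
    fx = sum((x + w / 2) * w * h for x, y, w, h in elements) / total
    fy = sum((y + h / 2) * w * h for x, y, w, h in elements) / total
    return fx, fy

def pick_template(display_w, display_h):
    """Map a display configuration to a (made-up) layout template name."""
    aspect = display_w / display_h
    if aspect > 1.5:
        return "landscape-two-column"
    if aspect < 0.8:
        return "portrait-single-column"
    return "square-grid"
```

For a 1920x1080 display the sketch selects the hypothetical "landscape-two-column" template; the generated interface would then be arranged around the computed focus.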
  • Patent number: 11948313
    Abstract: A method of tracking subjects in an area. The method includes receiving a plurality of sequences of images of corresponding fields of view in the area of real space; using a plurality of trained inference engines that process respective sequences of images to locate features of subjects in the corresponding fields of view; combining the located features from more than one of the trained inference engines that process sequences of images with overlapping fields of view, to generate data locating subjects in three dimensions in the area of real space during identification intervals; and matching located subjects from a plurality of identification intervals to identify tracked subjects, including comparing located subjects with tracked subjects.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: April 2, 2024
    Inventor: Jordan E. Fisher
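The final step — comparing subjects located in the current identification interval against already-tracked subjects — is a data-association problem. A minimal greedy nearest-neighbour sketch (an assumption for illustration, not the patent's inference engines):

```python
import math

def match_subjects(tracked, located, max_dist=0.75):
    """Greedily match newly located 3D subject positions against tracked
    subjects from earlier identification intervals.

    Returns (matches, unmatched_located), where matches are index pairs
    (tracked_idx, located_idx) within max_dist of each other.
    """
    # All candidate pairs, closest first.
    pairs = sorted(
        ((math.dist(t, l), ti, li)
         for ti, t in enumerate(tracked)
         for li, l in enumerate(located)),
        key=lambda p: p[0],
    )
    matches, used_t, used_l = [], set(), set()
    for d, ti, li in pairs:
        if d <= max_dist and ti not in used_t and li not in used_l:
            matches.append((ti, li))
            used_t.add(ti)
            used_l.add(li)
    unmatched = [li for li in range(len(located)) if li not in used_l]
    return matches, unmatched
```

Unmatched located subjects would seed new tracks; unmatched tracked subjects would age out after a few intervals.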
  • Patent number: 11948293
    Abstract: A position of an object is determined by optically capturing at least one capture structure arranged at the object, or at a reference object captured from the object, and thereby obtaining capture information, the at least one capture structure having a point-symmetrical profile of an optical property that varies along a surface of the capture structure; transforming a location-dependent mathematical function corresponding to the point-symmetrical profile of the optical property into a frequency domain; forming a second frequency-dependent mathematical function from a first frequency-dependent mathematical function, wherein the second mathematical function is formed from a relationship of, in each case, a real part and an imaginary part of complex function values of the first frequency-dependent mathematical function; and forming at least one function value of the second frequency-dependent mathematical function and determining the same as location information about a location of a point of symmetry of the location-dependent profile.
    Type: Grant
    Filed: January 31, 2021
    Date of Patent: April 2, 2024
    Assignee: Carl Zeiss Industrielle Messtechnik GmbH
    Inventor: Wolfgang Hoegele
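The underlying idea — a profile symmetric about a point has a linear Fourier phase, so the relationship between the real and imaginary parts of the transform encodes the symmetry point — can be demonstrated in 1-D with NumPy. This is a simplified illustration, not the patented procedure:

```python
import numpy as np

# A profile symmetric about x0 has DFT phase angle(F[k]) ~= -2*pi*k*x0/N,
# so atan2(Im, Re) at a low-frequency bin recovers the point of symmetry.
N = 128
x = np.arange(N)
x0 = 37.25                                        # true point of symmetry
profile = np.exp(-0.5 * ((x - x0) / 5.0) ** 2)    # symmetric optical profile

F = np.fft.rfft(profile)                          # location -> frequency domain
# "Relationship of real and imaginary parts": the phase of bin k = 1.
estimated_x0 = -np.angle(F[1]) * N / (2 * np.pi)
```

Because the phase is read at a single frequency, the estimate is sub-pixel without any spatial interpolation, which is the practical appeal of this class of method.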
  • Patent number: 11949628
    Abstract: Technologies to improve wireless communications by on-body products are described. One device includes millimeter wave (mmWave) frequency front-end circuitry and a baseband processor with an Orthogonal Frequency Division Multiplexing (OFDM) physical (PHY) layer. The baseband processor determines a received signal strength indicator (RSSI) value and a phase value associated with a wireless channel in a mmWave frequency range. The baseband processor determines a state of motion of the device using the RSSI value and the phase value. The baseband processor sends data to a second device using a first subcarrier structure of the OFDM PHY layer in response to the state of motion being a first state of motion, and sends data to the second device using a second subcarrier structure of the OFDM PHY layer in response to the state of motion being a second state of motion having more motion than the first state of motion.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: April 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Cyril Arokiaraj Arool Emmanuel, Balamurugan Shanmugam
  • Patent number: 11948238
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
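One common way to realize "estimating tracking parameters based on the dynamic expression model and the tracking data" is a linear blendshape model fit by least squares. The sketch below illustrates that generic idea, not Apple's implementation; the neutral-plus-basis model and the random data are assumptions:

```python
import numpy as np

# Toy dynamic expression model: a neutral face plus a blendshape basis B.
# Tracking parameters w are estimated so that neutral + B @ w matches the
# tracked vertex positions (stacked x, y, z coordinates).
rng = np.random.default_rng(0)
n_vertices, n_blendshapes = 300, 8
neutral = rng.normal(size=3 * n_vertices)
B = rng.normal(size=(3 * n_vertices, n_blendshapes))   # blendshape deltas

w_true = np.array([0.8, 0.0, 0.3, 0.0, 0.0, 0.5, 0.0, 0.1])
tracking_data = neutral + B @ w_true                   # observed face

# Estimate the tracking parameters by least squares.
w_est, *_ = np.linalg.lstsq(B, tracking_data - neutral, rcond=None)
```

Refining the dynamic expression model, as the abstract describes, would then adjust B itself using the accumulated tracking data and the estimated parameters.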
  • Patent number: 11948332
    Abstract: A system for determining the gaze endpoint of a subject, the system comprising: an eye tracking unit adapted to determine the gaze direction of one or more eyes of the subject; a head tracking unit adapted to determine the position, comprising location and orientation, of the eye tracker with respect to a reference coordinate system; a 3D structure representation unit that uses the 3D structure and position of objects of the scene in the reference coordinate system to provide a 3D structure representation of the scene; and a unit that, based on the gaze direction, the eye tracker position, and the 3D structure representation, calculates the gaze endpoint on an object of the 3D structure representation of the scene or determines the object itself.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: April 2, 2024
    Assignee: APPLE INC.
    Inventors: Jan Hoffmann, Tom Sengelaub, Denis Williams
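With the gaze direction and eye position expressed in the reference coordinate system, computing the gaze endpoint on a locally planar piece of the 3D structure representation reduces to a ray-plane intersection. A minimal sketch, assuming a single plane stands in for the scene geometry:

```python
import numpy as np

def gaze_endpoint(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect the gaze ray (eye_pos + t * gaze_dir, t >= 0) with a plane
    of the 3D scene representation; returns the 3D endpoint or None."""
    eye_pos = np.asarray(eye_pos, float)
    gaze_dir = np.asarray(gaze_dir, float)
    n = np.asarray(plane_normal, float)
    denom = gaze_dir @ n
    if abs(denom) < 1e-9:                 # gaze parallel to the surface
        return None
    t = (np.asarray(plane_point, float) - eye_pos) @ n / denom
    return eye_pos + t * gaze_dir if t >= 0 else None
```

A full implementation would run this test against every surface of the scene representation and keep the nearest hit, which also identifies the gazed-at object.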
  • Patent number: 11937524
    Abstract: A method includes obtaining, by the treatment system configured to implement a machine learning (ML) algorithm, one or more images of a region of an agricultural environment near the treatment system, wherein the one or more images are captured from the region of a real-world where agricultural target objects are expected to be present, determining one or more parameters for use with the ML algorithm, wherein at least one of the one or more parameters is based on one or more ML models related to identification of an agricultural object, determining a real-world target in the one or more images using the ML algorithm, wherein the ML algorithm is at least partly implemented using the one or more processors of the treatment system, and applying a treatment to the target by selectively activating the treatment mechanism based on a result of the determining the target.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: March 26, 2024
    Assignee: Verdant Robotics, Inc.
    Inventors: Gabriel Thurston Sibley, Lorenzo Ibarria, Curtis Dale Garner, Patrick Christopher Leger, Dustin James Webb
  • Patent number: 11941818
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a 3D location of an edge based on image and depth data. This involves determining a 2D location of a line segment corresponding to an edge of an object based on a light-intensity image, determining a 3D location of a plane based on depth values (e.g., based on sampling depth near the edge/on both sides of the edge and fitting a plane to the sampled points), and determining a 3D location of the line segment based on the plane (e.g., by projecting the line segment onto the plane). The devices, systems, and methods may involve classifying an edge as a particular edge type (e.g., fold, cliff, plane) and detecting the edge based on such classification.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: March 26, 2024
    Assignee: Apple Inc.
    Inventors: Vedant Saran, Alexandre Da Veiga
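The depth-side steps — sample depth points near the detected edge, fit a plane, then obtain the 3D line by projection onto that plane — can be sketched with an SVD plane fit. Orthogonal projection of the segment endpoints is used here for simplicity; the patent's projection details may differ:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points sampled near the edge.
    Returns (centroid, unit normal)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def project_onto_plane(point, centroid, normal):
    """Orthogonal projection of a 3D point (e.g. a line-segment endpoint)
    onto the fitted plane."""
    p = np.asarray(point, float)
    return p - ((p - centroid) @ normal) * normal
```

Projecting both endpoints of the 2D-detected segment yields the 3D location of the edge; the fold/cliff/plane classification mentioned in the abstract would decide which side's depth samples feed the fit.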
  • Patent number: 11938962
    Abstract: A server of a driving support system includes a path setting unit configured to set a first traveling path in which a vehicle travels in a parking lot, a detecting unit configured to detect a person, and a determining unit configured to determine whether or not a person is present in a part of the first traveling path, based on the detection result of the detecting unit. When it is determined that a person has been present for a predetermined time or more in the part of the first traveling path, the path setting unit searches for a second traveling path that bypasses the part of the first traveling path, and after the second traveling path has been found, the detecting unit limits the processing of the measurement result of the sensor whose measurement target is the part of the first traveling path.
    Type: Grant
    Filed: January 18, 2022
    Date of Patent: March 26, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Issei Matsunaga
  • Patent number: 11941954
    Abstract: Images captured for components of a device are monitored for changes by evaluating a first region of interest in the images. Periodically, a command is sent to the device to move one or more of the components to a known position or state. A certain component or set of components associated with being moved based on the command is evaluated in a second region of interest in the images to determine if the corresponding component or set of components is in the known position or state within the images. When the corresponding component or set of components is not identified from the images in the known position or state, a security alert is raised for the device and security operations are processed on the host device.
    Type: Grant
    Filed: January 31, 2023
    Date of Patent: March 26, 2024
    Assignee: NCR Corporation
    Inventors: Alexander William Whytock, Conor Michael Fyfe
  • Patent number: 11941320
    Abstract: An electronic monitoring system has one or more imaging devices that can detect at least one triggering event comprising sound and motion and a controller that executes a program to categorize the triggering event as being located in a user-defined activity zone within the field of view and/or as being a taxonomic-based triggering event. Upon categorizing the triggering event, the system generates an output comprising a video component and an audio component. At least a portion of the audio component is modified if the triggering event is a categorized triggering event. Modification of the audio may include muting all or a portion of the audio component of the output.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: March 26, 2024
    Assignee: Arlo Technologies, Inc.
    Inventors: Rajinder Singh, John Thomas, Michael Harris, Dennis Aldover
  • Patent number: 11937883
    Abstract: Various embodiments of the present disclosure encompass a visual endoscopic guidance device employing an endoscopic viewing controller (20) for controlling a display of an endoscopic view (11) of an anatomical structure, and a visual guidance controller (130) for controlling a display of one or more guided manipulation anchors (50-52) within the display of the endoscopic view (11) of the anatomical structure. A guided manipulation anchor (50-52) is representative of a location marking and/or a motion directive of a guided manipulation of the anatomical structure. The visual guidance controller (130) further controls a display of a hidden feature anchor (53) relative to the display of the endoscopic view (11) of the anatomical structure. The hidden feature anchor (53) is representative of a position (e.g., a location and/or an orientation) of a guided visualization of the hidden feature of the anatomical structure.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 26, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Paul Thienphrapa, Torre Michelle Bydlon, Prasad Vagdargi, Sean Joseph Kyne, Aleksandra Popovic
  • Patent number: 11941841
    Abstract: A computer-implemented method according to one embodiment includes running an initial network on a plurality of images to detect actors pictured therein and body joints of the detected actors. The method further includes running fully-connected networks in parallel, one fully-connected network for each of the detected actors, to reconstruct complete three-dimensional poses of the actors. Sequential model fitting is performed on the plurality of images. The sequential model fitting is based on results of running the initial network and the fully-connected networks. The method further includes determining, based on the sequential model fitting, a locational position for a camera in which the camera has a view of a possible point of collision of two or more of the actors. The camera is instructed to be positioned in the locational position.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: March 26, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yu-Siang Chen, Ching-Chun Liu, Ryan Young, Ting-Chieh Yu
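Determining "a possible point of collision of two or more of the actors" from reconstructed poses can be approximated, under a constant-velocity assumption, as the actors' point of closest approach; the camera is then positioned to view that point. An illustrative sketch, not the patented model-fitting pipeline:

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Given two actors' current positions and velocities (constant-velocity
    assumption), return (time, midpoint) of their closest approach - a
    candidate point of collision for aiming the camera."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    denom = dv @ dv
    # Minimise |dp + t*dv|^2 over t >= 0; t = 0 if already diverging.
    t = 0.0 if denom < 1e-12 else max(0.0, -(dp @ dv) / denom)
    a = np.asarray(p1, float) + t * np.asarray(v1, float)
    b = np.asarray(p2, float) + t * np.asarray(v2, float)
    return t, (a + b) / 2
```

In the pipeline above, positions and velocities would come from the sequential model fitting over the reconstructed 3D poses.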
  • Patent number: 11938963
    Abstract: A live map system may be used to propagate observations collected by autonomous vehicles operating in an environment to other autonomous vehicles and thereby supplement a digital map used in the control of the autonomous vehicles. In addition, a live map system in some instances may be used to propagate location-based teleassist triggers to autonomous vehicles operating within an environment. A location-based teleassist trigger may be generated, for example, in association with a teleassist session conducted between an autonomous vehicle and a remote teleassist system proximate a particular location, and may be used to automatically trigger a teleassist session for another autonomous vehicle proximate that location and/or to propagate a suggested action to that other autonomous vehicle.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: March 26, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Niels Joubert, Benjamin Kaplan, Stephen O'Hara
  • Patent number: 11941368
    Abstract: Certain embodiments of the disclosure relate to an apparatus and a method for translating a text included in an image by using an external electronic device in an electronic device. One method comprises displaying a picture comprising an object bearing text at a location within the picture on a display, extracting the text, generating a second text from the extracted text, and automatically overlaying the second text on the object in another picture comprising the object at another location within that picture on the display.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: March 26, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sihyoung Lee, Beomsu Kim, Sunjung Kim, Soowan Kim, Jaehyun Kim, Insun Song, Hyunseok Lee, Jihwan Choe
  • Patent number: 11938971
    Abstract: In a vehicle control device for an autonomous driving vehicle that autonomously travels based on an operation command, a gesture image of a person around the autonomous driving vehicle is acquired, and a stored reference gesture image is collated with the acquired gesture image. At this time, when it is discriminated that the gesture of the person around the autonomous driving vehicle is a gesture requesting the autonomous driving vehicle to stop, it is determined whether a disaster has occurred. When it is determined that the disaster has occurred, the autonomous driving vehicle is caused to stop around the person requesting the autonomous driving vehicle to stop.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: March 26, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Hiromitsu Kobayashi, Taizo Masuda, Yuta Kataoka, Miki Nomoto, Yoshiki Ueda, Satoshi Omi, Yuki Nishikawa
  • Patent number: 11941875
    Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for processing a perspective view range image generated from sensor measurements of an environment. The perspective view range image includes a plurality of pixels arranged in a two-dimensional grid and including, for each pixel, (i) features of one or more sensor measurements at a location in the environment corresponding to the pixel and (ii) geometry information comprising range features characterizing a range of the location in the environment corresponding to the pixel relative to the one or more sensors. The system processes the perspective view range image using a first neural network to generate an output feature representation. The first neural network comprises a first perspective point-set aggregation layer comprising a geometry-dependent kernel.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: March 26, 2024
    Assignee: Waymo LLC
    Inventors: Yuning Chai, Pei Sun, Jiquan Ngiam, Weiyue Wang, Vijay Vasudevan, Benjamin James Caine, Xiao Zhang, Dragomir Anguelov
  • Patent number: 11941648
    Abstract: The disclosure includes implementations for providing a recommendation to a driver of a second DSRC-equipped vehicle. The recommendation may describe an estimate of how long it would take the second DSRC-equipped vehicle to receive a roadside service from a drive-through business. A method according to some implementations may include receiving, by the second DSRC-equipped vehicle, a Dedicated Short Range Communication message (“DSRC message”) that includes path history data. The path history data may describe a path of a first DSRC-equipped vehicle over a plurality of different times while the first DSRC-equipped vehicle is located in a queue of the drive-through business. The method may include determining delay time data for the second DSRC-equipped vehicle based on the path history data for the first DSRC-equipped vehicle. The delay time data may describe the estimate. The method may include providing the recommendation to the driver. The recommendation may include the estimate.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: March 26, 2024
    Inventors: Gaurav Bansal, Hongsheng Lu, John Kenney, Toru Nakanishi
  • Patent number: 11941822
    Abstract: Systems and techniques are described herein for performing optical flow estimation for one or more frames. For example, a process can include determining an optical flow prediction associated with a plurality of frames. The process can include determining a position of at least one feature associated with a first frame and determining, based on the position of the at least one feature in the first frame and the optical flow prediction, a position estimate of a search area for searching for the at least one feature in a second frame. The process can include determining, from within the search area, a position of the at least one feature in the second frame.
    Type: Grant
    Filed: March 8, 2023
    Date of Patent: March 26, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Jamie Menjay Lin, Fatih Murat Porikli
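The flow-guided search described above — predict where a feature should land in the next frame from the optical flow, then search only a small area around that estimate — can be sketched with a sum-of-squared-differences template match. A toy version, assuming integer pixel motion:

```python
import numpy as np

def find_feature(prev_frame, next_frame, pos, flow, patch=3, radius=2):
    """Locate the feature at `pos` (row, col) of prev_frame inside a small
    search area of next_frame centred on the flow-predicted position
    pos + flow, using an SSD template match."""
    y, x = pos
    dy, dx = flow
    template = prev_frame[y - patch:y + patch + 1, x - patch:x + patch + 1]
    cy, cx = y + dy, x + dx                     # search-area centre estimate
    best, best_ssd = None, np.inf
    for sy in range(cy - radius, cy + radius + 1):
        for sx in range(cx - radius, cx + radius + 1):
            cand = next_frame[sy - patch:sy + patch + 1,
                              sx - patch:sx + patch + 1]
            ssd = ((cand - template) ** 2).sum()
            if ssd < best_ssd:
                best, best_ssd = (sy, sx), ssd
    return best
```

The flow prediction is what keeps `radius` small: without it, the search window would have to cover the full inter-frame displacement.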
  • Patent number: 11941893
    Abstract: A virtual traffic line generation apparatus and a method thereof are provided. The virtual traffic line generation apparatus includes a controller that determines the reliability of a traffic line detected for each frame and, when no traffic line is detected, generates a virtual traffic line based on the traffic line with the highest reliability among the traffic lines detected in previous frames, and a storage that stores the reliability of the traffic line for each frame.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: March 26, 2024
    Assignees: Hyundai Motor Company, Kia Corporation
    Inventor: Gi Won Park
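The controller/storage split described above amounts to: score each detected line, store the scores per frame, and when detection fails, emit the stored line with the highest reliability as the virtual line. A minimal sketch (how reliability is scored is out of scope here and assumed given):

```python
class VirtualLaneGenerator:
    """Sketch: keep per-frame lane detections with a reliability score and,
    when detection fails, fall back to the stored lane with the highest
    reliability to generate a virtual traffic line."""

    def __init__(self):
        self.history = []          # (reliability, lane) per previous frame

    def update(self, lane, reliability):
        """Call when a traffic line was detected in the current frame."""
        self.history.append((reliability, lane))
        return lane

    def virtual_lane(self):
        """Call when no traffic line was detected in the current frame."""
        if not self.history:
            return None
        return max(self.history, key=lambda h: h[0])[1]
```

A production version would also age out stale frames, since a high-reliability lane from many frames ago may no longer match the road geometry.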
  • Patent number: 11941906
    Abstract: Provided is a method of identifying a hand of a genuine user wearing a wearable device. According to an embodiment, the method includes using a sensor included in the wearable device to recognize a hand located in a detection area of the sensor; estimating a position of a shoulder connected to the recognized hand based on a positional relation between the orientation of the recognized hand and at least one body part connected to the recognized hand; and using information about a probability of a shoulder of the genuine user being present in the estimated position to determine whether the recognized hand is a hand of the genuine user.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: March 26, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Bonkon Koo, Jaewoo Ko
  • Patent number: 11941725
    Abstract: In one embodiment, a method includes, by an operating system of a first artificial-reality device, receiving a notification that virtual objects are shared with the first artificial-reality device by a second artificial-reality device, where the virtual objects are shared by being placed inside a sender-side shared space anchored to a physical object. The method further includes the first artificial-reality device accessing descriptors of a physical object and a spatial-relationship definition between the physical object and a receiver-side shared space, detecting physical objects based on the descriptors, determining pose of the receiver-side shared space, detecting physical constraints within the receiver-side shared space, receiving display instructions for the virtual objects, and rendering the virtual objects on the first artificial-reality device in the receiver-side shared space.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: March 26, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Michael Louie, Michal Hlavac, Jasper Stevens
  • Patent number: 11935261
    Abstract: A system for measuring dimensions of film to be applied to a window includes a plurality of removable cards for placement on the window, each of the plurality of removable cards having a plurality of visible correspondence points at pre-defined positions on the respective removable card. The system further includes a mobile device having an image capture device, and the image capture device is configured to capture an image of the window having a set of correspondence points visible in the image. Additionally, the system includes a server configured to receive the image of the window captured by the mobile device and process the image to determine dimensions of a film to be cut for the window based on the set of correspondence points and the pre-defined positions, and processing the image removes perspective distortions in the image. The system is configured to output the dimensions of the film.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: March 19, 2024
    Assignee: 3M Innovative Properties Company
    Inventors: Raghunath Padiyath, Viruru Phaniraj, Jeffrey P. Adolf, Steven P. Floeder
  • Patent number: 11935303
    Abstract: A threat detection system and method for detecting panic in a crowd of people. The threat detection system uses software algorithms to detect people's movement from video camera scenes and video feeds. The system analyzes the recent history of the scene to describe features that would be considered common/normal in the scene when enough people are present. The system then uses that baseline information to continually analyze frames with the requisite number of people in a frame and update the baseline features. If the features of the scene change dramatically based on the perceived movement of the people in the scene and meet or exceed the threshold features for movement in enough consecutive frames, then the system determines that there is panic in the scene.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: March 19, 2024
    Inventors: Junaid Iqbal, James Allan Douglas Cameron, Phil Konrad Munz
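The baseline-then-threshold logic can be sketched per frame: learn a "normal" motion level from recent history, then declare panic only after the motion feature exceeds a multiple of that baseline for several consecutive frames. All constants below are assumptions for illustration:

```python
import numpy as np

def detect_panic(motion_per_frame, baseline_frames=20, factor=3.0,
                 consecutive=5):
    """Flag panic when per-frame crowd motion exceeds a threshold derived
    from a baseline of recent history for `consecutive` frames in a row.

    Returns the index of the first frame confirming panic, or None.
    """
    m = np.asarray(motion_per_frame, float)
    baseline = m[:baseline_frames].mean()       # "common/normal" motion level
    threshold = factor * baseline
    run = 0
    for i, value in enumerate(m[baseline_frames:], start=baseline_frames):
        run = run + 1 if value >= threshold else 0
        if run >= consecutive:
            return i
    return None
```

The consecutive-frame requirement is what suppresses single-frame spikes (e.g. a tracking glitch) that would otherwise trigger false alarms.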
  • Patent number: 11935199
    Abstract: A computer-implemented method includes receiving a two-dimensional image of a scene captured by a camera, recognizing one or more objects in the scene depicted in the two-dimensional image, and determining whether the one or more recognized objects have known real-world dimensions. The computer-implemented method further includes determining a depth of at least one recognized object having known real-world dimensions from the camera, and overlaying three-dimensional (3-D) augmented reality content over a display of the 2-D image of the scene considering the depth of the at least one recognized object from the camera.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 19, 2024
    Assignee: GOOGLE LLC
    Inventors: Alexander James Faaborg, Shengzhi Wu
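The key geometric step — using a recognized object with known real-world dimensions to recover depth from a single 2-D image — is the pinhole relation Z = f·H/h. A one-line sketch (the focal length in pixels is assumed known from the camera intrinsics):

```python
def depth_from_known_size(focal_px, real_height_m, pixel_height):
    """Pinhole-camera depth estimate: an object of known real-world height
    H (metres) appearing h pixels tall in the image lies at Z = f * H / h
    metres from the camera, where f is the focal length in pixels."""
    return focal_px * real_height_m / pixel_height
```

For example, a 2 m tall object imaged at 100 px by a 1000 px focal-length camera is estimated at 20 m; the AR content can then be composited at a consistent depth.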
  • Patent number: 11934484
    Abstract: Systems and methods for facilitating computer-vision-based detection and identification of consumer products comprise a support surface for supporting thereon one or more consumer products to be detected and a plurality of digital cameras each arranged at a different height relative to the support surface and positioned such that a field of view of each of the digital cameras includes the consumer product(s) located on the support surface. Movement of the support surface and/or the digital cameras permits the digital cameras to capture images of the consumer product from different angles of view and up to a full 360 degree of rotation around the at least one consumer product. A computing device uses at least the image data from such images to train a computer vision consumer product identification model to generate a reference image data model to be used for recognition of the at least one consumer product.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: March 19, 2024
    Assignee: Walmart Apollo, LLC
    Inventors: Soren A. Larson, Soheil Salehian-Dardashti, Elizabeth J. Barton, Victoria A. Moeller-Chan
  • Patent number: 11935329
    Abstract: The system of the present disclosure comprises: an acquisition means for acquiring a video image of an online session between a first user and a second user; a face recognition means for recognizing at least a face image of the first user and the second user included in the video image for each predetermined frame; a voice recognition means for recognizing at least the voice of the subject included in the video image; an evaluation means for calculating an evaluation value from a plurality of viewpoints based on both the recognized face image and the voice; and a determination means for determining the degree of match of the second user to the first user based on the evaluation values.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: March 19, 2024
    Assignee: I'MBESIDEYOU INC.
    Inventor: Shozo Kamiya
  • Patent number: 11934571
    Abstract: A system, a head-mounted device, a computer program, a carrier, and a method for a head-mounted device comprising an eye tracking sensor, for updating an eye tracking model in relation to an eye are disclosed. First sensor data in relation to the eye are obtained by means of the eye tracking sensor. After obtaining the first sensor data, the eye tracking sensor is moved in relation to the eye. After moving the eye tracking sensor, second sensor data in relation to the eye are obtained by means of the eye tracking sensor. The eye tracking model in relation to the eye is then updated based on the first sensor data and the second sensor data.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: March 19, 2024
    Assignee: Tobii AB
    Inventors: Pravin Kumar Rana, Gerald Bianchi
  • Patent number: 11935165
    Abstract: A method for proactively creating an image product includes capturing an image of an object in a first environment by a device, storing a library of personalized products each characterized by a product type, automatically recognizing the object in the image as having a product type associated with the library of personalized products, automatically creating a design for the personalized product of the product type using personalized content, automatically displaying the design of the personalized product of the product type incorporating the selected photo in the first environment on the device, and manufacturing a physical product based on the design of the personalized product.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: March 19, 2024
    Assignee: Shutterfly, LLC
    Inventors: Abhishek Kirankumar Sabbarwal, David Le, Ira Blas, Ryan Lee
  • Patent number: 11935173
    Abstract: Provided are a method and device for providing interactive virtual reality content capable of increasing user immersion by naturally connecting an idle image to a branched image. The method includes providing an idle image including options, wherein an actor in the idle image performs a standby operation, while the actor performs the standby operation, receiving a user selection for an option, providing a connection image, and providing a corresponding branched image according to the selection of the user, wherein a portion of the actor in the connection image is processed by computer graphics, and the actor performs a connection operation so that a first posture of the actor at a time point at which the selection is received is smoothly connected to a second posture of the actor at a start time point of the branched image.
    Type: Grant
    Filed: January 4, 2023
    Date of Patent: March 19, 2024
    Assignee: VISION VR INC.
    Inventors: Dong Kyu Kim, Won-Il Kim
  • Patent number: 11935258
    Abstract: A method for range detection is described. The method includes segmenting an image captured by a monocular camera of an ego vehicle into one or more segmentation blobs. The method includes focusing on pixels forming a selected segmentation blob of the one or more segmentation blobs. The method also includes determining a distance to the selected segmentation blob according to a focus function value of the monocular camera of the ego vehicle.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: March 19, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Alexander Russell Green
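Depth-from-focus of this kind typically combines a per-blob focus measure with a per-camera calibration curve mapping focus values to distance. The sketch below uses variance-of-Laplacian as the focus function and linear interpolation over an assumed calibration table; neither detail is taken from the patent:

```python
import numpy as np

def focus_measure(patch):
    """Variance-of-Laplacian sharpness of the pixels in a segmentation blob;
    higher generally means the blob is nearer the focal plane."""
    p = np.asarray(patch, float)
    # 5-point discrete Laplacian over the interior pixels.
    lap = (-4 * p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return lap.var()

def distance_from_focus(value, calib_focus, calib_dist):
    """Map a focus-function value to a distance via a per-camera calibration
    table (focus values must be sorted ascending)."""
    return float(np.interp(value, calib_focus, calib_dist))
```

In practice the calibration table would be measured per lens and focus setting, since the focus-to-distance relationship is camera-specific.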
  • Patent number: 11934746
    Abstract: An information generation device generating a test case being a simulation model for reproducing a road traffic condition in an area on a road including a target point, the information generation device including: a first storage unit that stores moving-object information being information regarding a moving object existing in the area; a determination unit that determines whether or not an incident in which the moving object existing in the area shows a behavior that leads to occurrence of an accident has occurred, on the basis of the moving-object information; an extraction unit that extracts, as target information, moving-object information in a target period being a predetermined time period including a time point at which the incident occurred; and a generation unit that generates the test case upon occurrence of the incident on the basis of the target information.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: March 19, 2024
    Assignee: IHI CORPORATION
    Inventors: Yoshihisa Yamanouchi, Yosuke Seto, Minori Orita, Hiroki Saito, Takeharu Kato
  • Patent number: 11935224
    Abstract: The disclosure is directed to, among other things, systems and methods for troubleshooting equipment installations using machine learning. Particularly, the systems and methods described herein may be used to validate an installation of one or more devices (which may be referred to as “customer premises equipment (CPE)” herein as well) at a given location, such as a customer's home or a commercial establishment. As one non-limiting example, the one or more devices may be associated with a fiber optical network, and may include a modem and/or an optical network terminal (ONT). However, the one or more devices may include any other types of devices associated with any other types of networks as well.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: March 19, 2024
    Assignee: Cox Communications, Inc.
    Inventors: Monte Fowler, Lalit Bhatia, Jagan Arumugham
  • Patent number: 11937019
    Abstract: Each of a plurality of co-located inspection camera modules captures raw images of objects passing in front of the co-located inspection camera modules which form part of a quality assurance inspection system. The inspection camera modules differ in image sensor or lens focal properties and generate different feeds of raw images. The co-located inspection camera modules can be selectively switched amongst to activate the corresponding feed of raw images. The activated feed of raw images is provided to a consuming application or process for quality assurance analysis.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: March 19, 2024
    Assignee: Elementary Robotics, Inc.
    Inventors: Arye Barnehama, Dat Do, Daniel Pipe-Mazo
  • Patent number: 11937018
    Abstract: A surveillance system having an interface to a camera network for video surveillance of a surveillance area. The camera network includes a plurality of cameras each for capturing a surveillance subarea. The cameras are designed to provide surveillance images of the surveillance subareas. The surveillance system also includes a surveillance device for re-identifying people in the surveillance images. The surveillance device includes a person detection module for detecting people and an object detection module for detecting objects. The surveillance device includes an assignment module designed to assign at least one item of object information to a person, and an action detection module designed to detect an action of the person on one of the objects.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: March 19, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Gregor Blott, Jan Rexilius
  • Patent number: 11928862
    Abstract: An approach is disclosed for visually identifying and/or pairing ride providers and passengers. The approach involves, for example, receiving location data indicating that a driver vehicle is within a proximity threshold of a passenger pickup location. The approach also involves initiating an activation of a camera of a passenger device to present live imagery on the passenger device. The approach further involves processing sensor data collected from one or more sensors of the passenger device to determine a rotation vector indicating a pointing direction of the passenger device. The approach also involves determining a new direction to point the passenger device to capture the driver vehicle in a field of view of the camera based on the rotation vector and the location data. The approach further involves providing output data for presenting a representation of the new direction in a user interface of the passenger device.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: March 12, 2024
    Assignee: HERE Global B.V.
    Inventors: Ron Livne, Silviu Zilberman
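The "new direction to point the passenger device" step above can be sketched as a bearing computation. The flat-earth approximation, the sign convention, and the function names are assumptions for illustration; the patent's rotation-vector processing is more involved.

```python
import math

def bearing_deg(from_lat, from_lon, to_lat, to_lon):
    """Approximate compass bearing over short distances (flat-earth assumption)."""
    d_east = (to_lon - from_lon) * math.cos(math.radians(from_lat))
    d_north = to_lat - from_lat
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

def turn_instruction(device_heading_deg, passenger_pos, vehicle_pos):
    """Signed rotation (degrees) for the passenger to turn so the camera's
    field of view captures the driver vehicle; positive = clockwise."""
    target = bearing_deg(*passenger_pos, *vehicle_pos)
    return (target - device_heading_deg + 180.0) % 360.0 - 180.0
```

The returned angle could drive the user-interface arrow described in the abstract.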
  • Patent number: 11928771
    Abstract: An exemplary method of detecting a light source using an electronic device having a camera and a sensor may include: scanning a real environment using the camera to establish an environment map of the real environment; capturing, using the camera, a first image of a real light source from a first location in the real environment and a second image of the real light source from a second location in the real environment; tracking, using the sensor, a first position and a first orientation of the camera in the environment map while the first image is captured, and a second position and a second orientation of the camera in the environment map while the second image is captured; and computing a position of the real light source in the environment map based on the first position, the first orientation, the second position, and the second orientation.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: March 12, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Yi Xu, Shuxue Quan
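Computing the light-source position from two tracked camera poses, as in the abstract above, reduces to intersecting two viewing rays. A minimal sketch, assuming each capture yields a ray origin (camera position) and a direction toward the light; the midpoint-of-closest-approach method used here is a standard substitute, not necessarily the patent's computation:

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def scale(u, k): return tuple(a * k for a in u)

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + s*d1 and p2 + t*d2."""
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; light source position is ambiguous")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, s))  # closest point on ray 1
    q2 = add(p2, scale(d2, t))  # closest point on ray 2
    return scale(add(q1, q2), 0.5)
```

With noisy tracking the two rays rarely intersect exactly, which is why the midpoint (rather than an exact intersection) is the usual estimate.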
  • Patent number: 11928766
    Abstract: The present disclosure is related to a method to generate user representative avatars that fit within a design paradigm. The method includes receiving depth information corresponding to multiple user features of the user, determining one or more feature landmarks for the user based on the depth information, utilizing the one or more feature landmarks to classify a first user feature relative to an avatar feature category, selecting a first avatar feature from the avatar feature category based on the classification of the first user feature, combining the first avatar feature within an avatar representation to generate a user avatar, and outputting the user avatar for display.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: March 12, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Dumene Comploi, Francisco E. Gonzalez
  • Patent number: 11925492
    Abstract: A non-transitory computer-readable recording medium having stored an image processing program that causes a computer to execute a process, the process includes extracting a plurality of consecutive pixels corresponding to a first part or a second part of a body, from a pixel column in a predetermined direction of an image of the body, obtaining a statistical value of pixel values of the plurality of consecutive pixels, and identifying a part corresponding to the plurality of consecutive pixels, among the first part or the second part, based on the statistical value.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: March 12, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Yasutaka Moriwaki, Hiroaki Takebe, Nobuhiro Miyazaki, Takayuki Baba
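The pixel-column processing above can be sketched as follows: split a column into runs of consecutive body pixels, then classify each run by a statistical value of its pixel values. The grayscale representation, the body-pixel test, and the mean-intensity threshold are all hypothetical stand-ins for the patent's actual criteria.

```python
from itertools import groupby
from statistics import mean

def runs_of_body(column, is_body=lambda v: v > 40):
    """Split one pixel column into runs of consecutive body pixels
    (hypothetical background threshold of 40)."""
    return [list(grp) for key, grp in groupby(column, key=is_body) if key]

def classify_run(pixels, threshold=128):
    """Identify the part a run of consecutive pixels corresponds to, using a
    statistical value (here the mean) of the run's pixel values."""
    return "first_part" if mean(pixels) >= threshold else "second_part"
```

Classifying the whole run by its mean, rather than pixel by pixel, is what lets two parts with overlapping individual pixel values still be told apart.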
  • Patent number: 11928577
    Abstract: A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers is interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: March 12, 2024
    Assignee: Google LLC
    Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
  • Patent number: 11926337
    Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with surrounding environment of a first vehicle. The method can include determining, by a computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: March 12, 2024
    Assignee: UATC, LLC
    Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
  • Patent number: 11928286
    Abstract: A mobile device includes a housing, a network interface to communicate with an external network, a user interface comprising a panel with a display panel to display a menu image and a touch panel to receive a touch input, the panel including a first portion and a second portion that has a transparent condition and is within the first portion, a sensing unit including two front cameras disposed in the second portion of the panel, and a control unit to control the sensing unit in a photographing mode and a sensing mode to perform a function of the mobile device.
    Type: Grant
    Filed: April 29, 2023
    Date of Patent: March 12, 2024
    Inventor: Seungman Kim
  • Patent number: 11929844
    Abstract: Various arrangements for using captured voice to generate a custom interface controller are presented. A vocal recording from a user may be captured in which a spoken command and multiple smart-home devices are indicated. One or more common functions that map to the multiple smart-home devices may be determined. A custom interface controller may be generated that controls the one or more common functions of each smart-home device of the multiple smart-home devices.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: March 12, 2024
    Assignee: Google LLC
    Inventors: Benjamin Brown, Da Huang, Christopher Conover, Lisa Williams, Henry Chung
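The common-function mapping step above is essentially a set intersection over device capabilities. A minimal sketch, in which the capability map, device names, and function names are invented for illustration (the real system would resolve them from the spoken command and device registry):

```python
# Hypothetical capability map; the real system would query each smart-home device.
DEVICE_FUNCTIONS = {
    "living room lamp": {"on_off", "brightness"},
    "kitchen lamp": {"on_off", "brightness", "color"},
    "thermostat": {"on_off", "target_temperature"},
}

def common_functions(device_names):
    """Intersect the function sets of every device named in the spoken command."""
    sets = [DEVICE_FUNCTIONS[name] for name in device_names]
    return set.intersection(*sets) if sets else set()

def build_controller(device_names):
    """A custom interface controller exposing only the functions shared
    by all of the selected devices."""
    return {"devices": list(device_names),
            "controls": sorted(common_functions(device_names))}
```

Exposing only the intersection keeps every control on the generated interface valid for every device the user named.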
  • Patent number: 11929870
    Abstract: Monitoring systems and methods for use in security, safety, and business process applications utilizing a correlation engine are disclosed. Sensory data from one or more sensors are captured and analyzed to detect one or more events in the sensory data. The events are correlated by a correlation engine, optionally by weighing the events based on attributes of the sensors that were used to detect the primitive events. The events are then monitored for an occurrence of one or more correlations of interest, or one or more critical events of interest. Finally, one or more actions are triggered based on a detection of one or more correlations of interest, one or more anomalous events, or one or more critical events of interest. A hierarchical storage manager, having access to a hierarchy of two or more data storage devices, is provided to store data from the one or more sensors.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: March 12, 2024
    Assignee: SecureNet Solutions Group LLC
    Inventors: John J Donovan, Daniar Hussain
  • Patent number: 11925429
    Abstract: A robotic surgical system includes a linkage, an input handle, and a processing unit. The linkage moveably supports a surgical tool relative to a base. The input handle is moveable in a plurality of directions. The processing unit is in communication with the input handle and is operatively associated with the linkage to move the surgical tool based on a scaled movement of the input handle. The scaling varies depending on whether the input handle is moved towards a center of a workspace or away from the center of the workspace. The workspace represents a movement range of the input handle.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: March 12, 2024
    Assignee: COVIDIEN LP
    Inventor: William Peine
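The direction-dependent scaling above can be sketched by checking whether an input-handle displacement points toward or away from the workspace center and applying a different gain in each case. The gains, the dot-product test, and the 3-D tuple representation are assumptions for illustration:

```python
def scaled_motion(handle_pos, delta, center=(0.0, 0.0, 0.0),
                  scale_toward=1.5, scale_away=0.5):
    """Scale an input-handle displacement differently depending on whether it
    moves toward or away from the workspace center (gain values hypothetical)."""
    to_center = [c - p for c, p in zip(center, handle_pos)]
    toward = sum(d * t for d, t in zip(delta, to_center)) > 0.0
    k = scale_toward if toward else scale_away
    return tuple(d * k for d in delta)
```

A higher gain toward the center and a lower gain away from it would, for example, discourage the surgeon's hand from drifting to the edge of the handle's movement range.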
  • Patent number: 11921042
    Abstract: An achromatic 3D STED measuring optical process and optical method, based on a conical diffraction effect or an effect of propagation of light in uniaxial crystals, including a cascade of at least two uniaxial or conical diffraction crystals creating, from a laser source, all of the light propagating along substantially the same optical path, from the output of an optical bank to the objective of a microscope. A spatial position of at least one luminous nano-emitter, structured object or a continuous distribution in a sample is determined. Reconstruction of the sample and its spatial and/or temporal and/or spectral properties is treated as an inverse Bayesian problem leading to the definition of an a posteriori distribution, and a posteriori relationship combining, by virtue of the Bayes law, the probabilistic formulation of a noise model, and possible priors on a distribution of light created in the sample by projection.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: March 5, 2024
    Assignee: Bioaxial SAS
    Inventors: Gabriel Y Sirat, Lionel Moisan
  • Patent number: 11924076
    Abstract: The present disclosure relates to methods and devices for wireless communication of an apparatus, e.g., a UE. In one aspect, the apparatus may determine whether a connection of a video call is interrupted, the video call including a plurality of decoded frames. The apparatus may also determine, if the connection of the video call is interrupted, whether one or more decoded frames of the plurality of decoded frames are suitable for artificial frame generation. The apparatus may also generate one or more artificial frames based on the one or more decoded frames and an audio feed from a transmitting device. Additionally, the apparatus may determine whether the one or more artificial frames are suitable for a facial model call. The apparatus may also establish a facial model call based on a combination of the one or more artificial frames and the audio feed from the transmitting device.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: March 5, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Ansh Abhay Balde, Venkata Phani Krishna Akella, Rajesh Polisetti, Hemanth Yerva, Sandeep Padubidri Ramamurthy
  • Patent number: 11922576
    Abstract: A system that measures the motion of a camera traveling over a living subject by reference to images taken from the camera, while measuring the subject's shape, pose and motion. The system can segment each image into pixels containing the subject and pixels not containing the subject, and ascribe each pixel containing the subject to a precise location on the subject's body. In one embodiment, a computer connected to a camera and an Inertial Measurement Unit (IMU) provides estimates of the camera's location, attitude and velocity by integrating the motion of the camera with respect to features in the environment and on the surface of the subject, corrected for the motion of the subject. The system corrects accumulated errors in the integration of camera motion by recognizing the subject's body shape in the collected data.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: March 5, 2024
    Assignee: Triangulate Labs, Inc.
    Inventor: William D. Hall
  • Patent number: 11919462
    Abstract: A vehicle occupant monitoring apparatus includes two or more imaging modules and a controller. The imaging modules each include: a light-emitting device configured to emit and apply light to an occupant in a vehicle; and an imaging device configured to perform imaging of the occupant to obtain an imaging image. The controller is configured to cause the light-emitting device of each of the imaging modules to apply the light when the imaging device of corresponding one of the imaging modules performs the imaging. The controller is configured to cause, in a case where contact of the vehicle with another object is predicted or detected, the imaging device to perform the imaging to obtain the imaging image while causing the light-emitting devices of the imaging modules to apply the light together, and monitor the position of the occupant on the basis of the obtained imaging image.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: March 5, 2024
    Assignee: SUBARU CORPORATION
    Inventor: Ryota Nakamura
  • Patent number: 11922710
    Abstract: A character recognition method includes the following operations: determining, by a processor, that a character image to be identified corresponds to a matching character among several registered characters according to several vector distances between a vector of the character image to be identified and several vectors of the registered character images, and storing a matching vector distance between the vector of the character image to be identified and a vector of the matching character; and storing, by the processor, data of the matching character according to the character image to be identified when the matching vector distance is less than a vector distance threshold.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: March 5, 2024
    Assignee: Realtek Semiconductor Corporation
    Inventors: Chien-Hao Chen, Chao-Hsun Yang, Shih-Tse Chen
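The matching step above amounts to a nearest-neighbor search over registered character vectors, accepting the match only when the distance falls below a threshold. A minimal sketch; the Euclidean metric, the threshold value, and the feature vectors are assumptions, not the patent's actual choices:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match_character(query_vec, registered, threshold=0.5):
    """Find the registered character whose vector is closest to the query vector;
    the match is stored only if its vector distance is below the threshold."""
    char, dist = min(((c, euclidean(query_vec, v)) for c, v in registered.items()),
                     key=lambda cv: cv[1])
    return (char, dist) if dist < threshold else (None, dist)
```

The threshold guards against storing data for a "closest" character that is still too far away to be a credible match.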