Patents by Inventor Andrei Sergeevich Terechko
Andrei Sergeevich Terechko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240149817
Abstract: A method for animal detection and repelling includes detecting a plurality of animal characteristics of an animal, the animal characteristics comprising at least an animal size and an animal position. A species of the animal is determined from a bioinformatics database, wherein the bioinformatics database receives the animal size and the animal position. An animal deterrent specific to the species is determined from the bioinformatics database. A repelling signal is generated based on the animal deterrent.
Type: Application
Filed: November 6, 2022
Publication date: May 9, 2024
Inventors: Yuting Fu, Andrei Sergeevich Terechko, Jochen Seemann
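The claimed flow (characteristics in, species and deterrent lookup, repelling signal out) can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the tiny in-memory "bioinformatics database", the size thresholds, and all names are assumptions.

```python
# Hypothetical sketch: detect characteristics, look up species and a
# species-specific deterrent in a mocked database, emit a repelling signal.
SPECIES_DB = {                      # (size class, position zone) -> species
    ("small", "garden"): "hedgehog",
    ("medium", "garden"): "cat",
    ("large", "field"): "deer",
}
DETERRENT_DB = {                    # species -> deterrent frequency in Hz
    "hedgehog": 22000,
    "cat": 25000,
    "deer": 30000,
}

def classify_size(size_cm: float) -> str:
    """Bucket a detected animal size (thresholds are illustrative)."""
    if size_cm < 30:
        return "small"
    if size_cm < 80:
        return "medium"
    return "large"

def repelling_signal(size_cm: float, position_zone: str) -> dict:
    """Determine the species, then generate a species-specific signal."""
    species = SPECIES_DB.get((classify_size(size_cm), position_zone), "unknown")
    freq = DETERRENT_DB.get(species, 20000)  # generic fallback deterrent
    return {"species": species, "frequency_hz": freq}
```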
-
Patent number: 11909851
Abstract: A packet is transmitted from a remote device over a communication network. A fragment detector detects one or more fragments in a field of the packet, where the field is associated with a session layer or higher abstraction layer of the open systems interconnect (OSI) model. Fragment information extracted from the packet indicates one or more of a last fragment index associated with a last fragment of one or more fragments in the packet and a fragment count indicative of the number of fragments associated with a message which is fragmented. Interrupts associated with the packet are coalesced with other interrupts associated with other packets based on one or more of the last fragment index and the fragment count.
Type: Grant
Filed: October 4, 2021
Date of Patent: February 20, 2024
Assignee: NXP B.V.
Inventors: Jochen Seemann, Andrei Sergeevich Terechko
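The coalescing idea can be sketched behaviorally: hold back per-packet interrupts for a fragmented message and raise a single interrupt when the last fragment arrives. This is an assumed toy model, not the patented hardware; the class and field names are illustrative.

```python
# Illustrative sketch: defer interrupts until the fragment whose index equals
# the last-fragment index (derived from the fragment count) has arrived.
class InterruptCoalescer:
    def __init__(self):
        self.pending = 0  # interrupts currently held back (coalesced)

    def on_packet(self, fragment_index: int, fragment_count: int) -> bool:
        """Return True if an interrupt should be raised for this packet."""
        last_fragment_index = fragment_count - 1
        if fragment_index < last_fragment_index:
            self.pending += 1  # coalesce: defer until the message completes
            return False
        self.pending = 0       # last fragment: one interrupt covers them all
        return True
```

With three fragments, only the third call raises an interrupt, so the host sees one interrupt per message instead of one per packet.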
-
Publication number: 20230103738
Abstract: A packet is transmitted from a remote device over a communication network. A fragment detector detects one or more fragments in a field of the packet, where the field is associated with a session layer or higher abstraction layer of the open systems interconnect (OSI) model. Fragment information extracted from the packet indicates one or more of a last fragment index associated with a last fragment of one or more fragments in the packet and a fragment count indicative of the number of fragments associated with a message which is fragmented. Interrupts associated with the packet are coalesced with other interrupts associated with other packets based on one or more of the last fragment index and the fragment count.
Type: Application
Filed: October 4, 2021
Publication date: April 6, 2023
Inventors: Jochen Seemann, Andrei Sergeevich Terechko
-
Patent number: 11142212
Abstract: A method, system and device are disclosed for determining safety conflicts in redundant subsystems of autonomous vehicles. Each redundant subsystem calculates a world model or path plan, including locations, dimensions, and orientations of moving and stationary objects, as well as projected travel paths for moving objects in the future. The travel paths and projected future world models are subsequently compared using a geometric overlay operation. If the projected world models match within predefined margins at future time moments, the comparison results in a match. In case of a mismatch between projected world models at a given future moment, a determination is made, based on a geometric overlay operation, as to whether the autonomous vehicle and all road users are safe at that moment from collision or from driving off the drivable space or road.
Type: Grant
Filed: June 6, 2019
Date of Patent: October 12, 2021
Assignee: NXP B.V.
Inventors: Andrei Sergeevich Terechko, Ali Osman Örs
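The comparison step can be illustrated with a simple geometric overlay. The sketch below uses axis-aligned 2D boxes and an assumed matching margin; the actual world models in the patent are richer, so treat this purely as an illustration of "match / mismatch-but-safe / unsafe".

```python
# Minimal sketch: two redundant subsystems each project object positions;
# the projections are overlaid and declared a match within a margin.
def boxes_match(box_a, box_b, margin=0.5):
    """Boxes are (x_min, y_min, x_max, y_max); match if every edge agrees
    within the margin."""
    return all(abs(a - b) <= margin for a, b in zip(box_a, box_b))

def boxes_overlap(box_a, box_b):
    """Geometric overlay: do two boxes intersect (potential collision)?"""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def safety_conflict(proj_a, proj_b, ego_box, margin=0.5):
    """On a mismatch between the redundant projections, check whether the
    ego vehicle is still collision-free under both of them."""
    if boxes_match(proj_a, proj_b, margin):
        return "match"
    if boxes_overlap(ego_box, proj_a) or boxes_overlap(ego_box, proj_b):
        return "unsafe"
    return "mismatch-but-safe"
```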
-
Publication number: 20200385008
Abstract: A method, system and device are disclosed for determining safety conflicts in redundant subsystems of autonomous vehicles. Each redundant subsystem calculates a world model or path plan, including locations, dimensions, and orientations of moving and stationary objects, as well as projected travel paths for moving objects in the future. The travel paths and projected future world models are subsequently compared using a geometric overlay operation. If the projected world models match within predefined margins at future time moments, the comparison results in a match. In case of a mismatch between projected world models at a given future moment, a determination is made, based on a geometric overlay operation, as to whether the autonomous vehicle and all road users are safe at that moment from collision or from driving off the drivable space or road.
Type: Application
Filed: June 6, 2019
Publication date: December 10, 2020
Applicant: NXP B.V.
Inventors: Andrei Sergeevich Terechko, Ali Osman Örs
-
Publication number: 20200059425
Abstract: Certain aspects of the disclosure are directed to methods and apparatuses for health monitoring of wireless connections among vehicles. An example method can include receiving, as input to processing circuitry configured and arranged to monitor a health status of wireless communications links between a plurality of vehicles in a vehicle platoon, object information including coordinates of stationary and moving objects, and determining, using the received object information, a relative location of a vehicle among the plurality of vehicles in the vehicle platoon. The method further includes determining, based on the received object information and the relative location of the vehicle, physical parameters for line-of-sight wireless communications between the vehicle and other vehicles in the vehicle platoon. The health status of the wireless communications links between the plurality of vehicles in the vehicle platoon can then be determined using these physical parameters.
Type: Application
Filed: August 14, 2018
Publication date: February 20, 2020
Inventors: Andrei Sergeevich Terechko, Johannes Martinus Bernardus Petrus Van Doorn, Gerardo Henricus Otto Daalderop, Han Raaijmakers
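One concrete physical parameter for line-of-sight links is whether any known object obstructs the straight line between transmitter and receiver. The sketch below assumes 2D point coordinates and circular obstacles, which is an illustrative simplification of the object information described above.

```python
# Illustrative sketch: given vehicle and object coordinates, decide whether
# the line of sight between two platoon vehicles is blocked by an obstacle.
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b (all 2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def line_of_sight_clear(tx, rx, obstacles, radius=1.0):
    """True if no obstacle center lies within `radius` of the tx-rx segment."""
    return all(point_segment_distance(o, tx, rx) > radius for o in obstacles)
```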
-
Patent number: 10554521
Abstract: Certain aspects of the disclosure are directed to methods and apparatuses for health monitoring of wireless connections among vehicles. An example method can include receiving, as input to processing circuitry configured and arranged to monitor a health status of wireless communications links between a plurality of vehicles in a vehicle platoon, object information including coordinates of stationary and moving objects, and determining, using the received object information, a relative location of a vehicle among the plurality of vehicles in the vehicle platoon. The method further includes determining, based on the received object information and the relative location of the vehicle, physical parameters for line-of-sight wireless communications between the vehicle and other vehicles in the vehicle platoon. The health status of the wireless communications links between the plurality of vehicles in the vehicle platoon can then be determined using these physical parameters.
Type: Grant
Filed: August 14, 2018
Date of Patent: February 4, 2020
Assignee: NXP B.V.
Inventors: Andrei Sergeevich Terechko, Johannes Martinus Bernardus Petrus Van Doorn, Gerardo Henricus Otto Daalderop, Han Raaijmakers
-
Patent number: 8732408
Abstract: A circuit contains a shared memory (12) that is used by a plurality of processing elements (10), which contain cache circuits (102) for caching data from the shared memory (12). The processing elements perform a plurality of cooperating tasks, each task involving caching data from the shared memory (12) and sending cache message traffic. Consistency between cached data for different tasks is maintained by transmission of cache coherence requests via a communication network. Information from cache coherence requests generated for all of said tasks is buffered. One of the processing elements provides an indication signal indicating a current task stage of at least one of the processing elements. Additional cache message traffic is generated, adapted dependent on the indication signal and the buffered information from the cache coherence requests. Thus, conditions of cache traffic stress may be created to verify operability of the circuit, or cache message traffic may be delayed to avoid stress.
Type: Grant
Filed: October 16, 2008
Date of Patent: May 20, 2014
Assignee: Nytell Software LLC
Inventors: Sainath Karlapalem, Andrei Sergeevich Terechko
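The stress-generation idea can be shown behaviorally: buffer coherence requests from all tasks and, when the indication signal marks a suitable task stage, inject extra traffic derived from the buffer. Everything below (class, method names, the trigger condition) is an assumed toy model, not the circuit itself.

```python
# Behavioral sketch: replay buffered coherence requests as additional cache
# message traffic when a processing element indicates a stress-worthy stage.
class CoherenceStressHarness:
    def __init__(self):
        self.buffer = []           # buffered coherence requests from all tasks
        self.network_traffic = []  # messages actually injected

    def record_request(self, request):
        """Buffer information from an observed cache coherence request."""
        self.buffer.append(request)

    def on_indication(self, stage):
        """Generate extra traffic only at the indicated task stage."""
        if stage == "memory-intensive":      # assumed trigger condition
            for request in self.buffer:
                self.network_traffic.append(request)  # stress the coherence network
```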
-
Patent number: 8607246
Abstract: Tasks are executed in a multiprocessing system with a master processor core (10) and a plurality of slave processor cores (12). The master processor core (10) executes a program that defines a matrix of tasks at respective positions in the matrix and a task dependency pattern applicable to a plurality of the tasks and defined relative to the positions. Each dependency pattern defines relative dependencies for a plurality of positions in the matrix, rather than using individual dependencies for individual positions. In response to the program, the master processor core (10) dynamically stores definitions of current task dependency patterns in a dependency pattern memory. A hardware task scheduler computes the positions of the tasks that are ready for execution at run time from information about positions for which tasks have been completed and the task dependency pattern applied relative to those tasks.
Type: Grant
Filed: July 2, 2009
Date of Patent: December 10, 2013
Assignee: NXP, B.V.
Inventors: Ghiath Al-Kadi, Andrei Sergeevich Terechko
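The readiness computation can be sketched directly: one relative dependency pattern (a list of position offsets) applies to every task in the matrix, and a position is ready once every pattern-relative predecessor has completed. The wavefront pattern below (depend on the left and upper neighbours) is an illustrative choice, not the patent's specific pattern.

```python
# Sketch: compute ready task positions from the completed set plus one
# shared relative dependency pattern, instead of per-task dependency lists.
DEPENDENCY_PATTERN = [(-1, 0), (0, -1)]  # depend on upper and left neighbours

def ready_tasks(rows, cols, completed):
    """Positions whose pattern-relative predecessors are all completed."""
    ready = set()
    for r in range(rows):
        for c in range(cols):
            if (r, c) in completed:
                continue
            deps = [(r + dr, c + dc) for dr, dc in DEPENDENCY_PATTERN
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            if all(d in completed for d in deps):
                ready.add((r, c))
    return ready
```

Starting from an empty completed set, only the corner task is ready; completing it frees its two neighbours, producing the familiar diagonal wavefront.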
-
Patent number: 8578104
Abstract: A multiprocessor system has a background memory and a plurality of processing elements, each comprising a processor core and a cache circuit. The processor cores execute programs of instructions and the cache circuits cache background memory data accessed by the programs. A write back monitor circuit is used to buffer write addresses used for writing data by at least part of the processor cores. The programs contain commands to read the buffered write back addresses from the write back monitor circuit and commands to invalidate cached data for the write back addresses so read. Thus, cache management is performed partly by hardware and partly by the program that uses the cache. The processing core may be a VLIW core, in which case instruction slots that are not used by the program can be used to include instructions for cache management.
Type: Grant
Filed: June 9, 2009
Date of Patent: November 5, 2013
Assignee: NXP, B.V.
Inventors: Jan Hoogerbrugge, Andrei Sergeevich Terechko
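The hardware/software split can be modeled in a few lines: a monitor buffers write addresses, and the program drains that buffer and invalidates the matching cache entries itself. This is a behavioral sketch with assumed names, not the circuit design.

```python
# Sketch of software-assisted coherence: hardware buffers write-back
# addresses; the program reads them and invalidates its own cached copies.
class WriteBackMonitor:
    def __init__(self):
        self.addresses = []   # buffered write addresses from other cores

    def observe_write(self, address):
        self.addresses.append(address)

    def drain(self):
        """Program-visible read of the buffered write-back addresses."""
        drained, self.addresses = self.addresses, []
        return drained

class SoftwareManagedCache:
    def __init__(self):
        self.lines = {}       # address -> cached value

    def fill(self, address, value):
        self.lines[address] = value

    def invalidate_from(self, monitor):
        """The software part: invalidate entries the monitor saw written."""
        for address in monitor.drain():
            self.lines.pop(address, None)
```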
-
Patent number: 8265160
Abstract: Various exemplary embodiments relate to a method and related motion estimation unit for performing motion estimation on video data comprising a plurality of frames. The method may begin by reading a current frame of the plurality of frames from a memory of a motion estimation unit. The method may then select a motion vector for each respective block of pixels in a current row of the current frame. The step of selecting the motion vector may include, for each respective block: selecting, by the motion estimation unit, a candidate vector for at least one block directly surrounding the respective block based on a determination of whether the directly surrounding block has been processed for the current frame; calculating, for each candidate vector, a difference value; and selecting, as the motion vector, the candidate vector with the lowest difference value.
Type: Grant
Filed: October 5, 2009
Date of Patent: September 11, 2012
Assignee: NXP B.V.
Inventors: Ghiath Al-Kadi, Andrei Sergeevich Terechko, Jan Hoogerbrugge, Abraham Karel Riemens, Klaas Brink
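The selection step (score each candidate vector with a difference value, keep the lowest) is easy to illustrate. The sketch below uses flat pixel lists and a sum-of-absolute-differences metric; the data layout and metric are illustrative assumptions.

```python
# Sketch of candidate-vector selection: among candidate motion vectors from
# already-processed neighbouring blocks, keep the one with the lowest
# difference value against the current block.
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def select_motion_vector(current_block, candidates, reference_blocks):
    """candidates: vectors taken from processed surrounding blocks;
    reference_blocks: vector -> the pixel block it points at."""
    best_vector, best_cost = None, float("inf")
    for vector in candidates:
        cost = sad(current_block, reference_blocks[vector])
        if cost < best_cost:
            best_vector, best_cost = vector, cost
    return best_vector, best_cost
```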
-
Publication number: 20110107345
Abstract: Tasks are executed in a multiprocessing system with a master processor core (10) and a plurality of slave processor cores (12). The master processor core (10) executes a program that defines a matrix of tasks at respective positions in the matrix and a task dependency pattern applicable to a plurality of the tasks and defined relative to the positions. Each dependency pattern defines relative dependencies for a plurality of positions in the matrix, rather than using individual dependencies for individual positions. In response to the program, the master processor core (10) dynamically stores definitions of current task dependency patterns in a dependency pattern memory. A hardware task scheduler computes the positions of the tasks that are ready for execution at run time from information about positions for which tasks have been completed and the task dependency pattern applied relative to those tasks.
Type: Application
Filed: July 2, 2009
Publication date: May 5, 2011
Applicant: NXP B.V.
Inventors: Ghiath Al-Kadi, Andrei Sergeevich Terechko
-
Publication number: 20110099337
Abstract: A circuit comprises a processor core (100), a background memory (12) and a cache circuit (102) between the processor core (100) and the background memory (12). In operation, a sub-range of a plurality of successive addresses is detected within a range of successive addresses associated with a cache line, the sub-range containing addresses for which updated data is available in the cache circuit. Updated data for the sub-range is selectively transmitted to the background memory (12). A single memory transaction for a series of successive addresses may be used, the detected sub-range being used to set the start address and a length or end address of the memory transaction. This may be applied, for example, when only updated data is available in the cache line and no valid data exists for other addresses, or to reduce bandwidth use when only a small run of addresses has been updated in the cache line.
Type: Application
Filed: June 10, 2009
Publication date: April 28, 2011
Applicant: NXP B.V.
Inventors: Jan Hoogerbrugge, Andrei Sergeevich Terechko
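The sub-range detection reduces to finding the first and last dirty locations in the line and deriving one transaction's start address and length from them. The per-byte dirty bitmap below is an assumed representation for illustration.

```python
# Sketch: derive the single (start address, length) write-back transaction
# covering all updated bytes of a cache line from a per-byte dirty mask.
def dirty_subrange(line_base, dirty_mask):
    """Return (start_address, length) spanning all dirty bytes of the line,
    or None if the line is clean."""
    dirty_offsets = [i for i, d in enumerate(dirty_mask) if d]
    if not dirty_offsets:
        return None                      # nothing to write back
    first, last = dirty_offsets[0], dirty_offsets[-1]
    return line_base + first, last - first + 1
```

For a 64-byte line with only two dirty bytes, this turns a full-line write-back into a 2-byte transaction, which is exactly the bandwidth saving the abstract describes.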
-
Publication number: 20110093661
Abstract: A multiprocessor system has a background memory and a plurality of processing elements (10), each comprising a processor core (100) and a cache circuit (102). The processor cores (100) execute programs of instructions and the cache circuits (102) cache background memory data accessed by the programs. A write back monitor circuit (14) is used to buffer write addresses used for writing data by at least part of the processor cores (100). The programs contain commands to read the buffered write back addresses from the write back monitor circuit (14) and commands to invalidate cached data for the write back addresses so read. Thus, cache management is performed partly by hardware and partly by the program that uses the cache. The processing core may be a VLIW core, in which case instruction slots that are not used by the program can be used to include instructions for cache management.
Type: Application
Filed: June 9, 2009
Publication date: April 21, 2011
Applicant: NXP B.V.
Inventors: Jan Hoogerbrugge, Andrei Sergeevich Terechko
-
Publication number: 20110082981
Abstract: Data is processed using a first and a second processing circuit (12) coupled to a background memory (10) via a first and a second cache circuit (14, 14′), respectively. Each cache circuit (14, 14′) stores cache lines, state information defining states of the stored cache lines, and flag information for respective addressable locations within at least one stored cache line. The cache control circuit of the first cache circuit (14) is configured to selectively set the flag information for part of the addressable locations within the at least one stored cache line to a valid state when the first processing circuit (12) writes data to said part of the locations, without prior loading of the at least one stored cache line from the background memory (10). Data is copied from the at least one cache line of the first cache circuit (14) into the second cache circuit (14′) in combination with the flag information for the locations within the at least one cache line.
Type: Application
Filed: April 22, 2009
Publication date: April 7, 2011
Applicant: NXP B.V.
Inventors: Jan Hoogerbrugge, Andrei Sergeevich Terechko
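The per-location flag idea can be modeled simply: a line carries one valid flag per addressable location, a write flags only its own slot (no prior fill from background memory), and a line copy carries the flags along. Names and the list-based representation are assumptions for illustration.

```python
# Sketch: a cache line with per-location valid flags that travel with the
# data when the line is copied between caches.
class FlaggedCacheLine:
    def __init__(self, size):
        self.data = [None] * size
        self.valid = [False] * size   # per-location flag information

    def write(self, offset, value):
        """Write without a prior background-memory fill: flag only this slot."""
        self.data[offset] = value
        self.valid[offset] = True

    def copy_to(self, other):
        """Copy data together with the flags, so the receiving cache knows
        which locations actually hold valid data."""
        other.data = list(self.data)
        other.valid = list(self.valid)
```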
-
Publication number: 20110004881
Abstract: A method comprising receiving tasks for execution on at least one processor, and processing at least one task within one processor. To decrease the turn-around time of task processing, a method comprises, in parallel to processing the at least one task, verifying the readiness of at least one next task assuming the currently processed task is finished, preparing a ready-structure for the at least one task verified as ready, and starting the at least one task verified as ready using the ready-structure after the currently processed task is finished.
Type: Application
Filed: March 12, 2009
Publication date: January 6, 2011
Applicant: NXP B.V.
Inventors: Andrei Sergeevich Terechko, Ghiath Al-Kadi, Marc Andre Georges Duranton, Magnus Själander
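The optimization amounts to checking successor readiness under the assumption that the current task completes, and pre-building its ready-structure. The sketch below is illustrative; the contents of the ready-structure and the task representation are assumptions.

```python
# Sketch: while `current` runs, pre-verify which tasks would become ready
# once it finishes, and prepare their ready-structures in advance.
def verify_ready(task, completed, assume_done):
    """Is `task` ready, assuming task `assume_done` finishes?"""
    return all(dep in completed or dep == assume_done
               for dep in task["deps"])

def prepare_next(tasks, completed, current):
    """Pre-build ready-structures for tasks ready once `current` completes."""
    return [{"name": t["name"], "args": t.get("args", ())}
            for t in tasks
            if t["name"] not in completed
            and t["name"] != current
            and verify_ready(t, completed, current)]
```

When the current task actually finishes, the prepared structures can be dispatched immediately instead of being built on the critical path, which is the turn-around-time saving the abstract targets.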
-
Publication number: 20100328538
Abstract: Various exemplary embodiments relate to a method and related motion estimation unit for performing motion estimation on video data comprising a plurality of frames. The method may begin by reading a current frame of the plurality of frames from a memory of a motion estimation unit. The method may then select a motion vector for each respective block of pixels in a current row of the current frame. The step of selecting the motion vector may include, for each respective block: selecting, by the motion estimation unit, a candidate vector for at least one block directly surrounding the respective block based on a determination of whether the directly surrounding block has been processed for the current frame; calculating, for each candidate vector, a difference value; and selecting, as the motion vector, the candidate vector with the lowest difference value.
Type: Application
Filed: October 5, 2009
Publication date: December 30, 2010
Applicant: NXP B.V.
Inventors: Ghiath Al-Kadi, Andrei Sergeevich Terechko, Jan Hoogerbrugge, Abraham Karel Riemens, Klaas Brink
-
Publication number: 20100299485
Abstract: A circuit contains a shared memory (12) that is used by a plurality of processing elements (10), which contain cache circuits (102) for caching data from the shared memory (12). The processing elements perform a plurality of cooperating tasks, each task involving caching data from the shared memory (12) and sending cache message traffic. Consistency between cached data for different tasks is maintained by transmission of cache coherence requests via a communication network. Information from cache coherence requests generated for all of said tasks is buffered. One of the processing elements provides an indication signal indicating a current task stage of at least one of the processing elements. Additional cache message traffic is generated, adapted dependent on the indication signal and the buffered information from the cache coherence requests. Thus, conditions of cache traffic stress may be created to verify operability of the circuit, or cache message traffic may be delayed to avoid stress.
Type: Application
Filed: October 16, 2008
Publication date: November 25, 2010
Applicant: NXP B.V.
Inventors: Sainath Karlapalem, Andrei Sergeevich Terechko
-
Publication number: 20090063780
Abstract: The present invention relates to a data processing system with a plurality of processing units (PU), a shared memory (M) for storing data from said processing units (PU), and an interconnect means (IM) for coupling the memory (M) and the plurality of processing units (PU). At least one of the processing units (PU) comprises a cache memory (C). Furthermore, a transition buffer (STB) is provided for buffering at least some of the state transitions of the cache memories (C) of said at least one of said plurality of processing units (PU). A monitoring means (MM) is provided for monitoring the cache coherence of the caches (C) of said plurality of processing units (PU) based on the data of the transition buffer (STB), in order to determine any cache coherence violations.
Type: Application
Filed: October 17, 2005
Publication date: March 5, 2009
Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
Inventors: Andrei Sergeevich Terechko, Jayram Moorkanikara Nageswaran
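The monitoring idea can be illustrated by scanning a log of cache state transitions for combinations that violate coherence. The sketch below assumes an MSI-style state model and checks a single rule (no two caches holding the same line in Modified state); both are illustrative assumptions, not the patent's protocol.

```python
# Sketch: scan buffered cache state transitions for coherence violations,
# here the rule "no address may be Modified in two caches at once".
def find_violations(transitions):
    """transitions: list of (cache_id, address, new_state) in time order.
    Return (address, [cache_ids]) pairs where the rule is violated."""
    state = {}       # (cache_id, address) -> current state
    violations = []
    for cache_id, address, new_state in transitions:
        state[(cache_id, address)] = new_state
        modified_holders = [c for (c, a), s in state.items()
                            if a == address and s == "M"]
        if len(modified_holders) > 1:
            violations.append((address, sorted(modified_holders)))
    return violations
```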