METHOD AND APPARATUS FOR CONTROLLING DETERGENT QUANTITY
Provided are a washing machine based on artificial intelligence and a method of controlling a detergent quantity thereof. The method of controlling a detergent quantity of a washing machine based on artificial intelligence (AI) includes obtaining information about a kind of detergent from an external information collector; applying the information about the kind of detergent to a pre-learned artificial neural network (ANN) model; determining the kind of the detergent based on the application result; receiving appropriate detergent quantity information from a server according to the determined kind of detergent; and determining whether a supplied detergent quantity satisfies the appropriate detergent quantity. Thereby, a user can be guided to use an appropriate quantity of detergent according to the laundry amount. An AI device may be connected to a drone (unmanned aerial vehicle (UAV)), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device related to a 5G service, and the like.
The present disclosure relates to a method and device for controlling a detergent quantity based on artificial intelligence (AI), and more particularly, to a method and device for controlling a detergent quantity based on artificial intelligence that can classify a kind of detergent and control an appropriate quantity of detergent according to the kind of detergent to be used in an operation of a washing machine.
Related Art
In general, a washing machine means various devices for processing laundry by applying a physical action and/or a chemical action to laundry such as clothes and bedding. The washing machine includes an outer tub for receiving washing water and an inner tub for receiving laundry, the inner tub being rotatably installed inside the outer tub. A general washing method of a washing machine includes a washing process of washing laundry by rotating the inner tub, and a dehydrating process of dehydrating the laundry using a centrifugal force of the inner tub.
When washing is performed through a washing machine, it is necessary to use an appropriate quantity of laundry detergent according to the laundry amount. When an appropriate quantity of laundry detergent is not used, problems may occur such as decreased washing performance, skin diseases due to a large quantity of residual detergent left on clothing after washing, excessive consumption of water and power, and environmental pollution.
Therefore, although a standard quantity of detergent is specified according to the laundry amount, it is difficult to know how many kilograms of laundry have been put into the washing machine, and it is thus difficult for a user to accurately measure and use an appropriate detergent quantity.
SUMMARY OF THE DISCLOSURE
An object of the present disclosure is to solve the above-described needs and/or problems.
The present disclosure further provides a method and device for controlling a detergent quantity based on artificial intelligence that can guide use of an appropriate quantity of detergent.
The present disclosure further provides a method and device for controlling a detergent quantity based on artificial intelligence that can reduce power consumption upon driving.
In an aspect, a method of controlling a detergent quantity of an artificial intelligence (AI) based washing machine includes obtaining information about a kind of detergent from an external information collector; applying the information about the kind of detergent to a pre-learned artificial neural network (ANN) model; determining the kind of the detergent based on the application result; receiving predetermined detergent quantity information from a server according to a laundry amount based on the determined detergent kind; and determining whether a detergent quantity supplied through a detergent inlet is less than the predetermined detergent quantity based on the detergent quantity information.
The predetermined detergent quantity may be a value determined in advance by a manufacturer of the detergent according to the laundry amount.
The external information collector may include at least one of a camera or a microphone.
The method may further include obtaining information about a kind of the detergent from a message received from a user terminal.
When the detergent quantity is less than the predetermined detergent quantity, the method may further include displaying information of an insufficient detergent quantity through a display of the washing machine or outputting information of an insufficient detergent quantity through a speaker of the washing machine.
When the detergent quantity exceeds the predetermined detergent quantity, the method may further include supplying the detergent corresponding to the predetermined detergent quantity together with washing water to an inner tub of the washing machine; and storing the remaining detergent, except for the supplied detergent, in a detergent storage of the washing machine.
The method may further include determining whether an additionally supplied detergent quantity satisfies the insufficient detergent quantity.
The determining of whether the additionally supplied detergent quantity satisfies the insufficient detergent quantity may include measuring the additionally supplied detergent quantity; and outputting, if the additionally supplied detergent quantity satisfies the insufficient detergent quantity, a notification sound through the speaker.
If the additionally supplied detergent quantity does not satisfy the insufficient detergent quantity and the detergent is not supplied within a predetermined time, the method may further include stopping a process related to supply of the detergent and performing a washing operation.
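As a rough illustration of the control flow described in the aspects above, the following Python sketch walks through classification, comparison against the predetermined quantity, guidance for additional supply, and the timeout behavior. All object, method, and parameter names (machine, server, classify, measure_supplied_quantity, timeout_s, and so on) are hypothetical placeholders, not part of the disclosed apparatus.

```python
import time

def detergent_control_cycle(sensor_data, laundry_amount_kg, server, machine,
                            timeout_s=60):
    """Illustrative sketch of the described flow, not the claimed implementation."""
    # 1. Determine the kind of detergent with a pre-learned ANN model.
    detergent_kind = machine.ann_model.classify(sensor_data)

    # 2. Receive the predetermined (manufacturer-recommended) quantity
    #    for this detergent kind and laundry amount from the server.
    target_ml = server.recommended_quantity(detergent_kind, laundry_amount_kg)

    # 3. Compare the supplied quantity against the predetermined quantity.
    supplied_ml = machine.measure_supplied_quantity()
    if supplied_ml >= target_ml:
        # Supply only the required amount; keep the surplus in the detergent storage.
        machine.dispense(target_ml)
        machine.store_surplus(supplied_ml - target_ml)
        return

    # 4. Guide the user to add the missing amount, then wait for additional supply.
    machine.notify_insufficient(target_ml - supplied_ml)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        supplied_ml = machine.measure_supplied_quantity()
        if supplied_ml >= target_ml:
            machine.play_notification_sound()
            break
        time.sleep(1.0)

    # 5. If the shortfall is not resolved within the predetermined time,
    #    stop the supply-related process and start the washing operation anyway.
    machine.dispense(min(supplied_ml, target_ml))
    machine.start_washing()
```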
The artificial neural network model may be stored in an artificial intelligence (AI) device, and the applying of the information may include transmitting a feature value related to information about a kind of detergent to the AI device; and obtaining a result in which information about a kind of detergent is applied to the artificial neural network model from the AI device.
The artificial neural network model may be stored in a network, and the applying of the information may include transmitting information about a kind of the detergent to the network; and obtaining a result in which information about a kind of the detergent is applied to the artificial neural network model from the network.
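The two hosting options just described (ANN model on a local AI device versus on the network) amount to a simple dispatch of the same inference request. The sketch below is only an assumption-laden illustration; the infer methods and mode names are hypothetical.

```python
def determine_detergent_kind(features, mode, ai_device=None, network=None):
    """Apply detergent-kind feature values to an ANN model hosted either on a
    local AI device or on the network (illustrative sketch only)."""
    if mode == "ai_device":
        # Transmit feature values to the AI device and receive the inference result.
        return ai_device.infer(features)
    if mode == "network":
        # Transmit the information about the kind of detergent to the network instead.
        return network.infer(features)
    raise ValueError(f"unknown inference mode: {mode}")
```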
In another aspect, a washing machine based on artificial intelligence includes a controller; a memory; a communication circuit; a processor; and an external information collector, wherein information about a kind of detergent is obtained from the external information collector; the kind of detergent is determined by applying the information about the kind of detergent to a pre-learned artificial neural network (ANN) model; and it is determined whether a quantity of detergent supplied through a detergent inlet is less than a predetermined detergent quantity, based on predetermined detergent quantity information according to a laundry amount and the kind of detergent received through the communication circuit.
The predetermined detergent quantity may be a value determined in advance according to the laundry amount by a manufacturer of the detergent.
The external information collector may include at least one of a camera or a microphone.
The washing machine may further include a user interface, wherein the user interface may include at least one of a display or a speaker, and when the detergent quantity is less than the predetermined detergent quantity, insufficient detergent quantity information may be displayed through the display or output through the speaker.
If the detergent quantity exceeds the predetermined detergent quantity, the controller may supply the detergent corresponding to the predetermined detergent quantity together with washing water to an inner tub of the washing machine, and store the remaining detergent, except for the supplied detergent, in a detergent storage of the washing machine.
The processor may determine whether an additionally supplied detergent quantity measured through a detergent quantity detector satisfies the insufficient detergent quantity.
When the insufficient detergent quantity is satisfied, the controller may output a notification sound through the speaker.
If the additionally supplied detergent quantity does not satisfy the insufficient detergent quantity and the detergent is not supplied within a preset time, the controller may stop a process related to supply of the detergent and perform a washing operation.
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the technical features of the disclosure. In the drawings:
In what follows, embodiments disclosed in this document will be described in detail with reference to appended drawings, where the same or similar constituent elements are given the same reference number irrespective of their drawing symbols, and repeated descriptions thereof will be omitted.
In describing an embodiment disclosed in the present specification, if a constituting element is said to be “connected” or “attached” to other constituting element, it should be understood that the former may be connected or attached directly to the other constituting element, but there may be a case in which another constituting element is present between the two constituting elements.
Also, in describing an embodiment disclosed in the present document, if it is determined that a detailed description of a related art incorporated herein would unnecessarily obscure the gist of the embodiment, the detailed description thereof will be omitted. Also, it should be understood that the appended drawings are intended only to help understand embodiments disclosed in the present document and do not limit the technical principles and scope of the present disclosure; rather, it should be understood that the appended drawings include all of the modifications, equivalents or substitutes described by the technical principles and belonging to the technical scope of the present disclosure.
5G Scenario
The three main requirement areas in the 5G system are (1) the enhanced Mobile Broadband (eMBB) area, (2) the massive Machine Type Communication (mMTC) area, and (3) the Ultra-Reliable and Low Latency Communication (URLLC) area.
Some use cases may require multiple areas for optimization, while other use cases may focus on only one Key Performance Indicator (KPI). The 5G system supports such various use cases in a flexible and reliable manner.
eMBB far surpasses basic mobile Internet access and covers rich interactive work as well as media and entertainment applications in cloud computing or augmented reality environments. Data is one of the core driving elements of the 5G system, and dedicated voice-only service may disappear for the first time in the 5G era. In 5G, voice is expected to be handled simply as an application program using the data connection provided by the communication system. The primary causes of increased traffic volume are the increase of content size and the growing number of applications requiring a high data transfer rate. Streaming services (audio and video), interactive video, and mobile Internet connections will be used more heavily as more and more devices are connected to the Internet. These applications require always-on connectivity to push real-time information and notifications to the user. Cloud-based storage and applications are growing rapidly on mobile communication platforms and may be applied to both business and entertainment uses; cloud-based storage is a special use case that drives growth of the uplink data transfer rate. 5G is also used for cloud-based remote work and requires a much shorter end-to-end latency to ensure an excellent user experience when a tactile interface is used. Entertainment, for example cloud-based gaming and video streaming, is another core element strengthening the requirement for mobile broadband capability. Entertainment is essential for smartphones and tablets in any place, including high-mobility environments such as trains, cars, and planes. Another use case is augmented reality for entertainment and information search, which requires very low latency and instantaneous data transfer.
Also, one of the highly expected 5G use cases is the function of connecting embedded sensors seamlessly in every possible area, namely the use case based on mMTC. By 2020, the number of potential IoT devices is expected to reach 20.4 billion. Industrial IoT is one of the key areas where 5G performs a primary role in maintaining infrastructure for smart cities, asset tracking, smart utilities, agriculture, and security.
URLLC includes new services that may transform industry through ultra-reliable/ultra-low-latency links, such as remote control of major infrastructure and self-driving cars. These levels of reliability and latency are essential for smart grid control, industrial automation, robotics, and drone control and coordination.
Next, a plurality of use cases will be described in more detail.
5G may complement Fiber-To-The-Home (FTTH) and cable-based broadband (or DOCSIS) as a means to provide streams estimated at hundreds of megabits per second up to gigabits per second. This fast speed is required not only for virtual reality and augmented reality but also for transferring video with resolutions above 4K (6K, 8K or more). VR and AR applications almost always include immersive sports games. Specific application programs may require a special network configuration. For example, in the case of a VR game, game service providers may have to integrate a core server with the edge network service of the network operator to minimize latency.
Automobiles are expected to be a new important driving force for the 5G system, together with various use cases of mobile communication for vehicles. For example, entertainment for passengers requires high capacity and high mobile broadband at the same time, because users continue to expect a high-quality connection irrespective of their location and moving speed. Another use case in the automotive field is an augmented reality dashboard, which overlays information on what is seen through the front window, identifying objects in the dark and indicating the distance to each object and the object's motion. In the future, wireless modules will enable communication among vehicles, information exchange between a vehicle and supporting infrastructure, and information exchange between a vehicle and other connected devices (for example, devices carried by pedestrians). A safety system guides alternative courses of driving so that a driver may drive more safely and reduce the risk of accidents. The next step will be a remotely driven or self-driven vehicle. This step requires highly reliable and very fast communication between different self-driving vehicles and between a self-driving vehicle and infrastructure. In the future, it is expected that a self-driving vehicle will take care of all driving activities while a human driver focuses on dealing with abnormal driving situations that the self-driving vehicle is unable to recognize. Technical requirements of a self-driving vehicle demand ultra-low latency and ultra-high reliability, up to a level of traffic safety that cannot be reached by human drivers.
The smart city and smart home, which are regarded as essential to realizing a smart society, will be embedded in high-density wireless sensor networks. Distributed networks of intelligent sensors may identify conditions for cost-efficient and energy-efficient maintenance of cities and homes. A similar configuration may be applied to each home. Temperature sensors, window and heating controllers, anti-theft alarm devices, and home appliances will all be connected wirelessly. Many of these sensors are characterized by a low data transfer rate, low power, and low cost. However, for example, real-time HD video may require specific types of devices for the purpose of surveillance.
As consumption and distribution of energy, including heat and gas, become highly distributed, automated control of distributed sensor networks is required. A smart grid collects information and interconnects sensors by using digital information and communication technologies so that the distributed sensor network operates according to the collected information. Since the information may include behaviors of energy suppliers and consumers, the smart grid may help improve the distribution of fuels such as electricity in terms of efficiency, reliability, economics, production sustainability, and automation. The smart grid may be regarded as another type of sensor network with low latency.
The health-care sector has many application programs that may benefit from mobile communication. A communication system may support telemedicine, which provides clinical care from a distance. Telemedicine may help reduce barriers of distance and improve access to medical services that are not readily available in remote rural areas. It may also be used to save lives in critical medical and emergency situations. A wireless sensor network based on mobile communication may provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
Wireless and mobile communication are becoming increasingly important for industrial applications. Cable wiring entails high installation and maintenance costs. Therefore, replacement of cables with reconfigurable wireless links is an attractive opportunity for many industrial applications. However, to exploit this opportunity, the wireless connection is required to function with a latency similar to that of the cable connection, to be reliable and of large capacity, and to be managed in a simple manner. Low latency and very low error probability are new requirements that lead to the introduction of the 5G system.
Logistics and freight tracking are important use cases of mobile communication that require tracking of inventory and packages from any place by using a location-based information system. The logistics and freight tracking use case typically requires a low data rate but requires large-scale and reliable location information.
The present disclosure to be described below may be implemented by combining or modifying the respective embodiments to satisfy the aforementioned requirements of the 5G system.
Referring to
The cloud network 10 may comprise part of the cloud computing infrastructure or refer to a network existing in the cloud computing infrastructure. Here, the cloud network 10 may be constructed by using the 3G network, 4G or Long Term Evolution (LTE) network, or 5G network.
In other words, the individual devices (11 to 16) constituting the AI system may be connected to each other through the cloud network 10. In particular, the individual devices (11 to 16) may communicate with each other through an eNB, but may also communicate directly with each other without relying on the eNB.
The AI server 16 may include a server performing AI processing and a server performing computations on big data.
The AI server 16 may be connected to at least one or more of the robot 11, self-driving vehicle 12, XR device 13, smartphone 14, or home appliance 15, which are AI devices constituting the AI system, through the cloud network 10 and may help at least part of AI processing conducted in the connected AI devices (11 to 15).
At this time, the AI server 16 may train the artificial neural network according to a machine learning algorithm on behalf of the AI device (11 to 15), directly store the learning model, or transmit the learning model to the AI device (11 to 15).
At this time, the AI server 16 may receive input data from the AI device (11 to 15), infer a result value from the received input data by using the learning model, generate a response or control command based on the inferred result value, and transmit the generated response or control command to the AI device (11 to 15).
Similarly, the AI device (11 to 15) may infer a result value from the input data by employing the learning model directly and generate a response or control command based on the inferred result value.
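As a rough illustration of the division of labor described above, the server-side inference path might look like the following sketch. The class, method, and field names are hypothetical and only mirror the described steps: receive input data, infer a result value with the learning model, and derive a response or control command.

```python
class AIServer:
    """Illustrative AI server sketch (hypothetical names, not the disclosed system)."""

    def __init__(self, learning_model):
        self.learning_model = learning_model

    def handle_request(self, device_id, input_data):
        # Infer a result value from the received input data using the learning model,
        # then generate a response or control command based on the inferred result.
        result = self.learning_model.predict(input_data)
        command = self.build_control_command(result)
        return {"device": device_id, "result": result, "command": command}

    def build_control_command(self, result):
        # Placeholder mapping from an inferred result value to a control command.
        return {"action": "notify", "payload": result}
```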
AI+Robot
By employing the AI technology, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 may include a robot control module for controlling its motion, where the robot control module may correspond to a software module or a chip which implements the software module in the form of a hardware device.
The robot 11 may obtain status information of the robot 11, detect (recognize) the surroundings and objects, generate map data, determine a travel path and navigation plan, determine a response to user interaction, or determine motion by using sensor information obtained from various types of sensors.
Here, the robot 11 may use sensor information obtained from at least one or more sensors among lidar, radar, and camera to determine a travel path and navigation plan.
The robot 11 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the robot 11 may recognize the surroundings and objects by using the learning model and determine its motion by using the recognized surroundings or object information. Here, the learning model may be the one trained by the robot 11 itself or trained by an external device such as the AI server 16.
At this time, the robot 11 may perform the operation by generating a result by employing the learning model directly but also perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving a result generated accordingly.
The robot 11 may determine a travel path and navigation plan by using at least one or more of object information detected from the map data and sensor information or object information obtained from an external device and navigate according to the determined travel path and navigation plan by controlling its locomotion platform.
Map data may include object identification information about various objects disposed in the space in which the robot 11 navigates. For example, the map data may include object identification information about static objects such as walls and doors and movable objects such as a flowerpot and a desk. And the object identification information may include the name, type, distance, location, and so on.
Also, the robot 11 may perform the operation or navigate the space by controlling its locomotion platform based on the control/interaction of the user. At this time, the robot 11 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.
AI+Autonomous Navigation
By employing the AI technology, the self-driving vehicle 12 may be implemented as a mobile robot, unmanned ground vehicle, or unmanned aerial vehicle.
The self-driving vehicle 12 may include an autonomous navigation module for controlling its autonomous navigation function, where the autonomous navigation control module may correspond to a software module or a chip which implements the software module in the form of a hardware device. The autonomous navigation control module may be installed inside the self-driving vehicle 12 as a constituting element thereof or may be installed outside the self-driving vehicle 12 as a separate hardware component.
The self-driving vehicle 12 may obtain status information of the self-driving vehicle 12, detect (recognize) the surroundings and objects, generate map data, determine a travel path and navigation plan, or determine motion by using sensor information obtained from various types of sensors.
Like the robot 11, the self-driving vehicle 12 may use sensor information obtained from at least one or more sensors among lidar, radar, and camera to determine a travel path and navigation plan.
In particular, the self-driving vehicle 12 may recognize an occluded area or an area extending over a predetermined distance or objects located across the area by collecting sensor information from external devices or receive recognized information directly from the external devices.
The self-driving vehicle 12 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the self-driving vehicle 12 may recognize the surroundings and objects by using the learning model and determine its navigation route by using the recognized surroundings or object information. Here, the learning model may be the one trained by the self-driving vehicle 12 itself or trained by an external device such as the AI server 16.
At this time, the self-driving vehicle 12 may perform the operation by generating a result by employing the learning model directly but also perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving a result generated accordingly.
The self-driving vehicle 12 may determine a travel path and navigation plan by using at least one or more of object information detected from the map data and sensor information or object information obtained from an external device and navigate according to the determined travel path and navigation plan by controlling its driving platform.
Map data may include object identification information about various objects disposed in the space (for example, road) in which the self-driving vehicle 12 navigates. For example, the map data may include object identification information about static objects such as streetlights, rocks and buildings and movable objects such as vehicles and pedestrians. And the object identification information may include the name, type, distance, location, and so on.
Also, the self-driving vehicle 12 may perform the operation or navigate the space by controlling its driving platform based on the control/interaction of the user. At this time, the self-driving vehicle 12 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.
AI+XR
By employing the AI technology, the XR device 13 may be implemented as a Head-Mounted Display (HMD), Head-Up Display (HUD) installed at the vehicle, TV, mobile phone, smartphone, computer, wearable device, home appliance, digital signage, vehicle, robot with a fixed platform, or mobile robot.
The XR device 13 may obtain information about the surroundings or physical objects by generating position and attribute data about 3D points by analyzing 3D point cloud or image data acquired from various sensors or external devices and output objects in the form of XR objects by rendering the objects for display.
The XR device 13 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the XR device 13 may recognize physical objects from 3D point cloud or image data by using the learning model and provide information corresponding to the recognized physical objects. Here, the learning model may be the one trained by the XR device 13 itself or trained by an external device such as the AI server 16.
At this time, the XR device 13 may perform the operation by generating a result by employing the learning model directly but also perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving a result generated accordingly.
AI+Robot+Autonomous Navigation
By employing the AI and autonomous navigation technologies, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 employing the AI and autonomous navigation technologies may correspond to a robot itself having an autonomous navigation function or a robot 11 interacting with the self-driving vehicle 12.
The robot 11 having the autonomous navigation function may correspond collectively to the devices which may move autonomously along a given path without control of the user or which may move by determining its path autonomously.
The robot 11 and the self-driving vehicle 12 having the autonomous navigation function may use a common sensing method to determine one or more of the travel path or navigation plan. For example, the robot 11 and the self-driving vehicle 12 having the autonomous navigation function may determine one or more of the travel path or navigation plan by using the information sensed through lidar, radar, and camera.
The robot 11 interacting with the self-driving vehicle 12, which exists separately from the self-driving vehicle 12, may be associated with the autonomous navigation function inside or outside the self-driving vehicle 12 or perform an operation associated with the user riding the self-driving vehicle 12.
At this time, the robot 11 interacting with the self-driving vehicle 12 may obtain sensor information in place of the self-driving vehicle 12 and provide the sensed information to the self-driving vehicle 12; or may control or assist the autonomous navigation function of the self-driving vehicle 12 by obtaining sensor information, generating information of the surroundings or object information, and providing the generated information to the self-driving vehicle 12.
Also, the robot 11 interacting with the self-driving vehicle 12 may control the function of the self-driving vehicle 12 by monitoring the user riding the self-driving vehicle 12 or through interaction with the user. For example, if it is determined that the driver is drowsy, the robot 11 may activate the autonomous navigation function of the self-driving vehicle 12 or assist the control of the driving platform of the self-driving vehicle 12. Here, the function of the self-driving vehicle 12 controlled by the robot 11 may include not only the autonomous navigation function but also the navigation system installed inside the self-driving vehicle 12 or the function provided by the audio system of the self-driving vehicle 12.
Also, the robot 11 interacting with the self-driving vehicle 12 may provide information to the self-driving vehicle 12 or assist functions of the self-driving vehicle 12 from the outside of the self-driving vehicle 12. For example, the robot 11 may provide traffic information including traffic sign information to the self-driving vehicle 12 like a smart traffic light or may automatically connect an electric charger to the charging port by interacting with the self-driving vehicle 12 like an automatic electric charger of the electric vehicle.
AI+Robot+XR
By employing the AI technology, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 employing the XR technology may correspond to a robot which acts as a control/interaction target in the XR image. In this case, the robot 11 may be distinguished from the XR device 13, both of which may operate in conjunction with each other.
If the robot 11, which acts as a control/interaction target in the XR image, obtains sensor information from the sensors including a camera, the robot 11 or XR device 13 may generate an XR image based on the sensor information, and the XR device 13 may output the generated XR image. And the robot 11 may operate based on the control signal received through the XR device 13 or based on the interaction with the user.
For example, the user may check the XR image corresponding to the viewpoint of the robot 11 associated remotely through an external device such as the XR device 13, modify the navigation path of the robot 11 through interaction, control the operation or navigation of the robot 11, or check the information of nearby objects.
AI+Autonomous Navigation+XR
By employing the AI and XR technologies, the self-driving vehicle 12 may be implemented as a mobile robot, unmanned ground vehicle, or unmanned aerial vehicle.
The self-driving vehicle 12 employing the XR technology may correspond to a self-driving vehicle having a means for providing XR images or a self-driving vehicle which acts as a control/interaction target in the XR image. In particular, the self-driving vehicle 12 which acts as a control/interaction target in the XR image may be distinguished from the XR device 13, both of which may operate in conjunction with each other.
The self-driving vehicle 12 having a means for providing XR images may obtain sensor information from sensors including a camera and output XR images generated based on the sensor information obtained. For example, by displaying an XR image through HUD, the self-driving vehicle 12 may provide XR images corresponding to physical objects or image objects to the passenger.
At this time, if an XR object is output on the HUD, at least part of the XR object may be output so as to be overlapped with the physical object at which the passenger gazes. On the other hand, if an XR object is output on a display installed inside the self-driving vehicle 12, at least part of the XR object may be output so as to be overlapped with an image object. For example, the self-driving vehicle 12 may output XR objects corresponding to the objects such as roads, other vehicles, traffic lights, traffic signs, bicycles, pedestrians, and buildings.
If the self-driving vehicle 12, which acts as a control/interaction target in the XR image, obtains sensor information from the sensors including a camera, the self-driving vehicle 12 or XR device 13 may generate an XR image based on the sensor information, and the XR device 13 may output the generated XR image. And the self-driving vehicle 12 may operate based on the control signal received through an external device such as the XR device 13 or based on the interaction with the user.
Extended Reality Technology
eXtended Reality (XR) refers to all of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The VR technology provides objects or backgrounds of the real world only in the form of CG images, AR technology provides virtual CG images overlaid on the physical object images, and MR technology employs computer graphics technology to mix and merge virtual objects with the real world.
MR technology is similar to AR technology in a sense that physical objects are displayed together with virtual objects. However, while virtual objects supplement physical objects in the AR, virtual and physical objects co-exist as equivalents in the MR.
The XR technology may be applied to Head-Mounted Display (HMD), Head-Up Display (HUD), mobile phone, tablet PC, laptop computer, desktop computer, TV, digital signage, and so on, where a device employing the XR technology may be called an XR device.
Hereinafter, 5G communication (5th generation mobile communication) required by an apparatus requiring AI processed information and/or an AI processor will be described through paragraphs A through G.
A. Example of Block Diagram of UE and 5G Network
Referring to
A 5G network including another vehicle communicating with the autonomous device is defined as a second communication device (920 of
The 5G network may be represented as the first communication device and the autonomous device may be represented as the second communication device.
For example, the first communication device or the second communication device may be a base station, a network node, a transmission terminal, a reception terminal, a wireless device, a wireless communication device, an autonomous device, or the like.
For example, the first communication device or the second communication device may be a base station, a network node, a transmission terminal, a reception terminal, a wireless device, a wireless communication device, a vehicle, a vehicle having an autonomous function, a connected car, a drone (Unmanned Aerial Vehicle, UAV), an AI (Artificial Intelligence) module, a robot, an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, a hologram device, a public safety device, an MTC device, an IoT device, a medical device, a Fin Tech device (or financial device), a security device, a climate/environment device, a device associated with 5G services, or other devices associated with the fourth industrial revolution field.
For example, a terminal or user equipment (UE) may include a cellular phone, a smart phone, a laptop computer, a digital broadcast terminal, personal digital assistants (PDAs), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, and a head-mounted display (HMD)), etc. For example, the HMD may be a display device worn on the head of a user. For example, the HMD may be used to realize VR, AR or MR. For example, the drone may be a flying object that flies by wireless control signals without a person therein. For example, the VR device may include a device that implements objects or backgrounds of a virtual world. For example, the AR device may include a device that connects and implements objects or backgrounds of a virtual world to objects, backgrounds, or the like of a real world. For example, the MR device may include a device that unites and implements objects or backgrounds of a virtual world with objects, backgrounds, or the like of a real world. For example, the hologram device may include a device that implements 360-degree 3D images by recording and playing 3D information using the interference of light generated when two laser beams meet, which is called holography. For example, the public safety device may include an image repeater or an imaging device that can be worn on the body of a user. For example, the MTC device and the IoT device may be devices that do not require direct intervention or operation by a person. For example, the MTC device and the IoT device may include a smart meter, a vending machine, a thermometer, a smart bulb, a door lock, various sensors, or the like. For example, the medical device may be a device that is used to diagnose, treat, attenuate, remove, or prevent diseases. For example, the medical device may be a device that is used to diagnose, treat, attenuate, or correct injuries or disorders. For example, the medical device may be a device that is used to examine, replace, or change structures or functions. For example, the medical device may be a device that is used to control pregnancy. For example, the medical device may include a device for medical treatment, a device for operations, a device for (external) diagnosis, a hearing aid, an operation device, or the like. For example, the security device may be a device that is installed to prevent a danger that is likely to occur and to keep safety. For example, the security device may be a camera, a CCTV, a recorder, a black box, or the like. For example, the Fin Tech device may be a device that can provide financial services such as mobile payment.
Referring to
UL (communication from the second communication device to the first communication device) is processed in the first communication device 910 in a way similar to that described in association with a receiver function in the second communication device 920. Each Tx/Rx module 925 receives a signal through each antenna 926. Each Tx/Rx module provides RF carriers and information to the Rx processor 923. The processor 921 may be related to the memory 924 that stores program code and data. The memory may be referred to as a computer-readable medium.
B. Signal Transmission/Reception Method in Wireless Communication System
Referring to
Meanwhile, when the UE initially accesses the BS or has no radio resource for signal transmission, the UE can perform a random access procedure (RACH) for the BS (steps S203 to S206). To this end, the UE can transmit a specific sequence as a preamble through a physical random access channel (PRACH) (S203 and S205) and receive a random access response (RAR) message for the preamble through a PDCCH and a corresponding PDSCH (S204 and S206). In the case of a contention-based RACH, a contention resolution procedure may be additionally performed.
After the UE performs the above-described process, the UE can perform PDCCH/PDSCH reception (S207) and physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) transmission (S208) as normal uplink/downlink signal transmission processes. Particularly, the UE receives downlink control information (DCI) through the PDCCH. The UE monitors a set of PDCCH candidates in monitoring occasions set for one or more control resource sets (CORESET) on a serving cell according to corresponding search space configurations. A set of PDCCH candidates to be monitored by the UE is defined in terms of search space sets, and a search space set may be a common search space set or a UE-specific search space set. A CORESET includes a set of (physical) resource blocks having a duration of one to three OFDM symbols. A network can configure the UE such that the UE has a plurality of CORESETs. The UE monitors PDCCH candidates in one or more search space sets. Here, monitoring means attempting decoding of PDCCH candidate(s) in a search space. When the UE has successfully decoded one of the PDCCH candidates in a search space, the UE determines that a PDCCH has been detected from the PDCCH candidate and performs PDSCH reception or PUSCH transmission on the basis of DCI in the detected PDCCH. The PDCCH can be used to schedule DL transmissions over a PDSCH and UL transmissions over a PUSCH. Here, the DCI in the PDCCH includes a downlink assignment (i.e., downlink grant (DL grant)) related to a physical downlink shared channel and including at least a modulation and coding format and resource allocation information, or an uplink grant (UL grant) related to a physical uplink shared channel and including a modulation and coding format and resource allocation information.
An initial access (IA) procedure in a 5G communication system will be additionally described with reference to
The UE can perform cell search, system information acquisition, beam alignment for initial access, and DL measurement on the basis of an SSB. The SSB is interchangeably used with a synchronization signal/physical broadcast channel (SS/PBCH) block.
The SSB includes a PSS, an SSS and a PBCH. The SSB is configured in four consecutive OFDM symbols, and a PSS, a PBCH, an SSS/PBCH or a PBCH is transmitted for each OFDM symbol. Each of the PSS and the SSS includes one OFDM symbol and 127 subcarriers, and the PBCH includes 3 OFDM symbols and 576 subcarriers.
Cell search refers to a process in which a UE acquires time/frequency synchronization of a cell and detects a cell identifier (ID) (e.g., physical layer cell ID (PCI)) of the cell. The PSS is used to detect a cell ID in a cell ID group and the SSS is used to detect a cell ID group. The PBCH is used to detect an SSB (time) index and a half-frame.
There are 336 cell ID groups and there are 3 cell IDs per cell ID group. A total of 1008 cell IDs are present. Information on a cell ID group to which a cell ID of a cell belongs is provided/acquired through an SSS of the cell, and information on the cell ID among 336 cell ID groups is provided/acquired through a PSS.
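The relationship above can be checked with a one-line computation: with the cell ID group from the SSS in the range 0 to 335 and the ID within the group from the PSS in the range 0 to 2, the physical-layer cell ID is 3 times the group plus the in-group ID, giving 1008 values. A minimal sketch:

```python
def physical_cell_id(n_id_group: int, n_id_in_group: int) -> int:
    """PCI from the SSS-derived cell ID group (0..335) and PSS-derived ID (0..2)."""
    assert 0 <= n_id_group <= 335 and 0 <= n_id_in_group <= 2
    return 3 * n_id_group + n_id_in_group

assert physical_cell_id(335, 2) == 1007  # 1008 distinct cell IDs in total
```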
The SSB is periodically transmitted in accordance with SSB periodicity. A default SSB periodicity assumed by a UE during initial cell search is defined as 20 ms. After cell access, the SSB periodicity can be set to one of {5 ms, 10 ms, 20 ms, 40 ms, 80 ms, 160 ms} by a network (e.g., a BS).
Next, acquisition of system information (SI) will be described.
SI is divided into a master information block (MIB) and a plurality of system information blocks (SIBs). SI other than the MIB may be referred to as remaining minimum system information. The MIB includes information/parameters for monitoring a PDCCH that schedules a PDSCH carrying SIB1 (SystemInformationBlock1) and is transmitted by a BS through a PBCH of an SSB. SIB1 includes information related to availability and scheduling (e.g., transmission periodicity and SI-window size) of the remaining SIBs (hereinafter, SIBx, where x is an integer equal to or greater than 2). SIBx is included in an SI message and transmitted over a PDSCH. Each SI message is transmitted within a periodically generated time window (i.e., SI-window).
A random access (RA) procedure in a 5G communication system will be additionally described with reference to
A random access procedure is used for various purposes. For example, the random access procedure can be used for network initial access, handover, and UE-triggered UL data transmission. A UE can acquire UL synchronization and UL transmission resources through the random access procedure. The random access procedure is classified into a contention-based random access procedure and a contention-free random access procedure. A detailed procedure for the contention-based random access procedure is as follows.
A UE can transmit a random access preamble through a PRACH as Msg1 of a random access procedure in UL. Random access preamble sequences having two different lengths are supported. A long sequence length of 839 is applied to subcarrier spacings of 1.25 kHz and 5 kHz, and a short sequence length of 139 is applied to subcarrier spacings of 15 kHz, 30 kHz, 60 kHz and 120 kHz.
When a BS receives the random access preamble from the UE, the BS transmits a random access response (RAR) message (Msg2) to the UE. A PDCCH that schedules a PDSCH carrying a RAR is CRC masked by a random access (RA) radio network temporary identifier (RNTI) (RA-RNTI) and transmitted. Upon detection of the PDCCH masked by the RA-RNTI, the UE can receive a RAR from the PDSCH scheduled by DCI carried by the PDCCH. The UE checks whether the RAR includes random access response information with respect to the preamble transmitted by the UE, that is, Msg1. Presence or absence of random access information with respect to Msg1 transmitted by the UE can be determined according to presence or absence of a random access preamble ID with respect to the preamble transmitted by the UE. If there is no response to Msg1, the UE can retransmit the RACH preamble less than a predetermined number of times while performing power ramping. The UE calculates PRACH transmission power for preamble retransmission on the basis of most recent pathloss and a power ramping counter.
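The power ramping just mentioned follows the general pattern sketched below: the preamble target power is increased with each retransmission attempt, the most recent pathloss is added, and the result is clamped to the UE's maximum transmit power. The exact formula and parameter names are specified by 3GPP; this Python sketch is only an illustration of the idea, and the numbers in the example are arbitrary.

```python
def prach_tx_power_dbm(preamble_received_target_power_dbm: float,
                       power_ramping_step_db: float,
                       power_ramping_counter: int,
                       pathloss_db: float,
                       p_cmax_dbm: float) -> float:
    """Sketch of PRACH transmit power for preamble (re)transmission:
    ramp the target power with each attempt, add the most recent pathloss,
    and never exceed the UE's configured maximum transmit power."""
    ramped_target = (preamble_received_target_power_dbm
                     + (power_ramping_counter - 1) * power_ramping_step_db)
    return min(p_cmax_dbm, ramped_target + pathloss_db)

# Example: third attempt with a 2 dB ramping step and 90 dB pathloss
print(prach_tx_power_dbm(-104.0, 2.0, 3, 90.0, 23.0))  # -> -10.0 dBm
```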
The UE can perform UL transmission through Msg3 of the random access procedure over a physical uplink shared channel on the basis of the random access response information. Msg3 can include an RRC connection request and a UE ID. The network can transmit Msg4 as a response to Msg3, and Msg4 can be handled as a contention resolution message on DL. The UE can enter an RRC connected state by receiving Msg4.
C. Beam Management (BM) Procedure of 5G Communication System
A BM procedure can be divided into (1) a DL BM procedure using an SSB or a CSI-RS and (2) a UL BM procedure using a sounding reference signal (SRS). In addition, each BM procedure can include Tx beam sweeping for determining a Tx beam and Rx beam sweeping for determining an Rx beam.
The DL BM procedure using an SSB will be described.
Configuration of a beam report using an SSB is performed when channel state information (CSI)/beam is configured in RRC_CONNECTED.
- A UE receives a CSI-ResourceConfig IE including CSI-SSB-ResourceSetList for SSB resources used for BM from a BS. The RRC parameter “csi-SSB-ResourceSetList” represents a list of SSB resources used for beam management and report in one resource set. Here, an SSB resource set can be set as {SSBx1, SSBx2, SSBx3, SSBx4, . . . }. An SSB index can be defined in the range of 0 to 63.
- The UE receives the signals on SSB resources from the BS on the basis of the CSI-SSB-ResourceSetList.
- When CSI-RS reportConfig with respect to a report on SSBRI and reference signal received power (RSRP) is set, the UE reports the best SSBRI and RSRP corresponding thereto to the BS. For example, when reportQuantity of the CSI-RS reportConfig IE is set to ‘ssb-Index-RSRP’, the UE reports the best SSBRI and RSRP corresponding thereto to the BS.
When a CSI-RS resource is configured in the same OFDM symbols as an SSB and ‘QCL-TypeD’ is applicable, the UE can assume that the CSI-RS and the SSB are quasi co-located (QCL) from the viewpoint of ‘QCL-TypeD’. Here, QCL-TypeD may mean that antenna ports are quasi co-located from the viewpoint of a spatial Rx parameter. When the UE receives signals of a plurality of DL antenna ports in a QCL-TypeD relationship, the same Rx beam can be applied.
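The beam report described in the SSB-based procedure above amounts to selecting the SSB resource with the highest measured RSRP and reporting that SSBRI together with its RSRP. A minimal sketch, with purely illustrative measurement values:

```python
def best_ssb_report(rsrp_by_ssbri: dict[int, float]) -> tuple[int, float]:
    """Return (SSBRI, RSRP) of the best-measured SSB resource."""
    best = max(rsrp_by_ssbri, key=rsrp_by_ssbri.get)
    return best, rsrp_by_ssbri[best]

# Example RSRP measurements in dBm for SSB indexes 0..3
print(best_ssb_report({0: -95.2, 1: -88.7, 2: -101.4, 3: -90.1}))  # -> (1, -88.7)
```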
Next, a DL BM procedure using a CSI-RS will be described.
An Rx beam determination (or refinement) procedure of a UE and a Tx beam sweeping procedure of a BS using a CSI-RS will be sequentially described. A repetition parameter is set to ‘ON’ in the Rx beam determination procedure of a UE and set to ‘OFF’ in the Tx beam sweeping procedure of a BS.
First, the Rx beam determination procedure of a UE will be described.
- The UE receives an NZP CSI-RS resource set IE including an RRC parameter with respect to ‘repetition’ from a BS through RRC signaling. Here, the RRC parameter ‘repetition’ is set to ‘ON’.
- The UE repeatedly receives signals on resources in a CSI-RS resource set in which the RRC parameter ‘repetition’ is set to ‘ON’ in different OFDM symbols through the same Tx beam (or DL spatial domain transmission filters) of the BS.
- The UE determines an RX beam thereof.
- The UE skips a CSI report. That is, the UE can skip a CSI report when the RRC parameter ‘repetition’ is set to ‘ON’.
Next, the Tx beam determination procedure of a BS will be described.
- A UE receives an NZP CSI-RS resource set IE including an RRC parameter with respect to ‘repetition’ from the BS through RRC signaling. Here, the RRC parameter ‘repetition’ is related to the Tx beam sweeping procedure of the BS when set to ‘OFF’.
- The UE receives signals on resources in a CSI-RS resource set in which the RRC parameter ‘repetition’ is set to ‘OFF’ in different DL spatial domain transmission filters of the BS.
- The UE selects (or determines) a best beam.
- The UE reports an ID (e.g., CRI) of the selected beam and related quality information (e.g., RSRP) to the BS. That is, when a CSI-RS is transmitted for BM, the UE reports a CRI and RSRP with respect thereto to the BS.
Next, the UL BM procedure using an SRS will be described.
- A UE receives RRC signaling (e.g., SRS-Config IE) including a (RRC parameter) purpose parameter set to ‘beam management’ from a BS. The SRS-Config IE is used to set SRS transmission. The SRS-Config IE includes a list of SRS-Resources and a list of SRS-ResourceSets. Each SRS resource set refers to a set of SRS-resources.
The UE determines Tx beamforming for SRS resources to be transmitted on the basis of SRS-SpatialRelation Info included in the SRS-Config IE. Here, SRS-SpatialRelation Info is set for each SRS resource and indicates whether the same beamforming as that used for an SSB, a CSI-RS or an SRS will be applied for each SRS resource.
- When SRS-SpatialRelationInfo is set for SRS resources, the same beamforming as that used for the SSB, CSI-RS or SRS is applied. However, when SRS-SpatialRelationInfo is not set for SRS resources, the UE arbitrarily determines Tx beamforming and transmits an SRS through the determined Tx beamforming.
Next, a beam failure recovery (BFR) procedure will be described.
In a beamformed system, radio link failure (RLF) may frequently occur due to rotation, movement or beamforming blockage of a UE. Accordingly, NR supports BFR in order to prevent frequent occurrence of RLF. BFR is similar to a radio link failure recovery procedure and can be supported when a UE knows new candidate beams. For beam failure detection, a BS configures beam failure detection reference signals for a UE, and the UE declares beam failure when the number of beam failure indications from the physical layer of the UE reaches a threshold set through RRC signaling within a period set through RRC signaling of the BS. After beam failure detection, the UE triggers beam failure recovery by initiating a random access procedure in a PCell and performs beam failure recovery by selecting a suitable beam. (When the BS provides dedicated random access resources for certain beams, these are prioritized by the UE). Completion of the aforementioned random access procedure is regarded as completion of beam failure recovery.
D. URLLC (Ultra-Reliable and Low Latency Communication)
URLLC transmission defined in NR can refer to (1) a relatively low traffic size, (2) a relatively low arrival rate, (3) extremely low latency requirements (e.g., 0.5 and 1 ms), (4) relatively short transmission duration (e.g., 2 OFDM symbols), (5) urgent services/messages, etc. In the case of UL, transmission of traffic of a specific type (e.g., URLLC) needs to be multiplexed with another transmission (e.g., eMBB) scheduled in advance in order to satisfy more stringent latency requirements. In this regard, a method of providing information indicating preemption of specific resources to a UE scheduled in advance and allowing a URLLC UE to use the resources for UL transmission is provided.
NR supports dynamic resource sharing between eMBB and URLLC. eMBB and URLLC services can be scheduled on non-overlapping time/frequency resources, and URLLC transmission can occur in resources scheduled for ongoing eMBB traffic. An eMBB UE may not ascertain whether PDSCH transmission of the corresponding UE has been partially punctured and the UE may not decode a PDSCH due to corrupted coded bits. In view of this, NR provides a preemption indication. The preemption indication may also be referred to as an interrupted transmission indication.
With regard to the preemption indication, a UE receives a DownlinkPreemption IE through RRC signaling from a BS. When the UE is provided with the DownlinkPreemption IE, the UE is configured with an INT-RNTI provided by the parameter int-RNTI in the DownlinkPreemption IE for monitoring of a PDCCH that conveys DCI format 2_1. The UE is additionally configured with a corresponding set of positions for fields in DCI format 2_1 according to a set of serving cells and positionInDCI by INT-ConfigurationPerServingCell, which includes a set of serving cell indexes provided by servingCellID, is configured with an information payload size for DCI format 2_1 according to dci-PayloadSize, and is configured with the indication granularity of time-frequency resources according to timeFrequencySet.
The UE receives DCI format 2_1 from the BS on the basis of the DownlinkPreemption IE.
When the UE detects DCI format 2_1 for a serving cell in a configured set of serving cells, the UE can assume that there is no transmission to the UE in PRBs and symbols indicated by the DCI format 2_1 in a set of PRBs and a set of symbols in a last monitoring period before a monitoring period to which the DCI format 2_1 belongs. For example, the UE assumes that a signal in a time-frequency resource indicated according to preemption is not DL transmission scheduled therefor and decodes data on the basis of signals received in the remaining resource region.
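Conceptually, the preemption indication tells the UE which PRB/symbol cells of the last monitoring period to discard before decoding, so the decoder treats them as erasures rather than as corrupted eMBB data. A toy sketch, assuming a boolean mask already derived from DCI format 2_1 (grid dimensions and values are arbitrary illustrations):

```python
import numpy as np

def apply_preemption(llrs: np.ndarray, preempted_mask: np.ndarray) -> np.ndarray:
    """Zero out soft bits (LLRs) in resources flagged as preempted so they carry
    no information into decoding. Shapes: (num_symbols, num_prbs)."""
    cleaned = llrs.copy()
    cleaned[preempted_mask] = 0.0  # erase preempted resources
    return cleaned

# Example: 14 OFDM symbols x 20 PRBs, with symbols 4-5 of PRBs 0-9 preempted
llrs = np.random.randn(14, 20)
mask = np.zeros((14, 20), dtype=bool)
mask[4:6, 0:10] = True
print(apply_preemption(llrs, mask)[4, 0])  # -> 0.0
```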
E. mMTC (Massive MTC)
mMTC (massive Machine Type Communication) is one of the 5G scenarios for supporting a hyper-connection service providing simultaneous communication with a large number of UEs. In this environment, a UE intermittently performs communication with a very low speed and mobility. Accordingly, a main goal of mMTC is operating a UE for a long time at a low cost. With respect to mMTC, 3GPP deals with MTC and NB (NarrowBand)-IoT.
mMTC has features such as repetitive transmission of a PDCCH, a PUCCH, a PDSCH (physical downlink shared channel), a PUSCH, etc., frequency hopping, retuning, and a guard period.
That is, a PUSCH (or a PUCCH (particularly, a long PUCCH) or a PRACH) including specific information and a PDSCH (or a PDCCH) including a response to the specific information are repeatedly transmitted. Repetitive transmission is performed through frequency hopping, and for repetitive transmission, (RF) retuning from a first frequency resource to a second frequency resource is performed in a guard period and the specific information and the response to the specific information can be transmitted/received through a narrowband (e.g., 6 resource blocks (RBs) or 1 RB).
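The repetition pattern described above (repeating the same narrowband transmission while hopping between two frequency resources, with a guard period for RF retuning) can be illustrated as a simple schedule. The function, field names, and numbers below are arbitrary examples for illustration, not values from any specification.

```python
def mmtc_repetition_schedule(num_repetitions: int, f1_rb: int, f2_rb: int,
                             guard_symbols: int = 2):
    """Toy schedule: alternate a narrowband transmission between two frequency
    resources, inserting a guard period for (RF) retuning between repetitions."""
    schedule = []
    for rep in range(num_repetitions):
        rb = f1_rb if rep % 2 == 0 else f2_rb  # frequency hopping
        schedule.append({"repetition": rep, "resource_block": rb})
        if rep < num_repetitions - 1:
            schedule.append({"guard_symbols": guard_symbols})  # retuning gap
    return schedule

print(mmtc_repetition_schedule(4, f1_rb=10, f2_rb=40))
```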
F. Basic Operation of AI Using 5G Communication
The UE transmits specific information to the 5G network (S1). In addition, the 5G network can determine whether to remotely control the vehicle (S2). Here, the 5G network may include a server or a module which performs remote control related to autonomous driving. In addition, the 5G network can transmit information (or signal) related to remote control to the UE (S3).
G. Applied Operations Between UE and 5G Network in 5G Communication System
Hereinafter, the applied operation using 5G communication will be described in more detail with reference to the wireless communication technology (BM procedure, URLLC, mMTC, etc.) described in
First, a basic procedure of an applied operation to which a method proposed by the present disclosure, which will be described later, and eMBB of 5G communication are applied will be described.
As in steps S1 and S3 of
More specifically, the UE performs an initial access procedure with the 5G network on the basis of an SSB in order to acquire DL synchronization and system information. A beam management (BM) procedure and a beam failure recovery procedure may be added in the initial access procedure, and quasi-co-location (QCL) relation may be added in a process in which the UE receives a signal from the 5G network.
In addition, the UE performs a random access procedure with the 5G network for UL synchronization acquisition and/or UL transmission. The 5G network can transmit, to the UE, a UL grant for scheduling transmission of specific information. Accordingly, the UE transmits the specific information to the 5G network on the basis of the UL grant. In addition, the 5G network transmits, to the UE, a DL grant for scheduling transmission of 5G processing results with respect to the specific information. Accordingly, the 5G network can transmit, to the UE, information (or a signal) related to remote control on the basis of the DL grant.
Next, a basic procedure of an applied operation to which a method proposed by the present disclosure, which will be described later, and URLLC of 5G communication are applied will be described.
As described above, a UE can receive the DownlinkPreemption IE from the 5G network after the UE performs an initial access procedure and/or a random access procedure with the 5G network. Then, the UE receives DCI format 2_1 including a preemption indication from the 5G network on the basis of the DownlinkPreemption IE. The UE does not perform (or expect or assume) reception of eMBB data in resources (PRBs and/or OFDM symbols) indicated by the preemption indication. Thereafter, when the UE needs to transmit specific information, the UE can receive a UL grant from the 5G network.
Next, a basic procedure of an applied operation to which a method proposed by the present disclosure, which will be described later, and mMTC of 5G communication are applied will be described.
Description will focus on parts in the steps of
In step S1 of
The above-described 5G communication technology can be combined with methods proposed in the present disclosure which will be described later and applied or can complement the methods proposed in the present disclosure to make technical features of the methods concrete and clear.
Referring to
At a front surface of a main body of the washing machine, a laundry inlet for enabling injection and discharge of laundry is formed, and a door that can open and close the laundry inlet is rotatably provided. The door may be provided with an image acquisition unit including an image sensor, a camera, and the like, for checking a state of the laundry located in the inner tub of the washing machine.
A detergent inlet may receive detergent such as laundry detergent, rinse detergent, or bleach. The detergent inlet may be drawn out through the front of the body. When washing water is supplied, detergent in the detergent inlet is mixed with the washing water and supplied to the inner tub of the washing machine. The detergent inlet may be divided into a portion that receives laundry detergent, a portion that receives rinse detergent, and a portion that receives bleach.
In one embodiment of the present disclosure, the detergent inlet may include a detergent quantity detection unit. Specifically, the detergent quantity detection unit may include a weight sensor, an image sensor, and the like for detecting a quantity of detergent injected into the detergent inlet, and may use all known means capable of detecting a mass of a powdered substance or a liquid substance.
Referring to
The controller 100 controls the hardware 200 to control the overall driving of the washing machine LM according to information input through the user interface 400. Further, the controller 100 controls an operation of the hardware 200 based on a laundry image obtained through an image acquisition unit (not shown). More specifically, the controller 100 may obtain laundry classification information or laundry distribution information from the laundry image and control an operation of the hardware 200 based on the laundry classification information or laundry distribution information. The laundry classification information is information about a kind, material, etc. of the laundry, and may particularly indicate moisture content information of the laundry. The laundry distribution information may indicate a degree of disposition or height information of the laundry received in the inner tub 211.
The controller 100 may learn laundry classification information to estimate a vibration degree of the inner tub 211 that may occur in a dehydration process, and vary the revolutions per minute (RPM) of a motor 220 in the dehydration process according to the vibration degree of the inner tub 211. For example, when it is determined from the laundry classification information that the laundry may cause a short circuit of the motor 220, the controller 100 may control to lower the RPM of the motor 220 in the dehydration process.
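A minimal sketch of this RPM control idea is given below, assuming hypothetical helper names and thresholds; it only illustrates lowering the dehydration RPM when the classified laundry is expected to cause strong vibration.

```python
# Minimal sketch, assuming a hypothetical vibration estimator and thresholds,
# of lowering the spin-dry RPM for laundry expected to vibrate strongly.

DEFAULT_SPIN_RPM = 1200

def estimate_vibration(laundry_classification):
    # Placeholder: a learned model would map moisture content / material
    # information to an expected vibration level in [0, 1].
    return 0.8 if laundry_classification.get("high_moisture") else 0.3

def select_spin_rpm(laundry_classification):
    vibration_score = estimate_vibration(laundry_classification)
    if vibration_score > 0.7:       # laundry likely to unbalance the tub
        return int(DEFAULT_SPIN_RPM * 0.6)
    if vibration_score > 0.4:
        return int(DEFAULT_SPIN_RPM * 0.8)
    return DEFAULT_SPIN_RPM

print(select_spin_rpm({"high_moisture": True}))   # -> 720
```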
The hardware 200 may include a washing tub 210, a motor 220, a water supply valve 230, and a heater 240.
The washing tub 210 includes an outer tub 213 for receiving washing water, and an inner tub 211 disposed inside the outer tub 213 to receive laundry and to rotate using rotatory power provided from the motor 220. The water supply valve 230 controls supply of washing water. The heater 240 heats the water supplied to the washing tub.
The external information acquisition unit may include a camera, a microphone, and the like.
The external information acquisition unit may obtain information about a user's voice or an object outside the washing machine through an input device such as a camera or a microphone. Further, the external information acquisition unit may include a communication unit for obtaining an external signal. The camera may be at least one of a 2D camera or a 3D camera, and may be disposed at the front of the washing machine.
The user interface 400 may include a power input unit 410, a start input unit 420, a course selection unit 430, an option selection unit 440, a display unit 450, and a speaker 460.
The power input unit 410 provides a means for controlling on/off of main power of the washing machine LM. The start input unit 420 provides a means for controlling the start of a washing process, a rinsing process, or a dehydration process. The course selection unit 430 provides a means for selecting a kind of a washing process, a rinsing process, or a dehydration process. The option selection unit 440 provides a means for selecting detailed options for a washing process, a rinsing process, or a dehydration process. For example, the option selection unit 440 may be a means for selecting options such as a water temperature, a time, and a reservation. The display unit 450 may display an operation state of the washing machine LM or may display course information selected by the user through the course selection unit 430 or option information selected through the option selection unit 440. The speaker 460 outputs an operating state of the washing machine LM or a situation of a specific event as a voice signal. The specific event may be a situation of laundry distribution control or RPM control based on the laundry image.
The communication module 500 may transmit information sensed during a washing process by the washing machine LM, detected error information, and the like to an external electronic device. For example, the external electronic device may include a Bluetooth device, an autonomous vehicle, a robot, a drone, an AR device, a mobile device, a home appliance, and the like.
The laundry amount detection unit performs a function of detecting an amount of laundry supplied to the inner tub. Specifically, the laundry amount detection unit may include an image acquisition unit, a sensing unit, a current detection unit, and the like.
The image acquisition unit (not shown) may be provided inside the door of the washing machine to photograph a state of supplied laundry as an image, and determine an amount of laundry based on the obtained image information.
The sensing unit (not shown) may indicate a six-axis sensor. Six-axis sensors may be used for obtaining sensing data on gravity and acceleration, and a laundry amount may be detected based on the sensing data.
The current detection unit (not shown) detects a current, i.e., a driving current Id flowing to the motor 220. The current detection unit may be implemented in various forms, such as a Hall sensor or an encoder. The current detection unit periodically detects a current flowing to the motor 220 and provides the detected current to the controller. The controller may detect an amount of laundry based on the current detected by the current detection unit.
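As one possible illustration of this current-based detection, the following sketch averages periodically sampled driving-current values and maps the average to a laundry amount; the calibration table is a hypothetical assumption, not data from the disclosure.

```python
# Illustrative sketch of estimating the laundry amount from the motor driving
# current. The calibration table is an assumption for illustration.

from statistics import mean

CALIBRATION = [  # (average driving current in A, laundry amount in kg), hypothetical
    (1.0, 2.0),
    (1.5, 4.0),
    (2.0, 6.0),
    (2.5, 8.0),
]

def estimate_laundry_amount(current_samples):
    avg = mean(current_samples)
    # Pick the closest calibration point (a real controller could interpolate).
    _, amount = min(CALIBRATION, key=lambda pair: abs(pair[0] - avg))
    return amount

print(estimate_laundry_amount([1.4, 1.6, 1.5, 1.55]))  # -> 4.0
```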
The detergent quantity detection unit may be provided in a detergent inlet or a detergent storage unit. In this case, the detergent inlet is an inlet for supplying detergent to the washing machine, and is generally positioned at the front of the washing machine so that it can be slid in and out of the washing machine. The detergent storage unit may be disposed inside the washing machine to store in advance detergent to be used for a washing operation. Detection of a detergent quantity may include a method of detecting a quantity of supplied detergent through a pressure sensor, a method of determining a quantity of detergent by analyzing an image photographed by an image sensor, and the like, but is not limited to the listed examples.
The washing machine LM may further include an eccentric amount detection unit (not shown) for detecting an unbalanced degree (eccentricity) of the laundry injected into the inner tub 211, i.e., unbalance (UB) of the inner tub 211. The eccentric amount detection unit may perform unbalance detection based on a rotation speed change amount of the inner tub 211, i.e., a rotation speed change amount of the motor 220. To this end, a speed detection unit (not shown) for detecting a rotation speed of the motor 220 may be separately provided, or a speed detection unit may calculate the rotation speed based on the driving current of the motor 220 detected by the current detection unit, and unbalance may be detected based on the rotation speed. The eccentric amount detection unit may be provided inside the controller.
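A minimal sketch of unbalance detection from the rotation-speed change is shown below; the ripple threshold is an illustrative assumption.

```python
# Minimal sketch: a large speed ripple within one drum revolution suggests
# eccentric (unbalanced) laundry. The threshold value is an assumption.

def detect_unbalance(rpm_samples, ripple_threshold_rpm=15.0):
    ripple = max(rpm_samples) - min(rpm_samples)
    return ripple > ripple_threshold_rpm

print(detect_unbalance([108, 96, 110, 94]))  # ripple of 16 rpm -> True
```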
Referring to
The AI processing may include all operations related to the controller 100 of the washing machine LM of
The AI device 20 may be a client device directly using an AI processing result or may be a device of a cloud environment that provides the AI processing result to another device. The AI device 20 is a computing device that may learn a neural network and may be implemented into various electronic devices such as a server, a desktop personal computer (PC), a notebook PC, and a tablet PC.
The AI device 20 may include an AI processor 21, a memory 25, and/or a communication unit 27.
The AI processor 21 may learn a neural network using a program stored in the memory 25. In particular, the AI processor 21 may learn a neural network for recognizing related data of the washing machine LM. Here, the neural network for recognizing related data of the washing machine LM may be designed to simulate a human brain structure on a computer and include a plurality of network nodes having a weight and simulating a neuron of the human neural network. The plurality of network nodes may exchange data according to each connection relationship so as to simulate a synaptic activity of neurons that send and receive signals through a synapse. Here, the neural network may include a deep learning model developed from the neural network model. In the deep learning model, while a plurality of network nodes is located in different layers, the plurality of network nodes may send and receive data according to a convolution connection relationship. Examples of the neural network model include various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), and a deep Q-network, and the neural network model may be applied to fields such as computer vision, speech recognition, natural language processing, and voice/signal processing.
The processor for performing the above-described function may be a general-purpose processor (e.g., a CPU), but may alternatively be an AI-dedicated processor (e.g., a GPU) for AI learning.
The memory 25 may store various programs and data necessary for an operation of the AI device 20. The memory 25 may be implemented as a non-volatile memory, a volatile memory, a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like. The memory 25 may be accessed by the AI processor 21, and reading/writing/modifying/deleting/updating of data may be performed by the AI processor 21. Further, the memory 25 may store a neural network model (e.g., a deep learning model 26) generated through a learning algorithm for data classification/recognition according to an embodiment of the present disclosure.
The AI processor 21 may include a data learning unit 22 for learning a neural network for data classification/recognition. The data learning unit 22 may learn which learning data to use in order to determine data classification/recognition and a criterion for classifying and recognizing data using the learning data. By obtaining learning data to be used for learning and applying the obtained learning data to a deep learning model, the data learning unit 22 may train the deep learning model.
The data learning unit 22 may be produced in the form of at least one hardware chip to be mounted in the AI device. For example, the data learning unit 22 may be produced in the form of a dedicated hardware chip for artificial intelligence (AI), or may be produced as a part of a general-purpose processor (CPU) or a graphics-dedicated processor (GPU) to be mounted in the AI device 20. Further, the data learning unit 22 may be implemented as a software module. When the data learning unit 22 is implemented as a software module (or program module including an instruction), the software module may be stored in non-transitory computer readable media. In this case, at least one software module may be provided by an Operating System (OS) or may be provided by an application.
The data learning unit 22 may include a learning data acquisition unit 23 and a model learning unit 24.
The learning data acquisition unit 23 may obtain learning data necessary for a neural network model for classifying and recognizing data.
The model learning unit 24 may learn so that a neural network model has a determination criterion for classifying predetermined data, using the obtained learning data. In this case, the model learning unit 24 may learn a neural network model through supervised learning that uses at least a portion of the learning data as a determination criterion. Alternatively, the model learning unit 24 may learn the neural network model through unsupervised learning that finds a determination criterion by self-learning using learning data without supervision. Further, the model learning unit 24 may learn the neural network model through reinforcement learning using feedback on whether a result of status determination according to learning is correct. Further, the model learning unit 24 may learn the neural network model using a learning algorithm including error back-propagation or gradient descent. In supervised learning, a model is trained using a series of learning data and labels (target output values) corresponding thereto, and a neural network model based on supervised learning may be a model that infers a function from training data. Supervised learning receives a series of learning data and target output values corresponding thereto, finds errors through learning that compares actual output values for the input data with the target output values, and modifies the model based on the corresponding results. Supervised learning may be further classified into regression, classification, detection, semantic segmentation, etc., according to a form of the result. Functions derived from supervised learning may again be used for estimating new result values. In this way, the neural network model based on supervised learning may optimize parameters of the neural network model by learning from a large quantity of learning data.
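A minimal sketch of this supervised learning, assuming PyTorch, is shown below: a small feed-forward network is trained with labeled examples, errors are back-propagated, and parameters are updated by gradient descent until the error falls below a reference value. The feature dimension, the number of detergent classes, and the data are placeholder assumptions rather than the disclosure's actual model.

```python
# Minimal supervised learning sketch (assuming PyTorch); sizes and data are
# illustrative placeholders, not the model described in the disclosure.

import torch
import torch.nn as nn

NUM_FEATURES, NUM_DETERGENT_KINDS = 16, 4   # hypothetical sizes

model = nn.Sequential(                      # input -> hidden -> output layers
    nn.Linear(NUM_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, NUM_DETERGENT_KINDS),
)
criterion = nn.CrossEntropyLoss()           # compares outputs with target labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # gradient descent

# Placeholder learning data: feature vectors and detergent-kind labels.
x = torch.randn(64, NUM_FEATURES)
y = torch.randint(0, NUM_DETERGENT_KINDS, (64,))

REFERENCE_ERROR = 0.05                      # stop criterion (error below reference value)
for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()                         # error back-propagation
    optimizer.step()
    if loss.item() < REFERENCE_ERROR:
        break
```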
When the neural network is learned, the model learning unit 24 may store the learned neural network model in the memory. The model learning unit 24 may also store the learned neural network model in the memory of a server connected to the AI device 20 by a wired or wireless network.
In order to improve an analysis result of a recognition model or to save a resource or a time necessary for generation of the recognition model, the data learning unit 22 may further include a learning data pre-processor (not illustrated) and a learning data selection unit (not illustrated).
The learning data pre-processor may pre-process obtained data so that the obtained data may be used in learning for situation determination. For example, the learning data pre-processor may process the obtained data in a predetermined format so that the model learning unit 24 uses obtained learning data for learning for image recognition.
Further, the learning data selection unit may select data necessary for learning among learning data obtained from the learning data acquisition unit 23 or learning data pre-processed in the pre-processor. The selected learning data may be provided to the model learning unit 24. For example, by detecting a specific area of an image obtained through a photographing means of the washing machine LM, the learning data selection unit may select only data of an object included in the specified area as learning data.
Further, in order to improve an analysis result of the neural network model, the data learning unit 22 may further include a model evaluation unit (not illustrated).
The model evaluation unit inputs evaluation data to the neural network model, and when an analysis result output for the evaluation data does not satisfy predetermined criteria, the model evaluation unit may enable the model learning unit 24 to learn again. In this case, the evaluation data may be data previously defined for evaluating a recognition model. For example, when the number or proportion of evaluation data having inaccurate analysis results, among the analysis results of the learned recognition model for the evaluation data, exceeds a predetermined threshold value, the model evaluation unit may determine that the evaluation data do not satisfy the predetermined criteria.
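This evaluation criterion can be sketched as follows; the error-ratio threshold and the names are assumptions for illustration.

```python
# Illustrative sketch of the model evaluation step: compare the proportion of
# inaccurate evaluation results with a predetermined threshold and request
# retraining when it is exceeded. The threshold value is an assumption.

def needs_retraining(predictions, labels, max_error_ratio=0.1):
    wrong = sum(1 for p, t in zip(predictions, labels) if p != t)
    return (wrong / len(labels)) > max_error_ratio

if needs_retraining(predictions=[0, 1, 2, 2], labels=[0, 1, 1, 2]):
    print("evaluation criteria not satisfied: trigger the model learning unit again")
```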
The communication unit 27 may transmit an AI processing result by the AI processor 21 to an external electronic device. The external electronic device may include a Bluetooth device, an autonomous vehicle, a robot, a drone, an AR device, a mobile device, a home appliance and the like.
It has been described that the AI device 20 of
Specifically,
The artificial neural network model is generally configured with an input layer, a hidden layer, and an output layer, and neurons included in each layer may be connected through a weight. Through a linear combination of weights and neuron values and a nonlinear activation function, the artificial neural network model may have a form to approximate complex functions. A learning purpose of the artificial neural network model is to find a weight that minimizes the difference between an output computed at the output layer and an actual output.
The deep neural network may mean an artificial neural network model configured with several hidden layers between an input layer and an output layer. By using many hidden layers, complex nonlinear relationships may be modeled, and a neural network structure that enables advanced abstraction by increasing the number of layers is referred to as deep learning. Deep learning learns from a very large amount of data, and when new data is input, it may select the highest-probability answer based on the learning results. Therefore, deep learning may operate adaptively according to an input and automatically find characteristic factors in a process of learning a model based on data.
A deep learning-based model may include various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), and a deep Q-network of
Referring to
Referring to
The learned artificial neural network model according to an embodiment of the present disclosure uses information about a kind of detergent as learning data. In this case, the information about the kind of detergent may include image information, audio information, and the like. The artificial neural network model, trained repeatedly, may stop learning when an error value becomes less than a reference value and may be stored in the memory of the AI device. When the learned artificial neural network model is used, if image information or voice information about the kind of detergent is input, the washing machine may determine and classify the kind of detergent based on the corresponding information.
Referring to
The AI device applies information about the kind of detergent obtained through the external information acquisition unit of the washing machine to a pre-learned artificial neural network (ANN) model (S920).
In this case, the information about the kind of detergent may include image information, voice information, text information, gesture information, detergent component information, and the like, and is not limited to the listed examples. Further, the above-described artificial neural network model has been trained in advance, using information about kinds of detergent as learning data, before being used to identify the kind of detergent, and is a reliable artificial neural network model that derives an output value with an error less than a specific error value.
The washing machine may determine the kind of detergent according to the output value of the artificial neural network model (S930).
According to the input data, a specific output value may be derived from the artificial neural network model, and the kind of detergent may be determined based on the output value. In this case, when a use environment of the washing machine is changed, input data having a large difference from learning data of the pre-learned artificial neural network model is applied, and thus an error may occur. In this case, by additionally providing an autoencoder, an AI device may be implemented to prevent a malfunction according to a change in environment.
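One way the autoencoder mentioned above could be used, sketched under the assumption of PyTorch and illustrative sizes, is to flag inputs whose reconstruction error is unusually large as out-of-distribution, so that the classification result is treated with caution when the use environment has changed.

```python
# Minimal sketch (assuming PyTorch): an autoencoder trained on the same kind of
# input as the classifier; a high reconstruction error on new input suggests the
# use environment has changed. Sizes and the threshold are assumptions.

import torch
import torch.nn as nn

NUM_FEATURES = 16
autoencoder = nn.Sequential(
    nn.Linear(NUM_FEATURES, 4), nn.ReLU(),   # encoder
    nn.Linear(4, NUM_FEATURES),              # decoder
)

def is_out_of_distribution(features, threshold=0.5):
    with torch.no_grad():
        reconstruction = autoencoder(features)
        error = torch.mean((features - reconstruction) ** 2).item()
    return error > threshold                  # large error -> unfamiliar input

print(is_out_of_distribution(torch.randn(NUM_FEATURES)))
```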
The washing machine receives predetermined detergent quantity information according to a laundry amount from the server based on the determined detergent kind (S940).
The server may store appropriate detergent quantity information set for each detergent kind and laundry amount in the memory, or may obtain the corresponding information through web crawling. That is, when the kind of detergent is determined by the AI device, the washing machine may request, from the server, appropriate detergent quantity information set in advance for each laundry amount corresponding to the kind of the detergent.
In this case, the server may transmit the appropriate detergent quantity information to the washing machine in response to the request. The washing machine may store the received appropriate detergent quantity information in a memory.
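The request/response exchange with the server might look like the following sketch; the endpoint URL, parameter names, and response fields are assumptions for illustration, since the disclosure does not specify a particular API.

```python
# Illustrative sketch of requesting the appropriate detergent quantity from the
# server. The endpoint and the response format are hypothetical assumptions.

import requests

def fetch_appropriate_quantity(server_url, detergent_kind, laundry_amount_kg):
    response = requests.get(
        f"{server_url}/detergent-quantity",            # hypothetical endpoint
        params={"kind": detergent_kind, "laundry_kg": laundry_amount_kg},
        timeout=5,
    )
    response.raise_for_status()
    info = response.json()                             # e.g. {"quantity_ml": 55}, assumed format
    return info["quantity_ml"]

# e.g. quantity = fetch_appropriate_quantity("https://example.com", "Tech Clean White", 6.0)
# The result can be cached locally so the washing machine can still decide an
# appropriate quantity when it is not connected to the server.
```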
The washing machine may measure a laundry amount of the inner tub through the laundry amount detection unit (S950).
In this case, a method of detecting a laundry amount through the laundry amount detection unit may include a method of photographing an image of the inner tub through the image acquisition unit, a method of estimating a laundry amount through control information of a motor, a method of estimating a laundry amount through a six-axis sensor, and the like.
The washing machine may detect a quantity of detergent supplied through the detergent supply unit (S960).
As described above, the detergent supply unit is an inlet for supplying detergent to the washing machine, and is generally positioned at the front of the washing machine so that it can be slid in and out of the washing machine; in one embodiment of the present disclosure, a detergent quantity detection unit for detecting a quantity of detergent supplied to the detergent supply unit may be provided.
The washing machine may determine whether the injected quantity of detergent is excessive or insufficient based on the predetermined detergent quantity information received from the server (S970).
The washing machine may detect a quantity of supplied detergent through the detergent quantity detection unit, and determine whether the quantity of detergent is excessive or insufficient according to the detected quantity of supplied detergent.
In this case, when the detergent quantity is insufficient, the washing machine may display the fact through the display unit or output the fact through the speaker.
In this case, when the quantity of supplied detergent exceeds a predetermined detergent quantity, the washing machine may supply the predetermined quantity of detergent together with washing water to the inner tub and store the remaining detergent in a detergent storage unit. However, when the detergent is stored for a predetermined time or more, the detergent may deteriorate. Therefore, when a predetermined time has elapsed, the detergent may be discharged through a separate pipe instead of being used for washing.
In this case, when the quantity of supplied detergent is less than the predetermined quantity of detergent, the fact is displayed through the display unit or output through the speaker, as described above, and additional detergent supply by the user is required. When detergent is additionally supplied by the user, the detergent quantity detection unit may measure and/or detect the additionally supplied detergent quantity, and when the additionally supplied detergent quantity satisfies the insufficient detergent quantity, a notification sound may be output through the speaker.
However, when the additionally supplied detergent does not satisfy the insufficient quantity of detergent, or when additional detergent is not supplied within a preset time, the washing machine determines that the user does not intend to supply additional detergent; in this case, the washing machine may stop the process related to additional supply of detergent and immediately perform a washing operation.
The washing machine may supply only the predetermined detergent quantity among the supplied detergent quantity to the inner tub together with the washing water (S980).
In this case, when a detergent quantity is excessively supplied, the remaining detergent may be stored in the detergent storage unit, as described above.
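The decision flow of steps S960 to S980 can be summarized in the following sketch, with stand-in functions for the display/speaker and the detergent storage unit; quantities and names are illustrative assumptions.

```python
# Sketch of the S960-S980 decision flow with hypothetical stand-ins for the
# display/speaker and the detergent storage unit; all values are assumptions.

def notify_user(message):            # stand-in for the display unit / speaker
    print(message)

def store_excess(excess_ml):         # stand-in for the detergent storage unit
    print(f"storing {excess_ml} ml of excess detergent")

def handle_detergent_quantity(supplied_ml, appropriate_ml, additional_ml=0):
    """Decide how much detergent to supply to the inner tub with the washing water."""
    if supplied_ml > appropriate_ml:
        store_excess(supplied_ml - appropriate_ml)
        return appropriate_ml
    if supplied_ml < appropriate_ml:
        notify_user(f"detergent short by {appropriate_ml - supplied_ml} ml")
        total = supplied_ml + additional_ml          # detergent added by the user, if any
        if total < appropriate_ml:
            # No (or insufficient) additional supply within the preset time:
            # start washing with what has been supplied so far.
            return total
        store_excess(total - appropriate_ml)
        notify_user("appropriate detergent quantity secured")
    return appropriate_ml

print(handle_detergent_quantity(supplied_ml=40, appropriate_ml=55, additional_ml=20))
```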
A terminal of
Referring to
The terminal may learn an artificial neural network model based on the received information (S1020).
The terminal may obtain information about a kind of detergent to be supplied by the user through the external information acquisition unit (S1030).
The terminal may determine a kind of detergent through the learned artificial neural network model (S1040).
The terminal may request, from the server, appropriate detergent quantity information according to the laundry amount based on the determined kind of detergent (S1050).
The server may transmit the appropriate detergent quantity information according to the kind of detergent to the terminal (S1060).
In this case, the terminal may store the appropriate detergent quantity information according to the kind of detergent in a memory (not shown) provided in the washing machine. In this case, even if the washing machine is not connected by communication to the server, the washing machine may determine the appropriate detergent quantity based on the stored appropriate detergent quantity information.
The terminal may perform a washing operation using the appropriate detergent quantity (S1070).
As described above, the external information acquisition unit may include a camera and a microphone, and the communication module may also be included in the external information acquisition unit.
The camera may be provided inside the washing machine, or a camera outside the washing machine that is electrically connected to or communicates with the washing machine may be used to obtain image information. Further, when the detergent is photographed through the camera, the area including the detergent may be misrecognized because of areas other than the detergent, and thus a camera having a function that can identify the area corresponding to the detergent using another pre-learned artificial neural network model may be used. For example, when an image including the detergent is photographed through the camera, the area corresponding to the detergent may be separated and recognized in the image, and the area may be analyzed, thereby deriving “LG Household & Health Tech Clean White”.
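A sketch of this two-stage, camera-based identification is shown below, with dummy stand-ins for the region detector and the classifier; the bounding box, label set, and model callables are hypothetical assumptions.

```python
# Illustrative sketch: a detector isolates the image region corresponding to the
# detergent container, and only that crop is classified, reducing misrecognition
# from the background. Model objects and labels are hypothetical stand-ins.

import numpy as np

def identify_detergent(image, region_detector, classifier, labels):
    x, y, w, h = region_detector(image)           # hypothetical: bounding box of the detergent
    crop = image[y:y + h, x:x + w]                # keep only the detergent area
    return labels[classifier(crop)]               # hypothetical: classifier returns a label index

labels = ["LG Household & Health Tech Clean White", "unknown detergent"]
image = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy image so the sketch runs
print(identify_detergent(image,
                         region_detector=lambda img: (100, 50, 200, 300),
                         classifier=lambda crop: 0,
                         labels=labels))
```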
The microphone is a device capable of receiving sound information of a user so that a kind of detergent can be identified, and may indicate a speech recognition device. The speech recognition device may receive the user's sound information and extract a feature value from the sound information to determine a speech intention. Specifically, the speech recognition device may receive speech information through a microphone, convert the speech information into text, and extract feature values to be used for determining a speech intention from the text information, thereby determining the speech intention through a corresponding word or a combination of words. For example, when the speech is “Tech Clean White!”, the speech recognition device may recognize “Tech Clean White” and derive the “LG Household & Health Tech Clean White” detergent manufactured and sold by LG Household & Health Care from “Tech Clean White”.
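A minimal sketch of the speech-based identification is shown below; the keyword table that maps recognized text to a detergent product is an assumption for illustration.

```python
# Illustrative sketch: text recognized from the microphone is matched against
# known product keywords to derive the detergent kind. The table is an assumption.

KEYWORD_TO_DETERGENT = {
    "tech clean white": "LG Household & Health Tech Clean White",
}

def detergent_from_speech(recognized_text):
    text = recognized_text.lower()
    for keyword, product in KEYWORD_TO_DETERGENT.items():
        if keyword in text:
            return product
    return None

print(detergent_from_speech("Tech Clean White!"))
```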
The communication module may receive a message about a kind of detergent from an external terminal connected to the communication. The washing machine may receive a message about the kind of detergent through the communication module, and determine a kind of detergent according to the received message. For example, when the user selects “LG Household & Health Tech Clean White” in a mobile phone application, the mobile phone may send the message to the washing machine. As a result, the washing machine may determine a kind of detergent based on the received message.
The washing machine may classify an area including the detergent based on the image information of the detergent, and combine elements forming an appearance of the detergent such as a design, a text, and a color of the detergent to determine information about the kind of the detergent.
In one embodiment of the present disclosure, when a kind of detergent is identified, the washing machine may output information of the detergent by sound through the speaker unit. Thereby, it is possible to prevent a malfunction of inaccurately receiving an appropriate quantity of detergent due to misunderstanding of the kind of detergent.
When the washing machine determines a kind of detergent, the washing machine may receive appropriate detergent quantity information according to a laundry amount based on a kind of detergent from the server. Thereafter, when an appropriate quantity of detergent is supplied to the washing machine, the washing machine may perform a washing operation.
An external terminal including a mobile phone, a tablet, a PC, etc. may transmit a message about a kind of detergent used in the washing machine. In this case, a message transmitting means may use a known method such as transmitting through an application or transmitting a text message.
When the washing machine receives a message on a kind of detergent from an external terminal, the washing machine may perform a process described with reference to
Even if a kind of detergent is identified, a user may have difficulty in adjusting and supplying an accurate quantity of detergent unless the user uses a measuring cup. Therefore, a function for user convenience is required.
Referring to
When a laundry amount is detected through the laundry amount detection unit, the washing machine may determine an appropriate quantity of detergent based on information about a quantity of detergent according to the laundry amount received in advance. In this case, when a quantity of supplied detergent does not reach an appropriate quantity of detergent, the washing machine may notify the user that the detergent is insufficient through a speaker or a display unit.
According to one embodiment of the present disclosure, when the insufficient quantity of detergent is satisfied by additional detergent supply, the washing machine may notify the user of the fact through the speaker or the display unit. By displaying the notification on the display unit or outputting a guide message through the speaker, the user can be prevented from supplying more detergent than is required to the washing machine.
According to one embodiment of the present disclosure, excess detergent may be stored in the detergent storage unit provided in the washing machine and be used in a subsequent washing process.
Further, according to another embodiment of the present disclosure, when the user does not supply detergent within a designated time despite a previous notification of detergent shortage, or when detergent is supplied but the supply is stopped before the appropriate detergent quantity is reached, the washing machine determines that the user no longer intends to supply detergent, and performs a washing operation with the detergent supplied so far.
According to the detergent shortage notification, additional detergent is supplied to the washing machine, and when the appropriate quantity of detergent is secured, the washing machine may perform a washing operation. In this case, the washing machine may output a message notifying the start of washing through the speaker.
The present disclosure may be implemented as computer-readable code in a program recording medium. The computer-readable medium includes all kinds of recording devices that store data that may be read by a computer system. The computer-readable medium may include, for example, a Hard Disk Drive (HDD), a Solid State Disk (SSD), a Silicon Disk Drive (SDD), a read-only memory (ROM), a random-access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like, and also includes a medium implemented in the form of a carrier wave (e.g., transmission through the Internet). Accordingly, the detailed description should not be construed as limiting in all aspects, but should be construed as illustrative. The scope of the present disclosure should be determined by reasonable interpretation of the attached claims, and all changes within the equivalent range of the present disclosure are included in the scope of the present disclosure.
Effects of an artificial intelligence-based washing machine and a method of controlling a detergent quantity thereof according to an embodiment of the present disclosure are described as follows.
According to the present disclosure, the user can be guided to use an appropriate quantity of detergent according to a laundry amount.
Further, according to the present disclosure, power consumption can be reduced upon driving.
The effects of the present disclosure are not limited to the above-described effects and the other effects will be understood by those skilled in the art from the detailed description.
Claims
1. A method of controlling a detergent quantity of a washing machine based on artificial intelligence (AI), the method comprising:
- obtaining information about a kind of detergent from an external information collector;
- applying the information about a kind of detergent to a pre-learned artificial neural network (ANN) model;
- determining a kind of the detergent based on the application result;
- receiving predetermined detergent quantity information from a server according to a laundry amount based on the determined kind of detergent; and
- determining whether the detergent quantity supplied through a detergent inlet is less than the predetermined detergent quantity based on the detergent quantity information.
2. The method of claim 1, wherein the predetermined detergent quantity is a value determined in advance by a manufacturer of the detergent according to the laundry amount.
3. The method of claim 1, wherein the external information collector comprises at least one of a camera or a microphone.
4. The method of claim 1, further comprising obtaining information about a kind of the detergent from a message received from a user terminal.
5. The method of claim 1, further comprising displaying information of an insufficient detergent quantity through a display of the washing machine or outputting information of an insufficient detergent quantity through a speaker of the washing machine, when the detergent quantity is less than the predetermined detergent quantity.
6. The method of claim 1, further comprising:
- when the detergent quantity exceeds the predetermined detergent quantity,
- supplying the detergent corresponding to the predetermined quantity of detergent together with the washing water to the inner tub of the washing machine; and
- storing the remaining detergent, except for the supplied detergent in a detergent storage of the washing machine.
7. The method of claim 5, further comprising determining whether the additional supply detergent satisfies the insufficient detergent quantity.
8. The method of claim 7, wherein the determining of whether the additional supply detergent satisfies the insufficient detergent quantity comprises:
- measuring the additional supply detergent; and
- outputting, if the additional supply detergent quantity satisfies the insufficient detergent quantity, a notification sound through the speaker.
9. The method of claim 7, further comprising stopping a process related to supply of the detergent and performing a washing operation, if the additional supply detergent quantity does not satisfy the insufficient detergent quantity and when the detergent is not supplied within a predetermined time.
10. The method of claim 1, wherein the neural network model is stored in an artificial intelligence (AI) device, and
- wherein the applying of the information comprises:
- transmitting a feature value related to information about a kind of detergent to the AI device; and
- obtaining a result in which information about a kind of detergent is applied to the artificial neural network model from the AI device.
11. The method of claim 1, wherein the artificial neural network model is stored in a network, and
- wherein the applying of the information comprises:
- transmitting information about a kind of the detergent to the network; and
- obtaining a result in which information about a kind of the detergent is applied to the artificial neural network model from the network.
12. A washing machine based on artificial intelligence, the washing machine comprising:
- a controller;
- a memory;
- a communication circuit;
- a processor; and
- an external information collector,
- wherein information about a kind of detergent is obtained from the external information collector, and
- by applying the information about the kind of detergent to a pre-learned Artificial Neural Network (ANN), the kind of detergent is determined, and it is determined whether a quantity of detergent supplied through a detergent inlet is less than the predetermined detergent quantity based on predetermined detergent quantity information according to a laundry amount and a kind of detergent received through the communication circuit.
13. The washing machine of claim 12, wherein the predetermined detergent quantity is a value determined in advance according to the laundry amount by a manufacturer of the detergent.
14. The washing machine of claim 12, wherein the external information collector comprises at least one of a camera or a microphone.
15. The washing machine of claim 12, further comprising a user interface,
- wherein the user interface comprises at least one of a display or a speaker,
- wherein insufficient detergent quantity information is displayed through the display or is output through the speaker, when the detergent quantity is less than the predetermined detergent quantity.
16. The washing machine of claim 12, wherein the controller supplies detergent corresponding to the predetermined detergent quantity together with washing water to an inner tub of the washing machine and stores the remaining detergent, except for the supplied detergent in a detergent storage of the washing machine, if the detergent quantity exceeds the predetermined detergent quantity.
17. The washing machine of claim 15, wherein the processor determines whether an additional supply detergent quantity measured through a detergent quantity detector satisfies the insufficient detergent quantity.
18. The washing machine of claim 17, wherein the controller outputs a notification sound through the speaker when the insufficient detergent quantity is satisfied.
19. The washing machine of claim 17, wherein the controller stops a process related to supply of detergent and performs a washing operation, if the additional supply detergent quantity does not satisfy the insufficient detergent quantity and when the detergent is not supplied within the preset time.
Type: Application
Filed: Sep 10, 2019
Publication Date: Jan 2, 2020
Applicant: LG ELECTRONICS INC. (Seoul)
Inventor: Jiyoun YANG (Seoul)
Application Number: 16/566,194