Decongesting Wireless IoT Networks
Researchers from the Tokyo University of Science and Keio University propose that a machine learning algorithm based on the tug-of-war model can help resource-constrained devices on a wireless network select optimal channels for information transmission, potentially decongesting massive IoT networks.

The wireless Internet of Things (IoT) is a network of devices in which each device can directly send information to another over wireless communication channels, without human intervention. As the number of IoT devices grows, so does the amount of information on these wireless channels, congesting the network and causing information to be lost to interference or to fail to arrive at all. Research to solve this congestion problem is ongoing, and the most widely accepted and applied solution is "multi-channel" technology, in which transmissions are distributed among several parallel channels according to the traffic on each channel at a given time.

At present, however, optimal transmission channels are selected using algorithms that most existing IoT devices cannot support, because these devices are resource-constrained: they have little storage and processing power, and must conserve energy while remaining in operation for long periods. In a recent study published in Applied Sciences, a group of scientists from the Tokyo University of Science and Keio University, Japan, propose selecting channels with a machine-learning algorithm based on the tug-of-war model, a fundamental decision-making model proposed earlier by Professor Song-Ju Kim of Keio University for problems such as distributing information across channels. "We realized that this algorithm could be applied to IoT devices, and we decided to implement it and experiment with it," says Professor Mikio Hasegawa, the lead scientist from the Tokyo University of Science.

In their study, the researchers built a network of resource-constrained IoT devices in which each device could select only one of several available channels for each transmission. In the experiment, the devices were tasked with waking up, transmitting one piece of information, going to sleep, and then repeating the cycle a fixed number of times. The role of the proposed algorithm was to enable the devices to select the best channel each time, so that by the end of the experiment the number of successful transmissions (those in which the information reaches its destination intact) was as high as possible.
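
To make the setup concrete, the Python sketch below shows what one device's duty cycle might look like. All names here (run_device, transmit, NUM_CYCLES, the channel count, the sleep interval) are illustrative assumptions, not details from the paper:

```python
import time

CHANNELS = [0, 1, 2, 3]   # available parallel channels (illustrative count)
NUM_CYCLES = 1000         # wake/transmit/sleep repetitions per device (assumption)

def run_device(select_channel, transmit, record_result):
    """One device's duty cycle: wake, send one frame, sleep, repeat."""
    for _ in range(NUM_CYCLES):
        channel = select_channel()    # choose a channel for this cycle
        ok = transmit(channel)        # True if the frame arrived intact
        record_result(channel, ok)    # feed the outcome back to the learner
        time.sleep(0.01)              # sleep between cycles to save power
```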

The algorithm is a form of reinforcement learning. Each time a piece of information is transmitted through a channel, the device observes whether it reached its destination completely and accurately, and updates its estimate of the probability of a successful transmission on that channel. These estimates are refined with every subsequent transmission and guide the next channel choice.
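
As a rough illustration of this feedback loop, here is a minimal learner that plugs into the duty-cycle sketch above. It tracks a simple per-channel success rate with occasional random exploration; the paper's actual tug-of-war update rule differs, and the class name, epsilon value, and optimistic prior are our assumptions:

```python
import random

class ChannelLearner:
    """Tracks per-channel success rates and greedily picks the best channel.

    A stand-in for the tug-of-war learner in the paper: the feedback
    loop is the same idea, but the actual update rule differs.
    """
    def __init__(self, channels, epsilon=0.1):
        self.channels = list(channels)
        self.epsilon = epsilon          # exploration rate (assumption)
        self.successes = {c: 0 for c in self.channels}
        self.attempts = {c: 0 for c in self.channels}

    def estimate(self, channel):
        # Optimistic prior of 1.0 until a channel has been tried once,
        # so every channel gets sampled early on.
        if self.attempts[channel] == 0:
            return 1.0
        return self.successes[channel] / self.attempts[channel]

    def select_channel(self):
        # Occasionally explore, so a once-congested channel can be
        # retried after its traffic decreases.
        if random.random() < self.epsilon:
            return random.choice(self.channels)
        return max(self.channels, key=self.estimate)

    def record_result(self, channel, ok):
        self.attempts[channel] += 1
        if ok:
            self.successes[channel] += 1
```

A device would then run run_device(learner.select_channel, transmit, learner.record_result), supplying its radio's actual transmit routine.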

The researchers also used this setup to check (a) whether the algorithm improved performance, (b) whether it selected channels without bias, and (c) whether it could adapt to variations in channel traffic. For these tests, they built an additional control system in which each device was assigned a fixed channel and could not switch to any other when transmitting information. In the first case, some channels were congested before the experiment began, and the scientists found that the number of successful transmissions was larger with the algorithm than without it. In the second case, some channels became congested when the algorithm was not used, and after a point no information could be transmitted through them, causing "unfairness" in channel selection; with the algorithm, channel selection remained fair. The findings for the third case explain those for the first two: with the algorithm, devices automatically began to avoid a congested channel and returned to it only once its traffic had decreased.
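
A toy simulation, reusing the ChannelLearner and CHANNELS sketches above, shows why an adaptive device outperforms a fixed-assignment control whose assigned channel happens to be congested. The success probabilities and step count are made up for illustration and do not come from the paper:

```python
def simulate(adaptive, steps=5000, congested=0, p_congested=0.2, p_clear=0.9):
    """Compare a fixed channel assignment against adaptive selection."""
    learner = ChannelLearner(CHANNELS)
    wins = 0
    for _ in range(steps):
        # The control device is stuck on the congested channel;
        # the adaptive device asks the learner.
        channel = learner.select_channel() if adaptive else congested
        ok = random.random() < (p_congested if channel == congested else p_clear)
        learner.record_result(channel, ok)
        wins += ok
    return wins / steps

print("fixed assignment :", simulate(adaptive=False))  # stays near p_congested
print("adaptive learner :", simulate(adaptive=True))   # approaches p_clear
```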

"We achieved channel selection with a small amount of computation and a high-performance machine-learning algorithm," Prof Hasegawa tells us. While this means that the algorithm successfully solved the channel selection problem under experimental conditions, its faring in the real world remains to be seen. "Field experiments to test the robustness of this algorithm will be conducted in further research," the scientists say. They also plan to improve the algorithm in future research by taking into consideration other network characteristics, such as channel transmission quality.

The world is swiftly moving towards massive wireless IoT networks, with ever more devices connecting over wireless channels worldwide, and organizations and researchers everywhere are racing to solve the channel-selection problem. Prof Hasegawa and his team have taken one of the first steps in that race, and the future of high-speed, error-free wireless information transmission may be near.