Guiding principle of reservoir computing based on "small-world" network

Ken-Ichi Kitayama. Sci Rep. 2022 Oct 6;12(1):16697. doi: 10.1038/s41598-022-21235-y.

Abstract

Reservoir computing is a computational framework built on recurrent neural networks that is gaining attention because of its drastically simplified training process. However, no methodology has yet been established for constructing a reservoir that is optimal for a given task. Meanwhile, the "small-world" network is known to model real-world networks such as biological systems and social communities. It lies between the completely regular and the totally disordered network and is characterized by highly clustered nodes with a short path length. This study aims to provide a guiding principle for the systematic synthesis of desired reservoirs by exploiting the controllable parameters of the small-world network. We validate the methodology using two different types of benchmark tests: a classification task and a prediction task.


Conflict of interest statement

The author declares no competing interests.

Figures

Figure 1. Reservoir computing architecture. The input weight matrix Win is a fixed N × L matrix, where N is the number of nodes in the reservoir and L is the dimension of the input at each time step. The reservoir weight matrix Wres is a fixed N × N matrix, typically sparse, with nonzero elements drawn from either a symmetric uniform, a discrete bi-valued, or a normal distribution centered around zero. The output weight matrix Wout is a learned M × N matrix, where M is the number of output classes.
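For readers who want to experiment, the following is a minimal Python sketch of this architecture. The tanh activation, the spectral-radius scaling, and the ridge-regression readout are standard echo-state-network choices assumed here; they are not details given in the caption.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 1000, 3, 6   # reservoir nodes, input dimension, output classes

W_in = rng.uniform(-0.1, 0.1, size=(N, L))    # fixed N x L input weights

W_res = rng.uniform(-1.0, 1.0, size=(N, N))   # fixed N x N reservoir weights
W_res[rng.random((N, N)) > 0.008] = 0.0       # sparsify to density ~0.008
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the reservoir with a (T, L) input sequence; return (T, N) states."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Only W_out (M x N) is trained, e.g. by ridge regression on the collected
# states X and one-hot targets Y:  W_out = Y.T @ X @ inv(X.T @ X + beta * I)
```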
Figure 2. Architectures of 10-node (N=10) networks with node degree k=2: regularly connected (p=0) at top left, modestly disordered (p=0.5) at top right, and totally disordered (p=1) at the bottom. Each node is connected to four (= 2k) neighboring nodes. Each table lists the pairs of connected nodes for each p. For example, for p=0, node #1 is connected to nodes #2, 3, 9, and 10.
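The Watts–Strogatz construction behind these examples can be reproduced with networkx, as a sketch. Note that networkx's k argument counts all 2k neighbours of a node, i.e. twice the paper's k, and that networkx numbers nodes from 0 rather than 1.

```python
import networkx as nx

# 10-node Watts-Strogatz graphs matching Fig. 2 (paper's k=2, so each
# node has 2k = 4 neighbours; networkx's k argument is that total count).
N, k = 10, 2
for p in (0.0, 0.5, 1.0):
    G = nx.watts_strogatz_graph(n=N, k=2 * k, p=p, seed=1)
    # networkx node 0 corresponds to the paper's node #1
    print(f"p={p}: node 0 is connected to", sorted(G.neighbors(0)))
```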
Figure 3. (a) Clustering coefficient C(p)/C(0) and average path length (hop count) L(p)/L(0) versus the probability of rewiring p for 1000-node (N=1000) networks with node degrees k=4 and 6. The small-world range, roughly p = 0.01–0.7, is indicated by the shaded area. (b) Clustering coefficients C(p)/C(0) and average path lengths L(p)/L(0) versus p for node degrees k=20 and 100.
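A sketch of how the normalized curves in this figure can be computed with networkx; averaging over multiple random realizations, which the caption does not detail, is omitted here.

```python
import numpy as np
import networkx as nx

# Normalized clustering coefficient and average path length vs. p,
# as in Fig. 3a (single realization per p; N=1000, paper's k=4).
N, k = 1000, 4
G0 = nx.watts_strogatz_graph(N, 2 * k, 0.0)   # p=0: regular ring lattice
C0 = nx.average_clustering(G0)
L0 = nx.average_shortest_path_length(G0)

for p in np.logspace(-4, 0, 9):
    # connected variant retries rewiring so path lengths are well defined
    G = nx.connected_watts_strogatz_graph(N, 2 * k, p, seed=1)
    print(f"p={p:.4f}  C(p)/C(0)={nx.average_clustering(G) / C0:.3f}  "
          f"L(p)/L(0)={nx.average_shortest_path_length(G) / L0:.3f}")
```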
Figure 4. (a) Classification accuracy for the six human activities versus the probability of rewiring p for N=1000 and k=4. The accuracy is maximized at 74.9% at p=0.5. As an example, the temporal acceleration waveforms of walking along the x-, y-, and z-axes are shown at the bottom. (b) Prediction accuracy, expressed as the mean square error (MSE) of the Mackey–Glass (MG) chaotic signal, versus p for N=1000 and k=4. The MSE is minimized at 4.98×10⁻⁶ at p=0.1.
Figure 5. Confusion matrices for the classification of six human activities, compared for a 1000-node reservoir. (a) Reservoir weight matrix Wres from a small-world network with (k, p) = (4, 0.5). Accuracy (in green) is 75.2%. (b) Conventional sparse random matrix Wres with a density of 0.008. Accuracy is 73.0%.
Figure 6. Predicted and target waveforms of the MG chaotic time series over 2000 time steps for a 1000-node reservoir. (a) Result for a reservoir weight matrix Wres from a 1000-node small-world network with (k, p) = (4, 0.1). The mean square error (MSE) is 2.46×10⁻⁶. (b) Conventional sparse random matrix Wres with a density of 0.008. The MSE is 1.38×10⁻⁶. The results of the two benchmark tests are summarized in Table 1.
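For reference, the MG benchmark series can be generated as below. The delay τ=17 and the other coefficients are the customary chaotic settings, assumed here because the captions do not state the paper's exact parameters.

```python
import numpy as np

# Mackey-Glass time series via Euler integration of
#   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
def mackey_glass(T=2000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    hist = int(tau / dt)                 # number of delayed samples
    x = np.full(T + hist, x0)
    for t in range(hist, T + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau**n) - gamma * x[t])
    return x[hist:]

target = mackey_glass()                  # 2000-step waveform, as in Fig. 6
```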
Figure 7. Performance of human-activity classification using a reservoir weight matrix Wres generated from the Watts–Strogatz graph. (a) Classification accuracy versus the number of nodes N, assuming node degree k=4 and rewiring probability p=0.5, within the small-world range. (b) Classification accuracy versus node degree k, assuming p=0.5 and a 1000-node small-world network. The horizontal axis at the top shows the density of the matrix Wres calculated by Eq. (6).
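Eq. (6) itself is not reproduced on this page. A plausible reconstruction, consistent with the density of 0.008 quoted for N=1000 and k=4 in Figs. 5 and 6, follows from each of the N nodes coupling to its 2k neighbours:

```python
# Hypothesized form of Eq. (6): the symmetric W_res of a Watts-Strogatz
# graph has N * 2k nonzero entries out of N**2 in total, so
def wres_density(N, k):
    return 2 * k / N

print(wres_density(1000, 4))   # 0.008, matching Figs. 5 and 6
```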
Figure 8. Method for generating the reservoir weight matrix Wres for a 10-node (N=10) network with (k, p) = (2, 0.5). The table of connected node pairs is on the left, and the 10 × 10 weight matrix Wres is on the right. For instance, the connections of node 1, the node pairs (1,6) and (1,8), are reflected in the weight matrix Wres, as marked by circles.
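The generation step in this figure can be sketched as follows. The symmetric placement of each weight and the uniform weight distribution are assumptions, the latter being one of the standard choices listed in Fig. 1.

```python
import numpy as np
import networkx as nx

# Build W_res from the connected node pairs of a 10-node Watts-Strogatz
# graph with (k, p) = (2, 0.5), following the recipe of Fig. 8.
rng = np.random.default_rng(2)
N, k, p = 10, 2, 0.5
G = nx.watts_strogatz_graph(n=N, k=2 * k, p=p, seed=2)

W_res = np.zeros((N, N))
for i, j in G.edges():                   # e.g. the pairs (1,6), (1,8), ...
    w = rng.uniform(-1.0, 1.0)
    W_res[i, j] = W_res[j, i] = w        # mirror each pair (assumed symmetric)
```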


