REDUNDANCY in NETWORKS 2)
The existence of a multiplicity of nodes and paths between them, able to perform in an equivalent manner.
The theory of redundancy in networks was developed by W.S. McCULLOCH, W.H. PITTS and J. von NEUMANN, who showed that redundancy offers the only hope of obtaining satisfactory operative results from networks built of sometimes unreliable components.
In St. BEER's words: "… von NEUMANN propounded a mathematical theorem which showed that if one is prepared to go on increasing the redundancy of a network without limit, it is possible to obtain an output of arbitrarily high accuracy from a network whose components are of arbitrarily low reliability" (1968, p.201).
This is so in natural networks such as the cerebral cortex, which remains more or less reliable as a whole even when numerous neurons have been lost, for example through a cerebral haemorrhage. It is also the case in artificial networks, and even in social ones, as may be seen when they repair themselves, sometimes after massive destruction of elements and interconnections.
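Von NEUMANN's result can be illustrated with a small simulation. The sketch below is not his actual multiplexing construction, only a simple stand-in: each "component" gives the wrong output with some fixed probability, a redundant network runs many equivalent components in parallel, and a majority vote over their outputs plays the role of the restoring organ. All function names are illustrative, not drawn from the entry itself.

```python
import random

def unreliable_component(correct_value, error_prob, rng):
    """A component that emits the correct binary value, but fails
    (flips its output) with probability error_prob."""
    return correct_value if rng.random() >= error_prob else 1 - correct_value

def redundant_network(correct_value, error_prob, n_copies, rng):
    """Run n_copies equivalent components in parallel and take a
    majority vote over their outputs -- the redundancy of paths."""
    votes = sum(unreliable_component(correct_value, error_prob, rng)
                for _ in range(n_copies))
    return 1 if votes > n_copies / 2 else 0

def observed_accuracy(error_prob, n_copies, trials=10_000, seed=0):
    """Fraction of trials in which the redundant network as a whole
    reproduces the intended output."""
    rng = random.Random(seed)
    correct = sum(redundant_network(1, error_prob, n_copies, rng) == 1
                  for _ in range(trials))
    return correct / trials
```

With components that err 20% of the time, a single component is right only about 80% of the time, while a majority vote over 101 copies is almost always right: increasing redundancy drives the network's accuracy arbitrarily close to 1 even though each component remains arbitrarily unreliable.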
Categories
- 1) General information
- 2) Methodology or model
- 3) Epistemology, ontology and semantics
- 4) Human sciences
- 5) Discipline oriented
Publisher
Bertalanffy Center for the Study of Systems Science (2020).
To cite this page, please use the following information:
Bertalanffy Center for the Study of Systems Science (2020). Title of the entry. In Charles François (Ed.), International Encyclopedia of Systems and Cybernetics (2). Retrieved from www.systemspedia.org/[full/url]