Gabbriellini, S., "The aim of a Neural Social Researcher", abstract from the «Sixth International Conference on Logic and Methodology (RC33)» (Amsterdam, The Netherlands, 5 May 2004), ISA Research Committee on Logic and Methodology, Amsterdam, The Netherlands 2004: 312-313 [https://hdl.handle.net/10807/299736]
The aim of a Neural Social Researcher
Gabbriellini, Simone
First
Conceptualization
2004
Abstract
The analysis of relational data appears all the more difficult relative to the researcher's range of action. A methodology founded upon the use of neural networks allows the researcher to leave in the background, during the analysis phase, the theoretical formalization of the distribution of the relationships that constitute the social data under examination, as the statistical tradition would require: that commitment is bypassed by the software. With the appropriate construction of indexes, it is possible to analyze the matrix of weights produced by the net; nevertheless, the net does not return any function able to approximate the course of the analyzed phenomenon, because no such function was supplied at the beginning of the process. A neural network is an object able to reproduce, with a minimal amount of error, the intensity of the relationships that tie together the values of the variables we build from the segment of reality we want to examine. This ability makes the net comparable to a liquid whose viscosity we can dose: too diluted, it cannot generalize from the training set to the real set of analysis, becoming too tied to the peculiarities of the former (a problem called overfitting); conversely, if we make the fluid too viscous, preventing it from slipping into every ravine of our object, it becomes able to generalize but unable to capture the peculiarities of the object. The aim of the neural social researcher is therefore not the choice of a theoretical distribution with which to analyze the data, but the choice of the correct values for the parameters of the net that govern its analytical abilities. It is, in a sense, a methodology that works case by case: it can produce scenarios and simulations, but it does not generalize in the traditional statistical sense of the term. Is this a consequence we must submit to, a sort of implicit renunciation, or is it a precise operational choice, following from the assumption that social reality cannot be forced into generalizations which may prove, in the end, misleading, partial representations that produce only incorrect images of the relational world? Is the network approach always correct? What are the limits and risks of a neural approach tout court? Are there problems for which a less complex approach would be more appropriate?
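As a concrete illustration of the trade-off the abstract describes, the sketch below (not part of the original abstract) uses scikit-learn's MLPRegressor, letting its L2 penalty alpha play the role of the fluid's viscosity: a very small alpha lets the net cling to the peculiarities of the training set (overfitting), while a very large one smooths away the object's detail. The relational signal, the network size, and all parameter values are hypothetical choices made for illustration; the Garson-style importance score at the end is one conventional way of constructing an index over the matrix of weights that the abstract mentions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Invented stand-in for a relational signal: tie intensity y built
# from two observed variables, with noise.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.1, 200)
X_test = rng.uniform(-1, 1, size=(400, 2))
y_test = np.sin(3 * X_test[:, 0]) + 0.1 * X_test[:, 1]

# Dose the "viscosity": too diluted, balanced, too viscous.
for alpha in (1e-6, 1e-2, 10.0):
    net = MLPRegressor(hidden_layer_sizes=(32,), alpha=alpha,
                       max_iter=5000, random_state=0).fit(X, y)
    print(f"alpha={alpha:g}: train R^2 = {net.score(X, y):.2f}, "
          f"test R^2 = {net.score(X_test, y_test):.2f}")

# An index over the matrix of weights (Garson-style): share of
# |input -> hidden| weight, weighted by |hidden -> output| weight.
net = MLPRegressor(hidden_layer_sizes=(32,), alpha=1e-2,
                   max_iter=5000, random_state=0).fit(X, y)
W_ih = np.abs(net.coefs_[0])           # input -> hidden, shape (2, 32)
W_ho = np.abs(net.coefs_[1]).ravel()   # hidden -> output, shape (32,)
contrib = (W_ih / W_ih.sum(axis=0)) * W_ho
importance = contrib.sum(axis=1) / contrib.sum()
print("relative input importance:", np.round(importance, 2))

Under these invented settings one would expect the gap between train and test scores to be widest at the smallest alpha and the fit to flatten out at the largest, mirroring the two poles of the dilution/viscosity metaphor; the importance index summarizes the trained weight matrix without ever returning an explicit functional form, just as the abstract notes.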