Ideas for Neurocomputational Social Science


16th IFAC Conference on Technology, Culture and International Stability
September 24-27, 2015, Sozopol, Bulgaria
Available online at www.sciencedirect.com



ScienceDirect

IFAC-PapersOnLine 48-24 (2015) 219–224

Ideas for Neurocomputational Social Science

George Mengov*

*Sofia University St Kliment Ohridski, 1113 Sofia, Bulgaria (e-mail: g.mengov@feb.uni-sofia.bg)

Abstract: Currently, many methods for data analysis in the social sciences resort to applied statistics and even simpler extrapolation techniques. While such a toolbox provides a satisfactory base for conclusions and predictions, this state of affairs is hardly inspiring. Here, a knowledge transfer is advocated, whereby achievements of mathematical and computational neuroscience can be used as a tool to study phenomena of socioeconomic nature. The foundations for such a cross-breeding comprise a mechanism shared by the functioning of the brain, and certain aspects of the operation of social networks. Two rudimentary modelling examples illustrate the point.

© 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Computational neuroscience, socioeconomic systems, simulated annealing

1. INTRODUCTION

In 1714, Gottfried Leibniz suggested his concept of monadology – the idea that each living creature, plant or animal, contains in itself a multitude of its own micro replicas. At the time it was not clear how realistic that hypothesis was, and moreover, its importance was difficult to assess. Centuries later, self-similarity across scales attracted the attention of the scientific community once again. An important analogy by Ernest Rutherford suggested that the electrons in the atom circle around a small but heavy nucleus, just like the planets move around the sun in the solar system. Newton's law of gravity inspired the development of international economics' gravity models, positing that trade between two countries is more intense when they are geographically closer and have bigger economies. Similarly, the Navier–Stokes heat and mass transfer equations were adapted to model capital flows in finance. The fractal school of thought flourished in the last decades of the 20th century due to Benoit Mandelbrot and others, and gave fruits in the shape of research methodologies for various fields across the natural sciences and life sciences.

Recently, Bruno Apolloni (2013) remarked that, "The social network is a fractal extension of our brain networks". Decades before him, Stephen Grossberg developed his famous equations of cooperative-competitive neural interactions and briefly examined their applicability for characterizing economic systems. He showed that some of his models described equally well neurons competing locally while exhibiting globally coordinated behaviour, and production companies in a class of stable competitive markets (Grossberg, 1980, 1988).

Following Stephen Jay Gould's vision that the human brain was not built for a restricted purpose but, as it evolved for hunting, social cohesion and other functions, transcended the adaptive boundaries of its original purpose (Gould, 1981), it seems plausible that its cognitive mechanisms could have extended to interpersonal relations. That is why individuals in social networks and systems may resemble – in some limited sense – neurons in neural networks.

This paper suggests that the social sciences could benefit from a transfer of models and modelling techniques from mathematical and computational neuroscience. Section 2 presents a few relevant concepts from the philosophy of science that are used in the exposition. Section 3 outlines the contours of the computationally-driven matching between neural variables and socioeconomic constructs. Section 4 presents two rudimentary examples that give a hint about the technical aspects of the advocated knowledge transfer.

2. CONCEPTS FROM THE PHILOSOPHY OF SCIENCE

Each scientific discipline has its own tools and methods, yet these are all based on principles common to all sciences. Figure 1 shows a diagram of the different elements that build up a scientific theory in any field of knowledge (Torgerson, 1958).

Fig. 1. A diagram of the structure of any scientific field. (The figure shows Nature alongside theoretical models of scientific constructs A, B, C, D, E, of which P, Q, R are tied to observations through operational definitions.)
2405-8963 © 2015, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. doi: 10.1016/j.ifacol.2015.12.086


The shaded area to the right represents the observable world we explore. Our goal is to understand and explain its phenomena and, if possible, to predict them. Circles A, B, C, D, E represent the theoretical constructs in each discipline. For example, in various branches of physics those can be length, force, electric current etc. In every science, it is necessary that at least some of the theoretical constructs be defined so as to have a clear connection with phenomena and objects from the real world. That is how a science can be linked to empirical observations and can become useful in practical terms. In Fig. 1 these are the constructs P, Q, R. Their definitions, shown with double lines, contain a prescription of actions that always yield the same result when conducted by an expert. The sequence of these operations and procedures makes up the operational definition, and shows how exactly a scientific theory connects with the real world. Many constructs are defined only theoretically, and together with the connections among them they form theoretical models. When a branch of science contains theoretical models of connected constructs, and at the same time operational definitions exist for some of them, there is a theory that can be empirically verified. Eventually, such theories are refined and become useful in explaining and predicting phenomena from the surrounding world. Consequently, it becomes possible to invent and then manage artificial systems.

decision-making by the individual agent. Those publications showed in sufficient detail how neural activations, medium-term memories, and long-term memories could be mapped onto economic variables such as prices of goods or services, price discounts, surcharges, expected and delivered commodity quantities, company reputations etc. Those studies demonstrated how empirical data from continuous-time decision processes in an economic experiment can be related to a system of differential equations comprising the neuroscientific model. A major problem consisted of mapping variables from the decision process of both categorical and quantitative nature, such as the chosen economic option or the degree of customer satisfaction, onto variables from the equations. In addition, usually the number of equation constants was different from the number of empirical observations, making it impossible to formulate a Cauchy problem. Instead, stochastic optimization procedures such as simulated annealing had to be implemented to solve the system of equations. Because the data were divided into calibration and test sub-samples, it became not only possible to assess the method's viability, but also to substantiate claims about human cognitive strategies in making economic choices (Mengov, 2015, 2014). Yet, those studies dealt with fixed neural model structures and predefined relationships among neural and economic variables. The idea of the present paper is to suggest a step further, whereby a problem is formulated and solved with the latter relationships unknown. In other words, it will not be known a priori which neural variable is most suitable to link with which economic or socioeconomic variable. The initial conceptual considerations can only be partial. It will be the task of the computational experiment to reach by trial and error the finest mapping between neural and socioeconomic quantities in a mathematically optimal sense. This approach is essentially semi-empirical.

There is a direct link between the stability of definitions, the precision of measurement, and the usefulness of a theory. In our time, the natural and engineering sciences seem to be ahead of the social sciences in both measurement accuracy and construct stability. The economic variable inflation can serve as a good example. It is theoretically defined by Fisher’s equation from the quantity theory of money, which operates with variables like price level, total stock of money, velocity of money circulation etc. On the other hand, an operational definition of inflation, i.e. a method for its measurement, is the consumer price index (CPI) that rests on a basket of consumer goods – a set of frequently bought household goods and essential services. Apparently, the latter definition changes over time, unlike its theoretical counterpart. The basket will evolve due to social progress and technological innovation, bringing out new products in the market. One can view the circles B and P in Fig. 1 as an illustration of the two definitions, with the dotted line indicating the unsteady relationship between them.
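The contrast between the two definitions can be made concrete with a toy computation. The sketch below is illustrative only – the basket items, quantities, and prices are invented – and shows the operational side: the cost of a fixed basket of goods compared across two periods.

```python
def basket_cost(prices, quantities):
    """Cost of a fixed consumer basket at the given prices (Laspeyres-style)."""
    return sum(p * q for p, q in zip(prices, quantities))

# Hypothetical basket: quantities are fixed at the base period.
quantities = [10, 4, 2]              # e.g. bread, milk, transport tickets
base_prices = [2.0, 1.5, 3.0]
current_prices = [2.2, 1.5, 3.3]

base_cost = basket_cost(base_prices, quantities)        # 32.0
current_cost = basket_cost(current_prices, quantities)  # ≈ 34.6
inflation_pct = (current_cost / base_cost - 1.0) * 100
```

Replacing an item in the basket changes the measured number while the theoretical construct "inflation" stays fixed – precisely the unsteady relationship between circles B and P in Fig. 1.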


3. AUTOMATING THE CONSTRUCT MAPPING


A hypothetical neuroscience-inspired social science model could look like a creation of some branch in contemporary theoretical physics: a mathematical structure nicely fitting key elements from the general picture, yet containing quantities about which little or nothing is known. However, phenomenological and semi-empirical models are the natural early companions of each pioneering effort.


Fig. 2. An idea for automated mapping search. (See text.)

In Mengov (2014, 2015) and Mengov et al (2008) it was shown that the well-known READ gated dipole model (Grossberg & Schmajuk, 1987) of classical and operant conditioning could be adapted to account for economic


Fig. 2 illustrates this point. Now only some constructs among A, B, C, D, E will be initially linked to some of the operationally defined variables, and even that bonding will not be irrevocable. Of course the two sets of variables could be much larger than in the present illustration. It will be up to the stochastic optimization procedure – be it simulated annealing, genetic algorithms, tabu search, or any other – to decide not only which values for the equation constants would be the best, but also, and above all, which mappings between operational and theoretical quantities provide the most adequate bond between theory and empirical observations. The grey dotted lines and arrows in Fig. 2 indicate this initial state of vagueness, which is to be overcome computationally. This automated search for the best mapping is completely within the reach of today's computers.
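A minimal sketch of such a mapping search, under stated assumptions: the "neural" outputs and the operationally measured series are toy lists, the candidate mappings are permutations, the loss is a plain sum of squared errors, and all names (`anneal_mapping`, `sse`) are hypothetical rather than taken from the studies cited here.

```python
import math
import random

def sse(mapping, neural, observed):
    """Sum of squared errors when neural series mapping[k] is bound to observed series k."""
    return sum((neural[mapping[k]][t] - observed[k][t]) ** 2
               for k in range(len(observed)) for t in range(len(observed[k])))

def anneal_mapping(neural, observed, steps=5000, t0=1.0, seed=0):
    """Simulated annealing over permutations binding neural variables to observed ones."""
    rng = random.Random(seed)
    mapping = list(range(len(observed)))
    cur = best = sse(mapping, neural, observed)
    best_map = mapping[:]
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-12               # linear cooling schedule
        i, j = rng.sample(range(len(mapping)), 2)          # propose: swap two bindings
        mapping[i], mapping[j] = mapping[j], mapping[i]
        new = sse(mapping, neural, observed)
        if new < cur or rng.random() < math.exp((cur - new) / temp):
            cur = new                                      # accept (possibly uphill)
            if new < best:
                best, best_map = new, mapping[:]
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # reject: undo the swap
    return best_map, best

# Toy data: the observed series are shuffled copies of the neural outputs,
# so a perfect binding (zero loss) exists and should be recovered.
observed = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
neural = [observed[2], observed[0], observed[3], observed[1]]
best_map, best_loss = anneal_mapping(neural, observed)
```

In a real application the swap move would act on candidate bindings between equation variables and measured socioeconomic quantities, and the loss would be the model's calibration error rather than a direct comparison of series.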

4. TWO RUDIMENTARY EXAMPLES

It must be stressed that the above procedure has not been tested yet, at least not with neuroscientific models and data from socioeconomic systems. On the other hand, the outlined approach is technically not particularly challenging. In comparison, cryptographers have much more daunting tasks on their hands. Rather, it seems that the main challenge here is to figure out suitable candidate problems from business and economic cases that could be interesting for mapping onto neural models. What follows are two examples about the practical viability of the proposed procedure in the limited case of predefined links between variables from the two domains.

4.1 Neural modelling of economic choices

In an experiment, at the same time economic and psychological, a number of neural and economic variables were mapped onto each other (Mengov, 2014, 2015). Here, only the technical issues regarding the solutions of differential equations are presented. Most models from the Grossberg school comprise three fundamental equations: one for short-term memory (STM), which is an adaptation of the Hodgkin–Huxley neuron activation equation; one for medium-term memory (MTM); and one for long-term memory (LTM). The latter two equations operate on a time scale two or three orders of magnitude slower than the former, which makes it possible to introduce some simplifications. First, all STM equations are assumed to react instantaneously and are solved at equilibrium. Often, MTM and LTM equations are solved by numerical methods, incorporating different rates of integration. Over the last decade, another approach has sometimes been implemented. Here it is illustrated with the example of the following MTM equation:

dz_i/dt = b(1 − z_i) − c y_i z_i        (1)

Here, z_i is the neurotransmitter quantity produced and stored by neuron i. Constants b and c are real and positive. The derivative dz_i/dt gives the rate of neurotransmitter loss and replenishment when a signal is sent to another neuron. Term b(1 − z_i) accounts for transmitter regeneration. The substance is produced until the maximum capacity (equal to 1) is reached. The regeneration rate is accounted for by the constant b. Term −c y_i z_i states that the transmitter loss in the sending neuron is proportionate to y_i z_i. That is, the stronger the signal y_i, the faster would neuron i empty its store. But reproduction takes place simultaneously, and so a kind of dynamic balance is achieved. Eq. (1) has the following analytical solution under the assumption y_i = const, which is used here only to fix ideas:

z_i(t) = [b + c_I exp(−(c y_i + b) t)] / (b + c y_i)        (2)

With c_I being the integration constant, the equilibrium value for z_i is given by z_i(∞) = b/(b + c y_i). Because the incoming signal y_i is not constant, Mengov et al (2006) proposed to adapt the solution by positing a fictitious equilibrium at each time step and computing ŷ_i for it. Going that way, the following recurrent solution was found (Mengov, 2014):

z_i(t) = z_i(t − 1) exp[−(c_i y_i(t) + b_i)] + b_i [1 − exp(−(c_i y_i(t) + b_i))] / (b_i + c_i y_i(t))        (3)

The usefulness of Eq. (3) depends critically on the size of the time step. When it is sufficiently small and the sampling frequency is high enough to meet the demands of the Nyquist–Shannon–Kotelnikov sampling theorem, the precision is guaranteed by that theorem. Exactly the same approach is applied to the LTM differential equations.

Once the analytical form of all STM, MTM, and LTM equations is known, the question is how to find a solution fitting the empirical data. In the particular experiment, the task was to model the decisions of the individual agent, but modelling socioeconomic systems is similar in the following sense. Both an individual and a population of agents usually face a nonstationary economic process, in which they seek to maximize some utility. Modelling nonstationarity is always tricky, and here is what was done in this case. The agent had to choose one out of four offers by suppliers of a fictitious good. The goal was to accumulate as much as possible of that good, which was exchanged for real money after the game. At each transaction the agent had to assess their own satisfaction or disappointment on a psychometric scale. The game unfolded in a nonstationary way, i.e. previous experience was only partially useful. The most that could be done in calibrating the model of differential equations was simply to put more emphasis on the end of the calibration sequence. Thus, the following objective function was constructed:

J = Σ_{i=1}^{m_1} I_i + γ Σ_{i=1}^{m_2} I_{m_1+i} + R_{Ψ(t_DS), o(t_DS)}        (4)
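As an aside, the recurrent solution in Eq. (3) is easy to sanity-check numerically. The sketch below, with made-up constants and hypothetical function names (`mtm_recurrent_step`, `mtm_euler`), compares one unit-time step of the recurrence against a fine-grained Euler integration of Eq. (1) with the input y held constant over the step – exactly the fictitious-equilibrium assumption.

```python
import math

def mtm_recurrent_step(z_prev, y, b, c):
    """One unit-time step of Eq. (3), assuming y is constant within the step."""
    decay = math.exp(-(b + c * y))
    return z_prev * decay + b * (1.0 - decay) / (b + c * y)

def mtm_euler(z0, y, b, c, t=1.0, n=200000):
    """Brute-force Euler integration of Eq. (1) over [0, t] for comparison."""
    z, dt = z0, t / n
    for _ in range(n):
        z += dt * (b * (1.0 - z) - c * y * z)
    return z

b, c, y, z0 = 0.5, 2.0, 0.8, 1.0        # made-up constants
z_rec = mtm_recurrent_step(z0, y, b, c)
z_num = mtm_euler(z0, y, b, c)
# the two agree closely; both approach the equilibrium b / (b + c*y)
```

The recurrence also leaves the equilibrium value b/(b + c y) fixed, as Eq. (2) predicts.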


Objective function J had to be maximized with respect to both choices and emotions. In Eq. (4), quantity I_i is an indicator

test sample, consisting of the final eight rounds in the economic game, were predicted 75%-100% successfully while the prior probability was 25%. Yet, those were only 8%-10% of all participants in the experiment. The method was moderately successful with many more, and was useful for two thirds of all 211 participants. Among them, however, about a third were impenetrable for the model and for them it failed to rise above 25%. A detailed explanation is given in Mengov (2014, 2015), but in short, it all had to do with people’s economic strategies in the game. Those who acted on gut feelings were easily predictable. Those who devised strategies could not be tackled by the model as it had no neural functionality for that.

equal to 1 if in round i the model has chosen a supplier exactly as the participant, and 0 otherwise. The first summation is over the initial m_1 rounds, while the second is over the next m_2 rounds in the calibration sample. For example, with 12 rounds for calibration, it was established that a good division comprised m_1 = 8 and m_2 = 4. The nonstationarity of the process was mitigated by enhancing the impact on J of the final m_2 calibration rounds, whose correct predictions were weighted more (γ > 1) than the initial m_1 rounds. Coefficient γ had to ensure a good balance between the two calibration subsets. All technical considerations are omitted here, but it can be stated that γ = 5 comprised an excellent compromise. This meant making the last four choices in the calibration sample five times more influential than the preceding eight rounds.
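With the values just quoted (m_1 = 8, m_2 = 4, and weight 5), Eq. (4) can be written out as a toy function. Everything below is illustrative rather than the study's actual code: `pearson` is an ordinary sample correlation, and the names are hypothetical.

```python
def pearson(xs, ys):
    """Ordinary sample correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def objective_J(model_choices, human_choices, model_emotions, human_emotions,
                m1=8, m2=4, gamma=5.0):
    """Eq. (4): indicator sum over the first m1 rounds, a gamma-weighted sum
    over the next m2 rounds, and the emotion-vector correlation term."""
    hits = [int(m == h) for m, h in zip(model_choices, human_choices)]
    return (sum(hits[:m1])
            + gamma * sum(hits[m1:m1 + m2])
            + pearson(model_emotions, human_emotions))
```

A model that matches all 12 calibration choices and reproduces the reported emotions exactly scores 8 + 5·4 + 1 = 29.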

4.2 Calibrating an ART neural system

The example above dealt with only 18 differential equations. In this section another example is discussed, which involved about a hundred such equations. The task here was to make them simultaneously behave in a desired way, i.e., to model correctly category choices in an ART neural system.

The last term in Eq. (4) was the correlation R_{Ψ(t_DS), o(t_DS)} between two vectors. The first was Ψ(t_DS) and it consisted of the self-reported emotions of disappointment or satisfaction (DS), ranging within [−4; 4], after the decisions:

Ψ(t_DS) = (ψ(t_DS^(1)), ..., ψ(t_DS^(m)))^T.

Vector t_DS = (t_DS^(1), ..., t_DS^(m))^T consisted of all the moments when the participant in the experiment reported their emotion with a mouse click. The pair comprised an essential component of the empirical data. It served to calibrate the differential equations.

The second vector contained the model-generated values for the same emotions:

o(t_DS) = (o_l(t_DS^(1)), ..., o_l(t_DS^(m)))^T.

As the computational process of stochastic optimization progressed, all the elements of vector o(t_DS) became closer to their respective counterparts in vector Ψ(t_DS). In that sense, the model-computed emotions were made to resemble those of the human being. Now let us come back to Eq. (4). The objective function J is maximized in three possible ways – by improving choice matches in the first eight calibration rounds, by doing the same in the next four calibration rounds, or by better accounting for the participant's emotion of satisfaction or disappointment. In case the former two elements coincide, a system solution offering superior "emotional match" prevails over the currently best solution.

Fig. 3. Operations in an ART neural network:
– New input pattern X1 enters the ART system and is installed in neuron layer F1, also activating layer F2.
– Neuron J1 from layer F2, holding the most similar prototype to pattern X1 according to criterion α, is trying to assimilate X1 in the J1↔F1 connections.
– Neuron J1 is turned down because its prototype is not similar enough to pattern X1 according to criterion β.
– Neuron J2 from layer F2, holding the second most similar prototype to pattern X1 according to criterion α, is trying to assimilate X1 in the J2↔F1 connections.
– Neuron J2 satisfies criterion β and assimilates pattern X1 by updating the J2↔F1 connections.
– New input pattern X2 enters the ART system and is installed in neuron layer F1, also activating layer F2.

The approach sketched here was very successful for people who took decisions by quick intuition. Their choices in the



Fig. 3 describes the operations in the adaptive resonance theory (ART) neural network as they happen in time. Today we know that adaptive resonances are widespread in the brain (Grossberg, 2013) and take part in many cognitive processes. According to this theory, all knowledge is stored in connections among neurons in the brain whereby, to a first approximation, two neural layers F1 and F2 are instrumental. They exchange multidimensional signals in two directions: a "bottom-up" stream F1→F2 comes in from the senses and provokes a "top-down" F2→F1 response of associations, based on previous knowledge. Both streams are compared and matched to produce "impressions", which, if found adequate in a certain mathematical sense – satisfying both criteria α and β in Fig. 3 – are eventually memorized. The theory explains how one manages to learn new knowledge without destroying the existing. It posits that the brain is "plastic" as it is able to accommodate change, and at the same time "stable" as it retains what has been learned before. This is the solution to the famous "stability-plasticity dilemma". To a first approximation, the process can be explained as follows. Responding to an incoming image, the neural network scans its memories to find a sufficiently close match. If one exists, its carrier neuron in F2 exchanges signals with all neurons in the F1 layer. This process is called adaptive resonance. While it lasts, knowledge update takes place. The interaction is local as it affects a limited number of neurons and their synaptic connections. If the old memories fail to offer an adequate match, a new set of neurons assimilates the incoming signals and patterns, whereby the network enters again a resonant state.
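As an illustration of the α/β logic in Fig. 3, here is a greatly simplified, hypothetical ART-style categorizer for binary patterns. It collapses the full differential equations into two rules: pick the best-matching stored prototype (a stand-in for criterion α) and accept it only if the match ratio clears a vigilance threshold (a stand-in for criterion β); otherwise a fresh category is recruited. This is a sketch of the idea, not Grossberg's actual model.

```python
def art_categorize(patterns, vigilance=0.7):
    """Assign each binary pattern to a category, creating new ones on mismatch."""
    prototypes, labels = [], []
    for x in patterns:
        # criterion-alpha stand-in: find the most similar stored prototype
        best, best_match = None, -1.0
        for j, w in enumerate(prototypes):
            overlap = sum(a & b for a, b in zip(x, w))
            match = overlap / sum(x) if sum(x) else 0.0
            if match > best_match:
                best, best_match = j, match
        if best is not None and best_match >= vigilance:
            # resonance: update the winning prototype (fast-learning AND rule)
            prototypes[best] = [a & b for a, b in zip(x, prototypes[best])]
            labels.append(best)
        else:
            # criterion-beta stand-in failed: recruit a new category
            prototypes.append(list(x))
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

Two near-identical patterns end up sharing a category, while a disjoint one recruits a new "neuron" – the stability-plasticity behaviour in miniature.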

Fig. 4. A stochastic optimisation process leading to increased precision.

The significance of this example is that it showed how a larger-scale task could still be successfully optimized by using the procedure outlined here.

In the example discussed here, the ART memory consisted of only nine elements, but that required those hundred equations. Again, a suitable objective function was constructed. As in the previous example, the stochastic optimization method was a fast version of simulated annealing.
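The constant-fitting part of such a calibration can be sketched in miniature. Below, synthetic "empirical" data are generated by iterating the recurrent MTM solution, Eq. (3), with known constants b and c, and a fast, geometrically cooled simulated annealing tries to recover them. All names and settings (cooling rate, move size, seeds) are made up for the illustration; the real task involved about a hundred coupled equations, not one.

```python
import math
import random

def mtm_series(b, c, inputs, z0=1.0):
    """Iterate the recurrent MTM solution, Eq. (3), over a sequence of inputs."""
    zs, z = [], z0
    for y in inputs:
        decay = math.exp(-(b + c * y))
        z = z * decay + b * (1.0 - decay) / (b + c * y)
        zs.append(z)
    return zs

def loss(params, inputs, target):
    b, c = params
    if b <= 0.0 or c <= 0.0:                       # constants must stay positive
        return float("inf")
    return sum((a - t) ** 2 for a, t in zip(mtm_series(b, c, inputs), target))

rng = random.Random(1)
inputs = [rng.random() for _ in range(40)]
target = mtm_series(0.5, 2.0, inputs)              # synthetic "empirical" data

cur = [1.5, 0.5]                                   # deliberately wrong initial guess
cur_loss = init_loss = loss(cur, inputs, target)
best, best_loss = cur[:], cur_loss
temp = 1.0
for _ in range(4000):
    temp *= 0.999                                  # fast geometric cooling
    cand = [max(1e-6, p + rng.gauss(0.0, 0.1)) for p in cur]
    cand_loss = loss(cand, inputs, target)
    if cand_loss < cur_loss or rng.random() < math.exp((cur_loss - cand_loss) / temp):
        cur, cur_loss = cand, cand_loss
        if cand_loss < best_loss:
            best, best_loss = cand[:], cand_loss
```

The rapid cooling trades the theoretical guarantee of reaching the global optimum for speed, which is precisely the compromise noted in the discussion of Fig. 4.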

5. DISCUSSION AND CONCLUSIONS

This paper suggested an idea about the use of models from mathematical and computational neuroscience in the broad domain of social sciences. Some of the essential technical issues were discussed and it was shown that they have either been resolved already, or do not seem insurmountable.

Fig. 4 illustrates the stages through which the procedure went. Its goal was to train the ART network so as to recognize correctly the incoming signals and activate the right F2 neurons in the right sequence in time. Somewhat in advance, one can say that the third panel in the figure comes close to achieving the desired performance. Initially, however, only a few correct runs were observed, as indicated by the stable signals maintained in some of the time windows. The jittery signals in-between were caused by a suboptimal choice of equation constants. In the trial-and-error process better constants were found, as the middle plot shows. Finally, the bottom plot gives almost perfect system performance, compromised only in the penultimate time slot. Apparently that malfunction was due to the speed of the procedure – with a fast version of simulated annealing there is no longer a guarantee that the global optimum of the objective function would be reached.

The paper’s main objective was to draw the attention of the research community to the opportunities, offered by the suggested cross-breeding of disciplines. An anonymous reviewer suggested that the small number of citations probably did not reflect the true proportion of the authors working on this topic. Ironically, that guess was correct – not many people have seriously contributed to this research area simply because it is quite new. While the area of Computational Social Science has been in existence for quite some time, this paper’s title introduces for the first time the word combination Neurocomputational Social Science. Hopefully the field will prove its usefulness in the future. A number of neural models look promising for applications in the social sciences. Mengov (2015) developed a detailed analogy between the ART neural network and any leaderelecting social system.

second most similar prototype to pattern X1 according to criterion α is trying to assimilate X1 in the J2↔F1 connections.
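The search cycle referred to above — the best-matching F2 category tries to assimilate the input under a similarity criterion (α) and is reset if it fails the vigilance criterion (β), whereupon the next-best category is tried — is captured in its simplest form by the classic binary ART-1 algorithm. The snippet below is a textbook sketch of that algorithm, not the paper's differential-equation model; here the choice function plays the role of criterion α and the vigilance threshold `rho` plays the role of criterion β.

```python
def art1_search(x, weights, alpha=0.5, rho=0.8):
    """One ART search cycle over a binary pattern x (tuple of 0/1 bits).
    Categories are ranked by the choice function (the role of criterion
    alpha); the winner must also pass the vigilance test (criterion beta,
    threshold rho), otherwise it is reset and the next-best category is
    tried.  Assumes x has at least one active bit."""
    def overlap(a, b):
        return sum(ai & bi for ai, bi in zip(a, b))
    norm_x = sum(x)
    # rank F2 categories by T_j = |x AND w_j| / (alpha + |w_j|)
    ranked = sorted(range(len(weights)),
                    key=lambda j: overlap(x, weights[j]) / (alpha + sum(weights[j])),
                    reverse=True)
    for j in ranked:  # search: try the best category first, reset on mismatch
        if overlap(x, weights[j]) / norm_x >= rho:  # vigilance test
            # resonance: the winner assimilates the input, w_J := x AND w_J
            weights[j] = tuple(ai & bi for ai, bi in zip(x, weights[j]))
            return j
    return None  # no existing category is acceptable

prototypes = [(1, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
# the small prototype wins the choice competition but fails vigilance,
# so it is reset and the search settles on the second category
winner = art1_search((1, 1, 1, 0, 0), prototypes, rho=0.6)
```

The reset-and-try-next loop is the mechanism that the leader-election analogy in Mengov (2015) exploits: a candidate category is "elected" only if it both wins the competition and satisfies the acceptance criterion.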



The recurrent gated dipole looks like another straightforward candidate for analyzing social processes in which emotion is involved. The dynamics of various markets, the sentiments in virtual social networks, and socially motivated mass protests are all potential candidates for such modelling. Moreover, more complex neural models may successfully tackle the relations among entities such as a country's political establishment, industry, labour force, trade unions, third sector, and professional and other communities. In sum, the suggested approach could be used to explain and forecast the evolution of important trends and events in socioeconomic systems that were previously unpredictable.

REFERENCES

Apolloni, B. (2013) Toward a cooperative brain: Continuing the work with John Taylor. Proceedings of IJCNN 2013, pp. 1-5. DOI: 10.1109/IJCNN.2013.6706715.
Gould, S.J. (1981) Hyena Myths and Realities. Natural History, 90 (2), 16.
Grossberg, S. (1980) Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 77 (4), 2338-2342.
Grossberg, S. (1988) Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks, 1, 17-61.
Grossberg, S. (2013) Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Networks, 37, 1-47.
Grossberg, S. and Schmajuk, N. (1987) Neural dynamics of attentionally-modulated Pavlovian conditioning: Conditioned reinforcement, inhibition, and opponent processing. Psychobiology, 15 (3), 195-240.
Leibniz, G. (1714) The Monadology. Translated by George MacDonald Ross, 1999.
Mengov, G. (2015) Decision Science: A Human-Oriented Perspective. Springer-Verlag, Berlin Heidelberg.
Mengov, G. (2014) Person-by-person prediction of intuitive economic choice. Neural Networks, 60, 232-245.
Mengov, G., Egbert, H., Pulov, S., and Georgiev, K. (2008) Affective balances in experimental consumer choices. Neural Networks, 21 (9), 1213-1219.
Mengov, G., Georgiev, K., Pulov, S., Trifonov, T., and Atanassov, K. (2006) Fast computation of a gated dipole field. Neural Networks, 19 (10), 1636-1647.
Torgerson, W. (1958) Theory and Methods of Scaling. Wiley, New York.
