papers on agent-based economics
nr 7

Searching NK fitness landscapes:
On the trade-off between speed and quality
in complex problem solving

Sylvie Geisendorf

Section Environmental and Innovation Economics
University of Kassel

                                                Abstract
The complexity of problems is often too high for the people or organizations that have to solve them to do so in an optimal way. In order to cope with such problems, the search space either has to be decomposed or has to be searched by random trial-and-error processes. Kauffman's NK model offers a way to depict such problem-space decompositions and the search for solutions within them. However, papers on the effect of different decompositions on solution quality come to differing conclusions as to the advantages or disadvantages of incorrect modularization assumptions. The current paper therefore examines the results of more empirically based search strategies. Some trade-offs become visible, but the initial advantage of a too deep modularization observed in some studies could not be confirmed.

Keywords: NK-model, search processes, complexity reduction, modularity, agent-based modelling


I. Introduction
Going back to the work of Simon (1969, 1983), it has been recognized that the complexity of problems is often too high for the people or organizations that have to solve them to do so in an optimal way. Simon further suggests that in order to cope with such problems, agents need to reduce problem complexity by decomposing the search space; an insight that is backed by psychological findings on the organization of human knowledge and problem solving (Beckenbach, 2005). Such a reduction is achievable by different procedures, e.g. by concentrating on only one of several criteria the solution has to fulfil, by stopping the search once a satisfactory level has been reached, or by following fixed decision routines. All these possibilities, however, are not directly connected with the structure of the problem itself. There is another feature of complex problems, pointed out by Simon but mostly overlooked in the modelling of bounded rationality in economics: solutions and artefacts are themselves decomposable to a certain degree, where decomposability means that a change in one part does not affect the performance of other parts.

Some authors, however, have tried to depict the degree of decomposability of economic artefacts or problems and of the strategies to solve them. They investigated how the decomposition of the problem influences the kinds of attainable solutions (Marengo/Dosi 2005, Siggelkow/Rivkin 2006), how different search strategies perform on more or less connected landscapes (Levinthal 1997, Frenken/Valente 2002, Ethiraj/Levinthal 2004), and mainly how an over- or underestimation of the decomposability affects possible solutions (Strumsky/Lobo 2003, Siggelkow/Levinthal 2003, Marengo/Dosi 2005, Siggelkow/Rivkin 2006). The basis for such analyses is the NK model, developed by Kauffman (1993), with which varying degrees of decomposability of the problem space can be modelled. N stands for the number of components of an artefact or strategy and K for the intensity of connections among them.1

A central result of such analyses concerns the effects of over- and underestimations of the problem's decomposability on the attainable solutions. Ideally, search in a complex problem landscape should be decomposed in the same way as the problem or artefact itself. But due to their bounded rationality, the deciding agents make decomposition mistakes. If such deviations from the actual connectedness of the problem occur, agents might, for example, try to optimize a seemingly independent module and be surprised by the effects on other parts of the solution. What is more, they might not even notice these effects, because they only occur in the final assembly of a product composed of parts from different suppliers. On the other hand, it is

1
 K = 0 thus reflects a case of full decomposability, whereas higher K values indicate that the performance
of the elements is influenced by changes of one or several other elements.

obvious that a high modularization of the search space considerably reduces the number of options to be tested, and thus search time and costs, which might constitute an advantage.2 The analysis of the actual influence of an over- or underestimation of the problem's decomposability on the attainable solutions is thus an important question – all the more so if the degree of decomposition of a product or organizational structure can be chosen deliberately. But modelling results concerning this aspect come to different conclusions.

Marengo/Dosi (2005) applied the NK model to depict how the degree of decentralization of industrial organizations determines their problem-solving capacities. A central result is a trade-off between the speed and quality of better solutions. If search is decomposed more strongly than the actual problem, the speed with which solution quality increases is initially higher. Only after a long time does this search mode lock in to an inferior local maximum and get overtaken by a search mode based on the correct decomposition, which eventually reaches the global optimum. This result implies that it is by no means evident that a correct decomposition of the search space is to be preferred, because a prolonged search can be too time consuming or costly. However, the work of Levinthal (1997) and Ethiraj/Levinthal (2004) suggests this is not the case, as in their models a correct decomposition is always advantageous. The divergence of these results indicates that the observed trade-off is not robust against a variation of the search strategy. As the applied search algorithms were borrowed from NK and Genetic Algorithm practice, respectively, and not based on observations about how economic agents perform search in complex problem landscapes, the question arises whether an over-reduction of the search space is actually advantageous in empirical contexts.

The current paper attempts to investigate this question. It examines the problem-solving capacity of more empirically based strategies of problem decomposition. As already proposed by Beckenbach (2005), the work of Fleming/Sorenson (2003) on the modularity of technological innovations, based on US patent data, and the correspondence of the search strategies they identified with psychological findings on human problem solving, as reported in Beckenbach, provide a good background for such an analysis. The paper thus implements search strategies resembling Fleming/Sorenson's findings and tests their problem-solving capacities for different degrees of connectedness of an exemplary problem. It is to be expected that the ideal problem decomposition depends on the details of the applied search procedure and is thus not a question that can be decided on the basis of ad hoc specifications.

2
 Assume a product composed of 10 input factors, each having two possible states. If the performance of each factor depends on all other factors (no decomposability), there are 2^10 = 1024 design possibilities. If the same product is decomposable into two modules of 5 factors each, the search space is reduced to 2^5 + 2^5 = 64.

II. Depicting complex problem solving by NK and NK-related fitness landscapes
Assume that a problem can be represented as a binary string containing the characteristics of the problem's elements. One specific binary string then constitutes the optimal solution to a given problem, whereas other, differing strings constitute possible solutions, deviating more or less significantly from the optimal one. Each time one of the binary values of the constituting elements is switched, we get another solution. The performance of different solutions is measured by attributing fitness values to the individual elements and aggregating them to the string's fitness. Fitness values ranging from 0 to 1 are attributed randomly to the elements, and the string's fitness is calculated by adding the individual values and dividing the sum by the number of elements. Regardless of its length, each string thus receives a fitness value between 0 and 1. One element being connected to another now means that the fitness of the former changes if the binary value of the latter is switched, and vice versa.3
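To illustrate, the following is a minimal Python sketch of this fitness attribution (the names and data layout are my own assumptions, not the paper's code): each element draws a random fitness contribution for every combination of its own bit and the bits of the elements it is connected to, and string fitness is the mean of these contributions.

    import random

    def make_fitness_tables(N, neighbours, rng=random):
        # For each element i, draw a random contribution in [0, 1] for every
        # combination of its own bit and its neighbours' bits.  neighbours[i]
        # lists the elements whose switches change element i's fitness.
        tables = []
        for i in range(N):
            combinations = 2 ** (1 + len(neighbours[i]))
            tables.append([rng.random() for _ in range(combinations)])
        return tables

    def string_fitness(bits, neighbours, tables):
        # String fitness: mean of the element contributions, hence in [0, 1].
        total = 0.0
        for i in range(len(bits)):
            key = bits[i]
            for j in neighbours[i]:
                key = (key << 1) | bits[j]   # encode connected bits into the index
            total += tables[i][key]
        return total / len(bits)

For example, neighbours = [[1], [0], [3], [2]] would describe a four-element string decomposed into two independent two-element modules.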

In Kauffman's NK model, connections between elements of a problem (representing a product, a strategy or an organizational structure) are spread arbitrarily (Kauffman 1993). A K value of 2 for a number of elements N = 8 might e.g. result in the following connective structure:

1 0 0 1 0 0 1 1   [in the original figure, arcs indicate the arbitrarily spread connections between elements]

With such an arbitrary spread of a given number of connections between the elements, even low K values can lead to largely connected structures. Modelling the problem space following that procedure thus makes it difficult to depict perfect or even near decomposability. A problem would be perfectly decomposable if it consisted of separable components (or modules) with only internal connections among each module's elements:

1 0 0 1 0 0 1 1   [in the original figure, arcs connect only elements within the same module]

Modularization reduces the search space considerably. If a solution composed of 12 elements
can be divided into 3 modules, and this structure is known to the searching agents, instead of

3
 One-directional dependency is also possible, but is not assumed here.

testing all 2^N = 4096 possibilities to find the optimal solution, each module can be optimized separately. For each module there are only 2^n = 16 solutions, and thus a total of 2^n × m = 48 tests to perform in order to optimize the whole string.
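A quick check of this arithmetic (a throwaway sketch with assumed variable names):

    N, m = 12, 3
    n = N // m
    print(2 ** N)      # 4096 candidate strings without decomposition
    print(m * 2 ** n)  # 3 * 16 = 48 module tests with known modules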

As papers like Ethiraj/Levinthal (2004) or Marengo/Dosi (2005) are concerned with the effect of over- and underestimations of module sizes in problem solving, they do not use the original randomly connected NK landscape proposed by Kauffman, but pre-designed landscapes with defined, nearly decomposable modules. Near decomposability means that connection intensity inside a given module is much stronger than with elements outside the module. Ethiraj/Levinthal (2004) assumed such nearly decomposable modules by connecting the last element of each module with the first one of the next module. Otherwise, each element inside a module was connected with all the other elements of that module.
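A sketch of how such a nearly decomposable interaction structure could be wired up, following my reading of this description (the function name is hypothetical, and the inter-module link is treated as bidirectional, in line with footnote 3):

    def nearly_decomposable_neighbours(N, module_size):
        # Each element is connected to all other elements of its own module.
        neighbours = []
        for i in range(N):
            start = (i // module_size) * module_size
            neighbours.append([j for j in range(start, start + module_size) if j != i])
        # In addition, the last element of each module is linked to the first
        # element of the next module, as described for Ethiraj/Levinthal (2004).
        for m in range(N // module_size - 1):
            last = (m + 1) * module_size - 1
            first = last + 1
            neighbours[last].append(first)
            neighbours[first].append(last)
        return neighbours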

Search strategies in NK or related fitness landscapes are algorithms repeatedly performing a given procedure, like arbitrarily changing one element of the search string and keeping the resulting string if its fitness value is higher than that of the former solution. Thus far, only a few search strategies have been tested in the NK-related literature for their performance and characteristics. Marengo/Dosi (2005) tested the performance of parallel one- or several-bit mutations (here called switches) inside the assumed modules. Ethiraj/Levinthal (2004) compared different strategies of local search and recombination. Local search corresponded to the switch of one element inside a module and the acceptance of the solution if module fitness improved by it; string fitness can decrease under this procedure. Recombination draws on exchange between firms: it exchanges a whole module of a firm's strategy against the corresponding module of another firm if the potential exchange module's fitness is higher than that of the former module. Selection of such exchange modules has been designed on the module and the firm level. Firm selection chooses a random module of another firm, with a higher likelihood of copying from a good performer. Module selection compares fitness directly on the module level and chooses to exchange a module if another firm offers a better-performing one.
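For illustration, a hedged sketch of one local-search step as just described (my own construction; module_fitness is an assumed evaluation callback, not code from either paper):

    import random

    def local_search_step(bits, module, module_fitness):
        # module is a list of element indices; flip one element inside it and
        # keep the change if module fitness improves.  String fitness can still
        # decrease if the module interacts with elements outside it.
        candidate = bits[:]
        candidate[random.choice(module)] ^= 1
        if module_fitness(candidate, module) > module_fitness(bits, module):
            return candidate
        return bits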

The search procedures chosen in these two papers are quite dissimilar in several respects. Marengo/Dosi (2005) performed a complete parallel search over all possible local changes of a given solution, which constitutes more of a theoretical analysis than the representation of an empirical search strategy; for large problem landscapes, a complete evaluation even of one-step variations will not be possible. Ethiraj/Levinthal (2004), on the other hand, introduced inter-firm exchange, which also constitutes a form of parallelism, but restricts it to a limited

number of firms and thus solutions (10 or 100). Additionally, the way in which the basic fitness landscape has been formulated differs between the two papers. A comparison of their results is therefore not easily possible. This is unfortunate, because the results differ in an important respect. Marengo/Dosi (2005) found a trade-off between the speed and quality of better solutions for different search strategies. In their paper, an underestimation of module size led to initially quickly increasing solution quality, but an eventual lock-in to an inferior local maximum, whereas search with the correct module size eventually reaches the global optimum, but takes a long time to overtake the suboptimal search strategy. If time and search costs are considered, this trade-off might thus indicate that an over-decomposition of the search space is to be preferred. However, Levinthal (1997) and Ethiraj/Levinthal (2004) suggest this is not the case, as in their models a correct decomposition is always advantageous. As the respective search algorithms have been designed on the basis of NK (Marengo/Dosi 2005) and Genetic Algorithm (Ethiraj/Levinthal 2004) practice, the empirical relevance of the diverging results cannot be assessed easily. The current paper therefore investigates whether the observed trade-off also occurs for more empirically based search strategies. In the following model, innovative search is thus based on findings by Fleming/Sorenson (2003) on strategies for product innovation, derived from US patent data, as already proposed by Beckenbach (2005).

III. The model
III.1. The basic fitness landscape
Similar to, but not exactly like, Ethiraj/Levinthal (2004), the fitness landscape deviates from the original NK model; the original correlation structure of NK models cannot be used to depict decomposable problems. As the current paper attempts to investigate the effect of over- and under-modularized search strategies on the quality of solutions, it assumes perfectly decomposable problems. For a given number N = 12 of elements per binary string, different degrees of modularization m = {2, 3, 4} are tested, where m = 2 means that the string is composed of 2 independent modules with n = 6 elements each. For simplicity, all elements inside each module are connected. The number of connections k thus equals the number of elements inside each module, n, minus one. For N = 12 and m = 3, n = 4 and k = 3 result:

1 0 0 1 0 0 1 1 0 1 0 1   [in the original figure, arcs connect all elements within each of the three four-element modules]


For N = 12, 2^N = 4096 solutions exist. Each of them has a different performance, representing success indicators like different product qualities or differing efficiencies of organizational structures. The basic fitness landscape of the model, containing all 2^N potential solutions, is generated by attributing randomly distributed fitness values between 0 and 1 to each element of an exemplary starting string and changing an element's fitness each time the binary value of a connected element is switched. As long as only elements of unrelated modules are switched, the fitness remains unchanged. A switch of the second element, for example, would change the fitness values of elements 1 through 4, but not of elements 5 through 12. Afterwards, these element fitness values can be aggregated to module and string fitness. One of the 2^N strings now represents the best solution to the given problem, indicated by its having the highest attainable fitness. As the landscape is generated randomly, this highest value varies. In the following simulations, this random influence is eliminated by comparing the average performances over 100 random landscapes.
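A minimal sketch of how such a perfectly decomposable landscape could be generated, under the assumptions stated above (the implementation and all names are mine, not the paper's). Each element's contribution depends only on the bit pattern of its own module, so switches in unrelated modules leave it unchanged:

    import random

    def make_landscape(N=12, m=3, seed=None):
        rng = random.Random(seed)
        n = N // m  # module size; all elements inside a module are connected
        # One table per element: a contribution in [0, 1] for each of the
        # 2^n possible bit patterns of the element's own module.
        tables = [[rng.random() for _ in range(2 ** n)] for _ in range(N)]

        def string_fitness(bits):
            total = 0.0
            for i in range(N):
                module = i // n
                state = 0
                for b in bits[module * n:(module + 1) * n]:
                    state = (state << 1) | b  # encode the module's bit pattern
                total += tables[i][state]
            return total / N

        return string_fitness

    fitness = make_landscape(seed=42)
    print(fitness([1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]))

Averaging results over 100 calls to make_landscape with different seeds would correspond to the elimination of the random influence mentioned above.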

III.2. The search strategies
The invention and development of new products is a typical example of complex problem solving and of the advantages and disadvantages of modularization. Dividing a product into several independent components, each of which can be developed independently, reduces search effort, but tends to lead to suboptimal overall solutions, because it prevents an entire redesign of the whole product. Considering the whole, on the other hand, allows for occasional breakthroughs, but can be costly and time consuming, because the innovators have to put up with long periods of unsuccessful experimentation. Fleming/Sorenson (2003) investigated the advantages and disadvantages of corresponding strategies by examining US patent data on technological innovations spanning more than 200 years. Using technological subclasses and establishing their independence or interdependence by analysing how they had been combined, Fleming/Sorenson could distinguish between more and less coupled product architectures. Thereafter, they studied the influence of component connectivity on innovative success in the given product classes. As a result, they identified three types of innovative strategies used by US firms:

•  A modular strategy, in which products are decomposed to a certain degree and the components are improved independently, considering only component performance.


•  A Coupled strategy with Shotgun Sampling, where the whole product's performance is considered and improvements are attempted by a large number of relatively uninformed trial-and-error experiments.

•  A Coupled strategy with Mapped Searching, also considering the whole product, but trying to reduce uncertainty about its decomposition by scientific research. Improvements are then attempted on the basis of the acquired knowledge.

For the model presented in this paper, these findings have been implemented as follows. All strategies are individual search procedures. They start with an arbitrary solution drawn from the set of all 4096 possible solutions and try to improve it by one of the following procedures (an illustrative code sketch follows the list):

•  Shotgun Sampling: The decomposition of the problem is not considered. Either one (one-bit Shotgun Sampling) or several (multiple-bit Shotgun Sampling) arbitrarily chosen elements of the search string are varied. Afterwards, the performance of the resulting string is assessed. If it has improved in relation to the former solution, it substitutes it.

•  Modular Search: Unaware of the actual problem decomposition, but wanting to reduce search effort, this strategy modularizes the search space of its own accord. After assuming a degree of modularization, it arbitrarily changes one or several elements in a randomly chosen module. If the performance of the module increases, the variation is kept.

•  Mapped Search: Mapped Search tries to establish an understanding of the problem's decomposition. It performs tests to derive the correct module size and develops module improvements. In each time step, it is first decided whether to invest in research or in improvements (with probability 0.5 for each).4

   -  In research mode, a module size is initially assumed and one element inside the assumed module is switched. Then one of two possible tests is performed, with equal probability. The inner test inspects all elements inside the assumed module. If at least one of their fitness values did not change (as it should have, were it part of the same module as the switched element), Mapped Search assumes that

4
 Note that this choice has been included to take the costs of scientific research into account. As Mapped Search is the most arduous procedure, it would be more costly to perform both search modes in one time step. As costs are not included explicitly in the model, this is reflected in time requirements. Also, the information provided to the research mode is more detailed than for all the other strategies, because the fitness contributions of all elements of the searched module are investigated individually. Such research would be more costly, which is a second reason to slow the search down by allowing for either research or improvement, but not both, in one time step.

      the module size was chosen too large, and reduces it to the next smaller size. The outer test inspects the fitness of all other supposed modules and assumes that the module size was chosen too small if one of them has changed unexpectedly. It then augments the assumed module size. The module size thus established is kept for further investigations.

   -  In improvement mode, Mapped Search performs Modular Search as described above, but does so on increasingly better representations of the actual module size.
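To make the first two procedures concrete, here is the minimal Python sketch announced above, covering a one-bit Shotgun Sampling step and a Modular Search step. It is my own reading of the verbal descriptions, not code from the paper; string_fitness and module_fitness are assumed evaluation callbacks.

    import random

    def shotgun_step(bits, string_fitness, n_bits=1):
        # Flip n_bits arbitrary elements, ignoring any decomposition; keep
        # the new string only if overall string fitness improves.
        candidate = bits[:]
        for i in random.sample(range(len(bits)), n_bits):
            candidate[i] ^= 1
        return candidate if string_fitness(candidate) > string_fitness(bits) else bits

    def modular_step(bits, assumed_size, module_fitness, n_bits=1):
        # Pick a random module of the *assumed* size, flip one or several of
        # its elements, and keep the change if module fitness improves.  With
        # a wrong assumed size this can silently worsen other parts of the string.
        modules = len(bits) // assumed_size
        lo = random.randrange(modules) * assumed_size
        hi = lo + assumed_size
        candidate = bits[:]
        for i in random.sample(range(lo, hi), n_bits):
            candidate[i] ^= 1
        if module_fitness(candidate, lo, hi) > module_fitness(bits, lo, hi):
            return candidate
        return bits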

IV. Model results
IV.1. Trade-off between speed and quality of better solutions
The trade-off found by Marengo/Dosi (2005) between the initial adaptation speed of over-modularized search and the long-term quality of perfectly modularized solutions could not be confirmed with the current model. Assuming too small module sizes n was always disadvantageous. The model of the current paper thus confirms the results of Ethiraj/Levinthal (2004) in this respect, although it does not perform parallel search, as they did (fig. 1).

[Figure: performance over t = 0 to 100 (y-axis: performance, 0.5 to 0.75). Curves shown: one-bit and multi-bit Shotgun Sampling, Mapped Search, and Modular Search with modSize = 1, 2, 3, 6 and 12 as well as the correct modularization correctMod = 4.]

Fig. 1: Shotgun Sampling, Mapped Search and 6 Modular Search procedures in an n = 4 fitness landscape


In contrast, there is even a slight trade-off observable in exactly the other direction. For a brief initial period, the un-modularized strategies of one-bit Shotgun Sampling and of Modular Search with an assumed module size of 12 (thus comprising all elements in only one "module") perform slightly better than the correct modularization.

It might be assumed that there is a simple reason for this lack of an initial advantage of over-modularization. The Marengo/Dosi algorithm seems to be more intelligent in that it only puts new solutions to an external test. Among all possible experiments, it only tests variations not yet tried out. It thus possesses some sort of memory, guaranteeing that only new module constellations are tested.5 As modularization reduces the search space considerably, it seems straightforward to assume that smaller than optimal modules can be improved faster, which might also lead to initially quicker advances of the whole solution. The algorithm of the current paper operates on a trial-and-error basis and is not endowed with the ability to check for repeated trials; nor, it seems, are the algorithms of Ethiraj/Levinthal. They thus lose time repeating trials. Therefore, it shall now be tested whether this additional divergence of the search procedures accounts for the difference in results.

Fig. 2 shows the results for an altered Modular ONLY NEW Search, in which all tested element constellations for each module are memorized. As long as new combinations are possible, the algorithm tests them.6 Once all possible variants of a given module have been tested, it keeps its last solution. Such a search could be expected to confirm Marengo/Dosi's results, where over-modularized search is initially more successful, but also gets stuck earlier in local optima, because it stops experimenting with a module once that module seems to have been optimized. Note, however, that no actual optimization might have been realized if the experiments operated on wrong module sizes: changes in other assumed modules, the elements of which are actually connected with elements of the test module, might have altered the functionality of the test module. Fig. 2 shows that the expected trade-off still does not emerge. As the assumed module sizes for modSize = 1, 2 and 3 are too small, changes in parts of the string often affect other parts of it in an unintended way. Prohibiting trying the same constellation twice is thus a less promising strategy than it seems at first glance. It prevents the search procedure from reacting to changed requirements provoked by changes in other parts of the whole product or strategy. Interestingly, the short initial advantage of under-modularization is more pronounced

5
 Note, however, that this does not imply that a once-switched allele cannot be switched a second time. A switch of one particular element inside an assumed module can be made several times, if at least one other element differs from earlier occasions on which the same switch has been attempted.
6 If, e.g., the assumed module size is 3, and the constellations {{0, 0, 0}, {0, 0, 1}, {0, 1, 1}, {0, 1, 0}, {1, 0, 0}, {1, 0, 1}} have already been tried, only {1, 1, 0} and {1, 1, 1} can be tested in subsequent periods.

for the altered search procedure. As fig. 2 shows, modSize = 12 and modSize = 6 are at an advantage for the first 9 periods.
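A hedged sketch of how such a constellation memory could be added to the Modular Search step (my own construction; module_fitness is again an assumed callback):

    import random
    from itertools import product

    def only_new_modular_step(bits, assumed_size, module_fitness, tried):
        # tried maps a module index to the set of bit tuples already tested;
        # once all 2^assumed_size constellations of a module are exhausted,
        # the module is left at its last accepted solution.
        module = random.randrange(len(bits) // assumed_size)
        lo, hi = module * assumed_size, (module + 1) * assumed_size
        seen = tried.setdefault(module, {tuple(bits[lo:hi])})
        untried = [c for c in product((0, 1), repeat=assumed_size) if c not in seen]
        if not untried:
            return bits  # module exhausted: keep the last solution
        candidate = bits[:]
        candidate[lo:hi] = random.choice(untried)
        seen.add(tuple(candidate[lo:hi]))
        if module_fitness(candidate, lo, hi) > module_fitness(bits, lo, hi):
            return candidate
        return bits

Note that, as in footnote 5, memorizing whole constellations still allows a particular element to be switched several times, as long as at least one other element of the module differs.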

[Figure: performance over t = 0 to 100 (y-axis: performance, 0.5 to 0.75) for Modular ONLY NEW Search with modSize = 1, 2, 3, 6 and 12 and the correct modularization correctMod = 4.]

Fig. 2: 6 Modular ONLY NEW Search procedures in an n = 4 fitness landscape

The same advantage of under-modularization shows if the problem can be completely separated into its elements (correctMod = 1). Concentrating on the whole solution is initially quicker here as well. The reason for this observation is straightforward. The search mode for the correct modularization picks one module at random and changes one or several random elements of it. If the module only contains one element, only one element can be switched at a time. Searching over the whole problem, however, allows for several switches at a time. Although each changed element influences the fitness contributions of all the others, there is a brief initial period in which the summed performance can rise fast, due to the larger changes made by parallel switches.

But two other trade-off effects are observable in the results. One is a clear initial advantage of Shotgun Sampling over Mapped Search; the latter later overtakes it and slowly attains the global maximum, whereas Shotgun Sampling locks in at an inferior level. Shotgun Sampling is an easy way to explore different parts of the whole search space, which accounts for its initial success. However, as it only allows for immediately beneficial changes (which is also important for its initial success), it sooner or later gets stuck in a local maximum. Mapped Search, on the other hand, can explore the whole search space by consecutively


reducing it to the relevant regions (fig. 3). Establishing these regions takes time, but eventually the optimal solution is found.
[Figure: performance over t = 0 to 100 (y-axis: performance, 0.5 to 0.75) for Shotgun Sampling and Mapped Search.]

Fig. 3: Shotgun Sampling and Mapped Search in an n = 4 fitness landscape

The second trade-off effect concerns the difference between one-bit and multi-bit Shotgun Sampling, which – astonishingly – is not considerable and changes direction twice. Initially, multi-bit Shotgun Sampling is quicker in finding better solutions, but it is soon overtaken by the one-bit variant. After some time, however, the ranking changes again, because the one-bit variant gets stuck sooner in a local optimum (fig. 4).
[Figure: performance over t = 0 to 100 (y-axis: performance, 0.5 to 0.75) for one-bit and multi-bit Shotgun Sampling.]

Fig. 4: One-bit and multi-bit Shotgun Sampling in an n = 4 fitness landscape


IV.2. Further results
Apart from the observed trade-off effects between the speed and quality of search solutions, one other observation shall be pointed out. All the above results were obtained with an N = 12 landscape with an ideal decomposition into 3 modules. Additionally, it has been investigated whether the results are robust against a variation of this ideal decomposition. To this end, ideal decompositions into 2 and 4 modules have been tested. While the main observation of the constant superiority of a correctly modularized over an over-modularized search space remains intact, the initial advantage of under-modularized search becomes slightly more discernible for the state space with more modules (m = 4). A second interesting observation concerns the divergence of the search strategies' performance. The more the state space is divided into modules, the less divergence can be observed between the different strategies' performance (compare modularizations of the problem space ranging from m = 2 to m = 4 in fig. 5).
[Figure: three panels showing performance over t = 0 to 100 (y-axis: performance, 0.5 to 0.75) for problem spaces with 2, 3 and 4 modules.]

Fig. 5: Divergence of strategy performance for problem spaces with 2, 3 and 4 modules

V. Conclusions
The decomposition of search problems into more or less separable modules or independent decision units is a necessary and useful strategy for coping with complex problems in economics, such as product design, management strategy or intra-firm organization. Psychological findings (Beckenbach 2005) as well as empirical studies on product decomposition (Fleming/Sorenson 2003) back this insight. The NK model developed by Kauffman (1993) and related modularization models can serve to depict the corresponding problem and search spaces. However, the correct decomposition of complex problems is not an evident task. If the structure of the whole product or strategy is not entirely understood, assumptions about the connections among its constituting elements and about separable sub-units may be erroneous. Thus, the question arises how an over- or underestimation of the actual decomposition of the problem affects solution quality.

Theoretical studies on these effects come to differing results. In particular, they diverge as to the existence of a trade-off between the speed and long-term quality of better solutions.


Marengo/Dosi (2005) found that a more than optimal decomposition was initially beneficial, only to be overtaken by the optimal decomposition scheme after a long time. Ethiraj/Levinthal (2004), on the other hand, did not find this trade-off. The former study was based on parallel one- or several-bit mutations inside assumed modules, following NK literature; the latter on different parallel search procedures derived from Genetic Algorithm practice. As the results did not agree, the current paper investigated whether the trade-off would be observable for more empirically based search strategies. These have been developed on the basis of Fleming/Sorenson's (2003) strategies of Shotgun Sampling, Modular Search and Mapped Search, derived from their study of innovation strategies based on US patent data. The paper found that no beneficial effect of an over-modularization of the search process could be confirmed. Quite the contrary: there even was a short initial advantage for strategies ignoring the modularization of the problem altogether and searching by one or several bit mutations over the whole string, considering only the change in string fitness.

However, a trade-off could be observed with the present model between Shotgun Sampling and Mapped Search. The arbitrary trial-and-error experimentation of Shotgun Sampling is initially quicker in finding better solutions, but eventually gets stuck in a suboptimal local optimum. The scientifically based Mapped Search is more time consuming and thus initially at a disadvantage, but finally able to approach the optimal solution.

Another interesting finding concerns the consequences of the underlying correct degree of decomposition for the divergence of the results of different search strategies. The more decomposable the problem is, the less it matters which strategy is chosen. The general order of performance is robust against the degree of decomposition, but the divergence of performance shrinks (without, however, becoming unimportant) when the problem is more decomposed.

After examining the differing results of some of the NK-based studies on the effect of over- and underestimations of the module sizes of complex problems, it can be concluded that more empirically based search strategies have to be investigated in order to determine which results might be relevant for economic reality. The current paper has tried to contribute to this investigation by testing search strategies based on empirical findings.


References
Beckenbach, F., 2005. Knowledge Representation and Search Processes – a Contribution to the Microeconomics of Invention and Innovation. Volkswirtschaftliche Diskussionsbeiträge 75/05, Universität Kassel

Ethiraj, S.K. and Levinthal, D., 2004. Modularity and Innovation in Complex Systems. Management Science 50, 159-173

Fleming, L. and Sorenson, O., 2003. Navigating the Technology Landscape of Innovation.
MIT Sloan Management Review (winter), 15-23

Frenken, K. and Valente, M., 2002. The Organisation of Search Activity in Complex Fitness
Landscapes. Computing in Economics and Finance, 157. Society for Computational
Economics

Kauffman, S.A., 1993. The Origins of Order. Oxford University Press, Oxford

Levinthal, D.A., 1997. Adaptation on rugged landscapes. Management Science 43 (7), 934-
950

Marengo, L. and Dosi, G., 2005. Division of Labor, Organizational Coordination and Market Mechanism in Collective Problem-Solving. Journal of Economic Behavior and Organization 58(2), 303-326

Siggelkow, N. and Levinthal, D.A., 2003. Temporarily divide to conquer: centralized,
decentralized, and reintegrated organizational approaches to exploration and adaptation.
Organization Science 14 (6), 650-669

Siggelkow, N. and Rivkin, J.W., 2006. When Exploration Backfires: Unintended
Consequences of Multi-Level Organizational Search. Academy of Management Journal 49,
779-795

Simon, H.A., 1969. The Sciences of the Artificial. MIT Press, Cambridge, MA.

Simon, H.A., 1983. Reason in Human Affairs. Stanford University Press, Stanford

Strumsky, D. and Lobo, J., 2003. "If it Isn't Broken, Don't Fix it": Extremal Search on a Technology Landscape. Santa Fe Institute Working Paper 03-02-003


Imprint:
papers on agent-based economics

Publisher:
Universität Kassel
Fachbereich Wirtschaftswissenschaften (Prof. Dr. Frank Beckenbach)
Fachgebiet Umwelt- und Verhaltensökonomik (Section of Environmental and Behavioural Economics)
Nora-Platiel-Str. 4
34127 Kassel
www.ivwl.uni-kassel.de/beckenbach/


ISSN: 1864-5585
