Posts Tagged ‘cooperation’
Human cognition and thinking are much more complex than the cognition and thinking of other primates. Human social interaction and organization are much more complex than the social interaction and organization of other primates as well. It is highly unlikely, we would argue, that this is a coincidence. Complex human cognition is of course responsible for complex human societies in the sense that human societies would fall apart if human-like cognition were not available to support them. But this cognition-to-society causal link is not a plausible direction for an account of evolutionary origins. For that direction of effect, there would need to be some other behavioral domain in which powerful cognitive skills were selected, and then those skills were somehow extended to solving social problems. But it is not clear what other behavioral domain that might be, given that we are trying to explain the many particularities of cognitive skills supporting humans’ unique forms of collaboration and communication, including in the end such things as cultural conventions, norms, and institutions. It seems highly unlikely that cognitive skills adapted for, say, individual tool use or the tracking of prey could be exapted in this way for such complex cooperative enterprises. And so, in the current view, the most plausible evolutionary scenario is that new ecological pressures (e.g., the disappearance of individually obtainable foods and then increased population sizes and competition from other groups) acted directly on human social interaction and organization, leading to the evolution of more cooperative human lifeways (e.g., collaboration for foraging and then cultural organization for group coordination and defense). Coordinating these newly collaborative and cultural lifeways communicatively required new skills and motivations for co-operating with others, first via joint intentionality, and then via collective intentionality. Thinking for co-operating.
This, in broadest possible outline, is the shared intentionality hypothesis.
This book brings together philosophical approaches to cooperation and collective agency with research into human-machine interaction and cooperation from engineering, robotics, computer science and AI. Bringing together these so far largely unrelated fields of study leads to a better understanding of collective agency in natural and artificial systems and will help to improve the design and performance of hybrid systems involving human and artificial agents. Modeling collective agency with the help of computer simulations also promises philosophical insights into the emergence of collective agency. The volume consists of four sections. The first section is dedicated to the concept of agency. The second section of the book turns to human-machine cooperation. The focus of the third section is the transition from cooperation to collective agency. The last section concerns the explanatory value of social simulations of collective agency in the broader framework of cultural evolution.
The present study asks how cooperation and consequently structure can emerge in many different evolutionary contexts. Cooperation, here, is a persistent behavioural pattern of individual entities pooling and sharing resources. Examples are: individual cells forming multicellular systems whose various parts pool and share nutrients; pack animals pooling and sharing prey; families, firms, or modern nation states pooling and sharing financial resources. In these examples, each atomistic decision, at a point in time, of the better-off entity to cooperate poses a puzzle: the better-off entity will book an immediate net loss — why should it cooperate? For each example, specific explanations have been put forward. Here we point out a very general mechanism — a sufficient null model — whereby cooperation can evolve. The mechanism is based on the following insight: natural growth processes tend to be multiplicative. In multiplicative growth, ergodicity is broken in such a way that fluctuations have a net-negative effect on the time-average growth rate, although they have no effect on the growth rate of the ensemble average. Pooling and sharing resources reduces fluctuations, which leaves ensemble averages unchanged but — contrary to common perception — increases the time-average growth rate for each cooperator.
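The pooling mechanism described above can be illustrated with a small simulation: two agents whose wealth is multiplied by a random factor each step, compared against the same two agents pooling and sharing their wealth every step. A minimal sketch, assuming illustrative payoff values (×1.5 or ×0.6 per step) that are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000  # time steps

# Each step an agent's wealth is multiplied by 1.5 or 0.6 with equal
# probability (illustrative numbers, chosen for this sketch).
# Ensemble-average factor: 0.5*1.5 + 0.5*0.6 = 1.05  -> growth "on average"
# Time-average growth:     0.5*ln 1.5 + 0.5*ln 0.6 ≈ -0.053 -> decay over time
factors = rng.choice([1.5, 0.6], size=(T, 2))  # two independent agents

# Solo: each agent keeps its own wealth; its time-average growth rate
# is the mean of the log of its own multiplicative factors.
g_solo = np.log(factors[:, 0]).mean()

# Cooperators: pool wealth each step and split it equally, which
# averages the two multiplicative factors and damps fluctuations.
g_coop = np.log(factors.mean(axis=1)).mean()

print(f"solo {g_solo:+.3f}  pooled {g_coop:+.3f}")
# Pooling leaves the ensemble average unchanged but raises the
# time-average growth rate experienced by each cooperator.
```

The better-off agent loses in expectation at every single step, yet over time its wealth grows faster by cooperating, which is the paper's resolution of the puzzle.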
The purpose of this article is to describe how organizations have evolved across three periods of modern economic history. These periods can be called the age of competition, age of cooperation, and age of collaboration. The major organizational forms that appeared in each of the three eras, including their capabilities and limitations, are discussed. Across the eras of competition, cooperation, and collaboration, managers and organization designers changed their views about the use of resources. During the age of competition, managers believed it was important to control resources through ownership. Whatever resources the firm needs, it is the management’s job to acquire those resources for use by the firm. During the age of cooperation, managers’ resource strategies expanded to include the practice of “link and leverage.” Here the belief is that the firm need not own all of its resources; it can also link to other firms and thereby gain access to external resources. Finally, during the age of collaboration, managers learned the value of sharing resources. By forming resource commons that multiple parties can share, organizations in certain kinds of situations can achieve increasing returns on their use of resources.
From the age of Darwin to the present day, biologists have been grappling with the origins of our moral sense. Why, if the human instinct to survive and reproduce is “selfish,” do people engage in self-sacrifice, and even develop ideas like virtue and shame to justify that altruism? Many theories have been put forth, some emphasizing the role of nepotism, others emphasizing the advantages of reciprocation or group selection effects. But evolutionary anthropologist Christopher Boehm finds existing explanations lacking, and in “Moral Origins,” he offers an elegant new theory. Tracing the development of altruism and group social control over 6 million years, Boehm argues that our moral sense is a sophisticated defense mechanism that enables individuals to survive and thrive in groups. One of the biggest risks of group living is the possibility of being punished for our misdeeds by those around us. Bullies, thieves, free-riders, and especially psychopaths–those who make it difficult for others to go about their lives–are the most likely to suffer this fate. Getting by requires getting along, and this social type of selection, Boehm shows, singles out altruists for survival. This selection pressure has been unique in shaping human nature, and it bred the first stirrings of conscience in the human species. Ultimately, it led to the fully developed sense of virtue and shame that we know today. A groundbreaking exploration of the evolution of human generosity and cooperation, “Moral Origins” offers profound insight into humanity’s moral past–and how it might shape our moral future.
The new socio-technological systems of the internet involve complex collaborative behaviors, of which wikis in general, and Wikipedia in particular, are an especially successful case. This encyclopedia has created and harnessed new social and work dynamics, which can provide insight into specific aspects of cognition as amplified by a multitude of editors, their ping-pong style of editing, and the spatial and temporal flexibility afforded by unique technology-community features. Wikipedia’s motto, “The Free Encyclopedia That Anyone Can Edit,” is analyzed to reveal human, technological and value actors within a theoretical context of distributed cognition, cooperation and technological agency. As this work is part of the emergent field of Wiki Studies, with an interdisciplinary approach, three avenues of inquiry are used to research cooperation and cognition in Wikipedia articles. These studies contribute to constructing an ecology of the article, a bottom-up vision of the humanities, and a better understanding of cooperation and cognition within socio-technological networks.
Cooperation often involves behaviours that reduce immediate payoffs for actors. Delayed benefits have often been argued to pose problems for the evolution of cooperation, because learning such contingencies may be difficult and partners may cheat in return. Therefore, the ability to achieve stable cooperation has often been linked to a species’ cognitive abilities, which is in turn linked to the evolution of increasingly complex central nervous systems. However, in their famous 1981 paper, Axelrod and Hamilton stated that in principle even bacteria could play a tit-for-tat strategy in an iterated Prisoner’s Dilemma. While to our knowledge this has not been documented, interspecific mutualisms are present in bacteria, plants and fungi. Moreover, many species which have evolved large brains in complex social environments lack convincing evidence in favour of reciprocity. What conditions must be fulfilled so that organisms with little to no brainpower, including plants and single-celled organisms, can, on average, gain benefits from interactions with partner species? On the other hand, what conditions favour the evolution of large brains and flexible behaviour, including the use of misinformation? These questions are critical, as they begin to address why cognitive complexity would emerge when ‘simple’ cooperation is clearly sufficient in some cases. This paper spans the literature from bacteria to humans in our search for the key variables that link cooperation and deception to cognition.
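Axelrod and Hamilton’s point about tit-for-tat requiring almost no brainpower is easy to make concrete: the strategy needs only one bit of memory about the partner’s last move. A minimal sketch of tit-for-tat in an iterated Prisoner’s Dilemma, with standard illustrative payoff values and function names that are this sketch’s own, not the paper’s:

```python
# Row player's payoffs in a one-shot Prisoner's Dilemma, using the
# standard ordering T(5) > R(3) > P(1) > S(0) (values illustrative).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the partner's previous move; requires
    # no computation beyond remembering one prior interaction.
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Because the rule is mechanical rather than deliberative, any organism whose behaviour can be conditioned on a partner’s last action could in principle implement it, which is why the question of when large brains are needed for cooperation remains open.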