Over the past few years, you might have noticed a surfeit of articles covering current research on bilingualism. Some of them suggest that it sharpens the mind, while others are clearly intended to provoke more doubt than confidence, such as Maria Konnikova’s “Is Bilingualism Really an Advantage?” (2015) in The New Yorker. The pendulum swing of the news cycle reflects a real debate in the cognitive science literature, wherein some groups have observed effects of bilingualism on non-linguistic skills, abilities and function, and others have been unable to replicate these findings.
Researchers report that our genes may play a role in how empathetic we are. The study identifies genetic variants associated with lower empathy, which may also indicate a higher risk of autism.
Empathy has two parts: the ability to recognize another person’s thoughts and feelings, and the ability to respond with an appropriate emotion to someone else’s thoughts and feelings. The first part is called ‘cognitive empathy’ and the second part ‘affective empathy’.
Fifteen years ago, a team of scientists at the University of Cambridge developed the Empathy Quotient (EQ), a brief self-report measure of empathy. The EQ measures both parts of empathy.
The human response to unfairness evolved in order to support long-term cooperation, according to a research team from Georgia State University and Emory University.
Fairness is a social ideal that cannot be measured, so to understand the evolution of fairness in humans, Dr. Sarah Brosnan of Georgia State’s departments of Psychology and Philosophy, the Neuroscience Institute and the Language Research Center, has spent the last decade studying behavioral responses to equal versus unequal reward division in other primates.
Researchers report that allowing children to taste alcohol may lead to an increased risk of developing drinking problems during late adolescence.
Parents who allow their young children to occasionally sip and taste alcohol may be contributing to an increased risk for alcohol use and related problems when those kids reach late adolescence, according to a new study by a University at Buffalo psychologist.
The findings contradict the common belief that letting kids sip and taste alcoholic drinks is harmless, and might even help to promote responsible drinking later in life.
Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort—or cognitive costs—associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought–Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and foregoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded.
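The contrast between the two strategies is easy to make concrete. The following is a minimal sketch (not the authors' ACT-R/RDDA simulation) of take-the-best and tallying for a binary paired comparison; the cue names, their validity ordering, and the city example are hypothetical illustrations.

```python
# Each option is a dict of binary cue values (1 = cue present, 0 = absent).
# Cue names, validities, and the example data below are hypothetical.

def take_the_best(option_a, option_b, cues_by_validity):
    """Search cues in order of validity; the first cue that discriminates
    between the options decides; guess if no cue discriminates."""
    for cue in cues_by_validity:
        a, b = option_a[cue], option_b[cue]
        if a != b:                      # first discriminating cue decides
            return "A" if a > b else "B"
    return "guess"

def tallying(option_a, option_b, cues):
    """Count positive cue values for each option; the larger tally wins."""
    tally_a = sum(option_a[c] for c in cues)
    tally_b = sum(option_b[c] for c in cues)
    if tally_a == tally_b:
        return "guess"
    return "A" if tally_a > tally_b else "B"

# Hypothetical example: which of two cities is larger?
cues = ["capital", "airport", "university"]   # assumed validity order
city_a = {"capital": 1, "airport": 1, "university": 1}
city_b = {"capital": 0, "airport": 1, "university": 1}

print(take_the_best(city_a, city_b, cues))  # "capital" discriminates first
print(tallying(city_a, city_b, cues))
```

Take-the-best looks "simpler" because it can stop after one cue, but, as the abstract notes, each sequential cue check entails its own retrieval and working-memory update, which is exactly the cost that RDDA decomposes.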
Intuition, argues Gerd Gigerenzer, a director at the Max Planck Institute for Human Development, is less about suddenly “knowing” the right answer and more about instinctively understanding what information is unimportant and can thus be discarded.
Gigerenzer, author of the book Gut Feelings: The Intelligence of the Unconscious, says that he is both intuitive and rational. “In my scientific work, I have hunches. I can’t explain always why I think a certain path is the right way, but I need to trust it and go ahead. I also have the ability to check these hunches and find out what they are about. That’s the science part. Now, in private life, I rely on instinct. For instance, when I first met my wife, I didn’t do computations. Nor did she.”
Once upon a time, it was the lone scientist who achieved brilliant breakthroughs. No longer. Today, science is done in teams of as many as hundreds of researchers who may be scattered across continents and represent a range of hierarchies. These collaborations can be powerful, but they demand new ways of thinking about scientific research. When three hundred people make a discovery, who gets credit? How can all collaborators’ concerns be adequately addressed? Why do certain STEM collaborations succeed while others fail?
Focusing on the nascent science of team science, The Strength in Numbers synthesizes the results of the most far-reaching study to date on collaboration among university scientists to provide answers to such questions. Drawing on a national survey with responses from researchers at more than one hundred universities, anonymous web posts, archival data, and extensive interviews with active scientists and engineers in over a dozen STEM disciplines, Barry Bozeman and Jan Youtie set out a framework to characterize different types of collaboration and their likely outcomes. They also develop a model to define research effectiveness, which assesses factors internal and external to collaborations. They advance what they have found to be the gold standard of science collaborations: consultative collaboration management. This strategy—which codifies methods of consulting all team members on a study’s key points and incorporates their preferences and values—empowers managers of STEM collaborations to optimize the likelihood of their effectiveness.
The Strength in Numbers is a milestone in the science of team science and an indispensable guide for scientists interested in maximizing collaborative success.
A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies.
Geoff Mulgan explores how collective intelligence has to be consciously organized and orchestrated in order to harness its powers. He looks at recent experiments mobilizing millions of people to solve problems, and at groundbreaking technology like Google Maps and Dove satellites. He also considers why organizations full of smart people and machines can make foolish mistakes—from investment banks losing billions to intelligence agencies misjudging geopolitical events—and shows how to avoid them.
Highlighting differences between environments that stimulate intelligence and those that blunt it, Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we’ll need radically new professions, institutions, and ways of thinking.
Complex networks impact the diffusion of ideas and innovations, the formation of opinions, and the evolution of cooperative behavior. In this context, heterogeneous structures have been shown to generate a coordination-like dynamics that drives a population towards a monomorphic state. In contrast, homogeneous networks tend to result in a stable co-existence of multiple traits in the population. These conclusions have been reached through the analysis of networks with either very high or very low levels of degree heterogeneity. In this paper, we use methods from Evolutionary Game Theory to explore how different levels of degree heterogeneity impact the fate of cooperation in structured populations whose individuals face the Prisoner’s Dilemma. Our results suggest that in large networks a minimum level of heterogeneity is necessary for a society to become evolutionarily viable. Moreover, there is an optimal range of heterogeneity levels that maximizes the resilience of a society facing an increasing number of social dilemmas. Finally, as the level of degree heterogeneity increases, the evolutionary dominance of either cooperators or defectors in a society increasingly depends on the initial state of a few influential individuals. Our findings imply that neither very unequal nor very equal societies offer the best evolutionary outcome.
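The setup described in the abstract can be sketched with a small simulation: agents on a network play the Prisoner's Dilemma with each neighbor and occasionally imitate a better-performing neighbor. This is an illustrative assumption-laden sketch, not the paper's model: the payoff values, the preferential-attachment graph, and the proportional-imitation update rule are common conventions in this literature and may differ from the authors' exact choices.

```python
import random
import statistics

# Illustrative Prisoner's Dilemma payoffs with T > R > P >= S (a common
# "weak" PD parameterization; the paper's values may differ).
R, S, T, P = 1.0, 0.0, 1.5, 0.0

def payoff(me, other):
    """Payoff to `me` (True = cooperate) against `other`."""
    if me:
        return R if other else S
    return T if other else P

def total_payoff(node, strat, graph):
    return sum(payoff(strat[node], strat[nb]) for nb in graph[node])

def imitation_step(graph, strat, rng):
    """Each node compares itself with one random neighbor and copies that
    neighbor's strategy with probability proportional to the payoff gap
    (a standard update rule; the paper's exact dynamics may differ)."""
    new = dict(strat)
    for node in graph:
        nb = rng.choice(sorted(graph[node]))
        gap = total_payoff(nb, strat, graph) - total_payoff(node, strat, graph)
        norm = (T - S) * max(len(graph[node]), len(graph[nb]))
        if gap > 0 and rng.random() < gap / norm:
            new[node] = strat[nb]
    return new

def ring_lattice(n, k):
    """Homogeneous network: every node has exactly 2k neighbors."""
    return {i: {(i + d) % n for d in range(1, k + 1)}
               | {(i - d) % n for d in range(1, k + 1)} for i in range(n)}

def preferential_attachment(n, m, rng):
    """Heterogeneous (scale-free-like) network: each new node attaches to m
    existing nodes chosen with probability proportional to their degree."""
    graph = {i: set() for i in range(n)}
    weighted = list(range(m))            # node ids repeated by degree
    targets = set(range(m))
    for new in range(m, n):
        for t in targets:
            graph[new].add(t)
            graph[t].add(new)
        weighted.extend(targets)
        weighted.extend([new] * m)
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(weighted))
    return graph

def run(graph, steps, rng):
    strat = {i: rng.random() < 0.5 for i in graph}   # 50% initial cooperators
    for _ in range(steps):
        strat = imitation_step(graph, strat, rng)
    return sum(strat.values()) / len(strat)          # final cooperation level

rng = random.Random(0)
homog = ring_lattice(200, 4)
hetero = preferential_attachment(200, 4, rng)
for name, g in (("homogeneous", homog), ("heterogeneous", hetero)):
    degrees = [len(g[i]) for i in g]
    print(name, "degree variance:", round(statistics.pvariance(degrees), 2),
          "final cooperation:", run(g, 50, rng))
```

Sweeping the degree variance between these two extremes (e.g. by rewiring a fraction of the lattice's edges) is one way to probe the intermediate heterogeneity regime that the paper identifies as decisive for cooperation.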