Strategies for Managing Team-Based Research (co-authored with Akram Al-turk)

The scientific community celebrates individual achievements by conferring prestige and honors on scientists who win the competitive race to be the first to publish innovative research. Paradoxically, however, modern scientific expertise rests heavily upon work carried out by teams, rather than by scholars working on their own. Tensions between the forces of competition and cooperation thus infuse every aspect of scholarly activity: grant writing, publishing, leadership in scientific organizations, and so forth. It is understandable, then, that graduate students and junior scholars are perplexed about how to manage such tensions.

We believe the key to successful collaborative relationships lies in preparing for them ahead of time, rather than attempting to deal with problems as they arise. In fact, some research suggests that the effectiveness of collaborative work is determined before any of the work is carried out. We have identified four structural elements that increase the likelihood of creating and sustaining collaborative relationships.

Define the Scope and Logic of the Project

At the start, the parties to a collaborative relationship should agree on a project’s scope and logic of inquiry. The researchers should ask themselves a few questions that will ensure that they are all on the same page. For example, will the project be open-ended, continuing until all possible avenues of interpretation have been explored and as many papers as possible published? Or, is the project more self-contained, with target journals or conferences identified and the project ended when a paper is accepted? Is the relevant data for the project already in hand or clearly identified, or will building a new dataset be a major thrust of the effort? Sharing “mental models” of the work to be done and how it should be carried out leads to effective teamwork.

Beyond being able to answer these questions, the types of goals a team sets will likely affect how well the collaboration goes. Although “write a paper together and get it published” is a common goal for academic collaborations, the success of the research project may depend on having a more compelling goal. Is the research question challenging and (by academic standards) somewhat consequential? And is the goal focused enough that researchers are working toward a final product, yet open-ended enough that they retain some autonomy and can be creative when the need arises? Interdisciplinary teams also need to explain to one another the reward systems of their disciplines, as some place a higher value on books than on journal articles, or value certain kinds of journals over others.

Agree about Responsibilities

Teams should also be deliberate and explicit about each researcher’s responsibilities. External factors often dictate how well an organization (or group) does, but individual interventions, especially by team leaders, can lead to more effective team performance. Teams should decide whether one person will be identified as the “leader” of the project, ultimately responsible for making major decisions (after consulting with the team), or whether leadership responsibilities will be rotated. In either case, a leader can increase effectiveness by ensuring that the research team comprises individuals whose skills and competencies complement one another and contribute to the overall goal of the project, by designing tasks that give everyone enough autonomy to make their contributions personally fulfilling and meaningful to the project, and by establishing norms for how the group will work and interact. Teams should identify each member’s competencies, clarify what that member will do to move the project forward, and make sure everyone on the team knows the others’ roles.

Enforce Deadlines and Give/Receive Timely Feedback

Failure to meet deadlines often sinks collaborative relationships. However, failure to even set deadlines is probably a bigger headache. Without deadlines, members have no way of holding one another accountable for holding up their end of the relationship, as a member can always say that they’re not quite finished yet or that they will have their part done “soon.” To reap the benefits of collaborating with people who have complementary skills, team members must also be ready to comment in a timely fashion on intermediate products produced by others. Team leaders can help in two ways. First, they can make sure that all researchers on the team are kept in the loop about how the project is going. Second, they can encourage everyone on the research team (and model ways) to provide good, timely feedback, e.g., by scheduling regular feedback sessions.

Use Coordination Mechanisms That Facilitate the Collaboration Process

Coordination and communication challenges can hinder the success of collaborative research. Although email and video conferencing services such as Skype have become ubiquitous, these technologies do not necessarily ensure that collaboration is successful. For example, although email and video conferencing allow researchers to communicate more easily, these kinds of tools may not be the best for task coordination, information sharing, and intra-project learning. One of the main challenges for teamwork is juggling multiple and simultaneous work tasks. Researchers, therefore, should use tools that help them manage these multiple tasks, allowing them to know what’s expected of them and see changes to the project almost instantaneously. A plethora of programs and software now allow for this. We recommend that researchers start with one that has low start-up costs—both in terms of time and money—and not be lured by fancy features, as they can be a time sink. Sometimes, investing in innovative technologies is worth the time, but teams should be deliberate about whether the investment is worth it for their project.

Summary

We have identified strategies for mitigating or eliminating collaboration problems in team-based research. At the beginning of a project, face-to-face meetings can establish the ground rules and expectations for all members of the team. Free riding, shirking, and social loafing are much harder when team members agree on responsibilities and create monitoring and enforcement mechanisms. Candid and timely feedback limits the damage that emergent problems can create, but it requires strong leadership and commitment by all members to be effective. Finally, as in other collaborative efforts, state-of-the-art coordination and communication technologies facilitate effective team governance.

The Impossible Necessity of History

Some book titles are so compelling that you’d feel guilty if you didn’t at least pick the book up and skim it. Such is the case with Ged Martin’s book, Past Futures: The Impossible Necessity of History (University of Toronto Press, 2004), based on the 1996 Joanne Goodman Lectures at the University of Western Ontario. Despite his thoroughly convincing arguments that historical explanation, as we know it, is methodologically and analytically impossible, he managed to convince me that it is nonetheless worth doing. This is the kind of book that people used to describe as a tour de force. What’s Martin’s argument?

A small town that time has (almost) forgotten.

He asserts that the data available to historians are hopelessly incomplete, the models they build are fraught with selection bias, and our view of the past is unjustifiably judgmental. He advocates giving up traditional historical scholarship in favor of locating events in time, identifying their relationship to each other, and connecting them to the provisional present.

In terms of data, three problems confront anyone turning to the historical record for evidence about what “happened in the past.” First, throughout most of human history, very little that happened was permanently documented. Hugely significant events went unrecorded or were noted with incomplete details using fragile techniques and materials, which disintegrated, burned, and were lost forever. Second, only a minuscule fraction of the population has ever been in a position to actually have their actions recorded. Much of what we do know about the past concerns that vanishingly small segment of the population some have recently labeled the 1%: elites who had the luxury of employing others to document what they did or the resources to create semi-permanent records using materials such as stone or parchment. The vast majority of the population engaged in activities that are now essentially invisible to us, although forensic anthropology and archaeology are pretty good at working with the few artifacts we can find. Third, and more problematic, the people who did leave records behind tended to engage in hyperbole, self-aggrandizement, and untrustworthy accounts of the role they actually played in historical events. Although the rise of modern digital technology would seem to have improved matters greatly, Martin argues that the problem still exists, but now on a grander scale. It is simply impossible to know everything that happened in the past.

In terms of model building, contemporary historians are in the unfortunate position of knowing exactly how things turned out. First, scholars are tempted to build their explanations backwards, starting from outcomes and then searching for plausible prior events, continuing back through history until reaching a “satisfactory” explanation. But they will be working with historical materials left behind from each era by people who had their own theories of why things had happened and structured their documentation accordingly. Second, almost all events have multiple causes. Prioritizing them and determining how much leverage each exerted on an outcome of interest is nearly impossible, given the data problems mentioned above. Martin compares this task unfavorably with the situation of laboratory scientists, who can run multiple experiments under conditions where they control many possible causes and isolate the influence of specific factors. By that test, of course, almost all social science explanations will also fail. Third, and perhaps most important, uncertainty permeates every aspect of human activity, with people facing multiple options at every turn. Even focusing on “decision-making,” as Martin advocates, doesn’t remove the problem that people have only the faintest of ideas about what will happen next, given the actions they take. Moreover, because we have no way of getting inside the heads of the people who made those important decisions, we can only speculate as to what they were thinking at the time they acted.

The “past futures” of the title refer to the fact that, from the perspective of the present, everything in the past can be viewed as the realized futures of people who had little clue as to what was coming next. Today, we are their future, but it is highly unlikely that any of them foresaw it. In writing history by looking backwards from the present, it is tempting to make our “known past” part of our explanation by treating it as the intended future of humans who were making decisions about which options to pursue. But of course, lacking clairvoyance, they had no ability to imagine all the possible futures that would unfold. Nonetheless, the temptation to write linear, coherent narratives about why things had to happen the way they did overwhelms most scholars.

But wait, there’s more! Martin also takes historians to task for imposing normative judgments on the actions of historical figures, using contemporary values. The severity of the normative judgment increases the further back in time the historian travels. He uses the example of people involved in the slave trade in the 17th and 18th centuries, as well as more contemporary examples. Martin’s point is that such normative judgments cloud the construction of analytic arguments, biasing the selection of cases and causal principles.

Despite the incredibly bleak picture Martin draws of the impossibility of historical analysis, he nonetheless concludes his book with the argument that contemporary social scientists “need” historical analysis. Giving up their quest for comprehensive explanations of historical events, historians can instead simply locate events in time and identify their relationships to one another. They can tentatively indicate which events were more significant than others by making comparisons to possible alternatives, now known because we have the luxury of looking backwards. Abandoning the conceit of the superior present, they can remind us that “each succeeding present is merely provisional, nothing more than a moving line between past and future.”

Discerning readers of my blog will now recognize why I like this book so much: this is a very evolutionary argument, cognizant of the need for humility in building tentative explanations of social phenomena. “Past futures” are always explicable, if one is willing to commit the kinds of methodological and analytic fallacies that Martin points out. Don’t go there. He argues that contemporary historiography has plenty to do, without falling into the trap of building “neat and tidy” explanations. Instead, historians can make us aware of our own ethical standpoints and caution us against ransacking the past for justifications of currently favored policies. The future awaits us, but it is probably not the one we envisioned, nor could we have envisioned it.

Write As If You Don’t Have the Data

At a conference, when you ask somebody to tell you about their current project, what do they typically say? I often get a puzzling response: instead of beginning by telling me about an idea, the person starts by describing their data. They tell me they are using survey data they have collected, or data from an archive, or data they’ve scraped from the web. As they go on at length about the nature of the data, I have to interrupt them and ask for what purpose the data will be used. Then, I’m likely to get a description of an analytic method or computer software. It’s almost as if they have devoted most of their working hours to thinking about what they can do with the data they have collected (or will collect) and very little time to the question of where their project fits into some larger scheme.

Loss of control can be dangerous but exciting!

I’ve realized that this response partially explains why many graduate students have such a difficult time in writing a thesis proposal. Two kinds of problems result from a “data first” strategy.

First and most obviously, beginning with data considerations may lead to the unintended outcome of writing a theoretical framework and conceptual model, complete with hypotheses, that are totally framed around what the data permits. In the worst-case scenario, this can resemble the kinds of narratives corporate historians write when they begin with what they know about their firms in the present and then build a story to suit. Researchers may anticipate journal reviewers’ biases toward “significant” results and may simply wait to begin writing their story until they’ve conducted preliminary analyses.

In the writing workshops that I offer at conferences, I often have students tell me that they wait to write the introduction to their paper or thesis until after they’ve done the “analysis and results” section. This is certainly a safe strategy to follow if one wants to economize on doing multiple drafts of a paper, but it goes against the spirit of disciplined inquiry that we try to engender in our theory and methods classes.

Second and far more damaging from my point of view, following a data first strategy severely constrains creativity and imagination. Writing a theoretical introduction and conceptual model that is implicitly tailored to a specific research design or data set preemptively grounds any flights of fancy that might have tempted an unconstrained author. By contrast, beginning with a completely open mind in the free writing phase of preparing a proposal or paper allows an author to pursue promising ideas, regardless of whether they are “testable” with what is currently known about available data.

When I say “write as if you don’t have the data,” I’m referring to the literature review and planning phase of a project, preferably before it has been locked into a specific research design. Writing about ideas without worrying about whether they can be operationalized (whether in field work, surveys, or simulations) frees authors of the burden they will eventually face in writing their “methods” section. Eventually, a researcher will have to explain what compromises have been made, given the gap between the ideas they set out to explore and the reality of data limitations, but that bridge will be crossed later. Rushing over that bridge during the idea generation stage almost guarantees that the journey will be a lifeless one.

Even if someone is locked into a mentor’s or principal investigator’s research design and data set, I would recommend they still begin their literature review and conceptual modeling as if they had the luxury of a blank slate. In their initial musings and doodles, as they write interpretive summaries of what they read, they might picture a stone wall that temporarily buffers them from the data obligations that come with their positions as data supplicants. Writing without data constraints will, I believe, free their imaginations to range widely over the realm of possibilities, before they are brought to earth by practical necessities.

So, the next time someone asks you about what you are working on, don’t begin by talking about the data. Instead, tell them about the ideas that emerged as you wrote about the theories and models that you would like to explore, rather than about the compromises you will eventually be forced to make. The conversation will be a lot more interesting for both of you!