As Harriet Keck* registered the words on her screen, the first thing she felt was shock. The doctor and scientist at a major Canadian teaching hospital had received an email from her boss’s office stating that an allegation of scientific misconduct had been made against her. “I was trembling, I was out of my mind,” she says.
Feeling deeply ashamed, Keck tried to piece together what could have happened. She thought it might relate to research she’d collaborated on with a doctor at her institution. The project had required extensive time, often at night and on weekends, in her laboratory. Their studies were important—related to developing better tests for diagnosing a disease that was the focus of their research—but the relationship soured when they quarrelled over a journal article.
Journal articles are the currency of science. A discovery barely exists until it has undergone peer review—a rigorous process of scrutiny conducted by other experts in the field—and a journal has published the results. Furthermore, publications are essential for building a research career. For the article in question, Keck maintained forcefully that the prize position of last author, which signifies supervising scientist and confers a competitive edge in the high-stakes battle for research funds, academic promotion and coveted appointments, was rightfully hers, as she’d done the work on the project in her lab. While she won the battle, she lost the research relationship, opting not to work with the senior doctor on future projects.
Up to this point, her experience could be described as unpleasant but not uncommon. Elise Smith of the United States National Institute of Environmental Health Sciences and Vincent Larivière of the Université de Montréal recently surveyed 6,000 scientists about scientific collaboration; Smith asserts that their preliminary analysis shows that over half of the respondents had encountered disagreements about authorship on scientific papers, one-third had limited further collaboration as a result of the disagreements, and around a quarter had seen authorship disputes lead to hostilities between colleagues.
After Keck ended the research relationship, her former research partner asked to receive comprehensive materials from their studies together because he intended to go further with the project. That’s when things turned from bad to ugly. While Keck gave him what she could, the senior doctor also wanted items that could no longer be examined because they were not designed to be permanent, she says. It led to a bitter argument.
When Keck met with hospital administrators, she learned what had happened next: her collaborator told the hospital’s head of research that the data in the paper they’d published together could not be trusted because Keck could not produce original versions of all the materials the paper was based on, suggesting that her data could be inaccurate or false. Now, the administrators told her, they would need to conduct an inquiry to see if the situation warranted a full-scale investigation.
Consequences for research misconduct can be severe. Science relies on experimental data, so a scientist who fabricates or falsifies it is guilty of committing a cardinal sin. By the time errors come to light, other scientists will, most likely, have wasted precious time devoting themselves to building on an idea that must be discarded, and scarce grant monies will have underwritten fraudulent work. In the case of medical research, patients in hospital may be affected if false findings are relied on to guide clinical treatments. But if you’re missing certain, necessarily impermanent, materials from your research, is that as bad as making up data? That depends. In Canada, it could be.
In December 2011, after decades of mounting criticism about science regulation, the country’s three federal research granting agencies—the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC) and the Social Sciences and Humanities Research Council (SSHRC), which together give out about $2.4 billion annually—enacted an eighteen-page, six-part code of conduct that dictates how scientists are to behave, how institutions are to keep them in line and what must happen if scientists or institutions breach the code. Called the Tri-Agency Framework: Responsible Conduct of Research, it is overseen by the federal Secretariat on Responsible Conduct of Research. But as the framework reaches its fifth birthday, critics say it’s fallen short, ensnaring some scientists unfairly while leaving others facing few or no sanctions despite serious violations.
The system uses a broad definition of what constitutes misconduct, bundling unintentional mistakes with outright fraud. It also relies on individual institutions to receive and investigate allegations, a self-governance arrangement that hasn’t always worked well in the past. Moreover, as confidentiality rules shroud the investigation processes, operations intended to boost trust in Canadian science are, themselves, clandestine.
In 1989, Memorial University’s Dr. Ranjit Chandra was made an officer of the Order of Canada for his contributions to nutrition research. That same year, he published a study in the British Medical Journal (BMJ) on a hypoallergenic infant formula. But, as the CBC later revealed in an exposé on The National, the paper included data on nonexistent babies, according to a nurse who blew the whistle to university authorities. An investigation at Memorial concluded that Dr. Chandra had committed scientific misconduct. But he was not fired. The school did not inform the BMJ about the investigation, and, in 2000, Chandra submitted another paper there, purporting to show the brain-boosting effects of a vitamin supplement in those over age sixty-five. This time, reviewers for the journal thought the data might be fabricated, and a journal editor contacted Memorial (the university did not reveal its earlier investigation). BMJ passed on the article, and Chandra sent the paper to Nutrition, which published it in 2001. A New York Times health columnist was among those reporting its findings that seniors who took the supplement had significant improvements in short-term memory. When other scientists raised concerns, finding some of its claims “implausible,” Nutrition gave Chandra a chance to respond. He could not do so to their satisfaction, and in 2005, the journal retracted the paper.
Chandra resigned from Memorial and left Canada, but after the CBC aired its takedown, he sued the university and the network for defamation. In 2015, a jury ruled for the CBC. For its part, the BMJ retracted Chandra’s 1989 paper and ran an editorial slamming all involved. “It is shameful that the university, Canadian authorities, and other scientific bodies have taken no action against Chandra and that it has been left to the mass media to expose his fraud,” two editors wrote. Memorial also acted in 2015, finally accepting a university investigation from 2009 that found Chandra had committed misconduct in his supplement study—a process that, according to the university, had been stalled by the litigation. That same year, Chandra lost his Order of Canada.
In the US, a federal office now known as the Office of Research Integrity (ORI) has investigated allegations of scientists’ dishonesty since 1989. By 2007, several countries in Europe had also founded similar agencies or committees. An editorial published the same year in the Canadian Medical Association Journal posed the question, “Why has Canada lagged so far behind its Western counterparts in establishing comprehensive mechanisms and processes to deal with scientific misconduct?” But Daniele Fanelli, who studies scientific misconduct at Stanford University in California, is not surprised that Canada had no clear national policy before 2011. “Until a decade ago, virtually nowhere did you have rules in place to do anything about suspected cases of scientific misconduct,” he says. Fanelli, who has a doctorate in evolutionary biology, switched his focus to misconduct after growing cynical about the idea of science as objective and self-correcting. Analyzing eighteen surveys that asked scientists about falsifying and fabricating data, he found that about 2 percent admitted to having done so themselves and about 14 percent had personal knowledge of a colleague who had. Though the surveys were largely conducted in the US and the United Kingdom, there’s no reason to think that the results, published in a 2009 paper in the journal PLOS One, would be any different in Canada.
In 2009, amid growing pressure for change, a network of Canadian academic and research organizations issued a report calling for uniform guidelines for responding to scientific misconduct. Diverging from the model adopted in the US, the network lobbied for investigative power to remain with individual institutions. Adopted in December 2011, the Tri-Agency Framework largely accepted the structure the network had suggested—the institutions receive allegations and investigate them, while the Tri-Agency Secretariat tracks the process and reviews the institutions’ reports. “That’s the most that the stakeholders would put up with,” says philosopher Michael McDonald, founding director of the University of British Columbia’s W. Maurice Young Centre for Applied Ethics. For research institutions, he says, the interests lay in “protecting the institutional reputation [and] keeping things carefully in house.”
Still, there were results. Nationally, there was a jump in reporting from thirty allegations of research misconduct reported to the three funding agencies in 2010–2011 to seventy-seven allegations filed with the Secretariat the year the Framework took effect, from December 2011 to March 2013. By the year 2015–2016, the figure had jumped another 15 percent, to eighty-nine allegations. Over the past five years, seventy-eight scientists seriously breached the rules in sixty-eight separate cases, according to David Wolkowski, public affairs officer at the CIHR. Nearly a third of the cases—twenty-one files—involved plagiarism, while fabrication or falsification of data accounted for eleven files. The rest involved mismanaging agency funds (eleven files); breaching the research integrity policy (nine files); giving false or inaccurate information to a granting agency (eight files); and breaching other agency rules (eight files). During a talk in Thunder Bay, Ontario, in April 2016, the Secretariat’s Karen Wallace shared that the reprimanded included grant applicants who falsified letters of support in their applications and a supervisor who plagiarized from a student’s thesis.
When an investigative report comes in, Secretariat analysts check it for adherence to the requirements, strip it of identifying details and send the redacted version to a slate of academics, dubbed the Panel on Responsible Conduct of Research, which recommends a course of action to the agency presidents. Some cases result in serious sanctions: since 2011, sixteen scientists have been ordered to repay the government hundreds of thousands of dollars in grant monies and six have been banned for life from receiving agency funds. In less severe cases, a scientist will receive a letter of awareness or admonishment from the agency funding their work.
Punishments imposed by a scientist’s institution, however, remain inconsistent—a similar violation may bring a slap on the wrist at one university and far worse at another. For example, two senior professors, Robert Casper, an obstetrician at the University of Toronto who holds a research chair at Toronto’s Mount Sinai Hospital, and Dongqing Li, a mechanical engineer known for his work in nanotechnology at University of Waterloo in Ontario, both had papers retracted for plagiarism. Casper’s paper, written with two co-authors, plagiarized from several different articles and was retracted from the journal Reproductive BioMedicine Online. Neither the hospital nor the university investigated. “The retraction was deemed to sufficiently address the issue,” wrote Sally Szuster, senior manager of communications and public affairs for Mount Sinai Hospital, in an email. Casper said in an interview that the first draft of the article was written by the first author, as is customary; when he read it over he saw no sign of plagiarism and was mortified at the discovery. He thought his superiors at the hospital accepted that the retraction was sufficient because it was a review article and not original research.
In 2010, Li and a graduate student also published a review article containing plagiarism: it incorporated parts of a paper that a scientist in Boston had posted on a website. According to a report in Postmedia News, the article was retracted in 2012 after the Boston scientist discovered the plagiarism and contacted both the journal and the University of Waterloo. Li too was unaware of the plagiarism at the time of submission (journal editor Roland Zengerle reported, via a letter to Postmedia, that Li’s student claimed responsibility), yet according to media reports from the National Post and elsewhere, Waterloo investigated and punished Li by barring him from his laboratory and the campus for four months without pay; Li also issued a public apology for his actions. In Li’s case, the retraction came after the framework was adopted, while Casper’s took place in the same month the framework came into effect. Yet consequences for misconduct continue to vary. “Each institution runs its own show,” says McDonald.
In the view of Secretariat Executive Director Susan Zimmerman, the system is “proving to be quite robust.” She accepts that one institution’s processes differ from the next, preferring variation to overly prescriptive policy. Zimmerman also points to the development of explicit requirements for investigative committees, such as clear timelines for investigations and requiring that they include at least one external person unaffiliated with the institution, as examples of the positive developments the Tri-Agency Framework has engendered. Others say the framework’s heavy reliance on institutions is fundamentally flawed.
Meanwhile, Dr. Donald Miller, a former journal editor at the Ottawa Hospital, says that even filing an allegation can be a problem at certain institutions. Each is supposed to have a go-to person for complaints about unethical or questionable research practices, a condition of eligibility for receiving federal funds since 2011. But as Miller explains, not all of them do. If a pollster were to ask medical school deans across the country who at their faculties handles complaints about scientists behaving dishonestly, the responses would be “all over the map,” he says, with a few schools having no one assigned to the job. There’s such a wide gap between how Canadian institutions oversee integrity in science, he says, that “we don’t have really national standards.”
For Keck, the next step after hearing the allegations was a risky move: informing her close colleagues. Once word gets out, she says, “other people see you as if you have leprosy or something, an infectious disease that they’re going to contract, so everybody wants you out of their way.” Despite this, she wanted her collaborators to know of the allegation so they could make informed decisions about the work they were doing with her. She also worried about further consequences if the wheels now turning led to her having to retract the paper her former partner was questioning.
Just as publishing builds a scientific career, retractions dismantle it. Journals don’t keep blacklists, but the effect on the retracting scientist is as if they did, Miller says. “You’re not going to be allowed to come back in,” he explains. Even scientists who worked unwittingly on a retracted paper pay a high price. In a study published in 2015, Larivière followed scholars whose papers had been pulled out of circulation after a discovery of misconduct and found that all the authors were affected strongly, “despite the fact that only a handful or only one committed the fraud,” he says. The “innocent coauthors” of retracted papers were far more likely to have stopped publishing entirely—meaning they’d left science—than other scholars publishing at the same time in the same journals.
One of Keck’s colleagues told her to consult a lawyer. Keck would be answering questions from a hospital-appointed committee, and the lawyer walked her through how that would work. The lawyer also warned that, despite the framework’s vaunted timelines, if she was promised a result within a certain period she should assume it would take at least three times that long.
The meeting finally came about eight months after Keck first heard about the allegation against her. It was four-on-one: two scientists asked questions while two hospital staff listened and took notes. Though Keck thought the scientists were genuinely interested in what had happened, she still feared the hospital could bias the investigation in favour of her accuser: the scientists would write a report, but the hospital’s research authorities would ultimately decide the case. She worried they would pander to her accuser, who was a celebrated figure. Through her lawyer, Keck had seen emails he’d written hospital higher-ups after filing his complaint against her—“Such a great meeting, so nice to see you again” were lines that stayed with her—and she thought he formed part of an Old Boys’ Club with those who would now judge her.
In addition, a colleague who had served on an investigating committee a few years earlier had told Keck that, in that instance, the hospital leadership had pressured his committee to “find something.” When I spoke to that colleague directly, he told me that the request to serve on the committee had come directly from his boss, “somebody who could turn around and make my life difficult.” His boss’s agenda, he says, was to find a “source of guilt” for the accused. Though he and the other committee members did their best to get at the truth, he says, “it was a conflict.”
“You’d like to think that with investigations into misconduct, they’re done with the perspective of ‘let the data speak,’ and you just go where the data goes,” says Steven Shafer, an anesthesiologist at Stanford University in California and editor of a leading journal in his field. But it’s not always done that way, he says. “If people are out to get you, they’ll take things out of context.”
The possibility for institutional bias is the basis for the Tri-Agency Framework’s requirement that every investigating committee include a scientist external to the institution, says Marc Joanisse, chair of the Panel on Responsible Conduct of Research, which advises the agencies on how to punish scientists. The aim, he says, is to ensure that researchers aren’t subject to reprimand simply because they’ve “incurred the ire of a certain vice president of research.” But David Robinson, executive director of the Canadian Association of University Teachers (CAUT), says universities sometimes bring in a retired professor as the external committee member, “so it’s not really independent.” And those serving on investigative committees can be prohibited from disclosing their opinions, even within confidential investigative reports. Some universities and hospitals require all members of investigating committees to sign a statement that they agree to the majority decision, while others explicitly prohibit minority or dissenting reports.
For a scientist who believes the case against her is unfair, there are not many options. She may be able to appeal to an institutional leader such as a hospital CEO or university vice president, but some institutions lack any appeals mechanism for professors found guilty of research misconduct. For his part, Robinson would like to see the establishment of a process to appeal an internal investigation to an outside and independent authority. “One of the basic principles of natural justice is that you do have a right to appeal if there’s fair grounds for doing so,” he says. Scientists can also complain to the Secretariat, which has the authority to investigate institutions to see if they adhered to the Tri-Agency Framework, but the success rate for such complaints is low. According to Wolkowski, eleven of sixteen complaints by scientists against institutions since 2011 have been closed without a finding that the institution had breached the framework, and five remain open.
As for the directive to “find something,” that’s not necessarily a tall order given the framework’s broad definition of misconduct and the fact that a scientist need not have committed a violation intentionally. Prominent Canadian scientists argued forcefully for this in the years leading up to the 2011 framework, with Howard Alper, former chair of the federal Science, Technology and Innovation Council, writing in 2010 that research misconduct “applies to anything that violates ‘good research culture.’” Under the framework, Canadian scientists who misuse their grant monies or fail to give a colleague or a student appropriate credit for their work are guilty of breaching the framework, as are those who make honest and unintentional mistakes or have sloppy practices. The concern is that even inadvertent misconduct can affect the research record, leading other researchers astray, says McDonald.
It’s a sharp contrast to the US, where a scientist is guilty of misconduct only if she fabricated or falsified data, or plagiarized, and did so in a way that was intentional, knowing or reckless. Early on, the US had counted a scientist’s “serious deviations” from “accepted” practices as a type of misconduct, but that definition was rejected as overly inclusive after a federal appeals board overturned a series of decisions by the ORI, saying the evidence did not support the charges. In one of those cases, involving a paper where a coauthor was a Nobel prize winner, several of the charges “boiled down to differences of scientific interpretation,” wrote science historian Daniel Kevles in a book about the case. The Nobelist, who had resigned as president of a university when the charges were laid, returned to academic leadership posts after the successful appeal, becoming president of the California Institute of Technology in Pasadena, a top science university, where he remains active today.
“Everybody knows you shouldn’t fabricate, falsify or plagiarize, no matter what field,” says Zubin Master, a bioethicist at Albany Medical College in New York. But other questions, such as who counts as an author, fall into what “is still a very, very greyish, ethically ambiguous area,” he says. As an example, he points to widely accepted guidelines from the International Committee of Medical Journal Editors that say authors should make a “substantial contribution” to the work. “What does ‘substantial’ mean?” Master asks. “Are you going to hold someone in misconduct for that? What if you make a mistake? What if you just don’t know? It gets cloudy to make someone at fault.”
The framework’s broad definition held an ominous meaning for Keck. Waiting on the decision, she contemplated what could happen. She was certain she could lose her job. On long walks with her husband, sometimes early in the morning before their teenagers awoke, they talked over how they would deal with such an enormous financial blow. Should they sell the house? Change the children’s activities? “It was just a nightmare,” she says.
Professor Trudo Lemmens, Scholl Chair in Health Law and Policy at the University of Toronto Faculty of Law, taught research integrity at the University of Toronto’s Institute of Medical Science. In his lectures he described questionable situations, such as a scientist writing up data in a drug study in a way that intentionally misinformed people about the results. He says that afterwards, one or two students, budding medical researchers, would approach him to detail incidents that seemed ethically off base in the laboratories where they were working. When he suggested they report their concerns, students sometimes asked him what they should do if the lab director in question was an important person in their institution. The implication was that a junior person could not report a high-profile scientist and remain unscathed. And yet there isn’t anywhere else they can go: complaints filed directly with the Secretariat are diverted back to the institution. “It speaks for itself that an investigation by an institution of its own misconduct creates a conflict,” Lemmens says. “It’s like you would ask the police to investigate their own misconduct.”
At the moment, anonymous allegations are not universally accepted—though Joanisse said that should change under a revision to the framework that will require all institutions to investigate anonymous complaints as long as they have clear substance. (CIHR Media Specialist David Coulombe said the agency anticipates final approval of this new requirement by the end of 2016.)
What’s needed, in Lemmens’ view, is an “independent investigative structure,” such as a federal agency with a primary investigative role. The CAUT has lobbied for that as well, due to concerns about the lack of whistleblower protections at most universities and colleges. David Robinson says he’s raised the notion of an independent research integrity board or office with Zimmerman and with Minister of Science Kirsty Duncan, since such a major change would require a political decision.
Those who do blow the whistle can find the experience disconcerting. Gina Whitworth*, a clinical researcher, felt this acutely after a difficult situation that emerged when a foundation asked her to review a grant proposal. As she was reading the document, it began to sound familiar; she soon realized that the proposal reproduced, almost verbatim, pages from a manual she’d written with colleagues. Shocked, she contacted the foundation, provided them with a copy of her manual and, at their request, identified the parts that were the same in the proposal. Then she waited as the foundation asked the professor’s institution to investigate. A year passed before she heard from them again, in a letter marked “Very Strictly Confidential.” The foundation wrote that while they could not share details of the institution’s report, the professor had taken responsibility for the plagiarism and the investigation had concluded it resulted from “poor judgment or carelessness.”
“I don’t buy that at all,” says Whitworth, pointing out that the professor had purposely changed some words in the plagiarized material so it would appear to apply to the research she was proposing. But Whitworth learned little else about what happened with the proposal or what the consequences were—the letter said only that sanctions had been applied, and she never received an apology.
Worried for junior researchers at the plagiarizer’s institution who might have their own work stolen, Whitworth grew even more concerned a few years later when the professor was appointed to a high-level administrative post. Because of the “Very Strictly Confidential” label on the letter she’d received, she thought she could be accused of violating confidentiality if she spoke up—but she had no way of knowing if the administrators promoting the plagiarizer were even aware of the history.
The typical case of scientific misconduct does not get press attention, says Larivière. These cases—where researchers commit misconduct on only one paper—comprise what he calls the “everyday fraud,” and they are not well understood. James DuBois, a medical ethicist and experimental psychologist at Washington University in St. Louis, Missouri, is focused on those cases, working with scientists who’ve committed “more mild” types of what Canadians would refer to as research misconduct. DuBois runs a program he refers to as “researcher rehab,” a three-day small-group workshop followed by three months of personal coaching that aims to help researchers improve and maybe win back their rights to do research. Institutions refer the scientists—who must then pay the hefty program fee—usually after revoking their research privileges. (The program has not taken on cases of scientists who have been fired from their jobs.)
Early on, naysayers told DuBois that institutions wouldn’t refer scientists because the material was too confidential to talk about in a small group setting, but the National Institutes of Health disagreed and funded the program’s development.
Yet the critics weren’t entirely wrong: as the program starts its fourth year, only thirty institutions have sent scientists. “There are a lot of people who say, ‘Why should we give anyone a second chance?’” DuBois says. He thinks the institutions that do refer scientists want to show federal overseers such as the ORI that they’re taking research integrity and compliance seriously.
Soon after the program launched, DuBois spoke with Zimmerman and a few others working with her in Ottawa. They wanted to learn about the St. Louis program and discussed the feasibility of a similar initiative in Canada. To date, that hasn’t happened. In Canadian science, the funding agencies set guidelines and the Secretariat implements them, but the power to keep scientists honest lies mostly with institutions. The Secretariat does not police investigations, though Zimmerman and Joanisse both say their office prods and cajoles.
“From the moment they know that there’s been an allegation, they’re making sure that the wheels are turning at the institution to do the inquiry,” Joanisse said. “And in the rare instance where the procedures haven’t been followed, they’re going back to the institution and saying, ‘Look, it’s been weeks. What’s going on? Why haven’t we heard from you?’” But it stops there.
“They don’t really have much authority beyond moral authority,” says Robinson.
After the four-on-one meeting, Harriet Keck continued to face a void of information. Though intimidated by the meeting, at its close she’d dared ask when she could expect to hear back on the results of the inquiry. A research staff member told her it would be within a month. Remembering what the lawyer had said, Keck thought she might hear three months later whether there would be a full-scale investigation. Six months later, Keck had heard nothing. “I have no idea what’s going on,” she said when I reached her, sounding close to tears.
At that time, Keck was in limbo. She hadn’t received a copy of the notes taken at the meeting and didn’t know what the evidence was against her. She’d applied for a university promotion, but she feared the allegation against her could block the process.
A senior colleague then phoned her at home to let her know that someone else had heard, informally, that there would be a decision in her favour. Still, more weeks passed, and no one from the investigating office contacted her, something she says underscored her sense that information was going to those deemed politically important, while she had been treated like “a criminal.” While she supported the need to investigate alleged misconduct, she felt that the process was unfair and opaque.
Finally, about seven months after the four-on-one meeting and fifteen months after she’d first learned about the allegation, the hospital told her that a decision had been made: the scientists had concluded that there were no grounds for further investigation; the inquiry was over. Keck was relieved, she told me, but she remained angry that it had dragged on for over a year and that she’d been kept in the dark. She says that officials gave information about her case to others, including her research collaborator on an unrelated project, violating her confidentiality.
“As a government agency, what we can do is set out good guidance, provide education and set standards that we expect institutions to live up to,” Zimmerman said when we spoke about gaps in the system for investigating misconduct. “We can’t be in every institution,” she added. If the experiences of Keck and Whitworth are anything to go by, the institutions are fully aware of the Secretariat’s limitations. The processes they’ve established, variable and inequitable, prone to bias and largely closed to outside scrutiny, are the result. A solid system for honest science requires the sort of independent oversight the Secretariat is not empowered to provide: if it could, its spotlight might focus as much on Canada’s research institutions as on any individual scientist.
*Names have been changed to protect privacy.