5-14-10 Community E-Newsletter/Call
Hello Everyone. In today's Community Call, we'll update you on our Phase 2 web development and request input from the community.
This is an open invitation to get more involved with the world's first unbiased, freely licensed, and dynamic medical/health knowledge base.
Time: Friday (Today!) 2 - 2:45pm EST
If you would like to opt out of these emails, please let me know. Thank you for your continued interest in the OurMed initiative--we couldn't make this happen without your participation and are grateful that you're pitching in!
Greg Miller, Executive Director at OurMed.Org
Watch our OurMed YouTube video: http://www.youtube.com/watch?v=zqgYfFxEkLk
OurMed's Technology Update
In today's community call, we'd like your feedback on several critical issues related to the site development.
1) CMS Platform - We have a commitment from our vendor, Blueliner, to develop the CMS in straight MediaWiki; extensions for news, blogs, etc. will remain the same as in the current design.
2) Multi-language functionality has been confirmed. We aim to be like Wikipedia in that regard.
3) Split Screen - One of our objectives in building the new site is to marry the meritocracy of articles authored by healthcare professionals with patient and consumer feedback that represents the democracy of information. This is a critical inner page that we'd like your feedback on.
4) The Search Algorithm is a highly technical part of our site. Using geo-targeting strategies, we will need to determine how search results are ranked, considering IP addresses, the location of an article's author, or the searcher's location. Blueliner's Arbab will suggest an algorithm.
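To make the discussion concrete, here is a minimal sketch of how location could be blended into search ranking. This is not Blueliner's proposed algorithm (which has not yet been presented); the function names, the distance-based decay, and the `geo_weight` parameter are all illustrative assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def geo_score(relevance, searcher_loc, author_loc, geo_weight=0.3):
    """Blend a text-relevance score (0..1) with the proximity of the
    article's author to the searcher. Locations are (lat, lon) tuples,
    e.g. derived from an IP-geolocation lookup. Proximity decays smoothly
    with distance; geo_weight controls how much location matters."""
    dist = haversine_km(searcher_loc[0], searcher_loc[1],
                        author_loc[0], author_loc[1])
    proximity = 1.0 / (1.0 + dist / 1000.0)  # ~1.0 nearby, toward 0 far away
    return (1 - geo_weight) * relevance + geo_weight * proximity
```

In practice, articles would be ranked by `geo_score` in descending order; whether author location, searcher IP, or both should drive the proximity term is exactly the open question for the community call.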
5) New Facebook technology to affect OurMed!
Arbab wanted to draw your attention to the latest features being launched by Facebook; Arbab believes Facebook made the announcement yesterday. Have a look at the presentation: http://apps.facebook.com/feightlive/
The new features will really reshape how users interact with other sites and how they share information. They will also encourage sites to incorporate Facebook's new APIs to draw more traffic.
6) Using an image on our homepage to communicate "collaboration". This very important aspect of OurMed's branding will appear in the Featured Spotlight section of our homepage design.
Building content – What to write about?
Changes or additions can be made easily with our WYSIWYG ("what you see is what you get") editor, making it much easier to contribute than on Wikipedia and significantly accelerating the ramp-up of content on the new site.
At times, new material may be difficult to attribute, so it may be easier to republish previously published work.
Alternatively, you can choose content from one of the many free-content ("copyleft") sites, such as:
2. NIH's PubMedCentral.gov from the National Library of Medicine
4. GanFyd.org (the original medical wiki site, which claims Medpedia copied it and boasts 2,000 site visitors per day)
7. Open.Michigan from the University of Michigan
Posting on OurMed.Org
In addition to the Symbiosis Project, OurMed invites writers of original work to publish on a vast range of medical topics. Under the three-pillar approach of 1) Being Referenced, 2) Being Bold, and 3) Being Polite, OurMed strives to be a forum through which multiple health and medical issues are presented and debated.
To write, you must have a free OurMed account. You can write about nearly anything; just keep your comments about new ideas, health, and medicine. It's really important that OurMed gets off the ground using a community's collaborative approach to building it, just as Wikipedia did nine years ago.
We are furthering our editorial policies to include a Style Guide. Feel free to suggest ideas to make this a global "go-to" resource for all healthcare needs fit for any patient or healthcare professional.
Please click on this link to make a small post about whatever's on your mind. You can suggest articles, design or features that you'd like to see on the site. http://ourmed.org/index.php/New_Ideas_for_Site
Most Active Authors in The Past Month:
* D Joiner
OurMed's MedTool Project
As we develop OurMed's Phase 2 site, we want to announce a competition to inspire our content contributors to come up with the most useful healthcare diagnostic tools from around the world.
Submissions will eventually be open-sourced and written in Joomla so that they will "plug in" to our new site as well as be available worldwide in a copy-left offering. To contribute, a contestant need not be tech savvy, only familiar with common health and medical needs. Will it be a simple Body-Mass calculator, a Symptom Disease matcher, a Diabetic Insulin calculator? The list may go on and on, but we want the most popular, best, and easiest to use!
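As a flavor of how small such a tool can be, here is a sketch of the body-mass-index calculator mentioned above. This is only an illustration of the kind of submission we mean (actual entries would be Joomla plug-ins); the function names are our own, and the cutoffs follow the standard WHO adult BMI categories.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / (height_m ** 2)

def bmi_category(value):
    """Classify a BMI value using the WHO adult cutoffs:
    <18.5 underweight, 18.5-24.9 normal, 25-29.9 overweight, >=30 obese."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"
```

For example, a 70 kg person who is 1.75 m tall has a BMI of about 22.9, which falls in the "normal" range.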
24,000 Memphis patients rated their doctors
This article comes to us from Eileen McGinn--Thank you Eileen! I just want to note that rating doctors is something that OurMed does NOT intend to do as it has ethical implications, particularly in countries that have laws against it.
May 3rd, 2010 by Andrew Van Dam Filed under: Health data, Hot Health Headline
The Healthy Memphis Common Table is an effort to help patients and providers take charge of improving the city’s health. It includes the results of about 24,000 patient ratings of 430 local primary care doctors, all conducted by the nonprofit Consumers’ Checkbook.
Manoj Jain, M.D., M.P.H., (bio) is on the table's advisory committee and, as part of its mission to publicize the effort, he wrote a three-part series in The (Memphis) Commercial Appeal on the results and potential of the survey. The first installment is the one with the broadest appeal, as it discusses survey results and consequences.
In the second installment, Jain profiles a highly rated doctor and includes his own musings on what makes a physician great. Jain then wraps up the series with anonymous profiles of two poorly rated doctors and further musings on how their ratings might be improved. Interestingly, Jain's suggestions almost always focus on non-clinical factors such as office staff quality and communication skills.
Consort 2010 Statement: updated guidelines for reporting parallel group randomised trials
1 Family Health International, Research Triangle Park, NC 27709, USA
2 Centre for Statistics in Medicine, University of Oxford, Wolfson College, Oxford, UK
3 Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada
The electronic version of this article is the complete one and can be found online at: http://www.trialsjournal.com/content/11/1/32
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The CONSORT statement is used worldwide to improve the reporting of randomized controlled trials. Kenneth Schulz and colleagues describe the latest version, CONSORT 2010, which updates the reporting guideline based on new methodological evidence and accumulating experience.
To encourage dissemination of the CONSORT 2010 Statement, this article is freely accessible on bmj.com and will also be published in the Lancet, Obstetrics and Gynecology, PLoS Medicine, Annals of Internal Medicine, Open Medicine, Journal of Clinical Epidemiology, BMC Medicine, and Trials.
Randomized controlled trials, when appropriately designed, conducted, and reported, represent the gold standard in evaluating healthcare interventions. However, randomized trials can yield biased results if they lack methodological rigor. To assess a trial accurately, readers of a published report need complete, clear, and transparent information on its methodology and findings. Unfortunately, attempted assessments frequently fail because authors of many trial reports neglect to provide lucid and complete descriptions of that critical information [2-4].
That lack of adequate reporting fueled the development of the original CONSORT (Consolidated Standards of Reporting Trials) statement in 1996 and its revision five years later [6-8]. While those statements improved the reporting quality for some randomized controlled trials [9,10], many trial reports still remain inadequate. Furthermore, new methodological evidence and additional experience have accumulated since the last revision in 2001. Consequently, we organized a CONSORT Group meeting to update the 2001 statement [6-8]. We introduce here the result of that process, CONSORT 2010.
Intent of CONSORT 2010
The CONSORT 2010 Statement is this paper, including the 25-item checklist in the table (Table 1) and the flow diagram (Figure 1). It provides guidance for reporting all randomized controlled trials, but focuses on the most common design type: individually randomized, two-group, parallel trials. Other trial designs, such as cluster randomized trials and non-inferiority trials, require varying amounts of additional information. CONSORT extensions for these designs [11,12], and other CONSORT products, can be found through the CONSORT website, http://www.consort-statement.org. Along with the CONSORT statement, we have updated the explanation and elaboration article, which explains the inclusion of each checklist item, provides methodological background, and gives published examples of transparent reporting.
Table 1. CONSORT 2010 checklist of information to include when reporting a randomized trial*
Figure 1. Flow diagram of the progress through the phases of a parallel randomized trial of two groups (that is, enrollment, intervention allocation, follow-up, and data analysis).
Diligent adherence by authors to the checklist items facilitates clarity, completeness, and transparency of reporting. Explicit descriptions, not ambiguity or omission, best serve the interests of all readers. Note that the CONSORT 2010 Statement does not include recommendations for designing, conducting, and analyzing trials. It solely addresses the reporting of what was done and what was found.
Nevertheless, CONSORT does indirectly affect design and conduct. Transparent reporting reveals deficiencies in research if they exist. Thus, investigators who conduct inadequate trials, but who must transparently report, should not be able to pass through the publication process without revelation of their trial's inadequacies. That emerging reality should provide impetus to improved trial design and conduct in the future, a secondary indirect goal of our work. Moreover, CONSORT can help researchers in designing their trial.
Background to CONSORT
Efforts to improve the reporting of randomized controlled trials accelerated in the mid-1990s, spurred partly by methodological research. Researchers had shown for many years that authors reported such trials poorly, and empirical evidence began to accumulate that some poorly conducted or poorly reported aspects of trials were associated with bias. Two initiatives aimed at developing reporting guidelines culminated in one of us (DM) and Drummond Rennie organizing the first CONSORT statement in 1996. Further methodological research on similar topics reinforced earlier findings and fed into the revision of 2001 [6-8]. Subsequently, the expanding body of methodological research informed the refinement of CONSORT 2010. More than 700 studies comprise the CONSORT database (located on the CONSORT website), which provides the empirical evidence to underpin the CONSORT initiative.
Indeed, CONSORT Group members continually monitor the literature. Information gleaned from these efforts provides an evidence base on which to update the CONSORT statement. We add, drop, or modify items based on that evidence and the recommendations of the CONSORT Group, an international and eclectic group of clinical trialists, statisticians, epidemiologists, and biomedical editors. The CONSORT Executive (KFS, DGA, DM) strives for a balance of established and emerging researchers. The membership of the group is dynamic. As our work expands in response to emerging projects and needed expertise, we invite new members to contribute. As such, CONSORT continually assimilates new ideas and perspectives. That process informs the continually evolving CONSORT statement.
Over time, CONSORT has garnered much support. More than 400 journals, published around the world and in many languages, have explicitly supported the CONSORT statement. Many other healthcare journals support it without our knowledge. Moreover, thousands more have implicitly supported it with the endorsement of the CONSORT statement by the International Committee of Medical Journal Editors (http://www.icmje.org). Other prominent editorial groups, the Council of Science Editors and the World Association of Medical Editors, officially support CONSORT. That support seems warranted: when used by authors and journals, CONSORT seems to improve reporting.
Development of CONSORT 2010
Thirty-one members of the CONSORT 2010 Group met in Montebello, Canada, in January 2007 to update the 2001 CONSORT statement. In addition to the accumulating evidence relating to existing checklist items, several new issues had come to prominence since 2001. Some participants were given primary responsibility for aggregating and synthesizing the relevant evidence on a particular checklist item of interest. Based on that evidence, the group deliberated the value of each item. As in prior CONSORT versions, we kept only those items deemed absolutely fundamental to reporting a randomized controlled trial. Moreover, an item may be fundamental to a trial but not included, such as approval by an institutional ethical review board, because funding bodies strictly enforce ethical review and medical journals usually address reporting ethical review in their instructions for authors. Other items may seem desirable, such as reporting on whether on-site monitoring was done, but a lack of empirical evidence or any consensus on their value cautions against inclusion at this point. The CONSORT 2010 Statement thus addresses the minimum criteria, although that should not deter authors from including other information if they consider it important.
After the meeting, the CONSORT Executive convened teleconferences and meetings to revise the checklist. After seven major iterations, a revised checklist was distributed to the larger group for feedback. With that feedback, the executive met twice in person to consider all the comments and to produce a penultimate version. That served as the basis for writing the first draft of this paper, which was then distributed to the group for feedback. After consideration of their comments, the executive finalized the statement.
The CONSORT Executive then drafted an updated explanation and elaboration manuscript, with assistance from other members of the larger group. The substance of the 2007 CONSORT meeting provided the material for the update. The updated explanation and elaboration manuscript was distributed to the entire group for additions, deletions, and changes. That final iterative process converged to the CONSORT 2010 Explanation and Elaboration.
Health care tries to figure out what works best
By Guy Boulton
Posted: May 2, 2010
No one really knows - including your doctor.
In a health care system that spends $2.5 trillion a year, less than one-tenth of 1% is spent on research to determine what treatment options work best - and, in some cases, whether they work at all.
"We spend billions of dollars on developing new treatments and technologies, but we don't go back through and say, 'OK, how do they work?' " said Murray Ross, director of research at the Kaiser Permanente Institute for Health Policy.
The result is tens of billions of dollars - and maybe much more - spent each year on treatments that are of marginal or questionable value.
In recent years, doctors, economists, health plans, business groups and others have called for increased research on comparative effectiveness - research that compares different treatment options.
That's about to happen.
The American Recovery and Reinvestment Act passed by Congress last year allocated $1.1 billion for the research. And the new health care reform legislation will create a nonprofit institute to fund research on the effectiveness of medical treatments.
The new Patient-Centered Outcomes Research Institute is to receive more than $200 million a year for research starting in 2013 to learn more about which treatments work best for which patients.
"For a physician, the first question is what works," said Maureen Smith, a physician and professor at the University of Wisconsin School of Medicine and Public Health.
The evidence needed to answer that question for specific patients often doesn't exist. As a result, many of the treatment options that confront doctors and patients every day - from which drug to prescribe to complex regimens for chemotherapy - are not based on solid evidence.
Doctors instead must rely on weak or limited studies, expert opinion, anecdotal evidence, their own experience and judgment - and, to some degree, marketing by pharmaceutical and medical device companies.
The Institute of Medicine, the health arm of the National Academy of Sciences, has estimated that fewer than half of treatments given to patients are supported by good evidence.
Others consider that estimate high. Ross estimates that 25% or less of what doctors and other clinicians do isn't based on good evidence.
When should a patient with narrow arteries be treated with drugs instead of angioplasty, or angioplasty instead of bypass surgery? When is it best to treat atrial fibrillation with drugs, surgery or catheter ablation, a procedure used to scar or destroy tissue that may interfere with the electrical signal to the heart? What's the best way to manage patients with noninvasive breast cancer?
"We have strong evidence for some things, but we have substantially less evidence for most of what we do in medicine," said Paul Keckley, executive director of the Deloitte Center for Health Solutions.
Funding research to answer those and other questions could result in better care.
It also could make better use of health care dollars, although that's not guaranteed. Sometimes the most effective treatment costs more. And any savings could be a decade or more away, given the complexity of the studies and the challenges in changing the way doctors practice.
But numerous studies have estimated that as much as one-third of the money spent on medical care doesn't improve patients' health.
One result is the widespread variation in treatments for similar patients, which can be seen in Wisconsin. The Dartmouth Atlas of Health Care found that for every 1,000 people in Medicare in 2005:
• Patients were 107% more likely to have an angioplasty if they lived in Milwaukee instead of La Crosse. They were 39% more likely to have that procedure in Milwaukee than in Marshfield.
• Patients were 120% more likely to have heart bypass surgery if they lived in Wausau instead of Madison. They were 49% more likely to have the surgery if they lived in Milwaukee instead of Madison.
• Patients were 99% more likely to have back surgery if they lived in the Neenah and Oshkosh area instead of Marshfield. They were 88% more likely to have back surgery in Neenah and Oshkosh than Madison.
The variation in common procedures is even larger nationally and, as the Institute of Medicine and others have noted, all of the patients can't be receiving the best care.
Without question, uncertainty is an inherent part of medicine. There are more than 6,000 drugs and 4,000 operations and procedures - not to mention 13,000 medical diagnoses. And each advance raises new questions.
But part of the uncertainty stems from the focus by health care systems on new drugs and procedures. New technology accounts for roughly half of the increase in spending - and new technology in health care almost always costs more.
This focus has resulted in stunning advances in medicine in recent decades. At the same time, doctors want to stay at the forefront of their profession, and they often are quick to adopt new technologies before good evidence exists to show that they work better than existing technologies.
Health care systems can be just as quick to tout new treatments - which almost always pay more - in their marketing campaigns.
The result is that new drugs and technology can outpace the ability of doctors and researchers to determine whether they work better than existing technology.
Pharmaceutical companies put their research dollars into developing new drugs as opposed to research on how to make the best use of relatively new drugs already on the market, said Ann Nattinger, a physician and professor at the Medical College of Wisconsin.
She doesn't oppose developing new drugs and other treatments. "But we don't have enough evidence on how to use the drugs that we already have," said Nattinger, who is director of the Center for Patient Care and Outcomes Research.
That can be seen in the practice guidelines - recommendations on how to treat specific diseases - developed by medical specialties and others.
For example, a 2006 review of recommendations for preventing and treating breast cancer - Nattinger's research focus - found the overall quality of the available guidelines was "modest."
"There are so many aspects of the treatment that haven't been tested," she said, "so the guidelines end up having to be rather general."
How long should women be treated with hormonal therapy, for example, or what's the best way to manage noninvasive breast cancer?
"All those are still open questions," Nattinger said.
Cardiologists may have the best practice guidelines of any specialty. Yet a recent review of the guidelines developed by the American College of Cardiology and the American Heart Association found that relatively few recommendations were based on high quality evidence.
"It's a shocking deficit," said Matthew Wolff, a cardiologist and professor at the UW medical school. "We have the technology to do it, but we don't spend the money."
One of the quirks of the health care system is that few incentives exist to conduct comparative effectiveness research.
Drug companies in most cases have to show only that a drug works better than nothing, and even then only in carefully controlled conditions for a select group of patients that can differ significantly from those that doctors see day in and day out.
The bar is even lower for medical devices, such as orthopedic implants.
The clinical trials needed to win federal approval for a new drug also are unlikely to detect uncommon side effects because they typically involve relatively few people and often focus on short-term outcomes.
In addition, the people in research trials often are younger and healthier than the patients likely to be given the drug if it wins approval.
"That's probably the biggest distinction between where we have been and where we need to go," said Smith, the UW professor.
An effective treatment for an 80-year-old woman may differ significantly from one for a 20-year-old woman. "We don't know nearly enough about how to deliver the best care to the entire range of patients," Smith said.
The lack of information isn't limited to new drugs or to the most advanced therapies to treat heart disease or cancer.
The health care system spends $20 billion a year on wound therapy. Which of the competing devices used for negative pressure wound therapy - applying a vacuum on a wound - work best? A review of the available evidence by the federal Agency for Healthcare Research and Quality concluded that this couldn't be determined.
For that matter, whether the devices work any better than standard treatments also isn't known.
Even the most basic care often isn't based on solid evidence. How often a pregnant woman should see her doctor: Four times? Six times? How often should she have an ultrasound? Is an ultrasound even needed for a normal pregnancy?
Doctors don't know, said Sheldon Wasserman, a Milwaukee obstetrician and gynecologist.
That basic question - what's the optimal use of ultrasound during pregnancy - is among 100 priorities for comparative effectiveness research developed by the Institute of Medicine last summer.
The institute was given the task of recommending research priorities for the $1.1 billion allocated under the recovery act. It set up a committee that received more than 2,600 suggestions.
Recommendations range from comparing the effectiveness of complex cancer drugs to the best way to treat attention deficit hyperactivity disorder to treatments for back pain and heart disease.
Funding the research is just the start. The research is complex. Studies often conclude that existing evidence doesn't provide any answers. And even when it does, the answers are rarely clear-cut.
"It is often not that one is better and one is worse," said Alan B. Rosenberg, a physician and vice president who oversees medical policy and technology assessment for WellPoint Inc., the parent of Anthem Blue Cross and Blue Shield of Wisconsin. "It often is the relative benefits and relative harms of the alternatives."
A treatment may work better in some patients and worse in others. Doctors will still have to rely on their judgment and experience.
Another challenge is to persuade doctors and patients to make use of the research. The Institute of Medicine report last year noted that a wide gap exists between the results of the research and the findings' making their way into clinical practice and health policy.
The research also often faces opposition. The proposal to fund more comparative effectiveness research sparked controversy in the debate on health care reform, and that could continue.
Despite the challenges and the controversy, the research has broad support among doctors.
"They are the ones who support this," said UW's Smith, "because they and their patients are the ones who have to make the decisions."
Reporter Guy Boulton conducted research for this series during a fellowship funded by the Kaiser Family Foundation, a nonprofit, nonpartisan health policy research organization with offices in Menlo Park, Calif., and Washington, D.C.