Medicine, Healthcare, and History: Past as Prologue
Once upon a time, we called it the art of medicine. Then we called it the science of medicine. Then we called it health care. Today we call it a mess. Arguably no public policy issue of our times stirs more impassioned, often embittered, sometimes irrational debate than this one. How did it happen that the ancient art of healing, as it evolved into the citadel of biomedical science, became so embattled?

“Citadel” was the word that A. J. Cronin chose for the title of his novel about abuse, incompetence, and idealism in medicine as he observed it in Britain in the early twentieth century. He could not have guessed how well he chose. He wrote about doctors and how they treated the rich and the poor, access and fairness. He wrote about botched procedures and neglected prevention, quality and impact. He wrote about the motives that drove men and women to take up lives in medicine, idealism and money. A century later, these remain lively issues, especially in the United States. One year after passage of the Affordable Care Act (2010), its fate remains uncertain, as it makes its way through the courts. Since health care will remain a critical issue, we do well to reflect on how we came to this moment and how history will shape choices yet to come.

There is wide agreement that our society can and should be healthier and that we should use less of our collective wealth in purchasing the health care required to achieve that goal. While we have made great progress in understanding the complexities of human biology and banishing much disease, we have been far less successful in translating our knowledge into more effective evidence-based treatments and policies to promote healthier living.

Part of the problem lies in the fact that health care remains a highly fragmented “cottage” industry, poorly organized, with limited control over quality and cost, and characterized by slow dissemination of new knowledge, technology, and practice. Health care operates partly in the market and partly as a ward of the state. Its constituents (patients, doctors and other providers, research and teaching institutions, the insurance and pharmaceutical industries, the government itself) are beset by a bedlam of confused, even faulty incentives.

In the face of these challenges, there is no shortage of blueprints for change. Each advances its own principles to arrive at the levels of performance and collaboration needed to create a sustained and measurable impact on the health of our society, all at an acceptable cost. Whatever we choose to do next, however brilliant or disappointing this latest attempt at “reform” turns out to be, history suggests that it will not be the final but only the next chapter in a long-running serial. There have been four installments thus far: Science in Medicine, The Fruits of Discovery, Health Care and Rights, and Costs.

Science in Medicine

How bodies heal when diseased or injured is medicine’s mystery. From antiquity, physicians observed that they did heal and that wellness was nature’s state. How to help the body along, or at least how not to obstruct the way? This much was known and respected long before the coming of modern science to medicine. It was not trifling knowledge then, and it is not trifling now. Today’s best clinicians will still say that healing occurs mysteriously and that much of what they do is empirical and designed merely to help manage that process. The methods and insights of inductive science in chemistry, biology, and physics crept slowly into medicine. By the late nineteenth century, they had reached a scale sufficient to drive powerful movements for reform. These aimed at improving therapies, raising standards of practice, and consolidating the authority of the medical profession in a smaller number of better-qualified medical doctors.

The reform of medical education was the primary means to this end, culminating in the publication in 1910 of Bulletin Number Four of the Carnegie Foundation for the Advancement of Teaching, known to history as the Flexner Report after its author, Abraham Flexner.

The Flexner Report was inspired by the example of Johns Hopkins and the German universities, and called for the radical winnowing of American medical schools. It detailed strict standards for admission and stipulated a universal four-year curriculum structured around two pre-clinical or basic science years followed by two years of clinical training. Teachers ideally (and controversially) were to be full-time so as to permit true devotion to research and teaching without the diversions of practice. Medical schools were to be allied with universities to expunge the proprietary stain of their history and ensure high intellectual standards.

The Flexner Report proved an earthquake less because of its originality than because of its sponsorship. Over the next 15 years, Flexner (who was not a physician) won the strategic commitment of both the Carnegie and Rockefeller philanthropies to the cause of medical education reform. Relatively speaking, reform happened overnight. Substandard schools disappeared; superior ones blossomed. A new breed of physician-scientist emerged as the elite of a profession trained in curative medicine, with ever-improving understanding of the biology of disease and with better but still limited means to cure it.

The Fruits of Discovery

The purpose of medicine is to serve, of science to know, and the coming together of helpful service and useful knowledge took some time. The first great successes of research came in the field of infectious diseases, beginning with the development of penicillin in the early 1940s and continuing through the 1940s and 1950s with other chemotherapeutic agents that attacked the specific microbial agents known to cause disease. Wartime drove research and development faster than at any time in medicine’s long past. Medicine got measurably better at what it had long claimed to do. With the “antibiotic revolution” of the early postwar years, once largely untreatable bacterial infections like pneumonia, tuberculosis, syphilis, and typhoid came under medicine’s effective control.

Scientific and clinical advance occurred against a policy background that was destined to cast a long shadow, as private insurance regimes developed to spread risk and ensure access to medicine’s bounty across the general population. As science proved its therapeutic value, and as the presumption of uniformly high quality settled in, the central challenge became one of how to socialize the costs of modern medicine. Who paid and who got access?

The rise of broad-based group health insurance in the United States dates to the founding in the late 1920s and early 1930s of Blue Cross: insurance plans that paid for treatment in hospitals on a cost-plus (cost of service plus cost of capital) basis. When price controls during World War II prevented wage competition for scarce labor, many firms embraced fringe benefits including hospital and health insurance, the cost of which in 1943 became tax deductible for businesses, the benefits tax-exempt for workers. In the postwar years, the system of employer-based insurance grew broadly and benignly, dominated by nonprofit organizations, which operated on the principle of community rating (equal premiums regardless of risk) and pooling (whereby high and low risk individuals bought coverage together). If the pool was large enough, the result was a kind of social insurance based on membership in the group, and not on need for service. By the late 1970s, 85 percent of American civilians were covered by such an employer-based private system.

Health Care and Rights

Medicine cannot escape its social, cultural, and economic contexts, and it was not well-prepared for the tumult of the 1960s and 1970s. At first, when times were good and national self-confidence high, the names our leaders gave to those years, the New Frontier and the Great Society, served to mobilize popular idealism and reconfirm older national meanings. The peculiarly American ambition to do all things well for all people all of the time boldly asserted itself. Medicine, which in previous decades had so well proven its promise, was an easy target. It ceased in fact to be “medicine” at all and evolved into “health care.” The World Health Organization’s famous 1946 definition of health as “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity” had set the tone, and the words mattered. Medicine was precise and limited; health care was something else, altogether more comprehensive. In addition to the biological factors that were the domain of scientific medicine, health care encompassed social, economic, environmental, and religious factors. More and more, traditional providers of medicine were looked to for health as well. More was asked of them than before.

Implicit in these changing expectations of health was a “right” to it. Accordingly, the conversation changed. More and more, people assumed that they were entitled to medical or health care along with the products and technologies that enhanced it. Satisfying that right in a society whose productive capacity was finite could in turn require giving to some while depriving others. Evolving systems of health care provision reflected such anxieties.

In postwar America, a good job had come to mean one that offered not only good pay and some security but also health insurance. For those outside the employment-based insurance system, a net of public health coverage addressed the same expectation. The Hill-Burton Act of 1946 conditioned public subsidy for hospital construction on the provision of free care for the uninsured poor. In 1965, landmark federal legislation created Medicare and Medicaid, which entitled the elderly and the poor to medical coverage. Both public and private systems paid providers on the principle of “usual and customary charges,” a notion rooted in a pre-insurance era when the fees physicians could charge for service were subject to price competition. Third-party payment through insurance, however, reduced consumers’ sensitivity to price and even upended classical economic behavior, as high price often came to be correlated with high quality, low price with low. As long as the definition of “usual and customary” remained whatever the market would bear, costs rose even as new technology spread, which in other industries would have driven costs down. Meanwhile, for-profit insurers offering lower risk-rated premiums to attract healthier people began to punch big holes in the Blues’ pool.

Costs

The entitlement reforms of the 1960s failed to address cost. As an aging population suffering more from chronic maladies than from infectious diseases spiked demand for health care, the cost of providing it approached levels that threatened to overwhelm other national priorities. By the 1980s, the distress of America’s once virtuous, parallel private/public system had grown acute, resulting in calls for reform across the political spectrum. Efforts to control costs were less than successful. In 1983, Medicare, which set the pattern for private insurers as well, attacked the cost-plus principle for hospitals with its Inpatient Prospective Payment System based on Diagnosis Related Groups (DRGs). DRGs reimbursed fixed fees based on diagnosis, but prospective payment stopped short of relating payment to results; hospital stays indeed went down, yet quality could decline too, given perverse incentives to under-treat. Physicians’ fees came under this imperfect prospective payment regime in the 1990s, as did outpatient hospital care in 2000.

The subsequent movement to “managed care” bet on competition among insurers to drive costs down and on oversight of care by primary care physicians. More new phrases soon crowded the industry glossary, as Health Maintenance Organizations and Preferred Provider Organizations stepped up to negotiate prices with providers and, as it turned out, to manage and micromanage physicians. Capitation schemes (which paid fixed fees per enrollee per time period and allowed providers to keep the change if care turned out to cost less than the fee) further buttressed incentives to reduce costs and, like DRGs, cut down hospital stays. To maintain prices and margins, providers predictably pushed back against large health plans and zealous managed care administrators, consolidating and investing sometimes redundantly in facilities and technology. Reform reached fever pitch in the failed 1993 plan of the first Clinton administration, which proposed to insure the uninsured, oversee pricing nationally, and organize Americans into health insurance purchasing cooperatives. The debate continued to center on costs and access, leaving assumptions about quality still largely undisturbed.

The cumulative load of this story now weighs heavily as the United States, amidst economic crisis, embarks on yet another season of reform. Contenders for change range widely, from mandate programs and market-based competition to single-payer systems and guaranteed coverage requiring dedicated funding. We know that infrastructure must be mended, that information must be better deployed, that incentives must work correctly. Comprehensive health care reform under a new, progressive administration will focus on payment mechanisms and equal and universal access, with bold subheads on controlling costs and improving quality. When reform does happen, it will be recorded as a triumph of daunting political complexity, and it will represent a new settlement among health care’s many stakeholders that is likely to define the field of play, if not all the plays, for years to come. But however it is reformed, the American health care system will be built with many blocks inherited from the past, not all of them well-fitting.

Even then, stubborn questions will remain. How to define health care itself? For the definition will shift, just as it has shifted from earlier, narrower understandings of what defined medicine. What is included and what is not? How to manage expectations?

There are those who contend that the American health care system, with its spotty quality, disproportionate costs, and lack of universal access, falls short even of the very first law of medicine: do no harm. This is certainly true, but Hippocrates’ admonition serves a larger purpose than indictment. It is modest. Coupled with an appreciation of how science, today driven by genomics, proteomics, and research at the molecular level, does and does not work, “do no harm” bids us keep some sense of proportion in our view of the health care enterprise. Science has fueled the greatest achievements in medicine over the last century. It will continue to do so in the century to come, in spite of, not because of, the rising insistence on the right to health, whose immediate concerns are care, availability, and just distribution. Caring, like health, is broadly comprehensive. Curing, which is what science in medicine has always aimed to do, is not. Within its limited realm, science in medicine still holds untold promise for humanity. It also helps to remember that medicine and those who provide it can better cure our bodies than they can care for the rest of us. While we strive nobly for a system that treats “the whole person” the right way each and every time, even that system probably should not be relied upon for the care better found in families, schools, and churches.

Health care in America has a long history: from beginnings when medicine was largely (though not merely) an art, to medicine’s gradual and imperfect union with science, to the supply of new knowledge and surprising discovery that science delivered, to the demand for discovery’s fruits expressed through markets and money, to the rise of countervailing concerns about rights and justice. Some of the policy arguments that inform our current debates are new. The underlying concerns are not. Long before there was much medicine worthy of the name, Thomas Jefferson offered an epigram as fit for our time as for his: “Without health, there is no happiness.”

© Copyright 2011 The Winthrop Group, Inc. All Rights Reserved.