In Defense of a Liberal Education

by Fareed Zakaria


A Brief History of Liberal Education

An excerpt from In Defense of a Liberal Education

For most of human history, education was job training. Hunters, farmers, and warriors taught the young to hunt, farm, and fight. Children of the ruling class received instruction in the arts of war and governance, but this too was intended first and foremost as preparation for the roles they would assume later in society, not for any broader purpose. All that began to change twenty-five hundred years ago in ancient Greece.

Prior to the change, education in Greece had centered on the development of aretē, roughly meaning excellence or virtue. The scholar Bruce Kimball notes that the course of study largely involved the memorization and recitation of Homeric epic poetry. Through immersion in the world of gods and goddesses, kings and warriors, children would master the Greek language and imbibe the lessons, codes, and values considered important by the ruling elite. Physical training was a crucial element of Greek education. In the city-state of Sparta, the most extreme example of this focus, young boys considered weak at birth were abandoned to die. The rest were sent to grueling boot camps, where they were toughened into Spartan soldiers from an early age.

Around the fifth century BC, some Greek city-states, most notably Athens, began to experiment with a new form of government. “Our constitution is called a democracy,” the Athenian statesman Pericles noted in his funeral oration, “because power is in the hands not of a minority but of the whole people.” This innovation in government required a simultaneous innovation in education. Basic skills for sustenance were no longer sufficient—citizens also had to be properly trained to run their own society. The link between a broad education and liberty became important to the Greeks. Describing this approach to instruction centuries later, the Romans coined a term for it: a “liberal” education, using the word liberal in its original Latin sense, “of or pertaining to free men.”

From the beginning, people disagreed over the purpose of a liberal education. (Perhaps intellectual disagreement is inherent in the idea itself.) The first great divide took place in ancient Greece, between Plato, the philosopher, and Isocrates, the orator. Plato and his followers, including Aristotle, considered education a search for truth. Inspired by Socrates, they used the dialectic mode of reasoning and discourse to pursue knowledge in its purest form. Isocrates, on the other hand, hearkened back to the tradition of aretē. He and his followers believed a person could best arrive at virtue and make a good living by studying the arts of rhetoric, language, and morality. This debate—between those who understand liberal education in instrumental terms and those who see it as an end in and of itself—has continued to the present day.

In general, the more practical rationale for liberal education gained the upper hand in the ancient world. Yet the two traditions have never been mutually exclusive. The Roman statesman and philosopher Cicero, one of the earliest writers on record to use the term artes liberales, wanted to combine the search for truth with rhetoric, which was seen as a more useful skill. “For it is from knowledge that oratory must derive its beauty and fullness,” he wrote around 55 BC. While debate continues, the reality is that liberal education has always been a mixture of both approaches—practical and philosophical.

Science was central to liberal education from the start. Except that in those days, the reason to study it was the precise opposite of what is argued today. In the ancient world, and for many centuries thereafter, science was seen as a path to abstract knowledge. It had no practical purpose. Humanistic subjects, like language and history, on the other hand, equipped the young to function well in the world as politicians, courtiers, lawyers, and merchants. And yet the Greeks and Romans studied geometry and astronomy alongside rhetoric and grammar. In the first century BC, this dualistic approach to education was “finally and definitively formalized” into a system described as “the seven liberal arts.” The curriculum was split between science and humanities, the theoretical and the practical. Centuries later, it was often divided into two subgroups: the trivium—grammar, logic, and rhetoric—was taught first; the quadrivium—arithmetic, geometry, music, and astronomy—came next.

Soldiers and statesmen, naturally, placed greater emphasis on subjects they thought of as practical—what today we would call the humanities. But even so, the idea of a broader education always persisted. In the eighth century, Charlemagne, king of the Franks (a Germanic tribe that inhabited large chunks of present-day Germany, France, Belgium, the Netherlands, and Luxembourg), consolidated his empire. Bruce Kimball notes that Charlemagne then established a palace school and named as its master Alcuin, an English scholar (even then Englishmen were the ideal headmasters). Alcuin and his followers concentrated on grammar and textual analysis and demoted mathematics, but they continued to teach some version of the liberal arts. And the deeper quest for understanding never disappeared. Even during the Dark Ages, medieval monasteries kept alive a tradition of learning and inquiry.

Why did European learning move beyond monasteries? One influence might have been the Islamic world, the most advanced civilization of the Middle Ages. Within it were dozens of madrasas—schools where history, politics, science, music, and many other subjects were studied and where research was pursued (though not all Islamic educational institutions were called madrasas). Islamic learning produced innovations, especially in the study of mathematics. Algebra comes from the Arabic phrase al-jabr, meaning “the reunion of broken parts.” The name of the Persian scholar al-Khwarizmi was rendered in Latin as algoritmi, which became “algorithm.” By the eleventh century, Cairo’s al-Azhar and Baghdad’s Nizamiyah were famous across the world for their academic accomplishments, as were many other centers of learning in the Arab world. This Islamic influence found a home in the Muslim regions of continental Europe as well, in the madrasas of Moorish Spain, in Granada, Córdoba, Seville, and elsewhere.

By the late Middle Ages, Europe’s stagnation was ending. The expansion in global trade and travel meant that its leaders needed greater knowledge and expertise in areas like geography, law, and science. As city-states competed with one another economically, they sought out individuals with better skills and education. Because of its long coastline, Italy became a place where commerce, trade, and capitalism were beginning to stir. Groups of scholars started coming together in various Italian cities to study theology, canon and civil law, and other subjects. These scholars came from great distances and were often grouped by their geographical origins, each group being called a “nation,” an early use of the word. Some of these “nations” hired local scholars, administered exams, and joined together into corporate bodies, each known as a universitas. These organizations sought and were granted special protections from local laws, giving them essential freedoms and autonomy.

In 1088, Europe’s first university was founded in Bologna. Over the next century, similar institutions sprouted up in Paris, Oxford, Cambridge, and Padua. By 1300, western Europe was home to between fifteen and twenty universities. These schools were initially not bastions of free inquiry, but they did become places where scholars discussed some taboo subjects; recovered, translated, and studied Aristotle’s writings; and subjected laws to close scrutiny. Yet in those days most research took place outside of universities because of their religious influence. It was heretical, for instance, for scientists to speculate on the earth’s place among the stars. In most cities, students were accorded some of the same freedoms and exemptions as the clergy, but they desired even more. The University of Padua’s motto was Universa universis patavina libertas—“Paduan freedom is universal for everyone.”

In the fourteenth century, the balance between practical and philosophical knowledge shifted again. Some Italian scholars and writers believed that universities had become too specialized. They looked to return European education to its Greek and Roman roots. These humanists rejected the highly detailed, scholastic approach to learning and theology that was pervasive in medieval universities. Instead, as the late scholar Paul Oskar Kristeller notes, they encouraged a “revival of arts and of letters, of eloquence and of learning” that “led to a new and intensified study of ancient literature and history.” Over the next two centuries, what has been called Renaissance humanism spread to the rest of Europe.

These traditions of scholarship, however, did not create the experience we now think of as a liberal education. That modern tradition had less to do with universities and more to do with colleges. And “college as we know it,” writes Columbia University professor Andrew Delbanco, “is a fundamentally English idea.” The earliest English colleges were founded in the thirteenth century for scholars of divinity whose duties, Delbanco notes, “included celebrating mass for the soul of the benefactor who had endowed the college and thereby spared them from menial work.” Religious influences were strong—the public lecture, for instance, was a secular outgrowth of the Sunday sermon—though the curriculum was varied and included non-theological subjects.

Colleges grew more secular by the nineteenth century as seminaries assumed responsibility for training ministers. They also began to develop a character distinct from European universities, which were becoming increasingly focused on research, especially in Germany. Unlike universities, which often lacked a clear physical embodiment, colleges were defined by their architecture. An imposing stone building was usually constructed with an open courtyard in the center and student dormitories arrayed around it. The “common” room was where students could meet, the chapel where they could pray, and the library where they could read. This model of a residential college originated in England and spread to the Anglo-American world, where it remains the distinctive form for undergraduates.

In the early twentieth century, among the major universities, Harvard and Yale adopted the full-fledged residential college model for student housing, partly in an effort to retain the intimate setting of liberal arts colleges while pursuing their ambitions to become great research universities. The residential college has come to be seen as possessing certain qualities that enhance the experience of liberal education beyond the curriculum. The advantages of such an arrangement are often described today in terms like “living-learning experiences,” “peer-to-peer education,” and “lateral learning.” Samuel Eliot Morison, the legendary historian of Harvard, best described the distinctive benefits of the residential setting: “Book learning alone might be got by lectures and reading; but it was only by studying and disputing, eating and drinking, playing and praying as members of the same collegiate community, in close and constant association with each other and with their tutors, that the priceless gift of character could be imparted.” An emphasis on building character, stemming from the religious origins of colleges, remains an aim of liberal arts colleges almost everywhere, at least in theory.

America’s earliest colleges were modeled on their English predecessors. Many of the founders of Harvard College, for example, were graduates of Emmanuel College at Cambridge University. Perhaps because, in America, they did not start out strictly as seminaries, colonial colleges often incorporated into their curricula a variety of disciplines, including the sciences, humanities, and law. Students were expected to take all these subjects and relate them to one another because it was assumed there was a single, divine intelligence behind all of them. In Cardinal John Newman’s nineteenth-century formulation of this approach to education, “The subject-matter of knowledge is intimately united in itself, as being the acts and the work of the Creator.” It was a theological version of what physicists today call the unified field theory.

America’s first colleges stuck to curricula that could be described as God and Greeks—theology and classics. But a great debate over this approach emerged at the beginning of the nineteenth century. People wondered why students should be required to study ancient Greek and Latin. They suggested that colleges should begin to incorporate modern languages and subjects into their instruction. After all, the country was growing rapidly and developing economically and technologically, making the college course of study seem antiquated in comparison. After much deliberation, the Yale faculty issued a report in 1828 defending the classical curriculum. It powerfully influenced American colleges for half a century—delaying, some might say, their inevitable evolution. It also, however, outlined a central tension in liberal education that persists to this day.

The Yale report explained that the essence of liberal education was “not to teach that which is peculiar to any one of the professions; but to lay the foundation which is common to them all.” It described its two goals in terms that still resonate: training the mind to think and filling the mind with specific content.

The two great points to be gained in intellectual culture, are the discipline and the furniture of the mind; expanding its powers, and storing it with knowledge. The former of these is, perhaps, the more important of the two. A commanding object, therefore, in a collegiate course, should be, to call into daily and vigorous exercise the faculties of the student. Those branches of study should be prescribed, and those modes of instruction adopted, which are best calculated to teach the art of fixing the attention, directing the train of thought, analyzing a subject proposed for investigation; following, with accurate discrimination, the course of argument; balancing nicely the evidence presented to the judgment; awakening, elevating, and controlling the imagination; arranging, with skill, the treasures which memory gathers; rousing and guiding the powers of genius.

Though its particular aim was to defend the classical curriculum, the Yale report’s broader argument was that learning to think is more important than the specific topics and books that are taught. A Harvard man revived the argument fifty years later, as he battled to undo the report’s recommendations.

Charles Eliot was an unlikely candidate for the presidency of Harvard. He was a scientist at a time when the heads of schools like Harvard, Yale, and Princeton were still generally ministers. After graduating from Harvard in 1853, Eliot was appointed a tutor and later an assistant professor of mathematics and chemistry at the school. But he was not made a full professor as he had hoped, and at about the same time, his bad luck compounded as his father’s fortune collapsed. Eliot decided to travel to Europe, where he saw firsthand the rapidly changing state of higher education on the Continent and the rise of the great research universities in Germany. He then returned to the United States to take up a professorship at the Massachusetts Institute of Technology in 1865. At the time, like many other colleges, Harvard was in the midst of a tumultuous period in its history. It faced calls for more vocational education to prepare Americans for the workforce in the rapidly industrializing economy just emerging from the Civil War.

To address these concerns, Eliot penned a two-part essay in the Atlantic Monthly titled “The New Education.” It began with words that could be uttered by any parent today, adjusted for gender: “What can I do with my boy? I can afford, and am glad, to give him the best training to be had. I should be proud to have him turn out a preacher or a learned man; but I don’t think he has the making of that in him. I want to give him a practical education; one that will prepare him, better than I was prepared, to follow my business or any other active calling.” Eliot’s answer was that Americans needed to combine the best developments of the emerging European research university with the best traditions of the classic American college.

Eliot proposed that America’s great universities embrace the research function, but that they do so at the graduate level, leaving undergraduates free to explore their interests more broadly. He showed a strong grasp of the emerging trends in education, like the difference between scientific and humanistic fields and the rise of technical training. He wanted colleges to distinguish carefully between a skills-based and a liberal education, the latter of which he considered more important. Months after his essays were published, at the age of thirty-five, Charles Eliot was offered the presidency of Harvard, a post that he held for four decades—exactly—and from which he reshaped the university and the country.

Eliot made so many transformative changes at Harvard that they are impossible to recount—he essentially established the modern American university. Yet perhaps his most influential reform, at least for undergraduates, was his advocacy for a curriculum based on the “spontaneous diversity of choice.” In other words, under his new system, students had very few required courses and many electives. Previously in American colleges, much of the curriculum had been set in stone. Students had enrolled in courses and studied topics in a predetermined sequence from one year to the next. The faculty had believed, in the terms of the Yale report, that it should choose “the furniture” that was to inhabit the students’ minds.

Eliot disagreed profoundly. He was probably influenced by his Protestantism, which saw the individual as the best mediator of his own fate. But perhaps more than anything, he was imbued with the spirit of Ralph Waldo Emerson and his distinctively American ideas, which were deeply influential at the time. For Emerson, the task of every human being was to find his or her voice and give expression to it. “Trust thyself,” Emerson wrote in “Self-Reliance.” “Every heart vibrates to that iron string.” Emerson’s notion of the importance of authenticity, as opposed to imitation, and his praise of unique thinking could have been turned into copy for Apple ad campaigns (“Think Different”).

In an 1885 speech, Eliot outlined the case for his elective system using language that remains radical today—and with which many parents might still disagree. “At eighteen the American boy has passed the age when a compulsory external discipline is useful,” Eliot declared. “A well-instructed youth of eighteen can select for himself—not for any other boy, or for the fictitious universal boy but for himself alone—a better course of study than any college faculty, or any wise man who does not know him and his ancestors and his previous life, can possibly select for him.” Eliot believed that American liberal education should allow you to choose your own courses, excite your own imagination, and thus realize your distinctive self. Critics responded that some subjects were not worthy of being taught. The solution, he believed, was to let faculty members offer what they wanted and students take what they liked.

Eliot’s views were not shared by many influential educators of the time, most notably the president of Princeton, James McCosh. (In fact, the Eliot speech quoted above came from a public debate with McCosh on the topic in New York City.) A Scottish minister and philosopher, McCosh thought that universities should provide a specific framework of learning and a hierarchy of subjects for their students—or else they were failing in their role as guardians. In particular, religion could not simply be treated like any other subject, to be taken or dropped at an undergraduate’s whim. Eliot’s ideas, however, were more in sync with American culture and its emphasis on individualism and freedom of choice. Over time, the elective system in some form or another has come to dominate American higher education, with a few notable exceptions.

In the early years of the twentieth century, a swell in the tide of immigrants entering the United States prompted concern among some citizens, educators, and public officials that the country was losing its character. Against that backdrop, an English professor at Columbia University, John Erskine, began offering a two-year course called General Honors in 1920. Erskine “wanted to provide young people from different backgrounds with a common culture, something he thought was already thin in the United States,” writes the Harvard scholar Louis Menand. Erskine believed that the best way to become truly educated was to immerse oneself in great works of the past.

In 1930, Mortimer Adler, an educator who had taught a section in Erskine’s program, left Columbia for the University of Chicago. His friend Robert Maynard Hutchins had recently been appointed president of the school, and the two began teaching a seminar together for underclassmen on classic works in the Western canon. The course evolved into a “great books” program—a core curriculum in which students read prescribed works of history, literature, and philosophy and then gather for small-group discussions guided by faculty members. Several years later, two professors named Stringfellow Barr and Scott Buchanan moved from Chicago to St. John’s College in Annapolis to start their own great-books program. Barr and Buchanan radically altered the undergraduate curriculum at the small school with the tradition of the seven liberal arts in mind. Even science was taught from a great-books perspective, through classic accounts that were, in many ways, outdated or had been superseded. The program left no room for electives.

In the decades after the 1930s and 1940s, interest in the common core waned. Today, about 150 schools in the United States offer some kind of core program based on great books, though very few require that all undergraduates take it, as Columbia, Chicago, and St. John’s do.

Whatever its merits, the idea of a curriculum based on some set of great books has always been debated. In a 1952 essay, Hutchins, who could be considered the father of the great-books movement, made what has become a familiar case for it. “Until lately the West has regarded it as self-evident that the road to education lay through great books,” Hutchins wrote. “No man was educated unless he was acquainted with the masterpieces of his tradition.” Times have changed, but political and social changes cannot “invalidate the tradition or make it irrelevant for modern men,” he insisted. Except that, as we have seen, this account is not entirely true. Everyone who has ever set up a great-books program based it on the belief that, in the good old days, people used to study a set of agreed-upon classics. In fact, from the start of liberal education, there were disputes over what men (and women) should read and how much or how little freedom they should have to follow their curiosity. Martha Nussbaum, a philosopher at the University of Chicago, argues that the Socratic tradition of inquiry by its nature rejected an approach dependent “on canonical texts that had moral authority.” She writes, “It is an irony of the contemporary ‘culture wars’ that the Greeks are frequently brought onstage as heroes in the ‘great books’ curricula proposed by many conservatives. For there is nothing on which the Greek philosophers were more eloquent, and more unanimous, than the limitations of such curricula.”

I’ve found that my own views on this subject have changed over time, from my days as an undergraduate, then as a teacher in graduate school, and now as a parent. I still sympathize with arguments in support of a core, but I have come to place a greater value than I once did on the openness inherent in liberal education—the ability for the mind to range widely and pursue interests freely. In my own experience, the courses I took simply because I felt I needed to know some subject matter or acquire cultural literacy have faded in my memory. Those that I took out of genuine curiosity or because I was inspired by a great teacher have left a more lasting and powerful impression. After all, one can always read a book to get the basic information about a particular topic, or simply use Google. The crucial challenge is to learn how to read critically, analyze data, and formulate ideas—and most of all to enjoy the intellectual adventure enough to be able to do them easily and often.

Loving to learn is a greater challenge today than it used to be. I’ve watched my children grow up surrounded by an amazing cornucopia of entertainment available instantly on their computers, tablets, and phones. Perhaps soon these pleasures will be hardwired into their brains. The richness, variety, and allure of today’s games, television shows, and videos are dazzling. Many are amazingly creative, and some are intellectually challenging—there are smart video games out there. But the all-consuming power of modern entertainment can turn something that demands active and sustained engagement, like reading and writing, into a chore.

And yet reading—especially, I would argue, reading books—remains one of the most important paths to real knowledge. There are few better ways to understand an issue in depth than to read a good book about it. This has been true for centuries, and it has not changed. And kids need to enjoy reading—not just see it as the thing their parents make them do before they can play video games or watch a television show. If having teenagers read Philip Roth’s Goodbye, Columbus rather than Jane Austen’s novels makes this more likely, so be it. I don’t decry or condemn new forms of entertainment and technology. They open up new vistas of knowledge and ways of thinking. Our children will be smarter and quicker than we are in many ways. But a good education system must confront the realities of the world we live in and educate in a way that addresses them, rather than pretend that these challenges don’t exist.

Some of the most controversial features of modern liberal education have come into being not out of intellectual conviction but from bureaucratic convenience. As America’s best colleges became the world’s best universities, the imperatives of the latter began to dominate the former. Research has trumped teaching in most large universities—no one gets tenure for teaching. But just as important, the curriculum has also been warped to serve research. Professors find it dreary and laborious to teach basic courses that might be interesting and useful for students. It is much easier to offer seminars on their current research interests, no matter how small, obscure, or irrelevant the topic is to undergraduates. As knowledge becomes more specialized, the courses offered to students become more arcane. It is this impulse that produces the seemingly absurd courses one finds in some colleges today, as much as the subversive desires of a left-wing professoriat.

Another development, again unrelated to any intellectual theory about liberal education, has been the abandonment of rigor, largely in the humanities. Grades have risen steadily in almost all American colleges in recent decades. Today, 43 percent of all grades awarded are in the A range—up from 15 percent in 1960. This is an outgrowth of a complex set of factors, one of which is indeed the rising quality of students. But others are bureaucratic and philosophical, such as the 1960s assault on hierarchy.

The greatest shift in liberal education over the past century has been the downgrading of subjects in science and technology. Historically, beginning with Greek and Roman developments in education, scientific exploration was pursued through the lens of “natural philosophy.” In the Middle Ages, the subject was seen as part of an effort to explain God’s creation and man’s role within it. But during the age of scientific revolutions, and coming to a climax in the nineteenth century with Charles Darwin’s theory of evolution, the study of science increasingly conflicted with religion. This led to the discipline losing its central position in liberal education, which was still then grounded in a pious outlook that sought to understand not only the mystery of life but also its purpose. As Anthony Kronman writes, a rise in scientific research meant “a material universe whose structure could now be described with astounding precision but which was itself devoid of meaning and purpose. As a result, the physical sciences ceased to be concerned with, or to have much to contribute to, the search for an answer to the question of the meaning of life.” Science was relegated to scientists—a huge loss to society as a whole.

By the middle of the twentieth century, following the quantum revolution in physics, laypeople found it even more difficult to understand science and integrate it into other fields of knowledge. In 1959, C. P. Snow, an English physicist and novelist, wrote a famous essay, “The Two Cultures,” in which he warned that the polarization of knowledge into two camps was producing “mutual incomprehension…hostility and dislike.” He explains:

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is about the scientific equivalent of: Have you read a work of Shakespeare’s? I now believe that if I had asked an even simpler question—such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read?—not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.

In 2003, Lawrence Summers, then president of Harvard, echoed Snow’s concerns and advocated a return to scientific literacy for all at the undergraduate level. Former Princeton president Shirley Tilghman, herself a scientist, argued in 2010 that discussions of public policy are impoverished because of the basic ignorance of science that pervades American society today. Nonscientists need to understand science, she contends, and scientists are best off with a strong background in other subjects as well. And yet little has changed on this front in recent years.

The most interesting and ambitious effort to reform liberal education for the twenty-first century is not taking place in America. In fact, it is taking place about as far away from the United States as one can possibly get—Singapore. In 2011, Yale University joined with the National University of Singapore to establish a new liberal arts school in Asia called Yale-NUS College, and in the fall of 2013, it welcomed its first class of 157 students from twenty-six countries. When I was a trustee at Yale, I enthusiastically supported this venture. The project—though not without risks—has the potential to create a beachhead for broad-based liberal education in a part of the world that, while rising to the center stage globally, remains relentlessly focused on skills-based instruction.

Scholars from both universities have used the venture as an opportunity to reexamine the concept of liberal education in an increasingly connected and globalized world. The curriculum of Yale-NUS reflects that thinking, in some parts drawing on the best of the old tradition, in some parts refining it, and in some parts creating a whole new set of ideas about teaching the young. In April 2013, a committee of this new enterprise set forth the ideas that would define the college. It is an extraordinary document and, if implemented well, could serve as a model for the liberal arts college of the future.

The Yale-NUS report is radical and innovative. First, the school calls itself a college of liberal arts and sciences, to restore science to its fundamental place in an undergraduate’s education. It abolishes departments, seeing them as silos that inhibit cross-fertilization, interdisciplinary work, and synergy. It embraces a core curriculum, which takes up most of the first two years of study but is very different from the Columbia-Chicago model. The focus of the Yale-NUS core is to expose students to a variety of modes of thinking. In one module, students learn how experimental scientists conduct research; in another, how statistics informs social science and public policy. There is a strong emphasis throughout on exposing students to scientific methods rather than scientific facts so that—whatever their ultimate major—they are aware of the way in which science works.

The Yale-NUS core does include courses on the great books, but it does not treat them as simply a canon to be checked off on a cultural literacy list. The books selected are viewed as interesting examples of a genre, chosen not because they are part of a “required” body of knowledge but because they reward careful analysis. The emphasis again is on the method of inquiry. Students learn how to read, unpack, and then write about a great work of literature or philosophy or art. The curriculum requires students to take on projects outside the classroom, in the belief that a “work” component teaches valuable lessons that learning from a book cannot. This part has a powerful practical appeal. I once asked Jeff Bewkes, the CEO of Time Warner, what skill was most useful in business that wasn’t taught in college or graduate schools. He immediately replied, “Teamwork. You have to know how to work with people and get others to want to work with you. It’s probably the crucial skill, and yet education is mostly about solo performances.”

The greatest innovation in the Yale-NUS curriculum comes directly from the nature of the association between the two universities and their home cultures. Students study not only Plato and Aristotle but also, in the same course, Confucius and the Buddha—and ask why their systems of ethics might be similar or different. They study the Odyssey and the Ramayana. They examine the “primitivisms” of Paul Gauguin and Pablo Picasso while also looking at the woodcarvings from the South Sea Islands and the ukiyo-e tradition of Japanese woodblock prints that influenced Western artists. And, of course, as they study modern history, politics, and economics, they will naturally find themselves taking a more comparative approach to the topics than any college in the United States or Asia would likely do by itself. Multiculturalism in education is usually a cliché that indicates little of substance, or involves Western critiques of the West (like those of the writer Frantz Fanon or the historian Howard Zinn). The Yale-NUS curriculum is built to provide a genuine multicultural education in a college designed for the emerging multicultural world. In studying other societies, students learn much more about their own. It is only by having some point of comparison that one can understand the distinctive qualities of Western or Chinese or Indian culture.

Yale-NUS is in its very early days. It may not be able to implement all its ideas. It does not solve all the problems of a liberal education. The tensions between freedom of inquiry and the still-closed political system in Singapore might undermine the project. But the educators involved have conceived of the college’s mission and mandate brilliantly, and have pointed the way to a revived, rigorous liberal education that recovers the importance of science, places teaching at its heart, combines a core with open exploration, and reflects the direction in which the world is headed, where knowledge of other countries and cultures is an essential component of any education. Yale-NUS should become a model studied around the world.

But what if a liberal education done well still doesn’t get you a job? In 1852, Cardinal Newman wrote that a student of liberal education “apprehends the great outlines of knowledge” for its own sake rather than to acquire skills to practice a trade or do a job. Even then, he noted, there were skeptics who raised questions of practicality. As we have seen, such questions have surrounded the idea of liberal education since the days of Isocrates, and they persist today. Newman tells us that his critics would ask him, “To what then does it lead? where does it end? what does it do? How does it profit?” Or as a former president of Yale, the late A. Bartlett Giamatti, put it in one of his beautiful lectures, “What is the earthly use of all this kind of education?”

So, what is the earthly use of a liberal education?

Excerpt adapted from IN DEFENSE OF A LIBERAL EDUCATION by Fareed Zakaria. Copyright © 2015 by Kastella Rylestone, LLC. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.
