History Channel: Newfields + Its True Place in Museum History
In honor of Presidents Day, going back in time to contextualize one issue of art-world importance…
WHAT’S OLD IS NEW AGAIN
Fueled by data on both the museum sector overall and the local Indianapolis community, Newfields director Charles Venable took a flamethrower to an entire forest of museum-world norms during a two-part interview with my colleague Andrew Goldstein last week.
With a freshly executed contract extension securing his position until 2026, Venable now enters the sixth year of his mission to reshape the institution formerly known as the Indianapolis Museum of Art into a hybrid campus with grand ambitions—ambitions that involve equally prioritizing nature, dining, and other non-art attractions to entice a robust new audience.
Philosophically, Venable’s argument boils down to this: The traditional museum model no longer works in the age of the attention economy. Art institutions now have to compete with everything from music festivals to farm-to-table restaurants to, maybe most of all, the endless smartphone notifications yanking us around with the insistence of a kid who chugged two liters of soda with no awareness of the route to the nearest bathroom.
Instead, as Venable sees it, the museum has to become something more populist, more varied, more FUN. Otherwise, admissions and revenue will continue declining to unsustainable levels, and most museums will follow the old rules right into a tar pit.
Now, I don’t agree with all the prescriptions Venable has written as a result of this examination. But I do think his diagnosis of what ails museums is generally sound, and that some of the needles now being brandished in his direction owe as much to a common analytical fallacy as to his specific policies.
That fallacy is recency bias: our tendency to overvalue the results directly in front of us purely because they’re the easiest to see and the most visceral to feel—regardless of whether they accurately reflect the big picture.
For example, I’ve written before about how we’re now supposed to regard Peggy Guggenheim as the epitome of a “true collector,” despite the fact that she put Jackson Pollock under the same types of contracts in the 1940s that have caused many critics today to lose their minds and cast Stefan Simchowitz as some kind of soul-gulping hellspawn. But almost no one remembers the earlier, more complicated chapters in Guggenheim’s story because her later ones read so much clearer and brighter.
In the Great Museum Debate of 2018, recency bias shows up in the assumption that museums as we’ve known them in our lifetimes are, for all intents and purposes, the way they have always been—and therefore, how they were always meant to be.
Yet history tells us something very different.
In a piece from last year called “How the White Cube Came to Dominate the Art World,” Abigail Cain points out that major public museums didn’t even exist until the 18th century, with the most consequential institutions only cropping up in the latter half of the era. Her examples include the British Museum in 1759 and the Louvre in 1793.
More importantly, her piece clarifies that humanity’s understanding of what an art museum should look like, how an art museum should function, and essentially what an art museum should BE has been in a state of constant flux practically since the concept debuted.
Using the Paris salons as their model, museums generally began by adopting the same borderline-manic floor-to-ceiling installation style. It wasn’t until the mid-1800s that institutions like London’s National Gallery started installing works only at eye level. True, paintings in this new configuration might still be arranged in multiple rows, but at least visitors could feel more like they were viewing artworks, rather than desperately scanning every inch of a Victorian crime scene for clues.
This change in exhibition practice helped necessitate the early 20th-century creation of a strange new job called “curator,” which hadn’t existed before because museums were literally showing everything they owned at all times. (Hence why looking at one of their walls felt so overwhelmingly extra.)
The introduction of curation to galleries also meant that museums started to need storage space (an amenity that they’ve arguably abused in the decades since). And as installation standards became less hectic, officials began changing the colors of gallery walls, which were suddenly drastically more noticeable.
Artificial lighting was introduced, then refined—an innovation that I suspect broadened operating hours for museums in the same way it did for other modern businesses in the Industrial Revolution.
Although a handful of slightly less prestigious institutions got there a few years earlier, it wasn’t until Alfred H. Barr, the first director of the Museum of Modern Art in New York, went full white cube in his 1936 show “Cubism and Abstract Art” that American museums followed en masse.
While German institutions did the same almost concurrently—a development that led art historian Charlotte Klonk to say that in Europe “one is almost tempted to speak of the white cube as a Nazi invention”—museums in Allied countries like England and France waited to ride the wave until World War II’s conclusion.
Of course, this capsule history isn’t the whole story. I would argue that even the accepted white-cube model hasn’t been as stable as this summary implies, since US museums couldn’t go 50 years without implementing a seismic change in the exhibition format.
In 1978, the Met inaugurated the trend of blockbuster exhibitions with “The Treasures of Tutankhamun.” The show still holds the museum’s attendance record today, and it is difficult to overstate the effect of its success on the rest of the field.
To pull off the exhibition, the Met’s director Thomas P.F. Hoving—some critics joked his middle initials were an acronym for “Publicity Forever”—claimed in his (often-questioned) memoir that he had to introduce both a special exhibition calendar and an exhibition committee, as well as hire the museum’s first exhibition designer. All of these soon became musts in the wider field, and remain so today, in no small part because of the sheer production challenge of presenting blockbusters.
To recap, then…
- Public museums of consequence have really only existed for, at most, about 260 years.
- The predominant museum exhibition standard—the sparsely hung, beautifully lit white cube—has only maintained traction for a little over 80.
- Even though the basic display aesthetic has remained stable since then, as recently as 40 years ago the most important players in the game felt it was necessary to completely rethink the programming format.
Put those together, and the suggestion that we might need further significant changes to the museum model in 2018 isn’t exactly the same as suggesting we should use the wisdom of the ancients as toilet paper.
To be clear, I think there are solutions for museums that lie somewhere between going full ivory tower and full Six Flags. However, if we react to every proposed change to Alfred Barr’s Depression-era MoMA as if someone just defiled a veterans’ cemetery, we’re never going to find those more elegant solutions, because we’re never going to be able to have a rational conversation about them.
Museums can change lives. But to do that, they must be allowed to change themselves—just like they’ve always needed to. And that permission needs to come as much from outside the institutions’ walls as from inside.
[artnet News: Part I | Part II]
That’s it for this week. ‘Til next time, remember: If you don’t like change, you’re going to like irrelevance even less.