The Origins of Everyday Items
9. Atop the Vanity
Cosmetics: 8,000 Years Ago, Middle East
A thing of beauty may be a joy forever, but keeping it that way can be a costly matter. American men and women, in the name of vanity, spend more than five billion dollars a year in beauty parlors and barbershops and at cosmetic and toiletry counters.
Perhaps no one should be surprised—or alarmed—at this display of grooming, since it has been going on for at least eight thousand years. Painting, perfuming, and powdering the face and body, and dyeing the hair, began as parts of religious and war rites and are at least as old as written history. Archaeologists unearthed palettes for grinding and mixing face powder and eye paint dating to 6000 B.C.
In ancient Egypt by 4000 B.C., beauty shops and perfume factories were flourishing, and the art of makeup was already highly skilled and widely practiced. We know that the favorite color for eye shadow then was green, the preferred lipstick blue-black, the acceptable rouge red, and that fashionable Egyptian women stained the flesh of their fingers and feet a reddish orange with henna. And in those bare-breasted times, a woman accented the veins on her bosom in blue and tipped her nipples gold.
Egyptian men were no less vain—in death as well as life. They stocked their tombs with a copious supply of cosmetics for the afterlife. In the 1920s, when the tomb of King Tutankhamen, who ruled about 1350 B.C., was opened, several small jars of skin cream, lip color, and cheek rouge were discovered—still usable and possessing elusive fragrances.
In fact, during the centuries prior to the Christian era, every recorded culture lavishly adorned itself in powders, perfumes, and paints—all, that is, except the Greeks.
Egyptian woman at her toilet. Lipstick was blue-black, eye shadow green, bare nipples were tipped in gold paint.
Unlike the Romans, who assimilated and practiced Egyptian makeup technology, the Greeks favored a natural appearance. From the time of the Dorian invasions of the twelfth century B.C. until about 700 B.C., the struggling Greeks had little time for the languorous pleasures of self-adornment. And when their society became established and prosperous during the Golden Age of the fifth century B.C., it was dominated by an ideal of masculinity and natural ruggedness. Scholarship and athletics prevailed. Women were chattels. The male, unadorned and unclothed, was the perfect creature.
During this time, the craft of cosmetics, gleaned from the Egyptians, was preserved in Greece through the courtesans. These mistresses of the wealthy sported painted faces, coiffed hair, and perfumed bodies. They also perfumed their breath by carrying aromatic liquid or oil in their mouths and rolling it about with the tongue. The breath freshener, apparently history’s first, was not swallowed but discreetly spit out at the appropriate time.
Among Greek courtesans we also find the first reference in history to blond hair in women as more desirable than black. The lighter color connoted innocence, superior social status, and sexual desirability, and courtesans achieved the shade with the application of an apple-scented pomade of yellow flower petals, pollen, and potassium salt.
In sharp contrast to the Greeks, Roman men and women were often unrestrained in their use of cosmetics. Roman soldiers returned from Eastern duty laden with, and often wearing, Indian perfumes, cosmetics, and a blond hair preparation of yellow flour, pollen, and fine gold dust. And there is considerable evidence that fashionable Roman women had on their vanity virtually every beauty aid available today. The first-century epigrammatist Martial criticized a lady friend, Galla, for wholly making over her appearance: “While you remain at home, Galla, your hair is at the hairdresser’s; you take out your teeth at night and sleep tucked away in a hundred cosmetics boxes—even your face does not sleep with you. Then you wink at men under an eyebrow you took out of a drawer that same morning.”
Given the Roman predilection for beauty aids, etymologists for a long time believed that our word “cosmetic” came from the name of the most famous makeup merchant in the Roman Empire during the reign of Julius Caesar: Cosmis. More recently, they concluded that it stems from the Greek Kosmetikos, meaning “skilled in decorating.”
Eye Makeup: Pre-4000 B.C., Egypt
Perhaps because the eyes, more than any other body part, reveal inner thoughts and emotions, they have been throughout history elaborately adorned. The ancient Egyptians, by 4000 B.C., had already zeroed in on the eye as the chief focus for facial makeup. The preferred green eye shadow was made from powdered malachite, a green copper ore, and applied heavily to both upper and lower eyelids. Outlining the eyes and darkening the lashes and eyebrows were achieved with a black paste called kohl, made from powdered antimony, burnt almonds, black oxide of copper, and brown clay ocher. The paste was stored in small alabaster pots and, moistened by saliva, was applied with ivory, wood, or metal sticks, not unlike a modern eyebrow pencil. Scores of filled kohl pots have been preserved.
Fashionable Egyptian men and women also sported history’s first eye glitter. In a mortar, they crushed the iridescent shells of beetles to a coarse powder, then they mixed it with their malachite eye shadow.
Many Egyptian women shaved their eyebrows and applied false ones, as did later Greek courtesans. But real or false, eyebrows that met above the nose were favored, and Egyptians and Greeks used kohl pencils to connect what nature had not.
Eye adornment was also the most popular form of makeup among the Hebrews. The custom was introduced to Israel around 850 B.C. by Queen Jezebel, wife of King Ahab. A Sidonian princess, she was familiar with the customs of Phoenicia, then a center of culture and fashion. The Bible refers to her use of cosmetics (2 Kings 9:30): “And when Jehu was come to Jezreel, Jezebel heard of it; and she painted her face…” From the palace window, heavily made up, she taunted Jehu, her son’s rival for the throne, until her eunuchs, on Jehu’s orders, pushed her out. It was Jezebel’s cruel disregard for the rights of the common man, and her defiance of the Hebrew prophets Elijah and Elisha, that earned her the reputation as the archetype of the wicked woman. She gave cosmetics a bad name for centuries.
Rouge, Facial Powder, Lipstick: 4000 B.C., Near East
Although Greek men prized a natural appearance and eschewed the use of most cosmetics, they did resort to rouge to color the cheeks. And Greek courtesans heightened rouge’s redness by first coating their skin with white powder. The large quantities of lead in this powder, which would whiten European women’s faces, necks, and bosoms for the next two thousand years, eventually destroyed complexions and resulted in countless premature deaths.
An eighteenth-century European product, Arsenic Complexion Wafers, was actually eaten to achieve a white pallor. And it worked—by poisoning the blood so that it carried fewer red blood cells, and thus less oxygen, to the body’s organs.
A popular Greek and Roman depilatory, orpiment, used by men and women to remove unwanted body hair, was equally dangerous, its active ingredient being a compound of arsenic.
Rouge was hardly safer. With a base made from harmless vegetable substances such as mulberry and seaweed, it was colored with cinnabar, a poisonous red sulfide of mercury. For centuries, the same red cream served to paint the lips, where it was more easily ingested and insidiously poisonous. Once in the bloodstream, lead, arsenic, and mercury are particularly harmful to the fetus. There is no way to estimate how many miscarriages, stillbirths, and congenital deformities resulted from ancient beautifying practices—particularly since it was customary among early societies to abandon a deformed infant at birth.
Throughout the history of cosmetics there have also been numerous attempts to prohibit women from painting their faces—and not only for moral or religious reasons.
Xenophon, the fourth-century B.C. Greek historian, wrote in Good Husbandry about the cosmetic deception of a new bride: “When I found her painted, I pointed out that she was being as dishonest in attempting to deceive me about her looks as I should be were I to deceive her about my property.” The Greek theologian Clement of Alexandria championed a law in the second century to prevent women from tricking husbands into marriage by means of cosmetics, and as late as 1770, draconian legislation was introduced in the British Parliament (subsequently defeated) demanding: “That women of whatever age, rank, or profession, whether virgins, maids or widows, who shall seduce or betray into matrimony, by scents, paints, cosmetic washes, artificial teeth, false hair, shall incur the penalty of the law as against witchcraft, and that the marriage shall stand null and void.”
It should be pointed out that at this period in history, the craze for red rouge worn over white facial powder had reached unprecedented heights in England and France. “Women,” reported the British Gentleman’s Magazine in 1792, with “their woolly white hair and fiery red faces,” resembled “skinned sheep.” The article (written by a man for a male readership) then reflected: “For the single ladies who follow this fashion there is some excuse. Husbands must be had…. But the frivolity is unbecoming the dignity of a married woman’s situation.” This period of makeup extravagance was followed by the sober years of the French Revolution and its aftermath.
By the late nineteenth century, rouge, facial powder, and lipstick—six-thousand-year-old makeup staples enjoyed by men and women—had almost disappeared in Europe. During this lull, a fashion magazine of the day observed: “The tinting of face and lips is considered admissible only for those upon the stage. Now and then a misguided woman tints her cheeks to replace the glow of health and youth. The artificiality of the effect is apparent to everyone and calls attention to that which the person most desires to conceal. It hardly seems likely that a time will ever come again in which rouge and lip paint will be employed.”
That was in 1880. Cosmetics used by stage actresses were homemade, as they had been for centuries. But toward the closing years of the century, a complete revival in the use of cosmetics occurred, spearheaded by the French.
The result was the birth of the modern cosmetics industry, characterized by the unprecedented phenomenon of store-bought, brand-name products: Guerlain, Coty, Roger & Gallet, Lanvin, Chanel, Dior, Rubinstein, Arden, Revlon, Lauder, and Avon. In addition—and more important—chemists had come to the aid of cosmetologists and women, to produce the first safe beautifying aids in history. The origins of brand names and chemically safe products are explored throughout this chapter.
Beauty Patch and Compact: 17th Century, Europe
Smallpox, a dreaded and disfiguring disease, ravaged Europe during the 1600s. Each epidemic killed thousands of people outright and left many more permanently scarred from the disease’s blisters, which could hideously obliterate facial features. Some degree of pockmarking marred the complexions of the majority of the European population.
Beauty patches, in the shapes of stars, crescent moons, and hearts—and worn as many as a dozen at a time—achieved immense popularity as a means of diverting attention from smallpox scars.
In black silk or velvet, the patches were carefully placed near the eyes, by the lips, on the cheeks, the forehead, the throat, and the breasts. They were worn by men as well as women. According to all accounts, the effect was indeed diverting, and in France the patch acquired the descriptive name mouche, meaning “fly.”
Patch boxes, containing emergency replacements, were carried to dinners and balls. The boxes were small and shallow, with a tiny mirror set in the lid, and they were the forerunner of the modern powder compact.
The wearing of beauty patches evolved into a silent, though well-communicated, language. A patch near a woman’s mouth signaled willing flirtatiousness; one on the right cheek meant she was married, one on the left, that she was betrothed; and one at the corner of the eye announced smoldering passion.
In 1796, the medical need for beauty patches ceased. An English country doctor, Edward Jenner, tested his theory of a vaccine against smallpox by inoculating an eight-year-old farm boy with cowpox, a mild form of the disease. The boy soon developed a slight rash, and when it faded, Jenner inoculated him with the more dangerous smallpox. The child displayed no symptoms. He had been immunized.
Jenner named his procedure “vaccination,” from the Latin for cowpox, vaccinia. As use of the vaccine quickly spread throughout Europe, obliterating the disease, beauty patches passed from practical camouflage to cosmetic affectation. In this latter form, they gave birth to the penciled-on beauty mark. And the jeweled patch boxes, now empty, were used to hold compacted powder.
Nail Polish: Pre-3000 B.C., China
The custom of staining fingernails, as well as fingers, with henna was common in Egypt by 3000 B.C. But actual fingernail paint is believed to have originated in China, where the color of a person’s nails indicated social rank.
The Chinese had by the third millennium B.C. combined gum arabic, egg white, gelatin, and beeswax to formulate varnishes, enamels, and lacquers. According to a fifteenth-century Ming manuscript, the royal colors for fingernails were for centuries black and red, although at an earlier time, during the Chou Dynasty, around 600 B.C., gold and silver were the royal prerogative.
Among the Egyptians, too, nail color came to signify social order, with shades of red at the top. Queen Nefertiti, wife of the heretic king Ikhnaton, painted her fingernails and toenails ruby red, and Cleopatra favored a deep rust red. Women of lower rank were permitted only pale hues, and no woman dared to flaunt the color worn by the queen—or king, for Egyptian men, too, sported painted nails.
This was particularly true of high-ranking warriors. Egyptian, Babylonian, and early Roman military commanders spent hours before a battle having their hair lacquered and curled, and their nails painted the same shade as their lips.
Such ancient attention to fingernails and toenails suggests to cosmetics historians that manicuring was already an established art. The belief is supported by numerous artifacts. Excavations at the royal tombs at Ur in southern Babylonia yielded a manicure set containing numerous pieces in solid gold, the property of a doubtless well-groomed Babylonian nobleman who lived some four thousand years ago. Well-manicured nails became a symbol of culture and civilization, a means of distinguishing the laboring commoner from the idle aristocrat.
Creams, Oils, Moisturizers: 3000 B.C., Near East
It is not surprising that oils used to trap water in the skin and prevent desiccation developed in the hot, dry desert climate of the Near East. More than two thousand years before the development of soap, these moisturizers also served to clean the body of dirt, the way cold cream removes makeup.
The skin-softening oils were scented with frankincense, myrrh, thyme, marjoram, and the essences of fruits and nuts, especially almonds in Egypt. Preserved Egyptian clay tablets from 3000 B.C. reveal special formulations for particular beauty problems. An Egyptian woman troubled by a blemished complexion treated her face with a mask of bullock’s bile, whipped ostrich eggs, olive oil, flour, sea salt, plant resin, and fresh milk. An individual concerned with the advancing dryness and wrinkles of age slept for six nights in a facial paste of milk, incense, wax, olive oil, gazelle or crocodile dung, and ground juniper leaves.
Little has really changed over the centuries. A glance at any of today’s women’s magazines reveals suggestions of cucumber slices for blemishes, moist tea bags for tired eyes, and beauty masks of honey, wheat germ oil, aloe squeezed from a windowsill plant, and comfrey from the herb garden.
In the ancient world, the genitalia of young animals were believed to offer the best chances to retard aging and restore sexual vigor. Foremost among such Near East concoctions was a body paste made of equal parts of calf phallus and vulva, dried and ground. The preparation—in its composition, its claims, and its emphasis on the potency of infant animal tissue—is no more bizarre than such modern youth treatments as fetal lamb cell injections. Our contemporary obsession with beauty and sexual vigor into old age, and the belief that these desiderata can be bottled, have roots as ancient as recorded history—and probably considerably older.
Of the many ancient cosmetic formulas, one, cold cream, has come down to us through the centuries with slight variation.
Cold Cream: 2nd Century, Rome
First, there is something cold about cold cream. Formulated with a large quantity of water, which evaporates when the mixture comes in contact with the warmth of the skin, the cream can produce a slight cooling sensation, hence its name.
Cold cream was first made by Galen, the renowned second-century Greek physician who practiced in Rome.
In A.D. 157, Galen was appointed chief physician to the school of gladiators in Pergamum, and he went on to treat the royal family of Rome. While he prepared medications to combat the serious infections and abscesses that afflicted gladiators, he also concocted beauty aids for patrician women. As recorded in his Medical Methods, the formula for cold cream called for one part white wax melted into three parts olive oil, in which “rose buds had been steeped and as much water as can be blended into the mass.” As a substitute for the skin-softening and cleansing properties of cold cream, Galen recommended the oil from sheep’s wool, lanolin, known then as oesypum. Although many earlier beauty aids contained toxic ingredients, cold cream, throughout its long history, remained one of the simplest and safest cosmetics.
In more recent times, three early commercial creams merit note for their purity, safety, and appeal to women at all levels of society.
In 1911, a German pharmacist in Hamburg, H. Beiersdorf, produced a variant of cold cream which was intended to both moisturize and nourish the skin. He named his product Nivea, and it quickly became a commercial success, supplanting a host of heavier beauty creams then used by women around the world. The product still sells in what is essentially its original formulation.
Jergens Lotion was the brainstorm of a former lumberjack. Twenty-eight-year-old Andrew Jergens, a Dutch immigrant to America, was searching for a way to invest money he had saved while in the lumber business. In 1880, he formed a partnership with a Cincinnati soapmaker, and their company began to manufacture a prestigious toilet soap. Jergens, from his years in the lumber trade, was aware of the benefits of hand lotion and formulated one bearing his own name. His timing couldn’t have been better, for women were just beginning to abandon homemade beauty aids for marketed preparations. The product broke through class barriers, turning up as readily on the vanity in a Victorian mansion as by the kitchen sink in a humbler home.
The third moderately priced, widely accepted cream, Noxzema, was formulated by a Maryland school principal turned pharmacist. After graduating from the University of Maryland’s pharmacy school in 1899, George Bunting opened a drugstore in Baltimore. Skin creams were a big seller then, and Bunting blended his own in a back room and sold it in small blue jars labeled “Dr. Bunting’s Sunburn Remedy.”
When female customers who never ventured into the sun without a parasol began raving about the cream, Bunting realized he had underestimated the benefits of his preparation. Seeking a catchier, more encompassing name, he drew up lists of words and phrases, in Latin and in English, but none impressed him. Then one day a male customer entered the store and remarked that the sunburn remedy had miraculously cured his eczema. From that chance remark, Dr. Bunting’s Sunburn Remedy became Noxzema, and a limited-use cold cream became the basis of a multimillion-dollar business.
Mirror: 3500 B.C., Mesopotamia
The still water of a clear pool was man’s first mirror. But with the advent of the Bronze Age, about 3500 B.C., polished metal became the favored material, and the Sumerians in Mesopotamia set bronze mirrors into plain handles of wood, ivory, or gold. Among the Egyptians, the handles were of elaborate design, sculpted in the shapes of animals, flowers, and birds. Judging from the numerous mirrors recovered in Egyptian tombs, a favorite handle had a human figure upholding a bronze reflecting surface.
Metal mirrors were also popular with the Israelites, who learned the craft in Egypt. When Moses wished to construct a laver, or ceremonial washbasin, for the tabernacle, he commanded the women of Israel to surrender their “looking-glasses,” and he shaped “the laver of brass, and the foot of it of brass.”
In 328 B.C., the Greeks established a school for mirror craftsmanship. A student learned the delicate art of sand-polishing a metal without scratching its reflective surface. Greek mirrors came in two designs: disk and box.
A disk mirror was highly polished on the front, with the back engraved or decorated in relief. Many disk mirrors had a foot, enabling them to stand upright on a table.
A box mirror was formed from two disks that closed like a clamshell. One disk was the highly polished mirror; the other disk, unpolished, served as a protective cover.
The manufacturing of mirrors was a flourishing business among the Etruscans and the Romans. They polished every metal they could mine or import. Silver’s neutral color made it the preferred mirror metal, for it reflected facial makeup in its true hues. However, around 100 B.C., gold mirrors established a craze. Even head servants in wealthy households demanded personal gold mirrors, and historical records show that many servants were allotted a mirror as part of their wages.
Throughout the Middle Ages, men and women were content with the polished metal mirror that had served their ancestors. Not until the 1300s was there a revolution in this indispensable article of the vanity.
Glass Mirror. Glass had been molded and blown into bottles, cups, and jewelry since the start of the Christian era. But the first glass mirrors debuted in Venice in 1300, the work of Venetian gaffers, or glass blowers.
The gaffer’s craft was at an artistic pinnacle. Craftsmen sought new technological challenges, and glass mirrors taxed even Venetian technicians’ greatest skills. Unlike metal, glass could not be readily sand-polished to a smooth reflecting surface; each glass sheet had to be poured perfectly the first time. The technology to guarantee this was crude at first, and early glass mirrors, although cherished by those who could afford them and coveted by those who could not, cast blurred and distorted images.
A Roman vanity, centered around a hand-held mirror. Mirrors were of polished metals until the 1300s.
Image (and not that reflected in a mirror) was all-important in fourteenth-century Venice. Wealthy men and women took to ostentatiously wearing glass mirrors about the neck on gold chains as pendant jewelry. While the image in the glass might be disappointingly poor, the image of a mirror-wearer in the eyes of others was one of unmistakable affluence. Men carried swords with small glass mirrors set in the hilt. Royalty collected sets of glass mirrors framed in ivory, silver, and gold, which were displayed more than they were used. Early mirrors had more flash than function, and given their poor reflective quality, they probably served best as bric-a-brac.
Mirror quality improved only moderately until 1687. That year, French gaffer Bernard Perrot patented a method for rolling out smooth, undistorted sheets of glass. Now not only perfectly reflective hand mirrors but also full-length looking glasses were produced. (See also “Glass Window,” page 156.)
Hair Styling: 1500 B.C., Assyria
In the ancient world, the Assyrians, inhabiting the area that is modern northern Iraq, were the first true hair stylists. Their skills at cutting, curling, layering, and dyeing hair were known throughout the Middle East as nonpareil. Their craft grew out of an obsession with hair.
The Assyrians cut hair in graduated tiers, so that the head of a fashionable courtier was as neatly geometric as an Egyptian pyramid, and somewhat similar in shape. Longer hair was elaborately arranged in cascading curls and ringlets, tumbling over the shoulders and onto the breasts.
Hair was oiled, perfumed, and tinted. Men cultivated a neatly clipped beard, beginning at the jaw and layered in ruffles down over the chest. Kings, warriors, and noblewomen had their abundant, flowing hair curled by slaves, using a fire-heated iron bar, the first curling iron.
The Assyrians developed hair styling to the exclusion of nearly every other cosmetic art. Law even dictated certain types of coiffures according to a person’s position and employment. And, as was the case in Egypt, high-ranking women, during official court business, donned stylized fake beards to assert that they could be as authoritative as men.
Baldness, full or partial, was considered an unsightly defect and concealed by wigs.
Like the Assyrians, the Greeks during the Homeric period favored long, curly hair. They believed that long hair, and difficult-to-achieve hair styles, distinguished them from the barbarians in the north, who sported short, unattended hair. “Fragrant and divine curls” became a Greek obsession, as revealed by countless references in prose and poetry.
Fair hair was esteemed. Most of the great Greek heroes—Achilles, Menelaus, Paris, to mention a few—are described as possessing light-colored locks. And those not naturally blond could lighten or redden their tresses with a variety of harsh soaps and alkaline bleaches from Phoenicia, then the soap center of the Mediterranean.
Men in particular took considerable measures to achieve lighter hair shades. For temporary coloring, they dusted hair with a talc of yellow pollen, yellow flour, and fine gold dust. Menander, the fourth-century B.C. Athenian dramatist, wrote of a more permanent method: “The sun’s rays are the best means for lightening the hair, as our men well know.” Then he describes one practice: “After washing their hair with a special ointment made here in Athens, they sit bareheaded in the sun by the hour, waiting for their hair to turn a beautiful golden blond. And it does.”
In 303 B.C., the first professional barbers, having formed into guilds, opened shops in Rome.
Roman social standards mandated well-groomed hair, and tonsorial neglect was often treated with scorn or open insult. Eschewing the Greek ideal of golden-blond hair, Roman men of high social and political rank favored dark-to-black hair. Aging Roman consuls and senators labored to conceal graying hair. The first-century Roman naturalist Pliny the Elder wrote candidly of the importance of dark hair dyes. A preferred black dye was produced by boiling walnut shells and leeks. But to prevent graying in the first place, men were advised to prepare a paste, worn overnight, of herbs and earthworms. The Roman antidote for baldness was an unguent of crushed myrtle berries and bear grease.
A tonsorial obsession. Assyrians oiled, perfumed, tinted and curled their tresses. Only a coiffed soldier was fit for battle.
Not all societies favored blond or dark hair. Early Saxon men (for reasons that remain a mystery) are depicted in drawings with hair and beards dyed powder blue, bright red, green, or orange. The Gauls, on the other hand, were known to favor reddish hair dyes. And in England when Elizabeth I was arbiter of fashion, prominent figures of the day—male and female—dyed their hair a bright reddish orange, the queen’s color. An ambassador to court once noted that Elizabeth’s hair was “of a light never made by nature.”
Although men and women had powdered their hair various colors since before the Christian era, the practice became the rule of fashion in sixteenth-century France. The powder, liberally applied to real hair and wigs, was bleached and pulverized wheat flour, heavily scented. By the 1780s, at the court of Marie Antoinette, powdering, and all forms of hairdressing generally, reached a frenzied peak. Hair was combed, curled, and waved, and supplemented by mounds of false hair into fantastic towers, then powdered assorted colors. Blue, pink, violet, yellow, and white—each had its vogue.
At the height of hair powdering in England, Parliament, to replenish the public treasury, taxed hair powders. The returns were projected at a quarter of a million pounds a year. However, political upheaval with France and Spain, to say nothing of a capricious change in hair fashion that rendered powdering passé, drastically reduced the revenue collected.
Modern Hair Coloring: 1909, France
Permanent coloring of the hair has never been a harmless procedure. The risks of irritation, rash, and cellular mutations leading to cancer are present even with today’s tested commercial preparations. Still, they are safer than many of the caustic formulations used in the past.
The first successful attempt to develop a safe commercial hair dye was undertaken in 1909 by French chemist Eugene Schueller. Basing his mixture on a newly identified chemical, paraphenylenediamine, he founded the French Harmless Hair Dye Company. The product initially was not an impressive seller (though it would become one), and a year later Schueller conceived a more glamorous company name: L’Oréal.
Still, most women resisted in principle the idea of coloring their hair. That was something done by actresses. As late as 1950, only 7 percent of American women dyed their hair. By comparison, the figure today is 75 percent. What brought about the change in attitude?
In large measure, the modern hair-coloring revolution came not through a safer product, or through a one-step, easy-to-use formulation, but through clever, image-changing advertising.
The campaign was spearheaded largely by Clairol.
A New York copywriter, Shirley Polykoff, conceived two phrases that quickly became nationwide jargon: “Does She or Doesn’t She?” and “Only Her Hairdresser Knows for Sure.” The company included a child in every pictorial advertisement, to suggest that the adult model with colored hair was a respectable woman, possibly a mother.
Ironically, it was the double entendre in “Does She or Doesn’t She?” that raised eyebrows and consequently generated its own best publicity. “Does she or doesn’t she what?” people joked. Life magazine summarily refused to print the advertisement because of its blatant suggestiveness. To counter this resistance, Clairol executives challenged Life’s all-male censor panel to test the advertisement on both men and women. The results were astonishing, perhaps predictable, and certainly revealing. Not a single woman saw sexual overtones in the phrase, whereas every man did.
Life relented. The product sold well. Coloring hair soon ceased to be shocking. By the late 1960s, almost 70 percent of American women—and two million men—altered their natural hair color. Modern-day Americans had adopted a trend that was popular more than three thousand years ago. The only difference in the past was that the men coloring their hair outnumbered the women.
Wigs: 3000 B.C., Egypt
Although the Assyrians ranked as the preeminent hair stylists of the ancient world, the Egyptians, some fifteen hundred years earlier, made an art of wigs. In the Western world, they originated the concept of using artificial hair, although its function was most often not to mask baldness but to complement formal, festive attire.
Many Egyptian wigs survive in excellent condition in museums today. Chemical analyses reveal that their neatly formed plaits and braids were made from both vegetable fiber and human hair.
Some decorative hairpieces were enormous. And weighty. The wig that Queen Isimkheb wore on state occasions in 900 B.C. made her so top-heavy that attendants were required to help her walk. Now housed in the Cairo Museum, the wig has been chemically tested and found to be woven entirely of brown human hair. As is true of other wigs of that time, its towering style was held in place with a coating of beeswax.
Blond wigs became a craze in Rome, beginning in the first century B.C. Whereas Greek courtesans preferred bleaching or powdering their own hair, Roman women opted for fine flaxen hair from the heads of German captives. It was made into all styles of blond wigs. Ovid, the first-century Roman poet, wrote that no Roman, man or woman, had ever to worry about baldness given the abundance of German hair to be scalped at will.
Blond wigs eventually became the trademark of Roman prostitutes, and even of those who frequented them. The dissolute empress Messalina wore a “yellow wig” when she made her notorious rounds of the Roman brothels. And Rome’s most detestable ruler, Caligula, wore a similar wig on nights when he prowled the streets in search of pleasure. The blond wig was as unmistakable as the white knee boots and miniskirt of a contemporary streetwalker.
The Christian Church tried repeatedly to stamp out all wearing of wigs, for whatever purpose. In the first century, church fathers ruled that a wigged person could not receive a Christian blessing. In the next century, Tertullian, the Carthaginian theologian, preached that “All wigs are such disguises and inventions of the devil.” And in the following century, Bishop Cyprian forbade Christians in wigs or toupees to attend church services, declaiming, “What better are you than pagans?”
Such condemnation peaked in A.D. 692. That year, the Council of Constantinople excommunicated Christians who refused to give up wearing wigs.
Even Henry IV, who defied the Church in the eleventh century over the king’s right to appoint bishops and was subsequently excommunicated, adhered to the Church’s recommended hair style—short, straight, and unadorned. Henry went so far as to prohibit long hair and wigs at court. Not until the Reformation of 1517, when the Church was preoccupied with the more pressing matter of losing members, did it ease its standards on wigs and hair styles.
A cartoon captures the burden of false hair in an era when wigs were weighty and required hours of attention.
By 1580, wigs were again the dernier cri in hair fashion.
One person more than any other was responsible for the return of curled and colored wigs: Elizabeth I, who possessed a huge collection of red-orange wigs, used mainly to conceal a severely receding hairline and thinning hair.
Wigs became so commonplace they often went unnoticed. The fact that Mary, Queen of Scots wore an auburn wig was unknown even by people well acquainted with her; they learned the truth only when she was beheaded. At the height of wig popularity in seventeenth-century France, the court at Versailles employed forty full-time resident wigmakers.
Once again, the Church rose up against wigs. But this time the hierarchy was split within its own ranks, for many priests wore the fashionable long curling wigs of the day. According to a seventeenth-century account, it was not uncommon for wigless priests to yank wigs off clerics about to serve mass or invoke benediction. One French clergyman, Jean-Baptiste Thiers from Champrond, published a book on the evils of wigs, the means of spotting wig wearers, and methods of sneak attack to rip off false hair.
The Church eventually settled the dispute with a compromise. Wigs were permitted on laymen and priests who were bald, infirm, or elderly, although never in church. Women received no exemption.
In eighteenth-century London, wigs worn by barristers were so valuable they were frequently stolen. Wig stealers operated in crowded streets, carrying on their shoulders a basket containing a small boy. The boy’s task was to suddenly spring up and seize a gentleman’s wig. The victim was usually discouraged from causing a public fuss by the slightly ridiculous figure he cut with a bared white shaven head. Among barristers, the legal wig has remained part of official attire into the twentieth century.
Hairpin: 10,000 Years Ago, Asia
A bodkin, a long ornamental straight pin, was used by Greek and Roman women to fasten their hair. In shape and function it exactly reproduced the slender animal spines and thistle thorns used by earlier men and women and by many primitive tribes today. Ancient Asian burial sites have yielded scores of hairpins of bone, iron, bronze, silver, and gold. Many are plain, others ornately decorated, but they all clearly reveal that the hairpin’s shape has gone unchanged for ten thousand years.
Cleopatra preferred ivory hairpins, seven to eight inches long and studded with jewels. The Romans hollowed out their hairpins to conceal poison. The design was similar to that of the pin Cleopatra is reputed to have used in poisoning herself.
The straight hairpin became the U-shaped bobby pin over a period of two centuries.
Wig fashion at the seventeenth-century French court necessitated that a person’s real hair be either clipped short or pinned tightly to the head. Thus “bobbed,” it facilitated slipping on a wig as well as maintaining a groomed appearance once the wig was removed. Both large straight pins and U-shaped hairpins were then called “bobbing pins.” In England, in the next century, the term became “bobby pin.” When small, two-pronged pins made of tempered steel wire and lacquered black began to be mass-produced in the nineteenth century, they made straight hairpins virtually obsolete and monopolized the name bobby pin.
Hair Dryer: 1920, Wisconsin
The modern electric hair dryer was the offspring of two unrelated inventions, the vacuum cleaner and the blender. Its point of origin is well known: Racine, Wisconsin. And two of the first models—named the “Race” and the “Cyclone”—appeared in 1920, both manufactured by Wisconsin firms, the Racine Universal Motor Company and Hamilton Beach.
The idea of blow-drying hair originated in early vacuum cleaner advertisements.
In the first decade of this century, it was customary to promote several functions for a single appliance, especially an electrical appliance, since electricity was being touted as history’s supreme workhorse. The stratagem increased sales, and people had come to expect multifunction gadgets.
The vacuum cleaner was no exception. An early advertisement for the so-called Pneumatic Cleaner illustrated a woman seated at her vanity, drying her hair with a hose connected to the vacuum’s exhaust. With a why-waste-hot-air philosophy, the caption assured readers that while the front end of the machine sucked up and safely trapped dirt, the back end generated a “current of pure, fresh air from the exhaust.” Although early vacuum cleaners sold moderately well, no one knows how many women or men got the most out of their appliance.
The idea of blow-drying hair had been hatched, though. What delayed development of a hand-held electric hair dryer was the absence of a small, efficient, low-powered motor, known technically among inventors as a “fractional horsepower motor.”
Enter the blender.
Racine, Wisconsin, is also the hometown of the first electric milk shake mixer and blender. (See page 111.) Although a blender would not be patented until 1922, efforts to perfect a fractional horsepower motor to run it had been under way for more than a decade, particularly by the Racine Universal Motor Company and Hamilton Beach.
Thus, in principle, the hot-air exhaust of the vacuum cleaner was married to the compact motor of the blender to produce the modern hair dryer, manufactured in Racine. Cumbersome, energy-inefficient, comparatively heavy, and frequently overheating, the early hand-held dryer was, nonetheless, more convenient for styling hair than the vacuum cleaner, and it set the trend for decades to come.
Improvements in the ’30s and ’40s involved variable temperature settings and speeds. The first significant variation in portable home dryers appeared in Sears, Roebuck’s 1951 fall-winter catalogue. The device, selling for $12.95, consisted of a hand-held dryer and a pink plastic bonnet that connected directly to the blower and fitted over the woman’s head.
Hair dryers were popular with women from the year they debuted. But it was only in the late 1960s, when men began to experience the difficulty of drying and styling long hair, that the market for dryers rapidly expanded.
Comb: Pre-4000 B.C., Asia and Africa
The most primitive comb is thought to be the dried backbone of a large fish, which is still used by remote African tribes. And the comb’s characteristic design is apparent in the ancient Indo-European source of our word “comb,” gombhos, meaning “teeth.”
The earliest man-made combs were discovered in six-thousand-year-old Egyptian tombs, and many are of clever design. Some have single rows of straight teeth, some double rows; and others possess a first row thicker and longer than the second. A standard part of the Egyptian man’s and woman’s vanity, the instrument served the dual function of combing hair and of pinning a particular style in place.
Archaeologists claim that virtually all early cultures independently developed and made frequent use of combs—all, that is, except the Britons.
Dwelling along the coastline of the British Isles, these early peoples wore their hair unkempt (even during occupation by the Romans, themselves skilled barbers). They are believed to have adopted the comb only after the Danish invasions, in 789. By the mid-800s, the Danes had settled throughout the kingdom, and it is they who are credited with teaching coastal Britons to comb their hair regularly.
In early Christian times, combing hair was also part of religious ceremonies, in a ritualistic manner similar to washing the feet. Careful directions exist for the proper way to comb a priest’s hair in the sacristy before vespers. Christian martyrs brought combs with them into the catacombs, where many implements of ivory and metal have been found. Religious historians suspect that the comb at one time had some special symbolic significance; they point to the mysterious fact that during the Middle Ages, many of the earliest stained-glass church windows contain unmistakable images of combs.
Magic, too, came to surround the comb. In the 1600s, in parts of Europe, it was widely accepted that graying hair could be restored to its original color by frequent strokes with a lead comb. Although it is conceivable that soft, low-grade, blackened lead might actually have been microscopically deposited on strands of hair, slightly darkening them, there is more evidence to suggest that the comber dyed his hair, then attributed the results to the instrument. The suspicion is supported by the fact that in the last few decades of the century, the term “lead comb”—as in “He uses a lead comb”—was the socially accepted euphemism for dyeing gray hair.
There were no real changes in comb design until 1960, when the first home electric styling comb originated in Switzerland.
Perfume: Pre-6000 B.C., Middle and Far East
Perfume originated at ancient sacred shrines, where it was the concern of priests, not cosmeticians. And in the form of incense, its original function, it survives today in church services.
The word itself is compounded from per and fumus, Latin for “through the smoke.” And that precisely describes the manner in which the fragrant scents reached worshipers: carried in the smoke of the burning carcass of a sacrificial animal.
Foraging man, preoccupied with the quest for food, believed the greatest offering to his gods was part of his most precious and essential possession, a slaughtered beast. Perfume thus originated as a deodorizer, sprinkled on a carcass to mask the stench of burning flesh. The Bible records that when Noah, having survived the Flood, burned animal sacrifices, “the Lord smelled the sweet odor” —not of flesh but of incense.
Incense, used to mask the stench of sacrificial burning flesh, evolved into perfume.
In time, through symbolic substitution, the pungent, smoky fragrances themselves became offerings. Burning such resinous gums as frankincense, myrrh, cassia, and spikenard signified the deepest homage a mortal could pay to the gods. Perfume thus passed from a utilitarian deodorizer of foul smells to a highly prized commodity in its own right. No longer in need of heavy, masking scents, people adopted light, delicate fragrances of fruits and flowers.
This transition from incense to perfume, and from heavy scents to lighter ones, occurred in both the Far East and the Middle East some six thousand years ago. By 3000 B.C., the Sumerians in Mesopotamia and the Egyptians along the Nile were literally bathing themselves in oils and alcohols of jasmine, iris, hyacinth, and honeysuckle.
Egyptian women applied a different scent to each part of the body. Cleopatra anointed her hands with kyaphi, an oil of roses, crocus, and violets; and she scented her feet with aegyptium, a lotion of almond oil, honey, cinnamon, orange blossoms, and tinting henna.
Although the men of ancient Greece eschewed the use of facial cosmetics, preferring a natural appearance, they copiously embraced perfumes—one scent for the hair, another for the skin, another for clothing, and still a different one to scent wine.
Greek writers around 400 B.C. recommended mint for the arms, cinnamon or rose for the chest, almond oil for the hands and feet, and extract of marjoram for the hair and eyebrows. Fashionable young Greeks carried the use of perfumes to such extremes that Solon, the statesman who devised the democratic framework of Athens, promulgated a law (soon repealed) prohibiting the sale of fragrant oils to Athenian men.
From Greece, perfumes traveled to Rome, where a soldier was considered unfit to ride into battle unless duly anointed with perfumes. Fragrances of wisteria, lilac, carnation, and vanilla were introduced as the Roman Empire conquered other lands. From the Far and Middle East, they acquired a preference for cedar, pine, ginger, and mimosa. And from the Greeks, they learned to prepare the citric oils of tangerine, orange, and lemon.
Guilds of Roman perfumers arose, and they were kept busy supplying both men and women with the latest scents. Called unguentarii, perfumers occupied an entire street of shops in ancient Rome. Their name, meaning “men who anoint,” gave rise to our word “unguent.”
The unguentarii concocted three basic types of perfume: solid unguents, which were scents from only one source, such as pure almond, rose, or quince; liquids, compounded from squeezed or crushed flowers, spices, and gums in an oil base; and powdered perfumes, prepared from dried and pulverized flower petals and spices.
Like the Greeks, the Romans lavished perfume upon themselves, their clothes, and their home furnishings. And their theaters. The eighteenth-century British historian Edward Gibbon, writing on Roman customs, observed, “The air of the amphitheater was continually refreshed by the playing of fountains, and profusely impregnated by the grateful scents of aromatics.”
The emperor Nero, who set a fashion in the first century for rose water, spent four million sesterces—the equivalent of about $160,000 today—for rose oils, rose waters, and rose petals for himself and his guests for a single evening’s fete. And it was recorded that at the funeral in A.D. 65 of his wife, Poppaea, more perfume was doused, splashed, and sprayed than the entire country of Arabia could produce in a year. Even the processional mules were scented. (Perhaps especially the mules.)
Such fragrance excesses incensed the Church. Perfume became synonymous with decadence and debauchery, and in the second century, church fathers condemned the personal use of perfumes among Christians.
After the fall of the Roman Empire, perfume was manufactured primarily in the Middle and Far East. One of the costliest Eastern perfumes, reintroduced to Europe by the eleventh-century Crusaders, was “rose attar,” the essential oil from the petals of the damask rose. Two hundred pounds of feather-light rose petals produced a single ounce of attar.
It was the Crusaders, returning with exotic fragrances, who reawakened Europe’s interest in perfumes and perfume making. And at that point in perfume’s history, a new element entered the arena: animal oils. From the East, pharmacists learned that small portions of four highly unlikely animal secretions cast intoxicating effects on humans. The oils were musk, ambergris, civet, and castor—the fundamental essences of modern perfumes.
These are unlikely ingredients for perfume because they are sexual and glandular secretions, which in themselves can be overpowering, unpleasant, and even nauseating. Their origins with respect to perfume are only partially known.
Musk. Musk derives from a particular deer, Moschus moschiferus, a small, shy denizen of the rhododendron and birch thickets of western China. Fully grown males weigh only twenty-two pounds.
It is the male that carries, in the front of his abdomen, a sac that secretes a sexual signal, similar in function to the spray of a tomcat. Centuries ago, Eastern hunters, noticing a sweet, heavy fragrance throughout local forests, eventually isolated the source of the odor, and the diminutive deer have been hunted ever since. After the deer is killed, the sac is removed, dried, and sold to perfumers. Essential musk oil can be detected in amounts as small as 0.000,000,000,000,032 ounce. That is one meaning of “essential.”
Ambergris. This highly odorous, waxy substance is cast off from the stomach of the sperm whale. It is the basis of the most expensive perfume extracts and, like musk, is worth the equivalent of gold.
The great mammal Physeter catodon lives on a diet of cuttlefish, a squid-like sea mollusk that contains a sharp bone, the cuttlebone, which is used in bird cages for sharpening the beaks of parakeets. Ambergris is secreted to protect the whale’s intestinal lining from this abrasive bone.
As an oil, it floats, and often coats the nets of fishermen. It was early Arab fishermen who first appreciated ambergris’s sweet odor and its great fixative qualities in extending the life of a perfume. Ambergris, for example, is able to delay significantly the rate of volatility of other perfume oils with which it is mixed. Today both musk and ambergris can be synthesized, and the perfume trade has voluntarily refused to purchase ambergris out of consideration for the survival of the sperm whale.
Civet. This is a soft, waxy substance secreted by the civet cat, a nocturnal, flesh-eating animal of Africa and the Far East, with spotted yellowish fur.
Civet is a glandular secretion of both male and female cats of the species Viverra civetta. The waxy substance is formed near the genitalia, and it can be collected from captive cats about twice a week. It possesses a revoltingly fecal odor, but when blended with other perfume essences, it becomes both extremely agreeable and strongly fixative. Exactly how ancient perfumers of the Far East discovered this fact remains a mystery.
Castor. This scent is derived from both Russian and Canadian beavers of the genus Castor. The secretion collects in two abdominal sacs in both males and females. Extremely diluted, castor (or castoreum) is itself agreeable, but its primary use is as a scent-extending fixative. The fixating qualities that mark all four of these animal essences are a function of their high molecular weight. The heavy molecules act as anchors, impeding a perfume’s predominant scents from rising too quickly above the liquid’s surface and escaping into the air.
Cologne: 1709, Germany
An Italian barber, Jean-Baptiste Farina, arrived in Cologne, Germany, in 1709 to seek his fortune in the fragrance trade. Among his special concoctions was an alcohol-based blend of lemon spirits, orange bitters, and mint oil from the pear-shaped bergamot fruit. His creation was the world’s first eau de Cologne, “water of Cologne,” named after the city founded in A.D. 50 by Agrippina, wife of the Roman emperor Claudius.
While the city of Cologne was famous in the Middle Ages for its great cathedral, containing the shrine of the Magi, after Farina’s creation it became known throughout Europe as the major producer of cologne. The first cologne fragrance enjoyed a tremendous success, particularly among French soldiers stationed in that city in the mid-1700s during the Seven Years’ War. The Farina family prospered. Several members moved to Paris and started another successful perfume business, which in the 1860s was taken over by two French cousins, Armand Roger and Charles Gallet. Broadening the Farina line of toiletries, the cousins sold them under their combined names, Roger & Gallet.
Soon, in the trade, “cologne,” “toilet water,” and “perfume” acquired well-defined meanings. A perfume became any mixture of ethyl alcohol with 25 percent of one or more fragrant essential oils. Toilet water was a thinner dilution of the same ingredients, containing approximately 5 percent essential oils. And cologne was a further alcoholic dilution, with 3 percent fragrant oils. Those definitions apply today, although a particularly rich (and pricey) perfume can contain up to 42 percent of the precious oils.
The French dominated the perfume industry well into the nineteenth century—and beyond.
It was François Coty, a Corsican whose real surname was Spoturno, who, watching U.S. infantrymen send home vast quantities of perfume following World War I, grasped the full possibilities of the American obsession with French fragrances. By selling name-brand products in smaller quantities and at cheaper prices, Coty appealed to new sectors of society and ushered in the first form of mass production in the perfume industry. Also capitalizing on the American desire for French perfumes, Jeanne Lanvin took her creation Mon Péché, which had failed in Paris, and in 1925 turned it into an immediate and resounding success in America under the name My Sin.
Shalimar. The same year that My Sin debuted, two French brothers, Pierre and Jacques Guerlain, created Shalimar, Sanskrit for “temple of love.” The brothers were inspired when a rajah, visiting Paris, enthralled them with a tale of courtship in the Shalimar gardens of Lahore, Pakistan. In the gardens, replete with fragrant blossoming trees imported from around the world, Shah Jahan, a seventeenth-century emperor of India, courted and married Mumtaz Mahal. After her death, he built the magnificent Taj Mahal mausoleum as her memorial.
Chanel No. 5. The superstitious fashion designer Gabrielle (“Coco”) Chanel associated good luck with the number five. In 1921, she introduced to the world her new fragrance, announcing it on the fifth day of the fifth month and labeling the perfume No. 5.
At that time, the perfume was unlike others on the market in that it did not have the distinctive floral “feminine” scent then popular. That, in fact, played a large measure in its appeal to the “boyish” flappers of the Jazz Age. The revolutionary No. 5, with its appropriate timing and scent, turned out to be a lucky number all around for its creator, earning her fifteen million dollars. Americans took immediately to the perfume, and Marilyn Monroe once replied to a journalist who asked her what she wore to bed: “Chanel No. 5.”
Avon: 1886, New York
The modern cosmetics industry in America was not dominated entirely by foreigners. It is true that Chanel, Coty, and Guerlain hailed from France; Helena Rubinstein from Krakow, Poland; Elizabeth Arden (born Florence Nightingale Graham) from Canada; Max Factor from Russia. But Avon was strictly an American phenomenon, and a unique and pioneering one at that.
The first Avon Lady was actually a man, young door-to-door salesman David McConnell from upstate New York. He launched Avon Calling in 1886, offering women cosmetics in the comfort and privacy of their own homes. But perfumes and hand creams were not McConnell’s initial merchandise.
At the age of sixteen, McConnell had begun selling books door-to-door. When his fare was not well received, he resorted to the then-popular advertising gimmick of offering a free introductory gift in exchange for being allowed to make a sales pitch. A complimentary vial of perfume, he thought, would be an ideal entrée, and he blended the original scent himself, with the aid of a local pharmacist.
Fate stepped in. Just as a later door-to-door salesman would discover that his free soapy steel-wool pads (see “S.O.S. Pads,” page 102) were preferred by housewives over his actual pot-and-pan wares, McConnell learned that women adored his perfume and remained indifferent to his books. Thus, he abandoned books and organized the New York–based California Perfume Company, named in honor of a friend and investor from California. The door-to-door approach seemed tailor-made for cosmetics, particularly in rural areas, where homemakers, in horse-and-buggy days, had poor access to better stores.
The first female Avon Lady was Mrs. P. F. E. Albee, a widow from Winchester, New Hampshire. She began her chime-ringing career selling the company’s popular Little Dot Perfume Set, and she recruited other women, training them as door-to-door salespeople. The company was rechristened Avon for the simple reason that the New York State town in which David McConnell lived, Suffern on the Ramapo, reminded him of Shakespeare’s Stratford-on-Avon.
By 1897, McConnell had twelve women employees selling a line of eighteen fragrances. And the numbers kept growing and growing. Today, despite the scores of expensive, prestigious American and foreign brand-name cosmetics, Avon ranks first in sales nationwide, with more than half a million Avon Ladies ringing doorbells from coast to coast.
Through the Medicine Chest
Medication: 3500 B.C., Sumer
Because early man viewed illness as divine punishment and healing as purification, medicine and religion were inextricably linked for centuries. You became ill because you lost favor with a god, and you regained that god’s grace, and your health, by a physical and spiritual purging. This notion is apparent in the origin of our word “pharmacy,” which comes from the Greek pharmakon, meaning “purification through purging.”
By 3500 B.C., the Sumerians in the Tigris-Euphrates valley had developed virtually all of our modern methods of administering drugs. They used gargles, inhalations, suppositories, enemas, poultices, snuffs, decoctions, infusions, pills, troches, lotions, ointments, and plasters.
The first drug catalogue, or pharmacopoeia, was written at that time by an unknown Sumerian physician. Preserved in cuneiform script on a single clay tablet are the names of dozens of drugs to treat ailments that still afflict us today. As a gargle, salt dissolved in water; as a general disinfectant for wounds, soured wine; as an astringent, potassium nitrate, obtained from the nitrogenous waste products in urine. And to relieve a fever, pulverized willow bark, nature’s equivalent of aspirin.
The Egyptians added to the ancient medicine chest.
The Ebers Papyrus, a scroll dating from 1900 B.C. and named after the German Egyptologist Georg Ebers, reveals the trial-and-error know-how acquired by early Egyptian physicians. Constipation was treated with a laxative of ground senna pods and castor oil; for indigestion, a chew of peppermint leaves and carbonates (known today as antacids); and to numb the pain of tooth extraction, Egyptian doctors temporarily stupefied a patient with ethyl alcohol.
The scroll also provides a rare glimpse into the hierarchy of ancient drug preparation. The “chief of the preparers of drugs” was the equivalent of a head pharmacist, who supervised the “collectors of drugs,” field workers who gathered essential minerals and herbs. The “preparers’ aides” (technicians) dried and pulverized ingredients, which were blended according to certain formulas by the “preparers.” And the “conservator of drugs” oversaw the storehouse where local and imported mineral, herb, and animal-organ ingredients were kept.
By the seventh century B.C., the Greeks had adopted a sophisticated mind-body view of medicine. They believed that a physician must pursue the diagnosis and treatment of the physical (body) causes of disease within a scientific framework, as well as cure the supernatural (mind) components involved. Thus, the early Greek physician emphasized something of a holistic approach to health, even if the suspected “mental” causes of disease were not recognized as stress and depression but interpreted as curses from displeased deities. Apollo, chief god of healing, and Prometheus, a Titan who stole fire from heaven to benefit mankind, ruled over the preparation of all medications.
Modern Drugs. The modern era of pharmacology began in the sixteenth century, ushered in by the first major discoveries in chemistry. The understanding of how chemicals interact to produce certain effects within the body would eventually remove much of the guesswork and magic from medicine.
The same century witnessed another milestone: publication in Germany in 1546 of the first modern pharmacopoeia, listing hundreds of drugs and medicinal chemicals, with explicit directions for preparing them. Drugs that had previously varied widely in concentrations, and even in constituents, were now stringently defined by the text, which spawned versions in Switzerland, Italy, and England.
Drugs had been launched on a scientific course, but centuries would pass before superstition was displaced by scientific fact. One major reason was that physicians, unaware of the existence of disease-causing pathogens such as bacteria and viruses, continued to dream up imaginary causative evils. And though new chemical compounds emerged, their effectiveness in treating disease was still based largely on trial and error. When a new drug worked, no one really knew why, or more challenging still, how.
As we will see in this chapter, many standard, common drugs in the medicine chest developed in this trial-and-error environment. Such is the complexity of disease and human biochemistry that even today, despite enormous strides in medical science, many of the latest sophisticated additions to our medicine chest shelves were accidental finds.
Vaseline: 1879, Brooklyn, New York
In its early days, Vaseline had a wide range of uses and abuses. The translucent jelly was gobbed onto fishermen’s hooks to lure trout. Stage actresses dabbed the glistening ointment down their cheeks to simulate tears. Because Vaseline resists freezing, Arctic explorer Robert Peary took the jelly with him to the North Pole to protect his skin from chapping and his mechanical equipment from rusting. And because the compound does not turn rancid in steamy tropical heat, Amazonian natives cooked with Vaseline, ate it as a spread on bread, and even exchanged jars of the stuff as money.
The reports of myriad uses from all latitudes and longitudes did not surprise Vaseline’s inventor, Robert Augustus Chesebrough, a Brooklyn chemist, who lived to the age of ninety-six and attributed his longevity to Vaseline. He himself ate a spoonful of it every day.
In 1859, Robert Chesebrough was searching not for a new pharmaceutical unguent but for a way to stave off bankruptcy. At a time when kerosene was a major source of home and industrial power, his Brooklyn-based kerosene business was threatened by the prospect of cheaper petroleum fuel from an oil boom in Pennsylvania.
The young Brooklyn chemist journeyed to Titusville, Pennsylvania, heart of the oil strike, with the intention of entering the petroleum business. His chemist’s curiosity, though, was piqued by a pasty paraffin-like residue that stuck annoyingly to drilling rods, gumming them into inactivity. The field workers Chesebrough questioned had several unprintable names for the stuff that clogged their pumps, but no one had a hint as to its chemical nature. Workers had discovered one practical use for it: rubbed on a wound or burn, the paste accelerated healing.
Chesebrough returned to Brooklyn not with an oil partnership but with jars of the mysterious petroleum waste product. Months of experimentation followed, in which he attempted to extract and purify the paste’s essential ingredient.
That compound turned out to be a clear, smooth substance he called “petroleum jelly.” Chesebrough became his own guinea pig. To test the jelly’s healing properties, he inflicted various minor, and some major, cuts, scratches, and burns on his hands and arms. Covered with the paste extract, they seemed to heal quickly and without infection. By 1870, Chesebrough was manufacturing the world’s first Vaseline Petroleum Jelly.
There are two views on the origin of the name Vaseline, and Chesebrough seems to have discouraged neither. In the late 1800s, his friends maintained that he dreamed up the name, during the early days of purifying the substance, from the practice of using his wife’s flower vases as laboratory beakers. To “vase” he tagged on a popular medical suffix of that day, “line.” However, members of the production company he formed claimed that Chesebrough more scientifically compounded the word from the German wasser, “water,” and the Greek elaion, “olive oil.”
As he had been the product’s chief guinea pig, Robert Chesebrough also became its staunchest promoter. In a horse and buggy, he traveled the roads of upper New York State, dispensing free jars of Vaseline to anyone who promised to try it on a cut or burn. The public’s response was so favorable that within a half year Chesebrough employed twelve horse-and-buggy salesmen, offering the jelly for a penny an ounce.
New Englanders, though, were dabbing Vaseline on more than cuts and burns. Housewives claimed that the jelly removed stains and rings from wood furniture, and that it glisteningly polished and protected wood surfaces. They also reported that it gave a second life to dried leather goods. Farmers discovered that a liberal coating of Vaseline prevented outdoor machinery from rusting. Professional painters found that a thin spread of the jelly prevented paint splatters from sticking to floors. But the product was most popular with druggists, who used the pure, clean ointment as a base for their own brands of salves, creams, and cosmetics.
By the turn of the century, Vaseline was a staple of home medicine chests. Robert Chesebrough had transformed a gummy, irksome waste product into a million-dollar industry. In 1912, when a disastrous fire swept through the headquarters of a large New York insurance company, Chesebrough was proud to learn that the burn victims were treated with Vaseline. It became a hospital standard. And the then-burgeoning automobile industry discovered that a coating of the inert jelly applied to the terminals of a car battery prevented corrosion. It became an industry standard. And a sports standard too. Long-distance swimmers smeared it on their bodies, skiers coated their faces, and baseball players rubbed it into their gloves to soften the leather.
Throughout all these years of diverse application, Vaseline’s inventor never missed his daily spoonful of the jelly. In his late fifties, when stricken with pleurisy, Chesebrough instructed his private nurse to give him regular whole-body Vaseline rubdowns. He liked to believe that, as he joked, he “slipped from death’s grip” to live another forty years, dying in 1933.
Listerine: 1880, St. Louis, Missouri
Developed by a Missouri physician, Joseph Lawrence, Listerine was named in honor of Sir Joseph Lister, the nineteenth-century British surgeon who pioneered sanitary operating room procedures. Shortly after its debut in 1880, the product became one of America’s most successful and trusted commercial mouthwashes and gargles.
In the 1860s, when the science of bacteriology was still in its infancy, Lister campaigned against the appalling medical hygiene of surgeons. They operated with bare hands and in street clothes, wearing shoes that had trekked over public roads and hospital corridors. They permitted spectators to gather around an operating table and observe surgery in progress. And as surgical dressings, they used pads of pressed sawdust, a waste product from mill floors. Although surgical instruments were washed in soapy water, they were not heat-sterilized or chemically disinfected. In many hospitals, postoperative mortality was as high as 90 percent.
Before Lister pioneered sanitary operating conditions, postoperative mortality in many hospitals ran as high as 90 percent.
The majority of doctors, in England and America, scoffed at Lister’s plea for “antiseptic surgery.” When he addressed the Philadelphia Medical Congress in 1876, his speech received a lukewarm reception. But Lister’s views on germs impressed Dr. Joseph Lawrence. In his St. Louis laboratory, Lawrence developed an antibacterial liquid, which was manufactured locally by the Lambert Pharmacal Company (later to become the drug giant Warner-Lambert).
In 1880, to give the product an appropriately antiseptic image, the company decided to use the name of Sir Joseph Lister, then the focus of controversy on two continents. Surgeons, employing many of Lister’s hygienic ideas, were beginning to report fewer postoperative infections and complications, as well as higher survival rates. “Listerism” was being hotly debated in medical journals and the popular press. Listerine arrived on the scene at the right time and bearing the best possible name.
The mouthwash and gargle was alleged to “Kill Germs By Millions On Contact.” And Americans, by millions, bought the product. Early advertisements pictured a bachelor, Herb, “an awfully nice fellow, with some money,” who also “plays a swell game of bridge.” But Herb’s problem, according to the copy, was that “he’s that way.”
Halitosis, not homosexuality, was Herb’s problem. But in the early years of this century, it was equally unspeakable. Americans took up the Listerine habit to sweeten their breath, to such an extent that as late as the mid-1970s, with scores of competing breath-freshening sprays, mints, gargles, and gums on the market, Listerine still accounted for the preponderance of breath-freshener sales in the United States.
Then Joseph Lawrence’s early belief in the potency of his product was medically challenged. A 1970s court order compelled Warner-Lambert to spend ten million dollars in advertising a disclaimer that Listerine could not prevent a cold or a sore throat, or lessen their severity.
Band-Aid: 1921, New Brunswick, New Jersey
At the 1876 Philadelphia Medical Congress, Dr. Joseph Lawrence was not the only American health worker impressed with Sir Joseph Lister’s germ-disease theory. A thirty-one-year-old pharmacist from Brooklyn, Robert Johnson, had his life changed by the eminent British surgeon’s lecture.
Lister deplored the use of pressed sawdust surgical dressings made from wood-mill wastes. He himself disinfected every bandage he used in surgery by soaking it in an aqueous solution of carbolic acid.
Johnson, a partner in the Brooklyn pharmaceutical supply firm of Seabury & Johnson, was acquainted with the sawdust dressings, as well as with an array of other nonsterile paraphernalia used in American hospitals. He persuaded his two brothers—James, a civil engineer, and Edward, an attorney—to join him in his attempt to develop and market a dry, prepackaged, antiseptic surgical dressing along the lines that Lister had theoretically outlined at the congress.
By the mid-1880s, the brothers had formed their own company, Johnson & Johnson, and produced a large dry cotton-and-gauze dressing. Individually sealed in germ-resistant packages, the bandages could be shipped to hospitals in remote areas and to doctors on military battlefields, with sterility guaranteed.
The Johnson brothers prospered in the health care field. In 1893, they introduced American mothers to the fresh scent of Johnson’s Baby Powder, including it as a giveaway item in the multipurpose Maternity Packets sold to midwives.
On the horizon, though, was the sterile product that soon would appear in home medicine chests worldwide.
It was in 1920 that James Johnson, the firm’s president, heard of a small homemade bandage created by one of his employees, Earle Dickson. A cotton buyer in the company’s purchasing department, Dickson had recently married a young woman who was accident-prone, frequently cutting or burning herself in the kitchen. The injuries were too small and minor to benefit from the company’s large surgical dressings. As Earle Dickson later wrote of the Band-Aid: “I was determined to devise some manner of bandage that would stay in place, be easily applied and still retain its sterility.”
To treat each of his wife’s injuries, Dickson took a small wad of the company’s sterile cotton and gauze, placing it at the center of an adhesive strip. Tiring of making individual bandages as they were needed, Dickson conceived of producing them in quantity, and of using a crinoline fabric to temporarily cover the bandages’ sticky portions. When James Johnson watched his employee strip off two pieces of crinoline and easily affix the bandage to his own finger, Johnson knew the firm had a new first-aid product.
The name Band-Aid, which would eventually become a generic term for small dressings, was suggested by a superintendent at the company’s New Brunswick plant, W. Johnson Kenyon. And those first adhesive bandages were made by hand, under sterile conditions, in assembly line fashion.
Sales were initially poor. One of the company’s strongest promoters of the Band-Aid Brand Adhesive Bandage was Dr. Frederick Kilmer, head of the company’s research department (and father of the poet Joyce Kilmer). Kilmer had been responsible for the development and marketing of Johnson’s Baby Powder in the 1890s, and in the 1920s he joined the campaign to promote Band-Aids. He published medical and popular articles on the product’s ability to prevent infection and accelerate healing of minor cuts and burns. One of the company’s cleverest advertising ploys was to distribute an unlimited number of free Band-Aids to Boy Scout troops across the country, as well as to local butchers.
The popularity of Band-Aids steadily increased. By 1924, they were being machine-produced, measuring three inches long by three quarters of an inch wide. Four years later, Americans could buy Band-Aids with aeration holes in the gauze pad to increase airflow and accelerate healing.
Band-Aids’ inventor, Earle Dickson, went on to enjoy a long and productive career with Johnson & Johnson, becoming a vice president and a member of the board of directors. As for his invention, the company estimates that since the product was introduced in 1921, people around the world have bandaged themselves with more than one hundred billion Band-Aids.
Witch Hazel: Post-7th Century, England
A mild alcoholic astringent applied to cleanse cuts, witch hazel was made from the leaves and bark of the witch hazel plant, Hamamelis. The shrub, whose pods explode when ripe, was used both practically and superstitiously in Anglo-Saxon times.
Because the plant’s yellow flowers appear in late autumn, after the branches are bare of leaves and the bush is seemingly dead, the inhabitants of the British Isles ascribed supernatural powers to the witch hazel tree. They believed, for instance, that a witch hazel twig, in a high priest’s skilled hands, could single out a criminal in a crowd.
A more practical application of a pliant witch hazel twig was as a divining rod to locate underground water in order to sink wells. In fact, the word “witch” in the plant’s name comes from the Anglo-Saxon wice, designating a tree with pliant branches.
The Anglo-Saxons’ interest in the witch hazel plant led to the assumption that they developed the first witch hazel preparation. What is known with greater confidence is that American Indian tribes taught the Pilgrims how to brew witch hazel bark as a lotion for soothing aches, bruises, and abrasions.
For the next two hundred years, families prepared their own supplies of the lotion. Its uses in America were numerous: as an antiseptic, a facial cleanser and astringent, a topical painkiller, a deodorant, a base for cosmetic lotions, and as a cooling liquid (similar to today’s splashes) in hot weather, for the rapid evaporation of witch hazel’s alcohol stimulates the cooling effect of sweating.
In 1866, a New England clergyman, Thomas Newton Dickinson, realized that a profitable market existed for a commercial preparation. He located his distilling plant in Essex, Connecticut, on the banks of the Connecticut River, adjacent to fields of high-quality American witch hazel shrubs, Hamamelis virginiana.
In the 1860s, Dickinson’s Witch Hazel was sold by the keg to pharmacists, who dispensed it in bottles to customers. The keg bore the now-familiar “bull’s-eye” trademark, and Dickinson’s formula for witch hazel proved so successful that it is basically unchanged to this day. It is one product that has been in medicine chests for at least three hundred years, if not longer.
Vick’s VapoRub: 1905, Selma, North Carolina
Before the turn of the century, the most popular treatments for chest and head colds were poultices and plasters. They were not all that different from the mint and mustard formulations used in the Near East five thousand years ago. Unfortunately, both the ancient and the modern preparations, which were rubbed on the chest and forehead, frequently resulted in rashes or blisters, for their active ingredients, which produced a tingling sensation of heat, often were skin irritants.
There was another popular cold remedy, but one potentially more dangerous. Physicians recommended, with caution, that children suffering from the croup or a cold inhale hot herbal vapors. These temporarily opened the nasal passages while a child’s head was over the steam, but many a child (and adult) received facial burns from overly hot water. Before gas and electric stoves would provide a measured and steady source of energy to boil water, coal or wood fires could abruptly vary in intensity, producing a sudden geyser of scalding steam.
Many a druggist sought to produce a skin-tingling, sinus-opening ointment that combined the best aspects of plasters and vapors with none of their drawbacks. For Lunsford Richardson, a druggist from Selma, North Carolina, two events occurred that led him to the perfect product. The first was the popularity of petroleum jelly as a safe, neutral base for salves and cosmetics. The second was the introduction in America of menthol, a waxy, crystalline alcohol extract from oil of peppermint, which released a pungent vapor.
Menthol had first caught the public’s attention in 1898 in the form of a sore-muscle balm named Ben-Gay. Developed by, and named after, a French pharmacist, Jules Bengué, the product combined menthol’s heat-producing effects with an analgesic pain reliever, salicylate of methyl, in a base of lanolin. Touted in Europe and America as a remedy for gout, rheumatoid arthritis, and neuralgia, Bengué’s balm was also reported to clear the sinuses during a head cold.
Richardson listened to testimonials for Ben-Gay from his own customers. In 1905, he blended menthol with other ingredients from the drugstore shelf into a base of petroleum jelly, producing Richardson’s Croup and Pneumonia Cure Salve, a forehead and chest rub. Vaporized by body heat, the chemicals opened blocked air passages at the same time they stimulated blood circulation through skin contact. That year, Richardson could not work fast enough to fill orders from cold sufferers and other druggists.
Searching for a catchier name for his already popular product, Richardson turned to his brother-in-law, a physician named Joshua Vick. It was in Vick’s drugstore that Richardson had begun his career in pharmacology, and it was in Vick’s backroom laboratory that Richardson concocted his vapor rub. He named the product in honor of his relative and mentor.
Richardson advertised in newspapers, with coupons that could be redeemed for a trial jar of Vick’s VapoRub. And he persuaded the U.S. Post Office to allow him to institute a new mailing practice, one that has since kept home mailboxes full, if not overflowing: Advertisements for Vick’s VapoRub were addressed merely to “Boxholder,” the equivalent of today’s “Occupant.” Before then, all mail had to bear the receiver’s name.
Sales were strong. Then a tragic twist of fate caused them to skyrocket.
In the spring of 1918, a flu epidemic erupted in U.S. military bases. It was carried by troops to France, then to Spain, where the virus became more virulent, earning it the name Spanish Flu. It spread to China. By the fall of that year, an even deadlier strain broke out in Russia.
The death toll was enormous. The flu killed one half of one percent of the entire population of the United States and England, and 60 percent of the Eskimos in Nome, Alaska. In just six weeks, 3.1 percent of the U.S. recruits at Camp Sherman died. Ocean liners docked with up to 7 percent fewer passengers than had embarked. The epidemic was characterized aptly by what fourteenth-century Italian author Giovanni Boccaccio said of an earlier scourge: “How many valiant men, how many fair ladies, breakfasted with their kinsfolk and that same night supped with their ancestors in the other world.”
World War I had taken four years to claim the lives of nine million military personnel. The 1918 pandemic, in one year, killed twenty-five million people worldwide, making it history’s worst plague.
Not surprisingly, the influenza drove up the sales of all kinds of cold medications. Aspirin, cough syrups and drops, and decongestants were, of course, ineffective against the flu bug, which mysteriously vanished in 1919, perhaps having mutated and passed into swine. But these drug sales, as well as those of Vick’s VapoRub, set new industry records. Vick’s, in 1918, broke the million-dollar mark.
Deodorants: 3500 B.C., Near East
The problem of body odor is ancient, as are man’s attempts to solve it. From the dawn of written history, 5,500 years ago in Sumer, every major civilization has left a record of its efforts to produce deodorants.
The early Egyptians recommended following a scented bath with an underarm application of perfumed oils. They developed special citrus and cinnamon preparations that would not turn rancid in the semitropical climate and thus themselves become offensive. Through experimentation, the Egyptians discovered that the removal of underarm hair significantly diminished body odor. Centuries later, scientists would understand why: hair greatly increases the surface area on which bacteria, odorless themselves, can live, populate, die, and decompose to offend.
Both the Greeks and the Romans derived their perfumed deodorants from Egyptian formulas. In fact, throughout most of recorded history, the only effective deodorant—aside from regular washing—was perfume. And it merely masked one scent with another. For a time.
The link between sweat and odor was to be more clearly understood once the sweat glands were discovered in the nineteenth century.
Scientists learned that human perspiration is produced by two kinds of sweat glands, the apocrine and the eccrine. The first structures exist over the entire body’s surface at birth, giving babies their distinctive scent. Most of these glands gradually disappear, except for those concentrated in the armpit, around the anus, and circling the breast nipples. The glands are relatively inactive during childhood, but begin to function in puberty, switched on by the sex hormones. In old age, they may wither and atrophy.
Most of the body’s sweat, though, is produced by the eccrine glands, abundant over the body’s surface. Eccrine sweat is copious—and cooling. In extreme heat, and with high water intake, human subjects have been measured to secrete up to three gallons of sweat in twenty-four hours.
The eccrine glands also function in response to nervousness, fever, stress, and the eating of spicy foods. And sweat caused by emotional stress is particularly profuse in the armpits, on the palms of the hands, and on the soles of the feet. But most perspiration evaporates or is absorbed effectively by clothing.
From Egyptian scented oils to Mum, the first modern antiperspirant, the search for an effective deodorant spanned five millennia.
It is because the armpits remain warm and moist that they create a hospitable environment for bacteria. Convincing scientific evidence shows that armpit odor arises mainly, though not exclusively, from bacteria that thrive in secretions of the apocrine glands. One study collected fresh human apocrine sweat and showed that it was odorless. Kept for six hours at room temperature (with bacteria multiplying and dying), it acquired its characteristic odor. When sweat from the same source was refrigerated, no odor developed.
Thus, ancient to modern perfumed deodorizers never tackled the source of the problem: persistent underarm moisture. Deprived of moisture, by an “antiperspirant,” bacteria cannot multiply.
Antiperspirants: 1888, United States
The first product marketed specifically to stem underarm moisture, and thus odor, was Mum, introduced in 1888. The formulation used a compound of zinc in a cream base. No scientist then, and none now, really understands how certain chemicals such as zinc thwart the production of sweat. Nonetheless, Mum worked, and its popularity in America convinced drug companies that a vast market existed for antiperspirants.
In 1902, Everdry debuted, followed in 1908 by Hush. These were the first antiperspirants to use another drying compound, aluminum chloride, which is found in most modern formulations.
For many years, Americans remained so sensitive to the issue of antiperspirants that they asked for them in drugstores with the same hushed confidentiality with which they requested prophylactics. The first antiperspirant to boldly speak its name with national magazine advertising, in 1914, bore the echoic name Odo-Ro-No. It claimed to remedy excessive perspiration, keeping women “clean and dainty.” Deodorant advertisements that followed also emphasized dryness, though none mentioned what dryness actually prevented.
Then, in 1919, the pioneering Odo-Ro-No again led the way. For the first time, a deodorant ad asserted that “B.O.” existed, and that it was socially shocking and offensive.
Amazingly, during these early days, antiperspirants were advertised exclusively to and used mainly by women, who considered them as essential as soap. It was not until the 1930s that companies began to target the male market.
After nearly a hundred years of studying the action of antiperspirants, how do scientists suspect they work?
One popular theory holds that “drying” elements such as aluminum and zinc penetrate a short distance into the sweat ducts. There they act as corks, blocking the release of water. Pressure mounts in the ducts, and through a biofeedback mechanism, the pressure itself stops further sweating.
Unfortunately, antiperspirants act only on the eccrine glands, not on the apocrine glands, the principal culprits in causing body odor. This is why no antiperspirant is effective for extended periods of time. The best routine for combating underarm odor combines the timeless custom of washing, with the ancient Egyptian practice of shaving underarm hair, and the application of a modern antiperspirant: something old, something borrowed, and something new.
Antacids: 3500 B.C., Sumer
Considering his largely uncooked diet, early man may have suffered more severe indigestion than people do today. We know that from the time people began to record their thoughts on clay tablets, they consulted physicians for comfort from stomach upset. The earliest remedies, found among the Sumerians, included milk, peppermint leaves, and carbonates.
What Sumerian physicians had discovered by trial and error was that alkaline substances neutralize the stomach’s natural acid. Today’s antacids work by offering the positively charged ions in the stomach’s hydrochloric acid negative, neutralizing ions. This, in turn, inhibits the release of pepsin, another potent component of the digestive juice, which can be highly irritating to the stomach’s lining.
The Sumerians’ most effective antacid was baking soda, or sodium bicarbonate (also known as bicarbonate of soda). For centuries, it served as a major ingredient in a host of homemade stomach remedies. The only thing that has somewhat diminished its use in commercial antacids today is the link between sodium intake and hypertension.
Pure baking soda’s first significant brand-name competitor appeared in 1873: Phillips’ Milk of Magnesia. Created by a former candlemaker turned chemist, Charles Phillips of Glenbrook, Connecticut, it combined a powdered antacid with the laxative magnesia. The product, taken in small doses, won immediate acceptance as a soothing remedy for stomach discomfort.
Alka-Seltzer: 1931, United States
The Alka-Seltzer story began in the winter of 1928, when Hub Beardsley, president of the Dr. Miles Laboratories, visited the offices of a local newspaper in Elkhart, Indiana. There was a severe flu epidemic that year. Many of Beardsley’s own employees were out sick. But Beardsley learned that no one on the newspaper’s staff had missed a day of work as the result of influenza. The paper’s editor explained that at the first hint of a cold symptom, he dosed staff members with a combination of aspirin and baking soda.
Beardsley was impressed. Both medications were ancient, but their combination was novel. Since his laboratories specialized in home-medicine-chest remedies, he decided to test the formula. He asked his chief chemist, Maurice Treneer, to devise an attractive new tablet. Of course, what Treneer created—the pill that went “plop, plop, fizz, fizz” —was more novel than the combination of aspirin and baking soda, and the gimmick was instrumental in popularizing the product.
Beardsley took a supply of the experimental tablets with him on a Mediterranean cruise. His wife reported that they cured her headaches. Beardsley himself found they soothed the ravages of excessive shipboard dining and drinking. And fellow passengers who tried the tablets claimed they cured seasickness.
The fizzing tablet, which prompted a hung-over W. C. Fields to joke, “Can’t anyone do something about that racket!” bowed in 1931, during the Depression. Radio promotion was heavy. But Alka-Seltzer’s sales really skyrocketed in 1933, when Americans emerged parched from the dry spell of Prohibition.
Ironically, one of Alka-Seltzer’s original two ingredients, aspirin, is a strong stomach irritant for many people. This awareness caused Miles Laboratories to introduce an aspirin-free tablet called Alka-2 Antacid in the mid-1970s.
Today a wide variety of non-sodium, non-aspirin antacids neutralize stomach acid. A glance at the medicine chest shelf will reveal that the modern components are aluminum, calcium, bismuth, magnesium, and phosphates, and the one ancient ingredient is dried milk solids.
Cough Drops: 1000 B.C., Egypt
A cough’s main purpose is to clear the air passage of inhaled foreign matter, chemical irritants, or, during a head cold, excessive bodily secretions. The coughing reflex is part voluntary, part involuntary, and drugs that reduce the frequency and intensity of coughs are called cough suppressors or, technically, antitussives.
Many of these modern suppressor chemicals—like the narcotic codeine—act in the brain to depress the activity of its cough center, reducing the urge to cough. Another group, of older suppressors, acts to soothe and relax the coughing muscles in the throat. This is basically the action of the oldest known cough drops, produced for Egyptian physicians by confectioners three thousand years ago.
It was in Egypt’s New Kingdom, during the Twentieth Dynasty, that confectioners produced the first hard candies. Lacking sugar—which would not arrive in the region for many centuries—Egyptian candymakers began with honey, altering its flavor with herbs, spices, and citrus fruits. Sucking on the candies was found to relieve coughing. The Egyptian ingredients were not all that different from those found in today’s sugary lozenges; nor was the principle by which they operated: moistening an irritated dry throat.
The throat-soothing candy underwent numerous minor variations in different cultures. Ingredients became the distinguishing factor. Elm bark, eucalyptus oil, peppermint oil, and horehound are but a few of the ancient additives. But not until the nineteenth century did physicians develop drugs that addressed the source of coughing: the brain. And these first compounds that depressed the brain’s cough reflex were opiates.
Morphine, an alkaloid of opium, which is the dried latex of the unripe poppy seed pod, was identified in Germany in 1805. Toward the close of the century, in 1898, chemists first produced heroin (diacetylmorphine), a simple morphine derivative. Both agents became popular and, for a time, easily available cough suppressants. A 1903 advertisement touted “Glyco-Heroin” as medical science’s latest “Respiratory Sedative.”
But doctors’ increasing awareness of the dangers of dependency caused them to prescribe the drugs less and less. Today a weaker morphine derivative, codeine (methylmorphine), continues to be used in suppressing serious coughs. Since high doses of morphine compounds cause death by arresting respiration, it is not hard to understand how they suppress coughing.
Morphine compounds opened up an entirely new area of cough research. And pharmacologists have successfully altered opiate molecules to produce synthetic compounds that suppress a cough with less risk of inducing a drug euphoria or dependency.
Turn of the century remedies: Throat atomizer (top), nasal model; various lozenges for coughs, hoarseness, halitosis, and constipation; syringe, for when tablets fail.
But these sophisticated remedies are reserved for treating serious, life-threatening coughs and are available only by prescription. Millions of cold sufferers every winter rely on the ancient remedy of the cough drop. In America, two of the earliest commercial products, still popular today, appeared during the heyday of prescribing opiate suppressors.
Smith Brothers. Aside from Abraham Lincoln, the two hirsute brothers who grace the box of Smith Brothers Cough Drops are reputed to be the most reproduced bearded faces in America. The men did in fact exist, and they were brothers. Andrew (on the right of the box, with the longer beard) was a good-natured, free-spending bachelor; William was a philanthropist and an ardent prohibitionist who forbade ginger ale in his home because of its suggestive alcoholic name.
In 1847, their father, James Smith, a candymaker, moved the family from St. Armand, Quebec, to Poughkeepsie, New York, and opened a restaurant. It was a bitter winter, and coughs and colds were commonplace. One day, a restaurant customer in need of cash offered James Smith the formula for what he claimed was a highly effective cough remedy. Smith paid five dollars for the recipe, and at home, employing his candymaking skills, he produced a sweet hard medicinal candy.
As Smith’s family and friends caught colds, he dispensed his cough lozenges. By the end of the winter, word of the new remedy had spread to towns along the wind-swept Hudson River. In 1852, a Poughkeepsie newspaper carried the Smiths’ first advertisement: “All afflicted with hoarseness, coughs, or colds should test [the drops’] virtues, which can be done without the least risk.”
Success spawned a wave of imitators: the “Schmitt Brothers”; the “Smythe Sisters”; and even another “Smith Brothers,” in violation of the family’s trademark. In 1866, brothers William and Andrew, realizing the family needed a distinctive, easily recognizable trademark, decided to use their own stern visages—not on the now-familiar box but on the large glass bowls kept on drugstore counters, from which the drops were dispensed. At that time, most candies were sold from counter jars.
In 1872, the Smith brothers designed the box that bore—and bears—their pictures. The first factory-filled candy package ever developed in America, it launched a trend in merchandising candies and cough drops. A confectioner from Reading, Pennsylvania, William Luden, improved on that packaging a few years later when he introduced his own amber-colored, menthol-flavored Luden’s Cough Drops. Luden’s innovation was to line the box with waxed paper to preserve the lozenges’ freshness and flavor.
As cold sufferers today open the medicine chest for Tylenol, NyQuil, or Contac, in the 1880s millions of Americans with sore throats and coughs reached for drops by the Smith brothers or Luden. William and Andrew Smith acquired the lifelong nicknames “Trade” and “Mark,” for on the cough drop package “trademark” was divided, each half appearing under a brother’s picture. The Smiths lived to see production of their cough drops soar from five pounds to five tons a day.
Suntan Lotion: 1940s, United States
Suntan and sunscreen lotions are modern inventions. The suntanning industry did not really begin until World War II, when the government needed a skin cream to protect GIs stationed in the Pacific from severe sunburns. And, too, the practice of basking in the sun until the body is a golden bronze color is largely a modern phenomenon.
Throughout history, people of many cultures took great pains to avoid skin darkening from sun exposure. Opaque creams and ointments, similar to modern zinc oxide, were used in many Western societies, as was the sun-shielding parasol. Only common field workers acquired suntans; white skin was a sign of high station.
In America, two factors contributed to the birth of tanning. Until the 1920s, most people, living inland, did not have access to beaches. It was only when railroads began carrying Americans in large numbers to coastal resorts that ocean bathing became a popular pastime. In those days, bathing wear covered so much flesh that suntan preparations would have been pointless. (See “Bathing Suit,” page 321.) Throughout the ’30s, as bathing suits began to reveal more and more skin, it became fashionable to bronze that skin, which, in turn, introduced the real risk of burning.
At first, manufacturers did not fully appreciate the potential market for sunning products, especially for sunscreens. The prevailing attitude was that a bather, after acquiring sufficient sun exposure, would move under an umbrella or cover up with clothing. But American soldiers, fighting in the scorching sun of the Philippines, working on aircraft carrier decks, or stranded on a raft in the Pacific, could not duck into the shade. Thus, in the early 1940s, the government began to experiment with sun-protecting agents.
One of the most effective early agents turned out to be red petrolatum. It is an inert petroleum by-product, the residue that remains after gasoline and home heating oil are extracted from crude oil. Its natural red color, caused by an intrinsic pigment, is what blocks the sun’s burning ultraviolet rays. The Army Air Corps issued red petrolatum to wartime fliers in case they should be downed in the tropics.
One physician who assisted the military in developing the sunscreen was Dr. Benjamin Green. Green believed there was a vast, untapped commercial market for sunning products. After the war, he parlayed the sunscreen technology he had helped develop into a creamy, pure-white suntan lotion scented with the essence of jasmine. The product enabled the user to achieve a copper-colored skin tone, which to Green suggested a name for his line of products. Making its debut on beaches in the 1940s, Coppertone helped to kick off the bronzing of America.
Eye Drops: 3000 B.C., China
Because of the eye’s extreme sensitivity, eye solutions have always been formulated with the greatest care. One of the earliest recorded eye drops, made from an extract of the mahuang plant, was prepared in China five thousand years ago. Today ophthalmologists know that the active ingredient was ephedrine hydrochloride, which is still used to treat minor eye irritations, especially eyes swollen by allergic reactions.
Early physicians were quick to discover that the only acceptable solvent for eye solutions and compounds was boiled and cooled sterile water. And an added pinch of boric acid powder, a mild antibacterial agent, formed the basis of many early remedies for a host of eye infections.
The field of ophthalmology, and the pharmacology of sterile eye solutions, experienced a boom in the mid-1800s. In Germany, Hermann von Helmholtz published a landmark volume, Handbook of Physiological Optics, which debunked many antiquated theories on how the eye functioned. His investigations on eye physiology led him to invent the ophthalmoscope, for examining the eye’s interior, and the ophthalmometer, for measuring the eye’s ability to accommodate to varying distances. By the 1890s, eye care had never been better.
In America at that time, a new addition to the home medicine chest was about to be born. In 1890, Otis Hall, a Spokane, Washington, banker, developed a problem with his vision. He was examining a horse’s broken shoe when the animal’s tail struck him in the right eye, lacerating the cornea. In a matter of days, a painful ulcer developed, and Hall sought treatment from two ophthalmologists, doctors James and George McFatrich, brothers.
Part of Otis Hall’s therapy involved regular use of an eye solution, containing muriate of berberine, formulated by the brothers. His recovery was so rapid and complete that he felt other people suffering eye ailments should be able to benefit from the preparation. Hall and the McFatriches formed a company to mass-produce one of the first safe and effective commercial eye drop solutions. They brand-named their muriate of berberine by combining the first and last syllables of the chemical name: Murine.
Since then, numerous eye products have entered the medicine chest to combat “tired eyes,” “dryness,” and “redness.” They all contain buffering agents to keep them close to the natural acidity and salinity of human tears. Indeed, some over-the-counter contact lens solutions are labeled “artificial tears.” The saltiness of tears was apparent to even early physicians, who realized that the human eye required, and benefited from, low concentrations of salt. Ophthalmologists like to point out that perhaps the most straightforward evidence for the marine origin of the human species is reflected in this need for the surface of the eye to be continually bathed in salt water.
Dr. Scholl’s Foot Products: 1904, Chicago
It seems fitting that one of America’s premier inventors of corn, callus, and bunion pads began his career as a shoemaker. Even as a teenager on his parents’ Midwestern dairy farm, William Scholl exhibited a fascination with shoes and foot care.
Born in 1882, one of thirteen children, young William spent hours stitching shoes for his large family, employing a sturdy waxed thread of his own design. He demonstrated such skill and ingenuity as the family’s personal cobbler that at age sixteen his parents apprenticed him to a local shoemaker. A year later, he moved to Chicago to work at his trade. It was there, fitting and selling shoes, that William Scholl first realized the extent of the bunions, corns, and fallen arches that plagued his customers. Feet were neglected by their owners, he concluded, and neither physicians nor shoemakers were doing anything about it.
Scholl undertook the task himself.
Employed as a shoe salesman during the day, he worked his way through the Chicago Medical School’s night course. The year he received his medical degree, 1904, the twenty-two-year-old physician patented his first arch support, “Foot-Eazer.” The shoe insert’s popularity would eventually launch an industry in foot care products.
Convinced that a knowledge of proper foot care was essential to selling his support pads, Scholl established a podiatric correspondence course for shoe store clerks. Then he assembled a staff of consultants, who crisscrossed the country delivering medical and public lectures on proper foot maintenance.
Scholl preached that bad feet were common across the country because only one American in fifty walked properly. He recommended walking two miles a day, with “head up, chest out, toes straight forward,” and he advised wearing two pairs of shoes a day, so each pair could dry out. To further promote foot consciousness, he published the physician-oriented The Human Foot: Anatomy, Deformities, and Treatment (1915) and a more general guide, Dictionary of the Foot (1916).
Scholl’s personal credo— “Early to bed, early to rise, work like hell and advertise” —certainly paid off handsomely for him in the long run. But in the early days, his advertising, featuring naked feet, prompted many complaints about the indecency of publicly displaying feet clad only in bunion pads or perched atop arch supports.
Scholl created a national surge in foot consciousness in 1916 by sponsoring the Cinderella Foot Contest. The search for the most perfect female feet in America sent tens of thousands of women to their local shoe stores. Competing feet were scrutinized, measured, and footprinted by a device designed by Scholl. A panel of foot specialists selected Cinderella, and her prize-winning footprint was published in many of the country’s leading newspapers and magazines. As Scholl had hoped, thousands of American women compared their own imperfect feet with the national ideal and rushed out to buy his products. Across the country, in pharmacies, department stores, and five-and-ten-cent stores, the yellow-and-blue Dr. Scholl’s packages became part of the American scene.
William Scholl died in 1968, at age eighty-six. He maintained till the end, as he had throughout his life, that while other people boasted of never forgetting a face, he never forgot a foot.
Laxatives: 2500 B.C., Near East
“Preoccupation with the bowel,” a medical panel recently reported, “seems to be the concern of a significant proportion of our population.” The physicians based their assessment on the number of prescription and over-the-counter laxatives consumed by Americans each year, generating profits of a half billion dollars annually.
But concern for proper bowel function is not new. The history of pharmacology shows that ancient peoples were equally concerned with daily and regular bowel behavior. And early physicians concocted a variety of medications to release what nature would not.
The earliest recorded cathartic, popular throughout Mesopotamia and along the Nile, was the yellowish oil extracted from the castor bean. Castor oil served not only as a laxative, but also as a skin-softening lotion and as a construction lubricant for sliding giant stone blocks over wooden rollers.
By 1500 B.C., the Assyrians’ knowledge of laxatives was extensive. They were familiar with “bulk-forming” laxatives such as bran; “saline” laxatives, which contain sodium and draw water into the bowel; and “stimulant” laxatives, which act on the intestinal wall to promote the peristaltic waves of muscular contraction that result in defecation. These are the three major forms of modern laxative preparations.
Archaeologists believe that there is good reason why people throughout history have displayed somewhat of an obsession with bowel functioning. Prior to 7000 B.C., man was nomadic, a hunter-gatherer, existing primarily on a diet of fibrous roots, grains, and berries. A high-fiber diet. This had been his ancestors’ menu for tens of thousands of years. It was the only diet the human stomach had known, and the only one the stomach and intestines were experienced in handling.
Then man settled down to farming. Living off the meat of his cattle and their milk, he shocked the human bowel with a high-fat, lower-fiber diet. Ever since, people have been troubled by irregular bowel function and sought remedying cathartics. Perhaps only today, with the emphasis on high-fiber foods, is the human bowel beginning to relax.
In the intervening millennia, physicians worked hard to find a variety of laxatives, and to mix them with honey, sugar, and citrus rinds to make them more palatable. One druggist, in 1905, hit upon the idea of combining a laxative with chocolate, and he caught the attention of the American market.
In his native Hungary, Max Kiss was a practicing pharmacist, familiar with a chemical, phenolphthalein, that local wine merchants were adding to their products. The practice was at first thought to be innocuous. But soon the merchants, and the wine-drinking public, discovered that a night’s overindulgence in wine created more than a hangover in the morning.
The chemical additive turned out to be an effective laxative. And when Max Kiss emigrated to New York in 1905, he began combining phenolphthalein with chocolate as a commercial laxative. He initially named the product Bo-Bo, a name inadvisably close to the slang expression for the laxative’s target. Kiss reconsidered and came up with Ex-Lax, his contraction for “Excellent Laxative.”
The chocolate-tasting product was a welcome improvement over such standard cathartics as castor oil. Especially with children. Production of the laxative candy eventually rose to 530 million doses a year, making the preparation an integral part of the early-twentieth-century American medicine chest.
Eyeglasses: 13th Century, Italy
Ancient peoples must have needed eyeglasses to aid their vision at some point in life, but the invention did not appear until the close of the thirteenth century. Until that time, those unfortunate people born with defective eyesight, and the aged, had no hope of being able to read or to conduct work that demanded clear vision.
The inventor of spectacles most likely resided in the Italian town of Pisa during the 1280s. He is believed to have been a glass craftsman. Although his exact identity has never been conclusively established, two men, Alessandro Spina and Salvino Armato, coevals and gaffers—glass blowers—are the most likely candidates for the honor.
The evidence slightly favors Salvino Armato. An optical physicist originally from Florence, the thirty-five-year-old Armato is known to have impaired his vision around 1280 while performing light-refraction experiments. He turned to glassmaking in an effort to improve his sight, and he is thought to have devised thick, curved correcting lenses.
History records two early references to eyeglasses in Armato’s day. In 1289, an Italian writer, Sandro di Popozo, published Treatise on the Conduct of the Family. In it, he states that eyeglasses “have recently been invented for the benefit of poor aged people whose sight has become weak.” Then he makes it clear that he had the good fortune to be an early eyeglass wearer: “I am so debilitated by age that without them I would no longer be able to read or write.” Popozo never mentions the inventor by name.
The second reference was made by an Italian friar, Giordano di Rivalto. He preached a sermon in Florence on a Wednesday morning in February 1306, which was recorded and preserved: “It is not yet twenty years since there was found the art of making eye-glasses, one of the best arts and most necessary that the world has.” The friar then discussed the inventor, but without mentioning his name, concluding only with the remark, “I have seen the man who first invented and created it, and I have talked to him.”
Concave and Convex Lenses. Whoever the inventor of eyeglasses was, the evidence is unequivocal that the innovation caught on quickly. By the time Friar Giordano mentioned spectacles in his sermon, craftsmen in Venice, the center of Europe’s glass industry, were busily turning out the new “disks for the eyes.” The lenses in these early glasses were convex, aiding only farsighted individuals; amazingly, more than a hundred years would pass before concave lenses would be ground to improve vision for the nearsighted.
Eyeglass technology traveled to England. By 1326, spectacles were available for scholars, nobility, and the clergy. Glasses were not ground individually; rather, a person peered through the various lenses stocked in a craftsman’s shop, selecting those that best improved vision. Physicians had not yet endorsed glasses, and there were still no calibrating procedures such as eye charts and eye testing.
In the mid-fourteenth century, Italians began to call glass eye disks “lentils.” This was because of their resemblance in shape to the popular Italian legume the lentil, which is circular, with biconvex surfaces. The Italian for “lentils” is lenticchie, and for more than two hundred years eyeglasses were known as “glass lentils.” Not surprisingly, “lentil” is the origin of our word “lens.”
One early problem with eyeglasses was how to keep them on, for rigid arms looping over the ears were not invented until the eighteenth century. Many people resorted to leather straps tied behind the head; others devised small circles of cord that fitted over each ear; still others simply allowed the spectacles to slide down the nose until they came to rest at the most bulbous embankment.
Leonardo da Vinci, designer of the first contact lens; metal-framed spectacles for reading; the lentil bush, whose small biconvex seeds inspired the word “lens.”
Spectacles with concave lenses to correct for myopia were first made in the fifteenth century. Because they corrected for poor distance vision, in an era when most eyeglasses were used for reading, they were deemed less essential for pursuits of the mind and consequently were rarer and more costly than convex lenses.
Cost, though, was no concern of the recklessly extravagant Cardinal Giovanni de’ Medici, second son of Lorenzo the Magnificent, who in 1513 became Pope Leo X. Though at times the severely nearsighted cardinal was so desperate for money that he pawned palace furniture and silver, he purchased several pairs of concave-lens eyeglasses to improve his marksmanship in hunting game and fowl. Four years after he became pope, he sat for a portrait by Raphael that became the first depiction in art of concave correcting lenses.
Despite the many drawbacks of early eyeglasses, they had a profound effect on people from seamstresses to scholars, extending working life into old age. And with the arrival of the printing press, and the wealth of books and newspapers it spawned, eyeglasses began the transition from one of life’s luxuries to one of its necessities.
Modern Frames and Bifocals. The first “temple” spectacles with rigid sides were manufactured by a London optician, Edward Scarlett, in 1727. They were hailed by one French publication as “lorgnettes that let one breathe,” since the anchoring side arms made breathing and moving about possible without fear of the glasses falling off the nose.
Starting in the 1760s, Benjamin Franklin experimented with designing bifocal lenses, so that on trips he could glance up from reading to enjoy the scenery. But bifocals would not come into common use until the 1820s, freeing people who needed both reading and distance lenses from alternating two pairs of glasses.
Whereas eyeglasses were something of a status symbol in the centuries in which they were rare and costly, by the nineteenth century, when glasses were relatively inexpensive and commonplace, wearing them became decidedly unfashionable. Particularly for women. Glasses were worn in private, and only when absolutely necessary were they used in public.
Today we take for granted that eyeglasses are lightweight, but one of their early drawbacks was their heaviness. Temple spectacles sculpted of bone, real tortoiseshell, or ivory rested so firmly on the ears and the bridge of the nose that corrected vision could be impaired by headaches. And the burden was significantly increased by the pure glass lenses the frames supported. Even temple spectacles of lightweight wire frames contained heavy glass lenses. It was only with the advent of plastic lenses and frames in this century that eyeglasses could be worn throughout the day without periodic removal to rest the ears and nose.
Sunglasses: Pre-15th Century, China
Smoke tinting was the first means of darkening eyeglasses, and the technology was developed in China prior to 1430. These darkened lenses were not vision-corrected, nor were they initially intended to reduce solar glare. They served another purpose.
For centuries, Chinese judges had routinely worn smoke-colored quartz lenses to conceal their eye expressions in court. A judge’s evaluation of evidence as credible or mendacious was to remain secret until a trial’s conclusion. Smoke-tinted lenses came to serve also as sunglasses, but that was never their primary function. And around 1430, when vision-correcting eyeglasses were introduced into China from Italy, they, too, were darkened, though mainly for judicial use.
The popularity of sunglasses is really a twentieth-century phenomenon. And in America, the military, which played a role in the development of sunscreens, also was at the forefront of sunglass technology.
In the 1930s, the Army Air Corps commissioned the optical firm of Bausch & Lomb to produce a highly effective spectacle that would protect pilots from the dangers of high-altitude glare. Company physicists and opticians perfected a special dark-green tint that absorbed light in the yellow band of the spectrum. They also designed a slightly drooping frame perimeter to maximally shield an aviator’s eyes, which repeatedly glanced downward toward a plane’s instrument panel. Fliers were issued the glasses at no charge, and the public soon was able to purchase the model that banned the sun’s rays as Ray-Ban aviator sunglasses.
What helped make sunglasses chic was a clever 1960s advertising campaign by the comb and glass firm of Foster Grant.
Woodcut of a sixteenth-century book collector in corrective spectacles; smoked lenses, the earliest sunglasses.
Bent on increasing its share of the sunglass market, the company decided to emphasize glamour. It introduced the “Sunglasses of the Stars” campaign, featuring the sunglassed faces of such Hollywood celebrities as Peter Sellers, Elke Sommer, and Anita Ekberg. Magazine advertisements and television commercials teased: “Isn’t that…behind those Foster Grants?” Soon any star in sunglasses, whatever the actual brand, was assumed to be wearing Foster Grants.
Well-known fashion designers, as well as Hollywood stars, escalated the sunglass craze in the ’70s with their brand-name lines. A giant industry developed where only a few decades earlier none existed. As women since ancient times had hidden seductively behind an expanded fan or a dipped parasol, modern women—and men—discovered an allure in wearing sunglasses, irrespective of solar glare.
Contact Lenses: 1877, Switzerland
The first person to propose a contact lens system was the Italian painter, sculptor, architect, and engineer Leonardo da Vinci. In his sixteenth-century Codex on the Eye, da Vinci described an optical method for correcting poor vision by placing the eye against a short, water-filled tube sealed at the end with a flat lens. The water came in contact with the eyeball and refracted light rays much the way a curved lens does. Da Vinci’s use of water as the best surface to touch the eye is mirrored today in the high water content of soft contact lenses.
The acute sensitivity of the human eye means that only an extremely smooth foreign surface can come in contact with it. For centuries, this eliminated contact lenses of glass, which even after polishing remained fairly coarse.
In the 1680s, French opticians attempted a novel approach to the problem. They placed a protective layer of smooth gelatin over the eyeball, then covered it with a small fitted glass lens. The gelatin represented an attempt to use a medium with high water content. The French lens possessed a major flaw, for it frequently fell out of the wearer’s eye. It remained experimental.
The first practical contact lenses were developed in 1877 by a Swiss physician, Dr. A. E. Fick. They were hard lenses. Thick, actually. And not particularly comfortable. The glass was either blown or molded to the appropriate curvature, polished smooth, then cut into a lens that covered not only the cornea but the entire eyeball. Wearing them took a serious commitment to vanity. Fick’s lenses, however, demonstrated that vision, in most cases, could be corrected perfectly when refracting surfaces were placed directly on the eye. And they proved that the eye could learn to tolerate, without irreparable damage, a foreign object of glass.
Glass remained the standard material of hard lenses until 1936. That year, the German firm I. G. Farben introduced the first Plexiglas, or hard plastic, lens, which quickly became the new standard of the industry. It was not until the mid-1940s that American opticians produced the first successful corneal lens, covering only the eye’s central portion. The breakthrough ushered in the era of modern contact lens design. Since that time, scientists have ingeniously altered the physical and chemical composition of lenses, often in an attempt to achieve a surface that duplicates, as closely as possible, the composition of the human eye.
Today factors other than high water content are regarded as essential in a good lens (such as permeability to oxygen so that living eye cells may breathe). Still, though, with an instinctive belief in the comfort of water against the eye, many wearers seek out lenses that are up to 80 percent liquid—even though a lens of less water content might provide better vision correction. Da Vinci, with his 100 percent liquid lens, perhaps realized the psychological appeal of having only water touch the delicate surface of the eyeball.
Stimulants: Pre-2737 B.C., China
To achieve altered states of consciousness in religious rites, ancient man used naturally occurring plant stimulants. One of the earliest, and mildest, of recorded stimulants was strongly brewed tea. Although the origins of the beverage are shrouded in Oriental folklore, the legendary Chinese emperor Shen Nung is said to have discovered the kick of tea. An entry in Shen’s medical diary, dated 2737 B.C., declares that tea not only “quenches thirst” but “lessens the desire to sleep.”
Tea’s stimulant, of course, is caffeine. And the drug, in the form of coffee, became one of the most widely used, and abused, early pick-me-ups. After the discovery of the effects of chewing coffee beans in Ethiopia in A.D. 850, the drug became an addiction in the Near and Middle East. And as coffee spread throughout Europe and Asia, its stimulant effect merited more social and medical comment than its taste.
Caffeine’s use today continues stronger than ever. Aside from occurring naturally in coffee, tea, and chocolate, caffeine is added to cola drinks and a wide range of over-the-counter drugs. If your medicine chest contains Anacin-3, Dexatrim, Dristan Decongestant, Excedrin, NoDoz, or Slim (to mention a few), you have a caffeine-spiked analgesic or diet aid on the shelf.
Why is caffeine added?
In decongestants, it counters the soporific effects of the preparations’ active compounds. In analgesics, caffeine actually enhances (through a mechanism yet unknown) the action of painkillers. And in diet aids, the stimulant is the active ingredient that diminishes appetite. Though safe in moderate doses, caffeine can kill. The lethal dose for humans is ten grams, or about one hundred cups of coffee consumed in four hours.
In this century, a new and considerably more potent class of synthetic stimulants entered the medicine chest.
Amphetamines. These drugs were first produced in Germany in the 1930s. Their chemical structure was designed to resemble adrenaline, the body’s own fast-acting and powerful stimulant. Today, under such brand names as Benzedrine, Dexedrine, and Preludin (to list a few), they represent a multimillion-dollar pharmaceutical market.
Commonly known as “speed” or “uppers,” amphetamines were discovered to give more than an adrenaline rush. They produce a degree of euphoria, the ability to remain awake for extended periods, and the suppression of appetite by slowing muscles of the digestive system. For many years, they replaced caffeine as the primary ingredient in popular dieting aids. While their role in weight loss has greatly diminished, they remain a medically accepted mode of treatment for hyperactivity in children and such sleep disorders as narcolepsy.
In the 1930s, amphetamines existed only in liquid form and were used medically as inhalants to relieve bronchial spasms and nasal congestion. Because of their easy availability, they were greatly abused for their stimulant effects. And when they were produced in tablets, the drugs’ uses and abuses skyrocketed. During World War II, the pills were issued freely to servicemen and widely prescribed to civilians in a cavalier way that would be regarded today as irresponsibility bordering on malpractice.
By the 1960s, physicians recognized that amphetamines carried addictive risks. The condition known as amphetamine psychosis, which mimics classic paranoid schizophrenia, was identified, and by the end of the decade, legislation curtailed the use of the drugs. Any amphetamine on a medicine chest shelf today is either a prescription drug or an illegal one.
Sedatives: 1860s, Germany
Apples and human urine were the main and unlikely ingredients that composed the first barbiturate sedatives, developed in Germany in the 1860s. And the drugs derived their classification title “barbiturate” from a Munich waitress named Barbara, who provided the urine for their experimental production.
This bizarre marriage of ingredients was compounded in 1865 by German chemist Adolph Baeyer. Unfortunately, the specific reasoning, or series of events, that led him to suspect that the malic acid of apples combines with the urea of urine to induce drowsiness and sleep has been lost to history. What is well documented, however, is the rapid public acceptance of sedatives—to calm anxiety, cure insomnia, and achieve a placid euphoria.
The period from Baeyer’s discovery to the commercial production of barbiturates spans almost four decades of laboratory research. But once the chemical secrets were unlocked and the ingredients purified, the drugs began to appear rapidly. The first barbiturate sleeping drug, barbital, bowed in 1903, followed by phenobarbital, then scores of similarly suffixed drugs with varying degrees of sedation. Drugs like Nembutal and Seconal acquired street names of “yellow jackets” and “nebbies” and spawned a large illicit drug trade.
All the barbiturates worked by interfering with nerve impulses within the brain, which, in turn, “calmed the nerves.” Insomniacs alone, in America estimated to number over fifty million, created a huge market. But while sedatives provided a needed respite from wakefulness for many people, they often became addictive.
Of the many prescription sedatives found in American medicine chests today, one in particular merits mention for its outstanding use and abuse.
Valium. In 1933, drug researchers discovered a new class of nonbarbiturate sedatives. Known as benzodiazepines, they would soon acquire commercial brand names such as Librium and Valium, and Valium would go on to top the federal government’s list of the twenty most abused drugs in America, surpassing both heroin and cocaine.
During the first decade following their discovery, benzodiazepines did not attract much attention from drug companies. The belief was that barbiturates were safe, effective, and not terribly addictive, and thus there was no need for an entirely new class of sedating drugs.
Then medical opinion changed. In the mid-1950s, experiments revealed that benzodiazepines, in substantially smaller doses than barbiturate sedatives, were capable of inducing sleep in monkeys. In addition, the drugs not only sedated; they also diminished aggressive tendencies. Drug companies, learning of the surprising laboratory results with monkeys, began conducting human tests, and in 1960 the world was introduced to the first benzodiazepine sedative, Librium. Three years later, Valium debuted.
Known as “minor tranquilizers” (compared with the more potent Thorazine, a “major” tranquilizer), Librium and Valium began to be prescribed in record quantities. The reputation of barbiturates by that time had been grimly besmirched, and the new drugs seemed safer, less addictive. They were liberally dispensed as antianxiety agents, muscle relaxants, anticonvulsants, sleeping pills, and as a harmless treatment for the symptoms of alcohol withdrawal. Valium became an industry in itself.
In time, of course, medical opinion again changed. The benzodiazepines are extremely important and useful drugs, but they, too, possess a great potential for abuse. Today chemists are attempting to tailor-make a new classification of nonaddictive sedatives and painkillers with only a single-purpose function. In the meantime, Americans continue to consume more than five billion sedatives a year, making Valium and its sister drugs almost as familiar a medicine chest item as aspirin.
Aspirin: 1853, France
For a fever, physicians in the ancient world recommended a powder made from the bark of the willow tree. Today we know that the bark contains a salicylic compound, related to aspirin, though not as effective, and causing greater gastrointestinal irritation and possible bleeding.
Aspirin, acetylsalicylic acid, is a man-made variation of the older remedy. It is the world’s most widely used painkiller and anti-inflammatory drug, and it was prepared in France in 1853, then forgotten for the next forty years—rediscovered only when a German chemist began searching for a cure for his father’s crippling arthritis.
Alsatian chemist Charles Frederick von Gerhardt first synthesized acetylsalicylic acid in 1853, at his laboratory at the University of Montpellier. But from his own limited testing, he did not believe the drug to be a significant improvement over the then-popular salicin, an extract from the bark of the willow tree and the meadowsweet plant, a botanical relative of the rose. Aspirin was ignored, and sufferers of fevers, inflammations, and arthritis continued to take salicin.
In 1893, a young German chemist, Felix Hoffman, at the Farbenfabriken Bayer drug firm, had exhausted all the known drugs in attempting to ease his father’s rheumatoid arthritis. Hoffman knew of the synthetic type of salicin, and in desperation prepared a batch and tested it on his father. To his astonishment, the man-made derivative palliated the disease’s crippling symptoms and almost completely ameliorated its pain.
Chemists at Bayer, in Düsseldorf, realized Hoffman had hit on an important new drug. Deciding to produce the compound from the meadowsweet plant, Spiraea ulmaria, the company arrived at the brand name Aspirin by taking the “a” from acetyl, “spir” from the Latin Spiraea, and “in” because it was a popular suffix for medications.
First marketed in 1899 as a loose powder, Aspirin quickly became the world’s most prescribed drug. In 1915, Bayer introduced Aspirin tablets. The German-based firm owned the brand name Aspirin at the start of World War I, but following Germany’s defeat, the trademark became part of the country’s war reparations demanded by the Allies. At the Treaty of Versailles in June 1919, Germany surrendered the brand name to France, England, the United States, and Russia.
For the next two years, drug companies battled over their own use of the name. Then, in a famous court decision of 1921, Judge Learned Hand ruled that since the drug was universally known as aspirin, no manufacturer owned the name or could collect royalties for its use. Aspirin with a capital A became plain aspirin. And today, after almost a century of aspirin use and experimentation, scientists still have not entirely discovered how the drug achieves its myriad effects as painkiller, fever reducer, and anti-inflammatory agent.
Under the Flag
Uncle Sam: 1810s, Massachusetts
There was a real-life Uncle Sam. This symbol of the United States government and of the national character, in striped pants and top hat, was a meat packer and politician from upstate New York who came to be known as Uncle Sam as the result of a coincidence and a joke.
The proof of Uncle Sam’s existence was unearthed only a quarter of a century ago, in the yellowing pages of a newspaper published May 12, 1830. Had the evidence not surfaced, doubt about a real-life prototype would still exist, and the character would today be considered a myth, as he was for decades.
Uncle Sam was Samuel Wilson. He was born in Arlington, Massachusetts, on September 13, 1766, a time when the town was known as Menotomy. At age eight, Sam Wilson served as drummer boy on the village green, on duty the April morning of 1775 when Paul Revere made his historic ride. Though the “shot heard round the world” was fired from nearby Concord, young Sam, banging his drum at the sight of redcoats, alerted local patriots, who prevented the British from advancing on Menotomy.
As a boy, Sam played with another youthful patriot, John Chapman, who would later command his own chapter in American history as the real-life Johnny Appleseed. At age fourteen, Sam joined the army and fought in the American Revolution. With independence from Britain won, Sam moved in 1789 to Troy, New York, and opened a meat-packing company. Because of his jovial manner and fair business practices, he was affectionately known to townsfolk as Uncle Sam.
It was another war, also fought against Britain on home soil, that caused Sam Wilson’s avuncular moniker to be heard around the world.
During the War of 1812, government troops were quartered near Troy. Sam Wilson’s fair-dealing reputation won him a military contract to provide beef and pork to soldiers. To indicate that certain crates of meat produced at his warehouse were destined for military use, Sam stamped them with a large “U.S.” —for “United States,” though the abbreviation was not yet in the vernacular.
On October 1, 1812, government inspectors made a routine tour of the plant. They asked a meat packer what the ubiquitously stamped “U.S.” stood for. The worker, himself uncertain, joked that the letters must represent the initials of his employer, Uncle Sam. The error was perpetuated. Soon soldiers began referring to all military rations as bounty from Uncle Sam. Before long, they were calling all government-issued supplies property of Uncle Sam. They even saw themselves as Uncle Sam’s men.
The first Uncle Sam illustrations appeared in New England newspapers in 1820. At that time, the avuncular figure was clean-shaven and wore a solid black top hat and black tailcoat. The more familiar and colorful image of Uncle Sam we know today arose piecemeal, almost one item at a time, each the contribution of an illustrator.
Solid red pants were introduced during Andrew Jackson’s presidency. The flowing beard first appeared during Abraham Lincoln’s term, inspired by the President’s own beard, which set a trend at that time. By the late nineteenth century, Uncle Sam was such a popular national figure that cartoonists decided he should appear more patriotically attired. They adorned his red pants with white stripes and his top hat with both stars and stripes. His costume became an embodiment of the country’s flag.
Uncle Sam at this point was flamboyantly dressed, but by today’s standards of height and weight he was on the short side and somewhat portly.
It was Thomas Nast, the famous German-born cartoonist of the Civil War and Reconstruction period, who made Uncle Sam tall, thin, and hollow-cheeked. Coincidentally, Nast’s Uncle Sam strongly resembles drawings of the real-life Sam Wilson. But Nast’s model was actually Abraham Lincoln.
The most famous portrayal of Uncle Sam—the one most frequently reproduced and widely recognized—was painted in this century by American artist James Montgomery Flagg. The stern-faced, stiff-armed, finger-pointing figure appeared on World War I posters captioned: “I Want You for U.S. Army.” The poster, with Uncle Sam dressed in his full flag apparel, sold four million copies during the war years, and more than half a million in World War II. Flagg’s Uncle Sam, though, is not an Abe Lincoln likeness, but a self-portrait of the artist as legend.
A nineteenth-century meat-packing plant in upstate New York; birthplace of the Uncle Sam legend.
During these years of the poster’s peak popularity, the character of Uncle Sam was still only a myth. The identity of his prototype first came to light in early 1961. A historian, Thomas Gerson, discovered a May 12, 1830, issue of the New York Gazette newspaper in the archives of the New-York Historical Society. In it, a detailed firsthand account explained how Pheodorus Bailey, postmaster of New York City, had witnessed the Uncle Sam legend take root in Troy, New York. Bailey, a soldier in 1812, had accompanied government inspectors on the October day they visited Sam Wilson’s meat-packing plant. He was present, he said, when a worker surmised that the stamped initials “U.S.” stood for “Uncle Sam.”
Sam Wilson eventually became active in politics and died on July 31, 1854, at age eighty-seven. A tombstone erected in 1931 at Oakwood Cemetery in Troy reads: “In loving memory of ‘Uncle Sam,’ the name originating with Samuel Wilson.” That association was first officially recognized during the administration of President John F. Kennedy, by an act of the Eighty-seventh Congress, which states that “the Congress salutes ‘Uncle Sam’ Wilson of Troy, New York, as the progenitor of America’s National symbol of ‘Uncle Sam.’”
Though it may be stretching coincidence thin, John Kennedy and Sam Wilson spoke phrases that are strikingly similar. On the eve of the War of 1812, Wilson delivered a speech, and a plan, on what Americans must do to ensure the country’s greatness: “It starts with every one of us giving a little more, instead of only taking and getting all the time.” That plea was more eloquently stated in John Kennedy’s inaugural address: “ask not what your country can do for you—ask what you can do for your country.”
Johnny Appleseed: 1810s, Massachusetts
Sam Wilson’s boyhood playmate John Chapman was born on September 26, 1774, in Leominster, Massachusetts. Chapman displayed an early love for flowering plants and trees—particularly apple trees. His interest progressed from a hobby, to a passion, to a full-fledged obsession, one that would transform him into a true American folk character.
Though much lore surrounds Chapman, it is known that he was a devoted horticulturist, establishing apple orchards throughout the Midwest. He walked barefoot, inspecting fields his sapling trees had spawned. He also sold apple seeds and saplings to pioneers heading farther west, to areas he could not readily cover by foot.
A disciple of the eighteenth-century Swedish mystic Emanuel Swedenborg, John Chapman was as zealous in preaching Scripture as he was in planting apple orchards. The dual pursuits took him, barefooted, over 100,000 square miles of American terrain. The trek, as well as his demeanor, attire, and horticultural interests, made him as much a recognizable part of the American landscape as his orchards were. He is supposed to have worn on his head a tin mush pan, which served both as a protection from the elements and as a cooking pot at his impromptu campsites.
Frontier settlers came to humorously, and sometimes derisively, refer to the religious fanatic and apple planter as Johnny Appleseed. American Indians, though, revered Chapman as a medicine man. The herbs catnip, rattlesnake weed, horehound, and pennyroyal were dried by the itinerant horticulturist and administered as curatives to tribes he encountered, and attempted to convert.
Both Sam Wilson and John Chapman played a part in the War of 1812. While Wilson, as Uncle Sam, packaged rations for government troops, Chapman, as Johnny Appleseed, traversed wide areas of northern Ohio barefoot, alerting settlers to the British advance near Detroit. He also warned them of the inevitable Indian raids and plundering that would follow in the wake of any British destruction. Later, the town of Mansfield, Ohio, erected a monument to John Chapman.
Chapman died in March of 1845, having contracted pneumonia from a barefoot midwinter journey to a damaged apple orchard that needed tending. He is buried in what is known today as Johnny Appleseed Park, near the War Memorial Coliseum in Fort Wayne, Indiana, the state in which he died.
Although Johnny Appleseed never achieved the fame of his boyhood playmate Uncle Sam, Chapman’s likeness has appeared on commemorative U.S. stamps. And in 1974, the New York and New England Apple Institute designated the year as the Johnny Appleseed Bicentennial. Chapman’s most enduring monuments, however, are the apple orchards he planted, which are still providing fruit throughout areas of the country.
American Flag: Post-1777, New England
So much patriotism and sacrifice are symbolized by the American flag that it is hard for us today to realize that the star-spangled banner did not have a single dramatic moment of birth. Rather, the flag’s origin, as that of the nation itself, evolved slowly from humble beginnings, and it was shaped by many hands—though probably not those of Betsy Ross. The latest historical sleuthing indicates that her involvement, despite history book accounts, may well have been fictive. And no authority today can claim with certainty who first proposed the now-familiar design, or even when and where the Stars and Stripes was first unfurled.
What, then, can we say about the origin of a flag that the military salutes, millions of schoolchildren pledge allegiance to, and many home owners hang from a front porch pole every Fourth of July?
It is well documented that General George Washington, on New Year’s Day of 1776, displayed over his camp outside Boston an improvised “Grand Union Flag.” It combined both British and American symbols. One upper corner bore the two familiar crosses—St. George’s for England, and St. Andrew’s for Scotland—which had long been part of the British emblem. But the background field had thirteen red and white stripes to represent the American colonies. Since the fighting colonists, including Washington, still claimed to be subjects of the British crown, it’s not surprising that their homemade flag should carry evidence of that loyalty.
The earliest historical mention of an entirely American “Stars and Stripes” flag—composed of thirteen alternating red and white stripes, and thirteen stars on a blue field—is in a resolution of the Continental Congress dated June 14, 1777. Since Congress, and the country, had more urgent matters to resolve than a finalized, artistic flag design, the government stipulated no specific rules about the flag’s size or arrangement of details. It even failed to supply Washington’s army with official flags until 1783, after all the major war battles had ended.
Francis Scott Key, author of the words to “The Star-Spangled Banner.”
During the Revolutionary War, the American army and navy fought under a confused array of local, state, and homemade flags. They were adorned variously with pine and palmetto trees, rattlesnakes, eagles, stripes of red, blue, and yellow, and stars of gold—to mention a few.
In fact, it was not until 1814, nearly forty years after its authorization by Congress, that the flag began to be widely discussed by Americans as a symbol of the country. In that year, an American flag bearing fifteen stars flew over Fort McHenry at Baltimore, inspiring Francis Scott Key to write “The Star-Spangled Banner.”
Where in the gradual, piecemeal evolution of the American flag does the figure of the Philadelphia seamstress born Elizabeth Griscom belong?
Betsy Ross. When John Ross, an upholsterer, was killed in a munitions explosion in 1776, his wife, Betsy, took over operation of their tailoring business. The Ross store was on Philadelphia’s Arch Street, not far from the State House, on Chestnut Street, where history was being made almost daily.
According to legend, Betsy Ross was visited at her shop by General George Washington in June of 1776. They were supposed to have discussed various flag designs. And Washington allegedly settled for one composed of seven red and six white stripes, and thirteen five-pointed white stars arranged in a circle—though he had requested six-pointed stars. Betsy Ross is said to have convinced him that it would be easier for her to cut out five-pointed stars. When the general departed, legend has it, the seamstress commenced stitching the official American flag.
Historians find it significant that not a single one of the numerous flags that flew at different times and places during the Revolutionary War is of the design alleged to be the handiwork of Betsy Ross.
Further, the tale recounted in history books was told by Betsy Ross herself—on her deathbed in 1836, and to her eleven-year-old grandson, William J. Canby. Betsy Ross at the time was eighty-four years old. Canby, in turn, did not publicly relate the tale until 1870, when he presented it at a meeting of the Pennsylvania Historical Society. That was thirty-four years after he had heard it as a boy, and almost a hundred years after the incident was alleged to have occurred.
Historical records verify that George Washington was in Philadelphia in June of 1776. But in his written itinerary there is no mention of a meeting with a local seamstress. Nor in Washington’s diary is there any evidence of his concern with the design of an official American flag. In fact, Congress had not yet convened a committee to tackle any flag design, nor at the time was there congressional talk of replacing the Grand Union Flag. Washington had made personal modifications in that flag, combining American with British features, but he had not expressed a desire to abandon it entirely. The consensus among historians who have investigated the Betsy Ross legend is that it’s no more than that—a legend: a nonverifiable story handed down from generation to generation. And one begun by the lady herself.
History and legend, though, have a way of blending in the crucible of time. Betsy Ross’s deathbed tale has inextricably rooted itself in the heart of American folklore. And whether in time it is unequivocally proved or disproved, it almost assuredly will be told and retold.
Pledge of Allegiance: 1892, Rome, New York
The pledge of allegiance to the American flag is neither an old verse nor one composed by the Republic’s founding fathers. It was written especially for children in the summer of 1892, to commemorate that year’s celebration of Columbus Day in public schools throughout the country.
The pledge’s first appearance in print was on September 8, 1892, in The Youth’s Companion, an educational publication. It is estimated that more than ten million American schoolchildren recited it that Columbus Day. In its original form, it read: “I pledge allegiance to my Flag and the Republic for which it stands—one nation indivisible—with liberty and justice for all.”
Its author was an editor of The Youth’s Companion, Francis Bellamy of Rome, New York. Bellamy intended his verse to be a one-time recitation. But its immediate popularity among the nation’s schoolchildren and teachers transformed it first into an annual Columbus Day tradition, then into a daily classroom ritual. It became one of the earliest verses memorized by schoolchildren.
Since its debut, Bellamy’s pledge has undergone two alterations. In 1923, the United States Flag Association replaced the somewhat ambiguously personal “my flag” wording with the more explicitly patriotic “the Flag of the United States of America.” And in 1954, President Dwight D. Eisenhower signed a bill that introduced a religious note to the pledge, with the addition of the words “under God.”
Washington, D.C.: 1790
Although much has been written about the selection of Washington, D.C., as the nation’s capital, little has appeared concerning one of the early motivating factors for locating the center of government in an area that then was a remote swampland. This part of the story involves the desire of congressmen for a safe haven where they could peacefully conduct business without harassment by disgruntled civilians and soldiers.
The idea for a national capital city in a remote, inconvenient area originated at a June 1783 meeting of the Congress in the Old City Hall in Philadelphia. While several factors contributed to the decision, one in particular galvanized Congress to action.
The War of Independence had recently been concluded. The treasury was flat broke. The new nation had no credit, still lacked a President, and was heavily in debt to its soldiers for back pay. On June 20, a large and angry mob of unpaid soldiers invaded Philadelphia to present their grievances to Congress. It was not the first such violent confrontation. That day, though, a number of agitated congressmen—some angry, others frightened—expressed their weariness with such direct public intrusions. They launched a movement to establish a federal city where lawmakers could transact the business of state without civilian intimidation.
Several locations were considered. New Englanders, led by Alexander Hamilton of New York, sought a capital in the north. Southerners, represented by Thomas Jefferson of Virginia, argued for a location in the south. In 1790, in an attempt to placate both sides, the recently elected President, George Washington, chose a site eighteen miles up the Potomac River from his home in Mount Vernon—a location then midway between north and south. In addition, the area was between the thriving seaports of Alexandria, Virginia, and Georgetown, Maryland. No one denied, however, that the ten-mile-square site was a bog.
After several years of planning, in September 1793 President Washington himself laid the cornerstone for the first U.S. Capitol. Office buildings were quickly erected. By 1800, the U.S. government had officially moved headquarters from Philadelphia to Washington.
No one was pleased with the new city.
Congressmen complained that it was too isolated. A wilderness. They and their families resisted constructing homes there; as did government employees. Groups of citizens petitioned that the capital city be relocated to a more desirable, prestigious, and accessible location. What had been conceived by Washington as a “city of magnificent distances” was now disparagingly attacked by congressmen as a “capital of miserable huts,” “a mud-hole.” Abigail Adams, wife of the first President to occupy the presidential mansion, expressed a desire to move out, lamenting, “We have not the least convenience.”
By the close of Thomas Jefferson’s term of office, in 1809, the population of the nation’s new city was scarcely five thousand. To foreign heads of state, America’s capital was a nightmare. With a dearth of cultural institutions and personal conveniences, and with the Potomac continually muddying the dirt streets, foreign ambassadors stationed in the capital actually collected “hardship pay” from their governments.
The advent of the steam engine and the telegraph quelled some of the complaints. These inventions put the city in touch with the outside world. But the real change of attitude toward the new capital, in the minds of both ordinary citizens and government officials, resulted from a national tragedy.
In August 1814, the British invaded the city. They burned the President’s mansion, the Capitol, and the Navy Arsenal. Americans were incensed. And they were united, too, against an enemy that had attempted to destroy the nation’s capital—even if that capital was inaccessible, inhospitable, and undesirable to live in.
All clamor to relocate the city ceased. An immense and patriotic rebuilding effort began. Jefferson sold his own extensive collection of books to Congress to replace the destroyed contents of the Library of Congress. And the badly charred wooden planks of the President’s mansion were painted a shimmering white, conferring upon it for all time the title the White House.
In 1874, Frederick Law Olmsted, the designer of New York’s Central Park, began landscaping the Capitol grounds with trees from various states and foreign countries. Contributing to that effort in 1912, the Japanese government presented the United States with a gift of three thousand cherry trees, whose blossoming thereafter would signal the city’s annual Cherry Blossom Festival. By then, of course, the site on the Potomac once intended to keep citizens from lobbying Congress had become the home of lobbyists.
Mount Rushmore: 1923, South Dakota
The faces originally to be carved into Mount Rushmore were not the fatherly countenances of four famous Presidents but the romanticized visages of three Western legends: Kit Carson, Jim Bridger, and John Colter. Planned as a tourist attraction to draw money into South Dakota’s economy, the monument, as originally conceived, might scarcely have achieved its goal.
The full story of the origin of Mount Rushmore begins sixty million years ago, when pressures deep within the earth pushed up layers of rock. The forces created an elongated granite-and-limestone dome towering several thousand feet above the Dakota prairie lands. The first sculpting of the mountain was done by nature. The erosive forces of wind and water fashioned one particularly protuberant peak, which was unnamed until 1885.
That year, a New York attorney, Charles E. Rushmore, was surveying the mountain range on horseback with a guide. Rushmore inquired about the impressive peak’s name, and the guide, ribbing the city lawyer, answered, “Hell, it never had a name. But from now on we’ll call the damn thing Rushmore.” The label stuck. And later, with a gift of five thousand dollars, Charles Rushmore became one of the earliest contributors to the presidential memorial.
The origin of the sculpture is better documented and more inspiring than that of the mountain’s name.
The idea to transform a gigantic mountaintop into a colossus of human figures sprang from the mind of a South Dakota historian, Doane Robinson. In 1923, Robinson presented to the state his plan to simultaneously increase South Dakota’s tourism, strengthen its economy, and immortalize three “romantic western heroes.” A commission then sought the skills of renowned sculptor John Gutzon de la Mothe Borglum, an authority on colossi.
Idaho-born, Borglum started as a painter, then switched to sculpture, and his fame grew in proportion to the size of his works. The year Doane Robinson conceived the idea for a Mount Rushmore memorial, Borglum accepted a commission from the United Daughters of the Confederacy to carve a head of General Robert E. Lee on Stone Mountain in Georgia.
Mount Rushmore, though, beckoned with the greater challenge.
Borglum opposed sculpting Western heroes. The notion was overly provincial, he argued. A colossus should capture prominent figures. In a letter dated August 14, 1925, Borglum proposed the faces of four influential American Presidents.
Construction on the 6,200-foot-high wilderness peak was fraught with dangers. And the mountain itself was inaccessible except by foot or horseback, which necessitated countless climbs to lug up drills and scaffolding. But for Borglum, two features made the remote Rushmore peak ideal. The rocks faced southeast, ensuring maximum sunlight for construction, and later for viewing. And the peak’s inaccessibility would protect the monument from vandals.
Bitter winters, compounded by a chronic shortage of funds, continually threatened to terminate construction. Weathered surface rock had to be blasted away to expose suitably firm stone for sculpting. The chin of George Washington, for instance, was begun thirty feet back from the original mountain surface, and Theodore Roosevelt’s forehead was undertaken only after one hundred twenty feet of surface rock were peeled away.
Borglum worked from a scale model. Critical “points” were measured on the model, then transferred to the mountain to indicate the depth of rock to be removed point by point.
In 1941, fourteen years after construction began—and at a total cost of $990,000—a new world wonder was unveiled. There stood George Washington, whom Borglum selected because he was “Father of the Nation”; Abraham Lincoln, “Preserver of the Union”; Thomas Jefferson, “The Expansionist”; and Theodore Roosevelt, “Protector of the Working Man.”
The figures measure sixty feet from chin to top of head. Each nose is twenty feet long, each mouth eighteen feet wide, and the eyes are eleven feet across. “A monument’s dimensions,” Borglum believed, “should be determined by the importance to civilization of the events commemorated.”
Gutzon Borglum died on March 6, 1941, aged seventy-four. The monument was essentially completed. His son, also a sculptor, added the finishing touches.
Boy Scouts of America: 1910, Chicago
A good deed performed by an anonymous boy prompted a wealthy Chicago businessman to found the scouting movement in America. The boy was already a scout, a British scout, a member of an organization begun in England by Colonel Robert Baden-Powell. (The scouts’ motto, “Be Prepared,” is not only a forceful exhortation but also something of a tribute to Baden-Powell’s initials, a coincidence he enjoyed calling attention to, since practically no one else noticed.)
While serving his country in Africa during the turn-of-the-century Boer War, Baden-Powell complained that young recruits from England lacked strength of character and resourcefulness. On returning home, he assembled twenty-two boys, to imbue them with the attributes of loyalty, courage, and leadership. And in 1908, he published Scouting for Boys, a stalking and survival manual, which formally marked the start of the British Boy Scouts.
The social and political upheaval in Edwardian England provided a climate for scouting. Britons were anxious about their country’s national decline, the poor physical condition of large segments of the urban population, and the increasing vulnerability of British colonies abroad. The idea of training thousands of young boys to be loyal, resourceful, law-abiding citizens met with unanimous approval.
A year after the British scouting movement had been launched, William Boyce, a Chicago publisher visiting London, found himself lost on a dark, foggy night. The youth who came to Boyce’s aid identified himself only as a “boy scout.” Boyce was impressed with the boy’s courtesy and resolve to be of assistance; and he was astonished by the boy’s refusal to accept a tip. Boyce would later comment that he had never met an American youth who’d decline an earned gratuity. He was sufficiently intrigued with the British scouting movement to meet with its master, Baden-Powell.
On February 10, 1910, Boyce established the Boy Scouts of America, modeled on the British organization. Its immediate acceptance by parents, educators, and the young men who joined the movement in the tens of thousands guaranteed scouting’s success. Within a year, the scouts had their “On my honor” oath, a score of merit badges, and the scout’s law, comprising a string of twelve attributes to aspire to. By 1915, there were a half-million American boy scouts, with troops in every state.
American Presidents were involved with scouting from the start. William Taft began the tradition that every President automatically becomes an honorary scout. Theodore Roosevelt went a step further after his presidency by becoming head scoutmaster of Troop 39, Oyster Bay, New York.
And while all Presidents became scouts, some scouts became Presidents. The first one to do so had been a member of Troop 2 of Bronxville, New York, from 1929 to 1931—John F. Kennedy. And the first Eagle Scout to become President began his scouting career in 1924 as a member of Troop 15, Grand Rapids, Michigan—Gerald R. Ford.
By the late 1920s, scouting was so popular throughout the country that parents began to inquire if their younger children might not be permitted to join the movement. To satisfy that request, early in 1930 the Cub Scout program was formally launched; by year’s end, its membership stood at 847,051 and climbing.
Girl Scouts of the U.S.A.: 1912, Savannah, Georgia
Born Juliette Daisy Gordon in Savannah in 1860, the founder of the Girl Scouts exhibited a flair for organization at an early age. As a teenager, she formed the Helpful Hands Club, a youth organization that made and repaired clothes for needy children. At age twenty-six, Juliette married a wealthy Englishman, William Mackay Low, and the couple took up residence in England. Undaunted by advancing deafness, Juliette Low established herself as a popular London party giver.
It was at a party in 1911 that she was introduced to Colonel Robert Baden-Powell. The colonel’s enthusiasm for scouting must have been contagious. Two years earlier, he had inspired William Boyce to institute the American Boy Scouts. He had only recently encouraged his sister, Agnes, to launch an equivalent female movement, the British Girl Guides. At the party, Baden-Powell, accompanied by Agnes, imbued Juliette Low with the scouting zeal.
So much so, in fact, that within weeks of the meeting, Juliette Low was a London Girl Guide leader. The following year, she brought the idea home to Savannah, Georgia. On March 12, 1912, eighteen young girls from a local school became America’s first Girl Guides. The next year, their name was formally changed to Girl Scouts.
By the time of Juliette Low’s death, on January 17, 1927, there were more than 140,000 Girl Scouts, with troops in every state. And the tradition had been started that the wives of American Presidents automatically become honorary Girl Scouts.
The British surrender at Yorktown as Washington’s band plays “Yankee Doodle.”
“Yankee Doodle”: 1750s, England
Yankee Doodle came to town,
Riding on a pony;
He stuck a feather in his cap
And called it macaroni.
Today this song is played as a short, incidental piece of music, and when sung, it’s mostly by a child as a nursery rhyme. But in the eighteenth century, “Yankee Doodle” was a full-fledged national air of many stanzas, a lively expression of American patriotism, usually played by a military band. This despite the fact that the melody and lyrics originated with the British as a derisive slur to colonists. In London, prior to the American Revolution, a version of “Yankee Doodle” expressed growing anti-American sentiment.
Musicologists and historians have struggled with the origin and interpretation of the song. It’s known that British Redcoats prided themselves on always being dapperly and uniformly attired. Colonial soldiers were by comparison a ragamuffin lot, each dressing in whatever clothes he owned. An early version of “Yankee Doodle” clearly mocks Americans’ shabby dress, and the derision is carried into later versions in the term “macaroni.”
In eighteenth-century England, “macaroni” ridiculed an English dandy who affected foreign mannerisms and fashions, particularly ones French or Italian. A “macaroni” believed he was stylishly attired when by the vogue of the day his outfit was outlandish. Thus, the archetypal Yankee Doodle character, by sticking a feather in his cap, believes he has become fashionable when in fact his appearance is comical. In singing the song, the British poked fun at what they viewed as New England’s country bumpkins.
The song’s authorship is clouded by at least a dozen vying claims. Many historians believe the original melody and lyrics were composed by a British surgeon, Dr. Richard Schuckburg, around 1758. Others maintain it was an impromptu composition on American soil by British soldiers, who then carried it home to England.
Whoever the composer, in America the musical insult fell on deaf ears. The colonists warmly embraced the tune, many times modifying its lyrics, though never deleting “macaroni.” In April 1767, the melody was highlighted in an American comic opera composed by Andrew Barton and titled The Disappointment: or, The Force of Credulity. By the close of the Revolutionary War, George Washington’s troops had turned the once defiant insult into a rousing celebratory salute. At the surrender of the British at Yorktown, Washington’s band struck up a chorus of “Yankee Doodle” to mortify the defeated British Lord Cornwallis and his men. Out of this sentiment, somewhat akin to “He who laughs last” or “They’ll eat their words,” the tune “Yankee Doodle” became for several decades a national air.
“The Star-Spangled Banner”: 1814, Baltimore
America acquired the song that is its national anthem about thirty-eight years after the country won its independence from England. It is somewhat ironic that the melody is British, and came from a song extolling the pleasures of wine and amours. The American lyrics were of course penned by lawyer and poet Francis Scott Key. But Key directed that his lines be sung to the British melody “To Anacreon in Heaven,” Anacreon being the sixth-century B.C. Greek poet known for his lyric love verse.
Why did the patriotic Francis Scott Key choose a British melody?
During Key’s time, “To Anacreon in Heaven” was one of the most popular songs in England and America. At least eighty-five American poems were fitted to the tune. And Key himself, in 1805—nine years before he’d write “The Star-Spangled Banner” —set a poem, “When the Warrior Returns,” to the British melody. (That poem, interestingly, contained an image that the poet would soon reshape and immortalize: “By the light of the star-spangled flag.”) Thus, Key was well acquainted with the melody, its popularity, and its musical cadence.
During the War of 1812, Key was a Washington lawyer in his thirties. Under a brief truce, he was sent aboard a British vessel in Chesapeake Bay to secure the release of a captured American physician. By the time the lengthy negotiations were completed, the truce had expired and British ships were bombarding Fort McHenry, which guarded the city of Baltimore.
Key witnessed the fiery battle. By morning, the American flag of fifteen stars and stripes was still flying over the fort. Inspired by the sights and sounds of that night of September 13, 1814, Key composed a poem, “The Defense of Fort McHenry,” which was published the following week.
Americans almost immediately regarded the poem, sung to Key’s suggested melody, as their national anthem. But, surprisingly, the anthem was not officially adopted until March 3, 1931, by a presidential proclamation of Herbert Hoover. Today the Stars and Stripes flag that inspired Francis Scott Key is preserved in the Smithsonian Institution.
“America the Beautiful”: 1895, Colorado
It was a New Jersey church organist, a New England poet, and the breathtakingly beautiful vista of Colorado mountain peaks that combined to give America a song that could have been, and almost was, its national anthem.
The poet, Katharine Lee Bates, was born in 1859 in Falmouth, Massachusetts, and was a professor of English literature at Wellesley College. Visiting Colorado in the early 1890s, she was inspired by the majestic view from the summit of Pikes Peak to compose a poem opening with the line “O beautiful for spacious skies.” The completed work was printed in Boston on July 4, 1895.
Popular poems were frequently set to existing melodies. Katharine Bates’s composition was fitted to a religious song, “Materna,” at the time already thirteen years old. It had been written by an organist, choirmaster, and Newark, New Jersey, music dealer, Samuel A. Ward. He composed the song in 1882 to be sung in his parish church. Its opening line was “O Mother dear, Jerusalem,” metrically identical to Katharine Bates’s “O beautiful for spacious skies.”
Both Katharine Bates and Samuel Ward lived to see their creation achieve nationwide popularity. Throughout the 1920s, when the country still had no official national anthem, there were numerous attempts to persuade Congress to elevate “America the Beautiful” to that status. Not until 1931, when “The Star-Spangled Banner” was adopted, did the debate quiet down, and it still has not been totally silenced. The issue then and now is not with the lyrics but with the higher tessitura of Francis Scott Key’s song. “America the Beautiful” is simply easier for most people to sing.
“The Marines’ Hymn”: Pre-1920, Mexico and France
There is some humorous incongruity in the fact that the hearty, forceful “Marines’ Hymn,” belted out vigorously by generations of America’s toughest fighters, derives from a frivolous, lighthearted comic opera by French composer Jacques Offenbach.
How did opéra bouffe come to represent American military might?
During the Mexican-American War, an anonymous member of the Marine Corps stationed in Mexico composed a historical poem (most likely in 1847). It opened with references to the glorious days of the last Aztec emperor, Montezuma, recounted his people’s demise, then proceeded to relate the Marine’s mission in Mexico to fight for “freedom and liberty.” The poem, somewhat altered, was eventually published in the Marines’ newspaper, The Quantico Leatherneck, and for several decades Marines sang the words to an old Spanish folk tune.
During that period, Jacques Offenbach composed the comic opera Geneviève de Brabant. The lightweight, sentimental work contained one song, “Two Men in the Army,” which in melody, lyrics, and slapstick staging thrilled Parisian audiences, as well as American operagoers who heard it in New York in October 1868. Excerpted from the opera, the song achieved independent popularity in France and America.
What happened next was a combination of mental forgetting and musical fitting. In time, people simply forgot that the frequently sung “Two Men in the Army” had ever been an opera duet (certainly the opera itself was forgotten). New generations of Marines sang “Two Men,” and its robust marching rhythm was found to fit closely the meter of their popular military poem. Neither Marine nor music historians have successfully determined exactly when enlisted men dropped the old Spanish folk melody in favor of the more driving beat of the Offenbach tune.
What is documented is that the now-familiar words and music were first published jointly in New York in August 1919. A year later, the United States Marine Corps copyrighted the song, titled “The Marines’ Hymn.” While several opera composers incorporated nationalistic melodies into their works (as did Donizetti in the overture to Roberto Devereux), Offenbach’s is the first opera melody to become a popular patriotic song.
“Dixie”: 1859, New York
It became the national anthem of the Confederacy, but “Dixie” was composed by a Northerner, Daniel Decatur Emmett, who specialized in writing songs for blackface minstrel shows. One of his shows, staged in New York’s Mechanics’ Hall on April 4, 1859, contained a number the playbill listed as “Mr. Dan Emmett’s original Plantation Song and Dance, Dixie’s Land.”
A group of peripatetic musicians, Bryant’s Minstrels, carried the song to New Orleans in 1860. They introduced it to the South in their musical Pocahontas, based loosely on the relationship between the American Indian princess and Captain John Smith. The song’s immediate success led them to include it in all their shows, and it became the minstrels’ signature number.
Eventually, the term “Dixie” became synonymous with the states below the Mason-Dixon line. When the song’s composer, a staunch Union sympathizer, learned that his tune “Dixie” was played at the inauguration of Jefferson Davis as president of the Confederate States of America, he said, “If I had known to what use they were going to put my song, I’ll be damned if I’d have written it.” For a number of years, people whistled “Dixie” only in the South.
Abraham Lincoln attempted to change that. On April 10, 1865, the day following Lee’s surrender to Grant at Appomattox, President Lincoln delivered a speech outside the White House. He jokingly addressed the South’s monopoly of the song, saying, “I had heard that our adversaries over the way had attempted to appropriate it. I insisted yesterday that we had fairly captured it.” Lincoln then suggested that the entire nation feel free to sing “Dixie,” and he instructed the military band on the White House lawn to strike up the melody to accompany his exit.
West Point Military Academy: 1802, New York
The origin of West Point Military Academy dates back to the Revolutionary War, when the colonists perceived the strategic significance of the Hudson River, particularly of an S-shaped bend in the river at the point of land known as West Point.
To control the Hudson was to command a major artery linking New England with the other colonies. General George Washington and his forces gained that control in 1778, occupying the high ground at the S-shaped bend in the river. Washington fortified the town of West Point that year, and in 1779 he established his headquarters there.
During the war, Washington realized that a crash effort to train and outfit civilians every time a conflict arose could never guarantee America’s freedom. The country needed professional soldiers. At the end of the war, in 1783, he argued for the creation of an institution devoted exclusively to the military arts and the science of warfare.
But in the atmosphere of confidence created by victory, no immediate action was taken. Washington came and went as President (1789–1797), as did John Adams (1797–1801). It was President Thomas Jefferson who signed legislation in 1802 establishing the United States Military Academy at West Point, New York. With a class of only ten cadets, the academy opened its doors on Independence Day of that year—and none too soon.
War broke out again, faster than anyone had imagined it would. The War of 1812 refocused attention on the country’s desperate need for trained officers. James Madison, then President, upped the size of the Corps of Cadets to 250, and he broadened the curriculum to include general scientific and engineering courses.
The academy was girded for the next conflict, the Civil War of 1861. Tragically, and with poignant irony, the same officers who had trained diligently at West Point to defend America found themselves fighting against each other. During the Civil War, West Point graduates—Grant, Sherman, Sheridan, Meade, Lee, Jackson, and Jefferson Davis—dominated both sides of the conflict. In fact, of the war’s sixty major battles, West Pointers commanded both sides in fifty-five. Though the war was a tragedy for the country as a whole, it was particularly traumatic for the military academy.
In this century, the institution witnessed changes in three principal areas. Following the school’s centennial in 1902, the curriculum was expanded to include English, foreign languages, history, and the social sciences. And following World War II, in recognition of the intense physical demands of modern warfare, the academy focused on physical fitness, with the stated goal of making “every cadet an athlete.” Perhaps the biggest change in the academy’s history came in 1976, when it admitted women as cadets.
From a Revolutionary War fortress, the site at the S-bend in the Hudson became a flourishing center for military and academic excellence—all that General George Washington had intended and more.
Statue of Liberty: 1865, France
The Statue of Liberty, refurbished for her 1986 centennial, is perhaps the most renowned symbol of American patriotism throughout the world. It is the colossal embodiment of an idea that grew out of a dinner conversation between a historian and a sculptor.
In 1865, at a banquet in a town near Versailles, the eminent French jurist and historian Edouard de Laboulaye discussed Franco-American relations with a young sculptor, Frédéric-Auguste Bartholdi. De Laboulaye, an ardent admirer of the United States, had published a three-volume history of the country and was aware of its approaching independence centennial. When the historian suggested that France present America with an impressive gift, sculptor Bartholdi immediately envisioned a massive statue. But at the time, the idea progressed no further than discussion.
A trip later took Bartholdi to Egypt. Strongly influenced by ancient colossi, he sought from the ruling authorities a commission to create a large statue to grace the entrance of the newly completed Suez Canal. But before he could secure the assignment, war erupted between France and Prussia, and Bartholdi was summoned to fight.
The idea of a centennial statue for America was never far from the sculptor’s mind. And in 1871, as he sailed into the bustling mouth of New York Harbor on his first visit to the country, his artist’s eyes immediately zeroed in on a site for the work: Bedloe’s Island, a twelve-acre tract lying southwest of the tip of Manhattan. Inspired by this perfect pedestal of an island, Bartholdi completed rough sketches of his colossus before the ship docked. The Franco-American project was undertaken, with the artist, engineers, and fund-raisers aware that the unveiling was a mere five years away.
The statue, to be named “Liberty Enlightening the World,” would be 152 feet high and weigh 225 tons, and its flowing robes were to consist of more than three hundred sheets of hand-hammered copper. France offered to pay for the sculpture; the American public agreed to finance its rock-concrete-and-steel pedestal. To supervise the immense engineering feat, Bartholdi enlisted the skills of French railroad builder Alexandre-Gustave Eiffel, who later would erect the tower that bears his name. And for a fittingly noble, wise, and maternal face for the statue, Bartholdi turned to his mother, who posed for him.
Construction of the Statue of Liberty in 1885.
From the start, the French contributed generously. Citizens mailed in cash and checks, and the government sponsored a special “Liberty” lottery, with profits going toward construction costs. A total of $400,000 was raised, and the esteemed French composer Charles Gounod created a cantata to celebrate the project.
In America, the public was less enthusiastic. The indifference centered on one question: Did the country really need—or want—such a monumental gift from France? Publisher Joseph Pulitzer spearheaded a drive for funds in his paper, the World. In March 1885, Pulitzer editorialized that it would be “an irrevocable disgrace to New York City and the American Republic to have France send us this splendid gift without our having provided so much as a landing place for it.” He lambasted New York’s millionaires for lavishing fortunes on personal luxuries while haggling over the pittances they were asked to contribute to the statue’s pedestal. In two months, Pulitzer’s patriotic editorials and harangues netted a total of $270,000.
The deadline was not met. When the country’s 1876 centennial arrived, only segments of the statue were completed. Thus, as a piecemeal preview, Liberty’s torch arm was displayed at the Philadelphia Centennial celebrations, and two years later, at the Paris Fair, the French were treated to a view of Liberty’s giant head.
Constructing the colossus in France was a herculean challenge, but dismantling it and shipping it to America seemed an almost insurmountable task. In 1884, the statue’s exterior and interior were taken apart piece by piece and packed into two hundred mammoth wooden crates; the half-million-pound load was hauled by special trucks to a railroad station, where a train of seventy cars transported it to a shipyard. In May 1885, Liberty sailed for America aboard the French warship Isère.
When, on October 28, 1886, President Grover Cleveland presided over the statue’s inauguration ceremonies, Lady Liberty did not yet bear her now-immortal poem. The verse “Give me your tired, your poor, / Your huddled masses yearning to breathe free…” was added in 1903, after the statue was closely identified with the great flow of immigrants who landed on nearby Ellis Island.
The moving lines are from a sonnet, “The New Colossus,” composed in 1883 by New York City poet Emma Lazarus. A Sephardic Jew, whose work was praised by the Russian novelist Ivan Turgenev, Lazarus devoted much of her life to the cause of Jewish nationalism. She tackled the theme of persecution in poems such as “Songs of a Semite,” and in a drama, The Dance to Death, based on the accusation leveled against Jews of poisoning water wells and thus causing Europe’s fourteenth-century Black Death.
But her sonnet “The New Colossus” was almost completely ignored by the critics of the day and the public. She had written it for a literary auction held at New York’s Academy of Design, and it expressed her belief in America as a refuge for the oppressed peoples of the world. Sixteen years after her death from cancer in 1887, the sonnet’s final five lines were etched in bronze and in the memory of a nation.
On the Body
Shoes: Pre-2000 B.C., Near East
Although some clothing originated to shelter the body, most articles of attire, from earliest times, arose as statements of status and social rank. Color, style, and fabric distinguished high priest from layman, lawmaker from lawbreaker, and military leader from his followers. Costume set off a culture’s legends from its legions. In fact, costume is still the most straightforwardly visible means of stating social hierarchy. As for the contributions made to fashion by the dictates of modesty, they had virtually nothing to do with the origin of clothing and stamped their particular (and often peculiar) imprint on attire centuries later.
Shoes, though eminently practical, are, as we’ll see, one early example of clothing as social categorizer.
The oldest shoe in existence is a sandal. It is constructed of woven papyrus and was discovered in an Egyptian tomb dating from 2000 B.C. The chief footwear of ancient people in warm climates, sandals exhibited a variety of designs, perhaps as numerous as styles available today.
Greek leather sandals, krepis, were variously dyed, decorated, and gilded. The Roman crepida had a thicker sole and leather sides, and it laced across the instep. The Gauls preferred the high-backed campagus, while a rope sandal of hemp and esparto grass, the alpargata, footed the Moors. From tombs, gravesites, and ancient paintings, archaeologists have catalogued hundreds of sandal designs.
Although sandals were the most common ancient footwear, other shoes were worn. The first recorded nonsandal shoe was a leather wraparound, shaped like a moccasin; it tightened against the foot with rawhide lacing and was a favorite in Babylonia around 1600 B.C.
A similar snug-fitting leather shoe was worn by upper-class Greek women starting around 600 B.C., and the stylish colors were white and red. It was the Romans, around 200 B.C., who first established shoe guilds; the professional shoemakers were the first to fashion footwear specifically for the right and left feet.
Roman footwear, in style and color, clearly designated social class. Women of high station wore closed shoes of white or red, and for special occasions, green or yellow. Women of lower rank wore natural-colored open leather sandals. Senators officially wore brown shoes with four black leather straps wound around the leg up to midcalf, and tied in double knots. Consuls wore white shoes. There were as yet no brand names, but there were certain guild cobblers whose products were sought for their exceptional craftsmanship and comfortable fit. Their shoes were, not surprisingly, more costly.
The word “shoe” changed almost as frequently over the ages as shoe styles. In the English-speaking world, “shoe” evolved through seventeen different spellings, with at least thirty-six variations for the plural. The earliest Anglo-Saxon term was sceo, “to cover,” which eventually became in the plural schewis, then shooys, and finally “shoes.”
Standard Shoe Size. Until the first decade of the fourteenth century, people in the most civilized European societies, including royalty, could not acquire shoes in standard sizes. And even the most expensive custom-made shoes could vary in size from pair to pair, depending on the measuring and crafting skills of particular cobblers.
That began to change in 1305. Britain’s King Edward I decreed that for a standard of accuracy in certain trades, an inch be taken as the length of three contiguous dried barleycorns. British cobblers adopted the measure and began manufacturing the first footwear in standard sizes. A child’s shoe measuring thirteen barleycorns became commonly known as, and requested by, size 13. And though shoes cut for the right and left foot had gone out of existence after the fall of the Roman Empire, they reemerged in fourteenth-century England.
A new style surfaced in the fourteenth century: shoes with extremely long spiked toes. The vogue was carried to such lengths that Edward III enacted a law prohibiting spikes from extending more than two inches beyond the human toe. For a while, people observed the edict. But by the early 1400s, the so-called crakows had attained tips of eighteen inches or more, with wearers routinely tripping themselves.
The crakows, arriving in the creative atmosphere that nurtured the Renaissance, ushered in a new shoe-style trendiness, as one fashion extreme replaced another. The absurdly long, pointed toe, for example, was usurped by a painfully short, comically broad-boxed toe that in width could accommodate an extra set of digits.
In the seventeenth century, the oxford, a low calf-leather shoe laced up the front through three or more eyelets, originated with cobblers in the academic town of Oxford, England.
In America at the time, shoe design took a step backward. The first colonial cobblers owned only “straight lasts,” that is, single-shape cutting blocks, so right and left footwear was unavailable. The wealthy resorted to British imports. Shoe selection, price, and comfort improved in the mid-eighteenth century when the first American shoe factory opened in Massachusetts. These mass-produced shoes were still cut and stitched by hand, with leather sewn at home by women and children for a shameful pittance, then assembled at the factory.
Complete mechanization of shoemaking, and thus true mass production, was slow in coming. In 1892, the Manfield Shoe Company of Northampton, England, operated the first machines capable of producing quality shoes in standard sizes and in large quantities.
Boots: 1100 B.C., Assyria
Boots originated as footwear for battle. The Sumerians and the Egyptians sent soldiers into combat barefoot, but the Assyrians, around 1100 B.C., developed a calf-high, laced leather boot with a sole reinforced by metal.
There is evidence that the Assyrians, as well as the Hittites, both renowned as shoemakers, had right- and left-footed military boots. One translation of a Hittite text tells of Telipinu, god of agriculture, in a foul temper because he inadvertently put “his right boot on his left foot and his left boot on his right foot.”
The Assyrian infantry boot was not readily adopted by Greek or Roman soldiers. From fighting barefoot, they progressed to sandals with hobnail soles for additional grip and wear. It was primarily for extended journeys on foot that Greek and Roman men outfitted themselves in sturdy boots. In cold weather, they were often lined with fur and adorned at the top by a dangling animal paw or tail.
Boots also became the customary footwear for nomadic horse-riding communities in cold mountainous regions and on the open steppes. Their sturdiness, and the slight heel that held the foot in the stirrup, guaranteed boots a role as combat gear. In the 1800s, cobblers in Hesse, Germany, introduced knee-high military boots called Hessians, of polished black leather with a tassel, similar to the Romans’ animal tail, hanging from the top. And during the same period, British shoemakers, capitalizing on a military victory, popularized Wellingtons, high boots named for Arthur Wellesley, the “Iron Duke” of Wellington, who presided over Napoleon’s defeat at Waterloo.
French high heels c. 1850 and a gentleman’s boots, the earliest shoes to sport elevated heels.
Boots have been in and out of fashion over the centuries. But one aspect of the boot, its pronounced heel, gave birth to the fashion phenomenon of high-heeled shoes.
High Heels: 16th Century, France
High heels did not appear overnight. They grew inch by inch over decades, with the upward trend beginning in sixteenth-century France. And though the term “high heels” would later become a rubric for women’s elevated footwear, the shoes were first worn by men. In the sixteenth century, there was comparatively little development in women’s shoes because they were hidden under long gowns.
The advantage of an elevated heel on a shoe was first appreciated in horseback riding; a heel secured the foot in the stirrup. Thus, riding boots were the first shoes routinely heeled. And during the Middle Ages, when overcrowding and poor sanitation made human and animal waste street obstacles, boots with thick soles and elevated heels offered a few inches of practical protection as well as a psychological lift.
It was for the purpose of rising above public filth, in fact, that clogs were developed during the Middle Ages. They originated in Northern Europe as an overshoe, made partly or wholly of wood, with a thick base to protect the wearer’s good leather shoes from street debris. In warmer months, they were often worn in place of a snug-fitting leather shoe.
A German shoe called a pump became popular throughout Europe in the mid-1500s. The loose slipper, plain or jeweled, had a low heel, and historians believe its name is onomatopoeic for the “plump, plump” sound its heel made in flapping against a wood floor. A later woman’s slipper, the scuff, would be thus named.
In the mid-1600s, male boots with high heels were de rigueur in France. The fad was started, and escalated, by the Sun King, Louis XIV. In his reign of seventy-two years, the longest in European history, France attained the zenith of its military power and the French court reached an unprecedented level of culture and refinement. None of Louis’s towering achievements, though, could compensate psychologically for his short stature. The monarch at one point had inches added to the heels of his shoes. In a rush to emulate their king, noble men and women at court instructed bootmakers to heighten their own heels. The homage forced Louis into higher heels. When, in time, Frenchmen descended to their anatomical heights, women courtiers did not, thus launching a historic disparity in the heel heights of the sexes.
By the eighteenth century, women at the French court wore brocaded high-heeled shoes with elevations up to three inches. American women, taking the fashion lead from Paris, adopted what was known as the “French heel.” It helped launch a heel polarization in the United States. As women’s heels climbed higher and grew narrower, men’s heels (though not on boots) correspondingly descended. By the 1920s, “high heel” no longer denoted a shoe’s actual heel height but connoted an enticing feminine fashion in footwear.
Loafers. The laceless, slip-on loafer is believed to have evolved from the Norwegian clog, an early overshoe. It is known with greater certainty that the Weejun loafer was named by a cobbler from Wilton, Maine, Henry Bass, after the final two syllables of “Norwegian.”
Bass began making sturdy, over-the-ankle shoes in 1876 for New England farmers. He eventually expanded his line to include a lumberjack shoe and specialty footwear on request. He constructed insulated hiking boots for both of Admiral Byrd’s successful expeditions to the South Pole, and lightweight flying boots for Charles Lindbergh’s historic transatlantic flight. In 1936, Henry Bass was shown a Norwegian slipper moccasin that was fashionable at the time in Europe. He secured permission from the Norwegian manufacturer to redesign the shoe for the American market, and the finished loafer launched his Bass Weejun line of footwear. By the late 1950s, the Bass Weejun was the most popular hand-sewn moccasin ever made, a collegiate status symbol in the ancient tradition of the shoe as statement of social position.
Sneakers: 1910s, United States
The rubber-bottomed athletic shoe whose silent footsteps earned it the name “sneaker” had to await a technological breakthrough: Charles Goodyear’s vulcanization of rubber in 1839. Goodyear proved that the natural gum from the rubber plant did not have to be sticky when warm and brittle when cold. Mixed with sulfur, rubber became a dry, smooth, pliant substance, perfect for footwear such as rain galoshes, one of its first successful uses in apparel in the late 1800s.
Before the turn of the century, rubber was being applied to the soles of leather shoes. And vulcanized rubber soles were being glued to canvas tops to produce what manufacturers advertised as a revolution in athletic footwear. In 1917, U.S. Rubber introduced Keds, the first popularly marketed sneaker, with a name that suggested “kids” and rhymed with ped, the Latin root for “foot.” Those first sneakers were neither all white nor white-soled with black canvas; rather, the soles were black and the canvas was a conservative chestnut brown, because that was the popular color for men’s leather shoes.
The substantive design of sneakers varied little until the early 1960s. Then a former college runner and his coach made a serendipitous observation that ushered in the era of the modern, waffle-soled sneaker. As a miler at the University of Oregon, Phil Knight had preferred to run in European sneakers, lighter in weight than American models. Believing that other track and field athletes would opt to better their performances with high-quality footwear, Knight and coach Bill Bowerman went into the sneaker business in 1962, importing top-notch Japanese models.
The shoes’ reduced weight was an undeniable plus, but Bowerman felt further improvement was possible, especially in the area of traction, a major concern of athletes. Yet he was uncertain what constituted an optimum sole topography. Many manufacturers relied on the shallow peak-and-trough patterns developed for automobile traction. One morning, operating the waffle iron in his home kitchen, Bowerman was inspired to experiment. Stuffing a piece of rubber into the iron, he heated it, producing a deeply waffle-shaped sole pattern that soon would become a world standard for sneakers. In addition to the sole, the new sneakers featured three other innovations: a wedged heel, a cushioned mid-sole as protection against shock, and nylon tops that were lighter and more breathable than the older canvas.
To promote the waffle-soled nylon shoes, named Nikes after the winged Greek goddess of victory, Knight turned to runners in the Olympic trials held in Eugene, Oregon, in 1972. Several marathoners raced in the custom-designed shoes, and advertising copy hailed the sneakers as having been on the feet of “four of the top seven finishers,” omitting to mention that the runners who placed first, second, and third were wearing West Germany’s Adidas sneakers. Nonetheless, waffle-soled sneakers, in a variety of brands, sold so well that by the end of the decade the flatter-soled canvas shoes had been left in the dust.
Pants: Post-15th Century, Italy
St. Pantaleone was a fourth-century Christian physician and martyr known as the “all-merciful.” Beheaded under orders of Roman emperor Diocletian, he became the patron saint of Venice, and a reliquary containing his blood (allegedly still liquid) is housed in the Italian town of Ravello. Pantaleone is probably the only saint to be dubiously honored by having an article of clothing named after him—though how the attribution came about involves folklore more than fact. His name literally means “all lion” (pan, “all”; leone, “lion”), and though he was a clever and pious physician, he passed inexplicably into Italian folklore as a lovable but simpleminded buffoon, decidedly unsaintly in character.
It is the comic Pantaleone of folklore, through behavior and attire, who eventually gave his name to pants. An abject slave to money, he starved servants until their skeletons cast no shadow, and though he valued a gentlemanly reputation, he flirted with women, who publicly mocked him. These traits are embodied in a gaunt, swarthy, goateed Pantaleone of the sixteenth-century Italian commedia dell’arte. The character wore a pair of trousers, tight from ankle to knee, then flaring out like a petticoat.
The comedy genre was carried by bands of traveling actors to England and France. And the Pantaleone character always appeared in exaggerated trousers. In France, the character and his pants came to be called Pantalon; in England, Pantaloon. Shakespeare helped popularize the British term in As You Like It.
In the eighteenth century, when pantaloons—by then a stylized form of knee breeches—reached the shores of America, their name was shortened to “pants.” And in this century, the fashion industry, when referring to stylish women’s trousers, has further abbreviated the word to “pant.”
Whereas St. Pantaleone circuitously lent his name to pants, the ancient Celts donated their word for men’s leg coverings, trews, to “trousers,” while the Romans contributed their word for a baggy type of breeches, laxus, meaning “loose,” to “slacks.” The one convenience all these ancient leg coverings lacked was pockets.
Pockets. Simple and indispensable as pockets are, it is hard to imagine that they did not exist before the late 1500s. Money, keys, and personal articles were wrapped in a piece of cloth, an impromptu purse, and tucked into any convenient part of a person’s costume.
One popular place for a man in the 1500s to carry his personal effects was his codpiece. These frontal protrusions, which fell from fashion when their exaggerated size became ludicrous and cumbersome, originated as a convenient opening, or fly, to trousers. Fashion of the day dictated that the fastened flap be stuffed with cloth, and it became an ideal place to carry the special cloth containing a man’s valuables. When the codpiece went out of fashion, the cloth did not move far: it became a small bag, drawn up at the top with a string, that hung from a man’s waist. The cloth was on its way to becoming the lining that is a pocket.
The first pockets in trousers appeared near the close of the 1500s. They evolved in two steps. At first, an opening was made as a side seam in a man’s tight-fitting trousers. Into the opening a man inserted the cloth pouch containing his belongings. The independent pouch soon became a permanent, sewn-in feature of trousers.
From drawstring bag to waist purse, the evolution of pants pockets.
Once introduced, pockets proved their convenience and utility. In the next century, they became a design feature of men’s and women’s capes and coats. At first, they were located down at the hem of an overcoat; only later did they move up to the hip.
Suspenders. Before suspenders were used to hold up pants, they were worn around the calf to support socks, not yet elasticized to stay up on their own. Trouser suspenders were introduced in England in the eighteenth century. First called “gallowses,” then “braces,” the straps, worn over the shoulders, buttoned to trousers. They were given their graphic name “suspenders” by eighteenth-century New Englanders who adopted the British fashion.
Knickers. Like early breeches, knickers were a form of loose-fitting trousers gathered just below the knee. Their name originated as an abbreviation of Knickerbocker, a Dutch surname prevalent among the early settlers of New Amsterdam. The loose trousers were worn by early immigrants. But they did not achieve their nickname until nineteenth-century writer Washington Irving created the fictitious author Diedrich Knickerbocker.
In his humorous two-volume 1809 work, A History of New York from the Beginning of the World to the End of the Dutch Dynasty, Knickerbocker, a phlegmatic Dutch burgher, wrote about Dutchmen clad in breeches that buckled just below the knee. Many examples were illustrated throughout the text. Americans copied the costume, especially as pants for young boys.
Leotard. Similar to the centuries-old tight-fitting hose worn by men throughout Europe, leotards were named for nineteenth-century French trapeze artist Jules Léotard. In the clinging costume that became his trademark, Léotard astonished audiences with his aerial somersault, as well as his risqué outfit. He enjoyed a large female following. And he advised men that if they also wished “to be adored by the ladies,” they should “put on a more natural garb, which does not hide your best features.”
Bloomers. A pair of baggy trousers gathered at the ankles and worn with a short belted tunic was sported by Amelia Jenks Bloomer of Homer, New York, in 1851. She had copied the pants costume from a friend, Elizabeth Smith Miller. But it was Mrs. Bloomer, an early feminist and staunch supporter of reformer Susan B. Anthony, who became so strongly associated with the masculine-type outfit that it acquired her name.
Pants, then exclusively men’s wear, appealed to Amelia Bloomer. She advocated female dress reform on the grounds that the large hoop skirts of her day (essentially seventeenth-century farthingales, in which the hoop had dropped from the hips to the hem) were immodest, drafty, and cumbersome—not only to maneuver in but also to manage when attending to bodily functions. Matters were made worse by the stiff linen and horsehair crinoline in vogue in the 1840s, worn to further exaggerate the femininity of a dress.
Amelia Bloomer refused to wear the popular fashion. Starting in 1851, she began to appear in public in baggy pants and short tunic. And as more women joined the campaign for the right to vote, Mrs. Bloomer turned the trousers into a uniform of rebellion. The pants trend received additional impetus from the bicycle craze of the 1880s and ’90s. Skirts frequently caught in a bike’s cogs and chains, resulting in minor or serious accidents. Bloomers became ideal riding attire, challenging the long tradition of who in the family wore the pants.
Blue Jeans: 1860s, San Francisco
Before jeans were blue, even before they were pants, “jeans” was a twilled cotton cloth, similar to denim, used for making sturdy work clothes. The textile was milled in the Italian town of Genoa, which French weavers called Gênes, the origin of our word “jeans.”
The origin of blue jeans, though, is really the story of a seventeen-year-old immigrant tailor named Levi Strauss. When Strauss arrived in San Francisco during the gold rush of the 1850s, he sold much-needed canvas for tents and covered wagons. An astute observer, he realized that miners went through trousers, literally and quickly, so Strauss stitched some of his heavy-duty canvas into overalls.
Though coarse and stiff, the pants held up so well that Strauss was in demand as a tailor.
In the early 1860s, he replaced canvas with denim, a softer fabric milled in Nîmes, France. Known in Europe as serge de Nîmes, in America the textile’s name was pronounced “denim.” And Strauss discovered that dyeing the neutral-colored denim pants indigo blue to minimize soil stains greatly increased their popularity. Cowboys, to achieve a snug fit, would put on a pair of Strauss’s pants, soak in a horse-watering trough, then lie in the sun to shrink-dry the material.
While denim pants resisted tearing, miners complained that the weight of tools often caused pockets to split at the seams. Strauss solved that problem by borrowing an idea from a Russian-Jewish tailor, Jacob Davis. In 1873, copper rivets appeared at each pocket seam, as well as one rivet at the base of the fly to prevent the crotch seam from opening when a miner squatted panning for gold.
That crotch rivet, though, generated a different kind of complaint. Miners, unencumbered by the etiquette of underwear, found that squatting too near a campfire heated the rivet to give a painful burn. The crotch rivet was abandoned.
Pocket rivets remained in place until 1937, when complaints of still a different nature were voiced. Children in many parts of the country routinely wore jeans to school. And principals reported that back-pocket rivets were scratching and gouging wooden desks and benches beyond repair. Pocket rivets were abandoned.
Blue jeans, strictly utilitarian, first became a fashion item in 1935. That year, an advertisement appeared in Vogue. It featured two society women in snug-fitting jeans, and it kicked off a trend named “western chic.” The fad was minor compared to the one that erupted out of the designer-jeans competition of the 1970s. The pants once intended for work became the costume of play, creating a multimillion-dollar industry. At the height of the designer-jeans war, Calvin Klein jeans, for instance, despite their high price of fifty dollars (or because of it), were selling at the rate of 250,000 pairs a week.
Shirt: Post-16th Century, Europe
Fashion historians point out that the modern waist-length, tuck-in shirt originated in response to pants, as the blouse came into being to complement the skirt. Previously, a man’s or woman’s “shirt” was an inclusive body covering, reaching to below the knees or longer and belted at the waist. Pants and, later, skirts made below-the-waist shirt material redundant, thus, in effect, creating the need for new garments.
The male shirt came first, in the 1500s in Western Europe. It was worn directly over the flesh, for the undershirt would not appear as a standard article of attire until the 1800s. The blouse, on the other hand, emerged much later, in the second half of the nineteenth century. It was loose, with high collar, full sleeves, and fitted cuffs.
As women were beginning to hang blouses in their closets, a new garment appeared which complemented the shirt, and later the blouse: the cardigan sweater.
A collarless wool sweater that buttoned down the front, it was named for James Thomas Brudenell, seventh earl of Cardigan. On October 25, 1854, as a major general in the British Army during the Crimean War, Brudenell led his men in the famous charge of the Light Brigade. The earl was one of the few survivors. Although the event was immortalized in a poem by Tennyson, the seventh earl of Cardigan is remembered today only for the knitted woolen sweater he wore and popularized.
Button-down Collar. In the 1890s, the standard attire of a British polo player was white flannel trousers, white wool sweater, and long-sleeved white shirt. The shirt had a full, straight collar. Untethered, the collar tended to flap in response to a breeze or the up-and-down jouncing of a horse. Players routinely asked seamstresses to batten down their collars, and two buttons became the most popular solution to the problem.
In 1900, John Brooks, son of the founder of the Brooks Brothers clothing concern, observed the button-down collars. He dubbed the look the “Polo collar,” and a new shirt was added to the Brooks Brothers line.
The style became a classic. And the word “button-down” found its way into the language: in a literal sense, as in Mary McCarthy’s short story “The Man in the Button-Down Shirt”; and figuratively, as in the title of a comedy album, The Button-Down Mind of Bob Newhart. Although it was traditionally popular to name collars after the people who popularized them—the Lord Byron collar, the Peter Pan collar, the Nehru collar, the Windsor collar—the Polo collar became best known by its function: button-down.
Lacoste Shirt. Whereas a polo match inspired John Brooks to create the button-down collar, an alligator-skin suitcase in the window of a Boston store inspired French tennis star René Lacoste to produce a line of shirts bearing a crocodile trademark.
In 1923, on an American tour with the French Davis Cup tennis team, the nineteen-year-old Lacoste spotted the alligator luggage in a store window. He boasted to teammates that he’d treat himself to the expensive bag if he won his upcoming matches. Lacoste lost. And he did not buy the alligator-skin bag. In jest, his teammates took to calling him le crocodile.
René Lacoste retired from tennis in 1929. Four years later, when he began designing tennis shirts, he patented his former nickname as a trademark. And although the garments today are popularly called “alligator shirts,” the name’s a misnomer. Lacoste had researched his reptiles. The long-snouted animal on the shirt is technically a crocodile, of the zoological family Crocodylidae. An alligator is a reptile with a shorter, blunter snout, belonging to a separate family of crocodilians.
Neckties originated in France as a fashion affectation and quickly spawned a variety of styles, knots, and names (clockwise): Puff, Windsor, Four-in-Hand, and the Bowtie.
Necktie: 17th Century, France
This functionless, decorative, and least comfortable article of men’s attire is of military origin.
The first recorded neckwear appeared in the first century B.C. In the heat of day, Roman soldiers wore focale—scarves soaked in water and wrapped around the neck to cool down the body. This completely utilitarian garment, however, never caught on sufficiently—in either a practical or a decorative sense—to become a standard article of menswear.
The origin of the modern necktie is traceable to another military custom.
In 1668, a regiment of Croatian mercenaries in the service of Austria appeared in France wearing linen and muslin scarves about their necks. Whether the scarves were once functional, as were focale, or merely a decorative accent to an otherwise bland military uniform, has never been established. History does record that fashion-conscious French men and women were greatly taken with the idea. They began to appear in public wearing neckwear of linen and lace, knotted in the center, with long flowing ends. The French called the ties cravates, their name for the “Croats” who inspired the sartorial flair.
The fashion spread quickly to England. But the fad might have died out if the extravagant, pleasure-loving British monarch Charles II had not by his own example made neckwear a court must. And had the times not been ripe for a lighthearted fashion diversion. Londoners had recently suffered through the plague of 1665 and the devastating citywide fire of 1666. The neckwear fad swept the city almost as fast as the flames of the great conflagration.
The trend was reinforced in the next century by Beau Brummell, who became famous for his massive neckties and innovative ways of tying them. In fact, the proper way to tie neckwear became a male obsession, discussed, debated, and hotly argued in conversation and the press. A fashion publication of the day listed thirty-two different knots. Knots and ties were named for famous people and fashionable places, such as the racecourse at Ascot. Since that time, neckwear in some form—belt-long or bowtie-short, plain or fancy, rope-narrow or chest-broad—has been continually popular.
The bow tie, popularized in America in the 1920s, may also have originated among Croatian men.
For many years, fashion historians believed the small, detachable bow tie developed as one of many variations on longer neckwear. But that was opened to debate by the discovery that, for centuries, part of the costume of men in areas of Croatia consisted of bow ties. They were made from a square handkerchief, folded along the diagonal, pulled into a bow knot, then attached with a cord around the neck.
Suit: 18th Century, France
Today a man may wear a sport jacket and slacks of different fabric and color, but the outfit is never called a suit. By modern definition, a suit consists of matching jacket and trousers, occasionally with a vest. But this was not the suit’s original definition. Nor was a suit worn as business attire.
The tradition of a man’s suit originated in France, in the eighteenth century, with the fashion of wearing a coat, waistcoat, and trousers of different fabrics, patterns, and colors. The cut was loose, bordering on baggy, and the suit was intended as informal country wear and known as a “lounge suit.” In the 1860s, it became fashionable to have all components of a suit made in matching fabric.
Because country lounge suits were also worn for horseback riding, tailors were often requested to slit the jacket up the back—the origin of the back slit in modern suits. Another suit feature originated for utilitarian purposes: the lapel hole, truly a buttonhole and not intended for a flower, since on cold days a man turned up the collar of his lounge suit and buttoned it closed.
Gentlemen found lounge suits so comfortable, they began wearing them in the city as well. Tailors improved the cut, and by the 1890s, the leisure lounge suit had become respectable business attire.
Tuxedo: 1886, Tuxedo Park, New York
On the night the tuxedo made its debut, slightly more than a hundred years ago, it should have been pronounced scandalous attire, inappropriate for a formal occasion. The tailless coat was after all an affront to the customary black tie and tails of the day, formal wear that originated among English dandies in the early 1800s. However, the coat was designed and worn by a family whose name and position tempered the social reaction.
The tuxedo story begins in the summer of 1886, in Tuxedo Park, New York, a hamlet about forty miles north of Manhattan. Pierre Lorillard IV, a blueblood New Yorker of French extraction, heir to the Lorillard tobacco fortune, sought something less formal than tails to wear to the annual Autumn Ball. He commissioned a tailor to prepare several tailless jackets in black, modeled after the scarlet riding jackets then popular with British fox hunters. There is some evidence that Lorillard was inspired by the fashionable Edward VII, who as Prince of Wales had ordered the tails cut off his coat during a visit to India because of oppressive heat.
On the night of the ball, Pierre Lorillard suddenly experienced a lack of daring and declined to wear the jacket of his design. Instead, his son, Griswold, and several of Griswold’s friends, donned the tailless black dinner jackets, and with a nod to the British riding coat that had inspired the creation, they wore scarlet vests.
Given the 1880s’ highly restrictive code of proper attire, the splash of scarlet and the affront of taillessness should probably have done more than just raise eyebrows. The ad hoc costume might well have passed quickly into oblivion, had it not been designed by a Lorillard and worn by a Lorillard, in a town built on land owned largely by the Lorillard family. Under the circumstances, the informal wear was copied and eventually became standard evening attire.
The American Formalwear Association claims that the Lorillards’ act of rebellion launched a multimillion-dollar industry. In 1985, for instance, the sale and rental of tuxedos and their accessories grossed $500 million. Eighty percent of all rentals were for weddings, the next-largest rental category being high school proms.
For weddings and proms, one standard tuxedo accessory has become the cummerbund, a wide sash worn around the waist. It originated in India as part of a man’s formal dress. The Hindu name for the garment was kamarband, meaning “loin band,” since it was once worn lower down on the abdomen as a token of modesty. In time, the garment moved up the body to the waist, and it was appropriated by the British, who Anglicized the name to cummerbund.
The tuxedo took its name, of course, from the town in which it bowed. And today the word “tuxedo” has formal and glamorous connotations. But the term has a frontier origin, going back to the Algonquian Indians who once inhabited the area that is now Tuxedo Park. The regional Algonquian sachem, or chief, was named P’tauk-Seet (with a silent P), meaning “wolf.” In homage, the Indians referred to the area as P’tauk-Seet. Colonists, though, often phoneticized Indian words, and a 1765 land survey of the region reveals that they recorded P’tauk-Seet as “Tucksito.” By the year 1800, when Pierre Lorillard’s grandfather began acquiring land in the area, the name had already become Tuxedo. Thus, “tuxedo” derives from the Indian for “wolf,” which may or may not say something about a man who wears one.
Hats: Antiquity, Europe and Asia
The similarities in sound and spelling between the words “hat,” a head covering, and “hut,” a primitive home, are not coincidental.
Long before Western man designed clothes for the body, he constructed thatched shelters. A haet, or hutt, offered protection from the elements and from the darkness of night. And when he protected his head—from heat, rain, or falling debris—the covering, whatever its composition, was also labeled haet or hutt, both of which etymologists translate as “shelter” and “protection.”
The association between a head covering and a primitive home goes further than hat equals hut. The earliest inhabitants of the British Isles wore a conical hat made of bound rush, called a cappan. They lived in a shelter, also constructed of rush, known as a cabban. The two terms are, respectively, the origins of our words “cap” and “cabin.” The evolution of language is replete with examples of peoples borrowing words for existing objects to christen new creations.
The first recorded use of a hat with a brim was in Greece in the fifth century B.C. Worn by huntsmen and travelers for protection from sun and rain, the felt petasos was wide-brimmed, and when not on the head it hung down the back on a cord. The petasos was copied by the Etruscans and the Romans, and was popular well into the Middle Ages.
The Greeks also wore a brimless hat shaped like a truncated cone. They copied the design from the Egyptians and named it pilos, for “felt,” the material of its construction. It appeared with variations throughout European cultures, and with the rise of universities in the late Middle Ages, the pileus quadratus, or four-sided felt hat, became the professional head covering for scholars—and later, as the mortarboard, was worn by high school and college students at graduation ceremonies.
Hats today are more popular with women than with men, but this was not always the case. In classical times, women rarely wore them, while men kept them on indoors and in churches and cathedrals. The customs continued into the sixteenth century, when the popularity of false hair and the mushrooming size of wigs made wearing hats inconvenient if not impossible. As the fad of wigs died out, men resumed the practice of wearing hats, though never again with the devotion of the past. And three customs underwent complete reversals: a man never wore a hat indoors, in church, or in the presence of a lady.
It was at this time, the late 1700s, that women in large numbers began to wear hats—festooned with ribbons, feathers, and flowers, and trimmed in lace. Previously, if a European woman wore a hat at all, it was a plain cap indoors, a hood outside.
Women’s hats that tied under the chin became bonnets. The word “bonnet” already existed, but throughout the late Middle Ages it denoted any small, soft hat; only in the eighteenth century did it come to signify a particular kind of feminine headwear. Milan became the bonnet capital of Europe, with Milanese hats in great demand. So much so that all women’s headwear fell under the British rubric “millinery,” and a Milaner craftsman became a milliner.
Top Hat: 1797, England
John Hetherington, a London haberdasher with a fashionable shop on the Strand, emerged from his store in the twilight hours of January 15, 1797, wearing a new hat of his own design. The London Times reported that Hetherington’s black stovepipe hat drew a crowd so large that a shoving match erupted; one man was pushed through a storefront window. Hetherington was arrested for disturbing the peace. Within a month, though, he had more orders for top hats than he could fill.
British costume historians contend that Hetherington’s was the world’s first top hat. Their French counterparts claim that the design originated a year earlier in Paris and that John Hetherington pilfered it. The only evidence supporting the Parisian origin, however, is a painting by French artist Charles Vernet, Un Incroyable de 1796, which depicts a dandy in a Hetherington-like stovepipe hat. Though artists traditionally have presaged trends, the British believe the painting may be more an example of an artist’s antedating a work.
Fedora. A soft felt crown with a center crease and a flexible brim mark the fedora, whose name is derived from a hat worn by a character in an 1882 French play. Written by playwright Victorien Sardou, whose dramas were the rage of Paris in the nineteenth century, Fédora was created for its star, Sarah Bernhardt, and it established a new trend in hats. A fedora, with a veil and feather, became a favorite woman’s bicycling hat.
Panama. Though it would seem logical that the Panama hat originated in the Central American country it is named for, it did not. The lightweight straw hat, made of finely plaited jipijapa leaves, originated in Peru. Panama became a major distribution center. North American engineers first encountered the hats in Panama during the construction of the Panama Canal, completed in 1914, and considered them a local product.
Derby. In 1780, Edward Smith Stanley, the twelfth earl of Derby, instituted an annual race for three-year-old horses, the Derby, to be held at Epsom Downs, near London. Popular at that time among men were stiff felt hats with dome-shaped crowns and narrow brims. Regularly worn to the Derby, the hats eventually acquired the race’s name.
Stetson. In the 1860s, Philadelphia haberdasher John B. Stetson was searching for a way to earn a profit from his hat business. Recalling a vacation to the Midwest and the number of wealthy cattle ranchers he’d met there, Stetson decided to produce an oversized hat fit for “cattle kings.” The “ten-gallon” Western cowboy hat, named “The Boss of the Plains,” transformed Stetson’s business into a success and became a classic symbol of the Wild West and of the men—and women—who tamed it. Buffalo Bill, General Custer, and Tom Mix wore Stetsons, as did Annie Oakley and Calamity Jane.
Gloves: 10,000 Years Ago, Northern Europe
Gloves evolved from the desire to protect the hands from cold and from heavy manual labor. Among the numerous examples discovered in parts of Northern Europe are “bag gloves,” sheaths of animal skin that reach to the elbow. These mittens are at least ten thousand years old.
The earliest peoples to inhabit the warm lands bordering the Mediterranean used gloves for construction and farming. Among these southerners, the Egyptians, around 1500 B.C., were the first to make gloves a decorative accessory. In the tomb of King Tutankhamen, archaeologists retrieved a pair of soft linen gloves wrapped in layers of cloth, as well as a single tapestry glove woven with colored threads. Strings around the tops of the gloves indicate they were tied to the wrist. And the separate fingers and thumb leave no doubt that hand-shaped gloves were used at least 3,500 years ago.
Regardless of the warmth of the climate, every major civilization eventually developed both costume and work gloves. In the fourth century B.C., the Greek historian Xenophon commented on the Persian production of exquisitely crafted fur costume gloves; and in Homer’s Odyssey, Ulysses, returning home, finds his father, Laertes, laboring in the garden, where “gloves secured his hands to shield them from the thorns.”
It was the Anglo-Saxons, calling their heavy leather hand covering glof, meaning “palm of hand,” who gave us the word “glove.”
Purse: Pre-8th Century B.C., Southern Europe
If you purse your lips, you are contracting them into wrinkles and folds, similar in appearance to the mouth of a drawstring bag, ancient people’s earliest purse. But it was the material from which those early bags were made, hide, or byrsa in Greek, that is the origin of the word “purse.”
The Romans adopted the Greek drawstring byrsa unaltered, Latinizing its name to bursa. The early French made it bourse, which also came to mean the money in the purse, and then became the name of the stock exchange in Paris, the Bourse.
Until pockets appeared in clothing in the sixteenth century, men, women, and children carried purses—sometimes no more than a piece of cloth that held keys and other personal effects, or at the other extreme, elaborately embroidered and jeweled bags.
Handkerchief: Post-15th Century, France
During the fifteenth century, French sailors returned from the Orient with large, lightweight linen cloths that they had seen Chinese field-workers use as protective head covers in the sun. Fashion-minded French women, impressed with the quality of the linen, adopted the article and the practice, naming the headdress a couvrechef, meaning “covering for the head.” The British took up the custom and Anglicized the word to “kerchief.” Since these coverings were carried in the hand until needed in sunlight, they were referred to as “hand kerchiefs.”
Since upper-class European women, unlike the Chinese field-workers, already carried sun-shielding parasols, the hand kerchief was from the start a fashion affectation. This is evident in numerous illustrations and paintings of the period, in which elaborately decorated hand kerchiefs are seldom worn but prominently carried, waved, and demurely dropped. Hand kerchiefs of silk, some with silver or gold thread, became so costly in the 1500s that they often were mentioned in wills as valuables.
It was during the reign of Elizabeth I that the first lace hand kerchiefs appeared in England. Monogrammed with the name of a loved one, the articles measured four inches square, and had a tassel dangling from one corner. For a time, they were called “true love knots.” A gentleman wore one bearing his lady’s initials tucked into his hatband; and she carried his love knot between her breasts.
When, then, did the Chinese head cover, which became the European hand kerchief, become a handkerchief, held to the nose? Perhaps not long after the hand kerchief was introduced into European society. However, the nose-blowing procedure was quite different then than today.
Throughout the Middle Ages, people cleared their noses by forcefully exhaling into the air, then wiped their noses on whatever was handy, most often a sleeve. Early etiquette books explicitly legitimized the practice. The ancient Romans had carried a cloth called a sudarium, meaning “sweat cloth,” which was used both to wipe the brow on hot days and to blow the nose. But the civility of the sudarium fell with the Roman Empire.
The first recorded admonitions against wiping the nose on the sleeve (though not against blowing the nose into the air) appear in sixteenth-century etiquette books—during the ascendancy of the hand kerchief. In 1530, Erasmus of Rotterdam, a chronicler of customs, advised: “To wipe your nose with your sleeve is boorish. The hand kerchief is correct, I assure you.”
From that century onward, hand kerchiefs made contact, albeit tentatively at first, with the nose. The nineteenth-century discovery of airborne germs did much to popularize the custom, as did the machine age mass production of inexpensive cotton cloths. The delicate hand kerchief became the dependable handkerchief.
Fan: 3000 B.C., China and Egypt
Peacock-feather fans, and fans of papyrus and palm fronds: these decorative and utilitarian breeze-stirrers developed simultaneously and independently about five thousand years ago in two disparate cultures. The Chinese turned fans into an art; the Egyptians, into a symbol of class distinction.
Numerous Egyptian texts and paintings attest to the existence of a wealthy man’s “fan servant” and a pharaoh’s “royal fan bearer.” Slaves, both white-skinned and black-skinned, continually swayed huge fans of fronds or woven papyrus to cool masters. And the shade cast on the ground by opaque fans was turf forbidden to commoners. In semitropical Egypt, the intangibles of shade and breeze were desiderata that, owing to the vigilance of slaves, adorned the wealthy as prestigiously as attire.
In China, fans cooled more democratically. And the fans themselves were considerably more varied in design and embellishment. In addition to the iridescent peacock-feather fan, the Chinese developed the “screen” fan: silk fabric stretched over a bamboo frame and mounted on a lacquered handle. In the sixth century A.D., they introduced the screen fan to the Japanese, who, in turn, conceived an ingenious modification: the folding fan.
The Japanese folding fan consisted of a solid silk cloth attached to a series of sticks that could collapse in on each other. Folding fans, depending on their fabric, color, and design, had different names and prescribed uses. Women, for instance, had “dance” fans, “court” fans, and “tea” fans, while men carried “riding” fans and even “battle” fans.
The Japanese introduced the folding fan to China in the tenth century. At that point, it was the Chinese who made a clever modification of the Japanese design. Dispensing with the solid silk cloth stretched over separate sticks, the Chinese substituted a series of “blades” in bamboo or ivory. These thin blades alone, threaded together at their tops by a ribbon, constituted the fan, which was also collapsible. Starting in the fifteenth century, European merchants trading in the Orient returned with a wide variety of decorative Chinese and Japanese fans. By far the most popular model was the blade fan, or brisé, with blades of intricately carved ivory strung together with a ribbon of white or red silk.
Safety Pin: 1000 B.C., Central Europe
In the modern safety pin, the pinpoint is completely and harmlessly concealed in a metal sheath. Its ancestor had its point cradled away, though somewhat exposed, in a curved wire. This bent, U-shaped device originated in Central Europe about three thousand years ago and marked the first significant improvement in design over the straight pin. Several such pins in bronze have been unearthed.
Straight pins, of iron and bone, had been fashioned by the Sumerians around 3000 B.C. Sumerian writings also reveal the use of eye needles for sewing. Archaeologists, examining ancient cave drawings and artifacts, conclude that even earlier peoples, some ten thousand years ago, used needles, of fish spines pierced in the top or middle to receive the thread.
By the sixth century B.C., Greek and Roman women fastened their robes on the shoulder and upper arm with a fibula. This was an innovative pin in which the middle was coiled, producing tension and providing the fastener with a spring-like opening action. The fibula was a step closer to the modern safety pin.
In Greece, straight stick pins were used as ornamental jewelry. “Stilettos,” in ivory and bronze, measuring six to eight inches, adorned hair and clothes. Aside from belts, pins remained the predominant way to fasten garments. And the more complex wraparound and slip-on clothing became, the more numerous were the fastening pins required. A palace inventory of 1347 records the delivery of twelve thousand pins for the wardrobe of a French princess.
Not surprisingly, the handmade pins were often in short supply. The scarcity could drive up prices, and there are instances in history of serfs taxed to provide feudal lords with money for pins. In the late Middle Ages, to remedy a pin shortage and stem the overindulgence in and hoarding of pins, the British government passed a law allowing pinmakers to market their wares only on certain days of the year. On the specified days, upper- and lower-class women, many of whom had assiduously saved “pin money,” flocked to shops to purchase the relatively expensive items. Once the price of pins plummeted as a result of mass machine production, the phrase “pin money” was equally devalued, coming to mean “a wife’s pocket money,” a pittance sufficient to purchase only pins.
The esteemed role of pins in the history of garments was seriously undermined by the ascendancy of the functional button.
Garment pins from the Bronze Age (top); three Roman safety pins, c. 500 B.C. (middle); modern version. Pinned garments gave way to clothes that buttoned from neck to hem.
Button: 2000 B.C., Southern Asia
Buttons did not originate as clothes fasteners. They were decorative, jewelry-like disks sewn on men’s and women’s clothing. And for almost 3,500 years, buttons remained purely ornamental; pins and belts were viewed as sufficient to secure garments.
The earliest decorative buttons date from about 2000 B.C. and were unearthed at archaeological digs in the Indus Valley. They are seashells, of various mollusks, carved into circular and triangular shapes, and pierced with two holes for sewing them to a garment.
The early Greeks and Romans used shell buttons to decorate tunics, togas, and mantles, and they even attached wooden buttons to pins that fastened to clothing as a brooch. Elaborately carved ivory and bone buttons, many leafed with gold and studded with jewels, were retrieved from European ruins. But nowhere, in illustration, text, or garment fragment, is there the slightest indication that an ancient tailor conceived the idea of opposing a button with a buttonhole.
When did the noun “button” become a verb? Surprisingly, not until the thirteenth century.
Buttonhole. The practice of buttoning a garment originated in Western Europe, and for two reasons.
In the 1200s, baggy, free-flowing attire was beginning to be replaced with tighter, form-fitting clothing. A belt alone could not achieve the look, and while pins could (and often did), they were required in quantity; and pins were easily misplaced or lost. With sewn-on buttons, there was no daily concern over finding fasteners when dressing.
The second reason for the introduction of buttons with buttonholes involved fabric. Also in the 1200s, finer, more delicate materials were being used for garments, and the repeated piercing of fabrics with straight pins and safety pins damaged the cloth.
Thus, the modern, functional button finally arrived. But it seemed to make up for lost time with excesses. Buttons and buttonholes appeared on every garment. Clothes were slit from neck to ankle simply so that a parade of buttons could be used to close them. Slits were made in impractical places—along sleeves and down legs—just so the wearer could display buttons that actually buttoned. And buttons were contiguous, as many as two hundred closing a woman’s dress—enough to discourage undressing. If searching for misplaced safety pins was time-consuming, buttoning garments could not have been viewed as a time-saver.
Statues, illustrations, and paintings of the fourteenth and fifteenth centuries attest to button mania. The mode peaked in the next century, when buttons, in gold and silver and studded with jewels, were sewn on clothing merely as decorative features—as before the creation of the buttonhole.
In 1520, French king Francis I, builder of Fontainebleau castle, ordered from his jeweler 13,400 gold buttons, which were fastened to a single black velvet suit. The occasion was a meeting with England’s Henry VIII, held with great pomp and pageantry on the Field of Cloth of Gold near Calais, where Francis vainly sought an alliance with Henry.
Henry himself was proud of his jeweled buttons, which were patterned after his rings. The buttoned outfit and matching rings were captured on canvas by the German portrait painter Hans Holbein.
The button craze was somewhat paralleled in this century, in the 1980s, though with zippers. Temporarily popular were pants and shirts with zipped pockets, zipped openings up the arms and legs, zipped flaps to flesh, and myriad other zippers to nowhere.
Right and Left Buttoning. Men button clothes from right to left, women from left to right. Studying portraits and drawings of buttoned garments, fashion historians have traced the practice back to the fifteenth century. And they believe they understand its origin.
Men, at court, on travels, and on the battlefield, generally dressed themselves. And since most humans are right-handed, the majority of men found it expeditious to have garments button from right to left.
Women who could afford the expensive buttons of the day had female dressing servants. Maids, also being predominantly right-handed, and facing buttons head-on, found it easier to fasten their mistresses’ garments if the buttons and buttonholes were sewn on in a mirror-image reversal. Tailors complied, and the convention has never been altered or challenged.
Judson’s original hook-and-eye zipper; created to replace shoelaces.
Zipper: 1893, Chicago
The zipper had no ancient counterpart, nor did it originate in a sudden blaze of ingenuity. It emerged out of a long and patient technological struggle, requiring twenty years to transform the idea into a marketplace reality, and an additional ten years to persuade people to use it. And the zipper was not conceived as a clothes fastener to compete with buttons, but as a slide to close high boots, replacing the long, buttonhooked shoelaces of the 1890s.
On August 29, 1893, a mechanical engineer living in Chicago, Whitcomb Judson, was awarded a patent for a “clasp-locker.” At the time, there was nothing in the patent office files that even remotely resembled Judson’s prototype zipper. Two clasp-lockers were already in use: one on Judson’s own boots, the other on the boots of his business partner, Lewis Walker.
Although Judson, who held a dozen patents for motors and railroad brakes, had an established reputation as a practical inventor, he found no one interested in the clasp-locker. The formidable-looking device consisted of a linear sequence of hook-and-eye locks, resembling a medieval implement of torture more than it did a modern time-saver.
To drum up interest, Judson put the clasp-locker on display at the 1893 Chicago World’s Fair. But the twenty-one million viewers who poured into the fairgrounds flocked to the world’s first electric Ferris wheel and the titillating “Coochee-Coochee” sideshow, featuring the belly dancer Little Egypt. The world’s first zipper was ignored.
Judson and Walker’s company, Universal Fastener, did receive an order from the United States Postal Service for twenty zipper mail bags. But the zippers jammed so frequently that the bags were discarded. Although Whitcomb Judson continued making improvements on his clasp fastener, perfection of the device fell to another inventor: Swedish-American engineer Gideon Sundback. Abandoning Judson’s hook-and-eye design, Sundback, in 1913, produced a smaller, lighter, more reliable fastener, which was the modern zipper. And the first orders for Sundback’s zippers came from the U.S. Army, for use on clothing and equipment during World War I.
At home, zippers appeared on boots, money belts, and tobacco pouches. Not until around 1920 did they begin to appear on civilian clothing.
The early zippers were not particularly popular. A metal zipper rusted easily, so it had to be unstitched before a garment was washed and sewn back in after the garment had dried. Another problem involved public education: Unlike the more evident insertion of a button into a buttonhole, something even a child quickly mastered, the fastening of a zipper was not obvious to the uninitiated. Zippered garments came with small instruction manuals on the operation and maintenance of the device.
In 1923, the B. F. Goodrich Company introduced rubber galoshes with the new “hookless fasteners.” Mr. Goodrich himself is credited with coining the echoic name “zipper,” basing it on the “z-z-z-zip” sound his own boots made when closing. Goodrich renamed his new product “Zipper Boots,” and he ordered 150,000 zippers from the Hookless Fastener Company, which would later change its name to Talon. The unusual name “zipper,” as well as increased reliability and rustproofing, greatly helped popularize zippers.
Concealed under a flap, the zipper was a common fastener on clothing by the late ’20s. It became a fashion accessory in its own right in 1935, when renowned designer Elsa Schiaparelli introduced a spring clothing collection which The New Yorker described as “dripping with zippers.” Schiaparelli was the first fashion designer to produce colored zippers, oversized zippers, and zippers that were decorative and nonfunctional.
After a slow birth and years of rejection, the zipper found its way into everything from plastic pencil cases to sophisticated space suits. Unfortunately, Whitcomb Judson, who conceived a truly original idea, died in 1909, believing that his invention might never find a practical application.
Velcro: 1948, Switzerland
For several decades, it appeared that no invention could ever threaten the zipper’s secure position in the garment industry. Then along came Velcro, one man’s attempt to create synthetic burs like the small prickly thistle balls produced as seedpods on cocklebur bushes.
During an Alpine hike in 1948, Swiss mountaineer George de Mestral became frustrated by the burs that clung annoyingly to his pants and socks. While picking them off, he realized that it might be possible to produce a fastener based on the burs to compete with, if not obsolete, the zipper.
Today a Velcro fastener consists of two nylon strips, one containing thousands of tiny hooks, the other, tiny eyes. Pressing the strips together locks the hooks into the eyes. To perfect that straightforward idea required ten years of effort.
Textile experts de Mestral consulted scoffed at the idea of man-made burs. Only one, a weaver at a textile plant in Lyon, France, believed the idea was feasible. Working by hand on a special undersized loom, he managed to produce two strips of cotton fabric, one with tiny hooks, the other with smaller eyes. Pressed together, the strips stuck adequately and remained united until they were pulled apart. De Mestral christened the sample “locking tape.”
Developing equipment to duplicate the delicate handwork of the weaver required technological advances. Cotton was replaced by the more durable nylon, for repeated opening and closing of the original strips damaged the soft hooks and eyes. One significant breakthrough came when de Mestral discovered that pliant nylon thread, woven under infrared light, hardened to form almost indestructible hooks and eyes. By the mid-1950s, the first nylon locking tape was a reality. For a trademark name, de Mestral chose vel from “velvet,” simply because he liked the sound of the word, and cro from the French crochet, the diminutive for “hook.”
By the late ’50s, textile looms were turning out sixty million yards of Velcro a year. And although the nylon fastener did not replace the zipper, as de Mestral hoped it would, it found diverse zipper-like applications—sealing chambers of artificial hearts, securing gear in the gravity-free environment of space, and of course zipping dresses, bathing suits, and diapers. The list is endless, though not yet as endless as George de Mestral had once envisioned.
Umbrella: 1400 B.C., Mesopotamia
An emblem of rank and distinction, the umbrella originated in Mesopotamia 3,400 years ago as an extension of the fan. For these early umbrellas did not protect Mesopotamians from rain, a rarity in their desert land, but from harsh sun. And umbrellas continued to serve primarily as sunshades for centuries, a fact evident in the word “umbrella,” derived from the Latin umbra, “shade.” In many African societies today, an umbrella bearer walks behind the tribal chief to shield his head from sun—reflecting the ancient Egyptian and Mesopotamian tradition.
By 1200 B.C., the Egyptian umbrella had acquired religious significance. The entire canopy of the sky was believed to be formed by the body of the celestial goddess Nut. Spanning the earth as a vast umbrella, she touched the ground only with her toes and fingertips. Her star-studded belly created the night sky. Man-made umbrellas became earthly embodiments of Nut, held only above heads of nobility. An invitation to stand in the penumbra of the royal umbrella was a high honor, the shade symbolizing the king’s protection. Palm fronds, feathers, and stretched papyrus were the materials for umbrellas, as they were for fans.
The Greeks and the Romans borrowed liberally from Egyptian culture, but they regarded the umbrella as effeminate. It was rarely used by men. There are numerous derisive references by sixth-century B.C. Greek writers concerning men who carry sunshields “as women do.” For many centuries, the only occasion when a Greek man might excusably be seen holding an umbrella in public was to protect the head of a female companion.
The situation was entirely opposite for women. Greek women of high rank carried white parasols. And once a year they engaged in the Feast of Parasols, a fertility procession staged at the Acropolis.
But it was Roman women, with their own parasol celebration, who began the practice of oiling paper sunshades to waterproof them. Roman historians record that a drizzle at an outdoor amphitheater could result in hundreds of women lifting view-obstructing umbrellas, to the annoyance of male spectators. Debate arose over the use of rain umbrellas at public events, and in the first century A.D., the issue was put before Emperor Domitian, who ruled in favor of women’s protecting themselves with oiled parasols.
Sun parasols and rain umbrellas remained predominantly female accessories of dress well into the eighteenth century in Europe—and beyond that time in America. Men wore hats and got soaked. More than a casual attempt to escape the elements was seen as unmanly. The sixteenth-century French author Henri Estienne summed up the European sentiment toward men with umbrellas: “If French women saw men carrying them, they would consider them effeminate.”
It was a British gentleman, Jonas Hanway, who made umbrellas respectable raingear for men. He accomplished that transformation only through dogged perseverance, humiliation, and public ridicule.
Hanway acquired a fortune in trading with Russia and the Far East, then retired at age thirty-eight, devoting himself to founding hospitals and orphanages. And to popularizing the umbrella, a passion of his.
Beginning in 1750, Hanway seldom ventured outdoors, rain or shine, without an umbrella. He always caused a sensation. Former business associates suddenly viewed him as epicene; street hooligans jeered as he passed; and coachmen, envisioning their livelihood threatened by the umbrella as a legitimate means of shelter from the rain, steered through puddles to splash him with gutter mud.
Undaunted, Hanway carried an umbrella for the final thirty years of his life. Gradually, men realized that a one-time investment in an umbrella was cheaper than hailing a coach every time it rained—in London, a considerable savings. Perhaps it was the economics of the situation, or a case of familiarity breeding indifference, but the stigma of effeminacy long associated with the umbrella lifted. Before Jonas Hanway’s death in 1786, umbrellas were toted on rainy days by British gentlemen and, in fact, referred to as “Hanways.”
Modern Rainwear: 1830, Scotland
The history of rainwear is as old as the history of clothing itself. Early man, to protect himself from rain, fashioned water-repellent cloaks and head coverings by weaving waxy leaves and grass and stitching together strips of greased animal hide. The water-repellent coatings applied to materials varied from culture to culture.
The ancient Egyptians, for instance, waxed linen and oiled papyrus, while the Chinese varnished and lacquered paper and silk. But it was the South American Indians who paved the way for convenient, lightweight, truly effective rubberized raingear.
In the sixteenth century, Spanish explorers to the New World observed natives coating their capes and moccasins with a milky white resin from a local tree, Hevea brasiliensis. The pure white sap coagulated and dried, leaving the coated garment stiff but pliant. The Spaniards named the substance “tree milk,” and copying the Indians’ method of bleeding trees, they brushed the liquid on their coats, capes, hats, pants, and the soles of their boots. The garments effectively repelled rain, but in the heat of day the repellent became gummy, accumulating dried grass, dirt, and dead leaves which, by the cool of evening, were encrusted in the coating.
The sap was taken back to Europe. Noted scientists of the day experimented to improve its properties. In 1748, French engineer François Fresneau developed a chemical method that rendered the tree sap, when painted on fabric, more pliant and less gummy, but the chemical additives themselves had an intolerably unpleasant odor.
Another failed experiment at least gave the sap a name. In 1770, Joseph Priestley, the great British chemist and the discoverer of oxygen, was working to improve the milky latex. Coincidentally, he observed that a piece of congealed sap would rub out graphite marks, which suggested a practical name: rubber. It was not until 1823 that a fifty-seven-year-old Scottish chemist, Charles Macintosh, made a monumental discovery that ushered in the era of modern rubberized rainwear.
Experimenting at his Glasgow laboratory, Macintosh found that natural rubber readily dissolved in coal-tar naphtha, a volatile, oily liquid produced by the “fractional” distillation of petroleum (the fraction that boils off between gasoline and kerosene). By cementing naphtha-treated thicknesses of rubber to cloth, Macintosh created rainproof coats that smelled only of rubber; the public referred to them as macintoshes.
Footwear made of naphtha-treated rubber acquired the name “galoshes,” a term already in use for high boots. The word derived from the Roman expression for the heavy thonged sandals of the Gauls. The shoes, which tied with crisscrossed wrappings that reached to midcalf, were called gallica solea, which translated as “Gaulish shoes,” or, eventually, “galoshes.”
Bathing Suit: Mid-19th Century, Europe
The origin of the bathing suit as a distinct piece of attire began in the mid-1800s. Prior to that time, recreational bathing was not a popular pastime; if a man or woman took a dip, it was in an undergarment or in the nude.
One major development helped change bathing practices and create a need for the bathing suit. European physicians in the 1800s began to advocate recreational bathing as a tonic for “nerves” —a term that encompassed something as temporary as lovesickness or as terminal as tubercular meningitis. The cure was the “waters” —mineral, spring, or ocean. By the tens of thousands, Europeans, who for centuries had equated full-body bathing with death, waded, soaked, and paddled in lakes, streams, and surf.
The bathing suits that emerged to fill this need followed the design of street dress. Women, for example, wore a costume of flannel, alpaca, or serge, with a fitted bodice, high neck, elbow-length sleeves, and a knee-length skirt, beneath which were bloomers, black stockings, and low canvas shoes. Wet, the bathing suit could weigh as much as the bather. Fatalities recorded in England and America attest to the number of waterlogged bathers caught in an undertow. The male outfit was only somewhat less cumbersome and dangerous.
These garments were strictly bathing suits, as opposed to the later, lighter swimming suits.
From about the 1880s, women could take a safer ocean dip in a “bathing machine.” The contraption, with a ramp and a dressing chamber, was wheeled from the sand into shallow water. A lady undressed in the machine, donned a shapeless full-length flannel gown fastened at the neck by a drawstring, and descended the ramp into the ocean. An awning, known as a “modesty hood,” hid her from males on the beach. Bathing machines were vigilantly guarded by female attendants, called “dippers,” whose job was to hasten the pace of male lingerers.
Shortly before America’s entry into World War I, the clinging one-piece suit became popular—though it had sleeves and reached to the knees; the women’s model also sported a skirt. The suit revolution was made possible in large measure by the textile know-how of a Danish-American named Carl Jantzen.
Born in Aarhus, Denmark, in 1883, Jantzen immigrated to America and in 1913 became a partner in Oregon’s Portland Knitting Mills. The firm produced a line of woolen sweaters, socks, caps, and mittens. Jantzen was experimenting with a knitting machine in 1915, attempting to produce a tighter, lighter-weight woolen sweater with exceptional stretch, when he developed an elasticized rib-knit stitch.
The wool knit was supposed to go into the production of sweaters. But a friend on the Portland Rowing Team asked Jantzen for an athletic outfit with more “give.” Jantzen’s skin-tight, rib-knit stretch suits were soon worn by every member of the team.
A bikini, as depicted in a fourth-century Roman mosaic. A nineteenth-century bathing outfit; dangerous when wet.
The Portland company changed its name to Jantzen Knitting Mills and adopted the slogan: “The suit that changed bathing to swimming.”
Bikini. Swimsuits became more revealing in the 1930s. From backless designs with narrow shoulder straps, women’s attire quickly progressed to the two-piece halter-neck top and panties. The bikini was the next step. And through its name, the fashion is forever linked with the start of the nuclear age.
On July 1, 1946, the United States began peacetime nuclear testing by dropping an atom bomb on Bikini Atoll, part of the Marshall Islands chain in the Pacific Ocean. The bomb, similar to the type that a year earlier had devastated Hiroshima and Nagasaki, commanded worldwide media attention.
In Paris, designer Louis Réard was preparing to introduce a daringly skimpy two-piece swimsuit, still unnamed. Newspapers were filled with details of the bomb blast. Réard, wishing his suit to command media attention, and believing the design was itself explosive, selected a name then on the public’s lips.
On July 5, four days after the bomb was dropped, Réard’s top model, Micheline Bernardi, paraded down a Paris runway in history’s first bikini. In 1946, the swimsuit seemed to stir more debate, concern, and condemnation than the bomb.
Off-the-Rack Clothes: 18th Century, Europe
Given today’s wide selection of men’s and women’s attire in department stores and boutiques, it is hard to imagine a time when ready-made, ready-to-wear clothing did not exist. But off-the-rack garments have been a reality and a convenience for less than two hundred years. And high-quality ready-made clothes appeared only a hundred years ago. Previously, clothes were made when needed, by a professional tailor or a female member of the family.
The first ready-to-wear clothes were men’s suits. Loose, shapeless, and cheap, the garments sold in London in the early 1700s. Eschewed by men of style, and derided by professional tailors, who feared a loss of business, the suits were purchased by laborers and the lower classes, who were pleased to own a suit for special occasions, however ill-fitting.
With London’s lower classes significantly outnumbering the city’s gentlemen, and with many of the former aspiring to emulate the latter, it is not surprising that ready-made suits sold at a brisk pace. Within a decade, they were being produced in Liverpool and Dublin.
Tailors’ guilds attempted to thwart the trend. They expelled guild members who made the suits, while petitioning Parliament to outlaw ready-to-wear apparel. Parliament declined to enter into the imbroglio. And as more people purchased ready-made clothes, more tailors abandoned the guilds to satisfy the growing demand.
In the 1770s, the men’s ready-to-wear phenomenon hit Paris, Europe’s fashion center. Tailors competed for business among themselves by bettering the fit and quality of suits. And superior garments attracted a higher-class clientele. By the end of the decade, a half-dozen French firms featured suits as well as coats, the second ready-made item manufactured. The clothes were a particular favorite of sailors, whose brief time in port precluded multiple fittings for custom-made garments.
Women for many years continued to make their own clothes. They even resisted the concept of clothes produced by strangers, who possessed no knowledge of their personal style preferences and private body dimensions.
But the many conveniences of clothes ready-made over homemade—from a wider selection of styles, colors, and fabrics to the immense savings in time spent sewing—eventually won women over. The first large firm manufacturing ready-made clothes for women and children opened in Paris in 1824 and was called La Belle Jardinière because of its proximity to a flower market. In America around this time, 1830, Brooks Brothers of New York began making ready-to-wear men’s clothing.
Two inventions of the day helped turn the manufacture of off-the-rack clothes into the multibillion-dollar industry it is today. The sewing machine (see page 147) permitted rapid mass production of garments; for the first time in history, clothes were not hand sewn. The second breakthrough involved the adoption of a scale of standardized clothing sizes for men, women, and children.
Until around 1860, clothes were cut to size in one of two ways. A new garment was made by copying an existing one, which usually entailed unstitching and opening up fabric. Or a rough shape of the garment was cut out of muslin, basted on the wearer, and recut and reshaped in this manner until it fit satisfactorily. Then the perfected muslin pattern was copied in the garment’s actual and more expensive fabric. The tedious process is still employed for many couture creations, but it was unsuitable for mass production.
Standardized sizes, in the form of “graded paper patterns,” became an industry reality in the 1860s. No longer was it necessary for a customer to hold up three or four rack garments and guess which one would give the best fit.
Home seamstresses also turned to paper patterns, which were featured in magazines and store catalogues and sold through the mail. By 1875, paper patterns were selling at the rate of ten million a year. It became chic to wear a pattern-cut garment. Queen Victoria, who could well afford custom-tailored clothes for the royal family, ordered for her sons suits fashioned from Butterick patterns, the most popular name of the day.
There was a certain democracy to ready-to-wear clothes. They did not exactly prove that all people were created equal, but they did reassure that most people, rich and poor, came in a limited number of sizes. More important, for the first time in history, fashion was no longer the prerogative of the wealthy few but available to everyone.
Designer Labels: 19th Century, France
Dior. Blass. Klein. Givenchy. De la Renta. Von Furstenberg. Cassini. Cardin. Lauren. Gucci.
History records the names of today’s fashion designers, but nowhere in its pages are the names of the tailors, dressmakers, and seamstresses who clothed royalty and nobility throughout the ages. They must have existed. Fashion certainly did. France and Milan were recorded as two of Europe’s earliest fashion centers. But what was important prior to the late eighteenth century was the garment itself—the style, detailing, color, fabric, and, too, the person who paraded it; everything except the designer.
Who originated designer clothes and paved the way for the phenomenon of the name label?
Her name was Rose Bertin, the first fashion designer to achieve fame, recognition, and a page in the history books. Born Marie-Jeanne in Abbeville, France, in the mid-1700s, she might not have become famous, despite talent, had it not been for a series of fortunate encounters.
Rose Bertin began her career as a milliner in Paris in the early 1770s. Her stylish hats caught the attention of the duchess of Chartres, who became her patron and presented her to Empress Maria Theresa. The Hungarian queen was displeased with the style of dress worn by her daughter, Marie Antoinette, and Rose Bertin was commissioned to make over the woman who would become perhaps France’s most extravagant and famous queen. Rose’s lavish costumes for the dauphine dazzled the French court, though they distressed the empress, who complained that her daughter now dressed with the excesses of a stage actress.
As queen, Marie Antoinette devoted ever more time and money to fashion. And as her extravagances rose to the level of a national scandal, Rose Bertin’s salon became the fashion center of Paris. She dressed not only Marie Antoinette, meeting with the queen twice weekly to create new gowns, but most of the French aristocracy, as well as the queens of Sweden and Spain, the duchess of Devonshire, and the czarina of Russia.
Rose Bertin’s prices were exorbitant. Even the fermenting revolution did nothing to lower the prices, the demand for gowns, or the queen’s commitment to fashion—which may have led to the arrest that resulted in her beheading.
Early in June 1791, prior to the planned escape of Marie Antoinette and her husband, set for the twentieth of the month, the queen ordered from Rose Bertin a large number of traveling outfits to be completed as quickly as possible. The discovery of the order is believed to have confirmed suspicions that the royal couple was about to flee the country.
The queen, of course, was caught, imprisoned, and guillotined in 1793. Rose Bertin fled to Frankfurt, then moved to London, where she continued to design clothes for European and Asian nobility. She died in 1812, during the reign of Napoleon.
Her worldwide fame helped draw attention to the people who design clothes. In Paris, salons and individual designers began to attach their own names to the fashions they created. And one Parisian designer, Charles Worth, introduced in 1846 the concept of using live models to display name-brand clothes—which were now protected by copyright from reproduction. Those events marked the birth of haute couture. And it was that nineteenth-century phenomenon, coupled with the concurrent rise of off-the-rack ready-to-wear, that made designer labels a possibility, then a profitable reality.
Into the Bedroom
Bedroom: 3500 B.C., Sumer
One third of the history of humankind has never been written, for it occurred in the eight nightly hours kings, queens, and commoners spent in bed over their lifetime. It’s as if between saying good night and sitting down for breakfast, humankind ceased to exist. But in those ignored—and seemingly lost—hours, man was conceived and born, sired future generations, and died. To venture into the bedroom is to enter a realm rich in its own lore, language, trivia, and erotica.
A special room in a house set aside for a bed first appeared in the royal palaces at Sumer about 3500 B.C. One significant fact about ancient Sumerian bedrooms is that there was usually only one to a home, regardless of the immensity of the residence and the number of its inhabitants. The head of the household occupied the bedroom and its bed, while his wife, children, servants, and guests slept around the house on couches, on lounges, or on the floor. Pillows existed for everyone, but they were hard, curved headrests of wood, ivory, or alabaster, intended primarily to protect a styled coiffure overnight.
The Egyptians were better bedded—though their pillows were no softer. Palaces in the fourth millennium B.C. allowed for a “master bedroom,” usually fitted with a draped, four-poster bed, and surrounding narrow “apartments” for a wife and children, each with smaller beds.
The best Egyptian bedrooms had double-thick walls and a raised platform for the bed, to insulate the sleeper from midnight cold, midday heat, and low drafts. Throughout most of the ancient world, beds were for sleeping at night, reclining by day, and stretching out while eating.
Most Egyptian beds had canopies and draped curtains to protect from a nightly nuisance: mosquitoes. Along the Nile, the insects proved such a persistent annoyance that even commoners slept beneath (or wrapped cocoon-like in) mosquito netting. Herodotus, regarded as the “father of history,” traveled throughout the ancient world recording the peoples and behaviors he encountered. He paints a picture of a mosquito-infested Egypt that can elicit sympathy from any person today who struggles for a good night’s sleep in summer:
In parts of Egypt above the marshes the inhabitants pass the night upon lofty towers, as the mosquitoes are unable to fly to any height on account of the winds. In the marshy country, where there are no towers, each man possesses a net. By day it serves him to catch fish, while at night he spreads it over the bed, and creeping in, goes to sleep underneath. The mosquitoes, which, if he rolls himself up in his dress or in a piece of muslin, are sure to bite through the covering, do not so much as attempt to pass the net.
Etymologists find a strong association between the words “mosquito” and “canopy.” Today “canopy” suggests a splendid drape, but to the ancient Greeks, konops referred to the mosquito. The Romans adopted the Greeks’ mosquito netting and Latinized konops to conopeum, which the early inhabitants of Britain changed to canape. In time, the name came to stand for not the mosquito itself but the bed draping that protected from the insect.
Whereas the Egyptians had large bedrooms and beds, the Greeks, around 600 B.C., led a more austere home life, which was reflected in the simplicity of their bedrooms. The typical sleeping chamber of a wealthy Greek man housed a plain bed of wood or wicker, a coffer for valuables, and a simple chair. Many Spartan homes had no actual bedroom because husbands, through military duty, were separated from wives for a decade or longer. A Spartan youth at age twenty joined a military camp, where he was required to sleep. If married, he could visit his wife briefly after supper, but he could not sleep at home until age thirty, when he was considered a full citizen of Greece.
Roman bedrooms were only slightly less austere than those of the Greeks. Called cubicula (giving us the word “cubicle”), the bedroom was more a closet than a room, closed by a curtain or door. These cubicles surrounded a home’s or palace’s central court and contained a chair, chamber pot, and simple wooden bed, often of oak, maple, or cedar. Mattresses were stuffed with either straw, reeds, wool, feathers, or swansdown, depending on a person’s finances. Mosquito netting was commonplace.
Though some Roman beds were ornately carved and outfitted with expensive linens and silk, most were sparsely utilitarian, reflecting a Roman work ethic. On arising, men and women did not bathe (that took place midday at public facilities; see page 200), nor did breakfast consist of anything more than a glass of water. And dressing involved merely draping a toga over undergarments that served as nightclothes. For Romans prided themselves on being ready to commence a day’s work immediately upon arising. The emperor Vespasian, for instance, who oversaw the conquest of Britain and the construction of the Colosseum, boasted that he could prepare himself for imperial duties, unaided by servants, within thirty seconds of waking.
Canopy derives from the Greek word for mosquito, and canopied beds protected ancient peoples from the nightly nuisance.
Making a Bed. The decline of the bed and the bedroom after the fall of the Roman Empire is aptly captured in the phrase “make the bed.” This simple expression, which today means no more than smoothing out sheets and blankets and fluffing a pillow or two, was a literal statement throughout the Dark Ages.
From about A.D. 500 onward, it was thought no hardship to lie on the floor at night, or on a hard bench above low drafts, damp earth, and rats. To be indoors was luxury enough. Nor was it distasteful to sleep huddled closely together in company, for warmth was valued above privacy. And, too, in those lawless times, safety was to be found in numbers.
Straw stuffed into a coarse cloth sack could be spread on a table or bench by a guest in a home or an inn, to “make a bed.” And since straw was routinely removed to dry out, or to serve a daytime function, beds were made and remade.
The downward slide of beds and bedroom comfort is reflected in another term from the Middle Ages: bedstead. Today the word describes a bed’s framework for supporting a mattress. But to the austere-living Angles, Saxons, and Jutes, a bedstead was merely the location on the floor where a person bedded down for the night.
Hardship can be subtly incorporated into custom. And throughout the British Isles, the absence of comfortable beds was eventually viewed as a plus, a nightly means of strengthening character and body through travail. Soft beds were thought to make soft soldiers. That belief was expressed by Edgar, king of the Scots, at the start of the 1100s. He forbade noblemen, who could afford comfortable down mattresses, to sleep on any soft surface that would pamper them to effeminacy and weakness of character. Even undressing for bed (except for the removal of a suit of mail armor) was viewed as a coddling affectation. So harshly austere was Anglo-Saxon life that the conquering Normans regarded their captives as only slightly more civilized than animals.
Spring Mattress: Late 18th Century, England
Mattresses once were more nightmarish, in the bugs and molds they harbored, than a sleeper’s worst dreams. Straw, leaves, pine needles, and reeds—all organic stuffings—mildewed and rotted and nurtured bedbugs. Numerous medieval accounts tell of mice and rats, with the prey they captured, nesting in mattresses not regularly dried out and restuffed. Leonardo da Vinci, in the fifteenth century, complained of having to spend the night “upon the spoils of dead creatures” in a friend’s home. Physicians recommended adding such animal repellents as garlic to mattress stuffing.
Until the use of springs and inorganic stuffings, the quest for a comfortable, critter-free mattress was unending. Indeed, as we’ve seen, one reason to “make a bed” anew each day was to air and dry out the mattress stuffing.
Between da Vinci’s time and the birth of the spring mattress in the eighteenth century, numerous attempts were made to get a comfortable, itch-free night’s repose. Most notable perhaps was the 1500s French air mattress. Known as a “wind bed,” the mattress was constructed of heavily waxed canvas and equipped with air valves for inflation by mouth or mechanical pump. The brainchild of upholsterer William Dujardin, history’s first air mattress enjoyed a brief popularity among the French nobility of the period, but cracking from repeated or robust use severely shortened its lifetime. A number of air beds made from more flexible oilcloth were available in London in the seventeenth century, and in Ben Jonson’s 1610 play The Alchemist, a character states his preference for air over straw, declaring, “I will have all my beds blown up, not stuffed.”
Adjustable sick bed, incorporating a chamber pot.
British patents for springs—in furniture and carriage seats—began to appear in the early eighteenth century. They were decidedly uncomfortable at first. Being perfectly cylindrical (as opposed to conical), springs when sat upon snaked to one side rather than compressing vertically; or they turned completely on their sides. And given the poor metallurgical standards of the day, springs might snap, poking hazardously through a cushion.
Spring mattresses were attempted. But they presented complex technological problems since a reclining body offers different compressions along its length. Springs sturdy enough to support the hips, for instance, were unyielding to the head, while a spring made sensitive for the head was crushed under the weight of the hips.
By the mid-1850s, the conical innerspring began to appear in furniture seats. The larger circumference of its base ensured a more stable vertical compression. An early mention of sleeping on conical innersprings appeared in an 1870s London newspaper: “Strange as it may seem, springs can be used as an excellent sleeping arrangement with only a folded blanket above the wires.” The newspaper account emphasized spring comfort: “The surface is as sensitive as water, yielding to every pressure and resuming its shape as soon as the body is removed.”
Early innerspring mattresses were handcrafted and expensive. One of the first patented in America, by an inventor from Poughkeepsie, New York, was too costly to arouse interest from any United States bedding manufacturer. For many years, innerspring-mattress beds were found chiefly in luxury hotels and in ocean liners such as the Mauretania, Lusitania, and Titanic. As late as 1925, when U.S. manufacturer Zalmon Simmons conceived of his “Beautyrest” innerspring mattress, its $39.50 price tag was more than twice what Americans paid for the best hair-stuffed mattress of the day.
Simmons, however, cleverly decided not to sell just a mattress but to sell sleep— “scientific sleep,” at that. Beautyrest advertisements featured such creative geniuses of the era as Thomas Edison, Henry Ford, H. G. Wells, and Guglielmo Marconi. The company promoted “scientific sleep” by informing the public of the latest findings in the relatively new field of sleep research: “People do not sleep like logs; they move and turn from twenty-two to forty-five times a night to rest one set of muscles and then another.”
With several of the most creative minds of the day stressing how they benefited from a good night’s sleep, it is not surprising that by 1929, Beautyrest, the country’s first popular innerspring mattress, had annual sales of nine million dollars. Stuffed mattresses were being discarded faster than trashmen could collect them.
Electric Blanket: 1930s, United States
Man’s earliest blankets were animal skins, or “choice fleeces,” as they are referred to in the Odyssey. But our word “blanket” derives from a later date and a different kind of bed covering. French bed linens (and bedclothes) during the Middle Ages consisted largely of undyed woolen cloth, white in color and called blanquette, from blanc, meaning “white.” In time, the word evolved to “blanket” and it was used solely for the uppermost bed covering.
The first substantive advance in blankets from choice fleeces occurred in this century and as a spinoff from a medical application of electricity. In 1912, while large areas of the country were still being wired for electric power, American inventor S. I. Russell patented an electric heating pad to warm the chests of tubercular sanitarium patients reclining outdoors. It was a relatively small square of fabric with insulated heating coils wired throughout, and it cost a staggering $150.
Almost immediately, the possibility for larger, bed-sized electric blankets was appreciated; and not only for the ailing. But cost, technology, and safety were obstacles until the mid-1930s. The safety of electric blankets remained an issue for many years. In fact, most refinements to date involved generating consistent heat without risking fire. One early advance involved surrounding heating elements with nonflammable plastics, a spin-off of World War II research into perfecting electrically heated flying suits for pilots.
Birth Control: 6 Million Years Ago, Africa and Asia
The term “birth control” was coined in 1914 by Irish-American nurse Margaret Sanger, one of eleven children herself, who is regarded as the “mother of planned parenthood.” But the concept is ancient, practiced in early societies, and it arose out of an astonishing biological change that occurred in the female reproductive cycle some six million years ago.
The change involved estrus, or heat. At that time, females, in the lineage that would become Homo sapiens, began to switch from being sexually receptive to males only during limited periods of estrus to continuous arousal and receptivity. Thus, from conceiving young only during a brief season (nature’s own birth control), the female evolved to bearing young year round.
Anthropologists theorize this development went hand in hand with the emerging trait of walking erect. To achieve balance for upright posture, the pelvic canal narrowed; this meant difficult and often fatal pregnancies. Natural selection began to favor females with a proclivity for giving premature birth—that is, for having babies small enough to negotiate the narrowed canal. These premature babies required longer postnatal care and consequently kept their mothers busier. Thus, the female became increasingly dependent on the male for food and protection. And she guaranteed herself and her offspring these necessities by offering the male in return sexual favors for longer and longer periods of time. Those females with only limited periods of estrus gradually died off. Soon entire generations carried the gene for continuous sexual arousal and receptivity. And with this development came the notion of controlling unwanted conception.
For tens of thousands of years, the only contraceptive method was coitus interruptus, in which the man withdraws to ejaculate outside the woman’s body: the biblical sin of Onan. With the emergence of writing about 5,500 years ago, a record of birth control methods—from the bizarre to the practical—entered history.
Every culture sought its own foolproof method to prevent conception. In ancient China, women were advised to swallow quicksilver (mercury) heated in oil. It may well have worked, since mercury is highly toxic and probably poisoned the fetus—and to a lesser extent, the mother.
A less harmful procedure was practiced by Egyptian women. Before intercourse, a woman was advised to insert a mixture of crocodile dung and honey into her vagina. While the viscous honey might have served as a temporary obstacle to impede sperm from colliding with an egg, it is more likely that the salient ingredient was dung: its sharp acidity could alter the pH environment necessary for conception to occur, killing the sperm. In effect, it was history’s first spermicide.
Egyptian birth control methods are the oldest on record. The Petrie Papyrus, written about 1850 B.C., and the Ebers Papyrus, composed three hundred years later, describe numerous methods to avert pregnancy. For the man, in addition to coitus interruptus there was coitus obstructus, which is full intercourse, with the ejaculate forced into the bladder through the depression of the base of the urethra. (The papyri also contain an early mention of how women handled menstruation: Egyptian women used a homemade tampon-shaped device composed of shredded linen and crushed acacia branch powder, later known as gum arabic, an emulsion stabilizer used in paints, candy, and medicine.)
Contraceptive methods assumed additional importance in the free-spirited Rome of the second and third centuries A.D. Soranus of Ephesus, a Greek gynecologist practicing in Rome, clearly understood the difference between contraceptives, which prevent conception from occurring, and abortifacients, which eject the egg after it’s fertilized. And he taught (correctly, though dangerously) that permanent female sterility could be achieved through repeated abortions. He also advised (incorrectly) that immediately following intercourse, women cough, jump, and sneeze to expel sperm; and he hypothesized infertile or “safe” days in the menstrual cycle.
Spermicides were a popular birth control method in the Near and Middle East. In ancient Persia, women soaked natural sea sponges in a variety of liquids believed to kill sperm—alcohol, iodine, quinine, and carbolic acid—and inserted them into the vagina before intercourse. Syrian sponges, from local waters, were highly prized for their absorptivity, and perfumed vinegar water, highly acidic, was a preferred spermicide.
In the ancient world, physical, as opposed to chemical, means of birth control were also available:
Cervical Cap. From about the sixth century B.C., physicians, invariably males, conceived of countless cap-like devices for the female to insert over the opening of the cervix. Greek doctors advised women to scoop out the seeds of a pomegranate half to obtain a sperm-blocking cap. Centuries later, Casanova—the Italian gambler, celebrated lover, and director of the French state lotteries, who told all in his twelve-volume memoirs—presented his mistresses with partially squeezed lemon halves. The lemon shell acted as a physical barrier, and its juice as an acidic spermicide.
A highly effective cervical cap appeared in Germany in 1870. Designed by the anatomist and physician Wilhelm Mensinga, the cap was a hollow rubber hemisphere with a watch spring around the head to secure it in place. Known as the “occlusive pessary,” or popularly as the “Dutch cap,” it was reputed to be 98 percent effective—as good as today’s diaphragms.
IUD. The scant documentation of the origin of intrauterine devices is attributable to their mysterious function in preventing conception. It is known that during the Middle Ages, Arabs used IUDs to thwart conception in camels during extended desert journeys. Using a hollow tube, an Arab herder slid a small stone into a camel’s uterus. Astonishingly, not until the late 1970s did doctors begin to understand how an IUD works. The foreign object, metal or plastic today, is treated as an invader in the uterus and attacked by the body’s white blood cells. Part of the white cells’ arsenal of weapons is the antiviral compound interferon. It’s believed that interferon kills sperm, preventing conception.
The Arab practice with camels led to a wide variety of foreign objects being inserted into animal and human uteruses: beads of glass and ebony, metals, buttons, horsehair, and spools of silver threads, to mention a few. However, the first truly effective metal-coil IUD was the “silver loop,” designed in 1928 by the German physician Ernst Gräfenberg. Measuring about three fifths of an inch in diameter, the loop had adequate elasticity, though as with many later IUDs, some women developed pelvic inflammation.
Throughout history, there were physicians in all cultures who advised women to douche immediately after intercourse, believing this alone was an effective contraceptive measure. But modern research has shown that within ten seconds after the male ejaculates, some sperm may already have swum from the vaginal canal into the cervix, where douching is ineffective.
From crocodile dung to douching, all ancient contraceptive methods were largely hit or miss, with the onus of preventing conception falling upon the female. Then, in the sixteenth century, an effective means of male contraception arose: the condom.
Condom: 16th and 17th Centuries, Italy and England
Prior to the sixteenth century, did no physician think of simply placing a sheath over the penis during intercourse?
It must be stated that sheaths in earlier times were thick. They interfered with a man’s pleasure. And most doctors were men. Thus, sheaths were seldom recommended or used. That may be overstating the case, but only slightly. Penile sheaths did exist. There is evidence that the Romans, and possibly the Egyptians, used oiled animal bladders and lengths of intestine as sheaths. However, their purpose was not primarily to prevent the woman from becoming pregnant but to protect the man against catching venereal disease. When it came to birth control, men preferred to let women take the lead.
Italian anatomist Gabriel Fallopius, the sixteenth-century physician who first described the two slender tubes that carry ova from the ovaries to the uterus, is generally regarded as the “father of the condom” —an anachronistic title since Dr. Condom would not make his contribution to the device for another hundred years.
In the mid-1500s, Fallopius, a professor of anatomy at the University of Padua, designed a medicated linen sheath that fit over the glans, or tip of the penis, and was secured by the foreskin. It represents the first clearly documented prophylactic for the male member. Soon sheaths appeared for circumcised men. They were a standard eight inches long and tied securely at the base with a pink ribbon, presumably to appeal to the female. Fallopius’s invention was tested on over one thousand men, “with complete success,” as the doctor himself reported. The euphemism of the day labeled them “overcoats.”
Fallopius initially conceived of the sheath not as a contraceptive device but as a means of combating venereal disease, which then was on an epidemic rise. It is from this sixteenth-century European outbreak that sailors to the New World are believed to have introduced the Treponema pallidum bacterium of syphilis to native Indians.
Penile sheaths in the sixteenth century were dullingly thick, made from animal gut and fish membranes in addition to linen. Since they interfered with the pleasure of intercourse and only occasionally prevented disease—being improperly used, and reused unwashed—they were unpopular with men and regarded with derision. A French marquis sarcastically summed up the situation when he called a cattle-intestine sheath he’d tried “armor against love, gossamer against infection.”
How did Fallopius’s overcoats get to be named condoms?
Legend has it that the word derives from the earl of Condom, the knighted personal physician to England’s King Charles II in the mid-1600s. Charles’s pleasure-loving nature was notorious. He had countless mistresses, including the most renowned actress of the period, Nell Gwyn, and though he sired no legitimate heirs, he produced innumerable bastards throughout the realm.
Dr. Condom was requested to produce, not a foolproof method of contraception, but a means of protecting the king from syphilis. His solution was a sheath of stretched and oiled intestine of sheep. (It is not known if he was aware of Fallopius’s invention of a hundred years earlier. It is part of condom lore that throughout the doctor’s life, he discouraged the use of his name to describe the invention.) Condom’s sheath caught the attention of noblemen at court, who adopted the prophylactics, also against venereal disease.
The fact that sexually transmitted disease was feared far more than siring illegitimate children can be seen in several dictionary definitions of condoms in the seventeenth and eighteenth centuries. A Classical Dictionary of the Vulgar Tongue, for instance, published in London in 1785, defines a condom as “the dried gut of a sheep, worn by men in the act of coition, to prevent venereal infection.” The entry runs for several additional sentences, with no mention of contraception.
Only in this century, when penicillin laid to rest men’s dread of syphilis, did the condom come to be viewed as protection primarily against pregnancy.
A condom made of vulcanized rubber appeared in the 1870s and from the start acquired the popular name rubber. It was not yet film thin, sterile, and disposable. A man was instructed to wash his rubber before and after intercourse, and he reused one until it cracked or tore. Effective and relatively convenient, it was still disliked for its dulling of sensation during intercourse. Thinner modern latex rubber would not be introduced until the 1930s.
Rubbers were denounced by religious groups. In New York in the 1880s, the postal service confiscated more than sixty-five thousand warehoused condoms about to be sold through the mail, labeling them “articles for immoral purposes,” and police arrested and fined more than seven hundred people who manufactured and promoted the goods.
Vasectomy; Sperm and Egg: 1600s, England and Netherlands
In the century when Dr. Condom supposedly introduced sheaths to England, fellow British physicians performed the first vasectomy. Although the means of cutting and cauterizing the male tubes was crude, the surgery was supposed to be effective—though, unlike many vasectomies performed today, it could never be reversed.
It was also in the seventeenth century that a major human reproductive principle was confirmed—the union of sperm and egg.
Early physicians did not realize that conception required a sperm to collide with a female’s egg. For centuries, no one even suspected that an egg existed. Men, and only men, were responsible for the continuation of the species. Physicians assumed that the male ejaculate contained homunculi, or “tiny people,” who grew into human beings after being deposited in a woman’s uterus. Contraceptive methods were a means of halting the march of homunculi to the nurturing uterus. In the sixteenth century, Gabriel Fallopius described the tubes connecting the ovaries to the uterus, and in 1677 a Dutch haberdasher constructed the first quality microscope and identified sperm cells, half the reproductive story.
Antonie van Leeuwenhoek was born in 1632 in Delft, the Netherlands. He plied the haberdashery trade and in his spare time experimented with grinding glass to make lenses. In producing microscopes of high resolution and clarity, Leeuwenhoek almost single-handedly established the field of microbiology.
Continually sliding new specimens under his superior lenses, Leeuwenhoek made numerous important discoveries. He observed that aphids reproduced by parthenogenesis or “virgin birth,” in which female eggs hatch without male fertilization. Using his own blood, he gave the first accurate description of red blood cells; and using his own saliva, he recorded the myriad bacteria that inhabit the human mouth. Using his own ejaculate (which drew public cries of immorality), he discovered sperm. Clearly, semen was not composed of homunculi; sperm had to unite with an egg, and women did make half the contribution to the production of offspring, a role that in the past had often been denied them.
The Pill: 1950s, Shrewsbury, Massachusetts
No event in the history of contraception has had a more profound effect on birth control than the introduction of an oral contraceptive. “The pill,” as it quickly became known, contains hormone-like substances that enter the bloodstream and disrupt the production of ova and ovulation. Although birth control pills were predicted in the mid-nineteenth century, they did not become a reality until the 1950s, the result of pioneering medical research and the encouragement of Margaret Sanger, organizer of the planned parenthood movement in the United States.
The pill originated in an unexpected discovery made in the tropical jungles of Mexico in the 1930s. There, chemistry professor Russell Marker, on leave from Pennsylvania State College, was experimenting with a group of plant steroids known as sapogenins, which produce a soaplike foam in water. He discovered a chemical process that transformed the sapogenin diosgenin into the human female sex hormone progesterone. The wild Mexican yam, cabeza de negro, proved to be a rich source of the hormone precursor.
At that time, progesterone was used to treat menstrual disorders and prevent miscarriages. But the drug was available only from European pharmaceutical companies, and methods of preparing it were laborious and costly. Still, Marker was unable to acquire financial backing from an American pharmaceutical company to pursue synthetic progesterone research.
He rented a laboratory in Mexico City, collected ten tons of yams, and at his own expense isolated pure diosgenin. Back in the United States, he synthesized more than 2,000 grams of progesterone, which at the time was worth $160,000. The synthesis was far simpler than the traditional methods, and in time it would bring down the price of sex steroids from eighty dollars to one dollar a gram.
Researchers in the late 1940s began to reevaluate the possibility of an inexpensive oral contraceptive. Biologist Gregory Pincus at the Worcester Foundation for Experimental Biology in Shrewsbury, Massachusetts, tested a yam-derived ovulation inhibitor, norethynodrel, on 1,308 volunteers in Puerto Rico in 1958. It established menstrual regularity and was an effective contraceptive. Searle Pharmaceuticals applied for FDA approval to market norethynodrel. Despite intense opposition from religious groups, research and marketing efforts continued, and in 1960, women across America were introduced to Enovid, history’s first oral contraceptive.
Although there was considerable social condemnation of the pill, sales figures revealed that in the privacy of their homes across the country, women were not reluctant to take it regularly. By the end of 1961, a half-million American women were on the pill, and that number more than doubled the following year.
Since that time, drug companies have worked to develop a variety of safer versions of oral contraceptives, with fewer side effects. None of today’s oral contraceptives, taken by approximately seventy million women worldwide, contains the original yam derivative norethynodrel. Researchers believe that oral contraceptives will remain women’s major birth control aid until the introduction, projected for the 1990s, of an antipregnancy vaccine that will offer several years’ immunization against conception.
Planned Parenthood. A woman who encouraged biologist Gregory Pincus to perfect the pill was Margaret Sanger. Born in 1883, Sanger had ten brothers and sisters and witnessed the difficult life of her Irish mother, characterized by continual pregnancy and childbirth, chronic poverty, and an early death. As a maternity nurse on Manhattan’s Lower East Side at the turn of the century, she was equally dismayed by the high rate of unwanted pregnancies and self-induced abortions. She believed that fewer children, spaced further apart, could help many families attain a better standard of living. But when Sanger attempted to learn more about family planning, she discovered that sound information simply did not exist.
There was a straightforward reason. The Comstock Act of 1873, named after Anthony Comstock, a postal inspector and leader of the New York Society for the Suppression of Vice, had labeled all contraceptive information “obscene.” As a result, it ceased being published. Physicians Sanger interviewed were reluctant even to discuss artificial birth control for fear of being quoted and later prosecuted under the Comstock Act.
To acquire what information existed, she traveled throughout Europe in 1913, returning home the following year armed with literature and methodology. She published contraceptive information in her own monthly magazine, Woman Rebel, which earned her nine counts of defiance against the Comstock law and resulted in her journal’s being barred from the U.S. mails. In 1916, she opened the world’s first birth control clinic, in the Brownsville section of Brooklyn, offering women accurate and practical advice on avoiding pregnancy and planning the size of a family.
New York City police soon closed the clinic as a “public nuisance.” Diaphragms, condoms, and literature were confiscated. And Margaret Sanger went to prison. The U.S. Court of Appeals eventually ruled that doctors could provide prophylactic devices to women strictly for the “cure and prevention of disease,” not for contraception. In 1927, Margaret Sanger organized the first World Population Conference, and twenty years later she launched the International Planned Parenthood Federation.
In the early 1950s, she visited the Massachusetts laboratory of Dr. Gregory Pincus. She convinced him of the need for a simple oral contraceptive, and she championed his invention up until her death in 1966. By then, the pill was six years old and four million American women were consuming 2,600 tons of birth control pills annually.
Nightgown and Pajamas: Post-16th Century, France and Persia
Late in the sixteenth century, when tight-corseted, multilayered clothes and powdered wigs reigned as vogue, it became a luxury for both men and women at day’s end to slip into something more comfortable. In that era, the term “nightgown” originated in Europe to describe a full-length unisex frock, fastened in front, with long sleeves. Intended also for warmth before there was central heating, a nightgown was often of velvet or wool, lined and trimmed in fur. For the next hundred fifty years, men and women wore the same basic garment to bed, with differences existing only in feminine embellishments of lace, ribbon, or embroidery.
A substantive divergence in styles began in the eighteenth century with the emergence of the negligee for women. The term arose as differences in styles and fabrics of men’s and women’s nightgowns grew more pronounced. A woman’s negligee—a tighter garment in silk or brocade with ruffles or lace, often belted at the waist—not only was for sleeping but also was informal wear for lounging in the privacy of the home. The notion of relaxing in a negligee—that is, of performing no household work—is inherent in the word’s origin: neglegere, Latin for “to neglect,” compounded from neg and legere, meaning “not to pick up.”
The man’s plainer, baggier nightgown grew shorter in the same century, to become a “night shirt.” It was not uncommon for a man to relax at home in trousers and a night shirt, and even to wear the shirt during the day as an undergarment. One popular pair of lounging trousers was imported from Persia. Loose-fitting and modeled after the harem pants worn by Eastern women, they were named pajamas, derived from pae, Persian for “leg garment,” and jama, “clothing.” The night shirt and Persian trousers, originally uncoordinated in color, fabric, and print, evolved into the more stylized pajama ensemble we know today.
Underwear: Mid-1800s, Europe
Unmentionables. Indescribables. Unwhisperables. These are among the many euphemisms men’s and women’s underwear acquired during its relatively brief history. In the ancient world, beneath loose robes and togas, underwear was not recognized as a standard article of attire.
Prior to the nineteenth century, underwear (if worn at all) was simple: a loose chemise and some type of drawers. In some cases, an undergarment was designed as an integral part of a particular outfit. Intended to be seen by no one except the wearer, an undergarment, in style and fit, was of minor concern. A notable exception, during the periods when a woman’s waist and bust were, respectively, artificially cinched and distended, was the corset, which was literally engineered to achieve its effect.
Fashion historians record a major change in underwear and the public’s attitude toward it beginning around the 1830s. Undergarments became heavier, longer, and a routine part of dress. For the first time in history, not to wear underclothing implied uncleanliness, coarseness, lower-class disregard for civility, or licentious moral character. This transformation is believed to have resulted from a confluence of three factors: the blossoming of Victorian prudishness and its corresponding dictates of modesty in attire; the introduction of finer, lighter dress fabrics, which in themselves called for underclothing; and the medical profession’s new awareness of germs, which, combined with a body chill, were believed to bring on illness.
Advertisement for 1880s woolen underwear, believed to possess miraculous health benefits.
This last factor was of particular significance. Physicians advised against catching “a chill” as if it were as tangible an entity as a virus, and the public developed an almost pathological fear of exposing any body part except the face to the reportedly germ-laden air. Pasteur had recently proved the germ theory of disease and Lister was campaigning for antiseptics in medicine. The climate, so to speak, called for underwear.
Underclothing then was white, usually starched, often scratchy, and made chiefly from cambric batiste, coarse calico, or flannel. From about the 1860s, women’s undergarments were designed with an emphasis on attractiveness, and silk first became a popular underwear fabric in the 1880s.
Woolen underwear, invariably scratchy, swept Europe and America in the same decade, ushered in by the medical profession.
What came to be called the Wool Movement began in Britain under Dr. Gustav Jaeger, a former professor of physiology at Stuttgart University and founder of Jaeger Company, manufacturers of wool clothing. Dr. Jaeger advocated the health benefits of wearing coarse, porous wool in contact with the skin, since it allowed the body to “breathe.” The wool could never be dyed. In England, a “wool health culture” sprang up, with distinguished followers such as Oscar Wilde and George Bernard Shaw (the latter for a time wore only wool next to his skin). Wool underwear, corsets, and petticoats became popular, and in America, so-called knickers, similar to the newly introduced bloomers, were also of wool. For more than two decades, the Wool Movement caused underwear discomfort on both sides of the Atlantic.
In 1910, American men welcomed a minor underwear innovation: the X-shaped overlapping frontal fly. And in 1934, men’s underwear was revolutionized with the introduction of the Jockey Brief. The Wisconsin firm of Cooper and Sons copied the design from a men’s swimsuit popular the previous year on the French Riviera. The first Jockey style, named No. 1001, proved to be so popular that it soon was replaced by the more streamlined No. 1007, which became known as the Classic Jockey Brief, with the word “Jockey” stitched around the elastic waistband.
Brassiere: 2500 B.C., Greece
Throughout history, as the female bust has gone in and out of clothing fashion, so, too, have the breasts themselves gone in and out of public view. Around 2500 B.C., Minoan women on the Greek isle of Crete, for instance, wore bras that lifted their bare breasts entirely out of their garments.
On the other hand, in the male-oriented ancient classical world, Greek and Roman women strapped on a breast band to minimize bust size, a fashion reintroduced centuries later by church fathers. In fact, from its birth in Greece 4,500 years ago, the bra, or the corset, has been the principal garment by which men have attempted to reshape women to their liking.
In certain periods, devices were designed to enlarge breasts considered inadequate by the standard of the day. The first public advertisements for what would become known as “falsies” appeared in nineteenth-century Paris. The “bust improver” consisted of wool pads which were inserted into a boned bodice. Later in the same century, French women could purchase the first rubber breast pads, called “lemon bosoms” because of their shape and size. Throughout these decades, brassieres were extensions of corsets.
The first modern brassiere debuted in 1913. It was the needlework of New York socialite Mary Phelps Jacobs, the woman responsible for the demise of the corset.
Fashionable women of that day wore a boxlike corset of whalebone and cordage that was uncomfortable and impeded movement. Mary Jacobs’s concern, though, was not comfort but appearance. In 1913, she purchased an expensive sheer evening gown for a society affair. The gown clearly revealed the contour of her corset, so Mrs. Jacobs, with the assistance of her French maid, Marie, devised a brief, backless bra from two white handkerchiefs, a strand of ribbon, and cord. Female friends who admired the lightweight, impromptu fashion received one as a gift. But a letter from a stranger, containing a dollar and a request for “your contraption,” prompted the socialite to submit sketches of her design to the U.S. Patent Office.
In November 1914, a patent was awarded for the Backless Brassiere. Aided by a group of friends, Mary Jacobs produced several hundred handmade garments; but without proper marketing, the venture soon collapsed. By chance, she was introduced socially to a designer for the Warner Brothers Corset Company of Bridgeport, Connecticut. Mary Jacobs mentioned her invention, and when the firm offered $1,500 for patent rights, she accepted. The patent has since been valued at $15 million.
Before the bra. A nineteenth-century depiction of the harmful skeletal effects from tight corseting.
Innovations on Mary Jacobs’s design followed. Elastic fabric was introduced in the ’20s, and the strapless bra, as well as standard cup sizes, in the ’30s. The woman largely responsible for sized bras was Ida Rosenthal, a Russian-Jewish immigrant who, with the help of her husband, William, founded Maidenform.
During the “flapper era” of the ’20s, fashion dictated a flat-chested, boyish look. Ida Rosenthal, a seamstress and dress designer, bucked the trend, promoting bust-flattering bras. Combining her own design experience and paper-patterns, she grouped American women into bust-size categories and produced a line of bras intended to lift the female figure through every stage from puberty to maturity. Her belief that busts would return to fashion built a forty-million-dollar Maidenform industry. Asked during the ’60s, when young women were burning bras as a symbol of female liberation, if the action signaled the demise of the brassiere business, Ida Rosenthal answered, “We’re a democracy. A person has the right to be dressed or undressed.” Then she added, “But after age thirty-five a woman hasn’t got the figure to wear no support. Time’s on my side.”
Hosiery: 4th Century B.C., Rome
Sock. Hose. Stocking. However we define these related words today, or choose to use them interchangeably in a sentence, one thing is certain: originally they were not the items they are now. The sock, for instance, was a soft leather slipper worn by Roman women and effeminate men. Hose covered the leg but not the foot. The word “stocking” does not appear in the vocabulary of dress until the sixteenth century, and its evolution up the leg from the foot took hundreds of years.
The history of men’s and women’s “socks” begins with the birth of garments that were “put on” rather than merely “wrapped around.”
The first “put on” items were worn by Greek women around 600 B.C.: a low, soft sandal-like shoe that covered mainly the toes and heel. (See page 294.) Called a sykhos, it was considered a shameful article for a man to wear and became a favorite comic theater gimmick, guaranteed to win a male actor a laugh.
Roman women copied the Greek sykhos and Latinized its name to soccus. It, too, was donned by Roman mimes, making it for centuries standard comedy apparel, as baggy pants would later become the clown’s trademark.
The soccus sandal was the forerunner both of the word “sock” and of the modern midcalf sock. From Rome, the soft leather soccus traveled to the British Isles, where the Anglo-Saxons shortened its name to soc. And they discovered that a soft soc worn inside a coarse boot protected the foot from abrasion. Thus, from its home inside the boot, the soc was on its way to becoming the modern sock. Interestingly, the Roman soccus also traveled to Germany, where it was worn inside a boot, its spelling abbreviated to socc, which until the last century meant both cloth footwear and a lightweight shoe.
Hose. In ancient times in warm Mediterranean countries, men wore wraparound skirts, having no need for the leg protection of pants. In the colder climates of Northern Europe, though, Germanic tribes wore loose-fitting trousers reaching from waist to ankle and called heuse. For additional warmth, the fabric was commonly crisscrossed with rope from ankle to knee, to shield out drafts.
This style of pants was not unique to Northern Europeans. When Gaius Julius Caesar led his Roman legions in the first-century B.C. conquest of Gaul, his soldiers’ legs were protected from cold weather and the thorns and briers of northwestern forests by hosa—gathered leg coverings of cloth or leather worn beneath the short military tunic. The word hosa became “hose,” which for many centuries denoted gathered leg coverings that reached down only to the ankles.
Logically, it might seem that in time, leg hose were stitched to ankle socks to form a new item, stockings. However, that is not what happened. The forerunners of modern stockings are neither socks nor hose but, as we’re about to see, udones.
Stockings: 5th Century, Rome
By A.D. 100, the Romans had a cloth foot sock called an udo (plural, udones). The earliest mention of the garment is found in the works of the poet and epigrammatist M. Valerius Martialis, who wrote that in udones, the “feet will be able to take refuge in cloth made of goat’s hair.”
At that time, the udo fitted over the foot and shinbone. Within a period of one hundred years, Roman tailors had extended the udo up the leg to just above the knee, to be worn inside boots. Men who wore the stocking without boots were considered effeminate; and as these knee-length udones crept farther up the leg to cover the thigh, the stigma of effeminacy for men who sported them intensified.
Unfortunately, history does not record when and why the opprobrium of effeminacy attached to men wearing stockings disappeared. But it went slowly, over a period of one hundred years, and Catholic clergymen may well have been the pioneering trendsetters. The Church in the fourth century adopted above-the-knee stockings of white linen as part of a priest’s liturgical vestments. Fifth-century church mosaics display full-length stockings as the vogue among the clergy and laity of the Roman Empire.
Stockings had arrived and they were worn by men.
The popularity of form-fitting stockings grew in the eleventh century, and they became trousers known as “skin tights.” When William the Conqueror crossed the English Channel in 1066 and became the Norman king of England, he and his men introduced skin tights to the British Isles. And his son, William Rufus, wore French stocking pants (not much different in design from today’s panty hose) of such exorbitant cost that they were immortalized in a poem. By the fourteenth century, men’s tights so accurately revealed every contour of the leg, buttocks, and crotch that churchmen condemned them as immodest.
The rebellious nature of a group of fourteenth-century Venetian youths made stocking pants even more scandalous, splitting teenagers and parents into opposing camps.
A fraternity of men calling themselves La Compagna della Calza, or The Company of the Hose, wore short jackets, plumed hats, and motley skin tights, with each leg a different color. They presented public entertainments, masquerades, and concerts, and their brilliant outfits were copied by youths throughout Italy. “Young men,” complained one chronicler of the period, “are in the habit of shaving half their heads, and wearing a close-fitting cap.” And he reported that decent people found the “tight-fitting hose…to be positively immodest.” Even Geoffrey Chaucer commented critically on the attire of youth in The Canterbury Tales. Skin-tight, bicolored stockings may indeed have been the first rebellious fashion statement made by teenagers.
From a fourteenth-century British illustration of an attendant handing a stocking to her mistress. It’s the first pictorial evidence of a woman wearing stockings.
The stockings discussed so far were worn by priests, warriors, and young men. When did women begin to roll on stockings?
Fashion historians are undecided. They believe that women wore stockings from about A.D. 600. But because long gowns concealed legs, there is scant evidence in paintings and illustrated manuscripts that, as one eighteenth-century writer expressed it, “women had legs.”
Among the earliest pictorial evidence of a woman in stockings is an illustrated 1306 British manuscript which depicts a lady in her boudoir, seated at the edge of the bed, with a servant handing her one of a pair of stockings. The other stocking is already on her leg. As for one of the earliest references to the garment in literature, Chaucer, in The Canterbury Tales, comments that the Wife of Bath wore stockings “of fine skarlet redde.”
Still, references to women’s stockings are extremely rare up until the sixteenth century. Female legs, though undoubtedly much admired in private, were something never to be mentioned in public. In the sixteenth century, a British gift of silk stockings for the queen of Spain was presented with full protocol to the Spanish ambassador, who, drawing himself haughtily erect, proclaimed: “Take back thy stockings. And know, foolish sir, that the Queen of Spain hath no legs.”
In Queen Elizabeth’s England, women’s stockings fully enter history, and with fashion flair. In extant texts, stockings are described as colored “scarlet crimson” and “purple,” and as “beautified with exquisite embroideries and rare incisions of the cutters art.” In 1561, the third year of her reign, Elizabeth was presented with her first pair of knitted silk stockings, which converted her to silk to the exclusion of all other stocking fabrics for the remainder of her life.
It was also during Elizabeth’s reign that the Reverend William Lee, in 1589, invented the “loome” for machine-knitting stockings. The Reverend Lee wrote that for the first time, stockings were “knit on a machine, from a single thread, in a series of connected loops.” That year, the hosiery industry began.
Nylon Stockings: May 15, 1940, United States
Because of the public-relations fanfare surrounding the debut of nylon stockings, there is no ambiguity concerning their origin. Perhaps there should have been skepticism, though, of the early claim that a pair of stockings would “last forever.”
The story begins on October 27, 1938, when the Du Pont chemical company announced the development of a new synthetic material, nylon, “surpassing in strength and elasticity any previously known textile fibers.” On the one hand, the breakthrough meant that the hosiery industry would no longer be periodically jeopardized by shortages of raw silk for silk stockings. But manufacturers also feared that truly indestructible stockings would quickly bankrupt the industry.
While the “miracle yarn” was displayed at the 1939 World’s Fair, women across America eagerly awaited the new nylon stockings. Test wearers were quoted as saying the garments endured “unbelievable hours of performance.”
Du Pont had shipped selected hosiery manufacturers spools of nylon yarn, which they agreed to knit according to company specifications. The mills then allotted nylon stockings to certain stores, on the promise that none be sold before “Nylon Day,” slated as May 15 of that year, 1940.
The hysteria that had been mounting across the country erupted early on that mid-May morning. Newspapers reported that no consumer item in history had ever caused such nationwide pandemonium. Women queued up hours before store doors opened. Hosiery departments were stampeded for their limited stock of nylon stockings. In many stores, near-riots broke out. By the close of that year, three million dozen pairs of women’s nylons had been sold—and that number could have been significantly higher had more stockings been available.
At first, the miracle nylons did appear to be virtually indestructible. Certainly that was true in comparison to delicate silk stockings. And it was also true because, due to nylons’ scarcity, women doubtless treated the one or two pairs they managed to buy with greater care than they did silk stockings.
In a remarkably short time, silk stockings were virtually obsolete. And nylon stockings became simply “nylons.” Women after all had legs, and never before in history were they so publicly displayed and admired.
Sex-Related Words: Post-11th Century, England and France
With the conquest of England in 1066 by William of Normandy, the Anglo-Saxon language of the British Isles underwent several alterations. As the French-speaking Normans established themselves as the ruling caste, they treated the native Saxons and their language as inferior. Many Saxon words were regarded as crude simply because they were spoken by Saxons. Some of these words, once inoffensive, survived and passed eventually into English as coarse, impolite, or foul expressions. Etymologists list numerous examples of such “polite” (Norman) and “impolite” (Saxon) pairs: “perspire” beside “sweat,” “dine” beside “eat,” “deceased” beside “dead.”
The mother tongue of the twelve kings and queens from William I (who ruled from 1066 to 1087) to Richard II (from 1377 to 1399) was the Normans’ French, though the Anglo-Saxons’ English continued to be spoken. When the two tongues blended into a new language, Middle English, which became the official language of the court in 1362 and the language for teaching in the universities at Oxford and Cambridge in 1380, we inherited many double expressions. In addition to those listed above, the Norman “fornicate” came to be the respectable replacement for the Saxon “fuck,” which itself derived from the Old English word fokken, meaning “to beat against.”
The Normans, of course, had obtained their word “fornicate” from an earlier language, and etymologists trace the origin to fornix, Latin for a small, vaulted-ceiling basement room that could be rented for a night. For in Roman Christian times, prostitutes practiced their trade secretly in such underground rooms, much the way a modern prostitute might rent a motel room. Fornix first became a noun synonymous with “brothel,” then a verb meaning “to frequent a brothel,” fornicari, and finally the name of the activity conducted therein.
The word “prostitute” comes to us from the Latin prostitutus, meaning “offered for sale.” It not only reflects that a hooker charges for services, but as the verb “to prostitute,” connotes sacrificing one’s integrity for material gain. “Prostitute” was itself a euphemism for the Old English word “whore,” a term that once merely suggested desire.
“Hooker” is believed to be associated with General Joseph (“Fighting Joe”) Hooker of Civil War fame. To bolster the morale of his men, General Hooker is supposed to have allowed prostitutes access to his troops in camp, where they became known as “Hooker’s girls.” When a section of Washington was set aside for brothels, it acquired the name Hooker’s Division, and the local harlots became hookers.
The term “gay,” synonymous today with “homosexual,” dates back to thirteenth-century France, when gai referred to the “cult of courtly love” —that is, homosexual love—and a “lover” was a gaiol. Troubadour poetry of that period explicitly discusses this “cult” love. In the following centuries, the word was appropriated to describe first a prostitute, then any social undesirable, and lastly, in a homophobic British culture, to describe both homosexuality and the homosexual himself. Its first public use in the United States (aside from pornographic fiction) was in a 1939 Hollywood comedy, Bringing Up Baby, when Cary Grant, sporting a dress, exclaimed that he had “gone gay.”
From the Magazine Rack
Magazines in America: 1741, New England
Newspapers were developed to appeal to the general public; magazines, on the other hand, were intended from the start to deliver more narrowly focused material to special-interest groups, and they experienced a difficult birth. In America, early magazines failed so quickly and frequently that the species was continually endangered, several times extinct.
The origin of the magazine, following the development of the printing press in fifteenth-century Germany, was straightforward: printed single-page leaflets expanded into multipage pamphlets that filled the middle ground between newspapers and books. History’s first magazine was the 1663 German periodical Erbauliche Monaths-Unterredungen, or Edifying Monthly Discussions, started by Johann Rist, a poet and theologian from Hamburg. Strongly reflecting its publisher’s dual vocations, the “monthly” appeared whenever Rist could spare the time to write and print it, and its edifying contents strictly embodied the author’s own views. It lasted, on and off, for five years—an eternity for early magazines.
Magazines for light reading, for diversion, and for exclusively female readership began appearing by the mid-seventeenth century. Two are notable for having established a format that survives to this day.
A 1672 French publication, Mercure Galant, combined poems, colorful anecdotes, feature articles, and gossip on the nobles at court. And in 1693, a British publisher took the bold step of introducing a magazine devoted to “the fairer sex.” Ladies’ Mercury offered advice on etiquette, courtship, and child rearing, plus embroidery patterns and home cosmetic preparations, along with dollops of light verse and heavy doses of gossip—a potpourri of how-tos, delights, and inessentials that could not be found in newspapers or books. The magazine found itself a niche and set forth a formula for imitators.
Magazines originated to fill the middle ground between newspapers and books.
While “penny weeklies” thrived in centuries-old Europe, in the nascent American colonies they encountered indifferent readership, reluctant authorship, and seemingly insurmountable circulation problems that turned many a weekly into a semiannual.
Due to competitive forces, America’s first two magazines, both political, were issued within three days of each other. In February 1741, Benjamin Franklin’s General Magazine, and Historical Chronicle, For all the British Plantations in America was narrowly beaten to publication by the rival effort of publisher Andrew Bradford: American Magazine, or A Monthly View of the Political State of the British Colonies. A fierce quarrel ensued and both Philadelphia periodicals quickly folded; Bradford’s after three months, Franklin’s after six.
Numerous other magazines were started—spanning spectrums from poetry to prose, fact to fiction, politics to how-to—and most of them failed. Noah Webster lamented in 1788, “The expectation of failure is connected with the very name of a Magazine.” And the New-York Magazine, one of the longest-lived of the eighteenth-century ventures, went to its inevitable demise editorializing: “Shall every attempt of this nature desist in these States? Shall our country be stigmatised, odiously stigmatised, with want of taste for literature?”
Why such failure?
Three factors are to blame: broadly, the reader, the writer, and the mails.
The American reader: In 1741, the year Benjamin Franklin’s magazine debuted, the population of the colonies was only about one million, whites and blacks, many of both races illiterate. This sparse population was scattered over an area measuring more than twelve hundred miles north to south along the seaboard, and at some points, a thousand miles westward. And in most regions the roads were, as one publication stated, “wretched, not to say shameful.” Stagecoach travel between the major cities of Boston and New York took eight to ten days. Thus, it’s not surprising that during the eighteenth century, no American magazine achieved a readership higher than fifteen hundred; the average number of subscribers was about eight hundred.
The American writer: Only slightly less discouraging than a small and scattered readership was the unwillingness of eighteenth-century writers to contribute to magazines, which they viewed as inferior to books and newspapers. Consequently, most of the early American magazines reprinted material from books, newspapers, and European magazines. As the editor of the moribund New-York Magazine bemoaned, “In the present state of this Western World, voluntary contributions are not to be depended on.”
The American mails: Horse-carried mail was of course faster than mail delivered by stagecoach, but magazines (and newspapers) in the eighteenth century were admitted to the mails only at the discretion of local postmasters. In fact, many of America’s early magazine publishers were postmasters, who readily franked their own products and banned those of competitors. This gave postmasters immense power over the press, and it led to corruption in political campaigns, forcing politicians to pay regional postmasters in order to appear in print. Even the honorable Benjamin Franklin, appointed postmaster of Philadelphia in 1737, discriminated in what publications his post riders could carry.
Furthermore, the cost of a magazine was compounded by a commonplace postal practice: For more than fifty years, many American periodicals arrived by mail only if a subscriber paid a fee to both the local post rider and the regional postmaster. This practice was actually legalized in the Postal Ordinance Act of 1782. Publishers advertised that subscribers would receive issues “by the first opportunity,” meaning whenever and however a magazine could be delivered.
One further problem bedeviled early American magazine publishers, one that has since been palliated but not solved: the delinquent customer. Today it is common practice to pay in advance or in installments, or to charge a magazine subscription. But in the eighteenth century, a subscriber paid weeks or months after receiving issues—issues which, given the vagaries of the mail, sometimes never arrived, or arrived late or damaged. Poor incentives for paying debts. And there were no such intimidations as a collection agency or a credit rating.
The dilemma led publishers to strange practices. Desperate for payment, they often stated in their magazines that in lieu of cash they would accept wood, cheese, pork, corn, and other products. Isaiah Thomas, editor of the 1780s Worcester Magazine, wrote in an issue that his family was short on butter and suggested how delinquent subscribers could quickly clear their arrears: “The editor requests all those who are indebted to him for Newspapers and Magazines, to make payment—butter will be recieved in small sums, if brought within a few days.”
In the face of so many fatal odds, why did American publishers continue to issue new magazines? Because they looked toward Europe and were reminded of the lucrative and prestigious possibilities of periodicals—if only the problems of readership, authorship, and the mails could be solved.
Ladies’ Home Journal: 1883, Pennsylvania
In the year of America’s centennial, a twenty-six-year-old Philadelphia newspaperman, Cyrus Curtis, conceived a family-oriented horticulture magazine, Tribune and Farmer, to sell for fifty cents for a year’s subscription. Mrs. Curtis persuaded her husband to allot her space for a short regular column, which she proposed to title “Woman and the Home.” He reluctantly consented. Mr. Curtis’s magazine folded; his wife’s contribution split off to become the Ladies’ Home Journal, still in strong circulation today.
Issues in the early 1880s contained comparatively few pages—of recipes, household hints, needlepoint patterns, gardening advice, poems, and occasionally a short story. Unpretentious, inexpensively printed, the thin magazine offered great variety, and Mrs. Curtis, editing under her maiden name, Louise Knapp, clearly recognized her audience as America’s middle-class homemakers. At the conclusion of its first year, the Journal had a circulation of 2,500, an impressive number for that day.
Cyrus Curtis, having abandoned his own publishing venture, concentrated on increasing the circulation of his wife’s magazine. The older problems of limited readership and unreliable mail distribution no longer plagued publishers, but snagging the best writers of the day was a continuing challenge. Especially for a magazine whose hallmark was household hints. Curtis soon discovered that many established authors—Louisa May Alcott, for one—reserved their work for prestigious journals, even if the pay was slightly less.
For Cyrus Curtis, a breakthrough came when he learned that an offer to contribute to an author’s favorite charity often was enough to win an article for his wife’s magazine. Thus, Louisa May Alcott came to head the Journal’s “List of Famous Contributors,” which Curtis publicized. This, and an aggressive advertising campaign, capped by a contest with cash prizes, caused circulation in 1887 to shoot up to 400,000; correspondingly, the magazine’s size expanded to a handsome thirty-two pages per monthly issue.
In the 1890s, the magazine’s appeal to American women was due in part to its newfound tone of intimacy. Editor Edward Bok, a bachelor, had instituted a chatty, candid personal-advice column, “Side Talks with Girls,” which he himself initially wrote under the pseudonym Ruth Ashmore. Its phenomenal success—the first column drew seven hundred letters from women seeking counsel on matters from courtship to health—spawned “Side Talks with Boys” and “Heart to Heart Talks,” and established “advice” and “self-improvement” features as magazine staples. And while other magazines of the day featured identical cover illustrations issue after issue, the Journal daringly changed its cover images monthly, setting another modern trend.
Not all the magazine’s features were lighthearted and chatty. The Journal uncovered a major advertising hoax.
At the turn of the century, there was no national agency to screen the miracle claims made for scores of over-the-counter syrups and sarsaparillas. The federal government and the medical profession were continually battling companies nostrum by nostrum, claim by outrageous claim. Considerable controversy surrounded the top-selling Lydia E. Pinkham Vegetable Compound, a panacea for a spectrum of female woes. Advertisements for the product claimed that Miss Lydia herself was toiling away— “in her laboratory at Lynn, Massachusetts” —improving the compound. The Ladies’ Home Journal proved that Lydia was actually in Pine Grove Cemetery near Lynn, where she had been resting for twenty years. To prove it, the magazine published a picture of the dated tombstone. No Watergate coup, to be sure, but the article heightened public awareness of falsehood in advertising, and the following year, 1906, Congress passed the long-awaited Federal Food and Drug Act.
The Ladies’ Home Journal could rightfully boast that it became the first magazine in America to attain a circulation of one million readers.
Good Housekeeping: 1885, Massachusetts
The hallmark of the 1880s Good Housekeeping: A Family Journal Conducted in the Interests of the Higher Life of the Household was that it invited readers’ contributions and sponsored contests. One of the earliest requests for contributions offered $250 for the best article on “How to Eat, Drink and Sleep as Christians Should.” And initial contests awarded cash prizes for the most effective “Bug Extinguisher,” the best “Bed Bug Finisher,” and the most potent “Moth Eradicator,” highlighting an entomological concern that apparently was of pressing significance to readers.
The thirty-two-page biweekly, which sold for $2.50 a year, was the brainchild of Massachusetts political writer and poet Clark Bryan. Scrapbookish in design, the magazine featured word puzzles and quizzes in addition to advice on home decorating, cooking, and dressmaking. Bryan’s reliance on reader-written articles precluded an elite roster of contributors, but it helped immensely to popularize the homey periodical, offering a subscriber the opportunity to see his or her name and views in print. Each issue led with one of Bryan’s own poems, but in 1898, after battling a serious illness, the magazine’s founder committed suicide.
The magazine survived and thrived. And it continued to feature poems—by such writers as Amy Lowell, Edna St. Vincent Millay, Alfred Noyes, and Ogden Nash. By 1908, Good Housekeeping boasted a readership of 200,000 and was still printing articles like “Inexpensive Christmas Gifts,” the kind Bryan had favored.
Clark Bryan did not live to see the words “Good Housekeeping” adapted as the country’s first national badge of consumerism. Yet the Good Housekeeping Seal of Approval arose out of Bryan’s founding philosophy. To test the numerous recipes, spot-removing practices, salves, and labor-saving gadgets that subscribers recommended to other readers, the magazine set up its own Experimental Station in 1900. The station’s home economists and scientists also tested products of the magazine’s advertisers and published ads only for products that won approval. The concept was novel, innovative, much needed, and it rapidly gained the respect of readers. By 1909, the magazine had instituted its official Seal of Approval, an elliptical graphic enclosing the words “Tested and Approved by the Good Housekeeping Institute Conducted by Good Housekeeping Magazine.” A guarantee of a product’s quality and availability, the phrase passed into the American vernacular, where it was applied not only to consumer merchandise but colloquially to any person, place, or thing that met with approval.
Cosmopolitan: 1886, New York
Bearing the motto “The world is my country and all mankind are my countrymen,” Cosmopolitan was born in Rochester, New York, in 1886, the idea of writer and publisher Paul J. Schlicht. The handsome magazine, with its high yearly subscription price of four dollars, was not at all successful. In accordance with its motto, the periodical featured articles on such disparate subjects as how ancient people lived, climbing Mount Vesuvius, the life of Mozart, plus European travel sketches and African wild animal adventures.
After a financial struggle, Schlicht sold the magazine to a former West Point cadet and diplomat to China, forty-year-old John Walker, a New Yorker. Although Walker was both praised and criticized for introducing “the newspaper ideas of timeliness and dignified sensationalism into periodical literature,” under his leadership Cosmopolitan prospered. He raised the prestige of the magazine in 1892 by hiring the respected literary figure William Dean Howells as a coeditor. And the first issue under Howells’s stewardship was impressive: it carried a poem by James Russell Lowell; an article by Henry James; an essay by Thomas Wentworth Higginson, Emily Dickinson’s mentor; and a feature by Theodore Roosevelt.
To further increase the magazine’s circulation, Walker undertook a railroad tour of New England cities. New subscribers received as gifts the memoirs of either Grant or Sherman, and successful student salesmen of Cosmopolitan subscriptions won college scholarships. By 1896, the magazine held a secure place among the country’s leading illustrated periodicals.
Walker’s “dignified sensationalism” was not quite dignified enough for the subtler literary tastes of coeditor Howells, who resigned. But Walker’s philosophy was appreciated by the public and by the Hearst Corporation, which acquired the magazine in 1905. Compared to its competitors, Cosmopolitan was expensive: thirty-five cents an issue during the ’20s. But people did not seem to mind spending the money for dignified sensationalism, as well as for features by Theodore Dreiser and Stephen Crane. President Coolidge remarked when he selected the magazine to publish his newly completed autobiography, “When you pay thirty-five cents for a magazine, that magazine takes on in your eyes the nature of a book and you treat it accordingly.”
Vogue: 1892, New York
To the American woman in the 1890s, Vogue depicted a sophisticated new world. George Orwell, the British literary critic and satirist, wrote that few of the magazine’s pages were devoted to politics and literature; the bulk featured “pictures of ball dresses, mink coats, panties, brassieres, silk stockings, slippers, perfumes, lipsticks, nail polish—and, of course, of the women, unrelievedly beautiful, who wear them or make use of them.”
In fact, it was for those women who “make use of them” that the magazine was designed.
Vogue began in 1892 as a society weekly for wealthy New Yorkers. The names of most of the two hundred fifty stockholders of its publishing company were in the Social Register, including a Vanderbilt, a Morgan, and a Whitney. According to its philosophy, the weekly was to be “a dignified, authentic journal of society, fashion, and the ceremonial side of life,” with its pages uncluttered by fiction, unsullied by news.
The first issues were largely social schedules on soirees and coming-out parties. For the cover price of ten cents, average folk could ogle the galas, betrothals, marriages, travel itineraries, and gossip of New York’s elite. The magazine mentioned Delmonico’s with respect, and reported from the theater, concert hall, and art gallery with approval or disdain. Its avid coverage of golf suggested the sport was already a national craze.
Vogue was not for everyone. Its unique brand of humor—as when it printed: “Now that the masses take baths every week, how can one ever distinguish the gentleman?” —often confounded or infuriated its more thoughtful readers. And the early magazine was edited by the brilliant but eccentric Josephine Redding, described as “a violent little woman, square and dark, who, in an era when everyone wore corsets, didn’t.” Renowned for her hats, which she was never seen to remove, she once, when confined to bed by illness, received her staff in a nightgown and a hat.
Vogue scored a coup in 1895, publishing detailed drawings of the three-thousand-dollar trousseau of Consuelo Vanderbilt, whose impending marriage to the duke of Marlborough was the Charles-and-Diana event of the day. No lesser magazine of that era would ever have been given access to the material.
In 1909, Vogue was purchased by publisher Condé Nast. Under his leadership, the magazine became primarily a fashion journal, still for the elite. An editorial that year proclaimed as the purpose of the magazine to “hold the mirror up to the mode, but to hold it at such an angle that only people of distinction are reflected therein.” In the 1930s, Nast introduced Mademoiselle, geared for women aged seventeen to thirty, then Glamour, edited for the young career woman. But Vogue remained the company’s proud centerpiece, America’s preeminent chronicler of fashion and the fashionable, labeled by Time as “No. 1” in its field.
House Beautiful: 1896, Illinois
Its name was taken from a poem by Robert Louis Stevenson, “The House Beautiful,” and that was the exact title of the original 1896 magazine. The initial “The” was dropped in 1925.
Begun as a journal of “Simplicity and Economy” by Eugene Klapp, a Chicago engineer who had a flair for architecture and literature, the magazine cost a then-reasonable ten cents. It contained short, readable articles on home building and decorating, and the magazine’s first page announced its philosophy: “A little money spent with careful thought by people of keen artistic perception will achieve a result which is astonishing.” In other words, beauty and elegance were affordable to the average home owner. When Eugene Klapp joined the military in 1899, the magazine came into the capable hands of Harvard-educated Herbert Stone.
Possessing an abhorrence of pretension, Stone was perfectly suited to the magazine’s homey philosophy. He oversaw a series of critical articles under the rubric “The Poor Taste of the Rich.” The intent of the series was to assure readers “That Wealth Is Not Essential to the Decoration of a House,” and to enlighten them to the fact “That the Homes of Many of Our Richest Citizens Are Furnished in Execrable Taste.” The critical articles came amply illustrated with photographs of the offending mansions, and the names of the affluent residents were not withheld. The series generated considerable publicity but, interestingly, no lawsuits.
In 1898, House Beautiful instituted an annual competition for the best design of a three-thousand-dollar home, regularly upping the limit to keep pace with inflation and prosperity. And when apartment house living gained ascendancy in the 1910s, the magazine offered the first articles on the special requirements in furnishing and decorating this new type of space. In reflecting such trends, the magazine’s articles became a social barometer of middle-class American living; for example, the single title “Three-Bedroom House with Two-Car Garage for $8,650” (April 1947) could conjure up an era and its people’s expectations.
Herbert Stone served as editor for sixteen years, but in 1915, returning home from a European holiday on the Lusitania, he drowned when the ship was sunk by a German submarine.
National Geographic: 1888, Washington, D.C.
With its yellow-bordered cover and timeless photographic essays, The National Geographic Magazine became an American institution soon after its introduction in October 1888. From the start, subscribers saved back issues, for quality nature photography (in black and white) was something of a new phenomenon, elevating the magazine to the nondisposable status of a book.
The landmark publication originated with the National Geographic Society as a means of disseminating “geographic knowledge.” From the premier issue, the magazine specialized in maps of exotic rivers, charts of rain forest precipitation, reports on volcanology and archaeology, and adventurous forays by eminent scientists and explorers into foreign lands. In the era before air travel, National Geographic transported thousands of readers to regions they would never visit and most likely had never imagined existed. The adventure was to be had for a five-dollar membership in the society, with a year’s subscription to the magazine.
An early president of the society was inventor Alexander Graham Bell. With the magazine’s membership static at about thirteen hundred in 1897, Bell undertook the challenge of creating a new audience. Solicitation now took the form of a personal invitation to membership in the National Geographic Society, beginning, “I have the honor to inform you that you have been recommended for membership.” A subscriber was assured that his or her money funded scientific exploration in new parts of the globe.
Bell also encouraged his contributing authors to humanize their adventure stories, enabling the reader to participate vicariously in the hardships and exhilarations of exploration. Peary, Cook, Amundsen, Byrd, and Shackleton were just a few of the renowned explorers who wrote firsthand accounts of their harrowing adventures. The society contributed to many expeditions, and a grateful Byrd once wrote, “Other than the flag of my country, I know of no greater privilege than to carry the emblem of the National Geographic Society.” By 1908, pictures occupied more than half of the magazine’s eighty pages.
But what significantly transformed National Geographic and boosted its popularity was the advent of color photographs and illustrations.
The first color pictures appeared in the November 1910 issue: thirty-nine bright and exotic images of Korea and China, most of them full-page. Reader response was so great that following issues featured color photo spreads of Japan and Russia, and colored drawings of birds, which became a staple of the magazine. It claimed many photographic innovations: the first flashlight photographs of wild animals in their natural habitats, the first pictures taken from the stratosphere, and the first natural-color photographs of undersea life. One of the most popular single issues appeared in March 1919. Titled “Mankind’s Best Friend,” it contained magnificent color illustrations of seventy-three breeds of dogs.
From readers’ admiring letters, the editors learned that while subscribers enjoyed studying the colorful dress of foreign peoples, they preferred even more the undress of bare-breasted natives in obscure parts of the globe. By 1950, National Geographic held a firm position among the top ten monthly periodicals in the world.
Scientific American: 1845, New York
A shoemaker at age fifteen and an amateur fiddler, Rufus Porter was also a tireless inventor—of cameras, clocks, and clothes-washing machines. Somewhere between his experiments with an engine lathe and electroplating in the summer of 1845, Porter launched a slender weekly newspaper, Scientific American, devoted to new inventions, including many of his own. A year later, bored with his latest endeavor, Porter sold his paper for a few hundred dollars.
The purchasers, Orson Munn and Alfred Beach, immediately increased the weekly from four to eight pages and broadened its scope to include short articles on mechanical and technical subjects. In those early years, Scientific American virtually ignored the fields of biology, medicine, astronomy, and physics.
Many of its technical articles were futuristic, some solid, others fanciful. In 1849, for instance, the magazine prematurely heralded the advent of subway transportation. In “An Underground Railroad in Broadway,” the editors outlined plans for a subterranean tunnel to run the length of New York City’s Broadway, with “openings in stairways at every corner.” Since electric power was not yet a reality, the subway envisioned by the editors was quite different from today’s: “The cars, which are to be drawn by horses, will stop ten seconds at every corner, thus performing the trip up and down, including stoppages, in about an hour.”
When New York newspapers ridiculed the idea, editor Beach secured legislative approval to build instead an underground pneumatic tube system. In February 1870, workmen actually began digging a tunnel in lower Manhattan from Warren to Murray streets. As conceived, a car accommodating eighteen passengers would fit snugly into the pneumatic tube. A compressed-air engine would blow it downtown, then, with the engine reversed, the car would be sucked uptown. Construction was still proceeding when the city’s government, convinced that some sort of subway was feasible and essential, announced plans for a five-million-dollar elevated steam train.
The magazine’s early contributors were among the greatest inventors of the period. Samuel Morse wrote about his dot-and-dash code, and Thomas Edison, who walked three miles to get his monthly copy of the journal, composed a feature in 1877 about his new “Talking Machine,” the phonograph. The magazine’s stated goal was “to impress the fact that science is not inherently dull, but essentially fascinating, understandable, and full of undeniable charm” —a goal that it achieved early in its history.
Life: 1936, New York
In November 1936, after months of experimentation and promotion, Henry Luce’s Life magazine appeared on newsstands throughout the country. For a dime, a reader was entertained and enlightened by ninety-six pages of text and photographs: the first picture was of an obstetrician slapping a baby to consciousness and was captioned “Life Begins.” The issue sold out within hours, and customers clamored to add their names to dealers’ waiting lists for the next installment.
Although Life was the most successful picture magazine in history, it was not the first picture magazine, nor was it the first Life. Luce’s product took its name from a picture periodical that debuted in 1883, the creation of an illustrator named John Mitchell.
Mitchell graduated from Harvard College with a degree in science and studied architecture in Paris. In 1882, he settled in New York City and decided to start a “picture weekly” that would make use of a new zinc etching method of reproducing line drawings directly instead of having them first engraved on wood blocks. Mitchell’s Life was a magazine of humor and satire, and a showcase for many of his own comic illustrations. In its pages in 1887, Charles Dana Gibson, not yet twenty-one, introduced Americans to the serenely beautiful, self-reliant “Gibson Girl.” Until the Depression, Mitchell’s Life was one of America’s most successful ten-cent weeklies.
Enter Henry Luce.
In 1936, Luce was searching for a catchy title for his soon-to-be-launched photographic picture magazine—which at the time was tentatively named Look. Luce purchased the name Life from Mitchell’s illustrated humor magazine for $92,000.
Luce’s Life, relating the news in photographs, found an eager audience in the millions of Americans enthralled with motion pictures. Images, rather than text, were a new and graphic way to convey a story, and Life’s gutsy and artful pictures read like text. The magazine’s “picture essays” brought to maturity the field of photojournalism. Within only a few weeks of its November 1936 debut, Life was selling a million copies an issue, making it one of the most successful periodicals in history.
Charles Dana Gibson’s “Gibson Girl” in Life.
Look: 1937, New York
Around the time Henry Luce was changing the proposed name of his picture magazine from Look to Life, newspaperman Gardner Cowles, Jr., was hard at work independently developing a similar periodical, to be called Look. Look, though, was no imitator of Life, nor were the magazines competitors—at first. In fact, Gardner Cowles and Henry Luce traded ideas on their projects. For a time, Luce was even an investor in Look.
Look actually evolved from the Sunday picture section of the Des Moines, Iowa, Register and Tribune, a newspaper owned by the Cowles family since early in the century. In 1925, the paper surveyed its readers and discovered that they preferred pictures to text. Thus, the newspaper began running series of photographs that told a story instead of a single picture with text. These “picture stories” were so successful that in 1933 the Register and Tribune began syndicating them to twenty-six other newspapers. It was then that Cowles formulated plans for a picture magazine.
Although Gardner Cowles and Henry Luce agreed on the power of visual images, their early magazines were fundamentally different. Life, an “information” weekly, was printed on slick stock and emphasized news, the arts, and the sciences, with an occasional seasoning of sex. Look, first a monthly, then a biweekly, was printed on cheaper paper and focused on personalities, pets, foods, fashions, and photo quizzes. As Look matured, it grew closer in concept to Life and the two magazines competed for readers—with each magazine finding enough loyal followers to keep it thriving and competing for many years.
Ebony: 1945, Illinois
While Look and Life were top sellers, a new and significantly different American magazine appeared, capturing the readership of more than a quarter of the black adults in the country.
John Johnson, head of the Johnson Publishing Company, founded Ebony in 1945 specifically for black World War II veterans, who were returning home in large numbers. Johnson felt that these men, ready to marry and father children, needed wider knowledge of the world and could benefit from reading stories about successful blacks.
Johnson had already displayed a talent for persuading powerful whites to take him and his projects seriously. His first publishing venture had been a magazine called Negro Digest. He had raised the capital to launch that periodical, and when white magazine distributors refused to believe that a magazine for blacks could succeed, Johnson coaxed hundreds of his acquaintances to ask for the magazine at newsstands. And after several places agreed to stock Negro Digest on a trial basis, Johnson’s friends then purchased all the copies. Chicago’s white distributors, concluding that readership for a black magazine existed, welcomed Johnson’s digest. Within months, circulation of Negro Digest rose to fifty thousand, and in 1943, when the magazine was a year old, Johnson persuaded Eleanor Roosevelt to write an article titled “If I Were a Negro.” It generated so much publicity nationwide that before year’s end, the circulation of Negro Digest trebled.
With Ebony, the black readership was strong but white advertisers shied away from the magazine. Johnson’s breakthrough came with the Zenith Corporation. The electronics company’s president, Commander Eugene McDonald, had journeyed to the North Pole with Admiral Peary and a black explorer, Matthew Henson. When Johnson approached Commander McDonald, he displayed an issue of Ebony featuring a story about Henson and the Peary expedition. The commander’s nostalgia induced him to honor Johnson’s request, and Zenith’s advertisements in Ebony undermined the white wall of resistance. With Ebony, Negro Digest, and another publication, Jet, John Johnson captured a combined readership of twelve million, nearly half the black adults in America.
Esquire: 1933, New York
The immediate inspiration for Esquire was a publication that debuted in October 1931, Apparel Arts, a handsome quarterly for the men’s clothing trade edited by Arnold Gingrich. Apparel Arts was popular but expensive; Gingrich figured that American men might flock in large numbers to a version of the fashion magazine that could sell for a dime. He considered calling the spin-off Trend, Stag, or Beau. Then one day he glanced at an abbreviation in his attorney’s letterhead, “Esq.,” and had his title.
Gingrich felt certain that a market existed for Esquire because of reports from clothing stores that customers stole counter copies of Apparel Arts. It was customary for stores to display the thick quarterly, allowing customers to order from among its merchandise. Several East Coast stores had already asked Gingrich if he could produce an inexpensive, giveaway fashion brochure that customers could take home and browse through at leisure. Instead of a handout, Gingrich conceived of the ten-cent Esquire, and he prepared a dummy copy by cutting and pasting pictures and articles from back issues of its parent, Apparel Arts.
The first issue of Esquire, in October 1933, was an attractive, glossy quarterly of 116 pages, one third of them in color, but costing fifty cents. Although industry experts had predicted that a men’s fashion magazine could sell no more than 25,000 copies, clothing stores alone ordered 100,000 copies of the initial issue; Gingrich immediately decided to make the magazine a monthly.
In addition to fashion, the premier issue included articles, short stories, and sports pieces bearing impressive bylines: Ernest Hemingway, John Dos Passos, Dashiell Hammett, and Gene Tunney. And Esquire continued as a magazine of fashion and literary distinction, featuring writings by Thomas Mann, D. H. Lawrence, André Maurois, and Thomas Wolfe. Hemingway first published “The Snows of Kilimanjaro” in its pages; F. Scott Fitzgerald contributed original stories; Arthur Miller wrote “The Misfits” for the magazine, which also introduced new plays by William Inge and Tennessee Williams. By 1960, the magazine that had been conceived as a free clothing store handout was generating yearly advertising revenue in excess of seven million dollars and had a circulation of almost a million. One can only speculate on its success had Arnold Gingrich called it Stag.
Reader’s Digest: 1922, New York
The son of a Presbyterian minister from St. Paul, Minnesota, DeWitt Wallace had an idea for a small family digest that might at best earn him five thousand dollars a year. He believed that people wished to be well informed, but that no reader in 1920 had the time or money to read the scores of magazines issued weekly. Wallace proposed to sift out the most noteworthy articles, condense them for easy reading, and gather them into a handy periodical the size of a novella.
From back issues of other magazines, Wallace prepared his dummy. Two hundred copies of the prototype, already named Reader’s Digest, were mailed to New York publishers and other potential backers. No one expressed the least interest. So Wallace and his fiancée, Lila Bell Acheson, the daughter of another Presbyterian minister, rented an office in New York’s Greenwich Village and formed the Reader’s Digest Association. They condensed magazine articles, and prepared a mimeographed circular soliciting subscriptions, which they mailed to several thousand people on their wedding day, October 15, 1921. When they returned from their honeymoon two weeks later, the Wallaces found they had fifteen hundred charter subscribers at three dollars each. They then set to work on issue number one of Reader’s Digest, dated February 1922.
With success came an unanticipated problem.
At first, other magazines readily granted the Digest permission to reprint articles without fees. It was publicity. But as circulation increased, the Digest was suddenly viewed as a competitor, cannibalizing copy and cutting into advertising revenue and readership. Soon many of the country’s major magazines refused the Wallaces reprint rights.
In 1933, to maintain the appearance of a digest, DeWitt Wallace instituted a controversial practice. He commissioned and paid for original articles to be written for other magazines, with the proviso that he be permitted to publish excerpts. Critics lambasted them as “planted articles,” while Wallace more benignly called them “cooperatively planned.” Magazines with small budgets welcomed the articles, but larger publications accused Wallace of threatening the free flow of ideas and determining the content of too many publications. The practice was discontinued in the 1950s. By that time, the wholesome family digest that praised a life of neighborliness and good works was earning thirty million dollars a year, and had recently launched a new venture, the Reader’s Digest Condensed Book Club. In the next decade, its circulation would climb to fifteen million readers.
TV Guide: 1953, Pennsylvania
The magazine that would achieve a weekly circulation of seventeen million readers and change the way Americans watched television was born out of a telephone conversation.
In November 1952, Merrill Panitt, a television columnist for the Philadelphia Inquirer and an administrative assistant at Triangle Publications, received a phone call from his employer at Triangle, Walter H. Annenberg. The influential businessman had spotted a newspaper advertisement for a new weekly magazine, TV Digest. Annenberg instructed Panitt to learn more about the proposed publication and discover if there were any others like it around the country. Before that phone call was concluded, Annenberg had convinced himself to publish a national television magazine with local program listings. By the time he had hung up, he had laid out in principle what was to become one of America’s top-selling periodicals.
Panitt learned that local television magazines existed in at least New York City, Philadelphia, Chicago, and Los Angeles. Annenberg moved quickly to acquire these publications, while contemplating what to name his own venture.
Panitt began the work of assembling a national editorial staff. Since there was no reservoir of stories or photos to fall back on, assignments were issued quickly. Red Smith, later to win a Pulitzer Prize, was hired to contribute a regular sports column.
At the Philadelphia headquarters, the editors had no trouble in deciding whom to put on their first cover. There had never been a television show as popular as I Love Lucy. It was a national phenomenon. President Eisenhower delayed an address to the nation rather than run against Lucy, and since the show aired on the night America’s department stores remained open until nine-thirty, stores across the country installed television sets, hoping to win back shoppers who were staying home by the tens of thousands rather than miss their favorite program. Since the entire country had followed Lucy’s pregnancy and her baby’s birth on television, the editors decided to highlight the baby, Desiderio Alberto Arnaz IV, and to place Lucy’s familiar face in the magazine’s upper-right-hand corner.
TV Guide made its debut in April 1953, in ten different editions, with regional program listings. Although that first issue was a resounding success, weekly circulation began a strange and unanticipated decline. No one at the new publication had taken into account a social practice of the ’50s: With the majority of American homes lacking air-conditioning, television viewing declined precipitously in the summer months, while families opted for outdoor recreation, even if it was only to rock on the front porch to catch a breeze.
With the approach of fall, the circulation of TV Guide rose steadily, and the editors made an innovative move in devoting one issue to the shows scheduled for the new 1953–54 season. That first Fall Preview Issue sold out at newsstands and supermarkets and started a tradition. In fact, the annual Fall Preview quickly became TV Guide’s biggest issue in advertising revenue and circulation. Today, headquartered in Radnor, Pennsylvania, the magazine publishes 108 local editions, covering every state but Alaska.
Time: 1923, New York
Almost titled Chance or Destiny, the most popular news weekly in the history of publishing, Time, sprang out of a close collegiate friendship between two extraordinarily different men.
Briton Hadden, born in Brooklyn in 1898 of well-to-do parents, had shown an interest in journalism since childhood, when he entertained his family with poems and stories and his classmates with a newspaper, the Daily Glonk. He secured his first professional writing position, with the New York World, after informing its editor, who had tried to dismiss him, “You’re interfering with my destiny.”
Whereas Hadden was extroverted and prankish, Henry Luce was serious and pragmatic. He was the son of a Presbyterian missionary who had founded two American universities in China, where Luce was born. Family members were required to spend at least an hour a day in some effort that benefited mankind.
Henry Luce met Briton Hadden at the Hotchkiss School in Lakeville, Connecticut, and an immediate and intense bond developed between the young men. Hadden edited the school newspaper, the Hotchkiss Record, while Luce published the Hotchkiss Literary Monthly, contributing essays and poetry. At Yale University, their friendship strengthened, with Hadden as chairman of the Yale News and Luce its managing editor. They jointly interrupted their college careers in 1918 to enlist in the Student Army Training Corps, and it was then that they conceived the idea of founding a national news weekly.
The magazine the two men eventually produced in 1923 was considerably different from the Time of today.
They had drawn up a prospectus for a publication that featured condensed rewrites of information that appeared in daily newspapers—mainly the New York Times. The prospectus read: “TIME collects all available information on all subjects of importance. The essence of all this information is reduced to approximately 100 short articles, none of which are over 400 words. No article will be written to prove any special case.” For capital, the two twenty-four-year-old men had turned to the wealthy families of their Yale friends. The mother of one classmate wrote out a check for twenty thousand dollars, though uncertain the venture would succeed; before her death, the investment had appreciated to more than a million dollars.
From a foot-high stack of newspapers, the two men and their small staff produced the first issue of Time, which appeared in March 1923. Its thirty-two pages contained more than two hundred concise rehashed “items,” as Hadden dubbed the separate pieces, ranging in length from a mere three lines to one hundred lines. The cover portrait was a simple charcoal sketch of a recently retired congressman, Joseph Gurney Cannon. That first issue, it was written, met with “a burst of total apathy on the part of the U.S. press and public.” And when Hadden and Luce asked a prominent figure for advice on their first issue, he answered, “Let the first be the last.”
Undaunted, the two young editors revamped their magazine and its cover, introducing around the cover portrait a red margin which would become a Time trademark. But most important, they hired a staff to perform its own original reporting and writing. And the magazine acquired a reputation for its own variously acerbic, supercilious, and humorous writing style. Time’s writers coined words, committed puns, inverted syntax, and interjected tropes, epithets, and esoteric terms into nearly every paragraph.
These eccentricities came to be called the “Time style.” Many of the magazine’s phrases entered the American vernacular: “Tycoon,” a phonetic spelling of the Japanese taikun, meaning “mighty lord,” previously used only infrequently in English, gained immense popularity. To a lesser degree, the magazine familiarized Americans with “pundit,” from the Hindi pandit, meaning “learned one,” and “kudos” (from the Greek kydos, meaning “glory”), employed by Time editors mainly to refer to honorary degrees. Forgotten Time neologisms include tobacconalia, improperganda, and radiorating.
The magazine that had been the dream of two college boys became a national, then an international, phenomenon.
When Hadden died in 1929 of a streptococcus infection—a week before the sixth anniversary of Time’s founding and not long after his thirty-first birthday—he left stock worth over a million dollars. In a boxed announcement on the first page of the next issue, a brokenhearted Henry Luce wrote of his college friend in the succinct, convoluted prose that had become the magazine’s hallmark: “Creation of his genius and heir to his qualities, Time attempts neither biography nor eulogy of Briton Hadden.”
The organization founded by Henry Luce would go on to fill home magazine racks with a selection of periodicals that made publishing history.
Fortune appeared in February 1930. The small business section of Time could not accommodate the wealth of material its staff produced weekly, and in 1928, Henry Luce suggested that the company launch a periodical of restricted circulation to use Time’s business pages’ surplus.
Christened Fortune by Luce, the new magazine was a success from the start. As it grew in size, readership, and scope, its founder prided himself on the periodical’s record for accuracy amidst the torrent of facts and figures it regularly published. The magazine’s editors were so confident of that accuracy that in May 1937 they offered readers five dollars for every factual error they could find in Fortune’s pages. Not many readers nibbled at the five-dollar bait. But when the amount was doubled, nearly a thousand letters poured in. The editors conceded to two “major” errors, twenty-three “minor” ones, and forty discrepancies they labeled “small points.” They paid out four thousand dollars, then withdrew the offer because of the time involved in reading, checking, and answering readers’ allegations.
Following Time (1923), Fortune (1930), and Life (1936), the Luce publishing empire again made magazine history in 1954, with Sports Illustrated, and in 1974, with People. These two periodicals transformed the hackneyed journalism of the traditional sports and fan magazines into a new level of quality and popularity. In sifting through a magazine rack today, it is hard not to come upon a publication that owes its existence to the company started by two college classmates and lifelong friends, Briton Hadden and Henry Luce.
Newsweek: 1933, New York
The Depression year of 1933 was a bleak period for starting a news weekly, especially since Time seemed to have captured that particular audience. Nonetheless, Newsweek was launched that year, in which one American worker in four was unemployed, when businesses were failing at the rate of 230 a day, and when newspapers were called “Hoover blankets” and were valued as much for their warmth as for their information.
Newsweek was founded by a disgruntled Time staffer. Thomas Martyn, an Englishman, had been hired by Hadden and Luce as Time’s first foreign news editor, under the mistaken notion that he was an experienced writer on world affairs. After gaining his first professional writing experience at Time, Martyn moved on to the New York Times, then he quit the newspaper to draw up a prospectus for his own news weekly.
Thomas Martyn gave both personal and professional reasons for starting his own periodical: He wished to “run Henry Luce out of business,” but he also firmly believed he could produce a better magazine. As he once wrote: “I think there’s room for another news magazine that isn’t quite as acid, that does a more thorough job of reporting, that can dig out the facts behind the news and give the news more meaning.”
Armed with an editorial staff of twenty-two, and a suitcase full of newspaper clippings to serve as basic source material, Martyn published the first issue of News-Week on February 17, 1933, for ten cents a copy. The magazine’s multi-image cover, containing pictures of seven important news events, one for each day of the week, thoroughly confused newsstand customers. Even when Martyn switched to a single cover photograph for each issue, sales did not significantly increase.
Surviving four years of severe financial hardships, in 1937 the magazine modified its name to Newsweek. And abandoning the digest-like format, the editors announced a “three-dimensional editorial formula” consisting of the breaking news itself, a background perspective on the news story, and an interpretation of its significance.
To accomplish this comprehensive approach to news, the magazine established its own information-gathering network of correspondents and bureaus. And that same year, Newsweek began the tradition (later followed by Time) of clearly separating fact from opinion by having writers sign columns of commentary so they would not be confused with neutral reporting. The new approach paid off, and by 1968, though Newsweek had certainly not “run Henry Luce out of business,” it had topped its major rival, Time, in advertising pages, establishing a trend that would continue into the 1980s.
Marbles: 3000 B.C., Egypt
In his 1560 masterpiece Children’s Games, Flemish painter Pieter Brueghel the Elder depicts children of his era at play: spinning hoops, patting mud pies, tossing jacks, dressing dolls, teetering on stilts, and shooting marbles—some eighty activities in all. The painting makes clear that many games played today were enjoyed by children five hundred years ago. And several of those games, one being marbles, were part of the daily play of Egyptian children 4,500 years earlier.
With marbles, as with many ancient games, it is important to differentiate between adults’ divination and children’s diversion. For many toys, as we’ll see, originated to augur the fortunes of kings and tribes, and only through disuse were bequeathed to youngsters.
Marbles, in the form of the knucklebones of dogs and sheep, existed in the Near East as auguries more than a thousand years before they became toys. Archaeologists have deduced the transformation from religious article to toy based in part on where ancient marbles were unearthed: among the ruins of a temple or in a child’s tomb. Thus, the oldest game marbles are taken to be a set of rounded semiprecious stones buried with an Egyptian child around 3000 B.C. in a gravesite at Nagada.
On the Greek island of Crete, Minoan youths played with highly polished marbles of jasper and agate as early as 1435 B.C. And it is the Greeks, from their term for a polished white agate, marmaros, who gave us the word “marble.”
In the ancient world, a marble’s composition often reflected the economic and technological state of a culture. For the advanced and cultured Minoans, marbles of semiprecious stone were standard, whereas ordinary stone and pellets of clay formed the marbles of the austere-living inhabitants of the British Isles (even among the ruling class); even more primitive peoples used olives, hazelnuts, chestnuts, and rounded galls from the oak tree. Rustic as many Celtic, Saxon, and African tribes were, their children did not want for marbles. The game developed independently in virtually every ancient culture.
Marbles: A children’s game and an adult augury that existed in every culture.
Marbles was a popular game among Roman children. The first Roman emperor, Caesar Augustus, would descend from his litter in order to join street children shooting marble pebbles and galls. Even clear glass marbles, fused from silica and ash, were manufactured in ancient Rome. Despite the numerous marble artifacts obtained from ruins and the many references to the sport in extant texts, rules on how the game was played do not exist.
Tops: 3000 B.C., Babylonia
It would take modern minds schooled in the kinematics of rotation to understand the complex forces that combine to keep a top spinning upright. But it did not require a probing mind, or a knowledge of mechanics, to discover that a conical object, given a twist, executed a fascinating blur of motion. Clay tops, their sides etched with animal and human forms, were spun by Babylonian children as early as 3000 B.C. The unearthed artifacts appear to have been toys, since they were discovered in children’s gravesites alongside sets of marbles.
Medieval German top; the top as a mechanical motor.
A decorated top is more interesting to watch as its rotation slows and its images become discernible, and the earliest known toy tops were all scored or painted with designs. The ancient Japanese painted tops intricately, and they were the first to create holes around the circumference of the clay toys to produce tops that hummed and whistled.
Hula Hoop: 1000 B.C., Near East
In 1958, a hula hoop craze swept the United States. Hoops of brightly colored plastic, placed round the waist, were fiercely rotated by wriggling the hips. Stores sold out stock as quickly as it arrived. Within six months, Americans bought twenty million hula hoops, at $1.98 apiece. And doctors treated young and old alike for back and neck injuries and warned of greater dangers.
The hula hoop game, though, was not new; nor were the medical warnings. Both originated centuries ago.
Children in ancient Egypt, and later in Greece and Rome, made hoops from dried and stripped grapevines. The circular toys were rolled on end, propelled along by a rod, tossed into the air and caught around the body, and swung round the waist. In an ancient British game, “kill the hoop,” the center of the rolling toy became the target for hurled darts. South American cultures devised play hoops from the sugarcane plant.
Historians of children’s games record a “hula” hoop craze that swept England during the fourteenth-century Edwardian era. Children and adults twirled hoops made of wood or metal round their waists, and physicians treated the accompanying aches, pains, and dislocated backs. As with the modern craze, many adult deaths by heart failure were attributed to excessive hoop twirling, and the British medical profession warned that “hoops kill,” a macabre reversal of “kill the hoop.”
The name “hula” did not become associated with the game until the 1700s. Then hula was a sensuous, mimetic Hawaiian dance, performed sitting or standing, with undulating hip gestures. Originally a religious dance, performed to promote fecundity, honor Hawaiian gods, and praise the tribal chief, the hula, with its explicit sensuality—accented by bare-breasted female dancers in short pa’us skirts and men in briefer malos loincloths—shocked missionaries from Britain and New England. They discouraged men from dancing the hula and compelled native women to replace the skimpy pa’us with the long grass holokus. The dance’s hip gyrations so perfectly matched the motions required to rotate a toy hoop that “hula” became the name of the game.
Yo-Yo: 1000 B.C., China
In the sixteenth century, hunters in the Philippines devised a killer yo-yo of large wood disks and sturdy twine. The weapon was hurled, and its twine ensnared an animal by the legs and tripped it to the ground for an easy kill. The yo-yo was a hunter’s aid, similar to the Australian boomerang, in that both devices were intended to incapacitate prey at a distance, and its name was a word from Tagalog, an Austronesian language and the chief native tongue of the Philippine people. The yo-yo was no toy.
In the 1920s, an enterprising American named Donald Duncan witnessed the Philippine yo-yo in action. Scaling down the size of the weapon, he transformed it into a child’s toy, retaining the Tagalog name. But Duncan’s yo-yo was not the first double-disk-and-twine game.
Yo-yo-like toys originated in China about 1000 B.C. The Oriental version consisted of two disks sculpted from ivory, with a silk cord wound around their connecting central peg. The Chinese toys eventually spread to Europe, where in England the plaything was known as a “quiz,” while in France it was a “bandalore.” These European yo-yos were richly decorated with jewels and painted with geometric patterns that created mesmerizing blurs as the toys bobbed.
Kite: 1200 B.C., China
Kites originated in China as military signaling devices. Around 1200 B.C., a Chinese kite’s color, its painted pattern, and particularly the air movements it was forced to execute communicated coded messages between camps. The ancient Chinese became so proficient in constructing huge, lightweight kites that they attempted, with marginal success, to employ them as one-man aircraft. The flier, spread-eagled upon the upper surface of a bamboo-and-paper construction, held hand grips and hoped for a strong and steady wind.
Ancient Chinese silk prints and woodcuts show children flying small kites of ingenious design, whose variety of weighted tails indicates that the aerodynamic importance of tails was appreciated early in kite construction. From China, kites traveled to India, then to Europe, and in each new land their initial application was in military communications, where they complemented older signaling devices such as hillside beacon fires and coded bursts of smoke.
By the twelfth century, European children were flying “singing” kites, which whistled by means of small holes in the kite’s body and the use of multiple vibrating cords. Kites carrying atmospheric measuring equipment played a vital role in the science of meteorology, and knowledge gleaned from centuries of kite construction helped establish the field of aerodynamics. Today the kite survives in all cultures as a toy.
Frisbee: Pre-1957, Connecticut
It was the Frisbie Pie Company of Bridgeport, Connecticut, whose name—and lightweight pie tins—gave birth to the modern Frisbee.
In the 1870s, New England confectioner William Russell Frisbie opened a bakery that carried a line of homemade pies in circular tin pans embossed with the family surname. Bridgeport historians do not know if children in Frisbie’s day tossed empty tins for amusement, but sailing the pans did become a popular diversion among students at Yale University in the mid-1940s. The school’s New Haven campus was not far from the Bridgeport pie factory, which served stores throughout the region. The campus fad might have died out had it not been for a Californian, Walter Frederick Morrison, with an interest in flying saucers.
The son of the inventor of the sealed-beam automobile headlight, Morrison was intrigued with the possibility of alien visits from outer space, a topic that in the ’50s captured the minds of Hollywood film makers and the American public. Hoping to capitalize on America’s UFO mania, Morrison devised a lightweight metal toy disk (which he’d later construct of plastic) that in shape and airborne movements mimicked the flying saucers on movie screens across the country. He teamed up with the Wham-O Company of San Gabriel, California, and on January 13, 1957, the first toy “Flyin’ Saucers” debuted in selected West Coast stores.
Within a year, UFOs in plastic were already something of a hazard on California beaches. But the items remained largely a Southern California phenomenon.
To increase sales, Wham-O’s president, Richard Knerr, undertook a promotional tour of Eastern college campuses, distributing free plastic UFOs. To his astonishment, he discovered students at two Ivy League schools, Yale and Harvard, playing a lawn game that involved tossing metal pie tins. They called the disks “Frisbies” and the relaxation “Frisbie-ing.” The name appealed to Knerr, and unaware of the existence of the Frisbie Pie Company, he trademarked the word “Frisbee” in 1959. And from the original pie tin in the sky, a national craze was launched.
Frisbee originated on an American college campus in the 1940s, but it had a historical antecedent.
Rattles: 1360 B.C., Egypt
Dried gourds and clay balls stuffed with pebbles were people’s earliest rattles, which were used not for play but to frighten off evil spirits. Rattling was invoked by tribal priests at the time of a birth, sickness, or death, transitions in which early peoples believed they were especially susceptible to evil intrusions. Societies that bordered the sea constructed religious rattles from bivalve shells filled with pebbles.
The first rattles designed for children’s amusement appeared in Egypt near the beginning of the New Kingdom, around 1360 B.C., and several are displayed in London’s Horniman Museum. Many were discovered in children’s tombs; all bear indications that they were intended to be shaken by children: Shaped like birds, pigs, and bears, the clay rattles are protectively covered in silk, and they have no sharp protuberances. A pig’s ears, for instance, were always close to the head, while birds never had feet, legs, or pointed beaks. Many rattles are painted or glazed sky blue, a color that held magical significance for the Egyptians.
Today, in tribal Africa, rattles are still made from dried seed pods. And their multiple uses are ancient: making music, frightening demons, and amusing children.
Teddy Bear: 1902, United States
Despite the popularity today of toy bears with names such as Bear Mitzvah, Lauren Bearcall, and Humphrey Beargart, the classic bear is still the one named Teddy, who derived his moniker from America’s twenty-sixth President.
In 1902, an issue of the Washington Star carried a cartoon of President Theodore Roosevelt. Roosevelt, drawn by Clifford Berryman, stood rifle in hand, with his back turned to a cowering bear cub; the caption read: “Drawing the line in Mississippi.” The reference was to a trip Roosevelt had recently taken to the South in the hope of resolving a border dispute between Louisiana and Mississippi.
For recreation during that trip, Roosevelt had engaged in a hunting expedition sponsored by his Southern hosts. Wishing the President to return home with a trophy, they trapped a bear cub for him to kill, but Roosevelt refused to fire. Berryman’s cartoon capturing the incident received nationwide publicity, and it inspired a thirty-two-year-old Russian-immigrant toy salesman from Brooklyn, Morris Michtom, to make a stuffed bear cub. Michtom placed the cub and the cartoon in his toy-store window. Intended as an attention-getting display, the stuffed bear brought in customers eager to purchase their own “Teddy’s Bear.” Michtom began manufacturing stuffed bears with button eyes under the name Teddy’s Bear, and in 1903 he formed the Ideal Toy Company.
The American claim to the creation of the Teddy Bear is well documented. But German toy manufacturer Margaret Steiff also began producing stuffed bear cubs, shortly after Morris Michtom. Steiff, who at the time already headed a prosperous toy company, claimed throughout her life to have originated the Teddy Bear.
Margaret Steiff, who would become a respected name in the stuffed-toy industry, was a polio victim, confined to a wheelchair. In the 1880s in her native Germany, she began hand-sewing felt animals. As German toy manufacturers tell it, shortly after the Clifford Berryman cartoon appeared, an American visitor to the Steiff factory showed Margaret Steiff the illustration and suggested she create a plush toy bear. She did. And when the bears made their debut at the 1904 Leipzig Fair, her firm was overwhelmed with orders. It seems that the Teddy Bear was an independent American and German creation, with the American cub arriving on the toy scene about a year earlier.
President Theodore Roosevelt inspired the teddy bear, which became the most popular children’s toy of the 1910s.
The stuffed bear became the most popular toy of the day. During the first decade of this century, European and American manufacturers produced a variety of toy bears, which ranged in price from ninety-eight cents to twelve dollars, and factories supplied them with sweaters, jackets, and overcoats. For a while, it appeared that dolls were about to become obsolete.
Crossword Puzzle: 1913, New York
The concept of the crossword puzzle is so straightforwardly simple that it is hard to believe the puzzles were not invented prior to this century, and that “crossword” did not enter American dictionaries as a legitimate word until 1930.
The crossword puzzle was the brainchild of an English-born American journalist. In 1913, Arthur Wynne worked on the entertainment supplement, Fun, of the Sunday edition of the New York World. One day in early December, pressured to devise a new game feature, he recalled a Victorian-era word puzzle, Magic Square, which his grandfather had taught him.
Magic Square was a child’s game, frequently printed in nineteenth-century British puzzle books and American penny periodicals. It consisted of a group of given words that had to be arranged so the letters read alike vertically and horizontally. It exhibited none of the intricate word criss-crossings and blackened squares that Wynne built into his game. And where Magic Square gave a player the words to work with, Wynne created a list of Down and Across “clues,” challenging the player to deduce the appropriate words.
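For readers who want the rule stated precisely, a minimal Python sketch of the “reads alike vertically and horizontally” property follows; the function name and the old Latin sator square used to test it are illustrative choices, not details from Wynne’s column or the Victorian game.

```python
def reads_alike(words):
    """Return True if equal-length words form a word square, that is,
    the letters spell the same words across the rows and down the
    columns, which was the goal of the Magic Square pastime."""
    n = len(words)
    if any(len(word) != n for word in words):
        return False
    return all(words[r][c] == words[c][r] for r in range(n) for c in range(n))

# The Latin sator square satisfies the rule; a scrambled list does not.
print(reads_alike(["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]))  # True
print(reads_alike(["FUN", "USE", "TEN"]))                          # False
```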
In the December 21 edition of the World, American readers were confronted with the world’s first crossword puzzle. The Sunday feature was not billed as a new invention, but was only one of a varied group of the supplement’s “mental exercises.” And compared to the taxing standards of today’s crossword puzzles, Wynne’s was trivially simple, containing only well-known words suggested by straightforward clues. Nonetheless, the game struck the public’s fancy.
Within months, Wynne’s “mental exercise” was appearing in other newspapers, and by the early 1920s, every major U.S. paper featured its own crossword puzzle. The publishing firm of Simon & Schuster released the first book of crossword puzzles, and in 1924, crossword books held the top four positions on the national best-seller list. Booksellers nationwide experienced an unexpected bonus: dictionaries were selling at a faster rate than at any previous time in history.
In 1925, Britain succumbed to crossword mania, with one publication observing that “the puzzle fad becomes a well-entrenched habit.” Soon the puzzles began to appear in almost every language except those, like Chinese, that do not lend themselves to a letter-by-letter vertical and horizontal word construction. Crossword puzzles were such an international phenomenon by the early ’30s that women’s dresses, shoes, handbags, and jewelry were patterned with crossword motifs. While other games have come and gone, crossword puzzles have continued to become more and more challenging. Regularly enjoyed by more than fifty million Americans today, the crossword puzzle is rated as the most popular indoor game in the country.
Board Games: 3000 B.C., Mesopotamia
In 1920, British archaeologist Sir Leonard Woolley discovered among the ruins of the ancient Mesopotamian city of Ur a gaming board considered to be the oldest in the world. Each player had seven marked pieces, and moves were controlled by the toss of six pyramidal dice, each with two of its four corners tipped with inlay. Three dice were white, three lapis lazuli. Though the game’s rules are unknown, the board is on display at the British Museum, and its markings suggest it was played like backgammon.
Vying for the record as oldest board game is senet, the most popular game in Egypt some 4,300 years ago. Played by peasants, artisans, and pharaohs, the game consisted of a race across a papyrus playing board, with each player moving five ivory or stone pieces. The game was such a popular pastime that it was placed in the tombs of pharaohs; Tutankhamen’s game of senet was discovered when his tomb was opened in the 1920s.
Board games began as a form of divination, with a scored board and its marked pieces the equipment of sages and soothsayers. The historical crossover point from religion to recreation is unknown for many games. But as late as 1895, when the French army attacked the capital of Madagascar, the island’s queen and her advisers turned for a prophetic glimpse of the battle’s outcome to the ancient board game of fanorona, a relative of checkers. The advances, retreats, and captures of the game’s white and black pieces represented divine strategy, which was followed even in the face of imminent defeat.
Chess. One of the oldest board games to survive to the present day, chess was thought to have been devised by a Hindu living in northwest India in the late fifth century A.D., or by the ancient Persians, since they played a similar game at that time and since the expression “checkmate” derives from shah mat, a Persian phrase that passed into Arabic, meaning “the king is dead.”
Recently, however, the discovery in the Soviet Union of two ivory chessmen dating to the second century A.D. preempts the Indian and Persian claims.
In the eleventh century, Spain became the first European country introduced to chess, and through the travels of the Crusaders the game became a favorite of the cultured classes throughout Europe.
Checkers. The game of checkers began in Egypt as a form of wartime prognostication about 2000 B.C. and was known as alquerque. There were “enemy” pieces, “hostile” moves, and “captures.” Examples of the game have been found in Egyptian tombs, and they, along with wall paintings, reveal that alquerque was a two-player game, with each player moving as many as a dozen pieces across a checkered matrix. Adopted and modified slightly by the Greeks and the Romans, checkers became a game for aristocrats.
Parcheesi: 1570s, India
The third all-time top-selling board game in America, Parcheesi originated in sixteenth-century India as the royal game—a male chauvinist’s delight.
The game’s original “board” was the royal courtyard of Mogul emperor Akbar the Great, who ruled India from 1556 to 1605. The game’s pawns, moving in accordance with a roll of the emperor’s dice, were India’s most beautiful young women, who stepped from one marked locale to another among the garden’s lush flowering shrubs.
The dice were cowries, brightly colored, glossy mollusk shells, which once served as currency. A shell landing with its opening upward counted as one step for a pawn. The country’s most exquisite women vied for the honor of being pieces in the emperor’s amusement of pacisi, Hindi for “twenty-five,” the number of cowrie shells tossed in a roll.
During the Victorian era, the Indian entertainment was converted into a British board game, Pachisi. Its scallop-shaped path, traversed by ivory pawns, was a replica of the footpaths in Akbar’s garden. In America, as Parcheesi, the game became a favorite of such figures as Calvin Coolidge, Thomas Edison, and Clara Bow, and it was trademarked in 1894 by the firm of Selchow & Righter, which would later manufacture Scrabble. The board’s center, marked “Home,” a pawn’s ultimate goal, originally was Akbar’s ornate garden throne. One of the Indian pacisi gardens survives today at the palace in Agra.
Monopoly: 1933, Pennsylvania
Two of the most enduring modern board games—one known technically as a “career” game, the other as a “word” game—are, respectively, Monopoly and Scrabble. Both entertainments were conceived in the Depression years of the early 1930s, not as a means of making a fortune but merely to occupy their creators’ days of unemployment and discontent.
In reaction to the poverty of the Great Depression, Charles B. Darrow, an unemployed engineer from Germantown, Pennsylvania, created the high-stakes, buying-and-selling real estate game of Monopoly.
Financially strapped and emotionally depressed, Darrow spent hours at home devising gaming board amusements to occupy himself. The real-life scarcity of cash made easy money a key feature of his pastimes, and the business bankruptcies and property foreclosures carried daily in newspapers suggested play “deeds,” “hotels,” and “homes” that could be won—and lost—with the whimsy of a dice toss. One day in 1933, the elements of easy money and ephemeral ownership congealed as Darrow recalled a vacation, taken during better times, in Atlantic City, New Jersey. The resort’s streets, north to Baltic and south to Pacific avenues, became game board squares, as did prime real estate along the Boardwalk, on Park Place, and in Marvin Gardens.
Darrow’s friends and family so enjoyed playing the homemade entertainment that in 1934 they persuaded him to approach the Massachusetts game firm of Parker Brothers. Company executives test-played Monopoly, then unanimously rejected it on the grounds that the concept was dull, the action slow-paced, and the rules hopelessly complex.
Darrow persevered. And at Wanamaker’s department store in Philadelphia, he found an executive who not only enjoyed playing the game but offered to stock it in the store. With loans from family and friends, Darrow had five thousand Monopoly games manufactured and delivered to Wanamaker’s. When Parker Brothers discovered that Monopoly sets were selling swiftly, they replayed the game and found that it was imaginative, fast-paced, and surprisingly easy to master. The game was copyrighted in 1935, and soon the company’s plant was turning out twenty thousand Monopoly sets a week.
However, top company executives still harbored reservations. They believed the game was strictly for the adult market and merely a fad, which would not last more than three years. In December 1936, convinced that the game’s popularity had run its course, George Parker, the company president, ordered the manufacturing plant to “cease absolutely to make any more boards or utensil boxes. We will stop making Monopoly against the possibility of a very early slump.”
The slump, of course, never came. And the unemployed Charles Darrow became a millionaire from royalties as his game gained popularity in twenty-eight countries and nineteen languages. There was evidence that the capitalist board game was even played in the Soviet Union: Six Monopoly sets displayed at the American National Exhibition in Moscow in 1959 all mysteriously disappeared. Today Monopoly is one of the two longest- and best-selling board games of this century, the other being Scrabble.
Scrabble: 1931, New England
Like Monopoly’s inventor, Charles Darrow, the man who conceived Scrabble, Alfred Butts, was left unemployed by the Depression. Unlike Darrow, who translated poverty into a game of fantasy fortune, Butts amused himself at home with pure escapism, translating the national mania for crossword puzzles into a challenging board game that, not surprisingly, he named Criss Cross.
As conceived in 1931, Criss Cross consisted of a hundred wooden tiles, each painted with a letter of the alphabet. But the game’s final rules, and each letter’s point value, based on its frequency of use, took Butts almost a decade to refine.
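As a rough illustration of that frequency principle (not Butts’s actual method, his source text, or his final values), the short Python sketch below counts how often each letter appears in a sample of English and awards scarcer letters more points; the sample sentences, the ten-point cap, and the function name are assumptions made purely for the example.

```python
from collections import Counter

def rough_letter_values(sample_text, max_points=10):
    """Illustrative only: score each letter higher the rarer it is in the
    sample, the general idea behind weighting Q and Z far above E and A."""
    letters = [ch for ch in sample_text.upper() if ch.isalpha()]
    counts = Counter(letters)
    commonest = counts.most_common(1)[0][1]
    return {letter: max(1, min(max_points, round(commonest / count)))
            for letter, count in counts.items()}

sample = ("the quick brown fox jumps over the lazy dog "
          "pack my box with five dozen liquor jugs ") * 20
values = rough_letter_values(sample)
print(values["E"], values["Q"], values["Z"])  # common letters score low, rare ones high
```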
Alfred Butts was in no hurry. For Criss Cross was strictly a home entertainment for his family and friends. It was one friend, James Brunot, from Newton, Connecticut, who in 1948 convinced Butts of the game’s commercial potential and persuaded him to copyright it as Scrabble.
Scrabble, in a test playing, interested the game-manufacturing firm of Selchow & Righter, who had already scored a best-seller with Parcheesi. Echoing Parker Brothers’ belief that Monopoly would be a short-lived fad, Selchow & Righter were convinced that Scrabble, a faddish spin-off of crossword puzzles, would sell for no more than two years. Instead, it became the second all-time top-selling board game in America (between Monopoly and Parcheesi), was translated into more than half a dozen languages and issued in a Braille version for the blind, and continues to sell strongly today.
Silly Putty: 1940s, Connecticut
In the early 1940s, the U.S. War Production Board sought an inexpensive substitute for synthetic rubber. It would be used in the mass production of jeep and airplane tires, gas masks, and a wide variety of military gear. The board approached General Electric, and a company engineer, James Wright, was assigned to investigate the possibility of chemically synthesizing a cheaper, all-purpose rubber.
Working with boric acid and silicone oil, Wright succeeded in creating a rubber-like compound with highly unusual properties. The pliant goo stretched farther than rubber, rebounded 25 percent more than the best rubber ball, was impervious to molds and decay, and withstood a wide range of temperatures without decomposing. And it possessed the novel property, when flattened across newspaper print or a comic book image, of lifting the ink onto itself.
Unfortunately, Wright’s substance had no real industrial advantages over synthetic rubber, and it became an in-house curiosity at General Electric’s laboratory in New Haven, Connecticut. Dubbed “nutty putty,” it was demonstrated to visitors, and in 1945 the company mailed samples to several of the world’s leading engineers, challenging them to devise a practical use for the strange-behaving substance.
No scientist succeeded. Rather, it took a former advertising copywriter, Paul Hodgson, operating a New Haven toy store, to realize that the putty had a future not as an industrial marvel but as a marvelous toy.
Hodgson, who had recently moved from Montreal, had the good fortune to be at a New Haven party where a wad of nutty putty was demonstrated; it kept a group of adults amused for hours. Entering into an agreement with General Electric, Hodgson bought a large mass of the stuff for $147 and hired a Yale student to separate it into one-ounce balls, to be marketed inside colored plastic eggs. That year, 1949, Silly Putty outsold every other item in Hodgson’s toy store. And once mass-produced, it became an overnight novelty sensation, racking up sales during the ’50s and ’60s of over six million dollars a year.
Americans wrote to the manufacturer with their own uses for the substance: it collected cat fur and lint, cleaned ink and ribbon fiber from typewriter keys, lifted dirt from car seats, and, placed under a leg, stabilized teetering furniture. Though the list was endless, no one, then or since, has discovered a really practical application for the unsuccessful rubber substitute.
Slinky: Mid-1940s, United States
Just as Silly Putty was a failed wartime effort to develop an inexpensive rubber, Slinky, the spring that descends steps with grace, elegance, and stealth, was an engineer’s failed attempt to produce an antivibration device for ship instruments.
In the early 1940s, marine engineer Richard James was experimenting with various kinds of delicate, fast-responding springs. His goal was to develop a spring that would instantaneously counterbalance the wave motion that rocks a ship at sea. A set of such springs, strategically placed around a sensitive nautical instrument, would keep its needle gauges unaffected by pitching and yawing. In attempting to improve on existing antivibration devices, Richard James stumbled upon a fascinating toy.
One day in his home laboratory, James accidentally knocked an experimental spring off a shelf. It did not fall summarily to the floor, but literally crawled, coil by coil, to a lower shelf, onto a stack of books, down to the tabletop, and finally came to rest, upright, on the floor. A quick experiment revealed that the spring was particularly adept at descending stairs. It was Richard James’s wife, Betty, who realized that her husband’s invention should be a toy. After two days of thumbing through a dictionary, she settled on what she felt was the best adjective in the English language to describe the spring’s snake-like motion: slinky.
Betty James still runs the company she founded with her husband in 1946 to market Slinkys. And in an unusual reversal of roles, Slinky the toy has been put to practical uses. Carried by communications soldiers in Vietnam, Slinky was tossed over a high tree branch as a makeshift radio antenna. Slinky was incorporated into a spring device used to pick pecans from trees. And Slinky has gone aloft in the space shuttle to test the effects of zero gravity on the physical laws that govern the mechanics of springs. In space, Slinky behaves like neither a spring nor a toy but as a continuously propagating wave.
Toys That Glow in the Dark: 1603, Italy
Various toy amulets, as well as religious artifacts, are made of a milky white plastic that, after exposure to light, glows a greenish white in darkness. That magical property was first produced, fittingly, by a seventeenth-century alchemist in a quest to transform base metals into gold.
Vincenzo Cascariolo was a cobbler in Bologna, Italy. Experimenting in the centuries-old tradition of alchemy, he sought the “philosopher’s stone” to transmute relatively worthless metals such as iron and copper into silver or gold. In 1603, Cascariolo combined barium sulfate with powdered coal, heated the mixture, spread it over an iron bar, and let the coating cool.
To his disappointment, the iron did not become gold. But when Cascariolo placed the coated bar on a darkened shelf for storage, he was astonished by its sudden glow. Though the light eventually faded, Cascariolo learned that repeated exposure to sun “reanimated” the bar. The alchemist believed that he had stumbled upon a means of capturing the sun’s golden rays; and his chemicals did briefly store a form of solar energy. He hailed his discovery as the first step in producing a philosopher’s stone.
Throughout Italy, his compound became known as lapis solaris, or “sun stone,” and it was a great novelty, particularly with the clergy. Crucifixes, miniature icons of saints, and rosary beads were painted, varnished, and compounded with lapis solaris, to imbue them with eerie halos. The belief developed that prayers recited in the presence of a glowing amulet were more readily answered. And the market for objects that glowed in the dark expanded throughout Christian countries. The alchemist had not succeeded in transmuting iron to gold, but he had spawned a gold mine in religious artifacts that would only lose their mysterious aura centuries later, when physicists explained how molecules absorb and re-radiate light through the process of phosphorescence.
Roller Skates: 1759, Belgium
The first practical pair of roller skates, called skaites, was built by a Belgian musical instrument maker, Joseph Merlin, in 1759. Each skate had only two wheels, aligned along the center of the shoe, and Merlin constructed the skates in order to make a spectacular entrance at a costume party in the Belgian city of Huy. The crude design, which strapped to the feet, was based on the ice skates of Merlin’s day.
A master violinist, Merlin intended to roll into the party while playing his violin. Unfortunately, he had neglected to master the fine art of stopping on skates, and he crashed into a full-length mirror, breaking it and his violin; his entrance was indeed spectacular. Merlin’s accident underscored the technological drawback of all early “wheeled feet”: starting and stopping were not so much decisions of the skater as of the skates. The crude wheels, without ball bearings, resisted turning, then abruptly turned and resisted stopping, then jammed to a halt on their own.
When, in the 1850s, skate technology improved, roller skating began to compete in popularity with ice skating, though marginally at first. German composer Jakob Liebmann Beer, who achieved fame as Giacomo Meyerbeer, wrote a mid-1800s opera, Le Prophète, which contained an ice-skating scene that was performed on the improved roller skates. The opera was a great success in its own right, but many people attended to witness the much-publicized roller-skating scene. And an Italian ballet of the period, Winter Pastimes; or, The Skaters, choreographed and composed by Paul Taglioni, also became famous for its ice-skating episode executed on roller skates.
Interestingly, during these decades, roller skates were seldom depicted on stage as an entertainment themselves, but mimicked ice skating. Part of the reason was that until 1884, when ball-bearing wheels were introduced, roller skating was difficult, dangerous, and not a widely popular pastime.
Piggy Bank: 18th Century, England
Since dogs bury bones for a rainy day, and since they have been man’s best friend for fourteen thousand years, why not a dog-shaped bank for coins? Since horses were indispensable to the development of commerce and finance, why not a horse bank? On the other hand, squirrels are well-known hoarders, and we talk of “squirreling away” valuables; why not a bank in the shape of a squirrel?
Instead, for almost three hundred years, the predominant child’s bank has been a pig with a slot in its back. Pigs are not known for their parsimony. A proverb warns of the futility of attempting to make a silk purse from a sow’s ear. And Scripture admonishes against throwing pearls to swine—as exemplified by dropping hard-earned cash into a piggy bank.
How did a pig come to symbolize the act of saving money? The answer is: by coincidence.
During the Middle Ages, mined metal was scarce, expensive, and thus rarely used in the manufacture of household utensils. More abundant and economical throughout Western Europe was a type of dense, orange clay known as pygg. It was used in making dishes, cups, pots, and jars, and the earthenware items were referred to as pygg.
Frugal people then as now saved cash in kitchen pots and jars. A “pygg jar” was not yet shaped like a pig. But the name persisted as the clay was forgotten. By the eighteenth century in England, pygg jar had become pig jar, or pig bank. Potters, not usually etymologists, simply cast the bank in the shape of its common, everyday name.
Firecrackers: 10th Century, China
Sparklers, flares, and full-fledged fireworks originated in tenth-century China, when a cook, toiling in a kitchen, mixed several ingredients and produced history’s first man-made explosion of sparks. It is often stated that the anonymous cook was attempting to produce a better gunpowder. But in fact, there was no such thing as gunpowder at that time. Moreover, it was the cook’s concoction—of sulfur, charcoal, and saltpeter—that served as the Chinese origin of fireworks and gunpowder.
Historians have not determined what dish the cook was attempting to prepare. But the three above-mentioned ingredients, explosive when combined, were commonplace in a Chinese kitchen. Saltpeter, or potassium nitrate, served as preserving and pickling salt; sulfur was used to intensify the heat of a fire; and as fuel, charred firewood and coal provided an abundant source of charcoal.
The Chinese soon discovered that if the explosive ingredients were packed into hollowed-out bamboo, the confined explosion rocketed skyward, to spectacular effect. The accompanying light and bang proved perfect for ceremoniously frightening off evil spirits, and for celebrating weddings, victories, eclipses of the moon, and the New Year. The Chinese called their early fireworks “arrows of flying fire.”
Though the Chinese had all the ingredients for gunpowder, they never employed the explosive for military purposes. That violent application fell, ironically, to a thirteenth-century German monk of the Franciscan order, Berthold Schwarz, who produced history’s first firearms.
The Chinese were more interested in using explosives for celebrations—and in attempting to fly. One inventor, Wan-hu, built a plane consisting of two kites, propelled by forty-two rocket-like fireworks, and seated himself in its center in a chair. Unfortunately, when the rockets were simultaneously ignited, the paper kites, the wooden chair, and the flesh of the inventor were reduced to the common ingredient ash.
For eight centuries firework displays were limited to shades of yellow and reddish amber.
By the beginning of the seventeenth century, European fireworks technicians could create elaborate flares that exploded into historic scenes and figures of famous people, a costly and lavish entertainment that was popular at the French royal palace at Versailles. For eight centuries, though, the colors of firework explosions were limited mainly to yellows and reddish amber. It was not until 1830 that chemists produced metallic zinc powders that yield a greenish-blue flare. Within the next decade, combinations of chemicals were discovered that gave star-like explosions in, first, pure white, then bright red, and later a pale whitish blue. The last and most challenging basic color to be added to the fireworks palette, in 1845, was a brilliant pure blue. By midcentury, all the colors we enjoy today had arrived.
Dolls: 40,000 Years Ago, Africa and Asia
Long before Mattel’s Barbie became the toy industry’s first “full-figure” doll in 1959, buxom female figurines, as fertility symbols, were the standard dolls of antiquity. And they were the predecessors of modern dolls. These figures with ample bosoms and distended, childbearing bellies were sculpted in clay some forty thousand years ago by Homo sapiens sapiens, the first modern humans.
As early man developed mythologies and created a pantheon of gods, male and female, dolls in wax, stone, iron, and bronze were sculpted in the likenesses of deities. In India, for instance, around 2900 B.C., miniatures of Brahma rode a goose; Shiva, a bull; and his wife, Durga, a tiger. At the same time, in Egypt, collections of dolls were boxed and buried with a high-ranking person; these ushabti dolls were imagined to be servants who would cater to the needs of the deceased in the afterlife.
The transition from dolls as idols to dolls as toys began when figurines came to represent ordinary human beings such as Egyptian servants. For while it would have been sacrilegious for a child in antiquity to play with a clay idol, it became acceptable when that figurine represented a mere mortal. These early toy dolls, which arose independently in the Near and Middle East and the Orient, never took the form of infants, as do modern dolls; rather, they were miniatures of adults.
Other features distinguished these original toy dolls. Whereas today’s infant doll is usually of indeterminate sex (the gender suggested by incidentals such as hair length or color of dress), the gender of an ancient doll was never ambiguous. In general, female dolls were voluptuous and buxom, while males were endowed with genitalia. It was thought natural that a human representation of an adult should be accurate in detail.
Both the Greeks and the Romans by 500 B.C. had toy dolls with movable limbs and human hair. Joints at the hips, shoulders, elbows, and knees were fastened with simple pins. Most Greek dolls of the period were female—to be played with by young girls. And although Roman craftsmen fashioned wax and clay dolls for boys, the figures were always of soldiers. Thus, at least 2,500 years ago, a fundamental behavioral distinction between the sexes was laid down. Several Greek gravestones exist with inscriptions in which Greek girls who had died in youth bequeathed their collections of dolls to friends.
The transition from adult dolls to infant dolls is not clearly documented.
Existing evidence suggests that the infant doll evolved in ancient Greece once craftsmen began to fashion “babies” that fitted into the arms of adult “mother” dolls. This practice existed in third-century B.C. Greece, and in time the infant doll’s popularity outgrew that of the adult doll. Modern psychological studies account for the transition: a female child, given the choice of playing with an adult or an infant doll, invariably selects the infant, viewing it as her “own baby,” seeing herself as “mother,” and thereby reenacting the early relationship with her own mother.
By the dawn of the Christian era, Greek and Roman children were playing with movable wooden dolls and painted clay dolls, dressing them in miniature clothes, and rearranging furniture in dollhouses.
Barbie Doll. The Barbie doll was inspired by, and named after, Barbara (“Barbie”) Handler, daughter of Ruth Handler, a toy manufacturer born in 1917 in Denver, Colorado. With her husband, Elliot, a designer of dollhouses, Ruth Handler founded the Mattel toy company in 1945.
American dolls then were all of the cherub-faced-infant variety. Mrs. Handler, observing that her daughter preferred to play with the more shapely teenage paper dolls, cutting out their wide variety of fashion clothes, decided to fill a void in toyland, and designed a full-figured adult doll with a wardrobe of modish outfits.
Popular dolls of the 1890s. Bisque China head and muslin body (left); jointed, dressed dolls (middle, top); wooden doll (bottom); basic jointed doll, undressed.
The Barbie doll, bowing in 1959, helped turn Mattel into one of the world’s largest toy manufacturers. And the doll’s phenomenal overnight success spawned a male counterpart in 1961: Ken, named after the Handlers’ son. The dolls became such a part of the contemporary American scene that in 1976, the year of the United States bicentenary, Barbie dolls were sealed into time capsules and buried, to be opened a hundred years thence as social memorabilia for the tricentenary.
China Doll. Traditionally, dolls’ heads were sculpted of wood, terra cotta, alabaster, or wax. In Europe in the 1820s, German Dresden dolls with porcelain heads and French bisque dolls with ceramic heads became the rage. The painted ceramic head had originated in China centuries earlier, and many manufacturers—as well as mothers and their young daughters—had observed and complained that the dolls’ exquisite ceramic faces occasionally were marred by brownish-black speckles. The source of these imperfections remained a confounding mystery until the early 1980s.
The solution unfolded when a sixteen-year-old British girl who made reproduction antique china dolls noticed that if she touched the dolls’ heads when painting them, black speckles appeared after the ceramic was fired. She took her problem to a doctor, who enlisted a team of scientific detectives. That the problem disappeared if the girl wore gloves suggested sweat from her hands as the source of the trouble.
X-ray fluorescence showed that the black speckles consisted not only of the normal body salts found in sweat but also of sulfides. The girl’s diet was scrupulously studied and found to contain small but regular quantities of garlic—in sauces, soups, and meat dishes. Garlic is high in sulfides. When she abstained from garlic, the problem ceased.
The British researchers further investigated the sweat from the girl’s hands. It contained sulfur metabolites of garlic, which in most people are broken down and excreted in urine. These metabolites were reacting with iron in the clay to produce the speckles. Medical studies revealed that the girl had a subtle, harmless metabolic deficiency, which would never have shown up had she done less unusual work. The researchers concluded that the cloudy speckles occasionally found on ceramic faces of antique dolls probably had a similar origin: A small percentage of humans do not sufficiently metabolize sulfides, and certain ceramic-doll makers literally left fingerprints of their deficiency.
In the Pantry
Potato Chip: 1853, Saratoga Springs, New York
As a world food, potatoes are second in human consumption only to rice. And as thin, salted, crisp chips, they are America’s favorite snack food. Potato chips originated in upstate New York as one man’s variation on the French-fried potato, and their production was the result not of a sudden stroke of culinary invention but of a fit of pique.
In the summer of 1853, American Indian George Crum was employed as a chef at an elegant resort in Saratoga Springs, New York. On Moon Lake Lodge’s restaurant menu were French-fried potatoes, prepared by Crum in the standard, thick-cut French style that was popularized in France in the 1700s and enjoyed by Thomas Jefferson as ambassador to that country. Ever since Jefferson brought the recipe to America and served French fries to guests at Monticello, the dish had been popular, serious dinner fare.
At Moon Lake Lodge, one dinner guest found chef Crum’s French fries too thick for his liking and rejected the order. Crum cut and fried a thinner batch, but these, too, met with disapproval. Exasperated, Crum decided to rile the guest by producing French fries too thin and crisp to skewer with a fork.
The plan backfired. The guest was ecstatic over the browned, paper-thin potatoes, and other diners requested Crum’s potato chips, which began to appear on the menu as Saratoga Chips, a house specialty. Soon they were packaged and sold, first locally, then throughout the New England area. Crum eventually opened his own restaurant, featuring chips. At that time, potatoes were tediously peeled and sliced by hand. It was the invention of the mechanical potato peeler in the 1920s that paved the way for potato chips to soar from a small specialty item to a top-selling snack food.
For several decades after their creation, potato chips were largely a Northern dinner dish. In the 1920s, Herman Lay, a traveling salesman in the South, helped popularize the food from Atlanta to Tennessee. Lay peddled potato chips to Southern grocers out of the trunk of his car, building a business and a name that would become synonymous with the thin, salty snack. Lay’s potato chips became the first successfully marketed national brand, and in 1961 Herman Lay, to increase his line of goods, merged his company with Frito, the Dallas-based producer of such snack foods as Fritos Corn Chips.
Americans today consume more potato chips (and Fritos and French fries) than any other people in the world, a reversal from colonial times, when New Englanders consigned potatoes largely to pigs as fodder and believed that eating the tubers shortened a person’s life—not because potatoes were fried in fat and doused with salt, today’s heart and hypertension culprits, but because the spud, in its unadulterated form, supposedly contained an aphrodisiac which led to behavior thought to be life shortening. Potatoes of course contain no aphrodisiac, though potato chips are frequently consumed with passion and are touted by some to be as satisfying as sex.
Pretzel: A.D. 610, Northern Italy
The crisscross-shaped pretzel was the creation of a medieval Italian monk, who awarded pretzels to children as an incentive for memorizing prayers. He derived the shape of his confection from the folded arms of children in prayer. That origin, as popular folklore has it, is supported by the original Latin and Italian words for “pretzel”: the Latin pretiole means “little gift,” and the Italian bracciatelli means “small arms.” Thus, pretzels were gifts in the shape of praying arms.
From numerous references in art and literature, as well as extant recipes, we know that the pretzel was widely appreciated in the Middle Ages, and that it was not always baked firm and crisp but was frequently chewy. A recipe for moist, soft pretzels traveled in the thirteenth century from Italy to Germany, where the baked good was first called, in Old High German, bretzitella, then brezel—the immediate predecessor of our word.
The pretzel is one of the few foods to have played a role in the history of warfare. Early in the sixteenth century, Asian armies under the Turkish-Mongol emperor Babar swept into India and parts of Europe. A wave of Turkish forces encountered resistance at the high stone wall surrounding the city of Vienna. Following several unsuccessful attempts to scale the wall, the Turks planned to tunnel secretly beneath it, and to avoid detection, they dug at night.
Snack foods. Instruments to prepare homemade potato chips (top); a monk shaped the pretzel after the folded arms of children in prayer.
Turkish generals, however, were unfamiliar with the working hours of Viennese pretzel makers, who, to ensure the freshness of their specialty, baked from midnight to daybreak. A group of bakers, toiling in kitchen cellars, heard suspicious digging and alerted the town council; the local military thwarted the invasion. Viennese pretzel bakers were honored for their part in the victory with an official coat of arms that displays a pretzel, still the bakers’ emblem today.
Popcorn: 3000 B.C., Americas
Not all corn pops. Ideally, a corn kernel should have at least 14 percent water content so that under heat, the water expands to steam, causing the nugget to explode into a puffy white mass.
The art involved in popping corn is at least five thousand years old, perfected by the American Indians. They clearly appreciated the difference between sweet corn (for immediate eating), field corn (as cattle feed), and so-called Indian corn, which has sufficient water content for popping.
Popped corn was a native Indian dish and a novelty to the early explorers of the New World. Columbus and his men purchased popcorn necklaces from natives in the West Indies, and in the 1510s, when Hernando Cortes invaded the territory that today is Mexico City, he discovered the Aztecs wearing amulets of stringed popcorn in religious ceremonies. The dish derives its echoic name “popcorn” from the Middle English word poppe, meaning “explosive sound.”
The Indians developed three methods for popping high-moisture corn. They might skewer an ear of popping corn on a stick and roast it over a fire, gathering up kernels that popped free of the flames. Alternatively, the kernels were first scraped from the cob, then thrown directly into a low fire; again, those that jumped free were eaten. The third method was the most sophisticated. A shallow clay cooking vessel containing coarse sand was heated, and when the sand reached a high temperature, corn kernels were stirred in; cooking, they popped up to the surface of the sand.
Legend has it that the Plymouth Pilgrims enjoyed popcorn at the first Thanksgiving dinner in 1621. It is known that Indian chief Massasoit of the Wampanoag tribe arrived with ninety of his braves bearing various foods. Massasoit’s brother, Quadequina, is supposed to have contributed several deerskin bags of corn already popped.
Popping corn was simplified in the 1880s with the introduction of specially designed home and store popping machines. But at the time, corn could be purchased only in enormous quantities, and often still on the cob. The 1897 Sears, Roebuck catalogue, for instance, advertised a twenty-five-pound sack of popping corn, on cobs, for one dollar. The problem with buying popping corn in quantity was that storage depleted the kernels of their essential water content. Today food scientists know that if the internal moisture falls below about 12 percent, kernels open only partially or not at all. Charred, unpopped kernels are now called “duds” and are rare, which was not the case in the nineteenth century, when they were cursed as “old maids.”
The first electric corn popper in America appeared in 1907, at a time when electrical appliances were new, often large, and not always safe. A magazine advertisement for the device pointedly addresses these two drawbacks: “Of the host of electrical household utensils, the new corn popper is the daintiest of them all,” and “children can pop corn on the parlor table all day without the slightest danger or harm.”
The advent of electric popping machines, and the realization during the Depression that popcorn went a long way in stretching the family food budget, heightened the food’s popularity. But it was in the lobbies of movie theaters that popcorn became big business. By 1947, 85 percent of the nation’s theaters sold the snack, and 300,000 acres of Midwestern farmland were planted annually with Indian popping corn.
The arrival of television in the ’50s only increased Americans’ demands for corn, to pop in the kitchen between programs. A mid-decade poll showed that two out of three television watchers munched popcorn as often as four nights a week. Not all brands, though, were of equivalent quality; some yielded an annoyingly high number of duds. It was the quest to produce a high-quality popcorn that led Orville Redenbacher, a graduate in agronomy from Purdue University, to experiment with new hybrids of Indian popcorn.
Agronomy, the science and economics of crop production, was an established field of study by the 1950s, having contributed to improved management of America’s farmlands. In 1952, Redenbacher and a college friend, Charles Bowman, produced a corn whose kernels seldom failed to pop—and popped into larger, puffier morsels. But the quality hybrid was comparatively expensive, and popcorn companies that Redenbacher approached declined to sell his product, believing that snack food had to be low-priced. Convinced that popcorn lovers hated duds as much as he did, Redenbacher began packaging his corn and selling it to retail grocers. Its quality proved worth the price, for it became America’s best-selling popcorn, contributing substantially to the 192 million pounds of corn popped annually in electric poppers, in fireplaces, and atop stoves. Today the average American consumes almost two pounds of popcorn a year.
Peanuts: 1800s, United States
As a plant, the peanut is prehistoric; as a snack food, it is comparatively modern. And its name is a misnomer, for the nugget is not a nut (which grows aboveground on trees) but a legume, a member of the bean family, and one whose seed pods grow underground.
Native to South America, peanut plants were brought from Brazil to North America—to the area that today is Virginia—centuries before Columbus’s arrival. They flourished throughout the Southeast, where they were grown mainly for feeding pigs, chickens, and turkeys. Only poor Southern families and slaves ate peanuts, which were commonly known as “goobers,” from the Bantu word nguba. By the 1850s, “goober” was also a derisive term for any backwoodsman from Virginia, Alabama, or Georgia, and the latter state, for its prodigious peanut crop, became known as the Goober State. It was not until the American Civil War that Northerners really got to taste peanuts.
In the 1860s, when Union forces converged on the South, thousands of hungry soldiers found themselves gladly eating a new kind of pea-size bean from an unfamiliar vine. Soldiers brought the vine, Arachis hypogaea, which bears yellow flowers and brittle pods, home with them, but the peanut remained little more than a culinary curiosity in the North. In the 1880s, showman P. T. Barnum began marketing peanuts in nickel-size bags at his circuses, and as Americans took to the circus, they also took to peanuts; as popcorn would become the quintessential movie snack, peanuts became part of the three-ring experience.
Peanut, as a word and a food, entered our lives in other ways. At public entertainments throughout the 1880s and 1890s, the euphemism “peanut gallery” gained currency to designate the remote seats reserved for blacks at circuses, theaters, and fairs. Not until the 1940s would the phrase, reiterated on television’s Howdy Doody show, gain wide recognition simply as a grandstand for children. And peanut butter was an 1890s “health food” invention of a St. Louis physician; etymologists do not find the term linked with “jelly” until the 1920s, when the classic sandwich became a national dietary mainstay.
Peanuts were introduced into China in 1889 by American missionaries, who brought along crates of the beans to fortify their conversion efforts. Each Chinese couple who submitted to Christian baptism was rewarded with a quart of peanuts, a trifling amount—or “peanuts,” a connotation that originated in the American South in the 1830s, since blacks would work, literally, for peanuts. Once introduced into China, the new delicacy was cultivated in every province, and the peanut, around the turn of the century, became an American embellishment to traditional Oriental cuisine.
At that time, two young Italians emigrated to America and established a peanut empire that did much to popularize the bean as a nutritious snack food.
Planters Peanuts. Amedeo Obici and Mario Peruzzi arrived from Italy and settled with friends in Wilkes-Barre, Pennsylvania, opening a small fruit and nut stand. Their roasted peanuts were popular, but manual daily cranking of the roasting machine required effort and endurance. Experimenting with motors, Obici perfected the automatic peanut roaster that became the cornerstone of his business. Billing himself as “Obici, Peanut Specialist,” he attracted customers from neighboring towns with his machine-roasted and salted nuts, and in 1906 he and his partner formed the Planters Peanut Company.
To publicize the lowly peanut, as well as to create a distinctive company trademark, in 1916 the two men sponsored a contest. The winning entry was a fourteen-year-old boy’s crayon drawing titled “Little Peanut Person.” It won the boy five dollars, and an in-house artist, adding a monocle, cane, and top hat, turned the cartoon into Mr. Peanut. The amusing figure, in capturing the public imagination, elevated the peanut to a fun food, to be enjoyed even after the circus collapsed its tent and left town.
In the South during those years, a famed agronomist at the Tuskegee Institute in Alabama—George Washington Carver—was popularizing the peanut through his own research and recipes, the latter including two foods that would become American firsts and standards: peanut ice cream and peanut butter cookies. Before his death in 1943, Carver created more than three hundred products from the versatile peanut and its by-products: mayonnaise, cheese, chili sauce, shampoo, bleach, axle grease, linoleum, metal polish, wood stain, adhesives, plastics, ink, dyes, shoe polish, creosote, salve, shaving cream, soap, and several kinds of peanut butter. In a short time, the goober had come a long way.
Filberts. True nuts of the birch tree family, filberts were enjoyed by the Romans, who ate them fresh and dried. Etymologists believe the sweet-flavored nut was named by early Christians for St. Philibert, a French abbot who died in 684 and whose feast day, August 20, falls during the nut-harvesting season. The Old Norman expression for the food was noix de filbert, or “nut of Philibert.” Traditionally, the Britons eat filberts with figs, the Chinese with tea, while Americans have long relegated filberts to boxes of mixed nuts.
Walnuts. The walnut’s history goes back to ancient Persia, where the two-lobed seed was so rare and so highly valued that it once served as currency. Cultivation of the nut has been traced from Persia to Carthage to Rome, then throughout Europe and to the New World.
Peanuts, once animal fodder, were greatly popularized by Mr. Peanut. (Clockwise) Walnut, peanut, pistachio, and almond.
A product of the tree of the genus Juglans, the walnut derived its name from a medieval British pejorative. To the British, people and things foreign to their soil were often disparaged as “Welsh.” When the first walnuts arrived in the Isles, they were initially referred to as wealh hnutu, “Welsh nut,” which in Middle English became walnot.
In America, the walnut was prized by the native Indians and the early colonists, and in seasons of surplus harvest the nuts also served as fodder for swine. Today the United States is the world’s major walnut producer, followed by France, Italy, and China.
Almonds. One of the two nuts mentioned in the Bible (the other is the pistachio), the almond was cultivated in ancient Mesopotamia, where its sweet-smelling oil served as an early body moisturizer, hair conditioner, and perfume. As early as 2500 B.C., almonds were grown in Greece, and seeds have been found in the palace at Knossos on Crete. A favorite dessert dish for the Greeks, the almond was called amygdale, and by the Romans amygdala, which today is the anatomical term for any almond-shaped body structure, such as the tonsil.
Almonds are the oldest, most widely cultivated and extensively used nuts in the world. In the United States, the earliest almonds were harvested from trees originating in Mexico and Spain, whose seeds were planted by missionaries to California. Most of those early trees, however, died off when the missions were abandoned. The current California crop is based on trees brought from the East in 1843. Today the state’s groves produce more almonds than all other locations in the world combined.
Pistachio. Indigenous to Persia and Syria, the pale yellow-green pistachio—pistah in ancient Persian—was widely cultivated throughout the Near East, and its trees were planted in the royal gardens of Babylonia during the eighth century B.C. The nut was exploited for its oil, as well as being eaten fresh and used in Persian confections. Pistachios fetched high prices in ancient Rome as delicacies, eaten at the conclusion of a meal as dessert. In Gaul, dessert was synonymous with nuts, and the origin of our word “dessert” is the Old French verb desservir, “to clear the table,” signaling the serving of the nut course.
Cracker Jack: 1893, Chicago
Billed at the 1893 Chicago World’s Fair as “Candied Popcorn and Peanuts,” Cracker Jack was the brainchild of a German immigrant, F. W. Rueckheim. He concocted a confection that combined the proven popularity of candy with Americans’ growing acceptance of popcorn and peanuts as snack foods.
With a savings of two hundred dollars from farm wages, in 1871 Rueckheim opened a small popcorn stand in Chicago. The successful business eventually led him to expand his fare to include peanuts, caramels, marshmallows, and molasses taffy. In the early 1890s, the confectioner reasoned that if customers so enjoyed popcorn, peanuts, and molasses taffy individually, they might prefer a combination of the three. This succotash of sweets was not entirely original and daring, for molasses-coated “popcorn balls” had been a candy favorite in the Northeast since the 1870s. Peanuts, though, a salient ingredient in Rueckheim’s creation, were a novelty circus snack at the time.
Company legend has it that a friend tasted Rueckheim’s new confection, exclaimed, “That’s crackerjack!” and the product’s name was born. The story is plausible. In that era, “cracker” was a Northeastern vernacularism meaning “excellent”; “Jack” was a breezy address for a man whose name was unknown; and both “crackajack” and “crackerjack” were abbreviated expressions for the approving phrase “Cracker, Jack!”
A box of Cracker Jack did not always include a prize. At first, a box carried a discount coupon toward a subsequent purchase; a child’s prize in the form of a trinket entered the box in 1913. Three years later, a sailor boy, Jack, and his black-and-white dog, Bingo, began to appear in product advertisements, then as the company trademark. The real-life “Jack,” the inspiration for the logo, was Rueckheim’s grandson Robert, who at the age of eight died of pneumonia. The sailor boy image acquired such meaning for the founder of Cracker Jack that he had it carved on his tombstone, which can still be seen in St. Henry’s Cemetery, Chicago. Today every ounce of machine-packaged Cracker Jack contains exactly nine peanuts, fewer than Rueckheim prescribed in 1893, when the circus nut was something of a novelty.
Hot Dog: 1500 B.C., Babylonia
The history of the hot dog begins 3,500 years ago with the Babylonians, who stuffed animal intestines with spiced meats. Several civilizations adopted, modified, or independently created the dish; the Greeks called it orya, the Romans salsus, the origin of our word “sausage.”
Homer, in the Odyssey, sang the gastronomical praises of sausage, its first reference in literature: “As when a man beside a great fire has filled a sausage with fat and blood and turns it this way and that and is very eager to get it quickly roasted…”
The decline of the sausage preceded that of the Roman Empire. According to the oldest known Roman cookbook, written in A.D. 228, sausage was a favorite dish at the annual pagan festival Lupercalia, held February 15 in honor of the pastoral god Lupercus. The celebration included sexual initiation rites, and some writers have suggested that sausage served as more than just a food. The early Catholic Church is known to have outlawed the Lupercalia and made eating sausage a sin. And when Constantine the Great, the fourth-century emperor of Rome, embraced Christianity, he, too, banned sausage consumption. As would happen in the twentieth century with liquor prohibition, the Roman populace indulged in “bootlegged” sausage to such an extent that officials, conceding the ban was unenforceable, eventually repealed it.
The evolution of the broad sausage to a slender hot dog began during the Middle Ages. Butchers’ guilds in various European city-states coveted regional sausage formulas, producing their own distinctive shapes, thicknesses, and brands, with names denoting the places of origin. Wiener wurst—“Vienna sausage”—eventually gave birth to the German-American terms “wiener” and “wienie.”
Shape and size were not the only distinguishing national features to emerge. Mediterranean countries specialized in hard, dry sausages that would not spoil in warm weather. In Scotland, oatmeal, a common and copious food, became one of the earliest cereal fillers for sausage, starting a practice that then, as now, made pork or beef all too often a secondary ingredient. In Germany, sausages were thick, soft, and fatty, and it was in that country that the “frank” was born in the 1850s.
In 1852, the butchers’ guild in Frankfurt introduced a sausage that was spiced, smoked, and packed in a thin, almost transparent casing. Following tradition, the butchers dubbed their creation “frankfurter,” after their hometown. The butchers also gave their new, streamlined sausage a slightly curved shape. German folklore claims this was done at the coaxing of a butcher who owned a pet dachshund that was much loved in the town. He is supposed to have convinced co-workers that a dachshund-shaped sausage would win the hearts of Frankfurters.
Three facts are indisputable: the frankfurter originated in the 1850s, in the German city from which it derived its name; it possessed a curved shape; and it was alternatively known as a “dachshund sausage,” a name that trailed it to America.
In America, the frankfurter would also become known as the hot dog, today its worldwide name.
Two immigrants from Frankfurt, Germany, are credited with independently introducing the sausage to America in the 1880s: Antoine Feuchtwanger, who settled in St. Louis, Missouri; and Charles Feltman, a baker who sold pies from a pushcart along Coney Island’s rustic dirt trails. It was Feltman who would become an integral part of the hot dog’s history.
In the early 1890s, when Coney Island inns began to serve a variety of hot dishes, Feltman’s pie business suffered from the competition. Friends advised him to sell hot sandwiches, but his small pie wagon could not accommodate a variety of foods and cooking equipment. Instead, the pieman decided to specialize in one hot sandwich, his hometown’s sausage, the frankfurter.
Installing a small charcoal stove in his pushcart, Feltman boiled the sausages in a kettle and advertised them as “frankfurter sandwiches,” which he served with the traditional German toppings of mustard and sauerkraut. The sandwiches’ success enabled Charles Feltman to open his own Coney Island restaurant, Feltman’s German Beer Garden, and the amusement resort became identified with the frankfurter. With business booming, in 1913 Feltman hired a young man, Nathan Handwerker, as a roll slicer and part-time delivery boy, for eleven dollars a week. The move would open a new chapter in the hot dog’s unfolding history.
Nathan’s Franks. By 1913, Coney Island was a plush resort and an important entertainment center. Two avid frankfurter eaters along the beachfront were a local singing waiter named Eddie Cantor and his prominent-profiled accompanist, Jimmy Durante. Both worked for little money and resented the fact that the prospering Charles Feltman had raised the price of his “franks” to a dime. The struggling vaudevillians suggested to Nathan Handwerker that instead of working for Feltman, he go into competition with him, selling franks for half the price.
In 1916, Nathan did just that. With savings of three hundred dollars, he purchased an open-front Coney Island concession on the corner of Surf and Stillwell avenues and introduced the nickel frank, using a spiced meat formula devised by his wife, Ida. And to promote his product, Nathan employed a clever stratagem. He offered doctors at nearby Coney Island Hospital free franks if they would eat them at his stand wearing their professional whites and with stethoscopes prominently displayed. Doctors, then unassailably revered, proved an advertisement for the quality and salubriousness of Nathan’s franks that—together with the nickel price—almost sank the competition. To assist in serving the steady stream of customers, Nathan hired a perky, redheaded teenager, Clara Bowtinelli, who did not last long. A talent agent who frequented the concession took an interest in her, shortened her surname to Bow, and she headed off to Hollywood to become the glamorous “It Girl” of silent films.
Hot dog and hamburger, today American specialties, have German roots.
“Hot Dog.” In 1906, slender, streamlined sausages were still something of a novelty in America, and they went by a variety of names: frankfurters, franks, wieners, red hots, and dachshund sausages. By this time, a refreshments concessionaire, Harry Stevens, had already made the sausage a familiar food at New York City baseball games. At the Polo Grounds—the home of the New York Giants—Stevens’s vendors worked the bleachers, bellowing, “Get your red-hot dachshund sausages!”
In the stands one summer day in 1906 was a syndicated Hearst newspaper cartoonist, Tad Dorgan. The dog-like curve of the frank and the vendors’ “barking” call inspired Dorgan to sketch a cartoon of a real dachshund, smeared with mustard, sandwiched in a bun. As the story is told, back at his office, Dorgan refined the cartoon, and unable to spell “dachshund,” he settled on “dog,” captioning the picture “Get your hot dogs!”
The name not only stuck, it virtually rendered its predecessors obsolete. And it quickly spawned a string of neologisms: the exclamatory approval “hot dog!”; the more emphatic “hot diggity dog!”; the abbreviated “hot diggity!” which inspired the song lyrics “Hot diggity, dog diggity, zoom what you do to me”; the noun for a daredevil, “hot dogger”; and the verb for going fast or making tracks, “to hot dog,” which decades later became a surfing term.
It was the universal acceptance of the term “hot dog” that caused the world to regard the frank or wiener as a thoroughly American invention. And America fast became the major producer of hot dogs: today 16.5 billion are turned out each year, or about seventy-five hot dogs for each man, woman, and child in the country.
The man responsible for the term “hot dog,” Thomas Aloysius Dorgan, who signed his illustrations TAD, was a major American cartoonist. There have been retrospectives of his work, and several cartoon museums around the country feature Dorgan collections. Historians, archivists, and curators of cartoon museums generally credit Dorgan with originating “hot dog,” but their numerous searches to date have not produced the verifying cartoon.
Hamburger: Middle Ages, Asia
The hamburger has its origin in a medieval culinary practice popular among warring Mongolian and Turkic tribes known as Tartars: low-quality, tough meat from Asian cattle grazing on the Russian steppes was shredded to make it more palatable and digestible. As the violent Tartars derived their name from the infernal abyss, Tartarus, of Greek mythology, they in turn gave their name to the phrase “catch a tartar,” meaning to attack a superior opponent, and to the shredded raw meat dish, tartar steak, known popularly today by its French appellation, steak tartare.
Tartar steak was not yet a gourmet dish of capers and raw egg when Russian Tartars introduced it into Germany sometime before the fourteenth century. The Germans simply flavored shredded low-grade beef with regional spices, and, both cooked and raw, it became a standard meal among the poorer classes. In the seaport town of Hamburg, it acquired the name “Hamburg steak.”
The Hamburg specialty left Germany by two routes and acquired different names and means of preparation at its points of arrival.
It traveled to England, where a nineteenth-century food reformer and physician, Dr. J. H. Salisbury, advocated shredding all foods prior to eating them to increase their digestibility. Salisbury particularly believed in the health benefits of beef three times a day, washed down by hot water. Thus, steak, regardless of its quality, was shredded by the physician’s faddist followers, and the Hamburg steak became Salisbury steak, served on a plate, not in a bun.
In the 1880s, the Hamburg steak traveled with a wave of German immigrants to America, where it acquired the name “hamburger steak,” then merely “hamburger.” Exactly when and why the patty was put in a bun is unknown. But when served at the 1904 St. Louis World’s Fair, it was already a sandwich, with its name further abbreviated to “hamburg.” And some three decades before McDonald’s golden arch would become the gateway to hamburger Mecca, the chain of White Castle outlets popularized the Tartar legacy.
Sandwich: 1760, England
The sandwich, as well as the Sandwich Islands (now the Hawaiian Islands), was named for a notorious eighteenth-century gambler, John Montagu, fourth earl of Sandwich, and British first lord of the Admiralty for the duration of the American Revolution.
Montagu’s tenure of office was characterized by graft, bribery, and mismanagement, and his personal life, too, was less than exemplary. Although married, he kept a mistress, Martha Ray, by whom he had four children. Because of his high military rank, when English explorer Captain James Cook discovered the Hawaiian archipelago, the islands were named in the earl’s honor.
An inveterate gambler, Montagu refused to leave the gaming tables even for meals. In 1762, when he was forty-four years old and the country’s foreign secretary, he spent twenty-four straight hours gambling, ordering sliced meats and cheeses served to him between pieces of bread. The repast, which enabled him to eat with one hand and gamble with the other, had for some time been his playing trademark, and that notorious episode established it as the “sandwich.”
Montagu’s sandwich was not the first food served between slices of bread. The Romans in the pre-Christian era enjoyed a light repast that they called an offula, which was a sandwich-like snack between meals. Perhaps it is not surprising that the Romans ate food between slices of bread; they were master bread bakers in the ancient world. A typical Roman loaf of bread, weighing one pound, was shaped into a mound and cooked in either of two ways: atop the stove, as panis artopicius, “pan bread”; or baked in an earthenware vessel, as panis testustis, “pot bread.” Historians in the second century B.C. pointedly observed that Roman women deplored ovens and left the baking of bread to freed slaves.
Bread itself originated with the Egyptians about 2600 B.C., when bakers made a momentous discovery. If they did not immediately bake a grain-and-water recipe called gruel, but first let it ferment, the resultant product was a higher, lighter bread. With this discovery of leavening, Egyptian bakers expanded their skills to include more than fifty different loaves, including whole wheat and sourdough breads.
Centuries later, the Westphalian Germans would create a variation on sour rye bread and pejoratively name it pumpernickel, from pumpern, “to break wind,” and Nickel, “Old Nick the devil.” The earliest instance of “pumpernickel” in print appeared in 1756 in A Grand Tour of Germany, by a travel writer named Nugent. He reported that the Westphalian loaf “is of the very coarsest kind, ill baked, and as black as a coal, for they never sift their flour.” The sour rye bread was considered so difficult to digest that it was said to make even Satan break wind.
Melba Toast: 1892, London
The opera singer who gave her stage name to a dry, brittle crisp of toast—and to a dessert—was born Helen Porter Mitchell in 1861 in Melbourne, Australia. Adapting the name of her hometown, the coloratura soprano introduced it as her stage name in 1887 when she performed as Gilda in Verdi’s Rigoletto at Brussels. By the 1890s, Nellie Melba was adored by opera lovers around the world and worshiped by French chef Auguste Escoffier.
In 1892, Melba was staying at London’s Savoy Hotel, where Escoffier reigned as head chef. After attending her Covent Garden performance as Elsa in Wagner’s Lohengrin, he was inspired to create a dish for the diva, who regularly dined at the Savoy. Sculpting a swan’s wings from a block of ice and coating them with iced sugar, he filled the center with vanilla ice cream topped with peaches. The dish was to recall the opera’s famous scene in which Lohengrin, knight of the Holy Grail, arrives to meet Elsa in a boat pulled by a swan, singing, “Nun sei bedankt, mein lieber Schwan” (“Now be thanked, my beloved swan”).
Chef Escoffier initially called his creation pêches au cygne, “swan peaches.” Later, on the occasion of the opening of London’s Carlton Hotel, he improved on the dessert by adding raspberry sauce, and renamed it Peach Melba. The soprano, always weight conscious, breakfasted at the Savoy on tea and dry toasted bread as thin as Escoffier could slice it. Thus, her name came to represent both a low-calorie diet crisp and a decidedly nondietary dessert.
Ketchup: 300 B.C., Rome
Though we think of ketchup as strictly a tomato-based sauce, it was defined for centuries as any seasoned sauce of puree consistency and was one of civilization’s earliest condiments. First prepared by the Romans in 300 B.C., it consisted of vinegar, oil, pepper, and a paste of dried anchovies, and was called liquamen. The Romans used the sauce to enhance the flavor of fish and fowl, and several towns were renowned for their condiment factories. Among the ruins of Pompeii were small jars bearing an inscription translated as: “Best strained liquamen. From the factory of Umbricus Agathopus.”
Though the Roman puree is the oldest “ketchup” on record, it is not the direct antecedent of our modern recipe. In 1690, the Chinese developed a tangy sauce, also for fish and fowl. A brine of pickled fish, shellfish, and spices, it was named ke-tsiap, and its popularity spread to the Malay archipelago, where it was called kechap.
Early in the eighteenth century, British seamen discovered the natives of Singapore and Malaysia using kechap and brought samples of the puree back to their homeland. English chefs attempted to duplicate the condiment, but, unfamiliar with its Eastern spices, they were forced to make substitutions such as mushrooms, walnuts, and cucumbers. Mistakenly spelled “ketchup,” the puree became an English favorite, and a popular 1748 cookbook, Housekeeper’s Pocketbook, by a Mrs. Harrison, cautions the homemaker “never to be without the condiment.” It was so popular in England that Charles Dickens, in Barnaby Rudge, smacked his lips over “lamb chops breaded with plenty of ketchup,” and Lord Byron praised the puree in his poem “Beppo.”
When and where did tomatoes enter ketchup?
Around 1790, in New England.
It could not have been much earlier, because prior to that decade, colonists suspected the tomato of being as poisonous as its botanical relatives deadly nightshade and belladonna. Although the Aztecs had cultivated the tomato (technically a berry and a fruit), calling it tomatl, and the Spaniards had sampled it as a tomate, early botanists correctly recognized it as a member of the family Solanaceae, which includes several poisonous plants (but also the potato and the eggplant). The Italians (who would later make the tomato an indispensable part of their cuisine) called it mala insana, “unhealthy apple,” and food authorities can only conclude that many peoples, unfamiliar with the plant, ate not its large red berries but its leaves, which are toxic.
In America, Thomas Jefferson, one of the first in the United States to cultivate the tomato, is credited with exonerating and legitimizing the fruit. One of the earliest recipes for “tomata catsup” appeared in the 1792 The New Art of Cookery, by Richard Briggs. And though acceptance of the tomato and its ketchup was slow, by the mid-1800s the fruit and its puree were kitchen staples. A popular cookbook of the day, Isabella Beeton’s Book of Household Management, counseled housewives: “This flavoring ingredient is one of the most useful sauces to the experienced cook, and no trouble should be spared in its preparation.”
But preparation of homemade ketchup was time-consuming. Tomatoes had to be parboiled and peeled, and the puree had to be continually stirred. It is little wonder that in 1876, homemakers eagerly purchased America’s first mass-produced, bottled ketchup, from the factory of German-American chef and businessman Henry Heinz. Heinz Tomato Catsup, billed as “Blessed relief for Mother and the other women in the household!” was an immediate success in its wide-base, thin-neck, cork-sealed bottle, and both the bottle design and the ingredients in the puree have hardly changed in over a hundred years.
Following the success of ketchup, Henry Heinz produced a variety of pickles, relishes, fruit butters, and horseradishes. But his company as yet had no identifiable slogan. In the early 1890s, while riding in a New York City elevated railway car, Heinz spotted a sign above a local store: “21 Styles of Shoes.” In a moment of inspiration, he reworked the phrase, upped the number, and created what would become one of the most famous numerical slogans in advertising: “57 Varieties.” At that time, the company actually produced sixty-five different products; Henry Heinz simply liked the way the number 57 looked in print.
Worcestershire Sauce. In the mid-1800s, British nobleman Sir Marcus Sandys returned to his native England from service in India as governor of the province of Bengal. A noted epicure, Sandys had acquired a recipe for a tangy sauce, a secret blend of spices and seasonings which was doused liberally on many Indian dishes.
From his estate in Worcester, England, Sandys commissioned two chemists, John Lea and William Perrins, to prepare bottles of the sauce for private use in his household and as gifts for friends. Its popularity prompted Lea and Perrins, with Sandys’s permission, to manufacture it under the name “Worcester Sauce.” It debuted in America, though, as “Worcestershire Sauce,” shire being the British term for county, and Worcester being the county town of Worcestershire. Americans took readily to the condiment, if not to the pronunciation of its name.
A.1. Steak Sauce. As the white sauce béchamel was created by Louis de Béchamel, steward to France’s King Louis XIV, A.1. Steak Sauce was the brainchild of another royal chef, and was created to please the palate of another European monarch: England’s King George IV. Indolent, devious, and profligate, and by his own assessment “rather too fond of women and wine,” George was redeemed in later public opinion for his superb taste in paintings and his recognition of the literary genius of Jane Austen and Walter Scott. He was also an epicure, whose gastronomic demands challenged his chief chef, Brand. Brand continually devised new dishes and sauces, and one spicy condiment for meats consisted of soy, vinegar, anchovy, and shallots. Popular legend has it that on tasting the new sauce, the king approvingly declared, “This sauce is A-1!”
There may be truth to the tale. During George’s reign, from 1820 to 1830, Lloyd’s of London began numerically classifying ships for insurance purposes, with “A Number 1” being the highest rating, for the most insurable vessels. The phrase caught on with London businessmen and the general public, who used it to label everything from prime real estate to quality theater fare, often in the abbreviated form “A-1.” It came to signify any person, place, or thing that was “tops” or “first class.”
Following the monarch’s death, Brand resigned and began to manufacture his condiment privately. It was exported to America, but during World War I, British shipments became infrequent and sporadic. The American-based spirits company of Heublein finally reached an agreement with the Brand Company of England, and A.1. Steak Sauce went into production in Hartford, Connecticut. During Prohibition, with no legal home market for Heublein’s line of liquors, it was A.1. Steak Sauce— “The Dash That Makes The Dish” —that kept the company from bankruptcy. Today Brand’s condiment is one of the top-selling meat sauces in America.
Mayonnaise. A Spanish condiment made of raw egg yolk and olive oil was popular on Minorca, one of the Balearic Islands, beginning in the eighteenth century. While neighboring Majorca would acquire fame when composer Frédéric Chopin sojourned there, Minorca would become known in Europe for its sauce, which was sampled by the French duke Richelieu in the island’s major port of Mahón.
Richelieu returned to France with the recipe for what he humbly labeled “sauce of Mahón.” But adopted by French chefs as a high-quality condiment reserved for the best meats, the sauce was renamed Mahonnaise. Even when “mayonnaise” arrived in America in the early 1800s, it was regarded as a delicate French creation, and one difficult to prepare.
Two breakthroughs transformed the haute sauce into a popular sandwich spread: the arrival of the electric blender, which simplified its preparation; and inexpensive bottled dressings. Richard Hellmann, the German-born owner of a Manhattan delicatessen, perceived that there was a market for a quality premixed brand of mayonnaise, and in 1912 he began selling his own version in one-pound wooden “boats.” A year later, he packaged the product in large glass jars. It’s ironic but understandable that as the condiment became increasingly commonplace, spread on BLTs and burgers, it lost its former luster as haute Mahonnaise, the exotic sauce of Mahón.
Tabasco Sauce. Amidst the coastal marshes of Louisiana’s fabled Cajun country is a prehistoric geological phenomenon known as Avery Island. An upthrust salt dome six miles in circumference, the island is covered with meadows and was the site of America’s first salt mine, which still produces a million and a half tons of salt a year. Avery Island is also the birthplace of Tabasco sauce, named by its creator, Edmund McIlhenny, after the Tabasco River in southern Mexico, because he liked the sound of the word.
In 1862, McIlhenny, a successful New Orleans banker, fled with his wife, Mary Avery McIlhenny, when the Union Army entered the city. They took refuge on Avery Island, where her family owned a salt-mining business. Salt, though, was vital in preserving meat for the war’s troops, and in 1863 Union forces invaded the island, capturing the mines. The McIlhennys fled to Texas and, returning at war’s end, found their plantation ruined, their mansion plundered. One possession remained: a crop of capsicum hot peppers.
Determined to turn the peppers into income, Edmund McIlhenny devised a spicy sauce using vinegar, Avery Island salt, and chopped capsicum peppers. After aging the mixture in wooden barrels for several days, he siphoned off the liquid, poured it into discarded cologne bottles, and tested it on friends. In 1868, McIlhenny produced 350 bottles for Southern wholesalers. A year later, he sold several thousand bottles at a dollar apiece, and soon opened a London office to handle the increasing European demand for Cajun Tabasco sauce.
Marco Polo (left) and the Oriental origin of spaghetti, meaning “little strings.”
Clearly contradicting the standing joke that there is no such thing as an empty Tabasco bottle, the McIlhenny company today sells fifty million two-ounce bottles a year in America alone. And the sauce, made from Edmund McIlhenny’s original recipe, can be found on food-store shelves in over a hundred countries. Each year, 100,000 tourists visit Avery Island to witness the manufacturing of Tabasco sauce, and to descend into the cavernous salt mines, which reach 50,000 feet down into geological time.
Pasta: Pre-1000 B.C., China
We enjoy many foods whose Italian names tell us something of their shape, mode of preparation, or origin: espresso (literally “pressed out”), cannelloni (“big pipes”), ravioli (“little turnips”), spaghetti (“little strings”), tutti-frutti (“every fruit”), vermicelli (“little worms”), lasagna (“baking pot”), parmesan (“from Parma”), minestrone (“dished out”), and pasta (“dough paste”). All these foods conjure up images of Italy, and all derive from that country except one, pasta (including vermicelli and spaghetti), which was first prepared in China at least three thousand years ago, from rice and bean flour.
Tradition has it that the Polo brothers, Niccolo and Maffeo, and Niccolo’s son, Marco, returned from China around the end of the thirteenth century with recipes for the preparation of Chinese noodles. It is known with greater certainty that the consumption of pasta in the form of spaghetti-like noodles and turnip-shaped ravioli was firmly established in Italy by 1353, the year Boccaccio’s Decameron was published. That book of one hundred fanciful tales, supposedly told by a group of Florentines to while away ten days during a plague (hence the Italian name Decamerone, meaning “ten days”), not only mentions the two dishes but suggests a sauce and cheese topping: “In a region called Bengodi, where they tie the vines with sausage, there is a mountain made of grated parmesan cheese on which men work all day making spaghetti and ravioli, eating them in capon’s sauce.”
For many centuries, all forms of pasta were laboriously rolled and cut by hand, a consideration that kept the dish from becoming the commonplace it is today. Spaghetti pasta was first produced on a large scale in Naples in 1800, with the aid of wooden screw presses, and the long strings were hung out to dry in the sun. The dough was kneaded by hand until 1830, when a mechanical kneading trough was invented and widely adopted throughout Italy.
Bottled spaghetti and canned ravioli originated in America, the creation of an Italian-born, New York–based chef, Hector Boiardi. He believed Americans were not as familiar with Italian food as they should be and decided to do something about it.
A chef at Manhattan’s Plaza Hotel in the 1920s, Boiardi began bottling his famous meals a decade later under a phoneticized spelling of his surname, Boy-ar-dee. His convenient pasta dinners caught the attention of John Hartford, an executive of the A & P food chain, and soon chef Boiardi’s foods were appearing on grocery store shelves across the United States. Though much can be said in praise of today’s fresh, gourmet pastas, served primavera, al pesto, and alla carbonara, Boy-ar-dee’s tomato sauce dishes, bottled, canned, and spelled for the masses, created something of a culinary revolution in the 1940s; they introduced millions of non-Italian Americans to their first taste of Italian cuisine.
Pancake: 2600 B.C., Egypt
The pancake, as a wheat flour patty cooked on a flat hot stone, was known to the Egyptians and not much different from their unleavened bread. For prior to the advent of true baking, pancakes and bread were both flat sheets of viscous gruel cooked atop the stove.
The discovery of leavening, around 2600 B.C., led the Egyptians to invent the oven, of which many examples remain today. Constructed of Nile clay, the Egyptian oven tapered at the top into a truncated cone and was divided inside by horizontal shelves. At this point in time, bread became the leavened gruel that baked and rose inside the oven, while gruel heated or fried on a flat-topped stove was the pancake—though it would not be cooked in a pan for many centuries.
The pancake became a major food in the ancient world with the advent of Lenten shriving observances in A.D. 461. Shriving was the annual practice of confession and atonement for the previous year’s sins, enacted as preparation for the holy Lenten season. The three-day period of Sunday, Monday, and Shrove Tuesday (the origin of Mardi Gras—literally “fat Tuesday” —the day before the start of Lent) was known as Shrovetide and marked by the eating of the “Shriving cake,” or pancake. Its flour symbolized the staff of life; its milk, innocence; and its egg, rebirth.
In the ninth century, when Christian canon law prescribed abstinence from meat, the pancake became even more popular as a meat substitute. And by the thirteenth century, a Shrove Tuesday pancake feast had become traditional in Britain, Germany, and Scandinavia, with many extant rhymes and jingles accompanying the festivities, for example:
Shrove Tuesday, Shrove Tuesday,
’Fore Jack went to plow
His mother made pancakes,
She scarcely knew how.
The church bell calling the shriving congregation became known as the “pancake bell,” and Shrove Tuesday as Pancake Day. A verse from Poor Robin’s Almanac for 1684 runs: “But hark, I hear the pancake bell / And fritters make a gallant smell.”
The most famous pancake bell in Western Europe was that of the Church of Sts. Peter and Paul in Olney, England. According to British tradition, an Olney woman in the fifteenth century, making pancakes when the bell tolled, unwittingly raced to church with the frying pan and its contents in her hand. The tale developed into an annual race to the church, with townswomen flipping flapjacks all the way, and it has survived into modern times, with the course a distance of four hundred and fifteen yards. Women competing must be at least eighteen years old, wear an apron and a head scarf, and somersault pancakes three times during the race. In 1950, a group of American women from Liberal, Kansas, members of the local Jaycees, staged their own version of the British pancake race.
The earliest American pancakes were of corn meal and known to the Plymouth Pilgrims as “no cakes,” from the Narragansett Indian term for the food, nokehick, meaning “soft cake.” Etymologists trace the 1600s’ “no cake” to the 1700s’ “hoe cake,” so named because it was cooked on a garden-hoe blade. And when cooked low in the flames of a campfire, often collecting ash, it became an “ashcake” or “ashpone.” In the next century, the most popular pancakes in America were those of a talented black cook, Nancy Green, the country’s Aunt Jemima.
Aunt Jemima. The story of America’s first commercially successful pancake mix begins in 1889 in St. Joseph, Missouri, where a local newspaperman, Chris Rutt, conceived the idea for a reliable premixed self-rising flour. Rutt loved to breakfast on pancakes, but lamented the fact that batter had to be made from scratch each morning. He packaged a formulation of flour, phosphate of lime, soda, and salt in plain brown paper sacks and sold it to grocers. The product, despite its high quality, sold poorly, and Rutt realized that he needed to jazz up his packaging.
The prototype for Aunt Jemima: a crowd in 1882 watches a short-order cook prepare pancakes.
Enter Aunt Jemima.
One evening in autumn 1889, Rutt attended a local vaudeville show. On the bill was a pair of blackface minstrel comedians, Baker and Farrell, and their show-stopping number was a rhythmic New Orleans-style cakewalk to a tune called “Aunt Jemima,” with Baker performing in an apron and red bandanna, traditional garb of a Southern female chef. The concept of Southern hospitality appealed to Rutt, and he appropriated the song’s title and the image of the Southern “mammy” for his pancake product.
Sales increased, and Rutt sold his interests to the Davis Milling Company, which decided to promote pancake mix at the 1893 Chicago World’s Fair. Initiating a dynamic concept that scores of advertisers have used ever since, the company sought to bring the Aunt Jemima trademark to life. Searching among Chicago’s professional cooks, the company found a warm, affable black woman, Nancy Green, then employed by a local family. As the personification of Aunt Jemima, Nancy Green served the fair’s visitors more than a million pancakes, and a special detail of policemen was assigned to prevent crowds from rushing the concession. Nancy Green helped establish the pancake in America’s consciousness and kitchens, touring the country as Aunt Jemima until her death in 1923, at age eighty-nine.
Betty Crocker: 1921, Minnesota
Although there was a real-life “Aunt Jemima,” the Betty Crocker who for more than sixty years has graced pantry shelves never existed, though she accomplished much.
In 1921, the Washburn Crosby Company of Minneapolis, a forerunner of General Mills, was receiving hundreds of requests weekly from homemakers seeking advice on baking problems. To give company responses a more personal touch, the management created “Betty Crocker,” not a woman but a signature that would appear on outgoing letters. The surname Crocker was selected to honor a recently retired company director, William Crocker, and also because it was the name of the first Minneapolis flour mill. The name Betty was chosen merely because it sounded “warm and friendly.” An in-house handwriting contest among female employees was held to arrive at a distinctive Betty Crocker signature. The winning entry, penned by a secretary in 1921, still appears on all Betty Crocker products.
American housewives took so trustingly and confidingly to Betty Crocker that soon more than her signature was required.
In 1924, Betty Crocker’s voice (that of an actress) debuted on America’s airwaves in the country’s first cooking program, something of early radio’s equivalent to Julia Child, and it was an overnight success. Within months, the program was broadcast from thirteen stations, each with its own Betty Crocker reading from the same company-composed script. The Betty Crocker Cooking School of the Air would eventually become a nationwide broadcast, running uninterrupted for twenty-four years.
Although most American housewives believed Betty Crocker was a real person, no one had yet seen her picture, because none existed until 1936. That year, to celebrate the fifteenth birthday of the Betty Crocker name, a portrait was commissioned from a prominent New York artist, Neysa McMein. In an act of artistic egalitarianism, Neysa McMein did not use a single company woman to sit for the portrait. Instead, all the women in the company’s Home Service Department assembled, and the artist, as the company stated, “blended their features into an official likeness.”
That first Betty Crocker visage reigned unaltered until 1955, when the company “updated” the portrait. Instead of aging the appropriate nineteen years, Betty actually appeared younger in her 1955 portrayal. And she continued to grow more youthful and contemporary, in her official 1965 portrait and in the most recent one, painted in 1980, in which she appears as a modern professional woman.
For a fictitious woman, Betty Crocker acquired enviable fame. During World War II, she served the country at the request of the United States Department of State with a patriotic radio show, Your Nation’s Rations. She went on to write several best-selling cookbooks, narrate films, record recipes on cassette tapes, and become a one-woman cottage industry, something of a prototypal Jane Fonda.
Duncan Hines: 1948, Kentucky
While there was never a Betty Crocker, and only an Aunt Jemima impersonator, there was a flesh-and-blood, real-life Duncan Hines—though he never baked a cake professionally in his life. Duncan Hines only wrote about food.
In 1936, Hines published Adventures in Good Eating, a pocket-sized guidebook to the best restaurants along America’s highways. With the burgeoning craze for automobile travel in the ’30s, Hines’s book became a runaway success. Sales figures suggested that every car in America had a copy of the guide in its glove compartment. Restaurants across the country coveted the hard-earned sign that boasted “Recommended by Duncan Hines.”
A native of Bowling Green, Kentucky, Duncan Hines traveled fifty thousand miles a year for the sole purpose of sampling highway fare and updating his guidebook. In the late 1940s, when New York businessman Roy Park surveyed housewives to identify a trusted food authority to endorse a new line of baked goods, he found there was no competition: The name Duncan Hines was not only trusted, it was better known across America than that of the incumbent Vice President, Alben Barkley—even in Barkley’s home state of Kentucky.
In 1948, Roy Park and Duncan Hines teamed up to form Hines-Park Foods, Inc. Park was president, and Duncan Hines signed a contract permitting his name to be used on the company’s line of boxed baked goods. So respected was the Hines name that within three weeks of their introduction, the cake mixes had swallowed up 48 percent of the national market.
One could argue, of course, that the man behind the Duncan Hines brand was every bit the corporate ruse of an Aunt Jemima or a Betty Crocker, a harmless deception. And a popular way to personalize a product. Certainly no one ever seriously believed there was an aristocrat named Lady Kenmore, or if one existed, that she ever endorsed appliances.
Pie: 5th Century B.C., Greece
Although baking bread and confections began in ancient Egypt, there is no evidence that civilization’s first bakers ever stumbled on the idea of stuffing a dough shell with meat, fish, or fruit. That culinary advance was made in ancient Greece, where the artocreas, a hash-meat pie with only a bottom crust, endured for centuries. Two features distinguished those early pies from today’s: They had no top crust, and fillings were never fruit or custard, but meat or fish.
The first pies made with two layers of crust were baked by the Romans. Cato the Elder, a second-century B.C. Roman statesman who wrote a treatise on farming, De Agricultura, loved delicacies and recorded a recipe for his era’s most popular pie, placenta. Rye and wheat flour were used in the crust; the sweet, thick filling consisted of honey, spices, and cheese made from sheep’s milk; and the pie was coated with oil and baked atop aromatic bay leaves.
The first Western reference to a fruit pie—and a true dessert pie—appears surprisingly late in history: during the sixteenth-century reign of England’s Elizabeth I. Though home bakers may have used fruits such as apples and peaches, it is known that the queen requested pitted and preserved cherries as substitutions for the traditional fillings of meat or fish. Before the Elizabethan era, “pie” meant “meat pie,” a meal’s main course. The word’s antecedent, pi, referred to any confusing jumble or mixture of things: meats to the early Britons, and to the earlier Greeks, a perplexing and endless array of digits generated by dividing a circle’s circumference by its diameter.
Once the dessert fruit pie appeared, its references and fillings proliferated. Interestingly (perhaps following the queen’s lead), the preferred fillings initially were not cut fruits but berries, a 1610s British favorite being the dark-blue hurtleberry, which resembles a blueberry but has ten nutlike seeds; in America by 1670, it was called the huckleberry, the basis for huckleberry pie and the quintessentially American name of an adventuresome boy surnamed Finn.
Cookie: 3rd Century B.C., Rome
Today’s cookies are crisp or chewy, round or oval, plain or studded with nuts, raisins, and/or chocolate chips. In the ancient past, such options did not exist; a cookie was a thin unleavened wafer, hard, square, bland, and “twice baked.” Its origin and evolution are evident in its names throughout history.
The cookie began in Rome around the third century B.C. as a wafer-like biscuit—bis coctum in Latin, literally “twice baked,” signifying its reduced moisture compared to that of bread or cake. To soften the wafer, Romans often dipped it in wine.
But it was precisely the wafer’s firmness and crispness that earned it the echoic Middle English name craken, “to resound,” for on breaking, it “crackled.” The craken became the “cracker,” the modern food that the Roman cookie most closely resembled. Though neither the bis coctum nor the craken would satisfy a sweet craving, both were immensely popular foods in the ancient world because their low moisture content served effectively as a preservative, extending their home shelf life. As pies for centuries were meat pies, cookies were plain biscuits; sweetness did not become a cookie hallmark until after the Middle Ages.
The modern connotation of “cookie” is believed to have derived from a small, sweet Dutch wedding cake known as koekje, a diminutive of koek, Dutch for a full-sized “cake.” Made in numerous variations and never “twice baked,” the sweeter, softer, moister koekje, etymologists claim, at least gave us the words “cooky” and “cookie,” and probably the dessert itself.
Animal Cookies: 1890s, England
For Christmas 1902, thousands of American children received a new and edible toy: animal-shaped cookies in a small rectangular box imprinted to resemble a circus cage. The box’s string handle made it easy to carry and suitable as a play purse, but the white string had been added by National Biscuit Company (Nabisco) to encourage parents to hang the boxes of Animal Crackers as decorative Christmas tree gifts.
The design of the animal cookies had originated in England in the 1890s, but the American manufacturer displayed advertising genius with the package design. Labeled “Barnum’s Animals” in the decade when P. T. Barnum was popularizing the “Greatest Show on Earth,” the box immediately captured the imaginations of children and adults. And whereas British animal crackers came in only a handful of shapes, the American menagerie boasted a circus of seventeen different creatures (though the cookies came in eighteen distinct shapes): bison, camel, cougar, elephant, giraffe, gorilla, hippopotamus, hyena, kangaroo, lion, monkey, rhinoceros, seal, sheep, tiger, zebra, and sitting bear. The eighteenth shape was a walking bear.
Although a box of Animal Crackers contained twenty-two cookies, no child that Christmas of 1902 or thereafter was guaranteed a full representation of the zoo. This was because the machine-filled boxes could randomly contain, say, a caravan of camels and a laugh of hyenas but not so much as a lone kangaroo.
The randomness added an element of expectancy to a gift box of Animal Crackers, a plus the company had not foreseen. And soon parents were writing to Nabisco and revealing another unanticipated phenomenon (either trivial or of deep psychological import): Children across America nibbled away at the animals in a definite order of dismemberment: back legs, fore-legs, head, and lastly the body.
Fig Newton. Whereas the shapes of Animal Crackers made them a novelty and a success, there was another cookie of the same era that caught the imaginations of Americans for its originality of concept.
In 1892, a Philadelphia inventor named James Mitchell devised a machine that extruded dough in a firm wraparound sandwich that could hold a filling—but a filling of what? Mitchell approached the Kennedy Biscuit Works in Cambridgeport, Massachusetts, and after testing his machine, they decided in 1895 to manufacture a stuffed cookie containing the company’s first and most successful jam: figs. The snack’s name generated debate. Management agreed it should include the word “fig” and, for local marketing purposes, the familiar name of a nearby town. “Fig Bostons” and “Fig Shrewsburys” did not sound as appealing as the suggestion made by an employee who lived in Newton, Massachusetts. Thus was named the newest sweet in American cookie jars at the turn of the century.
Oreo. Following the success of Animal Crackers, Nabisco attempted several other cookie creations. Two of them were to be eaten once and forgotten; one would become the world’s all-time favorite seller.
On April 2, 1912, an executive memo to plant managers announced the company’s intentions: “We are preparing to offer to the trade three entirely new varieties of the highest class biscuit.” The memo predicted superior sales for two of the cookies. One, the “Mother Goose Biscuit,” would be an imaginative variation on the company’s successful Animal Crackers, “A biscuit bearing impressions of the Mother Goose legends.” How could Goldilocks, Little Red Riding Hood, and Cinderella cookies fail? (No one questioned if there was something macabre in cannibalizing beloved little girls, or stopped to consider which appendages children would eat first.)
The second cookie with great expectations was, according to the memo, “a delicious, hard, sweet biscuit of beautiful design” exotically named “Veronese.” The third new entry would consist of “two beautifully embossed, chocolate flavored wafers with a rich cream filling,” to be named the “Oreo Biscuit.” Relatively few people ever got to taste a “Mother Goose” or a “Veronese,” but from Nabisco’s soaring sales figures, it appeared that every American was eating Oreos. Today the cookie outsells all others worldwide, more than five billion being consumed each year in the United States alone.
From its original name of “Oreo Biscuit,” the cookie became the “Oreo Creme Sandwich,” and in 1974, the “Oreo Chocolate Sandwich Cookie.” What no archivist at Nabisco knows with certainty is the origin of the term Oreo. Two educated guesses have been offered: that the first chairman of the National Biscuit Company, Adolphus Green, coined the word from oros, Greek for “mountain,” since the cookie as originally conceived was to have a peaked, mountain-like top; or that the name was suggested by or, the French word for “gold,” since on the original package the cookie’s name was scrolled in gold letters.
Graham Cracker: 1830s, New England
The graham cracker originated as a health food, and in Britain it is still known as a “digestive biscuit.” It is also probably the only cookie or cracker to have sprung from a faddish health craze and religious movement, Grahamism, which swept New England in the 1820s and 1830s.
The Reverend Sylvester Graham was a Connecticut eccentric, congenitally prone to poor health. He married his nurse and became a self-styled physician and temperance leader, preaching impassioned lectures on white bread’s evils, nutritional and spiritual. Derided by Ralph Waldo Emerson as the “poet of bran,” the Reverend Graham did advocate many healthful things, if fanatically: little consumption of oil; no red meat, alcohol, or refined flour; frequent bathing and exercise; and brushing the teeth daily. He believed that the way to bodily health and spiritual salvation lay in diet, and his disciples, “Grahamites,” in accordance with his philosophy of “Grahamology,” followed a strict vegetarian diet, drank only water, and slept with windows open even in winter.
His teachings against commercial breads, cereals, and flour—in favor of coarse bran—incurred the wrath of New England bakers. They frequently harassed Graham on speaking tours and picketed outside his hotels. In 1837, he published a treatise urging Americans to eat only home-baked breads, pastries, and crackers, and his name became associated with a variety of unprocessed products: graham flour, graham cereal, and the graham cracker. Eventually, bakers adopted a more conciliatory attitude, and capitalizing on Graham’s popularity, they, too, offered a line of whole-wheat goods, including the graham cracker.
Due to Grahamism, a new breakfast trend developed in America. One of the Reverend Graham’s New York followers, Dr. James Caleb Jackson, advocated cold breakfast cereal, a bold reversal of the traditional hot morning gruel, but one that quickly caught on. The food would not become a true American tradition, however, until the 1890s, when another health-conscious physician, Dr. John Kellogg, who breakfasted daily on seven graham crackers, created his own “Battle Creek health foods,” the first being Granola, followed in 1907 by Corn Flakes. The prototype of packaged cold cereals was Dr. James Caleb Jackson’s own effort, Granula, a “granular” bran whose name was a compression of “Graham” and “bran.” As for “Dr.” Sylvester Graham, despite a low-fat, high-carbohydrate, and high-fiber diet, he remained a sickly man and died at age fifty-seven.
Chocolate Chip Cookie: Post-1847, United States
Although history does not unambiguously record the origin of the chocolate chip cookie, we can be certain that there was no such confection prior to 1847, for before that time, chocolate existed only as a liquid or a powder, not as a solid.
The route to the chocolate chip cookie began in Mexico around 1000 B.C. The Aztecs brewed a chocolate ceremonial drink, xocoatl, meaning “bitter water,” made from pulverized indigenous cocoa beans. In the Nahuatl dialects of Mexico, xocoatl became chocolatl. Spaniards introduced the New World drink to Europe, where chocolate remained a beverage until 1828. That year, a Netherlands confectioner, C. J. Van Houten, attempting to produce a finer chocolate powder that would more readily mix with milk or water, discovered the cocoa bean’s creamy butter. In 1847, the British confection firm of Fry and Sons produced the world’s first solid eating chocolate. Chocolate chips became a reality; the cookie a possibility.
Legend has it that the first chocolate chip cookies were baked around 1930 at the Toll House Inn, on the outskirts of Whitman, Massachusetts.
Built in 1708 as a tollgate for travelers halfway between Boston and New Bedford, the house was purchased in the late 1920s by a New England woman, Ruth Wakefield, and renovated as an inn. In her role of resident cook and baker, Mrs. Wakefield added chocolate pieces to her basic butter cookies, creating the Toll House Inn cookie, which would become a national product. For chocolate bits, Mrs. Wakefield laboriously diced the Nestlé Company’s large Semi-Sweet Chocolate Bar. The company, impressed with her recipe, requested permission to print it on the chocolate bar’s wrapper, in exchange offering Mrs. Wakefield a lifetime supply of free chocolate.
Doughnut: 16th Century, Holland
For over two hundred fifty years, doughnuts, which originated with Dutch bakers, did not have holes in the center; the hole was an American modification that, once introduced, redefined the shape of the pastry.
The deep-fried batter doughnut originated in sixteenth-century Holland, where it was known as an olykoek, or “oil cake,” named for its high oil content. Made with sweetened dough and sometimes sugared, the oil cake was brought to America by Pilgrims who had learned to make the confection during their stay in Holland in the first two decades of the 1600s. Small, the size of a walnut, the round oil cake in New England acquired the name “dough nut,” while a related long twisted Dutch pastry of fried egg batter became known as the cruller, from the Dutch krullen, “curl.”
The hole in the doughnut’s center appeared in the first half of the nineteenth century, the independent creation of the Pennsylvania Dutch and, farther east, a New England sailor. Hanson Gregory, a sea captain from Maine, is said to have poked holes in his mother’s doughnuts in 1847, for the practical reason (also stated by the Pennsylvania Dutch) that the increased surface area allowed for more uniform frying and eliminated the pastry’s soggy center. Today Hanson Gregory’s contribution of the hole is remembered in his hometown of Rockport, Maine, by a bronze plaque, suggesting that in America, fame can be achieved even for inventing nothing.
Chewing Gum: 1860s, Staten Island, New York
The action of chewing gum, through exercising the muscles of the jaw, relieves facial tension, which in turn can impart a general feeling of bodily relaxation. Gum is part of the U.S. Armed Forces’ field and combat rations, and soldiers consume gum at a rate five times the national average. Thus, it seems fitting that the man responsible for the chewing gum phenomenon was a military general: Antonio López de Santa Anna, the despised Mexican commander responsible for the massacre at the Alamo.
Santa Anna had reason to chew gum.
In the 1830s, when Texas attempted to proclaim its independence from Mexico, a Mexican army of five thousand men, led by Santa Anna, attacked the town of San Antonio. The one hundred fifty native Texans forming the garrison retreated into Fort Alamo. The Mexican general and his men stormed the fort, killing all but two women and two children. A few weeks later, charging under the battle cry “Remember the Alamo!” American forces under General Sam Houston defeated Santa Anna and forced Mexico to accept Texas’s secession. Texas became a state in 1845, and Santa Anna, one of the few Mexican commanders not executed for his war crimes, entered the United States and settled on Staten Island, New York.
The exiled general brought with him a large chunk of chicle, the dried milky sap or latex of the Mexican jungle tree the sapodilla. Known to the Aztecs as chictli, the tasteless resin had been a favorite “chew” of Santa Anna. On Staten Island, the former general introduced chicle to a local photographer and inventor, Thomas Adams, who imported a large quantity of the gummy resin, then tried and failed to convert it chemically into an inexpensive synthetic rubber. To recoup a portion of his investment, Adams, recalling how avidly his own son Horatio, as well as Santa Anna, enjoyed chewing chicle, decided to market it as an alternative to the then-popular wads of paraffin wax sold as chew.
Thomas Adams’s first small tasteless chicle balls went on sale in a Hoboken, New Jersey, drugstore in February 1871 for a penny apiece. The unwrapped balls, packaged in a box labeled “Adams New York Gum—Snapping and Stretching,” were sold along the East Coast by one of Adams’s sons, a traveling salesman. Chicle proved to be a superior chew to wax, and soon it was marketed in long, thin strips, notched so a druggist could break off a penny length. It had the jaw-exercising consistency of taffy.
The first person to flavor chicle, in 1875, was a druggist from Louisville, Kentucky, John Colgan. He did not add the candy-like oils of cherry or peppermint, but the medicinal balsam of tolu, an aromatic resin from the bark of a South American legume tree, Myroxylon toluiferum, familiar to children in the 1870s as a standard cough syrup. Colgan named his gum Taffy-Tolu, and its success spawned other flavored chicles.
Thomas Adams introduced a sassafras gum, then one containing essence of licorice and named Black Jack, which is the oldest flavored chewing gum on the market today. And in 1880, a manufacturer in Cleveland, Ohio, introduced a gum that would become one of the industry’s most popular flavors: peppermint. In the same decade, Adams achieved another first: the chewing gum vending machine. The devices were installed on New York City elevated-train platforms to sell his tutti-frutti gum balls.
It was in the 1890s that modern processing, packaging, and advertising made chewing gum truly popular. Spearheading that technology was a soap salesman turned chewing gum manufacturer, William Wrigley, Jr.
Wrigley’s first two brands, Lotta Gum and Vassar, were soon forgotten. But in 1892, he introduced Wrigley’s Spearmint, followed the next year by Juicy Fruit, both of which became America’s top-selling turn-of-the-century chewing gums. Wrigley was a tireless gum advertiser. Following his personal motto— “Everybody likes something for nothing” —and his business philosophy— “Get them hooked” —in 1915 he collected every telephone directory in the country and mailed four free sticks of gum to each of the 1.5 million listed subscribers. Four years later, he repeated the kindness and the ploy, even though the number of American phone subscribers exceeded seven million.
Popular as gum chewing was with many people, it was not without its detractors. The puritanical-minded saw it as a vice; snuff habitués dismissed it as sissified; teachers claimed it disrupted a child’s classroom concentration; parents warned that swallowed gum caused intestinal blockage; and physicians believed excessive chewing dried up the salivary glands. As late as 1932, engineering genius Nikola Tesla, inventor of the alternating-current electrical system, solemnly voiced that concern: “By exhaustion of the salivary glands, gum puts many a foolish victim in the grave.”
What we buy today is not General Santa Anna’s original taffy-like chicle, but a gentler synthetic polymer, polyvinyl acetate, itself tasteless, odorless, and unappetizingly named, which Americans chew at the rate of ten million pounds a year.
Chiclets and Bubble Gum. Two men who entered the burgeoning chewing gum business in the 1880s were brothers Frank and Henry Fleer, each pursuing a different goal that would result in an industry classic.
Frank Fleer sought to create a gum with high surface tension and “snap-back,” which could be blown into large bubbles. Snap-back, or a gum’s elasticity, is a crucial parameter; low snap-back, and a burst bubble explodes over the chin and nose without contracting; high snap-back, and the bulk of the gum retreats to the lips. His first bubble gum effort, with the tongue-twisting title Blibber-Blubber Bubble Gum, failed because Blibber-Blubber burst before achieving a large bubble. In addition, the gum was too “wet,” making a burst bubble stick to the skin.
Brother Henry Fleer was tackling a different challenge: to develop a brittle white candy coating that could encapsulate pellets of chicle. Henry’s task was the easier, and in the 1910s his product emerged as Chiclets. Not until 1928 did brother Frank succeed in producing a sturdy, “dry” gum that blew bubbles twice the size of his earlier product. Double Bubble was an immediate success among Americans of varied ages. But what delighted Frank Fleer even more was that during World War II, American GIs introduced the gum to Eskimo populations in Alaska, where it quickly displaced their centuries-old traditional “chew,” whale blubber.
Ice Cream: 2000 B.C., China
Ice cream is rated as Americans’ favorite dessert and we consume it prodigiously. Annual production amounts to fifteen quarts a year for every man, woman, and child in the United States, and if water ice, sherbet, sorbet, spumoni, and gelato are added, the figure jumps to twenty-three quarts per person. But then ice cream was a dessert phenomenon from the time of its creation, four thousand years ago, in China, even if that first treat was more of a pasty milk ice than a smooth icy cream.
At that point in ancient history, the milking of farm animals had recently begun in China, and milk was a prized commodity. A favorite dish of the nobility consisted of a soft paste made from overcooked rice, spices, and milk, and packed in snow to solidify. This milk ice was considered a symbol of great wealth.
As the Chinese became more adept at preparing frozen dishes—they imported and preserved snow from mountain elevations—they also developed fruit ices. A juice, often including the fruit’s pulp, was either combined with snow or added to milk ice. By the thirteenth century, a variety of iced desserts were available on the streets of Peking, sold from pushcarts.
After China, ice milk and fruit ice appeared in fourteenth-century Italy, with credit for the desserts equivocally divided between Marco Polo and a Tuscan confectioner, Bernardo Buontalenti. Those European recipes were secrets, guarded by chefs to the wealthy, and with refrigeration a costly ordeal of storing winter ice in underground vaults for summer use, only the wealthy tasted iced desserts.
From Italy, frozen desserts traveled to France. When the Florentine Catherine de’ Medici married the future King Henry II of France in 1533, she used fruit ices to demonstrate to the rest of Western Europe her country’s culinary sophistication. During a month-long wedding celebration, her confectioners served a different ice daily, with flavors including lemon, lime, orange, cherry, and wild strawberry. She also introduced into France a semifrozen dessert made from a thick, sweetened cream, more akin to modern ice cream than to Chinese milk ice.
Ice cream became fully freezable in large quantities in the 1560s as the result of a technical breakthrough. Blasius Villafranca, a Spanish physician living in Rome, discovered that the freezing point of a mixture could be attained rapidly if saltpeter was added to the surrounding bath of snow and ice. Florentine confectioners began to produce the world’s first solidly frozen, full-cream ices. Within a decade, a molded, multiflavored dessert of concentric hemispheres bowed in France as the bombe glacée.
An 1868 ice cream vendor, or “hokey pokey” man.
Italian immigrants relocating throughout Europe sold ice cream and ices from ice-cooled pushcarts, and the desserts came within reach of the masses. By 1870, the Italian ice cream vendor was a familiar sight on London streets, called by British children the “hokey pokey” man, a corruption of the vendor’s incessant cry, “Ecco un poco” — “Here’s a little.” Even in America, an ice cream vendor was known by that expression until the 1920s—until, that is, confectioner Harry Burt from Youngstown, Ohio, marketed the first chocolate-covered vanilla ice cream bar on a stick, naming it a “Good Humor Sucker.” Thus was born the Good Humor man.
Ice Cream Soda. The city of Philadelphia holds the distinction of being the point of entry of ice cream into the United States, as well as the home of the carbonated soda water concoction known as the ice cream soda or float.
It was Thomas Jefferson who tasted ice cream while ambassador to France, and returned to Philadelphia with a recipe. We know that Jefferson highly valued his “cream machine for making ice,” and that Dolley Madison’s White House dinners became renowned for their strawberry bombe glacée centerpiece desserts. By the early 1800s, Philadelphia was the country’s “ice cream capital” —because of the quantity of ice cream produced there; because of a much-loved vanilla-and-egg flavor called “Philadelphia”; and because of the city’s famous public ice cream “houses,” which would later be known as “parlors.” The ice cream soda was officially introduced and served in 1874 at the fiftieth anniversary of the city’s Franklin Institute.
Ice Cream Sundae and Whipped Cream. From extant menus, food historians feel certain that the ice cream sundae debuted in the mid-1880s, about a decade after the ice cream soda, and that its name originated as a fanciful spelling of “Sunday,” the only day of the week that the dish was sold. Why only on Sunday?
Two theories have been advanced. In parts of New England during the 1880s, certain religious restrictions prevented the sale and consumption of soda water (believed to be akin to spirits) on the holiest day of the week. Subtracting carbonated water from an ice cream soda leaves scoops of ice cream and syrup, a new dish fit for a Sunday. Two cities—Evanston, Illinois, and Norfolk, Virginia—claim to be the home of the sundae, each offering as proof ice cream parlor menus of the day.
The second theory, less widely held, is that the sundae originated independent of the ice cream soda, and that it was always topped with chocolate syrup. The syrup was expensive, so for most families, the dish became a special one-day-a-week (Sunday) treat.
Whipped cream was not a standard sundae or ice cream soda topping for many decades, because the cream had to be beaten by hand. That changed in the early 1930s when Charles Goetz, a senior chemistry major at the University of Illinois, discovered a way to saturate cream with nitrous oxide or laughing gas, a breakthrough that produced not only the first spray-on whipped cream, but also spray-can shaving lather.
As a high school student, Goetz worked part-time in an ice cream parlor, where he was frequently put to whipping cream. In 1931, as a college senior, he also worked part-time, but in the university’s Dairy Bacteriology Department, improving milk sterilization techniques. One day it occurred to him that bacteria might not develop and multiply in milk if the milk was stored under high gas pressure. Experimenting, he discovered that milk released from a pressurized vessel foamed. As Goetz later wrote of his finding: “It was evident to me that if cream were used the foamed product would be whipped cream.”
There was a problem: every gas Goetz tested imparted its own undesirable flavor to the whipped cream. It was through a local dentist that he learned of odorless, tasteless, nonflammable nitrous oxide, used as an anesthetic in extracting teeth. Laughing gas led him to produce the world’s first commercial whipped cream from a can, ushering in the age of aerosols, which from a financial standpoint was nothing to laugh at.
Ice Cream Cone: 1904, St. Louis, Missouri
For centuries, ice cream was served in saucers and dishes and heaped on top of waffles, but there is no evidence for the existence of an edible pastry cone until 1904, at the St. Louis World’s Fair. Organized to commemorate the hundredth anniversary of the Louisiana Purchase, the fair cost fifteen million dollars (the same price as the Louisiana Purchase) and had a host of special attractions, including the John Philip Sousa Military Band and the first demonstration of electric cooking; it also offered its thirteen million visitors a large number of food concessions. Side by side in one area were a Syrian baker, Ernest Hamwi, specializing in waffles, and a French-American ice cream vendor, Arnold Fornachou.
As one version of the story goes, Fornachou, a teenager studying to become a watch repairman, ran out of paper ice cream dishes and rolled one of Hamwi’s waffles into a cone, creating a new sensation. The alternate version credits Ernest Hamwi. An immigrant pastry chef from Damascus, Hamwi offered fairgoers a zalabia, a wafer-thin Persian confection sprinkled with sugar. He is supposed to have come to Fornachou’s aid with rolled zalabias.
Whichever version is true, several newspaper accounts of the day unequivocally record that ice cream cones, or “World’s Fair cornucopias,” became a common sight at the St. Louis Exhibition. Cones were rolled by hand until 1912, when Frederick Bruckman, an inventor from Portland, Oregon, patented a machine for doing the job. In little more than a decade, one third of all the ice cream consumed in the United States was eaten atop cones.
In a work such as this, which is indebted to so many journals, magazines, newspapers, trade books, encyclopedias, and corporate archive files, it would be impractical to cite a reference for each and every fact. What would better serve a reader interested in any particular topic discussed in this book, I felt, is a list of the sources that I used in writing each chapter. That material is provided here, along with comments, when appropriate, as to a particular source’s availability (or lack of it), for considerable information, especially that deriving from folklore, was culled from private communications with cultural and social anthropologists, as well as from many dusty, out-of-print volumes borrowed from the bowels of libraries and archives.
To ensure accuracy, I have attempted to employ a minimum of two sources for the origin of a particular superstition, custom, or belief. In dealing with subjects that arose in prehistoric times and for which there are no unequivocal archaeological records (e.g., the origin of the “evil eye” superstition or the practice of the handshake), I sought out consensus opinions among folklorists. When these authorities substantially disagreed within their own discipline, I have labeled the information in the text as speculative and open to various interpretations, and have often presented two views.
A word about folklore, since it plays a crucial role in the origins of many of the “everyday things” discussed in the early chapters of this book.
No field of learning, perhaps, is more misunderstood than folklore, often defined as “the learning of nonliterate societies transmitted by oral tradition.” In the United States, the word “folklore” itself often conjures up an image of long-haired folk singers or grizzled old-timers spinning questionable yarns about a Paul Bunyan or a Johnny Appleseed.
Contrary to popular opinion, the field is an intellectual subject with its own substantial, worldwide body of scholarship. Professional folklorists distinguish between genuine folk tradition, founded on actual historic figures, and embellished, largely fictive imitations of those traditions. While serious students of folklore do not always agree on the boundaries of their discipline, they tend to follow one of three approaches to their material: the Humanistic Perspective, which emphasizes the human carrier of “oral literature”; the Anthropological Perspective, which focuses on cultural norms, values, and laws that form a consistent pattern in a nonliterate society; and the Psychoanalytic Perspective, which views the materials of folklore neither aesthetically nor functionally but behavioristically. In the last category, myths, dreams, jokes, and fairy tales express hidden layers of unconscious wishes and fears. In assembling many chapters of this book, it was necessary to borrow from each of these three perspectives.
Throughout the following references, I acknowledge my indebtedness to the many professionals who generously offered their time and advice, and who in numerous instances also provided me with articles, reprints, and books.
1 From Superstition
Superstitions are defined as irrational beliefs, half beliefs, or practices that influence human behavior. In attempting to compile the origins of many of our most cherished superstitions, I’ve relied on numerous sources for a consensus view. I wish to express gratitude to folklorist Dr. Alan Dundes, Department of Anthropology, University of California, Berkeley, for his thoughtful opinions, his collection of published papers on the origins of things as disparate as American Indian folklore and the Cinderella fairy tale, and his suggestions on other avenues of research. His article “Wet and Dry, the Evil Eye: An Essay in Semitic and Indo-European Worldview” is a definitive work on that superstitious belief and a prime example of a multiperspective approach to folklore material.
Additional works by Dr. Dundes that relate to this chapter: Interpreting Folklore, 1980, Indiana University Press; Sacred Narrative, 1984, University of California Press; The Study of Folklore, edited by Dundes et al., 1965, Prentice-Hall.
Two volumes highly recommended for their scholarship, readability, and breadth of material are: A Treasury of Superstitions, by Dr. Claudia de Lys, 1957, Philosophical Library (reprinted in 1976 by Citadel Press under the title A Giant Book of Superstitions). A noted social anthropologist, Dr. de Lys spent more than three decades assembling the origins of superstitious beliefs from around the world. Though her treatment of each subject is brief, it is nonetheless quite comprehensive on a particular superstition’s possible roots. For the lay reader interested in a single volume covering the majority of superstitious practices, the de Lys book would be the most satisfying.
The second recommended volume, of a still more popular nature, is Superstitious? Here’s Why!, coauthored by Dr. de Lys and Julie Forsyth Batchelor, 1966, Harcourt Brace. Aimed primarily at young readers, the book is a selective condensation of Dr. de Lys’s scholarly material.
The reader interested in curious worldwide folktales that underlie many superstitious practices can turn to Superstition and the Superstitious, Eric Maple, 1972, A. S. Barnes; Superstition in All Ages, Jean Meslier, Gordon Press, a 1972 reprint of the original 1890 edition; and the thorough Encyclopedia of Superstitions, M. A. Radford, revised by Christina Hole, 1949, Philosophical Library; 1969, Greenwood Press; 1975, Hutchinson, London. The Radford book is particularly insightful concerning the rabbit’s foot superstition and the significance of the rabbit and hare to early societies, especially with reference to the origin of Easter customs.
A general overview of American folklore, including home-grown superstitious beliefs, appears in American Folklore, Richard Dorson, 1959, University of Chicago Press.
Highly recommended in this area are two books by folklorist Jan Harold Brunvand: Study of American Folklore, 1968, Norton; and The Mexican Pet: Urban Legends, 1986, Norton. Brunvand, a former editor of the Journal of American Folklore, explores the reasons why myths and superstitious beliefs take hold of the human imagination. He is often concerned with the humanistic perspective to folklore material, stressing the tale and the teller for their inherent worth and enjoyment, but he also ventures into the psychoanalytic realm, exploring legends and beliefs that derive from primal and universal human fears such as suffocation, castration, and blinding.
Many superstitious beliefs have religious roots—the supposed good fortune deriving from a horseshoe, for instance, is linked to a legend surrounding St. Dunstan. In such cases, I have attempted to substantiate the folklore story, whole or in part, using two encyclopedic books on religious figures: The Oxford Dictionary of Popes, J. N. D. Kelly, 1986, Oxford University Press; and The Avenel Dictionary of Saints, Donald Attwater, 1981, Avenel Books. Each volume is exhaustive and fascinating in its own right. In this same vein, I found helpful Christianity or Superstition, by Paul Bauer, 1966, Marshall Morgan and Scott.
2 By Custom
This chapter concerns the origins of why we do what we do, and its content derives in large part from studies in folklore, cultural anthropology, and etymology, for the original meaning of a word (such as “honeymoon”) often says much about the tradition surrounding the practice it describes. I am indebted to Dr. Barbara Kirshenblatt-Gimblett, folklorist and chairperson of Performing Arts at New York University, for her many helpful references.
Through the influence of tradition we perform many actions so habitually that their origins and original significance are seldom, if ever, questioned. Dr. R. Brasch, a rabbinical scholar, insightfully investigates numerous human customs, from marriage practices to death traditions, in How Did It Begin, 1976, David McKay. His book, unfortunately out of print, is well worth a visit to a local library. Rabbi Brasch’s linguistics background—he is a student of twelve languages, among them Babylonic-Assyrian, Arabic, Syriac, and Persian—provides a firm foundation for many subjects explored in this chapter.
A comprehensive book to browse, from which I gleaned many facts and verified others, is Funk and Wagnalls Standard Dictionary of Folklore, Mythology, and Legend, edited by Maria Leach and Jerome Fried, 1980.
For birthday practices: The Lore of Birthdays, Ralph and Adeline Linton, 1952, Henry Schuman. For a fascinating history of the song “Happy Birthday to You” and the legal dispute surrounding its authorship and subsequent royalty rights, see New York Times, “Dr. Patty S. Hill of Columbia University Dies,” May 26, 1946; and Louisville (Kentucky) Courier, “Their Song Becomes a Universal One,” Rhea Talley, February 15, 1948. For a capsule overview of the song, see The Book of World-Famous Music, James Fuld, 3rd edition, 1985, Dover.
Additional information on the origins of birthday practices was obtained through personal communications with the research staff at Hallmark Cards.
Two highly readable books on a wide variety of human customs are: Why You Say It, Webb B. Garrison, 1953, Abingdon Press; and Why We Do It, Edwin Daniel Wolfe, Books for Libraries Press, a 1968 reprint of the 1929 original. An excellent overview of the origins of customs throughout the world is Curiosities of Popular Customs, William S. Walsh, 1966, Gale.
For marriage customs: Here I recommend that the interested reader borrow from something old—The Customs of Mankind, Lillian Eichler Watson, Greenwood Press, a 1970 reprint of the 1925 original; and something new—The Bride, by Barbara Tober, 1984, Abrams. Christian aspects of wedding practices are expertly presented by theologian John C. McCollister in The Christian Book of Why, 1983, Jonathan David Publishers. The volume, written in a question-and-answer format, provides concise explanations of how and why various customs arose in ancient times and persist into the present. In addition to marriage customs, McCollister, a university professor and Lutheran minister, examines the origins of sacred artifacts, modes of prayer and worship, and festivals and dietary laws.
A particularly thorough book on the customs of wearing and exchanging rings is Rings Through the Ages, James R. McCarthy, 1945, Harper & Brothers.
On regional New England practices: Customs and Fashions in Old New England, Alice Morse Earle, 1893, Scribner, reprinted by Charles E. Tuttle, 1973.
On Old World practices: Peasant Customs and Savage Myths from British Folklorists, Richard Dorson, 1968, University of Chicago Press, 2 volumes.
3 On the Calendar
Several excellent books exist that are devoted entirely to the origins of holidays. The three that I found most comprehensive, readable, and scholarly in the presentation of material are: The American Book of Days, George W. Douglas, revised by Jane M. Hatch and Helen D. Compton, 1978, H. W. Wilson Company. All About American Holidays, Maymie Krythe, 1962, Harper & Brothers. Celebrations: The Complete Book of American Holidays, by Robert J. Myers with the editors of Hallmark Cards, 1972, Doubleday. I am also indebted to the research staff at Hallmark Cards, Kansas City, Missouri, for a considerable amount of material on the origins of holidays, holiday foods and customs, as well as for figures on the numbers of greeting cards sold on major and minor observances.
Several holidays deserve particular note.
Mother’s Day: The committee of the International Mother’s Day Shrine in West Virginia was generous in providing material on Miss Anna Jarvis, the founder of the holiday. Additional information on Mother’s Day observances came from Public Broadcasting Television, Morgantown, West Virginia; and from WBOY-TV, Clarksburg, West Virginia. The National Restaurant Association provided figures on the number of families that eat out on various national holidays. Howard Wolfe, in Mother’s Day and the Mother’s Day Church, 1962, Kingsport Press, provided further insight into the life and ambitions of Anna Jarvis.
Thanksgiving: The best single source I located on the origins of this national holiday is Thanksgiving: An American History, by Diana K. Applebaum, 1984, Facts on File. Also helpful was The Mayflower, by Vernon Heaton, 1980, Mayflower Books.
Easter: While the first three books in this section give a rather comprehensive account of Easter and its traditions, a volume devoted entirely to the holy day is The Easter Book, Francis X. Weiser, 1954, Harcourt Brace. Additional Easter lore was provided by the PAAS Dye Company of Newark, N.J., founded in the 1870s and one of the first commercial ventures to market prepackaged powdered Easter egg dyes.
The history and significance of eggs in early times, particularly among the Egyptians, Phoenicians, Persians, Greeks, and Romans, are detailed in Easter Eggs, Victor Houart, 1982, Stephen Greene Press. An overview of the feast from pagan holiday to Christian holy day is contained in Easter and Its Customs, Christina Hole, 1961, M. Barrows Company.
Groundhog Day: An excellent article separating fact from fiction and locating the origin of this observance is “A Groundhog’s Day Means More to Us Than It Does to Him,” Bil Gilbert, Smithsonian, May 1985.
Bibliographic material on saints Patrick, Valentine, and Nicholas (the original Santa) is derived from The Oxford Dictionary of Popes and The Avenel Dictionary of Saints, op. cit.
Christmas: Christmas customs are numerous, diverse, and international and have been culled from many sources, including several of the above. In addition, the origin of the song “Rudolph, the Red-Nosed Reindeer” was provided by the Montgomery Ward department store chain, for which the original Rudolph story was written in 1939.
Three sources deal with Christmas in America: The American Christmas, James H. Barnett, 1954, Macmillan; Christmas on the American Frontier, 1800–1900, John E. Baur, 1961, The Caxton Printers; and to a lesser extent, A Treasury of American Folklore, B. A. Botkin, editor, 1944, Crown. These references indicate the religious resistance among early colonists to the festive observance of Christ’s birthday.
Also helpful in creating the section on Christmas customs were the excellent overviews presented in Christmas Customs and Traditions: Their History and Significance, Clement A. Miles, 1976, Dover; and 1001 Christmas Facts and Fancies, Alfred Carl Hottes, 1954, A. T. De La Mare. An international view of the holiday is found in Christmas Customs Around the World, Herbert Wernecke, 1979, Westminster Press.
A fine reference on the calendar and the origin of the week is The Seven Day Circle: The History and Meaning of the Week, Eviatar Zerubavel, 1985, The Free Press.
A highly readable and comprehensive book on holidays and customs, out of general circulation but still available in limited number from the publisher, is Days and Customs of All Faiths, by Howard Harper, 1957, Fleet Publications.
4 At the Table
This chapter opens with a discussion of the origins of etiquette, then proceeds to explore the evolution of eating with a knife, fork, and spoon, as well as such practices as blowing the nose. The classic, pioneering work that examines the links between such social graces and behavioral control is The History of Manners: The Civilizing Process, Volume 1, by the German-born sociologist Norbert Elias, translated by Edmund Jephcott and published by Pantheon in 1978 (the original appeared in 1939). Elias draws from a dazzling array of sources, including medieval etiquette and manners books, eighteenth-century novels, travel accounts, song lyrics, and paintings. Many of the quotations in this chapter I borrow from Elias.
A more readable, though no less scholarly, account of table manners is The Best of Behavior: The Course of Good Manners from Antiquity to the Present as Seen Through Courtesy and Etiquette Books, by Esther B. Aresty, 1970, Simon and Schuster. Both books are enjoyable, amusing, and highly recommended.
The opening remarks in this chapter on the decline in table manners in all segments of modern society are based on “Table Manners: A Casualty of the Changing Times,” William R. Greer, New York Times, October 16, 1985.
The early seminal books, by century, on which the information in this chapter is based are:
c. 2500 B.C., Egypt, The Instructions of Ptahhotep
c. 950 B.C., the writings of King Solomon and King David
c. 1000, Hebrew Household Books, the first writings on manners to appear in Western Europe
c. 1430, Italy, How a Good Wife Teaches Her Daughter and How a Wise Man Teaches His Son
Sources on specific manners include:
On nose blowing: The following three accounts treat the practice and detail acceptable standards of the day, and the last two shed light on the development and use of handkerchiefs (a topic covered in depth in Chapter 12): Fifty Table Courtesies, Bonvicino da Riva, 1290; On Civility in Children, Erasmus of Rotterdam, 1530; “On the Nose, and The Manner of Blowing the Nose and Sneezing,” The Venerable Father La Salle, 1729.
On dining manners: Fifty Table Courtesies and On Civility in Children, op. cit.
On the use of cutlery (revealing the gradual acceptance of the fork): “On Things to Be Used at Table,” The Venerable Father La Salle, 1729.
A history of the knife, fork, and spoon appears in Setting Your Table, by Helen Sprackling, 1960, Morrow; as well as in The Story of Cutlery, by J. B. Himsworth, 1953, Ernest Benn.
Concerning the section “Table Talk”: Why You Say It, Webb B. Garrison, and Why We Do It, Edwin Daniel Wolfe, op. cit. I have also consulted Thereby Hangs a Tale: Stories of Curious Word Origins, Charles Earle Funk, 1950, Harper & Row, Harper Colophon Edition, 1985. An excellent book that includes the origins of cutlery names and dinnerware is Word Origins and Their Romantic Stories, Wilfred Funk, 1978, Crown.
A reference for Wedgwood Ware is Entrepreneurs: The Men and Women Behind Famous Brand Names and How They Made It, by Joseph and Suzy Fucini, 1985, G. K. Hall. The book, whose title is self-explanatory, provides fascinating reading on scores of individuals and the products they created, with information presented in concise, encapsulated form. I employed this reference to double-check facts and flesh out information on several topics, including Culligan Water Softeners (Chapter 5), and Carrier Air Conditioners and Burpee Seeds (Chapter 6).
5 Around the Kitchen
In this and the following chapter, which deal with everyday inventions and gadgets found in the kitchen and throughout the home, I am indebted to the research staff at the Division of Patents and Trademarks of the U.S. Department of Commerce; to the National Inventors Hall of Fame, in Arlington, Virginia; and to individual companies that provided historical information on their products. Before providing specific references by product, I would direct the interested reader to four excellent general works, covering a wide variety of inventors, gadgets, and corporations.
The Fifty Great Pioneers of American Industry, Editors of Newsfront Year, 1964, Maplewood Press. Eureka! The History of Invention, edited by Edward de Bono, 1974, Holt, Rinehart & Winston. The Great Merchants: America’s Foremost Retail Institutions and the People Who Made Them Great, Thomas Mahoney and Leonard Sloane, 1955, Harper & Brothers. Pioneers of American Business, Sterling G. Slappey, 1970, Grosset.
On the origin and use of Teflon: Personal communications with the National Inventors Hall of Fame for material on Teflon’s inventor, Dr. Roy J. Plunkett. A detailed description of Teflon’s development is in “Polytetrafluoroethylene,” W. E. Hanford and R. M. Joyce, Journal of the American Chemical Society, Volume 68, 1946. Dr. Plunkett’s own description of his discovery appears in The Flash of Genius, Alfred B. Garrett, 1963, Van Nostrand. I also wish to thank the Du Pont Company of Wilmington, Delaware, for material on Dr. Plunkett and Teflon.
On the microwave oven: Personal communications with the Raytheon Company, Microwave and Power Tube Division, Waltham, Massachusetts. Two excellent accounts of the development of microwave cooking, one an article, the other a book, appear in “The Development of the Microwave Oven,” Charles W. Behrens, Appliance Manufacturers, November 1976; and The Creative Ordeal: The Story of Raytheon, Otto J. Scott, 1974, Atheneum. A recent account of the pioneering efforts that led to the discovery of microwave radiation for cooking is in Breakthroughs! by P. Nayak and J. Ketteringham, 1986, Rawson, Chapter 8.
On the paper bag: Personal communications with the Kraft and Packaging Papers Division of the American Paper Institute, as well as with the National Inventors Hall of Fame.
On the history and evolution of the friction match: Eureka!, op. cit. And material provided by the Diamond Match Company of Springfield, Massachusetts.
An invaluable book in assembling the material in this chapter and Chapter 6 was The Housewares Story, by Earl Lifshey, 1973, published by the National Housewares Manufacturers Association, Chicago. This fascinating volume details the early marketing of numerous household products (e.g., orange juicers, bathroom scales, kitchen stools) that for reasons of space I was unable to include. A highly recommended work.
The National Inventors Hall of Fame also provided information on Leo H. Baekeland and Bakelite; Charles Goodyear and rubber; Charles Martin Hall and his 1885 discovery of the electrolytic method of producing inexpensive aluminum, which eventually brought the metal into wide use; and Samuel F. B. Morse. Additional information on the choice of the distress signal SOS is from the International Maritime Organization, and from “Mayday for Morse Code,” Science, March 1986.
On plastics discussed in this chapter and Tupperware: Eureka!, op. cit., and Plastics: Common Objects, Classic Designs, by Sylvia Katz, 1984, Abrams. Ms. Katz covers the history of the plastics industry from the 1840s onward, detailing the material’s uses in decorative objects, combs, furniture, and toys. Another lively history of the subject is Art Plastic: Designed for Living, by Andrea DiNoto, 1984, Abbeville Press. Her text is geared toward readers with little knowledge of the scientific techniques involved in the manufacture of plastic. Material on the “miracle” plastic, nylon, is from personal communications with the Du Pont Company, and from Du Pont Dynasty: Behind the Nylon Curtain, by Gerard Colby, 1984, Lyle Stuart.
On blenders and food processors: Communications with Mrs. Fred Waring helped clarify many conflicting accounts of her husband’s involvement in the development of the Waring Blendor. Also of assistance in assembling this material was information from Oster and Hamilton Beach, and from Topsellers, by Molly Wade McGrath, 1983, Morrow. I am indebted to Dave Stivers, archivist of Nabisco, for directing me to Ms. McGrath’s book and for generously providing me with more material on products than I could possibly use in this volume. Also, New Processor Cooking, by Jean Anderson, 1983, Morrow.
On Pyrex: Material provided by Corning Glass Works, Corning, New York. (Also see references in Chapter 6 under “Glass Window.”)
On disposable paper cups: In addition to the above-mentioned general references on inventions, Why Did They Name It? by Hannah Campbell, 1964, Fleet Press. This is a gem of a book, highly recommended, and still available in limited number from the publisher in New York. Ms. Campbell provides entertaining histories of the brand names that have become an integral part of the American home. The book began as a series of articles published in Cosmopolitan magazine in the 1960s.
One final and excellent source covering a variety of gadgets found in the kitchen, bathroom, and around the home: The Practical Inventor’s Handbook, Orville Green and Frank Durr, 1979, McGraw-Hill.
6 In and Around the House
A delightful, informative book on the history and comforts of the home in Western culture is Home: A Short History of an Idea, by Witold Rybczynski, 1986, Viking. The book considers the home before the advent of electrical gadgets, after such convenience devices were introduced and proudly displayed as prestige acquisitions, then in modern times, when the decorating vogue has been a nostalgia for past simplicity in which “The mechanical paraphernalia of contemporary living has been put away, and replaced by brass-covered gun boxes, silver bed-side water carafes, and leather-bound books.”
As pertains to this chapter, Mr. Rybczynski paints a picture of home comfort and what it has meant in different times. He writes, “In the seventeenth century, comfort meant privacy, which led to intimacy and, in turn, to domesticity. The eighteenth century shifted the emphasis to leisure and ease, the nineteenth to mechanically aided comforts—light, heat, and ventilation. The twentieth-century domestic engineers stressed efficiency and convenience.” This general discussion has been fleshed out in detail, invention by invention, through a number of sources listed below.
On lighting the home, from oil lamps in prehistoric times to fluorescent tubes, I found the most detailed single volume to be The Social History of Lighting, by William T. O’Dea, 1958, Routledge & Kegan Paul, London.
On types of glass and glass windows: “A History of Glassmaking,” John Harris, New Scientist, May 22, 1986. This Is Glass, generously provided by the Corning Glass Works of Corning, New York, and published by the company. Also, Glass Engineering Handbook, E. B. Shand, 1980, McGraw-Hill. And “Safety Glass: Its History, Manufacture, Testing, and Development,” J. Wilson, Journal of the Society of Glass Technology, Volume 16, 1932.
Once again, an indispensable book on home convenience inventions is The Housewares Story, op. cit. Capsule descriptions of home inventions and inventors are found in the voluminous and entertaining The Ethnic Almanac, Stephanie Bernardo, 1981, Doubleday; a book that provides hours of fascinating browsing.
On brooms, carpet sweepers, and vacuum cleaners: An overview appears in The Housewares Story, op. cit. The story of the Bissell sweeper is told in Great American Brands, David Powers Cleary, 1981, Fairchild. Fabulous Dustpan: The Story of the Hoover Company, by Frank Garfield, 1955, World Publishing. Also on the vacuum cleaner: Everybody’s Business: An Almanac, edited by Milton Moskowitz et al., 1982, Harper & Row; a thoroughly entertaining book to browse. Additional material was provided by the Fuller Brush Company.
On the sewing machine and Elias Howe and Isaac Singer: Brainstorms and Thunderbolts: How Creative Genius Works, by Carol O. Madigan and Ann Elwood, 1983, Macmillan. The Patent Book: An Illustrated Guide and History for Inventors, Designers and Dreamers, James Gregory and Kevin Mulligan, 1979, A & W Publishers.
For a general discussion of the evolution of lawns, see “Points of Origin: From Flowery Medieval Greensward to Modern Canned Meadow,” Michael Olmert, Smithsonian, May 1983.
On the wheelbarrow: Everyday Inventions, M. Hooper, 1976, Taplinger; an excellent and comprehensive reference. And The Encyclopedia of Inventions, edited by Donald Clark, 1977, Galahad Books.
On rubber and the garden hose: Charles Goodyear, Father of the Rubber Industry, L. M. Fanning, 1955, Mercer Publishing Co.; Everyday Inventions, op. cit.; plus information provided by the B. F. Goodrich Company of Akron, Ohio.
On Burpee seeds: Personal communications, and the Burpee company catalogues, plus Entrepreneurs, op. cit.
On the lawnmower: Eureka! and The Encyclopedia of Inventions, op. cit.
7 For the Nursery
It would have been impossible to assemble the material for this chapter without two definitive references on nursery rhymes and fairy tales: The Classic Fairy Tales, Iona and Peter Opie, 1974, Oxford University Press; and The Oxford Dictionary of Nursery Rhymes, Iona and Peter Opie, 1959, Oxford University Press. The thoroughness of the Opies’ research has given them a virtual monopoly on this field of investigation; every additional reference I consulted on nursery rhymes and fairy tales expressed an indebtedness to the Opies’ works.
To flesh out the Opies’ material on many historical points, I consulted: Cinderella: A Folklore Casebook, edited by Alan Dundes, 1982, Wildman Press. Dr. Dundes provides a fascinating glimpse of this fairy tale in numerous cultures over many centuries. Jump Rope Rhymes, Roger D. Abrahams, 1969, American Folklore Society. American Non-singing Games, Paul Brewster, 1954, University of Oklahoma Press. Traditional Tunes of the Child Ballads, Bertrand Bronson, 1959, Princeton University Press. The Lore and Language of Schoolchildren, Iona and Peter Opie, 1967, Oxford University Press. The Interpretation of Fairy Tales, Marie-Louise von Franz, 1970, Spring Publications. And finally, another work by the Opies, Children’s Games in Street and Playground, 1969, Oxford University Press.
Additional material on The Wizard of Oz, Bluebeard, and Dracula is from Brainstorms and Thunderbolts, by Carol O. Madigan and Ann Elwood, 1983, Macmillan.
8 In the Bathroom
A word on Thomas Crapper: According to British popular legend, Thomas Crapper is the inventor of the modern flush toilet, and several early and descriptive Victorian era names for his invention were the Cascade, the Deluge, and the Niagara. Crapper is referred to in many popular histories of the bathroom, but scant information is provided on his background and invariably no sources are listed.
After months of research for this book, I was fortunate enough to turn up what has to be the original source of the Thomas Crapper legend—which appears to be a purely fictive legend at that, perpetrated with droll British humor by author Wallace Reyburn. Flushed with Pride: The Story of Thomas Crapper was published by Reyburn in 1969 in England by Macdonald & Co. and two years later in the United States by Prentice-Hall (now out of print). The book reads for long stretches as serious biography, but the accumulation of toilet-humor puns, double entendres, and astonishing coincidences eventually reveals Wallace Reyburn’s hoax.
In an attempt to shed light on this bit of bathroom lore, here are several references from Reyburn’s “biography” of Thomas Crapper from which the reader can draw his or her own conclusions.
Crapper was born in the Yorkshire town of Thorne in 1837, “the year in which Queen Victoria came to the throne.” He moved to London and eventually settled on Fleet Street, where he perfected the “Crapper W.C. Cistern…after many dry runs.” The installation of a flushing toilet at the royal palace of Sandringham was, according to Reyburn, “a high-water mark in Crapper’s career.” He became “Royal Plumber,” was particularly close with his niece Emma Crapper, and had a friend named “B.S.” (For another Reyburn hoax, on the bra, see References, page 439.) Reyburn’s book did not serve as a source for this chapter; the materials that did:
Highly recommended is Clean and Decent, by Lawrence Wright, 1960, Viking. The book begins with the Minoan achievements in plumbing and flush toilets and their use of wooden toilet seats. It details the Egyptian contributions, including stone seats, and traces plumbing developments through the accomplishments of British engineers in the eighteenth and nineteenth centuries. Wright’s book is thorough, and, revealingly, it makes no mention of Thomas Crapper.
Also of assistance in assembling material for this chapter were: The Early American House, by Mary Earl Gould, 1965, Charles E. Tuttle; “The Washtub in the Kitchen,” by Bill Hennefrund, September 1947, Nation’s Business; and Medical Messiahs: A Social History of Health Quackery in 20th Century America, James H. Young, 1967, Princeton University Press.
On the origins of the toothbrush, toothpaste, and dental practices: I am indebted to the American Dental Association for providing me with reprints of journal articles detailing the history of tooth and mouth care; particularly helpful was “The Development of the Toothbrush: A Short History of Tooth Cleansing,” Parts I and II, by Peter S. Golding, Dental Health, Volume 21, Nos. 4 and 5, 1982.
The Du Pont Company provided numerous articles from The Du Pont Magazine on the development of nylon and nylon toothbrush bristles; most enlightening were “A Personal Possession: Plastic Makes the Modern Toothbrush,” September 1937; “Introducing Exton Bristle: Dr. West’s Miracle-Tuft Toothbrush,” November 1938; and “Birth of a Toothbrush,” October 1951.
An excellent, highly recommended overview is found in Dentistry: An Illustrated History, by Malvin E. Ring, 1986, Abrams. This is a colorful account of dentistry from prehistoric times to the mid-twentieth century, enriched by excellent illustrations and photographs. Ring, a professor of dentistry at the State University of New York at Buffalo, focuses on the evolution of dental techniques from genuinely torturous procedures to modern painless ones, which are nonetheless dreaded.
Two excellent volumes on the history of false teeth are The Strange Story of False Teeth, John Woodforde, 1972, Drake; and Teeth, Teeth, Teeth, Sydney Garfield, 1969, Simon and Schuster. The Hagley Museum and Library of Wilmington, Delaware, provided me with excellent articles on the development of dentistry.
On the history of shaving, the razor, and the electric razor: Squibb, Schick, and Gillette provided material on their individual products, while the interested reader is directed to the following popular accounts: Great American Brands and Topsellers, op. cit.; both of these references also cover the origin of tissues. Also highly readable and informative on Gillette razors and Kleenex tissues is Why Did They Name It? Hannah Campbell, op. cit. Ms. Campbell devotes two chapters of her book to the development of items found in the bathroom.
On the origin of soap, in particular floating soap: I wish to thank corporate archivist Edward Rider of Procter & Gamble for providing me with a voluminous amount of research, as well as copies of early advertisements for Ivory Soap.
9 Atop the Vanity
Although many sources were employed to assemble the facts in this chapter, three works in particular deserve mention for their thoroughness and scholarship; one on makeup, one on hair, one on fragrances.
On the origin and evolution of makeup: The single best source I located is unfortunately out of print but available for in-house reading at New York City’s Lincoln Center Library: A History of Makeup, by M. Angeloglou, 1970.
On ancient to modern hair care, hair coloring, and wigs: The Strange Story of False Hair, by John Woodforde, 1972, Drake. Additional material was provided by personal communications with Clairol, and from statistics on hair coloring in Everybody’s Business, op. cit.
On the development of incense and its transition to perfume, then into an industry: Fragrance: The Story of Perfume from Cleopatra to Chanel, Edwin T. Morris, 1984, Scribner. Mr. Morris, who teaches fragrance at the Fashion Institute of Technology in New York City, provides a fascinating overview of the subject from its early days in Mesopotamia, where the most prized scent was cedar of Lebanon, through the French domination of the modern perfume industry. For the reader interested in pursuing the subject further than I have detailed, this book is highly recommended. Helping me extend the material into modern times in America was the Avon company. Two popular accounts of the development of Avon are found in Topsellers and Why Did They Name It? op. cit.
An excellent general reference for the development of combs, hairpins, jewelry, and makeup is Accessories of Dress, by Katherine M. Lester and Bess V. Oerke, 1940, Manual Arts Press.
Although The Encyclopedia of World Costume by Doreen Yarwood (1978, Scribner) is predominantly concerned with the origins of articles of attire, it contains a lengthy and excellent section on the history of cosmetics.
10 Through the Medicine Chest
This chapter more than any previous one deals with brand-name items; individual companies were contacted, and they provided material on their products. While I thank them all, I especially wish to single out Chesebrough (Vaseline), Johnson and Johnson (Band-Aids), Scholl, Inc. (Dr. Scholl’s Foot Care Products), Bausch & Lomb (contact lenses and eye care products), and Bayer (aspirin). Provided below are easily available sources for the reader interested in pursuing specific topics further.
On the origin of drugs: History Begins at Sumer, Samuel Noah Kramer, 1981, University of Pennsylvania Press. Dr. Kramer provides translations of extant Sumerian clay tablets that serve as the first recorded catalogue of medications. In addition: Barbiturates, Donald R. Wesson, 1977, Human Science Press; The Tranquilizing of America, Richard Hughes, 1979, Harcourt Brace; The Medicine Chest, Byron G. Wels, 1978, Hammond Publications.
An excellent reference for over-the-counter drugs common to the home medicine chest is The Essential Guide to Nonprescription Drugs, David Zimmerman, 1983, Harper & Row. Augmenting information in this volume, I used Chocolate to Morphine, Andrew Weil and Winifred Rosen, 1983, Houghton Mifflin.
The single most comprehensive book I located on the development of the art and science of pharmacy is Kremers and Urdang’s History of Pharmacy, originally published in 1940 and revised by G. Sonnedecker, with the 4th edition issued in 1976 by Lippincott. Highly recommended.
The Little Black Pill Book, edited by Lawrence D. Chilnick, 1983, Bantam, provides informative discussions of various classes of medicine chest drugs. For a fascinating account of the 1918 influenza pandemic (as mentioned in the section on Vicks VapoRub), see Great Medical Disasters, Dr. Richard Gordon, 1983, Dorset, Chapter 19; as well as Influenza: The Last Great Plague, W. I. B. Beveridge, 1977, Neale Watson Academic Publications; and The Black Death, P. Ziegler, 1971, Harper & Row.
Miscellaneous references throughout the chapter to folk cures are often from the monthly “Folk Medicine” column by Carol Ann Rinzler, in American Health Magazine.
11 Under the Flag
I am indebted to the Troy, New York, Historical Society for excellent research material on Sam Wilson, America’s original Uncle Sam. For the interested reader, I would recommend Uncle Sam: The Man and The Legend, by Alton Ketchum, 1975, Hill and Wang.
On the Boy Scouts: Much historical material was provided by Boy Scouts of America, headquarters in Irving, Texas. Also, The Official Boy Scouts Handbook, William Hillcourt, 9th edition, 1983, published by the Boy Scouts of America. The best single source on Robert Baden-Powell, British founder of scouting, is The Character Factory: Baden-Powell and the Origins of the Boy Scout Movement, Michael Rosenthal, 1986, Pantheon Books. Although the scouting organization has always denied that the movement was initially intended to prepare boys for military service, Mr. Rosenthal clearly illustrates that the “good citizens” Baden-Powell hoped to fashion were only one step removed from good soldiers. And while scouting’s founder insisted that the movement was “open to all, irrespective of class, colour, creed or country,” it is equally clear that racial prejudice often crept into Baden-Powell’s writings.
On Mount Rushmore: Historical material provided by the Mount Rushmore National Memorial, administered by the National Park Service, U.S. Department of the Interior. An excellent account of the origin and evolution of the monument is contained in Mount Rushmore, Heritage of America, 1980, by Lincoln Borglum (with Gweneth Reed DenDooven), son of the man who sculpted the mountain and who himself added the finishing touches upon his father’s death. It is issued by K.C. Publications, Nevada. A more detailed history is found in Mount Rushmore, Gilbert C. Fite, 1952, University of Oklahoma Press.
On American songs: A superb and definitive book on four tunes is Report: The Star-Spangled Banner, Hail Columbia, America and Yankee Doodle, by Oscar Sonneck, 1972, Dover. The book is fascinating in that it traces the lore surrounding each song and in a scholarly fashion separates fact from fiction. An additional reference on the origin of songs is The Book of World-Famous Music, by James Fuld, op. cit. Since its first publication in 1966, the book has been a monument in music scholarship, with Fuld painstakingly tracing the origins of nearly one thousand of the world’s best-known tunes back to their original printed sources. Long out of print, the book was updated by the author in 1984–85 and reissued by Dover in 1986. It makes for fascinating browsing.
Also used in compiling musical references in this chapter: American Popular Music, Mark W. Booth, 1983, Greenwood Press; and A History of Popular Music in America, Sigmund Spaeth, 1967, Random House.
On West Point: Material provided by the Public Affairs Office of the United States Military Academy. Also, West Point, issued by National Military Publications.
On the American flag: Though much has been written on the controversy surrounding who designed the country’s first flag, one highly readable and scholarly work is The History of the United States Flag, Milo M. Quaife et al., 1961, Harper & Brothers. The book dispels many “flag myths,” and in clear and concise fashion it lays out all the hard facts that are known about this early symbol of the Republic.
An interesting book that explains how “continents, countries, states, counties, cities, towns, villages, hamlets, and post offices came by their names” is The Naming of America, Allan Wolk, 1977, Thomas Nelson Publishers.
For this chapter I make one final recommendation: What So Proudly We Hail: All About Our American Flag, Monuments, and Symbols, by Maymie R. Krythe, 1968, Harper & Row. This one volume covers the origins of such topics as Uncle Sam, the American flag, the Statue of Liberty, the Lincoln and Jefferson memorials and the Washington Monument, and the White House.
I wish to thank the Washington, D.C., Chamber of Commerce and the Convention and Visitors Association for providing material on the history of the nation’s capital.
On the Statue of Liberty: Statue of Liberty, Heritage of America, Paul Weinbaum, 1980, K.C. Publications. I also wish to thank the Statue of Liberty–Ellis Island Foundation for generously providing me with historical material.
12 On the Body
Many separate items are covered in this chapter, and before providing specific references for each article of attire, I present several books that expertly cover the field.
The Encyclopedia of World Costume, by Doreen Yarwood, 1978, Scribner, is thorough and almost exhaustive on the subject of clothing. R. Turner Wilcox has written several books that provide detailed accounts of the origin and evolution of clothing: The Mode in Costume, 1958, Scribner; and in the same series, Mode in Hats and Headdress and Mode in Footwear. Also, History of Costumes, Blanche Payne, 1965, Harper & Row.
Pictorial sources were: What People Wore: A Visual History of Dress from Ancient Times to the Present, Douglas Gorsline, 1952, Bonanza Books; and Historical Costumes in Pictures, Braun and Schneider, 1975, Dover. Another excellent Dover publication is A History of Costume by Carl Kohler, 1963. Also of assistance was The Fashion Dictionary, M. B. Picken, 1973, Funk and Wagnalls.
On the necktie: Collars and Cravats, 1655–1900, D. Colle, 1974, Rodale Press. Also, “Part II: Accessories Worn at the Neck,” in Accessories of Dress, op. cit. This work also contains excellent sections on the origins of hats, veils, girdles, shoes, gloves, fans, buttons, lace, handbags, and handkerchiefs.
On off-the-rack clothing: While many of the above books deal with the subject, one highly thorough source is Fashion for Everyone: The Story of Ready-to-Wear, Sandra Ley, 1975, Scribner.
On the hat: In addition to the above general sources, The History of the Hat, Michael Harrison, 1960, H. Jenkins, Ltd.
Vogue publishes a series of books that I found helpful:
Sportswear in Vogue Since 1910, C. Lee-Potter
Brides in Vogue Since 1910, Christina Probert
Shoes in Vogue Since 1910, Probert
Swimwear in Vogue Since 1910, Probert
Hats in Vogue Since 1910, Probert
Lingerie in Vogue Since 1910, Probert
On the zipper: I am indebted to the people at Talon, Meadville, Pennsylvania, for loaning me the only existing copies of material on the development of the zipper; particularly their own A Romance of Achievement: History of the Zipper. I also wish to thank the Chicago Historical Society for information on the presentation of the zipper at the 1893 Chicago World’s Fair, and the B. F. Goodrich Company for information on early zipped boots and the origin of the name “zipper.”
Additional material on swimwear and the bikini was provided by the Atlantic City Historical Society and by Neal Marshad Productions (they allowed me to view an informative film, Thirty Years of Swimsuit History); and the National Archives provided reprints of newspaper articles featuring the nuclear bomb blast on Bikini atoll.
On the umbrella: The single best source I located is A History of the Umbrella, T. S. Crawford, 1970, Taplinger. It is comprehensive, covering the earliest known umbrellas, which were sunshades, in Egypt and India, and it traces the development of the article through periods of waterproofing, through eras when an umbrella was never carried by a man, and into relatively modern times, when a British eccentric made the umbrella an acceptable male accessory of dress.
For the interested reader in the New York metropolitan area, the single best source of information on clothing is the Fashion Institute of Technology in Manhattan; its collection of materials (costumes and books) is the largest in the world. With time and patience, any question on fashion through the ages can be answered from its resources.
On fabric: The Fabric Catalogue, Martin Hardingham, 1978, Pocket Books. This volume provides the origin and history of every natural and man-made fiber and textile.
On the tuxedo: I wish to thank the Tuxedo Park, New York, Chamber of Commerce for historical material on this article of evening attire, as well as the Metropolitan Museum Costume Institute (like F.I.T., an invaluable source of information for this chapter).
On jeans: In addition to several general references cited above that contain information on blue jeans, I wish to thank the Levi Strauss Company for material.
On sneakers: Nike, through personal communications, as well as Breakthroughs! op. cit., Chapter 10.
Another excellent collection of books on individual items of attire is The Costume Accessories Series, published by Drama Books. By item:
Bags and Purses, Vanda Foster
Hats, Fiona Clark
Gloves, Valerie Cumming
Fans, Helene Alexander
By the same publisher: A Visual History of Costume Series:
The 16th Century, Jane Ashelford
The 17th Century, Valerie Cumming
The 18th Century, Aileen Ribeiro
The 19th Century, Vanda Foster
Also, The History of Haute Couture, 1850–1950, Diana de Marley, Drama
13 Into the Bedroom
According to popular legend, the brassiere was invented in Germany by Otto Titzling, a name every bit as suspicious-sounding as that of the alleged inventor of the flush toilet, Thomas Crapper. And this is not surprising, for the “biographies” of both characters were penned by the same British author, Wallace Reyburn. Whereas the Crapper book is titled Flushed with Pride: The Story of Thomas Crapper (see References, page 432), the book on the bra bears the title Bust Up: The Uplifting Tale of Otto Titzling; it was published by Macdonald in London in 1971, and the following year in the United States by Prentice-Hall.
Did Titzling exist?
Surprisingly, Reyburn’s book is cited in the references of several works on the history of clothing and costumes. In no less serious a volume than Doreen Yarwood’s The Encyclopedia of World Costume (Scribner, 1978), the Reyburn work is listed uncritically as a source for information on “underwear” (although the spelling of Titzling’s name appears as “Tilzling”). After months of research, it became apparent to me that few people (if any) ever actually read Reyburn’s fiction-cum-fact, Bust Up. That can be the only explanation of why it has been taken seriously by many people, why it has been quoted in references, and why it has crept into folklore. After locating one of the few surviving copies of the book (in the New York Public Library’s collection of noncirculating books), I offer the reader several facts from Reyburn’s work that should dispel the Titzling bra myth.
According to Reyburn, Titzling was born in Hamburg in 1884 and he invented the bra to free a buxom Wagnerian soprano, Swanhilda Olafsen, from the confines of a corset during performances. One is inclined to believe Reyburn until he points out that Titzling was assisted in his design efforts by a Dane, Hans Delving. With Hans Delving, Titzling prepared a bra for Sweden’s greatest female athlete, Lois Lung. Suspicious-sounding names continue to accumulate as Reyburn recounts how Titzling sued a Frenchman, Philippe de Brassiere, for infringement of patent rights. If Otto Titzling, Hans Delving, and Philippe de Brassiere did exist and pioneer the bra, their names certainly deserve to be immortalized.
An excellent work on the origin and evolution of the bed and bedroom is The Bed, by Lawrence Wright, 1962, Routledge and Kegan Paul, London. It served as the basis for the opening sections of this chapter.
On clothing found in the bedroom: A detailed account of the development of socks and stockings throughout the ages is A History of Hosiery, M. N. Grass, 1955, Fairchild. Facts on more intimate attire are found in A History of Ladies’ Underwear, C. Saint Laurent, 1968, Michael Joseph Publishing; Fashion in Underwear, E. Ewing, 1971, Batsford; and A History of Underclothes, C. W. Cunnington, 1951, Michael Joseph.
On early bras and slips: Corsets and Crinolines, N. Waugh, 1970, Batsford.
Sexual facts and figures are from: The Sex Researchers, edited by M. Brecher, 1969, Little, Brown; and “20 Greatest Moments in Sex History,” Philip Nobile, Forum, May 1984.
On the Pill: “The Making of The Pill,” Carl Djerassi, Science, November 1984.
On word meanings: The Origin of Medical Terms, Henry A. Skinner, 2nd edition, 1961, Hafner; Christianity, Social Tolerance and Homosexuality, John Boswell, 1980, University of Chicago Press.
14 From the Magazine Rack
I am indebted to many magazines for providing me with historical background on their founding. Available for the general reader from Newsweek: “A Draft of History,” by the editors, 1983, and “Newsweek: The First 50 Years.” I wish to thank the research staff at TV Guide in Radnor, Pennsylvania, for considerable material on their publication.
Perhaps the single most definitive work on the development of magazines in the United States is A History of American Magazines, by Frank Mott, published throughout the 1950s and 1960s in five volumes by Harvard University Press. Mott provides a picture of the early struggles of periodicals in this country, and he details the births, deaths, and triumphs of hundreds of publications from the 1700s into the present.
Additional materials used in this chapter: “Lunches with Luce,” Gerald Holland, Atlantic Monthly, May 1971; “Time Inc.,” Edwin Diamond, New York Magazine, November 19, 1984.
15 At Play
An excellent starting point for the reader interested in pursuing the origins of various toys is Antique Toys and Their Background, by Gwen White, 1971, Arco Publishing. It covers every imaginable child’s toy, often in depth, and it contains an excellent bibliography.
Also of assistance to me in compiling this chapter were Toys in America, M. McClintock, 1961, Public Affairs Press; The Encyclopedia of Toys, C. E. King, 1978, Crown; Scarne’s Encyclopedia of Games, John Scarne, 1973, Harper & Row; and Children’s Games in Street and Playground, by Iona and Peter Opie, op. cit.
On firecrackers: An excellent volume is Fireworks: A History and Celebration, George Plimpton, 1984, Doubleday.
On dolls: Dolls, Max von Boehn, 1972, Dover; “The Case of the Black-Speckled Dolls,” New Scientist, November 1985. This article uncovers the mystery surrounding dark markings that often mar the faces of China dolls.
A discussion of children’s games popular in the Middle Ages, the Renaissance, and today is found in “Points of Origin,” Michael Olmert, Smithsonian, December 1983. The same author covers games of chance and skill in his “Points of Origin” column, Smithsonian, October 1984.
On the Slinky: Personal communications with Betty James, head of the Slinky company and wife of the inventor of the toy, Richard James.
An interesting book of fact and speculation on the origin and development of the Frisbee is Frisbee, Stancil Johnson, 1975, Workman Publishing.
16 In the Pantry
On ice cream: I wish to thank the International Association of Ice Cream Manufacturers, Washington, D.C., for providing me with facts and figures on the origin and development of this dessert. Their 1984 publication The Latest Scoop (available by request) contains a wealth of statistics on the consumption of ice cream worldwide.
The Missouri Historical Society provided material on the origin of the ice cream cone at the 1904 St. Louis World’s Fair.
At Nabisco, in Parsippany, New Jersey, corporate archivist Dave Stivers was of invaluable assistance not only on the subject of ice cream but also on cookies (Animal Crackers, Oreo), candies, and peanuts (particularly Planters).
The Portland, Oregon, Historical Society was of assistance in locating facts on the origin of the ice cream cone.
One final source on ice cream: An excellent overview of the subject is The Great American Ice Cream Book, by Paul Dickson, 1972, Atheneum.
The development of canned whipped cream is in The Flash of Genius, op. cit., under the chapter heading “Aeration Whipping Process.”
On the hot dog: Max Rosey, publicist for Nathan’s Famous, was of immense assistance in assembling the history of the sausage, as was Nathan’s rival, the Stevens Company. Both the Brooklyn Public Library and the Long Island Historical Society provided material on the introduction and sale of hot dogs at Coney Island.
On the potato chip: George S. Bolster of Saratoga Springs, New York, as well as the Saratoga Springs Historical Society, suggested material for this section.
I also wish to thank Heinz and Betty Crocker for articles on the origins of their companies and products.
I used, and highly recommend, four excellent books on food: Food, by Waverley Root, 1980, Simon and Schuster; this is a fascinating volume, presenting facts and lore about fruits, vegetables, and food preparations in alphabetical order. Also comprehensive in scope is The World Encyclopedia of Food, L. Patrick Coyle, Jr., 1982, Facts on File. And Food in Antiquity, Don and Pat Brothwell, 1969, Praeger; Food in History, Reay Tannahill, 1973, Stein & Day.
Finally, I wish to thank all the people at Telerep involved with production of the weekly television series The Start of Something Big; particularly Al Masini, Noreen Donovan, Rosemary Glover, Jon Gottlieb, and Cindy Schneider. Noreen, Rosemary, Jon, and Cindy were of great assistance in helping me compile information on the origins of about two dozen items in this book, which also appeared in various episodes of the show. Al Masini is simply the most thoughtful, humane, and scrupulous television executive I have ever encountered, and I thoroughly enjoyed working with him on creating and executing the show.