Before I begin I must correct an oversight in last week’s The Reader Is Always Right. After reading, my father reminded me of something my grandmother used to say that she learned in the 1940s and 1950s working at Hutzler Brothers in Baltimore, which was: “The customer is not always right, but the customer is always the customer.” She had the spirit. Now had anyone told her, in her role as customer, that she might not be right about something—hell to pay.
Well, anyhow. Welcome to the part of the year where it’s always dark. When I used to commute to an office and commune with other office workers, this was the week during which we would complain to one another that it was dark when we got there, and dark when we left, and it felt like we never saw the sun. Now I work from my home, so I do not arrive here and I do not leave and I still never see the sun. I try to remind myself that time is a construct and so am I. That helps.
Today’s Shelf Life is about reading levels and how to assess the reading level of a text, any text, including a text you have written. I guess I’ll also talk a bit about why one might want to do such a thing—assess their own writing to understand its level of reading complexity.
When I’m writing I usually have the intended audience somewhere in my mind. Not as an image of people but as a concept of a population: for instance, “young adults,” or “adulty adults,” or “people like me.” I suspect we all know, though, this is not a good sense of the audience. What do I mean by “people like me”? Forty-year-old white women from the mid-Atlantic United States? College-educated editors in the life and social sciences? People with the same collection of likes and dislikes as me? Those populations each represent a wide range of comfortable reading levels.
Sometimes you will hear or read things like, “most American adults read at or below an eighth grade level.” Those statements are usually oversimplified and you shouldn’t trust them. There are a lot of ways to measure an individual’s proficiency at reading comprehension and a lot of ways to measure the complexity of a text. I’m going to begin with the Program for the International Assessment of Adult Competencies (PIAAC).
PIAAC is a massive program to come up with a scale that measures literacy, numeracy, and digital problem-solving in an adult population. Several dozen countries collaborated on the scale and it is used in many international measurement instruments. Notably it was used in the Organisation for Economic Co-operation and Development’s (OECD’s) Survey of Adult Skills.
Respondents to the survey were scored on a proficiency scale of 1 (low) to 5 (high). In the literacy section those levels (briefly) mean:
Level 1: Readers can find one piece of information in the text that is worded exactly the same as in the question.
Level 2: Readers can sift relevant information from irrelevant or draw low-level inferences.
Level 3: Readers can locate information using inference or may be asked to locate several discrete pieces of information from different parts of the text.
Level 4: Readers perform multiple-feature matching and locate, integrate, or contrast information found within a lengthy text with distractors based on an abstract request.
Level 5: Readers can locate information in a dense text with multiple plausible distractors and make high-level inferences.
That’s a lot of background to tell you that about 33 percent of adults in the United States are proficient at Level 3 literacy, with about 13 percent at Levels 4 and 5 and the remaining (approximately) 54 percent of the adult US population reading at Levels 1 and 2. For what it’s worth, the US is slightly ahead of the average of participating countries and closest in score to Austria and Germany. This international survey was conducted in 2017 and is much more recent than the National Assessment of Adult Literacy, which measured only people in the United States way back in 2003.
All this is to say that much of the time when I see statements like “X percent of adults in the United States read below a Y-grade level!” the data to back it up comes from the NAAL, which is not the most recent sample and which, confusingly, does not correspond to school grade level. I see a lot of media pointing at the NAAL and the Department of Education broadly, but there’s no official correspondence between PIAAC proficiency score and US school grade level. At least not that I can find.
As far as adults go, PIAAC seems to be the gold standard for measuring literacy. That doesn’t translate into how easy or complex a text is to read, though. Literacy scores apply to people, not texts, and give an estimate of what types of reading tasks a person—or the adult population of the United States, in the aggregate—can do.
Next you’ve got your Flesch-Kincaid readability tests, which were developed in the 1970s to measure the ease or complexity of a text. These scores, unlike the PIAAC scores, apply to texts and not to people. There are two of them:
The Flesch Reading-Ease test, which uses total words, total syllables, and total sentences in a passage as variables in an equation that quantifies the difficulty of the passage as a numeric score (higher is easier to read; lower is harder to read); and
The Flesch-Kincaid Grade Level formula, which uses the same factors in a passage as variables in a different equation to derive a score in the form of a US school grade level. (Both formulas are sketched out below.)
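If you’re curious what those equations actually look like, here’s a rough sketch in Python using the published coefficients. The word, sentence, and syllable counts are whatever you can tally up for your passage; the example counts at the bottom are invented purely for illustration.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores mean easier text
    (90+ is very easy; 30 and below is very difficult)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: same inputs, different weights,
    and the result maps (roughly) to a US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A made-up passage: 100 words, 8 sentences, 140 syllables
print(flesch_reading_ease(100, 8, 140))   # about 75.7 (fairly easy)
print(flesch_kincaid_grade(100, 8, 140))  # about 5.8 (roughly 6th grade)
```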
This, y’all, is where the disconnect comes from. Flesch-Kincaid pegs the complexity of texts to US grade level, an easy scale for readers to understand because it typically corresponds to child age: Most of us can probably picture a second-grader (an approximately seven-year-old child) in our head versus an eighth-grader (an approximately thirteen-year-old). When the media wants to talk about how bad we are, as a society, at reading, it’s more sensational to throw a grade level out than to explain the PIAAC literacy score, as I have done above.
Instead of readers responding, “oh gosh most Americans read at a PIAAC level 1 or level 2” they are responding, “oh no most Americans are reading at the same level as a nine-year-old child!” and that’s much more dramatic and alarming.
However you’re measuring reading comprehension level, I want to take a moment to explicitly state what I think goes without saying: A person’s maximum ability to comprehend text is not the level at which they are most comfortable reading, at least for most people. For example, I might flog myself through an experimental or complex text for any number of valid reasons, but that isn’t comfortable pleasure reading for me. If I’m reading a textbook to absorb a lot of information or making myself read a Nobel Prize winner’s oeuvre for some kind of personal betterment, I can do it. But for fun I’d prefer to read something that is easier to absorb most of the time.
Okay so anyway, Flesch-Kincaid is not the only readability test in town. There’s a bunch. Another important one to know is the Automated Readability Index (ARI), which is similar to Flesch-Kincaid but instead of using words, syllables, and sentences in a formula it uses characters, words, and sentences. The equation then produces a score that roughly corresponds to US grade level.
The difference in measuring syllables versus characters comes down to what’s easiest for a computer to measure. The ARI was designed to calculate readability in real time, as text is being drafted.
ARI is looking at the average characters per word and the average words per sentence, while Flesch-Kincaid is looking at the average syllables per word and the average words per sentence. From my perspective as a non-expert in readability science, I suspect there’s an edge to using syllables instead of characters when working with developing readers who are more likely to sound words out when reading. For adult readers, I imagine there’s less advantage to one or the other.
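For comparison, here’s the ARI equation in the same sketchy style. Again, these are the published coefficients; “characters” means letters and digits (not spaces or punctuation), and the example counts are invented.

```python
def automated_readability_index(characters, words, sentences):
    """ARI: characters, words, and sentences go in;
    a rough US grade level comes out."""
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

# The same made-up passage: 470 characters, 100 words, 8 sentences
print(automated_readability_index(470, 100, 8))  # about 7.0 (roughly 7th grade)
```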
Computers are more advanced all the time, though, and figuring out syllables isn’t really that hard. You can get a computer to check your text’s reading level in Flesch-Kincaid, ARI, and a number of other readability indexes just by pasting it into a calculator. For instance, Readability Formulas has a free online calculator that will take up to 3,000 words and give you scores using seven different formulas plus a consensus.
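And if you’d rather not paste your manuscript into a website, libraries will do the counting for you locally. Here’s a minimal sketch using the Python textstat package, assuming you’ve installed it; the passage is a placeholder for whatever you want to score.

```python
import textstat  # pip install textstat

passage = "Paste the passage you want to score here."

print(textstat.flesch_reading_ease(passage))          # higher = easier
print(textstat.flesch_kincaid_grade(passage))         # approximate US grade level
print(textstat.automated_readability_index(passage))  # also an approximate grade level
```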
I pasted in some text from a young adult fantasy novel I had handy and it got a rating of 10th grade. A snippet from Hemingway’s The Old Man and the Sea: 6th grade. Today’s Shelf Life through the end of this sentence: 11th grade. (No 11th-grader would read this far, it’s too boring.) Several pages from “The Wife of Bath’s Tale” from The Canterbury Tales (in a modern English translation): 8th grade. An excerpt from Anna Karenina: Also 8th grade. A random Newsweek article: 10th grade.
That’s mostly all stuff intended for adults to consume. The most challenging fiction of the ones I checked was the only one intended specifically for young adults.
Readability (especially as measured by a formula) isn’t the only measure of what age group can or should read a text. None of these tests, for example, measures
Level of abstraction
Complexity of concepts
Idea density
Use of figurative language (personification, metaphor, imagery)
Content areas that exceed the reader’s life experience
Consider: Animal Farm and Anna Karenina both returned an 8th-grade reading level but I’d only hand one of them to an 8th grader.
I think I’ve given a good overview of how to check your own writing for readability and enough information about adult literacy in the United States for you to think about where you might want your writing to fall in terms of readability based on who you hope will read you.
Is this something to think about while you’re drafting? Not during your first draft, in my opinion. Get the concepts out of your brain and onto paper. That’s all that matters in your first draft. But as you revise and shape your manuscript into something you plan to ultimately share with readers, consider what you’re offering, what your reader wants from it, and what readers in general want when they pick up a text.
If the purpose of your writing is to communicate information, and that’s what your reader expects from your text, then they will put up with a less-readable text than if they were reading for entertainment. Readers’ comfortable literacy levels are still going to vary widely—there’s no one correct level that will be perfect for your entire audience—but, relatively speaking, readers who are reading to understand and absorb information are more willing to read slowly and to go back over a sentence they didn’t understand the first time. There’s still a sliding scale: I’d put up with less readability in bicycle assembly instructions than in a textbook, and less in a textbook than in a self-help book.
If you’re writing to entertain, there is just less wiggle room to get it right. A text that is too complex will put off readers who don’t want to expend too much energy for their leisure reading. A text that is written with very straightforward, simple language will have to tell a very compelling story to stay interesting to an adult reader.
I’ve noticed that a lot of the YA reading I do has more complex prose than a lot of the adult novels I read. Unscientifically I think teenage readers may have more energy to put into reading (they have more energy for everything) and I think they want the challenge more than adult readers do. Voice also drives a lot of the writing intended for younger audiences which, combined with the unfamiliar settings you find in spec fic, can make getting into a new book more challenging. A Clockwork Orange blindsiding you with Nadsat from page 1, paragraph 1 is a good example.
There’s a fine line between “I want to know more” and “I don’t have time for this” that readers walk every time they start a new book. As I get older I lean more and more toward not having time. My takeaway: your text needs to either be very comfortably readable or start with an immediately interesting scene. Otherwise I will give up and move on. I’m too old for this.
Okay that’s it that’s all I got. I hope you see the sun between now and—checking the calendar—March 2022. March. I’ll never make it. Send a sun lamp.
If you have questions that you'd like to see answered in Shelf Life, ideas for topics that you'd like to explore, or feedback on the newsletter, please feel free to contact me. I would love to hear from you.
For more information about who I am, what I do, and, most important, what my dog looks like, please visit my website.
After you have read a few posts, if you find that you're enjoying Shelf Life, please recommend it to your word-oriented friends.