Wednesday, March 27, 2019

How We Protect Ourselves


Psychologists over the last few decades have had a great deal of fun demonstrating how irrational we generally are.  This arises from our evolutionary development and the need, as Nobel Prize winner Daniel Kahneman puts it in Thinking, Fast and Slow, to see the world coherently—we see it in a way that makes sense to us, enabling us to survive the world’s dangers.

This can lead to important mistakes, though.  We tend to see patterns when chance is at play; we attribute characteristics to people or places or things based on incomplete information; and we protect our understandings, particularly about how the world works, despite new information.

The names Kahneman (and others) give to these three biases—and they’ve found dozens more—are What You See Is All There Is (WYSIATI), the Halo Effect, and Confirmation Bias.  Let me illustrate them in turn.
When we see patterns where they don’t really exist, when we make unjustified connections between events, we are drawing conclusions with insufficient information.

Gary Smith, author of Standard Deviations, reports on how an octopus was supposed to predict the winners of matches at the 2010 soccer World Cup.  He announced his prediction by eating from a plastic box marked with the flag of his predicted winner, usually Germany.  Closer investigation disclosed that the invertebrate was really attracted to the brightness of the German flag, not the analytics of the German soccer team.  It couldn’t have hurt that the German team is always in the higher ranks of the sport, so the two chances together led a few to assign mysterious foresight to the Inkster.

Not investigating apparent patterns, including causal links, leads us to all sorts of questionable thinking.  We had a cold, snow-filled winter; therefore, global warming must be a myth.  Black men are over-represented in the prison population; therefore, they are inherently dangerous.  The free market encourages gross economic and social inequalities; therefore, it should be abolished.

Taking these cognitive leaps is easy.  Understanding that there are rational explanations for why the conclusions don’t follow from the premises requires skeptical inquiry, something we often lack the self-discipline or background knowledge to pursue.

Instead, it’s simpler to decide that What We See (on the surface) Is All There Is.
Similarly, when you like something about a person (or place or thing), you will likely assume other characteristics are also admirable, even though you may not have any actual evidence for this conclusion.  You might think someone highly intelligent and assume she’s also generous, when in fact she might well be self-centered and cruel.  Or worse.  Who would have ever suspected that beautiful actress Lori Loughlin could be a criminal?

Kahneman confesses to his own halo-effect guilt from his days grading student essays as a young professor.  He typically required students to answer two questions.  When they produced a strong answer to the first question but a weak answer to the second, he often gave the student the benefit of the doubt and graded the work higher than it deserved, a courtesy he withheld from students who reversed the thoroughness of their responses.  Once he detected this, he implemented strategies to grade the two answers separately and to keep students’ names off the answers themselves.

First impressions are key to the strength of the halo effect, which can cover up numerous sins.

Finally, our tendency to listen only to information that defends our closely held notions about the world has been confirmed frequently.  Tobias Greitemeyer, for instance, published a study in PLOS in which college students were given a short survey on whether video games caused avid players to be violent.  They were then given two summaries of fictional studies to read, one criticizing the notion, the other agreeing with it.  Students found their beliefs reinforced by the article that defended their original opinion.  The contrary information was ignored.  Consequently, presenting both sides of the debate actually caused the two groups’ opinions to become even more polarized.  Any temptation toward moderation was squelched, apparently.  Bad news for “fair and balanced.”

Why should you care?  For two reasons.  First, you want to be aware of your own biases in order to avoid filtering out important information about how the world works.  The snap judgment that may have protected a Neanderthal from a saber-toothed tiger might cause a policeman to shoot an unarmed black man.  It may fool us into emulating a beautiful person who is dishonest.  It may induce us to support a political candidate or program that in fact aims at goals opposed to our values.

Second, as a person writing to persuade, you are challenged to find ways to overcome your reader’s inherent biases.  Begin by realizing that simply dumping facts on her is unlikely to move her.

Find premises that you both have in common.  This approach goes all the way back to Socrates.  If you are writing to persuade conservatives, for example, start with the notion of freedom, and show how freedom leads, through a chain of reasoning, to the position you advocate.  Much will depend, of course, on how you define “freedom.”  Do a little reading to find out.  If, instead, you are writing for liberals, begin by advocating social justice and work your way from there to the conclusion you wish to reach, one consistent with the premise.  You will always be guilty of oversimplifying when making assumptions about what your audience thinks.  There is no help for this except better knowledge.  Knowing your audience’s base framework is obviously useful with this approach.

You also must frame arguments in a way that doesn’t directly challenge your reader’s world view, but that instead challenges the logic chain from premise to outcome.  Showing the absurdity of a position by taking it to its logical extreme, what is called reductio ad absurdum, is one way.  Another is describing what the future will look like if the path opposite to yours is taken, a “future fact” perspective.  Avoid exaggeration, though, or you’ll fall into the “slippery slope” fallacy.  You can also explain what consequences will result that your audience will want to avoid, or draw an analogy that explores the weaknesses of the opposing position and the advantages of your own.

These are just a few approaches that you might take to flank a reader’s defense mechanisms and get them to take you seriously.

Knowing how self-limiting we all are should, above all, give you a sense of your own as well as others’ limitations, and a humility when pressing any argument on someone else, be it a teacher, a friend, a parent, or a stranger.  That humility should lead you to respect your opponents before you consider their views.  As Aristotle recognized long ago, our credibility often hinges on our generosity toward our opponents more than on our encyclopedic knowledge or rhetorical sophistication.

Tuesday, March 5, 2019


Sentence rhetoric can be a product of grammar, as I’ll show you in a later post.  There are also rhythmic devices that were recognized by classical rhetoricians like Cicero and are still practiced by today’s statesmen and black preachers, as below, from a sermon Benjamin Mays preached in the 1960s:

Man shall not live by bread alone, but man must live by his dreams, by the goals he strives to reach, and by the ideals which he chooses and chases.  What is man anyway?  Man is flesh and blood, body and mind, bones and muscle, arms and legs, heart and soul, lungs and liver, nerves and veins—all these and more make a man.  But man is really what his dreams are.  Man is what he  aspires to be.  He is the ideals that beckon him on.  Man is the integrity that keeps him steadfast, honest, true.  If a young man tells me what he aspires to be, I can almost predict his future.

It must be borne in mind, however, that the tragedy in life does not lie in not reaching your goal.  The tragedy lies in having no goal to reach.  It isn’t a calamity to die with dreams unfulfilled, but it is a calamity not to dream.  It is not a disaster to be unable to capture your ideal, but it is a disaster to have no ideal to capture.  It is not a disgrace not to reach the stars, but it is a disgrace to have no stars to reach for.  Not failure, but low aim is the sin.

These two "paragraphs," written as you read them, were from a spoken sermon.  They feature the sort of rhythm I want you to incorporate into your writing when the opportunity presents itself.

Notice the repetitions:  the use of "by" phrases in the first sentence.  They are also part of a series.

Notice next the repetitions of the third sentence:  "Man is flesh and blood, body and mind, bones and muscle, arms and legs, heart and soul, lungs and liver, nerves and veins".  

This also features a series, but the similarity of each element’s structure is more exact than in the first sentence:  flesh and blood/bones and muscle/arms and legs, and so on.  This greater precision is a version of parallelism called isocolon.

Notice also that this second set of repetitions consists of physical traits that answer the question "What is man?"  Mays doesn't want us to be so restricted in our understanding.  So he sets up a contrast:  he states the physical in order to add the contrasting qualities, specifically "man is really what his dreams are."  His aspirations signal what kind of a person he is.

Mays begins his second paragraph with a series of antitheses.  This rhythm can be strictly formal, as in the first sentence, where it is called a chiasmus (kee-AS-mus); the others exhibit a looser structure.

Chiasmus indicates a pattern formed by the letter X (chi is the Greek letter X), in which the subject and direct object are inverted:

"the tragedy in life does not lie in not reaching your goal.  The tragedy lies in having no goal to reach"

This is the inverted structure of Kennedy's famous statement from his inaugural address:

"Ask not what your country can do for you; ask what you can do for your country!"

The first chiasmus is followed by three additional inverted antitheses that elaborate it.  See if you can diagram the inversions.
  • It isn’t a calamity to die with dreams unfulfilled, but it is a calamity not to dream. 
  • It is not a disaster to be unable to capture your ideal, but it is a disaster to have no ideal to capture. 
  • It is not a disgrace not to reach the stars, but it is a disgrace to have no stars to reach for. 
  • Not failure, but low aim is the sin.

The last sentence states a simple contradiction, not inverted, that summarizes the preceding chiasmi.

These rhythmic repetitions are not the normal rhythms of prose, which makes them stand out.  Consequently, you want to use them to emphasize especially important claims or conclusions you make in your writing.

One last repetition to notice, though Mays doesn't use it to great effect:
  • "chooses and chases"
  • "lungs and liver"
The repetition of words that begin with the same sound is called alliteration.  It, too, serves to emphasize the words, since outside deliberate use alliteration is a chance occurrence and rare in English.


Practice:  write a couple chiasmi on your chosen subjects, imitating the form given in Reverend Mays’s examples.

Then write a couple sentences that include series with isocolon parallelism.

Welcome to the world of classical rhetoric, alive and well in formal speech and writing today in presidential addresses and black preachers' sermons.  You'll find it elsewhere, too.  Sometimes writers use the form just because it's fun.

The forms I show you here are especially useful in writing exhortations, passionate pleas for change.  You can find them in any book or essay urging the reader to agree that the world is in desperate straits and needs to change.  Who can deny the need for exhortation, besides Pangloss?




Monday, January 28, 2019

Social Media: Addictive?

Everywhere you read that social media are “addictive.”  Simon Sinek in a viral video described Millennials’ online behavior this way a couple years ago, noting the high people get from the dopamine-induced rush.  Bailey Parnell, a year later, repeated the claim.

My students nod their heads in indifferent recognition:  So what?

First, clinical psychologist David Ley has debunked the notion of dopamine’s ability to addict us.  It’s a natural brain chemical necessary to normal human functioning.  So references to “addiction” to social media need to be thought of as metaphorical, not physically real.

Because the metaphor is so pervasive and persuasive, though, I think there is reason to abandon it altogether.  Again, as a writing instructor concerned about language and its ability to mislead as well as inform, I want to propose a more accurate frame.

Bill Davidow wrote an article for The Atlantic in 2013 titled “Skinner Marketing: We're the Rats, and Facebook Likes Are the Reward.”   Skinner doesn’t have the public persona he had when I was in high school and college, and his advocacy of stimulus-response psychology to shape human behavior is not on anyone’s radar these days.  However, his practice of “operant conditioning” is, as Davidow argues, the heart of our internet obsessions, not addiction.

Operant conditioning is a simple concept.  When someone rewards us for making targeted choices, we incorporate those behaviors into our automatic response system—they become part of us and our “natural” inclinations.  We learn.  We have been trained, like a dog to the sound of food being poured into its bowl or to the electric shock of the invisible fence.

Psychologists have worked with this technique long enough to be able to refine the most efficient frequency of the reward, called the “reinforcement schedule.”

Two things to notice about the operant conditioning social media subject us to.

The first is that we do not know we are being trained.  We haven’t given our permission; certainly we haven’t been warned of the intent of social media programmers to train our attention so that we find our reward in likes and posts and updates.

The second is that the goal of the reinforcement schedule is to lock in the chosen behavior, whether total clicks on the site or pages seen or time on the site or just visits.  The site will invite you to become a member, which requires data the provider can then sell to advertisers, along with your history of where you go and what you’re interested in, what entertains you.  That’s the point of the whole interaction:  Making Money.

Not only do you not know that you are being trained like a dog, but your personal information is being used to sell you something.

This is the crucial distinction between “addiction,” even as a metaphor, and “conditioning”:  we “become addicted;”  it’s passive both grammatically and behaviorally.  No one else is accountable for such an outcome.  It is the natural result of our behavior.  Since we exercise free choice, we are responsible, no one else.

On the other hand, others condition us, in this case without informing us or asking our permission.  Our agency, our “free will,” is taken from us in order for someone to make money off us.

This is the insidious side of social media.  That insidiousness is lost if we persist in describing our online behavior as addiction instead of conditioning.

Words matter.  We need to use them better so we can know the world better.

Orwell put it best:  “In our time, political speech and writing are largely the defense of the indefensible.”

We need to discern when that happens, and we can do so only when we use language thoughtfully.

Wednesday, January 23, 2019

Social Media and the Brain


The growth of ADHD diagnoses in our student populations has been the subject of much discussion.  Rather than looking for something in the water, most investigations have focused on over-diagnosis—an easy way out of behavior problems at home or school—or on increasing pressures on students in the wake of the high-stakes testing regime initiated by the George W. Bush administration.

At the same time there has been an explosion of books and studies on the effects of social media obsession and our multi-tasking technology environment on people’s brains and behavior.  Critics of this obsession have pointed to deliberate programming choices by social media companies designed to interrupt whatever their customers are doing so that they notice, read, and respond to changes in friending, liking, posting, sharing, tweeting, re-tweeting, trending, viraling.

Essentially they charge that the interruptions aim at conditioning users to respond compulsively.  The competition for attention is thus being won through the same psychological mechanism that induces gamblers to push a button to win a bucketful of quarters and rats to push one button rather than another to be fed. 

I’m inclined to ask the CDC and others to examine the electronic sources of ADHD rather than to blame doctors.  Faced with an “epidemic” in the midst of a doctor shortage, prescribing a pill as a diagnostic practice seems a reasonable way to proceed.  The evidence stares at college classroom teachers every day as we see students refuse to read assignments or instructions.  We attend to what we deem “relevant,” and school can’t compete with the available digital delights.

This is serious business, and other people have done an amazing job of predicting our predicament, so I’d rather they speak to you directly.
 
If you are a student, know that this is not a game one generation is foisting on another.  It is a high-stakes gamble by Facebook and Google and Amazon and others that interrupting  your life for profit is a social and personal good.  The evidence seems to be on the other side.

The irony, of course, is that much of the critique is in the form of books.  Just what you don’t want to read.  And this blog requires you to read.  Just what you don’t want to do.  And this blog will point you to those books you don’t want to read, as well as some TED talks that perhaps you’ll take the time to listen to.  They can at least plant a seed of unease in you.  And then allow the books to penetrate the defenses social media have helped you construct.

If the efforts of the critics turn out to be futile, then we are all in a lot of trouble.

Start with a TED talk or two:

https://youtu.be/Czg_9C7gw0o  argues for a self-critical engagement with social media.


Journalists may be the easiest to read and the most engaging, so start with these:

The Shallows, by Nicholas Carr.  Follow with his The Glass Cage, which moves from the personal to a more general discussion of the social and economic consequences of the long trend toward de-skilling work.

In the same vein, check Simon Head’s Mindless, which will show you where work is heading with the aid of “Big Data.”  Spoiler Alert:  your future in the new economy is at stake.

Or you can read Richard Sennett’s The Culture of the New Capitalism and a companion book, The Craftsman.  The effects of big data on your future are enormous if you're not prepared.

To understand exactly why reading is so important to your mental development and so antithetical to current online compulsions, in addition to The Shallows read these two by Maryanne Wolf, a cognitive psychologist at Tufts University:

Proust and the Squid:  The Story and Science of the Reading Brain


Reader, Come Home:  The Reading Brain in a Digital World

Finally, when you recognize that your online compulsions are another habit you’ve gotten yourself into, you need better advice than to give up social media cold turkey or to engage them with better self-awareness.

The best book I can recommend to help you break the habit and establish more constructive alternatives is Breaking Habits, Making Habits by Jeremy Dean.  He will explain that habits are difficult to break, requiring time and attention and work and a plan, one that often calls for substitute behaviors in order to succeed.  It’s honest yet practical.

My offering of these resources also points to why reading is important:  if you engage it with full attention, you can learn things that are important to your well-being.  You can learn how the world works and how you work.  You can gain some agency in a world that is increasingly reducing your scope for action, for being yourself, whatever that means to you.

Good luck!

Wednesday, January 2, 2019

Invention II

In my previous post I explained some of the many logical argument forms available to you that you probably aren’t learning in your college comp classes. These come from the classical development of rhetoric, and are categorized as topics or loci, from the Greek and Latin words for “place.”

The idea was that you, the orator or writer, would “find” or “invent” your arguments, which you would locate in common “places.” Eventually the argument forms, what I call “frames,” became known as commonplaces. The terminology derives from two notions: that the forms existed outside the specific content of any argument; and that, since most public arguments were delivered orally, they needed to be memorized.

A mnemonic device we now call a “memory palace” was conceived: the orator imagined a many-roomed palace containing groups of related arguments in each room. This placing of abstractions into imaginary rooms was how the orator would invent, or “come upon” (from the Latin invenire), argument frames he could use in his speech.

In turn he could develop another memory palace, this one assigning specific argument forms and content to specific “rooms,” with a floor plan that arranged the arguments according to the conventions of the time. Then, with practice, he could deliver a speech of several hours’ length in front of the Assembly or, in Rome, in front of the Senate or the law courts, from memory.

The argument frames, as I wrote in my last post, were gathered by Aristotle into two categories, deductive (logical) and inductive (by example). I’ve covered the deductive; now for the inductive.

Inductive arguments are often called arguments of experience, the kind of knowing captured by the word “empiricism.” We see them particularly in examples, which can come in different forms: illustrations and narratives, which in turn can be fables (fictional) or anecdotes (personal accounts).

 Frequently you will find it useful to offer an illustrative example to make clear what you mean by a term or claim. In poetic meter in English verse, lines form patterns of accented and unaccented syllables. The poet John Ciardi (pronounced Char-dee) explains a type of poetic “foot,” similar to measures in a line of written music:

The pyrrhic is a foot consisting of two unstressed syllables. It is possible to construct a theory of metrics in which the pyrrhic does not exist. In conventional metrics, however, it is impossible to scan certain lines without resort to the pyrrhic. A famous example occurs in Shakespeare’s much quoted line: 
 To MOR / row and / to MOR / row and / to MOR row
 The accented syllables are capitalized. Two feet, the second and fourth, have no accented syllable, and are illustrative examples of pyrrhic feet.

When you were writing five-paragraph essays in high school, though, examples served as evidence.  Unfortunately, as evidence, three examples are almost as inadequate as one.  Consequently, when my students used examples as evidence for a claim, they had to provide a plethora of examples.

 A simple illustration comes from Frederick Douglass in a speech he titled, “What, to the Slave, is the Fourth of July?”:
…Is it not astonishing that, while we are plowing, planting and reaping, using all kinds of mechanical tools, erecting houses, constructing bridges, building ships, working in metals of brass, iron, copper, silver and gold; that, while we are reading, writing and ciphering, acting as clerks, merchants and secretaries, having among us lawyers, doctors, ministers, poets, authors, editors, orators and teachers; that, while we are engaged in all manner of enterprises common to other men, digging gold in California, capturing the whale in the Pacific, feeding sheep and cattle on the hillside, living, moving, acting, thinking, planning; living in families as husbands, wives and children, and above all, confessing and worshipping the Christian’s God and looking hopefully for life and immortality beyond the grave, we are called upon to prove that we are men! 
Martin Luther King, in “Letter from Birmingham Jail,” offers a similarly overpowering catalog of examples of why “We can’t wait!” for equality, as though it were a gift from white America and not a right. The two works, the first a speech, the other a written rebuttal to his critics, are separated by 111 years but make the same demand for dignity using a plethora of examples in their defense.

You are attempting with a plethora to forestall any temptation by your reader to oppose that mountain of examples with a counter-example. Against the ant-hill of three examples, the counter-thrust would be sufficient, but not against the mountain. We argue, remember, not for certainty or the absolute; we argue for the probable and the for-the-most-part.

A similar problem plagues the anecdote. It is, after all, only one example, regardless of how powerful it is to you. It is better, in my view, to use anecdotes to personalize your argument and encourage a positive emotional response in your reader rather than consider it sufficient reason for a claim.

I recently received a research paper draft from a student who argued that waitresses and waiters didn’t need the income protection of a higher minimum wage, but instead would earn what their energy and attention and personality deserved through tips. She cited an ark full of facts, but the paper was listless. I suggested that she instead refer to the situation of her employer, the owner of the restaurant where she worked, and write summary stories of what the owner faced day after day to keep the business profitable. The student excelled beyond my hopes, making for an enjoyable read that lost none of its data-driven arguments in the process.

While a Roger-level anecdote—one that tells of a single incident in a specific time and place—evokes a stronger reader response, there will be times when you will want to summarize repeated incidents. Such a summary argues that similar behaviors meet with similar outcomes, and moves toward communicating the plethora you need to be convincing.

Stories—and descriptions, another inductive frame—require some sort of explanation of their significance to your argument, what is traditionally called “exposition.” Brent Staples does all three in the first paragraphs from his essay “Just Walk on By”. Find a PDF of the essay here.

 Arguing from example can be a powerful way to defend a claim if done with gusto. Just don’t forget that a lone example, used to clarify or illustrate what you are writing about, is a useful tool as well.

Monday, November 12, 2018

Invention I



Invention is the discovery of arguments, according to classical rhetoric.  The word comes from the Latin invenire, “to come upon.”  The premise behind the notion is that, while the facts of the issue at hand vary from case to case, the forms of the arguments available to the rhetor, whom I’ll call the writer, are similar from case to case.

Those forms, what I call frames, are categorized into two logical approaches, inductive and deductive.  Inductive frames are those of the example:  instances, stories, descriptions; and statistics, which generalize those Roger-level phenomena.

Deductive frames are derived from syllogisms.  They take the form of if this is true, then that must also be true.  The if clause (among many options) signals a premise, the then clause the conclusion.

Several frames are found in the deductive box of arguments.  Contraries, comparisons, analogies, and part-for-whole arguments require analytical reduction into components.  If A has these parts, and they correspond to B’s parts, then we can speak of A by speaking of B.  The analogy is akin to the symmetric property of equality in arithmetic:  if A = B, then B = A, so I can speak of A by speaking of B.

Other frames for deductive argument include definition:  If justice means “getting what you deserve,” then …  Certain conclusions can be asserted to follow from that definition:
1.  You should be compensated for good work (be it flipping hamburgers, acing tests, closing sales).
2.  You should be free to vote if you’re an American citizen.
3.  You should be treated with dignity.

And so on.

Similarly, if you belong to the category American Citizen, you enjoy certain guaranteed rights, not all of which may be enjoyed by non-citizen residents, though in our case the rights reserved for citizens are few.  Whether a person enjoys those reserved rights depends on whether that person belongs to the category “American Citizen.”  This frame is traditionally called “genus–species,” or category–member.

Other deductive frames include Contraries (“If obsession with social media causes anxiety disorders, abstention from social media will calm the mind”) and “More and Less,” or what we might call a fortiori, Latin shorthand for a fortiori argumento, “from the stronger argument” (“If a high school graduate has trouble finding a good-paying job, how much more trouble will a high school dropout have?”).  We use Antecedent–Consequence and Cause–Effect as we do premise–conclusion.  Past Fact/Future Fact involves moving to the origins or history of the issue at hand and extrapolating into the future if current trends continue.  There are others, but let this compilation suffice.

The advantage of such a catalog of argument frames?  It gives you resources for how you present your case.  It makes possible your figuring out not only what to state but how to state your arguments.  It goes beyond reasons and examples, allowing you to be interesting as well as informative and persuasive.  I would argue that being interesting is the most efficient route to being persuasive.  And the vehicle for being interesting is variety.

The point I want to make here, though, is that there exist a multitude of ways to frame arguments that have been available to you for over 2,000 years.  But you’d never know it by reading a typical college composition textbook, which is focused on the writing process.

I continue the discussion of invention in my next post, with an overview of inductive frames.

In my YouTube videos I discuss many of the more popular frames, both deductive and inductive.  If you need more examples, check those out. The link for my channel is on the left side of this blog home screen.

Additionally, you can purchase Martin Luther King Teaches Rhetoric to see the variety of frames King uses in his "Letter from Birmingham Jail." 


Saturday, September 29, 2018

Make “Transition” Meaningful

Just as we ask students to edit their writing in order to make it more concise and precise, occasionally it behooves us as professional English teachers to review the language we have inherited and often enlarged.

Our notion of transition is one such concept.  Haziness about what it is has resulted in a category that has no relation to what the word means.  On the premise that precision is a virtue to practice and not only to preach, I propose to chip away at the accretions to arrive at something simple and useful.

The word originates in the Latin verb transire, to cross from one place to another, or to change places (Online OED).  Not to stay in one “place.”  Not to “connect” or “add” or “compare” or any of the other functions in Purdue OWL’s encyclopedic list.  Those constructions, in a less grammar-phobic age, used to be properly called adverbial clauses.  No, transition means simply to change from one locus or place to another.

To its credit, Purdue OWL’s presentation admits early on that weaving key words from one paragraph into a subsequent one is an established method of connecting paragraphs.  (I call this “chain stitching”; see https://youtu.be/KP3q3H5bnf8 for an analogical demonstration.)  It is useful to know that writers of non-fiction perhaps don’t use transitions as often as we think.  However, I think the student who wrote the page lost track of what the word means, what the concept includes.  (“However” is a transition, from approval to disapproval.)

The category, it seems to me, should include those words that move us from one subject or argument or viewpoint to another, if we are to be consistent with the word’s meaning.  That would embrace contradiction (nevertheless, however, although and its cousins, still, but and yet, on the other hand, and so on) and chronology (earlier, later, subsequently).

Moreover, we want to distinguish those adverbs that signal continuation or elaboration of an argument in process (moreover, indeed, furthermore, additionally) from the words that mark a genuine transition to a different argument.

Furthermore, I question the need for elaborate categorization—it becomes something else to learn, instead of something encountered through reading arguments and learned tacitly as well as through modeling.  Better to call them adverbs as a general class if we need to call them anything, if for no other reason than to keep our vocabulary a more useful model of clarity and precision.

We ask it of our students.  We should be prepared to respond in kind.