Responses to American Poetry

The aim of this online space is to host the research work of university students or young scholars as this emerges from larger projects focusing on the American poetry scene. The objective of this initiative is to bring this kind of research activity to the attention of the general public in an attempt to further promote the exchange of ideas with regard to the process of reading, understanding and appreciating poetry writing.

  

Tatiani Rapatzikou 
(Associate Professor, School of English, Aristotle University of Thessaloniki, Greece; Advisor and initiative co-ordinator trapatz@enl.auth.gr)

 
Pamela Beatrice

The Whitman Algorithm

Kate opened the new coffee she had ordered based on Amazon’s recommendation, and it smelled rich and nutty—wonderful, really. As she scooped the specialty beans into the coffee maker, she thought through her latest algorithm, which had begun as a side project to her CIS1 thesis; she hoped it would become an additional chapter rather than a failed experiment.

The need for fairer and more accurate algorithms is a ubiquitous concern, and it had consumed her recently. Machine learning models and other algorithms have become so complicated that their designers can no longer determine how they will predict certain cases2. Almost daily, it seemed, there was some new example of an algorithm that either got things terribly wrong or was still too underdeveloped to be useful. Some time back the New York Times Sunday Book Review had included a book review written by a robot—one not terribly sophisticated—and she had enjoyed reading it3. Thinking back further to when she’d decided to go into computer science, a story had come out about an algorithm used to evaluate candidates for software engineering positions that penalized any resume containing the word “women’s”. While that particular algorithm came to a rather sudden end, it did make her question why she had gone into this field.

Opening her laptop, Kate paused to look at the photo of her parents on the desk. Kate’s father had taught English while her mother was the mathematical, logic-oriented one. Her father loved poetry in general, but had an affinity for Walt Whitman and often lamented that society was losing a capacity for inclusiveness and acceptance (never a strong point for human society in the first place) that the poet seemed almost to revel in. “I hear America singing, the varied carols I hear,” one of her father’s favorite lines, seemed almost understated next to the expansiveness of Whitman’s work4. It made her wonder how far one could go in defining an inclusive and fair algorithm, particularly since an algorithm is a set of steps to sort or classify data or otherwise optimize an answer to a specific question. Sorting out data is what it is supposed to do! And humans sort data (even Whitman must have done so) all the time–sometimes it’s called judgment. It’s just that humans can’t process data as fast, or process nearly as much of it, as a computer.

Kate enjoyed developing algorithms, and that made her think about her classmate Hans. Hans had joined her program after spending a year at ETH5, and he totally fit in with the macho culture of the department. He looked like one of those guys who bikes thirty miles through the Alps—or whatever the most rugged terrain happened to be wherever in the world he found himself—before coming into the lab each morning. Ever confident, he was working on creative algorithms for autonomous vehicles. She could picture him creating a startup that would be bought out by Tesla. And he was making significant progress on reducing computation time, memory requirements and cost while improving accuracy, as tested with multiple sets of training data. Hans knew his stuff, and he had lately further burnished his reputation in the department by setting up a new seminar series that showcased both experts and graduate students in panel discussions on hot topics. The department chair loved the seminar concept. Kate had initially been wary of Hans, but recently they had had a couple of helpful, even friendly discussions about their work.

What separated Hans’ work from many of the other graduate students’ topics was that he actively considered safety in his algorithms, from both a practical and an ethical view. In one of his presentations to the lab group he talked about the famous “trolley” problem: a trolley car is going to crash, and the operator must decide whether to crash into pedestrians, hit the stalled bus in front of the trolley, or force the trolley into the cement wall bordering the track, killing the passengers. One of the ethical issues is whether one should value each life equally (fairness and respect to each person) or whether the minimum loss of life should be the deciding factor (overall societal outcome). Then there was the question of whether the riders6 of the trolley should carry more weight, since presumably the trolley company has a major investment in the safety of its riders. This type of problem frustrated her. Here we are putting all the emphasis on autonomous vision, and we’re ignoring the well-established sensor technology that could easily be designed to give a warning within braking distance of any obstacle. Furthermore, couldn’t we decide to build sidewalks and curbs near any tracks that would minimize the damage of a trolley careening off the rails? Somehow a more holistic approach, one with redundancies, was not going to hold much sway in the face of the faster, better, more glamorous results of phenomenal computational speed and accuracy.

It was those recent debates with Hans that had made her more determined to design her Whitman algorithm as she had. And coffee in hand, she was about to run it in an example scenario for the first time. She reviewed her logic one more time. 

Her approach was to run the Whitman algorithm in parallel with a given algorithm, compare the resulting subsets at a key decision point in the analysis, and, if the difference was greater than some predetermined percentage, expand the criteria to be more representative of the original data set. The original algorithm could be checked and adjusted at numerous stages. She was testing this out using two different resume-evaluation training data sets and two popular algorithms currently in use. The plan was solid.

But before she could run her first scenario, she tensed up. The immediate problem with her model was that it took more computation time and therefore had a cost—she could only hope that, if the result was a better selection of final candidates, the added cost would not be considered fatally detrimental to her Whitman algorithm.

She suddenly wondered what Walt Whitman would think of this effort. She instinctively realized he probably wouldn’t sweat this; he might revel in the attempt and even sing of it. She decided she shouldn’t sweat it either. As a matter of fact, Walt might simply think it quite fine–the right thing, really–that she was testing out her theory no matter how it worked out. He would accept the results for what they were, whether promising or misguided. Isn’t that what a good scientist, or perhaps any decent person, should do—look at the data without bias?

Her phone buzzed. OMG—it was Hans!

His message asked if she could talk. She thought to herself, Talk? Not just message? She wasn’t sure what to make of it. Maybe it was a good thing; maybe he was genuinely interested in talking with her. Then her skeptical side crept in. Maybe he was simply going to ask her to pick up a few dozen pastries for his seminar after-talk tomorrow, since her apartment was just a block from his favorite bakery (a French bakery, of course). Or maybe he did want to talk about something more than a favor. She decided she shouldn’t sweat it—but he could wait a bit before she answered.

Kate took a sip of coffee. It tasted okay, but it had a slightly bitter aftertaste, and that only made her laugh. Feeling unexpectedly lighthearted, she opened her Whitman Program V1 and hit RUN.  

 

WORKS CITED

Kearns, Michael, and Aaron Roth. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford UP, 2019.
Kissinger, Henry A., Eric Schmidt, and Daniel Huttenlocker. The Age of AI: And Our Human Future. Little, Brown and Company, 2021.
Roose, Kevin. “A Robot Wrote This Review.” Review of The Age of AI: And Our Human Future, by Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocker. The New York Times Sunday Book Review, 12 Dec. 2021, p. 17.
Scharding, Tobey K. “Recognize Everyone’s Interest: An Algorithm for Ethical Decision-Making about Trade-Off Problems.” Business Ethics Quarterly, vol. 31, no. 3, 2021, pp. 450-473.
Whitman, Walt. “I Hear America Singing.” Poetry Foundation, https://www.poetryfoundation.org/poems/46480/i-hear-america-singing. Accessed 14 Feb. 2022.


FOOTNOTES

1 CIS is Computer Information Systems.

2 Kearns and Roth 11.

3 Kevin Roose’s review of Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocker’s The Age of AI: And Our Human Future in the New York Times Sunday Book Review on Dec. 12, 2021. The software-produced review included: “The book which you are reading at the moment is a book on a nook, which is a book on a book, which is a book on a subject, which is a subject on a subject, which is a subject on a subject,” p. 17.

4 Whitman, line 1.

5 ETH Zürich (Swiss Federal Institute of Technology), Zürich, Switzerland, a highly ranked science and engineering university.

6 See Scharding, “Recognize Everyone’s Interest: An Algorithm for Ethical Decision-Making about Trade-Off Problems.”

Contributor Bio: Pamela Beatrice

              © Poeticanet