Photos for Orlando Arena Project

  • Amway Arena: steel construction
  • Orlando Arena construction
  • Orlando Arena groundbreaking
  • Orlando Arena opening, 1989


Getting My Hands Dirty

Starting the Project

In an effort to get my project off the ground, I have started organizing my data and have begun to “play around” with TimelineJS.

Organizing the data simply means that I am properly filing, dating, and titling the articles that will drive this “discourse analysis meets digital history” project.

Screenshot: filing the data

For this project I will draw mainly from Orlando Sentinel news articles acquired via ProQuest to chart the construction of identity in Orlando as expressed through the Magic basketball franchise and the original Orlando Arena.

It is essential, then, to organize the articles properly in order to convey this transformation over time.

In playing around with TimelineJS, I have started to understand a bit better how the tool works, how I’ll be able to incorporate my data, and how I’ll be able to convey a narrative study in a visually intriguing and digitally transparent way.

Initial TimelineJS Learning

TimelineJS provides directions that are easy to understand and simple to follow.

Screenshot: Step 1 of making the timeline

First, I had to open the Google Spreadsheet template provided by the Knight Lab at Northwestern University (the creators of the tool).

Then, I went in and made some edits to see what it might look like:

The TimelineJS Google Spreadsheet template
Making some edits to the template

Just to experiment, I started off with a quote from one of the Sentinel articles, which describes Orlando’s “blanket apathy” toward pro sports. I used that phrase as a “Header” and a longer quote as a caption.

Then, I inserted an image of all of Orlando’s failed pro-sport franchises — again, just as an experiment to see how the incorporation of images and quotes might appear in the timeline.
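
To make the mapping concrete, here is a rough sketch (in Python, purely as illustration) of how that experimental entry lines up with fields in the spreadsheet template. The column names reflect the template as I understand it, and the quote text and image URL below are placeholders rather than the real values.

```python
# One timeline entry, mapped to columns from the TimelineJS
# Google Spreadsheet template (column names as I understand them).
entry = {
    "Year": 1986,                  # rough date of the Sentinel article (placeholder)
    "Headline": "Blanket apathy",  # the short quoted phrase
    "Text": "A longer quote from the Sentinel article would go here...",
    "Media": "https://example.com/failed-franchises-collage.jpg",  # placeholder image URL
    "Media Caption": "Logos of Orlando's failed pro sport franchises",
}

# Print the row the way it would read across the spreadsheet.
for column, value in entry.items():
    print(f"{column}: {value}")
```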

A collage of the logos of failed pro sport franchises in Orlando. This image was found on Google Images and serves to relate to the quote that, pre-Magic, Orlando was not enthusiastic about pro sports.

Next, I continued following the provided directions by “publishing the timeline to the web.”

Publishing the timeline to the web
Saving the Project in my own Google Drive

Continuing to go step by step, I then copied and pasted the URL (provided to me once I “published to the web”) into the box on TimelineJS’s site (you can see it in the image below next to Step 3, where it says “Google Spreadsheet URL”).

Screenshot: Steps 3 and 4 in TimelineJS

When I pasted the URL, an “embed code” was generated in the box you see next to step 4.
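
Out of curiosity, here is a rough sketch of what that generated code boils down to: the embed is essentially a link to Knight Lab’s viewer page with your spreadsheet’s ID passed along as the `source` parameter (you can see the same pattern in my own draft link near the end of this post). This is just my reading of the URL format, not official documentation, and the spreadsheet ID below is a placeholder.

```python
import re

# The "publish to the web" URL that Google gives you (placeholder ID).
published_url = "https://docs.google.com/spreadsheets/d/EXAMPLE_SPREADSHEET_ID/pubhtml"

# Pull the spreadsheet ID out of the URL.
match = re.search(r"/d/([a-zA-Z0-9_-]+)", published_url)
sheet_id = match.group(1) if match else None

# Rebuild the sort of viewer URL that appears inside the embed code.
embed_url = (
    "https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html"
    f"?source={sheet_id}&font=Default&lang=en&initial_zoom=2&height=650"
)
print(embed_url)
```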

Now, the directions on TimelineJS say to “copy and paste it on your site where you want your TimelineJS to appear (just like a YouTube video).”

This confused me just a little bit, because I have embedded video and images before, on sites like Wikipedia and my fantasy football/basketball pages. But that process of embedding was a bit different from doing the same thing in WordPress, as I would find out. In those instances, embedding meant making sure that all the ‘symbols’ were correct: proper placement of “<” and “[{ }]”, etc. Basically (when working with something like Wikipedia), you just pay attention to how something on that site is “coded” or embedded, and paste your URL in that same format.

I googled “How to embed in WordPress” because the site itself does not really explain the process.

Screenshot: googling how to embed in WordPress

You can imagine how dumb I felt when I learned that all you have to do is copy and paste the code into the text box.

Now, part of me thinks this may be somewhat incorrect. Shouldn’t I be able to interact with the timeline if it is “embedded”? Copying and pasting produced a link that takes the user to the site where they can interact with the timeline, but I thought that embedding meant the user could interact with the video/image/timeline right there on the page, not via a link.

Copy/pasting the embed code into my WordPress text box

Either way, I was able to see my creation via the link, and it looks great!

The rough rough rough rough draft of my project, but cool to see how it might look!

https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1HHPOIpEUxuCfZfe124wN5iXgLEQu8aui5Rx2EyAU2uc&font=Default&lang=en&initial_zoom=2&height=650

Again, it is just a concept at this point and needs a lot of work, but even this little bit helped me familiarize myself with the tool, its capabilities, and potential issues. It allowed me to see what my project might look like, and right off the bat I can tell that I might need to rethink some things, like providing more interpretation for the quotes I borrow.

In the coming weeks, I’ll be going through this same process, but with much more progress made. Hopefully, over time, my project will develop and the story will become clear!

Being ASSERTive: Asking a Question, Seeking an Answer

Being ASSERTive

In his 2013 book Interactive Visualization: Insight Through Inquiry, digital scholar Bill Ferster defined a method, summarized in the acronym ASSERT, to help those aspiring to develop their own digital projects.

Interactive Visualization (MIT Press, 2013)

Ferster challenged these digital hopefuls to:

  • Ask a Question
  • Search for Data
  • Structure the Information
  • Envision the Answer
  • Represent the Visualization
  • Tell a Story Using Data

This blog post will outline my own plans to be ASSERTive this semester, and hopefully in doing so will convey the question I’ll ask, the answers I seek, and the method by which I will search for insights.

Asking a Question

My question draws from my thesis research, broadly asking: How does the planning and construction of the Orlando Arena between 1986 and 1989 represent an intersection of race, politics, economics, sport enthusiasm, and urban sprawl? Perhaps the question can also be asked another way: What sociopolitical and cultural elements developed leading up to the Orlando Arena’s construction, and how did the intersection of these elements influence the location, planning, and design of the structure?

The original Orlando Arena, circa 2007. Note the strategic layout of gardens and fountains.

The question will seek to engage with urban historical theory that analyzes physical structures as mediums for cultural expression. What did the Orlando Arena express for and by different communities within the city?

Specifically, the question seeks to answer: Why was the arena placed in the African-American neighborhood of Parramore, and what happened to the neighborhood and its residents as a result?

The Parramore Historic District. Note the sizable amount of land taken up by the arena and its parking lots (upper right corner, just below Lake Dot). Also note the highways which intersect through and shape the neighborhood – this will be explored in the “Why Parramore” aspect of the question.

Searching for Data

The search for data thus far has been mostly successful. My thesis is largely a discourse analysis, in which I analyze discursive sources with an intention to interpret their meaning. In a way, this approach represents a branch of intellectual historical theory, in which the words and thoughts of people (not necessarily elites, but anyone whose thoughts have been recorded) are interpreted for meaning. In regard to the Orlando Arena and the franchise it hosted, the Orlando Magic, the best available source is ProQuest. ProQuest is a digital newspaper archive (which also archives other documents, like dissertations, theses, and periodicals) that hosts archived issues of the Orlando Sentinel dating back to 1983. At least for those sources related specifically to the Orlando Magic and the Orlando Arena, both of which came to fruition in the late 1980s, ProQuest offers plenty of discursive sources relevant to the topic, including “Letters to the Editor” columns, which collect the direct words of ‘the people.’

A snapshot of ProQuest’s Database List. Note the Orlando Sentinel: a bit odd to see the Sentinel as one of 26 databases, next to the New York Times and National Newspapers Core, but I’m not asking any questions!

I then compare this discourse with historical fact: When did the arena begin construction? Where was it ultimately located? How many people did it dislocate? How many people did it employ? How much of an “economic impact” did it make? These sources are largely governmental, and are a bit harder to come by. Some of the hard facts are also in the paper — several articles discuss urban dislocation, total cost, economic impact, etc. However, I much prefer to go directly to the source — what governmental records exist that can tell me such information?

To my surprise, some of the governmental documents exist in local history centers and in libraries across the state. For example, I was able to procure the “Orlando Arena Development of Regional Impact Report” series from the University of Florida’s library. These documents detail the estimated outcomes and implications of the arena for the local economy, residents, and built environment.

Cover page of the “Development of Regional Impact” report submitted by arena planners to the city in 1987

However, dealing with the City of Orlando has been quite difficult to this point. From my discourse analysis I can tell that there are a number of documents that must exist somewhere, and because many of them were city initiatives, they must be in the city’s archives and records. My feeling, though, is that the city is giving me the run-around a bit. Regardless, the project should sufficiently answer the proposed questions even drawing only from sources already procured.

More data is needed on Parramore’s history, and especially the development of Interstate 4 and the East-West Expressway, both of which were precursors to the kind of ‘urban bullying’ represented later by the arena. I have seen this data in local history centers, but need to make appointments to scan PDFs and develop this aspect of my personal data bank.

Structuring the Information

The structure of the information will most likely develop into three categories, serving to differentiate certain elements while also allowing them to intersect and influence each other the way they did in real time. These categories will become the three layers of a multi-layered timeline. They are, broadly:

  • What was happening in Parramore
  • What was happening with the Magic/Arena
  • What was happening in city/county political circles

All of these categories will be driven by the discourse found in newspaper articles. The story will follow what historical actors wrote (and in turn, thought) about the Parramore neighborhood, the Magic and Arena, and the politics surrounding both.
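
As a rough sketch of how this structure might translate into the tool: TimelineJS’s spreadsheet template includes (as far as I can tell) a “Group” column that splits entries into separate bands, so each of the three categories above could simply become a group label attached to every article-driven entry. The rows below are placeholders, not actual data points, written in Python only to show the shape of the data.

```python
import csv

# Placeholder events, one per category; real entries will come from the
# Sentinel articles in my data bank.
events = [
    {"Year": 1986, "Group": "Parramore",     "Headline": "Placeholder Parramore event"},
    {"Year": 1987, "Group": "Magic / Arena", "Headline": "Placeholder arena event"},
    {"Year": 1988, "Group": "City politics", "Headline": "Placeholder political event"},
]

# Write rows that could be pasted into the TimelineJS spreadsheet template.
with open("timeline_rows.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Year", "Group", "Headline"])
    writer.writeheader()
    writer.writerows(events)
```

If the “Group” column behaves the way I expect, each label should appear as its own band in the finished timeline, which is exactly the layering effect I am after.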

As I write this, I can tell the structure may need work and will most likely develop over time — but this is the preliminary structure as of now, and it at least serves to propel the project forward.

Envisioning the Answer

While I will do my best to remain open and let the research continue to influence me up until the very last word of my thesis is written, I must be honest in saying that “the writing is on the wall,” so to speak. The answers to my questions seem clear to me as of now, and I’ll try to be brief here in summarizing them:

The Orlando Magic conjured incredible enthusiasm for pro sports in a town that felt insufficient in regard to its identity and national relevance. To procure a franchise, the team needed an arena (quickly). To make this happen, Orlando and Orange County politicians sidestepped standard legislative processes to make the arena a reality. The arena was built in Parramore both for its proximity to downtown and the fact that the residents there had been historically rendered politically and economically impotent. The people of Orlando created a “community identity” and developed “civic pride” around the team, while the arena served to further disrupt a subjugated Parramore community and dislocate residents.

Representing the Visualization

I will work with a tool developed at Northwestern University called TimelineJS. TimelineJS allows for the incorporation of photos, video, and audio, and lends itself well to layering.

“The Banana Business”: An example of what TimelineJS can do.

Tell a Story Using Data

If the questions are answered with the data gathered, and represented visually in a way that is transparent, stimulating, and informative, then the story should tell itself!

Best,

Garrett

Reaction to the Reading: Inventing the Medium by Janet Murray

The cover of Janet Murray’s Inventing the Medium

In Inventing the Medium: Principles of Interaction Design as a Cultural Practice, author Janet H. Murray states:

“One of the most important characteristics of humanistic inquiry is that it accommodates multiple frameworks of interpretation…and is strengthened by an expanded palette of possibilities” (1).

Indeed, the humanities lend themselves to multifaceted interpretation. As we are learning this semester, digital approaches expand the possibilities of interpretation and conveyance even further.

However, digital approaches are still in their infancy according to Murray, and those in the humanities who opt to utilize digital tools “are dealing with an immature medium, which is much more diffuse and has much cruder building blocks at its disposal than a mature medium like print” (3).

Murray, a Georgia Tech professor of digital media, is undoubtedly correct in this regard. However, as crude as the immature medium may be, the digital fluency of most scholars—even those younger scholars regarded as ‘digital natives’—is far more crude.

In beginning my own digital projects, I have encountered several issues which take much time to work through. Granted, it is always possible to learn and figure out the answers to problems encountered, but the point is that creating something digitally is indeed more difficult than putting pen to paper.

However, as Murray notes, the digital medium offers benefits that print simply cannot, such as stimulation, interaction, visualization, immersion, etc. Going further, Murray argues that “inventing a medium is a collective process as old as human culture,” drawing parallels between digital humanities projects and the “representational drawings or symbolic markings on tally sticks or engraved stones” found in many ancient cultures (13, 15). In other words, exploring the potential for new forms of cultural expression (and, indeed, historical recording) is an intrinsic part of being human. It is fitting, then, that the charge toward digital mediums is led by humanists.

According to Murray, digital humanists must work to understand and convey the vast potential that digital tools offer. Just as “wood…affords carving and burning; blackboards afford writing and erasing…the door handle affords pulling,” digital tools afford real and tangible uses and outcomes (60). However, as mentioned, many simply do not know these uses and outcomes, and are thus tentative about engaging with them altogether. “We need to know not only the results of a command but also the causal relationship: such as why a digital music player has chosen a particular song, a search engine has prioritized certain sites over others, or a computer operating system has suddenly caused all my working windows to disappear” (60-61).

The takeaway for me is that despite whatever trepidations I may have regarding digitization (trepidations born not out of disregard or contempt but rather confusion and a sort of ‘digital fear’), I have to buck up and jump in headfirst, as do all humanists. Still, this immersion must be accompanied by coursework that focuses not only on the doing but also on the how. How else are humanists expected to learn about the causal relationships Murray describes? Knowing why a music player chose a particular song is heavily algorithmic and may be completely foreign to most humanists (it certainly is to me).

However, simply saying that something is difficult to understand is not justification for bypassing a medium altogether.

My thought is that the more humanists engage with digital mediums, the more they will understand and develop projects. This frequency in project production will prompt younger scholars to demand coursework that broadens their studies from strictly humanities, to humanities and, say, coding or algorithm-centered mathematics. After all, humans did not know how to carve in wood without first experimenting and teaching. Furthermore, if the STEM push holds up and the so-called “attack on humanities” persists, then digital humanities is further guaranteed to be the path humanists need to travel (if it isn’t already). Imagine a world of historians who can program and code, can design and create, and can convey and interpret a selected history—the merger of two schools of thought is the fruition of Murray’s vision. The medium would then be invented and developed.

For the purposes of this course and my own project, I have to continue my mantra of “one step at a time.” First (and quickly), designate a topic that is archival according to the standards set by both UCF and the course. Next, choose a digital tool. Lastly, develop a coherent project.

During the process, I hope to at least begin the kind of causal understanding Murray references. While I doubt I will leave the course as an algorithm whiz, I am certain that I will begin to understand why certain commands do what they do, and that’s a good start.

Best,

Garrett

Bob Marley, Google Maps, and A First Attempt at Hands-On Digital History

I can only imagine that what I have experienced the past few days is a microcosm of what digital historians go through for years when working on a digital history project.

First, I struggled to decide on a digital approach (mapping, text mining, etc.). Next, I struggled to find a digital tool that I wanted to work with — Google My Maps, Google Fusion Tables, StoryMap JS, etc. Finally, I kept switching back and forth between topic ideas. What would work best with mapping? What history do I know well enough to convey digitally? What am I really trying to say here? Are my results actually conveying a history, or just plotting points on a map?

Needless to say, the whole process was extremely frustrating, and it has me feeling (and I’m just being honest here) that digital history, for me personally, might be too much of an uphill battle to engage with at this point in my career. All in all, it took me days to make a project that is not nearly as historically communicative as an article on the same topic would be. However, I am very pleased with how interactive it is, and I imagine that an article would not reach or entertain nearly as many people. In the end, I’ll continue to give this course an honest go, trying to learn as much as I can about digital tools in the hope that something, somewhere, will ‘click’ and make sense to me.

Choosing an Approach

Text mining is not my thing. There’s too much data to input, and all of the textual sources I have are PDFs, which don’t always play nicely with text mining software. Timelines are very cool, but I struggled to find timeline tools that seemed easy enough to work with for a first attempt.

Ultimately, mapping made the most sense for me. Maps are fun and cool to look at, and I thought that creating a map would produce a way for users to travel the globe in a click. Furthermore, I find it interesting to trace the journeys of historical actors as they move. Whether athletes from team to team, generals from battle to battle, or animals from habitat to habitat, spatial movement is a cool thing to follow.

Choosing a Specific Tool

Once I set my sights on mapping, I tried to decide among OpenStreetMap, StoryMapJS, and Google’s My Maps. OpenStreetMap was too confusing for me. I didn’t quite grasp how I could use it effectively, and I quickly gave up on it. StoryMapJS had some great options, or so it seemed. I loved the idea of making a gigapixel experience, but it seemed like way too much work to figure out in a short time span. The StoryMap mapping option sounded good, but ended up just looking like a PowerPoint. I tried and tried and tried, and yet I could not figure out a way to use StoryMapJS for anything other than a slide presentation:

Screenshot: my StoryMapJS attempt

Due simply to ease of use and understanding, I settled on Google My Maps. The next issue was coming up with a story I could tell.

Research Topic

My research topic at UCF has mostly been geared toward sport — sport and identity, sport and urban dislocation, etc. So naturally, I began this project with the intention of revealing something about sport. The objective this week was to make an honest attempt at engaging with a digital tool, and preferably in the field we work in. I’m hoping to do my project in this class either on sport stadia or UCF history, depending on which archives yield the best returns. However, at this stage in the game, I’m not ready to begin on either of those projects.

So all I tried to do this week was get my hands dirty and see what’s out there. I figured it would be best if I focused on a topic that I already know really well. That way, I wouldn’t need to conduct too much research, and I could focus on the tools themselves by drawing from information I already have.

My first thought was Central Florida sport — I know that, I’ve conducted research on it, so why not? The only issue is that it doesn’t really lend itself to mapping, at least in any way I could think of. Besides, I wanted something that was more global.

I moved on to an idea about MVPs (Most Valuable Players) in American professional sport history. I wondered if there might be something in pinpointing where MVPs had their origins. I decided to look at three major American sporting leagues (the NBA, NFL, and MLB) to plot where league MVPs had come from. I guess my hope was that I would see some sort of interesting trend or grouping that could tell me something. I got pretty far in that project:

Screenshot: plotting MVP origins
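
For anyone who wants to try something similar: Google My Maps can import a CSV file and drop a pin for each row based on a place-name or address column. Below is a minimal sketch of what such a file could look like; the player names and hometowns are made-up placeholders, not real MVP data.

```python
import csv

# Placeholder rows: each MVP gets a name, league, and hometown that
# Google My Maps can geocode when the CSV is imported.
mvps = [
    {"Name": "Player A", "League": "NBA", "Hometown": "Example City, FL"},
    {"Name": "Player B", "League": "NFL", "Hometown": "Example City, TX"},
    {"Name": "Player C", "League": "MLB", "Hometown": "Example City, CA"},
]

with open("mvp_origins.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "League", "Hometown"])
    writer.writeheader()
    writer.writerows(mvps)
```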

But then I realized I wasn’t really saying anything, or at least it didn’t feel like it. “Okay,” I thought, “so a lot of football players come from Texas and Florida. Is that supposed to be news?”

I decided to jump ship, but I didn’t know where to turn. And that’s when I remembered Bob.


Bob Marley and My Reggae Roots

Not a lot of people know this about me, but for a long period of my life I was extremely interested in the career of Reggae musician Bob Marley. In fact, I was certifiably obsessed with Reggae music in general. I collected CDs and albums of every Reggae artist I could. As I got older and more into reading, I read a lot of books on Bob and other Reggae artists, and on the genre itself. When that got too “secondary,” I started reading magazines in my school’s library — original magazines printed in the 1970s and 80s that detailed Reggae and Jamaican culture. That grew into an interest in Jamaican and Caribbean history, which led to an interest in Rastafari religion and history, which in turn grew into an interest in Ethiopian and African history. In fact, when I first came to UCF, I was trying to decide whether to do a project on the Pacific Islands or on the Caribbean islands. Dr. Lester (who was my first professor at UCF) may even remember that my initial meeting with her was about the possibility of doing a history of Afrocentric Caribbean culture for my thesis.

Anyway, the point is that I already know a lot about Bob Marley, and the story of his life leading up to his first studio album seemed like something I could feasibly plot, describe, and incorporate with photos, videos, and text. It seemed like an ideal way, at least at this early stage in my digital history career, to convey a particular history in a spatial, interactive, and mostly non-textual way.

It took quite a while to make: I had to create information boxes, look up exact addresses to be sure of where my pinpoints needed to go, and then find relevant images and videos. But it ultimately was very fun and engaging.

Screenshot: editing Bob’s points on the map

Results

The resulting project is indeed an interactive map. Users can click across the map, or follow the map legend, and read bits about Bob’s childhood, early career, and eventual success as a studio artist. Fitting the history into small text boxes, however, leaves out so much of the nuance. I often found myself getting frustrated because I know the history, and I know that this project leaves so many of the good parts out. Still, with mapping I found it’s more important to convey the spatial and non-textual elements, and I was able to create something that is undoubtedly more entertaining than any article could be. As such, perhaps this kind of project could supplement an article, or be used as an introduction that stimulates interest in one. As a stand-alone, it’s simply a fun and engaging way to explore the first years of Bob’s career and music, and that’s okay.

I hope you all enjoy it! And remember, I’m still learning, so be easy on me here…

Best,

Garrett

https://www.google.com/maps/d/viewer?mid=zIusc7MxJmfk.kRXvO58PTpKk

Screenshot: “Bob Marley: A Reggae Journey”

Reading with Algorithms: An Introduction to Topic Modeling, Text Mining, and Distant Reading

BY THE NUMBERS

Gone are the days when those in the humanities could say “no” to math. Algorithms, statistics, and other mathematical tools can change the way people convey and interpret the humanities.

Mathematics, in conjunction with computer coding and programming, allows humanities scholars to “zoom out” from a sample subset (say, every written word of William Shakespeare) and draw conclusions about the broader themes, interpretations, and aspects of that sample. Continuing with the example, such a zooming out could reveal how often Shakespeare focused on sexuality, poverty, monarchy, etc.
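
As a toy illustration of that kind of zooming out, counting how often a handful of theme words appear across a full corpus takes only a few lines of Python. The file name and word lists below are placeholders, and a real project would use far more careful tokenization and theme dictionaries.

```python
import re
from collections import Counter

# Placeholder corpus file: imagine the complete works in one plain-text file.
with open("shakespeare_complete_works.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)

# Crude "themes": tiny hand-picked word lists standing in for real lexicons.
themes = {
    "monarchy": ["king", "queen", "crown", "throne"],
    "poverty": ["poor", "beggar", "hunger"],
    "sexuality": ["love", "desire", "lust"],
}

for theme, terms in themes.items():
    total = sum(counts[t] for t in terms)
    print(f"{theme}: {total} occurrences")
```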

So the question is — what would something like that tell us? How would we be better off knowing the frequency with which Shakespeare discussed the British crown?

This is where there seems to be some contention among scholars. On one side are those who swear by the numbers. Exactitude, these scholars say, yields concrete answers to difficult questions. On the opposite end of the spectrum are those who see statistical approaches as overly assuming. How, these scholars ask, can numbers reveal nuance?

ARE NUMBERS NEW?

The quantitative approach is nothing new, and I should be clear here to point out that the topic of discussion today is not the “quantitative approach” as historians know it. Yes, the approaches discussed here draw from numbers, but the quantitative approach generally results in a number: X number of slaves transported in the transatlantic slave trade, X number of Native Americans present in the New World before European exploration, etc.

However, using mathematics (especially algorithms and computer coding and programming) to reveal textual trends is relatively new. The results of these approaches are not always number-focused, but rather text-focused. How often was the word “X” used by a certain history journal over the course of its existence? How often did Union soldiers discuss “death” in their letters? What issues did Churchill discuss in his speeches? Yes, numbers are inherently a part of this approach, but the driving focus is on the text.

THE DIFFERENT APPROACHES

If there is any counter-argument to my claim that textual/mathematical approaches are “relatively new,” it lies in the Index Thomisticus by Roberto Busa. Busa, an Italian Jesuit priest, began this project all the way back in the 1940s; his goal was to arrange the words of St. Thomas Aquinas in such a way as to allow scholars to interpret them based on the frequency with which Aquinas discussed certain topics, and how individual words and correlating themes linked with others. As technology grew, so too did Busa’s project. Though Busa has since passed, the project now exists online, and Aquinas’ words can be searched and understood in over ten different languages.

Busa labeled his work as “textual informatics,” and his approaches have been adopted and adapted throughout the humanities.

Another term for this kind of work is “distant reading,” which journalist and author Kathryn Schulz defines as “understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.”

In other words, distant reading zooms out from the individual text, or collection of texts, to sample sizes far too large for any one human to read and interpret (say, 160,000 books or more) and interprets them in aggregate. For “distant readers,” understanding literature is more than just reading; it is analyzing corpora that no single person could work through alone.

How can this happen? Again, by using digital tools (at least now, not in Busa’s 1940s) to scan texts, identify word and term frequencies, create relations between words, form groups, and produce lists (or topics) for scholars to then interpret.

According to digital whizkid Scott Weingart (now the head of Carnegie Mellon’s digital humanities efforts), this approach, called “topic modeling…represents a class of computer programs that automagically extracts topics from texts.” However, according to Weingart, “It’s all text and no subtext…it’s a bridge, and often a helpful one.”

In other words, topic modeling and text mining assist the humanities in getting somewhere, but they are not the end-all, be-all.

(This is a good spot to define the difference between text mining and topic modeling — some good answers can be found here. Basically, “Text classification (text mining) is a form of supervised learning — the set of possible classes are known/defined in advance and don’t change” whereas “Topic modeling is a form of unsupervised learning (akin to clustering) — the set of possible topics are unknown apriori. They’re defined as part of generating the topic models. With a non-deterministic algorithm like LDA, you’ll get different topics each time you run the algorithm.”)
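
To make the “unsupervised” part of that distinction concrete, here is a minimal topic-modeling sketch in Python using scikit-learn’s LDA implementation (one of several tools that could do this; MALLET and gensim are common alternatives). The four toy documents stand in for a real corpus, and with so little text the “topics” it finds are meaningless, but the shape of the workflow (vectorize, fit, read off the top words per topic) is the point.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents standing in for a real corpus of articles.
docs = [
    "arena construction downtown parking traffic",
    "basketball franchise fans tickets season",
    "neighborhood residents housing dislocation city",
    "council vote funding bonds mayor",
]

# Turn the documents into a word-count matrix.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit an LDA model; the number of topics is chosen by the researcher.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words for each discovered topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```

With a real corpus you would raise the number of topics, run the model several times, and then do the humanistic work of deciding what, if anything, each cluster of words actually means.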

Schulz questions whether or not they are useful at all, at least in literary studies. “Literature is an artificial universe,” Schulz says, “and the written word, unlike the natural world, can’t be counted on to obey a set of laws.”

SO WHAT DO WE MAKE OF THESE TOOLS?

Perhaps Schulz is right, but can’t topic modeling, text mining, and other distant reading approaches assist in understanding overarching trends that the naked eye (and mind) simply cannot identify? Do these approaches reduce narratives to numbers, or do the numbers boost the narratives? I tend to lean toward the latter.

In my own work, I can see the benefit of applying text mining and/or topic modeling to find overarching trends in the newspapers I research. How often do journalists discuss race, poverty, and the built urban environment, and what can this tell me about the narrative I am working on?
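
A rough sketch of what that could look like in practice: assuming the articles were saved as plain-text files named by year (a made-up layout; my actual ProQuest downloads are PDFs), counting a few chosen terms per year would give a simple trend to interpret.

```python
import re
from collections import defaultdict
from pathlib import Path

# Assumed layout: articles/1986_some_headline.txt, articles/1987_..., etc.
terms = ["race", "poverty", "parramore"]
counts_by_year = defaultdict(lambda: {t: 0 for t in terms})

for path in Path("articles").glob("*.txt"):
    year = path.name[:4]  # the leading year in the file name
    text = path.read_text(encoding="utf-8").lower()
    for t in terms:
        counts_by_year[year][t] += len(re.findall(rf"\b{t}\b", text))

for year in sorted(counts_by_year):
    print(year, counts_by_year[year])
```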

If there is any reluctance on my part to engage with distant reading tools, it harkens back to the introduction — I tend to be one of those who says “no” to math. Not out of ignorance — it’s not a declaration. You’ll never hear me say, “death to Math!” except perhaps in my head during a GRE exam.

It’s just that I’m a bit frightened to engage with something so arduous and unfamiliar. And I’m not alone here.

But we as historians, humanists, and scholars in general need to embrace all approaches, and do our best to understand the nitty-gritty of numbers and distant reading. We might not think it will be useful to us. We might not want to “obey a set of laws” that makes us feel overly numerated. But how will we know if we do not try to learn the method?

As Weingart says, “The model requires us to make meaning out of them, they require interpretation, and without fully understanding the underlying algorithm, one cannot hope to properly interpret the results.”