Arguably, the Complexity Loop preceded the Simplicity Loop in digital preeminence. There was a time when the internet was a rebellious, countercultural place \cite{alhadeff2017}. However, times have changed. Hossein Derakhshan, an activist blogger who spent six years in an Iranian prison after blogger-fuelled protests against the government, tells the story poignantly:
Blogs gave form to that spirit of decentralization: They were windows into lives you’d rarely know much about; bridges that connected different lives to each other and thereby changed them. Blogs were cafes where people exchanged diverse ideas on any and every topic you could possibly be interested in. They were Tehran’s taxicabs writ large.
Since I got out of jail, though, I’ve realized how much the hyperlink has been devalued, almost made obsolete.
Nearly every social network now treats a link as just the same as it treats any other object — the same as a photo, or a piece of text — instead of seeing it as a way to make that text richer. You’re encouraged to post one single hyperlink and expose it to a quasi-democratic process of liking and plussing and hearting: Adding several links to a piece of text is usually not allowed. Hyperlinks are objectivized, isolated, stripped of their powers. \cite{derakhshan2015}
A hyperlink allows people to spread ideas irrespective of their popularity, avoiding the dangers of the Simplicity Loop. In the Complexity Loop, intricate and complex conversations spawn new ideas and criticism. However, the very intricacy and diversity of the discussions in this feedback loop prevents them from growing more popular.

How to Measure the Differences Between Feedback Loops

How could you tell a Simplicity Loop from a Complexity Loop?
My solution: by looking at the increase or decrease of linguistic richness.
Measuring linguistic richness has been called the Gordian Knot of literary studies \cite{Miranda_Garc_a_2005}. I explored several ways of measuring the linguistic richness of online text: Hapax Legomena, the type-token ratio, counts of adjectives and adverbs, and Yule's I characteristic. I chose Yule's I characteristic because the other three measures suffered from two key problems: jargon sensitivity and length dependence. Online comments vary tremendously in length and often feature thread-specific jargon and non-standard grammar. The variable length made Hapax Legomena and the type-token ratio ineffective tools for measuring richness (and, from there, meaning), and a study of adjectives and adverbs quickly proved futile because the threads used non-standard grammar.

Yule’s I Measure of Richness

Udny Yule's I characteristic comes from his 1944 work, "The Statistical Study of Literary Vocabulary" \cite{Yule1944}, in which he called the result of this formula Characteristic I. In essence, it captures the balance between total words and unique words in a text while remaining insensitive to text length. The phrase CAT CAT CAT would have a lower score than the phrase CAT CAT DOG even though both have the same number of words. What separates Yule's I score from a simple type-token ratio is its ability to deal with texts of varying lengths. This is achieved with the following formula:
\(I = \frac{M_1^{\,2}}{M_2 - M_1}\)
where \(M_1\) is the total number of words (in the examples above, both would be 3), and \(M_2\) is the sum, over every observed frequency, of the number of distinct words occurring at that frequency multiplied by that frequency squared. For example, in the post "Dogs eat cats. Cats eat pigs. Cats eat cats" there are four instances of the word "cats", three instances of the word "eat", and one instance each of the words "dogs" and "pigs". Therefore \(M_2\) would be \(1\cdot4^2 + 1\cdot3^2 + 2\cdot1^2 = 27\).
CAT CAT CAT would have a Yule's I score of \(\frac{3\cdot3}{9-3} = 1.5\).
CAT CAT DOG would have a Yule's I score of \(\frac{3\cdot3}{5-3} = 4.5\).
"Dogs eat cats. Cats eat pigs. Cats eat cats" would also have a Yule's I score of \(\frac{9\cdot9}{27-9} = 4.5\).
This formula should give roughly the same value no matter the size of the sample \cite{Yule1944}, unlike Hapax Legomena or type-token measures. Furthermore, coding this measure in Python was straightforward and had been done before by web engineer Swizec Teller \cite{Teller2017}.
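To make the computation concrete, here is a minimal Python sketch of the measure, in the spirit of Teller's implementation (the tokeniser, which lower-cases the text and strips punctuation, is my own simplifying assumption):
\begin{verbatim}
import re
from collections import Counter

def yules_i(text):
    # Tokenise: lower-case and keep only alphabetic words.
    tokens = re.findall(r"[a-z']+", text.lower())
    freqs = Counter(tokens)      # word -> frequency
    m1 = sum(freqs.values())     # total number of words
    # M2: for each frequency, (words at that frequency) * frequency^2
    m2 = sum(n * f ** 2 for f, n in Counter(freqs.values()).items())
    return (m1 * m1) / (m2 - m1)  # undefined when every word is unique

print(yules_i("CAT CAT CAT"))    # 1.5
print(yules_i("CAT CAT DOG"))    # 4.5
print(yules_i("Dogs eat cats. Cats eat pigs. Cats eat cats"))  # 4.5
\end{verbatim}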
Returning to Luhmann's definition of meaning, 'the product of the different choices that a system makes to deal with complexity', how does Yule's I score fit in?
Like the type-token ratio, the higher the score, the more information, and the more distinct choices, a post can hold.
For example, "Take a left turn" has a higher score than "Cat Cat Cat Cat", and carries more information. To develop this connection further, I will use a concept from algorithmic information theory: Kolmogorov complexity. Laid out by the Soviet mathematician Andrey Kolmogorov in "Three Approaches to the Quantitative Definition of Information" \cite{Kolmogorov1968}, Kolmogorov complexity is the length of the shortest description needed to produce a given output. For example, the string "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" (50 characters long) can be produced from the description "a 50 times" (10 characters). However, the string "esfjbkskjbgsldfnapsdkngirngvlsjrnvsjadb" can only be completely described by writing it out in full. Even though it has fewer characters than the string of a's, it has a higher Kolmogorov complexity.
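True Kolmogorov complexity is uncomputable, but a string's length after compression is a standard, computable stand-in for it. A quick Python illustration (zlib serves here purely as a rough proxy):
\begin{verbatim}
import zlib

repetitive = b"a" * 50
random_ish = b"esfjbkskjbgsldfnapsdkngirngvlsjrnvsjadb"

# The repetitive string shrinks dramatically; the jumbled one barely
# shrinks at all, reflecting its higher (approximate) complexity.
for s in (repetitive, random_ish):
    print(len(s), "->", len(zlib.compress(s)))
\end{verbatim}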
Posts with higher Yule's I scores have fewer repeated words and therefore higher Kolmogorov complexities. The more complex a comment, the more potential choices, and therefore meaning, it can store. Yule's I score thus measures the possible meaning in a comment.
By looking at the Yule's I score of each comment, we can fit a line to the increase or decrease of Yule's I over time. We can then compare the rate at which Yule's I rises or falls with the number of comments on a post, thread, or community. By using the rate of change of the measure instead of the average Yule's I score, we ignore the line's intercept. This gives a standardised measure across all categories, since each category might have a different baseline mean Yule's I score, shaped by what the commenters knew before entering the thread.
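As a sketch of that procedure (the comment scores below are hypothetical, and scipy's linregress is just one convenient way to fit the line):
\begin{verbatim}
from scipy.stats import linregress

# Hypothetical Yule's I scores for successive comments in one thread.
scores = [42.1, 39.8, 35.5, 36.2, 31.0, 28.7]
positions = list(range(len(scores)))

fit = linregress(positions, scores)
# Only the slope matters here; the intercept (the thread's baseline
# richness) is deliberately ignored. A negative slope suggests a
# Simplicity Loop, a positive slope a Complexity Loop.
print(fit.slope)
\end{verbatim}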
Therefore, we can see which feedback loop is at work in an online conversation.
A Simplicity Loop is characterised by a decreasing Yule's I score, while a Complexity Loop is characterised by an increasing Yule's I score.

An Example Case: Reddit

I decided to test this theory and method on Reddit, looking at polemic science threads. My hypothesis: polemic science threads on Reddit should be sites of the Simplicity Loop, with Yule's I scores falling as comments accumulate.
Reddit is an online forum divided into subreddits, threads, and comments. Each subreddit is built around a set theme and has its own rules and moderators. As of June 2017, the most popular subreddits were AskReddit, Funny, TodayILearned, and Science, according to http://redditlist.com. Each subreddit contains threads, which can be a question, a statement, or a link to an article or video. People comment on these threads, comment on those comments, and so on indefinitely. They can also upvote or downvote both comments and threads.
I wanted to explore Reddit because it has elements of both the Simplicity Loop and the Complexity Loop. Reddit is a place where fringe ideas can thrive and new ideas arise. However, its process of upvoting could allow simple ideas to push out more complex ones.

Why Polemic Science Issues?

More scientific papers than ever are being published \cite{Larsen2017}, and on issues of public interest they often contradict each other. This preponderance of conflicting information means there are too many studies for a layman to judge easily. Thus, according to Luhmann's theory, we rely on social systems to process that complexity and give us a simpler picture of the world. By looking at polemical science debates, we can analyse a dynamic and changing system. For these reasons, I chose to look at science issues.
The categories I chose, Artificial Intelligence, Global Warming, Genetically Modified Organisms, the CRISPR gene-editing tool, and the debate over vaccines, all interested me personally. I have been building a crude artificial intelligence throughout the year, which introduced me to the debates over AI. The debates over Global Warming and vaccines came to the forefront in my home country, the United States, after a recent election, and seemed pertinent. My interest in Genetically Modified Organisms and CRISPR stems from growing up in a family of geneticists.

Instruments

To collect and analyse Reddit comments, I wrote a script in the Python programming language. Reddit offers a public API, and the open-source Python package PRAW (the Python Reddit API Wrapper) allows for easy mining of threads and comments. I therefore used Python for this project. Another language, R, provides stronger statistical analysis tools and a more straightforward implementation of Yule's I score; but since Python could both compute the measure and use PRAW, I wrote my program, "SuperYule", in Python.
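The comment-gathering core of such a script might look like the following sketch (the credentials are placeholders, the subreddit and search term are arbitrary examples, the PRAW calls reflect the PRAW 4/5 interface, and yules_i is the function sketched earlier):
\begin{verbatim}
import praw

# Placeholder credentials; Reddit issues real ones for registered apps.
reddit = praw.Reddit(client_id="CLIENT_ID",
                     client_secret="CLIENT_SECRET",
                     user_agent="SuperYule")

# Fetch one thread on a polemic science topic and flatten its comment tree.
for submission in reddit.subreddit("science").search("GMO", limit=1):
    submission.comments.replace_more(limit=0)   # expand "load more" stubs
    comments = submission.comments.list()
    scores = [yules_i(c.body) for c in comments]
    print(submission.title, scores[:5])
\end{verbatim}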