Quantifying common sense: New research suggests it’s not so common

Common sense, or the practical knowledge shared by the majority of the population regarding everyday matters, is ambiguous: It is difficult to know exactly why something is common sense even though it is intuitively clear. People often assume that something clear to one person is also clear to another, but this may not be the case. Some skeptics therefore claim that, contrary to popular belief, common sense does not even exist. Computational social scientists Mark Whiting and Duncan Watts at the University of Pennsylvania tackled this paradox in a study published in January 2024, which reveals that “common” sense may not actually be common.

“Common sense, or the practical knowledge shared by the majority of the population regarding everyday matters, is ambiguous”

Common sense has been a difficult concept to study because there has been no empirical method for calculating the extent to which a claim qualifies as common sense, or the percentage of people who hold such commonsense beliefs. Whiting and Watts began their research by establishing an equation to do so, built on two quantities that define common sense: consensus, or the degree to which a population agrees with a certain claim, and awareness, or individuals’ ability to predict others’ agreement with that claim. According to Whiting and Watts, a claim’s common sense score is the square root of the product of these two quantities (that is, their geometric mean).
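As a rough illustration, the score described above can be computed in a few lines. This is a minimal sketch: the function name and the example numbers are hypothetical, not taken from the study.

```python
import math

def common_sense_score(consensus: float, awareness: float) -> float:
    """Square root of the product (geometric mean) of consensus and awareness.

    consensus: fraction of the population agreeing with the claim (0 to 1).
    awareness: fraction correctly predicting others' agreement (0 to 1).
    """
    return math.sqrt(consensus * awareness)

# Hypothetical claim: 90% of people agree, and 80% correctly predict
# that others agree.
print(round(common_sense_score(0.9, 0.8), 3))  # 0.849
```

Note that the geometric mean punishes imbalance: a claim everyone agrees with but nobody believes others share still scores low.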

After mathematically defining individual common sense, the scientists created a graphical method of analyzing collective common sense, represented as pq: the fraction of claims (p) shared by a fraction of people (q). If common sense is actually common, most people would agree on most claims, and the value of pq would be close to 1. On the other hand, if not many agree on a set of claims, this number would approach 0.
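The intuition behind the pq measure can be sketched in code. The toy function below is a deliberate simplification of the paper’s actual analysis: given a matrix of who agrees with what, it asks what fraction q of people endorse all of the top fraction p of claims. The matrix and the function are hypothetical illustrations, not the authors’ method.

```python
def shared_fraction(agreement, p):
    """Toy pq computation: fraction q of people who agree with ALL of the
    top fraction p of most widely endorsed claims.

    agreement: list of rows, one per person; each row holds 0/1 per claim.
    p: fraction of claims that must be shared (0 to 1).
    """
    n_claims = len(agreement[0])
    k = max(1, round(p * n_claims))
    # Rank claims by how many people endorse each one.
    totals = [sum(row[j] for row in agreement) for j in range(n_claims)]
    top = sorted(range(n_claims), key=lambda j: -totals[j])[:k]
    # Fraction of people endorsing every one of those top claims.
    sharers = [row for row in agreement if all(row[j] for j in top)]
    return len(sharers) / len(agreement)

# Hypothetical toy data: 6 people rating 4 claims (1 = agrees).
toy = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
]
print(round(shared_fraction(toy, 0.25), 3))  # 0.833: 5 of 6 share the top claim
print(round(shared_fraction(toy, 0.5), 3))   # 0.667: 4 of 6 share the top two
```

Even in this tiny example, q drops quickly as p grows, which is the pattern the study found at scale.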

To generate data, Whiting and Watts asked 2,046 participants to each rate 50 claims: whether they agreed with the claim, whether they thought most people would agree with it, and whether they considered it common sense. The claims, 4,407 in total, were drawn from various sources: human responses to prompts such as “write a claim you believe to be common sense about math and logic,” real-world claims from political campaigns, emails, and other sources, and AI-generated claims. The researchers stress that every claim was merely a candidate for common sense. “We emphasize that these claims are only candidates for common sense, meaning that each claim was judged by someone (e.g., the respondent to our question, the creator of the AI corpus, the writer of a news story) to be self-evidently true, or at least sufficiently plausible that they hoped to persuade others of its self-evident truth,” they wrote in the paper.

To determine whether specific types of knowledge are more or less common, Whiting and Watts sorted each claim into 13 knowledge domains, such as geography and places, culture and the arts, and history and events. They also examined how common sense scores vary across people by classifying participants into demographic subgroups based on age, gender, race, income, and personality assessment scores.

After extensive analysis of the participants’ ratings of each claim, Whiting and Watts made several compelling findings. First, individual common sense varied substantially across types of claims but not across people of varying demographics. Fact-like claims, for example, were more commonsense than opinions, literal language was more commonsense than figures of speech, and claims about concrete reality were more commonsense than those about abstract issues. In terms of demographics, though, the researchers found that the largest difference in individuals’ commonsense percentages between demographic subgroup pairs (for example, conservative vs. liberal) was only around 3.9%. Personality test results, however, did correlate with common sense: Higher scores on the Reading the Mind in the Eyes (RME) test, which quantifies social perceptiveness, corresponded with greater levels of common sense and accounted for 19% of the variation in commonsense percentages.

Perhaps the most surprising finding was that collective common sense is actually rare. According to the data, no claims achieved both universal and perceived consensus, and only a tiny fraction of people (q) agreed on even a small fraction of claims (p). Only 8% of the sample agreed with 25% of the claims, and 0% agreed with 50%, revealing that within any population only a small number of people share even a small fraction of seemingly commonsense beliefs. Therefore, if one defines a person’s common sense as their total collection of commonsense beliefs, it would be almost impossible for two people to share the same conception of common sense, since perceptions of which claims are common vary so drastically across individuals.

Further studies on the quantification of common sense could explore how specific contexts affect how commonsense a claim is, how common sense has changed over time or varies across countries, or the impact of common sense in real-world situations. How does the use of commonsense claims in politics or writing, for example, shape an audience’s beliefs? The methods used in this study could even help determine whether machines can learn common sense, and thus serve as a foundation for the future of AI.

The idea that common sense does not actually exist is rather paradoxical, as there seem to be endless claims that almost everyone would agree with. Hopefully, though, these findings will provide some comfort the next time you do not know something that is apparently “common sense”: after all, it is likely that fewer than 8% of the population knew it either!