When we study the past, it is very difficult to be completely objective. Even if we do not produce outright fake histories, we cannot include everything in a history curriculum, so we include some things and exclude others. The problem is that nations tend to favor historical events that make them appear more prestigious, while excluding events that make them appear less prestigious. This can perpetuate conflicts, much like the long-running feud between the Montagues and the Capulets in Shakespeare's Romeo and Juliet.
Most countries have some selection bias in their history curricula, and this may be one of the main reasons why the Israel-Palestine conflict and the Kashmir conflict never seem to end. It also seems to have been a prominent factor in the Cold War, the Yugoslav Wars, and presumably many other conflicts. You might assume that if we are later in life presented with historical facts that make our countries seem less prestigious, we develop a more unbiased and objective understanding of our histories. Unfortunately, this is not the case: we tend to ignore facts that oppose our beliefs, while actively seeking out facts that strengthen them.
When people search for information on the Internet, they are likely to search for information that confirms their beliefs rather than information that might contradict them. Even when people are confronted with information that contradicts their beliefs, they are likely to ignore it. This causes different political and religious groups to drift further apart, which in turn creates more conflicts in the world.
We are in general much better at seeing correlations we are looking for than correlations we are not looking for. If, for example, we want to figure out whether a symptom is indicative of a disease, we might look at whether infected people are more likely than not to have the symptom. From this alone, we might erroneously start to believe that the symptom is indicative of being infected.
It is, however, also possible to look at whether people who are not infected are more likely than not to have the symptom. From this alone, we might erroneously start to believe that the symptom is indicative of not being infected.
If we compare all of these figures, we might see that there is no correlation at all between the symptom and the disease. The symptom is simply prevalent among people in general, both those who are infected and those who are not. The probability of being infected if you have the symptom is then the same as the probability of being infected in general.
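A small worked example makes this concrete. The counts below are hypothetical, chosen so that the symptom is equally common among the infected and the uninfected:

```python
# Hypothetical counts for illustration: the symptom is common in both
# groups, so it carries no information about infection status.
infected_with_symptom = 80
infected_without_symptom = 20
healthy_with_symptom = 800
healthy_without_symptom = 200

infected = infected_with_symptom + infected_without_symptom          # 100
total = infected + healthy_with_symptom + healthy_without_symptom    # 1100
with_symptom = infected_with_symptom + healthy_with_symptom          # 880

p_infected = infected / total                                # base rate
p_infected_given_symptom = infected_with_symptom / with_symptom

print(f"P(infected)           = {p_infected:.4f}")
print(f"P(infected | symptom) = {p_infected_given_symptom:.4f}")
```

Both probabilities come out to about 0.0909: knowing that someone has the symptom tells you nothing extra, even though 80% of the infected have it.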
This also applies to political and religious convictions. We might selectively look only at the prevalence of favorable things in our own religion or political affiliation, without comparing it to the prevalence of the same favorable things in other religions or political affiliations. Similarly, we might look only at the absence of adverse things in our own religion or political affiliation, without comparing it to their absence elsewhere.
People tend to associate perfection with one ethnicity, culture, and/or personality type, often their own. This way of thinking fails to recognize the benefits of diversity, as captured by the diversity prediction theorem.
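The diversity prediction theorem states that a crowd's squared error equals the average individual squared error minus the variance (diversity) of the individual predictions, so diverse groups outperform their average member. A minimal sketch with hypothetical guesses of an unknown quantity:

```python
truth = 50.0
predictions = [38.0, 55.0, 62.0, 47.0]  # hypothetical individual guesses

n = len(predictions)
crowd = sum(predictions) / n
collective_error = (crowd - truth) ** 2
avg_individual_error = sum((p - truth) ** 2 for p in predictions) / n
diversity = sum((p - crowd) ** 2 for p in predictions) / n

# The theorem: collective error = average individual error - diversity.
print(collective_error, avg_individual_error, diversity)
assert abs(collective_error - (avg_individual_error - diversity)) < 1e-9
```

Here the crowd's error (0.25) is far smaller than the average individual error (80.5), precisely because the guesses are diverse; the identity holds for any set of predictions, not just these numbers.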
We usually do not mind closed-minded people who adhere to the same ideologies as us, while we usually dislike closed-minded people who adhere to other ideologies. If, on the other hand, people adhering to different ideologies are open-minded, we tend to like them much more. We should therefore probably be a bit less tolerant of closed-minded people within our own ideologies, given how much we dislike closed-mindedness in others.
According to the principle of least effort, we tend to choose the alternative that requires the least effort. It is analogous to the path of least resistance in physics, whereby rivers over time usually settle into the path with the least resistance. When using search engines, we have a tendency to avoid complicated explanations in favor of simpler ones, even when the complicated explanations are more accurate and/or more trustworthy.
This is why students often avoid topics that require a lot of work in favor of topics that require less. It can also be related to the appeal of populism in politics. Populist politicians propose simplistic solutions to complicated problems, such as the war on drugs, the war on terror, or building a wall to stop immigration. Since such simplistic solutions are easy to comprehend, they tend to get widespread support, even if they are not necessarily the best solutions to these complicated problems.
We derive more pleasure from thinking about pleasant things that happened to us in the past than from thinking about boring or distasteful ones. So we have a tendency to dwell on pleasant memories, and every time we recall a memory, we strengthen it. We also modify it a little, making it appear even more agreeable, so that we can derive even more pleasure from recalling it in the future.
Over time this tends to give us an overly positive image of the past. It also tends to make us think that things are getting worse, or that society as a whole is in decline. This way of thinking can lead civilizations toward stagnation, since far more focus is placed upon reestablishing the past than upon incorporating new ideas.
Sometimes when people are forced to explain their behavior or their choices, they struggle to come up with an explanation, but after a while most people manage to produce one. Research has shown, however, that these explanations tend to be fabrications rather than true reasons [5,6]. We are often not aware of why we behave in a certain way or why we made a particular choice, but if pressed for an explanation, we fabricate something. We also tend to believe these fabrications ourselves, even though they usually have little to do with why we actually behaved or chose as we did.
When we start to learn a new topic or skill, we might overestimate our competence, simply because we have not yet learned about all the things we do not know or have not mastered. As we learn more about what we do not know or have not mastered, our confidence tends to go down. If, however, we continue to learn, our confidence may start to increase again.
The illusion of knowing more about things we care about
Research has shown that the more people care about something, the more they tend to think they know about it, regardless of whether this is actually the case. For example, people who are heavily involved in environmental organizations or care a lot about environmentalism might erroneously think they know a lot about the scientific theories behind global warming, even when this is not the case.
The illusion of knowing more about things we use a lot
People often feel that they know how an item works if they know how to use it. For example, people who drive a lot might think they have a better understanding of how their car works than is actually the case. Similarly, people who use computers and cellphones a lot might think they have a better understanding of how these devices work than is actually the case.
We often ascribe our own successes to our superior skills rather than to external circumstances. When it comes to failures, however, we tend to blame external circumstances. We are probably better off taking more responsibility for our failures, since doing so gives us motivation to improve ourselves.
We are, however, quick to blame other people for their failures, without considering that external circumstances might have influenced those failures too. This can lead to hostilities in marriages and work environments.
The brain is wired to find causal explanations. This also tends to make us believe in fallacious causes, especially in situations where things have a natural tendency to regress to the mean. If someone performs extremely badly, it is likely to be partially due to unfavorable randomness or bad luck. Similarly, if a person performs extremely well, it is also likely to be partially due to randomness or luck.
However, the person performing extremely badly is likely to perform better the next time simply because bad luck has less influence, while the person performing extremely well is likely to perform worse the next time simply because good luck has less influence.
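This regression effect can be illustrated with a small simulation. The model below is a toy assumption, not from the original text: each performance is a stable skill component plus independent luck, both normally distributed.

```python
import random

random.seed(1)

# Toy model: performance = stable skill + independent luck per attempt.
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
first = [(skill + luck, skill) for skill, luck in people]

# Take the 1% worst first performances and replay them with fresh luck.
worst = sorted(first)[:1000]
first_scores = [score for score, _ in worst]
second_scores = [skill + random.gauss(0, 1) for _, skill in worst]

mean_first = sum(first_scores) / len(first_scores)
mean_second = sum(second_scores) / len(second_scores)
print(mean_first)   # far below average
print(mean_second)  # still below average, but much closer to the mean
```

No punishment or intervention happens between the two attempts; the worst performers improve on average purely because their extreme bad luck is unlikely to repeat.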
In the past, it was commonly believed that punishment works better as an educational technique than reward. If exceptionally bad performance was punished, you might easily think that the punishment caused the subsequent improvement, even though the improvement was simply caused by less unfavorable randomness. If, on the other hand, exceptionally good performance is rewarded, favorable randomness is likely to decrease the next time, and you probably would not think of reward as a very useful educational technique. This has also caused a lot of superstition. For example, if you have the flu, you might start to drink herbal tea, and after a few days you might feel better. In such a scenario, people are prone to believe that they got better because they drank the herbal tea. It is, however, highly likely that you would have gotten better just as fast without it, due to the normal functioning of your immune system.
We tend to put too much weight on things that have very low probabilities of occurring, simply because there is a possibility of them occurring. This is why people buy lottery tickets: they focus on the fact that winning is possible, even though it is very unlikely. Similarly, people often put too little weight on very high probabilities, since these do not feel certain. This is why people often buy expensive insurance: to feel safe, even against very unlikely events.
Most tests produce false positives, since there is usually some luck and/or randomness involved. For extremely rare conditions, these false positives can actually be far more common than the true positives. However, people often neglect the background probabilities of rare conditions. Such a rare condition might, for example, be having an IQ above 145.
Let us imagine that someone developed an IQ test that predicts whether a person has an IQ above 145 with 99% accuracy. You take the test and score positively for an IQ above 145. Should you believe that you indeed have an IQ above 145? After all, the test is supposed to be 99% accurate. However, since only about 0.1% of the world population is supposed to have an IQ above 145, you need to take this base rate into consideration and use Bayes' theorem to find the real likelihood that you have such a high IQ.
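The calculation is short. Using the numbers above, and assuming "99% accurate" means both a 99% true-positive rate and a 99% true-negative rate:

```python
# Bayes' theorem with the numbers from the text.
base_rate = 0.001           # P(IQ > 145), roughly 0.1% of the population
sensitivity = 0.99          # P(positive | IQ > 145)
false_positive_rate = 0.01  # P(positive | IQ <= 145)

p_positive = (sensitivity * base_rate
              + false_positive_rate * (1 - base_rate))
p_high_iq_given_positive = sensitivity * base_rate / p_positive

print(f"{p_high_iq_given_positive:.3f}")  # about 0.090
```

Despite the "99% accurate" label, a positive result means only about a 9% chance of actually having an IQ above 145, because the false positives from the 99.9% of ordinary test-takers vastly outnumber the true positives.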
If you toss a fair coin and assign the value 1 to heads and the value 0 to tails, the average value gets closer to the expected value (0.5) with more trials. For coin tosses, the average does not reliably get very close to the expected value until around 100 000 tosses. In medicinal, nutritional, and behavioral studies, there is always a bit of randomness for each participant. This can be minimized by using a large number of participants.
Just as the average value of coin tosses gets closer to the expected value with more trials, rare or extreme average values become more common with fewer trials. This is known as the law of small numbers: rare or extreme cases are more common in smaller groups of people, or in other words, there is higher variability between smaller groups than between larger groups. Smaller schools, for example, seem to be overrepresented among the best schools, but they also seem to be just as overrepresented among the worst schools, perhaps simply because there is higher variability between smaller schools than between larger ones.
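The school example can be sketched with a simulation in which every student is drawn from the same distribution, so any differences between schools are pure sampling noise (the school sizes and the IQ-like score distribution below are hypothetical):

```python
import random

random.seed(42)

def school_average(size):
    """Average score of one simulated school; all students are drawn
    from the same distribution, so schools differ only by chance."""
    return sum(random.gauss(100, 15) for _ in range(size)) / size

small = [school_average(20) for _ in range(1000)]    # 1000 small schools
large = [school_average(2000) for _ in range(1000)]  # 1000 large schools

print(min(small), max(small))  # extremes far from 100 in both directions
print(min(large), max(large))  # extremes hug 100
```

The small schools dominate both the top and the bottom of the ranking even though no school is genuinely better than any other: exactly the selection effect described above.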
Females have two X chromosomes, while males have only one. Since males get only one copy of each gene on the X chromosome, they are much less likely than females to obtain fully functional versions of all these genes. However, since females obtain twice as many X-chromosomal genes, they are also roughly twice as likely to obtain at least one dysfunctional version of each gene.
Statistically, this means that males show higher variability in the genes expressed on the X chromosome, which contains many genes related to neurological development. Some feminists have argued that Western democracies with equal rights for men and women still discriminate against women, since there tend to be more men in favorable, highly paid societal positions. However, there also tend to be more men in prisons and other unfavorable societal positions. This might simply be due to greater variability in IQ among men than among women; several studies have found males to be about 30% overrepresented among individuals with intellectual disability. Focusing on only one end of the distribution is itself a form of selection bias.
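As a deliberately simplified toy model of this variability argument (an assumption for illustration, not a genetic simulation): suppose each X-linked allele contributes a random effect to a trait, males express a single allele, and females effectively average the effects of their two copies. Averaging two independent draws halves the variance while leaving the mean unchanged:

```python
import random

random.seed(7)

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# One allele effect per male, the average of two per female.
males = [random.gauss(0, 1) for _ in range(100_000)]
females = [(random.gauss(0, 1) + random.gauss(0, 1)) / 2
           for _ in range(100_000)]

print(variance(males))    # close to 1.0
print(variance(females))  # close to 0.5: averaging two draws halves it
```

With equal means but higher male variance, both tails (the extremely high and the extremely low) end up male-dominated, which is the statistical point of the paragraph above.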
You might often hear people say something like: "I know a guy who smoked and lived until he was 100, so smoking cannot possibly be that bad for you." This is a generalization based upon a single individual. As we have seen from the law of large numbers, the average value of coin tosses varies widely up to around 1 000 trials, and we do not get really good estimates of the expected value until somewhere between 10 000 and 100 000 trials. So we need a large number of individuals (preferably around 100 000) to make reliable generalizations.
Generalizations based upon our friends
To make reliable generalizations, we also need a random selection of people, and your friends are not a random selection of people. You might, for example, work for a construction company, and most of your friends could be colleagues from work. If you generalized based upon your friends, you might erroneously start to believe that people in general know a lot about construction.
[1] S. E. Page, “Where diversity comes from and why it matters?,” European Journal of Social Psychology, vol. 44, pp. 267–279, 2014.
[2] C. Wilson, V. Ottati, and E. Price, “Open-minded cognition: The attitude justification effect,” The Journal of Positive Psychology, vol. 12, no. 1, pp. 47–58, 2017.
[3] G. K. Zipf, Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Martino Fine Books, 2012.
[4] T. R. Mitchell, L. Thompson, E. Peterson, and R. Cronk, “Temporal adjustments in the evaluation of events: The “rosy view”,” Journal of Experimental Social Psychology, vol. 33, pp. 421–448, Jul. 1997.
[5] T. D. Wilson, D. S. Dunn, D. Kraft, and D. J. Lisle, “Introspection, attitude change, and attitude-behavior consistency: the disruptive effects of explaining why we feel the way we do,” in Advances in Experimental Social Psychology, pp. 287–343, Elsevier, 1989.
[6] T. D. Wilson and J. W. Schooler, “Thinking too much: introspection can reduce the quality of preferences and decisions,” Journal of Personality and Social Psychology, vol. 60, pp. 181–192, Feb. 1991.
[7] J. Kruger and D. Dunning, “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments,” Journal of Personality and Social Psychology, vol. 77, no. 6, pp. 1121–1134, 1999.
[8] M. Fisher and F. C. Keil, “The illusion of argument justification,” Journal of Experimental Psychology: General, vol. 143, no. 1, pp. 425–433, 2014.
[9] L. Rozenblit and F. Keil, “The misunderstood limits of folk science: an illusion of explanatory depth,” Cognitive Science, vol. 26, pp. 521–562, Sep. 2002.
[10] A. Tversky and D. Kahneman, “The framing of decisions and the psychology of choice,” Science, vol. 211, pp. 453–458, Jan. 1981.
[11] A. Tversky and D. Kahneman, “Rational choice and the framing of decisions,” The Journal of Business, vol. 59, p. S251, Jan. 1986.
[12] D. Kahneman, Thinking, Fast and Slow. Farrar, Straus and Giroux, 2013.
[13] C. R. Sunstein, “Probability neglect: Emotions, worst cases, and law,” SSRN Electronic Journal, 2001.
[14] M. Bar-Hillel, “The base-rate fallacy in probability judgments,” Acta Psychologica, vol. 44, pp. 211–233, May 1980.
[15] E. Seneta, “A tricentenary history of the law of large numbers,” Bernoulli, vol. 19, pp. 1088–1121, Sep. 2013.
[16] A. Tversky and D. Kahneman, “Belief in the law of small numbers,” Psychological Bulletin, vol. 76, no. 2, pp. 105–110, 1971.
[17] H.-H. Ropers and B. C. J. Hamel, “X-linked mental retardation,” Nature Reviews Genetics, vol. 6, pp. 46–57, Jan. 2005.
[18] F. L. Raymond, “X linked mental retardation: a clinical guide,” Journal of Medical Genetics, vol. 43, pp. 193–200, Aug. 2005.