Chapter 7 | Science and Technology: Public Attitudes and Understanding
Public Knowledge about S&T
Science and Engineering Indicators has been reporting results of assessments of Americans’ knowledge about S&T since 1979. Initial indicators focused on the proper design of a scientific study and on whether respondents viewed pseudoscientific belief systems, such as astrology, as scientific. The questions also examined understanding of probability, and questions meant to assess understanding of basic scientific facts were added in the late 1980s and early 1990s (Miller 2004). These later factual questions—called here the trend factual knowledge questions—remain the core of one of the few available data sets on trends in adult Americans’ knowledge of science (NASEM 2016c).
Although tracking indicators on science knowledge is an important part of this chapter, it is also important to recognize that research has shown that science literacy has only a small—though meaningful—impact on how people make decisions in their public and private lives (see, e.g., [Allum et al. 2008]; [Bauer, Allum, and Miller 2007]; [NASEM 2016c]; [NSB 2012:7–27]). It is also clear that such knowledge need not result in accepting the existence of a scientific consensus or a policy position that such a consensus might suggest (Kahan et al. 2012). One challenge in measuring the effect of science literacy is that the processes—such as formal and informal education—through which knowledge is gained also contribute to interest in S&T and confidence in the S&T community. These same processes might also affect general and specific attitudes about science. The National Academies of Sciences, Engineering, and Medicine also recently highlighted that science literacy is largely a function of general (or “foundational”) literacy and that more focus should be put on the ability of groups to use science to make high-quality decisions (NASEM 2016c). In this regard, it should be recognized that science literacy is unequally distributed across societies: some groups or communities are able to make use of science when needed, while others are not because they lack access to resources such as local expertise (e.g., community members who are also scientists, engineers, or doctors).
It is also noteworthy that the current survey uses a relatively small number of questions compared to all the scientific subjects about which someone could be asked and thus cannot be said to represent a deep measurement of scientific knowledge. Given such concerns, the 2010 version of Indicators included responses to an expanded list of knowledge questions and found that people who “answered the additional factual questions accurately also tended to provide correct answers to the trend factual knowledge questions included in the GSS” (NSB 2010:7-20). The trend questions used in this report thus likely represent a reasonable indicator of basic science knowledge. The goal when designing these questions was to assess whether an individual likely possessed the knowledge that might be needed to understand a quality newspaper’s science section (Miller 2004).
There is, however, evidence that the current trend measures may be better at differentiating low and medium levels of knowledge than they are at differentiating those with higher levels of knowledge (Kahan 2016). More generally, considering the limitations of using a small number of questions largely keyed to knowledge taught in school, generalizations about Americans’ knowledge of science should be made cautiously.
Another issue is that, although the focus in Indicators is on assessing knowledge about scientific facts and processes, it could also be important to assess knowledge about the institutions of science and how they work—such as peer review and the role of science in policy discussions (Toumey et al. 2010). Others have similarly argued that the knowledge needed for citizenship might be different from what might be needed to be an informed consumer or to understand the role of science in our culture (Shen 1975). Science literacy can also be understood as the capacity to use scientific knowledge, to identify questions, and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity (OECD 2003:132–33).
The degree to which respondents demonstrate an understanding of basic scientific terms, concepts, and facts; an ability to comprehend how S&T generates and assesses evidence; and a capacity to distinguish science from pseudoscience have become widely used indicators of basic science literacy. The 2016 GSS continues to show that many Americans provide multiple incorrect answers to basic questions about scientific facts and do not apply appropriate reasoning strategies to questions about selected scientific issues. Residents of other countries, including highly developed ones, rarely appear to perform better when asked similar questions.
Understanding Scientific Terms and Concepts
U.S. Patterns and Trends
In 2016, Americans correctly answered an average of 5.6 of the 9 true-or-false or multiple-choice items (63%) from NSF’s factual knowledge questions. This score is not substantially lower than the 2012 and 2014 scores of 5.8 and is thus generally consistent with recent years (Figure 7-7; Appendix Table 7-8). Two additional true-or-false questions about the theory of evolution and the Big Bang, which are not included in the 9-item measure, are also discussed subsequently.
Mean number of correct answers to trend factual knowledge of science scale: 1992–2016
Note(s)
Mean number of correct answers to nine questions included in trend factual knowledge of science scale; see Appendix Table 7-2 for explanation and list of questions. See Appendix Table 7-8 for percentage of questions answered correctly. See Appendix Tables 7-9 and 7-10 for responses to individual questions.
Source(s)
National Science Foundation, National Center for Science and Engineering Statistics, Survey of Public Attitudes Toward and Understanding of Science and Technology (1992–2001); NORC at the University of Chicago, General Social Survey (2006–16).
Science and Engineering Indicators 2018
The public’s measured level of factual knowledge about science has not changed much over the past two decades. Since 2001, the average number of correct answers to a series of 9 questions for which fully comparable data have been collected has ranged from 5.6 to 5.8 correct responses—a difference that is small enough that it could have occurred by chance, given the sample size—although scores for individual questions have varied somewhat more over time (Figure 7-8; Appendix Table 7-8, Appendix Table 7-9, and Appendix Table 7-10). The Pew Research Center (2013) used several of the same questions in a 2013 survey and received similar results.
Within the GSS data, trend factual knowledge of science is strongly related to individuals’ level of formal schooling and the number of science and mathematics courses completed (Figure 7-8; Appendix Table 7-8 and Appendix Table 7-10). Those who had not completed high school answered 43% of the 9 questions correctly, whereas those for whom a bachelor’s degree was their highest academic credential answered 74% of the questions correctly. Similarly, Americans who took five or fewer high school or college science or mathematics courses answered 55% of the questions correctly, whereas those who had taken nine or more courses answered 80% correctly (Appendix Table 7-8).
Although NSF survey data showed a large gap in scientific knowledge between the top-performing age groups (typically those in the middle range of age categories) and those in the older age groups, the current data suggest that this gap has narrowed (Appendix Table 7-8). For example, in 1992, 35- to 44-year-olds answered 65% of the trend questions correctly, whereas those 65 years or older answered 47% of the questions correctly. By 2016, the top-performing age group (25- to 34-year-olds) answered 67% of the questions correctly, while respondents age 65 years or older answered 55% correctly. The gap thus shrank from 18 percentage points to 12 percentage points between 1992 and 2016. Analyses of surveys conducted between 1979 and 2006 concluded that public understanding of science has increased over time and by generation, even after controlling for formal education levels (Losh 2010, 2012).
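The narrowing described here is a straightforward percentage-point comparison. As a minimal sketch using the figures cited in the text (the function name is illustrative):

```python
# Percentage-point (pp) gap between the top-performing age group and
# respondents age 65 or older, for the two survey years cited above.
def gap_pp(top_group_pct, older_group_pct):
    """Difference between two group scores, in percentage points."""
    return top_group_pct - older_group_pct

gap_1992 = gap_pp(65, 47)   # 35- to 44-year-olds vs. 65+, 1992
gap_2016 = gap_pp(67, 55)   # 25- to 34-year-olds vs. 65+, 2016

print(gap_1992, gap_2016, gap_1992 - gap_2016)  # 18 12 6
```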
Factual knowledge about science, as measured in the current GSS, is also associated with respondents’ sex. Men (67%) tend to answer somewhat more factual science knowledge questions in the GSS correctly than women (60%) (Figure 7-8). The Pew Research Center found a similar result using a set of 12 questions that were different from those used by the NSF survey (Funk and Goo 2015). In the Pew survey, men’s scores averaged 8.6, whereas women’s scores averaged 7.3. For the NSF S&T survey (i.e., the current GSS data), men have typically done slightly better on physical science questions, whereas women have performed more similarly to men on biology questions (Appendix Table 7-10). However, men did better than women on an expanded set of biology questions in the 2008 GSS, which suggests that sex differences in correct answers may depend on the specific questions asked. The 2015 Pew Research Center questions, which were not differentiated by subject, focus primarily on the physical sciences, and the organization has not consistently seen these types of gender differences for questions focused on health and biomedical knowledge (Funk and Goo 2015). Some evidence also suggests that men might be more likely to guess rather than say they do not know the correct answer, which could partly account for men’s slightly higher science knowledge scores (Mondak 2004).
Correct answers to trend factual knowledge of science scale, by respondent characteristic: 2016
Note(s)
Data reflect average percentage of nine questions answered correctly. "Don’t know" responses and refusals to respond counted as incorrect. See Appendix Table 7-2 for explanation, list of questions, and additional respondent characteristics. See Appendix Tables 7-9 and 7-10 for responses to individual questions.
Source(s)
NORC at the University of Chicago, General Social Survey (2016).
Science and Engineering Indicators 2018
Evolution and the Big Bang
The GSS includes two additional true-or-false science questions that are not included in the index calculation because Americans’ responses appear to reflect factors beyond familiarity with scientific facts. One of these questions is about evolution, and the other is about the origins of the universe. In 2016, 52% of Americans correctly indicated that “human beings, as we know them today, developed from earlier species of animals,” and 39% correctly indicated that “the universe began with a big explosion” (Appendix Table 7-10). Both scores are relatively low compared with scores on the other knowledge questions in the survey. The percentage of Americans answering the evolution question correctly has risen from a low of 42% in 2004. The percentage answering the origins of the universe question correctly is similar to its level since 2010 (38%) but higher than during much of the previous two decades—it reached lows of 32% in 1990 and 1997 (Appendix Table 7-9).
Those with more education and higher factual knowledge scores typically perform better on these two questions. Younger respondents are also more likely to answer both questions correctly: for example, 70% of those ages 18–24 years answered the evolution question correctly, compared with 45% of those 65 or older. This age pattern is less pronounced for the other knowledge questions described above (Appendix Table 7-10).
An additional question-wording experiment was included in the 2016 GSS to expand on similar experiments conducted in 2004 (NSB 2006) and 2012 (NSB 2014, 2016). These experiments involve randomly giving each survey respondent one of two or three different versions of a question and then comparing the results. The earlier experiments showed that changing the wording of the evolution and origin of the universe questions substantially increased the percentage of respondents answering them correctly. For example, in 2012, 48% of those asked whether it was true or false that “human beings, as we know them today, developed from earlier species of animals” gave the correct answer of true, but 72% answered correctly when the same statement was prefaced with “According to the theory of evolution.” Similarly, 39% of respondents correctly stated it was true that “the universe began with a big explosion,” but 60% gave the correct answer when the same statement was prefaced with “According to astronomers” (Appendix Table 7-9).
Similar patterns were evident in the 2016 version of the experiments. For evolution, 74% of respondents gave the correct response when asked whether it was true or false that “elephants, as we know them today, descended from earlier species of animals” (for a discussion of this question, see [Maitland, Tourangeau, and Yan 2014] and [Maitland, Tourangeau, Yan, Bell, et al. 2014]). This is 22 percentage points higher than the 52% who answered correctly when asked the similar question about humans. For the Big Bang question, 69% gave the correct response when the preface “According to astronomers” was added to the original question, and 64% gave the correct response when asked whether it was true or false that “the universe has been expanding ever since it began.” These represent differences of 30 percentage points and 25 percentage points, respectively, from the 39% of respondents who gave the correct response to the original question about whether the universe began with a big explosion. As before, the results suggest that the evolution and origin of the universe items, as originally worded, may lead some people to provide incorrect responses based on factors other than their knowledge of what most scientists believe. While issues of personal identity are not the focus of Indicators, research has pointed to the important role that religious beliefs play in shaping views about evolution and the origins of the universe (e.g., [Roos 2014]). For additional findings related to these questions, see sidebar Testing Alternative Wording of the Big Bang and Evolution Questions.
International Comparisons
There are few current international efforts to measure science knowledge in the way it is measured in the United States, as scholarly attention has shifted toward understanding attitudes about science and scientists. This shift likely reflects the aforementioned evidence that science knowledge is only weakly related to science attitudes (Bauer, Allum, and Miller 2007), including support for science (NASEM 2016c). Most of the data now available are thus somewhat dated.
Knowledge scores for individual items vary from country to country, and it is rare for one country to consistently outperform others across all items in a given year (Table 7-1). One exception is a 2013 Canadian survey in which Canadians scored as well as or better than Americans and residents of most other countries on the core science questions (CCA 2014). For the physical and biological science questions, knowledge scores are relatively low in China, Russia, and Malaysia (CRISP 2016; Gokhberg and Shuvalova 2004; MASTIC 2010). Compared with overall scores in the United States and the European Union (EU) (European Commission 2005), scores in Japan (NISTEP 2012) are also relatively low for several questions.
Scores on a smaller set of four questions administered in 12 European countries in 1992 and 2005 show each country performing better in 2005 (European Commission 2005), in contrast to a flat trend in corresponding U.S. data. In Europe, as in the United States, men, younger adults, and more educated people tended to score higher on these questions.
Percentage of correct answers to factual knowledge questions in physical and biological sciences, by region or country: Most recent year
Reasoning and Understanding the Scientific Process
U.S. Patterns and Trends
Another indicator of the public understanding of science focuses on the public’s understanding of how science generates and assesses evidence rather than knowledge of particular science facts. Such measures reflect recognition that knowledge of specific S&T facts is conceptually different from knowledge about the overall scientific processes (Miller 1998), as well as the increased emphasis placed on process in science education (NRC 2012).
Data on three scientific process elements—probability, experimental design, and the scientific method—show trends in Americans’ understanding of the process of scientific inquiry. One set of questions tests how well respondents apply the principles of probabilistic reasoning to a set of questions about a couple whose children have a 1-in-4 chance of suffering from an inherited disease. A second set of questions deals with the logic of experimental design, asking respondents about the best way to design a test of a new drug for high blood pressure. A third open-ended question probes what respondents think it means to study something scientifically. Because probability, experimental design, and the scientific method are all central to scientific research, these questions are relevant to how respondents evaluate scientific evidence. These measures are reviewed separately and then as a combined indicator of public understanding about scientific inquiry.
With regard to probability, 82% of Americans in 2016 correctly indicated that the fact that a couple’s first child has the illness has no bearing on whether three future children will have the illness. In addition, about 72% of Americans correctly responded that the odds of a genetic illness are equal for all of a couple’s children. Overall, 64% answered both probability questions correctly (Table 7-2; Appendix Table 7-11). The public’s understanding of probability has been fairly stable over time: the percentage giving both correct responses has ranged from 64% to 69% since 1999 and has been no lower than 62% dating back to 1992 (Table 7-2).
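The independence principle these items test can be illustrated with a short simulation. The sketch below is not part of the survey; it simply assumes each child independently has a 1-in-4 chance of inheriting the illness and checks that the first child's outcome does not change the odds for later children:

```python
import random

random.seed(0)

P_ILLNESS = 0.25    # each child independently has a 1-in-4 chance
FAMILIES = 200_000  # simulated four-child families

later_ill = 0
later_total = 0
for _ in range(FAMILIES):
    children = [random.random() < P_ILLNESS for _ in range(4)]
    if children[0]:                  # condition on the first child being ill
        later_ill += sum(children[1:])
        later_total += 3

# Independence: among families whose first child is ill, later children
# are still ill at a rate of about 1 in 4.
rate = later_ill / later_total
print(round(rate, 3))
```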
With regard to understanding experiments, about half (51%) of Americans were able to answer a question about how to test a drug and then provide a correct response to an open-ended question that required them to explain the rationale for an experimental design (i.e., giving 500 people a drug while not giving the drug to 500 additional people, who then serve as a control group) (Table 7-2). The 2016 results, similar to the 2014 results and those from most recent survey years, are a substantial improvement over the unusually low 2012 results, in which only 34% answered this set of questions correctly. Although the percentage of correct responses has increased on average over the previous two decades, there has also been substantial year-to-year variation that may in part reflect reliance on human coders to categorize responses.
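The design rationale that the open-ended question asks respondents to articulate (random assignment, with an untreated group serving as the comparison) can be sketched as follows; this is a hypothetical illustration, not the survey's own materials:

```python
import random

random.seed(1)

participants = list(range(1000))   # 1,000 hypothetical volunteers
random.shuffle(participants)       # randomize before splitting

treatment = participants[:500]     # 500 receive the new drug
control = participants[500:]       # 500 do not; they form the control group

# Randomization balances other factors (age, diet, illness severity) on
# average, so outcome differences between the groups can be attributed
# to the drug rather than to pre-existing differences.
assert len(treatment) == len(control) == 500
assert set(treatment).isdisjoint(control)
```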
When all the scientific reasoning questions are combined into an overall measure of understanding of scientific inquiry (Figure 7-9), about 43% of Americans could both correctly respond to the two questions about probability and provide a correct response to at least one of the open-ended questions about experimental design or what it means to study something scientifically (Table 7-2). The 2016 proportion was not meaningfully different from the 46% found in 2014. Further, 2014 had the highest proportion of correct responses on surveys for which NSF has data, dating back to 1995. In general, men, respondents with more education, and respondents with higher incomes did better on the scientific inquiry questions. Both younger and older age groups did relatively less well compared with those in the middle of the age range (Appendix Table 7-11).
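The combined indicator described above is a logical conjunction of the component items. A minimal sketch of how such a composite might be scored (the function and argument names are illustrative, not GSS variable names):

```python
def understands_inquiry(prob1, prob2, experiment, sci_study):
    """True only if both probability items are answered correctly and at
    least one open-ended item (experimental design, or what it means to
    study something scientifically) is answered correctly."""
    return prob1 and prob2 and (experiment or sci_study)

print(understands_inquiry(True, True, False, True))   # True
print(understands_inquiry(True, False, True, True))   # False
```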
Correct answers to scientific process questions: Selected years, 1999–2016
Understanding scientific inquiry, by respondent characteristic: 2016
Note(s)
See Appendix Table 7-11 for explanation of understanding scientific inquiry and questions included in the index and additional respondent characteristics.
Source(s)
NORC at the University of Chicago, General Social Survey (2016).
Science and Engineering Indicators 2018
International Comparisons
Reasoning and understanding have not been the focus of surveys in most other countries in recent years. A 2010 Chinese survey reported that 49% understood the idea of probability, 20% understood the need for comparisons in research, and 31% understood the idea of scientific research (CRISP 2010). In a July 2011 Japanese survey, 62% correctly answered a multiple-choice question on experiments related to the use of a control group, whereas 57% answered correctly in a follow-up December 2011 survey (NISTEP 2012). As noted previously, 66% of Americans provided a correct response to a similar question in 2014.
Pseudoscience
Another indicator of public understanding about S&T comes from a measure focused on the public’s capacity to distinguish science from pseudoscience. One such measure, Americans’ views on whether astrology is scientific, has been included in Indicators because of the availability of data going back to the late 1970s. Other examples of pseudoscience include the belief in lucky numbers, extrasensory perception, or magnetic therapy.
More Americans see astrology as unscientific today than in the past, although there has been some variation in recent years. In 2016, about 60% of Americans said astrology is “not at all scientific,” a value near the middle of the historical range and down somewhat from 65% in 2014. Twenty-nine percent said they thought astrology was “sort of scientific,” and the remainder said they thought astrology was “very scientific” (8%) or that they “didn’t know how scientific” astrology is (3%). The percentage of Americans who report seeing astrology as unscientific has ranged between 50% (1979) and 66% (2004).
Respondents with more years of formal education and higher income were less likely to see astrology as scientific. For example, in 2016, 76% of those with bachelor’s degrees indicated that astrology is “not at all scientific,” compared with 57% of those whose highest level of education was high school. Age was also related to perceptions of astrology. Younger respondents were the least likely to reject astrology, with only 54% of the youngest age group (18–24 years old) and 53% of the next group (25–34 years old) saying that astrology is “not at all scientific.” At least 60% of all other groups rejected astrology (Appendix Table 7-12).
Perceived Understanding of Scientific Research
U.S. Patterns and Trends
While factual knowledge is important, people may also develop attitudes and engage in behaviors because of their perception of how much they know (Ladwig et al. 2012). The NSF survey has included data on the degree to which respondents believe they “have a clear understanding of what it means” when they “read or hear the term scientific study.” In 2016, 31% of Americans said they thought they had a clear understanding of the meaning, while 48% said they felt they had a “general understanding” of the topic. Another 21% said they had “little understanding” (19%) or said they did not know (2%) (Appendix Table 7-13 and Appendix Table 7-14).
The proportion of respondents saying they have a clear understanding of what the term scientific study means was 22% in 1979 and climbed to a high of 37% in 1997 before dropping back down to 24% in 2012 (Appendix Table 7-14). The current level (31%) matches the average across all survey years; men, those with more education, and those with higher incomes are most likely to say they have a clear understanding. A perceived sense of “clear understanding” also appears to peak in the 25- to 34-year-old age group. Factual knowledge also matters. About 51% of those in the highest quartile of factual knowledge as measured by the NSF questions said they had a clear understanding of the term scientific study, compared with about 15% of those in the lowest quartile. About 4% of those in the highest knowledge quartile and 44% of those in the lowest knowledge quartile said they had “little understanding” of the term (Appendix Table 7-13).
International Comparisons
Only a small number of countries ask about their residents’ perceived understanding of science. In Switzerland, about 28% agreed with a statement about being well informed about science and research by choosing 4 or 5 on a 5-point scale, where 1 indicated complete disagreement with the statement and 5 indicated complete agreement (Schäfer and Metag 2016). This is similar to the 31% who expressed “clear understanding” in the United States.