When explaining a cause-and-effect relationship, which type of variable is the cause?

The magnitude of an indirect effect is obtained by taking the product of the path coefficients along the pathway linking the two causally related variables. The total indirect effect between two variables in a path model is therefore the sum of these products across all indirect pathways. For example, a child's school engagement affects educational attainment indirectly through its effect on achievement, so the magnitude of the indirect effect of engagement on attainment can be estimated by multiplying the path from school engagement to achievement by the path from achievement to educational attainment, (pEA × pAS).

Calculating the total indirect effect between mother's education and child's educational attainment is a bit more complicated but follows the same logic. Maternal education affects educational attainment indirectly through the child's achievement, and the magnitude of this indirect effect is (pEA × pAM). Maternal education also indirectly influences educational attainment via the child's school engagement, and the magnitude of that effect is (pES × pSM). In addition, mother's education influences the child's educational attainment through its effect on school engagement and, in turn, on achievement; the magnitude of this indirect effect is (pEA × pAS × pSM). Thus, the total indirect effect of mother's education on child's educational attainment is the sum of all of these indirect effects, (pEA × pAM) + (pES × pSM) + (pEA × pAS × pSM). Because mother's education is also correlated with parental income, each of these indirect effects also operates through that correlation.
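As a minimal numerical sketch (not from the chapter), the snippet below computes these products for a set of invented path-coefficient values; the names p_AS, p_EA, p_ES, p_AM, and p_SM mirror the notation above, and the values themselves are hypothetical.

```python
# Hypothetical path coefficients (values invented for illustration):
#   p_AS: school engagement  -> achievement
#   p_EA: achievement        -> educational attainment
#   p_ES: school engagement  -> educational attainment
#   p_AM: mother's education -> achievement
#   p_SM: mother's education -> school engagement
p_AS, p_EA, p_ES, p_AM, p_SM = 0.40, 0.55, 0.20, 0.30, 0.25

# Indirect effect of school engagement on attainment, via achievement
indirect_engagement = p_EA * p_AS

# Three indirect pathways from mother's education to child's attainment
via_achievement = p_EA * p_AM                         # M -> A -> E
via_engagement = p_ES * p_SM                          # M -> S -> E
via_engagement_and_achievement = p_EA * p_AS * p_SM   # M -> S -> A -> E

total_indirect_mother = (via_achievement + via_engagement
                         + via_engagement_and_achievement)
print(f"Indirect effect of engagement on attainment: {indirect_engagement:.3f}")
print(f"Total indirect effect of mother's education: {total_indirect_mother:.3f}")
```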

URL: https://www.sciencedirect.com/science/article/pii/B0123693985004837

Causal Inference

Alberto Abadie, in Encyclopedia of Social Measurement, 2005

Introduction

Establishing causal relationships is an important goal of empirical research in social sciences. Unfortunately, specific causal links from one variable, D, to another, Y, cannot usually be assessed from the observed association between the two variables. The reason is that at least part of the observed association between two variables may arise by reverse causation (the effect of Y on D) or by the confounding effect of a third variable, X, on D and Y.

Consider, for example, a central question in education research: “Does class size affect test scores of primary school students? If so, by how much?” A researcher may be tempted to address this question by comparing test scores between primary school students in large and small classes. Small classes, however, may prevail in wealthy districts, which may have, on average, higher endowments of other educational inputs (highly qualified teachers, more computers per student, etc.). If other educational inputs have a positive effect on test scores, the researcher may observe a positive association between small classes and higher test scores, even if small classes do not have any direct effect on students' scores. As a result, an observed association between class size and average test scores should not be interpreted as evidence that small classes improve students' scores.

This gives the rationale for the often-invoked mantra “association does not imply causation.” Unfortunately, the mantra does not say a word about what implies causation. Moreover, the exact meaning of causation needs to be established explicitly before trying to learn about it.
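The class-size example lends itself to a small simulation (illustrative only; all functional forms and parameter values are invented). Class size is given no direct effect on scores, yet a naive comparison shows an association because wealthier districts have both smaller classes and better other inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder X: district wealth
wealth = rng.normal(size=n)

# D: class size; wealthier districts tend to have smaller classes
class_size = 25 - 3 * wealth + rng.normal(scale=2, size=n)

# Y: test score; depends on wealth (other inputs) but NOT on class size
score = 60 + 5 * wealth + rng.normal(scale=5, size=n)

# Naive association: smaller classes "look" better despite a zero direct effect
print("corr(class size, score):", np.corrcoef(class_size, score)[0, 1])
```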

URL: https://www.sciencedirect.com/science/article/pii/B0123693985001821

Epidemiology

R.H. Riffenburgh, in Statistics in Medicine (Third Edition), 2012

Inferring Causation

Identifying causal relationships in observational studies can be difficult. If neither chance nor bias is judged a likely explanation of a study’s findings, a valid statistical association may be said to exist between an exposure and an outcome. A statistical association between two variables does not, by itself, establish a cause-and-effect relationship. The next step, inferring a cause, follows a set of logical criteria by which associations can be judged for possible causality, first described by Sir Austin Bradford Hill in 1965.

Evidence Supporting Causality

Seven criteria currently in widespread use facilitate logical analysis and interpretation of epidemiologic data:

1. Size of effect. The difference between outcomes in those given an exposure and those not given the exposure is termed the effect. Large effects are more likely to be causal than small effects. Effect size is estimated by the relative risk (RR), the ratio of the probability of having the disease among the exposed to the probability of having the disease among the unexposed; see Section 10.1 for further details. As a reference, an RR > 2.0 in a well-designed study may be added to the accumulating evidence of causation (see the sketch after this list).

2. Strength of association. Strength of association is judged by the p-value, the probability of observing an association at least as strong as the one found if no true association exists. A weak association is more easily dismissed as resulting from random or systematic error. By convention, p < 0.05 is accepted as evidence of association.

3. Consistency of association. A particular effect should be reproducible in different settings and populations.

4. Specificity of association. Specificity indicates how exclusively a particular effect can be predicted from the occurrence of the potential cause. Specificity is complete when one manifestation follows from only one cause.

5. Temporality. The putative cause must precede the putative effect.

6. Biologic gradient. There should be evidence of a cause-to-outcome process, frequently expressed as a dose–response effect, the term being carried over from clinical usage.

7. Biologic plausibility. There should be a reasonable biologic model to explain the apparent association.
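As a quick numerical illustration of the size-of-effect criterion (counts invented, not taken from the text), the sketch below computes a relative risk from a 2×2 exposure-by-outcome table:

```python
# Hypothetical 2x2 table (counts invented for illustration)
#                 disease   no disease
# exposed            40         160
# unexposed          15         285

risk_exposed = 40 / (40 + 160)    # probability of disease given exposure
risk_unexposed = 15 / (15 + 285)  # probability of disease given no exposure

relative_risk = risk_exposed / risk_unexposed
print(f"RR = {relative_risk:.2f}")  # RR > 2.0 adds to the evidence of causation
```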

Further information on the evidence list can be found elsewhere in the literature.52,86,122

URL: https://www.sciencedirect.com/science/article/pii/B9780123848642000251

Validity, Data Sources

Michael P. McDonald, in Encyclopedia of Social Measurement, 2005

Internal and External Validity

The causal relationship of one concept to another is sometimes also discussed in terms of validity. Internal validity refers to the robustness of the relationship between one concept and another within the research question under study. Much of the discussion in the sections on threats to validity and tests for validity is pertinent to the internal validity of a measure, vis-à-vis another concept with which it is theoretically correlated. External validity refers to the broader generalizability of the relationship between the two concepts under study. Is the uncovered relationship applicable outside of the research study?

The relationship between one measure and another may be a true relationship, or it may be a spurious relationship caused by invalid measurement of one of the measures. That is, the two measures may appear related because of improper measurement, and not because they are truly correlated with one another. Similarly, two measures that are truly related may remain undetected because invalid measurement prevents discovery of the correlation. By now, the reader should be aware that no measure is perfectly valid; the hope is that the error induced in projecting theory onto the real world is small and unbiased, so that relationships, be they findings that two measures are or are not correlated, are correctly determined.

All of the threats to validity apply to the strength of the internal validity of the relationship between two measures, as both measures must be valid in order for the true relationship between them, if any exists, to be determined. Much of the discussion of tests of content and convergent validity also applies to internal validity. In addition, researchers should consider the rules of inference in determining whether a relationship is real or spurious. Are there uncontrolled confounding factors driving the relationship? A classic example in time-series analysis is cointegration, the movement of two series together over time, such as the size of the population and the size of the economy, or any other measures that grow or shrink over time. In the earlier example of voter turnout, the confounding influence of a growing ineligible population led researchers to incorrectly correlate a largely invalid measure of decreasing voter turnout with negative advertising, a decline of social capital, the rise in cable television, campaign financing, the death of the World War II generation, globalization, and decline in voter mobilization efforts by the political parties.
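The trending-series problem mentioned above is easy to reproduce. The short simulation below (purely illustrative; the two series stand in for, say, population and economic output) shows two independent upward-drifting random walks whose levels appear strongly correlated, a correlation that largely vanishes once the common trend is removed by differencing:

```python
import numpy as np

rng = np.random.default_rng(1)
t = 200

# Two unrelated series that both trend upward over time
population = np.cumsum(1.0 + rng.normal(size=t))
economy = np.cumsum(0.8 + rng.normal(size=t))

# Levels look highly correlated purely because both grow over time
print("corr(levels):     ", np.corrcoef(population, economy)[0, 1])

# First differences remove the common trend; the apparent relationship fades
print("corr(differences):", np.corrcoef(np.diff(population), np.diff(economy))[0, 1])
```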

External validity refers to the generalizability of a relationship outside the setting of the study. Perhaps the characteristic that most distinguishes the social sciences from the hard sciences is that social scientists do not have the luxury of performing controlled experiments. One cannot go back in history and change events to determine hypothetical counterfactuals, whereas physicists may repeatedly bash particles together and observe how changing conditions alter outcomes. The closest the social sciences come to controlled experiments is in laboratory settings where human subjects are observed responding to stimuli in controlled situations. But are these laboratory experiments externally valid to real situations?

In a classic psychology experiment, a subject seated in a chair is told that the button in front of them is connected to an electric probe attached to a second subject. When the button is pushed, an increasing amount of voltage is delivered. Unknown to the subject, the button is only hooked to a speaker simulating screams of pain. Under the right circumstances, subjects are coerced into delivering what would be fatal doses of voltage.

Such laboratory experiments raise the question of whether, in real situations, subjects would respond in a similar manner and deliver a fatal charge to another person, i.e., is the experiment externally valid? Psychologists, sociologists, political scientists, economists, cognitive theorists, and others who engage in social science laboratory experiments painstakingly make the laboratory as close to the real world as possible in order to control for the confounding influence that people may behave differently if they know they are being observed. For example, this may take the form of one-way windows to observe child behavior. Unfortunately, sometimes the laboratory atmosphere is impossible to remove, such as with subjects engaged in computer simulations, and subjects are usually aware prior to engaging in a laboratory experiment that they are being observed.

External validity is also an issue in forecasting, where models based on observed relationships may fail to predict hypothetical or unobserved events. For example, economists often describe the stock market as a random walk. Despite analyst charts that graph levels of support and simple trend lines, no model exists to predict what will happen in the future. For this reason, mutual funds come with the disclaimer, "past performance is no guarantee of future returns." A successful mutual fund manager is likely to be no more successful than another in the next business quarter.

The stock market is perhaps the best example of a system that is highly reactive to external shocks. Unanticipated shocks are the bane of forecasting. As long as conditions remain constant, modeling will be at least somewhat accurate, but if the world fundamentally changes, the model may fail. Similarly, forecasts of extreme values outside the scope of the research design may also fail, and when the world acts within the margin of error of the forecast, predictions, such as the winner of the 2000 presidential election, may be indeterminate.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985000463

Returns to Education in Developed Countries

M. Gunderson, P. Oreopoulos, in International Encyclopedia of Education (Third Edition), 2010

Introduction

Understanding the causal relationship between education and the financial returns to such education is important for addressing a range of questions of practical and policy importance. What are the private returns that individuals can expect from investing in education? How do those returns vary by factors such as level of education, field of study, and individual background characteristics? How have those returns varied over time and across different countries? Is there an extra effect from a year of education if that year provides the credential of completing a phase of study such as graduating from high school or university? If potential dropouts are compelled to stay in school longer by compulsory school laws, do they receive returns that are higher or lower than the average returns? Are the returns the result of education enhancing the productivity and skills of individuals or are they the result of signaling of such conventionally unobserved factors such as ability, motivation, and time-management skills? What are the appropriate methodologies for estimating the returns to education, especially for dealing with factors such as measurement error, ability bias, credential effects, and financial constraints?

The purpose of the article is to address these practical and methodological questions. The emphasis here is on the causal returns to education after controlling for other observable and unobservable factors, such as innate ability or motivation, that may affect the outcomes associated with higher education. Understanding the underlying causal process is important for policy purposes, so as to ascertain the effect of policy interventions, for example, to reallocate resources from fields of low returns to fields of high returns, raise the age of compulsory schooling, or institute policies to deter dropping out. It can also be important for predicting future changes as the underlying causal factors change.

The article moves from the simple to the more complex. It starts with estimates of the return to education based on basic schooling equations where education is not exogenous but can be correlated with other factors that can affect outcomes. It then moves to a discussion of refinements to the basic model: the appropriate measure of earnings and the inclusion of nonwage benefits; measurement error in the schooling variable; corrections for ability bias, omitted variables, and selection bias; and the possibility of heterogeneous returns and credential or sheepskin effects.
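As a hedged illustration of what a basic schooling equation can look like in practice, the sketch below fits a Mincer-style log-earnings regression to simulated data. The data-generating process, coefficient values, and variable names are invented, and the example deliberately builds in the ability bias the article discusses: schooling is correlated with unobserved ability, so the naive OLS return is overstated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Simulated data: schooling (years), experience (years), and an unobserved
# ability term that is deliberately correlated with schooling
ability = rng.normal(size=n)
schooling = np.clip(12 + 2 * ability + rng.normal(scale=2, size=n), 6, 22)
experience = rng.uniform(0, 30, size=n)

# "True" log-earnings process: 8% return to schooling plus an ability effect
log_wage = (1.0 + 0.08 * schooling + 0.02 * experience
            - 0.0003 * experience**2 + 0.10 * ability
            + rng.normal(scale=0.3, size=n))

# Basic schooling equation: OLS of log earnings on schooling and experience,
# with ability omitted (it is unobserved by the researcher)
X = np.column_stack([np.ones(n), schooling, experience, experience**2])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(f"estimated return to a year of schooling: {beta[1]:.3f}")  # biased upward
```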

URL: https://www.sciencedirect.com/science/article/pii/B978008044894701215X

On Causality in Nonlinear Complex Systems

James A. Coffman, in Philosophy of Complex Systems, 2011

Summary

Science seeks to delineate causal relationships in an effort to explain empirical phenomena, with the ultimate goal being to understand, and whenever possible predict, events in the natural world. In the biological sciences, and especially biomedical science, causality is typically reduced to those molecular and cellular mechanisms that can be isolated in the laboratory and thence manipulated experimentally. However, increasing awareness of emergent phenomena produced by complexity and non-linearity has exposed the limitations of such reductionism. Events in nature are the outcome of processes carried out by complex systems of interactions produced by historical contingency within dissipative structures that are far from thermodynamic equilibrium. As such, they cannot be adequately explained in terms of lower level mechanisms that are elucidated under artificial laboratory conditions. Rather, a full causal explanation requires comprehensive examination of the flow networks and hierarchical relationships that define a system and the context within which it exists.

The fact that hierarchical context plays a critical role in determining the outcome of events reinvigorates Aristotelian conceptions of causality. One such perspective, which I refer to as developmentalism, views all non-random causality as a product of development at some level. Development (‘self-organization’) occurs via the selective agency of autocatalytic cycles inherent in certain configurations of processes, which competitively organizes a system as resources become limiting. In this view, bottom-up causality (the concern of reductionism) holds sway mainly in immature systems, whereas top-down causality (organizational or informational constraint) dominates mature systems, the functioning of which is less dependent on (and more constraining of) the activities of their lower-level parts. Extrapolating the developmentalist perspective to the limit, one might posit that the ultimate arbiters of causality, the ‘laws of physics’, are themselves no more than organizational constraints produced by (and contingent upon) the early development of the universe. The causal relationships that define chemistry and biology are more highly specified organizational constraints produced by later development. Developmentalism helps resolve a number of long-standing dialectics concerned with causality, including reductionism/holism, orthogenesis/adaptation, and stasis/change.

In biological sciences, developmentalism engenders a discourse that overcomes barriers imposed by the still-dominant paradigms of molecular reductionism on the one hand and Darwinian evolution on the other. With regard to the former, it provides a better interpretive framework for the new science of ‘systems-biology’, which seeks to elucidate regulatory networks that control ontogeny, stem cell biology, and the etiology of disease. With regard to the latter, it provides an intelligible bridge between chemistry and biology, and hence an explanation for the natural origin of life. Finally, developmentalism, being an inherently ecological perspective, is well-suited as a paradigm for addressing problems of environmental management and sustainability.

URL: https://www.sciencedirect.com/science/article/pii/B9780444520760500109

Business, Social Science Methods Used in

Gayle R. Jennings, in Encyclopedia of Social Measurement, 2005

Experimental and Quasi-experimental Methods

Experiments enable researchers to determine causal relationships between variables in controlled settings (laboratories). Researchers generally manipulate the independent variable in order to determine the impact on a dependent variable. Such manipulations are also called treatments. In experiments, researchers attempt to control confounding variables and extraneous variables. Confounding variables may mask the impact of another variable. Extraneous variables may influence the dependent variable in addition to the independent variable. Advantages of experiments include the ability to control variables in an artificial environment. Disadvantages include the mismatch between reality and laboratory settings and the focus on a narrow range of variables at any one time. Laboratory experiments enable researchers to control experiments to a greater degree than experiments conducted in simulated or real businesses or business-related environments. Experiments in the field (business and business-related environments) may prove challenging because of issues related to gaining access and ethical approval. However, field experiments (natural experiments) allow the measurement of the influence of the independent variable on the dependent variable within a real-world context, although not all extraneous variables are controllable. The classical experimental method involves independent and dependent variables, random sampling, control groups, and pre- and posttests. Quasi-experiments omit aspects of the classical experimental method (such as omission of a control group or absence of a pretest).
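A minimal simulation (invented data and effect sizes) of the classical design described above, with random assignment, a control group, and pre- and posttests, might look as follows; the treatment effect is estimated from the difference in pre-to-post change between groups:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000

# Random assignment to treatment (manipulated independent variable) or control
treated = rng.random(n) < 0.5

# Pretest: baseline measurement of the dependent variable
pretest = 50 + rng.normal(scale=10, size=n)

# Posttest: everyone drifts up by 2 points; the treatment adds a further 5
posttest = pretest + 2 + 5 * treated + rng.normal(scale=5, size=n)

# Classical estimate: difference in (post - pre) change between the groups
change = posttest - pretest
effect = change[treated].mean() - change[~treated].mean()
print(f"estimated treatment effect: {effect:.2f}")  # close to the true value of 5
```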

URL: https://www.sciencedirect.com/science/article/pii/B012369398500270X

Energy–growth nexus, domestic credit, and environmental sustainability: a panel causality analysis

Matheus Belucio, ... António Cardoso Marques, in The Extended Energy-Growth Nexus, 2019

6.5 Conclusion

In this chapter, we studied the causal relationships among the energy–growth nexus, domestic credit, and environmental sustainability in two ways, namely (1) the bivariate relationships between variables; and (2) the elaboration of a PVAR combined with the Granger causality test. Nineteen high-income countries were selected for the study. The data comprised annual observations from 2001 to 2016.
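For readers unfamiliar with the bivariate step, the sketch below runs a Granger causality test on two simulated series using statsmodels; the series names, lag order, and data-generating process are invented for illustration and are unrelated to the chapter's 19-country panel:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
t = 200

# Simulated example in which "credit" leads "gdp" by one period
credit = rng.normal(size=t)
gdp = np.empty(t)
gdp[0] = rng.normal()
for i in range(1, t):
    gdp[i] = 0.5 * gdp[i - 1] + 0.4 * credit[i - 1] + rng.normal(scale=0.5)

# Test whether the second column (credit) Granger-causes the first (gdp)
data = np.column_stack([gdp, credit])
res = grangercausalitytests(data, maxlag=2)
# res maps each lag to its test statistics (e.g., the ssr-based F-test p-value)
```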

The combination of the bivariate analysis and the PVAR allowed us to observe the relationships between variables while also providing more robust results. The presence of cross-sectional dependence indicated the need for bootstrap replications, which were performed.

The main results of the bivariate analysis revealed that GDP has no statistical relationship with primary energy consumption or with electricity generation. However, when the PVAR panel analysis is performed and other variables are added, the results show the existence of endogeneity in the model and reveal relationships between the variables.

Other variables that show no causal relationship in the bivariate model are domestic credit to the financial sector and GDP, but a relationship does appear in the endogenous PVAR model. At the same time, domestic credit to the private sector is shown to be related to GDP by both methods.

The role of credit in the economy is indispensable. It is paramount for the productive sector and for families. Our results suggest that domestic credit should be considered in future research on the energy–growth nexus. We also suggest that public policy makers consider this variable because of its ability to capture effects on economic growth.

URL: https://www.sciencedirect.com/science/article/pii/B9780128157190000061

Longitudinal Studies, Panel

Scott Menard, in Encyclopedia of Social Measurement, 2005

Longitudinal Panel Designs and the Study of Causal Relationships

In order to establish the existence of a causal relationship between any pair of variables, three criteria are essential: (1) the phenomena or variables in question must covary, as indicated, for example, by differences between experimental and control groups or by a nonzero correlation between the two variables; (2) the relationship must not be attributable to any other variable or set of variables—that is, it must not be spurious, but must remain nonzero even when other variables are controlled experimentally or statistically; and (3) the presumed cause must precede or be simultaneous in time with the presumed effect, as indicated by the change in the cause occurring no later than the associated effect it is supposed to produce. Evidence for covariation may easily be obtained from cross-sectional data. Evidence for nonspuriousness is never really sufficient (there is always something else that could have been controlled), but evidence for spuriousness or its absence in the presence of the controls used in a particular study can also be obtained from cross-sectional data. In some instances, it is also possible to infer temporal (and implicitly causal) order from cross-sectional data. For instance, whether an individual is male or female, along with other genetically determined characteristics, is determined at birth, and necessarily precedes any voluntary behavior on the part of the individual. Thus, although being male is a plausible cause of violent behavior (with respect to covariation, time ordering, and perhaps nonspuriousness), violent behavior is not realistically plausible as a cause of being male.

In the social sciences, however, we are often dealing with time-varying characteristics, and the order of change may not be readily ascertainable at the level of the individual research participant without longitudinal panel data. In the case of qualitative changes, with a short enough measurement interval (the time from one wave to the next), it should be possible to determine which of two changes occurred first (or that they occurred at the same time). That one change preceded another is not sufficient to support the claim that the first change must have caused the second (the “post hoc ergo propter hoc,” or “after therefore because of,” fallacy), because the criteria of covariation and nonspuriousness must also be satisfied, but it does mean that the second change can probably be ruled out as a cause of the first. For example, in one study, the proposition that substance use (underage alcohol use and illicit drug use) caused other illegal behavior (violent and property crime) was examined; the findings were that the initiation of substance use typically came after, not before, other illegal behavior. Although this evidence does not establish other illegal behavior as a cause of substance use, it seriously weakens the case that initiation of other illegal behavior is a result of substance use, insofar as an effect cannot precede a cause. Besides initiation or onset of a behavior, other possible qualitative changes include escalation of attitudes or behaviors (entry of a higher state on an ordinal scale), de-escalation or reduction (entry of a lower state on an ordinal scale), and suspension (permanent or temporary exit from all states that indicate involvement in a particular kind of behavior or agreement with a particular attitudinal statement).
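A minimal sketch of the temporal-ordering check described above (entirely simulated data, not the study's) records, for each panel member, the wave at which each behavior is first observed and tabulates which onset came first:

```python
import numpy as np

rng = np.random.default_rng(5)
n_people, n_waves = 500, 6

# Simulated wave of onset for two behaviors (0-based wave index).
# Here "other illegal behavior" tends to start before "substance use".
onset_illegal = rng.integers(0, n_waves, size=n_people)
onset_substance = np.clip(onset_illegal + rng.integers(-1, 3, size=n_people),
                          0, n_waves - 1)

illegal_first = np.sum(onset_illegal < onset_substance)
substance_first = np.sum(onset_substance < onset_illegal)
same_wave = np.sum(onset_substance == onset_illegal)

print(f"illegal behavior first: {illegal_first}, "
      f"substance use first: {substance_first}, same wave: {same_wave}")
# If substance use rarely precedes other illegal behavior, substance use can
# largely be ruled out as a cause of initiating that behavior.
```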

What is the variable that is the cause in a cause-and-effect relationship?

A central goal of most research is the identification of causal relationships, or demonstrating that a particular independent variable (the cause) has an effect on the dependent variable of interest (the effect).

Which variable is considered the cause in a causal relationship?

An independent variable is the cause, and a dependent variable is the effect. Dependent variables depend on independent variables.

What is a cause-and-effect relationship?

A cause-and-effect relationship is claimed where the following conditions are satisfied: the two events occur at the same time and in the same place; one event immediately precedes the other; the second event appears unlikely to have happened without the first event having occurred.

What are the 3 types of cause/effect relationships?

Two teaching strategies are often effective in teaching students to recognize and understand the cause/effect text structure: teaching signal words (because, so, and since) and teaching the three types of cause/effect relationships (stated, unstated, and sequential).