Structural Equation Modeling in Educational Psychology

by

Bune Choi

A Thesis Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
MASTER OF SCIENCE (Statistics)

August 1995

This thesis, written by Bune Choi under the direction of the Thesis Committee, and approved by all its members, has been presented to and accepted by the Dean of The Graduate School, in partial fulfillment of the requirements for the degree of Master of Science. Date: August 25, 1995.

Dedication

This thesis is dedicated to my parents.

Acknowledgments

I would like to offer my sincerest thanks to my academic advisor, Dr. Tavare, who provided me with valuable advice and materials. Without his academic and financial support, I could not have completed my study successfully. Also, I would like to acknowledge my committee members, Dr. Arratia and Dr. Goldstein. In addition, Dr. Hocevar deserves recognition for his help with EQS techniques.

Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
1 Introduction
2 Structural Equation Model (SEM)
  2.1 Model Specification
  2.2 Identification
    2.2.1 t-Rule
    2.2.2 Two-Step Rule
    2.2.3 MIMIC Rule
  2.3 Estimation
    2.3.1 Maximum Likelihood (ML)
    2.3.2 Unweighted Least Squares (ULS)
    2.3.3 Generalized Least Squares (GLS)
  2.4 Assessment of Fit
    2.4.1 Overall Model Fit Measures
    2.4.2 Component Fit Measures
  2.5 Respecification
3 Empirical Application
  3.1 Data
  3.2 Model Specification
    3.2.1 Structural Model
    3.2.2 Measurement Model
  3.3 Identification
  3.4 Estimation and Model Evaluation
4 Conclusions
Reference List
Appendix A: EQS Program for the CFA Model
Appendix B: EQS Program for the Path Models

List of Tables

3.1 Covariance Matrix and Standard Deviations for Observed Variables
3.2 The ML Estimates for the Parameters
3.3 Summary of Overall Fit
3.4 The Component Fit Measures (R² for the x and y indicators)

List of Figures

2.1 Example of a path diagram for SEM
3.1 Confirmatory Factor Analysis Model for Worry (ξ1), Visualization (ξ2), and Problem-Solving Ability (ξ3)
3.2 A Latent Variables Model of Worry (ξ1), Visualization (ξ2), and Problem-Solving Ability (η1)
3.3 Confirmatory Factor Analysis Model for Worry (ξ1), Visualization (ξ2), and Problem-Solving Ability (ξ3)
3.4 The First Step of Reformulation for the Two-Step Rule
3.5 The Second Step of the Latent Variable Model for the Two-Step Rule
3.6 The Respecified CFA Model

Abstract

This thesis focuses on a review of the theoretical foundation, and an empirical example, of structural equation modeling. The theoretical review includes model specification, identification, estimation, assessment of fit, and respecification. An example from the field of educational psychology is provided to illustrate the theory, together with a detailed discussion of the results. The example involves the relations among three latent variables (worry, visualization, and problem-solving ability) and their bearing on the performance of students in Calculus. The thesis concludes with a brief discussion of some difficulties of this method.

Chapter 1

Introduction

Substantive use of structural equation modeling (SEM) has been growing in psychology and the social sciences. In fact, SEM has become known as a unified model that joins models and methods from econometrics, psychometrics, sociometrics, and multivariate statistics (Bentler, 1992). The generality and wide applicability of the structural equation modeling approach have been amply demonstrated (Joreskog & Sorbom, 1989; Bentler, 1992).

The genesis of SEM can be found in the idea of path analysis. Path analysis is defined as a strategy for understanding causal processes through the analysis of correlational data. After development by the geneticist Sewall Wright (1921) as a quantitative aid for biological research, path analysis was introduced to the social sciences by Simon (1954, 1957). Later, Blalock (1961, 1962, 1964) extended and popularized Simon's work.
Through additional contributions by Boudon (1965) and Duncan (1966), path analysis became a viable method for rationally inferring causal relationships from correlations, provided certain highly restrictive assumptions are met. Building on these accomplishments of path analysis, Joreskog (1973), Keesling (1972), and Wiley (1973) developed very general structural equation models that incorporated path diagrams and other features of path analysis into their presentations. These techniques are known by the abbreviation JKW model, or more commonly as the LISREL model. Later, Bentler and Weeks (1980), McArdle and McDonald (1984), and others proposed alternative representations of general structural equations.

Like path analysis, structural equation modeling is a method for estimating the magnitude of the causal relationships that are assumed to operate among the variables in the model. The term "structural" stands for the assumption that the parameters are not just descriptive measures of association but rather reveal an invariant "causal" relation. However, converting association into causation is not that simple and requires strong restrictions and assumptions, as many researchers have indicated (Cliff, 1983; Freedman, 1986, 1993; Anderson and Gerbing, 1988). Therefore, this paper focuses on a theoretical review of SEM for causal inference along with an empirical example of the SEM method. Chapter 2 provides a brief review of the theoretical foundation of SEM. In Chapter 3, a practical example of the use of SEM is presented.

Chapter 2

Structural Equation Model (SEM)

The general structural equation model represents a synthesis of two model types: one is a measurement model (or confirmatory factor analysis model); the other is a latent variable model (or structural model). A confirmatory measurement, or factor analysis, model specifies the relation of the observed measures to their posited underlying constructs, with the constructs allowed to intercorrelate freely. A structural model, on the other hand, shows the influence of the latent variables on each other. In this chapter, we briefly summarize model specification, identification, estimation, assessment of fit, and respecification, following Bollen (1989) and Byrne (1994).

2.1 Model Specification

The first component of the structural equation model is the latent variable model,

$$\eta = B\eta + \Gamma\xi + \zeta$$

where $\eta$ is the $m \times 1$ vector of latent endogenous random variables; $\xi$ is the $n \times 1$ vector of latent exogenous random variables; $B$ is the $m \times m$ coefficient matrix showing the influence of the latent endogenous variables on each other; and $\Gamma$ is the $m \times n$ coefficient matrix for the effects of $\xi$ on $\eta$. The matrix $(I - B)$ is assumed to be nonsingular. $\zeta$ is the disturbance vector, which is assumed to have an expected value of zero, $E(\zeta) = 0$, and to be uncorrelated with $\xi$. Also, it is assumed that $E(\eta) = 0$ and $E(\xi) = 0$.

The second component of the general structural equation model is the measurement model,

$$y = \Lambda_y \eta + \varepsilon$$
$$x = \Lambda_x \xi + \delta$$

where the $y$ ($p \times 1$) and $x$ ($q \times 1$) vectors are observed variables; $\Lambda_y$ ($p \times m$) and $\Lambda_x$ ($q \times n$) are the coefficient matrices that show the relation of $y$ to $\eta$ and of $x$ to $\xi$, respectively; and $\varepsilon$ ($p \times 1$) and $\delta$ ($q \times 1$) are the errors of measurement for $y$ and $x$, respectively. The errors of measurement $\varepsilon$ and $\delta$ are assumed to be uncorrelated with $\xi$ and $\zeta$, and with each other; $\eta$, $\xi$, $\varepsilon$, and $\delta$ are also assumed to have expected values of zero.
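To make the notation concrete, the short simulation below generates data from a model of exactly this form. It is only a sketch: the dimensions (m = 1, n = 2, p = 3, q = 4, the same as in the application of Chapter 3) and all parameter values are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) parameter matrices for m = 1, n = 2, p = 3, q = 4.
B      = np.zeros((1, 1))                     # no eta-on-eta effects
Gamma  = np.array([[-0.5, 0.3]])              # effects of xi on eta
Phi    = np.array([[ 1.0, -0.2],
                   [-0.2,  1.0]])             # Cov(xi)
Psi    = np.array([[0.6]])                    # Cov(zeta)
Lam_y  = np.array([[1.0], [0.8], [0.7]])      # loadings of y on eta
Lam_x  = np.array([[1.0, 0.0], [0.9, 0.0],
                   [0.0, 1.0], [0.0, 0.8]])   # loadings of x on xi
Th_eps = np.diag([0.4, 0.5, 0.6])             # Cov(epsilon), diagonal
Th_del = np.diag([0.3, 0.4, 0.3, 0.5])        # Cov(delta), diagonal

N = 1000
xi   = rng.multivariate_normal(np.zeros(2), Phi, size=N)
zeta = rng.multivariate_normal(np.zeros(1), Psi, size=N)
# Latent variable model: eta = B eta + Gamma xi + zeta, i.e. eta = (I - B)^{-1}(Gamma xi + zeta)
eta  = (np.linalg.inv(np.eye(1) - B) @ (Gamma @ xi.T + zeta.T)).T
# Measurement model: y = Lambda_y eta + epsilon,  x = Lambda_x xi + delta
y = eta @ Lam_y.T + rng.multivariate_normal(np.zeros(3), Th_eps, size=N)
x = xi  @ Lam_x.T + rng.multivariate_normal(np.zeros(4), Th_del, size=N)

S = np.cov(np.hstack([y, x]), rowvar=False)   # 7 x 7 sample covariance matrix
```

With a large N, the sample covariance matrix S computed on the last line approaches the implied covariance matrix Σ(θ) derived below.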
All of these relations and assumptions are summarized in the example path diagram of Figure 2.1.

Figure 2.1: Example of a path diagram for SEM

The procedures of SEM emphasize covariances rather than cases. In SEM, we minimize the difference between the sample covariances and the covariances predicted by the model, instead of minimizing functions of observed and predicted individual values. The fundamental hypothesis for the structural equation procedures is that the covariance matrix of the observed variables is a function of a set of parameters:

$$\Sigma = \Sigma(\theta)$$

where $\Sigma$ is the population covariance matrix of the observed variables, $\theta$ is a vector that contains the model parameters, and $\Sigma(\theta)$ is the covariance matrix written as a function of $\theta$. That is, each element of the covariance matrix is a function of one or more model parameters. The implied covariance matrix $\Sigma(\theta)$ can be decomposed into three pieces: the covariance matrix of $y$, $\Sigma_{yy}(\theta)$; the covariance matrix of $y$ with $x$, $\Sigma_{yx}(\theta)$; and the covariance matrix of $x$, $\Sigma_{xx}(\theta)$.

Consider first $\Sigma_{yy}(\theta)$, the implied covariance matrix of $y$:

$$\Sigma_{yy}(\theta) = E(yy') = E\left[(\Lambda_y\eta + \varepsilon)(\eta'\Lambda_y' + \varepsilon')\right] = \Lambda_y E(\eta\eta')\Lambda_y' + \Theta_\varepsilon$$

where $\Theta_\varepsilon$ is the covariance matrix of $\varepsilon$. Since $\eta = (I - B)^{-1}(\Gamma\xi + \zeta)$,

$$\Sigma_{yy}(\theta) = \Lambda_y (I - B)^{-1}\left(\Gamma\Phi\Gamma' + \Psi\right)\left[(I - B)^{-1}\right]'\Lambda_y' + \Theta_\varepsilon$$

where $\Phi$ is the covariance matrix of $\xi$ and $\Psi$ is the covariance matrix of $\zeta$.

The covariance matrix of $y$ with $x$, written as a function of the structural parameters, is

$$\Sigma_{yx}(\theta) = E(yx') = E\left[(\Lambda_y\eta + \varepsilon)(\xi'\Lambda_x' + \delta')\right] = \Lambda_y E(\eta\xi')\Lambda_x'$$

Again using $\eta = (I - B)^{-1}(\Gamma\xi + \zeta)$,

$$\Sigma_{yx}(\theta) = \Lambda_y (I - B)^{-1}\Gamma\Phi\Lambda_x'$$

Finally, the covariance matrix of $x$, written as a function of the structural parameters, is

$$\Sigma_{xx}(\theta) = E(xx') = E\left[(\Lambda_x\xi + \delta)(\xi'\Lambda_x' + \delta')\right] = \Lambda_x E(\xi\xi')\Lambda_x' + \Theta_\delta = \Lambda_x\Phi\Lambda_x' + \Theta_\delta$$

where $\Theta_\delta$ is the covariance matrix of $\delta$. Therefore, by assembling these three components, the covariance matrix of the observed $y$ and $x$ variables can be represented as a function of the model parameters:

$$\Sigma(\theta) = \begin{bmatrix}\Sigma_{yy}(\theta) & \Sigma_{yx}(\theta)\\ \Sigma_{xy}(\theta) & \Sigma_{xx}(\theta)\end{bmatrix}$$

2.2 Identification

In the previous section, we showed that the covariance structure $\Sigma = \Sigma(\theta)$ implies $\frac{1}{2}(p+q)(p+q+1)$ nonredundant equations of the form $\sigma_{ij} = \sigma_{ij}(\theta)$ ($i \le j$), where $\sigma_{ij}$ is the $ij$ element of $\Sigma$ and $\sigma_{ij}(\theta)$ is the $ij$ element of $\Sigma(\theta)$. If an element of $\theta$ can be expressed as a function of one or more $\sigma_{ij}$, this establishes its identification. If all elements of $\theta$ meet this condition, the model is identified. There are widely used rules for the identification of a general model. The t-rule, the two-step rule, and the MIMIC (Multiple Indicators and Multiple Causes) rule are reviewed in this section. Even though none of these rules is a necessary and sufficient condition for model identification, researchers can apply one or more of them to help assess a model's identification.

2.2.1 t-Rule

The t-rule for identification is that the number of nonredundant elements in the covariance matrix of the observed variables must be greater than or equal to the number of unknown parameters in $\theta$:

$$t \le \tfrac{1}{2}(p+q)(p+q+1)$$

where $p + q$ is the number of observed variables and $t$ is the number of free and unconstrained parameters in $\theta$. The nonredundant elements of $\Sigma = \Sigma(\theta)$ imply $\frac{1}{2}(p+q)(p+q+1)$ equations. If the number of unknowns in $\theta$ exceeds the number of equations, identification is not possible. This rule is a necessary but not a sufficient condition for identification.
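The counting required by the t-rule is trivial to automate. Below is a minimal helper; the values in the example call (p = 3, q = 4, t = 17) are those of the model analyzed in Chapter 3, where 28 nonredundant covariance elements stand against 17 free parameters.

```python
def t_rule(p: int, q: int, t: int) -> bool:
    """Necessary (but not sufficient) identification check: the number of free
    parameters t must not exceed the number of nonredundant elements of Sigma."""
    nonredundant = (p + q) * (p + q + 1) // 2
    print(f"t = {t}, nonredundant elements = {nonredundant}")
    return t <= nonredundant

t_rule(p=3, q=4, t=17)   # 28 >= 17, so the t-rule is satisfied
```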
2.2.2 Two-Step Rule

The two-step rule consists of two parts: the first part is a confirmatory factor analysis, and the second part is a path analysis of latent variables. In the first step, the model is reformulated as a measurement model, viewing the original $x$ and $y$ as $x$ variables and the original $\xi$ and $\eta$ as $\xi$ variables. The only relationships between the latent variables that are of concern are their variances and covariances ($\Phi$). If identification can be established for this confirmatory factor analysis, the second step can be applied. The second step concerns establishing identification of the latent variable model as if the latent variables were observed without measurement error; that is, the latent variable model is treated as a structural equation in observed variables. One then determines whether $B$, $\Gamma$, and $\Psi$ are identified. If the first step shows that the measurement parameters are identified and the second step shows that the latent variable model parameters are also identified, this is sufficient to identify the whole model.

2.2.3 MIMIC Rule

MIMIC models contain observed variables that are multiple indicators and multiple causes of a single latent variable. The equations for this model are

$$\eta_1 = \Gamma x + \zeta_1$$
$$y = \Lambda_y \eta_1 + \varepsilon$$
$$x = \xi$$

The equations show that $x$ is a perfect measure of $\xi$ and that only one latent variable, $\eta_1$, is present. The variable $\eta_1$ is directly affected by one or more $x$ variables, and it is indicated by one or more $y$ variables. Identification of MIMIC models that conform to the above equations follows if $p$ (the number of $y$'s) is two or greater and $q$ (the number of $x$'s) is one or more, provided that $\eta_1$ is assigned a scale. The MIMIC rule that $p \ge 2$ and $q \ge 1$ is a sufficient condition for identification, but not a necessary one.

2.3 Estimation

The estimation procedures derive from the relation of the covariance matrix of the observed variables to the structural parameters. In the section on model specification, it was shown that the covariance matrix is

$$\Sigma(\theta) = \begin{bmatrix}\Sigma_{yy}(\theta) & \Lambda_y(I-B)^{-1}\Gamma\Phi\Lambda_x'\\ \Lambda_x\Phi\Gamma'\left[(I-B)^{-1}\right]'\Lambda_y' & \Lambda_x\Phi\Lambda_x' + \Theta_\delta\end{bmatrix}$$

where

$$\Sigma_{yy}(\theta) = \Lambda_y(I-B)^{-1}\left(\Gamma\Phi\Gamma' + \Psi\right)\left[(I-B)^{-1}\right]'\Lambda_y' + \Theta_\varepsilon$$

The unknown parameters in $B$, $\Gamma$, $\Phi$, $\Psi$, $\Lambda_y$, $\Lambda_x$, $\Theta_\varepsilon$, and $\Theta_\delta$ are estimated so that the implied covariance matrix $\hat\Sigma = \Sigma(\hat\theta)$ is close to the sample covariance matrix $S$. Many different fitting functions for this task are possible. The fitting functions $F(S, \Sigma(\theta))$ are based on $S$, the sample covariance matrix, and $\Sigma(\theta)$, the implied covariance matrix of the structural parameters. The fitting functions have the following properties:

(1) $F(S, \Sigma(\theta))$ is a scalar;
(2) $F(S, \Sigma(\theta)) \ge 0$;
(3) $F(S, \Sigma(\theta)) = 0$ if and only if $\Sigma(\theta) = S$;
(4) $F(S, \Sigma(\theta))$ is continuous in $S$ and $\Sigma(\theta)$.

Three such fitting functions, maximum likelihood (ML), unweighted least squares (ULS), and generalized least squares (GLS), are reviewed here.

2.3.1 Maximum Likelihood (ML)

In deriving $F_{ML}$, the $N$ independent observations are taken to be of the multinormal random variables $y$ and $x$. If we combine $y$ and $x$ into a single $(p+q) \times 1$ vector $z$, where $z$ consists of deviation scores, its probability density function is

$$f(z; \Sigma) = (2\pi)^{-(p+q)/2}\,|\Sigma|^{-1/2}\exp\left(-\tfrac{1}{2}z'\Sigma^{-1}z\right)$$

For a random sample of $N$ independent observations of $z$, the joint density is

$$f(z_1, \ldots, z_N; \Sigma) = f(z_1; \Sigma)\,f(z_2; \Sigma)\cdots f(z_N; \Sigma)$$

With a given sample, the likelihood function is

$$L(\theta) = (2\pi)^{-N(p+q)/2}\,|\Sigma(\theta)|^{-N/2}\exp\left(-\tfrac{1}{2}\sum_{i=1}^{N}z_i'\Sigma^{-1}(\theta)z_i\right)$$

The log of the likelihood function is

$$\log L(\theta) = -\frac{N(p+q)}{2}\log(2\pi) - \frac{N}{2}\log|\Sigma(\theta)| - \frac{1}{2}\sum_{i=1}^{N}z_i'\Sigma^{-1}(\theta)z_i$$
The last term of the log-likelihood can be rewritten as

$$-\frac{1}{2}\sum_{i=1}^{N}z_i'\Sigma^{-1}(\theta)z_i = -\frac{1}{2}\sum_{i=1}^{N}\mathrm{tr}\left[z_i'\Sigma^{-1}(\theta)z_i\right] = -\frac{1}{2}\sum_{i=1}^{N}\mathrm{tr}\left[z_i z_i'\Sigma^{-1}(\theta)\right] = -\frac{N}{2}\mathrm{tr}\left[S^{*}\Sigma^{-1}(\theta)\right]$$

where $S^{*}$ is the sample ML estimator of the covariance matrix, which employs $N$ rather than $(N-1)$ in the denominator. Using this rewritten term, the log of the likelihood function can be represented as

$$\log L(\theta) = \text{constant} - \frac{N}{2}\log|\Sigma(\theta)| - \frac{N}{2}\mathrm{tr}\left[S^{*}\Sigma^{-1}(\theta)\right] = \text{constant} - \frac{N}{2}\left\{\log|\Sigma(\theta)| + \mathrm{tr}\left[S^{*}\Sigma^{-1}(\theta)\right]\right\}$$

Based on the log of the likelihood function, the fitting function that is minimized can be represented as

$$F_{ML} = \log|\Sigma(\theta)| + \mathrm{tr}\left[S\Sigma^{-1}(\theta)\right] - \log|S| - (p+q)$$

The unbiased sample covariance matrix $S$ is used in $F_{ML}$, while the ML estimator $S^{*}$ is used in $\log L(\theta)$. Since $S^{*} = [(N-1)/N]S$, these matrices are essentially equal in large samples.

2.3.2 Unweighted Least Squares (ULS)

The ULS fitting function is

$$F_{ULS} = \tfrac{1}{2}\mathrm{tr}\left[\left(S - \Sigma(\theta)\right)^2\right]$$

$F_{ULS}$ minimizes one-half the sum of squares of each element in the residual matrix $(S - \Sigma(\theta))$. The residual matrix in this case consists of the differences between the sample variances and covariances and the corresponding ones predicted by the model. See Bollen (1989) for further details.

2.3.3 Generalized Least Squares (GLS)

Since $F_{ULS}$ is not scale invariant, it seems reasonable to apply a GLS fitting function that weights the elements of $(S - \Sigma(\theta))$ according to their variances and covariances with other elements. A general form of the GLS fitting function is

$$F_{GLS} = \tfrac{1}{2}\mathrm{tr}\left(\left\{\left[S - \Sigma(\theta)\right]W^{-1}\right\}^2\right)$$

where $W^{-1}$ is a weight matrix for the residual matrix. For selecting the "correct" weight matrix $W^{-1}$, we make two assumptions:

(1) $E(s_{ij}) = \sigma_{ij}$;
(2) the asymptotic distribution of the elements of $S$ is multinormal, with means $\sigma_{ij}$ and with the asymptotic covariance of $s_{ij}$ and $s_{gh}$ equal to $N^{-1}(\sigma_{ig}\sigma_{jh} + \sigma_{ih}\sigma_{jg})$.

If the assumptions are satisfied, then $W^{-1}$ should be chosen so that $\mathrm{plim}\,W^{-1} = c\Sigma^{-1}$, where $c$ is any constant (typically $c = 1$). Although many $W^{-1}$ are consistent estimators of $\Sigma^{-1}$, the most common choice is $W^{-1} = S^{-1}$:

$$F_{GLS} = \tfrac{1}{2}\mathrm{tr}\left(\left\{\left[S - \Sigma(\theta)\right]S^{-1}\right\}^2\right) = \tfrac{1}{2}\mathrm{tr}\left\{\left[I - \Sigma(\theta)S^{-1}\right]^2\right\}$$

This $F_{GLS}$ is found in both LISREL and EQS (Joreskog & Sorbom, 1989; Bentler, 1992).
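For reference, the three fitting functions translate directly into a few lines of numpy. This is a minimal sketch written from the formulas above; S is a sample covariance matrix and Sigma an implied covariance matrix Σ(θ), both (p + q) × (p + q).

```python
import numpy as np

def f_ml(S: np.ndarray, Sigma: np.ndarray) -> float:
    """Maximum likelihood fitting function F_ML."""
    k = S.shape[0]                                  # number of observed variables, p + q
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - k

def f_uls(S: np.ndarray, Sigma: np.ndarray) -> float:
    """Unweighted least squares fitting function F_ULS."""
    R = S - Sigma
    return 0.5 * np.trace(R @ R)

def f_gls(S: np.ndarray, Sigma: np.ndarray) -> float:
    """Generalized least squares fitting function F_GLS with W^{-1} = S^{-1}."""
    M = np.eye(S.shape[0]) - Sigma @ np.linalg.inv(S)
    return 0.5 * np.trace(M @ M)
```

All three are zero when Sigma equals S exactly and grow as the two matrices diverge, which is the defining property (3) listed in Section 2.3.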
2.4 Assessment of Fit

After estimating the model parameters, and given a converged and proper solution, a researcher should assess how well the specified model accounts for the data. To help in the evaluation of a model, a number of statistical measures of fit have been proposed. For example, the LISREL program provides the probability value associated with the chi-square likelihood ratio test, the goodness-of-fit index, and the root-mean-square residual (Joreskog & Sorbom, 1986). The chi-square probability value and the normed and nonnormed fit indices (Bentler & Bonett, 1980) are obtained from the EQS program (Bentler, 1985). For proper model evaluation, it is suggested that two kinds of fit measures be examined: overall model fit measures and component fit measures (Bollen, 1989).

2.4.1 Overall Model Fit Measures

The covariance structure hypothesis is that $\Sigma = \Sigma(\theta)$. The overall fit measures help to assess whether this hypothesis is valid and, if not, to measure the departure of $\Sigma$ from $\Sigma(\theta)$. Since $\Sigma$ and $\Sigma(\theta)$ are unavailable, their sample counterparts $S$ and $\Sigma(\hat\theta) = \hat\Sigma$ are examined. $S$ is the usual sample covariance matrix, and $\hat\Sigma$ is the implied covariance matrix evaluated at the estimate of $\theta$ that minimizes $F_{ML}$, $F_{GLS}$, or $F_{ULS}$.

2.4.1.1 Residuals

The residual matrix is perhaps the simplest function of $S$ and $\hat\Sigma$ for assessing overall model fit. Since the null hypothesis $H_0$ is $\Sigma = \Sigma(\theta)$, $S - \hat\Sigma$ can be used as the counterpart of $\Sigma - \Sigma(\theta)$. The individual sample residual covariances are $(s_{ij} - \hat\sigma_{ij})$, where $s_{ij}$ is the $ij$th element of $S$ and $\hat\sigma_{ij}$ is the corresponding element of $\hat\Sigma$. The individual residuals can be summarized in the root mean square residual proposed by Joreskog and Sorbom (1986):

$$RMR = \left[\frac{2\sum_{i=1}^{p+q}\sum_{j=1}^{i}\left(s_{ij} - \hat\sigma_{ij}\right)^2}{(p+q)(p+q+1)}\right]^{1/2}$$

For a "good" model, all individual residuals should be near zero. However, the sample residuals are affected by several factors:

(1) the departure of $\Sigma$ from $\Sigma(\theta)$;
(2) the scales of the observed variables;
(3) sampling error.

The most interesting factor is (1), that is, whether $\Sigma = \Sigma(\theta)$. When $\Sigma \ne \Sigma(\theta)$, one or more of the covariances or variances of the observed variables are not exactly predicted by the model. Also, the magnitudes of the individual residuals and of their mean are altered if the observed variables are measured in different units. A big residual can be due to an observed variable with scale units that have a much larger range than that of the other variables. In addition, the expected magnitude of the sample residuals depends on $N$, even when the null hypothesis is true. Under fairly general conditions, $S - \hat\Sigma$ converges to $\Sigma - \Sigma(\theta)$ as $N \to \infty$. For a given model, $(s_{ij} - \hat\sigma_{ij})$ tends to be smaller the bigger the sample is. So, when judging residuals from small samples, we should expect bigger residuals than when examining residuals from large samples, even when the model is true for both.

Considering the factors just listed, some researchers propose corrected residuals. For the scale problem, Bentler (1985) suggests correlation residuals. Each correlation residual is $r_{ij} - \hat r_{ij}$, where $r_{ij}$ is the sample correlation of the $i$th and $j$th variables and $\hat r_{ij}$ is the model-predicted correlation. Individually, $(r_{ij} - \hat r_{ij})$ gauges how well a correlation (or a standardized variance for $i = j$) is reproduced. A correlation residual should be fairly close to zero for most well-fitting models. On the other hand, Joreskog and Sorbom (1986) propose a normalized residual:

$$\text{normalized residual} = \frac{s_{ij} - \hat\sigma_{ij}}{\left[\left(\hat\sigma_{ii}\hat\sigma_{jj} + \hat\sigma_{ij}^2\right)/N\right]^{1/2}}$$

The numerator is the residual and the denominator is the square root of its estimated asymptotic variance. This normalized residual provides an approximate correction for such sample size effects and for scaling differences. The largest absolute values of the normalized residuals indicate the elements that are most poorly fit by the model.
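A short numpy sketch of these residual summaries, written directly from the formulas above (S and Sigma_hat are the sample and implied covariance matrices, N the sample size):

```python
import numpy as np

def residual_summaries(S: np.ndarray, Sigma_hat: np.ndarray, N: int):
    """Raw residual matrix, RMR, and normalized residuals for a fitted model."""
    k = S.shape[0]                        # number of observed variables, p + q
    R = S - Sigma_hat                     # raw residual matrix

    # Root mean square residual over the lower triangle (diagonal included).
    tri = np.tril_indices(k)
    rmr = np.sqrt(2.0 * np.sum(R[tri] ** 2) / (k * (k + 1)))

    # Normalized residuals: each residual divided by the square root of its
    # estimated asymptotic variance.
    asy_var = (np.outer(np.diag(Sigma_hat), np.diag(Sigma_hat)) + Sigma_hat ** 2) / N
    normalized = R / np.sqrt(asy_var)
    return R, rmr, normalized
```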
2.4.1.2 A Chi-Square (χ²) Test

The quantities $(N-1)\hat F_{ML}$ or $(N-1)\hat F_{GLS}$ provide chi-square test statistics for $H_0$: $\Sigma = \Sigma(\theta)$. Since $H_0$ is equivalent to the hypothesis that $\Sigma - \Sigma(\theta) = 0$, the chi-square test is a simultaneous test that all residuals in $\Sigma - \Sigma(\theta)$ are zero. Here we review how the likelihood ratio rationale for the asymptotic chi-square distribution of $(N-1)\hat F_{ML}$ can be established. The null hypothesis asserts that the specification of the fixed, free, and constrained parameters in $\Lambda_y$, $\Lambda_x$, $B$, $\Gamma$, $\Phi$, $\Psi$, $\Theta_\varepsilon$, and $\Theta_\delta$ is valid. Under $H_0$, we have ML estimators of the free and constrained parameters in these matrices that, together with the fixed parameters, comprise the estimated matrices $\hat\Lambda_y$, $\hat\Lambda_x$, $\hat B$, $\hat\Gamma$, $\hat\Phi$, $\hat\Psi$, $\hat\Theta_\varepsilon$, and $\hat\Theta_\delta$.

Let $\log L_0$ represent the log of the likelihood function corresponding to $H_0$, and $\log L_1$ that corresponding to an alternative hypothesis $H_1$. When evaluated at $S$ and $\hat\Sigma$, $\log L_0$ is

$$\log L_0 = -\frac{N-1}{2}\left\{\log|\hat\Sigma| + \mathrm{tr}\left(\hat\Sigma^{-1}S\right)\right\}$$

This is the log of the numerator of the likelihood ratio test. If $\Sigma$ is set to $S$, the sample covariance matrix, $\log L_1$ is at its maximum value. So

$$\log L_1 = -\frac{N-1}{2}\left\{\log|S| + \mathrm{tr}\left(S^{-1}S\right)\right\} = -\frac{N-1}{2}\left\{\log|S| + (p+q)\right\}$$

This is the log of the denominator of the likelihood ratio. Since $H_1$ sets $\Sigma$ to $S$, comparing $\log L_1$ to $\log L_0$ evaluates $H_0$ against a perfect fit, $H_1$. The natural logarithm of the likelihood ratio, $\log(L_0/L_1)$, when multiplied by $-2$, is distributed as a chi-square variate when $H_0$ is true and $(N-1)$ is large. In this case,

$$-2\log\frac{L_0}{L_1} = -2\log L_0 + 2\log L_1 = (N-1)\left[\log|\hat\Sigma| + \mathrm{tr}\left(\hat\Sigma^{-1}S\right) - \log|S| - (p+q)\right]$$

The quantity within brackets on the right-hand side is the fitting function $F_{ML}$ evaluated at $S$ and $\hat\Sigma$. So the expression shows that $(N-1)$ times the fitting function $F_{ML}$ evaluated at $\hat\theta$ is approximately distributed as a chi-square variate. Its degrees of freedom are $\frac{1}{2}(p+q)(p+q+1) - t$, where the first term is the number of nonredundant elements in $S$ given $(p+q)$ observed variables, and $t$ is the number of free parameters in $\theta$.

For the chi-square test of the SEM, the null hypothesis $H_0$ is that the constraints on $\Sigma$ implied by the model are valid (i.e., $\Sigma = \Sigma(\theta)$). The standard of comparison is the perfect fit in which $\hat\Sigma$ equals $S$. The probability level of the calculated chi-square is the probability of obtaining a $\chi^2$ value larger than the value obtained, if $H_0$ is correct. The higher the probability of the $\chi^2$, the closer the fit of $H_0$ is to the perfect fit. The chi-square approximation makes use of several assumptions:

(1) the observed variables have no excess kurtosis;
(2) the covariance matrix is analyzed;
(3) the sample is sufficiently large;
(4) $H_0$: $\Sigma = \Sigma(\theta)$ holds exactly.

When all of these assumptions are satisfied, $(N-1)\hat F_{ML}$ (or $(N-1)\hat F_{GLS}$) is a good approximation to a chi-square variable suitable for tests of statistical significance. However, if one or more of these conditions is violated, the $\chi^2$ test loses some of its value.

2.4.1.3 Additional Measures of Overall Model Fit

There are several other ways to measure overall model fit. Joreskog and Sorbom (1986) propose a goodness-of-fit index (GFI) and an adjusted GFI (AGFI) for models fitted with $F_{ML}$ and with $F_{ULS}$:

$$GFI_{ML} = 1 - \frac{\mathrm{tr}\left[\left(\hat\Sigma^{-1}S - I\right)^2\right]}{\mathrm{tr}\left[\left(\hat\Sigma^{-1}S\right)^2\right]}, \qquad AGFI_{ML} = 1 - \frac{(p+q)(p+q+1)}{2\,df}\left(1 - GFI_{ML}\right)$$

$$GFI_{ULS} = 1 - \frac{\mathrm{tr}\left[\left(S - \hat\Sigma\right)^2\right]}{\mathrm{tr}\left(S^2\right)}, \qquad AGFI_{ULS} = 1 - \frac{(p+q)(p+q+1)}{2\,df}\left(1 - GFI_{ULS}\right)$$

where $df = \frac{1}{2}(p+q)(p+q+1) - (\text{number of parameters to be estimated})$. $GFI_{ML}$ measures the relative amount of the variances and covariances in $S$ that are predicted by $\hat\Sigma$. $AGFI_{ML}$ adjusts for the degrees of freedom of a model relative to the number of variables. Also, Tanaka and Huba (1985) propose GLS versions:

$$GFI_{GLS} = 1 - \frac{\mathrm{tr}\left[\left(I - \hat\Sigma S^{-1}\right)^2\right]}{p+q}, \qquad AGFI_{GLS} = 1 - \frac{(p+q)(p+q+1)}{2\,df}\left(1 - GFI_{GLS}\right)$$
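Once $S$ and $\hat\Sigma$ are in hand, the overall fit summaries are straightforward to compute. The sketch below uses scipy only for the chi-square p-value and implements the ML-based statistics described above; the function name and argument list are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def overall_fit(S: np.ndarray, Sigma_hat: np.ndarray, N: int, t: int):
    """Chi-square statistic (N - 1) * F_ML, its p-value, and GFI/AGFI (ML versions)."""
    k = S.shape[0]                         # number of observed variables, p + q
    df = k * (k + 1) // 2 - t              # nonredundant elements minus free parameters

    _, logdet_Sigma = np.linalg.slogdet(Sigma_hat)
    _, logdet_S = np.linalg.slogdet(S)
    f_ml = logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma_hat)) - logdet_S - k
    T = (N - 1) * f_ml
    p_value = chi2.sf(T, df)

    A = np.linalg.inv(Sigma_hat) @ S
    gfi = 1.0 - np.trace((A - np.eye(k)) @ (A - np.eye(k))) / np.trace(A @ A)
    agfi = 1.0 - (k * (k + 1)) / (2.0 * df) * (1.0 - gfi)
    return T, df, p_value, gfi, agfi
```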
2.4.2 Component Fit Measures

In addition to the measures of overall fit, an examination of the components of the model is essential, since nonsensical results for individual parameters can occur. For the component fit of a model, several measures are suggested: parameter estimates, asymptotic standard errors, the asymptotic correlation matrix of the parameter estimates, and $R^2$ values for the observed variables. Among them, parameter estimates and $R^2$ for the observed variables are discussed here.

For the parameter estimates of $\Lambda_y$, $\Lambda_x$, $B$, $\Gamma$, $\Phi$, $\Psi$, $\Theta_\varepsilon$, and $\Theta_\delta$, misspecification of the model can give improper solutions. Improper solutions refer to sample estimates that take values that are impossible in the population, such as negative variances and correlations greater than one. Improper solutions can be caused by several factors. First, the covariance (correlation) matrix analyzed may have outliers or influential observations that lead to distorted measures of association for the observed variables, which in turn affect the parameter estimates. Second, there could be a fundamental fault of specification in the model; the model then requires reconstruction based on the researcher's substantive knowledge.

Another measure of component fit is $R^2$ for each observed variable $x_i$. It is estimated as

$$R^2_{x_i} = 1 - \frac{\widehat{\mathrm{Var}}(\delta_i)}{\hat\sigma_{ii}}$$

where $\hat\sigma_{ii}$ is the variance of $x_i$ predicted by the model. $R^2_{x_i}$ is analogous to the squared multiple correlation coefficient, with $x_i$ as the dependent variable and the latent variables ($\xi$) as the explanatory variables. Generally, the goal is to find measures with high $R^2$'s.

2.5 Respecification

There are many potential causes of low measures of overall fit. A common cause is a misspecified model: the incorrect inclusion or exclusion of a parameter can be the error. So a common response to a poorly fitting model is to respecify it. However, respecification decisions should not be based on statistical considerations alone, but rather made in conjunction with theory and content considerations. The potentially richest source of ideas for respecification is the theoretical or substantive knowledge of the researcher.

The first respecification that may be necessary is in response to nonconvergence or an improper solution. Nonconvergence can occur because of a fundamentally incongruent pattern of sample covariances that is caused either by sampling error in conjunction with a properly specified model or by a misspecification. Relying on content, one can obtain convergence for the model by respecifying one or more problematic indicators to different constructs or by excluding them from further analysis. Considering improper solutions, Van Driel (1978) presented three potential causes: sampling variation in conjunction with true parameter values close to zero, a fundamentally misspecified model, and underidentification of the model. Recently, Gerbing and Anderson (1987) found that for improper estimates due to sampling error, respecifying the model with the problematic parameter fixed at zero has no appreciable effect on the parameter estimates of other factors or on the overall goodness-of-fit indices.

Given a converged and proper solution but unacceptable overall fit, Anderson and Gerbing (1988) suggest four basic ways to respecify:

(1) relate the indicator to a different factor;
(2) delete the indicator from the model;
(3) relate the indicator to multiple factors;
(4) use correlated measurement errors.

Chapter 3

Empirical Application

In this chapter, an empirical example is presented to show how a structural model can be used for latent variables. The example is a three-factor model with two or three indicators per factor (see Figure 3.1).
The structural model is designed to examine the relationships among three latent variables: worry (ξ1), visualization (ξ2), and problem-solving ability (η1). The latent variable worry has two indicators, worry1 (x1) and worry2 (x2). The visualization construct also has two indicators, visualization1 (x3) and visualization2 (x4). The construct of problem-solving ability has three indicators, self-efficacy (y1), metacognition (y2), and cognitive strategy (y3). For this example, the EQS program (Bentler, 1992) is used to analyze the data set.

Figure 3.1: Confirmatory Factor Analysis Model for Worry (ξ1), Visualization (ξ2), and Problem-Solving Ability (ξ3)

3.1 Data

The subjects consist of 113 students in a calculus course at the University of Southern California. At the end of the semester (Spring 1995), they were asked to complete a questionnaire booklet used to measure the three constructs in this study. The Paper Folding Test (French, Ekstrom, & Price, 1963) was used to measure visualization skills, and the Self-Assessment Questionnaire (O'Neil & Abedi, 1994) was used to measure worry, self-efficacy, metacognition, and cognitive strategy.

3.2 Model Specification

Based on the theoretical discussion of model specification in the previous chapter, the model is specified in two components: the structural model of the latent variables, and the measurement model of the relationships between latent variables and indicators. The general system containing both components is

$$\eta = B\eta + \Gamma\xi + \zeta$$
$$y = \Lambda_y\eta + \varepsilon$$
$$x = \Lambda_x\xi + \delta$$

3.2.1 Structural Model

The data represent the relations of worry, visualization, and problem-solving ability in Calculus. In much previous research (Hembree, 1992; Malpass, 1994; Mayer, 1990, 1993), worry is considered to have a negative effect on problem-solving ability in mathematics, while visualization is often seen as enhancing it. In addition, worry and visualization may be negatively correlated. These ideas suggest that the latent variable model should have worry (ξ1) and visualization (ξ2) influencing problem-solving ability (η1). Based on this description, we can form the elements of $\Gamma$ and the latent variable model:

$$\eta_1 = \gamma_{11}\xi_1 + \gamma_{12}\xi_2 + \zeta_1, \qquad \text{i.e.,} \qquad [\eta_1] = \begin{bmatrix}\gamma_{11} & \gamma_{12}\end{bmatrix}\begin{bmatrix}\xi_1\\ \xi_2\end{bmatrix} + [\zeta_1]$$

The relationships in the latent variable model are represented in the path diagram shown in Figure 3.2.

Figure 3.2: A Latent Variables Model of Worry (ξ1), Visualization (ξ2), and Problem-Solving Ability (η1)

3.2.2 Measurement Model

For the measurement model, seven indicators are used to measure the three constructs, in an x part and a y part. For the x part, two indicators are used for each of the constructs worry (ξ1) and visualization (ξ2): worry1 (x1) and worry2 (x2) for the worry construct, and visualization1 (x3) and visualization2 (x4) for the visualization construct. The scale of ξ1 is set to that of x1, and the scale of ξ2 is set to that of x3. Furthermore, the coefficient linking ξ1 and x2 and the coefficient linking ξ2 and x4 are unconstrained. The preceding information determines the pattern of $\Lambda_x$. The x measurement equations are

$$x_1 = \xi_1 + \delta_1, \quad x_2 = \lambda_2\xi_1 + \delta_2, \quad x_3 = \xi_2 + \delta_3, \quad x_4 = \lambda_4\xi_2 + \delta_4$$

or, in matrix form,

$$\begin{bmatrix}x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} = \begin{bmatrix}1 & 0\\ \lambda_2 & 0\\ 0 & 1\\ 0 & \lambda_4\end{bmatrix}\begin{bmatrix}\xi_1\\ \xi_2\end{bmatrix} + \begin{bmatrix}\delta_1\\ \delta_2\\ \delta_3\\ \delta_4\end{bmatrix}$$

For the y part, the three measures of problem-solving ability (η1) are self-efficacy (y1), metacognition (y2), and cognitive strategy (y3).
Problem-solving ability (η1) is assigned the scale of y1, whereas the coefficients showing η1's influence on y2 and y3 are unconstrained. The y measurement equations are

$$y_1 = \eta_1 + \varepsilon_1, \quad y_2 = \lambda_6\eta_1 + \varepsilon_2, \quad y_3 = \lambda_7\eta_1 + \varepsilon_3$$

or, in matrix form,

$$\begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix} = \begin{bmatrix}1\\ \lambda_6\\ \lambda_7\end{bmatrix}[\eta_1] + \begin{bmatrix}\varepsilon_1\\ \varepsilon_2\\ \varepsilon_3\end{bmatrix}$$

Figure 3.3: Confirmatory Factor Analysis Model for Worry (ξ1), Visualization (ξ2), and Problem-Solving Ability (ξ3)

All of these relations are summarized in the path diagram in Figure 3.3. Using the model we have specified, we want to examine the relationships among the latent variables. Similarly to other studies, we expect visualization to have a positive effect on problem-solving ability, while worry influences it negatively. We expect to investigate the negative relation between worry and visualization as well.

3.3 Identification

The equations for the path diagram of this model are

$$[\eta_1] = \begin{bmatrix}\gamma_{11} & \gamma_{12}\end{bmatrix}\begin{bmatrix}\xi_1\\ \xi_2\end{bmatrix} + [\zeta_1], \qquad
\begin{bmatrix}x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} = \begin{bmatrix}1 & 0\\ \lambda_2 & 0\\ 0 & 1\\ 0 & \lambda_4\end{bmatrix}\begin{bmatrix}\xi_1\\ \xi_2\end{bmatrix} + \begin{bmatrix}\delta_1\\ \delta_2\\ \delta_3\\ \delta_4\end{bmatrix}, \qquad
\begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix} = \begin{bmatrix}1\\ \lambda_6\\ \lambda_7\end{bmatrix}[\eta_1] + \begin{bmatrix}\varepsilon_1\\ \varepsilon_2\\ \varepsilon_3\end{bmatrix}$$

With the observed variables ordered $(x_1, x_2, x_3, x_4, y_1, y_2, y_3)$, the covariance matrix $\Sigma$ of the observed variables is the symmetric $7 \times 7$ matrix of their variances and covariances, $\mathrm{Var}(x_1)$, $\mathrm{Cov}(x_2, x_1)$, $\ldots$, $\mathrm{Var}(y_3)$. Substituting the parameter matrices for this model into the implied covariance matrix derived in the previous chapter shows that the lower triangle of $\Sigma(\theta)$ is

$$\begin{bmatrix}
\phi_{11}+V_1 & & & & & & \\
\lambda_2\phi_{11} & \lambda_2^2\phi_{11}+V_2 & & & & & \\
\phi_{12} & \lambda_2\phi_{12} & \phi_{22}+V_3 & & & & \\
\lambda_4\phi_{12} & \lambda_2\lambda_4\phi_{12} & \lambda_4\phi_{22} & \lambda_4^2\phi_{22}+V_4 & & & \\
\theta_2 & \lambda_2\theta_2 & \theta_3 & \lambda_4\theta_3 & \theta_1+V_5 & & \\
\lambda_6\theta_2 & \lambda_2\lambda_6\theta_2 & \lambda_6\theta_3 & \lambda_4\lambda_6\theta_3 & \lambda_6\theta_1 & \lambda_6^2\theta_1+V_6 & \\
\lambda_7\theta_2 & \lambda_2\lambda_7\theta_2 & \lambda_7\theta_3 & \lambda_4\lambda_7\theta_3 & \lambda_7\theta_1 & \lambda_6\lambda_7\theta_1 & \lambda_7^2\theta_1+V_7
\end{bmatrix}$$

where $V_i = \mathrm{Var}(\delta_i)$ for $1 \le i \le 4$, $V_{4+j} = \mathrm{Var}(\varepsilon_j)$ for $1 \le j \le 3$, and

$$\theta_1 = \gamma_{11}^2\phi_{11} + 2\gamma_{11}\gamma_{12}\phi_{12} + \gamma_{12}^2\phi_{22} + \psi_{11}, \qquad
\theta_2 = \gamma_{11}\phi_{11} + \gamma_{12}\phi_{12}, \qquad
\theta_3 = \gamma_{11}\phi_{12} + \gamma_{12}\phi_{22}$$

A quick way to detect some underidentified models is the t-rule. The covariance structure $\Sigma = \Sigma(\theta)$ leads to twenty-eight [$= \frac{1}{2}(7)(8)$] equations in seventeen unknowns (the number of free parameters). Thus the model may be identified. To examine its identification further, we apply the two-step rule. The first step establishes that all parameters in the measurement model are identified, including the covariance matrix of the latent variables. For this example, η1 is redefined as ξ3; y1, y2, and y3 are now x5, x6, and x7; ε1, ε2, and ε3 become δ5, δ6, and δ7; and the structural equation for η1 (with γ11 and γ12) is not considered. Instead, we now examine the variances and covariances of ξ1, ξ2, and the new ξ3 (= η1). This reformulation is represented in Figure 3.4.

Figure 3.4: The First Step of Reformulation for the Two-Step Rule

In the second step, the latent variable model parameters are identified if the latent variables are treated as perfectly measured variables. Figure 3.5 shows the latent variable model.

Figure 3.5: The Second Step of the Latent Variable Model for the Two-Step Rule

3.4 Estimation and Model Evaluation

The hypothesis of our model is $\Sigma = \Sigma(\theta)$. Thus, given the sample covariance matrix of the observed variables, $S$, how can we choose $\hat\theta$ so that $\hat\Sigma$ is close to $S$? In this study, the ML fitting function is used.
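For the estimation that follows, it is convenient to have Σ(θ) for this particular seven-variable model in computable form. The sketch below is illustrative only: the function name and the ordering of the 17 free parameters are my own choices, and the base model (without the correlated errors added later at respecification) is assumed.

```python
import numpy as np

def build_sigma(theta: np.ndarray) -> np.ndarray:
    """Implied covariance matrix for the model of Figure 3.3, with the observed
    variables ordered (x1, x2, x3, x4, y1, y2, y3).  theta holds 17 values:
    lam2, lam4, lam6, lam7, g11, g12, phi11, phi12, phi22, psi11,
    Var(delta1..delta4), Var(eps1..eps3)."""
    lam2, lam4, lam6, lam7, g11, g12, phi11, phi12, phi22, psi11 = theta[:10]
    Theta_delta = np.diag(theta[10:14])
    Theta_eps   = np.diag(theta[14:17])

    Lx  = np.array([[1.0, 0.0], [lam2, 0.0], [0.0, 1.0], [0.0, lam4]])
    Ly  = np.array([[1.0], [lam6], [lam7]])
    Gam = np.array([[g11, g12]])
    Phi = np.array([[phi11, phi12], [phi12, phi22]])
    Psi = np.array([[psi11]])

    Sxx = Lx @ Phi @ Lx.T + Theta_delta
    Syy = Ly @ (Gam @ Phi @ Gam.T + Psi) @ Ly.T + Theta_eps
    Syx = Ly @ Gam @ Phi @ Lx.T
    return np.block([[Sxx, Syx.T], [Syx, Syy]])
```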
As shown earlier, the ML fitting function is

$$F_{ML} = \log|\Sigma(\theta)| + \mathrm{tr}\left[S\Sigma^{-1}(\theta)\right] - \log|S| - (p+q)$$

We want to minimize this function with respect to θ. The GLS and ULS fitting functions could be used in the same way, as many researchers have shown (Bentler, 1992; Bollen, 1989; Joreskog & Sorbom, 1989).

Based on the fit assessment of the base model and the result of the LM test, the model is respecified as shown in Figure 3.6. Two correlated errors are added: between visualization2 (x4) and self-efficacy (x5 = y1), and between metacognition (x6 = y2) and cognitive strategy (x7 = y3). These error correlations represent uniqueness shared in common by the indicators rather than pure measurement error.

Figure 3.6: The respecified CFA model

In our example, the lower half of the sample covariance matrix S is given in Table 3.1.

Table 3.1: Covariance Matrix and Standard Deviations for Observed Variables

       x1      x2      x3      x4      y1      y2      y3
x1   3.504
x2   3.998   8.352
x3   0.223  -0.485   2.657
x4   0.266  -0.357   2.730   5.443
y1  -2.199  -4.083   0.440   1.767  12.292
y2  -3.245  -6.863   1.446   2.213  14.283  46.567
y3  -5.310  -9.169   1.643   1.459  16.881  52.850  87.068
sd   1.872   2.890   1.630   2.333   3.506   6.824   9.331

The residual matrix S − Σ̂ obtained after minimizing F_ML contains no large elements; several of its entries are exactly zero, because some elements of a covariance matrix are exactly predicted for a given model and fitting function regardless of the sample covariance matrix. The average absolute residual is 0.219, and the average absolute off-diagonal covariance residual is 0.286. Compared to the magnitude of the elements in S, these values are small.

For the measurement model, the ML estimates of $\Lambda_x$, $\Lambda_y$, $\Theta_\delta$, and $\Theta_\varepsilon$ are

$$\hat\Lambda_x = \begin{bmatrix}1.00 & 0.00\\ 1.98 & 0.00\\ 0.00 & 1.00\\ 0.00 & 1.16\end{bmatrix}, \qquad \hat\Lambda_y = \begin{bmatrix}1.00\\ 1.71\\ 2.11\end{bmatrix}, \qquad
\mathrm{diag}(\hat\Theta_\delta) = \begin{bmatrix}1.49 & 0.40 & 0.25 & 2.32\end{bmatrix}, \qquad \mathrm{diag}(\hat\Theta_\varepsilon) = \begin{bmatrix}4.09 & 22.60 & 50.65\end{bmatrix}$$

The ML estimates for the path model parameters $\Gamma$, $\Phi$, and $\Psi$ are

$$\hat\Gamma = \begin{bmatrix}-1.02 & 0.16\end{bmatrix}, \qquad \hat\Phi = \begin{bmatrix}2.01 & -0.20\\ -0.20 & 2.39\end{bmatrix}, \qquad \hat\Psi = [5.93]$$

These parameter estimates are reported with their standard errors and standardized values in Table 3.2.

Table 3.2: The ML Estimates for the Parameters

Parameter    ML Estimate   Standard Error   Standardized Value
λ1             1.000c            -               .758
λ2             1.988            .369             .976
λ3             1.000c            -               .949
λ4             1.159            .881             .762
λ5             1.000c            -               .816
λ6             1.714            .376             .717
λ7             2.113            .489             .647
γ11           -1.022            .229            -.507
γ12             .161            .221             .087
φ11            2.011            .540
φ12            -.202            .227            -.092
φ22            2.392           1.842
Var(ε1)        4.090           1.713
Var(ε2)       22.599           5.585
Var(ε3)       50.654          10.239
Var(δ1)        1.493            .389
Var(δ2)         .404           1.323
Var(δ3)         .265           1.808
Var(δ4)        2.317           2.449
ψ11            5.930           1.821

Note: c = constrained to equal 1.00.

As far as the overall fit of the model is concerned, the measures are summarized in Table 3.3.

Table 3.3: Summary of Overall Fit

Fit index    χ² (df)      NFI   NNFI   CFI   Standardized χ²   p-value
Estimate     12.29 (9)    .96   .97    .99   12.11             .197

The overall fit of this model is good. Since the probability value for the χ² statistic (.197) with 9 df is higher than an α level of .05, we do not reject the hypothesis Σ = Σ(θ). Moreover, all fit indices are very high. Therefore, our model is supported by the data.
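The EQS results above can be cross-checked numerically. The sketch below is illustrative rather than the program actually used in the thesis: it minimizes F_ML for the 17-parameter base model (without the two correlated errors of the respecified model, so its fit will not match Table 3.3 exactly), reusing the build_sigma helper sketched in Section 3.3, and it also converts the reported Table 3.3 statistic into its p-value as a sanity check.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Sample covariance matrix S from Table 3.1 (order x1..x4, y1..y3).
S = np.array([
    [ 3.504,  3.998,  0.223,  0.266, -2.199, -3.245, -5.310],
    [ 3.998,  8.352, -0.485, -0.357, -4.083, -6.863, -9.169],
    [ 0.223, -0.485,  2.657,  2.730,  0.440,  1.446,  1.643],
    [ 0.266, -0.357,  2.730,  5.443,  1.767,  2.213,  1.459],
    [-2.199, -4.083,  0.440,  1.767, 12.292, 14.283, 16.881],
    [-3.245, -6.863,  1.446,  2.213, 14.283, 46.567, 52.850],
    [-5.310, -9.169,  1.643,  1.459, 16.881, 52.850, 87.068]])
N, k, t = 113, 7, 17

def f_ml(theta):
    Sigma = build_sigma(theta)                 # implied covariance (Section 3.3 sketch)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:                              # keep the search in the admissible region
        return 1e6
    return (logdet + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - k)

theta0 = np.array([1.0, 1.0, 1.0, 1.0, -1.0, 0.1,        # lam2, lam4, lam6, lam7, g11, g12
                   2.0, 0.0, 2.0, 5.0,                    # phi11, phi12, phi22, psi11
                   1.0, 1.0, 1.0, 1.0, 4.0, 20.0, 50.0])  # error variances
fit = minimize(f_ml, theta0, method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000})

df = k * (k + 1) // 2 - t                      # 28 - 17 = 11 for the base model
T = (N - 1) * fit.fun
print("base model: chi-square =", round(T, 2), " df =", df,
      " p =", round(chi2.sf(T, df), 3))

# Sanity check on Table 3.3: a chi-square of 12.29 with 9 df does correspond
# to a p-value of about .197.
print("Table 3.3 check: p =", round(chi2.sf(12.29, 9), 3))
```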
For a better assessment of the fit of the model, we also need to examine the component fit measures. The R² values for the worry and visualization indicators and for the problem-solving ability indicators are shown in Table 3.4.

Table 3.4: The Component Fit Measures (R² for the x and y indicators)

        x1    x2    x3    x4    y1    y2    y3
R²_x   .58   .96   .04   .60    -     -     -
R²_y    -     -     -     -    .68   .53   .43

Except for the first indicator of the visualization factor, the squared multiple correlation coefficients for the indicators are moderate to high. In particular, the value for the second indicator of the worry factor (x2) is extremely high: 96% of the variance in x2 is accounted for by the latent worry variable (ξ1).

In summary, both the overall and the component fit measures suggest that the model matches the data well. As we expected, worry (ξ1) has a negative influence on problem-solving ability (η1), whereas visualization skills (ξ2) enhance problem-solving ability in Calculus. Worry and visualization are negatively correlated. Worry alone explains a modest amount (about 26%) of the variance of problem-solving ability. Since the effect of visualization on problem-solving ability is not significant, the combined influence of worry and visualization also leads to about 26% explained variance in problem-solving ability. In addition, the indicators of worry and problem-solving ability were fairly good, with 43% to 96% of their variance explained by the latent factors, while the indicators of visualization were relatively weak, with explained variances of 4% and 60%.

Chapter 4

Conclusions

This paper reviews the theoretical foundation of structural equation modeling along with an empirical example. For the theoretical background, a brief discussion of model specification, identification, estimation, assessment of fit, and respecification is presented. For the empirical example, we show a structural model with three latent variables and seven indicators. The model examines the relationships among three latent variables: worry, visualization, and problem-solving ability in Calculus. Worry and visualization have two indicators each, whereas problem-solving ability has three indicators. Briefly, worry has a significant negative influence on problem-solving ability, while visualization does not have a significant effect on it.

As other researchers have indicated (Cliff, 1983; Freedman, 1986, 1993; Anderson and Gerbing, 1988), there are many difficulties in establishing causal relations. Since the basic modeling technique of SEM is to convert association into causation, some caution is needed in applying the causal modeling method. First, identifying the exogenous variables is a problem, since results can depend quite strongly on assumptions of exogeneity. Second, the possible effects of variables that are not included in a model must be considered. Third, a model is never confirmed by data; rather, it gains support by failing to be rejected. In spite of the good fit of a model, other models with equal fit may exist. Fourth, the nominalistic fallacy (naming something does not necessarily mean that one understands it) must also be considered, so the validity and reliability of the observed variables are required.

Reference List

[1] Anderson, J. C., and Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411-423.

[2] Bentler, P. M. (1985). Theory and implementation of EQS: A structural equations program. Los Angeles: BMDP Statistical Software.

[3] Bentler, P. M. (1992). EQS: Structural equation program manual. Los Angeles: BMDP Statistical Software.

[4] Bentler, P. M., and Bonett, D. G. (1980). Significance tests and goodness-of-fit in the analysis of covariance structures. Psychological Bulletin, 88, 588-600.
[5] Bentler, P. M., and Weeks, D. G. (1980). Multivariate analysis with latent variables. In P. R. Krishnaiah and L. Kanal, eds., Handbook of Statistics, Vol. 2. Amsterdam: North-Holland, pp. 747-771.

[6] Blalock, H. M. (1961). Correlation and causality: The multivariate case. Social Forces, 39, 246-251.

[7] Blalock, H. M. (1962). Four-variable causal models and partial correlations. American Journal of Sociology, 68, 182-194.

[8] Blalock, H. M. (1964). Causal inferences in nonexperimental research. Chapel Hill: University of North Carolina Press.

[9] Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.

[10] Boudon, R. (1965). A method of linear causal analysis: Dependence analysis. American Sociological Review, 30, 365-374.

[11] Byrne, B. M. (1994). Structural equation modeling with EQS and EQS/Windows. Thousand Oaks, CA: Sage.

[12] Cliff, N. (1983). Some cautions concerning the application of causal modeling methods. Multivariate Behavioral Research, 18, 115-126.

[13] Duncan, O. D. (1966). Path analysis: Sociological examples. American Journal of Sociology, 72, 1-16.

[14] Freedman, D. (1986). As others see us: A case study in path analysis. Journal of Educational Statistics, 12, 101-128.

[15] Freedman, D. (1993). From association to causation via regression. Talk presented at the Notre Dame Conference on Causality in Crisis.

[16] French, J. W., Ekstrom, R. B., and Price, L. A. (1963). Kit of reference tests for cognitive factors. Princeton, NJ: Educational Testing Service.

[17] Gerbing, D. W., and Anderson, J. C. (1987). Improper solutions in the analysis of covariance structures: Their interpretability and a comparison of alternate respecifications. Psychometrika, 52, 99-111.

[18] Hembree, R. (1992). Experiments and relational studies in problem solving: A meta-analysis. Journal for Research in Mathematics Education, 21, 33-46.

[19] Joreskog, K. G. (1973). A general method for estimating a linear structural equation system. In A. S. Goldberger and O. D. Duncan, eds., Structural Equation Models in the Social Sciences. New York: Academic Press, pp. 85-112.

[20] Joreskog, K. G., and Sorbom, D. (1986). LISREL VI: Analysis of linear structural relationships by maximum likelihood and least squares methods. Mooresville, IN: Scientific Software, Inc.

[21] Joreskog, K. G., and Sorbom, D. (1989). LISREL 7: A guide to the program and applications. Chicago: SPSS, Inc.

[22] Keesling, J. W. (1972). Maximum likelihood approaches to causal analysis. Ph.D. dissertation, Department of Education, University of Chicago.

[23] Malpass, J. R. (1994). A structural model of self-efficacy, goal orientation, worry, self-regulated learning, and mathematics achievement. Ph.D. dissertation, School of Education, University of Southern California.

[24] Mayer, R. E. (1990). Problem solving. In M. W. Eysenck, ed., The Blackwell dictionary of cognitive psychology. Oxford, UK: Basil Blackwell, pp. 284-288.

[25] Mayer, R. E. (1993). Illustrations that instruct. In R. Glaser, ed., Advances in instructional psychology. Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 253-284.

[26] McArdle, J. J., and McDonald, R. P. (1984). Some algebraic properties of the reticular action model for moment structures. British Journal of Mathematical and Statistical Psychology, 37, 234-251.

[27] O'Neil, H. F., Jr., and Abedi, J. (1994). Development and validation of state metacognitive scales. Unpublished manuscript.
[28] Simon, H. A. (1954). Spurious correlation: A causal interpretation. Journal of the American Statistical Association, 49, 467-479.

[29] Simon, H. A. (1957). Models of man. New York: Wiley.

[30] Tanaka, J. S., and Huba, G. J. (1985). A fit index for covariance structure models under arbitrary GLS estimation. British Journal of Mathematical and Statistical Psychology, 38, 197-201.

[31] Van Driel, O. P. (1978). On various causes of improper solutions of maximum likelihood factor analysis. Psychometrika, 43, 225-243.

[32] Wiley, D. E. (1973). The identification problem for structural equation models with unmeasured variables. In A. S. Goldberger and O. D. Duncan, eds., Structural Equation Models in the Social Sciences. New York: Academic Press, pp. 69-83.

[33] Wright, S. (1921). Correlation and causation. Journal of Agricultural Research, 20, 557-585.

Appendix A

EQS Program for the CFA Model

/title
 thesis: confirmatory factor analysis
/specifications
 case=113; var=7; me=ml; analysis=covariance; matrix=correlation;
/labels
 v1=worry1; v2=worry2; v3=visual1; v4=visual2; v5=selfeff; v6=metacog; v7=heuristic;
 f1=worry; f2=visual; f3=psability;
/equations
 v1=  f1+e1;
 v2= *f1+e2;
 v3=  f2+e3;
 v4= *f2+e4;
 v5=  f3+e5;
 v6= *f3+e6;
 v7= *f3+e7;
/variances
 f1=1.0*; f2=1.0*; f3=1.0*;
 e1 to e7=*;
/cov
 f1,f2=*; f1,f3=*; f2,f3=*;
 e7,e6=*; e5,e4=*;
/matrix
  1.000
   .739 1.000
   .073 -.103 1.000
   .061 -.053  .718 1.000
  -.335 -.403  .077  .216 1.000
  -.254 -.348  .130  .139  .597 1.000
  -.304 -.340  .108  .067  .516  .830 1.000
/standard deviations
 1.872 2.890 1.630 2.333 3.506 6.824 9.331
/lmtest
 set=pee;
/wtest
/end

Appendix B

EQS Program for the Path Models

/title
 thesis: path analysis
/specifications
 case=113; var=7; me=ml; analysis=covariance; matrix=correlation;
/labels
 v1=worry1; v2=worry2; v3=visual1; v4=visual2; v5=selfeff; v6=metacog; v7=heuristic;
 f1=worry; f2=visual; f3=psability;
/equations
 v1=  f1+e1;
 v2= *f1+e2;
 v3=  f2+e3;
 v4= *f2+e4;
 v5=  f3+e5;
 v6= *f3+e6;
 v7= *f3+e7;
 f3= *f1+*f2+d1;
/variances
 e1 to e7=*; d1=*;
/cov
 f2,f1=*;
 e7,e6=*; e4,e5=*;
/matrix
  1.000
   .739 1.000
   .073 -.103 1.000
   .061 -.053  .718 1.000
  -.335 -.403  .077  .216 1.000
  -.254 -.348  .130  .139  .597 1.000
  -.304 -.340  .108  .067  .516  .830 1.000
/standard deviations
 1.872 2.890 1.630 2.333 3.506 6.824 9.331
/lmtest
 set=pee;
/wtest
/end