Repeated Measures In Psychology: Bias In Collinearity Judgment
University of Southern California Dissertations and Theses
INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor MI 48106-1346 USA
313/761-4700 800/521-0600

REPEATED MEASURES IN PSYCHOLOGY: BIAS IN COLLINEARITY JUDGMENT

by

Young-eui Chung

A Thesis Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
MASTER OF SCIENCE
(Statistics)

December 1995

UMI Number: 1379577
UMI Microform 1379577
Copyright 1996, by UMI Company. All rights reserved.
This microform edition is protected against unauthorized copying under Title 17, United States Code.
UMI
300 North Zeeb Road
Ann Arbor, MI 48103

UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007

This thesis, written by YOUNG-EUI CHUNG under the direction of her Thesis Committee, and approved by all its members, has been presented to and accepted by the Dean of The Graduate School, in partial fulfillment of the requirements for the degree of Master of Science.

Dedication

This thesis is dedicated to my husband, who has always encouraged me. This thesis is also dedicated to my sons, Jae-young and Tae-young, for their love.

Acknowledgements

I would like to thank my academic advisor Dr. Simon Tavare for his careful guidance on this thesis. His very patient support and effort has been invaluable. I also would like to thank Dr. Ernest Greene and Dr. Larry Goldstein for lending their support on my thesis committee.

Contents

Dedication
Acknowledgements
Abstract
1 Introduction
  1.1 Background and objectives
  1.2 The Data
2 Statistical Approach
  2.1 Preliminary data analysis
  2.2 Correlation
  2.3 Simulation
3 Model Fitting
  3.1 Model specification
  3.2 Estimating a_i, b_i, c_i, ρ_i, σ_i
4 Analysis of the Effects of the Deviation
  4.1 Testing homogeneity between subjects
  4.2 Testing homogeneity of parameters a and b
  4.3 Testing constant variance
  4.4 Testing covariance matrix
5 Discussion
Appendix A

Abstract

Many studies have suggested that perception is more accurate when the line segment is horizontal or vertical, and least accurate when it lies at 45°. The aim of our experiments was to explain the relationship between the bias in collinearity judgment and the orientation of the line segment. Subjects judged collinearity of a line segment. The orientation of the line segment varied systematically in 7.5° increments through the range of possible positions. Error was measured as the angular deviation from true collinearity. The angular deviation, y, as a function of the angle of orientation of the line segment, x, was described by the equation

    y = a sin 4(x - b) + c

and the parameters a, b and c were estimated. Homogeneity of the deviation among subjects was tested. Homogeneity of the a and b parameters was also tested. Analysis indicates that there are differences of angular deviation among subjects. That is, the sine curve with each of the parameters fitted individually was best.

Chapter 1
Introduction

1.1 Background and objectives

Visual form perception is characterized by the fact that the perception of a particular object may arise from a great many configurations of the retinal light distribution (Bouma and Andriessen, 1968).
The well-known neurophysiological evidence put forward by Hubel and Wiesel (1959, 1962) suggests that the angle of orientation of the stimulus on the retina is among the first isolated form cues in the visual cortex of cats. During the past 50 years, there has been a lot of research concerned with the sources of collinearity judgment bias. Many studies have suggested that perception is more accurate when the line segment is horizontal or vertical, and least accurate when it lies at 45°. Studies by Salomon (1947), Rochlin (1955), Keene (1963) and Andrews (1965) have indicated that the perceived angle of orientation may be different from the true geometrical slant. This difference has also been shown directly by several induction phenomena, in which the perceived orientation of a line segment is influenced by neighbouring lines or curves. Bouma and Andriessen (1968) indicated that there exists a systematic bias of collinearity judgment as a function of orientation, in the sense that the perceived orientation comes closer to the nearest horizontal or vertical as compared with the true orientation. They varied orientations in quadrant IV (270° < α < 360°) in steps of 15°. The angular deviation, β, as a function of the angle of orientation of the line segment, α, was described by the equation

    β = -B sin 4α

The estimated value of B was 3°. Three subjects were tested; this might be an insufficient sample to see the range of individual differences.

Dr. Ernest Greene and his associates in the Department of Psychology, University of Southern California (USC), conducted an experiment to study the bias in collinearity judgment of a larger sample of subjects, as part of a larger program to study collinearity judgment. This analysis was carried out to explain the relationship between the bias in collinearity judgment and the orientation of the line segment.
This study was performed with a relatively restricted set of stimulus elements: a single test segment whose collinear relationship was to be judged on the basis of an operant response.

1.2 The Data

The data were collected in a research project under sponsorship by the Neuropsychology Foundation.

• Sample: The sample consisted of eight undergraduate students at USC. The participants were naive with respect to the specific hypothesis as well as the phenomenon under investigation. When questioned afterward, they gave no indication that any one of them had deduced the nature of the perceived orientation bias. Therefore the findings and implications of the study can be generalized, to the extent that future groups are similar to the participants.

• Stimulus materials: This study had one experimental treatment condition, test segment orientation. The test segment was 0.54 mm thick and 6 cm long. A given page contained only a single test segment at a specified orientation, and the test set consisted of the pages which provided a complete sampling of all orientations of the test segment in 7.5° steps. For each configuration of the experiment, the test segment was placed in one of the four quadrants of the page. The experimenters did not want to have all segments pivot from the center of the page, but neither did they want the position of segments within the page to be completely arbitrary. As a compromise, they settled on having the segments for each of the quadrants pivot on a "quadrant pivot point" which was positioned 6 cm from the center of the page, along the 45° diagonal from the page-center. For each page in the set, the test segment was placed at one of 13 angles, from horizontal to vertical orientations in steps of 7.5°. Thus there were 13 levels of test segment orientation and these were repeated in each of the four quadrants. Two replications of the test segment were tested.
Thus the complete test set contained each combination of the treatments for each of the quadrants, for a total of 13 x 4 x 2 = 104 pages in the test package. These were presented in a different random order for each subject. The subject's operant task was to place a point which appeared to be collinear to the segment. Subjects were encouraged to respond in the open space toward the center of the page, a distance of 10-15 cm being indicated as the appropriate zone for their response. The view of the test page was maintained at a constant distance of 46 cm. Error was measured as the angular deviation from true collinearity, using the quadrant pivot point (the tip of the test segment) as the pivot for the angular error. We defined "deviation" from this axis as the collinearity error. Clockwise deviation angles were designated as positive and counterclockwise deviation angles as negative. Error could be measured to the nearest 0.5 mm.

• Data preparation: In our experiment there were 13 levels of test-segment orientation in one quadrant. Repeating a set of 13 levels of the angle in each of the four quadrants yielded duplicate observations at 0°, 90°, 180° and 270°. This is because horizontal and vertical orientations occur in each quadrant; for example, the stimuli of horizontal orientation in quadrants 1 and 3 are in fact both 0°. Thus there are 48 levels of angle, ranging from 0° to 352.5°. Since two replications of the 13 x 4 test-segment set were tested, there are 4 observations at 0°, 90°, 180° and 270° and 2 observations at all the other orientations for each subject. We randomly sampled 2 among the 4 observations at 0°, 90°, 180° and 270° so that an equal number of observations was available for each stimulus. Each pair was averaged, giving 48 points for each subject.
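The averaging step above can be sketched as follows. This is an illustration only: the replicate array is a random stand-in for the recorded deviations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# 48 orientation levels: 0, 7.5, ..., 352.5 degrees.
angles = np.arange(48) * 7.5

# Hypothetical stand-in for one subject's deviations: two retained
# replicates per orientation (after subsampling the cardinal angles to 2).
replicates = rng.normal(loc=0.0, scale=2.0, size=(48, 2))

# Averaging each replicate pair gives the 48 points per subject.
points = replicates.mean(axis=1)

assert angles[0] == 0.0 and angles[-1] == 352.5
assert points.shape == (48,)
```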
Chapter 2
Statistical Approach

2.1 Preliminary data analysis

Our first task was to plot the data in order to display their gross features. This can be done by constructing box plots (or box-and-whisker plots), popularized by Tukey. A box plot is a simple graphical representation showing the center and spread of a distribution, along with a display of outliers. For each of the 48 levels of test-segment orientation, a set of box plots of deviations is shown in Figure 2.1. They were produced by the S-PLUS graphics feature (1992). The white dot in the interior of each box is located at the median of the data. This estimates the center of the distribution of the data. The height of the box is equal to the interquartile distance (IQD). The IQD indicates the spread of the distribution of the data. The whiskers extend to the extreme values of the data or a distance 1.5 x IQD from the center, whichever is less. Data points which fall outside the whiskers may be outliers, and so they are indicated by horizontal lines.

Figure 2.1: Box plots of deviations for forty-eight levels of test-segment orientation

There are three features to note in Figure 2.1:

1. There appears to be a systematic influence of test segment orientation on the deviation, in the sense that the deviation decreases as the angle becomes nearer 0°, 90°, 180° or 270°. This indicates that collinearity judgment is more accurate at horizontal or vertical lines than at the other angles.

2.
Distributions of deviations show distinctively smaller variances at 0°, 90°, 180° and 270°. It looks as though the variances tend to become bigger as the angles move away from these four orientations.

3. There is some evidence that distributions for several orientations are skewed.

We also wanted to make individual plots to look at possible differences in the collinearity errors between subjects. Figure 2.2 displays the individual plots.

Figure 2.2: Plots for the separate subjects

First of all, we observe that subject 8 is distinctively different from all the other subjects. The individual levels appear to differ markedly. This will be tested further in Chapter 4.

2.2 Correlation

Our next concern was the presence of correlation. Because our data consist of repeated measurements, there might be correlations between the measurements at different angles for any given subject. An autocorrelation function (ACF) plot provides an estimate of the correlation between observations separated by a lag of zero, one, or more angle units. Figure 2.3 shows eight ACF plots, each series representing one subject.

The horizontal dotted lines provide an approximate 95% confidence interval for the autocorrelation estimate at each lag. Although subject 2 shows a different pattern, the whole set of ACF plots indicates serial correlation at lags 1 and 2. Correlation is higher at lag 1 than at lag 2. The overall impression is that correlation decreases as angles of test segment orientations get further apart. This will be checked through simulation in the following section.
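A sample ACF of the kind plotted in Figure 2.3 can be computed directly. This is a generic sketch: the series below is a synthetic stand-in for one subject's 48 deviations, and the ±1.96/√n band is the usual approximate 95% bound drawn as the dotted lines.

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation r_k = c_k / c_0 for lags 0..nlags."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = np.dot(xc, xc) / n
    return np.array([np.dot(xc[: n - k], xc[k:]) / (n * c0)
                     for k in range(nlags + 1)])

# Synthetic stand-in for one subject's 48 deviations: a smooth cyclic
# series, so neighbouring orientations are strongly correlated.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 48))
r = acf(x, nlags=15)

band = 1.96 / np.sqrt(len(x))   # approximate 95% bounds

assert abs(r[0] - 1.0) < 1e-12  # lag 0 is always 1
assert r[1] > r[2] > band       # correlation decays with lag, as in the text
```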
Figure 2.3: Autocorrelation plots for eight subjects

2.3 Simulation

In this section our main concern is to look at the correlation structure of our data. To do this, we simulated from two different covariance matrices. Denote by y_i = (y_i1, ..., y_i48)^T the 48 measurements on the i-th subject (i = 1, ..., 8) and by μ = (μ_1, ..., μ_48)^T the 48 means for y_i. In our data we have a sample of eight vector observations y_1, ..., y_8. We fitted a model under the assumption that the observed vectors y_1, ..., y_8 are from N_48(μ, Σ), a 48-dimensional normal distribution with mean μ and covariance matrix Σ. The model on which the simulation is based is

    y_i = μ + e_i

for subject i, where the errors e_i are assumed to be independent with means E(e_i) = 0 and covariances V(e_i) = Σ; thus Σ is 48 x 48 with (j, k)-th element C(e_ij, e_ik) = Σ_jk. Here E, V and C denote respectively expectation, variance and covariance. Note the implication that V(y_i) = Σ is the same for all i. Two possible covariance matrices are considered, where Σ has a structure specified in terms of some parameter vector φ = (ρ, σ). One assumption for the covariance matrix Σ is

    Σ = σ²{(1 - ρ)I_48 + ρJ_48}    (2.1)

where I_48 is the 48 x 48 identity matrix and J_48 is the 48 x 48 matrix of 1's. Thus Σ has diagonal entries σ² and off-diagonal entries σ²ρ. This is equivalent to every measurement having the same variance, and correlations between any pair of measurements being equal. This matrix is called compound symmetry and is often used in traditional approaches for the analysis of repeated measurements. The other model is the case where Σ has correlations ρ^min{|j-k|, 48-|j-k|} for the (j, k)-th pair of orientation levels. Thus the correlation between components decreases geometrically with their separation in angle of test-segment orientation. The geometrically decreasing correlation form can be combined with variance σ² by taking
Thus the correlation between components ej decreases geometrically with their separa tion in angle of the test-segment orientation. Therefore the geometrically decreasing correlation form can be sensibly incorporated with variance a2 by taking 11 Y jlc = a2p>«M\j-k\A8-\3-k\} (_1 < p < 1) (2.2) The elements of the inverse of E can be calculated by using a computer algebra package such as Mathematica (1991). We obtain S j / = K M1 + P2) j = 1, • • ‘ ,48 Sj,j+ 1 = S j + l J = S i , 48 = ^48,1 = ^ 1(P) 3 = T ‘ ' i4 7 S j j + 2 3 = £ j + 2 3 ,j = S /./+ 2 5 — S /+ 2 5 ,/ = ~ A ' {p^) j = T ’ ’ - > 2 5 I = I,' • • ,2,1 EJ}+2 4 = £7+ 24, = K -'(P 2V +P2)) J= 1, ■ • • , * 2 4 where K = cr2(l — p2)(l — p48). All the other entries are 0. The determinant of E * s d etE = (< x2)48(l - p2)48(l - p48)2 2 and E -1 (and therefore E ) * s positive definite. In the model the estimator of p is the sample mean vector y. We want to estimate < f> = (p, a) for both models using the method of maximum likelihood. The log-likelihood function is log L {< f> ) = lognf=i[(27r)_^ { d e tE } “ 2 e x p { - i( y i - p ) r E _1( y i- p ) } ] We wrote a FORTRAN program for each of the two covariance matrices to estimate the parameters a and p by maximising the log-likelihood in formula (2.4). The program is given in the Appendix. The estimated values and the log-likelihoods are given in Table 2.1. 12 Table 2.1 Estimates of p,<r and log L Covariance matrix P cr log L £ = <j2{(1 — p)I<J8 + J 4 8 } 0.06 2.54 -896.80 £ . f c = cr2/9n,i,,fb’- fc l> 48-b'-A :l} 0.59 2.54 -819.79 When < / > is known the maximum likelihood estimator of £ can be calculated. A well-known theorem about diagonalizing matrices states that for a real symmetric matrix with non-zero determinant, there exists an orthogonal matrix Q such that £ — Qdiag(A}, • ■ ■, A?l)QT where diag(Ai, • • •, A„) is the diagonal matrix with the eigenvalues Ai, ■ • •, A„ of £ on its diagonal. 
Let

    Σ^(1/2) = Q diag(√λ_1, ..., √λ_n) Q^T

Then

    Σ^(1/2) Σ^(1/2) = Q diag(√λ_1, ..., √λ_n) Q^T Q diag(√λ_1, ..., √λ_n) Q^T = Q diag(λ_1, ..., λ_n) Q^T = Σ

Thus we can obtain Σ^(1/2) from Σ. If Z is a standard normal variate, then Σ^(1/2) Z has distribution N(0, Σ); the components of Σ^(1/2) Z are a set of normal variates with mean 0 and covariance matrix Σ. We generated eight copies of Σ^(1/2) Z for each of the two covariance matrices, using the fitted values of Σ and μ for each of them. An ACF plot was constructed for each of them. Figure 2.4 shows the results for the Σ of (2.1) and Figure 2.5 for the Σ of (2.2).

Figure 2.4: Simulation on Σ = σ²{(1 - ρ)I_48 + ρJ_48}

Figure 2.5: Simulation on Σ_jk = σ² ρ^min{|j-k|, 48-|j-k|}

The ACF plots in Figure 2.5, which are from the Σ of (2.2), look close to the ones of the real data in Figure 2.3. This indicates that the geometrically decreasing correlation assumption is more appropriate for our data.

Chapter 3
Model Fitting

3.1 Model specification

The resulting plot in Figure 2.1 indicates that the deviation becomes progressively smaller as the test segment is rotated toward the horizontal or vertical position. It appears that the data can be fitted by a cyclic curve. The assessment of this question is based on a statistical model. We formulate the model, expressed as a sine curve, by

    y_ij = a_i sin 4(x_j - b_i) + c_i + e_ij    (3.1)

where y_ij is the deviation taken by the i-th subject at x_j, x_j is the j-th angle of test-segment orientation, and e_ij is the usual zero-mean error term. Here a_i and c_i enter linearly and b_i is the nonlinear parameter. We allowed a_i > 0 and 0 < b_i < 90°. The systematic part of the model is the regression function for the mean deviation,
We formulate the model, expressed in a sine curve, by ijij = cii sin ^4(.Tj - + a + e{j (3.1) where i/,j is the deviation taken by the i-th subject at Xj, Xj is the j-th angle of test-segment orientation and etj is the usual zero-mean error term. Here, o, and e, enter linearly and is the nonlinear parameter. We allowed > 0 and 0 < b{ < The systematic part of the model is the regression function for the mean deviation, 16 and the random part is the normal error variation about the regression. If we write /.tij for the mean deviation for the jf-th level of orientation, then fiij = a{ sin (4(a:j - M ilo) + c” VH = M + eo This may be written in vector notation as y % = X ( 6 j ) / 3 , - + et where y t- = (yiu ■ • •, yit48)T, e; = (e,i, ■ ■ •, eit48)T, fa = ((ii,Ci)T and X ( 6 ,-) is 48 x 2 with j-th row {sin (A{xj — > !}• The errors et are assumed to be independent and normally distributed with mean zero and have a covariance matrix Then = X ( 6 ,•)/?,■ , y,- = m + e,-, ~ N (0 , £ ) which we will call model 1 : in all we will consider seven models for our data. 3.2 E stim atin g aj, 6 The probability density function for y t - is / ( y 1/3, 6 , ^> ) = (2 7 r)-^(d etE )_ 2 exp{-|(y,- - X(6,-)/30T E - 1 (y.- ~ X ( W ;) } and the log-likelihood function is therefore log L(p,b,<j>) = log[(27r)-^(detE)- ^ exp{-±(y,- - X ( 6 t -)A,)TE - 1 (y.‘ - X (W <)}] The parameters 6 ,-, c,-, /> ,• and cq are unknown, and we choose the best- fitting values of the unknown parameters. Maximum likelihood estimates for A, 6 ;,/?; and (7 { are found by maximising the log L(/3, b, p, a). To do this we wrote a FORTRAN program. The program is given in the Appendix. The maximum likelihood estimates are shown in Table 3.1. Table 3.1 Estimates of parameters and log L for each subject(Model 1) Subject a b c P a log L 1 1.14 8.08 0.06 0.45 2.50 -106.77 2 0.43 75.39 0 . 1 2 0.31 2.07 -100.70 3 2.07 44.14 -0.13 0.69 2.40 -94.55 4 0.79 32.24 -0.07 0.67 1.82 -82.29 5 2 . 
1 1 38.96 0.54 0.28 2.13 -102.48 6 2 . 1 2 39.83 -0.52 0.48 2.96 -114.03 7 0 . 8 6 29.11 0.24 0.28 1.69 -91.22 8 3.79 37.53 1.95 0.63 3.31 -113.46 Figure 3.1 displays the observed and fitted plots of deviations. Examination of the data for failure of the normal assumption can be done through quantile plots of the standardized residuals. A vector of residuals is defined as y,- — X (& ,•)/?,•, a standardized version being ETi (yi -X (6 i)A) (3-2) 18 —1 _ 1 _ I where J2i 2 > s the inverse of any square root of * -e- such that Yli 2J2iYli 2 = Ius- _ 1 Since Y,i 2 (yi ~ X(bi)/3i), based on the true parameters a;,h,-,c; and has distri- bution A ^ 4 8 (0 , l 4 8), the components of 2 (y« — X ( 6 ,•)/?,■ ) evaluated at the estimated parameters should resemble a set of 48 independent standard normal variates. Hence, these provide an indication of fit or lack of fit of the model. Plots of the quantiles of the standardized residuals, H ;_5(yi — X ( 6 j)/?i), against the normal quantiles are given in Figure 3.2. If the probability distribution is correctly specified, the plot should be roughly a straight line. The data set for subject 8 shows systematic curvature, however the points in all the other plots cluster along the straight line. 19 Deviation Deviation Deviation Deviation - 5 0 5 1 0 1 5 -1 5 - 5 0 5 1 0 1 5 -1 5 - 5 0 5 1 0 1 5 -1 5 - 5 0 5 1 0 15 subject 1 subject 2 o 100 200 300 Orientation of Test Segment subject 3 Orientation of Test Segment subject 5 Orientation of Test Segment subject 7 o 100 200 300 0 100 300 200 L O ■ » — 0 1 0 0 200 300 £2 > < D LO O O T “ 0 100 200 300 Orientation of Test Segment subject 4 o . 
Figure 3.1: Observed and fitted plots of deviations

Figure 3.2: Probability plots of the standardized residuals for each of eight subjects

Chapter 4
Analysis of the Effects of the Deviation

In this chapter we compare several models. We

1. Test homogeneity of the deviation between subjects.
2. Test homogeneity of the a and b parameters.
3. Test constant variance.
4. Test the covariance matrix.

4.1 Testing homogeneity between subjects

In our original analysis, we fitted individual sine curves. In this section, the question of interest concerns differences between subjects. The individual plots in Figure 2.2 look quite cyclic, but the individual levels appear to differ markedly. The fitted lines, judging by the fitted values of a, b and c in Table 3.1, vary quite a lot. This can be tested as follows. We throw all measurements together and fit a single curve as in (3.1), which will be called Model 2. Here the covariance matrix Σ of e_i
The results are given in Table 4.1. Table 4.1 Estimates of a,b, c, p and log L a b c P a log L 1.30 38.39 0.27 0.59 0.72 -847.67 If the common line fits the data almost as well as the individual lines then the log-likelihood log L for the model with a common set of the parameters of -847.67 should not greatly fall behind the sum of the individual log Ls in Table 3.1, which is — 106.77 — 100.70 — • • • — 113.46 = — 805.50. A formal comparison of the two models is expressed in terms of the likelihood ratio test of the hypothesis that there are no differences between individuals. The likelihood ratio test statistic for this hypothesis is A = - 2 ( -847.67 + 805.50) = 84.34 Critical values for A are based on its asymptotic distribution when the hypothesis is true, which is approximately x'2 with 40 — 5 = 35 degrees of freedom. Choosing ci=0.05, the critical value is 49.80. This value is considerably smaller than the value of A. Therefore, we conclude that there are significant differences in deviations between subjects. These test bear out one’s visual impressions from Figure 2.2 that individual lines do not coincide. 23 subject 1 subject 2 subject 3 subject 4 I it © 5 Q 7 •2 0 1 2 (M I r Q •2 0 2 O J I 1 o O CM •2 0 2 Q uantise of Standard Normal Q uantit* of Standard Normal Q uantise of Standard Normal Q uantise of Standard Normal subject 5 subject 6 subject 7 subject 8 04 8 I ° < 3 •2 0 2 w | § ® 3 -2 0 1 2 1 o 0 2 •2 • 1 .8 •2 o 2 1 0 2 •2 Q uantise of Standard Normal Quantise of Standard Normal Q uantise of Standard Normal Q uantise of Standard Normal Figure 4.1: Probability plots of the standardized residuals for eight subjects _ J_ The quantiles of the standardized residuals, J2i 2(y i — X(b),-/#;)(* = 1, • ■ • , 8 ), are plotted against the quantiles of the standard normal in Figure 4.1. If the probability distribution is correctly specified, the plot should be roughly a straight line. 
The fits for subjects 1, 2, 3, 5, 6 and 7 seem quite adequate, but there is some evidence of a systematic lack of fit for subjects 4 and 8.

4.2 Testing homogeneity of parameters a and b

From Table 3.1, it appears that the major differences are among the a's or the b's, the c's being possibly equal among individuals. To test this, two sets of eight sine curves can be fitted. A set of eight sine curves with different a_i and common b, c, σ and ρ can be fitted (Model 3). For testing differences of the b parameters, another set of eight sine curves with different b_i and common a, c, σ and ρ is fitted (Model 4). The results are given in Table 4.2.

Table 4.2: Estimates of parameters for the models with different a's (Model 3) and different b's (Model 4)

    Model   a_1    a_2    a_3    a_4    a_5    a_6    a_7    a_8    b      c     ρ     σ
    3       0      0      1.92   0.71   2.11   2.11   0.67   3.78   38.77  0.27  0.55  2.55

    Model   b_1    b_2    b_3    b_4    b_5    b_6    b_7    b_8    a      c     ρ     σ
    4       8.08   75.39  44.15  32.24  38.96  39.83  29.11  37.53  1.66   0.27  0.56  2.62

To compare Models 1, 2, 3 and 4, we first present all the maximum log-likelihoods in one table.

Table 4.3: Comparison of Models 1, 2, 3 and 4

    Model   log L     df   Source
    1       -805.50   40   all parameters different
    2       -847.67   5    common a, b, c, ρ and σ
    3       -835.20   12   different a's
    4       -840.35   12   different b's

To compare Models 2 and 3, we observed the value

    χ² = 2(847.67 - 835.20) = 24.94

This is greater than 14.07, the 95% quantile of χ²_7. This suggests that the a parameters are different between individuals. To compare Models 2 and 4 we observed

    χ² = 2(847.67 - 840.35) = 14.64

which is a little greater than 14.07, the 95% quantile of χ²_7. Thus we get weak evidence that the b parameters are different between individuals. Models 3 and 4 have the same number of parameters, 12. Looking at their maximum log-likelihoods, the log L of Model 3 is greater than that of Model 4. This tells us that the differences are greater among the a's than among the b's. Finally we compare Models 1 and 3.
The observed χ² test statistic is

    χ² = 2(835.20 - 805.50) = 59.40

which is greater than 41.34, the 95% quantile of χ²_28. Therefore we conclude that among the four models Model 1 is the best fitting and Model 3 is the second best.

4.3 Testing constant variance

So far we have assumed that the variance was constant over test-segment orientations. However, the variance does not look constant in the box plots of Figure 2.1. From a visual inspection of the plots, it appears to be small near horizontal and vertical orientations and to become large as the angle of orientation approaches 45°. We now assume two different variances, σ₁² and σ₂², for two categories of orientations. Let σ₁² be the variance for the 5 orientations nearest each horizontal or vertical one (inclusive), and σ₂² the variance for the other orientations (σ₁² < σ₂²). The two-variance assumption can be combined with the geometrically decreasing correlation form of (2.2). The covariance matrix Σ can be written as the product of three components:

    Σ = D A D

1. A represents the correlation matrix. The correlation for the (j, k)-th pair of orientation levels is

    A_jk = ρ^min{|j-k|, 48-|j-k|}

2. D represents the standard deviation matrix. Thus D is diagonal with entries

    D_jj = σ₁    for j = 1, 2, 3, 47, 48 and j = 12k - 1, ..., 12k + 3, k = 1, 2, 3
    D_ll = σ₂    for the other l, 1 ≤ l ≤ 48

Then

    Σ⁻¹ = D⁻¹ A⁻¹ D⁻¹

so in the inverse of the covariance matrix assumed for Model 1, σ² is replaced by σ₁² or σ₂² according to the orientation. The determinant of Σ is easily found:

    det(Σ) = det(D A D) = {det(D)}² det(A) = (σ₁²)²⁰ (σ₂²)²⁸ (1 - ρ²)⁴⁸ (1 - ρ⁴⁸)²

It is convenient to call the sine-curve model with this covariance matrix Model 5. We want to carry out a formal test of significance to see which assumption about the variance is reasonable. We fitted Model 5 to each of the eight subjects' data sets. The estimated values of the parameters are given in Table 4.4.
For comparison, we present them together with the Model 1 estimates.

Table 4.4 Comparison of Model 1 (upper row for each subject) and Model 5 (lower row); the Model 5 row lists σ₁ and σ₂, and the final column is the χ² statistic 2(log L₅ - log L₁)

Subject    a     b      c      ρ     σ (σ₁, σ₂)   log L     χ²
1          1.14  36.92   0.06  0.45  2.50         -106.77
           0.83  11.47   0.34  0.46  1.88, 2.90   -104.66   4.22
2          0.43  59.61   0.12  0.31  2.07         -100.70
           0.36  58.31   0.31  0.37  1.43, 2.52    -97.78   5.84
3          2.07   0.86  -0.13  0.69  2.40          -94.55
           2.45  44.93  -0.08  0.69  1.98, 2.64    -93.48   2.14
4          0.79  12.76  -0.07  0.67  1.82          -82.29
           0.88  34.71  -0.22  0.64  1.17, 2.03    -78.62   7.34
5          2.11   6.04   0.54  0.28  2.13         -102.48
           2.84  39.91   0.61  0.26  1.39, 2.58    -99.55   5.86
6          2.12   5.17  -0.52  0.48  2.96         -114.03
           1.94  39.97  -0.53  0.47  2.57, 3.19   -113.38   1.30
7          0.86  15.89   0.24  0.28  1.69          -91.22
           0.87  29.63   0.23  0.27  1.56, 1.76    -91.05   0.34
8          3.79   7.47   1.95  0.63  3.31         -113.46
           3.76  37.22   1.86  0.62  3.00, 3.45   -113.20   0.52

Each χ² statistic is compared with 3.84, the 95% quantile of χ² on 1 degree of freedom. The values for subjects 1, 2, 4 and 5 exceed the critical value, but not by much. We can therefore hardly say that the variances are different.

4.4 Testing the covariance matrix

In this section we check whether the form assumed for the covariance matrix is appropriate for the sine curve. To do this, we fit the model with each of two alternative covariance matrices. They are shown in Table 4.5 below, in terms of the parameter vector φ (here I₄₈ is the 48 × 48 identity matrix and J₄₈ the 48 × 48 matrix of ones).

Table 4.5 The three possible covariance matrices

Model   Covariance matrix (Σ_jk)        Parameter vector (φ)
2       σ² ρ^min{|j-k|, 48-|j-k|}       (ρ, σ)
6       σ² I₄₈                          (σ)
7       σ² {(1 - ρ) I₄₈ + ρ J₄₈}        (ρ, σ)

Model 2 is the one we consider appropriate for our data: it lets the correlation fall off with separation in angle of the test-segment orientation. Model 6 assumes that the components of y_i are uncorrelated, each with variance σ². Model 7 is a slightly more general model in which each component of y_i has the same variance σ² and all correlations between pairs of components are equal.
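The three working covariance structures of Table 4.5 can be written down directly. The sketch below (pure Python; the σ and ρ values used in the tests are illustrative, and the function names are my own) constructs Σ under each model.

```python
# The three candidate covariance structures of Table 4.5 for the 48
# test-segment orientations.
n = 48

def cov_model2(sigma, rho):
    # Model 2: correlation decays geometrically with circular separation.
    return [[sigma**2 * rho**min(abs(j - k), n - abs(j - k))
             for k in range(n)] for j in range(n)]

def cov_model6(sigma):
    # Model 6: independence, sigma^2 I.
    return [[sigma**2 if j == k else 0.0 for k in range(n)]
            for j in range(n)]

def cov_model7(sigma, rho):
    # Model 7: compound symmetry, sigma^2 {(1 - rho) I + rho J}.
    return [[sigma**2 * (1.0 if j == k else rho) for k in range(n)]
            for j in range(n)]
```

Note that Model 2 reduces to Model 6 at ρ = 0, which is what makes the comparison of the two models a one-degree-of-freedom test.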
We compare Model 2 with Models 6 and 7 on the basis of log L. The maximum likelihood estimates under the three covariance matrices are given in Table 4.6.

Table 4.6 Estimates under each covariance matrix and log L

Model   a     b      c     ρ     σ     log L
2       1.30  37.39  0.27  0.59  2.72  -847.67
6       1.30  37.39  0.27  -     2.72  -928.71
7       1.97  37.39  0.27  0.04  2.71  -930.64

The log-likelihoods in Table 4.6 imply that the correlation assumption of Model 2 provides the best fit, which is consistent with the result in section 2.2. To test Model 2 against Model 6, we take the log-likelihood of Model 2, -847.67, as in Table 3.1, and compare it with the log-likelihood log L = -928.71 obtained from the fit of Model 6 to the data. Twice the difference of these log-likelihoods is approximately distributed as χ² with 5 - 4 = 1 degree of freedom under the hypothesis that Model 6 is true. We observe the value

X² = -2(-928.71 + 847.67) = 162.08,

which is considerably greater than 3.84, the 95% point of χ² on 1 degree of freedom. This is strong evidence that Model 2 is much better than Model 6. Model 2 and Model 7 have the same number of parameters; since log L for Model 2 is much greater than that for Model 7, the assumption of Model 2 is also more appropriate than that of Model 7. We therefore conclude that the assumption of correlation decreasing with separation in angle of test-segment orientation is reasonable.

Chapter 5

Discussion

We initially fitted separate sine curves to the data sets of the eight subjects. The fits of these models appeared satisfactory, as judged by the standardized residual plots. We then sought to simplify the structure of the fits by testing for equality of the sine-curve parameters. We found that the a parameters were significantly different between subjects. Subject differences in the b parameters were marginally significant, but the major differences were among the a parameters.
However, the sine curve with each of the parameters fitted individually remained the best model. There are several aspects of this analysis that might be extended. We made the assumption that the correlation between deviations within an individual at different angles of test-segment orientation decreases with their separation; an assessment of this assumption would be useful. We checked the goodness of fit of the sine curve through the quantile plots of the standardized residuals of (3.2). However, because of the dependence within subject, whether these plots are appropriate remains an open question.

Appendix A

This appendix contains a FORTRAN program. We wrote eight programs in all, one for each of the eight models, and present one of them here. This program finds maximum likelihood estimates of the parameters under Model 1, which was discussed in chapter 3.

c     Program for fitting dependent Normal model
c     Eight-person version
c     Oct 6, 1995
c     a > 0, 0 < b < 90
      Implicit Double Precision (A-H,O-Z)
      Common /DATA/ XJ(48), ANGLE(8,48)
      Common /SS/ RSS
      Dimension X(5)
      Dimension XI(5,5)
      Character*64 FN1
      Open (Unit=11,File='chung.est',Status='unknown')
      Open (Unit=12,File='angnew7.out',Status='unknown')
      Write (6,*) 'Enter name of data file'
      Read (5,'(A)') FN1
      Open (Unit=10,File=FN1,Status='unknown')
      Write (12,5000) FN1
      Do 10 J = 1, 48
        Read (10,*) XJ(J), (ANGLE(I,J),I = 1,8)
   10 Continue
      Do 20 I = 1, 5
        Read (11,*) X(I)
   20 Continue
c     transform to unconstrained parameters for the optimizer
      X(4) = LOG(X(4))
      X(5) = SQRT((1.0 + X(5)) / (1.0 - X(5)))
      Write (12,*) ' Initial A = ', X(1)
      Write (12,*) ' Initial B = ', X(2)
      Write (12,*) ' Initial C = ', X(3)
      Write (12,*)
      X(1) = LOG(X(1))
      X(2) = SQRT(X(2) / (90.0 - X(2)))
      RHO = (X(5)**2 - 1.0) / (X(5)**2 + 1.0)
      SIGMA = EXP(X(4))
      Write (12,*) ' Initial RHO = ', RHO
      Write (12,*) ' Initial SIGMA = ', SIGMA
      Write (12,*)
      FRET = FUNC(X)
      Write (6,*) 'initial log-likelihood = ', -FRET
      Write (12,*) 'initial log-likelihood = ', -FRET
      Write (12,*)
      Do 40 I = 1, 5
        Do 30 J = 1, 5
          XI(I,J) = 0.0
          If (I.EQ.J) XI(I,J) = 1.0
   30   Continue
   40 Continue
      FTOL = 1.0E-9
      Call POWELL(X,XI,5,5,FTOL,ITER,FRET)
      FRET = FUNC(X)
      Write (6,*) 'final log-likelihood = ', -FRET
      Write (12,*) 'final log-likelihood = ', -FRET
      Write (6,*) 'residual sum of squares = ', RSS
      Write (12,*) 'residual sum of squares = ', RSS
c     transform back to the natural parameters
      X(2) = 90.0 * X(2)**2 / (1.0 + X(2)**2)
      Write (6,*) ' A = ', EXP(X(1))
      Write (6,*) ' B = ', X(2)
      Write (6,*) ' C = ', X(3)
      Write (6,*)
      Write (12,*) ' A = ', EXP(X(1))
      Write (12,*) ' B = ', X(2)
      Write (12,*) ' C = ', X(3)
      Write (12,*)
      RHO = (X(5)**2 - 1.0) / (X(5)**2 + 1.0)
      SIGMA = EXP(X(4))
      Write (6,*) ' RHO = ', RHO
      Write (6,*) ' SIGMA = ', SIGMA
      Write (6,*)
      Write (12,*) ' RHO = ', RHO
      Write (12,*) ' SIGMA = ', SIGMA
      Write (12,*)
      Write (12,*) ' ************************************************'
 5000 Format (1X,'Input file = ',A64)
      End

      Function FUNC(X)
      Implicit Double Precision (A-H,O-Z)
      Common /DATA/ XJ(48), ANGLE(8,48)
      Common /SS/ RSS
      Dimension X(5)
      Dimension COVINV(48,48)
      Dimension XMU(48)
      PI = 3.1415926536
      Do 20 I = 1, 48
        Do 10 J = 1, 48
          COVINV(I,J) = 0.0
   10   Continue
   20 Continue
      BVAL = 90.0 * X(2)**2 / (1.0 + X(2)**2)
      AVAL = EXP(X(1))
      RHO = (X(5)**2 - 1.0) / (X(5)**2 + 1.0)
      SIGMA = EXP(X(4))
      RHO2 = RHO * RHO
      SIGMA2 = SIGMA * SIGMA
c     inverse of the covariance matrix
      CONS = SIGMA2 * (1.0 - RHO2) * (1.0 - RHO**48)
      CONS = 1.0 / CONS
      Do 30 I = 1, 48
        COVINV(I,I) = 1.0 + RHO * RHO
   30 Continue
      Do 40 I = 1, 47
        COVINV(I,I+1) = -RHO
        COVINV(I+1,I) = -RHO
   40 Continue
      Do 50 I = 1, 25
        COVINV(I,I+23) = -RHO ** 25
        COVINV(I+23,I) = -RHO ** 25
   50 Continue
      Do 60 I = 1, 24
        COVINV(I,I+24) = RHO ** 24 * (1.0 + RHO*RHO)
        COVINV(I+24,I) = RHO ** 24 * (1.0 + RHO*RHO)
   60 Continue
      Do 70 I = 1, 23
        COVINV(I,I+25) = -RHO ** 25
        COVINV(I+25,I) = -RHO ** 25
   70 Continue
      COVINV(1,48) = -RHO
      COVINV(48,1) = -RHO
      Do 90 I = 1, 48
        Do 80 J = 1, 48
          COVINV(I,J) = CONS * COVINV(I,J)
   80   Continue
   90 Continue
c     sine-curve mean at each orientation
      Do 100 I = 1, 48
        DEL = 4.0 * (XJ(I) - BVAL) * PI / 180.0
        XMU(I) = AVAL * SIN(DEL) + X(3)
  100 Continue
      S = -192.0 * LOG(2.0*PI)
      RSS = 0.0
      Do 130 I = 1, 8
        Do 120 J = 1, 48
          AJ = ANGLE(I,J) - XMU(J)
          Do 110 L = 1, 48
            AL = ANGLE(I,L) - XMU(L)
            RSS = RSS + AJ * COVINV(J,L) * AL
  110     Continue
  120   Continue
  130 Continue
      S = S - RSS/2.0
c     log-determinant term, eight subjects
      DEL = 48.0 * LOG(SIGMA2) + 48.0 * LOG(1.0 - RHO2)
     &      + 22.0 * LOG(1.0 - RHO**48)
      DEL = 8.0 * DEL
      S = S - DEL / 2.0
      FUNC = -S
      Return
      End

c     *********** NUMERICAL RECIPES CODE BELOW HERE *********
      Subroutine POWELL(P,XI,N,NP,FTOL,ITER,FRET)
C     (C) Copr. 1986-92 Numerical Recipes Software.
      Subroutine LINMIN(P,XI,N,FRET)
C     (C) Copr. 1986-92 Numerical Recipes Software.
      Function BRENT(AX,BX,CX,F,TOL,XMIN)
C     (C) Copr. 1986-92 Numerical Recipes Software.
      Function F1DIM(X)
C     (C) Copr. 1986-92 Numerical Recipes Software.
      Subroutine MNBRAK(AX,BX,CX,FA,FB,FC,FUNC)
C     (C) Copr. 1986-92 Numerical Recipes Software.
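The parameter transformations and the fitted sine curve used by the program can be sketched outside FORTRAN. The snippet below (Python; the function names are my own, only the formulas come from the code above) shows the reparameterisations that keep a > 0, 0 < b < 90, σ > 0 and -1 < ρ < 1 during unconstrained optimisation, and the sine-curve mean.

```python
# Sketch of the reparameterisations in the FORTRAN program:
#   a = exp(x1), b = 90 t^2/(1 + t^2), sigma = exp(x4),
#   rho = (u^2 - 1)/(u^2 + 1) with u = sqrt((1 + rho)/(1 - rho)),
# plus the fitted sine curve mu(x) = a sin(4 pi (x - b)/180) + c.
import math

def mu(x_deg, a, b, c):
    """Mean collinearity bias at orientation x (degrees); period 90."""
    return a * math.sin(4.0 * (x_deg - b) * math.pi / 180.0) + c

def to_unconstrained(a, b, sigma, rho):
    """Natural parameters -> unconstrained optimizer coordinates."""
    return (math.log(a),
            math.sqrt(b / (90.0 - b)),
            math.log(sigma),
            math.sqrt((1.0 + rho) / (1.0 - rho)))

def from_unconstrained(x1, x2, x4, x5):
    """Unconstrained optimizer coordinates -> natural parameters."""
    return (math.exp(x1),
            90.0 * x2 ** 2 / (1.0 + x2 ** 2),
            math.exp(x4),
            (x5 ** 2 - 1.0) / (x5 ** 2 + 1.0))
```

The two maps are inverses, so a minimizer such as POWELL can search freely over the transformed coordinates while the natural parameters stay inside their ranges.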