SPSS PORTFOLIO

EXERCISE 1: INDEPENDENT SAMPLES T-TEST

Before any analysis is done in SPSS, the variables need to be set up; this is done in the Variable View window.

After coding for missing data it was found that one participant had left out a question. The missing response was coded as 9; it was question 2 for participant ID 4.
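A minimal syntax sketch of this step, assuming the questionnaire items are named Q1 to Q4 (hypothetical names):

    * Declare 9 as the missing-data code for the questionnaire items.
    MISSING VALUES Q1 TO Q4 (9).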

When recoding the gender variable, the following procedure was followed:

Transform

Into Different Variables
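In syntax form this corresponds to something like the sketch below; the old and new codes (1/2 recoded into 0/1) and the name GenderR are assumptions for illustration:

    * Recode gender into a new variable rather than overwriting the original.
    RECODE Gender (1=0) (2=1) INTO GenderR.
    EXECUTE.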

To compute a new variable called Score that gives the average of Questions 1 to 4, the following procedure is followed:

Transform

Compute . . .

Target Variable: Score

Numeric Expression: (Q1+Q2+Q3+Q4)/4
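The equivalent syntax, assuming the item names above:

    * Score is the mean of the four questionnaire items.
    COMPUTE Score = (Q1 + Q2 + Q3 + Q4) / 4.
    EXECUTE.

(With missing values declared, MEAN(Q1 TO Q4) would average whichever items were answered, whereas the arithmetic expression returns missing if any item is missing.)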

In order to compare the average scores of males and females on the questionnaire, for participants aged 23 to 40 years only, the following procedure is used:

Select Cases . . .

If condition is satisfied

Then specify: Age > 22.75 and Age < 40.25

(so you don’t miss out participants aged exactly 23, or 40!)
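SPSS implements this selection as a filter variable; a sketch of the generated syntax (filter_$ is SPSS's default filter name):

    USE ALL.
    COMPUTE filter_$ = (Age > 22.75 AND Age < 40.25).
    FILTER BY filter_$.
    EXECUTE.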

Question 6: What is the result, both for the test of the assumption of homogeneity of variance, and for the t test itself?

Levene’s Test for Equality of Variances is not significant (p = .608, which is greater than alpha = .05); therefore the homogeneity of variance assumption is satisfied.

t(7) = 0.22, p = .83. This is not significant, meaning there is no significant difference in Score between females and males.
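A syntax sketch for the t-test itself, assuming Gender is coded 1 and 2:

    T-TEST GROUPS=Gender(1 2)
      /VARIABLES=Score
      /CRITERIA=CI(.95).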

The following are the steps followed in checking for univariate outliers (the z > 3.29 criterion is used, separately for each Gender):

Split File . . .

Compare groups

Groups Based On: Gender

Descriptive Statistics

Descriptives . . .

Variables: Score

Save standardized values as variables
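As syntax, the same steps are roughly:

    * Split the file by gender, then save z-scores of Score as a new variable.
    SORT CASES BY Gender.
    SPLIT FILE LAYERED BY Gender.
    DESCRIPTIVES VARIABLES=Score
      /SAVE.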

Question 8: Are there any outliers, which ones (i.e., which ID numbers)?

No, there are none. None of the z-scores (added to the dataset as new variables) are equal to or greater than 3.29.

The following is the procedure for checking normality:

Descriptive Statistics 

Explore . . .

Score Dependent List:

Plots…

 Normality plots with tests
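The corresponding syntax sketch:

    * Explore with normality plots and tests (Kolmogorov-Smirnov and Shapiro-Wilk).
    EXAMINE VARIABLES=Score
      /PLOT BOXPLOT NPPLOT
      /STATISTICS DESCRIPTIVES.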

Question 10: What conclusion do you draw about the assumption of normality?

Both normality tests (Kolmogorov-Smirnov and Shapiro-Wilk) are not significant, indicating that the normality assumption is satisfied.

Question 11: If the assumption of normality is violated, what test should you perform instead of the t-test?

Mann-Whitney Test (nonparametric equivalent)

Question 12: What must you remember to do before performing this alternative test?

Remember to turn Split File off.

Question 13: What is the result of the alternative test?

Mann-Whitney z = -0.25, p = .91

This is also not significant and, like the t-test, indicates no difference in Score between the genders.
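A sketch of both steps (Questions 12 and 13) in syntax:

    * Turn the split off first, then run the Mann-Whitney U test.
    SPLIT FILE OFF.
    NPAR TESTS
      /M-W= Score BY Gender(1 2).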

Question 14: What must you now do if you want to repeat the analysis on the whole dataset (i.e., all ages included)?

Select Cases… (and choose All cases)
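In syntax, removing the age filter is simply:

    FILTER OFF.
    USE ALL.
    EXECUTE.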

[SPSS output omitted: Group Statistics (N, mean, std. deviation, std. error mean); Independent Samples Test (Levene's Test for Equality of Variances; t-test for Equality of Means with Sig. (2-tailed), mean difference, std. error difference, and 95% confidence interval of the difference, for equal variances assumed and not assumed); Tests of Normality (Kolmogorov-Smirnov with Lilliefors significance correction, Shapiro-Wilk; *. lower bound of the true significance).]

EXERCISE 2: BIVARIATE CORRELATION AND REGRESSION

Question 1: Which variable should appear on the X-axis of the scatterplot?

In the scatterplot, temperature (the predictor) is on the X-axis, while assault rate is on the Y-axis. From the diagram it can be seen that there is a positive relationship between the two variables. Some points deviate from the line, which indicates that the relationship between the two variables is not perfect.
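A scatterplot syntax sketch, with the variable names Temperature and Assault assumed:

    GRAPH
      /SCATTERPLOT(BIVAR)=Temperature WITH Assault.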

The assumptions being observed:

  • There is a linear relationship between the two variables, with no indication of a curvilinear relationship.

  • The spread of scores should be even across the range of X-scores; in other words, the variables should be homoscedastic, not heteroscedastic.

  • There should be no restricted range, where in some areas there appears to be no relationship between the variables.

  • There should be no outliers.

Question 6: What is the correlation between heat and aggression, and what part of the output gives you this information?

The correlation between the two variables is given in the Correlations table, as can be seen below. From the table it can be seen that there is a strong positive correlation of .877 between assault rate and temperature.
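The matching syntax sketch:

    CORRELATIONS
      /VARIABLES=Temperature Assault
      /PRINT=TWOTAIL NOSIG.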

Question 7:
What is the value of R2; what does it tell you?

The value of R² is .769, which tells us that 76.9% of the variation in assault rate can be explained by temperature.

Question 8:
The adjusted R2 is the most robust estimate of variance. Why is this so?

The adjusted R² is the more robust estimate because the sample R² tends to overestimate the variance explained in the population; the adjustment corrects for this bias by taking account of sample size and the number of predictors.

Question 9:
Why is the ANOVA result important; what does it enable you to conclude about the relationship between heat and assault rates in this case?

The ANOVA result is important as it tells us whether the regression equation is significant or not. From the table it can be seen that the equation is significant, given the small p-value. The sum of squares of the residuals also gives a picture of how far the assault values deviate from the regression line.

TABLE 3: ANOVA(b)

Regression: Sum of Squares = 39611.666, df = 1, Mean Square = 39611.666
Residual: Sum of Squares = 11911.251, df = 10, Mean Square = 1191.125
Total: Sum of Squares = 51522.917, df = 11

(hence F = 39611.666 / 1191.125 ≈ 33.26)

a. Predictors: (Constant), Temperature
b. Dependent Variable: Assault rate

Question 10:
If you wanted to predict the assault rate for a month with a temperature of 22, what equation would you use?

The equation to be used can be derived with the help of Table 4 below.

[TABLE 4: Coefficients(a) omitted - unstandardized coefficients (B, std. error) and standardized coefficients for (Constant) and Temperature; a. Dependent Variable: Assault rate.]

Y = 64.887 + 0.712X

Substituting for X:

Y = 64.887 + 0.712 × 22 = 80.55

Question 11:
Show your calculation of the predicted assault rate using this equation?

Now add another variable (Monthly Homicide Rate) with values as follows:

2, 8, 6, 2, 1, 2, 2, 3, 4, 4, 6, 7

[TABLE 5: Correlations omitted - Pearson correlations and two-tailed significance among monthly homicide, assault rate, and temperature; *. correlation significant at the 0.05 level (2-tailed); **. correlation significant at the 0.01 level (2-tailed).]

Then, request correlations among all three numeric variables.
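A syntax sketch for the three-variable correlation matrix (Homicide is an assumed variable name):

    CORRELATIONS
      /VARIABLES=Temperature Assault Homicide
      /PRINT=TWOTAIL NOSIG.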

Question 12:
Note that this produces a rather “wasteful” matrix, where everything is repeated.

What is the correlation between assault rate and homicide rate?

The correlation between assault rate and monthly homicide rate is .494, which is not significant: the p-value given in the table is .103, which is greater than .05.

Question 13: Is this correlation significant; and how do you know?

Results section appearance:

r(10) = .494, p > .05

Question 14:
Write out this correlation exactly as it would need to appear in the Results section of a research report.

The correlation between temperature and homicide was .645. The value is significant at the .05 level, given that the p-value of .023 shown in the table is lower than .05.

Results section appearance:

r(10) = .645, p < .05

TABLE 1: CORRELATIONS (value retained: Pearson correlation between temperature and assault rate = .877 (A); **. correlation is significant at the 0.01 level, 2-tailed.)

[TABLE 2: Model Summary omitted - R, R Square, Adjusted R Square, Std. Error of the Estimate, and change statistics (R Square change, Sig. F change); a. Predictors: (Constant), Temperature.]
EXERCISE 3: RELIABILITY ANALYSIS

For internal consistency the items are supposed to have an item-total correlation of not less than .30 (Foolproof, p. 286). The test on perspective taking produced the results in TABLE 1. From the results it is seen that variable P4 (Perspective taking 4) had a value of .183, below the acceptable criterion of .30, which is an indication of inconsistency with the overall scale. Following the deletion of P4, variable P1 is seen to have an item-total correlation of .268, which is also below .30, as can be seen in Table 2. Finally, after deletion of P1, the remaining five variables all have item-total correlations above .30, indicating they are now all consistent with the overall scale. Through the deletion process, Cronbach's alpha improved from .638 to .662 and then to the final value of .669, as can be seen in Tables 1, 4 and 6 respectively. All of these values are below the recommended value of .7.
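A reliability syntax sketch for this scale, with the item names P1 to P7 assumed:

    RELIABILITY
      /VARIABLES=P1 P2 P3 P4 P5 P6 P7
      /SCALE('Perspective taking') ALL
      /MODEL=ALPHA
      /SUMMARY=TOTAL.

(Rerunning after removing P4, and again after removing P1, reproduces the stepwise deletions described above.)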

From Table 10, the first test on empathic concern indicated that variables E2 and E3, with item-total correlations of .279 (cell B1) and .187 (cell B2) respectively, are below the criterion of .30, which is an indication of inconsistency with the overall scale. According to the Foolproof guide, the item with the lowest loading should be deleted first and the improvement observed. The deletion of E3 brings about an improvement in Cronbach's alpha from .646 (cell X2), as seen in Table 8, to .656 (cell C1), as seen in Table 10. After deletion of just E3, we have a new combination of variables in the scale, and with this new combination item E2 is now consistent with the total scale: it has an item-total correlation of .314, just above the .30 criterion. All six remaining items are consistent according to the .30 criterion. This illustrates how item deletion may need to be done one at a time, rather than removing all items below .30 at each step. Researchers may need to look at both approaches, referring back to the actual wording and meaning of items to try to determine how the nature of the scale changes with item deletion.

From the table it is observed that all seven of the personal distress items have item-total correlations exceeding the criterion of .30. From Table 13 (cell X5) the value of Cronbach's alpha is .779, which is above the .7 recommended for reliability.

[SPSS reliability output omitted: Reliability Statistics (Cronbach's alpha, Cronbach's alpha based on standardized items, N of items), Item Statistics, and Item-Total Statistics (scale mean and variance if item deleted, corrected item-total correlation, squared multiple correlation, Cronbach's alpha if item deleted) for the Perspective taking scale (items 1-7, then with P4 deleted, then with P1 also deleted), the Empathic concern scale (items 1-7, then with E3 deleted), and the Personal distress scale (items 1-7). Values retained from the tables: perspective taking alpha (X2) = .661 after deleting P4 and (X3) = .669 after also deleting P1; empathic concern alpha (X2) = .646, with item-total correlations (A2) = .279 for Empathic concern 2 and (A3) = .187 for Empathic concern 3; personal distress alpha (X5) = .779.]
EXERCISE 4: FACTOR ANALYSIS: PART 1: ASSUMPTION CHECKING AND FACTOR EXTRACTION

From the frequency tables it is clear that some of the variables have extremely restricted ranges, with everyone scoring much the same. EXTINCT and PETS exhibit these characteristics and have to be eliminated. The other variable with such characteristics is MEAT, but this will not be eliminated, for the purposes of Analysis 2. Others, such as SOUVENIR (in particular), FERAL, FUR, TRANS, DUCK, and possibly WILD, are somewhat restricted but have some range, so they are retained.

On running principal component analysis, the value of KMO is .706 at .000 significance, as can be observed from TABLE 21 in APPENDIX 6. Although this value is not particularly high, it is acceptable. From TABLE 22 it is evident that all the variables have communalities above .30, so none is an outlier. TABLE 23 shows that 6 components should be extracted using the eigenvalue > 1 criterion, while the scree plot (APPENDIX 8) shows that 3 components should be extracted. Using the scree plot, there are three extracted factors whose eigenvalues and variance are as in TABLE 26 (APPENDIX 10). The values are:

Component 1: eigenvalue 4.464, 24.799% of variance

Component 2: eigenvalue 1.703, 9.460% of variance

Component 3: eigenvalue 1.607, 8.928% of variance

Total: 43.187% of variance
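A factor-extraction syntax sketch for the three-component solution (the 18 retained variables; lowercase names assumed from the appendices and output tables):

    FACTOR
      /VARIABLES hens sport fishing exper farm trans fur rodeo taildock
          gamefish circus eatkang wild souvenir threat duck feral meat
      /PRINT INITIAL KMO EXTRACTION
      /PLOT EIGEN
      /CRITERIA FACTORS(3)
      /EXTRACTION PC
      /ROTATION NOROTATE.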

TABLE 25 shows that MEAT and SOUVENIR seem to be outliers, as they have communalities of .215 and .258 respectively, below the minimum value of .30.

The following are the assumptions that have not been checked here but should be checked in an actual research study:

Linearity (by examining at least a sample of variable pair scatterplots)

Outliers – univariate (modifying any variables with Z-scores more extreme than 3.29)

Outliers – multivariate (identified by Mahalanobis value, to be covered under multiple regression)

Normality

The sample size is just the minimum acceptable, as there were 100 participants in the data. This gives a minimum of 5 cases per variable (20 × 5 = 100; after eliminating two variables, 18 × 5 = 90). A bigger sample is needed, as most of the loadings in the component matrix are low, with values of less than .8.

APPENDIX 1

TABLE 1: HENS - has full range. TABLE 2: SPORT - has full range. TABLE 3: FISHING - has full range. TABLE 4: EXPER - has full range.

[Frequency tables omitted throughout the appendices: each showed Frequency, Valid Percent, and Cumulative Percent columns.]

APPENDIX 2

TABLE 5: FARM - lacks -3 but reasonable. TABLE 6: TRANS - has full range. TABLE 7: FUR - has full range, with few in the negative. TABLE 8: RODEO - has full range.

APPENDIX 3

TABLE 9: TAILDOCK - has full range. TABLE 10: GAMEFISH - has full range. TABLE 11: CIRCUS - has full range. TABLE 12: EATKANG - full range, with more in the negative.

APPENDIX 4

TABLE 13: WILD - lacks -3, with very few in the negative. TABLE 14: SOUVENIR - full range, but few in the negative. TABLE 15: THREAT - has full range. TABLE 16: DUCK - full range, but few in the negative.

APPENDIX 5

TABLE 17: FERAL - full range, but few in the positive. TABLE 18: MEAT - has restricted range. TABLE 19: EXTINCT - has restricted range. TABLE 20: PETS - has restricted range.

APPENDIX 6

[SPSS output omitted: KMO and Bartlett's Test (Kaiser-Meyer-Olkin measure of sampling adequacy; Bartlett's test of sphericity, approx. chi-square) and Communalities (extraction method: principal component analysis).]

APPENDIX 7

[SPSS output omitted: Total Variance Explained - initial eigenvalues and extraction sums of squared loadings (% of variance, cumulative %); extraction method: principal component analysis.]

APPENDIX 8

[Scree plot omitted.]

APPENDIX 9

[SPSS output omitted: KMO and Bartlett's Test and Communalities for the rerun analysis (extraction method: principal component analysis).]

APPENDIX 10

[SPSS output omitted: Total Variance Explained - initial eigenvalues, extraction sums of squared loadings, and rotation sums of squared loadings; extraction method: principal component analysis.]

EXERCISE 5: FACTOR ANALYSIS PART 2: ROTATION AND INTERPRETATION

Question 1: Does the rotated factor matrix exhibit good simple structure?

To some degree, but nearly half the variables are complex variables that load above .30 on more than one component.

Out of 16 variables, the following 7 are complex: TRANS, FUR, GAMEFISH, FARM, DUCK, TAILDOCK, and EXPER.

Question 2: Which items are pure measures of Component 1?

HENS, CIRCUS, FISHING, and RODEO.

Question 3: Which items are pure measures of Component 2?

THREAT, SPORT, and WILD.

Question 4: Which items are pure measures of Component 3?

FERAL and EATKANG.

Question 5: Refer back to the list of item details. Note that the component loadings are positive, indicating that high scores on the variable (meaning supportive of animal wellbeing) accompany high scores on the component. Paying particular attention to the above “pure” variables for each component, what do the variables loading on each component have in common; hence what label would you give to each of the three components?

This solution does not appear to be particularly meaningful, as it is hard to ascertain what the variables loading on each component really have in common.

However, this is a subjective process, so individual students may see something different to what is suggested below. This is perfectly acceptable provided there is a logical rationale for the decision.

It is not uncommon for factor analysis to become problematic at the interpretation stage, although this may mean that there is no coherent structure underlying the variables (possibly, this is the case here).

Were a situation like this to arise in a real research study, an oblique rotation should be tried as a next step, to ascertain if it makes things any clearer. Generally, though, if any real structure exists there should be reasonably clear evidence of it whatever method is used. (Note: As oblique rotation is beyond the scope of this unit, there is no need for students to attempt it here.)

Suggested interpretations are:

Component 1: HENS, CIRCUS, FISHING, and RODEO (also complex variables of TRANS, FUR, GAMEFISH). These relate mainly to quite common instrumental uses of animals by humans, so the label could be Opposition to Common Uses of Animals.

Component 2: THREAT, SPORT, and WILD (also a number of complex variables: FARM, DUCK, GAMEFISH, FUR, EXPER, and TAILDOCK). These all relate to less common uses of animals that are more the province of particular groups or individuals in society, rather than applying to society at large. The highest loading ones also relate more to wild animals. A label could be Opposition to Specialist Uses of Animals.

Component 3: FERAL and EATKANG (also complex variables of EXPER, TRANS, and FARM). These possibly relate to more emotive areas in the use of animals, although one wonders why HENS is not included. A tentative label is Emotive Opposition.

Question 6: What is the KMO value for this solution? Is it acceptable?

.712, and yes, it is acceptable, as it exceeds the recommended criterion of .60.

Question 7: In this solution, how many components should be extracted using the eigenvalue > 1 criterion?

[SPSS output omitted: KMO and Bartlett's Test (Kaiser-Meyer-Olkin measure of sampling adequacy; Bartlett's test of sphericity, approx. chi-square); Total Variance Explained (initial eigenvalues and extraction sums of squared loadings); Rotated Component Matrix(a). Extraction method: principal component analysis; rotation method: varimax with Kaiser normalization; a. rotation converged in 5 iterations.]

EXERCISE 6: ONE-WAY CHI-SQUARE

Question 1:
What is the answer to your research question based on this analysis?

The answer to the research question is that there is a significant difference in the number of cans purchased across the five types of cat feed.

Reporting the result:

χ²(4, N = 335) = 10.09, p < .05
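A one-way chi-square syntax sketch with equal expected frequencies (the variable name brand is assumed):

    NPAR TESTS
      /CHISQUARE=brand
      /EXPECTED=EQUAL.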

The fact that some people may be buying a variety of the feeds is of concern.

There is a chance that the conclusion is wrong; to ascertain the answer, the sample size needs to be increased.

Question 5:
What is the answer to your research question based on this survey?

Just like in the first case, the answer to the research question is that there is a significant difference in the number of cans of cat food purchased, as can be seen in Table 6, where the chi-square value is 43.024 at a significance level of .000.

Question 6:
Calculate by hand the proportions for each brand in each study. Are these proportions very different? If not, how do you explain the different results?

Table 1 gives the results of the manual calculation of the proportions of cans of cat food purchased in the two cases. From the tables it can be seen that the proportions differ negligibly. Yum-yum has the highest purchase share, with 17.0% in the survey study (16.7% in the supermarket study). Second is the Spoilt Cat brand, with 15.0% in the survey study (15.2% in the supermarket study). Since the proportions are essentially the same, the different results are explained by the difference in sample size: with the same departures from equality, the study with the larger sample produces the larger chi-square value.

Question 8:
Conduct the appropriate analysis to ascertain if there is any difference among the least preferred brands. What is the result?

To ascertain if there is a difference among the least preferred brands, a chi-square test was done for the two brands with the fewest purchases. From the results in Table 7 it can be seen that there is no difference in the purchase of the two brands: the chi-square value is 0.649, with p = .42. Another test, to ascertain whether there is any difference between the most preferred brands, indicates that there is no difference in preference between those two brands either; this can be seen from Table 9, where the chi-square value is 0.234 and p = .629.
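A sketch of such a follow-up test on a subset of brands; the brand codes in the filter are purely illustrative assumptions:

    * Keep only the two least preferred brands, then test them against each other.
    USE ALL.
    COMPUTE filter_$ = (brand = 4 OR brand = 5).
    FILTER BY filter_$.
    EXECUTE.
    NPAR TESTS
      /CHISQUARE=brand
      /EXPECTED=EQUAL.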

Question 10:
Do the UWS students fit the Australian population or are they significantly different? Write down the chi-square result as it would be reported statistically.

From Tables 10 and 13 it can be seen that the UWS students do not fit the opinion of the general population. The reporting would be χ²(4, N = 96) = 43.024, p < .05.
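When testing fit to known population figures, the expected frequencies come from the population proportions rather than being equal. A sketch, with purely illustrative proportions that would need to be replaced by the actual Australian population figures:

    * Test observed student opinions against population proportions (illustrative values).
    NPAR TESTS
      /CHISQUARE=opinion
      /EXPECTED=10 20 30 25 15.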

Question 11:
What do your results suggest is the nature of any difference?

Generally, the students seem to be more in opposition, whereas the general public is more supportive.

[SPSS output omitted: frequency, one-way chi-square (observed N, expected N, residual), and test statistics (chi-square, df, asymp. sig.) tables for brand of feed (Spoilt Cat, Chaw Time, Dead Things, Pongy Stuff, and so on) and for opinion of student (strongly support through undecided to strongly oppose), followed by the general population statistics. In every test, 0 cells (.0%) had expected frequencies less than 5; the minimum expected cell frequencies were 67.0, 295.2, 77.0, 53.5, and 19.2 respectively.]

EXERCISE 7: TWO-WAY CHI-SQUARE

From Table 1 it is observed that the categories deforestation, habitat destruction, globalisation and over-consumption have expected counts less than five. To solve this problem it is necessary to recode, so that these categories are combined into one named Other.

The analysis is then repeated with the new combined category. It is evident that there is a significant relationship between perceived threat and country; this can be observed in TABLE 1, where the chi-square value is 16.62 with a p-value of .011. From Table 3 it is observed that, relative to Indonesia, Australia emphasises overpopulation, greenhouse gas, and other, while Indonesia emphasises poverty. Greenhouse gas is most strongly endorsed in Australia, whereas poverty is most strongly endorsed in Indonesia.
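A syntax sketch covering both the recode and the repeated analysis; the numeric codes (threat coded 1 to 7, with 4 to 7 the sparse categories) and the name threatC are assumptions:

    * Combine the sparse categories into 'Other', then rerun the crosstab.
    RECODE threat (4 THRU 7 = 4) (ELSE = COPY) INTO threatC.
    VALUE LABELS threatC 1 'Overpopulation' 2 'Greenhouse gas emissions'
        3 'Third world poverty' 4 'Other'.
    EXECUTE.
    CROSSTABS
      /TABLES=threatC BY country
      /STATISTICS=CHISQ
      /CELLS=COUNT EXPECTED ROW COLUMN TOTAL.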

The following observations can also be made from TABLE 3:

  • 18.4% of Australian respondents consider poverty to be the main threat

  • 10.5% of Indonesian respondents consider overpopulation to be the main threat

  • 32.0% of the respondents who see greenhouse gas as a threat are Indonesian

  • 13.8% of the total sample are Australians who see other things as the main threat

In order to exclude the combined category, the following is the procedure:

Select Cases . . .

If condition is satisfied

{select} threat < 4

When the test is run with the combined category excluded, the results are as seen in TABLE 6. The chi-square test shows that a significant relationship remains across the three retained categories: the chi-square value is 8.872, with a p-value of .012, as seen in TABLE 6. The conclusion therefore still stands that there is a significant relationship between perceived threat and country.

An analysis can also be done where the combined category is excluded and the ranks separated. To do this, the variable rank, which comprises Junior and Senior, is assigned to Layer 1 of 1 in Crosstabs. The results of this test are shown in TABLE 7, where it is observed that there is no significant relationship for senior managers, while a significant relationship exists for junior managers. The fact that there are very few senior managers in the analysis could be a problem; with such a small sample of senior managers, no conclusion can be drawn in this circumstance. Further research with a bigger sample is needed.
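Adding rank as a layer in syntax form:

    CROSSTABS
      /TABLES=threatC BY country BY rank
      /STATISTICS=CHISQ
      /CELLS=COUNT EXPECTED ROW COLUMN TOTAL.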

On reporting, the following is the format:

χ²(3, N = 87) = 9.06, p = .028

[SPSS output omitted: country × threat crosstabulations - the original seven-category table (overpopulation, greenhouse gas emissions, third world poverty, deforestation, habitat destruction, globalisation, over-consumption), the combined-categories tables, and the combined-categories table layered by rank - each with counts, expected counts, and percentages within country, within threat, and of total, plus chi-square tests (Pearson chi-square, likelihood ratio, linear-by-linear association, N of valid cases). Footnotes retained: original table, 8 cells (57.1%) with expected count less than 5, minimum expected count 1.31; combined-categories tables, 0 cells (.0%) below 5, minimum expected counts 6.55 and 6.72; rank-layered tables, 5 cells (83.3%) below 5 with minimum 1.90, and 1 cell (16.7%) below 5 with minimum 4.78.]

EXERCISE 8: MULTIPLE REGRESSION

In this case the dependent variable is stress, while the independent variables are the number of assignments and hatred of statistics.

The multiple R is significantly different from zero, as can be seen in TABLE 2, with the following result being reported:

F(2, 12) = 16.71, p < .001
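A sketch of the regression run, assuming the variables are named Stress, Assignments, and Hatred:

    REGRESSION
      /STATISTICS COEFF OUTS R ANOVA
      /DEPENDENT Stress
      /METHOD=ENTER Assignments Hatred.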

The proportion of variance in stress levels explained by the combination of the number of assignments and hatred of statistics is .736, as can be seen in TABLE 1.

The contribution made by each of the two predictor variables is given in TABLE 3. From the table it is clear that the number of assignments makes a significant unique contribution, with t = 4.34 at p = .001. On the other hand, hatred of statistics does not make a significant unique contribution, with t = -0.63 at p = .541.

Question 7: Is hatred of statistics significantly correlated with stress? Report the statistics that support your conclusion, as they would be shown in a Results section?

r(13) = .57, p = .028

Question 8: Your mate Sam will be doing the course in the following semester. If Sam will be required to complete 6 assignments, and has a hatred of statistics score of 7, what is Sam’s predicted stress score? Write the equation out in full and calculate the predicted score.

Predicted stress = 1.962 + (.98 × 6) + (-.164 × 7) = 1.962 + 5.88 - 1.148 = 6.694

Question 9: Sam ends up with an actual stress score of 3.5. What is Sam’s residual?

Actual stress score (3.5) – Predicted stress score (6.694) = -3.194

Question 10: Based on this analysis, and adapting from Tables 1 and 2 (pp. 276) of the Foolproof guide, complete the Table below, including giving it a title.

[SPSS output omitted: Model Summary(b) (R, R Square, Adjusted R Square, Std. Error of the Estimate), ANOVA (regression and residual sums of squares and mean squares), Coefficients(a) (unstandardized and standardized coefficients for (Constant), Hatred of statistics, and Number of assignments), and Correlations (Pearson correlations and two-tailed significance among stress level, hatred of statistics, and number of assignments; *. significant at the 0.05 level; **. significant at the 0.01 level). a. Predictors: (Constant), Number of assignments, Hatred of statistics; b. Dependent Variable: Stress level.]

[Completed summary table omitted (garbled in this copy). Legible fragments: entries drawn from the correlations, coefficients, model summary, and descriptive statistics tables, covering the variables stress (DV), number of assignments (.955), hatred of statistics, and the intercept, with M = 5.73 and R² = .74.]

EXERCISE 10: FACTORIAL BETWEEN-GROUPS ANOVA
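A factorial ANOVA syntax sketch consistent with the output below; the variable names support, gender, and city are assumptions taken from the table labels, and the Sidak-adjusted pairwise comparisons match the tables shown:

    UNIANOVA support BY gender city
      /METHOD=SSTYPE(3)
      /EMMEANS=TABLES(gender*city) COMPARE(gender) ADJ(SIDAK)
      /PRINT=DESCRIPTIVE HOMOGENEITY ETASQ OPOWER
      /DESIGN=gender city gender*city.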

[SPSS output omitted: factorial between-groups ANOVA on support for US foreign policy by gender and city (Brisbane, Melbourne). Tables: Between-Subjects Factors; Descriptive Statistics; Levene's Test of Equality of Error Variances (testing the null hypothesis that the error variance of the dependent variable is equal across groups; design: Intercept + gender + city + gender * city); Tests of Between-Subjects Effects with partial eta squared, noncentrality parameter, and observed power (values visible include corrected model SS 10841.668, intercept SS 29707.692, effect SS 1055.269 and 10035.154, error SS 20578.941, total SS 61841.021, corrected total SS 31420.610; a. R Squared = .345, Adjusted R Squared = .339; b. computed using alpha = .05); gender * city estimated marginal means with 95% confidence intervals; and pairwise comparisons for gender and for city with Sidak adjustment for multiple comparisons (*. mean difference significant at the .05 level).]