
# Defining Effect Size and Central Limit Theorem


1. Define Effect Size. Does increasing N increase Effect Size? Explain your answer with the Central Limit Theorem as your tool.

2. Assume that for the Quantitative GRE score, µ = 550 and σ = 50.
A sample of 25 students takes the GRE after ingesting marijuana, the killer weed.
Their mean score is 530, less than a standard deviation below the population mean.

a) What is Cohen's d in this example? How would you (or Cohen) interpret this?

b) Which gives you more Power: a 95% confidence interval or a 99% confidence interval?

c) Just for laughs, let's say that your sample of 25 students gets a mean score of 530.4. The parameters of the problem remain the same. Use whatever simplifying assumptions are necessary, and calculate power (with α = .05) for this example.

d) Run the study again with 100 participants. The mean is still 530.4, and the standard deviation remains the same. With simplifying assumptions (and with α = .05), what is Power now?

e) Calculate p_rep (the probability of replication) for the results of a Z test performed on the data in 2(c) above with N = 25.
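As an illustration (not part of the original posting or solution), the power calculations asked for in 2(c) and 2(d) can be sketched under one common set of simplifying assumptions: a one-tailed (lower) z test on the sample mean, with the sampling distribution taken to be normal per the central limit theorem. This is one conventional approach, not necessarily the instructor's intended one:

```python
from statistics import NormalDist

def z_test_power(mu0, mu_true, sigma, n, alpha=0.05):
    """Power of a one-tailed (lower) z test: P(reject H0 | true mean = mu_true).

    Assumes a normal sampling distribution of the mean (central limit theorem).
    """
    sem = sigma / n ** 0.5                           # standard error of the mean
    crit = mu0 + NormalDist().inv_cdf(alpha) * sem   # reject H0 when x-bar < crit
    return NormalDist(mu_true, sem).cdf(crit)

# Question 2(c): N = 25, so SEM = 50 / 5 = 10
print(z_test_power(550, 530.4, 50, 25))
# Question 2(d): N = 100, so SEM = 50 / 10 = 5; the larger N yields more power
print(z_test_power(550, 530.4, 50, 100))
```

Note that quadrupling N halves the standard error, which moves the critical value and the true sampling distribution further apart in z units and therefore raises power.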


#### Solution Preview

1. Define Effect Size. Does increasing N increase Effect Size? Explain your answer with the Central Limit Theorem as your tool.

The specific definition of effect size varies with the inferential statistic involved (e.g., t test, ANOVA) and with the particular effect size statistic used (e.g., Cohen's d, eta-squared). That said, effect size is basically a way to represent the magnitude of an effect, such as "how big" the difference is between the means of two groups. For example: on average, how much longer did participants in the experimental group that received the medication take to complete the task than those in the control group that did not? Effect size is usually expressed in some standard way, such as the number of standard deviations (rather than, say, the number of seconds) by which two groups differ on average. Cohen's d (Cohen, 1962) is effect size expressed as the number of standard deviations that separate group means; it can be computed as the difference between group means divided by an estimate of the standard deviation (usually the "pooled standard deviation" of the two groups; see the attached formulas from Wikipedia).
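As a concrete illustration (added here, not part of the original solution), the two-group formula just described can be written out directly; the one-sample variant, which divides by the known population σ, is what question 2(a) calls for:

```python
from math import sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Two-group Cohen's d: mean difference over the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def cohens_d_one_sample(sample_mean, mu, sigma):
    """One-sample d: distance of the sample mean from mu, in sigma units."""
    return (sample_mean - mu) / sigma

# GRE example from question 2: (530 - 550) / 50 = -0.4
print(cohens_d_one_sample(530, 550, 50))
```

Note that d depends only on means and standard deviations, not on N: nothing in either formula shrinks or grows as the sample gets larger.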

According to the central limit theorem, the theoretical distribution of sample means (the sampling distribution of the mean) has a standard deviation of σ / √N: the population standard deviation divided by the square root of the sample size. Because N is in the denominator of this formula, increasing N makes the standard deviation of the sampling distribution smaller. The central limit theorem also shows that, given a sufficiently large sample size, the sampling distribution of the mean is approximately normal regardless of the shape of the population distribution. ...
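A quick simulation (illustrative only; the population parameters and sample sizes are borrowed from question 2) makes the σ / √N behavior visible: the empirical standard deviation of many sample means closely tracks the theoretical standard error, and quadrupling N halves it:

```python
import random
from statistics import mean, stdev

random.seed(0)
mu, sigma = 550, 50  # population parameters from question 2

for n in (25, 100):
    # Draw many samples of size n and measure the spread of their means.
    sample_means = [mean(random.gauss(mu, sigma) for _ in range(n))
                    for _ in range(5000)]
    # Empirical SD of the sample means vs. theoretical sigma / sqrt(n)
    print(n, round(stdev(sample_means), 2), round(sigma / n ** 0.5, 2))
```

This is exactly why larger N increases power without changing effect size: the sampling distribution tightens around the true mean, but the distance between means, measured in population standard deviations, stays the same.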

#### Solution Summary

The solution defines effect size and examines the central limit theorem.


## Defining statistics terms: Sampling error, estimators, central limit theorem, standard error

Define (a) parameter, (b) estimator, (c) sampling error, and (d) sampling distribution.

Explain the difference between sampling error and bias. Can they be controlled?

Name three estimators. Which ones are unbiased?

Explain what it means to say an estimator is (a) unbiased, (b) efficient, and (c) consistent.

State the main points of the Central Limit Theorem for a mean.

Why is population shape of concern when estimating a mean? What does sample size have to do with it?

(a) Define the standard error of the mean. (b) What happens to the standard error as sample size increases? (c) How does the law of diminishing returns apply to the standard error?

Define (a) point estimate, (b) interval estimate, (c) confidence interval, and (d) confidence level.

List some common confidence levels. Why not use other confidence levels?

(a) List the steps in testing a hypothesis. (b) Why can't a hypothesis ever be proven?

(a) Explain the difference between the null hypothesis and the alternative hypothesis. (b) How is the null hypothesis chosen (why is it "null")?

(a) Why do we say "fail to reject H0" instead of "accept H0"? (b) What does it mean to "provisionally accept a hypothesis"?

(a) Define Type I error and Type II error. (b) Give an original example to illustrate.

(a) Explain the difference between a left-sided test, two-sided test, and right-sided test. (b) When would we choose a two-sided test? (c) How can we tell the direction of the test by looking at a pair of hypotheses?
