t, F, and p

There are a few things to remember. Each parameter below is listed with its conventional shorthand and a description.

Significance level (α)

For a 95% level of confidence, we calculate our α based on whether we are using a one-tailed or a two-tailed test.

For a one-tailed test, α = 1 - 0.95 = 0.05.

For a two-tailed test, we use α/2 = (1 - 0.95)/2 = 0.025 in each tail.
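
As a minimal sketch of this arithmetic in Python (the 95% confidence level is just the running example):

    # Tail probabilities used for the table lookups, assuming a 95% confidence level.
    confidence = 0.95
    alpha = 1 - confidence        # 0.05, used directly for a one-tailed test
    alpha_per_tail = alpha / 2    # 0.025, used per tail for a two-tailed test
    print(alpha, alpha_per_tail)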

Test statistic (t)

The test statistic must be calculated. Once we have the test statistic, we can immediately test our hypothesis against the critical values. Alternatively, we can use it to calculate a p-value to test our hypothesis against the significance level, α. The greater the test statistic, the more likely we will have to reject the null hypothesis; in other words, the less likely it is that the null hypothesis holds true.

And recall, of course, that for a single sample the degrees of freedom are DF = n - 1, where n is the sample size.
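
As a rough sketch of how t might be computed (this assumes a one-sample t-test; the sample values and the hypothesized mean mu0 are made up for illustration, and scipy is used only as a cross-check):

    import math
    import statistics
    from scipy import stats

    sample = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]     # hypothetical measurements
    mu0 = 5.0                                    # hypothesized population mean

    n = len(sample)
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)                 # sample standard deviation (n - 1 denominator)

    t_stat = (mean - mu0) / (s / math.sqrt(n))   # one-sample t-statistic
    df = n - 1                                   # degrees of freedom
    print(t_stat, df)

    # Cross-check with scipy's built-in one-sample t-test.
    print(stats.ttest_1samp(sample, mu0))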

Critical value (t*)

With this, we can examine how our test statistic (t) compares against our significance level. The t-distribution table, a.k.a. a critical value distribution table, gives us the t-values we would need to see at various significance levels and degrees of freedom if we are to accept or reject our hypothesis.

Essentially, we reject the null hypothesis if the test statistic "exceeds" the critical value. What counts as "exceeding," however, depends on which test we are performing (a code sketch follows this list):

  • For a two-sided test, we reject the null hypothesis if the absolute value of the test statistic is greater than the critical value.

  • For a one-sided upper test, we reject the null hypothesis if the test statistic is greater than the critical value.

  • For a one-sided lower test, we reject the null hypothesis if the test statistic is less than the negative of the critical value.
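
A minimal lookup sketch in Python, using scipy's t distribution (the α and DF values are hypothetical):

    from scipy import stats

    alpha = 0.05   # hypothetical significance level
    df = 10        # hypothetical degrees of freedom

    # Critical value leaving alpha in one tail (for one-sided tests).
    print(stats.t.ppf(1 - alpha, df))
    # Critical value leaving alpha/2 in each tail (for two-sided tests).
    print(stats.t.ppf(1 - alpha / 2, df))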

Probability value (p)

The probability value, or p-value, measures the likelihood of observing a t-statistic at least as extreme as ours if the null hypothesis were true. It is calculated from the t-statistic and the degrees of freedom. Alternatively, a table is referenced which gives us an approximate p-value based on a rounded t-statistic and the degrees of freedom. Ultimately, we reject or accept the null hypothesis based on whether p < α or p > α.

The p-value measures the likelihood that the observation occurred by chance. The lower it is, the greater the statistical significance; the higher it is, the lower the statistical significance. We calculate p from t and the degrees of freedom. Although it is directly calculated from the t-statistic, the advantage of the p-value is that it gives us a probability that can be clearer to interpret. Unlike the t-statistic, we never calculate the p-value by hand. Either we use a statistical tool to calculate it, or we look it up from a table consisting of rows and columns for various commonly used or rounded values of t and of the degrees of freedom.

If the p-value is greater than our α, then we fail to reject the null hypothesis. We would state this: "Because p > α, we fail to reject the null hypothesis, and we reject the alternative hypothesis." Or under the other circumstance: "Because p < α, we reject the null hypothesis, and accept the alternative hypothesis." In other words, p is the minimum significance level at which the null hypothesis could be rejected, so if p > α, then our alternative hypothesis cannot be accepted at significance level α.
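
A brief sketch of the p-value decision in Python, again with scipy (the t-statistic, DF, and α are hypothetical):

    from scipy import stats

    t_stat = 2.5   # hypothetical test statistic
    df = 10        # hypothetical degrees of freedom
    alpha = 0.05   # hypothetical significance level

    p_one_sided = stats.t.sf(abs(t_stat), df)      # upper-tail probability
    p_two_sided = 2 * stats.t.sf(abs(t_stat), df)  # both tails

    # Decision for a two-sided test.
    if p_two_sided < alpha:
        print("Reject the null hypothesis")
    else:
        print("Fail to reject the null hypothesis")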

Hypothesis testing

  • Two-sided test

    • Using the test statistic,

      • If the absolute value of the test statistic is greater than the critical value, we reject the null hypothesis.

      • If the absolute value of the test statistic is less than the critical value, we accept the null hypothesis.

    • Using the probability value,

      • If the probability value is less than the significance level α, we reject the null hypothesis.

      • If the probability value is greater than the significance level α, we accept the null hypothesis.

  • One-sided upper test

    • Using the test statistic,

      • If the test statistic is greater than the critical value, we reject the null hypothesis.

      • If the test statistic is less than the critical value, we accept the null hypothesis.

    • Using the probability value,

      • Same as two-sided test.

  • One-sided lower test

    • Using the test statistic,

      • If the test statistic is less than the negative of the critical value, we reject the null hypothesis.

      • If the test statistic is greater than the negative of the critical value, we accept the null hypothesis.

    • Using the probability value,

      • Same as two-sided test.
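
The rules above can be collected into one helper. A sketch in Python (the function name reject_null and the example inputs are hypothetical, not part of any standard library):

    from scipy import stats

    def reject_null(t_stat, alpha, df, test="two-sided"):
        """Apply the critical-value decision rules listed above."""
        if test == "two-sided":
            t_crit = stats.t.ppf(1 - alpha / 2, df)
            return abs(t_stat) > t_crit
        elif test == "upper":
            t_crit = stats.t.ppf(1 - alpha, df)
            return t_stat > t_crit
        elif test == "lower":
            t_crit = stats.t.ppf(1 - alpha, df)
            return t_stat < -t_crit
        raise ValueError("test must be 'two-sided', 'upper', or 'lower'")

    # Hypothetical usage: t-statistic of 2.5 with 10 degrees of freedom at alpha = 0.05.
    print(reject_null(2.5, 0.05, 10, test="two-sided"))
    print(reject_null(2.5, 0.05, 10, test="upper"))
    print(reject_null(2.5, 0.05, 10, test="lower"))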

Critical values table for the t-distribution

Using spreadsheet software, the critical value for a particular one-tailed significance level (α) and degrees of freedom (DF) is calculated as ABS(T.INV(α, DF)).
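
For reference, a Python equivalent of the same lookup (scipy assumed); it should reproduce the DF = 10, α = 0.05 entry of the table below:

    from scipy import stats

    # Python equivalent of ABS(T.INV(alpha, DF)).
    print(abs(stats.t.ppf(0.05, 10)))   # approximately 1.81246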

Rows give the degrees of freedom (DF); columns are indexed by the one-tailed significance level α used in ABS(T.INV(α, DF)). The same critical value serves a two-tailed test at significance 2α, so the corresponding confidence levels for the columns are:

One tail: 90%, 95%, 97.5%, 99%, 99.5%, 99.9%, 99.95%
Two tail: 80%, 90%, 95%, 98%, 99%, 99.8%, 99.9%

The final row (∞) gives the limiting values of the standard normal distribution.

DF      0.10       0.05       0.025      0.01       0.005      0.001      0.0005
1       3.07768    6.31375    12.70620   31.82052   63.65674   318.30884  636.61925
2       1.88562    2.91999    4.30265    6.96456    9.92484    22.32712   31.59905
3       1.63774    2.35336    3.18245    4.54070    5.84091    10.21453   12.92398
4       1.53321    2.13185    2.77645    3.74695    4.60409    7.17318    8.61030
5       1.47588    2.01505    2.57058    3.36493    4.03214    5.89343    6.86883
6       1.43976    1.94318    2.44691    3.14267    3.70743    5.20763    5.95882
7       1.41492    1.89458    2.36462    2.99795    3.49948    4.78529    5.40788
8       1.39682    1.85955    2.30600    2.89646    3.35539    4.50079    5.04131
9       1.38303    1.83311    2.26216    2.82144    3.24984    4.29681    4.78091
10      1.37218    1.81246    2.22814    2.76377    3.16927    4.14370    4.58689
11      1.36343    1.79588    2.20099    2.71808    3.10581    4.02470    4.43698
12      1.35622    1.78229    2.17881    2.68100    3.05454    3.92963    4.31779
13      1.35017    1.77093    2.16037    2.65031    3.01228    3.85198    4.22083
14      1.34503    1.76131    2.14479    2.62449    2.97684    3.78739    4.14045
15      1.34061    1.75305    2.13145    2.60248    2.94671    3.73283    4.07277
16      1.33676    1.74588    2.11991    2.58349    2.92078    3.68615    4.01500
17      1.33338    1.73961    2.10982    2.56693    2.89823    3.64577    3.96513
18      1.33039    1.73406    2.10092    2.55238    2.87844    3.61048    3.92165
19      1.32773    1.72913    2.09302    2.53948    2.86093    3.57940    3.88341
20      1.32534    1.72472    2.08596    2.52798    2.84534    3.55181    3.84952
21      1.32319    1.72074    2.07961    2.51765    2.83136    3.52715    3.81928
22      1.32124    1.71714    2.07387    2.50832    2.81876    3.50499    3.79213
23      1.31946    1.71387    2.06866    2.49987    2.80734    3.48496    3.76763
24      1.31784    1.71088    2.06390    2.49216    2.79694    3.46678    3.74540
25      1.31635    1.70814    2.05954    2.48511    2.78744    3.45019    3.72514
26      1.31497    1.70562    2.05553    2.47863    2.77871    3.43500    3.70661
27      1.31370    1.70329    2.05183    2.47266    2.77068    3.42103    3.68959
28      1.31253    1.70113    2.04841    2.46714    2.76326    3.40816    3.67391
29      1.31143    1.69913    2.04523    2.46202    2.75639    3.39624    3.65941
30      1.31042    1.69726    2.04227    2.45726    2.75000    3.38518    3.64596
60      1.29582    1.67065    2.00030    2.39012    2.66028    3.23171    3.46020
120     1.28865    1.65765    1.97993    2.35782    2.61742    3.15954    3.37345
1000    1.28240    1.64638    1.96234    2.33008    2.58075    3.09840    3.30028
∞       1.28155    1.64485    1.95996    2.32635    2.57583    3.09023    3.29053

p-value table, or P table

This table was calculated using the spreadsheet function with inputs for the t-statistic (t), the degrees of freedom (DF), and whether our test is one-sided (1) or two-sided (2): TDIST(t, DF, sidedness). The resulting p-value can then be directly compared to α to establish whether we will accept or reject our hypothesis. Sometimes, books will include tables for p-values, and the tables can become quite large. Conventionally, the t-statistics are given within a certain range, in increments of 0.02, but of course it is far better to calculate the p-value using a spreadsheet function or statistical software. Below is a limited range of t-statistics and degrees of freedom; the values shown are one-sided p-values (i.e., TDIST(t, DF, 1)).
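
A Python equivalent of this spreadsheet call (scipy assumed); it should reproduce, for example, the t = 1.30, DF = 10 cell below:

    from scipy import stats

    t_stat, df = 1.30, 10
    p_one_sided = stats.t.sf(t_stat, df)       # like TDIST(t, DF, 1)
    p_two_sided = 2 * stats.t.sf(t_stat, df)   # like TDIST(t, DF, 2)
    print(round(p_one_sided, 3))               # approximately 0.111
    print(round(p_two_sided, 3))               # two-sided counterpart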

Rows give the t-statistic; columns give the degrees of freedom (DF).

t \ DF  1      2      3      4      5      6      7      8      9      10
1.30    0.209  0.162  0.142  0.132  0.125  0.121  0.117  0.115  0.113  0.111
1.32    0.206  0.159  0.139  0.129  0.122  0.117  0.114  0.112  0.110  0.108
1.34    0.204  0.156  0.136  0.126  0.119  0.114  0.111  0.109  0.107  0.105
1.36    0.202  0.153  0.134  0.123  0.116  0.111  0.108  0.105  0.103  0.102
1.38    0.200  0.151  0.131  0.120  0.113  0.108  0.105  0.102  0.100  0.099
1.40    0.197  0.148  0.128  0.117  0.110  0.106  0.102  0.100  0.098  0.096
1.42    0.195  0.146  0.125  0.114  0.107  0.103  0.099  0.097  0.095  0.093
1.44    0.193  0.143  0.123  0.112  0.105  0.100  0.097  0.094  0.092  0.090
1.46    0.191  0.141  0.120  0.109  0.102  0.097  0.094  0.091  0.089  0.087
1.48    0.189  0.139  0.118  0.106  0.099  0.095  0.091  0.089  0.087  0.085
1.50    0.187  0.136  0.115  0.104  0.097  0.092  0.089  0.086  0.084  0.082
1.52    0.185  0.134  0.113  0.102  0.094  0.090  0.086  0.083  0.081  0.080
1.54    0.183  0.132  0.111  0.099  0.092  0.087  0.084  0.081  0.079  0.077
1.56    0.181  0.130  0.108  0.097  0.090  0.085  0.081  0.079  0.077  0.075
1.58    0.180  0.127  0.106  0.095  0.087  0.083  0.079  0.076  0.074  0.073
1.60    0.178  0.125  0.104  0.092  0.085  0.080  0.077  0.074  0.072  0.070
1.62    0.176  0.123  0.102  0.090  0.083  0.078  0.075  0.072  0.070  0.068
1.64    0.174  0.121  0.100  0.088  0.081  0.076  0.073  0.070  0.068  0.066
1.66    0.173  0.119  0.098  0.086  0.079  0.074  0.070  0.068  0.066  0.064

F-statistic, a.k.a. F-multiplier

While the t-statistic is easily calculated based on the percentage of the interval and the degrees of freedom (derived from the sample or population size), the F-statistic has two sets of degrees of freedom (DF): the DF of the numerator and the DF of the denominator. The DF of the numerator deals with variance between groups, while the DF of the denominator deals with variance within groups. The order is very important, because switching the numerator and denominator results in very different F-statistics. In shorthand, the significance level (taken from the percentage of the interval), the numerator DF, and the denominator DF are represented, in this order, as F(α, DF1, DF2).

The numerator DF will generally be set to the number of variables, k, or, less commonly, to k - 1. The denominator DF will be n - k, where n is the number of samples. If there are two variables, this means that DF1 = 2 and DF2 = n - 2.

So, for example, if we are using 95% as our confidence and we are working with 100 samples of two variables, then we wind up with F(0.05, 2, 98) and can look it up accordingly in an F-statistic table.

The spreadsheet command for the probability value, given an observed F-statistic x, is,

=F.DIST.RT(x, degree_freedom1, degree_freedom2)

The spreadsheet command for the F critical value is,

=F.INV.RT(α, degree_freedom1, degree_freedom2)
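
The same two quantities in Python (scipy assumed; the inputs mirror the hypothetical F(0.05, 2, 98) example above, and the observed F-statistic is made up):

    from scipy import stats

    alpha, df1, df2 = 0.05, 2, 98   # hypothetical values from the example above

    # Critical value, like F.INV.RT(alpha, df1, df2).
    f_crit = stats.f.isf(alpha, df1, df2)

    # p-value for an observed F-statistic, like F.DIST.RT(x, df1, df2).
    f_stat = 4.0                    # hypothetical observed F-statistic
    p_value = stats.f.sf(f_stat, df1, df2)

    print(f_crit, p_value)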

See it in action

Please refer to this Google Sheets spreadsheet,

https://docs.google.com/spreadsheets/d/1H3EtaltideRpUeVNMq7jxO2mea8NGcXHz4bYxhAJu58/edit?usp=sharing