ks.test {stats}

Description:

Performs one- or two-sample Kolmogorov-Smirnov tests.
Usage:

ks.test(x, y, ..., alternative = c("two.sided", "less", "greater"), exact = NULL)
Arguments:

x: a numeric vector of data values.

y: either a numeric vector of data values, or a character string naming a distribution function.

...: parameters of the distribution specified (as a character string) by y.

alternative: indicates the alternative hypothesis and must be one of "two.sided" (default), "less", or "greater". You can specify just the initial letter.

exact: NULL or a logical indicating whether an exact p-value should be computed. See Details for the meaning of NULL. Not used for the one-sided two-sample case.
Details:

If y is numeric, a two-sample test of the null hypothesis that x and y were drawn from the same continuous distribution is performed.

Alternatively, y can be a character string naming a continuous distribution function. In this case, a one-sample test is carried out of the null that the distribution function which generated x is distribution y with parameters specified by ....
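For instance, a minimal sketch of both forms (the vectors x and y here are simulated and purely illustrative, not part of the original page):

## two-sample: are x and y drawn from the same continuous distribution?
x <- rnorm(100)
y <- rnorm(100, mean = 0.5)
ks.test(x, y)
## one-sample: was x generated by a standard normal distribution?
ks.test(x, "pnorm", mean = 0, sd = 1)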
The presence of ties generates a warning, since continuous distributions do not generate them.
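As a small illustration of that warning (rounding is just one way ties can arise):

## rounded data will typically contain ties and trigger the warning
ks.test(round(rnorm(50), 1), "pnorm")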
The possible values "two.sided", "less" and "greater" of alternative specify the null hypothesis that the true distribution function of x is equal to, not less than, or not greater than the hypothesized distribution function (one-sample case) or the distribution function of y (two-sample case), respectively.
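For example (a sketch reusing the simulated x from above; only the initial letter of alternative is needed):

ks.test(x, "pnorm", alternative = "less")
ks.test(x, "pnorm", alternative = "g")   # abbreviation of "greater"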
Exact p-values are not available for the one-sided two-sample case, or in the case of ties. If exact = NULL (the default), an exact p-value is computed if the sample size is less than 100 in the one-sample case, and if the product of the sample sizes is less than 10000 in the two-sample case. Otherwise, asymptotic distributions are used whose approximations may be inaccurate in small samples. In the one-sample two-sided case, exact p-values are obtained as described in Marsaglia, Tsang & Wang (2003). The formula of Birnbaum & Tingey (1951) is used for the one-sample one-sided case.
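A brief sketch of overriding that default (again with the illustrative x from above):

ks.test(x, "pnorm", exact = TRUE)    # force the exact computation
ks.test(x, "pnorm", exact = FALSE)   # use the asymptotic approximation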
If a single-sample test is used, the parameters specified in ... must be pre-specified and not estimated from the data. There is some more refined distribution theory for the KS test with estimated parameters (see Durbin, 1973), but that is not implemented in ks.test.
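To illustrate the caveat (a sketch; the estimated-parameter call is shown only as what to avoid):

## not valid: mean and sd estimated from the same data bias the p-value
ks.test(x, "pnorm", mean(x), sd(x))
## valid: parameters fixed before looking at the data
ks.test(x, "pnorm", 0, 1)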
Value:

A list with class "htest" containing the following components:

statistic: the value of the test statistic.

p.value: the p-value of the test.

alternative: a character string describing the alternative hypothesis.

method: a character string indicating what type of test was performed.

data.name: a character string giving the name(s) of the data.
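For example, a small sketch of inspecting the returned object componentwise:

res <- ks.test(rnorm(50), "pnorm")
res$statistic
res$p.value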
References:

Z. W. Birnbaum & Fred H. Tingey (1951), One-sided confidence contours for probability distribution functions. The Annals of Mathematical Statistics, 22/4, 592–596.

William J. Conover (1971), Practical Nonparametric Statistics. New York: John Wiley & Sons. Pages 295–301 (one-sample “Kolmogorov” test), 309–314 (two-sample “Smirnov” test).

Durbin, J. (1973), Distribution Theory for Tests Based on the Sample Distribution Function. SIAM.

George Marsaglia, Wai Wan Tsang & Jingbo Wang (2003), Evaluating Kolmogorov's distribution. Journal of Statistical Software, 8/18. http://www.jstatsoft.org/v08/i18/.
See Also:

shapiro.test, which performs the Shapiro-Wilk test for normality.
Examples:

x <- rnorm(50)
y <- runif(30)
# Do x and y come from the same distribution?
ks.test(x, y)
# Does x come from a shifted gamma distribution with shape 3 and rate 2?
ks.test(x + 2, "pgamma", 3, 2)  # two-sided, exact
ks.test(x + 2, "pgamma", 3, 2, exact = FALSE)
ks.test(x + 2, "pgamma", 3, 2, alternative = "gr")