Strictly Increasing Continuous Distribution Function Uniformly Distributed
Event History Analysis
Nancy Brandon Tuma, in Encyclopedia of Social Measurement, 2005
Cumulative Distribution Function
The cumulative distribution function (CDF) of T is the complement of S(t):
(2) F(t) = 1 − S(t)
where F(t) is the probability that the event occurs before time t. The CDF and the survival probability give equivalent information, but in event history analysis the survival probability is traditionally reported more often than the CDF. Ordinarily, F(∞) = 1: eventually the event occurs. If the probability distribution of the event time is defective, however, there is a nonzero probability that the event never occurs, even after an infinite amount of time has elapsed; then F(∞) < 1.
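To make the relationship concrete, here is a minimal Python sketch (mine, not from the chapter), assuming an exponential event-time distribution as an illustrative choice:

```python
import math

def survival(t, rate=0.5):
    """S(t) for an exponential event time with the given rate (illustrative)."""
    return math.exp(-rate * t)

def cdf(t, rate=0.5):
    """F(t) = 1 - S(t): the probability that the event occurs before time t."""
    return 1.0 - survival(t, rate)

for t in (0.0, 1.0, 5.0, 50.0):
    print(f"t={t:5.1f}  S(t)={survival(t):.4f}  F(t)={cdf(t):.4f}")
# F(t) -> 1 as t grows, so this distribution is not defective.
```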
URL: https://www.sciencedirect.com/science/article/pii/B0123693985001584
Additional Topics on Optimum Design
Jasbir S. Arora, in Introduction to Optimum Design (Third Edition), 2012
Cumulative Distribution Function
The cumulative distribution function (CDF) F_X(x) describes the probability that a random variable X with a given probability distribution will be found at a value less than or equal to x. This function is given as
(20.69) F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(u) du
That is, for a given value x, F_X(x) is the probability that the observed value of X is less than or equal to x. If f_X is continuous at x, then the probability density function is the derivative of the cumulative distribution function:
(20.70) f_X(x) = dF_X(x)/dx
The CDF also has the following properties:
(20.71) 0 ≤ F_X(x) ≤ 1, F_X(−∞) = 0, F_X(∞) = 1
The cumulative distribution function is illustrated in Figure 20.4(b). It shows that the probability of X being less than or equal to x_l is F_X(x_l). This is a point on the F_X(x) versus x curve in Figure 20.4(b), and it is the shaded area in Figure 20.4(a).
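As a numerical illustration of Eq. (20.70) (my own sketch, not from the book), the central-difference derivative of a normal CDF should match the normal density; the normal distribution and the point x = 0.7 are arbitrary choices:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """F_X(x) for a normal random variable, written via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f_X(x), the density of the same normal random variable."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

x, h = 0.7, 1e-5
numeric = (normal_cdf(x + h) - normal_cdf(x - h)) / (2.0 * h)  # dF/dx by central difference
print(f"dF/dx ~ {numeric:.6f}   f_X(x) = {normal_pdf(x):.6f}")  # the two should agree
```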
URL: https://www.sciencedirect.com/science/article/pii/B9780123813756000292
Probability and Random Variables
FABRIZIO GABBIANI, STEVEN J. COX, in Mathematics for Neuroscientists, 2010
11.5 CUMULATIVE DISTRIBUTION FUNCTIONS
The cumulative distribution function of a random variable X is defined by
F(x) = P(X ≤ x)
The cumulative distribution function is monotone increasing, meaning that x_1 ≤ x_2 implies F(x_1) ≤ F(x_2). This follows simply from the fact that {X ≤ x_2} = {X ≤ x_1} ∪ {x_1 < X ≤ x_2} and the additivity of probabilities for disjoint events. Furthermore, if X takes values between −∞ and ∞, like the Gaussian random variable, then F(−∞) = 0 and F(∞) = 1. If the random variable X is continuous and possesses a density, p(x), like the Gaussian random variable does, it follows immediately from the definition of F, and since F(−∞) = 0, that
F(x) = ∫_{−∞}^{x} p(u) du
Conversely, according to the fundamental theorem of calculus, Eq. (1.7), p(x) = F′(x). Thus, the probability density is the derivative of the cumulative distribution function. This in turn implies that the probability density is always nonnegative, p(x) ≥ 0, because F is monotone increasing. The cumulative distribution function of the standard normal distribution is, up to constant factors, the error function
erf(x) = (2/√π) ∫_0^x e^{−t²} dt
(Exercise 7). The error function is not an elementary function, meaning that it cannot be built explicitly from simple functions like the exponential, the logarithm, or nth roots by means of the four elementary operations (addition, subtraction, multiplication, and division).
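A small numerical check of this relationship (my sketch, not the book's code): integrating the Gaussian density by the trapezoidal rule reproduces the value obtained from math.erf via the standard identity F(x) = (1 + erf(x/√2))/2:

```python
import math

def gauss_density(t):
    """Standard normal density p(t)."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def cdf_by_quadrature(x, lo=-10.0, n=20_000):
    """F(x) = integral of p from -infinity to x, truncated at lo (trapezoidal rule)."""
    h = (x - lo) / n
    total = 0.5 * (gauss_density(lo) + gauss_density(x))
    total += sum(gauss_density(lo + i * h) for i in range(1, n))
    return total * h

x = 1.0
print(cdf_by_quadrature(x))                        # ~0.841345
print(0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))  # same value via the error function
```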
URL: https://www.sciencedirect.com/science/article/pii/B9780123748829000113
Random Variables
Oliver C. Ibe, in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014
2.5 Discrete Random Variables
A discrete random variable is a random variable that can take on at most a countable number of possible values; that is, it can have either a finite number of values or a countably infinite number of values. For a discrete random variable X, the probability mass function (PMF), p_X(x), is defined as follows:
(2.2) p_X(x) = P[X = x]
The PMF is nonzero for at most a countable number of values of x. In particular, if we assume that X can only assume one of the values x_1, x_2, …, x_n, then
p_X(x_i) ≥ 0 for i = 1, 2, …, n, and ∑_{i=1}^{n} p_X(x_i) = 1
The CDF of X can be expressed in terms of p_X(x) as follows:
(2.3) F_X(x) = P[X ≤ x] = ∑_{x_k ≤ x} p_X(x_k)
The CDF of a discrete random variable is a staircase function. That is, if X takes on values at x_1, x_2, x_3, …, where x_1 < x_2 < x_3 < ⋯, then the value of F_X(x) is constant in the interval between x_{i−1} and x_i and then takes a jump of size p_X(x_i) at x_i, i = 1, 2, 3, …. Thus, in this case, F_X(x) represents the sum of all the probability masses we have encountered as we move from −∞ to x.
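The staircase construction is easy to mechanize; the following sketch (mine, with a hypothetical PMF) accumulates the masses from left to right, which is exactly Eq. (2.3):

```python
def cdf_from_pmf(pmf):
    """Given a dict {x: p_X(x)}, return the (x, F_X(x)) pairs at the jump points."""
    total, jumps = 0.0, []
    for x in sorted(pmf):
        total += pmf[x]          # F_X jumps by p_X(x) at each mass point
        jumps.append((x, total))
    return jumps

pmf = {1: 0.25, 2: 0.5, 3: 0.25}  # hypothetical PMF for illustration
for x, F in cdf_from_pmf(pmf):
    print(f"F_X({x}) = {F:.3f}")   # 0.25, 0.75, 1.00
```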
Example 2.4
Assume that X has the PMF given by
The PMF of X is given in Figure 2.5(a), and its CDF is given by
Thus, the graph of the CDF of X is as shown in Figure 2.5(b).
Example 2.5
Let the random variable X denote the number of heads in three tosses of a fair coin. (a) What is the PMF of X? (b) Sketch the CDF of X.
Solution:
- a. The sample space of the experiment is
Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
The different events defined by the random variable X are [X = 0] = {TTT}, [X = 1] = {HTT, THT, TTH}, [X = 2] = {HHT, HTH, THH}, and [X = 3] = {HHH}. Since the eight sample points in Ω are equally likely, the PMF of X is p_X(0) = 1/8, p_X(1) = 3/8, p_X(2) = 3/8, and p_X(3) = 1/8.
The PMF is graphically illustrated in Figure 2.6(a).
- b. The CDF of X is given by
F_X(x) = 0 for x < 0; F_X(x) = 1/8 for 0 ≤ x < 1; F_X(x) = 1/2 for 1 ≤ x < 2; F_X(x) = 7/8 for 2 ≤ x < 3; F_X(x) = 1 for x ≥ 3
The graph of F_X(x) is shown in Figure 2.6(b).
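The PMF in part (a) can also be checked by brute-force enumeration; a short sketch (mine, not Ibe's):

```python
from itertools import product
from collections import Counter

# All 2**3 equally likely outcomes of three tosses of a fair coin.
outcomes = list(product("HT", repeat=3))
counts = Counter(seq.count("H") for seq in outcomes)  # X = number of heads

pmf = {x: counts[x] / len(outcomes) for x in sorted(counts)}
print(pmf)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}, i.e., 1/8, 3/8, 3/8, 1/8
```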
Example 2.6
Let the random variable X denote the sum obtained in rolling a pair of fair dice. Determine the PMF of X.
Solution:
Let the pair (a, b) denote the outcome of the roll, where a is the outcome of one die and b is the outcome of the other. Thus, the sum of the outcomes is X = a + b, which takes values from 2 to 12. The different events defined by the random variable X are [X = 2] = {(1, 1)}, [X = 3] = {(1, 2), (2, 1)}, and so on, up to [X = 12] = {(6, 6)}. Since there are 36 equally likely sample points in the sample space, the PMF of X is given by
p_X(x) = (6 − |x − 7|)/36 for x = 2, 3, …, 12, and p_X(x) = 0 otherwise
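Again this can be verified by enumerating all 36 outcomes (my sketch):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
pmf = {s: Fraction(counts[s], 36) for s in sorted(counts)}
for s, p in pmf.items():
    print(s, p)  # p_X(2) = 1/36, ..., p_X(7) = 1/6, ..., p_X(12) = 1/36
```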
Example 2.7
The PMF of the number of components K of a system that fail is defined by
- a. What is the CDF of K?
- b. What is the probability that fewer than two components of the system fail?
Solution:
- a. The CDF of K is given by
- b. The probability that fewer than two components fail is the probability that either no component fails or exactly one component fails, which is given by
P[K < 2] = p_K(0) + p_K(1)
Example 2.8
The PMF of the number N of customers that arrive at a local library within a one-hour interval is defined by
What is the probability that at most two customers arrive at the library within one hour?
Solution:
The probability that at most two customers arrive at the library within one hour is the probability that 0, 1, or 2 customers arrive within the hour, which is
P[N ≤ 2] = P[(N = 0) ∪ (N = 1) ∪ (N = 2)] = p_N(0) + p_N(1) + p_N(2)
where the second equality is due to the fact that the three events are mutually exclusive.
2.5.1 Obtaining the PMF from the CDF
So far we have shown how to obtain the CDF from the PMF; namely, for a discrete random variable X with PMF p_X(x), the CDF is given by
F_X(x) = ∑_{x_k ≤ x} p_X(x_k)
Sometimes we are given the CDF of a discrete random variable and are required to obtain its PMF. From Figures 2.5 and 2.6 we observe that the CDF of a discrete random variable has a staircase shape, with jumps at those values of the random variable where the PMF is nonzero. The size of the jump at a given value of the random variable equals the value of the PMF at that value.
Thus, given the plot of the CDF of a discrete random variable, we can obtain the PMF by noting that the random variable takes on values with nonzero probability only at the points where jumps occur; the probability of any other value is zero. Moreover, the probability that the random variable takes a value where a jump occurs is equal to the size of the jump.
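The jump-size rule translates directly into code; here is a minimal sketch (mine) that inverts the cdf_from_pmf construction shown earlier, using hypothetical jump points:

```python
def pmf_from_cdf(jumps):
    """Given sorted (x, F_X(x)) pairs at the jump points, recover the PMF:
    the PMF at each point is the size of the jump there."""
    pmf, prev = {}, 0.0
    for x, F in jumps:
        pmf[x] = F - prev
        prev = F
    return pmf

# Hypothetical CDF jump points (equal jumps chosen only for illustration):
print(pmf_from_cdf([(1, 0.25), (2, 0.50), (4, 0.75), (6, 1.00)]))
# {1: 0.25, 2: 0.25, 4: 0.25, 6: 0.25}
```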
Example 2.9
The plot of the CDF of a discrete random variable X is shown in Figure 2.7. Find the PMF of X.
Solution:
The random variable takes on values with nonzero probability at X = 1, X = 2, X = 4, and X = 6. The PMF assigns to each of these points the size of the jump in Figure 2.7 at that point; these four jump sizes give the PMF of X.
Example 2.10
Find the PMF of a discrete random variable X whose CDF is given by:
Solution:
In this example, we do not need to plot the CDF. We observe that it changes value at X = 0, X = 2, X = 4, and X = 6, which means that these are the values of the random variable that have nonzero probabilities. The next task is to determine those probabilities: p_X(0) is the value of the CDF at X = 0, and p_X(2), p_X(4), and p_X(6) are the sizes of the jumps in the CDF at X = 2, X = 4, and X = 6, respectively. These values give the PMF of X.
URL: https://www.sciencedirect.com/science/article/pii/B978012800852200002X
Elements of Probability
Sheldon Ross, in Simulation (Fifth Edition), 2013
2.4 Random Variables
When an experiment is performed we are sometimes primarily concerned about the value of some numerical quantity determined by the result. These quantities of interest that are determined by the results of the experiment are known as random variables.
The cumulative distribution function, or more simply the distribution function, F of the random variable X is defined for any real number x by
F(x) = P{X ≤ x}
A random variable that can take on at most a countable number of possible values is said to be discrete. For a discrete random variable X we define its probability mass function p(a) by
p(a) = P{X = a}
If X is a discrete random variable that takes on one of the possible values x_1, x_2, …, then, since X must take on one of these values, we have
∑_i p(x_i) = 1
Example 2a
Suppose that X takes on one of the values 1, 2, or 3. If p(1) and p(2) are given, then, since p(1) + p(2) + p(3) = 1, it follows that p(3) = 1 − p(1) − p(2).
Whereas a discrete random variable assumes at most a countable set of possible values, we often have to consider random variables whose set of possible values is an interval. We say that X is a continuous random variable if there is a nonnegative function f(x), defined for all real numbers x, having the property that for any set C of real numbers
(2.1) P{X ∈ C} = ∫_C f(x) dx
The function f is called the probability density function of the random variable X.
The relationship between the cumulative distribution F and the probability density f is expressed by
F(a) = P{X ≤ a} = ∫_{−∞}^{a} f(x) dx
Differentiating both sides yields
f(a) = (d/da) F(a)
That is, the density is the derivative of the cumulative distribution function. A somewhat more intuitive interpretation of the density function may be obtained from Equation (2.1) as follows:
P{a − ε/2 ≤ X ≤ a + ε/2} = ∫_{a−ε/2}^{a+ε/2} f(x) dx ≈ ε f(a)
when ε is small. In other words, the probability that X will be contained in an interval of length ε around the point a is approximately ε f(a). From this we see that f(a) is a measure of how likely it is that the random variable will be near a.
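The approximation P{a − ε/2 ≤ X ≤ a + ε/2} ≈ ε f(a) is easy to check numerically; a sketch of mine, using an Exponential(1) distribution as an arbitrary concrete choice:

```python
import math

f = lambda x: math.exp(-x)        # Exponential(1) density
F = lambda x: 1.0 - math.exp(-x)  # its cumulative distribution function

a, eps = 1.0, 1e-3
exact = F(a + eps / 2) - F(a - eps / 2)  # P{a - eps/2 <= X <= a + eps/2}
approx = eps * f(a)                      # the density-based approximation
print(f"exact = {exact:.9f}   approx = {approx:.9f}")
```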
In many experiments we are interested not only in the probability distributions of individual random variables, but also in the relationships between two or more of them. To specify the relationship between two random variables, we define the joint cumulative probability distribution function of X and Y by
F(x, y) = P{X ≤ x, Y ≤ y}
Thus, F(x, y) specifies the probability that X is less than or equal to x and, simultaneously, Y is less than or equal to y.
If X and Y are both discrete random variables, then we define the joint probability mass function of X and Y by
p(x, y) = P{X = x, Y = y}
Similarly, we say that X and Y are jointly continuous, with joint probability density function f(x, y), if for any sets of real numbers C and D
P{X ∈ C, Y ∈ D} = ∫_D ∫_C f(x, y) dx dy
The random variables X and Y are said to be independent if for any two sets of real numbers A and B
P{X ∈ A, Y ∈ B} = P{X ∈ A} P{Y ∈ B}
That is, X and Y are independent if for all sets A and B the events E = {X ∈ A} and F = {Y ∈ B} are independent. Loosely speaking, X and Y are independent if knowing the value of one of them does not affect the probability distribution of the other. Random variables that are not independent are said to be dependent.
Using the axioms of probability, we can show that the discrete random variables X and Y will be independent if and only if, for all x and y,
p(x, y) = p_X(x) p_Y(y)
Similarly, if X and Y are jointly continuous with joint density function f(x, y), then they will be independent if and only if, for all x and y,
f(x, y) = f_X(x) f_Y(y)
where f_X(x) and f_Y(y) are the density functions of X and Y, respectively.
URL: https://www.sciencedirect.com/science/article/pii/B9780124158252000024
Probability Theory
P.K. Bhattacharya, Prabir Burman, in Theory and Methods of Statistics, 2016
Probability Integral Transform
Suppose X has cdf F which is continuous and strictly increasing. Then F^{−1} is uniquely defined as
F^{−1}(u) = the unique x such that F(x) = u, for 0 < u < 1
Then the cdf of Y = F(X) at u ∈ (0, 1) is
P[Y ≤ u] = P[F(X) ≤ u] = P[X ≤ F^{−1}(u)] = F(F^{−1}(u)) = u
Thus f_Y(u) = 1 for 0 < u < 1 and f_Y(u) = 0 for u ∉ (0, 1), because 0 < Y = F(X) < 1 with probability 1. In other words, if X has a continuous and strictly increasing cdf F, then Y = F(X) is distributed with pdf
f_Y(u) = 1 for 0 < u < 1, and f_Y(u) = 0 otherwise
A rv with this pdf is said to be a Uniform(0, 1) rv. Conversely, if U is Uniform(0, 1), then X = F^{−1}(U) has cdf F. This fact is useful in generating random samples (i.e., iid rv's) with cdf F by first generating random samples U_1, U_2, … from Uniform(0, 1), which is easy, and then transforming them to X_1 = F^{−1}(U_1), X_2 = F^{−1}(U_2), ….
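This is the inverse-transform recipe; a minimal sketch (mine), assuming F is the Exponential(rate) cdf so that F^{−1} has the closed form below:

```python
import math
import random

def exponential_inverse_cdf(u, rate=1.0):
    """F^{-1}(u) for F(x) = 1 - exp(-rate*x), the Exponential(rate) cdf."""
    return -math.log(1.0 - u) / rate

random.seed(0)
# U_1, U_2, ... are Uniform(0, 1); each X_i = F^{-1}(U_i) then has cdf F.
samples = [exponential_inverse_cdf(random.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean ~ 1/rate = 1.0
```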
URL: https://www.sciencedirect.com/science/article/pii/B9780128024409000011
Pairs of Random Variables
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
Section 5.1 Joint CDFs
- 5.1 Recall the joint CDF given in Example 5.1,
  - (a) Find Pr(X < 3/4).
  - (b) Find Pr(X > 1/2).
  - (c) Find Pr(Y > 1/4).
  - (d) Find Pr(1/4 < X < 1/2, 1/2 < Y < 1).
- 5.2 A colleague of yours proposes that a certain pair of random variables be modeled with a joint CDF of the form
  - (a) Find any restrictions on the constants a, b, and c needed for this to be a valid joint CDF.
  - (b) Find the marginal CDFs, F_X(x) and F_Y(y), under the restrictions found in part (a).
- 5.3 Consider again the joint CDF given in Exercise 5.2.
  - (a) For constants a and b such that 0 < a < 1, 0 < b < 1, and a < b, find Pr(a < X < b).
  - (b) For constants c and d such that 0 < c < 1, 0 < d < 1, and c < d, find Pr(c < Y < d).
  - (c) Find Pr(a < X < b | c < Y < d). Are the events {a < X < b} and {c < Y < d} statistically independent?
- 5.4 Suppose a random variable X has a CDF given by F_X(x) and, similarly, a random variable Y has a CDF F_Y(y). Prove that the function F(x, y) = F_X(x)F_Y(y) satisfies all the properties required of joint CDFs and hence will always be a valid joint CDF.
- 5.5 For the joint CDF that is the product of two marginal CDFs, F_{X,Y}(x, y) = F_X(x)F_Y(y), as described in Exercise 5.4, show that the events {a < X < b} and {c < Y < d} are always independent for any constants a < b and c < d.
URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500084
Random Variables
Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019
2.3.1 The Uniform Random Variable
A random variable X is said to be uniformly distributed over the interval (0, 1) if its probability density function is given by
f(x) = 1 for 0 < x < 1, and f(x) = 0 otherwise
Note that the preceding is a density function since f(x) ≥ 0 and
∫_{−∞}^{∞} f(x) dx = ∫_0^1 dx = 1
Since f(x) > 0 only when x ∈ (0, 1), it follows that X must assume a value in (0, 1). Also, since f(x) is constant for x ∈ (0, 1), X is just as likely to be "near" any value in (0, 1) as any other value. To check this, note that, for any 0 ≤ a < b ≤ 1,
P{a ≤ X ≤ b} = ∫_a^b f(x) dx = b − a
In other words, the probability that X is in any particular subinterval of (0, 1) equals the length of that subinterval.
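A quick Monte Carlo check of this property (my sketch; the subinterval (0.2, 0.5) is an arbitrary choice):

```python
import random

random.seed(1)
n = 1_000_000
a, b = 0.2, 0.5  # any subinterval of (0, 1)
hits = sum(1 for _ in range(n) if a <= random.random() <= b)
print(hits / n)  # ~ b - a = 0.3
```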
In general, we say that X is a uniform random variable on the interval (α, β) if its probability density function is given by
(2.8) f(x) = 1/(β − α) if α < x < β, and f(x) = 0 otherwise
Example 2.13
Calculate the cumulative distribution function of a random variable uniformly distributed over (α, β).
- Solution: Since F(a) = ∫_{−∞}^{a} f(x) dx, we obtain from Eq. (2.8) that
F(a) = 0 for a ≤ α; F(a) = (a − α)/(β − α) for α < a < β; F(a) = 1 for a ≥ β
Example 2.14
If X is uniformly distributed over , calculate the probability that (a) , (b) , (c) .
- Solution:
URL: https://www.sciencedirect.com/science/article/pii/B978012814346900007X
Quality of Analytical Measurements: Statistical Methods for Internal Validation
M.C. Ortiz, ... A. Herrero, in Comprehensive Chemometrics, 2009
Appendix 1 Some Basic Elements of Statistics
A distribution function (cumulative distribution function (cdf)) on R is any function F such that
- 1. F is a function from R to the interval [0, 1].
- 2. lim_{x→−∞} F(x) = 0.
- 3. lim_{x→+∞} F(x) = 1.
- 4. F is monotonically increasing; that is, a ≤ b implies F(a) ≤ F(b).
- 5. F is continuous on the left or on the right. For example, F is continuous on the left if lim_{x→a⁻} F(x) = F(a) for each real number a.
Any probability defined in R corresponds to a distribution function and vice versa.
If p is the probability defined on intervals of real numbers, F(x) is defined as the probability accumulated up to x, that is, F(x) = p((−∞, x)). It is easy to show that F(x) satisfies the above definition of a distribution function.
If F is a cdf continuous on the left, its associated probability p is defined by
p([a, b)) = F(b) − F(a)
If the distribution function is continuous, then the above limits coincide with the value of the function at the corresponding point. The probability density function f(x), abbreviated pdf, if it exists, is the derivative of the cdf.
Each random variable X is characterized by a distribution function F_X(x).
When several random variables are handled, it is necessary to define the joint distribution function
(A1) F_{X,Y}(x, y) = p(X ≤ x, Y ≤ y)
If the previous joint probability is equal to the product of the individual probabilities, it is said that the random variables are independent:
(A2) F_{X,Y}(x, y) = F_X(x) F_Y(y)
Equations (3) and (4) define the mean and variance of a random variable. Some basic properties are
(A3) E(aX + b) = a E(X) + b
(A4) var(aX + b) = a² var(X)
Given a random variable X, the standardized variable is obtained by subtracting the mean and dividing by the standard deviation, Z = (X − μ)/σ. The standardized variable has E(Z) = 0 and var(Z) = 1.
For any two random variables, the variance of their sum is
(A5) var(X + Y) = var(X) + var(Y) + 2 cov(X, Y)
and the covariance is defined as
(A6) cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = ∫∫ (x − μ_X)(y − μ_Y) f(x, y) dx dy
In the definition of the covariance (Equation (A6)), f(x, y) is the joint pdf of the two random variables. In the case where they are independent, the joint pdf is equal to the product f_X(x) f_Y(y), and the covariance is zero.
In general, var(X + Y) ≠ var(X) + var(Y), except where the variables are independent, in which case the equality holds.
In applications in Analytical Chemistry, it is very common to use formulas to obtain the final measurement from intermediate measurements that have experimental variability. A strategy has been developed for calculating the uncertainty (variance) in the final result under two basic hypotheses: make a linear approximation to the formula and then identify the quadratic terms with the variances of the random variables involved (see, for example, the 'Guide to the Expression of Uncertainty in Measurement'). 2 This procedure, called in many texts the method of transmission of errors, can lead to unacceptable results. Hence, an improvement based on Monte Carlo simulation has been suggested for the calculation of the combined uncertainty (see Supplement 1 to the aforementioned guide).
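A sketch of the Monte Carlo approach (mine; the measurement model z = x·y and all means and standard deviations are hypothetical):

```python
import random

random.seed(42)

def monte_carlo_uncertainty(n=100_000, mu_x=2.0, sd_x=0.05, mu_y=3.0, sd_y=0.10):
    """Propagate (assumed normal) variability of x and y through z = x * y."""
    zs = [random.gauss(mu_x, sd_x) * random.gauss(mu_y, sd_y) for _ in range(n)]
    mean = sum(zs) / n
    var = sum((z - mean) ** 2 for z in zs) / (n - 1)
    return mean, var ** 0.5

mean_z, u_z = monte_carlo_uncertainty()
print(f"z = {mean_z:.3f} +/- {u_z:.3f}")
# The linear ("transmission of errors") approximation would give
# u_z^2 ~ (mu_y * sd_x)^2 + (mu_x * sd_y)^2; Monte Carlo needs no linearization.
```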
A useful representation of the data is the so-called box and whisker plot (or simply box plot). To explain its construction, we will use the 100 values of method A in Figure 2.
These data have the following characteristics (summary of statistics):
- Minimum: 5.23
- Maximum: 7.86
- First or lower quartile, Q_1 = 6.39: the value below which 25% of the data lie.
- Second quartile (median), Q_2 = 6.66: the value below which 50% of the data lie.
- Third or upper quartile, Q_3 = 6.98: the value below which 75% of the data lie.
- Interquartile range, IR = Q_3 − Q_1 = 0.59 in our case.
With these quartiles, the central rectangle (the box) is drawn that contains 50% of the data around the median.
The lower and upper limits are computed as LL = Q_1 − 1.5IR and UL = Q_3 + 1.5IR. In the example, LL = 6.39 − 1.5 × 0.59 = 5.505 and UL = 6.98 + 1.5 × 0.59 = 7.865.
Then the 'whiskers' are drawn by joining the lower side of the rectangle to the smallest datum that is greater than or equal to LL, and the upper side of the rectangle to the largest datum that is less than or equal to UL.
The three smallest values in our case are 5.233, 5.396, and 5.507; since the first two fall below LL = 5.505, the whisker extends down only to 5.507 and the other two values are left 'disconnected' (plotted individually as outliers). The other whisker reaches the maximum, 7.86, because it is less than UL. The box and whisker plot is the first one in Figure A1.
The advantage of using box plots is that the quartiles are practically insensitive to outliers. For example, suppose that the value 7.86 is changed to 8.86; this change affects neither the median nor the quartiles, so the box plot remains similar but with a datum outside the upper whisker, as can be seen in the second box plot in Figure A1.
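The construction can be summarized in a few lines of Python (my sketch; note that quartile estimates vary slightly with the convention used):

```python
import statistics

def box_plot_stats(data, k=1.5):
    """Quartiles, whisker ends, and 'disconnected' points, as described above."""
    q1, q2, q3 = statistics.quantiles(data, n=4)  # one common quartile convention
    ir = q3 - q1                                  # interquartile range
    ll, ul = q1 - k * ir, q3 + k * ir             # lower and upper limits
    inside = [x for x in data if ll <= x <= ul]
    whiskers = (min(inside), max(inside))         # whiskers stop at data inside the limits
    outliers = sorted(x for x in data if x < ll or x > ul)
    return {"Q1": q1, "median": q2, "Q3": q3,
            "whiskers": whiskers, "outliers": outliers}

# Toy data for illustration (not the 100 values of method A):
print(box_plot_stats([5.233, 5.396, 5.507, 6.39, 6.66, 6.98, 7.86]))
```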
URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000909
Brownian Motion and Related Processes
Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011
Exercises
- 8.3.1 Show that the cumulative distribution function for reflected Brownian motion is
P[R(t) ≤ y | R(0) = x] = Φ((y − x)/√t) − Φ((−y − x)/√t), y > 0
where Φ is the standard normal cdf. Evaluate this probability when x = 1, y = 3, and t = 4.
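Assuming the reflected-Brownian-motion cdf above, the requested evaluation is a one-liner (my sketch, using the erf-based normal cdf):

```python
import math

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reflected_bm_cdf(y, x, t):
    """P[R(t) <= y | R(0) = x] for reflected standard Brownian motion, y > 0."""
    s = math.sqrt(t)
    return phi((y - x) / s) - phi((-y - x) / s)

print(reflected_bm_cdf(y=3.0, x=1.0, t=4.0))  # Phi(1) - Phi(-2) ~ 0.8186
```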
- 8.3.2 The price fluctuations of a share of stock of a certain company are well described by a Brownian motion process. Suppose that the company is bankrupt if ever the share price drops to zero. If the starting share price is A(0) = 5, what is the probability that the company is bankrupt at time t = 25? What is the probability that the share price is above 10 at time t = 25?
- 8.3.3 The net inflow to a reservoir is well described by a Brownian motion. Because a reservoir cannot contain a negative amount of water, we suppose that the water level R(t) at time t is a reflected Brownian motion. What is the probability that the reservoir contains more than 10 units of water at time t = 25? Assume that the reservoir has unlimited capacity and that R(0) = 5.
- 8.3.4 Suppose that the net inflows to a reservoir follow a Brownian motion. Suppose that the reservoir was known to be empty 25 time units ago but has never been empty since. Use a Brownian meander process to evaluate the probability that there is more than 10 units of water in the reservoir today.
- 8.3.5 Is reflected Brownian motion a Gaussian process? Is absorbed Brownian motion (cf. Section 8.1.4)?
URL: https://www.sciencedirect.com/science/article/pii/B9780123814166000083
Source: https://www.sciencedirect.com/topics/mathematics/cumulative-distribution-function