A “Pointless” Theory of Probability

by Qilin Ye

A Thesis Presented to the
FACULTY OF THE USC DANA AND DAVID DORNSIFE COLLEGE OF LETTERS, ARTS AND SCIENCES
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
MASTER OF SCIENCE (APPLIED MATHEMATICS)

May 2024

Copyright 2024 Qilin Ye

Table of Contents

Abstract
Chapter 1: Introduction: Why Gunky Spaces?
Chapter 2: A Gunky R^n under Isometric Transformations
  1 Linear Algebraic Properties of Isometries
  2 Recovering Euclidean Structure via Isometries
Chapter 3: Incompatibility of Gunk and Measure
  1 Boolean Algebra and the Fat Cantor Set
  2 First Solution: Forcing Null Boundaries
  3 Second Solution: Equivalence Relations on Sets with Null Difference
Chapter 4: An Exploration of Gunky Probability Spaces
  1 Defining a Complete, Countably Additive Probability Measure
  2 Elementary and Generalized Random Variables
  3 Expected Values and Convergence Theorems
Bibliography

Abstract

Traditionally, spaces are modeled as infinite collections of infinitesimal points.
In recent decades, however, an alternative view has gained increasing scholarly attention: what if space is gunky, consisting of regions rather than points, and does not admit a smallest, indivisible atom akin to a point? In a gunky space, what set of primitives would one adopt in place of the Euclidean premises? Are these primitives as expressive as the ones in standard pointy models? And finally, what contradictions would arise in such exotic worlds, and how, if possible, can we resolve them? In this paper, I provide an overview of previous literature in this area, highlighting some inherent complications embedded in a gunky perspective. A particular challenge in gunky models is the attempt to assign a consistent notion of “size,” or measure, to regions, and I discuss and evaluate existing solutions to this issue. Building on the notion of measure, I spend the second half of the paper investigating the feasibility of introducing a theory of probability on a gunky model. I show that it is possible to extend a gunky space and equip it with a countably additive probability measure. With this, I further recover several important concepts in probability theory, including random variables, abstract integration, and several convergence results.

Chapter 1. Introduction: Why Gunky Spaces?

The traditional understanding of space among mathematicians and philosophers is that space consists of infinitesimal, indivisible parts, called points. This perspective is so well established that we are barely aware of its existence, taking its assumptions for granted. Indeed, a “pointy” model of the world feels natural: every aspect of elementary math is introduced in this manner. Points are assumed to underpin the structure of most areas of math, and we assume that the state of material objects is described by functions defined on points.
Geometry of Euclidean spaces, arithmetic on higher-dimensional spaces, electric fields in physics, among others, are all taught using points. And we say points make up regions and spaces.

Over the past two centuries, with increased focus in recent decades, a growing number of scholars have explored alternative models of space, the “pointless models,” which omit the fundamental concept of points. Despite being a relatively niche direction, scholars early in this field tended to collaborate little, instead independently deriving their own models of pointless geometry. Consequently, the list of scholarly attempts to axiomatize pointless geometry in the early days is rather extensive, despite its relative lack of popularity compared to its pointy counterpart. The earliest documented exploration of pointless geometry was due to Lobachevsky in 1835, who assumed the primitive notions of solids and contact between solids. Yet it suffered from obscurity and a lack of rigor, and was discarded by the author himself. The first rigorous treatment of the subject was by Whitehead in the 1920s, who showed how solids and parthood, the notion of one solid being contained in another, can be used to define an abstract notion of a point [5]. This approach was called “gunky” geometry, a term coined by Lewis [10], and much subsequent work ensued. In a gunky model, every region contains yet smaller regions: the space itself is infinitely divisible and contains no indivisible points.

Though it contrasts with the pointy spaces we take for granted, the gunky space raises several significant motivations for studying it. The first comes from a purely mathematical perspective: if the continuum of space consists of points, then assigning a notion of size (or measure) to regions can turn into serious trouble. A classical fact in real analysis shows that there are non-measurable sets such as the Vitali sets, but a more blatant contradiction arises from the Banach-Tarski Paradox.
In a pointy space, the unit ball in R^3 (or any higher dimension) can be partitioned into five disjoint sets which, under rigid motions, can be reassembled into two disjoint unit balls, each congruent to the original one. These partitioned sets, of course, are not Lebesgue measurable. But this still leads to a fundamental problem: rigid motions should preserve the notion of size regardless of the measure we choose, yet through these carefully constructed pathological examples, the post-assembly size is twice the original one. The Banach-Tarski Paradox therefore exposes the limitations of point-based measures and encourages the exploration of gunky geometry, where (hopefully) the absence of indivisible points could circumvent these paradoxes.

Another potential motivation for studying a gunky space, as pointed out by Arntzenius, relates to the foundational theories of physics. As Arntzenius [1] notes, in modern physics, in particular non-relativistic quantum mechanics, a particle’s state is represented by a probabilistic integration performed on wave functions, and this integration is invariant between functions that differ on a Lebesgue null set. This equivalence suggests a natural inclination toward considering these functions not as distinct entities but as members of the same equivalence class, defined by the relation of differing up to a null set. A gunky model, on the other hand, naturally embraces this idea: the very concept of point-sized differences is systematically eschewed in such settings.

Finally, purely philosophical motivations also justify the pursuit of a gunky space. Russell [14] points out that, beyond the actual structure of our universe, metaphysicians are also intrigued by the potential forms that space could take, regardless of our current epistemic limitations. Debates are particularly focused on the viability of atomless (i.e.
infinitely divisible) gunk as a credible form of matter, challenging models that cannot accommodate such concepts.

This paper is organized as follows. In Section 2, inspired by Tarski’s approach ([6], [17]) of recovering standard Euclidean geometric notions using only parthood and spheres, I show that it is entirely possible to achieve the same result using properties of isometric transformations on regions in R^n. In Section 3, I provide an overall review of the theory of gunky models, explaining three commonly considered conditions of gunk: mereological, topological, and measure-theoretic. I then introduce a challenge posed by Russell’s impossibility result [14], which shows the inherent inconsistency between topological and measure-theoretic gunk. Then, I discuss two recent workaround attempts ([1], [8]) to fix the problem with measures. By then, we will have reached a point where it is reasonable to introduce another layer of abstraction, describing our space via Boolean algebras. Finally, in Section 4, I look into the difficulties of defining a probability space on a Boolean algebra capturing the structure of the spaces used in the previous sections, and offer some partial solutions to them. It is worth pointing out that my approach to “fixing” some of the issues encountered in the Boolean algebra bears traces of Arntzenius’s [1] solution to fixing measure on gunk. In this section, I also attempt to recover several important probability-theoretic results.

Chapter 2. A Gunky R^n under Isometric Transformations

Alfred Tarski was one of the pioneers in exploring a gunky space. Using only the notion of spheres (as a special form of solid) and parthood (the notion of one solid being contained in another), Tarski [17] proved that it is possible to define every other geometric notion one encounters in Euclidean spaces.
In particular, Tarskian spheres are sufficient to recover the three Euclidean primitives: point, betweenness (collinearity of three points), and congruence (of lines or regions). We briefly demonstrate the process below using spheres [2], i.e., open balls in R^2.

(1) Disjointness: B1 and B2 are disjoint if and only if no ball B3 is contained in both (satisfying the parthood relation to both), written B1 ∩ B2 = ∅.

(2) Defining tangency:
• B1 and B2 are externally tangent if:
  – B1 ∩ B2 = ∅, and
  – for any two balls that both contain B1 and are disjoint from B2, one of them must be contained in the other. In other words, for any B3, B4 such that B1 ⊂ B3, B1 ⊂ B4, and B3 ∩ B2 = B4 ∩ B2 = ∅, either B3 ⊂ B4 or B4 ⊂ B3.
• B1 and B2 are internally tangent (assuming B1 ⊂ B2, resp. B2 ⊂ B1) if:
  – B1 ⊂ B2, and
  – for any two balls that both contain B1 and are contained in B2, one of them must be contained in the other. In other words, for any B3, B4 such that B1 ⊂ B3 ⊂ B2 and B1 ⊂ B4 ⊂ B2, either B3 ⊂ B4 or B4 ⊂ B3.

(3) Recovering betweenness (diametrical opposites):
• B1 and B2 are externally diametrical of a ball B3 if:
  – B1 and B3 are externally tangent, and so are B2 and B3, and
  – if B′1 contains B1 but is disjoint from B3, and B′2 likewise, then B′1 ∩ B′2 = ∅.
• B1 and B2 are internally diametrical of a ball B3 if:
  – B1 and B3 are internally tangent, and so are B2 and B3, and
  – if B′1 is externally tangent to both B1 and B3, and B′2 likewise, then B′1 ∩ B′2 = ∅.

(4) Defining concentric balls: assuming B1 ⊂ B2, they are concentric if any two balls B3, B4 that are externally diametrical of B1 and internally tangent to B2 are also internally diametrical of B2.

(5) Recovering points: concentricity defines an equivalence relation, and a point is an equivalence class of concentric spheres.

(6) Recovering congruence: a ball B1 “passes through the center of” another ball B2 if for any B′2 concentric with B2, we have B1 ∩ B′2 ≠ ∅.
For three balls B1, B2, B3, we say (B1, B3) is equidistant to (B2, B3), written (B1, B3) ≡ (B2, B3), if there exists a ball B′3 concentric with B3 such that B′3 passes through the centers of both B1 and B2.

In this section, we consider another “pointless” approach to characterizing the Euclidean primitives. Instead of combining spheres with parthood, we consider the effects imposed by rigid motions, or isometric transformations. Since the Euclidean space itself is invariant under any fixed isometric transformation, our derivation of other geometric notions will be based purely on the algebraic structure of regions and isometries, independent of the metric on the space as well as other external qualities. One advantage of this characterization, as we will see later, is that our derivation is dimension-free and, in fact, capable of recovering the dimension of the Euclidean space that we start with.

1 Linear Algebraic Properties of Isometries

In this section, we consider the linear algebraic properties of isometric transformations on R^n and identify a very special type of isometry: the reflection. Recall that a map ψ : R^n → R^n is called an isometry if for all x, y, ‖x − y‖ = ‖ψ(x) − ψ(y)‖. Our first claim draws a connection between a specific class of isometries and orthogonal matrices.

Proposition 1.1. A transformation ψ : R^n → R^n is an origin-preserving (meaning ψ(0) = 0) isometry if and only if ψ(x) = Ax for some orthogonal matrix A (meaning A A^T equals the identity).

Proof. Let ψ be an origin-preserving isometry, and let {ei}, 1 ⩽ i ⩽ n, be the standard Euclidean basis of R^n. Note that an isometry sends triangles to congruent triangles, so in particular the law of cosines implies that it also preserves angles and dot products, which we call the “dot product identity.” Consequently {ψ(ei)} forms an orthonormal basis of R^n, just as {ei} does. Our first observation is that ψ is linear. To see this, pick any u, v, w ∈ R^n.
The “dot product identity” shows ψ(u + v) · ψ(w) = (u + v) · w and (ψ(u) + ψ(v)) · ψ(w) = ψ(u) · ψ(w) + ψ(v) · ψ(w) = u · w + v · w = (u + v) · w. Keeping u, v fixed and letting w = ei shows that ψ(u + v) and ψ(u) + ψ(v) agree coordinate-wise, so they are equal. Similarly ψ(cu) = cψ(u). Therefore ψ is a linear transformation characterized by some matrix A. Using the dot product identity once again, for any standard basis vectors ei, ej, we have 1[i = j] = (A ei) · (A ej) = ej^T A^T A ei = (A^T A)_{i,j}. This shows A^T A is the identity (and so is A A^T). The converse is direct: if ψ(x) = Ax with A orthogonal, then (Au) · (Av) = v^T A^T A u = u · v; setting u = v shows A preserves norms, so ‖Ax − Ay‖ = ‖A(x − y)‖ = ‖x − y‖.

With this result, we are able to fully characterize isometries of R^n as the composition of an origin-preserving isometry and a translation:

Theorem 1.2. Every isometry ψ of R^n can be characterized by an orthogonal matrix A ∈ O_n(R) and a “translation” vector b, such that ψ(x) = Ax + b.

Proof. First suppose ψ(x) = Ax + b subject to the constraints above. Then ψ is an isometry, since orthogonal matrices preserve distances and so do translations. Conversely, let an isometry ψ be given. We represent ψ as the composition of an origin-preserving isometry and a translation by writing ψ(x) = (ψ(x) − ψ(0)) + ψ(0). It remains to show that there exists an orthogonal matrix A such that Ax = ψ(x) − ψ(0) for all x, which follows from the previous proposition, since x ↦ ψ(x) − ψ(0) is an origin-preserving isometry.

It is easy to see that isometries form a group under composition. Among these transformations, reflections are of particular significance. These are the transformations that use a certain affine subspace as a mirror, sending points across the mirror along the line perpendicular (orthogonal) to it. Formally, a reflection ρ is an isometry of order 2 (i.e., ρ = ρ^{−1}, or ρ^2 = id): indeed, mirroring twice, one ends up back at the starting point.
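Theorem 1.2 and the order-2 property of reflections can be spot-checked numerically. The sketch below is an illustration only (NumPy, the QR trick for sampling an orthogonal matrix, and the Householder form of a reflection are conveniences of this sketch, not part of the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Sample a random orthogonal A via QR factorization, and a translation b;
# by Theorem 1.2, psi(x) = Ax + b should be an isometry.
A, _ = np.linalg.qr(rng.standard_normal((n, n)))
b = rng.standard_normal(n)

def psi(x):
    return A @ x + b

x, y = rng.standard_normal(n), rng.standard_normal(n)
# Distances are preserved.
assert np.isclose(np.linalg.norm(x - y), np.linalg.norm(psi(x) - psi(y)))

# A (Householder) reflection across the hyperplane orthogonal to the unit
# vector v: it is orthogonal and of order 2, i.e., reflecting twice is id.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
R = np.eye(n) - 2.0 * np.outer(v, v)
assert np.allclose(R @ R, np.eye(n))    # rho^2 = id
assert np.allclose(R.T @ R, np.eye(n))  # R is orthogonal
```

Running the block raises no assertion errors, agreeing with Proposition 1.1 and Theorem 1.2 on these random instances.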
To transform u into v via a reflection, the affine mirror must lie in the “center” of u and v. The canonical example is the (n − 1)-dimensional hyperplane containing the midpoint (u + v)/2 and orthogonal to the difference u − v. Orthogonality gives rise to the following formula:

H = {x ∈ R^n : (x − (u + v)/2) · (u − v) = 0} = {x ∈ R^n : x · (u − v) = (u + v) · (u − v)/2 = (‖u‖^2 − ‖v‖^2)/2}.

From now on, given a reflection ρ, we use aff(ρ) to denote its corresponding affine subspace, i.e., the “mirror” aff(ρ) := {x ∈ R^n : ρ(x) = x}. In the next theorem, we show that reflections generate the entire group of isometries.

Theorem 1.3. Every isometry ψ of R^n is a composition of at most n + 1 reflections.

Proof. We may WLOG assume ψ(0) = 0, so that ψ is completely characterized by some A ∈ O_n(R), at the cost of at most one reflection. To see this, suppose ψ(0) ≠ 0, and consider a reflection ρ(0) such that aff(ρ(0)) contains the midpoint ψ(0)/2 and is orthogonal to the line through 0 and ψ(0). One can verify that one such example is the reflection across the (n − 1)-dimensional hyperplane {u ∈ R^n : u · (ψ(0) − 0) = u · ψ(0) = (‖ψ(0)‖^2 − ‖0‖^2)/2 = ‖ψ(0)‖^2/2}.

Now we assume ψ(0) = 0 and prove the claim by induction on n. The case n = 1 is clear, since an origin-preserving isometry on R is either the identity or its negative. For the inductive step, assume the claim holds for R^{n−1}. Consider an arbitrary ψ represented by A ∈ O_n(R), and let {ei} be the standard basis of R^n. Repeating the argument in the first paragraph, we see that there exists a reflection fn, possibly the identity, such that fn(A en) = (fn ◦ ψ)(en) = en. Furthermore, since A ∈ O_n(R) we know ‖A en‖ = ‖en‖. This means aff(fn) = {u ∈ R^n : u · (A en − en) = (‖A en‖^2 − ‖en‖^2)/2 = 0} contains the origin, and therefore fn ∈ O_n(R). By construction, fn ◦ ψ equals the identity on E = {0}^{n−1} × R, the subspace spanned by en, whereas the orthogonal complement of E, H = R^{n−1} × {0}, remains invariant under fn ◦ ψ.
Our induction hypothesis states that there exist reflections f1, . . . , f_{n−1} whose composition agrees with fn ◦ ψ on H. We can extend f1 ◦ . . . ◦ f_{n−1} to R^n by setting its n-th component to the identity mapping, i.e., (f1 ◦ . . . ◦ f_{n−1})(·, xn) = ((f1 ◦ . . . ◦ f_{n−1})(·), xn). This extended function coincides with fn ◦ ψ on all of R^n, so ψ = fn^{−1} ◦ f1 ◦ . . . ◦ f_{n−1}, and the inductive step is complete.

To sum up, we showed that every origin-preserving isometry of R^n is the composition of up to n reflections, and more generally, every isometry of R^n is the composition of up to n + 1 reflections.

2 Recovering Euclidean Structure via Isometries

Observe that in R^n, any reflection ρ satisfies dim aff(ρ) ⩽ n − 1. For a simple example, in R^2, reflections are characterized by 0- or 1-dimensional affine subspaces, namely points or lines. A simple drawing shows that two line reflections across ℓ1 and ℓ2 commute if and only if ℓ1 = ℓ2 or ℓ1 ⊥ ℓ2. In R^n, the higher-dimensional analogue also holds.

Proposition 2.1. Let ρ1, ρ2 be two reflections of R^n, with affine subspaces H1, H2. Then they commute if and only if one of the following commutativity criteria holds:
(1) one of the subspaces is contained in the other, or
(2) H1 ∩ H2 ≠ ∅, and every u ∈ H1\(H1 ∩ H2) = H1\H2 is orthogonal to every v ∈ H2\H1.
For convenience we call these the “reflection commutativity criterion.” In either case, aff(ρ1 ◦ ρ2) = aff(ρ2 ◦ ρ1) = H1 ∩ H2.

Remark. Despite sounding like an orthogonality criterion, condition (2) cannot be replaced with “H1 and H2 are orthogonal.” Consider for example two reflections in R^3, one across the xy-plane, (x, y, z) ↦ (x, y, −z), and one across the yz-plane, (x, y, z) ↦ (−x, y, z). They clearly commute, but the two planes, when viewed as subspaces, are not orthogonal: they intersect in a line.

This observation suggests a certain structure on compositions of commuting reflections.
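The Remark’s example can be checked with explicit matrices. A minimal sketch (NumPy assumed; the matrix names are mine, not the thesis’s notation):

```python
import numpy as np

# Reflection across the xy-plane: (x, y, z) -> (x, y, -z)
R_xy = np.diag([1.0, 1.0, -1.0])
# Reflection across the yz-plane: (x, y, z) -> (-x, y, z)
R_yz = np.diag([-1.0, 1.0, 1.0])

# The two reflections commute.
assert np.allclose(R_xy @ R_yz, R_yz @ R_xy)

e_x, e_y, e_z = np.eye(3)
# Criterion (2) in action: directions of H1 off H2 (x-axis) are orthogonal
# to directions of H2 off H1 (z-axis).
assert np.dot(e_x, e_z) == 0.0
# But the planes are not orthogonal as subspaces: e_y lies in both.
assert np.dot(e_y, e_y) == 1.0

# The fixed-point set of the composition is H1 ∩ H2, the y-axis,
# matching aff(rho1 ∘ rho2) = H1 ∩ H2 in Proposition 2.1.
comp = R_xy @ R_yz
assert np.allclose(comp @ e_y, e_y)
assert np.allclose(comp @ e_x, -e_x) and np.allclose(comp @ e_z, -e_z)
```

All assertions pass, so the composition here is itself an involution fixing exactly the shared line.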
Note that if ρ1 commutes with ρ2, then their composition is yet another reflection, being an involution: (ρ1ρ2)(ρ1ρ2) = ρ1ρ2ρ1ρ2 = ρ1ρ1ρ2ρ2 = I. This suggests we can define commutative groups of reflections, with the identity transformation as the group identity.

Theorem 2.2. In R^n, a commutative group of reflections under composition has at most 2^n elements, and this upper bound can be attained.

Proof. It is easy to construct a commutative group of reflections with 2^n elements. For 1 ⩽ i ⩽ n, define ρi to be the mapping that flips the sign of the i-th coordinate while keeping the others unchanged. It follows that the ρi’s commute, and that they generate a group of 2^n elements: pick any subset of the n coordinates in R^n and flip those signs.

Conversely, let Gn be a commutative group of reflections of R^n. For each 0 ⩽ k ⩽ n, consider {ρ ∈ Gn : dim aff(ρ) = k}, the subset of reflections whose affine subspace has dimension k. These reflections commute pairwise, so they must satisfy the reflection commutativity criterion, and since their dimensions match, they must satisfy (2), which is then equivalent to requiring that the normal vectors to their subspaces be pairwise orthogonal. In R^n there are at most C(n, k) k-dimensional affine subspaces with pairwise orthogonal normal vectors, so |{ρ ∈ Gn : dim aff(ρ) = k}| ⩽ C(n, k), and

|Gn| = Σ_{k=0}^{n} |{ρ ∈ Gn : dim aff(ρ) = k}| ⩽ Σ_{k=0}^{n} C(n, k) = 2^n. (∆)

Now let Gn be a commutative group of reflections of R^n with |Gn| = 2^n. There are two “special” elements in this group, namely those with k = n and k = 0 in (∆). The former corresponds to a reflection whose mirror is the entire space: the identity transformation. The latter, on the other hand, is the reflection across a 0-dimensional affine subspace, namely a point, if one assumes its existence a priori. We call it the point reflection associated with Gn.
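The extremal construction in the proof of Theorem 2.2 can be built explicitly for small n. A sketch (NumPy and itertools assumed; the representation of each reflection as a diagonal sign matrix is mine):

```python
import itertools
import numpy as np

n = 3
# For each subset S of coordinates, the map flipping the signs of the
# coordinates in S; these 2^n maps form a commutative group of reflections.
group = [np.diag([-1.0 if i in S else 1.0 for i in range(n)])
         for k in range(n + 1)
         for S in itertools.combinations(range(n), k)]

assert len(group) == 2 ** n  # the upper bound of Theorem 2.2 is attained

# Every pair commutes, and the set is closed under composition.
for P in group:
    for Q in group:
        assert np.allclose(P @ Q, Q @ P)
        assert any(np.allclose(P @ Q, M) for M in group)

# The two "special" elements: the identity (mirror = whole space) and the
# point reflection -I, whose only fixed point is the origin.
assert any(np.allclose(M, np.eye(n)) for M in group)
assert any(np.allclose(M, -np.eye(n)) for M in group)
```

For n = 3 this yields the 8-element group, with −I playing the role of the point reflection discussed next.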
This point reflection is special in the sense that it “looks the same from every perspective,” whereas for every other reflection ρ ∈ Gn, the effect of ρ on a region depends on the location of the region and its relation to the affine subspace associated with ρ. In other words, the point reflection is the “special” element of Gn displaying a certain invariance property. This should remind one of the notion of conjugation in abstract algebra. Let us quickly recall that if G is a group, then g, h ∈ G are conjugates of each other if there exists an f ∈ G such that h = fgf^{−1}. It follows from the group axioms that conjugacy is an equivalence relation, which gives rise to conjugacy classes.

Returning to the group of isometries, let f, g be two isometric transformations. We say x is invariant under g if x = g(x). An immediate consequence of conjugation is that x is invariant under g if and only if f(x) is invariant under fgf^{−1}: if x = g(x), then (fgf^{−1})(f(x)) = (fg)(x) = f(x); conversely, if (fgf^{−1})(f(x)) = f(x), then (fg)(x) = f(x), so g(x) = x, since isometries are injective directly by definition: ‖f(x) − f(y)‖ = 0 if and only if ‖x − y‖ = 0.

Now we restrict our attention to point reflections: x is invariant under a reflection ρ if and only if x belongs to the affine subspace associated with ρ, and a point reflection has only one fixed point, namely the “point” across which the reflection is performed. Therefore, if p is the point reflection associated with Gn and f is any other isometry (not required to be another reflection), then fpf^{−1} is also a point reflection. Using the characterization of commuting reflections, we see that two point reflections p1, p2 commute if and only if they are identical: otherwise, aff(p1) ∩ aff(p2) = ∅, and both conditions fail. Put formally:

Definition 2.3.
A point reflection p of R^n is an isometry satisfying the following:
(1) p is not the identity mapping, and
(2) for any other isometry f, the composition fpf^{−1} either equals p or does not commute with it.

Note that this definition does not invoke any notion of points, nor does it depend on dimensionality, unlike the theorem on maximal commutative groups of reflections. Having characterized point reflections, we may now work our way backwards and recover the other primitives and notions of standard Euclidean geometry.

First, linearity. Just as two points define a line, we can fix two point reflections and define a corresponding line reflection. To do so, we need to establish a strict partial order on reflections with respect to the dimension and containment of their associated affine subspaces. The idea builds on the following fact: if H1 ⊂ H2 have dimensions d1 < d2, then there exists an H′1 ⊂ H2 such that dim(H′1) = dim(H1) and the two have orthogonal normal vectors. To state this formally, we need to (i) define subspace containment, and (ii) define orthogonal subspaces of the same dimension. Luckily, with conjugation, we have all the ingredients to cook up these notions. We use the symbol ρ1 ≺ ρ2 to express that ρ1’s subspace is strictly contained in ρ2’s.

Definition 2.4. Given two commuting reflections ρ1, ρ2 of R^n, we say ρ1 ≺ ρ2 if the following hold:
(1) there exists a translation f leaving ρ2 invariant under conjugation but not ρ1, i.e., ρ1 ≠ fρ1f^{−1} but ρ2 = fρ2f^{−1}, and
(2) there does not exist a translation g leaving ρ1 invariant but not ρ2.

These conditions impose structural constraints on the subspaces of ρ1 and ρ2: aff(ρ2) has “extra” free dimensions whereas aff(ρ1) does not. This ensures that if ρ1 ≺ ρ2, then aff(ρ1) ⊂ aff(ρ2) with strictly lower dimension. By definition, ≺ is irreflexive and asymmetric.
To show transitivity, suppose ρ1 ≺ ρ2 ≺ ρ3, with f12 fixing ρ2 but not ρ1, and f23 fixing ρ3 but not ρ2. Then (f12 f23) ρ3 (f12 f23)^{−1} = f12 (f23 ρ3 f23^{−1}) f12^{−1} = f12 ρ3 f12^{−1}. Since f12 leaves ρ2 invariant and ρ2 ≺ ρ3, by definition f12 also leaves ρ3 invariant, so the above equals ρ3. The other direction is immediate: if a translation leaves ρ1 invariant under conjugation, then by definition it leaves ρ2 invariant, and repeating this argument once more, it must also leave ρ3 invariant.

While we proved ≺ to be a strict partial order, it is still a much weaker notion than dimensionality, since only reflections with nested affine subspaces are comparable. Indeed, in R^n we may compare the dimensions of two arbitrary, suitably nice regions, which ≺ cannot do, but it will suffice for our upcoming definitions.

Definition 2.5. Given two point reflections p1 and p2, the line reflection ℓ = ℓ(p1, p2) associated with p1 and p2 is the minimal reflection (w.r.t. ≺) such that p1 ≺ ℓ and p2 ≺ ℓ. In other words, if p1, p2 ≺ ρ for some other reflection ρ, then ℓ ≺ ρ.

It naturally follows that a point reflection p3 is collinear with p1 and p2 if p3 ≺ ℓ(p1, p2). More generally, we can iteratively apply this definition and recover the entire structure of the Euclidean space. Define a point reflection to be of dimension 0 and a line reflection to be of dimension 1. Given two reflections ρ1, ρ2 of dimension n − 1, we may define a corresponding reflection ρ = ρ(ρ1, ρ2) of dimension n to be the minimal reflection w.r.t. ≺ satisfying ρ1 ≺ ρ and ρ2 ≺ ρ. Finally, given three reflections ρ1, ρ2, ρ3 of the same dimension, we say (ρ1, ρ2) is congruent to (ρ2, ρ3) if there exists a translation τ such that τρ1τ^{−1} = ρ2 and τρ2τ^{−1} = ρ3.

To wrap up, we have identified a method that recovers the standard Euclidean geometric structure using only the notion of regions and the algebraic properties of rigid motions on those regions.
We proved that, using commutativity of reflections, it is possible to recover an analogous notion of point via a special type of isometry called the point reflection. We then showed that dimensionality and collinearity can be defined by working “backwards” once we have identified point reflections. Finally, congruence is easy with the help of translations, another type of isometry. Throughout this process, we never relied explicitly on the dimension of the ambient space, but were able to recover the dimension through our own characterizations.

Chapter 3. Incompatibility of Gunk and Measure

Since the Whiteheadian perspective was proposed, it has become traditional to model a gunky space using the Boolean algebra of regular open sets in Euclidean space. Recall that a Boolean algebra [15] is a set B equipped with binary operations ∧ (meet) and ∨ (join), a unary complement −, and distinct members 0, 1 of B such that:
(i) ⟨B, ∨, ∧⟩ is a distributive lattice (meaning pairwise meets and joins exist, and the two operations distribute over each other),
(ii) x ∨ 0 = x and x ∧ 1 = x for all x, and
(iii) x ∨ (−x) = 1 and x ∧ (−x) = 0 for all x.
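Axioms (i)–(iii) can be verified mechanically on the simplest concrete Boolean algebra, the powerset of a finite set. This is a sketch for illustration only: the powerset algebra is atomic, unlike the gunky algebras discussed in this chapter, so it illustrates the axioms but not gunk.

```python
from itertools import chain, combinations

# The powerset of U, with union as join, intersection as meet,
# and set complement relative to U.
U = frozenset({1, 2, 3})
B = [frozenset(s) for s in
     chain.from_iterable(combinations(sorted(U), r) for r in range(len(U) + 1))]

join = lambda x, y: x | y
meet = lambda x, y: x & y
comp = lambda x: U - x
bot, top = frozenset(), U  # the elements 0 and 1

for x in B:
    # (ii) identity laws and (iii) complement laws
    assert join(x, bot) == x and meet(x, top) == x
    assert join(x, comp(x)) == top and meet(x, comp(x)) == bot
    # (i) distributivity, in both directions
    for y in B:
        for z in B:
            assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
            assert join(x, meet(y, z)) == meet(join(x, y), join(x, z))
```

All 8 elements satisfy every axiom; the regular open algebra below satisfies the same laws, but with Int Cl(A ∪ B) in place of plain union.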
This correpsonds 15 to the Boolean algebraic complement: joining the two regions gives the top element, i.e., the whole space, and meeting the two regions gives nothing. Historically, philosophers and mathematicians have explored relevant spatial structures that would be “nice” to have in a model. One of the earliest mereological condition simply demands that the space itself be atomless: Every region has a proper subregion. (Mereological Gunk) A stronger extension of mereological gunk would instead require that not only does every region contain a proper subregion, but the subregion sits well inside it. The approach, following the tradition set by Whitehead [18] and Roeper [12], relies on a primitive called connectedness. Every region x would have to contain a subregion that is disjoint from any region disjoint from x: Every region x contains a subregion that is disjoint from any region disjoint from x. (Topological Gunk) But regions have sizes. While the topological gunk condition provides a statement that demands the qualitative existence of an “interior” part, it lacks a quantitative formulation. To fill in this gap, gunky spaces shouldn’t have discrete, quantitatively indivisible chunks. This gives rise to the strongest condition of the three: With respect to any measure, every region contains a strictly smaller part. (Measure-Theoretic Gunk) All three approaches have been extensively studied. The mereological one was established by Whitehead. The topological condition was discussed in Roeper, and the measure-theoretic gunk was studied 16 by Skyrms [16] and Sikorski [15]. Alternate approaches also exist, such as a nonstandard mereology by Forrest [3] and a metrical appraoch by Gerla [4]. It would be nice to have a model satisfying all three structural constraints on gunky models, but this attempt has been proven impossible by Russell [14], where he claimed that topological gunk and measuretheoretic gunk are inherently incompatible. 
In the remainder of the section, we restrict our attention to the Boolean algebra formed by regular open sets in R^n and reconstruct a famous counterexample involving the Fat Cantor Set. We then briefly discuss several attempts to address the issues brought up by the Fat Cantor Set.

1 Boolean Algebra and the Fat Cantor Set

A set A is regular open if A = Int Cl A (the interior of its closure). We consider the collection B of regular open sets in R^n, equipped with the following operations:
• join: A ∨ B := Int Cl(A ∪ B)
• meet: A ∧ B := A ∩ B
• Boolean complement: −A := Int(R^n \ A) = Int(A^c) (equivalently Int Cl(A^c), since A^c is closed)
• 0 and 1, represented by ∅ and R^n.

Proposition 1.1. The collection of regular open sets in R^n, with the operations defined above, forms a complete Boolean algebra.

Proof that B is a Boolean algebra. We need to prove that B is a distributive, complemented lattice. It is clear that ⟨B, ∨, ∧⟩ forms a lattice, and (ii) is also clear. To prove (iii), it is convenient to denote Int Cl(A) by A**, where A* := R^n \ Cl A = (Cl A)^c. For any A ∈ RO(R^n),

A ∨ (−A) = (A ∪ (−A))** = (A ∪ A*)** = (A* ∩ A**)* = (A* ∩ A)* = ∅* = R^n = 1

and A ∧ (−A) = A ∩ Int(A^c) ⊂ A ∩ A^c = ∅. It remains to prove distributivity. First note that A ∧ (B ∨ C) = A ∧ (B ∪ C)** = A** ∩ (B ∪ C)** and that (A ∧ B) ∨ (A ∧ C) = ((A ∧ B) ∪ (A ∧ C))** = ((A ∩ B) ∪ (A ∩ C))** = (A ∩ (B ∪ C))**. It remains to connect these two chains of equalities: (B ∪ C)** is regular open, and X** ∩ Y** = X ∩ Y = (X ∩ Y)** for any X, Y ∈ RO(R^n). The other distributive identity can be proven analogously.

Proof that B is complete. Let I be an index set. Then X = ⋁_{i∈I} Ai = Int Cl ⋃_{i∈I} Ai and Y = ⋀_{i∈I} Ai = Int Cl ⋂_{i∈I} Ai are well-defined elements of RO(R^n). We show that X is the supremum of {Ai}. For each i ∈ I, Ai = Int Cl Ai ⊂ Int Cl ⋃_{i∈I} Ai = X. On the other hand, if X′ ∈ RO(R^n) is another upper bound of {Ai}_{i∈I}, then Ai ⊂ X′ for all i ∈ I, so ⋃ Ai ⊂ X′, and X = Int Cl ⋃_{i∈I} Ai ⊂ Int Cl X′ = X′.
That $Y$ is the infimum can be proven analogously. To avoid abuse of symbolic variables we will denote $X, Y$ by $\sup\{A_i\}_{i \in I}$ and $\inf\{A_i\}_{i \in I}$, respectively.

Unfortunately, a pointless Euclidean geometry does not come without technical challenges. One particular complication is the notion of size or measure:

Proposition 1.2. Lebesgue measure is not finitely additive on $B$.

Proof. To see this, we appeal to the famous Smith–Volterra–Cantor set ([1], [8], [11], [14]), also known as the Fat Cantor Set. Let $m$ denote Lebesgue measure and let $I = [0, 1]$ be the unit interval. In the first iteration, remove an open subinterval of length $1/4$ from the middle of $I$, and use $I_1$ to denote this removed interval: $I_1 = (3/8, 5/8)$. Now $I \setminus I_1$ consists of two disjoint intervals of equal length. In the second iteration, remove an open subinterval of length $4^{-2}$ from the middle of each of the two remaining intervals in $I \setminus I_1$, and use $I_2$ to denote the union of these two removed intervals. Iterating, in the $k$th iteration we remove open intervals of length $4^{-k}$ from the middle of each of the $2^{k-1}$ remaining subintervals in $I \setminus \bigcup_{i=1}^{k-1} I_i$ and define $I_k$ accordingly. Since $I_k$ consists of $2^{k-1}$ disjoint intervals, each of length $4^{-k}$, we have $m(I_k) = 2^{k-1} \cdot 4^{-k} = 2^{-(k+1)}$. Clearly $I_i \cap I_j = \varnothing$ for $i \neq j$, so $m(\bigcup_{k \geqslant 1} I_k) = 1/4 + 1/8 + \cdots = 1/2$. We define the Fat Cantor Set to be $C = I \setminus \bigcup_{k \geqslant 1} I_k$, and it follows that $m(C) = 1 - 1/2 = 1/2$.

Before visiting pathological examples, it is worth comparing this fat $C$ against the standard middle-thirds Cantor set, the most notable difference being that $C$ has nonzero Lebesgue measure. Even so, $C$ is still totally disconnected and is the boundary of $I \setminus C = \bigcup_{k \geqslant 1} I_k$, just like the standard Cantor set. Given $x \in \mathbb{R}$ and $S \subset \mathbb{R}$, define the point-set distance $d(x, S) = \inf_{s \in S} |x - s|$. It follows that for any $x \in [0, 1] = I$, after the first iteration $d(x, I_1) < 1/2$, and by induction $d(x, \bigcup_{i=1}^{k} I_i) < 2^{-k}$.
This shows that given any $x \in [0, 1]$ and $\epsilon > 0$, there exists a sufficiently large $k$ such that $(x - \epsilon, x + \epsilon) \cap \bigcup_{i=1}^{k} I_i \neq \varnothing$. In other words, the closure of $\bigcup_{k \geqslant 1} I_k$ is $[0, 1]$.

Now we partition $\bigcup_{k \geqslant 1} I_k$ into two parts, a Big Cantor defined by $\bigcup_{k \text{ odd}} I_k$ and a Small Cantor defined by $\bigcup_{k \text{ even}} I_k$. (So far we have Big Cantor, Small Cantor, and $C$, whose disjoint union is $I$.) It is immediately clear that these sets are Lebesgue measurable, with $m(\text{Big Cantor}) = 1/3$ and $m(\text{Small Cantor}) = 1/6$. While their Lebesgue measures behave nicely under the usual set-theoretic operations, things look rather different when we turn to the Boolean operations defined above. To see this, observe that $\text{Big Cantor} \wedge \text{Small Cantor} = \varnothing$, so $m(\text{Big Cantor} \wedge \text{Small Cantor}) = 0$. On the other hand, $\text{Big Cantor} \vee \text{Small Cantor} = \operatorname{Int}\operatorname{Cl}(\text{Big Cantor} \cup \text{Small Cantor}) = \operatorname{Int}\operatorname{Cl}(\bigcup_{k \geqslant 1} I_k) = \operatorname{Int}([0, 1]) = (0, 1)$, but
$$1 = m(\text{Big Cantor} \vee \text{Small Cantor}) \neq m(\text{Big Cantor}) + m(\text{Small Cantor}) = 1/2.$$
Here we constructed examples in $\mathbb{R}$; higher-dimensional counterparts can be constructed analogously.

The problem here is that we constructed two sets $A, B$ such that $A \vee B = \operatorname{Int}\operatorname{Cl}(A \cup B) \supsetneq A \cup B$, and this leads to unwanted behaviors. The root of this pathology lies in the fact that Big Cantor and Small Cantor (i) have boundaries of positive measure, and (ii) share the same boundary $C$. Consequently, "too much" is introduced by the closure operator $\operatorname{Cl}$, to the extent that Lebesgue measure breaks. A few modifications have been proposed and explored in previous literature.

2 First Solution: Forcing Null Boundaries

The first is proposed by Lando and Scott [8], where we simply restrict our attention to sets with boundaries of Lebesgue measure zero, which we denote by $RON(\mathbb{R}^n) \subset RO(\mathbb{R}^n)$. (In their paper, they began with regular closed sets instead of regular open sets, but all of the following results hold by taking appropriate complementations and swapping the order of the interior and closure operators.)
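The stage measures worked out above can be double-checked numerically. A small sketch, not from the thesis, using exact rational arithmetic: stage $k$ removes $2^{k-1}$ intervals of length $4^{-k}$, so $m(I_k) = 2^{-(k+1)}$, and the odd and even stages sum to $1/3$ and $1/6$ respectively.

```python
from fractions import Fraction

# m(I_k) = 2^(k-1) * 4^(-k) = 2^(-(k+1)): 2^(k-1) intervals of length 4^(-k).
def m_Ik(k):
    return Fraction(2) ** (k - 1) / Fraction(4) ** k

total = sum(m_Ik(k) for k in range(1, 61))       # -> 1/2 as the stages grow
big   = sum(m_Ik(k) for k in range(1, 61, 2))    # odd stages  -> 1/3
small = sum(m_Ik(k) for k in range(2, 61, 2))    # even stages -> 1/6
print(float(total), float(big), float(small))    # ≈ 0.5  0.3333...  0.1666...
```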
This collection explicitly excludes the pathological Big/Small Cantor example above, and indeed preserves finite additivity of Lebesgue measure.

Proposition 2.1. Lebesgue measure is finitely additive on $RON(\mathbb{R}^n)$ with the operations defined as in $B$.

Proof. Suppose $A, B \in RON(\mathbb{R}^n)$ and $A \wedge B = A \cap B = \varnothing$. In particular the interiors are disjoint, so
$$\operatorname{Cl}(A) \cap \operatorname{Cl}(B) = (\operatorname{Int} A \cup \partial A) \cap (\operatorname{Int} B \cup \partial B) \subset (\partial A \cap \operatorname{Cl} B) \cup (\partial B \cap \operatorname{Cl} A).$$
Since $m(\partial A) = m(\partial B) = 0$ by assumption, the above set has measure $0$. Then
$$m(\operatorname{Cl}(A \cup B)) = m(\operatorname{Cl}(A) \cup \operatorname{Cl}(B)) = m(\operatorname{Cl}(A)) + m(\operatorname{Cl}(B)) - m(\operatorname{Cl}(A) \cap \operatorname{Cl}(B)) = m(\operatorname{Cl}(A)) + m(\operatorname{Cl}(B)) = m(A) + m(B).$$
It remains to notice that $\partial(A \cup B) \subset \partial A \cup \partial B$. Thus $A \cup B$ has null boundary and $m(\operatorname{Cl}(A \cup B)) = m(\operatorname{Int}\operatorname{Cl}(A \cup B)) = m(A \vee B)$, completing the proof.

While restricting attention in this way resolves the issue of finite additivity of Lebesgue measure, it still does not resolve the issue of countable additivity. Clearly, each $I_k$ in the construction of $C$ is in $RON(\mathbb{R}^n)$, but as we already showed,
$$\frac{1}{2} = \sum_{k \geqslant 1} m(I_k) \neq m\Big(\bigvee_{k \geqslant 1} I_k\Big) = m\Big(\operatorname{Int}\operatorname{Cl}\Big(\bigcup_{k \geqslant 1} I_k\Big)\Big) = m((0, 1)) = 1.$$
This means that the regular open sets with null boundaries form an incomplete Boolean algebra, and that its completion is once again $RO(\mathbb{R}^n)$.

Nevertheless, Lando and Scott have shown that this Boolean algebra has many nice properties. $RON(\mathbb{R}^n)$ sits densely in $RO(\mathbb{R}^n)$ in the algebraic sense. It satisfies Roeper (1997)'s axiomatization of region-based topology, a collection of axioms that Roeper proved to be necessary for any collection of pointless regions constructed, via an equivalence relation on boundary points of pointy regions, in a locally compact Hausdorff space, a setting to which the collection of regular open sets belongs.
As in the previous section, Lando and Scott have also shown that the notion of points, along with the entire pointy topology of $\mathbb{R}^n$, can be recovered via certain equivalence relations, and that $RON(\mathbb{R}^n)$ is not isomorphic to $RON(\mathbb{R}^m)$ for any $m \neq n$, essentially establishing the notion of dimensionality in a parameter-free approach.

3 Second Solution: Equivalence Relations on Sets with Null Difference

Another, more classical, approach is proposed by Arntzenius [1], who considered the Lebesgue measure algebra: the algebra of Borel subsets of $\mathbb{R}^n$ modulo the ideal of Lebesgue null sets. In his own words, he blurs the differences in regions which "do not correspond to differences in actual physical space." If we define an equivalence relation $A \sim B$ whenever the symmetric difference $A \Delta B = (A \setminus B) \cup (B \setminus A)$ has measure zero, then naturally all regions in the same equivalence class have the same Lebesgue measure. Therefore, it is well-defined to let $\mu_0([A]) = m(A)$, where $[A]$ is the equivalence class containing $A$. Since we are dealing with the Lebesgue measure algebra, we may relax our assumption of regular open sets and instead work directly with equivalence classes of Borel sets. To this end we define the notions of join, meet, and complement on the Lebesgue measure algebra canonically:
$$[A] \wedge [B] := [A \cap B], \qquad [A] \vee [B] := [A \cup B], \qquad -[A] := [A^c].$$
Since a countable union of null sets is still null, intuitively,

Proposition 3.1. $\mu_0$ is a countably additive measure on the Lebesgue measure algebra.

Proof. Given pairwise disjoint equivalence classes $\{[A_k]\}_{k \geqslant 1}$ (in the sense that $[A_i] \wedge [A_j] = [\varnothing]$), we define $B_1 = A_1$ and $B_k = A_k \setminus \bigcup_{i=1}^{k-1} A_i$ inductively. It follows that the $B_k$'s are pairwise disjoint, and
$$A_k \Delta B_k = A_k \setminus B_k = A_k \cap \bigcup_{i=1}^{k-1} A_i = \bigcup_{i=1}^{k-1} (A_k \cap A_i),$$
which is a null set because $A_k \cap A_i$ has measure zero for all $i < k$.
Therefore $[A_k] = [B_k]$ for each $k$, and
$$\mu_0\Big(\bigvee_{k \geqslant 1} [A_k]\Big) = \mu_0\Big(\bigvee_{k \geqslant 1} [B_k]\Big) = \mu_0\Big(\Big[\bigcup_{k \geqslant 1} B_k\Big]\Big) = m\Big(\bigcup_{k \geqslant 1} B_k\Big) = \sum_{k \geqslant 1} m(B_k) = \sum_{k \geqslant 1} \mu_0([B_k]) = \sum_{k \geqslant 1} \mu_0([A_k]),$$
where the second equality holds because the countable join corresponds to the countable union, the fourth is countable additivity of $m$, and the last two use $\mu_0([A]) = m(A)$ and $[A_k] = [B_k]$.

This approach clearly resolves the predicament of Big Cantor and Small Cantor, since no closure or interior operator is involved in the operations $\vee$ and $\wedge$ on the Lebesgue measure algebra. In the measure-theoretic sense this is a satisfying result, since we can recover all Borel measurable pointy functions up to a null set of difference.

Arntzenius's approach meets the conditions for Roeper's axiomatization, but in order to evaluate its compatibility, a topology must first be given to the Lebesgue measure algebra. Arntzenius's construction turns out to satisfy only nine of Roeper's ten axioms, failing the last one, which roughly says: if $A$ sits "strictly inside" $B$, then there exists a $C$ such that $A$ sits "strictly inside" $C$, and $C$ sits "strictly inside" $B$. Note that this is essentially the condition of Topological Gunk. Formally, for two Borel sets $A, B$, Arntzenius defined them to be connected if there exists a point $p$ such that any open set containing $p$ intersects both $A$ and $B$ in a non-null region. A region $A$ sits "strictly inside" $B$ if $A$ is not connected with the complement of $B$.

Proof. To construct a counterexample that breaks Roeper's last axiom, let us once again consider the Fat Cantor Set $C$. Our goal is to show that any non-null subset of $C$ is connected to $C^c$. Recall the point-set distance $d(x, S)$ defined in the introduction of the Fat Cantor Set. Let $x$ be any point in any non-null subset of $C$ and let $\epsilon > 0$ be given. There exists a sufficiently large $k$ such that $d(x, \bigcup_{i=1}^{k} I_i) < \epsilon/2$. This means $(x - \epsilon/2, x + \epsilon/2) \cap \bigcup_{i=1}^{k} I_i$ is nonempty. Since $\bigcup_{i=1}^{k} I_i$ is a union of intervals, all of length $\geqslant 4^{-k} > 0$, it follows that $(x - \epsilon, x + \epsilon) \cap \bigcup_{i=1}^{k} I_i$ contains an interval of positive length.
This proves that every non-null subset of $C$ is connected to $C^c$, so $C$ violates Roeper's last axiom, and therefore Arntzenius's model does not satisfy Topological Gunk.

Chapter 4: An Exploration of Gunky Probability Spaces

If the structure of a gunky space can be used to recapture the essential structures of a pointy space, what about modeling events defined on gunky spaces? In this chapter, we attempt to recover methods of abstract integration of real-valued random variables from a pointless perspective. Certainly, without appealing to points, some familiar notions of pointy probability theory become inaccessible: for example, we cannot pursue pointwise limits, and we need new modes of convergence that are compatible with the spatial structure of gunky spaces. We begin by exploring the possibility of extending a Boolean algebra so that a countably additive probability measure can be defined on it. With such a "nice" probability space defined, we then explore how to define random variables, as well as integration and convergence for them. Like many standard treatments of probability theory, we begin with the easiest, most well-behaved types of random variables and work our way up toward more abstract, general random variables.

1 Defining a Complete, Countably Additive Probability Measure

Let us consider a Boolean algebra $B$ and first define a finitely additive probability measure on it. The basic notions are analogous to those of standard pointy probability theory.

Definition 1.1. Let $B$ be a Boolean algebra with $0 = \varnothing$ and $1$, together with the join $\vee$, meet $\wedge$, and complement operations. We define a "gunky" probability law $P$ on $B$ to be a real-valued function that is:
(1) Strictly positive: $P(x) \geqslant 0$, and $P(x) = 0$ if and only if $x = 0$, so that every nonzero region has positive "size";
(2) $P(x) \leqslant 1$, and $P(x) = 1$ if and only if $x = 1$; and
(3) $P(x \vee y) = P(x) + P(y)$ whenever $x \wedge y = \varnothing$.
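Definition 1.1 can be sanity-checked on a toy model. The model below is hypothetical and, importantly, not gunky (it has atoms); it only illustrates that the three axioms are consistent: take $B$ to be all subsets of a finite set, with join as union, meet as intersection, and $P$ the normalized counting measure.

```python
from fractions import Fraction
from itertools import combinations

# Toy model (NOT gunky, since it has atoms; illustration of the axioms only):
# B = all subsets of a finite Ω, P = normalized counting measure.
OMEGA = frozenset(range(5))
B = [frozenset(c) for r in range(len(OMEGA) + 1)
     for c in combinations(sorted(OMEGA), r)]

def P(x):
    return Fraction(len(x), len(OMEGA))

for x in B:
    assert 0 <= P(x) <= 1
    assert (P(x) == 0) == (x == frozenset())   # (1) strictly positive
    assert (P(x) == 1) == (x == OMEGA)         # (2) top element only
    for y in B:
        if not (x & y):                        # x ∧ y = ∅
            assert P(x | y) == P(x) + P(y)     # (3) additivity
print("Definition 1.1 holds on the toy model")
```

Exact rationals are used so that the additivity check in (3) is not disturbed by floating-point rounding.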
Without having looked much into the structure of "gunky" regions, we cannot venture to impose assumptions that are too strong. But one thing we would like to have, with respect to measure-theoretic gunk, is that all nonzero regions have positive measure. Nevertheless, right away we can derive a list of intuitive results that align with pointy probability:

• $P$ is finitely additive.
• Define a partial order $\leqslant$ by non-strict inclusion and $<$ its strict counterpart. Then if $x \leqslant y$ (resp. $x < y$), $P(x) \leqslant P(y)$ (resp. $P(x) < P(y)$).
• Boolean rings. We can define two more operations $+$ and $\cdot$ on $B$ so that $(B, +, \cdot)$ forms a ring. For $x, y \in B$, define $x + y$ to be the symmetric difference $(x \wedge -y) \vee (-x \wedge y)$, and define $x \cdot y$ to be $x \wedge y$. The zero of this ring corresponds to $0 = \varnothing$ and the multiplicative identity corresponds to $1$. It is also worth noting that this ring is idempotent: $x \cdot x = x$ for all $x$.
• Difference: for $y \leqslant x$, define $x - y$ to be the unique element $z$ such that $z \wedge y = 0$ and $z \vee y = x$. It follows that if $x \leqslant y$ then $P(x) + P(y - x) = P(y)$.
• Inclusion–exclusion: $P(x) + P(y) = P(x \vee y) + P(x \wedge y)$.

Proof. Observe that $x \vee y = (x \wedge y) \vee (x - x \wedge y) \vee (y - x \wedge y)$ disjointly, and that $P(x - x \wedge y) + P(y - x \wedge y) = P(x) + P(y) - 2P(x \wedge y)$. Applying finite additivity to the first display and rearranging gives the desired result.

Of course, we would want stronger formulations of the gunky probability measure: right now it is neither countably additive nor guaranteed to be complete. To approach this, we will define a metric space $(B, d)$ from $(B, P)$ and construct the completion $\overline{B}$ of $B$. Naturally, a metric on elements of $B$ should measure how "far" elements are from each other, and a natural candidate is built from the symmetric difference $+$. Hence we define $d(x, y) := P(x + y)$.

It is easy to verify that $d(\cdot, \cdot)$ indeed defines a metric: $d(x, y) = 0$ if and only if $P(x + y) = 0$, if and only if $x + y = \varnothing$, or equivalently $x = y$. Symmetry is clear.
For the triangle inequality, note that $a + b \leqslant a \vee b$ and $z + z = 0$, so
$$d(x, y) = P(x + y) = P((x + y) + (z + z)) = P((x + z) + (y + z)) \leqslant P((x + z) \vee (y + z)) \leqslant P(x + z) + P(y + z) = d(x, z) + d(y, z).$$

With a metric defined, we obtain our first mode of convergence, which we call $B$-convergence.

Definition 1.2. We say a sequence $\{x_n\}$ $B$-converges to a $B$-limit $x \in B$ if $d(x_n, x) \to 0$, or equivalently $P(x_n + x) \to 0$. Similarly, we say $\{x_n\}$ is $B$-Cauchy if $\lim_{n \to \infty} \sup_{i, j \geqslant n} d(x_i, x_j) = 0$, and, as usual, we say $(B, d)$ is complete if every $B$-Cauchy sequence $B$-converges in the space.

As one may suspect, not every space defined in this way is complete. Consider again the example of the Fat Cantor Set in $RO(\mathbb{R})$, where $P$ is Lebesgue measure. By construction, $d(\bigcup_{i=1}^{k-1} I_i, \bigcup_{i=1}^{k} I_i) = m(I_k) = 2^{-(k+1)}$, so $\{\bigcup_{i=1}^{k} I_i\}_{k \geqslant 1}$ forms a Cauchy sequence with respect to this metric. Yet this sequence has no limit, since otherwise $\bigcup_{k \geqslant 1} I_k = I \setminus C$ would have been a regular open set, which we have shown is false, since the interior of its closure is $(0, 1)$.

Our next goal, naturally, is to complete $(B, d)$, since it is well known that every metric space can be completed. There is nothing exotic in the process: it is a standard application of Cauchy completion [11]. We consider $\mathcal{C}$, the collection of $B$-Cauchy sequences in $(B, d)$. Let us first clarify the algebraic structure defined on $\mathcal{C}$. Two sequences $\{x_n\}, \{y_n\}$ are considered the same if they agree term-wise. Viewing $\mathcal{C}$ as a Boolean ring, the operations are defined via:
• Multiplicative identity (one): the constant sequence of $1$'s.
• Additive identity (zero): the constant sequence of $0 = \varnothing$'s.
• Addition is defined term-wise: $\{x_n\} + \{y_n\} = \{x_n + y_n\}$ (where the latter is the addition defined on $(B, +, \cdot)$).
• Multiplication is defined term-wise as well: $\{x_n\} \cdot \{y_n\} = \{x_n \cdot y_n\}$ (with $\cdot$ from $(B, +, \cdot)$).
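The Fat Cantor example above, in which the partial unions form a $B$-Cauchy sequence without a limit, can be sanity-checked numerically. In this sketch (not from the thesis) $m$ stands in for $P$, and since the partial unions $y_k = I_1 \cup \cdots \cup I_k$ are nested, the symmetric difference reduces to a set difference, so $d(y_j, y_k) = m(y_k) - m(y_j)$ for $j < k$.

```python
# m(y_k) for the nested partial unions y_k = I_1 ∪ ... ∪ I_k,
# using m(I_i) = 2^(-(i+1)).
def m_partial(k):
    return sum(2.0 ** -(i + 1) for i in range(1, k + 1))

# For nested sets, d(y_j, y_k) = P(y_j + y_k) = |m(y_k) - m(y_j)|.
def d(j, k):
    return abs(m_partial(k) - m_partial(j))

# Tail of a geometric series: the distances vanish, so the sequence is
# B-Cauchy, even though (as shown above) it has no regular open limit.
print(d(10, 20))   # bounded by 2^(-11)
```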
It follows that if we define $\vee$ and $\wedge$ on $\mathcal{C}$ by $\{x_n\} \wedge \{y_n\} := \{x_n\} \cdot \{y_n\}$ and $\{x_n\} \vee \{y_n\} := \{x_n\} + \{y_n\} + \{x_n\} \cdot \{y_n\}$, then
$$\{x_n\} \vee \{y_n\} = \{x_n \vee y_n\}, \qquad \{x_n\} \wedge \{y_n\} = \{x_n \wedge y_n\}, \qquad \{x_n\}^c = \{1\} + \{x_n\} = \{x_n^c\}.$$
That is, $\mathcal{C}$ can be viewed as a Boolean algebra with respect to these operations.

We say two Cauchy sequences $\{x_n\}, \{y_n\}$ are co-Cauchy if $d(x_n, y_n) \to 0$. Define $\overline{B}$ to be $\mathcal{C}$ modulo the equivalence relation of being co-Cauchy. Alternatively, this can be characterized as $\mathcal{C}$ modulo the ideal $\mathcal{C}_0$ of sequences with $B$-limit $\varnothing$; we write $\overline{B} = \mathcal{C}/\mathcal{C}_0$. (Our approach to fixing the completeness of $B$, and, as we later show, the countable additivity of $P$, is similar to how Arntzenius [1] attempted to fix the issue of countable additivity on a gunky $\mathbb{R}^n$.)

Finally, we define a new metric $\overline{d}$ on $\overline{B}$ by
$$\overline{d}([\{x_n\}], [\{y_n\}]) := \lim_{n \to \infty} d(x_n, y_n).$$
This is a well-defined metric because of how we constructed our equivalence classes: if $\{y_n\}$ and $\{z_n\}$ are co-Cauchy and $\lim d(x_n, y_n) = 0$, then $\lim d(x_n, z_n) \leqslant \lim d(x_n, y_n) + \lim d(y_n, z_n) = 0$. It is therefore natural to define a probability law $\overline{P}$ on $\overline{B}$ by
$$\overline{P}([\{x_n\}]) := \lim_{n \to \infty} P(x_n).$$

Theorem 1.3. $\overline{B}$ is complete with respect to $\overline{d}$, and $\overline{P}$ is a countably additive probability measure on $\overline{B}$.

Proof that $\overline{B}$ is complete. Let $\{[C_k]\}_{k \geqslant 1}$ be a Cauchy sequence in $(\overline{B}, \overline{d})$; note that by definition each $C_k$ is a $B$-Cauchy sequence in $B$. For each $k$, discard early terms of $C_k$, leaving $C_k'$, the collection of sufficiently late terms, so that $\sup_{x, y \in C_k'} d(x, y) < 1/k$. Since a subsequence of a Cauchy sequence is co-Cauchy with the original sequence, $C_k$ and $C_k'$ belong to the same equivalence class for each $k$, and therefore $\{[C_k]\}_{k \geqslant 1} = \{[C_k']\}_{k \geqslant 1}$. Define $c_{k,n}$ to be the $n$th term of the modified sequence $C_k'$. Now construct a sequence $X = \{x_n\} \subset B$ by setting $x_k = c_{k,k}$, the diagonal sequence of $\{C_k'\}$. The proof is done if we show that $X \in \mathcal{C}$ and that $\{[C_k]\}$ converges to $[X]$ with respect to $(\overline{B}, \overline{d})$.
To show $X$ is $B$-Cauchy, let $\epsilon > 0$ be given and let $N$ be sufficiently large so that $\sup_{k, j > N} \overline{d}([C_k], [C_j]) < \epsilon/3$. Then, for $k, j > N$ and any $n$,
$$d(x_k, x_j) = d(c_{k,k}, c_{j,j}) \leqslant d(c_{k,k}, c_{k,n}) + d(c_{k,n}, c_{j,n}) + d(c_{j,n}, c_{j,j}) \leqslant \frac{1}{k} + \frac{1}{j} + d(c_{k,n}, c_{j,n}).$$
Assuming also $N > 3\epsilon^{-1}$, we have $1/k + 1/j < 2\epsilon/3$. On the other hand, by assumption $\overline{d}([C_k], [C_j]) = \lim_n d(c_{k,n}, c_{j,n}) < \epsilon/3$, so for sufficiently large $n$, $d(c_{k,n}, c_{j,n}) < \epsilon/3$ as well. Since the above triangle inequality holds for arbitrary $n$, we conclude that $d(x_k, x_j) < \epsilon$, and therefore $X$ is $B$-Cauchy.

Finally, to show $\{[C_k]\}$ converges to $[X]$, for any $\epsilon > 0$ pick $N$ sufficiently large such that $\sup_{k, j > N} d(x_k, x_j) < \epsilon/2$. Then
$$d(c_{k,j}, x_j) \leqslant d(c_{k,j}, c_{k,k}) + d(c_{k,k}, x_j) = d(c_{k,j}, c_{k,k}) + d(x_k, x_j) \leqslant \frac{1}{k} + \frac{\epsilon}{2} < \epsilon$$
provided $k, j > N > 2\epsilon^{-1}$. Taking the limit in $j$, and then in $k$, we see $\overline{d}([C_k], [X]) \to 0$, completing the proof that $\overline{B}$ is complete.

Proof that $\overline{P}$ is a countably additive probability measure. It is clear that $\overline{P}$ is a probability measure, since it inherits the structure of $P$, so we focus on proving countable additivity.

Let us first show that $\overline{P}$ is finitely additive. Let $[\{x_n\}]$ and $[\{y_n\}]$ be given, and assume they are disjoint in the sense that $[\{x_n\}] \wedge [\{y_n\}] = [\{\varnothing\}]$, so that $\lim_n P(x_n \wedge y_n) = 0$. Proving additivity in this case is straightforward by definition:
$$\overline{P}([\{x_n\}] \vee [\{y_n\}]) = \overline{P}([\{x_n \vee y_n\}]) = \lim_{n \to \infty} P(x_n \vee y_n) = \lim_{n \to \infty} \big(P(x_n) + P(y_n) - P(x_n \wedge y_n)\big) = \overline{P}([\{x_n\}]) + \overline{P}([\{y_n\}]),$$
using inclusion–exclusion for the third equality.

To prove countable additivity, let $\{[C_k]\} \subset \overline{B}$ be pairwise disjoint, where each $C_k$ is a $B$-Cauchy sequence. We assume that the infinite join $[C] = \bigvee_{k \geqslant 1} [C_k]$ exists in $\overline{B}$, and the goal is to show $\overline{P}([C]) = \sum_{k \geqslant 1} \overline{P}([C_k])$. It is known in standard pointy measure theory that finite additivity, combined with continuity from above at $\varnothing$, implies countable additivity. Here we adopt a similar approach.
To this end, consider a sequence with $[X_k] \geqslant [X_{k+1}]$ for each $k$ and $\bigwedge_{k \geqslant 1} [X_k] = [\{\varnothing\}]$. Monotonicity of $\overline{P}$ implies that $\lim_k \overline{P}([X_k])$ exists, and the sequence $\{[X_k]\}$ is in particular Cauchy with respect to $\overline{d}$. Since $\overline{B}$ is complete, $[X_k]$ $B$-converges to some limit, which we call $[X]$. Our goal is, of course, to show that $[X] = [\{\varnothing\}]$, but this is straightforward: for each $k \geqslant 1$,
$$[X] \wedge [X_k] = \Big(\lim_{n \to \infty} [X_n]\Big) \wedge [X_k] = \lim_{n \to \infty} \big([X_n] \wedge [X_k]\big) = \lim_{n \to \infty} [X_n] = [X],$$
where the last two equalities use that $[X_k]$ is monotonically decreasing, so that $[X_n] \wedge [X_k] = [X_n]$ for $n \geqslant k$. This means $[X] \leqslant [X_k]$ for each $k$. Since $\bigwedge_{k \geqslant 1} [X_k] = [\{\varnothing\}]$, we conclude that $[X] = [\{\varnothing\}]$, and so $\lim_k \overline{P}([X_k]) = \overline{P}([X]) = 0$. This shows continuity of $\overline{P}$ at the zero of $\overline{B}$.

To complete the proof, intuitively we "translate" and "invert" the monotonicity and the limit. Formally, define $[X_k] := [C] + \bigvee_{i \leqslant k} [C_i]$ and $[X] := \bigwedge_{k \geqslant 1} [X_k]$. Here $[X_k]$ is the "tail join" of the $[C_k]$'s, the part of $[C]$ disjoint from the initial part $\bigvee_{i \leqslant k} [C_i]$, and $[X]$ is, in some informal sense, the $\liminf$ of the $[C_k]$'s. Now $[X]$ is disjoint from each of the $[C_i]$'s, so $[X] \wedge [C] = [\{\varnothing\}]$. Since $[X_k] \leqslant [C]$, it follows that $[X] \wedge [X_k] = [\{\varnothing\}]$ as well. But on the other hand $[X]$ is defined to be the infinite meet of the $[X_k]$'s, so $[X] \wedge [X_k] = [X]$ for every $k$. Therefore $[X] = [\{\varnothing\}]$, and by continuity at zero, $\lim_k \overline{P}([X_k]) = 0$. Using the definition,
$$0 = \lim_{k \to \infty} \overline{P}([X_k]) = \lim_{k \to \infty} \overline{P}\Big([C] + \bigvee_{i=1}^{k} [C_i]\Big) = \overline{P}([C]) - \lim_{k \to \infty} \overline{P}\Big(\bigvee_{i=1}^{k} [C_i]\Big) = \overline{P}([C]) - \sum_{k \geqslant 1} \overline{P}([C_k]),$$
and the proof is complete!

2 Elementary and Generalized Random Variables

Now that we have shown that every such Boolean algebra can be extended to a complete one equipped with a countably additive probability measure, let us abuse notation and assume from now on that $B$ is complete and $P$ is countably additive on it, with all previous operations defined, along with $0 = \varnothing$ and $1$. Our next goal is to define random variables and integration (expected values) on them.
Without directly working with points, we naturally consider "random variables" characterized by their values on various regions. To this end, we say a collection of elements $\{x_i\}_{i \in I}$, countable or finite, partitions $B$ if they are pairwise disjoint, i.e., $x_i \wedge x_j = \varnothing$ for $i \neq j$, and $\bigvee_{i \in I} x_i = 1$. It follows by countable additivity that any partition $\{x_i\}_{i \in I}$ satisfies $\sum_{i \in I} P(x_i) = 1$. The following definitions are then straightforward:

Definition 2.1. Let $(B, P)$ be a complete space. We define the following types of random variables:
• A simple random variable (resp. elementary random variable) is a real-valued function $X$ defined on a finite (resp. countable) partition $\{x_n\}$ of $B$ that maps each $x_i$ to a constant value. We denote the collection of elementary random variables by $\mathcal{E}$.
• A constant random variable is a simple random variable defined on $\{1, \varnothing\}$.
• Given $x \in B$, the corresponding indicator random variable $I_x$ is the simple random variable on $\{x, x^c\}$ with $I_x(x) = 1$ and $I_x(x^c) = 0$.

For simplicity, unless otherwise stated, when given $X$ and its corresponding partition $\{x_n\}$, we assume $X$ does not take duplicate values on different regions: if $x_i \neq x_j$ then $X(x_i) \neq X(x_j)$, for otherwise we can always remove $x_i, x_j$ from $\{x_n\}$ and insert $x_i \vee x_j$ in their place. In some sense, we are defining elementary random variables based on their indicator decomposition; we will justify this statement later.

For now, we compare two random variables $X$ on $\{x_n\}$ and $Y$ on $\{y_n\}$ by considering the "finer" common refinement consisting of all nonzero meets $x_i \wedge y_j$, which we write as $\{z_m\} := \{x_n\} \oplus \{y_n\}$. It is easy to see that $\{z_m\}$ is still a countable partition of $B$. Both $X$ and $Y$ can then be regarded as defined with respect to $\{z_m\}$ (though this violates our just-stated no-duplicates assumption), and we define $X \leqslant Y$ if $X(z_i) \leqslant Y(z_i)$ for each $z_i \in \{z_m\} = \{x_n\} \oplus \{y_n\}$. We may define $\geqslant$ analogously.
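The common refinement $\oplus$ can be sketched on a hypothetical finite stand-in for $(B, P)$, not from the thesis: regions are subsets of a finite set, and an elementary random variable is a dictionary mapping each partition region to its value. The refinement keeps only the nonzero meets $x_i \wedge y_j$.

```python
# Hypothetical finite stand-in: a random variable is {region: value} over a
# partition; the refinement {x_i ∧ y_j} keeps only non-empty meets.
def refine(X, Y):
    Z = []
    for x, a in X.items():
        for y, b in Y.items():
            z = x & y                    # x_i ∧ y_j
            if z:
                Z.append((z, a, b))      # region, X-value, Y-value
    return Z

r = frozenset
X = {r({0, 1}): 1.0, r({2, 3, 4}): 2.0}
Y = {r({0}): 3.0, r({1, 2}): 4.0, r({3, 4}): 5.0}
Z = refine(X, Y)
# the refinement is again a partition of the whole space ...
print(frozenset().union(*(z for z, _, _ in Z)) == r(range(5)))   # True
# ... and the comparison X ≤ Y can be read off region-wise
print(all(a <= b for _, a, b in Z))                              # True
```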
Elementary arithmetic on $X$ and $Y$ can be defined via $\{z_m\}$ as well, and the idea behind each operation should be self-explanatory:
• Addition: $X + Y$ is the elementary random variable on $\{z_m\}$ with $(X + Y)(z_i) := X(z_i) + Y(z_i)$, or equivalently, $(X + Y)(x_i \wedge y_j) := X(x_i) + Y(y_j)$.
• Scalar multiplication: given $c \in \mathbb{R}$, the scalar multiple $cX$ is defined by $(cX)(x_i) := cX(x_i)$ for each $i$.
• Multiplication: $XY$ is the elementary random variable on $\{z_m\}$ with $(XY)(z_i) := X(z_i)Y(z_i)$, or equivalently, $(XY)(x_i \wedge y_j) := X(x_i)Y(y_j)$.

Many nice properties of elementary random variables follow from these definitions: symmetry, linearity, distributivity, and so on. We define the "region-wise" minimum $X \wedge Y$ by $(X \wedge Y)(z_i) = \min(X(z_i), Y(z_i))$, or equivalently $(X \wedge Y)(x_i \wedge y_j) = \min(X(x_i), Y(y_j))$, and likewise the "region-wise" maximum $X \vee Y$. Next, we define the positive and negative parts of $X$, denoted $X^+$ and $X^-$, by $X^+ := X \vee 0$ and $X^- := -(X \wedge 0)$, where $0$ here denotes the constant random variable taking value $0$. It follows that $X = X^+ - X^-$, and we define the absolute value of $X$ to be $|X| = X^+ \vee X^-$.

So far, we have analyzed simple random variables on $(B, P)$ and shown that they behave much like simple random variables defined in a pointy setting, e.g., a probability space defined on the Borel sets of $\mathbb{R}^n$. With finite addition of simple random variables defined, given $X$ defined on $\{x_i\}_{i=1}^n$, we can naturally represent $X$ as a linear combination of indicator random variables: $X = \sum_{i=1}^{n} X(x_i) I_{x_i}$. Clearly, we would want a similar expression for elementary random variables, with the finite sum replaced by an infinite sum. To do so, we need to justify the limit, and before that, define what it means for a sequence of (elementary) random variables to converge.

Definition 2.2.
We say a sequence of elementary random variables $\{X_n\} \subset \mathcal{E}$ converges to $X$, written $X_n \to X$, if there exists a decreasing sequence $\{Y_n\} \subset \mathcal{E}$ such that $\bigwedge_{k \geqslant 1} Y_k$ exists and equals $0$, and $|X_n - X| \leqslant Y_n$. Analogously, we define $\{X_n\} \subset \mathcal{E}$ to be Cauchy if there exists a decreasing sequence $\{Y_n\} \subset \mathcal{E}$ with $\bigwedge_{k \geqslant 1} Y_k = 0$ such that for each $n$ and all $i, j > n$, $|X_i - X_j| \leqslant Y_n$.

Intuitively, this is a relatively weak sense of convergence. To draw an analogy with pointy spaces, this notion is somewhat similar to pointwise convergence: $\bigwedge_{k \geqslant 1} Y_k$ is the infinite region-wise minimum of the sequence $\{Y_n\}$, so if it equals $0$, then $Y_n \to 0$ "pointwise." It is, however, slightly stronger than pointwise convergence, in the sense that it still imposes a kind of uniform bound. In pointy probability theory, a famous example showing that almost sure convergence does not imply convergence in expectation is $X_n = n \cdot I_{[0, 1/n]}$, where, because of the unboundedness of $n$, a point mass "escapes" at $0$. Here, such things cannot happen because we required $\{Y_n\}$ to be decreasing a priori.

Returning to our main goal, let $X \in \mathcal{E}$ be defined on a countable partition $\{x_n\}$. To show that $X$ can be represented by $\sum_{i=1}^{\infty} X(x_i) I_{x_i}$, we approximate this infinite sum by the finite sums $\sum_{i=1}^{n} X(x_i) I_{x_i}$. Define the partial joins $y_n = \bigvee_{i=1}^{n} x_i$ and notice that $\sum_{i=1}^{n} X(x_i) I_{x_i} = I_{y_n} X$. We first show that $I_{y_n}$ converges to $1$, the constant random variable taking value $1$ (equivalently, the indicator of $1 \in B$):
$$|I_{y_n} - 1| = I_{y_n^c}, \qquad y_n^c = \bigwedge_{i=1}^{n} x_i^c, \qquad\text{and}\qquad \bigwedge_{n \geqslant 1} \bigwedge_{i=1}^{n} x_i^c = \bigwedge_{i \geqslant 1} x_i^c = \Big(\bigvee_{i \geqslant 1} x_i\Big)^c = 1^c = \varnothing,$$
since the infinite meet of the complements contains no $x_i$, while the $x_i$'s make up the entire space. Therefore $I_{y_n} \to 1$, with the decreasing sequence $I_{y_n^c}$ serving as the bound required by Definition 2.2. By the same token, $I_{y_n} X \to X$, so the proof is complete, and
$$X = \sum_{k \geqslant 1} X(x_k) I_{x_k}$$
is indeed an indicator representation of $X$.

Our next question is: how do we define a more general form of random variables using approximations by elementary random variables?
In a pointy setting, given $X \geqslant 0$, it is well known that $X$ can be approximated pointwise by a monotone increasing sequence of simple random variables, obtained by considering $X \mathbf{1}[X \leqslant n]$ and rounding its values down to multiples of $2^{-n}$, so that the resulting random variable takes only values that are multiples of $2^{-n}$, with minimum $0$ and maximum $n$. In our setting, we also consider convergence of elementary random variables, but instead of appealing to dyadic numbers we again consider equivalence classes of convergent sequences of elementary random variables based on their "limit," as we once did when Cauchy-completing our initial Boolean algebra.

To achieve this, we first define sequence-wise operations on sequences of elementary random variables. Abusing notation, given $\{X_n\}, \{Y_n\} \subset \mathcal{E}$, we define addition, multiplication, scalar multiplication, join (region-wise maximum), and meet (region-wise minimum) element-wise, i.e., $\{X_n\} + \{Y_n\} := \{X_n + Y_n\}$, and so on. Once again, we consider the quotient space of Cauchy sequences modulo sequences that converge to zero. Notation-wise, we define $C(\mathcal{E})$ to be the space of Cauchy sequences of elementary random variables, with operations defined component-wise, and $C_0(\mathcal{E})$ the space of sequences of elementary random variables converging to $0$ (the zero constant variable). We then define the space of general random variables (or just random variables) to be $\mathcal{X} := C(\mathcal{E})/C_0(\mathcal{E})$. In other words, we identify each general random variable with the collection of all Cauchy sequences whose "limits" agree, and we denote a general random variable by an equivalence class written $X = [\{X_n\}]$. Defined this way, $\mathcal{X}$ has many "nice" properties that align with our intuition. Furthermore, when defining $\mathcal{X}$ we used a notion of convergence akin to pointwise convergence, but Kappos [7] showed that if $X_n$ converges to $X$, then there exists another sequence $\{Y_n\}$ that converges to $X$ uniformly. That is, there exist constant random variables $\{c_n\}$, $c_n \downarrow 0$, such that $|Y_n - X| \leqslant c_n$.
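For contrast, the pointy counterexample $X_n = n \cdot I_{[0,1/n]}$ recalled earlier can be checked directly; a sketch on $(0, 1]$ with Lebesgue measure, illustrating why Definition 2.2 insists on a decreasing dominating sequence:

```python
def X(n, t):
    """X_n(t) = n on (0, 1/n], and 0 elsewhere on (0, 1]."""
    return float(n) if t <= 1 / n else 0.0

# At every fixed t > 0, X_n(t) is eventually 0 (pointwise convergence) ...
print(X(100, 0.25))        # 0.0
# ... yet E X_n = n * m((0, 1/n]) = 1 for every n: no decreasing elementary
# bound Y_n ⩾ X_n with infimum 0 exists, so this escapes to a point mass.
print(all(abs(n * (1.0 / n) - 1.0) < 1e-12 for n in range(1, 50)))   # True
```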
In the following sections we consider several properties of random variables defined in this manner, and show that they align nicely with our intuition and with their pointy counterparts. We first explore the notion of distribution. In a pointy setting it is well known that a random variable $X$ is characterized by its distribution function $F(x) = P(X < x)$. Here, we show that a similar notion holds on both $\mathcal{E}$ and $\mathcal{X}$. First, for any elementary random variable $X \in \mathcal{E}$ defined on $\{x_i\}$ and any $c \in \mathbb{R}$, we define
$$\overline{D}_X(c) := [X \leqslant c] = \bigvee_{X(x_i) \leqslant c} x_i \qquad\text{and}\qquad \underline{D}_X(c) := [X < c] = \bigvee_{X(x_i) < c} x_i.$$
Both $\overline{D}_X$ and $\underline{D}_X$ are monotone increasing functions of $c$, in the sense that if $c_1 < c_2$ then $\bigvee_{X(x_i) \leqslant c_1} x_i$ is contained in $\bigvee_{X(x_i) \leqslant c_2} x_i$. Since $\{x_i\}$ can be labeled arbitrarily, we may assume WLOG that $X(x_i) < X(x_j)$ for $i < j$, so that
$$\bigvee_{X(x_i) \leqslant c} x_i = \bigvee_{i=1}^{k(c)} x_i,$$
where $k(c)$ is the largest index (or $0$ if none exists) such that $X(x_{k(c)}) \leqslant c$, and likewise for the strict inequality and $\underline{D}_X$. Clearly $\lim_{c \to -\infty} k(c) = 0$ and $\lim_{c \to \infty} k(c) = \infty$. But then
$$\lim_{c \to -\infty} \overline{D}_X(c) = \lim_{c \to -\infty} \underline{D}_X(c) = \varnothing \qquad\text{and}\qquad \lim_{c \to \infty} \overline{D}_X(c) = \lim_{c \to \infty} \underline{D}_X(c) = 1.$$

What happens if we now consider $\{X_n\} \subset \mathcal{E}$ and the corresponding $X = [\{X_n\}] \in \mathcal{X}$? We can extend the definitions of $\overline{D}$ and $\underline{D}$ by defining
$$\overline{D}_X(c) := \limsup_{n \to \infty} \overline{D}_{X_n}(c) \qquad\text{and}\qquad \underline{D}_X(c) := \liminf_{n \to \infty} \underline{D}_{X_n}(c),$$
where the $\limsup$ and $\liminf$ are defined via nested $\vee$ and $\wedge$ operations. This definition is compatible with the version defined on $\mathcal{E}$, since for any $X \in \mathcal{E}$ we can simply consider the constant sequence $\{X, X, \ldots\}$. To show that this extension is well-defined, consider $\{X_n\} \in C(\mathcal{E})$ Cauchy and $\{Y_n\} \in C_0(\mathcal{E})$ converging to $0$. Since we may replace $\{Y_n\}$ with a sequence converging uniformly to $0$, the difference between $X_n$ and $X_n + Y_n$ is uniformly bounded, and so is the difference between $\overline{D}_{X_n}$ and $\overline{D}_{X_n + Y_n}$ (and likewise for $\underline{D}$). Letting $\epsilon \downarrow 0$, the claim follows. Therefore,

Theorem 2.3. Any $X \in \mathcal{X}$ is uniquely characterized by its distribution function $F : \mathbb{R} \to B$ defined by $F(c) = [X < c]$.
This is a monotone function with limits $F(-\infty) = \lim_{c \to -\infty} F(c) = \varnothing$ and $F(\infty) = 1$. Further, this function is right continuous, i.e., $\lim_{x \downarrow c} F(x) = F(c)$.

An important result that we can derive via distribution functions is:

Theorem 2.4. $\mathcal{X}$ is complete with respect to arbitrary joins and meets taken over collections of uniformly bounded random variables. In other words, if $\{X_i\}_{i \in I} \subset \mathcal{X}$ is indexed over an arbitrary set $I$, and there exists $M \in \mathcal{X}$, $M \geqslant 0$, such that $-M \leqslant X_i \leqslant M$ for all $i$, then both $\bigvee_{i \in I} X_i$ and $\bigwedge_{i \in I} X_i$ exist and lie in $\mathcal{X}$.

Proof. $B$ is complete, so the arbitrary meet $\bigwedge_{i \in I} \overline{D}_{X_i}(c)$ exists for each $c$. Define $\overline{D}_X(c)$ to be the right limit of $\bigwedge_{i \in I} \overline{D}_{X_i}(x)$, i.e.,
$$\overline{D}_X(c) := \lim_{x \downarrow c} \bigwedge_{i \in I} \overline{D}_{X_i}(x).$$
Then $\overline{D}_X$ satisfies all the criteria for a distribution function, and it follows by completeness of $B$ that the random variable corresponding to $\overline{D}_X$ coincides with $\bigvee_{i \in I} X_i$, so closure under arbitrary joins is proven. The case of $\bigwedge_{i \in I} X_i$ is analogous.

Note that the boundedness assumption is necessary. Since $X \leqslant Y$ implies $\overline{D}_X \geqslant \overline{D}_Y$, the bound $-M \leqslant X_i \leqslant M$ implies $\overline{D}_M \leqslant \overline{D}_{X_i} \leqslant \overline{D}_{-M}$ for all $i$. Without this assumption the claim fails, for we may construct an example where $\overline{D}_X$ becomes uniformly $0$, breaking $\lim_{c \to \infty} \overline{D}_X(c) = 1$.

3 Expected Values and Convergence Theorems

Let us now turn to defining expected values of random variables. As usual, we start with elementary ones.

Definition 3.1. Let $X \in \mathcal{E}$ and write $X = \sum_{k \geqslant 1} X(x_k) I_{x_k}$. If $\sum_{k \geqslant 1} |X(x_k)| P(x_k) < \infty$, then we define the expected value of $X$, written $EX$, to be $\sum_{k \geqslant 1} X(x_k) P(x_k)$. More generally, given $X \in \mathcal{X}$, we know that there exists a sequence $\{X_n\} \subset \mathcal{E}$ converging uniformly to $X$; we say $X$ possesses an expected value if, in addition, each $X_n$ possesses an expected value. Finally, we define $L^1$ to be the space of all random variables $X$ with $E|X| < \infty$.

Note that not every $X \in \mathcal{E}$ possesses an expected value, since not all of them have absolutely convergent indicator representations.
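A sketch of Definition 3.1 with a concrete countable partition (hypothetical values, truncated for computation): take $P(x_k) = 2^{-k}$ and $X(x_k) = (-1)^k k$, so that $E|X| = \sum_{k \geqslant 1} k \, 2^{-k} = 2 < \infty$ and $EX = \sum_{k \geqslant 1} (-1)^k k \, 2^{-k} = -2/9$.

```python
# E X = Σ X(x_k) P(x_k), defined only when Σ |X(x_k)| P(x_k) < ∞; here the
# countable partition is truncated, with geometrically vanishing error.
def expected_value(values, probs):
    abs_sum = sum(abs(v) * p for v, p in zip(values, probs))
    assert abs_sum < float("inf"), "X has no expected value"
    return sum(v * p for v, p in zip(values, probs))

K = range(1, 60)                      # truncated countable partition
vals = [(-1) ** k * k for k in K]     # X(x_k) = (-1)^k k
probs = [2.0 ** -k for k in K]        # P(x_k) = 2^(-k)
ex = expected_value(vals, probs)
print(ex)                             # ≈ -2/9 ≈ -0.2222
```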
It is clear that the collection of elementary random variables with well-defined expected values is closed under addition, scalar multiplication, maximum ($\vee$), and minimum ($\wedge$). As we generalize the notion of expected value to non-elementary random variables, the intuition is that uniform convergence preserves limits of expected values, so the definition above is well posed. This is indeed true: uniform convergence, along with the triangle inequality, implies $|EX_n - EX| \leqslant E|X_n - X| \to 0$. The previously mentioned algebraic operations are also well defined on $\mathcal{X}$ with respect to expected values, since the definition is a natural extension of the expected value defined on $\mathcal{E}$. Consequently, $L^1$ is also closed under addition, scalar multiplication, maximum, and minimum.

This definition turns out to be highly compatible with its pointy counterparts, in the sense that many important convergence theorems hold as well. We first prove the

Theorem 3.2 (Dominated Convergence Theorem). If $X_n \geqslant 0$, $X_n \in L^1$ is monotone decreasing with $X_n \to 0$, then the expectations converge as well: $EX_n \downarrow 0$.

Proof. We will prove this claim via a multi-step procedure: first we show it holds for simple random variables, then for elementary random variables, and finally for (general) random variables.

Step 1: simple random variables. Let each $X_n$ be simple, defined on $\{x_{n,i}\}_{i=1}^{c(n)}$. Using the indicator representation, we write $X_n = \sum_{i=1}^{c(n)} X_n(x_{n,i}) I_{x_{n,i}}$. Further, WLOG assume that for each fixed $n$ the $x_{n,i}$'s are arranged in decreasing order of $X_n(x_{n,i})$, i.e., $X_n(x_{n,i}) \geqslant X_n(x_{n,i+1})$.

Let $\epsilon > 0$ be given. Our goal is to show that for sufficiently large $n$, $EX_n < \epsilon$. The idea behind the proof is to show that as $n \to \infty$, $X_n$ takes large values only on a sufficiently small region, whose contribution to $EX_n$ can be controlled, whereas $X_n$ is sufficiently small on the remainder of the space, so that its contribution to $EX_n$ is also controllable.

Let $M = X_1(x_{1,1})$. By monotonicity of $\{X_n\}$, we know $M \geqslant EX_1 \geqslant EX_n$ for all $n$. On the other hand, since the values $X_n(x_{n,i})$ are decreasing in $i$, for each $n$ there exists a $d(n)$ such that the first $d(n)$ terms of $X_n(x_{n,i})$ are $\geqslant \epsilon/2$ and the remaining $c(n) - d(n)$ terms are $< \epsilon/2$. As discussed informally above, for each $n$ we consider the partition of the space into $\bigvee_{i=1}^{d(n)} x_{n,i}$ and $\bigvee_{i > d(n)} x_{n,i}$. By the convergence assumption $X_n \to 0$, we must have
$$\bigwedge_{k=1}^{\infty} \bigvee_{i=1}^{d(k)} x_{k,i} = \varnothing,$$
for no part of $X_n$ can remain above $\epsilon/2$ forever. But we previously showed that $P$'s countable additivity is equivalent to continuity at $\varnothing$, so $P(\bigvee_{i=1}^{d(k)} x_{k,i}) \to 0$ as $k \to \infty$, which means that for sufficiently large $k$, $P(\bigvee_{i=1}^{d(k)} x_{k,i}) < \epsilon/(2M)$. Therefore, for sufficiently large $n$,
$$EX_n = \sum_{i \geqslant 1} X_n(x_{n,i}) P(x_{n,i}) = \sum_{i=1}^{d(n)} \underbrace{X_n(x_{n,i})}_{\leqslant M} P(x_{n,i}) + \sum_{i > d(n)} \underbrace{X_n(x_{n,i})}_{< \epsilon/2} P(x_{n,i}) \leqslant M \sum_{i=1}^{d(n)} P(x_{n,i}) + \frac{\epsilon}{2} \sum_{i > d(n)} P(x_{n,i}) \leqslant M \cdot P\Big(\bigvee_{i=1}^{d(n)} x_{n,i}\Big) + \frac{\epsilon}{2} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$
End of Step 1.

Step 2: elementary random variables. Now let $\{X_n\} \subset \mathcal{E}$ be a sequence of elementary random variables, each possessing an expected value, converging downward to $0$, and fix $\epsilon > 0$. The idea is to use Step 1 to approximate the $X_n$'s and show that the error term can be carefully controlled. More formally, since $EX_n = \sum_{i \geqslant 1} X_n(x_{n,i}) P(x_{n,i}) < \infty$, there exists an $e(n)$ such that $\sum_{i > e(n)} X_n(x_{n,i}) P(x_{n,i}) < \epsilon 2^{-n}$. The first $e(n)$ terms form a simple random variable $X'_n := \sum_{i=1}^{e(n)} X_n(x_{n,i}) I_{x_{n,i}}$. Further define $X''_n := \bigwedge_{i=1}^{n} X'_i$, so that $X''_n$ is monotonically decreasing. Since $X_n \to 0$ and $0 \leqslant X''_n \leqslant X'_n \leqslant X_n$, we know $X''_n \to 0$ monotonically as well, and by the previous step $EX''_n \downarrow 0$.

How about the remainder? To fix the potential issue of $X'_n$ not being monotone we introduced $X''_n$, but in doing so, controlling $X_n - X''_n$ becomes slightly more involved. We inductively prove that $E(X_n - X''_n)$ can still be controlled. For the base case, $X''_1 = X'_1$, so $E(X_1 - X''_1) = E(X_1 - X'_1) < \epsilon/2$.
For the inductive step, we use inclusion-exclusion on $X'_{n+1}$ and $X''_n$:
$$E(X'_{n+1}) + E(X''_n) = E(X'_{n+1} \wedge X''_n) + E(X'_{n+1} \vee X''_n) = E(X''_{n+1}) + E(X'_{n+1} \vee X''_n) \leqslant E(X''_{n+1}) + E(X'_{n+1} \vee X'_n) \leqslant E(X''_{n+1}) + EX_n.$$
Rearranging gives $E(X'_{n+1}) - E(X''_{n+1}) \leqslant EX_n - EX''_n$, and therefore
$$E(X_{n+1} - X''_{n+1}) = E(X_{n+1} - X'_{n+1}) + E(X'_{n+1} - X''_{n+1}) < \epsilon 2^{-(n+1)} + E(X_n - X''_n),$$
so by induction $E(X_n - X''_n) < \epsilon(1 - 2^{-n}) < \epsilon$ for every $n$. This shows $X''_n$ is never far from $X_n$ in expectation, so $\limsup EX_n = \limsup (EX''_n + E(X_n - X''_n)) \leqslant \epsilon$. Since $\epsilon$ is arbitrary, the proof is complete. End of Step 2.

Step 3: (general) random variables. Let $\{X_n\} \subset \mathcal{X} \cap L^1$ be as in the statement of the theorem. For each $n$, let $\{Y_{n,k}\}_{k \geqslant 1}$ be a sequence of elementary random variables with well-defined expected values converging uniformly to $X_n$. Further, WLOG assume each $\{Y_{n,k}\}_{k \geqslant 1}$ is decreasing, for we may otherwise consider $\{\bigvee_{i \geqslant k} Y_{n,i}\}_{k \geqslant 1}$. Consider the diagonal sequence $Z_n = \bigwedge_{i=1}^{n} Y_{i,n}$. On one hand, since $Y_{n,\cdot} \downarrow X_n$ we must have $Z_n \geqslant X_n$. On the other hand, by construction we also know that $Y_{n,k} \geqslant Z_k$ if $n \leqslant k$. Finally, it is clear that $Z_n$ is decreasing, since by assumption $Y_{n,k} \geqslant Y_{n,k+1}$. Combining these three identities along with $X_n \to 0$, we see that $\lim Z_n = \lim X_n = 0$. But $Z_n$ is elementary, so by Step 2, $EZ_n \downarrow 0$. Finally, noting that $Z_n \geqslant X_n$, so $EZ_n \geqslant EX_n$, we conclude that $EX_n \downarrow 0$.

A direct consequence of the DCT is that if $X_n \to X$ monotonically decreasing with $X_n, X \in L^1$, then $EX_n \downarrow EX$, since we can apply the previous proof to $X_n - X$, a sequence of $L^1$ random variables converging downward to $0$. This also gives rise to the:

Theorem 3.3 (Monotone Convergence Theorem). If $\{X_n\} \subset L^1$ is monotone increasing and $X_n \to X \in L^1$, then $EX_n \uparrow EX$.

Finally, we prove the

Theorem 3.4 (Fatou's Lemma). Let $\{X_n\} \subset L^1$. If there exists a uniform bound $M \in L^1$ such that $0 \leqslant X_n \leqslant M$, then
$$E\Big(\liminf_{n \to \infty} X_n\Big) \leqslant \liminf_{n \to \infty} EX_n,$$
where $\liminf_{n \to \infty} X_n$ is defined as $\bigvee_{k=1}^{\infty} \bigwedge_{n \geqslant k} X_n$.

Proof. First, $\liminf X_n$ is well defined because $\bigwedge_{n \geqslant k} X_n$ is bounded for each $k$, and $\mathcal{X}$ is closed under arbitrary joins and meets taken over bounded subsets. Further, since $0 \leqslant \liminf X_n \leqslant M$, we see it is also in $L^1$.
On the other hand, note that $\bigwedge_{n \geqslant k} X_n$ increases to $\liminf_n X_n$ as $k \to \infty$ and is uniformly bounded by $M$, so by the MCT, $E(\bigwedge_{n \geqslant k} X_n) \uparrow E(\liminf_n X_n)$. But by definition $\bigwedge_{n \geqslant k} X_n \leqslant X_k$ for each $k$, so $E(\bigwedge_{n \geqslant k} X_n) \leqslant EX_k$ for each $k$, and we conclude that
$$E\Big(\liminf_{n \to \infty} X_n\Big) = \lim_{k \to \infty} E\Big(\bigwedge_{n \geqslant k} X_n\Big) \leqslant \liminf_{n \to \infty} EX_n.$$

Bibliography

[1] Arntzenius, Frank (2008). Gunk, topology, and measure. In Oxford Studies in Metaphysics, vol. 4, 225-247. Oxford University Press.
[2] Betti, Arianna, and Loeb, Iris (2012). On Tarski's foundations of the geometry of solids. The Bulletin of Symbolic Logic, 18(2), 230-260.
[3] Forrest, Peter (2003). Nonclassical mereology and its application to sets. Notre Dame Journal of Formal Logic, 43(2), 79-94.
[4] Gerla, Giangiacomo (1990). Pointless metric spaces. The Journal of Symbolic Logic, 55(1), 207-219.
[5] Gerla, Giangiacomo (1995). Pointless geometries. In Handbook of Incidence Geometry: Buildings and Foundations, chapter 18, 1015-1053. North-Holland.
[6] Givant, Steven R., and McKenzie, Ralph (1986). Alfred Tarski: Collected Papers, vol. 1. Birkhäuser.
[7] Kappos, Demetrios A. (1969). Probability Algebras and Stochastic Spaces. Academic Press.
[8] Lando, Tamar, and Scott, Dana (2019). A calculus of regions respecting both measure and topology. Journal of Philosophical Logic, 48, 825-850.
[9] Levy, Azriel (1979). Basic Set Theory. Reprinted by Dover Publications, 2012.
[10] Lewis, David (1991). Parts of Classes. Cambridge: Blackwell.
[11] Pugh, Charles C. (2002). Real Mathematical Analysis. Springer.
[12] Roeper, Peter (1997). Region-based topology. Journal of Philosophical Logic, 26(3), 251-309.
[13] Rudin, Walter (1987). Real and Complex Analysis, 3rd ed. New York: McGraw-Hill.
[14] Russell, Jeff S. (2008). The structure of gunk: adventures in the ontology of space. In Oxford Studies in Metaphysics, vol. 4, 248-274. Oxford: Oxford University Press.
[15] Sikorski, Roman (1964). Boolean Algebras.
Berlin: Springer Verlag.
[16] Skyrms, Brian (1983). Zeno's paradox of measure. In Physics, Philosophy and Psychoanalysis, 223-254.
[17] Tarski, Alfred (1929). Les fondements de la géométrie des corps. Annales de la Société Polonaise de Mathématiques, 29-34. Reprinted in Givant and McKenzie (1986).
[18] Whitehead, Alfred N. (1929). Process and Reality. New York: Macmillan.