Annals of Pure and Applied Logic 170 (2019) 515–538
Isolated maximal d.r.e. degrees

Yong Liu¹

Department of Mathematics, National University of Singapore, 10 Lower Kent Ridge Road, 119076 Singapore
Article history: Received 14 November 2017; Received in revised form 1 December 2018; Accepted 13 December 2018; Available online 17 December 2018. MSC: 03D25; 03D28.
Abstract. There are very few results about maximal d.r.e. degrees, as the construction is very hard to combine with other requirements. In this paper we show that there exists an isolated maximal d.r.e. degree. In fact, we introduce a closely related notion called an (m, n)-cupping degree and show that there exists an isolated (2, ω)-cupping degree, and that there exists a proper (2, 1)-cupping degree. This helps in understanding various degree structures in the Ershov hierarchy.
Keywords: Maximal d.r.e. degree Isolation Cupping degree
¹ E-mail address: [email protected]. The contents of this paper form a part of the author's Ph.D. thesis at the National University of Singapore. The author is partially supported by NUS research grant WBS R-146-000-231-114. The author also acknowledges the support of JSPS-NUS grants R-146-000-192-133 and R-146-000-192-733 during the course of the work.

https://doi.org/10.1016/j.apal.2018.12.002

1. Introduction

A recursively enumerable (r.e.) set A can be approximated by a uniformly computable sequence {As}s<ω where A0 = ∅, for any x we have A(x) = lims As(x), and there is at most one s such that As(x) ≠ As+1(x). If we allow finitely many s with As(x) ≠ As+1(x), we obtain the limit computable sets. Note that A is limit computable iff A ≤T K. A limit computable set A is ω-r.e. if there is also a computable function h such that for all x, |{s | As(x) ≠ As+1(x)}| ≤ h(x). For n < ω, A is n-r.e. if h(x) ≤ n for all x. A is d.r.e. if it is 2-r.e.

It is well known that the d.r.e. degrees form a different structure from the r.e. degrees. On the one hand, we have the famous Sacks Density Theorem [7], which states that for any r.e. degrees a < b there is an r.e. degree c such that a < c < b. On the other hand, we have the d.r.e. Nondensity Theorem by Cooper, Harrington, Lachlan, Lempp, and Soare [4], which states that there exists a d.r.e. degree d < 0′ such that there is no d.r.e. degree c with d < c < 0′ (a degree of this kind is called a maximal d.r.e. degree). In fact, with just minor modifications to the construction, they showed that there is no ω-r.e. degree c such that d < c < 0′. Hence the d.r.e. degrees are not elementarily equivalent to the r.e. degrees.

As the construction of a maximal d.r.e. degree is extremely complicated, it is very interesting and also challenging to know what other properties a maximal d.r.e. degree can have. Arslanov, Cooper and Li [1][2], and independently Downey and Yu [5], showed that there is no low maximal d.r.e. degree. Apparently people made other attempts, with very few results. For the purpose of our discussion, we make the following definition.

Definition 1.1. For m, n ≤ ω, we say a degree d is an (m, n)-cupping degree if d < 0′ is an m-r.e. degree and for any n-r.e. degree a, either d ∨ a = 0′ or a ≤ d.

A (2, 2)-cupping degree d is thus a maximal d.r.e. degree and is clearly a (2, 1)-cupping degree. A (2, 1)-cupping degree is also called an almost universal cupping degree in [6]. It was shown by J. Liu and G. Wu [6] that there is an isolated almost universal cupping degree. (A d.r.e. degree d is isolated by an r.e. degree b if b < d and for any r.e. degree w, w ≤ d implies w ≤ b.) In the same paper, they asked whether there is an isolated maximal d.r.e. degree, and we give a proof of this in this paper.

Theorem 1.2. There is an isolated (2, ω)-cupping degree.

While it is clear that a (2, ω)-cupping degree is a (2, n)-cupping degree for all n ≤ ω, it is not clear whether a (2, 2)-cupping degree is automatically a (2, ω)-cupping degree. On the one hand, it is plausible in the sense that the proofs of the d.r.e. Nondensity Theorem and the ω-r.e. Nondensity Theorem are essentially the same. On the other hand, it seems very hard to investigate. The following theorem answers a very special case of this question.

Theorem 1.3. There is a (2, 1)-cupping degree d which is not a (2, 2)-cupping degree.
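To make the approximation notions from the start of this section concrete, here is a small illustrative sketch (not from the paper; the finite list of stage functions and the helper names are ours): it counts mind changes of an approximation and checks the n-r.e. bound.

```python
# Illustrative sketch (not from the paper): an approximation is a finite
# list of stage functions A_0, A_1, ..., each mapping x to 0 or 1.

def mind_changes(approx, x):
    """Number of stages s with A_s(x) != A_{s+1}(x)."""
    return sum(1 for s in range(len(approx) - 1)
               if approx[s](x) != approx[s + 1](x))

def is_n_re_on(approx, xs, n):
    """Check the n-r.e. bound |{s : A_s(x) != A_{s+1}(x)}| <= n on the arguments xs."""
    return all(mind_changes(approx, x) <= n for x in xs)

# A d.r.e. (2-r.e.) pattern on x = 0: the approximation starts at 0,
# x enters at stage 1 and leaves at stage 3 -- two mind changes.
stages = [lambda x: 0, lambda x: 1, lambda x: 1, lambda x: 0]
assert mind_changes(stages, 0) == 2
assert is_n_re_on(stages, [0], 2) and not is_n_re_on(stages, [0], 1)
```

For ω-r.e. sets one would replace the constant bound n by a computable bound h(x), as in the definition above.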
So it makes sense to talk about a proper (2, 1)-cupping degree. It is not known whether a proper (2, m)-cupping degree exists for 2 ≤ m < ω. This is interesting because, if it exists, it will yield a new proof, using fewer parameters, of the fact that the m-r.e. degrees do not form a Σ1 elementary substructure of the n-r.e. degrees for 2 ≤ m < n (Yang and Yu [8] for 1 = m < n; Cai, Shore, and Slaman [3] for 1 ≤ m < n). The rest of the paper proves Theorem 1.2 and Theorem 1.3.

2. The requirements of Theorem 1.2

We are building a d.r.e. set D and an r.e. set B, so that deg(D) is a (2, ω)-cupping degree and deg(B) isolates deg(D). The requirements are the following:

SU : K = ΓD⊕U ∨ U = ΔD,
TW : K = ΛD⊕W ∨ W = ΘB,
Re : ΨeD⊕B ≠ X,

where U ranges over all ω-r.e. sets, W ranges over all r.e. sets, and Ψe ranges over all partial recursive functionals. The set X is an r.e. set we will build. It ensures that D ⊕ B is incomplete.
The requirements are sufficient. The Re-requirements imply X ≰T D ⊕ B, so D ⊕ B is incomplete.
Let Λ and Θ belong to the same TW-requirement, and let α be the last TW-node along γ. We say

(1) Λ has been killed at γ if there exists α∗ such that α ⊆ α∗ and α∗⌢Θ ⊆ γ, or there exist β and β∗ such that β ⊆ α ⊆ β∗ and β∗⌢Δβ (or β∗⌢Θβ) ⊆ γ. In the latter case we say β injures α.
(2) TW is Λ-satisfied at γ if Λ has not been killed at γ.
(3) TW is Θ-satisfied at γ if there exists α∗ such that α ⊆ α∗ and α∗⌢Θ ⊆ γ.
(4) TW is satisfied at γ if it is either Λ-satisfied or Θ-satisfied at γ.

Remark 3.1. Θ does not get killed.

Say α is the last R-node along γ. We say R is satisfied at γ if α⌢d ⊆ γ or α⌢f ⊆ γ. Here f is the finite outcome of α; d is also a finite outcome, with an emphasis on the fact that α succeeds with diagonalization.

Assign(γ) is the algorithm that decides which requirement we assign at γ, and which node is to be assigned next. Suppose every β ⊊ γ has been assigned but γ has not. Assign(γ) works as follows. Choose, among the list of requirements, the first one which is not satisfied at γ, and assign it to γ. If it is a

(1) SU-requirement: Assign(γ⌢0).
(2) TW-requirement: Assign(γ⌢0).
(3) R-requirement: list all SU-requirements which are Γ-satisfied at γ and all TW-requirements which are Λ-satisfied at γ as A0, A1, . . . , Ak, where each Ai is either an SU- or a TW-requirement and is assigned at αi with α0 ⊆ α1 ⊆ · · · ⊆ αk ⊆ γ. Assign(γ⌢f). If the list is empty, do nothing more. Otherwise, let j = k. While j ≥ 0,
(a) if Aj is a TW-requirement, Assign(γ⌢Θj) and let j = j − 1;
(b) if Aj is an SU-requirement, Assign(γ⌢Δj) and stop.
If j = −1, Assign(γ⌢d). (The outcomes of γ are listed from left to right as Δ (or d), Θj+1, Θj+2, . . . , Θk, f.)

Definition 3.2. The priority tree T is the result of Assign(λ), where λ is the empty string.

The following lemma is clear from the definition of T.

Lemma 3.3. Given any infinite path p on T, every requirement is assigned on p only finitely many times.

Remark 3.4. (1) Here γ⌢outcome ∈ T is an immediate child of γ.
Technically we can encode each outcome by a natural number, so a node of the tree would just be a finite string of natural numbers.
(2) f-outcome: waiting for a computation or having a successful diagonalization.
(3) d-outcome: having a successful diagonalization. Essentially it can be merged into the f-outcome, but we keep it as the leftmost outcome.
(4) Δ-outcome: Δ is being built.
(5) Θ-outcome: Θ is being built.

4. The scenarios of Theorem 1.2

The basic strategy for SU is to build a Turing functional Γ such that K = ΓD⊕U. If this fails, we make sure a Turing functional Δ with U = ΔD is built somewhere. TW follows the same idea. The basic strategy for Re is the standard Friedberg–Muchnik strategy: Re picks a fresh number x (i.e., x is larger than any number we have seen so far) and waits until it sees ΨeD⊕B(x) ↓ = 0. Then it puts x into X, so that ΨeD⊕B(x) ≠ X(x).

The conflicts between the SU- and R-requirements are fully settled in [4]. The conflicts between TW and R are essentially the same as in [6]. Combining them all requires some effort. We begin with some basics and notations.

Defining ΓX(i) = Y(i) with use u at stage s has the standard meaning: we enumerate the triple (Xs ↾ u, i, Ys(i)) into a set Γ, which is going to be r.e. This Γ has to be consistent in the sense that if (σ, i, j), (τ, i, k) ∈ Γ and σ ⊆ τ, then j = k. As in the proof of the d.r.e. Nondensity Theorem, we need something more. In this paper, to define ΓD⊕U(i) = K(i) with use block B = [a, b] at stage s means to define ΓD⊕U(i) = K(i) with use b at stage s, and to reserve the interval [a, b], targeting the set D, for future use. The γ(i)-block refers to the interval [a, b] on the set D.

A γ(i)-block is initially for correcting. That is, at some stage s, if we see ΓD⊕U(i) ↓ = 0 but K(i) = 1, then we put a point, called a correcting point, into the γ(i)-block, so that we can define ΓD⊕U(i) = 1 = K(i) with the same use block while keeping Γ consistent. Sometimes we decide to extract this point at one stage but will have to put another one into this block at a later stage, and we will do this multiple times. This is the reason we need to reserve a block.
In some cases the current γ(i)-block is too small and we need to define a fresh γ(i)-block. This is done by putting a point, called a killing point, into the current γ(i)-block (and we call this block a killing block from now on), so that we can define ΓD⊕U(i) = k, where k ∈ {0, 1}, with a fresh use block, and from now on we can use the latest γ(i)-block for correcting. Similarly, we may extract this point at one stage but will have to put another one into this same block at a later stage. Defining ΛD⊕W(i) = K(i), ΔD(i) = U(i), and ΘB(i) = W(i) is all similar, except that Θ does not need a use block.

Remark 4.1. (1) If the γ(i)-block is B = [a, b], we say x < γ(i) < y iff x < a < b < y, and |γ(i)| = b − a.
(2) A use block B = [a, b] should be sufficiently large. We will specify this in the construction.

Notation 4.2. (1) diff(U, y, s0, s) iff ∃i ≤ y Us0(i) ≠ Us(i).
(2) same(U, y, s0, s) iff ¬diff(U, y, s0, s).
(3) SAME(U, y, s0, s) iff for all t with s0 ≤ t ≤ s we have same(U, y, t, s). (This notation will be convenient for the slowdown condition.)

We will next present some mini cases, through which we review some ideas from [4] and develop some ideas for the current Theorem 1.2.
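Notation 4.2 can be sketched directly in code (an illustrative sketch, not part of the construction; we represent U by a list of finite stage snapshots, a layout we assume only for illustration):

```python
# Illustrative sketch of Notation 4.2: U[s] is the finite set of numbers
# in U at stage s, so "U_s(i)" becomes the membership test "i in U[s]".

def diff(U, y, s0, s):
    """diff(U, y, s0, s): some i <= y has U_{s0}(i) != U_s(i)."""
    return any((i in U[s0]) != (i in U[s]) for i in range(y + 1))

def same(U, y, s0, s):
    return not diff(U, y, s0, s)

def SAME(U, y, s0, s):
    """SAME: U restricted to y is unchanged at every stage t in [s0, s]."""
    return all(same(U, y, t, s) for t in range(s0, s + 1))

# Below 3, this U changes only at stage 2 (the number 1 enters).
U = [set(), set(), {1}, {1}]
assert same(U, 3, 0, 1) and diff(U, 3, 1, 2)
assert SAME(U, 3, 2, 3) and not SAME(U, 3, 0, 3)
```

Note that SAME is strictly stronger than same: it rules out a change that is later changed back, which is exactly what the slowdown condition needs for a non-r.e. set U.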
4.1. Scenario 0: basics

We will review some basics and set up some more notation. For an R-node α: if it finds ΨD⊕B(x) ↓, we define the use of this computation to be the smallest y such that only D ⊕ B ↾ y is involved in this computation, and we write ψ(x) = y. If we let σ = D ⊕ B ↾ y, then Ψσ(x) ↓ is exactly the same computation as ΨD⊕B(x) ↓. This finite information σ is essential to us. When we say that α sees a computation y (with the understanding of what x is), it refers to either ψ(x) or σ = D ⊕ B ↾ ψ(x), evaluated at the moment the computation is found. Sometimes we abuse notation and use y(i) to mean σ(i); it is usually clear from context how we use the notion y.

At a stage s after y is found, we say (the relevant stage is usually clear from context, so we omit it in most cases)

(1) y is injured if Ds ⊕ Bs ↾ y ≠ σ.
(2) y is recovered if Ds ⊕ Bs ↾ y = σ.
(3) y is badly injured if ∃x ≤ y (σ(x) = 1 ∧ Ds(x) = 0).

When y is injured but not badly injured, we consider

Js = Js(y) = {x ≤ y | σ(x) = 0 ∧ Ds(x) = 1}.

We say

(1) y is Γ-injured if Js ∩ Γ ≠ ∅ (i.e., some x ∈ Js is in a Γ-block).
(2) y is Λ-injured if Js ∩ Λ ≠ ∅.
(3) y is Δ-injured if Js ∩ Δ ≠ ∅.

At a stage t > s, if we extract all points in Js ∩ Γ, we say that the Γ-injuries to y are removed, and we have Jt ∩ Γ = ∅. Note that y is recovered (at stage t) iff Jt = ∅ (all injuries are removed).

Given a computation y of an R-node α which is already injured, under which conditions can we remove the injuries to y?

(1) Removing Γ-injuries, where Γ is built above α. Suppose γ(w) < y and the γ(w)-block contains a killing point x which was enumerated at stage s. Let s∗ be the stage when the γ(w)-block was initially defined. The killing point x undefines all definitions of Γ made between stages s∗ and s. In order to extract x, we need a point of equal power.
If at s1 > s there is some i < γ(w) such that Us1(i) ≠ Ut(i) for all s∗ ≤ t ≤ s (∗), we are allowed to remove all Γ-injuries to y, and Γ can use the latest blocks for correcting. However, (∗) is not permanent, as U is d.r.e. In case (∗) fails, the γ(w)-block should remain killed, by possibly enumerating another point into it. So the size of the γ(w)-block becomes an issue, which we will discuss later.
(2) Removing Γ-injuries, where Γ is built below α. We can do this if we are not going to visit this Γ. If we decide to visit Γ again, the γ(w)-block should remain killed, by possibly enumerating another point into it.
(3) Removing Λ-injuries is similar to removing Γ-injuries, except that we now have an r.e. set W instead of an ω-r.e. set U. In this case the property (∗) will be permanently true if it becomes true at some stage. This makes our job easier.
(4) Removing Δ-injuries. There are two ways to do this.
(a) We do not visit Δ. So temporarily we can manipulate the points in a Δ-block and forget about the correctness of Δ.
(b) The old definition of Δ is correct.
We also have a size issue for Δ-blocks.
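The injury bookkeeping of this scenario can be sketched as follows (illustrative only; for simplicity we treat the oracle string σ as describing D alone, and represent each functional's blocks by a plain set of positions — both are our assumptions, not the paper's data structures):

```python
# Illustrative sketch (not the paper's construction): sigma is the frozen
# oracle string below y recorded when the computation was found, D_s is the
# current stage's version of D, and gamma_blocks is the set of positions
# lying in Gamma-blocks.

def injury_set(sigma, D_s, y):
    """J_s(y): positions x < y where sigma said 0 but x has since entered D."""
    return {x for x in range(y) if sigma[x] == 0 and x in D_s}

def badly_injured(sigma, D_s, y):
    """Some x < y with sigma(x) = 1 has left D: the computation is gone."""
    return any(sigma[x] == 1 and x not in D_s for x in range(y))

def gamma_injured(sigma, D_s, y, gamma_blocks):
    """y is Gamma-injured iff J_s(y) meets a Gamma-block."""
    return bool(injury_set(sigma, D_s, y) & gamma_blocks)

sigma = [0, 1, 0, 0]   # oracle below y = 4 when the computation was found
D_s = {1, 2}           # later: 2 entered D; 1 is still present
assert injury_set(sigma, D_s, 4) == {2}
assert not badly_injured(sigma, D_s, 4)          # nothing has left D
assert gamma_injured(sigma, D_s, 4, {2, 3})      # the injury sits in a block
```

In this picture, y is recovered exactly when it is not badly injured and the injury set is empty, matching the definitions above.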
Fig. 1. Case 1.
If we extract a killing point from a block B, we say B is unkilled. If we extract a correcting point from a block B, we say B is uncorrected. We also say y is Γ-safe if its Γ-injuries have been removed, and y is Λ-safe if its Λ-injuries have been removed.

For an R-node α: α is a destroyer if α has a Δ-outcome; α is a controller if α has no Δ-outcome. The differences between a destroyer α and a controller β are the following. When α has a computation yα that could be injured by a correction of Γ, α is allowed to destroy Γ and build Δ, so that the SU-node is still happy. β also has a computation yβ that could be injured by a correction of Δ. However, β is not allowed to destroy Δ; it has to find another way to win. That is, β can become active (see Definition 4.3 (4)), and while it is active, there is always some R-node α ⊆ β having a successful diagonalization.

4.2. Scenario 1: main conflicts

We want to look at the case in Fig. 1, where R0, R3 are destroyers and R1, R2, R4 are controllers.

4.2.1. TW and R

R0 picks a Friedberg–Muchnik witness x0 and finds a computation y0 at stage s0. R0 now hopes that one day y0 will become Γ-safe and Λ-safe, so that R0 can have a successful diagonalization by having y0 recovered and enumerating x0. The new enemy here is TW, which defines and corrects Λ. R0 would like to lift the λ(w0)-correcting block to a place larger than y0. To achieve this, R0 kills the current λ(w0)-correcting block (and calls it a killing block), and defines ΘB(i) = W(i) for this particular computation y0 (we say Θ(i) now carries y0). Later TW will be able to choose a fresh block to correct things, hence avoiding putting numbers smaller than y0 into D. However, the killing point itself may be less than y0, and then the computation is gone. Luckily, a W(i)-change will allow us to undo the killing while keeping the fresh λ(w0)-correcting block usable. If this happens, then y0 will be Λ-safe forever (we are using the fact that W is r.e., so that every change of W is permanent).
Once y0 becomes Λ-safe, it is in the same situation as in the construction of [4], and this costs only finitely many actions.

4.2.2. SU and R

When an R-node α sees a computation y, it carries the following information.
Definition 4.3. (1) t1(y) is the stage at which y is found.
(2) t0(y, γ(w)) is the stage when the previous γ(w)-block was killed; in other words, the last stage when the Δ-outcome was visited. t0(y, λ(w)) is defined similarly. (Note that the current γ(w)-block is defined after t0(y, γ(w)).)
(3) t2(y, γ(w)) is the stage when R begins to kill the current γ(w)-correcting block. t2(y, λ(w)) is defined similarly.
(4) (For a controller only.) t2(y) is the stage when y becomes Λ-safe for all Λ built above it, and it is the stage at which α becomes active.

Clearly, we have t0(y, γ(w)) < t1(y) ≤ t2(y, γ(w)) and t0(y, λ(w)) < t1(y) ≤ t2(y, λ(w)). Unlike the original construction of maximal d.r.e. degrees, here t1(y) < t2(y, γ(w)) is possible, because R0 will begin to kill Γ(w) only when y becomes Λ-safe, which takes some time. (This is one of the major sources of complication compared to the original maximal d.r.e. construction.)

Back to our case. We will denote s∗ = t2(y1) and call R1 active. Note that in general y1 < γ(w0) < y0 (we can think of y0 as at least as large as γ(w0)). At s > s∗, in order to undo the killing of the γ(w0)-block, what do we need? We killed the γ(w0)-block to undefine all definitions of Γ made between the stage when the γ(w0)-block was initially defined and stage t2(y0, γ(w0)), during which U may change wildly. So we need a replacement of equal power in order to undo the killing. Thus we request R1 to wait for the following slowdown condition before claiming that y1 is found: SAME(U, y1, t0(y0, γ(w0)), s). So we have SAME(U, y1, t0(y0, γ(w0)), t1(y1)). Now if we have diff(U, y1, t1(y1), s), this provides a difference to all Ut for t0(y0, γ(w0)) < t < t2(y0, γ(w0)), which allows us to undo the killing of the γ(w0)-block that happened at t2(y0, γ(w0)). Note that at some s with t1(y0) < s < t2(y0, γ(w0)), Γ may put some correcting point smaller than y0 into D. Luckily, we can also undo that correcting under diff(U, y1, t1(y1), s).
Note also that at t2(y0, γ(w0)), when we need to keep Δ correct, we may put some correcting points c < y0 into D. When we want y0 to be recovered, we can undo this correcting simply because we are not visiting Δ in that case. By the slowdown condition, we also have y1 ⊆ y0 (y1 is an initial segment of y0), so recovering y0 will not badly injure y1.

Now, if same(U, y1, t1(y1), s), can we use y1? Yes. Because from SAME(U, y1, t0(y0, γ(w0)), t1(y1)) we know that no Δ-correction will ever happen below y1 between t0(y0, γ(w0)) and t1(y1). Even if Δ does correct itself using a very small number between t1(y1) and s∗ = t2(y1), the old definition of Δ ↾ y1 + 1 is correct, so we are allowed to undo the small Δ-correcting once and for all.

In summary, at s > s∗,

(1) if Condition C1 : same(U, y1, t1(y1), s) is met, y1 is recoverable;
(2) if Condition C0 : diff(U, y1, t1(y1), s) is met, y0 is recoverable.

4.2.3. Size of blocks

Let us remind the reader of the size issue of each block. We begin with some definitions for a controller.

Definition 4.4. When a controller β becomes active, we define E(β) so that α ∈ E(β) iff α = β or α satisfies the following:
(1) α⌢Δ ⊆ β,
(2) ¬∃γ, γ∗ such that γ is a TW-node and γ ⊆ α⌢Δ ⊆ γ∗⌢Θ ⊆ β. (These Δ are killed, so not real threats.)

For each α ∈ E(β), it carries a computation yα and a set Cα of conditions which are required to recover yα. We list E(β) as α0 ⊂ α1 ⊂ · · · ⊂ αk = β and write yi = yαi and Ci = Cαi. At stage s, an active controller β makes a decision D(β, s). We define D(β, s) = αj if j ≤ k is the largest such that Cj is met. If D(β, s) ≠ D(β, s − 1), we say β changes decision at stage s. If D(β, s) = α, we say α is selected by β for computation yα. While β is active, define DD(β) = |{s | D(β, s) ≠ D(β, s − 1)}|, which will be seen to be finite.

Remark 4.5. The idea is that if D(β, s) = α and α is being visited, then yα can be recovered and hence α has a successful diagonalization.

Coming back to our case, we will use R1 instead of calling it β. We have E(R1) = {R0, R1}. Note that R1 can change decision. How does the γ(w0)-block react? If D(R1, s) = R0, the γ(w0)-block should be unkilled. If D(R1, s) = R1, the γ(w0)-block should remain killed, by enumerating another killing point into it. Thus we need b(U, y1) < |γ(w0)-block|. Note that if the γ(w0)-block [a, b] is defined with b = a + b(U, a), then y1 < γ(w0) implies b(U, y1) ≤ b(U, a) = |γ(w0)-block|, so the γ(w0)-block is sufficiently large.

The δ-block is similar in this easy case; for a more subtle case, see Section 4.4. Note that Δ(i) for i ≤ y1 never needs correction. If i is such that y1 < i < δ(i), then for this block we also need b(U, y1) < |δ(i)-block|, for the same reason as for the γ(w0)-block. We remark that this condition cannot be met using slowdown, because the δ(i)-block stays as large as it was at its initial definition. Therefore this is achieved by choosing a big block when Δ(i) was initially defined; namely, it will choose a fresh block [a, a + b(U, a)].

4.2.4. To undefine Θ(i)

What are the differences between R1 and R2? R1 is dealing with Λ, and R2 will have to deal with Θ, thinking λ(w1) → ∞.
Θ is well behaved because Θ never actively corrects itself by enumerating points into B. There are two cases when i is enumerated into W:

(1) Θ(i) is already undefined, and so there is no problem redefining Θ(i).
(2) Θ(i) is defined and carries y; then we can make this y Λ-safe, using this W-change to undo some killing and correcting of Λ. We just need to make sure y is not badly injured. In other words, those actions which badly injure y must also undefine Θ(i).
Consider a computation y1 found by R1 and carried by Θ(i) (see Section 4.2.1 for a reminder of being carried). Suppose at stage s∗, R2 becomes active with y0 and y2. Suppose also that R2 is the first node that could extract a point.

(1) Suppose t1(y0) ≤ t1(y1) ≤ t1(y2). We must have y2 ⊆ y1 ⊆ y0. Thus recovering y0 or y2 will not badly injure y1. In this case, if i is enumerated into W and R1 is being visited, y1 can become Λ-safe and R1⌢d will be visited.
(2) Suppose t1(y1) < t1(y0). Since we assume that R2 is the first node that could extract a point, we cannot have the case y1(i) = 1, y0(i) = 0. Clearly y1 will not be badly injured when recovering y0 or y2 in the other cases.
(3) Suppose t1(y2) < t1(y1). Then y1 is likely to be badly injured, because some point may be enumerated after t1(y2) but before t1(y1). If nothing is done, y1 is gone forever. In that case, if i goes into W later, R1 cannot make any progress using y1, so we have to make sure Θ(i) is correctable. Our solution is that when R2 changes decision for the k-th time, we enumerate t1(y2) + k into B. Note that θ(i) = t2(y1, λ(w1)) > t1(y1) > t1(y2), so t1(y2) + 1 surely undefines Θ(i). Since we know DD(β) is finite, we can reserve a β-block, [t1(y2), t1(y2) + DD(β)], on B to undefine certain Θ.

In conclusion, if Θ(i) carries a computation y1 with t1(y2) < t1(y1), then Θ(i) needs to be undefined in the event that β changes decision, which happens only finitely many times.

4.3. Scenario 2: TW is not a big problem

The case in Fig. 2 demonstrates that TW will not be a big problem. Assume R0 has a computation y0 and kills Γ, and R1 finds a computation y1. R1 would like to take control, were TW not there. R1 then kills Λ and defines ΘB(i) = W(i).
Fig. 2. Case 2.
Fig. 3. Case 3.
Then y1 will wait for W(i) to change, so that the λ(w)-block can be unkilled and we can therefore remove the Λ-injuries to y1. Indeed, we can also remove the Λ-injuries to y0, by observing that W is r.e., hence a W(i)-change is permanent and can undefine all existing Λ-definitions. Now, since the Λ-injuries to y0 and y1 have been removed, R1 can be an active controller as usual.

4.4. Scenario 3: block size

We look again at the size issue in a more general setting. As Λ-blocks are not involved, we focus on the case in Fig. 3, which is also a case described in [4]; however, some details might have been overlooked there. Usually U2 would just be U1 if we want to assign every SU-requirement on every path of the tree, but the construction does not care whether U2 = U1 or not. As usual, we may assume

y3 < γ2(w2) < y2 < γ0(w1) < y1 < γ1(w0) < y0

and E(R3) = {R0, R1, R2, R3} with s∗ = t1(y0) = t1(y1) = t1(y2) = t1(y3), and

C3 : same(U0, y3, s∗, s) ∧ same(U2, y3, s∗, s)
C2 : same(U0, y2, s∗, s) ∧ diff(U2, y3, s∗, s)
C1 : diff(U0, y2, s∗, s) ∧ same(U1, y1, s∗, s)
C0 : diff(U0, y2, s∗, s) ∧ diff(U1, y1, s∗, s)

If Ci is met, yi is recoverable. As an example, assume that C2 is met. same(U0, y2, s∗, s) says Δ0 ↾ y2 needs no correction, hence it will not injure y2. diff(U2, y3, s∗, s) says the γ2(w2)-block can be unkilled while SU2 can make corrections at a place beyond y2. Note that it is not obvious that one of C0, C1, C2, C3 is met. However, if we observe that same(U0, y2, s∗, s) implies same(U0, y3, s∗, s), it should be clear that at least one of them is met. Indeed, we can recursively find one as follows:
Let S0 = {C0, C1, C2, C3}.

(1) If same(U0, y2, s∗, s), let S1 = {C2, C3}.
(a) If same(U2, y3, s∗, s), let S2 = {C3}.
(b) If diff(U2, y3, s∗, s), let S2 = {C2}.
(2) If diff(U0, y2, s∗, s), let S1 = {C0, C1}.
(a) If same(U1, y1, s∗, s), let S2 = {C1}.
(b) If diff(U1, y1, s∗, s), let S2 = {C0}.

Thus we have S0 ⊋ S1 ⊋ S2 = {Ci}, and this Ci is met. However, R3 may prefer a different decision. This is important in showing that the Γ-blocks and Δ-blocks are sufficiently large, as we shall see.

4.4.1. Γ-block

We start with the conclusions, and then explain why.

(1) |γ0(w1)| ≥ b(U0, y2). To see this, note that if same(U0, y2, s∗, s), then D(R3, s) ∈ {R2, R3}, in which case the γ0(w1)-block should remain killed (by possibly enumerating a point into it). When D(R3, s) ∈ {R0, R1}, so that the γ0(w1)-block should be unkilled (by possibly extracting a point from it), we must have diff(U0, y2, s∗, s). (Note that diff(U0, y2, s∗, s) does not imply D(R3, s) ∈ {R0, R1}, as D(R3, s) = R3 is still possible.) Thus the γ0(w1)-block should be as large as b(U0, y2). This is achieved automatically if we define the γ0(w1)-block as [a, a + b(U0, a)]: since y2 < a, the γ0(w1)-block is sufficiently large.
(2) |γ1(w0)| ≥ b(U1, y1). To see this, note that if same(U0, y2, s∗, s), then D(R3, s) ∈ {R2, R3}. If diff(U0, y2, s∗, s) and same(U1, y1, s∗, s), then D(R3, s) ∈ {R1}. In a nutshell, if same(U1, y1, s∗, s), then D(R3, s) ∈ {R1, R2, R3}, in which case the γ1(w0)-block should remain killed. When D(R3, s) ∈ {R0}, so that the γ1(w0)-block should be unkilled, we must have diff(U1, y1, s∗, s). Thus the γ1(w0)-block should be as large as b(U1, y1). This is achieved automatically if we define the γ1(w0)-block as [a, a + b(U1, a)]: since y1 < a, the γ1(w0)-block is sufficiently large.
(3) |γ2(w2)| ≥ b(U0, y3) + b(U2, y3).
To see this, note that if same(U0, y3, s∗, s) and same(U2, y3, s∗, s), then D(R3, s) ∈ {R3}, in which case the γ2(w2)-block should remain killed. When D(R3, s) ∈ {R0, R1, R2}, so that the γ2(w2)-block should be unkilled, we must have diff(U0, y3, s∗, s) or diff(U2, y3, s∗, s). Thus the γ2(w2)-block should be as large as b(U0, y3) + b(U2, y3). This is achieved automatically if we define the γ2(w2)-block as [a, a + b(U0, a) + b(U2, a)]: since y3 < a, the γ2(w2)-block is sufficiently large. This definition is valid because at SU2 we have seen U0 and have access to the information of U0.

4.4.2. Δ-block

Δ is more subtle. Suppose U = ΔD, where h is the bounding function for U. In order to correct Δ, we must at least have |δ(i)| > h(i) (every time U(i) changes, we put a point into the δ(i)-block). Recall that we have a list SU^0, SU^1, · · ·, and if U = U^k, we say k is the true index for U. We define the δ(i)-block as [a, a + h(i) + Σk≤i b(U^k, a)]. For example, we have in our case that U0 = U^0, U1 = U^1, and U2 = U^1.
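The recursive search of this section for a met condition among C0, . . . , C3 can be sketched as a small decision function (illustrative only; the three boolean arguments stand for the predicates same(U0, y2, s∗, s), same(U2, y3, s∗, s), and same(U1, y1, s∗, s), which we assume are evaluated elsewhere):

```python
# Illustrative sketch of the S0 > S1 > S2 refinement: each boolean is the
# truth value of the corresponding "same" predicate at the current stage.

def select(same_U0_y2, same_U2_y3, same_U1_y1):
    """Return the index i with C_i met, following the recursive refinement."""
    if same_U0_y2:                       # S1 = {C2, C3}
        return 3 if same_U2_y3 else 2    # split on same/diff(U2, y3)
    else:                                # S1 = {C0, C1}
        return 1 if same_U1_y1 else 0    # split on same/diff(U1, y1)

# U0 has changed below y2 but U1 is stable below y1: C1 is met,
# so y1 is the computation the controller would select.
assert select(False, True, True) == 1
assert select(True, True, False) == 3
```

Note this only finds some met condition; as remarked above, the controller's actual decision D(R3, s) takes the largest index whose condition is met, which may differ.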
Let us consider Δ0. Observe first that Δ0 is never killed.

(1) δ0(i), i > y2. Note that if D(R3, s) ∈ {R2, R3}, then δ0(i) may need correction, since C2 and C3 do not say Δ0(i) is correct. If D(R3, s) ∈ {R0, R1}, then δ0(i) needs to be uncorrected. Therefore |δ0(i)| > h(i) + b(U0, y2) is needed. This is achieved automatically, since we have y2 < i and y2 < a, and b(U0, a) is a summand in our definition.
(2) δ0(i), y3 < i ≤ y2. Note that if D(R3, s) ∈ {R3}, then δ0(i) may need correction, as C3 does not imply its correctness. If D(R3, s) ∈ {R0, R1, R2}, then δ0(i) needs to be uncorrected. Thus we may require |δ0(i)| > h(i) + b(U0, y3) + b(U2, y3). This is achieved automatically if, besides y3 < a, we also have 1 ≤ y3 (so that b(U0, a) and b(U2, a) are summands in our definition). Note that in general the true index of U2 may not appear before the definition of Δ0, so Δ0 has to prepare ahead of time. How do we achieve 1 ≤ y3 in general? The number 1 depends only on the tree, since R3 only sees U^0, U^1 above it, so it can treat its computation as max{y3, 1}. In general, for any R-node α, let U^0, . . . , U^k be all the U-sets that appear above α. Then α will treat its computation y as max{y, k}.
(3) δ0(i), i ≤ y3. In this case, if D(R3, s) ∈ {R2, R3}, we know δ0(i) is correct and hence needs no correction. If D(R3, s) ∈ {R0, R1}, then δ0(i) should be uncorrected; but there was no correction, so there will be no extraction. Hence in this case the δ0(i)-block is not affected.

Δ1 and Δ2 are similar, so we leave them to Lemma 5.4.

5. The construction of Theorem 1.2

We will define an algorithm Visit(α) for each α on the tree. Say α is an A-node, where A could be SU, TW, or R; Visit(α, A) and Visit(α) have the same meaning. We begin with some notation. Recall that an R-node β is a controller if β has no Δ-outcome, and a destroyer if β has a Δ-outcome. Init(α) means to initialize all work that has been done at α and at all β >P α.
Reset(α) means Init(α) except for keeping the threshold point w of α. In particular, a controller β becomes inactive under Init(β).

5.1. Global strategy

Recall Definition 4.4. In addition to that definition, β also has a β-block of the form [a, a + DD(β)], where a = t1(yβ). The intuition is that whenever β changes decision, it will enumerate a point into the β-block to undefine certain Θ-functionals.

Actions for the global strategy at stage s. We have a list of active controllers β0 <P · · · <P βk. If there is a least j such that βj changes decision at s, act for βj, Init(β) for all β >P βj, and stop the current stage. If such j does not exist, Visit(λ).

5.2. Visit(α, SU)

The basic strategy for α is to keep Γ correct and make a new definition. A block B is initially a correcting block, which can turn into a killing block, but not vice versa.

Actions for Visit(α, SU) at stage s. Let s∗ < s be the last stage when α was visited. Let dom Γ = {0, 1, . . . , k}.
Let j ≤ k, if it exists, be the least such that all γ(j)-blocks are killing blocks; if such j does not exist, set j = k + 1. For i < j, we first check whether we can redefine Γ(i) using the current γ(i)-correcting block. If not, it must be due to the fact that some γ(i)-killing block is unkilled, so we keep it killed by enumerating another point into it. This ensures that the γ(i)-correcting block is usable. Then we can redefine Γ(i) with the γ(i)-correcting block, by possibly enumerating a point into it in case Ks∗(i) ≠ Ks(i). For i with j ≤ i ≤ k + 1, we define Γ(i) with a fresh block (this action is valid since such Γ(i) is undefined by the killing of the γ(j)-block). The size of the block is determined as follows. Let S(α) collect all U such that U = ΔD is built above α and Δ has not been killed at α. Then the block is defined as [a, a + 1 + ΣU∈S(α) b(U, a)] for a fresh number a, where the summand 1 deals with a K(i)-change, and the summand ΣU∈S(α) b(U, a) copes with a controller. Visit(α⌢0).

5.3. Visit(α, TW)

The basic strategy for α is to keep Λ correct and make a new definition of it. The action of Visit(α, TW) is almost the same as Visit(α, SU); the only difference is that it is simpler, as W is r.e.

5.4. Visit(α, R)

We make the following conventions and apply them tacitly. Let U^0, . . . , U^k be all the U-sets which appear above α. Then α will treat its computation y as having use at least k, and its threshold point w is chosen so that w > k. The δ(i)-block is chosen to be [a, a + h(i) + Σj≤i b(U^j, a)].

α carries the following information: a threshold point w; an interval [x0, x1] in which we look for diagonalizing witnesses; and a number max(α), so that all computations found at α must have use smaller than max(α).

Actions of Visit(α, R). The current stage is s; the last stage α was visited is s∗.

(1) Threshold point and resetting (for destroyer only).
(a) If α does not have a threshold point, pick a fresh w and stop current stage.
(b) Let w be the threshold point, and let K, U, W range over all those sets which are seen at or above R. For Y = K, U, or W: if ¬SAME(Y, w, s∗, s), then Reset(α) and stop current stage. (While α is not initialized, Reset(α) happens only finitely many times.)

(2) Selected? Suppose α is selected by some controller β for computation yα.
(a) If yα has not been recovered, recover yα. (This action is valid since Cα is met.) Throw away all Θ built by α, Init(α f), and stop current stage. (Recall that α f is the node below the f-outcome of α.)
(b) If yα has been recovered, Visit(α f).

(3) Look into Θ. Assume α has the outcomes Θ0 < Θ1 < · · · < Θm. Let i ≤ m be the least such that there exists a (least) j ∈ dom Θi with Θi^B(j) ≠ Wi,s(j).
(a) If Θi^B(j) ↑: we throw away the existing Θi+1, . . . , Θm and stop current stage.
(b) If Θi^B(j) ↓ ≠ Wi,s(j): we can undo the killing and correcting to Λi, and hence the computation y which Θi(j) carries becomes Λi-safe. Enumerate t1(y) + m − i into B (to undefine certain Θ-functionals).
(i) Suppose i ≠ 0 and dom Θi−1 = {0, 1, . . . , n}. We now kill the λi−1(w)-block and define Θi−1^B(n + 1) = Wi−1,s(n + 1) with use s. Throw away the existing Θi, . . . , Θm. Set max(β) = n + 1 (≤ λi−1(w)) for all β below the Θi−1-outcome. Visit(α Θi−1).
(ii) Suppose i = 0. In this case we know y becomes Λi-safe for all i ≤ m.
(A) If α has a Δ-outcome, then y is facing some Γ built by an SU-node β. We kill the γ(w)-block, claiming this block as a killing block, and also those δ(w) whose Δ is injured by β. Say
dom Δ = {0, 1, . . . , n}. We keep Δ correct by possibly enumerating a point into D, and define ΔD(n + 1) = U(n + 1). Throw away the existing Θ0, . . . , Θm. Set max(β) = n + 1 (≤ γ(w)) for all β below the Δ-outcome. Visit(α Δ).
(B) If α has no Δ-outcome, then α is a controller; we now say α becomes active. Put x0, the witness for yα, into X. Init(β) for all β >P α. Generate {Cα′ | α′ ∈ E(α)} (see below). Claim [t1(y), t1(y) + DD(α)] as an α-block (reserved to undefine certain Θ-functionals). Stop current stage.

(4) Searching for a new computation.
(a) If x0, x1 have not been picked, pick a fresh x0 and set x1 = x0.
(b) If the Δ-outcome was visited at stage s∗, set x1 to be x1 + 1.
(c) If ΨD⊕B(x) ↓ = X(x) for all x ∈ [x0, x1], we let y = max{ψ(x) | x0 ≤ x ≤ x1} (if D ↾ y is restored, all these computations are recovered) and let Z(y) = {x | x0 ≤ x ≤ x1 ∧ X(x) = 0}. (Elements of Z(y) are potential diagonalizing witnesses.)

(5) Slowdown conditions. Before we claim that y is found, we check whether the following conditions are met.
(a) y < max(α).
(b) For all β such that β Δβ ⊆ α and Uβ = Δβ^D has not been killed at α, check SAME(Uβ, y, t0(yβ, γβ(wβ)), s).
(c) If α is a controller, check in addition whether x0 ∈ Z(yα′) for all α′ ∈ E(α).
If any of these conditions is not met, Visit(α f). If all conditions are met, claim that y is found.

(6) y starts to fight.
(a) If α has Θ-outcomes, let Θm be the rightmost one and say dom Θm = {0, 1, . . . , n}. We kill λm(w) and define Θm^B(n + 1) = Wm,s(n + 1), so Θm^B(n + 1) now carries this computation y. For each β below the Θm-outcome, redefine max(β) = n + 1 (≤ λm(w)). Visit(α Θm).
(b) If α has no Θ-outcome but has a Δ-outcome, then we kill Γ(w), keep Δ correct, and define ΔD(n + 1) = U(n + 1). For all β below the Δ-outcome, redefine max(β) = n + 1 (≤ γ(w)). Visit(α Δ).
(c) If α has neither a Θ-outcome nor a Δ-outcome, then α (a controller) becomes active. Init(β) for all β >P α. Generate {Cα′ | α′ ∈ E(α)} (see below).
Claim [t1(y), t1(y) + DD(α)] as an α-block (reserved to undefine certain Θ-functionals). Put x0 into X. Stop current stage.

5.5. Generating Cα

Let β be a controller. In this section we describe how each condition Cα is generated when β becomes active. Recall the definition of E(β) and list E(β) as

α0 ⊂ α1 ⊂ · · · ⊂ αk = β.

Each αi kills K = Γi^(D⊕Ui) and builds Ui = Δi^D. For each yi, let us see what condition will suffice to recover yi. If αj Δj ⊆ αi and Δj has not been killed at αi, then put
same(Uj, yi, t1(yi), s)

into Ci. So Δj-correction will not injure yi. To be precise, recall that we have a slowdown condition at αi which ensures SAME(Uj, yi, t0(yj, γj(wj)), t1(yi)); thus same(Uj, yi, t1(yi), s) implies same(Uj, yi, t0(yj, γj(wj)), s). Therefore the definition of Δj up to yj is still correct, so Δj needs no correction below yi.

If αi ⊆ αj Δj ⊆ αj+1 and Γj is built at some SU-node above αi, then put

diff(Uj, yj+1, t1(yj+1), s)
into Ci. So we can undo the killing and correcting to Γj while keeping it happy. To be precise, recall that we have a slowdown condition which ensures SAME(Uj, yj+1, t0(yj, γj(wj)), t1(yj+1)). Now diff(Uj, yj+1, t1(yj+1), s) implies that every definition of Γj(wj) made between stages t0(yj, γj(wj)) and t2(yj, γj(wj)) can be undefined by this change of Uj. That is, we no longer need the threshold point, so the killing can be undone. Note also that all corrections of Γj made between stages t0(yj, γj(wj)) and t2(yj, γj(wj)) can be undone if necessary.

Thus we have a definition for each Ci. Note that any functional built below αi will not be visited at stage s if D(β, s) = αi; thus it is not necessary to keep such functionals correct while recovering yi. By the above argument we have proved the following lemma.

Lemma 5.1. If Cα is met, yα can be recovered.

Remark 5.2. In the extreme case that E(β) = {β}, Cβ contains no condition, and therefore yβ is always good. This case happens, for example, when we have only TW- and R-requirements.

It is not clear from the definition of Cα that, at any stage s while β is active, there exists at least one α ∈ E(β) such that Cα is met. Thus we need

Lemma 5.3. D(β, s) is defined while β is active.

Proof. List E(β) as α0 ⊂ α1 ⊂ · · · ⊂ αk = β. Let S0 = {C0, C1, . . . , Ck}; we will find our solution in S0 ⊃ S1 ⊃ S2 ⊃ · · · . Let Γj be defined before α0, of highest priority among all such Γ, and with α0 ⊆ αj Δj ⊆ αj+1. If diff(Uj, yj+1, t0(yj, γj(wj)), s), we let S1 = {C0, C1, . . . , Cj}; note that each C ∈ S1 contains the condition diff(Uj, yj+1, t0(yj, γj(wj)), s). If same(Uj, yj+1, t0(yj, γj(wj)), s), we let S1 = {Cj+1, . . . , Ck}, because this implies same(Uj, yl, t1(yl), s), which is a condition in Cl for each l > j. An easy induction gives us a sequence S0 ⊃ S1 ⊃ · · · ⊃ Sm = {Ci} such that Ci is met. □
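The narrowing in this proof can be read algorithmically. The following toy sketch (the function, its arguments, and the binary same/diff oracle are illustrative abstractions, not the paper's formal machinery) narrows the candidate set {C0, . . . , Ck} exactly as above: a pivot index j answering same keeps {Cj+1, . . . , Ck}, while one answering diff keeps {C0, . . . , Cj}.

```python
# Toy sketch of the existence argument in Lemma 5.3.  All names are
# illustrative: `same(j)` abstracts the test same(U_j, y_{j+1}, ..., s).

def find_met_condition(k, pivots, same):
    """Narrow the candidate set {0, ..., k} to a single index i such that
    C_i is met.  `pivots` lists the splitting indices j in priority order;
    here every j in 0..k-1 occurs, so the result is a singleton."""
    lo, hi = 0, k                  # current candidate set is {lo, ..., hi}
    for j in pivots:
        if not (lo <= j < hi):     # this pivot no longer splits the set
            continue
        if same(j):
            lo = j + 1             # same => one of C_{j+1}, ..., C_k is met
        else:
            hi = j                 # diff => one of C_0, ..., C_j is met
    assert lo == hi                # the easy induction terminates here
    return lo

# Example: k = 4, priority order 2, 0, 1, 3; same holds exactly for j < 2.
# The set {0,...,4} shrinks to {0,1,2}, then {1,2}, then {2}.
print(find_met_condition(4, [2, 0, 1, 3], lambda j: j < 2))  # -> 2
```

Processing pivots in priority order mirrors choosing, at each step, the highest-priority Γ still relevant to the current candidate set.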
Note that the above argument (which we refer to as the existence argument) shows only the existence of a solution; this solution may not be the one preferred by D(β, s). This is important in the next technical lemma.

Lemma 5.4. All Γ-blocks, Δ-blocks, and Λ-blocks are sufficiently large.

Proof. We focus on Γ-blocks and Δ-blocks first, as Λ-blocks are similar to Γ-blocks. List E(β) as α0 ⊂ α1 ⊂ · · · ⊂ αk = β. Since we may think of yj as at least as large as γj(wj), we assume

yk < γk−1(wk−1) < yk−1 < · · · < y1 < γ0(w0) < y0.

We will write D(β, s) = j if D(β, s) = αj. We proceed by induction.

Base step. Let Γj be defined (by ξ) before α0, of highest priority among all such Γ's, with

ξ ⊆ α0 ⊆ αj Δj ⊆ αj+1.

(1) γj(wj)-block. Note that we have the following two cases:
(a) If D(β, s) ≤ j, then one of y0, . . . , yj is likely to be recovered, so the γj(wj)-block should be unkilled (by possibly extracting a point from it).
(b) If D(β, s) ≥ j + 1, then the γj(wj)-block should remain killed (by possibly enumerating another point into it).
Thus it suffices to count the times we switch between these two cases. If same(Uj, yj+1, t1(yj+1), s), then by the existence argument one of Cj+1, . . . , Ck is met, so D(β, s) ≥ j + 1. Thus the number of switches between these two cases is bounded by b(Uj, yj+1). All we need is |γj(wj)| > b(Uj, yj+1). Recall that we have yj+1 < γj(wj) = [a, a + Σ_{U∈S(ξ)} b(U, a)] and Uj ∈ S(ξ). Hence the γj(wj)-block is sufficiently large.

(2) δj(i)-block for yj+1 < i. Note that we have the following two cases:
(a) If D(β, s) ≤ j, then the δj(i)-block should be uncorrected.
(b) If D(β, s) ≥ j + 1, then the δj(i)-block should remain corrected, since none of Cj+1, . . . , Ck implies that δj(i) is correct.
As with the γj(wj)-block, we need |δj(i)| > hj(i) + b(Uj, yj+1). Recall that we have

yj+1 < i < δj(i) = [a, a + hj(i) + Σ_{k≤i} b(U^k, a)].

Say Uj = U^m. Then U^m appears above αj+1, hence m < yj+1, and therefore b(Uj, a) is a summand in Σ_{k≤i} b(U^k, a). Thus δj(i) for yj+1 < i is sufficiently large.

Induction step. We have two cases.

First case. If j = 0, stop. If 0 < j, let Γl be defined (by ξ) before α0 (it exists since 0 < j) such that Γl has the second highest priority among all such. Let

ξ ⊆ α0 ⊆ αl Δl ⊆ αl+1 ⊆ αj.

(1) γl(wl)-block. As usual,
(a) If D(β, s) ≤ l, then the γl(wl)-block should be unkilled.
(b) If D(β, s) ≥ l + 1, then the γl(wl)-block should remain killed.
Note that if same(Uj, yj+1, t1(yj+1), s), then D(β, s) ≥ j + 1; and if diff(Uj, yj+1, t1(yj+1), s) and same(Ul, yl+1, t1(yl+1), s), then l + 1 ≤ D(β, s) ≤ j. In summary, if same(Ul, yl+1, t1(yl+1), s), then D(β, s) ≥ l + 1. Therefore we need |γl(wl)| > b(Ul, yl+1). Recall that we have yl+1 < γl(wl) = [a, a + Σ_{U∈S(ξ)} b(U, a)] and Ul ∈ S(ξ). Thus the γl(wl)-block is sufficiently large.
(2) δl(wj)-block. Note that δl(wj) is killed by αj, so we may assume yj+1 < δl(wj) < yj.
(a) If D(β, s) ≥ j + 1, the δl(wj)-block should remain killed.
(b) If D(β, s) ≤ j, the δl(wj)-block should be unkilled.
Thus we require |δl(wj)| > b(Uj, yj+1). Recall that we have

yj+1 < δl(wj) = [a, a + hl(wj) + Σ_{k≤wj} b(U^k, a)]

and the true index of Uj is < wj. Thus the δl(wj)-block is sufficiently large.

(3) δl(i)-block for yl+1 < i. If D(β, s) ≥ j + 1, then, as the δl(wj)-block remains killed, the δl(i)-block has no problem. If D(β, s) ≤ j, then, since the δl(wj)-block is unkilled, Δl looks as good as if it had never been killed, and we are in the same situation as when we discussed δj(i) for yj+1 < i. By exactly the same argument, we conclude that δl(i) for yl+1 < i is sufficiently large.

Second case. If j + 1 = k, stop. If j + 1 < k, then let Γl be built by ξ, where ξ is below αj Δj (it exists because j + 1 < k) and of highest priority among all such, with

αj Δj ⊆ ξ ⊆ αj+1 ⊆ αl ⊆ αl Δl ⊆ αl+1.

(1) γl(wl)-block.
(a) If D(β, s) ≥ l + 1, the γl(wl)-block should remain killed.
(b) If D(β, s) ≤ l, the γl(wl)-block should be unkilled.
Note that if same(Uj, yl+1, t1(yl+1), s) and same(Ul, yl+1, t1(yl+1), s), then D(β, s) ≥ l + 1. Hence we require |γl(wl)| > b(Uj, yl+1) + b(Ul, yl+1). But recall that we have yl+1 < γl(wl) = [a, a + Σ_{U∈S(ξ)} b(U, a)] and Ul, Uj ∈ S(ξ); thus the γl(wl)-block is sufficiently large.

(2) δl(i)-block for yl+1 < i.
(a) If D(β, s) ≥ l + 1, then the δl(i)-block should remain corrected.
(b) If D(β, s) ≤ l, then the δl(i)-block should be uncorrected.
As before, we require |δl(i)| > hl(i) + b(Uj, yl+1) + b(Ul, yl+1). Recall that we have

yl+1 < δl(i) = [a, a + hl(i) + Σ_{k≤i} b(U^k, a)]

and the true indices of Uj and Ul are < yl+1 < i. Therefore the δl(i)-block is sufficiently large.
(3) δj(i)-block for yl+1 < i ≤ yj+1.
(a) If D(β, s) ≥ l + 1, then the δj(i)-block should remain corrected.
(b) If D(β, s) ≤ l, then the δj(i)-block should be uncorrected.
As before, we require |δj(i)| > hj(i) + b(Uj, yl+1) + b(Ul, yl+1). But recall that we have

yl+1 < i < δj(i) = [a, a + hj(i) + Σ_{k≤i} b(U^k, a)]

and the true indices of Uj and Ul are < yl+1 < i; therefore the δj(i)-block is sufficiently large.

This can be made into a formal induction. Every Γ-block is discussed, and every Δ-block is discussed except in the following case: δj(i) for i < yk is not discussed in the induction step. For this δj(i)-block, note that
(1) If D(β, s) = k, then δj(i) is already correct, and no correction will be needed.
(2) If D(β, s) < k, then δj(i) is uncorrected. But extraction is needed at most once (possible when t1(yβ) < t2(yβ)).
Thus δj(i) has no size issue. The same argument shows that each δ(i) with i ≤ yk such that Δ has not been killed at αk is sufficiently large. For a λ(w)-block, if for some i we have yi+1 < λ(w) < yi, then the same argument as for γi(wi) shows that λ(w) is sufficiently large.
□
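The counting in Lemma 5.4 can be illustrated by a toy bookkeeping model (the class and the numbers below are illustrative only, not the paper's machinery). A block is a finite interval of reserved points; since the sets are d.r.e., a point extracted from the set can never re-enter it, so each re-killing after an unkilling must spend a fresh reserved point. A block therefore survives exactly as many kill/unkill switches as it has spare points, which is why a bound like |γj(wj)| > b(Uj, yj+1) suffices.

```python
# Toy model of block consumption in Lemma 5.4 (illustrative only).
# kill() enumerates a fresh reserved point into the set; unkill() extracts
# it, after which that point is used up for good (d.r.e.: in, then out).

class Block:
    def __init__(self, lo, hi):
        self.reserve = list(range(lo, hi + 1))  # e.g. the interval [a, a + b]
        self.inside = []                        # points currently in the set

    def kill(self):
        self.inside.append(self.reserve.pop(0))  # each point enters at most once

    def unkill(self):
        self.inside.pop()                        # extracted: cannot re-enter

block = Block(10, 13)       # 4 reserved points, enough for b(U, y) = 3 switches
for _ in range(3):          # the U-changes below y bound the switches by 3
    block.kill()
    block.unkill()
block.kill()                # the final, permanent killing still finds a point
print(block.inside)         # -> [13]
```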
Therefore the construction is almost bug-free. One possibility remains: if at stage s there are two active controllers β0, β1 such that D(β0, s) = D(β1, s) = α, the construction may be ambiguous. We include the following lemma to show that this is not possible.

Lemma 5.5. Fix α. At any stage s, there exists at most one β such that D(β, s) = α.

Proof. Let s0 be a stage at which some β changes its decision to α; that is, D(β, s0) = α and D(β, s0 − 1) ≠ α. Then at s0, all nodes γ >P β are initialized. While β is active and does not change its decision, α Δ is not going to be visited. Hence α ∉ E(β′) for any new controller β′. Therefore there exists at most one β such that D(β, s) = α. □

6. The verification of Theorem 1.2

Definition 6.1 (True path). Let δs denote the longest node we visit at stage s. Define the true path p by p(i) = lim inf_s δs(i). If α o ⊆ p, then o is the true outcome of α.

Remark 6.2. It is routine to see inductively that p(i) ↓ implies p(i + 1) ↓, so p ∈ [T].

Lemma 6.3 (Finite initialization lemma). Let p be the true path. Then each α ⊆ p is initialized or reset at most finitely often.
Proof. Suppose α ⊆ p is the least node which gets initialized infinitely often. It can be initialized at stage s only if (1) some β to the left of α is visited at s, or (2) some controller located at a β with β ⊆ α changes its mind. Case 1 cannot happen infinitely often, because that would contradict p being the true path. Case 2 cannot happen infinitely often, because a controller can change its mind only finitely often.

If α is an R-node, then after some stage s0 it will not get initialized, and therefore it does not change its threshold point w thereafter. At α, there are finitely many sets U appearing above α. U ↾ w and K ↾ w will eventually stabilize, after which α will never get reset. □

Lemma 6.4. Let α be an R-node with a Θ-outcome, and let Θ(i) carry a computation y. If y is likely to be badly injured, then Θ^B(i) is undefined, and it is undefined at most finitely many times.

Proof. There are only two events in which points of D can be extracted and hence may badly injure y. One is when some R-node sees a W-change and thus removes the Λ-injuries to its computation. The other is when some active controller β with D(β, s) = αj causes αj to recover its computation yj.

(1) Suppose some α0 extracts points for y0 without being selected by a controller. If α0 Θ0 ⊆ α, then α will be initialized when α0 extracts points, as α0 will visit an outcome to the left of Θ0. If α Θ ⊆ α0, we have two cases:
(a) If t1(y) ≤ t1(y0), then y will not be badly injured. For α0 is extracting a point x to partially restore y0, so y0(x) = 0; and if y(x) = 1, there would have been no need for α0 to extract x in the first place, since x would not be enumerated again. Thus y(x) = 0.
(b) If t1(y0) < t1(y), then y is likely to be badly injured. When α0 extracts some points, we enumerate t1(y0) into B, which is small enough to undefine Θ(i), because Θ(i) was defined with use ≥ t1(y).
(2) Suppose we have an active controller β with E(β) listed as α0 ⊂ α1 ⊂ · · · ⊂ αk = β. If β d ⊆ α, then α will be initialized whenever β changes its decision. Suppose α0 ⊆ α Θ ⊆ β.
(a) If t1(y) ≤ t1(y0), then y will not be badly injured: for any point x we extract to restore yj, we must have y(x) = 0.
(b) If t1(yk) < t1(y), then y is likely to be badly injured. But when some yj is restored, we enumerate t1(yβ) into B, hence Θ(i) will be undefined.
(c) If t1(yj) < t1(y) ≤ t1(yj+1) for some j, then y is comparable with all the yj by construction. Therefore, no matter which yj is being restored, y is not badly injured.

Suppose α Θ ⊆ α0. The case t1(y) ≤ t1(y0) and the case t1(yk) < t1(y) are the same as before. Now suppose that t1(yj) < t1(y) ≤ t1(yj+1), and note that we have s = t1(yj) < t1(yj+1). This means that at stage s, αj+1 was not being visited; it was delayed because some R-node α′ with αj ⊆ α′ ⊂ αj+1 was still dealing with some Λ-functional, which takes some time. To be precise, if at stage s, αj was not visiting its Δj-outcome, let α′ = αj; otherwise, let α′ be the next R-node below αj Δj (α′ cannot be αj+1). So we have t1(y′) = s < t1(y). Now we are back to Case 1, where α′ is responsible for undefining y.
We next show that Θ(i) can be undefined only finitely many times. Note that, by the above argument, the node responsible for badly injuring y is below α Θ. Let s be the first stage at which Θ(i) gets undefined, by some β (controller or destroyer) below α Θ. Observe that after s, any new computation found at a node below α Θ will never need to undefine Θ(i) again. Thus only the restoring or partial restoring of a computation found before s can possibly require undefining Θ(i). But there are only finitely many computations found before s. Hence Θ(i) can be undefined only finitely many times. □

Lemma 6.5. At stage s, when α is being visited:
(1) If α is an SU-node, then K = ΓD⊕U is correct and has a new definition.
(2) If α is a TW-node, then K = ΛD⊕W is correct and has a new definition.
(3) If α is an R-node:
(a) If the current outcome is the f-outcome or the d-outcome: if α is selected by some β for computation yα, then α has a successful diagonalization; if α is not selected, no particular information is needed.
(b) If the current outcome is a Θ-outcome, then Θ is correct on its domain.
(c) If the current outcome is the Δ-outcome, then Δ is correct and has a new definition.

Proof. (1) In the construction, we keep Γ correct without compromise and make a new definition every time α is visited.
(2) Same as above.
(3)(a) If D(β, s) = α, then Cα must be met, and therefore yα is recovered; hence α has a successful diagonalization. Otherwise, α does not find a new computation y, and α f is visited with the meaning 'waiting'.
(b) Let Θ(i) carry computation y. By Lemma 6.4, if y is likely to be badly injured, then Θ(i) is undefined. Thus if Θ(i) is defined but becomes incorrect, then y is not badly injured. In this case, we can remove the Λ-injuries to y using the change of W(i) and proceed to an outcome to the left of Θ, so Θ is initialized. Thus, in the only case when Θ could be incorrect, Θ is initialized and Θ is not the current outcome.
(c) If the Δ-outcome is visited, Δ just corrects itself and makes a new definition. □

Lemma 6.6. All SU-requirements are satisfied.

Proof. Fix the true path p and any SU. Let α be the last SU-node on p, and let α∗ be such that α ⊂ α∗ Δ ⊂ p, if it exists.

Suppose α∗ does not exist; we show that Γ is correct and total. By Lemma 6.5, we only need to show that it is total. Suppose Γ is not total, and let w be the least such that γ(w) → ∞. There must be an R-node β below α whose killing point is w. If β kills Γ because β is killing some functional of higher priority than Γ, then α would not be the last SU-node. Therefore β kills Γ directly. But each time β kills Γ, β Δ is visited, and we have a contradiction, since then α∗ = β exists. Hence Γ is total and correct.

Suppose α∗ exists; we show that Δ is total. Suppose not. Then some β below α∗ is killing Δ(w) infinitely often at some w. But then β must be killing some functional of higher priority, so α would not be the last SU-node. Hence the SU-requirement holds. □

Lemma 6.7. All R-requirements are satisfied.

Proof. Fix the true path p and any R-requirement. Let α be the last R-node on p, so either α f or α d is the true outcome.

Suppose α f is the true outcome. Let s0 be the stage after which α never gets initialized or reset, and α f is visited whenever α is visited after s0.
(1) Suppose α waits forever for a computation (with all slowdown conditions passed). In this case, x1 never increases. Suppose towards a contradiction that ΨD⊕B(x) ↓ = X(x) for all x ∈ [x0, x1]. Let y = ψ(x1). There is a stage s1 > s0 such that for all s > s1 we have Ds ⊕ Bs ↾ y = D ⊕ B ↾ y. As y never increases later, all slowdown conditions must eventually be met. Contradiction. Thus there exists x ∈ [x0, x1] with ΨD⊕B(x) ≠ X(x), and the R-requirement holds.
(2) Suppose α is selected by some β for yα. Let s1 > s0 be a stage such that for all s > s1, β is active and D(β, s) = α. By Lemma 6.5, yα is a successful diagonalization. Hence the R-requirement holds.

If α d is the true outcome, then α must be a controller, and D(α, s) = α cofinally. Hence α has a successful diagonalization, and the R-requirement holds. □

Lemma 6.8. All TW-requirements are satisfied.

Proof. Let p be the true path and fix any TW-requirement. Let α be the last TW-node, and let α∗ be such that α ⊂ α∗ Θ ⊂ p, if it exists. If α∗ does not exist, then Λ is total and correct by the same argument as in Lemma 6.6. If α∗ exists, then by Lemma 6.5 we only need to show that Θ is total. By Lemma 6.4, each Θ(i) can be undefined at most finitely many times. We then see inductively that, for each i, Θ(i) is eventually defined and correct. □

This completes the proof of Theorem 1.2.

7. A proper (2, 1)-cupping degree

We prove Theorem 1.3 using the preceding construction with modifications. We are building d.r.e. sets D, E such that D
Fig. 4. Mini case: an SU-node; below it, a Q-node with its Δ-outcome; below that, a P-node.
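The mini case of Fig. 4 rests on a simple persistence fact: because U is r.e., its approximation only gains elements, so a diff condition, once observed, holds at every later stage. A minimal sketch of this fact (the helper names and the sample approximation are hypothetical):

```python
# Minimal sketch: for an r.e. set U, diff(U, y, s0, s) persists for t > s.
# `U_at` is an illustrative r.e.-style approximation: elements appear and
# never leave.

def diff(U_at, y, s0, s):
    """True if the approximation of U below y differs between stages s0 and s."""
    return U_at(s0)[:y] != U_at(s)[:y]

def U_at(stage):
    u = [0] * 5
    if stage >= 2:
        u[1] = 1          # 1 enters U at stage 2 and stays
    if stage >= 4:
        u[3] = 1          # 3 enters U at stage 4 and stays
    return u

print(diff(U_at, 5, 1, 2))                              # -> True
print(all(diff(U_at, 5, 1, t) for t in range(2, 20)))   # -> True
```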
When Q is selected at stage s, we must have diff(U, yP, s∗, s). Since U is r.e., for any t > s we have diff(U, yP, s∗, t). That is, P will never be selected again. Formally, we need the following lemma and some minor modifications to the construction.

Lemma 7.1. While a controller β is active, D(β, s) is nonincreasing.

Proof. Suppose at stage s + 1 we have D(β, s + 1) = j ≠ D(β, s). We want to show D(β, s) > j. The reason that D(β, s) ≠ j is one of the following:
(1) one of Cj+1, . . . , Ck is met at stage s, so D(β, s) > j;
(2) none of Cj, Cj+1, . . . , Ck is met at stage s.
If (1) happens, we are done. So suppose (2) happens, towards a contradiction. First, consider a condition in Cj of the form same(W, yj, s∗, −), where − is a placeholder for the stage under discussion. Because Cj is met at stage s + 1, we know same(W, yj, s∗, s + 1), which also implies same(W, yj, s∗, s), since W is r.e. So all such conditions hold at stage s as well. Next, consider a condition in Cj of the form diff(W, yl+1, s∗, −), where K = ΓD⊕W is built above αj and killed by αl, with αj ⊆ αl Δ ⊆ αl+1. We know one such condition is not met at stage s. Choose the one such that K = ΓD⊕W has highest priority among all those with

diff(W, yl+1, s∗, s + 1) ∧ same(W, yl+1, s∗, s).

By the existence argument as in Lemma 5.3, one of Cl+1, Cl+2, · · · is met at stage s. Contradiction. Hence (2) cannot happen. Hence D(β, s) > j, and D(β, s) is nonincreasing. □

We make the following modifications to the construction:
(1) At a controller β, we wait until each P- or Q-node αj ∈ E(β) has a different witness. This can be done because Z(yj) is increasing: if αl is lower than αj, αl can wait until Z(yj) \ Z(yl) ≠ ∅.
(2) Each P-node αj ∈ E(β) puts its witness into E only when D(β, s) = αj and yj is being recovered.

Observe the following.
While a controller β is active, if D(β, s) = αi is a Q-node, then we may extract the witness of a P-node αj which was previously selected, so αj cannot be used anymore. Since D(β, s) is
nonincreasing, we have j > i, and αj will never be selected again. The rest of the verification is the same as before. Hence Theorem 1.3 is proved.

References

[1] M. Arslanov, S.B. Cooper, A. Li, There is no low maximal d.c.e. degree, Math. Log. Q. 46 (3) (2000) 409–416.
[2] M. Arslanov, S.B. Cooper, A. Li, There is no low maximal d.c.e. degree – Corrigendum, Math. Log. Q. 50 (6) (2004) 628–636.
[3] M. Cai, R.A. Shore, T.A. Slaman, The n-r.e. degrees: undecidability and Σ1 substructures, J. Math. Log. 12 (2012) 1250005.
[4] S.B. Cooper, L. Harrington, A.H. Lachlan, S. Lempp, R.I. Soare, The d.r.e. degrees are not dense, Ann. Pure Appl. Logic 55 (1991) 125–151.
[5] R. Downey, L. Yu, There are no maximal low d.c.e. degrees, Notre Dame J. Form. Log. 45 (3) (2004) 147–159.
[6] J. Liu, G. Wu, An almost universal cupping degree, J. Symbolic Logic 76 (4) (2011) 1137–1152.
[7] G.E. Sacks, The recursively enumerable degrees are dense, Ann. of Math. (2) 80 (1964) 300–312.
[8] Y. Yang, L. Yu, On Σ1-structural differences among finite levels of the Ershov hierarchy, J. Symbolic Logic 71 (4) (2006) 1223–1236.