The intuition behind a convergent sequence is that its terms get ever closer to the limit. Looking at the first trillion terms of a sequence feels as though it ought to give one some kind of information about the number. But without the convergence information, those first trillion terms can be wholly useless: they may simply be randomly chosen rational numbers. This is an "of course," but when such sequences are used to define a real number, as opposed to approximating one, it gives me a great deal of unease.
In particular, it is quite possible to prove that a sequence is Cauchy while having no way to explicitly determine N for a given epsilon; such a sequence is effectively useless for approximation. This is presumably possible, and even common, when the Axiom of Choice is invoked. One can even imagine an algorithm that produces the terms of a sequence which does converge, yet whose convergence is not knowable from the terms themselves. Again, if the sequence is merely approximating something, we can simply call it a useless approximation scheme. But defining a real number as an equivalence class of Cauchy sequences requires taking such a sequence seriously in some sense: it is not an approximation to the answer, it is the answer.
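A toy sketch of that last point (the generator and its hidden jump time are my own invention, not a standard construction; computability theory does supply genuine examples, since a computable, increasing, bounded sequence of rationals need not have a computable modulus of convergence):

```python
import random
from fractions import Fraction

def opaque_cauchy_sequence(seed):
    """Yield rationals that provably converge (to 0 or to 1), while the
    modulus of convergence stays hidden: terms are 0 until a secret jump
    time, after which they settle toward 1."""
    rng = random.Random(seed)
    jump = rng.randrange(100, 10**6)  # hidden from the consumer of the stream
    n = 0
    while True:
        if n < jump:
            yield Fraction(0)             # consistent with limit 0 ...
        else:
            yield 1 + Fraction(1, n + 1)  # ... or with limit 1
        n += 1

seq = opaque_cauchy_sequence(seed=42)
print([next(seq) for _ in range(10)])  # ten zeros: no information about the limit yet
```

Every finite prefix of zeros is consistent both with the limit being 0 and with the limit being 1, so no inspection of the terms alone can yield an N for, say, epsilon = 1/2.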
In contrast, consider the analogous constructions of the integers and the rationals: it is quite immediate how to reduce a representative to canonical form, assuming unlimited finite arithmetic ability. For example, 200/300 ~ 2/3, and one recognizes that 200/300 and 2/3 are different forms of what we take to be the same object for most of our purposes. There is no canonical Cauchy sequence to reduce to, and concluding that two sequences are equivalent could take a potentially infinite number of computations and comparisons. While that is somewhat inherent to the complexity of the real numbers, it feels particularly acute when it must be done in the very definition of the object.
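The rational case is finitary in a way that can be made concrete: reduction to lowest terms is a single gcd computation, and equality of representatives is decided in finitely many steps (Python's fractions.Fraction performs the reduction automatically):

```python
from fractions import Fraction
from math import gcd

a = Fraction(200, 300)  # reduced to lowest terms on construction
b = Fraction(2, 3)
print(a)       # 2/3
print(a == b)  # True: equivalence of representatives is decidable

# The same canonicalization, spelled out:
p, q = 200, 300
g = gcd(p, q)          # g = 100
print(p // g, q // g)  # 2 3
```

No analogous finite procedure decides that two Cauchy sequences represent the same real.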
Dedekind cuts have the opposite problem. There is exactly one cut for each irrational number, but it is not entirely clear what we would compute as an approximation, particularly if the lower-cut viewpoint is adopted.
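To make that concrete (the encoding below is my own sketch): a lower cut is, in effect, a membership predicate on the rationals, and the predicate answers yes/no questions without volunteering any approximation; extracting digits requires a search imposed from outside.

```python
from fractions import Fraction

def in_lower_cut_sqrt2(q: Fraction) -> bool:
    """The lower Dedekind cut of sqrt(2): rationals q with q < 0 or q^2 < 2."""
    return q < 0 or q * q < 2

# The cut is a definite object: it settles every membership question.
print(in_lower_cut_sqrt2(Fraction(7, 5)))  # True:  1.4 < sqrt(2)
print(in_lower_cut_sqrt2(Fraction(3, 2)))  # False: 1.5 > sqrt(2)

# But nothing in the cut itself says which questions to ask;
# an approximation scheme (e.g., bisection over the predicate) is extra structure.
```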
Intervals, on the other hand, inherently contain the approximation information. By dividing an interval and picking out the next subinterval, one also has a method for computing a sequence of ever better approximations. I suppose one could prove the existence of the family of nested intervals without explicitly being able to produce them, but at least the emptiness of such a statement would be quite clear (nothing is produced), in contrast to the sequences, which could produce essentially meaningless numbers for an arbitrarily large number of terms.
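A minimal sketch of that picture for sqrt(2) (the names are mine): each step halves the current interval, and the interval itself is the approximation together with its guaranteed error bound.

```python
from fractions import Fraction

def nested_intervals_sqrt2():
    """Yield a shrinking chain of rational intervals [lo, hi] containing
    sqrt(2); the width halves at each step, so every interval carries
    its own error bound."""
    lo, hi = Fraction(1), Fraction(2)  # invariant: lo^2 < 2 < hi^2
    while True:
        yield lo, hi
        mid = (lo + hi) / 2
        if mid * mid < 2:  # mid^2 == 2 is impossible for a rational mid
            lo = mid
        else:
            hi = mid

gen = nested_intervals_sqrt2()
for _ in range(5):
    lo, hi = next(gen)
    print(float(lo), float(hi), "error <=", float(hi - lo))
```

Unlike a bare Cauchy sequence, every element of the chain certifies its own accuracy: the true value lies between lo and hi, full stop.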