We can talk about simple things that have to be either one way or another, with no in-betweens: pregnancy, mousetraps, light switches that work properly, and numbers as represented inside (digital) computers, though not necessarily as represented in mathematicians' minds. Spectra give us ways of thinking about more fluid concepts, like heat, light, and pleasure. We can even make spectra discrete so as to represent things as steps along a range, which is what digitization does to approximate continuity, and what we can do with things that take on more than two values, like gender (there are four types in certain Ontario public databases).
I wanted to bring a prosaic example to the forefront, one related to a previous post about service delivery. It has to do with safety, and I will keep the specifics vague to avoid further real-world complications. A colleague had a service company come to assess a replacement for his hot-water heater (very prosaic, as I mentioned). Instead of doing that, the service guy checked the chimney flue and red-tagged it as unsafe, although it had been venting the furnace and hot-water heater without incident for 17 years. My colleague now has to replace all three items at great cost. Safety is not to be questioned, and the service guy has the authority to do this kind of thing. The law is strict on acceptable carbon monoxide levels in homes: the level is treated not as a spectrum but as a trigger, and above a certain point it is illegal because it is dangerous and can be lethal. The problem as I see it is the transition. You can set a level to be your true/false trigger: true, you have to do something; false, you are safe. But surely there must be a spectrum around that number, a fuzzy-logic zone, maybe even one based on history.
Engineers have known about this for a long time, likely since before steam engines, and have used a concept called hysteresis to deal with it. The classic example is the thermostat. A thermostat turns on the furnace when the temperature drops below a set point, but since running the furnace may raise the temperature just enough for the thermostat to turn it right back off, there is a fuzzy range between on and off that uses the direction the temperature is moving to decide when to switch the other way. As the temperature rises, the thermostat does not turn the furnace off until some level a little above the trigger point is exceeded, and as the temperature drops, it does not turn the furnace on until the temperature is a bit below the threshold. The history and direction of the movement come into play. Calculus was invented (discovered, some say; hah!) to deal with such history and trends in the domain of functions. It triggered a crisis in mathematics that was addressed in the 19th century.
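The thermostat's behaviour can be sketched in a few lines of code. This is a minimal illustration, not any real device's control logic; the set point and the width of the dead band are made-up numbers.

```python
# Thermostat-style hysteresis: the switching decision depends on where
# we came from (history), not just on the current reading.
class Thermostat:
    def __init__(self, setpoint=20.0, band=1.0):
        self.low = setpoint - band / 2   # turn the furnace on below this
        self.high = setpoint + band / 2  # turn the furnace off above this
        self.heating = False

    def update(self, temp):
        if self.heating and temp >= self.high:
            self.heating = False         # only switch off well above the set point
        elif not self.heating and temp <= self.low:
            self.heating = True          # only switch on well below the set point
        return self.heating
```

A reading that merely hovers near the set point never flips the state; only a crossing of the far edge of the band does, which is exactly the fuzzy zone described above.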
So things get complicated around the triggers. Logic has had to deal with this problem, and it was what motivated Zadeh to publish his famous fuzzy logic paper.
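Zadeh's move was to replace the hard true/false with a degree of membership between 0 and 1. As a sketch, applied to the carbon monoxide example, here is a simple linear membership function; the threshold and ramp values are invented for illustration and have nothing to do with any actual safety code.

```python
def unsafe_membership(level, threshold=25.0, ramp=10.0):
    """Degree of 'unsafe' in [0, 1], instead of a hard trigger."""
    if level <= threshold - ramp:
        return 0.0                       # comfortably safe
    if level >= threshold + ramp:
        return 1.0                       # unambiguously unsafe
    # linear ramp through the fuzzy zone around the threshold
    return (level - (threshold - ramp)) / (2 * ramp)
```

A reading at the threshold itself is 0.5 unsafe, which is honest about the ambiguity the law's hard trigger hides.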
In the physical world, the circuits that implement logic for computers address the potential noise around the zero and one levels with triggers that build in hysteresis around the switching point. They eliminate the uncertainty caused by noise by deciding that, once latched, they stay latched until the signal level drops far enough below the noise to justify a change of decision, and vice versa going the other way.
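This is the Schmitt-trigger idea, and its noise-rejecting effect is easy to demonstrate in software. A sketch, with illustrative thresholds: noise smaller than the gap between the two thresholds cannot make the output chatter, whereas a plain single-threshold comparator flips on every wiggle.

```python
def schmitt(samples, low=0.4, high=0.6, state=False):
    """Latch with hysteresis: go high only above `high`, low only below `low`."""
    out = []
    for v in samples:
        if state and v < low:
            state = False
        elif not state and v > high:
            state = True
        out.append(state)
    return out
```

Feeding it a signal that rises through the band and falls back, with noise around the midpoint, produces a single clean on/off cycle where a bare `v > 0.5` test would toggle repeatedly.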
Now what about quantum phenomena? The cat in the box? The undecidability of state is a bit like this notion of a trigger point. We cannot know in theory which way the trigger will go, so we have to observe it to find out. It is a limit problem, but one without a theoretical resolution, effectively undecidable. Recent papers have suggested that the traditional statistical approaches may not be the best way to look at it. It may be that the size of the system imposes a limit on the trigger-point resolution, in effect making it impossible for undecidability to maintain itself once the system exceeds a certain size. After that, it becomes Newtonian, logical, true or false, and the decision can be made in advance using classical deterministic methods.
All this makes me wonder whether our accepted mathematical model of continuity has something to do with this difficulty. We think of the number line as infinitely divisible and more, holding points that are impossible to measure exactly: reals, not rationals (not to mention the irrationals). But as a result we get strange phenomena, like Cantor's subsets whose members can be put into one-to-one correspondence with the members of their supersets, and the fact that 0.9999... resolves to exactly 1. Computers, discretization, and stair-casing can provide some relief. Any number can be represented symbolically, and numerically if we accept a limit on precision, one that can be made as fine as the problem space at hand requires. This is of course not enough for a mathematician, who will always seek absolute certainty, but since logic is used to implement the math inside the computer, it may be enough for a logician; and I would argue that if you throw in enough time, you can get as close as you like to what the mathematicians want, although you may never reach it. Limits. So, as some have suggested, we may benefit from throwing out the reals. They are limits, and therefore somewhat Platonic. Let's just rock'n'roll.
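Python's standard library already supports this kind of getting-by-without-the-reals, as a small sketch shows: exact rationals on the symbolic side, and decimals rounded to whatever precision the problem requires on the numeric side (the six-digit precision below is an arbitrary choice for illustration).

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# Binary floating point misses even simple sums...
assert 0.1 + 0.2 != 0.3
# ...while exact rationals do not.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)

# The 0.9999... puzzle never arises: as a ratio, 9/9 is simply 1.
assert Fraction(9, 9) == 1

# Pick the precision the problem space requires; here, six significant digits.
getcontext().prec = 6
third = Decimal(1) / Decimal(3)  # precise to the chosen level, no further
```

The rationals give symbolic exactness where it is available; the decimals give controlled imprecision everywhere else, which is the "stair-casing" bargain described above.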
One last bit:
The law gets around the fuzziness of limit cases by specifying in words what to do around the decision point. Laws are written to say which way trigger points are decided. For example, in property law, a wall may belong to the lot on which the greater part of its width sits, or it may belong jointly to both lots if it straddles the property line, or whatever the jurisdiction specifies. It is a rule, a logical rule, like in a game. Whether the rule is ethical or moral is a judgement based on history, acceptance, and tradition; in other words, statistics.