07-02-2013, 04:22 AM
(This post was last modified: 07-02-2013, 04:24 AM by bennyscube.)
In the past year or so, I've noticed one of the most common arguments on the server is "How many ticks is my redstone device?" Now I know as well as anyone how annoying and distracting these arguments can be. However, the more I listen to these arguments, the more I notice a pattern.
In all of these arguments, everyone seems to play one of two roles. In the first role, they're the person who sat down and counted out how many ticks the device has. They're the person who carefully tested for pulsing and other odd behavior that may occur. They know the device inside and out, and are confident in their understanding.
In the second role, they're the person who's using the device. They might know the tricks and oddities of the device as well, but they also know it doesn't matter in the way they're using it. They aren't using a device that responds to pulses, and they aren't using the path that takes an extra two ticks. It's a classic case of theory versus practice.
At first, I thought all this arguing was just something that happens when you go in depth into these things. The more I think about it, though, the more I think it might be an issue with the theory.
Don't get me wrong: I don't think anyone will deny how useful the redstone tick is. It's the basis of a huge variety of redstone innovations. Rather, I think there might be more to it than we've previously considered.
Consider the definition of a theory (from Wikipedia): "A theory is a 'proven' hypothesis, that, in other words, has never been disproved through experiment, and has a basis in fact." I think we can agree that the "theory" of redstone ticks matches this for the most part. The part that makes me wonder a bit is whether it can be disproved through experiment.
For example, I made an XOR gate a while ago, demonstrated here: https://www.youtube.com/watch?v=QqnGAwKYVO8
When I made this, I claimed that it was 2 ticks, which was met with some debate. If you changed the inputs from 10 to 01 in perfect sync, the XOR emitted a 1 tick off pulse, which led some people to believe the gate was in fact 3 ticks. I'll agree: by the strict interpretation of the current theory, it is indeed 3 ticks. However, isn't it a bit odd that a 3 tick device produces 2 tick results in nearly every use case?
I won't argue that the pulse doesn't matter, as it certainly does in some cases. I will, however, argue that this XOR has a simple, observable case where the theory is inconsistent with reality. I don't think that makes the theory wrong, but I do think there is a factor that isn't fully accounted for in the current system.
To illustrate this, I think it's best to take a brief look at where the idea of ticks comes from. In the early days of redstone, people noticed that it takes a certain amount of time for redstone components such as torches to activate. Using this, they could measure how long it takes a signal to pass through a circuit by counting how many of these components it passes through. Thus, the concept of the redstone tick was born.
At some point, they noticed that the system isn't perfect. What if there was a device that takes 4 ticks on one input, and 6 ticks on the other? How many ticks is this device? Eventually, they decided it should be considered 6 ticks, based on its worst case. After all, it wouldn't make a lot of sense if a 4 tick device suddenly started taking 6 ticks.
However, doesn't it make just as little sense for a 6 tick device to suddenly start taking 4 ticks? Counting based on worst case isn't a bad choice, but it's hardly the best one. If the device is used in a way that doesn't trigger the worst case, then isn't the worst case a bit misleading?
Although it was a decent system in the past, modern redstone is starting to challenge this worst case system. For instance, there are adders which are only 2 ticks on the rising edge because the slower falling edge case can happen before the adder is needed again. By current definition, these adders are still 3.5 ticks because of the worst case, even though the worst case is completely irrelevant to the intended usage.
To be quite honest, I think measuring ticks by worst case may have outlived its benefits.
The original tick system came from early redstone, when devices were simple and had a single path for the signal to travel through. Back then, nobody was considering rising versus falling edge cases or look-ahead logic. Now we not only expect but require people joining us to understand binary circuitry and logic integration.
We've come a long way since this system came about, and I think it's about time we updated it to reflect that. I don't think it's my place to create this update; something this universal should be decided by the entire community. I will, however, propose one change for it: a best-case measurement alongside the worst case.
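To make the proposal concrete, here's a minimal sketch (the function name, the device, and its delays are all hypothetical, purely for illustration): instead of collapsing a device's timing down to its slowest path, record the delay of each input path or edge case and report the full range.

```python
# Hypothetical sketch: describe a device by the delay (in redstone ticks)
# of each of its paths, then report both best and worst case instead of
# a single worst-case number.

def tick_range(path_delays):
    """Return (best_case, worst_case) ticks for a device,
    given the delays of its individual input paths / edge cases."""
    delays = list(path_delays)
    return (min(delays), max(delays))

# The 4-tick / 6-tick device from the example above:
device = {"input A": 4, "input B": 6}
best, worst = tick_range(device.values())
print(f"{best}-{worst} ticks")  # "4-6 ticks" says more than just "6 ticks"
```

Under this scheme, the adder mentioned earlier would be labeled "2-3.5 ticks" rather than "3.5 ticks", which immediately tells a builder that the fast path exists.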
Having that additional number will not only reflect the behavior of the device more accurately, but it should also drastically cut down on the "How many ticks is my redstone device?" arguments. It's hard to debate the one "true" number of ticks once there's clear recognition that there is no single "true" number of ticks.
Any thoughts or comments are appreciated.