07-02-2013, 07:00 AM
(This post was last modified: 07-02-2013, 07:03 AM by Guy1234567890.)
Well, the way I have dealt with this issue is simple: I build my devices so that every case is the worst case. If I notice that something is running too slowly (and is already optimized), I slow down the rest of the machine to match it. Imagine using a device where, depending on the input, you would get a different timing on the output. How could you possibly use such a device when you don't know what input you are feeding it (like an adder in a CPU)? The only plausible way would be to time it for the worst case.
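To make the idea concrete, here is a minimal sketch (my own numbers, not anything measured from an actual build) of what "every case is the worst case" means: look up the slowest input case and pad every other path with extra delay so the output always arrives at the same time, regardless of input.

```python
# Hypothetical propagation delays (in redstone ticks) for different input cases.
# The case names and tick counts are made up purely for illustration.
path_delays = {"00->01": 2, "00->10": 3, "00->11": 4}

# The worst case sets the timing for the whole device.
worst_case = max(path_delays.values())

# Extra delay (e.g. repeaters) to add on each path so all cases take the same time.
padding = {case: worst_case - delay for case, delay in path_delays.items()}

for case, extra in padding.items():
    print(f"{case}: pad by {extra} tick(s) -> output always after {worst_case} ticks")
```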
However, as we know, devices like the two-tick piston adder take two ticks on every 00->XX input, and as long as inputs don't arrive faster than the reset time plus the two ticks it takes to generate an output, the reset time doesn't matter inside the device. So it would only make sense to treat the adder as taking two ticks. Despite this, the full cycle is in fact 4 or 5 ticks (I cannot remember which), and that figure sets the fastest rate at which data can be funneled through it.
Therefore, the 'absolute' delay should be stated alongside the 'effective' delay. And I would argue that the number to quote is not the minimum time, but the time the device takes when used the way it was intended. For example, if a device takes 1 tick on occasion but 2 ticks in other cases it will actually encounter, it should be called 2 ticks. That said, I believe the timing discrepancy should have been fixed in the device in the first place, on all edges...
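Here is a rough sketch of the effective vs. absolute distinction for the piston adder example. The reset time below is a guess I'm using only to show the arithmetic; the post's point is just that the quoted figure should be the full cycle (the 4-5 ticks), not the 2-tick input-to-output delay.

```python
# 'Effective' delay: ticks from a 00->XX input edge to the output appearing.
effective_delay = 2

# Hypothetical reset time before the device can accept the next input.
reset_time = 3

# 'Absolute' delay: the minimum period between inputs, i.e. the fastest rate
# at which data can actually be funneled through the device.
absolute_delay = effective_delay + reset_time

print(f"effective (input -> output):      {effective_delay} ticks")
print(f"absolute (minimum input period):  {absolute_delay} ticks")
```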