yeasah at schwide.com
Wed May 24 18:17:47 UTC 2006
Daniel Kristjansson wrote:
>The table id is different, 0x42 vs. 0x46.
Ah, I see. Yes, they are marked actual vs. other as you would expect. So
that's good, anyway!
>Yuk! Well we can handle this, please open a ticket with the relevant
>info and assign it to me.
Ok, I will do that.
>It wasn't flexible enough for any of the places it was still used
>except for a case in siscan that could be handled with one line of code!
>Thousands of lines of code distributed over a half dozen files to
>support a bunch of undocumented hacks most of which were irrelevant
>vs. one line of code, what do you expect me to say? BTW 99% the code
>that used the table is gone, the privatetypes support code would have
>needed to be ported to the new code. Any additional discussion of
>the undocumented hacks table will be ignored by me.
Forgive me, but I think you may have misunderstood my point. As you
request, I won't write anything further on this topic after this, but
I'll try to clarify what I was trying to say one last time.
I'm not saying that the code as it previously existed should be brought
back, nor the table, and I certainly agree that the undocumented nature
of the privatetypes table was problematic and it's great to have gone
and worked it all over -- thanks for that. I'm shocked to hear that the
implementation was so complicated -- all we're really talking about is a
uniform provision for application code to query for per-network-id
information that could enable alternate behavior. IMO there's no reason
that virtually all of the implementation couldn't be in one class/file,
and it should only make an appearance elsewhere in the application where
it was actually used.
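To make that concrete, here is a minimal sketch of what I mean by keeping virtually all of the implementation in one class/file. This is purely illustrative, not the old privatetypes code: the class, the key names, and the network id are all invented for the example.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Illustrative only: a single-class override table keyed by network id.
// Application code asks for a value and supplies its normal default, so
// any code path without an entry behaves exactly as it did before.
class NetworkOverrides
{
  public:
    void Set(uint16_t network_id, const std::string &key,
             const std::string &value)
    {
        m_table[network_id][key] = value;
    }

    // Returns the override if one exists, else the caller's default.
    std::string Get(uint16_t network_id, const std::string &key,
                    const std::string &fallback = "") const
    {
        auto net = m_table.find(network_id);
        if (net == m_table.end())
            return fallback;
        auto kv = net->second.find(key);
        return (kv == net->second.end()) ? fallback : kv->second;
    }

  private:
    std::map<uint16_t, std::map<std::string, std::string>> m_table;
};
```

Callers would only touch this at the handful of points where alternate behavior actually matters, e.g. `overrides.Get(netid, "sdt_present", "yes")`; everything else stays in the one file.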
I guess really these represent two partially overlapping approaches to
dealing with non-standard streams, each with their own pros and cons:
1) Use an override table like privatetypes.
a. No need to come up with code to detect non-standard conditions.
b. Can handle potential situations that can't really be handled
in any other good way.
c. Additional code. (Though properly done, the bulk of the
complexity would be confined to one place, which would be a one-time effort.)
d. It would have to be maintained as providers changed behavior.
e. It's easy for somebody to add something without documenting
its intent/effect, which becomes a code maintenance issue. (Though
really it's just as easy for somebody to add undocumented behavior to
the application itself, so really this is just an oversight issue --
generally speaking I would expect improperly documented contributions to
be rejected either way.)
2) Attempt to detect non-standard conditions or simply write code such
that the non-standard conditions don't matter.
        a. Works for everybody, with no need to maintain a list of
exceptions.
        b. In the case that code of similar complexity can be written
such that it works equally well for all cases, the overall result is simpler.
c. If the condition needs to be detected, performance may be
reduced (e.g. SDT timeout)
d. In some cases, ongoing complexity is actually worse than the
table driven method, since for some exceptional cases you would need to
add code to detect the condition, as well as to correct (versus in the
explicit table-driven method, where you would simply correct)
I posit that there are some cases that fit better into model #2, and
some cases that fit better into model #1. The one line of hardcoded
stuff that is now in there is an example of something that probably fits
better into #1 (though maybe not, perhaps it's just that way because
nobody has gotten around to implementing the right general-purpose fix yet)
I don't see the table-driven method as necessarily a source of horrible
hackery, though I would certainly agree that it can be abused. An
analogy: sometimes it is better to be told ahead of time that a
particular road has a hidden pothole (i.e. looking up the indication of
pothole presence in a table of road exceptions), and other times a
pothole might be easy to spot well ahead of time and avoid without any
penalties (such as slamming on the brakes or swerving).
I'm not saying that there is no better solution to this particular
problem, but just as a concrete example, take the SDT vs. PAT/PMT thing:
if you don't know ahead of time that the SDT won't be coming in time,
you end up waiting 6 seconds to time out -- whereas if you knew ahead of
time you just saved 6 seconds.
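The arithmetic can be sketched in a few lines. This is a hedged illustration, not the actual scanner code: the 6-second figure comes from the paragraph above, but the function and flag names are invented.

```cpp
#include <cassert>

// Illustrative: how long a scanner would block waiting for an SDT.
// kSdtTimeoutMs matches the ~6 second timeout mentioned above; the
// names here are invented for this sketch.
constexpr int kSdtTimeoutMs = 6000;

int SdtWaitMs(bool sdt_known_absent, bool sdt_arrived)
{
    if (sdt_known_absent)
        return 0;  // told ahead of time: skip the wait entirely
    // Otherwise we wait out the full timeout unless the SDT shows up.
    return sdt_arrived ? 0 : kSdtTimeoutMs;
}
```

With an override entry saying "no SDT on this network," the first branch fires and the 6-second penalty disappears; without it, every scan of that network eats the timeout.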
Honestly, I see point #1d as being the best argument against the
table-driven method -- if you can come up with a method that works just
as well without having to be told what to expect, it's obviously the
superior solution. I guess you can bank on always being able to do that,
but I think you're going to end up with some solutions that do NOT work
just as well as a table-driven solution would have, and may even
ultimately end up having to add a facility for overrides anyway because
of a brick-wall situation where something just can't be detected in any
reasonable way.
In my way of thinking, the value of having an efficient, cleanly
implemented, non-invasive implementation of a table-driven override
system that can be used generally in the application is greater than
just the one line of use it would currently see. Its value is in having
another tool in the toolbox, another way to solve problems that may be
better suited in some of the future cases of poor standards compliance.
If you really think that there won't be any need for special cases as
the new code sees wider adoption amongst the various strange and
non-compliant providers out there, well, I can understand the
reluctance to add anything like this back in, regardless of its
*potential* value. I'm just concerned that, if the tool isn't there at
all, and something comes up that is best solved with that tool, users
might end up with poorer functionality because it doesn't seem worth it
to implement it just for "one more" case.
Ok, done with that. I'm sorry if it bothered you.