kernel defconfig, debugging, preemption, and very noticeable speedups/debugging

Mon Jan 11 15:37:00 CET 2010

Werner,

I fully understand your position (shared with Paul and some other
people). But I see a major problem with it: it seems to me
incompatible with the goal of optimization.

Now that I've finally prepared all 12 kernels for testing, to finally
help us move forward on this issue, I'll try to describe my position
in mathematical terms.

Let's consider some constant, abstract set of 'work', and define:

D - the set of debug options
T(s) - the total time, where s is a set of enabled options, each of
them in D

so:
T({}) - the time with no options enabled
T({i}) - the time with one option i enabled (i in D)

Also let's define C(s) = T(s) - T({})

epsilon - the threshold you consider negligible

So you state that if we measure C({i}) = T({i}) - T({}) for each i and
compare it to epsilon, we can judge whether that option can be included
or not: if T({i}) - T({}) < epsilon, we keep the option.
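As a minimal sketch, the per-option criterion above could look like this in Python. The timings, the epsilon value, and the option names (LOCKDEP, SLUB_DEBUG) are all invented for illustration; this is not real measurement data.

```python
# Hypothetical wall-clock timings T(s) for each set s of enabled
# debug options. All numbers and option names are made up.
T = {
    frozenset(): 100.0,                # T({}): baseline, no debug options
    frozenset({"LOCKDEP"}): 101.5,     # T({i}) with one option enabled
    frozenset({"SLUB_DEBUG"}): 101.2,
}

EPSILON = 2.0  # the cost we still call negligible

def cost(s):
    """C(s) = T(s) - T({}): extra time caused by enabling the set s."""
    return T[frozenset(s)] - T[frozenset()]

def keep(option):
    """The proposed rule: keep an option if its lone cost is < epsilon."""
    return cost({option}) < EPSILON

print(keep("LOCKDEP"))     # True: 1.5 < 2.0
print(keep("SLUB_DEBUG"))  # True: 1.2 < 2.0
```

Under this rule, both hypothetical options pass individually, which is exactly the situation problem 0 below is about.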

Now, the goal of optimization is to reduce T as much as possible, and
I'll try to show why this is incompatible with your idea:

Problem 0: C({i}) + C({j}) < C({i, j}),
so even if C({i}) < epsilon and C({j}) < epsilon, C({i, j}) might be >
epsilon. Moreover, in practice it always will be, because we can always
find a task that highlights the weakness of a particular option.

So, for example, if I later propose to change two options unrelated to
debugging and their cumulative effect is > epsilon, I'll have a hard
time proving it: instead of a simple change, I'd have to go back and
measure all the options in combination.
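Problem 0 can be sketched numerically. The timings below are invented for illustration; the point is only that two options which each pass the per-option test can combine superlinearly and fail it together.

```python
# Invented timings: options "A" and "B" each cost < epsilon alone,
# but their combined cost is more than the sum of the parts.
T = {
    frozenset(): 100.0,
    frozenset({"A"}): 101.5,
    frozenset({"B"}): 101.2,
    frozenset({"A", "B"}): 104.5,  # costs combine superlinearly
}
EPSILON = 2.0

def cost(s):
    # C(s) = T(s) - T({})
    return T[frozenset(s)] - T[frozenset()]

print(cost({"A"}) < EPSILON)                         # True: passes alone
print(cost({"B"}) < EPSILON)                         # True: passes alone
print(cost({"A"}) + cost({"B"}) < cost({"A", "B"}))  # True: C({i})+C({j}) < C({i,j})
print(cost({"A", "B"}) > EPSILON)                    # True: the pair fails the test
```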

Problem 1: we can't measure C({i}) for all tasks; we can only do some
synthetic testing. I chose synthetic tests to draw at least minimal
attention to the problem, not to measure some small percentage. The
problem is that C({i})_measuredX - C({i})_measuredY > epsilon.
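A small sketch of problem 1, with invented run values: repeated measurements of the same C({i}) on a synthetic benchmark can disagree by more than epsilon, so no single run can decide whether the option is below the threshold.

```python
# Five hypothetical measurements of the same C({i}), in seconds.
runs = [1.1, 0.4, 2.9, 1.6, 0.2]
EPSILON = 2.0

# C({i})_measuredX - C({i})_measuredY at its worst
spread = max(runs) - min(runs)

print(round(spread, 1))  # 2.7
print(spread > EPSILON)  # True: the measurement noise alone exceeds epsilon
```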

Problem 2: options depend on each other, so you can't measure some
options 'alone'.

Problem 3: people don't know whether they need particular options, and
don't know what each option does, but still insist that they 'stay'.
Also, it makes little sense to measure how 'PREEMPTION DEBUG'
influences microtests when I have no idea how it could help. This is
backwards: by default, debug options should be off, and you should need
a reason to turn them on, not vice versa.

So this whole task of measuring options 'one by one' reminds me of the
Russian folk tale where Ivan the Fool was sent to fetch 'something, I
don't know exactly what'. It brings us nothing, but sends me on a long
journey.

Also, you can see that many matters here are not really 'mathematical';
it is all a matter of experience. I can say only one thing: when I
first looked into the kernel config, I immediately noticed the problem,
and the first time I had a chance to check it, I did, and the results
were astonishing. That's why I consider this set of kernel options a
bug, sorry.

So we'll never agree on keeping some options, as I'll always side with
performance unless the reason to keep one (like backtraces) is strong
enough from my point of view. On the other hand, nobody really needs my
agreement, and I have no real need for theirs either, so my way is just
to do what I planned; if people have questions or ideas, or want to use
the result, that's nice, but not the other way around. That way
everybody has their fun.

Also, I think your assumption that some one particular option is
'responsible' for the slowdown is wrong, sorry, but you will soon see
that for yourself.

Sure, optimization always reveals some bugs, but that's fine from my
point of view, and should just be a reason to have fun fixing some
complicated bug.

It would be better to decide which options are really needed and why,
and how they have helped in the past (OM has a long kernel history); I
think this would settle any questions on the topic.

On Mon, 11/01/2010 at 06:51 -0300, Werner Almesberger wrote:
> > i went out of that discussions absolutely frustrated.
>
> I saw that little flame fest a bit later. I noticed that you seemed
> quite distressed, but I don't quite understand why.
>
> Everybody agrees that you discovered something important. Everybody
> also agrees that the current debugging options need changing. The
> only thing where there's disagreement is about what degree of
> changing your research so far justifies.
>
> The previous assumption was that debug options are good to have and
> their cost is negligible. So we used plenty of them.
>
> You've shown that this is incorrect. However, the conclusion drawn
> from this was that all debug options are evil and they all must
> be avoided.
>
> The data you provided doesn't support such a radical conclusion.
>
> Maybe it's clearer if I express this mathematically. D be a set of
> debug options, D_{OM} be the set of debug options the Openmoko
> kernels used so far, cost(x) be the cost of debug option x where x
> can be a single option used alone or a set of debug options.
> Finally, \epsilon be the (non-zero) cost we would still consider
> negligible.
>
> The previous assumption was   cost(D_{OM}) < \epsilon
> What you've shown is          cost(D_{OM}) \gg cost(\emptyset)
> implying                      cost(D_{OM}) \gg \epsilon
> Furthermore, you stated that  \sum_{d \in D} cost(d) \le cost(D)
> The conclusion drawn is       \forall d \in D: cost(d) > \epsilon
> and therefore		      \nexists D \ne \emptyset: cost(D) < \epsilon
>
> I think everybody agrees except for the last two points. And I hope
> you can agree that these last points don't follow from the rest :)
>
> It would help to clarify the issue if you could post the results of
> \forall d \in D_{OM}: cost(d)
> or, if you prefer to emphasize the superlinear increase of cost when
> combining options, a sequence of
> cost(d_0), cost({d_0, d_1}), cost({d_0, d_1, d_2}), ...
>
> Let's replace voodoo with science, not old voodoo with new voodoo :-)
>
> Thanks,
> - Werner