new kernel ... / another try at getting anything upstream

Werner Almesberger werner at
Fri Sep 19 16:07:55 CEST 2008

Andy Green wrote:
> Something I didn't see you mention is that upstream is interested in
> living patches against their HEAD, not what we ship (2.6.24).

Yes, good point. It seems we're close to being able to move forward
with our stable branch as well. That would be a good thing to do in
general, since there are already a number of bugs that have been fixed
upstream but that still exist in our tree.

How would you feel about switching to *-tracking once Om2008.9 is out?

> Carsten had an interesting way to come at a general attack on upstream
> that I think you should reconsider since you didn't get anywhere with
> your automated method the last six months.

Well, I didn't actually have more than about a week to spend on
actually trying to make this thing work, so it's not quite *that*
bad :-)

> His concept was to blow through our git history by essentially diffing
> upstream and our patched tree in a single flat diff, then carving it up
> by hand into chunks to be fed upstream.

That's the traditional way, yes. I have used that approach many times
in the past and I would do it again if we could just freeze other
development on our tree while everybody is working on pushing things
upstream. But I don't think this is a realistic model.
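For reference, the flat-diff approach can be sketched roughly as below, in a scratch repository (the branch and file names are made up for illustration; `splitdiff` and `filterdiff` from patchutils would do the actual carving-up, if installed):

```shell
#!/bin/sh -e
# Scratch-repo sketch: produce one flat diff of everything a local
# branch carries on top of the base, ready to be carved up by hand.
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email x@example.org; git config user.name x
base=$(git symbolic-ref --short HEAD)   # default branch name
echo v1 > a.c; echo v1 > b.c
git add .; git commit -qm base
git checkout -qb om-stable              # hypothetical patched branch
echo patched > a.c; echo patched > b.c
git commit -qam "local changes"
# All local changes as one flat patch:
git diff "$base..om-stable" > flat.diff
# patchutils' splitdiff/filterdiff (if available) can then split
# flat.diff into per-file or per-subsystem chunks for regrouping.
```

The weakness this illustrates is exactly the one above: once the history is flattened into flat.diff, the per-changeset structure from git is gone and has to be rebuilt by hand.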

The broken-down diff method would basically be equivalent to returning
to the quilt topic patches as we had them in SVN, and restructuring
them. That might have been an acceptable approach while we were using
that as our principal structure, but now that the code is maintained
on a changeset basis in git, that isn't compatible with our development
model anymore.

And in any case, since even many of the topic patches in SVN weren't
properly structured, we would have needed some tools for rearranging
them as well. Manually shifting lots of hunks from one patch to
another and then - again, manually - fixing the rejects, without any
feedback on the overall change this accomplishes, just leads to errors.

Since I expect the total time from starting to prepare for upstream to
completing most of the merges to be measured in months (the merge
windows only open so often, and I'm sure we will get feedback that
will take more than just a few days to process), we need to be able to
keep on going while this is happening.

One could of course do another big diff, etc., but that very quickly
leads to an accumulation of human error, particularly given that it's
usually difficult to remember if you've done something once or more
than once.

We can of course still cherry-pick small changes that don't depend
on the rest, and push them upstream without waiting for the rest.
That won't reduce the bulk, but it'll make the overall task less
daunting.
The problem is of course that I don't think there are quite so many
of these at the moment. (The obvious ones, such as S3C and ALSA
fixes, have already found their way upstream long ago.) By doing the
cleanup and breaking unnecessary dependencies, we can increase their
number a little, which certainly won't hurt.

By the way, "the cleanup" isn't something horrible. Holger did some
of that janitorial work some months ago, and that fit very smoothly
with the rest of our development work.

- Werner

More information about the openmoko-kernel mailing list