Unable to use OM build system
andy at openmoko.com
Sat Apr 19 14:22:55 CEST 2008
Hi folks -
Thanks to John Lee's step-by-step instructions I was able to start
working with the OM build system. But I couldn't complete my bitbake run
because of packages that failed to compile, and the current consensus is
that I can't use the OM build system on the Fedora 9 prerelease. So far,
this attempt is a failure.
Well, I don't mind some trouble, and I don't in itself hold it against
the OM build system that gcc 4.3 sprang some surprises on it. But it is
a bit curious that these problems did not come up with Fedora itself:
the Fedora packages for the same things the OM build failed on are
already installed here, and AFAIK they were built with that same compiler.
The OM build system can be described as hybrid-cross Gentoo. When I
asked it to bitbake DM2, it found more than 1000 packaging actions it
had to perform first, and it subsequently choked fatally on at least two
of those self-elected actions before we gave up on it for my host.
That's why I say hybrid-cross: bitbake's ambition to control my system
extended beyond cross builds into determining the version of every host
package it wanted to touch, never mind that some of these are evidently
too old to build with the latest compilers.
But I KNOW that I could cross-compile DM2 on this machine if I did it by
hand, since I use John Lee's toolchain; all I needed was some
target-arch libs to link against. So I would characterize these 1000
actions, many of them host-arch builds of stuff I already have, as
remarkable bloat considering the minimum it actually needed to do for
my task.
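By hand, that build would look something like this sketch (the toolchain
install path, environment script name, and target triplet below are my
assumptions about a typical standalone-toolchain setup, not verified OM
specifics):

```shell
# Sketch of a hand cross-build of DM2 with the standalone toolchain.
# Path, env script name, and triplet are assumptions, not exact values.
. /usr/local/openmoko/arm/setup-env        # hypothetical toolchain env script
./configure --host=arm-angstrom-linux-gnueabi --prefix=/usr
make
```

All this needs beyond the toolchain itself is a set of target-arch libs
and headers for configure to find.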
Somehow, getting those libs and includes to compile and link against
became a huge task, taking many hours of compilation that we were unable
to complete on this host. It choked on the host-arch build of D-Bus, for
example. Why do I need a specific version of host D-Bus just to
cross-compile a few libs and link DM2 against them? I guess it is
because bitbake has every case in mind, including stuff out of scope for
this task like QEMU, but I am interested in the real reason if it is
different.
Why do I need to recompile ANYTHING, in fact -- there is an official
toolchain issued by the project, and there are precooked binary and dev
packages available. What is the point of making me recook them into the
same bits that already exist?
Let me explain what could have happened here, instead of what actually
did: my failure to be able to use the OM build system at all.
I could have found that I should use "opkg" or whatever on the build
host to install target-arch packages, for libs and -dev. I would have
gone to an OM repo, fetched tslib and tslib-dev, and installed these
target-arch packages on my build host in a central location -- like the
way the toolchain ships libs and includes in its tarball. (But "opkg"
or whatever would let me manage it in a safe, ongoing way -- if tslib is
upleveled, I just upgrade the package on my build host using the normal
target-arch package.)
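That management step might look like this (the sysroot path and package
names are illustrative assumptions, and the feed URL configuration is
omitted; opkg's real `-o`/`--offline-root` option installs under a
directory other than `/`):

```shell
# Sketch: maintain a target-arch sysroot on the build host with opkg.
# $SYSROOT is an arbitrary location of my choosing.
SYSROOT=$HOME/om-target
mkdir -p "$SYSROOT"
opkg -o "$SYSROOT" update                    # refresh the target-arch feed lists
opkg -o "$SYSROOT" install tslib tslib-dev   # libs + headers land under $SYSROOT
# Later, when tslib is upleveled in the repo:
opkg -o "$SYSROOT" upgrade tslib tslib-dev
```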
Then I would just run a very lightweight build app, the equivalent of
rpmbuild; it would read the bb script and perform the build action for
one package, DM2 ONLY -- hey, it is the Unix way -- instructing DM2's
configure to look in the central place managed by opkg on the host for
libs and includes. Out would come a DM2 package compatible with the
existing tslib and other dependent lib packages.
For extra marks, a source package should come out as well, so I have the
option to provide it alongside the binary.
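To make the proposal concrete, here is a sketch with `bbbuild` as a
made-up name for that rpmbuild-equivalent -- no such tool or flags exist
today, and the output names are imagined:

```shell
# Hypothetical: 'bbbuild' is a placeholder for the proposed lightweight
# tool; it would read one .bb recipe and build just that package against
# the opkg-managed sysroot, touching no other host packages.
bbbuild --sysroot "$HOME/om-target" dm2.bb
# Imagined results: a binary package (dm2_*.ipk) linked against the
# sysroot's tslib, plus a matching source package for redistribution.
```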
Now, I have heard already that this infrastructure is being worked on in
OE -- are we committed to moving to it when it is available? Is anyone
going to argue that the current way is preferable to this proposal, or
do we all agree that we need to improve the deal offered to devs who
might consider casually working with this build system?