Tuesday, September 23, 2008

My App Fails the LSB

My position on the Linux Standard Base has evolved. When I first heard about it, I was all for it, and the LSB as a standard could still be useful to some. But I now disagree with the goals of the LSB working group. To be clear, this post is not about dissing the Linux Foundation; it has many worthwhile projects.

What follows is my experience with the LSB Application Checker, my take on the purpose of the LSB, and my own suggested solution for installing applications on GNU/Linux distributions. The Realeyes application failed to certify using the checker v2.0.3, which certifies against the LSB v3.2. Everything that it called out could be changed to pass the tests, but I will only consider correcting a few of the 'errors'.

After building the Realeyes v0.9.3 release, I collected all executable files in a common directory tree, downloaded the LSB application checker, and untarred it. The instructions say to run the Perl script, app-checker-start.pl, and a browser window should open. The browser window did not open, but a message was issued saying that I should connect to http://myhost:8889. This did work, and I was presented with the Application Check screen.
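
For anyone who wants to repeat this, the whole setup boils down to a few commands. The tarball and directory names below are from memory, so adjust them to whatever the download is actually called:

  # Unpack the checker and start its web interface (names are illustrative):
  tar xzf lsb-app-checker-2.0.3.tar.gz
  cd lsb-app-checker
  perl app-checker-start.pl
  # If no browser window opens, point one at http://myhost:8889 by hand.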

There was a text box to enter my application name for messages and one to enter the files to be tested. Fortunately, there was a button to select the files, and when I clicked on it a window opened that let me browse my file system to find the directories where the files were located. For each file to be tested, I clicked on the checkbox next to it, and was able to select all of the files, even though they were not all in the same directory. Then I clicked on the Finish button and all 87 of the selected files were displayed in the file list window.

When I clicked on the Run Test button, a list of about a dozen tasks was displayed. Each was highlighted as the test progressed. This took less than a minute. Then the results were displayed.

There were four tabs on the results page:
  • Distribution Compatibility: There were 27 GNU/Linux distributions checked, including 2 versions of Debian, 4 of Ubuntu, 3 of openSUSE, 3 of Fedora, etc. Realeyes passed with warnings on 14 and failed on the rest.

  • Required Libraries: These are the external libraries required by the programs written in C. There were nine for Realeyes, and three (libcrypto, libssl, and libpcap) are not allowed by the LSB. This means that distros are not required to include those libraries in a basic install, so they are not guaranteed to be available. (A quick way to see this list for your own binaries is sketched just after these bullets.)

  • Required Interfaces: These are the calls to functions in the libraries. There were almost a thousand in all, and the ones belonging to the disallowed libraries were called out.

  • LSB Certification: This is the meat of the report and is described in some detail below.
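
A quick aside on the Required Libraries tab: to see the same list for one of your own binaries, something like this works (re_stream is just a placeholder name):

  # Library names a binary declares it needs:
  readelf -d re_stream | grep NEEDED
  # How those names resolve on the running system:
  ldd re_stream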

The test summary gives an overview of the issues:
  • Incorrect program loader: Failures = 11

  • Non-LSB library used: Failures = 4

  • Non-LSB interface used: Failures = 60

  • Bashism used in shell script: Failures = 21

  • Non-LSB command used: Failures = 53

  • Parse error: Failures = 5

  • Other: Failures = 53

The C executables were built on a Debian Etch system and use /lib/ld-linux.so.2 instead of /lib/ld-lsb.so.3. The non-LSB libraries and interfaces were described above, but there was one additional library. The Bashisms were all a case of either using the 'source' built-in command or using a test like:
  while (( $PORT < 0 )) || (( 65535 < $PORT )); do

which, in other Bourne shells, requires the '$(( ... ))' form. The parse errors were from using the OR ("||") symbol.
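
For what it's worth, the portable rewrite is small. This is only a sketch; the loop body and the sourced file name are made up, and only the condition and the '.' spelling matter:

  # Portable POSIX sh form of the same range check:
  while [ "$PORT" -lt 0 ] || [ "$PORT" -gt 65535 ]; do
      printf 'Port (0-65535): '
      read PORT
  done

  # And the portable spelling of the 'source' built-in:
  . ./realeyes_env.sh     # instead of: source ./realeyes_env.sh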

The fixes for these are:
  • Use the recommended loader

  • Statically link the Non-LSB libraries

  • Use '.' instead of 'source'

  • Rework the numeric test and OR condition

So far, all of this is doable, sort of. But every time a statically linked library is updated, the app must be rebuilt and updates sent out. Also, the additional non-LSB library (which happens to be part of Xorg) is used by another library (which is part of GTK). So I would have to build that second library myself and statically link the non-LSB one into it. The reason I am using it at all is that the user interface is built on the Eclipse SWT classes, which call the local graphics libraries to build widgets.
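
For the record, the first two fixes are mostly a matter of linker flags. The LSB SDK ships an lsbcc compiler wrapper that handles this; with plain gcc the idea is roughly the following, with made-up file names and a library list that would vary with the actual build:

  # Link against the LSB loader and pull the non-LSB libs in statically:
  gcc -o re_analysis re_analysis.o \
      -Wl,--dynamic-linker=/lib/ld-lsb.so.3 \
      -Wl,-Bstatic -lssl -lcrypto -lpcap -Wl,-Bdynamic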

The non-LSB commands include several Debian-specific commands (such as adduser), and for my source packages I had to rework the scripts to allow for alternatives (such as useradd); a sketch of that fallback follows this list. But the other disallowed commands are:
  • free: To display the amount of free memory

  • sysctl: To set system values based on the available memory

  • scp: Apparently the whole SSL issue is a can of worms

  • java

  • psql
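
The adduser rework amounted to a fallback along these lines; the account name and the exact options are illustrative, not what ships in the package:

  # Prefer Debian's adduser if it exists, otherwise fall back to useradd:
  if command -v adduser >/dev/null 2>&1; then
      adduser --system --group --no-create-home realeyes
  else
      useradd -r -s /sbin/nologin realeyes
  fi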

Finally, all of the remaining 'errors' were of the form "Failed to determine the file type", for these file types:
  • JAR

  • AWK

  • SQL

  • DTD and XML

Part of the problem with the LSB is that it has bitten off more than it can chew. Apparently Java apps are non-LSB compliant. So are apps written in PHP, Ruby, Erlang, Lisp, BASIC, Smalltalk, Tcl, Forth, REXX, S-Lang, Prolog, Awk, ... From my reading, Perl and Python are the only non-compiled languages that are supported by the LSB, but I don't know what that really means (although I heartily recommend to the Application Checker developers that they test all of the executables in their own app ;-). And I suspect that apps written in certain compiled languages, such as Pascal or Haskell, will run into many non-LSB library issues.

Then there are databases. Realeyes uses PostgreSQL and provides scripts to build and maintain the schema. Because of changes in the version 8 series, some of these scripts (roles defining table authorizations) only work for version 8.0 and later. The LSB Application Checker cannot guarantee that these will work on all supported distros, because it didn't test them. I have heard that some consideration is being given to MySQL, but from what I can tell, it is only about certifying MySQL itself, not scripts that build a schema in MySQL.
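
To give one concrete example of why the schema scripts are tied to the version 8 line, they contain role-based grants along these lines (the role and table names here are made up):

  # Role-based authorizations; older servers do not understand CREATE ROLE.
  psql -d realeyes_db -c 'CREATE ROLE re_analyst NOLOGIN;'
  psql -d realeyes_db -c 'GRANT SELECT, INSERT ON event_data TO re_analyst;'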

After all this kvetching, I have to say that the Application Checker application is very well written. It works pretty much as advertised, it is fairly intuitive, and it provides enough information to resolve the issues reported by tests. My question is, "Why is so much effort being put into this when almost no one is using it?"

An argument can be made that the LSB helps keep the distros from becoming too different from each other, and that without the promise of certified apps, the distros would not be motivated to become compliant. But I only see about a dozen distros on the certified list, with Debian noticeably absent. And yet there is no more fragmentation in the GNU/Linux world than there ever was.

My theory on why UNIX fragmented is that proprietary licenses prevented the sharing of information, which led to major differences in the libraries in spite of POSIX and other efforts to provide a common framework. In the GNU/Linux world, what reduces fragmentation is the GPL and other FOSS licenses, not the LSB. All distros use mostly the same libraries, and the differences in versions are nowhere near as significant as every UNIX having libraries written from scratch.

I have to confess, I couldn't care less whether Realeyes is LSB compliant, because it is licensed under the GPL. Any distro that would like to package it is welcome. In fact, I will help them. That resolves all of the dependency issues.

While I am not a conspiracy theorist, I do believe in the law of unintended consequences. And I have a nagging feeling that the LSB could actually be detrimental to GNU/Linux. The only apps that benefit from LSB compliance are proprietary apps. The theory behind being LSB compliant is that proprietary apps can be guaranteed a successful installation on any LSB compliant GNU/Linux distro. I'm not arguing against proprietary apps. If a company can successfully sell them for GNU/Linux distros, more power to them. However, what if proprietary libraries manage to sneak in? This is where the biggest threat of fragmentation comes from.

But even more importantly, one of the most wonderful features of GNU/Linux distros is updates, especially security updates. They are all available from the same source, using the same package manager, with automatic notifications. If the LSB is successful, the result is an end run around package managers, and users get to deal with updates in the Balkanized way of other operating systems. That is a step in the wrong direction.

The right direction is to embrace and support the existing distro ecosystems. There should be a way for application teams to package their own apps for multiple distros, with repositories for all participating distros. The packages would be supported by the application development team, but would be as straightforward to install and update as distro supported packages.

There is such a utility, developed by the folks who created CUPS. It is called the ESP Package Manager. It claims to create packages for AIX, Debian GNU/Linux, FreeBSD, HP-UX, IRIX, Mac OS X, NetBSD, OpenBSD, Red Hat Linux, Slackware Linux, Solaris, and Tru64 UNIX. If the effort that has gone into LSB certification were put into this project or one like it, applications could be packaged for dozens of distros.
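
For the curious, an EPM build is driven by a plain-text 'list' file describing the product and its files. The sketch below is from memory of the EPM documentation, so treat the directives, file names, and the epm invocation as illustrative rather than exact:

  # realeyes.list (product metadata plus one line per installed file):
  %product Realeyes IDS
  %version 0.9.3
  %copyright 2008 Jim Sansing
  %vendor Realeyes project
  %license COPYING
  %readme README
  %description Network intrusion detection and analysis
  f 0755 root root /usr/bin/re_stream re_stream
  f 0644 root root /etc/realeyes/realeyes.conf realeyes.conf

  # Build a Debian package from it (other targets include rpm, slackware, bsd):
  epm -f deb realeyes realeyes.list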

And these would not just be proprietary apps. There are many FOSS apps that don't get packaged by distros for various reasons, and they could be more widely distributed. Since the distros would get more apps without having to devote resources to building packages, they should be motivated to at least cooperate with the project. And don't forget the notification and availability of updates.

As a developer and longtime user of GNU/Linux (since '95), I believe that all of the attempts to create a universal installer for GNU/Linux distros are misguided and should be discouraged. I say to developers, users, and the LSB working group, "Please use the package managers. A lot of effort has been put into making them the best at what they do."

Later . . . Jim

6 comments:

Anonymous said...

I think this is a very interesting post. I agree with your point that enforcing open source via the GPL and other open source licences prevents the various Linux distros from fragmenting into incompatible messes as UNIX did.

protin said...

I believe that your analysis of the fragmentation of the UNIX market was a little off the mark. As I see things, the "cause" was twofold. The license did not prevent the various vendors from collaborating; it only allowed them to hide their "improvements". Their motivation to not share was money. Each vendor saw their differences as "value added". By the time it was obvious that everybody had the equivalent, nobody was willing to eat the expense of changing to be the same.

Jose said...

What do you think about the additional option of effectively distributing the patches with an automated "recipe" for building the new binaries? The assumption is that the users have the source code and devel tools installed, or can have these installed automatically (for caching purposes, they may also have pre-made past object files and any other intermediate files, since these might not be affected by a patch).

I think this is by far the most flexible and empowering method. It's doable because of how cheap hard drives have gotten. It's different than providing only a tarball in that there would be a supporting framework (eg, to support "recipes"). It's different than central package management because we can create the framework's semantics to be general for any packaging system since the end user gets the source and build instructions. This system can also be used to update a running system or send it into any configuration. Using symlinks and chroot environments and clever configuration semantics, we can make it cheap to change configurations and possible to save them. This beats rebooting or running a VM (faster and greater integration).

The reason I want "end" users with source (there are security issues to be tamed and best practices to be developed) is that it lowers the barrier for users to participate in creating their platform (and hence improving Linux). The source code (and any other source material) and common set of configuration recipes that will evolve (eg, you may subscribe to the recipes of someone you trust) serve as a base platform from which you can derive debian, fedora, your-own-distro, a bifurcation of these, etc.

Currently, attempts to unify packaging assume that the user won't compile code. This is very limiting. You have third parties trying to design the user experience, but these designs are all incompatible. Instead, working with the assumption that source code is the foundation at the user's machine, we build recipes and everything else in a more flexible manner.

There are many possible ways to implement this. I am considering one such design, but likely others will come up with many alternate designs that are of higher quality.

Here are some more details of what I am sort of designing as well as some more general descriptions.

You distribute an XML configuration file that includes pre-reqs (tests) and then the actions for the machine to take depending on how these pre-reqs are fulfilled. For example, the user's machine might "compile", "install", etc, your app, perhaps replacing an existing one, and adjusting some other apps and running some scripts. Anyone can distribute such XML configuration files. Through this you can update a system a bit or totally remake it (ie, integrate and/or chop and add new parts including the whole "distro"). All distributors are at the same level (including the end user).
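
To make that a bit more concrete, a purely hypothetical file might look something like this (every tag is invented on the spot just to illustrate the idea; nothing here is a real format yet):

  <config-run name="install-myapp">
    <prereq>
      <test type="command-exists">gcc</test>
      <test type="source-present">org.example.libfoo-1.2</test>
    </prereq>
    <actions>
      <fetch source="http://example.org/myapp-0.3.tar.gz"/>
      <patch target="org.example.libfoo-1.2" file="myapp-libfoo.patch"/>
      <build recipe="autoconf-default"/>
      <install prefix="/opt/myapp"/>
    </actions>
  </config-run>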

We want to standardize the tags/attributes, but anyone can extend as they see fit to be able to generate their effect. There are security issues here. We want to avoid having new/unknown tag semantics since that implies running unknown scripts. I suspect many people will put online their selection of material and sign off on third party sources and tags [and a user can subscribe to whom they trust.. eg, debian's or to john smith's certified configurations and certified source material].

Don't want to hassle with flash on the web browser? Well, how about distribute the XML config file that effectively gets the user to run your "flash" effects but using xine or something more powerful (eg, beryl desktop effects), and maybe provide a custom version of gimp and a binding with xine so that at key moments the movie frames are sent to gimp and processed and recovered. .. This setup would normally not be done since gimp, xine, firefox, etc are separately controlled. Instead, "distro makers" do this sort of thing (but they tend not to do anything too dramatic and there are few of them) or perhaps someone working on a contract basis for you, but now we can have more people participate and create many such modest effects since they don't have to build a full distro and become experts in all aspects of distro creation. Ie, creating and sharing these bifurcation effects become much easier. More sophisticated users will build interfaces to make the process easier for others. Remember, such a config file is like a partial or full definition of the computing experience/setup. ["Partial" can be used to "install" apps; "full" can be to distribute a new distro; or you can do anything in between. You can even incorporate post install (ie, run time) controlling behavior.] Running full-featured apps natively can frequently give a better experience and offer more options than to try and shove everything as flash, html, etc, on a single webpage.

Existing FOSS projects would normally be able to include support to integrate better with this framework, but you can always simply add your patches and configuration files of your choice to these projects if you want (eg, if they won't), creating new tags as necessary along the way. You then run these and all the sources get installed (if necessary), patched, built (if necessary), and other further changes take place.

We should be able to identify each source item on the computer ("sourceball"). We can thus associate that certain data be preserved or treated differently. eg, all user created or saved files and configurations from various applications. This can make saving states and backups efficient and automatic. Only certain stuff gets backed up bit by bit while the configuration of the system gets saved instead of all binaries. From one saved body of source material and compact configurations, you can generate (save) a great many number of systems/states.

Sources can be uniquely id'd, in part, using the domain name system. Eg, your own projects, your patches to other projects, and your bifurcations of others' projects might be addressed through your website's url or even through a local machine name where you keep the data if you don't have a website.

Distributing your public material can be through a website or any other network protocol (eg, bittorrent). Also, third party FOSS project sources have unique info (like homepage perhaps), but you can host them. This allows you to use a network computer as the source for source material that the other network machines will need (or to provide backups if the main site is down or no longer exists). Versioning and other details are important as well.

The best part is that you can patch other projects as necessary and seamlessly (to the extent you can create the proper config file yourself.. so we need to make this process easy), but, eventually, this will largely be maintained in a completely distributed fashion, plus some basic volunteer amount of "centralization," as groups will rise to act as authenticators of existing FOSS, patches, etc, resources [eg, you can be one such authenticator to your clients and associates]. So short term, we can build this without cooperation by patching ourselves. Long term, most projects, if this effort turns out to have legs, will maintain the various components in distributed fashion. They will also design their tags/attribs based on how their project tends to be used commonly by others so as to save users the effort to re-invent the particular glue (wheel) for the more common uses of that particular project and to avoid having many nonstandard tags floating around (remember, we are going beyond the mere build/install by itself process; eg, patching will be common). If you have any further semantic needs beyond what the third party project provides, you just add a patch config file to guide the particular custom patching/building/integration/setup for your need. [Note, "to patch" need not be in the sense of the "patch" utility. XML offers many ways to incorporate changes with associated pattern matching.] Closely associated with the snippets of XML configuration are the traditional gnu autoconf family of tools.

End users can have traditional configuration files belonging to an app be changed so that new updated configuration files from the projects are integrated with the end user changes if possible (pattern matching/state testing of some type will be involved here -- hopefully as much as possible standardized into easy to use XML). This is useful whenever you adjust config files for a server, app, or the system itself and want the changes automatically included into upgrades [puppet and other management systems, including custom scripts and/or manual labor, might handle this task today].

Security requirements would require that the base system be trusted (so no patching to its source code unless specially authorized). I'll repeat, there are various security issues to be dealt with overall. Central repos/catalogs of certified/authenticated tags and source material will be a practical must (as stated earlier, you might be one such source for your clients; debian, fedora, etc, might take on this role as well).

FOSS has this advantage over the guys that won't reveal source. This is also very engaging for the end users. They become empowered and attached to FOSS frameworks. They can create more and to their liking. They can use others' creations more easily. Many users will build interesting apps, including more interesting and full ways of interacting online. More people competently creating means Linux becomes easier and more interesting faster.

Proprietary vendors have bad lock-in (via secrets). This system takes the source to the user and gives them the real power.. that's the good lock-in.

The end user calls the shots and participates on an equal basis [an analogy is git and other distributed version control systems vs the more centralized ones]. They have readily accessible the world public source materials and recipes. They easily enlarge these sets naturally. This lowers many bars, lubrication is increased, lock-in (eg, from distros) is weakened allowing the best presentations (eg, distros or various integrated features) to rise at any time without upsetting the user's environment unduly.

One example short-term gain will be that FOSS app projects can build ways to showcase their apps (including building sophisticated multimedia tutorials) without having to build a full unique distro that many end users might not like to boot into that frequently. To showcase their apps, they might want to make use of competing apps, perhaps patched in various ways to, eg, make it easy to learn how to use their app by allowing the user to walk along the competing app while they get feedback of some sort (including videos) to match the actions as would be required from the main app. Today, this would likely only be done as a specialty distro or remix (which is more difficult to do and is a bit disruptive and not yet easy to integrate with the user's existing environment).

In short, users will be more empowered, be more engaged, and become more learned, and FOSS will improve faster and spread quicker.

Lowering the bar for end users is the ultimate objective. We are all end users. We need to take more advantage of the source. That is our ultimate advantage, our killer app.

Thoughts?

Jose_X

PS. Consider this post, the intro I told you about on LT.
PS2. Sorry, if this is a bit hard to read. I didn't want to pass up the opportunity to post tonight while this was on my mind, but I need to get to bed.

Jim Sansing said...

@protin

I am not a historian, so I will admit that my theory is simplistic. However, it seems that your explanation and mine are pretty close.

@jose

What you have described sounds like a source code distribution system, and your use of XML sounds a lot like ant. Also, the idea of building apps for a Linux distro is what Gentoo does. I bring these up not to shoot you down, but to demonstrate that they are viable ideas.

The thing about distros that build binary packages (and even Gentoo) is that packages include the dependency information. If the dependencies are not already installed, the package manager will issue a message and, if accepted, automatically install them.

The LSB is trying to get around the dependency issue by saying that certain libs are guaranteed to be installed, and the developer must deal with all others. Regardless of the availability of source code (and I'm with you on that one, too), the dependency issue is hard to solve.

When I used MS DOS 3.1, apps would include dependencies on the install disk, and sometimes overlay existing libs with older versions, which caused existing apps to fail. If we get to a point of vendors having to deal with their own dependencies, it could get that bad.

So my main argument against the LSB is that it doesn't really solve much and actually introduces new problems.

Later . . . Jim

Jose said...

>> What you have described sounds like a source code distribution system, and your use of XML sounds a lot like ant. Also, the idea of building apps for a Linux distro is what Gentoo does. I bring these up not to shoot you down, but to demonstrate that they are viable ideas.

>> The thing about distros that build binary packages (and even Gentoo) is that packages include the dependency information. If the dependencies are not already installed, the package manager will issue a message and, if accepted, automatically install them.

From what I know about Ant (and it did crop up at various points as I thought about this), I think that it would need to be extended in particular ways.

Gentoo is probably one of the existing distros that comes closest to what I have in mind (from what I know about it). I have some questions. Does gentoo, as is, make it *relatively easy* for end users and third party packagers to create ebuilds that can undo other ebuilds, partially or wholly, and accurately? [Is this even a question that can be asked wrt portage/gentoo? I don't think so but am not certain.] Can you test runtime and filesystem specifics to decide how to change the system? Could you quickly build an ebuild where mostly you just specify "Ubuntu" as a pre-req and then specify the rest of the desired result in a way that basically installs your package as if it were on a std Ubuntu system.. and do this while perhaps preserving a few things from your existing system? Is it easy for the user to integrate patches with the gentoo sources, mix and match among project sources, and have the results easily become ebuilds that can specify HAVEs/DON'T HAVEs pre-reqs on the system so that conflicts with other ebuilds can be avoided?

I think all of these existing systems like gentoo are missing a great many easy-to-use semantics that you would want if the intent is to design with the fundamental assumption that (a) the end user is the central integrator and (b) all third party "packagers" should be on equal footing and be able to build "configuration runs" that can coexist with others, perhaps even having existing binaries be moved aside but not eliminated and always able to be put "back into place" when the user wants such an earlier configuration active again.

With gentoo and most other systems (a notable partial exception I am aware of being LFS), there is also the practical problem that the user doesn't have access to the majority of the sources from the start (though they could manually install it perhaps). This is because source code management is not required on the user's system, so the designs don't facilitate it.

We would also need a system that starts off with and caches sources and intermediaries so that new setups ("configuration runs") can happen relatively quickly (ie, to quickly figure out what is needed and then link/unlink as necessary while avoiding compilation as much as possible).

The bottom line here is that these distros are not designed with the requirement that the user should be in control to *easily* extend the system as they want. Distros offer users flexibility, but not nearly enough. If a user wants to break from the mold, s/he has to jump through many hoops and have great knowledge, and s/he still runs a real risk of creating a package that is incompatible with any existing configuration of the current system or with a future distro-controlled upgrade path for the system.

I also anticipate that to make this fast enough to be useful, we *might* need a new filesystem design (I'll need to research) and to use some serious indexing; however, the indexing should degrade safely so that if the indexes are not up to date (maintained with cron for example), the system can still work, accurately leveraging as much as possible the existing indexes.

I should look at Portage, Ant, Maven, autoconf tools, etc, more closely eventually, but first I want to design what I have in mind in detail before I shop around more to see if something out there provides X% of the solution where X is not too far from 100.

>> The LSB is trying to get around the dependency issue by saying that certain libs are guaranteed to be installed, and the developer must deal with all others.

So if you assume LSB, your app would be crippled in anything less. If you don't, you haven't solved much.

Well, LSB is probably great as a guide no matter what. And it might be extra useful for those that truly only care to target primarily LSB systems.

>> Regardless of the availability of source code (and I'm with you on that one, too), the dependency issue is hard to solve.

Yes, taking concept to implementation is not trivial by any means -- except maybe in deep slumber. :-)

I do think a redesign (at least of developer/packager focus and of supporting semantics and tools) would prove necessary if one were to look closely at existing potential solutions.

>> When I used MS DOS 3.1, apps would include dependencies on the install disk, and sometimes overlay existing libs with older versions, which caused existing apps to fail.

If instead these app installers simply gave a set of *declarative* statements, they could let the user's System Control manage the rest safely (if such a System Control could be guaranteed to exist at the user's site); however, you probably need source code and you'd need the specific set of semantic tags ("API") to accomplish the right effect.

I would also like such a System Control to be extensible by users and by third parties (creating a need for certification of third party additions, for security purposes mainly). Its design requires smart and precise source code management, at least to get flexibility out of it, or you are stuck back with the preloaded binary limitations. The design around source code also is necessary if the central figure is to be the end user, in particular, with the requirement to make it easy for the end user to create alongside and extend upon the FOSS world's work and to share back easily.

>> If we get to a point of vendors having to deal with their own dependencies, it could get that bad.

Central repos/catalogs with good searchable tools (or local tools that access these catalogs) means that those building apps or presentations (ie, a change of or significant addition to the user's environment) for the proposed system would have many standard pre-req targets they could use to, in many cases, quickly make sure the app will find a hospitable environment, at least to the extent the dependencies have public source code accessible and you can precisely specify such source material (universal unique name for project source + patches).

When the actual "dependencies" used in "packaging" are generally at a higher level (eg, borrowing target tags from catalogs), it's easier to get it right. Sure, the community will have to design these carefully, but you are also likely to have a larger community pulling together when a high level target (pre-req, etc) system exists. And because of the System Control and its integrated access to source code, we can make it so that there are fewer details that have to be specified by the packager.

I am not talking about revolutionary approaches. This is evolutionary stuff, but design emphasis may need to be readjusted in order to get maximum benefits.

And a main design point is to use declarative instructions at a high level whenever possible (without limiting the ability to add as many implementation details as necessary in order to overcome deficiencies in the std tags/semantics) and allow the System Control to handle the rest.

[BTW, I just started using the "System Control" moniker in this post. It seems like as good a label to use as most any.]

>> So my main argument against the LSB is that it doesn't really solve much and actually introduces new problems.

That's mostly the impression I got from the way you described it in your reply.

PS: I still need to get more details resolved and down on paper (eg, include specific tag descriptions, more detailed examples, and more use case descriptions) and do a nice introductory write-up. Thanks for the mini-soapbox and audience.

Jose said...

Here is another project that comes to mind..

>> The design around source code also is necessary if the central figure is to be the end user, in particular, with the requirement to make it easy for the end user to create alongside and extend upon the FOSS world's work and to share back easily.

One app that is currently missing in the FOSS world is a GUI that makes it easy, especially for the nondeveloper, to manage and extend existing source code. [OK, maybe a properly set up emacs, eclipse, or similar system accomplishes this, but I want more than what I have seen to date.]

A nondev won't want to do this, you might say, but I disagree. With integration and videos and other simplicities.. with specific design for specific bodies of code.. with Easy modes of interface/edit.. "nondevelopers" (and developers) can find it easy to contribute to or fork existing projects [fork in the sense of git, monotone, bazaar vcs, etc].

Imagine something like Javadoc on super steroids. Imagine such a GUI app that unleashes an experience relying on vid and sound clips, "macros", text/xml, guided GUI behavior, actual project source, and other associated data files so that a tutorial and learning environment comes to life with the goal of facilitating understanding and modifications to the source code for that particular project. One possibility is that the user can learn about and learn to manage the source code of a well-defined project version that wasn't too ancient. This knowledge would then allow the user to dive into the more recent source code branches even if these were not kept up to date in terms of these super-charged easy-docs. In practice, the most stable parts of the code would develop these tutorials (indexes, metadata, mark-ups, etc) at a higher quality, but with developer experience and access to tools, fresh new contributions might also be integrated into such a "browser/editor" system with such meta documenting becoming a part of the full patch. Basically, the goal would be to have the easy-docs kept up to date with source.

Something like a voice generator/animation might be used for video/voice parts. Edits might automatically turn into a properly meta-tagged patch. This GUI learning system could certainly work comfortably within and leverage something like the System Control infrastructure described in the earlier comments.

More clearly, imagine if FOSS code is generally documented in the future as a matter of course so that a type of super browser/editor as described above can be unleashed on it. Then further, whenever you create a new patch, you might simultaneously create a mini marked up voice script and maybe a few other files so that this new code is immediately usable by those browsing/learning/editing it.

As the number of users that can relate somewhat to a project grows, so will the maintenance of that project (including "easy-docs") improve, along with its ability to be more easily learned.. leading to more users.. and so on.. until a saturation point is reached that depends on the value of the project in the big scheme of things at that point in time.

But the easy-to-use tools must be there. It's not enough that source code exists, for example. We need to take the task of teaching source code as something on a similar level of importance as the source code itself. With a growth in the number of savvy users, FOSS will improve proportionately fast. We should thus aim to make it easier to become a savvy user.

As is suggested with much of what I post on LT, here, etc, I am essentially out to help make contributing and leveraging FOSS (leveraging the source) as easy and interesting as possible for the largest number of users. The strongest future for any part of society is one where users are involved and empowered. Software is crucial to helping advance many disciplines/hobbies and to helping solve many problems users encounter (at least in theory sw is this key). Also, motivation to study specific code (there are various levels of "code") of something you really care about, while difficult for many, is achievable and would then be instrumental to helping the individual stretch his/her mind and develop a more active approach to other problem areas.

Passive consumerism is unhealthy to individuals and by extension to society. Growth and creation can be fun.. if the tools are easy enough. We need to design FOSS with this sort of to-be-enabled user in mind. ["We" "need" to ..cause I said so!]