Bridgeless Vesta (AKA "Thin Bridge")
The theory is that if people didn't have to learn about bridges, and only needed to learn minimal SDL, the ramp to learning Vesta would be much smaller.
Thus I propose a bridgeless methodology.
In this mode, users would populate Vesta with the tools they need, in a directory structure copied from what they already use on Linux. It should then be trivial to convert that into a binding with the same hierarchy and mount it as the root of the temporary encapsulated root filesystem (TERFS). The point here is that once they start executing _run_tool() calls, the TERFS environment looks the same to them as their regular Unix environment.
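A minimal SDL sketch of the idea (the import path, the shape of the path package's result, and the platform string are illustrative assumptions, not a tested build.ves; check the SDL reference for the exact _run_tool() signature):

    // Illustrative sketch only. Assumes a hypothetical "path" package
    // whose build.ves returns a binding shaped like the Unix tree:
    // [ bin = [...], usr = [ bin = [...], local = [ bin = [...] ] ] ].
    import
        path = /vesta/mmdc1.intel.com/play/bridgelessvesta/path/build.ves;
    {
        // '.' is the SDL environment; './root' becomes the TERFS root,
        // so inside the encapsulated filesystem /usr/bin/perl is right
        // where the tool expects it.
        . = [ root = path() ];
        return _run_tool("Linux", <"/usr/bin/perl", "-e", "print 1;">);
    }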
Then there's some relatively trivial code to move input files into the TERFS and result files out, but that's boilerplate.
I am explicitly suggesting that we give up the portability benefit we get with bridges (but see below) in order to lessen -- perhaps dramatically -- the ramp to getting these benefits of Vesta:
- cached results
- shared build products
- auto-snapshot work areas
- explicit versioning (no "ground pulled out from under you" syndrome)
That seems like a lot!
Steps to adding a flow in Bridgeless Vesta
- Pick a location in the repository, e.g., /vesta/my.domain.com/play/$USER/mynewflow/ -- for example, /vesta/mmdc1.intel.com/play/jvkumpf/mynewflow/.
- In /vesta-work, copy build.ves from the template.
- Add tools to /vesta/mmdc1.intel.com/play/bridgelessvesta/path/bin/, e.g., /vesta/mmdc1.intel.com/play/bridgelessvesta/path/bin/newtool/pkg/.
- Under bridgelessvesta/path/ are bin/, usr/bin/, usr/local/bin/, etc.
- Every tool gets its own pkg, so the hierarchy is just like Unix, but with an extra pkg/N level inserted to do the versioning.
- Boilerplate imports of things from /bin, including perl, awk, grep, sh, csh, whatever.
- A files clause to include input files.
- A files clause to include any user-specific scripts.
- Assemble the command line from strings, directly calling the tools.
- Call _run_tool(), or a thin-bridge version of _run_tool().
- Extract results from _run_tool()'s return binding (boilerplate).
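Taken together, the template might look roughly like this sketch (the filenames, the path import, the platform string, and the exact shape of _run_tool()'s result binding are all assumptions for illustration, not a tested build.ves):

    files
        inputs = [ design.v, run.tcl ];    // files clause: input files
        scripts = [ myflow.csh ];          // files clause: user scripts
    import
        path = /vesta/mmdc1.intel.com/play/bridgelessvesta/path/build.ves;
    {
        // Boilerplate: TERFS root = tool hierarchy, plus a working
        // directory holding the inputs and scripts.
        . = [ root = path() ++ [ wd = inputs ++ scripts ] ];
        // Assemble the command line from strings; call the tool directly.
        r = _run_tool("Linux", <"/bin/csh", "-f", "myflow.csh", "design.v">);
        // Boilerplate: pull result files out of the returned binding.
        return [ results = r/root/wd ];
    }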
1. Should we open up people's existing flow scripts, pull out the steps, and write them as separate _run_tool() calls, or just call the multi-step perl or csh or sh script with a single _run_tool() call? Why or why not?
2. Should this template/boilerplate build.ves file call _run_tool() directly, or should it call a thin bridge which just takes its inputs and then calls _run_tool()?
- Advantage, thin bridge: the thin-bridge version would allow passing plain strings, instead of making the user pass a list of strings, which forces them to learn at least a little of the SDL list syntax.
- Advantage, direct: the thin bridge would complicate the template, because the thin bridge would have to be imported. If this is all boilerplate, then maybe this disadvantage doesn't matter.
- Advantage, thin bridge: the thin bridge gives an opportunity for porting or other helps later on.
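For concreteness, the thin bridge in question might be no more than this sketch. Here tokenize is a hypothetical helper that the thin-bridge package would have to supply (splitting a string on whitespace is not, as far as I know, an SDL primitive):

    // Thin bridge: let the user pass one plain string; split it into
    // the list of strings that _run_tool() actually wants.
    run(cmdline, platform = "Linux")
    {
        return _run_tool(platform, tokenize(cmdline));
    };

    // In build.ves, no SDL list syntax needed:
    //   r = run("/usr/bin/grep foo input.txt");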
Do we really give up portability? Perhaps not. A build.ves file which contains explicit calls to _run_tool() with explicit strings, switches, and pathnames -- coupled with some knowledge that it runs on a particular version of Linux/GNU/gcc/etc. -- becomes an abstract description of what that build.ves file should do. That is, a specific detailed script applied to a specific version of the platform describes an abstract step.
It should then be theoretically possible to port this build.ves file to another platform -- or, better said, to run this build.ves file on another platform without editing it -- by passing it through a translation layer that interprets its fixed strings against the target platform, infers from them the meaning and goal of each command and/or option, and creates the equivalent on the new platform.
If this idea is implemented with a "thin bridge" which just takes its inputs and calls _run_tool() directly, then the thin bridge is a great place to intercept the call and translate to another platform.
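A sketch of that interception point. Here translate is a hypothetical, table-driven mapping from the Linux/GNU command conventions baked into the fixed strings to the target platform's equivalents; nothing like it exists yet:

    // Thin bridge doubling as a porting layer: calls pass through
    // unchanged on the home platform, and are translated elsewhere.
    run(cmd, platform = "Linux")
    {
        cmd2 = if platform == "Linux" then cmd
               else translate(cmd, platform);   // hypothetical
        return _run_tool(platform, cmd2);
    };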